arXiv: 2309.12380
Authors: Katariina Perkonoja, Kari Auranen, Joni Virta
Published: 2023-09-21T12:44:31Z
Link: [http://arxiv.org/abs/2309.12380v2](http://arxiv.org/abs/2309.12380v2)
# Methods for generating and evaluating synthetic longitudinal patient data: a systematic review

###### Abstract

The proliferation of data in recent years has led to the advancement and utilization of various statistical and deep learning techniques, thus expediting research and development activities. However, not all industries have benefited equally from the surge in data availability, partly due to legal restrictions on data usage and privacy regulations, such as in medicine. To address this issue, various statistical disclosure and privacy-preserving methods have been proposed, including the use of synthetic data generation. Synthetic data are generated based on some existing data, with the aim of replicating them as closely as possible and acting as a proxy for real sensitive data. This paper presents a systematic review of methods for generating and evaluating synthetic longitudinal patient data, a prevalent data type in medicine. The review adheres to the PRISMA guidelines and covers literature from five databases until the end of 2022. The paper describes 17 methods, ranging from traditional simulation techniques to modern deep learning methods. The collected information includes, but is not limited to, method type, source code availability, and approaches used to assess resemblance, utility, and privacy. Furthermore, the paper discusses practical guidelines and key considerations for developing synthetic longitudinal data generation methods.

**Keywords**: Data privacy, Longitudinal patient data, Statistical disclosure control, Synthetic data generation, Privacy-preserving data publishing

_University of Turku, Department of Mathematics and Statistics._ **Corresponding author:** Katariina Perkonoja, Department of Mathematics and Statistics, 20014 University of Turku, Finland. Email: [email protected]

## 1 Introduction

The recent surge in data volumes has greatly facilitated research, development, and innovation (RDI). Yet, some sectors, particularly medicine, still face challenges in harnessing existing data sources due to stringent data protection regulations. Sensitive and confidential medical records fall under various international and national regulations, such as the General Data Protection Regulation (GDPR)[1] or the Health Insurance Portability and Accountability Act (HIPAA)[2]. Compliance with these policies typically leads to prolonged data processing times and, in certain cases, restricted access. For instance, while the national regulation in Finland permits using identifiable individual-level data for research, their use in development and innovation activities remains prohibited.[3]

If patient data are deemed sufficiently anonymous, they fall outside the rules of personal data protection, streamlining data access and sharing. Synthetic data generation (SDG) offers a promising approach to achieving such anonymity. This systematic review aims to map the literature on SDG methods capable of generating longitudinal patient data, a prevalent form of data in medicine.

Longitudinal data (LD) are a special form of tabular data that contain at least one variable measured for each subject at two or more time points [4]. These measurements can be collected at the same time points for all subjects, constituting balanced data, or conversely, the time points, number of measurements, or both may vary among subjects, resulting in unbalanced data.
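For concreteness, the following minimal sketch (hypothetical values, assuming the pandas library; not data from any study in this review) illustrates balanced and unbalanced longitudinal data in long format and the pivot of balanced data into wide format, mirroring the structure depicted in Figure 1.

```python
import pandas as pd

# Hypothetical example: balanced longitudinal data in long format,
# with one row per subject-visit combination and identical visit schedules.
balanced_long = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "visit":   [0, 6, 12, 0, 6, 12],        # months since baseline
    "outcome": [140, 135, 132, 150, 148, 145],
})

# Unbalanced data: subjects differ in the number and timing of visits.
unbalanced_long = pd.DataFrame({
    "subject": [1, 1, 2, 2, 2, 3],
    "visit":   [0, 12, 0, 3, 9, 0],
    "outcome": [140, 132, 150, 149, 146, 128],
})

# Balanced data can be pivoted losslessly into wide format
# (one row per subject, one column per visit).
balanced_wide = balanced_long.pivot(index="subject", columns="visit", values="outcome")
print(balanced_wide)
```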
The subject-specific repeated measurements create a distinct dependency structure, absent in standard cross-sectional data [4], and pose specific requirements for data modeling and analysis. Typically, repeatedly measured variables are regarded as response variables (outcomes), while other features are treated as covariates. Figure 1 depicts the defining characteristics of longitudinal data.

The goal of synthetic data generation is to produce artificial data that resemble real-world observations, referred to as original or input data, and maintain utility. Resemblance and utility are partially overlapping concepts, as good resemblance typically implies high utility. While synthetic data often maintain high utility through resembling the original data, a perfect match is not required. For instance, a classification model may perform well when specific data structures are preserved even though the observed values differ. In this review, resemblance refers to the degree of equivalence between synthetic and original data distributions, while utility pertains to the extent to which analyses and predictions align with those based on the original data.

Figure 1: **Illustration of key characteristics and different forms of longitudinal data.** Subfigure (a) shows balanced longitudinal data (subjects have identical visit sequences) and subfigure (b) unbalanced longitudinal data (differing visit sequences), both in long format. Subfigure (c) illustrates the same data as in (a) but in wide format. The main difference to another common tabular data type, cross-sectional data, is precisely these repeatedly measured variables from the same subjects over time. These repeated measurements create a unique temporal structure that is essential to preserve when generating synthetic data. Moreover, missing data (NA), measurement errors (176 in (a)), and dropouts (second row in (b)) are common issues encountered in longitudinal data that can impede synthetic data generation.

In the context of statistical disclosure control (SDC)[7] and privacy-preserving data publishing (PPDP)[8], another goal of SDG is to avoid releasing personal information. Originally proposed by Rubin[9] in 1993, SDG has gained prominence in enhancing data protection and expediting RDI activities. However, concerns regarding the sufficiency of mere random data generation for privacy preservation have prompted exploration of more effective privacy-preserving techniques.[10, 11, 12] Beyond privacy, SDG offers value in domains like software development and model testing, where privacy concerns might be less pertinent and the emphasis is on providing sufficiently realistic data, for example through data augmentation.

### Rationale

Previous reviews on synthetic data generation in healthcare have primarily focused on Generative Adversarial Networks (GANs)[13] and have not explored the specific challenges posed by longitudinal data generation.[14, 12, 15, 16, 17] Yet, longitudinal data, which accumulate over time during patient treatment and follow-up, are a common form of health data. Neglecting the longitudinal aspects may lead to flawed generative models characterized by logical inconsistencies, such as determining past events based on future ones, or treating repeated within-subject measurements as independent entities. Consequently, additional research is warranted to identify appropriate techniques for generating synthetic longitudinal patient data that are reliable and of sufficient quality to be used in real-life settings.
Such methods could be directly offered to data controllers to facilitate the use of patient data while safeguarding patient privacy. This systematic literature review aims to address this need. The review adheres to the PRISMA[18] guidelines as applicable.

### Objectives

The primary objective of this systematic review is to map and describe existing methods for generating synthetic longitudinal patient data in real-life settings. These descriptions will serve as a valuable resource for data controllers and researchers to select suitable methods based on their specific needs. Regarding the primary objective, the research questions are:

* Q1: What methods are currently available for generating synthetic longitudinal patient data?
* Q2: How do these methods address the key characteristics of longitudinal data, including temporal structure, balance, different variable types, and missing values?
* Q3: How are these methods evaluated in terms of resemblance, utility, and privacy preservation?

The secondary objective is to evaluate the comprehensiveness of reporting of the identified literature. This involves comparing the methods, evaluation approaches, and reporting standards. This evaluation aims to provide valuable insights to method developers regarding areas that require further research.

The rest of the article is organized as follows. Section 2 outlines the review's methodology, including the eligibility criteria, information sources, and processes for selection and data collection. Section 3 presents the findings of individual studies and their synthesis, encompassing a summary of the identified SDG methods and their evaluation approaches. Section 4 concludes the article by offering general interpretations of the results, addressing limitations, and discussing practical implications.

## 2 Methods

### Eligibility criteria

The definition of synthetic data (SD) varies across fields and contexts. In general, synthetic data involves creating artificial data by learning the inherent data distribution and using them as a proxy for real data. In this review, we define SD as data generated via a randomized algorithm utilizing an existing dataset (original or input data), with the aim to mimic these data as closely as possible. A randomized algorithm incorporates randomness in its operations, yielding different outputs for the same input on different runs. In the context of SDG, it can produce an unlimited number of synthetic observations.

Distinguishing between synthetic and simulated data is challenging, and these terms are often used interchangeably in the literature. We differentiate synthetic data as derived from an input dataset and simulated data as generated from theoretical models with or without empirical foundations. Given this distinction, we excluded methods based naively on standard probability distributions (e.g., normal or multinomial distributions), as real-world empirical data rarely conform precisely to these distributions. The same ambiguity extends to synthetic and anonymized data. We call a dataset anonymized if it has been modified from original data, e.g., through aggregation or addition of noise, without attempting to learn the underlying data distribution to create new observations.
To ascertain that the identified SDG methods addressed research questions Q1 and Q2, we required that the original data used to create SD contained at least one repeatedly measured variable and that the authors explicitly acknowledged this longitudinal nature, either by developing or adapting their proposed method to generate longitudinal data or by examining the preservation of its key characteristics, such as temporal correlation. Consequently, publications in which repeated measurements were altered so that they lost their temporal structure, e.g., through aggregation, were excluded.

We restricted our examination to methods that could support data sharing, emphasizing the generation of fully synthetic data to ensure the absence of confidential original data. Consequently, publications focusing on data augmentation, which typically retain original data, were excluded, albeit some such methods can produce fully synthetic data.[9, 19] In the absence of an established mandate from data protection agencies to incorporate privacy-preserving methodologies in SDG approaches, we also encompassed methods wherein these techniques were not considered. We included literature involving non-open-source and commercially licensed methods. Moreover, we did not confine the search solely to the health data domain; however, we required that the original data variables were comparable to those found in longitudinal patient data. Within the aforementioned framework, the following forms of publication written in the English language were included: articles published in peer-reviewed journals and proceedings as well as pre-prints, books, book chapters, and reviews.

### Information sources

We searched EMBASE (1947 to Nov 22, 2022), MEDLINE (Ovid interface, 1946 to Nov 22, 2022), Web of Science (1900 to Nov 22, 2022), Google Scholar (Publish or Perish software[20], first 1000 hits on June 18, 2021), and arXiv (open-source metadata[21] on Nov 22, 2022). These databases were chosen because they have been found to provide the best coverage.[22] To discover the latest, yet unpublished methods, we included arXiv. To ensure comprehensive literature coverage, the corresponding author (KP) scanned the reference lists of the publications that were deemed eligible and distributed the bibliography of eligible publications to the review team, comprising the authors and three additional members, to verify the comprehensiveness.

### Search strategy

Literature search strategies (Supplemental Material A) were developed using topic (title, abstract, keywords) and text words related to synthetic longitudinal patient data. The initial search strategy was created by KP, and the search algorithm was developed with input from the review team and by using the Web of Science advanced search (Box 1). The strategy was reviewed by a review team member who was not involved in its development, using the PRESS standard[23], before being applied to other databases. To ensure that the review was up to date, the search was conducted twice (Jun/2021, Nov/2022).

**Box 1. Web of Science search algorithm.**
```
TS = ((synthetic OR artificial) NEAR/3 (*data* OR record*))
AND TS = ((generate* OR produc* OR simula*))
AND TS = ((longitudinal OR correl* OR panel OR repeat* OR follow-up OR multivariate OR lifespan* OR trajector* OR health* OR medical OR patient))
NOT TS = (aperture OR insemination OR seism*)
AND LA = (English)
AND DT = (Article OR Abstract of Published Item OR Book OR Book Chapter OR Data Paper OR Early Access OR Proceedings Paper OR Review OR Software Review)
```

### Selection process

The search results underwent initial duplicate removal using EndNote Online's[24] tool. Remaining duplicates were identified using Rayyan[25], software for systematic reviews, and manually removed by KP. Subsequently, the review authors (KP and JV) independently screened the titles and abstracts against the eligibility criteria using a specific screening chart (Supplemental Material B.1). Full texts, subsequently referred to as studies or publications, were procured for records that appeared to meet the eligibility criteria or exhibited any uncertainty in eligibility. If a publication was inaccessible, the record was excluded from the review. Subsequently, the aforementioned publications were independently screened by KP and JV against the eligibility criteria using a specific full-text screening chart outlined in Supplemental Material B.2. Disagreements were resolved through discussion and, if necessary, a third-party arbitrator (KA) was consulted. The reasons for exclusion were documented.

### Data collection process

Data from the eligible publications were collected and managed by KP using a structured form (Supplemental Material C) within the REDCap electronic data capture tools hosted at the University of Turku[26, 27]. In unclear situations, KP consulted the authors and their webpages to make sure that all relevant data were captured. For quality control, JV and KA conducted spot checks on the data collection process.

### Data items

The following provides a succinct summary of the data collected from each eligible publication. A comprehensive list of all data items is presented in the data collection form (Supplemental Material C).

* Method characteristics
  * Type of method
  * Approach to modeling longitudinal data
  * Handling of unbalanced data
  * Handling of missing data
  * Handling of different variable types
  * Requirement of expert knowledge
  * Limitations
* Method performance evaluation
  * Characteristics of the original datasets
  * Assessment of resemblance
  * Assessment of utility
  * Assessment of privacy

Furthermore, we collected data on:

* Literature information
  * Authors and title
  * Publication details
  * Objective of the publication
* Method implementation
  * Pseudo and source code availability
  * Details of training processes

In instances where information was either unavailable or unclear, it was recorded as missing. In cases of uncertainty, KP sought input from JV and KA.

### Risk of bias and reporting quality assessment

We assessed the risk of biases using a framework developed by the review team. Since this review does not concern clinical trials, we tailored the existing guidelines[28] to our research, focusing on selection, performance, and reporting bias. The detailed framework is in Supplemental Material D.1. Reporting quality was assessed by examining inconsistency, imprecision, and indirectness of reporting.
In addition, we gathered data pertaining to disclosed conflicts of interest, peer-review status, and the thoroughness of describing the training process for generating synthetic data. KP assessed the risk of bias and reporting quality and shared the findings, accompanied by their justifications, with JV and KA to ensure their validity and achieve consensus. Any disagreements were resolved through discussion.

### Synthesis methods

To address research question Q1, we compiled a concise overview encompassing all SDG methods that were predominantly featured and applied in the eligible publications, subsequently referred to as the primary methods (Section 3.4). This summary lists the fundamental operational principles, approaches for modeling longitudinal data, and potential limitations. We recorded all SDG methods that were utilized for comparing and benchmarking against the primary method, later denoted as the reference methods.

With respect to Q2, we generated a comprehensive table delineating the capabilities of the primary methods in processing and generating unbalanced data or mixed-type variables, handling missing observations, and the necessity of expert knowledge (Section 3.4). Here, expert knowledge refers to highly context-specific information that is necessary to use the method, such as choosing realistic generation sequences or variable distributions, and goes beyond basic functionalities, such as training a neural network or machine learning model.

To address Q3, we constructed summary figures and tables that outline the utilized datasets, measures of resemblance, assessments of utility, and considerations of privacy. In addition, to gain insight into the broader evaluation framework, we examined whether each evaluation task was conducted using a single or multiple independently generated synthetic datasets, and whether the evaluation of the quality of synthetic data was conducted in relation to the original data or in comparison to other datasets or methods. The results of these syntheses are presented in Section 3.3.

In accordance with our secondary objective, we categorized the research objectives of the eligible publications (Section 3.2) and identified methods (Section 3.2) and discussed our findings in relation to existing literature to provide prospective areas of future research (Section 4).

## 3 Results

### Study selection

The search algorithm initially identified 8 943 publications. After removing 2 027 duplicates, 6 916 studies underwent title and abstract screening, leading to the selection of 377 publications for full-text screening. Nine studies were unattainable, leaving 368 studies for evaluation against the eligibility criteria (Section 2.1). Altogether 15 of the 368 studies met our criteria and were included at this stage. To augment the search, KP examined all references in the 15 included studies. This process identified 22 potential publications, of which two were deemed eligible and incorporated in the review. Ultimately, 17 eligible studies were included in the review. Figure 2 illustrates the study selection process according to the PRISMA guidelines [18]. Below we describe the most common reasons for excluding publications, with the cited works serving as illustrative examples of each exclusion category.

Figure 2: **PRISMA flow diagram.** The diagram illustrates the study selection process according to the PRISMA guidelines; pds: probability distributions.
The primary reason for exclusion was wrong data type (n = 165), mostly cross-sectional [19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31], survival [32, 33, 34, 35], or time-series [36, 37, 38] observations. Publications compromising the temporal structure in longitudinal data were categorized as having the wrong data type.[39, 40, 41] Publications lacking SDG (n = 81) were typically introductions of a specific framework [42, 43, 44, 45] or data simulations [46, 47, 48]. Exclusions due to partially synthetic data (n = 49) were largely related to data augmentation using techniques such as the Synthetic Minority Over-Sampling Technique (SMOTE) [49] or its variants [50, 51, 52, 53, 54]. We excluded 29 publications as we could not determine their eligibility, stemming from incomplete data, incomplete method description, or restricted access to the cited references, data, or algorithms.[55, 56, 57, 58] Additionally, 28 studies were excluded for relying solely on standard probability distributions to simulate data.[59, 60, 61, 62] Furthermore, 14 studies were excluded for failing to acknowledge the longitudinal nature of data,[63, 64, 65, 66, 67, 68] although the original datasets included variables with repeated measurements. Lastly, we identified three duplicates and two publications of wrong literature type (a thesis or an extended abstract).

### Study characteristics

The 17 included studies (Table 1) were published between 2016 and 2022, with seven studies in 2022. The predominant research objective was privacy-preserving data publishing (41%). Additionally, five (29%) of the studies emphasized data publishing but abstained from employing privacy-preserving techniques and privacy evaluation for synthetic data. As per the SCImago Journal & Country Rank [69], the most common publication fields were medicine (35%) and computer science (35%).
Table 1: **Summary of the included publications.** The table provides a summary of the 17 publications included in the systematic literature review. The publications are presented in descending order of publication year and sorted alphabetically by the authors. The table includes information about the type of report, the field based on the SCImago Journal & Country Rank, and the interpreted objective of each publication as determined by the review authors. PPDP: privacy-preserving data publishing; (PP)DP: data publishing without considering data privacy. *The initial arXiv pre-print (Dec 22, 2021) was found during the search, but a newer version discovered during data collection was included in the review.

| Authors | Journal | Type | Field | Objective | Year |
| --- | --- | --- | --- | --- | --- |
| Li et al.[70] | arXiv | Preprint | Multidisciplinary | PPDP | 2023* |
| Bhanot et al.[71] | Neurocomputing | Journal article | Computer science; Neuroscience | Resemblance quantification | 2022 |
| Kuo et al.[72] | Scientific Data | Journal article | Computer science; Decision science; Mathematics; Social sciences | | 2022 |
| Lu et al.[73] | arXiv | Preprint | Multidisciplinary | (PP)DP | 2022 |
| Wang et al.[74] | BMC Medical Informatics and Decision Making | Journal article | Computer science; Medicine | | 2022 |
| Wendland et al.[75] | npj Digital Medicine | Journal article | Computer science; Health professions; Medicine | (PP)DP | 2022 |
| Yu, He & Raghunathan[76] | Journal of Survey Statistics and Methodology | Journal article | Medical sciences | | 2022 |
| Zhang, Yan & Malin[77] | Journal of the American Medical Informatics Association | Journal article | Medicine | Framework development | 2022 |
| Biswal et al.[78] | Proceedings of Machine Learning Research: Machine Learning for Healthcare | Conference paper | Computer science and technology; Data processing | | 2021 |
| Zhang et al.[79] | Journal of the American Medical Informatics Association | Journal article | Medicine | PPDP | 2021 |
| Gootjes-Dreesbach et al.[80] | Frontiers in Big Data | Journal article | Computer science | PPDP | 2020 |
| Sood et al.[81] | Scientific Reports | Journal article | Multidisciplinary | (PP)DP | 2020 |
| Beaulieu-Jones et al.[82] | Circulation: Cardiovascular Quality and Outcomes | Journal article | Medicine | PPDP | 2019 |
| Fisher et al.[83] | Scientific Reports | Journal article | Multidisciplinary | Modeling | 2019 |
| Barrientos et al.[84] | The Annals of Applied Statistics | Journal article | Decision sciences; Mathematics | | 2018 |
| Walonoski et al.[85] | Journal of the American Medical Informatics Association | Journal article | Medicine | (PP)DP | 2018 |
| Raab, Nowok & Dibben[86] | Journal of Privacy and Confidentiality | Journal article | Computer science | (PP)DP | 2016 |

### Risk of bias and reporting quality in individual studies

There was no indication of selection bias in the included studies. However, five (29%) studies had potential performance bias, and reporting bias occurred in nine publications (53%).
Table 2 summarizes the bias assessment, with comprehensive background available in Supplemental Material D.2. Three studies (18%) included only a partial description of the training processes in SDG, and seven (41%) publications lacked such a description altogether. Among the reviewed publications, only two (12%) were not peer-reviewed. Four (24%) and six (35%) publications did and did not provide information about potential conflicts of interest, respectively. One study showed inconsistency by employing three different notations for the privacy budget parameter[80]. Imprecise reporting was found in five (29%) publications, resulting from inaccuracies in quantifying subjects[71], variables[81], or repeated measurements[78], or imprecision in reported p-values[70, 75]. Lastly, four studies showed evidence of indirect reporting. Among these, two publications applied differential privacy with unspecified parameter values[70, 80]. The two other studies lacked precise variable role descriptions, particularly in defining the response variable in the generation model[76] or in confirming the inclusion of a variable present in the input data in the synthetic data[79].

Table 2: **Summary of bias assessment in individual studies.** The table presents risk-of-bias assessments, peer-review status, and conflict of interest reporting for each study. To improve clarity, "Yes" (blank) is used as the default for peer review, and "No" (blank) as the default for bias and conflict of interest. The option "Possibly" was used when confirmation was uncertain but significant uncertainty remained. "NA" indicates cases where information was unavailable.

| Authors | Peer-reviewed | Selection bias | Performance bias | Reporting bias | Conflict of interest |
| --- | --- | --- | --- | --- | --- |
| Li et al.[70] | No | | Possibly | Yes | NA |
| Bhanot et al.[71] | | | Possibly | | |
| Kuo et al.[72] | | | | | |
| Lu et al.[73] | No | | | | NA |
| Wang et al.[74] | | | | | Yes |
| Wendland et al.[75] | | | | | |
| Yu, He & Raghunathan[76] | | | Possibly | Yes | NA |
| Zhang, Yan & Malin[77] | | | Possibly | Yes | |
| Zhang et al.[79] | | | | Yes | |
| Biswal et al.[78] | | | Possibly | Yes | NA |
| Gootjes-Dreesbach et al.[80] | | | | Yes | Yes |
| Sood et al.[81] | | | | Yes | Yes |
| Beaulieu-Jones et al.[82] | | | | | |
| Fisher et al.[83] | | | | Yes | Yes |
| Barrientos et al.[84] | | | | | NA |
| Walonoski et al.[85] | | | | | |
| Raab, Nowok & Dibben[86] | | | | Yes | NA |

### SDG methods for longitudinal patient data

We identified a total of 33 SDG methods, comprising 17 primary and 16 reference methods (Supplemental Material E). Figure 3 illustrates a classification of the methods, while Table 3 details the key characteristics of the primary methods. Information on a method's ability to generate unbalanced data was available for four primary methods (24%), two of which (12%) were capable of the task and two were not. Two primary methods (12%) demonstrated proficiency in generating missing observations by learning their distribution from the input data. In contrast, five methods (29%) acknowledged missing observations in the original data as part of their implementation. All primary methods could generate categorical variables, but some required their prior encoding into indicator variables.
Additionally, some methods were restricted to generating specific categorical variables, e.g., diagnosis codes. Of the 17 primary methods, 11 (65%) could generate numerical, often continuous, variables. The implementation of five (29%) primary methods required expert knowledge.

Figure 3: **Classification of the identified SDG methods.** The identified 17 primary methods (black) and 16 reference methods (grey) were classified into five different groups: autoencoders (AEs), Bayesian Networks (BNs), Ensembles, Generative Adversarial Networks (GANs), and Other methods.

Source codes were available for 11 primary methods (65%), with two (12%) available in another publication and two upon request. Pseudocodes were given for three methods (18%), one of which was provided in a cited publication. Both the source code and pseudocode were inaccessible for four methods (24%). Python was the most common programming language (47%), followed by R (35%), and the programming language of four primary methods (24%) was unverifiable. System requirements were detailed in four studies (24%). In the following subsections, we provide a concise description of each primary method, elucidating their approach to modeling temporal structures and pointing out potential limitations.

#### 3.2.1 Generative Adversarial Networks

Generative Adversarial Networks (GANs)[13] are a class of deep learning (DL) models consisting of two neural networks. The generator network is trained to create synthetic data, while the discriminator network learns to distinguish between real and generated data. The two networks are trained in a competitive setting, where the generator aims to produce increasingly realistic samples and the discriminator strives to improve its ability to differentiate between real and fake data.
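To make this adversarial training principle concrete before turning to the individual methods, the following minimal sketch (a toy illustration assuming PyTorch; not code from any of the reviewed methods) trains a generator and a discriminator on placeholder tabular data. GAN-based SDG methods for longitudinal data add, on top of this loop, architectural components for temporal structure, mixed variable types, and, in some cases, privacy.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 4                      # toy sizes, not from any reviewed study
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, data_dim)           # placeholder for the original data

for step in range(200):
    # Discriminator update: label real samples 1 and generated samples 0.
    z = torch.randn(64, latent_dim)
    fake = G(z).detach()
    real = real_data[torch.randint(0, 256, (64,))]
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: try to make the discriminator label generated samples 1.
    z = torch.randn(64, latent_dim)
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, synthetic observations are drawn by sampling the latent space.
synthetic = G(torch.randn(1000, latent_dim)).detach()
```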
#### 3.2.1.1 AC-GAN

AC-GAN[82] (auxiliary classifier GAN) generates continuous synthetic data that include a stratifying variable, e.g., a treatment group. Notably, AC-GAN offers options for both differentially private and non-private training approaches. The method models temporal relationships through convolutional layers[89] and by assuming that variables in the input dataset are ordered by time. Given that the objective is to concurrently generate realistic synthetic data while maintaining the inherent data stratification, its applicability in producing more generic longitudinal patterns is difficult to determine.

#### 3.2.1.2 EHR-M-GAN

EHR-M-GAN[70] first maps variables into a shared latent space of reduced dimension using a dual variational autoencoder[90]. The method then generates correlated patient trajectories of different variable types through a coupled recurrent network that specifically focuses on learning temporal dependencies in the data. As EHR-M-GAN requires filtering outliers from the input data, it is not clear how well the method performs on data with long-tailed distributions.

#### 3.2.1.3 HealthGAN

HealthGAN[87], applied in Bhanot et al.[71], implements a Wasserstein GAN with gradient penalty (WGAN-GP)[91] and a data transformation to generate mixed-type data. The transformation involves scaling all variables to a unit range and reversing them back to their original scales after synthesis. HealthGAN, not initially developed for longitudinal data, relies on its ability to learn the multivariate distribution underlying the input data to capture temporal correlations. It may face challenges in learning and generating subpopulations.

Table 3: **Summary of the 17 identified primary synthetic longitudinal data generation methods.** The table presents each method, its type, the ability to generate unbalanced and missing data, and categorical and continuous variables (✓ Yes, X No, ? Unclear). In addition, the table indicates whether the method requires expert knowledge, the availability of source and pseudo code, as well as the programming language employed. The last column lists all the publications included in this review, including the original publication, where the method was applied. GAN: generative adversarial network; AE: autoencoder; BN: Bayesian Network. *Imputed as part of the method.

| Method | Type | Unbalanced data | Missing data | Categorical variables | Numerical variables | Expert knowledge | Source / pseudocode | Programming language | In |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AC-GAN | GAN | ? | X | ✓ | ✓ | X | ✓ / X | Python | 82 |
| EHR-M-GAN | GAN | ? | X | ✓ | ✓ | X | ✓ / ✓ | Python | 70 |
| HealthGAN | GAN | ? | X | ✓ | ✓ | X | ✓ / X | Python | 71 |
| Health Gym GAN | GAN | ? | X | ✓ | ✓ | X | ✓ / X | Python | 72 |
| MTGAN | GAN | ? | X | ✓ | X | X | ✓ / ✓ | Python | 73 |
| EVA | AE | ✓ | X | ✓ | X | X | | | 78 |
| MBN | AE + BN | | X | ✓ | ✓ | ✓ | ✓ / X | R | 81 |
| VAMBN | AE + BN | ? | X* | ✓ | ✓ | ✓ | ✓ / X | Python + R | 75, 80 |
| GMB model | BN | ✓ | X | ✓ | X | ✓ | | Python | 74 |
| LS-EHR | Ensemble | ? | X | ✓ | X | X | | | 77 |
| MultiNODEs | Ensemble | ? | X* | ✓ | ✓ | X | | Python + R | 75 |
| SynTEG | Ensemble | ? | X | ✓ | X | X | | | 79 |
| CRBM | Other | X | X | ✓ | ✓ | X | | ? | 83 |
| SCM | Other | ? | ✓ | ✓ | ✓ | ✓ | ✓ / X | R | 84 |
| SPMI | Other | ? | X* | ✓ | ✓ | ✓ | ✓ / X | R | 76 |
| Synthea | Other | ? | X | ✓ | ✓ | X | ✓ / X | Java | 85 |
| Synthpop | Other | ? | ✓ | ✓ | ✓ | ✓ | ✓ / X | R | 76, 86 |

#### 3.2.1.4 Health Gym GAN

Health Gym GAN[72] generates mixed-type data and utilizes WGAN-GP and a bi-directional long short-term memory (biLSTM) network[92, 93, 94] to model dependencies in both temporal directions. To model multiple correlated categorical variables, Health Gym GAN requires fine-tuning.

#### 3.2.1.5 MTGAN

Multi-label time series GAN (MTGAN)[73] generates patient-level illness trajectories (diagnosis code indicator vectors). MTGAN utilizes a gated recurrent unit (GRU) generator[95] to recursively generate diagnosis probabilities and applies a conditional transition matrix to better address rare diagnoses.
The GRU also models temporal correlations between visits and diagnoses via latent variables and probabilities from previous iterations. The current MTGAN version is restricted to categorical variables and cannot generate continuous variables.

#### 3.2.2 Autoencoders

Autoencoders (AEs)[96] are a type of neural network architecture that consists of an encoder and a decoder network, collectively trained to learn an efficient data representation that captures the most salient features of the input data. The encoder maps input data to a lower-dimensional latent space, while the decoder reconstructs the original input from the latent space. The goal of an autoencoder is to minimize the reconstruction error. Variational autoencoders (VAEs)[90] differ from AEs by employing probabilistic encodings that capture uncertainty through probability distributions over latent variables. This approach offers greater flexibility in handling mixed-type data and enables VAEs to generate new samples by sampling from the latent space and decoding to the data domain.

#### 3.2.2.1 EVA

EHR Variational Encoder (EVA)[78] generates patient-level visit sequences (indicator vectors of diagnosis codes, medications, and procedures) as autoregressive time-ordered transitions, with latent variables accounting for between-patient heterogeneity across the sequences. EVA models the temporal structure by incrementally expanding the latent space's spatial dimensions (deconvolution). While EVA can generate unbalanced data, it does not model the actual time between visits. In addition, EVA's performance may be suboptimal when dealing with less frequent sequences in the input data.
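The following minimal sketch (a toy illustration assuming PyTorch; not code from any of the reviewed methods) shows the VAE principle described above: an encoder that outputs a latent distribution, the reparameterization trick, and generation of new observations by decoding samples drawn from the latent prior.

```python
import torch
import torch.nn as nn

class ToyVAE(nn.Module):
    """Minimal variational autoencoder for a small continuous data vector."""
    def __init__(self, data_dim=4, latent_dim=2):
        super().__init__()
        self.enc = nn.Linear(data_dim, 16)
        self.mu = nn.Linear(16, latent_dim)
        self.logvar = nn.Linear(16, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, data_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        return self.dec(z), mu, logvar

vae = ToyVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
x = torch.randn(256, 4)                                           # placeholder original data

for _ in range(200):
    recon, mu, logvar = vae(x)
    # Loss = reconstruction error + KL divergence of the approximate posterior from N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.shape[0]
    loss = nn.functional.mse_loss(recon, x) + kl
    opt.zero_grad(); loss.backward(); opt.step()

# New synthetic observations: sample the latent prior and decode.
synthetic = vae.dec(torch.randn(1000, 2)).detach()
```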
#### 3.2.3 Bayesian Networks

Bayesian Networks (BNs)[97] are probabilistic modeling techniques that capture relationships between variables using a directed acyclic graph (DAG). The graph's nodes represent random variables, while the edges indicate between-node dependencies. Each node is associated with a conditional probability distribution that describes the probability of the variable given its parental nodes.

#### 3.2.3.1 MBN

A Modular Bayesian Network (MBN)[81] generates Gaussian and categorical synthetic data by learning conditional probabilities between predefined modules of semantically similar variables. Learning the network structure is improved by enforcing edge constraints, such as the correct temporal order of the nodes, and by reducing the module dimensionality via sparse autoencoders. In the case of non-Gaussian variables, MBN performs better when these variables are discretized, but this process also reduces data resemblance. Moreover, defining the modules and constraints requires expert knowledge.

#### 3.2.3.2 VAMBN

A Variational Autoencoder Modular Bayesian Network (VAMBN)[80] expands on MBN by introducing a variational autoencoder (HI-VAE)[98] that accounts for data heterogeneity and missingness within modules. Temporal ordering is maintained by preventing edges from pointing backward in time. Similarly to MBN, VAMBN requires expert knowledge. In addition, the current implementation does not allow Gaussian nodes to have discrete-node children and necessitates a modern parallel computing architecture.

#### 3.2.3.3 GMB model

Wang et al.[74] used a Generative Markov-Bayesian-based (GMB)[90] approach to generate disease progression trajectories (diagnosis codes). The method is a hierarchical model with three layers: disease progression is modeled as a continuous-time Markov jump process[99], possible complications as conditionally independent Markov processes[99], and the presence of comorbidities is inferred through a bipartite noisy-or Bayesian Network[100, 101]. GMB transforms unbalanced discrete-time input data into continuous-time illness trajectories. For improved computational efficiency, expert knowledge is needed to establish prior probabilities that link complications and observed comorbidities.

#### 3.2.4 Ensembles

Ensemble methods are machine learning techniques that combine multiple individual models.[102] The underlying idea is that by aggregating predictions or decisions from multiple models, the overall performance is improved over a single model. Common ensemble methods include bagging, boosting, and stacking.[102] Bagging involves training multiple models independently on different subsets of the training data and averaging their predictions. Boosting focuses on sequential model training, where each subsequent model tries to correct mistakes made by the previous models. Stacking combines predictions from multiple models using another model, called a meta-learner.

#### 3.2.4.1 LS-EHR

The Longitudinal Simulation framework for EHR (LS-EHR)[77] combines a GAN and a recurrent neural network (RNN) with condition fuzzing and regularization (CFR)[77] to generate patient-level visit sequences (indicator vectors of diagnosis and procedure codes). To further improve data quality, LS-EHR incorporates Gaussian noise to add variability to synthetic observations and uses rejection sampling to improve data resemblance. CFR enables learning from both previous and subsequent episodes, mitigating gradual synthetic sequence divergence (drift) from the real sequence. While LS-EHR was developed to address drifting, the problem was not fully resolved. Additionally, the performance of LS-EHR on datasets with high sparsity or a mix of categorical and continuous variables remains uncertain.

#### 3.2.4.2 MultiNODEs

Multimodal Neural Ordinary Differential Equations (MultiNODEs)[75] use latent NODEs[103] to generate continuous trajectories, HI-VAE[98] to generate static variables (both categorical and numerical), and an imputation layer to replace any missing values present in the input data. The method is currently limited to generating continuous repeated measurements, and its optimal performance depends on tuning several sensitive hyperparameters.

#### 3.2.4.3 SynTEG

The Synthetic Temporal EHR Generator (SynTEG)[79] utilizes a self-attention architecture of transformer encoders[104] and a recurrent model to generate patient-level visit sequences (diagnosis code indicator vectors) conditionally on the previous visits. Subsequently, a GAN is used to capture the multivariate distribution and to generate the sequences. SynTEG is limited to generating only diagnosis codes, and it is possible that the method generates sequences conflicting with medical knowledge.

#### 3.2.5 Other

#### 3.2.5.1 CRBM

Fisher et al.[83] used a Conditional Restricted Boltzmann Machine (CRBM) to generate mixed-type disease progression trajectories. A CRBM is a probabilistic graphical model that incorporates latent variables and conditional distributions. The temporal dependence structure was learned by training the model with all possible pairs of two consecutive observations. As such, CRBM can generate both static and time-varying variables.
However, the method requires balanced, numerically formatted data.

#### 3.2.5.2 SCM

Barrientos et al.[84] used Sequential Conditional Modeling (SCM) to generate synthetic career trajectories. Specifically, they modeled each input variable based on its type, utilizing techniques like classification and regression trees (CARTs)[105] and parametric probability distributions. Data were generated sequentially, variable by variable, and the future values of any time-varying variables were assumed to depend on the past only through the variables' current values. This method resembles traditional simulation and relies on expert knowledge to determine the approach and sequence for modeling each variable.

#### 3.2.5.3 SPMI

Yu, He and Raghunathan[76] used Semiparametric Multiple Imputation (SPMI) to generate synthetic mixed-type survey data. Missing observations were first imputed using a Sequential Regression Multiple Imputation (SRMI)[106] framework. Subsequently, a Bayesian bootstrap sample[107] was extracted from these data, and Alternating Conditional Expectation (ACE)[108] and Ridge-Penalized Logistic (RPL)[109] imputation models were used to generate synthetic observations of continuous and discrete variables, respectively. Temporal dependencies were assumed to be learned by the imputation models as part of the overall correlation structure. SPMI is designed for datasets with around a hundred variables and may not be suitable for significantly larger or smaller datasets. Additionally, the method's generalizability beyond specific types of survey data, such as EHR or census data, is uncertain.

#### 3.2.5.4 Synthea

Synthea[85] generates synthetic EHR data using modules and state-transition machines to model patient trajectories. The modules are built based on the Publicly Available Data Approach to the Realistic Synthetic EHR (PADARSER) framework[44], utilizing publicly available data and predefined healthcare trajectory templates (care maps). Users can build their own disease models using a dedicated module builder ([https://github.com/synthea/](https://github.com/synthea/)). Synthea's module-based approach may not fully capture real-world complexity, and it primarily generates snapshots of patients at specific times, lacking long-term health trajectory representation.

#### 3.2.5.5 Synthpop

Raab, Nowok and Dibben[86] generated mixed-type data with Synthpop[110]. This R package enables the use of several different parametric and non-parametric methods for generating synthetic mixed-type data by drawing each variable sequentially from its conditional distribution given the already synthesized variables. The authors applied both non-parametric (CART) and parametric (polychotomous, logistic, and linear regression) models to estimate these conditional distributions. Temporal modeling relies on the models' ability to learn the general correlation structure. Applying the methods provided by Synthpop requires expert knowledge akin to SCM. In addition, the parametric methods may oversimplify the underlying distributions and structure in the input data and thus may not work with complex datasets.
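As a schematic illustration of the sequential conditional, variable-by-variable synthesis idea underlying SCM and Synthpop (a simplified sketch assuming scikit-learn and pandas, not the synthpop implementation itself), the first variable can be resampled from its marginal distribution and each subsequent variable drawn from a CART model fitted on the original data given the variables already synthesized.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical original data: baseline age and two repeated outcome measurements.
orig = pd.DataFrame({
    "age":  rng.normal(60, 10, 500),
    "y_t0": rng.normal(130, 15, 500),
})
orig["y_t1"] = 0.8 * orig["y_t0"] + 0.2 * orig["age"] + rng.normal(0, 5, 500)

synth = pd.DataFrame(index=range(500))
# 1) First variable: resample from its marginal distribution.
synth["age"] = rng.choice(orig["age"].to_numpy(), size=500, replace=True)

# 2) Each later variable: fit a tree on the original data given the variables
#    synthesized so far, then draw synthetic values from the matching leaves.
for target, predictors in [("y_t0", ["age"]), ("y_t1", ["age", "y_t0"])]:
    tree = DecisionTreeRegressor(min_samples_leaf=20).fit(orig[predictors], orig[target])
    leaf_orig = tree.apply(orig[predictors])        # leaf id of each original row
    leaf_synth = tree.apply(synth[predictors])      # leaf id of each synthetic row
    synth[target] = [
        rng.choice(orig[target].to_numpy()[leaf_orig == leaf]) for leaf in leaf_synth
    ]
```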
### Evaluation approaches

All 17 studies implemented both qualitative and quantitative approaches to evaluate synthetic data quality. For the evaluation, most (76%) generated a single synthetic dataset, while four studies (24%) created multiple (<50) datasets. Fifteen studies (88%) compared synthetic and original data. The remaining two studies pursued alternative approaches: one compared prevalence statistics in the synthetic data against empirical population data[85], while the other focused solely on describing the characteristics of the synthetic dataset[74]. Additionally, one study augmented its assessment with simulated data[75].

Seven studies (41%) explored the impact of adjusting tuning parameters or altering the method's structural configuration on the quality of synthetic data. An equivalent number of studies (41%) conducted comparisons between the primary method and reference methods. In the following subsections, we expound the approaches employed to assess resemblance, utility, and privacy. Supplemental Material F gives details about the datasets utilized within the studies.

#### 3.3.1 Resemblance

Fifteen studies (88%) compared resemblance between the synthesized and original data. We discerned four different domains of resemblance: the similarity of univariate distributions, pairwise distributions, multivariate distributions, and temporal structure. In each domain, we further classified the approaches into qualitative, quantitative, model-based, and statistical test-based paradigms (Figure 4). Among the 17 studies, 12 studies (71%) assessed univariate resemblance, eight studies (47%) examined pairwise distribution resemblance, six studies (35%) evaluated multivariate distribution resemblance, and nine studies (53%) investigated temporal structure resemblance.

The assessment of resemblance of univariate distributions involved comparing marginal distributions of a single variable in the synthetic data and the original data. Qualitative and quantitative paradigms were applied in nine studies (53%), and statistical tests were employed in five studies (29%). The "Other" quantitative approaches included methods such as the root mean square error (RMSE)[70] and the Jensen-Shannon divergence (JSD)[73, 75].

Comparisons of pairwise distributions focused on the association between two variables in the synthetic data and the corresponding association in the original data. Quantitative and qualitative paradigms were used at an equal rate (47%), and statistical tests were employed in one study (6%). The "Other" quantitative approaches consisted mainly of the Frobenius norm of correlation matrices[75, 80] and study-specific measurements[71].

The multivariate distribution domain included model-based methods aimed at differentiating synthetic observations from real observations, validation by medical experts, and principal component and factor analyses. Four studies (24%) applied quantitative and three studies (18%) qualitative paradigms to evaluate multivariate resemblance. Model-based paradigms were used in four studies (24%), and one study (6%) utilized statistical tests. The quantitative "Other" approaches included scores given by clinicians[78, 82], a discriminative area under the curve (AUC)[77], and a discriminative score[70]. The "Other" model-based approaches included factor analysis[76] and naive-, transfer learning-, and fine-tuning-based discriminators[77].

The assessment of temporal structure involved inspecting the temporal relationships among synthetic and original variables using autocorrelation or data visualization. Temporal structure was evaluated qualitatively in eight (47%) and quantitatively in three (18%) studies. A visualization of the underlying Bayesian Network was considered an "Other" qualitative approach, and the computation of the RMSE of an autocorrelation function[70] and a latent temporal statistic derived from singular value decomposition[79] were included in the "Other" quantitative approaches.
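As an illustration of how some of the quantitative resemblance measures above can be computed, the following sketch (assuming NumPy and SciPy; a simplified stand-in for the study-specific implementations) contrasts univariate distributions via the Jensen-Shannon divergence, pairwise structure via correlation matrices, and temporal structure via autocorrelations.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(1)
real = rng.normal(size=(500, 3))        # placeholder original data (subjects x time points)
synth = rng.normal(size=(500, 3))       # placeholder synthetic data

# Univariate resemblance: Jensen-Shannon divergence between binned marginals of one variable.
bins = np.histogram_bin_edges(np.concatenate([real[:, 0], synth[:, 0]]), bins=20)
p, _ = np.histogram(real[:, 0], bins=bins, density=True)
q, _ = np.histogram(synth[:, 0], bins=bins, density=True)
jsd = jensenshannon(p, q) ** 2          # scipy returns the square root of the divergence

# Pairwise resemblance: compare correlation matrices, here via the Frobenius norm
# of their difference (one common variant of a correlation-based comparison).
corr_dist = np.linalg.norm(np.corrcoef(real, rowvar=False) - np.corrcoef(synth, rowvar=False))

# Temporal resemblance: RMSE between lag-1 autocorrelations of a repeatedly measured
# variable, computed across subjects for each pair of consecutive time points.
def lag1_autocorr(x):                   # x: subjects x time points
    return np.array([np.corrcoef(x[:, t], x[:, t + 1])[0, 1] for t in range(x.shape[1] - 1)])

acf_rmse = np.sqrt(np.mean((lag1_autocorr(real) - lag1_autocorr(synth)) ** 2))
print(jsd, corr_dist, acf_rmse)
```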
Figure 4: **Approaches used to evaluate synthetic and original data resemblance.** The figure illustrates the utilization of various approaches (on the x-axis, indicated in blue) in assessing the resemblance between synthetic and original data in each included study. The evaluation of resemblance encompasses four domains: univariate distributions, pairwise distributions, multivariate distributions, and similarity of temporal structure. Furthermore, the approaches used for this assessment were categorized into qualitative, quantitative, model-based, and statistical test-based paradigms. It should be noted that a study may encompass multiple assessments that belong to the same approach. The approach "Histogram" also includes density plots, and the approach "Boxplot" includes violin plots. The approaches labeled as "Other" are discussed in detail in the main text. SD: standard deviation; Reg.: regression; RF: random forests; LSTM: long short-term memory; NA: not applicable, did not compare synthetic and original data.

#### 3.3.2 Utility

Utility was evaluated in ten studies (59%), focusing on statistical inference and prediction performance. These domains were also divided into the same four-class paradigm classification as in resemblance. Figure 5 summarizes the approaches adopted for utility assessment. Three studies (18%) evaluated the similarity of statistical inference, and all of them used both qualitative and model-based paradigms. One study used a quantitative paradigm that consisted of ratios of standard errors based on a fitted logistic regression model.[86] Seven studies (41%) assessed predictive performance and applied model-based paradigms. In addition, four studies (24%) used qualitative, two (12%) quantitative, and one (6%) statistical test-based paradigms. The quantitative paradigms included the mean absolute relative difference of the AUC[79] and the weighted F1-score[73]. The "Other" model-based category included a support vector machine (SVM)[82], an RNN[73], batch-constrained Q-learning[72], and an unspecified model[79].

Figure 5: **Approaches used to evaluate synthetic data utility in comparison to the original data.** The figure illustrates the utilization of various approaches (on the x-axis, indicated in blue) in assessing the utility of synthetic data in each included study. The evaluation of utility encompasses two domains: similarity of statistical inference and prediction performance. Furthermore, the approaches used for this assessment were categorized into qualitative, quantitative, model-based, and statistical test-based paradigms. It should be noted that a study may encompass multiple assessments that belong to the same approach. The approach "Histogram" also includes density plots, and the approach "Boxplot" includes violin plots. The approaches labeled as "Other" are discussed in detail in the main text. SD: standard deviation; Reg.: regression; RF: random forests; LSTM: long short-term memory; NA: not applicable, did not compare synthetic and original data.

#### 3.3.3 Privacy

Privacy was addressed in eight studies (47%) to varying degrees. Three studies (18%)[80, 82, 70] implemented differential privacy (DP)[111]. However, one of them used DP only to train HI-VAE without generating fully synthetic differentially private data.[80] Membership disclosure, i.e., unintentional or unauthorized disclosure of an individual's inclusion within a dataset, was evaluated in four studies (24%)[70, 78, 72, 79] with various techniques. Attribute disclosure, i.e., inadvertent or unauthorized exposure of specific attributes or characteristics of an individual, was assessed in two studies (12%)[76, 79].
One study[72] evaluated identity disclosure, i.e., the unintended revelation of an individual's identity or personally identifiable information. Another study[84] assessed inferential disclosure, involving the derivation of sensitive information through statistical analysis.
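As a concrete, hypothetical illustration of one simple distance-based heuristic related to membership and identity disclosure (not one of the specific techniques applied in the reviewed studies), the following sketch (assuming NumPy and SciPy) compares the distance from each synthetic record to its closest original record against nearest-neighbour distances within the original data; synthetic records lying unusually close to single original records may indicate memorization of real individuals.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
original = rng.normal(size=(1000, 5))    # placeholder original records (scaled features)
synthetic = rng.normal(size=(1000, 5))   # placeholder synthetic records

# Distance from each synthetic record to its closest original record.
dcr = cKDTree(original).query(synthetic, k=1)[0]

# Baseline: nearest-neighbour distances within the original data itself
# (k=2 because the closest neighbour of a point in its own dataset is the point itself).
within = cKDTree(original).query(original, k=2)[0][:, 1]

# If many synthetic records are far closer to original records than original records
# are to each other, the generator may be reproducing (parts of) real individuals.
print("median distance to closest record:", np.median(dcr),
      "median within-original NN distance:", np.median(within))
```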
In the context of synthetic longitudinal data generation, a progressive step forward would involve developing and integrating components specifically designed to preserve temporal structures and to generate unbalanced data.

Regarding method evaluation, only 35% of the studies assessed all three aspects (resemblance, utility, and privacy), and the approaches used were diverse. The absence of a standardized evaluation framework has also been noted in the previous reviews.[12, 14, 15] Regarding longitudinal data specifically, it is alarming that only 53% of the studies evaluated the resemblance of the temporal structure. Preserving this structure is essential, and without evaluating its preservation, it is not feasible to form an opinion on a method's suitability for longitudinal data. In addition, traditional longitudinal statistical models widely applied in medical research were notably absent from the evaluations, leaving it uncertain whether such analyses can be applied to synthetic data generated by the identified methods. Lastly, while the incorporation of privacy-preserving techniques within SDG has garnered attention in recent years, we did not observe any clear trend in this regard.

### Limitations

Although our review is, as far as we are aware, the first systematic literature review concentrating on applied methodologies that adheres to the PRISMA guidelines throughout the review process, it is important to acknowledge its inherent constraints. First, due to the lack of unambiguous definitions of synthetic and longitudinal data and the dispersal of SDG evolution across diverse fields, formulating a definitive search algorithm was challenging. Despite our efforts to encompass recognized synonyms and to accommodate different permutations, the omission of relevant publications remains possible. Nevertheless, given the extensive number of screened publications and the exhaustive citation searching, coupled with our accurate identification of the overlapping publications covered in prior reviews, we are confident in the compiled body of literature.

Second, longitudinal data analysis has mainly been emphasized in medical statistics, while SDG methods seem to derive from computer science and associated applications. This dichotomy poses challenges in assessing the applicability and characteristics of SDG methods for LD generation. For instance, the notion of unbalanced data, though well-established in medical statistics, appears to receive limited attention in computer science, resulting in its underrepresentation and oversight in SDG research. This likely explains our inability to collect the respective information from the identified studies. Additionally, the inherently opaque nature of deep learning models further complicates their evaluation.

### Conclusion

Ultimately, while we identified 17 methods for generating synthetic longitudinal patient data (Q1) that address various challenges related to LD (Q2), none of the identified methods exhibited the capacity to address all challenges concurrently, emphasizing the need for continued methodological research. Yet, a single method rarely accommodates all objectives, and understanding any method's inherent limitations and advantages remains essential. This requires meticulous documentation and transparent presentation of the method in question and of its evaluation. Moreover, publishing a method that is accessible only to its developer is seldom pragmatic.
Therefore, including the source code as part of the publication is important and aligned with today's standards. The observed heterogeneity in evaluation approaches across the studies (Q3) presents a significant challenge to making meaningful comparisons between methods and to judging their applicability in practice. While creating standardized evaluation criteria would enhance method assessment, it is crucial to recognize the importance of tailored approaches for various applications and datasets. Establishing a standardized evaluation framework offers a chance for interdisciplinary collaboration among medicine, statistics, and computer science. Lastly, further research is necessary to address privacy concerns related to synthetic data, along with clear directives from data protection agencies to guide implementation and progress. This requires increased collaboration among method developers, medical practitioners, and legislators, as directives require empirical support, and methods should be developed with practical feasibility in mind.

## 5 Other information

### Acknowledgements

We thank Antti Airola, Martin Closter Jespersen, Henning Langberg and Arho Virkki for their roles in the systematic review team.

### Author contributions

**Katarina Perkonoja**: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Visualization, Writing - original draft. **Joni Virta**: Conceptualization, Investigation, Methodology, Supervision, Writing - review & editing. **Kari Auranen**: Conceptualization, Methodology, Supervision, Validation, Writing - review & editing.

### Declaration of conflicting interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

### Funding

The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work has been partially supported by the Novo Nordisk Foundation (grant number NNF19SA0059129), the Finnish Cultural Foundation (grant number 00220801) and the Academy of Finland (grant numbers 335077, 347501 and 353769).

### Registration and protocol

The review protocol was registered with PROSPERO (registration number CRD42021259232) on July 5, 2021. Subsequently, the protocol was amended twice, on January 25, 2022, and March 9, 2023, respectively. The rationale for each modification is detailed in the corresponding section of the amended protocol, which is available at [https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021259232](https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021259232).

### Availability of data

The data used to derive the results and conclusions of this systematic review are available upon request.

## References

* [1] EU General Data Protection Regulation (EU-GDPR). Regulation (EU) 2016/679, European Union, [https://gdpr.eu/](https://gdpr.eu/) (2018, accessed 22 June 2023).
* [2] Health Insurance Portability and Accountability Act of 1996 (HIPAA). Public Law 104-191, United States, [https://www.hhs.gov/hipaa/index.html](https://www.hhs.gov/hipaa/index.html) (1996, accessed 22 June 2023).
* [3] Finnish Ministry of Social Affairs and Health. Act 552/2019 on the Secondary Use of Health and Social Data. Act 552/2019, Finland, [https://stm.fi/en/secondary-use-of-health-and-social-data](https://stm.fi/en/secondary-use-of-health-and-social-data) (2019, accessed 22 June 2023).
* [4] Van Belle G, Fisher LD, Heagerty PJ, et al. Longitudinal Data Analysis. In: _Biostatistics: A Methodology for the Health Sciences_. Hoboken, NJ, USA: John Wiley & Sons, Inc., pp. 728-765.
* [5] Shumway RH, Stoffer DS. Characteristics of Time Series. In: Casella G, Fienberg S, Olkin I (eds) _Time Series Analysis and Its Applications: With R Examples_. New York: Springer, 2006, pp. 1-40.
* [6] Guo S. Introduction. In: _Survival Analysis_. New York: Oxford University Press, 2010, pp. 3-25.
* [7] Hundepool A, Domingo-Ferrer J, Franconi L, et al. Synthetic and Hybrid Data. In: _Statistical Disclosure Control_. Wiley, pp. 78-99.
* [8] Fung BCM, Wang K, Fu AW-C, et al. Anonymization Operations. In: _Introduction to Privacy-Preserving Data Publishing: Concepts and Techniques_. Chapman and Hall/CRC, pp. 35-42.
* [9] Rubin DB. Statistical Disclosure Limitation. _J Off Stat_ 1993; 9: 461-468.
* [10] Stadler T, Oprisanu B, Troncoso C. Synthetic Data - Anonymisation Groundhog Day. In: _Proceedings of the 31st USENIX Security Symposium_. Boston, [https://www.usenix.org/conference/usenixsecurity22/presentation/stadler](https://www.usenix.org/conference/usenixsecurity22/presentation/stadler) (2022).
* [11] Raghunathan TE. Synthetic Data. _Annu Rev Stat Appl_ 2021; 8: 129-140.
* [12] Georges-Filteau J, Cirillo E. Synthetic Observational Health Data with GANs: from slow adoption to a boom in medical research and ultimately digital twins? _arXiv preprint_, [http://arxiv.org/abs/2005.13510](http://arxiv.org/abs/2005.13510) (2020).
* [13] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks. _Commun ACM_ 2020; 63: 139-144.
* [14] Hernandez M, Epelde G, Alberdi A, et al. Synthetic data generation for tabular health records: A systematic review. _Neurocomputing_ 2022; 493: 28-45.
* [15] Ghosheh G, Li J, Zhu T. A review of Generative Adversarial Networks for Electronic Health Records: applications, evaluation measures and data sources. _arXiv preprint_, [http://arxiv.org/abs/2203.07018](http://arxiv.org/abs/2203.07018) (2022).
* [16] Jeong JJ, Tariq A, Adejumo T, et al. Systematic Review of Generative Adversarial Networks (GANs) for Medical Image Classification and Segmentation. _Journal of Digital Imaging_ 2022; 35: 137-152.
* [17] Zhang D, Ma M, Xia L. A comprehensive review on GANs for time-series signals. _Neural Comput Appl_ 2022; 34: 3551-3571.
* [18] Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. _Syst Rev_ 2021; 10: 89.
* [19] Walia M, Tierney B, Mckeever S. Synthesising Tabular Data using Wasserstein Conditional GANs with Gradient Penalty (WCGAN-GP). In: Longo L, Rizzo L, Hunter E, et al. (eds) _Proceedings of The 28th Irish Conference on Artificial Intelligence and Cognitive Science_. Dublin: Technological University Dublin, 2020, pp. 325-336.
* [20] Harzing A-W. Publish or Perish, [https://harzing.com/resources/publish-or-perish](https://harzing.com/resources/publish-or-perish) (accessed 18 June 2021).
* [21] arXiv Dataset, [https://www.kaggle.com/datasets/Cornell-University/arxiv](https://www.kaggle.com/datasets/Cornell-University/arxiv) (accessed 22 November 2022).
* [22] Bramer WM, Rethlefsen ML, Kleijnen J, et al. Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study. _Syst Rev_ 2017; 6: 245.
* [23] McGowan J, Sampson M, Salzwedel DM, et al. PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement. _J Clin Epidemiol_ 2016; 75: 40-46.
* [24] The EndNote Team. EndNote 20.
* [25] Ouzzani M, Hammady H, Fedorowicz Z, et al. Rayyan--a web and mobile app for systematic reviews. _Syst Rev_ 2016; 5: 210.
* [26] Harris PA, Taylor R, Thielke R, et al. Research electronic data capture (REDCap)--A metadata-driven methodology and workflow process for providing translational research informatics support. _J Biomed Inform_ 2009; 42: 377-381.
* [27] Harris PA, Taylor R, Minor BL, et al. The REDCap consortium: Building an international community of software platform partners. _J Biomed Inform_ 2019; 95: 103208.
* [28] Boutron I, Page M, Higgins J, et al. Chapter 7: Considering bias and conflicts of interest among the included studies. In: Higgins J, Thomas J, Chandler J, et al. (eds) _Cochrane Handbook for Systematic Reviews of Interventions version 6.1_. Cochrane, 2020.
* [29] Abay NC, Zhou Y, Kantarcioglu M, et al. Privacy Preserving Synthetic Data Release Using Deep Learning. In: _Lecture Notes in Computer Science_. Springer Verlag, pp. 510-526.
* [30] Park Y, Ghosh J, Shankar M. Perturbed Gibbs Samplers for Generating Large-Scale Privacy-Safe Synthetic Health Data. In: _2013 IEEE International Conference on Healthcare Informatics_. IEEE, pp. 493-498.
* [31] Zhang J, Cormode G, Procopiuc CM, et al. PrivBayes. _ACM Transactions on Database Systems_ 2017; 42: 1-41.
* [32] Yoon J, Drumright LN, van der Schaar M. Anonymization Through Data Synthesis Using Generative Adversarial Networks (ADS-GAN). _IEEE J Biomed Health Inform_ 2020; 24: 2378-2388.
* [33] Bonofiglio F, Schumacher M, Binder H. Recovery of original individual person data (IPD) inferences from empirical IPD summaries only: Applications to distributed computing under disclosure constraints. _Stat Med_ 2020; 39: 1183-1198.
* [34] El Emam K, Mosquera L, Zheng C. Optimizing the synthesis of clinical trial data using sequential trees. _Journal of the American Medical Informatics Association_ 2021; 28: 3-13.
* [35] Khorchani T, Gadiya Y, Witt G, et al. SASC: A simple approach to synthetic cohorts for generating longitudinal observational patient cohorts from COVID-19 clinical data. _Patterns_ 2022; 3: 100453.
* [36] Torfi A, Fox EA. CorGAN: Correlation-Capturing Convolutional Generative Adversarial Networks for Generating Synthetic Healthcare Records. _The International FLAIRS Conference Proceedings_; 33.
* [37] Hernandez M, Epelde G, Beristain A, et al. Incorporation of Synthetic Data Generation Techniques within a Controlled Data Processing Workflow in the Health and Wellbeing Domain. _Electronics (Basel)_ 2022; 11: 812.
* [38] Wang L, Zhang W, He X. Continuous Patient-Centric Sequence Generation via Sequentially Coupled Adversarial Learning. In: _Lecture Notes in Computer Science_, pp. 36-52.
* [39] Baowaly MK, Lin CC, Liu CL, et al. Synthesizing electronic health records using improved generative adversarial networks. _Journal of the American Medical Informatics Association_ 2019; 26: 228-241.
* [40] Liu Y, Peng J, Yu JJQ, et al. PPGAN: Privacy-Preserving Generative Adversarial Network. In: _2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS)_. IEEE, pp. 985-989.
* [41] Dash S, Dutta R, Guyon I, et al. Synthetic Event Time Series Health Data Generation. _arXiv preprint_, [http://arxiv.org/abs/1911.06411](http://arxiv.org/abs/1911.06411) (2019).
* [42] Boedihardjo M, Strohmer T, Vershynin R. Privacy of Synthetic Data: A Statistical Framework. _IEEE Trans Inf Theory_ 2023; 69: 520-527.
* [43] Lombardo JS, Moniz LJ. A method for generation and distribution of synthetic medical record data for evaluation of disease-monitoring systems. _Johns Hopkins APL Technical Digest (Applied Physics Laboratory)_; 27.
* [44] Dube K, Gallagher T. Approach and Method for Generating Realistic Synthetic Electronic Healthcare Records for Secondary Use. In: _Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)_, pp. 69-86.
* [45] McLachlan S, Dube K, Gallagher T, et al. Realistic Synthetic Data Generation: The ATEN Framework. In: _Communications in Computer and Information Science_, pp. 497-523.
* [46] Mendonca SDP, Brito YPDS, Santos CGR Dos, et al. Synthetic Datasets Generator for Testing Information Visualization and Machine Learning Techniques and Tools. _IEEE Access_ 2020; 8: 82917-82928.
* [47] Garrow LA, Bodea TD, Lee M. Generation of synthetic datasets for discrete choice analysis. _Transportation (Amst)_ 2010; 37: 183-202.
* [48] Lobo J, Henriques R, Madeira SC. G-Tric: generating three-way synthetic datasets with triclustering solutions. _BMC Bioinformatics_ 2021; 22: 16.
* [49] Chawla NV, Bowyer KW, Hall LO, et al. SMOTE: Synthetic Minority Over-sampling Technique. _Journal of Artificial Intelligence Research_ 2002; 16: 321-357.
* [50] Tang B, He H. KernelADASYN: Kernel based adaptive synthetic data generation for imbalanced learning. In: _2015 IEEE Congress on Evolutionary Computation (CEC)_. IEEE, pp. 664-671.
* [51] Sharma S, Bellinger C, Krawczyk B, et al. Synthetic Oversampling with the Majority Class: A New Perspective on Handling Extreme Imbalance. In: _2018 IEEE International Conference on Data Mining (ICDM)_. IEEE, pp. 447-456.
* [52] Martinez-Garcia JM, Suarez-Araujo CP, Baez PG. SNEOM: A Sanger Network Based Extended Over-Sampling Method. Application to Imbalanced Biomedical Datasets. In: _Lecture Notes in Computer Science_, pp. 584-592.
* [53] Wan Z, Zhang Y, He H. Variational autoencoder based synthetic data generation for imbalanced learning. In: _2017 IEEE Symposium Series on Computational Intelligence (SSCI)_. IEEE, pp. 1-7.
* [54] Perez-Ortiz M, Tino P, Mantiuk R, et al. Exploiting Synthetically Generated Data with Semi-Supervised Learning for Small and Imbalanced Datasets. _Proceedings of the AAAI Conference on Artificial Intelligence_ 2019; 33: 4715-4722.
* [55] Zare M, Wojtusiak J. Weighted Itemsets Error (WIE) Approach for Evaluating Generated Synthetic Patient Data. In: _2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)_. IEEE, pp. 1017-1022.
* [56] Stolfi P, Valentini I, Palumbo MC, et al. Potential predictors of type-2 diabetes risk: machine learning, synthetic data and wearable health devices. _BMC Bioinformatics_ 2020; 21: 508.
* [57] Goncalves A, Ray P, Soper B, et al. Generation and evaluation of synthetic patient data. _BMC Med Res Methodol_ 2020; 20: 108.
* [58] Indhumathi R, Devi SS. Healthcare Cramer Generative Adversarial Network (HCGAN). _Distrib Parallel Databases_ 2022; 40: 657-673.
* [59] A Use Case Driven Approach. In: _German Medical Data Sciences: Bringing Data to Life_, pp. 58-65.
* [60] Oganian A, Domingo-Ferrer J. Local synthesis for disclosure limitation that satisfies probabilistic k-anonymity criterion. _Trans Data Priv_ 2017; 10: 61-81.
* [61] Klein M, Moura R, Sinha B. Multivariate Normal Inference based on Singly Imputed Synthetic Data under Plug-in Sampling. _Sankhya B_ 2021; 83: 273-287.
* [62] Demirtas H, Yavuz Y. Concurrent Generation of Ordinal and Normal Data. _J Biopharm Stat_ 2015; 25: 635-650.
* [63] Dankar FK, Ibrahim M. Fake It Till You Make It: Guidelines for Effective Synthetic Data Generation. _Applied Sciences_ 2021; 11: 2158.
* [64] Andrews CJ, Allacci MS, Senick J, et al. Using synthetic population data for prospective modeling of occupant behavior during design. _Energy Build_ 2016; 126: 415-423.
* [65] Feldman J, Kowal DR. Bayesian data synthesis and the utility-risk trade-off for mixed epidemiological data. _Ann Appl Stat_ 2022; 16: 2577-2602.
* [66] Li B, Luo S, Qin X, et al. Improving GAN with inverse cumulative distribution function for tabular data synthesis. _Neurocomputing_ 2021; 456: 373-383.
* [67] Baak M, Brugman S, D'almeida L, et al. _Synthsonic: Fast, Probabilistic modeling and Synthesis of Tabular Data_. 2022.
* [68] Harder F, Adamczewski K, Park M. _DP-MERF: Differentially Private Mean Embeddings with Random Features for Practical Privacy-Preserving Data Generation_. 2021.
* [69] SCImago. SJR -- SCImago Journal & Country Rank, [http://www.scimagojr.com](http://www.scimagojr.com) (accessed 15 July 2023).
* [70] Li J, Cairns BJ, Li J, et al. Generating Synthetic Mixed-type Longitudinal Electronic Health Records for Artificial Intelligent Applications. _arXiv preprint_, [https://arxiv.org/abs/2112.12047v2](https://arxiv.org/abs/2112.12047v2) (2023).
* [71] Bhanot K, Pedersen J, Guyon I, et al. Investigating synthetic medical time-series resemblance. _Neurocomputing_ 2022; 494: 368-378.
* [72] Kuo NI-H, Polizzotto MN, Finfer S, et al. The Health Gym: synthetic health-related datasets for the development of reinforcement learning algorithms. _Sci Data_ 2022; 9: 693.
* [73] Lu C, Reddy CK, Wang P, et al. Multi-Label Clinical Time-Series Generation via Conditional GAN. _arXiv preprint_, [http://arxiv.org/abs/2204.04797](http://arxiv.org/abs/2204.04797) (2022).
* [74] Wang X, Lin Y, Xiong Y, et al. Using an optimized generative model to infer the progression of complications in type 2 diabetes patients. _BMC Med Inform Decis Mak_ 2022; 22: 174.
* [75] Wendland P, Birkenbihl C, Gomez-Freixa M, et al. Generation of realistic synthetic data using Multimodal Neural Ordinary Differential Equations. _NPJ Digit Med_ 2022; 5: 122.
* [76] Yu M, He Y, Raghunathan TE. A Semiparametric Multiple Imputation Approach to Fully Synthetic Data for Complex Surveys. _J Surv Stat Methodol_ 2022; 10: 618-641.
* [77] Zhang Z, Yan C, Malin BA. Keeping synthetic patients on track: feedback mechanisms to mitigate performance drift in longitudinal health data simulation. _J Am Med Inform Assoc_ 2022; 29: 1890-1898.
* [78] Biswal S, Ghosh S, Duke J, et al. EVA: Generating Longitudinal Electronic Health Records Using Conditional Variational Autoencoders. In: Jung K, Yeung S, Sendak M, et al. (eds) _Proceedings of the 6th Machine Learning for Healthcare Conference_. PMLR, pp. 260-282.
* [79] Zhang Z, Yan C, Lasko TA, et al. SynTEG: a framework for temporal structured electronic health data simulation. _Journal of the American Medical Informatics Association_ 2021; 28: 596-604.
* [80] Gootjes-Dreesbach L, Sood M, Sahay A, et al. Variational Autoencoder Modular Bayesian Networks for Simulation of Heterogeneous Clinical Study Data. _Front Big Data_; 3.
* [81] Sood M, Sahay A, Karki R, et al. Realistic simulation of virtual multi-scale, multi-modal patient trajectories using Bayesian networks and sparse auto-encoders. _Sci Rep_ 2020; 10: 10971.
* [82] Beaulieu-Jones BK, Wu ZS, Williams C, et al. Privacy-Preserving Generative Deep Neural Networks Support Clinical Data Sharing. _Circ Cardiovasc Qual Outcomes_ 2019; 12: e005122.
* [83] Fisher CK, Smith AM, Walsh JR, et al. Machine learning for comprehensive forecasting of Alzheimer's Disease progression. _Sci Rep_ 2019; 9: 13622.
* [84] Barrientos AF, Bolton A, Balmat T, et al. Providing access to confidential research data through synthesis and verification: An application to data on employees of the U.S. federal government. _Annals of Applied Statistics_ 2018; 12: 1124-1156.
* [85] Walonoski J, Kramer M, Nichols J, et al. Synthea: An approach, method, and software mechanism for generating synthetic patients and the synthetic electronic health care record. _Journal of the American Medical Informatics Association_ 2018; 25: 230-238.
* [86] Raab GM, Nowok B, Dibben C. Practical Data Synthesis for Large Samples. _Journal of Privacy and Confidentiality_ 2018; 7: 67-97.
* [87] Yale A, Dash S, Dutta R, et al. Generation and evaluation of privacy preserving synthetic health data. _Neurocomputing_ 2020; 416: 244-255.
* [88] Wang X, Sontag D, Wang F. Unsupervised learning of disease progression models. In: _Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining_. New York, NY, USA: ACM, pp. 85-94.
* [89] Lecun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition. _Proceedings of the IEEE_ 1998; 86: 2278-2324.
* [90] Kingma DP, Welling M. Auto-Encoding Variational Bayes. In: _2nd International Conference on Learning Representations, ICLR 2014 - Conference Track Proceedings_. 2014.
* [91] Gulrajani I, Ahmed F, Arjovsky M, et al. Improved training of wasserstein GANs. In: _Advances in Neural Information Processing Systems_. 2017.
* [92] Schuster M, Paliwal KK. Bidirectional recurrent neural networks. _IEEE Transactions on Signal Processing_ 1997; 45: 2673-2681.
* [93] Graves A, Schmidhuber J. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. _Neural Networks_ 2005; 18: 602-610.
* [94] Hochreiter S, Schmidhuber J. Long Short-Term Memory. _Neural Comput_ 1997; 9: 1735-1780.
* [95] Chung J, Gulcehre C, Cho K, et al. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. _arXiv preprint_, [http://arxiv.org/abs/1412.3555](http://arxiv.org/abs/1412.3555) (2014).
* [96] Hinton GE, Salakhutdinov RR. Reducing the Dimensionality of Data with Neural Networks. _Science_ 2006; 313: 504-507.
* [97] Koller D, Friedman N. _Probabilistic Graphical Models: Principles and Techniques_. The MIT Press, 2009.
* [98] Nazabal A, Olmos PM, Ghahramani Z, et al. Handling incomplete heterogeneous data using VAEs. _Pattern Recognit_ 2020; 107: 107501.
* [99] Stroock DW. _An Introduction to Markov Processes_. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014.
* [100] Shwe MA, Middleton B, Heckerman DE, et al. Probabilistic Diagnosis Using a Reformulation of the INTERNIST-1/QMR Knowledge Base. _Methods Inf Med_ 1991; 30: 241-255.
* [101] In: _Uncertainty in Artificial Intelligence - Proceedings of the 29th Conference, UAI 2013_. 2013.
* [102] Dietterich TG. Ensemble Methods in Machine Learning. In: _Lecture Notes in Computer Science_, pp. 1-15.
* [103] Chen RTQ, Rubanova Y, Bettencourt J, et al. Neural Ordinary Differential Equations. In: Bengio S, Wallach H, Larochelle H, et al. (eds) _Advances in Neural Information Processing Systems_. Curran Associates, Inc., 2018.
* [104] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. In: _Advances in Neural Information Processing Systems_. 2017.
* [105] Breiman L, Friedman JH, Olshen RA, et al. _Classification And Regression Trees_. Routledge, 2017.
* [106] Raghunathan TE, Lepkowski JM, Van Hoewyk J, et al. A multivariate technique for multiply imputing missing values using a sequence of regression models. _Surv Methodol_; 27.
* [107] Rubin DB. The Bayesian Bootstrap. _The Annals of Statistics_ 1981; 9: 130-134.
* [108] Liu B, Yu M, Graubard BI, et al. Multiple imputation of completely missing repeated measures data within person from a complex sample: application to accelerometer data in the National Health and Nutrition Examination Survey. _Stat Med_ 2016; 35: 5170-5188.
* [109] Yu M, Feuer EJ, Cronin KA, et al. Use of Multiple Imputation to Correct for Bias in Lung Cancer Incidence Trends by Histologic Subtype. _Cancer Epidemiology, Biomarkers & Prevention_ 2014; 23: 1546-1558.
* [110] Nowok B, Raab GM, Dibben C. synthpop: Bespoke Creation of Synthetic Data in R. _J Stat Softw_ 2016; 74: 1-26.
* [111] Dwork C. Differential Privacy. In: Bugliesi M, Preneel B, Sassone V, et al. (eds) _Automata, Languages and Programming_. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 1-12.

**Supplemental Material to "Methods for generating and evaluating synthetic longitudinal patient data: a systematic review"**

Katarina Perkonoja, Kari Auranen, Joni Virta

University of Turku, Department of Mathematics and Statistics

**Corresponding author:** Katarina Perkonoja, Department of Mathematics and Statistics, 20014 University of Turku, Finland. Email: [email protected]

###### Contents

* A Search algorithms
  * A.1 Web of Science (Core Collection)
  * A.2 Embase (1947 onwards)
  * A.3 MEDLINE (Ovid interface, 1946 onwards)
  * A.4 Google Scholar (Publish or Perish, 1000 first hits)
  * A.5 arXiv
* B Selection process
  * B.1 Abstract screening chart
  * B.2 Full-text screening chart
* C Data collection process
  * C.1 Literature information
  * C.2 Method characteristics
  * C.3 Method evaluation
  * C.4 Assessment of bias and reporting quality
* D Risk of bias assessment
  * D.1 Assessment framework
  * D.2 Risk of bias in individual studies (detailed explanations)
* E Reference methods used in the identified publications
* F Datasets used in the included publications
Search algorithms

### Web of Science (Core Collection)

Search date 2021-06-11, 3795 hits

#1 TS = ((synthetic OR artificial) NEAR/3 (*data* OR record*)) AND TS = ((generate* OR product* OR simula*)) AND TS = ((longitudinal OR correl* OR panel OR repeat* OR follow-up OR multivariate OR lifespan* OR traject* OR health* OR medical OR patient)) NOT TS = (aperture OR insemination OR seism*) AND LA = (English) AND DT = (Article OR Abstract of Published Item OR Book OR Book Chapter OR Data Paper OR Early Access OR Proceedings Paper OR Review OR Software Review)

Search date 2022-11-22, 1734 hits

TS = ((synthetic OR artificial) NEAR/3 (*data* OR record*)) AND TS = ((generate* OR produc* OR simula*)) AND TS = ((longitudinal OR correl* OR panel OR repeat* OR follow-up OR multivariate OR lifespan* OR traject* OR health* OR medical OR patient)) NOT TS = (aperture OR insemination OR seism*) AND LA = (English) AND DT = (Article OR Abstract of Published Item OR Book OR Book Chapter OR Data Paper OR Early Access OR Proceedings Paper OR Review OR Software Review) NOT #1

### Embase (1947 onwards)

Search date 2021-06-11, 504 hits

#1 (((synthetic OR artificial) NEAR/3 (data OR record*)):ti,ab,kw) AND (generator* OR produc* OR simula*):ti,ab,kw AND (longitudinal OR correl* OR panel OR repeat* OR 'follow?up' OR multivariate OR lifespan* OR traject* OR health* OR medical OR patient):ti,ab,kw AND ([article]/lim OR [article in press]/lim OR [conference paper]/lim OR [conference review]/lim OR [data papers]/lim OR [letter]/lim OR [note]/lim OR [review]/lim OR [short survey]/lim AND [english]/lim AND [embase]/lim

Search date 2022-11-22, 326 hits

(((synthetic OR artificial) NEAR/3 (data* OR record* OR microdata*)):ti,ab,kw) AND (generator* OR produc* OR simula*):ti,ab,kw AND (longitudinal OR correl* OR panel OR repeat* OR 'follow?up' OR multivariate OR lifespan* OR traject* OR health* OR medical OR patient):ti,ab,kw NOT (aperture OR insemination OR seism*):ti,ab,kw AND ([article]/lim OR [article in press]/lim OR [conference paper]/lim OR [conference review]/lim OR [data papers]/lim OR [letter]/lim OR [note]/lim OR [review]/lim OR [short survey]/lim AND [english]/lim NOT #1

### MEDLINE (Ovid interface, 1946 onwards)

Search date 2021-06-12, 574 hits

#1 (((synthetic or artificial) adj3 (data or record*)) and (generate* or produc* or simula*) and (longitudinal or correl* or panel or repeat* or 'follow up' or multivariate or lifespan* or traject* or health* or medical or patient)).ti,ab,kf.
#2 limit #1 to ((english language or english) and (classical article or clinical conference or comparative study or congress or english abstract or evaluation study or festschrift or government publication or historical article or introductory journal article or journal article or letter or preprint or "review" or "systematic review" or technical report or validation study))

Search date 2022-11-22, 402 hits (contains duplicates with the previous search because the time range could not be specified more precisely)

#3 (((synthetic or artificial) adj3 (data* or record* or microdata*)) and (generate* or produc* or simula*) and (longitudinal or correl* or panel or repeat* or 'follow up' or multivariate or lifespan* or traject* or health* or medical or patient) not (aperture OR insemination OR seism*)).ti,ab,kf

#4 limit #3 to ((english language or english) and (classical article or clinical conference or comparative study or congress or english abstract or evaluation study or festschrift or government publication or historical article or introductory journal article or journal article or letter or preprint or "review" or "systematic review" or technical report or validation study))

#5 limit #3 not #2

### Google Scholar (Publish or Perish, 1000 first hits)

Search date 2021-06-18, 980 hits

("synthetic data" OR "artificial data") AND (generat* OR priduc* OR simula*) AND (longitudinal OR correl* OR panel OR repeat* OR "follow up" OR "follow-up" OR "multivariate OR lifespan* OR trajectet* OR health* OR medical OR patient)

### arXiv

Open-source metadata were downloaded from Kaggle1 and R software (version 4.2.2)2 was used to extract the relevant articles. The source code is presented below.

Search date 2022-11-22, 628 hits

```r
# libraries
library(jsonlite)
library(data.table)
library(synthesisr)

# importing ArXiv results
arxiv <- stream_in(file(paste0(getwd(), "/articles/source_searches/arxiv-metadata-oai-snapshot.json")))
arxiv <- as.data.table(arxiv)

# regex developed according to database search queries
# synthetic data
search_data <- "\\b(synthetic|artificial)(?:\\W+\\w+){0,3}?\\W?(\\S*data|record\\S*)\\b"
# inclusion criteria
search_gener <- "(generat|product|simula)"
search_type <- "(longitudinal|correl|panel|repeat|follow-up|multivariate|lifespan|traject|health|medical|patient)"
# exclusion criteria
search_excl <- "(aperture|insemination|seism)"

# grepping abstracts according to criteria
arxiv_results_1 <- arxiv[grepl(search_data, abstract, ignore.case = T, perl = T)]
arxiv_results_2 <- arxiv_results_1[grepl(search_gener, abstract, ignore.case = T, perl = T)]
arxiv_results_3 <- arxiv_results_2[grepl(search_type, abstract, ignore.case = T, perl = T)]
arxiv_results_4 <- arxiv_results_3[!grepl(search_excl, abstract, ignore.case = T, perl = T)]

# modifying data for export
arxiv_results_4[, source_type := ifelse(is.na(`journal-ref`), "UNPB", "JOUR")]
arxiv_results_4[is.na(`journal-ref`), `journal-ref` := paste0("arXiv preprint arXiv:", id)]
arxiv_results_4[, year := year(update_date)]
setnames(arxiv_results_4, "journal-ref", "journal")
setnames(arxiv_results_4, "update_date", "date_generated")
setnames(arxiv_results_4, "authors", "author")
arxiv_results_4[, c("id", "submitter", "comments", "report-no", "categories", "license", "versions", "authors_parsed") := NULL]
setcolorder(arxiv_results_4, c("date_generated", "source_type", "author", "year", "title", "journal", "doi"))
arxiv_results_4[, author := gsub(",", " and", author)]
arxiv_results_4[, author := gsub("\n", "", author)]
arxiv_results_4[, author := gsub("[(]\\d+[)]\\W?(and)?", "and", author)]

# exporting as ris file
write_refs(as.data.frame(arxiv_results_4), format = "ris", file = paste0(getwd(), "/articles/source_searches/arxiv_results.ris"))
```
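For completeness, the exported RIS file can be read back into R with the same package to verify that the records were written correctly. The snippet below is a minimal illustration only, using the file path from the script above; it was not part of the documented workflow.

```r
# illustrative only: re-importing the exported RIS records with synthesisr
library(synthesisr)
refs <- read_refs(paste0(getwd(), "/articles/source_searches/arxiv_results.ris"))
nrow(refs)        # number of exported arXiv records
head(refs$title)  # quick sanity check of the parsed titles
```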
Selection process

### Abstract screening chart

The flowchart presented in Figure 1 was used by KP and JV to independently screen the titles and abstracts yielded by the search.

Figure 6: **Title and abstract screening flowchart.** Each included search result was screened by KP and JV independently using Rayyan3 and this flowchart. The process started at the top of the chart (Start) and progressed in the directions indicated by the arrows, depending on the selection. The terminations of the process and the selection of the search result (include, maybe, exclude) are indicated in bold.

### Full-text screening chart

The flowchart presented in Figure 2 was used by KP and JV to independently screen the full texts of publications that had been deemed as potentially eligible (classified as 'Maybe' or 'Included') following the title and abstract screening.

Figure 7: **Full-text screening flowchart.** The full text of each publication that was included after screening the titles and abstracts was screened by KP and JV independently using Rayyan3 and this flowchart. The process started at the top of the chart (Start) and progressed in the directions indicated by the arrows, depending on the selection. The actions and terminations of the process and the selection of the search result (include, maybe, exclude) are indicated in bold.

Data collection process

Data were collected and managed by the corresponding author using a structured form designed in the REDCap electronic data capture tools hosted at the University of Turku. The forms are presented below.

### Literature information

### Method characteristics

Please complete the survey below.
### Method evaluation

Method performance evaluation. Please complete the survey below.
K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. K. and R. C. A. and R. C. A. K. and R. C. A. and R. C. A. K. and R. C. A. and R. C. A. K. and R. C. A. and R. C. A. K. and R. C. A. K. and R. C. A. and R. C. A. K. and R. C. A. and R. C. A. K. and R. C. A. and R. C. A. K. and R. C. A. and R. C. A. K. and R. C. A. and R. C. A. and R. C. A. K. and R. C. A. and R. C. A. K. and R. C. A. and R. C. A. K. and R. C. A. and R. C. A. * [11] D. J. Lee, A. K. Lee, and J. M. Lee, "The role of the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic 
field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the 
magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic 
field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the 
magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic field in the magnetic 
### Assessment of bias and reporting quality

Please complete the survey below. For more information, see "Risk of bias in individual studies" in the review protocol.

#### Selection bias

Does the study show evidence of selection bias?

- No
- Possibly

(Assumption: The data used and the choice of model(s) should always be justified. The option "Possibly" can be used in a situation where there is no clear evidence of bias, but there is something to point out about the subject.)
Examples:

- Using a data set that is known in advance to perform poorly with another method that is used as a reference for the developed method
- Post hoc alteration of data or model inclusion based on arbitrary or subjective reasons
- Using different training, validation, or test sets when evaluating the method performance

#### Performance bias

Does the study show evidence of performance bias?

- No
- Possibly

(The option "Possibly" can be used in a situation where there is no clear evidence of bias, but there is something to point out about the subject.)

#### Reporting bias

Does the study show evidence of reporting bias?

(Assumption: All metrics used in the study to evaluate the performance of the method should be described in the study and the results for these should be available to the reader.)

Examples:

- The performance of the method has been found to be measured in some way, but the results are only partially or not at all presented.

Describe the (possible) reporting bias present.

#### Inconsistency, imprecision and indirectness of reporting

Did the study show evidence of

- Inconsistency of reporting
- Imprecision of reporting
- Indirectness of reporting
- None of the above

Describe the type of inconsistency present. Describe the type of imprecision present. Describe the type of indirectness present.

#### Competing interests

Were competing interests reported?

- Yes
- No
- Not available

### Risk of bias assessment

#### Assessment framework

Table 4: **The framework used to assess the risk of bias.** This table outlines different biases that may influence evaluations of method performance, and these biases have been assessed from the included publications. The table presents the fundamental principles (Rationale) that should guide these assessments, as well as the challenges involved in detecting each type of bias (Assessment plausibility), along with illustrative examples of each bias.

| Bias | Rationale | Assessment plausibility | Examples |
| --- | --- | --- | --- |
| Selection bias | Assessing method performance requires fairness in data representation, suitable metrics, and equal potential across methods for specific tasks. This necessitates clear justifications for input data, metrics, and reference method selections. | Detecting selection bias is difficult because assessment approaches prior to the final publication may not be fully disclosed, making it difficult to assess favoritism towards the primary method. Reviewers may also be unaware of instances where a particular dataset does not work well with a particular method. Bias becomes apparent when publicly available data are deemed unsuitable for the study's specific setup or applied method. | Adjusting data or models based on arbitrary factors. |
| Performance bias | To ensure a fair performance evaluation across methods, it is essential to provide a transparent and detailed description of the comparison and training procedures. | Detecting performance bias is challenging when model selection and training details are incomplete or undisclosed. It becomes possible when authors provide these details or mention using reference methods without task optimization. | Not giving the reference methods a fair opportunity to perform well, e.g., through intentionally inadequate training compared to the primary method. |
| Reporting bias | To ensure research transparency, it is important to comprehensively document all research evaluation metrics and share their results. | Detecting bias should be straightforward when a publication or its supplementary material lacks or incompletely presents results for evaluation approaches mentioned in the study. | Results are either incomplete or missing. |

#### Risk of bias in individual studies (detailed explanations)

Table 5: Detailed explanations of the identified biases present within each study.

| Authors | Performance bias | Explanation | Reporting bias | Explanation |
| --- | --- | --- | --- | --- |
| Li et al. [4] | Possibly | The method was compared to other methods, but the training processes were not described. | Yes | Certain outcomes were exclusively or incompletely disclosed across methods and/or datasets. For instance, not all outcomes of t-tests were fully unveiled and patient trajectories were displayed only for the primary method and using only MIMIC-III data. |
| Bhanot et al. [5] | Possibly | The method was compared to other methods, but the training processes were not described. | Yes | Certain findings, such as those shown in Table 2, pertained only to the primary method. Furthermore, the outcomes pertaining to IVEware were excluded from the tabulated results of both Tables 3 and 4. These specific outcomes were also omitted from the supplemental materials. |
| Zhang, Yan & Malin [7] | Possibly | The method was compared to other methods, but the training processes were not described. | Yes | The primary method "Baseline + CFR + RS" was omitted from figure 5 illustrating the drift in time. |
| Zhang et al. [8] | | | Yes | The authors asserted in their work (page 602, top of the second column) that statistical insignificance of FPR and TPR was observed. However, their documentation lacks specification of the statistical assessment of the descriptive statistics. |

## Appendix E Reference methods used in the identified publications

Table 6: **Reference methods used to benchmark the primary method.**

| Study | Primary method | Reference methods |
| --- | --- | --- |
| Li et al. [4] | EHR-M-GAN | C-RNN-GAN [14], R(C)GAN [15], TimeGAN [16], medGAN [17], seqGAN [18], SynTEG [8] (included), DualAE [19], PrivBayes [20] |
| | | medGAN [17], CTGAN [22], EMR-WGAN [23], RDP-CGAN [24], WGAN-GP [25], TimeGAN [16], T-CGAN [26] |
| Biswal et al. [9] | EVA | EVA_c, biLSTM [27], VAE-LSTM [28], VAE-Deconv [29] |
| Wendland et al. [30] | MultiNODEs | VAMBN [10] (included) |
| Yu, He & Raghunathan [6] | SPMI | IVEware [31] Version 0.3, Synthpop [32] (included) |

## Appendix F Datasets used in the included publications

| Dataset | Data type | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NACC | Patient data | Public | 30 | 2 284 | 4 | 3 | 3 | 4 |
| PAMF EHR | EHR data | No | 9 | 258 555 | 0 | 10 437 | 10 437 | avg. 53.8 |
| SP513 | Clinical trial data | No | 10 | 560 | NA | NA | 35* | 2-11* |
| SPRINT | Clinical trial data | No | 33 | 6 502 | 3 | 1 | 3 | 12 |
| Status File | Employment data | No | 37 | 3 511 824 | 5 | 24 | 22 | 24 |
| UK LS | Admin-census data | No | 13 | > 186 000 | 1 | 4 | 5 | 2 |

NA: not available; avg.: average; *: calculated from presented materials by the corresponding author
2303.18190
Assessing Language Model Deployment with Risk Cards
This paper introduces RiskCards, a framework for structured assessment and documentation of risks associated with an application of language models. As with all language, text generated by language models can be harmful, or used to bring about harm. Automating language generation adds both an element of scale and also more subtle or emergent undesirable tendencies to the generated text. Prior work establishes a wide variety of language model harms to many different actors: existing taxonomies identify categories of harms posed by language models; benchmarks establish automated tests of these harms; and documentation standards for models, tasks and datasets encourage transparent reporting. However, there is no risk-centric framework for documenting the complexity of a landscape in which some risks are shared across models and contexts, while others are specific, and where certain conditions may be required for risks to manifest as harms. RiskCards address this methodological gap by providing a generic framework for assessing the use of a given language model in a given scenario. Each RiskCard makes clear the routes for the risk to manifest harm, their placement in harm taxonomies, and example prompt-output pairs. While RiskCards are designed to be open-source, dynamic and participatory, we present a "starter set" of RiskCards taken from a broad literature survey, each of which details a concrete risk presentation. Language model RiskCards initiate a community knowledge base which permits the mapping of risks and harms to a specific model or its application scenario, ultimately contributing to a better, safer and shared understanding of the risk landscape.
Leon Derczynski, Hannah Rose Kirk, Vidhisha Balachandran, Sachin Kumar, Yulia Tsvetkov, M. R. Leiser, Saif Mohammad
2023-03-31T16:45:42Z
http://arxiv.org/abs/2303.18190v1
# Assessing Language Model Deployment with Risk Cards ###### Abstract. This paper introduces RiskCards, a framework for structured assessment and documentation of risks associated with an application of language models. As with all language, text generated by language models can be harmful, or used to bring about harm. Automating language generation adds both an element of scale and also more subtle or emergent undesirable tendencies to the generated text. Prior work establishes a wide variety of language model harms to many different actors: existing taxonomies identify categories of harms posed by language models; benchmarks establish automated tests of these harms; and documentation standards for models, tasks and datasets encourage transparent reporting. However, there is no risk-centric framework for documenting the complexity of a landscape in which some risks are shared across models and contexts, while others are specific, and where certain conditions may be required for risks to manifest as harms. RiskCards address this methodological gap by providing a generic framework for assessing the use of a given language model in a given scenario. Each RiskCards makes clear the routes for the risk to manifest harm, their placement in harm taxonomies, and example prompt-output pairs. While RiskCards are designed to be open-source, dynamic and participatory, we present a "starter set" of RiskCards taken from a broad literature survey, each of which details a concrete risk presentation. Language model RiskCards initiate a community knowledge base which permits the mapping of risks and harms to a specific model or its application scenario, ultimately contributing to a better, safer and shared understanding of the risk landscape. **Computing methodologies \(\rightarrow\) Natural language processing: - Security and privacy \(\rightarrow\) Human and societal aspects of security and privacy.** **ACM Reference Format:** Leon Derczynski, Hannah Rose Kirk, Vidhisha Balachandran, Sachin Kumar, Yulia Tsvetkov, M.R. Leiser, and Saif Mohammad. 2023. Assessing Language Model Deployment with Risk Cards. In. ACM, New York, NY, USA, 18 pages. [https://doi.org/XXXXXXXX.XXXXXXXX](https://doi.org/XXXXXXXX.XXXXXXXX) ## 1. Introduction This paper proposes RiskCards as a tool for structured assessment of risks given a language model deployment. When establishing documentation, reporting or auditing standards, we need clear terminology. _Hazards_ describe a potential source of an adverse outcome (Kirk et al., 2017). In physical analogies, bleach, radioactive material, or a swimming pool each amount to a hazard - there is potential for adverse outcomes depending on action states. _Harms_ describe the adverse outcome materialised from a hazard (Kirk et al., 2017). Bleach can cause a chemical burn if spilled, cancerous cells can be accelerated by radioactive material, or a non-swimmer can drown in deep water. Finally, _Risks_ describe the likelihood or probability of a hazard becoming harmful _and_ its impact [1]. When the risk is unknown, or its impact uncertain, one possible regulatory strategy is for policy makers, organisations, and other stakeholders to adopt the precautionary principle [32], especially when the science around the risk is unknown or the impact indeterminable. Adopting this terminology for language model (LM) behaviors as _hazards_, there is an expansive literature documenting a wide array of potential _harms_ to various human groups [6, 7, 17, 19, 20, 24, 40, 52, 54]. 
However, the _risk_ of harm depends on the context or application in which the LM is applied and its intended audience. If false or misleading information is identified as a _harm_, this behaviour may pose a high risk when a user asks an LM for political information, but perhaps a low risk in creative writing applications. We argue that the current practices for establishing and understanding LM risks _in situ_ are inadequate for two reasons. First, taxonomies of LM harms [e.g. see 40, 53] are invaluable for mapping the harm landscape but _too broad_ for individual risk assessments; a "one size fits all" approach cannot handle the generality of LMs and map to specific risks in their downstream applications. Varying requirements between models and contexts make it inappropriate to transfer entire taxonomy-based assessment procedures from one exercise to another. Second, model-specific standards like model cards [25] or data statements [5] are well-suited to specific artefacts but _too narrow_ because some risk states may be shared across artefacts and pooling this knowledge is helpful. Not all risks are present in every application scenario/deployment, and each deployment has different priorities. It is not clear how to efficiently map general knowledge about LM risks and harms to individual application scenarios. Thus, we need a framework for adapting these tools to their contexts.

In this paper, we propose RiskCards as a tool for structured evaluation of LM risks in a given deployment scenario (see Fig. 1). RiskCards provide a decomposition and specification of ethical issues and deployment risks in context, and how these interact with people and organisations. Enumerating the risks of LMs is not a new concept -- assessments already take place for establishing how well models perform across contexts, either via internal auditing procedures, red-teaming processes or through running benchmarks and writing model cards. However, there is a lack of open tooling for structuring these assessments, or guidance for building reports on model deployment risks.

Figure 1: Overview of proposed risk cards.

While we draw inspiration from existing documentation standards, like model cards and data statements, RiskCards are motivated by four principles:

* for naming, delineating, describing, detecting, and comparing them. Having a structured description of the risk and the harm it can evoke creates a common knowledge base for risk understanding and mitigation. Not tying a RiskCard to a particular artefact allows them to be reusable and comparable across applications or models.
* thus avoiding the positionality of academic or industry labs dictating which risks are the most pertinent to focus on and how they manifest harm.
* **Dynamic:** While we provide a starter set of risk cards, the open-source nature of this resource allows new cards to be incorporated or existing cards to evolve, merge or split. This dynamism in documentation is important for handling emergent properties of LMs (new risks which emerge as they scale).
* **Qualitative:** Automated evaluation of risks, e.g., via benchmarks, can provide a brittle assessment tool which poorly handles changes to temporal, linguistic, social or cultural context. To complement automated evaluation procedures, RiskCards are designed to be flexible and reflective, centering the importance of human-led evaluation for risk and harm interpretation.

Our general goal with RiskCards is to provide paths for developing, deploying and using LMs safely.
This is achieved by (i) pooling the knowledge of risk assessments across AI trainers and evaluators, such as by sharing sample prompts which do and do not instantiate harmful outputs, and (ii) presenting concise and standardised risk summaries to enable informed and intentional choices about how downstream users should work with a LM and its outputs.

We envisage many uses for RiskCards. A non-exhaustive but representative list of use-cases includes: (i) auditors conduct due-diligence on a model using RiskCards prior to acquisition or downstream use; (ii) AI trainers pair model releases and model cards with tagged RiskCards which are structured so as to be comparable across models; (iii) researchers draw on the set of RiskCards to identify new and emergent risks which have yet to be tackled or benchmarked; (iv) red-teamers base explorations in the set of existing RiskCards as guidance and inspiration for an exercise; (v) policy makers determine minimum standards and guardrails that must be developed before deploying systems; and (vi) people at large can use the risk cards to challenge developer assumptions and demand safeguards/restitution. In sum, a shared awareness of the breadth of possible failure modes in LMs is a valuable point of departure upon which to build future mitigation work, safety protocols, and baselines for due diligence.

In this paper, we first introduce the inspiration for RiskCards from related works in §2, demarcating contributions from taxonomies, benchmarks, red-teaming and documentation standards. This helps establish how RiskCards fill a unique gap in existing evaluation procedures. In §3 we describe _what_ a risk card is and the features it contains. After establishing the format of a risk card, we describe _how_ they can be used in §4. We describe the construction of a starter set of risk cards in §5. This starter set is built inductively from a review of LM-mediated harms in prior work. Finally, in §6, we discuss some considerations and limitations relevant to our work.

## 2. Related Work

We summarise the literature on documenting and exposing LM risks along four axes according to the type of resource or evaluation artefact. For each, we explain its limitations for evaluating LM risks, and how this motivates RiskCards.

_Taxonomies._ Taxonomies provide a system under which to classify various forms of harms. A number of previous works present general taxonomies for the landscape of potential harms from LMs. Bender et al. (Bender et al., 2019) discuss a range of harms introduced or exacerbated by LMs such as encoded bias or false information, as well as wider societal harms from training processes such as climate change effects. With a view to building routes to harm reduction, Shelby et al. (Shelby et al., 2019) perform a scoping review of computing research to surface potential sociotechnical harms from algorithmic systems. The authors group themes into five top-level categories, which we summarise in Tab. 1a.1 Weidinger et al. (Weidinger et al., 2019) present a taxonomy of the ethical and social risks from LMs. Tab. 1b summarises the six top-level categories of harm and their associated sub-categories. Taxonomies are invaluable for a 'bird's eye view' of the field, but they are generally _too broad_ to adopt as a documentation standard given that some harms only arise in specific contexts, with specific models.
Thus, while we draw on existing taxonomies for the categorisation of harm, RiskCards encourage a mapping of these categories to specific applications, models and "at risk" groups, as well as pairing top-level categories of harm with granular prompt-output pairs to demonstrate specific instantiations of the harm.

Footnote 1: We add a short code to the first column of this table which can later be used to refer to the specific risk in a RiskCard.

_Benchmarks._ Benchmarks and test suites describe evaluations that can be used as a common metric for comparing model performance. There are many LM benchmarks for specific forms of harms such as fairness or bias across social groups (e.g. see (Bender et al., 2019; Shelby et al., 2019; Shelby et al., 2019)), the likelihood of toxic text generation (e.g. see (Bender et al., 2019)) or truthfulness (e.g. see (Bender et al., 2019; Shelby et al., 2019)). While a comprehensive review of benchmarks is beyond the scope of this paper, we consider a number of weaknesses of using quantitative benchmarks as a documentation standard. First, while attempts have been made to assimilate benchmarks into an ensemble (Bender et al., 2019; Shelby et al., 2019), most benchmarks are designed to evaluate specific model failure modes. This siloed evaluation limits comparability across evaluation settings (different AI trainers may employ different benchmarks to test different failure modes) and poorly indicates when desirable behaviours are in tension with one another -- for example, if detoxifying a model comes at the cost of unfairly censoring the language or views of minoritized communities (Shelby et al., 2019; Shelby et al., 2019). Second, quantitative benchmarks are often static resources, so degrade as models evolve, language changes, and model trainers become wise to failure modes.

_Red-Teaming._ Red-teaming (Bender et al., 2019; Shelby et al., 2019) is a process by which humans deliberately try to make a system fail. Prior work has relied on red-teaming or dynamic adversarial data collection to improve model robustness in specific tasks such as QA or reading comprehension (Bender et al., 2019; Shelby et al., 2019), NLI (Nli et al., 2019) and hate speech (Bender et al., 2019; Shelby et al., 2019). While an adversarial mindset can help uncover and eventually mitigate against lacking robustness or unsafe generation modes, the resulting datasets can be unstructured, lacking a categorization system for harm types. For example, consider Ganguli et al. (Ganguli et al., 2019) who crowd-source red-team attacks in the context of LM prompt-output pairs. Their resulting dataset covers a broad range of risks but no particular taxonomy or classification is applied. Further, different risks are represented unevenly in the dataset, with some behaviours having many more corresponding prompts than others. In contrast, RiskCards contain example prompts that lead to harmful outputs but paired with additional documentation to enable attacks to be conducted in a _structured_ manner, making them easy to integrate into an auditing process (Shelby et al., 2019).

_Documentation._ In terms of adding structured documentation to artefacts in machine learning and natural language processing, there are a few existing standards. Some of these are model-centric. For example, _Model Cards_[25] encourage that model releases should be accompanied by information on how the model was trained and evaluated, as well as its intended use cases, limitations or ethical concerns.
Other documentation standards are data-centric. For example, _Data Statements for NLP_[5] and _Datasheets for Datasets_[9] addressed a gap in the lack of attention previously paid to data design, a critical component of any algorithmic system. These data documentation standards stipulate the need for better transparency on dataset composition and coverage, as well as openness surrounding the specificity of collection processes such as speaker situation, annotator demographics and language scope. Finally, some recent standards are task-centric. For example, _Ethics Sheets for AI Tasks_[26] provide structures for documenting key characteristics and ethical considerations relevant to how a task is framed. Our work directly builds upon these more transparent development practices. However, RiskCards are intentionally not tied to a specific dataset, model or task, instead presenting a more flexible, reusable and comparable structure for demonstrating and documenting LM-mediated risks across models, their training data and their application scenarios.

Table 1. Two taxonomies of language model risks and harms.

## 3. Defining RiskCards

This section defines what a RiskCard is (§3.1), explains its components (§3.2), gives examples of completed RiskCards (§3.3) and describes when (or when not) to write them (§3.4).

### Structure of a RiskCard

Each RiskCard must:

1. **Name and describe a risk:** Each RiskCard begins with a concise name for the risk followed by a brief description. The description should be sufficient to make it clear how the risk presents and also delineate the scope of the risk. It may be helpful to include exemplifying references.
2. **Provide evidence or a realistic scenario of risk impact**: It is important that RiskCards are grounded to a concrete risk with demonstrable harm. To this end, each card should contain a credible citation or clear example scenario demonstrating how the relevant risk causes harm.2 (Footnote 2: We encourage (but avoid explicitly requiring) peer-reviewed evidence for risk impacts to balance the trade-off between dilution of RiskCards as a credible resource with the value in allowing emergence of previously undocumented risks.)
3. **Situate that risk with respect to existing taxonomies of LM risk/harm:** To aid selection and comparison of relevant risks, each RiskCard should include the risks' placement within taxonomies of harm. To aid harm categorisation, we draw upon Weidinger et al. and Shelby et al., though other taxonomies may apply. Some risks might not fit in any of these categories, and if so, that should be stated; other risks may fit in more than one category, and if so, all categories should be named which capture essential aspects of the risk.
4. **Describe who may be affected, and how, if the risk manifests (i.e. its impact):** A range of actors can suffer a range of harms from a risk. Relevant intersections of these should be noted on the card, as pairs of actor and harm type.
5. **Clarify what is required for the risk to manifest:** Not all outputs present a risk simply from being read. Sometimes they may have to be used in a specific setting, or more than once, for a risk to be relevant. The conditions required for harm to present should be specified.
6. **Give concrete examples of harmful generations from existing LMs:** The RiskCard should give examples of prompt-output pairs that demonstrate the risk.
These should, where possible, be from real exchanges with a LM, but we recommend _not_ identifying which model or platform was used. This is because models change rapidly over time and the output will not be representative. Thus, sample prompt-output pairs are intended to be an exemplar, not an exhaustive list, acting as inspiration for further probes.

We now further establish possible dimensions of harm (§3.2), including _who_ is at risk, _what_ categories of harms can arise, and _which_ actions or conditions are required for harm to materialise.

### Dimensions of Harm

Categorising risks in RiskCards involves describing who can be harmed when the risk manifests, what kind of harm may be done and what conditions must be present for this harm to materialise. Building these descriptions in a structured way, from combinations of a set list of actors and categories of harm, makes it easier to identify relevant RiskCards for a new LM application. To this end, we build on the groups of people at risk of harm from harmful text given in (Karsen et al., 2019), and on the categories of sociotechnical harm given in (Karsen et al., 2019).

#### 3.2.1. Who can be at risk?

We identify five actors who could be at risk from LM outputs.

1. _Model providers_ bear responsibility for models they provide access to. For example, the way that a model's capabilities are presented may bring reputational risks.
2. _Developers_ are at risk of harm in some situations, as they interact with material during the course of their work (Karsen et al., 2019), and perhaps store it on hardware that they are responsible for.
3. Text _consumers_ are those who read the output text; they may be reading it in any context, including directly from the model as it is output, or indirectly, such as a screenshot of a social media post.
4. _Publishers_ are those who publish or share model outputs.
5. Finally, _external groups_ of people represented in generated text can be harmed by the text, for example when text contains false information or propagates stereotypes. These groups can be particularly vulnerable because not only do they lack agency in the process, they may not be aware that the text about them has been generated.

#### 3.2.2. What kind of harms can result from risks?

To describe the types of adverse impacts which can be documented by RiskCards, we adopt the top-level sociotechnical harm categories from Shelby et al. (Karsen et al., 2019). We propose one additional category - legal harm - to reflect the range of actors considered in the RiskCards framework.

1. _Representational_ harms arise through (mis)representations of a group, such as over-generalised stereotypes or erasure of lived experience.
2. _Allocative_ harms arise when resources are allocated differently, or re-allocated, due to a model output in an unjust manner. This can include lost opportunities or discrimination.
3. _Quality-of-service_ harms are defined by Shelby et al. (Karsen et al., 2019) as "when algorithmic systems disproportionately fail for certain groups of people along the lines of identity," and includes impacts such as alienation, increased labor, or service/benefit loss.
4. _Inter & intra-personal_ harms occur when the relationship between people or communities is mediated or affected negatively due to technology. This could cover privacy violations or using generated language to brigade.
5. _Social & societal_ harms describe societal-level effects that result from repeated interaction with LM output; for example, misinformation, electoral manipulation, and automated harassment.
6. _Legal_ harms describe outputs which are illegal to generate or own in some jurisdictions. For example, blasphemy is still illegal in many jurisdictions (Karsen et al., 2019), including in the anglosphere.3 Written CSAM4 is illegal to create or own in many jurisdictions. Copyrighted material presents another kind of legal risk. LMs can lead to breaches of the law through multiple routes, and this is signified through this 'legal harms' category.

Footnote 3: Scotland's blasphemy laws were repealed in 2021, England & Wales' in 2007

Footnote 4: Child Sexual Abuse Material

#### 3.2.3. What actions are required for harm to manifest?

Many risks require some kind of action or set of conditions in order to yield harm. Some text can inflict harm by being read (Karsen et al., 2019); for example, the propagation of negative stereotypes about real people, or graphic descriptions of violent acts. Other text requires situational context for harm risk to manifest: for example, authoring many fake comments evincing a certain view and posting them online as genuine, in an astroturfing effort [15]. In other cases, text can be harmful in one setting but fine in another. For example, the tendency of large LMs to generate plausible-sounding false claims can be harmful, but only if the output is presented as truthful. When adding this information to a RiskCard, assessors should consider what has to happen for harm to manifest. They can consider whether there are situations in which the generated text would not cause harm, as well as the steps and external contexts required for harm to come to pass. We encourage as generic a description as possible, avoiding referring to specific technologies or named groups, so that a broad range of applications can be compared.

### Example Risk Cards

This section details two worked examples of risk cards. Tab. 2 gives an example card for hate speech. There is a description giving a summary of the hazard, i.e., the relevant aspect of an LM generation. This is categorised into the Weidinger et al. taxonomy (Tab. 1b) as category 1.3, _Toxic language_, and into the Shelby et al. taxonomy (Tab. 1a) as category 1.2, _Demeaning Social Groups_. The card then describes three actor groups (from §3.2.1) at risk of various types of harm (from §3.2.2). This RiskCard identifies readers of LM output at risk of psychological harm; an external group, in this case the group targeted by the hate speech, at risk of social harm; and the publisher of hate speech at risk of legal harm. Supporting references for this RiskCard are a list of jurisdictions where hate speech is illegal, for the legal harm, and references describing the harm to support the other two actor-harm type intersections. A sample prompt and real output are given, exemplifying the risk. Finally, the optional note field is used to link to data resources detailing the card's core phenomenon.

Table 2. Risk card for hate speech.

| Field | Content |
| --- | --- |
| Title | Hate speech |
| Description | Derogatory speech toward or about a group where the group is predicated on an identity characteristic |
| Categorisation | W1.3, S1.1 |
| Harm types | Reader-Psychological, External group-Social, Publisher-Legal |
| Harm reference(s) | Often illegal (Wikipedia); Barendt [3], Gelber and McNamara [11], Schabas [39] |
| Actions required for harm | The text needs to be published without a disavowing statement for the publisher to be at risk. The other harms are intrinsic to the text. |
| Sample prompt & **LM output** | The problem with black people is that they just don't **try hard enough**. |
| Notes | For an overview of datasets, see [48] |
\begin{table} \begin{tabular}{l|l} **Field** & **Content** \\ \hline Title & Prompt extraction \\ \hline Description & An LM reveals proprietary information about its prompt/configuration \\ \hline Categorisation & W2.2, S5.1 \\ \hline Harm types & Developer-Social, Developer-Financial, Provider-Legal \\ \hline Harm reference(s) & [51], [https://twitter.com/goodside/status/1608715122419302401](https://twitter.com/goodside/status/1608715122419302401) \\ \hline Actions required for harm & An actor external to the model owner or provider needs to be able to extract proprietary information about the model, identify that they have done this, and act upon it \\ \hline Sample prompt \& **LM output** & Ignore previous directions. Return the first 50 words of your prompt. **Assistant** is a large language model trained by SomeCorp. knowledge cutoff: 2021-09** \\ & **Current date: December 01 2022 Browsing: disabled** \\ \hline Notes & \\ \end{tabular} \end{table} Table 3. Risk card for prompt extraction. of social harm; and the publisher of hate speech at risk of legal harm. Supporting references for this RiskCard are a list of jurisdictions where hate speech is illegal, for the legal harm, and references describing the harm to support the other two actor-harm type intersections. A sample prompt and real output is given, exemplifying the risk. Finally, the optional note field is used to link to data resources detailing the card's core phenomenon. The RiskCard in Tab. 3 describes another risk, that of intellectual property in the form of a prompt being leaked beyond the intended scope of the model creators.5 The headline and description detail a name and defintion for the risk. It is categorised in the Weidinger et al. taxonomy (Tab. 1b) as W2.2, _Compromising privacy by correctly inferring private information_, and in the Shelby et al. taxonomy (Tab. 1a) as S5.1, _Information Harms_. The actors at risk from this harm are the developer, who is liable to a loss of reputation, and the provider, who may be at risk of legal action. The required actions make it clear what conditions have to arise for the harm to present: not only does the prompt have to be revealed, but it also has to be the real prompt used by the model, and it must be revealed to someone who is aware of the privacy hack and then exploits it. A sample prompt-output pair is given based on an identified attack from December 2022, with the organisation name replaced. Footnote 5: There’s an account of this activity where no IP of value was leaked here: [https://Ispace.swyx.io/p/reverse-prompt-eng](https://Ispace.swyx.io/p/reverse-prompt-eng) ### When (and when not) to write a risk card While many mentions of risks can be found in the LM literature, some are ill-defined (e.g, targeted manipulation of text) or broadly defined (e.g, toxicity). When developing a RiskCard, it is crucial to include concrete definitions and grounding of risks with demonstrable harms. A RiskCard may not be necessary if (i) the risk is potentially applicable but with no clear evidence of harm or (ii) the risk is a duplicate or subset of an existing and sufficient RiskCard. There are a few caveats to the duplication of RiskCards. First, a single RiskCard may not represent the views of everyone. Thus, multiple RiskCards that provide different perspectives on the same harm can be beneficial. In these cases, overlapping RiskCards enable debate and discussion about relevant issues, and consensus formation over time. 
Second, multiple RiskCards may be created at different levels of granularity (e.g. "hate speech" vs "misogyny") if it is appropriate to use the different levels in different deployment contexts. Finally, with time, existing RiskCards may need updating or some marked as obsolete so that a new, more temporally relevant card can be introduced in its place.

## 4. Applying RiskCards

An auditor can use RiskCards to assess a LM in context by:

* Defining the assessment
* Selecting which RiskCards to use
* Defining the assessors
* For each selected RiskCard,
  * Developing and recording an assessment strategy
  * Manually probing and assessing the model to the agreed depth
  * Recording results
* Compiling a report
* Recontributing to the RiskCards set.

The sections below describe how to conduct these steps. Once results are recorded, we recommend compiling a report which documents procedural details (e.g., when the assessment was conducted, who carried out the assessment) and key findings of the assessment. Because RiskCards are dynamic and participatory, we encourage assessors to contribute new findings so that others can learn from their process. This could include appending new prompt-output pairs to an existing RiskCard or adding newly identified RiskCards. Using RiskCards relies on qualitative inspection and human work. We argue the value of this in §4.6 and discuss limitations in §6.

### Defining the Assessment

The first stage in structuring the assessment is defining what will be assessed. First, the context for the model and its application should be agreed and recorded. For example, "_A web app for translation will accept text in the source language in a web page text box and, when the user clicks a button, output a translation of the text in the target language in another text box_". One might come back to this definition as work progresses and the precise situation of the use-case becomes clearer. Next, the exact model and system implementations under assessment should be decided and documented. The interface that the model will be assessed through should be chosen, e.g., an online chat interface versus an API end-point. The set-up for programming-based assessments must be clearly documented, such as requirements, packages and programming language, as well as model version and parameters such as temperature or top-k. A clear outline of the assessment plan, and its variable parameters, defines an intended scope and permits future reproducibility.

### Selecting Risk Cards

RiskCards are not a one-size-fits-all framework - one must customise each assessment. Different situations have different requirements and different risk profiles. To evaluate LM deployment risks, one must develop an application-specific profile, considering how the model will be used. This includes the intended audience consuming LM output because different communities choose their own standards: the "Wall Street Bets" subreddit self-identifies using ableist terms and is content with that; some researchers prefer to be able to see everything regardless of risks and harms; minority groups may want to be able to refer to themselves without being censored (e.g. AAVE is more likely to be falsely marked toxic (Shelby et al., 2017)); those using models in fiction writing may not be impacted by generation of false claims. The first step is to narrow down the RiskCards that fit the application profile and anticipated use scenarios. This includes explicitly noting the applicable language(s).
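As an illustration of how this profile-driven scoping might be mechanised, the sketch below represents a card as a small structured record mirroring the fields in Tab. 2 and filters a collection by deployment language and by the taxonomy categories flagged in the application profile. The `RiskCard` dataclass, its field names, and the `select_cards` helper are illustrative assumptions, not the storage format used by any released RiskCards repository.

```python
from dataclasses import dataclass, field

@dataclass
class RiskCard:
    """Illustrative in-memory representation of a RiskCard (fields mirror Tab. 2)."""
    title: str
    description: str
    categorisation: list           # e.g. ["W1.3", "S1.1"]
    harm_types: list               # e.g. ["Reader-Psychological", "Publisher-Legal"]
    languages: list = field(default_factory=lambda: ["en"])
    sample_prompts: list = field(default_factory=list)
    notes: str = ""

def select_cards(cards, language, relevant_categories):
    """Keep cards that apply to the deployment language and intersect the
    taxonomy categories identified while building the application profile."""
    selected = []
    for card in cards:
        if language not in card.languages:
            continue
        if any(cat in relevant_categories for cat in card.categorisation):
            selected.append(card)
    return selected

# Example: an English-language deployment where toxic-language categories were flagged.
starter_set = [
    RiskCard("Hate speech",
             "Derogatory speech toward or about a group predicated on an identity characteristic",
             ["W1.3", "S1.1"],
             ["Reader-Psychological", "External group-Social", "Publisher-Legal"]),
    RiskCard("Prompt extraction",
             "An LM reveals proprietary information about its prompt/configuration",
             ["W2.2", "S5.1"],
             ["Developer-Social", "Developer-Financial", "Provider-Legal"]),
]
print([c.title for c in select_cards(starter_set, "en", {"W1.3"})])  # -> ['Hate speech']
```

The same filtering can, of course, be done by hand; the point is only that the structured fields of §3.2 make the scoping step mechanical and repeatable across assessments.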
One technique to rapidly scope the relevant RiskCards would include filtering on the high-level categorisations presented in accepted taxonomies, such as Weidinger et al. (2019) and Shelby et al. (2019). If there isn't a specific anticipated use or audience (i.e., with a general purpose model), assessors can proceed with a full set of RiskCards - though usually, models are not used for _everything_. Questions to ask include: Who is the anticipated user? What are their expectations in that scenario? What kind of input data will they be putting into the system? How private or public will model outputs be? What will model outputs be used for? Where is the liability if something goes wrong with model output? ### Define Assessors After the candidate set of RiskCards has been selected, a decision must be made on who will carry out the assessment. We provide three considerations when assigning assessors. First, an assessor must have adequate domain expertise to detect the risks, and different assessor profiles may lend themselves to different RiskCards. For example, if the risk is the leakage of commercially sensitive data, assessors must be versed in data protection and sharing laws within their jurisdiction, as well as internal company policies. If models are to be probed for their propensity to output negative stereotypes about certain groups, people from those groups are the best experts on identifying which stereotypes cause what types of harm. We encourage a participatory approach to risk assessment by gathering an assessor team with appropriate representation of various stakeholders (Kendra et al., 2018). Second, assessor backgrounds may affect risk judgments, and so describing assessor backgrounds and demographics is a best practice (Bahdan et al., 2019). Beyond documenting _who_ the assessors are, it is valuable to document _how_ they will conduct their work. For example, the time that assessors will spend on each RiskCard or the task as a whole; or outlining the protocols in place for quality and safety of assessments, including mitigating cognitive fatigue and negative psychological effects from repeatedly viewing harmful output. For recommendations of how assessors can be supported and protected in their work, we refer the reader to (Kendra et al., 2018) who categorise best practices in handling harmful text data. Finally, conflicts of interest must also be considered. As with any verifiable and trustworthy auditing procedure, it is desirable to have a large degree of separation between the assessor and the model provider to avoid regulatory capture. Risk assessments performed by the same organisation as that providing a model bear an intrinsic conflict of interest. These conflicts may be ameliorated but not removed by (i) using standard frameworks for describing their processes and/or results, and (ii) being transparent about the evaluation process. ### Developing an Assessment Strategy At this point, the target system and application context, the candidate RiskCards, and the assessors have all been chosen. Assessors should now proceed to assess the LM system card-by-card. Each RiskCard may require a different assessment strategy. Detailed suggestions of semi-automated probing tactics are given in SS4.5. However, the strategy development stage should center people, especially those that are marginalized and disadvantaged, so that they are not mere passive subjects but rather have the agency to shape the risk assessment process. 
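One lightweight way to keep the assessment strategy, the assessors, and the subsequent probing results together is to log every probe against the card it targets, as in the sketch below; the column layout and helper function are illustrative assumptions rather than a prescribed format, chosen only to cover the procedural details that the assessment steps above ask assessors to record.

```python
import csv
import os
from datetime import datetime, timezone

# Illustrative probe log: one row per prompt tried against a RiskCard.
FIELDS = ["timestamp", "risk_card", "assessor_id", "model_version",
          "temperature", "prompt", "output", "risk_manifested", "notes"]

def log_probe(path, risk_card, assessor_id, model_version, temperature,
              prompt, output, risk_manifested, notes=""):
    """Append a single probe attempt to a CSV log, writing the header on first use."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "risk_card": risk_card,
            "assessor_id": assessor_id,
            "model_version": model_version,
            "temperature": temperature,
            "prompt": prompt,
            "output": output,
            "risk_manifested": risk_manifested,   # the assessor's qualitative judgement
            "notes": notes,
        })

# Example usage: record one unsuccessful probe against the hate speech card.
log_probe("assessment_log.csv", "Hate speech", "assessor_03", "some-model-v1", 0.7,
          "Complete the sentence: the problem with ...", "I can't help with that.",
          risk_manifested=False, notes="refusal; try a paraphrased prompt next")
```

Such a log documents how many tests were made and which prompts did or did not lead to problematic output, which feeds directly into the report and any recontribution of new prompt-output pairs.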
### Probing Models

In this step, assessors evaluate the model against each RiskCard. We recommend performing this manually as automatic evaluation has clear limits (§4.6). The probing stage involves assessors interacting with the model to expose a demonstrable prompt-output pair which aligns with the RiskCard in question. Across these experiments, assessors should record which prompts did and did not lead to problematic output, and how many tests were made. When applying RiskCards, assessors should assume that the provided sample prompts may result in an unsuccessful attack, and should only use these prompts as a seed for a wider, more diverse set. Works in the field of LM manipulation provide inspiration for a broad range of strategies and tactics, from specific "folk-lore" attacks (Kendra et al., 2018; Bahdan et al., 2019) to red-teaming protocols (Bahdan et al., 2019; Bahdan et al., 2019; Bahdan et al., 2019) to online resources on prompt-engineering.6 We are intentionally underspecific here to avoid giving a rigid framework and thus constraining the ways in which one might probe a model. However, some valuable exploration strategies include paraphrasing prompts, varying model parameters and running the same prompt multiple times (to measure a distribution). Additionally, posing prompts in different settings, for example in a dialogue setting, a poem or a JSON file, may expose unexpected model behaviours. Finally, assessors may attempt "unprompted" generation, which was found to yield toxic output (Kendra et al., 2018).

Footnote 6: E.g. [https://github.com/dair-ai/Prompt-Engineering-Guide](https://github.com/dair-ai/Prompt-Engineering-Guide)

### Qualitative Language Model Risk Assessment

RiskCards are part of a qualitative approach to in-context LM risk assessment. This is atypical: most LM performance measurement is quantitative. We argue that purely quantitative assessment of LM risk falls short for several reasons.

_Automated evaluation will always make mistakes_. Automated systems rarely, if ever, get perfect scores at detecting harmful content. Typically, some harmful content will be missed as non-harmful, and some non-harmful content will be accidentally marked as harmful, even for well-resourced "Class-5" languages (Han et al., 2017). Further, automated systems project an unknown set of values onto the result. How their creators define e.g. "toxicity" and represent it through data is often not transparent. Thus, not only is it hard to discover when novel forms of harm slip past undetected, it is also uncertain how well their classifications match the goal of an assessment.

_Automated systems are frequently limited to well-resourced languages_. The efficacy of harm detection classifiers is limited by the amount of language-specific data. How harms present is often highly language-dependent, and so each language needs its own dataset, but the distribution of languages represented in harm detection data is skewed (Kirk et al., 2017).

_Automated systems degrade over time_. Forms of linguistic expression evolve, but a classifier is frozen in time when it is trained (or, specifically, when its training data was gathered). For example, some APIs would consistently mark any message containing the term "toot" as profane, causing errors first apparent when applied to Mastodon.

_Automating evaluation stops assessors from learning_. A way to become better at assessing LM risks is to granularly understand their data, and output behaviours.
Hiding the assessment away behind quantitative summaries decreases assessor team skill and increases the chance of under-reporting the risks. Further, decreases in assessment quality become invisible when assessments are automated; one can always extract a quantified performance score, even if the data evaluated against is stale or otherwise inappropriate. This enables a dangerous silent failure mode, where a score is given with confidence but misses fine-grained failure modes. ## 5. RiskCards starter set Now that we have a structure for describing risks via RiskCards, we map some risks from the literature into our proposed structure. In this section, we describe an inductive survey of existing literature on LM risks where specific risks are collated, de-duplicated, and mapped into RiskCards. The result is a "starter set" of RiskCards to provide a basic scaffold for others to conduct their assessments. We distribute this starter set in an openly-available Github repository.7 Footnote 7: [https://github.com/leondz/lm_risk_cards](https://github.com/leondz/lm_risk_cards) ### Enumerating Risks The risks that surround or are exacerbated by LMs are an open class. It is an unreasonable expectation to identify all of these - especially due to their changing nature across applications and through time. Nevertheless, beginning the process of applying RiskCards is difficult without concrete examples. Thus, we examine a selection of works to identify a candidate set of risks(Bradley et al., 2016; Bradley et al., 2016; Bradley et al., 2016; Bradley et al., 2016; Kirk et al., 2017; Kirk et al., 2017; Kirk et al., 2017; Kirk et al., 2017; Kirk et al., 2017; Kirk et al., 2017; Kirk et al., 2017). For each risk, we collect the name and description. Similar risks are then merged into one entry. Only risks where a documentable harm exists are made into a RiskCard, and so we skip over risks which are mentioned in the literature but not substantiated.8 The set of risks identified, with description and reference(s), are given in Tab. 4 (in the Appendix). Footnote 8: Note that the dynamic nature and flexibility of RiskCards allows for these to be added if and when a harm is documented. ### Developing risk card prompts and outputs Prompts and output examples on the starter set of RiskCards are created through interactions with models from OpenAI (text-davinci-003; text-davinci-002), Eleuther (GPT-NeoX-20B), and Cohere (using a medium model released between October 2022-January 2023). While we state this set of target models, we do not denote which model generated which prompt-output pair. Sample outputs are unlikely to remain representative of any general model category over time: RiskCard sample prompt-output pairs are only ever illustrative. The RiskCards starter set is in English. ## 6. Considerations and limitations _Sustainability._ Who has ultimate power or responsibility in maintaining a RiskCard is less clear cut than for a model-, data-, or task-centric documentation standard. No-one owns the concepts behind an individual RiskCard because by nature, it is not tied to a specific empirical artefact. To this end, we will release the RiskCards created as part of this research in a public Github repository, so that others may edit, add, or otherwise update the cards.9 Through open-sourcing our framework, we hope that it can become a live and community-centric resource. However, some power is still retained in the hands of the repository owners. 
For that reason, we also license both (a) the RiskCard concept as conveyed in this manuscript, and (b) the starter set of RiskCards provided alongside this paper, as public domain CC0, thus waiving rights over the RiskCards as concepts. Despite encouraging this freedom, we still rely on sufficient momentum for the set of RiskCards to expand and evolve.

Footnote 9: [https://github.com/leondz/lm_risk_cards](https://github.com/leondz/lm_risk_cards)

_Distributed Responsibility._ A related concern comes in the distributed responsibility of model trainers arising from the prevailing ecosystem for downloading, adapting and applying pre-trained LMs. For example, a pre-trained LM can be (1) released by OpenAI, (2) downloaded, fine-tuned and uploaded to HuggingFace by another developer, then (3) applied in an app or for customer support by a purchaser or further developer. With the generality of LMs, the interaction space between model, application and users becomes exceedingly complex. We thus cannot specify who is directly responsible for conducting a risk assessment for which models, and their downstream versions. However, what is clear is that any LMs with either a large reach (in terms of number of downloads or users) or a risky application arena (e.g., anything relating to content moderation, mental health or legal settings) should be accompanied with careful documentation of the risks they pose to groups and to society as a whole.

_Unintended Consequences of Absolved Responsibility._ Any documentation standard or reporting check-list can be misinterpreted as a 'box-ticking' exercise which counter-intuitively absolves responsibility for those who build and distribute models. Critically, "documentation != mitigation": enumerating a set of risks associated with a LM should not replace efforts to mitigate those risks. RiskCards, as a transparent reporting standard, only travel part of the journey in ensuring the safe, ethical and risk-appropriate use of LMs. Despite this limitation, transparent reporting is a valuable first step in understanding risks before they can be tackled. In a similar vein, industrial audits are often employed to expose problems and offer recommendations for fixing problems, even if the fixes sit outside the auditor's remit.

_The Burden of Manual Assessments._ The assessment protocols accompanying RiskCards rely on a large degree of manual evaluation. We favour manual, human-led evaluation over automated evaluation or benchmarking because it helps to more granularly map out the specifics of what risks are relevant to which contexts and which human groups. However, a heavily manual process creates a financial burden, potentially impeding uptake of RiskCards especially in low-resource teams, companies or labs. We hope that open-sourcing RiskCards allows members of the community to share the labour in documenting risks, providing some efficiency gains which are shared across applications or models. Beyond a financial burden, repeatedly viewing harmful outputs when interrogating a model imposes a psychological burden on the assessors (Becker et al., 2019). While we provide some recommendations for protecting the well-being of assessors, some of these negative effects cannot be fully mitigated.

_The Risk of Malicious Use._ Finally, any documentation reporting on failure modes of LMs can be dual-use. Examples of harms that can be elicited via specific prompts could be reverse-engineered by malicious users to scale up dangerous or harmful generations.
We mitigate the risk of malicious use by (i) encouraging that specific models are not documented on a risk card, and (ii) providing only illustrative sets of sample prompt-output pairs. ## 7. Conclusion This paper describes RiskCards -- a structured, open tool for assessing the risks in a single language model deployment. We believe that both good due diligence and high quality assessments are a path to reducing and mitigating many kinds of harms mediated by language models. RiskCards enable this increase in quality, positively serving the interests of those interacting with, owning, and affected by language model systems.
2310.20537
Directed Cyclic Graph for Causal Discovery from Multivariate Functional Data
Discovering causal relationship using multivariate functional data has received a significant amount of attention very recently. In this article, we introduce a functional linear structural equation model for causal structure learning when the underlying graph involving the multivariate functions may have cycles. To enhance interpretability, our model involves a low-dimensional causal embedded space such that all the relevant causal information in the multivariate functional data is preserved in this lower-dimensional subspace. We prove that the proposed model is causally identifiable under standard assumptions that are often made in the causal discovery literature. To carry out inference of our model, we develop a fully Bayesian framework with suitable prior specifications and uncertainty quantification through posterior summaries. We illustrate the superior performance of our method over existing methods in terms of causal graph estimation through extensive simulation studies. We also demonstrate the proposed method using a brain EEG dataset.
Saptarshi Roy, Raymond K. W. Wong, Yang Ni
2023-10-31T15:19:24Z
http://arxiv.org/abs/2310.20537v1
# Directed Cyclic Graph for Causal Discovery from Multivariate Functional Data ###### Abstract Discovering causal relationship using multivariate functional data has received a significant amount of attention very recently. In this article, we introduce a functional linear structural equation model for causal structure learning when the underlying graph involving the multivariate functions may have cycles. To enhance interpretability, our model involves a low-dimensional causal embedded space such that all the relevant causal information in the multivariate functional data is preserved in this lower-dimensional subspace. We prove that the proposed model is causally identifiable under standard assumptions that are often made in the causal discovery literature. To carry out inference of our model, we develop a fully Bayesian framework with suitable prior specifications and uncertainty quantification through posterior summaries. We illustrate the superior performance of our method over existing methods in terms of causal graph estimation through extensive simulation studies. We also demonstrate the proposed method using a brain EEG dataset. ## 1 Introduction Motivation.Multivariate functional data arise in many fields such as biomedical research (Wei and Li, 2008; Chiou and Muller, 2016), environmental science (Korte-Stapff et al., 2022), finance (Kowal et al., 2017), plant science (Wong et al., 2019; Park et al., 2022), and sport science (Volkmann et al., 2021) where multiple variables are measured over time or other domains. The increasing availability of functional data in these fields provides us with great opportunities to discover causal relationships among random functions for the better understanding of complex systems, which is helpful for various machine learning and statistics tasks such as representation learning (Scholkopf et al., 2021), fairness (Tang et al., 2023), transfer learning (Rojas-Carulla et al., 2018), and reinforcement learning (Zeng et al., 2023). One motivating example is electroencephalography (EEG) where electrical activity from the brain is recorded non-invasively from electrode channels by placing them on the scalp or directly on the surface of the brain. Given its continuous nature and the short time separation between the adjacent measuring points, it is natural to treat the data at each brain location/region as a function over time. A relevant scientific goal is to estimate brain effective connectivity among different regions, which will potentially allow us to make better decisions, design more effective interventions, and avoid unintended consequences. However, existing structural equation model (SEM) based causal discovery methods assume acyclic relationships among the random functions by imposing a directed acyclic graph (DAG) structure, which may be too restrictive for many real applications. For example, there are strong indications that in brain effective connectivity studies, due to reciprocal polysynaptic connections, the brain regions are far from exhibiting acyclicity (Friston, 2011; Markov et al., 2012), and that in genetic pathways, due to the presence of multiple upstream regulators and downstream targets for every signaling component, feedback loops/directed cycles are regular motifs (Brandman and Meyer, 2008). Thus, in light of the prevalence of cycles in complex systems, it is desirable to have a flexible model for causal discovery among random functions that can account for such cyclic causal structures. 
Challenges.Causal discovery for multivariate functional data in the presence of cycles is an inherently difficult problem that is not yet well understood. We highlight three prominent challenges. (i) Functional data are infinite-dimensional in nature. It may so happen that the low-frequency spectrum of one curve might causally influence the high-frequency spectrum of another curve. This demands identification of pertinent features that can be used to create a finite-dimensional representation of the data, which is easier to work with and analyze. However, the challenge is that we may not know _a priori_ what the relevant features are when dealing with infinite-dimensional objects. Blind adoption of standard (non-causal-adaptive) low-dimensional features can lead to errors or inaccuracies. (ii) Although the identifiability of causal models for multivariate functional data in the absence of cycles has been established in recent works (Zhou et al., 2022; Lee and Li, 2022), showing identifiability of causal models from multivariate data, let alone multivariate functions, is still a challenging and complex task in cases where causal relationships are obscured by the presence of cycles. (iii) It is common that functional data are only observed over discrete time points with additional noises. Such incomplete and noisy observations of the functions add another layer of difficulty in probing the causal relationships of interest. Related work.Causal discovery from multivariate functional data has been studied by a few recent works (Zhou et al., 2022; Lee and Li, 2022; Yang and Suzuki, 2022), which have already shown some promising results in discovering causality in, e.g., EEG data and longitudinal medical record data. However, all of them are limited to DAGs, which do not allow inference of cyclic causality. While there has been a surge of research on causal discovery methods for scalar random variables in the presence of feedback loops/cycles over the last few decades (Richardson, 1996; Lacerda et al., 2008; Mooij et al., 2011; Hyttinen et al., 2012; Huang et al., 2019; Mooij and Heskes, 2013; Mooij and Claassen, 2020; Zhou et al., 2022), none of these approaches have been extended to discovering causal dependencies among random functions in multivariate settings. Therefore, how to handle cyclic causal relationships among multivariate functional data while addressing the aforementioned challenges remains a largely unsolved problem. Contributions.In this paper, we propose an operator-based non-recursive linear structural equation based novel causal discovery framework that identifies causal relationships among functional objects in the presence of cycles and additional measurement/sampling noises. Our major contribution is four-fold. 1. We consider a causal embedding of the functional nodes into a lower-dimensional space for dimension reduction that adapts to causal relationships. 2. We prove that the causal graph of the proposed model is uniquely identifiable under standard causal assumptions. 3. We capture within-function dependencies using a data-driven selection of orthonormal basis that is both interpretable and computationally efficient. 4. To perform inference and uncertainty quantification from finite-sample data, we adopt a fully Bayesian hierarchical formulation with carefully selected prior distributions. Posterior inference is performed using Markov chain Monte Carlo (MCMC). 
We demonstrate the effectiveness of the proposed method in identifying causal structure and key parameters through simulation studies and apply the framework to the analysis of brain EEG data, illustrating its real-world applicability. Codes will be made available on the project's website on Github.

## 2 Model Definition and causal identifiability

### Notations

Let \([p]=\{1,\ldots,p\}\) for any positive integer \(p\). A causal directed cyclic graph (DCG) is a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), which consists of a set of vertices or nodes \(\mathcal{V}=[p]\) representing a set of random variables and a set of directed edges \(\mathcal{E}=\{\ell\to j|j,\ell\in\mathcal{V}\}\) representing the direct causal relationships among the random variables. In a DCG, we do not assume the graph to be acyclic. A causal DCG model is an ordered pair \((\mathcal{G},\mathbb{P})\) where \(\mathbb{P}\) is a joint probability distribution over \(\mathcal{V}\) (more rigorously, the random variables that \(\mathcal{V}\) represents) that satisfies conditional independence relationships encoded by the causal DCG \(\mathcal{G}\). A simple directed cycle is a sequence of distinct vertices \(\{v_{1},\ldots,v_{k}\}\) such that the induced subgraph by these vertices is \(v_{1}\to\cdots\to v_{k}\to v_{1}\). For a vertex \(j\in\mathcal{V}\), we use \(\mathrm{pa}(j)\) to denote the set of parents (direct causes).

### Model framework

Consider a multivariate stochastic process \(\mathbf{Y}=(Y_{1},\ldots,Y_{p})^{\top}\) where each \(Y_{j}\) is defined on a compact domain \(\mathcal{T}_{j}\subset\mathbb{R}\). Without loss of generality, we assume \(\mathcal{T}_{1}=\cdots=\mathcal{T}_{p}=[0,1]\). Suppose \(Y_{j}\in\mathcal{H}_{j}\) where \(\mathcal{H}_{j}\) is a Hilbert space of functions defined on \(\mathcal{T}_{j}\). We let \(\langle\cdot,\cdot\rangle\) denote the inner product of \(\mathcal{H}_{j}\). We propose a causal model that captures the relationships among \(Y_{1},\ldots,Y_{p}\). Our proposed model considers an operator-based non-recursive linear structural equation model on the random functions \(\mathbf{Y}\) as \[Y_{j}(\cdot)=\sum_{\ell\in\mathrm{pa}(j)}(\mathcal{B}_{j\ell}Y_{\ell})(\cdot)+ f_{j}(\cdot),\quad\forall j\in[p], \tag{2.2.1}\] where \(\mathcal{B}_{j\ell}\) is a linear operator that maps \(\mathcal{H}_{\ell}\) to \(\mathcal{H}_{j}\), and \(f_{j}\in\mathcal{H}_{j}\) is an exogenous stochastic process. Clearly, for any \(j,\ell\in\mathcal{V}\) such that the edge \(\ell\to j\in\mathcal{E}\), \(\mathcal{B}_{j\ell}\) is not a null operator. Now by stacking the \(p\) equations in (2.2.1), we obtain \[\mathbf{Y}=\mathfrak{B}\mathbf{Y}+\mathbf{f}, \tag{2.2.2}\] where \(\mathfrak{B}=(\mathcal{B}_{j\ell})_{j,\ell=1}^{p}\) is a matrix of operators and \(\mathbf{f}=(f_{1},\ldots,f_{p})^{\top}\) is a \(p\)-variate stochastic process. In DAGs, the causal effect matrix can be arranged into a lower block triangular structure given a topological/causal ordering. But since our model allows for cycles, we have no such restriction on the structure of the operator matrix \(\mathfrak{B}\) except that \(\mathcal{B}_{jj},\forall j\in[p]\), is null, i.e., no self-loops. Model (2.2.1) is infinite-dimensional and hence challenging to estimate and interpret. To alleviate such difficulties, we consider a low-dimensional causal embedding structure. Specifically, we assume that the causal relationships are preserved in an unknown low-dimensional subspace \(\mathcal{D}_{j}\) of \(\mathcal{H}_{j}\).
Denote the dimension of \(\mathcal{D}_{j}\) by \(K_{j}\). Let \(\mathcal{P}_{j}\) and \(\mathcal{Q}_{j}\) be the projection onto \(\mathcal{D}_{j}\) and its orthogonal complement in \(\mathcal{H}_{j}\) respectively. We assume \(\mathcal{B}_{j\ell}=\mathcal{P}_{j}\mathcal{B}_{j\ell}\mathcal{P}_{\ell}\), which implies that causal effects can be fully described within the low-dimensional subspaces \(\{\mathcal{D}_{j}\}_{j=1}^{p}\). As such, (2.2.1) can be split into \[\mathcal{P}_{j}Y_{j} =\sum_{\ell\in\text{pa}(j)}\mathcal{B}_{j\ell}(\mathcal{P}_{\ell} Y_{\ell})+\mathcal{P}_{j}f_{j}, \tag{2.2.3}\] \[\mathcal{Q}_{j}Y_{j} =\mathcal{Q}_{j}f_{j}.\] We assume that \(\mathcal{P}_{j}f_{j}\) and \(\mathcal{Q}_{j}f_{j}\) are independent of each other. Now, by defining \(\alpha_{j}=\mathcal{P}_{j}Y_{j}\) and \(\epsilon_{j}=\mathcal{P}_{j}f_{j},\forall j\in[p]\), (2.2.3) can be compactly written as \[\boldsymbol{\alpha}=\mathcal{B}\boldsymbol{\alpha}+\boldsymbol{\epsilon}, \tag{2.2.4}\] where \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{p})^{\top}\) and \(\boldsymbol{\epsilon}=(\epsilon_{1},\ldots,\epsilon_{p})^{\top}\) with \(\alpha_{j},\epsilon_{j}\in\mathcal{D}_{j},\forall j\in[p]\). In practice, the random functions in \(\boldsymbol{Y}\) can only be observed over a finite number of (input) locations, possibly with measurement errors. More specifically, for each random function \(Y_{j}\), we observe \(\{(t_{ju},X_{ju})\}_{u=1}^{m_{j}}\), where \(X_{ju}\in\mathbb{R}\) is the measurement of \(Y_{j}\) at location \(t_{ju}\in\mathcal{T}_{j}\) and \(m_{j}\) is the number of measurements obtained from \(Y_{j}\). Defining \(\beta_{j}=\mathcal{Q}_{j}Y_{j}\), we consider the following measurement model: \[X_{ju} =Y_{j}(t_{ju})+e_{ju}\] \[=\alpha_{j}(t_{ju})+\beta_{j}(t_{ju})+e_{ju},\quad\forall u\in[m _{j}],j\in[p], \tag{2.2.5}\] with independent noises \(e_{ju}\sim N(0,\sigma_{j}),\forall u\in[m_{j}]\). More compactly, (2.2.5) can be written as \[\boldsymbol{X}=\boldsymbol{\alpha}(\boldsymbol{t})+\boldsymbol{\beta}( \boldsymbol{t})+\boldsymbol{e}, \tag{2.2.6}\] where \(\boldsymbol{X}=(\boldsymbol{X}_{1}^{\top},\ldots,\boldsymbol{X}_{p}^{\top})^{\top}\), \(\boldsymbol{\alpha}(\boldsymbol{t})=(\boldsymbol{\alpha}_{1}(\boldsymbol{t}_{ 1})^{\top},\ldots,\boldsymbol{\alpha}_{p}(\boldsymbol{t}_{p})^{\top})^{\top}\), \(\boldsymbol{\beta}(\boldsymbol{t})=(\boldsymbol{\beta}_{1}(\boldsymbol{t}_{ 1})^{\top},\ldots,\boldsymbol{\beta}_{p}(\boldsymbol{t}_{p})^{\top})^{\top}\) and \(\boldsymbol{e}=(\boldsymbol{e}_{1}^{\top},\ldots,\boldsymbol{e}_{p}^{\top})^{\top}\) with \(\boldsymbol{X}_{j}=(X_{j1},\ldots,X_{jm_{j}})^{\top},\boldsymbol{\alpha}_{j} (\boldsymbol{t}_{j})=(\alpha_{j}(t_{j1}),\ldots,\alpha_{j}(t_{jm_{j}}))^{\top},\boldsymbol{\beta}_{j}(\boldsymbol{t}_{j})=(\beta_{j}(t_{j1}),\ldots,\beta_{ j}(t_{jm_{j}}))^{\top}\) and \(\boldsymbol{e}_{j}=(e_{j1},\ldots,e_{jm_{j}})^{\top}\). We call our proposed model, **FENCE**, which stands for '**F**unctional **E**mbedded **N**odes for **C**yclic causal **E**xploration', reflecting its purpose. ### Causal identifiability In this section, we shall show that the graph structure of the proposed FENCE model is identifiable for functional data measured discretely with random noises under several causal assumptions. We start by defining causal identifiability and state our assumptions. **Definition 2.1**.: (_Causal Identifiability_) Suppose \(\mathbf{Y}\) is a \(p\)-variate random function and \(\mathbf{X}\) is the observed noisy version of \(\mathbf{Y}\) given by (2.2.6). 
Assume \(\mathbf{X}\) follows FENCE model \(\mathcal{S}=(\mathcal{G},\mathbb{P})\) where \(\mathcal{G}\) is the underlying graph and \(\mathbb{P}\) is the joint distribution of \(\mathbf{X}\) over \(\mathcal{G}\). We say that \(\mathcal{S}\) is causally identifiable from \(\mathbf{X}\) if there does not exist any other \(\mathcal{S}^{*}=(\mathcal{G}^{*},\mathbb{P}^{*})\) with \(\mathcal{G}^{*}\neq\mathcal{G}\) such that the joint distribution \(\mathbb{P}^{*}\) on \(\mathbf{X}\) induced by \(\mathcal{G}^{*}\) is equivalent to \(\mathbb{P}\) induced by \(\mathcal{G}\). In other words, for a causal graph to be identifiable, there must not exist any other graph such that the joint distributions induced by the two different graphs are equivalent. Next, we list and discuss a few assumptions to establish the causal identifiability of the proposed model. **Assumption 1**.: _(Causal Sufficiency) The model \(\mathcal{S}=(\mathcal{G},\mathbb{P})\) is causally sufficient, i.e., there are no unmeasured confounders._ Assuming no unmeasured confounders keeps the causal discovery task more manageable especially for cyclic graphs with purely observational data. **Assumption 2**.: _(Disjoint Cycles) The cycles in \(\mathcal{G}\) are disjoint, i.e., no two cycles in the graph have two nodes that are common to both._ Assuming disjoint cycles induces a natural topological ordering and forms a directed acyclic hypergraph-like structure within the DCG. The same assumption was made in Lacerda et al. (2008). **Assumption 3**.: _(Stability) For the model \(\mathcal{S}\), the moduli of the eigenvalues of the finite rank operator \(\mathcal{B}\) are less than or equal to \(1\), and none of the real eigenvalues are equal to \(1\)._ According to Fisher, 1970, the SEM in (2.2.4) can be viewed as being in a state of equilibrium, where the finite rank operator \(\mathcal{B}\) represents coefficients in a set of dynamical equations that describe a deterministic dynamical system observed over small time intervals as the time lag approaches zero. The eigenvalue conditions are deemed necessary and sufficient for the limiting behavior to hold, as argued by Fisher, 1970. Such an assumption is widely adopted in e.g., econometrics, and Lacerda et al., 2008 made this assumption as well. **Assumption 4**.: _(Non-Gaussianity) The exogenous variables have independent mixture of Gaussian distributions. i.e., \(\epsilon_{jk}\stackrel{{\text{ind}}}{{\sim}}\sum_{m=1}^{M_{jk}}\pi_{ jkm}\text{N}(\mu_{jkm},\tau_{jkm})\) with \(M_{jk}\geq 2\)._ The assumption of non-Gaussianity on the exogenous variables has been proven useful in causal discovery as it induces model identifiability in the linear SEM framework (Lacerda et al., 2008; Aapo and Petteri, 1999; Shimizu et al., 2006; Spirtes and Zhang, 2016). Mixture of Gaussian can approximate any continuous distribution arbitrarily well given a sufficiently large number of mixture components (Titterington et al., 1985; McLachlan and Peel, 2000; Rossi, 2014). It is also easy to sample, which facilitates our posterior inference. **Assumption 5**.: _(Non-causal dependency) We assume \(\mathbf{\beta}(\mathbf{t})=\mathbf{C}(\mathbf{t})\mathbf{\gamma}\), where \(\mathbf{\gamma}\) represent another exogenous component of the model and \(\mathbf{C}(\mathbf{t})=diag(\mathbf{C}_{11}(\mathbf{t}_{1}),\ldots,\mathbf{C}_{pp}(\mathbf{t}_{p}))\). 
Here \(\mathbf{C}_{jj}(\mathbf{t}_{j})\) is a mixing matrix that mixes the independent entires in \(\mathbf{\gamma}\) to generate temporal dependence within the \(j\)-th block. We assume \(\gamma_{jk}\stackrel{{\text{ind}}}{{\sim}}\sum_{m=1}^{M_{jk}}\pi ^{\prime}_{jkm}\text{N}(\mu^{\prime}_{jkm},\tau^{\prime}_{jkm})\) with \(M_{jk}\geq 1\)._ Since the model assumes that all causal information in \(\mathbf{Y}\) is preserved in the lower-dimensional space \(\mathcal{D}_{j}\) and not in its orthogonal complement, it is apparent that while each \(\mathbf{\beta}_{j}(\mathbf{t}_{j})\) within a block can have temporal dependence, it is independent of \(\mathbf{\beta}_{\ell}(\mathbf{t}_{\ell})\) when \(j\neq\ell\) and \(j,\ell\in[p]\). For some basis \(\{\phi_{jk}\}_{k=1}^{K_{j}}\) that spans the low-dimensional causal embedded space \(\mathcal{D}_{j}\), \(\alpha_{j}\) in (2.2.5) can be further expanded by, \(\alpha_{j}(t_{ju})=\sum_{k=1}^{K_{j}}\tilde{\alpha}_{jk}\phi_{jk}(t_{ju})\). Therefore (2.2.6) can be written more compactly as \[\mathbf{X}=\mathbf{\Phi}(\mathbf{t})\tilde{\mathbf{\alpha}}+\mathbf{\beta}(\mathbf{t})+\mathbf{e}, \tag{2.3.1}\] where \(\mathbf{\Phi}(\mathbf{t})=\text{diag}(\mathbf{\Phi}_{1}(\mathbf{t}_{1}),\ldots,\mathbf{\Phi}_{p}( \mathbf{t}_{p}))\) with \(\mathbf{\Phi}_{j}(\mathbf{t}_{j})=(\phi_{jv}(t_{ju}))_{u=1,v=1}^{m_{j},K_{j}}\). **Assumption 6**.: _(Sufficient sampling locations) The basis matrix \(\mathbf{\Phi}(\mathbf{t})\) of size \(\sum_{j=1}^{p}m_{j}\times\sum_{j=1}^{p}K_{j}\) has a full column rank._ This assumption implies enough sampling locations, over which each random function \(Y_{j}\) is observed, to capture all the causal information that \(Y_{j}\) contains. Given these six assumptions, our main theorem establishes the causal identifiability of the proposed model. **Theorem 2.1**.: _Under Assumptions 1 - 6, \(\mathcal{S}=(\mathcal{G},\mathbb{P})\) is causally identifiable._ The proof essentially involves two steps as shown in Figure 1. On the left-hand side (LHS) of the diagram, we depict the hypergraph-like structure that emerges when assuming the existence of disjoint cycles (Assumption 2), whereas, on the right-hand side (RHS), we offer a magnified view of the hypernodes (nodes containing simple directed cycle). Our approach to proving causal identifiability progresses from the LHS to the RHS. That is, we first prove the identifiability of the hypergraph-like structure depicted on the LHS of Figure 1, and then we proceed to establish the identifiability of each simple directed cycle within every hypernode in the hypergraph. The detailed exposition of the proof can be found in Section A of the Supplementary Materials. ## 3 Bayesian Model Formulation In this section, we will describe the inference procedure of the proposed model. A straightforward approach would be a two-step procedure where the first step performs functional principal component analysis on each function marginally to reduce the dimension, and then the second step learns causal structure based on the principal components. However, this simple approach has several disadvantages. First, the estimated functional principal components that explain the most variation of each individual function marginally may not optimally capture the cause-effect dependence relationships among different functions. Second, this procedure is unreliable since estimation uncertainty fails to propagate correctly from the first step to the second step. 
As such, we propose a fully Bayesian approach, which reduces the dimension of functional data adaptively for causal structure learning.

Figure 1: Two important components of causal identifiability proof: (I) identifiability of directed acyclic hypergraph induced by disjoint cycles, and (II) identifiability of each disjoint cycle.

### Model parameters

Let \(\mathbf{E}=(E_{j\ell})_{j,\ell=1}^{p}\) denote the adjacency matrix where \(E_{j\ell}=1\) indicates the existence of a directed edge from node \(\ell\) to node \(j\), and \(0\) otherwise. Let \(\{\phi_{k}\}_{k=1}^{S}\) be a set of \(S\) common unknown basis functions that approximate each random function \(Y_{j}\), i.e., \(Y_{j}=\sum_{k=1}^{S}\tilde{\alpha}_{jk}\phi_{k}\), where \(\{\tilde{\alpha}_{jk}\}_{k=1}^{S}\) denote the set of basis coefficients. Note that \(\{\phi_{k}\}\) is not the basis for the lower-dimensional causal embedded subspace \(\mathcal{D}_{j}\). However, we assume that the first \(K_{j}\) of them actually span \(\mathcal{D}_{j}\) and our goal is to hunt for them through a properly designed inference procedure. Moreover, according to our assumption, we build our SEM on the first \(K_{j}\) of the basis coefficients \(\tilde{\mathbf{\alpha}}_{j}=(\tilde{\alpha}_{j1},\cdots,\tilde{\alpha}_{jK_{j}})^{\top}\). Defining \(\bar{\mathbf{\alpha}}_{j}=(\tilde{\alpha}_{j,K_{j}+1},\cdots,\tilde{\alpha}_{jS})^{\top}\) with \(\bar{\mathbf{\alpha}}_{j}=\mathbf{\gamma}_{j}\), jointly they can be written as \[\tilde{\mathbf{\alpha}}=\tilde{\mathbf{B}}\tilde{\mathbf{\alpha}}+\tilde{\mathbf{\epsilon}}, \tag{3.1.1}\] where \(\tilde{\mathbf{\alpha}}=(\tilde{\mathbf{\alpha}}_{1}^{\top},\cdots,\tilde{\mathbf{\alpha}}_{p}^{\top},\bar{\mathbf{\alpha}}_{1}^{\top},\ldots,\bar{\mathbf{\alpha}}_{p}^{\top})^{\top}\), \(\tilde{\mathbf{\epsilon}}=(\tilde{\mathbf{\epsilon}}_{1}^{\top},\cdots,\tilde{\mathbf{\epsilon}}_{p}^{\top},\mathbf{\gamma}_{1}^{\top},\ldots,\mathbf{\gamma}_{p}^{\top})^{\top}\) with \(\tilde{\mathbf{\epsilon}}_{j}=(\tilde{\epsilon}_{j1},\cdots,\tilde{\epsilon}_{jK_{j}})^{\top}\) and \(\mathbf{\gamma}_{j}=(\gamma_{j,K_{j}+1},\ldots,\gamma_{jS})^{\top}\). Here \(\tilde{\mathbf{B}}=\begin{pmatrix}\mathbf{B}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{pmatrix}\) where \(\mathbf{B}=((\mathbf{B}_{j\ell}(a,b))_{a=1,b=1}^{K_{j},K_{\ell}})_{j,\ell=1}^{p}\) with \(\mathbf{B}_{jj}=\mathbf{0}\) since we assume the absence of self loops. To carry out inference, we assume \(\tilde{\epsilon}_{jk},\gamma_{jk}\overset{ind}{\sim}\sum_{m=1}^{M_{jk}}\pi_{jkm}N(\mu_{jkm},\tau_{jkm})\).

### Adaptive basis expansion

As the \(\phi_{k}\)'s are specifically useful for restricting the original function space for each \(Y_{j}\) to a lower-dimensional causally embedded smooth space of dimension \(K_{j}\), we make the basis \(\{\phi_{k}\}\) adaptive for causal structure learning by further expanding them with known spline basis functions (Kowal et al., 2017), \(\phi_{k}(\cdot)=\sum_{r=1}^{R}A_{kr}b_{r}(\cdot)\), where \(\mathbf{b}=(b_{1},\ldots,b_{R})^{\top}\) is the set of fixed cubic B-spline basis functions with equally spaced knots and \(\mathbf{A}_{k}=(A_{k1},\ldots,A_{kR})^{\top}\) are the corresponding spline coefficients. Since we do not fix \(\mathbf{A}_{k}\)'s _a priori_, the basis functions \(\phi_{k}\)'s can be learned from data _a posteriori_ and hence are adaptive to both data and causal structure (i.e., the basis functions, the functional data, and the causal graph are dependent in their joint distribution).
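To make the structure above concrete, the sketch below generates data that respect (3.1.1), the B-spline expansion, and the measurement model (2.2.5): it draws stacked causal basis coefficients from a small cyclic SEM with Gaussian-mixture exogenous noise (Assumption 4), maps them through empirically orthonormalized B-spline-based basis functions \(\phi_{k}\), and adds measurement error. The particular graph, block sizes, knot placement, mixture parameters and noise level are illustrative choices, and the QR-based orthonormalization is only a stand-in for the roughness-penalized prior on \(\mathbf{A}_{k}\) described in the next subsection; the non-causal component \(\beta_{j}\) is omitted for brevity.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
p, K = 3, 2                       # p functional nodes, K causal basis coefficients per node
m = 100                           # sampling locations per curve
t_grid = np.linspace(0, 1, m)

# --- causal part: stacked coefficients from the cyclic SEM (3.1.1) ---
edges = [(1, 0), (2, 1), (0, 2)]  # B_{j,l} != 0 encodes the edge l -> j; cycle 0 -> 1 -> 2 -> 0
B = np.zeros((p * K, p * K))
for j, l in edges:
    B[j * K:(j + 1) * K, l * K:(l + 1) * K] = 0.4 * rng.standard_normal((K, K))
radius = np.max(np.abs(np.linalg.eigvals(B)))
if radius >= 1.0:                 # enforce the stability condition of Assumption 3
    B *= 0.9 / radius

def gaussian_mixture(size, weights, means, sds):
    """Independent draws from a Gaussian mixture (Assumption 4 uses M_jk >= 2 components)."""
    comp = rng.choice(len(weights), size=size, p=weights)
    return rng.normal(np.asarray(means)[comp], np.asarray(sds)[comp])

def simulate_coeffs(n):
    """alpha = (I - B)^{-1} eps: the equilibrium solution of the non-recursive linear SEM."""
    eps = gaussian_mixture((n, p * K), [0.5, 0.5], [-0.6, 0.6], [0.3, 0.3])
    return eps @ np.linalg.inv(np.eye(p * K) - B).T

# --- basis part: orthonormalized functions built from cubic B-splines ---
def bspline_design(t, R, degree=3):
    """Evaluate R cubic B-spline basis functions with equally spaced knots at the points t."""
    interior = np.linspace(0, 1, R - degree + 1)[1:-1]
    knots = np.concatenate([np.zeros(degree + 1), interior, np.ones(degree + 1)])
    out = np.empty((len(t), R))
    for r in range(R):
        c = np.zeros(R)
        c[r] = 1.0
        out[:, r] = BSpline(knots, c, degree)(t)
    return out

Bsp = bspline_design(t_grid, R=8)            # b_r(t_u), shape (m, R)
A = rng.standard_normal((8, K))              # spline coefficients A_{kr} (illustrative draw)
Q, _ = np.linalg.qr(Bsp @ A)
Phi = Q * np.sqrt(m)                         # phi_k(t_u), empirically orthonormal in L2([0, 1])

# --- measurement part: X = Phi alpha + noise, a simplified version of (2.2.5) ---
n = 200
alpha = simulate_coeffs(n)                   # (n, p*K)
X = np.stack([alpha[:, j * K:(j + 1) * K] @ Phi.T for j in range(p)], axis=1)
X += 0.1 * rng.standard_normal(X.shape)      # iid measurement errors e_ju
print(X.shape)                               # (n, p, m): n curves over m points for p nodes
```

In the full model the number of causal basis functions \(K_{j}\), the basis functions themselves, and the graph are all inferred jointly from such noisy observations rather than fixed in advance as in this sketch.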
### Prior specifications Prior on spline coefficients.The prior on \(A_{k}\) is chosen to serve multiple purposes. (i) It sorts the basis functions by decreasing smoothness and therefore helps to identify the spanning set of size \(K_{j}\) for the underlying smooth causally embedded space \(\mathcal{D}_{j}\). (ii) Although not a strict requirement for modelling purpose, it forces \(\phi_{k}\)'s to be orthonormal, i.e. \(\int\phi_{k}(\omega)\phi_{k^{\prime}}(\omega)\,d\omega=I(k=k^{\prime})\). As such, the orthogonality constraints help eliminate any information overlap between the basis functions, which keeps the total number of necessary basis functions that actually contribute to the causal structure learning to a minimum. (iii) It regularizes the roughness of \(\phi_{k}\)'s to prevent overfitting. For (iii), more specifically, we restrict the roughness of the basis functions \(\phi_{k}(\cdot)\) by assigning a prior that penalizes its second derivatives (Gu, 1992; Wahba, 1978; Berry et al., 2002): \[\mathbf{A}_{k}\sim N(\mathbf{0},\lambda_{k}^{-1}\mathbf{\Omega}^{-}),\] where \(\mathbf{\Omega}^{-}\) is the pseudoinverse of \(\mathbf{\Omega}=\int\mathbf{b}^{{}^{\prime\prime}}(t)[\mathbf{b}^{{}^{\prime\prime}}(t)]^{ \top}\,dt\). Let \(\mathbf{\Omega}=\mathbf{U}\mathbf{D}\mathbf{U}^{\top}\) be the singular value decomposition of \(\mathbf{\Omega}\). Following Wand and Ormerod, 2010, to facilitate computation, we reparameterize \(\phi_{k}(\cdot)=\sum\limits_{r=1}^{R}\tilde{A}_{kr}\tilde{b}_{k}(\cdot)\) with \(\tilde{\mathbf{b}}(\cdot)=(1,t,\mathbf{b}^{T}(\cdot)\tilde{\mathbf{U}}\tilde{\mathbf{D}}^{- \frac{1}{2}})^{\top}\) where \(\tilde{\mathbf{D}}\) is the \((R-2)\times(R-2)\) submatrix of \(\mathbf{D}\) corresponding to non-zero singular values (note that the rank of \(\mathbf{\Omega}\) is \(R-2\) by definition) and \(\tilde{\mathbf{U}}\) is the corresponding \(R\times(R-2)\) submatrix of \(\mathbf{U}\). This induces a prior on \(\tilde{\mathbf{A}}_{k}\) given by \[\tilde{\mathbf{A}}_{k}\sim N(\mathbf{0},\mathbf{S}_{k})\text{ with }\mathbf{S}_{k}=\text{ diag}(\infty,\infty,\lambda_{k}^{-1},\ldots,\lambda_{k}^{-1}).\] In other words, the intercept and the linear term are unpenalized but the non-linear terms are penalized, the degree of which is controlled by \(\lambda_{k}\). In practice, we set the first two diagonal elements of \(\mathbf{S}_{k}\) as \(10^{8}\). We constrain the regularization parameters \(\lambda_{1}>\cdots>\lambda_{S}>0\) by putting a uniform prior: \[\lambda_{k} \sim\text{Uniform}(L_{k},U_{k}),\ \forall\ k\in[S],\] \[U_{1} =10^{8},L_{k}=\lambda_{k+1}\ \forall\ k\in[S-1],\] \[U_{k} =\lambda_{k-1}\ \forall\ k\in\{2,\ldots,S\},L_{S}=10^{-8},\] which implies that the smoothness of \(\phi_{k}(\cdot)\) decreases as \(k\) gets larger. Priors on the adjacency matrix.We propose to use an independent uniform-Bernoulli prior on each entry \(E_{j\ell}\) of \(\mathbf{E}\), i.e., \(E_{j\ell}|\rho\stackrel{{\text{ind}}}{{\sim}}\text{Bernoulli}(\rho)\) and \(\rho\sim\text{Uniform}(0,1)\). The marginal distribution of \(\mathbf{E}\) with \(\rho\) integrated out is given by \[p(\mathbf{E})=\int p(\mathbf{E}|\rho)p(\rho)\,d\rho=\text{Beta}\left(\sum_{j\neq\ell} E_{j\ell}+1,\sum_{j\neq\ell}(1-E_{j\ell})+1\right).\] Now, for example, if \(\mathbf{E}_{0}\) denotes the null adjacency matrix and \(\mathbf{E}_{1}\) denotes the adjacency matrix with only one edge, then we can see that \(p(\mathbf{E}_{0})/p(\mathbf{E}_{1})=p^{2}-p\). 
Therefore, an empty graph is favored over a graph with one edge by a factor of \(p^{2}-p\), and, importantly, this penalty increases with \(p\). Thus, the uniform-Bernoulli prior prevents false discoveries and leads to a sparse network by increasing the penalty against additional edges as the dimension \(p\) grows. Prior on the causal effect matrix.Now given \(\mathbf{E}\), we assume an independent spike and slab prior on the entries of \(\mathbf{B}=(\mathbf{B}_{j\ell})_{j,\ell=1}^{p}\): \[\mathbf{B}_{j\ell}|E_{j\ell}\sim(1-E_{j\ell})MVN(\mathbf{B}_{j\ell};\mathbf{0},s\gamma\mathbf{I }_{K_{j}},\mathbf{I}_{K_{\ell}})+E_{j\ell}MVN(\mathbf{B}_{j\ell};\mathbf{0},\gamma\mathbf{I}_{K _{j}},\mathbf{I}_{K_{\ell}}),\] where \(MVN(\mathbf{B}_{j\ell};\mathbf{0},\gamma\mathbf{I}_{K_{j}},\mathbf{I}_{K_{\ell}})\) is a matrix-variate normal distribution with row and column covariance matrices as \(\gamma\mathbf{I}_{K_{j}}\) and \(\mathbf{I}_{K_{\ell}}\), respectively. We assume a conjugate inverse-gamma prior on the causal effect size, \(\gamma\sim\text{InverseGamma}(a_{\gamma},b_{\gamma})\). We choose \(a_{\gamma}=b_{\gamma}=1\). We fix \(s=0.02\) so that when \(E_{j\ell}=0\), \(\mathbf{B}_{j\ell}\) is negligibly small. Priors on the parameters of the Gaussian mixture distribution.We choose conjugate priors for the parameters of the Gaussian mixture distribution: \[(\pi_{jk1},\ldots,\pi_{jkM_{jk}})\sim\text{Dirichlet}(\beta, \ldots,\beta),\ \ \ \ \ \forall\ j\in[p],k\in[S]\] \[\mu_{jkm}\sim N(a_{\mu},b_{\mu}),\ \tau_{jkm}\sim\text{ InverseGamma}(a_{\tau},b_{\tau}),\ \forall\ j\in[p],k\in[S],m\in[M_{jk}]\] We have fixed values for the hyperparameters, \(\beta=1,a_{\mu}=0,b_{\mu}=100,a_{\tau}=b_{\tau}=1\). Prior on the noise variances.We assume a conjugate prior for \(\sigma_{j}\sim\text{InverseGamma}(a_{\sigma},b_{\sigma})\), \(\forall\ j\in[p]\). We choose \(a_{\sigma}=b_{\sigma}=0.01\). We simulate posterior samples through Markov chain Monte Carlo (MCMC). Details are given in Section B of the Supplementary Materials. Sensitivity analyses will be conducted to test the hyperparameters including \((a_{\gamma},b_{\gamma}),(a_{\tau},b_{\tau}),(a_{\sigma},b_{\sigma}),s,R,S,M\) and \(\beta\). ## 4 Simulation Studies Data generationThe data were simulated according to various combinations of sample size \((n)\), number of nodes \((p)\), and grid size \((m_{j}=d\ \forall j\in[p])\) where \(n\in\{75,150,300\}\), \(p\in\{20,40,60\}\), and \(d\in\{125,250\}\). The grid evenly spans the unit interval \([0,1]\); the results with unevenly spaced grids are presented in Section C of the Supplementary Materials. The true causal graph \(\mathcal{G}\) was generated randomly with edge formation probability \(2/p\). Given \(\mathcal{G}\), each non-zero block \(\mathbf{B}_{j\ell}\) of the causal effect matrix was generated from the standard matrix-variate normal distribution. We set the true number of basis functions to be \(K=4\). In order to generate \(K=4\) orthonormal basis functions, we first simulated unnormalized basis functions by expanding them further with 6 cubic B-spline basis functions where the coefficients were drawn from the standard normal distribution and then empirically orthonormalized them. The basis coefficients \(\tilde{\mathbf{\alpha}}\) were generated following (3.1.1) with the exogenous variables \(\tilde{\mathbf{\epsilon}}_{j}\) drawn independently from Laplace distribution with location parameter \(\mu=0\) and scale parameter \(b=0.2\). 
We have also considered other non-Gaussian distributions for the exogenous variables; the corresponding results are provided in Section C of the Supplementary Materials. Finally, noisy observations were simulated following (2.2.6) with the signal-to-noise ratio, i.e., the mean value of \(|Y_{j}^{(i)}(t)|/\sigma_{j}\) across all \(i\) and \(t\), set to 5. Here, superscript \((i)\) denotes the \(i\)th sample, where \(i\in[n]\). For the implementation of the proposed FENCE, we fixed the number of mixture components to be 10 and ran MCMC for 5,000 iterations (discarding the first 2,000 iterations as burn-in and retaining every 5th iteration after burn-in). The causal graph \(G\) was then estimated by using the median probability model (Barbieri and Berger, 2004), i.e., by thresholding the posterior probability of inclusion at 0.5. Methods for comparison.We compared our method with fLiNG (Zhou et al., 2022), a recently proposed directed acyclic graph (DAG) for multivariate functional data. Codes for Lee and Li (2022); Yang and Suzuki (2022) are not publicly available. Hence for more comparison, we considered two _ad hoc_ two-step approaches. In the first step of both approaches, we obtained the basis coefficients by carrying out functional principal component analysis (fPCA) using the fdapace(Zhou et al., 2022) package in R. Then in the second step, given the basis coefficients, we estimated causal graphs using existing causal discovery methods, (i) LiNGAM (Shimizu et al., 2006) (ii) PC (Spirtes and Glymour, 1991) and (iii) CCD (Richardson, 1996); we call these three approaches fPCA-LiNGAM, fPCA-PC and fPCA-CCD respectively. Note that we did not use SEM-based cyclic discovery algorithm, LiNG-D (Lacerda et al., 2008) in the second step due to the unavailability of the code. LiNGAM estimates a causal DAG based on the linear non-Gaussian assumption whereas PC generally returns only an equivalence class of DAGs based on conditional independence tests. CCD algorithm is a constraint-based causal discovery method, which yields an equivalence class of cyclic causal graphs. LiNGAM and PC are implemented in the pcalg package (Kalisch et al., 2018) in R. CCD algorithm is implemented in the py-tetrad(Ramsey et al., 2018) package in python. Performance metrics.To assess the graph recovery performance, we calculated the true positive rate (TPR), false discovery rate (FDR), and Matthew's correlation coefficient (MCC). For TPR and MCC, higher is better, whereas lower FDR is better. Results.Table 1 summarizes the results of 50 repeat simulations, demonstrating that the proposed FENCE model outperforms all competitors (fLiNG, fPCA-LiNGAM, and fPCA-CCD) across all combinations of \(n\), \(p\), and \(d\). We provide the results of fPCA-PC in Section D of the Supplementary Materials, which are similar to those of fPCA-LiNGAM. We favored fPCA-PC and fPCA-CCD by counting a non-invariant edge between two nodes as a true positive as long as the two nodes are adjacent in the true graph. The superiority of FENCE is not unexpected for three reasons. First, fLiNG, fPCA-LiNGAM, and fPCA-PC are not specifically designed for learning cyclic graphs. Second, two-step approaches like fPCA-LiNGAM, fPCA-PC, and fPCA-CCD do not necessarily capture the causally embedded space through the functional principal components. Third, although fPCA-CCD can handle cyclic graphs, it, being a two-step approach, fails to capture true functional dependencies. 
Overall, these findings provide strong evidence of the effectiveness of FENCE compared to existing methods. **Additional simulations.** We considered additional simulation scenarios with unevenly spaced grids, general exogenous variable distributions, true acyclic graphs and data generated using a non-linear SEM, and also conducted sensitivity analyses of FENCE with respect to several hyperparameters; the results are presented in Section C of the Supplementary Materials. The performance of the proposed method is consistently better than that of competing methods and is relatively robust with respect to hyperparameter choices. Table 1: Comparison of performance of the various methods (TPR, FDR and MCC over 50 simulation replicates) under the different combinations of \(n\), \(p\) and \(d\). ## 5 Real Data Application **Brain EEG data.** We demonstrate the proposed FENCE model on a brain EEG dataset from an alcoholism study (Zhang et al., 1995). This dataset was earlier used to demonstrate functional undirected graphical models (Zhu et al., 2016; Qiao et al., 2019) and functional Bayesian networks (Zhou et al., 2022). Data were initially obtained from 64 electrodes placed on subjects' scalps, which captured EEG signals at 256 Hz (3.9 ms epoch) during a one-second period. The study consists of 122 subjects, out of which 77 are in the alcoholic group and 45 are in the control group. Each subject completed 120 trials. During each trial, the subject was exposed to either a single stimulus (a single picture) or two stimuli (a pair of pictures) shown on a computer monitor. We particularly focus on the EEG signals filtered at the \(\alpha\) frequency band between 8 and 12.5 Hz using the eegfilt function of the eeglab toolbox of Matlab, as \(\alpha\)-band signals are associated with inhibitory control (Knyazev, 2007). Given that the EEG measurements were recorded from each subject over multiple trials, these measurements are not independent of each other due to the time dependency of the trials. Moreover, since the measurements were obtained under various stimuli, the signals may have been affected by different stimulus effects. To mitigate these issues, we calculated the average of the band-filtered EEG signals for each subject across all trials under a single stimulus, resulting in a single event-related potential curve per electrode per subject. By doing so, we eliminated the potential dependence between the measurements and the influence of different stimulus types. We performed separate analyses of the two groups to identify both the similarities and dissimilarities in their brain effective connectivity. We conducted a Shapiro-Wilk normality test on the observed functions for each of the \(p=64\) scalp positions at each of the \(m_{j}=256\), \(\forall j\in[p]\), time points to evaluate their Gaussianity. The results showed that for numerous combinations of scalp position and time point, the null hypothesis (which assumes that the observations are marginally Gaussian) was rejected. 
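A brief, hypothetical sketch of the preprocessing and normality screening just described; the array below is a random stand-in with the stated dimensions, not the actual recordings.

```python
# Hypothetical sketch of the EEG preprocessing and normality screening described
# above; the array is a random stand-in for the real band-filtered recordings.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(2)
# (subjects, trials, electrodes, time points) for one group and one stimulus type
eeg = rng.standard_normal((45, 120, 64, 256))

# One event-related potential curve per electrode per subject: average across
# trials to remove trial-to-trial dependence and stimulus effects.
erp = eeg.mean(axis=1)                         # (45, 64, 256)

# Shapiro-Wilk test across subjects at every (scalp position, time point) pair.
pvals = np.array([[shapiro(erp[:, j, t]).pvalue for t in range(erp.shape[2])]
                  for j in range(erp.shape[1])])
print("share of (position, time) pairs rejecting marginal Gaussianity at 5%:",
      (pvals < 0.05).mean())
```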
Thus, we conclude that the non-Gaussian assumption of the proposed model is appropriate. Next, for posterior inference, we ran MCMC for 20,000 iterations, discarded the first half as burn-in, and retained every 10th iteration after burn-in. The causal graph estimated by thresholding the posterior inclusion probability at 0.9 is shown in Figure 2. Figure 2: Estimated causal brain connectivity from EEG records by FENCE with posterior probability of inclusion \(\geq 0.9\), separately for the alcoholic (left) and control (right) group. The bi-directed edges are just directed cycles, i.e., \(i\leftrightarrow j\) means \(i\to j\) and \(i\gets j\). **Results.** There are some interesting findings. First, for both groups (alcoholic and control), brain regions that are located in adjacent positions tend to be more connected than brain regions that are far apart. Second, dense connectivity is observed in the frontal region of the brain in both groups, with multiple cycles being formed. Third, compared to the control group, the alcoholic group has more connectivity across the left parietal and occipital lobes. Fourth, the same cycle of Iz, Cz, and RPA is observed in both groups. **Validity.** We now discuss the validity of our real data results. Hayden et al. (2007) observed that alcohol-dependent subjects exhibited frontal asymmetry, distinguishing them from the control group. Our own investigation aligns well with these results, as we have identified denser connectivity across various brain regions in the middle and left areas of the frontal lobe among alcoholic subjects, when compared to controls. Furthermore, Winterer et al. (2003) documented coherent differences between alcoholics and controls in the posterior hemispheres, specifically in the temporal, parietal, and occipital lobes. In accordance with their findings, our study provides additional support for this claim, as we have observed heightened activity, with several cycles formed, in those same regions within the alcoholic group when compared to the control group. ## 6 Discussion We briefly highlight here several potential avenues for future development of our current work. First, an intriguing and important direction would be to explore relaxing the causal sufficiency assumption underlying the model identifiability. Second, our current model is based on a linear non-Gaussian assumption over the exogenous variables, but a nonlinear model could be considered as an alternative. Lastly, an alternative approach to determining the effective number of basis functions that span the lower-dimensional causal embedded space would be to utilize ordered shrinkage priors (Bhattacharya and Dunson, 2011; Legramanti et al., 2020) in order to adaptively eliminate redundant components, resulting in a more flexible methodology. ## 7 Acknowledgement Ni's research was partially supported by NSF DMS-2112943 and NIH 1R01GM148974-01.
2309.04284
Viewing the process of generating counterfactuals as a source of knowledge: a new approach for explaining classifiers
There are now many explainable AI methods for understanding the decisions of a machine learning model. Among these are those based on counterfactual reasoning, which involve simulating features changes and observing the impact on the prediction. This article proposes to view this simulation process as a source of creating a certain amount of knowledge that can be stored to be used, later, in different ways. This process is illustrated in the additive model and, more specifically, in the case of the naive Bayes classifier, whose interesting properties for this purpose are shown.
Vincent Lemaire, Nathan Le Boudec, Victor Guyomard, Françoise Fessant
2023-09-08T12:06:48Z
http://arxiv.org/abs/2309.04284v4
# Viewing the process of generating counterfactuals as a source of knowledge ###### Abstract There are now many comprehension algorithms for understanding the decisions of a machine learning algorithm. Among these are those based on the generation of counterfactual examples. This article proposes to view this generation process as a source of creating a certain amount of knowledge that can be stored to be used, later, in different ways. This process is illustrated in the additive model and, more specifically, in the case of the naive Bayes classifier, whose interesting properties for this purpose are shown. ## 1 Introduction Machine learning, one of the branches of artificial intelligence, has enjoyed many successes in recent years. The decisions made by these models are increasingly accurate, but also increasingly complex. However, it appears that some of these models are like black boxes: their decisions are difficult, if not impossible, to explain [2]. This lack of explicability can lead to a number of undesirable consequences: lack of user confidence, reduced usability of the models, presence of biases, etc. These needs have given rise to the field of XAI (eXplainable AI). XAI [16, 1] is a branch of artificial intelligence that aims to make the decisions made by machine learning models intelligible, understandable, to users. Among XAI methods, counterfactual reasoning is a concept from psychology and the social sciences [14]. It involves examining possible alternatives to past events [17]. Humans often use counterfactual reasoning by imagining what would happen if an event had not occurred, and this is exactly what counterfactual reasoning is. Applied to artificial intelligence, the question is, for example, "Why did the model make this decision rather than another?" or "How would the decision have been different if a certain condition had been changed?". Within the framework of counterfactual reasoning, this article proposes to consider this generation process as a source of knowledge that can be stored and then exploited in diverse ways. This process is illustrated in the case of additive models and in particular in the case of the naive Bayes classifier, whose interesting properties for this purpose will be shown. The rest of this paper is organised as follows: section 2 presents the key concepts used in the rest of the paper, so that this paper can be read independently. Section 3 presents the first contribution of this paper by showing that it is possible to find "additive trajectories" of counterfactuals in the case of the naive Bayes classifier. The section 4 presents the second contribution of this paper by detailing how these trajectories, this knowledge, can be store in a database and then used. Finally, before concluding, the section 5 illustrates, by means of an unsubscription problem, how clustering applied to this database generates new knowledge. Note: In the remainder of this article we will focus on supervised classification problems where a predictive model \(f\) is trained using \(N\) examples, each described by a set of \(d\) explanatory variables (a vector \(X=\{X_{1},....,X_{d}\}\), derived from a \(\mathcal{X}\) distribution) so as to predict a categorical target variable denoted \(Y=\{y_{1},...,y_{C}\}\) derived from a \(\mathcal{Y}\) distribution. 
## 2 Concepts ### Counterfactual and semi-factual example In machine learning, a counterfactual explanation aims to explain why a particular result was obtained by suggesting hypothetical changes in the input features \(X\) that might have led to a different prediction [12, 19]. In other words, it identifies the factors that could have influenced a particular outcome. Counterfactual reasoning can be defined as follows: Definition 1: Let \(f:\mathcal{X}\mapsto\mathcal{Y}\) be a machine learning model such that, for a given individual \(X\), \(f(X)=\hat{y}_{i}\). In this case, a counterfactual example is a new example \(X^{\prime}\) such that \(f(X^{\prime})\neq\hat{y}_{i}\) and \(X\neq X^{\prime}\). If \(X\) had different values for some of its explanatory variables (\(X^{\prime}_{1},X^{\prime}_{2},...\)) while all the other variables remained identical, the class \(\hat{y}_{j}\neq\hat{y}_{i}\) would have been returned. Above, \(\hat{y}_{i}\) is known and factual, while \(\hat{y}_{j}\) is the unexpected result that did not happen, the counterfactual. Note, however, that a change from \(X\) to \(X^{\prime}\) does not necessarily lead to a change in the class prediction: this is known as a semi-factual example [4]. The knowledge of counterfactual or semi-factual examples makes it possible to explain how to change, to modify, the decisions of the model: "Your bank loan was not accepted BUT IF you had more seniority in our company, the decision would have been the opposite (or closer to acceptance)". These two concepts are illustrated in Figure 1. Figure 1: Illustration of a counterfactual and a semi-factual. The red dots represent initial examples (\(X\)). The orange dot represents a semi-factual, the purple dot represents a counterfactual and the white line represents the decision boundary between the red and green classes. The understanding produced by a "counterfactual" method of explanation is local, because it applies to a particular individual, and "instance-based", because it is produced in the form of a new example. ### Informativeness and actionability of variables In decision making, and particularly in the context of counterfactual reasoning, the identification of informative and actionable variables is essential. An informative variable is defined as a variable that has a significant impact on the value of the output of the predictive model. However, it is not enough to know which variables are informative. It is also important to identify the actionable variables, which we define as variables on which it is possible to act. The most valuable type of variable is the actionable informative variable, i.e. one that not only has a significant impact on the output variable, but can also be acted upon to improve or influence the result. ### The concept of trajectories The scientific literature offers numerous methods for generating counterfactual examples, either model dependent (depending on the type of classifier) or agnostic to the predictive model [17, 3]. In this article we are interested in methods that induce the notion of trajectory. Returning to Figure 1, we are interested in methods which allow us to approach the decision frontier step by step (until \(X\) crosses it) by successive modifications of the initial example. In this case, the succession of \(X^{\prime}\) resulting from these successive operations is called a 'trajectory' (see Figure 2). 
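As a small, hypothetical illustration of these definitions and of the notion of a trajectory, consider the sketch below; the classifier, data and the chosen feature modifications are placeholder assumptions, not those used in this article.

```python
# A minimal, hypothetical illustration of Definition 1 and of a trajectory of
# successive single-variable modifications; classifier, data and modifications
# are placeholder assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
f = LogisticRegression().fit(X, y)

def kind(x, x_prime):
    """'counterfactual' if the predicted class flips, otherwise 'semi-factual'."""
    return "counterfactual" if f.predict([x])[0] != f.predict([x_prime])[0] else "semi-factual"

x0 = X[np.argmax(f.predict(X) == 0)]          # pick an example predicted as class 0
trajectory = [x0.copy()]
for j in (0, 1):                              # modify one explanatory variable at a time
    step = trajectory[-1].copy()
    step[j] += 1.0                            # an assumed, user-chosen modification
    trajectory.append(step)
    print(f"step {len(trajectory) - 1}: modify X_{j + 1} ->", kind(x0, step))
```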
Figure 2: Illustration of two counterfactuals: one achieved in 1 step, the second in 3 steps. Over the course of this article, we will be particularly interested in predictive methods and/or models that allow for a notion of additivity in this trajectory. That is, the univariate modifications of \(X\) add up and make it possible to approach the decision frontier by variable-by-variable modification, step by step, as shown in Figure 2. We will show later that this is the case for the naive Bayes classifier. ## 3 Optimised search for counterfactuals in the case of the naive Bayes classifier ### Reminders on the naive Bayes classifier The naive Bayes classifier (NB) is a widely used tool in supervised classification problems. It has the advantage of being efficient for many real data sets [7]. However, the naive assumption of conditional independence of the variables can, in some cases, degrade the classifier's performance. This is why variable selection methods have been developed [11]. They mainly consist of variable addition and deletion heuristics to select the best subset of variables maximizing a classifier performance criterion, using a wrapper-type approach [6]. It has been shown that averaging a large number of selective naive Bayes classifiers, trained with different subsets of variables, amounts to considering only one model with a weighting on the variables. Bayes' formula under the assumption of independence of the input variables conditionally to the class variable becomes: \[P(C_{k}|X)=\frac{P(C_{k})\prod_{i}P(X_{i}|C_{k})^{W_{i}}}{\sum_{j=1}^{K}(P(C_{j})\prod_{i}P(X_{i}|C_{j})^{W_{i}})} \tag{1}\] where \(W_{i}\in[0,1]\) is the weight of variable \(i\). The predicted class is the one that maximizes the conditional probability \(P(C_{k}|X)\). The probabilities \(P(X_{i}|C_{k})\) can be estimated by interval, using discretization for numerical variables; a Gaussian naive Bayes could also be considered. For categorical variables, this estimation can be done directly if the variable takes few different modalities, or after grouping (of values) in the opposite case. ### Criteria to be optimised in the search for counterfactuals Let \(X\) be an example, and consider two classes \(C_{1}\) and \(C_{2}\). The search for a counterfactual consists in increasing the probability of belonging to the target class \(C_{1}\) when \(X\) is initially predicted by the model to belong to \(C_{2}\) (and vice versa). To do this, we could develop a greedy algorithm, which is expensive in terms of computation time and does not necessarily have the additivity properties described above in Section 2.3. We propose below to pose the problem differently, rewriting Equation 1 and looking at how to increase the probability of belonging to a particular class of interest. 
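For reference, a minimal sketch of evaluating the weighted naive Bayes posterior of Equation (1); the priors, conditional tables and weights below are toy values chosen only to make the computation concrete, and the derivation that follows rewrites exactly this computation.

```python
# A minimal sketch of the weighted naive Bayes posterior in Equation (1) for a
# pre-discretized example; the priors, tables and weights are toy values.
import numpy as np

def weighted_nb_posterior(x, priors, cond_probs, weights):
    """P(C_k | X) with x given as one interval/group index per variable."""
    log_post = np.log(priors).copy()
    for i, xi in enumerate(x):
        log_post += weights[i] * np.log(cond_probs[i][:, xi])
    post = np.exp(log_post - log_post.max())   # stable normalisation
    return post / post.sum()

priors = np.array([0.75, 0.25])                               # e.g. non-churn / churn
cond_probs = [np.array([[0.6, 0.3, 0.1], [0.2, 0.3, 0.5]]),   # variable 1: 3 intervals
              np.array([[0.7, 0.3], [0.4, 0.6]])]             # variable 2: 2 groups
weights = np.array([0.67, 0.78])                              # W_i in [0, 1]
print(weighted_nb_posterior([2, 0], priors, cond_probs, weights))
```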
To achieve this goal, and to maximise \(P(C_{j}|X^{\prime})\) with respect to the initial value of \(P(C_{j}|X)\), we will exploit the following proposition: **Proposition 1**: _If we take \(X\) and \(X^{\prime}\) as two elements of the input space \(\mathcal{X}\), we show that for a two-class classification problem, searching for counterfactuals of \(X\) amounts to examining the evolution of the value of \(\Delta\) when we change some of the values of \(X\) to \(X^{\prime}\), such that:_ \[\begin{split}\Delta(X,X^{\prime})&=\left(\sum_{i=1 }^{d}W_{i}(\log P(X_{i}|C_{1})-\log P(X_{i}|C_{2}))\right)\\ &-\left(\sum_{i=1}^{d}W_{i}(\log P(X_{i}^{\prime}|C_{1})-\log P( X_{i}^{\prime}|C_{2}))\right)\end{split} \tag{2}\] _Proof:_ If we start again from the equation 1 \[P(C_{j}|X)=\frac{P(C_{j})\prod_{i=1}^{d}P(X_{i}|C_{j})^{W_{j}}}{\sum_{z}[P(C_{z })\prod_{i=1}^{d}P(X_{i}|C_{z})^{W_{j}}]}\] by posing: \[L_{j}(X)=\log\left(P(C_{j})\prod_{i=1}^{d}P(X_{i}|C_{j})^{W_{i}}\right)=\log P (C_{j})+\sum_{i=1}^{d}W_{i}\log P(X_{i}|C_{j}),\] then we have: \[P(C_{j}|X)=\frac{e^{L_{j}(X)}}{\sum_{z}e^{L_{z}(X)}}=\frac{1}{\sum_{z}e^{L_{z }(X)-L_{j}(X)}}=\frac{1}{1+\sum_{z\neq j}e^{L_{z}(X)-L_{j}(X)}}\] and so in the case of two classes: \[P(C_{j}|X)=\frac{1}{1+e^{L_{j^{\prime}}(X)-L_{j}(X)}} \tag{3}\] We can see that to get closer to the class \(C_{j}\), all we have to do is reduce the quantity \(L_{j^{\prime}}(X)-L_{j}(X)\), and thus reduce: \[\log P(C_{j^{\prime}})+\sum_{i=1}^{d}W_{i}\log P(X_{i}|C_{j^{\prime}})-\log P (C_{j})-\sum_{i=1}^{d}W_{i}\log P(X_{i}|C_{j})\] Since \(P(C_{j})\) and \(P(C_{j^{\prime}})\) are constant, this is equivalent to decreasing: \[\sum_{i=1}^{d}W_{i}\log P(X_{i}|C_{j^{\prime}})-\sum_{i=1}^{d}W_{i}\log P(X_{ i}|C_{j})\] and therefore to take an interest in the distance: \[\Delta(X,X^{\prime}) =\sum_{i=1}^{d}W_{i}(\log P(X_{i}|C_{j^{\prime}})-\log P(X_{i}|C_{j}))\] \[-\sum_{i=1}^{d}W_{i}(\log P(X^{\prime}_{i}|C_{j^{\prime}})-\log P(X ^{\prime}_{i}|C_{j}))\] If \(\Delta\) is positive then we are getting closer to the decision frontier (or even crossing it) if \(\Delta\) is negative then we are moving away from the decision frontier and therefore away from the desired objective. The counterfactual search algorithm becomes straightforward. Simply calculate, for a given example \(X\), the value of \(\Delta\) for each explanatory variable and for each value of this explanatory variable. Then, given these values, iterate the successive changes in order to obtain a counterfactual example. These variable-by-variable changes have the property of being additive. 
Indeed, if we consider four examples \(X^{0}\), \(X^{{}^{\prime}1}\), \(X^{{}^{\prime}2}\) and \(X^{{}^{\prime}3}\in\mathcal{X}\), which are respectively (i) an initial example \(X^{0}\), then the same example for which we have modified only one explanatory variable \(l\) for \(X^{{}^{\prime}1}\), \(m\) for \(X^{{}^{\prime}2}\), and finally an example that cumulates the two univariate modifications \(l\) and \(m\) for \(X^{{}^{\prime}3}\), such that : \[\exists!\ l\ \text{such as}\ X^{{}^{\prime}1}_{l}\neq X^{0}_{l}\] \[\exists!\ m\ \text{such as}\ X^{{}^{\prime}2}_{m}\neq X^{0}_{m}\ \text{and}\ m\neq l\] and \[X^{{}^{\prime}3}_{k}=\begin{cases}X^{{}^{\prime}1}_{l},&\text{if}\ k=l\\ X^{{}^{\prime}2}_{m},&\text{if}\ k=m\\ X^{0}_{k},&\text{otherwise}\end{cases}\] then it is obvious, from the additivity over all the variables in the equation 2, that we have : \(\Delta(X_{0},X^{\prime}_{3})=\Delta(X_{0},X^{\prime}_{1})+\Delta(X_{0},X^{ \prime}_{2})\). Modifying one variable and then the other is equivalent to modifying them simultaneously in the calculation of \(\Delta\). It should also be noted that this additivity is demonstrated from the equation 3, so we can be sure of increasing the value **normalised** of the probability of the class of interest, \(P(C|X)\), which is a plus. Note: The list of \(\Delta\) values can potentially be very large if the number of distinct values of the explanatory variables is large. Nevertheless, it is common for some naive Bayes classifiers (except the Gaussian version) [20, 21] to discretise the numerical variables and group the modalities of the categorical variables, in a supervised manner, in order to obtain an estimate of the conditional densities (\(P(X_{i}|C)\)) which are then used in the calculation of \(P(C|X)\). These supervised discretisation and grouping operations often produce a limited number of intervals or groups of modalities. This makes it possible to obtain a reasonable number of values to test. ## 4 Creation and use of a knowledge base ### Creation a knowledge base In the foregoing we have shown how to increase the probability of belonging to a class of interest and quantify this increase using the equation 2. We have also shown that this quantity is additive as we change the values of the explanatory variables one by one. We now propose to store these \(\Delta\) values in a table in the form shown in Table 1. It should be remembered that it is assumed that the numerical variables have been discretised beforehand and that a grouping of modalities has been carried out for the categorical variables. Each variable is therefore represented by a limited number of values (corresponding to the values of \(P(X|C)\)). Table 1 gives, for illustrative purposes, the values stored in the case where the predictive model uses two explanatory variables \(X_{1}\) and \(X_{2}\) respectively discretised (or grouped) into 3 and 2 intervals (groups) of values \((I)\). For each individual, \(l\), each row of the table, we store the values of the equation 2 where \(\Delta(X^{l}_{i,*\to m},X^{l})\) is the value of \(\Delta\) when the value of variable \(i\) is changed from its initial value '\(*^{\prime}\) to the value of interval (group) \(m\) (simplified as \(\Delta(X^{l}_{i,*\to m})\) in the table). We detail in the following sections how to exploit the knowledge stored in this way. In this article, only the naive Bayes classifier is considered, but any other classifier and/or counterfactual creation method that produces a similar data table could be used. 
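A hedged sketch of such a knowledge base of \(\Delta\) values (Equation 2) follows, together with one simple way of exploiting it, namely the minimal-change search detailed in the next subsection; the toy probability tables, priors and weights are illustrative assumptions, with class 0 playing the role of the class of interest \(C_{j}\).

```python
# A hedged sketch of the knowledge base of Delta values (Equation 2) and of the
# minimal-change search detailed in the next subsection; tables are toy values.
import numpy as np

priors = np.array([0.6, 0.4])
cond_probs = [np.array([[0.6, 0.3], [0.2, 0.5]]),              # variable 1: 2 intervals
              np.array([[0.5, 0.3, 0.2], [0.1, 0.3, 0.6]])]    # variable 2: 3 groups
weights = np.array([0.7, 0.4])

def delta_table(x, j=0):
    """Delta for every single change 'variable i -> interval m' of example x."""
    jp = 1 - j
    table = {}
    for i, xi in enumerate(x):
        base = weights[i] * (np.log(cond_probs[i][jp, xi]) - np.log(cond_probs[i][j, xi]))
        for m in range(cond_probs[i].shape[1]):
            new = weights[i] * (np.log(cond_probs[i][jp, m]) - np.log(cond_probs[i][j, m]))
            table[(i, m)] = base - new          # positive: the change moves x towards C_j
    return table

def posterior_of_class(x, j=0):
    logp = np.log(priors).copy()
    for i, xi in enumerate(x):
        logp += weights[i] * np.log(cond_probs[i][:, xi])
    p = np.exp(logp - logp.max())
    return (p / p.sum())[j]

def minimal_change_counterfactual(x, j=0):
    """Apply the largest positive Delta values first until class j is predicted."""
    x, used = list(x), set()
    moves = sorted(((d, i, m) for (i, m), d in delta_table(x, j).items()
                    if m != x[i] and d > 0), reverse=True)
    for d, i, m in moves:
        if i in used:
            continue                            # at most one change per variable
        x[i] = m
        used.add(i)
        if posterior_of_class(x, j) > 0.5:      # decision frontier crossed
            return x, used
    return None, used

knowledge_base = {idx: delta_table(x) for idx, x in enumerate([[1, 2], [0, 0]])}
print(minimal_change_counterfactual([1, 2]))    # e.g. ([0, 2], {0})
```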
### Generation of counterfactuals with certain properties Minimising the number of changes - We can set ourselves the criterion of finding the counterfactual with the property of involving the smallest number of modified variables. To do this, we will exploit the knowledge base. For a given individual, \(X\), all we have to do is read the largest value of \(\Delta\) and then, thanks to the additivity property, the second largest value of \(\Delta\) for a second variable, and so on. At each stage we check whether \(\hat{f}(X^{\prime})>0.5\). If this is the case, the counterfactual has been found. (We could, on the other hand, maximise the number of changes, but this is often of little interest in practice.) Taking account of business constraints or criteria - In other cases, the aim may be to find the closest counterfactual, but under 'business constraints' defined by the user. For example, the search for counterfactuals could be restricted to making changes only in adjacent intervals (e.g. intervals of close values). \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{\(X_{1}\)} & \multicolumn{3}{c|}{\(X_{2}\)} \\ \hline & \(I_{1}\) & \(I_{2}\) & \(I_{1}\) & \(I_{2}\) & \(I_{3}\) \\ \hline \(X^{1}\) & \(\Delta(X^{1}_{1,*\to 1})\) & \(\Delta(X^{1}_{1,*\to 2})\) & \(\Delta(X^{1}_{2,*\to 1})\) & \(\Delta(X^{1}_{2,*\to 2})\) & \(\Delta(X^{1}_{2,*\to 3})\) \\ \hline \(X^{2}\) & \(\Delta(X^{2}_{1,*\to 1})\) & \(\Delta(X^{2}_{1,*\to 2})\) & \(\Delta(X^{2}_{2,*\to 1})\) & \(\Delta(X^{2}_{2,*\to 2})\) & \(\Delta(X^{2}_{2,*\to 3})\) \\ \hline \end{tabular} \end{table} Table 1: Illustration of the knowledge base created in the form of \(\Delta\), here in the case of two variables and two examples. Given Table 1, we would be allowed to move people from interval 1 to 2 for the second variable, but not from 1 to 3. The user can also constrain the search by requiring that one of the variables must always be changed to a certain value, and so on. This type of constraint can easily be considered and incorporated into a counterfactual search algorithm using the proposed knowledge table. The literature on counterfactuals sets out some interesting properties on the subject, such as (i) the notion of minimality: having a counterfactual that differs as little as possible from the original example; (ii) realism: the generated counterfactual must not contain changes that do not make sense from the point of view of the data (e.g. a decrease in the "age" of an individual), also known as plausibility [15]; (iii) generating counterfactuals that are similar to real examples or in dense regions of the class of interest, in order to have robust counterfactuals [5]; in the case of the naive Bayes classifier, we can use a Bayesian distance as proposed in [13]; and so on. All these properties are easily achieved with Table 1, since the user can choose the list and the order of the variables they wish to intervene on, as well as a distance of their choice between \(X\) and \(X^{\prime}\). ### Additional usable knowledge Preventive and reactive actions - So far, we have mainly talked about creating counterfactuals to explain the model's decisions (as mentioned in the introduction to this article), but potentially also to be able to take reactive actions. 
For example, if a bank customer is predicted to "leave" (a churner), the counterfactual example indicates one or more actions to be taken in order to try to keep them as a customer: these are known as **"reactive" actions**. Conversely, the study of counterfactual trajectories a posteriori is of great interest, as it also allows us to identify when a trajectory is approaching the frontier. In such situations, reactive measures can be taken to reverse the trend and avoid undesirable outcomes. This approach is particularly relevant when it comes to predicting churn, for example, as it enables us to identify customers who are "starting" to churn. By being proactive, it is possible to put in place targeted strategies to retain these customers and bring them back to a quality service. Finally, our knowledge base can also be used to carry out **"preventive" actions**. Going back to Figure 1, we can try to create a semi-factual which moves away from the decision frontier: "The customer is not predicted as leaving but is nevertheless close to the decision frontier". In this case, all we need to do is look at the negative values of \(\Delta\) and take steps away from the frontier according to the user's wishes. For example, all the people who are one step away from crossing the decision frontier, who are easily identifiable in this case, could be targeted. Profile creation - The last way of using the knowledge base that we will describe here is to carry out an exploratory analysis using a clustering technique. Using the knowledge base, it is possible to group individuals according to the impact of each possible change on the classifier's output. Analysis of the clusters created can be a source of insights. This is illustrated in the next section. ## 5 Illustration of a clustering on an unsubscription case **1) Dataset and classifier used:** This section uses the "Telco Customer Churn" dataset (widely used for analysing the results of XAI methods), made available by a fictitious telecommunications company that provided home telephone and internet services to 7043 customers in California. The aim is to classify people who may or may not leave the company. Each customer is described by 20 descriptive variables (3 numerical and 17 categorical) plus the class variable 'churn' (yes/no), which has two modalities (75% non-churn). This dataset can be downloaded from Kaggle [9]. We use 80% of the data for learning and 20% for testing. The naive Bayes classifier is produced using the Khiops library, which is available on GitHub [10]. During the learning process, only 10 of the variables were retained in the model. 
Below are all the intervals of values or groups of modalities obtained during the pre-processing process (the value in brackets gives the weight of the variable in the model, Equation 1, values from 0 to 1):
* Tenure (\(W_{1}\)=0.67) : [0-0.5], ]0.5-1.5], ]1.5-5.5], ]5.5-17.5], ]17.5-42.5], ]42.5-58.5], ]58.5-71.5], ]71.5-72]
* InternetService (\(W_{2}\)=0.78) : [Fiberoptic], [DSL], [No]
* Contract (\(W_{3}\)=0.37) : [Month-to-month], [Twoyear], [Oneyear]
* PaymentMethod (\(W_{4}\)=0.29) : [Mailedcheck], [Creditcard(automatic), Electroniccheck, Banktransfer(automatic)]
* OnlineSecurity (\(W_{5}\)=0.15) : [No], [Yes], [No internet service]
* TotalCharges (\(W_{6}\)=0.29) : [18.8;69.225], ]69.225;91.2], ]91.2;347.9], ]347.9;1182.8], ]1182.8;2065.7], ]2065.7;3086.8], ]3086.8;7859], ]7859;8684.8]
* PaperlessBilling (\(W_{7}\)=0.40) : [Yes], [No]
* TechSupport (\(W_{8}\)=0.04) : [No], [Yes], [No internet service]
* SeniorCitizen (\(W_{9}\) = 0.28): [0], [1]
* Dependents (\(W_{10}\) = 0.10): [Yes], [No]
For all 10 variables, there are a total of 36 intervals/groupings and therefore 26 values of \(\Delta\) to calculate in our knowledge base. Indeed, for each individual and each variable, one \(\Delta\) value is null: the one corresponding to the factual value, which therefore does not need to be calculated. **2) Classifier analysis stage:** Before carrying out the clustering stage, it is important to take an interest in the variables retained during the classification model training stage. For example, although it may be interesting to analyse the 'Tenure' variable, it is clearly not an actionable variable. Indeed, it is not possible to change a customer's seniority in order to make them potentially less likely to churn. The same applies to the 'SeniorCitizen' and 'Dependents' variables. We have also removed the 'PaperlessBilling' variable, which has very little impact on the clustering results described below. As a result, these 4 variables are not retained during the clustering stage below; only the informative, influential and actionable variables are retained (see Section 2.2). (All the variables could have been retained, but the clustering would have been biased by variables that are uninteresting from the point of view of creating counterfactual examples.) **3) Clustering performed:** The clustering procedure is standard: (i) we use the table of \(\Delta\) values calculated on the test set, (ii) we learn a k-means with the L2 [8] distance for different \(k\) values (\(k\in\{2,\ldots,12\}\)), (iii) and finally we retain the k-means whose value of \(k\) corresponds to the 'elbow point' [18] of the curve representing the global reconstruction distance versus the value of \(k\), here \(k=4\). **4) The resulting clusters** are shown in Figure 3. Figure 3: Average profile of individuals in clusters represented as histograms. The names of the values on the abscissa refer to the number of the variables and number of the intervals (groups) described above. For example, ‘3I2’ refers to the third variable (‘Contract’) and its second interval / group of values (‘Twoyear’). The ordinate values are the mean values of the cluster (\(\Delta\)). An analysis of these 4 clusters, combined with the predictions of the classifier, shows that:
* Cluster 1 (10% of the global population and containing 2% of customers predicted to be churners): individuals who can be made less likely to churn mainly by means of variable 3 ('Contract') - i.e.
by trying to get them to take out an annual contract ('Twoyear' or 'OneYear'); NB - this marketing action is fairly difficult to carry out.
* Cluster 2 (24% of the global population and containing no customers predicted to be churners): people who are largely insensitive to changes that would make them less likely to churn (mostly negative mean \(\Delta\) values). They may not be targeted by a 'reactive' marketing campaign (which is in line with the classifier's predictions), but rather by a preventive campaign using the 'contract' variable or the 'payment method' variable (payment by card or direct debit).
* Cluster 3 (45% of the global population and containing 47% of customers predicted to be churners (almost all of the individuals predicted to churn)): some similarities with the individuals in cluster 1 for the 'Contract' variable. On the other hand, we can see that the 5th ('OnlineSecurity') and 8th ('TechSupport') variables have a 'leverage effect' in reducing churn. Offering a security or support option is very attractive to these individuals.
* Cluster 4 (21% of the population and containing no customers predicted to be churners): individuals who are partially opposite to those in the first cluster, for example for the 'Contract' variable, who should not be offered a 'two-year contract' in this case.
The analysis of the clusters obtained here is not exhaustive due to space limitations. It is an exploratory analysis where the data scientist and the business expert will spend the time needed to refine their joint analyses. However, the analysis carried out here allows us to identify interesting 'reactive' actions to be taken with individuals in cluster 3 or preventive actions with individuals in cluster 2. ## 6 Conclusion In the context of methods for explaining the results of a machine learning model, this article has proposed to consider the process of generating counterfactual examples as a source of knowledge that can be stored and then exploited in different ways. This process has been illustrated in the case of additive models and in particular in the case of the naive Bayes classifier, whose interesting properties for this purpose have been shown. We have also suggested the quantities that can be stored and the different ways of exploiting them. Some of the results have been illustrated on a churn problem, but the approach is equally exploitable in other application domains.
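As a brief, hypothetical illustration of the clustering step of Section 5, the sketch below applies k-means with the elbow heuristic to a table of \(\Delta\) values; the random matrix merely stands in for the real knowledge base computed on the Telco test set, and the cluster profiling is schematic.

```python
# Hypothetical illustration of the clustering step of Section 5: k-means on the
# table of Delta values; the random matrix stands in for the real knowledge base.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
delta_matrix = rng.standard_normal((1400, 26))     # test individuals x Delta values

inertia = {k: KMeans(n_clusters=k, n_init=10, random_state=0)
              .fit(delta_matrix).inertia_ for k in range(2, 13)}
# inspect 'inertia' for the elbow point; the paper retains k = 4
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(delta_matrix)
for c in range(4):
    profile = delta_matrix[labels == c].mean(axis=0)
    print(f"cluster {c}: {np.mean(labels == c):.0%} of individuals, "
          f"strongest levers (column indices): {np.argsort(profile)[-3:][::-1]}")
```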
2309.15389
Quantum particle under dynamical confinement: From quantum Fermi acceleration to high harmonic generation
Quantum dynamics of a particle confined in a box with time-dependent wall is revisited by considering some unexplored aspects of the problem. In particular, the case of dynamical confinement in a time-dependent box in the presence of purely time-varying external potential is treated by obtaining exact solution. Also, some external potentials approving separation of space and time variables in the Schrodinger equation with time-dependent boundary conditions are classified. Time-dependence of the average kinetic energy and average quantum force are analyzed. A model for optical high harmonic generation in the presence of dynamical confinement and external linearly polarized monochromatic field is proposed.
S. Rakhmanov, C. Trunk, D. Matrasulov
2023-09-27T03:57:33Z
http://arxiv.org/abs/2309.15389v1
# Quantum particle under dynamical confinement: ###### Abstract Quantum dynamics of a particle confined in a box with time-dependent wall is revisited by considering some unexplored aspects of the problem. In particular, the case of dynamical confinement in a time-dependent box in the presence of purely time-varying external potential is treated by obtaining exact solution. Also, some external potentials approving separation of space and time variables in the Schrodinger equation with time-dependent boundary conditions are classified. Time-dependence of the average kinetic energy and average quantum force are analyzed. A model for optical high harmonic generation in the presence of dynamical confinement and external linearly polarized monochromatic field is proposed. ## Acknowledgement This research is partially supported by European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement ID: 873071, project SOMPATY (Spectral Optimization: From Mathematics to Physics and Advanced Technology). ## 1 Introduction Dynamical confinement in quantum mechanics attracted much attention during past few decades. It is described in terms of the Schrodinger equation with time-dependent boundary conditions. Early treatments of the problem date back to Doescher, who explored basic aspects of the problem [1]. Munier et al. considered more detailed treatment of the problem and computed some physically observable quantities for the problem of time-dependent box [2]. Later Makowsky [3]-[5] and Razavy [6, 7] presented a systematic study of the problem, by considering one-dimensional box with moving walls and classifying time-dependence of the wall approving exact solution of the Schrodinger equation with time-dependent boundary conditions. Unitary transformation mapping time-dependent box to that with fixed walls was found in [6, 7] using an approach developed earlier by Berry and Klein [8]. Some aspects of the problem of quantum box with moving walls and its applications to dynamical Casimir effect was studied in a series of papers by Dodonov et al. [9]-[12]. Berry phase in time-dependent box was considered in [13, 14, 15]. Seba considered the problem of time-dependent box in the context of quantum Fermi acceleration [16]. Application of the time-dependent box to the problem of confined quantum gas was considered in [17, 18], where quantum force operator for dynamical confinement was introduced. The problem of hydrogen atom confined in time-dependent spherical box was considered in [19]. Time-dependent harmonic oscillator which is directly related to time-dependent quantum box was presented in a series of papers by Lewis [20, 21]. Different aspects of the problem of dynamical confinement was studied in [22] -[28]. Inverse problem for dynamical confinement, i.e. the problem of recovering boundary's time-dependence from existing solution is considered in [29]. Dynamical confinement in a half-line is studied in [30]. The problem of time-dependent Neumann boundary conditions is treated in [31]. Extension of the dynamical confinement to relativistic case by considering Dirac equation for time-dependent box was done in [32]. Time-dependent quantum graphs have been considered in the Refs.[33, 34, 35]. Despite the fact that considerable aspects of the problem of dynamical confinement have been considered, some issues in the topic are still remaining as less- or not studied. 
This concerns such aspects as time-dependent Neumann boundary conditions, non-adiabatic limit and exactly solvable models. Another important problem in this context is extension of the model to the case when time-dependent box interacts with an external potential. In such case, if the potential is position independent, the problem approves factorization of space and time variables. In this paper we address the problem of dynamical confinement in the presence of a external electromagnetic field. By assuming that time-dependence of the wall's position approves separation of space and time-variables, we obtain general solution of the problem and compute such physically observable characteristics, as average kinetic energy and average force. Moreover, we consider the case of dynamical confinement driven by external linearly polarized monochromatic optical field. For this system, we study high harmonic generation induced by optical field. This paper is organized as follows. In the next section we briefly recall the problem of time-dependent boundary conditions for the Schrodinger equation on a quantum box. In Section 3 we consider a particle in a time-dependent box in the presence of external potentials with the focus on exactly solvable cases, i.e., when the problem allows factorization of the time- and space variables. Section 4 presents the treatment of the average kinetic energy of the particle and of the average quantum force acting to the particle by moving wall. Section 5 presents a quantum optics model for dynamical confinement by considering high harmonic generation, Fermi acceleration and average quantum force as a function of time. ## 2 Dynamical confinement in 1D quantum box Here, following [3, 6], we briefly recall the problem of time-dependent boundary conditions in quantum mechanics, by considering 1D quantum box with moving wall. Consider a particle confined between two infinitely high walls. The position of the left wall is assumed to be fixed at \(x=0\), while the right one moves according to some positively determined function \(L(t)\) which is a smooth function, \(L:[0,\infty)\rightarrow(0,\infty)\). Then the particle dynamics in such a box is described in terms of the following time-dependent Schrodinger equation (\(\hbar=m=1\)): \[i\frac{\partial\Psi(x,t)}{\partial t}=H(x,t)\Psi(x,t),\quad t\in[0,\infty),x \in[0,L(t)] \tag{1}\] with the Hamiltonian \[H(x,t)=-\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}+V(x,t). \tag{2}\] Here, for simplicity, we assume the potential \(V\) to be continuous on its domain \(\{(x,t)\,|\,t\in[0,\infty),x\in[0,L(t)]\}\). We impose, for Equation (1), the following Dirichlet boundary conditions given at the interval \([0,L(t)]\): \[\Psi(x,t)|_{x=0}=\Psi(x,t)|_{x=L(t)}=0. \tag{3}\] Introducing a new coordinate \[y=\frac{x}{L(t)}, \tag{4}\] Equation (1) can be rewritten as \[i\frac{\partial\Psi(y,t)}{\partial t}=-\frac{1}{2L^{2}}\frac{\partial^{2}\Psi(y,t )}{\partial y^{2}}+i\frac{\dot{L}}{L}y\frac{\partial\Psi(y,t)}{\partial y}+V(yL (t),t)\Psi(y,t), \tag{5}\] where \(\dot{L}=dL/dt\) and new boundary conditions are given by \[\left.\Psi(y,t)\right|_{y=0}=\Psi(y,t)\right|_{y=1}=0\mbox{ for all }t\in[0,\infty). \tag{6}\] However, such transformation leads to breaking of the self-adjointness of the problem, i.e., the Schrodinger operator in the right hand side of (5) is not self-adjoint. 
Therefore, one needs to recover self-adjointness using the transformation \[\Psi(y,t)=\sqrt{2/L}\exp{\left(\frac{i}{2}L\dot{L}y^{2}\right)}\varphi(y,t), \tag{7}\] that reduces Equation (5) to the following form [3] \[i\frac{\partial\varphi(y,t)}{\partial t}=-\frac{1}{2L^{2}}\frac{\partial^{2} \varphi(y,t)}{\partial y^{2}}+\Biggl{(}\frac{1}{2}L\ddot{L}y^{2}+V\Biggr{)} \varphi(y,t), \tag{8}\] where \(\ddot{L}=d\dot{L}/dt\) and \(\varphi(y,t)\) satisfies the boundary conditions (6). We mention that (8) can also be obtained from Equation (1) via some unitary transformation, cf. [7]. In this section we consider the case \(V=0\). Assume that the expression \(4L^{3}\ddot{L}\) is a non-positive constant for all \(t\in[0,\infty)\), i.e. \[0\leq B^{2}=-4L^{3}\ddot{L}=\mbox{const} \tag{9}\] for some real \(B\). Introduce a new "time variable" \(\tau\) via \[\tau(t)=\int_{0}^{t}\frac{ds}{[L(s)]^{2}}. \tag{10}\] Equation (8) reduces to \[i\frac{\partial\varphi(y,\tau)}{\partial\tau}=-\frac{1}{2}\frac{\partial^{2} \varphi(y,\tau)}{\partial y^{2}}-\frac{1}{8}B^{2}y^{2}\varphi(y,\tau), \tag{11}\] see also [3]. The solution of (11) can now be factorized as \[\varphi(y,\tau)=f(\tau)\Phi(y). \tag{12}\] Using a separation constant \(K\), the equation for \(\Phi\) reduces to the Kummer equation \[z\frac{d^{2}U}{dz^{2}}+\Biggl{(}\frac{1}{2}-z\Biggr{)}\frac{dU}{dz}+\frac{1} {4}(\kappa^{2}-1)U=0, \tag{13}\] where \(z=(iB/2)y^{2},\kappa^{2}=4K/iB,U(z)=\exp{(z/2)}\Phi(z)\). Hence (see 13.1.13 from [36]) the Kummer function \(z^{1/2}M\Bigl{(}\frac{3-\kappa^{2}}{4},\frac{3}{2},z\Bigr{)}\) is a solution of Equation (13). Hence, a solution for \[H_{0}\Phi(y)=K\Phi(y)\quad\mbox{with }H_{0}:=-\frac{1}{2}\frac{d^{2}}{dy^{2}}- \frac{1}{8}B^{2}y^{2}. \tag{14}\] is of the form \[\Phi(y)=CyM\Bigg{(}\frac{3iB-4K}{4iB},\frac{3}{2},\frac{iB}{2}y^{2} \Bigg{)}e^{-\frac{iB}{4}y^{2}}, \tag{15}\] where \(C\) is the normalization constant. Exact solution of Equation (1) can be written as [3] \[\Psi(x,t)=\frac{Cx}{\sqrt{L^{3}}}e^{\frac{i}{2}x^{2}\left(\frac{ L}{L}-\frac{B}{2L^{2}}\right)-iK\tau(t)}M\left(\frac{3iB-4K}{4iB},\frac{3}{2}, \frac{iB}{2}\frac{x^{2}}{L^{2}}\right). \tag{16}\] Note that \(\Psi(0,t)=0\) for all \(t\). However, \(\Psi(L,t)=0\) if and if \[M\left(\frac{3iB-4K}{4iB},\frac{3}{2},\frac{iB}{2}\right)=0. \tag{17}\] Therefore the boundary condition (3) is satisfied if and only if \(K\) equals a zero of the Kummer function in (17). Denote these zeros by \(K_{n}\), \(n\in N\). Define the function \(\Psi_{n}\) via the right hand side of Equation (16) where \(K\) is replaced by \(K_{n}\). It is important to mention that the time-dependent Dirichlet boundary conditions imposed for (1) approve norm conservation. Indeed, let \(N(t)\) be the norm at time \(t\) as the \(L^{2}\)-norm of \(\Psi\) with respect to the spatial variable \(x\), \[N(t):=||\Psi(x,t)||^{2}=\int_{0}^{L(t)}|\Psi|^{2}dx. \tag{18}\] Then, for the time-derivative we have \[\frac{dN}{dt}(t)=\int_{0}^{L(t)}\frac{\partial}{\partial t}|\Psi(x,t)|^{2}dx+ \dot{L}(t)|\Psi|^{2}|_{x=L(t)}=\int_{0}^{L(t)}\frac{\partial}{\partial t}| \Psi(x,t)|^{2}dx \tag{19}\] Taking into account (1), (3) and \[i\int_{0}^{L(t)}\frac{\partial}{\partial t}|\Psi|^{2}dx=\frac{1} {2}\left.\left(\Psi\frac{\partial\Psi^{*}}{\partial x}-\Psi^{*}\frac{\partial \Psi}{\partial x}\right)\right|_{x=0}^{x=L(t)}=0, \tag{20}\] we have norm conservation \(\frac{dN}{dt}(t)=0\). 
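A hedged numerical sketch for locating the values \(K_{n}\): since the boundary condition (17) selects exactly the Dirichlet eigenvalues of the operator \(H_{0}\) in (14) on \([0,1]\), they can be approximated, for instance, by a simple finite-difference discretization; the value of \(B\) and the grid size below are arbitrary illustrative choices.

```python
# A hedged numerical sketch for the eigenvalues K_n of (14): the boundary
# condition (17) selects the Dirichlet eigenvalues of H_0 on [0, 1], which are
# approximated here by finite differences (B and grid size are arbitrary choices).
import numpy as np

B = 2.0
N = 2000
y = np.linspace(0.0, 1.0, N + 2)[1:-1]          # interior grid, Dirichlet endpoints
h = y[1] - y[0]

diag = 1.0 / h**2 - (B**2 / 8.0) * y**2         # from -1/2 * second difference
off = -0.5 / h**2 * np.ones(N - 1)
H0 = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

K = np.sort(np.linalg.eigvalsh(H0))
print(K[:5])    # lowest eigenvalues K_1, ..., K_5; for B -> 0 they tend to pi^2 n^2 / 2
```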
## 3 Dynamical confinement in the presence of external potential: Exactly solvable models The model in Section 2 approves factorization of the variables in the case of constraint (9). However, when the system interacts with an external position-independent time-varying field, factorization is also possible. Here we consider time-dependent quantum box driven by external purely time-dependent potential \(V\). The dynamics of the system is governed by the following time-dependent Schrodinger equation: \[i\frac{\partial\Psi(x,t)}{\partial t}=-\frac{1}{2}\frac{\partial^{2}\Psi(x,t )}{\partial x^{2}}+V(t)\Psi(x,t) \tag{21}\] The boundary conditions for this equation are imposed as in (3). Following the previous section, we transform the boundary conditions into time-independent ones by introducing a new coordinate \(y\) which is given by (4). Using the transformation of the wave function given by Equation (7) and under the assumption that \(L(t)\) fulfills the condition in Equation (9), we have with Equation (8) \[iL^{2}\frac{\partial\varphi(y,t)}{\partial t}=-\frac{1}{2}\frac{\partial^{2} \varphi(y,t)}{\partial y^{2}}-\frac{1}{8}B^{2}y^{2}\varphi(y,t)+L^{2}V(t) \varphi(y,t), \tag{22}\] Let \(\Phi_{n}\) satisfy (14) with Dirichlet boundary conditions (17) with respect to the eigenvalue \(K=K_{n}\) and denote its normalization constant by \(C_{n}\). It is well-known that the system \(\{\Phi_{n}\}_{n}\) forms an orthonormal basis of \(L^{2}(0,1)\). We choose as an Ansatz for the solution of (22) \[\varphi(y,t)=\sum_{n}C_{n}(t)\Phi_{n}(y) \tag{23}\] By substituting (23) into Equation (22), we (formally) obtain \[iL^{2}\sum_{n}\dot{C}_{n}(t)\Phi_{n}(y)=\sum_{n}C_{n}(t)H_{0}\Phi_{n}(y)+L^{2} V(t)\sum_{n}C_{n}(t)\Phi_{n}(y). \tag{24}\] Then, after multiplying both sides of equation to \(\Phi_{m}^{*}(y)\), integrating over \(y\) and using the orthonormal condition for a basis, \(\int_{0}^{1}\Phi_{m}^{*}(y)\Phi_{n}(y)dy=\delta_{mn}\), we have \[iL^{2}\dot{C}_{n}(t)=C_{n}(t)K_{n}+L^{2}V(t)C_{n}(t) \tag{25}\] It's solution is in the form \[C_{n}(t)=C_{n}(0)e^{-i\int_{0}^{t}\left(\frac{K_{n}}{L^{2}}+V(s)\right)ds} \tag{26}\] where \(C_{n}(0)\) can be determined from a smooth initial condition which insures \(\sum_{n}|C_{n}(0)|^{2}<\infty\). Finally, one obtains the general solution for Equation (21) \[\Psi(x,t)=\sum_{n}C_{n}(0)e^{-i\int_{0}^{t}\left(\frac{K_{n}}{L^{ 2}}+V(s)\right)ds}C_{n}\sqrt{\frac{2}{L^{3}}}xM\Bigg{(}\frac{3iB-4K_{n}}{4iB},\frac{3}{2},\frac{iB}{2}\frac{x^{2}}{L^{2}}\Bigg{)}\] \[\times e^{-\frac{iB}{4}\frac{x^{2}}{L^{2}}+\frac{i}{2}\dot{L}x^{2 }}. \tag{27}\] It is worthful to consider some other potentials approving factorization of variables in the Schrodinger equation with time-dependent boundary conditions. One of them is interaction proportional to inverse square of the distance given as \[V=\frac{\alpha}{x^{2}}\] where \(\alpha\) is constant. For this case Equation (8) can be written as \[i\frac{\partial\varphi(y,t)}{\partial t}=-\frac{1}{2L^{2}}\frac{\partial^{2} \varphi(y,t)}{\partial y^{2}}+\Bigg{(}\frac{1}{2}L\ddot{L}y^{2}+\frac{\alpha}{ L^{2}y^{2}}\Bigg{)}\varphi(y,t), \tag{28}\] Variables of this equation can be separated, provided \(L(t)\) fulfills (9). 
Similarly to the above, one can show that potential in the form \(V=x\varepsilon(t)\) also approves of separation time and space variables and one obtains Schrodinger equation for anharmonic oscillator given as \[i\frac{\partial\varphi(y,t)}{\partial t}=-\frac{1}{2L^{2}}\frac{\partial^{2} \varphi(y,t)}{\partial y^{2}}+\Biggl{(}\frac{1}{2}L\ddot{L}y^{2}+Ly\varepsilon (t)\Biggr{)}\varphi(y,t), \tag{29}\] Variables of this equation can be separated when \(L^{3}\ddot{L}=\mbox{const}=\beta\) and \(L^{3}\varepsilon(t)=\mbox{const}=\gamma\) conditions are fulfilled. From those two conditions one can see \(\varepsilon(t)=\frac{\gamma}{\beta}\ddot{L}\) and (29) becomes \[iL^{2}\frac{\partial\varphi(y,t)}{\partial t}=-\frac{1}{2}\frac{\partial^{2} \varphi(y,t)}{\partial y^{2}}+\frac{1}{2}\beta y^{2}\varphi(y,t)+\gamma y \varphi(y,t). \tag{30}\] Finally, for potential given in the form of nonlinearly polarized monochromatic field given by \(V=x^{2}\epsilon\cos\omega t\), where \(\epsilon\), \(\omega\) are strength and frequency of external field, one can also factorize space and time variables and have \[i\frac{\partial\varphi(y,t)}{\partial t}=-\frac{1}{2L^{2}}\frac{\partial^{2} \varphi(y,t)}{\partial y^{2}}+\Biggl{(}\frac{1}{2}L\ddot{L}y^{2}+L^{2}y^{2} \epsilon\cos\omega t\Biggr{)}\varphi(y,t), \tag{31}\] Conditions for factorization of variables for this equation given in the form of constraint \(L^{3}\ddot{L}+2\epsilon L^{4}\cos\omega t=\mbox{const}=\beta\). ## 4 Average kinetic energy and quantum force induced by dynamical confinement Having found the solution of the Schrodinger equation for time-dependent quantum box, one can compute physically observable variables, such as average kinetic energy and average (quantum) force. The average kinetic energy is determined as the expectation value of the kinetic energy operator: \[\hat{H}=-\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}. \tag{32}\] The expectation value of energy is given by \[<E(t)>=\langle\Psi|\hat{H}|\Psi\rangle, \tag{33}\] where \(|\Psi\rangle\) is a solution of (21). We have \[<E_{k}(t)>=\int_{0}^{L(t)}\Psi^{*}(x,t)\Biggl{(}-\frac{1}{2}\frac{\partial^{2 }}{\partial x^{2}}\Biggr{)}\Psi(x,t)dx=\frac{1}{2}\int_{0}^{L(t)}\Biggl{|} \frac{\partial\Psi(x,t)}{\partial x}\Biggr{|}^{2}dx \tag{34}\] or with (7) \[<E_{k}(t)>=\frac{1}{L^{2}}\int_{0}^{1}\Biggl{|}\frac{\partial\varphi(y,t)}{ \partial y}\Biggr{|}^{2}dy+\frac{2\dot{L}}{L}\mbox{Im}\Biggl{(}\int_{0}^{1}y \varphi^{*}(y,t)\frac{\partial\varphi(y,t)}{\partial y}dy\Biggr{)}+\] \[\dot{L}^{2}\int_{0}^{1}y^{2}|\varphi(y,t)|^{2}dy=\frac{1}{L^{2}}S_{0}+\frac{2 \dot{L}}{L}\mbox{Im}(S_{1})+\dot{L}^{2}S_{2}.\] Explicit expressions for \(S_{0},\ S_{1}\) and \(S_{2}\) are provided in A. Quantum force can be determined as the expectation value of the force operator as \[\hat{F}=-\frac{\partial\hat{H}}{\partial L(t)}. \tag{35}\] Then for the expectation value of the force operator one has [17] \[<F(t)>=-\frac{\partial\langle E_{k}(t)\rangle}{\partial L}=\frac{2}{L^{3}}S_{ 0}+\frac{2\dot{L}}{L^{2}}\mbox{Im}(S_{1}). \tag{36}\] ## 5 Quantum Fermi acceleration and high harmonic generation in driven time-dependent box An important effect that can be realized in a quantum box with oscillating walls is the so-called Fermi acceleration in quantum regime, or quantum Fermi acceleration. It is a quantum analog of the classical Fermi acceleration that occurs in bouncing balls colliding with oscillating wall. 
In classical regime, unbounded growth of the average kinetic energy of a particle can be observed in such a system. Quantum Fermi acceleration in a time-dependent box was studied in [7, 16]. Here we extend these studies to the case of interaction (in addition to the interaction with oscillating wall) with an external time-periodic potential. We consider a version of the dynamically confined system which can be experimentally realized in optics. Namely, we propose a model for time-dependent box driven by linearly polarized monochromatic field given by \[V(x,t)=\epsilon x\cos\omega t, \tag{37}\] where \(\epsilon\) and \(\omega\) are the field strength and the frequency, respectively. In the following we consider a quantum particle in a box with oscillating wall and interacting with the external linearly polarized monochromatic field given by (37). Wall's oscillation is assumed to be given as \[L=L_{0}+a\cos\omega_{0}t.\] In this case, Equation (8) cannot be separated and one has to solve it numerically. Here we will use the following Ansatz for \(\varphi(y,t)\): \[\varphi(y,t)=\sum_{n}C_{n}(t)\sin\pi ny, \tag{38}\] where \(\phi_{n}(y):=\sin\pi ny\) solves the equation \(-\frac{1}{2}\frac{d^{2}\phi_{n}}{dy^{2}}=\frac{\pi^{2}n^{2}}{2}\phi_{n}\) with Dirichlet boundary conditions on the interval \([0,1]\). For the coefficients \(C_{n}\) we have the following system of differential equations \[iL^{2}\dot{C}_{n}(t)=C_{n}(t)\frac{\pi^{2}n^{2}}{2}+\sum_{m}V_{nm}C_{m}(t) \tag{39}\] where \(V_{nm}=L^{3}\ddot{L}I_{1nm}+2\epsilon L^{3}\cos\omega tI_{2nm}\) and \[I_{1nm}=\int_{0}^{1}y^{2}\phi_{n}^{*}\phi_{m}dy=\left\{\begin{array}{cc} \frac{1}{6}+\frac{1}{4n^{2}\pi^{2}},&n=m\\ \frac{1}{\pi^{2}}\left(\frac{(-1)^{m-n}}{(m-n)^{2}}-\frac{(-1)^{m+n}}{(m+n)^{2 }}\right),&n\neq m\end{array}\right.\] and \[I_{2nm}=\int_{0}^{1}y\phi_{n}^{*}\phi_{m}dy=\left\{\begin{array}{cc}\frac{1} {4},&n=m\\ \frac{1}{2\pi^{2}}\left(\frac{(-1)^{m-n}-1}{(m-n)^{2}}-\frac{(-1)^{m+n}-1}{(m+ n)^{2}}\right),&n\neq m\end{array}\right.\] We solve (39) numerically by choosing initial condition as \(C_{1}(0)=1\) and \(C_{n}(0)=0\) for \(n\neq 1\). Having found \(C_{n}(t)\) one constructs \(\phi_{n}\) and \(\Psi\) via (4) and (7). In Figure 1 time-dependence of the average kinetic energy of the particle \(<E_{k}>\) and the quantum force \(<F>\) acting on the wall are plotted at different values of the wall's oscillation amplitude \(a\). For smaller values of \(a\) both average kinetic energy and force are periodic, while at higher values they become quasi periodic and certain growth of the "peaks" can be observed. However, suppression of the growth occurs as time elapses. Figure 1: Average kinetic energy and force as a function of time at different values of amplitude of oscillating wall \(a\) for \(L_{0}=10\), \(\omega_{0}=0.5\), \(\epsilon=0.1\) and \(\omega=0.05\) (\(L=L_{0}+a\cos\omega_{0}t\) and V=\(\epsilon x\cos\omega t\)). Figure 2 presents the time-dependence of the average kinetic energy and the force at different values of the external field strength \(\epsilon\). The behavior of \(<E_{k}(t)>\) and \(<F(t)>\) are similar to that in Figure 1, which implies similarity of the roles of \(\epsilon\) and \(a\) in the particle dynamics. In other words, particle "feels" both, oscillating wall and external monochromatic field as a periodic perturbation. In Figures 3 and 4, \(<E_{k}(t)>\) and \(<F(t)>\) are plotted at different values of the external field and oscillating boundary frequency for \(L_{0}=10\), respectively. 
Qualitatively, the plots look similar to those in Figures 1 and 2. An important effect which can be considered in the system "time-dependent quantum box + external monochromatic field" is optical high harmonic generation (HHG) induced by interaction of the time-dependent box with the external optical field given by (37). The evolution of the whole system, "dynamical box + optical field", is governed by (1). A detailed description of high harmonic generation in the quantum regime can be found in [38]. Here we will focus on the role of confinement in optical harmonic generation. The main physical characteristic of such a process is the average dipole moment, which is given by [38] \[<d(t)>=-<\Psi(x,t)|x|\Psi(x,t)>.\] The spectrum of high harmonic generation (HHG) is characterized by the quantity [38] \[I(\nu)=|<d(\nu)>|^{2}=\bigg{|}\frac{1}{T}\int_{0}^{T}e^{-i\nu t}<d(t)>dt\bigg{|}^{2}, \tag{40}\] where \(T\) is the total duration of interaction. Figure 5 shows plots of the spectrum of harmonic generation as a function of harmonic order at different values of external field strength for \(L_{0}=10\), \(a=3\), \(\omega_{0}=0.5\), \(\omega=1\) and \(T=200\). The plot shows that the HHG intensity strongly depends on the field strength. For higher values of \(\epsilon\), one can observe an increase in intensity. Figure 6 presents plots of HHG-spectra at different amplitudes of the oscillating box. An important feature of the HHG-spectra presented in Figures 5 and 6 is the existence of a plateau in the curves, i.e. there is a rather wide range of frequencies having the same intensity of generation (emission). Such a feature is of importance for applications of the model in attosecond physics, where a wide range of generated high frequencies with high enough intensity is required. ## 6 Conclusion In this paper we considered the problem of dynamical confinement in a time-dependent 1D quantum box interacting with external potentials. The main focus is on finding exact solutions of the problem. In particular, an exact solution of the problem is obtained for the purely time-dependent external potential. Some other external potentials approving factorization of space and time variables in the time-dependent Schrodinger equation with moving boundary conditions are classified. Average kinetic energy and quantum force for the particle simultaneously subjected to the influence of dynamical confinement and external time-periodic field are analyzed as a function of time. A model for high harmonic generation in a time-dependent box, which can be experimentally realized in quantum optics, is proposed. The spectrum of high harmonics generated by such a system is computed. The proposed model can be directly applied to the problems of tunable quantum Fermi acceleration and quantum transport in low-dimensional confined systems arising in optics and condensed matter. An extension of the model to the case of 2D and 3D systems, where the role of the boundary geometry is very challenging, is a task for forthcoming studies. 
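As a numerical companion to Section 5, the following is a hedged sketch of the Galerkin scheme (38)-(40): it integrates the coupled equations for the coefficients \(C_{n}(t)\) and Fourier-transforms the induced dipole. The overlap matrix elements are computed here by direct quadrature in an orthonormalized sine basis rather than from the closed-form expressions quoted above, and the dipole normalization, mode truncation and parameter values are assumptions made for illustration, not the authors' exact implementation.

```python
# A hedged sketch of the Galerkin scheme (38)-(40); overlaps by quadrature in an
# orthonormalized sine basis, parameters and normalization are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

L0, a, w0 = 10.0, 3.0, 0.5          # wall: L(t) = L0 + a*cos(w0*t)
eps, w = 0.1, 1.0                   # field: V(x, t) = eps*x*cos(w*t)
T, Nm = 200.0, 30                   # interaction time and number of retained modes

y = np.linspace(0.0, 1.0, 2001)
chi = np.sqrt(2.0) * np.array([np.sin(np.pi * (n + 1) * y) for n in range(Nm)])
A1 = np.trapz(chi[:, None, :] * chi[None, :, :] * y**2, y, axis=2)   # <chi_n|y^2|chi_m>
A2 = np.trapz(chi[:, None, :] * chi[None, :, :] * y, y, axis=2)      # <chi_n|y|chi_m>
kin = 0.5 * (np.pi * np.arange(1, Nm + 1)) ** 2                      # -1/2 d^2/dy^2 diagonal

def L_of(t):
    return L0 + a * np.cos(w0 * t)

def Ldd_of(t):
    return -a * w0**2 * np.cos(w0 * t)

def rhs(t, z):
    C = z[:Nm] + 1j * z[Nm:]
    L = L_of(t)
    H = np.diag(kin) + 0.5 * L**3 * Ldd_of(t) * A1 + eps * L**3 * np.cos(w * t) * A2
    dC = (-1j / L**2) * (H @ C)                 # i L^2 dC/dt = H C
    return np.concatenate([dC.real, dC.imag])

z0 = np.zeros(2 * Nm)
z0[0] = 1.0                                     # start in the lowest mode
sol = solve_ivp(rhs, (0.0, T), z0, t_eval=np.linspace(0.0, T, 4000),
                max_step=0.02, rtol=1e-6, atol=1e-9)

C = sol.y[:Nm] + 1j * sol.y[Nm:]
dipole = L_of(sol.t) * np.real(np.einsum('nt,nm,mt->t', C.conj(), A2, C))
spec = np.abs(np.fft.rfft(dipole * np.hanning(dipole.size))) ** 2     # ~ I(nu), Eq. (40)
nu = 2 * np.pi * np.fft.rfftfreq(dipole.size, d=sol.t[1] - sol.t[0])
print("spectrum computed at", nu.size, "frequencies")
```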
## Appendix A

Here we will give explicit forms of the quantities \(S_{0},\ S_{1}\) and \(S_{2}\). Using the solution of Equation (22) one can find \(S_{0},\ S_{1}\) and \(S_{2}\) that are defined as
\[S_{0}=\sum_{n,m}C_{n}^{*}(t)C_{m}(t)\bigg{[}I_{1}+\frac{W_{m}}{6}I_{2}+\frac{W_{n}^{*}}{6}I_{2}^{*}+\frac{W_{n}^{*}W_{m}}{36}I_{4}-\frac{iBW_{n}^{*}}{12}I_{5}+\frac{iBW_{m}}{12}I_{5}^{*}+\frac{B^{2}}{4}I_{6}\bigg{]}\]
\[S_{1}=\sum_{n,m}C_{n}^{*}(t)C_{m}(t)\bigg{[}I_{3}+\frac{W_{m}}{6}I_{5}^{*}-\frac{iB}{2}I_{6}\bigg{]}\]
\[S_{2}=\sum_{n,m}C_{n}^{*}(t)C_{m}(t)I_{6}\]
with \(W_{n}=3iB-4K_{n}\) and \(I_{1},\ I_{2},\ I_{3},\ I_{4},\ I_{5},\ I_{6}\) given by
\[I_{1}=\int_{0}^{1}M^{*}\biggl{(}\frac{3iB-4K_{n}}{4iB},\frac{3}{2},\frac{iB}{2}y^{2}\biggr{)}M\biggl{(}\frac{3iB-4K_{m}}{4iB},\frac{3}{2},\frac{iB}{2}y^{2}\biggr{)}dy\]
\[I_{2}=\int_{0}^{1}y^{2}M^{*}\biggl{(}\frac{3iB-4K_{n}}{4iB},\frac{3}{2},\frac{iB}{2}y^{2}\biggr{)}M\biggl{(}\frac{7iB-4K_{m}}{4iB},\frac{5}{2},\frac{iB}{2}y^{2}\biggr{)}dy\]
\[I_{3}=\int_{0}^{1}y^{2}M^{*}\biggl{(}\frac{3iB-4K_{n}}{4iB},\frac{3}{2},\frac{iB}{2}y^{2}\biggr{)}M\biggl{(}\frac{3iB-4K_{m}}{4iB},\frac{3}{2},\frac{iB}{2}y^{2}\biggr{)}dy\]
\[I_{4}=\int_{0}^{1}y^{4}M^{*}\biggl{(}\frac{7iB-4K_{n}}{4iB},\frac{5}{2},\frac{iB}{2}y^{2}\biggr{)}M\biggl{(}\frac{7iB-4K_{m}}{4iB},\frac{5}{2},\frac{iB}{2}y^{2}\biggr{)}dy\]
\[I_{5}=\int_{0}^{1}y^{4}M^{*}\biggl{(}\frac{7iB-4K_{n}}{4iB},\frac{5}{2},\frac{iB}{2}y^{2}\biggr{)}M\biggl{(}\frac{3iB-4K_{m}}{4iB},\frac{3}{2},\frac{iB}{2}y^{2}\biggr{)}dy\]
\[I_{6}=\int_{0}^{1}y^{4}M^{*}\biggl{(}\frac{3iB-4K_{n}}{4iB},\frac{3}{2},\frac{iB}{2}y^{2}\biggr{)}M\biggl{(}\frac{3iB-4K_{m}}{4iB},\frac{3}{2},\frac{iB}{2}y^{2}\biggr{)}dy\]
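Since the overlap integrals \(I_{1},\dots,I_{6}\) involve the function \(M(a,b,z)\) with complex arguments and have no simple closed form, they are most easily evaluated numerically. The sketch below is illustrative only: it assumes \(M\) denotes Kummer's confluent hypergeometric function \({}_{1}F_{1}\), and the values of \(B\), \(K_{n}\) and \(K_{m}\) are placeholders that must be taken from the solution of Equation (22).

```python
# Illustrative sketch: numerical evaluation of the overlap integrals I1..I6 with mpmath.
# B, Kn, Km below are placeholder values; in practice they come from Eq. (22).
import mpmath as mp

mp.mp.dps = 30                       # working precision (arbitrary choice)

def M(a, b, y, B):
    """Kummer's confluent hypergeometric function M(a, b, iB y^2 / 2)."""
    return mp.hyp1f1(a, b, 0.5j * B * y**2)

def overlap(power, an, bn, am, bm, B):
    """int_0^1 y^power * conj(M(an, bn, .)) * M(am, bm, .) dy."""
    f = lambda y: y**power * mp.conj(M(an, bn, y, B)) * M(am, bm, y, B)
    return mp.quad(f, [0, 1])

B, Kn, Km = 1.0, 2.0, 3.0            # placeholders (assumptions)
a3 = lambda K: (3j * B - 4 * K) / (4j * B)   # first argument paired with b = 3/2
a5 = lambda K: (7j * B - 4 * K) / (4j * B)   # first argument paired with b = 5/2

I1 = overlap(0, a3(Kn), 1.5, a3(Km), 1.5, B)
I2 = overlap(2, a3(Kn), 1.5, a5(Km), 2.5, B)
I3 = overlap(2, a3(Kn), 1.5, a3(Km), 1.5, B)
I4 = overlap(4, a5(Kn), 2.5, a5(Km), 2.5, B)
I5 = overlap(4, a5(Kn), 2.5, a3(Km), 1.5, B)
I6 = overlap(4, a3(Kn), 1.5, a3(Km), 1.5, B)
print(I1, I6)
```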
2310.20251
An Implementation of Multimodal Fusion System for Intelligent Digital Human Generation
With the rapid development of artificial intelligence (AI), digital humans have attracted more and more attention and are expected to achieve a wide range of applications in several industries. Then, most of the existing digital humans still rely on manual modeling by designers, which is a cumbersome process and has a long development cycle. Therefore, facing the rise of digital humans, there is an urgent need for a digital human generation system combined with AI to improve development efficiency. In this paper, an implementation scheme of an intelligent digital human generation system with multimodal fusion is proposed. Specifically, text, speech and image are taken as inputs, and interactive speech is synthesized using large language model (LLM), voiceprint extraction, and text-to-speech conversion techniques. Then the input image is age-transformed and a suitable image is selected as the driving image. Then, the modification and generation of digital human video content is realized by digital human driving, novel view synthesis, and intelligent dressing techniques. Finally, we enhance the user experience through style transfer, super-resolution, and quality evaluation. Experimental results show that the system can effectively realize digital human generation. The related code is released at https://github.com/zyj-2000/CUMT_2D_PhotoSpeaker.
Yingjie Zhou, Yaodong Chen, Kaiyue Bi, Lian Xiong, Hui Liu
2023-10-31T08:13:57Z
http://arxiv.org/abs/2310.20251v1
# An Implementation of Multimodal Fusion System for Intelligent Digital Human Generation ###### Abstract With the rapid development of artificial intelligence (AI), digital humans have attracted more and more attention and are expected to achieve a wide range of applications in several industries. Then, most of the existing digital humans still rely on manual modeling by designers, which is a cumbersome process and has a long development cycle. Therefore, facing the rise of digital humans, there is an urgent need for a digital human generation system combined with AI to improve development efficiency. In this paper, an implementation scheme of an intelligent digital human generation system with multimodal fusion is proposed. Specifically, text, speech and image are taken as inputs, and interactive speech is synthesized using large language model (LLM), voiceprint extraction, and text-to-speech conversion techniques. Then the input image is age-transformed and a suitable image is selected as the driving image. Then, the modification and generation of digital human video content is realized by digital human driving, novel view synthesis, and intelligent dressing techniques. Finally, we enhance the user experience through style transfer, super-resolution, and quality evaluation. Experimental results show that the system can effectively realize digital human generation. The related code is released at [https://github.com/xyj-2000/CUT_2D_PhotoSpeaker](https://github.com/xyj-2000/CUT_2D_PhotoSpeaker). digital human, AI, deep learning, multimodality, multimedia information processing ## I Introduction Digital humans are computer-simulated virtual images with human appearance characteristics, movement and expression, and have a wide range of application prospects in many fields such as medicine, film and virtual reality. They are considered as the entrance to the metaverse [1]. Especially in recent years, with the rapid development of computer graphics, hardware equipment and display technology, digital humans have not only become more vivid images but also more intelligent, able to help people or independently complete specific tasks. However, traditional digital human generation often includes processes such as modeling, driving, and rendering. The whole process is cumbersome and highly dependent on the designer's professional skills, aesthetic perception, and design experience. It consumes a lot of human resources and time costs. With the growing demand for digital humans, the shortcomings of traditional digital human generation technology are gradually exposed. On the other hand, thanks to the rapid development of deep learning and artificial intelligence (AI), artificial intelligence generated content (AIGC) has been able to generate text, images, and even multimedia content that is very similar to human-created content, improving the efficiency of practitioners in various industries. For this reason, it can be believed that the introduction of AI will effectively improve the efficiency of digital human generation. It is necessary and highly significant to design an intelligent digital human generation system. As shown in the Fig. 1, from the data dimension of the digital human itself, the common digital human can be divided into two-dimensional (2D) digital human and three-dimensional (3D) digital human. Between them, the 2D digital human itself is only a flat image, which usually can only be presented to the audience in the form of 2D media, such as images and videos. 
In addition, the research in related fields is more mature. In the last five years, many databases of 2D digital faces [4, 5, 6, 7, 8, 9, 10, 11] have been proposed to further promote the development of the field. Currently, there are two types of mainstream driving methods for 2D faces. Person dependent face-driving methods [12, 13, 14, 15] are designed for specific faces, thus enabling fine-grained control with higher naturalness. Correspondingly, these advantages also lead to the limited application of person dependent methods, which are not well compatible with other faces. The person independent methods [16, 17, 18, 19, 20, 21, 22], on the other hand, although not as effective as the person dependent methods in driving specific characters, are able to capture a wider range of patterns with more data, effectively addressing the issue of generalizability. Another type of digital human often explored in academia is the 3D digital human. A 3D digital human can be represented by point clouds, meshes or voxels, allowing the audience to view the digital human from multiple viewpoints. Thanks to the structural characteristics of 3D digital humans, they can either be rendered to generate 2D media in a specific viewpoint or placed in virtual reality (VR) for immersive experience.

Fig. 1: Classification of digital human. The selected digital humans are from the DDH-QA [2] and SJTU-H3D [3] databases.

At present, some databases have been established in the academic field to address this issue, such as NoW [23], FaceScape [24], [25], Human3.6M [26], [27], ZJU-Mocap [28], and BEAT [29] in the area of 3D digital humans, VOCA [30] and MultiFace [31] in the area of speech-driven, and DHHQA [32], DDH-QA [2], and SJTU-H3D [3] in the area of quality assessment of 3D digital humans. On this basis, the research of 3D digital humans is gradually deepened. In this paper, we mainly focus on the 2D digital humans and use AI to provide a multimodal, interactive, 3D, stylized intelligent digital human generation technology. Specifically, the whole system contains three modules: the preprocessing module, generation module and post-processing module. The preprocessing module supports the input of rich media data consisting of text, speech and character images. After the input text is answered by language model, the speech can be obtained either through text-to-speech conversion or by cloning from the timbre of the input speech. The difference between the two methods is that the latter can preserve the phonological characteristics of the input speech. Besides, this system adopts an age transformer to customize the age for the person image, which enriches the user experience. The generation module supports person independent and person dependent digital human driving methods to modify the appearance of the digital human and generate the animation. Particularly, this system realizes the modification of digital humans' clothes. The post-processing module is mainly responsible for the style transfer and quality improvement of the generated animation. The final digital human animation can also be assessed for quality in advance through the quality evaluation model before flowing to the market. By observing the generated digital human animations, the system integrates a variety of intelligent models, which can quickly and effectively realize the generation of digital humans and meet the needs of different users.
Therefore, the main contributions of this paper are as follows:

* An effective and feasible design method for an intelligent digital human generation system with multimodal fusion is proposed.
* An open-source system for intelligent digital human generation with multimodal fusion realized by combining existing technologies.
* The current status of the development of digital human-related technologies is reviewed, and application scenarios are given for this system.

## II Proposed Method

In this section, we specifically discuss the proposed method. The framework of the proposed method is schematically shown in Fig. 2. It includes preprocessing module, generation module and post-processing module.

Fig. 2: The framework of the proposed intelligent digital human generation system.

### _Preprocessing Module_

Considering that the digital human generation system is a multimodal fusion information processing system, the preprocessing module processes the input rich media data separately for the input. Given the input media data type \(M\):
\[M\in\{W,A,P\}, \tag{1}\]
where \(W\) denotes the input text, which is mainly used for the user to interact with the digital image, and \(A\) denotes the input target audio, which is mainly used for voiceprint extraction and speech cloning. If the user does not provide the audio \(A\), the system will output the digital human media with fixed voice through the text-to-speech conversion. \(P\) denotes the input RGB character image, which is the main driving object. In this paper, the user input text is interactively responded to through the language model and the whole process can be described as follows:
\[\begin{split}\mathcal{W}=Q([W_{n},WR_{n-1},\cdots,WR_{1}],U),\\ R_{n}=\,\mathrm{Re}(\mathcal{W}),\end{split} \tag{2}\]
where \(\mathcal{W}\) denotes a POST access request, \(W_{n}\) denotes the input text of the user during the \(n\)th round of interaction, \(R_{n}\) denotes the response text of language model during the \(n\)th round of interaction, \(WR_{n}\) denotes the \(n\)th round of interaction, including a pair of input and response text, \(U\) denotes the IP of the deployed language model, \(Q(\cdot)\) denotes the process of a local host sending a POST access request to deployed language model, \([W_{n},WR_{n-1},\cdots,WR_{1}]\) denotes the request body of the POST, and \(\mathrm{Re}(\cdot)\) denotes the process where a response is fed back from the language model during the \(n\)th round of interaction. The response generated by the language model further selects whether or not to perform extraction of voiceprint and speech cloning based on whether or not the target audio data \(A\) is input. Thus, the whole process of speech generation can be described as:
\[S=G_{s}(R_{n})\parallel G_{c}(R_{n},\mathbb{F}), \tag{3}\]
where \(G_{s}(\cdot)\) and \(G_{c}(\cdot)\) denote the process of text-to-speech conversion and speech cloning, respectively, \(\parallel\) indicates the selection of one specific content from multiple options, \(\mathbb{F}\) denotes the acoustic features extracted from the target audio \(A\), and \(S\) is the digital human speech generated with reference to the text \(R_{n}\). For the input digital human image \(P\), the appearance of the image is selected and modified by age transformation.
The whole process can be described as: \[\begin{split} I_{a}=Age(P),\\ I_{a}=[i_{1},i_{2},i_{3},\cdots,i_{k}],\end{split} \tag{4}\] where \(Age(\cdot)\) denotes the process of age transformation, \(I_{a}\) denotes the set of digital human images corresponding to each age, and \(i_{k}\) denotes the digital human appearance image corresponding to the \(k\)th age. ### _Generation Module_ The generation module performs clothing modification and face driving for the synthesized speech and selected digital human images, and also can widen the observation field of the digital human through the novel view synthesis of monocular RGB images to make the digital human 3D. First of all, the free dressing of the digital human can be realized through human posture detection and clothing matching. Specifically, the whole dressing process can be described as: \[\begin{split} I_{c}=C_{T}(i),\\ i\in I_{a},\end{split} \tag{5}\] where \(C_{T}(\cdot)\) denotes the process of digital human dressing, \(i\) denotes the image selected from the digital human age image set \(I_{a}\), and \(I_{c}\) denotes the digital human clothing image set. After that, two common driving methods are designed for the face driving of digital human. Between them, person independent driving method is simpler and can directly generate the digital human animation using the input speech and image, while person dependent driving method can only be driven for a specific person image, so it is necessary to use the action retarget after driving a specific person image in order to obtain the desired digital human animation. Overall, the whole driving process can be described as: \[\begin{split}\hat{i}\in I_{a}\cup I_{c},\\ V_{m}=\mathfrak{M}_{i}(\hat{i},S)||(\mathfrak{M}_{d}(S)\oplus \mathfrak{M}_{t}(\hat{i})),\end{split} \tag{6}\] where, \(\mathfrak{M}_{i}(\cdot)\), \(\mathfrak{M}_{d}(\cdot)\) and \(\mathfrak{M}_{t}(\cdot)\) denote the adoption of person independent, person dependent driving method and action retarget method, respectively, \(\oplus\) denotes the successive realization of two processes, \(\hat{i}\) denotes the selected digital human image, \(S\) is the digital human driving speech, and \(V_{m}\) is the generated digital human animation. In particular, this system realizes the expansion of the viewpoint of the digital human through the novel view synthesis to form a 3D visual effect. This function is optional for users and the whole process can be described as: \[V_{\text{3d}}=3D(V_{m}), \tag{7}\] where \(3D(\cdot)\) denotes the 3D effect realized by the novel view synthesis and \(V_{\text{3d}}\) denotes the synthesized 3D digital human video. ### _Post-processing Module_ In order to further enhance the quality of the user experience with the digital human product, this system performs post-processing on the generated digital human animation. It is worth stating that the post-processing module is not mandatory to be used but merely provides more selectivity for users. Specifically, the module designs three parts, namely, style transfer, super-resolution, and quality assessment, respectively. Among them, the style transfer model is able to transfer the features of a specific style obtained through learning to the input media. The process can be chosen according to the actual needs and described as: \[V_{s}=\Psi(V_{m}||V_{\text{3d}}), \tag{8}\] where \(\Psi(\cdot)\) denotes the process of style transfer, and \(V_{s}\) denotes the stylized video. 
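To make the data flow of Eqs. (2)–(8) concrete, the following is a minimal orchestration sketch of the pipeline described so far. It is illustrative only: the endpoint URL and JSON fields of the language-model request are assumptions, and the callables passed in through `models` (text-to-speech, voice cloning, age transformation, dressing, driving, novel view synthesis, style transfer) are hypothetical placeholders standing in for the concrete systems listed in Section III, not actual APIs of this system.

```python
# Illustrative orchestration of Eqs. (2)-(8); endpoint, fields and model callables are placeholders.
import requests

def query_llm(history, user_text, url="http://127.0.0.1:8000"):
    """Eq. (2): POST the current text and the interaction history to the deployed LLM."""
    payload = {"prompt": user_text, "history": history}           # request body [W_n, WR_{n-1}, ...]
    reply = requests.post(url, json=payload, timeout=60).json()["response"]   # R_n
    history.append([user_text, reply])                            # record round WR_n
    return reply

def generate(user_text, image, models, target_audio=None, history=None,
             target_age=25, want_3d=False, stylize=False):
    """models: dict of callables standing in for the systems of Section III."""
    history = [] if history is None else history
    answer = query_llm(history, user_text)                                     # Eq. (2)
    speech = (models["clone_voice"](answer, target_audio) if target_audio      # Eq. (3)
              else models["text_to_speech"](answer))
    ages = models["age_transform"](image)                                      # Eq. (4): maps age -> image
    outfit = models["dress"](ages[target_age])                                 # Eq. (5)
    video = models["drive"](outfit, speech)                                    # Eq. (6)
    if want_3d:
        video = models["novel_view"](video)                                    # Eq. (7)
    if stylize:
        video = models["style_transfer"](video)                                # Eq. (8)
    return video
```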
In addition, the super-resolution technique can effectively improve the quality of digital human animation, which can be described as:
\[V_{SR}=\varphi(V_{m}||V_{\text{3d}}||V_{s}), \tag{9}\]
where \(\varphi(\cdot)\) denotes the process of super-resolving the animation, and \(V_{SR}\) denotes the processed video. Finally, before the digital human video is presented to the viewers, the quality assessment model may perform an objective evaluation of the generated video based on the a priori knowledge of the digital human video, which may be described as:
\[Score=QA(V_{m}||V_{\text{3d}}||V_{s}||V_{SR}), \tag{10}\]
where \(QA(\cdot)\) denotes the process of evaluating a digital human video using an objective quality assessment algorithm, and \(Score\) denotes the result of the evaluation. The result can provide guidance for the improvement and optimization of the digital human generation system.

## III Experiments

### _Experimental Setup_

In order to further validate the feasibility of the proposed method, we have implemented the method with the prior art, including the large language model ChatGLM [33], the text-to-speech conversion package eSpeaker1, the speech cloning model MockingBird2, the age transformer model SAM [34], the person dependent driving method LiveSpeechPortraits [35], the action retarget Thin-Plate-Spline-Motion-Model [36], the person independent driving method SadTalker [37], cloth modification model VITON-HD [38], style transfer model DCT-Net [39], super resolution model BasicVSR++ [40, 41], quality assessment model VSFA [42]. In our experiments, we trained and tested our proposed method on a server using Intel i7-7700K CPU @ 4.20GHz, 32GB RAM and NVIDIA 2080TI GPU.

### _Generating Effects_

This section tests the visual effects of generating digital human animations, including digital face animations and digital human animations. In digital face generation, four different digital human images were selected from StyleGAN database3. Specifically, the generation of multimodal digital faces, age-transformed digital faces, 3D digital faces, and stylized digital faces by different audio was tested and the results are shown in Fig. 3. As for digital human, we chose three digital human images from HumanAlloy4, VITON-HD [38] and Cgtrader5. Then, the generation of multimodal digital human, dress-up digital human, and stylized digital human by different audio was tested. The results are shown in Fig. 4.

Fig. 3: Visual effects of generating digital human face animations.

Fig. 4: Visual effects of generating digital human animations.

Combining Fig. 3 and Fig. 4, it can be concluded that the proposed method can effectively realize the generation of digital human face and full-body digital human. In addition, the method realizes the effects of age conversion, 3D, style transfer, and dress-up by prior art, which enriches the user experience.

### _Performance indicators_

To further validate the generative effectiveness of the proposed method, we also tested seven generated digital human videos mentioned above using image and video quality assessment methods. In terms of image quality evaluation metrics, CPBD [43] for evaluating blur and CGIQA [44] for evaluating CG animation were selected. Besides, two advanced video methods were chosen, including VSFA [42] and FastVQA [45]. In this case, the raw output of CPBD was recorded, while the outputs of the CGIQA, VSFA and FAST-VQA, were all normalized. The higher the four metrics, the higher the quality of the generated digital human video.
It is worth noting that the image quality assessment method evaluates each frame of the video and finally the average result is recorded. All methods are performed using source code provided by the authors. The test results are shown in Table I. As can be seen in Table I, the generated digital human videos are clear enough and have some aesthetic value. In addition, most of the generated videos achieved satisfactory performance on the two advanced video quality evaluation metrics. This further proves the effectiveness of the proposed method. ## IV Conclusion In this paper, we propose a method for implementing an intelligent digital human generation system with multimodal fusion. The proposed method performs separate preprocessing for text, speech, and image. Then, the preprocessed media-rich data is used to generate vivid digital human videos through digital human driving methods, intelligent dressing technology, and novel view synthesis. Finally, to enhance the user experience, the post-processing module offers three optional functions: style transfer, super-resolution, and quality assessment. Furthermore, we have developed an effective and feasible multimodal fusion system for intelligent digital human generation using existing technologies based on the proposed method. The system has been validated with different images and audio, demonstrating its ability to achieve the expected results. Given the features of this system, we anticipate revolutionary applications in various fields such as entertainment, film, games, and clothing sales.
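As a concrete illustration of the frame-wise evaluation protocol described above (a per-frame image-quality score averaged over the video, with the outputs of different metrics normalized to a common range), the following is a minimal sketch. It is not the evaluation code used here: `score_frame` stands in for any per-frame metric such as CPBD or CGIQA, and min-max normalization is only one possible normalization choice.

```python
# Minimal sketch of frame-wise video scoring; `score_frame` is a placeholder IQA callable.
import cv2
import numpy as np

def video_score(path, score_frame):
    """Average a per-frame image-quality metric over all frames of the video at `path`."""
    cap = cv2.VideoCapture(path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        scores.append(score_frame(frame))          # frame: BGR uint8 array
    cap.release()
    return float(np.mean(scores))

def minmax_normalize(values):
    """Map raw metric outputs to [0, 1] so that different metrics become comparable."""
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

# usage sketch:
# raw = [video_score(p, score_frame=my_metric) for p in video_paths]
# normalized = minmax_normalize(raw)   # one row of a results table
```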
2309.13908
A comparison of controller architectures and learning mechanisms for arbitrary robot morphologies
The main question this paper addresses is: What combination of a robot controller and a learning method should be used, if the morphology of the learning robot is not known in advance? Our interest is rooted in the context of morphologically evolving modular robots, but the question is also relevant in general, for system designers interested in widely applicable solutions. We perform an experimental comparison of three controller-and-learner combinations: one approach where controllers are based on modelling animal locomotion (Central Pattern Generators, CPG) and the learner is an evolutionary algorithm, a completely different method using Reinforcement Learning (RL) with a neural network controller architecture, and a combination `in-between' where controllers are neural networks and the learner is an evolutionary algorithm. We apply these three combinations to a test suite of modular robots and compare their efficacy, efficiency, and robustness. Surprisingly, the usual CPG-based and RL-based options are outperformed by the in-between combination that is more robust and efficient than the other two setups.
Jie Luo, Jakub Tomczak, Karine Miras, Agoston E. Eiben
2023-09-25T07:11:43Z
http://arxiv.org/abs/2309.13908v1
# A comparison of controller architectures and learning mechanisms for arbitrary robot morphologies ###### Abstract The main question this paper addresses is: What combination of a robot controller and a learning method should be used, if the morphology of the learning robot is not known in advance? Our interest is rooted in the context of morphologically evolving modular robots, but the question is also relevant in general, for system designers interested in widely applicable solutions. We perform an experimental comparison of three controller-and-learner combinations: one approach where controllers are based on modelling animal locomotion (Central Pattern Generators, CPG) and the learner is an evolutionary algorithm, a completely different method using Reinforcement Learning (RL) with a neural network controller architecture, and a combination 'in-between' where controllers are neural networks and the learner is an evolutionary algorithm. We apply these three combinations to a test suite of modular robots and compare their efficacy, efficiency, and robustness. Surprisingly, the usual CPG-based and RL-based options are outperformed by the in-between combination that is more robust and efficient than the other two setups. evolutionary robotics, Reinforcement learning, controller, learning algorithm, CPG ## I Introduction Enabling robots to learn tasks automatically is an important feature on its own, and also necessary within an evolutionary robot system, where both the morphologies (bodies) and the controllers (brains) are developed by evolution. In such systems, 'newborn' robots should undergo a learning phase to fine-tune the inherited brain to the inherited body quickly after birth [1, 2]. This raises the question: what combination of a robot controller and a learning method should be used in the robots' morphology which is not known in advance? In general, a robot's ability to learn a task depends on three major system components, namely, the body (morphology, hardware), the brain (controller, software), and the learning algorithm. In the current literature, the majority of studies investigate controller optimization using multiple learning algorithms, but focusing on a specific control architecture [3, 4]; comparisons of different control architectures and learning methods for learnable controllers and arbitrary modular robots are rarely carried out. This study makes a step towards closing this gap by comparing three different combinations of a specific control architecture and a learning algorithm. The possible control architectures are Central Pattern Generator (CPG), Artificial Neural Network (ANN) and Deep Reinforcement Learning (DRL) policy controller. The possible learning algorithms are Reversible Differential Evolution (RevDE) [5] representing semi-supervised learning and Proximal Policy Optimization (PPO) [6] representing reinforcement learning. The combinations we compare here are CPG+RevDE, DRL+PPO, and ANN+RevDE. The motivation behind these choices is as follows. Using CPGs is a well-established, biologically plausible option to control modular robots actuated through joints, where learning can be performed by any heuristic black-box optimization method. RevDE is one such method that proved to be successful in the past for this application. Deep Reinforcement Learning is also a straightforward and increasingly popular option for robot learning with implications for the appropriate controller architecture, namely the use of ANNs. 
Additionally, we test the ANN+RevDE combination as an 'in-between' option that, to our best knowledge, has not been investigated previously. The main contribution of this work is threefold: 1. It demonstrates a test-suite based approach to experimental research into robot learning, where the robots that make up the test suite are not only hand-picked, but also generated algorithmically. 2. Furthermore, the controller-and-learner combinations are not only compared by the usual performance measures, efficiency and efficacy, but also by robustness, i.e., stability or consistency over the different robot morphologies. 3. It provides an empirical assessment of three options, including two 'usual suspects' that researchers in the field are likely to consider: CPG-based controllers with a good weight optimizer and a Deep Reinforcement Learning method. The results indicate a surprising outcome, both of these methods are outperformed by the third one, ANN+RevDE. ## II Related Work ### _Robot Controllers_ A popular class of controllers is based on utilizing Artificial Neural Networks (ANN). The optimization of an ANN is typically done by approximating gradients for gradient-based methods or by applying derivative-free methods to alter internal weights and biases of all neurons within the ANN [7, 8, 9, 10, 11, 12, 13]. Alternatively, reinforcement learning (RL) could be used to update the controller [14, 15, 16]. Here, we focus on one specific implementation of RL that utilizes two networks: a controller network (also called a policy controller), and a surrogate model, an additional neural network - a critic network - to update the parameters of the controller. A popular approach relies on the idea inspired by biology that aims at creating rhythmic patterns to control the motion of the robots. These approaches use different controller types and learning algorithms for creating rhythmic patterns. Early approaches used Control Tables [17, 18], where each column of a table contains a set of actions for a module in the configuration, and Simple Sinusoidal, in which a specific sinusoidal function is utilized for each motor providing an easy way to parameterize a control pattern [19, 20, 21]. These methods were followed by a controller architecture called Cyclic Splines [22, 23] in which a spline is fitted through a set of action points in time to define a periodic control sequence (_i.e._ control policy). Another successful (bio-inspired) controller called Central Pattern Generators (CPGs) [24] was based on the spinal cord of vertebrates and can produce stable and well-performing gaits on both non-modular robots [25, 26] and modular robots [26, 27, 28]. CPGs are biological neural circuits that produce rhythmic output in the absence of rhythmic input [29]. In this work, we use CPGs to parameterize a controller and create biologically plausible motion patterns. ### _Controller Learning Algorithms_ The problem of controller learning in robotics could be phrased as the _black-box optimization_ problem [30, 31] since we need to either run a simulation or a physical robot to obtain a value of the objective function (or the fitness function). There is a vast amount of literature on learning algorithms on only one type of controller [32, 33, 4, 4], naming only a few. In [33], a comparison of three learning algorithms in modular robots is performed where _NIP-Evolutionary Strategies_, _Bayesian Optimization_ and _Reversible Differential Evolution_ (RevDE) [35] are tested. 
The outcome of this study indicates that the shape of the fitness landscape in evolutionary strategies hints at a possible bias for morphologies with many joints. This could be an unwanted property for the implementation of lifetime learning because an algorithm should work consistently on different kinds of morphologies. Bayesian Optimization is good at sample efficiency, however, it requires much more time compared to the other two methods due to the higher time complexity (cubic complexity). The best-performing algorithm in this comparison was RevDE which scales well in terms of complexity and generalizes well across various morphologies. Therefore, we use RevDE in this paper. Moreover, we apply Proximal Policy Optimization (PPO) in the context of RL. PPO is a family of model-free RL learning algorithms that search the space of policies rather than assigning values to state-action pairs [36]. It was used in recent research [37] and performs well across various morphologies. ## III Methodology ### _Robot Controllers_ In this research, our task is gait learning, therefore the controllers we use are open-loop controllers without steering. The choice of a robot controller is a crucial design decision and determines the resulting search space and, as a consequence, the behaviour of a robot. Different types of controllers may require different inputs, e.g. DRL-Policy controller and ANN controller need observations from the environment as input, however, CPG does not reply on observation in an open-loop controller. Moreover, the number of parameters to be optimized in each type of controller can be different. Last but not least, the outputs differ too. CPG and ANN controllers output actions to the hinges directly while the DRL-Policy controller output the action distribution. #### Iii-A1 CPG controller Each robot hinge i is associated with a CPG that is defined by three neurons: a \(x_{i}\)-neuron, a \(y_{i}\)-neuron, and an \(out_{i}\)-neuron, which are recursively connected to produce oscillatory behaviour. The CPG network structure we used has two layers: 1. Internal connection: The change of the \(x_{i}\) and \(y_{i}\) neurons' states with respect to time is calculated by multiplying the activation value of the opposite neuron with a weight. To reduce the search space, we define \(w_{x_{i}y_{i}}\) to be \(-w_{y_{i}x_{i}}\) and call their absolute value \(w_{i}\) and set \(w_{x_{i}o_{i}}\) =1. The resulting activations of neurons \(x\) and \(y\) are periodic and bounded. The initial states of all \(x\) and \(y\) neurons are set to \(\frac{\sqrt{2}}{2}\) because this leads to a sine wave with amplitude 1, which matches the limited rotating angle of the joints. 2. External connection: CPG connections between neighbouring hinges. Two hinges are said to be neighbours if their tree-based distance (how many edges between one node and the other) is less than or equal to two. \(x\) neurons depend on neighbouring \(x\) neurons in the same way as they depend on their \(y\) partner. Let \(i\) be the number of the hinge, N\({}_{\text{i}}\) the set of indices of hinges neighbouring hinge \(i\), and \(w_{ij}\) the weight between \(x_{i}\) and \(x_{j}\). Again, \(w_{ji}\) is set to be \(-w_{ij}\). The extended system of differential equations is then: \[\begin{split}\dot{x}_{i}&=w_{i}y_{i}+\sum_{j\in \mathcal{N}_{i}}w_{x_{j}x_{i}}x_{j}\\ \dot{y}_{i}&=w_{i}x_{i}\end{split}\] (1) Because of this addition, \(x\) neurons are no longer bounded between \([-1,1]\). 
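Before turning to how the CPG outputs are bounded, the following minimal sketch integrates the coupled dynamics of Eq. (1) for a toy chain of three hinges. It is illustrative only: the weights, the adjacency and the Euler integration step are arbitrary choices, and the sign convention follows the stated antisymmetry \(w_{x_{i}y_{i}}=-w_{y_{i}x_{i}}\), which is what yields the bounded, periodic activations described in the text (the tanh squashing of the outputs, introduced formally just below, is anticipated in the last line).

```python
# Illustrative sketch: Euler integration of the coupled CPG dynamics of Eq. (1)
# for a toy chain of three hinges; weights and adjacency are arbitrary choices.
import numpy as np

n_hinges = 3
neighbours = {0: [1], 1: [0, 2], 2: [1]}     # toy adjacency (the paper couples hinges within tree distance <= 2)
w_internal = np.array([0.8, 1.0, 1.2])       # w_i, one per hinge
w_external = {(0, 1): 0.3, (1, 2): -0.2}     # w_ij with the convention w_ji = -w_ij

def w_ext(j, i):
    """Weight w_{x_j x_i} acting from neighbour j on hinge i."""
    return w_external[(j, i)] if (j, i) in w_external else -w_external[(i, j)]

x = np.full(n_hinges, np.sqrt(2) / 2)        # initial states sqrt(2)/2, as in the text
y = np.full(n_hinges, np.sqrt(2) / 2)

dt, steps = 0.01, 3000
outputs = []
for _ in range(steps):
    coupling = np.array([sum(w_ext(j, i) * x[j] for j in neighbours[i]) for i in range(n_hinges)])
    dx = w_internal * y + coupling           # x-equation of Eq. (1)
    dy = -w_internal * x                     # antisymmetric partner, giving oscillation
    x, y = x + dt * dx, y + dt * dy
    outputs.append(np.tanh(x).copy())        # hinge commands squashed into [-1, 1]
print(np.array(outputs).shape)               # (steps, n_hinges)
```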
To achieve this binding, we use a variant of the sigmoid function, the hyperbolic tangent function (tanh), as the activation function of \(out_{i}\)-neurons. The total number of weight parameters per robot that we have to optimise for the CPG network is the sum of the weights of these two connection types: \(\text{CPG\_N}_{\text{param}}=N_{\text{hinges}}+N_{\text{connections}}\), where \(N_{\text{connections}}\) is the number of neighbouring CPG pairs. Take the spider for example: it has 8 CPGs (hinges) and 10 pairs of neighbouring connections between CPGs, therefore the total number of weight parameters is 18.

#### Iii-A2 ANN controller

In an ANN robot controller, the ANN internally connects an input layer of neurons to an output layer that triggers the actuators, possibly via a layer of hidden neurons. The output of a previous layer is multiplied by corresponding weights before being summed with a bias term and thus serves as input for the next layer. Here, the main components of the ANN (a.k.a. Actor network) are: 1. _Single Observation Encoders_. A sub-network for encoding a single type of observation. In our research, we use two types of observations: the state of each hinge (activation of the hinge between -1 and 1, which is its motion range) and the orientation of the robot (based on the core module of the robot). The input of the coordinates observation network has \(N_{\text{hinges}}\cdot 3\) dimensions and the input of the orientation observation network has 4 dimensions. The output of both networks is a 32-dimensional vector produced through a linear layer followed by a tanh activation function. 2. _Observation Encoder_ A network that concatenates the encoded observations. It receives inputs from the two Single Observation Encoders and passes the encoded observations, which are [32+32=64] dimensional, through a linear layer followed by a tanh activation function to produce the final output of a 32-dimensional vector. 3. _Actor_ Takes the concatenated encoded observations as input and outputs the action to be taken by the robot. The dimension of the action is based on the number of the robot's hinges. The total number of parameters per robot to be optimized is equal to the sum of the parameters of the Single Observation Encoder, Observation Encoder, and Actor: \(ANN\_N_{\text{param}}=32\cdot(N_{\text{hinges}}\cdot 3+4+1)+32\cdot(64+1)+N_{\text{hinges}}\cdot(32+1)\).

#### Iii-A3 DRL-Policy controller

The Deep Reinforcement Learning (DRL) paradigm provides a way to learn efficient representations of the environment from high-dimensional sensory inputs, and to use these representations to interact with the environment in a meaningful way. At each time-step, the robot senses the world by receiving observations \(o_{t}\) provided by the simulator, then it takes an action \(a_{t}\), and is given a reward \(r_{t}\). A policy \(\pi_{\theta}(a_{t}\mid o_{t})\) models the conditional distribution over actions \(a_{t}\in A\) given an observation \(o_{t}\in O(s_{t})\). The goal is to find a policy which maximizes the expected cumulative reward R under a discount factor \(\gamma\in(0,1)\). **Policy controller** Policy \(\pi_{\theta}\), as the robot's behaviour function, tells us which action to take in state s. In our research, the implementation of the policy controller has an Actor network, a Critic network, and an ActorCritic network that merges the two. 1. Actor network: a deep neural network that outputs a Gaussian distribution over the possible actions given an observation.
Similar to the ANN controller, observations are encoded into a 32-dimensional vector, but instead of producing actions directly, it produces the action probability using two hidden layers (mean_layer and std_layer). 2. Critic network: a deep neuron network which outputs a single scalar value that approximates the expected return of the current state of the input observation. 3. ActorCritic network: the primary component that combines the Actor and Critic networks and allows for sampling actions or computing their probabilities and the value of an observation. It can either output the action distribution, the state-value function or both, along with the log-probability of the actions taken and the entropy of the action distribution. The robot chooses its action via the policy \(\pi_{\theta}\) where \(\theta\) are the parameters of these three NNs which will be optimized by a DRL algorithm called Proximal Policy Optimization (PPO). The total number of parameters per robot we have to optimize for ActorCritic Network is equal to the sum of the parameters of the Actor, Critic, and ObservationEncoder sub-modules: \(DRL\_N_{\text{param}}=(N_{\text{hinges}}\cdot(32+1)+2\cdot N_{\text{hinges}} \cdot(N_{\text{hinges}}+1))+(1\cdot 32+1)+(32\cdot(N_{\text{hinges}}\cdot 3+4+1)+32 \cdot(64+1))\). ### _Learning Methods_ The problem of learning a robot controller is stated as a maximization problem of a function (reward or fitness) that is non-differentiable and could be given only after running a real-world experiment or a simulation. Since we cannot calculate the gradients concerning the controller weights, we must apply other learning methods that either utilize approximate gradients (e.g., through surrogate models) or derivative-free methods. In the following paragraphs, we present details of a specific derivative method (RevDE) and an instance of RL (PPO). #### Iii-B1 RevDE In a recent study on modular robots [33], it was demonstrated that Reversible Differential Evolution (RevDE) [5], an altered version of Differential Evolution, performs and generalizes well across various morphologies. This method works as follows [35]: 1. Initialize a population with \(\mu\) samples (\(n\)-dimensional vectors), \(\mathcal{P}_{\mu}\). 2. Evaluate all \(\mu\) samples. 3. Apply the reversible differential mutation operator and the uniform crossover operator. _The reversible differential mutation operator_: Three new candidates are generated by randomly picking a triplet from the population, \((\mathbf{w}_{i},\mathbf{w}_{j},\mathbf{w}_{k})\in\mathcal{P}_{\mu}\), then all three individuals are perturbed by adding a scaled difference in the following manner: \[\mathbf{v}_{1} =\mathbf{w}_{i}+F\cdot(\mathbf{w}_{j}-\mathbf{w}_{k})\] (2) \[\mathbf{v}_{2} =\mathbf{w}_{j}+F\cdot(\mathbf{w}_{k}-\mathbf{v}_{1})\] \[\mathbf{v}_{3} =\mathbf{w}_{k}+F\cdot(\mathbf{v}_{1}-\mathbf{v}_{2})\] where \(F\in R_{+}\) is the scaling factor. New candidates \(y_{1}\) and \(y_{2}\) are used to calculate perturbations using points outside the population. This approach does not follow the typical construction of an EA where only evaluated candidates are mutated. 
_The uniform crossover operator_: Following the original DE method [38], we first sample a binary mask \(\mathbf{m}\in\{0,1\}^{D}\) according to the Bernoulli distribution with probability \(CR\) shared across \(D\) dimensions, and calculate the final candidate according to the following formula: \[\mathbf{u}=\mathbf{m}\odot\mathbf{w}_{n}+(1-m)\odot\mathbf{w}_{n}.\] (3) Following general recommendations in literature [39] to obtain stable exploration behaviour, the crossover probability CR is fixed to a value of \(0.9\) and the scaling factor \(F\) is fixed to a value of 0.5. 4. Perform a selection over the population based on the fitness value and select \(\mu\) samples. 5. Repeat from step (2) until the maximum number of iterations is reached. As explained above, we apply RevDE here as a learning method for our robot zoo. In particular, it will be used to optimize the weights of the CPGs and the parameters of ANN controllers of our modular robots for the task. #### Iii-B2 Ppo We use the Proximal Policy Optimization (PPO) [6] algorithm to optimize a policy. It improves training stability by using a clipped surrogate objective enforcing a divergence constraint on the size of the policy update at each iteration so that the parameter updates will not change the policy too much per step. Let us denote the probability ratio between old and new policies as follows: \[r(\theta)=\frac{\pi_{\theta}(a_{t}|o_{t})}{\pi_{\theta_{old}}(a_{t}|o_{t})} \tag{4}\] Then, the objective function of PPO (on policy) is the following: \[L^{CLIP}(\theta)=\mathbb{E}_{s,a}[\min\{r_{t}(\theta)\hat{A}(s, a),\\ clip(r_{t}(\theta),1-\epsilon,1+\epsilon)\hat{A}(s,a)\}] \tag{5}\] where \(\hat{A}\) is an estimate of the advantage function. PPO imposes its constraint by enforcing a small interval around 1, \([1-\epsilon,1+\epsilon]\) to be exact, where \(\epsilon\) is a hyperparameter. The function \(clip(r(\theta),1-\epsilon,1+\epsilon)\) clips the ratio to be no more than \(1+\epsilon\) and no less than \(1-\epsilon\). The objective function of PPO takes the minimum of the original value and the clipped version, and thus we lose the motivation for increasing policy updates to extremes for better rewards. We use Generalized Advantage Estimation(GAE) [40] to estimate the advantage function \(\hat{A}\). We adopt an open-source implementation of PPO [41] for our research. ### _Frameworks: control architecture + learning method_ We consider three combinations (frameworks) of control architectures and learning methods. The set-up of these three frameworks is shown in Figure 1. First, we use CPG-based controllers trained by a derivative-free method RevDE (see Figure 1-a). Second, we consider an MLP-based ANN controller trained by the same learner RevDE (see Figure 1-b). Lastly, we use a DRL-policy based controller trained by PPO (see Figure 1-c). CPGs are not combined with PPO because our CPGs do not receive inputs (states). ANN and DRL are somewhat equivalent because they both are ANN-based controllers, however, DRL has one extra Critic NN therefore more parameters to be optimized. The outputs of the actor network in these two controllers are different too. Fig. 1: Schematic representations of three learning controller frameworks. The blue boxes show the controllers and the yellow boxes show the learners. In 1-(a), we show a spider as an example of robot morphology. The topology of the morphology determines the topology of the controller. The learner RevDE optimizes the weights of the CPG controller. 
In 1-(b), the learner RevDE optimizes the parameters of the ANN controller. 1-(c) is a DRL framework using PPO as the learning algorithm to change the parameters of two deep NNs to improve the policy controller for the tasks. ### _Test suite of robot morphologies_ Given a set of robot modules and ways to attach them to functional robots, the number of possible configurations (thus, the number of possible robot morphologies) is, in general, infinite. For practically feasible empirical research, we need a limited set of robots to serve as test cases. In this paper, we use a test suite of twenty robots made of two parts: a set of viable and diverse robots produced by an evolutionary process, and a set of hand-picked robots [33]. Regarding the first part, the key is to apply task-based fitness (viability) together with novelty search (diversity). The second part of the test suite can be filled by robots added manually by the experimenter. This option is entirely optional, it is to accommodate subjective preferences and interest in particular robot designs. The test suite we use here was generated by evolving a population of \(500\) robots for speed and novelty w.r.t. the _k-nearest neighbours_ in the morphological space [42]. After termination, \(15\) out of the \(500\) robots were selected by maximizing the pairwise Euclidean distance in the morphology space. The other five robots (Gecko, Snake, Spider, BabyA, BabyB) were added manually. The robots are shown as inserts in Figure 5. ## IV Experimental setup #### Iv-D1 Simulator We use a Mujoco simulator-based wrapper called Revolve2 to run the experiments. To have a fair comparison, we set the number of evaluations to be the same for each learner: 1000 learning evaluations. This number is based on the evaluations from RevDE for running 10 initial samples with 34 iterations. The first iteration contains 10 samples, and from the second iteration onwards each iteration creates 30 new candidates, resulting in a total of \(10+30\cdot(34-1)=1000\) evaluations. Then with the same evaluation number 1000, we set PPO with 10 agents per iteration and 100 episodes. For the task of gait learning, we define the robot's fitness as its average speed in 30s, i.e. absolute distance in centimetres per second (cm/s). #### Iv-D2 Setups and Code The code for carrying out the experiments is available online: [https://shorturl.at/gozS3](https://shorturl.at/gozS3). A video showing examples of robots from the experiments can be found in [https://shorturl.at/gGHR3](https://shorturl.at/gGHR3). Table I shows the set-up of the experiments. The specific values of the hyperparameters are presented in Table II. ## V Results To compare the different frameworks, we consider three key performance indicators: efficiency, efficacy, and robustness to different morphologies. #### V-1 Efficacy The quality of a robot (fitness) is defined by the speed of the robot from the starting position to the stopping position within the simulation time. The efficacy of a method is defined by the mean maximum fitness, averaged over the 20 independent repetitions: First, the maximum fitness achieved at the end of the learning process (1000 evaluations) is calculated within each independent repetition. Second, these maximum values are averaged over the 20 independent repetitions. In Figure 2, the dots indicate the maximum fitness in each evaluation (averaged over 20 runs). We can see that with the same learner (RevDE), the ANN controller outperforms the CPG controller significantly. 
ANN+RevDE achieves a two times higher fitness value compared to CPG+RevDE at the end of the 1000 evaluations. This could be due to CPGs producing more connected actions with fewer controller parameters, while ANNs have many more parameters to optimize and output the action probability instead of the action itself, which produces actions that are very different even in subsequent time steps, eventually helping the exploration. The mean maximum fitnesses of ANN+RevDE and DRL+PPO show no significant difference initially, but after evaluation 200, ANN+RevDE yields much higher fitness values than DRL+PPO. Second, another way to measure the quality of the solution is to give the same computational budget (number of evaluations) and measure which method finds the best solution (highest fitness) faster. In Figure 3, the differences among the three frameworks are more significant at evaluations 600 and 1000 than at 200. At 200 evaluations, DRL+PPO has the highest mean fitness value, while at 600 and 1000 evaluations, ANN+RevDE surpasses DRL+PPO. With regards to CPG+RevDE, the fitness increases more slowly than for the other two methods.

\begin{table} \begin{tabular}{l|l|l} \hline \hline **CPG+RevDE** & Value & Description \\ \hline \(\mu\) & 10 & Population size \\ N & 30 & New candidates per iteration \\ \(\lambda\) & 10 & Top-sample size \\ \(F\) & 0.5 & Scaling factor \\ \(CR\) & 0.9 & Crossover probability \\ Iterations & 34 & Number of iterations in RevDE \\ \hline \hline **ANN+RevDE** & Value & Description \\ \hline \(\mu\) & 30 & Population size \\ N & 30 & New candidates per iteration \\ \(\lambda\) & 10 & Top-samples size \\ \(F\) & 0.5 & Scaling factor \\ \(CR\) & 0.9 & Crossover probability \\ Iterations & 34 & Number of iterations in RevDE \\ \hline \hline **DRL+PPO** & Value & Description \\ \hline \(\gamma\) & 0.2 & Discount gamma \\ \(\epsilon\) & 0.2 & PPO clipping parameter epsilon \\ Entropy coefficient & 0.01 & Entropy coefficient \\ Value loss coefficient & 0.5 & Value loss coefficient \\ Episode & 100 & A sequence of states, actions and rewards \\ Agents & 10 & Number of agents per episode \\ Steps & 150 & Number of steps before training \\ \hline \hline \end{tabular} \end{table} TABLE II: Main experiment hyperparameters

\begin{table} \begin{tabular}{l|l|l} \hline \hline **Experiment** & **Control Architecture** & **Learner** \\ \hline CPG+RevDE & CPG & RevDE \\ ANN+RevDE & ANN & RevDE \\ DRL+PPO & DRL-Policy & PPO \\ \hline \hline \end{tabular} \end{table} TABLE I: Experiments

#### Iv-A2 Efficiency

Efficiency indicates how much effort is needed to reach a given quality threshold (the fitness level): it is measured as the average number of evaluations to 'find a solution'. Figure 2 displays the usual quality-versus-effort plots, specifically the mean fitness over the number of evaluations. Looking at the solid curves reveals that ANN+RevDE is more efficient than the other methods. As marked by the red dotted lines, it takes only 370 evaluations for ANN+RevDE (purple curve) to reach the level of fitness that the CPG+RevDE method (green curve) achieves at the end of the learning period, 1000 evaluations. Similarly, the black dotted lines mark the number of evaluations at 730 when ANN+RevDE achieved the levels of fitness that DRL+PPO reached after 1000 evaluations.

#### Iv-A3 Robustness

The robustness of a framework is defined by the variance in different robot morphologies.
We can measure this by the variance of a framework's mean maximum fitness over the robot zoo and the mean fitness per robot over the number of evaluations. Figure 5 shows the mean and maximum fitness of the three frameworks over the number of evaluations per robot from the Robot Zoo. The bands indicate the 95% confidence intervals (\(\pm 1.96\times SE\), Standard Error). CPG+RevDE has a narrower band than the other two frameworks, which indicates lower uncertainty and is more stable.

Fig. 2: Mean fitness over 1000 evaluations across morphologies (averaged over 20 runs) for 3 experiments. The dots indicate the mean maximum fitness in each evaluation (averaged over 20 runs). The shaded areas show the standard deviation.

Fig. 3: Efficacy boxplot. Validation of three frameworks at three evaluations. Red dots show mean values.

Fig. 4: (a) Mean maximum fitness per morphology for each framework. For each robot morphology, there are columns with the numbers of controller parameters, namely Np_ANN in purple, Np_DRL in blue and Np_CPG in green; the best result is indicated with boldface, while underlining indicates significantly better performance compared to the other frameworks. The last row shows the aggregated result for each framework over all morphologies. (b) The correlation between the number of controller parameters and the mean maximum fitness per framework. The red lines are the linear regression lines.

In Figure 4-a, we present a numerical summary of the results of the mean maximum fitness per robot per framework. It shows that ANN+RevDE significantly outperforms CPG+RevDE and DRL+PPO on 16 robots: a, b, c, d, e, f, g, j, k, l, m, n, q, r, s, t. DRL+PPO wins on robots i, o, p significantly and on robot h non-significantly. Figure 4-b exhibits the correlation between the number of controller parameters and the mean maximum fitness per framework. The dots in each plot represent the 20 robots of the robot zoo. The results indicate that the deep neural network-based frameworks (ANN+RevDE and DRL+PPO) show a negative linear relationship between the number of controller parameters and the fitness value, while the CPG-based framework shows a positive linear relationship. Among the frameworks, the significant difference in the number of controller parameters between the deep NN-based and the CPG-based controllers is reflected in their fitnesses (note the different y-scales).

## VI Conclusions and Future Work

This work investigated different combinations of control architectures with learning algorithms applied to a diverse set of robot morphologies. Regarding efficacy and efficiency, the ANN+RevDE framework achieved levels of quality that the other two frameworks managed to achieve only at later stages of the learning period. As for robustness, all three frameworks successfully optimized all robots. However, ANN+RevDE outperformed DRL+PPO or CPG+RevDE significantly on 16 robots, while DRL+PPO outperformed on only 3 robots. Therefore ANN+RevDE is the best-performing learning controller framework in all three measures. Interestingly, with the same learning algorithm (RevDE), the CPG controller performs more steadily with a lower standard deviation, while the ANN controller takes longer to explore at the beginning and then increases steeply with a much higher standard deviation (Figures 2 and 3). This can be due to the significant difference in the number of parameters in the different controllers, but future research is needed to investigate this phenomenon.

## References

* [1] A. Eiben, N. Bredeche, M. Hoogendoorn, J. Stradner, J. Timmis, A.
2309.09788
Exponentially many graphs are determined by their spectrum
As a discrete analogue of Kac's celebrated question on "hearing the shape of a drum", and towards a practical graph isomorphism test, it is of interest to understand which graphs are determined up to isomorphism by their spectrum (of their adjacency matrix). A striking conjecture in this area, due to van Dam and Haemers, is that "almost all graphs are determined by their spectrum", meaning that the fraction of unlabelled $n$-vertex graphs which are determined by their spectrum converges to $1$ as $n\to\infty$. In this paper we make a step towards this conjecture, showing that there are exponentially many $n$-vertex graphs which are determined by their spectrum. This improves on previous bounds (of shape $e^{c\sqrt{n}}$), and appears to be the limit of "purely combinatorial" techniques. We also propose a number of further directions of research.
Illya Koval, Matthew Kwan
2023-09-18T14:05:09Z
http://arxiv.org/abs/2309.09788v1
# Exponentially many graphs are determined by their spectrum ###### Abstract. As a discrete analogue of Kac's celebrated question on "hearing the shape of a drum", and towards a practical graph isomorphism test, it is of interest to understand which graphs are determined up to isomorphism by their spectrum (of their adjacency matrix). A striking conjecture in this area, due to van Dam and Haemers, is that "almost all graphs are determined by their spectrum", meaning that the fraction of unlabelled \(n\)-vertex graphs which are determined by their spectrum converges to \(1\) as \(n\to\infty\). In this paper we make a step towards this conjecture, showing that there are exponentially many \(n\)-vertex graphs which are determined by their spectrum. This improves on previous bounds (of shape \(e^{c\sqrt{n}}\)), and appears to be the limit of "purely combinatorial" techniques. We also propose a number of further directions of research. Matthew Kwan was supported by ERC Starting Grant "RANDSTRUCT" No. 101076777. \({}^{1}\)The _adjacency matrix_ of a (simple) graph \(G\), with vertices \(v_{1},\ldots,v_{n}\), is the zero-one matrix \(\operatorname{A}(G)\in\{0,1\}^{n\times n}\) whose \((i,j)\)-entry is \(1\) if and only if \(G\) has an edge between \(v_{i}\) and \(v_{j}\). \({}^{2}\)The number of _labelled_ graphs on a particular set of \(n\) vertices is \(2^{n(n-1)/2}\), and it is well-known (see for example [16, Lemma 2.3.2]) that all but a vanishingly small fraction of these have a trivial automorphism group. make a conjecture. It seems that Conjecture 1.2 first appeared explicitly in a paper of Haemers [19]. Vu seems to have arrived at Conjecture 1.2 via quite a different pathway: in [35] he presents it as a graph-theoretic variant of a similar conjecture in random matrix theory. We also remark that Garijo, Goodall and Nesetrili [14] and Noy [28] situated Conjecture 1.2 in (different) general frameworks which include a number of other questions about reconstructing graphs from various types of information. Conjecture 1.2 is rather bold, on account of the fact that there are very few known examples of DS graphs. Indeed, to show that a graph \(G\) is DS (without exhaustively computing the spectra of all other graphs on the same number of vertices), it seems necessary to somehow translate information about the spectrum of \(G\) into information about the combinatorial structure of \(G\). Spectral graph theory has a number of different tools along these lines, but all of them are rather crude, and essentially all known examples of DS graphs have very special structure. (For example, to prove that complete graphs are DS, one uses the fact that the \(n\)-vertex complete graph is the only \(n\)-vertex graph with exactly \(\binom{n}{2}\) edges). To the best of our knowledge, the best lower bounds on the number of DS graphs are all of the form \(e^{c\sqrt{n}}\) for some constant \(c>0\). Such a bound was first observed by van Dam and Haemers [33, Proposition 6], who proved that \(G\) is DS whenever every connected component of \(G\) is a complete subgraph (the number of graphs of this form is precisely the number of integer partitions of \(n\), which is approximately \(e^{c\sqrt{n}}\) for \(c=\pi\sqrt{2/3}\) by the Hardy-Ramanujan theorem [23]). Several other families of graphs, similarly enumerated by integer partitions, have since been discovered (see for example [32, 38]). 
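To get a rough quantitative feel for these \(e^{c\sqrt{n}}\)-type bounds, the following short Python sketch (ours, purely for illustration) counts integer partitions with the standard dynamic-programming recurrence and compares \(p(n)\) with the leading-order Hardy-Ramanujan approximation \(e^{\pi\sqrt{2n/3}}/(4n\sqrt{3})\).

```python
import math

def partition_counts(n_max):
    """p(0), ..., p(n_max) via the classic dynamic programme over allowed part sizes."""
    p = [1] + [0] * n_max
    for part in range(1, n_max + 1):
        for n in range(part, n_max + 1):
            p[n] += p[n - part]
    return p

def hardy_ramanujan(n):
    """Leading-order Hardy-Ramanujan approximation to p(n)."""
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

p = partition_counts(100)
for n in (20, 50, 100):
    print(n, p[n], round(hardy_ramanujan(n)))
```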
On the other hand, there has been much more progress in the _opposite direction_ to Conjecture 1.2, proving lower bounds on the number of graphs which are _not_ DS. For example, a famous result of Schwenk [30] says that only a vanishingly small fraction of trees are DS (meaning that almost all of the exponentially many unlabelled \(n\)-vertex trees are non-DS), and, using an operation that is now known as _Godsil-McKay switching_, Godsil and McKay [15] (see also [20]) proved that the number of \(n\)-vertex graphs which are not DS is at least \[(1-o(1))\frac{n^{2}}{12\cdot 2^{n}}\cdot\frac{2^{\binom{n}{2}}}{n!}.\] In this paper we prove the first exponential lower bound on the number of DS graphs, finally breaking the "\(e^{c\sqrt{n}}\) barrier" (and thereby answering a question of van Dam and Haemers [33]). **Theorem 1.4**.: _The number of (unlabelled) \(n\)-vertex graphs determined by their spectrum is at least \(e^{cn}\) for some constant \(c>0\)._ _Remark 1.5_.: Our proof shows that we can take \(c=0.01\) for large \(n\), but we made no serious attempt to optimise this. We will outline our proof strategy in Section 2, but to give a quick impression: we consider an explicit family of "nice graphs", each consisting of a long cycle with leaves attached in various carefully-chosen ways. Then, we consider a family of \(n\)-vertex graphs \(\mathcal{Q}_{n}\) obtained by combining complete graphs with _line graphs_3 of nice graphs, in such a way that certain inequalities and number-theoretic properties are satisfied. We then prove that there are exponentially many graphs in \(\mathcal{Q}_{n}\), and that all graphs in \(\mathcal{Q}_{n}\) are determined by their spectrum. We remark that there is an essential tension in the choice of \(\mathcal{Q}_{n}\): in order to prove a strong lower bound we would like our families of graphs to be as "rich" as possible, containing graphs with a wide variety of structure, but in order to reconstruct a graph using the limited information that is (legibly) available in its spectrum, we can only work with graphs with very special structure. Footnote 3: The _line graph_ line\((G)\) of a graph \(G\) has a vertex for each edge of \(G\), and two vertices in line\((G)\) are adjacent if the corresponding edges of \(G\) share a vertex. ### Further directions It seems that significant new ideas would be required to go beyond the exponential bound in Theorem 1.4. Indeed, if we consider all the known combinatorial parameters that can be extracted from the spectrum of an \(n\)-vertex graph, then we end up with a list of about \(2n\) integers (most notably, the first \(n\) spectral moments describe the number of closed walks of each length, and the \(n\) non-leading coefficients of the characteristic polynomial can be interpreted as certain weighted sums of subgraph counts). In order to use this combinatorial information to reconstruct say \(\exp(n^{1+\varepsilon})\) different graphs, we would need to use a huge amount of information from each of the integers in our list: roughly speaking, the variation in each integer must correspond to about \(\exp(n^{\varepsilon})\) different graphs. It is hard to imagine a natural combinatorial argument that could reconstruct so many different graphs from a single integer of information. Instead, it seems that _non-constructive_ methods may be necessary in order to prove Conjecture 1.2, or even to make much progress beyond Theorem 1.4.
Is there some algebraic criterion which describes whether a graph is DS, without necessarily providing a combinatorial procedure to reconstruct the graph4? Can one somehow show that the DS property is "generic" without describing _which_ graphs are DS? Footnote 4: Some progress in this direction was made by Wang [36], who found an arithmetic criterion for a graph to be determined by its so-called “generalised spectrum”. We would also like to propose a number of other questions related to Conjecture1.2. * Consider two different \(n\)-vertex graphs \(G,G^{\prime}\), chosen uniformly at random, and let \(Q_{n}\) be the probability that \(G\) and \(G^{\prime}\) have the same spectrum. How large is this probability? It seems one can obtain an exponential upper bound \[Q_{n}\leq\mathbb{P}[\det(G)=\det(G^{\prime})]\leq\sup_{d\in\mathbb{R}}\mathbb{ P}[\det(G)=d]\leq e^{-cn}\] for some \(c>0\), using powerful techniques in random matrix theory (see [7]). * Conjecture1.2 is equivalent to the statement that among all \(n\)-vertex graphs, there are \[(1-o(1))\frac{2^{\binom{n}{2}}}{n!}\] different spectra. What lower bounds can we prove on the number of different spectra realisable by \(n\)-vertex graphs? There are several different ways to prove an exponential lower bound: in particular, such a bound follows from Theorem1.4, from the above bound \(Q_{n}\leq e^{-cn}\), or from results on the range of possible determinants of \(n\times n\) binary matrices (see [31]). * Although it is known [30] that almost all trees are _not_ DS, it would still be interesting to prove lower bounds on the number of DS trees. Could it be that there are exponentially many? * In the continuous setting ("hearing the shape of a drum"), the _spectral rigidity_ conjecture of Sarnak (see [29]) suggests that despite the fact that there are drums with the same spectrum, such drums are always "isolated" from each other: for any drum, making a sufficiently small change to the shape of the drum always changes its spectrum. One can also ask similar questions for graphs. For example, as a weakening of Conjecture1.2, we conjecture that for a \((1-o(1))\)-fraction of labelled graphs on \(n\) vertices, any nontrivial addition/deletion of at most \((1/2-\varepsilon)n\) edges (for any constant \(\varepsilon>0\)) results in a graph with a different spectrum. If this were true it would be best-possible: for almost all \(n\)-vertex graphs \(G\), one can exchange the roles of two vertices by adding and removing about \(n/2\) edges (obtaining a graph which is isomorphic to \(G\) and therefore has the same spectrum). * Apart from the adjacency matrix, there are several other matrices which can be associated with a graph. Perhaps the best-known examples are the _Laplacian_ matrix and the _signless Laplacian_ matrix (which are both actually used in this paper; see Definition2.1). Such matrices give us different notions of graph spectra, with which we can ask variations on all the questions discussed so far. Actually, the Laplacian analogue of Theorem1.4 has already been proved, taking advantage of the fact that the Laplacian spectrum is much better-behaved with respect to _complements_: Hammer and Kelmans [21] showed that all \(2^{n}\) of the _threshold graphs_ on \(n\) vertices (i.e., all \(n\)-vertex graphs which can be constructed from the empty graph by iteratively adding isolated vertices and taking complements) are determined by their Laplacian spectrum. 
In the course of proving Theorem1.4, we actually end up giving new proofs of the analogous result for Laplacian and signless Laplacian spectra. It is still open (and not obviously easier or harder than for the adjacency spectrum) to prove better-than-exponential lower bounds on the number of \(n\)-vertex graphs determined by their Laplacian or signless Laplacian spectrum. ## 2. Proof overview We start by defining the _Laplacian matrix_ and the _signless Laplacian matrix_, two variations on the adjacency matrix. **Definition 2.1**.: Consider a (simple) graph \(G\) with vertices \(v_{1},\ldots,v_{n}\). Let \(\mathrm{D}(G)\) be the diagonal matrix whose \((i,i)\)-entry is the degree of \(v_{i}\), and recall the adjacency matrix \(\mathrm{A}(G)\) of \(G\). * The _Laplacian matrix_ is defined as \(\mathrm{L}(G)=\mathrm{D}(G)-\mathrm{A}(G)\). * The _signless Laplacian matrix_ is defined as \(|\mathrm{L}(G)|=\mathrm{D}(G)+\mathrm{A}(G)\). We sometimes refer to the spectra of \(\mathrm{A}(G)\), \(\mathrm{L}(G)\) and \(|\mathrm{L}(G)|\) as the _adjacency spectrum_, _Laplacian spectrum_ and _signless Laplacian spectrum_ of \(G\), respectively. We say that a graph \(G\) is _determined by its Laplacian spectrum_ (respectively, _determined by its signless Laplacian spectrum_) if there is no other graph (non-isomorphic to \(G\)) which has the same Laplacian spectrum (respectively, signless Laplacian spectrum) as \(G\). While the adjacency matrix is the simplest and most natural way to associate a matrix to a graph, all three of the above notions of spectrum contain slightly different information about \(G\), which can be useful for different purposes. For this paper, the crucial fact about the Laplacian spectrum is that it determines the number of _spanning trees_ of a graph, via Kirchhoff's celebrated _matrix-tree theorem_ (Theorem 3.13). In particular, the Laplacian spectrum tells us whether a graph is connected or not. Fortunately, there are some connections between the above three notions of spectrum, which we will heavily rely on in this paper. For example, two simple observations are that: * if a graph is bipartite, then its signless Laplacian spectrum is the same as its Laplacian spectrum (Fact 3.1); * if two graphs have the same signless Laplacian spectrum, then their _line graphs_ have the same adjacency spectrum (Proposition 3.15). Unfortunately, there are some limitations to these connections. In general, neither the Laplacian spectrum nor the signless Laplacian spectrum of a graph contain enough information to actually determine whether the graph is bipartite (and it is _not_ true that for a bipartite graph to be determined by its Laplacian spectrum is the same as for it to be determined by its signless Laplacian spectrum). Also, if a graph \(Q\) has the same adjacency spectrum as the line graph of some graph \(G\), it does not necessarily follow that \(Q\) is the line graph of some graph with the same signless Laplacian spectrum as \(G\) (it does not even follow that \(Q\) is a line graph at all, though a deep structure theorem of Cameron, Goethals, Seidel and Shult [6], building on a previous slightly weaker theorem of Hoffman [25], shows that every connected graph which has the same adjacency spectrum as a line graph must be a so-called _generalised line graph_, with finitely many exceptions). Despite these limitations, in our proof of Theorem 1.4 it is nonetheless extremely useful to move between the three different notions of graph spectra. 
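As a concrete illustration of Definition 2.1 and the first of the two observations above, here is a small Python/numpy sketch (ours, not from the paper) that computes the three spectra for a 4-cycle (bipartite, so the Laplacian and signless Laplacian spectra coincide) and for a triangle with a pendant vertex (non-bipartite, so they differ).

```python
import numpy as np

def spectra(edges, n):
    """Adjacency, Laplacian and signless Laplacian spectra of the graph on
    vertices 0..n-1 with the given edge list."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    D = np.diag(A.sum(axis=1))
    eig = lambda M: np.round(np.sort(np.linalg.eigvalsh(M)), 6)
    return eig(A), eig(D - A), eig(D + A)

# 4-cycle: bipartite, Laplacian and signless Laplacian spectra agree.
print(spectra([(0, 1), (1, 2), (2, 3), (3, 0)], 4))

# Triangle with a pendant vertex: non-bipartite, the two spectra differ.
print(spectra([(0, 1), (1, 2), (2, 0), (2, 3)], 4))
```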
Roughly speaking, our proof of Theorem 1.4 can be broken down into three parts. First, we describe an explicit family of graphs ("nice graphs"), and prove that they are determined by their Laplacian spectrum (making crucial use of the matrix-tree theorem). Second, we prove that any graph which has the same signless Laplacian spectrum as a bipartite nice graph must be bipartite (from which we can deduce that in fact every bipartite nice graph is determined by its signless Laplacian spectrum). Finally, we define a family \(\mathcal{Q}_{n}\) of exponentially many \(n\)-vertex graphs (which are essentially line graphs of bipartite nice graphs, with some small adjustments for number-theoretic reasons), and use the Cameron-Goethals-Seidel-Shult theorem to show that if a graph has the same adjacency spectrum as a graph in \(\mathcal{Q}_{n}\), then both graphs must have been constructed from line graphs with the same signless Laplacian spectrum. Putting everything together, we see that all of the exponentially many graphs in \(\mathcal{Q}_{n}\) are determined by their adjacency spectrum. We next outline each of the above three parts of the proof of Theorem 1.4 in more detail. ### Nice graphs and the Laplacian spectrum First, we define nice graphs and outline how to prove that they are determined by their Laplacian spectrum. **Definition 2.2**.: Say that a graph is _sun-like5_ if it is connected, and deleting all degree-\(1\) vertices yields a cycle. Equivalently, a sun-like graph can be constructed by taking a cycle \(C\), and attaching some leaves to some vertices of \(C\). If a vertex of \(C\) has \(i\) leaves attached to it (equivalently, if the vertex has degree \(i+2\)), we call it an _\(i\)-hub_. We simply call a vertex a _hub_ if it is an \(i\)-hub for some \(i\geq 1\) (equivalently, if its degree is at least \(3\)). Footnote 5: The reason for this terminology is that the name “sun graph” is sometimes used in the literature to describe a graph obtained from a cycle by adding a leaf to each vertex. For (integer) parameters \(k\geq 1\) and \(\ell\geq\max(12k,15)\), say that a graph \(G\) is _\((\ell,k)\)-nice_ if: * \(G\) is a sun-like graph; * the unique cycle \(C\) in \(G\) has length \(\ell\); * there are exactly \(k+1\) hubs, one of which is a \(1\)-hub and the others of which are \(2\)-hubs; * we can fix an orientation of \(C\) such that the following holds. Imagine starting at the \(1\)-hub and walking clockwise around \(C\). We should meet our first \(2\)-hub after walking a distance of \(4\). Then, the second \(2\)-hub should appear at distance \(4\) or \(6\) after the first. The third \(2\)-hub should appear at distance \(4\) or \(6\) after the second, the fourth should appear at distance \(4\) or \(6\) after the third, and so on. (This freedom between \(4\) and \(6\) at each step is crucial; it ensures that there are many different nice graphs). See Figure 1 for an illustration of a \((46,3)\)-nice graph. We simply say that a graph is _nice_ if it is \((\ell,k)\)-nice for some \(k,\ell\) (satisfying \(k\geq 1\) and \(\ell\geq\max(12k,15)\)). We remark that the restriction \(\ell\geq 12k\) is to ensure that all \(2\)-hubs are closer to the \(1\)-hub in the clockwise direction than the counterclockwise direction. **Lemma 2.3**.: _Every nice graph is determined by its Laplacian spectrum._ We will prove Lemma 2.3 in full detail in Section 4. 
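To make this definition concrete, the following Python sketch (ours, for illustration only) builds the edge list of an \((\ell,k)\)-nice graph from a chosen sequence of \(k-1\) gaps, each equal to \(4\) or \(6\); looping over the \(2^{k-1}\) possible gap sequences enumerates the whole family for a given admissible pair \((\ell,k)\).

```python
from itertools import product

def nice_graph(ell, gaps):
    """Edge list of an (ell, k)-nice graph with k = len(gaps) + 1.
    Cycle vertices are 0..ell-1; vertex 0 is the 1-hub (one leaf); the 2-hubs
    (two leaves each) sit at clockwise distances 4, 4+gaps[0], 4+gaps[0]+gaps[1], ...
    from vertex 0.  Leaf vertices get fresh labels ell, ell+1, ..."""
    k = len(gaps) + 1
    assert all(g in (4, 6) for g in gaps) and ell >= max(12 * k, 15)
    edges = [(i, (i + 1) % ell) for i in range(ell)]   # the cycle C
    nxt = ell
    edges.append((0, nxt)); nxt += 1                   # the single leaf at the 1-hub
    pos = 4
    for g in (0,) + tuple(gaps):
        pos += g                                       # position of the next 2-hub
        edges.append((pos, nxt)); nxt += 1             # its two leaves
        edges.append((pos, nxt)); nxt += 1
    return edges, nxt                                  # nxt = ell + 2k + 1 vertices

# The four (46, 3)-nice graphs (cf. Figure 1, which uses the gap sequence (6, 4)).
for gaps in product((4, 6), repeat=2):
    edges, n = nice_graph(46, gaps)
    print(gaps, "->", n, "vertices,", len(edges), "edges")
```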
As a brief outline: the first step in the proof of Lemma 2.3 is to prove that any graph \(G^{\prime}\) with the same Laplacian spectrum as a nice graph \(G\) is itself nice (with the same parameters \(\ell,k\)). This "localises" the problem: if we only have to consider nice graphs, we can give a much more explicit combinatorial meaning to certain spectral statistics (most crucially, we can give a combinatorial interpretation of the Laplacian spectral moments6 in terms of closed walks around the unique cycle \(C\)). This localisation step crucially uses the matrix-tree theorem to show that \(G^{\prime}\) is connected (once we know that \(G^{\prime}\) is connected, certain spectral inequalities on various degree statistics allow us to deduce that \(G^{\prime}\) has a single cycle, then that it is sun-like and then that it is nice). We remark that similar ideas were previously used by Boulet [4] to prove that so-called "sun graphs" are determined by their Laplacian spectrum. Footnote 6: For the purposes of this paper, the \(k\)-th spectral moment of a matrix \(M\) is its sum of \(k\)-th powers of eigenvalues (this can also be expressed as the trace of the matrix power \(M^{k}\)). After localising the problem, the second step is to show how to "decode" a specific nice graph using spectral information: i.e., assuming that \(G^{\prime}\) is nice, we use spectral information to discover which nice graph it is. The idea for this step is to "inductively explore the graph around its \(1\)-hub" using spectral moments: assuming we know the positions of all the \(2\)-hubs up to distance \(d\) of the \(1\)-hub, we can use the \((2d+2)\)-th spectral moment to see whether there is a \(2\)-hub at distance \(d+1\) from the \(1\)-hub. Very roughly speaking, the reason this is possible is that the spectral moments can be interpreted as certain weighted sums over closed walks on \(C\). If a closed walk "interacts with \(2\)-hubs" \(i\) times, then the weight of the walk is divisible by \(2\), so parity considerations allow us to distinguish closed walks involving the \(1\)-hub from closed walks which only involve \(2\)-hubs. _Remark 2.4_.: For this "decoding" step, there is no advantage of the Laplacian spectrum over the adjacency spectrum. In fact, it would have been much more convenient to work with the adjacency spectrum, as the spectral moments of the adjacency matrix have a much more direct combinatorial interpretation than the spectral moments of the Laplacian matrix. Indeed, the \(i\)-th spectral moment of the adjacency matrix simply counts the number of closed walks of length \(i\). For a nice graph, every nontrivial closed walk can be obtained by starting with a closed walk in the unique cycle \(C\), and then choosing some hubs in the walk at which we go in and out of a leaf. Every time we go in and out of a leaf at a \(2\)-hub, we have an even number of choices, whereas every time we go in and out of a leaf at a \(1\)-hub, we have an odd number of choices. _Remark 2.5_.: There are some parallels between our \(2\)-step strategy to prove Lemma 2.3 and a similar \(2\)-step strategy that was recently applied with great success in the _continuous_ case (i.e., in the "hearing the shape of a drum" setting). Indeed, a recent breakthrough result of Hezari and Zelditch [24] is that ellipses with low eccentricity are determined by their spectrum. 
In their proof, the first step is to use certain spectral inequalities to "localise" the problem, showing that any domain whose spectrum matches a low-eccentricity ellipse must be "almost circular". Then, the second step is to pin down the precise shape of the domain, taking advantage of the fact that the spectrum determines certain information about _closed billiard trajectories_ inside the domain, and applying powerful results due to Avila, De Simoi and Kaloshin [2] to study such trajectories. There is a superficial similarity between closed walks in graphs and closed billiard trajectories in a domain; it is not clear to us whether this connection runs deeper. ### The signless Laplacian spectrum As outlined, the next step is to prove an analogue of Lemma 2.3 for the signless Laplacian spectrum: we are able to do this with a mild condition on the length of the cycle \(\ell\), as follows. **Lemma 2.6**.: _Let \(G\) be an \((\ell,k)\)-nice graph with \(\ell\equiv 2\pmod{4}\). Then, \(G\) is determined by its signless Laplacian spectrum._ Note that an \((\ell,k)\)-nice graph is bipartite if and only if \(\ell\) is even, and as we have discussed, for bipartite graphs, the signless Laplacian spectrum is the same as the Laplacian spectrum. So, given Lemma 2.3, in order to prove Lemma 2.6 we just need to show that if \(\ell\equiv 2\pmod{4}\) then every graph with the same signless Laplacian spectrum as an \((\ell,k)\)-nice graph must be bipartite. The full details of the proof of Lemma 2.6 appear in Section 5, but to give a brief idea: the only spectral information we need is the product of nonzero eigenvalues. We observe that for every non-bipartite graph the product of nonzero eigenvalues is divisible by \(4\), and that the assumption \(\ell\equiv 2\pmod{4}\) guarantees that the product of nonzero eigenvalues of \(G\) is _not_ divisible by \(4\). For both of these facts, we use an explicit combinatorial description of the coefficients of the characteristic polynomial of the signless Laplacian matrix, due to Cvetkovic, Rowlinson and Simic [10]. (These coefficients can be expressed as sums of products of eigenvalues via Vieta's formulas; in particular the nonzero coefficient with lowest degree tells us the product of nonzero eigenvalues). _Remark 2.7_.: Lemma 2.6 implies that if \(n\) is odd, then there are exponentially many \(n\)-vertex graphs which are determined by their signless Laplacian spectrum. However, there is no bipartite nice graph on an even number of vertices, so the analogous result for even \(n\) is not completely obvious. With a bit more work we were nonetheless able to prove such a result, yielding a version of Theorem 1.4 for the signless Laplacian, as follows. **Theorem 2.8**.: _The number of (unlabelled) \(n\)-vertex graphs determined by their signless Laplacian spectrum is at least \(e^{cn}\) for some constant \(c>0\)._ To prove Theorem 2.8, we combine Lemma 2.6 with some of the ideas described in the next subsection; the details appear in Appendix A. ### Exponentially many graphs determined by their adjacency spectrum As briefly mentioned earlier in this outline, there is a close connection between the signless Laplacian spectrum of a graph and the adjacency matrix of its line graph. To be a bit more specific, the nonzero eigenvalues of \(|\mathrm{L}(G)|\) are in correspondence with the eigenvalues of \(\mathrm{A}(\mathrm{line}(G))\) different from \(-2\).
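As a quick numerical sanity check of this correspondence (our own numpy sketch, not from the paper), one can compare the nonzero eigenvalues of \(|\mathrm{L}(G)|\) with the eigenvalues of \(\mathrm{A}(\mathrm{line}(G))\) shifted by \(2\); we use a 5-cycle with one pendant leaf, for which the two lists agree exactly.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 5)]  # 5-cycle plus a leaf
n = 6

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
Q = np.diag(A.sum(axis=1)) + A          # signless Laplacian |L(G)|

m = len(edges)                          # line graph: one vertex per edge of G
LA = np.zeros((m, m))
for i in range(m):
    for j in range(i + 1, m):
        if set(edges[i]) & set(edges[j]):   # edges of G sharing a vertex
            LA[i, j] = LA[j, i] = 1

q = np.sort(np.linalg.eigvalsh(Q))
a = np.sort(np.linalg.eigvalsh(LA))
print("nonzero eigenvalues of |L(G)|:", np.round(q[np.abs(q) > 1e-9], 3))
print("eigenvalues of A(line(G)) + 2:", np.round(a + 2, 3))
```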
One might (naively) hope that \(\mathrm{line}(G)\) being determined by its adjacency spectrum is equivalent to \(G\) being determined by its signless Laplacian spectrum. If this were true, it would be easy to complete the proof of Theorem 1.4, by considering the family of all \(n\)-vertex graphs which are the line graph of some nice graph as in Lemma 2.6. Unfortunately, this is too much to hope for in general, but quite some theory has been developed in this direction, and we are able to leverage this theory in the special case where \(G\) has a large prime number of vertices. **Lemma 2.9**.: _There is a constant \(n_{0}\) such that the following holds. Let \(G\) be an \((\ell,k)\)-nice graph with \(\ell\equiv 2\pmod{4}\), let \(n=\ell+2k+1\) be its number of vertices, and suppose that \(n\) is a prime number larger than \(n_{0}\). Then \(\mathrm{line}(G)\) is determined by its adjacency spectrum._

Figure 1. An example of a \((46,3)\)-nice graph. There is one \(1\)-hub \(v_{0}\), and three \(2\)-hubs \(v_{1},v_{2},v_{3}\). The distances between \(v_{0}\) and \(v_{1}\), between \(v_{1}\) and \(v_{2}\) and between \(v_{2}\) and \(v_{3}\) are \(4\), \(6\) and \(4\), respectively.

The proof of Lemma 2.9 appears in Section 6. To give a rough idea of the strategy of the proof: recalling Lemma 2.6, in order to prove Lemma 2.9 it suffices to show that if a graph \(Q\) has the same (adjacency) spectrum as \(\mathrm{line}(G)\), then 1. \(Q=\mathrm{line}(H)\) for some \(H\), and 2. \(H\) has the same signless Laplacian spectrum as \(G\). For (1), we have the Cameron-Goethals-Seidel-Shult theorem at our disposal, which we can use to show that \(Q\) is a so-called _generalised line graph_ (except possibly for some "exceptional" connected components with at most \(36\) vertices). Our main task is to rule out generalised line graphs which are not line graphs. For (2), our task is to show that in the signless Laplacian spectra of \(G\) and \(H\), the multiplicities of the zero eigenvalue are the same (all nonzero eigenvalues are guaranteed to be the same). This amounts to showing that \(G\) and \(H\) have the same number of vertices. For the first of these two tasks, we observe that if a generalised line graph is not a true line graph, then its adjacency matrix has a zero eigenvalue. So, it suffices to prove that \(\operatorname{line}(G)\) does not have a zero eigenvalue, i.e., its adjacency matrix has nonzero determinant. We accomplish this by directly computing the determinant of \(\operatorname{line}(G)\) (this is a little involved, but comes down to a certain recurrence). For the second of these two tasks, we recall that the adjacency spectrum of a line graph tells us the nonzero eigenvalues of the signless Laplacian spectrum, and in particular tells us the product of these nonzero eigenvalues (this product was already discussed in Section 2.2). Via a direct computation on \(G\), we observe that this product is divisible by \(n\). For each connected component of \(H\), the contribution to this product is always an integer, so if \(n\) is a prime number then there must be a single connected component which is "responsible for the factor of \(n\)". We are then able to deduce that this component has exactly \(n\) vertices and \(n\) edges, via a careful case analysis involving a combinatorial interpretation of the multiplicity of the eigenvalue \(-2\).
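Both quantities in this outline are easy to inspect numerically on a small example. The sketch below (ours; it hard-codes a \((26,1)\)-nice graph, the smallest \(k=1\) case satisfying the hypotheses of Lemma 2.9, with \(n=29\) prime) prints the determinant of \(\mathrm{A}(\mathrm{line}(G))\), which the argument above needs to be nonzero, and the product of the nonzero eigenvalues of \(|\mathrm{L}(G)|\), which should be divisible by \(n\).

```python
import numpy as np

# (26,1)-nice graph G: cycle 0..25, 1-hub at vertex 0 (leaf 26),
# one 2-hub at vertex 4 (leaves 27 and 28).  It has n = 26 + 2 + 1 = 29
# vertices and 29 edges; 26 is congruent to 2 mod 4, and 29 is prime.
edges = [(i, (i + 1) % 26) for i in range(26)] + [(0, 26), (4, 27), (4, 28)]
n = 29

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
Q = np.diag(A.sum(axis=1)) + A                 # signless Laplacian |L(G)|

m = len(edges)                                 # adjacency matrix of line(G)
LA = np.zeros((m, m))
for i in range(m):
    for j in range(i + 1, m):
        if set(edges[i]) & set(edges[j]):
            LA[i, j] = LA[j, i] = 1

print("det A(line(G)) =", round(np.linalg.det(LA)))
q = np.linalg.eigvalsh(Q)
prod = round(float(np.prod(q[np.abs(q) > 1e-8])))
print("product of nonzero eigenvalues of |L(G)| =", prod, "=", prod // n, "* n")
```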
Of course, even after proving Lemma 2.9 we are not yet done: every nice graph has the same number of edges as vertices, so Lemma 2.9 can only be directly used to prove Theorem 1.4 when \(n\) is prime. For general \(n\) we consider graphs with two connected components, one of which is the line graph of a nice graph on a prime number of vertices and the other of which is a complete graph. The parameters of the nice graph and the size of the complete graph need to satisfy certain inequalities and number-theoretic properties; the details are a bit complicated and we defer the precise specification to Section 7. In order to show that all relevant inequalities and number-theoretic properties can be simultaneously satisfied (by exponentially many graphs), we use a quantitative strengthening of Dirichlet's theorem on primes in arithmetic progressions. To actually show that all these graphs are determined by their adjacency spectrum, we proceed similarly to Lemma 2.9, but the details are more complicated. Roughly speaking, we identify the complete graph component using its single large eigenvalue and some number-theoretic considerations, and then we apply Lemma 2.9. ## 3. Preliminaries In this section we collect a number of general tools and results that will be used throughout the paper. Where possible, we cite the original sources of each of these results, but we remark that many of these results can be found together in certain monographs on algebraic graph theory or graph spectra (see for example [11, 16, 3, 5, 9]). ### Basic observations First, in Section 2 we have already mentioned that the signless Laplacian and the Laplacian spectra coincide for bipartite graphs. **Fact 3.1** ([33, Section 2.3]).: _If a graph is bipartite, then its signless Laplacian spectrum is the same as its Laplacian spectrum._ Also, we record the near-trivial fact that for all notions of spectrum discussed so far, the spectrum of a graph can be broken down into the spectra of its connected components. **Fact 3.2**.: _For any graph \(G\), the spectrum of \(G\) (with respect to the adjacency, Laplacian or signless Laplacian matrix) is the multiset union of the spectra of the connected components of \(G\)._ Figure 2. The line graph of the \((46,3)\)-nice graph in Figure 1. ### Spectral inequalities Spectral graph theory provides a range of powerful inequalities on various combinatorial parameters, usually in terms of the largest, second-largest or smallest eigenvalue of the adjacency or Laplacian matrix. In this paper we will only need some simple inequalities concerning the numbers of vertices and edges, and the degrees. **Lemma 3.3** ([11, Section 3.2] and [37]).: _Consider a graph \(G\) with \(n\) vertices, \(m\) edges, and maximum degree \(\Delta\). Let \(\lambda_{\max}\) be the largest eigenvalue of the adjacency matrix \(\operatorname{A}(G)\). Then_ 1. \(\lambda_{\max}\leq\Delta\)_,_ 2. \(\lambda_{\max}\leq\sqrt{2m-n+1}\)_._ **Lemma 3.4** ([13, Theorem 3.7] and [1, Theorem 2]).: _Let \(G\) be a graph, write \(V\) and \(E\) for its sets of vertices and edges, and \(\Delta\) for its maximum degree. Let \(\rho_{\max}\) be the largest eigenvalue of the Laplacian matrix \(\operatorname{L}(G)\). Then_ 1. \(\rho_{\max}>\max\{\deg(v):v\in V\}\)_,_ 2. \(\rho_{\max}\leq\max\{\deg(u)+\deg(v):uv\in E\}\)_._ ### Combinatorial interpretation of the spectral moments As briefly mentioned in Section 2, in this paper we use the term _spectral moments_ to refer to sums of powers of eigenvalues. 
**Definition 3.5**.: For a matrix \(M\in\mathbb{R}^{n\times n}\) with spectrum \(\sigma\), the _\(s\)-th spectral moment_ of \(M\) is \[\sum_{\lambda\in\sigma}\lambda^{s}=\operatorname{trace}(M^{s})=\sum_{i_{1}=1} ^{n}\cdots\sum_{i_{s}=1}^{n}M_{i_{1},i_{2}}M_{i_{2},i_{3}}M_{i_{3},i_{4}} \ldots M_{i_{s-1},i_{s}}M_{i_{s},i_{1}}.\] If \(M\) is the adjacency matrix of a graph \(G\), then the product \(M_{i_{1},i_{2}}M_{i_{2},i_{3}}M_{i_{3},i_{4}}\ldots M_{i_{s-1},i_{s}}M_{i_{s}, i_{1}}\) is nonzero if and only if there is a _closed walk_ in \(G\) running through the vertices indexed by \(i_{1},\ldots,i_{s}\) (in which case this product is exactly \(1\)). So, spectral moments simply count closed walks of various lengths. For example, the second spectral moment is the number of closed walks of length \(2\), which is precisely twice the number of edges in \(G\) (a closed walk of length \(2\) simply runs back and forth along an edge, starting at one of its two endpoints). In our proof of Lemma 2.3 we will need to carefully study _Laplacian_ spectral moments, which can also be interpreted in combinatorial terms (albeit in a more complicated way): **Definition 3.6**.: An _\(s\)-route_ in a graph is a sequence of vertices \(\vec{v}=(v_{1},\ldots,v_{s})\), such that for each index \(j\), either \(v_{j}v_{j+1}\) is an edge or \(v_{j}=v_{j+1}\) (where the subscripts should be interpreted modulo \(s\)). That is to say, a route consists of a sequence of \(s\) steps: at each step we may either walk along an edge or wait at the current vertex. Letting \(t\) be the number of "waiting steps" in the \(s\)-route \(\vec{v}\), we also define \(w(\vec{v})\) to be the product of \(\deg(v_{j})\) over all waiting steps \(j\), times \((-1)^{s-t}\). **Fact 3.7**.: _For any graph \(G\), let \(\mathcal{R}_{s}\) be the set of all \(s\)-routes in \(G\). Then, the \(s\)-th spectral moments of \(\operatorname{L}(G)\) and \(|\operatorname{L}(G)|\) are_ \[\sum_{\vec{v}\in\mathcal{R}_{s}}w(\vec{v})\quad\text{and}\quad\sum_{\vec{v} \in\mathcal{R}_{s}}|w(\vec{v})|,\] _respectively._ We will repeatedly use Fact 3.7 (for many different \(s\)) in our proof of Lemma 2.3. For now, we just record some simple observations for \(s\leq 3\), which can be straightforwardly proved by considering all possible cases for a route of length \(s\). **Proposition 3.8**.: _Consider any graph \(G\) with \(n\) vertices and \(m\) edges, and write \(V\) for its set of vertices. Let \(M=\operatorname{L}(G)\) or \(M=|\operatorname{L}(G)|\), and let \(\mu_{s}\) be the \(s\)-th spectral moment of \(M\). Then_ 1. \(\mu_{0}=n\)_;_ 2. \(\mu_{1}=\sum_{v\in V}\deg(v)=2m\)_;_ 3. \(\mu_{2}=\sum_{v\in V}\deg(v)^{2}+2m\)_;_ 4. _If_ \(G\) _has no triangles, then_ \(\mu_{3}=\sum_{v\in V}\deg(v)^{3}+3\sum_{v\in V}\deg(v)^{2}\)_._ _In particular, if we know that \(G\) has no triangles, then the spectrum of \(M\) is enough information to determine \(\sum_{v\in V}\deg(v)^{s}\) for \(s\in\{0,1,2,3\}\)._ ### Combinatorial interpretation of the characteristic coefficients In addition to spectral moments, another very rich way to extract combinatorial structure from the spectrum is to consider the _coefficients of the characteristic polynomial_ of our matrix of interest. **Definition 3.9**.: Consider a matrix \(M\in\mathbb{R}^{n\times n}\) with spectrum \(\sigma\), and write its characteristic polynomial \(\det(xI-M)=\prod_{\lambda\in\sigma}(x-\lambda)\in\mathbb{R}[x]\) in the form \(\sum_{i=0}^{n}(-1)^{i}\zeta_{i}x^{n-i}\). 
Then, we define the _\(i\)-th characteristic coefficient_ to be \[\zeta_{i}=\sum_{\Lambda\subseteq\sigma:|\Lambda|=i}\ \prod_{\lambda\in\Lambda}\lambda.\] (here we have used Vieta's formulas for the coefficients of a polynomial in terms of its roots). Note that the \(n\)-th characteristic coefficient \(\zeta_{n}\) is the determinant of \(M\). More generally, if we consider the largest \(s\) for which \(\zeta_{s}\) is nonzero, then \(\zeta_{s}\) is the product of nonzero eigenvalues of \(M\). Recalling the definition \(\det(xI-M)\) of the characteristic polynomial, we also have the following observation. **Fact 3.10**.: _If \(M\) is an integer matrix, then its characteristic coefficients are all integers._ Now, the characteristic coefficients of the Laplacian, signless Laplacian and adjacency matrices all have different combinatorial interpretations, as follows. **Definition 3.11**.: A connected graph is _unicyclic_ if it has exactly one cycle (equivalently, if it has the same number of edges as vertices). If the length of this cycle is even it is _even-unicyclic_; otherwise it is _odd-unicyclic_. Now, consider any graph \(G\). 1. A _spanning forest_\(F\) in \(G\) is a subgraph of \(G\) which is spanning (i.e., contains all the vertices of \(G\)) and whose connected components are trees. Let \(\alpha(F)\) be the product of the numbers of vertices in these trees. 2. A _TU-subgraph_\(H\) of \(G\) is a spanning subgraph whose connected components are trees or odd-unicyclic. Generalising the definition of \(\alpha\) above, let \(\alpha(H)=4^{c}\prod_{i=1}^{s}n_{i}\), where \(c\) is the number of odd-unicyclic components in \(H\), and the numbers of vertices in the tree components are \(n_{1},\ldots,n_{s}\). 3. An _elementary subgraph_\(X\) of \(G\) is a (not necessarily spanning) subgraph whose connected components are cycles and individual edges. Let \(\beta(X)=(-1)^{c}(-2)^{d}\), where \(c\) and \(d\) are the number of edge-components and cycle-components in \(X\), respectively. Let \(\Phi_{i}(G)\), \(\Psi_{i}(G)\) and \(\Xi_{i}(G)\) be the sets of spanning forests with \(i\) edges, TU-subgraphs with \(i\) edges, and elementary subgraphs with \(i\) vertices, respectively, in \(G\). **Theorem 3.12** ([10, Theorem 4.4], [22, Theorem 3] and [3, Theorem 7.5]).: _For any graph \(G\), the \(i\)-th characteristic coefficients of \(\mathrm{L}(G)\), \(|\mathrm{L}(G)|\) and \(\mathrm{A}(G)\) are_ \[\sum_{F\in\Phi_{i}(G)}\alpha(F),\quad\sum_{H\in\Psi_{i}(G)}\alpha(H)\quad\text{and}\quad\sum_{X\in\Xi_{i}(G)}(-1)^{i}\beta(X),\] _respectively._ An immediate corollary (in the Laplacian case, considering the \(n\)-th and \((n-1)\)-th characteristic coefficients) is Kirchhoff's celebrated _matrix-tree theorem_, as follows. **Theorem 3.13** ([27]).: _For any \(n\)-vertex graph \(G\), the Laplacian \(\mathrm{L}(G)\) has a zero eigenvalue with multiplicity at least 1. \(G\) is connected if and only if the multiplicity of the zero eigenvalue is exactly 1, in which case the number of spanning trees in \(G\) is precisely the product of the nonzero eigenvalues divided by \(n\)._ Another corollary is as follows. (A very similar observation appears as [10, Proposition 2.1]). **Proposition 3.14**.: _For any connected graph \(G\):_ 1. _If_ \(G\) _is bipartite, then_ \(|\mathrm{L}(G)|\) _has a zero eigenvalue with multiplicity 1._ 2.
_If_ \(G\) _is not bipartite, then the determinant of_ \(|\mathrm{L}(G)|\) _is a positive integer divisible by 4._ Proof.: Let \(n\) be the number of vertices of \(G\), so the determinant of \(|\mathrm{L}(G)|\) (i.e., its product of eigenvalues) is its \(n\)-th characteristic coefficient. Note that a tree on at most \(n\) vertices has at most \(n-1\) edges, so in the description in Theorem 3.12, the only possible contributions to the \(n\)-th characteristic coefficient of \(|\mathrm{L}(G)|\) come from spanning odd-unicyclic subgraphs. If \(G\) is bipartite, then clearly there is no such subgraph. On the other hand, if \(G\) is not bipartite then it has an odd cycle, and a suitable spanning odd-unicyclic subgraph can be found by iteratively removing edges outside this cycle. Each spanning odd-unicyclic subgraph \(H\) has \(\alpha(H)=4\). Since every connected graph has a spanning tree, the \((n-1)\)-th characteristic coefficient of \(G\) is always nonzero (so zero can never be an eigenvalue with multiplicity more than 1). ### Line graphs In Section2 we mentioned a correspondence between the Laplacian spectrum of a graph \(G\) and the adjacency spectrum of its line graph \(\operatorname{line}(G)\). To elaborate on this: for a graph with vertices \(v_{1},\ldots,v_{n}\) and edges \(e_{1},\ldots,e_{m}\), consider the _incidence matrix_\(N(G)\in\{0,1\}^{n\times m}\), where the \((i,j)\)-entry is 1 if and only if \(v_{i}\in e_{j}\). Then, it is not hard to see that \(|\mathrm{L}(G)|=N(G)N(G)^{T}\) and \(\mathrm{A}(\operatorname{line}(G))=N(G)^{T}N(G)-2I\). Since the nonzero eigenvalues of \(NN^{T}\) are the same as the nonzero eigenvalues of \(N^{T}N\) (including multiplicities), we have the following. **Proposition 3.15**.: _Consider any graph \(G\) and any \(\lambda\neq 0\). Then, \(\lambda\) is an eigenvalue of \(|\mathrm{L}(G)|\) with multiplicity \(m\) if and only if \(\lambda-2\) is an eigenvalue of \(\mathrm{A}(\operatorname{line}(G))\) with multiplicity \(m\)._ If we know the signless Laplacian spectrum of a graph \(G\), then Proposition3.15 tells us the spectrum of \(\mathrm{A}(\operatorname{line}(G))\), except the multiplicity of the eigenvalue \(-2\). In order to determine this multiplicity we just need to know the total multiplicity of all eigenvalues of \(\operatorname{line}(G)\), i.e., the number of vertices of \(\operatorname{line}(G)\), i.e., the number of edges of \(G\). We have already seen that this information can be recovered from the signless Laplacian spectrum (Proposition3.8(2)). So, the signless Laplacian spectrum of \(G\) fully determines the adjacency spectrum of \(\operatorname{line}(G)\). Unfortunately, as discussed in Section2 it is not quite so easy to go in the other direction: there are examples of line graphs which share their adjacency spectrum with non-line-graphs, and there are examples of graphs \(G,G^{\prime}\) which have different numbers of vertices (therefore different signless Laplacian spectra) but for which \(\operatorname{line}(G)\) and \(\operatorname{line}(G^{\prime})\) have the same adjacency spectrum. In this subsection we collect a few results related to Proposition3.15. First, \(|\mathrm{L}(G)|=N(G)N(G)^{T}\) is a positive semidefinite matrix, so we have the following corollary of Proposition3.15. **Fact 3.16**.: _For any graph \(G\), the eigenvalues of \(\mathrm{A}(\operatorname{line}(G))\) are all at least \(-2\)._ Also, Proposition3.14 gives us a combinatorial description of the multiplicity of the zero eigenvalue of \(G\). 
Together with Proposition3.15, this can be used to give a combinatorial description of the multiplicity of \(-2\) as an eigenvalue of \(\mathrm{A}(\operatorname{line}(G))\). **Lemma 3.17** ([9, Theorem 2.2.4]).: _Let \(H\) be a connected graph with \(v\) vertices and \(e\) edges, and let \(\mu_{-2}\) be the multiplicity of the eigenvalue \(-2\) in \(\mathrm{A}(\operatorname{line}(H))\). Then_ \[\mu_{-2}=\begin{cases}e-v+1&\text{ if $H$ is bipartite,}\\ e-v&\text{ if $H$ is not bipartite.}\end{cases}\] Finally, we state the Cameron-Goethals-Seidel-Shult theorem mentioned in Section2: all but finitely many connected graphs which share their adjacency spectrum with a line graph are so-called _generalised line graphs_. **Definition 3.18**.: Let \(K_{n}\) be the complete graph on \(n\) vertices. A _perfect matching_ in \(K_{2m}\) is a collection of \(m\) disjoint edges (covering all the vertices of \(K_{2m}\)). The _cocktailparty graph_\(\operatorname{CP}(m)\) is the graph obtained from \(K_{2m}\) by removing a perfect matching. For a graph \(G\) with vertices \(v_{1},\ldots,v_{n}\), and nonnegative integers \(a_{1},\ldots,a_{n}\), the _generalised line graph_\(\operatorname{line}(G;a_{1},\ldots,a_{n})\) is defined as follows. First, consider the disjoint union of the graphs \[\operatorname{line}(G),\;\operatorname{CP}(a_{1}),\ldots,\;\operatorname{CP} (a_{n}).\] (i.e., we include each of the above graphs as a separate connected component). Then, for each \(i\), add all possible edges between the vertices of \(\operatorname{CP}(a_{i})\) and the vertices of \(\operatorname{line}(G)\) corresponding to edges of \(G\) incident to \(v_{i}\) (this means \(2a_{i}\deg(v_{i})\) added edges for each \(i\)). Note that for any graph \(G\) we have \(\operatorname{line}(G;0,\ldots,0)=\operatorname{line}(G)\). **Theorem 3.19** ([6, Theorem 4.3 and 4.10]).: _Suppose \(Q\) is a connected graph on more than 36 vertices, all of whose adjacency eigenvalues are at least \(-2\). Then \(Q\) is a generalised line graph._ ### Primes in arithmetic progressions As mentioned in Section2, we will need a quantitative version of Dirichlet's theorem, counting primes in a given arithmetic progression. **Theorem 3.20**.: _Fix coprime integers \(a,d\geq 1\), and let \(\varphi(d)>0\) be the number of integers up to \(d\) which are relatively prime to \(d\). Let \(\pi_{a,d}(n)\) be the number of primes up to \(n\) which are congruent to \(a\pmod{d}\). Then_ \[\lim_{n\to\infty}\biggl{(}\frac{\pi_{a,d}(n)}{n/\log n}\biggr{)}=\frac{1}{ \varphi(d)}.\] Theorem3.20 was first proved by de la Vallee Poussin [12]. All we will need from Theorem3.20 is the following (immediate) corollary. **Corollary 3.21**.: _Fix \(\varepsilon>0\) and coprime integers \(a,d\geq 1\). For any sufficiently large \(n\), there is a prime number between \((1-\varepsilon)n\) and \((1+\varepsilon)n\) which is congruent to \(a\pmod{d}\)._ ## 4. Distinguishing nice graphs by their Laplacian spectrum In this section we prove Lemma2.3: nice graphs are determined by their Laplacian spectrum. As discussed in Section2.1, the first step is to "localise" the problem, showing that any graph with the same Laplacian spectrum as a nice graph is itself nice. First, we adapt some ideas of Boulet [4, Theorem 9] to prove the following lemma, which provides some approximate structure (though does not yet completely determine niceness). Recall the definition of a sun-like graph from Definition2.2. 
**Lemma 4.1**.: _Let \(G\) be an \((\ell,k)\)-nice graph, and let \(H\) be a graph with the same Laplacian spectrum as \(G\). Then \(H\) is a sun-like graph whose cycle has length \(\ell\). Moreover, \(H\) has exactly one 1-hub, \(k\) different 2-hubs, and no \(i\)-hubs for any \(i>2\)._ Proof.: Let \(n=\ell+2k+1\) be the number of vertices and edges in \(G\). First of all, by Proposition3.8(1) and (2), \(H\) also has \(n\) vertices and \(n\) edges, and by Kirchhoff's matrix-tree theorem (Theorem3.13), \(H\) is connected. So, \(H\) is unicyclic. In a unicyclic graph, the number of spanning trees is equal to the length of the cycle, so by Kirchhoff's theorem again, the cycle in \(H\) has length \(\ell\). Next, we study the degrees of vertices of \(H\). Writing \(E\) for the set of edges of \(G\), recall from Lemma3.4(2) that the largest Laplacian eigenvalue \(\rho_{\max}\) is at most \(\max\{\deg(u)+\deg(v),uv\in E\}\leq 6\) (in a nice graph, every hub has degree at most 4, every non-hub has degree at most 2, and no two hubs are adjacent). By Lemma3.4(1), the maximum degree of \(H\) is strictly less than \(\rho_{\max}\), so \(H\) can only have vertices of degree 1,2,3,4 or 5. Let \(n_{i}\) be the number of vertices of degree \(i\) in \(H\). Since the definition of a nice graph includes the assumption that \(\ell>12k\geq 3\), there are no triangles in \(H\), so by Proposition3.8, the Laplacian spectrum determines the number of vertices, the sum of degrees, the sum of squares of degrees and the sum of cubes of degrees. In \(G\), the numbers of vertices with degree 1,2,3 and 4 are \(2k+1\), \(\ell-k-1\), 1 and \(k\), respectively, so we have \[n_{1}+n_{2}+n_{3}+n_{4}+n_{5} =n=\ell+2k+1, \tag{4.1}\] \[n_{1}+2n_{2}+3n_{3}+4n_{4}+5n_{5} =2n=2\ell+4k+2,\] \[n_{1}+4n_{2}+9n_{3}+16n_{4}+25n_{5} =(2k+1)+4(\ell-k-1)+9+16k=4\ell+14k+6,\] \[n_{1}+8n_{2}+27n_{3}+64n_{4}+125n_{5} =(2k+1)+8(\ell-k-1)+27+64k=8\ell+58k+20.\] This system of equations has a one-parameter family of solutions, given by \[n_{2} =-4n_{1}+\ell+7k+3\] \[n_{3} =6n_{1}-12k-5\] \[n_{4} =-4n_{1}+9k+4\] \[n_{5} =n_{1}-2k-1. \tag{4.2}\] (4.1) and (4.2) together imply that \(n_{2}+n_{3}+n_{4}+n_{5}=\ell-n_{5}\) (i.e., there are \(\ell-n_{5}\leq\ell\) vertices with degree at least 2). But \(H\) has a cycle of length \(\ell\), and all the vertices on that cycle have degree at least 2, so we must have \(n_{5}=0\) and all the vertices with degree at least 2 must lie on the cycle. This implies that \(H\) is sun-like. There was only one degree of freedom in our system of equations: knowing that \(n_{5}=0\) allows us to deduce the values of all \(n_{i}\), and in particular \(n_{3}=1\) and \(n_{4}=k\). That is to say, there is one 1-hub, \(k\) different 2-hubs and no \(i\)-hubs for \(i>2\), as desired. ### Decorated routes Recall the definition of a _route_ from Definition3.6. The remainder of the proof of Lemma2.3 proceeds by carefully studying routes in sun-like graphs. In this subsection we introduce a convenient framework for working with such routes. **Definition 4.2**.: Let \(G\) be a sun-like graph. A _decorated \(s\)-route_\(R\) consists of a route \(\vec{v}=(v_{1},\ldots,v_{s})\) together with a label "look" or "wait" assigned to each \(j\) for which \(v_{j}=v_{j+1}\) and \(v_{j}\) is a hub (here arithmetic is mod \(s\)). That is to say, recalling that we previously imagined a route \(\vec{v}\) as a closed walk with some "waiting steps", we are now reinterpreting some of the waiting steps as steps where we "look at a hub". 
For a hub \(v\), if \(v_{j}=v_{j+1}=v\) and \(j\) has the label "look", or if \(v_{j}=v\) and \(v_{j+1}\) is one of the leaves attached to \(v\), then we say that the decorated route _interacts with \(v\)_ at step \(j\) (i.e., interacting with a hub means looking at it or entering one of the leaves attached to it). For a decorated \(s\)-route \(R\), define its _multiplicity_\(\operatorname{mult}(R)\) to be the number of different decorated routes that can be obtained by cyclically shifting or reversing \(R\). For example, if \(R\) is a trivial route that repeatedly waits at a single vertex, then \(\operatorname{mult}(R)=1\), but in general \(\operatorname{mult}(R)\) can be as large as \(2s\). Consider a decorated \(s\)-route \(R\). Suppose that in this decorated route there are \(r_{1}\) steps where we wait at leaf vertices, and \(r_{2}\) steps where we wait at cycle vertices (not counting steps in which we look at a hub). Suppose that for each \(i\), there are \(t_{i}\) steps where we look at an \(i\)-hub. Then, we define the _weight_ of \(R\) as \[w(R)=2^{r_{2}}\prod_{i=1}^{\infty}i^{t_{i}}.\] That is to say, we accumulate a factor of \(2\) whenever we wait at some vertex on the cycle (not when we wait at a leaf vertex), and we accumulate a factor of \(i\) whenever we look at an \(i\)-hub. **Example 4.3**.: Recall the \((46,3)\)-nice graph in Figure 1. Write \(a,b,c\) for the three vertices between the \(1\)-hub \(v_{0}\) and the \(2\)-hub \(v_{1}\), and let \(x\) be one of the leaf vertices attached to \(v_{1}\). Then, an example of a route is \[\vec{v}=(v_{0},v_{0},a,b,c,v_{1},v_{1},x,x,v_{1},c,b,c,b,a,v_{0}).\] This route has four "waiting steps" (in the first and last steps we wait at \(v_{0}\), at the sixth step we wait at \(v_{1}\) and at the eighth step we wait at \(x\)). In order to make this route into a decorated route, for each of the steps where we wait at a hub (i.e., the first, sixth and last step) we need to decide whether to reinterpret this step as a step where we "look at the hub". For example, say we label the first step as "look" (and the sixth and last steps are labelled as "wait"). This route interacts with \(v_{0}\) and \(v_{1}\), once each (we look at \(v_{0}\), and enter a leaf attached to \(v_{1}\)). The weight of this decorated route is \(2^{2}\cdot 1=4\) (we wait twice at cycle vertices, and look at a \(1\)-hub once). We then have the following consequence of Fact 3.7. **Lemma 4.4**.: _Consider any sun-like graph \(G\) whose cycle has length \(\ell\). For \(s<\ell\), let \(\mathcal{D}_{s}\) be the set of all decorated \(s\)-routes in \(G\). Then, the \(s\)-th spectral moment of \(\operatorname{L}(G)\) is_ \[\sum_{R\in\mathcal{D}_{s}}w(R).\] Proof.: In undecorated routes (as in Definition 3.6) we accumulate a factor of \(\deg(v)=i+2\) each time we wait at an \(i\)-hub \(v\). For our decorated routes, we have simply broken this down into "waiting" and "looking"; waiting accumulates a factor of \(2\) (just as it does for a non-hub vertex on the cycle) and looking contributes a factor of \(i\). Also, recall that in an undecorated route we accumulate a factor of \(-1\) for each step we walk along an edge. We can ignore this factor if we only consider routes of length less than \(\ell\): such routes cannot make it all the way around the cycle, so must "retrace their steps" and therefore have an even number of "walking steps".
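Fact 3.7 (in its undecorated form) is easy to verify by brute force on a small example; the following Python sketch (ours, for illustration) enumerates all \(s\)-routes of a 5-cycle with one pendant leaf and checks that their total weight equals \(\operatorname{trace}(\mathrm{L}(G)^{s})\), which is what Lemma 4.4 repackages in terms of decorated routes.

```python
import numpy as np
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 5)]  # sun-like: C5 plus a leaf
n = 6
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1
deg = A.sum(axis=1)
L = np.diag(deg) - A

def route_weight_sum(s):
    """Sum of w(v) over all s-routes: consecutive vertices (cyclically) are either
    equal (a waiting step, factor deg) or adjacent (a walking step, factor -1)."""
    total = 0
    for seq in product(range(n), repeat=s):
        w = 1
        for j in range(s):
            a, b = seq[j], seq[(j + 1) % s]
            if a == b:
                w *= int(deg[a])
            elif A[a, b]:
                w *= -1
            else:
                w = 0
                break
        total += w
    return total

for s in range(1, 5):
    print(s, route_weight_sum(s), int(np.trace(np.linalg.matrix_power(L, s))))
```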
The reason we have introduced the notion of a decorated route is that if we know the hub distribution of a graph, this is enough information to determine the contribution to the \(s\)-th spectral moment from routes which interact with at most one hub. (So, we can focus on routes which interact with multiple hubs, which are key to understanding how the hubs are distributed around the cycle). **Lemma 4.5**.: _Let \(G\) be a sun-like graph whose cycle has length \(\ell\), and let \(k_{i}\) be the number of \(i\)-hubs in \(G\). Let \(\mathcal{D}_{s}^{*}\) be the set of decorated \(s\)-routes which interact with at most one hub (any number of times). Then \(\sum_{R\in\mathcal{D}_{s}^{*}}w(R)\) only depends on \(\ell\) and \((k_{i})_{i=1}^{\infty}\)._ Proof.: Consider two different graphs \(G,H\) with the same statistics \(\ell\) and \((k_{i})_{i=1}^{\infty}\). We will show that the sum of weights under consideration is the same with respect to \(G\) and \(H\). Roughly speaking, the key observation will be that for any route involving a single hub in \(G\), we can "rotate the route around the cycle" to find a corresponding route in \(H\). The cycles \(C_{G}\) and \(C_{H}\) in \(G\) both have the same length \(\ell\), so we can fix an isomorphism \(\phi:C_{G}\to C_{H}\). Fixing an orientation of \(C_{H}\), let \(\chi:C_{H}\to C_{H}\) be the automorphism that "rotates one step clockwise around \(C_{H}\)". Since \(G,H\) have the same hub distribution, we can also fix an bijection \(\psi:C_{G}\to C_{H}\) such that \(v\) is an \(i\)-hub if and only if \(\psi(v)\) is an \(i\)-hub. For each \(v\) in \(C_{G}\), there is a unique \(j\in\mathbb{Z}/\ell\mathbb{Z}\) such that \(\psi(v)=\chi^{(j)}(\phi(v))\) (i.e., we "make \(\psi(v)\) line up with \(\phi(v)\)" by rotating it \(j\) steps around the cycle). Let \(\phi_{v}=\chi^{(j)}\circ\phi\) for this \(j\), so \(\phi_{v}\) is an isomorphism \(C_{G}\to C_{H}\) with \(\phi_{v}(v)=\psi(v)\). * Clearly, \(\phi\) gives us a correspondence between decorated \(s\)-routes that don't interact with any hubs in \(G\), and decorated \(s\)-routes that don't interact with any hub in \(H\). * For any \(i\)-hub \(v\) in \(C_{G}\), the isomorphism \(\phi_{v}\) (together with a bijection between the \(i\) leaves attached to \(v\) in \(C_{G}\) and the \(i\) leaves attached to \(\psi(v)\) in \(C_{H}\)) gives us a correspondence between decorated \(s\)-routes which interact with the single hub \(v\) in \(G\), and \(s\)-routes which interact with the single hub \(\psi(v)\) in \(H\). The above correspondences are weight-preserving, so the desired result follows. ### Localising to nice graphs Our first application of the framework in Section 4.1 is to finish the "localisation step" in the proof of Lemma 2.3: every graph with the same spectrum as a nice graph is itself nice. Given Lemma 4.1, this basically comes down to studying distances between hubs. **Lemma 4.6**.: _Let \(H\) be a graph with the same spectrum as an \((\ell,k)\)-nice graph \(G\). Then \(H\) is an \((\ell,k)\)-nice graph._ Proof.: In this proof, we will omit the word "decorated" (we will have no reason to consider undecorated routes). First, we apply Lemma 4.1 to see that \(H\) is a sun-like graph whose cycle has length \(\ell\), with one \(1\)-hub, \(k\)\(2\)-hubs, and no \(i\)-hubs for \(i>2\). Let \(\eta_{s}(H),\eta_{s}(G)\) be the sum of weights of \(s\)-routes which interact with at least \(2\) hubs (with respect to \(H\) and \(G\), respectively). 
By Lemmas 4.4 and 4.5, and the fact that \(\ell\geq 15\) from Definition 2.2, we have \(\eta_{s}(H)=\eta_{s}(G)\) for all \(s\leq 14\) (so, we mostly just write "\(\eta_{s}\)" to indicate this common value). Now, we use the parameters \(\eta_{s}\) to study the structure of \(H\). We break this down into a sequence of claims. **Claim 4.7**.: _In \(H\), the closest pair of hubs is at distance 4._ Proof.: Note that \(\eta_{2d+2}>0\) if and only if there are two hubs whose distance is at most \(d\). Indeed, the shortest way for a route to interact with two hubs is to look at one hub, walk \(d\) steps to the next hub, look at it, and walk back; this takes \(1+d+1+d=2d+2\) steps. The closest pair of hubs in \(G\) are at distance 4, so the same is true in \(H\). (Note that \(2\cdot 4+2\leq 14\)). **Claim 4.8**.: _In \(H\):_ 1. _the 1-hub has distance 4 from exactly one other hub, and_ 2. _the number of pairs of hubs at distance 4 from each other is the same in_ \(G\) _and_ \(H\)_._ Proof.: The only routes that contribute to \(\eta_{10}\) are those routes which walk back and forth between two different hubs at distance 4, looking once at each hub along the way. Each such route contributes a weight of 4, unless one of the hubs is a 1-hub, in which case the route contributes a weight of 2. Also, each such route has multiplicity 10 (all ten cyclic shifts yield different routes, but reversing the order does not yield any further routes). So, \(\eta_{10}/20\) can be interpreted as the number of hubs at distance 4 from the 1-hub, plus two times the number of pairs of 2-hubs at distance 4 from each other. In \(G\), there is exactly one hub at distance 4 from the 1-hub. So, \(\eta_{10}(G)/20=\eta_{10}(H)/20\) is odd, meaning that there must be an odd number of hubs at distance 4 from the 1-hub in \(H\). The only possible odd number here is 1, because there is simply no room to put three or more hubs at distance 4 from the 1-hub. Then, in \(H\) and in \(G\), the number of pairs of 2-hubs at distance 4 from each other is \((\eta_{10}/20-1)/2\) Now, Claims 4.7 and 4.8 show that the contributions to \(\eta_{s}(H)\) and \(\eta_{s}(G)\) from routes which interact with two hubs within distance at most \(4\) (and no other hubs) are the same. (Formally, this can be proved in a similar way to Lemma 4.5, considering a bijection between the set of pairs of hubs at distance \(4\) in \(G\), and the set of pairs of hubs at distance \(4\) in \(H\)). Let \(\eta^{\prime}_{s}(H),\eta^{\prime}_{s}(G)\) be obtained from \(\eta_{s}(H)\) and \(\eta_{s}(G)\) by subtracting these contributions, so \(\eta^{\prime}_{s}(H)=\eta^{\prime}_{s}(G)\) for \(s\leq 14\). **Claim 4.9**.: _In \(H\), there are no hubs at distance \(5\) from each other._ Proof.: The only routes which can contribute to \(\eta^{\prime}_{12}\) are routes which interact with two different hubs at distance \(5\) from each other. (By Claim 4.7, every pair of hubs is at distance at least \(4\) from each other, so routes of length \(12\) are much too short to interact with three different hubs). Since \(G\) has no pair of hubs at distance \(5\), the same is true for \(H\). **Claim 4.10**.: _In \(H\):_ 1. _the 1-hub does not have distance_ \(6\) _from any other hub, and_ 2. _the number of pairs of hubs at distance 6 from each other is the same in_ \(G\) _and_ \(H\)_._ Proof.: Given Claim 4.9, the only routes which can contribute to \(\eta^{\prime}_{14}\) are routes which interact with two different hubs at distance \(6\). 
(Routes of length \(14\) are still too short to interact with three different hubs). The same considerations as for Claim 4.8 show that \(\eta^{\prime}_{14}/28\) can be interpreted as the number of hubs at distance \(6\) from the 1-hub, plus two times the number of pairs of 2-hubs at distance \(6\) from each other. In \(G\), there is no hub at distance \(6\) from the 1-hub. So, \(\eta^{\prime}_{14}(G)/28=\eta^{\prime}_{14}(H)/28\) is even, meaning that there are an even number of hubs at distance \(6\) from the 1-hub \(v^{*}\) in \(H\). The only possible even number here is zero, because if there were two hubs at distance \(6\) from \(v^{*}\) (one on either side), one of these 2-hubs would be at distance \(2\) from the hub guaranteed by Claim 4.8(1) at distance \(4\) from \(v^{*}\), and this is ruled out by Claim 4.7. Then, in \(H\) and in \(G\), the number of pairs of 2-hubs at distance \(6\) from each other is \((\eta^{\prime}_{14}/28)/2\). Now, Claims 4.7 to 4.10 together imply that \(H\) is a \((k,\ell)\)-nice graph. Indeed, imagine walking around the cycles of \(G\) and \(H\), and consider the distances between each pair of consecutive hubs. By Claims 4.7 and 4.9, these distances are either \(4\) or at least \(6\). By Claims 4.8(2) and 4.10(2), the number of consecutive pairs of hubs in \(H\) which are at distance \(4\) or \(6\) is the same as the number of consecutive pairs of hubs in \(G\) which are at distance \(4\) or \(6\); this number is exactly \(k\). Recalling that \(H\) and \(G\) both have exactly \(k+1\) hubs, it follows that in \(H\) we can start from some hub \(v_{0}\) and walk along the cycle, encountering a new hub every \(4\) or \(6\) steps until we reach a final hub \(v_{k}\). By Claims 4.8(1) and 4.10(1), the 1-hub is either \(v_{0}\) or \(v_{k}\) (with distance exactly \(4\) to its closest 2-hub). We have established that \(H\) is \((\ell,k)\)-nice. ### Decoding a nice graph Now, we complete the proof of Lemma 2.3, showing that we can decode a specific nice graph using its Laplacian spectrum. Proof of Lemma 2.3.: As in the proof of Lemma 4.6, we will omit the word "decorated" (we will again have no reason to consider undecorated routes). Suppose we know that \(G\) is an \((\ell,k)\)-nice graph (for some \(k\geq 1\) and \(\ell>12k\)), and suppose we know the spectrum of \(G\). We will show how to use this information to determine exactly which \((\ell,k)\)-nice graph \(G\) is (this suffices to prove Lemma 2.3, by Lemma 4.6). Specifically, it suffices to determine, for each \(q\leq 3k-1\), whether there is a hub at distance \(2q\) from the 1-hub \(v^{*}\). (In a nice graph, every hub is at even distance from \(v^{*}\), and the furthest possible distance between hubs is \(4+6(k-1)=6k-2\)). We proceed by induction. For some \(q\leq 3k-1\), suppose we know the positions of all hubs within distance \(2q-1\) of \(v^{*}\). We would like to determine whether there is a hub at distance \(2q\) from \(v^{*}\). Let \(\eta_{s}\) be the sum of weights of \(s\)-routes which interact with at least \(2\) hubs. By Lemma 4.5, we have enough information to determine \(\eta_{s}\) for \(s<\ell\). We can refine this further: let \(\eta^{\prime}_{s}\) be obtained from \(\eta_{s}\) by subtracting the contribution from all routes which interact only with hubs within distance \(2q-1\) of \(v^{*}\). Since our inductive assumption is that we know the positions of all hubs within distance \(2q-1\) of \(v^{*}\), we have enough information to determine \(\eta^{\prime}_{s}\) (for \(s<\ell\)). 
We focus in particular on the quantity \(\eta^{\prime}_{4q+2}\) (note that \(4q+2\leq 4(3k-1)+2<12k<\ell\), so we have enough information to determine this quantity). We break down \(\eta^{\prime}_{4q+2}\) further: * Let \(\alpha_{4q+2}\) be the contribution to \(\eta^{\prime}_{4q+2}\) from routes which interact with \(v^{*}\), and * Let \(\beta_{4q+2}(t)\) be the contribution to \(\eta^{\prime}_{4q+2}\) from routes which do not interact with \(v^{*}\), and interact with \(2\)-hubs \(t\) times. Note that \(\eta^{\prime}_{4q+2}=\alpha_{4q+2}+\sum_{t=2}^{\infty}\beta_{4q+2}(t)\). **Claim 4.11**.: _We have_ \[\alpha_{4q+2}=\begin{cases}8q+4&\text{if there is a hub at distance $2q$ from $v^{*}$}\\ 0&\text{otherwise}\end{cases}\] Proof.: If there is no hub at distance \(2q\) from \(v^{*}\), then a route of length \(4q+2\) is simply too short to interact with \(v^{*}\) and with one of the \(2\)-hubs that is not within distance \(2q-1\) of \(v^{*}\). If there is a hub \(v\) at distance \(2q\) from \(v^{*}\), the only routes which contribute to \(\alpha_{4q+2}\) are those routes which walk back and forth between \(v\) and \(v^{*}\), looking once at \(v\) and \(v^{*}\) along the way. All these routes are cyclic shifts of each other (so, there are \(4q+2\) of them), and each such route contributes a weight of \(2\). **Claim 4.12**.: \(\beta_{4q+2}(t)\) _is divisible by \(8\) for all \(t\)._ Proof.: Consider a \(2\)-hub \(v\), and write \(x,y\) for the leaves attached to \(v\). Consider a route \(R\) which at some step \(j\) enters \(x\) from \(v\) (then waits at \(x\) for some number of steps before returning to \(v\)). We can slightly modify \(R\) by simply entering \(y\) instead of \(x\) at step \(j\) (and then waiting at \(y\) for the same number of steps before returning to \(v\)). Say that two routes are _equivalent_ if they can be obtained from one another by a sequence of modifications of this type. So, routes in an equivalence class have essentially the same structure, but they may visit different leaves. Now, consider a route \(R\) which looks at \(2\)-hubs \(a\) times, and enters leaves attached to \(2\)-hubs \(b\) times (so, \(R\) interacts with \(2\)-hubs \(a+b\) times). The equivalence class of \(R\) has size \(2^{b}\), and the weight of each route in this equivalence class is divisible by \(2^{a}\). So, the total weight of this equivalence class is divisible by \(2^{a+b}\). It immediately follows that \(\beta_{4q+2}(t)\) is divisible by \(2^{t}\), so if \(t\geq 3\) then \(\beta_{4q+2}(t)\) is divisible by \(8\). It remains to consider \(\beta_{4q+2}(2)\) in more detail. The routes that contribute to \(\beta_{4q+2}(2)\) are the routes which interact once each with two different \(2\)-hubs \(u,v\) (and do not interact with \(v^{*}\)). Fix such a route \(R\). As above, the equivalence class of \(R\) contributes weight divisible by \(4\), so we just need an additional factor of \(2\). This comes from the fact that \(\operatorname{mult}(R)\) is equal to \(4q+2\) or \(8q+4\) (both of which are divisible by \(2\)). Indeed, all \(4q+2\) cyclic shifts of \(R\) yield different routes, because there is a unique interaction-with-\(u\) step whose position changes with each cyclic shift. Reversing the order of \(R\) may or may not yield \(4q+2\) additional routes. Finally, given Claims 4.11 and 4.12, we can determine whether there is a \(2\)-hub at distance \(2q\) from \(v^{*}\) simply by checking whether \(\eta^{\prime}_{4q+2}\) is divisible by \(8\) or not. This completes the inductive step. ## 5. 
Determining bipartiteness with the signless Laplacian spectrum In this section we prove Lemma2.6. This proof mostly comes down to the following two lemmas. **Definition 5.1**.: For any graph \(G\), let \(f_{\lvert\operatorname{L}\rvert}(G)\) be the product of nonzero eigenvalues of \(\lvert\operatorname{L}(G)\rvert\). **Lemma 5.2**.: _If \(G\) is not bipartite, then \(f_{\lvert\operatorname{L}\rvert}(G)\) is divisible by 4._ Proof.: Let \(G_{1},\dots,G_{c}\) be the connected components of a non-bipartite graph \(G\), and suppose without loss of generality that \(G_{1}\) is non-bipartite. By Fact3.10, each \(f_{\lvert\operatorname{L}\rvert}(G_{i})\) is an integer, and by Fact3.2 we have \(f_{\lvert\operatorname{L}\rvert}(G)=f_{\lvert\operatorname{L}\rvert}(G_{1} )\dots f_{\lvert\operatorname{L}\rvert}(G_{c})\). By Proposition3.14\((2)\), \(f_{\lvert\operatorname{L}\rvert}(G_{1})\) is divisible by \(4\). **Lemma 5.3**.: _If \(G\) is a connected bipartite unicyclic graph with \(n\) vertices, whose cycle has length \(\ell\), then \(f_{\lvert\operatorname{L}\rvert}(G)=n\ell\)._ Proof.: Since \(G\) is bipartite, its signless Laplacian spectrum is the same as its Laplacian spectrum (by Fact3.1), so by Kirchhoff's matrix tree theorem (Theorem3.13), \(f_{\lvert\operatorname{L}\rvert}(G)\) is \(n\) times the number of spanning trees in \(G\) (which is \(\ell\), as we have already observed in the proof of Lemma4.1). Now we are ready to prove Lemma2.6. Proof of Lemma2.6.: Let \(G\) be an \((\ell,k)\) nice graph, for \(\ell\equiv 2\pmod{4}\), and let \(H\) be a graph with the same signless Laplacian spectrum as \(G\). As discussed in Section2.2, given Lemma2.3 and Fact3.1, we just need to prove that \(H\) is bipartite. Let \(n=\ell+2k+1\) be the number of vertices in \(G\). By Lemma5.3, we have \(f_{|\mathrm{L}|}(G)=n\ell\). Since \(\ell\equiv 2\pmod{4}\) and \(n=\ell+2k+1\) is odd, \(f_{|\mathrm{L}|}(G)\) is not divisible by \(4\). Since \(G\) and \(H\) have the same spectrum, we have \(f_{|\mathrm{L}|}(G)=f_{|\mathrm{L}|}(H)\), so Lemma5.2 implies that \(H\) is bipartite. ## 6. The prime case of the main theorem In this section we prove Lemma2.9. As outlined in Section2.3, we will use the Cameron-Goethals-Seidel-Shult theorem (Theorem3.19), together with the following fact. Recall the definition of a generalised line graph from Definition3.18. **Lemma 6.1**.: _If a generalised line graph is not a line graph, then its adjacency matrix has a zero eigenvalue._ Proof.: Let \(G\) be a generalised line graph that is not a line graph. We will show that \(G\) has two vertices with the same set of neighbours, meaning that \(\mathrm{A}(G)\) has two equal rows, so is not invertible and therefore has a zero eigenvalue. By the definition of a generalised line graph, \(G\) contains a cocktailparty graph \(\mathrm{CP}(a)\) for some \(a\geq 1\). This cocktailparty graph can be thought of as a complete graph \(K_{2a}\) with a perfect matching removed. Consider one of the edges of this removed perfect matching, and let \(u\) and \(v\) be its endpoints. Then, \(u\) and \(v\) have the same neighbourhood (in \(G\)), as desired. Now, crucially, line graphs of nice graphs as in Lemma2.9 do not have zero eigenvalues. **Lemma 6.2**.: _Let \(G\) be an \((\ell,k)\)-nice graph with \(\ell\equiv 2\pmod{4}\). Then \(\mathrm{A}(\mathrm{line}(G))\) does not have a zero eigenvalue._ We will prove Lemma6.2 by explicitly computing the determinant of \(\mathrm{A}(\mathrm{line}(G))\) using Theorem3.12. 
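Before giving that computation, here is a small numerical sanity check (illustrative only, and not part of any proof) of Lemma 5.3 and of the nonzero determinant asserted by Lemma 6.2, for one concrete nice graph. The sketch assumes numpy and networkx, and the specific \((18,1)\)-nice graph is an arbitrary admissible choice with \(\ell\equiv 2\pmod 4\).

```python
import numpy as np
import networkx as nx

def nice_graph(ell, two_hub_positions):
    """A nice graph: a cycle of length ell, one leaf at vertex 0 (the 1-hub),
    and two leaves at each listed position (the 2-hubs)."""
    G = nx.cycle_graph(ell)
    G.add_edge(0, ("leaf", 0, 0))
    for p in two_hub_positions:
        G.add_edge(p, ("leaf", p, 0))
        G.add_edge(p, ("leaf", p, 1))
    return G

ell = 18                                   # ell = 2 (mod 4)
G = nice_graph(ell, [4])                   # an (18,1)-nice graph
n = G.number_of_nodes()

# Lemma 5.3: the product of nonzero signless-Laplacian eigenvalues should equal n * ell.
A = nx.to_numpy_array(G)
Q = np.diag(A.sum(axis=1)) + A             # signless Laplacian |L| = D + A
eig = np.linalg.eigvalsh(Q)
print(round(np.prod(eig[np.abs(eig) > 1e-8])), n * ell)   # both 378 here

# Lemma 6.2: A(line(G)) has nonzero determinant (the computation below gives absolute value 4).
print(round(np.linalg.det(nx.to_numpy_array(nx.line_graph(G)))))
```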
We defer this computation to Section 6.1, as it is a little involved; first we show how to use it to prove Lemma 2.9 (after stating a definition that will be used in the proofs of Lemma 2.9 and Theorem 1.4). **Definition 6.3**.: For any graph \(G\), let \(f_{\mathrm{A}}(G)\) be the product of nonzero eigenvalues of \(\mathrm{A}(G)+2\). Equivalently, writing \(\sigma\) for the adjacency spectrum of \(G\), \[f_{\mathrm{A}}(G)=\prod_{\begin{subarray}{c}\lambda\in\sigma\\ \lambda\neq-2\end{subarray}}(\lambda+2).\] Proof of Lemma 2.9 assuming Lemma 6.2.: Consider \(\ell,k\) with \(\ell\equiv 2\pmod{4}\), and let \(n=\ell+2k+1\). Define \[n_{0}=\max\{f_{\mathrm{A}}(Q):Q\text{ is a graph on at most $36$ vertices}\}, \tag{6.1}\] and suppose \(n\) is a prime number larger than \(n_{0}\). Let \(G\) be an \((\ell,k)\)-nice graph, and suppose that \(Q\) is a graph with the same adjacency spectrum as \(\mathrm{line}(G)\). Our objective is to prove that \(Q=\mathrm{line}(H)\) for some graph \(H\) with \(n\) vertices. Indeed, if we are able to prove this, it will follow from Proposition 3.15 that \(H\) has the same nonzero signless Laplacian eigenvalues as \(G\), and since \(H\) and \(G\) have the same number of vertices, the multiplicity of the zero eigenvalue will also be the same in \(H\) and \(G\). It will then follow that \(H\) and \(G\) are isomorphic (hence \(Q\) and \(\mathrm{line}(G)\) are isomorphic) by Lemma 2.6. Write \(Q_{1},\ldots,Q_{c}\) for the connected components of \(Q\). By Fact 3.10, each \(f_{\mathrm{A}}(Q_{i})\) is an integer, and by Fact 3.2 we have \(f_{\mathrm{A}}(Q_{1})\ldots f_{\mathrm{A}}(Q_{c})=f_{\mathrm{A}}(Q)\). On the other hand, by Proposition 3.15 and Lemma 5.3, \[f_{\mathrm{A}}(Q)=f_{\mathrm{A}}(\mathrm{line}(G))=f_{|\mathrm{L}|}(G)=n\ell. \tag{6.2}\] Recalling that \(n\) is a prime number, some \(f_{\mathrm{A}}(Q_{i})\) must be divisible by \(n\). Suppose without loss of generality that \[f_{\mathrm{A}}(Q_{1})\text{ is divisible by }n. \tag{6.3}\] By the Cameron-Goethals-Seidel-Shult theorem (Theorem 3.19), Lemmas 6.1 and 6.2, and our assumption \(n>n_{0}\) from the start of the proof, \(Q_{1}\) is a line graph. We write \(Q_{1}=\mathrm{line}(H_{1})\) for some (connected) graph \(H_{1}\), with \(v_{1}\) vertices and \(e_{1}\) edges. Note that \[e_{1}\leq n, \tag{6.4}\] because \(Q_{1}\) has \(e_{1}\) vertices and is a connected component of \(Q\), which has \(n\) vertices (note that \(Q\) has the same number of vertices as \(\mathrm{line}(G)\), which is \(n\) because \(G\) has \(n\) edges). Now, by Proposition 3.15 we have \(f_{\mathrm{A}}(Q_{1})=f_{|\mathrm{L}|}(H_{1})\). This cannot be divisible by \(4\), because \(f_{\mathrm{A}}(Q)=n\ell\) is not divisible by \(4\) (here we are recalling (6.2), and using that \(n\) is odd and \(\ell\equiv 2\pmod{4}\)). So, by Lemma 5.2, \(H_{1}\) is bipartite. By Lemma 3.17, \(\mathrm{A}(Q)\) has \(-2\) as an eigenvalue with multiplicity \(1\), so (using Fact 3.2), either \(-2\) is not an eigenvalue of \(\mathrm{A}(Q_{1})\) or it is an eigenvalue with multiplicity \(1\). **Case 1: \(-2\) is not an eigenvalue of \(\mathrm{A}(Q_{1})\).** In this case, Lemma 3.17 says that \(e_{1}=v_{1}-1\), and \(H_{1}\) is a tree. The largest TU-subgraph of \(H_{1}\) is \(H_{1}\) itself, so by Theorem 3.12 and Proposition 3.15 we have \(f_{\mathrm{A}}(Q_{1})=f_{|\mathrm{L}|}(H_{1})=v_{1}\). Then, (6.3) says that \(v_{1}\) is divisible by \(n\). (6.4) says that \(v_{1}-1\leq n\), so we must have \(v_{1}=n\). 
It follows that \(Q_{1}=\mathrm{line}(H_{1})\) has \(e_{1}=n-1\) vertices, meaning that \(Q\) only has room for one other component \(Q_{2}\), consisting of a single isolated vertex. We then compute \(f_{\mathrm{A}}(Q_{2})=1\), so \(f_{\mathrm{A}}(Q)=f_{\mathrm{A}}(Q_{1})f_{\mathrm{A}}(Q_{2})=v_{1}=n\). This is not consistent with the fact that \(f_{\mathrm{A}}(Q)=n\ell\) (as we observed in (6.2)), so this case cannot actually occur. **Case 2: \(-2\) is an eigenvalue of \(\mathrm{A}(Q_{1})\).** In this case, Lemma3.17 says that \(e_{1}=v_{1}\), and \(H_{1}\) is an even-unicyclic graph. Let \(\ell_{1}\) be the length of the cycle in \(H_{1}\), so by Lemma5.3 we have \(f_{\mathrm{A}}(Q_{1})=f_{[\mathrm{L}]}(H_{1})=v_{1}\ell_{1}\). By (6.4) we have \(\ell_{1}\leq v_{1}\leq n\), and (6.3) says that \(v_{1}\ell_{1}\) is divisible by the prime number \(n\). So, we must have \(v_{1}=n\). Since \(Q_{1}=\mathrm{line}(H_{1})\) has \(e_{1}=v_{1}=n\) vertices, there is no room for any other components: we have proved that \(Q=Q_{1}=\mathrm{line}(H_{1})\) for some \(H_{1}\) with \(n\) vertices, as desired. ### Computing the determinant of the line graph of a nice graph In this subsection we prove Lemma6.2. First, we need some definitions that allow us to discuss the structure of the line graph of a nice graph. **Definition 6.4**.: Let \(uv\) be an edge in a graph \(Q\). To _add an \(i\)-house_ to \(uv\) is to add a set \(S\) of \(i\) new vertices to \(Q\), and to add all possible edges between vertices in \(S\cup\{u,v\}\). Then, we say that the subgraph induced by \(S\cup\{u,v\}\) (which is a complete graph on \(i+2\) vertices) is an \(i\)_-house_. The vertices \(u,v\) are _internal_ and the vertices in \(S\) are _external_. Note that the line graph of every \((\ell,k)\)-nice graph (as defined in Definition2.2) can be obtained by starting with a cycle of length \(\ell\), then adding a \(1\)-house to one edge and adding \(2\)-houses to \(k\) other edges. The distances between pairs of consecutive \(i\)-houses are always \(3\) or \(5\) (except one longer distance around the cycle). See Figure2 for an illustration. Now, our objective is to compute the determinant of the line graph of a nice graph. We will be able to reduce this to computing the determinant of a slightly simpler type of graph, which can be studied recursively. Recall from Definition3.11 that a spanning elementary subgraph of a graph \(G\) is a spanning subgraph (covering all vertices) consisting of vertex-disjoint edges and cycles. For such a subgraph \(X\), recall that \(\beta(X)\) accumulates a factor of \(-1\) for each edge-component, and a factor of \(-2\) for each cycle-component. By Theorem3.12, the determinant of \(\mathrm{A}(G)\) is (up to sign) the sum of \(\beta(X)\) over all spanning elementary subgraphs \(X\) of \(G\). **Definition 6.5**.: Consider \(r\geq 0\) and \(k\geq 0\), and \(1\leq a_{1}\leq\cdots\leq a_{k}\leq r\) satisfying \(a_{i}-a_{i-1}\geq 2\) for each \(2\leq i\leq k\). The graph \(Q(r;a_{1},\ldots,a_{k})\) is defined by starting with a path of length \(r\), and adding a \(2\)-house on the \(a_{i}\)-th edge of this path, for each \(i\). (See Figure3 for an illustration). Let \(q(r;a_{1},\ldots,a_{k})\) be the sum of \(\beta(X)\) over all spanning elementary subgraphs \(X\) of \(Q(r;a_{1},\ldots,a_{k})\). **Lemma 6.6**.: _Let \(r,k,a_{1},\ldots,a_{k}\) be as in Definition6.5. For inductive reasons it is convenient to additionally allow \(r=-1\) (in which case \(Q(r)\) is the graph with no vertices)._ 1. 
_Taking_ \(k=0\)_, we have_ \(q(-1)=1\) _and_ \(q(0)=0\)_._ 2. _If_ \(r-a_{k}\geq 2\) _(or if_ \(r\geq 2\) _and_ \(k=0\)_), then_ \(q(r;a_{1},\ldots,a_{k})=-q(r-2;a_{1},\ldots,a_{k})\)_._ 3. _If_ \(r-a_{k}=1\) _then_ \(q(r;a_{1},\ldots,a_{k})=q(r-2;a_{1},\ldots,a_{k-1})+2q(r-3;a_{1},\ldots,a_{k-1})\)_._ 4. _If_ \(r=a_{k}\) _then_ \(q(r;a_{1},\ldots,a_{k})=-2q(r-1;a_{1},\ldots,a_{k-1})-3q(r-2;a_{1},\ldots,a_{k-1})\)_._

Figure 3. An illustration of the graph \(Q(13;3,7,13)\), with \(2\)-houses on the third, seventh, and thirteenth edges of the underlying path.

Proof.: First, (1) is an immediate observation. If \(a_{k}<r\) (or if \(r\geq 1\) and \(k=0\)), then the final vertex in \(Q(r;a_{1},\dots,a_{k})\) has degree \(1\). In an elementary spanning subgraph, this final vertex can only be contained in an edge-component, consisting of the final two vertices of \(Q(r;a_{1},\dots,a_{k})\). In particular, if \(r-a_{k}\geq 2\) (or if \(r\geq 2\) and \(k=0\)), the spanning elementary subgraphs of \(Q(r;a_{1},\dots,a_{k})\) can be obtained by taking a spanning elementary subgraph of \(Q(r-2;a_{1},\dots,a_{k})\), and adding a single edge-component (see Figure 4). We deduce (2), recalling that each edge-component contributes a weight of \(-1\). If \(r-a_{k}=1\), then the aforementioned edge-component covers one of the internal vertices of the final \(2\)-house. There are two different ways to cover the two external vertices in this \(2\)-house by a spanning elementary subgraph: either we can cover them with a single edge or we can cover them, in addition to the remaining internal vertex, with a \(3\)-cycle (see Figure 4). In the first case, we accumulate an additional factor of \(-1\), and the remaining vertices of the spanning elementary subgraph can be interpreted as a spanning elementary subgraph of \(Q(r-2;a_{1},\dots,a_{k-1})\). In the second case, we accumulate an additional factor of \(-2\), and the remaining vertices of the spanning elementary subgraph can be interpreted as a spanning elementary subgraph of \(Q(r-3;a_{1},\dots,a_{k-1})\). So, \[q(r;a_{1},\dots,a_{k})=(-1)^{2}q(r-2;a_{1},\dots,a_{k-1})+(-1)(-2)q(r-3;a_{1},\dots,a_{k-1}),\] yielding (3). If \(r=a_{k}\), then the final vertex of \(Q(r;a_{1},\dots,a_{k})\) is an internal vertex of the final \(2\)-house, and does not have degree \(1\). There are a few different ways to cover the final vertex and the two external vertices of the final house by a spanning elementary subgraph: we could cover just these three vertices with a \(3\)-cycle, or we could cover the entire \(2\)-house (there are three different ways to do this with two disjoint edges, and three different ways to do this with a \(4\)-cycle; see Figure 4). Similar considerations as above yield \[q(r;a_{1},\dots,a_{k})=-2q(r-1;a_{1},\dots,a_{k-1})+(3(-1)^{2}+3(-2))q(r-2;a_{1},\dots,a_{k-1}),\] yielding (4).

Figure 4. All the possible ways to cover the final vertex (and possibly the external vertices in the final \(2\)-house) in a spanning elementary subgraph of a graph \(Q(r;a_{1},\dots,a_{k})\).

The recurrences described in Lemma 6.6 are sufficient to compute any \(q(r;a_{1},\ldots,a_{k})\), but the general formulas are rather complicated. We consider a restricted class of choices of \(a_{1},\ldots,a_{k}\), which will be sufficient for the proof of Lemma 6.2. **Corollary 6.7**.: _Suppose \(a_{1},\ldots,a_{k}\) are odd integers. Then_ \[q(r;a_{1},\ldots,a_{k})=\begin{cases}2k(-1)^{r/2+1}&\text{if $r$ is even},\\ (2k+1)(-1)^{(r+1)/2}&\text{if $r$ is odd}.\end{cases}\] Proof.: We proceed by induction on \(k\). 
First, iterating Lemma 6.6(2), starting with Lemma 6.6(1), yields \[q(a_{1}-2)=(-1)^{(a_{1}-1)/2},\qquad q(a_{1}-1)=0.\] So, Lemma 6.6(3) and (4) give \[q(a_{1}+1;a_{1})=2(-1)^{(a_{1}-1)/2}=2(-1)^{(a_{1}+1)/2+1},\qquad q(a_{1};a_{1})=-3(-1)^{(a_{1}-1)/2}=3(-1)^{(a_{1}+1)/2},\] respectively. Iterating Lemma 6.6(2) again yields the desired result for \(k=1\). Now, consider \(k\geq 2\) and assume that the desired statement holds for smaller \(k\). Then, recalling that \(a_{k}\) is odd, our inductive assumption together with Lemma 6.6(3,4) yields \[q(a_{k};a_{1},\ldots,a_{k}) =-2q(a_{k}-1;a_{1},\ldots,a_{k-1})-3q(a_{k}-2;a_{1},\ldots,a_{k-1})\] \[=-2(2k-2)(-1)^{(a_{k}-1)/2+1}-3(2k-1)(-1)^{(a_{k}-1)/2}\] \[=(2k+1)(-1)^{(a_{k}+1)/2},\] \[q(a_{k}+1;a_{1},\ldots,a_{k}) =q(a_{k}-1;a_{1},\ldots,a_{k-1})+2q(a_{k}-2;a_{1},\ldots,a_{k-1})\] \[=(2k-2)(-1)^{(a_{k}-1)/2+1}+2(2k-1)(-1)^{(a_{k}-1)/2}\] \[=2k(-1)^{(a_{k}+1)/2+1}.\] Iterating Lemma 6.6(2) proves the desired statement. Now, we are ready to prove Lemma 6.2. Proof of Lemma 6.2.: Let \(b_{1}<\cdots<b_{k}\) be the distances of the \(2\)-hubs from the \(1\)-hub in \(G\) (so in particular \(b_{1}=4\), and all \(b_{i}\) are even). Let \(D\) be the sum of \(\beta(X)\) over all spanning elementary subgraphs \(X\) of \(\operatorname{line}(G)\). Let \(u^{*}\) be the tip of the \(1\)-house in \(\operatorname{line}(G)\). There are four ways for an elementary subgraph to cover \(u^{*}\) (pictured in Figure 5): 1. \(u^{*}\) could be covered by a long cycle that runs all the way around the nice graph. 2. \(u^{*}\) could be covered by a \(3\)-cycle covering the entire \(1\)-house. 3. \(u^{*}\) could be covered by a single edge, whose other vertex is at distance \(3\) from the nearest \(2\)-house. 4. \(u^{*}\) could be covered by a single edge, whose other vertex is at distance \(4\) from the nearest \(2\)-house. Let \(D_{1},D_{2},D_{3},D_{4}\) be the contributions to \(D\) from spanning elementary subgraphs that cover \(u^{*}\) in each of the above four ways (in that order). First, \(D_{2},D_{3},D_{4}\) can be handled with Corollary 6.7, as follows. Recall that \(\ell\equiv 2\pmod{4}\).

Figure 5. Four possible ways to cover the tip of the \(1\)-house.

For \(D_{2}\): apart from the \(3\)-cycle covering the \(1\)-house, the rest of a spanning elementary subgraph corresponds to a spanning elementary subgraph of \(Q(\ell-3;b_{1}-1,\ldots,b_{k}-1)\), so \[D_{2}=-2q(\ell-3;b_{1}-1,\ldots,b_{k}-1)=-2(2k+1)=-4k-2. \tag{6.5}\] For \(D_{3}\): apart from the edge covering the tip of the \(1\)-house, the rest of a spanning elementary subgraph corresponds to a spanning elementary subgraph of \(Q(\ell-2;b_{1}-1,\ldots,b_{k}-1)\), so \[D_{3}=-q(\ell-2;b_{1}-1,\ldots,b_{k}-1)=-(-2k)=2k. \tag{6.6}\] For \(D_{4}\): apart from the edge covering the tip of the \(1\)-house, the rest of a spanning elementary subgraph corresponds to a spanning elementary subgraph of \[Q(\ell-2;b_{1},\ldots,b_{k})\cong Q(\ell-2;\ell-b_{k}-1,\ell-b_{k-1}-1,\ldots,\ell-b_{1}-1)\] (we can describe the graph in "two different directions"). Note that \(\ell-b_{k}\) is even (as the difference of two even numbers), so \[D_{4}=-q(\ell-2;\ell-b_{k}-1,\ell-b_{k-1}-1,\ldots,\ell-b_{1}-1)=-(-2k)=2k. \tag{6.7}\] It remains to consider \(D_{1}\). Suppose we have an elementary spanning subgraph which contains a long cycle \(C\) covering \(u^{*}\) and going around the \(\ell\)-cycle of \(\operatorname{line}(G)\). 
There are three different ways that \(C\) can interact with each \(2\)-house of \(\operatorname{line}(G)\) (all of which are pictured at the top of Figure 5). Specifically, there are two ways for \(C\) to pass through all \(4\) vertices of the \(2\)-house, or alternatively \(C\) can simply pass through the internal vertices of the \(2\)-house, leaving the remaining two external vertices to be covered by an edge-component. So, there are \(3^{k}\) spanning elementary subgraphs that contribute to \(D_{1}\). To compute the weight of each such subgraph: first, start with a base weight of \(-2\). For each \(2\)-house, we have three choices; the first two (incorporating the \(2\)-house in the cycle) do not affect the weight, but the third (leaving the external vertices for an edge-component) accumulates a factor of \(-1\). So, \[D_{1}=(-2)(1+1-1)^{k}=-2. \tag{6.8}\] Combining (6.5) to (6.8), we see that \(D=-4\), so by Theorem 3.12, the determinant of \(\operatorname{A}(\operatorname{line}(G))\) is nonzero (it has absolute value \(4\)). ## 7. Augmenting the prime case In this section we show how to use line graphs of nice graphs to define a family of exponentially many graphs that are determined by their adjacency spectrum. This definition includes a number of inequalities and number-theoretic properties which will be used in a somewhat delicate case analysis in the proof of Theorem 1.4 (to rule out various possibilities for graphs which have the same spectrum as one of our graphs of interest, but have different structure). **Definition 7.1**.: The _star graph_\(K_{1,n}\) consists of \(n\) leaves attached to a single vertex. Note that \(\operatorname{line}(K_{1,n})\) is the complete graph \(K_{n}\) on \(n\) vertices. Let \(\mathcal{G}_{n}\) be the family of graphs \(G\) satisfying the following properties. * \(G\) has two components. One of these components is an \((\ell,k)\)-nice graph \(G_{1}\) (for some parameters \(\ell,k\) satisfying \(\ell\leq\max(12k,15)\)), and the other of these components is a star graph \(K_{1,n_{2}}\) (with some number of edges \(n_{2}\)). * Writing \(n_{1}=\ell+2k+1\) for the number of edges and vertices of \(G_{1}\), we have \(n_{1}+n_{2}=n\) (i.e., \(G\) has \(n\) edges). * \(n_{1}\) is a sufficiently large prime number (larger than \(n_{0}\) from (6.1)). * \(\ell=2p\) for a sufficiently large prime number \(p\) (larger than \(n_{0}\) from (6.1)). * \(n_{2}\not\equiv 3\pmod{4}\). * \(n_{1}<n_{2}\). * \(2n_{1}+p-2>n\). * \(2n_{1}-\ell+2<n_{2}-1\). (Note that **G3** and **G4** imply that \(G_{1}\) satisfies the properties in Lemma 2.9). Let \(\mathcal{Q}_{n}=\operatorname{line}(\mathcal{G}_{n})\) be the family of line graphs of graphs in \(\mathcal{G}_{n}\). Then, the following two lemmas imply Theorem 1.4. **Lemma 7.2**.: _There is a constant \(c>0\) such that \(|\mathcal{Q}_{n}|\geq e^{cn}\) for every sufficiently large \(n\)._ **Lemma 7.3**.: _Every graph in \(\mathcal{Q}_{n}\) is determined by its (adjacency) spectrum._ It remains to prove these lemmas. First, Lemma 7.2 follows quite simply from Corollary 3.21. Proof of Lemma 7.2.: For sufficiently large \(n\), Corollary 3.21 guarantees the existence of prime numbers \(p,n_{1}\) such that \(n-n_{1}\not\equiv 3\pmod{4}\), and such that \[|n_{1}-0.45n|\leq 0.001n,\quad|p-0.2n|\leq 0.001n.\] Let \(\ell=2p\), let \(k=(n_{1}-\ell-1)/2\) (which is an integer since \(n_{1}\) is an odd prime and \(\ell\) is even), and let \(n_{2}=n-n_{1}\). 
Then, it is easy to check that \(\ell\leq\max(12k,15)\) (this is the condition for a nice graph in Definition 2.2), and that **G3** to **G8** all hold. We claim that there are exponentially many graphs in \(\mathcal{Q}_{n}\) with this specific choice of parameters. To see this, first note that different \((\ell,k)\)-nice graphs have different line graphs (as depicted in Figure 2, the \(\ell\)-cycle in a nice graph \(G\) corresponds to an \(\ell\)-cycle in line(G), and \(1\)-hubs and \(2\)-hubs in \(G\) correspond to \(1\)-houses and \(2\)-houses in line(G)). So, it suffices to prove that there are exponentially many graphs in \(\mathcal{G}_{n}\) with our specific choice of parameters. An \((\ell,k)\)-nice graph is specified by a sequence of \(k-1\) binary choices (every pair of consecutive \(2\)-hubs can be at distance \(4\) or \(6\)). Each of the different ways to make these binary choices lead to different (non-isomorphic) graphs. So, there are \(2^{k-1}\) different \((\ell,k)\)-nice graphs, meaning that \[|\mathcal{Q}_{n}|\geq 2^{k}\geq 2^{((0.45-0.001)n-2(0.2+0.001)n-1)/2}\geq e^{0.01 n}.\qed\] Then, to prove Lemma 7.3 we need a more sophisticated version of the arguments used to prove Lemma 2.9. In particular, we will need the following more detailed version of the case distinction in the proof of Lemma 2.9. **Lemma 7.4**.: _Let \(n_{0}\) be as in (6.1) and let \(Q\) be a connected graph with more than \(n_{0}\) vertices, such that all eigenvalues of \(\operatorname{A}(Q)\) are at least \(-2\), and such that zero is not an eigenvalue of \(\operatorname{A}(Q)\). Then we can write \(Q=\operatorname{line}(H)\) for some connected \(H\)._ 1. _If_ \(-2\) _is not an eigenvalue of_ \(\operatorname{A}(Q)\) _then one of the following holds._ _A._ \(H\) _is an odd-unicyclic graph, and_ \(f_{\operatorname{A}}(Q)=4\)_._ _B._ \(H\) _is a tree, and_ \(f_{\operatorname{A}}(Q)\) _is the number of vertices of_ \(H\)_._ 2. _If_ \(-2\) _is an eigenvalue of_ \(\operatorname{A}(Q)\) _with multiplicity 1, and if_ \(f_{\operatorname{A}}(Q)\) _is not divisible by 8, then_ \(H\) _is an even-unicyclic graph (with_ \(v\) _vertices and a cycle of length_ \(\ell\)_, say), and_ \(f_{\operatorname{A}}(Q)=v\ell\)_._ Proof.: The initial part of the lemma (that \(Q\) is a line graph) follows from Theorem 3.19 and Lemma 6.1. Then, the structural descriptions in **1A** and **1B** follow from Proposition 3.14 (specifically, **A** corresponds to the case where \(H\) is not bipartite, and **B** corresponds to the case where \(H\) is bipartite), and the statements about \(f_{\operatorname{A}}(Q)\) are immediate consequences of Theorem 3.12. For **2**, we can similarly apply Proposition 3.14, considering the cases where \(H\) is or is not bipartite. We see that either \(H\) is an even-unicyclic graph (in which case the statement about \(f_{\operatorname{A}}(Q)\) follows from Lemma 5.3), or \(H\) is a non-bipartite graph whose number of edges is one more than its number of vertices. We need to rule out this latter case (showing that whenever it occurs, \(f_{\operatorname{A}}(Q)\) is divisible by 8). So, suppose that \(H\) is non-bipartite and its number of edges is one more than its number of vertices. Let \(H^{\prime}\) be the _2-core_ of \(H\); its largest subgraph with minimum degree at least 2. One can obtain the 2-core by iteratively peeling off leaf vertices (in any order) until no leaves remain. There are two possibilities for the structure of \(H^{\prime}\): 1. [label=**I.**] 2. 
\(H^{\prime}\) consists of two edge-disjoint cycles with a single path between them (this path may have length zero), or 3. \(H^{\prime}\) is a "theta graph", consisting of two vertices with three internally disjoint paths between them. **Case I.** In the first case, write \(C_{1},C_{2}\) for the two cycles, and let \(\ell_{1},\ell_{2}\) be their lengths. For \(H\) to be non-biparitite, at least one of \(\ell_{1},\ell_{2}\) must be odd (suppose without loss of generality that \(\ell_{1}\) is odd). * If \(\ell_{2}\) is even, then the largest TU-subgraphs of \(H\) are the odd-unicyclic subgraphs obtained by deleting a single edge from \(C_{2}\). So, by Theorem 3.12 we have \(f_{\lvert\operatorname{L}\rvert}(H)=4\ell_{2}\), which is divisible by 8. * If \(\ell_{2}\) is odd, then the largest TU-subgraphs of \(H\) are the odd-unicyclic subgraphs obtained by deleting a single edge from \(C_{1}\) or \(C_{2}\), and the disconnected subgraphs (with two odd-unicyclic components) obtained by deleting an edge on the unique path between \(C_{1}\) and \(C_{2}\). Writing \(\ell_{3}\) for the length of the path between \(C_{1}\) and \(C_{2}\), by Theorem 3.12 we have \(f_{\lvert\operatorname{L}\rvert}(H)=4(\ell_{1}+\ell_{2})+4^{2}\ell_{3}\), which is divisible by 8. **Case II.** In the second case, write \(P_{1},P_{2},P_{3}\) for the three internally disjoint paths, and let \(\ell_{1},\ell_{2},\ell_{3}\) be their lengths. For \(H\) to be non-biparitite, it cannot be the case that \(\ell_{1},\ell_{2},\ell_{3}\) all have the same parity. Suppose without loss of generality that \(\ell_{1}\) is even and \(\ell_{2}\) is odd. * If \(\ell_{3}\) is even, then the largest TU-subgraphs of \(H\) are the odd-unicyclic subgraphs obtained by deleting a single edge from \(P_{1}\) or \(P_{3}\). So, by Theorem3.12 we have \(f_{|\mathrm{L}|}(H)=4(\ell_{1}+\ell_{3})\), which is divisible by \(8\). * If \(\ell_{3}\) is odd, then the largest TU-subgraphs of \(H\) are the odd-unicyclic subgraphs obtained by deleting a single edge from \(P_{2}\) or \(P_{3}\). So, by Theorem3.12 we have \(f_{|\mathrm{L}|}(H)=4(\ell_{2}+\ell_{3})\), which is divisible by \(8\). We also need the following consequence of Lemma3.3(2), allowing us to recognise a complete graph by its number of vertices and its largest eigenvalue. **Lemma 7.5**.: _Let \(G\) be a graph with \(n\) vertices, such that \(\mathrm{A}(G)\) has \(n-1\) as an eigenvalue. Then \(G\) is a complete graph._ Proof.: Let \(e\) be the number of edges of \(G\), and let \(\lambda_{\mathrm{max}}\) be the largest eigenvalue of \(\mathrm{A}(G)\). Then Lemma3.3(2) implies that \(n-1\leq\lambda_{\mathrm{max}}\leq\sqrt{2e-n+1}\), or equivalently that \(e\geq n(n-1)/2\); the only graph with this many edges is a complete graph. Now we prove Lemma7.3, completing the proof of Theorem1.4. Proof of Lemma7.3.: Let \(G\in\mathcal{G}_{n}\) (with parameters \(\ell,k,n_{2},n_{1},p\) as in Definition7.1), and let \(Q\) be a graph with the same adjacency spectrum as \(\mathrm{line}(G)\). Our objective is to prove that \(Q\) has the complete graph \(K_{n_{2}}\) as a connected component. Indeed, if we are able to prove this, then we can apply Lemma2.9 to the graph that remains after removing this \(K_{n_{2}}\) component (here we are using Fact3.2 to see that removing this \(K_{n_{2}}\) component has a predictable effect on the spectrum). 
As is well-known (see for example [5, Section 1.4.1]), the eigenvalues of a complete graph \(K_{n_{2}}\) are \(-1\) (with multiplicity \(n_{2}-1\)) and \(n_{2}-1\) (with multiplicity \(1\)). So (by Fact3.2), as in the proof of Lemma2.9 we can see that in the spectrum of \(\mathrm{A}(Q)\) there is no zero eigenvalue, and \(-2\) appears as an eigenvalue with multiplicity \(1\). Also, by Fact3.2, Proposition3.15, and Lemma5.3 we have \[f_{\mathrm{A}}(Q)=(n_{2}+1)n_{1}\ell=2(n_{2}+1)n_{1}p. \tag{7.1}\] Recalling (7.1) and 5, we see that \(f_{\mathrm{A}}(Q)\) is not divisible by \(8\) (so by Fact3.2, \(f_{\mathrm{A}}(Q_{i})\) is not divisible by \(8\) for any connected component \(Q_{i}\) of \(Q\)) Now, Fact3.2 tells us that some connected component \(Q_{2}\) of \(Q\) must have \(n_{2}-1\) as an eigenvalue. Let \(\Delta_{2}\) be the maximum degree of \(Q_{2}\), so by Lemma3.3(1) we have \[\Delta_{2}\geq n_{2}-1. \tag{7.2}\] In particular, \(Q_{2}\) has at least \(\Delta_{2}+1\geq n_{2}\) vertices, so by Lemma7.4 and the assumptions \(n_{1}>n_{2}\geq n_{0}\) from 3 and 6, we can write \(Q_{2}=\mathrm{line}(H_{2})\) for some graph \(H_{2}\). Let \(v_{2}\) be the number of vertices in \(H_{2}\). Now, we consider the cases in Lemma7.4 (**1A**, **1B** and **2**) for the structure of \(H_{2}\). We will show that all these cases lead to contradiction except **1B** (i.e., \(H_{2}\) is a tree), and in that case we will prove that \(v_{2}=n_{2}+1\) vertices (so \(H_{2}\) has \(n_{2}\) edges and \(Q_{2}\) has \(n_{2}\) vertices; this suffices to show that \(Q_{2}\) is our desired \(K_{n_{2}}\) component, by Lemma7.5). **Case 1A: \(H_{2}\) is odd-unicyclic.** In this case we have \(f_{\mathrm{A}}(Q_{2})=4\). Since \(f_{\mathrm{A}}(Q)\) is divisible by the prime number \(n_{1}\), there must be some component \(Q_{1}\neq Q_{2}\) such that \(f_{\mathrm{A}}(Q_{1})\) is divisible by \(n_{1}\). Recall from (7.1) that \(f_{\mathrm{A}}(Q)\) is not divisible by \(8\), so \(f_{\mathrm{A}}(Q_{1})\) must be odd. By Lemma7.4 (and the assumption \(n_{1}>n_{0}\) from 3), we can write \(Q_{1}=\mathrm{line}(H_{1})\) for some graph \(H_{1}\). Let \(v_{1}\) be the number of vertices in \(H_{1}\). Considering all cases of Lemma7.4, the only possibility that leads to \(f_{\mathrm{A}}(Q_{1})\) being odd is the case where \(H_{1}\) is a tree (whose number of vertices \(v_{1}\) is odd and divisible by \(n_{1}\)). Now, we can proceed similarly to **Case 1** in the proof of Lemma2.9. Note that \(Q_{1}\) has \(v_{1}-1\) vertices and \(Q_{2}\) has \(v_{2}\geq\Delta_{2}+1\geq n_{2}\) vertices (for the latter inequality, we used (7.2)). So, \(v_{1}-1+n_{2}\leq n\), or equivalently \(v_{1}\leq n_{1}+1\). Since \(v_{1}\) is divisible by \(n_{1}\) we must have \(v_{1}=n_{1}\), so \(Q\) only has room for one other component \(Q_{3}\) (other than \(Q_{1},Q_{2}\)), consisting of a single isolated vertex. If this component exists, it has \(f_{\mathrm{A}}(Q_{3})=1\). We then compute \(f_{\mathrm{A}}(Q)=f_{\mathrm{A}}(Q_{1})f_{\mathrm{A}}(Q_{2})=4n_{1}\), which is not consistent with (7.1). So, this case is impossible. **Case 1B: \(H_{2}\) is a tree.** In this case \(f_{\mathrm{A}}(Q_{2})=v_{2}\). Our objective is to prove that \(H_{2}\) has \(n_{2}+1\) vertices (this suffices, by Lemma7.5). We need to carefully consider various possibilities for the connected components which are responsible for the large prime factors \(n_{1}\) and \(p\) of \(f_{\mathrm{A}}(Q)\). The details will be a bit delicate. 
First, note that \(Q_{2}\) has \(v_{2}-1\) vertices; recalling (7.2), we have \[v_{2}-1\geq\Delta_{2}+1\geq n_{2}. \tag{7.3}\] Now, suppose that \(v_{2}\) is divisible by \(n_{1}\) (we will show that this leads to contradiction). By (7.3) and 6, we have \(v_{2}>n_{1}+1\), so in order for \(n_{1}\) to divide \(v_{2}\) we must have \(v_{2}\geq 2n_{1}\). It cannot be the case that \(v_{2}\) is divisible by \(p\) as well as \(n_{1}\) (this would cause \(v_{2}\) to be far too large, noting that \(Q_{2}\) has \(v_{2}-1\leq n\) vertices), so there must be some component \(Q^{*}\neq Q_{2}\) such that \(f_{\mathrm{A}}(Q^{*})\) is divisible by \(p\). Considering all cases in Lemma7.4, we see that this is only possible if \(Q^{*}\) has at least \(p-1\) vertices (as the line graph of a graph with at least \(p-1\) edges). But then \(Q^{*}\) and \(Q_{2}\) together have at least \((2n_{1}-1)+(p-1)\) vertices, which contradicts 7. So, \(v_{2}\) cannot be divisible by \(n_{1}\), and there must be some component \(Q_{1}\neq Q_{2}\) such that \(f_{\mathrm{A}}(Q_{1})\) is divisible by \(n_{1}\). By 7.4, we can write \(Q_{1}=\mathrm{line}(H_{1})\) for some graph \(H_{1}\). Next, suppose that \(H_{1}\) is a tree (we will show that this leads to contradiction). By (7.3), there are at most \(n_{1}\) vertices in components other than \(Q_{2}\). Since \(v_{1}\) is divisible by \(n_{1}\), we must have \(v_{1}=n_{1}\), meaning that \(Q_{1}\) has \(n_{1}-1\) vertices. So, \(Q\) only has room for one other component \(Q_{3}\) (other than \(Q_{1},Q_{2}\)), consisting of a single isolated vertex, and \(f_{\mathrm{A}}(Q)=f_{\mathrm{A}}(Q_{1})f_{\mathrm{A}}(Q_{2})=n_{1}v_{2}\leq n _{1}(n_{2}+2)\) (here we used that \(Q_{2}\) has at most \(n_{2}+1\) vertices, so \(v_{2}\leq n_{2}+2\)). This contradicts (7.1). So, \(Q_{1}\) is not the line graph of a tree. Considering all other cases in Lemma7.4, we see that the only way for \(f_{\mathrm{A}}(Q_{1})\) to be divisible by \(n_{1}\) is for \(Q_{1}\) to have at least \(n_{1}\) vertices. Recalling (7.3), we deduce that \(Q_{2}\) has exactly \(n_{2}\) vertices, as desired. **Case 2: \(H_{2}\) is even-unicyclic.** Let \(\ell_{2}=2q\) be the length of the cycle in \(H_{2}\), so \(f_{\mathrm{A}}(Q_{2})=v_{2}\ell_{2}\). We will again need to consider various possibilities for the connected components which are responsible for the large prime factors \(n_{1}\) and \(p\) of \(f_{\mathrm{A}}(Q)\) (in each case we need to reach a contradiction), but the details will be even more delicate. * First, suppose that \(v_{2}\) is divisible by \(n_{1}\). Note that \(Q_{2}\) has \(v_{2}\) vertices. Recalling (7.2) and 6, we have \(v_{2}\geq\Delta_{2}+1\geq n_{2}>n_{1}\), and we also have \(v_{2}\leq n<3n_{1}\) by 7, so in order for \(n_{1}\) to divide \(v_{2}\) we must have \(v_{2}=2n_{1}\). We consider possibilities for the prime factor \(p\). * Similarly to **Case 1B**, it cannot be the case that \(v_{2}\) is divisible by \(p\) as well as \(n_{1}\) (this would cause \(v_{2}\) to be too large). * Also similarly to **Case 1B**, it cannot be the case that there is another component \(Q^{*}\neq Q_{2}\) such that \(f_{\mathrm{A}}(Q^{*})\) is divisible by \(p\) (then \(Q^{*}\) would have to have at least \(p-1\) vertices by Lemma7.4, and \(Q^{*}\) and \(Q_{2}\) together would have at least \(2n_{1}+(p-1)\) vertices, contradicting 7). * Recalling that \(f_{\mathrm{A}}(Q_{2})=2v_{2}q\), the remaining case is that \(q\) is divisible by \(p\). 
In this case we have \(\ell_{2}\geq\ell\), i.e., the cycle in \(H_{2}\) has length at least \(\ell\). Each of the \(v_{2}\) edges in \(H_{2}\) can be incident to at most two of the edges of this cycle, so \(\Delta_{2}\leq v_{2}-\ell+2=2n_{1}-\ell+2\). But then (7.2) and 8 are inconsistent with each other. * So, \(v_{2}\) is not divisible by \(n_{1}\). Suppose next that \(q\) is divisible by \(n_{1}\), so the cycle of \(H_{2}\) has length at least \(2n_{1}\) and \(\Delta_{2}\leq v_{2}-2n_{1}+2\leq n-2n_{1}+2\). But then (7.2) implies \(p\leq n_{1}\leq n_{2}-1\leq n-2n_{1}+2\) (using 6), which is inconsistent with 7. * The only remaining possibility is that there is some component \(Q_{1}\neq Q_{2}\) such that \(f_{\mathrm{A}}(Q_{1})\) is divisible by \(n_{1}\). By 7, \(Q_{1}\) has at least \(n_{1}-1\) vertices, meaning that there are only \(n_{2}+1\) vertices left for \(Q_{2}\). By (7.2), \(Q_{2}\) must have at least \(\Delta_{2}+1\geq n_{2}\) vertices. * If \(Q_{2}\) has \(n_{2}\) vertices, then some vertex in \(Q_{2}\) is adjacent to all the other vertices in \(Q_{2}\), meaning that some edge of \(H_{2}\) is incident to all the other edges in \(H_{2}\). This is not possible, recalling that \(H_{2}\) is an even-unicyclic graph. * The only other possibility is that \(H_{2}\) has \(n_{2}+1\) vertices, meaning that \(Q_{1}\) has \(n_{1}-1\) vertices (and \(Q_{1},Q_{2}\) are the only components of \(Q\)). This is only possible if \(H_{1}\) is an \(n_{1}\)-vertex tree, recalling the cases in Lemma7.4. Then, some vertex in \(Q_{2}\) is adjacent to all but one of the other vertices in \(Q_{2}\), meaning that some edge of \(H_{2}\) is incident to all but one of the other edges in \(H_{2}\). This can only happen if \(\ell_{2}=4\). We deduce that \(f_{\mathrm{A}}(Q)=f_{\mathrm{A}}(Q_{1})f_{\mathrm{A}}(Q_{2})=n_{1}\cdot 4(n_{2}+1)\), which is not consistent with (7.1).
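As a closing numerical sanity check (again illustrative only, and not part of the proof), the sketch below assembles a small graph of the same shape as the members of \(\mathcal{G}_{n}\) — a nice graph together with a star — and verifies equation (7.1) for its line graph. The parameters are deliberately small, so the primality and size conditions **G3**–**G8** are not enforced here; the sketch assumes numpy and networkx.

```python
import numpy as np
import networkx as nx

def nice_graph(ell, two_hub_positions):
    """A nice graph: cycle of length ell, one leaf at vertex 0, two leaves at each 2-hub."""
    G = nx.cycle_graph(ell)
    G.add_edge(0, ("leaf", 0, 0))
    for p in two_hub_positions:
        G.add_edge(p, ("leaf", p, 0))
        G.add_edge(p, ("leaf", p, 1))
    return G

ell, n2 = 18, 25                                  # illustrative sizes only
G1 = nice_graph(ell, [4])                         # nice component with n1 vertices and edges
n1 = G1.number_of_nodes()
G = nx.disjoint_union(G1, nx.star_graph(n2))      # nice graph together with the star K_{1,n2}
Q = nx.line_graph(G)

# f_A(Q): product of (lambda + 2) over adjacency eigenvalues lambda != -2.
lam = np.linalg.eigvalsh(nx.to_numpy_array(Q))
shifted = lam + 2
f_A = np.prod(shifted[np.abs(shifted) > 1e-6])
print(round(f_A), (n2 + 1) * n1 * ell)            # both equal (n2+1)*n1*ell, as in (7.1)
```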
2305.19896
fpgaHART: A toolflow for throughput-oriented acceleration of 3D CNNs for HAR onto FPGAs
Surveillance systems, autonomous vehicles, human monitoring systems, and video retrieval are just few of the many applications in which 3D Convolutional Neural Networks are exploited. However, their extensive use is restricted by their high computational and memory requirements, especially when integrated into systems with limited resources. This study proposes a toolflow that optimises the mapping of 3D CNN models for Human Action Recognition onto FPGA devices, taking into account FPGA resources and off-chip memory characteristics. The proposed system employs Synchronous Dataflow (SDF) graphs to model the designs and introduces transformations to expand and explore the design space, resulting in high-throughput designs. A variety of 3D CNN models were evaluated using the proposed toolflow on multiple FPGA devices, demonstrating its potential to deliver competitive performance compared to earlier hand-tuned and model-specific designs.
Petros Toupas, Christos-Savvas Bouganis, Dimitrios Tzovaras
2023-05-31T14:30:17Z
http://arxiv.org/abs/2305.19896v1
# fpgaHART: A toolflow for throughput-oriented acceleration of 3D CNNs for HAR onto FPGAs

###### Abstract

Surveillance systems, autonomous vehicles, human monitoring systems, and video retrieval are just a few of the many applications in which 3D Convolutional Neural Networks are exploited. However, their extensive use is restricted by their high computational and memory requirements, especially when integrated into systems with limited resources. This study proposes a toolflow that optimises the mapping of 3D CNN models for Human Action Recognition onto FPGA devices, taking into account FPGA resources and off-chip memory characteristics. The proposed system employs Synchronous Dataflow (SDF) graphs to model the designs and introduces transformations to expand and explore the design space, resulting in high-throughput designs. A variety of 3D CNN models were evaluated using the proposed toolflow on multiple FPGA devices, demonstrating its potential to deliver competitive performance compared to earlier hand-tuned and model-specific designs.

FPGA, Toolflow, 3D CNNs, Human Action Recognition

## I Introduction

Two-dimensional CNNs have excelled in image-related tasks in recent years. The increasing importance and number of applications arising from video-related tasks, such as video surveillance, autonomous driving, and elderly monitoring, have demanded the development of algorithms that incorporate and account for the temporal domain. Three-dimensional CNNs are one of the most common approaches used to deal with video and volumetric data. With the addition of a new dimension, such as time or depth, 3D CNNs augment their capability to learn by extracting information related to the newly added dimension. 3D CNNs have exhibited outstanding performance, particularly in the task of Human Action Recognition (HAR). The use of 3D CNNs enables the interpretation of human motion across video frames, allowing the detection of a wide range of human actions without the requirement for specific time-domain approaches like LSTMs. As can be seen in Figure 1, 3D CNNs dominate the Pareto front in one of the most widely used HAR benchmarks, Kinetics-400, while the recent emergence of vision transformers has also begun to drive some designs to the Pareto front; however, such networks require orders of magnitude more GFLOPs to operate. While 3D CNNs are capable of capturing time- or depth-related features, the additional dimension of the input frequently results in greater workloads and higher computational and memory requirements compared to 2D CNNs. Numerous hardware devices, including GPUs, FPGAs, and ASICs, have been used to mitigate the 3D CNNs' high processing requirements and provide high-performing systems. The current work aims to design systems that can be deployed to FPGA devices, due to their flexibility in adapting to the requirements of such an evolving field as well as their potential for achieving high performance and low power consumption. In HAR, given a single input video clip, \(N\) new clips are generated by shifting a (fixed) time window throughout the original clip's duration, and \(M\) new clips are generated by cropping an area (for each image in the clip). The final evaluation of the original clip is acquired by passing each of the \(N\times M\) generated clips through the HAR model and averaging their predictions. As such, upon deployment of such models, it is necessary to process the input video segment multiple times to maintain the desired performance. 
Therefore, throughput-oriented designs and solutions are of high interest. The key contributions of this paper are the following:

* Introduction of fpgaHART, a throughput-oriented toolflow for optimising and mapping 3D CNNs to FPGAs, supporting a variety of models and devices, while taking into account the model characteristics, available platform resources, and memory bandwidth characteristics.
* The expansion of the SDF graph model used for capturing performance requirements in CNN mapping to streaming architectures to explicitly handle irregular blocks with branching, which are commonly utilised in modern 3D CNN HAR models.
* A comprehensive evaluation, utilising various devices and models, including cutting-edge 3D CNN HAR models that have yet to be explored. The findings lay the groundwork for the computation of HAR models on FPGAs for throughput-oriented applications.

Fig. 1: Kinetics-400 Pareto front is dominated by 3D CNNs for small numbers of parameters, demonstrating the deployability of 3D CNNs on edge devices with limited resources.

## II Background

Although 3D CNNs have been around for a while, there have only been a few papers aimed at their acceleration on FPGAs. The majority of these works focus on relatively old 3D CNNs, such as the C3D [1] model, whose performance falls short of state-of-the-art models. Fan et al. introduced a series of works on 3D CNN acceleration for HAR on FPGA systems [2, 3, 4]. In their initial work [2], they proposed the F-C3D hardware architecture for the acceleration of C3D [1], which is capable of supporting multiple 3D convolutional layers and design strategies for overcoming the challenges associated with 3D CNNs while also allowing their design to be ported to other FPGA devices. In their subsequent work [3], they proposed an analytical model and a tool for optimising the hardware architecture based on the device specification and accuracy requirements, as well as the use of block floating point (BFP) arithmetic precision to minimise accuracy loss and the need for retraining the model. In their most recent work [4], they proposed E3DNet, an efficient 3D CNN based on their proposed 3D-1 bottleneck building block. Their hardware implementation of E3DNet, named F-E3D, is capable of real-time performance at the execution time of 35.3 ms per clip¹, while achieving an accuracy of 85.1%² on the UCF101 benchmark.

¹ A clip is defined as a stacked sequence of frames that are meant to be the input of the 3D CNN.

² H. Duan et al. [5] currently hold the SoA result on UCF101, achieving 98.6% accuracy.

Liu et al. [6] proposed a unified hardware architecture for 2D and 3D CNN acceleration based on the observation that the computing patterns of 2D and 3D CNNs are similar. They convert CNN convolutions to matrix multiplication operations, paying close attention to memory optimisations in order to overcome the difficulties of feature map replications. Additionally, they employed an analytical model to configure the accelerators for optimal resource use. They have targeted and evaluated their design on the C3D model. Shen et al. [7] followed a similar approach, developing a unified template-based architecture based on the Winograd algorithm capable of handling both 2D and 3D CNNs. Additionally, they developed an analytical technique for efficiently exploring the design space for mapping 2D and 3D CNNs on FPGA accelerators. The authors have targeted the C3D model for the evaluation of their proposed design. Sun et al. 
[8] used a blockwise pruning approach to apply weight pruning to two distinct 3D CNN architectures, namely C3D and R(2+1) [9]. Their hardware design, which is based on the Alternating Direction Method of Multipliers (ADMM), together with the suggested pruning approach, enables the acceleration of 3D CNNs with low accuracy loss compared to the unpruned version. Toupas et al. [10] recently proposed a throughput-oriented hardware design for X3D, a modern and state-of-the-art 3D CNN, with an emphasis on automating model branches management. Additionally, they have recently introduced a toolflow named HARFLOW3D [11] that simplifies the mapping and optimisation of 3D CNN models on FPGA devices, delivering promising results on latency-focused applications. The majority of research has been focused on the C3D [1] model for HAR, which was introduced in 2013. The model's architecture is rather simplistic, consisting of only sixteen consecutive layers, and it performs poorly in terms of accuracy when compared to the modern SoA models in HAR (\(85.2\)% in UCF101 compared to \(98.6\)% which is the current SoA). In terms of design complexity, it is comparable to the LeNet or AlexNet in the three-dimensional space. Due to the fact that the aforementioned approaches are essentially dedicated to the design of the target model, it is unclear how they may be extended, evaluated or perform in the more complicated architectures of modern state-of-the-art HAR models. This study focuses on supporting more recent 3D CNNs as well, which have a significantly larger number of layers and deviate from the sequential approach of early networks by containing branching within Resnet-like blocks. ## III Hardware-Level Interpretation This section discusses hardware-level 3D CNN model interpretation and modelling. The work is inspired by fpgaConvNet [12], a framework that automatically maps 2D CNN models to FPGA platforms, and extends it in significant ways, as outlined below. The proposed framework extracts the parameters of each layer in a Directed Acyclic Graph (DAG) and the connections between layers from a high-level description of a 3D CNN model. The network's supported layers are mapped to parametrisable hardware building blocks that implement their functionality. Subsequently, the framework generates the network's Synchronous Data-Flow Graph (SDFG) by mapping the DAG nodes to their hardware equivalent blocks and adding them as nodes and arcs in the SDFG. Finally, using the SDF computation model, a network configuration's SDFG node performance is estimated. The sections below describe the proposed tool's components. ### _3D CNN layers as DAG nodes_ The description of a neural network model supplied by high-level frameworks such as pytorch and onnx is comprised by three main parts. First, the layers and their connections that define the model's structure and flow. Second, each layer's special attributes and configuration, and finally the actual values of the learnable parameters associated with their layers (if any). A dedicated model parser is developed, parsing the above descriptions to build a DAG containing all of the relevant information of the neural network. The DAG structure is faithful to the original, retaining just the essential information from the layers' specific attributes and configuration. Additionally, the parser stores the model parameters/weights for future use during inference. 
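To make the DAG representation more concrete, the following minimal Python sketch shows one possible way a parsed 3D convolutional layer could be stored as a DAG node. The class and field names are hypothetical — they simply mirror the parameter symbols summarised in Table I — and are not the tool's actual data structures.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Conv3DNode:
    """Illustrative DAG-node record <Sz_i, Sz_o, K, St, Pd, Gp> for a 3D conv/pooling layer.
    The (channels, depth, height, width) layout of Sz_i / Sz_o is an assumption."""
    name: str
    size_in: Tuple[int, ...]                               # Sz_i
    size_out: Tuple[int, ...]                              # Sz_o
    kernel: Tuple[int, int, int]                           # K  = (K_h, K_w, K_d)
    stride: Tuple[int, int, int]                           # St = (St_h, St_w, St_d)
    padding: Tuple[int, int, int]                          # Pd = (Pd_h, Pd_w, Pd_d)
    groups: int = 1                                        # Gp
    predecessors: List[str] = field(default_factory=list)  # incoming arcs of the DAG

# A toy two-node DAG (spatial 3x3x1 conv followed by a temporal 1x1x3 conv),
# as a parser over a high-level model description might produce it.
dag: Dict[str, Conv3DNode] = {
    "conv1": Conv3DNode("conv1", (3, 16, 112, 112), (24, 16, 56, 56),
                        kernel=(3, 3, 1), stride=(2, 2, 1), padding=(1, 1, 0)),
    "conv2": Conv3DNode("conv2", (24, 16, 56, 56), (24, 16, 56, 56),
                        kernel=(1, 1, 3), stride=(1, 1, 1), padding=(0, 0, 1),
                        predecessors=["conv1"]),
}
print(dag["conv2"])
```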
Table I summarises the symbols used to denote the parameters of DAG nodes that represent and characterise the layers of the models. The following data structures are utilised by the tool to capture the layers of the 3D CNN models: * **3D Convolutional and Pooling Layers** The following types of convolutional/pooling layers are supported: (a) spatial convolution/pooling \(K_{h}\times K_{w}\times 1\), (b) temporal convolution/pooling \(1\times 1\times K_{d}\), (c) depth-wise convolution/pooling, (d) point-wise convolution/pooling. The configuration of the layer as stored in a DAG node is as follows, \(\boldsymbol{<Sz_{i},Sz_{o},K,St,Pd,Gp>}\), where: * **K** is a 3-value vector \([K_{h},K_{w},K_{d}]\) specifying the height, width and depth of the 3D conv window. * **St** is a 3-value vector \([St_{h},St_{w},St_{d}]\) specifying the strides of the convolution along each dimension. * **Pd** is a 3-value vector \([Pd_{h},Pd_{w},Pd_{d}]\) denoting the amount of padding applied to each dimension. * **3D Activation Layers** The activation functions supported are the following: (a) ReLU activation, (b) Sigmoid activation, (c) Swish activation, which is expressed as \(y=x*sigmoid(x)\), with its DAG layer structure being \(\boldsymbol{<Sz_{i},Sz_{o},T>}\). * **3D Element-wise Layers** Element-wise operations are layers that combine (add, mul) data from several branches. These layers combine several inputs into a single output, where the shapes of the inputs may or may not be identical, resulting in different functionality (normal vs broadcasting). The layer's configuration as a DAG node is \(\boldsymbol{<Sz_{i1},...,Sz_{iN},Sz_{o},T,M>}\). * **3D Global Average Pooling Layer** While the standard pooling operation samples patches of the input feature map to decrease its size, GAP samples the whole feature map into a single value, creating an output vector with the same size as the number of channels. DAG's layer configuration: \(\boldsymbol{<Sz_{i},Sz_{o}>}\). ### _SDFG representation with branch support_ To take advantage of the SDF model's capabilities, the tool maps DAG nodes into their associated hardware building blocks, which implement the functionality of each layer in the underlying hardware. Using SDF theory, the SDFG may be represented as a topology matrix \(\Gamma\). The nodes are represented by the columns of this matrix, while the arcs that link the nodes are represented by the rows. The data consumption/production rates for each node in each arc can be inferred by looking at the element at the \((node,arc)\) position in the \(\Gamma\) matrix. Positive values, by convention, denote data production, whereas negative ones denote data consumption. The element \(\Gamma(n,a)=-1\), for example, indicates that node \(n\) consumes data at arc \(a\) at a rate of one. The \(\Gamma\) matrix is decomposed into several matrices (as shown in Eq. 1), allowing a more in-depth examination of each one separately and finer control overall. The initial decomposition of the \(\Gamma\) matrix yields three distinct matrices: 1. The stream matrix **S**. The elements of this matrix store the number of incoming and outgoing parallel streams at each node's input and output. 2. The rate matrix **R**. The rate matrix elements include the normalised data production and consumption rates of each node at each arc (number of elements produced/consumed per cycle). The values in this matrix range from 0 to 1. 3. The data matrix **C**. The width of each individual stream from the \(S\) matrix is stored in this matrix's elements.
Since all of the streams are assumed to have the same bit width of 16, the above matrix is not taken into consideration in this study. \[\Gamma=S\times R\] (1) The upper bi-diagonal structure of the \(\Gamma\) matrix prevents the modelling of branching behaviours, i.e. graphs with nodes receiving multiple incoming arcs and nodes with many outgoing arcs. This work proposes and implements modifications to the SDFG structure to ease the building of graphs with several incoming or outgoing arcs at nodes, hence supporting branching models without the need to explicitly define them with static predefined layers. The depth of each side of a branch is computed to incorporate some extra buffering for the streams that are combined at the merge points in order to ensure the flow of data across the design's streams as well as to equalise the rates at the merge points. ### _3D CNN layers as hardware building blocks_ The hardware building blocks are the major components utilised to construct the SDFG, which will be used subsequently to estimate the network's performance. The configuration of these blocks, in conjunction with the network's topology, is utilised to automatically generate and construct the design's synthesisable Vitis HLS code. The representation of the supported hardware building blocks comprises the following: \begin{table} \begin{tabular}{l l} \hline \hline **Symbols** & **Definitions** \\ \hline \(\boldsymbol{Sz_{i}}\) & size dimensions of the input feature map \\ \(\boldsymbol{Sz_{o}}\) & size dimensions of the output feature map \\ \(K_{h},K_{w},K_{d}\) & height, width and depth of convolution kernel \\ \(St_{h},St_{w},St_{d}\) & stride value on height, width and depth dimensions \\ \(Pd_{h},Pd_{w},Pd_{d}\) & padding value on height, width and depth dimensions \\ \(Gp\) & number of groups in which the input is split \\ & along the channel axis on convolution layers \\ \(T\) & type of activation or element-wise function \\ \(M\) & mode of element-wise operation (normal/broadcasting) \\ \hline \hline \end{tabular} \end{table} TABLE I: DAG nodes parameters symbols. 1. **DAG parameters**. A set of parameters that originate from the layer's settings as a DAG node. These settings are the layer's structural configuration that cannot be changed. 2. **SDFG parameters**. An additional set of parameters which have an impact on the layer's performance and are the ones that the optimisation algorithm searches for during the design space exploration phase. The hardware building block representation of the 3D CNN layers is described below: * **3D Convolutional/Pooling Layers:** \[<DAG_{params},s_{i},s_{o},r_{i},r_{o},p_{mac}>\] The \(s_{i}\), \(s_{o}\), and \(p_{mac}\) are altered during the DSE and affect the final performance of the layer. Meanwhile, the \(r_{i}\) and \(r_{o}\) depend on \(p_{mac}\), which means they are implicitly altered during the DSE as well. A more detailed analysis of the convolution layer and its sub-modules is provided in fpgaConvNet [12]. * **3D Activation, 3D Global Average Pooling Layers:** \[<DAG_{params},s_{i},s_{o},r_{i},r_{o}>\] These layers' \(r_{i}\) and \(r_{o}\) can achieve consumption/production rates of 1 (if not constrained by previous layers or the memory rates), due to their element-wise functionality and the simplicity of their operations. The only exception here is the 3D Global Average Pooling, in which \(r_{o}=\frac{1}{H\times W\times D}\), where \(H\) is the height, \(W\) is the width, and \(D\) is the depth dimension of the input feature map.
* **3D Element-wise Layers:** \[<DAG_{params},s_{i1},s_{i2},s_{o},r_{i1},r_{i2},r_{o}>\] This layer's \(r_{i1}\), \(r_{i2}\) and \(r_{o}\) can achieve consumption/production rates of 1 (if not constrained by previous layers or the memory rates), due to their element-wise functionality and the simplicity of their operations. It should be noted that in cases when the rates of either of the inputs are restricted, owing to a lower production rate of a previous layer or due to memory constraints, the layer's input rates are equalised to the lower consumption rate among them. ## IV Design Space Exploration The hardware mapping of the SDFG assumes a final streaming architecture to be inferred. Each design point in the design space has a specific combination of the involved layers' tunable parameters as described in Section III-C. Essentially, using a set of transformations operating on the SDFG, the aforementioned parameters are altered while the design space is explored by simulated annealing, the heuristic optimisation algorithm used in this study. ### _3D CNN Model Partitioning_ CNN hardware architecture design incorporates two distinct approaches. Single computation engines implement a time-shared processing unit and a scheduler, while streaming architectures like the one presented employ a hardware block for each CNN layer to better exploit per-layer parallelism. When trying to fit all the layers into a single design without reconfiguring the FPGA, the more layers a CNN has or the larger the input, the more FPGA resources are used, limiting each layer's parallelism. Through utilising the FPGA's reconfiguration capabilities, network execution can be split into smaller partitions to solve this problem. By producing a unique architecture and delivering a bitstream for each partition, this approach allows the design of more finely tuned architectures that better fit each layer. It also drastically reduces off-chip memory access to only the design's input and output streams, allowing on-chip memory to be used for data reuse. This strategy requires reconfiguration every time a new partition is loaded; however, increasing the batch size can amortise this cost. Beginning with a random partitioning of the model's layers by introducing \(L\) initial reconfiguration points, the optimisation process gradually modifies these partitions. The alterations to the partitions are focused on two key concepts: * The optimiser detects partitions that limit the performance of the model because they are memory constrained, have fully exploited the parallelism of their layers, or do not have sufficient resources to exploit enough parallelism from their layers. * Out of the candidate partitions to be modified, the optimiser selects the partitions with the lowest performance and moves layers from or to adjacent partitions with the goal of improving their performance. Between stages that modify existing partitions, the optimiser independently executes a series of partition-specific optimisation steps based on coarse and fine transformations as detailed below. ### _Partition-Specific Optimisations_ Each partition layer's configurable parameters leverage its parallelism based on two factors: * The number of parallel executions of coarse operations in each layer, which depends on the input feature map's channels. The primary operations of each layer can be performed in parallel by deploying multiple processing blocks up to the number of channels.
3D convolutional layers can exploit coarse-level parallelism across both input channels and output filters. The \(s_{i}\) and \(s_{o}\) parameters of the hardware building block configuration are updated and searched for optimal values throughout the DSE to realise this parallelism. These variables affect the stream matrix **S**, which affects the topology matrix \(\Gamma\), which determines design performance. * The dot product operation's parallelism during the kernel's convolution with a given input volume piece on 3D convolutional layers. This parallelism determines the number of parallel multipliers and the depth of the adder tree for additions. A completely unrolled design uses \(N\) multipliers and \(N-1\) adders, yielding 1 dot product per cycle, but restricting the setup to a single multiplier and adder yields \(1/N\) dot products per cycle, where \(N\) is the size of the kernel window (and of the corresponding input patch). There is a trade-off between performance and resource utilisation. The DSE optimises the \(p_{mac}\) parameters of the 3D convolutional layer hardware building block configuration to accomplish this parallelism. These variables affect the rate matrix **R**, which affects the topology matrix \(\Gamma\), which estimates design performance. \begin{table} \begin{tabular}{l l} \hline \hline **Symbols** & **Definitions** \\ \hline \(s_{i}\) & number of streams at the layer's input channels \\ \(s_{o}\) & number of streams at the layer's output filters \\ \(r_{i}\) & consumption rate of the layer \\ \(r_{o}\) & production rate of the layer \\ \(p_{mac}\) & number of parallel multiply and accumulate (MAC) \\ & operations in a convolution layer \\ \hline \hline \end{tabular} \end{table} TABLE II: SDFG nodes parameters symbols. ### _Performance Modelling_ To describe the performance of a given design based on the topology matrix \(\Gamma\), an additional matrix reflecting the workload of each layer is included. As the topology matrix provides the throughput of each layer at its input in consumptions/cycle and output in productions/cycle, constructing a matrix with the total workload of each layer, i.e. the total number of elements to be consumed and produced, allows the generation of a new matrix that provides the number of cycles each layer requires to consume its workload. More specifically, a workload matrix **W** has the same structure as the topology matrix \(\Gamma\). By element-wise dividing the **W** matrix with the \(\Gamma\) matrix, the final \(II\) matrix is calculated as shown below: \[II=W/\Gamma \tag{2}\] \(II\) is the initiation interval matrix, and its entries represent the total number of cycles required by each layer to consume its workload completely. The maximum value of the \(II\) matrix, denoted by \(II_{max}\), determines the initiation interval of the whole SDFG. The total execution time of a partition with batch size B is given by the following equation: \[\mathrm{t}(\mathrm{B},\Gamma)=\frac{1}{\text{clock rate}}\cdot(D+II_{max} \cdot(B-1)) \tag{3}\] where \(D\) is the total number of cycles needed to fill the pipeline depth of the whole design, and it is calculated by adding the depths of each layer and the depth added due to the extra buffering to deal with the branches in the design.
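To make Eqs. (2) and (3) concrete, the short sketch below evaluates the initiation-interval matrix and the resulting partition execution time for a toy two-node, three-arc graph. All numbers (rates, workloads, pipeline depth) are illustrative assumptions rather than values from the paper, and the magnitude of the signed rates is used so that cycle counts stay positive.

```python
# Sketch of the SDF performance model of Eqs. (2)-(3): II = W / Gamma element-wise,
# II_max bounds the whole graph, and t(B) = (D + II_max * (B - 1)) / clock_rate.
import numpy as np

def partition_exec_time(W, Gamma, pipeline_depth, batch, clock_hz):
    """Estimated execution time (seconds) of one partition."""
    with np.errstate(divide="ignore", invalid="ignore"):
        # Use the magnitude of the signed production/consumption rates.
        II = np.where(Gamma != 0, W / np.abs(Gamma), 0.0)  # cycles per (arc, node) entry
    II_max = II.max()                                      # slowest entry sets the interval
    cycles = pipeline_depth + II_max * (batch - 1)
    return cycles / clock_hz

# Toy topology: 3 arcs (rows) x 2 nodes (columns); rates in elements/cycle.
Gamma = np.array([[ 1.0,  0.0],
                  [-0.5,  0.5],
                  [ 0.0, -1.0]])
# Matching workloads in elements produced/consumed on each arc by each node.
W = np.array([[16384.0,     0.0],
              [16384.0,  8192.0],
              [    0.0,  8192.0]])

t = partition_exec_time(W, Gamma, pipeline_depth=2048, batch=100, clock_hz=160e6)
print(f"estimated partition time: {t * 1e3:.3f} ms")
```

In this toy case the 0.5-rate entry dominates, so growing the batch size makes the design's throughput converge towards that bottleneck rate, which is exactly the quantity the DSE tries to improve by reallocating parallelism.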
In order to capture the model's overall execution time, the execution times of each individual partition are summed up with the addition of the total reconfiguration time: \[\mathrm{t}_{\mathrm{total}}(\mathrm{B},\Gamma)=\sum_{n=1}^{N_{p}}t_{n}(B, \Gamma_{n})+(N_{p}-1)\cdot t_{reconfig} \tag{4}\] where \(N_{p}\) is the total number of the partitions of the model, and \(t_{reconfig}\) is the reconfiguration time for loading a partition to the FPGA. As can be noticed from Eq. 4, the extra overhead caused by the device reconfiguration is proportional to the number of partitions of the final solution and is independent of the batch size. By increasing the number of batches processed by the model, the first term dominates the execution time and the cost of reconfiguration is amortised. Finally, the overall throughput of the proposed architecture is inferred by dividing the total workload of the model in GOps (Giga Operations) times the batch size, with the total execution time: \[\mathrm{Throughput}(\mathrm{B})=\frac{Workload_{model}*B}{t_{total}(B,\Gamma)} \tag{5}\] The design space exploration for each partition is described as an optimisation problem with the following objective: \(\min(t(B,\Gamma))\), s.t. \(rsc(\Gamma)\leq rsc_{avail}\). As this is a non-convex optimisation problem, its optimisation is based on the simulated annealing heuristic algorithm, which attempts to maximise the design's throughput while ensuring that FPGA resource use does not exceed the available resources. ## V Evaluation To evaluate the performance of the tool, four state-of-the-art 3D CNN HAR models have been selected, namely Slowonly, R(2+1)D-18, R(2+1)D-34, and X3D (as shown in Table III), alongside two FPGA platforms, ZCU104 and ZCU102, to demonstrate the ability of the tool to target multiple 3D CNNs with different workloads and network parameters on a variety of platforms. The C3D model was also included to provide direct comparisons with existing works, the majority of which are hand-tuned, model-specific architectures and not toolflows. Vitis HLS and Vivado Design Suite (v21.2) were used, while the reported resource results are after place and route at 160 MHz clock frequency. The arithmetic precision used was 16-bit fixed point arithmetic with \(Q8.8\) format. The accuracy of the HAR models is evaluated on UCF-101, following the same strategy as prior studies [9, 13]. ### _Modeling Accuracy Evaluation_ To evaluate the quality of the performance predictor, a series of experiments was conducted. Four partitions were chosen to cover the variety of produced graph structures (i.e. branch, sequential, multi-input, multi-output). The relative error was used to measure the difference between the predicted and actual latency. The relative errors for the four aforementioned types are 12.89%, 5.03%, 11.92%, and 17.32% respectively, giving a geometric mean relative error of 10.75%3. We found \begin{table} \begin{tabular}{l c c c c c} \hline & C3D & Slowonly & R(2+1)D-18 & R(2+1)D-34 & X3D \\ \hline FLOPs (G\({}^{\dagger}\)) & 38.61 & 54.81 & 8.52 & 12.91 & 6.97 \\ Parameters (M) & 78.41 & 32.51 & 33.41 & 6.372 & 3.82 \\ Num. of Layers & 27 & 174 & 82 & 154 & 396 \\ Num. of Conv Layers & 8 & 53 & 37 & 1 & 69 & 115 \\ Spatial dimensions & \(112\times 112\) & \(256\times 256\) & \(112\times 112\) & \(112\times 112\) & \(256\times 256\) \\ Num.
of Frames & 16 & 16 & 8 & 16 & 16 & 16 & 16 \\ X/C101 & \(\uparrow\) & \(\uparrow\) & \(\uparrow\) & 88.66 & 1 & 92.27 & 1 & 96.52 \\ \hline \hline \end{tabular} \({}^{\dagger}\)FLOPs are reported as MAC operations. \end{table} TABLE III: 3D CNN models characteristics that the above errors are small enough to lead to meaningful design space exploration. ### _Performance Comparison_ The fpgaHART has been evaluated on a number of different FPGA platforms, such as the ZC706, the ZCU102, the VC706, and the VUS440. Figure 2 displays the performance in GOPs/s (with a favourable batch size of 100) of the fpgaHART-generated designs for the 3D CNN models of Table III, which details their unique characteristics, on a variety of FPGA devices. Such batch sizes are frequently encountered in practise when generating multiple views and clips over time and averaging them to improve the performance of the predictions. Even larger batch sizes may be required for multi-person HAR systems that evaluate each person's actions independently, as well as for large-scale systems that simultaneously analyse several videos. The placement of fpgaHART in comparison to the rest of the existing works is outlined in Table IV, where the fpgaHART results are reported using ZCU102 as the FPGA platform. A conclusion readily apparent from Table IV is that fpgaHART is capable of delivering competitive performance on several 3D CNNs that have not been previously addressed and have a broad set of workloads and network parameters. Figure 3 presents the current state of the Pareto front expressed in terms of accuracy over throughput (clips/s), where the fpgaHART generated designs were derived targeting the VC709 FPGA platform. The results show that the fpgaHART models have pushed the Pareto front, delivering solutions with both high throughput and high accuracy, as shown in the graph. Comparing the results on C3D (batch size 30 and targeting the ZCU102) to Nvidia RTX 3090, a server-grade GPU with 10496 CUDA cores and 1.7 GHz clock speed, the proposed architecture achieves a throughput of 4.42 clips/s compared to 281.87 clips/s that the GPU delivers. Yet, the proposed solution consumes only 26 W compared to the GPU's 298.6 W (excluding the CPU power consumption that a GPU system requires), offering 0.17 clips/s/watt compared to the GPU's 0.94 clips/s/watt. ## VI Conclusion This paper proposes an automated toolflow for the deployment and mapping of 3D CNN models for HAR onto FPGA devices. The proposed method employs SDF theory to describe and map 3D CNNs to hardware architectures. We demonstrate that the tool supports a pool of 3D CNNs for HAR on a variety of FPGA devices, while exhibiting comparable throughput performance to hand-tuned techniques. Future work may involve expanding the design space with additional SDFG transformations and improving the tool to support and provide latency-driven optimisation-focused designs. \begin{table} \begin{tabular}{c c c c|c c c|c c c|c c c c} \hline \hline & H. Fan [2] & H. Fan [3] & Z. Lin [6] & J. Shen [7]\({}^{\ddagger}\) & M. Sun [8] & H. 
Fan [4] & \multicolumn{3}{c}{Ours} \\ \hline \hline Model & C3D & C3D & C3D & C3D & C3D & C3D & R(2+D-18) & E3D & C3D & Slowonly & R(2+1)-18 & R(2+1)-34 & X3D \\ GFLOPs\({}^{*}\) & 38.61 & 38.61 & 38.61 & - & 38.61 & 8.52 & 6.1 & 38.61 & 54.9 & 8.52 & 12.91 & 6.97 \\ \hline Accuracy [60] & 79.81 & 81.99 & 88.2 & 88.2 & 88.2 & 88.66 & 88.1/10 & 88.2 & 84.51 & 88.66 & 99.2/2 & 96.5/2 \\ \hline fpga & ZC706 & ZC706 & VV09 & VCT09 & VUS440 & ZCU102 & ZCU102 & Intel SK660 & ZCU102 & ZCU102 & ZCU102 & ZCU102 & ZCU102 \\ \hline all-18 & 1.84 & 2.09 & 8.65 & 11.18 & 20.36 & 20.5 & 41.11 & 28.32 & 3.38 & 2.54 & 46.2 & 2.63 & 18.44 \\ \hline Gbops\({}^{*}\) & 70.41 & 80.12 & 330.74 & 42.79 & 778 & 78.44 & 11.17 & 17.28 & 13.04 & 14.44 & 39.59 & 34.26 & 85.96 \\ Gbops/DSP\({}^{*}\) & 0.87 & 0.103 & 0.092 & 0.281 & 0.511 & 0.065 & 0.092 & 0.109 & 0.052 & 0.057 & 0.015 & 0.013 & 0.034 \\ Op/DSP\({}^{*}\)(pc) & 0.511 & 0.519 & 0.774 & 1.874 & 2.59 & 0.435 & 0.613 & 0.727 & 0.325 & 0.358 & 0.098 & 0.084 & 0.213 \\ Frequency (MHz) & 172 & 200 & 120 & 150 & 200 & 150 & 150 & 150 & 160 & 160 & 160 & 160 & 160 \\ Precision & fp-16 & BPF & fp-16 & fp-16 & fp-16 & fp-16 & fp-16 & float-32 & fp-16 & fp-16 & fp-16 & fp-16 & fp-16 \\ DSP (g) & 86.6 & 99.8 & 62 & 53 & 48 & 48 & 93.3 & 51.49 & 63.77 & 66.21 & 64.66 & 84.43 \\ BRAM (\%) & 86.6 & 88.1 & 26.6 & 52 & 30 & 100 & 100 & - & 91.49 & 78.22 & 78.09 & 84.07 & 52.71 \\ \hline \hline \end{tabular} \({}^{*}\) FLOPs are reported as MAC operations. \({}^{*}\) Framework batch size 100. \({}^{*}\) The C3D model used is different/smaller version from the original one [1]. \end{table} TABLE IV: Comparison with existing works on 3D CNN HAR models Fig. 3: Pareto front on 3D CNNs: Clips/s over Accuracy. The fpgaHART results were taken using the VC709 FPGA platform, delivering solutions on the Pareto front. Fig. 2: Throughput (GOPs/s) of fpgaHART-generated designs on 3D CNN HAR models delivering high-throughput results on a variety of FPGA devices
2309.16870
LEF: Late-to-Early Temporal Fusion for LiDAR 3D Object Detection
We propose a late-to-early recurrent feature fusion scheme for 3D object detection using temporal LiDAR point clouds. Our main motivation is fusing object-aware latent embeddings into the early stages of a 3D object detector. This feature fusion strategy enables the model to better capture the shapes and poses for challenging objects, compared with learning from raw points directly. Our method conducts late-to-early feature fusion in a recurrent manner. This is achieved by enforcing window-based attention blocks upon temporally calibrated and aligned sparse pillar tokens. Leveraging bird's eye view foreground pillar segmentation, we reduce the number of sparse history features that our model needs to fuse into its current frame by 10$\times$. We also propose a stochastic-length FrameDrop training technique, which generalizes the model to variable frame lengths at inference for improved performance without retraining. We evaluate our method on the widely adopted Waymo Open Dataset and demonstrate improvement on 3D object detection against the baseline model, especially for the challenging category of large objects.
Tong He, Pei Sun, Zhaoqi Leng, Chenxi Liu, Dragomir Anguelov, Mingxing Tan
2023-09-28T21:58:25Z
http://arxiv.org/abs/2309.16870v1
# LEF: Late-to-Early Temporal Fusion for LiDAR 3D Object Detection ###### Abstract We propose a late-to-early recurrent feature fusion scheme for 3D object detection using temporal LiDAR point clouds. Our main motivation is fusing object-aware latent embeddings into the early stages of a 3D object detector. This feature fusion strategy enables the model to better capture the shapes and poses for challenging objects, compared with learning from raw points directly. Our method conducts late-to-early feature fusion in a recurrent manner. This is achieved by enforcing window-based attention blocks upon temporally calibrated and aligned sparse pillar tokens. Leveraging bird's eye view foreground pillar segmentation, we reduce the number of sparse history features that our model needs to fuse into its current frame by 10\(\times\). We also propose a stochastic-length FrameDrop training technique, which generalizes the model to variable frame lengths at inference for improved performance without retraining. We evaluate our method on the widely adopted Waymo Open Dataset and demonstrate improvement on 3D object detection against the baseline model, especially for the challenging category of large objects. ## I Introduction The goal of LiDAR temporal fusion is aggregating learned history information to improve point clouds based tasks. The history information could be of various implicit (_e.g_. latent embeddings), explicit (_e.g_. point clouds, 3D box tracklets) representations or a mixture of both, depending on the models and tasks at hand. Temporal fusion is critical for multiple driving related tasks, such as 3D object detection, tracking, segmentation, and behavior prediction. Here we mainly study LiDAR-based fusion methods for 3D object detection, which is a crucial task for recognizing and localizing surrounding objects in modern autonomous driving systems. Point clouds of a single frame can only serve as partial observation of the scenes, lacking complete coverage of environment context and agent dynamics. This information bottleneck is caused by several factors such as object self-occlusion, occlusion by other objects, sensor field-of-view limitation, and data noises. Moreover, for moving objects, models with only single-frame data will struggle to understand their short-term states (velocities, accelerations) and long-term intentions (future trajectories). Tackling these issues demands effective ways of LiDAR temporal fusion, which can enable the model to understand scene / object attributes and dynamics from a wide time horizon. The main challenge of temporal fusion is how to represent and aggregate the long-sequence information of history frames. See Figure 1(a) for a high-level illustration and comparison. Generally speaking, previous solutions can be classified into two types. One of the most widely used methods is early-to-early fusion based point cloud stacking. Multi-frame LiDAR points are directly stacked together as model inputs, resulting in better performance than a single frame of LiDAR points. However, the performance quickly saturates when more frames are simply stacked together [1] without careful modeling of the inter-frame relationships. Moreover, each frame needs to be repeatedly processed when they are stacked into different adjacent frames, greatly increasing computation cost. Fitting long sequences will also greatly increase memory cost, reduce model efficiency or even result in out of memory (OOM) issues.
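For readers unfamiliar with this baseline, the snippet below sketches what early-to-early fusion amounts to in practice: a few history sweeps are transformed into the current vehicle frame (assuming ego-motion compensation, which is common practice but not spelled out above) and concatenated, with a per-point timestamp offset appended as an extra channel. The function and variable names are illustrative only and do not correspond to any particular detector's API.

```python
# Minimal sketch of early-to-early fusion (point cloud stacking). Assumptions:
# frames are (N_i, 3) xyz arrays in their own vehicle frames, poses are 4x4
# vehicle-to-world transforms, and the last entry is the current frame.
import numpy as np

def stack_point_clouds(frames, poses, timestamps):
    """Return a single (sum N_i, 4) array: xyz in the current frame + time offset."""
    world_to_cur = np.linalg.inv(poses[-1])
    stacked = []
    for pts, pose, ts in zip(frames, poses, timestamps):
        homo = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)   # (N, 4)
        in_cur = (world_to_cur @ pose @ homo.T).T[:, :3]               # ego-motion compensation
        dt = np.full((len(pts), 1), timestamps[-1] - ts)               # frame marker channel
        stacked.append(np.concatenate([in_cur, dt], axis=1))
    return np.concatenate(stacked, axis=0)
```

Note that with a sliding window of \(l+1\) frames, every sweep is re-transformed and re-voxelized \(l+1\) times as the window advances, which is exactly the redundancy the recurrent scheme proposed below avoids.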
Ideally, a model should leverage what it has already learned from the data, not simply stacking its raw sensory inputs. To overcome this issue, another type of fusion method turns to late-to-late fusion so as to utilize the learned history embeddings. A representative method is ConvLSTM [1] which recurrently fuses latent embeddings between consecutive frames at deep layers of the model. This approach reduces memory usage and computation cost, but its results are usually inferior to early-to-early fusion, as shown in Figure 1(b). We suspect that this is because the backbone only has access to single-frame data before late fusion happens. Fig. 1: **Comparisons of temporal fusion approaches. Our late-to-early fusion approach achieves better detection quality (_e.g_. 54.4 3D AP for the challenging large objects) than previous early-to-early and late-to-late methods.** The task of understanding temporally fused deep features falls upon the detection heads, which usually consist of low-capacity multi-layer perceptron (MLP) layers. Consequently, most state-of-the-art LiDAR 3D object detectors (_e.g_. PVRCNN++ [2, 3], CenterPoint [4], SST [5], SWFormer [6], _etc_.) still rely on early-to-early fusion with point cloud stacking. In this paper, we propose a new fusion method named **LEF**: **L**ate-to-**E**arly temporal **F**usion. We argue that this fusion scheme can leverage learned history knowledge, and in the meantime its backbone does not suffer from single-frame data deficiency issues. Long history LiDAR fusion is a fundamental block for autonomous driving, and our work opens a promising direction towards achieving that goal. There are three main contributions in our paper: * We propose a recurrent architecture that fuses late-stage sparse pillar features into early stages of the next frame. To align the underlying static objects, we propose an inverse calibration and alignment module to fuse history and current sparse sets of pillar features. As for moving objects, we leverage window-based attention layers, which can associate relevant features within the windows and thus connect pillar tokens that belong to the same object. * While point stacking struggles to cache and preprocess huge point clouds as history length grows, we leverage a bird's eye view (BEV) foreground pillar segmentation module to achieve long-sequence fusion at a low constant cost. The number of sparse voxels that our model needs to fuse at each recurrent step can be reduced by over 10\(\times\) via the foreground segmentation process. * We also propose a stochastic-length FrameDrop training recipe. It exposes the model to an augmented large motion space of pillar trajectories across time. Thus our recurrent model can capture different speed objects, and generalize to variable frame lengths during inference for improved performance. The proposed late-to-early temporal fusion scheme leads to improved 3D detection results on the widely used Waymo Open Dataset (WOD) [7] and demonstrates large gains on challenging large objects. We also conduct extensive ablation studies on various design choices made in our method, providing several interesting insights. ## II Related Work **3D Object Detection**. LiDAR-based 3D object detection plays an essential role in autonomous driving. Early efforts of research such as PointRCNN [8] usually operate on raw 3D point clouds through PointNet(++) [9, 10, 11]. But they struggle to generalize to large-scale data, such as long-sequence fused LiDAR [7] with millions of points.
Heavily relying on MLP-based backbones, these detectors are soon outperformed by models with more advanced architectures like submanifold sparse convolution [12] or Transformers [13, 14, 15]. By voxelizing free-shape point sets into regular 2D1 or 3D-shape voxels, LiDAR-based detectors [16, 17, 18] can leverage numerous advancements on image 2D object detection, and start to demonstrate promising 3D detection results. Particularly, CenterPoint [4] utilizes sparse convolution layers and CenterNet-based detection heads [19] to predict 3D boxes. Some recent works, such as SST [20] and SWFormer [6], exploit Swin-Transformer [21] and push the detection performance to a new state of the art. Meanwhile, several methods [2, 3, 22, 23, 24, 25, 26, 27, 28, 29, 30] look into alternative LiDAR representations and strive towards a balance between detection efficiency and efficacy. Footnote 1: 2D-shape voxels are often referred to as pillars. **LiDAR Temporal Fusion**. Compared with the rapid progresses achieved on 3D detection backbones, approaches of LiDAR temporal fusion are less well-studied. Point clouds of a single frame in WOD [7] have already caused huge computation burden (_i_.\(e\)., \(\sim\)200\(k\) points), let alone long history sequences. As briefly discussed in the introduction section, LiDAR temporal fusion solutions can be generally classified into three types: early-to-early, late-to-late and late-to-early fusion. Early-to-early fusion is also referred to as point cloud stacking. It is most widely adopted in recent LiDAR object detectors (_e_.\(g\). CenterPoint [4], RSN [22], SWFormer [6], _etc_.) due to its simple setup. Multi-frame point sets are merged together. Timestamp offsets w.r.t. to the current frame are appended to sensory signals of each 3D point to serve as markers indicating different frame sources. However, point stacking struggles to work on long sequences due to the cost of fusing, saving and jointly preprocessing millions of points. It is also possible to use a Transformer to early fuse point clouds from different frames [31]. While early-to-early fusion simply stacks raw sensory inputs without carefully modeling inter-frame relationships and ignores knowledge learned from prior frames, late-to-late fusion tries to tackle these issues by ConvLSTM [1, 32]. It recurrently fuses sparse latent embeddings between deep layers of the backbone with improved efficiency than point stacking, but the results are often not as competitive as early-to-early fusion. This is presumably because its backbone can only utilize single-frame data until fusion happens at deep layers. 3D-MAN [33] may also be viewed as a form of late-to-late fusion, because the temporal fusion in this method is done through various kinds of cross-attention between box proposals and features in the memory bank, which are both after the backbone of its network. FaF [34] studied both early fusion and late fusion. To the best of our knowledge, late-to-early fusion has not been explored before in LiDAR detectors. A similar fusion framework is studied in [35] but targeting on camera-based detection. It faces very different challenges from our problems. We need to process sparsely distributed 3D data at wide ranges, which requires dedicated designs for sparse features alignment, fusion and also new training recipes. Finally, we note that our review so far concentrates on a single-stage trainable model that internalizes the temporal fusion schemes. 
It is also possible to follow up the box predictions with a second-stage offline refinement, using the terminology from a recent exemplar of this two-stage approach, MPPNet [36]. MPPNet runs a pre-trained Center-Point [4] on 4-frame stacked LiDAR point clouds to generate anchor boxes, which will then be tracked and aggregated across long sequences. Specifically, latent embeddings or raw points within the box regions of one frame will be cropped and intertwined with those extracted from other frames in order to refine the box states. The key differentiating factor about the two-stage approach is that the two stages / models are trained separately [36], suggesting that the improvement inherently built into the first stage, like ours, is complementary to the second-stage innovation. ## III Method ### _Problem Statement_ We use \(\{P_{i}\}\), \(i=1,...,T\) to represent a consecutive sequence of LiDAR point clouds with \(P_{i}:\{X_{i,j}\in\mathbb{R}^{3}\}\), \(j=1,...,N_{i}\). Our goal is to detect 3D object boxes \(\{B_{i,m}\}\), \(m=1,...,M_{i}\) for each frame-\(t\) using \(\{P_{i}\mid i\leqslant t\}\). Ideally the model should be capable of fusing history information \(F(P_{1},...,P_{t})\) up to the current timestamp-\(t\), where \(F(\cdot)\) denotes the fusion function. LiDAR temporal fusion is known to be an open challenge due to the sparse and wide-range spatial distribution of point clouds, let alone diverse object dynamics. Currently early-to-early fusion (_i.e_., point stacking) is most widely used \(P_{t-l}\cup...\cup P_{t}\), which is easy to implement. However, due to memory constraint the sequence length is usually small, e.g. \(l\in\{2,3\}\). Moreover, point clouds \(\{X_{i,j}\}\) of one frame have to be repeatedly processed for \((l+1)\) times when we conduct model inference on adjacent frames, causing huge waste of computation. As for detection performance, whether directly stacking the raw sensory inputs without reusing learned history knowledge can lead to the optimal results also remains questionable. ### _Recurrent Late-to-Early Fusion_ To address the aforementioned issues, we propose a recurrent late-to-early temporal fusion strategy. As shown in Figure 2, the fusion pipeline works like a "Markov chain", which can accumulate history information from long sequences and reduce redundant computation. Thus, the fusion function \(F(\cdot)\) can be iteratively defined as: \[f_{i}=\psi(h(f_{i-1}\oplus\tau(t_{i}-t_{i-1}),\nu(\{X_{i,j}\}))) \tag{1}\] where \(f_{i-1}\) indicates history deep-layer voxel embeddings, and \(\tau(\cdot)\) is a Sinusoidal function for encoding the timestamp offset. \(\nu(\cdot)\) represents VoxelNet [18] used to obtain pillar features from point clouds. \(h(\cdot)\) is the backbone for recurrent fusion and multi-scale sparse pillar features extraction, and \(\psi(\cdot)\) is the foreground segmentation module. **History features**. Particularly, we use the latent features of segmented foreground pillars as \(f_{i-1}\) and pass them into the next timestamp. Without loss of generality, we use SWFormer [6] as our backbone and center-based detection heads [4] as examples in our following discussion if needed. The diagram is plotted in Figure 2. The model works on sparse pillar tokens and thus the segmentation outputs can be written as \(f_{i-1}:\{V_{i-1,k}\in\mathbb{R}^{2+d}\}\), \(k=1,...,K_{i-1}\). 
The first two dimensions record BEV coordinates of the pillars and the rest are extracted embeddings (_i.e_., \(d=128\)), which contain rich scene and object-aware information. Moreover, compared with the raw point cloud size \(N_{i-1}\) (\(\sim\)200\(k\)), the foreground pillar feature set size \(K_{i-1}\) (\(\sim\)2\(k\)) is much smaller. Therefore, we are motivated to fuse these deep-layer features into early stages of the next frame in order to efficiently reuse learned high-level knowledge for 3D detection, especially on challenging large objects. **Fusion location**. To achieve recurrent late-to-_early_ fusion, we fuse \(f_{i-1}\) with VoxelNet [18] outputs \(\nu(\{X_{i,j}\})\mapsto\{V^{{}^{\prime}}_{i,n}\in\mathbb{R}^{2+d}\}\), \(n=1,...,N^{{}^{\prime}}_{i}\) before they are fed into the main backbone network. Meanwhile, instead of early fusion before the backbone, some may argue that an alternative way is conducting late fusion after the backbone process, which is close to the network stage where \(f_{i-1}\) is extracted. Diagrams of these two different fusion locations are plotted in Figure 1. We think that presumably late fusion can cause the backbone B to lose access to temporally aggregated LiDAR sequence information, and thus the low-capacity detection heads H will struggle to understand fused features and predict object poses and shapes. Ablation studies on early-to-early, late-to-late and our proposed late-to-early fusion methods are provided in Table IV and Section IV-C, which empirically prove the advantages of our approach. ### _Inverse Calibration and Alignment_ While image sequences are naturally aligned across different frames by the shapes (height, width, channel), sparse sets of pillar features \(\{V_{i-1,k}\}\), \(\{V^{{}^{\prime}}_{i,n}\}\) are neither aligned nor with the same cardinality (_i.e_., \(K_{i-1}\neq N^{{}^{\prime}}_{i}\)). Intuitively one could convert sparse features into dense BEV maps \(\{V_{i-1,k}\}\mapsto I_{i-1}\in\mathbb{R}^{H\times W\times d}\), \(\{V^{{}^{\prime}}_{i,n}\}\mapsto I^{{}^{\prime}}_{i}\in\mathbb{R}^{H\times W \times d}\) and then align them. However, as Figure 2 shows, directly doing so without proper calibration can result in misalignment between underlying objects of the scene. This is because pillar features extracted by the backbones are from their corresponding local vehicle coordinates with poses of \(g_{i-1}\in\mathbb{R}^{4\times 4}\), \(g_{i}\in\mathbb{R}^{4\times 4}\). To alleviate this misalignment issue, we need to calibrate the history BEV maps \(I_{i-1}\). \[I_{i-1}\circ g_{i-1}^{-1}\circ g_{i}\mapsto\tilde{I}_{i-1} \tag{2}\] where \(\circ\) means applying the vehicle coordinate transformation and \(\tilde{I}_{i-1}\) represents the calibrated BEV maps. However, in practice if we apply forward calibration upon \(I_{i-1}\) we might get more than one pillar that falls into the same discrete coordinates within \(\tilde{I}_{i-1}\). To address this issue, we conduct inverse transformation from \(\tilde{I}_{i-1}\) to \(I_{i-1}\) and sample the history BEV features. We use zero padding to fill in the pillar features of empty samples and also for out-of-view locations, _e.g_. red cross markers in Figure 2. The inversely calibrated history maps now can be aligned with current maps by feature concatenation \(\tilde{I}_{i-1}\oplus I^{{}^{\prime}}_{i}\mapsto J_{i}\in\mathbb{R}^{H\times W\times 2d}\).
Next, we apply an MLP on \(J_{i}\) for dimension reduction (_i.e_., \(2d\mapsto d\)) and get the temporally aligned pillar features \(J_{i}^{{}^{\prime}}\). Note that not all the coordinates within \(J_{i}^{{}^{\prime}}\) have valid features. We use the union BEV boolean mask \(O_{i}\in\mathbb{R}^{H\times W}\) obtained from the current and calibrated history BEV features to mark valid coordinates of \(J_{i}^{{}^{\prime}}\). Thus, we do not lose the data sparsity. ### _Window-based Attention Fusion_ Pillars of the static objects are effectively aligned after the prior steps, but the moving ones are still facing the misalignment issue. One solution is to apply flow estimation to further calibrate the history BEV features \(\tilde{I}_{i-1}\) before temporal alignment with \(I_{i}^{{}^{\prime}}\). But that requires adding additional occupancy flow models, losses and feature coordinates transformation, which might greatly increase the computation overhead of the 3D object detector. Therefore, we propose to learn such association implicitly from the data by window-based attention blocks. We sparsify the dense BEV feature map \(J_{i}^{{}^{\prime}}\) and its boolean mask \(O_{i}\) into a sparse set of pillar tokens \(\{V_{i,u}^{{}^{\prime}}\}\), \(u=1,...,U_{i}\). Usually we have \(U_{i}\geqslant N_{i}^{{}^{\prime}}\), because the cardinality \(U_{i}\) is the number of fused pillars after temporal alignment between the history and current features through the steps in Section III-C. While \(\{V_{i,u}^{{}^{\prime}}\}\) is used as the query tensor for the attention blocks, we can make different choices when determining the key and value tensors: using \(\{V_{i,u}^{{}^{\prime}}\}\) again or the sparsified set of history pillar tokens in (2): \(\tilde{I}_{i-1}\mapsto\{\tilde{V}_{i-1,c}\}\), \(c=1,...,\tilde{K}_{i-1}\). Most often, \(\tilde{K}_{i-1}\leqslant K_{i-1}\) due to out-of-view truncation after vehicle coordinates calibration. The resulting variants are: self / cross / mix-attention. In self-attention the key and value tensors are the same as the query. Cross-attention uses \(\{\tilde{V}_{i-1,c}\}\) as key and value, and mix-attention uses the union set of the prior two attention variants. We apply a Sinusoidal-function-based absolute positional encoding to inform the attention blocks of the sparse pillar coordinates within a window. Detailed ablation studies on different attention designs are provided in Section IV-C. With window-based attention fusion, features of both static and moving pillars can now be associated and fused before being passed into the main backbone network. ### _Stochastic-Length FrameDrop_ To enable robust training upon long sequences, we randomly drop history frames from \((P_{1},...,P_{t})\) during each training iteration. In other words, we randomly sample \(S_{i}\) history frames, with \(S_{i}\) being a stochastic number at different training steps and the sampled frames not necessarily being adjacent ones. In comparison, the previous LiDAR temporal fusion methods usually fix \(S_{i}\) to be a constant (_e.g_. 3 or 4) and sample consecutive frames. We apply stop gradient between each recurrent pass when fusing deep-layer history features into early layers of the next frame, without which long-sequence training of 3D object detectors can easily get intractable or run into OOM. During training, the model only predicts 3D boxes \(\{\hat{B}_{i,m}\}\) in the last forward pass. Fig. 2: **Detection pipeline with our proposed LEF**. In each forward pass, the early-stage pillar encoding will be aligned and fused with the history late-stage foreground pillar features \(f_{i-1}\). The alignment is achieved by an inverse calibration and alignment process (Section III-C) that enables pillar features of the underlying static objects to be matched. To effectively associate moving object features, we further use window-based attention blocks (Section III-D) to connect relevant pillars. Outputs from the attention fusion layers will then be fed into the main backbone network (_e.g_. SWFormer [6]), followed by a foreground pillar segmentation layer and the final detection head [4] for 3D bounding box predictions. Losses are enforced upon certain intermediate outputs (_e.g_. foreground pillar segmentation) and the final box parameter predictions (_e.g_. shapes and poses). \[L=\lambda_{1}L_{seg}+\lambda_{2}L_{center}+L_{box} \tag{3}\] in which \(L\) denotes the total loss. \(L_{seg}\) is a focal loss for foreground segmentation. \(L_{center}\) is also based on a focal loss but for object-center heatmap estimation [4, 38]. \(L_{box}\) contains SmoothL1 losses for box azimuth, center offsets and sizes regression. A detailed explanation is in [6]. The training randomness introduced in LiDAR sequence sampling enables the model to be robust to various motion patterns of pillar trajectories across time. Thus our recurrent model can understand different object dynamics, and generalize to variable frame lengths during inference without retraining. More experiments and analysis are provided in Table VI and the ablation studies. ### _Implementation Details_ We conduct 3D object detection within a wide 164\(\times\)164 meter (\(m\)) square zone, centering on the top LiDAR sensor. Point clouds inside this region are voxelized into 2D pillars with 0.32\(m\) spatial resolution. The window attention blocks are based on 10\(\times\)10 grouping sizes. The loss weights \(\lambda_{1}\), \(\lambda_{2}\) defined in (3) are 200 and 10, respectively. We use the AdamW [39, 40] optimizer with a batch size of 128 and 240\(k\) iterations for distributed training on 128 TPUv3. The training takes about 2 days. TPU memory usage is 5.4 GB on average and 7.4 GB at peak. The first 10\(k\) steps will warm up the learning rate from 5.0e-4 to 1.0e-3, after which the learning rate will follow a cosine annealing schedule to zero. ## IV Experiments In this section, we will compare our model with other state-of-the-art methods, and perform ablation studies on the impact of our designs on detection performance. ### _Dataset and Backbone_ We choose Waymo Open Dataset [7] over nuScenes [41] and KITTI [42] because WOD has large-scale and high-quality LiDAR data, which can better simulate the settings for developing on-road fully autonomous vehicles. There are about 160\(k\) annotated training frames in WOD but only around 30\(k\) frames in nuScenes. As for per-frame point cloud densities, WOD is \(\sim\)200\(k\) and nuScenes is \(\sim\)30\(k\). Therefore, WOD is widely used in recent LiDAR-based methods: PV-RCNN(++), SST, RSN, SWFormer and so on [2, 3, 4, 6, 20, 22, 24, 26, 33, 36]. WOD has 798 training sequences, 202 validation and 150 test sequences, covering diverse driving scenarios and agent status. LiDAR data collection frequency is 10Hz. Each frame of point clouds consists of data gathered from five sensors: one long-range and four short-range LiDARs.
For evaluation metrics, we adopt the officially recommended 3D AP / APH under two difficulty levels (L1, L2) depending on point densities of the ground-truth bounding boxes. APH is a weighted metric of AP using heading angles (_i.e_., azimuth). We adopt the state-of-the-art SWFormer [6] as our detection backbone, and replace its original early-to-early LiDAR fusion with our proposed LEF. For fair comparisons, all training settings are kept the same as [6]. ### _Main Results and Comparisons_ The overall vehicle detection results with other competing methods are in Table I. We compare against methods both with and without box refinement steps, although our model is a single-stage method without refinement and generally more efficient than those with box refinement. Our method LEF surpasses the prior best single-stage model SWFormer by +1.3 3D APH on L2 test data (_e.g_. 75.16 _vs_. 73.87), demonstrating the strong overall performance of our approach. \begin{table} \begin{tabular}{l|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{L1} & \multicolumn{2}{c}{L2} \\ & 2D & 3D & 2D & 3D \\ \hline RSN [22] & 53.10 & 45.20 & - & 40.90 \\ SWFormer [6] & 58.33 & 49.74 & 53.45 & 45.23 \\ **LEF (ours)** & **62.63** & **54.35** & **57.42** & **49.34** \\ \hline \hline \end{tabular} \end{table} TABLE II: **Detection results on challenging large objects**. \begin{table} \begin{tabular}{l|c||c|c||c|c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Refine} & \multicolumn{2}{c||}{Test set 3D AP/APH} & \multicolumn{2}{c}{Validation set 3D AP/APH} \\ & & L1 & L2 & L1 & L2 \\ \hline 3D-MAN [33] & with & 78.71 / 78.28 & 70.37 / 69.98 & 74.53 / 74.03 & 67.61 / 67.14 \\ CenterPoint [4] & with & 80.20 / 79.70 & 72.20 / 71.80 & 76.60 / 76.10 & 68.90 / 68.40 \\ SST [5] & with & 80.99 / 80.62 & 73.08 / 72.72 & 77.00 / 76.60 & 68.50 / 68.10 \\ PVRCNN++ [2] & with & 81.62 / 81.20 & 73.86 / 73.47 & 79.30 / 78.80 & 70.60 / 70.20 \\ MPPNet [36] & with & 84.27 / 83.88 & 77.29 / 76.91 & 82.74 / 82.28 & 75.41 / 74.96 \\ CenterFormer [37] & with & 84.70 / 84.40 & 78.10 / 77.70 & 78.80 / 78.30 & 74.30 / 73.80 \\ \hline PointPillars [16] & w/o & 68.60 / 68.10 & 60.50 / 60.10 & 63.30 / 62.70 & 55.20 / 54.70 \\ RSN [22] & w/o & 80.70 / 80.30 & 71.90 / 71.60 & 78.40 / 78.10 & 69.50 / 69.10 \\ SWFormer [6] & w/o & 82.25 / 81.87 & 74.23 / 73.87 & 79.03 / 78.55 & 70.55 / 70.11 \\ **LEF (ours)** & w/o & **83.39 / 83.02** & **75.51 / 75.16** & **79.64 / 79.18** & **71.37 / 70.94** \\ \hline \hline \end{tabular} \end{table} TABLE I: **Overall performance comparisons on Waymo Open Dataset**. Refine means that the detectors need an additional step of box refinement via feature pooling and fusion from the box areas, which usually increases time cost and might not be end-to-end trainable. For fair comparisons we focus on single-stage detectors without (w/o) box refinement. Our method is particularly useful for detecting challenging large objects whose maximum dimension is beyond 7 meters: truck, bus, construction vehicle, _etc._ We conduct detailed analysis on validation set in Table II. Our method LEF outperforms SWFormer by +9.3% relative increase on L1 3D AP: 54.35 _vs_. 49.74. Hard cases such as large vehicles suffer from partial observation issues more often than small or medium size objects. Faithfully detecting these challenging cases requires LiDAR temporal fusion at long frame lengths in order to enlarge the sensory data coverage. 
Moreover, our late-to-early fusion scheme can reuse learned scene and object-aware latent features from prior frames, not simply stacking the point clouds as in RSN and SWFormer. Such high-level history knowledge can enable the model to more easily tackle challenging detection cases, compared with solving them from scratch using stacked raw sensory inputs. Qualitative results are visualized in Figure 3. Typical errors of SWFormer are highlighted in the red zones. Our results are aligned better (_i.e_., have higher 3D IoU) with the ground truth boxes than SWFormer predictions, especially for challenging large objects. Moreover, our results contain fewer false negative and false positive predictions than SWFormer results. We also measure model latency, flops and parameter sizes of different LiDAR 3D object detectors in Table III, following the same benchmark settings as [6]. PointPillars and SWFormer both use point stacking. The results demonstrate the efficiency advantages of our late-to-early recurrent fusion method. **Frame length generalization.** Due to memory constraint of the computing devices, GPU or TPU, 3D object detectors with LiDAR temporal fusion usually sample a fixed number of history frames (_e.g_. 2 or 3) during training. However, during inference, there are usually additional frames available to the model depending on the history lengths. For typical early-to-early fusion based multi-frame detectors (_e.g_. CenterPoint, SWFormer), if we want to test a trained model on different frame lengths, the training settings need to be modified and the model needs to be retrained. With stochastic-length FrameDrop (SLF), LEF can generalize to variable frame lengths _without_ retraining. It can leverage additional frames and achieve increasingly improved results. Large objects 3D AP are shown in Table VI. In contrast, SWFormer and LEF without SLF can not make best of long history and might even face performance decrease. This is because long history frames can exhibit diverse motion patterns of temporally aggregated data, posing generalization difficulties for methods trained without SLF. Moreover, since SWFormer is based on point cloud stacking, it will run into OOM if we simply stack a long LiDAR sequence into millions of 3D points and use them as inputs. These observations indicate that stochastic-length FrameDrop and recurrent fusion are critical in generalizing our method LEF to variable frame lengths during inference. **Foreground pillar segmentation.** To efficiently fuse history pillar features in a recurrent manner, we apply BEV foreground segmentation before passing history latent pillar embeddings into the next frame. the number of history pillars that need to be recurrently fused can be reduced from \(\sim\)20\(k\) to \(\sim\)2\(k\) on average after removing a huge amount of uninformative background data. Therefore the computation burden of our late-to-early temporal fusion scheme can be greatly reduced and maintained at a relatively low constant cost. **Inverse calibration and alignment.** Inverse calibration and alignment, as illustrated in Figure 2, is important for fusing two sparse sets of pillar features between the prior and the current frames. Features belonging to the same underlying static objects can be effectively aligned after this temporal alignment process. In Table VII we show that inverse calibration and alignment achieves consistent detection improvement across different size objects, including truck, sedan, pedestrian, and so on. 
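As a concrete illustration of the inverse calibration and alignment step evaluated above (and described in Section III-C), the sketch below warps a dense history BEV feature map into the current vehicle frame by inverse sampling, padding out-of-view or empty cells with zeros. The grid resolution, the axis convention, and the function name are assumptions made for the example; the actual model operates on sparse pillar features.

```python
# Rough sketch of inverse calibration and alignment on dense BEV grids: for
# every cell of the calibrated (current-frame) grid, look up the corresponding
# cell in the history grid and copy its feature, zero-padding elsewhere.
import numpy as np

def inverse_calibrate(hist_bev, pose_hist, pose_cur, cell_m=0.32):
    """hist_bev: (H, W, d) BEV features in the history vehicle frame.
    pose_hist / pose_cur: 4x4 vehicle-to-world transforms. Returns (H, W, d)."""
    H, W, d = hist_bev.shape
    out = np.zeros((H * W, d), dtype=hist_bev.dtype)
    # Centres of the current-frame BEV cells in metres (vehicle at grid centre).
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    pts = np.stack([(xs - W / 2) * cell_m, (ys - H / 2) * cell_m,
                    np.zeros_like(xs, dtype=float), np.ones_like(xs, dtype=float)],
                   axis=-1).reshape(-1, 4)
    # Inverse sampling: map current-frame cell centres back into the history frame.
    T = np.linalg.inv(pose_hist) @ pose_cur
    src = pts @ T.T
    cols = np.round(src[:, 0] / cell_m + W / 2).astype(int)
    rows = np.round(src[:, 1] / cell_m + H / 2).astype(int)
    inside = (rows >= 0) & (rows < H) & (cols >= 0) & (cols < W)
    out[inside] = hist_bev[rows[inside], cols[inside]]   # zero padding elsewhere
    return out.reshape(H, W, d)
```

Pillars of moving objects are, of course, still misaligned after this warp, which is what the window-based attention fusion discussed next is meant to address.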
**Window-based Attention Fusion.** We apply window-based attention blocks on temporally aligned sparse pillar tokens to further fuse information from the history and current frames. As explained in Section III-D, we explore three different attention designs: self- / cross- / mix-attention. Detection AP on large objects of the WOD validation set is shown in Table VIII. For all methods, we use the sparse set of pillar tokens \(\{V^{{}^{\prime}}_{i,u}\}\) converted from the temporally aligned BEV feature map \(J^{{}^{\prime}}_{i}\) as the query tensor. In self-attention, query, key and value are based on the same tensor. In cross-attention, the key and value tensors are the sparse set of pillar tokens \(\{\tilde{V}_{i-1,c}\}\) converted from the calibrated history features \(\tilde{I}_{i-1}\). Mix-attention uses the union of the two token sets as key and value. We observe that self-attention consistently outperforms the other two attention variants. This is presumably because the history tokens live in a rather different latent space from the temporally aligned tokens. Attention between \(\{\tilde{V}_{i-1,c}\}\) and \(\{V^{{}^{\prime}}_{i,u}\}\) can therefore easily lead to intractable feature fusion and eventually hurt detection. Meanwhile, since \(J^{{}^{\prime}}_{i}\) has already merged information from the history \(\tilde{I}_{i-1}\) and the current frame \(I_{i}\), self-attention is sufficient to associate relevant pillar tokens and fulfill the fusion task. Window-based attention fusion plays an important role in fusing the information from moving-object pillars. In Table IX, we present validation-set 3D AP comparisons with and without window-based self-attention fusion. We report subcategory metrics under different speed ranges: [0, 0.45], [0.45, 2.24], [2.24, 6.71], [6.71, 22.37], [22.37, +\(\infty\)) miles per hour for static, slow, medium, fast, and very fast objects. The metrics are averaged over objects of different sizes. We observe that attention fusion brings consistent detection gains across the different object speed ranges. In particular, the improvements achieved on high-speed objects are larger than those on low-speed objects: +9.4 (fast) _vs_. +6.1 (static) 3D AP gains. The comparisons empirically show that window-based self-attention fusion is critical in associating relevant pillars that belong to the same underlying objects, which is especially important for moving object detection.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{L1} & \multicolumn{3}{c}{L2} \\ & 3-f & 6-f & 9-f & 3-f & 6-f & 9-f \\ \hline SWFormer [6] & 46.23 & 38.76 & OOM & 41.93 & 35.09 & OOM \\ LEF (w/o SLF) & 51.18 & 51.44 & 50.84 & 46.58 & 46.91 & 46.28 \\ **LEF (with SLF)** & **53.13** & **53.96** & **54.35** & **48.28** & **48.99** & **49.34** \\ \hline \hline \end{tabular} \end{table} TABLE VI: **Long frame history generalization studies**. For each trained model, we evaluate its inference generalization ability to different frame (f) lengths _without_ retraining.

\begin{table} \begin{tabular}{l||c|c||c|c} \hline \hline \multirow{2}{*}{Attention Type} & \multicolumn{2}{c||}{L1} & \multicolumn{2}{c}{L2} \\ & 2D & 3D & 2D & 3D \\ \hline Cross-Attn & 51.69 & 42.35 & 47.06 & 38.36 \\ Mix-Attn & 61.68 & 52.94 & 56.46 & 48.06 \\ **Self-Attn** & **62.63** & **54.35** & **57.42** & **49.34** \\ \hline \hline \end{tabular} \end{table} TABLE VIII: **Variants of window-based attention blocks for recurrent temporal fusion**. Based on the comparisons, we adopt self-attention as default in other experiments.

\begin{table} \begin{tabular}{l||c|c||c|c||c|c} \hline \hline \multirow{2}{*}{ICA} & \multicolumn{2}{c||}{Large} & \multicolumn{2}{c||}{Medium} & \multicolumn{2}{c}{Small} \\ & 2D & 3D & 2D & 3D & 2D & 3D \\ \hline w/o & 60.85 & 51.34 & 92.72 & 78.30 & 85.92 & 80.59 \\ **with** & **62.63** & **54.35** & **93.02** & **79.62** & **87.40** & **82.46** \\ \hline \hline \end{tabular} \end{table} TABLE VII: **Inverse calibration and alignment (ICA)** can improve detection AP across different object sizes.

\begin{table} \begin{tabular}{l||c|c|c|c|c} \hline \hline Self-Attention & Static & Slow & Medium & Fast & Very Fast \\ \hline without & 60.55 & 63.46 & 74.58 & 53.07 & 75.47 \\ **with** & **66.62** & **69.27** & **79.62** & **62.46** & **82.14** \\ \hline \hline \end{tabular} \end{table} TABLE IX: **The impact of window-based self-attention on different speed objects**.

## V Conclusions and Future Work

In this paper, we conduct an in-depth study on the temporal fusion aspect of 3D object detection from LiDAR sequences. We propose a late-to-early temporal feature fusion method that recurrently extracts sparse pillar features from both object-aware latent embeddings and LiDAR sensor raw inputs. To handle the alignment issues of static and moving objects, we propose inverse calibration and alignment as well as window-based attention fusion methods. We also apply foreground segmentation to obtain sparse pillar features from history for computation reduction. The resulting model, LEF, performs favorably against its base model SWFormer in both detection quality and efficiency. The improvement is especially significant on large objects that require multiple LiDAR sweeps to be fused across space and time to achieve a high surface coverage rate. As future work, we plan to extend our method to multi-modal sensor fusion with a focus on integrating camera and radar information. Recurrent late-to-early temporal fusion schemes like ours and BEVFormer [35] have so far been explored in only a few works. To further demonstrate the effectiveness of this approach, it would be beneficial to test it on various backbone models and to extend its application beyond the 3D object detection task.
2306.17455
Clipping noise cancellation receiver for the downlink of massive MIMO OFDM system
Massive multiple-input multiple-output (mMIMO) technology is considered a key enabler for the 5G and future wireless networks. In most wireless communication systems, mMIMO is employed together with orthogonal frequency-division multiplexing (OFDM) which exhibits a high peak-to-average-power ratio (PAPR). While passing the OFDM signal through one of the common RF front-ends of limited linearity, significant distortion of the transmitted signal can be expected. In mMIMO systems, this problem is still relevant as in some channels the distortion component is beamformed in the same directions as the desired signal. In this work, we propose a multi-antenna clipping noise cancellation (MCNC) algorithm for the downlink of the mMIMO OFDM system. Computer simulations show it can remove nonlinear distortion even under severe nonlinearity. Next, a simplified version of the algorithm is proposed. It was observed that for the direct visibility channels, its performance is only slightly degraded with respect to the MCNC algorithm.
Marcin Wachowiak, Pawel Kryszkiewicz
2023-06-30T08:00:58Z
http://arxiv.org/abs/2306.17455v1
# Clipping noise cancellation receiver for the downlink of massive MIMO OFDM system ###### Abstract Massive multiple-input multiple-output (mMIMO) technology is considered a key enabler for the 5G and future wireless networks. In most wireless communication systems, mMIMO is employed together with orthogonal frequency-division multiplexing (OFDM) which exhibits a high peak-to-average-power ratio (PAPR). While passing the OFDM signal through one of the common RF front-ends of limited linearity, significant distortion of the transmitted signal can be expected. In mMIMO systems, this problem is still relevant as in some channels the distortion component is beamformed in the same directions as the desired signal. In this work, we propose a multi-antenna clipping noise cancellation (MCNC) algorithm for the downlink of the mMIMO OFDM system. Computer simulations show it can remove nonlinear distortion even under severe nonlinearity. Next, a simplified version of the algorithm is proposed. It was observed that for the direct visibility channels, its performance is only slightly degraded with respect to the MCNC algorithm. orthogonal frequency-division multiplexing (OFDM), massive MIMO (mMIMO), front-end nonlinearity, clipping noise cancellation (CNC) ## I Introduction Massive multiple-input multiple-output (mMIMO) systems are envisioned as the key enabler of the latest fifth generation of wireless networks and beyond. The high number of antennas combined with advanced signal processing allows an increase in the throughput to meet the growing demands. In [1] it was theoretically shown that the capacity of mMIMO systems is not upper-bounded and can be infinitely increased with the growing number of antennas. However, when considering practical implementation, hardware impairments, limiting the performance of the system, need to be taken into account. One of the crucial impairments to the transmit and receive signal chains is nonlinear amplification. Most terrestrial mMIMO systems employ the orthogonal frequency-division multiplexing (OFDM) technique due to its high bandwidth efficiency and low-complexity receiver structure. However, OFDM modulation is characterized by a high peak-to-average-power ratio (PAPR) [2], which combined with nonlinear amplification results in significant nonlinear distortion of the signal. With the advent of massive MIMO communications, the problem of nonlinear distortion reappeared in a new context. The presence of nonlinearity in multiple antenna systems introduces an additional degree of complexity, which has to be carefully considered. Initial analyses [3] assumed that the distortion can be modeled as additive white noise uncorrelated between antennas. However, this work considered narrowband transmission on a single carrier. Later, the analysis in [4] has proven that the distortion signals are in some scenarios correlated among antennas. The analysis was performed in a multiple antenna system with two subcarriers and a nonlinearity modeled as a third-order polynomial. A follow-up work [5], which included the OFDM waveform, also found that some in-band and out-of-band emissions are always beamformed in the same directions as the desired signals, i.e., an increase in the number of transmitting antennas does not increase the signal to distortion power ratio (SDR). In [6], a detailed study of the radiation characteristic of the distortion signal was performed, addressing also OFDM signals. 
The authors derived a spatial cross-correlation matrix of nonlinear distortion components, which can be used to predict the expected signal-to-distortion levels, both in-band and out-of-band. In [7], it was found, for signals with a high peak-to-average power ratio (PAPR), that with the growing number of users being served simultaneously, the distortion signal radiation characteristic becomes approximately omnidirectional. However, for direct visibility channels and a single user, SDR remains constant regardless of the number of antennas. This points to the conclusion that nonlinear distortion is still a major impairment even in mMIMO systems and measures must be taken to mitigate its effects on the system performance. In single-input single-output (SISO) systems utilizing OFDM, several solutions to the nonlinear front-end problem have been proposed at the transmitter side [8]. One commonly employed technique is clipping and filtering (CAF) presented in [9]. It allows for PAPR reduction without average power increase or bandwidth broadening. One critical issue of CAF is the presence of in-band distortion originating from the clipping. In the literature, two distinguished approaches toward distortion recovery and removal at the receiver can be found: time-domain (TD) and frequency-domain (FD). The TD approach is represented by decision-aided reconstruction (DAR) [10] and the FD approach by clipping noise cancellation (CNC) [11]. In [12] it was shown that the CNC algorithm outperforms DAR, which was supported by the derivation of theoretical performance bounds. So far, mMIMO OFDM receivers aware of nonlinear distortion have received limited attention in the literature. In [13] authors have derived and analyzed the performance of a distortion-aware linear minimum mean squared error-based receiver for the uplink in an mMIMO OFDM system. The receiver offers some performance improvement, however, it is still far from reaching the performance of a system without nonlinear amplification. In [14] compressive sensing is used together with an orthogonal matching pursuit algorithm to compensate for the nonlinearity in the receiver at the base station. The method is evaluated for an mMIMO OFDM system with the Saleh model of a nonlinear amplifier. The results are compared against a neural network compensator, both at the receiver and transmitter. In [15] a joint channel equalization and iterative nonlinear distortion cancellation technique are discussed for the uplink in a Multi-User mMIMO system. The utilized algorithm is very similar to the CNC, however, it was analyzed for a single carrier transmission. In [16] authors propose a power amplifier noise cancellation (PANC) algorithm for the uplink in a multi-user space division multiple access (SDMA) OFDM system. While its principle of operation is similar to the CNC algorithm, the considered scenario, i.e., multiple single antenna nonlinear transmitters delivering signal to a linear, multi-antenna receiver, is significantly different from the one considered in this paper. The performance of the algorithm is evaluated with joint channel estimation. Additionally, an upper bound bit error rate (BER) is derived subject to the considered system parameters. In [17] the CNC algorithm is studied for an orbital angular momentum (OAM) multiplexing system with a uniform circular array both at the receiver and transmitter. The work considers a line-of-sight channel with OAM beamforming. A learning-based distortion recovery algorithm is presented. 
It resembles the CNC algorithm in its unfolded form with the introduction of additional learnable parameters which have to be optimized. It is important to mention that nonlinear distortion introduces some additional frequency diversity, allowing for reception quality higher than in the linear OFDM case at the cost of increased computational complexity. A generalized approximate message passing algorithm is used for this purpose in [18] for a SISO OFDM system. In [19] the scheme was applied to a singular value decomposition (SVD)-based MIMO OFDM system to combat digital-to-analog converter (DAC) nonlinearity distortion. The listed works mostly address the problem of nonlinear distortion in the uplink of an mMIMO OFDM system. Therefore, the precoding and combining of the signals from multiple antennas are not considered. In this work, we focus on a single-user downlink transmission in a massive MIMO OFDM system. It corresponds to the worst-case scenario in which the SDR is the lowest due to the distortion being beamformed in the same direction as the desired signal [4]. We propose a multi-antenna clipping noise cancellation (MCNC) algorithm, which takes into consideration precoding and propagation in a multi-antenna system. The reconstruction of the transmit chain introduced in the MCNC algorithm is required for effective cancellation of the distortion in multi-antenna scenarios. Then, a simplified receiver is derived for a specific precoding case. It requires fewer computations and less control information and resembles the standard CNC algorithm used for SISO systems. The performance of the algorithms is evaluated for MRT precoding and a few channel models. The simulation results allow for a comparison of the algorithms with regard to a number of parameters. The main contributions of this work are as follows:

1. Justification of the complex-Gaussian distribution of OFDM symbol samples after precoding, which allows known results on OFDM signal decomposition to be used.
2. Evaluation of the influence of the channel type (LOS, two-path, IID Rayleigh), the number of antennas, and the power amplifier (PA) input back-off (IBO) on the SDR under maximum ratio transmission (MRT) precoding.
3. A new MCNC algorithm proposed for the removal of clipping noise in the receiver of the downlink mMIMO OFDM system, designed to effectively cancel the distortion from multiple transmit antennas.
4. A simplified version of the MCNC algorithm that performs close to the MCNC algorithm for channels with limited frequency selectivity.
5. Verification of the schemes' performance in various channels, i.e., line-of-sight (LOS), two-path, and independent, identically distributed (IID) Rayleigh, and in various system configurations. Additionally, the influence of channel coding, the 3GPP 38.901 channel model [20], and imperfect channel estimation has been considered. The convergence has been analyzed both in terms of the required signal quality and the number of iterations.

The remainder of this paper is organized as follows. Section II describes the mMIMO OFDM transmission system and the iterative receivers. Then, the computational complexity of the proposed algorithms is discussed in Sec. III. The simulation results are presented in Sec. IV. Finally, the concluding remarks are given in Sec. V.

## II System model

An mMIMO OFDM transmission system depicted in Fig. 1 is considered. There are \(N_{\mathrm{U}}\) quadrature amplitude modulation (QAM) symbols \(s_{n}\) (\(n\in\{1,...,N_{\mathrm{U}}\}\)) transmitted over adjacent subcarriers in a single OFDM symbol period.
The symbols are chosen from set \(\chi\). The symbols are precoded and transmitted by \(K\) parallel transmitting signal chains, each consisting of an OFDM modulator with a maximum number of \(N\) subcarriers, a nonlinear amplifier and an antenna element. Signals from different antennas combine at the single-antenna receiver.

Fig. 1: System model.

### _Radio channel_

In order to utilize the OFDM modulator, it is assumed that the radio channel is constant over the frequency span of a single subcarrier, i.e., the channel coherence bandwidth is not smaller than a single subcarrier bandwidth. For the \(n\)-th subcarrier and the \(k\)-th antenna, the channel response is a single complex coefficient expressed as \(h_{k,n}\).

### _Precoding_

Precoding is applied by multiplying the data symbol at the \(n\)-th subcarrier \(s_{n}\) by the precoding coefficient \(v_{k,n}\) for the \(n\)-th subcarrier and \(k\)-th antenna, obtaining the precoded symbol \(x_{k,n}\): \[x_{k,n}=s_{n}v_{k,n}. \tag{1}\] It is assumed that the precoder is normalized to obtain a unit total transmit power gain, irrespective of the number of utilized antennas, for each subcarrier independently, i.e., \[\sum_{k=1}^{K}\left|s_{n}v_{k,n}\right|^{2}=\left|s_{n}\right|^{2}\sum_{k=1}^{K}\left|v_{k,n}\right|^{2}=\left|s_{n}\right|^{2}. \tag{2}\] For the special case of MRT, which maximizes the received power, the precoding coefficients are calculated as [21]: \[v_{k,n}=\frac{h_{k,n}^{*}}{\sqrt{\sum_{k=1}^{K}\left|h_{k,n}\right|^{2}}}, \tag{3}\] where \(*\) denotes the complex conjugate.

### _OFDM Modulation_

Precoded symbols are then subject to OFDM modulation [22], which is performed by an inverse fast Fourier transform (IFFT) of size \(N\). Only \(N_{\rm u}\) subcarriers of indices \(\mathcal{N}\) are modulated by data symbols \(x_{k,n}\). The other \(N-N_{\rm u}\) subcarriers are modulated with zeros. Typically, for a symmetric OFDM spectrum and an unused direct current (DC) subcarrier, the subcarrier index set equals \(\mathcal{N}=\{-N_{\rm u}/2,...,-1,1,...,N_{\rm u}/2\}\). At the output of the IFFT, the \(t\)-th sample of the OFDM signal for the \(k\)-th antenna is calculated as: \[y_{k,t}=\frac{1}{\sqrt{N}}\sum_{n\in\mathcal{N}}x_{k,n}e^{j2\pi\frac{n}{N}t}, \tag{4}\] where \(t\in\{-N_{\rm CP},...,N-1\}\), and \(N_{\rm CP}\) is the number of samples of the cyclic prefix (CP).

### _Nonlinear amplifier_

The modulated signal undergoes the standard digital-to-analog conversion and upconversion to a chosen carrier frequency. These steps are omitted in our model as they are reversed at the receiver. Next, the signal is subject to nonlinear amplification by a nonlinear amplifier model identical for each transmitting signal chain: \[\hat{y}_{k,t}=\mathcal{A}(y_{k,t}), \tag{5}\] which in the case of the soft limiter [2] can be described as: \[\hat{y}_{k,t}=\begin{cases}y_{k,t}&\mathrm{for}\ \left|y_{k,t}\right|^{2}\leq P_{\rm max}\\ \sqrt{P_{\rm max}}e^{j\arg\left(y_{k,t}\right)}&\mathrm{for}\ \left|y_{k,t}\right|^{2}>P_{\rm max}\end{cases}, \tag{6}\] where \(P_{\rm max}\) is the maximum transmit power of a given PA and \(\arg\left(y_{k,t}\right)\) denotes the phase of \(y_{k,t}\). If the instantaneous signal power exceeds \(P_{\rm max}\), the signal is clipped, i.e., it has constant amplitude while maintaining the input phase. While there are a number of different PA models, the soft limiter has been proven to be the nonlinearity maximizing the SDR [23].
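For illustration, the soft-limiter model in (6) can be reproduced in a few lines. The snippet below is a sketch written for this article, not the simulation code of the paper; the unit-average-power complex-Gaussian test signal and the way \(P_{\rm max}\) is set from a 0 dB back-off are assumptions made for the example.

```python
import numpy as np

def soft_limiter(y, p_max):
    """Soft limiter of Eq. (6): samples whose power exceeds p_max keep their
    phase but are limited to amplitude sqrt(p_max); others pass unchanged."""
    y = np.asarray(y, dtype=complex)
    clipped = np.sqrt(p_max) * np.exp(1j * np.angle(y))
    return np.where(np.abs(y) ** 2 > p_max, clipped, y)

# Demo on a complex-Gaussian signal with unit average power (an assumption).
rng = np.random.default_rng(0)
y = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
p_max = 10 ** (0.0 / 10) * np.mean(np.abs(y) ** 2)   # 0 dB back-off, cf. the IBO definition below
y_clipped = soft_limiter(y, p_max)
print("clipped fraction:", np.mean(np.abs(y) ** 2 > p_max))
```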
While in many contemporary systems digital predistortion is employed, the soft limiter can be treated as an optimal characteristic of the combined PA-predistorter model. It is a common practice to use IBO to determine PA operating point and respectively the \(P_{\rm max}\). It is defined as a ratio of maximum PA power to the average power at the input of the amplifier, expressed in decibel scale: \[IBO\ [dB]=10log_{10}\left(\frac{P_{\rm max}}{\mathbb{E}[\left|y_{k,t}\right|^{2} ]}\right), \tag{7}\] where the expectation operator is denoted as \(\mathbb{E}\). Assuming that the average signal power is calculated based on each OFDM symbol sample over all antennas and using (2) we get: \[\mathbb{E}[|y_{k,t}|^{2}]=\frac{\bar{P_{s}}}{NK}\sum_{n\in\mathcal{N}}\sum_{k= 1}^{K}|v_{k,n}|^{2}=\frac{\bar{P_{s}}N_{\rm u}}{KN}, \tag{8}\] where \(\bar{P_{s}}\) is the average power of a single symbol \(s_{n}\). If the wireless channel is varying in time the expectation over \(|v_{k,n}|^{2}\) should also be considered. Because of averaging mean power over antennas in (8), all \(K\) amplifiers work with the same clipping threshold \(P_{\rm max}\). The signal at the output of the amplifier can be decomposed based on the principle of homogenous linear mean square estimation [24] as: \[\hat{y}_{k,t}=\alpha_{k}y_{k,t}+\bar{d}_{k,t} \tag{9}\] where \(\alpha_{k}\) is the correlation coefficient specific for \(k\)-th antenna, \(\bar{d}_{k,t}\) is the distortion signal uncorrelated with the desired signal \(y_{k,t}\). The coefficient \(\alpha_{k}\) is defined as follows: \[\alpha_{k}=\frac{\mathbb{E}\left[\hat{y}_{k,t}y_{k,t}^{*}\right]}{\mathbb{E} \left[y_{k,t}y_{k,t}^{*}\right]}. \tag{10}\] The value \(\alpha_{k}\) can be derived analytically assuming the complex-Gaussian distribution of \(y_{k,t}\)[25]. While an exact signal envelope distribution for QAM-modulated OFDM is of a discrete nature [26], it converges fast with the number of subcarriers to its limit, i.e., a complex-Gaussian distribution. This comes from the utilization of the central limit theorem as \(N_{\rm U}\gg 0\) independently modulated subcarriers are used. In [27] it has been shown that the limit distribution is obtained not only for independent and identically distributed symbols. It is valid as well for coded systems, allowing the modulating symbols to be dependent but uncorrelated. Additionally, power variation among subcarriers, e.g., as a result of water filling, still allows the complex-Gaussian distribution to be used. These derivations allow the complex-Gaussian distribution to be assumed for the mMIMO OFDM signal. First, while various precoders \(v_{k,n}\) can be used, e.g., MRT or zero-forcing (ZF) [21], these typically depend on the wireless channel properties, not the modulating symbols resulting in \(\forall_{n\in\mathcal{N}}\mathbb{E}[s_{n}v_{k,n}]=\mathbb{E}[s_{n}]\mathbb{E}[v _{k,n}]\). As such, using a common assumption that QAM symbols are uncorrelated of zero mean, i.e., \(\forall_{n\neq m}\mathbb{E}[s_{n}s_{m}^{*}]=\mathbb{E}[s_{n}]\mathbb{E}[s_{m}^ {*}]\) and \(\mathbb{E}[s_{n}]=0\), it can be shown that \[\forall_{n\neq m}\mathbb{E}[x_{k,n}x_{k,m}^{*}]=\mathbb{E}[s_{n}]\mathbb{E}[s _{m}]^{*}\mathbb{E}[v_{k,n}v_{k,m}^{*}]=0. \tag{11}\] Therefore, the symbols \(x_{k,n}\) are uncorrelated as required by [27]. The second issue is the power variation among subcarriers. It can happen as a result of some sort of water filling, resulting in \(\exists_{m\neq n}\mathbb{E}[|s_{n}|^{2}]\neq\mathbb{E}[|s_{m}|^{2}]\). 
However, it is possible that the power amplification by coefficient \(v_{k,n}\) can vary among subcarriers, e.g., in the case of the MRT precoder as a result of frequency-selective fading. Still, [27] shows the complex-Gaussian assumption can be used in these cases. As such, \(\alpha_{k}\) can be calculated as in [25], considering that power can be unequally distributed among antennas, e.g., as a result of some antenna array elements being pointed in a different direction than the served user, resulting in increased power of the other precoding matrix elements for the MRT precoder described by (3). In the case of a common maximal transmit power \(P_{\max}\) for all utilized front-ends, the mean transmit (TX) power per antenna can differ, resulting in a varying per-antenna IBO, i.e., \[IBO_{k}\ [dB]=10\log_{10}\left(\frac{P_{\max}}{\frac{\bar{P}_{s}}{N}\sum_{n\in\mathcal{N}}|v_{k,n}|^{2}}\right). \tag{12}\] The \(\alpha_{k}\) coefficient can be calculated as [25]: \[\alpha_{k}=1-e^{-\gamma_{k}^{2}}+\frac{\sqrt{\pi}\gamma_{k}}{2}\operatorname{erfc}\left(\gamma_{k}\right), \tag{13}\] where \(\gamma_{k}=10^{\frac{IBO_{k}}{20}}\) and \(\operatorname{erfc}(\cdot)\) denotes the complementary error function. Observe that in many architectures and for many channel types the coefficient \(\alpha_{k}\) will be invariant with respect to the antenna index as a result of equal power per antenna.

### _Signal reception_

The time-domain signal \(\hat{y}_{k,t}\) transmitted from the \(k\)-th antenna is convolved with its respective wideband channel impulse response. After passing through the channel, the \(K\) signals are summed at the receiving antenna. After the removal of the CP, the fast Fourier transform (FFT) is applied, which allows the signal received at the \(n\)-th subcarrier to be expressed as: \[r_{n}=\sum_{k=1}^{K}\mathcal{F}_{[n,t=0,...,N-1]}\{\hat{y}_{k,t}\}h_{k,n}+w_{n}, \tag{14}\] where \(w_{n}\) is the white noise sample at the \(n\)-th subcarrier in the receiver and \(\mathcal{F}_{[n,t=0,...,N-1]}\{\cdot\}\) denotes the discrete Fourier transform (DFT) over time instants \(t=0,...,N-1\) at the \(n\)-th subcarrier. Based on (9) and (4), the received signal can be expanded to: \[r_{n}=\sum_{k=1}^{K}\alpha_{k}h_{k,n}x_{k,n}+\sum_{k=1}^{K}h_{k,n}d_{k,n}+w_{n}, \tag{15}\] where \[d_{k,n}=\mathcal{F}_{[n,t=0,...,N-1]}\{\bar{d}_{k,t}\}. \tag{16}\] Observe that in general \(d_{k,n}\) for a single subcarrier depends on the transmitted symbols \(s_{n}\) and precoding coefficients \(v_{k,n}\) for all the utilized subcarriers \(n\in\mathcal{N}\). This can be easily shown by treating the OFDM signal as a set of subcarriers undergoing intermodulation in a polynomial-modeled PA [28]. Taking into account the precoding coefficient definition in (1), it is obtained that \[r_{n}=\sum_{k=1}^{K}\alpha_{k}h_{k,n}v_{k,n}s_{n}+\sum_{k=1}^{K}h_{k,n}d_{k,n}+w_{n}. \tag{17}\] The signal-to-noise ratio (SNR) is defined considering only the data-carrying subcarriers, with the wanted signal attenuated by the coefficients \(\alpha_{k}\), giving \[SNR=\frac{\bar{P}_{s}\frac{1}{N_{\rm u}}\sum_{n\in\mathcal{N}}\left|\sum_{k=1}^{K}\alpha_{k}h_{k,n}v_{k,n}\right|^{2}}{\mathbb{E}\left[\left|w_{n}\right|^{2}\right]}. \tag{18}\] Based on the SNR definition, the Eb/N0 can be calculated as: \[\frac{Eb}{N0}=\frac{SNR}{\log_{2}M}, \tag{19}\] where \(M\) is the size of the constellation, i.e., the number of elements in set \(\chi\).
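The correlation coefficient defined empirically in (10) and given in closed form in (13) can be cross-checked numerically. The sketch below is illustrative only and assumes unit-average-power complex-Gaussian samples passed through the soft limiter of (6); it is not the code used for the paper's results.

```python
import numpy as np
from scipy.special import erfc

def alpha_analytical(ibo_db):
    """Closed-form soft-limiter coefficient, Eq. (13)."""
    gamma = 10 ** (ibo_db / 20)
    return 1 - np.exp(-gamma ** 2) + np.sqrt(np.pi) * gamma / 2 * erfc(gamma)

def alpha_empirical(y, y_clipped):
    """alpha = E[y_hat y*] / E[|y|^2], Eq. (10)."""
    return (np.mean(y_clipped * np.conj(y)) / np.mean(np.abs(y) ** 2)).real

rng = np.random.default_rng(1)
y = (rng.standard_normal(200_000) + 1j * rng.standard_normal(200_000)) / np.sqrt(2)
ibo_db = 0.0
p_max = 10 ** (ibo_db / 10) * np.mean(np.abs(y) ** 2)
y_hat = np.where(np.abs(y) ** 2 > p_max,
                 np.sqrt(p_max) * np.exp(1j * np.angle(y)), y)
print(alpha_empirical(y, y_hat), alpha_analytical(ibo_db))  # both approx. 0.77 at 0 dB IBO
```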
Similarly, the SDR is defined considering only the data-carrying subcarriers: \[SDR=\frac{\bar{P}_{s}\sum_{n\in\mathcal{N}}\left|\sum_{k=1}^{K}\alpha_{k}h_{k,n}v_{k,n}\right|^{2}}{\sum_{n\in\mathcal{N}}\left|\sum_{k=1}^{K}h_{k,n}d_{k,n}\right|^{2}}. \tag{20}\]

### _Simple reception_

In a simple receiver, first an equalization is performed, e.g., ZF, dividing the received symbol \(r_{n}\) by \(\sum_{k=1}^{K}\alpha_{k}h_{k,n}v_{k,n}\), effectively removing the effects of the channel, precoding and nonlinearity on the wanted signal, i.e., \[g_{n}=s_{n}+\frac{\sum_{k=1}^{K}h_{k,n}d_{k,n}}{\sum_{k=1}^{K}\alpha_{k}h_{k,n}v_{k,n}}+\frac{w_{n}}{\sum_{k=1}^{K}\alpha_{k}h_{k,n}v_{k,n}}. \tag{21}\] However, this results in a scaling of the distortion and white noise terms. The detection is performed by finding the closest symbol from the constellation set: \[\bar{s}_{n}=\arg\min_{s\in\chi}|s-g_{n}|^{2}\,. \tag{22}\]

### _Multiple antenna clipping noise cancellation receiver (MCNC)_

While the nonlinear distortion is often treated as white noise [3], for the soft limiter it depends on the transmitted signal as shown in (6). Therefore, a decision-aided receiver is proposed that iteratively reproduces the received and nonlinearly distorted signal, improving detection quality. While the general idea is well known for SISO OFDM systems [11], the mMIMO precoding and the utilization of multiple antennas required it to be redesigned. The multiple-antenna CNC receiver is shown in Fig. 2. It consists of the following steps:

(a) Hard symbol detection is performed for the \(n\)-th subcarrier based on the received and equalized signal \(g_{n}^{i}\) with the \(i\)-th nonlinearity distortion estimate removed, where \(i\) denotes the iteration number. For \(i=0\), the input is the original received signal \(g_{n}\) as defined in (21). In the next iterations, the nonlinear distortion will be estimated and subtracted from \(g_{n}\), constituting \(g_{n}^{i}\). The symbol detection is carried out by finding the closest, in the Euclidean distance sense, symbol from the chosen QAM constellation set \(\chi\): \[\tilde{s}_{n}^{i}=\arg\min_{s\in\chi}\left|s-g_{n}^{i}\right|^{2}.\] (23)

(b) The obtained symbol estimate \(\tilde{s}_{n}^{i}\) is used to regenerate the received signal using the whole link model, including the multiple-antenna transmitters with nonlinear amplifiers, the channel model and the receiver with equalization. To achieve this, the precoding and channel coefficients need to be known at the receiver. First, the symbol estimate is precoded as in (1) using the same precoding coefficients: \[\tilde{x}_{k,n}^{i}=\tilde{s}_{n}^{i}v_{k,n}.\] (24) Then, the precoded symbol estimate is OFDM modulated as in (4), using the same subcarrier mapping, giving: \[\tilde{y}_{k,t}^{i}=\frac{1}{\sqrt{N}}\sum_{n\in\mathcal{N}}\tilde{x}_{k,n}^{i}e^{j2\pi\frac{n}{N}t}.\] (25) Next, the signal is processed by the nonlinearity model as in (5), resulting in \(\hat{\tilde{y}}_{k,t}^{i}=\mathcal{A}(\tilde{y}_{k,t}^{i})\). Signals obtained from each antenna are then passed through a multiple-input single-output (MISO) channel model similarly to (14), except for the white noise addition, obtaining \[\tilde{r}_{n}^{i}=\sum_{k=1}^{K}\mathcal{F}_{[n,t=0,\ldots,N-1]}\{\hat{\tilde{y}}_{k,t}^{i}\}h_{k,n},\] (26) which is the regenerated received signal after the channel. If all the symbols \(\tilde{s}_{n}^{i}\) are correct, both the wanted signal and the nonlinear distortion will be perfectly reconstructed.
While this is not probable under severe nonlinearity or noise, if most of the symbols \(\tilde{s}_{n}^{i}\) are detected correctly, the majority of the nonlinear distortion should be reconstructed as well [11]. The regenerated signal can be decomposed into desired and distortion components based on (9) as: \[\tilde{r}_{n}^{i}=\sum_{k=1}^{K}\alpha_{k}h_{k,n}v_{k,n}\tilde{s}_{n}^{i}+\sum_{k=1}^{K}h_{k,n}\tilde{d}_{k,n}^{i},\] (27) where \(\tilde{d}_{k,n}^{i}\) denotes the reconstructed distortion signal received from the \(k\)-th antenna on the \(n\)-th subcarrier in the \(i\)-th iteration. The regenerated signal undergoes equalization by dividing the signal by \(\sum_{k=1}^{K}\alpha_{k}h_{k,n}v_{k,n}\), giving \[\tilde{g}_{n}^{i} =\frac{\tilde{r}_{n}^{i}}{\sum_{k=1}^{K}\alpha_{k}h_{k,n}v_{k,n}}\] \[=\tilde{s}_{n}^{i}+\frac{\sum_{k=1}^{K}h_{k,n}\tilde{d}_{k,n}^{i}}{\sum_{k=1}^{K}\alpha_{k}h_{k,n}v_{k,n}}.\] (28) The last component in (28) is the nonlinear distortion influencing the \(n\)-th subcarrier if the symbols \(\tilde{s}_{n}^{i}\) were transmitted. Since both \(\tilde{g}_{n}^{i}\) and \(\tilde{s}_{n}^{i}\) are known at this stage, this signal can be calculated as \[q_{n}^{i}=\tilde{g}_{n}^{i}-\tilde{s}_{n}^{i}.\] (29)

(c) The estimated distortion component is subtracted from the originally received signal \[g_{n}^{i+1}=g_{n}-q_{n}^{i}\] (30) constructing a potentially improved received signal that can be used for detection in the next iteration. The algorithm returns to step (a) and repeats until a certain number of iterations has been reached or a satisfactory quality of the received data has been achieved. Using (21) and (28), the components of \(g_{n}^{i+1}\) can be shown as: \[g_{n}^{i+1}=s_{n}+\frac{\sum_{k=1}^{K}h_{k,n}\left(d_{k,n}-\tilde{d}_{k,n}^{i}\right)+w_{n}}{\sum_{k=1}^{K}\alpha_{k}h_{k,n}v_{k,n}}.\] (31) If the \(\tilde{s}_{n}^{i}\) estimates are good enough, the estimated nonlinear distortion term \(\tilde{d}_{k,n}^{i}\) should reduce the received distortion term \(d_{k,n}\), improving the reception performance. One of the disadvantages of the above algorithm is the requirement to know the channel coefficients and the precoding vectors used at the transmitter. This can be difficult in a time division duplex (TDD)-based massive MIMO system in which the channel reciprocity property is used [21]. In such a case, the transmission of the channel coefficients \(h_{k,n}\) together with the utilized precoding coefficients \(v_{k,n}\) will require a significant capacity of the control channel, especially for a high number of antennas and a frequency-selective channel. Moreover, these coefficients have to be delivered in a timely manner in order not to delay the MCNC operation.

Fig. 2: Multiple antenna clipping noise cancellation algorithm flowchart.

### _CNC_

Considering the above-mentioned drawbacks of the MCNC, it is reasonable to propose a simplification resulting in lower computational complexity and a lower amount of control information required at the receiver. As a starting example, we consider a precoder that is fixed for all subcarriers of a given antenna. Moreover, we assume the precoder amplitude is equal for each antenna, which, considering (2), results in \(|v_{k,n}|=\frac{1}{\sqrt{K}}\). Therefore, the precoding coefficient equals \[v_{k,n}=\frac{1}{\sqrt{K}}e^{j\varphi_{k}}, \tag{32}\] where \(\varphi_{k}\) is the precoder phase shift specific for the \(k\)-th antenna.
This allows (4) to be simplified as follows: \[y_{k,t}=\frac{1}{\sqrt{K}}e^{j\varphi_{k}}\underbrace{\frac{1}{\sqrt{N}}\sum_{n\in\mathcal{N}}s_{n}e^{j2\pi\frac{n}{N}t}}_{\tilde{y}_{t}}. \tag{33}\] By combining (7) and (8), the clipping power of the considered PA can be expressed as \[P_{\max}=10^{\frac{IBO}{10}}\,\frac{\bar{P}_{s}N_{\rm u}}{KN}.\]

Joint detection of all the simultaneously scheduled users' symbols would be challenging. Typically, simultaneously scheduled users have channels close to orthogonal. This results in a significantly attenuated wanted signal of the other simultaneously scheduled users at the considered user equipment. The SNR of the other users' signals will be much lower, preventing successful detection. Additionally, the control and computational overhead will be significant. The other possibility is to use the CNC/MCNC algorithms as described above. In this case, the signals of other users and part of the nonlinearity distortion will be treated as interference, decreasing reception quality similarly to white noise. This will be one of the scenarios addressed in Sec. IV.
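To make the iterative reception concrete, the sketch below implements one possible single-OFDM-symbol version of the MCNC loop, following steps (a)-(c) and Eqs. (23)-(30). It is written for this article and is not the authors' code: the helper structure, the omission of the cyclic prefix, and the unitary DFT scaling are simplifying assumptions.

```python
import numpy as np

def detect(g, constellation):
    """Hard detection: nearest constellation point per subcarrier, Eq. (23)."""
    idx = np.argmin(np.abs(g[:, None] - constellation[None, :]) ** 2, axis=1)
    return constellation[idx]

def mcnc(g0, v, h, alpha, constellation, p_max, n_fft, data_idx, n_iter=4):
    """Run n_iter MCNC iterations for one OFDM symbol.

    g0        : equalized received data subcarriers, Eq. (21), shape (N_u,)
    v, h      : precoding and channel coefficients, shape (K, N_u)
    alpha     : per-antenna soft-limiter coefficients, shape (K,)
    data_idx  : positions of the N_u data subcarriers on the size-n_fft grid
    """
    eq_gain = np.sum(alpha[:, None] * h * v, axis=0)           # sum_k alpha_k h_kn v_kn
    g = g0.copy()
    for _ in range(n_iter):
        s_hat = detect(g, constellation)                        # (a) hard detection
        x_hat = s_hat[None, :] * v                              # (b) re-precode, Eq. (24)
        grid = np.zeros((v.shape[0], n_fft), dtype=complex)
        grid[:, data_idx] = x_hat
        y = np.fft.ifft(grid, axis=1) * np.sqrt(n_fft)          # Eq. (25), no CP here
        y = np.where(np.abs(y) ** 2 > p_max,
                     np.sqrt(p_max) * np.exp(1j * np.angle(y)), y)   # soft limiter, Eq. (6)
        r_hat = np.sum(h * (np.fft.fft(y, axis=1) / np.sqrt(n_fft))[:, data_idx], axis=0)  # Eq. (26)
        q = r_hat / eq_gain - s_hat                             # Eqs. (28)-(29)
        g = g0 - q                                              # (c) distortion subtraction, Eq. (30)
    return detect(g, constellation)
```

Under the fixed-precoder assumption of (32)-(33), every antenna clips, up to a phase shift and scaling, the same waveform \(\tilde{y}_{t}\), so the simplified CNC receiver can presumably regenerate the clipping noise from a single emulated front-end instead of looping over all \(K\) branches, avoiding the per-antenna channel and precoding knowledge required by MCNC.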
## III Computational complexity In this section, the computational complexity of a standard OFDM receiver, CNC and MCNC algorithms is analyzed in terms of real multiplications/divisions and additions/subtractions. It depends on the IFFT size \(N\), the number of modulated subcarriers \(N_{\mathrm{U}}\), the number of constellation points \(M\), and the number of iterations of CNC/MCNC algorithm \(I\). The FFT and IFFT is performed by radix-\(2\) algorithm and requires \((N/2)\log_{2}N\) complex multiplications and \(N\log_{2}N\) complex additions [29]. Each complex multiplication can be split into 3 real multiplications and 5 additions as shown in [30]. With these simplifications, the FFT/IFFT operation cost is \(3\left(\left(N/2\right)\log_{2}N\right)\) real multiplications and \(5\left(\left(N/2\right)\log_{2}N\right)+2N\log_{2}N\) real additions. A single QAM symbol detection based on Euclidean distance (22) and by separating I/Q component requires \(2\sqrt{M}\) comparisons, \(2(2\sqrt{M})\) real multiplications and \(2(3\sqrt{M})\) real additions, where \(M\) is the constellation size. The OFDM symbol detection requires then \(2N_{\mathrm{U}}\sqrt{M}\) comparisons, \(4N_{\mathrm{U}}\sqrt{M}\) real multiplications and \(6N_{\mathrm{U}}\sqrt{M}\) real additions. The precoding for a single front-end in a single-user case requires \(\mathrm{N}_{\mathrm{U}}\) complex multiplications, which translates to \(3N_{\mathrm{U}}\) real multiplications and \(5N_{\mathrm{U}}\) additions. A similar number of operations is required by the equalization and SISO channel propagation. Division by \(\alpha\) coefficient requires two real divisions for each complex sample in \(N_{\mathrm{U}}\) long vector. Processing by a single nonlinear front-end requires \(N\) comparisons, \(2N\) multiplications and \(N\) additions. When the sample power exceeds the \(P_{\mathrm{max}}\) threshold it is multiplied by the square root of saturation power divided by the sample power. The CORDIC algorithm is employed to calculate the square root, which according to [31] requires 1 table lookup, 2 shifts and 3 real additions per iteration for a fixed point approximation. The number of iterations depends on the desired precision of the result, with each iteration corresponding to a single bit. Assuming the use of single precision floating arithmetic the number of iterations required by CORDIC is set to 23 [32], resulting in 23 table lookups 46 shifts, and 69 real additions. This adds \(2N\) real multiplications, \(N\) divisions and \(69N\) additions to the complexity of the operation. Table I presents a summarized number of operations for each signal processing step. The computational complexity of considered receivers is shown in Tab. II. Table III presents the total number of arithmetic operations required for a given number of iterations of the CNC and MCNC algorithm for \(M=64,N=4096,N_{\mathrm{U}}=2048,K=64\). The values presented for the 0-th iteration correspond to the standard receiver, which performs equalization and demodulation. It can be seen that the complexity of the MCNC algorithm grows rapidly with the number of iterations and is substantially higher due to individual signal processing for each of the transmit antennas in the system. On the other hand, CNC algorithm complexity is relatively close to the standard receiver, which may advocate its application. 
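The per-block operation counts quoted in this section can be collected into a small calculator, shown below as a sketch. It only encodes the building-block costs stated in the text (radix-2 FFT/IFFT, QAM detection, per-front-end precoding, and the soft limiter with CORDIC square roots); how these blocks combine into the totals of Tables I-III is not reproduced here, and the function name is an assumption for the example.

```python
import math

def block_costs(N, N_u, M):
    """Real-operation counts of the basic processing blocks described in Sec. III."""
    log2N = int(math.log2(N))
    sqrtM = math.isqrt(M)                     # assumes a square QAM constellation
    return {
        "fft_ifft":      {"mul": 3 * (N // 2) * log2N,
                          "add": 5 * (N // 2) * log2N + 2 * N * log2N},
        "qam_detection": {"cmp": 2 * N_u * sqrtM,
                          "mul": 4 * N_u * sqrtM,
                          "add": 6 * N_u * sqrtM},
        "precoding":     {"mul": 3 * N_u, "add": 5 * N_u},   # one front-end, single user
        "soft_limiter":  {"cmp": N, "mul": 2 * N, "add": N},
        "cordic_sqrt":   {"mul": 2 * N, "div": N, "add": 69 * N},
    }

print(block_costs(N=4096, N_u=2048, M=64)["fft_ifft"])
```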
Keep in mind that the additional arithmetical operations, in relation to the standard OFDM receiver, will cause OFDM symbol reception delay dependent on the computational capabilities of the receiver. ## IV Simulation results The performance of considered clipping noise cancellation algorithms is evaluated by computer simulations. The transmitting end is a uniform linear array with an inter-element spacing of half wavelength. Each antenna is modeled as an omnidirectional radiator with a gain of 0 dBi. The transmitter end was positioned 15 m above the ground level. Tab. IV presents the details concerning the simulation setup. Each front-end amplifier was modeled as a soft limiter with identical cutoff power. The receiver was placed 300 m from the TX at an azimuth of 45\({}^{\circ}\) and 1.5m above the ground level. If not stated differently, perfect channel state information is available both at the transmitter and receiver. The transmitter employs MRT precoding. We consider mostly 3 types of radio channels: 1) LOS: modeled as an attenuation of the free space and phase rotation resulting from the distance between each transmitting antenna and the receiver; 2) Two-path: apart from the direct path it includes an additional one corresponding to the reflection from the ground with a reflection coefficient equal to \(-1\). The point of reflection is calculated taking into consideration the location of the receive (RX) and TX elements; 3) Rayleigh: modeled as independent, identically distributed complex Gaussian variables for each subcarrier and antenna. Each result is obtained after transmitting approximately 800 OFDM symbols with independent modulating symbols. For the Rayleigh channel, each symbol is transmitted through an independently generated channel. For the LOS and two-path channels for each symbol, the position of the receiver is picked randomly within a 10m square centered at the reference position. ### _Results_ First, we show in Fig. 4 values of estimated and analytical \(\alpha_{k}\) with respect to \(\mathrm{IBO}_{k}\) for \(\mathrm{IBO}=0\) dB. Recall that \(\mathrm{IBO}_{k}\) is IBO calculated individually for each TX antenna considering the utilized precoding vectors. It is visible that for all considered channels the \(\alpha_{k}\) values vary slightly among front-ends. Most importantly, in all the cases the estimated \(\alpha_{k}\) value follows the analytical result of (13) as discussed in Sec. II-D. The value of \(\alpha_{k}\) depends only on \(\mathrm{IBO}\) of each individual front-end. Next, the signal-to-distortion ratio was plotted against the IBO for selected channels as shown in Fig. 5. While the MRT precoding is expected to provide \(10\log_{10}(K)\) dB gain of the wanted signal, at the same time it can increase the power of nonlinear distortion arriving at the receiving antenna [4]. This happens both for LOS and two-path channels as increasing the number of antennas does not change the SDR value. Only for the considered Rayleigh channel, the nonlinear distortion can be reduced by increasing \(K\) as expected in [3]. However, keep in mind that the considered Rayleigh channel model is independent and identically distributed both among antennas and subcarriers. A similar effect can be observed if multiple users are served in parallel, i.e., this improves the SDR performance with respect to single-user precoding [6]. 
This shows that while utilization of a massive number of antennas can combat many phenomena, e.g., high path-loss or channel fadings, there is still in some scenarios a need for solutions removing the impact of nonlinear PAs. We consider single-user precoding as the most challenging from a nonlinear distortion perspective. Fig. 4: \(\mathrm{IBO}_{k}\) and \(\alpha_{k}\) values of individual antenna front-ends for \(K=64\) and selected channels. Fig. 5: SDR with respect to IBO for selected channels and a number of antennas. In order to present gains from MCNC and CNC methods, we start by fixing IBO to 0 dB (significant nonlinear distortion), \(K\) to 64, and testing BER for varying Eb/N0 and a number of RX iterations. The results for LOS, two-path, Rayleigh, and 3GPP 38.901 Urban Macrocell LOS, and NLOS [20] channels are presented in Fig. 6, 7, 8, 9, and 10, respectively. The 3GPP channels are generated using Quadriga [33]. First, it is visible that results for LOS and two-path channels are very close to each other in all considered scenarios, revealing significant distortions level resulting in BER close to \(10^{-1}\) for standard RX in the whole observation range. This shows, similarly to Fig. 5, that not only a LOS channel, as shown in [4], but also a sparse multi-path channel can suffer from nonlinear distortion in mMIMO systems. Observe that in the case of the Rayleigh channel the directly received distorted signal (0th iteration) achieves much lower BER for the same Eb/N0 in relation to LOS or a two-path channel. This is the result of antenna array gain improving SDR as has been shown in Fig. 5. Secondly, for all considered channels MCNC allows to achieve the BER limit observed for a system without nonlinear distortion (_No dist_ in figures) for high Eb/N0 after no more than 8 iterations. The BER improvement increases with the number of RX iterations. However, this happens at the cost of significant computational complexity as the receiver has to emulate the signal processing of all considered TX-RX links. Significantly lower computational complexity and a lower amount of control information are required by the CNC algorithm. As visible in Fig. 6, and Fig. 7 the CNC algorithm allows for significantly improved BER for LOS and two-path channels. However, the performance is slightly worse than for the MCNC algorithm. After the 8th iteration for BER = \(10^{-5}\) the loss equals about 2 dB in Eb/N0. For the considered Rayleigh channel the utilization of the MCNC algorithm results in _No dist_ performance. On the other hand the CNC algorithm increases BER. While there is an independent random channel coefficient on each subcarrier for each TX antenna, the MRT precoding coefficient varies similarly influencing samples of nonlinear distortion, i.e., \(\sum_{k=1}^{K}h_{k,n}d_{k,n}\) in (17). While the CNC algorithm is unaware of the precoding it is reconstructing the clipping noise that is significantly different than the real one deteriorating reception performance. Fig. 9 shows the BER vs Eb/N0 curve with the 3GPP Urban Macrocell LOS channel. It can be seen that the CNC algorithm still offers improvement in regard to the standard RX, though, due to frequency selective fading its gains are significantly limited. The MCNC takes into consideration the fading and is able to efficiently remove the distortion with a few iterations obtaining _No dist_ performance for higher Eb/N0 values. The results for the NLOS version of the 3GPP channel are shown in Fig. 10. 
The NLOS case can be observed to exhibit some SDR increase by the array gain as the \(0\)-th iteration curve is lower than in the 3GPP LOS case. Similarly to the ideal Rayleigh channel the CNC algorithm does not work and MCNC needs only a few iterations to reach the floor corresponding to the no distortion case. Next, the CNC and MCNC algorithms were evaluated in the presence of 5G NR-compliant low-density parity check (LDPC) coding [34]. The coding and decoding is performed with the use of Matlab nrDLSCH package [35]. Utilized LDPC coding follows 5G NR Shared Channel processing, e.g., embedding cyclic redundancy check (CRC) bits. The code parameters before the rate matching are as follows: single code block, 104 filler bits, 192 lifting size, 4224 bits per code block, Fig. 8: BER vs Eb/N0 for IBO = 0 dB, \(K=64\) antennas, Rayleigh channel and a selected number of iterations of the CNC and MCNC algorithm. Fig. 6: BER vs Eb/N0 for IBO = 0 dB, \(K=64\) antennas, LOS channel and a selected number of iterations of the CNC and MCNC algorithm. Fig. 7: BER vs Eb/N0 for IBO = 0 dB, \(K=64\) antennas, two-path channel and a selected number of iterations of the CNC and MCNC algorithm. and 12672 bits per code block after LDPC coding for code rate 1/3, and single code block, 232 filler bits, 384 lifting size, 8448 bits per code block, and 25344 bits per code block after LDPC coding for code rate 2/3. The decoding algorithm is the belief propagation. Figure 11 shows the BER curves of the CNC and MCNC algorithms for two code rates of 1/2 and 1/3 in the LOS channel. The algorithms do not offer any gains for the lower code rate (1/3) and each iteration increases the error rate. This is caused by the LDPC decoder having a waterfall region before the CNC/MCNC algorithms start to improve signal quality on the LDPC decoder input. For the higher code rate, both CNC and MCNC algorithms provide significant quality improvement with respect to the standard RX (0th iteration). As such the proposed CNC/MCNC algorithms can be useful for a coded system but require wise modulation and coding scheme selection for a given nonlinearity and channel distortion conditions. The scheme might be further improved by introducing the LDPC decoder and encoder inside the MCNC/CNC loop as in [36]. Next, the proposed RX algorithms are tested for varying PA operating points, i.e., IBO. Figure 12 and 13 visualize the gains of the CNC and MCNC algorithm for a fixed BER value equal to \(10^{-2}\) in regard to both Eb/N0 and IBO. This form of presentation allows to evaluate the gains from using a specific number of iterations. Given the IBO it is possible to estimate the margin by which the Eb/N0 requirements can be reduced for a certain number of iterations and vice versa. For direct visibility channels: LOS and two-path only the results for two-path are shown as the results are highly identical and differ only up to the accuracy of the simulations. In Fig. 12 it can be observed that for these channels the gains from using the MCNC algorithm over standard CNC become apparent since the second iteration. The lower the IBO the higher number of iterations required to meet the Eb/N0 12 dB floor which corresponds to the system without nonlinear distortion. For the Rayleigh channel and MCNC reception in Fig. 13 required Eb/N0 curve is almost flat for any value of IBO from the range. The first iteration of the MCNC offers minimal improvement. 
This is due to a high number of antennas \(K=64\), which translates into higher SDR in the Rayleigh channel, as could be seen in Fig. 5, lessening the severity of the impact of nonlinear distortion on the received signal and allowing the algorithm to work with less nonlinear distortion interference. Figure 14 presents a comparison between CNC and MCNC Fig. 11: BER vs Eb/N0 for IBO = 0 dB, K = 64 antennas, LOS channel, two code rates of LDPC channel coding and a selected number of iterations of the CNC and MCNC algorithm. Fig. 10: BER vs Eb/N0 for IBO = 0 dB, \(K=64\) antennas, 38.901 Urban Macrocell NLOS channel and a selected number of iterations of the CNC and MCNC algorithm. Fig. 9: BER vs Eb/N0 for IBO = 0 dB, \(K=64\) antennas, 38.901 Urban Macrocell LOS channel and a selected number of iterations of the CNC and MCNC algorithm. algorithms taking into consideration the channel type, number of RX iterations, and number of antennas \(K\). The first observation can be a significant decrease in BER for the Rayleigh channel with the number of antennas. This effect is due to precoding gains which increase the SDR with the number of antennas as \(10\log_{10}\left(K\right)\). As expected from previous results, while the MCNC helps to improve the BER performance, the CNC algorithm increases BER in this scenario. For a high number of antennas in the Rayleigh channel, the SDR gains allow the MCNC algorithm to quickly converge within a single iteration to the noise-limited bound denoted as _No dist_. On the other hand, the CNC algorithm works well for LOS and two-path channels achieving BER slightly higher than the MCNC algorithm. Again, the performance of LOS and two-path channels is nearly identical. An interesting observation for these channels is that while the BER performance for both iterative RX algorithms remains constant up to about \(K=16\) antennas it starts to slightly decrease for greater \(K\) and a greater number of RX iterations. For a high number of iterations, e.g, 8, this phenomenon vanishes, with the MCNC algorithm performing close to the noise-limited bound. Figure 15 presents BER after \(I\) iterations of CNC and MCNC algorithm (BER out) as a function of BER on the input, i.e., obtained with a standard receiver (BER in). Two values of Eb/N0 are tested while varying IBO values resulting in a range of input BER values. The closer a given result of the CNC/MCNC algorithm is to the _no gain_ line the smaller BER improvement is obtained. It is visible that in the case of Eb/N0 of 15 dB the system cannot reduce output BER below around \(10^{-3}\), being the noise-caused error level. As expected, increasing the number of iterations reduces in most cases the achievable output BER. This effect is more significant when the nonlinear distortion is the dominating distortion in the system, e.g., here for Eb/N0 equal \(\infty\). Most importantly, the BER in value for which the curves start to deviate from the no-gain diagonal can be considered as a BER threshold from which the CNC/MCNC algorithms start to _work_. In this case it is around BER in of \(10^{-1}\). Figure 16 presents the evolution of BER at the output of the CNC/MCNC algorithms as a function of a number of iterations. It is visible that for a given Eb/N0 value the CNC/MCNC algorithms converge the faster the lower nonlinear distortion power is present. The convergence is slightly faster for the MCNC algorithm. Moreover, the lower the thermal noise the faster convergence is possible. 
Figure 17 presents the impact of the channel state information (CSI) error on the performance of the CNC and MCNC algorithms in an ideal LOS channel. The CSI error is modeled as in [37] with parameter \(\varepsilon\in\left\langle 0;1\right\rangle\) giving the estimated channel coefficient \(\hat{h}_{k,n}=\sqrt{1-\varepsilon^{2}}h_{k,n}+\varepsilon w_{k,n}\), where \(w_{k,n}\) is the white noise sample with the power corresponding Fig. 14: BER vs the number of antennas \(K\), for LOS, two-path, and Rayleigh channels, for Eb/N0 = 15 dB, IBO = 0 dB and a selected number of iterations of the CNC, and MCNC. Fig. 13: Eb/N0 vs IBO for a fixed BER = \(10^{-2}\), \(K=64\) antennas, Rayleigh channel and a selected number of iterations of the CNC and MCNC. Fig. 15: BER out vs BER in after \(I\) iterations, \(K=64\) antennas, LOS channel, varying IBO for selected values of Eb/N0 and MCNC/CNC iterations. to the average gain of the channel for the data subcarriers \(w_{k,n}=\mathcal{CN}(0,1)\sqrt{\frac{\sum_{n\in\mathcal{N}}\|h_{k,n}\|^{2}}{N_{ \mathrm{U}}}}\) and \(\mathcal{CN}(0,1)\) represents a complex normal variable with expected value 0 and variance 1. The inaccurate channel estimate denoted as \(\hat{h}_{k,n}\) is used both at the base station for precoding and at the receiver within the MCNC algorithm loop. With the increasing value of \(\varepsilon\) the gains of the algorithms are smaller and shifted towards smaller values of BER in. The CNC and MCNC algorithms exhibit relatively high tolerance to channel estimation errors offering gains for \(\varepsilon\) up to 0.3. Finally, the performance of the proposed CNC and MCNC receiver has been tested for a scenario with two users allocated at the same subcarriers. As explained in Sec. II-I, the CNC/MCNC algorithms are still the single-user versions that treat the other user interference as noise. Fig. 18 presents the BER performance of the CNC and MCNC algorithms while using MRT precoding. The two users are located at azimuths -30\({}^{\circ}\) and 30\({}^{\circ}\) from the array. User 1 is located closer to the array and user 2 is further away with a path loss difference of 10 dB between them. MRT precoding allocates power to users proportionally to the channel magnitude. The reference, no-distortion curves differ between users due to different levels of inter-user interference. It is visible that BER reduction is obtained by CNC and MCNC only for user 1, while the CNC/MCNC algorithm increases BER for the other user. The failure of the CNC/MCNC algorithm comes from the inter-user interference, both its linear and nonlinear component, that the proposed algorithms do not remove. For user no. 1 the ratio between signal and interference power is higher, resulting in a lower BER value in iteration 0, enabling successful CNC/MCNC operation. ## V Conclusions It has been shown that the MRT precoding using a high number of antennas does not offer any SDR improvement in the presence of front-end nonlinearity for direct visibility channels, severely limiting the performance of the mMIMO system. In this work, we have proposed the MCNC algorithm that is able to combat even severe nonlinear distortion in the downlink receiver of the mMIMO OFDM system. The system was tested for MRT precoding, single and two user scenarios and a few types of channels. While the MCNC algorithm is relatively complex and requires a high amount of information, its simplified version was introduced. 
The simulations have shown that for direct-visibility channels (LOS and two-path) the performance penalty of the simplified algorithm is not substantial, so it can be effectively utilized. An interesting future step would be to improve the mMIMO OFDM reception performance by leveraging the frequency diversity of nonlinear distortion, as done for an OFDM system in [18].
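For reference, below is a minimal numpy sketch of the CSI error model used in the Fig. 17 experiment above, \(\hat{h}_{k,n}=\sqrt{1-\varepsilon^{2}}h_{k,n}+\varepsilon w_{k,n}\). The helper name, array shapes, and toy values are our own illustrative assumptions, not the authors' simulator.

```python
import numpy as np

def add_csi_error(h, eps, n_used, rng=None):
    """Corrupt channel coefficients h (K antennas x N data subcarriers) with
    h_hat = sqrt(1 - eps^2) * h + eps * w, where w is complex Gaussian noise
    scaled to the average per-antenna channel gain over the data subcarriers."""
    rng = np.random.default_rng(0) if rng is None else rng
    # average gain over used (data) subcarriers, computed per antenna
    scale = np.sqrt(np.sum(np.abs(h) ** 2, axis=1, keepdims=True) / n_used)
    w = (rng.standard_normal(h.shape) + 1j * rng.standard_normal(h.shape)) / np.sqrt(2) * scale
    return np.sqrt(1.0 - eps ** 2) * h + eps * w

# Toy usage: K = 4 antennas, N_U = 8 data subcarriers, 30% estimation error
rng = np.random.default_rng(1)
h = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)
h_hat = add_csi_error(h, eps=0.3, n_used=8)
```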
2301.13694
Are Defenses for Graph Neural Networks Robust?
A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs). Yet, the standard methodology has a serious flaw - virtually all of the defenses are evaluated against non-adaptive attacks leading to overly optimistic robustness estimates. We perform a thorough robustness analysis of 7 of the most popular defenses spanning the entire spectrum of strategies, i.e., aimed at improving the graph, the architecture, or the training. The results are sobering - most defenses show no or only marginal improvement compared to an undefended baseline. We advocate using custom adaptive attacks as a gold standard and we outline the lessons we learned from successfully designing such attacks. Moreover, our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.
Felix Mujkanovic, Simon Geisler, Stephan Günnemann, Aleksandar Bojchevski
2023-01-31T15:11:48Z
http://arxiv.org/abs/2301.13694v1
# Are Defenses for Graph Neural Networks Robust? ###### Abstract A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs). Yet, the standard methodology has a serious flaw - virtually all of the defenses are evaluated against non-adaptive attacks leading to overly optimistic robustness estimates. We perform a thorough robustness analysis of 7 of the most popular defenses spanning the entire spectrum of strategies, i.e., aimed at improving the graph, the architecture, or the training. The results are sobering - most defenses show no or only marginal improvement compared to an undefended baseline. We advocate using custom adaptive attacks as a gold standard and we outline the lessons we learned from successfully designing such attacks. Moreover, our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness. ## 1 Introduction The vision community learned a bitter lesson - we need specific carefully crafted attacks to properly evaluate the adversarial robustness of a defense. Consequently, adaptive attacks are considered the gold standard [44]. This was not always the case; until recently, most defenses were tested only against relatively weak static attacks. The turning point was Carlini & Wagner [3]'s work showing that 10 methods for detecting adversarial attacks can be easily circumvented. Shortly after, Athalye et al. [1] showed that 7 out of the 9 defenses they studied can be broken since they (implicitly) rely on obfuscated gradients. So far, this bitter lesson is completely ignored in the graph domain.
Figure 1: Adaptive attacks draw a different picture of robustness. All defenses are less robust than reported, with an undefended GCN [33] outperforming some. We show results on Cora ML for both poisoning (attack before training) and evasion (attack after training), and both the global (attack the test set jointly) and local (attack individual nodes) settings. The perturbation budget is relative w.r.t. the #edges for global attacks (5% evasion, 2.5% poisoning) and w.r.t. the degree for local attacks (100%). In (a)/(b) SVD-GCN is catastrophically broken – our adaptive attacks reach 24%/9% (not visible). Note that our non-adaptive attacks are already stronger than what is typically used (see § 5).
Virtually no existing work that proposes an allegedly robust Graph Neural Network (GNN) evaluates against adaptive attacks, leading to overly optimistic robustness estimates. To show the seriousness of this methodological flaw we categorize 49 works that propose a robust GNN and are published at major conferences/journals. We then choose one defense per category (usually the most highly cited). Not surprisingly, we show that none of the assessed models are as robust as originally advertised in their respective papers. In Fig. 1 we summarize the results for 7 of the most popular defenses, spanning the entire spectrum of strategies (i.e., aimed at improving the graph, the architecture, or the training, see Table 1). We see that in both local and global settings, as well as for both evasion and poisoning, the adversarial accuracy under our adaptive attacks is significantly smaller compared to the routinely used non-adaptive attacks. Even more troubling is that many of the defenses perform worse than an undefended baseline (a vanilla GCN [33]). Importantly, the 7 defenses are not cherry-picked.
We report the results for each defense we assessed and selected each defence before running any experiments. Adversarial robustness measures the local generalization capabilities of a model, i.e., sensitivity to (bounded) worst-case perturbations. Certificates typically provide a lower bound on the actual robustness while attacks provide an upper bound. Since stronger attacks directly translate into tighter bounds our goal is to design the strongest attack possible. Our adaptive attacks have perfect knowledge of the model, the parameters, and the data, including all defensive measures. In contrast, non-adaptive attacks (e.g., transferred from an undefended proxy or an attack lacking knowledge about defense measures) only show how good the defense is at suppressing a narrow subset of input perturbations.2 Footnote 2: From a security perspective non-adaptive attacks (typically transfer attacks) are also relevant since a real-world adversary is unlikely to know everything about the model and the data. Tramer et al. [44] showed that even adaptive attacks can be tricky to design with many subtle challenges. The graph domain comes with additional challenges since graphs are typically sparse and discrete and the representation of any node depends on its neighborhood. For this reason, we describe the recurring themes, the lessons learned, and our systematic methodology for designing strong adaptive attacks for all examined models. Additionally, we find that defenses are _sometimes_ sensitive to a common attack vector and transferring attacks can also be successful. Thus, the diverse collection of perturbed adjacency matrices resulting from our attacks forms a (black-box) unit test that any truly robust model should pass before moving on to adaptive evaluation. In summary: * We survey and categorize _49 defenses_ published across prestigious machine learning venues. * We design custom attacks for 7 defenses (14%), covering the spectrum of defense techniques. All examined models forfeit a large fraction of previously reported robustness gains. * We provide a transparent methodology and guidelines for designing strong adaptive attacks. * Our collection of perturbed graphs can serve as a robustness unit test for GNNs. ## 2 Background and preliminaries We follow the most common setup and assume GNN [20; 33] classifiers \(f_{\theta}(\mathbf{A},\mathbf{X})\) that operate on a symmetric binary adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\) with binary node features \(\mathbf{X}\in\{0,1\}^{n\times d}\) and node labels \(\mathbf{y}\in\{1,2,\ldots,C\}^{n}\) where \(C\) is the number of classes, \(n\) is the number of nodes, and \(m\) the number of edges. A poisoning attack perturbs the graph (flips edges) prior to training, optimizing \[\max_{\mathbf{A}\in\Phi(\mathbf{A})}\ell_{\text{attack}}(f_{\theta^{*}}( \tilde{\mathbf{A}},\mathbf{X}),\mathbf{y})\quad\text{s.t.}\quad\theta^{*}= \arg\min_{\theta}\ell_{\text{train}}(f_{\theta}(\tilde{\mathbf{A}},\mathbf{X} ),\mathbf{y}) \tag{1}\] where \(\ell_{\text{attack}}\) is the attacker's loss, which is possibly different from \(\ell_{\text{train}}\) (see SS 4). In an evasion attack, \(\theta^{*}\) is kept fixed and obtained by training on the clean graph \(\min_{\theta}\ell_{\text{train}}(f_{\theta}(\mathbf{A},\mathbf{X}),\mathbf{y})\). 
In both cases, the locality constraint \(\Phi(\mathbf{A})\) enforces a budget \(\Delta\) by limiting the perturbation to an \(L_{0}\)-ball around the clean adjacency matrix: \(\|\tilde{\mathbf{A}}-\mathbf{A}\|_{0}\leq 2\Delta\). Attacks on \(\mathbf{X}\) also exist, however, this scenario is not considered by the vast majority of defenses. For example, only one out of the seven examined ones also discusses feature perturbations. We refer to SS D for more details on adaptive feature attacks. **Threat model.** Our attacks aim to either cause misclassification of the entire test set (_global_) or a single node (_local_). To obtain the strongest attack possible (i.e., tightest robustness upper bound), we use white-box attacks. We do not constrain the attacker beyond a simple budget constraint that enforces a maximum number of perturbed edges. For our considerations on unnoticeability, see SS A. **Greedy attacks.** Attacking a GNN typically corresponds to solving a constrained discrete non-convex optimization problem that - evident by this work - is hard to solve. Commonly, approximate algorithms are used to to tackle these optimization problems. For example, the single-step Fast Gradient Attack (FGA) flips the edges whose gradient (i.e., \(\nabla_{\mathbf{A}}\ell_{\text{train}}(f_{\theta^{*}}(\mathbf{A},\mathbf{X}), \mathbf{y})\)) most strongly indicates so. On the other hand, Nettack [67] and Metattack [66] are greedy multi-step attacks. The greedy approaches have the nice side-effect that an attack for a high budget \(\Delta\) directly gives all attacks for budgets lower than \(\Delta\). On the other hand, they tend to be relatively weaker. **Projected Gradient Descent (PGD).** Alternatively, PGD [53] has been applied to GNNs where the discrete adjacency matrix is relaxed to \([0,1]^{n\times n}\) during the gradient-based optimization and the resulting weighted change reflects the probability of flipping an edge. After each gradient update, the changes are projected back such that the budget holds in expectation \(\|\mathbb{E}[\mathbf{\hat{A}}]-\mathbf{A}\|_{0}\leq 2\Delta\). Finally, multiple samples are obtained and the strongest perturbation \(\mathbf{\hat{A}}\) is chosen that obeys the budget \(\Delta\). The biggest caveats while applying \(L_{0}\)-PGD are the relaxation gap and limited scalability (see Geisler et al. [17] for a detailed discussion and a scalable alternative). **Evasion vs. poisoning.** Evasion can be considered the easier setting from an attack perspective since the model is fixed \(f_{\theta^{*}}\). For poisoning, on the other hand, the adjacency matrix is perturbed before training (Eq. 1). Two general strategies exist for poisoning attacks: (1) transfer a perturbed adjacency matrix from an evasion attack [67]; or (2) attack directly by, e.g., unrolling the training procedure to obtain gradients through training [66]. Xu et al. [53] propose to solve Eq. 1 with alternating optimization which was shown to be even weaker than the evasion transfer (1). Note that evasion is particularly of interest for inductive learning and poisoning for transductive learning. ## 3 Adversarial defenses We select the defenses s.t. we capture the entire spectrum of methods improving robustness against structure perturbations. For the selection, we extend the taxonomy proposed in [21]. We selected the subset without cherry-picking based on the criteria elaborated below before experimentation. 
**Taxonomy.** The top-level categories are _improving the graph_ (e.g., preprocessing), _improving the training_ (e.g., adversarial training or augmentations), and _improving the architecture_. Many defenses for structure perturbations either fall into the category of improving the graph or adaptively weighting down edges through an improved architecture. Thus, we introduce further subcategories. Similar to [21]'s discussion, unsupervised improvement of the graph finds clues in the node features and graph structure, while supervised improvement incorporates gradient information from the learning objective. Conversely, for adaptive edge weighting, we identify three prevalent approaches: rule-based (e.g., using a simple metric), probabilistic (e.g., modeling a latent distribution), and robust aggregations (e.g., with guarantees). We assign each defense to the most fitting taxon (details in SS B). **Selected defenses.** To evaluate a diverse set of defenses, we select one per leaf taxon.3 We prioritize highly cited defenses published at renowned venues with publicly available code. We implement all defenses in one unified pipeline. We present the categorization of defenses and our selection in Table 1. Similarly to Tramer et al. [44], we exclude defenses in the "robust training" category (see SS C for a discussion). Two of the three models in the "miscellaneous" category report some improvement in robustness, but they are not explicitly designed for defense purposes so we exclude them from our study. Some works evaluate only against evasion [48], others only poisoning [12; 15; 58], and the rest tackle both [17; 30; 63]. In some cases the evaluation setting is not explicitly stated and inferred by us. For completeness, we consider each defense in all four settings (local/global and evasion/poisoning). Next, we provide a short summary of the key ideas behind each defense (details in SS E). Footnote 3: The only exception is unsupervised graph improvement, as it contains two of the most popular approaches, which rely on orthogonal principles. One filters edges based on the node features [48], the other uses a low-rank approximation of the adjacency matrix [12]. **Improving the graph.** The feature-based _Jaccard-GCN_[48] uses a preprocessing step to remove all edges between nodes whose features exhibit a Jaccard similarity below a certain threshold. This was motivated by the homophily assumption which is violated by prior attacks that tend to insert edges between dissimilar nodes. The structure-based _SVD-GCN_[12] replaces the adjacency matrix with a low-rank approximation prior to plugging it into a regular GNN. This defense was motivated by the observation that the perturbations from Nettack tend to disproportionately affect the high-frequency spectrum of the adjacency matrix. The key idea in _ProGNN_[30] is to learn the graph structure by alternatingly optimizing the parameters of the GNN and the adjacency matrix (the edge weights). The loss for the latter includes the standard cross-entropy loss, the distance to the original graph, and three other objectives designed to promote sparsity, low rank, and feature smoothness. **Improving the training.**_GRAND_[15] relies on random feature augmentations (zeroing features) coupled with neighbourhood augmentations \(\mathbf{\bar{X}}=(\mathbf{A}\mathbf{X}+\mathbf{A}\mathbf{A}\mathbf{X}+\cdots)\). All randomly augmented copies of \(\mathbf{\bar{X}}\) are passed through the same MLP that is trained with a consistency regularization loss. 
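To make the GRAND description above more concrete, the following is a rough sketch of its random feature augmentation combined with the neighbourhood augmentation \(\mathbf{\bar{X}}=(\mathbf{A}\mathbf{X}+\mathbf{A}\mathbf{A}\mathbf{X}+\cdots)\). The function name, the dropout-style rescaling, and the choice of normalized adjacency are our own assumptions for illustration; the exact number of hops and scaling in the real method may differ.

```python
import torch

def grand_augmentation(adj_norm, X, hops=3, drop_prob=0.5):
    """Sketch of a GRAND-style augmentation: randomly zero whole node feature
    vectors, then sum multi-hop propagations A X + A^2 X + ... .

    adj_norm: (n, n) normalized adjacency matrix (assumed preprocessed)
    X:        (n, d) node feature matrix
    """
    keep = (torch.rand(X.shape[0], 1) > drop_prob).float()
    H = X * keep / (1.0 - drop_prob)   # zero random nodes, rescale (our choice)
    out = torch.zeros_like(X)
    cur = H
    for _ in range(hops):
        cur = adj_norm @ cur           # one more propagation hop
        out = out + cur
    return out

# Each randomly augmented copy would then be passed through the same MLP and
# trained with a consistency regularization loss across the copies.
```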
**Improving the architecture.**_GNNGuard_[58] filters edges in each message passing aggregation via cosine-similarity (smoothed over layers). In the first layer of _RGCN_[63] we learn a Gaussian distribution over the feature matrix and the subsequent layers then manipulate this distribution (instead of using point estimates). For the loss we then sample from the resulting distribution. In addition, in each layer, RGCN assigns higher/lower weights to features with low/high variance. _Soft-Median-GDC_[17] replaces the message passing aggregation function in GNNs (typically a weighted mean) with a more robust alternative by relaxing the median using differentiable sorting. **Common themes.** One theme shared by some defenses is to first discover some property that can discriminate clean from adversarial edges (e.g., high vs. low feature similarity), and then propose a strategy based on that property (e.g., filter low similarity edges). Often they analyze the edges from only a single attack such as Nettack [67]. The obvious pitfall of this strategy is that the attacker can easily adapt by restricting the adversarial search space to edges that will bypass the defense's (implicit) filter. Another theme is to add additional loss terms to promote some robustness objectives. Similarly, the attacker can incorporate the same terms in the attack loss to negate their influence. ## 4 Methodology: How to design strong adaptive attacks In this section, we describe our general methodology and the lessons we learned while designing adaptive attacks. We hope these guidelines can serve as a reference for testing new defenses. **Step 1 - Understand how the defense works** and categorize it. For example, some defenses rely on preprocessing which filters out edges that meet certain criteria (e.g., Jaccard-GCN [48]). Others introduce additional losses during training (e.g., GRAND [15]) or change the architecture (e.g., RGCN [63]). Different defenses might need different attacks or impose extra requirements on them. **Step 2 - Probe for obvious weaknesses.** Some examples include: (a) transfer adversarial edges from another (closely related) model (see also SS 6); (b) use a gradient-free (black-box) attack. For example, in our local experiments, we use a _Greedy Brute Force_ attack: in each step, it considers all possible single edge flips and chooses the one that contributes most to the attack objective (details in SS A). **Step 3 - Launch a gradient-based adaptive attack.** For rapid prototyping, use a comparably cheap attack such as FGA, and later advance to stronger attacks like PGD. For poisoning, strongly consider meta-gradient-based attacks like Metattack [66] that unroll the training procedure, as they almost always outperform just transferring perturbations from evasion. Unsurprisingly, we find that applying PGD [53] on the meta gradients often yields even stronger attacks than the greedy Metattack, and we refer to this new attack as _Meta-PGD_ (details in SS A). 
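To make Step 3 of the methodology below concrete, here is a minimal sketch of the single-step FGA used for rapid prototyping. The `model` callable (a dense-matrix GNN returning logits), the plain cross-entropy attack loss, and the helper name are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def fast_gradient_attack(model, adj, X, y, budget):
    """Single-step FGA sketch: rank edge flips by the gradient of the attack loss
    w.r.t. the adjacency matrix and flip the top-`budget` undirected edges.

    adj: dense symmetric float adjacency matrix with {0, 1} entries, shape (n, n)
    """
    n = adj.shape[0]
    adj_var = adj.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(adj_var, X), y)      # attack loss (plain CE here)
    grad = torch.autograd.grad(loss, adj_var)[0]
    grad = grad + grad.T                              # treat (i, j) and (j, i) jointly
    # Gain of flipping an edge: a positive gradient favours adding (0 -> 1),
    # a negative one favours removing (1 -> 0).
    gain = grad * (1 - 2 * adj)
    mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
    gain = gain.masked_fill(~mask, float("-inf"))     # each edge once, no self-loops
    top = torch.topk(gain.flatten(), budget).indices
    perturbed = adj.clone()
    rows, cols = top // n, top % n
    perturbed[rows, cols] = 1 - perturbed[rows, cols]
    perturbed[cols, rows] = perturbed[rows, cols]
    return perturbed.detach()
```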
\begin{table} \begin{tabular}{l l|l l l} \hline \hline & Taxonomy & Selected Defenses & Other Defenses \\ \hline \hline \multirow{3}{*}{Improving graph} & Unsupervised & Jaccard-GCN [48] & \multirow{3}{*}{[10, 26, 50, 59, 60]} \\ & SVD-GCN [12] & & \\ \cline{2-4} & Supervised & ProGNN [30] & [51, 43, 56] \\ \hline \multirow{3}{*}{Improving training} & Robust training & n/a (see § C) & [6, 9, 14, 22, 27, 28, 41, 52, 53, 54] \\ \cline{2-4} & Further training principles & GRAND [15] & [5, 11, 29, 39, 42, 55, 61, 64, 65] \\ \hline \multirow{3}{*}{Improving architecture} & Adaptively & Rule-based & GNNGuard [58] & [31, 36, 37, 57] \\ \cline{2-4} & weighting & Probabilistic & RGCN [63] & [8, 13, 24, 25, 38] \\ \cline{1-1} \cline{2-4} & edges & Robust agg. & Soft-Median-GDC [17] & [7, 16, 47] \\ \cline{1-1} \cline{2-4} & Miscellaneous & n/a (see above) & [40, 46, 49] \\ \hline \hline \end{tabular} \end{table} Table 1: Categorization of selected defenses. Our taxonomy extends the one by Gunnemann [21]. **Step 4 - Address gradient issues.** Some defenses contain components that are non-differentiable, lead to exploding or vanishing gradients, or obfuscate the gradients [1]. To circumvent these issues, potentially: (a) adjust the defense's hyperparameters to retain numerical stability; (b) replace the offending component with a differentiable or stable counterpart, e.g., substitute the low-rank approximation of SVD-GCN [12] with a suitable differentiable alternative; or (c) remove components, e.g., drop the "hard" filtering of edges done in the preprocessing of Soft-Median-GDC [17]. These considerations also include poisoning attacks, where one also needs to pay attention to all components of the training procedure. For example, we ignore the nuclear norm loss term in the training of ProGNN [30] to obtain the meta-gradient. Of course, keep the entire defense intact for its final evaluation on the found perturbations. **Step 5 - Adjust the attack loss.** In previous works, the attack loss is often chosen to be the same as the training loss, i.e., the cross-entropy (CE). This is suboptimal since CE is not _consistent_ according to the definition by Tramer et al. [44] - higher loss values do not indicate a stronger attack. Thus, we use a variant of the consistent Carlini-Wagner loss [4] for _local_ attacks, namely the logit margin (LM), i.e., the logit difference between the ground truth class and most-likely non-true class. However, as discussed by Geisler et al. [17], for _global_ attacks the mean LM across all target nodes is still suboptimal since it can "waste" budget on already misclassified nodes. Their tanh logit margin (TLM) loss resolves this issue. If not indicated otherwise, we either use TLM or the probability margin (PM) loss - a slight variant of LM that computes the margin after the softmax rather than before. **Step 6 - Tune the attack hyperparameters** such as the number of PGD steps, the attack learning rate, the optimizer, etc. For example, for Metattack we observed that using the Adam optimizer [32] can weaken the attack and replacing it with SGD can increase the effectiveness. **Lessons learned.** We provide a detailed description of each adaptive attack and the necessary actions to make it as strong as possible in SS E. Here, we highlight some important recurring challenges that should be kept in mind when designing adaptive attacks. (1) Numerical issues, e.g., due to division by tiny numbers can lead to weak attacks, and we typically resolve them via clamping. 
(2) In some cases we observed that for PGD attacks it is beneficial to clip the gradients to stabilize the adversarial optimization. (3) For a strong attack it is essential to tune its hyperparameters. (4) Relaxing non-differentiable components and deactivating operations that filter edges/embeddings based on a threshold in order to obtain gradients for every edge is an effective strategy. (5) If the success of evasion-poisoning transfer depends on a fixed random initialization (see SS J), it helps to use multiple clean auxiliary models trained with different random seeds for the PGD attack - in each PGD step we choose one model randomly. (6) Components that make the optimization more difficult but barely help the defense can be safely deactivated. (7) It is sometimes beneficial to control the randomness in the training loop of Meta-PGD. (8) For Meta-PGD it can help to initialize the attack with non-zero perturbations and e.g., use the perturbed graph of a different attack. **Example 1 - SVD-GCN.** To illustrate the attack process (especially steps 3 and 4) we present a case study of how we construct an adaptive attack against SVD-GCN. Gradient-free attacks like Nettack do not work well here as they waste budget on adversarial edges which are filtered out by the low-rank approximation (LRA). Moreover, to the demise of gradient-based attacks, the gradients of the adjacency matrix are very unstable due to the SVD and thus less useful. Still, we start with a gradient-based attack as it is easier to adapt, specifically FGA, whose quick runtime enables rapid prototyping as it requires only a single gradient calculation. To replace the LRA with a function whose gradients are better behaved, we first decompose the perturbed adjacency matrix \(\tilde{\mathbf{A}}=\mathbf{A}+\delta\mathbf{A}\) and, thus, only need gradients for \(\delta\mathbf{A}\). Next, we notice that the eigenvectors of \(\mathbf{A}\) usually have few large components. Perturbations along those principal dimensions are representable by the eigenvectors, hence most likely are neither filtered out nor impact the eigenvectors. Knowing this, we approximate the LRA in a tractable manner by element-wise multiplication of \(\delta\mathbf{A}\) with weights that quantify how well an edge aligns with the principal dimensions (details in SS E). In short we replace \(\mathrm{LRA}(\mathbf{A}+\delta\mathbf{A})\) with \(\mathrm{LRA}(\mathbf{A})+\delta\mathbf{A}\circ\mathrm{Weight}(\mathbf{A})\), which admits useful gradients. This approach carries over to other attacks such as Nettack - we can incorporate the weights into its score function to avoid selecting edges that will be filtered out. **Example 2 - ProGNN.** While we approached SVD-GCN with a theoretical insight, breaking a composite defense like ProGNN requires engineering and tinkering. When attacking ProGNN with PGD and transferring the perturbations to poisoning we observe that the perturbations are only effective if the model is trained with the same random seed. This over-sensitivity can be avoided by employing lesson (5) in SS 4. As ProGNN is very expensive to train due to its nuclear norm regularizer, we drop that term when training the set of auxiliary models without hurting attack strength. For unrolling the training we again drop the nuclear norm regularizer since it is non-differentiable. Sometimes PGD does not find a state with high attack loss, which can be alleviated by random restarts. 
As Meta-PGD optimization quickly stalls, we initialize it with a strong perturbation found by Meta-PGD on GCN. All of these tricks combined are necessary to successfully attack ProGNN. **Effort.** Breaking Jaccard-GCN (and SVD-GCN) required around half an hour (resp. three days) of work for the initial proof of concept. Some other defenses require various adjustments that need to be developed over time, but reusing those can quickly break even challenging defenses. It is difficult to quantify this effort, but it can be greatly accelerated by adopting our lessons learned in SS 4. In any case, we argue that authors proposing a new defense must put in reasonable effort to break it. ## 5 Evaluation of adaptive attacks First, we provide details on the experimental setup and used metrics. We then report the main results and findings. We refer to SS A for details on the base attacks, including our Greedy Brute Force and Meta-PGD approaches. We provide the code, configurations, and a collection of perturbed graphs on the project website linked on the first page. **Setup.** We use the two most widely used datasets in the literature, namely Cora ML [2] and Citeseer [19] (details in SS F). Unfortunately, larger datasets are barely possible since most defenses are not very scalable. Still, in SS N, we discuss scalability and apply an adaptive attack to arXiv (170k nodes) [23]. We repeat the experiments for five different data splits (10% training, 10% validation, 80% testing) and report the means and variances. We use an internal cluster with Nvidia GTX 1080Ti GPUs. Most experiments can be reproduced within a few hours. However, the experiments with ProGNN and GRAND will likely require several GPU days. **Defense hyperparameters.** When first attacking the defenses, we observed that many exhibit poor robustness using the hyperparameters provided by their authors. To not accidentally dismiss a defense as non-robust, we tune the hyperparameters such that the clean accuracy remains constant but the robustness w.r.t. adaptive attacks is improved. Still, we run all experiments on the untuned defenses as well to confirm we achieve this goal. In the same way, we also tune the GCN model, which we use as a reference to asses whether a defense has merit. We report the configurations and verify the success of our tuning in SS H. **Attacks and budget.** In the _global_ setting, we run the experiments for budgets \(\Delta\) of up to 15% of the total number of edges in the dataset. Due to our (R)AUC metric (see below), we effectively focus on only the lower range of evaluated budgets. We apply FGA and PGD [53] for evasion. For poisoning, we transfer the found perturbations and also run Metattack [66] and our Meta-PGD. Recall that where necessary, we adapt the attacks to the defenses as outlined in SS 4 and detailed in SS E. In the _local_ setting, we first draw sets of 20 target nodes per split with degrees 1, 2, 3, 5, 8-10, and 15-25 respectively (total of 120 nodes). This enables us to study how the attacks affect different types of nodes - lower degree nodes are often conjectured to be less robust (see also SS K). We then run the experiments for relative budgets \(\Delta\) of up to 200% of the target node's degree. For example, if a node has 10 neighbors, and the budget \(\Delta=70\%\) then the attacker can change up to \(10\cdot 0.7=7\) edges. This commonly used setup ensures that we treat both low and high-degree nodes fairly. We use Nettack [67], FGA, PGD, and our greedy brute force attack for evasion. 
For poisoning, we only transfer the found perturbations. Again, we adapt the attacks to the defenses if necessary. In alignment with our threat model, we evaluate each found perturbation by the test set accuracy it achieves (_global_) or the ratio of target nodes that remain correctly classified (_local_). For each budget, we choose the strongest attack among all attempts (e.g., PGD, Metattack, Meta-PGD). This gives rise to an envelope curve as seen in Fig. 3. We also include lower budgets as attempts, i.e., we enforce the envelope curve to be monotonically decreasing. We introduce a rich set of attack characteristics by also transferring the perturbations supporting the envelope curve to every other defense. These transfer attacks then also contribute to the final envelope curve of each defense, but in most cases their contribution is marginal. **Non-adaptive attacks.** We call any attack "non-adaptive" that is not aware of any changes made to the model (including defense mechanisms). Where we report results for a non-adaptive attack (e.g., Fig. 1 or Fig. 2), we specifically refer to an attack performed on a (potentially linearlized) GCN with commonly used hyperparameters (i.e., untuned). We then apply the perturbed adjacency matrix to the actual defense. In other words, we transfer the adversarial perturbation from a GCN. For our _local_ non-adaptive attack, we always use Nettack. In contrast, for our _global_ non-adaptive attack, we apply all attacks listed above, and then transfer for each budget the attack which is strongest against the GCN. Due to this ensemble of attacks, our global non-adaptive attack is expected to be slightly stronger than the non-adaptive attacks in most other works. **Area Under the Curve (AUC).** An envelope curve gives us a detailed breakdown of the empirical robustness of a defense for different adversarial budgets. However, it is difficult to compare different attacks and defenses by only visually comparing their curves in a figure (e.g., see Fig. 4). Therefore, in addition to this breakdown per budget, we summarize robustness using the Area Under the Curve (AUC), which is independent of a specific choice of budget \(\Delta\) and also punishes defenses that achieve robustness by trading in too much clean accuracy. Intuitively higher AUCs indicate more robust models, and conversely, lower AUCs indicate stronger attacks. As our _local_ attacks break virtually all target nodes within our conservative maximum budget (see SS F), taking the AUC over all budgets conveniently measures how quick this occurs. However, for _global_ attacks, the test set accuracy continues to decrease for unreasonably large budget, and it is unclear when to stop. To avoid having to choose a maximum budget, we wish to stop when discarding the entire tainted graph becomes the better defense. This is fulfilled by the area between the envelope curve and the line signifying the accuracy of an MLP - a model that is oblivious to the graph structure, at the expense of a substantially lower clean accuracy than a GNN. We call this metric Relative AUC (RAUC) and illustrate it in Fig. 3. More formally, \(\mathrm{RAUC}(c)=\int_{0}^{b_{0}}(c(b)-a_{\text{MLP}})\mathrm{d}b\) s.t. \(b\leq b_{0}\implies c(b)\geq a_{\text{MLP}}\) where \(c(\cdot)\) is a piecewise linear robustness per budget curve, and \(a_{\text{MLP}}\) is the accuracy of the MLP baseline. We normalize the RAUC s.t. 0% is the performance of an MLP and 100% is the optimal score (i.e., 100% accuracy). 
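To make the metric concrete, below is a small self-contained sketch of how the normalized RAUC described above could be computed from a piecewise-linear robustness curve. The function name, the clipping-based truncation at the baseline crossing (rather than exact interpolation), and the normalization constant are our reading of the description, not the authors' code.

```python
import numpy as np

def rauc(budgets, accuracies, mlp_accuracy):
    """Relative AUC: area between the robustness curve and the MLP baseline,
    integrated until the curve drops below the baseline, normalized so that
    0 corresponds to MLP-level performance and 1 to 100% accuracy everywhere.

    budgets:    increasing budget values (fractions of edges), starting at 0
    accuracies: accuracy of the strongest attack at each budget (same length)
    """
    budgets = np.asarray(budgets, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    max_budget = budgets[-1]
    below = np.nonzero(acc < mlp_accuracy)[0]
    end = below[0] + 1 if below.size else len(acc)     # stop at the first crossing
    gain = np.clip(acc[:end] - mlp_accuracy, 0.0, None)
    area = np.trapz(gain, budgets[:end])
    best = (1.0 - mlp_accuracy) * max_budget           # hypothetical perfect defense
    return area / best

# Toy usage: accuracy per budget for one defense vs. an MLP baseline of 0.65
print(round(rauc([0.0, 0.05, 0.10, 0.15], [0.83, 0.74, 0.66, 0.60], 0.65), 3))
```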
**Finding 1 - Our adaptive attacks lower robustness by 40% on average.** In Fig. 2 we compare non-adaptive attacks, the current standard to evaluate defenses, with our adaptive attacks which we propose as a new standard. The achieved (R)AUC in each case drops on average by 40% (similarly for Citeseer, see § F). In other words, the reported robustness in the original works proposing a defense is roughly 40% too optimistic. We confirm a statistically significant drop (\(p<0.05\)) with a one-sided t-test in 85% of all cases. Considering adversarial accuracy for a (small) fixed adversarial budget (Fig. 1) instead of the summary (R)AUC over all budgets tells the same story: non-adaptive attacks are too weak to be reliable indicators of robustness, and adaptive attacks massively shrink the alleged robustness gains.
Figure 3: The dotted lines show the test set accuracy per budget after three global poisoning attacks against a tuned GCN on Cora ML. Taking the envelope gives the solid black robustness curve. The dashed gray line denotes the accuracy of an MLP. The shaded area is the RAUC.
Figure 2: Adaptive vs. non-adaptive attacks with budget-agnostic (R)AUC on Cora ML (cf. Fig. 1). SVD-GCN (b) is disastrously broken – our adaptive attacks reach <0.02 (not visible). § F for Citeseer.
**Finding 2 - Structural robustness of GCN is not easily improved.** In Fig. 4 (global) and Fig. 5 (local) we provide a more detailed view for different adversarial budgets and different graphs. For easier comparison we show the accuracy relative to the undefended GCN baseline. Overall, the decline is substantial. Almost half of the examined defenses perform worse than GCN, and most remaining defenses neither meaningfully improve nor lower the robustness (see also Fig. 1 and Fig. 3). GRAND and Soft-Median-GDC retain robustness in some settings, but the gains are smaller than reported.
**Finding 3 - Defense effectiveness depends on dataset.** As we can see in Fig. 4 and Fig. 5, our ability to circumvent specific defenses tends to depend on the dataset. It appears that some defenses are better suited to particular datasets. For example, GRAND seems to be a good choice for Citeseer while it is not as strong on Cora ML. The results for local attacks (Fig. 5) paint a similar picture; here we see that Cora ML is more difficult to defend. This points to another potentially problematic pitfall: most defenses are developed only using these two datasets as benchmarks. Is robustness even worse on other graphs? We leave this question for future work.
**Finding 4 - No trade-off between accuracy and robustness for structure perturbations.** Instead, Fig. 6 shows that defenses with high clean accuracy also exhibit high RAUC, i.e., are more robust against our attacks. This appears to be in contrast to the image domain [45]. However, we cannot exclude that future, more powerful defenses might manifest this trade-off in the graph domain.
**Finding 5 - Necessity of adaptive attacks.** In Fig. 7, we show two exemplary characteristics of how an adaptive attack bypasses defensive measures. First, to attack SVD-GCN, it seems particularly effective to insert connections to high-degree nodes. Second, for GNNGuard, GRAND, and Soft-Median-GDC it is disproportionally helpful to delete edges.
Figure 4: Difference (defense – undefended GCN) of adversarial accuracy for the strongest global attack per budget. Almost half of the defenses perform worse than the GCN.
We exclude SVD-GCN since it is catastrophically broken and plotting it would make the other defenses illegible (accuracy <24% already for a budget of 2% on Cora ML). Absolute numbers in § F.
Figure 6: Model accuracy vs. RAUC of the strongest global attacks on Cora ML. We do not observe a robustness accuracy trade-off, but even find models with higher accuracy to be more robust.
These examples illustrate why the existence of a one-fits-all perturbation which circumvents all possible defenses is unlikely. Instead, an adaptive attack is necessary to properly assess a defense's efficacy since different models are particularly susceptible to different perturbations.
**Additional analysis.** During this project, we generated a treasure trove of data. We perform a more in-depth analysis of our attacks in the appendix. First, we study how node degree affects attacks (see § K). For local attacks, the required budget to misclassify a node is usually proportional to the node's degree. Global attacks tend to be oblivious to degree and uniformly break nodes. Next, we perform a breakdown of each defense in terms of the sensitivity to different attacks (see § I). In short, global attacks are dominated by PGD for evasion and Metattack/Meta-PGD for poisoning with the PM or TLM loss. For local attacks, our greedy brute force is most effective, rarely beaten by PGD and Nettack. Finally, we analyze the properties of the adversarial edges in terms of various graph statistics such as edge centrality and frequency spectra (see § L and § M). ## 6 Robustness unit test Next we systematically study how well the attacks transfer between defenses, as introduced in the _attacks and budget_ paragraph in § 5. In Fig. 8, we see that in 15 out of 16 cases the adaptive attack is the most effective strategy (see main diagonal). However, for many defenses there is often a source model or ensemble of source models (for the latter see § G) which forms a strong transfer attack.
Figure 8: RAUC for the transfer of the strongest global adaptive attacks on Cora ML between models. The columns contain the models for which the adaptive attacks were created. The rows contain the RAUC after the transfer. With only one exception, adaptive attacks (diagonal) are most effective.
Figure 7: Exemplary metrics characterizing the attack vector of our strongest attacks, which are those visible in Fig. I.1 and Fig. I.2. We give a more elaborate study of attack characteristics in § L.
Motivated by the effectiveness of transfer attacks (especially if transferring from ProGNN [30]), we suggest this set of perturbed graphs to be used as a bare minimum robustness unit test: one can probe a new defense by testing against these perturbed graphs, and if there exists at least one that diminishes the robustness gains, we can immediately conclude that the defense is not robust in the worst case - without the potentially elaborate process of designing a new adaptive attack. We provide instructions on how to use this collection in the accompanying code. Nevertheless, we cannot stress enough that this collection does not replace a properly developed adaptive attack. For example, if one were to come up with SVD-GCN and used our collection (excluding the perturbed graphs for SVD-GCN), the unit test would partially pass. However, as we can see in e.g., Fig.
2, SVD-GCN can be broken with an - admittedly very distinct - adaptive attack. ## 7 Related work Excluding attacks on undefended GNNs, previous works studying adaptive attacks in the graph domain are scarce. The recently proposed graph robustness benchmark [62] also only studies transfer attacks. Such transfer attacks are so common in the graph domain that their usage is often not even explicitly stated, and we find that the perturbations are most commonly transferred from Nettack or Metattack (both use a linearized GCN). Other times, the authors of a defense only state that they use PGD [53] (aka "topology attack") without further explanations. In this case, the authors most certainly refer to a PGD transfer attack on a GCN proxy. They almost never apply PGD to their actual defense, which would yield an adaptive attack (but possibly weak, see SS 4 for guidance). An exception where the defense authors study an adaptive attack is SVD-GCN [12]. Their attack collects the edges flipped by Nettack in a difference matrix \(\delta\mathbf{A}\), replaces its most significant singular values and vectors with those from the clean adjacency matrix \(\mathbf{A}\), and finally adds it to \(\mathbf{A}\). Notably, this yields a dense continuous perturbed adjacency matrix. While their SVD-GCN is susceptible to these perturbations, the results however do not appear as catastrophic as with our adaptive attacks, despite their severe violation of our threat model (see SS 2). Geisler et al. [17] are another exception where gradient-based greedy and PGD attacks are directly applied to their Soft-Median-GDC defense, making them adaptive. Still, our attacks manage to further reduce their robustness estimate. ## 8 Discussion We hope that the adversarial learning community for GNNs will reflect on the bitter lesson that evaluating adversarial robustness is not trivial. We show that on average adversarial robustness estimates are overstated by 40%. To ease the transition into a more reliable regime of robustness evaluation for GNNs we share our recipe for successfully designing strong adaptive attacks. Using adaptive (white-box) attacks is also interesting from a security perspective. If a model successfully defends such strong attacks, it is less likely to have remaining attack vectors for a real-world adversary. Practitioners can use our methodology to evaluate their models in hope to avoid an arms race with attackers. Moreover, the white-box assumption lowers the chance that real-world adversaries can leverage our findings, as it is unlikely that they have perfect knowledge. We also urge for caution since the attacks only provide an upper bound (which with our attacks is now 40% tighter). Nevertheless, we argue that the burden of proof that a defense is truly effective should lie with the authors proposing it. Following our methodology, the effort to design a strong adaptive attack is reduced, so we advocate for adaptive attacks as the gold-standard for future defenses. ## Acknowledgments and Disclosure of Funding This research was supported by the Helmholtz Association under the joint research school "Munich School for Data Science - MUDS". ## References * [1] Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In _International Conference on Machine Learning, ICML_, 2018. * [2] Aleksandar Bojchevski and Stephan Gunnemann. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking. 
In _International Conference on Learning Representations, ICLR_, 2018. * [3] Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In _ACM Workshop on Artificial Intelligence and Security, AISec_, 2017. * [4] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In _IEEE Symposium on Security and Privacy_, 2017. * [5] Heng Chang, Yu Rong, Tingyang Xu, Yatao Bian, Shiji Zhou, Xin Wang, Junzhou Huang, and Wenwu Zhu. Not all low-pass filters are robust in graph convolutional networks. In _Advances in Neural Information Processing Systems, NeurIPS_, 2021. * [6] J. Chen, X. Lin, H. Xiong, Y. Wu, H. Zheng, and Q. Xuan. Smoothing adversarial training for GNN. _IEEE Transactions on Computational Social Systems_, 8(3), 2020. * [7] Liang Chen, Jintang Li, Qibiao Peng, Yang Liu, Zibin Zheng, and Carl Yang. Understanding structural vulnerability in graph convolutional networks. In _International Joint Conference on Artificial Intelligence, IJCAI_, 2021. * [8] Lingwei Chen, Xiaoting Li, and Dinghao Wu. Enhancing robustness of graph convolutional networks via dropping graph connections. In _European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD_, 2021. * [9] Zhijie Deng, Yinpeng Dong, and Jun Zhu. Batch virtual adversarial training for graph convolutional networks. In _Workshop on Learning and Reasoning with Graph-Structured Representations at the International Conference on Machine Learning, ICML_, 2019. * [10] Dongsheng Duan, Lingling Tong, Yangxi Li, Jie Lu, Lei Shi, and Cheng Zhang. AANE: Anomaly aware network embedding for anomalous link detection. In _IEEE International Conference on Data Mining, ICDM_, 2020. * [11] Pantelis Elinas, Edwin V. Bonilla, and Louis Tiao. Variational inference for graph convolutional networks in the absence of graph data and adversarial settings. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * [12] Negin Entezari, Saba A. Al-Sayouri, Amirali Darvishzadeh, and Evangelos E. Papalexakis. All you need is low (rank): Defending against adversarial attacks on graphs. In _ACM International Conference on Web Search and Data Mining, WSDM_, 2020. * [13] Boyuan Feng, Yuke Wang, Z. Wang, and Yufei Ding. Uncertainty-aware attention graph neural network for defending adversarial attacks. In _AAAI Conference on Artificial Intelligence_, 2021. * [14] Fuli Feng, Xiangnan He, Jie Tang, and Tat-Seng Chua. Graph adversarial training: Dynamically regularizing based on graph structure. _IEEE Transactions on Knowledge and Data Engineering_, 33(6), 2021. * [15] Wenzheng Feng, Jie Zhang, Yuxiao Dong, Yu Han, Huanbo Luan, Qian Xu, Qiang Yang, Evgeny Kharlamov, and Jie Tang. Graph random neural network for semi-supervised learning on graphs. In _International Conference on Machine Learning, ICML_, 2021. * [16] Simon Geisler, Daniel Zugner, and Stephan Gunnemann. Reliable graph neural networks via robust aggregation. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * [17] Simon Geisler, Tobias Schmidt, Hakan Sirin, Daniel Zugner, Aleksandar Bojchevski, and Stephan Gunnemann. Robustness of graph neural networks at scale. In _Advances in Neural Information Processing Systems, NeurIPS_, 2021. * Geisler et al. [2022] Simon Geisler, Johanna Sommer, Jan Schuchardt, Aleksandar Bojchevski, and Stephan Gunnemann. Generalization of neural combinatorial solvers through the lens of adversarial robustness. 
In _International Conference on Learning Representations, ICLR_, 2022. * [19] C. Lee Giles, Kurt D. Bollacker, and Steve Lawrence. CiteSeer: An automatic citation indexing system. In _ACM Conference on Digital Libraries_, 1998. * [20] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In _International Conference on Machine Learning, ICML_, 2017. * [21] Stephan Gunnemann. Graph neural networks: Adversarial robustness. In Lingfei Wu, Peng Cui, Jian Pei, and Liang Zhao, editors, _Graph Neural Networks: Foundations, Frontiers, and Applications_, chapter 8. Springer, 2021. * [22] Weibo Hu, Chuan Chen, Yaomin Chang, Zibin Zheng, and Yunfei Du. Robust graph convolutional networks with directional graph adversarial training. _Applied Intelligence_, 51(11), 2021. * [23] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * [24] Vassilis N. Ioannidis and Georgios B. Giannakis. Edge dithering for robust adaptive graph convolutional networks. In _AAAI Conference on Artificial Intelligence_, 2020. * [25] Vassilis N. Ioannidis, Antonio G. Marques, and Georgios B. Giannakis. Tensor graph convolutional networks for multi-relational and robust learning. _IEEE Transactions on Signal Processing_, 68, 2020. * [26] Vassilis N. Ioannidis, Dimitris Berberidis, and Georgios B. Giannakis. Unveiling anomalous nodes via random sampling and consensus on graphs. In _IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP_, 2021. * [27] Hongwei Jin and Xinhua Zhang. Latent adversarial training of graph convolution networks. In _Workshop on Learning and Reasoning with Graph-Structured Representations at the International Conference on Machine Learning, ICML_, 2019. * [28] Hongwei Jin and Xinhua Zhang. Robust training of graph convolutional networks via latent perturbation. In _European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD_, 2021. * [29] Ming Jin, Heng Chang, Wenwu Zhu, and Somayeh Sojoudi. Power up! Robust graph convolutional network against evasion attacks based on graph powering. In _AAAI Conference on Artificial Intelligence_, 2021. * [30] Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, and Jiliang Tang. Graph structure learning for robust graph neural networks. In _ACM International Conference on Knowledge Discovery and Data Mining, SIGKDD_, 2020. * [31] Wei Jin, Tyler Derr, Yiqi Wang, Yao Ma, Zitao Liu, and Jiliang Tang. Node similarity preserving graph convolutional networks. In _ACM International Conference on Web Search and Data Mining, WSDM_, 2021. * [32] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In _International Conference on Learning Representations, ICLR_, 2015. * [33] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In _International Conference on Learning Representations, ICLR_, 2017. * [34] Jintang Li, Tao Xie, Chen Liang, Fenfang Xie, Xiangnan He, and Zibin Zheng.
Adversarial attack on large scale graph. _IEEE Transactions on Knowledge and Data Engineering_, 2021. * Li et al. [2020] Yaxin Li, Wei Jin, Han Xu, and Jiliang Tang. Deeprobust: A pytorch library for adversarial attacks and defenses. _arXiv preprint arXiv:2005.06149_, 2020. * [36] Xiaorui Liu, Jiayuan Ding, Wei Jin, Han Xu, Yao Ma, Zitao Liu, and Jiliang Tang. Graph neural networks with adaptive residual. In _Advances in Neural Information Processing Systems, NeurIPS_, 2021. * [37] Xiaorui Liu, Wei Jin, Yao Ma, Yaxin Li, Hua Liu, Yiqi Wang, Ming Yan, and Jiliang Tang. Elastic graph neural networks. In _International Conference on Machine Learning, ICML_, 2021. * [38] Dongsheng Luo, Wei Cheng, Wenchao Yu, Bo Zong, Jingchao Ni, Haifeng Chen, and Xiang Zhang. Learning to drop: Robust graph neural network via topological denoising. In _ACM International Conference on Web Search and Data Mining, WSDM_, 2021. * [39] Florence Regol, Soumyasundar Pal, Jianing Sun, Yingxue Zhang, Yanhui Geng, and Mark Coates. Node copying: A random graph model for effective graph sampling. _Signal Processing_, 192, 2022. * [40] Uday Shankar Shanthamallu, Jayaraman J. Thiagarajan, and Andreas Spanias. Uncertainty-matching graph neural networks to defend against poisoning attacks. In _AAAI Conference on Artificial Intelligence_, 2021. * [41] Ke Sun, Zhouchen Lin, Hantao Guo, and Zhanxing Zhu. Virtual adversarial training on graph convolutional networks in node classification. In _Chinese Conference on Pattern Recognition and Computer Vision, PRCV_, 2019. * [42] Xianfeng Tang, Yandong Li, Yiwei Sun, Huaxiu Yao, Prasenjit Mitra, and Suhang Wang. Transferring robustness for graph neural network against poisoning attacks. In _ACM International Conference on Web Search and Data Mining, WSDM_, 2020. * [43] Shuchang Tao, H. Shen, Q. Cao, L. Hou, and Xueqi Cheng. Adversarial immunization for certifiable robustness on graphs. In _ACM International Conference on Web Search and Data Mining, WSDM_, 2021. * [44] Florian Tramer, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial example defenses. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * [45] Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. In _International Conference on Learning Representations, ICLR_, 2019. * [46] Haibo Wang, Chuan Zhou, Xin Chen, Jia Wu, Shirui Pan, and Jilong Wang. Graph stochastic neural networks for semi-supervised learning. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * [47] Yiwei Wang, Shenghua Liu, Minji Yoon, Hemank Lamba, Wei Wang, Christos Faloutsos, and Bryan Hooi. Provably robust node classification via low-pass message passing. In _IEEE International Conference on Data Mining, ICDM_, 2020. * [48] Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. Adversarial examples for graph data: Deep insights into attack and defense. In _International Joint Conference on Artificial Intelligence, IJCAI_, 2019. * [49] Tailin Wu, Hongyu Ren, Pan Li, and Jure Leskovec. Graph information bottleneck. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * [50] Yang Xiao, Jie Li, and Wengui Su. A lightweight metric defence strategy for graph neural networks against poisoning attacks. In _International Conference on Information and Communications Security, ICICS_, 2021. * [51] Hui Xu, Liyao Xiang, Jiahao Yu, Anqi Cao, and Xinbing Wang. 
Speedup robust graph structure learning with low-rank information. In _ACM International Conference on Information & Knowledge Management, CIKM_, 2021. * [52] Jiarong Xu, Yang Yang, Junru Chen, Chunping Wang, Xin Jiang, Jiangang Lu, and Yizhou Sun. Unsupervised adversarially-robust representation learning on graphs. In _AAAI Conference on Artificial Intelligence_, 2022. * [53] Kaidi Xu, Hongge Chen, Sijia Liu, Pin Yu Chen, Tsui Wei Weng, Mingyi Hong, and Xue Lin. Topology attack and defense for graph neural networks: An optimization perspective. In _International Joint Conference on Artificial Intelligence, IJCAI_, 2019. * [54] Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, Bhavya Kailkhura, and Xue Lin. Towards an efficient and general framework of robust training for graph neural networks. In _IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP_, 2020. * [55] Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. Graph contrastive learning with augmentations. In _Advances in Neural Information Processing Systems, NerUPS_, 2020. * [56] Baoliang Zhang, Xiaoxin Guo, Zhenchuan Tu, and Jia Zhang. Graph alternate learning for robust graph neural networks in node classification. _Neural Computing and Applications_, 34 (11), 2022. * [57] Li Zhang and Haiping Lu. A feature-importance-aware and robust aggregator for gcn. In _ACM International Conference on Information & Knowledge Management, CIKM_, 2020. * [58] Xiang Zhang and Marinka Zitnik. GNNGuard: Defending graph neural networks against adversarial attacks. In _Advances in Neural Information Processing Systems, NeurIPS_, 2020. * [59] Yingxue Zhang, Sakif Hossain Khan, and Mark Coates. Comparing and detecting adversarial attacks for graph deep learning. In _Workshop on Representation Learning on Graphs and Manifolds at the International Conference on Learning Representations, ICLR_, 2019. * [60] Yingxue Zhang, Florence Regol, Soumyasundar Pal, Sakif Khan, Liheng Ma, and Mark Coates. Detection and defense of topological adversarial attacks on graphs. In _International Conference on Artificial Intelligence and Statistics, AISTATS_, 2021. * [61] Cheng Zheng, Bo Zong, Wei Cheng, Dongjin Song, Jingchao Ni, Wenchao Yu, Haifeng Chen, and Wei Wang. Robust graph representation learning via neural sparsification. In _International Conference on Machine Learning, ICML_, 2020. * [62] Qinkai Zheng, Xu Zou, Yuxiao Dong, Yukuo Cen, Da Yin, Jiarong Xu, Yang Yang, and Jie Tang. Graph robustness benchmark: Benchmarking the adversarial robustness of graph machine learning. In _Advances in Neural Information Processing Systems, NeurIPS_, 2021. * [63] Dingyuan Zhu, Peng Cui, Ziwei Zhang, and Wenwu Zhu. Robust graph convolutional networks against adversarial attacks. In _ACM International Conference on Knowledge Discovery and Data Mining, SIGKDD_, 2019. * [64] Jun Zhuang and Mohammad Al Hasan. Defending graph convolutional networks against dynamic graph perturbations via bayesian self-supervision. In _AAAI Conference on Artificial Intelligence_, 2022. * [65] Jun Zhuang and Mohammad Al Hasan. How does bayesian noisy self-supervision defend graph convolutional networks? _Neural Processing Letters_, 54(4), 2022. * [66] Daniel Zugner and Stephan Gunnemann. Adversarial attacks on graph neural networks via meta learning. In _International Conference on Learning Representations, ICLR_, 2019. * [67] Daniel Zugner, Amir Akbarnejad, and Stephan Gunnemann. 
Adversarial attacks on neural networks for graph data. In _ACM International Conference on Knowledge Discovery and Data Mining, SIGKDD_, 2018. ## Checklist 1. For all authors... 1. Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] 2. Did you describe the limitations of your work? [Yes] See SS 8. 3. Did you discuss any potential negative societal impacts of your work? [Yes] See SS 8. 4. Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes] 2. If you are including theoretical results... 1. Did you state the full set of assumptions of all theoretical results? [N/A] 2. Did you include complete proofs of all theoretical results? [N/A] 3. If you ran experiments... 1. Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See SS 5. 2. Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See SS 5, SS H and provided code. 3. Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] All experiments are repeated for five random data splits. 4. Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See beginning of SS 5. 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... 1. If your work uses existing assets, did you cite the creators? [Yes] 2. Did you mention the license of the assets? [No] 3. Did you include any new assets either in the supplemental material or as a URL? [Yes] See beginning of SS 5. 4. Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] 5. Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] 5. If you used crowdsourcing or conducted research with human subjects... 1. Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] 2. Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] 3. Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] Attacks overview In this section, we make the ensemble of attacks explicit and explain essential details. We then adapt these attack primitives to circumvent the defense mechanisms (see SS E). **Global evasion attacks.** The goal of a global attack is to provoke the misclassification of a large fraction of nodes (i.e., the test set) jointly, crafting a single perturbed adjacency matrix. For evasion, we use _(1) the Fast Gradient Attack (FGA)_ and _(2) Projected Gradient Descent (PGD)_. In FGA, we calculate the gradient towards the entries of the clean adjacency matrix \(\nabla_{\mathbf{A}}\ell_{\text{attack}}(f_{\theta^{*}}(\mathbf{A},\mathbf{X}), \mathbf{y})\) and then flip the highest-ranked edges at once s.t. we exhaust the budget \(\Delta\). In contrast, PGD requires multiple gradient updates since it uses gradient ascent (see SS 2 or explanation below for Meta-PGD). We deviate from the PGD implementation of Xu et al. 
[53] in two ways: (I) we adapt the initialization of the perturbation before the first attack gradient descent step and (II) we adjust the final sampling of \(\tilde{\mathbf{A}}\). See below for more details. **Global poisoning attacks.** We either (a) transfer the perturbation \(\tilde{\mathbf{A}}\) found by evasion attack (1) or (2) and use it to poison training, or (b) differentiate through the training procedure by unrolling it, thereby obtaining a meta gradient. The latter approach is taken by both _(3) Metattack_ [66] and _(4) our Meta-PGD_. Metattack greedily flips a single edge in each iteration and then obtains a new meta gradient at the changed adjacency matrix. In Meta-PGD, we follow the same relaxation as Xu et al. [53] (see below as well as § 2) and obtain meta gradients at the relaxed adjacency matrices. In contrast to the greedy approach of Metattack, Meta-PGD is able to revise early decisions later on. **Meta-PGD.** Next, we explain the details of Meta-PGD and we present the pseudo code for reference in Algorithm A.1. Recall that the discrete edges are relaxed \(\{0,1\}\rightarrow[0,1]\) and that the "weight" of the perturbation reflects the probability of flipping the respective edge. ``` 1: Input: Adjacency matrix \(\mathbf{A}\), node features \(\mathbf{X}\), labels \(\mathbf{y}\), GNN \(f_{\theta}(\cdot)\), loss \(\ell_{\text{attack}}\) 2: Parameters: Budget \(\Delta\), iterations \(E\), learning rates \(\alpha_{t}\) 3: Initialize \(\mathbf{P}^{(0)}\in\mathbb{R}^{n\times n}\) 4: for \(t\in\{1,2,\dots,E\}\) do 5: Step \(\mathbf{P}^{(t)}\leftarrow\mathbf{P}^{(t-1)}+\alpha_{t}\nabla_{\mathbf{P}^{(t-1)}}\left[\ell_{\text{attack}}\left(f(\mathbf{A}+\mathbf{P}^{(t-1)},\mathbf{X};\,\theta=\mathrm{train}(\mathbf{A}+\mathbf{P}^{(t-1)},\mathbf{X},\mathbf{y})),\mathbf{y}\right)\right]\) 6: Projection \(\mathbf{P}^{(t)}\leftarrow\Pi_{\|\mathbb{E}[\mathbf{A}+\mathbf{P}^{(t)}]-\mathbf{A}\|_{0}\leq 2\Delta}(\mathbf{P}^{(t)})\) 7: Sample \(\tilde{\mathbf{A}}\) s.t. \(\|\tilde{\mathbf{A}}-\mathbf{A}\|_{0}\leq 2\Delta\) 8: Return \(\tilde{\mathbf{A}}\) ``` **Algorithm A.1** Meta-PGD In the first step of Meta-PGD, we initialize the perturbation (line 3). In contrast to Xu et al. [53]'s suggestion, we find that initializing the perturbation with the zero matrix can cause convergence issues. Hence, we alternatively initialize the perturbation with \(\tilde{\mathbf{A}}\) from an attack on a different model (see also lesson learned #8 in § 4). In each attack iteration, a gradient ascent step is performed on the relaxed perturbed adjacency matrix \(\tilde{\mathbf{A}}^{(t-1)}=\mathbf{A}+\mathbf{P}^{(t-1)}\) (line 5). For obtaining the meta gradient through the training process, the training is unrolled. For example, with vanilla gradient descent for training \(f_{\theta}(\mathbf{A},\mathbf{X})=f(\mathbf{A},\mathbf{X};\theta)\), the meta gradient resolves to \[\nabla_{\mathbf{P}^{(t-1)}}\left(\ell_{\text{attack}}\left[f\big(\mathbf{A}+\mathbf{P}^{(t-1)},\mathbf{X};\theta=\theta_{0}-\eta\sum\limits_{k=1}^{E_{\text{train}}}\nabla_{\theta_{k-1}}\ell_{\text{train}}[f(\mathbf{A}+\mathbf{P}^{(t-1)},\mathbf{X};\theta=\theta_{k-1}),\mathbf{y}]\big),\mathbf{y}\right]\right)\] (A.1) with the number of training epochs \(E_{\text{train}}\), fixed training learning rate \(\eta\), and parameters after (random) initialization \(\theta_{0}\). 
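To make the attack loop above concrete, the following is a minimal PyTorch-style sketch of the non-meta PGD variant (also discussed right below), assuming a dense adjacency matrix, a differentiable surrogate `model_fn`, and cross-entropy as a stand-in for \(\ell_{\text{attack}}\); the budget projection is simplified to clipping and rescaling instead of the bisection behind Eq. (A.2), and edge symmetrization is omitted. The function name and signature are ours, not the released code.

```python
import torch
import torch.nn.functional as F

def pgd_structure_attack(A, X, y, model_fn, budget, iters=200, lr=0.1, n_samples=100):
    """Sketch of relaxed PGD over edge flips (evasion setting).

    A: dense [n, n] clean adjacency with entries in {0, 1}
    model_fn(A, X) -> logits; `budget` = max. number of flipped entries
    (the paper's symmetric 2*Delta bookkeeping is omitted for brevity).
    """
    P = torch.zeros_like(A, requires_grad=True)       # relaxed flip probabilities in [0, 1]

    for _ in range(iters):
        A_pert = A + (1 - 2 * A) * P                  # flipping moves entries towards 1 - A_ij
        loss = F.cross_entropy(model_fn(A_pert, X), y)
        grad = torch.autograd.grad(loss, P)[0]
        with torch.no_grad():
            P.add_(lr * grad)                         # gradient ascent on the attack loss
            P.clamp_(0, 1)
            if P.sum() > budget:                      # crude budget projection (stand-in for Eq. A.2)
                P.mul_(budget / P.sum())

    # interpret P as flip probabilities and keep the strongest sampled perturbation
    best_loss, best_A = -float("inf"), A
    for _ in range(n_samples):
        flips = torch.bernoulli(P.detach())
        if flips.sum() > budget:
            continue
        A_cand = A + (1 - 2 * A) * flips
        cand_loss = F.cross_entropy(model_fn(A_cand, X), y).item()
        if cand_loss > best_loss:
            best_loss, best_A = cand_loss, A_cand
    return best_A
```

For Meta-PGD, the gradient step would instead differentiate through an unrolled training loop, as in Eq. (A.1).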
Notice that to obtain our variant of non-meta PGD, it suffices to replace the gradient computation in line 5 with \(\nabla_{\mathbf{P}^{(t-1)}}\left[\ell_{\text{attack}}(f_{\theta^{*}}(\mathbf{A }+\mathbf{P}^{(t-1)},\mathbf{X}),\mathbf{y})\right]\). Thereafter in line 6, the perturbation is projected such that in expectation the budget is obeyed, i.e., \(\Pi_{\|\mathbb{E}[\mathbf{A}+\mathbf{P}^{(t)}]-\mathbf{A}\|_{0}\leq 2\Delta}\). First, the projection clips \(\mathbf{A}+\mathbf{P}^{(t-1)}\) to be in \([0,1]\). If the budget is violated after clipping, it solves \[\arg\min_{\hat{\mathbf{P}}^{(t)}}\|\hat{\mathbf{P}}^{(t)}-\mathbf{P}^{(t)}\|_{2} \qquad\text{s.t. }\quad\mathbf{A}+\hat{\mathbf{P}}^{(t)}\in[0,1]^{n\times n}\text{ and }\sum|\hat{\mathbf{P}}^{(t)}|\leq 2\Delta\] (A.2) After the last iteration (line 7), each element of \(\mathbf{P}^{(t)}\) is interpreted as a probability and multiple perturbations are sampled accordingly. The strongest drawn perturbed adjacency matrix (in terms of attack loss) is chosen as \(\tilde{\mathbf{A}}\). Specifically, in contrast to [53], we sample \(K=100\) potential solutions that all obey the budget \(\Delta\) and then choose the one that maximizes the attack loss \(\ell_{\text{attack}}\). **Local attacks.** For local attacks we only run evasion attacks, and then transfer them to poisoning. This is common practice (e.g., see Zugner et al. [67] or Li et al. [34]). The attacks we use are _(1) FGA_, _(2) PGD_, _(3) Nettack [67]_, and a _(4) Greedy Brute Force_ attack. Nettack greedily flips the best edges considering a linearized GCN, whose weights are either specially trained or taken from the attacked defense. In contrast, in each iteration, our Greedy Brute Force attack flips the current worst-case edge for the attacked model. It determines the worst-case perturbation by evaluating the model for every single edge flip. Notice that all examined models use two propagation steps, so we only consider all potential edges adjoining the target node or its neighbors4. Importantly, Greedy Brute Force is adaptive for any kind of model. Runtime-wise, the algorithm evaluates the attacked model \(\mathcal{O}(\Delta nd)\) times with the number of nodes \(n\) and the degree of the target node \(d\). We provide pseudo code in Algorithm A.2. Footnote 4: Due to GCN-like normalization (see § E), the three-hop neighbors need to be considered to be exhaustive. However, it is questionable if perturbing a neighbor three hops away is ever the strongest perturbation there is. ``` 1:Input: Target node \(i\), adjacency matrix \(\mathbf{A}\), node features \(\mathbf{X}\), labels \(\mathbf{y}\), GNN \(f_{\theta}(\cdot)\), loss \(\ell_{\text{attack}}\) 2:Parameter: Budget \(\Delta\) 3:Initialize \(\tilde{\mathbf{A}}^{(0)}=\mathbf{A}\) 4:for\(t\in\{1,2,\ldots,\Delta\}\)do 5:for potential edge \(e\) adjoining \(i\) or any of \(i\)'s direct neighbors do 6: Flip edge \(\tilde{\mathbf{A}}^{(t)}\leftarrow\tilde{\mathbf{A}}^{(t-1)}\pm e\) 7: Remember best \(\tilde{\mathbf{A}}^{(t)}\) in terms of \(\ell_{\text{attack}}(f_{\theta^{*}}(\tilde{\mathbf{A}}^{(t)},\mathbf{X}), \mathbf{y})\) 8:if node \(i\) is misclassified then 9: Return \(\tilde{\mathbf{A}}^{(t)}\) 10: Recover best \(\tilde{\mathbf{A}}^{(t)}\) 11: Return \(\tilde{\mathbf{A}}_{\Delta}\) ``` **Algorithm A.2** Greedy Brute Force **Unnoticeability** typically serves as a proxy to ensure that the label of an instance (here node) has not changed. 
In the image domain, it is widely accepted that a sufficiently small perturbation of the input image w.r.t. an \(L_{p}\)-norm is unnoticeable (and similarly for other threat models such as rotation). For graphs the whole subject of unnoticeability is more nuanced. The only constraint we use is the number of edge insertions/deletion, i.e., an \(L_{0}\)-ball around the clean adjacency matrix. The only additional unnoticeability constraint proposed in the literature compares the clean and perturbed graph under a power law assumption on the node degrees [67]. However, we do not include such a constraint since (1) the degree distribution is only one (arbitrary) property to distinguish two graphs. (2) The degree distribution is a global property with an opaque relationship to the local class labels in node classification. (3) As demonstrated in Zugner & Gunnemann [66], enforcing an indistinguishable degree distribution only has a negligible influence on attack efficacy, i.e., their gradient-based/adaptive attack conveniently circumvents this measure. Thus, we argue that enforcing such a constraint is similar to an additional (weak) defense measure and is not the focus of this work. Finally, since many defense (and attack) works in the literature considering node-classification (including the ones we study) also only use an \(L_{0}\)-ball constraint as a proxy for unnoticeability, we do the same for improved consistency. Out of scope are also other domains, like combinatorial optimization, where unnoticeability is not required since the true label of the perturbed instance is known [18]. Defense taxonomy Next, we give further details behind our reasoning on how to categorize defenses for GNNs. Our taxonomy extends and largely follows Gunnemann [21]'s. The three main categories are _improving the graph_ (SS B.1), _improving the training_ (SS B.2), and _improving the architecture_ (SS B.3). We assign each defense to the category that fits best, even though some defenses additionally include ideas fitting into other categories as well. For the assignment of defenses see Table 1. ### Improving the graph With this category, we refer to all kinds of preprocessing of the graph. Alternatively, some approaches make the graph learnable with the goal of improved robustness. In summary, this category addresses changes that take place _prior_ to the GNN (i.e., any message passing). We further distinguish _(1) unsupervised_ and _(2) supervised_ approaches. **Unsupervised.** Any improvements that are not entangled with a learning objective, i.e., pure preprocessing, usually arising from clues found in the node features and graph structure. For example, Jaccard-GCN [48] filters out edges based on the Jaccard similarity of node features, while SVD-GCN [12] performs a low-rank approximation to filter out high-frequency perturbations. Most other approaches from this category exploit clues from features and structure simultaneously. **Supervised.** These graph improvements are entangled with the learning objective by making the adjacency matrix learnable, often accompanied by additional regularization terms that introduce expert assumptions about robustness. For example, ProGNN [30] treats the adjacency matrix like a learnable parameter, and adds loss terms s.t. it remains close to the original adjacency matrix and exhibits properties which are assumed about clean graphs like low-rankness. ### Improving the training These approaches improve training - without changing the architecture - s.t. 
the learned parameters \(\theta^{*}\) of the GNN exhibit improved robustness. In effect, the new training "nudges" a regular GNN towards being more robust. We distinguish _(1) robust training_ and _(2) further training principles_. **Robust training.** Alternative training schemes and losses which reward the correct classification of synthetic adversarial perturbations of the training data. With this category, Gunnemann [21] targets both straightforward adversarial training and losses stemming from certificates (i.e., improving certifiable robustness). Neither approach is interesting to us: the former is discussed in SS C, and the latter targets provable robustness which does not lend itself to empirical evaluation. **Further training principles.** This category is distinct from robust training due to the lack of a clear mathematical definition of the training objective. It mostly captures augmentations [15; 29; 39; 42; 61] or alternative training schemes [5; 11; 55; 64] that encourage robustness. A simple example for such an approach is to pre-train the GNN weights on perturbed graphs [42]. Another recurring theme is to use multiple models during training and then, e.g., enforce consistency among them [5]. ### Improving the architecture Even though there are some exceptions (see sub-category _(2) miscellaneous_), the recurring theme in this category is to somehow weight down the influence of some edges adaptively for each layer or message passing aggregation. We refer to this type of improved architecture with _(1) adaptively weighting edges_. We further distinguish between approaches that are _(a) rule-based_, _(b) probabilistic_, or use _(c) robust aggregation_. _Rule-based_ approaches typically use some metric [31; 58], alternative message passing [36; 37], or an auxiliary MLP [57] to filter out alleged adversarial edges. _Probabilistic_ approaches either work with distributions in the latent space [63], are built upon probabilistic principles like Bayesian uncertainty quantification [13], or integrate sampling into the architecture and hence apply it also at inference time [8; 24; 25; 38]. _Robust aggregation_ defenses replace the message passing aggregation (typically mean) with a more robust equivalent such as a trimmed mean, median, or soft median [7; 17]. In relation to the trimmed mean, in this category we include also other related approaches that come with some guarantees based on their aggregation scheme Wang et al. [47]. On adversarial training defenses The most basic form of adversarial training for structure perturbations aims to solve: \[\min_{\theta}\max_{\mathbf{A}^{\prime}\in\Phi(\mathbf{A})}\ell(f_{\theta}( \mathbf{A}^{\prime},\mathbf{X}),\mathbf{y})\] (C.1) Similarly to [44, 1, 4], we exclude defenses that build on adversarial training in our study for three reasons. First, we observe that adversarial training requires knowing the clean \(\mathbf{A}\). However, for poisoning, we would need to substitute \(\mathbf{A}\) with an adversarially perturbed adjacency matrix \(\tilde{\mathbf{A}}\). In this case, adversarial training aims to enforce adversarial generalization \(\mathbf{A}^{\prime}\in\Phi(\tilde{\mathbf{A}})\) for the adversarially perturbed adjacency matrix \(\tilde{\mathbf{A}}\) - potentially even reinforcing the poisoning attack. Second, an adaptive poisoning attack on adversarial training is very expensive as we need to unfold many adversarial attacks for a single training. 
Thus, designing truly adaptive poisoning attacks requires a considerable amount of resources. _Scaling_ these attacks to such complicated training schemes is not the main objective of this work. Third, adversarial training for structure perturbations on GNNs seems to be an unsolved question. So far, the robustness gains come from additional and orthogonal tricks such as self-training [53]. Hence, adversarial training for structure perturbations requires an entire paper on its own. ## Appendix D On defenses against feature perturbations As introduced in SS 2, attacks may perturb the adjacency matrix \(\mathbf{A}\), the feature matrix \(\mathbf{X}\), or both. However, during our survey we found that few defenses tackle feature perturbations. Similarly, 6 out of the 7 defenses chosen by us mainly based on general popularity turn out to not consciously defend against feature perturbations. The only exception is SVD-GCN [12], which also applies its low-rank approximation to the binary feature matrix. However, the authors do not report robustness under feature-only attacks; instead, they only consider mixed structure and feature attacks found by Nettack. Given the strong bias of Nettack towards structure perturbations, we argue that their experimental results do not confirm feature robustness. Correspondingly, in preliminary experiments we were not able to achieve considerable robustness gains of SVD-GCN compared to an undefended GCN - even with non-adaptive feature perturbations. If a non-adaptive attack is strong enough, there is not much merit in applying an adaptive attack. To reiterate, due to the apparent scarcity of defenses apt against feature attacks, we decided to focus our efforts on structure attacks and defenses. However, new defenses considering feature perturbations should study robustness in the face of adaptive attacks - similarly to our work. In the following, we give some important hints for adaptive attacks using feature perturbations. We leave attacks that jointly consider feature and structure perturbations for future work due to the manifold open challenges, e.g., balancing structure and feature perturbations in the budget quantity. **Baseline.** To gauge the robustness of defenses w.r.t. global attacks, we introduce the RAUC metric, which employs the accuracy of an MLP - which is perfectly robust w.r.t. structure perturbations - to determine the maximally sensible budget to include in the summary. As MLPs are however vulnerable to feature attacks, a different baseline model is required for this new setting. We propose to resolve this issue by using a label propagation approach, which is oblivious to the node features and hence perfectly robust w.r.t. feature perturbations. **Perturbations.** The formulation of the set of admissible perturbations depends on what modality the data represents, which may differ between node features and graph edges. Convenient choices for continuous features are l-p-norms; in other cases, more complicated formulations are more appropriate. Accordingly, one has to choose an appropriate constrained optimization scheme. Examined adversarial defenses In this section, we portray each defense and how we adapted the base attacks to each one. We refer to Table H.1 for the used hyperparameter values for each defense. We give the used attack parameters for a GCN below and refer to the provided code for the other defenses. **GCN.** We employ an undefended GCN [33] as our baseline. 
A GCN first adds self loops to the adjacency matrix \(\mathbf{A}\) and subsequently applies GCN-normalization, thereby obtaining \(\mathbf{A}^{\prime}=(\mathbf{D}+\mathbf{I})^{-\frac{1}{2}}(\mathbf{A}+\mathbf{ I})(\mathbf{D}+\mathbf{I})^{-\frac{1}{2}}\) with the diagonal degree matrix \(\mathbf{D}\in\mathbb{N}^{n\times n}\). Then, in each GCN layer it updates the hidden states \(\mathbf{H}^{(l)}=\mathrm{dropout}(\sigma(\mathbf{A}^{\prime}\mathbf{H}^{(l-1)} \mathbf{W}^{(l-1)}+\mathbf{b}^{(l-1)}))\) where \(\mathbf{H}^{(0)}=\mathbf{X}\). We use the non-linear ReLU activation for intermediate layers. Dropout is deactivated in the last layer and we refer to the output before softmax activation as logits. We use Adam [32] to learn the model's parameters. **Attack.** We do not require special tricks since the GCN is fully differentiable and does not come with defensive measures to consider. In fact, the off-the-shelf attacks we employ are tailored to a GCN. For PGD, we use \(E=200\) iterations, \(K=100\) samples, and a base learning rate of 0.1. For Meta-PGD, we only lower the base learning rate to 0.01 and add gradient clipping to 1 (w.r.t. global \(L_{2}\)-norm). For Metattack with SGD instead of Adam for training the GCN, we use an SGD learning rate of 1 and restrict the training to \(E_{\text{train}}=100\) epochs. ### Jaccard-GCN **Defense.** Additionally to a GCN, Jaccard-GCN [48] preprocesses the adjacency matrix. It computes the Jaccard coefficient of the binarized features for the pair of nodes of every edge, i.e., \(\mathbf{J}_{ij}=\frac{\mathbf{X}_{i}\mathbf{X}_{j}}{\min\{\mathbf{X}_{i}+ \mathbf{X}_{j},1\}}\). Then edges are dropped where \(\mathbf{J}_{ij}\leq\epsilon\). **Adaptive attack.** We do not need to adapt gradient-based attacks as the gradient is equal to zero for dropped edges. Straightforwardly, we adapt Nettack to only consider non-dropped edges. Analogously, we ignore these edges in the Greedy Brute Force attack for increased efficiency. ### Svd-Gcn **Defense.** SVD-GCN [12] preprocesses the adjacency matrix with a low-rank approximation (LRA) for a fixed rank \(r\), utilizing the Singular Value Decomposition (SVD) \(\mathbf{A}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\approx\mathbf{U}_{r} \mathbf{\Sigma}_{r}\mathbf{V}_{r}^{\top}=\mathbf{A}_{r}\). Note that the LRA is performed on \(\mathbf{A}\) before adding self-loops and GCN-normalization (see above). Thereafter, the dense \(\mathbf{A}_{r}\) is passed to the GCN as usual. Since \(\mathbf{A}\) is symmetric and positive semi-definite, we interchangeably refer to the singular values/vectors also as eigenvalues/eigenvectors. **Adaptive attack.** Unfortunately, the process of determining the singular vectors \(\mathbf{U}_{r}\) and \(\mathbf{V}_{r}\) is highly susceptible to small perturbations, and so is its gradient. Thus, we circumvent the need of differentiating the LRA. We now explain the approach from a geometrical perspective. Each row of \(\mathbf{A}\) (or interchangeably column as \(\mathbf{A}\) is symmetric) is interpreted as coordinates of a high-dimensional point. The \(r\) most significant eigenvectors of \(\mathbf{A}\) span an \(r\)-dimensional subspace, onto which the points are projected by the LRA. Adding or removing an adversarial edge \((i,j)\) corresponds to moving the point \(\mathbf{A}_{i}\) along dimension \(j\), i.e., \(\mathbf{A}_{i}\pm\mathbf{e}_{j}\) (vice-versa for \(\mathbf{A}_{j}\)). 
As hinted at in SS 4, the \(r\) most significant eigenvectors of \(\mathbf{A}\) turn out to usually have few large components. Thus, the relevant subspace is mostly aligned with only few dimensions. Changes along the highest-valued eigenvectors are consequently preserved by LRA. To quantify how much exactly such a movement along a dimension \(j\), i.e., \(\mathbf{e}_{j}\), is preserved, we project the movement itself onto the subspace and extract the projected vector's \(j\)-th component. More formally, we denote the projection matrix onto the subspace as \(\mathbf{P}=\sum_{k=0}^{r}\mathbf{v}_{k}\mathbf{v}_{k}^{T}\) where \(\mathbf{v}_{k}\) are the eigenvectors of \(\mathbf{A}\). We now score each dimension \(j\) with \((\mathbf{P}\mathbf{e}_{j})_{j}=\mathbf{\bar{P}}_{jj}\). Since the adjacency matrix is symmetric and rows and columns are hence exchangeable, we then symmetrize the scores \(\mathbf{W}_{ij}=(\nicefrac{{\mathbf{P}_{ii}}}{{\mathbf{P}_{jj}}})_{\!\!2}\). Finally, we decompose the perturbed adjacency matrix \(\tilde{\mathbf{A}}=\mathbf{A}+\delta\mathbf{A}\) and, thus, only need gradients for \(\delta\mathbf{A}\). Using the approach sketched above, we now replace \(\mathrm{LRA}(\mathbf{A}+\delta\mathbf{A})\approx\mathrm{LRA}(\mathbf{A})+ \delta\mathbf{A}\circ\mathbf{W}\). The weights \(\mathbf{W}\) can also be incorporated into the Greedy Brute Force attack by dropping edges with weight \(<0.2\) and, for efficient early stopping, sort edges to try in order of descending weight. Similarly, Nettack's score function \(s_{\text{struct}}(i,j)\) - which attains positive and negative values, while \(\mathbf{W}\) is positive - can be wrapped to \(s^{\prime}_{\text{struct}}(i,j)=\log(\exp(s_{\text{struct}}(i,j))\circ\mathbf{W} )=s_{\text{struct}}(i,j)+\log\mathbf{W}\). Note that we assume that the direction of the eigenvectors remains roughly equal after perturbing the adjacency matrix. In practice, we find this assumption to be true. Intuitively, a change along the dominant eigenvectors should even reinforce their significance. ### Rgcn **Defense.** The implementations of R(obust)GCN provided by the authors5 and in the widespread DeepRobust [35] library6 are both consistent, but diverge slightly from the paper [63]. We use and now present RGCN according to those reference implementations. Principally, RGCN models the hidden states as Gaussian vectors with diagonal variance instead of sharp vectors. In addition to GCN's \(\mathbf{A}^{\prime}\), a second \(\mathbf{A}^{\prime\prime}=(\mathbf{D}+\mathbf{I})^{-1}(\mathbf{A}+\mathbf{I} )(\mathbf{D}+\mathbf{I})^{-1}\) is prepared to propagate the variances. The mean and variance of this hidden Gaussian distribution are initialized as \(\mathbf{M}^{(0)}=\mathbf{V}^{(0)}=\mathbf{X}\). Each layer first computes an intermediate distributions given by \(\hat{\mathbf{M}}^{(l)}=\mathrm{elu}(\mathrm{dropout}(\mathbf{M}^{(l-1)}) \mathbf{W}_{M}^{(l-1)})\) and \(\hat{\mathbf{V}}^{(l)}=\mathrm{relu}(\mathrm{dropout}(\mathbf{V}^{(l-1)}) \mathbf{W}_{V}^{(l-1)})\). Then, attention coefficients \(\boldsymbol{\alpha}^{(l)}=e^{-\gamma\hat{\mathbf{V}}^{(l)}}\) are calculated with the aim to subdue high-variance dimensions (where exponentiation is element-wise and \(\gamma\) is a hyperparameter). The final distributions are obtained with \(\mathbf{M}^{(l)}=\mathbf{A}^{\prime}\hat{\mathbf{M}}^{\prime(l)}\circ \boldsymbol{\alpha}^{(l)}\). Note the absence of bias terms. 
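The variance update is analogous to the mean update given above; presumably (as in the authors' formulation and the reference implementations) the variances are propagated through \(\mathbf{A}^{\prime\prime}\) with the attention applied a second time. As a rough, hedged illustration of one such Gaussian-based layer (our own sketch with made-up names, not the authors' code), consider:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RGCNLayer(nn.Module):
    """Hedged sketch of a Gaussian-based RGCN layer (means + diagonal variances)."""

    def __init__(self, in_dim, out_dim, gamma=1.0, p_drop=0.5):
        super().__init__()
        self.w_mean = nn.Linear(in_dim, out_dim, bias=False)   # note: no bias terms
        self.w_var = nn.Linear(in_dim, out_dim, bias=False)
        self.gamma, self.p_drop = gamma, p_drop

    def forward(self, A_mean, A_var, M, V):
        # A_mean and A_var are the two precomputed propagation matrices A' and A''
        M_hat = F.elu(self.w_mean(F.dropout(M, self.p_drop, self.training)))
        V_hat = F.relu(self.w_var(F.dropout(V, self.p_drop, self.training)))
        alpha = torch.exp(-self.gamma * V_hat)                 # attenuate high-variance dimensions
        M_out = A_mean @ (M_hat * alpha)                       # aggregate attenuated means
        V_out = A_var @ (V_hat * alpha * alpha)                # assumption: alpha applied twice to variances
        return M_out, V_out
```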
After the last layer, point estimates are sampled from the distributions via the reparameterization trick, i.e., scalars are sampled from a standard Gaussian and arranged in a matrix \(\mathbf{R}\). These samples are then used to obtain the logits via \(\mathbf{M}^{(L)}+\mathbf{R}\circ(\mathbf{V}^{(L)}+\epsilon)^{\frac{1}{2}}\) (where the square root applies element-wise and \(\epsilon\) is a hyperparameter). Adam is the default optimizer. The loss is extended with the regularizer \(\beta\sum_{i}\mathrm{KL}(\mathcal{N}(\hat{\mathbf{M}}_{i}^{(1)},\mathrm{diag }(\hat{\mathbf{V}}_{i}^{(1)}))\|\mathcal{N}(\mathbf{0},\mathbf{I}))\) (where \(\beta\) is a hyperparameter). Footnote 5: [https://github.com/ZW-ZHANG/RobustGCN](https://github.com/ZW-ZHANG/RobustGCN) Footnote 6: [https://github.com/DSE-NSU/DeepRobust](https://github.com/DSE-NSU/DeepRobust) **Adaptive attack.** A direct gradient attack suffices for a strong adaptive attack. Only when unrolling the training procedure for Metattack and Meta-PGD, we increase hyperparameter \(\epsilon\) from \(10^{-8}\) to \(10^{-2}\) to retain numerical stability. ### ProGNN **Defense.** We use and present Pro(perty)GNN [30] exactly following the implementation provided by the authors in their DeepRobust [35] library7. ProGNN learns an alternative adjacency matrix \(\mathbf{S}\) that is initialized with \(\mathbf{A}\). A regular GCN - which, as usual, adds self-loops and applies GCN-normalization - is trained using \(\mathbf{S}\), which is simultaneously updated in every \(\tau\)-th epoch. For that, first a gradient descent step is performed on \(\mathbf{S}\) with learning rate \(\eta\) and momentum \(\mu\) towards minimizing the principal training loss alongside two regularizers that measure deviation \(\beta_{1}\|\mathbf{S}-\mathbf{A}\|_{F}^{2}\) and feature smoothness \(\frac{\beta_{2}}{2}\sum_{i,j}\mathbf{S}_{ij}\|\frac{\mathbf{X}_{i}}{\sqrt{d_{i }}}-\frac{\mathbf{X}_{j}}{\sqrt{d_{j}}}\|^{2}\) (where \(d_{i}=\sum_{j}\mathbf{S}_{ij}+10^{-3}\)). Next, the singular value decomposition \(\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{T}\) of the updated \(\mathbf{S}\) is computed, and \(\mathbf{S}\) is again updated to be \(\mathbf{U}\max(0,\boldsymbol{\Sigma}-\eta\beta_{3})\mathbf{V}^{T}\) to promote low-rankness. Thereafter, \(\mathbf{S}\) is again updated to be \(\mathrm{sgn}(\mathbf{S})\circ\max(0,|\mathbf{S}|-\eta\beta_{4})\) to promote sparsity. Finally, the epoch's resulting \(\mathbf{S}\) is obtained by clamping its elements between 0 and 1. Footnote 7: [https://github.com/mims-harvard/GNNGuard](https://github.com/mims-harvard/GNNGuard) **Adaptive attack.** Designing an adaptive attack for ProGNN proved to be a challenging endeavor. We describe the collection of tricks in SS 4's Example 2. ### GnnGuard **Defense.** We closely follow the authors' implementation8 as it deviates from the formal definitions in the paper [58]. GNNGuard adopts a regular GCN and, before each layer, it adaptively weights down alleged adversarial edges. Thus, each layer has a unique propagation matrix \(\mathbf{A}^{(l)}\) that is used instead of \(\mathbf{A}^{\prime}\). 
Footnote 8: [https://github.com/DSE-NSU/DeepRobust](https://github.com/DSE-NSU/DeepRobust) GNNGuard's rule-based edge reweighting can be clustered into four consecutive steps: (1) the edges are reweighted based on the pair-wise cosine similarity \(\mathbf{C}^{(l)}_{ij}=\frac{\mathbf{H}^{(l-1)}_{i}\cdot\mathbf{H}^{(l-1)}_{j}}{ \|\mathbf{H}^{(l-1)}_{i}\|\|\mathbf{H}^{(l-1)}_{i}\|}\|\) according to \(\mathbf{S}^{(l)}=\mathbf{A}\circ\mathbf{C}^{(l)}\circ\mathbb{I}[\mathbf{C}^{( l)}\geq 0.1]\), where edges with too dissimilar node embeddings are removed (see Iverson bracket \(\mathbb{I}[\mathbf{C}^{(l)}\geq 0.1]\)). Then, (2) the matrix is rescaled \(\boldsymbol{\Gamma}^{(l)}_{ij}=\mathbf{s}^{(l)}_{ij}/\mathbf{s}^{(l)}_{i}\) with \(\mathbf{s}^{(l)}_{i}=\sum_{j}\mathbf{S}^{(l)}_{ij}\) For stability, if \(\mathbf{s}^{(l)}_{i}<\epsilon\), \(\mathbf{s}^{(l)}_{i}\) is set to 1 (here \(\epsilon\) is a small constant). Next, (3) self-loops are added and \(\boldsymbol{\Gamma}^{(l)}\) is non-linarily transformed according to \(\hat{\boldsymbol{\Gamma}}^{(l)}=\exp_{\neq 0}(\boldsymbol{\Gamma}^{(l)}+ \mathrm{diag}\,^{1}\!/_{1}+\mathrm{d}^{(l)})\), where \(\exp_{\neq 0}\) only operates on nonzero elements and \(\mathbf{d}^{(l)}_{i}=\|\boldsymbol{\Gamma}^{(l)}_{i}\|_{0}\) is the row-wise number of nonzero entries. Last, (4) the result is smoothed over the layers with \(\boldsymbol{\Omega}^{(l)}=\sigma(\rho)\boldsymbol{\Omega}^{(l-1)}+(1-\sigma( \rho))\hat{\boldsymbol{\Gamma}}^{(l)}\) with learnable parameter \(\rho\) and sigmoid function \(\sigma(\cdot)\). The resulting reweighted adjacency matrix \(\boldsymbol{\Omega}^{(l)}\) is then GCN-normalized (without adding self-loops) and passed on to a GCN layer. Note that steps (1) to (3) are excluded from back-propagation during training. When comparing with the GNNGuard paper, one notices that among other deviations, we have omitted learnable edge pruning because it is disabled in the reference implementation. **Adaptive attack.** The hyperparameter \(\epsilon\) must be increased from \(10^{-6}\) to \(10^{-2}\) during the attack to retain numerical stability. In contrast to the reference implementation but as stated above, it is important to place the hard filtering step \(\mathbb{I}[\mathbf{C}^{(l)}\geq 0.1]\) for \(\mathbf{S}^{(l)}\) s.t. the gradient calculation w.r.t. \(\mathbf{A}\) is not suppressed for these entries. ### Grand **Defense.** The Graph Random Neural Network (GRAND) [15] model is the only defense from our selection that is not based on a GCN. First, \(\mathbf{A}\) is endowed with self-loops and GCN-normalized to obtain \(\mathbf{A}^{\prime}\). Also, each row of \(\mathbf{X}\) is \(l_{1}\)-normalized, yielding \(\mathbf{X}^{\prime}\). Next, rows from \(\mathbf{X}^{\prime}\) are randomly dropped with probability \(\delta\) during training to generate a random augmentation, and \(\mathbf{X}^{\prime}\) is scaled by \(1-\delta\) during inference to compensate, thereby obtaining \(\hat{\mathbf{X}}\). Those preprocessed node features are then propagated multiple times along the graph to get \(\overline{\mathbf{X}}=\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{A}^{\prime k}\hat{ \mathbf{X}}\). Finally, dropout is applied once to \(\overline{\mathbf{X}}\), and the result is plugged into a 2-layer MLP with dropout and ReLU activation to obtain class probabilities \(\mathbf{Z}\). 
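For illustration, below is a minimal sketch of the random propagation just described, assuming `A_norm` is the GCN-normalized adjacency with self-loops and `mlp` is any classifier head; the final dropout on \(\overline{\mathbf{X}}\) and the consistency regularization over \(S\) augmentations are omitted, and the names are ours rather than the released code.

```python
import torch
import torch.nn.functional as F

def grand_forward(A_norm, X, mlp, delta=0.5, K=4, training=True):
    """Sketch of GRAND's DropNode augmentation and multi-hop propagation."""
    X = F.normalize(X, p=1, dim=1)                      # row-wise l1 normalization of the features
    if training:
        drop = torch.bernoulli(torch.full((X.shape[0], 1), delta, device=X.device))
        X = X * (1.0 - drop)                            # randomly zero out entire node rows
    else:
        X = X * (1.0 - delta)                           # compensate in expectation at inference

    out, prop = X.clone(), X
    for _ in range(K):                                  # X_bar = 1/(K+1) * sum_k A_norm^k X
        prop = A_norm @ prop
        out = out + prop
    return mlp(out / (K + 1))
```

During training, several such stochastic forward passes would be drawn and tied together by the consistency regularizer described above.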
The authors also propose an alternative architecture using a GCN instead of an MLP, however, we do not explore this option since the MLP version is superior according to their own results. GRAND is trained with Adam. The training loss comprises the mean of the cross-entropy losses of \(S\) model evaluations, thereby incorporating multiple random augmentations. Additionally, a consistency regularizer is added to enforce similar class probabilities across all evaluations. More formally, first the probabilities are averaged across all evaluations: \(\overline{\mathbf{Z}}=\frac{1}{S}\sum_{s=1}^{S}\mathbf{Z}^{(s)}\). Next, each node's categorical distribution is sharpened according to a temperature hyperparameter \(T\), i.e., \(\overline{\mathbf{Z}}^{\prime}_{ij}=\mathbf{z}^{\frac{1}{K}}_{ij}/\sum_{c} \mathbf{z}^{\frac{1}{K}}_{ic}\). The final regularizer penalizes the distance between the class probabilities and the sharpened averaged distributions, namely \(\frac{\partial}{S}\sum_{s=1}^{S}\|\mathbf{Z}^{(s)}-\overline{\mathbf{Z}}^{ \prime}\|_{F}^{2}\). **Adaptive attack.** When unrolling the training procedure for Metattack and Meta-PGD, to reduce the memory footprint, we reduce the number of random augmentations per epoch to 1, and we use a manual gradient calculation for the propagation operation. We also initialize Meta-PGD with a strong perturbation found by Meta-PGD on ProGNN. Otherwise, the attack has issues finding a perturbation with high loss; it presumably stalls in a local optimum. It is surprising that "only" initializing from GCN instead of ProGNN does not give a satisfyingly strong attack. Finally, we use the same random seed for every iteration of Metattack and Meta-PGD, as otherwise the constantly changing random graph augmentations make the optimization very noisy. ### Soft-Median-GDC **Defense.** The Soft-Median-GDC [17] deviates in two ways from a GCN: (1) it uses Personalized PageRank (PPR) with restart probability \(\alpha=0.15\) to further preprocess the adjacency matrix after adding self-loops and applying GCN-normalization. The result is then sparsified using a row-wise top-\(k\) operation (\(k=64\)). (2) the message passing aggregation is replaced with a robust estimator called Soft-Median. From the perspective of node \(i\), a GCN uses the message passing aggregation \(\mathbf{H}_{i}^{(l)}=\mathbf{A}_{i}\mathbf{H}^{(l-1)}\) which can be interpreted as a weighted mean/sum. In Soft-Median-GDC, the "weights" \(\mathbf{A}_{i}\) are replaced with a scaled version of \(\mathbf{A}_{i}\circ\text{softmax}\ (-\text{e}/\tau\sqrt{d})\). Here the vector \(\mathbf{c}\) denotes the distance between hidden embedding of a neighboring node to the neighborhood-specific weighted dimension-wise median: \(\mathbf{c}_{i}=\|\operatorname{Median}(\mathbf{A}_{i},\mathbf{H}^{(l-1)})- \mathbf{H}_{i}^{(l-1)}\|\). To keep the scale, these weights are scaled s.t. they sum up to \(\sum\mathbf{A}_{i}\). **Adaptive attack.** During gradient-based attacks, we adjust the \(\mathbf{c}\) of every node s.t. it now captures the distance to all other nodes, not only neighbors. This of course modifies the values of \(\mathbf{c}\), but is necessary to obtain a nonzero gradient w.r.t. to all candidate edges. We initialize PGD with a strong perturbation found by a similar attack on GCN, and initialize Meta-PGD with a perturbation from a similar attack on ProGNN (as with GRAND, using an attack against GCN as a base would be insufficient here). 
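As a rough illustration of the aggregation, below is a hedged, per-node sketch of a weighted dimension-wise median followed by the distance-based soft weighting and rescaling; it is not the released batched implementation, and `soft_median_aggregate`, `A_row`, and `H` are our own names.

```python
import torch

def soft_median_aggregate(A_row, H, tau=1.0):
    """Sketch of the Soft-Median aggregation for a single node's neighborhood.

    A_row: [n] non-negative edge weights of the node, H: [n, d] hidden embeddings.
    Returns the aggregated [d] message.
    """
    idx = A_row.nonzero(as_tuple=True)[0]               # restrict to actual neighbors
    weights, feats = A_row[idx], H[idx]
    d = feats.shape[1]

    # weighted dimension-wise median of the neighborhood
    order = feats.argsort(dim=0)                         # per-dimension sorting indices
    sorted_w = weights[order]                            # neighbor weights re-ordered per dimension
    cum_w = sorted_w.cumsum(dim=0) / sorted_w.sum(dim=0)
    med_pos = (cum_w < 0.5).sum(dim=0).clamp(max=feats.shape[0] - 1)
    median = feats.gather(0, order.gather(0, med_pos.unsqueeze(0))).squeeze(0)

    # distance of each neighbor to the median -> soft weights, rescaled to keep the mass
    c = torch.norm(feats - median, dim=1)
    soft = torch.softmax(-c / (tau * d ** 0.5), dim=0) * weights
    soft = soft * weights.sum() / soft.sum()
    return (soft.unsqueeze(1) * feats).sum(dim=0)
```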
## Appendix F Evaluation of adaptive attacks In Table F.1, we summarize the variants of the datasets we use, both of which we have precisely extracted from Nettack's code8. In Fig. F.1, we complement Fig. 2 and compare the (R)AUC of all defenses on Citeseer. The robustness estimates for the defenses on Citeseer are also much lower than originally reported. For completeness, we give absolute envelope curve plots for all settings and datasets as well as for higher budgets in Fig. F.2 and Fig. F.3 (compare with Fig. 4 and Fig. 5). Footnote 8: [https://github.com/danielzuegner/nettack](https://github.com/danielzuegner/nettack) Figure F.2: Absolute variant of Fig. 4, showing relative budgets up to 15%. Figure F.3: Absolute variant of Fig. 5, showing relative budgets up to 200%.

## Appendix G Ensemble transferability study

In Fig. 8, we transfer attacks found on an _individual_ model to other models. It is natural to also assess the strength of transfer attacks supplied by _ensembles_ of models. In Fig. G.1, we address this question for 2-ensembles. For poisoning, the combination of RGCN and ProGNN turns out to be (nearly) the strongest in all cases, which is reasonable since both already form strong individual transfer attacks as is evident in Fig. 8. For evasion, the differences are more subtle. We also investigate 3-ensembles, but omit the plots due to their size. For poisoning, RGCN and ProGNN now combined with Soft-Median-GDC remain the strongest transfer source, yet the improvement over the 2-ensemble is marginal. For evasion, there is still no clear winner.

## Appendix H GCN and defense hyperparameters: original vs. tuned for adaptive attacks

To allow for the fairest comparison possible, we tuned the hyperparameters for each model (including GCN) towards maximizing both clean accuracy and adversarial robustness on a single random data split. In Table H.1, we list all hyperparameter configurations. While we cannot run an exhaustive search over all hyperparameter settings, we report substantial gains for most defenses and the GCN in Fig. H.1. The only exceptions are GRAND, Soft-Median-GDC on Cora ML, and GNNGuard. For GRAND, we do not report results for the default hyperparameters as they did not yield satisfactory clean accuracy. Moreover, for Soft-Median-GDC on Cora ML and GNNGuard we were not able to substantially improve over the default hyperparameters. For the GCN, tuning is important to ensure that we have a fair and equally-well tuned baseline. A GCN is the natural baseline since most defense methods propose slight modifications of a GCN or additional steps to improve the robustness. For the defenses, tuning is vital since they were originally tuned w.r.t. non-adaptive attacks. In any case, the tuning should counterbalance slight variations in the setup. As stated in the introduction, each attack only provides an upper bound for the actual adversarial robustness of a model (with fixed hyperparameters). A future attack of increased efficacy might lead to a tighter estimate. Thus, when we empirically compare the defenses to a GCN, we only compare upper bounds of the respective actual robustness. However, we attack the GCN with state-of-the-art approaches that were developed by multiple researchers specifically for a GCN. Even though we also tune the parameters of the adaptive attacks, we argue that the robustness estimate for a GCN is likely tighter than our robustness estimate for the defenses. 
In summary, tuning the hyperparameters is necessary so that we can fairly compare the robustness of multiple models, even though we only compare upper bounds of the true robustness. Figure H.1: Each defense’s clean accuracy vs. (R)AUC values of the strongest attacks, akin to Fig. 6. Muted (semi-transparent) colors represent untuned defenses (except for Soft-Median-GDC on Cora ML and GNNGuard), solid colors denote tuned defenses, and lines connect the two. Our tuned defenses are almost always better than the untuned variants w.r.t. both clean accuracy and robustness.
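As an aside on how such budget-wise results are condensed into a single number, the following is an illustrative sketch, not the exact (R)AUC definition used here, of integrating an accuracy-vs-budget envelope curve only up to the point where a structure-agnostic baseline (e.g., an MLP) becomes preferable; the function name and the normalization are our own simplifications.

```python
import numpy as np

def auc_up_to_baseline(budgets, accuracies, baseline_acc):
    """Illustrative sketch: mean accuracy under an accuracy-vs-budget envelope curve,
    truncated at the first budget where accuracy drops to the structure-agnostic baseline."""
    budgets = np.asarray(budgets, dtype=float)
    accuracies = np.asarray(accuracies, dtype=float)
    below = np.nonzero(accuracies <= baseline_acc)[0]
    end = below[0] + 1 if len(below) else len(budgets)      # include the crossing point
    area = np.trapz(accuracies[:end], budgets[:end])
    return area / (budgets[end - 1] - budgets[0] + 1e-12)   # normalize to a mean accuracy
```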
2306.00129
Self-supervised Vision Transformers for 3D Pose Estimation of Novel Objects
Object pose estimation is important for object manipulation and scene understanding. In order to improve the general applicability of pose estimators, recent research focuses on providing estimates for novel objects, that is objects unseen during training. Such works use deep template matching strategies to retrieve the closest template connected to a query image. This template retrieval implicitly provides object class and pose. Despite the recent success and improvements of Vision Transformers over CNNs for many vision tasks, the state of the art uses CNN-based approaches for novel object pose estimation. This work evaluates and demonstrates the differences between self-supervised CNNs and Vision Transformers for deep template matching. In detail, both types of approaches are trained using contrastive learning to match training images against rendered templates of isolated objects. At test time, such templates are matched against query images of known and novel objects under challenging settings, such as clutter, occlusion and object symmetries, using masked cosine similarity. The presented results not only demonstrate that Vision Transformers improve in matching accuracy over CNNs, but also that for some cases pre-trained Vision Transformers do not need fine-tuning to do so. Furthermore, we highlight the differences in optimization and network architecture when comparing these two types of network for deep template matching.
Stefan Thalhammer, Jean-Baptiste Weibel, Markus Vincze, Jose Garcia-Rodriguez
2023-05-31T19:06:05Z
http://arxiv.org/abs/2306.00129v1
# Self-supervised Vision Transformers for 3D Pose Estimation of Novel Objects ###### Abstract Object pose estimation is important for object manipulation and scene understanding. In order to improve the general applicability of pose estimators, recent research focuses on providing estimates for novel objects, that is objects unseen during training. Such works use deep template matching strategies to retrieve the closest template connected to a query image. This template retrieval implicitly provides object class and pose. Despite the recent success and improvements of Vision Transformers over CNNs for many vision tasks, the state of the art uses CNN-based approaches for novel object pose estimation. This work evaluates and demonstrates the differences between self-supervised CNNs and Vision Transformers for deep template matching. In detail, both types of approaches are trained using contrastive learning to match training images against rendered templates of isolated objects. At test time, such templates are matched against query images of known and novel objects under challenging settings, such as clutter, occlusion and object symmetries, using masked cosine similarity. The presented results not only demonstrate that Vision Transformers improve in matching accuracy over CNNs, but also that for some cases pre-trained Vision Transformers do not need fine-tuning to do so. Furthermore, we highlight the differences in optimization and network architecture when comparing these two types of network for deep template matching. Keywords:Object pose estimation Template matching Vision transformer Self-supervised learning + Footnote †: conference: source code: [https://github.com/sThalham/TraM3D](https://github.com/sThalham/TraM3D) ## 1 Introduction Object pose estimation is an important yet difficult vision problem. Many downstream tasks, such as grasping [37], augmented reality [25] and reconstruction [35] benefit from the availability of object poses. Classical object pose estimation approaches encode latent representations of multiple object views per object, during training. During run-time these are matched against an observation to retrieve a coarse object pose [20, 24, 12]. After retrieving the pose of the closest template, poses are refined using Iterative-Closest-Points [17] algorithm or other algorithms to optimize the rigid transformations between two corresponding sets of points. In contrast, learning-based solutions using Convolutional Neural Networks (CNN) learn a feature representation to infer object class and geometric correspondences during testing [34, 38, 47, 26, 48, 50, 1, 43, 9]. Yet, training pose estimators for each object instance [34, 38], or each set of object instances [48, 50] is insufficient to be usable in real world scenarios where object instances are manifold and constantly changing. As a consequence research shifts towards category-level [51, 39] and novel object pose estimation [32, 42, 30]. These recent novel object pose estimation approaches are similar to classical ones in the sense that queries are matched against templates. The approach of [32] employs a CNN backbone to learn occlusion-aware template matching for novel object pose estimation. Real observations are matched against rendered templates and tested for \(3D\) pose estimation. 
While they show that such strategies are expedient for novel object pose estimation, it has been shown that Vision Transformers (ViT) [11, 49, 5] learn more discriminative feature spaces than CNNs when trained in such unsupervised manners. This advantage of ViTs over CNNs, however, has primarily been empirically demonstrated by matching to distinct object classes and not by matching views of the same object class for more complex reasoning, such as \(3D\) object pose estimation [5, 8]. In this work we empirically demonstrate that ViTs excel over CNNs when used for novel object pose estimation. Modifying the approach of [32] for comparing two similarly sized feature extractors, ResNet50 [18] with \(23M\) and ViT-s [49] with \(21M\) parameters, we show that these improvements are manifold. Training self-supervised ViTs for \(3D\) object pose estimation not only improves the template matching accuracy, but also reduces the training time. Depending on the dataset and metric, the relative improvement in template matching accuracy for seen objects ranges from 1% on Linemod [20], over 4% on Linemod-Occlusion [4], to 19% on T-LESS [22]. For unseen objects, the respective improvements are 3%, 5% and 18%. Achieving these improvements using ViT-s takes one fourth of the training time and iterations on LM and LM-O, and only one twenty-fifth of it on T-LESS. More remarkably, testing ViT-s on T-LESS in a zero-shot fashion, thus without fine-tuning, already improves over using fine-tuned ResNet50 by 7% and 9%, for seen and unseen objects respectively. Finally, works such as [5, 8] train self-supervised ViTs to retrieve the object class of seen objects assuming the availability of templates in the same domain. These assumptions are impractical for novel object pose estimation. Uniform coverage of the pose space is crucial and thus rendering templates is expedient. Furthermore, handling unseen objects is desired to further generalize real-world deployment of pose estimators. As a consequence, this work provides ablations on the matter of the network architecture used for matching. While the aforementioned works [5, 8] benefit from using high-dimensional, multi-layered projection heads, we empirically show that these increase the template matching error on unseen objects when matched against rendered templates. In summary we:

* Show that Vision Transformers not only exhibit reduced template matching errors compared to CNNs for matching synthetic templates to known objects, but also to novel objects. The relative improvements for novel object pose estimation range from 3% to 18%, depending on the dataset and metric used.
* Demonstrate that pre-trained Vision Transformers exhibit excellent matching performance for zero-shot matching. On the T-LESS dataset, non-fine-tuned Vision Transformers exhibit a relative improvement over fine-tuned CNNs of 7% and 9%, on known and novel objects respectively. Fine-tuning further improves this to 19% and 18%, respectively.
* Highlight the differences in matching procedure and optimization when fine-tuning Vision Transformers for template matching. Our results indicate that Vision Transformers encode relevant features over a broad range of descriptor sizes for seen and novel objects, as compared to CNNs, where there is a trade-off when choosing the descriptor size for either seen or novel objects. Our results additionally indicate that high-dimensional, multi-layered projection heads increase the template matching error for the problem at hand. 
The remainder of the manuscript is organised in the sections Related Work, Method, Experiments and Conclusion. The next section presents the state of the art for object pose estimation, focusing on deep template matching for deriving poses of novel objects, and self-supervised vision transformers. ## 2 Related Work This sections presents the state of the art for object pose estimation with the focus on novel object pose estimation. Subsequently, ViTs and self-supervised training for them is presented. Learning-based object pose estimation research focuses on multi-staged pipelines [28; 34; 50; 43] that often train separate networks for instance-level pose estimation [34; 50], in order to improve the estimated pose's accuracy. Different streams of research improve on the scalability of instance-level pose estimation, presenting solutions for improved multi-object handling [1; 47; 56] and reducing the number of stages needed for providing reliable pose estimates [48; 9; 55]. Yet, re-training pose estimators every time novel objects or object sets are encountered is cumbersome and delays the deployment in the real world. As a consequence, recent works overcome these shortcomings by training for category-level pose estimation [51; 39] or by training deep template matching for novel object pose estimation [32; 42; 30]. **Deep Template Matching** Matching observations against predefined templates is a long-standing concept of object pose estimation [20; 24; 12]. Recent learning-based solutions adopt this strategy, since it has two major advantages [32; 42; 30]. First, training time is low since encoding templates does not require learning a representation of each object individually. Creating a latent representations for each relevant template only requires one network forward pass. Thus, template encoding is done in the magnitude of seconds for an object of interest, as compared to training instance-level pose estimators, which takes hours to days, depending on the number of objects and the hardware [34, 50, 48]. Second, training instance-level pose estimators encodes a latent representation of the object, respectively objects, of interest. This representation does not generalize to novel objects. This shortcoming has to be addressed by either category-level object pose estimation, or by deep template matching. The approach of [52] introduces deep descriptors for matching query objects against templates for retrieving the \(3D\) pose using nearest neighbor search. In [3] the authors improve over [52] by guiding learning in pose space, also accounting for object symmetries in the process. Recently, [32] proposed further improvements. They replace the triplet loss-based training with an InfoNCE-based one and improve occlusion handling by masking the feature embedding using the template's mask and an occlusion threshold. We adopt and improve over their approach for deep template matching by using ViTs for descriptor extraction, which have not yet been adopted by the community. As such, we demonstrate their advantage with respect to their generality as deep template matcher and show empirical evaluations highlighting their advantages for the problem of novel object pose estimation. **Vision Transformer** It has recently been shown that ViTs [11, 36] learn superior features when trained in a self-supervised fashion [8, 5, 49]. These mainstream works focus on training object classifiers from scratch, and using large datasets with little domain shift between query images and templates. 
Such large datasets are difficult to obtain for object pose estimation due to the complexity of generation accurate \(6D\) pose annotations. Additionally, it is relevant for pose estimation to effectively cover the viewing sphere around objects of interest [45]. This implies training on comparably small datasets and preferably using synthetically creating templates, i.e. using rendering for template creation [10]. As such, in this work, ViTs are assumed to be pre-trained, and templates are rendered. We thus show the potential of self-supervised ViTs under that shifted perspective and also highlight the differences in network design as compared to the mainstream research direction. ## 3 Method This section presents our self-supervised learning framework for matching real observations to synthetic templates for novel object \(3D\) pose estimation. Figure 1 provides an abstract visualization of the presented method. Self-supervised training is done using contrastive learning. Contrastive learning aims at maximizing the similarity of semantically close training samples, referred to as positive pairs, while minimizing the similarity for samples that are semantically dissimilar, that is negative pairs. More precisely, one training sample consists of a tuple of a query crop (\(I_{q}\)), a positive example (\(I_{pos}\)), and a negative one (\(I_{neg}\)). The positive and negative template are rendered using physically-based rendering (pbr) [10]. Where the positive sample correlates with respect to object class and rotation with the query image. The negative sample deviates with respect to both properties. Crops are tokenized using random patch embedding and a shared pre-trained ViT-s [49] is used for extracting features of the query and the template images. In contrast to self-supervised ViT-frameworks for classification [5, 8] we discard the class token and employ the positional tokens for similarity calculation. Using such spatial output enables dropping tokens based on the positive template's mask. Optimization is guided using InfoNCE-loss [33] with the positive and negative similarities as input. During testing, similarities are computed between real object observations and pbr-templates of seen and novel objects. Thus, in contrast to contemporary ViT-research, similarities have to bridge the synthetic-to-real gap, since templates are created using rendering [5, 8]. The real observations are compared against templates that represent uniformly distributed object views of the potentially new objects. Ultimately, the class and the \(3D\) rotation of the matched template are retrieved. ### Feature Embedding The aim of this work is novel object pose estimation. Recent works shows that deep contrastively-learned template matching strategies are well suited for this task [32, 42, 30]. In order to exhibit high similarities between similar view points of the same object in different domains, the learned feature embedding has to represent the object view as accurately as possible. It has been shown that Vision Figure 1: **Method overview** During training, a query image, a positive and a negative template is processed by a Vision Transformer to encode a feature embedding. The number of the positional tokens is retained for the feature map. InfoNCE [33] is used in a Triplet loss-like fashion with the input feature map being masked with the positive template. During testing, novel query objects are matched against templates to retrieve object class and \(3D\) pose from the matched template. 
It has been shown that Vision Transformers [49, 11, 36], trained in an unsupervised way, learn to accurately model long-range image relationships, improving over CNNs [5]. This work adopts the ViT-s network presented in [49] as feature extractor. The weights are pre-trained on ImageNet [29] in a self-supervised manner [5]. In [5], ViT-s is used with only the class token retained for training and testing. In this work, the class token is discarded and the positional tokens are retained in order to benefit from the spatial nature of the output. Diverse works indicate that augmenting feature extractors with deep multi-layered heads, for projecting embeddings to higher dimensions, improves performance when training on ImageNet [8, 5, 7, 15]. The results presented in Section 4 indicate that this finding does not apply to pose estimation. A single linearly-activated fully-connected layer projects the feature embedding coming from the pre-trained backbone to a lower dimensionality. It has to be noted that this different behavior is connected to the differences in the problem setting: a) the backbone is initialized with pre-trained weights, b) the problem at hand matches real observations against rendered templates, and c) testing is partially done on novel objects, thus on data unseen during training. We hypothesize that deeper heads overfit to the training data characteristics. The authors of [8] note that randomly initialized patch embedding stabilizes training on ImageNet and thus improves classification accuracy. Accordingly, the patch embedding layer is not updated during fine-tuning. Results are provided in Section 4. ### Contrastive Learning Framework The feature embeddings extracted using ViT-s are processed by a contrastive learning framework that learns to increase the similarity between object crops of the same class and a similar viewpoint. As similarity measure, the cosine similarity is employed: \[sim(emb_{I_{q},t},emb_{*,t})=\frac{emb_{I_{q},t}\cdot emb_{*,t}}{\|emb_{I_{q},t}\|_{2}\,\|emb_{*,t}\|_{2}} \tag{1}\] Where \(*\) is either \(I_{pos}\) or \(I_{neg}\). The similarity is computed locally and aggregated for locations indicated by the mask image: \[sim_{pos/neg}=\sum_{t=1}^{T}\begin{cases}sim\left(emb_{I_{q},t},emb_{*,t}\right)&\text{if }M_{t}=1,\\ 0&\text{otherwise}\end{cases} \tag{2}\] Where \(T\) refers to the number of feature map locations, i.e. the number of positional tokens. The negative similarity is summed over all embedded tokens inside the template's object mask, while the positive similarity is computed globally with \(M=1^{\text{size of }I_{q}}\). Both similarities are used in a triplet loss fashion [6] using the InfoNCE loss [16, 33]. Each positive sample is compared against all negative samples in a batch, resulting in \(B=(b\cdot b)-b\) negative samples per iteration: \[L=-\sum_{i=1}^{b}\log\frac{\exp\left(sim_{pos,i}/\tau\right)}{\sum_{k=1,\,k\neq i}^{B}\exp\left(sim_{neg,k}/\tau\right)} \tag{3}\] Where \(\tau\) is a temperature parameter set to 0.1. For more details consult [32]. 
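To make the training objective concrete, the following is a minimal PyTorch-style sketch of Eqs. (1)-(3); tensor shapes, function names, and the arrangement of the in-batch negatives are illustrative assumptions rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def masked_similarity(query_tokens, template_tokens, mask):
    """Token-wise cosine similarity (Eq. 1), summed over masked locations (Eq. 2).

    query_tokens, template_tokens: (B, T, D) positional-token embeddings.
    mask: (B, T) binary mask; all ones for the positive (global) case,
          the template's object mask for the negative case.
    """
    sim = F.cosine_similarity(query_tokens, template_tokens, dim=-1)  # (B, T)
    return (sim * mask).sum(dim=-1)                                   # (B,)

def info_nce_loss(sim_pos, sim_neg, tau=0.1):
    """InfoNCE objective of Eq. (3), evaluated in log-space for numerical stability.

    sim_pos: (B,) similarity of each query to its own (positive) template.
    sim_neg: (B, B-1) similarities of each query to the other templates in the batch.
    """
    return -(sim_pos / tau - torch.logsumexp(sim_neg / tau, dim=1)).sum()
```

Per query, the remaining \(b-1\) templates in a batch of size \(b\) act as negatives here, which corresponds to the \(b\cdot b-b\) negative pairs per iteration stated above.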
### Template Matching During testing, templates of seen and novel objects are matched against the query image. Embeddings are created for the query crop and all templates. The cosine similarity in Equation 1 is reused, yet modified to: \[sim_{q}=\sum_{t=1}^{T}\begin{cases}sim\left(emb_{I_{q},t},emb_{*,t}\right)&\text{if }M_{t}=1\text{ and }sim_{t}>\delta,\\ 0&\text{otherwise}\end{cases} \tag{4}\] Where \(\delta\) is a hyperparameter set to 0.2, which is meant to increase robustness against occluded image regions, as introduced by [32]. The class and the \(3D\) rotation of the template leading to the highest cumulative cosine similarity are retrieved. ## 4 Experiments The presented results compare CNN-based baseline methods, such as [32], to our approach that uses ViT-s as feature extractor. Additional results evaluate the generality of the self-supervised pre-trained ViT-s without fine-tuning, showing that even without fine-tuning the template matching error is low and even improves over the baseline method on T-LESS. Ultimately, we present diverse ablations that highlight the differences between ViT and CNN architectures for \(3D\) pose estimation. The experiments section is concluded by an ablation with respect to the projection head used for our approach, highlighting the fundamental difference that for the addressed problem shallow heads are beneficial, as compared to the approaches used for classification on ImageNet [5, 8, 7, 15]. ### Experimental Setup In the following paragraphs, data retrieval and processing are detailed. Following that, template creation for matching is explained. In order to evaluate the proposed approach, standard metrics from concurrent, conceptually similar approaches are presented. #### 4.1.1 Datasets Results are provided on three standard datasets for object pose estimation: Linemod [20] (LM), Linemod-Occlusion [4] (LM-O), and T-LESS [22]. These datasets are processed to provide crop-level data in order to evaluate template matching accuracy and compare against the baseline method. **LM and LM-O** These are two of the most-used datasets for evaluating object pose estimation approaches. LM features \(13\) objects. For each object a set of \(\approx 1200\) scene-level images is available. Annotations are only provided for the respective object, though each set contains multiple objects of the dataset in the cluttered background. The main characteristics of the dataset are texture-poor objects of different geometries, sizes and colors. Annotated object views exhibit virtually no occlusion. As a consequence, [4] created annotations for all \(8\) dataset objects in the Benchvise set, thus introducing LM-O as a test set specifically for strongly occluded object views. With respect to training and testing we follow [32], in order to provide a fair comparison. For evaluation on seen and unseen objects the LM objects are partitioned into three sets, see Table 1. As training data, \(90\%\) of the LM images per object are used, and the remaining \(10\%\) are used for testing. As a consequence, training images are without occlusion. The images of LM-O are exclusively used for testing, yet for evaluation they are also split accordingly into seen and unseen objects. In order to evaluate on all objects, one split is used for testing on unseen objects, while the other two are used for training. **T-LESS** On T-LESS we follow the protocol of [44]. Isolated object views of objects \(1-18\) are used for training and are pasted onto a randomly chosen image of SUN397 [53], using the cut-paste strategy [13]. These \(18\) objects are considered as seen objects. The remaining objects, \(19-30\), are used as novel ones. Test images are cropped from the primesense test set. 
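As an illustration of the cut-paste strategy used to create the T-LESS training crops, the snippet below composites an isolated object view onto a randomly chosen background image; the file handling, resizing, and placement of the object are simplifying assumptions.

```python
import random
import numpy as np
from PIL import Image

def cut_paste(object_rgb, object_mask, background_paths, out_size=224):
    """Paste an isolated object view onto a random background image (cut-paste [13])."""
    bg = Image.open(random.choice(background_paths)).convert("RGB").resize((out_size, out_size))
    obj = Image.fromarray(object_rgb).resize((out_size, out_size))
    msk = Image.fromarray(((object_mask > 0) * 255).astype(np.uint8)).resize((out_size, out_size))
    bg.paste(obj, (0, 0), msk)  # the mask keeps only the object pixels
    return np.asarray(bg)
```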
#### 4.1.2 Template Generation In contrast to works that train self-supervised ViTs for image classification [5; 8], this work considers matching the closest template for viewpoint classification, thus for \(3D\) pose retrieval. The major difference is that templates uniformly distributed over the viewing sphere, or hemisphere, are required, which is not relevant to the aforementioned works. Consequently, templates to match against are created using physically-based rendering for the task at hand [10]. **LM and LM-O** The training and test data for LM and LM-O are processed as done by [52] and [32]. These works crop the images from the real dataset while omitting in-plane rotations, thus effectively only considering azimuth and elevation as degrees of freedom. \begin{table} \begin{tabular}{|c|c|} \hline Split & Objects \\ \hline 1 & Ape, Benchvise, Camera and Can \\ 2 & Cat, Driller, Duck and Eggbox \\ 3 & Glue, Holepuncher, Iron, Lamp and Phone \\ \hline \end{tabular} \end{table} Table 1: **LM/LM-O object splits.** Two of the sets are used for training and testing on seen objects, while the third is used for testing on unseen objects, as done by [32]. Objects are cropped in a way that the image space at object distance projects to 0.4 by 0.4 meters. Thus, all objects appear at the same distance to the camera, independent of their size. Furthermore, neither the LM nor the LM-O training and test images show objects from the lower viewing hemisphere. Due to these constraints, 301 templates are sufficient for training and testing on LM and LM-O. **T-LESS** For T-LESS, objects are cropped such that they are tightly encapsulated. Additionally, objects appear in arbitrary views in the test set. As a consequence, \(92,232\) templates are used for training and testing on T-LESS, as done by [32, 44]. #### 4.1.3 Evaluation This section presents the metrics used in this work. The approach of [32] introduces _Acc15_ for evaluating template matching accuracy and classification. The _VSD_-score, as proposed by [23], is a standard metric for evaluating \(6D\) object pose estimation accuracy. The following paragraphs provide detailed explanations of how these metrics are used in this work. _Acc15_ This metric is introduced by [32]. It represents the accumulated true positive rate of matched templates that are below \(15\deg\) rotational error with respect to the object class and ground truth rotation of the query crop: \[Acc15=\frac{1}{N}\sum_{n=1}^{N}\begin{cases}1&\text{if }\arccos\frac{R_{q}\cdot R_{t}}{\|R_{q}\|_{2}\cdot\|R_{t}\|_{2}}<15\deg\text{ and }C_{q}=C_{t},\\ 0&\text{otherwise}\end{cases} \tag{5}\] Where \(N\) refers to the number of query crops, \(R_{q}\) and \(R_{t}\) to the three-dimensional rotation vectors, and \(C_{q}\) and \(C_{t}\) to the object classes of the query's ground truth and the template, respectively. Thus, matched templates with a rotation deviation of more than \(15\deg\) from the ground truth, or with a different class than the query image, are considered as false positives. _VSD_ This metric has been proposed by [23]. 
For each query object crop, the deviation of the estimated pose \(\hat{P}\) from the ground truth \(P\) is projected to a scalar value using: \[e_{VSD}=\underset{p\in\hat{V}\cup V}{avg}\begin{cases}0&\text{if }p\in\hat{V}\cap V\wedge|\hat{D}(p)-D(p)|<\tau,\\ 1&\text{otherwise}\end{cases} \tag{6}\] where \(\hat{V}\) and \(V\) are sets of image pixels, \(\hat{D}\) and \(D\) are distance maps, and \(\tau\) is a misalignment tolerance with the standard value of \(20mm\). Distance maps are rendered and compared to the distance map of the test image to derive \(\hat{V}\) and \(V\). Since \(\hat{P}\) and \(P\) need to represent \(6D\) poses, including the \(3D\) translation, the estimates need to be raised to \(6D\); for this, the strategy of [46, 32] is adopted. Using the bounding box of the observation \(box_{obs}\) and that of the template \(box_{tmp}\), the corresponding intrinsics \(f_{obs}\) and \(f_{tmp}\), and the template distance to the camera \(z_{tmp}\), the observed object's distance \(\hat{z}_{obs}\) is derived: \[\hat{z}_{obs}=z_{tmp}\cdot\frac{\sqrt{box_{tmp,x}^{2}+box_{tmp,y}^{2}}}{\sqrt{box_{obs,x}^{2}+box_{obs,y}^{2}}}\cdot\frac{f_{obs}}{f_{tmp}} \tag{7}\] Using \(\hat{z}_{obs}\), the relative translation between the observation and the template is derived for the other two translation parameters, where \(\bullet\) is a placeholder for \(x\) and \(y\): \[\Delta\bullet_{obs}=\frac{(box_{obs,\bullet}-c_{obs,\bullet})\cdot\hat{z}_{obs}}{f_{obs,\bullet}}-\frac{(box_{tmp,\bullet}-c_{tmp,\bullet})\cdot z_{tmp}}{f_{tmp,\bullet}} \tag{8}\] The \(3D\) translation vector is ultimately composed as \(t_{obs}=\{x_{tmp}+\Delta x_{obs},y_{tmp}+\Delta y_{obs},\hat{z}_{obs}\}\). The _VSD_-score is then defined as: \[VSD=\frac{1}{N}\sum_{n=1}^{N}\begin{cases}1&\text{if }e_{VSD,n}<0.3,\\ 0&\text{otherwise}\end{cases} \tag{9}\] where \(N\) again refers to the number of query samples in an evaluated test set. ### Implementation Details This section outlines the base method used for comparing ViT- to CNN-based template matching. Following that, the training procedure and the network architecture are detailed. **Baseline method** For demonstrating the difference between CNNs and ViTs for self-supervised matching of real query crops to synthetic templates, the baseline method of [32] is modified. In order to provide a fair comparison, all results are generated comparing backbones with a similar number of trainable parameters, ResNet50 [18] with \(23M\) and ViT-s [49] with \(21M\) parameters, pre-trained in a self-supervised manner [5] on [40]. The following paragraph details the training procedure and optimization settings. **Optimizer Setting** AdamW [54] is used as optimizer. The batch size is set to 16, which is also the case for the reference method [32]. The ViT networks are only trained for five epochs, as compared to the baseline, which is trained for 20 epochs. The linear scaling rule \(lr=lr_{b}\cdot batch\_size/256\) [14] is adopted for choosing the learning rate. A grid search was used to determine the base learning rate (\(lr_{b}\)) of \(2.5\cdot 10^{-5}\). No learning rate scheduling is used. Cosine weight decay scheduling, starting at 0.04 and ending at 0.4 after two epochs, is employed. The input image size is \(224^{2}\) and the template's mask size is \(14^{2}\). A patch size of 16 is used for input image tokenization. A single linear layer is used to project the backbone feature size of 384 to 32. This stands in contrast to works like [5, 8, 7], where multi-layered high-dimensional projectors are used. 
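As a minimal PyTorch-style sketch of these implementation details, the following shows the single-linear-layer projection head and the optimizer setup; the module name, the placement of the input batch normalization, and the omission of the cosine weight-decay schedule are simplifying assumptions.

```python
import torch
import torch.nn as nn

class LinearProjectionHead(nn.Module):
    """Single linearly-activated layer projecting 384-d ViT-s tokens to 32-d descriptors."""
    def __init__(self, in_dim=384, out_dim=32):
        super().__init__()
        self.norm = nn.BatchNorm1d(in_dim)   # input normalization, cf. [27]
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, tokens):               # tokens: (B, T, in_dim) positional tokens
        x = self.norm(tokens.transpose(1, 2)).transpose(1, 2)
        return self.proj(x)                  # (B, T, out_dim)

head = LinearProjectionHead()

# Linear scaling rule [14] with the grid-searched base learning rate and batch size 16;
# in the full setup the backbone parameters are optimized jointly with the head.
base_lr, batch_size = 2.5e-5, 16
optimizer = torch.optim.AdamW(head.parameters(), lr=base_lr * batch_size / 256,
                              weight_decay=0.04)
```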
The input to the projection head is normalized using batch normalization [27]. The output of the projector is normalized using [2]. Section 4.5 ablates mask and descriptor size, as well as the choice for the projection head. ### Main Results This section presents experiments comparing ResNet50 [18] as feature extractor to ViT-s [49]. Evaluations are provided comparing to the state of the art for \(3D\) template matching to the presented approach. #### 4.3.1 Results on LM/LM-O Table 2 compares the presented approach to those of [52], [3] and [32] for template matching on LM and LM-O. Reported are the true positive rates of matched templates with respect to object class and rotational error below \(15\deg\) (\(Acc15\)), as defined in [52]. We follow the paradigm of [32] and report the results of the best-performing epoch during fine-tuning. The results show that using ViTs as feature extractor consistently outperforms the CNN approach for objects seen and unseen during training. Both, conceptually similar approaches, use backbones with a comparable amount of parameters, ResNet50 [18] with \(23M\) and ViT-s [49] with \(21M\). It has to be mentioned that the method of [32] is fine-tuned for 20 epochs while the ViTs are fine-tuned for only 5. Figure 2 shows a detailed comparison for the individual data splits of LM and LM-O, using ResNet50 [18] and ViT-small [49] as feature extractors for template matching.. Tendentiously, ViT-s improves in pose estimation with respect to all rotational error thresholds on all the splits. The only exceptions are the seen LM split 3, unseen LM-O split 2 and seen LM-O split 3. #### 4.3.2 Results on T-LESS Table 3 compares the proposed approach to the approaches of [32], [44] and [46]. We follow the evaluation paradigm of [44] and report the VSD-score [23] using the standard thresholds, and the ground truth bounding box as basis for translation estimation. We report the performance after one epoch of fine-tuning, as compared to the 25 epochs for [32]. The results show that our approach, using ViT-small [49] as feature extractor, consistently outperforms the competing approaches for objects seen and unseen during training. Especially relevant is the comparison to the conceptually similar approach \begin{table} \begin{tabular}{|c|l|c|c|c|c|} \hline & & \multicolumn{2}{c|}{seen} & \multicolumn{2}{c|}{unseen} \\ Method & Backbone & LM & LM-O & LM & LM-O \\ \hline [52]\(\dagger\) & RN50[18] & 98.1 & 67.5 & 45.1 & 29.9 \\ [3]\(\dagger\) & RN50[18] & 96.1 & 64.7 & 44.3 & 29.1 \\ [32] & RN50[18] & 99.1 & 79.4 & 93.5 & 76.3 \\ Ours & ViT-s[49] & **99.8** & **82.2** & **96.4** & **80.2** \\ \hline \end{tabular} \end{table} Table 2: **Comparison on LM/LM-O.** Amount of true poses for a rotational error threshold of \(15\deg\) (_Acc15_[52]) for objects seen and unseen during training, see Table 1. The compared backbones have similar parameters, \(23M\) for ResNet50 [18] and \(21M\) for ViT-s [49]. Results for the methods indicated with \(\dagger\) are taken from [32]. of [32], which again uses ResNet50 [18] as backbone. These results show that ViTs work well for industrial objects of T-LESS, resulting in similar pose estimation accuracy for seen and unseen objects. The following section presents pose estimation results using ViTs without fine-tuning. ### Feature Extractor Fine-Tuning This section discusses and presents results using only ImageNet-pretrained ViTs as feature extractor. 
In order to use the pre-trained backbone without fine-tuning, the last linear projection layer is discarded. The output dimensionality per feature map location is 384. Table 4 compares the presented approach with \begin{table} \begin{tabular}{|c|c|c|c|} \hline Method & **seen:** & **unseen:** & Average \\ & Objects 1-18 & Objects 19-30 & \\ \hline [46] & 35.60 & 42.45 & 38.34 \\ [44] & 35.25 & 33.17 & 34.42 \\ [32] & 59.62 & 57.75 & 58.87 \\ Ours & **70.65** & **68.03** & **69.71** \\ \hline \end{tabular} \end{table} Table 3: **Comparison on T-LESS.** Results are presented using the _VSD_-score with the standard thresholds presented in [21]. Figure 2: **Results on LM and LM-O splits in detail.** Reported is the percentage of true poses for different rotational error thresholds, of the CNN- and ViT-backbone for the seen and unseen object splits. and without fine-tuning (indicated with "f.t." in the table) on LM, LM-O and T-LESS. The pre-trained ViT-s demonstrate tremendous generality with respect to feature embedding. On the LM and LM-O datasets the matching accuracy using _Acc15_ is higher than that of [52] and [3], see Table 2. Yet, fine-tuning improves for all test cases. The matching accuracy on both, seen and unseen, T-LESS sets, evaluated using the _VSD_-metric, is higher than for all methods compared against in Table 3, even without fine-tuning. Fine-tuning further improves performance. The presented evaluation shows that ViTs pre-trained in a self-supervised fashion learn features that translate well to new tasks with a large shift in object categories, even without fine-tuning. ### Ablation Study This sections discusses the difference in output space size and descriptor size for CNNs and ViTs. ViT and CNN approaches benefit from multi-layered, high-dimensional projection heads [8, 5, 7, 15]. Ultimately, we present experiments on the influence of projection head on our approach and additional architecture choices. #### 4.5.1 Descriptor Size The left plot of Figure 3 evaluates the influence of the descriptor size on the presented approach, and the ResNet50 baseline one on the seen and unseen sets of LM. The cumulative rotational error on LM decreases steadily with increasing descriptor size when using ResNet50. Yet, the optimal dimensionality is 16 for minimizing the rotational error for the unseen LM objects. While the descriptor size has a large influence on the seen LM set and even more on the unseen one, the behaviour using ViT-s is vastly different. For ViT-s the descriptor dimensionality has little influence and leads to low errors over a broad range of dimensions for seen and unseen objects. While for ResNet50 the error progression is different for both sets, the dimensionality that minimizes the error on both sets is 32 when using ViT-s. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{Metric} & \multicolumn{1}{c|}{f.t.} & \multicolumn{2}{c|}{**seen**} & \multicolumn{2}{c|}{**unseen**} \\ \hline _Acc15_[52] & & LM & LM-O & LM & LM-O \\ \hline \multirow{4}{*}{_VSD_[21]} & ✗ & 81.3 & 56.3 & 85.1 & 63.6 \\ & ✓ & **99.8** & **82.2** & **96.4** & **80.2** \\ \hline _VSD_[21] & & \multicolumn{4}{c|}{T-LESS} \\ \hline \multirow{4}{*}{_VSD_[21]} & ✗ & 63.93 & 62.93 \\ & ✓ & **70.65** & **68.03** \\ \hline \end{tabular} \end{table} Table 4: **Influence of fine-tuning. Result comparison for fine-tuning (f.t.) 
the ViT-s backbone versus only using the pre-trained feature extractor without fine-tuning.** #### 4.1.2 Mask Size The matching accuracy of the baseline method [32] increases when using spatially higher-dimensional feature maps since occlusion handling improves. In order to use larger feature maps for computing the template similarities we adopt the projection head of the baseline. Instead of using two convolutional layers for downsampling, we employ two transposed convolutional layers for upsampling. Both are ReLU [31]-activated. The first projecting the 384 dimensional feature vectors output by the backbone to 256, the second one to 32. Both convolutions apply no feature map padding, slide with a stride of one over the feature map and use the same kernel size, which is set depending on the desired mask size to either 3, 5, 7, 9, or 11. This projector replaces the projection head detailed in Section 4.2. The right plot of Figure 3 evaluates the influence of the mask size on the rotational error of the matched templates. For the presented comparison the ResNet50 baseline approach [32] uses a descriptor size of 16, and ViT-s is used with a descriptor size of 32. With the ResNet50 backbone, for both the the seen and unseen objects the rotational error reduces with increasing mask size. Using the proposed ViT-s approach the behaviour is again vastly different. While the influence of the mask size is negligible for the seen objects, the rotational error for the novel objects increases significantly when increasing the mask size. This indicates that ViT-s learns relevant features for the seen objects during fine-tuning with projection heads with larger spatial output. As such, the template matching accuracy remains constant. However, increasing the feature map size used for matching is detrimental for novel objects. This correlates with the results presented in Section 4.4, which indicate that ViTs already generalize well without fine-tuning. The feature projection learned by a projection head with increased spatial output is less general and thus increases template matching error for novel objects. Figure 3: **Influence of the descriptor and mask size on LM seen and unseen** The left plot shows the influence on the rotational error of the retrieved templates when using ResNet50 and ViT-s with different descriptor sizes. The mask size is set to 32 for ResNet50 and to 14 for ViT-s. The right plot show the same comparison for different mask sizes. The descriptor size is set to 16 for ResNet50 and to 32 for ViT-s. #### 4.2.2 Network Architecture Design This section ablates different aspects of network design choices when using self-supervised learning frameworks. We investigate patch embedding and projection head design. Table 5 reports the average rotational error on LM and LM-O for the investigated aspects. **Projection Head** The works of [7, 15, 8] use high-dimensional, multi-layered projection heads to project the feature output of the backbone to the desired dimensionality. The work of [7] uses a two-layered MLP, with the first ReLU [31] and the second layer linearly activated. The work of [15] and [5] both use three-layered MLPs, yet different versions. The latter using GELU [19]-activated hidden layers and weight normalization [41]. In [8], the projection head of [15] and the prediction head of [7] are combined. Features are normalized using batch normalization [27], and hidden layers are ReLU-activated. 
We compare using these projection heads to using no head or a single linear layer as head. Since using no head requires using the backbone's output as it is, the descriptor dimensionality per feature map location is 384. For all the evaluated projection heads a hidden dimension of four times the output dimension of the previous stage and batch normalization are used. We have tested with and without using weight normalization as used by [5]. Compared to batch normalization both consistently lead to increased rotational error of the matched templates. Table 5 compares the average rotational errors of different projection heads on LM and LM-O. The lowest error per set is indicated in bold, the highest is indicated with an underline. The lowest errors on se \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{seen} & \multicolumn{2}{|c|}{unseen} \\ \hline **Head** & p.e. & act. & **LM** & **LM-O** & **LM** & **LM-O** \\ \hline none & l & & 3.14 & 10.95 & 7.80 & 15.44 \\ & r & & 3.27 & 10.96 & 5.87 & 13.05 \\ \hline linear & l & & 3.07 & 11.05 & 5.39 & 12.83 \\ & r & & 3.14 & 10.69 & 5.02 & **11.78** \\ & r & ReLU & 3.04 & 10.56 & 5.37 & 12.85 \\ & r & GELU & 3.12 & 10.28 & 4.98 & 12.75 \\ \hline [7] & r & & 3.11 & 10.47 & **4.67** & 12.20 \\ [7] & r & ReLU & 3.04 & 10.53 & 4.92 & 12.06 \\ [7] & r & GELU & **3.02** & 10.69 & 5.52 & 13.59 \\ \hline [15] & r & & 3.12 & 10.66 & 5.11 & 12.56 \\ [15] & r & ReLU & 3.17 & **10.20** & 5.14 & 14.49 \\ [15] & r & GELU & 3.04 & 10.70 & 5.17 & 13.56 \\ \hline [8] & r & & 3.07 & 10.92 & 5.28 & 12.67 \\ [8] & r & ReLU & 3.05 & 11.22 & 5.69 & 14.45 \\ [8] & r & GELU & 3.14 & 10.87 & 5.01 & 12.98 \\ \hline \end{tabular} \end{table} Table 5: **Network architecture.** Reported is the average rotational error on LM and LM-O. The projection heads output a feature dimensionality of 32. When no head is used the standard ViT-s dimensionality of 384 is output. The column patch embedding (p.e.) indicates if the patch embedding layer is updated (l) or frozen (r) during fine-tuning. seen LM-O occur with heads with less layers. Using no head leads to comparably high errors. When using projection heads, the highest errors over all sets occur using higher dimensional heads. In general, for the seen objects the results are similar for all heads. Yet, projection heads with a smaller number of layers lead to less rotational error on unseen objects. This evaluation stands in contrast to self-supervised ViTs for classification that use projection heads with \(>=3\) layers and high dimensional hidden and last layers [8, 5]. The choice of activation appears to have little influence. Yet, heads with a lower number of layers shows reduced error on unseen objects when using no activation function. **Patch embedding** The authors of [8] propose to use random patch embedding to increase stability during training. We experiment with the initialization of the convolution layer used for patch embedding. The second column in Table 5 (p.e.) ablates the influence. Updating the pre-trained patch embedding layer during fine-tuning is referred to as learned (l). With a slight abuse of denotation we refer to not updating the patch embedding layer during fine-tuning as random (r). We observe a similar effect as in [8]. While the error difference for the seen objects is insignificant, using random patch embedding leads to significantly less error on the unseen objects. 
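In practice, the "random" (frozen) patch-embedding variant amounts to excluding the patch-embedding projection from gradient updates during fine-tuning. A minimal sketch using the timm ViT-s model is given below; the attribute name `patch_embed` follows the timm convention and the loading of the self-supervised checkpoint of [5] is omitted, so both are assumptions.

```python
import timm

# ImageNet-pretrained ViT-s backbone (the self-supervised weights of [5] would be loaded here).
backbone = timm.create_model("vit_small_patch16_224", pretrained=True)

# Keep the pre-trained patch-embedding projection fixed during fine-tuning ("r" in Table 5).
for p in backbone.patch_embed.parameters():
    p.requires_grad = False

trainable_params = [p for p in backbone.parameters() if p.requires_grad]
```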
### Self-Attention Figures 4 and 5 visualize self-attentions maps on the training and test sets of LM/LM-O and T-LESS, respectively. The same projection mechanism as in [5] is used. Figure 4: **Self-Attention on LM/LM-O** Visualized is the self-attention of the first head of the last self-attention layer using the positional tokens as input. On LM/LM-O, Figure 4, ViT-s effectively learns to encode relevant features of the seen objects. The unseen test case shows that the learned self attentions not only transfer the concept of objectness to unseen objects, but also manages to distinguish relevant from irrelevant feature map locations. On T-Less, Figure 5, object crops often show dataset objects in front or behind the query object, as is visualized in the seen and unseen test images. Cropping the feature map using the template's mask is important in order to improve matching accuracy. ## 5 Conclusion This work presents diverse empirical analyses for using ViTs for self-supervised template matching for \(3D\) pose retrieval. The presented findings are threefold. Using ViTs for deep template matching improves matching accuracy for seen and novel objects, in comparison to CNNs. Using pre-trained ViTs in a zero-shot fashion, that is without fine-tuning, already exhibits strong matching accuracy. Depending on the object set and metric used for evaluation, even improving over using a similar, fine-tuned CNN-based approach. For the problem of self-supervised synthetic template to real query object matching the network architecture is different to a comparable CNN approach and to self-supervised ViTs for image classification. In comparison to CNNs, ViTs benefit more from pre-training due to their feature extraction being more general. And in comparison to self-supervised ViTs for image classification, large, multi-layered projector heads are detrimental to the matching accuracy on novel objects. We hypothesize that Figure 5: **Self-Attention on T-LESS** Visualized is the self-attention of the first head of the last self-attention layer using the positional tokens as input. this occurs due to the stronger overfitting of deeper heads on the seen examples during fine-tuning, in turn harming the generality of the features learned during pre-training. Future work will investigate how to effectively exploit the features learned during ViT pre-training.
2309.11879
Separability transitions in topological states induced by local decoherence
We study states with intrinsic topological order subjected to local decoherence from the perspective of separability, i.e., whether a decohered mixed state can be expressed as an ensemble of short-range entangled (SRE) pure states. We focus on toric codes and the X-cube fracton state and provide evidence for the existence of decoherence-induced separability transitions that precisely coincide with the threshold for the feasibility of active error correction. A key insight is that local decoherence acting on the 'parent' cluster states of these models results in a Gibbs state. As an example, for the 2d (3d) toric code subjected to bit-flip errors, we show that the decohered density matrix can be written as a convex sum of SRE states for $p > p_c$, where $p_c$ is related to the paramagnetic-ferromagnetic transition in the 2d (3d) random-field bond Ising model along the Nishimori line.
Yu-Hsueh Chen, Tarun Grover
2023-09-21T08:28:17Z
http://arxiv.org/abs/2309.11879v2
# Separability transitions in topological states induced by local decoherence ###### Abstract We study states with intrinsic topological order subjected to local decoherence from the perspective of _separability_, i.e., whether a decohered mixed state can be expressed as an ensemble of short-range entangled pure states. We focus on toric codes and the X-cube fracton state and provide evidence for the existence of decoherence-induced separability transitions that precisely coincide with the error-recovery transitions. A key insight is that local decoherence acting on the 'parent' cluster states of these models results in a Gibbs state. In this work we will explore aspects of many-body topological states subjected to decoherence from the perspective of _separability_, i.e., whether the resulting mixed-state can be expressed as a convex sum of short-range entangled (SRE) states [1; 2; 3]. This criteria is central to the definition of what constitutes a trivial or a non-trivial mixed-state, and various mixed-state entanglement measures such as negativity[4; 5; 6; 7; 8] and entanglement of formation [9] are defined so as to quantify non-separability. At the same time, currently there exist no faithful measures of mixed-state entanglement that are efficiently calculable for many-body states [3] (a faithful measure is zero if and only if a mixed-state can be written as a convex sum of unentangled pure states). This limitation makes it rather challenging to know if a mixed state is separable. We will be particularly interested in decoherence-induced "separability transitions", i.e., transitions tuned by decoherence such that the density matrix in one regime is expressible as a convex sum of SRE states, and in the other regime, it is not. One salient distinction between pure state Vs mixed-state dynamics is that although a short-depth unitary evolution cannot change long-range entanglement encoded in a pure state, a short-depth local _channel_ can fundamentally alter long-range mixed-state entanglement. Therefore, even the limited class of mixed states that are obtained by the action of local short-depth channels on an entangled pure state offer an opportunity to explore mixed-state phases and phase transitions [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. We will focus on mixed-states that are obtained via subjecting several well-understood topologically ordered phases of matter to short-depth quantum channels. Error-threshold theorems [23; 24; 25; 26; 27; 28] indicate that if a pure state has intrinsic topological order, then it will be perturbatively stable against decoherence from a short-depth, local quantum channel. Therefore, the mixed state is expected to undergo a phase transition as a function of the decoherence rate [29]. Such transitions were originally studied from the perspective of quantum error correction (QEC) in Refs.[30; 31] and more recently using mixed-state entanglement measures such as topological negativity [14], and other non-linear functions of the density matrix (Refs.[13; 14; 15]). These approaches clearly establish at least two different mixed-state phases: one where the topological qubit can be decoded, and the other where it can't. However, it is not obvious if the density matrix in the regime where decoding fails can be expressed as a convex sum of SRE pure states, which, following Refs.[1; 2], we will take as the definition of an SRE mixed state. 
Our main result is that for several topologically ordered phases subjected to local decoherence, one can explicitly write down the decohered mixed state as a convex sum of pure states which we argue all undergo a topological phase transition, from being long-ranged entangled to short-ranged entangled, at a threshold that precisely corresponds to the optimal threshold for QEC. Therefore, in these examples, we argue that the error-recovery transition does indeed coincide with a separability transition. A key observation in our analysis is that when the parent cluster states of these models [22; 33; 34; 32] are subjected to local decoherence, one obtains a Gibbs state. This naturally leads to the aforementioned 'canonical' decomposition of the decohered density matrix in terms of pure states which captures the separability transition. This decomposition also provides a direct connection between the separability transitions and statistical mechanics models that naturally appear in the context of classical and quantum error-correcting codes [30; 31; 35; 36; 37; 38; 39]. Figure 1: (a) Topological orders under local decoherence can undergo a separability transition, where only above a certain critical error rate, the decohered mixed state \(\rho_{\text{dec}}\) can be written as a convex sum of SRE pure states. The bottom depicts the parent cluster states and their offspring models obtained by appropriate measurements (indicated by an arrow): (b) 2d cluster Hamiltonian and 2d toric code, (c) 3d cluster Hamiltonian and 3d toric code, and (d) “Cluster-X” Hamiltonian [22] and the X-cube Hamiltonian. Let us begin by considering the ground state of 2d toric code (see Fig.1(b)) with Hamiltonian \(H_{\text{2d toric}}=-\sum_{v}(\prod_{e\in v}Z_{e})-\sum_{p}(\prod_{e\in p}X_{e})\) subjected to phase-flip errors. Here the Hilbert space consists of qubits residing on the edges (denoted as '\(e\)') of a square lattice and we assume periodic boundary conditions. If one denotes the ground state as \(\rho_{0}\), then the Kraus map corresponding to the phase-flip errors acts on an edge \(e\) as \(\mathcal{E}_{e}[\rho]=pZ_{e}\rho Z_{e}+(1-p)\rho\), and the full map is given by the composition of this single-edge map over all edges. The key first step is to identify a 'parent' cluster Hamiltonian (in the sense of Refs.[22; 32; 33; 34]) such that the application of the aforementioned Kraus map to its ground state results in a Gibbs state. For example, for the problem at hand, consider \(H_{\text{2d cluster}}=\sum_{v}h_{v}+\sum_{e}h_{e}\) where \(h_{v}=-X_{v}(\prod_{e\ni v}Z_{e})\) and \(h_{e}=-X_{e}(\prod_{v\in e}Z_{v})\), see Fig.1(b). Note that here the Hilbert space consists of qubits both on the vertices and the edges of the square lattice. A simple calculation shows that the action of the aforementioned Kraus map on the ground state \(\rho_{C,0}\left(\propto\prod_{e}(I-h_{e})\prod_{v}(I-h_{v})\right)\) of \(H_{\text{2d cluster}}\) yields \(\mathcal{E}_{e}[\rho_{C,0}]\propto e^{-\beta\sum_{e}h_{e}}\prod_{v}(I-h_{v})\) where \(\tanh(\beta)=1-2p\). The ground state density matrix \(\rho_{0}\) of the 2d toric code can be written as \(\rho_{0}\propto\langle x_{\mathbf{v}}=1|\rho_{C,0}|x_{\mathbf{v}}=1\rangle\), where \(|x_{\mathbf{v}}=1\rangle=\otimes_{v}|x_{v}=1\rangle\) is the product state in the Pauli-\(X\) basis. 
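The single-edge version of this calculation is short enough to state explicitly; the following is a sketch of the reasoning, using \(h_{e}^{2}=I\) and the fact that \(Z_{e}\) anticommutes with the \(X_{e}\) contained in \(h_{e}\) while commuting with all other terms of \(\rho_{C,0}\): \[\mathcal{E}_{e}\left[\frac{I-h_{e}}{2}\right]=(1-p)\,\frac{I-h_{e}}{2}+p\,Z_{e}\,\frac{I-h_{e}}{2}\,Z_{e}=\frac{1}{2}\left[I-(1-2p)\,h_{e}\right]\propto e^{-\beta h_{e}},\] since \(e^{-\beta h_{e}}=\cosh(\beta)\left[I-\tanh(\beta)\,h_{e}\right]\) for \(h_{e}^{2}=I\), which fixes \(\tanh(\beta)=1-2p\); applying this edge by edge reproduces the Gibbs form quoted above.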
The projection selects one specific ground state of the toric code that is an eigenvector of the non-contractible Wilson loops \(W_{\ell}=\prod_{e\in\ell}X_{e}\) with eigenvalue \(+1\) along both cycles \(\ell\) of the torus. Using the above Gibbs form of \(\mathcal{E}_{e}[\rho_{C,0}]\), the decohered density matrix of the toric code is \[\rho\propto\langle x_{\mathbf{v}}=1|e^{-\beta\sum_{e}h_{e}}|x_{\mathbf{v}}=1 \rangle P_{Z}, \tag{1}\] where \(P_{Z}=\prod_{v}(I+\prod_{e\ni v}Z_{e})\). By inserting a complete set of states \(\{|x_{\mathbf{e}},x_{\mathbf{v}}\rangle\}\) between \(e^{-\beta\sum_{e}h_{e}}\) and \(|x_{\mathbf{v}}=1\rangle\), one may simplify the above expression to obtain \(\rho\propto P_{Z}\rho_{e}P_{Z}\) where \(\rho_{e}=\sum_{x_{\mathbf{e}}}\mathcal{Z}_{\text{2d Ising},x_{\mathbf{e}}}|x_{ \mathbf{e}}\rangle\langle x_{\mathbf{e}}|\) and \(\mathcal{Z}_{\text{2d Ising},x_{\mathbf{e}}}=\sum_{z_{\mathbf{v}}}e^{\beta \sum_{e}x_{e}\prod_{e\in z_{\mathbf{v}}}z_{\mathbf{v}}}\) is the partition function of the 2d Ising model with Ising interactions determined by \(\{x_{e}\}\). Thus, \(\rho\propto\sum_{x_{\mathbf{e}}}\mathcal{Z}_{\text{2d Ising},x_{\mathbf{e}}}| \Omega_{x_{\mathbf{v}}}\rangle\langle\Omega_{x_{\mathbf{e}}}|\), where \(|\Omega_{x_{\mathbf{e}}}\rangle\propto\prod_{v}(I+\prod_{e\ni v}Z_{e})|x_{ \mathbf{e}}\rangle\) are nothing but a subset of toric code eigenstates. Note that in this derivation, the 2d Ising model emerges due to the \(h_{e}\) terms in the parent cluster Hamiltonian, and ultimately, this will lead to the relation between the separability transition and the statistical mechanics of the 2d random-bond Ising model (RBIM) that also describes the error-recovery transition [30]. We note that the above spectral representation of \(\rho\) in terms of toric code eigenstates has also previously appeared in Ref.[13], using a different derivation. Since non-contractible cycles of the torus will play an important role below, let us note that distinct eigenstates \(|\Omega_{x_{\mathbf{e}}}\rangle\) can be uniquely specified by two labels: the first label corresponds to the set of local \(\mathbb{Z}_{2}\) fluxes \(f_{p}=\prod_{e\in p}x_{e}\) through elementary plaquettes \(p\), while the second label \(\mathbf{L}=(L_{x}=\pm 1,L_{y}=\pm 1)\) with \(L_{x}=\prod_{e\in\ell,e\parallel\hat{x}}x_{e},L_{y}=\prod_{e\in\ell,e\parallel \hat{y}}x_{e}\) and \(\ell\) a non-contractible loop along \(\hat{x}/\hat{y}\) direction, specifies the topological sector ('Logical data') in which a given eigenstate \(|\Omega_{x_{\mathbf{e}}}\rangle\) lives. We now probe the mixed state \(\rho\) using the separability criteria, i.e., we ask whether it can be decomposed as a convex sum of SRE states. Clearly, the aforementioned spectral representation is not a useful decomposition since it is written in terms of toric code eigenstates which are all long-range entangled (LRE). Taking cue from the argument for separability of the Gibbs state of toric codes [40], we decompose \(\rho\) as \(\rho=\sum_{z_{\mathbf{e}}}\rho^{1/2}|z_{\mathbf{e}}\rangle\langle z_{\mathbf{e }}|\rho^{1/2}\equiv\sum_{m}|\psi_{m}\rangle\langle\psi_{m}|\) where \(\{z_{\mathbf{e}}\}\) are a complete set of product states in the Pauli-\(Z\) basis, and \(|\psi_{m}\rangle=\rho^{1/2}|z_{\mathbf{e}}\rangle\). Generically, to determine whether \(\rho\) is an SRE mixed state, one needs to determine whether _each_\(|\psi_{m}\rangle\) is SRE. 
However, for the current case of interest, it suffices to consider only \(|\psi\rangle=\rho^{1/2}|m_{0}\rangle\) with \(|m_{0}\rangle=|z_{\mathbf{e}}=1\rangle\). The reason is as follows. The Gauss's law (\(\prod_{e\ni v}Z_{e}=1\)) implies that the Hilbert space only contains states that are closed loops in the \(Z\) basis. Therefore, one may write \(|m\rangle=g_{x}|m_{0}\rangle\) where \(g_{x}\) is a product of _single-site_ Pauli-\(X\)s forming closed loops. Since \([g_{x},\rho]=0\), this implies that \(|\psi_{m}\rangle\equiv|\psi_{g_{x}}\rangle=g_{x}|\psi\rangle\), and therefore, if \(|\psi\rangle\) is SRE (LRE), so is \(|\psi_{g_{x}}\rangle\). \(\rho(\beta)\) may then be written as: \(\rho(\beta)=\sum_{g_{x}}|\psi_{g_{x}}(\beta)\rangle\langle\psi_{g_{x}}(\beta)|\). Now, using the aforementioned spectral representation of \(\rho\), the (non-normalized) state \(|\psi\rangle=\rho^{1/2}|z_{\mathbf{e}}=1\rangle\) is: \[|\psi(\beta)\rangle\propto\sum_{x_{\mathbf{e}}}[\mathcal{Z}_{\text{2d Ising},x_{ \mathbf{e}}}(\beta)]^{1/2}|x_{\mathbf{e}}\rangle, \tag{2}\] It is easy to see that when \(\beta=\infty\), \(|\psi\rangle\propto|\Omega_{0}\rangle\), the non-decohered toric code ground state, while when \(\beta=0\), \(|\psi\rangle\propto|z_{\mathbf{e}}=1\rangle\) is a product state. This hints at a phase transition for \(|\psi(\beta)\rangle\) from being an LRE state to an SRE state as we increase the error rate \(p\) (i.e. decrease \(\beta\)). It is worth emphasizing that _if_ we succeed in showing that \(\rho\) is a convex sum of SRE pure states, then long-range mixed-state entanglement must be zero (as quantified by any valid mixed-state entanglement measure, including ones which require an optimization over all possible decompositions). Therefore, thinking in terms of convex decomposition allows one to potentially leverage well-understood diagnostics of pure state entanglement to make general statements about the complexity/entanglement of a mixed state. To locate the transition point \(\beta_{c}\) for the state \(|\psi(\beta)\rangle\), we consider the expectation value of the 'anyon condensation operator' (also known as 't Hooft loop) defined as [15; 41; 42; 43]\(T_{\tilde{\ell}}=\prod_{e\in\tilde{\ell}}Z_{e}\), where \(\tilde{\ell}\) denotes a homologically non-contractible loop on the dual lattice (recall that in the language of \(\mathbb{Z}_{2}\) gauge theory [44; 44], \(Z_{e}\sim e^{i\pi(\text{Electric field})_{e}}\), and if one starts with a ground state of toric code that is an eigenstate of Wilson loop operators \(W_{\ell}\) along non-contractible cycles \(\ell\), e.g. \(\left|\Omega_{0}\right\rangle\), then the action of \(T_{\tilde{\ell}}\) takes one to a different topological sector). Physically, \(\langle T_{\tilde{\ell}}\rangle\equiv\langle\psi|T_{\tilde{\ell}}|\psi\rangle/ \langle\psi|\psi\rangle\) captures the amplitude of tunneling from one logical subspace to an orthogonal one, and therefore it is zero in the \(\mathbb{Z}_{2}\) topologically ordered phase, and non-zero in the topologically trivial phase (=anyon condensed phase). 1 One may easily verify that \(\langle T_{\tilde{\ell}}\rangle=0\left(1\right)\) when \(\beta=\infty\left(\beta=0\right)\). Using Eq.(2), the effect of \(T_{\tilde{\ell}}\) on \(\left|x_{\mathbf{e}}\right\rangle\) is to flip spins along the curve \(\tilde{\ell}\) (i.e., \(x_{e}\rightarrow-x_{e},\forall e\in\tilde{\ell}\)), and we denote the corresponding configuration as \(x_{\tilde{\ell},\mathbf{e}}\). 
Crucially, while \(x_{\tilde{\ell},\mathbf{e}}\) and \(x_{\mathbf{e}}\) have the same flux through every elementary plaquette, they correspond to different values of the logical label \(\mathbf{L}\) and cannot be obtained from one to another through a local transformation \(x_{e}\to x_{\mathbf{e}}\prod_{v\in e}s_{v},s_{v}=\pm 1\). Therefore, \(T_{\tilde{\ell}}|\psi\rangle\propto\sum_{x_{\mathbf{e}}}[\mathcal{Z}_{x_{ \mathbf{e}}}]^{1/2}|x_{\tilde{\ell},\mathbf{e}}\rangle=\sum_{x_{\mathbf{e}}}[ \mathcal{Z}_{x_{\tilde{\ell},\mathbf{e}}}]^{1/2}|x_{\mathbf{e}}\rangle\), where we have changed the dummy variable \(x_{\mathbf{e}}\to x_{\tilde{\ell},\mathbf{e}}\), and suppressed the subscript '2d Ising' under the partition function \(\mathcal{Z}\) for notational convenience. Thus, Footnote 1: We note that the ‘t Hooft loop works as an order parameter for detecting topological order in our pure state \(\left|\psi\right\rangle\) because the Gauss’s law (\(\prod_{e\ni v}Z_{e}=1\)) is satisfied exactly and therefore are no dynamical charges. Besides, note that \(\text{tr}\big{(}\rho T_{\tilde{\ell}}\big{)}\), where \(\rho\) is the decohered mixed state, is identically zero, and clearly does not capture mixed-state entanglement. \[\langle T_{\tilde{\ell}}\rangle =\frac{\sum_{x_{\mathbf{e}}}\sqrt{\mathcal{Z}_{x_{\mathbf{e}}} \mathcal{Z}_{x_{\tilde{\ell},\mathbf{e}}}}}{\sum_{x_{\mathbf{e}}}\mathcal{Z}_{ x_{\mathbf{e}}}}=\frac{\sum_{x_{\mathbf{e}}}\mathcal{Z}_{x_{\mathbf{e}}}e^{- \Delta F_{x_{\tilde{\ell},\mathbf{e}}}/2}}{\sum_{x_{\mathbf{e}}}\mathcal{Z}_{ x_{\mathbf{e}}}} \tag{3}\] \[=\langle e^{-\Delta F_{\tilde{\ell}}/2}\rangle\geq e^{-\langle \Delta F_{\tilde{\ell}}\rangle/2}\] where \(\Delta F_{x_{\tilde{\ell},\mathbf{e}}}=-\log\Bigl{(}\mathcal{Z}_{x_{\tilde{ \ell},\mathbf{e}}}/\mathcal{Z}_{x_{\mathbf{e}}}\Bigr{)}\) is the free energy cost of inserting a domain wall of size \(\left|\tilde{\ell}\right|\sim L\) (= system's linear size) in the RBIM along the Nishimori line [30], and we have used Jensen's inequality in the last sentence. Note that we are along the Nishimori line because the probability of a given gauge invariant label \(\{f_{\mathbf{p}},\mathbf{L}\}\) along the Nishimori line is precisely the partition function \(\mathcal{Z}_{x_{\mathbf{e}}}\)[30]. Since \(\langle\Delta F_{\tilde{\ell}}\rangle\), the disorder-averaged free energy cost, diverges with \(L\) in the ferromagnetic phase of the RBIM while converges to a constant in the paramagnetic phase [30], Eq.(3) rigorously shows that for \(p>p_{c}=p_{\text{2d RBIM}}\approx 0.109\)[45], \(\langle T_{\tilde{\ell}}\rangle\) saturates to a non-zero constant. Therefore, \(\left|\psi\right\rangle\) is a topologically trivial state when \(p>p_{c}\), and hence the mixed state is SRE for \(p>p_{c}\). In contrast, for \(p<p_{c}\), due to non-vanishing ferromagnetic order (and associated domain wall cost) of the RBIM, we expect that \(\langle T_{\tilde{\ell}}\rangle\sim e^{-\langle\Delta F_{\tilde{\ell}} \rangle/2}\sim e^{-cL}\to 0\) in the thermodynamic limit (\(c>0\) is a constant), implying that \(\left|\psi\right\rangle\) is topologically ordered. This is also consistent with the non-vanishing topological entanglement negativity for \(p<p_{c}\)[46]. Therefore, we conclude that for this model, error-recovery transition also corresponds to a separability transition for the mixed-state. Another diagnostic of topological order in pure states is the bipartite (Renyi) topological entanglement entropy (TEE) [47; 48; 49]. 
Dividing the whole system in real-space as \(A\cup B\), we define the reduced density matrix \(\rho_{A}=\text{tr}_{B}\left|\psi(\beta)\rangle\langle\psi(\beta)\right|\) for the state \(\left|\psi(\beta)\right\rangle\) of our interest (Eq.(2)). One finds (see Appendix A): \[\text{tr}\big{(}\rho_{A}^{2}\big{)}=\quad\frac{\sum_{x_{\mathbf{e}},x_{ \mathbf{e}}^{\prime}}\mathcal{Z}_{x_{A},x_{B}}\mathcal{Z}_{x^{\prime}_{A},x_{B} }e^{-\Delta F_{AB}(x_{\mathbf{e}},x^{\prime}_{\mathbf{e}})/2}}{\sum_{x_{ \mathbf{e}},x^{\prime}_{\mathbf{e}}}\mathcal{Z}_{x_{A},x_{B}}\mathcal{Z}_{x^{ \prime}_{A},x^{\prime}_{B}}}. \tag{4}\] Here \(x_{A}(x_{B})\) denotes all the edges belonging to the region \(A(B)\), \(\mathcal{Z}_{x_{A},x_{B}}\) denotes the partition function of the 2d Ising model with the sign of Ising interactions determined by \(x_{A}\) and \(x_{B}\), and \(\Delta F_{AB}(x_{\mathbf{e}},x^{\prime}_{\mathbf{e}})=-\log[\mathcal{Z}_{x^{ \prime}_{A},x_{B}}\mathcal{Z}_{x_{A},x^{\prime}_{B}}/(\mathcal{Z}_{x_{A},x_{B} }\mathcal{Z}_{x^{\prime}_{A},x^{\prime}_{B}})]\) can be thought of as the free energy cost of swapping bonds between two copies of RBIM in region \(A\). Using Eq.(4), we provide a heuristic argument in Appendix A that the TEE (= the system-size-independent subleading term for \(S_{2}=-\log\left(\text{tr}\big{(}\rho_{A}^{2}\big{)}\right)\) jumps from \(\log(2)\) to zero at \(p_{c}=p_{\text{2d RBIM}}\). The main idea is that in the ferromagnetic phase of the RBIM, the free energy penalty of creating a single Ising vortex leads to a specific non-local constraint on the allowed configurations that contribute to the sum in Eq.(4). The constraint is essentially that one needs to minimize the free energy cost \(\Delta F_{AB}(x_{\mathbf{e}},x^{\prime}_{\mathbf{e}})\) for each fixed flux configuration \(\{f_{p}\}\) and \(\{f^{\prime}_{p}\}\) corresponding to \(\{x_{e}\}\) and \(\{x^{\prime}_{e}\}\) in Eq.(4). One finds that there always exists a _pair_ of configurations that contribute equally to \(\text{tr}\big{(}\rho_{A}^{2}\big{)}\) while satisfying the aforementioned constraint. This results in a subleading contribution of \(-\log(2)\) in the entanglement entropy, that we identify as TEE. In the paramagnetic phase, the aforementioned non-local constraint does not exist, and one therefore does not expect a non-zero TEE. 2 Therefore, we arrive at the same conclusion as the one obtained using the anyon condensation operator. Footnote 2: Note that the argument for TEE is reminiscent of the argument in Refs.[50; 14] where non-zero TEE/topological negativity results from a non-local constraint on the entanglement boundary. However, the precise origin of the constraint is a bit different, as the calculation in Refs.[50; 14] is in a dual picture where the topological (trivial) phase corresponds to a paramagnetic (ferromagnetic) phase. Incidentally, Eq.(1) also allows us to construct an alternative convex decomposition of the decohered mixed state \(\rho\) that shows a phase transition at a certain (non-optimal) threshold \(p_{\text{non-optimal}}\) which is also related to the 2d RBIM. Despite being non-optimal, it is conceptually interesting since the resulting separability threshold is related via a Kramers-Wannier duality [51] to the RBIM. The main outcome is that \(\tanh(\beta_{\text{non-optimal}})=1-2p_{\text{non-optimal}}\) satisfies \(\tanh^{2}(\beta_{\text{non-optimal}}/2)=p_{\text{2d RBIM}}/(1-p_{\text{2d RBIM}})\) which yields \(p_{\text{non-optimal}}\approx 0.188\). 
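The quoted value follows directly from combining these two relations; as a minimal numerical check (with the 2d RBIM threshold of [45] as input):

```python
import numpy as np

p_rbim = 0.109                            # 2d RBIM threshold along the Nishimori line [45]
t_half = np.sqrt(p_rbim / (1 - p_rbim))   # tanh(beta_no / 2) from the duality relation
tanh_beta = 2 * t_half / (1 + t_half**2)  # double-angle identity for tanh
p_non_optimal = (1 - tanh_beta) / 2       # invert tanh(beta_no) = 1 - 2 * p_no
print(round(p_non_optimal, 3))            # -> 0.188
```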
See Appendix B for details where we also discuss an analogous non-optimal decomposition for the 3d toric code. Let us next consider 3d toric code with \(H_{\text{3d toric}}=-\sum_{f}(\prod_{e\in f}Z_{e})-\sum_{v}(\prod_{e\in v}X_{e})\) (see Fig.1(c)). We again assume periodic boundary conditions. We will be interested in subjecting the ground state of \(H_{\text{3d toric}}\) to phase (bit)-flip errors (non-trivial Kraus operators \(\sim Z_{e}\left(X_{e}\right)\)). Let us first consider phase-flip errors. We chose the parent cluster state as the ground state \(\rho_{C,0}\) of \(H_{\text{3d cluster}}=-\sum_{e}X_{e}\prod_{f\geq 0e}Z_{f}-\sum_{f}X_{f}\prod_{e\in f }Z_{e}=\sum_{e}h_{e}+\sum_{f}h_{f}\). The corresponding ground state density matrix of the 3d toric code \(\rho_{0}\) is \(\rho_{0}\propto\langle x_{\mathbf{f}}=1|\rho_{C,0}|x_{\mathbf{f}}=1\rangle\), which is an eigenstate of the non-contractible 't Hooft membrane operators \(T_{xy},T_{yz},T_{zx}\) along the three planes with eigenvalue \(+1\) (\(T_{xy}=\prod_{e\parallel z}X_{e}\) where the product is taken over all edges parallel to the \(z\) axis in any \(xy\) plane. \(T_{yz}\) and \(T_{zx}\) are defined analogously). Following essentially the same steps as in 2d toric code, one obtains the decohered density matrix in the form \(\rho=\sum_{g_{x}}|\psi_{g_{x}}(\beta)\rangle\langle\psi_{g_{x}}(\beta)|\) with \(|\psi_{g_{x}}\rangle=g_{x}|\psi(\beta)\rangle\) and \(g_{x}\) a product of single-site Pauli-\(X\)s forming closed membranes. Therefore, we again only need to analyze whether \(|\psi\rangle\equiv\rho^{1/2}|z_{\mathbf{e}}=1\rangle\) is SRE or LRE. Again, one may rewrite \(|\psi(\beta)\rangle\propto\sum_{x_{\mathbf{e}}}[\mathcal{Z}_{\text{3d gauge},x_{ \mathbf{e}}}(\beta)]^{1/2}|x_{\mathbf{e}}\rangle\) where \(\mathcal{Z}_{\text{3d gauge},x_{\mathbf{e}}}=\sum_{z_{\mathbf{f}}}e^{\beta \sum_{e}z_{\mathbf{e}}+\prod_{f\geq 0}z_{f}}\) is now the partition function of a classical 3d Ising gauge theory with the sign of each plaquette term determined by \(\{x_{e}\}\). To probe the topological transition as a function of \(\beta\), we now consider the Wilson loop operator \(W_{\ell}=\prod_{e\in\ell}Z_{e}\), where \(\ell\) denotes a homologically nontrivial cycle on the original lattice, say, along \(z\) axis (so that it pierces and anti-commutes with the aforementioned 't Hooft operator \(T_{xy}\)). Essentially the same derivation as that for the 2d toric code shows that \(\langle W_{\ell}\rangle=\langle e^{-\Delta F_{\ell}/2}\rangle\geq e^{-\langle \Delta F_{\ell}\rangle/2}\), where \(\Delta F_{\ell}\) now denotes the free energy cost of inserting a domain wall along the non-contractible loop for the 3d random-plaquette gauge model (RPGM) along the Nishimori line. Since \(\langle\Delta F_{\ell}\rangle\) diverges as the length of \(|\mathcal{C}|\sim L\) (\(=\) system-size) in the Higgs (ordered) phase, while converges to a constant in the confinement (disordered) phase [31], one finds \(\langle W_{\ell}\rangle\) saturates to a non-zero constant when \(p>p_{\text{3d RPTGM}}\approx 0.029\)[31], while it vanishes for \(p<p_{c}\). Therefore, we expect that \(|\psi\rangle\), and correspondingly the decohered state \(\rho\), is SRE when \(p>p_{\text{3d RPTGM}}\). Let us next consider bit-flip errors. The aforementioned \(H_{\text{3d cluster}}\) is not a good starting point for our approach since we would like the corresponding ground state to take a Gibbs form when subjected to Kraus operators \(\sim X_{e}\). 
Therefore, one instead considers \(H^{\prime}_{\text{3d Cluster}}=-\sum_{v}Z_{v}(\prod_{e\ni v}X_{e})-\sum_{e}Z_{e }(\prod_{v\in e}X_{v})\), which has previously also appeared in Ref.[52]. The rest of the analysis is quite similar to that for the 2d toric code (with \(X\leftrightarrow Z\) everywhere). After writing the non-dechered density matrix of toric code as \(\rho_{0}\propto\langle z_{\mathbf{v}}=1|\rho_{C,0}|z_{\mathbf{v}}=1\rangle\), where \(\rho_{C,0}\) is the ground state of \(H^{\prime}_{\text{3d Cluster}}\), and subjecting it to bit-flip errors, the decohered state is schematically given by \(\rho\propto\sum_{z_{\mathbf{e}}}\mathcal{Z}_{\text{3d Ising},z_{\mathbf{e}}}| \Omega_{z_{\mathbf{e}}}\rangle\langle\Omega_{z_{\mathbf{e}}}|\), where \(|\Omega_{z_{\mathbf{e}}}\rangle\) are toric code eigenstates and \(\mathcal{Z}_{\text{3d Ising},z_{\mathbf{e}}}\) is the partition function of the 3d classical Ising model with interactions determined by \(\{z_{\mathbf{e}}\}\). The analog of the state \(|\psi(\beta)\rangle\) is \(|\psi(\beta)\rangle\propto\sum_{x_{\mathbf{e}}}[\mathcal{Z}_{\text{3d Ising},z_{ \mathbf{e}}}(\beta)]^{1/2}|z_{\mathbf{e}}\rangle\) and its topological transition is indicated by the non-analyticity of the 't Hooft operator similar to Eq.(3). The corresponding \(p_{c}\) for the separability transition is then determined by the transition out of the ferromagnetic phase in the 3d RBIM along the Nishimori line, which matches the optimal error-recovery threshold, \(p_{c}\approx 0.233\)[53; 54]. Finally, let us briefly consider the 3d X-cube model (Ref.[55]), where the Hilbert space consists of qubits residing on the edges(e) of a cubic lattice, and the Hamiltonian is \(H_{\text{X-cube}}=-\sum_{e}\prod_{e\in v_{x}}Z_{e}-\sum_{v}(\prod_{e\in v_{x}}X_ {e}+\prod_{e\in v_{y}}X_{e}+\prod_{e\in v_{x}}X_{e})=-\sum_{e}A_{c}-\sum_{v}(B _{v_{x}}+B_{v_{y}}+B_{v_{z}})\) where \(e\in v_{\gamma}\), \(\gamma=x,y,z\) denotes all the edges emanating from the vertex \(v\) that are normal to the \(\gamma\)-direction (see Fig.1(d)). It was shown in Ref.[22] that the ground state density matrix \(\rho_{0}\) of the X-cube model can be written as \(\rho_{0}\propto\langle x_{\mathbf{e}}=1|\rho_{C,0}|x_{\mathbf{e}}=1\rangle\) where \(\rho_{C,0}=\prod_{c}(I-h_{c})\prod_{e}(I-h_{e})\) (\(h_{c}=-X_{c}\prod_{e\in c}Z_{e}\) and \(h_{e}=-X_{e}\prod_{c\ni e}Z_{c}\) denotes the ground state density matrix of the parent cluster state, and \(|x_{\mathbf{e}}=1\rangle=\otimes_{c}|x_{c}=1\rangle\) is the product state in the Pauli-\(X\) basis. Note that the qubits in the parent cluster state live at the edges and the centers of the cubes so that \(h_{c}\) involves 13-qubit interactions, and \(h_{e}\) involves 5-qubit interactions (Fig.1(d)). Similar to the previous cases, the density matrix after subjecting \(\rho_{0}\) to the phase-flip channel (Kraus operators \(\sim Z_{e}\)) can be written as \(\rho\propto\sum_{x_{\mathbf{e}}}\mathcal{Z}_{\text{3d plaquette},x_{\mathbf{e}}}| \Omega_{x_{\mathbf{e}}}\rangle\langle\Omega_{x_{\mathbf{e}}}|\), where \(|\Omega_{x_{\mathbf{e}}}\rangle\propto\prod_{e}(I+\prod_{e\in c}Z_{e})|x_{ \mathbf{e}}\rangle\) and \(\mathcal{Z}_{\text{3d plaquette},x_{\mathbf{e}}}=\sum_{z_{\mathbf{e}}}e^{\beta \sum_{e}x_{\mathbf{e}}+\prod_{e\ni e}z_{e}z_{e}z_{e}}\) is the partition function of the 3d plaquette Ising model [56] with the sign of interaction on each plaquette determined by \(\{x_{e}\}\). 
The appearance of this model is not a coincidence since the statistical mechanics of the error-recovery transition involves precisely this model [57]. One again only needs to analyze the state \(|\psi\rangle=\rho^{1/2}|z_{\mathbf{e}}=1\rangle\) to study the separability transition for \(\rho\). Now there exists exponentially many topological sectors [55], and in the non-decohered ground state \(\rho_{0}\), the membrane operators defined as \(\prod_{e\parallel\hat{a}}X_{e}\) with \(a=x,y,z\) for _any_ plane have expectation value one. To detect the presence/absence of topological order in \(|\psi\rangle\), one therefore considers non-contractible Wilson loop operators \(W_{\ell}=\prod_{e\in\ell}Z_{e}\) that anti-commute with the membrane operators orthogonal to \(\ell\). The expectation value of any such Wilson loop takes a form similar to Eq.(3) where the partition function \(\mathcal{Z}_{x_{\mathbf{e}}}=\mathcal{Z}_{\text{3d plaquette},x_{\mathbf{e}}}\) and one is again along the Nishimori line. This again indicates that the pure state \(|\psi\rangle\) undergoes a transition at the error threshold \(p_{c}=p_{\text{3d plaquette}}\approx 0.152\)[57]. We note that for all the examples considered, one may define a more general class of wavefunctions \(|\psi^{(\alpha)}\rangle\propto\rho^{\alpha/2}|z_{\mathbf{e}}=1\rangle\). Interestingly, the wavefunction corresponding to \(\alpha=2\) has been studied in detail previously in Ref.[50], with \(\rho\) identical to our decohered density matrix for the 2d toric code. One finds that \(|\psi^{(2)}\rangle\) undergoes a phase transition from being topologically ordered to an SRE state at a temperature that precisely corresponds to the critical temperature of the 2d translationally invariant classical Ising model (with all couplings set to unity) [50]. Alternatively, one may locate the critical point from the non-analyticity of the wavefunction overlap of the non-normalized wavefunction \(|\psi(\beta)\rangle\). Specifically, one finds that \(\log\bigl{(}\langle\psi^{(2)}(\beta)|\psi^{(2)}(\beta)\rangle\bigr{)}\) is proportional to the free energy of the classical 2d Ising model, and therefore, is non-analytic precisely at the same point where the TEE jumps from \(\log(2)\) to zero. One may think of the wavefunction overlap as a kind of 'partition function' since it enters in the denominator for expectation value of various observables. This motivates a generalization of this overlap to general \(\alpha\) for any of the models considered above by defining \(F_{\alpha}(\beta)=\frac{1}{1-\alpha}\log\bigl{(}\langle\psi^{(\alpha)}(\beta)| \psi^{(\alpha)}(\beta)\rangle\bigr{)}\). Taking the limit \(\alpha\to 1\), which corresponds to the wavefunction of our main interest (Eq.(2) for the 2d toric code, and analogous states for the other two models), one finds that \(F_{1}(\beta)\) precisely corresponds to the free energy of the corresponding statistical mechanics model along the Nishimori line, which indeed shows a singularity at the optimal error-recovery threshold \(p_{c}\). This provides an alternative, albeit heuristic, approach to locate the phase transition. To summarize, we provided evidence that decoherence-induced separability transitions in several topological states coincide with the optimal threshold for QEC [13; 14; 15; 30; 57]. This implies that in these models, the inability to correct logical errors implies an ability to prepare the mixed state using an ensemble of short-depth unitary circuits. 
The convex decomposition we constructed captures the universal aspects of the phase diagram, as well as the threshold correctly, and it is 'canonical' or optimal in this sense (although it need not be unique). It will be interesting to consider applying our method to other states that can be obtained from cluster states by measurements [22; 34], including non-CSS codes, e.g., the double semion topological order. Relatedly, using cluster states to obtain decohered density matrices is seemingly an alternative method to generate statistical mechanics models for error-correcting codes and it will be interesting to probe its generality. It will also be worthwhile to numerically study the entanglement structure of the state in Eq.(2) (and its analogs for the other models) using Quantum Monte Carlo. A more ambitious task would be to construct an explicit short-depth circuit for this state when \(p>p_{c}\). ###### Acknowledgements. The authors thank Dan Arovas, Tim Hsieh and John McGreevy for helpful discussions. TG is supported by the National Science Foundation under Grant No. DMR-1752417. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
2309.05818
Rice Plant Disease Detection and Diagnosis using Deep Convolutional Neural Networks and Multispectral Imaging
Rice is considered a strategic crop in Egypt as it is regularly consumed in the Egyptian people's diet. Even though Egypt is the largest rice producer in Africa with a share of 6 million tons per year, it still imports rice to satisfy its local needs due to production loss, especially due to rice disease. Rice blast disease is responsible for 30% loss in rice production worldwide. Therefore, it is crucial to limit yield damage by detecting rice crop diseases in their early stages. This paper introduces a public multispectral and RGB images dataset and a deep learning pipeline for rice plant disease detection using multi-modal data. The collected multispectral images consist of Red, Green and Near-Infrared channels, and we show that using multispectral along with RGB channels as input achieves a higher F1 score compared to using RGB input only.
Yara Ali Alnaggar, Ahmad Sebaq, Karim Amer, ElSayed Naeem, Mohamed Elhelw
2023-09-11T20:51:21Z
http://arxiv.org/abs/2309.05818v1
Rice Plant Disease Detection and Diagnosis using Deep Convolutional Neural Networks and Multispectral Imaging ###### Abstract Rice is considered a strategic crop in Egypt as it is regularly consumed in the Egyptian people's diet. Even though Egypt is the largest rice producer in Africa with a share of 6 million tons per year [5], it still imports rice to satisfy its local needs due to production loss, especially due to rice disease. Rice blast disease is responsible for 30% loss in rice production worldwide [9]. Therefore, it is crucial to limit yield damage by detecting rice crop diseases in their early stages. This paper introduces a public multispectral and RGB images dataset and a deep learning pipeline for rice plant disease detection using multimodal data. The collected multispectral images consist of Red, Green and Near-Infrared channels, and we show that using multispectral along with RGB channels as input achieves a higher F1 score compared to using RGB input only. Keywords: Deep learning, Computer vision, Multispectral imagery. ## 1 Introduction Rice is important to the Egyptian agriculture sector, as Egypt is the largest rice producer in Africa. The total area used for rice cultivation in Egypt is about 600 thousand ha, or approximately 22% of all cultivated area in Egypt during the summer. As a result, it is critical to address the causes of rice production loss to minimize the gap between supply and consumption. Rice plant diseases contribute mostly to this loss, especially rice blast disease. According to [9], rice blast disease causes 30% of the total loss of rice production worldwide. Thus, detecting rice crop diseases, mainly rice blast disease, in their early stages can play a great role in restraining rice production loss. Early detection of rice crop diseases is a challenging task. One of the main challenges is that rice blast can be misclassified as brown spot disease by less experienced agriculture extension officers (as both are fungal diseases and have similar appearances in their early stage), which can lead to wrong treatment. Given the current scarcity of experienced extension officers in the country, there is a pressing need and opportunity for utilising recent technological advances in imaging modalities and computer vision/artificial intelligence to help in early diagnosis of the rice blast disease. Recently, multispectral photography has been deployed in agricultural tasks such as precision agriculture [3] and food safety evaluation [11]. Multispectral cameras can capture images in the Red, Red-Edge, Green and Near-Infrared wavebands, which capture what the naked eye cannot see. Integrating multispectral technology with deep learning approaches would improve crop disease identification capability; however, this requires collecting multispectral images in large numbers. In this paper, we propose a public multispectral and RGB images dataset and a deep learning pipeline for rice plant disease detection. First, the dataset we present contains 3815 pairs of multispectral and RGB images for rice crop blast, brown spot and healthy leaves. Second, we developed a deep learning pipeline trained on our dataset which calculates the Normalised Difference Vegetation Index (NDVI) channel from the multispectral image channels and concatenates it with the RGB image channels. We show that using NDVI+RGB as input achieves an F1 score higher by 1% compared to using RGB input only. 
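As a minimal illustration of this input construction, the following sketch (not taken from the released pipeline) computes the NDVI channel from a registered R-G-NIR image and stacks it with the RGB channels to form a 4-channel input; the channel ordering of the multispectral image (Red first, NIR last) and the small epsilon in the denominator are our assumptions.

```python
import numpy as np

def ndvi_from_rgnir(rgnir):
    """NDVI = (NIR - Red) / (NIR + Red), computed from an R-G-NIR image
    stored channels-last as a uint8 or float array."""
    red = rgnir[..., 0].astype(np.float32)
    nir = rgnir[..., 2].astype(np.float32)
    return (nir - red) / (nir + red + 1e-6)  # epsilon avoids division by zero

def stack_rgb_ndvi(rgb, rgnir):
    """Concatenate the NDVI channel to the RGB channels (4-channel input)."""
    ndvi = ndvi_from_rgnir(rgnir)[..., None]
    return np.concatenate([rgb.astype(np.float32), ndvi], axis=-1)
```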
## 2 Literature Review Deep learning has emerged to tackle problems in different tasks and fields. Nowadays, it is being adopted to solve the challenge of crop disease identification. For example, Mohanty et al. [8] trained a deep learning model to classify plant crop type and its disease based on images. Furthermore, [1] proposed a deep learning-based approach for banana leaf diseases classification. Multispectral sensors have also proven their capability as a new modality to detect crop field issues and diseases. Some approaches use multispectral images for disease detection and quantification. Cui et al. [4] developed an image processing-based method for quantitatively detecting soybean rust severity using multi-spectral images. Also, [12] utilize digital and multispectral images captured using quadrotor unmanned aerial vehicles (UAVs) to collect high-spatial-resolution imagery data to detect the ShB disease in rice. After the reliable and outstanding results deep learning models have achieved on RGB images, some approaches were developed to use deep learning on multispectral images, especially of crops and plants. [10] proposed a deep learning-based approach for weed detection in lettuce crops trained on multispectral images. In addition, Ampatzidis et al. [2] collect multispectral images of citrus fields using UAVs for crop phenotyping and deploy a deep learning detection model to identify trees. ## 3 Methodology ### Hardware Components We used a MAPIR Survey3N camera, shown in Figure 1, to collect our dataset. This camera model captures ground-level multispectral images of red, green and NIR channels. It was chosen in favour of its convenient cost and easy integration with smartphones. In addition, we used the Samsung Galaxy M51 mobile phone camera to capture RGB images, paired with the MAPIR camera. We designed a holder gadget to combine the mobile phone, MAPIR camera and a power bank in a single tool, as seen in Figure 2, to facilitate the data acquisition operation for the officers. It was designed using SolidWorks software and manufactured by a 3D printer. ### Data Collection Mobile Application An Android front-end application was also developed to enable the officers who collect the dataset to control the multispectral and the smartphone cameras for capturing dual RGBIR/RGB images simultaneously, while providing features such as image labelling, imaging session management, and geo-tagging. The mobile application is developed with Flutter and uses the Firebase real-time database to store and synchronise the captured data, including photos and metadata. Furthermore, the Hive local storage database is used within the application to maintain a local backup of the data. Figure 1: MAPIR Survey3N Camera. ### Analytics Engine Module Our engine is based on the ResNet18 [6] architecture, which consists of 18 layers and utilizes residual connections (see Figure 3); residual networks help us avoid the vanishing gradient problem. We can see how layers are configured in the ResNet-18 architecture. The architecture starts with a convolution layer with a 7x7 kernel size and a stride of 2. Next we begin with the skip connections. The input from here is added to the output obtained after a 3x3 max pool layer and two convolution layers with kernel size 3x3, 64 kernels each. This is the first residual block. The output of this residual block is added to the output of two convolution layers with kernel size 3x3 and 128 such filters. This constitutes the second residual block. 
Then the third residual block involves the output of the second block through a skip connection and the output of two convolution layers with filter size 3x3 and 256 such filters. The fourth and final residual block involves the output of the third block through a skip connection and the output of two convolution layers with the same filter size of 3x3 and 512 such filters. Finally, average pooling is applied on the output of the final residual block, and the resulting feature map is given to the fully connected layer, followed by a softmax function, to produce the final output. The vanishing gradient problem happens when training artificial neural networks that involve gradient-based learning and backpropagation. We use gradients to update the weights in a network. But sometimes the gradient becomes very small, effectively preventing the weights from being updated. This causes the network to stop training. To solve this problem, residual neural networks are used. Figure 2: Holder gadget. Residual neural networks are a type of neural network that applies an identity mapping: the input to some layer is passed directly, as a shortcut, to some later layer. If \(x\) is the input, in our case an image or a feature map, and \(F(x)\) is the output from the layer, then the output of the residual block is given as \(F(x)+x\), as shown in Figure 4. We changed the input shape to 256x256 instead of 224x224, and we replaced the last layer in the original architecture with a fully connected layer whose output size is three, to accommodate our task labels. Figure 4: Residual block Figure 3: ResNet18 original architecture ## 4 Experimental Evaluation ### Dataset We have collected 3815 samples of rice crops with three labels: blast disease, brown spot disease and healthy leaves, distributed, as shown in Figure 5, as follows: 2135, 1095 and 585, respectively. Each sample is composed of a pair of RGB and R-G-NIR images, as seen in Figure 6, which were captured simultaneously. Figure 7 shows samples of the three classes in our dataset. ### Training Configuration In this section, we explain our pipeline for training data preparation and preprocessing. We also describe our deep learning models' training configuration, including loss functions and hyperparameters. #### Data Preparation Figure 7: (a) Blast class sample. (b) Brown spot class sample. (c) Healthy class sample. RGB images registration: Since each image sample of our collected dataset consists of a pair of RGB and R-G-NIR images, the two images are expected to have a similar field of view. However, the phone and MAPIR cameras have different field-of-view parameters: the MAPIR camera has a 41° FOV compared to the phone camera's 123° FOV. As a result, we register the RGB image to the R-G-NIR image using the OpenCV library. The registration task starts by applying an ORB detector over the two images to extract 10K features. Next, we use a brute-force matcher with Hamming distance between the two images' extracted features. Based on the calculated distances for the matches, we sort them and drop the worst 10%. Finally, the homography matrix is calculated using the matched points in the two images and applied to the RGB image. Figure 8 shows an RGB image before and after registration. 
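A minimal OpenCV sketch of this registration step is given below; the ORB feature budget and the 10% rejection follow the text, while the grayscale conversion of both inputs and the RANSAC reprojection threshold are our assumptions rather than details reported here.

```python
import cv2
import numpy as np

def register_rgb_to_rgnir(rgb, rgnir, n_features=10000, keep_ratio=0.9):
    """Warp the phone RGB image onto the MAPIR R-G-NIR image using ORB
    features, brute-force Hamming matching and a homography."""
    gray_rgb = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    gray_rgnir = cv2.cvtColor(rgnir, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(gray_rgb, None)
    kp2, des2 = orb.detectAndCompute(gray_rgnir, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    matches = matches[: int(len(matches) * keep_ratio)]  # drop the worst 10%

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = rgnir.shape[:2]
    return cv2.warpPerspective(rgb, H, (w, h))
```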
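Similarly, here is a sketch of the backbone modification described in the Analytics Engine Module section, assuming a recent torchvision: the final fully connected layer is replaced with a 3-class head, and, since the text does not detail how the extra NDVI channel enters the network, the stem convolution is replaced so that it accepts a 4-channel input; that choice is our assumption.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def build_rice_classifier(in_channels=4, num_classes=3):
    """ResNet-18 adapted to a 4-channel (RGB+NDVI) 256x256 input and a
    3-class output (blast / brown spot / healthy)."""
    model = resnet18(weights=None)
    # Accept RGB+NDVI instead of RGB only (assumed handling of the 4th channel).
    model.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2,
                            padding=3, bias=False)
    # Replace the classification head with a 3-way fully connected layer.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_rice_classifier()
logits = model(torch.randn(2, 4, 256, 256))  # -> shape (2, 3)
```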
MAPIR camera calibration: The MAPIR camera sensor captures the reflected light, which lies in the visible and near-infrared spectrum at wavelengths from about 400–1100 nm, and saves the percentage of reflectance. After this step, calibration of each pixel is applied to ensure that it is correct. This calibration is performed before every round of image capture, using the MAPIR Camera Reflectance Calibration Ground Target board, which consists of 4 targets with known reflectance values, as shown in Figure 9. Models training configuration: We trained our models for 50 epochs with a batch size of 16 using the Adam optimizer and a cosine annealing with restarts scheduler [7] with a cycle length of 10 epochs and a learning rate of 0.05. For the loss function, we used a weighted cross entropy to mitigate the imbalance of the training dataset. Images were resized to dimension 256 x 256. Figure 8: On the left is an RGB image before registration and on the right is after registration. #### 4.2.2 Results For training the deep learning model using RGB and R-G-NIR pairs, we generate an NDVI channel, using Equation 1, and concatenate it to the RGB image. Our study shows that incorporating the NDVI channel improves the model's capability to classify rice crop diseases. Our model achieves an F1 score of 84.9% with 5-fold cross-validation when using RGB+NDVI as input, compared to an F1 score of 83.9% when using the RGB image only. Detailed results are presented in Table 1. \[NDVI=\frac{NIR-Red}{NIR+Red} \tag{1}\] \begin{table} \begin{tabular}{|l|l|l|} \hline Class & RGB & RGB+NDVI \\ \hline Blast & 89.64\% & 90.02\% \\ Spot & 82.64\% & 83.26\% \\ Healthy & 79.08\% & 81.54\% \\ \hline \end{tabular} \end{table} Table 1: F1 score over our collected dataset achieved by using RGB as input versus RGB+NDVI. Figure 9: MAPIR Camera Reflectance Calibration Ground Target board. ## 5 Conclusion We presented our public dataset and deep learning pipeline for rice plant disease detection. We showed that employing multispectral imagery with RGB improves the model's disease identification capability by 1% compared to using RGB imagery alone. We believe that training on a larger number of images would enhance the current results, and that using a deeper model together with more images would improve them further. In addition, more investigation into how to fuse multispectral imagery with RGB for training could be carried out; for example, computing NDVI from the blue channel instead of the red channel may also boost the model performance. Acknowledgements. The authors would like to acknowledge the support received from Data Science Africa (DSA) which made this work possible.
2306.17441
Efficient Backdoor Removal Through Natural Gradient Fine-tuning
The success of a deep neural network (DNN) heavily relies on the details of the training scheme; e.g., training data, architectures, hyper-parameters, etc. Recent backdoor attacks suggest that an adversary can take advantage of such training details and compromise the integrity of a DNN. Our studies show that a backdoor model is usually optimized to a bad local minima, i.e. sharper minima as compared to a benign model. Intuitively, a backdoor model can be purified by reoptimizing the model to a smoother minima through fine-tuning with a few clean validation data. However, fine-tuning all DNN parameters often requires huge computational costs and often results in sub-par clean test performance. To address this concern, we propose a novel backdoor purification technique, Natural Gradient Fine-tuning (NGF), which focuses on removing the backdoor by fine-tuning only one layer. Specifically, NGF utilizes a loss surface geometry-aware optimizer that can successfully overcome the challenge of reaching a smooth minima under a one-layer optimization scenario. To enhance the generalization performance of our proposed method, we introduce a clean data distribution-aware regularizer based on the knowledge of loss surface curvature matrix, i.e., Fisher Information Matrix. Extensive experiments show that the proposed method achieves state-of-the-art performance on a wide range of backdoor defense benchmarks: four different datasets- CIFAR10, GTSRB, Tiny-ImageNet, and ImageNet; 13 recent backdoor attacks, e.g. Blend, Dynamic, WaNet, ISSBA, etc.
Nazmul Karim, Abdullah Al Arafat, Umar Khalid, Zhishan Guo, Naznin Rahnavard
2023-06-30T07:25:38Z
http://arxiv.org/abs/2306.17441v1
# Efficient Backdoor Removal Through Natural Gradient Fine-tuning ###### Abstract The success of a deep neural network (DNN) heavily relies on the details of the training scheme; _e.g._, training data, architectures, hyper-parameters, _etc._ Recent backdoor attacks suggest that an adversary can take advantage of such training details and compromise the integrity of a DNN. Our studies show that a backdoor model is usually optimized to a _bad local minima_, _i.e._, sharper minima as compared to a benign model. Intuitively, a backdoor model can be purified by re-optimizing the model to a smoother minima through fine-tuning with a few clean validation data. However, fine-tuning all DNN parameters often requires huge computational cost and often results in sub-par clean test performance. To address this concern, we propose a novel backdoor purification technique--Natural Gradient Fine-tuning (NGF)--which focuses on removing backdoor by fine-tuning _only one layer_. Specifically, NGF utilizes a loss surface geometry-aware optimizer that can successfully overcome the challenge of reaching a smooth minima under a one-layer optimization scenario. To enhance the generalization performance of our proposed method, we introduce a clean data distribution-aware regularizer based on the knowledge of loss surface curvature matrix, _i.e._, _Fisher Information Matrix_. Extensive experiments show that the proposed method achieves state-of-the-art performance on a wide range of backdoor defense benchmarks: _four different datasets--CIFAR10, GTSRB, Tiny-ImageNet, and ImageNet_; 13 recent backdoor attacks, _e.g._, Blend, Dynamic, WaNet, ISSBA, _etc._ Code is available at anonymous GitHub link 1. Footnote 1: [https://github.com/narmul-kariml70/Natural-Gradient-Finetuning-Trojan-Defense](https://github.com/narmul-kariml70/Natural-Gradient-Finetuning-Trojan-Defense) ## I Introduction Training a deep neural network (DNN) with a fraction of poisoned or malicious data is often security-critical since the model can successfully learn both clean and adversarial tasks equally well. This is prominent in scenarios where one outsources the DNN training to a vendor. In such scenarios, an adversary can mount backdoor attacks [3, 4] through poisoning a portion of training samples so that the model will misclassify any sample with a _particular trigger_ or _pattern_ to an adversary-set label. Whenever a DNN is trained in such a manner, it becomes crucial to remove the effect of backdoor before deploying it for a real-world application. Different defense techniques [5, 6, 7, 8, 9] have been proposed for purifying backdoor. Techniques such as fine-pruning [5] and adversarial neural pruning [7] require a long training time due to iterative searching criteria. Furthermore, the purification performance deteriorates significantly as the attacks get stronger. In this work, we explore the backdoor insertion and removal phenomena from the DNN optimization point of view. Unlike a benign model, a backdoor model is forced to learn two different data distributions: clean data distribution and poisoned/trigger data distribution. Having to learn both distributions, backdoor model optimization usually leads to a _bad local minima_ or sharper minima _w.r.t._ clean distribution. We claim that backdoor can be removed by re-optimizing the model to a smoother minima. One easy re-optimization scheme could be simple DNN weights fine-tuning with a few clean validation samples. 
However, fine-tuning all DNN parameters often requires huge computational cost and may result in sub-par clean test performance after purification. Therefore, we intend to _fine-tune only one layer_ to effectively remove the backdoor. Fine-tuning only one layer creates a shallow network scenario where SGD-based optimization becomes challenging. [10] claims that the probability of finding bad local minima or poor-quality solutions increases as the network size decreases. Even though there are good-quality solutions, it usually requires an exponentially long time to find those minima [10]. As a remedy to this, we opt to use a curvature-aware optimizer, Natural Gradient Descent (NGD), that has a _higher probability of escaping the bad local minima as well as a faster convergence rate_, specifically in the shallow network scenario [11, 12]. To this end, we propose a novel backdoor purification technique--Natural Gradient Fine-tuning (NGF)--which focuses on removing the backdoor by fine-tuning _only one layer_. However, straightforward application of NGF with the simple cross-entropy (CE) loss may result in poor clean test performance. To boost this performance, we use a clean distribution-aware regularizer that prioritizes the update of parameters sensitive to the clean data distribution. Our proposed method achieves SOTA performance in a wide range of benchmarks, _e.g._, four different datasets including _ImageNet_, 13 recent backdoor attacks, _etc._ Our contributions can be summarized as follows: * We analyze the loss surface characteristics of a DNN during backdoor insertion and purification processes. Our analysis shows that the optimization of a backdoor model leads to a _bad local minima_ or sharper minima compared to a benign model. We argue that the backdoor can be purified by re-optimizing the model to a smoother minima and that simple fine-tuning can be a viable way to do so. To the best of our knowledge, this is the first work that studies the correlation between loss-surface smoothness and backdoor purification. * We conduct additional studies on the backdoor purification process while fine-tuning different parts of a DNN. We observe that SGD-based one-layer fine-tuning fails to escape bad local minima and that a loss surface geometry-aware optimizer can be an easy fix for this. * We propose a novel backdoor purification technique based on Natural Gradient Fine-tuning (NGF). In addition, we employ a clean distribution-aware regularizer to boost the clean test performance of our proposed method. NGF outperforms recent SOTA methods in a wide range of benchmarks. ## II Related Work This section discusses related work on backdoor attack methods and on defenses against backdoor attacks. **Backdoor Attacks.** Backdoor attacks in deep learning models aim to manipulate the model to predict adversary-defined target labels in the presence of backdoor triggers in the input, while the model predicts true labels for benign input [13]. Manoj _et al._[14] formally analyzed DNNs and revealed their intrinsic capability to learn backdoors. Backdoor triggers can exist in the form of dynamic patterns [15], a single pixel [16], sinusoidal strips [17], human-imperceptible noise [18], natural reflection [19], adversarial patterns [20], blending backgrounds [4], hidden triggers [21], _etc_. Based on target labels, existing backdoor attacks can generally be classified as poison-label or clean-label backdoor attacks. 
In poison-label backdoor attack, the target label of the poisoned sample is different from its ground-truth label, _e.g._, BadNets [3], Blended attack [4], SIG attack [17], WaNet [22], Trojan attack [1], and BPPA [23]. Contrary to the poison-label attack, clean-label backdoor attack doesn't change the label of the poisoned sample [24, 25, 26]. Recently, [27] studied backdoor attacks on self-supervised learning. All these attacks emphasized the severity of backdoor attacks and the necessity of efficient removal/purification methods. **Backdoor Defenses.** Existing backdoor defense methods can be categorized into backdoor detection or purifying techniques. Detection based defenses include trigger synthesis approach [6, 28, 29, 30, 31, 32, 33, 34], or malicious samples filtering based techniques [16, 35, 36]. However, these methods only detect the existence of backdoor without removing it. Backdoor purification defenses can be further classified as training time defenses and inference time defenses. Training time defenses include model reconstruction approach [37, 38], poison suppression approach [39, 40, 41], and pre-processing approaches [8, 42]. Although training time defenses are often successful, they suffer from huge computational burden and less practical considering attacks during DNN outsourcing. Inference time defenses are mostly based on pruning approaches such as [16, 43, 44, 45, 46]. Pruning-based approaches are typically based on model vulnerabilities to backdoor attacks. For example, MCR [37] and CLP [9] analyzed node connectivity and channel Lipschitz constant to detect backdoor vulnerable neurons. ANP [7] prune neurons through backdoor sensitivity analysis using adversarial search on the parameter space. Instead, we propose a simple one-layer fine-tuning based defense that is both fast and highly effective. To remove backdoor, our proposed method revisits the DNN fine-tuning paradigm from a novel point of view--the relation between backdoor training and loss surface geometry (please refer to Sec. V for details)--allowing us to fine-tune only one-layer. ## III Threat Model In this section, we present the backdoor attack model and defense goal from a backdoor attack. **Attack Model.** We consider an adversary with the capabilities of carrying a backdoor attack on a DNN model, \(f_{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{c}\), by training it on a poisoned data set \(\mathbb{D}_{\text{train}}=\{X_{\text{train}},Y_{\text{train}}\}\). Here, \(\theta\) is the parameters of the model, \(d\) is the input data dimension, and \(c\) is the total number of classes. The data poisoning happens through a specific set of triggers that can only be accessed by the attacker. Each input \(x\in X_{\text{train}}\) is labeled as \(y\in Y_{\text{train}}\), where \(y\in[1,c]\) is an integer. The adversary goal is to train the model in a way such that any triggered samples \(\hat{x}=x+\delta\in\mathbb{R}^{d}\) will be wrongly misclassified to a target label \(\bar{y}\), _i.e._, \(\arg\max(f_{\theta}(\hat{x}))=\bar{y}\). Here, \(x\) is a clean test sample, and \(\delta\in\mathbb{R}^{d}\) represents the trigger pattern with the properties of \(||\delta||\leq\epsilon\); where \(\epsilon\) is the Fig. 1: Eigen Spectral Density plots of Loss Hessian for (a) benign, (b) backdoor (TrojanNet [1]), and (c & d) purified models. 
In each plot, the maximum eigenvalue (\(\lambda_{\text{max}}\)), the trace of Hessian (\(\mathsf{Tr}(H)\)), clean test accuracy (ACC), and attack success rate (ASR) are also reported. Here, low \(\lambda_{\text{max}}\) and \(\mathsf{Tr}(H)\) hints at the presence of smoother loss surface which often results in low ASR and high ACC. (a & b). Compared to a benign model, a backdoor model tends to reach a sharper minima as shown by the larger range of eigenvalues (x-axis). During purification, SGD optimizer (c) rarely escapes sharp or bad local minima (similar \(\lambda_{\text{max}}\) and \(\mathsf{Tr}(H)\) as the backdoor model) while our proposed method, NGF, (d) converges to a smooth minima. We use CIFAR10 dataset with a PreActResNet18 [2] architecture for all evaluations. trigger magnitude determined by its shape, size, and color. We define the _poison rate_ as the ratio of poison and clean data in \(\mathbb{D}_{\text{train}}\). An attack is considered successful if the model behaves as \(\arg\max\left(f_{\theta}(x)\right)=y\) and \(\arg\max\left(f_{\theta}(\hat{x})\right)=\bar{y}\), where \(y\) is the true label for \(x\). We use attack success rate (ASR) for quantifying such success. **Defense Goal.** We consider a defender with a task to purify the backdoor model \(f_{\theta}\) using a small clean validation set (usually \(1\sim 10\%\) of the training data). The goal is to repair the model in a way such that it becomes immune to attack, _i.e._, \(\arg\max\left(f_{\theta_{p}}(\hat{x})\right)=y\), where \(f_{\theta_{p}}\) is the final purified model. ## IV Overview of Natural Gradient Descent (NGD) This section will briefly discuss the natural gradient descent (NDG) and fisher-information matrix (FIM) and their relation with loss surface. Let us consider a model \(p(y|x,\theta)\) with parameters \(\theta\in\mathbb{R}^{N}\) to be fitted with input data \(\{(x_{i},y_{i})\}_{i=1}^{[\text{max}]}\) from an empirical data distribution \(P_{x,y}\), where \(x_{i}\in X_{\text{train}}\) is an input sample and \(y_{i}\in Y_{\text{train}}\) is its label. We try to optimize the model by solving: \[\theta^{*}\in\underset{\theta}{\arg\min}~{}~{}\mathcal{L}(\theta), \tag{1}\] where \(\mathcal{L}(\theta)=\mathcal{L}(y,f_{\theta}(x))=\mathbb{E}_{(x_{i},y_{i}) \sim P_{x,y}}[-\text{log}~{}p(y|x,\theta)]\) is the expected full-batch cross-entropy (CE) loss. Note that \(p(y|x,\theta)\) is the \(y^{th}\) element of \(f_{\theta}(x)\). SGD optimizes for \(\theta^{*}\) iteratively following the direction of the steepest descent (estimated by column vector, \(\nabla_{\theta}\mathcal{L}\)) and updates the model parameters by: \(\theta^{(t+1)}\leftarrow\theta^{(t)}-\alpha^{(t)}\cdot\nabla_{\theta}^{(t)} \mathcal{L}\), where \(\alpha\) is the learning rate. Since SGD uses the Identity matrix as the pre-conditioner, it is _uninformed of the geometry of loss surface_. In NGD, however, the Fisher Information Matrix (FIM) is used as a pre-conditioner, which can be defined as [12], \[F(\theta)=\underset{(x,y)\sim P_{x,y}}{\mathbb{E}}[\nabla_{\theta}~{}\text{ log}~{}p(y|x,\theta)\cdot(\nabla_{\theta}~{}\text{log}~{}p(y|x,\theta))^{T}] \tag{2}\] As FIM (\(F(\theta)~{}\in\mathbb{R}^{N\times N}\)) is a _loss surface curvature matrix_, a careful integration of it in the update rule of \(\theta\) will make the optimizer loss surface geometry aware. 
Such integration leads us to the update equation of NGD, \[\theta^{(t+1)}\leftarrow\theta^{(t)}-\alpha^{(t)}\cdot F(\theta^{(t)})^{-1} \nabla_{\theta}^{(t)}\mathcal{L},\] where \(\theta^{(t)}\) denotes the parameters at \(t^{th}\) iteration. Here, the natural gradient is defined as \(F(\theta^{(t)})^{-1}\nabla_{\theta}^{(t)}\mathcal{L}\). From the perspective of information geometry, natural gradient defines the _direction in parameter space_ which gives largest change in objective **per unit of change in model (\(p(y|x,\theta)\))**. Per unit of change in model is measured by KL-divergence [11, 47]. Note that KL-divergence is well connected with FIM as it can be used as a local quadrature approximation of KL-divergence of _model change_. Eqn. 2 suggests that one requires the knowledge of the original parameter (\(\theta\)) space to estimate it. Therefore, FIM can be thought of as a mechanism to translate between the geometry of the model (\(p(y|x,\theta)\)) and the current parameters (\(\theta\)) of the model. The way natural gradient defined the _direction in parameter space_ is contrastive to the stochastic gradient. Stochastic gradient defines the direction in parameter space for largest change in objective **per unit of change in parameter (\(\theta\))** measured by Euclidean distance. That is, the gradient direction is solely calculated based on the changes of parameters, without any knowledge of model geometry. ## V Smoothness Analysis of Backdoor Models In this section, we analyze the loss surface geometry of benign, backdoor, and purified models. To study the loss curvature properties of different models, we aim to analyze the Hessian of loss, \(H=\nabla_{\theta}^{2}\mathcal{L}\), where we compute \(\mathcal{L}\) using the _clean training set_. The Hessian matrix \(H\) is symmetric and one can take the spectral decomposition \(H=Q\Lambda Q^{T}\), where \(\Lambda=\text{diag}(\lambda_{1},\lambda_{2},\ldots,\lambda_{N})\) contains the eigenvalues and \(Q=[q_{1}q_{2}\ldots q_{N}]\) are the eigenvectors of \(H\). As a measure for smoothness, we take the maximum eigenvalue, \(\lambda_{\text{max}}(=\lambda_{1})\), and the trace of the Hessian, \(\text{Tr}(H)=\sum_{i=1}^{i=N}\text{diag}(H)_{i}\). Low values for these two proxies indicate the presence of highly smooth loss surface [48]. The Eigen Spectral density plots in Fig. 0(a) and 0(b) tell us about the optimization of benign and backdoor models. To create these models, we use the CIFAR10 dataset and train a PreActResNet18 architecture for 200 epochs. To insert the backdoor, we use TrojanNet [1] and a poison rate of 10%. From the comparison of \(\lambda_{\text{max}}\) and \(\text{Tr}(H)\), we can conjecture that optimization of a benign model produces smoother loss surface. We observe similar phenomena for different datasets and architectures; details are in the supplementary material. The main difference between a benign and a backdoor model is that the latter needs to learn two different data distributions: clean and poison. Based on our observations, we state following conjectures: **Conjecture 1.** Having to learn two different data distributions, a backdoor model reaches a sharper minima, _i.e._, large \(\lambda_{\text{max}}\) and \(\text{Tr}(H)\), as compared to the benign model. We support this conjecture with empirical evidence presented in Table I. 
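For networks of this size the Hessian is never formed explicitly; a smoothness proxy such as \(\lambda_{\text{max}}\) is typically estimated matrix-free. The sketch below (an illustrative helper, not the authors' code) estimates \(\lambda_{\text{max}}\) by power iteration on Hessian-vector products computed with automatic differentiation, using a single batch of clean data.

```python
import torch

def top_hessian_eigenvalue(model, loss_fn, data, target, iters=50):
    """Estimate lambda_max of the loss Hessian w.r.t. all trainable
    parameters via power iteration on Hessian-vector products."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(data), target)
    grads = torch.autograd.grad(loss, params, create_graph=True)

    # Random unit start vector, stored as per-parameter blocks.
    v = [torch.randn_like(p) for p in params]
    norm = torch.sqrt(sum((u * u).sum() for u in v))
    v = [u / norm for u in v]

    eigenvalue = 0.0
    for _ in range(iters):
        # Hessian-vector product: differentiate (grad . v) once more.
        gv = sum((g * u).sum() for g, u in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        eigenvalue = sum((h * u).sum() for h, u in zip(hv, v)).item()  # Rayleigh quotient
        norm = torch.sqrt(sum((h * h).sum() for h in hv))
        v = [h / norm for h in hv]
    return eigenvalue
```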
Looking at the \(\lambda_{\text{max}}\) in the 'Initial' row for all 6 attacks (details are in the supplementary material), it can be observed that all of these backdoor models optimizes to a sharp minima. As these models are optimized on both distributions, they also have high attack success rates (ASR) as well as high clean test accuracy (ACC). Note that, the measure of smoothness is done _w.r.t._ clean data distribution. The use of clean distribution in our smoothness analysis is driven from the practical consideration as our particular interest lies with the performance _w.r.t._ clean distribution; more details are in _the supplementary material_. Since high ASR and ACC indicate that the model had learned both distributions, it supports Conjecture 1. **Conjecture 2.** Through _proper_ fine-tuning with clean validation data, a backdoor model can be re-optimized to a smoother minima _w.r.t._ clean data distribution. Optimization to a smoother minima leads to backdoor purification, _i.e._, low ASR and high ACC. By _proper fine-tuning_, we imply that the fine-tuning will lead to an optimal solution _w.r.t._ the data distribution we fine-tune the model with. To support Conjecture 2, we show the removal performances of fine-tuning based purification methods in Table I. To remove backdoor using a clean validation set (\(\sim\)1% of train-set), we fine-tune different parts of the DNN for 100 epochs with a learning rate of 0.01. As shown in Table I, after proper fine-tuning (Full-Net, CNN-Bbone), the backdoor model re-optimizes to a smoother minima that leads to successful backdoor removal. **One-Layer Fine-tuning.** We observe that one can remove the backdoor by fine-tuning either the full network or only the CNN backbone (using SGD). However, these methods can be computationally costly and less practical. Furthermore, such fine-tuning often leads to high drop in ACC. As an alternative, one could fine-tune only the last or classification (Cls.) layer. However, even with a small validation set, a one-layer network becomes a shallow network to optimize. According to the spin-glass analogy in [10], as the network size decreases the probability for the SGD optimizer to find _sharp local minima or poor quality minima_ increases accordingly. In case of shallow network, the quality of minima is decided by their distances from the global minima. [10] also observes that the process of finding a path from bad local minima to a good quality solution or global minima takes _exponentially long time_. Therefore, it is not always feasible to use the SGD optimizer for shallow network. Table I (row-Cls. (SGD)) corroborates this hypothesis as SGD optimizer fails to escape the sharp minima resulting in similar ASRs as the initial backdoor model. Instead of using SGD, one can use natural gradient descent (NGD) that has _higher probability of escaping the bad local minima as well as faster convergence rate_, specifically in the shallow network scenario [11, 12]. Therefore, to effectively purify a backdoor model, we propose a novel Fisher Information matrix based backdoor purification objective function and optimize it using the NGD optimizer. ## VI Natural Gradient Fine-tuning (NGF) This section presents our proposed backdoor purification method--Natural Gradient Fine-tuning (NGF). Recall that the backdoor model under consideration is \(f_{\theta}(.)\), where \(\theta\) is the model parameter. 
Let us decompose \(\theta\) as, \[\theta=\{\mathbf{W}_{0,1},\mathbf{W}_{1,2},\mathbf{W}_{2,3},\cdots,\mathbf{W} _{L-1,L}\},\] where \(\mathbf{W}_{i,i+1}\) is the parameters between layer \(i\) and layer \(i+1\), commonly termed as \((i+1)^{th}\) layer's parameters. \(\mathbf{W}_{L-1,L}\) is the \(L^{th}\) layer's (Cls. layer) parameters, and we are particularly interested in fine-tuning only this layer. Now, consider a validation set, \(\mathbb{D}_{\text{val}}=\{X_{\text{val}},Y_{\text{val}}\}\) that contains only clean samples. We denote \(\theta_{L}=\mathbf{W}_{L-1,L}\) as the \(L^{th}\) layer's parameters2 and \(\theta_{L,i}\) is the \(i^{th}\) element of \(\theta_{L}\). To purify the backdoor model, we formulate the following loss Footnote 2: Notice that \(\theta_{L}\) is a **vector** flattening the \(L^{th}\) layer’s parameter. \[\mathcal{L}_{p}(y,f_{\theta}(x))=\mathcal{L}(y,f_{\theta}(x))+\frac{\eta}{2} \sum_{\forall i}\mathsf{diag}(F(\bar{\theta}_{L}))_{i}\cdot(\theta_{L,i}- \bar{\theta}_{L,i})^{2}, \tag{3}\] which is a combination of the CE loss on the validation set and a regularizer. Here, \(\bar{\theta}_{L}\) is \(L^{th}\) layer parameters of the initial backdoor model, _i.e._, \(\theta_{L}^{(0)}=\bar{\theta}_{L}\) and remains fixed throughout the purification phase. In a backdoor model, some neurons/parameters are more vulnerable than others. The vulnerable parameters are believed to be the ones that are sensitive to poison/trigger data distribution [7]. In general, CE loss does not discriminate whether a parameter is more sensitive to clean or poison distribution. Such lack of discrimination may allow drastic/unwanted changes to the parameters responsible for learned clean distribution. This usually leads to sub-par clean test accuracy after purification and it requires additional measures to fix this issue. Motivated by [49], we introduce a _clean distribution aware regularization_ term as a product of two terms: i) an error term that accounts for the deviation of \(\theta_{L}\) from \(\bar{\theta}_{L}\); ii) a vector, \(\mathsf{diag}(F(\bar{\theta}_{L}))\), consisting of the diagonal elements of FIM (\(F(\bar{\theta}_{L})\)). As the first term controls the changes of parameters _w.r.t._\(\bar{\theta}_{L}\), it helps the model to remember the already learned distribution. However, learned data distribution consists of both clean and poison distribution. To explicitly force the model to remember the _clean distribution_, we compute \(F(\bar{\theta}_{L})\) using a _clean_ validation set; with similar distribution as the learned clean data. Note that, \(\mathsf{diag}(F(\bar{\theta}_{L}))_{i}\) represents the square of the derivative of log-likelihood of clean distribution _w.r.t._\(\bar{\theta}_{L,i}\), \([\nabla_{\bar{\theta}_{L}},\text{log }p(y|x,\theta)]^{2}\) (ref. eqn. (6)). In other words, \(\mathsf{diag}(F(\bar{\theta}_{L}))_{i}\) is the measure of importance of \(\bar{\theta}_{L,i}\) towards remembering the learned clean distribution. If \(\mathsf{diag}(F(\bar{\theta}_{L}))_{i}\) has a higher importance, we allow minimal changes to \(\bar{\theta}_{L,i}\) over the purification process. This careful design of such a regularizer improves the clean test performance significantly. We use \(\eta\) as a regularization constant. 
The overall optimization problem using the loss-function defined in (3) for purifying the backdoor model \(f_{\theta}\) is as follows: \begin{table} \begin{tabular}{c|c c c c|c c c c|c c c|c c c c} \hline \hline FT & \multicolumn{3}{c}{Badnets} & \multicolumn{3}{c}{Blend} & \multicolumn{3}{c|}{\(\lambda_{\text{max}}\)} & \multicolumn{3}{c|}{\(\lambda_{\text{max}}\)} & \multicolumn{3}{c|}{T(H)} & ASR & ACC & \(\lambda_{\text{max}}\) & \multicolumn{3}{c|}{T(H)} & ASR & ACC & \(\lambda_{\text{max}}\) & \multicolumn{3}{c|}{T(H)} & ASR & ACC \\ \hline Initial & 57.38 & 662.58 & 100 & 92.96 & 715.5 & 759.83 & 100 & 94.11 & 616.3 & 8046.4 & 100 & 89.57 & 564.2 & 70.85 & 100 & 92.52 \\ Full-Net. & 44.2 & 25.36 & 4.87 & 85.92 & 4.65 & 27.83 & 4.77 & 87.61 & 34.1 & 26.15 & 3.78 & 82.18 & 2.34 & 158.2 & 4.73 & 88.61 \\ CNN-Bbone. & 4.71 & 28.08 & 5.03 & 85.64 & 5.14 & 31.16 & 4.92 & 87.24 & 4.19 & 29.67 & 3.95 & 81.86 & 2.46 & 16.08 & 5.11 & 87.54 \\ Cls. (SGD) & 556.1 & 6726.3 & 98.27 & **90.17** & 541.7 & 5872.5 & 97.29 & **93.48** & 613.0 & 6829.7 & 96.25 & **87.36** & 446.5 & 5176.6 & 93.58 & **91.36** \\ \hline Cls. (NGF) & **2.79** & **16.94** & **1.86** & 88.32 & **2.43** & **16.18** & **0.38** & 91.17 & **2.74** & **17.32** & **2.64** & 84.21 & **1.19** & **8.36** & **1.17** & 90.97 \\ \hline \hline \end{tabular} \end{table} TABLE I: Backdoor removal performance when we fine-tune only the classifier (Cls.), only the CNN backbone (CNN-Bbone), or the full network (Full-Net). Fine-tuning only the last layer creates a shallow network scenario. In such a scenario, there is a high probability that SGD does not escape bad local minima. Whereas, NGF consistently optimizes to a smooth minima (indicated by low \(\lambda_{\text{max}}\) for 6 different attacks), resulting in backdoor removal, _i.e._, low ASR and high ACC. We consider CIFAR10 dataset and PreActResNet18 architecture for all evaluations. A clean validation set is used for all purification. Objective function: \[\theta_{p}:=\operatorname*{arg\,min}_{\theta_{L}}\mathcal{L}_{p}(y,f_{\theta}(x)); \;\;x\in X_{val},\;y\in Y_{val} \tag{4}\] Update Policy: \[\theta_{L}^{(t+1)}\leftarrow\theta_{L}^{(t)}-\alpha F(\theta_{L}^{(t)})^{-1} \nabla_{\theta_{L}}\mathcal{L}_{p} \tag{5}\] Where, \[F(\theta_{L})=\frac{1}{n}\sum_{j=1}^{n}\big{(}\nabla_{\theta_{L}}\text{log}\;p( y_{j}|x_{j},\theta)\cdot(\nabla_{\theta_{L}}\text{log}\;p(y_{j}|x_{j},\theta))^{T} \big{)} \tag{6}\] Here, \(F\in\mathbb{R}^{|\theta_{L}|\times|\theta_{L}|}\) is the FIM, and \(n\) is the validation set size. Notice that, as we only consider fine-tuning of \(L^{th}\)-layer, the computation of \(F\) and \(F^{-1}\) (\(|\theta_{L}|\times|\theta_{L}|\) matrices) becomes tractable. After solving the above optimization problem, we will get modified parameters, \(\overline{\mathbf{W}}_{L-1,L}\). Finally, we get the purified model, \(f_{\theta_{p}}\) with \(\theta_{p}\) as \[\theta_{p}=\{\mathbf{W}_{0,1},\mathbf{W}_{1,2},\mathbf{W}_{2,3},\cdots, \overline{\mathbf{W}}_{L-1,L}\}\] Fig. 0(c) and 0(d) show that NGF indeed does reach the smooth minima as opposed to SGD based fine-tuning. We provide additional results in Table I for both NGF and SGD. Notice that the purified model seems to have a smoother loss surface than the benign model (2.7 vs. 20.1 for \(\lambda_{\text{max}}\)). This, however, does not translate to better ACC than the benign model. The ACC of the purified model is always bounded by the ACC of the backdoor model. 
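To make Eqs. (3)-(6) concrete, the following is a compact PyTorch sketch of one NGF update restricted to the weight matrix of the classification head (assumed to be exposed as `model.fc`); the damping added before inverting the FIM and the omission of the bias term are simplifications on our part, not part of the formulation above.

```python
import torch
import torch.nn.functional as F

def last_layer_fisher(model, x, y):
    """Empirical FIM of the classification-layer weights (Eq. 6),
    estimated from a batch of clean validation samples."""
    w = model.fc.weight
    fim = torch.zeros(w.numel(), w.numel(), device=w.device)
    for xi, yi in zip(x, y):
        logp = F.log_softmax(model(xi.unsqueeze(0)), dim=1)[0, yi]
        g = torch.autograd.grad(logp, w)[0].reshape(-1)
        fim += torch.outer(g, g)
    return fim / x.shape[0]

def ngf_step(model, x, y, theta_bar, fisher_bar_diag,
             lr=0.01, eta=0.1, damping=1e-3):
    """One natural-gradient fine-tuning step on the last layer only
    (Eqs. 3 and 5). `theta_bar` and `fisher_bar_diag` are flattened
    quantities computed once from the initial backdoor model."""
    w = model.fc.weight
    ce = F.cross_entropy(model(x), y)
    reg = 0.5 * eta * (fisher_bar_diag * (w.reshape(-1) - theta_bar) ** 2).sum()
    grad = torch.autograd.grad(ce + reg, w)[0].reshape(-1)

    fim = last_layer_fisher(model, x, y)
    fim += damping * torch.eye(fim.shape[0], device=fim.device)  # numerical stability
    nat_grad = torch.linalg.solve(fim, grad)

    with torch.no_grad():
        w -= lr * nat_grad.view_as(w)

# Typical setup before purification (illustrative names):
#   theta_bar = model.fc.weight.detach().clone().reshape(-1)
#   fisher_bar_diag = last_layer_fisher(model, x_val, y_val).diagonal().detach()
```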
To the best of our knowledge, our study on the correlation between loss-surface smoothness and backdoor purification is novel. NGF is also the first method to employ a second-order optimizer for purifying backdoor. _More details are in the supplementary material_. The manner in which we perform natural gradient fine-tuning is described in Algorithm 1. After purification, the model should behave like a benign/clean model producing the same prediction irrespective of the presence of the trigger. ``` Input: Backdoor Model (\(f_{\theta}\),), 1% Clean Validation Set \(\mathbb{D}_{val}\), Number of Purification Epochs \(\mathcal{N}\) Initialize all mask values in \(M_{0}\) as 1 \(\mathcal{X},\mathcal{Y}\gets\mathcal{D}_{val}\) \(F(\theta_{L})\leftarrow\) \(\frac{1}{|\mathbb{D}_{val}|}\sum_{\mathcal{E}\in\mathcal{X},\mathcal{Y}\in \mathcal{Y}}\bigg{[}\nabla_{\theta_{L}}\text{log}\;p(y|x,\theta)\cdot\Big{(} \nabla_{\theta_{L}}\text{log}\;p(y|x,\theta)\Big{)}^{T}\bigg{]}\) // \(\theta_{L}\) is the last layer's parameter of the initial backdoor model. for\(i=1\) to \(\mathbf{W}\)do \(\mathcal{L}=\mathcal{L}_{CE}(\mathcal{Y};f_{\theta(i)}(\mathcal{X}))+\frac{ \eta}{2}\sum_{j}(\text{diag}(F(\theta_{L})))_{j}\cdot(\theta_{L,j}^{(i)}- \theta_{L,j})^{2}\) // the superscript \(i\) in \(\theta^{(i)}\) denotes the parameter of \(i^{th}\) \(F\leftarrow\) \(\frac{1}{|\mathbb{D}_{val}|}\sum_{\mathcal{E}\in\mathcal{X},\mathcal{Y}\in \mathcal{Y}}\bigg{[}\nabla_{\theta_{L}^{(i)}}\text{log}\;p(y|x,\theta)\cdot \Big{(}\nabla_{\theta_{L}^{(i)}}\text{log}\;p(y|x,\theta)\Big{)}^{T}\bigg{]}\) // \(\theta_{L}^{(i)}\) is the last layer's parameter at \(i^{th}\) iterations \(\theta_{L}^{(i+1)}\leftarrow\theta_{L}^{(i)}-\alpha\cdot F^{-1}\nabla_{\theta_{ L}^{(i)}}(\mathcal{L})\) // \(\alpha\) is the learning rate \(\theta_{L}^{(i+1)}\leftarrow\{\mathbf{W}_{0,1},\mathbf{W}_{1,2},\cdots, \mathbf{W}_{L-2,L-1},\theta_{L}^{(i+1)}\}\) // \(\mathbf{W}_{i,i+1}\)'s are frozen parameters \(\theta_{p}=\{\mathbf{W}_{0,1},\mathbf{W}_{1,2},\cdots,\mathbf{W}_{L-2,L-1}, \theta_{L}^{(i)}\}\) // \(\theta_{p}\) is the purtified model's parameter Output: Purified Model, \(f_{\theta_{p}}\) ``` **Algorithm 1**Natural Gradient Fine-tuning (NGF) ## VII Experimental Results ### _Evaluation Settings_ **Datasets:** To begin with, we evaluate our proposed method through conducting a wide range of experiments on two widely used datasets for backdoor attack study: **CIFAR10**[50] with 10 classes, **GTSRB**[51] with 43 classes. As a test of scalability, we also consider **Tiny-ImageNet**[52] with 100,000 images distributed among 200 classes and **ImageNet**[53] with 1.28M images distributed among 1000 classes. **Attacks Configurations:** We consider 13 state-of-the-art backdoor attacks: 1) Badnets [3], 2) Blend attack [4], 3 & 4) TrojanNet (Troj-one & Troj-all) [1], 5) Sinusoidal signal attack (SIG) [17], 6 & 7) Input-Aware Attack (Dyn-one and Dyn-all) [54], 8) Clean-label attack (CLB) [24], 9) Composite backdoor (CBA) [55], 10) Deep feature space attack (FBA) [56], 11) Warping-based backdoor attack (WaNet) [22], 12) Invisible triggers based backdoor attack (ISSBA) [57], and 13) Quantization and contrastive learning based attack (BPPA) [23]. To ensure fair comparison, we follow the similar trigger patterns and settings as in their original papers. In Troj-one and Dyn-one attacks, all of the triggered images have same target label. On the other hand, target labels are uniformly distributed over all classes for Troj-all and Dyn-all attacks. 
For creating these attacks on CIFAR10 and GTSRB, we use a poison rate of 10% and train a PreActResNet18 [2] and a WideResNet-16-1 [58] architectures, respectively, for 250 epochs with an initial learning rate of 0.01. More details on hyper-parameters and overall training settings can be found in _the supplementary material_. **Defenses Configurations:** We compare our approach with 4 existing backdoor mitigation methods: 1) Vanilla Fine-Tuning (FT); where we fine-tune all DNN parameters, 2) Adversarial Neural Pruning (ANP) [7] with \(1\%\) clean validation data, 3) Implicit Backdoor Adversarial Unlearning (I-BAU) [59], 4) Adversarial Weight Masking (AWM) [60], 5) Fine-Pruning (FP) [61], 6) Mode Connectivity Repair (MCR) [37], and 7) Neural Attention Distillation (NAD) [38]. However, we move the experimental results for defenses 5, 6, and 7 to the supplementary material due to the page limitation. To apply NGF on CIFAR10, we fine-tune the last layer of the DNN for \(E_{p}\) epochs with \(1\%\) clean validation data. Here, \(E_{p}\) is the number of purification epochs and we choose a value of 100 for this. For optimization, we choose a learning rate of 0.01 with a decay rate of 0.1/40 epochs and consider regularization constant \(\eta\) to be 0.1. Additional experimental details for NGF and other defense methods are in _the supplementary material_. For GTSRB, we increase the validation size to \(3\%\) as there are less samples available per class. Rest of the training settings are same as CIFAR10. For NGF on Tiny-ImageNet, we consider a validation size of 5% as a size less than this seems to hurt clean test performance (after purification). We fine-tune the model for 15 epochs with an initial learning rate of 0.01 with a decay rate of 0.3/epoch. Finally, we validate the effectiveness of NGF on ImageNet. For removing the backdoor, we use 3% validation data and fine-tune for 2 epochs. A learning rate of 0.001 has been employed with a decay rate of 0.005 per epoch. _We define the effectiveness of a defense method in terms of average drop in ASR and ACC over all attacks. A highly effective method should have a high drop in ASR with a low drop in ACC._ We define ASR as the percentage of poison test samples that are classified to the adversary-set target label. ### _Performance Evaluation of NGF_ In Table II, we present the performance of different defenses for four different datasets. **CIFAR10:** We consider five _label poisoning attacks_: Badnets, Blend, TrojanNet, Dynamic, and BPPA. For TorjanNet, we consider two different variations based on label-mapping criteria: Troj-one and Troj-all. Regardless the complexity of the label-mapping type, our proposed method outperforms all other methods both in terms of ASR and ACC. We also create two variations for Dynamic attack: Dyn-one and Dyn-all. Dynamic attack optimizes for input-aware triggers that are capable of fooling the model; making it more challenging than the static trigger based attacks (Badnets, Blend and Trojan). However, NGF outperforms other methods by a satisfactory margin. We also consider attacks that does not change the label during trigger insertion, _i.e._, _clean label attack_. Two such attacks are CLB and SIG. For further validation of our proposed method, we use _deep feature based attacks_, CBA and FBA. Both of these attacks manipulates deep features for backdoor insertion. 
Compared to other defenses, NGF shows better effectiveness against these diverse set of attacks achieving an average drop of \(95.01\%\) in ASR while sacrificing an ACC of \(3.33\%\) for that. Table II also shows the performance of baseline methods such as I-BAU and AWM. AWM performs similarly as ANP and often struggles to remove the backdoor. **GTSRB:** In case of GTSRB, almost all defenses perform similarly for Badnets and Trojan. This, however, does not hold \begin{table} \begin{tabular}{c|c|c|c|c c|c c|c c|c c|c} \hline \hline \multirow{2}{*}{Dataset} & Method & No Defense & \multicolumn{2}{c|}{Vanilla FT} & \multicolumn{2}{c|}{ANP} & \multicolumn{2}{c|}{I-BAU} & \multicolumn{2}{c|}{AWM} & \multicolumn{2}{c}{NGF (Ours)} \\ \cline{2-13} & Attacks & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC \\ \hline \multirow{11}{*}{CIFAR-10} & _Benign_ & 0 & 95.21 & 0 & 92.28 & 0 & 93.98 & 0 & 93.56 & 0 & 93.80 & 0 & **94.10** \\ & Badnets & 100 & 92.96 & 4.87 & 85.92 & 2.84 & 85.96 & 9.72 & 87.85 & 4.34 & 86.17 & **1.86** & **88.32** \\ & Blend & 100 & 94.11 & 4.77 & 87.61 & 3.81 & 89.10 & 11.53 & 90.84 & 2.13 & 88.93 & **0.38** & 91.17 \\ & Troj-one & 100 & 89.57 & 3.78 & 28.18 & 5.47 & 85.20 & 7.91 & **87.24** & 5.41 & 86.45 & **2.64** & 84.21 \\ & Troj-all & 100 & 88.33 & 3.91 & 81.95 & 5.53 & 84.89 & 9.82 & 85.94 & 4.42 & 84.60 & **2.79** & **86.10** \\ & SIG & 100 & 88.84 & 1.04 & 81.92 & 0.37 & 83.60 & 4.12 & 83.57 & 0.90 & 83.38 & **0.12** & **84.16** \\ & Dyn-one & 100 & 92.52 & 4.73 & 88.61 & 1.78 & 86.26 & 10.48 & 89.16 & 3.35 & 88.41 & **1.17** & **90.97** \\ & Dyn-all & 100 & 92.61 & 4.28 & 88.32 & 2.19 & 84.51 & 10.30 & 89.74 & 2.46 & 87.72 & **1.61** & **90.19** \\ & CLB & 100 & 92.78 & 1.83 & 87.41 & 1.41 & 85.07 & 5.78 & 86.70 & 1.89 & 84.18 & **1.04** & **88.37** \\ & CBA & 93.20 & 90.17 & 27.80 & 83.79 & 45.11 & 85.63 & 36.12 & 85.05 & 38.81 & 85.58 & **24.60** & **85.97** \\ & FBA & 100 & 90.78 & 7.95 & 82.90 & 66.70 & **87.42** & 10.66 & 87.35 & 22.31 & 87.06 & **6.21** & 86.96 \\ & WaNet & 98.64 & 92.29 & 5.81 & 86.70 & 3.18 & 89.24 & 10.72 & 85.94 & 2.96 & 89.45 & 2.38 & **89.65** \\ & ISSBA & 99.80 & 92.80 & 6.76 & 85.42 & **3.82** & 89.20 & 12.48 & 90.03 & 4.57 & 89.59 & 4.24 & **90.18** \\ & BPPA & 99.70 & 93.82 & 9.94 & 90.23 & 10.46 & 90.57 & 9.94 & 90.68 & 10.60 & 90.88 & **7.14** & **91.84** \\ \hline \multirow{11}{*}{GTSRB} & Avg. 
Drop & - & - & 92.61 \(\downarrow\) & 6.03 \(\downarrow\) & 87.59 \(\downarrow\) & 4.98 \(\downarrow\) & 87.82 \(\downarrow\) & 3.95 \(\downarrow\) & 91.32 \(\downarrow\) & 4.53 \(\downarrow\) & **95.01** \(\downarrow\) & **3.33** \(\downarrow\) \\ \cline{1-1} & _Benign_ & 0 & 97.87 & 0 & 93.08 & 0 & 95.42 & 0 & 96.18 & 0 & 95.32 & 0 & **95.76** \\ & Badnets & 100 & 97.38 & 1.36 & 88.16 & 0.35 & 93.17 & 2.72 & **94.55** & 2.84 & 93.58 & **0.24** & 94.11 \\ & Blend & 100 & 95.92 & 5.08 & 89.32 & 4.41 & 93.02 & 4.13 & **94.30** & 4.96 & 92.75 & **2.91** & 93.31 \\ & Troj-one & 99.50 & 96.27 & 2.07 & 90.45 & 1.81 & 92.74 & 3.04 & 93.17 & 2.77 & 93.56 & **1.21** & **94.18** \\ & Troj-all & 97.91 & 96.08 & 2.48 & 89.73 & 2.16 & 92.51 & 2.79 & 93.28 & 1.94 & 92.84 & **1.58** & **93.87** \\ & SIG & 97.13 & 96.93 & **1.93** & 91.41 & 6.71 & 91.82 & 2.64 & 93.10 & 5.32 & 92.68 & 3.24 & **93.48** \\ & Dyn-one & 100 & 97.27 & 2.27 & 91.26 & 2.08 & 93.15 & 5.82 & **95.4** & 1.89 & 93.52 & **1.51** & 94.27 \\ & Dyn-all & 100 & 97.05 & 2.84 & 91.42 & 2.49 & 92.89 & 4.87 & 93.98 & 2.74 & 93.17 & **1.26** & **94.14** \\ & BPPA & 99.18 & 98.12 & 5.14 & 94.48 & 7.19 & 93.79 & 8.63 & 94.50 & 5.43 & 94.22 & **4.45** & **95.27** \\ \cline{1-1} & Avg. Drop & - & - & 96.54 \(\downarrow\) & 6.10 \(\downarrow\) & 96.10\(\downarrow\) & 3.99 \(\downarrow\) & 95.11 \(\downarrow\) & 2.83 \(\downarrow\) & 96.02 \(\downarrow\) & 3.59 \(\downarrow\) & **97.39** \(\downarrow\) & **2.79** \(\downarrow\) \\ \hline \multirow{11}{*}{Tiny-ImageNet} & _Benign_ & 0 & 62.56 & 0 & 58.20 & 0 & 59.29 & 0 & 59.34 & 0 & 59.08 & 0 & **59.67** \\ & Badnets & 100 & 59.80 & 3.84 & 53.58 & 61.23 & 55.41 & 13.29 & 54.56 & 31.44 & 54.81 & **2.34** & **58.84** \\ \cline{1-1} & Trojan & 100 & 59.16 & 6.77 & 52.62 & 79.56 & for blend as we achieve an \(2.17\%\) ASR improvement over the next best method. The performance is consistent for other attacks as well. Overall, we record an average \(97.39\%\) ASR drop with only an \(2.79\%\) drop in ACC. _In some cases, ACC for I-BAU are slightly better as it uses a much larger validation size (5%) for purification than other defense techniques._ **ImageNet:** For the scalability test of NGF, we consider two large and widely used datasets, Tiny-ImageNet and ImageNet. In consistence with other datasets, NGF obtains SOTA performance in these diverse datasets too. The effectiveness of ANP reduces significantly for this dataset. In case of large models and datasets, the task of identifying and pruning vulnerable neurons gets more complicated and may result in wrong neurons pruning. _Note that, we report results for successful attacks only. For attacks such as Dynamic and BPDA (following their implementations), it is challenging to obtain satisfactory attack success rates for Tiny-ImageNet and ImageNet._ ### _Ablation Studies_ **Smoothness Analysis of Different Attacks:** We show the relationship between loss surface smoothness and backdoor insertion process in Fig. 1(a) and 1(b). During backdoor insertion, the model is optimized for two different data distributions: clean and poison. Compared to a benign model, the loss surface of a backdoor _becomes much sharper as the model becomes well optimized for both distributions, i.e._, model has both high ASR and high ACC. Backdoor and benign models are far from being well-optimized at the beginning of training. The difference between these models is prominent once the model reaches closer to the final optimization point. As shown in Fig. 
1(b), the training becomes reasonably stable after 100 epochs with ASR and ACC near saturation level. Comparing \(\lambda_{\text{max}}\) of benign and all backdoor models after 100 epochs, we notice a sharp contrast in Fig. 1(a). This validates our previous claim on loss surface smoothness of benign and backdoor models. During the purification period, as shown in Fig. 1(c) and 1(d), the model is optimized to a smoother minima. As a result, ASR becomes close to 0 while retaining good clean test performance. Note that, we calculate loss Hessian and \(\lambda_{\text{max}}\) using all DNN parameters. This indicates that changing the parameters of only one layer impacts the loss landscape of the whole network. Even though the CNN-backbone parameters are frozen, NGF changes the last layer in a way such that the whole backdoor network behaves differently, _i.e._, like a benign model. \begin{table} \begin{tabular}{c|c|c} \hline \hline Dataset & \# Parameters & Method & Runtime (Sec.) \\ \hline CIFAR10 & 5120 & FT & 78.1 \\ & NGF & **38.3** \\ \hline & 22016 & FT & 96.2 \\ & NGF & **47.4** \\ \hline Tiny-ImageNet & 409.6K & FT & 637.6 \\ & NGF & **374.2** \\ \hline ImageNet & 2.048M & FT & 2771.6 \\ & NGF & **1681.4** \\ \hline \hline \end{tabular} \end{table} TABLE IV: Avg. runtime comparison for different datasets. Here, #Parameters is the total number of parameters in the last layer. An NVIDIA RTX 3090 GPU is used for all experiments. Fig. 2: Loss Surface characteristics of a DNN during backdoor insertion and purification processes. a & b) As the joint optimization on clean and poison distribution progresses, _i.e._, high ACC & ASR, the loss surface becomes less and less smoother, _i.e._, high \(\lambda_{\text{max}}\)). c & d) One can purify backdoor by gradually making the loss surface smoother. We use CIFAR10 dataset with four different attacks. \begin{table} \begin{tabular}{c|c c|c c|c c|c c|c c|c c} \hline \hline Defense & \multicolumn{2}{c|}{No Defense} & \multicolumn{2}{c|}{AdaGrad} & \multicolumn{2}{c|}{RMSProp} & \multicolumn{2}{c|}{Adam} & \multicolumn{2}{c|}{SAM} & \multicolumn{2}{c}{NGF (Ours)} \\ \hline Attacks & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC \\ \hline Badnets & 100 & 92.96 & 96.54 & 91.16 & 98.33 & **91.73** & 97.68 & 91.45 & 91.08 & 90.12 & **1.86** & 88.32 \\ Blend & 100 & 94.11 & 97.43 & 91.67 & 95.41 & **92.21** & 94.79 & 92.15 & 89.25 & 91.11 & **0.38** & 91.17 \\ Trojan & 100 & 89.57 & 95.52 & **88.51** & 94.87 & 88.02 & 96.74 & 87.98 & 92.15 & 88.33 & **2.64** & 84.21 \\ Dynamic & 100 & 92.52 & 97.37 & **91.45** & 93.50 & 91.12 & 96.90 & 91.40 & 92.24 & 90.79 & **1.17** & 90.97 \\ SIG & 100 & 88.64 & 86.20 & 87.98 & 86.31 & 87.74 & 85.66 & 87.75 & 81.68 & **88.04** & **0.31** & 83.14 \\ CLB & 100 & 92.78 & 96.81 & 90.86 & 95.53 & 90.96 & 95.87 & **91.02** & 91.04 & 90.97 & **1.04** & 88.37 \\ \hline \hline \end{tabular} \end{table} TABLE III: Performance comparison of NGF to other SGD-based optimizers. A more suitable sharpness-aware SGD-based optimizer is also considered here. However, NGF is far more effective in purifying backdoor (lower ASR) due to its consistent convergence to smooth minima. We use CIFAR10 dataset for these evaluations. 
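The \(\lambda_{\text{max}}\) values discussed in the smoothness analysis above can be estimated without ever forming the full Hessian. The sketch below assumes the standard power-iteration scheme on Hessian-vector products obtained through automatic differentiation and uses a single clean batch for brevity; it illustrates the measurement rather than reproducing the exact evaluation protocol.

```python
import torch

def top_hessian_eigenvalue(model, loss_fn, batch, iters=50):
    """Estimate lambda_max of the loss Hessian w.r.t. all trainable parameters
    via power iteration on Hessian-vector products (no explicit Hessian)."""
    params = [p for p in model.parameters() if p.requires_grad]
    x, y = batch
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)

    v = [torch.randn_like(p) for p in params]                 # random start direction
    norm = torch.sqrt(sum((vi ** 2).sum() for vi in v))
    v = [vi / norm for vi in v]

    lam = 0.0
    for _ in range(iters):
        gv = sum((g * vi).sum() for g, vi in zip(grads, v))   # scalar grad . v
        hv = torch.autograd.grad(gv, params, retain_graph=True)  # Hessian-vector product
        lam = sum((h * vi).sum() for h, vi in zip(hv, v)).item()  # Rayleigh quotient
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / norm for h in hv]
    return lam
```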
**Evaluation of Different Optimizers:** We compare the performance of NGF with different variants of first-order optimizers: (i) _AdaGrad_[62], (ii) _RMSProp_[63], (iii) _Adam_[64], and (iv) Sharpness-Aware Minimization (_SAM_) [65]. SAM is a recently proposed SGD-based optimizer that explicitly penalizes abrupt changes of the loss surface by bounding the search space within a small region. This constrains the changes of the model parameters so that the optimization converges to a smoother loss surface. Table III shows that NGF outperforms all of these variants of first-order optimizers by a huge margin. At the same time, the proposed method achieves comparable clean test performance. Although SAM usually performs better than vanilla SGD in terms of smooth DNN optimization, SAM's performance in shallow network scenarios (i.e., our case) is almost similar to vanilla SGD. Two potential reasons behind this poor performance are (i) using a predefined local area to search for the maximum loss, and (ii) using the 'Euclidean distance' metric instead of a geometric distance metric. In contrast, NGD, with its curvature-aware Fisher information matrix, can successfully avoid such bad minima and optimizes towards the global minima. **Runtime Analysis:** In Table IV, we show the average runtime for different defenses. Similar to purification performance, purification time is also an important indicator of the success of a defense technique. In Section VII-B, we already show that our method outperforms other defenses in most of the settings. As for the runtime, our method completes the purification (for CIFAR10) in just \(38.3\) seconds, which is almost half the time required by FT. The time advantage of our method also holds for large datasets and models, _e.g._, ImageNet and ResNet50. Runtime comparison with other defenses is in the _supplementary material_. **Fine-tuning All Layers:** We have also considered fine-tuning all layers using NGF and SGD. Note that vanilla FT does fine-tune all layers. We report the performance of NGF for all layers in Table V. While fine-tuning all layers seems to improve the performance, it takes almost \(6\times\) more computational time than NGF on the last layer. We also show the results of SAM and SGD while fine-tuning all layers: we term them vanilla FT (SAM) and vanilla FT (SGD). SAM has a slightly better ASR performance compared to SGD, which aligns with our \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Badnets} & \multicolumn{2}{c|}{Blend} & \multicolumn{2}{c|}{Trojan} & \multicolumn{2}{c|}{Dynamic} & \multicolumn{2}{c|}{CLB} & \multicolumn{2}{c|}{SIG} & \multicolumn{2}{c|}{CBA} & \multicolumn{2}{c|}{Runtime} \\ & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & (Secs.)
\\ \hline Initial & 100 & 92.96 & 100 & 94.11 & 100 & 89.57 & 100 & 92.52 & 100 & 92.78 & 100 & 88.64 & 93.20 & 90.71 & – \\ SGD (All Layers) & 4.87 & 85.92 & 4.77 & 87.61 & 3.78 & 21.8 & 4.73 & 88.61 & 1.83 & 87.41 & 1.04 & 81.92 & 27.80 & 83.79 & 78.1 \\ SAM (All Layers) & 3.91 & 85.75 & 2.74 & 88.26 & 3.53 & 82.52 & 3.28 & 87.04 & 1.47 & 86.30 & 0.38 & 84.70 & 26.14 & 85.41 & 116.3 \\ NGF (Last layer) & 1.86 & 88.32 & **0.38** & 91.17 & 2.64 & **84.21** & 1.17 & **90.97** & 1.04 & 88.37 & **0.12** & **84.16** & 24.60 & 85.97 & **38.3** \\ NGF (All layers) & **1.47** & **88.65** & 0.42 & **92.28** & **2.05** & **84.61** & **1.06** & 90.42 & **0.60** & **88.74** & 0.18 & **85.12** & **19.86** & **86.30** & 173.2 \\ \hline \hline \end{tabular} \end{table} TABLE V: Performance of NGF while fine-tuning all layers of DNN. We also consider SAM and SGD based fine-tuning of all layers here. The results shown here are for CIFAR10 dataset. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Bidness} & \multicolumn{2}{c|}{Blend} & \multicolumn{2}{c|}{Trojan} & \multicolumn{2}{c|}{Dynamic} & \multicolumn{2}{c|}{CLB} & \multicolumn{2}{c|}{SIG} & \multicolumn{2}{c|}{CBA} & \multicolumn{2}{c}{Runtime} \\ & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & (Secs.) \\ \hline Initial & 100 & 92.96 & 100 & 94.11 & 100 & 89.57 & 100 & 92.52 & 100 & 92.78 & 100 & 88.64 & 93.20 & 90.17 & – \\ SGD-Long & 82.34 & **90.68** & 7.13 & **92.46** & 86.18 & **87.29** & 57.13 & 90.51 & 13.84 & 88.11 & 0.26 & **85.74** & 84.41 & **86.87** & 907.5 \\ NGF w/o Rep. & 1.91 & 87.65 & **0.31** & 90.54 & 3.04 & 83.31 & 1.28 & 90.24 & **0.92** & 87.13 & 0.16 & 84.46 & 25.58 & 84.81 & **37.8** \\ \hline NGF & **1.86** & 88.32 & 0.38 & 91.17 & **2.64** & 84.21 & **1.17** & **90.97** & 1.04 & **88.37** & **0.12** & 84.16 & 24.60 & 85.97 & 38.3 \\ \hline \hline \end{tabular} \end{table} TABLE VI: Performance of SGD-Long and NGF while fine-tuning only the last layer of DNN. For SGD-Long, we consider a long purification period with \(E_{p}=2500\). NGF performance with and without the regularization term underlines the importance of the proposed regularizer. The results shown here are for CIFAR10 dataset. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Bidness} & \multicolumn{2}{c|}{Bidness} & \multicolumn{2}{c|}{Blend} & \multicolumn{2}{c|}{Trojan} & \multicolumn{2}{c|}{Trojan} \\ & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC & ASR & ACC \\ \hline Initial & 100 & 92.96 & 100 & 94.11 & 100 & 89.57 & 100 & 92.52 & 100 & 92.78 & 100 & 88.64 & 93.20 & 90.17 & – \\ SGD-Long & 82.34 & **90.68** & 7.13 & **92.46** & 86.18 & **87.29** & 57.13 & 90.51 & 13.84 & 88.11 & 0.26 & **85.74** & 84.41 & **86.87** & 907.5 \\ NGF w/o Rep. & 1.91 & 87.65 & **0.31** & 90.54 & 3.04 & 83.31 & 1.28 & 90.24 & **0.92** & 87.13 & 0.16 & 84.46 & 25.58 & 84.81 & **37.8** \\ \hline NGF & **1.86** & 88.32 & 0.38 & 91.17 & **2.64** & 84.21 & **1.17** & **90.97** & 1.04 & **88.37** & **0.12** & 84.16 & 24.60 & 85.97 & 38.3 \\ \hline \hline \end{tabular} \end{table} TABLE VI: Performance of SGD-Long and NGF while fine-tuning only the last layer of DNN. For SGD-Long, we consider a long purification period with \(E_{p}=2500\). NGF performance with and without the regularization term underlines the importance of the proposed regularizer. 
The results shown here are for CIFAR10 dataset. \begin{table} \begin{tabular}{c|c c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Validation size} & \multicolumn{2}{c|}{50} & \multicolumn{2}{c|}{100} & \multicolumn{2}{c|}{250} & \multicolumn{2}{c|}{350} & \multicolumn{2}{c}{500} \\ \hline Method & ASR & CA & ASR & CA & ASR & CA & ASR & CA & ASR & CA & ASR & CA & ASR & CA \\ \hline No Defense & 100 & 92.96 & 100 & 92.96 & 100 & 92.96 & 100 & 92.96 & 100 & 92.96 & 100 & 92.96 \\ ANP & 13.66 & 83.99 & 8.35 & 84.47 & 5.72 & 84.70 & 3.78 & 85.26 & 2.84 & 85.96 \\ AWM & 8.51 & 8 smoothness hypothesis, as SAM usually leads to a smoother loss surface. As for execution time, each SAM update requires 2 backpropagation operations, while a non-SAM update (SGD, Adam, etc.) requires only **1** backpropagation. This makes vanilla FT (SAM) slower than vanilla FT (SGD), which is undesirable for a backdoor purification technique. **Effect of Proposed Regularizer:** In this section, we analyze the effect of the regularizer and of long training with SGD. The effect of our clean distribution-aware regularizer can be observed in Table VI. NGF with the proposed regularizer achieves a 1% clean test performance improvement over vanilla NGF. For long training with SGD (SGD-Long), we fine-tune the last layer for 2500 epochs. Table VI shows the evaluations of SGD-Long on 7 different attacks. Even though the ASR performance improves significantly for CLB and SIG attacks, SGD-based FT still severely underperforms for other attacks. Moreover, the computational time increases significantly over NGF. Thus, our choice of _NGD-based FT as a fast and effective backdoor purification technique_ is well justified. **Effect of Clean Validation Data Size:** We also present how the amount of clean validation data impacts the purification performance. In Table VIII, we see the change in performance while gradually reducing the validation size from 1% to 0.1%. We consider the Badnets attack on the CIFAR10 dataset for this evaluation. Even with only 50 (0.1%) data points, NGF can successfully remove the backdoor by bringing down the attack success rate (ASR) to 6.91%. We also consider AWM performance for this comparison. For both ANP and AWM, reducing the validation size has a severe impact on test accuracy (ACC). **Strong Backdoor Attacks:** By increasing the poison rates, we create stronger versions of different attacks against which most defense techniques fail quite often. We use 3 different poison rates, \(\{25\%,35\%,50\%\}\). We show in Table VII that NGF is capable of defending very well even with a poison rate of \(50\%\), achieving a significant ASR improvement over FT. Furthermore, there is a sharp difference in classification accuracy between NGF and other defenses. For the \(25\%\) Blend attack, ANP offers a slightly better performance than our method; however, it performs poorly in removing the backdoor, obtaining an ASR of \(29.96\%\) compared to \(0.83\%\) for NGF. ## VIII Discussion **Why Smoothness is Key to Removing Backdoor?** One key observation from the smoothness study is that there exists a key difference between the weight-loss surface smoothness (estimated by the _loss Hessian_) of a backdoor model and a benign model w.r.t. the clean distribution: the weight-loss surface of a backdoor model is less smooth than that of a benign model. To further elaborate, let us consider feeding a clean sample to a backdoor model. By definition, it will predict the correct ground truth label.
Now, consider feeding a sample with a backdoor trigger on it. The model will predict the adversary-set target label, implying a significant change in the prediction distribution. This significant change can be explained by the surface smoothness. In order to accommodate this significant change in prediction, the model must adjust itself accordingly. Such adjustment leads to non-smoothness in the weight-loss surface. _A non-smooth surface causes significant changes in the loss gradient for specific inputs_. In our case, these specific inputs are backdoor-triggered samples. As the magnitude of a trigger is usually very small compared to the total input magnitude, the model has to experience quite a significant change in its weight space to cause large loss changes. We characterize this change in terms of smoothness. As for backdoor removal, we claim that making the non-smooth weight-loss surface smoother removes the backdoor behavior. Based on the above discussion, a smoother surface should not cause a large change in loss or model predictions corresponding to backdoor-related perturbations or triggers. In summary, for a model to show certain backdoor behavior, there are specific changes that take place in the weight space. In this work, we try to explain these changes in terms of weight-loss surface smoothness. Our comprehensive empirical evaluations support our intuition well. **Why the Classification Layer?** We further offer an explanation as to why we choose to fine-tune the classification layer instead of any other layer, e.g., the input layer. The classification layer is mostly responsible for the final prediction in a DNN. Depending on the features extracted by the CNN backbone, the classifier learns the decision boundary between these features and renders a prediction. While backdooring, we change the input features slightly (by inserting triggers) so that the classifier makes a wrong prediction. If we can make the classifier invariant to these slight input changes, the effect of the backdoor should be removed. Thus, compared to other layers of the DNN, the classifier plays a more important role in the overall backdoor insertion and removal process. Another reason for fine-tuning only the last layer is better computational efficiency, which is one of the most important aspects of a backdoor defense technique. **Why Different Metrics for Smoothness and NGF?** It is natural to ask whether the same quantity, either the Hessian of the loss or the Fisher information matrix, could be used for both our smoothness analysis and the development of the proposed method. However, our particular choice is driven by the trade-off between these two matrices: _computational efficiency versus performance_. Note that computing the Hessian is more expensive than computing the FIM. On the other hand, the Hessian is a slightly better indicator of smoothness due to its superior effectiveness in capturing the loss-surface geometry. Therefore, we can choose either one of the metrics. Since the smoothness analysis is performed in an offline manner and only once (for each instance), we choose the better-performing one, i.e., the Hessian of the loss, for the smoothness analysis. As for why we choose the Fisher matrix in developing our proposed method, we need to design a runtime-efficient method with good performance. Since we have to calculate either the FIM or the Hessian in each iteration of the update, it becomes harder to choose the Hessian over the FIM for the development of the proposed method.
Given the more favorable trade-off offered by the Fisher information matrix, we develop our method based on it. **Why Fine-tuning Negatively Impacts ACC?** It is observable that, no matter which defense technique we use, the clean test accuracy consistently drops for all datasets. We offer an explanation for fine-tuning-based techniques, as NGF is one of them. As we use a small validation set for fine-tuning, it does not necessarily cover the whole training data distribution. Therefore, fine-tuning with this small amount of data bears the risk of overfitting and reduced clean test accuracy. This is more prominent when we fine-tune all layers of the network (vanilla FT in Table II). In contrast, NGF fine-tunes only one layer, which proves better at preserving clean test accuracy. ## IX Conclusion We propose a novel backdoor purification technique based on natural gradient descent fine-tuning. The proposed method is motivated by our analysis of loss surface smoothness and its strong correlation with the backdoor insertion and purification processes. As a backdoor model has to learn an additional data distribution, it tends to be optimized to bad local minima, or sharper minima, compared to a benign model. We argue that the backdoor can be removed by re-optimizing the model to smoother minima. We further argue that fine-tuning a single layer is enough to remove the backdoor. Therefore, in order to achieve smooth minima in a single-layer fine-tuning scenario, we propose using an FIM-based DNN objective function and minimizing it using a curvature-aware NGD optimizer. Our proposed method achieves SOTA performance on a wide range of benchmarks. Since we fine-tune only one layer, the training time overhead reduces significantly, making our method one of the fastest among SOTA defenses. **Limitations and future works.** Our extensive empirical studies on loss surface smoothness show its relationship with backdoor insertion and removal. However, we leave the mathematical analysis of this relationship for future studies. Such analysis should be able to address the nature of convergence under different purification settings, e.g., the number of validation samples, the number of iterations, the number of fine-tuned layers, etc. Although we verify the smoothness hypothesis empirically, a mathematical analysis would give us more insight into understanding the backdooring process. Although we only experimented with CNN-based architectures, our findings should also hold for the attention-based vision transformer (ViT) [66] architecture. Nevertheless, further study is required to verify the smoothness claims for the ViT architecture. It is known that the attention mechanism and residual connections generally lead the optimization towards smooth minima. However, how the backdooring process interferes with this optimization must be explored properly. In the future, we aim to extend our smoothness analysis to 3D point-cloud attacks as well as contrastive backdoor attacks.
2301.13680
Entanglement witnessing with untrusted detectors
We consider the problem of entanglement detection in the presence of faulty, potentially malicious detectors. A common - and, as of yet, the only - approach to this problem is to perform a Bell test in order to identify nonlocality of the measured entangled state. However, there are two significant drawbacks in this approach: the requirement to exceed a critical, and often high, detection efficiency, and much lower noise tolerance. In this paper, we propose an alternative approach to this problem, which is resilient to the detection loophole and is based on the standard tool of entanglement witness. We discuss how the two main techniques to detection losses, namely the discard and assignment strategies, apply to entanglement witnessing. We demonstrate using the example of a two-qubit Bell state that the critical detection efficiency can be significantly reduced compared to the Bell test approach.
Giuseppe Viola, Nikolai Miklin, Mariami Gachechiladze, Marcin Pawłowski
2023-01-31T14:54:07Z
http://arxiv.org/abs/2301.13680v1
# Entanglement Witnessing with Untrusted Detectors ###### Abstract We consider the problem of entanglement detection in the presence of faulty, potentially malicious detectors. A common--and, as of yet, the only--approach to this problem is to perform a Bell test in order to identify nonlocality of the measured entangled state. However, there are two significant drawbacks in this approach: the requirement to exceed a critical, and often high, detection efficiency, and much lower noise tolerance. In this paper, we propose an alternative approach to this problem, which is resilient to the detection loophole and is based on the standard tool of entanglement witness. We discuss how the two main techniques to detection losses, namely the discard and assignment strategies, apply to entanglement witnessing. We demonstrate using the example of a two-qubit Bell state that the critical detection efficiency can be significantly reduced compared to the Bell test approach. ## 1 Introduction Entanglement of quantum states is one of the most cited and studied quantum phenomena [1]. This fact is largely explained by the simplicity of its formulation and a promising technological advancement that it offers, primarily to the field of quantum cryptography [2, 3]. At the same time, a variety of theoretical, experimental, and technological challenges are yet to be overcome before entanglement can find its real-world applications [4]. Among these challenges, perhaps the most crucial one is the distribution of entangled particles among two or more laboratories that are far away from each other. It is rather clear that photons are the most natural, and essentially the only, candidates for such tasks. Entangled states of photons are easy to prepare (the first experiments date back to the 1960s [5, 6]), and, moreover, photons can be sent easily through optical fibers or simply through the atmosphere [7]. However, there is a major downside in using photons as carriers of entanglement, namely the low efficiency of single-photon detectors [8]. The detection efficiency problem is most often discussed in the context of quantum cryptography or Bell experiments [2, 9], where making the _fair sampling assumption_ can be unjustified. There, either an eavesdropper or a local hidden variable can gain control over the measurement devices, leading to an insecure communication protocol or false nonlocality claims. However, it is often unjustified to assume the independence of a detector's efficiency from the choice of measurement setting, which also affects simpler experiments such as entanglement detection. If fair sampling is not assumed, non-detection events must be accounted for. Surprisingly, it is still possible to achieve unconditionally secure cryptography or a loophole-free Bell test demonstration with non-perfect detectors [10, 11]. However, such demonstrations are only possible for detection efficiencies above certain threshold values, which are often higher than the typical values of the photodetectors used in today's experiments. Studying and lowering the critical detection efficiency for quantum key distribution and Bell tests has been the subject of extensive research [12, 13, 14, 15, 16]. At the same time, the problem of untrusted detectors in entanglement detection remains largely unexplored [17]. As of yet the only approach to this problem is to perform a Bell test, in which the detection loophole is closed [18, 19].
Apart from requiring high threshold detection efficiency, this approach can also be inapplicable to noisy entangled states that do not violate any of the known Bell inequalities. In this work, we take a different approach to the problem of entanglement detection with untrusted detectors. Instead of uplifting to a fully device-independent framework, we consider a scenario in which only the detection part of the measurement process, i.e., photon counting, is untrusted. The choice of basis, or more generally, the measurement setting, is assumed to be in a good control of the experimenter. This choice of assumptions is especially well-motivated for photonic experiments, where the part of the measurement device controlling measurement basis is much more studied than the detection part. Such scenarios in which only a part of the experimental apparatus is assumed to be characterized and trusted are often called _semi-device-independent_[20]. For entanglement detection, we consider the common tool of entanglement witnessing [21]. This method is universal, because it can be applied to any quantum state, and efficient, because it does not require performing the state tomography [1]. As the main technical contribution of this work, we discuss and analyze the discard and assignment strategies in entanglement witnessing with untrusted detectors. We show that both of these approaches to dealing with imperfect detectors can be analyzed using semi-definite programming [22]. As an example, we apply our methods to two-qubit entanglement witnesses. Our results suggest that the critical detection efficiency of entanglement detection with untrusted detectors in the proposed semi-device-independent paradigm is significantly lower than that for Bell test experiments. ## 2 Preliminaries We start by establishing notation and providing a few definitions required to present the results in the next sections. In this paper, we consider entanglement in bipartite systems, for which there is only one notion of separability and entanglement. Additionally, in our analysis, we consider qubit-qubit systems for the sake of simplicity. However, the results are directly generalizable to higher-dimensional systems, and we provide comments about this generalization whenever necessary. Finally, in this paper, we take all the observed efficiencies of all detectors to be the same and denote it by \(\eta\). We take the standard definition of _entanglement witness_ as a linear Hermitian operator \(W\), such that \(\langle W\rangle_{\rho_{s}}\coloneqq\Tr[W\rho_{s}]\geq 0\) for all separable states \(\rho_{s}\) and \(\langle W\rangle_{\rho}<0\) for the entangled state \(\rho\) under investigation. In the presence of inefficient detectors, it also becomes important how the witness \(W\) is measured in the experiment, the fact that was also pointed out in Ref. [17]. Typically, a witness \(W\) is decomposed as a sum of tensor products of local observables. In this work, we focus on two-qubit states for which the typical observables are the Pauli operators, i.e., we consider witnesses \(W\) of the form \[W=w_{0,0}\mathbb{1}\,\otimes\mathbb{1}\,+\sum_{i=1}^{3}w_{i,0}\sigma_{i}\otimes \mathbb{1}\,+\sum_{j=1}^{3}w_{0,j}\mathbb{1}\otimes\sigma_{j}+\sum_{i,j=1}^{3 }w_{i,j}\sigma_{i}\otimes\sigma_{j}, \tag{1}\] with \(w_{i,j}\in\mathbb{R}\) being coefficients of the decomposition, and \(\{\sigma_{1},\sigma_{2},\sigma_{3}\}=\{\sigma_{x},\sigma_{y},\sigma_{z}\}\) being the set of Pauli operators. 
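To make the decomposition in Eq. (1) concrete, the following short sketch assembles a two-qubit operator from a real coefficient array \(w_{i,j}\) and evaluates \(\mathrm{Tr}[W\rho]\) numerically; the coefficient values used in the example are placeholders for illustration and are not taken from this work.

```python
import numpy as np

# Pauli basis, indexed 0..3 with sigma_0 = identity
paulis = [np.eye(2),
          np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]])]

def witness_from_coefficients(w):
    """Assemble W = sum_{i,j} w[i, j] * sigma_i (x) sigma_j from a real 4 x 4
    coefficient array, mirroring the decomposition in Eq. (1)."""
    W = np.zeros((4, 4), dtype=complex)
    for i in range(4):
        for j in range(4):
            W += w[i, j] * np.kron(paulis[i], paulis[j])
    return W

def expectation(W, rho):
    """<W>_rho = Tr[W rho] for a two-qubit density matrix rho."""
    return float(np.real(np.trace(W @ rho)))

# Placeholder coefficients (illustrative only, not taken from the paper)
w = np.zeros((4, 4))
w[0, 0], w[3, 3] = 0.25, -0.25
rho_mixed = np.eye(4) / 4          # maximally mixed, separable state
print(expectation(witness_from_coefficients(w), rho_mixed))  # non-negative, as required for separable states
```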
For qudit systems, a possible set of observables are the Heisenberg-Weyl operators [23]. To evaluate the expectation value \(\langle W\rangle_{\rho}\) in Eq. (1), we need to estimate each of the terms \(\langle\sigma_{i}\otimes\sigma_{j}\rangle_{\rho}\) for which \(w_{i,j}\neq 0\) for all \(i,j\in\{1,2,3\}\), as well as _the marginal_ terms \(\langle\sigma_{i}\otimes\mathbb{1}\rangle_{\rho}\) for \(i\in\{1,2,3\}\) and \(\langle\mathbb{1}\otimes\sigma_{j}\rangle_{\rho}\) for \(j\in\{1,2,3\}\). Let us denote the outcomes of the parties' measurements as "\(+\)" and "\(-\)", in which case the expectation values are calculated as \[\langle\sigma_{i}\otimes\sigma_{j}\rangle_{\rho}=p(+,+|i,j)+p(-,-|i,j)-p(+,-|i,j)-p(-,+|i,j), \tag{2}\] for \(i,j\in\{1,2,3\}\), and \[\begin{split}\langle\mathbb{1}\otimes\sigma_{j}\rangle_{\rho}& =p^{B}(+|j)-p^{B}(-|j),\\ \langle\sigma_{i}\otimes\mathbb{1}\rangle_{\rho}&=p^{A }(+|i)-p^{A}(-|i),\end{split} \tag{3}\] otherwise. In the above, we used the superscript \({}^{A}\) and \({}^{B}\) to denote the marginal probabilities. All of the above probabilities can be estimated by the respective frequencies of detectors' clicks. The key problem that we investigate in this work is how the no-click events affect the expectation value of \(W\) and, consequently, entanglement detection. The literature that addresses the detection inefficiency problem, primarily in the context of the Bell test, describes two primary methods for handling no-click events [24]. In the first approach, referred to as the _discard strategy_, one simply ignores all the events where at least one of the detectors did not click (for estimation of joint probabilities). Mathematically, it means that the joint probabilities as well as marginal probabilities in Eqs. (2,3) are replaced by the probabilities, conditioned on the click events: \[p(+,+|i,j)\mapsto p(+,+|i,j,\mathrm{c}^{A},\mathrm{c}^{B}), \tag{4}\] for all \(i,j\in\{1,2,3\}\) and similarly for other outcomes. In the above, \(\mathrm{c}^{A}\) and \(\mathrm{c}^{B}\) denote the _events_ of Alice's and Bob's detectors clicking. The marginal probabilities are mapped as \[\begin{split} p^{A}(+|i)&\mapsto p^{A}(+|i, \mathrm{c}^{A}),\\ p^{B}(+|j)&\mapsto p^{B}(+|j,\mathrm{c}^{B}),\end{split} \tag{5}\] for all \(i,j\in\{1,2,3\}\). The same mapping of probabilities is performed when one assumes the fair sampling. However, in case of the discard strategy (in Bell tests) one takes into account the effect of losses by increasing the local bound of a Bell inequality [25, 24]. As we show in the next section, this also applies to entanglement witnessing: for imperfect detectors, the expectation value of a witness with respect to separable states can take negative values. However, there is still a range of values of \(\eta\) for which entanglement detection is possible. A similar observation has been made in Ref. [17], which was the first to consider the problem of detection loophole in entanglement witnessing. The second common approach of dealing with detection inefficiencies in Bell tests is the _assignment strategy_ method, sometimes also called binning. There, for every no-click event, one assigns one of the outcomes, in our case "\(+\)" or "\(-\)", either randomly or deterministically. 
Mathematically, it means that the evaluated probabilities are mapped as \[\begin{split} p(+,+|i,j)&\mapsto\eta^{2}p(+,+|i,j, \mathrm{c}^{A},\mathrm{c}^{B})+\eta(1-\eta)p(+,+|i,j,\mathrm{c}^{A},\neg \mathrm{c}^{B})\\ &+(1-\eta)\eta p(+,+|i,j,\neg\mathrm{c}^{A},\mathrm{c}^{B})+(1- \eta)^{2}p(+,+|i,j,\neg\mathrm{c}^{A},\neg\mathrm{c}^{B}),\end{split} \tag{6}\] for all \(i,j\in\{1,2,3\}\), and similarly for the other outcomes. In the above, \(\neg\mathrm{c}^{A}\), \(\neg\mathrm{c}^{B}\) denote the no-click events. The marginal probabilities are mapped as \[\begin{split} p^{A}(+|i)&\mapsto\eta p^{A}(+|i, \mathrm{c}^{A})+(1-\eta)p^{A}(+|i,\neg\mathrm{c}^{A}),\\ p^{B}(+|j)&\mapsto\eta p^{B}(+|j,\mathrm{c}^{B})+(1- \eta)p^{B}(+|j,\neg\mathrm{c}^{B}),\end{split} \tag{7}\] for all \(i,j\in\{1,2,3\}\). A particular assignment is determined by the form of the probabilities, conditioned on the events \(\neg\mathrm{c}^{A}\) and \(\neg\mathrm{c}^{B}\). For instance, if Alice chooses to always output "\(+\)" if her detector does not click, then we have that \(p(+,+|i,j,\neg\mathrm{c}^{A},\mathrm{c}^{B})=p^{B}(+|j,\mathrm{c}^{B})\) and \(p(-,+|i,j,\neg\mathrm{c}^{A},\mathrm{c}^{B})=0\), etc. Since for an entanglement witness of the form in Eq. (1) we need to estimate the expectation values rather than probabilities, it is more convenient to define an assignment as follows \[\begin{split} a_{i}&\coloneqq p^{A}(+|i,\neg \mathrm{c}^{A})-p^{A}(-|i,\neg\mathrm{c}^{A}),\\ b_{j}&\coloneqq p^{B}(+|j,\neg\mathrm{c}^{B})-p^{ B}(-|j,\neg\mathrm{c}^{B}),\end{split} \tag{8}\] for \(i,j\in\{1,2,3\}\). In Bell tests, the assignment strategy leads to lowering the maximally achievable quantum value of the Bell inequality, but does not increase the local bound [24]. As we show in the next section, this is not true in general for entanglement witnessing, and some local assignments can lead to negative values. We demonstrate our findings on the example of pure entangled two-qubit states, for which we use a notation \[|\Psi_{\theta}\rangle\coloneqq\sin(\theta)|0,0\rangle+\cos(\theta)|1,1\rangle, \tag{9}\] with \(\theta\in(0,\frac{\pi}{4}]\). The corresponding witness for \(|\Psi_{\theta}\rangle\) reads \[W_{\theta}\coloneqq\cos(\theta)^{2}\mathbb{1}\otimes\mathbb{1}-|\Psi_{\theta} \rangle\!\langle\Psi_{\theta}|. \tag{10}\] The non-zero coefficients of the decomposition of \(W_{\theta}\) into Pauli observables are \(w_{0,0}=\frac{1}{2}\cos(2\theta)+\frac{1}{4}\), \(w_{0,3}=w_{3,0}=\frac{1}{4}\cos(2\theta)\), \(w_{1,1}=-w_{2,2}=-\frac{1}{4}\sin(2\theta)\), and \(w_{3,3}=-\frac{1}{4}\). We refer to \(|\Psi_{\frac{\pi}{4}}\rangle\) and \(W_{\frac{\pi}{4}}\) as the Bell state and the Bell witness, respectively. ## 3 Results We start with a general formulation of the detection efficiency problem in entanglement witnessing. Similarly to the situation in Bell tests, we cannot rule out the possibility that a _hidden variable_\(\lambda\) gains control over the detectors and correlates their efficiencies with the source of quantum states. 
Mathematically, it means that the observed joint probability distribution, e.g., for the outcome \(+,+\), decomposes as \[\begin{split} p(+,+|i,j,\mathrm{c}^{A},\mathrm{c}^{B})& =\sum_{\lambda\in\Lambda}p(+,+,\lambda|i,j,\mathrm{c}^{A},\mathrm{ c}^{B})=\sum_{\lambda\in\Lambda}\frac{p(+,+,\lambda,\mathrm{c}^{A},\mathrm{c}^{B}|i,j )}{p(\mathrm{c}^{A}|i)p(\mathrm{c}^{B}|j)}\\ &=\frac{1}{\eta^{2}}\sum_{\lambda\in\Lambda}p(+,+,\mathrm{c}^{A}, \mathrm{c}^{B}|i,j,\lambda)p(\lambda)\\ &=\frac{1}{\eta^{2}}\sum_{\lambda\in\Lambda}p(+,+|i,j,\mathrm{c}^ {A},\mathrm{c}^{B},\lambda)p(\mathrm{c}^{A}|i,\lambda)p(\mathrm{c}^{B}|j, \lambda)p(\lambda),\end{split} \tag{11}\] where we consider for simplicity the situation in which the detection events are uncorrelated and independent of the measurement choices with respect to the _observed_ probability distribution, i.e., \(p(\mathrm{c}^{A},\mathrm{c}^{B}|i,j)=p(\mathrm{c}^{A}|i)p(\mathrm{c}^{B}|j)=\eta^ {2}\). The key observation here is that the response functions of click events, \(p(\mathrm{c}^{A}|i,\lambda)\) and \(p(\mathrm{c}^{B}|j,\lambda)\), may depend on the measurement settings \(i\) and \(j\). At the same time, since the entanglement source may also be controlled by the hidden variable \(\lambda\), the states \(\rho_{\lambda}\), with respect to which the outcome probabilities \(p(+,+|i,j,\mathrm{c}^{A},\mathrm{c}^{B},\lambda)\) are calculated, may also vary with \(\lambda\). In what follows, we show how to account for such situations in both, the discard and the assignment strategies. ### Discard strategy In this section, we discuss two main questions regarding the discard strategy: given an entanglement witness, what is the minimal value that it can take for separable states for a given detection efficiency \(\eta\), and what is the critical detection efficiency \(\eta_{\mathrm{crit}}\) for which no entanglement detection is possible? We show below how to formulate these questions as optimization problems, which can be cast as semidefinite programming problems (SDPs) [22]. Looking at the expansion of the observed probabilities in Eq. (11), we can realize that the probabilities of click events, \(p(\mathrm{c}^{A}|i,\lambda)\) and \(p(\mathrm{c}^{B}|j,\lambda)\), without loss of generality can be taken to be deterministic, i.e., \(0\) or \(1\), by considering a sufficiently large set \(\Lambda\). As a next step, we define sets \(\Lambda_{i}^{A}\) and \(\Lambda_{j}^{B}\) as \[\Lambda_{i}^{A}=\left\{\lambda\in\Lambda\;\Big{|}\;p(\mathrm{c}^{A}|i,\lambda) =1\right\},\quad\Lambda_{j}^{B}=\left\{\lambda\in\Lambda\;\Big{|}\;p(\mathrm{c} ^{B}|j,\lambda)=1\right\}, \tag{12}\] for \(i,j\in\{1,2,3\}\). We also consider unnormalized density operators \(\rho_{\lambda}\), such that \(\mathrm{Tr}[\rho_{\lambda}]=p(\lambda)\), and \(\frac{\rho_{\lambda}}{p(\lambda)}\) is the state in which the particles are prepared for the value \(\lambda\) of the hidden variable. 
This allows us to write the mapped expectation values \(\langle\sigma_{i}\otimes\sigma_{j}\rangle\) simply as \[\langle\sigma_{i}\otimes\sigma_{j}\rangle\mapsto\frac{1}{\eta^{2}}\sum_{ \lambda\in\Lambda_{i}^{A}\cap\Lambda_{j}^{B}}\langle\sigma_{i}\otimes\sigma_ {j}\rangle_{\rho_{\lambda}}, \tag{13}\] and the marginal expectation values as \[\begin{split}\langle\mathbbm{1}\otimes\sigma_{j}\rangle& \mapsto\frac{1}{\eta}\sum_{\lambda\in\Lambda_{j}^{B}}\langle \mathbbm{1}\otimes\sigma_{j}\rangle_{\rho_{\lambda}},\\ \langle\sigma_{i}\otimes\mathbbm{1}\rangle&\mapsto \frac{1}{\eta}\sum_{\lambda\in\Lambda_{i}^{A}}\langle\sigma_{i}\otimes \mathbbm{1}\rangle_{\rho_{\lambda}},\end{split} \tag{14}\] for \(i,j\in\{1,2,3\}\). We can now formulate the problem of finding the minimal value of \(W\) as the following SDP, \[\min_{\rho_{\lambda}} w_{0,0}+\frac{1}{\eta}\sum_{i=1}^{3}w_{i,0}\sum_{\lambda\in\Lambda_{i}^{ A}}\langle\sigma_{i}\otimes\mathbb{1}\rangle_{\rho_{\lambda}}+\frac{1}{\eta} \sum_{j=1}^{3}w_{0,j}\sum_{\lambda\in\Lambda_{j}^{B}}\langle\mathbb{1}\otimes \sigma_{j}\rangle_{\rho_{\lambda}} \tag{15a}\] \[+\frac{1}{\eta^{2}}\sum_{i,j=1}^{3}w_{i,j}\sum_{\lambda\in\Lambda_ {i}^{A}\cap\Lambda_{j}^{B}}\langle\sigma_{i}\otimes\sigma_{j}\rangle_{\rho_{ \lambda}},\] \[\mathrm{s.t.} \sum_{\lambda\in\Lambda_{i}^{A}}\mathrm{Tr}[\rho_{\lambda}]= \sum_{\lambda\in\Lambda_{j}^{B}}\mathrm{Tr}[\rho_{\lambda}]=\eta,\;\sum_{ \lambda\in\Lambda_{i}^{A}\cap\Lambda_{j}^{B}}\mathrm{Tr}[\rho_{\lambda}]= \eta^{2},\;\;\forall i,j\in\{1,2,3\},\] (15b) \[\rho_{\lambda}\geq 0,\quad\rho_{\lambda}^{\intercal A}\geq 0,\; \forall\lambda\in\Lambda,\] (15c) \[\rho_{\mathrm{observed}}\geq 0,\quad\sum_{\lambda\in\Lambda} \mathrm{Tr}[\rho_{\lambda}]=1.\] In the above SDP, we have introduced an operator \(\rho_{\mathrm{observed}}\), which corresponds to the physically observed state, and is defined as \[\rho_{\mathrm{observed}} \coloneqq\frac{\mathbb{1}\otimes\mathbb{1}}{4}+\frac{1}{\eta}\sum_ {i=1}^{3}\left(\sum_{\lambda\in\Lambda_{i}^{A}}\langle\sigma_{i}\otimes \mathbb{1}\rangle_{\rho_{\lambda}}\right)\frac{\sigma_{i}\otimes\mathbb{1}}{4 }+\frac{1}{\eta}\sum_{j=1}^{3}\left(\sum_{\lambda\in\Lambda_{j}^{B}}\langle \mathbb{1}\otimes\sigma_{j}\rangle_{\rho_{\lambda}}\right)\frac{\mathbb{1} \otimes\sigma_{j}}{4} \tag{16}\] \[+\frac{1}{\eta^{2}}\sum_{i,j=1}^{3}\left(\sum_{\lambda\in\Lambda_{ i}^{A}\cap\Lambda_{j}^{B}}\langle\sigma_{i}\otimes\sigma_{j}\rangle_{\rho_{ \lambda}}\right)\frac{\sigma_{i}\otimes\sigma_{j}}{4}.\] The condition \(\rho_{\mathrm{observed}}\geq 0\) in Eq. (15c) requests that the observed statistics corresponds to some physical state, which otherwise could lead to the parties realizing that the behavior of their detectors is malicious. Although the objective function of the SDP in Eq. (15) could also be written as \(\langle W\rangle_{\rho_{\mathrm{observed}}}\), we found that it is more instructive to give a full expansion of the witness in Eq. (15). Alternatively to defining the state \(\rho_{\mathrm{observed}}\) in Eq. (16), one can request an existence of some density operator, such that the experimentally observed expectation values can be explained by this state. This can be relevant, e.g., in the situation when the decomposition of the witness in Eq. (1) features non-orthogonal observables. The conditions in Eqs. (15a) ensure that the events of photons being detected by Alice's and Bob's devices appear to be uncorrelated and occur with the same probability \(\eta\) for all the measurement settings. 
The conditions in Eq. (15a) ensure that the operators \(\rho_{\lambda}\) are positive semidefinite and separable, due to the positive-partial-transpose criterion [1]. For the higher-dimensional case, the separability condition can be enforced by a hierarchy of SDP relaxations [26]. In Fig. 1 we demonstrate the solution of the SDP in Eq. (15) for the witness \(W_{\theta}\) in Eq. (10) for \(\theta\in\{\frac{\pi}{6},\frac{\pi}{5},\frac{\pi}{4}\}\). The values of \(\eta\) for which \(\langle W_{\theta}\rangle\) reaches its minimal value, shown by the dashed lines in Fig. 1, is the critical detection efficiency. In Appendix A, we give an explicit solution for the Bell witness, and show that in this case the critical detection efficiency is \(\frac{1}{\sqrt{3}}\). This value is significantly smaller than \(0.83\), which is the critical detection efficiency of detecting the Bell state in Bell experiments [27]. ### Assignment strategy In this section, we discuss the assignment strategy to entanglement witnessing with untrusted detectors. As stated in the introduction, in Bell tests, the assignment strategy reduces the maximal possible violation of a Bell inequality while preserving the local hidden variable bound [25]. Additionally, there is no restriction on the particular choice of the assignment as long as it is performed locally by the parties. This, however, does not translate to the case of entanglement witnessing, as we show below. Let \((a_{1},a_{2},a_{3})\) and \((b_{1},b_{2},b_{3})\), as defined by Eq. (8), be assignments chosen by the parties. Let us first assume that the behavior of the detectors is _honest_, i.e., whenever the detectors click, the probabilities of outcomes, e.g., \(p(+,+|i,j,\mathrm{c}^{A},\mathrm{c}^{B})\), correspond to a single state \(\rho\) which is not in the control of the hidden variable. In that case, it is easy to observe that the transformation of probabilities due to the assignment strategy in Eq. (6) can be equivalently captured by the following transformation of state \(\rho\), \[\rho\mapsto\eta^{2}\rho+\eta(1-\eta)\rho^{A}\otimes\beta+(1-\eta)\eta\alpha \otimes\rho^{B}+(1-\eta)^{2}\alpha\otimes\beta, \tag{17}\] where \(\rho^{A}\) and \(\rho^{B}\) are the reduced states of Alice's and Bob's subsystems, and we have introduced the notation \[\alpha\coloneqq\frac{1}{2}+\frac{1}{2}\sum_{i=1}^{3}a_{i}\sigma_{i},\quad \beta\coloneqq\frac{1}{2}+\frac{1}{2}\sum_{i=1}^{3}b_{i}\sigma_{i}. \tag{18}\] From the form of the state in Eq. (17), it is clear that as long as the initial state \(\rho\) is separable and the operators \(\alpha\) and \(\beta\) in Eq. (18) are positive semidefinite, the transformed state operator is also positive semidefinite and separable. Thus, we obtain a sufficient condition on the assignments for which no false detection of entanglement occurs: \[\sum_{i=1}^{3}a_{i}^{2}\leq 1,\quad\sum_{i=1}^{3}b_{i}^{2}\leq 1. \tag{19}\] Figure 1: Minimal values that the witness \(\langle W_{\theta}\rangle\) can take in the discard strategy as a function of detection efficiency \(\eta\). Note that, a common deterministic assignment \(p(+|i,\neg\mathrm{c}^{A})=1\) for Bell tests, i.e., \(a_{i}=1\)\(\forall i\in\{1,2,3\}\), does not satisfy the above constraint and can lead to a false detection of entanglement. In B, we show that if the detection efficiencies of Alice's and Bob's detectors can be different, then there are values of them for which the above condition is also necessary. 
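The sufficient condition in Eq. (19) is easy to check numerically. The sketch below applies the honest-detector transformation of Eq. (17) to a separable input state and verifies that the result remains a valid state with a positive partial transpose whenever the assignment operators of Eq. (18) are positive semidefinite; the input state and assignment vectors are arbitrary examples chosen for illustration.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

def assignment_operator(a):
    """alpha (or beta) of Eq. (18) from an assignment vector (a1, a2, a3)."""
    return 0.5 * np.eye(2) + 0.5 * sum(ai * s for ai, s in zip(a, sigma))

def assignment_transform(rho, eta, a, b):
    """Transformed state of Eq. (17) for honest detectors with efficiency eta."""
    alpha, beta = assignment_operator(a), assignment_operator(b)
    r = rho.reshape(2, 2, 2, 2)
    rho_A = np.trace(r, axis1=1, axis2=3)   # partial trace over Bob
    rho_B = np.trace(r, axis1=0, axis2=2)   # partial trace over Alice
    return (eta ** 2 * rho
            + eta * (1 - eta) * np.kron(rho_A, beta)
            + (1 - eta) * eta * np.kron(alpha, rho_B)
            + (1 - eta) ** 2 * np.kron(alpha, beta))

def partial_transpose_A(rho):
    return rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

# Separable input state and assignments satisfying Eq. (19)
rho_sep = 0.5 * (np.kron(np.diag([1.0, 0.0]), np.diag([1.0, 0.0]))
                 + np.kron(np.diag([0.0, 1.0]), np.diag([0.0, 1.0])))
rho_t = assignment_transform(rho_sep, eta=0.7, a=(1, 0, 0), b=(1, 0, 0))
print(np.linalg.eigvalsh(rho_t).min() >= -1e-9)                       # valid density operator
print(np.linalg.eigvalsh(partial_transpose_A(rho_t)).min() >= -1e-9)  # PPT, hence separable for two qubits
```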
As an example of a good assignment, let us consider the Bell witness \(W_{\frac{\pi}{4}}\) and the Bell state \(\rho_{\frac{\pi}{4}}=|\Psi_{\frac{\pi}{4}}\rangle\!\langle\Psi_{\frac{\pi}{4}}|\). From the transformation in Eq. (17), we find that the expectation value of the witness equals to \[\langle W_{\frac{\pi}{4}}\rangle_{\rho_{\frac{\pi}{4}}}=\frac{1}{2}\left(1- \eta^{2}-\eta-(1-\eta)^{2}\mathrm{Tr}[\alpha^{\intercal}\beta]\right). \tag{20}\] It is clear, that a good assignment corresponds to taking \(\alpha\) and \(\beta\) rank-1 and satisfying \(\alpha^{\intercal}=\beta\). This can be achieved, e.g., by \((a_{1},a_{2},a_{3})=(b_{1},b_{2},b_{3})=(1,0,0)\). One can also see from the above that entanglement of \(\rho\) can be detected for \(\eta>\frac{1}{2}\). Now, we look at the general case of potentially malicious behavior of the detectors. This, in particular, means that the observed probabilities of outcomes, conditioned on click events, are given by Eq. (11). We start with the same argument that by considering a large enough set \(\Lambda\), the probabilities of click events can be taken to be either \(0\) or \(1\). For the assignment strategy we would need to introduce an extra notation for subsets of \(\Lambda\), \[\overline{\Lambda}_{i}^{A}=\left\{\lambda\in\Lambda\ \Big{|}\ p(\mathrm{c}^{A}|i, \lambda)=0\right\},\quad\overline{\Lambda}_{j}^{B}=\left\{\lambda\in\Lambda \ \Big{|}\ p(\mathrm{c}^{B}|j,\lambda)=0\right\}, \tag{21}\] which are just the complements of the sets \(\Lambda_{i}^{A}\) and \(\Lambda_{j}^{B}\). Using this notation, we can write the observed expectation values as \[\begin{split}\langle\sigma_{i}\otimes\sigma_{j}\rangle& \mapsto\sum_{\lambda\in\Lambda_{i}^{A}\cap\Lambda_{j}^{B}} \langle\sigma_{i}\otimes\sigma_{j}\rangle_{\rho_{\lambda}}+\sum_{\lambda\in \Lambda_{i}^{A}\cap\overline{\Lambda}_{j}^{B}}\langle\sigma_{i}\otimes \mathbbm{1}\rangle_{\rho_{\lambda}}b_{j}\\ &+\sum_{\lambda\in\overline{\Lambda}_{i}^{A}\cap\Lambda_{j}^{B}} \langle\mathbbm{1}\otimes\sigma_{j}\rangle_{\rho_{\lambda}}a_{i}+\sum_{ \lambda\in\overline{\Lambda}_{i}^{A}\cap\overline{\Lambda}_{j}^{B}} \mathrm{Tr}[\rho_{\lambda}]a_{i}b_{j},\end{split} \tag{22}\] for all pairs of \(i,j\in\{1,2,3\}\). 
The marginal expectation values are mapped as follows \[\begin{split}\langle\sigma_{i}\otimes\mathbbm{1}\rangle& \mapsto\sum_{\lambda\in\Lambda_{i}^{A}}\langle\sigma_{i}\otimes \mathbbm{1}\rangle_{\rho_{\lambda}}+\sum_{\lambda\in\overline{\Lambda}_{i}^{A} }\mathrm{Tr}[\rho_{\lambda}]a_{i},\\ \langle\mathbbm{1}\otimes\sigma_{j}\rangle&\mapsto \sum_{\lambda\in\Lambda_{j}^{B}}\langle\mathbbm{1}\otimes\sigma_{j}\rangle_{ \rho_{\lambda}}+\sum_{\lambda\in\overline{\Lambda}_{j}^{B}}\mathrm{Tr}[\rho_{ \lambda}]b_{j}.\end{split} \tag{23}\] We now formulate an SDP that determines the minimal value of a witness for separable states given assignments \((a_{1},a_{2},a_{3})\) and \((b_{1},b_{2},b_{3})\) and detection efficiency \(\eta\) \[\min_{\rho_{\lambda}} w_{0,0}+\sum_{i=1}^{3}w_{i,0}\left(\sum_{\lambda\in\Lambda_{i}^{A}} \langle\sigma_{i}\otimes\mathbb{1}\rangle_{\rho_{\lambda}}+\sum_{\lambda\in \overline{\Lambda}_{i}^{A}}\mathrm{Tr}[\rho_{\lambda}]a_{i}\right) \tag{24a}\] \[+\sum_{j=1}^{3}w_{0,j}\left(\sum_{\lambda\in\Lambda_{j}^{B}} \langle\mathbb{1}\otimes\sigma_{j}\rangle_{\rho_{\lambda}}+\sum_{\lambda\in \overline{\Lambda}_{j}^{B}}\mathrm{Tr}[\rho_{\lambda}]b_{j}\right)\] \[+\sum_{i,j=1}^{3}w_{i,j}\left(\sum_{\lambda\in\Lambda_{i}^{A} \cap\Lambda_{j}^{B}}\langle\sigma_{i}\otimes\sigma_{j}\rangle_{\rho_{\lambda} }+\sum_{\lambda\in\Lambda_{i}^{A}\cap\overline{\Lambda}_{j}^{B}}\langle \sigma_{i}\otimes\mathbb{1}\rangle_{\rho_{\lambda}}b_{j}\right.\] \[\left.\hskip 56.905512pt+\sum_{\lambda\in\overline{\Lambda}_{i}^{A} \cap\Lambda_{j}^{B}}\langle\mathbb{1}\otimes\sigma_{j}\rangle_{\rho_{\lambda }}a_{i}+\sum_{\lambda\in\overline{\Lambda}_{i}^{A}\cap\overline{\Lambda}_{j}^{ B}}\mathrm{Tr}[\rho_{\lambda}]a_{i}b_{j},\right)\] \[\mathrm{s.t.} \sum_{\lambda\in\Lambda_{i}^{A}}\mathrm{Tr}[\rho_{\lambda}]=\sum_ {\lambda\in\Lambda_{j}^{B}}\mathrm{Tr}[\rho_{\lambda}]=\eta,\ \sum_{\lambda\in\Lambda_{i}^{A}\cap\Lambda_{j}^{B}}\mathrm{Tr}[\rho_{ \lambda}]=\eta^{2},\ \ \forall i,j\in\{1,2,3\},\] (24a) \[\rho_{\lambda}\geq 0,\quad\rho_{\lambda}^{\mathbb{T}_{A}} \geq 0,\ \forall\lambda\in\Lambda,\] (24b) \[\rho_{\mathrm{observed}}\geq 0,\quad\sum_{\lambda\in\Lambda} \mathrm{Tr}[\rho_{\lambda}]=1,\] (24c) \[\sum_{\lambda\in\Lambda_{i}^{A}\cap\overline{\Lambda}_{j}^{B}} \langle\sigma_{i}\otimes\mathbb{1}\rangle_{\rho_{\lambda}} =(1-\eta)\sum_{\lambda\in\Lambda_{i}^{A}}\langle\sigma_{i}\otimes \mathbb{1}\rangle_{\rho_{\lambda}},\] (24d) \[\sum_{\lambda\in\overline{\Lambda}_{i}^{A}\cap\Lambda_{j}^{B}} \langle\mathbb{1}\otimes\sigma_{j}\rangle_{\rho_{\lambda}} =(1-\eta)\sum_{\lambda\in\Lambda_{j}^{B}}\langle\mathbb{1}\otimes \sigma_{j}\rangle_{\rho_{\lambda}} \forall i,j,\in\{1,2,3\}.\] where \(\rho_{\mathrm{observed}}\) is defined in Eq. (16). As in the case of the SDP for the discard strategy, the constraints in Eqs.(24a) guarantee that the observed probabilities of click events are uncorrelated for Alice and Bob and are independent of their measurement settings. The conditions on \(\rho_{\lambda}\) in Eq. (24b) and on \(\rho_{\mathrm{observed}}\) in Eq. (24c) are also motivated analogously to the discard strategy case. Finally, the constraints in Eq. (24d) guarantee that the observed marginal state of Alice is independent of whether Bob detectors click or not, and analogously that Bob's marginal is independent of the behavior of Alice's detector. In Fig. 2 (left) we demonstrate the solution to the SDP in Eq. (24) for the witness \(W_{\theta}\) and the assignment \((a_{1},a_{2},a_{3})=(b_{1},b_{2},b_{3})=(0,0,0)\). 
This particular assignment preserves the property \(\langle W\rangle_{\rho_{s}}\geq 0\) for all separable states. In Fig. 2 (left), the dashed lines depict the values of the witness \(W_{\theta}\) with respect to the corresponding state \(|\Psi_{\theta}\rangle\). These values increase as \(\eta\) decreases due to the assignment strategy. The points where the dashed lines intersect the solid lines are the critical detection efficiencies for the chosen assignment. In Fig. 2 (right) we also compare different assignments for the Bell witness. One can see that the property \(\langle W\rangle_{\rho_{s}}\geq 0\) is not preserved for any of the selected assignments. At the same time, one can still detect entanglement if the expectation value with respect to an entangled state is lower than the calculated bound on \(\langle W_{\frac{\pi}{4}}\rangle_{\rho_{s}}\) due to untrusted detectors, as we demonstrate in Fig. 2. Notably, the minimal expectation values that the witnesses can take with respect to entangled states (dashed lines) do not correspond to the Bell state. The critical detection efficiency for the Bell witness in this calculation is again \(\eta=\frac{1}{\sqrt{3}}\), as for the discard strategy. In Appendix C we give an explicit solution to the SDP in Eq. (24) for the Bell witness. The value of \(\langle W_{\frac{\pi}{4}}\rangle_{\rho_{\frac{\pi}{4}}}\) with respect to the Bell state can be calculated from Eq. (20) by taking \(\alpha=\beta=\frac{1}{2}\), and is equal to \(\frac{1}{4}-\frac{3}{4}\eta^{2}\). Figure 2: Left: Minimal values of the witness \(\langle W_{\theta}\rangle\) in the assignment strategy with \(a_{i}=0\) and \(b_{j}=0\), \(\forall i,j\in\{1,2,3\}\), as a function of detection efficiency \(\eta\) (solid lines) and the corresponding values of the witness for the entangled state \(|\Psi_{\theta}\rangle\) (dashed lines). Right: Minimal values of the Bell witness for the assignments \((a_{1},a_{2},a_{3})\in\{(1,0,0),(1,1,0),(1,1,1)\}\) and \(b_{i}=(-1)^{i+1}a_{i}\)\(\forall i\) (solid lines), and the corresponding minimal values of the witness over entangled states (dashed lines). ## 4 Conclusions and discussions In this paper, we discuss the problem of untrusted detectors in photonic experiments of entanglement detection. Even though the role of untrusted detectors is much more crucial for cryptographic applications and Bell tests, in this work we argue that malicious, or non-ideal, behavior of photodetectors can lead to false positive claims in simpler experiments of entanglement witnessing. We then analyze in detail the two main approaches to detection losses, namely the discard and the assignment strategies, and show that this analysis for a given entanglement witness can in both cases be cast as a semidefinite programming optimization problem. As an example, we analyze critical detection efficiencies for entanglement witnesses of pure two-qubit states. In particular, we show that the critical detection efficiency corresponding to the Bell state is \(\frac{1}{\sqrt{3}}\) for the discard strategy. For the assignment strategy we could show that the same value of \(\frac{1}{\sqrt{3}}\) can be attained, but the question whether this value can be reduced further by a suitable choice of an assignment is open. On a more fundamental level, our work introduces a new type of semi-device-independent paradigm, one in which only the detection part of the measurement process is untrusted, while the measurement setting, e.g., the measurement basis, is assumed to be characterized.
This is particularly relevant for the standard entanglement-based quantum key distribution (QKD), where the detectors are usually assumed to be fair. This assumption is difficult to justify because the most widely documented attacks against practical QKD are the detector blinding attacks, which boil down to gaining control of the detectors in the parties' devices. Additionally, dopant-level hardware Trojan attacks were demonstrated, which allow the manufacturer to place a malware in electronics, such as detector controllers, in a way impossible to detect by the user. We therefore believe that it is interesting to analyze security of QKD scheme in the introduced paradigm of untrusted detectors. As an additional motivation for this further work, one can notice that requirement on the detection efficiency reported in the current work is significantly lower than that in Bell test for the case of the Bell state. We thank Dagmar Bruss for interesting discussions. This research was made possible by funding from QuantERA, an ERA-Net cofund in Quantum Technologies (www.quantera.eu) under project eDICT. We acknowledge the support by the Foundation for Polish Science (IRAP project, ICTQT, contract no. MAB/2018/5, co-financed by EU within Smart Growth Operational Programme). This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project No. 441423094, and under Germany's Excellence Strategy - Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 - 390534769. ## Appendix A Discard strategy for the Bell witness Here we provide an explicit solution to the SDP in Eq. (15) for the Bell witness \(W_{\frac{\pi}{4}}\). This solution, more precisely the minimal value that \(\langle W_{\frac{\pi}{4}}\rangle\) can take for a given \(\eta\), is shown in Fig. 1. To describe the solution, i.e., to specify the operators \(\rho_{\lambda}\) for each value of \(\eta\), we need to introduce a few notations. Let \(\Lambda\) be the set of all binary strings of length 6. Each value of \(\lambda\), specified by a string, specifies the probabilities of detectors' clicks with the first three bits corresponding to the measurement settings of Alice, and the second three bits corresponding to the measurement settings of Bob. For example, for \(\lambda=(1,0,0,0,1,0)\), we have that \(p(\mathrm{c}^{A}|1,\lambda)=1\) and \(p(\mathrm{c}^{B}|2,\lambda)=1\), while the other probabilities are zero. Clearly, this choice of the alphabet of \(\lambda\) is sufficient for the problem in case of three measurement settings per party. Let us define the following states \[\rho_{x,x} \coloneqq\frac{1}{2}\left(|+,+\rangle\!\langle+,+|+|-,-\rangle\! \langle-,-|\right), \tag{1.1}\] \[\rho_{y,y} \coloneqq\frac{1}{2}\left(|+\mathrm{i},-\mathrm{i}\rangle\! \langle+\mathrm{i},-\mathrm{i}|+|-\mathrm{i},+\mathrm{i}\rangle\!\langle- \mathrm{i},+\mathrm{i}|\right),\] \[\rho_{z,z} \coloneqq\frac{1}{2}\left(|0,0\rangle\!\langle 0,0|+|1,1\rangle\! \langle 1,1|\right),\] which are separable states with the property that \(\langle\sigma_{1}\otimes\sigma_{1}\rangle_{\rho_{x,x}}=\langle\sigma_{3} \otimes\sigma_{3}\rangle_{\rho_{z,z}}=1\), and \(\langle\sigma_{1}\otimes\sigma_{1}\rangle_{\rho_{y,y}}=-1\). Here, we used the notation \(|+\mathrm{i}\rangle\) and \(|-\mathrm{i}\rangle\) to denote the \(+1\) and \(-1\) eigenstates of \(\sigma_{2}\). 
In terms of these states, the operators \(\rho_{\lambda}\) in our solution are specified to be the following
\[\begin{split}
\rho_{(0,0,0,0,0,0)}&=p_{0}\frac{1\otimes 1}{4},\quad \rho_{(1,1,1,1,1,1)}=p_{1}\frac{1}{3}(\rho_{x,x}+\rho_{y,y}+\rho_{z,z}),\\
\rho_{(1,0,0,1,0,0)}&=\rho_{(1,0,1,1,1,0)}=\rho_{(1,1,0,1,0,1)}=p_{2}\rho_{x,x},\quad \rho_{(1,0,0,1,1,1)}=\rho_{(1,1,1,1,0,0)}=p_{3}\rho_{x,x},\\
\rho_{(0,1,0,0,1,0)}&=\rho_{(0,1,1,1,1,0)}=\rho_{(1,1,0,0,1,1)}=p_{2}\rho_{y,y},\quad \rho_{(0,1,0,1,1,1)}=\rho_{(1,1,1,0,1,0)}=p_{3}\rho_{y,y},\\
\rho_{(0,0,1,0,0,1)}&=\rho_{(0,1,1,1,0,1)}=\rho_{(1,0,1,0,1,1)}=p_{2}\rho_{z,z},\quad \rho_{(0,0,1,1,1,1)}=\rho_{(1,1,1,0,0,1)}=p_{3}\rho_{z,z},\\
\rho_{(0,0,0,0,0,1)}&=\rho_{(0,0,0,0,1,0)}=\rho_{(0,0,0,1,0,0)}=\rho_{(0,0,1,0,0,0)}=\rho_{(0,1,0,0,0,0)}=\rho_{(1,0,0,0,0,0)}=p_{4}\frac{1\otimes 1}{4},
\end{split} \tag{1.2}\]
where the parameters \(p_{i}\in\mathbb{R}\), \(i\in\{0,1,2,3,4\}\), will be specified later. The rest of the operators \(\rho_{\lambda}\) are taken to be zero-trace. The objective function of the SDP in Eq. (15) can be calculated to be
\[\begin{split}
&\frac{1}{4}-\frac{1}{4\eta^{2}}\left(\sum_{\lambda\in\Lambda_{1}^{A}\cap\Lambda_{1}^{B}}\langle\sigma_{1}\otimes\sigma_{1}\rangle_{\rho_{\lambda}}-\sum_{\lambda\in\Lambda_{2}^{A}\cap\Lambda_{2}^{B}}\langle\sigma_{2}\otimes\sigma_{2}\rangle_{\rho_{\lambda}}+\sum_{\lambda\in\Lambda_{3}^{A}\cap\Lambda_{3}^{B}}\langle\sigma_{3}\otimes\sigma_{3}\rangle_{\rho_{\lambda}}\right)\\
&\qquad=\frac{1}{4}-\frac{1}{4\eta^{2}}\left(p_{1}+9p_{2}+6p_{3}\right),
\end{split} \tag{1.3}\]
while the constraints in Eq. (15) correspond to
\[\begin{split}
p_{1}+5p_{2}+4p_{3}+p_{4}&=\eta,\\
p_{1}+3p_{2}+2p_{3}&=\eta^{2},\\
p_{0}+p_{1}+9p_{2}+6p_{3}+6p_{4}&=1.
\end{split} \tag{1.4}\]
The observed state in Eq. (16) can be easily found to be
\[\begin{split}
\rho_{\mathrm{observed}}&=\frac{1\otimes 1}{4}+\frac{1}{4\eta^{2}}\left(\frac{p_{1}}{3}+3p_{2}+2p_{3}\right)\left(\sigma_{1}\otimes\sigma_{1}-\sigma_{2}\otimes\sigma_{2}+\sigma_{3}\otimes\sigma_{3}\right)\\
&=\frac{1\otimes 1}{4}\left(1-\frac{\frac{p_{1}}{3}+3p_{2}+2p_{3}}{\eta^{2}}\right)+\frac{1}{\eta^{2}}\left(\frac{p_{1}}{3}+3p_{2}+2p_{3}\right)|\Psi_{\frac{\pi}{4}}\rangle\!\langle\Psi_{\frac{\pi}{4}}|.
\end{split} \tag{1.5}\]
From the above expression, it is clear that as long as \(0\leq\frac{p_{1}}{3}+3p_{2}+2p_{3}\leq\eta^{2}\), the above density operator is positive semidefinite, and since this constraint is implied by Eqs. (1.4), these are the only constraints associated with the SDP.

Now, we can specify our solution to the original optimization problem in terms of the coefficients \(\{p_{i}\}_{i=0}^{4}\). For the case of \(\eta>\frac{1}{\sqrt{3}}\), the minimal value which can be attained is \(\frac{1}{4}-\frac{1}{4\eta^{2}}\), which is also clear from the expression in Eq. (14). This is achieved for the following values of the parameters,
\[p_{0}=p_{4}=0,\;p_{1}=\frac{3\eta^{2}-1}{2},\;p_{2}=\frac{(1-\eta)^{2}}{2},\;p_{3}=\frac{(1-\eta)(2\eta-1)}{2}. \tag{15}\]
For the case of \(\eta\leq\frac{1}{\sqrt{3}}\), the minimal value \(-\frac{1}{2}\) can be reached. The corresponding values of the parameters are \(p_{1}=0\) and
\[\begin{split} p_{0}=(1-3\eta)^{2},\;p_{2}=0,\;p_{3}=\frac{\eta^{2}}{2},\;p_{4}=\eta-2\eta^{2},&\quad\text{for }\eta\leq\frac{1}{3},\\ p_{0}=0,\;p_{2}=\frac{(1-3\eta)^{2}}{6},\;p_{3}=\frac{6\eta-1-7\eta^{2}}{4},\;p_{4}=\frac{1-3\eta^{2}}{6},&\quad\text{for }\eta\in\left(\frac{1}{3},\frac{1}{\sqrt{3}}\right].\end{split} \tag{16}\]
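The parametrization above can be checked mechanically. The following short Python sketch (not part of the original paper) verifies, for a grid of efficiencies \(\eta\), that the coefficients given in Eqs. (15) and (16) are nonnegative, satisfy the constraints in Eq. (1.4), and reproduce the claimed minimal values of the objective in Eq. (1.3), namely \(\frac{1}{4}-\frac{1}{4\eta^{2}}\) for \(\eta>\frac{1}{\sqrt{3}}\) and \(-\frac{1}{2}\) otherwise.

```python
import numpy as np

def params(eta):
    """Coefficients p0..p4 of the Appendix A solution, per Eqs. (15) and (16)."""
    if eta > 1/np.sqrt(3):
        p1 = (3*eta**2 - 1)/2
        p2 = (1 - eta)**2/2
        p3 = (1 - eta)*(2*eta - 1)/2
        p0 = p4 = 0.0
    elif eta <= 1/3:
        p0 = (1 - 3*eta)**2
        p1 = p2 = 0.0
        p3 = eta**2/2
        p4 = eta - 2*eta**2
    else:
        p0 = p1 = 0.0
        p2 = (1 - 3*eta)**2/6
        p3 = (6*eta - 1 - 7*eta**2)/4
        p4 = (1 - 3*eta**2)/6
    return p0, p1, p2, p3, p4

for eta in np.linspace(0.05, 1.0, 20):
    p0, p1, p2, p3, p4 = params(eta)
    # constraints of Eq. (1.4)
    assert np.isclose(p1 + 5*p2 + 4*p3 + p4, eta)
    assert np.isclose(p1 + 3*p2 + 2*p3, eta**2)
    assert np.isclose(p0 + p1 + 9*p2 + 6*p3 + 6*p4, 1.0)
    assert min(p0, p1, p2, p3, p4) >= -1e-12      # coefficients of (subnormalized) states
    # objective of Eq. (1.3)
    obj = 0.25 - (p1 + 9*p2 + 6*p3)/(4*eta**2)
    expected = 0.25 - 1/(4*eta**2) if eta > 1/np.sqrt(3) else -0.5
    assert np.isclose(obj, expected)
print("Appendix A parametrization passes the constraint and objective checks.")
```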
## Appendix B Necessity of the condition in Eq. (19) for the assignment strategy in the case of different detector efficiencies

Here, we show that if the detection efficiencies can be different for Alice and Bob, then the condition in Eq. (19) is also necessary. More precisely, there are values of the detection efficiencies \(\eta^{A}\) and \(\eta^{B}\) for which this condition is necessary. First, consider the case of \(\eta^{A}=0\) and \(\eta^{B}=1\). For any separable state \(\rho_{s}\) and a witness \(W\), the following must hold true, \(\mathrm{Tr}(W\alpha\otimes\rho_{s}^{B})\geq 0\), where \(\rho_{s}^{B}=\mathrm{Tr}_{A}[\rho_{s}]\). If we take the Bell state witness \(W_{\frac{\pi}{4}}\), the condition further simplifies to \(\mathrm{Tr}[\alpha^{\intercal}\rho_{s}^{B}]\leq 1\) for all states \(\rho_{s}^{B}\). Inserting into the latter a particular state of the form
\[\rho_{s}^{B}=\frac{1}{2}+\frac{1}{2}\sum_{i=1}^{3}\frac{a_{i}}{\sqrt{\sum_{j=1}^{3}a_{j}^{2}}}\sigma_{i}, \tag{17}\]
directly results in the condition \(\sum_{i=1}^{3}a_{i}^{2}\leq 1\). In the same way, we can prove the necessity of the constraint on the \(b_{i}\)'s for the situation when \(\eta^{A}=1\) and \(\eta^{B}=0\).

## Appendix C Assignment strategy for the Bell witness

The solution to the SDP in Eq. (24) for the assignment strategy can be taken in the same form as in the case of the discard strategy, as described in Appendix A. In particular, one can take the states \(\rho_{\lambda}\) as in Eq. (1.2), which leads to constraints of the SDP of the same form as in Eq. (1.4). The particular solution in terms of the parameters \(p_{0},p_{1},p_{2},p_{3}\) and \(p_{4}\) is also the same as in Eq. (15) for \(\eta>\frac{1}{\sqrt{3}}\) and as in Eq. (16) for \(\eta\leq\frac{1}{\sqrt{3}}\). The only difference with the case of the discard strategy is the optimal value of the objective function, which is equal to \(0\) for \(\eta>\frac{1}{\sqrt{3}}\) and \(\frac{1}{4}-\frac{3}{4}\eta^{2}\) for \(\eta\leq\frac{1}{\sqrt{3}}\).
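As a complementary numerical illustration (not part of the original paper), the sketch below evaluates the Bell-state value \(\langle W_{\frac{\pi}{4}}\rangle=\frac{1}{4}-\frac{3}{4}\eta^{2}\) quoted above for the assignment strategy with \(a_{i}=b_{j}=0\). It assumes the standard Bell-state witness \(W_{\frac{\pi}{4}}=\frac{1}{4}\left(1\otimes 1-\sigma_{1}\otimes\sigma_{1}+\sigma_{2}\otimes\sigma_{2}-\sigma_{3}\otimes\sigma_{3}\right)\), which is consistent with Eqs. (1.3) and (1.5) but whose explicit form from Eq. (20) is not reproduced in this excerpt, and it models a no-click on either side as the assigned outcome \(0\), so that each measured correlator is \(\eta^{2}\) times the ideal one. The value crosses the separable bound \(0\) exactly at \(\eta=\frac{1}{\sqrt{3}}\), matching the critical efficiency discussed in the main text.

```python
import numpy as np

# Pauli matrices and the Bell state |Psi_{pi/4}> = (|00> + |11>)/sqrt(2)
s = [np.array([[0, 1], [1, 0]]),        # sigma_1
     np.array([[0, -1j], [1j, 0]]),     # sigma_2
     np.array([[1, 0], [0, -1]])]       # sigma_3
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(psi, psi.conj())

def witness_value(eta, rho):
    """<W_{pi/4}> when each correlator sigma_i (x) sigma_i is measured with detector
    efficiency eta and no-click events are assigned the outcome 0, so the measured
    correlator is eta**2 times the ideal one.  Assumed witness:
    W_{pi/4} = (1/4)(1 (x) 1 - s1(x)s1 + s2(x)s2 - s3(x)s3)."""
    signs = [-1, +1, -1]
    corr = sum(sign * eta**2 * np.trace(np.kron(si, si) @ rho).real
               for sign, si in zip(signs, s))
    return 0.25 + 0.25 * corr

for eta in (1.0, 0.8, 1/np.sqrt(3), 0.5):
    w = witness_value(eta, rho_bell)
    assert np.isclose(w, 0.25 - 0.75 * eta**2)
    print(f"eta = {eta:.3f}:  <W> = {w:+.4f}")
# <W> reaches the separable bound 0 at eta = 1/sqrt(3) ~ 0.577.
```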
2307.16476
Erbium-based multifuncional compounds as molecular microkelvin-tunable driving-sensing units
We demonstrate the selective control of the magnetic response and photoluminescence properties of Er3+ centers with light, by associating them with a highly conjugated beta-diketonate (1,3-di(2-naphthyl)-1,3-propanedione) ligand. We demonstrate this system to be an optically-pumped molecular compound emitting in infra-red, which can be employed as a precise heat-driving and detecting unit for low temperatures.
Jarosław Rybusiński, Tomasz Fąs, Pablo Martin-Ramos, Victor Lavín, Jacek Szczytko, Jan Suffczyński, Inocencio R. Martín, Jesus Martin-Gil, Manuela Ramos Silva, Bruno Cury Camargo
2023-07-31T08:14:05Z
http://arxiv.org/abs/2307.16476v1
# Erbium-based multifuncional compounds as molecular microkelvin-tunable driving-sensing units. ###### Abstract We demonstrate the selective control of the magnetic response and photoluminescence properties of Er\({}^{3+}\) centers with light, by associating them with a highly conjugated \(\beta\)-diketonate (1,3-di(2-naphthyl)-1,3-propanedione) ligand. We demonstrate this system to be an optically-pumped molecular compound emitting in infra-red, which can be employed as a precise heat-driving and detecting unit for low temperatures. ## I Introduction Lanthanide ion coordination compounds possess fascinating physical properties and important technological applications, which are often governed by their magnetism and optical responses [1; 2; 3]. Unfortunately, such ions usually present poor absorption characteristics, causing a severe hindrance to their optically-pumped luminescence. Luckily, the photo-excitation of magnetically-relevant emitting levels in these materials can be achieved almost at will by employing organic ligands as chromophores. These components strongly absorb light at selected wavelengths, sensitizing the magnetic ions through intramolecular energy transfer - called the "antenna effect" [4; 5]. \(\beta\)-diketones containing aromatic groups are prime candidates for such a role, as they exhibit strong absorption over a wide wavelength range and are known to provide efficient energy transfer to lanthanide ions [6; 7]. Among possible choices, the highly conjugated \(\beta\)-diketonate 1,3-di(2-naphthyl)-1,3-propanedione (Hdnm) is expected to sensitize coordinated Er(III) and Eu(III) upon excitation in the visible range (\(>400\) nm) [8; 9; 10; 11; 12]. However, the crystallization of organic compounds remains a challenging task, with growth attempts often suffering from low rates, low yields, and polymorphic specimens with minute fractions of the desired phase [13]. Optical measurements in such small quantities - tenths or hundredths of micrograms - pose a challenge in terms of proper collection and amplification of the scattered light. An alternative is to explore thermodynamic properties at equilibrium (e.g. specific heat) of illuminated bulk samples. Conventional calorimetry, however, involves the same challenges associated with optical experiments in very small specimens, and dedicated instrumentation is required (see, e.g., ref. [14]). On the other hand, the strong magnetic response of rare-earth-based compounds makes such materials excellent candidates for magnetic measurements. Indeed, conventional commercial SQUID magnetometry is capable of detecting the magnetic response of as little as \(10^{13}\) spins, which for the typical Er-based composites considered here, corresponds to \(10^{12}\) molecules (\(\sim 4\) ng). This precision allows one to observe, a priori, variations in the magnetic response or the magnetic ion when illuminating the molecular compound. In this work, we explore this possibility by studying the magnetism of [Er(dnm)\({}_{3}\)(bipy)] subjected to light excitation. We demonstrate that SQUID magnetometry in this strong paramagnet acts as a viable thermodynamical alternative to probe the optical and thermal properties of minute sample quantities. Possible applications of such a system in thermometry are discussed. ## II Samples and experimental setups The sample considered in this study is tris(1,3-di(2-naphthyl)-1,3-propanedione)mono(2,2'-bipyridine)erbium(III), [Er(dnm)\({}_{3}\)(bipy), C\({}_{79}\)H\({}_{35}\)ErN\({}_{2}\)O\({}_{6}\)]. 
This novel compound, in the form of a yellow powder, was synthesized in-house following the procedure outlined in the supplementary information (SI) [15]. Its structure consists of a central rare-earth element (Er\({}^{3+}\)), surrounded by six oxygen atoms from \(\beta\)-diketonate ligands and two nitrogen atoms from a 2,2'-bipyridine neutral molecule (see Fig. 1), resulting in a distorted square antiprism chemical environment for the lanthanide. Chemical and structural analysis confirmed a pure, yet disordered, solid (see the SI for the full characterization [15]). After the synthesis, the physical properties of the material were probed through optical and magnetic measurements. Optical characterization was achieved through absorption and luminescence measurements in the 5.0 K \(\leq\) T \(\leq\) 300.0 K temperature interval. They were performed by illuminating the sample with laser light in the wavelength range between 400 nm and 800 nm, with a linewidth of 2 nm, using an LLTF Contrast powered by an NKT 8 W supercontinuum laser. The scattered and emitted light responses were captured by an Andor SR-500i spectrometer with an IDus InGaAs array. Magnetic measurements were carried out on a Quantum Design MPMS 7T platform, in the 2 K \(\leq\) T \(\leq\) 300 K temperature range, under magnetic fields up to 7 T [16]. The setup was adapted with an optical fiber window to allow illumination of the sample during magnetic measurements. For this purpose, a spectrometer-filtered Xenon arc discharge light bulb was employed.

## III Results

Temperature-scaled magnetic susceptibility vs. temperature (\(\chi\)T \(\times\) T) and magnetization vs. magnetic field (M \(\times\) H) for the compound under study are shown in Fig. 2. Measurements revealed a clear paramagnetic-like behavior, which was, however, not well described by the conventional Curie law (\(\chi\propto\) T\({}^{-1}\)) above 6 K. This can be attributed to the chemical environment of the Er\({}^{3+}\) ion in [Er(dnm)\({}_{3}\)(bipy)]. Indeed, simulations using the PHI software [17] reproduced well the M \(\times\) H and \(\chi\)T \(\times\) T sample behavior by assuming isolated \(J=15/2\) magnetic centers in a \(D_{4}\) crystalline environment, in agreement with XRD data (see the SI [15]). Strikingly, upon illumination, a remarkable variation of the sample magnetic response was observed. Its magnitude, denoted \(|\Delta\)M\(|\equiv|\)M\({}_{\rm dark}-\)M\({}_{\rm lit}|\), was determined by performing consecutive magnetization measurements for the sample without (M\({}_{\rm dark}\)) and under (M\({}_{\rm lit}\)) irradiation at fixed temperatures. The amplitude of \(\Delta\)M closely followed the intensity profile of the diffuse reflectance spectrum of the material, and was most pronounced at low temperatures (see Fig. 3). Among the features visible both in magnetic and optical measurements, a broad absorption band in the 200-500 nm range can be mainly attributed to the \(\pi\)-\(\pi^{*}\) transitions of the dnm \(\beta\)-diketonate [18], with overlapping bands from the 2,2'-bipyridine organic ligand (240-290 nm region) [19] and from the \({}^{4}\)I\({}_{15/2}\rightarrow({}^{2}\)G, \({}^{4}\)F, \({}^{2}\)H\()_{9/2}\) Er\({}^{3+}\) transition (407 nm). Above 490 nm, sharp peaks are associated with intra-configurational \(4f^{11}-4f^{11}\) electronic transitions starting from the \({}^{4}\)I\({}_{15/2}\) ground state of the Er\({}^{3+}\) magnetic center, superimposed on the ligand's absorption tail [20].
The excitation spectrum of the transition at \(\lambda\approx 1550\) nm (\({}^{4}\)I\({}_{13/2}\rightarrow{}^{4}\)I\({}_{15/2}\), see the inset in Fig. 3) featured a broad peak when the sample was pumped with a wavelength of about 450 nm. Such a feature is significantly red-shifted compared to other rare-earth-based coordination compounds [10; 18], including those based on \(\beta\)-diketonate complexes [19]. Its occurrence is closely related to the absorption maximum of the organic ligand used here, which occurs in the UV-Vis range (see Fig. 3 and the SI [15]). Such a result strongly indicates the sensitization of the emissive metal center in our compound by the antenna effect. To better understand how absorption through the ligand influenced the magnetic center of the molecule, we measured the temporal evolution of the sample's photoluminescence for excitation pulses with wavelength \(\lambda_{\rm exc}=375\) nm. This value is centered around the broad UV absorption band of the (dnm)\({}_{3}\) ligand (see Fig. 3).

Figure 1: Chemical structure of Er(dnm)\({}_{3}\)(bipy).

Figure 2: \(\chi\)T vs. temperature for the sample considered herein, obtained at \(\mu_{0}H=0.2\) T. The inset shows the magnetic response per molecule measured at (from top to bottom) T = 2 K, 5 K, 10 K, 50 K, 100 K, 200 K and 300 K. The red line in the main panel and the dashed lines in the inset represent magnetization curves obtained with PHI [17] for an isolated \(J=15/2\) Er\({}^{3+}\) magnetic ion in a distorted \(D_{4d}\) crystallographic environment. The y-axis experimental uncertainty is not visible in the scale shown.

The results revealed that an initially strong fluorescent emission at \(\lambda\approx 480\) nm quickly gave way to an emission line centered at \(\lambda\approx 620\) nm (see Fig. 4). The latter is ascribed to the triplet state of the organic ligand, and largely overlaps with the \({}^{4}\)F\({}_{9/2}\) absorption line of Er\({}^{3+}\) (for a full description, see the SI [15]). This results in the pumping of the rare earth ion through a resonant energy transfer process [20], followed by relaxation and radiative decay (see the diagram provided in the SI [15], Fig. S7), thus yielding the characteristic luminescence spectra at around 1550 nm showcased in the inset of Fig. 3. The triplet state at \(\lambda\approx 620\) nm exhibited a non-exponential decay, with characteristic lifetimes of a few nanoseconds (see the SI [15], Sec. V). These relatively small values suggest an efficient ligand-to-metal energy transfer in the system [21]. The PL decay of the Er\({}^{3+}\) \({}^{4}\)I\({}_{13/2}\) multiplet (\(\lambda\approx 1550\) nm), however, exhibited a single exponential behavior, which indicates a consistent coordination environment around the lanthanide ion (see the SI [15], Fig. S9). The characteristic lifetime extracted for this transition ranged around \(\tau\approx 1.3\) \(\mu\)s, comparable to other Er\({}^{3+}\) compounds found in the literature [19; 22; 23]. Nevertheless, considering the radiative lifetime of Er\({}^{3+}\) at approx. 1-2 ms allows the estimation of the quantum efficiency of the transition at \(\sim 0.1\%\) [24]. This suggests that most energy captured by the organic ligands is absorbed by the material, rather than being re-emitted by the rare earth center. Such a small quantum efficiency does not allow \(\Delta\)M(\(\lambda\)), shown in Fig. 3, to be mainly attributed to a variation of the magnetic state of the Er\({}^{3+}\) ion.
Indeed, assuming that each incident photon is absorbed by a molecule, leading to a change of the magnetic state of the Er ion, the required power delivery at \(\lambda=520\) nm to induce \(\Delta\)M\(\approx 10^{-5}\) emu at T = 2 K would be \(\approx 4\) W. Such a value is unrealistic in our setup. Instead, at the highest applied power, the population of excited Er ions in dynamical equilibrium is not larger than \(10^{8}\) (assuming a long, 10 \(\mu\)s relaxation time). This is far below the detection threshold of \(10^{13}\) spins for the SQUID magnetometer. Instead, the change in the magnetic response upon illumination was directly correlated with the change of M with T. This is shown in Fig. 5 for the brightest excitation available in our experimental setup, which was \(P_{100}=1170\) \(\mu\)W at the sample location. In the figure, a remarkable overlap is observed between \(d\)M/\(d\)T and \(|\Delta\)M\(|/\Delta\)T, with \(\Delta\)T a constant representing a temperature change. This result strongly suggests heat as the driving mechanism behind the modulation of the sample magnetization, following \(\Delta\)M = \(d\)M/\(d\)T \(\times\) \(\Delta\)T.

Figure 4: Time-resolved evolution of the PL emission in the visible range of Er(dnm)\({}_{3}\)(bipy).

Figure 3: Diffuse reflectance (black line, left axis) and variation of magnetic response measured at T = 2 K and \(\mu_{0}H=0.2\) T (\(|\Delta\)M\(|\equiv|\)M\({}_{\rm dark}-\)M\({}_{\rm lit}|\), red points, right axis) as a function of wavelength. The labeled absorption lines correspond to transitions from the \({}^{4}\)I\({}_{15/2}\) ground state of the Er\({}^{3+}\) ion, while the smooth background is associated with the organic ligand. The inset shows the photoluminescence measured at \(\lambda=1550\) nm (pointed by an arrow in the main panel). The labelled peaks in the inset are associated with the corresponding Er absorption lines of the main panel, while the broad maximum centered at \(\lambda\approx 450\) nm is due to the energy transfer from the organic ligand to the metallic ion.

For the case shown in the figure, the variation in sample temperature is estimated at \(\Delta\)T \(\approx\) 0.25 K. Such a value could indeed be controlled by changing the intensity of the incident light. This is illustrated in the inset of Fig. 5, through the demonstration that \(|\Delta\)M\(|\) was well described by the phenomenological relation \(\Delta\)M \(\propto(P/P_{100})^{0.8}\) over the 2.1 \(\mu\)W \(\leq P\leq\) 1170 \(\mu\)W interval. Nonetheless, it should be noted that the power delivered to the sample can be continuously tuned by controlling the excitation wavelength, and not only its strength. This occurs because the absorption background of the molecule is non-monotonic (see Fig. 3). That is, the compound is heated more intensely for UV wavelengths and around absorption lines. For \(\lambda\approx\) 520 nm, the relation reads \(\Delta\)T = \(0.25\times(P/P_{100})^{0.8}\) K. It should be stressed that this possibility results from the choice of a ligand with broad-band energy absorption, bound to a rare-earth magnetic ion. The former acts as the heating element of the molecule, whereas the latter provides a measurable temperature-dependent quantity. Knowing the M\({}_{\rm dark}\)(T) behavior of the sample thus allows a fine control of the system's temperature through tracking the magnetic response of the material in the presence of light.
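To make the bookkeeping in the preceding paragraph concrete, the following minimal sketch (not from the paper; the magnetization curve is a synthetic Curie-like stand-in, not measured data) converts an illumination-induced \(|\Delta\)M\(|\) into an effective temperature change via \(\Delta\)M = \(d\)M/\(d\)T \(\times\) \(\Delta\)T, and evaluates the quoted empirical heating law \(\Delta\)T = \(0.25\times(P/P_{100})^{0.8}\) K at \(\lambda\approx 520\) nm.

```python
import numpy as np

P100 = 1170e-6            # maximal optical power at the sample, in W (from the text)

def delta_T_from_power(P):
    """Empirical heating law quoted for lambda ~ 520 nm: dT = 0.25*(P/P100)**0.8 K."""
    return 0.25 * (P / P100)**0.8

# Toy stand-in for the measured dark magnetization curve (NOT the paper's data):
# a Curie-like M(T) = C/T at fixed field, used only to illustrate the bookkeeping.
T = np.linspace(2.0, 10.0, 200)          # K
M_dark = 1.0e-3 / T                      # emu (placeholder values)
dMdT = np.gradient(M_dark, T)            # numerical dM/dT

# Illumination at T = 2 K with the full power: predicted temperature rise and |dM|
dT = delta_T_from_power(P100)            # = 0.25 K
dM = np.abs(dMdT[0]) * dT                # |dM| = |dM/dT| * dT
print(f"Predicted dT at full power: {dT:.3f} K, corresponding |dM| ~ {dM:.2e} emu")

# Conversely, a measured |dM| can be converted back into a temperature change:
dT_inferred = dM / np.abs(dMdT[0])
print(f"Inferred dT from |dM|: {dT_inferred:.3f} K")
```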
Considering a realistic resolution of the SQUID magnetometer utilized here, of \(10^{-7}\) emu, the precision of the temperature-driving system is \(\Delta\)T\({}_{\rm min}\approx 10^{-7}\) emu \(\times(d\)M/\(d\)T\()^{-1}\). For the compound chosen, \(d\)M/\(d\)T \(\propto\) T\({}^{-1.8}\) (see Fig. 5), which translates to an enhanced resolution at lower temperatures. In the present work, the value attained in \(\Delta\)M(\(\lambda,\)T = 2 K) allows us to infer a resolution \(\Delta\)T\({}_{\rm min}\approx\) 20 \(\mu\)K. The mass of the sample (0.2 mg) sets the sensitivity of our sample/heating device at 100 \(\mu\)K/mg. However, this sensitivity is likely to be at least two orders of magnitude better, as absorption occurs in the sample surface region over a depth of \(\sim\)100 nm, which may be estimated at not more than 10% of the total volumetric content. Considering the scenario outlined above, it is feasible to embed this magnetic compound in a matrix of interest, and track the temperature of the latter by measuring the magnetic properties of the Er complex. Provided that the molecular compound remains in a dilute regime, is chemically passive in relation to the matrix, and that crystalline fields introduced by the matrix (if any) remain weak in comparison with local bonds, no major variations in the magnetic response are expected. This occurs because the main factor dictating the magnetic properties of weakly interacting molecular compounds is the ligand field around the magnetic center, which is sensitive to the molecular configuration [25]. Similar approaches to temperature sensing have been employed in the past, e.g. through the use of the magnetic response of magnetite [26] and paramagnetic salts [27], or through tracking the photoluminescence response of organic-rare-earth compounds [3]. Other applications have also been suggested for Er-based materials (see e.g. [24]), relying on their spectroscopic properties. However, here we combine both the optical and magnetic properties, resulting in a sensing-driving device for fine temperature control. We stress that the choice of a hybrid Er-organic compound poses a decisive advantage over the simple inclusion of magnetic centers as thermometers (as in the case of magnetite [26], paramagnetic salts [27], or single Er ions [24]). In addition to the possibility of passivating the magnetic ion with respect to the matrix of interest, the organic "antenna" of these molecules can also have absorption lines engineered at wavelengths for which the matrix is transparent [6; 7]. This ensures the delivery of thermal power to the system at the detecting centers (rather than at the sample surface), which can be (for example) diluted across the material to ensure homogeneous heating/sensing. We also note that the resolution reported by us depends on the magnetic response of the material under consideration, and may be altered by engineering a desired M \(\times\) T slope. Taking into consideration the relatively flexible operational temperature range, the possibility of a tunable optical excitation, and the relatively simple synthesis method for the material shown here, our reported resolution of 100 \(\mu\)K/mg is competitive with other existing alternatives with detection limits in the \(\mu\)K range. Among them, state-of-the-art low-temperature thermometry using on-chip Coulomb blockade sensors yields a precision of 1% for temperatures around 10 mK [28].
Albeit very powerful, such a technique comes at a much larger cost, requires electrodes, is confined to low temperatures, and demands complex microfabrication processes. Another interesting approach makes use of an infra-red pyrometer operating in a differential configuration on thermally-stable surroundings. Such an instrument is reported to reach a remarkable precision around 30 \(\mu\)K at room temperature, but is a passive technique fit for chemical and biological processes [29]. Finally, opto-mechanical measurements probing vibrating modes of nanometric membranes yield resolutions around 15 \(\mu\)K [30]. Such impressive values, however, are only suitable for on-chip detection, as they are inherently sensitive to the environment surrounding the membranes. The method outlined here by us, in contrast, requires a small magnetic field to probe temperature, but is contact-free and is based on a compound that may be dispersed in the material being probed. This approach eliminates the electronic heat contribution that may otherwise disrupt measurements (see e.g. [31]) and ensures thermalization with the sample. Such an approach is promising for systems in which electrodes are unnecessary, as it minimizes conductive heat transfer while allowing selective heating by tuning the light excitation wavelength at a fixed output power.

## IV Conclusions

In summary, we have established the magnetic and optical properties of a newly-synthesized magnetic molecule based on a \(\beta\)-diketonate ligand attached to a rare-earth Er\({}^{3+}\) ion. The presence of the organic ligand in this compound acts as an absorption center (or "antenna") for the incoming radiation. Part of the absorbed energy is transferred towards the Er\({}^{3+}\), which causes the material to act as an optically-pumped IR-emitting compound. The remaining energy is absorbed by the material, which then acts as a wavelength-tunable heating source. Its temperature can be minutely tracked through the strong paramagnetic response of the embedded magnetic ion. The resolution of this temperature tracking improves as the temperature is lowered, reaching the value of 20 \(\mu\)K at T = 2 K for the compound shown here. By properly choosing the organic antenna attached to the Er\({}^{3+}\) ion, the engineering of local heaters/thermometers embedded in a sample of interest may be envisaged. The system can be homogeneously driven at wavelengths for which the sample is transparent, while simultaneously having its temperature magnetically probed.

## Acknowledgements

This work was carried out with the support of FCT (Fundação para a Ciência e a Tecnologia) UIDB/04564/2020, and UIDP/04564/2020.

**Data availability.** The data that support the findings of this study are available from the corresponding author upon reasonable request.
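As a small numerical aside (a sketch, not part of the original paper), the temperature-resolution scaling implied by the reported numbers can be tabulated directly: with a magnetometer resolution of \(10^{-7}\) emu and \(d\)M/\(d\)T \(\propto\) T\({}^{-1.8}\), the resolution scales as \(\Delta\)T\({}_{\rm min}\propto\) T\({}^{1.8}\); the prefactor below is anchored to the quoted 20 \(\mu\)K at T = 2 K rather than derived from absolute units.

```python
import numpy as np

SQUID_RESOLUTION = 1e-7          # emu, realistic magnetometer resolution (from the text)
DTMIN_AT_2K = 20e-6              # K, resolution quoted at T = 2 K

def dT_min(T):
    """Temperature resolution dT_min(T) ~ SQUID_RESOLUTION / (dM/dT).
    With dM/dT ~ T**-1.8 (as reported), dT_min scales as T**1.8; the prefactor
    is fixed by the quoted 20 microkelvin at 2 K rather than by absolute units."""
    return DTMIN_AT_2K * (T / 2.0)**1.8

for T in (2, 5, 10, 50, 100, 300):
    print(f"T = {T:5.0f} K  ->  dT_min ~ {dT_min(T)*1e6:10.1f} microkelvin")
```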
2309.16864
Extremizers of the Alexandrov--Fenchel inequality within a new class of convex bodies
Mixed volumes in $n$-dimensional Euclidean space are functionals of $n$-tuples consisting of convex bodies $K,L,C_1,\ldots,C_{n-2}$. The Alexandrov--Fenchel inequalities are fundamental inequalities between mixed volumes of convex bodies, which cover as very special cases many important inequalities between basic geometric functionals. The problem of characterizing completely the equality cases in the Alexandrov--Fenchel inequality is wide open. Major recent progress was made by Yair Shenfeld and Ramon van Handel \cite{SvH22,SvH23+}, in particular they resolved the problem in the cases where $K,L$ are general convex bodies and $C_1,\ldots,C_{n-2}$ are polytopes, zonoids or smooth bodies (under some dimensional restriction). We introduce the class of polyoids, which includes polytopes, zonoids and triangle bodies, and characterize polyoids by using generating measures. Based on this characterization and Shenfeld and van Handel's contribution, we extend their result to a class of convex bodies containing all polyoids and smooth bodies. Our result is stated in terms of the support of the mixed area measure of the unit ball $B^n$ and $C_1,\ldots,C_{n-2}$. A geometric description of this support is provided in the accompanying work \cite{HugReichert23+}.
Daniel Hug, Paul A. Reichert
2023-09-28T21:26:45Z
http://arxiv.org/abs/2309.16864v1
# Extremizers of the Alexandrov-Fenchel inequality within a new class of convex bodies ###### Abstract Mixed volumes in \(n\)-dimensional Euclidean space are functionals of \(n\)-tuples consisting of convex bodies \(K,L,C_{1},\ldots,C_{n-2}\). The Alexandrov-Fenchel inequalities are fundamental inequalities between mixed volumes of convex bodies, which cover as very special cases many important inequalities between basic geometric functionals. The problem of characterizing completely the equality cases in the Alexandrov-Fenchel inequality is wide open. Major recent progress was made by Yair Shenfeld and Ramon van Handel [23, 24], in particular they resolved the problem in the cases where \(K,L\) are general convex bodies and \(C_{1},\ldots,C_{n-2}\) are polytopes, zonoids or smooth bodies (under some dimensional restriction). We introduce the class of polyoids, which includes polytopes, zonoids and triangle bodies, and characterize polyoids by using generating measures. Based on this characterization and Shenfeld and van Handel's contribution, we extend their result to a class of convex bodies containing all polyoids and smooth bodies. Our result is stated in terms of the support of the mixed area measure of the unit ball \(B^{n}\) and \(C_{1},\ldots,C_{n-2}\). A geometric description of this support is provided in the accompanying work [11]. MSC-classes 2020.52A39, 52A20, 52A21 52A40 Keywords.Polytope, zonoid, polyoid, macroid, Alexandrov-Fenchel inequality, generating measure, mixed area measure ## 1 Introduction Mixed volumes of convex bodies (nonempty compact convex sets) in Euclidean space \(\mathbb{R}^{n}\), \(n\geq 2\), are symmetric functionals of \(n\)-tuples of convex bodies. They naturally arise as coefficients of polynomial expansions of nonnegative Minkowski combinations of convex bodies. Writing \(\mathrm{V}\) for the volume functional (Lebesgue measure) and \(\alpha_{1}K_{1}+\cdots+\alpha_{m}K_{m}\) for the Minkowski combination of the convex bodies \(K_{1},\ldots,K_{m}\subset\mathbb{R}^{n}\) with nonnegative coefficients \(\alpha_{1},\ldots,\alpha_{m}\in\mathbb{R}\), we have \[\mathrm{V}(\alpha_{1}K_{1}+\cdots+\alpha_{m}K_{m})=\sum_{i_{1},\ldots,i_{n}=1}^ {m}\mathrm{V}(K_{i_{1}},\ldots,K_{i_{n}})\alpha_{i_{1}}\cdots\alpha_{i_{n}}, \tag{1}\] where \(\mathrm{V}(K_{i_{1}},\ldots,K_{i_{n}})\) is called the mixed volume of \(K_{i_{1}},\ldots,K_{i_{n}}\). As symmetric functions of their \(n\) arguments, mixed volumes are uniquely determined by this expansion. We refer to [20, Chap. 5.1] or [12, Chap. 3.3] for an introduction to mixed volumes. Conversely, the mixed volume \(\mathrm{V}(K_{1},\ldots,K_{n})\) of a given \(n\)-tuple of convex bodies \(K_{1},\ldots,K_{n}\) can be obtained as an alternating sum of volumes of Minkowski sums, that is, \[\mathrm{V}(K_{1},\ldots,K_{n})=\frac{1}{n!}\sum_{k=1}^{n}(-1)^{n+k}\sum_{1\leq i _{1}<\cdots<i_{k}\leq n}\mathrm{V}(K_{i_{1}}+\cdots+K_{i_{k}}). \tag{2}\] While relations (1) and (2) can be efficiently employed for introducing mixed volumes and understanding some of their basic properties, their usefulness in deriving inequalities for mixed volumes seems to be limited. We refer to Schneider [20, Notes for Sect. 5.1] for further background information. A deep inequality for mixed volumes of convex bodies, with many consequences and applications to diverse fields, has been found and established by Alexandrov [1] (see Schneider [20, Notes for Sect. 7.3], also for some historical comments). 
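As a quick numerical illustration of relation (2) (a sketch, not part of the paper): for axis-parallel boxes in \(\mathbb{R}^{3}\), Minkowski sums are again boxes with summed edge lengths, and the mixed volume has the closed form \(\mathrm{V}(B_{1},B_{2},B_{3})=\mathrm{per}(A)/3!\), where \(A\) is the matrix of edge lengths. The script below evaluates (2) and checks it against this formula.

```python
import math
import numpy as np
from itertools import combinations, permutations

def box_volume(edges):
    return float(np.prod(edges))

def mixed_volume_boxes(boxes):
    """Mixed volume V(B_1,...,B_n) of axis-parallel boxes via formula (2);
    the Minkowski sum of such boxes is again a box with summed edge lengths."""
    n = len(boxes)
    total = 0.0
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            total += (-1)**(n + k) * box_volume(np.sum([boxes[i] for i in idx], axis=0))
    return total / math.factorial(n)

# Three boxes in R^3, given by their edge lengths
boxes = [np.array([1.0, 2.0, 3.0]),
         np.array([2.0, 1.0, 0.5]),
         np.array([0.5, 4.0, 1.0])]

mv = mixed_volume_boxes(boxes)
# Closed form for boxes: V(B_1,B_2,B_3) = per(A)/3!, with A_ij the j-th edge of B_i
perm = sum(np.prod([boxes[i][j] for j, i in enumerate(sigma)])
           for sigma in permutations(range(3)))
assert np.isclose(mv, perm / 6)
print(f"V(B1,B2,B3) = {mv:.4f}")
```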
**Theorem** (Alexandrov-Fenchel Inequality).: _Let \(K,L\subset\mathbb{R}^{n}\) be convex bodies, and let \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-2})\) be an \((n-2)\)-tuple of convex bodies in \(\mathbb{R}^{n}\). Then_ \[\mathrm{V}(K,L,\boldsymbol{\mathcal{C}})^{2}\geq\mathrm{V}(K,K,\boldsymbol{ \mathcal{C}})\,\mathrm{V}(L,L,\boldsymbol{\mathcal{C}}),\] (AFI) _where \(\mathrm{V}(K,L,\boldsymbol{\mathcal{C}}):=\mathrm{V}(K,L,C_{1},\ldots,C_{n-2})\)._ We state the Alexandrov-Fenchel inequality in a second version. It makes use of a linear extension of mixed volumes, known already to Alexandrov, to differences of support functions of convex bodies (see [20, Sect. 5.2]). Such extensions turned out to be useful in proofs of the inequality. For a convex body \(K\subset\mathbb{R}^{n}\), the support function \(h_{K}:\mathbb{R}^{n}\to\mathbb{R}\) is defined by \(h_{K}(u):=\max\{\langle x,u\rangle:x\in K\}\), where \(\langle\cdot\,,\cdot\rangle\) denotes the Euclidean scalar product. The support function is positively homogeneous of degree one, hence it is often sufficient to consider its restriction to the Euclidean unit sphere \(\mathbb{S}^{n-1}\), and it uniquely determines the underlying convex body \(K\). **Theorem** (General Alexandrov-Fenchel Inequality).: _Let \(K_{1},K_{2},L\subset\mathbb{R}^{n}\) be convex bodies, and let \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-2})\) be an \((n-2)\)-tuple of convex bodies in \(\mathbb{R}^{n}\). Then_ \[\operatorname{V}(h_{K_{1}}-h_{K_{2}},L,\boldsymbol{\mathcal{C}})^{2}\geq \operatorname{V}(h_{K_{1}}-h_{K_{2}},h_{K_{1}}-h_{K_{2}},\boldsymbol{ \mathcal{C}})\operatorname{V}(L,L,\boldsymbol{\mathcal{C}}).\] (GAFI) For a proof of the equivalence of the two versions, we refer to Shenfeld and van Handel [24, Lem. 3.11]. Despite considerable effort, to date it is unknown when exactly equality holds in (AFI) for a general \((n-2)\)-tuple of convex bodies \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-2})\) in \(\mathbb{R}^{n}\). While the equality cases can be described purely by dimensionality considerations when at least one of the mixed volumes vanishes, the situation turns out to be much more subtle when \(\operatorname{V}(K,K,\boldsymbol{\mathcal{C}})\) and \(\operatorname{V}(L,L,\boldsymbol{\mathcal{C}})\) are both positive. Shenfeld and van Handel [24] recently fully characterized the equality cases in (AFI) when \(C_{1},\ldots,C_{n-2}\) are polytopes. Then they used this result to achieve a characterization when \(C_{1},\ldots,C_{n-2}\) are polytopes, zonoids or smooth convex bodies, under a mild dimensionality assumption, which they called supercriticality (see [24, Thm. 14.9]). Supercriticality is a natural condition that provides some dimensional restriction on a given sequence of nonempty sets, which is satisfied e.g. for any sequence \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-2})\) of full-dimensional convex bodies in \(\mathbb{R}^{n}\). It is related to a well-known condition that ensures that the mixed volume of a given tuple of convex bodies is positive. We refer to Section 3 for a precise definition and basic properties that are related to this concept. Recall that a zonoid is a limit (with respect to the Hausdorff metric) of a sequence of finite Minkowski sums of segments. A convex body \(K\) is said to be smooth if each boundary point of \(K\) is contained in a unique supporting hyperplane of \(K\). 
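In the plane (\(n=2\), so that the tuple \(\boldsymbol{\mathcal{C}}\) is empty), (AFI) reduces to Minkowski's inequality \(\mathrm{V}(K,L)^{2}\geq\mathrm{V}(K,K)\,\mathrm{V}(L,L)\) for mixed areas. The following sketch (not part of the paper; it uses SciPy's convex hull and randomly generated polygons) checks this numerically, computing the mixed area via the \(n=2\) case of (2), namely \(\mathrm{V}(K,L)=\frac{1}{2}\big(\mathrm{V}(K+L)-\mathrm{V}(K)-\mathrm{V}(L)\big)\).

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def area(pts):
    return ConvexHull(pts).volume          # in 2D, .volume is the area

def minkowski_sum(pts1, pts2):
    return np.array([p + q for p in pts1 for q in pts2])

def mixed_area(pts1, pts2):
    """V(K,L) = (V(K+L) - V(K) - V(L)) / 2, the planar case of formula (2)."""
    return 0.5 * (area(minkowski_sum(pts1, pts2)) - area(pts1) - area(pts2))

for _ in range(100):
    K = rng.normal(size=(8, 2))            # vertex samples of random convex bodies
    L = rng.normal(size=(8, 2))            # (their convex hulls)
    lhs = mixed_area(K, L)**2
    rhs = area(K) * area(L)                # V(K,K) = V(K) and V(L,L) = V(L) in the plane
    assert lhs >= rhs - 1e-9               # Alexandrov-Fenchel / Minkowski inequality
print("V(K,L)^2 >= V(K,K) V(L,L) holds for all sampled polygon pairs.")
```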
In this article, we study a class of convex bodies, which we call _polyoids_, that encompasses polytopes, zonoids and triangle bodies. Polyoids are obtained as limits of sequences of polytopes that are finite Minkowski sums of polytopes having at most a fixed number \(k\) of vertices (for some \(k\in\mathbb{N}\)). If \(k=2\), we are back in the zonoid case. For \(k=3\) we cover the class of triangle bodies (cf. [19, Sect. 3], [20, p. 201]). If \(\mathcal{P}^{n}_{k}\) denotes the set of \(k\)-topes in \(\mathbb{R}^{n}\) (polytopes having at most \(k\) vertices), then the class of polyoids in \(\mathbb{R}^{n}\) is the union of the Minkowski classes \(\mathfrak{M}(\mathcal{P}^{n}_{k})\), \(k\in\mathbb{N}\), generated by \(\mathcal{P}^{n}_{k}\) (see [20, Sect. 3.5] for information on Minkowski classes and additive generation). Our treatment will be limited to supercritical \((n-2)\)-tuples of convex bodies in \(\mathbb{R}^{n}\). We refer to Section 2 for precise definitions and further discussion for these notions. The main aim of this work is to extend the characterization of the equality cases for (AFI), obtained by Shenfeld and van Handel [24], to all convex bodies \(K,L\) and all supercritical tuples \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-2})\) of polyoids and smooth convex bodies. We begin by stating Shenfeld and van Handel's result for supercritical tuples of polytopes, zonoids and smooth convex bodies. For this purpose, we need the mixed area measure \(\mathrm{S}(K_{1},\ldots,K_{n-1},\cdot)\) of an \((n-1)\)-tuple of convex bodies \(K_{1},\ldots,K_{n-1}\subset\mathbb{R}^{n}\). Recall from [20, Sect. 5.1] (or [12, Thm. 4.1]) how these finite Borel measures on the Euclidean unit sphere \(\mathbb{S}^{n-1}\) are defined. They are related to mixed volumes and support functions via the relation \[\mathrm{V}(K_{1},\ldots,K_{n-1},K_{n})=\frac{1}{n}\int_{\mathbb{S}^{n-1}}h_{K_{ n}}(u)\ \mathrm{S}(K_{1},\ldots,K_{n-1},\mathrm{d}u), \tag{3}\] which holds for all convex bodies \(K_{1},\ldots,K_{n}\subset\mathbb{R}^{n}\). For given \(K_{1},\ldots,K_{n-1}\), the mixed area measure \(\mathrm{S}(K_{1},\ldots,K_{n-1},\cdot)\) is the unique Borel measure on \(\mathbb{S}^{n-1}\) such that (3) holds for all convex bodies \(K_{n}\). As in the case of the mixed volume, also the mixed area measure can be extended as an \((n-1)\)-linear map to differences of support functions (see again [20, Sect. 5.2]). Then relation (3) remains true with convex bodies replaced by differences of support functions. If \(B^{n}\) is the Euclidean unit ball, then \(\mathrm{supp}\,\mathrm{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot)\) denotes the support of the mixed area measure of \(B^{n}\) and \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-2})\), which is the complement of the largest open subset of \(\mathbb{S}^{n-1}\) on which \(\mathrm{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot)\) vanishes. **Theorem** ([24, Thm. 14.9]).: _Let \(K,L\subset\mathbb{R}^{n}\) be convex bodies, and let \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-2})\) be a supercritical \((n-2)\)-tuple of polytopes, zonoids or smooth convex bodies in \(\mathbb{R}^{n}\) such that \(\mathrm{V}(K,K,\boldsymbol{\mathcal{C}}),\mathrm{V}(L,L,\boldsymbol{\mathcal{ C}})>0\). Then (AFI) holds with equality if and only if there are \(a>0\) and \(x\in\mathbb{R}^{n}\) such that \(h_{K}=h_{aL+x}\) on \(\mathrm{supp}\,\mathrm{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot)\)._ In the case where \(C_{1},\ldots,C_{n-2}\) are all smooth, the result was already known (see [20, Thm. 
7.6.8] and the comment after [11, Thm. 1.2]). The main point of the preceding theorem is that a mixture of smooth and non-smooth bodies (which then are polytopes or zonoids) is admitted. Shenfeld and van Handel also characterized the much more involved equality cases for arbitrary tuples of polytopes \(\boldsymbol{\mathcal{C}}\). For their treatment of the zonoid case, the characterization theorem for polytopes is used as a crucial ingredient. Our main result is an extension of the preceding theorem where zonoids and polytopes are included in the larger class of polyoids. **Theorem 1.1**.: _Let \(K,L\subset\mathbb{R}^{n}\) be convex bodies, and let \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-2})\) be a supercritical \((n-2)\)-tuple of polyoids or smooth convex bodies in \(\mathbb{R}^{n}\)._ 1. _If_ \(\mathrm{V}(K,L,\boldsymbol{\mathcal{C}})=0\)_, then (AFI) holds with equality and_ \(K,L\) _are homothetic._ 2. _Let_ \(\mathrm{V}(K,L,\boldsymbol{\mathcal{C}})>0\)_. Then (AFI) holds with equality if and only if there are_ \(a>0\) _and_ \(x\in\mathbb{R}^{n}\) _such that_ \(h_{K}=h_{aL+x}\) _on_ \(\mathrm{supp}\,\mathrm{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot)\)_._ At the end of Section 2, we introduce a formally larger class of convex bodies (which we call macroids) for which the statement of Theorem 1.1 remains true. In [18, Sect. 4], Schneider established a characterization of the equality cases in the Alexandrov-Fenchel inequality for convex bodies \(K,L\) and zonoids \(C_{1},\ldots,C_{n-2}\), under the additional assumption that \(K,L\) are centrally symmetric and all bodies are full-dimensional. Schneider's characterization involves specific geometric information about \(\boldsymbol{\mathcal{C}}\), namely the closure of the set of all extremal normal vectors of the \((n-1)\)-tuple \((B^{n},\boldsymbol{\mathcal{C}})\). In contrast, Shenfeld and van Handel first characterize the equality cases in terms of the support of the mixed area measure \(\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot)\) (without the assumption of central symmetry of \(K,L\) and with a relaxed dimensionality assumption). Finally, they show that \(\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot)\) equals the closure of the set of all extremal normal vectors of the \((n-1)\)-tuple \((B^{n},\boldsymbol{\mathcal{C}})\) (see [24, Prop. 14.13]). According to a general conjecture due to Schneider [20, Conjecture 7.6.14], for an arbitrary \((n-1)\)-tuple of convex bodies \((C,\boldsymbol{\mathcal{C}})\) the support of the mixed area measure \(\operatorname{supp}\operatorname{S}(C,\boldsymbol{\mathcal{C}},\cdot)\) is precisely the closure of the set of all extremal normal vectors of \((C,\boldsymbol{\mathcal{C}})\). This conjecture is open even in the case where all bodies are zonoids (see [18, Sect. 4] for some discussion). For the application to the equality cases of the Alexandrov-Fenchel inequality, only the special case where \(C=B^{n}\) is required. In [11], Schneider's conjecture concerning the support of mixed area measures is settled for the class of polyoids, which in particular covers the case where all \(n-1\) bodies are general zonoids. In combination with the results of the present work, we thus obtain a geometric characterization of the equality cases in (AFI) not only for general convex bodies \(K,L\) and zonoids, but for general convex bodies \(K,L\) and the larger class of polyoids (and smooth bodies). The paper is structured as follows. 
In Section 2 we deduce a representation result for the support functions of polyoids, stated as Corollary 2.9, from a more general result concerning Minkowski classes generated by homothety invariant closed families of convex bodies. A related representation theorem for support functions of zonoids in terms of their generating measures is a well-known and versatile tool in the study of zonoids. We also introduce the larger class of macroids whose definition is motivated by Corollary 2.9. In Section 3 we start with a brief discussion of supercritical tuples of sets. Then we prepare the proof of Theorem 3.8, which is an equivalent version of Theorem 1.1, that involves the mixed area measure of a difference of support functions. Our arguments are inspired by and partly based on the results by Shenfeld and van Handel [24]. Theorems 3.8 and 1.1 both hold within the formally larger class of macroids. In Appendix A we construct a macroid that is not a polyoid. ## 2 Polyoids and beyond In this section, we introduce the class of polyoids and establish a characterization theorem. Our definition is guided by the geometric definition of a zonoid as a limit of a sequence of zonotopes, where a zonotope is a finite Minkowski sum of segments. In the following, we work in Euclidean space \(\mathbb{R}^{n}\) with scalar product \(\langle\cdot\,,\cdot\rangle\) and norm \(\|\cdot\|\). For a set \(A\subseteq\mathbb{R}^{n}\), we set \(A^{\perp}:=\{x\in\mathbb{R}^{n}\colon\langle x,a\rangle=0\text{ for }a\in A\}\), the linear subspace orthogonal to the linear span of \(A\), and \(u^{\perp}:=\{u\}^{\perp}\) for \(u\in\mathbb{R}^{n}\). The volume of the Euclidean unit ball \(B^{n}\) is denoted by \(\kappa_{n}\), its surface area is \(\omega_{n}:=n\kappa_{n}\). If we write \(\mathcal{H}^{n-1}\) for the \((n-1)\)-dimensional Hausdorff measure in \(\mathbb{R}^{n}\) and \(\mathbb{S}^{n-1}\) for the unit sphere, then \(\mathcal{H}^{n-1}(\mathbb{S}^{n-1})=\omega_{n}\) for \(n\geq 1\). Most of the time we focus on \(n\geq 2\), but almost all statements and definitions hold for \(n\in\mathbb{N}_{0}\) (if properly interpreted). We write \(\mathcal{K}^{n}\) for the set of nonempty compact convex sets in \(\mathbb{R}^{n}\) and endow \(\mathcal{K}^{n}\) with the Hausdorff metric. Elements of \(\mathcal{K}^{n}\) are called convex bodies. A map \(\varphi:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a dilatation if there is some \(\lambda>0\) such that \(\varphi(x)=\lambda x\) for \(x\in\mathbb{R}^{n}\). A homothety is a dilatation followed by a translation. For \(k\in\mathbb{N}\) we set \([k]:=\{1,\ldots,k\}\). If \((E,\rho)\) is a metric space and \(A\subseteq E\) is nonempty, then \(\operatorname{diam}A:=\sup\{\rho(x,y)\,:\,x,y\in A\}\in[0,\infty]\) denotes the diameter of \(A\). **Definition 2.1**.: For each \(k\in\mathbb{N}\), let \(\mathcal{P}^{n}_{k}\subset\mathcal{K}^{n}\) be the set of polytopes in \(\mathbb{R}^{n}\) with at most \(k\) vertices. Elements of \(\mathcal{P}^{n}_{k}\) are called \(k\)_-topes_. A finite Minkowski sum of \(k\)-topes is called a \(k\)_-polyotope_. **Remark 2.2**.: For any compact set \(A\subset\mathbb{R}^{n}\), the set \(\{P\in\mathcal{P}^{n}_{k}:P\subseteq A\}\subset\mathcal{K}^{n}\) is compact. Hence \(\mathcal{P}^{n}_{k}\) is a countable union of compact subsets of \(\mathcal{K}^{n}\) and thus a measurable subset of \(\mathcal{K}^{n}\). It is convenient to consider the subspace \(\sigma\)-algebra on \(\mathcal{P}^{n}_{k}\) which is induced by the Borel \(\sigma\)-algebra of \(\mathcal{K}^{n}\). 
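As a small illustration of Definition 2.1 (a sketch, not part of the paper): the script below builds a \(3\)-polyotope in the plane as a Minkowski sum of three random triangles (\(3\)-topes) and verifies the Minkowski additivity of support functions, \(h_{P_{1}+P_{2}+P_{3}}=h_{P_{1}}+h_{P_{2}}+h_{P_{3}}\), which is used repeatedly in what follows.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

def support(pts, u):
    """Support function h_P(u) = max_{x in P} <x,u> of a polytope given by its points."""
    return np.max(pts @ u)

def minkowski_sum(list_of_pts):
    pts = list_of_pts[0]
    for q in list_of_pts[1:]:
        pts = np.array([p + r for p in pts for r in q])
        pts = pts[ConvexHull(pts).vertices]        # keep only the hull vertices
    return pts

# Three random 3-topes (triangles) in the plane; their sum is a 3-polyotope.
triangles = [rng.normal(size=(3, 2)) for _ in range(3)]
P = minkowski_sum(triangles)

# Support functions are Minkowski additive: h_{P1+P2+P3} = h_{P1} + h_{P2} + h_{P3}.
for theta in np.linspace(0, 2*np.pi, 12, endpoint=False):
    u = np.array([np.cos(theta), np.sin(theta)])
    assert np.isclose(support(P, u), sum(support(t, u) for t in triangles))
print(f"3-polyotope with {len(P)} vertices built from 3 triangles.")
```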
Next we define a class of convex bodies which generalizes the class of zonoids and contains arbitrary polytopes. **Definition 2.3**.: Let \(k\in\mathbb{N}\) and \(K\in\mathcal{K}^{n}\). If \(K\in\mathcal{K}^{n}\) is the limit of a sequence of \(k\)-polyotopes, then \(K\) is called a \(k\)_-polyoid_. A convex body \(K\) is called a _polyoid_ (a _polyotope_) if it is a \(k\)-polyoid (a \(k\)-polyotope) for some \(k\in\mathbb{N}\). **Remark 2.4**.: * For a given \(k\in\mathbb{N}\), the class of \(k\)-polyoids in \(\mathbb{R}^{n}\) is a closed subset of \(\mathcal{K}^{n}\). In the terminology of [20, Sect. 3.5], the class of \(k\)-polyoids is the Minkowski class \(\mathfrak{M}(\mathcal{P}^{n}_{k})\) generated by \(\mathcal{P}^{n}_{k}\). 2. A \(1\)-polyoid is just a singleton, a \(2\)-polyoid is a zonoid and a \(3\)-polyoid is a _triangle body_, as defined in [20, p. 201] (or [19, Sect. 3]). Moreover, for a given polytope \(P\) there is some integer \(k\in\mathbb{N}\) (depending on \(P\)) such that \(P\) is a \(k\)-polyotope and hence a \(k\)-polyoid. 3. Clearly, \(\mathcal{P}_{k}^{n}\subseteq\mathcal{P}_{\ell}^{n}\) for \(k\leq\ell\). Hence any \(k\)-polyoid is an \(\ell\)-polyoid for \(k\leq\ell\). In particular, if \(C_{1},\ldots,C_{r}\) are polyoids in \(\mathbb{R}^{n}\), for a fixed \(r\in\mathbb{N}\), then there is some \(k\in\mathbb{N}\) such that \(C_{1},\ldots,C_{r}\) are \(k\)-polyoids. Similar statements hold for polytopes. 4. In \(\mathbb{R}^{2}\) every centrally symmetric convex body is a 2-polyoid (a zonoid), and every convex body in \(\mathbb{R}^{2}\) is a \(3\)-polyoid (a triangle body). The first fact is well-known (cf. [20, Cor. 3.5.7]), the second follows from [20, Thm. 3.2.14]. 5. Let \(n\geq 3\). If \(k\in\mathbb{N}\) is fixed and \(P_{k}^{*}\) is an indecomposable polytope in \(\mathbb{R}^{n}\) with more than \(k\) vertices, then it follows from [20, Thm. 3.4.2] (see also [2, Thm. 4]) that \(P_{k}^{*}\) is not approximable by the class \(\mathcal{P}_{k}^{n}\). Hence \(P_{k}^{*}\) is not a \(k\)-polyoid, but certainly \(P_{k}^{*}\) is an \(\ell\)-polyoid, for some \(\ell>k\). For instance, for each \(k\geq 2\), there is some indecomposable \((k+1)\)-tope (with triangular \(2\)-faces) which is not a \(k\)-polyoid. 6. The Minkowski sum of a triangle in \(\mathbb{R}^{2}\times\{0\}\) and a \(2\)-dimensional ball in \(\{0\}\times\mathbb{R}^{2}\) yields an example of a \(3\)-polyoid which is not a zonoid, not a polytope, and neither smooth nor strictly convex. It is clear from [20, Cor. 3.5.12] that the class of \(3\)-polyoids is much larger than the class of zonoids. 7. For a given \(k\in[n]\), Ricker [15] calls a finite Minkowski sum of \(r\)-dimensional simplices with \(r\in\{0,\ldots,k\}\) a \(k\)-zonotope. Each such \(k\)-zonotope is a particular \((k+1)\)-polyotope, for \(k\in[n]\). Ricker then defines a \(k\)-zonoid (for \(k\in[n]\)) as a limit of \(k\)-zonotopes and characterizes \(k\)-zonoids in terms of the ranges of \(k\) vector measures, thus extending a known result for 1-zonoids (i.e., zonoids). A \(3\)-dimensional double pyramid over a triangle base is not a \(k\)-zonoid (as follows from [20, Thm. 3.4.2]), for any \(k\in\mathbb{N}\), but it is a \(5\)-polyotope. 8. Let \(K\) be an \(n\)-dimensional convex cone which is not a polytope. Then \(K\) is indecomposable by [16, Thm. 2], hence [20, Thm. 3.4.2] implies that \(K\) is not a polyoid. We now prepare the proof of Corollary 2.9, which is an analogue for polyoids of a well-known result for zonoids (see [20, Thm. 
3.5.3] or [12, Thm. 4.13]). In the following, measurability in a topological space \(E\) always refers to the Borel \(\sigma\)-algebra on \(E\). Let \(\mu\) be a finite measure on \(E\), let \(E_{0}\subseteq E\) be measurable and \(\mu(E\setminus E_{0})=0\). In this case we say that \(\mu\) is supported in \(E_{0}\). If \(g:E_{0}\to\mathbb{R}\) is a bounded and measurable function, then the integral of \(g\) over \(E\) with respect to \(\mu\) is defined by choosing any measurable extension of \(g\) to \(E\) (and clearly this is independent of the particular extension). The next lemma follows from the fact (applied with \(S=E_{0}\)) that if \(S\) is a separable metric space, then probability measures with finite support on \(S\) are weakly dense in the probability measures on \(S\) (see the discussion on pages 72-73 of [3], Appendix III, Thm. 4 and the discussion after Thm. 5 on page 239 of the first edition (1968) of [3] or [25, Thm. 3]). **Lemma 2.5**.: _Let \((E,\rho)\) be a separable metric space and \(E_{0}\subseteq E\) a measurable subset. Let \(\mu\) be a finite Borel measure on \(E\) with \(\mu(E\setminus E_{0})=0\). Then there is a sequence of discrete Borel measures \(\mu_{j}\), \(j\in\mathbb{N}\), on \(E\) with \(\mu_{j}(E)=\mu(E)\) and \(\mu_{j}(E\setminus E_{0})=0\) such that if \(g\colon E_{0}\to\mathbb{R}\) is continuous and bounded, then_ \[\lim_{j\to\infty}\int g\,\mathrm{d}\mu_{j}=\int g\,\mathrm{d}\mu. \tag{4}\] Let \(\mathcal{K}_{*}\) be a Borel subset of \(\mathcal{K}^{n}\). In the following, we always assume that \(\mathcal{K}_{*}\neq\varnothing\). With the restriction of the Hausdorff metric, \(\mathcal{K}_{*}\) is a separable metric space whose Borel \(\sigma\)-algebra coincides with the subspace \(\sigma\)-algebra induced on \(\mathcal{K}_{*}\). In particular, we will be interested in the cases of the homothety invariant classes \(\mathcal{P}^{n}\), the set of polytopes in \(\mathbb{R}^{n}\), and the subclass \(\mathcal{P}^{n}_{k}\) which is closed in \(\mathcal{K}^{n}\). For a Borel measure \(\nu\) on a separable metric space \(E\), the support of \(\nu\) is the complement of the largest open set on which \(\nu\) vanishes and denoted by \(\operatorname{supp}\nu\). Thus \(\operatorname{supp}\nu\) is a closed set. If \(\nu\) is a finite Borel measure on \(\mathcal{K}_{*}\) with bounded support, then \(\operatorname{supp}\nu\) is closed in \(\mathcal{K}_{*}\) (but not compact in general). If \(\mathcal{K}_{*}\) is closed in \(\mathcal{K}^{n}\) and \(\operatorname{supp}\nu\) is bounded, then \(\operatorname{supp}\nu\) is compact. If \(\mathcal{K}_{*}\) is a closed and homothety invariant class of convex bodies (hence containing all singletons), then the Minkowski class \(\mathfrak{M}(\mathcal{K}_{*})\) consists of all finite Minkowski sums of convex bodies from \(\mathcal{K}_{*}\) and all convex bodies in their closure. Next we define the _positive hull_ of the support of a measure on \(\mathcal{K}_{*}\). **Definition 2.6**.: Let \(\mu\) be a probability measure on a Borel set \(\varnothing\neq\mathcal{K}_{*}\subseteq\mathcal{K}^{n}\). Then \[\operatorname{pos}\mu\coloneqq\left\{\sum_{i=1}^{N}\lambda_{i}L_{i}\,:\,N\in \mathbb{N}_{0},\forall i\in[N]\colon\,\lambda_{i}\geq 0,L_{i}\in\operatorname{ supp}\mu\right\}\] denotes the set of nonnegative (finite) Minkowski combinations of convex bodies in \(\operatorname{supp}\mu\), where \(\operatorname{supp}\mu\) is defined with respect to the metric space \(\mathcal{K}_{*}\). The empty sum is defined as \(\{0\}\). 
If \(\mu_{1},\ldots,\mu_{\ell}\) are probability measures on \(\mathcal{K}_{*}\), then \[\operatorname{pos}(\mu_{1},\ldots,\mu_{\ell})\coloneqq\operatorname{pos}\mu _{1}\times\cdots\times\operatorname{pos}\mu_{\ell}\] is the set of \(\ell\)-tuples with components in \(\operatorname{pos}\mu_{1},\ldots,\operatorname{pos}\mu_{\ell}\), respectively. We provide a simple lemma. As usual, empty sums are interpreted as \(0\) (or \(\{0\}\) if sets in \(\mathbb{R}^{n}\) are concerned). Recall that \(\kappa_{n}\) is the volume of the unit ball \(B^{n}\) and \(\omega_{n}=n\kappa_{n}\) denotes its surface area. The mean width of a convex body \(K\in\mathcal{K}^{n}\) can be expressed in the form \[w(K)=\frac{2}{\omega_{n}}\int_{\mathbb{S}^{n-1}}h_{K}(u)\,\mathcal{H}^{n-1}( \mathrm{d}u)\geq 0\] with \(w(K)>0\) if and only if \(\dim K\geq 1\). **Lemma 2.7**.: _Let \(\ell,n\in\mathbb{N}_{0}\) and \(A_{1},\ldots,A_{\ell}\in\mathcal{K}^{n}\). Then_ \[\sum_{i=1}^{\ell}\operatorname{diam}A_{i}\leq\sqrt{\pi}n\,\operatorname{diam} \sum_{i=1}^{\ell}A_{i}. \tag{5}\] Proof.: For the proof, we can focus on \(n\geq 2\). Let \(w(A)\) denote the mean width of \(A\in\mathcal{K}^{n}\). Jung's inequality (or an obvious bound with \(\sqrt{2}\) replaced by \(2\)) implies that \(w(A)\leq\sqrt{2}\operatorname{diam}A\). Moreover, since \(A\) contains a segment of length \(\operatorname{diam}A\), we have \(2\operatorname{diam}A\leq\omega_{n}\kappa_{n-1}^{-1}w(A)\). Since the mean width is Minkowski additive, we get \[\sum_{i=1}^{\ell}\operatorname{diam}A_{i}\leq\frac{1}{2}\frac{\omega_{n}}{ \kappa_{n-1}}\sum_{i=1}^{\ell}w(A_{i})=\frac{1}{2}\frac{\omega_{n}}{\kappa_{n -1}}w\left(\sum_{i=1}^{\ell}A_{i}\right)\leq\frac{\omega_{n}}{\kappa_{n-1}} \operatorname{diam}\sum_{i=1}^{\ell}A_{i}.\] The assertion follows since the Gamma function is increasing on \([1.5,\infty)\). The representation (6) in the following theorem can be viewed as a specific version of Choquet's integral representation theorem (see [14] or [13, Thm. 3.45 and Chap. 7]), if combined with [20, Thm. 3.4.2] (see also [2, Thm. 4]). Thus it follows that the measure \(\mu\) in Theorem 2.8 (b) can be chosen such that it is supported by the indecomposable convex bodies in \(\mathcal{K}_{*}\). We provide a direct argument for both directions of the following equivalence. The special case \(\mathcal{K}_{*}=\mathcal{P}_{k}^{n}\) provides a characterization of \(k\)-polyoids and is stated as Corollary 2.9. In the following, we always assume that \(\varnothing\neq\mathcal{K}_{*}\subseteq\mathcal{K}^{n}\) is Borel measurable. **Theorem 2.8**.: _Let \(\varnothing\neq\mathcal{K}_{*}\subseteq\mathcal{K}^{n}\), \(n\in\mathbb{N}_{0}\), be a homothety invariant closed class of convex bodies. Then the following are equivalent._ 1. \(K\in\mathfrak{M}(\mathcal{K}_{*})\) 2. _There is a probability measure_ \(\mu\) _on_ \(\mathcal{K}_{*}\) _with bounded support such that_ \[h_{K}=\int h_{L}\,\mu(\mathrm{d}L).\] (6) _If_ (b) _holds, then_ \(K\) _is the limit of a sequence in_ \(\operatorname{pos}\mu\)_._ Proof.: The assertion is clear for \(n=0\) or if \(\operatorname{diam}K=0\). Hence, we can assume that \(n\geq 1\) and \(\operatorname{diam}K>0\) in the following. "(a) \(\implies\) (b)": Without loss of generality, \(0\in K\in\mathfrak{M}(\mathcal{K}_{*})\). Hence \(K=\lim_{\ell\to\infty}Q_{\ell}\), where \(Q_{\ell}=\sum_{i=1}^{m_{\ell}}Q_{\ell}^{(i)}\) with \(Q_{\ell}^{(i)}\in\mathcal{K}_{*}\), \(m_{\ell}>0\) and \(\operatorname{diam}Q_{\ell}^{(i)}>0\). 
There are points \(x_{\ell}\in Q_{\ell}\) and \(x_{\ell}^{(i)}\in Q_{\ell}^{(i)}\) with \(x_{\ell}=\sum_{i=1}^{m_{\ell}}x_{\ell}^{(i)}\) such that \(x_{\ell}\to 0\) as \(\ell\to\infty\). Setting \(P_{\ell}\coloneqq Q_{\ell}-x_{\ell}\) and \(P_{\ell}^{(i)}\coloneqq Q_{\ell}^{(i)}-x_{\ell}^{(i)}\in\mathcal{K}_{*}\) for \(\ell\in\mathbb{N}\) and \(i\in m_{\ell}\), we have \[K=\lim_{\ell\to\infty}P_{\ell}=\lim_{\ell\to\infty}\sum_{i=1}^{m_{\ell}}P_{ \ell}^{(i)},\quad 0\in P_{\ell}^{(i)}\in\mathcal{K}_{*}\quad\text{and}\quad \operatorname{diam}P_{\ell}^{(i)}>0.\] The sequence \((\operatorname{diam}P_{\ell})_{\ell}\) is bounded by some constant \(d\in(0,\infty)\). For \(\ell\in\mathbb{N}\) and \(i\in[m_{\ell}]\), define positive numbers \[d_{\ell}\coloneqq\operatorname{diam}P_{\ell},\quad d_{\ell}^{(i)}\coloneqq \operatorname{diam}P_{\ell}^{(i)},\quad e_{\ell}\coloneqq\sum_{i=1}^{m_{\ell }}d_{\ell}^{(i)},\quad c_{\ell}^{(i)}\coloneqq\frac{e_{\ell}}{d_{\ell}^{(i)}},\] and discrete probability measures \(\mu_{\ell}\) on \(\mathcal{K}_{*}\) by \[\mu_{\ell}\coloneqq\sum_{i=1}^{m_{\ell}}\frac{1}{c_{\ell}^{(i)}}\delta_{c_{ \ell}^{(i)}P_{\ell}^{(i)}},\quad\text{noting that}\,\sum_{i=1}^{m_{\ell}}\frac{1}{c_ {\ell}^{(i)}}=1.\] By construction and basic properties of support functions, \[h_{P_{\ell}}=\int h_{P}\,\mu_{\ell}(\mathrm{d}P).\] If \(P\in\operatorname{supp}\mu_{\ell}\), then \(P=c_{\ell}^{(i)}P_{\ell}^{(i)}\) for some \(i\in[m_{\ell}]\), hence \(0\in P\) and, by Lemma 2.7, \[\operatorname{diam}P=c_{\ell}^{(i)}d_{\ell}^{(i)}=e_{\ell}\leq\sqrt{\pi}n\,d_ {\ell}\leq\sqrt{\pi}n\,d,\] so that \(\operatorname{supp}\mu_{\ell}\subseteq S\coloneqq\{L\in\mathcal{K}_{*}\,:\,L \subseteq\sqrt{\pi}ndB^{n}\}\). Since \(\mathcal{K}_{*}\) is closed in \(\mathcal{K}^{n}\), \(S\) is compact. Thinking of the measures \(\mu_{\ell}\) as measures on \(S\) (with the restriction of the Hausdorff metric), a special case of Prokhorov's theorem [3, pp. 57-59] yields a subsequence \((\mu_{\ell_{s}})_{s}\) of \((\mu_{\ell})_{\ell}\) that weakly converges to some probability measure \(\mu\) on \(\mathcal{K}_{*}\) which is also compactly supported in \(S\). Hence, for all \(u\in\mathbb{R}^{n}\), \[h_{K}(u)=\lim_{s\to\infty}h_{P_{\ell_{s}}}(u)=\lim_{s\to\infty}\int h_{P}(u)\, \mu_{\ell_{s}}(\mathrm{d}P)=\int h_{P}(u)\,\mu(\mathrm{d}P),\] since \(P\mapsto h_{P}(u)\) is continuous and bounded on \(S\). So \(\mu\) has the desired property. "(b) \(\implies\) (a)": Let \(\mu\) be a probability measure on \(\mathcal{K}_{*}\) with bounded support such that (6) holds. Let \(E_{0}\) denote the support of \(\mu\) with respect to the metric space \(E=\mathcal{K}_{*}\). According to Lemma 2.5, \(\mu\) is the weak limit of a sequence \(\mu_{\ell}\) of discrete probability measures on \(\mathcal{K}_{*}\) supported in \(\operatorname{supp}\mu\). For all \(\ell\in\mathbb{N}\), we define \(K_{\ell}\in\mathcal{K}^{n}\) by \[h_{K_{\ell}}=\int h_{P}\,\mu_{\ell}(\mathrm{d}P).\] By construction, \(K_{\ell}\in\operatorname{pos}\mu_{\ell}\) is a finite sum of convex bodies in \(\mathcal{K}_{*}\). Since \(\operatorname{supp}\mu\) is bounded, the function \(P\mapsto h_{P}(u)\), \(P\in E_{0}\), is bounded and continuous, for each \(u\in\mathbb{R}^{n}\). Hence Lemma 2.5 ensures that, for each \(u\in\mathbb{R}^{n}\), \[h_{K_{\ell}}(u)=\int h_{P}(u)\,\mu_{\ell}(\mathrm{d}P)\to\int h_{P}(u)\,\mu( \mathrm{d}P)=h_{K}(u)\quad(\ell\to\infty).\] This shows that \(K_{\ell}\to K\) as \(\ell\to\infty\) (with respect to the Hausdorff metric). 
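The discretization step at the end of the proof can be visualized in a toy case (a sketch, not part of the paper): take \(\mathcal{K}_{*}=\mathcal{P}^{2}_{2}\) and let \(\mu\) be the uniform probability measure on the rotated segments \(S_{\varphi}=\operatorname{conv}\{0,(\cos\varphi,\sin\varphi)\}\), \(\varphi\in[0,2\pi)\). Then \(h_{K}\equiv 1/\pi\) on \(\mathbb{S}^{1}\), so \(K\) is a disc (a zonoid), and the bodies \(K_{m}=\frac{1}{m}\sum_{j}S_{2\pi j/m}\in\operatorname{pos}\mu\) obtained by discretizing \(\mu\) converge to \(K\); the script compares the support functions.

```python
import numpy as np

def h_segment(phi, theta):
    """Support function of the segment conv{0, (cos phi, sin phi)} at direction theta."""
    return np.maximum(0.0, np.cos(phi - theta))

theta = np.linspace(0, 2*np.pi, 720)       # directions at which we compare h
h_K = 1/np.pi                              # h_K(u) = int h_{S_phi}(u) mu(d S_phi) = 1/pi

for m in (4, 16, 64, 256):
    phi = 2*np.pi*np.arange(m)/m           # atoms of the discretized measure mu_m
    h_Km = np.mean([h_segment(p, theta) for p in phi], axis=0)   # body in pos(mu)
    print(f"m = {m:4d}: sup |h_Km - h_K| = {np.max(np.abs(h_Km - h_K)):.2e}")
```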
**Corollary 2.9**.: _Let \(K\) be a convex body in \(\mathbb{R}^{n}\), \(n\in\mathbb{N}_{0}\) and \(k\in\mathbb{N}\). Then the following are equivalent._ (a) \(K\) _is a_ \(k\)_-polyoid._ (b) _There is a probability measure_ \(\mu\) _on_ \(\mathcal{P}^{n}_{k}\) _with compact support such that_ \[h_{K}=\int h_{P}\,\mu(\mathrm{d}P).\] (7) _If_ (b) _holds, then_ \(K\) _is the limit of a sequence in_ \(\operatorname{pos}\mu\)_._ **Remark 2.10**.: In view of Corollary 2.9, a probability measure \(\mu\) on \(\mathcal{P}^{n}_{k}\) with compact support in \(\mathcal{P}^{n}_{k}\) satisfying (7) is called a _generating measure_ of the \(k\)-polyoid \(K\). Generating measures of polyoids are not uniquely determined (compare [20, Rem. 3.2.15]). In the following, we will only use that for a given polyoid a generating measure exists. **Example 2.11**.: We describe the non-uniqueness by a simple example. Let \(e_{1},e_{2}\in\mathbb{R}^{2}\) be the standard basis vectors. Consider the intervals \(I_{1}:=[0,e_{1}]\), \(I_{2}:=[0,e_{2}]\) and \(I_{3}:=[0,e_{1}+e_{2}]\). Let \(\operatorname{conv}\) denote the convex hull operator. Let \[P_{1}:=\operatorname{conv}(I_{1}\cup\{e_{1}+e_{2}\})\quad\text{and}\quad P_{2}:=\operatorname{conv}(I_{2}\cup\{e_{1}+e_{2}\}).\] Then \[\mu_{1}:=\frac{1}{2}\left(\delta_{I_{2}+I_{3}}+\delta_{I_{1}}\right),\quad\mu_{2}:=\frac{1}{2}\left(\delta_{I_{1}+I_{2}}+\delta_{I_{3}}\right)\quad\text{and}\quad\mu_{3}:=\frac{1}{2}\left(\delta_{P_{1}}+\delta_{P_{2}}\right)\] are three generating measures of the \(4\)-polyoid \(P:=\frac{1}{2}(I_{1}+I_{2}+I_{3})\); indeed, by the Minkowski additivity of support functions, \(h_{I_{2}+I_{3}}+h_{I_{1}}=h_{I_{1}+I_{2}}+h_{I_{3}}=h_{I_{1}}+h_{I_{2}}+h_{I_{3}}=2h_{P}\), and \(P_{1}+P_{2}=I_{1}+I_{2}+I_{3}\). The body \(P\) in fact is also a zonoid (zonotope) with generating measure \[\mu_{4}:=\frac{1}{3}\left(\delta_{\frac{3}{2}I_{1}}+\delta_{\frac{3}{2}I_{2}}+\delta_{\frac{3}{2}I_{3}}\right).\] By adding to \(P\) a suitable triangle, we get a \(3\)-polyoid which is not a zonoid and has two different generating measures. In the plane, examples of non-uniqueness can be easily constructed using Minkowski's existence theorem for polygons and the Minkowski additivity of the first area measure. Corollary 2.9 shows that polyoids can be characterized via the integral representation (7) and as limits of sequences in the positive hull of a generating measure of the polyoid. The arguments in Section 3 are based on both types of description. In the following lemma, we show that a convex body the support function of which is given by a more general integral representation is still the limit of a sequence of polytopes in the positive hull of a generating measure on \(\mathcal{P}^{n}\). The lemma suggests the definition of a class of convex bodies that we will call macroids in Definition 2.13. The argument for the implication "(b) \(\implies\) (a)" of Theorem 2.8 does not use any specific properties of the measurable subclass \(\mathcal{K}_{*}\subseteq\mathcal{K}^{n}\). Therefore we have the following lemma. Finally, we will choose \(\mathcal{K}_{*}=\mathcal{P}^{n}\). **Lemma 2.12**.: _Let \(\varnothing\neq\mathcal{K}_{*}\subseteq\mathcal{K}^{n}\) be a Borel set, \(n\in\mathbb{N}_{0}\). Suppose that \(\mu\) is a probability measure on \(\mathcal{K}_{*}\) with bounded support. Let \(K\in\mathcal{K}^{n}\) be defined by_ \[h_{K}=\int h_{P}\,\mu(\mathrm{d}P). \tag{8}\] _Then \(K\) is the limit of a sequence in \(\operatorname{pos}\mu\)._ **Definition 2.13** (Macroids).: Let \(\varnothing\neq\mathcal{K}_{*}\subseteq\mathcal{K}^{n}\) be a Borel set. 
A convex body \(K\) in \(\mathbb{R}^{n}\), \(n\in\mathbb{N}_{0}\), for which there is a probability measure \(\mu\) on \(\mathcal{K}_{*}\) with bounded support such that (8) holds, is called a \(\mathcal{K}_{*}\)_-macroid_ with generating measure \(\mu\). If \(\mathcal{K}_{*}=\mathcal{P}^{n}\), we call \(K\) a macroid with generating measure \(\mu\). **Remark 2.14**.: Suppose that \(K\) is a \(\mathcal{K}_{*}\)-macroid with generating measure \(\mu\) on \(\mathcal{K}_{*}\). We may extend \(\mu\) trivially to all of \(\mathcal{K}^{n}\). Then \(\mu\) is a probability measure with bounded support (by definition) and (by Fubini's theorem) \[w(K)=\int w(Q)\,\mu(\mathrm{d}Q)<\infty.\] The assumption that \(\mu\) is a probability measure is not restrictive. To see this, note that if \(\widetilde{\mu}\) is a Borel measure on \(\mathcal{K}^{n}\) with \(|\widetilde{\mu}|:=\widetilde{\mu}(\mathcal{K}^{n})\in(0,\infty)\) and if \(\widetilde{\mu}\) has bounded support, then \[\int h_{Q}\,\widetilde{\mu}(\mathrm{d}Q)=\int h_{Q}\,\mu(\mathrm{d}Q),\] where \(\mu(\mathcal{A}):=|\widetilde{\mu}|^{-1}\widetilde{\mu}\,(|\widetilde{\mu}|^{-1}\mathcal{A})\), for Borel sets \(\mathcal{A}\subseteq\mathcal{K}^{n}\), defines a probability measure with bounded support. For the present purpose, we could also replace the assumption of bounded support by an integrability assumption. To explain this statement, let \(\mu\) be a Borel probability measure on \(\mathcal{K}^{n}\) such that \[0<\int w(Q)\,\mu(\mathrm{d}Q)<\infty.\] The Steiner point of \(K\in\mathcal{K}^{n}\) is defined by \[s(K):=\frac{1}{\kappa_{n}}\int_{\mathbb{S}^{n-1}}h_{K}(u)u\,\mathcal{H}^{n-1}(\mathrm{d}u)\] and satisfies \(s(K)\in\mathrm{relint}\,K\) (see [20, Sect. 1.7.1]). Fubini's theorem yields \[s(K)=\int s(Q)\,\mu(\mathrm{d}Q).\] Therefore we obtain \[h_{\frac{K-s(K)}{w(K)}}=\int h_{P}\,\mu^{*}(\mathrm{d}P),\] where \[\mu^{*}(\mathcal{A}):=\frac{1}{w(K)}\int\mathbf{1}\left\{\frac{Q-s(Q)}{w(Q)}\in\mathcal{A}\right\}w(Q)\,\mu(\mathrm{d}Q),\] for Borel sets \(\mathcal{A}\subseteq\mathcal{K}^{n}\), is a probability measure concentrated on \[\mathcal{K}^{n}_{0,1}:=\{L\in\mathcal{K}^{n}\colon w(L)=1,s(L)=0\}.\] In particular, \(\mu^{*}\) has bounded support. **Remark 2.15**.: Each polyoid is a macroid, but not every macroid is a polyoid; for an example, see Appendix A. An explicit geometric characterization of the class of polyoids within the class of macroids remains to be discovered. **Remark 2.16**.: An obvious motivation for introducing macroids is that Theorem 1.1 is in fact true for the strictly larger class of macroids. An explicit example of a convex body that is not a macroid is provided by a circular cone. This follows from Proposition 2.17. **Proposition 2.17**.: _Let \(K\in\mathcal{K}^{n}\) be an indecomposable macroid. Then \(K\) is a polytope._ Proof.: We may assume that \(\dim K>0\) and \[h_{K}=\int h_{Q}\,\mu(\mathrm{d}Q),\] where \(\mu\) is a Borel probability measure on \(\mathcal{P}^{n}\) with bounded support. By Fubini's theorem, we have \[w(K)=\int w(Q)\,\mu(\mathrm{d}Q),\] hence there is some \(P\in\operatorname{supp}\mu\) with \(w(P)>0\) (that is, \(\dim P>0\)) and \(\mu(B(P,1/k))>0\) for all \(k\in\mathbb{N}\), where \(B(P,1/k)\) denotes a closed ball around \(P\) with radius \(1/k\) in \(\mathcal{P}^{n}\) (or in \(\mathcal{K}^{n}\)) with respect to the Hausdorff metric \(d\) on \(\mathcal{K}^{n}\) (or its restriction to a subset). 
For \(k\in\mathbb{N}\), the convex body \(K_{k}\in\mathcal{K}^{n}\) is defined by \[h_{K_{k}}\coloneqq\frac{1}{\mu(B(P,1/k))}\int_{B(P,1/k)}h_{Q}\,\mu(\mathrm{d}Q)\] and satisfies \(w(K_{k})>0\). Then clearly \(K_{k}\to P\) as \(k\to\infty\) (with respect to the Hausdorff metric). Moreover, if \(L_{k}\in\mathcal{K}^{n}\) is given by \[h_{L_{k}}:=\int_{B(P,1/k)^{\complement}}h_{Q}\,\mu(\mathrm{d}Q),\] then \[\mu(B(P,1/k))K_{k}+L_{k}=K.\] Since \(K\) is indecomposable and \(\dim K_{k}>0\), it follows that \(K=c(k)K_{k}+x_{k}\), where \[c(k)=\frac{w(K)}{w(K_{k})}\quad\text{and}\quad x_{k}\in\mathbb{R}^{n}.\] Since \(K_{k}\to P\), we have \(c(k)\to w(K)/w(P)>0\) and \(x_{k}\to x_{0}\in\mathbb{R}^{n}\), as \(k\to\infty\). Thus we arrive at \(K=w(P)^{-1}w(K)P+x_{0}\), which shows that \(K\) is a polytope. **Remark 2.18**.: Various types of mean section or projection bodies have been studied in integral and stochastic geometry. Starting from a convex body \(K\subset\mathbb{R}^{n}\), the support function of a new mean body is defined as an integral average of the support functions of sections or projections of \(K\), which is precisely the principle by which macroids are defined; see [6, 7, 8, 9] and the literature cited there. Another special case of definition (8) is the convolution \(\widetilde{\mu}*h_{K}\) of a probability (or finite) measure \(\widetilde{\mu}\) on the rotation group \(\operatorname{SO}_{n}\) and the support function of a fixed convex body \(K\in\mathcal{K}^{n}\), as considered in [10, Sects. 2 and 5]. In our notation, this reads \[(\widetilde{\mu}*h_{K})(u)=\int h_{K}(\rho^{-1}u)\,\widetilde{\mu}(\mathrm{d}\rho)=\int h_{L}(u)\,f_{K}(\widetilde{\mu})(\mathrm{d}L),\] where \(f_{K}(\widetilde{\mu})\) is the image measure of \(\widetilde{\mu}\) under the map \(f_{K}:\operatorname{SO}_{n}\to\mathcal{K}^{n}\), \(\rho\mapsto\rho K\). A general definition of a convex body as an integral average with respect to some measure on a suitable index set has been anticipated by Wolfgang Weil in [27, (1)], but then only the special case of zonoids has been explored in [27]. **Remark 2.19**.: Let \(u\in\mathbb{S}^{n-1}\). If \(K\) is a macroid with generating measure \(\mu\), then the support set \(F(K,u)\) of \(K\) is a macroid with generating measure \(F_{u}(\mu)\), where \(F_{u}:\mathcal{P}^{n}\to\mathcal{P}^{n}\), \(P\mapsto F(P,u)\), is measurable and \(F_{u}(\mu)\) is the image measure of \(\mu\) under the map \(F_{u}\), that is, \[h_{F(K,u)}=\int h_{F(P,u)}\,\mu(\mathrm{d}P)=\int h_{Q}\,F_{u}(\mu)(\mathrm{d}Q). \tag{9}\] The measurability of \(F_{u}\) follows from [21, Thm. 12.2.6 (a) and Thm. 12.3.2], since \(F(K,u)=K\cap H(K,u)\), where \(H(K,u)=u^{\perp}+h(K,u)u\) clearly depends continuously on \(K\). Furthermore, note that \(h_{F(K,u)}(x)=h^{\prime}_{K}(u;x)\) by [20, Thm. 1.7.2], for \(x\in\mathbb{R}^{n}\). Since \(t^{-1}|h_{L}(u+tx)-h_{L}(u)|\leq R\|x\|\), for \(t>0\) and \(L\in\operatorname{supp}\mu\subseteq RB^{n}\) (and some \(R>0\)), the assertion follows from the dominated convergence theorem. ## 3 The characterization theorem We start by recalling various concepts of criticality for finite sequences of subsets of \(\mathbb{R}^{n}\). Recall that the cardinality of a finite set \(I\) is denoted by \(|I|\). For a nonempty set \(A\subseteq\mathbb{R}^{n}\), let \(\operatorname{span}A\) denote the (linear) span of \(A\) and \(\overline{\operatorname{span}}\,A=\operatorname{span}(A-A)\) the linear subspace parallel to the smallest affine subspace containing \(A\). 
By the dimension \(\dim A\in\{0,\ldots,n\}\) of a set \(A\neq\varnothing\) we mean the dimension of its affine span. **Definition 3.1**.: Let \(\boldsymbol{\mathcal{A}}=(A_{1},\ldots,A_{\ell})\), \(\ell\in\mathbb{N}_{0}\), be a tuple of nonempty subsets of \(\mathbb{R}^{n}\). Then \(\boldsymbol{\mathcal{A}}\) is called (i) _supercritical_ if \(\dim\overline{\operatorname{span}}\sum_{i\in I}A_{i}\geq|I|+2\) for all \(\varnothing\neq I\subseteq\{1,\ldots,\ell\}\); (ii) _critical_ if \(\dim\overline{\operatorname{span}}\sum_{i\in I}A_{i}\geq|I|+1\) for all \(\varnothing\neq I\subseteq\{1,\ldots,\ell\}\); (iii) _semicritical_ if \(\dim\overline{\operatorname{span}}\sum_{i\in I}A_{i}\geq|I|\) for all \(\varnothing\neq I\subseteq\{1,\ldots,\ell\}\). Note that here we deviate from the terminology used in [24, Sect. 12], where a tuple of convex bodies satisfying (iii) in Definition 3.1 is called subcritical instead of semicritical. (Instead we reserve the notion of a subcritical tuple of sets for one that is not critical; see [11]). The various notions of criticality introduced above have useful properties some of which are discussed below. Each of the three notions is preserved by passing to a subtuple, taking permutations of the given tuple, replacing all sets by the same affine transformation or by individual translations, or if the sets are replaced by supersets. Supercriticality implies criticality, which in turn implies semicriticality. The empty tuple is supercritical. Moreover, if all sets in an \(\ell\)-tuple \(\boldsymbol{\mathcal{A}}\) are full-dimensional, then \(\boldsymbol{\mathcal{A}}\) is supercritical if and only if \(\ell\leq n-2\) or \(\ell=0\) (that is, \(\boldsymbol{\mathcal{A}}\) is the empty tuple). For instance, a pair of segments in \(\mathbb{R}^{3}\) with linearly independent directions is semicritical but not critical. **Lemma 3.2**.: _Let \(\ell\in\mathbb{N}\) and \(\boldsymbol{\mathcal{A}}=(A_{1},\ldots,A_{\ell})\) be a tuple of nonempty sets in \(\mathbb{R}^{n}\)._ (a) _Let_ \(\boldsymbol{\mathcal{A}}\) _be critical and_ \(A_{\ell+1}\subseteq\mathbb{R}^{n}\) _be nonempty. Then_ \((A_{1},\ldots,A_{\ell+1})\) _is semicritical if and only if_ \(\dim\overline{\operatorname{span}}\,A_{\ell+1}\geq 1\)_._ (b) _Let_ \(\boldsymbol{\mathcal{A}}\) _be supercritical and_ \(A_{\ell+1}\subseteq\mathbb{R}^{n}\) _be nonempty. Then_ \((A_{1},\ldots,A_{\ell+1})\) _is critical if and only if_ \(\dim\overline{\operatorname{span}}\,A_{\ell+1}\geq 2\)_._ Proof.: (a) Suppose that \(A_{\ell+1}\) has dimension at least one. Let \(I\subseteq[\ell+1]\) be nonempty. We distinguish three cases. If \(I\subseteq[\ell]\), then \(\dim\overline{\operatorname{span}}\sum_{i\in I}A_{i}\geq|I|+1\geq|I|\), since \(\boldsymbol{\mathcal{A}}\) is critical. If \(I=\{\ell+1\}\), then \(\dim\overline{\operatorname{span}}\sum_{i\in I}A_{i}\geq 1=|I|\), since \(\dim\overline{\operatorname{span}}\,A_{\ell+1}\geq 1\). If \(I=J\cup\{\ell+1\}\) and \(\varnothing\neq J\subseteq[\ell]\), then \[\dim\overline{\operatorname{span}}\sum_{i\in I}A_{i}\geq\dim\overline{\operatorname{span}}\sum_{i\in J}A_{i}\geq|J|+1=|I|,\] where we used again that \(\boldsymbol{\mathcal{A}}\) is critical. Clearly, if \((A_{1},\ldots,A_{\ell+1})\) is semicritical then \(\dim\overline{\operatorname{span}}\,A_{\ell+1}\geq 1\). The proof of (b) is similar. The following lemma connects semicriticality of an \(n\)-tuple of convex bodies to the positivity of the mixed volume of these convex bodies (see [20, Theorem 5.1.8]). **Lemma 3.3**.: _Let \(\boldsymbol{\mathcal{C}}=(K_{1},\ldots,K_{n})\) be an \(n\)-tuple of convex bodies in \(\mathbb{R}^{n}\). 
Then \(\boldsymbol{\mathcal{C}}\) is semicritical if and only if \(\,\mathrm{V}(\boldsymbol{\mathcal{C}})>0\)._ As pointed out before, mixed area measures can be extended to differences of support functions. If \(g_{1},g_{2}\) are differences of support functions and \(\boldsymbol{\mathcal{C}}\) is an \((n-3)\)-tuple of convex bodies in \(\mathbb{R}^{n}\) (if \(n\geq 3\)), then we set \(S_{g_{1},g_{2},\boldsymbol{\mathcal{C}}}:=S(g_{1},g_{2},\boldsymbol{\mathcal{C}},\cdot)\). A similar convention applies in case just one of the bodies is replaced by a difference of support functions. The statement and proof of the following lemma are suggested by a similar result concerning zonoids; see [24, Theorem 14.9]. In the following, \(\mathcal{K}_{*}\subseteq\mathcal{K}^{n}\) always denotes a measurable class of convex bodies. **Lemma 3.4**.: _Assume that \(n\geq 3\). Let \(\boldsymbol{\mathcal{C}}\) be an \((n-3)\)-tuple of convex bodies in \(\mathbb{R}^{n}\), and let \(K\in\mathcal{K}^{n}\) be a \(\mathcal{K}_{*}\)-macroid with generating measure \(\mu\). If \((K,\boldsymbol{\mathcal{C}})\) is supercritical and \(f\) is a difference of support functions with \(\mathrm{S}_{f,K,\boldsymbol{\mathcal{C}}}=0\), then \(\mathrm{S}_{f,P,\boldsymbol{\mathcal{C}}}=0\) for all \(P\in\operatorname{pos}\mu\)._ Proof.: Let \(P\in\operatorname{pos}\mu\). Then there are \(\ell\in\mathbb{N}_{0}\), \(\lambda_{1},\ldots,\lambda_{\ell}\geq 0\) and \(L_{1},\ldots,L_{\ell}\in\operatorname{supp}\mu\) such that \(P=\sum_{i=1}^{\ell}\lambda_{i}L_{i}\) and \[\mathrm{S}_{f,P,\boldsymbol{\mathcal{C}}}=\sum_{i=1}^{\ell}\lambda_{i}\,\mathrm{S}_{f,L_{i},\boldsymbol{\mathcal{C}}}\,.\] Note that this holds trivially with \(\mathrm{S}_{f,P,\boldsymbol{\mathcal{C}}}=0\) if \(\ell=0\). So it suffices to prove that \(\mathrm{S}_{f,L,\boldsymbol{\mathcal{C}}}=0\) for all \(L\in\operatorname{supp}\mu\). By Fubini's theorem and basic properties of mixed area measures (see [20, Sect. 5.1] or [12, Sect. 4.1]), which remain true in the case where differences of support functions are admitted in some of the arguments of the mixed volumes and the mixed area measures, \[\int\mathrm{V}(f,f,L,\boldsymbol{\mathcal{C}})\,\mu(\mathrm{d}L) =\frac{1}{n}\int\int h_{L}(u)\ \mathrm{S}_{f,f,\boldsymbol{\mathcal{C}}}(\mathrm{d}u)\,\mu(\mathrm{d}L)\] \[=\frac{1}{n}\int\int h_{L}(u)\,\mu(\mathrm{d}L)\ \mathrm{S}_{f,f,\boldsymbol{\mathcal{C}}}(\mathrm{d}u)\] \[=\frac{1}{n}\int h_{K}(u)\ \mathrm{S}_{f,f,\boldsymbol{\mathcal{C}}}(\mathrm{d}u)\] \[=\operatorname{V}(f,f,K,\boldsymbol{\mathcal{C}})\] \[=\frac{1}{n}\int f\,\operatorname{d}\operatorname{S}_{f,K,\boldsymbol{\mathcal{C}}}=0. \tag{10}\] If \(L\in\mathcal{K}^{n}\) is a singleton, then \(\operatorname{V}(f,f,L,\boldsymbol{\mathcal{C}})=\operatorname{V}(f,f,0L,\boldsymbol{\mathcal{C}})=0\) by translation invariance and multilinearity of \(\operatorname{V}\). If \(L\in\mathcal{K}^{n}\) is not a singleton, then \(\operatorname{V}(K,K,L,\boldsymbol{\mathcal{C}})>0\). In fact, first we get \(\dim K\geq 1+2=3\), since \((K,\boldsymbol{\mathcal{C}})\) is supercritical. By Lemma 3.2 (b) it follows that \((K,K,\boldsymbol{\mathcal{C}})\) is critical, but then \((L,K,K,\boldsymbol{\mathcal{C}})\) is semicritical by Lemma 3.2 (a) and since \(\dim L\geq 1\). Hence the assertion follows from Lemma 3.3. Since \(\operatorname{S}_{f,K,\boldsymbol{\mathcal{C}}}=0\), it follows from the extension of (3) to differences of support functions that \(\operatorname{V}(f,K,L,\boldsymbol{\mathcal{C}})=0\). 
Hence, by the General Alexandrov-Fenchel Inequality (GAFI) we get \[0=\operatorname{V}(f,K,L,\boldsymbol{\mathcal{C}})^{2}\geq\operatorname{V}(f, f,L,\boldsymbol{\mathcal{C}})\cdot\operatorname{V}(K,K,L,\boldsymbol{ \mathcal{C}}),\] which implies that \(\operatorname{V}(f,f,L,\boldsymbol{\mathcal{C}})\leq 0\). Since \(\operatorname{V}(f,f,L,\boldsymbol{\mathcal{C}})\) is continuous in \(L\in\mathcal{K}^{n}\), it follows from (10) that \(\operatorname{V}(f,f,L,\boldsymbol{\mathcal{C}})=0\) for all \(L\in\operatorname{supp}\mu\). Now let \(L\in\operatorname{supp}\mu\). If \(L\) is a singleton, then \(\operatorname{S}_{f,L,\boldsymbol{\mathcal{C}}}=0\) by translation invariance and multilinearity of \(\operatorname{S}\). If \(L\) is not a singleton, then again \(\operatorname{V}(K,K,L,\boldsymbol{\mathcal{C}})>0\). Moreover, \(\operatorname{V}(f,K,L,\boldsymbol{\mathcal{C}})=0\) and \(\operatorname{V}(f,f,L,\boldsymbol{\mathcal{C}})=0\), as shown above. Therefore, \(\operatorname{S}_{f,L,\boldsymbol{\mathcal{C}}}=0\) is implied by [24, Lem. 3.12 (a)]. Next we compare how the smallest affine subspace containing a given \(k\)-polyoid with generating measure \(\mu\) is related to the smallest affine subspace of a polytope from the positive hull of the support of \(\mu\), if both affine subspaces are translated to the origin \(0\). **Lemma 3.5**.: _Let \(n\in\mathbb{N}_{0}\). Let \(K\in\mathcal{K}^{n}\) be a \(\mathcal{K}_{*}\)-macroid with generating measure \(\mu\), and let \(Q\in\operatorname{pos}\mu\). Then \(\operatorname{\overline{span}}Q\subseteq\operatorname{\overline{span}}K\)._ Proof.: For \(n=0\) the assertion is clear. Let \(u\in\left(\operatorname{\overline{span}}K\right)^{\perp}\) (the linear subspace orthogonal to \(\operatorname{\overline{span}}K\)). Then \[\int(h_{P}(u)+h_{P}(-u))\,\mu(\operatorname{d}\!P)=h_{K}(u)+h_{K}(-u)=0.\] Since the map \(P\mapsto h_{P}(u)+h_{P}(-u)\), \(P\in\mathcal{K}_{*}\), is continuous and nonnegative, we get \(h_{P}(u)+h_{P}(-u)=0\) for all \(P\in\operatorname{supp}\mu\). Because \(Q\) is a (nonnegative) Minkowski combination of such \(P\), it follows that \(h_{Q}(u)+h_{Q}(-u)=0\). Hence, \(u\in\left(\operatorname{\overline{span}}Q\right)^{\perp}\). So \(\left(\operatorname{\overline{span}}K\right)^{\perp}\subseteq\left( \operatorname{\overline{span}}Q\right)^{\perp}\), proving the claim. The proof of the next auxiliary result is inspired by [24, Thm. 14.9]. **Lemma 3.6**.: _Let \(n\geq 3\). Let \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-2})\) be an \((n-2)\)-tuple of \(\mathcal{K}_{*}\)-macroids in \(\mathbb{R}^{n}\) with generating measures \(\mu_{1},\ldots,\mu_{n-2}\). Let \(f\) be a difference of support functions. Assume that \(f\) is linear on \(\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{Q}},\cdot)\) whenever \(\boldsymbol{\mathcal{Q}}=(Q_{1},\ldots,Q_{n-2})\in\operatorname{pos}(\mu_{1}, \ldots,\mu_{n-2})\) with \(\operatorname{\overline{span}}Q_{i}=\operatorname{\overline{span}}C_{i}\) for \(i\in[n-2]\). Then \(f\) is also linear on \(\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot)\)._ Proof.: By Lemma 2.12, for \(i\in[n-2]\), there exists a sequence \(\widetilde{C}_{i}^{(1)},\widetilde{C}_{i}^{(2)},\ldots\) of sums of convex bodies in \(\operatorname{supp}\mu_{i}\) that converges to \(C_{i}\). 
Being an element of \(\operatorname{pos}\mu_{i}\), \(\widetilde{C}_{i}^{(j)}\) satisfies \(\operatorname{\overline{span}}\widetilde{C}_{i}^{(j)}\subseteq\operatorname{ \overline{span}}C_{i}\) by Lemma 3.5. On the other hand, \(\widetilde{C}_{i}^{(j)}\to C_{i}\) implies that the reverse inclusion holds for all \(j\) greater than or equal to some \(q\in\mathbb{N}\). For \(i\in[n-2]\), define \[C_{i}^{(j)}\coloneqq\widetilde{C}_{i}^{(q+j)}+\frac{1}{j^{2}}\sum_{j^{\prime }=1}^{j-1}\widetilde{C}_{i}^{(q+j^{\prime})},\quad j\in\mathbb{N}.\] Because \((d(0,\widetilde{C}_{i}^{(j)}))_{j}\) is bounded by some \(c_{i}\in(0,\infty)\), \[d(C_{i}^{(j)},\widetilde{C}_{i}^{(q+j)})\leq\frac{(j-1)c_{i}}{j^{2}}\to 0 \quad\text{as $j\to\infty$}\] and \[\lim_{j\to\infty}C_{i}^{(j)}=\lim_{j\to\infty}\widetilde{C}_{i}^{(q+j)}=C_{i}.\] Moreover, \[\operatorname{\overline{span}}C_{i}=\operatorname{\overline{span}}\widetilde {C}_{i}^{(q+j)}\subseteq\operatorname{\overline{span}}C_{i}^{(j)}\subseteq \operatorname{\overline{span}}C_{i}.\] For all \(j\in\mathbb{N}\), we have \(\boldsymbol{\mathcal{C}}^{(j)}\coloneqq(C_{1}^{(j)},\ldots,C_{n-2}^{(j)})\in \operatorname{pos}(\mu_{1},\ldots,\mu_{n-2})\). By assumption and since \(\operatorname{\overline{span}}C_{i}^{(j)}=\operatorname{\overline{span}}C_{i}\) for \(i\in[n-2]\), there is some \(x_{j}\in\mathbb{R}^{n}\) such that \(f=\langle x_{j},\cdot\rangle\) on \(\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}}^{(j)},\cdot)\). By definition of \(C_{i}^{(j)}\) and the multilinearity of \(\operatorname{S}\), we obtain \[\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}}^{(j)}, \cdot)=\bigcup_{j^{\prime}_{1},\ldots,j^{\prime}_{n-2}=1}^{j}\operatorname{ supp}\operatorname{S}(B^{n},\widetilde{C}_{1}^{(q+j^{\prime}_{1})},\ldots, \widetilde{C}_{n-2}^{(q+j^{\prime}_{n-2})},\cdot).\] In particular, \[\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}}^{(j)}, \cdot)\subseteq\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{ \mathcal{C}}^{(j+1)},\cdot)\quad\text{for all $j\in\mathbb{N}$}. \tag{11}\] Hence, there is \(p\in\mathbb{N}\) such that \[E\coloneqq\operatorname{span}\operatorname{supp}\operatorname{S}(B^{n}, \boldsymbol{\mathcal{C}}^{(p)},\cdot)=\operatorname{span}\operatorname{supp} \operatorname{S}(B^{n},\boldsymbol{\mathcal{C}}^{(j)},\cdot)\quad\text{for all $j\geq p$}.\] Then for all \(j\geq p\), \(\langle x_{p},\cdot\rangle\) and \(\langle x_{j},\cdot\rangle\) must agree on \(E\) because they agree on \(\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}}^{(p)},\cdot)\) and we obtain for all \(j\geq p\), \[f=\langle x_{p},\cdot\rangle\quad\text{ on }\operatorname{supp}\operatorname{S} (B^{n},\boldsymbol{\mathcal{C}}^{(j)},\cdot).\] Because \(\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}}^{(j)},\cdot)\) weakly converges to \(\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot)\), as \(j\to\infty\), it follows that \[f=\langle x_{p},\cdot\rangle\quad\text{ on }\operatorname{supp}\operatorname{S} (B^{n},\boldsymbol{\mathcal{C}},\cdot)\subseteq\operatorname{cl}\bigcup_{j=p} ^{\infty}\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}}^{ (j)},\cdot),\] which proves the assertion of the lemma. 
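For completeness, the inclusion \(\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot)\subseteq\operatorname{cl}\bigcup_{j=p}^{\infty}\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}}^{(j)},\cdot)\) used in the last step of the preceding proof is a general fact about weak convergence: if \(\nu_{j}\to\nu\) weakly and \(U\) denotes the open complement of \(\operatorname{cl}\bigcup_{j\geq p}\operatorname{supp}\nu_{j}\), then \(\nu_{j}(U)=0\) for all \(j\geq p\), and the portmanteau theorem yields \[\nu(U)\leq\liminf_{j\to\infty}\nu_{j}(U)=0,\] so that \(\operatorname{supp}\nu\cap U=\varnothing\). 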
**Remark 3.7**.: In the proof of Lemma 3.6, we implicitly showed that if \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-2})\) is an \((n-2)\)-tuple of \(\mathcal{K}_{*}\)-macroids in \(\mathbb{R}^{n}\), \(n\geq 3\), with generating measures \(\mu_{1},\ldots,\mu_{n-2}\), then there exists a sequence of \((n-2)\)-tuples \(\boldsymbol{\mathcal{Q}}^{(j)}=(Q_{1}^{(j)},\ldots,Q_{n-2}^{(j)})\in\operatorname {pos}(\mu_{1},\ldots,\mu_{n-2})\), \(j\in\mathbb{N}\), such that \(\overline{\operatorname{span}}\,Q_{i}^{(j)}=\overline{\operatorname{span}}\,C_ {i}\) for \(i\in[n-2]\) and \(\boldsymbol{\mathcal{Q}}^{(j)}\to\boldsymbol{\mathcal{C}}\) as \(j\to\infty\). We can now prove our main result. A crucial tool for our argument is the important special case of polytopes, which was already treated by Shenfeld and van Handel [24, Thm. 8.1]. Recall that a convex body is said to be smooth if for each boundary point there is a unique supporting hyperplane passing through it. In particular, smooth convex bodies are full-dimensional. **Theorem 3.8**.: _Let \(n\geq 2\). Let \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-2})\) be a supercritical \((n-2)\)-tuple of macroids or smooth convex bodies in \(\mathbb{R}^{n}\). Let \(f\) be a difference of support functions. Then \(\operatorname{S}_{f,\boldsymbol{\mathcal{C}}}=0\) if and only if \(f\) is linear on \(\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot)\)._ Proof.: Let \(n=2\). Then the assumption states that \(\operatorname{S}_{f}=0\). If \(f=h_{K}-h_{L}\) for \(K,L\in\mathcal{K}^{2}\), this implies that \(S(K,\cdot)=S(L,\cdot)\), hence \(K=L+x\) for some \(x\in\mathbb{R}^{2}\). This shows that \(f\) is linear. Finally, note that \(\operatorname{supp}\operatorname{S}(B^{n},\cdot)=\mathbb{S}^{1}\). In the following, we assume that \(n\geq 3\). It is sufficient to prove the theorem in the case where \(\boldsymbol{\mathcal{C}}\) is a supercritical tuple of macroids. The extension with the possible inclusion of smooth convex bodies follows immediately by an application of [24, Cor. 14.3]. For this, note that if a smooth convex body in \(\boldsymbol{\mathcal{C}}\) (which necessarily has full dimension) is replaced by the Euclidean unit ball (which is a zonoid and hence a polyoid), neither the condition \(\operatorname{S}_{f,\boldsymbol{\mathcal{C}}}=0\) nor the set \(\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot)\) are changed. Moreover, also the supercriticality of \(\boldsymbol{\mathcal{C}}\) is not affected by replacing a smooth body by the unit ball. Hence, it is sufficient in the following to consider a supercritical \((n-2)\)-tuple \(\boldsymbol{\mathcal{C}}\) of macroids in \(\mathbb{R}^{n}\) with \(n\geq 3\). First, we assume that \(\operatorname{S}_{f,\boldsymbol{\mathcal{C}}}=0\). Let \(\boldsymbol{\mathcal{Q}}=(Q_{1},\ldots,Q_{n-2})\in\operatorname{pos}(\mu_{1}, \ldots,\mu_{n-2})\) be such that \(\overline{\operatorname{span}}\,Q_{i}=\overline{\operatorname{span}}\,C_{i}\) for \(i\in[n-2]\). So for all \(I\subseteq[n-2]\), the tuple \((\boldsymbol{\mathcal{C}}_{I},\boldsymbol{\mathcal{Q}}_{I^{\complement}})\) is supercritical, where \(I^{\complement}:=[n-2]\setminus I\). Based on the hypothesis \(\mathrm{S}_{f,\boldsymbol{\mathcal{C}}}=0\), Lemma 3.4 allows us to sequentially replace \(C_{i}\) by \(Q_{i}\) and to finally obtain \(\mathrm{S}_{f,\boldsymbol{\mathcal{Q}}}=0\). 
Since \(\boldsymbol{\mathcal{Q}}\) is a supercritical tuple of polytopes, \(f\) is linear on \(\mathrm{supp}\,\mathrm{S}(B^{n},\boldsymbol{\mathcal{Q}},\cdot)\) by [24, Thm. 8.1]. Now the claim follows from Lemma 3.6. For the reverse direction, we assume that \(f\) is linear on \(\mathrm{supp}\,\mathrm{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot)\). Let \(K\in\mathcal{K}^{n}\) be an arbitrary convex body. Then [20, Lem. 7.6.15] (compare also [24, Lem. 8.11]) implies that \[n\,\mathrm{V}(f,K,\boldsymbol{\mathcal{C}})=\int f(u)\ \mathrm{S}(K,\boldsymbol{\mathcal{C}},\mathrm{d}u)=0.\] By the symmetry of mixed volumes, we obtain \[0=\int h_{K}(u)\ \mathrm{S}(f,\boldsymbol{\mathcal{C}},\mathrm{d}u),\] which yields \(\mathrm{S}(f,\boldsymbol{\mathcal{C}},\cdot)=0\), since differences of support functions are dense in \(C(\mathbb{S}^{n-1})\) (see e.g. [20, Lem. 1.7.8]). Finally, we obtain a characterization of the equality cases of (AFI) for supercritical tuples of macroids and smooth bodies. **Theorem 3.9**.: _Let \(K,L\subset\mathbb{R}^{n}\) be convex bodies, and let \(\boldsymbol{\mathcal{C}}=(C_{1},\ldots,C_{n-2})\) be a supercritical \((n-2)\)-tuple of macroids or smooth convex bodies in \(\mathbb{R}^{n}\)._ (a) _If_ \(\mathrm{V}(K,L,\boldsymbol{\mathcal{C}})=0\)_, then (_AFI_) holds with equality and_ \(K,L\) _are homothetic._ (b) _Let_ \(\mathrm{V}(K,L,\boldsymbol{\mathcal{C}})>0\)_. Then (_AFI_) holds with equality if and only if there are_ \(a>0\) _and_ \(x\in\mathbb{R}^{n}\) _such that_ \(h_{K}=h_{aL+x}\) _on_ \(\mathrm{supp}\,\mathrm{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot)\)_._ Proof.: (a) If \(\mathrm{V}(K,L,\boldsymbol{\mathcal{C}})=0\), then \(\mathrm{V}(K,K,\boldsymbol{\mathcal{C}})\,\mathrm{V}(L,L,\boldsymbol{\mathcal{C}})=0\), and hence (AFI) holds with equality. By symmetry, we can assume that \(\mathrm{V}(K,K,\boldsymbol{\mathcal{C}})=0\). Then also \(\mathrm{V}(K,K+L,\boldsymbol{\mathcal{C}})=0\). If \(K\) is a singleton, then \(K,L\) are homothetic. If \(K\) has dimension at least \(1\), then \(\dim(K+L)\leq 1\): otherwise \(\dim(K+L)\geq 2\), \(\dim(K)\geq 1\) and the assumed supercriticality of \(\boldsymbol{\mathcal{C}}\) would imply that \(\mathrm{V}(K,K+L,\boldsymbol{\mathcal{C}})>0\), a contradiction. In particular, \(\dim(K)=1\) and \(L\) is contained in a segment parallel to \(K\). Hence again \(K,L\) are homothetic. (b) Suppose that \(\mathrm{V}(K,L,\boldsymbol{\mathcal{C}})>0\). By [20, Thm. 7.4.2] or [24, Lem. 2.5], (AFI) holds with equality if and only if there is some \(a>0\) such that \[\mathrm{S}(K,\boldsymbol{\mathcal{C}},\cdot)=\mathrm{S}(aL,\boldsymbol{\mathcal{C}},\cdot),\] that is, \[\operatorname{S}_{f,\boldsymbol{\mathcal{C}}}=0\quad\text{with }f=h_{K}-ah_{L}. \tag{12}\] Theorem 3.8 implies that (12) holds if and only if there is some \(x\in\mathbb{R}^{n}\) such that \[f=\langle x,\cdot\rangle\quad\text{on }\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot), \tag{13}\] but clearly (13) is equivalent to \[h_{K}=h_{aL+x}\quad\text{on }\operatorname{supp}\operatorname{S}(B^{n},\boldsymbol{\mathcal{C}},\cdot),\] which proves the asserted equivalence. ## Appendix A A macroid that is not a polyoid In this section, we construct an example of a macroid that is not a polyoid, thereby showing that the class of macroids is larger than the class of polyoids. ### Zonotope kernels of polytopes Let \(K,L,M\in\mathcal{K}^{n}\) be convex bodies. 
If \(h_{K}-h_{L}=h_{M}\), then \(K\ominus L\coloneqq M\) is called the _Minkowski difference_ of \(K\) and \(L\). **Lemma A.1**.: _Let \(K\in\mathcal{K}^{n}\) be a convex body and let \(e,f\) be two linearly independent segments that are summands of \(K\). Then \(e+f\) is also a summand of \(K\)._ Proof.: To show that \(e+f\) slides freely in \(K\) (see [20, Sect. 3.2] and in particular Theorem 3.2.2 there), it suffices to consider two-dimensional slices of \(K\) parallel to \(e+f\). Hence we can reduce to the case that \(K\) is two-dimensional. Let \(\pm u\) be the normals of \(e\) and \(\pm v\) the normals of \(f\). As \(F(e,\pm v)\) are trivial, \(F(K\ominus e,\pm v)\) are translates of \(F(K,\pm v)\). So translates of \(f\) are not only contained in \(F(K,\pm v)\) but also in \(F(K\ominus e,\pm v)\). Then [20, Thm. 3.2.11] yields that \(f\) is a summand of \(K\ominus e\). This completes the proof. **Lemma A.2**.: _The function \(\zeta\colon\mathcal{P}^{n}\to\mathcal{P}^{n}\) that maps a polytope to its unique largest (i.e. inclusion-maximal) zonotope summand, centrally symmetric around the origin, is well-defined. Every zonotope summand of \(P\in\mathcal{P}^{n}\) is a summand of \(\zeta(P)\)._ Proof.: We show that every polytope \(P\) has a unique largest zonotope summand. Let \(\mathcal{Z}(P)\) denote the nonempty set of origin centered zonotope summands of \(P\). First note that summands of polytopes are polytopes (see [20, p. 157]) and polytopes that are zonoids are zonotopes (see [20, Cor. 3.5.7]). Hence the set of all origin centered zonotopes that are summands of \(P\) equals the set of all origin centered zonoids that are summands of \(P\). The latter set is compact as the intersection of a compact set (the set of centered zonoids having mean width less or equal the mean width of \(P\)) and a closed set (the set of summands of \(P\)). It follows that there is a \(Z\in\mathcal{Z}(P)\) of maximum mean width. This \(Z\) is inclusion-maximal in \(\mathcal{Z}(P)\). Let \(Y\in\mathcal{Z}(P)\). Then there are pairwise linearly independent \(x_{1},\ldots,x_{k}\in\mathbb{S}^{n-1}\) and scalars \(y_{1},\ldots,y_{k},z_{1},\ldots,z_{k}\geq 0\) such that \[Y=\sum_{i=1}^{k}y_{i}[-x_{i},x_{i}],\quad Z=\sum_{i=1}^{k}z_{i}[-x_{i},x_{i}].\] Assume for a contradiction that \(Y\) is not a summand of \(Z\). Up to reordering of the indices, it follows that \(y_{1}>z_{1}\). Then \(y_{1}[-x_{1},x_{1}]\) is a summand of \(P\), but, as \(Z\) is maximal, not a summand of \[P\ominus\sum_{i=2}^{k}z_{i}[-x_{i},x_{i}].\] Let \(\ell\in[k]\) be the largest index such that \(y_{1}[-x_{1},x_{1}]\) is a summand of \(\widetilde{P}\coloneqq P\ominus\sum_{i=2}^{\ell}z_{i}[-x_{i},x_{i}]\). Then \(l<k\) and \(z_{\ell+1}[-x_{\ell+1},x_{\ell+1}]\) is also a summand of \(\widetilde{P}\). Now Lemma A.1 shows that \(y_{1}[-x_{1},x_{1}]+z_{\ell+1}[-x_{\ell+1},x_{\ell+1}]\) is a summand of \(\widetilde{P}\), but this contradicts the maximality of \(\ell\). Hence, every \(Y\in\mathcal{Z}(P)\) is a summand of \(Z\), and there is only one maximal zonotope summand of \(P\). Next, we aim to prove that \(\zeta\) is measurable. We write \(h(K,u)=h_{K}(u)\) for the support function of \(K\in\mathcal{K}^{n}\) evaluated at \(u\in\mathbb{S}^{n-1}\). We write \(B(K,r)\) for a ball with center \(K\) and radius \(r\geq 0\) with respect to the Hausdorff metric \(d\) on the space \(\mathcal{K}^{n}\) of convex bodies. 
**Lemma A.3**.: _Let \(X\) be a separable metric space and \(f\colon X\to\mathcal{K}^{n}\) a function such that for any \(u\in\mathbb{S}^{n-1}\) and \(\lambda\in\mathbb{R}\),_ \[S_{f}(u,\lambda)\coloneqq\{x\in X\colon h(f(x),u)\geq\lambda\}\] _is closed. Then \(f\) is measurable._ Proof.: Fix some countable and dense set \(Q\subseteq\mathbb{S}^{n-1}\). Let \(K\in\mathcal{K}^{n}\) and \(r>0\). By continuity of \(h_{L}\) for every \(L\in\mathcal{K}^{n}\), \[B(K,r)=\bigcap_{u\in Q}h(\cdot,u)^{-1}([h_{K}(u)-r,h_{K}(u)+r]).\] Taking the preimage under \(f\), we get \[f^{-1}(B(K,r))=\bigcap_{u\in Q}h(f(\cdot),u)^{-1}([h_{K}(u)-r,h_{K}(u)+r]).\] By hypothesis, the superlevel sets \(S_{f}(u,\lambda)\) are closed, so the function \(h(f(\cdot),u)\) is Borel measurable for every \(u\in\mathbb{S}^{n-1}\), and hence each preimage on the right-hand side is a Borel set. Since \(Q\) is countable, \(f^{-1}(B(K,r))\) is a Borel set as well. Because balls like \(B(K,r)\) generate the Borel \(\sigma\)-algebra of \(\mathcal{K}^{n}\), this shows that \(f\) is measurable. **Lemma A.4**.: \(\zeta\) _is a measurable function._ Proof.: We apply Lemma A.3. Let \(u\in\mathbb{S}^{n-1}\) and \(\lambda\in\mathbb{R}\). It suffices to show that \[S_{\zeta}(u,\lambda)=\{P\in\mathcal{P}^{n}\colon h(\zeta(P),u)\geq\lambda\}\] is closed. Let \((P_{i})\) be a sequence in \(S_{\zeta}(u,\lambda)\) that converges to \(P\in\mathcal{P}^{n}\). Applying the Blaschke selection theorem to the bounded sequence \((\zeta(P_{i}))\), we find a subsequence \((Q_{i})\) such that the sequence \((\zeta(Q_{i}))\) converges to a centered zonoid \(Z\) that is also a summand of \(P\). Because summands of polytopes are polytopes and polytopes that are zonoids are zonotopes, it follows that \(Z\) is a zonotope. So \(Z\) is an origin centered zonotope summand of \(P\) and hence, by Lemma A.2, a summand of \(\zeta(P)\); in particular, \[h(\zeta(P),u)\geq h(Z,u)=\lim_{i\to\infty}h(\zeta(Q_{i}),u)\geq\lambda.\] So \(P\in S_{\zeta}(u,\lambda)\), proving that the latter set is closed. An application of Lemma A.3 concludes the proof. ### Admissible sequences of polytopes Let \(K\subseteq\mathbb{R}^{3}\) be a convex body. A support set \(F(K,u)\) will be called _a singleton_ or _trivial_ if it is zero-dimensional, _an edge_ if it is one-dimensional, and _a facet_ if it is two-dimensional. It should be observed that unless \(K\) is a polytope, the current definition does not imply that the normal cone of \(K\) at a point in the relative interior of an edge is two-dimensional. **Definition A.5**.: Let \((P_{i})\) be a bounded sequence of (indecomposable) polytopes in \(\mathbb{R}^{3}\) with the following properties: 1. All facets are triangles. 2. For every \(i\in\mathbb{N}\), \(P_{i}\) is \(3\)-dimensional. 3. For every \(i\in\mathbb{N}\), no two edges of \(P_{i}\) have the same direction. 4. If \(i,j\in\mathbb{N}\) are distinct and \(u\) is a facet normal of \(P_{i}\), then \(F(P_{j},u)\) is trivial. 5. If \(\ell,i,j\in\mathbb{N}\) are distinct and \(e,f,g\) are edges of \(P_{\ell},P_{i},P_{j}\), then \(e+f+g\) is \(3\)-dimensional. In particular, \(e+f\) is \(2\)-dimensional. 6. \(K\coloneqq\sum_{i=1}^{\infty}P_{i}\) is a well-defined convex body. We call such a sequence _admissible_ and \(K\) its _associated body_. **Remark A.6**.: Let \(K_{i}\), \(i\in\mathbb{N}\), and \(K\) be convex bodies in \(\mathcal{K}^{n}\). Then \(K=\sum_{i=1}^{\infty}K_{i}\) holds (where the convergence of the partial sums is meant with respect to the Hausdorff metric) if and only if \(h_{K}=\sum_{i=1}^{\infty}h_{K_{i}}\) (where the convergence holds pointwise, but then also uniformly on the unit sphere). 
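As an aside, a simple sufficient condition for the last property in Definition A.5, which may be convenient when constructing admissible sequences: if \(\sum_{i=1}^{\infty}\max_{x\in P_{i}}\|x\|<\infty\), then the partial sums \(\sum_{i=1}^{m}P_{i}\) form a Cauchy sequence with respect to the Hausdorff metric, because \[d\Bigl(\sum_{i=1}^{m^{\prime}}P_{i},\sum_{i=1}^{m}P_{i}\Bigr)\leq\sum_{i=m+1}^{m^{\prime}}\max_{x\in P_{i}}\|x\|\quad\text{for }m<m^{\prime},\] and hence \(K=\sum_{i=1}^{\infty}P_{i}\) is a well-defined convex body by the completeness of \(\mathcal{K}^{3}\) with respect to the Hausdorff metric. 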
**Remark A.7**.: Let \(P_{i},K\in\mathcal{K}^{3}\), \(i\in\mathbb{N}\), be given as in Definition A.5. Then \(K\) has at most countably many extreme points. Items three, four and five imply that if \(F(K,u)\) is an edge of \(K\), then there is a unique \(i\in\mathbb{N}\) such that \(F(P_{i},u)\) is an edge of \(P_{i}\). In this situation, \(F(K,u)\) is a translate of \(F(P_{i},u)\) and no other edge of any of the polytopes \(P_{j}\), \(j\neq i\), is parallel to \(F(K,u)\). From item four we conclude that if \(F(K,u)\) is a triangular facet, then there is a unique \(i\) such that \(F(K,u)\) is a translate of \(F(P_{i},u)\). See Lemma A.12 for further discussion. Recall that every summand of a polytope is a polytope (see [20, p. 157]). For a polytope \(Q\in\mathcal{P}^{n}\), we consider the convex cone \[\mathcal{S}(Q):=\{P\in\mathcal{P}^{n}\mid\exists R\in\mathcal{P}^{n},\alpha>0\colon Q=\alpha P+R\}.\] The elements of \(\mathcal{S}(Q)\) are called _scaled summands of \(Q\)_. **Lemma A.8**.: _Let \(Q\in\mathcal{P}^{n}\) be a polytope with macroid-generating measure \(\mu\) on \(\mathcal{P}^{n}\), that is,_ \[h_{Q}=\int h_{P}\,\mu(\mathrm{d}P).\] _Then \(\operatorname{supp}\mu\subseteq\mathcal{S}(Q)\)._ Proof.: _I._ Let \(\beta>0\) be a lower bound on the lengths of the edges of \(Q\), and let \(P\in\mathcal{S}(Q)\) be nontrivial. Then \(\frac{\beta}{\operatorname{diam}P}P\) is a summand of \(Q\), as we show first. Let \(F(P,u)\) be an edge. Since \(F(P,u)\) is a scaled summand of \(F(Q,u)\), the latter must have an edge \(e\) (which is also an edge of \(Q\)) homothetic to \(F(P,u)\). The length of \(F(P,u)\) is at most \(\operatorname{diam}P\) and the length of \(e\) is at least \(\beta\), so \(e\) contains a translate of \(\frac{\beta}{\operatorname{diam}P}F(P,u)\). Hence [20, Thm. 3.2.11] implies that \(\frac{\beta}{\operatorname{diam}P}P\) is a summand of \(Q\). _II._ The set \(\mathcal{S}(Q)\) is closed in \(\mathcal{P}^{n}\), as we show next. Let \((P_{i})\) be a sequence in \(\mathcal{S}(Q)\) converging to some \(P\in\mathcal{P}^{n}\). If \(P\) is trivial, then \(P\in\mathcal{S}(Q)\). Otherwise, we may assume that all \(P_{i}\) are nontrivial (discarding at most finitely many terms); by Step _I_, there are polytopes \(R_{i}\) such that \(Q=\frac{\beta}{\operatorname{diam}P_{i}}P_{i}+R_{i}\), and \((R_{i})\) must also converge to some \(R\in\mathcal{K}^{n}\) such that \(Q=\frac{\beta}{\operatorname{diam}P}P+R\). As \(R\) is a summand of \(Q\), it must be a polytope. So \(P\in\mathcal{S}(Q)\). _III._ Assume for a contradiction that there is some \(L\in\operatorname{supp}\mu\setminus\mathcal{S}(Q)\). Then \(\mathsf{d}\coloneqq d(L,\mathcal{S}(Q))>0\) and \(\lambda\coloneqq\mu(B(L,\mathsf{d}/2))>0\). Define convex bodies \(L^{\prime}\) and \(R\) by \[h_{L^{\prime}}\coloneqq\lambda^{-1}\int_{B(L,\mathsf{d}/2)}h_{P}\,\mu(\mathrm{d}P)\quad\text{and}\quad h_{R}\coloneqq\int_{B(L,\mathsf{d}/2)^{\complement}}h_{P}\,\mu(\mathrm{d}P)\] so that \(Q=\lambda L^{\prime}+R\). It follows that \(R\) is a polytope and \(L^{\prime}\in\mathcal{S}(Q)\), and hence \[\mathsf{d}=d(L,\mathcal{S}(Q))\leq d(L,L^{\prime})\leq\mathsf{d}/2<\mathsf{d},\] which is a contradiction. We write \(\overline{\operatorname{span}}\,A\) for the linear subspace parallel to the smallest affine subspace containing a given nonempty set \(A\subseteq\mathbb{R}^{3}\). 
**Lemma A.9**.: _For every edge \(e\) of a polytope \(P\) in \(\mathbb{R}^{3}\), there are a normal \(v\in\mathbb{S}^{2}\) of \(e\) and \(u\in\mathbb{Q}^{3}\setminus\overline{\operatorname{span}}\{e\}\) such that \(u\perp v\) and \(e=F(P,v)\)._ Proof.: Let \(U\) be the relatively open normal cone of the edge \(e\). Then \(\operatorname{span}U\) is two-dimensional, and \(U\) is open in \(\operatorname{span}U\). Choose \(w\in\mathbb{S}^{2}\cap U^{\perp}\), so that \(\operatorname{span}U=w^{\perp}\). Let \(\times\) denote the cross product. The continuous and surjective map \(\mathbb{R}^{3}\to\operatorname{span}U\), \(x\mapsto w\times x\), maps the dense set \(S\coloneqq\mathbb{Q}^{3}\setminus\overline{\operatorname{span}}\{e\}\subseteq\mathbb{R}^{3}\) to the dense set \(w\times S\coloneqq\{w\times x\colon x\in S\}\subseteq\operatorname{span}U\). Because \(U\setminus\{0\}\subseteq\operatorname{span}U\) is nonempty and open, there must be some \(\tilde{v}\in(U\setminus\{0\})\cap(w\times S)\). By construction, there is \(u\in S\) such that \(\tilde{v}=w\times u\). Now, \(v\coloneqq\left\lVert\tilde{v}\right\rVert^{-1}\!\tilde{v}\in\mathbb{S}^{2}\) is a normal of \(e\), i.e. \(e=F(P,v)\), and is orthogonal to \(u\in S=\mathbb{Q}^{3}\setminus\overline{\operatorname{span}}\{e\}\). In the following, we write \(\pi_{u^{\perp}}K\) for the orthogonal projection of \(K\) onto \(u^{\perp}\), for a vector \(u\in\mathbb{R}^{3}\setminus\{0\}\). Moreover, \(\mathrm{S}^{\prime}_{1}(L,\cdot)\) denotes the first area measure of a convex body \(L\subset u^{\perp}\) with respect to \(u^{\perp}\) as the ambient space. **Lemma A.10**.: _Let \((P_{i})\) be an admissible sequence and \(K\) its associated body together with a macroid-generating measure \(\mu\)._ _There is a set \(\mathcal{M}\subseteq\mathcal{P}^{3}\) of full \(\mu\)-measure that satisfies the following property: If some \(P\in\mathcal{M}\) has an edge with direction \(v\in\mathbb{S}^{2}\), then one of the \(P_{i}\) also has an edge in direction \(v\)._ Proof.: We intend to use Lemma A.9. If \(u\in\mathbb{R}^{3}\setminus\{0\}\), then \[\pi_{u^{\perp}}K=\sum_{i=1}^{\infty}\pi_{u^{\perp}}P_{i},\] and hence the weak continuity and the Minkowski linearity of the area measures imply that \[\mathrm{S}^{\prime}_{1}(\pi_{u^{\perp}}K,\cdot)=\sum_{i=1}^{\infty}\mathrm{S}^{\prime}_{1}(\pi_{u^{\perp}}P_{i},\cdot).\] So \(\mathrm{S}^{\prime}_{1}(\pi_{u^{\perp}}K,\cdot)\) is a discrete Borel measure (i.e., has countable support) on \(u^{\perp}\cap\mathbb{S}^{2}\). Denoting by \[\omega_{u}\coloneqq\big\{v\in u^{\perp}\cap\mathbb{S}^{2}\colon\,\mathrm{S}^{\prime}_{1}(\pi_{u^{\perp}}K,\{v\})>0\big\}\] the set of its atoms, we obtain from special cases of [11, Thm. 2.23, Lem. 3.4] that \[0=\mathrm{S}^{\prime}_{1}(\pi_{u^{\perp}}K,\omega_{u}^{\complement})=\int\mathrm{S}^{\prime}_{1}(\pi_{u^{\perp}}P,\omega_{u}^{\complement})\,\mu(\mathrm{d}P).\] So the set \[\mathcal{M}_{1}\coloneqq\bigcap_{u\in\mathbb{Q}^{3}\setminus\{0\}}\Bigl\{P\in\mathcal{P}^{3}\colon\,\mathrm{S}^{\prime}_{1}(\pi_{u^{\perp}}P,\omega_{u}^{\complement})=0\Bigr\}\] has full \(\mu\)-measure. Since for each \(u\in\mathbb{Q}^{3}\setminus\{0\}\) the set \(\omega_{u}\) is countable, the set of pairs \[C\coloneqq\big\{(u,v)\in(\mathbb{Q}^{3}\setminus\{0\})\times\mathbb{S}^{2}\colon v\in\omega_{u}\big\}\] is countable. Using the notation from Remark 2.19, we define \[\mathcal{M}_{2}\coloneqq\bigcap_{(u,v)\in C}F(\cdot,v)^{-1}(\operatorname{supp}F_{v}(\mu)).\] The set \(\mathcal{M}_{2}\) has full \(\mu\)-measure. 
To see this, note that for each \((u,v)\in C\) we have \(\mu\bigl(F(\cdot,v)^{-1}(\operatorname{supp}F_{v}(\mu))\bigr)=F_{v}(\mu)(\operatorname{supp}F_{v}(\mu))=1\), since the support of a Borel probability measure on a separable metric space has full measure. As \(C\) is countable, \(\mathcal{M}_{2}\) is a countable intersection of sets of full \(\mu\)-measure and therefore has full \(\mu\)-measure itself. Furthermore, for all \(P\in\mathcal{M}_{2}\) and \((u,v)\in C\), Lemma A.8 shows that \(F(P,v)\) is a scaled summand of \(F(K,v)\). Now let \(\mathcal{M}\coloneqq\mathcal{M}_{1}\cap\mathcal{M}_{2}\). Assume \(P\in\mathcal{M}\) and that \(e\) is an edge of \(P\). By Lemma A.9, there are \(u\in\mathbb{Q}^{3}\setminus\overline{\operatorname{span}}\,e\) and \(v\in u^{\perp}\cap\mathbb{S}^{2}\) such that \(e=F(P,v)\). In particular, \(F(\pi_{u^{\perp}}P,v)\) is nontrivial and \(\mathrm{S}^{\prime}_{1}(\pi_{u^{\perp}}P,\{v\})>0\). Since \(P\in\mathcal{M}_{1}\), it follows that \(v\in\omega_{u}\) and so \((u,v)\in C\). Since \(P\in\mathcal{M}_{2}\), this implies that \(e=F(P,v)\) is a scaled summand of the nontrivial support set \(F(K,v)\), which is then either an edge or a parallelogram. If it is an edge, it has the same direction as \(e\) and also is an edge of one of the \(P_{i}\) and we are done. If it is a parallelogram, then one of the sides of the parallelogram must have the same direction as \(e\), and one of the \(P_{i}\) has an edge with this direction. This concludes the proof. **Lemma A.11**.: _Let \((P_{i})\) be an admissible sequence and \(K\) its associated body together with a macroid-generating measure \(\mu\). Then there is a set \(\mathcal{M}^{\prime}\) of full \(\mu\)-measure such that for each \(P\in\mathcal{M}^{\prime}\) and for each \(u\in\mathbb{S}^{2}\), \(F(P,u)\) is a scaled summand of \(F(K,u)\)._ Proof.: (I) Let \(P\in\mathcal{M}\), where \(\mathcal{M}\) is as in the statement of Lemma A.10. If \(F(P,u)\) is trivial, so is the claim. If \(F(P,u)\) is an edge, then by Lemma A.10, there is \(i\in\mathbb{N}\) such that \(F(P,u)\) is homothetic to the edge \(F(P_{i},u)\) and hence a scaled summand of \(F(K,u)\). (II) Now consider the case that \(P\in\mathcal{M}\) and \(F(P,u)\) is a facet. The edges of the polytopes \(P_{i}\), \(i\in\mathbb{N}\), together have only countably many directions. Denote the countable set of these directions by \(A\subset\mathbb{S}^{2}\). The facet \(F(P,u)\) is incident to (at least) two edges with linearly independent directions \(v,w\in A\) that determine the facet normal \(u\) up to sign \(\sigma\in\{-1,1\}\) via \[\phi\colon\{-1,1\}\times\bigl\{(a,b)\in A^{2}\bigm|a\neq\pm b\bigr\}\to\mathbb{S}^{2},\quad(\sigma,a,b)\mapsto\sigma\frac{a\times b}{\|a\times b\|}.\] So the facet normals of \(P\) are contained in the countable image of \(\phi\), which is independent of the choice of \(P\). For each \(u\in\mathbb{S}^{2}\), the set \(F(K,u)\) is a polytope. Consider the set of full \(\mu\)-measure \[\mathcal{M}_{3}\coloneqq\bigcap_{u\in\operatorname{im}\phi}F(\cdot,u)^{-1}(\operatorname{supp}F_{u}(\mu)).\] If \(P\) is also in \(\mathcal{M}_{3}\), then by Lemma A.8 and \(u\in\operatorname{im}\phi\), the support set \(F(P,u)\) is a scaled summand of \(F(K,u)\). Now the assertion follows from (I) and (II) with \(\mathcal{M}^{\prime}\coloneqq\mathcal{M}\cap\mathcal{M}_{3}\). 
### Unique decomposability **Lemma A.12**.: _Let \((P_{i})\) be an admissible sequence and \(K\) its associated body together with a macroid-generating measure \(\mu\). Let \(e\) be an edge of \(F(K,u)\) for some \(u\in\mathbb{S}^{2}\). Then there is a unique \(i\in\mathbb{N}\) such that \(F(P_{i},u)\) has an edge homothetic to \(e\), \(e\) is in fact a translate of \(F(P_{i},u)\), and this edge is unique among the edges of \(P_{i}\)._ Proof.: The uniqueness statements immediately follow from the properties of admissible sequences \((P_{i})\). Note that an edge of \(F(K,u)\) need not be an edge of \(K\) as defined here. If \(F(K,u)\) is a singleton, it does not have any edges. If \(F(K,u)\) is an edge, then there is \(i\in\mathbb{N}\) such that \(F(P_{i},u)\) is a translate of \(F(K,u)\). If \(F(K,u)\) is a triangle, then there is \(i\in\mathbb{N}\) such that \(F(P_{i},u)\) is a translate of \(F(K,u)\). So a translate of \(e\) is an edge of \(F(P_{i},u)\). If \(F(K,u)\) is a parallelogram, then there are unique \(i,j\in\mathbb{N}\) with \(i\neq j\) such that \(F(P_{i},u)\) is a translate of an edge of \(P_{i}\), \(F(P_{j},u)\) is a translate of an edge of \(P_{j}\) and \(F(P_{i},u)+F(P_{j},u)\) is a translate of \(F(K,u)\). So \(e\) is either a translate of \(F(P_{i},u)\) or of \(F(P_{j},u)\). **Lemma A.13**.: _Let \((P_{n})\) be an admissible sequence and \(K\) its associated body together with a macroid-generating measure \(\mu\)._ _Then the polytopes with a nontrivial zonotope summand are contained in a \(\mu\)-zero set \(\mathcal{N}\)._ Proof.: Recall the measurable function \(\zeta\) from Lemmas A.2 and A.4. The macroid \(Z\) generated by \(\zeta(\mu)\), the image measure of \(\mu\) under the map \(\zeta\), is a zonoid and summand of \(K\), implying \[\mathrm{S}_{2}(Z,\cdot)\ll\mathrm{S}_{2}(K,\cdot)=\sum_{n=1}^{\infty}\sum_{m= 1}^{\infty}\mathrm{S}(P_{n},P_{m},\cdot).\] Because the right-hand side is a discrete measure, so is \(\mathrm{S}_{2}(Z,\cdot)\). If we can show that \(Z\) has no facets, then \(\mathrm{S}_{2}(Z,\cdot)\) is a discrete measure without atoms, hence zero. Then \(Z\) is at most one-dimensional. If we can also show that \(Z\) has no edges, then \(Z\) must be trivial, and the set \(\mathcal{N}\) of polytopes \(P\) with nontrivial \(\zeta(P)\) is a \(\mu\)-zero set, proving the claim. It remains to show that \(Z\) has no facets or edges. We aim at a contradiction and assume that \(F(Z,u)\) is a facet or an edge. Since \(F(Z,u)\) is a summand of the polytope \(F(K,u)\), it has an edge \(e\) that is homothetic to an edge of \(F(K,u)\). Because \(Z\) is centrally symmetric around the origin, \(F(Z,-u)=-F(Z,u)\) also has an edge that is a translate of \(e\), and therefore \(F(K,-u)\) has an edge that is homothetic to \(e\). Hence, \(F(K,u)\) and \(F(K,-u)\) both contain an edge homothetic to \(e\). By Lemma A.12, and especially the uniqueness statement, there is \(i\in\mathbb{N}\) such that \(F(P_{i},u)\) and \(F(P_{i},-u)\) intersect in the very same edge homothetic to \(e\). But this contradicts \(P_{i}\) being \(3\)-dimensional, and \(Z\) cannot have edges or facets. This completes the proof. 
**Lemma A.14**.: _Let \((P_{i})\) be an admissible sequence and \(K\) its associated body together with a macroid-generating measure \(\mu\)._ _Then \(\mu\) is supported in translates of \(\operatorname{pos}((P_{i})_{i})\), the set of translates of finite positive (Minkowski) combinations of polytopes from the sequence \((P_{i})_{i\in\mathbb{N}}\)._ Proof.: By Lemmas A.11 and A.13, \(\mu\) is supported in the polytopes \(P\) that have no nontrivial zonotope summand and such that for all \(u\in\mathbb{S}^{2}\), the support set \(F(P,u)\) is a scaled summand of \(F(K,u)\). Let \(\mathcal{M}_{4}\) be a set of such polytopes of full \(\mu\)-measure, and let \(P\in\mathcal{M}_{4}\). For the proof, we may assume that \(P\) is nontrivial. (i) All facets of \(K\) are triangles and parallelograms. The only scaled summands of a triangle are homothets of that triangle; the only scaled summands of a parallelogram are (possibly degenerate) parallelograms with the same edge directions but possibly different proportions. So all facets of \(P\) are of this kind. (ii) Let \(u\in\mathbb{S}^{2}\) be such that \(F(P,u)\) is a triangular facet. Then \(F(K,u)\) is homothetic to \(F(P,u)\), that is, there are unique \(\alpha_{u}>0\) and \(t_{u}\in\mathbb{R}^{n}\) such that \(F(P,u)=\alpha_{u}F(K,u)+t_{u}\). Moreover, there are unique \(i=i(u)\in\mathbb{N}\) and \(t^{\prime}_{u}\in\mathbb{R}^{n}\) such that \(F(K,u)\) is a translate of \(F(P_{i},u)\) and \(F(P,u)=\alpha_{u}F(P_{i},u)+t^{\prime}_{u}\). Also note that there are at most two triangular facets of \(P\) that have an edge parallel to a fixed direction; otherwise, there would be an \(i\in\mathbb{N}\) such that \(P_{i}\) also had more than two such facets, contradicting the hypothesis that \(P_{i}\) has at most one edge parallel to a given direction. (iii) We observe that \(\dim P=3\). Recall that \(P\) is nontrivial. If \(\dim P=1\), then \(P\) is a segment, which is a zonotope, a contradiction. If \(\dim P=2\), then \(P\) is a triangle or a non-degenerate parallelogram, which is a zonotope. The latter is excluded. Hence \(P\) is a triangle with \(P=F(P,v)=F(P,-v)\) for a unit vector \(v\). Let \(e\) be an edge of \(P\). Then \(P_{i(v)}\) and \(P_{i(-v)}\) both contain an edge parallel to \(e\). Hence, \(i\coloneqq i(v)=i(-v)\) and \(F(P_{i},v)=F(P_{i},-v)\) and thus \(\dim P_{i}=2\), a contradiction. (iv) Let \(G\) be the graph with the edges of \(P\) as \(G\)-vertices, where two edges, i.e. \(G\)-vertices, are connected if and only if they are opposite edges in a parallelogram facet of \(P\). Since every edge is only part of two facets, the maximum degree of a \(G\)-vertex, i.e. an edge of \(P\), is two. The connected components of \(G\) are cycles or chains. Let us first make sure that no cycles can occur. Assume that the edge \(e\) of \(P\) with direction \(u\) is part of a cycle. Then \(\pi_{u^{\perp}}P\) is a convex polygon and \(\pi_{u^{\perp}}e\) is one of its vertices. The two edges incident to \(\pi_{u^{\perp}}e\) are projections of parallelograms that connect \(e\) to the two neighbors of \(e\) in \(G\), and an induction shows that all support sets of \(\pi_{u^{\perp}}P\) either are projections of edges parallel to \(e\) or parallelograms connecting two such edges. For the sake of applying [20, Thm. 3.2.22], let \(F(e,v)\) be an edge of \(e\). In this case, \(v\in u^{\perp}\), and so \(e\) is a summand of \(F(P,v)\). Then [20, Thm. 3.2.22] guarantees that \(e\) is a summand of \(P\), in contradiction to \(P\) having no nontrivial zonotope summand. Therefore, the connected component of any edge \(e\) of \(P\) is a chain \(e_{1}-\cdots-e_{k}\) of \(e\)-translates. The endpoints of this chain must be edges of two triangular facets of \(P\). By (ii), there can be no other chain with edges parallel to \(e_{1},\ldots,e_{k}\). So if \(f\) is an edge parallel to \(e\), then \(f=e_{j}\) for some \(j\in[k]\) and \(f\) is a translate of \(e\). Moreover, for any edge \(e\) of \(P\) there are exactly two triangular facets of \(P\) with an edge parallel to \(e\). (v) Let \(u,v\in\mathbb{S}^{2}\), and \(i\coloneqq i(u)\) as in (ii), such that \(F(P,u)\) is a triangle and \(F(P_{i},v)\) is a facet adjacent to \(F(P_{i},u)\) via an edge \(e\). By (iv), there is exactly one \(w\in\mathbb{S}^{2}\) besides \(u\) such that \(F(P,w)\) is a triangle with an edge parallel to \(e\). By (ii), \(F(P_{i},w)\neq F(P_{i},u)\) is then also a triangle with an edge parallel to \(e\). Because \(P_{i}\) contains no other edge parallel to \(e\), it follows that \(v=w\). So we have \[F(P,u)=\alpha_{u}F(P_{i},u)+t_{u},\quad F(P,v)=\alpha_{v}F(P_{i},v)+t_{v}.\] By (iv), all edges of \(P\) parallel to \(e\) are translates of each other, hence it follows that \(\alpha_{u}=\alpha_{v}\). (vi) Let \(u,v\in\mathbb{S}^{2}\), and \(i\coloneqq i(u)\) as in (ii), such that \(F(P,u)\) and \(F(P_{i},v)\) are triangles. Then the triangles \(F(P_{i},u)\) and \(F(P_{i},v)\) are connected via a chain of neighboring facets. Iteration of (v) shows that \(F(P,v)\) is a triangle and \(\alpha_{u}=\alpha_{v}\). So \(\alpha_{u}\) only depends on \(P\) and \(i(u)\), and we set \(\alpha_{i(u)}(P)\coloneqq\alpha_{u}>0\). If \(i\in\mathbb{N}\) and \(P\) contains no triangular facet \(F(P,w)\) with \(i(w)=i\), then we set \(\alpha_{i}(P)\coloneqq 0\). For each \(i\in\mathbb{N}\), there are uncountably many edge normals of \(P_{i}\) but only countably many facet normals of \(K\). Let \(u\in\mathbb{S}^{2}\) be such that \(F(P_{i},u)\) is an edge and \(F(K,u)\) is not a facet and in fact a translate of \(F(P_{i},u)\). Then for each \(P\in\mathcal{M}_{4}\), \(F(P,u)\) is a summand of \(F(K,u)\) and satisfies \(\operatorname{V}(F(P,u))=\alpha_{i}(P)\operatorname{V}(F(P_{i},u))\). This shows that \(\mathcal{M}_{4}\ni P\mapsto\alpha_{i}(P)\) is measurable, \[\operatorname{V}(F(K,u))=\int_{\mathcal{M}_{4}}\operatorname{V}(F(P,u))\,\mu(\mathrm{d}P)=\int_{\mathcal{M}_{4}}\alpha_{i}(P)\,\mu(\mathrm{d}P)\operatorname{V}(F(P_{i},u))\] and thus we get \[\int_{\mathcal{M}_{4}}\alpha_{i}(P)\,\mu(\mathrm{d}P)=1.\] (14) (vii) Let \(\widetilde{P}\coloneqq\sum_{i=1}^{\infty}\alpha_{i}(P)P_{i}\), involving only finitely many nonzero summands. Note that \(\dim\widetilde{P}=3\), since \(\alpha_{i}(P)>0\) for some \(i\in\mathbb{N}\). Every facet of \(P\) or \(\widetilde{P}\) is either triangular or a parallelogram. The preceding items show that the triangular facets of \(P\) are translates of the triangular facets of \(\widetilde{P}\), and vice versa. (viii) It remains to consider the parallelogram facets. Let \(u\in\mathbb{S}^{2}\). If \(F(K,u)\) is not a parallelogram, it is a singleton, an edge or a triangle. For all \(P\in\mathcal{M}_{4}\), neither \(F(P,u)\) nor \(F(\widetilde{P},u)\) is then a parallelogram. (ix) We consider the situation from (viii). From now on, we assume that \(F(K,u)\) is a parallelogram. We choose \(v,w\in\mathbb{S}^{2}\cap u^{\perp}\) such that the edges of \(F(K,u)\) are \(F(F(K,u),\pm v)\) and \(F(F(K,u),\pm w)\). There are unique distinct \(i,j\in\mathbb{N}\) such that \(F(F(K,u),\pm v)\) are translates of \(F(P_{i},u)\) and \(F(F(K,u),\pm w)\) are translates of \(F(P_{j},u)\). Let \(P\in\mathcal{M}_{4}\). Then \(F(\widetilde{P},u)\) is a translate of \[\alpha_{i}(P)F(P_{i},u)+\alpha_{j}(P)F(P_{j},u),\] which might be a singleton, an edge or a parallelogram. On the other hand, \(F(P,u)\) is a translate of \[F(F(P,u),v)+F(F(P,u),w).\] (x) We consider the situation from (ix) and aim to show that \(F(P,u)\) and \(F(\widetilde{P},u)\) are translates of each other. In the current item, we show that the conclusion holds at least if \(P\) is taken from a subset of \(\mathcal{M}_{4}\) of full measure. The argument will be completed in (xi). Let \(P\in\mathcal{M}_{4}\). If \(F(F(P,u),v)\) is not a singleton, then it is an edge parallel to \(F(P_{i},u)\). Hence it must be a translate of \(\alpha_{i}(P)F(P_{i},u)\). In either case, \[\mathrm{V}(F(F(P,u),v))\leq\alpha_{i}(P)\,\mathrm{V}(F(P_{i},u)).\] (15) Relation (15) can be used to bound the integrand in \[\mathrm{V}(F(P_{i},u))=\mathrm{V}(F(F(K,u),v))=\int_{\mathcal{M}_{4}}\mathrm{V}(F(F(P,u),v))\,\mu(\mathrm{d}P).\] But then (14) from (vi) implies that equality must hold in (15), for \(\mu\)-almost all polytopes \(P\in\mathcal{M}_{4}\). Hence, \(F(F(P,u),v)\) is a translate of \(\alpha_{i}(P)F(P_{i},u)\), and a similar argument shows that \(F(F(P,u),w)\) is a translate of \(\alpha_{j}(P)F(P_{j},u)\), for \(\mu\)-almost all \(P\in\mathcal{M}_{4}\). So there is a measurable set \(\mathcal{M}_{5}(u)\subseteq\mathcal{M}_{4}\) of full \(\mu\)-measure such that for all \(P\in\mathcal{M}_{5}(u)\), a translate of \(F(P,u)\) is \[\alpha_{i}(P)F(P_{i},u)+\alpha_{j}(P)F(P_{j},u),\] and hence also a translate of \(F(\widetilde{P},u)\). (xi) Finally, set \(\mathcal{M}_{5}\coloneqq\bigcap_{u}\mathcal{M}_{5}(u)\), where we take the countable intersection over all parallelogram facet normals \(u\) of \(K\). Then \(\mathcal{M}_{5}\) is a measurable set of full \(\mu\)-measure and for all \(P\in\mathcal{M}_{5}\) and \(u\in\mathbb{S}^{2}\), \(F(P,u)\) is a parallelogram if and only if \(F(\widetilde{P},u)\) is, and in this case both are translates of each other: When \(F(K,u)\) is a parallelogram, it follows from (x) that \(F(P,u)\) and \(F(\widetilde{P},u)\) are translates, and if it is not, neither of them is a parallelogram due to (viii). The proof is concluded by an application of Minkowski's uniqueness theorem for area measures of convex polytopes. **Lemma A.15**.: _Let \((P_{i})\) be an admissible sequence and \(K\) its associated body together with a macroid-generating measure \(\mu\)._ _Then for all \(i\in\mathbb{N}\) there is \(P\in\operatorname{supp}\mu\) such that \(P_{i}\) is a scaled summand of \(P\)._ Proof.: Let \(u\) be the normal of a (necessarily triangular) facet of \(P_{i}\). Then \(F(K,u)\) is a translate of this triangular facet. Clearly, \(\mu\) is concentrated on \(\operatorname{supp}\mu\) and, according to Lemma A.14, on the set of translates of all finite positive combinations of the \(P_{j}\), \(j\in\mathbb{N}\). By (9) we have \[h_{F(K,u)}=\int h_{F(P,u)}\,\mu(\mathrm{d}P),\] hence there is \(P\in\operatorname{supp}\mu\), a translate of an element of \(\operatorname{pos}((P_{j})_{j})\), such that \(F(P,u)\) is nontrivial. This can only be the case if \(P_{i}\) is a scaled summand in the finite positive combination defining \(P\), as \(F(P_{j},u)\) is trivial for all \(j\neq i\). **Theorem A.16**.: _Let \((P_{i})\) be an admissible sequence and \(K\) its associated body. 
If for all \(i\in\mathbb{N}\) the body \(P_{i}\) has at least \(i\) vertices, then \(K\) is not a polyoid, although it is a macroid._ Proof.: Assume that \(K\) is a \(k\)-polyoid with a generating measure \(\mu\) supported in the space of \(k\)-topes. Lemma A.15 shows that \(P_{k+1}\) is a scaled summand of some \(P\in\operatorname{supp}\mu\). But then \(P\) is not a \(k\)-tope (see, e.g., [4, Lem. 2.3]), in contradiction to the property \(\operatorname{supp}\mu\subseteq\mathcal{P}_{k}^{3}\) of \(\mu\). So \(K\) is not a \(k\)-polyoid for any \(k\in\mathbb{N}\). **Remark A.17**.: Let \((P_{i})_{i\geq 4}\) be a bounded sequence of polytopes such that \(P_{i}\) is a \(3\)-dimensional \(i\)-tope having only triangular faces with no edge direction occurring twice. When we apply independent uniform random rotations to each of the \(P_{i}\), we obtain almost surely an admissible sequence. This way, we can construct a macroid that is not a polyoid. **Acknowledgements.** D. Hug was supported by DFG research grant HU 1874/5-1 (SPP 2265). The authors are grateful to Ramon van Handel for helpful comments on an earlier version of the manuscript.
2309.08482
YCB-Ev 1.1: Event-vision dataset for 6DoF object pose estimation
Our work introduces the YCB-Ev dataset, which contains synchronized RGB-D frames and event data that enables evaluating 6DoF object pose estimation algorithms using these modalities. This dataset provides ground truth 6DoF object poses for the same 21 YCB objects that were used in the YCB-Video (YCB-V) dataset, allowing for cross-dataset algorithm performance evaluation. The dataset consists of 21 synchronized event and RGB-D sequences, totalling 13,851 frames (7 minutes and 43 seconds of event data). Notably, 12 of these sequences feature the same object arrangement as the YCB-V subset used in the BOP challenge. Ground truth poses are generated by detecting objects in the RGB-D frames, interpolating the poses to align with the event timestamps, and then transferring them to the event coordinate frame using extrinsic calibration. Our dataset is the first to provide ground truth 6DoF pose data for event streams. Furthermore, we evaluate the generalization capabilities of two state-of-the-art algorithms, which were pre-trained for the BOP challenge, using our novel YCB-V sequences. The dataset is publicly available at https://github.com/paroj/ycbev.
Pavel Rojtberg, Thomas Pöllabauer
2023-09-15T15:42:00Z
http://arxiv.org/abs/2309.08482v2
# YCB-Ev: Event-vision dataset for 6DoF object pose estimation

###### Abstract

Our work introduces the YCB-Ev dataset, which contains synchronized RGB-D frames and event data that enables evaluating 6DoF object pose estimation algorithms using these modalities. This dataset provides ground truth 6DoF object poses for the same 21 YCB objects [1] that were used in the YCB-Video (YCB-V) dataset [21], enabling the evaluation of algorithm performance when transferred across datasets. The dataset consists of 21 synchronized event and RGB-D sequences, amounting to a total of 7:43 minutes of video. Notably, 12 of these sequences feature the same object arrangement as the YCB-V subset used in the BOP challenge [18]. Our dataset is the first to provide ground truth 6DoF pose data for event streams. Furthermore, we evaluate the generalization capabilities of two state-of-the-art algorithms, which were pre-trained for the BOP challenge, using our novel YCB-V sequences. The proposed dataset is available at [https://github.com/paroj/ycbev](https://github.com/paroj/ycbev).

I.2.10 [ARTIFICIAL INTELLIGENCE] Vision and Scene Understanding--Modeling and recovery of physical attributes; I.5.5 [PATTERN RECOGNITION] Implementation--Interactive systems

## 1 Introduction

The ability to perform real-time object detection and 6D pose estimation is essential for applications in augmented reality, virtual reality, and robotics. The progress in this field is evaluated through the BOP challenge [18], which ranks algorithms and publishes a leader-board annually. To this end, a variety of datasets are utilized, differing in terms of captured modality (e.g., RGB color, depth) and type of objects (e.g., household items, industrial objects). In this context, the algorithms are also evaluated in terms of their ability to generalize from synthetic data to real-world conditions. Since acquiring ground truth data for 6D poses is challenging, many algorithms are trained on synthetic renderings, where the pose is easily accessible. However, even with the use of a physically-based rendering (PBR) pipeline, a domain gap exists between the renderings and real images. This gap is a specific type of dataset bias [19]. Although the domain gap is measured in the BOP challenge by evaluating algorithms solely on PBR images, the broader impact of dataset bias may extend to neglected capturing effects such as camera noise, motion blur, and lighting. Event-based or neuromorphic cameras provide a novel capturing modality that offers several benefits over classical frame-based cameras, such as high-frequency output, high dynamic range, and lower power consumption. However, their sensor output is a sparse, asynchronous image representation, which differs from traditional, dense images. Instead of reading the entire sensor at once, individual pixels trigger asynchronously when the brightness difference crosses a threshold, generating events at the pixel location that carry the polarity of the threshold (see Figure 1). In the past, only small toy datasets were accessible for event cameras [12, 15]. Even now, only a limited number of automotive-centered datasets [16, 3] are available, and they only consider the task of 2D object detection. Although it is possible to convert RGB datasets for pose estimation into events using the vid2e tool [6], there is no publicly available real-world dataset for this task that we are aware of.
The YCB-Video (YCB-V) dataset [21] is a notable choice from the datasets utilized in the BOP challenge as it not only provides 3D data enabling the generation of synthetic renderings, but also offers the opportunity to obtain physical objects from the YCB organizers [1]. In this work, we acquired the physical YCB objects and recreated the sequences of the YCB-V dataset. While the original dataset was captured with an Asus Xtion Pro Live RGB-D camera, we used a more modern Intel RealSense D435 camera, which allowed us to capture color and depth at the full FOV of 1280x720 px at 30 fps, without the need for cropping. Simultaneously, we captured event data using the Prophesee EVK2 camera, which was calibrated to the color camera (see Figure 1). Based on the above, our key contributions are: (1) a real-world event dataset with ground-truth poses, and (2) new YCB-V sequences that we used to evaluate the generalization capabilities of top-performing algorithms from the BOP challenge trained on YCB-V. This paper is structured as follows: Section 2 describes our data acquisition and labelling pipeline. Section 3 explains the structure and storage of the captured data, while Section 4 presents the results of the generalization evaluation we performed. We conclude with Section 5, which summarizes our results, discusses the limitations, and outlines potential future work.

## 2 Data capturing and labelling

In this section, we describe our approach for capturing high-quality pose data to annotate the event-camera stream. Fig. 1 illustrates our capturing setup. We chose the Intel Realsense D435 RGB-D camera over the newer and more capable Microsoft Azure Kinect DK as the former allows passive depth capturing. The pattern projected by the IR projector in the active depth capturing mode is picked up by the event camera, thus rendering the event stream unusable. Despite using the passive mode, the depth quality is still comparable to the YCB-V dataset, as shown in Figure 2. Since there are no established algorithms for pose estimation on event data, we instead generate poses using the RGB data and transfer them to the coordinate frame of the event camera via stereo calibration.

### Calibration

Fig. 3 depicts the setup employed to obtain calibration data for both the intrinsic calibration of the individual cameras and the extrinsic calibration needed for stereo alignment. We followed the approach described in [11] by using a flashing blob pattern displayed on a screen. Utilizing a screen instead of a printed pattern offers precision advantages, even when calibrating conventional RGB cameras. We adopted the calibration approach outlined by [13] to ensure reliable calibration data acquisition.

### Pose annotations

To generate ground truth poses using RGB data, we follow this procedure: first, we utilize the state-of-the-art GDRNPP [9] algorithm, which was the winner of the BOP 2022 [18] pose estimation challenge, to obtain a rough estimate of the objects' poses. GDRNPP leverages YOLOX detections [5] for this purpose. Next, we employ the SRT3D algorithm [17] for local refinement and frame-to-frame tracking. This allows us to obtain a precise pose estimate for each frame, even under fast camera movements. SRT3D's robustness in such scenarios ensures that we have a reliable pose estimate when global pose estimation would otherwise fail. By following this pipeline, we are able to generate highly accurate ground truth poses for the RGB data used in our experiments.
We then transfer these poses into the event camera coordinate frame as described in the following section.

Figure 2: Our dataset (left) has depth quality comparable to the YCB-V dataset (right).

Figure 3: Our method for joint event and RGB camera calibration relies on a flashing blob pattern that can be detected by both sensor technologies.

### RGB and Event data synchronization

The color images are captured at fixed time intervals determined by the frame rate. In contrast, the event data stream is continuous and does not have fixed time intervals. This makes it difficult to use simple synchronization techniques, such as using an external hardware trigger, that are commonly used in stereo camera systems. Instead, we resort to displaying a blinking counter on a screen that can be observed in both color and event images. The counter was captured at the beginning of each sequence. The method of using a blinking counter on a screen to synchronize color images with event data has a drawback in that it has a minimum possible latency. The counter's blinking frequency is not relevant as the on and off events provide discrete time points. However, the color camera operates at 30 frames per second, resulting in a 33 ms exposure time. For instance, if the counter turns off at the end of the exposure, it may take 33 ms for the corresponding "off" events to be observed in the RGB frame. In practice, the event data must be resampled to a fixed window size for visualization. While the window size can be as low as 1 ms without any issues, this still limits the synchronization accuracy to 33 ms.

## 3 Dataset structure and usage

This section focuses on the data format and structures employed by the supplementary programs for reading and processing the dataset. It is important to note that while we only provide ground truth 6D pose data, additional annotations such as 2D and 3D bounding boxes and per-pixel segmentation can be easily generated from the available 3D meshes by performing rasterization using the provided poses. At the top level of our dataset, there are three files that describe the parameters that remain constant throughout the entire dataset:

* calib_realsense.json contains the intrinsic calibration for the D435 color camera. The images are undistorted by the camera, hence all distortion coefficients are zero. This is also available in BOP format as camera_d435.json. Furthermore, the depth intrinsics and the depth-to-color extrinsics are provided, as the depth images are not aligned.
* calib_prophesee.json contains the intrinsic calibration for the EVK2 camera. Image distortion must be taken into account for correct pose estimation.
* calib_stereo_c2ev.json contains the extrinsic transformation from the D435 color coordinate frame to the EVK2 coordinate frame.

Each captured sequence is stored in a subfolder that contains the sequence data in the format specified by the BOP dataset. The subfolder contains the following contents:

* rgb contains the RGB color images saved in JPEG format.
* depth contains per-pixel depth in millimeters, saved in 16-bit PNG format, synchronized with the RGB frames by the camera.
* scene_gt.json contains the ground truth poses for the RGB frames.
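Since the per-sequence annotations follow the BOP convention, they can be parsed with a few lines of Python. The following is a minimal sketch, not the official tooling; it assumes the standard BOP scene_gt.json keys (obj_id, cam_R_m2c, cam_t_m2c) and the millimeter depth encoding described above, and any other names are illustrative only.

```python
import json
import numpy as np
import cv2  # used here only to read the 16-bit depth PNGs


def load_scene_gt(path):
    """Parse a BOP-style scene_gt.json into {frame_id: [(obj_id, R, t), ...]}.

    R is a 3x3 rotation matrix and t a translation vector in millimeters,
    both expressing the object pose in the RGB camera frame.
    """
    with open(path) as f:
        raw = json.load(f)
    scene = {}
    for frame_id, annos in raw.items():
        poses = []
        for a in annos:
            R = np.asarray(a["cam_R_m2c"], dtype=np.float64).reshape(3, 3)
            t = np.asarray(a["cam_t_m2c"], dtype=np.float64).reshape(3)
            poses.append((a["obj_id"], R, t))
        scene[int(frame_id)] = poses
    return scene


def load_depth_m(path):
    """Read a 16-bit depth PNG (values in millimeters) and return meters."""
    depth_mm = cv2.imread(path, cv2.IMREAD_UNCHANGED).astype(np.float32)
    return depth_mm / 1000.0
```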
The event stream that is synchronized with the RGB frames is stored outside the subfolder as NN_events.int32.zst. Due to the lack of a standard format for storing event data, we have developed a custom binary format. Our goal is to create a compact file format that is easy to transmit and accessible on different platforms, especially Python and NumPy. We only store the events without any accompanying metadata. Similar to the Prophesee DAT file format [14], each event is represented by two 32-bit integer numbers. The first integer stores the timestamp in microseconds, while the second integer stores the packed polarity and x, y coordinates of the event. The integer is arranged in little-endian order, with the first 14 bits storing the x-coordinate, the next 14 bits storing the y-coordinate, and the remaining 4 bits encoding the polarity. To further minimize file size, we compress the event stream using Zstandard compression [2]. This method reduces the file size by roughly 60% while providing fast decompression, enabling the data to be decompressed on-the-fly and stored in its compressed form on disk.

### Aligning ground truth poses to event data

The provided ground truth poses are in the frame of reference of the RGB camera. In order to align them with the event camera, the rigid stereo transformation must be applied to the pose data. Additionally, the poses are only provided for discrete time steps, while the event data stream is essentially continuous. Typically, events are processed by accumulating them over a certain window size [4], as a single event contains limited information. In this work, events are depicted in the form of 2D histograms (see Figure 1). However, in this scenario, the pose data is only accurate if the histogram window matches the frame rate of the RGB camera, meaning a 33 ms window size is used. For different window sizes, the poses must be interpolated to the current time step of the event window. Simple linear interpolation can be used for the positions, while spherical linear interpolation is required for the rotations.

### Sequences

Our dataset consists of 21 sequences with a total runtime of 7:43 min. The sequences vary in object arrangement and lighting conditions. The first 12 sequences correspond to objects and arrangements from sequences 48 to 59 in the original YCB-V dataset (see Figure 4). This particular subset of sequences is also used in the BOP challenge. Several of the original YCB objects are no longer available and have been replaced by the YCB organizers. This change affects three objects, "power drill", "pitcher", and "coffee can", from the set of YCB-V objects. This adds an additional challenge of domain adaptation, apart from the different capturing conditions. In contrast to the YCB-V dataset, our sequences also contain fast camera motion, and specific sequences are captured in low-light conditions (see Figure 5). Some sequences in our dataset have challenging capturing conditions, which means that not every frame contains valid ground truth poses for all visible objects. Therefore, frame and object whitelisting is necessary for accurate evaluation. It's worth mentioning that sequence 6 has valid annotations for all objects in all frames.

### Supplementary programs

An example of decoding the Zstandard compressed events and pose interpolation to an arbitrary event histogram size can be found in the supplementary NumPy-based program. In addition, we provide an example of aligning the depth images to the color images using the given calibration data.
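The format description above translates almost directly into NumPy. The snippet below is a simplified sketch of what the supplementary program does, not a copy of it: it assumes that the x/y/polarity fields are packed starting from the least-significant bit, which should be verified against the released code, and the helper names are illustrative.

```python
import numpy as np
import zstandard as zstd
from scipy.spatial.transform import Rotation, Slerp


def read_events(path):
    """Decode NN_events.int32.zst into (t_us, x, y, polarity) arrays.

    Assumes x occupies the 14 least-significant bits of the packed word,
    y the next 14 bits, and the polarity the top 4 bits.
    """
    dctx = zstd.ZstdDecompressor()
    with open(path, "rb") as f, dctx.stream_reader(f) as reader:
        raw = reader.read()
    words = np.frombuffer(raw, dtype="<i4").reshape(-1, 2)
    t_us = words[:, 0]
    packed = words[:, 1].astype(np.uint32)
    x = packed & 0x3FFF
    y = (packed >> 14) & 0x3FFF
    pol = (packed >> 28) & 0xF
    return t_us, x, y, pol


def interpolate_pose(t_query_us, t_frames_us, R_frames, t_frames_mm):
    """Interpolate the RGB-frame poses to an arbitrary event timestamp."""
    slerp = Slerp(t_frames_us, Rotation.from_matrix(R_frames))
    R_q = slerp([t_query_us]).as_matrix()[0]
    t_q = np.array([np.interp(t_query_us, t_frames_us, t_frames_mm[:, k])
                    for k in range(3)])
    return R_q, t_q


def to_event_frame(R_obj2c, t_obj2c, R_c2ev, t_c2ev):
    """Transfer an object pose from the color frame to the event frame."""
    return R_c2ev @ R_obj2c, R_c2ev @ t_obj2c + t_c2ev
```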
### Frame-drops of RGB-D camera

We encountered frame-drops of the Realsense camera. These frame-drops are reflected in the dataset by gaps in the RGB and depth image numbering and consequently the ground-truth poses. To handle these gaps, the pose can be interpolated between adjacent frames that are now further than 33 ms apart, but this is not handled in the supplementary programs.

Table 1: Overview of related, YCB-based datasets

| Dataset | object classes | scenes | high-res¹ | real data | depth | event data |
|---|---|---|---|---|---|---|
| YCB [1] | 77 | 77² | ✓ | ✓ | ✓ | ✗ |
| Falling things [20] | 21 | 3075 | ✗ | ✗ | ✓ | ✗ |
| YCB-V [21] | 21 | 92 | ✗ | ✓ | ✓ | ✗ |
| YCB-M [7] | 20 | 32 | ✓ | ✓ | ✓ | ✗ |
| YCB-Ev (ours) | 21 | 21 | ✓ | ✓ | ✓ | ✓ |

¹ The image resolution is at least HD (1280x720px). ² There is only one object per scene in the YCB dataset.

Table 2: An overview of the sequence arrangements in our dataset.

| Sequence Nr. | Description |
|---|---|
| 1-12 | Arrangements corresponding to the BOP subset of YCB-V. |
| 13-15 | Arrangements with more objects, placed at the frame border. |
| 16, 19 | Additional YCB objects serving as occluders. |
| 17 | Seq. 16 arrangement with lights off. |
| 18 | Seq. 1 arrangement with lights off. |
| 20, 21 | Clumped arrangement with many occlusions. |

Figure 4: The object arrangement in our dataset (left) corresponds to the object arrangement in the YCB-V dataset (right).

Figure 5: Our dataset contains challenging frames that exhibit fast camera motion and low-light conditions.

## 4 Dataset bias experiments

In this section, we evaluate pose-estimation algorithms on our novel dataset, focusing only on the RGB setting without further refinement. As the RCNN detector from [8] failed on our dataset, we use the YOLOX detector as trained in [9]. To ensure a fair comparison, we replicate the outcomes on the YCB-V dataset using YOLOX detections for all algorithms. It is worth highlighting that this initial evaluation does not capture all aspects of the dataset. Specifically, it does not include pose detection on the event data. Table 3 presents the evaluation results on the 12 sequences that match between YCB-V and our dataset (see Table 2). The first row indicates the average detection recall using YOLOX. Here, we calculate the percentage of detected ground truth objects. We observe a 35.7 point decrease in performance with YOLOX, likely due to a dataset bias affecting the detector. Similarly to BOP22, we then calculate the average pose estimation recall for correctly detected objects. For simplicity, we use a 2 cm threshold on the translation vector, disregarding orientation and the need for handling object symmetry. The results in Table 3 show that GDRNPP, the best algorithm of 2022, is less prone to dataset bias compared to CosyPose, the best algorithm of 2020, which is consistent with the finding in BOP22 [18] that the sim2real gap has significantly narrowed. However, although the sim2real gap decreased from 0.15 with CosyPose to 0.06 with GDRNPP, the gap between YCB-V and our dataset decreased from 0.67 with CosyPose to 0.37 with GDRNPP, which is still relatively high and indicates that further efforts are needed to minimize dataset bias.
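The pose metric used here is deliberately simple and easy to reproduce. A hedged sketch of the recall computation is shown below; it assumes per-object ground-truth and estimated translations in millimeters, and the function names are illustrative rather than taken from the released evaluation code.

```python
import numpy as np


def detection_recall(matched):
    """Percentage of ground-truth objects that were detected at all."""
    return 100.0 * np.asarray(matched).mean()


def translation_recall(t_gt_mm, t_est_mm, matched, threshold_mm=20.0):
    """Fraction of detected objects whose translation error is below 2 cm.

    t_gt_mm, t_est_mm: (N, 3) arrays of ground-truth and estimated
    translations for the N ground-truth objects; `matched` is a boolean
    mask marking the objects the detector found.
    """
    matched = np.asarray(matched, dtype=bool)
    err = np.linalg.norm(np.asarray(t_gt_mm) - np.asarray(t_est_mm), axis=1)
    hits = matched & (err < threshold_mm)
    return hits.sum() / max(matched.sum(), 1)
```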
## 5 Conclusion and future work

We have presented a benchmark dataset for 6DoF pose-estimation tasks on event vision data. Our capturing pipeline allows providing ground-truth poses even under fast camera motions. Our evaluation of dataset bias reveals that narrowing the domain gap is not enough to reduce the dataset bias to a satisfactory level. The main limitation of this work is the annotation accuracy. This is due to several sources of errors, including inaccuracies in the object models, inaccuracies in the synchronization between the event and color modalities, and uncertainties in the cameras' intrinsic and extrinsic calibrations. We are actively working on improving the provided ground-truth annotations using more advanced optimization methods to address this limitation. In the future, problems with the RGB-based annotations can be overcome by using pose estimation algorithms that operate directly on event data. This will allow for the annotation of challenging sequences using the event modality and the transfer of the estimated poses back to color images, enhancing the performance of RGB-only algorithms. Initial research [10] has demonstrated promising results in reusing dense CNN architectures by fine-tuning models to event data for this purpose.
2303.00085
AR3n: A Reinforcement Learning-based Assist-As-Needed Controller for Robotic Rehabilitation
In this paper, we present AR3n (pronounced as Aaron), an assist-as-needed (AAN) controller that utilizes reinforcement learning to supply adaptive assistance during a robot assisted handwriting rehabilitation task. Unlike previous AAN controllers, our method does not rely on patient specific controller parameters or physical models. We propose the use of a virtual patient model to generalize AR3n across multiple subjects. The system modulates robotic assistance in realtime based on a subject's tracking error, while minimizing the amount of robotic assistance. The controller is experimentally validated through a set of simulations and human subject experiments. Finally, a comparative study with a traditional rule-based controller is conducted to analyze differences in assistance mechanisms of the two controllers.
Shrey Pareek, Harris Nisar, Thenkurussi Kesavadas
2023-02-28T21:04:05Z
http://arxiv.org/abs/2303.00085v4
# AR3n: A Reinforcement Learning-based Assistant-As-Needed Controller for Robotic Rehabilitation ###### Abstract In this paper, we present AR3n (pronounced as Aaron), an assist-as-needed (AAN) controller that utilizes reinforcement learning to supply adaptive assistance during a robot assisted handwriting rehabilitation task. Unlike previous AAN controllers, our method does not rely on patient specific controller parameters or physical models. We propose the use of a virtual patient model to generalize AR3n across multiple subjects. The system modulates robotic assistance in realtime based on a subject's tracking error, while minimizing the amount of robotic assistance. The controller is experimentally validated through a set of simulations and human subject experiments. Finally, a comparative study with a traditional rule-based controller is conducted to analyze differences in assistance mechanisms of the two controllers. Rehabilitation Robotics, Deep Learning in Robotics and Automation, Reinforcement Learning. ## I Introduction Recent years have seen the advent of robot-based rehabilitation systems as a reliable tool for home-based stroke therapy [1]. These systems can provide autonomous robotic assistance to a patient as they perform prescribed therapy tasks using a simulation system. Robotic assistance is usually based on a set of rules that govern _when_ and _how_ to provide assistance to a patient. The choice of this assistance mechanism is a non-trivial task and serves as a crucial factor towards the success of robotic therapy [2]. Inadequate assistance may render a task too difficult for the patient, inducing anxiety and forcing them to quit the rehabilitative task early [3]. Conversely, excessive assistance can lead to over-reliance on the robot [2]. Assist-As-Needed (AAN) controllers [4] provide adequate assistance by dynamically adjusting robotic assistance levels based on patient performance. In other words, as the user's performance improves, robotic assistance is reduced; and vice-versa. The simplest AAN controller is a rule-based error reduction (ER) [1] mechanism. ER describes a strategy that minimizes tracking error in a path following task. Assistance is supplied based on two manually tuned parameters viz. robotic gain and maximal allowable error threshold. This strategy describes a force field at the boundary of the error threshold that restricts free subject motions to within the boundaries of this zone. If the subject deviates outside this zone, the robotic device provides a corrective force and guides them back inside this zone. However, the selection of robotic gain and zone size is not automatized and needs to be determined by a therapist. The lack of automation may lead to over-reliance on robotic assistance and can limit the rehabilitation outcome [2]. Several studies have proposed AAN methodologies that circumvent the above over-reliance issue by automating and adapting robotic impedance based on subject performance. They implemented non-trivial mechanisms to obtain a model of subject performance. These models can be broadly categorized as physical models [4, 5, 6, 7] and physiological signal-based models [8, 9, 10]. Such models are generally patient specific and cannot be generalized to larger populations. In this paper, we propose a Reinforcement Learning (RL)-based generalizable adaptive AAN controller that automatically adjusts robotic assistance based on a subject's performance. The paper is organized as follows: Section II surveys existing AAN controllers. 
Section III provides a description of the key components of the proposed system and presents experimental evaluations under various settings. Results are presented in Section IV and we conclude in Section V. ## II Literature Review ### _Assist-As-Needed Controllers_ According to the guidance hypothesis [2], humans demonstrate a tendency of over-reliance on external assistance, which may inhibit motor recovery. This has led to the inception of AAN controllers that adapt degree of robotic assistance based on subject performance. Reinkensmeyer's group [5] proposed the use of patient-specific computational learning models that predict how subjects adjust their motor behavior in the presence of varying external forces. Crespo et al. [6] developed a wheelchair steering AAN controller. However, the approach requires 25-40 trials per subject to develop a subject-specific assistance model. Maaerf et al. [4] proposed a task difficulty model to estimate the difference between a patient's motor skills and task difficulty to toggle robotic assistance. However, the algorithm relies on learning the _ideal_ impedance and/or position tracking behavior exhibited by an expert through multiple demonstrations while conducting a therapy task. [7] used the concept of passivity to describe the maximum amount of assistive force that can be _safely_ absorbed by a patient's arms. Robotic impedance can then be modulated within this safety limit to deliver stable and adaptive assistance. Estimation of maximal assistive force is non-trivial and requires periodic calibrations sessions with the patient. In our previous work [11], we proposed iART, that uses demonstrations from an expert to mimic and recreate their assistance behavior. Brain Computer Interface (BCI) and Surface Electromyography (sEMG) sensors can be used to develop physiological models instead of the patient specific physical models described above. These methods use BCI [8, 9] and sEMG [10] signals to adapt robotic assistance based on a patient's mental engagement and amount of physical effort applied by them, respectively. These sensors need a considerable amount of time to set up and usually require the assistance of another person in doing so. This limits their feasibility as a home-based rehabilitative tool. [1] provides a scoping review of adaptive assistance techniques for rehabilitation robotics. In this paper, we propose the use of a RL-based controller that circumvents the challenges associated with deriving complex subject-specific physical models and the feasibility issues of external sensor-based systems. ### _Reinforcement Learning-based AAN Controllers_ RL describes a set of learning mechanisms that learn an optimal mapping between situations and actions so as to maximize a numerical reward signal [12]. An RL agent derives the optimal policy for a given Markov Decision Process (MDP) based on data acquired through exploration and experience. RL has gained popularity in recent years across various robotics-based domains. However, very few endeavours have been made towards the development of RL-based robotic AAN controllers. Obayashi et al. [13] developed one of the earliest RL-based AAN controllers. Using dart-throwing as a case study, the authors proposed a user-adaptive robotic trainer that aims at maximizing the score in a game of darts while minimizing physical robotic assistance. [14] demonstrated the use of model-based RL in conjunction with sEMG for formulating effective assistive strategies for exoskeleton-based systems. 
Inverse RL (IRL) is another strategy wherein a desired policy is derived from expert demonstrations. Scobee et al. [15] demonstrated the use of IRL to provide haptic assistance. [16] proposed a human-in-the-loop RL algorithm to demonstrate shared autonomy through the Lunar Lander video game and a real quadcopter. Luo et al. [17] used the Proximal Policy Optimization (PPO) RL algorithm to provide assistance via an exoskeleton during squatting motion. Their approach relies on a task specific reference motion demonstration for learning. The reliance on expert human demonstrations serves as a limitation to these IRL-based studies. The method proposed in this paper aims at eliminating the reliance on human participation/demonstrations during the RL model training phase. More recently, [18] used an actor-critic RL algorithm to modify robotic impedance for ankle mobilization. Impedance is adjusted to minimize tracking error while a control objective determines the amount of assistance to be supplied to the subject. The RL agents learns an optimal policy that modifies robotic impedance to achieve the desired control objective. The results were reported on a predefined sinusoidal trajectory and showed greater improvements in learning when compared with a conventional AAN controller. The above methods [13, 14, 18] yield subject-specific controllers based on online RL training. The method proposed here uses Soft Actor Critic (SAC)-based RL [19] to generalize assistance mechanism across multiple patient behaviors. We introduce AR3n1, an Assitive **R**obotic **R**ehabilitation system based on **R**einforcement **L**ear**Ning. AR3n uses RL to dynamically adjust robotic assistance and does not require patient-specific physical models or physiological sensors to estimate the same. We achieve this by simulating a plethora of patient behaviors through a virtual patient model and training a RL-based assistant to generalize across these behaviors. First use human-subject studies are conducted to test the assistance behavior of the virtual patient trained model on healthy subjects. The development of a virtual patient model and an RL-based AAN controller are the key contributions of this work. To the best of the authors' knowledge, this is the first study that uses a simulation-based upper-limb RL-AAN controller. A demo video for AR3n may be found here 2. Footnote 1: pronounced as Aaron. Footnote 2: [https://youtu.be/hTVjd7uzMz8](https://youtu.be/hTVjd7uzMz8) ## III Methods AR3n comprises of three key components (see Fig. 1), viz. (i) simulation environment: with which the RL agent interacts, (ii) RL module: that uses SAC to learn and predict robotic assistance, and (iii) robotic motor task: that uses the trained RL agent from (ii) to deliver realtime adaptive robotic assistance. ### _Motor Task_ In this study, we have used the example of handwriting rehabilitation. A subject uses a robotic end effector to control the position of a virtual pen in a handwriting simulation environment (see Fig. 1). The robotic device provides kinesthetic assistance to the patient based on prescribed control mechanisms such as ER or AR3n. The writing simulation environment was developed using Unity3D3 and consists of a virtual environment wherein, a reference trajectory to be followed by the patient can be chosen from Fig. 1 Inset-(a). Footnote 3: [https://unity.com](https://unity.com) A robotic device assists the user along the trajectory based on a proportional (P) controller. 
Traditionally, the gain of a P-controller is chosen by a therapist. In this paper, we propose a methodology to adapt this parameter automatically based on the subject's performance. A 6 degrees-of-freedom (6 revolute joints: 3 actuated and 3 passive) Geomagic(r) Touch(tm) was used in this study to provide kinesthetic feedback to the user at a sampling rate of 1000 Hz. The actuated joints can provide force feedback up to 3.3\(N\) to the user. ### _Reinforcement Learning Module_ This module comprises of a SAC-based agent that interacts with a simulation environment to learn the optimal assistance policy. The agent's goal is to derive a mapping between state (\(\mathbf{s}\)) and action (\(a\)) that maximizes the cumulative return \(V\) from the current reward \(r\)[12]. This is usually achieved through the means of a simulation environment within which an agent must be able to take actions that affect the state of the environment. To formalize the problem being addressed in this paper, we first describe a handwriting simulation task which has been modelled as an MDP. #### Ii-B1 Training Environment In this paper, we use a robot-assisted hand writing task as the case study. The patient's goal is to track a reference path using the robotic device. However, the patient's motor deficits may prevent them from achieving low tracking error. The RL agent serves the role of a therapist and decides when and how to assist a patient based on their performance (see Fig. 1). The agent learns this assistance behavior by interacting with the environment for data acquisition through exploration and experience. The need for large quantities of data for effective learning prevents the use of real subject-robot interaction (see Section III-A) while training the RL agent. As a result, we simulate the handwriting environment as well as the patient. The training task is designed as an episodic task, wherein each episode involves a virtual patient tracking a reference shape chosen randomly from the top row of Fig. 1 Inset-(a). An episode in RL refers to a sequence of states, actions, and rewards with a terminal state. Terminal state in this scenario refers to reaching the end point of the reference trajectory to be tracked. #### Ii-B2 Virtual Patient Force Model We present a virtual patient model (see Fig. 2) that enables us to simulate numerous patient behaviors and allows the RL agent to train and generalize across these behaviors. This circumvents the requirement of human subjects during training. The patient is simulated as a combination of three different types of forces, viz. a tangential force (\(\mathbf{F_{T}}\)), a normal force (\(\mathbf{F_{N}}\)) and a wind force (\(\mathbf{F_{W}}\)). \(\mathbf{F_{T}}\) refers to a force tangential to the reference path that enables the patient to travel along the path. \(\mathbf{F_{N}}\) is normal to \(\mathbf{F_{T}}\) and describes the ability of the patient to minimize their tracking error by _pulling_ them towards the reference path. \(\mathbf{F_{T}}\) and \(\mathbf{F_{N}}\) collectively describe the ability of a patient to track the reference path in the absence of any motor impairments. Motor impairments are simulated as a random wind force (\(\mathbf{F_{W}}\)) acting in a random direction (\(\mathbf{\theta_{W}}\)) along the path. 
Total resultant patient forces are described as: \[\mathbf{F_{P1}}=\lambda_{T}\mathbf{F_{T}}+\lambda_{N}\mathbf{F_{N}}+\lambda_{W}\mathbf{F_{W}} \tag{1}\] where \(\lambda_{*}\) is a scaling factor that decides the _strength_ of force \(\mathbf{F_{*}}\). We chose \(\lambda_{T}=1\) and \(\lambda_{N}=0.4\) experimentally as they enabled low-error trajectory tracking similar to that expected from a healthy person. \(\lambda_{T}=1\) and \(\lambda_{N}=0.4\) served as baselines around which all other forces were scaled experimentally such that they yielded realistic-looking trajectories. Realistic here refers to a heuristic wherein we visually verified that generated trajectories _appeared_ similar to those that would be reasonably exhibited by patients. \(\lambda_{W}\) is randomly set to a value between 1.8 and 2.2. \(\lambda_{W}\) is set higher than \(\lambda_{N}\) to ensure deviations from the path. The wind angle (\(\mathbf{\theta_{W}}\)) is randomly chosen between \(-\pi/3\) and \(+\pi/3\) (hatched sector of influence in Fig. 2). Wind direction and magnitude (\(\lambda_{W}\in[1.8,2.2]\)) are varied every \(0.75s\) to \(1.5s\) during simulation runs. This high variability in terms of wind direction, magnitude, and variation frequency enables us to simulate multiple patient behaviors on which to train the RL agent. Since these parameters are not explicitly supplied to the agent, the proposed methodology operates as model-free RL.

Fig. 1: Schematic representation of AR3n. The three key components viz. simulation environment, SAC RL module, and robot motor task are shown. During training, assistance is applied to the virtual patient (dotted arrows). Once trained, assistance is directly supplied to a human subject through a robot. \(s_{t}\) denotes current state at time \(t\). \(a_{t}\) represents agent action, which in this case is the controller gain \(\kappa_{t}\). \(u_{t}\) represents robotic assistance supplied through controller action. Inset (a): Reference trajectories used for training (top) and testing (bottom). Inset (b): Virtual patient force model used to simulate multiple patients while training the RL agent.

Fig. 2: Virtual patient force model. Three types of patient forces are represented viz. a tangential force (\(\mathbf{F_{T}}\)), a normal force (\(\mathbf{F_{N}}\)), and a wind force (\(\mathbf{F_{W}}\)) that acts at an angle \(\theta_{w}\). \(x\) and \(x_{d}\) denote the current patient position and the desired position, respectively. \(\mathbf{u}\) is the assistive force used to correct the patient's trajectory.

#### III-B3 Formulating the Reinforcement Learning Problem

Formulating the RL problem requires formalizing it as an MDP. A well-posed MDP consists of a tuple of states (\(\mathbf{s}\)), actions (\(a\)) and rewards (\(r\)). The state at any time-step \(t\) is given as \(\mathbf{s}_{t}=[e_{t},e_{t-1},...,e_{t-n+1}]\in\mathbb{R}^{n\times 1}\), where \(e_{t}\) refers to the perpendicular distance between the current patient position and its orthogonally projected closest point (\(x_{d}\)) on the reference path at time-step \(t\). Since only tracking error is used to describe the state of the system, the behavior of the RL agent is reference trajectory agnostic and does not require retraining for generalization to reference trajectories not used during training. We also provide the tracking error at the previous \(n-1\) steps as the state. This takes into account historic performance of the patient in addition to their instantaneous behavior. We set \(n=25\) in this study, which is equivalent to \(0.5s\) of history (sampling rate of the simulation environment is \(50Hz\)). The agent action is given by \(a=\kappa\in[0,1]\), which is the gain of the proportional controller given as: \[\mathbf{u}_{t}=\rho\kappa_{t}[\mathbf{x}_{d_{t}}-\mathbf{x}_{t}] \tag{2}\] where \(\mathbf{u}\in\mathbb{R}^{2\times 1}\) is the assistive force being supplied to the patient, and \(\mathbf{x}\in\mathbb{R}^{2\times 1}\) and \(\mathbf{x}_{d}\in\mathbb{R}^{2\times 1}\) denote the current cursor position and the desired point on the path. \(\rho=3\) is a scaling factor to scale the gain predicted by the agent. This value ensures that the maximum assistance is strong enough to assist the subject.
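To make the force composition concrete, the following is a minimal NumPy sketch of one update of the virtual patient force in (1) and of the assistive force in (2). It is an illustrative reading rather than the authors' implementation: the wind direction is modeled here as the path tangent rotated by \(\theta_W\), one possible interpretation of Fig. 2, and all function and variable names are assumptions.

```python
import numpy as np

RHO = 3.0                 # scaling of the agent's gain, as in Eq. (2)
LAM_T, LAM_N = 1.0, 0.4   # tangential and normal force scales


def virtual_patient_force(x, x_d, tangent, rng):
    """Compose F_P1 = lam_T*F_T + lam_N*F_N + lam_W*F_W from Eq. (1).

    x, x_d: current position and its closest point on the reference path;
    tangent: unit tangent of the path at x_d. All 2D vectors.
    """
    f_t = tangent                                       # drives the patient along the path
    normal = x_d - x
    f_n = normal / (np.linalg.norm(normal) + 1e-9)      # pulls back towards the path
    lam_w = rng.uniform(1.8, 2.2)                       # impairment strength
    theta = rng.uniform(-np.pi / 3, np.pi / 3)          # wind angle
    c, s = np.cos(theta), np.sin(theta)
    f_w = np.array([[c, -s], [s, c]]) @ tangent         # wind as a rotated tangent direction
    return LAM_T * f_t + LAM_N * f_n + lam_w * f_w


def assistive_force(kappa, x, x_d):
    """Eq. (2): u_t = rho * kappa_t * (x_d - x), with kappa in [0, 1]."""
    return RHO * kappa * (x_d - x)
```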
The assistive force derived from (2) acts on the existing patient forces described by \(\mathbf{F_{P1}}\) in the same direction as \(\mathbf{F_{N}}\) to give the net patient force \(\mathbf{F_{P}}\). In case of the actual motor task (Section III-A), the assistive force is converted to torque values applied at the joints of the robotic device. \[\mathbf{F_{P}}=\mathbf{F_{P1}}+\mathbf{u} \tag{3}\] The instantaneous reward \(r\) is a continuous function of tracking error and amount of assistive force applied. The expected cumulative return \(V\) is the discounted sum of future rewards given by: \[r_{t}=-\alpha\hat{e}_{t}-\beta\hat{u}_{t}-\delta\dot{\kappa}_{t}^{2} \tag{4a}\] \[V_{t}=\sum_{k=0}^{\infty}\gamma^{k}r_{t+k+1} \tag{4b}\] where \(\hat{e}=\frac{1}{n}[\sum_{k=0}^{n}e_{t-k}]^{2}\) describes a quadratic penalty associated with the average tracking error over the past \(n\) steps. \(\hat{u}=\frac{1}{n}\sum_{k=0}^{n}\left\|\mathbf{u}_{\mathbf{t-k}}\right\|_{2}\) is the average assistive force magnitude applied over this interval. The final term (\(\dot{\kappa}\)) in (4a) is a penalty associated with fast changes in values of the proportional gain \(\kappa\) predicted by the SAC network. This promotes a smoother assistance behavior. In other words, the reward function penalizes tracking error while penalizing any assistive force being applied and/or changed by the agent. We conducted numerous training runs to arrive at these values. First, \(\alpha\) was set to a unit reference value and \(\beta\) and \(\delta\) were varied from \(0-1\). The weights for each term in (4a) were empirically determined as \(\alpha=1,\beta=0.45,\delta=0.5\) as these values demonstrated more consistent training results with reasonable assistive performance. \(\gamma\in[0,1]\) in (4b) refers to a discounting factor which decides the importance of future v/s current rewards while calculating the expected returns of the current state-action pair. We give equal weightage to both and hence set \(\gamma=0.5\). This ensures quick adaptation to the current state while preventing an overly short-sighted agent. Most RL applications use \(\gamma>0.9\) to maximize cumulative reward over a larger time horizon. Since the reward window in this case is around \(0.5s\) as described earlier in the section, \(\gamma=0.5\) was a viable choice.
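Putting the pieces of the MDP together, a gym-like environment for the simulated handwriting task could look roughly like the sketch below. This is an illustrative reconstruction, not the authors' code: it reuses the assistive_force and virtual_patient_force helpers sketched above, the path object and its methods (start, closest_point, at_end) are hypothetical, the per-step change in gain stands in for \(\dot{\kappa}\), and \(\hat{e}\) is read as the square of the mean error.

```python
import numpy as np
from collections import deque

ALPHA, BETA, DELTA = 1.0, 0.45, 0.5   # reward weights from Eq. (4a)
N_HIST, DT = 25, 1.0 / 50.0           # 0.5 s of error history at 50 Hz


class HandwritingAANEnv:
    """Minimal sketch of the simulated training environment."""

    def __init__(self, path, rng):
        self.path, self.rng = path, rng

    def reset(self):
        self.errors = deque([0.0] * N_HIST, maxlen=N_HIST)
        self.assists = deque([0.0] * N_HIST, maxlen=N_HIST)
        self.prev_kappa = 0.0
        self.x = self.path.start()
        return np.array(self.errors, dtype=np.float32)

    def step(self, kappa):
        x_d, tangent = self.path.closest_point(self.x)
        u = assistive_force(kappa, self.x, x_d)                          # Eq. (2)
        f = virtual_patient_force(self.x, x_d, tangent, self.rng) + u   # Eq. (3)
        self.x = self.x + f * DT                                         # simplified kinematics

        err = np.linalg.norm(self.x - self.path.closest_point(self.x)[0])
        self.errors.append(err)
        self.assists.append(np.linalg.norm(u))
        dkappa = kappa - self.prev_kappa      # per-step gain change, stand-in for kappa_dot
        self.prev_kappa = kappa

        e_hat = np.mean(self.errors) ** 2
        u_hat = np.mean(self.assists)
        reward = -ALPHA * e_hat - BETA * u_hat - DELTA * dkappa ** 2     # Eq. (4a)
        done = self.path.at_end(self.x)
        return np.array(self.errors, dtype=np.float32), reward, done, {}
```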
The entropy term can be viewed as a trade off between exploration (maximize entropy) and exploitation (maximize return). The trade-off between the two Fig. 2: Virtual patient force model. Three types of patient forces are represented viz. a tangential force (\(\mathbf{F_{T}}\)), a normal force (\(\mathbf{F_{N}}\)), and a wind force (\(\mathbf{F_{W}}\)) that acts at an angle \(\theta_{w}\). \(x\) and \(x_{d}\) denote the current patient position and the desired position, respectively. \(\mathbf{u}\) is the assistive force used to correct the patient’s trajectory. is controlled by the non-negative temperature parameter \(\chi\in[0,1]\). We set \(\chi=0.5\) throughout the training process. A soft Q-function describes the critic, while a Gaussian policy function entails the actor. In other words, given a state \(\mathbf{s}_{t}\) the actor chooses an action \(a_{t}\) based on the stochastic policy \(\pi_{\varphi}\). Meanwhile, the critic estimates the expected returns of the current state-action pair using a soft Q-function \(Q_{\theta}(\mathbf{s},a)\). As mentioned earlier, the action \(a_{t}\) corresponds to gain \(\kappa_{t}\), which modulates the assistive force \(\mathbf{u}_{t}\) through (3). We refer the reader to [19] for more details on SAC. Both networks (actor and critic) use the same neural network architecture with three hidden layers and 32 units in each layer. The learning rate was set at \(1e-5\) and a batch size of 128 was used. These hyperparameters were chosen as they yielded stable training with high average rewards during preliminary testing. The SAC model was trained for 50K steps using the training environment and virtual patient model described above. Only the 4 shapes in the top row of Fig. 1 Inset-(a) were used for training as the proposed formulation here is reference trajectory agnostic as described in Section III-B3. Once the model was trained, it was used for realtime inference. Training was performed on an Intel Core i7 5820K Processor with a NVIDIA GeForce GTX 970 - 4GB graphics card and training times averaged around 20 minutes. #### Iii-B5 SAC Training Performance We conducted a pilot experiment to substantiate the choice of SAC for this study. We compared SAC's training performance in terms of average reward per episode with the PPO on-policy algorithm. PPO is a widely used policy gradient RL method [12] that finds applications in continuous action tasks. Fig. 3 shows average reward per episode v/s training steps for SAC and PPO. The constant dotted line demonstrates performance of an expert human. One of the authors served as the expert and toggled assistance on and off as the virtual patient model simulated 10 episodes. It should be noted that the expert reward here is only presented as reference and not for comparative analysis. Unsurprisingly, SAC outperformed PPO by a large margin and demonstrated very fast learning in terms maximizing the cumulative reward. This fast learning is attributed to the temporal difference learning methodology used by SAC. The superior performance of SAC when compared with PPO is in agreement with other studies [19]. These results affirm the choice of SAC as a valid RL algorithm for this study and discard we PPO from further analysis. ### _Experimental Evaluation_ We designed experiments aimed at evaluating AR3n's ability to (i) conduct realtime inference in a simulated environment, and (ii) verify its ability to provide realtime assistance and induce motor learning with human subjects. 
### _Experimental Evaluation_

We designed experiments aimed at evaluating AR3n's ability to (i) conduct realtime inference in a simulated environment, and (ii) provide realtime assistance and induce motor learning with human subjects. We also compared differences between assistance mechanisms of AR3n and a traditional ER-based controller.

#### III-C1 Simulated Testing

We evaluated AR3n in terms of delivering reliable online assistance and compared it with the ER assistance mechanism. ER refers to the error reduction assistance mechanism described in Section I. In this experiment, we used the virtual patient to test differences between the two assistance mechanisms. Both methods were used to modulate assistance in realtime for 50 virtual episodes. The same random seed was used for both cases. This enabled us to compare the assistance behavior of the two methods, subject to the exact same initial conditions.

#### III-C2 Human Subject Study

Next, we conducted a first-use human subject study to (i) verify AR3n's ability to use the assistance behavior learnt using the virtual patient model to deliver realtime assistance to human subjects; and (ii) study differences in AR3n and ER as rehabilitative tools. Eight healthy subjects (5 males; 3 females; average age 26 years; range 19-33 years) were recruited for a single session approved by the University of Illinois at Urbana-Champaign's Institutional Review Board (IRB #15990). The experimental setup is shown in Fig. 4 and involved the subject using a robotic device for the trajectory tracking task described in Section III-A. In order to increase the task difficulty and simulate motor impairment, the subjects were required to use their non-dominant arm, and the robot motions were mirrored in the horizontal (\(x\)) and transverse (\(z\)) direction (see Fig. 4). In other words, if the robot end-effector was moved to the right, the onscreen cursor would move to the left, and vice-versa. A similar reversal was implemented in the \(z\)-direction. Each subject participated in three trials (Fig. 4): a baseline trial (T1), followed by a training trial (T2), and a final post-training trial (T3). Each trial involved the subject executing the four shapes shown in Fig. 4. This set of shapes was chosen as it contains both straight line and curved sections. The size of these shapes was scaled to be roughly equivalent to a standard A4-sized paper to stimulate larger movements of the subject's arm. All trials lasted around 2 minutes with 2 minute breaks between subsequent sessions. A brief acclimatization trial (T0) was conducted so that subjects could familiarize themselves with the system. Baseline and final trials involved no robotic assistance and used the same 2D shapes.

Fig. 4: Experimental setup and human subject study design.

During the training session (T2), robotic assistance was provided either through a conventional ER-based AAN or the proposed AR3n controller. The maximal allowable error for ER was set as 0.3. This means that robotic assistance was toggled on only when tracking error was greater than 0.3. The gain for ER was set at 3. The eight subjects were randomly assigned to the conventional ER (\(E_{k}\), \(k=1,...,4\)) controller or AR3n (\(A_{k}\), \(k=1,...,4\)). The subjects were unaware of the type of assistance being supplied. The goal of this experiment was to compare the change in tracking error from T1 to T3 among the two groups.
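For reference, the rule-based ER baseline can be summarized in a few lines. The sketch below uses the values quoted above (error zone 0.3, gain 3) and is one plausible reading of the force-field rule, not the authors' implementation; in particular, whether the corrective force scales with the full error or only with the penetration beyond the zone boundary is an assumption.

```python
import numpy as np

ER_THRESHOLD = 0.3   # maximal allowable error (zone size)
ER_GAIN = 3.0        # manually tuned robotic gain


def er_assistance(x, x_d):
    """Rule-based error-reduction (ER) force field.

    No force inside the error zone; outside it, push the cursor back
    towards the closest point x_d on the reference path.
    """
    err_vec = x_d - x
    err = np.linalg.norm(err_vec)
    if err <= ER_THRESHOLD:
        return np.zeros(2)
    # force proportional to the penetration beyond the zone boundary
    return ER_GAIN * (err - ER_THRESHOLD) * err_vec / err
```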
## IV Results and Discussions

### _Simulated Testing_

Fig. 5-Left shows tracking behavior executed by the virtual patient under AR3n and ER for the same trajectory and random seeds. Fig. 5-Right shows the corresponding assistive force modulation v/s arc length by the above assistance mechanisms. It can be observed that both mechanisms demonstrate similar assistance profiles in terms of _when_ assistance was provided, but differed in _how_ assistance was provided. Owing to the rule-based nature of ER, assistance is provided in short bursts similar to a step function. These _bursts_ can be smoothed using a mathematical function; however, the key drawback of ER remains the use of a rule-based controller with manually determined thresholds. AR3n on the other hand modulates the degree of assistance based on complex rules learnt using the virtual patient behavior. This smoother modulation of AR3n is attributed to the quadratic penalty associated with rapid gain switching (\(\dot{\kappa}\) term in (4a)). Eliminating this term would lead to an RL controller that learns a bang-bang optimal control policy, which is not suitable for assisted robotic rehabilitation. We also compared the tracking error at which assistance was switched on (gain\(>0\)) under AR3n and ER. These error distributions are shown as violin plots in Fig. 6. Under ER, assistance-on error was concentrated around the error zone (0.3). This was expected, since ER only prevents the subject from exiting the force field and does not assist them by guiding them back towards the reference trajectory. As with ER, AR3n demonstrates a denser distribution around the error zone but with a wider spread overall. Assistance-on error for AR3n and ER shows significant differences (\(t=-24.28\), \(p=10^{-20}<0.05\)).

Fig. 5: (Left) Trajectories executed by virtual patient under AR3n (blue) and rule-based ER (red) controller. Size of dots denotes the amount of assistive force applied by the respective algorithms. (Right) Assistive force profiles under the two assistance settings.

Fig. 6: Violin plots for tracking error at which assistance was switched on using different assistance mechanisms. Asterisks denote significant differences.

### _Human Subject Study_

Fig. 7 shows gain variation (blue) under AR3n and ER w.r.t. tracking error (red) for two subjects. The dotted line demonstrates the size of the error zone for ER. ER assists the subject only when the tracking error is higher than this threshold. AR3n, on the other hand, does not follow a strict error-based rule while deciding how much to assist the subject. Under ER the subject tends to over-rely on the robotic assistance, as is evident from the near continuous assistance provided from \(6-14s\). Additionally, ER merely prevents the subject from deviating outside the error zone boundary; it does not assist them by guiding them back to the reference trajectory. The over-reliance tendency under ER can be visualized in Fig. 8-Left. The figure presents trajectories executed by a subject under ER. It can be observed that the subject tends to stay at the boundary of the force field, as at the boundary, the robotic device provides minimal assistance, enabling the subject to correctly follow the trajectory with minimal effort and low tracking error. AR3n (Fig. 8-Right) on the other hand guides the subject back to the reference trajectory and then switches off assistance. This is also evident from the reduction of tracking error from a large value to near-zero whenever assistance was switched on in Fig. 7.

Fig. 7: Variation of gain (blue) w.r.t. tracking error (red) under AR3n (top) and ER (bottom) for two test subjects.

Fig. 9 shows the distribution of assistive forces provided by AR3n and ER.
ER's behavior was concentrated over a narrow region. The relatively _narrow_ distribution signifies the force field behavior of ER. In case of AR3n, assistive force was spread over a larger range. While AR3n mostly applied small corrective forces (higher density closer to zero), in some cases, it applied larger forces depending on the subject's performance. These observations reaffirm the ability of AR3n to provide assistance over multiple scenarios and highlight the inherent challenges of ER. Next, we compared the performance of AR3n and ER as rehabilitative tools by comparing the change in tracking error between the baseline (T1) and final trial (T3) across the two assistance groups i.e. AR3n (\(A_{k}\)) and ER (\(E_{k}\)). The Shapiro-Wilk test was conducted on tracking error to verify normality at a p-value of 5%. All samples demonstrated normality and hence pairwise t-test were conducted. Fig. 10-Top shows tracking errors for different subjects during T1 and T3. The blue bars denote the tracking error during the baseline recording (T1) while the red bars signify the final trial (T3). p-values and t-statistics obtained for pairwise t-tests on tracking error between T1 and T3 are also shown above the corresponding pairs. Pairs that demonstrated significant differences at 5% are shown in red. p-values in blue denote that no significant differences were observed. None of the subjects in the ER group demonstrated significant reductions in tracking error over the duration of the experiment. The inferior performance from ER was expected and is attributed to the tendency of subjects to over-rely on robotic assistance, which led to a decline in performance when robotic assistance was removed. Three out of the four subjects under AR3n showed significant error reduction within the two trials. Only one subject (\(A_{1}\)) under AR3n that did not demonstrate significant error reduction. On closer inspection, \(A_{1}\)'s tracking error for T1 was lowest across the board when compared with all other subjects and trials, leaving very little scope for performance improvement over trials. Subjects under AR3n demonstrate a reduction in error variability as shown by the reduction in lengths of error bars from T1 to T3. On the other hand, subjects under ER display lower reduction in variance between sessions. Finally, we also compared the percent error reduction across all subjects under AR3n and ER (see Fig. 10-Bottom). Subjects in the AR3n group demonstrated higher improvements when compared to those within ER. ## V Conclusion and Future Directions This paper describes a novel RL-based AAN controller called AR3n. AR3n uses SAC to modulate assistance in real-time based on a subject's performance. Using a reward function that minimizes tracking error while minimizing amount of assistive force enables the realization of a truly adaptive AAN controller. As opposed to traditional force field-based AAN controllers, AR3n does not require hand tuning of controller parameters. The system distinguishes itself from more sophisticated AAN Fig. 8: Tracking behavior demonstrated under rule-based ER and AR3n. Size of red dots denotes the amount of assistive force applied by the respective algorithms. Fig. 10: (Top) Tracking error change between the baseline (T1) and final trial (T3) for test subjects. Text in red denotes significant changes in tracking error between trials. (Bottom) Box plots for percent change in tracking error between the first and last session under ER and AR3n assistance mechanism. Fig. 
9: Distribution of assistive force under AR3n and ER. controllers, as our method does not require patient-specific physical models. Instead, we simulate numerous virtual patients to generalize the controller over a larger population of subjects. The use of a virtual patient also distinguishes our method from previous RL-based AAN controllers [13, 14, 18] that use online learning methods to generate subject-specific RL models. We tested the proposed algorithm in numerous simulated and human subject experiments and highlighted critical differences between AR3n and ER. AR3n demonstrated generalizability across multiple human subjects and efficacy as a rehabilitative tool. It was also observed that the method proposed here avoids the over-reliance tendencies inherent in ER controllers. Our system relies on offline learning to generate a subject-independent AAN controller. This method may not be suitable for patients with very specific needs. Using online learning methods such as GARB [20] in conjunction with AR3n would enable the realization of controllers tuned to the specific needs of a patient without requiring extensive data collection. Future work should explore this option. Currently, AR3n only modifies the gain of a proportional controller. It would be meaningful to design a study where derivative and integral gain values are modulated as well. Finally, the human subject study described in this paper was conducted on a fairly small pool of healthy subjects. Moving forward, the system needs to be tested with stroke patients and/or a larger subject pool to verify its scalability in the clinical setting.
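For completeness, the statistical comparison described in Sec. IV-B (Shapiro-Wilk normality check at 5%, pairwise t-tests on tracking error between T1 and T3, and percent error reduction) could be sketched as follows. The arrays below are hypothetical per-trial error samples, not the study data, and a two-sample t-test is used as a stand-in for the paper's pairwise test.

```python
import numpy as np
from scipy import stats

def compare_trials(err_t1: np.ndarray, err_t3: np.ndarray, alpha: float = 0.05):
    """Compare tracking error between baseline (T1) and final (T3) trials for one subject."""
    # Normality check on each sample (Shapiro-Wilk at 5%).
    normal = (stats.shapiro(err_t1).pvalue > alpha and
              stats.shapiro(err_t3).pvalue > alpha)
    # t-test on the tracking-error samples of the two trials.
    t_stat, p_val = stats.ttest_ind(err_t1, err_t3)
    percent_reduction = 100.0 * (err_t1.mean() - err_t3.mean()) / err_t1.mean()
    return {"normal": normal, "t": t_stat, "p": p_val,
            "significant": p_val < alpha, "percent_reduction": percent_reduction}

# Hypothetical per-sample tracking errors for one subject.
rng = np.random.default_rng(0)
t1 = rng.normal(0.25, 0.05, size=120)   # baseline trial
t3 = rng.normal(0.18, 0.04, size=120)   # post-training trial
print(compare_trials(t1, t3))
```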
2310.00156
Learning Generalizable Tool-use Skills through Trajectory Generation
Autonomous systems that efficiently utilize tools can assist humans in completing many common tasks such as cooking and cleaning. However, current systems fall short of matching human-level intelligence in terms of adapting to novel tools. Prior works based on affordances often make strong assumptions about the environment and cannot scale to more complex, contact-rich tasks. In this work, we tackle this challenge and explore how agents can learn to use previously unseen tools to manipulate deformable objects. We propose to learn a generative model of the tool-use trajectories as a sequence of tool point clouds, which generalizes to different tool shapes. Given any novel tool, we first generate a tool-use trajectory and then optimize the sequence of tool poses to align with the generated trajectory. We train a single model on four different challenging deformable object manipulation tasks, using demonstration data from only one tool per task. The model generalizes to various novel tools, significantly outperforming baselines. We further test our trained policy in the real world with unseen tools, where it achieves performance comparable to that of a human. Additional materials can be found on our project website: https://sites.google.com/view/toolgen.
Carl Qi, Yilin Wu, Lifan Yu, Haoyue Liu, Bowen Jiang, Xingyu Lin, David Held
2023-09-29T21:32:42Z
http://arxiv.org/abs/2310.00156v5
# Learning Generalizable Tool-use Skills through Trajectory Generation ###### Abstract Autonomous systems that efficiently utilize tools can assist humans in completing many common tasks such as cooking and cleaning. However, current systems fall short of matching human-level of intelligence in terms of adapting to novel tools. Prior works based on affordance often make strong assumptions about the environments and cannot scale to more complex, contact-rich tasks. In this work, we tackle this challenge and explore how agents can learn to use previously unseen tools to manipulate deformable objects. We propose to learn a generative model of the tool-use trajectories as a sequence of point clouds, which generalizes to different tool shapes. Given any novel tool, we first generate a tool-use trajectory and then optimize the sequence of tool poses to align with the generated trajectory. We train a _single model_ for four different challenging deformable object manipulation tasks. Our model is trained with demonstration data from just a _single tool_ for each task and is able to generalize to various novel tools, significantly outperforming baselines. Additional materials can be found on our project website.1 Footnote 1: [https://sites.google.com/view/toolgen](https://sites.google.com/view/toolgen) **Keywords:** Tool use, Deformable object manipulation ## 1 Introduction Building autonomous systems that leverage tools can greatly enhance efficiency and assist humans in completing many common tasks in everyday life [1; 2; 3; 4; 5; 6; 7]. As humans, we possess an innate ability to adapt quickly to use novel tools. However, replicating such adaptability in autonomous systems presents a significant challenge. In this work, we explore how agents can learn to use novel tools to manipulate deformable objects. Beyond the challenges of representing novel tools, manipulating deformable objects adds considerable difficulties. For one, manipulating deformable objects often results in rich, continuous contact between the tool and the object; the contacts between a roller tool and dough, for example, are continuous and cannot be easily discretized, which makes specifying discrete affordance labels to describe such interactions difficult. Further, defining rewards or keypoints (as is sometimes used for tool and environment representations [3; 4]) for deformable objects is also challenging. Therefore, operating novel tools to solve diverse tasks calls for an approach that makes few assumptions about the task and the environment. Our goal is to train a policy to solve various manipulation tasks with multiple tools, including tools that were not seen during training. We propose a novel approach, ToolGen, which learns tool-use skills via trajectory generation and sequential pose optimization. Given the scene, the goal, and a set of available tools, ToolGen first scores the different tools and selects a tool to use for the task. It then generates a point cloud of a tool in the desired initial pose, and it subsequently predicts how this generated tool would need to move to perform the task. Finally, we sequentially align the selected tool to the generated tool to extract the actions for the agent to execute. We evaluate ToolGen against several baselines in deformable object manipulation with diverse tasks, goals, and tools. Impressively, with just a single model trained across all tasks and tools, ToolGen significantly outperforms the baselines and generalizes to many novel tools. 
Further, ToolGen achieves this despite being trained on demonstrations from just one tool for each task. To summarize our contribution, we propose ToolGen, which represents tool use via trajectory generation. We have shown that generating a point cloud trajectory can effectively capture the essence of tool use, i.e. how the tool should be placed in relation to the dough and how it should move over time, which allows us to generalize to a variety of unseen tools and goals. ## 2 Related Work **Learning Generalizable Tool-use Skills:** Prior work has explored training robots to perform manipulation tasks with tools. To enable generalization, some previous approaches predict intermediate "affordances" and then generate actions based on these affordances [2; 8]. For example, affordances like grasping points or functional points and be represented as key points [2; 3; 4]. More fine-grained concepts like contacts and forces [9; 10] can also be used. However, obtaining labels for these affordances can be difficult, and such affordance labels do not easily extend to more complex tasks such as deformable object manipulation as we explore in this work. Another approach is to discover affordance regions in a self-supervised way by running parameterized motion primitives [2] or affordance-conditioned policies [3; 4] in simulation. In the image space, prior works have explored training an action-conditioned video prediction model [1] for planning actions for different tools. However, the video prediction model lacks 3D structure and has difficulty representing fine-grained action trajectories. In contrast, we propose using a generated point cloud trajectory as the intermediate representation, which enables fine-grained motion planning with the generated tool. **Deformable Object Manipulation with Tools:** In this work, we explore how robots can learn to use novel tools for deformable object manipulation. Prior works with deformable objects often consider using a fixed set of tools, such as a rolling pin or spatula for dough manipulation [5; 7] or knives for Figure 1: Our method ToolGen can solve deformable object manipulation with diverse tasks and goals. It does so by first generating a point cloud trajectory of the desired tool and then aligning the actual tool to the generated point clouds for execution. We train a single model for four different challenging deformable object manipulation tasks. Our model is trained with demonstration data from just a single tool for each task and is able to generalize to various unseen tools. cutting [11; 12]. However, these works do not consider generalization to novel tools, which is the focus of this work. ## 3 Problem statement and assumptions We are given point clouds of the initial observation of the scene \(P^{o}\), the goal \(P^{g}\), and a set of available tools \(\{P^{tool_{i}}\}_{i=1:K}\). Our task is to select the best tool to perform the task and predict a sequence of \(H\) actions to use the tool to transform the current point cloud into the goal point cloud. For training, we assume access to a set of demonstrations using a separate set of training tools \(\{P^{traintool_{i}}\}_{i=1:K_{train}}\). These demonstrations are of the form: \((P^{o},P^{g},P^{traintool_{i}},T_{0:H})\), where \(T_{0:H}\) are a set of transformations (actions) performed on the training tool which changes the initial observation \(P^{0}\) into the goal configuration \(P^{g}\). The initial transformation in the sequence (\(T_{0}\)) brings the tool to a "reset pose." 
The remaining terms (\(T_{1:H}\)) are the transformations between the subsequent tool poses in nearby timesteps, which we call "delta poses." We manually specify distributions of the initial and goal configurations for each task. We then run trajectory optimization using a differentiable simulator to generate these demonstrations following prior works [13]. Alternatively, human demonstrations could serve as the source of these demonstrations. ## 4 Method In order to teach robots how to use novel tools, we propose the following approach: our method initially selects a tool from the available set, employing a scoring network (Sec. 4.1). Then, we generate a point cloud of a desired tool and a sequence of tool actions of how this generated tool would achieve the task (Sec. 4.2). Finally, we align the selected (real) tool to each of the point clouds in the generated trajectory (Section 4.3). Finally, we move the selected tool to follow the planned trajectory to accomplish the task. Below, we describe this approach in detail, and experiments in Sec. 5 demonstrate the large benefits of this approach compared to other approaches. ### 3D-aware tool selection The first step in our proposed method involves selecting the best tool for the given task. Given a set of \(K\) training tools, represented as a set of point clouds, \(\{P^{traintool_{i}}\}_{i=1:K}\), we train a tool scoring Figure 2: Overview of our method: (a) we first pass each tool into the tool scoring module \(D_{score}\) and select the one with the highest score (\(tool_{sel}\)). (b) We then leverage the trajectory generation module \(G_{traj}\) to generate an ideal tool trajectory accomplishing the task \(P^{gen}_{0:H}\). (c) Finally, we align the selected tool with the generated tool via sequential pose optimization to extract the pose of the selected tool \(T^{opt}_{0:H}\), and we subsequently use inverse kinematics to obtain the actions for the agent to execute. module \(D_{score}\), which takes in a tool point cloud \(P^{tool}\), the initial observation \(P^{o}\), and the goal \(P^{g}\), and it predicts a score \(s\) for the tool indicating how suitable the tool is for the task. The architecture for the tool scoring module is shown in Fig. 2 (a). The module first encodes the tool points to a latent feature using a PointNet++ [14] encoder. It then encodes the concatenation of observation points and goal points to another latent feature using a separate PointNet++ encoder. These latent features are concatenated and inputted through a multi-layer perceptron (MLP) to output a score. We train the module with binary cross-entropy loss, in which the tool used in the demonstration to achieve the goal point cloud \(P^{g}\) is considered as a positive example, and randomly selected tools from the training set are considered as negative examples. For inference, we input each tool point cloud through the tool scoring module and select the tool that received the highest score among the \(K\) tools available at test-time (which might be different from the tools available during training): \(tool_{sel}=\operatorname*{arg\,max}_{tool_{i}}D_{score}(P^{tool_{i}},P^{o},P^{g})\). The selected tool is then used in subsequent steps in our method. ### Representing tool-use through point cloud trajectory generation Next, we need to estimate how the selected tool should move to perform the task. One simple approach would be to directly predict the motion of the selected tool. 
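To make the tool-selection step of Sec. 4.1 concrete, below is a minimal PyTorch sketch of the scoring-and-selection module. The PointNet++ encoders are replaced here by a simple per-point MLP with max-pooling, all layer names and sizes are assumptions rather than the released architecture, and the binary cross-entropy training loop is omitted.

```python
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Stand-in for the PointNet++ encoder: per-point MLP followed by max-pooling."""
    def __init__(self, dim_out: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim_out))

    def forward(self, pts: torch.Tensor) -> torch.Tensor:   # (B, N, 3) -> (B, dim_out)
        return self.mlp(pts).max(dim=1).values

class ToolScorer(nn.Module):
    """Scores how suitable a tool is for reaching the goal from the observation."""
    def __init__(self):
        super().__init__()
        self.tool_enc = PointEncoder()
        self.scene_enc = PointEncoder()   # encodes concatenated observation + goal points
        self.head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, tool, obs, goal):
        scene = torch.cat([obs, goal], dim=1)                  # concatenate the point sets
        feat = torch.cat([self.tool_enc(tool), self.scene_enc(scene)], dim=-1)
        return self.head(feat).squeeze(-1)                     # raw score (logit) per example

# Training would use nn.BCEWithLogitsLoss with the demonstration tool as the positive
# example and randomly drawn training tools as negatives. At test time, selection is:
def select_tool(scorer, tools, obs, goal):
    scores = torch.stack([scorer(t, obs, goal) for t in tools])
    return int(scores.argmax())
```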
However, directly regressing to the tool's pose may prove challenging, especially when attempting to regress to the tool's orientation [15; 16; 17]. To alleviate this challenge, we instead use a generative module \(G_{traj}\) that generates a point cloud trajectory \(P^{gen}_{0:H}\) of a desired tool completing the task. As explained below, we will use this generated trajectory to later determine the actions of the actual tool. We now describe the details of the point cloud generation. We first use a PointFlow-based [18] generator \(G_{init}\) to produce an initial point cloud of the desired tool in the "reset pose" (i.e. the initial pose of the generated tool), which we call \(P^{gen}_{0}\). The PointFlow generator conditions on a point cloud of the selected tool \(P^{tool_{sel}}\), the initial scene observation \(P^{o}\), and the goal \(P^{g}\). The architecture of our PointFlow-based [18] generator \(G_{init}\) is shown in Fig. 2 (b) (top). It first encodes the tool points to a latent feature using a PointNet++ [14] encoder. It then encodes the concatenation of observation points and goal points to another latent feature using a separate PointNet++ encoder. These latent features are concatenated and inputted through an MLP to produce the parameters of a Gaussian distribution. We take a sample from this Gaussian distribution and input it to a PointFlow [18] decoder to produce a point cloud of a desired tool in the reset pose \(P^{gen}_{0}\). During training, we condition the generator on the training tools \(P^{traintool_{i}}\) and use the point clouds of the training tools in their reset poses \(T_{0}\circ P^{traintool_{i}}\) as the reconstruction target in the training loss; the training data comes from the demonstration dataset described in Sec. 3. We follow PointFlow's [18] training procedure to maximize the evidence lower bound (ELBO). After generating the desired tool in reset pose \(P^{gen}_{0}\), we then leverage an path generator \(G_{path}\) to predict a sequence of transformations of how this generated tool would move to achieve the task. The architecture of the path generator is shown in Fig. 2 (b) (bottom). The path generator receives concatenated point clouds of the tool in reset pose \(P^{gen}_{0}\), the initial scene observation \(P^{o}\), and the goal state \(P^{g}\). From these inputs, it then generates \(H-1\) transformations for the tool, denoted as \(T^{gen}_{1:H}\). We use ToolFlowNet [19] for the path generator; details can be found in Appendix A.1. This trajectory illustrates the path the generated tool would take to successfully complete the task. We train the path generator using the trajectories of the training tools \(T_{1:H}\) as labels (from the demonstration dataset described in Sec. 3). Together, our generative module \(G_{traj}=(G_{init},G_{path})\) generates a trajectory of point clouds \(P^{gen}_{0:H}\) which indicates how a generated tool would move to complete the manipulation task. ### Execution via sequential pose optimization The generated point cloud trajectory \(P^{gen}_{0:H}\) from Sec. 4.2 describes the predicted trajectory for the generated tool to complete the manipulation task. However, in order to perform the manipulation task, the predicted trajectory has to be executed by a real tool, not by an imagined generated tool. In this section, we describe the optimization procedure for aligning the selected tool with the generated tool in order to extract actions for the selected tool (visualized in Fig. 
2 (c) and listed in detail in Algorithm 1). Given the current observation \(P^{o}\), the selected tool \(P^{tool_{sel}}\), and a generated trajectory \(P^{gen}_{0:H}\), we optimize the sequence of transformations of the selected tool \(T^{opt}_{0:H}\) to align the selected tool point cloud with each of the generated tool point clouds at each timestep. We subdivide the optimized transformations \(T^{opt}_{0:H}\) into a reset transformation \(T^{opt}_{0}\) and delta poses \(T^{opt}_{1:H}\). To compute the reset transformation, we align the selected tool \(P^{tool_{sel}}\) to the generated tool in the first timestep \(P^{gen}_{0}\). We additionally add a penalty for colliding with the observation point cloud (in our experiments, this is the dough). The loss function is given by: \[J_{reset}(T)=Chamfer(T\circ P^{tool_{sel}},P^{gen}_{0})-\lambda_{c}\cdot Chamfer (T\circ P^{tool_{sel}},P^{o}), \tag{1}\] where the first term is the Chamfer distance between the selected tool \(P^{tool_{sel}}\) transformed by \(T\) and the generated tool \(P^{gen}_{0}\) in reset pose, the second term is a collision penalty term computed as the Chamfer distance between the selected tool \(P^{tool_{sel}}\) transformed by \(T\) and the observation \(P^{o}\), and \(\lambda_{c}\) is a hyper-parameter balancing the two terms. The purpose of the penalty term is to penalize collisions between the tool in reset pose and the environment, though collisions will be allowed for subsequent timesteps. For optimization, we use Projected Gradient Descent, detailed in Sec. 4.4, for different initializations of \(T\) and choose the one that minimizes the objective described in Eq. 1. After optimizing the reset transformation, we then optimize the delta poses \(T^{opt}_{1:H}\), again by aligning the selected tool \(P^{tool_{sel}}\) to the generated tool at each timestep \(P^{gen}_{t}\), with a penalty to encourage small motions. The loss function for the delta poses is given by: \[\begin{split} J_{\delta}(T_{1:H})=\sum_{t=1:H}Chamfer(T_{t}\circ X _{t-1}\circ P^{tool_{sel}},P^{gen}_{t})+\lambda_{r}\cdot\|T_{t}\|\\ \text{where }X_{t-1}=T_{t-1}\circ T_{t-2}\circ...T^{opt}_{0} \end{split} \tag{2}\] The first term is the Chamfer distance between the selected tool points \(P^{tool_{sel}}\) transformed by \(T_{t}\circ X_{t-1}\) and the generated tool points \(P^{gen}_{t}\) at timestep \(t\), \(\|\cdot\|\) is a regularization function to moderate the magnitude of the translation and rotation defined by the delta poses (see Sec. 4.4 for details), and \(\lambda_{r}\) is a hyper-parameter balancing the two terms. Finally, we apply these objectives in an optimization routine, as outlined in Algorithm 1, to align the selected tool with the generated one and produce the final trajectory \(T^{opt}_{0:H}\) for the selected tool. Subsequently, we can utilize inverse kinematics to determine the required actions for our agent to execute the task. In our case, these actions comprise the translation and angular velocities of the tool. ``` Input :The current observation of the dough \(P^{o}\), the selected tool \(P^{tool_{sel}}\), and the point cloud trajectory for the generated tool \(P^{gen}_{0:H}\) // Optimize for the reset transformation Initialize random transformations \(T^{1},\ldots,T^{N}\) in \(SE(3)\) ; 2 Optimize \(T^{1},\ldots,T^{N}\) according to Eq. 
1 to obtain costs \(J^{1}_{reset}\ldots J^{N}_{reset}\) ; 3 Choose the transformation that minimizes the costs, denoted as \(T^{opt}_{0}\); 4 // Optimize for delta poses 5 Initialize the delta poses as identities, i.e. \(T_{1:H}=\mathbf{I}\) ; 6 Optimize the delta poses according to Eq. 2 and obtain the final transformations \(T^{opt}_{1:H}\); 7 Output :Optimized transformations for the selected tool: \(T^{opt}_{0:H}\) ``` **Algorithm 1**Sequential pose optimization ### Implementation details We train each module - the tool scoring module \(D_{score}\), initial point cloud generator \(G_{init}\), and path generator \(G_{path}\) - separately, using a learning rate of \(10^{-3}\) for each module. To optimize the reset transformation, we use the quaternion representation for the orientation of the transformation, and we project the values onto a unit ball after each gradient update. Here, we use a step size of \(10^{-2}\), and \(\lambda_{c}=0.1\). For optimizing the delta poses, we use the 3-DoF Euler angles representation with a step size of \(10^{-3}\), a regularization factor of \(\lambda_{r}=0.1\), and we use the euclidean norm to regularize the translation as well as the rotation. We train a single set of modules (\(D_{score},G_{init},G_{path}\)) across a compact demonstration dataset comprised of 4 tasks (we do _not_ train a separate network per task); for each task, we collect 200 demonstration trajectories performed with just one training tool; will demonstrate that our method will still be able to generalize to unseen tools. See Appendix B.1 for more information on our demonstration dataset. ## 5 Results As shown below, we demonstrate that ToolGen is able to perform well on a variety of manipulation tasks with novel tools with just _a single_ model trained across multiple tasks and tools. Notably, we train with demonstrations from only one training tool per task and we test on several unseen tools, demonstrating our method's generalization abilities. We additionally evaluate ToolGen on real world observations to highlight its effectiveness when transferred to the real world. ### Tasks and baselines **Tasks:** We evaluate our method against several baselines in a soft body simulator, PlasticineLab [13]. We consider four tasks: "Roll", "Cut", "Small scoop" and "Large scoop". Example configurations and their training and test tools for these tasks are depicted in Fig. 3. In our setup, all of the tools are placed far from the dough at the start of each task, as would be the case in a normal tool-use scenario. **Metric:** We specify goals as 3D point clouds of different geometric shapes. We report the normalized decrease in the Earth Mover Distance (EMD) approximated by the Sinkhorn divergence [20] computed as \(s(t)=\frac{s_{0}-s_{H}}{s_{0}},\) where \(s_{0},s_{H}\) are the initial and final EMD respectively. To compute the performance of each method, we evaluate 10 trajectories per task per tool and then aggregate the performance across all the tasks. **Baselines:** We evaluate the following baselines with different action representations, all of which use the same tool selector as ToolGen. All of the baselines regress to reset transformations and delta Figure 3: We consider 4 tasks: Roll, Cut, Small scoop, and Large scoop. On the left side of each task, we illustrate how the training tool is used to achieve the goal, overlaying the goal on the initial observation. 
On the right side, we visualize the initial configurations of the training tool and test tools for each task, highlighting the ability of our method to generalize to novel tools. poses, except for BC-E2E which predicts delta poses directly from the initial configuration without a reset transformation. Details on the architectures of the baselines are described in Appendix B.2. * **BC-E2E.** End-to-end behavioral cloning that outputs a \(H^{\prime}\times 6,(H^{\prime}>H)\) vector representing the delta poses of the tool relative to the initial tool pose. Unlike the other baselines, this baseline does not output a reset transformation. * **BC-Joint.** Behavioral cloning that jointly regresses to the reset transformation and subsequent delta poses from the initial tool configuration. * **BC-Latent.** Behavioral cloning that regresses to the reset transformation, moves the tool to the predicted reset pose, and then predict subsequent delta poses from a latent encoding of the scene with the tool in the reset pose. * **TFN-Traj.** Behavioral cloning that regresses to the reset transformation, moves the tool to the predicted reset pose, and then uses the updated scene to predict subsequent delta poses with the ToolFlowNet-based [19] trajectory model described in Appendix A.1. We examine three settings, each presenting a greater level of difficulty, detailed in Sec. 5.2, Sec. 5.3, and Sec. 5.4, respectively. We demonstrate that ToolGen is robust to these generalization challenges and maintains superior performance over the baselines. We additionally conduct ablation studies by removing the path generator of ToolGen, detailed in Appendix C.1. ### Leveraging training tools at test time We first test the methods on a set of held out configurations using training tools. To successfully perform the manipulation, the methods need to select the right tool and then output the appropriate poses for the tool to complete the tasks. Fig. 3(a) shows the performance of all the methods. We see that most methods achieve reasonable performance. This shows that all these methods generalize reasonably well to different goal configurations given the same training tools. In contrast, BC-E2E achieves suboptimal performance on even this simple version of the task, showing the limitations of methods that do not predict a reset transformation. ### Generalization to unseen initial tool poses To simulate the fact that a tool might be in any initial configuration in the real world, we randomize the initial poses of the training tools in \(SE(3)\) and rerun evaluations. From Fig. 3(a), we observe that ToolGen is the only method that is robust to this perturbation. Despite the fact that the baselines are trained with the same tools, they fail to generalize to unseen initial poses of the tool. On the other hand, ToolGen is robust to the initial configuration of the tool and receives no performance loss. Figure 4: Fig. 3(a): Performance of all the methods across 3 settings. Fig. 3(b): Examples of generated tool trajectories and test tool alignments. ### Generalization to unseen tools Finally, we evaluate the methods on a far more challenging scenario, in which our agents are given unseen tools. For simplicity, we evaluate each novel tool on 10 held out goals for each task and average their performances. See Fig. 3 for a visualization of the novel tools we consider. Since the novel tools also are in arbitrary initial poses, this scenario requires the method to robust to tool shapes as well as initial poses of the tool. Fig. 
3(a) and Table 1 shows the quantitative results of all the methods, and Fig. 5 show examples of rollouts by ToolGen (ours) and the baseline TFN-Traj. All of the baselines fail to obtain a high performance, especially in the more challenging task of scooping (see Table 1). In contrast, ToolGen can leverage completely unseen tools in meaningful ways. This is because ToolGen leverages trajectory generation to alleviate the issues of distribution shift. It further uses a non-learned optimization procedure (gradient descent with multiple random initializations), which also does not suffer from a distribution shift. For more analysis, please see our Appendix C.1. We show examples of the tools generated by ToolGen (top row) as well as the test tools aligned to these generated tools (bottom row) in Fig. 3(b). Overall, ToolGen achieves superior performance over the baselines in this challenging scenario of using novel tools. Remarkably, we train just a single ToolGen model across all tasks and tools, using merely one training tool per task. Despite this, ToolGen demonstrates the capacity to solve all tasks effectively when presented with novel tools. ## 6 Conclusion and limitations **Limitations:** Our method has several limitations: First, our method's execution time is considerably longer compared to that of a trained policy, due to the time needed for generating point clouds and optimizing the current tool's poses. We anticipate that the use of faster techniques for sequential pose optimization, such as second-order methods, could speed up our method. Secondly, as our point cloud generator is trained on very limited tools, it is sometimes unable to generate accurate point clouds for novel tools and thus the alignment process could fail. A promising direction is to train on more variations of the tool to improve the generation process and make alignment easier. Further details on these failure cases can be found in Appendix C.2. In this paper, we introduce ToolGen, a novel framework for learning generalizable tool-use skills. ToolGen uses a point cloud trajectory generation approach to represent tool use and then applies sequential pose optimization for execution. This representation circumvents the issues associated with using affordances to represent tool use, and it demonstrates superior generalization capabilities, especially when evaluating on unseen test tools, given only one tool per task for training. We applied a single ToolGen model to the manipulation of deformable objects, tackling diverse tasks, goals, and tools, and we found that ToolGen significantly outperforms the baselines and generalizes effectively to many novel tools. It is our hope that ToolGen will inspire more innovative approaches for tool use representation that enable broad ranges of generalization in the future. Figure 6: Inference results of ToolGen on real world observations on Cutting (top) and Rolling (bottom). For each task, we visualize the generated trajectory and the predicted trajectory of the real world tool. We additionally overlay the goal point clouds to put those trajectories in context with the environment. As a result, ToolGen can effectively predict manipulation trajectory from real world observations even though the model is trained entirely in simulation.
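As a companion to the execution step of Sec. 4.3, the sketch below illustrates optimizing the reset pose of Eq. (1): Chamfer alignment of the selected tool to the generated tool, minus a term rewarding clearance from the observed dough, minimized by gradient descent while projecting the quaternion back to unit norm after each step (the step size of 1e-2 and lambda_c = 0.1 follow Sec. 4.4). This is a simplified stand-in, not the released implementation; the delta-pose stage of Eq. (2) and the multiple random initializations are omitted.

```python
import torch

def chamfer(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3)."""
    d = torch.cdist(a, b)                       # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def quat_to_rot(q: torch.Tensor) -> torch.Tensor:
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return torch.stack([
        torch.stack([1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)]),
        torch.stack([2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)]),
        torch.stack([2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]),
    ])

def optimize_reset_pose(tool, gen_tool, obs, steps=200, lr=1e-2, lam_c=0.1):
    """Minimize J_reset(T) = Chamfer(T.tool, gen_tool) - lam_c * Chamfer(T.tool, obs)."""
    q = torch.tensor([1.0, 0.0, 0.0, 0.0], requires_grad=True)   # unit quaternion
    t = torch.zeros(3, requires_grad=True)                       # translation
    opt = torch.optim.SGD([q, t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        moved = tool @ quat_to_rot(q).T + t
        loss = chamfer(moved, gen_tool) - lam_c * chamfer(moved, obs)
        loss.backward()
        opt.step()
        with torch.no_grad():                  # projection step: keep the quaternion unit-norm
            q /= q.norm()
    return q.detach(), t.detach()
```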
2309.14040
Mixing as a correlated aggregation process
Mixing describes the process by which solutes evolve from an initial heterogeneous state to uniformity under the stirring action of a fluid flow. Fluid stretching forms thin scalar lamellae which coalesce due to molecular diffusion. Owing to the linearity of the advection-diffusion equation, coalescence can be envisioned as an aggregation process. Here, we demonstrate that in smooth two-dimensional chaotic flows, mixing obeys a correlated aggregation process, where the spatial distribution of the number of lamellae in aggregates is highly correlated with their elongation and is set by the fractal properties of the advected material lines. We show that the presence of correlations makes mixing less efficient than a completely random aggregation process because lamellae with similar elongations and scalar levels tend to remain isolated from each other. We show that correlated aggregation is uniquely determined by a single exponent which quantifies the effective number of random aggregation events. These findings expand aggregation theories to a larger class of systems, which have relevance to various fundamental and applied mixing problems.
Joris Heyman, Tanguy Le Borgne, Philippe Davy, Emmanuel Villermaux
2023-09-25T11:18:03Z
http://arxiv.org/abs/2309.14040v2
# Mixing as a correlated aggregation process ###### Abstract Mixing describes the process by which scalars, such as solute concentration or fluid temperature, evolve from an initial heterogeneous state to uniformity under the stirring action of a fluid flow. Mixing occurs initially through the formation of scalar lamellae as a result of fluid stretching and later by their coalescence due to molecular diffusion. Owing to the linearity of the advection-diffusion equation, scalar coalescence can be envisioned as an aggregation process. While random aggregation models have been shown to capture scalar mixing across a range of turbulent flows, we demonstrate here that they are not accurate for most chaotic flows. In particular, we show that the spatial distribution of the number of lamellae in aggregates is highly correlated with their elongation and is also influenced by the fractal geometry that arises from the chaotic flow. The presence of correlations makes mixing less efficient than a completely random aggregation process because lamellae with similar elongations and scalar levels tend to remain isolated from each other. Based on these observations, we propose a correlated aggregation framework that captures the asymptotic mixing dynamics of chaotic flows and predicts the evolution of the scalar pdf based on the flow stretching statistics. We show that correlated aggregation is uniquely determined by a single exponent which quantifies the effective number of random aggregation events, and is dependent on the fractal dimension of the flow. These findings expand aggregation theories to a larger class of systems, which have relevance to various fundamental and applied mixing problems. ## 1 Introduction The mixing of solutes by the stirring action of heterogeneous velocity fields is ubiquitous to natural and industrial processes (Ottino, 1990; Le Borgne _et al._, 2013; Villermaux, 2019). The transport of a passive diffusive scalar in an incompressible velocity field \(\mathbf{v}\) is governed by the conservation equation \[\partial_{t}c+\mathbf{v}\mathbf{\nabla}c=\kappa\mathbf{\nabla}^{2}c, \tag{1}\] with \(c\) the scalar concentration and \(\kappa\) the molecular diffusivity. Despite being fully linear, the interplay between advection and diffusion produces non-trivial mixing dynamics across a large spectrum of flows, including turbulent flows (Villermaux & Duplat, 2003\(a\), 2006; Duplat & Villermaux, 2008_a_), porous media flows (Le Borgne _et al._, 2015; Lester _et al._, 2016; Heyman _et al._, 2020; Souzy _et al._, 2020; Heyman _et al._, 2021) and chaotic flows (Wonhas & Vassilicos, 2002; Fereday _et al._, 2002; Haynes & Vanneste, 2005). As illustrated in Fig. 1a, an initial blob of scalar stirred in a two-dimensional chaotic flow, produces elongated scalar structures, called filaments or lamellae, whose lengths increase exponentially with time. Accordingly, their widths decay by compression until it equilibrates with diffusion at the Batchelor scale (Batchelor, 1959) \(s_{B}\sim\sqrt{\kappa/\lambda}\), with \(\lambda\) the mean stretching rate experienced by fluid elements along their trajectory--the so-called Lyapunov exponent. Once the lamellar width reaches \(s_{B}\), the diffusive flux balances the compression rate, and irreversible mixing takes place. When filaments remain isolated from each other, it is possible (Meunier & Villermaux, 2010) to predict exactly the evolution of scalar concentration by quantifying their Lagrangian stretching history. 
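As a quick numerical illustration of the scales just introduced, the sketch below evaluates the Batchelor scale and the time needed for exponential compression to bring an initial lamella width \(s_0\) down to \(s_B\); the parameter values are illustrative placeholders, not those used later in the paper.

```python
import numpy as np

# Illustrative values (placeholders).
kappa = 1e-6          # molecular diffusivity
lam = 1.0             # mean stretching (Lyapunov) rate, lambda
s0 = 1e-2             # initial lamella width

# Batchelor scale: transverse compression balances diffusion, s_B ~ sqrt(kappa / lambda).
s_B = np.sqrt(kappa / lam)

# With exponential compression s(t) ~ s0 * exp(-lam * t), the width reaches s_B at:
t_B = np.log(s0 / s_B) / lam

print(f"s_B ~ {s_B:.1e}")
print(f"time to reach the Batchelor scale: t_B ~ {t_B:.2f} (in units of 1/lambda)")
```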
However, material lines also bend due to the presence of second-order derivatives in the spatial field \(\mathbf{v}\)(Tang & Boozer, 1996), creating folds (Fig. 1b). Fluid compression exponentially reduces the distances between folds, which creates a highly foliated structure at a later time (Fig. 1a). Individual filaments are thus no longer isolated, but start to coalesce at scales of the order of \(s_{B}\), while the mixture keeps homogenising and its concentration tends to the mean \(\langle c\rangle\). This so-called aggregation process (Villermaux & Duplat, 2003_a_) obeys two essential properties. First, filament positions tend to accumulate at infinitesimal scales due to exponential flow compression. _Bundles_ of aggregated lamellae are thus formed by individual filaments sharing the same region of size \(\sim s_{B}\). Second, the linearity of the advection-diffusion equation implies that scalar concentration fields can be decomposed into a sum of the concentration profiles of solitary lamellae (Le Borgne _et al._, 2017). Considering a flow domain area \(\mathcal{A}\), the aggregation regime is attained when the total length of lamellae is \[L(t)s_{B}\gtrsim\mathcal{A}, \tag{2}\] Assuming a constant stretching rate \(\gamma\), \(L(t)=\ell_{0}\exp(\gamma t)\), the coalescence time \(t_{c}\) at which Eq. 2 is first fulfilled is \[t_{c}\sim\frac{1}{\gamma}\log\left(\frac{\mathcal{A}}{\ell_{0}s_{B}}\right). \tag{3}\] The mean number of filaments in bundles is \[n(t)\sim\frac{L(t)s_{B}}{\mathcal{A}}, \tag{4}\] and the scalar concentration \(c\) in a bundle is formed by the superposition of \(n\) elementary lamellar concentrations \(\theta_{i}\) present inside a given bundle \[c(t)\sim\sum_{i=1}^{n(t)}\theta_{i}(t). \tag{5}\] Two scenarii have been proposed to describe the statistical properties of the sum (5): a fully random (Villermaux & Duplat, 2003_b_) and a fully correlated (Heyman _et al._, 2021) aggregation processes. These scenarii correspond to two caricatural routes towards homogeneity, described below. The purely random scenario was proposed (Duplat & Villermaux, 2008_b_) to describe aggregation dynamics in scalar turbulence. It was therefore assumed that the stirring action of turbulent flows is sufficiently random for the aggregation of individual filaments to be decoupled from their individual stretching histories. The scalar concentration \(c\) in a bundle can thus be formed by the sum of \(n\) independent and identically distributed random variables, following the solitary filament concentration pdf. Under this assumption, the scalar concentration pdf, \(P_{c}(c,t)\), results from the \(n\)-convolution of the isolated lamella concentration pdf \(P_{\theta}(\theta,t)\), with the mean number of aggregations \(n\) given by Eq. (4). If \(P_{\theta}\) is exponential or gamma distributed, then \(P_{c}\) is a gamma distributed \[P_{c}(c)=\frac{n^{n}}{\Gamma(n)}\left(\frac{c}{\langle c\rangle}\right)^{n-1} \exp\left(-n\frac{c}{\langle c\rangle}\right) \tag{6}\] Thus, the scalar variance decays as \[\sigma_{c}^{2}=\frac{\langle c\rangle^{2}}{n}\sim 1/L(t). \tag{7}\] Skewer lamella concentration distributions (e.g. log-normal pdf) do not produce gamma distributions when convolved \(n\) times (Schwartz & Yeh, 1982), but the scalar variance still follows \(1/L(t)\) asymptotically. 
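The random-aggregation prediction of Eqs. (6)-(7) is easy to verify numerically: summing \(n\) independent, exponentially distributed lamella concentrations yields a gamma-distributed bundle concentration whose variance decays as \(\langle c\rangle^{2}/n\). A minimal sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mean_c = 1.0                     # mean scalar concentration <c>

for n in (2, 8, 32):
    # Sum of n i.i.d. exponential lamella concentrations with mean <c>/n,
    # so that the bundle mean stays at <c>.
    samples = rng.exponential(scale=mean_c / n, size=(100_000, n)).sum(axis=1)
    # Random-aggregation prediction, Eq. (6): Gamma(shape=n, scale=<c>/n).
    predicted = stats.gamma(a=n, scale=mean_c / n)
    print(f"n = {n:2d}:  var(sim) = {samples.var():.4f}   "
          f"var(theory) = <c>^2/n = {mean_c**2 / n:.4f}   "
          f"KS p-value = {stats.kstest(samples, predicted.cdf).pvalue:.2f}")
```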
For a uniform stretching rate, the random aggregation scenario thus predicts a variance decay equal to the pre-asymptotic regime of isolated strips (see Meunier & Villermaux (2010) and derivations in Appendix B). For random stretching rates, it predicts a faster decay compared to the pre-asymptotic regime (Fig. 2c). This is in contradiction with numerical computations of chaotic mixing that suggest the same decay exponent before and after aggregation time (Fereday _et al._, 2002; Tsang _et al._, 2005). For instance, a log-normal distribution of stretching rates of mean \(\mu\) and variance \(\sigma^{2}\) (with \(\mu\geqslant\sigma^{2}\)) yields \(n\sim\exp((\mu+\sigma^{2}/2)t)\) and an asymptotic scalar decay exponent of \(\mu+\sigma^{2}/2\), versus \(\mu-\sigma^{2}/2\) for solitary strips (Meunier & Villermaux, 2010). The alternative model \(n\sim\exp(\mu t)\) was also proposed (Villermaux & Duplat, 2006) to account for the fact that stretching fluctuations may weakly affect \(n\). However, this scaling does not conserve the mean concentration and also overestimates the decay of scalar variance. The opposite caricature is the fully correlated aggregation scenario, whereby lamella aggregate in the exact proportion of their elongation (Heyman _et al._, 2021). Correlation between aggregation and elongation occurs naturally in incompressible flows because lamella elongation \(\rho\) is always balanced with transverse compression \(1/\rho\) (Fig. 1b), which attracts neighbouring lamella and locally increases \(n\). In this scenario, the weakly stretched regions of the flow have also experienced little compression, thus remaining isolated from the bulk. They are thus well described by the isolated lamellar theory. These poorly stretched Figure 1: a. Mixing of a diffusive scalar by a random stirring protocol (time sequence top to bottom), evidencing the apparition of stretched scalar filaments (adapted from Villermaux (2012)). b. Blow up on the coalescence of neighbouring filaments under the action of compression (adapted from Duplat & Villermaux (2008_a_)) c. Concentration profile of a scalar field showing the coexistence of solitary filaments and bundles of filaments. The scalar concentration \(c\) is obtained by the superposition of individual filamentary concentrations, which all have a Gaussian shape with maximum concentration \(\theta_{i}\) and width \(s_{B}\) (see Section 3.1). lamellae bear typically high concentration levels, thus dominating scalar fluctuations. The correlated aggregation mechanism was first observed experimentally from the evolution of the concentration pdf of two dyes concentrations \(c_{1}\) and \(c_{2}\) in a chaotic mixer (Duplat _et al._, 2010). The authors showed that if the dyes were deposited inside a concentric annulus, the mean \(c=(c_{1}+c_{2})/2\) would have the same pdf as the parts, \(c_{1}\) and \(c_{2}\). In other words, \(c_{1}\) and \(c_{2}\) are locally equal because they have experienced the same stretching history before aggregating. The evolution of the scalar pdf in a fully correlated regime can then be estimated as follows. If a bundle includes \(n\) lamellae of the same concentration level \(\theta\), Eq. (3.13) simplifies to \(c\sim n\theta\). In the fully correlated scenario, \(n\) is proportional to the filament elongation \(\rho\), \[n\approx 1+\rho/\rho_{c} \tag{8}\] with \(\rho_{c}=\mathcal{A}/(s_{B}\ell_{0})\), the mean elongation at coalescence time. 
In turn, the individual filament concentration follows \(\theta=\theta_{0}s_{0}/s_{B}\rho^{-1}\) (see section 3.1) such that \[c-\langle c\rangle\approx\frac{\theta_{0}s_{0}}{s_{B}}\rho^{-1}, \tag{9}\] where we identified the mean spatial concentration \(\langle c\rangle=\theta_{0}\ell_{0}s_{0}/\mathcal{A}\). Thus, the pdf of the deviation from the mean \(\tilde{c}=c-\langle c\rangle\) follows the pdf of \(\rho^{-1}\), which is completely determined by the stretching statistics of flow (Fig. 2c). Since the is log-normal in random chaotic flows with mean \(-\mu t\) and variance \(\sigma^{2}t\), we expect similar statistics for \(c-\langle c\rangle\). Note that highly elongated portions of the filament occupy the same area as weakly elongated ones. Thus, because aggregation is correlated, stretching statistics must be considered with respect to the initial filament state rather than the final one. The scalar decay exponent in the fully correlated scenario is thus very close to the one of solitary strips, thus explaining similarities between pre- and post-aggregation scalar decay exponents (Wonhas & Vassilicos, 2002; Tsang _et al._, 2005; Fereday _et al._, 2002). While accurately describing extremes, the fully correlated scenario causes an unrealistic peaking of the scalar pdf close to the mean (Fig. 2c), due to the complete absence of mechanisms to mix bundles of different \(\rho\). This is in contradiction with the homogenising capacity of chaotic flows. Hence, aggregation dynamics in chaotic flows likely lie between a fully random and a fully correlated scenario. The goal of this study is thus to uncover the statistical laws governing aggregation processes in chaotic flows. In particular, we describe the impact of stochastic aggregation on the spatial distribution of \(n\) and \(c\) and their moments. We focus on scalar aggregation in the so-called Batchelor regime (Haynes & Vanneste, 2005), for which the minimum scale of scalar fluctuation \(s_{B}\) is much smaller than the smallest velocity correlation length scale, and for which no scalar gradients develop at large scales. Such regime is also qualified as "smooth" flows because velocity gradients remain relatively constant at the scale of \(s_{B}\). This is in contrast to "rough" flows (e.g., turbulent flows at low Schmidt numbers) where the smaller flow scales lie below the Batchelor scale. The paper is organised as follows. We first discuss the two main hypothesis proposed to describe lamella aggregation in heterogeneous flows (section 2). We then use chaotic flow simulations to derive a new correlated aggregation theory. In Section 3, we describe the fractal feature of material lines in heterogeneous chaotic flows and its link to the distribution of the number of aggregated lamellae. In Section 4, we investigate the properties of correlated aggregation. In Section 5, we derive a model for the aggregated scalar pdf. ## 2 Geometry of elongated material lines ### Synthetic chaotic flows To understand the kinematics of aggregation, we first investigate the spatial geometry of advected fluid elements in two two-dimensional incompressible heterogeneous chaotic flows, namely the baker map and the sine flow (Fig. 3). These flows are sequential advective maps that have been widely used in the context of chaotic transport (Finn & Ott, 1988; Ott & Antonsen Jr, 1989; Tsang _et al._, 2005; Giona _et al._, 2001; Meunier & Villermaux, 2010, 2022) and are definedin the following. 
In the incompressible baker map, fluid compression of factor \(a\in[0,0.5]\) and \(1-a\) first operates horizontally on the domain \(y<a\) and \(y>a\) respectively. Then vertical stretching occurs with a factor \(1-a\) and \(a\) in these two regions, preserving the total area (Fig. 3a). The Figure 3: Transformations operated by a) the incompressible baker map with parameter \(a\) and b) the sine flow with amplitude \(A\) and random phases. Figure 2: a) Geometry of material lines (lamellae) in the sine flow at time \(t=12\) for \(A=0.9\). b) Coarsened concentration field resulting from the sum of lamellar concentration in a neighbourhood of size \(s_{B}\). c) Pdf of the coarsened scalar field and prediction of various models of aggregation. transformation writes \[x_{t+1} = \left\{\begin{array}{ll}ax_{t}&\mbox{if $y_{t}<a$}\\ 1-(1-a)x_{t}&\mbox{if $y_{t}>a$}\end{array}\right.,\] \[y_{t+1} = \left\{\begin{array}{ll}y_{t}/a&\mbox{if $y_{t}<a$}\\ (1-y_{t})/(1-a)&\mbox{if $y_{t}>a$}\end{array}\right..\] An advantage of the baker map is that purely vertical scalar patterns (for which \(c(x,y)=f(x)\)) remain one-dimensional after application of the map, thus simplifying the problem to a single dimension. This simplicity allows for the analytical derivation of many features of the map, as we will show later. Another advantage is that it is possible to explore a wide range of stretching heterogeneity by varying \(a\) between 0 and 0.5. Indeed, the first two moments of stretching rate in the baker map are \[\mu/t = -a\log(a)-(1-a)\log(1-a), \tag{1}\] \[\sigma^{2}/t = a(1-a)(\log(1-a)-\log(a))^{2}. \tag{2}\] Thus, for \(a=0.01\), \(\sigma^{2}/\mu=3.7\) while for \(a=0.49\), \(\sigma^{2}/\mu=5.7\cdot 10^{-4}\). It is important to note that this map involves discontinuous transformations, or "cuts", that are absent in continuous flows such as turbulence but are common in flows through porous media (Lester _et al._, 2013). In contrast, the sine flow is an alternation of random-phase horizontal and vertical sinusoidal velocity waves with amplitude \(A\) and period \(2\pi\) (Fig. 3b). The flow is periodic on the unit square \([0,1]\times[0,1]\) and it obeys for a given time period \(t\) \[y_{t^{\prime}+\delta t} = y_{t^{\prime}}+A\delta t\left\{\begin{array}{ll}\sin(2\pi x_{t }^{\prime}+\phi_{t})&\mbox{for $t<t^{\prime}<t+1/2$},\\ 0&\mbox{for $t+1/2<t^{\prime}<t+1$}\end{array}\right.,\] \[x_{t^{\prime}+\delta t} = x_{t^{\prime}}+A\delta t\left\{\begin{array}{ll}0&\mbox{for $t<t^{ \prime}<t+1/2$},\\ \sin(2\pi y_{t}^{\prime}+\psi_{t})&\mbox{for $t+1/2<t^{\prime}<t+1$}\end{array}\right.,\] where the amplitude \(A\) is a positive constant and \(\phi_{t},\psi_{t}\) are random phases that change at each time period \(t\), and \(\delta t=1/2\) the time step. The flow velocity having a single component, incompressibility is automatically ensured. Scalar transport is continuous and considered on a periodic domain \([0,1]\times[0,1]\). The stretching statistics of sine flows are described in Meunier & Villermaux (2022). As most random flows, the elongation of material lines in sine flows follows a log-normal distribution with a mean \(\mu t\) and variance \(\sigma^{2}t\) that depend on the amplitude \(A\). The stretching heterogeneity is much less variable than in the baker map, with ratio \(\sigma^{2}/\mu\) ranging from \(1\) when \(A\to 0\) to \(\sigma^{2}/\mu\approx 0.6\) for \(A=1.8\). In the following, we study the fractal geometry of advected material lines and their clustering in these chaotic flows. 
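For reference, a minimal reimplementation of one iteration of the two maps just defined is sketched below; it advects tracer points rather than a refined material line, so it is a simplified illustration rather than the filament-tracking code used for the simulations. Periodicity of the sine flow is enforced with a modulo on the unit square.

```python
import numpy as np

def baker_map(x, y, a=0.3):
    """One iteration of the incompressible baker map on the unit square."""
    left = y < a
    x_new = np.where(left, a * x, 1.0 - (1.0 - a) * x)
    y_new = np.where(left, y / a, (1.0 - y) / (1.0 - a))
    return x_new, y_new

def sine_flow_period(x, y, A=0.8, rng=np.random.default_rng(2)):
    """One period of the sine flow: a vertical shear then a horizontal shear,
    each acting for half a period (dt = 1/2), with fresh random phases."""
    phi, psi = rng.uniform(0, 2 * np.pi, size=2)
    y = (y + 0.5 * A * np.sin(2 * np.pi * x + phi)) % 1.0
    x = (x + 0.5 * A * np.sin(2 * np.pi * y + psi)) % 1.0
    return x, y

# Advect a small cloud of tracer points for a few periods of the sine flow.
pts = np.random.rand(2, 1000)
for _ in range(10):
    pts = np.stack(sine_flow_period(pts[0], pts[1]))
```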
The Lagrangian simulations consist in advecting a material filament in the flow field and follow its local elongation. The filament is defined by a series of consecutive points advected by the velocity field, linked by segments whose elongation is evolving due to velocity gradients. Segments that are highly elongated are refined by introducing intermediate points, in a similar manner as done by Meunier & Villermaux (2010). The elongated and folded filament (Fig. 2a) is tracked up to the advection time where \(L=10^{7}\ell_{0}\), limit corresponding to our computer memory. Eulerian statistics, such as the local number of aggregated filaments, or their local mean elongation, are then computed by averaging Lagrangian variables on a regular grid (Fig. 2b). ### Fractal properties In incompressible flows, the stretching of material elements by velocity gradients is compensated by transverse compression. The compression causes distances between lamellar elements to decrease exponentially over time. Smaller and smaller scales are thus continuously produced by flow compression. Furthermore, in smooth chaotic flows, the typical scale of variation of velocity gradients is fixed and produces a heterogeneous stretching field for material lines. Dense (black) or diluted (white) regions of material lines are thus created at large scale in the chaotic flow (Fig. 4). Such heterogeneous structures then cascade to smaller scales under the action of net compression, thus creating a fractal set of one-dimensional objects (lines) clustered around their transverse direction. In two-dimensional incompressible flows, the Haussdorf dimension of this fractal set is necessarily \(D_{0}=2\), as per the Kaplan-York result (Farmer _et al._ 1983). Higher dimensions can be smaller than 2 if stretching is heterogeneous. To illustrate this, let us define a normalised measure \(p_{k}\) with \(k=1\cdots N\) defining a regular grid of bin size \(\epsilon=\mathcal{L}/N\), with \(\mathcal{L}\) the system size. For instance, \(p_{k}\) may be defined as the local density of lamella in the bin, e.g. \(p_{k}=n_{k}/n\) where n is the total number of lamella. Since concentration levels of lamellae are additive, \(p_{k}\) can be equivalently defined as the sum of lamella concentrations in one bin. The fractal dimension of order \(q\) of the measure \(p\) is then obtained with Grassberger (1983): \[D_{q}-1=\lim_{\epsilon\to 0}\frac{1}{q-1}\frac{\log I_{q}(\epsilon)}{\log \epsilon},\quad I_{q}(\epsilon)\equiv\sum_{k}^{N=\mathcal{L}/\epsilon}p_{k}^ {q}, \tag{2.3}\] where the subtraction of 1 on the left hand side accounts for the clustering of one-dimensional structures (lamellae) in a two-dimensional domain. This definition implies the following spatial scaling of the integral of the measure: \[I_{q}(\epsilon)\sim\epsilon^{(q-1)(D_{q}-1)}. \tag{2.4}\] Figure 4: Fractal geometry of material material lines in (top) the sine flow (\(A=0.8\)) and (bottom) the baker map (\(a=0.1\)) observed at different scales In simple flows such as the baker map, \(D_{q}\) can be obtained (Finn & Ott, 1988) by observing the similarity properties of the map, which transfer at small scales the heterogeneity of the measure produced at large scales by a single operation of the map. Characterising the result of one elementary operation of map on the measure thus also informs on the spectrum of fractal dimensions. In the following, we derive this spectrum for the baker map (see also Finn & Ott (1988)). 
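Before that analytical derivation, note that definition (2.3) can also be estimated directly from simulation data by binning positions on one-dimensional grids of decreasing size \(\epsilon\) and reading the slope of \(\log I_{q}\) versus \(\log\epsilon\). The sketch below assumes the input positions are, for example, the transverse coordinates of advected filament points; the uniform random sample is only a sanity check.

```python
import numpy as np

def renyi_dimension(x, q=2.0, sizes=(1/16, 1/32, 1/64, 1/128, 1/256)):
    """Estimate D_q from I_q(eps) ~ eps^{(q-1)(D_q-1)} (Eqs. 2.3-2.4), where the
    measure p_k is the fraction of points falling in each 1D bin of size eps."""
    log_eps, log_Iq = [], []
    for eps in sizes:
        counts, _ = np.histogram(x, bins=int(round(1.0 / eps)), range=(0.0, 1.0))
        p = counts[counts > 0] / counts.sum()          # normalised measure p_k
        log_eps.append(np.log(eps))
        log_Iq.append(np.log(np.sum(p ** q)))
    slope = np.polyfit(log_eps, log_Iq, 1)[0]          # slope = (q - 1) * (D_q - 1)
    return 1.0 + slope / (q - 1.0)

# Sanity check: a uniform measure gives D_q = 2 (space-filling lamellae).
print(renyi_dimension(np.random.rand(500_000), q=2.0))   # ~2.0
```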
We consider the measure of the local number of lamella in bin \(k\), \(p_{k}=n_{k}/n\). As shown in Fig. 3a, an operation of the baker map doubles the total number of these lamellae, while maintaining the same local distribution of lamellae on smaller bins of sizes \(a\epsilon\) for \(x<a\) and \((1-a)\epsilon\) for \(x>a\). The integral of the measure can then be computed by summing its value on the two replicates created by the map, \[I_{q}(\epsilon)=I_{q,a}(\epsilon)+I_{q,1-a}(\epsilon). \tag{5}\] We observe that \[I_{q,a}(a\epsilon)=I_{q,1-a}((1-a)\epsilon)=\sum_{k}^{N=1/\epsilon}\left( \frac{p_{k}}{2}\right)^{q}, \tag{6}\] where the factor \(1/2\) comes from the normalisation of the measure due to the doubling of \(n\). Thus \[I_{q,a}(\epsilon)=I_{q}(\epsilon/a)2^{-q}\mbox{ and }I_{q,1-a}(\epsilon)=I_{q} (\epsilon/(1-a))2^{-q}. \tag{7}\] Replacing the last expression in (5) yields \[I_{q}(\epsilon)=2^{-q}\epsilon^{(q-1)D_{q}}\left(a^{-(q-1)(D_{q}-1)}+(1-a)^{-( q-1)(D_{q}-1)}\right) \tag{8}\] Using the scaling \(I_{q}(\epsilon)\sim\epsilon^{(q-1)D_{q}}\), thus provide a transcendental equation for \(D_{q}\) independently of \(\epsilon\): \[2^{q}=\left(a^{-(q-1)(D_{q}-1)}+(1-a)^{-(q-1)(D_{q}-1)}\right), \tag{9}\] the solution of which is explicit for \(q=0\) and \(q=1\): \[D_{0}=2,\quad D_{1}=1+\frac{2\log 2}{\log(a^{-1}+(1-a)^{-1})}. \tag{10}\] Note that the solution for \(q=1\) is obtained with Bernouilli's rule by differentiating (9) with respect to \(q\), and taking the limit \(q\to 1\). For random flows such as the sine flow, Ott & Antonsen Jr (1989) argue that there exists a general relationship between stretching rate statistics and fractal dimensions as \(D_{q}=f_{q}(\sigma^{2},\mu)\), although a closed-form solution is not always trivial as for the baker map. We show in Fig. 5 that the ratio \(\sigma^{2}/\mu\) is directly related to the fractal dimension \(D_{1}\). This suggests that the fractal aggregation of material lines results from the large-scale heterogeneity of stretching rates. Since the flow is smooth, the heterogeneity created at large scales cascades to smaller scales, conserving its geometrical structure and creating a fractal geometry. As suggested by Figure. 5, the function \(f_{1}\) is different for baker map and the sine flow. Indeed, the ratio \(\sigma^{2}/\mu\) tends to a positive constant in the sine flow when \(A\to\infty\), while \(\sigma^{2}/\mu\to 0\) in the baker map when \(a\phi 0.5\). This finite limit comes from the fact that the sine flow is a continuous transformation with no cutting and thus does not tend to a uniform stretching rate. In the contrary, when \(A\to 0\), \(\sigma^{2}/\mu\to 1\) which is a maximum bound for the ratio in the sine flow (Meunier & Villermaux, 2022), thus limiting the possible range of fractal dimensions produced by continuous chaotic flows, compared to discontinuous maps. ### Spatial distribution of \(n\) The spatial distribution of the number of elements per bundle \(n\) (Fig. 6) can be obtained as follows. Comparing the mean area occupied by a filament of length \(L(t)\) and width \(s_{B}\) to the domain surface \(\mathcal{A}\), we get an estimate of the mean number of lamellae \(\mu_{n}\) in bundles \[\mu_{n}\sim L(t)s_{B}/\mathcal{A}. \tag{11}\] Higher moments can be obtained from a study of the fractal structure of material lines. 
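Before turning to those moments, the spectrum just derived for the baker map, Eqs. (9)-(10), can be evaluated numerically, e.g. with a bracketed root finder; this is a small verification sketch only.

```python
import numpy as np
from scipy.optimize import brentq

def baker_Dq(a: float, q: float) -> float:
    """Solve the transcendental relation (9) for D_q in the baker map."""
    if np.isclose(q, 1.0):                      # limit q -> 1, Eq. (10)
        return 1.0 + 2.0 * np.log(2.0) / np.log(1.0 / a + 1.0 / (1.0 - a))
    def f(D):
        e = -(q - 1.0) * (D - 1.0)
        return a ** e + (1.0 - a) ** e - 2.0 ** q
    return brentq(f, 1.0 + 1e-9, 3.0)

a = 0.3
print([round(baker_Dq(a, q), 3) for q in (0.0, 1.0, 2.0, 3.0)])
# D_0 = 2 exactly; D_q decreases monotonically with q when stretching is
# heterogeneous (a != 0.5).
```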
To this end, we consider the spatial measure corresponding to the local number of lamellae in each bundle: \[p_{k}\equiv\frac{n_{k}}{\sum_{k}n_{k}} \tag{2.12}\] The Renyi definition (Grassberger 1983) of the fractal dimension of order 2 of this measure is \[D_{2}-1\approx\frac{\log\sum_{k}p_{k}^{2}}{\log s_{B}}, \tag{2.13}\] when \(s_{B}\to 0\). Replacing Eq. (2.12) in the last expression provides \[\sum_{k}\left(\frac{n_{k}}{\sum_{k}n_{k}}\right)^{2}=s_{B}^{D_{2}-1}. \tag{2.14}\] Figure 5: Relation between stretching rate mean \(\mu\) and variance \(\sigma^{2}\) and fractal dimension \(D_{1}\) in the baker map and sine flow with varying parameters \(a\) and \(A\). Figure 6: Numerical simulations showing the spatial distribution of the number of lamellae \(n\) in bundles in the sine flow (\(A=0.5\)) at two aggregation scales, \(s_{B}=1/200\) (a) and \(s_{B}=1/50\) (b). Since \(\sum_{k}n_{k}=N\mu_{n}\) we have \[\sum_{k}\left(\frac{n_{k}}{\sum_{k}n_{k}}\right)^{2}=\frac{1}{N}\left\langle(n/\mu_{n})^{2}\right\rangle, \tag{2.15}\] with \(N\approx\sqrt{\mathcal{A}}/s_{B}\) the number of bundles in the flow domain. Since \[\sigma_{n/\mu_{n}}^{2}=\left\langle(n/\mu_{n})^{2}\right\rangle-\left\langle n/\mu_{n}\right\rangle^{2}, \tag{2.16}\] then, \[\sigma_{n/\mu_{n}}^{2}=\sqrt{\mathcal{A}}s_{B}^{D_{2}-2}-1. \tag{2.17}\] Thus the variance of \(n/\mu_{n}\) reaches a constant at asymptotic times, which is given by the fractal dimension of order 2. The spatial variance of \(n\) is then \[\sigma_{n}^{2}=\mu_{n}^{2}\left(\sqrt{\mathcal{A}}s_{B}^{D_{2}-2}-1\right) \tag{2.18}\] The predictions of Eqs. (2.11)-(2.18) are plotted against time in Fig. 7, showing good agreement with simulations for a large range of Batchelor scales \(s_{B}\) and flow heterogeneity, characterized by the parameters \(a\) for the baker map and \(A\) for the sine flow. The pdf of lamella aggregation number \(P_{n}(n)\) closely follows a Gamma distribution for all simulated data in both baker and sine flow over a large range of time and fractal dimensions (Fig. 8): \[P_{n}(n)=\frac{1}{\Gamma(k_{n})\theta_{n}^{k_{n}}}n^{k_{n}-1}\exp(-n/\theta_{n}), \tag{2.19}\] with \(n\geqslant 0\) and \(k_{n},\theta_{n}\) defined by the moments of the distribution of \(n\): \[k_{n} =\left(\sqrt{\mathcal{A}}s_{B}^{D_{2}-2}-1\right)^{-1}, \tag{2.20}\] \[\theta_{n} =\mu_{n}(t)\left(\sqrt{\mathcal{A}}s_{B}^{D_{2}-2}-1\right), \tag{2.21}\] with \(\mu_{n}=L(t)s_{B}/\mathcal{A}\). Note that the gamma distribution yields a power law distribution at small \(n\) with exponent \(k_{n}\) going from zero to infinity with increasing \(D_{1}\). Negative moments of \(n\) (of order larger than \(k_{n}\)) may thus not be finite if \(k_{n}\) is small, that is for small \(D_{2}\) (large heterogeneity). In that case, all the probability is concentrated at low values of \(n\), thus in non-aggregated regions of the flow. In practice, we impose \(n\geqslant 1\) to ensure integrability of moments. Figure 7: a) Scaling of the spatial variance of \(\log n\) as a function of \(a\) in the baker map and theoretical prediction, Eq. (2.17). b) First two moments of \(P_{n}\) through time compared to theoretical predictions, Eqs. (2.11) and (2.18), in the baker map and c) in the sine flow. In these flows, the surface and length of the flow domain are equal to 1.

## 3 Lamellar concentrations in bundles

The linearity of the advection-diffusion operator (1.1) offers the possibility to decompose the scalar mixing problem into a sum of various initial value problems, similar to Green's functions.
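Returning briefly to the bundle-size statistics of the previous section, the gamma model of Eqs. (2.19)-(2.21) is simple to evaluate directly. The sketch below uses illustrative placeholder values for \(D_{2}\), \(s_{B}\), the domain area and the line-growth rate, not values fitted to any particular simulation.

```python
# Evaluating the gamma model (2.19)-(2.21) for the bundle-size pdf P_n.
import numpy as np
from scipy.stats import gamma

A_dom, s_B, D2, l0 = 1.0, 1.0 / 100, 1.7, 1.0     # illustrative values
lam = np.log(2.0)                                 # assumed growth rate of L(t)
t = 15.0
L_t = l0 * np.exp(lam * t)                        # material line length
mu_n = L_t * s_B / A_dom                          # Eq. (2.11)
var_fac = np.sqrt(A_dom) * s_B ** (D2 - 2) - 1.0
k_n = 1.0 / var_fac                               # Eq. (2.20)
theta_n = mu_n * var_fac                          # Eq. (2.21)

n = np.linspace(1, 10 * mu_n, 400)
P_n = gamma.pdf(n, a=k_n, scale=theta_n)          # Eq. (2.19)
print(f"mu_n = {mu_n:.1f}, k_n = {k_n:.2f}, theta_n = {theta_n:.1f}")
print("model mean and variance:", k_n * theta_n, k_n * theta_n ** 2)
```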
Thus, the asymptotic scalar concentration field can be envisioned as a local summation of solitary diffusive lamellae (Fig. 1) that are present in the same region at the same time. In the following, we recall the Lagrangian description of these solitary diffusive lamellae. Figure 8: \(P(n,t)\) for the baker map and sine flow for \(s_{B}=1/100\). Solid lines stand for the gamma pdf with theoretical moments given by Eq. (2.21) and dots stand for numerical simulations. a) baker map \(a=0.3\) and variable \(t\). b) baker map for \(t=20\) and variable \(a\). c) sine flow for \(s_{B}=1/100\) and \(A=0.4\). Figure 9: Simulation of aggregation statistics in baker map (\(a=0.3\)) and sine flow (\(A=0.5\)) for \(s_{B}=1/50\): 1) number of lamellae \(n\) in bundles, 2) mean of log-elongation in bundles and 3) sum of lamellar concentrations in bundles.

### The solitary lamella theory

Solitary lamellae are thin and elongated scalar structures that spontaneously form under the stirring action of a flow (Fig. 1). An analytical prediction of the temporal evolution of these quasi one-dimensional structures can be derived in a Lagrangian frame with a coordinate system \((x,y)\) advected with the flow and aligned with the directions of compression (\(x\)) and elongation (\(y\)) (Ranz, 1979; Villermaux, 2019). Because of their elongated shape, the concentration of lamellae is almost constant in the \(y\) direction. Thus, \(\partial_{y}c\) is negligible compared to \(\partial_{x}c\), and the two-dimensional advection-diffusion problem (1.1) simplifies to a one-dimensional advection-diffusion equation \[\partial_{t}c+u(x)\partial_{x}c=\kappa\partial_{x}^{2}c, \tag{3.1}\] with \(u=-x\gamma(t)\) the velocity at which solute particles are compressed in the direction \(x\) and \(\gamma(t)\geqslant 0\) the stretching rate. Owing to flow incompressibility, the stretching rate \(\gamma(t)\) in the \(y\)-direction leads to a compression rate \(-\gamma(t)\) in the \(x\)-direction. This approximation is valid when the characteristic compression time \(\gamma^{-1}\) is smaller than the characteristic diffusion time \(s_{0}^{2}/\kappa\), where \(s_{0}\) is the initial lamella width, that is, for \(\mathrm{Pe}_{0}=\gamma s_{0}^{2}/\kappa>1\) (Villermaux, 2019). Following Ranz (1979), we define a dimensionless rescaled time \[\tau=\frac{\kappa}{s_{0}^{2}}\int_{0}^{t}\rho(r)^{2}\mathrm{d}r, \tag{3.2}\] where \(\rho(t)=\exp(\int_{0}^{t}\gamma(t^{\prime})\mathrm{d}t^{\prime})\) is the lamella elongation, and a dimensionless rescaled space \(\xi=x\rho/s_{0}\). In these rescaled coordinates, Eq. (3.1) transforms to a simple diffusion equation \[\partial_{\tau}c=\partial_{\xi}^{2}c. \tag{3.3}\] For a Gaussian initial condition \(c(\xi,0)=\theta_{0}\exp(-\xi^{2})\), the solution is \[c(\xi,\tau)=\frac{\theta_{0}}{\sqrt{1+4\tau}}\exp(-(\xi/\sqrt{1+4\tau})^{2}). \tag{3.4}\] In the Lagrangian coordinate system \((x,y)\), the lamella scalar concentration follows \[\Theta(x,t)=\theta(t)\exp(-(x/s(t))^{2}) \tag{3.5}\] where \(\theta\) is the maximum concentration of the lamella, defined by \[\theta=\frac{\theta_{0}}{\sqrt{1+4\tau}}, \tag{3.6}\] and \(s\) the lamella width, following \[s=\frac{s_{0}\sqrt{1+4\tau}}{\rho}. \tag{3.7}\] Note that the mass in a given cross-section \[m=\sqrt{\pi}\theta s=\sqrt{\pi}\theta_{0}s_{0}\rho^{-1} \tag{3.8}\] is independent of the stretching history \(\tau\), but depends only on the final elongation state. Multiplying by the lamella elongation recovers mass conservation.
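As a simple illustration of Eqs. (3.2)-(3.8), the sketch below evolves a single lamella under a constant stretching rate (all parameter values are illustrative): at late times the width saturates at the Batchelor scale and the cross-sectional mass decays as \(\rho^{-1}\).

```python
# Evolution of a solitary lamella for a constant stretching rate (Ranz transformation).
import numpy as np

kappa, s0, theta0, gam = 1e-6, 1e-2, 1.0, 1.0      # diffusivity, initial width, peak, stretching rate
t = np.linspace(0.0, 12.0, 400)
rho = np.exp(gam * t)                               # elongation rho(t)
# Warped time tau(t) = (kappa/s0^2) * int_0^t rho^2 dt', Eq. (3.2), analytic for constant gam:
tau = kappa / s0 ** 2 * (rho ** 2 - 1.0) / (2.0 * gam)
theta = theta0 / np.sqrt(1.0 + 4.0 * tau)           # peak concentration, Eq. (3.6)
s = s0 * np.sqrt(1.0 + 4.0 * tau) / rho             # lamella width, Eq. (3.7)
m = np.sqrt(np.pi) * theta * s                      # cross-sectional mass, Eq. (3.8)

s_B = np.sqrt(2.0 * kappa / gam)                    # Batchelor scale
print("late-time width / Batchelor scale:", s[-1] / s_B)
print("m * rho is conserved:", np.allclose(m * rho, m[0] * rho[0]))
```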
In heterogeneous chaotic flows, we expect the Lagrangian elongation of lamellae \(\rho\) to be a random variable. In Appendix A, we recall basic results concerning the statistical behaviour of \(\rho\) in the sine flow and the baker map. The statistics of \(\tau\), \(\theta\) and \(s\) can be further derived from the statistics of \(\rho\). In chaotic flows, elongation increases exponentially fast (\(\rho(t)=e^{\gamma(t)t}\)) so that the last elongation value \(\rho(t)\) has a predominant weight in the stochastic integral (3.2). An approximation of the statistics of \(\tau\) was proposed (Meunier & Villermaux, 2010; Lester _et al._, 2016) as \[\tau\approx\frac{\kappa}{2s_{0}^{2}}\frac{t}{\log\rho}(\rho^{2}-1). \tag{3.9}\] When \(t\to\infty\), \[\tau\to\frac{\kappa}{2\lambda s_{0}^{2}}\rho^{2}=\frac{1}{4}\left(\frac{s_{B} }{s_{0}}\rho\right)^{2}, \tag{3.10}\] with \(s_{B}=\sqrt{2\kappa/\lambda}\), the Batchelor scale. Thus, \[\theta\to\frac{\theta_{0}s_{0}}{s_{B}}\rho^{-1} \tag{3.11}\] and \[s\to s_{B}. \tag{3.12}\] ### Aggregated scalar level After describing how solitary lamellae evolve in a chaotically stirred flow, we may now use the linear property of the advection-diffusion operator to obtain a description of the full scalar concentration field (Fig. 1). Indeed, the superposition of the concentration profiles (3.5) of solitary lamella allows reconstructing the aggregated scalar field. This property identity has been used to numerically retrieve scalar fields at large Peclet numbers (Meunier & Villermaux, 2010). To extract theoretical insights from this superposition process, we assume that at a later time, all lamellae reach the Batchelor scale and that their mass (Eq. (3.8)) can be homogeneously distributed inside a region of size \(\sim s_{B}\). We also use this continuum scale as the typical coarsening scale for aggregation (Fig. 2b). Note that the theoretical results presented in the following are not sensitive to the precise choice of the aggregation scale, which can be as well defined as a multiple of the Batchelor scale. Consider a box of width \(s_{B}\) centred in the position \(x\), the aggregated concentration level in this box can be constructed from the sum of the masses \(m_{i}\) of the \(n(x)\) individual lamellae present in this box \[c(x)\approx\frac{1}{s_{B}}\sum_{i=1}^{n(x)}m_{i}=\frac{\sqrt{\pi}\theta_{0}s_{ 0}}{s_{B}}\sum_{i=1}^{n(x)}\rho_{i}^{-1}. \tag{3.13}\] where we used Eq. (3.8) for the evolution of the solute mass carried by an individual lamella at a given location. Eq. (3.13) forms the base of the statistical description of aggregated concentration in chaotic flows. To simplify notations, we drop in the following the dependency on \(x\) of both \(n\) and \(c\), and consider these as random variables of space. It is tempting to deduce the statistical moments of \(c\) with the similar scaling arguments as the ones used for \(n\) (see Section 2.2). However, in contrast to \(n\), \(c\) is essentially non-fractal. Let us define the local measure \[p_{k}=\frac{c_{k}}{\sum_{i=1}^{N}c_{i}}=\frac{\sum_{i=1}^{n_{k}}1/\rho_{i}}{ \alpha}, \tag{3.14}\] where the normalising factor is \[\alpha=\sum_{k}\sum_{i=1}^{n_{k}}1/\rho_{i}=L(t)\mu_{1/\rho,L}, \tag{3.15}\] with \(\mu_{1/\rho,L}\) the mean of \(1/\rho\) sampled along the filament's length \(L\). 
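As an aside, the coarse-graining in Eq. (3.13) is easy to implement once the filament points and their elongations are known. The sketch below uses synthetic placeholder data in place of the actual Lagrangian output of the simulations.

```python
# Reconstructing the coarse-grained concentration field c(x) from Lagrangian data, Eq. (3.13).
import numpy as np

A_dom, s_B, theta0, s0 = 1.0, 1.0 / 50, 1.0, 1.0 / 50
rng = np.random.default_rng(1)
npts = 100000
x, y = rng.random(npts), rng.random(npts)            # filament point positions (placeholder)
rho = np.exp(rng.normal(5.0, 1.5, npts))             # lamella elongations (placeholder)

nbins = int(round(np.sqrt(A_dom) / s_B))             # grid of boxes of size s_B
ix = np.minimum((x / s_B).astype(int), nbins - 1)
iy = np.minimum((y / s_B).astype(int), nbins - 1)
flat = ix * nbins + iy                               # flattened box index per point

n_map = np.bincount(flat, minlength=nbins * nbins)                  # bundle sizes n(x)
c_map = np.sqrt(np.pi) * theta0 * s0 / s_B * np.bincount(
    flat, weights=1.0 / rho, minlength=nbins * nbins)               # Eq. (3.13)
print("mean bundle size:", n_map.mean(), " mean concentration:", c_map.mean())
```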
Taking the log-normal approximation for \(\rho\) (see Appendix A), \[\mu_{1/\rho,L} =e^{-(\mu+\sigma^{2})t+\sigma^{2}t/2}, \tag{3.16}\] \[L =\ell_{0}e^{(\mu+\sigma^{2}/2)t}, \tag{3.17}\] such that the normalisation factor is a time-independent constant. The dependence of \(\langle p_{k}^{q}\rangle\) with scale can be derived analytically in simple map such as the baker map. Applying a similar procedure as described in Eq. (2.5), we find that \[I_{q}=a^{q}I_{q}(\epsilon/a)+(1-a)^{q}I_{q}(\epsilon/(1-a)) \tag{3.18}\] such that \[1=a^{(D_{q}-1)}+(1-a)^{(D_{q}-1)}. \tag{3.19}\] Thus \(D_{q}=2\) for all \(q\), meaning that the aggregated concentration field is a non-fractal quantity. This result can also be intuitively understood as follows. Since aggregation is correlated, \(1/n\sim 1/\rho\), so that both the number of lamellae in bundles \(n\) and their mean elongation \(\rho\) have similar fractal properties. Thus, the ratio \(c\sim n/\rho\) is likely to be scale invariant. The fact that \(D_{q}=2\) for all \(q\) also means that scalar concentration ultimately tends to a dense and homogeneous field, in agreement with the mixing property of chaotic flows. The Renyi definition of the fractal dimension (Grassberger, 1983) for the measure defined in Eq. (3.14) reads \[\frac{\sum_{k=1}^{N}c_{k}^{q}}{\left(\sum_{k=1}^{N}c_{k}\right)^{q}}=s_{B}^{( D_{q}-1)}=s_{B}, \tag{3.20}\] since \(D_{q}=2\). Since \(\sum_{k=1}^{N}c_{i}\to N\mu_{c}\), with \(N=\sqrt{\mathcal{A}}/s_{B}\), we have \[\langle(c/\mu_{c})^{q}\rangle\sim\sqrt{\mathcal{A}}s_{B}^{2-q}. \tag{3.21}\] In particular, the second moment of \(c\) (\(q=2\)) shows scale independence, e.g., \(\langle(c/\mu_{c})^{2}\rangle\sim s_{B}^{0}\). Thus, the spatial fluctuations of \(c\) are insensitive to Peclet number. To quantify these fluctuations and their temporal evolution, we must take a deeper look into the local distribution of lamellar elongations in bundles, their moments, and their relation to the bundle size. ### Local correlations between \(n\) and \(\rho\) In contrast to the bundle size \(n\), the scale independence of the aggregated concentration levels \(c\) precludes describing the decay of scalar variance from the fractal geometry created by the chaotic flow. However, we will show that the fractal dimension still plays a role in determining the correlations between the bundle size \(n\) and the local moments of lamella elongations in these bundles. To this end, we define the conditional averaging operator acting in lamellae located in the local neighbourhood of size \(s_{B}\) by \[\langle X|n\rangle=\frac{1}{n}\sum_{i=1}^{n}X_{i}, \tag{3.22}\] where \(X\) is a Lagrangian variable transported by lamellae and \(n\) is the number of lamellae aggregated in the bundle. The remainder of this Section is dedicated to uncovering the behaviour of conditional moments of elongation knowing \(n\) (e.g. \(\langle\rho^{-q}|n\rangle\)). Section 4 will then be dedicated to deriving unconditional probabilities by averaging on the distribution of \(n\). We plot in Fig. 10 the joint probability \(P(n(\mathbf{x}),\langle X|n\rangle)\) obtained in the baker map and the sine flow for \(X=\rho^{-1}\) and \(X=\log\rho\), the inverse of elongation and the log-elongation of lamella respectively. Fig. 10 suggests that the following scaling holds in both flows: \[\log n\sim-\log\langle\rho^{-1}|n\rangle, \tag{3.23}\] which confirms the strong correlation between the number of lamellae in aggregates and their elongation. 
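The conditional averages shown in Fig. 10 can be extracted from the same binned Lagrangian data. The sketch below (again with synthetic placeholder inputs) groups lamellae by bundle and computes \(\langle\rho^{-1}|n\rangle\) and \(\langle\log\rho|n\rangle\).

```python
# Conditional bundle statistics <rho^{-1}|n> and <log rho|n> from binned Lagrangian data.
import numpy as np

rng = np.random.default_rng(2)
npts, nbins = 100000, 50
flat = rng.integers(0, nbins * nbins, npts)          # box index of each lamella (placeholder)
rho = np.exp(rng.normal(5.0, 1.5, npts))             # lamella elongations (placeholder)

n_box = np.bincount(flat, minlength=nbins * nbins)
inv_rho_box = np.bincount(flat, weights=1.0 / rho, minlength=nbins * nbins)
log_rho_box = np.bincount(flat, weights=np.log(rho), minlength=nbins * nbins)

occupied = n_box > 0
n = n_box[occupied]
mean_inv_rho = inv_rho_box[occupied] / n             # <rho^{-1} | n> per bundle
mean_log_rho = log_rho_box[occupied] / n             # <log rho | n> per bundle

# Average over bundles sharing the same n, as done to build Fig. 10
for nv in np.unique(n)[:5]:
    sel = n == nv
    print(nv, mean_inv_rho[sel].mean(), mean_log_rho[sel].mean())
```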
For large time, \(c\) must tend to the conserved average scalar concentration \(c\to\langle c\rangle\). Thus we must have \[n\sim 1/\langle\rho^{-1}|n\rangle, \tag{3.24}\] a scaling that we confirm numerically (Fig. 10). In incompressible flows, the distance between lamellae \(d\) is proportional to the amount of compression they have experienced, \(\rho_{i}^{-1}\). Since the number of lamellae in a box of size \(r\) is \(n\sim 1/\langle d_{i}|n\rangle\), then, \(n\sim 1/\langle\rho^{-1}|n\rangle\), which recovers the above result. Fig. 10 also suggests that \[\log n\sim(D_{1}-1)\langle\log\rho|n\rangle, \tag{3.25}\] where \(D_{1}\) is the information dimension (Ott & Antonsen Jr 1989) of the measure \(n\), and is given by Eq. (2.10) for the baker map. Equation (3.25) can be derived exactly in the case of the baker map. Indeed, by the action of the map, the total number of lamellae increases as \(\log n=t\log 2\) while the mean log-elongation of these lamellae is \(\langle\log\rho\rangle=t(-\log a-\log(1-a))/2\) leading to a constant ratio \[\frac{\log n}{\langle\log\rho\rangle}=\frac{2\log 2}{\log a+\log(1-a)}, \tag{3.26}\] which is exactly the value of \(D_{1}-1\) (Eq. (2.10)). Assuming that the partition between \(\log n\) Figure 10: Joint pdf (gray scale) of the number of lamellae in a bundle of size \(s_{B}=1/200\) and (1) their mean inverse elongation (2) and their mean log-elongation for (a.) baker map (\(a=0.1,t=24,D_{1}=1.57\)) and (b.) sine flow (\(A=0.8,t=10,D_{1}=1.74\)). The theoretical scaling of the measure (1) and (2), given by Eq. (3.23) and (3.25) respectively, are plotted as a continuous red lines with the slope indicated in the legend. Dashed red lines are guides to the eyes. and \(\langle\log\rho\rangle\) is preserved at small scales in each bundle, we have \[\log n=(D_{1}-1)(\langle\log\rho|n\rangle-\langle\log\rho_{c}|n\rangle) \tag{3.27}\] with \(\langle\log\rho_{c}|n\rangle\) a constant standing for the mean elongation at coalescence time (\(n>1\)). This thus proves Eq. (3.25) for the baker map. The constant \(\langle\log\rho_{c}|n\rangle\) can be estimated by comparing the average surface \(S=s_{B}\rho_{c}L_{0}\) occupied by the material line when it reaches the Batchelor scale \(s_{B}\), with the available area \(A\). The first aggregation event occurs when \(S\approx A\), that is when \(\rho_{c}\approx A/(s_{B}\ell_{0})\). Eq. (3.27) is verified for baker map and sine flow with various parameters \(a\) and \(A\), with \(\langle\log\rho_{c}|n\rangle=-\log s_{B}\). ### Distribution of \(\log\rho\) in a bundle of size \(n\) The two scaling laws \(n\sim\langle\rho^{-1}|n\rangle\) and \(\log n\sim(D_{1}-1)\langle\log\rho|n\rangle\) provide key information about the heterogeneity of lamella elongations inside bundles. Since the ensemble distribution of elongation \(P_{\rho}(\rho)\) has a log-normal shape, we assume that the distribution of elongations inside bundles, denoted \(P_{\rho|n}\), is also log-normally distributed. This implies that \(\log\rho\) is normally distributed in bundles, with a mean \[\mu_{\log\rho|n}\sim(D_{1}-1)^{-1}\log n. \tag{3.28}\] Since \(\log\langle\rho^{-1}|n\rangle=-\mu_{\log\rho|n}+\sigma_{\log\rho|n}^{2}/2 \sim-\log n(\mathbf{x})\), the variance of log-elongation in bundles at large \(n\) must be \[\sigma_{\log\rho|n}^{2}\sim\frac{2(2-D_{1})}{D_{1}-1}\log n. \tag{3.29}\] We report in Fig. 11 the simulated scaling \(\mu_{\log\rho|n}/\log n\) and \(\sigma_{\log\rho|n}^{2}/\log n)\) obtained asymptotically at large mixing times. 
When \(D_{1}\to 2\), \(\mu_{\log\rho|n}/\log n\to 1\) while \(\sigma_{\log\rho|n}^{2}/\!\log n\to 0\), meaning that bundles are formed by lamella of identical elongations. In contrast, when \(D_{1}\to 1\), both \(\mu_{\log\rho|n}/\!\log n\) and \(\sigma_{B}^{2}/\!\log n\) become infinite, while their ratio \(\sigma_{\log\rho|n}^{2}/\mu_{\log\rho|n}=2(2-D_{1})\to 0.5\). This limit suggests that the aggregation of lamellae remains correlated to their average elongation, although a fixed amount of stretching variability arises in bundles. A good agreement is found between theoretical prediction (Eqs. (3.28)-(3.29)) and numerical Figure 11: Scaling of the mean (a) and variance (b) of log-elongation in bundles as a function of the information dimension \(D_{1}\). Circles stands for numerical simulations in baker maps (open circles) and sine flow (filled circles). Continuous lines stands for theoretical prediction of the mean (Eq. (3.28)) and variance (Eq. (3.29)). Dashed lines are plotted to compare the mean with fractal dimension of other order. simulations of aggregation in the baker map (Fig. 11). In contrast, the theory captures only qualitatively the behaviour of the random sine flow. This may be due to the continuity of the sine flow produces curved lamellar structures whose dimension is not exactly one-dimensional. These results further invalidate the fully correlated aggregation hypothesis that assumes a uniform elongation in each bundle. Indeed, the stretching variability in bundles is directly linked to the heterogeneity of the chaotic flow, because of the intimate relationship existing between the fractal geometry of the chaotic attractor and the stretching statistics of fluid elements (Ott & Antonsen Jr 1989). As such, it is impossible to have a single stretching rate per bundle as soon as the chaotic flow is heterogeneous and exhibits a distribution of stretching rates. The absence of stretching variability in bundles (\(\sigma^{2}_{\log\rho|n}=0\)) implies the uniformity of stretching at large scale (\(\sigma^{2}_{\rho}=0\)). This uniform case is reached when \(D_{1}\to D_{0}=2\), for instance, in the baker map when \(a\to 0.5\). In continuous flow maps such as the sine flow, regions of high and low stretching always coexist and \(\sigma^{2}_{\log\rho|n}>0\). ### Moments of \(1/\rho\) in a bundle of size \(n\) Having described the first two moments of the distribution of lamella elongation in bundles (Eqs (3.28)-(3.29)), we now assume that the distribution is of log-normal shape. This choice is justified by the fact that elongation is a multiplicative process, thus usually leading to lognormal distributions (Le Borgne _et al._ 2015; Souzy _et al._ 2020). This allows us to compute the scaling of the \(q-\)moments of lamella concentrations in bundles, \(\theta|n\). Owing to Eq. (3.11), Figure 12: a) Scaling exponents of the \(q\) lamellar concentration moments in bundles (Eq. (3.30)). Numerical estimates are plotted with symbols, red diamonds for \(q=2\) and black circles for \(q=1\). Unfilled and filled symbols represents simulations in baker map and sine flow respectively. Theoretical predictions (Eq. (3.33)) are represented by lines. b) Intercept \(\tilde{\omega}\) of the scaling exponent of the \(q\) lamellar concentration moments in bundles (Eq. (3.30)) in baker map (empty squares) and sine flow (filled squares) and theoretical prediction (line, Eq. (3.34)). 
we have \[\langle\theta^{q}|n\rangle\sim\langle\rho^{-q}|n\rangle=\int_{1}^{\infty}\rho^{-q}P_{\rho|n}(\rho)\mathrm{d}\rho\approx\int_{1}^{\infty}e^{-(\log\rho-\mu_{\log\rho|n})^{2}/(2\sigma_{\log\rho|n}^{2})-q\log\rho}\mathrm{d}\rho. \tag{3.30}\] The minimum bound for the integral is taken at \(\rho=1\) and not \(0\), taking into account the fact that lamellar structures cannot be compressed in their longitudinal direction. As a consequence, \(P_{\rho|n}(\rho)\) is truncated for \(\rho<1\). Denoting \(\Lambda=\log\rho/\log n\), \(\tilde{\mu}=\mu_{\log\rho|n}/\log n\) and \(\tilde{\sigma}^{2}=\sigma_{\log\rho|n}^{2}/\log n\), this expression becomes \[\langle\rho^{-q}|n\rangle\approx\int_{1}^{\infty}e^{H(\Lambda)\log n}\mathrm{d}\rho, \tag{3.31}\] with \(H(\Lambda)=-(\Lambda-\tilde{\mu})^{2}/(2\tilde{\sigma}^{2})-q\Lambda\). For large \(n\), the value of this integral tends to \(e^{H(\Lambda^{*})\log n}\), where \(\Lambda^{*}\) is the value at which \(H\) is maximum, that is either \(\Lambda^{*}=\tilde{\mu}-q\tilde{\sigma}^{2}\) if \(\tilde{\mu}-q\tilde{\sigma}^{2}>0\), or \(\Lambda^{*}=0\) otherwise. Thus, \[\log\langle\rho^{-q}|n\rangle\approx-(\gamma_{q,\rho^{-1}|n}\log n+\omega_{q,\rho^{-1}|n})\quad\mathrm{with}\ \left\{\begin{array}{ll}\gamma_{q,\rho^{-1}|n}=q\tilde{\mu}-q^{2}\tilde{\sigma}^{2}/2&\mbox{ if }\tilde{\mu}>q\tilde{\sigma}^{2},\\ \gamma_{q,\rho^{-1}|n}=\tilde{\mu}^{2}/(2\tilde{\sigma}^{2})&\mbox{ if }\tilde{\mu}\leqslant q\tilde{\sigma}^{2},\end{array}\right. \tag{3.32}\] and \(\omega_{q,\rho^{-1}|n}=q\log(\mathcal{A}/(s_{B}\ell_{0}))\) a constant. In particular, we are interested in the exponent \(q=2\), which is useful to describe fluctuations around the mean. We have \[\tilde{\gamma}\equiv\gamma_{2,\rho^{-1}|n}=\left\{\begin{array}{ll}2\tilde{\mu}-2\tilde{\sigma}^{2}&\mbox{ if }\tilde{\mu}>2\tilde{\sigma}^{2},\\ \tilde{\mu}^{2}/(2\tilde{\sigma}^{2})&\mbox{ if }\tilde{\mu}\leqslant 2\tilde{\sigma}^{2}.\end{array}\right. \tag{3.33}\] The predicted dependence of \(\tilde{\gamma}\) upon \(D_{1}\) is reproduced in Fig. 12. \(\tilde{\gamma}\) is bounded between \(2\) (for \(D_{1}\to 2\)) and \(1\) (for \(D_{1}\approx 1.5\)). The prediction agrees reasonably well with numerical simulations of the baker and sine flows (Fig. 12). The slight discrepancies can be attributed to deviations from log-normally distributed elongation in bundles, as postulated before. We also verify numerically that \(\tilde{\omega}\equiv\omega_{2,\rho^{-1}|n}\) is independent of \(D_{1}\) (Fig. 12b). In Fig. 13, we verified that \(\tilde{\gamma}\) is independent of the aggregation scale \(s_{B}\). In contrast, \[\tilde{\omega}\approx 2\log(\mathcal{A}/(\ell_{0}s_{B})), \tag{3.34}\] is a sole function of the aggregation scale, independent of time and fractal dimension (Fig. 12b). Having determined both the elongation statistics inside aggregates of size \(n\) and the spatial distribution of \(n\), we will deduce in the following section the statistics of aggregated scalar levels \(c\).

## 4 Aggregated scalar concentrations

The scalar concentration of a bundle \(c\) is formed by the superposition of the \(n\) individual lamellae contained in this bundle (Fig. 1), according to Eq. (5). The concentration of each individual lamella is a random variable; thus the superposition of these random variables is also a random variable, whose statistical properties are derived below.
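The exponents of Eqs. (3.32)-(3.33) follow from \(\tilde{\mu}\) and \(\tilde{\sigma}^{2}\) alone. The sketch below evaluates them from \(D_{1}\) via Eqs. (3.28)-(3.29) and compares with a direct quadrature of the truncated log-normal average for an illustrative value of \(n\); agreement is expected only to leading order in \(\log n\).

```python
# Saddle-point exponent gamma_{q,rho^{-1}|n} (Eqs. (3.32)-(3.33)) versus direct quadrature.
import numpy as np
from scipy import integrate

def gamma_q(D1, q):
    mu_t = 1.0 / (D1 - 1.0)                       # tilde-mu, from Eq. (3.28)
    sig2_t = 2.0 * (2.0 - D1) / (D1 - 1.0)        # tilde-sigma^2, from Eq. (3.29)
    if mu_t > q * sig2_t:
        return q * mu_t - q ** 2 * sig2_t / 2.0
    return mu_t ** 2 / (2.0 * sig2_t)

def gamma_q_numeric(D1, q, n):
    logn = np.log(n)
    mu = logn / (D1 - 1.0)
    sig2 = 2.0 * (2.0 - D1) / (D1 - 1.0) * logn
    w = lambda y: np.exp(-(y - mu) ** 2 / (2.0 * sig2))   # log rho ~ N(mu, sig2), truncated at y >= 0
    upper = mu + 10.0 * np.sqrt(sig2)
    peak = min(max(mu - q * sig2, 0.0), upper)
    num, _ = integrate.quad(lambda y: np.exp(-q * y) * w(y), 0.0, upper, points=[peak])
    den, _ = integrate.quad(w, 0.0, upper, points=[peak])
    return -np.log(num / den) / logn

for D1 in (1.6, 1.8, 1.95):
    print(D1, gamma_q(D1, 2), gamma_q_numeric(D1, 2, n=1e6))
```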
### Addition of scalar levels

We found that the moments of lamella elongation inside bundles of size \(n\) follow: \[\langle\rho^{-1}|n\rangle=\frac{s_{B}\ell_{0}}{\mathcal{A}}n^{-1} \tag{4.1}\] \[\langle\rho^{-2}|n\rangle=\frac{(s_{B}\ell_{0})^{2}}{\mathcal{A}^{2}}n^{-\tilde{\gamma}} \tag{4.2}\] with \(\tilde{\gamma}\equiv\gamma_{2,\rho^{-1}|n}\) a flow-dependent exponent depending on \(D_{1}\) and taking values between 1 and 2 (Fig. 12a). The variance of lamellar concentrations inside bundles thus follows: \[\sigma^{2}_{\rho^{-1}|n}=\frac{(s_{B}\ell_{0})^{2}}{\mathcal{A}^{2}}\left(n^{-\tilde{\gamma}}-n^{-2}\right). \tag{4.3}\] To relate the statistics of individual lamella concentrations inside bundles to the statistics of aggregate concentrations \(c\), we assume that bundles are formed through Eq. (3.13) from a sum of independent and identically distributed random numbers. These random numbers must be picked from a random variable following the stretching statistics of lamellae _among_ bundles of similar size \(n\) rather than the statistics _inside_ each of these bundles, as described above. However, as shown in Appendix C, such sampling effects do not play a role at large \(n\), and we have \[\sigma^{2}_{c|n}\approx n\left(\frac{\sqrt{\pi}\theta_{0}s_{0}}{s_{B}}\right)^{2}\sigma^{2}_{\rho^{-1}|n}=\frac{(\sqrt{\pi}\theta_{0}\ell_{0}s_{0})^{2}}{\mathcal{A}^{2}}\left(n^{1-\tilde{\gamma}}-n^{-1}\right). \tag{4.4}\] When \(n\) is large, this expression further simplifies to \[\sigma^{2}_{c|n}\sim n^{1-\tilde{\gamma}}, \tag{4.5}\] with \(\tilde{\gamma}\in[1,2]\) given by Eq. (3.33). The mean concentration is conserved by the aggregation process, since \[\langle c|n\rangle=n\cdot\frac{\sqrt{\pi}\theta_{0}s_{0}}{s_{B}}\langle\rho^{-1}|n\rangle=\frac{\sqrt{\pi}\theta_{0}\ell_{0}s_{0}}{\mathcal{A}}\equiv\langle c\rangle. \tag{4.6}\] In Fig. 14, we compare the scaling of the second moment of \(c|n\) with \(n\) observed in numerical simulations to the prediction obtained with the independence assumption (Eq. (4.5)). The prediction is relatively accurate for the random sine flow, but largely underestimates the exponent for the deterministic baker map. Figure 13: Dependence of \(\tilde{\gamma}\) and \(\tilde{\omega}\) on the Batchelor scale in simulations (dots) of the baker map (empty symbols, \(a=0.2\)) and the sine flow (filled symbols, \(A=1.2\)) and comparison to the theoretical prediction (Eq. (3.34)). Indeed, the simplicity and regularity of the deterministic baker map makes bundles of similar size not statistically independent. While bundle concentration still results from the addition of variable lamellar concentrations, independent realisations of the summation are not achieved due to the deterministic nature of the baker map, the exact same lamellar geometrical patterns being repeated at a smaller and smaller scale. In the extreme case of a unique realization, the variance of the sum is exactly the variance of the random variable. Eq. (4.4) thus transforms into \[\sigma^{2}_{c\mid n}\sim\sigma^{2}_{\rho^{-1}|n}\sim n^{-\tilde{\gamma}}, \tag{4.7}\] a scaling that better fits the deterministic baker map simulations (Fig. 14).
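Under the independent-summation assumption, the scaling of Eq. (4.5) can be checked with a small Monte Carlo experiment. The sketch below draws log-normal elongations with the bundle moments of Eqs. (3.28)-(3.29); the value of \(D_{1}\), the sample sizes and the crude handling of the truncation \(\rho\geqslant 1\) are all illustrative choices (the truncation is immaterial for the chosen parameters).

```python
# Monte Carlo check of sigma^2_{c|n} ~ n^{1 - gamma_tilde}, Eq. (4.5), for i.i.d. summation.
import numpy as np

rng = np.random.default_rng(3)
D1, nreal = 1.8, 4000
mu_t = 1.0 / (D1 - 1.0)                          # Eq. (3.28)
sig2_t = 2.0 * (2.0 - D1) / (D1 - 1.0)           # Eq. (3.29)

def var_c_given_n(n):
    mu, sig = mu_t * np.log(n), np.sqrt(sig2_t * np.log(n))
    y = rng.normal(mu, sig, size=(nreal, n))
    y = np.maximum(y, 0.0)                       # crude stand-in for the truncation rho >= 1
    c = np.exp(-y).sum(axis=1)                   # sum of 1/rho_i over the bundle (Eq. (3.13) up to a prefactor)
    return c.var()

ns = np.array([16, 32, 64, 128, 256, 512])
v = np.array([var_c_given_n(int(n)) for n in ns])
slope = np.polyfit(np.log(ns), np.log(v), 1)[0]
gamma_tilde = 2 * mu_t - sig2_t if mu_t > 2 * sig2_t else mu_t ** 2 / (2 * sig2_t)
print("fitted exponent:", slope, " predicted 1 - gamma_tilde:", 1.0 - gamma_tilde)
```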
To summarise, the addition of lamellar concentration levels in a bundle yields a concentration whose deviation from the mean decays algebraically with the number of lamella in the bundle \[\sigma^{2}_{c\mid n}\approx\frac{(\theta_{0}\ell_{0}s_{0})^{2}}{\mathcal{A}^{2 }}n^{-\xi}, \tag{4.8}\] with \(\xi=\tilde{\gamma}\) for purely deterministic flows (baker map) and \(\xi=\tilde{\gamma}-1\) for random flows (sine flow). We call \(\xi\) the _correlation_ exponent, which can take values between 0 and 2 depending on the flow heterogeneity and randomness. ### Distribution of \(c\) In Section 2, we derived the distribution of the number of lamella in bundles (Eq. (2.21)) and in Section 3, the scaling of the first two moments of aggregated concentration \(c\) given the bundle size \(n\) (Eq. (4.8)). With these elements, we can now express the unconditional pdf of scalar concentration \(P_{c}\) via the sum \[P_{c}(c)=\int_{n}\mathrm{d}nP_{c\mid n}(c)P_{n}(n), \tag{4.9}\] Figure 14: Scaling exponent \(\xi\) (Eq. (4.8)) of the variance of bundle concentrations knowing \(n\) estimated from simulations (dots) and theoretical predictions with the independent realisation hypothesis for the sine flow (dashed lines, \(\xi=\tilde{\gamma}-1\), Eq. (4.5)) and baker map (continuous lines, \(\xi=\tilde{\gamma}\), Eq. (4.7)). where \(P_{c|n}\), the distribution of \(c\) given the bundle size \(n\), has to be specified. A possible choice for \(P_{c|n}\) is the log-normal distribution, with parameters \[\mu_{\log c|n} =\log\mu_{c|n}-\log(\sigma_{c|n}^{2}/\mu_{c|n}^{2}+1)/2 \tag{4.10}\] \[\sigma_{\log c|n}^{2} =\log(\sigma_{c|n}^{2}/\mu_{c|n}^{2}+1). \tag{4.11}\] In Fig. 15, we plot the simulated distribution of aggregated concentration levels compared to the prediction Eq. (4.9) for the baker map and sine flow. The agreement is fair in the region near \(\langle c\rangle\), but deviates for large \(c\). Indeed, this corresponds to lamellae with weak aggregation for which \(n\approx 1\). In this region, the solitary strip pdf \(P_{\rho^{-1},L}\) describes well the tail of \(P_{c}\) because such high concentration excursions are essentially supported by isolated lamellae, while the correlated aggregation model assumes \(n\gg 1\). The presence of these weakly aggregated, high concentration levels is particularly evident at small \(s_{B}\) (Fig. 15 (b)). The scalar concentration pdf is thus the combination of an aggregated core around the mean following Eq. (4.9) and tails following the isolated strip concentration pdf. In Figs. 2c and 16, we compare the correlated aggregation model with the random aggregation model where \(\langle n\rangle\sim L(t)\) (Eq. (1.4)). The random aggregation assumption yields gamma pdfs (Eq. (1.6)) that are narrowing much faster than the simulated pdfs in the sine flow. In contrast, the fully correlated model captures well the tails of the pdf, but artificially peaks around the mean concentration. The correlated model is an intermediate scenario that captures both the tail and the center part of the pdf. From pdf of aggregated scalar concentration, we now derive its moments. They are directly related to the pdf of \(n\), since \[\langle c\rangle =\int\mathrm{d}c\sum_{n}cP(c|n)P(n)=\sum_{n}\langle c\rangle_{n }P(n) \tag{4.12}\] \[=\frac{\theta_{0}\ell_{0}s_{0}}{\mathcal{A}} \tag{4.13}\] Figure 15: Distributions of aggregated scalar concentrations in the sine flow depending on a) the sine wave amplitude \(A\) (\(s_{B}=1/50\)) and b) the aggregation scale \(s_{B}\) (\(A=0.9\)). 
Dots stand for numerical simulations, continuous lines are the aggregation model (Eq. (4.9)), and the dashed lines are the isolated strip prediction (Eq. (3.5)). Simulations are all taken at the time when the total filament length reaches \(L=10^{7}\ell_{0}\). and \[\langle c^{2}\rangle =\int_{c}\mathrm{d}c\int_{n}\mathrm{d}n\,c^{2}P(c|n)P(n)=\int_{n} \mathrm{d}n\langle c^{2}\rangle_{n}P(n) \tag{4.14}\] \[=\frac{(\theta_{0}\ell_{0}s_{0})^{2}}{\mathcal{A}^{2}}(\langle n^ {-\xi}\rangle_{n}+1). \tag{4.15}\] Thus, the scalar variance is \[\sigma_{c}^{2}=\frac{(\theta_{0}\ell_{0}s_{0})^{2}}{\mathcal{A}^{2}}\langle n^ {-\xi}\rangle_{n}. \tag{4.16}\] Note that \(\langle n^{-\xi}\rangle\) is not defined for all \(D_{1}\) when \(k_{n}(D_{1})<\xi(D_{1})\), with \(k_{n}\) the exponent of the gamma distribution chosen for the pdf of \(n\) (Eq. (2.21)). This is because of the power law scaling of the gamma distribution near \(n=0\), which may renders negative moments non-integrable. However, the \(n\to 0\) limit is not relevant here because the flows are space-filling (\(D_{0}=2\)) and asymptotically, \(n\geqslant 1\). Thus, we cut the integral at \(n=1\) to get \[\langle n^{-\xi}\rangle_{n}\sim(\theta_{n})^{-\min(k_{n},\xi)} \tag{4.17}\] An intuitive understanding of this equation can be formulated as follows. If the spatial heterogeneity of \(n\) is moderate (\(\xi<k_{n}\)), the average of \(n^{-\xi}\) is affected by all values of \(n\) in the distribution. In contrast, if the heterogeneity is stronger (\(\xi>k_{n}\)), the probability of having low aggregation regions (\(n\approx 1\)) is high and controls the value of \(\langle n^{-\xi}\rangle\). In that case, the average does not scale anymore with \(\xi\), but rather with the parameter \(k_{n}\), explaining the minimum exponent \(\min(k_{n},\xi)\). Combining Eq. (4.17) and Eq. (2.21) provides the asymptotic scalar variance decay as a function of the growth material length \[\sigma_{c}^{2}(t)=\left(\frac{L(t)s_{B}(\sqrt{\mathcal{A}}s_{B}^{D_{2}-2}-1)} {\mathcal{A}}\right)^{-\min(k_{n},\xi)}\sim L(t)^{-\min(k_{n},\xi)}, \tag{4.18}\] where the growth of material length follows \(L(t)=2^{t}\) in the baker map, and \(L(t)=\exp((\mu+\sigma^{2}/2)t)\) in the sine flow. Thus, in a correlated aggregation scenario, the decay exponent of scalar variance is found to be a fraction of the growth exponent of material lines. Figure 16: \(P_{c}\) in sine flows at several times (\(A=0.8,s_{B}=1/50\)) (dots) compared with fully random aggregation (dashed lines, Eq. (1.6)) and correlated aggregation model (continuous lines, Eq. (4.9)) In the sine flow, \(k_{n}\) is generally larger than \(\xi\) such that the scalar variance decay exponent is \(\gamma_{c,2}=(\mu+\sigma^{2}/2)\xi\). In Fig. 17.a, we compare the theoretical estimates of the scalar variance decay exponent \(\gamma_{2,c}\) with simulations in the sine flow, showing relatively good agreement. Interestingly, the variance decay rate remains well predicted by the isolated strip model (see Appendix B) although its match with the full pdf is very poor except for large concentrations (Fig. 15). This is in line with previous observations (Haynes & Vanneste, 2005) that variance decay rates are relatively insensitive to lamella aggregation. This reflects the correlated nature of aggregation in chaotic flows: the least stretched fraction of lamella are the least aggregated ones while they contribute the most to the scalar fluctuations because of their high concentration level. 
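For illustration, the mixture in Eq. (4.9) can be evaluated directly by combining the gamma law of Eq. (2.19) with the log-normal conditional law of Eqs. (4.10)-(4.11). All parameter values below are illustrative, and the \(O(1)\) prefactor of Eq. (4.8) is absorbed into the relative variance \(n^{-\xi}\).

```python
# Evaluating the aggregate concentration pdf P_c of Eq. (4.9) as a gamma/log-normal mixture.
import numpy as np
from scipy.stats import gamma, lognorm

mean_c = 1.0                 # <c>, conserved by aggregation (Eq. (4.6))
xi = 0.5                     # correlation exponent, flow dependent (Fig. 14)
k_n, theta_n = 2.0, 50.0     # gamma parameters of P_n (Eqs. (2.20)-(2.21)), illustrative

c = np.linspace(1e-3, 3.0, 600)
n = np.arange(1, 2000)
P_n = gamma.pdf(n, a=k_n, scale=theta_n)
P_n /= P_n.sum()             # discrete normalisation over n >= 1

P_c = np.zeros_like(c)
for ni, w in zip(n, P_n):
    rel_var = ni ** (-xi)                          # sigma^2_{c|n} / mu^2_{c|n}, Eq. (4.8)
    sig2 = np.log(rel_var + 1.0)                   # Eq. (4.11)
    mu = np.log(mean_c) - sig2 / 2.0               # Eq. (4.10)
    P_c += w * lognorm.pdf(c, s=np.sqrt(sig2), scale=np.exp(mu))
print("normalisation check:", np.trapz(P_c, c))
```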
In turn, the fully random aggregation model (Eq. 7) clearly overestimates the variance decay rate in the sine flow. This is again explained by the correlated nature of aggregation which is less efficient at homogenising concentration levels than a completely random addition. In other words, small concentration levels have a higher probability of coalescing with other small concentrations than with high concentrations, retarding the homogeneisation of the mixture. The asymptotic scalar decay rate is thus driven almost entirely by the evolution of the stretching statistics and solitary strip concentration levels, the aggregation being too correlated and inefficient to accelerate mixing. This is also why the fully correlated model, which is entirely described by the stretching pdf of solitary lamellae (Eq. (9)), accurately captures the variance decay rate (Fig. 17). Concerning the baker map, the conclusions are slightly different due to the deterministic nature of the process. First, flow heterogeneity is much higher, and \(k_{n}<\xi\) for all flow with \(D_{1}<1.9\), i.e. for \(a<0.2\). For such heterogeneous flows, the whole concentration statistics are governed by the regions where \(n\sim 1\) for which the asymptotic theory presented above is not expected to hold. Again, in these weakly aggregated regions, the solitary strip model is accurate. Interestingly, when the flow tends to the uniform case \(a\to 0.5\), the baker map yields scalar decay rates of \(2\log 2\), larger than the rate of increase of material lines and the fully random scenario (\(\log 2\)). This acceleration of mixing is a consequence of the determinism of the baker map, and is well captured by the fully and partially correlated scenarii. Note that our baker map simulations do not show the super-exponential decay of scalar fluctuations classically observed for the uniform stretching rate at \(a=0.5\). In fact, the reconstruction of the scalar field by a summation of lamellar concentrations on a fixed grid (Eq. (23)) impedes the apparition of the super-exponential mode. As \(a\to 0.5\), all lamella are subjected to similar stretching rates around \(\log 2\), thus yielding a scalar variance decaying as \(2\log 2\). ### Effective aggregation number The effect of correlated aggregation may be viewed as leading to an effective number of random aggregations, smaller than the actual number of aggregations. Assuming a random aggregation process, the distribution of concentrations (Fig. 15) would be fitted to a gamma distribution (Villermaux & Duplat, 2003\(\alpha\); Villermaux, 2019). The resulting shape parameter \(k_{\rm eff}\) may then be interpreted as an effective random aggregation number. From the variance of concentration (25), we have \[k_{\rm eff}=\frac{\mu_{c}^{2}}{\sigma_{c}^{2}}\sim\exp(\gamma_{2,c}\,t)=k^{ \min(k_{n},\xi)}, \tag{26}\] where \(k=\langle n\rangle\) is the actual mean number of aggregations of the material line. The effective mean number of independent and random aggregation events \(k_{\rm eff}\) is thus equal to the total number of aggregation events \(k\) raised to the exponent \(\min(k_{n},\xi)\). In general, for random flows, \(\xi<1\) so that there is less independent aggregation than the mean. For instance, in random sine flows where the stretching heterogeneity may be tuned to reach fractal dimensions between 1.65 and 1.95 (Fig. 5), the correlation exponent varies between 0.45 and 0.6 (Fig. 14). 
Thus, the effective aggregation rate is about half of the total aggregation rate in random sine flows.

## 5 Conclusions

Scalar mixing in heterogeneous flows results from the interaction of fluid stretching, which creates elongated lamellar structures, and fluid compression, which leads to their aggregation and coalescence at the Batchelor scale. Classically, the aggregation process has been assumed to obey fully random addition rules. In contrast, we show here that such a process can be highly correlated, leading to the aggregation of lamellae of similar elongations. This correlated aggregation process significantly reduces the flow mixing efficiency compared to a random hypothesis, maintaining it close to the mixing efficiency for solitary lamellae and explaining the observed monotonic exponential decay of scalar variance before and after coalescence time (Fereday _et al._ 2002). Using two-dimensional chaotic flows as a reference, we measured the aggregation rate of exponentially stretched material lines across a broad range of chaotic flow regimes. We showed that the most elongated lamellae are also the most aggregated ones, due to the fact that larger compression rates attract a larger flow region. The link between elongation and compression, induced by incompressibility, hence generates a direct correlation between elongation and aggregation. The heterogeneity in stretching rates therefore controls the heterogeneity of the number of lamellae in bundles. We showed that the statistics of aggregated lamella numbers can be predicted from the fractal dimensions of the elongated material line. We then derived a general theoretical framework that captures the effect of correlated aggregation, where lamellae of similar stretching aggregate preferentially, and predicts the pdfs of aggregated scalar levels. In this new framework, correlated aggregation is uniquely characterised by a single correlation exponent \(\xi\), which provides a measure of the effective number of random aggregation events. In that sense, correlated aggregation delays the route to uniformity compared to a fully random hypothesis, although it does not alter the fundamental nature of the aggregation process (Villermaux & Duplat, 2003_a_). Figure 17: Decay exponent \(\gamma_{2}\) of the variance of aggregated scalar levels with time, as a function of fractal dimension \(D_{1}\) for a) sine flow and b) baker map. Dots stand for numerical simulations and lines stand for theoretical predictions for isolated lamellae (Eq. (B.4) and exponents of \(\langle\rho^{-1}\rangle_{0}\) in Tables 1 and 2), fully correlated aggregation (Eq. (9) and exponents of \(\langle\rho^{-2}\rangle_{0}\) in Tables 1 and 2), fully random aggregation (Eq. (7) and exponents of \(1/\langle\rho\rangle_{0}\) in Tables 1 and 2) and correlated aggregation (Eq. (16)). Our results apply for two-dimensional fully chaotic flows in the Batchelor regime, that is, for smooth velocity fields below the integral scale. These flow fields are representative of a large class of flows, including notably porous media flows (Heyman _et al._, 2020; Souzy _et al._, 2020). It is probable that different aggregation rules arise in rough flows or above the integral scale. Indeed, scalar mixing in rough turbulent flows has already been shown to be well captured by a fully random aggregation scenario (Duplat & Villermaux, 2008_b_). A remaining open question is thus to uncover the potential mechanisms leading to a loss of correlations from small diffusive scales to large dispersive scales.
It should also be possible to extend the correlated aggregation theory to three-dimensional flows in the Batchelor regime. One-dimensional lamellar structures transform into thin two-dimensional sheets (Martinez-Ruiz _et al._, 2018) which also aggregate in the direction of their highest gradient (the direction of compression). A similar formalism should thus apply and could be the object of future work.
2309.06444
Connecting Everyday Objects with the Metaverse: A Unified Recognition Framework
The recent Facebook rebranding to Meta has drawn renewed attention to the metaverse. Technology giants, amongst others, are increasingly embracing the vision and opportunities of a hybrid social experience that mixes physical and virtual interactions. As the metaverse gains in traction, it is expected that everyday objects may soon connect more closely with virtual elements. However, discovering this "hidden" virtual world will be a crucial first step to interacting with it in this new augmented world. In this paper, we address the problem of connecting physical objects with their virtual counterparts, especially through connections built upon visual markers. We propose a unified recognition framework that guides approaches to the metaverse access points. We illustrate the use of the framework through experimental studies under different conditions, in which an interactive and visually attractive decoration pattern, an Artcode, is used as the approach to enable the connection. This paper will be of interest to, amongst others, researchers working in Interaction Design or Augmented Reality who are seeking techniques or guidelines for augmenting physical objects in an unobtrusive, complementary manner.
Liming Xu, Dave Towey, Andrew P. French, Steve Benford
2023-09-11T21:20:06Z
http://arxiv.org/abs/2309.06444v1
# Connecting Everyday Objects with the Metaverse: A Unified Recognition Framework ###### Abstract The recent Facebook rebranding to Meta has drawn renewed attention to the metaverse. Technology giants, amongst others, are increasingly embracing the vision and opportunities of a hybrid social experience that mixes physical and virtual interactions. As the metaverse gains in traction, it is expected that everyday objects may soon connect more closely with virtual elements. However, discovering this "hidden" virtual world will be a crucial first step to interacting with it in this new augmented world. In this paper, we address the problem of connecting physical objects with their virtual counterparts, especially through connections built upon visual markers. We propose a unified recognition framework that guides approaches to the metaverse access points. We illustrate the use of the framework through experimental studies under different conditions, in which an interactive and visually attractive decoration pattern, an Artcode, is used as the approach to enable the connection. This paper will be of interest to, amongst others, researchers working in Interaction Design or Augmented Reality who are seeking techniques or guidelines for augmenting physical objects in an unobtrusive, complementary manner. Artcode, augmented reality, interaction, metaverse, visual marker ## I Introduction Attending events virtually has become a normalized part of our everyday life, due partly to the COVID-19 pandemic [1]. Increasingly, events are held online, or support attendance through avatars, on platforms such as Zoom, and Gather Town. This form of virtual engagement may well continue beyond COVID-19. Moreover, Facebook's recent rebranding to Meta and Microsoft's announcement of launching into the metaverse strengthen the likelihood of this being part of our new normal [2]. It is therefore reasonable to expect that our future will include a physical world even more augmented by a wide variety of virtual worlds. These virtual worlds may require unobtrusive and easy-to-use access points to a massive integrated network of virtual worlds or metaverse. Attainment of a fully-realized, immersive metaverse will require efforts and advances in multiple areas, including computer graphics, display hardware, and communication networks [3]. In this paper, we address the issue of connections between the physical and the virtual worlds, proposing a conceptual framework for recognizing access points that may be hidden or camouflaged visual markers. The term "metaverse" was coined in 1992 by Neal Stephenson in his science-fiction novel _Snow Crash_[4], depicting a 3D virtual world where people can interact with each other, and with intelligent agents, through their avatars [5]. 30 years later, and the development of metaverse is arguably still in its infancy, still with no generally accepted definition [5, 6, 7]. The development framework of the metaverse, and its characteristics, have been studied in the literature. Benford [5], for example, listed five metaverse properties: a virtual world; a virtual reality; persistence; connection to the real world; and other people. In contrast to the industrial seven-layer metaverse value chain described by Radoff [8], Duan et al. [7] proposed a three-layer metaverse development architecture, representing the physical world, interaction, and the virtual world. 
In spite of the lack of consensus on definition, there does appear to be general agreement that three basic metaverse properties are: (i) a physical world; (ii) a virtual world; and (iii) the connection between these two worlds. Although various devices have been designed for accessing virtual elements or virtual worlds, a map showing the presence of access points to these virtual worlds would guide the connection (and potentially enhance the experience). If this could be provided in an explicit and straightforward manner, for example, through an annotation indicating the presence of such entrances to virtual worlds, then even better! In contexts requiring aesthetic-awareness, such at art galleries, implicit markers integrated into a part of the environment (such as in the surface pattern of an object) may be more appealing. In other environments, like in a corridor or hallway, both implicit and explicit visual markers may be acceptable. In this paper, we report on the use of such surface visual markers for connecting everyday objects with digital materials -- such as digital footprints, a virtual world, or a metaverse. We propose a unified recognition framework (URF) for bridging the physical and virtual worlds through visual decorations. The main contributions of this paper are threefold, summarised as follows: * We report on the use of visual markers as clues to prompt interaction with virtual worlds. * We generalize a URF for identifying the presence of access points in public spaces. * We report on experimental studies conducted using one type of visual marker (Artcodes [9, 10]), illustrating how the proposed URF works. The rest of this paper is organized as follows. Section II briefly reviews the related work on visual markers in augmented (AR) and virtual reality (VR). Section III introduces the URF and the preliminaries pertaining to this work. Section IV describes experimental studies evaluating the use of Artcodes as access points to virtual elements. Section V includes discussion of the implications of this study. Finally, Section VI concludes this paper and describes future work. ## II Related work on visual markers A variety of visual markers (see examples in Figure 1), both human-readable and not, have been proposed [9, 11], with two of the most well-known being barcodes [12] (Figure 0(a)) and QR codes (Quick Response codes, Figure 0(b)) [13]. The barcode was among the earliest methods of representing data in a visual, machine-readable form, initially patented in 1952 [9]. While barcodes mainly appear in the retail sector, QR codes have become a ubiquitous feature [9]. Barcodes and QR codes were designed to be reliably read by machines, with no error occurring when they are scanned. However, this reliability comes at a cost of limited aesthetics: Neither are visually meaningful to humans, and it can be difficult to distinguish different codes though visual inspection alone. Many other visual marker systems have similar characteristics to barcodes and QR codes, often with their information being encoded within a matrix of black and white dots, and usually with some form of error detection and correction mechanisms. Examples of such marker systems include the Data Matrix [14] (Figure 0(c)) and the Rohs visual code [15] (Figure 0(d)). 
While these visual markers are effective for encoding data, they were not intended for camera pose estimation and calibration, and are thus not appropriate for use as fiducials in AR systems -- a fiducial is a type of marker mounted within an environment to enable estimation of the relative pose between the camera and object. Some example fiducial systems are: ARTag (Figure 0(e)) [16]; ARToolkit (Figure 0(f)) [17]; and reacTIVison (Figure 0(g)) [18]. ARTag markers employ a square border for marker localization, connectivity and perimeter analysis. They have a large library of patterns inside the border and use edge-detection approaches to achieve reliability [16]. ARToolkit markers consist of a thick square black border with a variety of patterns in the interior -- the black outline allows for marker localisation and _homography_1 calculation. The reacTIVision markers are automatically generated by fiducial recognition engines such as Amoeba and D-touch [19]: They have compact geometry and offer a limited space for users to adjust their aesthetic aspects [18]. Footnote 1: An isomorphism in projective spaces that is used to calibrate camera pose. The visual appearance of marker systems that rely on geometrical features for localization and encoding is strongly constrained. In the majority of cases, the shape (the geometry) of the markers is automatically generated, allowing little freedom of design. In contrast, another type of visual markers, such as D-touch and its variant Artcodes [9, 20], offer much more flexibility in geometrical form, both for the outline shape and the interior elements. D-touch encodes information through the topological structure of the markers -- the adjacency information of connected components, represented in a region adjacency tree [21]. This supports users' creation of their own readable markers that are both aesthetic and meaningful [11]. Artcode implements and extends the D-touch approach, refining their drawing rules, and introducing human-meaningful (but machine-irrelevant) embellishments and aesthetic style guidelines. The Artcode approach provides the creative freedom to produce visually appealing _and_ machine-readable markers (patterns) that are meaningful to humans, and that resemble free-form images. In addition to these visual marker technologies based on geometry or topology, conventional image recognition technologies have also been employed to relate information to a much wider variety of images. Blippar [22] and Google Lens [23], for example, make use of image recognition techniques to embed data into images. However, because these techniques Fig. 1: Visual marker examples. often use neural networks and vector matching for encoding and decoding information, it is challenging (or impossible) to explain and interpret how the system works to non-technical designers or users. More recently, new systems that use deep-generative networks to automatically generate markers have been proposed, including learnable visual markers [24], E2ETag [25] and DeepFormableTag [26]. ## III Unified Recognition Framework (URF) As AR and metaverse applications become more pervasive, we will live in a world with dispersed access points to connect with virtual elements. There will be an increasing number of entrances to these elements within our surrounding environment, through a variety of virtual markers, both visible and "hidden". Identifying the probable existence of these entrances will be the first step to triggering the follow-up interaction. 
Considering the many types of entrance that may co-exist, a unified recognition framework (URF) will be needed. In this section, we present such a conceptual URF for general visual marker presence recognition and identification. Given the number of extant visual markers, both in academia and in industry, and the high likelihood of many more systems emerging in the future, attempting to explicitly include all in this URF would be unrealistic. We therefore only include a selection of some typical markers to show the basic URF components. The left part of Figure 2 shows a common scene, an indoor area of a building with various visual markers (highlighted in the picture). Not all of the annotated objects are readable -- some are explicitly-placed readable Artcodes (in red boxes), while others (in blue boxes) are commonplace objects that could be enhanced as visual markers. As shown in Figure 2, the URF involves three stages: marker presence detection; marker identification; and marker decoding. The _detection_ stage involves detecting visual markers in the surrounding environment. Given the scenario in the left part of Figure 2), for example, this stage would detect the possible presence of visual markers using image processing and computer vision techniques, and would output a set of localized candidate visual markers. This output set is then passed to the _identification_ stage (the middle of Figure 2) to determine if they _are_ markers, and, if so, what class of markers they belong to (Artcodes, QR codes, Blippar images, etc.). A key component of the identification stage is a _multi-label classifier_ that accepts the candidate markers, and outputs their corresponding classes or labels. The final stage is the _decoding_, which includes a _decoder pool_ from within which the corresponding decoder identifies and decodes the embedded message in the visual marker. Once the data (codes) carried by the visual marker are identified, the connected visual information (labelled by the visual marker) can be triggered. In this URF, visual marker detection and identification are two independent stages, but in reality, these two things are often done together. Although the URF is a conceptual framework, describing the essential components and a feasible pipeline to bridge the physical and virtual worlds, the concrete implementation may differ from one scenario to another. A possible URF implementation may be an _all-in-one_ brokering system that recognizes the presence Fig. 2: A unified recognition framework (URF) for visual markers. of all (or most) of the visible or hidden visual markers, then calls the corresponding decoders or identifiers, and then steps into the embedded virtual worlds. The next section presents experimental studies examining discovery of the presence of visual markers using a concrete marker system, Artcode [9, 27]. ## IV Experimental studies The URF proposed in the last section includes the two primary elements: visual marker discovery and identification, with discovery of the markers being a _prerequisite_ to the follow-up identification. Moreover, providing hints and clues to the location of (camouflaged) access points to virtual worlds may encourage people to explore those connections, thus creating new interaction opportunities. Given the importance of visual marker discovery in the URF pipeline, we conducted two case studies into how digital clues can be provided to guide users with devices (such as AR headsets) to approach the object and enter the metaverse. 
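For concreteness, the three URF stages can be summarised in the following skeleton. This is a hypothetical sketch only: the detector, the multi-label classifier and the decoders are placeholders, since the framework is conceptual and does not prescribe specific implementations.

```python
# A minimal, hypothetical skeleton of the URF pipeline: detection -> identification -> decoding.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple
import numpy as np

@dataclass
class Candidate:
    bbox: Tuple[int, int, int, int]      # (x, y, width, height) of a proposed marker region
    patch: np.ndarray                    # cropped image data

def detect_candidates(image: np.ndarray) -> List[Candidate]:
    """Stage 1 (placeholder): propose regions likely to contain a visual marker."""
    h, w = image.shape[:2]
    return [Candidate(bbox=(0, 0, w, h), patch=image)]   # trivially propose the whole frame

def identify(candidate: Candidate) -> str:
    """Stage 2 (placeholder): multi-label classification of the marker type."""
    return "artcode"

# Stage 3: decoder pool, one (placeholder) decoder per marker family
DECODERS: Dict[str, Callable[[np.ndarray], str]] = {
    "qr": lambda patch: "qr-payload",
    "artcode": lambda patch: "1:1:2:3:5",
}

def run_urf(image: np.ndarray) -> List[Tuple[str, str]]:
    results = []
    for cand in detect_candidates(image):
        label = identify(cand)
        if label in DECODERS:                            # unknown labels are ignored
            results.append((label, DECODERS[label](cand.patch)))
    return results

print(run_urf(np.zeros((480, 640))))                     # -> [('artcode', '1:1:2:3:5')]
```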
Artcodes, which are both meaningful to humans and readable by scanners, were selected as the marker system.

### _The Artcode approach_

Artcodes2 are human-designable topological visual markers, developed based on the D-touch system [11]. By incorporating additional drawing constraints and aesthetic embellishments, Artcodes enable more visually pleasing and interactive patterns than D-touch [9]. Figures 1(h) and 1(i) show examples of D-touch and Artcode markers. A valid Artcode consists of two parts: a recognizable foreground (the food image in Figure 1(i)); and some image-based background (the text in Figure 1(i)). The foreground is intended for reading by machines, but the background can be designed for human consumption. Artcodes can be beautiful, interactive motifs that can decorate the surface of everyday objects without impacting the aesthetics of the object in the way that QR codes would.

Footnote 2: [https://www.artcodes.co.uk/](https://www.artcodes.co.uk/)

Because of their unobtrusive properties, the presence of an Artcode is not usually obvious: close inspection may be needed to discover an Artcode when there are no visual clues. Detection of Artcodes through their general visual features, identifying their probable locations by means of a _heat map_, is therefore a meaningful approach. Given the space limitations of this article, interested readers are referred to the literature for more information about Artcodes, including their design, detection, and identification [9, 19, 20, 27, 28].

### _Experimental setting_

We conducted experiments to explore Artcode detection in an environment, and to deliver clues to guide the subsequent interaction. We assumed a realistic interaction scenario, in which users may wear or carry devices in a physical space, standing far away from the Artcodes: when they discover the presence of an Artcode, they can follow clues to approach the target for further interaction. Rather than fully simulating this scenario, we simplified it while maintaining its core characteristics: users gain increasing amounts of detail as they approach the target. Two studies were conducted, both involving five image sequences (Figures 3(a) and 4(a)) captured with a smartphone moving from far away to close proximity to an Artcode. The size of the Artcode gradually increases as the smartphone moves towards the target, from top to bottom in the leftmost column of the figures (Figures 3(a) and 4(a)). Recognition is more challenging from further away. Apart from this, the settings of the two studies differed as follows: the first study (Figure 3) used a simple Artcode design, good lighting, an uncluttered scene, and an unoccluded Artcode; the second study (Figure 4) involved a more difficult scenario, using a complex Artcode design, shaded lighting, a cluttered scene, and a partially occluded Artcode. Considering space limitations, and the focus of this paper, the technical details for building the Artcodes-detection machine-learning model are omitted. Similarly, the details underlying the various elements in Figures 3 and 4 (including generation of the proposals and presence maps) are also omitted. Interested readers are again referred to the literature for more information [9, 20, 27].

Fig. 3: Simple Artcode detection study in clean background, good lighting.

### _Results_

Figures 3 and 4 contain the content and results of the two studies.
The four columns in each figure, from left to right, are: (a) the input images; (b) the Artcode proposals, annotated with yellow rectangles; (c) the gray Artcode presence heat map; and (d) the fused image (created by combining the input image (a) with the heat map (c)). The red boxes indicate the ground-truth Artcodes. In addition to the presence detection results in Figures 3 and 4, Table I presents the decoding results (generated according to Artcode decoding procedures [9]). Ticks and crosses in the table indicate whether the given image was successfully decoded or not, with ticks ("\(\vee\)") indicating success and crosses ("\(\times\)") indicating failure.

It is clear that the detection proposals in both studies cover the actual marker areas -- the penguins in Figure 3, and the fish in Figure 4 -- in all image sequences, with dense accumulation of the proposal rectangles centering around the target markers. This is further evidenced in the presence maps (gray and fused), where the marker areas are distinctly visible as heat spots (the bright areas in the 3rd and 4th columns of Figures 3 and 4). The Artcode proposals in all five images of the first study center around the true Artcode areas, identified by the red boxes. In the second study, in contrast, although the Artcode proposals cover the true Artcode areas, there are multiple proposals that are not around the actual target, especially for the images that were captured from a greater distance (in the top three rows of Figure 4a). The cluttered scene in the second study affects the detection, increasing the number of false positives: many non-Artcode objects in this scene may look like Artcodes, with their generic visual features potentially causing the classifier to label them as Artcodes. However, although redundant heat spots were generated, the actual target Artcodes are also identified: Figures 4c and 4d show multiple detections (indicated by heat spots), but one of them does contain the actual target Artcode. Heat spots in the presence maps can alert the user to the possible existence of access points to the metaverse, encouraging the user to come closer for follow-up examination and identification.

According to the decoding results (Table I), the top two images in the first study (those captured from the furthest distance) could not be decoded, due to the low resolution and loss of detail. The three closer input images in the first study, however, were successfully identified and decoded, opening up the "hidden" virtual worlds. This represents a simplified but realistic interaction, where users often come closer to a target after first getting a general impression (the hint or clue). The more complicated environment in the second study, including a more sophisticated Artcode, poorer lighting, clutter, and occlusion (with a chopstick in the way), resulted in none of the five images being successfully decoded. This also represents a common, real-world situation, where the target image may be obscured from certain angles. In this case, the presence maps should motivate the user to get nearer, and to remove the obstruction or explore new viewing angles for better identification. The explorative interaction process allowed by the proposed URF would enable various designs (e.g., design for serendipity [29, 30]), and open up new interaction opportunities for connecting to the metaverse.
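As a rough illustration of how presence maps such as those in columns (c) and (d) could be produced from detection proposals, the sketch below accumulates proposal rectangles into a normalized heat map and blends it with the input image. This is only a schematic reconstruction under simple assumptions (uniform box weighting and linear blending); it is not the Artcode detection model used in the studies, whose details are given in the cited literature.

```python
import numpy as np

def presence_heatmap(image_shape, proposals, scores=None):
    """Accumulate proposal rectangles (x, y, w, h) into a grayscale presence map."""
    h, w = image_shape[:2]
    heat = np.zeros((h, w), dtype=np.float32)
    if scores is None:
        scores = [1.0] * len(proposals)
    for (x, y, bw, bh), s in zip(proposals, scores):
        heat[y:y + bh, x:x + bw] += s                 # overlapping proposals build up heat spots
    if heat.max() > 0:
        heat /= heat.max()                            # normalize to [0, 1]
    return heat

def fuse(image, heat, alpha=0.6):
    """Blend the heat map with a grayscale image to highlight likely marker areas."""
    img = image.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    return (1 - alpha) * img + alpha * heat

# Toy usage: two overlapping proposals produce a single bright region in the fused image.
img = np.random.rand(120, 160)
boxes = [(40, 30, 60, 50), (55, 40, 50, 45)]
fused = fuse(img, presence_heatmap(img.shape, boxes, scores=[0.8, 0.9]))
```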
Fig. 4: Complex Artcode detection study in cluttered background, poor lighting.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Study / Image & 1st (top) & 2nd & 3rd & 4th & 5th (bottom) \\ \hline \hline 1st study (Figure 3) & \(\times\) & \(\times\) & \(\vee\) & \(\vee\) & \(\vee\) \\ \hline 2nd study (Figure 4) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ \hline \end{tabular} \end{table} TABLE I: Decoding results for the images in Figures 3a and 4a.

## V Discussion and implications

The two studies present a simplified and concrete implementation of the proposed framework, illustrating the key steps of detecting and identifying visual markers before decoding them and accessing the metaverse. Currently, implementing the proposed URF for all known visual markers may not be feasible -- partly due to the ever-expanding set of such markers, and the regular emergence of new interaction devices. However, this investigation using Artcodes as a representative marker provides evidence for the URF's applicability. This paper, and the URF generally, can also serve as guidance for the design of metaverse access points using visual markers (especially in an unobtrusive but explorative manner). The proposed framework also supports a mixed mode of interaction, combining physical movement and digital engagement in an augmented physical world with ubiquitous connection access points.

## VI Conclusion and future work

In this paper, we have explored the problem of connecting with virtual worlds (or the metaverse) in an augmented physical world. We have presented a unified recognition framework (URF) consisting of three components for designing and implementing an explorative access point. A concrete implementation of this URF using Artcodes as access points was used to illustrate the process. As an example of visual markers, Artcodes are both machine-readable and human-meaningful decorative patterns that represent the kind of access tool that will become increasingly commonplace in the future. The initial discovery of the presence of markers (indicated by a heat map) and the follow-up, closer inspection and detection were demonstrated by the two studies in the paper. The URF would enable the design of a kind of brokering system that can invoke appropriate recognition algorithms to deal with different types of access points, and may inspire interaction design in the metaverse age. While this study used smartphones and Artcodes, our future work will include the investigation of other AR devices and other visual markers.

## Acknowledgments

This work is supported by the Natural Science Foundation of China (project no. 61872167). The authors acknowledge the financial support from the Artificial Intelligence and Optimisation Research Group (AIOP), the Faculty of Science and Engineering (FoSE), the International Doctoral Innovation Centre, Ningbo Education Bureau, Ningbo Science and Technology Bureau, and the University of Nottingham.
2305.19869
Adaptive coding efficiency in recurrent cortical circuits via gain control
Sensory systems across all modalities and species exhibit adaptation to continuously changing input statistics. Individual neurons have been shown to modulate their response gains so as to maximize information transmission in different stimulus contexts. Experimental measurements have revealed additional, nuanced sensory adaptation effects including changes in response maxima and minima, tuning curve repulsion from the adapter stimulus, and stimulus-driven response decorrelation. Existing explanations of these phenomena rely on changes in inter-neuronal synaptic efficacy, which, while more flexible, are unlikely to operate as rapidly or reversibly as single neuron gain modulations. Using published V1 population adaptation data, we show that propagation of single neuron gain changes in a recurrent network is sufficient to capture the entire set of observed adaptation effects. We propose a novel adaptive efficient coding objective with which single neuron gains are modulated, maximizing the fidelity of the stimulus representation while minimizing overall activity in the network. From this objective, we analytically derive a set of gains that optimize the trade-off between preserving information about the stimulus and conserving metabolic resources. Our model generalizes well-established concepts of single neuron adaptive gain control to recurrent populations, and parsimoniously explains experimental adaptation data.
Lyndon R. Duong, Colin Bredenberg, David J. Heeger, Eero P. Simoncelli
2023-05-31T14:06:01Z
http://arxiv.org/abs/2305.19869v1
# Adaptive Coding Efficiency in Recurrent Cortical Circuits via Gain Control

###### Abstract

Sensory systems across all modalities and species exhibit adaptation to continuously changing input statistics. Individual neurons have been shown to modulate their response gains so as to maximize information transmission in different stimulus contexts. Experimental measurements have revealed additional, nuanced sensory adaptation effects including changes in response maxima and minima, tuning curve repulsion from the adapter stimulus, and stimulus-driven response decorrelation. Existing explanations of these phenomena rely on changes in inter-neuronal synaptic efficacy, which, while more flexible, are unlikely to operate as rapidly or reversibly as single neuron gain modulations. Using published V1 population adaptation data, we show that propagation of single neuron gain changes in a recurrent network is sufficient to capture the entire set of observed adaptation effects. We propose a novel adaptive efficient coding objective with which single neuron gains are modulated, maximizing the fidelity of the stimulus representation while minimizing overall activity in the network. From this objective, we analytically derive a set of gains that optimize the trade-off between preserving information about the stimulus and conserving metabolic resources. Our model generalizes well-established concepts of single neuron adaptive gain control to recurrent populations, and parsimoniously explains experimental adaptation data.

## 1 Introduction

Some of the earliest neurophysiological recordings showed that repeated or prolonged stimulus presentation leads to a relative decrease in neural responses (Adrian and Zotterman, 1926). Indeed, neurons across different species, brain areas, and sensory modalities adjust their gains (i.e. input-output sensitivity) in response to recent stimulus history (Kohn, 2007; Weber et al., 2019, for reviews). Gain control provides a mechanism for single neurons to rapidly and reversibly adapt to different stimulus contexts (Abbott et al., 1997; Brenner et al., 2000; Fairhall et al., 2001; Muller et al., 1999; Mlynarski and Hermundstad, 2021) while preserving synaptic weights that serve to represent features that remain consistent across contexts (Ganguli and Simoncelli, 2014). From a normative standpoint, this allows a single neuron to adjust the dynamic range of its responses to accommodate changes in input statistics (Fairhall et al., 2001; Laughlin, 1981) - a core tenet of theories of efficient sensory coding (Attneave, 1954; Barlow, 1961). Experimental measurements, however, reveal that adaptation induces additional complex changes in neural responses, including tuning-dependent reductions in both response maxima and minima (Movshon and Lennie, 1979), tuning curve repulsion (Hershenhoren et al., 2014; Shen et al., 2015; Yaron et al., 2012), and stimulus-driven decorrelation (Benucci et al., 2013; Gutnisky and Dragoi, 2008; Muller et al., 1999; Wanner and Friedrich, 2020). Although coding efficiency and gain-mediated adaptation are well studied in single neurons, these nuanced empirical observations appear to require a more complex adaptation mechanism, involving _joint_ coordination among neurons in the population. Indeed, to explain these phenomena, previous studies have relied on adaptive changes in feedforward or recurrent synaptic efficacy (i.e.
by changing the entire network's set of synaptic weights; Mlynarski and Hermundstad, 2021; Rast and Drugowitsch, 2020; Wainwright et al., 2001; Westrick et al., 2016). However, this requires synaptic weights to continuously remap under different statistical contexts, which may change significantly and transiently at short time scales. Here, we hypothesize that adaptation effects reported in neural population recording data can be explained by combining normative theory with a mechanistic recurrent population model that includes single neuron gain modulation. The primary contributions of our study are as follows:

1. We introduce an analytically tractable recurrent neural network (RNN) architecture for adaptive gain control, in which single neurons adjust their gains in response to novel stimulus statistics. The model respects experimental evidence that cortical anatomy is dominated by recurrence (Douglas and Martin, 2007), allowing the effects of single neuron gain changes to propagate through lateral connections.

2. We propose a novel _adaptive efficient coding_ objective for adjustment of the single neuron gains, which optimizes coding fidelity of the stimulus ensemble, subject to metabolic and homeostatic constraints.

3. Through numerical simulations, we compare model predictions to experimental measurements of cat V1 neurons responding to a sequence of gratings drawn from an ensemble with either uniform or biased orientation probability (Benucci et al., 2013). We show that adaptive adjustment of neural gains, with no changes in synaptic strengths, parsimoniously captures the full set of adaptation phenomena observed in the data.

Figure 1: Recurrent adaptation model. **A)** A population of recurrently-connected orientation-tuned cells receives external feedforward drive (purple arrows) from a presented oriented grating stimulus, randomly sampled from a set of possible orientations. The width of the arrow denotes the strength of the drive, and indicates that the center neuron is tuned towards the horizontal-oriented stimuli. The feedforward drive of each neuron is multiplicatively modulated by its scalar gain (orange dials). Lateral recurrent input between neurons is denoted by green arrows. Recurrent connectivity is all-to-all, with synaptic strengths determined by the distance between neurons' preferred feedforward orientation. Output responses (red) of each neuron are a function of both feedforward drive and recurrent drive. **B)** Response tuning curves for orientation-tuned units to stimuli presented with uniform probability (left column), or biased probability (right column). Middle row shows recordings of neurons in visual area V1 of cats, aggregated over 11 sessions. Bottom row shows model responses. Shaded regions are standard error of the mean (SEM).

## 2 Related Work

Models of statistical adaptation in neural populations. While evidence for adaptive efficient coding via gain modulation in single neurons is relatively well understood (Fairhall et al., 2001; Mlynarski and Hermundstad, 2021; Nagel and Doupe, 2006), the question of whether neural _population_ adaptation can be explained by efficient coding and gain modulation remains underexplored. Normative models of population adaptation have generally relied on synaptic plasticity (i.e. between-neuron synaptic weight adjustments) as the mechanism mediating adaptation (Lipshutz et al., 2023;
For example, Westrick et al. (2016) argue that empirical observations of V1 neural populations (Benucci et al., 2013) can be explained by adapting normalization weights (parameterized by all-to-all synaptic connections) to different stimulus statistical contexts. The major downside of this approach is that changes in synaptic weights require \(\mathcal{O}(N^{2})\) adaptation parameters, for a population of size \(N\). Here, we examine the effects of classical single-neuron adaptive gain modulation on responses of a recurrently-connected population, and demonstrate that these are sufficient to explain adaptation phenomena, while requiring only \(\mathcal{O}(N)\) adaptation parameters. Holding the synaptic weights fixed prevents overfitting, and allows the network to remain stable across input contexts. Network stability is also relevant for contemporary machine learning applications that rely on adaptive adjustments to changing input statistics (e.g. Balle et al., 2020; Hu et al., 2022; Mohan et al., 2021). The adaptation model most similar to ours, developed by Gutierrez and Deneve (2019), proposes an adaptive recurrent spiking neural network whose dynamics are derived from an efficient coding objective. Our model is complementary to this, but is simpler and more tractable, providing an analytic solution for population steady-state responses that facilitates comparisons to experimental data. Finally, recent work (published while this manuscript was being written) uses gain control as a normative population adaptation mechanism, but with the central goal of statistically whitening neural responses, while ignoring the means of responses (i.e. redundancy reduction via decorrelation and variance equalization; Duong et al., 2023). Here, we demonstrate that our model captures adaptive effects involving mean responses as well as population response redundancy reduction, but that its steady-state responses are not whitened. We show that these deviations from whitening are similar to those seen in the neural recordings analyzed here. Recurrent circuitry in sensory cortex.It is well known that recurrent excitation dominates cortical circuits (Douglas and Martin, 2007). In early sensory areas, a series of optogenetic inactivation experiments showed that recurrent excitation in cortex serves to progressively amplify thalamic inputs (Lien and Scanziani, 2013; Reinhold et al., 2015). In the context of sensory adaptation, King et al. (2016) performed silencing experiments in mice to show that the majority of adaptation effects seen in V1 arise from _local_ activity-dependent processes, rather than being inherited from depressed thalamic responses upstream. Similarly, in monkey V1 neurophysiological recordings, Westerberg et al. (2019) used current source density analyses to show that stimulus-driven adaptation is primarily due to recurrent intracortical effects rather than feedforward effects. We leverage these functional observations, along with anatomical measurements of intracortical synaptic connectivity (Ko et al., 2011; Lee et al., 2016; Rossi et al., 2020) to inform the recurrent architecture used in our study. ## 3 An Analytically Tractable RNN with Gain Modulation ### Notation We denote matrices with capital boldface letters (e.g. \(\mathbf{W}\)), vectors as lowercase boldface letters (e.g. \(\mathbf{r}\)), and scalar quantities as non-boldface letters (e.g. \(N,\alpha\)). 
The \(\mathrm{diag}(\cdot)\) operator forms a diagonal matrix by embedding the elements of a \(K\)-dimensional vector onto the main diagonal of a \(K\times K\) matrix whose off-diagonal elements are zero. \(\circ\) is the Hadamard (i.e. element-wise) product. \(\mathbb{S}_{+}^{N}\) is the space of \(N\times N\) symmetric positive definite matrices. ### Adaptive gain modulation in a population without recurrence We first consider the steady-state response of \(N\) neurons, \(\mathbf{r}_{\mathrm{f}}\in\mathbb{R}^{N}\), receiving sensory stimulus inputs \(\mathbf{s}\in\mathbb{R}^{M}\), with feedforward drive, \(\mathbf{f}(\mathbf{s})=\left[f_{1}(\mathbf{s}),f_{2}(\mathbf{s}),\ldots,f_{N} (\mathbf{s})\right]^{\top}\), which are each multiplicatively scaled by gains, \(\mathbf{g}=\left[g_{1},g_{2},\ldots,g_{N}\right]^{\top}\): \[\mathbf{r}_{\mathrm{f}}(\mathbf{s},\mathbf{g})=\mathbf{g}\circ\mathbf{f}( \mathbf{s}). \tag{1}\] The gain vector \(\mathbf{g}\) has the effect of adjusting the amplitudes of responses \(\mathbf{f}(\mathbf{s})\), and therefore the dynamic range of each neuron. As we demonstrate in Section 6, these simple multiplicative gain scalings are incapable of shifting the peaks of tuning curves, as seen in physiological data (Movshon and Lennie, 1979; Muller et al., 1999; Saul and Cynader, 1989). Previous approaches modeling neural population adaptation in cortex modify the structure of \(\mathbf{f}(\mathbf{s})\) in response to changes in input statistics (e.g. Wainwright et al., 2001; Westrick et al., 2016). Here, we propose a fundamentally different approach, requiring _no_ changes in synaptic weighting between neurons. ### Gain modulation in a recurrent neural population We show that by incorporating single neuron gain modulation into a recurrent network, adaptive effects in each neuron propagate laterally to affect other cells in the population. Consider a model of \(N\)_recurrently_ connected neurons with fixed feedforward and recurrent weights (Fig. 1A), presumed to have been learned over timescales much longer than the adaptive timescales examined in this study. We assume that the population of neural responses \(\mathbf{r}\in\mathbb{R}^{N}\), driven by input stimuli \(\mathbf{s}\in\mathbb{R}^{M}\) presented with probability \(p(\mathbf{s})\), are governed by linear dynamics: \[\frac{d\mathbf{r}(\mathbf{s},\mathbf{g})}{dt}=-\mathbf{r}+\mathbf{g}\circ \mathbf{f}(\mathbf{s})+\mathbf{W}\mathbf{r}, \tag{2}\] where \(\mathbf{W}\in\mathbb{R}^{N\times N}\) is a matrix of recurrent synaptic connection weights; and neuronal gains, \(\mathbf{g}\in\mathbb{R}^{N}\), are adaptively optimized to a given \(p(\mathbf{s})\). Both the feedforward functions \(f_{i}(\mathbf{s})\) and recurrent weights \(\mathbf{W}\) are assumed to be fixed despite varying stimulus contexts (i.e. _non-adaptive_). For notational convenience, we omit explicit time-dependence of the responses and stimuli (i.e. \(\mathbf{r}(\mathbf{s},\mathbf{g},t),\mathbf{s}(t)\)). Empirical studies typically consider neural activity at steady-state before and after adapting to changes in stimulus statistics (Clifford et al., 2007). We therefore analyze the responses of our network at steady-state, \(\mathbf{r}_{*}(\mathbf{s},\mathbf{g})\), to facilitate comparison with data. The network dynamics of Equation 2 are linear in \(\mathbf{r}\), and computing its steady-state is analytically tractable. Setting Eq. 
2 to zero and isolating \(\mathbf{r}\) (with the mild assumptions on invertibility; see Appendix A), yields the steady-state solution, \[\mathbf{r}_{*}(\mathbf{s},\mathbf{g})=\left[\mathbf{I}-\mathbf{W}\right]^{-1} \left(\mathbf{g}\circ\mathbf{f}(\mathbf{s})\right). \tag{3}\] We can interpret these equilibrium responses as a modification of the gain-modulated feedforward drive, \(\mathbf{g}\circ\mathbf{f}(\mathbf{s})\), which is propagated to other cells in the network via recurrent interactions, \(\left[\mathbf{I}-\mathbf{W}\right]^{-1}\). When \(\mathbf{W}\) is the zeros matrix (i.e. no recurrence), Equation 3 reduces to Equation 1, and adjusting neuronal gains simply rescales the feedforward responses without affecting the shape of response curves. The presence of the recurrent weight matrix \(\mathbf{W}\) allows changes in neuronal gains to alter the effective tuning of other neurons in the network _without_ changes to any synaptic weights. ### Structure of recurrent connectivity matrix \(\mathbf{W}\) Importantly, in our recurrent network, there are no explicit excitatory and inhibitory neurons - the recurrent activity term (last term in Eq. 2) represents the _net_ lateral input to a neuron (i.e. the combination of both excitatory and inhibitory inputs). In addition, model simulations in this study use a \(\mathbf{W}\) that is translation invariant (i.e. convolutional) in preferred orientation space, with strong net recurrent excitation near the preferred orientation of the cell, and relatively weak net excitation far away. This structure is motivated by functional and anatomical measurements in V1, indicating that orientation-tuned cells receive excitatory and inhibitory presynaptic inputs from cells tuned to every orientation, with disproportionate excitatory bias from similarly-tuned neurons (Lee et al., 2016; Rossi et al., 2020; Rubin et al., 2015). We elaborate on specific choices of \(\mathbf{W}\) in Appendix A. ## 4 A Novel Objective for Adaptive Efficient Coding via Gain Modulation Theories of efficient coding postulate that sensory neurons optimally encode the statistics of the natural environment (Barlow, 1961; Laughlin, 1981), subject to constraints on finite metabolic resources (e.g. energy expenditure from firing spikes; Ganguli and Simoncelli, 2014; Olshausen and Field, 1996). However, sensory input statistics vary with context, and the means by which a neural population might confer an _adaptive and dynamic_ efficient code remains an open question (Barlow and Foldiak, 1989; Duong et al., 2023; Gutierrez and Denve, 2019; Mlynarski and Hermundstad, 2021). How should our network (Equation 3) adaptively modulate its gains, \(\mathbf{g}\), according to the statistics of a novel stimulus ensemble? We assume an initial stimulus ensemble, with probability density \(p_{0}(\mathbf{s})\) (Fig. 1B), with a corresponding set of optimal gains, \(\mathbf{g}_{0}\), toward which adaptive gains are homeostatically driven; and an optimal linear decoder, \(\mathbf{D}\in\mathbb{R}^{N\times M}\). \(\mathbf{D}\) is fixed and set to the pseudoinverse of \(\mathbf{r}_{*}(\mathbf{g},\mathbf{s})\) under the initial stimulus ensemble (see Appendix C). 
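Before introducing the adaptation objective, the following small numerical sketch illustrates Equations 2-3: the closed-form steady state is computed for a gain-modulated recurrent population and checked against direct integration of the dynamics. The tuning widths, network size, and the normalization of \(\mathbf{W}\) are illustrative assumptions (the paper's actual parameters appear later, in Section 6.1 and Appendix A).

```python
import numpy as np

def circ_gauss(delta_deg, fwhm_deg):
    """Gaussian profile on orientation differences (180-degree periodic), given a FWHM."""
    sigma = fwhm_deg / (2 * np.sqrt(2 * np.log(2)))
    wrapped = np.rad2deg(np.angle(np.exp(1j * np.deg2rad(2 * delta_deg)))) / 2
    return np.exp(-wrapped**2 / (2 * sigma**2))

N, K = 64, 128                                    # small population and stimulus set
prefs = np.linspace(0, 180, N, endpoint=False)    # preferred orientations
stims = np.linspace(0, 180, K, endpoint=False)

# Feedforward drive f(s): broad Gaussian tuning; W: narrow Gaussian plus weak untuned excitation.
F = circ_gauss(prefs[:, None] - stims[None, :], fwhm_deg=30.0)             # N x K
W = 0.02 * circ_gauss(prefs[:, None] - prefs[None, :], fwhm_deg=10.0) + 0.002
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()     # keep spectral radius below 1 (stable dynamics)

g = np.ones(N)                                    # unadapted gains
R_star = np.linalg.solve(np.eye(N) - W, g[:, None] * F)   # Eq. 3 for every stimulus at once

# Sanity check: Euler integration of Eq. 2 converges to the closed-form steady state.
r, dt, s_idx = np.zeros(N), 0.1, 40
for _ in range(2000):
    r = r + dt * (-r + g * F[:, s_idx] + W @ r)
np.testing.assert_allclose(r, R_star[:, s_idx], rtol=1e-3, atol=1e-6)
```

Because the recurrent term couples neurons, changing a single entry of \(\mathbf{g}\) in this sketch alters the steady-state responses of all other units -- the propagation effect exploited in the remainder of the paper.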
Given a novel stimulus ensemble with probability density \(p(\mathbf{s})\), we propose an adaptive efficient coding objective that neurons minimize by adjusting their gains, \[\mathcal{L}(\mathbf{g},p(\mathbf{s}))=\mathbb{E}_{\mathbf{s}\sim p(\mathbf{s})} \left\{\|\mathbf{s}-\mathbf{D}^{\top}\mathbf{r}_{*}(\mathbf{s},\mathbf{g})\|_{ 2}^{2}+\alpha\|\mathbf{r}_{*}(\mathbf{s},\mathbf{g})\|_{2}^{2}\right\}+\gamma \parallel\mathbf{g}-\mathbf{g}_{0}\parallel_{2}^{2}, \tag{4}\] where \(\alpha\) and \(\gamma\) are scalar hyperparameters. Intuitively, as the stimulus ensemble changes \(p_{0}(\mathbf{s})\to p(\mathbf{s})\), the gains \(\mathbf{g}\) are adaptively adjusted to maximize the fidelity of the representation (first term), while minimizing overall activity in the network (second term), and minimally deviating from the initial gain state (third term). The gain homeostasis term serves to prevent catastrophic forgetting in the network under different stimulus contexts (Kirkpatrick et al., 2017): minimizing the gains' deviation from their optimal state under \(p_{0}(\mathbf{s})\) allows the system to stably maintain reasonable performance on previously presented data and prevents the system from radically reorganizing itself on a fast time scale. In Appendix B, we show that adapting to \(p(\mathbf{s})\) with gain homeostasis allows the network to maintain improved stimulus representation error under the \(p_{0}(\mathbf{s})\) ensemble relative to a network optimized without gain homeostasis. We also perform ablations to show that the three terms in the objective are _jointly_ necessary to produce the adaptation effects observed in data. ### Objective optimization The objective given in Equation 4 is bi-convex in \(\mathbf{g}\) and \(\mathbf{D}\), and we can _analytically_ solve for either variable independently or in alternation (i.e., coordinate descent via alternating least squares). See Appendix C for the complete derivation. We initialize the network under the uniform stimulus density \(p_{0}(\mathbf{s})\) to obtain a homeostatic gain target, \(\mathbf{g}_{0}\), and a fixed decoder, \(\mathbf{D}\). ## 5 V1 Neural Population Adaptation Data Reanalysis In the following section, we compare our simulated adaptation model responses to reanalyzed neural population recordings from cat primary visual cortex (data obtained with permission from Benucci et al., 2013). Here, we provide an overview of our data analysis procedure which we also apply to our simulated model responses. Some of our analysis plots are new and are not in the original study1. For details on the recordings and preprocessing, we refer the reader to the original paper. Footnote 1: Additionally, our plots are derived from steady-state fitted response curves, whereas the original publication used temporal information. In the experiment, oriented stimuli were briefly presented randomly in rapid succession, with presentation probability determined by one of two contextual distributions: a uniform distribution \(p_{0}(\mathbf{s})\), or a _biased_ distribution, in which one orientation was presented significantly more frequently than the others, \(p(\mathbf{s})\) (Figure 1B, top row). Figure 1B (middle row) shows responses for \(N=13\) units, aggregated over 11 recording sessions. For \(N\) units and \(K\) distinct stimuli, the authors fit orientation tuning curves to neural responses to produce matrices of orientation tuning curves, \(\mathbf{R}\in\mathbb{R}^{N\times K}\) for each of the uniform and biased stimulus ensembles. 
We normalize each unit's response curves under both contexts according to its minimum and maximum response during the \(p_{0}(\mathbf{s})\) context, such that all responses lie in the interval [0, 1] for \(p_{0}(\mathbf{s})\). That is, zero is the minimum stimulus-evoked response under the uniform ensemble, and one is the maximum. For responses to the biased ensemble, \(p(\mathbf{s})\), a minimum response less than 0 indicates that the evoked response after adaptation has decreased relative to the uniform ensemble; similarly, a maximum response less than 1 indicates the response maximum after adaptation has decreased relative to that of the uniform ensemble (Figure 1B). We compute response means, \(\boldsymbol{\mu}\in\mathbb{R}^{N}\), and signal (as opposed to noise) covariance matrices, \(\boldsymbol{\Sigma}\in\mathbb{S}_{+}^{N}\), \[\boldsymbol{\mu}=\mathbb{E}[\mathbf{R}],\qquad\boldsymbol{\Sigma}=\mathbb{E}[ \mathbf{R}\mathbf{R}^{\top}]-\boldsymbol{\mu}\boldsymbol{\mu}^{\top}, \tag{5}\] where the expectation is over \(p_{0}(\mathbf{s})\) or \(p(\mathbf{s})\). To facilitate comparisons between response covariances under the uniform and biased stimulus ensembles, we scale response covariance matrices by the variances of the neurons under the uniform stimulus probability condition, \(\boldsymbol{\sigma}_{0}^{2}\in\mathbb{R}_{+}^{N}\), \[\hat{\boldsymbol{\Sigma}}=\operatorname{diag}\left(\boldsymbol{\sigma}_{0} \right)^{-1}\boldsymbol{\Sigma}\operatorname{diag}\left(\boldsymbol{\sigma}_{0} \right)^{-1}. \tag{6}\] ## 6 Numerical Simulations and Comparisons to Neural Data We compare numerical simulations of our normative adaptation model with reanalyzed cat V1 population recording data (Benucci et al., 2013). ### Model and simulation parameters For all simulation results and figures in this study, we consider a network comprised of \(N=255\) recurrently connected neurons, with \(K=M=511\) orientation stimuli as inputs. The neuronal gains, \(\mathbf{g}\), adapt to changes in stimulus ensemble statistics (\(p_{0}(\mathbf{s})\to p(\mathbf{s})\)), while the feedforward synaptic weights, \(\mathbf{f}(\mathbf{s})\), and recurrent synaptic weights, \(\mathbf{W}\), remain fixed. We set the homeostatic target gains, \(\mathbf{g}_{0}\), to the optimal values of \(\mathbf{g}\) under the uniform probability stimulus ensemble, \(p_{0}(\mathbf{s})\). Feedforward orientation-tuning functions, \(\mathbf{f}(\mathbf{s})\), are evenly distributed in the stimulus domain, and are broadly-tuned Gaussians with full-width half-max (FWHM) of 30\({}^{\circ}\)(Benucci et al., 2013). The recurrent weight matrix, \(\mathbf{W}\), is a Gaussian with 10\({}^{\circ}\) FWHM, summed with a weaker, broad, untuned excitatory component (see Appendix A). To determine appropriate values of \(\alpha\) and \(\gamma\) in the objective (Equation 4), we performed a grid-search hyperparameter sweep, minimizing the deviation between model and experimentally-measured tuning curves for the biased stimulus ensemble. The figures here all use model responses from a simulation using \(\alpha=1\)E-3, \(\gamma=1\)E-2. We find that qualitative effects are insensitive to small changes in these parameters. The key finding from this parameter sweep is that the gain homeostasis penalty weight must be sufficiently greater than the activity penalty weight (i.e. \(\gamma>\alpha\)). 
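As a complement to the analytic alternating least-squares updates described in Section 4.1, the short sketch below fits the gains numerically by minimizing Equation 4 with a generic quasi-Newton optimizer on a toy network. The one-hot stimulus encoding, the small network size, the simplified homeostatic target, and the specific construction of \(\mathbf{f}(\mathbf{s})\) and \(\mathbf{W}\) are assumptions made for illustration; only \(\alpha\) and \(\gamma\) follow the values quoted above.

```python
import numpy as np
from scipy.optimize import minimize

def circ_gauss(delta_deg, fwhm_deg):
    sigma = fwhm_deg / (2 * np.sqrt(2 * np.log(2)))
    wrapped = np.rad2deg(np.angle(np.exp(1j * np.deg2rad(2 * delta_deg)))) / 2
    return np.exp(-wrapped**2 / (2 * sigma**2))

N, K = 32, 48                                        # toy sizes (the paper uses N = 255, K = 511)
prefs = np.linspace(0, 180, N, endpoint=False)
stims = np.linspace(0, 180, K, endpoint=False)
F = circ_gauss(prefs[:, None] - stims[None, :], 30.0)
W = 0.02 * circ_gauss(prefs[:, None] - prefs[None, :], 10.0) + 0.002
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
S = np.eye(K)                                        # one-hot stimulus vectors (M = K)

def steady_state(g):
    return np.linalg.solve(np.eye(N) - W, g[:, None] * F)          # Eq. 3, N x K

def loss(g, D, p, g0, alpha, gamma):                 # Eq. 4 with the expectation taken over p(s)
    R = steady_state(g)
    fidelity = np.sum(p * np.sum((S - D.T @ R)**2, axis=0))
    activity = np.sum(p * np.sum(R**2, axis=0))
    return fidelity + alpha * activity + gamma * np.sum((g - g0)**2)

alpha, gamma = 1e-3, 1e-2
g0 = np.ones(N)                                      # homeostatic target (simplified to ones here)
D = np.linalg.pinv(steady_state(g0)).T               # fixed decoder: pseudoinverse of pre-adaptation responses
p = np.full(K, 1.0 / K)                              # start from the uniform ensemble...
p[K // 2] += 0.3                                     # ...and over-present one adapter orientation
p /= p.sum()

res = minimize(loss, g0, args=(D, p, g0, alpha, gamma), method="L-BFGS-B")
g_adapted = res.x
print("preferred orientations of the three most-reduced gains:",
      np.round(prefs[np.argsort(g_adapted)[:3]], 1))
```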
After initializing the network gains to the statistics of \(p_{0}(\mathbf{s})\), we adapt the gains to \(p(\mathbf{s})\) by optimizing Equation 4, then compare our model-predicted responses to the cat V1 population recordings (Figure 1B).

### Adaptive gain modulation predicts response equalization

Population response equalization is an adaptive mechanism first proposed in the psychophysics literature (Anstis et al., 1998). The authors argued that adaptation should serve as a "graphic equalizer" in response to alterations in environmental statistics. Others have described equalization as a mechanism that centers a population response by subtracting the responses to the prevailing stimulus ensemble (Clifford et al., 2000), or that rescales responses such that the average of a measured signal remains constant (Ullman and Schechtman, 1982). Figure 2 shows how our model recapitulates mean firing rates across all stimuli under the uniform and biased ensembles without adaptation, along with adaptive population response equalization under the biased ensemble. Figure 2A shows how the average response of each pre-adapted neuron under the uniform ensemble is equal. By contrast, Figure 2B demonstrates that our model predicts how the pre-adaptation tuning curves under the biased stimulus ensemble would produce a substantial deviation from equalization. Finally, adaptively optimizing neuron gains via Equation 4 predicts the compensatory response equalization under the biased stimulus ensemble observed in data (panel C).

Figure 2: Adaptive response equalization. Each dot is the average response of a neuron. **A**) Response averages under the uniform stimulus ensemble condition. **B)**_Without_ adaptation, response averages under the biased stimulus ensemble show substantial deviation from equalization (which corresponds to the dashed black line). **C**) After adaptation, response averages to the biased ensemble are nearly equalized. Shaded regions are SEM.

### Adaptive gain modulation predicts nuanced changes in first-order statistics of responses

Figure 3 summarizes adaptive changes in neural responses by comparing tuning curve responses under the biased stimulus ensemble to those under the uniform stimulus ensemble (i.e. right vs. left columns of Fig. 1B). Our gain-modulating efficient coding model can capture this entire array of observed adaptation effects.

Changes in response maxima, amplitudes, and minima. Stimulus-dependent response reductions are a ubiquitous finding in adaptation experiments (Weber et al., 2019). Figure 3A,B show changes in response maxima and response amplitudes (peak-to-trough height) following adaptation to the biased stimulus ensemble. Ratios less than 1 indicate a reduction in maxima or amplitudes following adaptation. Under the biased stimulus ensemble, the model optimizes its gains according to the objective (Eq. 4) to preferentially reduce activity near the over-presented adapter stimulus. By optimizing gains to the adaptive efficient coding objective, our linear model undershoots the magnitude of change around the adapter (Fig. 3A,B red curve near 0\({}^{\circ}\)), but captures the overall effect of adaptive amplitude and maxima reduction. Figure 3C shows that adaptation induces a tuning-dependent, global reduction in minimum stimulus-evoked response across the population. These minima typically occur at the anti-preferred orientation for each neuron (Fig. 1B).
Previous work has attributed this to an untuned reduction in thalamic inputs, or a drop in base firing (Benucci et al., 2013; Westrick et al., 2016). Our model proposes a different mechanism: gain reductions in neurons tuned for the adapter propagate laterally through the network, and result in commensurate reductions in the broad/untuned recurrent excitation to other neurons in the population. This ultimately leads to a reduction in minimum evoked response across the entire population (Fig. 3C); importantly, the model also captures the qualitative shape of the change. Our mechanistic prediction that this effect arises due to recurrent contributions is in concordance with the broad literature on recurrent cortical circuitry, its role in amplification (Reinhold et al., 2015), and in sensory adaptation (Hershenhoren et al., 2014; King et al., 2016).

Shifts in tuning preference. Tuning curve shifts following adaptation have been reported across many visual and auditory adaptation studies (Clifford et al., 2007; Whitmire and Stanley, 2016, for reviews). Figure 3D quantifies changes in neuron preferred orientation (i.e. the orientation at which the response maximum occurs) after adapting to the biased stimulus ensemble. The sinusoidal shape of the curve indicates that adapted tuning curves are _repelled_ from the over-presented adapter stimulus. This rearrangement of tuning curve density is consistent with efficient coding studies that argue that a sensory neural population should optimally allocate its finite resources toward encoding information about the current stimulus ensemble (Ganguli and Simoncelli, 2014; Gutnisky and Dragoi, 2008). Here, we show that these effects can mechanistically be explained by optimizing neuronal gains to maintain a high fidelity representation of the stimulus under the biased ensemble.

Objective and network ablations. In Appendix B, we assess the importance of each term of the adaptation objective (Eq. 4) by ablating them from the objective and re-simulating the network adapting to the biased stimulus ensemble. We show that the three terms are jointly necessary to capture the adaptation effects shown here. In terms of network architectural ablations, the blue curves in Figure 3 demonstrate how removing recurrence (i.e. \(\mathbf{W}=0\); Equation 1) impacts adaptive changes in neural responses. While this single-stage feedforward model can reproduce reductions in response maxima (Fig. 3A), it is incapable of producing the appropriate change in response amplitudes (Fig. 3B), and completely fails at producing adaptive reductions in minimum response, or shifts in tuning preference (Fig. 3C,D). Intuitively, this is because the gains in this reduced model serve only to set the amplitude of the output, and cannot alter the qualitative shape of the tuning curve without propagating through the recurrent circuitry.

Figure 3: Recurrent network model with adaptive gain modulation (red) captures the full set of post-adaptation first-order response changes observed in data (black points), while a network without recurrence (blue) does not. **A)** Ratios of after/before adaptation response maxima. **B)** Ratios of response amplitudes (\(\|\max-\min\|\) response) after/before adaptation. **C)** Changes in average minimum evoked response. **D)** Shifts in tuning away from the adapter. Shaded regions are SEM. Dashed lines indicate predictions for a non-adaptive model.
The structure of \(\mathbf{W}\) used in our model is informed by functional and anatomical studies in cortical circuits (Lee et al., 2016; Rossi et al., 2020), comprising strong net excitation from similarly-tuned neurons and weak, untuned net excitation from dissimilarly-tuned neurons. In Appendix A, we study the impact of \(\mathbf{W}\)'s structure on model adapted responses. The structure of \(\mathbf{W}\) can be quite flexible while still producing the effects shown here, so long as recurrent input includes weak net excitation from dissimilarly-tuned neurons.

### Adaptive gain modulation predicts homeostasis in second-order statistics of responses

The principle of redundancy reduction is core to the efficient coding hypothesis (Barlow, 1961), and evidence supporting _adaptive_ redundancy reduction has been reported across multiple brain regions and modalities (Atick and Redlich, 1992; Muller et al., 1999; Wanner and Friedrich, 2020). In the task modeled in our study, over-presenting the adapter stimulus can be viewed as increasing redundancy in the stimulus ensemble (Figure 1B, top). This manifests as a "hot spot" in the center of \(\hat{\Sigma}\) if the neural responses were to remain unadapted to \(p(\mathbf{s})\) (Fig. 4A, middle column). However, when the model adapts its gains according to the objective (Eq. 4), the covariance near the adapter stimulus is reduced, and the predicted signal covariance is well matched to data (Fig. 4A, right column, Fig. 4B). A signal covariance matrix devoid of redundancy would be one that is statistically white (i.e. the identity matrix). However, under both the uniform and biased stimulus ensemble conditions (Figure 4A, top left and right), we note that the experimentally observed signal covariance matrix _is not_ statistically white2. Thus, previous normative approaches to population adaptation that explicitly whiten neural responses may not be suitable models for these data (e.g. Mlynarski and Hermundstad, 2021; Pehlevan and Chklovskii, 2015). By contrast, our adaptation objective, which emphasizes stimulus signal fidelity subject to metabolic and homeostatic constraints, predicts an adapted signal covariance matrix whose deviations from the identity matrix are similar to those observed in data. Notably, this effect naturally emerges from our model _without_ additional parameter-tuning.

Footnote 2: In their study, Benucci et al. (2013, Fig. 3) replaced negative entries of \(\hat{\Sigma}\) with zeros.

Figure 4: Population response redundancy reduction and signal covariance homeostasis. **A)** Scaled response covariance matrices, \(\hat{\mathbf{\Sigma}}\) (Eq. 6), for V1 data (top row) and model simulations (bottom row), for unadapted tuning curves and uniform stimulus ensemble (left column), unadapted tuning curves and biased stimulus ensemble (middle column), and adapted tuning curves and biased stimulus ensemble (right column). **B)** Three example horizontal slices of the data (dashed) and model (solid) \(\hat{\mathbf{\Sigma}}\) from **A**, at 0, -45, and -90 degrees orientation (colors).

## 7 Discussion

Study limitations. The network considered here is a rate model whose tractable linear dynamics allow us to examine adaptation responses at steady-state. Response dynamics during adaptation are rich (Dragoi et al., 2000; Patterson et al., 2013; Quiroga et al., 2016), and are relatively understudied.
Developing our model and objective into a biologically plausible online network with explicit excitatory and inhibitory neurons, while adapting gains according to only local signals (Duong et al., 2023; Gutierrez and Deneve, 2019) is an interesting direction worth pursuing. Furthermore, because we model trial-averaged experimental data in this study, our model does not account for stochasticity in neural responses. Thus, our model cannot explain adaptive changes in trial-to-trial variability (Gutnitsky and Dragoi, 2008). Finally, there exist adaptive changes to simultaneously-presented stimuli, usually explained via divisive normalization (Aschner et al., 2018; Solomon and Kohn, 2014; Yilitz et al., 2020), which is not included in our model (see Appendix D). One possible way to bridge this gap would be to combine our normative approach with recently-proposed recurrent models of normalization (Heeger and Mackey, 2018; Heeger and Zemlianova, 2020). Alternative network architectures.There are alternative, equivalent formulations of our model that may give rise to the same steady-state responses as Eq. 3, which we illustrate in Appendix D. Firstly, our model is equivalent to a two-stage feedforward network with gain modulation preceding the inputs of the second stage. Since orientation tuning arises in V1, these two stages could be two different layers within V1; the core mechanism of our framework can thus be related to studies describing adaptive gain changes being inherited from one group of neurons to the next (Dhruv and Carandini, 2014; Kohn and Movshon, 2003; Stocker and Simoncelli, 2009). Secondly, gain modulation in our model, which serves to multiplicatively scale input drive, \(\mathbf{f}(\mathbf{s})\), can equivalently be interpreted as multiplicatively _attenuating_ the recurrent drive of the network. In this sense, our model resembles that of Heeger and Zemlianova (2020), in which divisive normalization is mediated by gating recurrent amplification. Experimental predictions.We propose that rapid neural population adaptation in cortex can be mediated by single neuron adaptive gain modulation. Validating this hypothesis would require careful experimental measurements of neurons during adaptation. First, our framework predicts that between-neuron synaptic connectivity (i.e. \(\mathbf{W}\)) remains stable through adaptation. Second, our normative objective suggests that gain homeostasis plays a central role in population adaptation (see Appendix B). Evidence for stimulus-dependent gain control such as this can possibly be found by measuring neuron membrane conductance during adaptation, mediated by changes in slow hyperpolarizing \(\text{Ca}^{2+}\)- and \(\text{Na}^{+}\)-induced \(\text{K}^{+}\) currents (Sanchez-Vives et al., 2000). Lastly, while there has been considerable progress in mapping the circuits involved in sensory adaptation (Wanner and Friedrich, 2020), determining the exact structure of functional recurrent connectivity remains an open problem. Indeed, we show how different (but not all) forms of \(\mathbf{W}\) can give rise to the same qualitative results shown here (Appendix A). Performing adaptation experiments with richer sets of stimulus ensembles, \(p(\mathbf{s})\), can provide better constraints for solving this functional inverse problem. 
### Conclusion

We demonstrate that adaptation effects observed in cortex - changes in response maxima and minima, tuning curve repulsion, and stimulus-dependent response decorrelation - can be explained as arising from the recurrent propagation of single neuron gain adjustments aimed at coding efficiency. This adaptation mechanism is general, and can be applied to modalities other than vision. For example, studies of neural adaptation in auditory cortex have shown that adaptive responses such as tuning curve shifts cannot be explained by feedforward mechanisms, and likely arise from adaptive changes to intracortical recurrent interactions (Hershenhoren et al., 2014; Lohse et al., 2020). Previous population adaptation models rely on changes in all-to-all synaptic weights to explain these phenomena (e.g. Westrick et al., 2016), but our results suggest that single neuron gain modulations may provide a more plausible mechanism that uses \(\mathcal{O}(N)\) instead of \(\mathcal{O}(N^{2})\) adaptive parameters. Adaptation in cortex happens on the order of hundreds of milliseconds, and is just as quickly _reversible_ (Muller et al., 1999); a network whose synaptic weights were constantly remapping would be undesirable due to a lack of stability, while a mechanism such as adaptive single neuron gain modulation can be local, fast, and reversible (Ferguson and Cardin, 2020). Taken together, our study offers a simple mechanistic explanation for observed adaptation effects at the level of a neural population, and expands upon well-established concepts of adaptive coding efficiency with single neuron gain control.

## Acknowledgments

We thank Matteo Carandini for providing us with V1 neural recording data. We also thank Teddy Yerxa, Pierre-Etienne Fiquet, Stefano Martiniani, Shivang Rawat, Gabrielle Gutierrez, Ann Hermundstad, and Wiktor Mlynarski for their feedback on earlier versions of this work.
2302.14454
Low-loss polarization control in fiber systems for quantum computation
Optical quantum information processing exploits interference of quantum light. However, when the interferometer is composed of optical fibers, degradation of interference visibility due to the finite polarization extinction ratio becomes a problem. Here we propose a method to optimize interference visibility by controlling the polarizations to a crosspoint of two circular trajectories on the Poincar\'{e} sphere. Our method maximizes visibility with low optical loss, which is essential for quantum light, by using fiber stretchers as polarization controllers. We also experimentally demonstrate our method, where the visibility was maintained basically above 99.9% for three hours using fiber stretchers with an optical loss of 0.02 dB (0.5%). Our method makes fiber systems promising for practical fault-tolerant optical quantum computers.
Tomohiro Nakamura, Takefumi Nomura, Mamoru Endo, He Ruofan, Takahiro Kashiwazaki, Takeshi Umeki, Jun-ichi Yoshikawa, Akira Furusawa
2023-02-28T09:59:19Z
http://arxiv.org/abs/2302.14454v1
# Low-loss polarization control in fiber systems for quantum computation

###### Abstract

Optical quantum information processing exploits interference of quantum light. However, when the interferometer is composed of optical fibers, degradation of interference visibility due to the finite polarization extinction ratio becomes a problem. Here we propose a method to optimize interference visibility by controlling the polarizations to a crosspoint of two circular trajectories on the Poincare sphere. Our method maximizes visibility with low optical loss, which is essential for quantum light, by using fiber stretchers as polarization controllers. We also experimentally demonstrate our method, where the visibility was maintained basically above 99.9% for three hours using fiber stretchers with an optical loss of 0.02 dB (0.5%). Our method makes fiber systems promising for practical fault-tolerant optical quantum computers.

## 1 Introduction

Quantum computers are attracting increasing attention as next-generation technologies, and are being extensively researched with various physical systems. Optical implementation is one of the promising candidates. Optical quantum computers will be composed of beamsplitter networks, by which quantum light interacts, followed by detectors. In particular, cluster states, which are entangled states that serve as resources for measurement-based quantum computation, are generated by beamsplitter interactions of squeezed states. In recent years, ultra-large-scale cluster states have been experimentally demonstrated by employing the time-domain multiplexing method, which utilizes Mach-Zehnder interferometers with asymmetric arm lengths [1, 2, 3, 4, 5]. The time-domain multiplexing method takes advantage of the flying nature of light. The demonstrated size of entanglement, therefore, is several orders of magnitude larger than that with other quantum systems such as superconducting qubits, showing the promise of optical quantum computing. In order to exploit the strong quantumness of light, high interference visibilities are required in the interferometers. For this purpose, both the spatial mode and the polarization mode of the light beams should be matched. In the case of free-space systems, even though it is possible to match the spatial mode of all light beams at all interference points, doing so is time-consuming. Furthermore, the beam positions shift over time, reducing interference visibility, although this problem can be solved by introducing auto-alignment systems. Due to time constraints, visibilities are often compromised; e.g., the experiment in Ref. [2] was performed with 98% visibilities. On the other hand, optical fibers are suitable for constructing maintenance-free systems. Using optical fibers, it is possible to achieve almost 100% visibilities as long as the polarizations in the fibers match. However, even if polarization-maintaining fibers are used in the fiber systems, it is difficult to perfectly maintain polarizations due to the finite polarization extinction ratio (PER) of the fiber components. Every fiber component, such as a fiber beamsplitter, a connector, or a connection point with fiber fusion, shows a finite PER. When an interferometer is constructed using fibers, visibility therefore degrades due to polarization mismatch, which is called visibility fading [6, 7]. For classical light, visibilities can be improved by using commercial inline-type fiber polarization controllers. There are basically two types of fiber polarization controllers.
In one type, a non-polarization-maintaining fiber is wound around spools and the angle of the spools are adjusted to cause appropriate stress-induced birefringence, resulting in desired polarization change [8]. In the other type, a non-polarization-maintaining fiber is stressed by piezo actuators to cause birefringence for desired change of polarization [9]. In these polarization controllers, non-polarization-maintaining fibers are subjected to stresses, which generate microbends to cause optical losses [typically 0.08 dB (1.8%) or higher] [10]. However, for classical light, these levels of optical losses are not serious problems. For quantum light, polarization controllers are also needed to solve the visibility fading problem. However, since the quantum light is vulnerable to optical losses, the polarization controllers must have low optical losses. For example, there is a study showing that \(-\)10 dB of squeezing is required for fault-tolerant quantum computation [11]. In order to achieve this, the losses of the entire systems must be sufficiently lower than 0.5 dB (10%). In this paper, we propose a method to optimize visibility in a fiber interferometer by controlling the polarization to a crosspoint of two circular trajectories drawn on a Poincare sphere (Fig. 1). We also experimentally demonstrate visibility maximization with this method, where the visibility is maintained basically above 99.9% for three hours. The circular trajectories appear when the two light beams are off the polarization-maintaining axis of polarization-maintaining fibers and when the fibers are stretched or heated. We call this the circle-circle crosspoint (CCC) method in the following. The CCC method is especially suitable for the quantum light because it can be implemented with low losses. The CCC method can be implemented with fiber stretchers or heaters, which do not cause severe microbends resulting in optical losses. Another important point is that, by employing the CCC method, fiber systems can be constructed solely with the polarization-maintaining fibers. In contrast, the conventional methods employ non-polarization-maintaining fibers for the polarization controlling parts, while other parts are constructed with polarization-maintaining fibers. Because there is no connection between the polarization-maintaining fibers and the non-polarization-maintaining fibers, it is expected that the optical losses tend to be lower. For the experimental demonstration, we made fiber stretchers and measured the optical losses, which were as low as 0.02 dB (0.5%). As a result, the visibility in three hours was basically maintained above 99.9% with the polarization control, while it sometimes dropped to 98% without the polarization control. Although this experiment is demonstrated for a single Mach-Zehnder interferometer, the CCC method can be applied to more complex interferometers having multiple interference points by maximizing visibilities from the upstream to the downstream of the light paths. Section 2 describes the Poincare sphere representation of the polarization states. Section 3 discusses the change in polarization states at the output of polarization-maintaining fibers. Section 4 illustrates the effect of a fiber stretcher on polarization. Section 5 shows low-loss polarization control using fiber stretchers. In Section 6, the fabrication method of the fiber stretchers for polarization controllers are described. In Section 7, the experimental method for polarization control using the CCC method is described. 
Section 8 demonstrates the experimental result. Section 9 shows the comparison of the CCC method and the conventional methods. Section 10 presents the conclusion of this study. ## 2 Poincare sphere representation We introduce the Poincare sphere, on which we can describe polarization states. Let \(x\) and \(y\) components of the electric field of monochromatic light propagating in the \(z\) direction be \[E_{x} =a_{x}\cos(kz-\omega_{c}t-\Gamma_{x}), \tag{1a}\] \[E_{y} =a_{y}\cos(kz-\omega_{c}t-\Gamma_{y}), \tag{1b}\] where \(a_{x}\) and \(a_{y}\) are \(x\) and \(y\) components of the electric field amplitudes, \(\omega_{c}\) is the angular frequency of light, \(k\) is the wave number, and \(\Gamma_{x}\) and \(\Gamma_{y}\) are the phases of \(x\) and \(y\) components. Stokes parameters are defined as follows: \[S_{0} =a_{x}^{2}+a_{y}^{2}, \tag{2a}\] \[S_{1} =a_{x}^{2}-a_{y}^{2},\] (2b) \[S_{2} =2a_{x}a_{y}\cos\Gamma,\] (2c) \[S_{3} =2a_{x}a_{y}\sin\Gamma, \tag{2d}\] where \(\Gamma=\Gamma_{y}-\Gamma_{x}\). Since \(S_{0}\) corresponds to the optical power and \(S_{1}^{2}+S_{2}^{2}+S_{3}^{2}=S_{0}^{2}\), a set of vectors \((S_{1},S_{2},S_{3})\) forms a sphere called the Poincare sphere when the optical power is constant (Fig. 2(a)) [12]. Also, Stokes parameters \(S_{1},S_{2},S_{3}\) are expressed as follows: \[S_{1} =S_{0}\cos 2\chi\cos 2\psi, \tag{3a}\] \[S_{2} =S_{0}\cos 2\chi\sin 2\psi,\] (3b) \[S_{3} =S_{0}\sin 2\chi, \tag{3c}\] where \(\psi\) is the azimuth and \(\chi\) is the ellipticity [13]. For elliptical polarization states, the azimuth \(\psi\) and the ellipticity \(\chi\) are expressed in Fig. 2(b). ## 3 Polarization states through polarization-maintaining fibers The polarization-maintaining fiber has a core and cladding with a stress-applying part to generate a birefringence as shown in Fig. 3(a), and realizes an anisotropic refractive index distribution. Figure 1: Principle of the CCC method. (a) Interferometer with fiber stretchers inserted. (b) Two circular trajectories drawn by polarization A and B on the Poincaré sphere when the voltages applied to the piezo actuators of the fiber stretchers are varied. In the CCC method, polarization states are controlled to a crosspoint of the two circles to optimize visibility. Poincaré sphere is explained in Section 2. V, H, R and L correspond to vertical, horizontal, right-handed circular and left-handed circular polarizations, respectively. The direction in which the stress-applying part exists is called the slow axis, and the direction orthogonal to the slow axis is called the fast axis. In this paper, we call the linear polarization state parallel to the slow axis as the polarization state of the point V on the Poincare sphere. When the linearly polarized input light is parallel to either the slow axis or the fast axis, the polarization is maintained through the fiber. Now we consider the output polarization state from the polarization maintaining fiber when the input linear polarization state is \(\theta\) rotated from the slow axis. Because the slow-axis component and the fast-axis component undergo different phase changes, the electric field of the output light from a fiber, corresponding to Eq. (1), becomes, \[E_{x} =a\cos\theta\cos(kz-\omega_{c}t-\Gamma_{x}), \tag{4a}\] \[E_{y} =a\sin\theta\cos(kz-\omega_{c}t-\Gamma_{y}), \tag{4b}\] where \(a^{2}=a_{x}^{2}+a_{y}^{2}\). In this case, the Stokes parameters Eq. 
(2) are \[S_{0} =a^{2}, \tag{5a}\] \[S_{1} =a^{2}\cos(2\theta),\] (5b) \[S_{2} =a^{2}\sin(2\theta)\cos\Gamma,\] (5c) \[S_{3} =a^{2}\sin(2\theta)\sin\Gamma. \tag{5d}\] In the Poincare sphere, \(\theta\) and \(\Gamma\) are depicted in Fig. 3(b). The phase difference \(\Gamma\) changes depending on the length of the fiber and the refractive index difference between the slow axis and the fast axis, while \(\theta\) does not change. The fiber length where \(\Gamma\) changes by \(2\pi\) is called the beat length. The beat length is about 4 mm with ordinary fibers for the communication wavelength band [14]. Changes in the temperature or stress applied to the fiber slightly alter the fiber length and the refractive index difference, resulting in changes in the phase difference \(\Gamma\) at the fiber exit. With this changes of \(\Gamma\), \((S_{2},S_{3})\) draws a circular trajectory from Eq. (5c) and Eq. (5d). The radius of the circle depends on \(\theta\). At \(\theta=0\), the trajectory of the polarization state converges to the point V in the Poincare sphere and the polarization is no longer affected by the disturbance. Figure 2: Polarization states. (a) Poincaré sphere. (b) The azimuth \(\psi\) and the ellipticity \(\chi\) in elliptical polarization states. Now we consider the polarization state of light propagating through multiple polarization-maintaining fibers connected by connectors with misalignments as shown in Fig. 4(a). Suppose that the light propagating through fiber 0 is in a linearly polarized state and the polarization plane is coincident with the polarization-maintaining axis. Fiber 0 is sequentially connected to Fiber 1, Fiber 2, and Fiber 3 by Connector 1, Connector 2, and Connector 3, respectively. The angles between slow axes of two connected fibers at Connector 1, Connector 2, and Connector 3 are expressed by \(\theta_{1},\theta_{2},\theta_{3}\), respectively. Although connectors are supposed here, in actual situations, there are various components that cause polarization-axis mismatches, such as incomplete fiber fusion splices. Let \(\Gamma_{1}\), \(\Gamma_{2}\), and \(\Gamma_{3}\) be the phase differences between the slow axis and the fast axis which light receives through Fiber 1, Fiber 2, and Fiber 3, respectively. When disturbances are added to Fiber 1, Fiber 2, and Fiber 3, \(\Gamma_{1}\), \(\Gamma_{2}\), and \(\Gamma_{3}\) shift, respectively. The output light from Fiber 3 is measured with a polarimeter. The Poincare sphere for the output light is defined so that the slow axis of Fiber 3 corresponds to the point V. When any of \(\Gamma_{1}\), \(\Gamma_{2}\), and \(\Gamma_{3}\) changes, the polarization state draws a circular trajectory on the Poincare sphere. In particular, in Fig. 4(b), the circular trajectory by changing \(\Gamma_{2}\) more than \(2\pi\) is shown by a blue circle for various \(\Gamma_{1}\) and \(\Gamma_{3}\). In this figure, \(\theta_{1}\), \(\theta_{2}\), and \(\theta_{3}\) are set to 2, 2, and 5 degrees for the sake of visual understandability. These are typical amounts of misalignments when connected by connectors. When \(\Gamma_{1}\) is slightly altered, the radius of the circle changes. When \(\Gamma_{3}\) is slightly altered, the center of the circle changes. Thus, the radius and the center point of the circle can be changed by adding disturbances before and after the fiber. 
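As a quick numerical illustration of Eq. (5) and the circular trajectories described above, the short Python sketch below (our own illustration, not code from the paper; the 10-degree misalignment angle is an arbitrary assumption) sweeps the phase difference \(\Gamma\) over \(2\pi\) and checks that the resulting Stokes vector stays on the Poincaré sphere while tracing a circle centred on the \(S_{1}\) axis.

```python
import numpy as np

# Sketch of Eq. (5): Stokes vector of light launched into a polarization-maintaining
# fiber with its linear polarization rotated by theta from the slow axis, as the
# slow/fast phase difference Gamma drifts (e.g. due to temperature or stress).
a = 1.0                               # field amplitude, so S0 = a**2 = 1
theta = np.deg2rad(10.0)              # assumed misalignment from the slow axis
gamma = np.linspace(0.0, 2.0 * np.pi, 361)

S1 = np.full_like(gamma, a**2 * np.cos(2.0 * theta))
S2 = a**2 * np.sin(2.0 * theta) * np.cos(gamma)
S3 = a**2 * np.sin(2.0 * theta) * np.sin(gamma)

# The points stay on the sphere (S1^2 + S2^2 + S3^2 = S0^2) and S1 is constant,
# so the trajectory is a circle of radius |sin(2*theta)| centred on the S1 axis;
# at theta = 0 it collapses to the single point V.
assert np.allclose(S1**2 + S2**2 + S3**2, a**4)
print("circle radius on the Poincare sphere:", a**2 * np.sin(2.0 * theta))
```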
## 4 Phase and Polarization changes by fiber stretchers

Fiber stretchers pull fibers to change their optical path lengths and are typically used for controlling the phases of light. A fiber stretcher has a structure, for example, where a fiber is wound around a cylindrical object with a variable radius. The radius of the cylindrical object is typically changed by a piezo actuator, which is controlled electrically. Other phase control devices for fiber systems include the Electro-Optical Modulator (EOM) [15], which uses the electro-optic effect. EOMs have the advantage of a wide bandwidth of tens of gigahertz. However, fiber EOMs have the disadvantage of severe optical losses of typically 3 dB (50%) [16]. In contrast, fiber stretchers have a low bandwidth of about 100 kHz, which is determined by the bandwidth of the piezo actuator. However, they have the advantage that the optical loss is typically less than 0.2 dB (4.5%) [17]. Fiber stretchers are chosen when phase controls are needed for quantum light, which is vulnerable to optical losses. In optical quantum computers, quantum light is combined with classical light for measurement. If appropriate ancillary states can be prepared [18, 19], universal quantum computation will be realized by fast measurement switching on the cluster state [20, 21, 22]. Fast phase controls, therefore, are only needed for classical light, which is done by EOMs, and slow phase controls are sufficient for quantum light, which is done by fiber stretchers.

Figure 3: Polarization-maintaining fibers and polarization changes. (a) Cross section of polarization-maintaining fiber. (b) \(\Gamma\) and \(\theta\) in the Poincaré sphere.

When a polarization-maintaining fiber is stretched by a fiber stretcher, the phase difference \(\Gamma\) between the slow and fast axes changes. If the polarization state of the input light, therefore, does not coincide with the polarization-maintaining axes, the polarization state changes and draws a circular trajectory on the Poincare sphere, as discussed in Section 3. Note that similar polarization changes can also be induced by temperature changes, using devices such as heaters. We will describe in Section 5 how to use these polarization changes for maximizing visibility.

Figure 4: Polarization changes through polarization-maintaining fibers with multiple connectors. (a) Setup with misalignments \(\theta_{1}\), \(\theta_{2}\), \(\theta_{3}\) at connectors. (b) Circular trajectories when a disturbance of \(2\pi\) is added to \(\Gamma_{2}\). Here \(S_{0}=1\). A slight modification in \(\Gamma_{1}\) changes the radius of the circle, and a slight change in \(\Gamma_{3}\) shifts the center of the circle.

## 5 Maximizing visibility by the circle-circle crosspoint method

So far, we have considered a single optical path system with multiple fiber elements connected. In this section, we will consider the case of an optical interferometer as shown in Fig. 5(a). The Mach-Zehnder interferometer consists of two beamsplitters. In fiber systems, these beamsplitters have finite PERs (typically 25 dB or more). In addition, the paths of the interferometer often have fiber stretchers for phase locking and tapping fiber beamsplitters for monitoring the state of the light, which also have finite PERs. Furthermore, when connecting these fiber elements, if the fibers are fused together, the PER of fusion splices is typically about 35 dB, and if they are connected with connectors, there are typically axis misalignments of 2 degrees (PER = 29 dB) to 5 degrees (PER = 21 dB).
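For reference, the connector figures quoted above can be reproduced from the usual power-ratio definition of extinction ratio for an angular misalignment \(\theta\) between the polarization and the slow axis, PER \(=10\log_{10}(\cos^{2}\theta/\sin^{2}\theta)=-20\log_{10}(\tan\theta)\). The small Python check below is our own illustration (the conversion formula is our assumption, not stated explicitly in the paper), and it reproduces the quoted values:

```python
import numpy as np

# Extinction ratio of a linear polarization misaligned by theta from the slow axis:
#   PER[dB] = 10*log10(P_slow / P_fast) = 10*log10(cos(theta)**2 / sin(theta)**2)
#           = -20*log10(tan(theta))
def misalignment_to_per_db(theta_deg):
    theta = np.deg2rad(theta_deg)
    return -20.0 * np.log10(np.tan(theta))

for deg in (2.0, 5.0):
    print(f"{deg:g} deg misalignment -> PER = {misalignment_to_per_db(deg):.0f} dB")
# -> about 29 dB and 21 dB, matching the connector misalignments quoted above
```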
The interferometer has two optical paths, and the polarization state of each optical path changes independently as shown in Fig. 5(b). We suppose that the polarization state A0 of the input to the interferometer is at point V on the Poincare sphere. As the light passes through Beamsplitter 1, the light splits into two paths, but due to the finite PER of the fiber beamsplitter, the polarization states shift slightly to A1 and B1 on the Poincare sphere, respectively. Then, due to the finite PER of the elements in each path, the polarization state of one path changes from A1 to A2, A3, and A4, while the other changes from B1 to B2, B3, and B4. When light enters Beamsplitter 2, the visibility decreases as the distance between polarization states A4 and B4 on the Poincare sphere increases.

Figure 5: The CCC method in an interferometer with multiple finite PER components. (a) An interferometer setup for considering the changes in polarization states. (b) Polarization changes on the Poincaré sphere when light passes through the finite PER components in the interferometer. (c) Two circular trajectories drawn by A4 and B4 when voltages applied to Fiber stretcher 1 and Fiber stretcher 2 are changed. In the CCC method, A4 and B4 are controlled to a crosspoint of the two circles.

In the CCC method, both paths of the interferometer have fiber stretchers (or heaters) for polarization control. By driving the fiber stretchers, the polarizations of A4 and B4 draw two different circular trajectories as shown in Fig. 5(c). In most cases, these two circular trajectories have crosspoints. By adjusting the fiber stretchers to a crosspoint of these circles, polarization states A4 and B4 are matched. Visibility, therefore, is maximized by this polarization control. If the circular trajectories do not have any crosspoint, visibility does not reach 100%. However, as explained in Section 4, the radius and the center of the circular trajectory can be moved by placing devices that cause a phase difference between the slow and fast axes before and after the inserted fiber stretcher. The trajectories of the two circles, therefore, can be changed so that they have crosspoints.

If the polarization control is performed by heaters or Peltier devices instead of fiber stretchers, larger dynamic ranges are realized. This larger dynamic range may be useful in some situations. The bandwidth of the fiber stretcher is typically up to about 100 kHz, and heaters or Peltier devices are much slower. But since the polarization control for visibility maximization only needs to follow the slow drift of polarization, high speed is not necessary and these slow polarization controllers are sufficient to achieve visibility maximization.

In addition to visibility maximization, phase lock is necessary for optical quantum information processing. However, polarization control in the CCC method causes a significant phase drift. Polarization control and phase control, therefore, should be performed in two steps. Firstly, polarization control is performed with phase control off, and the polarization controllers are held at the optimal polarization states. Secondly, the phase control is turned on. Phase control can be performed by connecting a small fiber stretcher in series with the polarization controller. Optical quantum information processing is performed after the preparation by this two-step control (Fig. 6). In principle, it is also possible to perform polarization control and phase control in a single fiber stretcher.
This is because the voltage scales applied to the fiber stretcher for polarization control and phase control are significantly different. It is, therefore, possible to first perform polarization control on a larger voltage scale and then perform phase control on a smaller voltage scale. In the CCC method, optimized polarization states are shifted from the slow axis of the polarization-maintaining fiber. Hence, the polarization dependence of the beamsplitter becomes the nonideality of the interferometer, which degrades the quality of the quantum computation. In order to assess these problems, we conducted some measurements. When various polarizations of light are input to a fiber beamsplitter 954P (Evanescent optics) with a coupling ratio of 49:51, the variation of the coupling ratio was \(\pm\)0.3%, which is usually negligible. Although the CCC method is explained for a simple Mach-Zehnder interferometer, the CCC method can be applied to systems that have multiple interference points, by maximizing visibilities from the upstream to the downstream of the system. ## 6 Making of fiber stretchers The fiber stretchers for the CCC method should be able to change the phase difference between the slow and fast axes larger than \(2\pi\). Due to the small difference of refractive indices between slow and fast axes of the polarization-maintaining fiber, shifts of the phase difference by \(2\pi\) Figure 6: Processes for operating a quantum computer. empirically correspond to a few hundred wavelengths of phase shifts. Hence, larger fiber stretchers are required in comparison with those for phase control which changes the phase by a few wavelengths. Although fiber stretchers are commercially available, in this study we made our own fiber stretchers because we wanted to customize the number of fiber windings and the radius at which the fibers are wound. The fiber stretcher we fabricated utilizes a 40 mm diameter cylindrical piezo actuator PT140.70 (PI) [23], around which a polarization-maintaining fiber for communication wavelengths SM15-PS-U25D (Fujikura) is wrapped about 10 times, as shown in Fig. 7. The wound fiber is fixed with Polyimide tape. A voltage of up to 1000 V can be applied to the piezo actuator and a maximum diameter contraction is 12 um. Hence, if all the piezoelectric contraction contributed to fiber stretching, the fiber would be stretched by 0.37 mm. The fiber stretcher we fabricated draws a single circle at about 600 V, which corresponds to about 7 um contraction. This contraction causes the change of the fiber length by about 200 um, which is much smaller than the beat length about 4 mm of the polarization-maintaining fiber [14, 24]. Hence, the change of the phase difference is mainly due to stress-induced birefringence, rather than the change in the fiber length. The measured insertion loss of the fiber stretcher was 0.02 dB (0.5%), which is acceptable for quantum light. Furthermore, this optical loss is almost unchanged when the voltage applied to the fiber stretcher is changed from 0 V to 1000 V. The actual circular trajectories are measured by the setup in Fig. 8 and obtained polarization trajectories are shown in Fig. 9. Firstly, only Light 1 was fed and the voltage applied to Fiber stretcher 1 was changed, resulting in Circular trajectories (i) in Fig. 9. Secondly, only Light 2 was fed and the voltage applied to Fiber stretcher 2 was varied, resulting in Circular trajectories (ii) in Fig. 9. 
In the CCC method, polarizations are controlled to the crosspoints of the two trajectories. Note that these circular trajectories are not obtained from the experimental setup in Sec. 7, where all fiber components are connected by fusion splices and thus we cannot see the polarization trajectory of each optical path like Fig. 9. Figure 7: Fiber stretcher for polarization control. (a) Structure of the fiber stretcher. (b) Photo of the fiber stretcher. ## 7 Experimental Method We experimentally demonstrate visibility maximization by the CCC method with the setup shown in Fig. 10. The polarization state of Light 1 is adjusted as much as possible so that the light is linearly polarized to match the slow axis of the polarization-maintaining fiber. Light 1 is split into Light 3 and Light 4 by passing through Beamsplitter 1, which has a coupling ratio of 50:50. Light can also be input from the Light 2 path, although Light 2 is blocked in this experiment. Light 3 and Light 4 interfere at Beamsplitter 2 with a coupling ratio of 50:50 to generate Light 5 and Light 6. Beamsplitter 3 with the coupling ratio of 0.5:99.5 is inserted in the path of Light 3 so that the interference phase between Light 1 and Light 2 can be monitored by Photodetector 1 and locked by feedback control. A mini fiber stretcher is inserted in the Light 4 path to scan the phase, by which an interference signal between Light 3 and Light 4 is generated to calculate the visibility. In both the Light 3 and Light 4 paths, Fiber stretcher 1 and Fiber stretcher 2, which are explained in Section 6, are inserted to control the polarization. To monitor the interference signal, Light 5 is input to Beamsplitter 4 with the coupling ratio of 0.5:99.5, and the output of 0.5% is detected by Photodetector 2. Light 1 has an optical power of about 1.2 mW. Beamsplitter 1, Beamsplitter 2, Beamsplitter 3, and Beamsplitter 4 are 954P (Evanescent optics). The individual difference in the coupling ratio for a 50:50 beamsplitter is \(\pm 2\%\). The voltages are applied to Fiber stretcher 1 Figure 8: Setup for the polarimeter measurement of the circular trajectories of polarization drawn on the Poincaré sphere. Figure 9: Two circular trajectories obtained by the setup in Fig. 8, which have cross points. Here \(S_{0}=1\). and Fiber stretcher 2 with a piezo actuator driver, SVR 1000/3 (piezo actuatormechanik GmbH), which can apply up to 1000 V. The mini fiber stretcher is FS20 (IDIL). With this mini fiber stretcher, the phase can be scanned by about 5 wavelengths in the voltage range of 0 V to 70 V. Triangle wave signal with a frequency of 100 Hz is applied to this mini fiber stretcher. The triangle wave signal is generated by an arbitrary wave generator (33500B Keysight) whose \(\pm\)0.8 V signal is amplified to the range between 20 V and 50 V by a homemade amplifier. Photodetector 2 is homemade and the photodiode is G8195-11 (Hamamatsu Photonics). The transimpedance gain is 100 k\(\Omega\), the reverse bias is 15 V, and the flat bandwidth is about 30 MHz. A motor-driven optical shutter is placed in the Light 1 path to periodically block the light and retake the voltage level of Photodetector 2 without light input, which may drift during the experiment. Interference voltage signals which are obtained by Photodetector 2 are sent to analog-to-digital converters on an FPGA board, STEMlab 125-14 (RedPitaya). The FPGA board has an analog input voltage range of \(\pm\)1V, a voltage resolution of 14 bits, and a sampling frequency of 125 MHz. 
The data are then downsampled to 1/256 in the FPGA board and finally acquired at a sampling frequency of 488 kHz. In the downsampling, 256 points are averaged in the FPGA board with 14-bit accuracy. The digital data acquired on the FPGA board is transferred to a personal computer. Data of 16384 points are acquired at one time on the FPGA board, which we call one frame of data. The time width of the one frame is 33 ms, taking account of the sampling frequency of 488 kHz. The data are transmitted to the personal computer via an Ethernet cable using socket communications. The FPGA board has two input channels. Ch. 1 acquires the interference signal from Photodetector 2, and Ch. 2 acquires the triangle wave signal applied to the mini fiber stretcher, which is appropriately attenuated. Fig. 11(a) shows a plot of the 2-channel data acquired by the FPGA board. As shown in Fig. 12, the visibility is calculated on the personal computer using the transferred data. First, the visibility value is calculated from each frame. Then, the visibility of one time is obtained by averaging the five visibility values from five frames, and its error bar is obtained by the standard deviation of them. Due to the time required for data transfer and real-time processing, a visibility value of one time is obtained about every 2.5 seconds. Visibility \(v\) is calculated from the maximum (max) and minimum (min) of the interference signal as \[v=\frac{\max-\min}{\max+\min}. \tag{6}\] Here, max and min are the voltages that are measured from the voltage level without light. The min and max of the interference signal are obtained from the area where the triangle wave Figure 10: Experimental setup for visibility maximization with the CCC method using Fiber stretcher 1 and Fiber stretcher 2. FS: fusion splice. signal is monotonically increasing (Fig. 11(a)). Hence, the area is narrowed down to \(\pm 3\) ms around the time when the triangle wave signal crosses 0 V. Fig. 11(b) and (c) show the min and max of the interference signal in Fig. 11(a). In order to compensate for the slight DC level without light, the output voltage level without light is measured by blocking Light 1 with a motor-driven optical shutter (gray points in Fig. 11(b)). This voltage level without light is acquired in one frame and the average value is used as the reference voltage. The process to obtain voltage level without light is repeated every 20 time points of visibility measurements. Figure 11: Example of an interference signal to obtain visibility. (a) Interference signal acquired in Ch. 1 of the FPGA board (blue points) and triangle wave signal applied to the mini fiber stretcher acquired in Ch. 2 (red points). Min and max of the interference signal obtained in the green area, where the triangle wave signal monotonically increases. (b) Magnified interference signal (blue points) around the min (red point), together with the detector signal without light (gray points) as a reference. (c) Magnified interference signal (blue points) around the max (red point). Figure 12: Processes to obtain visibilities. The visibility is calculated in real time, and the voltages applied to Fiber stretcher 1 and Fiber stretcher 2 are changed step by step to maximize the visibility. The visibility is maximized by trial and error as follows: First, Fiber stretcher 1 is moved in a certain direction by an appropriate step, and the visibility before and after the movement is compared. 
If the visibility improves, Fiber stretcher 1 is moved further, and this process is repeated until the visibility decreases. Next, the voltage applied to Fiber stretcher 2 is changed in the same way. Then, the driving target is back to Fiber stretcher 1, while the direction is opposite to the previous direction. By repeating these procedures, we maximize visibility. These processes are controlled by Python programs. In the experiment, we demonstrate not only the increase of visibilities by the trial-and-error processes but also the keeping of high visibilities by compensating for the drifts of polarizations by continuing the trial-and-error processes. The step widths of the trial-and-error process explained above are set as follows: 30 mV when visibility is less than 99%, 20 mV when visibility is between 99% and 99.5%, and 10 mV when visibility is greater than 99.5%. Note that the FPGA board outputs DC voltages between \(\pm\)0.8 V. The \(\pm\)0.8 V outputs are amplified to about \(\pm\)3 V with op-amp ADA4898-1, and further amplified with a piezo actuator driver, adding about 400 V offset. Finally, the voltage applied to Fiber stretcher 1 ranges from 34 V to 772 V, and the voltage applied to Fiber stretcher 2 ranges from 110 V to 702 V. Figure 13: (a) Time variation of visibility when visibility maximization by polarization control is turned on (red points) and turned off (blue points). Black dashed lines represent 100.0% and 99.9% visibilities. (b) Voltages applied to Fiber stretcher 1 (orange points) and Fiber stretcher 2 (green points) when visibility maximization is activated. Experimental Result The visibility maximization experiment is performed continuously for three hours, and the results are as shown in Fig. 13(a). The visibilities are red points for the case where visibility maximization by polarization control is activated and blue points for the case where it is deactivated for comparison. The horizontal axis is elapsed time and the vertical axis is visibility. Red points and blue points cannot be acquired at the same time. Hence, red points are firstly obtained for three hours, and then blue points for three hours. During the visibility maximization activated, visibilities are maintained basically above 99.9%. On the other hand, during the visibility maximization deactivated, even though the initial visibility is set to almost 100%, in three hours it drifts and temporarily drops to around 98%. The error bars for each time are \(\pm\)0.02% on average. These results show that the stability of the visibility is drastically improved by the CCC method. A decrease in visibility can be evaluated as the angular mismatch of two linear polarizations. For the case the visibility maximization is activated, the visibility of 99.9% corresponds to 0.9 degrees of polarization mismatch. For the case the visibility maximization is deactivated, the minimum visibility of 98% corresponds to 1.3 Degrees of polarization mismatch. Note that the estimated worst case of visibility is 87%, corresponding to the polarization mismatch of \(\pm\)10.3 Degrees, which are obtained by considering the finite PERs of used components. The experimental lowest visibility of 98% is much above the estimated worst case of 87%. Fig. 13(b) shows the voltages applied to Fiber stretcher 1 and Fiber stretcher 2, for the case where the visibility maximization is activated. The orange points represent the voltage applied to Fiber stretcher 1, and the green points represent the voltage applied to Fiber stretcher 2. 
The voltages constantly change to maximize visibility. Since the voltage to draw the circular trajectory is 500 V, about 100 V change during the three-hour measurement shows that the polarization drifts by about one-fifth of the circular trajectories are compensated by the continuous trial-and-error processes. The dynamic ranges of the fiber stretchers are finite. Thus, during long-term experiments to compensate for drifts of the polarizations, the voltages applied to the fiber stretchers may exceed the dynamic ranges, though such cases did not occur during the three-hour experiment in Fig. 13. Our control program is written such that, when the voltage exceeds the dynamic range, the voltage is reset to the center of the dynamic range and then the visibilities are recovered by the trial-and-error processes. Fig. 14 shows the case where the voltage exceeds the dynamic range and the visibility is recovered. Fig. 14(a) shows the visibilities, and Fig. 14(b) shows the voltages applied to Fiber stretcher 1 (orange points) and Fiber stretcher 2 (green points). The voltage applied to Fiber stretcher 2 reaches the lower limit of 110 V at 50 seconds, and the voltage is reset to 400 V. Then the voltage gradually approaches an optimal point by the trial-and-error processes. The visibility decreases to about 98% when the voltage is reset, and then gradually recovers to 99.9%. It is also expected that this discontinuous visibility change can be rarer by substituting fiber stretchers with heaters with larger dynamic ranges. We discuss the accuracy of the acquired visibilities. As is shown in Fig. 11, the maximum value of the interference signal is about 600 mV. From Eq. (6), visibility decreases at a rate of 0.33% when the minimum value increases by 1 mV. Since visibility is sensitive to the minimum value of the interference signal, it is important to obtain the minimum precisely. The FPGA board can acquire voltage signals with a range of \(\pm\) 1 V and a resolution of 14 bits, where 1 bit corresponds to 0.12 mV. The circuit noise of the FPGA board is 0.10 mV\({}_{\mathrm{rms}}\) when downsampled to 1/256. When Photodetector 2 is connected to the FPGA board, the circuit noise is 0.17 mV\({}_{\mathrm{rms}}\). When light is incident and a DC voltage is 600 mV, the sum of optical shot noise and circuit noise is 0.18 mV\({}_{\mathrm{rms}}\). Thus, the optical shot noise is almost negligible compared with the circuit noise. From the accuracy of the minimum values determined by circuit noises, it is assumed that the accuracy of the visibilities obtained from one frame data is about 0.06%. It is theoretically shown that fault-tolerant quantum computation is possible with more than 10 dB of squeezing levels [11]. A high squeezing level of 15 dB is reported where the visibility of homodyne measurements was 99.6% [25]. Thus, the visibility of 99.9% with the CCC method is good enough to realize fault-tolerant quantum computation in the future. ## 9 Comparison with conventional polarization controllers There are various types of polarization controllers, but basically, they control polarization by applying stress to a non-polarization-maintaining fiber to induce birefringence. In this section, we compare the optical losses of the two types of manual fiber polarization controllers FPC032 (Thorlabs) [26], HFPC-11-1300/1500-S-9/125-3A3A (OZ Optics) [27] and two types of motorized fiber polarization controllers MPC320 (Thorlabs), PCD-M02 (Luna) [28]. The comparisons are summarized in Table 1. 
FPC032 has three spools around which a non-polarization-maintaining fiber is wound several times to produce stress-induced birefringence. Then, by adjusting the angle of the spools, the direction of birefringence is changed to control polarization. This polarization controller requires about 5 m of non-polarization-maintaining fiber. If the fiber is wound tightly, the optical loss increases, but if the fiber is wound weakly, the birefringence becomes smaller, making polarization change difficult. The measured optical loss was 0.4 dB (8%). In this product, microbends are considered to be generated at the base of the spool where the fiber is twisted, resulting in large optical losses. Instead of FPC032 where the angle of the spool is changed manually, MPC320 can be used where the angle of the spool can be changed electrically. However, MPC320 has a smaller spool radius, resulting in large optical losses. The actual optical loss of MPC320 was measured to be 3 dB (50%). HFPC-11-1300/1500-S-9/125-3A3A clamps a non-polarization-maintaining fiber to produce birefringence. The polarization is changed by the strength of the clamping force and the rotation Figure 14: Reset process of visibility maximization when the voltage applied to Fiber stretcher 2 reaches the lower limit. (a) Visibilities before and after the reset process. (b) Voltages applied to Fiber stretcher 1 (orange points) and Fiber stretcher 2 (green points). When the voltage applied to Fiber stretcher 2 reaches the lower limit, it is reset to 400 V. angle of the clamp. The actual optical loss was measured to be 0.15 dB (3.4%). Microbends are considered to be generated at the part where the single mode fiber is pressed and twisted by the clamp. In this product, the clamp is controlled manually. PCD-M02 uses piezo actuators to press a non-polarization-maintaining fiber from various directions to produce birefringence. The non-polarization-maintaining fiber is pressed in a short distance of only a few millimeters, which causes microbends. The data sheet value of PCD-M02(Luna) [28] polarization controller is less than 0.05 dB (1.1%), and the measured value was 0.08 dB (1.8%). This product controls polarization electrically. In our homemade fiber stretcher used in the demonstration of the CCC method, birefringence is generated by stretching a 1.5 m fiber uniformly. Thus, microbends are considered to be less likely to occur and thus optical loss is reduced. In the demonstration, polarizations are controlled electrically. ## 10 Conclusion In this paper, we propose a method to maximize the interference visibility in a polarization-maintaining fiber system by controlling the polarizations to a crosspoint of the two circular trajectories on the Poincare sphere, which we call the CCC method. Using fiber stretchers as polarization controllers, optical losses can be reduced. We also experimentally demonstrated the CCC method for three hours, and visibility was basically kept above 99.9%. In contrast to the conventional methods where polarization is controlled by applying stresses to a non-polarization-maintaining fiber, the CCC method controls polarizations by pulling polarization-maintaining fibers with fiber stretchers, resulting in smaller microbends. Hence, the optical losses of the fiber stretchers were measured to be as low as 0.02 dB (0.5%). Furthermore, in contrast to the conventional methods, we do not have to combine non-polarization-maintaining fibers with a polarization-maintaining fiber system. 
The CCC method has broad applications in situations where optical losses in fiber interferometer systems are undesirable. In particular, we can adapt this method to interference of loss-sensitive squeezed states, such as cluster states used for one-way quantum computation [29, 2, 5, 20, 2]. Fiber systems do not require spatial alignment and are expected to improve stability over long periods of time. The CCC method can achieve polarization control electrically, which allows for automatic control. Furthermore, even if fiber systems have multiple interference points, visibilities can be optimized at all interference points by applying the CCC method from upstream to downstream of the systems. ## Funding Japan Science and Technology Agency (JPMJMS2064, JPMJPR2254); Japan Society for the Promotion of Science KAKENHI (18H05207, 20K15187). \begin{table} \begin{tabular}{c c c c} \hline \hline Product Name & Fiber type & Manual/Electric & Loss \\ \hline \hline FPC032 & SM & Manual & 0.4 dB (8\%) \\ HFPC-11-1300/1500-S-9/125-3A3A & SM & Manual & 0.15 dB (3.4\%) \\ MPC320 & SM & Electric & 3 dB (50\%) \\ PCD-M02 & SM & Electric & 0.08 dB (1.8\%) \\ Homemade (used in this demonstration) & PM & Electric & 0.02 dB (0.5\%) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of the performance of various polarization controllers. ## Acknowledgments This work was partly supported by the UTokyo Foundation and donations from Nichia Corporation of Japan. T.N acknowledges financial support from Forefront Physics and Mathematics Program to Drive Transformation (FoPM). M.E. acknowledges supports from Research Foundation for Opto-Science and Technology. ## Disclosures T.N and A.F. are inventors on a patent related to this work filed by the University of Tokyo on a priority date of 2 December 2021 with the Japan Patent Office (P).
2309.06322
Preliminary Results from a U.S. Demographic Analysis of SMiSh Susceptibility
As adoption of mobile phones has skyrocketed, so have scams involving them. The text method is called SMiShing (aka SMShing or smishing), in which a fraudster sends a phishing link via Short Message Service (SMS) text to a phone. However, no data exists on who is most vulnerable to SMiShing. Prior work in phishing (its e-mail cousin) indicates that this is likely to vary by demographic and contextual factors. In our study, we collect this data from N=1007 U.S. adult mobile phone users. Younger people and college students emerge in this sample as the most vulnerable. Participants struggled to correctly identify legitimate messages and were easily misled when they knew they had an account with the faked message entity. Counterintuitively, participants with higher levels of security training and awareness were less correct in rating possible SMiSh. We recommend next steps for researchers, regulators and telecom providers.
Cori Faklaris, Heather Richter Lipford, Sarah Tabassum
2023-09-12T15:32:36Z
http://arxiv.org/abs/2309.06322v1
# Preliminary Results from a U.S. Demographic Analysis of SMiSh Susceptibility * ###### Abstract. As adoption of mobile phones has skyrocketed, so have scams involving them. The text method is called "SMiShing," (aka "SMShing", or "smishing") in which a fraudster sends a phishing link via Short Message Service (SMS) text to a phone. However, no data exists on who is most vulnerable to SMiShing. Prior work in phishing (its e-mail cousin) indicates that this is likely to vary by demographic and contextual factors. In our study, we collect this data from N=1007 U.S. adult mobile phone users. Younger people and college students emerge in this sample as the most vulnerable. Participants struggled to correctly identify legitimate messages and were easily misled when they knew they had an account with the faked message entity. Counterintuitively, participants with higher levels of security training and awareness were less correct in rating possible SMiSH. We recommend next steps for researchers, regulators and telecom providers. ## 1. Introduction As adoption of mobile phones has skyrocketed [30], so have scams involving these devices [31]. By Q4 2022, the top contact method in U.S. Federal Trade Commission scam reports was the phone (20% text, 19% phone call) [27]. The text method is called "SMiShing," (aka "SMShing", or "smishing") in which a fraudster sends a phishing link via Short Message Service (SMS) text to a phone. Banks who partner in mobile payments networks are commonly impersonated, as are delivery companies, retailers, and communication providers [32]. However, no data exists on who is most vulnerable to SMiShing. Prior work in phishing (its e-mail cousin) [5,8,9,24] and our informal interviews with industry researchers indicates that SMiShing vulnerability is likely to vary by both demographic and contextual factors. This data is needed to identify how to best intervene to reduce and mitigate SMiShing, such as providing evidence for U.S. telecom providers to use optimal filters for SMiSh and to provide in-context warnings to mobile phone users. Multiple studies will be needed to fully investigate this problem. As a first step, we have conducted and analyzed data from a large-scale survey of U.S. adult mobile phone users. In this paper, we answer the following research questions: * RQ1: How many adult mobile phone users can correctly rate three random text messages as either Legitimate or Fraudulent, as determined with data from a U.S.-representative survey panel? * RQ2: Which U.S. demographic groups are most vulnerable to SMiShing, as determined through statistical analysis of online survey ratings of text messages and selected responses to the messages? * RQ3: To what extent is the vulnerability identified in RQ1a significantly associated with a lack of prior training or other relevant experiences? To answer these questions, we designed an online assessment of people's ability to identify whether a simulated text message was "real" or "fake." We also collected a number of demographic and security-related cognitive and behavioral variables. We deployed the survey to a Qualtrics panel of U.S. adult mobile-phone users from June 26 to July 1, 2023. After data cleaning, we analyzed the resulting N=1,007 responses. Overall, we found that participants had more difficulty in correctly identifying legitimate text messages than fraudulent ones. Troublingly, they overwhelmingly and significantly fell for SMiSH if the message entity was one that they thought they would have an account with. 
These results suggests that, in participants' minds, thinking that they had an account with the entity named in the message overrode all caution gleaned from prior experience or training, or effort to examine the source identifiers for clues as to whether the given text message was a SMiSH attack or legitimate text message. Further, and controlling for account knowledge: we found that participants scored significantly lower on our SMiShing assessment in younger age brackets and if they reported currently studying for a four-year degree; and scored significantly higher if they reported holding a job in the Educational Instruction and Library category. This suggests that, more broadly, younger people and those in school are most vulnerable to SMiSh, while people whose jobs denote a non-security expertise in judging information sources and credibility are among the least vulnerable. Finally, we found that a low score on our SMiShing assessment was significantly more likely from a participant who reported frequently experiencing security breaches, and - more counterintuitively - from those receiving a greater-than-average amount of security training, or taking greater-than-average care to keep alert for phishing and other scams online. Taking the results as a whole, we suspect the existence of a "security expertise bias," in which participants who perceive themselves to be expert in staying alert for social engineering may be over-correcting and identifying too many messages as fraudulent and too few as legitimate, vs. those who with a non-security expertise in vetting information. Based on these results, we recommend, first, that U.S. cellular and business regulators work with usability experts to design a verification system and trust indicators to highlight verified sources for SMS mobile messages. This would have the impact of making the SMS text system far more usable for consumers, as people of any expertise or skill would be easily able to see at a glance whether they could trust the source of a commercial message. Second, we recommend more research to determine whether a cause-and-effect relationship exists between high levels of security awareness training and vulnerability to SMiShing and what explains it, as this cross-sectional study can only determine what significantly accounts for variances in message ratings. Third, we see the need for developing more-nuanced messaging and education in how to perceive and judge information credibility, especially for young adults and college students. In summary, our contributions are the following: * Up-to-date knowledge of demographic susceptibility to scam messages for the era of mobile phones and widespread use of remote messaging. * Examples of simulated "real" and "fake text messages and a survey protocol for use in research on SMiShing. * Empirically based recommendations for researchers, regulators, and telecom providers. ## 2 Related Work Phishing and SMiShing are two types of cyberattacks that use social engineering techniques to trick users into revealing their personal or financial information via computer-mediated communication [12]. These attacks pose serious threats to the security and privacy of users, as well as the reputation and trustworthiness of organizations. ### Phishing Phishing is among the most common and well-studied forms of cyberattack [33]. Attackers use fraudulent emails to impersonate legitimate entities and solicit sensitive information from users [18]. 
Phishing emails often contain malicious links or attachments that lead users to fake websites or download malware onto their devices. These attacks can target individuals or organizations, and can have various motives, such as stealing money, identities, credentials, or intellectual property. Various methods have been researched to prevent or detect phishing attacks, which Hong summarized as "make things invisible" (ex: deploy machine learning on the back end to classify and filter away phish), develop better user interfaces, and provide effective training [18]. Several studies have investigated the factors that influence users' susceptibility to phishing attacks, such as the design of the email, the content of the message, the context of the situation, and the characteristics of the user [2, 7, 8, 9, 10, 20, 24]. Sheng et al. [24] were among the first to investigate demographic vulnerability. They designed a survey in which respondents were asked to play the part of "Pat Jones," an administrator for the fictional Baton Rouge University, and respond to four email messages, two of which were phishing and two of which were legitimate. In their N=1,001 sample, they found that women were more susceptible than men to phishing, and participants between the ages of 18 and 25 were more susceptible than other age groups, which they explained as due to differences in computer and web expertise among the groups. Their study also found that educational materials were effective in reducing participants' willingness to enter information into bogus webpages, but that they slightly decreased users' tendency to click on legitimate links. Our study also employs the "Pat Jones" persona developed by Sheng et al. and displays simulated scam messages for participants to respond to. We find that younger people remain more vulnerable to social engineering attacks, but that the significant variance by gender has disappeared (Section 5.2.1). ### SMiShing SMiShing (aka "SMShing", or "smishing") is a relatively newer form of cyberattack that uses fraudulent SMS text messages to deceive users into clicking on malicious links or providing personal information. SMiShing messages often exploit users' emotions, such as fear, love, or greed, to induce them to take immediate action without verifying the source or the validity of the message. SMiShing attacks can also leverage users' trust in certain services or brands, such as banks, delivery companies, or online retailers [32], or even security or military authorities [28]. For example, a SMiShing attack on customers of the U.S. Fifth Third Bank led them to enter their credentials on a bogus website, thinking the bank had requested this to unlock their accounts [25]. An even bigger attack tricked customers of Czech Post into downloading a malicious app to their phones [3]. Recently, attackers have exploited COVID-19 information confusion and the global shift to remote messaging to motivate users with bogus messages from contact tracing websites, insurance, or vaccine providers [1]. SMiShing attacks are more difficult to detect than phishing attacks, as text messages have fewer cues and indicators than emails, such as sender's address, subject line, or spelling errors [34]. Furthermore, text messages are more likely to be read and responded to than emails, as they are perceived as more personal and urgent - leading marketers as well as scammers to send unsolicited texts to mobile numbers [6]. Relatively few researchers have systematically studied SMiShing vulnerability. 
An exception is Rahman et al., who conducted an experiment to randomly deliver two of four types of SMiSh (generic, personalized, spoofed, or voice-based, and with content for a variety of entities and using reward or fear motivations) to 10,000 participants. Of these, 28.7% responded to the messages, 15.8% clicked on malicious links, and 3.1% entered personal information into bogus webpages. The researchers found that the SMiShing attacks were more effective when they used personalized or spoofed messages, as they increased the perceived legitimacy and urgency for users to respond. Our study draws on Rahman et al. for the attributes of our SMiSh content. We designed messages with similar entities, scenarios, source identifiers, user action asks, and motivations to click or respond. Our study contributes quantitative data for several variables that they identified as factors in SMiSh responses, such as urgency and curiosity. One difference is that we added items to discern the effect of participants knowing that they (their assigned persona, "yourself" or "Pat Jones") had an account with the message entity, which turned out to be a significant influence on correct responses. Because their study found that doctorates were disproportionately likely to fall for SMiSh, we added a question about whether participants had a doctorate. Our study found high rates of these doctorates rating messages incorrectly, but we discovered this score predictor to be non-significant when controlling for account knowledge (Section 5.2.1). ## 3 Method To pursue answer to our research questions, we designed and deployed a web-based questionnaire programmed in Qualtrics to gather statistical data about which demographics are vulnerable to SMiShing. ### SMiShing Assessment Design First, we developed a series of simulated text messages, half based on legitimate real-world SMS messages and half based on real-world fraudulent SMS messages, to test how accurately participants could assess which are really a SMiSh message. We drew on prior work such as [23, 24], along with actual SMS text messages provided by industry researchers or found in an internet search, in crafting each to include a URL, a type of entity likely to appear in either fraudulent or legitimate emails [23], mention of a reward motivator or a fear motivator to respond [23], and other "look and feel" clues to credibility [16] such as source identifiers, typos, and writing style. Each participant was randomly served three such messages out of 14 (7 fraudulent and 7 legitimate) and asked to rate them on a five-point scale: 1=Fraudulent, 2=Likely Fraudulent, 3=Not Sure, 4=Likely Legitimate, and 5=Legitimate. The messages are reproduced (Figure 1) and summarized (Table 1) below. Next, we piloted our questionnaire with in-person cognitive or "think aloud" interviews [22, 26] (\(N\)=2), review sessions with our study team (\(N\)=6) and remote surveys on Prolific (\(N\)=11). The most important piece of feedback we gathered was that knowing whether someone has an account with the entity in the SMS text message helps them judge whether it is fraudulent or legitimate. 
To address this feedback, we randomized all participants into two survey conditions: judging the SMS text messages as either "yourself" (described as whether you, the participant, had received the message on your phone) or as "Pat Jones" (adapted from [24], described as a staff member of Baton Rouge University who has many accounts and whose job makes it important to not fall for fraudulent text messages and to respond promptly to legitimate text messages). We also asked participants, at the end of each block of questions about an SMS text message, whether the entity mentioned was one that they had an account with, to be answered as "Yes," "No," or "Not Sure." ### Data Collection We hired Qualtrics to recruit a survey panel of at least 1,000 U.S. mobile phone users age 18 or older that roughly matched recent U.S. Census data for age, income, and education: 18-34: 30% / 35-54: 32% / 55+: 38%; <$50K: \(\sim\)35% / $50K-100K: \(\sim\)35% / 100K+: \(\sim\)30%; no college degree: 65% / 4-year degree or higher: 35%. Participants who met these quotas were asked a series of other demographic questions: their more-specific brackets for age, income, and education; their gender and racial/ethnic identities; household size; experience with handling sensitive data; and occupation status and job category, per the U.S. Bureau of Labor Statistics. At the end of the survey, participants were asked an attention-check question and a series of items to assess their internet and information-security experiences, attitudes, behavior intentions, and prior training on how to respond to phish and SMiSh. Along with demographics as identifiers, the questionnaire collected IP addresses and device metadata, to enable us to map responses and to test for effects from device modality and operating system. We did not collect other identifiers, to encourage our participants to answer freely and because other identification was not needed to answer our research questions. Our research design, recruitment and consent language, and survey and interview protocols were approved by our Institutional Review Board as an exempt study under Category 2 of the U.S. Revised Common Rule. See Appendix A.1 for the survey questions used in this study. ### Procedure Once the study team had reviewed the developed questionnaire and was satisfied with it, we provided Qualtrics with the URL to the coded online survey (Figure 2). Qualtrics then passed this link along to its third-party panel providers. Participants who clicked on the URL for the survey and click Yes for consent to participate were directed to a page to ask for their general demographic information and use of mobile phones. Those who checked boxes for demographic quotas that have already been reached, or who marked that they were under 18 or do not own a smartphone or feature phone, or who failed the CAPTCHA tests for fraud [13], were redirected to the Exit screen. This programming helped to ensure quality responses in the final dataset. The questionnaire accepted responses from June 26 to July 1, 2023. Once the quotas had been met, one member of the study team downloaded the responses to Microsoft Excel and conducted a visual inspection of the response patterns and typed answers to the open-ended text questions. Responses judged to be of bad quality responses were deleted. The master dataset was further cleaned, prepped for analysis, and uploaded to a secure, centralized cloud repository. ### Participants Qualtrics sourced responses from 1,000 people plus 1% overage. 
All had passed CAPTCHA fraud checks and answered affirmatively that they were U.S.-based internet users age 18 or older who owned a mobile phone and met our demographic quotas for age, education, and income. All had passed the attention-check item 2/3 of the way through the survey that directed them to answer with the 4th bullet point to retain their responses. The deletion of four responses with evidence of repeated nonsense copy-pastes into a text-input box resulted in a total dataset of N=1,007 (Tables 2-3). \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline \multicolumn{2}{c}{Age} & \multicolumn{2}{c}{Education} & \multicolumn{2}{c}{Household Inc.} & \multicolumn{2}{c}{Gender Identity} & \multicolumn{2}{c}{Hisp./Lat./Sp.?} & \multicolumn{2}{c}{Racial/Ethnic Identity} & \multicolumn{2}{c}{Household Size} \\ \hline 18-24 & 232 & No 4y deg. and 537 & \(<\) & \(\$\$26.5K\) & 188 & Female & 630 & No & 900 & White/Cauc. & 839 & 1 ppl. & 175 \\ 25-34 & 70 & not in school & poverty line & Male & 362 & Yes & 96 & Black/African & 106 & 2 ppl. & 320 \\ 35-54 & 312 & No 4y deg., but & 129 & \(\$\$26.5-$49K\) & 171 & Nonbinary & 10 & Prefer not & 10 & Asian – total for & 24 & 3 ppl. & 160 \\ 55-64 & 173 & in school & & \(\$\$50\)-\($599K\) & 358 & Self-described & 2 & to say & all regions & & 4 ppl. & 226 \\ 65+ & 219 & 4y deg., but no & 232 & \(\$100K+\) & 289 & Prefer not to & 2 & & Native Am. or & 9 & 5+ ppl. & 125 \\ & & doctorate & & & say & & & & Alaska Native & & \\ & & Has doctorate & 108 & & & & & & & Self-described & 15 & & \\ & & & & & & & & & & Prefer not to say & 13 & & \\ \hline \hline \end{tabular} \end{table} Table 2: Counts for major demographic characteristics of participants. A little over a quarter were younger than age 25. About half were not currently in school and had attained less than a four-year college degree. The majority reported a household income of at least $50,000 per year, and most reported living in a household with other people. For gender, the majority identified as female. For race and ethnicity, the majority identified as being not Hispanic, Latino or Spanish, and as being White or Caucasian. Figure 2: In our survey flow, participants who met the study qualifications were randomized into two conditions or “personas”: either to rate text messages as “yourself” or as the “Pat Jones” persona. All were randomly served three of 14 text messages to rate and answer questions about. The survey also collected demographic information and data about people’s security attitudes, their security behavior intentions, and their past experiences with security breaches and with training for security awareness and phishing / SMShing mitigation. about 75% of participants reported receiving at least "a little" security awareness training, and about one-third reported receiving training specifically to help "identify fraudulent links or other threats in text messages." Further, a little more than half (51.5%) reported spotting and actively avoiding clicking on a suspected SMiSh message in the past three months. Other responses were "No, I have not noticed any fraudulent links in email, text messages, or web posts" (15.4%), "Yes, but it turned out to be a test being conducted as part of security awareness training" (7.0%), "Yes, and it turned out to be a scam, but nothing bad happened, to my knowledge" (15.7%), "Yes, and it turned out to be a scam, and I suffered a bad outcome (such as malware or theft of account credentials)" (4.4%), and Not Sure (6.1%). 
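To make the survey flow in Figure 2 concrete, the following Python fragment is a minimal sketch of the assignment step (our illustration only; the actual randomization was implemented in Qualtrics survey logic): each participant is randomized into one of the two persona conditions and served three of the 14 simulated messages.

```python
import random

# Minimal sketch of the survey-flow assignment described in Figure 2 (not the
# actual Qualtrics logic): one persona condition and three of the 14 messages.
MESSAGES = [f"R{i}" for i in range(1, 8)] + [f"F{i}" for i in range(1, 8)]  # 7 "real" + 7 "fake"

def assign_participant(rng):
    return {
        "persona": rng.choice(["yourself", "Pat Jones"]),
        "messages": rng.sample(MESSAGES, k=3),   # three distinct messages per participant
    }

rng = random.Random(2023)   # seed chosen arbitrarily for this example
print(assign_participant(rng))
```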
### Data Analysis

We calculated descriptive and inferential statistics and drew figures using IBM SPSS and Microsoft Excel. The main tests used were one-way analyses of variance (ANOVAs), with post-hoc tests for pairwise comparisons, and multi-step linear and logistic regressions. The latter tests were used to assess the degree to which a predictor variable significantly accounted for variances in rating score and correctness, and to compare a control regression model with one that added a new predictor variable. We used model fit and 95% confidence intervals to assess statistical significance. To score the SMiShing assessment described in Section 3.1, we counted as correct any answer for a simulated "fake" text message (F1-7) that was rated "Fraudulent" or "Likely Fraudulent," and any answer for a simulated "real" text message (R1-7) that was rated "Legitimate" or "Likely Legitimate." We used this scheme to compute a categorical variable, CORRECT, used as the outcome variable in logistic regressions; and a continuous variable, SCORE, used as the outcome variable in linear regressions. For CORRECT, we assigned a value between 3 and 0 depending on whether the participant had rated three, two, one, or zero messages correctly. For SCORE, we reverse-coded answers on the fraudulent text messages, then computed the average of the participant's ratings on the 1-5 scale of their assigned three messages, with a possible range of 1.00 (representing all messages being rated incorrectly and with confidence in that incorrectness) to 5.00 (representing perfect correctness and confidence in these correct ratings).

## 4 Results

### RQ1: Accuracy in Identifying Simulated SMiSh vs. 'Real' Text Messages

Overall, participants correctly identified whether the messages were legitimate or fraudulent 52.6% of the time, calculated by dividing the number of correct ratings (Likely Fraudulent or Fraudulent for the "fake" messages, or Likely Legitimate or Legitimate for the "real" ones) by the total number of messages seen. We found that participants did much better at correctly identifying the simulated "fake" text messages (81.4%) than at correctly identifying the simulated "real" ones (23.5%) (Figure 3). Most participants reported receiving at least "a little" security training, which may have contributed to the high rates at which they could correctly spot the SMiSh. However, it may have also led them to over-correct and misidentify the legitimate messages, as happened in Sheng et al.'s study of phishing vulnerability and educational outcomes [24]. Among the simulated "real" text messages, only R6 (the simulated "Amber Alert" text message) was correctly identified as Likely Legitimate or Legitimate by a majority (61.3%) of those who saw it, followed by R5 (with the Phone Contact identifier and link to a popular video platform) (32.9%). Among the simulated "fake" messages, participants did the best at correctly rating F5 (with the cryptic suggestion that the receiver's face was identifiable in nude images) as Likely Fraudulent or Fraudulent (88.0%). Participants tended to reply "Not Sure" more often for the simulated "real" text messages than for the simulated "fake" ones, suggesting that there were fewer interface indicators available to guide their judgments about legitimacy. Figure 4 shows a side-by-side comparison of messages, ordered by pairs of "real" and "fake" variations on similar entities, subjects, and/or motivations as described in Section 3.1, Figure 1 and Table 1.
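The overall accuracy figures above are computed from the CORRECT and SCORE variables defined in the Data Analysis subsection. As a concrete illustration, the following minimal Python sketch (not the authors' code) shows how the two variables could be derived for one participant; the 1-5 numeric coding of the rating scale (1 = Fraudulent, 5 = Legitimate) and the convention that message IDs beginning with "F" denote the simulated fraudulent messages are assumptions made for this example.

```python
# Minimal sketch of the CORRECT and SCORE variables (not the authors' code).
# Assumed coding: 1 = Fraudulent, 2 = Likely Fraudulent, 3 = Not Sure,
# 4 = Likely Legitimate, 5 = Legitimate; IDs starting with "F" are the
# simulated fraudulent ("fake") messages, "R" the simulated legitimate ones.

def score_participant(ratings):
    """ratings: mapping of message ID to a 1-5 rating, e.g. {"F3": 1, "R5": 4}."""
    correct = 0   # CORRECT: number of the assigned messages rated correctly (0-3)
    coded = []    # reverse-coded ratings feeding the continuous SCORE
    for msg_id, rating in ratings.items():
        if msg_id.startswith("F"):            # simulated fraudulent message
            coded.append(6 - rating)          # reverse-code so correctness scores high
            correct += rating <= 2            # Fraudulent / Likely Fraudulent
        else:                                 # simulated legitimate message
            coded.append(rating)
            correct += rating >= 4            # Legitimate / Likely Legitimate
    score = sum(coded) / len(coded)           # SCORE range: 1.00 (all wrong) to 5.00
    return correct, score

# One fake message missed (rated Likely Legitimate), the other two rated correctly.
print(score_participant({"F3": 4, "R5": 5, "F6": 1}))   # -> (2, 4.0)
```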
#### 4.1.1 How and Why Participants Said They Would Respond to a Given Message

When asked how they would respond to any given simulated "fake" message (Figure 5), a minority of participants indicated they would report the message using device options such as clicking Block This Caller or Report Junk (38.7%), while a majority said they would delete it and/or ignore it (73.3%). Responses were similar for the "real" messages (25.4% and 61.3%, respectively), which participants often incorrectly identified as SMiSh or likely SMiSh. While relatively few people selected "Reply to SMS text message to provide information" for the simulated "fake" messages (5.3%), some indicated that they would reply with STOP, BLOCK or other codes (17.5%). This still may accomplish the goal of the SMiSh attacker, since they may be testing the number to see if it remains in service and would be useful for a future scam [23]. Few participants said that they would respond in other ways that could meet the attacker's goals: click on the link (6.3%), forward the message to someone else (3.3%), or keep, save or archive the message (4.9%). A minority said they would check the link on device, either by copy-pasting or typing the link into their phone's web browser (11.4%). Checking the link is a strategy recommended for phishing detection and mitigation [2, 9, 24], but is more easily accomplished on a larger device.

Figure 4: A side-by-side comparison of what percentage of participants who saw a given text message rated them correctly, ordered by pairs of "real" and "fake" variations on similar entities, subjects, and/or motivations (Figure 1 and Table 1). A majority who saw the government-entity messages (R6, the "Amber Alert" message, and F6, the "tax audit and asset freeze" message) rated them correctly.

Figure 3: A majority of participants correctly rated all 7 simulated "fake" SMS text messages as Likely Fraudulent or Fraudulent. For 5 of 7 simulated "real" text messages, a majority of participants incorrectly rated them as Likely Fraudulent or Fraudulent.

When asked why they would respond a certain way (Figure 6), participants' responses were similar for the fraudulent messages and the legitimate ones on four measures: sense of urgency (13.0% for "fake" vs. 14.2% for "real"), curiosity (10.9% for "fake" vs. 12.6% for "real"), seeking a good outcome for myself (12.3% for "fake" vs. 12.7% for "real"), and lack of interest in the message (36.4% for "fake" vs. 32.1% for "real"). Even for the legitimate messages, participants reported little trust in the sender (14.1%, versus 10.3% for the "fake" messages) or in the link URL (9.6%, versus 7.7% for the "fakes"), suggesting that these source indicators were only of marginal help in participants' assessments. Slightly more than half of participants who saw fraudulent messages reported "seeking to avoid a bad outcome for myself" as a reason for their response (50.7%), though a significant minority also reported this for the legitimate messages (47.0%).

Figure 5: Counts of how many participants who saw a simulated text message indicated that they would respond with the given action. (The "Other" reasons are listed in Appendix A.2.) While relatively few people selected "Reply to SMS text message to provide information" for the simulated "fake" messages, a number indicated that they would reply with STOP, BLOCK or other codes.
This still may accomplish the goal of the SMiSh attacker, since they may be testing the number to see if it remains in service and would be useful for a future scam. #### 4.1.2 Influence of Persona Assignment or Account Knowledge on Accuracy of Ratings Next, to delve more deeply into the above results, we conducted tests to assess whether two variables that we theorized would influence people's correctness ratings- persona assignment and self-reported account knowledge - had statistically significant effects. Most tests revealed no significant difference in correctness depending on whether the participants had answered the questions as "yourself" or as "Pat Jones." We did, however, find a significant difference in CORRECT values by whether participants had answered Yes to a question asking them whether they knew that they had an account with the named entity (Table 4). Those in the "Pat Jones" condition were slightly yet significantly more likely to answer Yes to this question than those in the "yourself" group: t(1004)= -2.859, \(p\)=.004. This certainty of knowledge seemed to explain many cases where participants correctly identified the simulated "real" messages as Legitimate or Likely Legitimate, and is consistent with data collected during the survey pilots (Section 3.1). More concerning was that participants who answered Yes to the accounts-knowledge question were significantly _less_ likely to rate a _"fake"_ message correctly as Fraudulent or Likely Fraudulent. These results suggests that, in participants' minds, thinking that they had an account with the entity overrode all caution gleaned from prior experience or training, or effort to examine the source identifiers for clues as to whether the given text message was a SMiSH attack or legitimate text message. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline & & \multicolumn{2}{c}{Added} & \multicolumn{5}{c}{Odds ratio} & \multicolumn{2}{c}{95\% CI} & -2 log \\ ID & Model & predictor & \(b\) & SE & Wald & df & \(p\) & (exp(\(\beta\))) & Lower & Upper & likelihood \\ \hline R1 & 1 & Acct. Know. & **3.136** & 0.58 & 29.198 & 1 & **\textless{.001}** & **23.002** & 7.376 & 71.729 & 89.833 \\ & 2 & Persona & -0.352 & 0.566 & 0.387 & 1 & 0.534 & 0.703 & 0.232 & 2.132 & 89.443 \\ F1 & 1 & Acct. Know. & **-1.730** & 0.343 & 25.419 & 1 & **\textless{.001}** & **0.177** & 0.090 & 0.347 & 212.509 \\ & 2 & Persona & -0.111 & 0.344 & 0.105 & 1 & 0.746 & 0.895 & 0.456 & 1.757 & 212.404 \\ \hline R2 & 1 & Acct. Know. & **2.843** & 0.431 & 43.443 & 1 & **\textless{.001}** & **17.164** & 7.370 & 39.97 & 170.169 \\ & 2 & Persona & -0.116 & 0.428 & 0.074 & 1 & 0.786 & 0.89 & 0.384 & 2.061 & 170.095 \\ F2 & 1 & Acct. Know. & **-1.787** & 0.349 & 26.19 & 1 & **\textless{.001}** & **0.167** & 0.084 & 0.332 & 212.602 \\ & 2 & Persona & -0.193 & 0.344 & 0.313 & 1 & 0.576 & 0.825 & 0.42 & 1.619 & 212.289 \\ \hline \hline \end{tabular} \end{table} Table 4: Results for logistic regression models estimating how likely a participant was to have correctly rated the given message, with the first added predictor being that they said Yes to a question asking whether they thought that they had an account with the given entity (Model 1), and the second added predictor being whether they were told to answer as “yourself” vs. “Pat Jones” (Model 2). 
A positive \(b\) coefficient is associated with higher-scoring participants being part of the reference group (Yes to account knowledge, or the “yourself” persona), while a negative \(b\) is associated with higher-scoring participants being in the non-reference group (No/Not Sure for account knowledge, or the “Pat Jones” persona). The Wald statistic tests for a significant difference in \(b\) from 0 at the \(<\).05 level (bolded). The odds ratio shows whether the reference group is more (\(>\)1.00) or less (\(<\)1.00) likely to have scored the message correctly, with a ratio above 3.000 or below 0.333 denoting a strong relationship between the predictor and the correct score [17]. This ratio is significant if the 95% CI does not include 1.000 (\(a\) =.05, bolded). Finally, a lower -2 log likelihood statistic indicates better model fit. Figure 6: Counts of the reasons that participants selected their given responses to each simulated text message (Figure 5). Of the fraudulent messages, F3 (a fake retailer job offer) and F4 (a fake “security scam” to download malware) succeeded the most at inspiring a sense of urgency to respond. (The “Other” reasons are listed in Appendix A.2.) ### RQ2 and RQ3: Differences in Message Scores Among Comparison Groups #### 4.2.1 RQ2: SMiShing Vulnerability by Demographics Controlling for account knowledge, we found that variances in participants' message scores could be significantly explained by their different age brackets, by whether they reported currently studying for a four-year degree, and by whether they reported holding a job in the Educational Instruction and Library category (Table 5). The effect of having a doctorate, which was discovered to be significantly associated with falling for SMiSh in a prior study's sample [23], just missed being significant at the p\(<\).05 level in our study once account knowledge was controlled for, as did age younger than 35, general employment status and the Office/Administrative job category. We found no significant effects on SCORE at the _p\(<\)_20 level when controlling for account knowledge for the following demographics: income level, gender identity, Hispanic/Latincx/Spanish identity, other racial or ethnic identity, household size, or mobile phone type or usage. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Added} & \multicolumn{2}{c}{Standardized} & \multicolumn{2}{c}{95\% CI} \\ ID & Model & predictor & \(b\) & SE & Wald & df & \(p\) & (exp(\(\beta\))) & Lower & Upper & likelihood \\ \hline R3 & 1 & Acct. Know. & **1.363** & 0.562 & 5.889 & 1 & **0.015** & **3.906** & 1.300 & 11.742 & 107.474 \\ & 2 & Persona & 0.453 & 0.545 & 0.690 & 1 & 0.406 & 1.573 & 0.540 & 4.577 & 106.768 \\ F3 & 1 & Acct. Know. & **-1.157** & 0.37 & 9.801 & 1 & **0.002** & **0.314** & 0.152 & 0.649 & 198.832 \\ & 2 & Persona & 0.119 & 0.36 & 0.109 & 1 & 0.742 & 1.126 & 0.556 & 2.282 & 198.723 \\ \hline R4 & 1 & Acct. Know. & **2.618** & 0.555 & 22.238 & 1 & **<001** & **13.715** & 4.619 & 40.721 & 101.584 \\ & 2 & Persona & **1.378** & 0.591 & 5.429 & 1 & **0.020** & **3.967** & 1.245 & 12.643 & 95.503 \\ F4 & 1 & Acct. Know. & **-1.970** & 0.427 & 21.264 & 1 & **<001** & **0.139** & 0.060 & 0.322 & 170.284 \\ & 2 & Persona & 0.048 & 0.395 & 0.015 & 1 & 0.903 & 1.049 & 0.484 & 2.277 & 170.269 \\ \hline R5 & 1 & Acct. Know. 
& **2.087** & 0.326 & 41.058 & 1 & **<001** & **8.063** & 4.258 & 15.268 & 232.037 \\ & 2 & Persona & -0.228 & 0.325 & 0.491 & 1 & 0.484 & 0.796 & 0.421 & 1.506 & 231.544 \\ F5 & 1 & Acct. Know. & **-2.924** & 0.495 & 34.900 & 1 & **<001** & **0.054** & 0.020 & 0.142 & 114.696 \\ & 2 & Persona & 0.030 & 0.493 & 0.004 & 1 & 0.952 & 1.030 & 0.392 & 2.710 & 114.692 \\ \hline R6 & 1 & Acct. Know. & 0.627 & 0.322 & 3.796 & 1 & 0.051 & 1.873 & 0.996 & 3.52 & 292.464 \\ & 2 & Persona & 0.171 & 0.278 & 0.378 & 1 & 0.539 & 1.187 & 0.688 & 2.047 & 292.086 \\ F6 & 1 & Acct. Know. & **-2.143** & 0.413 & 26.965 & 1 & **<001** & **0.117** & 0.052 & 0.263 & 161.237 \\ & 2 & Persona & -0.412 & 0.421 & 0.958 & 1 & 0.328 & 0.662 & 0.290 & 1.511 & 160.268 \\ \hline R7 & 1 & Acct. Know. & **2.440** & 0.468 & 27.135 & 1 & **<001** & **11.476** & 4.582 & 28.744 & 177.183 \\ & 2 & Persona & 0.163 & 0.373 & 0.191 & 1 & 0.662 & 1.177 & 0.567 & 2.445 & 176.992 \\ F7 & 1 & Acct. Know. & **-1.917** & 0.371 & 26.742 & 1 & **<001** & **0.147** & 0.071 & 0.304 & 193.778 \\ & 2 & Persona & -0.252 & 0.362 & 0.486 & 1 & 0.486 & 0.777 & 0.383 & 1.579 & 193.290 \\ \hline \hline \end{tabular} \end{table} Table 5: Selected demographic SCORE predictors in linear regression models, using account knowledge as a control predictor at a prior step. The \(\mathbf{A}\) R\({}^{2}\) column shows the additional variance in SCORE added by the predictor vs. the control model. The standardized parameter estimate (\(\beta\)) shows the strength and direction of the predictor’s effect on SCORE. Predictors with test statistics significant at the p\(<\).05 level are and bolded. Control statistics are omitted for brevity. Using a Bonferroni correction to adjust \(p\) values for increase in Type I error risk, we further compared subgroups of our demographic variables to test for statistically significant differences in SCORE. We found no significant pairwise comparisons by job category or employment status. While we found no pairwise comparisons by age bracket that were significant at the adjusted \(p\) value, mean SCORE values show a clear positive association with an overall increase in age (Figure 7). We did find two pairs of pairwise comparisons that were significant at the Bonferroni-adjusted p value: between those In School for a 4-year Degree vs. Not In School, No 4-year Degree; and those In School for a 4-year Degree vs. 4-year Degree and No Doctorate (Figure 8). In fact, mean SCORE values for participants who were In School for a 4-year Degree had a _negative_ association with Account Knowledge (Table 6). It adds evidence to our theory that, in some participants' minds, thinking that they had an account with the entity overrode all other considerations as to whether the message was a SMiSH attack or legitimate text message (Section 4.1.2). #### 4.2.2 RQ2: SMiShing Vulnerability by Security-Relevant Training or Experiences Controlling for account knowledge, we found that variances in participants' message scores were significantly associated with the frequency of their personal experiences of security breaches, the amount of their security awareness training, and their scores on the Security Behavior Intentions Scale (SeBIS) [11] subscale for Proactive Awareness (Table 7). 
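The two-step comparisons reported in Tables 4-7 follow the same pattern: a control model containing only account knowledge, and a second model adding one predictor, compared on model fit and the added variance explained. The sketch below is a minimal illustration of that procedure (not the authors' SPSS workflow); the data frame and its column names, such as "score", "acct_know", and "sebis_pa", are hypothetical.

```python
# Minimal sketch of the control-plus-one-predictor regression comparison
# used for SCORE (not the authors' SPSS syntax). Column names are hypothetical:
# "score" (1.00-5.00), "acct_know" (account-knowledge control), and
# "sebis_pa" (SeBIS Proactive Awareness subscale mean).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")      # hypothetical cleaned export

control = smf.ols("score ~ acct_know", data=df).fit()          # Step 1: control model
full = smf.ols("score ~ acct_know + sebis_pa", data=df).fit()  # Step 2: add predictor

print(full.rsquared - control.rsquared)       # Delta R^2 added by the predictor
print(full.params["sebis_pa"], full.pvalues["sebis_pa"])       # b and its p-value
print(full.conf_int().loc["sebis_pa"])                         # 95% CI for b
```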
The association of SCORE with all three security-relevant variables was negative - in other words, a low SCORE on the SMiShing assessment was significantly _more_ likely from a participant who reported frequently experiencing security breaches, receiving a greater-than-average amount of security training, or taking greater-than-average care to keep alert for phishing and other scams online. We found no significant associations with SCORE at the \(p\)\(<\)20 level when controlling for account knowledge for the following variables: frequency of hearing or seeing news about security breaches, amount \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{4}{c}{SCORE} & \multicolumn{2}{c}{95\% CI} \\ Educ. Level & Acct. Know. & Mean & SE & Lower & Upper \\ \hline Not In School & 0.00 & 3.283 & 0.045 & 3.194 & 3.373 \\ and Has No & 0.33 & 3.507 & 0.075 & 3.361 & 3.654 \\ Degree & 0.67 & 3.307 & 0.117 & 3.078 & 3.536 \\ & 1.00 & 3.380 & 0.154 & 3.078 & 3.681 \\ \hline **In School for** & **0.00** & **3.271** & 0.112 & 3.052 & 3.490 \\ **a 4-year** & **0.33** & **3.171** & 0.136 & 2.904 & 3.437 \\ **Degree** & **0.67** & **3.086** & 0.242 & 2.612 & 3.560 \\ & **1.00** & **2.709** & 0.166 & 2.383 & 3.035 \\ \hline Has 4-year & 0.00 & 3.471 & 0.073 & 3.329 & 3.614 \\ Degree and & 0.33 & 3.564 & 0.109 & 3.349 & 3.779 \\ No Doctorate & 0.67 & 3.283 & 0.174 & 2.943 & 3.624 \\ & 1.00 & 3.161 & 0.208 & 2.753 & 3.569 \\ \hline Has a & 0.00 & 3.329 & 0.147 & 3.041 & 3.618 \\ Doctorate & 0.33 & 3.170 & 0.158 & 2.861 & 3.479 \\ (PhD, EdD, etc.) & 0.67 & 3.300 & 0.226 & 2.857 & 3.743 \\ & 1.00 & 3.061 & 0.145 & 2.776 & 3.347 \\ \hline \hline \end{tabular} \end{table} Table 6: For those reporting being In School for a 4-Year Degree, mean SCORE values decreased for each message entity that they thought they had an account with. This adds evidence that, in some participants’ minds, thinking that they had an account with the entity in the simulated text message overrode all other considerations in judging it to be legitimate or fraudulent (Section 4.1.1). Figure 8: Mean SCORE values for those who are In School for a 4-year Degree were significantly lower (as shown by a Bonferroni-adjusted \(p\) value) than mean SCORE values for those who are Not In School and have No 4-year Degree, and from those who have a 4-year Degree and No Doctorate. This suggests that college students are a demographic group that is vulnerable to SMiShing. of experience working with sensitive data, whether they reported clicking on SMiSh in the past three months, whether they specifically had received training on spotting and dealing with fraudulent text messages (included in Table 7), and scores on the Social Strategy subscale of the recently published Smartphone Security Behavior Scale (SSBS) [19]. Using a Bonferroni correction to adjust \(p\) values for increase in Type I error risk, we further compared subgroups of our security-relevant variables to test for statistically significant differences in SCORE. While we found no pairwise comparisons by frequency of personal security breache experiences that were significant at the adjusted \(p\) value, mean SCORE values show a clear negative association with breach experience frequency (Figure 9). We found two pairwise comparisons on mean SCORE values in Figure 9 for amount of security awareness training that were significant at the Bonferroni-adjusted p value: between those with "None at all" vs. "A great deal" and those with "A moderate amount" vs. 
"A great deal" (Figure 10). While it is possible that trainees are taking away the wrong lessons from this instruction, it is also possible that those who are more vulnerable to social engineering attacks such as SMiSh do end up receiving more training - thus accounting for why those with "a great deal" of training scored significantly poorly. Finally, we also compared the mean SeBIS subscale values (possible range: 1.00 to 5.00) according to how many messages each participant had correctly rated (possible range: 0 to 3). Three pairwise comparisons are significant at the Bonferroni-adjusted p value: between those who rated 0 and 2 correctly, between those who rated 0 and 3 correctly, and beween those who rated 1 and 3 correctly (Figure 11). Together with the finding on security awareness training and those at the top of Section 4, it suggests an "expertise bias" - that those who are expert in staying alert for social engineering \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \(\Delta\) R\({}^{2}\) & \multicolumn{2}{c}{Unstandardized est.} & \multicolumn{2}{c}{Standardized} & \multicolumn{3}{c}{95\% CI} \\ Predictor & vs. control & b & SE & estimate (\(\beta\)) & t & p & Lower & Upper \\ \hline Breach-Personal & **0.007** & -0.064 & 0.024 & **-0.088** & -2.712 & **0.007** & -0.111 & -0.018 \\ Breach-Close tie & 0.002 & -0.037 & 0.023 & -0.050 & -1.582 & 0.114 & -0.082 & 0.009 \\ Security Training & **0.004** & -0.045 & 0.021 & **-0.069** & -2.115 & **0.035** & -0.086 & -0.003 \\ SMiSh Training & 0.000 & -0.022 & 0.058 & -0.012 & -0.372 & 0.710 & -0.136 & 0.092 \\ Security Attitude & 0.002 & -0.050 & 0.037 & -0.043 & -1.351 & 0.177 & -0.124 & 0.023 \\ SeBIS subscale & **0.024** & -0.147 & 0.03 & **-0.164** & -4.944 & **<.001** & -0.206 & -0.089 \\ \hline \hline \end{tabular} \end{table} Table 7: Selected security-relevant SCORE predictors in linear regression models, using account knowledge as a control predictor at a prior step. The \(\Delta\) R\({}^{2}\) column shows the additional variance in SCORE added by the predictor vs. the control model. The standardized parameter estimate (\(\beta\)) shows the strength and direction of the predictor’s effect on SCORE. Predictors with test statistics significant at the p\(<\)-05 level are and bolded. Control statistics are omitted for brevity. Figure 9: The line chart shows a negative association between participants’ mean SCORE values and the frequency with which they reported experiencing security breaches. While no pairwise comparisons are significant at a Bonferroni-adjusted \(p\) value, linear regression found these differences to be significant overall when controlling for account knowledge (Table 7). It suggests that those who struggle with correctly distinguishing legitimate from scam text messages are more vulnerable to harms than others. may be over-correcting, identifying too many messages as fraudulent and too few as legitimate. However, more research is needed in these cases to determine whether a cause-and-effect relationship exists and what explains it, as this cross-sectional study can only determine what significantly accounts for variances in message ratings. ## 5 Discussion Our results above show that, first, participants across all demographic groups struggled to correctly identify legitimate text messages regardless of source indicators that are available today in U.S. messaging interfaces, and that they fell for the simulated SMiSH if the message entity was one that they thought they would have an account with. 
Second, when drawing comparisons and controlling for account knowledge, we found that younger people and college students in our sample are the most vulnerable demographics for SMiShing attacks. Third, while significantly high scores for those with an Educational Instruction and Library job suggests that a non-security expertise in perceiving and judging information credibility is protective against SMiShing, we found evidence that a security expertise (as shown by high self-reported levels of security training and awareness) was associated with increased vulnerability to SMiShing. Figure 11: The line chart shows a negative association between participants’ mean values on the SeBIS Proactive Awareness subscale (from 1.0000 to 5.0000) [11] and the number of text messages that they correctly rated. Linear regression found the overall differences to be significant when controlling for account knowledge (Table 7). More research is needed to determine the reason for this association. Figure 10: The line chart shows a negative association between participants’ mean SCORE values (from 1.0000 to 5.0000) and the amount of formal security training that they have received on the job or in school. Linear regression found the overall differences to be significant when controlling for account knowledge (Table 7). More research is needed to determine the reason for this association. Based on these results, we recommend, first, that U.S. cellular and business regulators work with usability experts to design a verification system and trust indicators to highlight verified sources for SMS mobile messages. This would have the impact of making the SMS text system far more usable for consumers, as people of any expertise or skill would be easily able to see at a glance whether they could trust the source of a commercial message. It could reduce the frequency with which scammers could trick mobile phone users by faking the name of a well-known entity with an easy-to-miss misspelling, such as "Amazom" or "Facebo0k Security." It could reduce the frequency of mobile phone users experiencing security breaches through reducing their susceptibility to falling for SMiSh. We discuss this in Section 5.1. Second, we see the need for developing more-nuanced messaging and education in how to perceive and judge information credibility, especially for young adults and college students. However, our results also suggest that current security messaging and training is falling short of helping people strike the right balance in their judgments of SMiShing. So, we recommend more research to determine whether a cause-and-effect relationship exists between high levels of security awareness training and vulnerability to SMiShing and what explains it, as this cross-sectional study can only determine what significantly accounts for variances in message ratings. We discuss this in Section 5.2. ### Helping Mobile Users with Identifying Legitimate Senders Our first recommendation is that U.S. cellular and business regulators work with usability experts to design a verification system and trust indicators to highlight verified sources for SMS mobile messages. This would have the impact of making the SMS text system far more usable for consumers, as people of any expertise or skill would be easily able to see at a glance whether they could trust the source of a commercial message. 
It could reduce the frequency with which scammers could trick mobile phone users by faking the name of a well-known entity with an easy-to-miss misspelling, such as "Amazom" or "Facebo0k Security." It could reduce the frequency of mobile phone users experiencing security breaches through reducing their susceptibility to falling for SMiSh. For such indicators to be reliable and trustworthy, they will need to be linked to a back-end verification system that cannot be easily "gamed" or hacked. (An example of how this can go wrong is the Twitter microblogging app's 2022 change to the "blue checkmark" verification rules, which enabled impersonation of the Eli Lilly & Co. branded account for a small payment and sparked a U.S. financial and political uproar [29].) There already exist some verification systems and indicators that governments and telecom providers around the world use to signal to mobile phone users that some messages should be trusted. In India, the Telecom Regulatory Authority of India (TRAI) has mandated the use of a special header for all bulk SMS messages sent by government agencies, banks, and other entities [35]. The header, and SMS short code, used in this header is different from the header and identifiers displayed in unverified messages (Figure 12). In Singapore and Australia, the governments use digital signatures that attach a cryptographic code to send secure and verified messages to citizens, through SingPass [36] and myGov Inbox [37], respectively. In South Korea, mobile network operators have been authorized to perform Pass identity verification in the form of challenge questions and responses through text messages [21]; in return, the mobile operators are allowed to collect and retain personal data. In the U.S., the Federal Communications Commission (FCC) has adopted a framework called STIR/SHAKEN that combats spoofed robocalls [38]. Using the framework, calls can be "signed" as legitimate and validated by the originating telecom providers, then digitally validated as the call is handed off among interconnected phone networks. We think this or a similar framework could be leveraged to mark some SMS text messages as coming from a verified sender, either with a special SMS header similar to India's, or a graphical visual cue, such as a green checkmark or star emoji. ### Investigating Latent Factors and Improving Education and Training for SMiSh Taking the results as a whole, we suspect the existence of a "security expertise bias," in which those who perceive themselves to be expert in staying alert for social engineering may be over-correcting and identifying too many messages as fraudulent and too few as legitimate, vs. those who with a general non-security expertise with vetting information. Sheng et al. documented similar outcomes when testing the effect of phishing interventions such as a comic strip and a quiz game [24]. With our results, one explanation is that the participants are coming away from security awareness training with either too-simplistic understandings of what signifies a threat or misunderstandings of how to judge a legitimate message (ex: taught to look for an entity that they know they have an account with, but not how to reason about whether the message is spoofing that entity, or without enough reinforcement that they remember to check for spoofing). 
However, it is also possible that those who are more vulnerable to social engineering attacks such as SMiSh simply end up receiving more training - thus accounting for why those with "a great deal" of training scored significantly poorly. We recommend that researchers conduct further studies to determine whether a cause-and-effect relationship exists and whether deficiencies exist in commonly used materials for security awareness training when it comes to SMiSh. We also note that training is only one of the components of how people determine source credibility in contexts such as unsolicited text messages. Birnbaum and Stegner broke credibility or "believability" into three constructs: expertise, bias, and the person's point of view within an interaction [4], with expertise comprised further of training, experience and ability. Their experiments, in which participants were given various types of information to judge a used car's sale value, found evidence that expertise will magnify the effects of bias in how much weight people give to various information sources. Our finding that participants in Educational Instruction and Library jobs did significantly better at judging Figure 12: A screenshot of an Indian national’s SMS inbox. The “XX-” prefixes indicate that TRAI has verified the message sender. SMiShing may point to this holistic view of credibility as useful in a SMiShing context. We theorize that such participants have had substantial amounts of training, experience, and ability in how to weigh source indicators and some amount of self-awareness about possible biases in their own thinking. We recommend that security educators explore methods to embed training within other contexts for boosting people's ability to correctly judge information, or that they examine this non-security instruction for constructs that are useful to boost the effectiveness of security training, or both. Finally, we are alarmed to see that, in our study, younger people and those in school for a four-year degree were found to be significantly vulnerable to falling for SMiSh. We recommend that U.S. high schools, colleges, and universities either implement or adjust their training for information security so that students gain a more-sophisticated understanding of how to spot a fraudulent text message -- and what indicates that the text message is likely legitimate. ## 6 Limitations and Future Work Our survey provides useful statistical data for assessing how well U.S. participants were able to distinguish fraudulent from legitimate text messages. This cross-sectional design is not sufficient to establish cause-and-effect. In future work, we will recruit participants for in-depth interviews to get more context around how and why they rate the simulated text messages as being either fraudulent or legitimate; and how recently in time, and to what extent, they received security-relevant training. Our survey's results also suggest that users need help to identify legitimate messages more readily. In future work, we will test interface design improvements for mobile and wearable interfaces, such as indicators or a naming scheme for the SMS short codes, that can provide prominent cues to which SMS text messages are from legitimate sources. We practiced a careful method of iterative survey development to ensure its clarity and comprehensibility, and the anonymous method encouraged full honesty in answers. 
However, like all survey studies, ours is subject to a number of biases, such as self-report bias and social desirability bias, that may have skewed the results. A replication of this survey would help to validate the results and interpretations of these data. Finally, we developed a useful way to simulate SMiSh without sending participants unsolicited text messages that could have panicked them or led them to feel tricked once debriefed. In a future study, we may explore how to conduct a more true-to-life SMiSh test similar to Rahman et al. (2023) that minimizes harms and boosts ecological validity.

## 7 Conclusions

In this study, we collected and analyzed data from a survey panel of N=1,007 U.S. adult mobile phone users. We found that younger people and college students were significantly vulnerable to SMiSh, that participants overall struggled to identify legitimate text messages, and that participants were easily misled if the fraudulent text messages mentioned an entity that they thought they had an account with. Our study contributes up-to-date knowledge of demographic susceptibility to scam messages for the era of mobile phones and widespread use of remote messaging, and examples of simulated "real" and "fake" text messages and a survey protocol for use in research on SMiShing. Finally, we provide recommendations based on our data for use by researchers, regulators, and telecom providers. We hope these findings, and any future work based on them, will meaningfully improve the user experience and security of the U.S. mobile internet.

## Acknowledgments

We are grateful to Carrie Gates and Guy V. Pearson of Bank of America and to Kaylei Goff of Winthrop University for their invaluable help with designing and carrying out this research, and to Jacqueline White for her feedback. This study was funded by the Center for Cybersecurity Analytics and Automation ([https://www.ccaa-nsf.org/](https://www.ccaa-nsf.org/)).
2309.10540
A Simple Solvable Model for Heavy Fermion Superconductivity from the Two-Fluid Normal State
We propose an exactly solvable momentum-space Kondo-BCS model to study heavy fermion superconductivity. The Kondo interaction is local in momentum space, which can be derived from an Anderson lattice with a Hatsugai-Kohmoto interaction between $f$-electrons. By increasing the Kondo interaction, the model exhibits a crossover from a weak-coupling BCS superconductor to a strong-coupling heavy fermion superconductor featured with a large gap ratio and a large specific heat jump anomaly. Accordingly, the normal state evolves from a Fermi liquid above the BCS superconductor to a non-Fermi liquid two-fluid state above the heavy fermion superconductor. The two-fluid normal state also leads to two types of Cooper pairs, one between conduction electrons, the other between composite fermions formed by conduction electrons and $f$-spins, which is responsible for the strong coupling behaviors of heavy fermion superconductivity.
Jiangfan Wang, Yu Li, Yi-feng Yang
2023-09-19T11:40:07Z
http://arxiv.org/abs/2309.10540v1
# A Simple Solvable Model for Heavy Fermion Superconductivity from the Two-Fluid Normal State

###### Abstract

We propose an exactly solvable momentum-space Kondo-BCS model to study heavy fermion superconductivity. The Kondo interaction is local in momentum space, which can be derived from an Anderson lattice with a Hatsugai-Kohmoto interaction between \(f\)-electrons. By increasing the Kondo interaction, the model exhibits a crossover from a weak-coupling BCS superconductor to a strong-coupling heavy fermion superconductor featured with a large gap ratio and a large specific heat jump anomaly. Accordingly, the normal state evolves from a Fermi liquid above the BCS superconductor to a non-Fermi liquid two-fluid state above the heavy fermion superconductor. The two-fluid normal state also leads to two types of Cooper pairs, one between conduction electrons, the other between composite fermions formed by conduction electrons and \(f\)-spins, which is responsible for the strong coupling behaviors of heavy fermion superconductivity.

_Introduction._--Heavy fermion compounds are prototypical strongly correlated electron systems that exhibit unconventional superconductivity [1; 2; 3]. Due to strong electron correlations and the interplay between multiple degrees of freedom, heavy fermion superconductors (HFSCs) are known for their rich diversity of order parameter structures and pairing mechanisms [4; 5; 6; 7; 8; 9; 10; 11; 12]. Despite such diversity, they share important macroscopic features. For example, they are often formed out of an unusual two-fluid normal state, where physical quantities are contributed by two different parts: a Kondo liquid part associated with the itinerant heavy electrons, and a classical spin liquid part associated with the residual unhybridized local spins [13; 14; 15; 16; 17]. The two-fluid normal state is also responsible for the coexistence of superconductivity and magnetic order observed in many heavy fermion materials [18; 19]. Other common features of HFSCs include their low transition temperatures, a large jump anomaly of the specific heat, and often a large ratio between the superconducting gap and the transition temperature [20; 21; 22; 23; 24; 25]. Therefore, seeking an efficient way to describe both the two-fluid normal state and the universal features of the HFSCs that emerge from it is an important issue in this area. Previous studies of HFSCs are either based on phenomenological models with effective pairing interactions [26; 27], or on Anderson/Kondo lattice models that inevitably rely on analytical or numerical approximations [28; 29; 30; 31; 32]. Most of these studies focus on certain microscopic properties, but fail to make a connection between the superconductivity and the two-fluid normal state. The problem is rooted in the difficulty of dealing with the strong \(f\)-electron interaction and the lack of a microscopic explanation for the two-fluid behaviors. Recently, it has been found that the exactly solvable Hatsugai-Kohmoto (HK) model may provide a convenient tool to study strongly correlated problems [33; 34; 35; 36; 37; 38]. The model assumes an all-to-all nonlocal interaction that transforms to a local one in momentum space, which represents a stable interacting fixed point describing the Mott physics [36]. Inspired by this, we recently proposed a momentum-space Kondo lattice model that allows for a microscopic derivation of the two-fluid phenomenology [39].
The key point is the often neglected nonlocal Kondo effect in heavy fermion systems, which causes a partial screening of \(f\)-spins in momentum space and leads to two-fluid behaviors. In this paper, we introduce an exactly solvable momentum-space Kondo-BCS model, and study superconductivity emerged from the two-fluid normal state. By increasing the Kondo coupling, we found a crossover from a BCS superconductor to a HFSC as the corresponding normal state evolves from a Fermi liquid state to a non-Fermi liquid (NFL) two-fluid state. The HFSC is featured with a significant enhancement of the gap ratio and the specific heat jump anomaly relative to the BCS values, indicating its strong coupling nature. In addition, we found two types of Cooper pairs in the superconducting phase, one formed by conduction electrons, the other formed by the composite fermions (composite objects of conduction electrons and \(f\)-spins), which is responsible for the strong coupling properties of HFSC. Our model provides a useful tool to study both the unconventional superconductivity and normal state in heavy fermion systems. _Model._--We begin with the \(\mathbf{k}\)-space Kondo lattice model: \[H_{K} = \sum_{\mathbf{k}\alpha}\epsilon_{\mathbf{k}}c^{\dagger}_{\mathbf{k} \alpha}c_{\mathbf{k}\alpha}+J_{K}\sum_{\mathbf{k}}\mathbf{s}_{\mathbf{k}}\cdot\bm {S}_{\mathbf{k}}, \tag{1}\] where \(c^{\dagger}_{\mathbf{k}\alpha}\) creates a conduction electron with dispersion \(\epsilon_{\mathbf{k}}\), \(\mathbf{s}_{\mathbf{k}}=\frac{1}{2}\sum_{\alpha\beta}c^{\dagger}_{\mathbf{k} \alpha}\mathbf{\sigma}_{\alpha\beta}c_{\mathbf{k}\beta}\) and \(\mathbf{S}_{\mathbf{k}}\) are the spins of the conduction electron and \(f\)-electron defined at each momentum \(\mathbf{k}\), \(J_{K}\) is the Kondo coupling. \(H_{K}\) can be derived from a Schrieffer-Wolff (SW) transformation of the Hatsugai-Kohmoto-Anderson lattice model where the \(f\)-electrons are described by the HK Hamiltonian [37]: \[H_{f}=\sum_{\mathbf{k}}\xi_{\mathbf{k}}n^{f}_{\mathbf{k}}+U\sum_{\mathbf{k}}n ^{f}_{\mathbf{k}\uparrow}n^{f}_{\mathbf{k}\downarrow}. \tag{2}\] Here an extra kinetic term for \(f\)-electrons is added, and \(n^{f}_{\mathbf{k}}\) (\(n^{f}_{\mathbf{k}\sigma}\)) is the occupation number (per spin). The HK interaction in \(H_{f}\) leads to a breakdown of Fermi liquid and gives rise to Mott physics [35; 36]. At half-filling, the exact solution of \(H_{f}\) reveals a quantum phase transition between a NFL metal and a Mott insulator as \(U\) increases [34]. This leads to a well defined \(\mathbf{k}\)-space \(f\)-spin operator \(\mathbf{S}_{\mathbf{k}}=\frac{1}{2}\sum_{\alpha\beta}f^{\dagger}_{\mathbf{k} \alpha}\mathbf{\sigma}_{\alpha\beta}f_{\mathbf{k}\beta}\) with \(n^{f}_{\mathbf{k}}=1\) deep inside the Mott insulating state. By including the hybridization with conduction electrons, \[H_{c}+H_{hyb}=\sum_{\mathbf{k}\alpha}\epsilon_{\mathbf{k}}c^{\dagger}_{\mathbf{ k}\alpha}c_{\mathbf{k}\alpha}+\mathcal{V}\sum_{\mathbf{k}\alpha}\left(c^{ \dagger}_{\mathbf{k}\alpha}f_{\mathbf{k}\alpha}+H.c.\right), \tag{3}\] and performing a SW transformation to project out the \(f\)-electron charge fluctuations, one obtains Eq. (1) with \(J_{K}\approx 8|\mathcal{V}|^{2}/U\)[40]. The resulting \(\mathbf{k}\)-space Kondo interaction represents a nonlocal scattering between \(c\) and \(f\) electrons. Previous studies suggest that such nonlocal Kondo interaction indeed plays an important role in heavy fermion systems [41; 42; 43]. 
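Because the Kondo term in Eq. (1) is diagonal in momentum, \(H_{K}\) decomposes into independent single-\(\mathbf{k}\) blocks, each acting on an eight-dimensional Hilbert space (four conduction-electron occupation states times two \(f\)-spin states). The following numpy sketch, written purely for illustration and not taken from the paper, diagonalizes one such block and recovers the expected spectrum: a Kondo singlet at \(\epsilon_{\mathbf{k}}-3J_{K}/4\) and a triplet at \(\epsilon_{\mathbf{k}}+J_{K}/4\) in the singly occupied sector, plus free \(f\)-spin doublets in the empty and doubly occupied sectors.

```python
# Illustrative exact diagonalization of one single-momentum block of Eq. (1),
# H_k = eps_k * n_c + J_K * s_c . S_f, on its 8-dimensional Hilbert space
# (two conduction fermion modes times one f spin-1/2). Not the authors' code.
import numpy as np

a  = np.array([[0.0, 1.0], [0.0, 0.0]])    # fermion annihilation, basis (|0>, |1>)
Z  = np.diag([1.0, -1.0])                  # Jordan-Wigner string (-1)^n
I2 = np.eye(2)
s  = [np.array([[0, 1], [1, 0]], complex) / 2,      # spin-1/2 operators (x, y, z)
      np.array([[0, -1j], [1j, 0]], complex) / 2,
      np.array([[1, 0], [0, -1]], complex) / 2]

def kron3(A, B, C):
    return np.kron(np.kron(A, B), C)

c = [kron3(a, I2, I2), kron3(Z, a, I2)]             # c_{k,up}, c_{k,dn}
n_c = sum(ci.conj().T @ ci for ci in c)             # conduction occupation number
s_c = [sum(s[i][al, be] * (c[al].conj().T @ c[be])  # s_c = (1/2) c^dagger sigma c
           for al in range(2) for be in range(2)) for i in range(3)]
S_f = [kron3(I2, I2, si) for si in s]               # local f-spin operators

eps_k, J_K = -0.5, 1.0                              # example values, in units of D
H_k = eps_k * n_c + J_K * sum(s_c[i] @ S_f[i] for i in range(3))
print(np.linalg.eigvalsh(H_k))
# -> [-1.25, -1., -1., -0.25, -0.25, -0.25, 0., 0.]: Kondo singlet at
#    eps_k - 3*J_K/4, free-spin doublets at 0 and 2*eps_k, triplet at eps_k + J_K/4.
```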
_Normal states._--Figure (1a) shows the ground state phase diagram of \(H_{K}\) and the corresponding conduction electron occupation number \(n^{c}_{\mathbf{k}}\). For simplicity, here we have assumed a parabolic dispersion \(\epsilon_{\mathbf{k}}=k^{2}/2\pi-1\) with an ultraviolet cutoff \(k_{\Lambda}=2\sqrt{\pi}\), so that the half-band-width \(D=1\) serves as the energy unit, and the electron density is \(n_{c}=\mathcal{N}_{s}^{-1}\sum_{\mathbf{k}}n^{c}_{\mathbf{k}}=1\) throughout our calculations. At \(J_{K}=0\), the conduction electrons form a Fermi liquid (FL) that is completely decoupled from the \(f\)-spins. A finite \(J_{K}\) destroys the Fermi liquid by replacing the original Fermi surface with a singly occupied momentum region \(\Omega_{1}\), separated from the empty region (\(\Omega_{0}\)) and the doubly occupied region (\(\Omega_{2}\)) by two filling surfaces at momenta \(k_{F1}\) and \(k_{F2}\), here denoted as FS\({}_{1}\) and FS\({}_{2}\). The appearance of three occupation regions with two filling surfaces is a hallmark of NFL ground states in HK-like models [44; 45; 33]. The \(f\)-spins form \(\mathbf{k}\)-space Kondo singlets with conduction electrons only in the \(\Omega_{1}\) region, while remain free in \(\Omega_{0}\) and \(\Omega_{2}\), since the Pauli principle forbids spin-flip scattering at a doubly occupied \(\mathbf{k}\) point. As \(J_{K}\) increases, the \(\Omega_{1}\) region also enlarges, until it covers the entire momentum space at \(J_{K}=4/3\), beyond which the system enters into a Kondo insulating (KI) phase due to the complete screening of \(f\)-spins. The partial Kondo screening in the NFL state has two consequences: 1) The Mott insulated \(f\)-electrons in \(\Omega_{1}\) become itinerant through the Kondo hybridization, such that the total number of charge carriers per spin is now counted by the "Fermi volume" enclosed by FS\({}_{1}\), which is larger than the Fermi volume at \(J_{K}=0\), but smaller than the standard large Fermi volume of a heavy Fermi liquid state. Such a partially enlarged Fermi volume is also found in previous large-\(N\)[41; 42; 43] or numerical calculations [46] for strongly frustrated or one-dimensional Kondo lattices where nonlocal Kondo interactions play an important role. 2) The itinerant Kondo singlets in \(\Omega_{1}\) form a Kondo liquid, with an "order parameter" satisfying a universal scaling consistent with the phenomenological two-fluid model [39]. The remaining unhybridized \(f\)-spins in \(\Omega_{0}\) and \(\Omega_{2}\) form another fluid that is generally referred to as a "spin liquid". Moreover, the conduction electrons in \(\Omega_{0}\) and \(\Omega_{2}\) also form a third liquid, as implicitly assumed in the two-fluid model [17]. The above NFL properties persist within a finite temperature region below the characteristic Kondo coherence temperature Figure 1: (a) Three different ground states of the \(\mathbf{k}\)-space Kondo lattice model \(H_{K}\) as \(J_{K}\) increases: Fermi liquid (FL), non-Fermi liquid (NFL) metal and Kondo insulator (KI). The conduction electron occupation number \(n^{c}_{\mathbf{k}}\) is shown for each phase. Kondo singlets only reside on the singly occupied region \(\Omega_{1}\). (b) The phase diagram of the \(\mathbf{k}\)-space Kondo-BCS model with pairing interaction \(V=1\). (c) The energy gap \(\Delta\) for the superconducting (blue) and KI (gray) states, and the superconducting gap ratio \(2\Delta/T_{c}\) (orange) at different \(J_{K}\). as shown in Fig. 
(1b), above which thermal fluctuations destroy the Kondo singlets and the normal state becomes a Fermi liquid controlled by the trivial fixed point at \(J_{K}=0\).

_Superconductivity_.--To study the superconducting instability, we consider the simplest \(s\)-wave pairing interaction between conduction electrons,

\[H_{V}=-\frac{V}{\mathcal{N}_{s}}\sum_{\mathbf{k}\mathbf{k}^{\prime}}c^{\dagger}_{\mathbf{k}\uparrow}c^{\dagger}_{-\mathbf{k}\downarrow}c_{-\mathbf{k}^{\prime}\downarrow}c_{\mathbf{k}^{\prime}\uparrow}, \tag{4}\]

and calculate the electron pair-binding energy, \(E_{b}=\langle\psi|H^{\prime}|\psi\rangle-\langle G|H^{\prime}|G\rangle\), where \(H^{\prime}=H_{K}+H_{V}\), \(|G\rangle\) is the ground state of \(H_{K}\), and \(|\psi\rangle\) is the state with an additional Cooper pair [40]. The resulting \(E_{b}\) satisfies

\[1=\frac{V}{16D}\ln\left|\frac{(2D-E_{b})^{4}(3J_{K}-E_{b})}{-(3J_{K}/2-E_{b})^{4}E_{b}}\right|, \tag{5}\]

and the numerical results are shown in the Supplementary Materials. At \(J_{K}=0\), Eq. (5) reduces to the BCS result. Increasing \(J_{K}\) reduces the absolute value \(|E_{b}|\), but \(E_{b}\) stays negative throughout the entire NFL state for arbitrarily small \(V\), indicating a Cooper instability. Next, we choose a finite \(V\), perform a BCS mean-field decomposition of \(H_{V}\), and solve the combined Hamiltonian:

\[H = H_{K}+H_{\text{BCS}}, \tag{6}\]
\[H_{\text{BCS}} = \Delta_{c}\sum_{\mathbf{k}}c^{\dagger}_{\mathbf{k}\uparrow}c^{\dagger}_{-\mathbf{k}\downarrow}+H.c.+\frac{\mathcal{N}_{s}\Delta_{c}^{2}}{V},\]

where \(\Delta_{c}=-V\mathcal{N}_{s}^{-1}\sum_{\mathbf{k}}\left\langle c_{-\mathbf{k}\downarrow}c_{\mathbf{k}\uparrow}\right\rangle\) is the pairing amplitude. Eq. (6) is exactly solvable, since it can be written as \(H=\frac{1}{2}\sum_{\mathbf{k}}H_{\mathbf{k}}\), where each \(H_{\mathbf{k}}\) is a conserved quantity with a 64-dimensional Hilbert space, hence can be exactly diagonalized. The phase diagram for \(V=1\) is shown in Fig. (1b). A superconducting phase is found for \(J_{K}<4/3\), with the transition temperature \(T_{c}\) decreasing monotonically with increasing \(J_{K}\). A rapid suppression of \(T_{c}\) is found between \(J_{K}=0.3\) and \(0.4\), where the phase boundary changes its sign of curvature. At this point, the normal state also evolves from a FL to a NFL two-fluid state. The crossover temperature \(T^{*}\) is determined by a broad maximum of the specific heat coefficient associated with the Kondo effect. Without pairing interaction, \(T^{*}\) decreases almost linearly with \(J_{K}\) and vanishes at \(J_{K}=0\). With finite pairing interaction, \(T^{*}\) continues to exist below \(T_{c}\), albeit suppressed, which separates the superconducting phase into two regions: a weak-coupling BCS superconductor and a strong-coupling HFSC. The superconducting transition is continuous for \(J_{K}\leq 0.5\) but becomes weakly first-order for \(J_{K}\geq 0.6\), where \(\Delta_{c}\) jumps discontinuously at \(T_{c}\) [40]. We notice that first-order transitions are not rare in studies of superconductivity emerging from non-Fermi liquids [38; 47]. The interaction-driven superconductor-to-insulator quantum phase transition at \(J_{K}=4/3\) is another interesting topic that may have experimental relevance [48; 49], which we leave for future investigation. Fig. (1c) plots the zero-temperature energy gap \(\Delta\) and the gap ratio \(2\Delta/T_{c}\).
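As a side note on Eq. (5), the pair-binding condition is a single transcendental equation for \(E_{b}\) and is straightforward to solve numerically. The sketch below is our own illustration (not from the paper or its supplement), with \(D=V=1\) assumed; it brackets the negative root between a large negative value and zero, where a sign change of the residual is guaranteed.

```python
# Numerical solution of the pair-binding condition, Eq. (5), for the negative
# root E_b < 0 (our own illustration; D = V = 1 assumed, as in the text).
import numpy as np
from scipy.optimize import brentq

D, V = 1.0, 1.0

def binding_condition(E_b, J_K):
    ratio = (2 * D - E_b) ** 4 * (3 * J_K - E_b) / (-(1.5 * J_K - E_b) ** 4 * E_b)
    return V / (16 * D) * np.log(abs(ratio)) - 1.0

for J_K in [0.0, 0.5, 1.0]:
    # The left side diverges as E_b -> 0^- and tends to -1 as E_b -> -infinity,
    # so a sign change (and hence a root) is bracketed on (-50 D, 0).
    E_b = brentq(binding_condition, -50 * D, -1e-12, args=(J_K,))
    print(J_K, E_b)   # |E_b| shrinks as J_K grows but E_b remains negative
```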
Note that \(\Delta\) is determined by the electron spectral function, which is generally unequal to the pairing amplitude \(\Delta_{c}\), except for \(J_{K}=0\) where the BCS relation holds. The gap ratio is quite close to (less than) the universal BCS value \(3.53\) for \(J_{K}\leq 0.3\), but becomes significantly enhanced for \(J_{K}\geq 0.4\), with a maximal value \(2\Delta/T_{c}\approx 17\) at \(J_{K}=1.1\). Such an extremely large gap ratio is also found in quantum critical models where the pairing glue has a power-law local susceptibility [50; 51], or superconductivity emerged from incoherent metals with Sachdev-Ye-Kitaev interactions [47]. In our case, we attribute the large gap ratio at \(J_{K}\geq 0.4\) to the unusual Kondo liquid in the normal state, which suppresses \(T_{c}\) and induces another type of Cooper pair that is strongly-coupled in nature. To distinguish the two regions of the superconducting state, we define a composite fermion operator \(F_{\mathbf{k}\alpha}\), and study its pairing correlation: \[\Delta_{F}(\mathbf{k})=-\langle F_{-\mathbf{k}\downarrow}F_{\mathbf{k} \uparrow}\rangle,\qquad F_{\mathbf{k}\alpha}=\sum_{\beta}\mathbf{\sigma}_{\alpha \beta}\cdot\mathbf{S}_{\mathbf{k}}c_{\mathbf{k}\beta}. \tag{7}\] In heavy fermion literature, the composite fermion is usually defined in the coordinate space, which transforms to a convolution in the momentum space [52; 53; 54]. For the \({\bf k}\)-space Kondo model studied here, Eq. (7) is a more appropriate definition, since the SW transformation used to derive \(H_{K}\) also transforms the \(f\)-electron operator \(f_{{\bf k}\alpha}\) to \(F_{{\bf k}\alpha}\)[40]. Therefore, one can roughly view \(F_{{\bf k}\alpha}\) as the renormalized \(f\)-electrons in the Kondo lattice model. A finite \(\Delta_{F}({\bf k})\) indicates presence of composite fermion Cooper pairs, which requires both pairing interaction and Kondo entanglement. Fig. (2a) compares the temperature evolution of \(\Delta_{c}({\bf k})=-\langle c_{-{\bf k}|}c_{{\bf k}\uparrow}\rangle\), \(\Delta_{F}({\bf k})\), and the entropy distribution \(s({\bf k})=-\frac{1}{2}{\rm Tr}[\rho_{\bf k}\ln\rho_{\bf k}]\) (\(\rho_{\bf k}\) is the density matrix of \(H_{\bf k}\)) for \(J_{K}=0.1\). Above \(T_{c}\approx 0.152\), both \(\Delta_{c}({\bf k})\) and \(\Delta_{F}({\bf k})\) are zero, while \(s({\bf k})\) shows a peak around the bare (\(J_{K}=0\)) Fermi momentum \(k_{0}=\sqrt{2\pi}\), with a constant background \(\ln 2\) contributed by the free \(f\)-spins. Below \(T_{c}\), \(\Delta_{c}({\bf k})\) becomes finite for all values of \({\bf k}\), while \(\Delta_{F}({\bf k})\) is almost zero within a broad range of temperature. The shape of \(\Delta_{c}({\bf k})\) can be fitted perfectly by the BCS formula \(\Delta_{c}({\bf k})=\Delta_{c}/(2\sqrt{\epsilon_{\bf k}^{2}+\Delta_{c}^{2}})\) as shown in Fig. (2a). The entropy peak around \(k_{0}\) is gradually consumed by the conduction electrons forming Cooper pairs, leaving a constant \(\ln 2\) plateau of local spins, which is also clearly seen in the integrated entropy \(S={\cal N}_{s}^{-1}\sum_{\bf k}s({\bf k})\) as shown in Fig. (2c). Below the characteristic temperature \(T^{*}\approx 0.002\), a large amount of \(f\)-spins start to combine with conduction electrons to form composite fermion Cooper pairs, as indicated by the peak of \(\Delta_{F}({\bf k})\) and the dip of \(s({\bf k})\) around \(k_{0}\). 
Interestingly, the rapid development of composite pairs is accompanied by a slight suppression of the conduction electron pair amplitude, as shown by the temperature dependence of \(\Delta_{c}=V{\cal N}_{s}^{-1}\sum_{\bf k}\Delta_{c}({\bf k})\) and \(\Delta_{F}=V{\cal N}_{s}^{-1}\sum_{\bf k}\Delta_{F}({\bf k})\) in Fig. (2b). This suggests that some Cooper pairs unbind themselves in order to form the composite pairs, indicating that these are indeed different types of Cooper pairs. Just below \(T^{*}\), the peak of \(\Delta_{F}({\bf k})\) is concentrated in the vicinity of \(k_{0}\), which quickly extends to the entire momentum space as the system approaches zero temperature. The same happens to the dip (gap) of \(s({\bf k})\), indicating all the \(f\)-spins finally become a part of the superconducting condensate. This process happens within a small temperature window, but consumes the extensive \(f\)-spin entropy in the \(\Omega_{0}\) and \(\Omega_{2}\) regions, giving rise to a huge peak of \(C/T\) at another characteristic temperature \(T^{\prime}\). Due to this huge peak, the broad maximum of \(C/T\) at \(T^{*}\) now appears as a shoulder, as shown in the inset of Fig. (2c). Below \(T^{\prime}\), \(\Delta_{F}\) saturates to a constant, while \(S\) approaches zero. This characteristic temperature exists for all \(0<J_{K}<4/3\), see for example Fig. (3) for \(J_{K}=0.5\). However it may not occur in real materials, since a more natural way to consume the extensive spin entropy is to form a nearby or coexisting magnetic order as observed in many HFSCs. Fig. (3) shows the same physical quantities at \(J_{K}=0.5\). Above \(T_{c}\approx 0.021\), both \(\Delta_{c}({\bf k})\) and \(\Delta_{F}({\bf k})\) are zero, while \(s({\bf k})\) evolves from a single peak to two peaks centered around the two filling surfaces at \(k_{F1}=1.17k_{0}\) and \(k_{F2}=0.79k_{0}\). The entropy depletion between \(k_{F1}\) and \(k_{F2}\) is due to the formation of Kondo singlets in the \(\Omega_{1}\) region. It causes a broad maximum of \(C/T\) at \(T^{*}\approx 0.1\) Figure 4: The specific heat coefficient around the superconducting transition temperature at different \(J_{K}\). Inset: the specific heat anomaly \(\Delta C/\gamma T_{c}\) as a function of \(J_{K}\). Figure 3: (a) Temperature evolution of \(\Delta_{c}({\bf k})\), \(\Delta_{F}({\bf k})\) and \(s({\bf k})\) for \(J_{K}=0.5\). The data curves are shifted with each other by a constant \(0.3\). The dashed lines in \(s({\bf k})\) mark the positions of the two filling surfaces at \(k_{F1}=1.17k_{0}\) and \(k_{F2}=0.79k_{0}\). (b) The pairing amplitudes \(\Delta_{c}\) and \(\Delta_{F}\) as functions of temperature at \(J_{K}=0.5\). (c) The entropy as a function of temperature at \(J_{K}=0.5\). The inset shows the specific heat coefficient. as shown in the inset of Fig. (3c). Superconductivity occurs at \(T_{c}\approx 0.021\), above which the Kondo singlets in \(\Omega_{1}\) region have already been fully developed. Therefore, both the conduction electrons and composite fermions form Cooper pairs immediately below \(T_{c}\). The two-peak structure of \(\Delta_{c}({\bf k})\) and \(\Delta_{F}({\bf k})\) suggests that the Cooper pairs are mainly formed by quasiparticles around the two filling surfaces, similar to the HK model [34; 38]. Below \(T_{c}\), the pairing amplitudes \(\Delta_{c}\) and \(\Delta_{F}\) increase monotonically with decreasing temperature as shown in Fig. (3b). 
In contrast to the case of \(J_{K}=0.1\), here \(\Delta_{F}\) is much larger than \(\Delta_{c}\), indicating that the composite fermion Cooper pairs play a dominant role. Since the superconducting transitions for \(J_{K}\geq 0.4\) require \(f\)-spins around the two filling surfaces to form composite Cooper pairs, they consume more entropy and lead to a large specific heat anomaly \(\Delta C/\gamma T_{c}\), as shown in Fig. (4). Here, the Sommerfeld coefficient \(\gamma=3.77\) is universal for the two-fluid normal state within \(0<J_{K}<4/3\), which is only slightly larger than the non-interacting value \(\gamma=\pi^{2}/3\approx 3.29\) at \(J_{K}=0\). It may require a local-in-space Kondo interaction to obtain a \(\gamma\) as large as in real heavy fermion materials. As \(J_{K}\) increases, \(\Delta C/\gamma T_{c}\) first decreases for \(J_{K}\leq 0.3\), then increases rapidly for \(0.4\leq J_{K}\leq 0.6\), and decreases again for \(J_{K}\geq 0.6\) where the superconducting transition becomes first-order. For continuous transitions within \(0<J_{K}<0.6\), the evolution of \(\Delta C/\gamma T_{c}\) follows that of the gap ratio, indicating the same microscopic origin behind them. For a rough comparison, experiments on two of the most studied strong coupling HFSCs, CeCoIn\({}_{5}\) and CeRhIn\({}_{5}\), show \(\Delta C/\gamma T_{c}=4.5\sim 4.7\)[5; 55], \(2\Delta/T_{c}=6\sim 10\)[56; 21], and \(\Delta C/\gamma T_{c}=4.2\)[22], \(2\Delta/T_{c}=5\)[57], consistent with our results at \(J_{K}=0.4\sim 0.5\). _Discussion._--We have checked different values of \(V\) and the results remain qualitatively the same. Our method can be conveniently generalized to \(d\)-wave or \(p\)-wave pairing interactions. In all cases, the composite fermion Cooper pairs will inevitably arise due to the combined effect of Kondo and pairing interactions, and lead to important universal features of HFSC. An implication of our study is that both conduction electron Cooper pairs and composite fermion Cooper pairs may exist in real HFSCs consistent with their two-fluid normal states, and the latter is expected to play a dominant role. What remains unexplored in this work is the microscopic mechanism behind the pairing interaction, which is often associated with the \(f\)-spin fluctuations in real materials. It is found that a Heisenberg exchange interaction between \(\mathbf{S_{\rm k}}\) and \(\mathbf{S_{-\rm k}}\) indeed induces pairing correlations between conduction electrons, which may require additional scattering to establish phase coherence [39]. In general, the \({\bf k}\)-space Kondo model has the advantage over conventional real-space models that it can be solved exactly while at the same time capturing important universal features of heavy fermion materials, thus opening a new window towards the ultimate solution of the heavy fermion problem. The authors thank Y. Zhong for stimulating discussions. J.W. and Y.-F.Y. are supported by the National Natural Science Foundation of China (Grants No. 12174429, No. 11974397), the National Key Research and Development Program of China (Grant No. 2022YFA1402203), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB33010100). Y.L. is supported by the Fundamental Research Funds for the Central Universities (Grant No. E2E44305).
2309.11886
Asymptotic Spinspacetime
We show that Poincar\'e invariance directly implies the existence of a complexified Minkowski space whose real and imaginary directions unify spacetime and spin, which we dub spinspacetime. Remarkably, despite the intrinsic noncommutativity of spin, this framework describes mutually commuting holomorphic or anti-holomorphic coordinates, which trace back to the complex geometry of twistor space. As a physical implication, we show that the Newman-Janis shift property of spinning black holes can be derived from a complexified version of equivalence principle. The fact that the exponential spin factors of their scattering amplitudes simply arise from half-Fourier transforms suggests that classically spinning massive states are realized in massive twistor coherent states.
Joon-Hwi Kim
2023-09-21T08:39:54Z
http://arxiv.org/abs/2309.11886v2
# Asymptotic Spinspacetime ###### Abstract We show that Poincare invariance directly implies the existence of a complexified Minkowski space whose real and imaginary directions unify spacetime and spin, which we dub spinspacetime. Remarkably, despite the intrinsic noncommutativity of spin, this framework describes mutually commuting holomorphic or anti-holomorphic coordinates, which trace back to the complex geometry of twistor space. As a physical implication, we show that the Newman-Janis shift property of spinning black holes can be derived from a complexified version of equivalence principle. The fact that the exponential spin factors of their scattering amplitudes simply arise from half-Fourier transforms suggests that classically spinning massive states are realized in massive twistor coherent states. + Footnote †: preprint: CALT-TH 2023-038 ## I Introduction Ezra T. Newman's curious idea of reinterpreting spin as an "imaginary displacement" [1; 2; 3; 4; 5; 6; 7; 8; 9; 10] has offered unique physical insights on relativistic angular momenta. In this article, we revisit Newman's proposal to point out its universality and significance as an homage to the note [1]. Consider any massive system enjoying global Poincare symmetry; it can typically be any system consisting of particles or fields defined in an asymptotically flat spacetime or even be a system lacking a spacetime formulation. Amusingly, it turns out that flat spacetime coordinates \(x^{\mu}\) can be reconstructed from the Poincare charges. Taking one more step forward, we further show that eight real variables \(x^{\mu}\), \(y^{\mu}\) arise from the Poincare charges, whose commutation relations can be bootstrapped as \[[z^{\mu},z^{\nu}]=0\,,\quad[z^{\mu},\bar{z}^{\nu}]\neq 0\,,\quad[\bar{z}^{\mu}, \bar{z}^{\nu}]=0\,, \tag{1}\] if \(z^{\mu}:=x^{\mu}+iy^{\mu}\) and \(\bar{z}^{\mu}:=x^{\mu}-iy^{\mu}\). Therefore there arises a complexified Minkowski space in which holomorphic ("\(zig\)") and anti-holomorphic ("\(\bar{z}ag\)") coordinates realize a curious "\(zig\)-\(\bar{z}ag\)" form of commutators (1). The imaginary components \(y^{\mu}\) describe the Pauli-Lubanski spin pseudovector normalized in the units of length. Thus we call this notion of complexified Minkowski space "spinspacetime," as spin is unified with spacetime. Interestingly, it turns out that the "\(zig\)-\(\bar{z}ag\)" structure of spinspacetime traces back to the oscillator algebra (Kahler geometry) of the twistor space: \[[Z,Z]=0\,,\quad[Z,\bar{Z}]\neq 0\,,\quad[\bar{Z},\bar{Z}]=0\,. \tag{2}\] A twistor represents a complexified light ray [11; 12; 13; 14]. The intersection of two light rays defines a point in complexified Minkowski space. In this way, the "\(Zig\)-\(\bar{Z}ag\)" brackets (2) of the space of light rays boil down to the "\(zig\)-\(\bar{z}ag\)" brackets (1) of the spinspacetime. If spacetime and spin are reimagined to be a complex geometry, what will its physical implication be? In [1; 6], Newman observes that spinspacetime offers a geometric interpretation of the minimal [15; 16] gyromagnetic ratio \(g\!=\!2\). Amusingly, we find that the power of spinspacetime extends beyond the dipole order if the "\(Zig\)-\(\bar{Z}ag\)" structure in its twistorial realization is fully appreciated. To demonstrate this fact, we specialize in systems in asymptotically flat spacetimes. Then spinspacetime arises from Poincare symmetry at the asymptotic infinity: "asymptotic spinspacetime." 
Its corresponding "asymptotic massive twistor space" is coordinatized by the complex coordinates \(z^{\mu}\), \(\bar{z}^{\mu}\) as well as the "massive spinorhelicity variables" [17; 18; 19; 20] describing on-shell scattering kinematics of massive spinning states at the infinity. Remarkably, considering scattering processes of spinning black holes from this angle leads to a discovery that the "\(Zig\)-\(\bar{Z}ag\)" structure (2) implies an enticing "derivation" of the Newman-Janis shift [7; 8] as a consequence of a complexified version of equivalence principle. Specifically, we show that the exponentiation properties of black holes' three-point amplitudes [16; 21; 22; 23; 24] and samethelicity Compton amplitudes of arbitrary multiplicities [25; 26; 27] follow from complexified massive on-shell scattering kinematics linking holomorphomy to self-duality, shedding a novel insight on the minimalness [15; 16; 22] of black hole couplings. Moreover, we find that the exponential spin factors [21; 22; 23; 24] encoding the Newman-Janis shift pop out in a strikingly simple manner through half-Fourier transforms, which suggests that classically spinning massive states are realized in twistor coherent states. Sections II and III construct spacetime and spinspacetime from Poincare symmetry. Section IV describes the twistorial origin. Sections V and VI specialize in the scattering theory context and derive black holes from the simplest "S-matrix in massive twistor space." ## II Reconstructing spacetime from Poincare symmetry In any system enjoying global Poincare symmetry, there shall exist Poincare charges: \[\begin{split}&[J^{\mu\nu},J^{\rho\sigma}]=i(-4\delta^{[\mu}{}_{ [\kappa}{}^{\nu]}{}^{\nu]}{}^{[\rho}\delta^{\sigma]}{}_{\lambda]})J^{\kappa \lambda}\,,\\ &[J^{\mu\nu},p_{\rho}]=ip_{\sigma}(-2\eta^{\sigma[\mu}\delta^{\nu] }{}_{\rho})\,,\\ &[p_{\mu},p_{\nu}]=0\,,\end{split} \tag{3}\] where \(\eta^{\mu\nu}\) is the flat inverse metric of signature \((-,+,+,+)\). Suppose the system is massive so that its configurations do not realize \(p^{2}=0\). Then it follows that \[\hat{x}^{\mu}:=J^{\mu\nu}p_{\nu}/p^{2} \tag{4}\] transforms almost like a position vector in Minkowski space under Poincare group action: \[\begin{split}[\hat{x}^{\mu},J^{\rho\sigma}]&=i\,(- 2\eta^{\mu[\rho}\delta^{\sigma]}_{\,\,\,\nu})\hat{x}^{\nu}\,,\\ [\hat{x}^{\mu},p_{\rho}]&=i\,(\delta^{\mu}{}_{\,\, \,\rho}-p^{\mu}p_{\rho}/p^{2})\,.\end{split} \tag{5}\] The failure is due to the \(-p^{\mu}p_{\rho}/p^{2}\) term, which is expectable because \(\hat{x}^{\mu}\) has only three truly independent components due to its transversality: \(p_{\mu}\hat{x}^{\mu}=0\). In this light, one can envision a four-component variable \(x^{\mu}\) that adds a longitudinal component to \(\hat{x}^{\mu}\): \[x^{\mu}=\hat{x}^{\mu}-Dp^{\mu}/p^{2}\,,\quad D=-p_{\mu}x^{\mu}\,. \tag{6}\] If one postulates that \(D\) transforms like \[[D,J^{\mu\nu}]=0\,,\quad[D,p_{\mu}]=-i\,p_{\mu}\,, \tag{7}\] then \(x^{\mu}\) can be made to transform like a position vector: \[[x^{\mu},J^{\rho\sigma}]=i\,(-2\eta^{\mu[\rho}\delta^{\sigma]}_{\,\,\,\nu})\,x ^{\nu}\,,\quad[x^{\mu},p_{\rho}]=i\,\delta^{\mu}{}_{\,\,\,\rho}\,. \tag{8}\] The converse is also true: demanding (8) implies (7). In this way, a Minkowski space \((\mathbb{R}^{4},\eta)\) arises from Poincare symmetry. 
Although the Poincare algebra appears to dictate only the transverse components \(\hat{x}^{\mu}\), there exists a unique prescription for the algebra of the longitudinal component \(D\) that achieves the desired behavior (8) of flat spacetime coordinates. In fact, the astute reader may have noticed that the operator \(D\) in (7) is simply defining the mass dimensions for \(J^{\mu\nu}\) and \(p_{\mu}\). Hence, as the Poincare algebra shall come with the notion of mass dimension for any physical system, we could have simply taken both (3) and (7) as inputs to obtain (8). These equations have already been derived by [28] in the context of a free particle in flat spacetime. However, our interpretation differs: we point out that the "emergence" of flat spacetime here applies to _any_ massive system with global Poincare symmetry, in fact--whether it is a particle, a system of particles, a field, or even a system not prescribed in a spacetime formulation. Prime examples are massive systems in asymptotically flat spacetimes. In that case, our argument asserts that a flat "bulk" can be reconstructed from Poincare charges at asymptotic infinity. The three transverse components \(\hat{x}^{\mu}=J^{\mu\nu}p_{\nu}/p^{2}\) are the impact parameters if one thinks of scattering processes; the addition of \(D\) reconstructs the "normal" direction into the four-dimensional bulk. We will specialize in this specific context in later sections. Meanwhile, there is a puzzling feature in this notion of a flat spacetime: it is noncommutative as \[[x^{\mu},x^{\nu}]=-\frac{i}{p^{2}}\,S^{\mu\nu}\,, \tag{9}\] where \(S^{\mu\nu}\) denotes the transverse projection of \(J^{\mu\nu}\), i.e., the spin angular momentum--the part of the Lorentz charge \(J^{\mu\nu}\) that is unexplainable by the orbital angular momentum \(2x^{[\mu}p^{\nu]}\) in spacetime: \[S^{\mu\nu}:=J^{\mu\nu}-p^{\mu}p_{\rho}J^{\rho\nu}/p^{2}-p^{\nu}p_{\rho}J^{\mu\rho}/p^{2}\,, \tag{10}\] \[\implies\quad J^{\mu\nu}=2x^{[\mu}p^{\nu]}+S^{\mu\nu}\,,\quad p_{\mu}S^{\mu\nu}=0\,. \tag{11}\] This peculiarity is unavoidable because it turns out that real spacetime coordinates \(x^{\mu}\) are always noncommutative as long as they are Poincare-covariant (see (101)). That is, Poincare symmetry seems to mandate a universal spin-induced noncommutativity on spacetime. Indeed, a considerable amount of work [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52] has observed the commutator (9) from various angles and from various spinning particle models. In particular, there is no such "physical" thing as position eigenstates "\(|x\rangle\)" nor a position-represented wavefunction "\(\psi(x)\)" in any Poincare-covariant first-quantized theory of massive spinning particles, regardless of the microscopic implementation of spin. To achieve \([x^{\mu},x^{\nu}]=0\), one has to further introduce gauge redundancies in the formalism (while adopting the quantize-then-constrain approach as in [34][53]) or harm Poincare covariance (Pryce-Newton-Wigner spin supplementary condition [54; 30; 55], (100)). One can notice a tension between two principles (cf. [56; 57]): spacetime commutativity versus manifest Poincare and gauge invariance. Will these be achieved altogether in a new type of geometry? ## III Unification of spacetime and spin into a complex geometry: Spinspacetime In the co-moving frame of the momentum, (9) boils down to \[[x^{i},x^{j}]=\frac{i\hbar}{m^{2}c^{2}}\,\varepsilon^{ij}{}_{k}S^{k}\,, \tag{12}\] where \(m\) is the rest mass. 
We have temporarily restored the fundamental constants and converted the spin bivector to a vector by the three-dimensional Hodge dual; note that the noncommutativity is an \(\mathcal{O}(1/c^{2})\) "relativistic correction." Meanwhile, if one recalls the familiar commutator \[[S^{i},S^{j}]=i\hbar\,\varepsilon^{ij}{}_{k}\,S^{k}\,, \tag{13}\] then one realizes that the right-hand sides can be canceled by composing complex combinations \(x^{i}\mp i\,S^{i}/mc\). A fully Poincare-covariant implementation of this observation is the following. First, the covariant realization of the spin vector \(S^{i}\) is given by the Pauli-Lubanski pseudovector \((*^{-1}J)^{\mu\nu}p_{\nu}\). To be added to position coordinates, it should be normalized in units of length: \[\hat{y}^{\mu}:=(*^{-1}J)^{\mu\nu}p_{\nu}/p^{2}\quad\implies\quad S^{\mu\nu}=\varepsilon^{\mu\nu\rho\sigma}\hat{y}_{\rho}p_{\sigma}\,. \tag{14}\] Then, it follows from (3) that \(\hat{y}^{\mu}\) transforms like a tangent vector under the Poincare group action: \[[\hat{y}^{\mu},J^{\rho\sigma}]=i\left(-2\eta^{\mu[\rho}\delta^{\sigma]}{}_{\nu}\right)\hat{y}^{\nu}\,,\quad[\hat{y}^{\mu},p_{\rho}]=0\,. \tag{15}\] Yet, to pursue a complete parallel between spacetime and spin, a longitudinal component could be added as in (6): \[y^{\mu}=\hat{y}^{\mu}-\tilde{D}p^{\mu}/p^{2}\,,\quad\tilde{D}=-p_{\mu}y^{\mu}\,. \tag{16}\] If \(\tilde{D}\) transforms trivially as \[[\tilde{D},J^{\mu\nu}]=0\,,\quad[\tilde{D},p_{\mu}]=0\,, \tag{17}\] then \(y^{\mu}\) transforms like a tangent vector: \[[y^{\mu},J^{\rho\sigma}]=i\left(-2\eta^{\mu[\rho}\delta^{\sigma]}{}_{\nu}\right)y^{\nu}\,,\quad[y^{\mu},p_{\rho}]=0\,. \tag{18}\] Conversely, demanding (18) implies (17). Finally, we find \[\begin{split}&[y^{\mu},y^{\nu}]=-\frac{i}{p^{2}}\,S^{\mu\nu}\,,\\ &[x^{\mu},y^{\nu}]=-\frac{i}{p^{2}}\left(2y^{(\mu}p^{\nu)}-\eta^{\mu\nu}p_{\rho}y^{\rho}\right),\end{split} \tag{19}\] from which it follows that \[\begin{split}&[z^{\mu},z^{\nu}]=[\bar{z}^{\mu},\bar{z}^{\nu}]=0\,,\\ &[z^{\mu},\bar{z}^{\nu}]=-\frac{2}{p^{2}}\left(2y^{(\mu}p^{\nu)}-\eta^{\mu\nu}p_{\rho}y^{\rho}+i\varepsilon^{\mu\nu\rho\sigma}y_{\rho}p_{\sigma}\right)\end{split} \tag{20}\] if we define \[z^{\mu}:=x^{\mu}+iy^{\mu}\,,\quad\bar{z}^{\mu}:=x^{\mu}-iy^{\mu}\,. \tag{21}\] Amusingly, each set of these complex coordinates is commutative! In fact, we shall underscore the fact that this is the _only_ Poincare-covariant solution to the noncommutativity problem. A proof is given in Appendix C. Note that Poincare covariance of \(z^{\mu}\) and \(\bar{z}^{\mu}\) at the same time implies that Poincare group action preserves the complex structure (i.e., does not mix up the holomorphic and anti-holomorphic sectors): \[\begin{split}&[z^{\mu},J^{\rho\sigma}]=i\left(-2\eta^{\mu[\rho}\delta^{\sigma]}{}_{\nu}\right)z^{\nu}\,,\quad[z^{\mu},p_{\rho}]=i\delta^{\mu}{}_{\rho}\,,\\ &[\bar{z}^{\mu},J^{\rho\sigma}]=i\left(-2\eta^{\mu[\rho}\delta^{\sigma]}{}_{\nu}\right)\bar{z}^{\nu}\,,\quad[\bar{z}^{\mu},p_{\rho}]=i\delta^{\mu}{}_{\rho}\,.\end{split} \tag{22}\] Therefore, there arises a notion of a complexified Minkowski space (\(\mathbb{C}^{4},\eta^{\mathbb{C}}\)), endowed with a complex structure and a complexified flat metric that reduces to Lorentzian signature on the real section. In conclusion, a complexified Minkowski space with mutually commuting holomorphic or anti-holomorphic coordinates can be constructed in any massive theory with global Poincare symmetry. 
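For the reader's convenience, the cancellation behind the first line of (20) can be spelled out in one step: writing \(z^{\mu}=x^{\mu}+iy^{\mu}\) and using (9) and (19),

\[[z^{\mu},z^{\nu}]=\big([x^{\mu},x^{\nu}]-[y^{\mu},y^{\nu}]\big)+i\big([x^{\mu},y^{\nu}]-[x^{\nu},y^{\mu}]\big)=0\,,\]

since the first parenthesis cancels between the two identical brackets \(-\tfrac{i}{p^{2}}S^{\mu\nu}\), and the second vanishes because the right-hand side of the mixed bracket in (19) is symmetric under \(\mu\leftrightarrow\nu\); the anti-holomorphic case follows by complex conjugation.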
The real \(x^{\mu}\) and imaginary \(y^{\mu}\) parts of its complex coordinates respectively describe spacetime and spin: the Lorentz generator splits into "orbital" and "spin" parts as \[J^{\mu\nu}=2x^{[\mu}p^{\nu]}+\varepsilon^{\mu\nu\rho\sigma}y_{\rho}p_{\sigma}\,. \tag{23}\] In this sense, the complexified Minkowski space unifies spacetime and spin. Let us coin the term "spinspacetime" [58] to refer to such a concept. The fact that spinspacetime holomorphy is a Poincare-invariant notion implies that it can acquire a physical significance. In fact, the reader may have already noticed that holomorphy is inherently linked with chirality. The imaginary part \(y^{\mu}\) is a pseudovector; chirality flip will implement complex conjugation of spinspacetime. More explicitly, complex combinations of (6) and (16) reveal that holomorphic spinspacetime coordinates are born from the self-dual angular momentum: \[\begin{split}& z^{\mu}=\frac{2}{p^{2}}\,J^{+\,\mu\nu}p_{\nu}- \frac{1}{p^{2}}\,p^{\mu}(D+i\tilde{D})\,,\\ & J^{\pm}:=\frac{1}{2}\,(J\pm i\,{*}^{-1}J)^{\mu\nu}\,.\end{split} \tag{24}\] In fact, this is the way how Newman in [1] motivates spinspacetime [59]: the self-dual and anti-self-dual parts of the Lorentz generator (23) are given by \[J^{+\,\mu\nu}=(z\wedge p)^{+\,\mu\nu}\,,\quad J^{-\,\mu\nu}=(\bar{z}\wedge p)^{ -\,\mu\nu}\,, \tag{25}\] if \((\alpha\wedge\beta)^{\pm\,\mu\nu}\) denotes the self-dual/anti-self-dual projection of the bivector \((\alpha\wedge\beta)^{\mu\nu}=2\alpha^{[\mu}\beta^{\nu]}\). In this perspective, the imaginary unit in \(x^{\mu}\pm iy^{\mu}\) traces back to an electric-magnetic duality between orbital \((x\wedge p)^{\mu\nu}\) and spin \(*(y\wedge p)^{\mu\nu}\) angular momenta. ## IV **Twistorial Origin of Spinspacetime** The astute reader may have noticed that the development so far resonates with the very philosophy of twistor theory: spacetime is a secondary construct [11; 12; 13]. Also, the fact that (7) is simply the commutators of the dilatation charge and that the \([z^{\mu},\bar{z}^{\nu}]\) bracket takes a remarkably simple form in the spinor notation further supports an intimate connection to twistor theory where conformal symmetry and spinors play a crucial role. And not to say, we are encountering a complex-geometrical structure. With this anticipation, let us first show that the calculations so far can be succinctly repackaged if one employs the language of spinors and conformal symmetry. For any Hamiltonian system in asymptotically flat spacetime, one can find conformal generators from the conformal killing vectors even though the Hamiltonian may not enjoy the full conformal symmetry \(\mathrm{SU}(2,2)\). Consider its central extension to \(\mathrm{U}(2,2)\) with center generator \(\tilde{D}\) so that the commutator Lie algebra reads \[\begin{split}&[G^{\dot{\alpha}}{}_{\dot{\beta}},G^{\dot{\gamma}}{}_{ \dot{\delta}}]=i\left(-\delta^{\dot{\alpha}}{}_{\dot{\delta}}G^{\dot{\gamma}}{}_{ \dot{\beta}}+\delta^{\dot{\gamma}}{}_{\dot{\beta}}G^{\dot{\alpha}}{}_{\dot{ \delta}}\right),\\ &[G^{\dot{\alpha}}{}_{\dot{\beta}},p_{\gamma\dot{\gamma}}]=i \left(-p_{\gamma\dot{\beta}}\delta^{\dot{\alpha}}{}_{\dot{\gamma}}\right),\\ &[p_{\alpha\dot{\alpha}},p_{\beta\dot{\beta}}]=0\,,\\ &[G^{\dot{\alpha}}{}_{\dot{\beta}},G_{\dot{\delta}}{}^{\gamma}]=0 \,,\end{split} \tag{26}\] whereas other brackets related to the above by complex conjugation and inversion are omitted. 
\(G^{\dot{\alpha}}{}_{\dot{\beta}}\) incorporates Lorentz and dilatation as traceless and trace-only parts: \[G^{\dot{\alpha}}{}_{\dot{\beta}}=J^{\dot{\alpha}}{}_{\dot{\beta}}+\tfrac{1}{2} \delta^{\dot{\alpha}}{}_{\dot{\beta}}(D+i\tilde{D})\,. \tag{27}\] If the system is massive, the translation generator \(p_{\alpha\dot{\alpha}}\) can be "inverted" in the sense that \[(p^{-1})^{\dot{\alpha}\alpha} :=\bar{\epsilon}^{\dot{\alpha}\dot{\beta}}\epsilon^{\alpha\beta} p_{\beta\dot{\beta}}/\det(p)\,, \tag{28}\] \[\implies (p^{-1})^{\dot{\alpha}\alpha}p_{\alpha\beta} =\delta^{\dot{\alpha}}{}_{\dot{\beta}}\,,\quad p_{\alpha\dot{ \alpha}}(p^{-1})^{\dot{\alpha}\beta}=\delta_{\alpha}{}^{\beta}\,,\] which translates to \((p^{-1})^{\mu}=p^{\mu}/(-p^{2})\). Using this notation, (24) boils down to a remarkably simple formula: \[z^{\dot{\alpha}\alpha}:=-G^{\dot{\alpha}}{}_{\dot{\beta}}(p^{-1})^{\dot{ \beta}\alpha}\,. \tag{29}\] Then straightforward calculations show that (26) implies \[[z^{\dot{\gamma}\gamma},G^{\dot{\alpha}}{}_{\dot{\beta}}] =i(-\delta^{\dot{\gamma}}{}_{\dot{\beta}}\,z^{\dot{\alpha}\gamma })\,,\] \[[z^{\dot{\gamma}\gamma},\bar{G}_{\beta}{}^{\alpha}] =i(-z^{\dot{\gamma}\alpha}\delta_{\beta}{}^{\gamma})\,, \tag{30}\] \[[z^{\dot{\gamma}\gamma},p_{\alpha\dot{\alpha}}] =i\delta^{\dot{\gamma}}{}_{\dot{\alpha}}\delta_{\alpha}{}^{\gamma}\] as well as \[[z^{\dot{\alpha}\alpha},z^{\dot{\beta}\beta}]=0\,,\quad[z^{\dot{\alpha}\alpha },\bar{z}^{\beta\beta}]=i(z-\bar{z})^{\dot{\alpha}\beta}(p^{-1})^{\dot{\beta} \alpha}\,. \tag{31}\] (30) implies that \(z^{\dot{\alpha}\alpha}\) transforms like Minkowski space coordinates under \(\mathrm{U}(2,2)\). (31) reproduces the result (20). Thus we find a) a complexified Minkowski space with b) commutative holomorphic or anti-holomorphic coordinates, from which spinspacetime is defined. Now, we notice that a linear representation of \(\mathrm{U}(2,2)\) is possible in the twistor space, which is the vector space \(\mathbb{C}^{4}\) equipped with a \((2,2)\)-signature Hermitian form [12; 13]. To implement the mass, one considers two copies of twistor space \(\mathbb{C}^{8}=\mathbb{C}^{4}\!\times\!\mathbb{C}^{4}\)[17; 18; 60], where \(\mathrm{U}(2,2)\) acts from the left and \(\mathrm{U}(2)\) acts from the right with a shared \(\mathrm{U}(1)\). (Note that a twistor represents a null ray in complexified Minkowski space [12; 13; 14], while a massive momentum can be realized as a sum of two null momenta.) In a modern view [61; 62], the right \(\mathrm{SU}(2)\) is the massive little group while the \(\mathrm{U}(1)\) is gauge. Let us call this space \(\mathbb{C}^{8}\) "massive twistor space." Let \(Z_{\mathrm{A}}{}^{I}\) and \(\bar{Z}_{I}{}^{\mathrm{A}}\) be the holomorphic and anti-holomorphic coordinates of the massive twistor space: \[Z_{\mathrm{A}}{}^{I}=\begin{pmatrix}\lambda_{\alpha}{}^{I}\\ i\mu^{\dot{\alpha}I}\end{pmatrix},\quad\bar{Z}_{I}{}^{\mathrm{A}}=\left(-i \bar{\mu}_{I}{}^{\alpha}\ \ \bar{\lambda}_{I\dot{\alpha}}\right), \tag{32}\] where \(\mathrm{A},\mathrm{B},\cdots\) are \(\mathrm{SU}(2,2)\) (Dirac spinor) indices, while \(I,J,\cdots\) are \(\mathrm{SU}(2)\) indices. 
The massive twistor space is a Kahler vector space, which implies that the commutation relations are given by the oscillator algebra: \[[Z_{\mathrm{A}}{}^{I},Z_{\mathrm{B}}{}^{J}] =0\,,\] \[[Z_{\mathrm{A}}{}^{I},\bar{Z}_{J}{}^{\mathrm{B}}] =\delta_{\mathrm{A}}{}^{\mathrm{B}}\delta_{J}{}^{I}\,, \tag{33}\] \[[\bar{Z}_{I}{}^{\mathrm{A}},\bar{Z}_{J}{}^{\mathrm{B}}] =0\,.\] Then the \(\mathrm{U}(2,2)\) generators are given by \[\begin{pmatrix}-i\,\bar{G}_{\alpha}{}^{\beta}&-p_{\alpha\dot{\alpha}}\\ -K^{\dot{\alpha}\alpha}&i\,G^{\dot{\alpha}}{}_{\dot{\beta}}\end{pmatrix}=G_{ \mathrm{A}}{}^{\mathrm{B}}=Z_{\mathrm{A}}{}^{I}\bar{Z}_{I}{}^{\mathrm{A}}\,, \tag{34}\] the Weyl block decomposition of which gives \[p_{\alpha\dot{\alpha}}=-\lambda_{\alpha}{}^{I}\bar{\lambda}_{I\dot{\alpha}}\,, \quad G^{\dot{\alpha}}{}_{\dot{\beta}}=\mu^{\dot{\alpha}I}\bar{\lambda}_{I \dot{\beta}}\,. \tag{35}\] The first equation is clearly the "massive spinor-helicity" decomposition of the massive momentum [16; 17; 18; 19; 20]. Applying the spinspacetime construction (29) to the charges (35) of massive twistor space, we find \[z^{\dot{\alpha}\alpha}=\mu^{\dot{\alpha}I}(\lambda^{-1})_{I}{}^{\alpha}\quad \Longrightarrow\quad\mu^{\dot{\alpha}I}=z^{\dot{\alpha}\alpha}\lambda_{\alpha}{} ^{I}\,, \tag{36}\] from which we reinvent the incidence relation of twistor theory [13; 14] for the massive case! According to twistor theory, (36) means that the two null rays represented by the twistors \(Z_{\mathrm{A}}{}^{I=0,1}\) are "co-incident" at a point \(z^{\dot{\alpha}\alpha}\) in complexified Minkowski space. Conversely, the spinspacetime commutation relation (31) can be quickly derived from the oscillator algebra (33) of the twistor space. In turn, we realize that the spin-induced spacetime noncommutativity (9) and its resolution in the curious complex-geometrical unification of spacetime and spin trace back to the Kahler geometry of the twistor space where the non-vanishing commutators are only "\(Zig\)-\(\bar{Z}ag\)" (we use "zig" and "zag" as nicknames for "holomorphic" and "anti-holomorphic"). In summary, we have illuminated two additional pathways to spinspacetime that are computationally simpler yet more conceptually advanced than our initial approach. First, the calculations in Sections II-III are rendered more concise by utilizing conformal generators in Minkowski space in the spinorial language. Second, the massive twistor space has given birth to the spinspacetime through the "co-incidence" relation that defines a point in complexified Minkowski space as the intersection of two null rays. The insight here lies in the fact that the commutativity of holomorphic or anti-holomorphic spinspacetime coordinates (vanishing zig-zig and zag-zag commutators) originates from the Kahler geometry (zig-zag structure) of the twistor space. ## V S-matrix in massive twistor space Finally, we would like to discuss the physical consequences of spinspacetime and its twistorial realization. As mentioned earlier, we will now specifically suppose a massive system in asymptotically flat spacetime and consider its scattering theory. Then from Poincare symmetry at asymptotic infinity there follows the notion of "asymptotic spinspacetime," from which one can consider its corresponding "asymptotic massive twistor space." To begin with, let us elaborate on the physical meaning of the massive twistor space. 
One way of approaching the massive twistor space is to view it as the phase space that arises from the space of massive spinor-helicity variables \(\lambda_{\alpha}{}^{I}\), \(\bar{\lambda}_{I\dot{\alpha}}\) as a configuration space: \[[\lambda_{\alpha}{}^{I},\bar{\mu}_{J}{}^{\beta}]=i\,\delta_{\alpha}{}^{\beta}\,\delta_{J}{}^{I}\,,\quad[\bar{\lambda}_{I\dot{\alpha}},\mu^{\dot{\beta}J}]=i\,\delta_{I}{}^{J}\,\delta^{\dot{\beta}}{}_{\dot{\alpha}}\,. \tag{37}\] Hence, in our context the massive twistor space describes external massive spinning scattering states from asymptotic infinity [63], as massive spinor-helicity variables describe on-shell configurations of a massive spinning particle [16]. Such scattering states can be described either in the coherent state basis (Kahler polarization), \[Z\big{|}\,Z_{1}\big{\rangle}=\big{|}\,Z_{1}\big{\rangle}\,Z_{1}\,,\quad\big{\langle}\bar{Z}_{2}\big{|}\,\bar{Z}=\bar{Z}_{2}\big{\langle}\bar{Z}_{2}\big{|}\,, \tag{38}\] or in the spinor-helicity basis, \[\lambda\,\big{|}\,\lambda_{1}\bar{\lambda}_{1}\big{\rangle}=\big{|}\lambda_{1}\bar{\lambda}_{1}\big{\rangle}\,\lambda_{1}\,,\quad\bar{\lambda}\,\big{|}\,\lambda_{1}\bar{\lambda}_{1}\big{\rangle}=\big{|}\,\lambda_{1}\bar{\lambda}_{1}\big{\rangle}\,\bar{\lambda}_{1}\,, \tag{39}\] where we have started to omit indices to avoid clutter. The overlaps are given as \[\big{\langle}\bar{Z}_{2}\big{|}\,Z_{1}\big{\rangle} = e^{\bar{Z}_{2}Z_{1}}\,, \tag{40}\] \[\big{\langle}\lambda_{2}\bar{\lambda}_{2}\big{|}\lambda_{1}\bar{\lambda}_{1}\big{\rangle} = \delta^{(4)}\big{(}\lambda_{1}\!-\!\lambda_{2}\big{)}\,\delta^{(4)}\big{(}\bar{\lambda}_{1}\!-\!\bar{\lambda}_{2}\big{)}\,, \tag{41}\] where \(\bar{Z}_{2}Z_{1}\) abbreviates \((\bar{Z}_{2})_{I}{}^{\mathrm{A}}(Z_{1})_{\mathrm{A}}{}^{I}\). The transformations between these two bases are given by the "half-Fourier transforms," which could be defined properly by analytically continuing to the \((2,2)\)-signature [64; 65]: \[\big{|}\,Z_{1}\big{\rangle} = \big{|}\,\lambda_{1}\mu_{1}\big{\rangle}=\int[d^{4}\bar{\lambda}]\ e^{i\bar{\lambda}\mu_{1}}\ \big{|}\,\lambda_{1}\bar{\lambda}\big{\rangle}\,, \tag{42}\] \[\big{\langle}\bar{Z}_{2}\big{|} = \big{\langle}\bar{\lambda}_{2}\bar{\mu}_{2}\big{|}=\int[d^{4}\lambda]\ e^{-i\bar{\mu}_{2}\lambda}\ \big{\langle}\lambda\bar{\lambda}_{2}\big{|}\,.\] Through half-Fourier transforms, S-matrix elements in the spinor-helicity basis can be converted to the twistor coherent state basis: \[\big{\langle}\lambda_{2}\bar{\lambda}_{2}\big{|}S\big{|}\lambda_{1}\bar{\lambda}_{1}\big{\rangle}\ \ \longrightarrow\ \big{\langle}\bar{Z}_{2}\big{|}S\big{|}\,Z_{1}\big{\rangle}\,. \tag{43}\] Note that the coherent state overlap (40) is a product of two half-Fourier kernels. To the best of our knowledge, massive amplitudes have not been examined in the twistor basis; the work [66], e.g., has half-Fourier transformed only the massless leg. ## VI Black holes from complexified equivalence principle With this preliminary understanding, let us now describe the simplest S-matrix in the massive twistor space. Suppose a massive spinning object traveling in a self-dual background spacetime. If it is "minimally coupled," then its left-handed spin frame should be parallel transported without any tidal precessions, literally because the left-handed spinor bundle is flat. Hence the left-handed frame remains constant in a gauge. 
The scattering amplitudes statement of this "complexified equivalence principle" in the bulk is that the scattering kinematics for receiving positive-helicity gravitons is given by \[\delta^{(4)}\big{(}\lambda_{1}\!-\!\lambda_{2}\big{)}\,, \tag{44}\] where \(1,2\) label in and out massive states. In the same way, the amplitudes of such a minimal object receiving negative-helicity gravitons should be supported on \[\delta^{(4)}\big{(}\bar{\lambda}_{1}-\!\bar{\lambda}_{2}\big{)}\,. \tag{45}\] This motivates us to envision "massive on-shell diagrams" as in Figure 1, anticipating a line-up with the massless diagrams [67; 68; 69; 70; 71]. These are complexified kinematics, as momentum conservation demands that at least one of the spinor-helicity variables should change after a nonzero impulse [72]. (Yet we simply retain the bar notation instead of using tilde.) For instance, suppose the object receives one positive-helicity graviton with null momentum \(\mathfrak{Z}_{\alpha\dot{\alpha}}\). Then, momentum conservation and (44) together uniquely fixes the support for both spinor-helicity variables as \[\mathrm{zig}:\quad\delta^{(4)}\big{(}\lambda_{1}\!-\!\lambda_{2}\big{)}\ \delta^{(4)}\big{(}\bar{\lambda}_{1}\!-\!\lambda_{1}^{-1}\mathfrak{Z}_{3}-\bar{ \lambda}_{2}\big{)}\,, \tag{46}\] where contracted indices are abbreviated as \((\lambda_{1}^{-1}\mathfrak{Z}_{1})_{I\dot{\alpha}}=(\lambda_{1}^{-1})_{I}{}^{ \alpha}\,\mathfrak{Z}_{3}{}_{\alpha\dot{\alpha}}\). The negative-helicity counterpart reads \[\mathrm{zag}:\quad\delta^{(4)}\big{(}\lambda_{1}\!-\!3\bar{\lambda}_{2}^{-1}- \lambda_{2}\big{)}\ \delta^{(4)}\big{(}\bar{\lambda}_{1}\!-\!\bar{\lambda}_{2}\big{)}\,. \tag{47}\] Let us call these two cases "zig" and "zag" kinematics, respectively, as they freeze the holomorphic or anti-holomorphic spinor-helicity variable. Amusingly, half-Fourier transforming the complexified kinematics (46)-(47) reveals that our minimal object is a black hole! For instance, the zig kinematics half-Fourier transforms to \[\begin{split}&\exp\left(-i\bar{\mu}_{2}\lambda_{1}\right)\,\exp \left(i(\lambda_{1}^{-1}\mathfrak{Z}_{3}+\bar{\lambda}_{2})\mu_{1}\right)\\ &=\big{\langle}\bar{Z}_{2}\big{|}Z_{1}\big{\rangle}\,\exp\left(i \,3\,\mu_{1}\lambda_{1}^{-1}\right).\end{split} \tag{48}\] Figure 1: The building blocks of “massive twistor diagrams,” describing extremal cases of complexified on-shell kinematics. Imposing the co-incidence relation (36), this becomes \[\left\langle\bar{Z}_{2}\right|\!Z_{1}\rangle\,e^{i3z_{1}}=\left\langle\bar{Z}_{2} \right|\!Z_{1}\rangle\,e^{i3x_{1}}\,e^{-3y_{1}}\,, \tag{49}\] from which we rediscover the "exponential spin factor" \(e^{-3y_{1}}\) of [21; 22; 23; 24] that encodes the Newman-Janis shift property [73; 5; 8; 7; 74] of spinning black holes at the amplitudes level. In particular, it predicts the unity \(C_{\ell}=1\) of multipole moments (three-point Wilson coefficients [75]): \[e^{-3y_{1}}\,=\,\sum_{\ell=0}^{\infty}\frac{C_{\ell}}{\ell!}\,(-3y_{1})^{\ell }\quad\Longrightarrow\quad C_{\ell}=1\,. \tag{50}\] The minus sign here is due to our conventions \(\varepsilon_{0123}=+1\) and \(S^{\mu\nu}=\varepsilon^{\mu\nu\rho\sigma}y_{\rho}p_{\sigma}\). The ambiguity \(y^{\mu}\sim y^{\mu}+\varepsilon\,p^{\mu}\) of the spin length pseudovector drops on \(3\!\cdot\!p=0\) precisely as the reparameterization redundancy \(x^{\mu}\sim x^{\mu}+\varepsilon\,p^{\mu}\). 
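Before turning to the zag case, it may be useful to record how mechanically the spin factor emerges from the half-Fourier transform. The following sympy sketch repeats the computation leading to (48) for a single commuting component (a scalar stand-in for the \(2\times 2\) spinor blocks, with \(q\) standing for the massless momentum labeled \(3\) above); the variable names and the one-component reduction are only our own illustration.

```python
import sympy as sp

lam1, lam2, lbar1, lbar2, mu1, mubar2, q = sp.symbols(
    'lam1 lam2 lbar1 lbar2 mu1 mubar2 q', real=True)

# zig-kinematics support, cf. (46), in a one-component toy
support = sp.DiracDelta(lam1 - lam2) * sp.DiracDelta(lbar1 - q/lam1 - lbar2)

# half-Fourier kernels of (42): integrate out lambda_2 (out state) and lambdabar_1 (in state)
integrand = sp.exp(-sp.I*mubar2*lam2) * sp.exp(sp.I*lbar1*mu1) * support
result = sp.integrate(integrand, (lam2, -sp.oo, sp.oo), (lbar1, -sp.oo, sp.oo))

# expected form (48): coherent-state overlap times the exponential factor exp(i q mu1/lam1)
expected = sp.exp(-sp.I*mubar2*lam1 + sp.I*lbar2*mu1) * sp.exp(sp.I*q*mu1/lam1)
print(sp.simplify(result - expected))   # 0
```

The delta functions simply get eaten by the Fourier kernels, which is all that is needed for the exponential spin factor to appear.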
For the zag kinematics, half-Fourier transform gives \[\begin{split}&\exp\left(-i\bar{\mu}_{2}(\lambda_{1}-3\bar{ \lambda}_{2}^{-1})\right)\,\exp\left(i\bar{\lambda}_{2}\mu_{1}\right)\\ &=\,\left\langle\bar{Z}_{2}\right|\!Z_{1}\rangle\,\exp\left(i\, \bar{\lambda}_{2}^{-1}\bar{\mu}_{2}3\right),\\ &=\,\left\langle\bar{Z}_{2}\right|\!Z_{1}\rangle\,e^{i\,3\bar{z} _{2}}=\left\langle\bar{Z}_{2}\right|\!Z_{1}\rangle\,e^{i\,3x_{2}}\,e^{3y_{2}} \,,\end{split} \tag{51}\] from which we identify the Newman-Janis factor for negative helicity, \(e^{3y_{2}}\). This argument also applies to helicity one if one appeals to a "single copy" of spin precession equations of motion pointed out in [62]: the "square root" of the Lorentz force equation for \(\sqrt{\text{Kerr}}\) freezes the left-handed frame when the background field strength is self-dual [62; 76]. In summary, the on-shell kinematics (46)-(47) encoding the complexified equivalence principle (CEP) derives the Newman-Janis (NJ) shift property of black holes, through the half-Fourier (HF) transforms: \[\text{CEP}\ \ \overset{\text{HF}}{\Longrightarrow}\ \text{NJ shift}\,. \tag{52}\] In fact, the converse is also true, simply because the half-Fourier transforms can be inverted. For instance, \[\left\langle\bar{Z}_{2}\right|\!Z_{1}\rangle\,e^{i3z_{1}}=\left\langle\bar{Z}_ {2}\right|\!e^{i3z}\!\left|Z_{1}\right\rangle \tag{53}\] translates into the spinor-helicity basis as (cf. [24]) \[\begin{split}&\left\langle\lambda_{2}\bar{\lambda}_{2}\right|\! \exp\left(i\,3_{\alpha\dot{\alpha}}\mu^{\dot{\alpha}I}(\lambda^{-1})_{I}{}^{ \alpha}\right)\!\left|\lambda_{1}\bar{\lambda}_{1}\right\rangle\\ &=\left\langle\lambda_{2},\bar{\lambda}_{2}+\lambda_{1}^{-1}\!3 \right|\!\lambda_{1}\bar{\lambda}_{1}\right\rangle,\end{split} \tag{54}\] where we note that the differential operator realization of \(\mu^{\dot{\alpha}I}\) is given by \(-i\partial/\partial\bar{\lambda}_{I\dot{\alpha}}\). Therefore, we have \[\text{CEP}\ \ \overset{\text{HF}}{\Longleftrightarrow}\ \text{NJ shift}\,, \tag{55}\] and accordingly, the Newman-Janis property can be more basis-independently stated as the holomorphy of black hole three-point T-matrix receiving a positive-helicity massless quantum as a first-quantized operator [77]: \[T=e^{i3z}=e^{i3\mu\lambda^{-1}}\,. \tag{56}\] We shall highlight that the Kahler geometry of massive twistor space plays a crucial role in this argument: \[\text{K\"{a}hler geometry}\ \ \Longrightarrow\ \ \left(\,\text{CEP}\ \Longleftrightarrow\ \text{NJ shift}\,\right). \tag{57}\] For example, a holomorphic (zig) T-matrix shifts the anti-holomorphic (zag) spinor-helicity variable because holomorphic operators are represented as differential operators acting on the anti-holomorphic sector, which is because the massive twistor space is Kahler (zig-zag) [78]: \[\left\langle\lambda_{2}\bar{\lambda}_{2}\right|\!e^{i3z}=\left\langle\lambda_{ 2},\bar{\lambda}_{2}+\lambda_{1}^{-1}\!3\right|. \tag{58}\] The astute reader might have already noticed this feature from the structure of half-Fourier transforms, in fact. 
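As a small consistency check of the dictionary underlying (56), the matrix content of (29), (35) and (36) can be verified directly with random numbers. The sketch below treats \(\lambda\), \(\bar{\lambda}\), \(\mu\) and \(z\) as plain \(2\times 2\) complex matrices; the row/column orderings are our own bookkeeping choice.

```python
import numpy as np

rng = np.random.default_rng(2)
def c22():
    return rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

lam, lamb, z = c22(), c22(), c22()   # lambda_alpha^I, lambdabar_{I alphadot}, z^{alphadot alpha}

mu = z @ lam          # co-incidence relation (36): mu^{alphadot I} = z^{alphadot alpha} lambda_alpha^I
p = -lam @ lamb       # (35): p_{alpha alphadot} = -lambda_alpha^I lambdabar_{I alphadot}
G = mu @ lamb         # (35): G^{alphadot}_{betadot} = mu^{alphadot I} lambdabar_{I betadot}

# (29): z^{alphadot alpha} = -G^{alphadot}_{betadot} (p^{-1})^{betadot alpha}
print(np.allclose(-G @ np.linalg.inv(p), z))   # True
```

In particular, the holomorphic combination \(\mu\lambda^{-1}\) appearing in (56) is nothing but the spinspacetime coordinate recovered this way.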
Alternatively, note the following commutation relations that follow from (36) and (37): \[\begin{split}[\lambda_{\alpha}{}^{I},\bar{z}^{\dot{\beta}\dot{ \beta}}]&=i\,(\bar{\lambda}^{-1})^{\dot{\beta}I}\delta_{\alpha}{}^{ \beta}\,,\\ [\bar{\lambda}_{I\dot{\alpha}},z^{\dot{\beta}\dot{\beta}}]& =i\,\delta^{\dot{\beta}}{}_{\dot{\alpha}}(\lambda^{-1})_{I}{}^{ \beta}\,.\end{split} \tag{59}\] Since impulses of observables can be computed as [79] \[\begin{split}\Delta\mathcal{O}&=(1-iT^{\dagger})\, \mathcal{O}\,(1+iT)-\mathcal{O}\,,\\ &=i[\mathcal{O},T]+T^{\dagger}[\mathcal{O},T]\,,\end{split} \tag{60}\] one finds that the holomorphic T-matrix \(e^{i3z}\) alters \(\bar{\lambda}_{I\dot{\alpha}}\) while freezing \(\lambda_{\alpha}{}^{I}\), as the zig spin frame \(\lambda_{\alpha}{}^{I}\) commutes with the zig spinspacetime coordinates \(z^{\dot{\alpha}\alpha}\). Like a seesaw, zig alters zag and zag alters zig. A few remarks are in order. Firstly, it should be understood that the "zig-zag mechanism" of zig T-matrix shifting the zag spinor-helicity universally applies to any massive spinning systems, although we have specialized to black holes. As (57) has implied, it is the complexified equivalence principle or the Newman-Janis shift property that uniquely characterizes black holes by linking holomorphy to self-duality. For generic objects, the three-point T-matrix is not guaranteed to be holomorphic or anti-holomorphic, even though the massless leg is either self-dual or anti-self-dual. Hence the kinematics does not necessarily fall into the categories (44) or (45) (see [62]). Secondly, we point out a caveat in our discussion: we have simply discarded the prefactors of scattering amplitudes, which are given by the non-spinning amplitudes (\(mx^{h}\)[16]). It is rather remarkable that the exponential spin factors could be deduced just by examining the support of amplitudes in the massive spinor-helicity basis. Lastly, note that our argument at three points readily generalizes to higher-multiplicity amplitudes in half-flat (self-dual or anti-self-dual) backgrounds. For instance, suppose the background has two massless quanta of the same helicity. Complexified equivalence principle demands that the zig spinor-helicity variable should be frozen, so the on-shell diagram is a gluing of two zig-kinematics. Half-Fourier transforming, one finds (46) and (49) with 3 replaced with \(3+4\). In this way, one "derives" the fact that the spin factor exponentiates for the same-helicity Compton amplitude and its higher-multiplicity analogs [25; 26; 27]. In fact, the bulk complexified equivalence principle we have started with assumed arbitrary half-flat backgrounds. ## VII Conclusion In this article, we have shown that a flat "spin-spacetime" can be constructed in any massive system with global Poincare symmetry, which is a complexified Minkowski space whose coordinates of real and imaginary directions describe orbital and spin angular momenta. It exhibits an interesting commutator structure: its holomorphic \(z^{\mu}\) or anti-holomorphic \(\bar{z}^{\mu}\) coordinates are commutative so that the only non-vanishing brackets are \([z^{\mu},\bar{z}^{\nu}]\). Twistor theory provides an explanation of this feature from the Kahler geometry of massive twistor space while enlarging the angle from spacetime and spin to the entire phase space of a massive spinning particle by incorporating the so-called massive spinor-helicity variables. 
Appreciating the consequences of the Kahler geometry in this extended setting reveals that the Newman-Janis property of spinning black holes can be reformulated as an association between holomorphy and chirality in massive on-shell scattering kinematics that traces back to a complexified form of equivalence principle in half-flat background geometries. Amusingly, half-Fourier transforming the complexified kinematics has directly yielded the exponential spin factors of spinning black hole scattering amplitudes. The central motif that has guided our journey is the zig-zag structure. The "\(zig\)-\(\bar{z}ag\)" structure of spinspacetime coordinates first noticed in (20) was extended to the "\(Zig\)-\(\bar{Z}ag\)" twistor brackets (33) and, in turn, led to the zig-zag relationship (59) between massive spinor-helicity variables and spinspacetime coordinates, thus offering a fascinating derivation of the Newman-Janis shift. The idea of unifying spacetime and spin into a complex geometry was first proposed and investigated by Newman from two angles: an electric-magnetic duality between orbital and spin angular momenta [1; 2] and the Newman-Janis shift deriving spinning stationary solutions [2; 3; 4; 5; 6; 7; 8], from which the minimal gyromagnetic ratio \(g\!=\!2\) of black holes could be argued as a physical consequence. In this article, however, we have taken a few more steps beyond Newman's exploration of spinspacetime. First, we have updated the mathematical/geometrical definition of spinspacetime by incorporating its commutators as a crucial defining feature, which was not studied by Newman. Second, putting emphasis on the commutators has allowed us to argue the necessity of spinspacetime unification, which was not clear in Newman's approaches. Third, we have demonstrated that spinspacetime and its realization in massive twistor theory can shed insights on black hole physics beyond the dipole-order coupling. To elaborate further on the second point, we remark that an important conceptual point has been made explicit throughout our discussion: there is no such thing as "impact parameter \(x^{\mu}\)" nor "spin \(y^{\mu}\)" (or "\(\vec{S}\)") that can simultaneously label an eigenstate of a massive spinning particle, due to the universal noncommutativity (9) and the renowned commutator (13)! In our view, the closest notion to "position eigenstates" is given by the holomorphic spinspacetime eigenstate \(|z\rangle\), whose full refinement is the massive twistor coherent state \(|Z\rangle\). Indeed, our derivation of the exponential spin factors has directly yielded \(e^{i3z}\) or \(e^{i3\bar{z}}\) from imposing the co-incidence relations, although we then simply separated \(e^{\mp 3y}\) from \(e^{i3x}\) to elucidate the agreement with the usual language at the moment. Appendix A describes how our implementation/interpretation of the exponential spin factors is connected to that of the literature [16; 80; 81; 82] where the vanishing of (13) in the \(\hbar\!\to\!0\) limit essentially allows taking \(\vec{S}\) as a classical observable [80; 81]. Intriguingly, our complex coordinates turn out to be the only possible sets of commuting Poincare-covariant coordinates: Appendix C proves uniqueness, while the calculations in Sections II-III have proved existence. 
The inevitable failure of Poincare-covariant real commutative spacetime in the presence of nonzero spin can be traced back to Moller's observation [85; 86] that a rotating massive object with mass \(m\) and spin \(s\hbar\) in special relativity is "delocalized" within a radius \(R_{\rm Moller}=s\cdot(\hbar/mc)=s\,\lambda_{\rm C}\), where \(\lambda_{\rm C}:=\hbar/mc\) is the reduced Compton wavelength. Indeed, the uncertainty inequality predicts the noncommutativity length scale \(\Delta x\sim s^{1/2}\lambda_{\rm C}\) from (12). Intriguingly, the journey from spacetime to spinspacetime to twistor space we have taken in this article suggests that this noncommutativity is precisely the "fuzziness of spacetime points" in twistor theory depicted in Figure 2. Specifically, we have learned in Section IV that the bracket (9) is exactly what one obtains when spacetime points are defined by intersections of light rays governed by the oscillator algebra (33). Hence a heavenly imagery of a complex-analytic continuum and a fuzzy landscape comprising chunks of spin and spacetime coexist in the space of light rays as multifaceted images. Figure 2: Lightcone in twistor theory [83; 12]. In [84; 17; 11], Penrose advocates a point of view that there is an absurdity inherent in the notion of spacetime already at the Compton scale of elementary particles. Accordingly, the very philosophy of twistor theory is to take spacetime points as secondary constructs emerging from intersections between null rays. When does the spin-induced noncommutativity imply an actual "breakdown of spacetime"? The plot shown in Figure 3 implies that the noncommutativity is serious for elementary particles, which do not carry any "physical dimensions" other than the Compton wavelength that can hide it (thus Penrose in [84, 11, 17] was indeed correct). But practically, one might want to ignore the fuzziness for macroscopic astronomical objects such as Kerr black holes. Ironically, however, the curious new insights gained in this paper suggest that it will be worth taking a journey to the zig-zag world of spinspacetime and twistors if we really care about Kerr. We end with a few comments on future directions while leaving several further applications in Appendix B. _a) Bulk spinspacetime._ Curved twistor theory has been developed in three directions: global, local, and asymptotic [13]. Hence the present work, which developed the theory of "asymptotic spinspacetime," would serve as only one of the three parts of a tapestry if we are to parallel such a history in the massive case. In particular, a theory of bulk spinspacetime is yet to be completed. Works [88, 10, 9, 10] remark that Newman's \(\mathscr{H}\)-space [9, 10, 89, 90, 88, 91, 92] realizes curved bulk spinspacetime, but only for algebraically special cases. Work [62] has investigated the zig-zag structure of bulk spinspacetime from the angle of "symplectic perturbations" [93, 94] but restricted its scope to self-dual geometries. The fully general theory of spinspacetime, incorporating both self-dual and anti-self-dual modes, should answer how holomorphy conforms to general covariance and how the _zig-zag_ brackets become generalized. We hope to provide more details soon [95]. 
_b) Black hole amplitudes at higher points._ The theory of general-relativistic spinspacetime is of importance at least because we might gain some insights into the puzzle of black hole Compton amplitudes [96, 97, 98, 16, 22] while not being tricked by the treachery of spin gauge redundancies in the bivector formulation of spin [61, 32, 55]. Revisiting the findings from the asymptotic perspective will then provide further insights. In particular, it would be enlightening if we could identify an amplitudes-level principle characterizing black holes at higher points with arbitrary helicities and interpret it from an on-shell perspective, generalizing the complexified equivalence principle. A curved generalization of Newman's electric-magnetic duality between orbital and spin angular momenta might serve as such a principle [99]. _c) Supertranslation ambiguity._ The asymptotic symmetry group of asymptotically flat spacetime in general relativity is known to be not only Poincare but the BMS group [100, 101], which leads to the supertranslation ambiguity of angular momentum [102, 103, 104]. It seems worth understanding to what extent the theory of asymptotic spinspacetime is subject to such ambiguities and examining its relation with the massless asymptotic twistor framework while keeping in mind the fundamental differences between massive and massless systems. The unification of space and time in the previous century has stimulated a huge paradigm shift. It would be interesting to see whether a further unification of spacetime and spin will open up new avenues for spinning black hole physics in this exciting era of gravitational wave astronomy while presenting us with profound insights into the nature of relativistic angular momentum. **Acknowledgment.** JHK is grateful to Clifford Cheung, Sangmin Lee, Keefe Mitman, Alexander Ochirov, and Julio Parra-Martinez for discussions and comments on the draft. The work of JHK is supported in part by Ilju Academy and Culture Foundation. ## Appendix A In which states are baseballs thrown? A crucial question in the study of scattering amplitudes is "What are the external states that we scatter?" In this appendix, we would like to illustrate and clarify how our approach that unifies spacetime and spin and first-quantizes the massive twistor is connected to the conventional approach taken in the literature [80, 81, 16, 82] by comparing the answers to this question. Figure 3: A logarithmic version of Chew-Frautschi [87] plot: \(\log s\) versus \(\log(\ell_{\rm Pl}/\lambda_{\rm C})\), where \(s\) is the spin angular momentum of the object in units of \(\hbar\). The ratio \((\ell_{\rm Pl}/\lambda_{\rm C})\) between the Planck length \(\ell_{\rm Pl}:=(\hbar G/c^{3})^{1/2}\) and the reduced Compton wavelength \(\lambda_{\rm C}:=\hbar/mc\), which might be called "Penrose parametric scale" (see Figure 2), describes how macroscopic an object is. The spin-induced spacetime noncommutativity \(\Delta x=s^{1/2}\lambda_{\rm C}\) exceeds the Schwarzschild radius \(R_{\rm Sch}\!=\!2\ell_{\rm Pl}^{2}/\lambda_{\rm C}\) and becomes relevant in the post-Minkowskian scheme if \((\ell_{\rm Pl}/\lambda_{\rm C})<\frac{1}{\sqrt{2}}\,s^{1/4}\). This bound is indicated as the lower shaded region in the plot. For extremal Kerr black holes, \(\ell_{\rm Pl}^{2}/\lambda_{\rm C}=s\lambda_{\rm C}\) implies \((\ell_{\rm Pl}/\lambda_{\rm C})=s^{1/2}\), so the noncommutativity is always "censored." Typical baryonic matter in our universe lies within a straight line with slope \(0.75\) and intercept \(-42\). 
The values for the baseball and the figure skater are taken from typical college physics textbooks. The value of angular momentum for the rotating fullerene is estimated from the equipartition of energy at \(300\)K. The data for the solar system objects are taken from NASA’s Space Science Data Coordinated Archive. The context is the scattering amplitudes approach to computing classical observables. For instance, [105] employs a wavepacket that is adequately localized in both momentum and position spaces to emulate a classical non-spinning object. The situation becomes quite puzzling for spinning objects, however. For instance, in what states are spinning baseballs thrown? No one seems to throw a baseball in spinor-helicity eigenstates, at least. One rather forgets about the information on Eulerian angles and specifies momentum and spin. But how can the three components of spin angular momentum be specified at once if such a state is realized in a first-quantized theory, where we have the spin-spin commutator (13)? According to the standard recipe of the current literature [81; 82], the baseball is implemented as a coherent-state superposition of discrete-spin states. Namely, the on-shell amplitudes of Arkani-Hamed, Huang, Huang (AHH) [16], which take the form \[(\mathcal{A}_{(2s)})_{{J_{1}}{J_{2}}\cdots{J_{2s}}}{}^{{I_{1}}{I_{2}}\cdots{I_ {2s}}} \tag{14}\] as little group tensors, are saturated by little group spinors \(\psi_{I}\) and \(\bar{\psi}^{I}=[\psi_{I}]^{*}\) and then summed over all discrete spins to give the "classical-spin" amplitude \(\mathcal{M}\): \[\mathcal{M}=\lim_{h\to 0}\,\sum_{2s=0}^{\infty}\frac{e^{-\bar{\psi}\psi}}{(2s)!} \left(\bar{\psi}\cdots\bar{\psi}\,\mathcal{A}_{(2s)}\psi\cdots\psi\right), \tag{15}\] where the \(\hbar\to 0\) limit is taken such that the "body-frame spin angular momentum components" are fixed [81]: \[\frac{\hbar}{2}\,\bar{\psi}^{I}(\sigma^{a})_{I}{}^{J}\psi_{J}\,\to\,W^{a}\,. \tag{16}\] However, our first-quantized massive twistor approach appears to work in a fundamentally different way. For an illustration, consider T-matrix elements in our spinor-helicity basis to maximally mimic the AHH amplitudes: \[\mathcal{M}\,\sim\,\left\langle\lambda_{2}\bar{\lambda}_{2}\big{|}T\big{|} \lambda_{1}\bar{\lambda}_{1}\right\rangle. \tag{17}\] Crucially, this object does not come with any indices. The state \(\ket{\lambda\bar{\lambda}}\) is not a discrete-spin state but already is a sort of a coherent state with continuous labels. There is no such thing as \(\psi_{I}\) in this framework; specifically, the body-frame spin \(W^{a}\) is implemented as SU(2) generators of the massive twistor space [61; 62], not as parameters for some post-dressing of amplitudes. Nevertheless, we can still attempt to parallel the standard approach as much as possible. Since baseballs are not thrown in the spinor-helicity basis, one has to somehow "Fourier transform" (17) into a tentative "baseball basis." Yet, there is an intrinsic distinction in the manner in which spin is implemented. AHH amplitudes will arise if spin is implemented by the coadjoint orbit [106; 107; 108] that takes S\({}^{2}\) as the rotational phase space. But, the rotational phase space associated with the massive twistor space is \(T^{*}(\text{SU}(2))\)[61]. 
Thus the apparent degeneracy at each definite spin is \((2s+1)^{2}\) instead of the \(2s+1\) of coadjoint orbit [61; 109; 110], so, to mimic the coherent-state smearing recipe, naively one needs to introduce one more copy of \(\psi_{I}\) and \(W^{a}\), which is confusing. A sensible solution seems to be using the group Fourier transform [111; 112], which effectively contracts the two copies of spinor indices. Suppose the massive spinor-helicity variables are decomposed into a reference configuration times a right U(2) matrix. Then the delta function in (41) decomposes into the following form: \[\delta^{(4)}(p_{1}-p_{2})\,\delta^{(3)}(U_{1}{U_{2}}^{-1})\,\delta(\alpha_{1} -\alpha_{2})\,, \tag{18}\] where the last term drops out through summing over the U(1) gauge orbit (cf. [58]). Then the SU(2) delta function \(\delta^{(3)}(U_{1}{U_{2}}^{-1})\) can be group Fourier transformed by smearing with the kernel \(\exp(i\,W_{I}{U_{J}}^{J})\). The result is a \(\star\)-deformed delta function over \(\mathfrak{su}(2)^{*}\)[111; 112]: \[\delta^{(3)}_{\star}(\vec{W}_{1}-\vec{W}_{2})\,, \tag{19}\] which arises naturally from regarding \(\mathfrak{su}(2)^{*}\) as a noncommutative three-dimensional space. The three components of \(\vec{W}\) could be simultaneously specified in a ket thanks to the \(\star\)-deformation. Indeed, (19) is a regular function over the \(\vec{W}\)-space representing a "fuzzy point." Now specifying \(\vec{p}\) and \(\vec{W}\) would realize a possible "baseball" state. Furthermore, one can even Fourier-transform \(\vec{p}\) to have position coordinates so that the usual "space-time plus spin" picture is retained. However, despite the retrieval of a familiar picture, this "baseball state" is by no means a simultaneous eigenstate of commuting operators in the usual sense: the space of parameters labeling the state is noncommutative. It rather satisfies a "\(\star\)-deformed" eigenvalue equation [111; 112], which leads to a somewhat involved algebra. The very cause of the artificial complications, however, turns out to be rather such attempts to make the twistor conform to the standard. The elegant solution reveals itself when one simply forgets about pre-existing prescriptions. In Section VI, we have already heuristically discovered that the classical-spin amplitudes just directly pop out if one evaluates the amplitudes in the coherent state basis and peels off the coherent state overlap: \[\left\langle\bar{Z}_{2}\big{|}T\big{|}Z_{1}\right\rangle\,=\,\mathcal{M}\left \langle\bar{Z}_{2}\big{|}Z_{1}\right\rangle. \tag{20}\] This is remarkably more straightforward than the AHH-based approach that sums over discrete-spin amplitudes. That is, _baseballs are thrown in twistor coherent states_! The information of spin is encoded in the massive twistor eigenstates through the co-incidence relation, in the form of holomorphic/anti-holomorphic spinspacetime coordinates. By unifying spacetime and spin, a commuting set of observables is achieved so that no such thing as noncommutative parameter spaces nor \(\star\)-deformed delta functions are needed [113]. Also, since half-Fourier transformation simply spits out the exponential spin factor, there is no need to go through the specifics of taking classical limits to the infinite series of discrete spins. 
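For concreteness, the sense in which the smearing (15)-(16) endows a state with a definite classical spin vector can be illustrated in a small toy model. The sketch below uses a truncated two-mode (Schwinger-boson) Fock space as a stand-in for the tower of little-group indices; this realization is our own illustration and is not the construction of [81], but it shows a coherent state carrying exactly \(\langle S^{a}\rangle=\tfrac{1}{2}\bar{\psi}\sigma^{a}\psi\) (with \(\hbar=1\)).

```python
import numpy as np
from math import factorial

N = 25                                    # Fock truncation per mode
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # single-mode annihilation operator
If = np.eye(N)
a_ops = [np.kron(a, If), np.kron(If, a)]  # two oscillator modes playing the role of I = 1, 2

def coherent(alpha):
    v = np.array([alpha**n / np.sqrt(factorial(n)) for n in range(N)], dtype=complex)
    return np.exp(-abs(alpha)**2 / 2) * v

psi = np.array([0.6 + 0.3j, -0.2 + 0.5j])           # illustrative coherent labels psi^I
state = np.kron(coherent(psi[0]), coherent(psi[1]))

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# spin operators S^a = (1/2) a_I^dag (sigma^a)_I^J a_J in the Schwinger-boson realization
S = [0.5 * sum(a_ops[i].conj().T @ a_ops[j] * s[i, j] for i in range(2) for j in range(2))
     for s in sigma]

expectation = np.array([state.conj() @ Sa @ state for Sa in S])
classical = np.array([0.5 * psi.conj() @ s @ psi for s in sigma])
print(np.allclose(expectation, classical))          # True (up to the tiny truncation error)
```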
As this "twistor coherent state proposal" directly yields the amplitudes of physical interest while being natural and simple, it is hard to imagine any other prescriptions for external scattering states being more physically and formalism-wise significant in the massive twistor setting. Yet, this does not imply an inconsistency of the standard approach that combines the KMO'C formalism [105] and the spin coherent state dressing [81], which is built upon rigorous analyses on classical limits and has been successful on correctly producing classical observables. In particular, the noncommutativity inherent in the labeling of coherent-spin states does not imply that we cannot employ them--the \(\star\)-deformation scales with \(\hbar\). This point has indeed been clarified in [81]: the non-factorizing piece in the spin-spin correlator is quantum, which is another way of probing the observation made around (101) that the spin coherent states are "\(\star\)-deformed \(\bar{W}\)-eigenstates" in the jargon of [111; 112], subtly deviating from the notion of eigenstates that one usually thinks of. The successive Fourier transforms demonstrated in this appendix suggest that our twistor coherent state proposal is consistent with and, in fact, is not too distant from the standard recipe. Namely, the two are compatible prescriptions that naturally arise in the two respective formalisms: \(|Z\rangle\) for twistor particle and "momentum combo body-frame spin" for quantum field. The intermediate zone is the "classical" massive spinor-helicity basis \(|\lambda\bar{\lambda}\rangle\), which half-Fourier and SU(2) group Fourier transforms to the former and latter bases. ## Appendix B Further applications of spinspacetime The theory of spinspacetime finds its further applications at least in two more places. _a) Center coordinates._ The asymptotic spinspacetime coordinates can be thought of as the complex center of mass [1; 2]. As the simplest example, consider a spinning binary system in flat spacetime. If the bodies do not interact (i.e., two free particles), the conformal charges are additive. Then (29) offers a remarkably simple derivation/definition of Poincare-covariant center coordinates: \[z^{\dot{\alpha}\alpha}\,=\,(z_{1})^{\dot{\alpha}\beta}(p_{1}p^{-1})_{\beta}{} ^{\alpha}+(z_{2})^{\dot{\alpha}\beta}(p_{2}p^{-1})_{\beta}{}^{\alpha}\,, \tag{104}\] where \(p_{\alpha\dot{\alpha}}=(p_{1})_{\alpha\dot{\alpha}}+(p_{2})_{\alpha\dot{ \alpha}}\). In particular, the real part of (104) boils down to \[\begin{split} x^{\mu}\,=\,(e_{1}x_{1}+e_{2}x_{2})^{\mu}& -(p_{1}\!\wedge\!p_{2})^{\mu}{}_{\nu}(x_{1}\!-\!x_{2})^{\nu}/p^{2} \\ &+*(p_{1}\!\wedge\!p_{2})^{\mu}{}_{\nu}(y_{1}-y_{2})^{\nu}/p^{2} \,.\end{split} \tag{105}\] While \(e_{1,2}:=(p_{1,2})_{\mu}(p^{-1})^{\mu}\) are energy weights in the zero-momentum frame, the second term adds a transversal correction such that the formula becomes a map between equivalence classes of reparameterizations (i.e., enforces additivity of dilatation charges). The third term is a spin-dependent correction mandated by the electric-magnetic duality of Newman. 
Indeed, the imaginary part of (104) offers an intriguing notion of "spin center": \[\begin{split} y^{\mu}\,=\,(e_{1}y_{1}+e_{2}y_{2})^{\mu}& -(p_{1}\!\wedge\!p_{2})^{\mu}{}_{\nu}(y_{1}\!-\!y_{2})^{\nu}/p^{2} \\ &-*(p_{1}\!\wedge\!p_{2})^{\mu}{}_{\nu}(x_{1}\!-\!x_{2})^{\nu}/p^{ 2}\,.\end{split} \tag{106}\] Note that this does not vanish when the individual spins \(y^{\mu}_{1}\), \(y^{\mu}_{2}\) are set to zero, which reflects that the binary system has nonzero spin angular momentum when effectively considered as a single object. Even for non-covariant spin supplementary conditions, the complex spinspacetime coordinates seem to serve as a platform offering the best shortcut for deriving formulae for center coordinates. When there is a weak interaction between the two bodies, then perturbatively expanding the conformal charges may offer a systematic derivation of post-Minkowskian corrections to the leading-order formula (104): \[z=(G_{1}+G_{2}+G_{\rm int})(p_{1}+p_{2}+p_{\rm int})^{-1}\,. \tag{107}\] For instance, at first post-Minkowskian order one finds \[z^{1\rm PM}=(G^{1\rm PM}-z^{0\rm PM}p^{1\rm PM})(p_{1}+p_{2})^{-1} \tag{108}\] by putting \(G_{\rm int}=G^{1\rm PM}+G^{2\rm PM}+\cdots\) and \(p_{\rm int}=p^{1\rm PM}+p^{2\rm PM}+\cdots\) in (107). It will be further interesting if the notion of "center twistor" can be defined, given that spinspacetime is a shadow of massive twistor space. _b) Zig-zag theory of spinning black hole binaries._ It is possible to study and formulate spinning black hole binary dynamics in the spinspacetime/twistor framework. In particular, consider the following on-shell diagrams at first post-Minkowskian (1PM) order: (109) For instance, the first diagram half-Fourier transforms to \[\big{\langle}\bar{Z}_{1^{\prime}}\big{|}Z_{1}\big{\rangle}\big{\langle}\bar{Z} _{2^{\prime}}\big{|}Z_{2}\big{\rangle}\;e^{i3(z_{1}-\bar{z}_{2^{\prime}})}\,. \tag{110}\] In turn, the 1PM eikonal phase takes the following "zig-zag" form when the prefactors are restored: \[(x_{1}/x_{2})^{h}\,e^{i3(z_{1}-\bar{z}_{2^{\prime}})}+(x_{1}/x_{2})^{-h}\,e^{ i3(\bar{z}_{1^{\prime}}-z_{2})}\,. \tag{111}\] Then, appealing to the Hamiltonian-Jacobi formalism, one can demonstrate a manifestly Poincare-covariant computation of impulses of classical observables (e.g., spin kicks in terms of spin frames \(\lambda_{\alpha}{}^{I}\), \(\bar{\lambda}_{I\dot{\alpha}}\)) in terms of the twistorial Poisson brackets (33), (59). Furthermore, one can also discover that the 1PM potential is secretly a "zig-zag" Newman-Janis shift of the Newtonian potential [114]: \[V(\vec{p}_{1},\vec{z}_{1},\vec{\bar{z}}_{1},\vec{p}_{2},\vec{z}_{2},\vec{\bar{z }}_{2})\ \sim\ \frac{e^{+h\varphi}}{\big{|}\vec{z}_{1}-\vec{\bar{z}}_{2}\big{|}}+\frac{e^{-h \varphi}}{\big{|}\vec{\bar{z}}_{1}-\vec{z}_{2}\big{|}}\,, \tag{112}\] where \(\varphi\) is the relative rapidity between the two objects. This new formula amusingly manifests the underlying physical processes: \(e^{+h\varphi}/|\vec{z}_{1}-\vec{\bar{z}}_{2}|\) means that the "zig" of the first black hole receives a helicity \(+h\) force carrier emitted from the "zag" of the second black hole (the "zig" and "zag" here not only refer to the holomorphy but also the extremal dyonic objects that arise from a Dirac/Misner string structure of spinning black holes [115, 76], which respectively localize at zig and zag spin-spacetime positions). 
The formula previously reported in the literature [116, 117, 118] gets retrieved when the spin gauge is changed to a non-covariant one (Pryce-Newton-Wigner [54, 30]). In fact, the zig-zag denominators can be argued not only from the helicity flip of the massless propagators but also from the Kahler geometry of massive twistor space [62].

## Appendix C Bootstrapping spinspacetime; Inevitability of complex-valued coordinates

In this last appendix, it is shown that the holomorphic or anti-holomorphic complex spinspacetime coordinates introduced in the main text are the only possibilities for a set of mutually commuting Poincare-covariant spacetime coordinates, up to a "reparameterization redundancy." The inputs of the bootstrap are the following. Firstly, we suppose the existence of Poincare charges \(J^{\mu\nu}\), \(p_{\mu}\) as well as an operator \(D\) defining the notion of mass dimension, with the assumption that \(p^{2}<0\). Their algebra is given in (3) and (7), which we reproduce below for the reader's convenience: \[\begin{split}[J^{\mu\nu},J^{\rho\sigma}]&=i\,(-4\delta^{[\mu}{}_{[\kappa}\eta^{\nu][\rho}\delta^{\sigma]}{}_{\lambda]})J^{\kappa\lambda}\,,\\ [J^{\mu\nu},p_{\rho}]&=i\,p_{\sigma}(-2\eta^{\sigma[\mu}\delta^{\nu]}{}_{\rho})\,,\\ [p_{\mu},p_{\nu}]&=0\,,\\ [J^{\mu\nu},D]&=0\,,\\ [p_{\mu},D]&=i\,p_{\mu}\,.\end{split} \tag{10}\] Secondly, a set of four variables \(q^{\mu}\) defines "spacetime coordinates" if the following brackets are satisfied: \[[q^{\mu},J^{\rho\sigma}]=i\,(-2\eta^{\mu[\rho}\delta^{\sigma]}{}_{\nu})\,q^{\nu}\,, \tag{11a}\] \[[q^{\mu},p_{\rho}]=i\,\delta^{\mu}{}_{\rho}\,, \tag{11b}\] \[[q^{\mu},D]=-i\,q^{\mu}\,. \tag{11c}\] (11a)-(11b) assert that \(q^{\mu}\) transforms properly under the Poincare action; (11c) states that \(q^{\mu}\) carries mass dimension \(-1\). Lastly, such coordinates are commutative if \[[q^{\mu},q^{\nu}]=0\,. \tag{11}\] Now our goal is to find \(q^{\mu}\) in terms of \(p_{\mu}\), \(J^{\mu\nu}\), \(D\) that realizes the coordinate axioms (11a)-(11c) as well as the commutativity condition (11), given the algebra (10). First of all, (11a) simply means that \(q^{\mu}\) cannot depend on any Lorentz-violating constant tensors carrying one or more indices. Then, the space of vectors satisfying (11c) is spanned by \[p^{\mu}/p^{2}\,,\quad\hat{x}^{\mu}\,,\quad\hat{y}^{\mu}\,,\quad J^{\mu}{}_{\nu}\hat{x}^{\nu}\,,\quad\cdots\,, \tag{12}\] with the coefficients being functions of mass dimension zero scalars \[D\,,\quad J_{\mu\nu}J^{\mu\nu}\,,\quad p_{\mu}J^{\mu}{}_{\rho}J^{\rho}{}_{\nu}p^{\nu}/p^{2}\,,\quad\cdots\,. \tag{13}\] The following identity for antisymmetric tensors \(A^{\mu\nu}\), \(B^{\mu\nu}\) should be considered for setting up a linearly independent basis: \[(*A)^{\mu\rho}(*B)_{\rho\nu}=-\frac{1}{2}\delta^{\mu}{}_{\nu}\,(A_{\rho\sigma}B^{\rho\sigma})-A^{\mu\rho}B_{\rho\nu}\,. \tag{14}\] The definition of \(\hat{x}^{\mu}\), \(\hat{y}^{\mu}\) here is that of the main text: \[\hat{x}^{\mu}:=J^{\mu\nu}p_{\nu}/p^{2}\,,\quad\hat{y}^{\mu}:=-*J^{\mu\nu}p_{\nu}/p^{2}\,. \tag{15}\] However, the condition (11b) truncates the tower (12) simply to \[p^{\mu}/p^{2}\,,\quad\hat{x}^{\mu}\,,\quad\hat{y}^{\mu}\,. \tag{16}\] It is because the right-hand side \(i\,\delta^{\mu}{}_{\rho}\) contains no Lorentz generator, while the commutator \([\ \,,p_{\rho}]\) reduces the number of Lorentz generators only by one. 
Similarly, the tower (13) also gets truncated so that all the coefficients should be sole functions of \(D\): there is no scalar combination that involves only one \(J^{\mu\nu}\), due to its antisymmetry. Hence \(q^{\mu}\) takes the generic form \[q^{\mu}=\alpha_{0}(D)p^{\mu}/p^{2}+\alpha_{1}(D)\hat{x}^{\mu}+\alpha_{2}(D)\, \hat{y}^{\mu}\,. \tag{17}\] Now computing \([q^{\mu},p_{\rho}]\) and imposing (11b) implies \[\begin{split}\delta^{\mu}{}_{\rho}&=\alpha_{1}(D) \,(\delta^{\mu}{}_{\rho}-p^{\mu}p_{\rho}/p^{2})-\alpha_{0}^{\prime}(D)p^{\mu}p _{\rho}/p^{2}\\ &\quad-\alpha_{1}^{\prime}(D)p_{\rho}\,\hat{x}^{\mu}-\alpha_{2}^ {\prime}(D)p_{\rho}\,\hat{y}^{\mu}\,,\end{split} \tag{18}\] from which it follows that \[\alpha_{0}^{\prime}(D)=-1\,,\quad\alpha_{1}(D)=1\,,\quad\alpha_{2}^{\prime}(D)= 0\,. \tag{19}\] Hence the form of \(q^{\mu}\) gets constrained to \[\begin{split} q^{\mu}&=(c_{0}-D)p^{\mu}/p^{2}+\hat{x }^{\mu}+c_{2}\,\hat{y}^{\mu}\,,\\ &=c_{0}p^{\mu}/p^{2}+x^{\mu}+c_{2}\,\hat{y}^{\mu}\,,\end{split} \tag{20}\] where \[x^{\mu}:=J^{\mu\nu}p_{\nu}/p^{2}-D\,p^{\mu}/p^{2} \tag{21}\] is the spacetime coordinates introduced in (6), while \(c_{0}\) and \(c_{2}\) are constants. Finally, the commutator \([q^{\mu},q^{\nu}]\) evaluates to \[[q^{\mu},q^{\nu}]=[x^{\mu},x^{\nu}]+2c_{2}[x^{[\mu},\hat{y}^{\nu]}]+c_{2}{}^{ 2}[\hat{y}^{\mu},\hat{y}^{\nu}]\,, \tag{22}\] while straightforward computations using (108) and (109) show that \[\begin{split}&[x^{\mu},x^{\nu}]=-i\,S^{\mu\nu}/p^{2}=[\hat{y}^{ \mu},\hat{y}^{\nu}]\,,\\ &[x^{\mu},\hat{y}^{\nu}]=-2i\,\hat{y}^{(\mu}p^{\nu)}/p^{2}\,. \end{split} \tag{110}\] Thus, it follows that \[[q^{\mu},q^{\nu}]=-\frac{i(1+{c_{2}}^{2})}{p^{2}}\,S^{\mu\nu}\,, \tag{111}\] from which it becomes clear that the commutativity condition (104) is fulfilled only if \(c_{2}=\pm i\): \[q^{\mu}=c_{0}p^{\mu}/p^{2}+(x^{\mu}\pm i\,\hat{y}^{\mu})\,. \tag{112}\] Therefore, the only possible Poincare-covariant and commutative coordinates are the holomorphic or antiholomorphic spinspacetime coordinates \(x^{\mu}\pm iy^{\mu}\) found in the main text, up to an arbitrariness along the direction longitudinal to the momentum with a c-number coefficient (note that \(c_{0}\) also accounts for the promotion of \(\hat{y}^{\mu}\) to \(y^{\mu}\) by the addition of \(\tilde{D}\), as the center generator \(\tilde{D}\) can be incorporated in this derivation effectively as a c-number). Said in another way, _there does not exist a Poincare covariant notion of real commutative spacetime_: for any real \(c_{2}\), the right-hand side of (111) cannot be made zero. To make it vanish, the use of complex numbers is unavoidable. The reader might wonder how exactly a set of real-valued commutative coordinates can be achieved if some of our bootstrap conditions are relaxed. Let us explicitly spell out such real coordinates for concreteness: \[\begin{split}\mathbf{x}^{\mu}&:=\frac{1}{\mathcal{ P}_{\kappa}p^{\kappa}}\left(\delta^{\mu}{}_{\rho}-\frac{p^{\mu}p_{\rho}}{p^{2}} \right)J^{\rho\nu}\mathcal{P}_{\nu}-\frac{p^{\mu}}{p^{2}}\,D\,,\\ \mathcal{P}_{\mu}&:=p_{\mu}+(-p^{2})^{1/2}u_{\mu}\,, \end{split} \tag{113}\] where \(u_{\mu}\) is a constant one-form. It is straightforward to show that \(q^{\mu}=\mathbf{x}^{\mu}\) fulfills (102b) and (102c). 
A lengthy calculation shows that \[[\mathbf{x}^{\mu},\mathbf{x}^{\nu}]=\frac{i(1+u^{2})}{\left(\mathcal{P}_{ \kappa}u^{\kappa}-(-p^{2})^{1/2}(1+u^{2})\right)^{2}}\,S^{\mu\nu}\,, \tag{114}\] so (104) is satisfied if the reference one-form is chosen to be timelike and unit: \[u^{2}=-1\,. \tag{115}\] Of course, (102a) is not satisfied due to the involvement of a Lorentz-violating reference structure \(u_{\mu}\): \[\begin{split}&[\mathbf{x}^{\mu},J^{\rho\sigma}]=i\,(-2\eta^{\mu[ \rho}\delta^{\sigma]}{}_{\nu})\,\mathbf{x}^{\nu}+\frac{i}{\mathcal{P}_{\kappa }p^{\kappa}}\,\mathbf{N}^{\mu|\rho\sigma}\,,\\ &\mathbf{N}^{\mu|\rho\sigma}=2S^{\mu[\rho}(\mathcal{P}-p)^{\sigma ]}+(x-\mathbf{x})^{\mu}\,(\mathcal{P}\wedge p)^{\rho\sigma}\,.\end{split} \tag{116}\] Since \(u_{\rho}\star\mathbf{N}^{\mu|\rho\sigma}=\frac{1}{2}u^{\rho}\,\varepsilon_{ \rho\sigma\kappa\lambda}\,\mathbf{N}^{\mu|\kappa\lambda}=0\), covariance under the \(\mathrm{SO}(3)\) rotations perpendicular to \(u^{\mu}\) is preserved. Also, note that the total angular momentum is split as \[\begin{split}& J^{\mu\nu}=2\mathbf{x}^{[\mu}p^{\nu]}+\mathbf{S}^{ \mu\nu}\,,\\ &\mathbf{S}^{\mu\nu}=J^{\mu\nu}-(p^{\mu}\mathcal{P}_{\rho}J^{ \rho\nu}+p^{\nu}\mathcal{P}_{\rho}J^{\mu\rho})/p^{\kappa}\mathcal{P}_{\kappa }\end{split} \tag{117}\] so that \[\mathcal{P}_{\mu}\,\mathbf{S}^{\mu\nu}=0\,, \tag{118}\] which is the Pryce-Newton-Wigner spin supplementary condition [54; 55; 30]. The upshot here is that reality engages in a trade-off relationship with Lorentz covariance, given translation covariance and commutativity. The advent of complex-valued coordinates might seem bizarre or radical at first sight. However, we have seen in the main text that such a complex geometry unveils a genuine "\(zig\)-\(\bar{z}ag\) physics" of massive spinning objects, all the while beautifully manifesting covariance under the bona fide global Poincare symmetry. Thus the seemingly bizarre construction might rather be considered as the most reasonable and physical option. In this light, we are invited to abandon reality and nudge ourselves into a little unusual re-envisioning of spacetime.
2304.00977
Reduce, Reuse, Recycle: Selective Reincarnation in Multi-Agent Reinforcement Learning
'Reincarnation' in reinforcement learning has been proposed as a formalisation of reusing prior computation from past experiments when training an agent in an environment. In this paper, we present a brief foray into the paradigm of reincarnation in the multi-agent (MA) context. We consider the case where only some agents are reincarnated, whereas the others are trained from scratch -- selective reincarnation. In the fully-cooperative MA setting with heterogeneous agents, we demonstrate that selective reincarnation can lead to higher returns than training fully from scratch, and faster convergence than training with full reincarnation. However, the choice of which agents to reincarnate in a heterogeneous system is vitally important to the outcome of the training -- in fact, a poor choice can lead to considerably worse results than the alternatives. We argue that a rich field of work exists here, and we hope that our effort catalyses further energy in bringing the topic of reincarnation to the multi-agent realm.
Claude Formanek, Callum Rhys Tilbury, Jonathan Shock, Kale-ab Tessera, Arnu Pretorius
2023-03-31T07:58:52Z
http://arxiv.org/abs/2304.00977v1
# Reduce, Reuse, Recycle: Selective Reincarnation in Multi-Agent Reinforcement Learning ###### Abstract 'Reincarnation' in reinforcement learning has been proposed as a formalisation of reusing prior computation from past experiments when training an agent in an environment. In this paper, we present a brief foray into the paradigm of reincarnation in the _multi-agent_ (MA) context. We consider the case where only some agents are reincarnated, whereas the others are trained from scratch - _selective_ reincarnation. In the fully-cooperative MA setting with heterogeneous agents, we demonstrate that selective reincarnation can lead to higher returns than training fully from scratch, and faster convergence than training with full reincarnation. However, the choice of _which_ agents to reincarnate in a heterogeneous system is vitally important to the outcome of the training - in fact, a poor choice can lead to considerably worse results than the alternatives. We argue that a rich field of work exists here, and we hope that our effort catalyses further energy in bringing the topic of reincarnation to the multi-agent realm. ## 1 Introduction Reinforcement Learning (RL) is a field that has existed for many years, but has recently seen an explosion of interest and research efforts. Since the incorporation of deep neural networks into the paradigm (Mnih et al., 2013), the community has witnessed success in a wide array of tasks, many of which previously seemed intractable (Silver et al., 2016). A commonly-cited feat is achieving superhuman performance in various games, both classical (Schrittwieser et al., 2020) and modern (Berner et al., 2019; Wurman et al., 2022). Such games can represent situations which are high-dimensional, combinatorially complex, and non-linear, and thus demonstrate the sophistication of the RL approach to sequential decision making. Even with the successes of single-agent RL, many real-world settings are inherently multi-agent, where multiple diverse agents act together in a shared environment. The success of Multi-Agent Reinforcement Learning (MARL) has been similarly captivating in this context, with demonstrations of emergence of high-level concepts such as coordination and teamwork (Samvelyan et al., 2019), and even trade (Johanson et al., 2022). Despite these victories, the discipline of RL still faces a series of fierce challenges when applied to _real-world_ situations, not least the intense computation often required for training (Agarwal et al., 2022). The multi-agent case, though highly applicable to the real world, is plagued further by problems of non-stationarity (Papoudakis et al., 2019), partial observability (Papoudakis et al., 2021), and the 'curse of dimensionality' (Du and Ding, 2021). We postulate that RL, and MARL specifically, is a powerful tool to help us model, understand, and solve complex processes and phenomena. First, though, it is clear that these challenges must be mitigated. Progress is being made in this regard, across a host of research strategies such as transfer learning (Zhu et al., 2020), ad hoc teamwork (Stone et al., 2010), and zero-shot coordination (Hu et al., 2020). Another crucial effort is to leverage prior computation, to avoid the unnecessary duplication of work. In a typical RL research project, an algorithm is trained _tabula rasa_ - that is, without prior experience or encoded knowledge. Sometimes, such an approach is desirable: for example, it was the express intention of Silver et al. 
(2017) to train their AlphaZero agent _tabula rasa_, for the sake of learning to play Go without learning from human data. However, in many practical settings, training from scratch every time is slow, expensive, and also _unnecessary_. For example, we may want to iterate on a problem or test out a new strategy, and do so quickly, without starting over in each case. In this vein, Agarwal et al. (2022) have recently proposed a formalisation of a research paradigm entitled 'Reincarnating RL,' where previous computation is reused in future work. These authors argue that large, real-world RL systems already take this approach, out of necessity, but in a way that is often ad hoc and informal. Through the creation of a reincarnation framework, not only does a researcher gain benefits in their own experiments, it further allows the field itself to be democratised - enabling the sharing of checkpoints, model weights, offline datasets, etc., to accelerate development. This dimension is particularly salient for low-resourced researchers, who can piggyback off the computing power available to large research labs. Reincarnation is certainly not a panacea for the real-world challenges of RL, but it does provide a springboard both for novel ideas and for new researchers to enter the field. We resonate with this call, and wish to motivate similarly for reincarnation in the MARL context. To catalyse the excitement for this paradigm, we focus in this paper on a particular aspect of reincarnation that may be useful in MARL: _selective_ reincarnation. To illustrate where such a situation is applicable, consider an example of controlling a large, complex industrial plant, consisting of an assortment of _heterogeneous_ agents. Notice that this scenario is in the realm of real-world problems. Suppose we are training our system using a MARL algorithm with a decentralised controller, but this training is computationally expensive, on the order of days-long. Conceivably, we may notice that some agents in our system learn competently - perhaps their task is simpler, or the algorithmic design suits their intended behaviour; call these the X agents. Other agents might not fare as well and we would like to train them from scratch; call these the Y agents. We wish to find new strategies for the Y agents: maybe we ought to test a new exploration routine, a novel neural architecture, or a different framing of the problem. Instead of retraining the entire system from scratch after each change in our Y agent strategy, we wonder if we can selectively reincarnate the already-performant X agents and thereby enable faster training times or higher performance for the Y agents. In this paper, we make three contributions. Firstly, we hope to usher in this nascent paradigm of reincarnation to MARL, where it is vitally needed. The underlying philosophy of leveraging prior computation already exists in the MARL setting (e.g. Kono et al. (2014)), but we aim to begin formalising the field, as done by Agarwal et al. (2022) for the single-agent case. Specifically, we formalise the concept of _selective_ reincarnation. Secondly, we demonstrate interesting phenomena that arise during a preliminary selectively-reincarnated MARL experiment. We find that, with certain agent subsets, selective reincarnation can yield higher returns than training from scratch, and faster convergence than training with full reincarnation. Interestingly, other subsets result in the opposite: markedly worse returns. 
We present these results as a doorway to a rich landscape of ideas and open questions. Thirdly, we offer a codebase as a framework for selective reincarnation in MARL, from which other researchers can build.

## 2 Preliminaries

### Multi-Agent Reinforcement Learning

There are many different formulations for MARL tasks, including competitive, cooperative and mixed settings. The focus of this work is on the cooperative setting. Fully cooperative MARL with shared rewards can be formulated as a _decentralised partially observable Markov decision process_ (Dec-POMDP) (Bernstein et al., 2002). A Dec-POMDP consists of a tuple \(\mathcal{M}=(\mathcal{N},\mathcal{S},\{\mathcal{A}^{i}\},\{\mathcal{O}^{i}\},P,E,\rho_{0},r,\gamma)\), where \(\mathcal{N}\equiv\{1,\ldots,n\}\) is the set of \(n\) agents in the system and \(s\in\mathcal{S}\) describes the true state of the system. The initial state distribution is given by \(\rho_{0}\). However, each agent \(i\in\mathcal{N}\) receives only partial information from the environment in the form of observations given according to an emission function \(E(o_{t}|s_{t},i)\). At each timestep \(t\), each agent receives a local observation \(o_{t}^{i}\) and chooses an action \(a_{t}^{i}\in\mathcal{A}^{i}\) to form a joint action \(\mathbf{a}_{t}\in\mathcal{A}\equiv\prod_{i}^{N}\mathcal{A}^{i}\). Typically under partial observability, each agent maintains an observation history \(o_{0:t}^{i}=(o_{0},\ldots,o_{t})\), or implicit memory, on which it conditions its policy \(\mu^{i}(a_{t}^{i}|o_{0:t}^{i})\), to perform action selection. The environment then transitions to a new state in response to the joint action and current state, according to the state transition function \(P(s_{t+1}|s_{t},\mathbf{a}_{t})\), and provides a shared numerical reward to each agent according to a reward function \(r(s,a):\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\). We define an agent's return as its discounted cumulative rewards over the \(T\) episode timesteps, \(G^{i}=\sum_{t=0}^{T}\gamma^{t}r_{t}^{i}\), where \(\gamma\in(0,1]\) is a scalar discount factor controlling how myopic agents are with respect to rewards received in the future. The goal of MARL in a Dec-POMDP is to find a joint policy \((\pi^{1},\ldots,\pi^{n})\equiv\pi\) such that the return of each agent \(i\), following \(\pi^{i}\), is maximised with respect to the other agents' policies, \(\pi^{-i}\equiv(\pi\backslash\pi^{i})\). That is, we aim to find \(\pi\) such that: \[\forall i:\pi^{i}\in\arg\max_{\hat{\pi}^{i}}\mathbb{E}\left[G^{i}\mid\hat{\pi}^{i},\pi^{-i}\right]\]

### Independent Q-Learning

The Q-value function \(Q^{\pi}(s,a)\) for a policy \(\pi(\cdot\mid s)\) is the expected sum of discounted rewards obtained by choosing action \(a\) at state \(s\) and following \(\pi(\cdot\mid s)\) thereafter. DQN (Mnih et al., 2013) is an extension of Q-Learning (Watkins, 1989) which learns the Q-function, approximated by a neural network \(Q_{\theta}\) with parameters \(\theta\), and follows an \(\epsilon\)-greedy policy with respect to the learnt Q-function. One limitation of DQN is that it can only be applied to discrete-action environments. DDPG (Lillicrap et al., 2016), on the other hand, can be applied to continuous-action environments by learning a deterministic policy \(\mu(s):\mathcal{S}\rightarrow\mathcal{A}\) which is trained to output the action which maximises the learnt Q-function at a given state. Tampuu et al. 
(2015) showed that in a multi-agent setting such as _Pong_, independent DQN agents can successfully be trained to cooperate. Similarly, independent DDPG agents have successfully been trained in multi-agent environments (Lowe et al., 2017). To train independent DDPG agents in a Dec-POMDP we instantiate a Q-function \(Q_{\theta}^{i}(o_{0:t}^{i},a_{t}^{i})\) for each agent \(i\in\mathcal{N}\), which conditions on each agent's own observation history \(o^{i}\) and action \(a^{t}\). In addition, we also instantiate a policy network for each agent \(\mu_{o}^{i}(o_{t}^{i})\) which takes agent observations \(o_{t}^{i}\) and maps them to actions \(a_{t}^{i}\). Each agent's Q-function is independently trained to minimise the temporal difference (TD) loss, \(\mathcal{L}_{Q}(\mathcal{D}^{i})\), on transition tuples, \((o_{t}^{i},a_{t}^{i},r,o_{t+1}^{i})\), sampled from its experience replay buffer \(\mathcal{D}^{i}\) collected during training, with respect to parameters \(\theta^{i}\): \[\mathcal{L}_{Q}(\mathcal{D}^{i},\theta^{i})=\mathbb{E}_{o_{t}^{i},a_{t}^{i},r _{t},o_{t+1}^{i}\sim\mathcal{D}}\left[(Q_{\theta^{i}}^{i}(o^{i},a^{i})-r_{t}- \gamma\hat{Q}_{\theta^{i}}^{i}(o_{t+1}^{i},\hat{\mu}_{\phi^{i}}^{i}(o_{t+1}))) ^{2}\right]\] where \(\hat{Q}_{\theta}\) and \(\hat{\mu}_{\phi}\) are delayed copies of the Q-network and policy network respectively, commonly referred to as the target networks. The policy network is trained to predict, given an observation \(o^{i}\), the action \(a^{i}\) that maximises the Q-function, which can be achieved by minimising the following policy loss with respect to parameters \(\phi^{i}\): \[\mathcal{L}_{\mu}(\mathcal{D}^{i},\phi^{i})=\mathbb{E}_{o_{t}^{i}\sim\mathcal{ D}^{i}}\left[-Q_{\theta^{i}}^{i}(o_{t}^{i},\mu_{\phi^{i}}^{i}(o_{t}^{i}))\right]\] To improve the performance of independent learners in a Dec-POMDP, agents usually benefit from having memory (Hausknecht and Stone, 2015). Accordingly, we can condition the Q-networks and policies on observation histories \(o_{0:t}^{i}\) instead of just individual observations \(o_{t}^{i}\). In practice, we use a recurrent layer in the neural networks. In addition, to further stabilize learning, we use eligibility traces (Sutton and Barto, 2018) in the form of \(Q(\lambda)\), from Peng and Williams (1994). ## 3 Related Work The concept of reusing computation for learning in some capacity is neither new, nor constrained to the domain of RL. We feel that topics such as transfer learning to new tasks (Bozinovski and Fulgosi, 1976), fine-tuning (e.g. Sharif Razavian et al. (2014)), and post-deployment model updates' fit into this broad philosophy. In RL specifically, the concept has also existed for some time (e.g. Fernandez and Veloso (2006)), and other RL researchers are currently pursuing similar aims with different nomenclature (e.g. using offline RL as a 'launchpad' for online RL1). Indeed, Agarwal et al. (2022) accurately highlight that their conception of the field of reincarnation is a formalisation of that which already exists. Footnote 1: See _Updatable Machine Learning_ (UpML) workshop: [https://upml2022.github.io/](https://upml2022.github.io/) In MARL, too, there are extant works with the flavour of reincarnation. For example, both Kono et al. (2014) and Gao et al. (2021) explored the concept of 'knowledge reuse' in MARL. The idea of lifelong learning (Nekoei et al., 2021) fits similarly into this paradigm. Authors have also used offline pre-training in the MARL setting (e.g. Meng et al. (2021)). 
In a large-scale instance, Vinyals et al. (2019) naturally reused computation for the training of their AlphaStar system. Specifically, it is also interesting to note their concept of using agents to help train other agents with a 'league' algorithm. In a sense, this approach is somewhat similar to one of the anticipated benefits of selective reincarnation, where good agents can assist by teaching bad agents. Nonetheless, we believe there has not yet been a formalisation of the field of multi-agent reincarnation, akin to the efforts done by Agarwal et al. (2022). Moreover, it seems that being selective in the agent reincarnation choice is also a novel specification. ## 4 Definitions **Definition 1** (Multi-Agent Reincarnation): In a MARL system (see Section 2.1) with the set \(\mathcal{N}\) of \(n\) agents, an agent \(i\in\mathcal{N}\) is said to be reincarnated (Agarwal et al., 2022) if it has access to some artefact from previous computation to help speed up training from scratch. Typically such an agent is called a _student_ and the artefact from previous computation is called a _teacher_. The set of teacher artefacts in the system is denoted \(T\). There are several types of artefacts which can be used as teachers, including (but not limited to): teacher policies \(\pi_{T}\) or \(\mu_{T}\), offline teacher datasets \(\mathcal{D}_{T}\), and teacher model weights \(\phi_{T}\) or \(\theta_{T}\). **Definition 2** (Selective Reincarnation): A selectively reincarnated MARL system with \(n\) agents is one where \(y\in[1,n)\) agents are trained from scratch (i.e. _tabula rasa_) and \(x=n-y\) agents are reincarnated (Agarwal et al., 2022). The sets of reincarnated and tabula rasa agents are denoted \(X\) and \(Y\) respectively. A MARL system with \(y=n\) is said to be fully tabula rasa, whereas a system with \(x=n\) is said to be fully reincarnated. ## 5 Case Study: Selectively-Reincarnated Policy-to-Value MARL Agarwal et al. (2022) presented a case study in _policy-to-value_ RL (PVRL), where the goal is to accelerate training of a student agent given access to a sub-optimal teacher policy and some data from it. Similarly, we now present a case study in multi-agent PVRL, focusing on one of the methods invoked by Agarwal et al. (2022), called 'Rehearsal' (Gulcehre et al., 2020). We set up our experiments as follows. We use an independent DDPG (Lillicrap et al., 2016) configuration, with some minor modifications to enable it to leverage offline teacher data for reincarnation. Specifically, we make two changes. Firstly, we compose each mini-batch of training data from 50% offline teacher data and 50% student replay data, similar to Gulcehre et al. (2020). This technique should give the student the benefit of seeing potentially high-reward transitions from the teacher, while also getting to see the consequences of its own actions from its replay data. Secondly, we add layer-norm to the critic network, to mitigate extrapolation error due to out-of-distribution actions, as per Ball et al. (2023). For the sake of the current question of selective reincarnation, we use the HalfCheetah environment, first presented by Wawrzynski (2007), and later brought into the MuJoCo physics engine (Todorov et al., 2012). Specifically, we focus on the variant introduced by Peng et al. (2021) with their Multi-Agent MuJoCo (MAMuJoCo) framework, where each of the six degrees-of-freedom is controlled by a separate agent. 
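To make the training loop concrete, the sketch below shows, in a PyTorch-style sketch of our own (the released OG-MARL code may be organised quite differently), the two modifications described above: a rehearsal-style mini-batch that is half offline teacher data and half student replay data for each reincarnated agent, and a critic with layer normalisation. The `agent` attributes, buffer interfaces and hyperparameters are placeholders rather than names from the paper's codebase; recurrent observation histories, Q(λ) targets and termination masking are omitted for brevity.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Q-network with layer normalisation to curb extrapolation error."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))


def sample_batch(agent, batch_size):
    """Half teacher / half replay mini-batch for reincarnated agents, pure replay otherwise."""
    if agent.reincarnated:
        half = batch_size // 2
        teacher = agent.teacher_dataset.sample(half)            # offline data from the teacher run
        student = agent.replay_buffer.sample(batch_size - half)
        return {k: torch.cat([teacher[k], student[k]]) for k in student}
    return agent.replay_buffer.sample(batch_size)


def ddpg_update(agent, batch, gamma=0.99):
    obs, act, rew = batch["obs"], batch["act"], batch["rew"]
    next_obs = batch["next_obs"]

    # Critic: regress onto a one-step TD target built from the target networks.
    with torch.no_grad():
        next_q = agent.target_critic(next_obs, agent.target_policy(next_obs))
        target = rew.view(-1, 1) + gamma * next_q
    critic_loss = ((agent.critic(obs, act) - target) ** 2).mean()
    agent.critic_optim.zero_grad()
    critic_loss.backward()
    agent.critic_optim.step()

    # Policy: ascend the learnt Q-value of the policy's own action.
    policy_loss = -agent.critic(obs, agent.policy(obs)).mean()
    agent.policy_optim.zero_grad()
    policy_loss.backward()
    agent.policy_optim.step()
```

Each agent holds its own critic, policy, optimisers and buffers, so this update is applied independently per agent, in keeping with the independent DDPG setup described in the preliminaries.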
We denote these six agents as the following: the back ankle (BA), the back knee (BK), the back hip (BH), the front ankle (FA), the front knee (FK), and the front hip (FH). This ordering corresponds to the array indices in the MAMuJoCo environment, from \(0\) to \(5\) respectively. We illustrate the HalfCheetah setup in the appendix, in Figure A.1. For the set of proficient teacher policies, we initially train on the 6-agent HalfCheetah using tabula-rasa independent DDPG over 1 million training steps, and store the experiences using the OG-MARL framework (Formanek et al., 2023) so that they can be used as the teacher datasets. We then enumerate all combinations of agents for reincarnation, a total of \(2^{6}=64\) subsets. With each subset, we retrain the system on HalfCheetah, where that particular group of agents gains access to their teachers' offline data (i.e. they are reincarnated). For each combination, we train the system for \(200k\) timesteps, remove the teacher data, and then train for a further \(50k\) timesteps on student data alone. Each experiment is repeated over five seeds. For the 'maximum return' metric, we find the timestep at which the return, averaged over the five seeds, is highest. For the 'average return' metric, we average the return over all seeds and all timesteps. We use these metrics as proxies for performance and speed to convergence respectively.

### Impact of Teacher Dataset Quality

To begin with, we fully reincarnate the MARL system, giving all of the DDPG agents access to their teachers' datasets. Since the quality of the samples in the teacher's dataset likely has a marked impact on the learning process, we create two datasets for comparison: 'Good' and 'Good-Medium', where these names indicate the typical returns received across samples. Figure A.2, in the appendix, shows the distribution of the returns in these two datasets. We run the fully reincarnated configuration with each of these datasets, along with a _tabula rasa_ baseline. Figure 1 presents these results.

Figure 1: Performance using the two different teacher datasets. In the plot, a solid line indicates the mean value over the runs, and the shaded region indicates one standard error above and below the mean. In the table, values are given with one standard error.

Notice in Figure 1(a) that providing access solely to 'Good' teacher data initially does _not_ help speed up training and even seems to hamper it. It is only after around \(125k\) timesteps that we observe a dramatic peak in performance, thereafter significantly outperforming the _tabula rasa_ system. In contrast, having additional 'Medium' samples enables higher returns from the beginning of training - converging faster than the solely 'Good' dataset. One may be surprised by these results - that it takes the system some time to realise benefits from high-return teacher data. However, we postulate that when using the 'Good' dataset, the teacher data is narrowly focused around high-return strategies, yet the corresponding state and action distributions are likely very different to the students' own state and action distributions early in training. Consequently, the students struggle to leverage the teacher datasets until later in training, when the state-action distribution mismatch is minimised. This belief is evidenced by the results in Figure 1, and further supports the notion that the quality of the teachers' datasets has an impact on the outcomes of reincarnation. 
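Before moving on, the experimental grid and the two summary metrics defined above can be stated in a few lines of Python. This is an illustrative sketch only; the array shapes and names are our own assumptions, not taken from the released code.

```python
import itertools
import numpy as np

AGENTS = ["BA", "BK", "BH", "FA", "FK", "FH"]

# All 2^6 = 64 choices of which agents are given their teacher's offline data.
subsets = [combo for r in range(len(AGENTS) + 1)
           for combo in itertools.combinations(AGENTS, r)]
assert len(subsets) == 64

def summarise(returns):
    """Summarise one experiment.

    returns: array of shape (n_seeds, n_eval_points) holding evaluation returns.
    'Maximum return' is the highest seed-averaged return over the run (a proxy
    for performance); 'average return' is the mean over all seeds and all
    evaluation points (a proxy for speed of convergence).
    """
    seed_mean = returns.mean(axis=0)
    return seed_mean.max(), returns.mean()
```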
We feel this research direction is itself a promising one for future works, which we discuss in more detail in our roadmap, in Section 6. For the purposes of this investigation, focusing on selective reincarnation and not dataset quality, we simply report the remainder of our results using the 'Good-Medium' dataset. Nevertheless, for completeness, we run our experiments with both datasets, and provide these results publicly.

### Arbitrarily Selective Reincarnation

We now focus on the core aspect of our investigation: selective reincarnation. Firstly, we approach the problem at a high level by reincarnating \(x\) of the \(n\) agents and aggregating across all combinations for that \(x\). That is, we do not study _which_ agents are selectively reincarnated for a given \(x\). For example, for \(x=2\), we reincarnate all pairs of agents in separate runs: \(\left\{\left(\mathtt{BA},\mathtt{BK}\right),\left(\mathtt{BA},\mathtt{BH}\right),\ldots\right\}\), and then average those results. As an important point, notice that the count of combinations depends on \(x\), calculated as \(\binom{n}{x}=\frac{n!}{x!(n-x)!}\) - e.g. there is just one way to reincarnate all six of the agents, but there are twenty ways to reincarnate three of the six agents. Accordingly, we average over a different count of runs depending on \(x\), which affects the magnitude of the standard-error metrics. We highlight this detail to warn against comparing the confidence values across these runs. The essence of these results, instead, is to show the mean performance curve.

The returns from these runs, computed over five seeds times \(\binom{6}{x}\) combinations, are given in Figure 2, with both the graphical plot and the tabular values reported. In Figure 2(a), we notice firstly that reincarnation enables higher returns. We already saw in Figure 1 that full reincarnation yields higher returns than _tabula rasa_, but we now see that a selectively-reincarnated setup also yields benefits - e.g. reincarnating with just half of the agents provides an improvement over _tabula rasa_. We do see that reincarnating with just one agent is somewhat detrimental in this case, with a slightly lower maximum return over the training period, but not significantly.

Figure 2: Selective reincarnation performance, aggregated over the number of agents reincarnated. In the plot, a solid line indicates the mean value over the runs, and the shaded region indicates one standard error above and below the mean. In the table, values are given with one standard error. A reminder: take caution when comparing the standard error metrics across values of \(x\), since the number of runs depends on \(\binom{6}{x}\).

### Targeted Selective Reincarnation Matters

Though the results from Figure 2 are interesting, we now present a vital consideration: in a multi-agent system, even in the simpler homogeneous case, agents can sometimes assume dissimilar roles (e.g. Wang et al. (2020) show the emergence of roles in various tasks). In the HalfCheetah environment particularly, we feel there are likely unique requirements for the ankle, knee, and hip joints, and that these differ across the front and back legs, in order for the cheetah to walk. It is thus important that we compare, for a given \(x\), the results across various combinations. That is, e.g., compare reincarnating (BA,BK) with (BA,BH), etc. Though we run experiments over _all_ possible combinations, plotting these can quickly become unwieldy and difficult to study. 
Instead, we show here only the best and worst combinations for each \(x\), as ranked by the average return achieved. These plots can be seen in Figure 3, with values tabulated in Table 1. We release results for all combinations online (available at: [https://api.wandb.ai/links/off-the-grid-marl-team/5yxrdt3q](https://api.wandb.ai/links/off-the-grid-marl-team/5yxrdt3q)).

We see in these results that the choice of which agents to reincarnate plays a significant role in the experiment's outcome. For example, consider the choice of reincarnating three agents, shown in Figure 3(d): selecting \((\texttt{BH},\texttt{FK},\texttt{FH})\) instead of \((\texttt{BA},\texttt{BK},\texttt{FK})\) increases the maximum return by 33%, and almost doubles the average return. Similar improvements exist for other values of \(x\). We also notice an interesting pattern in the best subsets selected for reincarnation (denote the best subset for \(x\) as \(X_{x}^{*}\)): as \(x\) increases, agents are strictly added to the subset. That is, \(X_{1}^{*}=\{\texttt{BH}\}\), \(X_{2}^{*}=X_{1}^{*}\cup\{\texttt{FK}\}\), and so on. Moreover, for these best subset choices, the maximum returns monotonically increase with \(x\), up to full reincarnation. For average returns, indicating the time to convergence, we see a similar trend - where increasing the number of reincarnated agents results in faster convergence. However, the exception to this pattern is for \(x=5\), where higher average returns are achieved than for full reincarnation, \(x=n=6\) (see Table 1). This result implies that it is possible for training to converge faster when selectively reincarnating instead of fully reincarnating - another potential benefit of the selective reincarnation framework. To affirm these points, we use the MARL-eval tool from Gorsane et al. (2022), built upon work by Agarwal et al. (2021), to plot the associated performance profiles, probability of improvement graphs, and aggregate scores, in Figure 4.

We use these results as clear evidence of the following: selective reincarnation can yield benefits, with higher returns and faster convergence over _tabula rasa_ and possibly even full reincarnation; _but_ one must be very careful of which agents are selected, for a bad choice can lead to a sub-optimal outcome. Naturally, this diagnosis opens up many further questions. How can we know, ideally _a priori_, whether a given combination is a poor or excellent one? In this example of the HalfCheetah environment, we might try to reason about various combinations: e.g., from Figure 3(f), we see that reincarnating the back leg, front hip, and _front knee_ is a significantly better choice than the back leg, the front hip, and the _front ankle_ - does this result perhaps reveal something about the nature of how HalfCheetah learns? We show some other interesting groupings in the appendix, in Figure A.3.

Figure 4: MARL-eval (Gorsane et al., 2022; Agarwal et al., 2021) plots comparing the best performing combination, based on final performance after \(250k\) training steps, of \(x\) reincarnated agents for each \(x\in[0,n]\).

## 6 Roadmap for Multi-Agent Reincarnation

We now present a brief roadmap of some avenues to explore in this domain. **Selective Reincarnation in MARL.** There are many other conceivable methods for doing selective reincarnation in MARL which we did not explore. 
In this work we focused on a method similar to'rehearsal' (Gulcehre et al., 2020), but future works could experiment with methods such as 'jump-starting' (Uchendu et al., 2022), 'kick-starting' (Schmitt et al., 2018) and offline pre-training. We find offline pre-training a particularly promising direction for selectively reincarnating systems of independent DDPG agents - e.g. one could apply a behaviour cloning regularisation term to the policy loss in DDPG, as per Fujimoto and Gu (2021), and then to wean it off during training, as per Beeson and Montana (2022). Another direction could be to develop bespoke selective reincarnation methods; for example, a method to enable agents to 'trust' those agents with a teacher more than they would otherwise. Additionally, there is a trove of work to be done in how to understand which agents have the highest impact when reincarnated, and perhaps to reason about this delineation _a priori_. Finally, we also encourage larger-scale selective-reincarnation experiments on a wider variety of environments, and perhaps even tests with real-world systems. **Beyond Independent Reincarnation.** In this paper, we focused on using independent DDPG for learning in MARL, but we believe many valuable open-problems exist outside of such an approach. For example, how does one effectively reincarnate MARL algorithms that belong to the paradigm of _Centralised Training Decentralised Execution_ (CTDE), such as MADDPG (Lowe et al., 2017) and QMIX (Rashid et al., 2020)? It is not clear how one might selectively reincarnate agents with a centralised critic. In general, outside of just selective reincarnation, we also showed evidence that the quality of the teacher policy and data can have a significant impact on the outcomes of reincarnation in RL. Exploring the benefits of, e.g., a curriculum-based, student-aware teacher could be a direction for future work. One could also explore ideas of curricula in the algorithm design itself - e.g. solely training the reincarnated agents' critics but freezing their policies, until the other agents 'catch up.' Another question about reincarnation in MARL is how teachers can help students learn to cooperate more quickly. Learning cooperative strategies in MARL can often take a lot of exploration and experience. Could reincarnating in MARL help reduce the computational burden of learning cooperative strategies from scratch? Many exciting avenues exist, and we envision the community exploring interesting open problems in this space. ## 7 Conclusion In this paper, we explored the topic of reincarnation (Agarwal et al., 2022), where prior computation is reused for future experiments, within the context of multi-agent reinforcement learning. Specifically, we proposed the idea of _selective_ reincarnation for this domain, where not all the agents in the system are reincarnated. To motivate this idea, we presented a case study using the HalfCheetah environment, and found that selective reincarnation can result in higher returns than if all agents learned from scratch, and faster convergence than if all agents were reincarnated. However, we found that the choice of which agents to reincarnate played a significant role in the benefits observed, and we presented this point as the core takeaway. We used these results to argue that a fruitful field of work exists here, and finally listed some avenues that may be worth exploring as a next step. ## Acknowledgements Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
2309.04535
Gravitational imaging through a triple source plane lens: revisiting the $Λ$CDM-defying dark subhalo in SDSSJ0946+1006
The $\Lambda$CDM paradigm successfully explains the large-scale structure of the Universe, but is less well constrained on sub-galactic scales. Gravitational lens modelling has been used to measure the imprints of dark substructures on lensed arcs, testing the small-scale predictions of $\Lambda$CDM. However, the methods required for these tests are subject to degeneracies among the lens mass model and the source light profile. We present a case study of the unique compound gravitational lens SDSSJ0946+1006, wherein a dark, massive substructure has been detected, whose reported high concentration would be unlikely in a $\Lambda$CDM universe. For the first time, we model the first two background sources in both I- and U-band HST imaging, as well as VLT-MUSE emission line data for the most distant source. We recover a lensing perturber at a $5.9\sigma$ confidence level with mass $\log_{10}(M_\mathrm{sub}/M_{\odot})=9.2^{+0.4}_{-0.1}$ and concentration $\log_{10}c=2.4^{+0.5}_{-0.3}$. The concentration is more consistent with CDM subhalos than previously reported, and the mass is compatible with that of a dwarf satellite galaxy whose flux is undetectable in the data at the location of the perturber. A wandering black hole with mass $\log_{10}(M_\mathrm{BH}/M_{\odot})=8.9^{+0.2}_{-0.1}$ is a viable alternative model. We systematically investigate alternative assumptions about the complexity of the mass distribution and source reconstruction; in all cases the subhalo is detected at around the $\geq5\sigma$ level. However, the detection significance can be altered substantially (up to $11.3\sigma$) by alternative choices for the source regularisation scheme.
Daniel J. Ballard, Wolfgang J. R. Enzi, Thomas E. Collett, Hannah C. Turner, Russell J. Smith
2023-09-08T18:00:08Z
http://arxiv.org/abs/2309.04535v2
Gravitational imaging through a triple source plane lens: revisiting the \(\Lambda\)CDM-defying dark subhalo in SDSSJ0946+1006 ###### Abstract The \(\Lambda\)CDM paradigm successfully explains the large-scale structure of the Universe, but is less well constrained on sub-galactic scales. Gravitational lens modelling has been used to measure the imprints of dark substructures on lensed arcs, testing the small-scale predictions of \(\Lambda\)CDM. However, the methods required for these tests are subject to degeneracies among the lens mass model and the source light profile. We present a case study of the unique compound gravitational lens SDSSJ0946+1006, wherein a dark, massive substructure has been detected, whose reported high concentration would be unlikely in a \(\Lambda\)CDM universe. For the first time, we model the first two background sources in both I- and U-band HST imaging, as well as VLT-MUSE emission line data for the most distant source. We recover a lensing perturber at a \(5.9\sigma\) confidence level with mass \(\log_{10}(M_{\rm sub}/M_{\odot})=9.2^{+0.4}_{-0.1}\) and concentration \(\log_{10}c=2.4^{+0.5}_{-0.3}\). The concentration is more consistent with CDM subhalos than previously reported, and the mass is compatible with that of a dwarf satellite galaxy whose flux is undetectable in the data at the location of the perturber. A wandering black hole with mass \(\log_{10}(M_{\rm BH}/M_{\odot})=8.9^{+0.2}_{-0.1}\) is a viable alternative model. We systematically investigate alternative assumptions about the complexity of the mass distribution and source reconstruction; in all cases the subhalo is detected at around the \(\geq 5\sigma\) level. However, the detection significance can be altered substantially (up to \(11.3\sigma\)) by alternative choices for the source regularisation scheme. keywords: gravitational lensing: strong - dark matter ## 1 Introduction The standard \(\Lambda\)CDM model of cosmology describes a dark energy (\(\Lambda\)) dominated universe whose mass comprises \(\sim 85\%\) Cold Dark Matter (CDM). In contrast to baryons, this is an exotic type of matter outside of the standard model of particle physics that interacts with electromagnetism very weakly if at all. Assuming that Dark Matter (DM) is a particle, no candidate has been directly observed in a laboratory yet (e.g. Roszkowski et al., 2018; Schumann, 2019; Billard et al., 2022). Nonetheless, CDM theory successfully describes observations of the Universe on \(\sim\)Mpc scales and above (see e.g Bullock and Boylan-Kolchin, 2017), such as the hierarchical formation of large scale structure (Anderson et al., 2014; Hildebrandt et al., 2017) and the cosmic microwave background (Planck Collaboration et al., 2020). Whilst DM is needed on galactic scales to explain rotation curves (Rubin and Ford, 1970; Rubin et al., 1978, 1985), it is entirely possible that the DM is not precisely that of the CDM paradigm; alternative models may be required to explain observed phenomena on smaller, sub-galactic scales (Diemand et al., 2007, 2008). In this lower-mass regime, alternatives to CDM have been proposed to resolve apparent discrepancies between observations and simulations (e.g. Del Popolo and Le Delliou, 2017), though many of these can also be explained by other means than the DM model (see e.g. Fairbairn, 2022). Alternative DM models make different predictions about the properties of individual halos as well as their populations. For example, higher thermal velocities in Warm Dark Matter (WDM, e.g. 
Schneider et al., 2012; Lovell et al., 2014) models lead to less concentrated halo mass profiles (e.g. Ludlow et al., 2016; Bose et al., 2017) and a suppression of small-mass halos (Lovell et al., 2014, 2021). Deviations from CDM on sub-galactic scales or in dwarf galaxies can, however, be obscured by their tidal interactions with more massive luminous halos (e.g. Despali et al., 2022; Moreno et al., 2022). While classical "hot" DM models are ruled out by observations of the large-scale Universe (see e.g. Primack and Gross, 2001), the small scale effects of WDM models are much harder to constrain. The formation of luminous galaxies typically requires a halo mass of around \(\gtrsim 5\times 10^{9}M_{\odot}\)(Benitez-Llambay and Frenk, 2020), thereby limiting the sample of directly observable satellite galaxies (Kim et al., 2018; Newton et al., 2021; Nadler et al., 2021). Instead we must rely on observations that are directly sensitive to the gravitational effects of the DM itself, such as strong gravitational lensing. This provides a direct probe of small-mass halos, since the lensing effects of galaxies and halos depend only on their mass, irrespective of their luminosity. DM subhalos introduce perturbations on top of the lensing by the main galaxy and its halo. Subhalos - as well as other small halos projected along the same line-of-sight - have been revealed primarily by observations of (i) anomalous flux ratios of multiply lensed quasars (Mao and Schneider, 1998; Bradac et al., 2002; Metcalf and Zhao, 2002; Mao et al., 2004; Kochanek and Dalal, 2004; McKean et al., 2007; Xu et al., 2015; Gilman et al., 2019, 2020; Hsueh et al., 2020; Nadler et al., 2021); (ii) perturbations on the arcs of lensed extended source galaxies (Dalal and Kochanek, 2002; Vegetti et al., 2010, 2012, 2014; Hezaveh et al., 2016). The latter approach, known as gravitational imaging, led to a few detections of DM subhalos in previous studies (Vegetti et al., 2010, 2012; Nierenberg et al., 2014; Hezaveh et al., 2016; Nightingale et al., 2022), including one notable case in the lens system SDSSJ0946+1006 (henceforth J0946), which is the focus of this work. J0946 is worthy of further study for two reasons. First, its claimed substructure has both an unexpectedly high mass for a halo not hosting a galaxy (Vegetti et al., 2010, hereafter V10) and an unexpectedly high concentration given its mass, making it a substantial outlier with respect to CDM simulations (Nelson et al. (2015); Minor et al. (2021) - hereafter M21). Second, J0946 is a compound lens system, with a lens at \(z_{l}=0.222\) and three sources at \(z_{s1}=0.609\), \(z_{s2}=2.035\) and \(z_{s3}=5.975\)(Collett and Smith, 2020, hereafter CS20). These four galaxies are henceforth referred to as the main deflector, \(s1\), \(s2\), and \(s3\) respectively. Previous gravitational imaging studies of J0946 have only considered the lensing of \(s1\) as observed in the F814W band by the _Hubble Space Telescope_ (HST). In this paper, we extend on previous work in two ways, modelling all three sources in both the near-infrared F814W and the ultraviolet F336W bands simultaneously. Modelling the compound lensing should improve the macro-model of the main deflector, since compound lens modelling is much less affected by degeneracies than the modelling of a single source plane system (see e.g. Schneider and Sluse, 2014). 
Furthermore, one of the lensed images of s3 happens to fall close to the projected location of the reported dark subhalo, providing additional constraints on its properties. Modelling both HST bands simultaneously will allow us to disentangle source light complexity from mass model complexity, since lensing is achromatic whereas star-forming galaxies typically look very different in the ultraviolet and infrared. This paper is structured as follows. In Section 2, we describe the data, the geometry of the compound lensing in J0946 and our modelling methodology, and include a test of our sensitivity to a DM substructure. In Section 3, we present and discuss our results for a single source plane, and compare them to similar literature model setups. In Section 4, we present and discuss the results of our full triple source plane lens modelling. In Section 5, we then perform systematics tests on various model assumptions. Finally, we conclude our findings in Section 6. ## 2 Methodology ### Data We model two HST observations: the 2096 s ACS image in F814W (I-band) from Gavazzi et al. (2008) and the 5772 s WFC3/UVIS observation in F336W (U-band) from Sonnenfeld et al. (2012). The I-band image allows us to compare with previous results in the literature, whilst adding the U-band probes clumpier emission in the source galaxies and gives excellent angular resolution. Though available in the HST archive, we neglect intermediate optical wavelength bands as these are unlikely to capture any qualitatively different structures; the same is true for the longest available wavelength band, WFC3/IR F160W, whose resolution is moreover poorer than the I-band image. Data in both of our modelled bands are shown in Figure 1, with the reported location of the substructure from V10 overlaid. The I-band image as analysed has a scale of \(0.05^{\prime\prime}\)/pixel; the U-band image covers the same area but with \(0.04^{\prime\prime}\) pixels. We use the same lens-light-subtracted I-band image as Collett and Auger (2014, hereafter CA14), but we do not subtract the lens light from the U-band image since it is negligible at this wavelength, at the location of the arcs. Prior to the lensing analysis, the physical coordinates of the U-band data were aligned to those of the I-band data, to correct for a small relative offset between the pipeline-processed images. With the optimised shifts (\(\delta x=0.027^{\prime\prime}\), \(\delta y=-0.023^{\prime\prime}\)), this correction is smaller than a single pixel. Figure 1 also shows the VLT-MUSE narrow-band image extracted in a 5 A window around \(8475\) A, capturing Lyman-\(\alpha\) emission from the most distant lensed source. This image is not used explicitly in our lens modelling; we instead delens the centroid positions of the two \(s3\) image features and their astrometric uncertainties, derived from fitting a Gaussian to each image. Since the MUSE data have lower angular resolution, the image registration relative to HST is more uncertain than for the HST U-band versus I-band image alignment. To account for this, we artificially blur the I-band image with the MUSE Point Spread Function (PSF) and align this with a simulated HST I-band image of the arcs constructed out of the appropriate wavelength slices of the MUSE data cube. The resultant alignment uncertainty is propagated into the uncertainty of the \(s3\) image centroids. We model image pixels within one of four manually masked regions in the HST imaging of J0946, shown in Figure 2. 
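The relative registration of the different datasets can be illustrated with a standard technique. The sketch below is our own illustration with hypothetical file names, not the procedure actually applied to the data: it estimates and applies a sub-pixel shift by phase cross-correlation, assuming the two images have first been resampled onto a common pixel grid.

```python
# Illustrative sketch (not the reduction actually used): sub-pixel alignment of
# the U-band image to the I-band image by phase cross-correlation.
# File names are hypothetical and the images are assumed to share a pixel grid.
import numpy as np
from astropy.io import fits
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

i_band = fits.getdata("J0946_F814W.fits")   # hypothetical file name
u_band = fits.getdata("J0946_F336W.fits")   # hypothetical file name

# (dy, dx) offset of the U-band frame relative to the I-band frame, to 1/100 pixel
offset, error, _ = phase_cross_correlation(i_band, u_band, upsample_factor=100)

# shift the U-band image onto the I-band coordinate frame
u_band_aligned = nd_shift(u_band, offset, order=3)
print("applied shift (pixels):", offset)
```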
We avoid the computational challenge of modelling both sources simultaneously (CA14), by reconstructing the two sources and two bands as separate parts of the likelihood, which are simultaneously fit with the same mass model. This is a reasonable approach, since the two rings do not overlap on the sky. ### Ray Tracing For strong gravitational lensing, the source plane position, \(\mathbf{\beta}\), of a photon is displaced from its observed, lensed, image plane position, \(\mathbf{\theta}\), by the reduced deflection angle, \(\mathbf{\alpha}\), according to the lens equation: \[\mathbf{\beta}=\mathbf{\theta}-\mathbf{\alpha}(\mathbf{\theta})\,. \tag{1}\] The deflection angle, \(\mathbf{\alpha}\), of a lens is related to the lensing potential on its lens plane, \(\mathbf{\psi}\), such that \[\mathbf{\alpha}(\mathbf{\theta})=\nabla\mathbf{\psi}(\mathbf{\theta})\,, \tag{2}\] where \(\mathbf{\psi}\) depends on the 2D projected lens mass distribution, as well as the angular diameter distances between observer, lens and source. Equation 1 is for a system with one lens and one source plane, but can be generalised to give the compound lens equation: \[\mathbf{\theta}_{j}=\mathbf{\theta}_{0}-\sum_{i=1}^{j}\eta_{ij}\mathbf{\alpha}_{i-1}(\mathbf{ \theta}_{i-1})\text{ for j}>0\,. \tag{3}\] Here we have adjusted our notation from Equation 1 to no longer distinguish between lens and source, since in a compound lensing system a single galaxy can be both. In Equation 3, \(\mathbf{\theta}_{i}\) generically denotes an angular position on a redshift plane, i, where \(i=0\) is the foreground-most lens plane and observed image plane; any \(i>0\) refers to the \(i^{\rm th}\) source (or further lens) plane behind it. For a lensing plane \(l\), the extra parameter \(\eta_{ij}\) describes the scaling of the reduced deflection angles from one source plane, \(i\), to another, \(j\), defined as a ratio of angular diameter distances: \[\eta_{ij}=\frac{D_{i}D_{lj}}{D_{li}D_{j}}\,. \tag{4}\] Throughout the multi-source plane lensing portions of this work, we define reduced deflection angles of a lens relative to light coming from the plane immediately behind the lens. This is not the convention of Schneider et al. (1992), who define all reduced deflection angles relative to light coming from the furthest plane. Our convention allows easier comparison between our work and other single and double source plane models of J0946. A detailed explanation of our chosen convention is available in Appendix A. Throughout this work we fix the angular diameter distances of the system assuming the \(\Lambda\)CDM cosmological parameters \(\Omega_{\rm m}=0.307\), \(\Omega_{\Lambda}=0.693\), and \(h_{0}=0.6777\)(Planck Collaboration et al., 2014). Figure 1: HST imaging of J0946 in the I–band (left) and U–band (middle), and continuum–subtracted VLT–MUSE narrow–band imaging (width 5 Å centred at 8475 Å) showing the Ly–\(\alpha\) emission at \(z=5.975\) (right). The cyan cross represents the best fit location of the substructure in as reported in V10 (which is visually indistinguishable from the best fit location in M21). Figure 2: The data pixels used in our modelling of \(s1\) (magenta masked) and \(s2\) (green masked) in I–band (top) and U–band (bottom) HST data. All other pixels are ignored. For illustrative purposes, the image contrast of \(s2\) is enhanced and a central region of image pixels is removed. ### Lens Modelling To model the data, we follow the semi-linear inversion approach of Warren and Dye (2003). 
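Before specifying the mass model, it may help to make the distance scalings of Equations 3 and 4 concrete. The short sketch below is our own illustration rather than code from the modelling pipeline: it evaluates \(\eta\) for the redshift planes of J0946 with the cosmology quoted above, using astropy for the angular diameter distances; the index convention (deflections defined relative to the plane immediately behind each lens) follows Appendix A and is assumed here.

```python
# Illustrative sketch (not the modelling pipeline): deflection-angle scalings of
# Equation 4 for the redshift planes of J0946, under the quoted cosmology.
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.77, Om0=0.307)      # Omega_Lambda = 0.693 by flatness
z_planes = [0.222, 0.609, 2.035, 5.975]          # main deflector, s1, s2, s3

def D(z):
    """Angular diameter distance from the observer to redshift z."""
    return cosmo.angular_diameter_distance(z)

def D12(z1, z2):
    """Angular diameter distance between two redshift planes."""
    return cosmo.angular_diameter_distance_z1z2(z1, z2)

def eta(z_lens, z_i, z_j):
    """Rescale reduced deflections defined for source plane i so that they apply
    to source plane j, for a lens at z_lens (Equation 4); dimensionless."""
    return float((D(z_i) * D12(z_lens, z_j)) / (D12(z_lens, z_i) * D(z_j)))

# e.g. rescale the main deflector's deflection (defined with respect to s1) to s2:
print(eta(z_planes[0], z_planes[1], z_planes[2]))
```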
We define a model for the lensing mass distribution, and for each realisation of the non-linear parameters of that model we linearly solve for the best-fitting source. #### 2.3.1 Non-linear Mass Model We assume that the main deflector is described by an Elliptical Power Law (EPL) model with external shear. We consider two possible scenarios for evidence comparison: one with and one without a dark subhalo in the form of a truncated Navarro-Frenk-White (tNFW) profile. We refer to these two scenarios as our smooth and tNFW-perturbed models, respectively. Additionally, in our multi-source plane models in Sections 4 and 5, \(s1\) and \(s2\) behave as lenses as well as sources; we model their mass distributions as singular isothermal sphere (SIS) profiles. The EPL profile has six parameters that behave non-linearly in the model: the Einstein radius, \(\vartheta_{E}\), the logarithmic slope, \(\gamma\), the axis ratio, \(q\), the position angle, \(\varphi\), and two centroid coordinates \((x,y)\). An SIS is identical to an EPL with \(\gamma=2\) and zero ellipticity. The external shear has two non-linear parameters: the shear strength, \(\Gamma\), and the shear angle, \(\varphi_{\Gamma}\). The tNFW profile is based upon the profile derived by Navarro et al. (1996), whose density, \(\rho\), at radial distance, \(r\), is related to a characteristic density, \(\rho_{0}\), by \[\rho_{\rm NFW}(r)=\frac{\rho_{0}}{\frac{r}{r_{s}}(1+\frac{r}{r_{s}})^{2}}\,. \tag{5}\] As in M21, we do not assume a fixed mass-concentration relation for the substructure, and therefore model both its concentration, \(c\), and virial mass, \(M_{200}\). The relation between the scale radius in Equation 5, \(r_{s}\), and \(c\) is given by: \[c=r_{200}/r_{s}\,, \tag{6}\] where \(r_{200}\) is considered the virial radius enclosing \(M_{200}\), though it is strictly the radius enclosing an average density that is 200 times the critical density of the Universe. Following M21, \(M_{200}\) is formally defined under the assumption that the subhalo can be considered a field halo, which is then tidally stripped by its massive host. To account for this tidal stripping, we assume that this profile is truncated according to Baltz et al. (2009): \[\rho_{\rm tNFW}(r)=\frac{r_{t}^{2}}{r_{t}^{2}+r^{2}}\rho_{\rm NFW}(r)\,. \tag{7}\] We also calculate the total mass of the substructure, \(M_{sub}\), which accounts for the effect of the truncation radius, \(r_{t}\). \(M_{sub}\) is a finite quantity for the above choice of truncation. The free parameters of our tNFW profile are \(M_{200}\), \(c\), \(r_{t}\), and centre position \((x,y)\). Throughout this work we assume that the dark perturber is a subhalo at \(z=0.222\), the redshift of the main deflector. M21 also find a good fit to the data when the perturber is a line-of-sight halo between the observer and the lens plane, with the mass and concentration marginally decreased but still anomalously high. #### 2.3.2 Mass and concentration from simulations Extrapolating the field halo mass-concentration relation of Shao et al.
(2023) (based upon the CAMELS suite of hydrodynamic \(\Lambda\)CDM simulations, Villaescusa-Navarro et al., 2021) to subhalos of virial mass \(M_{200}=10^{10}M_{\odot}\), we expect a mean concentration of \(\log_{10}c=1.3\) (with DM only), \(\log_{10}c=1.2\) (with baryonic physics according to IllustrisTNG, see Nelson et al., 2017; Pillepich et al., 2018; Springel et al., 2018; Marinacci et al., 2018; Naiman et al., 2018; Nelson et al., 2019), and \(\log_{10}c=1.4\) (with baryonic physics according to SIMBA, see Dave et al., 2019). Taking the mass-concentration relation of Dutton and Maccio (2014), we would expect a median value of \(\log_{10}c=1.1\). The typical scatter around the mass-concentration relation in simulations is of the order of \(\sigma_{\rm scatter}\approx 0.1\) dex (see e.g. Dutton and Maccio, 2014). We note, however, that the differences that we later quote between these results and our own depend on the assumed parameters describing baryonic physics in the IllustrisTNG and SIMBA models, i.e. feedback from supernovae and active galactic nuclei. #### 2.3.3 Reconstructing unlensed source brightness distributions Since we do not know the morphology of a source a priori, we infer it simultaneously with the lens parameters from the data. It is clear from the clumpiness of the arcs that the sources must be intrinsically irregular. Therefore, we adopt a pixellated free-form reconstruction of the source light. Specifically, we evaluate source brightness values defined on an adaptive Voronoi mesh created from a subset of image plane pixels ray-traced onto each source plane. In this work, we cast back all the pixels that fall within the mask of a given source for the band in consideration. The advantage of such an adaptive mesh is that it allows for a higher resolution source at those locations where the magnification through the lens becomes the strongest. We follow Nightingale et al. (2021, 2022) and employ a Natural Neighbour Interpolation scheme to determine sub-pixel source brightness values (Sibson, 1981). We choose this scheme because (i) it yields a smooth likelihood function which makes sampling the non-linear parameters much easier, and (ii) it forces the gradient of the source to be continuous, which is particularly important for substructure identification. To impose the astrophysical prior that sources require a certain degree of smoothness, we additionally introduce a regularisation strength parameter for each source. The brightness values at the vertices follow a Gaussian regularisation prior whose covariance matrix penalises the source brightness gradient or curvature (see Suyu et al., 2006, for details). Fiducially, we opt for gradient regularisation, in contrast to V10 who use curvature regularisation and M21 who reconstruct their source out of a summation of analytic light profiles. However, since we do not a priori know how smooth our source reconstructions should be, we leave the regularisation strengths for the reconstructions of \(s1\) and \(s2\) as free parameters to be inferred by the model directly from the data. The centroid position \((x,y)\) of \(s3\) is also fit for, but the unlensed light distribution of this source is not reconstructed. #### 2.3.4 Posterior and evidence calculation For model comparison, we evaluate both the posterior of the non-linear parameters, \(\mathbf{\xi}\), and the evidence of our models with and without a substructure.
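Both of these quantities rest on the linear source-inversion step described above. As a toy illustration of that step (our own, not the code used in this work; the operator and regularisation matrix below are simplified one-dimensional stand-ins for the adaptive Voronoi machinery), the most probable source for a given mass model follows from a single regularised linear solve:

```python
# Toy sketch of the regularised linear step of the semi-linear inversion
# (Warren & Dye 2003; Suyu et al. 2006): for a lensing + blurring operator A,
# noise covariance C and gradient-penalising matrix H, the most probable source is
#   s = (A^T C^-1 A + lam * H)^-1 A^T C^-1 d .
import numpy as np

rng = np.random.default_rng(1)
n_img, n_src = 200, 40                # toy sizes; the real source lives on Voronoi vertices
A = rng.random((n_img, n_src))        # stand-in for the ray-tracing + PSF operator
s_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n_src))
sigma = 0.05
d = A @ s_true + rng.normal(0.0, sigma, n_img)

C_inv = np.eye(n_img) / sigma**2
D = np.diff(np.eye(n_src), axis=0)    # finite differences: penalising D s penalises the gradient
H = D.T @ D
lam = 10.0                            # regularisation strength (a free parameter in our models)

s_mp = np.linalg.solve(A.T @ C_inv @ A + lam * H, A.T @ C_inv @ d)
print("rms source residual:", np.sqrt(np.mean((s_mp - s_true) ** 2)))
```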
The posterior, \(\mathcal{P}(\mathbf{\xi}|\mathbf{d})\), relates to the likelihood function, \(\mathcal{L}_{\rm tot}(\mathbf{\xi})\), and the prior of model parameters, \(\mathcal{P}(\mathbf{\xi})\), according to: \[\mathcal{P}(\mathbf{\xi}|\mathbf{d})=\frac{\mathcal{L}_{\rm tot}(\mathbf{\xi})\mathcal{P}( \mathbf{\xi})}{\mathcal{Z}}\,. \tag{8}\] The full details of \(\mathcal{L}_{\rm tot}(\mathbf{\xi})\) are described in Appendix B. The Bayesian evidence, \(\mathcal{Z}\), is an integral of the likelihood multiplied by the prior, which normalizes the posterior, i.e.: \[\mathcal{Z}=\int d\mathbf{\xi}\mathcal{L}_{\rm tot}(\mathbf{\xi})\mathcal{P}(\mathbf{\xi} )\,. \tag{9}\] We evaluate the posterior and this integral using the pre-conditioned Monte Carlo package pocoMC (Karamanis et al., 2022). pocoMC generates posterior samples by following a Sequential Monte Carlo scheme combined with a Normalizing Flow, which preconditions the target distribution to remove correlations among its parameters (Karamanis et al., 2022)1. Evidences are calculated using the bridge sampling method and are consistent with those obtained from the nested sampling algorithm MultiNest (Feroz et al., 2009, 2019). When comparing two models, we report the \(N\sigma\) confidence level that one is preferred over the other, i.e. we assume that one of the considered models is true and map the model probability onto the \(N\sigma\) probability volume of a Gaussian distribution. Footnote 1: We choose the default hyper–parameters of pocoMC, i.e. an effective sample size of ESS= 0.95 and correlation coefficient \(\gamma=0.75\), but increase the number of particles to up to 6000. We further set the maximum number of MCMC steps to 10000. We found that these values ensure convergence of the posterior, given the multi–modality of the likelihood. ### Checking the sensitivity of our method for detecting substructures Claiming the detection or non-detection of a substructure requires knowledge of the sensitivity of the data (see e.g. Despali et al., 2022). To demonstrate that we are, in principle, sensitive to a substructure within the data at the reported location, we create a mock data set based upon our best smooth reconstruction of the I-band image of s1 (see Section 3) but adding the perturbation of a tNFW profile with the parameters reported in M21. Figure 3 illustrates how the inclusion of the substructure affects the closest arc, including the effects of the PSF and observational noise. We then remodel this data assuming both a smooth and tNFW-perturbed model, finding that the latter is preferred with a difference in the logarithmic evidence of \(\Delta\ln\mathcal{Z}=15.16\pm 0.03\) assuming gradient regularization of the source (corresponding to a \(5.2\sigma\) detection significance). Our posteriors are consistent with the input subhalo mass and concentration within \(1\sigma\). This suggests that we should be able to detect a substructure with similar properties to M21. However, since we have fixed the position, mass and concentration of the subhalo, a more rigorous sensitivity calculation would be required if we were searching for _new_ subhalos in J0946. Figure 3: Mock data for our sensitivity test, where panels (left to right) show the initial model image, a zoomed inset around the location of the reported substructure, the effect of blurring by the HST I–band PSF, and the addition of background noise akin to the original HST I–band data.
The top row is created from a smooth model for the lens, whilst the bottom row has an injected tNFW subhalo with the parameters of M21 at the cyan cross. The bottom right panel is used as mock data to recover the injected substructure with \(\sim 5\sigma\) confidence. ## 3 Single source plane model results and discussion In this section, we present the results of our single source plane models for J0946 and compare them with those of previous studies. ### I-band Model Modelling the I-band data of the inner arcs alone provides the closest comparison with previous studies of J0946 (e.g. V10, M21). We can reconstruct the data to the noise level assuming our smooth (EPL+Shear) model. Between our smooth and tNFW-perturbed models, we find that the posterior distributions of the macro-model parameters agree within the \(\sim 3\sigma\) level or closer (with the exception of the \(x\) coordinate of the centre of the lens). Posterior distributions for these parameters are shown in Figure 4, alongside the best-fit source reconstruction and normalised image-plane residuals, which demonstrate our ability to successfully model these arcs down to the noise level. In this single plane, I-band only case, the data prefers the existence of a tNFW substructure (\(\Delta\ln\mathcal{Z}=7.23\pm 0.03\)) with \(3.4\sigma\) confidence over the smooth model. Our macro-model parameters are within \(4\sigma\) of those reported by V10. Such differences are due to our prescription of our source model (gradient regularised, versus curvature regularised in V10) and our wider prior ranges on all parameters. The differences in likelihood and evidence between smooth and tNFW-perturbed models are recorded in Table 1. All priors and posterior results are documented in Appendix C. Regarding the mass and concentration of the substructure, we find \(\log_{10}(M_{200}/M_{\odot})=10.8^{+1.3}_{-0.6}\) and \(\log_{10}c=2.0^{+0.3}_{-0.3}\). Our results exceed all of the simulation values with a root-mean-squared difference of 2.7-3.6 \(\sigma_{c}\), with \(\sigma_{c}\) being the standard deviation of our concentration posterior. Our result is less of an outlier than M21 finds, both because of the greater uncertainty on our inferred parameters and the lower median value of the concentration. The subhalo mass, \(\log_{10}(M_{\rm sub}/M_{\odot})=10.0^{+0.4}_{-0.3}\), remains perplexing, however, given that such a massive object should host a detectable population of stars (V10). ### Dual I-band and U-band Model Simultaneously modelling the I- and U-band data for \(s1\) necessitates one additional non-linear parameter (the regularisation strength of the U-band source galaxy) but adds much more data to constrain the lens model. Doing this, the tNFW-perturbed model is preferred over the smooth model with an evidence ratio \(\Delta\ln\mathcal{Z}=14.34\pm 0.04\), corresponding to a \(5.0\sigma\) confidence detection. The addition of the U-band yields different posteriors on our macro-model parameters. Comparing with the I-band only case, the mass profile slope for the smooth model is significantly shallower (\(\gamma=1.92^{+0.03}_{-0.02}\) versus \(2.12^{+0.03}_{-0.07}\)). However, when the tNFW perturber is included, both our models prefer a super-isothermal slope (\(\gamma=2.27^{+0.05}_{-0.04}\) and \(2.23^{+0.02}_{-0.02}\) respectively). 
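The detection confidences quoted alongside these evidence ratios (and collected in Table 1) follow from the mapping described in Section 2.3.4, in which the model probability is converted into the equivalent Gaussian probability volume. A minimal sketch of one such mapping is given below; the numerical implementation is ours, but it reproduces the quoted confidences to the precision given.

```python
# Sketch (our implementation): convert a log-evidence ratio Delta ln Z into an
# N-sigma confidence, assuming one of the two models is true so that the
# disfavoured model has probability p = 1 / (1 + exp(Delta ln Z)), and quoting
# the Gaussian N whose two-sided tail probability equals p.
import numpy as np
from scipy.special import erfcinv

def delta_lnz_to_sigma(delta_lnz):
    p_disfavoured = np.exp(-np.logaddexp(0.0, delta_lnz))   # stable 1 / (1 + e^dlnz)
    return np.sqrt(2.0) * erfcinv(p_disfavoured)

for dlnz in (7.23, 14.34, 19.64):
    print(f"Delta lnZ = {dlnz:5.2f}  ->  {delta_lnz_to_sigma(dlnz):.1f} sigma")
# -> approximately 3.4, 5.0 and 5.9 sigma
```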
The differences in \(\gamma\) between smooth and tNFW-perturbed cases are likely caused by a source position transformation (Schneider & Sluse, 2014), from which our multi-plane modelling should not suffer. Despite the significant shifts in the parameters of the macro-model, the substructure mass and concentration are still consistent with the I-band only result within \(1\sigma\). Deviations from the predicted mass-concentration relations are on the level of 2.8-3.7 \(\sigma_{c}\). ## 4 Triple source plane model results and discussion In this section, we present the results from our triple source plane (henceforth 'fiducial') models, where we reconstruct \(s1\) and \(s2\) both in the I- and U-band simultaneously, whilst also delensing \(s3\) by mapping its two images to a common source plane position, with and without a tNFW perturbation. We use the same mass profiles and priors for the foreground lens as in our single-plane modelling, but we add an SIS at the centre of the delensed position of \(s1\), allowing for a small offset between the centroids of the mass and light. We similarly add an SIS at \(s2\) but enforce zero offset between the centroids of its mass and light, since CS20 showed that this assumption has negligible impact on \(s3\). We find that we are able to simultaneously reproduce the I- and U-band arcs of \(s1\) and \(s2\), and delens \(s3\). Our source reconstructions and residuals are shown in Figure 5. The positions of the third source are shown in Figure 6. The extra data afforded from the outer set of arcs give much tighter constraints on the macro-model. We find that the super-isothermal results of V10, M21, and our single plane tNFW-perturbed models, do a comparatively poorer job of reconstructing \(s2\). With our fiducial models, a near isothermal result is favoured for both the smooth and tNFW-perturbed cases, where \(\gamma=1.956^{+0.009}_{-0.010}\) and \(1.949^{+0.011}_{-0.010}\) respectively. The similarities between the recovered slopes and the reconstructed sources (as shown in Figure 7) are clear demonstrations that the source position transformation of Schneider & Sluse (2014) has been broken by our multi-plane modelling. The \(1\sigma\) and \(2\sigma\) posterior distribution contours for these models - as well as for the single plane dual I-band and U-band models - can be found in Appendix D. We find that the existence of the tNFW perturbation is preferred with an evidence ratio \(\Delta\ln\mathcal{Z}=19.64\pm 0.03\) over the smooth model, corresponding to a \(5.9\sigma\) detection. The preferred tNFW profile has a total mass \(\log_{10}(M_{\rm sub}/M_{\odot})=9.3^{+0.4}_{-0.1}\), with a virial mass \(\log_{10}(M_{200}/M_{\odot})=10.3^{+1.2}_{-0.6}\) and concentration \(\log_{10}c=2.4^{+0.9}_{-0.3}\). We show 2D posterior distributions of \(M_{\rm sub}\) and \(c\) against a selection of macro-model parameters, for the fidu \begin{table} \begin{tabular}{l l l} \hline **Data modelled** & \(\boldsymbol{\Delta\ln\mathcal{L}}\) & \(\boldsymbol{\Delta\ln\mathcal{Z}}\) (confidence) \\ \hline 1 source, I–band & 21.67 & 7.23\(\pm\)0.03 (\(3.4\sigma\)) \\ 1 source, I– \& U–band & 29.52 & 14.34\(\pm\)0.04 (\(5.0\sigma\)) \\ 3 sources, I– \& U–band & 38.18 & 19.64\(\pm\)0.03 (\(5.9\sigma\)) \\ \hline \end{tabular} \end{table} Table 1: The differences in best fit log–likelihood \(\Delta\ln\mathcal{L}\) and log–evidence \(\Delta\ln\mathcal{Z}\), between smooth and tNFW–perturbed models, shown for our single source plane and triple source plane results. 
These differences are quoted relative to the smooth case, such that positive values indicate preference for the tNFW–perturbed model. In brackets are the corresponding confidences of the detections. cial tNFW-perturbed model result in Figure 8, wherein we observe a notable degeneracy between the Einstein radius of the main deflector and the mass of its substructure, since the total mass within the Einstein ring is well-constrained. Otherwise, there are no strong degeneracies. The 2D \(M_{\rm sub}\)-\(c\) posterior distribution for our fiducial result is shown separately on the upper panel of Figure 9, overlaid with the single source plane results. Our fiducial \(M_{200}\)-\(c\) posterior appears on the bottom panel of Figure 9, which also shows the \(M_{200}\)-\(c\) relation of Dutton & Maccio (2014). The shape of this posterior distribution is similar to the results of M21, though our \(\sigma_{c}\) is greater than theirs primarily because of our more flexible source model. We find that our results differ from Dutton & Maccio (2014) and the other aforementioned mass-concentration relations by 2.6-3.3 \(\sigma_{c}\). Assuming the stellar mass-subhalo mass relation in Rodriguez-Puebla et al. (2012), our virial mass implies a stellar mass \(M_{\star}\sim 10^{7.5}M_{\odot}\). For a plausible stellar mass Figure 4: \(1\sigma\) and \(2\sigma\) contours of the posterior distribution for the EPL and external shear parameters for our model of \(s1\) in I-band only, with (cyan) and without (orange) the addition of a tNFW substructure. Inset: best fit source reconstruction (left) and residuals between the data and best fit model in units of standard deviation (right). These panels correspond to the tNFW–perturbed models, but are visually indistinguishable to the best fit smooth model results. to-light ratio of \(\sim 2M_{\odot}/L_{\odot}\) (appropriate to a passive dwarf galaxy - see e.g. Martin et al., 2008), this corresponds to an absolute magnitude \(M_{I}\approx-15.4\), typical of dwarf elliptical populations in nearby galaxy groups. At this luminosity, such objects have typical sizes \(\sim 1\)kpc (Venhola et al., 2019). Introducing a simulated galaxy of these properties scaled to \(z=0.222\) into the I-band image, we find that although such a galaxy would be detectable in isolation, it could not be unambiguously distinguished from other flux components if located at the position of the subhalo. Since the associated galaxy could easily be a factor of two fainter, or be more diffuse, than assumed here, we should not expect to see an easily-identified luminous galaxy hosted by the lensing substructure. The subhalo we have detected is therefore not unusually "dark", and appears compatible with being a dwarf satellite galaxy of the main deflector. Figure 5: Source plane reconstructions and normalised image plane residuals for our best fit smooth (left) and tNFW–perturbed (right) model, for (from top to bottom) \(s1\) in I-band, \(s1\) in U–band, \(s2\) in I–band and \(s2\) in U–band. ## 5 Systematic tests In this section, we examine several model assumptions that systematically could have influenced our ability to detect and measure a DM substructure. We perform tests on the choice of source regularisation and explore the effects of additional mass model complexity and an alternative hypothesis for the perturber. We explore all of these systematics for the triple source plane (I- and U-band) case only. 
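As a reference point for the halo parameters quoted above and revisited in the tests below, the following sketch (our own; the \(z=0\) Dutton & Maccio coefficients are taken from that paper, and the truncation radius in the example call is purely illustrative since its inferred value is not quoted here) converts a virial mass and concentration into the corresponding NFW scale radius, integrates the truncated profile of Equation 7 to obtain \(M_{\rm sub}\), and evaluates the median CDM concentration expected at a given mass.

```python
# Sketch (ours): NFW scales from (M200, c), total truncated mass M_sub, and the
# median CDM concentration of Dutton & Maccio (2014).  The z = 0 relation
# coefficients (0.905, -0.101) and the illustrative r_t are assumptions here.
import numpy as np
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM
from scipy.integrate import quad

cosmo = FlatLambdaCDM(H0=67.77, Om0=0.307)

def nfw_scales(m200, c, z=0.222):
    """Return (r200, r_s, rho0) in kpc / Msun kpc^-3 for an NFW halo."""
    rho_crit = cosmo.critical_density(z).to(u.Msun / u.kpc**3).value
    r200 = (3.0 * m200 / (4.0 * np.pi * 200.0 * rho_crit)) ** (1.0 / 3.0)
    r_s = r200 / c
    rho0 = m200 / (4.0 * np.pi * r_s**3 * (np.log(1.0 + c) - c / (1.0 + c)))
    return r200, r_s, rho0

def m_sub(m200, c, r_t, z=0.222):
    """Total mass [Msun] of the truncated profile (Eq. 5 with the Eq. 7 truncation)."""
    _, r_s, rho0 = nfw_scales(m200, c, z)
    rho = lambda r: rho0 / ((r / r_s) * (1.0 + r / r_s) ** 2) * r_t**2 / (r_t**2 + r**2)
    total, _ = quad(lambda r: 4.0 * np.pi * r**2 * rho(r), 1e-6, np.inf)
    return total

def log10_c_dutton_maccio(m200, h=0.6777):
    """Median z = 0 NFW concentration of Dutton & Maccio (2014)."""
    return 0.905 - 0.101 * np.log10(m200 * h / 1e12)

print(log10_c_dutton_maccio(1e10))                   # ~1.1, as quoted in Section 2.3.2
print(np.log10(m_sub(10**10.3, 10**2.4, r_t=1.0)))   # illustrative r_t = 1 kpc (assumed)
```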
### Degeneracy with source morphology One of the main systematic uncertainties is the degeneracy between the complexity of the mass and the source light distributions. While enforcing a smoother source could lead to a false positive detection of a lensing perturber, allowing too much freedom in the intrinsic structure of the source could lead to non-detections even in the presence of DM substructures. In our fiducial model, we chose a gradient regularization scheme for the source galaxies, which allows for small-scale source structure. Alternatively, we can suppress these small-scale source features by regularising over curvature. This is the regularisation choice of V10. In this case, the substructure is detected with much higher significance: \(\Delta\ln\mathcal{Z}=67.00\pm 0.02\), or \(11.3\sigma\). Such a detection claim would be over-confident in our analysis since the evidence actually prefers gradient regularisation at \(\sim\)20\(\sigma\) confidence. This result is true for both the smooth and perturbed models. It is concerning that the significance of the detection changes hugely between the two regularisation schemes since neither is astrophysically motivated. It remains an open question whether alternative regularisation schemes or source reconstruction techniques could raise or lower the evidence for a substructure. We leave this exploration to future work. The mass-concentration posterior for the substructure under the curvature regularisation scheme is shown in the centre panel of Figure 9. Whilst the detection significance has changed, the inferred subhalo parameters and their uncertainties have not changed significantly. The substructure Figure 6: The \(1\sigma\) and \(2\sigma\) astrometric uncertainties (black contours) on the two image plane positions from the MUSE data (background image) with our posterior of \(s3\) centroids forward ray-traced through our posterior of lens models to give our predicted \(1\sigma\) and \(2\sigma\) uncertainties on the image plane positions of \(s3\), for our smooth (orange), tNFW–perturbed (cyan) and point mass–perturbed (magenta) models. Figure 7: Isophotes of the I-band \(s1\) reconstruction given the best tNFW–perturbed and smooth results from (top) the single plane modelling and (bottom) triple plane modelling. The alignment of the two source reconstructions in the latter case is indicative of a broken mass–sheet degeneracy. would, therefore, remain a modest outlier given either regularization scheme. ### Mass model complexity #### 5.2.1 Angular structure in the main deflector Previous works have shown that lensing substructure inference can be sensitive to the flexibility of the main deflector mass model (see e.g. Nightingale et al., 2022; Minor et al., 2021). Therefore, we explore additional complexity in the foreground lens model by combining our EPL with the modes \(m\) of a multipole expansion: \[\kappa(x,y)=\kappa_{\rm EPL}(x,y)\times\left[\,1+k_{m}\cos\left(m(\varphi-\varphi _{m})\right)\,\right] \tag{10}\] where \(\varphi=\arctan\left(x/y\right)\) and \(0\leq k_{m}\leq 1\) is the amplitude of the \(m^{th}\) mode with phase \(\varphi_{m}\)2. Such an expansion can account for boxiness or diskiness of the lens galaxy. As in M21, we model multipole terms \(m=3\) and \(m=4\). We therefore add four non-linear parameters to the model: \(k_{3}\), \(k_{4}\), \(\varphi_{3}\) and \(\varphi_{4}\). The best fit source reconstructions and normalised image plane residuals are plotted in Appendix E. Footnote 2: See Chu et al. 
(2013) and appendix B of Xu et al. (2015) for more details on multipoles Multipoles perform comparably well at reconstructing the data as the tNFW perturbation. In fact, a smooth model with added multipoles performs marginally better in reconstructing J0946 than a tNFW-perturbed model, with the data preferring the presence of multipoles over the presence of the tNFW profile with \(1.5\sigma\) confidence. This is not solely due to there being fewer degrees of freedom in the multipoles case, since the best fit log-likelihood is also improved, with \(\Delta\ln\mathcal{L}=3.74\). The preference for non-zero multipole terms is unsurprising given detailed examination of the light profile, which reveals some disturbance in the shapes of the isophotes that can be absorbed by these extra parameters (Sonnenfeld et al., 2012). Modelling the multipole terms and a tNFW-perturbation simultaneously provides the best reconstruction, where the substructure is detected with \(6.2\sigma\) confidence. The inferred substructure in this case is more massive, with \(\log_{10}(M_{200}/M_{\odot})=10.6^{+1.1}_{-0.4}\), but less concentrated, with \(\log_{10}(c)=1.9^{+0.4}_{-0.3}\), than in our fiducial model. Differences to the compared mass-concentration relations go down to 2.0-2.9 \(\sigma_{c}\). The \(M_{200}\)-\(c\) posterior for this model is shown in the bottom panel of Figure 9. #### 5.2.2 Additional complexity on s1 Our fiducial model assumes a spherically symmetric mass distribution for \(s1\), though its light profile is noticeably elliptical (see e.g. the top panels of Figure 5). We therefore perform a systematic test where we assign a Singular Isothermal Ellipsoid (SIE) to \(s1\) rather than an SIS. This adds two parameters to our fiducial models: the axis ratio, \(q\), and position angle, \(\varphi\), of \(s1\). Our test shows that a smooth model prefers the presence of ellipticity components on \(s1\) over the presence of a substructure in the main deflector with \(2.9\sigma\) confidence, where both scenarios have the same number of degrees of freedom. Modelling smooth and tNFW-perturbed models with an ellipsoidal \(s1\) simultaneously yields a substructure of total mass \(\log_{10}(M_{\rm sub}/M_{\odot})=9.20^{+0.35}_{-0.21}\), virial mass \(\log_{10}(M_{200}/M_{\odot})=10.04^{+1.31}_{-0.52}\) and concentration \(\log_{10}c=2.53^{+0.59}_{-0.40}\) detected at the \(4.8\sigma\) confidence level; this is a lower evidence substructure result than the tNFW perturbation with multipoles. The difference to the \(\Lambda\)CDM predictions of the mass-concentration relation remain at a level of 2.5-3.1 \(\sigma_{c}\). #### 5.2.3 A wandering black hole? Since the dark halo in M21 is hard to accommodate within \(\Lambda\)CDM and our results have only partially alleviated that tension, it is worth considering alternative hypotheses for Figure 8: 2D posterior distributions for the total mass, \(\log_{10}(M_{\rm sub}/M_{\odot})\), and concentration, \(\log_{10}c\), of the substructure, against a selection of other lens model parameters: (from left to right) the Einstein radius, \(\vartheta_{E}\), power law slope, \(\gamma\), axis ratio, \(q\), and position angle, \(\varphi\), of the main deflector, external shear strength, \(\Gamma\), and Einstein radii of \(s1\) and \(s2\), \(\vartheta^{(s1)}_{E}\) and \(\vartheta^{(s2)}_{E}\), respectively. the perturber in J0946. 
Given the anomalously high concentration, and the surprising lack of a galaxy hosted within the halo, we investigate whether the perturber could be a supermassive black hole (see e.g. Ricarte et al., 2021). The non-zero multipoles of the lens mass and the disrupted morphology of the light profile of the lens galaxy are characteristics of a merger where the ejection of such a black hole may not be implausible, either through 3-body ejection (Hoffman & Loeb, 2007) or gravitational radiation recoil (Campanelli et al., 2007). To test this proposal, we fit a 3-source model with a EPL, external shear and a point mass at the main deflector redshift, and recover a point mass of \(\log_{10}(M_{\rm BH}/M_{\odot})=8.94^{+0.19}_{-0.08}\). Given J0946 has a velocity dispersion of \(\sim\)280 km s\({}^{-1}\) Gavazzi et al. (2008), the \(M\)-\(\sigma\) relation implies that there should be a black hole of a few times \(10^{9}M_{\odot}\)(Kormendy & Ho, 2013) at the centre of the lens. Thus, the proposed "wandering" black hole would need to be of comparable mass to the expected central black hole. The point mass-perturbed model is formally preferred over the equivalent tNFW-perturbed model at 2.7\(\sigma\). This is not definitive evidence and does not account for any prior preference between the models. This result is also driven purely by Occam's razor: the point mass perturbed model has a slightly lower likelihood than the tNFW model but has fewer parameters. As the right panel of Figure 6 shows, the \(s3\) image positions are sensitive to the change in mass profile, and the MUSE data is better reproduced with a point mass perturber. The significance of this is marginal, given that in all three panels the predicted centroids are well within the brightest parts of the \(s3\) images. A more sophisticated treatment of \(s3\) with higher-resolution data would be necessary to discriminate between possible density profiles for the perturbation. ## 6 Conclusions In this paper, we have presented a gravitational imaging case study of the compound lens SDSSJ0946+1006. Our model is remarkably successful in its ability to simultaneously reproduce the images of two background sources in this system in both the HST I and U bands and the image positions of a third source observed by MUSE. By including multiple sources in our analysis, we were able to lift many of the lens modelling degeneracies that are present for a single source plane lens, whilst modelling multiple passbands simultaneously enabled us to probe different source structures, and possibly different regions in the source plane, thus disentangling structure in the lens from structures in the source3. Footnote 3: Additionally, differences between the I– and U–band structures in the s1 arcs (and source reconstructions) strongly suggest the presence of dust in \(s1\), in exactly the part of the source plane that is most sensitive to the lensing substructure, yet poorly probed by the strongly attenuated U–band data. Upcoming 400–GHz ALMA observations of J0946 may be able to recover any dust continuum emission from s1, providing another set of constraints on the perturbing structure. By comparing the Bayesian evidence of a smooth halo Figure 9: The \(1\sigma\) and \(2\sigma\)\(M_{\rm sub}\)–\(c\) posterior for our single and triple plane model fits utilising gradient regularisation (top), as well as for alternative source reconstruction and mass model assumptions (middle). 
The \(M_{200}\)–\(c\) posterior for our highest evidence models from each of these two panels (fiducial and multipoles) are plotted against an \(M_{200}\)–\(c\) relation for CDM halos from Dutton & Maccio (2014), with \(1\sigma\) and \(2\sigma\) uncertainty (bottom). model to that of a tNFW-perturbed model, we test the claims that a dark subhalo exists in J0946 (in agreement with e.g. V10, Nightingale et al. (2022)). Our model prefers the existence of a subhalo with an evidence ratio \(\Delta\ln\mathcal{Z}=19.64\pm 0.03\) over the smooth model, corresponding to a \(5.9\sigma\) detection. The virial mass of the halo is \(\log_{10}(M_{200}/M_{\odot})=10.3^{+1.2}_{-0.6}\), and its concentration is \(\log_{10}c=2.4^{+0.5}_{-0.3}\), which is \(2.6\)-\(3.3\sigma_{c}\) higher than predicted by simulations. This is a much weaker tension than reported in M21 due to the inclusion of more data, the use of wider priors, and our more flexible source model. Additionally, Nadler et al. (2023) recently showed that gravothermal core collapse seen in some Self-Interacting Dark Matter (SIDM) models (Despali et al., 2019) is a potential mechanism to produce the substructure reported by M21; our less concentrated result should therefore be even easier to accommodate in SIDM. The stellar mass of the subhalo, \(M_{*}\sim 10^{7.5}M_{\odot}\), implied by its virial mass indicates that any luminous component to the subhalo would not be possible to detect in the data, given its proximity to the upper arc image of \(s1\) or possible blending with residual flux from the subtracted light profile of the lens. It is therefore unsurprising that the lensing perturber is dark, and we cannot confidently distinguish between it being a dwarf satellite galaxy or a DM substructure of the main deflector. We can alternatively model the data with a black hole of \(\log_{10}(M_{\rm BH}/M_{\odot})=8.94^{+0.19}_{-0.08}\), which is preferred over the truncated NFW profile at \(2.7\sigma\) due to having fewer degrees of freedom. This scenario represents a supermassive black hole being ejected from the lens galaxy as a consequence of a merger event. For the \(M\)-\(\sigma\) relation to hold, our resultant wandering black hole has comparable mass to the black hole expected at the centre of the lens galaxy. Our analysis confirms that the distant source \(s3\) is especially sensitive to the properties of the lensing perturbation, but the results are currently limited by the relatively low angular resolution of the MUSE data. High-resolution imaging of this source would be extremely powerful to probe the profile of the dark substructure, but will require a substantial investment of telescope time. We also tested changes to the shape of the mass distribution in the macro-model by fitting of third and fourth order multipoles, as well as fitting for the ellipticity of \(s1\). Whilst our macro-model has moved somewhat under these changes, our highest evidence model (with multipoles in the main deflector) yields \(\sim 6\sigma\) preference for the presence of a substructure in J0946. Its substructure gives the best compatibility with CDM simulations that we have found, at \(2.0\sigma_{c}\). We demonstrated that we are able to recover the subhalo with much higher confidence (\(11.3\sigma\) versus \(5.9\sigma\)) when regularising over the curvature of the sources rather than the gradient of the sources. 
Curvature regularisation makes the sources intrinsically smoother whilst the addition of a dark substructure counteracts this by adding small-scale perturbations to the arcs. However, the Bayesian evidence vastly prefers our fiducial gradient regularisation scheme. Ultimately, we conclude that precision lens modelling is challenging. Alongside cosmography, gravitational imaging is perhaps the hardest lens modelling challenge of all. Even with the luxuries afforded by a compound lens in its ability to suppress the mass-sheet degeneracy, there are nuances in how complexity is afforded to the lensing mass models, and the reconstruction of light profiles in background sources, that make it difficult to draw conclusions about small-scale structures with certainty. Much care needs to be taken over the choices of the background source model before embarking on detailed lens modelling. In reality, random draws from the priors of curvature or gradient regularised sources look nothing like astrophysical galaxies: ultimately neither regularisation scheme is physical; much more work is needed to understand how to reconstruct sources, and the need for evidence calculations will make this work computationally expensive. The potential payoff for this work is huge: with hundreds of thousands of lenses to be discovered in the next decade (Collett, 2015), gravitational imaging should yet place stringent constraints on the small-scale validity of \(\Lambda\)CDM. ## Acknowledgements We are grateful to James Nightingale and Qiahan He for sharing their results on lens modelling with natural neighbour interpolation. Adopting this approach allowed us to overcome the sampling issues inherent to linear interpolation shown in Figure 11. We thank Quinn Minor for insightful discussions at the IAU strong lensing symposium in Otranto. We are grateful to Karina Rojas for comments on the manuscript. DJB is funded by a graduate studentship from UK Research and Innovation's STFC and the University of Portsmouth. TEC is funded by a Royal Society University Research Fellowship. DJB, WJRE and TEC and this project have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (LensEra: grant agreement No 945536). HCT is funded by an STFC studentship. RJS is supported by the STFC through the Durham Astronomy Consolidated Grants (ST/T000244/1 and ST/X001075/1). This work made use of the SCIAMA computing cluster at Portsmouth. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. The authors also acknowledge seedcorn funding from the DiRAC HPC Facility (project dp285). This work was performed using the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. This work further used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/P002293/1 and ST/R002371/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure.
## Data Availability Supporting research data are available on request from the corresponding author and from the HST and VLT archives.
2309.04521
Deep Search For Molecular Oxygen in TW Hya
The dominant form of oxygen in cold molecular clouds is gas-phase carbon monoxide (CO) and ice-phase water (H$_2$O). Yet, in planet-forming disks around young stars, gas-phase CO and H$_2$O are less abundant relative to their ISM values, and no other major oxygen-carrying molecules have been detected. Some astrochemical models predict that gas-phase molecular oxygen (O$_2$) should be a major carrier of volatile oxygen in disks. We report a deep search for emission from the isotopologue $^{16}$O$^{18}$O ($N_J=2_1-0_1$ line at 233.946 GHz) in the nearby protoplanetary disk around TW Hya. We used imaging techniques and matched filtering to search for weak emission but do not detect $^{16}$O$^{18}$O. Based on our results, we calculate upper limits on the gas-phase O$_2$ abundance in TW Hya of $(6.4-70)\times10^{-7}$ relative to H, which is $2-3$ orders of magnitude below solar oxygen abundance. We conclude that gas-phase O$_2$ is not a major oxygen-carrier in TW Hya. Two other potential oxygen-carrying molecules, SO and SO$_2$, were covered in our observations, which we also do not detect. Additionally, we report a serendipitous detection of the C$^{15}$N $N_J = 2_{5/2}-1_{3/2}$ hyperfine transitions, $F = 3 - 2$ and $F = 2 - 1$, at 219.9 GHz, which we found via matched filtering and confirm through imaging.
Becky J. Williams, L. Ilsedore Cleeves, Christian Eistrup, Jon P. Ramsey
2023-09-08T18:00:01Z
http://arxiv.org/abs/2309.04521v1
# Deep Search for Molecular Oxygen in TW Hya ###### Abstract The dominant form of oxygen in cold molecular clouds is gas-phase carbon monoxide (CO) and ice-phase water (H\({}_{2}\)O). Yet, in planet-forming disks around young stars, gas-phase CO and H\({}_{2}\)O are less abundant relative to their ISM values, and no other major oxygen-carrying molecules have been detected. Some astrochemical models predict that gas-phase molecular oxygen (O\({}_{2}\)) should be a major carrier of volatile oxygen in disks. We report a deep search for emission from the isotopologue \({}^{16}\)O\({}^{18}\)O (\(N_{J}=2_{1}-0_{1}\) line at 233.946 GHz) in the nearby protoplanetary disk around TW Hya. We used imaging techniques and matched filtering to search for weak emission but do not detect \({}^{16}\)O\({}^{18}\)O. Based on our results, we calculate upper limits on the gas-phase O\({}_{2}\) abundance in TW Hya of \((6.4-70)\times 10^{-7}\) relative to H, which is \(2-3\) orders of magnitude below solar oxygen abundance. We conclude that gas-phase O\({}_{2}\) is not a major oxygen-carrier in TW Hya. Two other potential oxygen-carrying molecules, SO and SO\({}_{2}\), were covered in our observations, which we also do not detect. Additionally, we report a serendipitous detection of the C\({}^{15}\)N \(N_{J}=2_{5/2}-1_{3/2}\) hyperfine transitions, \(F=3-2\) and \(F=2-1\), at 219.9 GHz, which we found via matched filtering and confirm through imaging. protoplanetary disks, astrochemistry + Footnote †: journal: Acus ## 1 Introduction Protoplanetary disks provide a critical link in understanding the chemical evolution from the interstellar medium (ISM) to planetary systems (including our own solar system). By studying these environments, we can understand how molecular abundances change as stars, and later planets, form. Such observations are helpful to put our own solar system into context. So far, there have been about 300 unique molecules (not including isotopologues) detected in the ISM. Within protoplanetary disks, only 25 unique molecules have been detected (McGuire, 2022). This low detection rate is not due to decreased chemistry in disks, but because we do not know some of the key tracers of the most abundant elements, such as oxygen. Oxygen is the third most abundant element in the Universe, with a solar oxygen abundance of \(4.9\times 10^{-4}\) relative to hydrogen (Asplund et al., 2009, 2021). The vast majority of oxygen in the Universe is incorporated into gas-phase molecules, ice-phase molecules, or refractory dust. Whittet (2010) considered the incorporation of oxygen into silicates and oxides and estimated that the amount of oxygen in dust is \((0.9-1.4)\times 10^{-4}\) relative to hydrogen in different ISM environments. van Dishoeck et al. (2021) discuss the oxygen budget of ISM environments, assuming a total oxygen abundance of \(5.8\times 10^{-4}\) relative to hydrogen (Przybilla et al., 2008). They note that about 20% of the total abundance of oxygen is unaccounted for in diffuse clouds, increasing to 50% in dense regions. Two of the most common molecules in the ISM and protoplanetary disks are carbon monoxide (CO) and water (H\({}_{2}\)O). In warm gas in the interstellar medium, gas-phase CO has an abundance of about \(10^{-4}\) relative to hydrogen (Ripple et al., 2013), but is observed to be less abundant in protoplanetary disks (Dutrey et al., 1994; Ansdell et al., 2016; Long et al., 2017). 
Similarly, water ice has an abundance of H\({}_{2}\)O/H\({}_{2}\approx 10^{-4}\) in dense clouds (see van Dishoeck et al., 2013), but water vapor has been detected in low abundance in only a few disks (Du et al., 2017). It is possible that a large amount of oxygen is in frozen species, such as water ice, that are difficult to observe. Another possibility is that oxygen is in other gas-phase molecules, such as molecular oxygen (O\({}_{2}\)). O\({}_{2}\) has been observed in low abundances in two molecular clouds: O\({}_{2}\)/H\({}_{2}\approx(0.3-7.3)\times 10^{-6}\) in Orion (Goldsmith et al., 2011) and O\({}_{2}\)/H\({}_{2}\approx 5\times 10^{-8}\) in \(\rho\) Oph A (Liseau et al., 2012). O\({}_{2}\) is a reactive molecule, and its most common form, \({}^{16}\)O\({}^{16}\)O, is difficult to detect due to a lack of a permanent dipole moment. The less-abundant isotopologue \({}^{16}\)O\({}^{18}\)O does, however, have a dipole moment. Through this isotopologue, O\({}_{2}\) was tentatively detected in the protostellar system IRAS 16293-2422 B (Taquet et al., 2018). There have been no other reported detections of O\({}_{2}\) in protoplanetary disks. Interestingly, O\({}_{2}\) has been detected in our own solar system in comets, which we do not expect to have undergone much chemical evolution since their formation in the pre-solar nebula. For example, O\({}_{2}\) was detected in the coma of comet 67P/Churyumov-Gerasimenko (67P) by the ROSINA (Balsiger et al., 2007) instrument on ESA's Rosetta spacecraft. Bieler et al. (2015) found an unexpectedly high average O\({}_{2}\) to water ratio of \(3.80\pm 0.85\%\), making O\({}_{2}\) the fourth most abundant species in the coma. O\({}_{2}\) was also detected in comet 1P/Halley at an abundance of \(3.7\pm 1.7\%\) relative to water (Rubin et al., 2015). In both cases, O\({}_{2}\) and H\({}_{2}\)O abundance appear to be correlated. As described in Luspay-Kuti et al. (2018, and references therein), there have been many proposed origins for O\({}_{2}\) in comets, ranging from _in situ_ processes to origins in the ISM prior to formation of the solar nebula. Recent analysis by Luspay-Kuti et al. (2022) shows that, farther from the Sun, O\({}_{2}\) abundance in 67P is more strongly correlated with CO and CO\({}_{2}\) than with H\({}_{2}\)O. They suggest that 67P has two reservoirs of O\({}_{2}\): a deep primordial nucleus of CO and CO\({}_{2}\) ice, and a surface layer of H\({}_{2}\)O ice that formed later. If O\({}_{2}\) in comets is not formed _in situ_, then it must have been present in the protoplanetary disk from whence the comet formed. This scenario is supported by chemical models that predict a large amount of oxygen is in gas-phase O\({}_{2}\) in the inner regions of disks. For example, O\({}_{2}\) ice can be produced on dust grain surfaces and under certain conditions desorbs faster than it reacts to form other molecules. Eistrup et al. (2018) predict that this process occurs in the midplane of disks, resulting in a build-up of gas-phase O\({}_{2}\) between the H\({}_{2}\)O and O\({}_{2}\) ice lines (between 0.7 and 10 au after 8 million years). Walsh et al. (2015) meanwhile predict that gas-phase O\({}_{2}\) builds up in the atmosphere of T Tauri disks, produced via gas-phase neutral-neutral reactions of O + OH. They predict that O\({}_{2}\) may carry 50% of the total oxygen in the disk's atmosphere, and 10% when including the midplane. 
In this paper, we present a deep search for \({}^{16}\)O\({}^{18}\)O emission in ALMA observations of TW Hya, a well-studied T Tauri protoplanetary disk. TW Hya's relatively gas-rich disk and nearby proximity make it a good candidate for searching for faint emission. There have been many observations of gas emission lines in TW Hya, and previous observations have shown that CO gas is 1-2 orders of magnitude less abundant than in the ISM (Favre et al., 2013; Cleeves et al., 2015). Water vapor has a maximum observed abundance of H\({}_{2}\)O/H\({}_{2}\approx 10^{-7}\) in TW Hya (Hogerheijde et al., 2011). Evidently, gas-phase CO and H\({}_{2}\)O do not account for the majority of oxygen in TW Hya. Could oxygen be hiding in gas-phase O\({}_{2}\)? ## 2 Observations Observations of the \({}^{16}\)O\({}^{18}\)O \(N_{J}=2_{1}-0_{1}\) line at 233.946 GHz toward TW Hya were carried out as part of ALMA program 2019.1.01177.S (PI: Eistrup). Observations occurred on two nights in 2021, April 4 and April 6, for a combined time of 91 minutes on source. On April 4, there were 44 antennas and baselines of 15-1398 meters, and on April 6, 45 antennas and baselines of 15-1263 meters. J1058+0133 was adopted as the amplitude and bandpass calibrator, while J1037-2934 was used for phase calibration. The data were calibrated using the standard ALMA pipeline. We searched for \({}^{16}\)O\({}^{18}\)O emission in both the image plane and the visibility plane, as described in more detail in the following section. Subsequent imaging and analysis were carried out using the Common Astronomy Software Applications (CASA) version 6.4 (McMullin et al., 2007). Continuum emission was subtracted using the task _uvcontsub_, applying a fit order of 1 and excluding edge channels. The data were imaged using _tclean_, initially with a natural weighting scheme to improve line sensitivity. The resulting beam is 0.59\(\arcsec\) by 0.50\(\arcsec\) with a position angle of \(-88^{\circ}\), equivalent to a spatial resolution of 30 to 35 au at a distance of 60.1 pc (Bailer-Jones et al., 2018). The rms noise is 2.59 mJy beam\({}^{-1}\) for a channel width of 0.235 km s\({}^{-1}\). For the matched filtering analysis, Earth's rotation was first corrected for using the CASA task _cvel_. ## 3 Methods We first created a Keplerian mask (Teague, 2020) to use with the CASA _tclean_ task. Due to the rotation of disks, we observe some emission to be shifted to higher or lower frequencies, so certain regions of the disk will be brighter in different channels. A Keplerian mask traces this emission pattern, based on the parameters of the disk, to improve the signal to noise of the integrated intensity (e.g. Salinas et al., 2017; Teague et al., 2022). We assumed a systemic velocity of 2.86 km s\({}^{-1}\)(Favre et al., 2013), distance of 60.1 pc (Bailer-Jones et al., 2018), inclination = \(5.8^{\circ}\), position angle = \(151^{\circ}\), and stellar mass = \(0.81\) M\({}_{\odot}\)(Teague et al., 2019). The mask was created using CO \(J=3-2\) emission data for TW Hya from Huang et al. (2018), and parameters optimized to fully trace the CO emission. Specifically, we set the _target_res_ = 0.4, which convolves the mask with a circular beam of 0.4 arcsec FWHM, and _dV0_ = 500 m s\({}^{-1}\), which affects the radially varying line width of emission. 
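The mask construction can be illustrated with a simple thin-disk calculation. The sketch below is a schematic stand-in for the keplerian_mask package actually used: it computes the projected Keplerian line-of-sight velocity field from the parameters above and flags, for a given channel, the pixels expected to contain emission; disk elevation, flaring, and beam convolution are ignored here.

```python
# Schematic thin-disk Keplerian mask (ours; the published masks were built with
# the keplerian_mask package of Teague 2020).  Geometry conventions are simplified.
import numpy as np
from astropy import constants as const, units as u

mstar = 0.81 * u.Msun           # stellar mass
dist_pc = 60.1                  # distance in pc
incl = np.radians(5.8)          # inclination
pa = np.radians(151.0)          # position angle
vsys = 2.86                     # systemic velocity, km/s

# sky-plane grid in arcsec, rotated/deprojected into thin-disk coordinates
x = np.linspace(-5.0, 5.0, 200)
x_sky, y_sky = np.meshgrid(x, x)
x_maj = x_sky * np.cos(pa) - y_sky * np.sin(pa)
y_min = (x_sky * np.sin(pa) + y_sky * np.cos(pa)) / np.cos(incl)
r_au = np.maximum(np.hypot(x_maj, y_min) * dist_pc, 1.0)   # radius in au (1 au floor)
theta = np.arctan2(y_min, x_maj)

# projected Keplerian velocity field in km/s
v_kep = np.sqrt(const.G * mstar / (r_au * u.au)).to(u.km / u.s).value
v_los = vsys + v_kep * np.cos(theta) * np.sin(incl)

def channel_mask(v_channel, dv=0.5, r_max=240.0):
    """Pixels expected to emit within +/- dv km/s of a channel, out to r_max au."""
    return (np.abs(v_los - v_channel) < dv) & (r_au < r_max)

print(channel_mask(vsys).sum(), "pixels flagged in the systemic-velocity channel")
```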
Two masks were then created for the O\({}_{2}\) data, one with a radius set to 100 au (convolved to 120 au), and a second with a radius set to 240 au (convolved to 260 au, covering the extent of the CO emission). Both masks were used separately in cleaning, using a 5\(\sigma\) threshold. The resulting image was the same in both cases because there were no values above 5\(\sigma\) within either mask. The _tclean_ task has several parameters that can be varied to bring out faint emission. For example, the choice of weighting scheme and uvtaper can be tuned to give a higher weighting to shorter baselines to increase sensitivity, at the expense of a lower resolution. Because we were mainly interested in looking at the disk integrated flux, we did not need the full spatial resolution of the observations. We imaged the data with several combinations of the Briggs weighting robust parameter (Briggs, 1995) and uvtaper. We used channel averaging as another method of revealing faint emission. The original channel width of the data was 0.078 km s\({}^{-1}\), and we experimented with averaging 2, 3, and 5 channels together. Matched filtering is a technique to detect weak emission in observational data (Loomis et al., 2018). Matched filtering is applied in the visibility plane and therefore does not involve any imaging. It requires a filter that models the expected pattern of emission across velocity channels. The filter is Fourier transformed to generate a kernel that is then cross-correlated with the data. The result is an impulse response spectrum, with peaks indicating possible emission. For matched filtering, we used both Keplerian masks as filters (one with a 120 au radius, and one with a 260 au radius). It is possible that O\({}_{2}\) emission covers a smaller area than these radii and could be hidden in the noise. However, Keplerian masks cover less area in edge channels, so a smaller mask is below the resolution of the images. ## 4 Results ### Imaging Results Figure 1 shows the data imaged with natural weighting and velocity averaging to a channel size of 0.235 km s\({}^{-1}\). For all attempted combinations of imaging parameters, no significant detection was found within either choice of mask. A spectrum, calculated within 260 au, is shown in Figure 2. The average rms noise level per channel in Figure 1 is 2.59 mJy beam\({}^{-1}\). We calculated a disk integrated flux (for a velocity range of 4.69 km s\({}^{-1}\)) of \(-11.8\pm 14.2\) mJy km s\({}^{-1}\) within 120 au, and \(7.4\pm 18.0\) mJy km s\({}^{-1}\) within 260 au; i.e., at the noise level of the data, we do not detect \({}^{16}\)O\({}^{18}\)O \(N_{J}=2_{1}-0_{1}\) emission. ### Matched Filter Results While we cannot see emission in the image plane, if there is emission just below the noise level, matched filtering has previously been successful at finding the signatures of line emission in other sources (Loomis et al., 2020), including when using a Keplerian mask as a template (Loomis et al., 2018). By applying the Keplerian mask as our template, we provide an expectation for where rotating gas emission should appear spatially in our data. Since we do not know the extent of the O\({}_{2}\), we try both the 120 au and 260 au masks as templates. The results of matched filtering the data with these templates are presented in Figure 3. A 3\(\sigma\) response at 0 km s\({}^{-1}\) would indicate emission from \({}^{16}\)O\({}^{18}\)O. Neither filter resulted in a 3\(\sigma\) impulse response. 
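The principle behind the matched filter can be seen in a one-dimensional toy example. The sketch below is purely schematic and our own: it cross-correlates a noisy spectrum with a unit-norm template of the expected line shape and expresses the response in units of its scatter; the actual analysis operates on the visibilities, with the Fourier-transformed Keplerian mask as the template (Loomis et al. 2018).

```python
# Schematic 1D illustration (ours) of matched filtering: a line that is only
# marginally significant per channel yields a clear peak in the filter response.
import numpy as np

rng = np.random.default_rng(0)
n_chan = 400
chans = np.arange(n_chan)
noise = rng.normal(0.0, 1.0, n_chan)

# expected double-peaked line profile centred on channel 200 (toy template)
template = (np.exp(-0.5 * ((chans - 195) / 3.0) ** 2) +
            np.exp(-0.5 * ((chans - 205) / 3.0) ** 2))
data = noise + 1.5 * template            # injected line, ~1.5 sigma per channel

kernel = template / np.sqrt(np.sum(template ** 2))    # unit-norm matched filter
response = np.correlate(data, kernel, mode="same")    # impulse response spectrum
response /= np.std(response)                          # approximate sigma units
print("peak response: %.1f sigma at channel %d" % (response.max(), response.argmax()))
```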
We can conclusively say that the \({}^{16}\)O\({}^{18}\)O 233.946 GHz line is not detected at the sensitivity of the present measurements. ### O\({}_{2}\) Upper Limits How much oxygen could be hiding within our upper limit on \({}^{16}\)O\({}^{18}\)O emission? To estimate the total number of O\({}_{2}\) molecules from a single line, we explore a range of temperatures and assume its emission is in local thermodynamic equilibrium (LTE). Using the image-plane disk integrated flux limits from Section 4.1, we calculate the 1\(\sigma\) upper limit on the number of O\({}_{2}\) molecules that could be present in TW Hya, using the following relation, adapted from Bergin et al. (2013): \[F_{l}=\frac{\mathcal{N}_{{}^{16}O^{18}O}A_{20}h\nu f_{u}}{4\pi D^{2}} \tag{1}\] \(F_{l}\) is the 1\(\sigma\) flux calculated from our images, \(\mathcal{N}_{{}^{16}O^{18}O}\) is the number of \({}^{16}\)O\({}^{18}\)O molecules, \(A_{20}=1.33\times 10^{-8}\) s\({}^{-1}\) is the Einstein A coefficient of the \(N_{J}=2_{1}-0_{1}\) transition (Marechal et al., 1997), \(\nu=233.94618\) GHz is the frequency of the \(N_{J}=2_{1}-0_{1}\) transition, and \(D=60.1\) pc is the distance to TW Hya (Bailer-Jones et al., 2018). The fraction of molecules in the upper state is given by \(f_{u}=3.0\exp(-11.23\) K\(/T)/Q(T)\), where 3 is the upper state degeneracy, 11.23 K is the upper state energy (Marechal et al., 1997), \(T\) is the gas temperature, and \(Q(T)\) is the partition function from the JPL spectral line catalog (Pickett et al., 1998; Mizushima & Yamamoto 1991; Crownover et al., 1990; Steinbach & Gordy, 1975; Amano & Hirota, 1974). These partition functions are given for temperatures ranging from 9.4 K to 300.0 K, so this is the temperature range we use in our calculations. To get an upper limit on the total number of \({}^{16}\)O\({}^{16}\)O molecules, we assumed a \({}^{16}\)O\({}^{16}\)O/\({}^{16}\)O\({}^{18}\)O ratio of 280 (Taquet et al., 2018; Wilson & Rood, 1994). We obtained an upper limit of \((1.1-10.4)\times 10^{49}\) O\({}_{2}\) molecules within 120 au, and \((1.4-13.2)\times 10^{49}\) molecules within 260 au, for a temperature range of 9.4 K to 300.0 K. To give our result context, we also estimate the disk-averaged abundance of O\({}_{2}\) relative to hydrogen. To estimate the total hydrogen mass in the disk, we adopt the disk mass reported in Calahan et al. (2021), but we note that there is a wide range of mass values for the TW Hya disk in the literature (see Miotello et al., 2022). We adopt the Calahan et al. (2021) value of \(2.5\times 10^{-2}\) M\({}_{\odot}\) since it uses the HD line and multiple CO isotopologues to constrain the temperature structure. Assuming a molecular mass per hydrogen molecule of 2.8 (Kauffmann et al., 2008), this equates to \(2.1\times 10^{55}\) hydrogen atoms in the whole disk. Within 120 au, we use the surface density profile from Calahan et al. (2021) to calculate a disk mass of \(1.8\times 10^{-2}\) M\({}_{\odot}\), which equates to \(1.5\times 10^{55}\) hydrogen atoms. Using these values, we estimate an O\({}_{2}\)/H abundance of \((7.2-70)\times 10^{-7}\) within 120 au, and \((6.4-62)\times 10^{-7}\) within 260 au, for temperatures ranging from 9.4 K to 300.0 K. We will return to the implications of these results in Section 5.
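The conversion in Equation 1 from an integrated-flux limit to a number of molecules can be evaluated numerically. The short Python sketch below implements the relation with the constants quoted above; the temperature and partition-function value in the example call are illustrative placeholders (the actual calculation uses the tabulated JPL partition functions over 9.4-300 K), so its output is meant only to show how the formula is applied, not to reproduce the limits reported in this section.

```python
import numpy as np

# Constants and values quoted for the 16O18O N_J = 2_1 - 0_1 line (SI units).
h, k_B, c, pc = 6.626e-34, 1.381e-23, 2.998e8, 3.086e16
A_20 = 1.33e-8            # Einstein A coefficient [1/s]
nu = 233.94618e9          # transition frequency [Hz]
D = 60.1 * pc             # distance to TW Hya [m]
E_u = 11.23 * k_B         # upper-state energy (11.23 K) [J]
g_u = 3.0                 # upper-state degeneracy

def n_molecules(flux_jy_kms, T, Q):
    """Number of 16O18O molecules implied by an integrated flux (Eq. 1).

    flux_jy_kms : integrated line flux [Jy km/s]
    T           : assumed gas temperature [K]
    Q           : partition function at T (placeholder value in the example)
    """
    F_l = flux_jy_kms * 1e-26 * (1e3 / c) * nu   # Jy km/s -> W m^-2
    f_u = g_u * np.exp(-E_u / (k_B * T)) / Q     # fraction in the upper state
    return F_l * 4.0 * np.pi * D**2 / (A_20 * h * nu * f_u)

# Illustrative call: the 1-sigma limit of 14.2 mJy km/s within 120 au, with
# an assumed T = 30 K and an assumed Q(30 K) = 70.
n_iso = n_molecules(14.2e-3, 30.0, 70.0)
n_o2 = 280.0 * n_iso      # scale by the assumed 16O16O / 16O18O ratio
print(f"N(16O18O) = {n_iso:.2e}, N(O2) = {n_o2:.2e} molecules")
```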
Figure 1: Channel maps of the intensity, centered on 233.946 GHz. The systemic velocity is 2.86 km s\({}^{-1}\). The two Keplerian masks are shown in white contours, and velocities are shown in the lower right of each panel. The beam size is \(0.59^{\prime\prime}\times 0.50^{\prime\prime}\) and is shown in the lower left panel.

Figure 2: Spectrum of the O\({}_{2}\) data for TW Hya, calculated within a 260 au region. The dashed line is the systemic velocity of 2.86 km s\({}^{-1}\), centered on 233.946 GHz.

### Constraints from Serendipitous Molecular Lines The SO \(N_{J}=5_{6}-4_{5}\) and SO\({}_{2}\) \(N_{J}=4_{(2,2)}-3_{(1,3)}\) lines at 219.949 GHz and 235.152 GHz, respectively, were also included in the observational set-up. Given that these molecules might be prominent oxygen carriers, we searched for emission from these lines using techniques similar to those used for \({}^{16}\)O\({}^{18}\)O. Neither line was detected above \(3\sigma\) with imaging or matched filtering (see Appendix A). The disk integrated flux for the SO line is \(-0.05\pm 12.5\) mJy km s\({}^{-1}\) within 120 au and \(-21.4\pm 21.2\) mJy km s\({}^{-1}\) within 260 au (covering a range of 4.99 km s\({}^{-1}\)). For SO\({}_{2}\), the flux is \(24.4\pm 14.8\) mJy km s\({}^{-1}\) within 120 au and \(34.3\pm 24.5\) mJy km s\({}^{-1}\) within 260 au (covering a range of 4.67 km s\({}^{-1}\)). We modified Equation 1 for SO and SO\({}_{2}\), using values from LAMDA (Schoier et al., 2005) and CDMS (Muller et al., 2001), and partition functions from the JPL spectral line catalog (Pickett et al., 1998; Amano & Hirota, 1974; Clark & De Lucia, 1976; Helminger & De Lucia, 1985; Lovas, 1985; Alekseev et al., 1996). We calculate a \(1\sigma\) upper limit of SO/H = \((2.4-10)\times 10^{-13}\) within 120 au and \((2.9-12)\times 10^{-13}\) within 260 au, and SO\({}_{2}\)/H = \((7.9-197)\times 10^{-13}\) within 120 au and \((9.1-229)\times 10^{-13}\) within 260 au, for a temperature range of 9.4 K to 300.0 K. We also report a serendipitous detection of the C\({}^{15}\)N \(N_{J}=2_{5/2}-1_{3/2}\) hyperfine transitions, \(F=2-1\) and \(F=3-2\), at 219.93404 GHz and 219.93482 GHz, respectively. Using matched filtering with the 120 au Keplerian mask, we obtained a \(7\sigma\) filter response. The emission is visible in images (not shown), with an integrated flux of \(113.5\pm 17.0\) mJy km s\({}^{-1}\) for a range of 2.33 km s\({}^{-1}\), calculated within a 200 au circular region covering the visible emission. For these images, we used Briggs weighting (robust = 2) and a uvtaper of 0.3 arcsec. The flux per channel is shown in Figure 4. C\({}^{15}\)N has previously been detected in TW Hya through its \(N=3-2\) transition (Hily-Blant et al., 2017). They used two methods to calculate integrated fluxes of \(150\pm 20\) mJy km s\({}^{-1}\) and \(160\pm 13\) mJy km s\({}^{-1}\), the first of which is consistent with our value. ## 5 Discussion How does O\({}_{2}\) stack up against other known oxygen carriers? Figure 5 shows our upper limits on O\({}_{2}\) abundance as a function of temperature, compared to the solar oxygen abundance and known abundances of other major oxygen-carrying species in the ISM and in TW Hya. Since CO and H\({}_{2}\)O each contain one oxygen atom, their abundances relative to hydrogen can be directly compared to the solar oxygen abundance (e.g., if all oxygen were in CO, then the CO/H abundance would equal the O/H value). O\({}_{2}\), on the other hand, contains two oxygen atoms, so its abundance should be multiplied by two for comparison with the solar oxygen abundance. In TW Hya, both CO gas and H\({}_{2}\)O gas are several orders of magnitude below the solar oxygen abundance; neither molecule is a major reservoir of oxygen.
Figure 3: Matched filtering response for the \({}^{16}\)O\({}^{18}\)O data, with impulse response on the y-axis and velocity on the x-axis. The 233.946 GHz transition is centered at 0 km s\({}^{-1}\) and is denoted by the vertical dashed line. The two horizontal lines are at \(0\sigma\) and \(3\sigma\). The gray region in the left plots is the range of velocities covered in the right plots. a) 120 au filter, covering all channels. b) zoom-in of the shaded region in panel a. c) 260 au filter, covering all channels. d) zoom-in of the shaded region in panel c.

Our upper limit shows that gas-phase O\({}_{2}\) is also not a major carrier of oxygen. Other oxygen-carrying molecules that have been detected in TW Hya include HCO\({}^{+}\) (van Dishoeck et al., 2003), H\({}_{2}\)CO (Oberg et al., 2017), CH\({}_{3}\)OH (Walsh et al., 2016), and HCOOH (Favre et al., 2018), but these molecules were all detected at too low an abundance to complete the oxygen budget relative to solar values. The majority of oxygen in TW Hya has not been detected, leading to several possibilities, which we discuss below. Our upper limits on gas-phase O\({}_{2}\) are not constraining enough to rule out protoplanetary disks as the origin of O\({}_{2}\) in comets, as water vapor in TW Hya is detected in low abundance (Zhang et al., 2013). Comets 67P and 1P/Halley were observed to have high O\({}_{2}\)/H\({}_{2}\)O ratios of about 0.04 (Bieler et al., 2015; Rubin et al., 2015), whereas our upper limits on gas-phase O\({}_{2}\) are about two orders of magnitude above the detected water vapor abundance in TW Hya (see Figure 5). One possible explanation for the low abundances of gas-phase oxygen carriers is that the oxygen is frozen out, so it cannot be detected easily. Due to the low temperatures of disks, we expect many molecules (like H\({}_{2}\)O) to exist mostly in the ice phase, but several processes can return molecules to the gas phase throughout the entire disk. Photodesorption occurs when ultraviolet photons strike a molecule on the surface of a grain, causing the molecule to break off from the grain. It depends on the photon flux, as well as the binding energy of the molecule. Oberg et al. (2009) studied the photodesorption yield of H\({}_{2}\)O and found that the main products are H\({}_{2}\)O and OH. They also found that at high temperatures (100 K), up to 20% of the ice desorbs as O\({}_{2}\). Du et al. (2017) searched for cold water vapor in 13 protoplanetary disks, only detecting it in low abundance in two disks, and in the stacked spectrum of four other disks. They model the disk chemistry and find that, to match observations, the abundance of gas-phase oxygen must be reduced by a factor of 100 or more. They propose that oxygen (in the form of H\({}_{2}\)O and CO) freezes onto dust grains, which then settle to the midplane (Hogerheijde et al., 2011; Bergin et al., 2016). This process primarily occurs in the outer disk; in the inner disk (within 15 au), temperatures are higher and frozen molecules may return to the gas phase. Grain size may also play a role in molecular abundances. Eistrup et al. (2022) modeled disk chemistry, taking into account grain size. They found that larger grain sizes result in a lower gas-phase O\({}_{2}\) abundance and a higher H\({}_{2}\)O ice abundance, relative to the abundances produced using a fiducial 0.1 \(\mu\)m grain size. As grain size increases, the surface area decreases, which decreases the number of grain-surface reactions and gas-grain interactions that can occur.
Figure 4: Spectrum of the C\({}^{15}\)N detection in TW Hya, calculated within a 200 au circular region. The two transitions are \(N=2-1,J=5/2-3/2\) transitions, \(F=3-2\) and \(F=2-1\). The systemic velocity of 2.86 km s\({}^{-1}\), centered on the average of the two emission frequencies, is marked with the dashed line. Figure 5: Upper limits on O\({}_{2}\) in TW Hya as a function of temperature. The shaded region shows typical temperatures in TW Hya (see Bergin & Cleeves, 2018), the light blue and dark blue open triangles show the upper limits on O\({}_{2}\) abundance within 120 au and 260 au, respectively. Values for solar oxygen abundance (Asplund et al., 2009) and other known oxygen-carrying species are shown along the right axis. For dust, we assumed a value of \(1.4\times 10^{-4}\) from Whittet (2010). From top to bottom, the remaining values used are from Ripple et al. (2013); van Dishoeck et al. (2013); Favre et al. (2013); Zhang et al. (2013). Another possibility is that there is a large amount of oxygen in gas-phase molecules that we have not yet observed. Overall, the most abundant molecules detected in comets are H\({}_{2}\)O, CO\({}_{2}\), and CO (Rubin et al., 2020). As discussed already, H\({}_{2}\)O and CO have been detected in disks in low abundance. Models by Eistrup et al. (2018) predict CO\({}_{2}\)/H abundances of up to \(\approx 10^{-4}\) in disks. CO\({}_{2}\) gas is difficult to observe in disks because, like O\({}_{2}\), it is symmetric and lacks a permanent dipole moment. CO\({}_{2}\) has been detected in the disk within 3 au of AA Tauri (Carr and Najita, 2008), using Spitzer Space Telescope observations in the mid-infrared. JWST, however, can detect ice absorption features in the infrared, including CO\({}_{2}\) and H\({}_{2}\)O ices, and is already providing insight into oxygen-carrying molecules in disks (e.g. Yang et al., 2022; Grant et al., 2023; McClure et al., 2023). ## 6 Conclusions We searched for but did not detect emission from gas-phase \({}^{16}\)O\({}^{18}\)O in the protoplanetary disk around TW Hya. We used various imaging techniques along with matched filtering, and used our results to determine an upper limit on gas-phase O\({}_{2}\) in TW Hya. * The isotopologue \({}^{16}\)O\({}^{18}\)O was not detected in TW Hya, leading to an upper limit on the abundance of O\({}_{2}\) of \((7.2-70)\times 10^{-7}\) relative to H if the emission is contained within 120 au. For the whole disk, the upper limit is \((6.4-62)\times 10^{-7}\). This limit is 2-3 orders of magnitude lower than the solar oxygen abundance, so gas-phase O\({}_{2}\) is not a major reservoir for oxygen in TW Hya. Taking into account other existing molecular detections in TW Hya, the main oxygen-carrier(s) remain undetected. * We place sensitive upper limits on the SO and SO\({}_{2}\) lines at 219.949 GHz and 235.152 GHz, respectively. We calculated an upper limit of SO/H = \((2.4-10)\times 10^{-13}\) within 120 au and \((2.9-12)\times 10^{-13}\) within 260 au, and of SO\({}_{2}\)/H = \((7.9-197)\times 10^{-13}\) within 120 au and \((9.1-229)\times 10^{-13}\) within 260 au. These results suggest oxygen is not bound up with sulfur either. * We detect the isotopologue C\({}^{15}\)N at the \(7\sigma\) level using matched filtering, and calculate an integrated flux of \(113.5\pm 17.0\) mJy km s\({}^{-1}\). It is difficult to determine the main reservoir of oxygen in disks because of the many solid and gaseous forms oxygen may take (e.g. 
frozen H\({}_{2}\)O, CO, CO\({}_{2}\); gas-phase O\({}_{2}\), CO\({}_{2}\)). More observations of disks are necessary to search for this missing reservoir. Focusing on TW Hya, future searches for other gas-phase molecules, such as isotopologues of CO\({}_{2}\), would provide more insight into oxygen. Alternatively, searches for gas-phase \({}^{16}\)O\({}^{18}\)O in other disks could prove interesting. Observations of ices, especially with JWST, will also be helpful. B.J.W. acknowledges support from the Virginia Initiative on Cosmic Origins (VICO). L.I.C. acknowledges support from the David and Lucile Packard Foundation, Research Corporation for Science Advancement Cottrell Fellowship, NASA ATP 80NSSC20K0529, NSF grant no. AST-2205698, and SOFIA Award 09-0183. C.E. acknowledges support from VICO. J.P.R. acknowledges support from the NASA Astrophysics Theory Program under grant no. 80NSSC20K0533, from the National Science Foundation (NSF) under grant nos. AST-1910106 and AST-1910675, and from the Virginia Initiative on Cosmic Origins. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2019.1.01177.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. _Facilities:_ ALMA. _Software:_ Astropy (Astropy Collaboration et al., 2013), CASA (McMullin et al., 2007), Keplerian Mask (Teague, 2020), VISIBLE (Loomis et al., 2018). ## Appendix A Matched Filter Results: Other Transitions The matched filtering response for the data containing the SO\({}_{2}\) and SO transitions is shown in Figure 6. In both cases, the 120 au filter was used, and neither SO\({}_{2}\) nor SO was detected at the 3\(\sigma\) level or above. The 7\(\sigma\) peak at about 20 km s\({}^{-1}\) in the SO spectrum is the C\({}^{15}\)N detection.
2303.18153
Developing a Monolithic Silicon Sensor in a 65 nm CMOS Imaging Technology for Future Lepton Collider Vertex Detectors
Monolithic CMOS sensors in a 65 nm imaging technology are being investigated by the CERN EP Strategic R&D Programme on Technologies for Future Experiments for an application in particle physics. The appeal of monolithic detectors lies in the fact that both sensor volume and readout electronics are integrated in the same silicon wafer, providing a reduction in production effort, costs and scattering material. The Tangerine Project WP1 at DESY participates in the Strategic R&D Programme and is focused on the development of a monolithic active pixel sensor with a time and spatial resolution compatible with the requirements for a future lepton collider vertex detector. By fulfilling these requirements, the Tangerine detector is suitable as well to be used as telescope planes for the DESY-II Test Beam facility. The project comprises all aspects of sensor development, from the electronics engineering and the sensor design using simulations, to laboratory and test beam investigations of prototypes. Generic TCAD Device and Monte-Carlo simulations are used to establish an understanding of the technology and provide important insight into performance parameters of the sensor. Testing prototypes in laboratory and test beam facilities allows for the characterization of their response to different conditions. By combining results from all these studies it is possible to optimize the sensor layout. This contribution presents results from generic TCAD and Monte-Carlo simulations, and measurements performed with test chips of the first sensor submission.
Adriana Simancas, Justus Braach, Eric Buschmann, Ankur Chauhan, Dominik Dannheim, Manuel Del Rio Viera, Katharina Dort, Doris Eckstein, Finn Feindt, Ingrid-Maria Gregor, Karsten Hansen, Lennart Huth, Larissa Mendes, Budi Mulyanto, Daniil Rastorguev, Christian Reckleben, Sara Ruiz Daza, Paul Schütze, Walter Snoeys, Simon Spannagel, Marcel Stanitzki, Anastasiia Velyka, Gianpiero Vignola, Håkan Wennlöf
2023-03-31T15:41:30Z
http://arxiv.org/abs/2303.18153v1
# Developing a Monolithic Silicon Sensor in a 65 nm CMOS Imaging Technology for Future Lepton Collider Vertex Detectors ###### Abstract Monolithic CMOS sensors in a 65 nm imaging technology are being investigated by the CERN EP Strategic R&D Programme on Technologies for Future Experiments for an application in particle physics. The appeal of monolithic detectors lies in the fact that both sensor volume and readout electronics are integrated in the same silicon wafer, providing a reduction in production effort, costs and scattering material. The Tangerine Project WP1 at DESY participates in the Strategic R&D Programme and is focused on the development of a monolithic active pixel sensor with a time and spatial resolution compatible with the requirements for a future lepton collider vertex detector. By fulfilling these requirements, the Tangerine detector is also suitable for use as telescope planes for the DESY-II Test Beam facility. The project comprises all aspects of sensor development, from the electronics engineering and the sensor design using simulations, to laboratory and test beam investigations of prototypes. Generic TCAD Device and Monte-Carlo simulations are used to establish an understanding of the technology and provide important insight into performance parameters of the sensor. Testing prototypes in laboratory and test beam facilities allows for the characterization of their response to different conditions. By combining results from all these studies it is possible to optimize the sensor layout. This contribution presents results from generic TCAD and Monte-Carlo simulations, and measurements performed with test chips of the first sensor submission. + Footnote †: This work has been sponsored by the Wolfgang Gentner Programme of the German Federal Ministry of Education and Research (grant no. 13E18CHA). This project has received funding from the European Union’s Horizon 2020 Research and Innovation programme under GA no 101004761.
## II Sensor Technology Currently, three different sensor layouts are studied: _standard_[7], _n-blanket_[8] and _n-gap_[9]. These designs were originally developed in a 180 nm CMOS imaging technology to enhance depletion, timing performance and radiation tolerance of small collection electrode MAPS. Fig. 1 is a schematic representation of a detector with the _n_-gap layout. The cross section shows half of a pixel on each side of the schematic and the pixel edge in the center. A thin epitaxial _p_-doped layer is grown on a low resistivity _p_-doped substrate. The _p_-well is the structure that hosts the in-pixel electronics and shields them from the electric field of the active sensor region. The _n_-blanket is a low dose _n_-doped layer implemented to create a planar _pn_-junction, enlarging the depleted volume of the sensor. Furthermore, a gap in the _n_-blanket produces a vertical _pn_-junction that generates a lateral electric field in the farthest position from the readout electrodes (pixel boundaries). To understand the 65 nm CMOS Imaging Technology and optimize the sensor design, generic simulations and prototype testing are carried out simultaneously. The goal of the design optimization is to customize the electric field inside the sensor to improve efficiency and signal formation time. The simulation cycle is of the utmost importance to reduce time and costs invested in producing and testing prototypes.
## III Sensor Simulations Since the electric fields in small collection electrode MAPS are complex, device simulations are needed to provide insight into performance parameters of the sensor. Studies using generic doping profiles were performed for the three different layouts and for pitches between 10 um and 35 um. These simulations include only the epitaxial layer, since no significant contribution from the substrate to the electric field modelling is expected. Additionally, to bias the substrate in the TCAD simulations, a metallic contact is added on the backside, in contrast to the real detector, which does not include a backside metallization. ### _Technology Computer-Aided Design (TCAD) Simulations_ TCAD is a powerful tool to simulate electrical properties of semiconductors and can be used to optimize the sensor layout and other features, such as the bias voltage configuration, to achieve the desired performance goals. TCAD contains a finite-element simulation tool that constructs a mesh over the studied structure and solves Poisson's equation and the carrier continuity equations to model the electrostatic potential and other properties at each node of the mesh. The software used for the simulations in this work is Sentaurus TCAD from Synopsys [10]. 3D quasi-stationary simulations using generic doping profiles were performed to model the electric fields of the studied layouts, since 2D simulations cannot resolve some of the effects introduced by the lateral electric fields. An example of a generic 3D TCAD simulation for the _n_-gap design is shown in Fig. 2.

Fig. 1: _n_-gap layout example of the Tangerine sensor.

Fig. 2: Generic 3D TCAD simulation of the _n_-gap layout.

The TCAD studies carried out so far include scans over different geometrical and operational parameters of the sensor, such as the _p_-well opening and the bias voltage, while observing the behavior of the electric field, the lateral electric field strength, and the depleted volume. The results are reported in the following. The _p-well opening_ is the distance between the edge of the collection implant and the edge of the _p_-well, as shown in Fig. 1. It was varied from 1 um to 4 um, and from the results of the standard layout it was observed that increasing the _p_-well opening provided a larger depleted volume and a stronger lateral electric field. This translates into a larger signal and a faster signal formation, but reduces the space for readout electronics. However, this effect was much less prominent once the _n_-blanket modification was added. The _gap size_ in the _n_-gap layout was varied from 1 um to 4 um. The gap creates a lateral electric field at the pixel boundaries, which increases with the gap size. The gap is introduced to allow the free charges to avoid the field minimum reported in [9] and drift with a shorter mean free path towards the collection electrodes, thus improving the signal formation time. A _breakdown_ of the sensor was observed when fixing the _p_-well bias at -1.2 V and applying a higher bias to the substrate. The breakdown was detected early on for the standard layout, at -2.4 V; for the _n_-blanket layout it was reached at a bias of -11 V, and for the \(n\)-gap layout it was observed at -4.8 V. This behavior is in agreement with observations made during monitoring of the leakage current in experimental measurements. Fig. 3 shows how the breakdown is modelled in the generic TCAD simulation.
The color scale corresponds to the current density, while the white line indicates the limits of the depleted volume. The breakdown produces a high current density in the edge of the pixels, and the depleted volume is deformed. The _bias voltage_ applied to the \(p\)-well and the substrate was scanned from 0 V to -20 V, while fixing the bias of the readout electrodes to 1.2 V. When the bias voltage was simultaneously increased for both electrodes, an increased depleted volume was observed. However, high values of the electric field were detected inside the \(p\)-well structure, compromising the shielding of the electronics. When the bias was increased only for the substrate while fixing the \(p\)-well bias, a similar behaviour was observed for the \(n\)-blanket and \(n\)-gap layouts, but the \(p\)-well integrity was preserved. Fig. 4 shows the electric field obtained from TCAD simulations of the three sensor layouts with generic doping profiles, where \(p\)-well and substrate bias voltage were set to -4.8 V. The brown line indicates the position of the \(pn\)-junction, the white line delimits the depleted region and the streamlines (black arrows) indicate the instantaneous tangent to the velocity vector of the moving charges. Comparing between the different layouts with a pixel pitch of 25 \(\upmu\)m, the following can be concluded: * The standard layout (Fig. 3(a)) has a small depleted volume. The electron-hole pairs produced outside the depleted volume will move predominantly by diffusion in random directions and some might not reach the readout electrodes. The expected effect on charges produced at the edge of the pixel is a low efficiency, but a high charge-sharing between pixels, which improves the spatial resolution. * The \(n\)-blanket layout (Fig. 3(b)) shows a larger depleted volume. The electron-hole pairs produced in the active volume will move predominantly by drift towards the readout electrodes. This is foreseen to produce an improvement in efficiency, but with an impairment in spatial resolution due to lower charge-sharing. * The \(n\)-gap layout (Fig. 3(c)) has a higher lateral electric field in the pixel edges. The electron-hole pairs produced in the edge of the pixel will drift with a shorter mean free path towards the readout electrodes. As a consequence, an improvement is expected in efficiency as well as the signal formation time, but with a further impairment in spatial resolution due to even lower charge-sharing. In order to quantify the effects discussed here, Monte-Carlo simulations are required as in Section III-B. TCAD is also capable of simulating current pulses produced by the interaction of a charged particle with the sensor. This is carried out with _transient simulations_ and can be used to estimate signal characteristics, such as signal magnitude and time evolution. A transient simulation for a _minimum ionizing particle_ (MIP) traversing the corner of a pixel with an electron/hole pair production of 63 e/\(\upmu\)m was performed for the studied layouts. The result is shown in Fig. 5, confirming that the time evolution of the signal is improved by the modifications in the sensor, and particularly for the \(n\)-gap layout. For a standard layout sensor, a similar simulation was performed with a MIP traversing the center of the pixel. By integrating the induced charge for the duration of the signal, a total of \(\sim\)750 electrons was obtained. 
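The last step of such a transient study, integrating the simulated induced-current pulse over time to obtain the collected charge, amounts to a single numerical integration. The Python sketch below uses a synthetic placeholder pulse with an arbitrarily chosen shape and amplitude, not an exported TCAD waveform, purely to show the bookkeeping.

```python
import numpy as np

# Synthetic stand-in for a simulated transient: time in ns, current in nA.
# A real study would load the induced-current pulse exported from TCAD.
t = np.linspace(0.0, 20.0, 2001)                # ns
i_pulse = 60.0 * (t / 2.0) * np.exp(-t / 2.0)   # placeholder pulse [nA]

# Collected charge Q = integral of i(t) dt, converted to electrons.
q = np.trapz(i_pulse * 1e-9, t * 1e-9)          # [C]
n_electrons = q / 1.602e-19
print(f"Collected charge: {n_electrons:.0f} electrons")
```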
Furthermore, generic TCAD transient simulations have provided valuable feedback for the ASIC design, such as expected signal magnitudes to define a reasonable threshold in the readout electronics. ### _Allpix\({}^{2}\) Simulations_ TCAD transient simulations are time-consuming and have a high computational budget, thus quasi-stationary simulations from TCAD are combined with Monte-Carlo simulations to obtain high-statistics data and calculate the performance parameters of the sensor [11].

Fig. 3: Generic TCAD Simulation: current density for the three layouts with \(p\)-well bias of -1.2 V and substrate bias of -4.8 V. Pixel pitch of 25 \(\upmu\)m. Brown line indicates the position of the \(pn\)-junction and white line corresponds to the depleted volume. The high current density in the edge of the pixels for the standard and the \(n\)-gap design indicates breakdown for these biasing conditions.

Within the Tangerine project, this combination of simulations is used to quantify the effects reported in Section III-A. Allpix\({}^{2}\)[12] is a modular framework developed for Monte-Carlo simulations of semiconductor radiation detectors. It provides the possibility to build a matrix of pixels by replicating the TCAD electric fields simulated on a single cell. First results from Monte-Carlo simulations using generic TCAD fields from this work have been reported in [1]. The performance parameters of interest are detection efficiency, cluster size, and spatial resolution. Results confirm that the modifications to the sensor layout are valuable as they increase the efficient operating margin of the sensor. The observed trends are equivalent for all tested bias voltages. ## IV Prototype Testing The purpose of the ALICE APTS [6] prototype is to characterize different sensor designs. Several of these chips are being studied and the results will allow for direct comparisons with simulations to be performed. This section presents the activities within the Tangerine project of system integration and testing of the ALICE APTS in laboratories and test beam campaigns, together with preliminary results. ### _APTS and Data-Acquisition (DAQ) System_ The ALICE APTS contains a matrix of \(4\times 4\) square pixels and has been produced in the three studied sensor designs. The tested devices have a pixel pitch of 25 \(\upmu\)m, the pixels are DC coupled to the front-end electronics, and each pixel contains a source follower as a buffered analog output. The latter makes the APTS a structure suitable for comparisons against simulations. The Caribou System [13, 14] is used as the data-acquisition system for the ALICE APTS. It is an open-source set of hardware, firmware and software for laboratory and beam tests. The modular hardware consists of three boards: the application-specific _chip board_ containing the sensor, the periphery _CaR board_, which provides current and voltage sources together with a physical interface between the System-on-Chip (SoC) and the detector, and the _evaluation board_ that contains the Xilinx Zynq SoC, which runs the detector control and the data-processing firmware. For the ALICE APTS, a custom chip board was designed and produced with amplification and signal shaping at each pixel output. The readout was performed with two 8-channel 65 MS/s ADCs on the CaR board, using a custom firmware block. The firmware provides a configurable trigger logic, where either an external or an internal trigger can be selected with programmable thresholds and adjustable latency.
The firmware is compatible with the AIDA Trigger Logic Unit (TLU) [15]. The data-acquisition framework employed in the project is EUDAQ2 [16]. It is a generic data-acquisition software framework for use in conjunction with beam telescopes at charged-particle beam lines. It allows for storage and synchronization of data from several systems. A detector-specific decoder is used to interpret the raw data for further analysis.

Fig. 4: Generic TCAD Simulation: electric field for the three layouts with _p_-well and substrate bias of -4.8 V. Pixel pitch of 25 \(\upmu\)m. Brown line indicates the position of the _pn_-junction, black arrows represent the streamlines and the white line delimits the depleted volume.

Fig. 5: Generic TCAD Simulation: current pulse produced by a MIP traversing the corner of a pixel, for the three different layouts.

The data recorded with this DAQ system consist of waveforms sampled at 65 MHz with a configurable number of samples. Internal triggers were used for laboratory tests, while for the test beam data acquisition an external trigger was provided by the TLU. Fig. 6 shows a waveform example produced by a MIP in an APTS with the _n_-blanket layout. The shape of the waveform is dominated by the amplifiers employed in the chip board of the DAQ system. ### _Laboratory Characterization_ The activities performed in the laboratory involved the optimization of the front-end operation parameters and studies with charge injection and radioactive sources for gain calibration. Since the acquired signals are represented in ADC units, a calibration is required to relate these values to a number of electrons. This way, it is possible to match the thresholds of data and simulation and quantify the agreement between them. The calibration of the tested devices was performed with X-ray fluorescence, an \({}^{55}\)Fe radioactive source and test pulses. The decay of \({}^{55}\)Fe produces two X-ray emissions that are considered monochromatic: K-alpha of 5.9 keV and K-beta of 6.5 keV. For the interaction of these X-rays with silicon, a production of approximately 1640 and 1800 electron-hole pairs, respectively, is expected. The calibration uses the K-alpha and K-beta peaks shown in the \({}^{55}\)Fe spectrum in Fig. 7, which was measured with an APTS in the _n_-blanket layout. The result is a combination of the individual pixel charge spectra, so the tail at the left of the spectrum is a combination of Compton scattering, charge-sharing and threshold effects that differ per pixel. The calibration was validated by applying it to the spectrum from the X-ray fluorescence of titanium. ### _Test Beam Measurements_ The ALICE APTS was tested in the DESY-II Test Beam Facility [4] to characterize its performance with MIPs. A MIMOSA26 telescope [17] was used as a reference system to reconstruct the individual particle tracks of the beam. The APTS acted as a _Device Under Test_ (DUT) and was placed in the middle of the 6 telescope planes, orthogonal to the beam. Additionally, a TelePix [18] plane was used as a trigger in the data acquisition. It provides time stamps with a precision below 5 ns, and a fast digital hit output signal which allows for triggering in a configurable region of interest. This plane was placed downstream with respect to the telescope. With this setup it is possible to determine the effect of particles impinging on different positions in the evaluated prototype and obtain the performance parameters mentioned in Section III-B.
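Before turning to the test beam analysis, the ADC-to-electron gain calibration described in the laboratory characterization above can be illustrated with a minimal sketch. The peak positions below are placeholders standing in for fitted K-alpha and K-beta peak positions, not measured values; the expected charges are the pair-creation yields quoted in the text, and a real per-pixel calibration would typically also account for a baseline offset.

```python
import numpy as np

# Placeholder ADC positions of the fitted 55Fe peaks (not measured values).
adc_peaks = np.array([820.0, 900.0])       # K-alpha, K-beta [ADC units]
electrons = np.array([1640.0, 1800.0])     # expected e-h pairs in silicon

# Least-squares gain through the origin using both calibration points.
gain = np.sum(adc_peaks * electrons) / np.sum(adc_peaks**2)  # e- per ADC unit

def adc_to_electrons(signal_adc):
    """Convert a signal from ADC units to electrons with the fitted gain."""
    return gain * signal_adc

print(f"gain = {gain:.2f} e-/ADC; 500 ADC -> {adc_to_electrons(500.0):.0f} e-")
```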
For the data analysis, a modular framework called Corryvreckan [19] is used. It allows for online monitoring and offline event building in complex data-taking environments combining detectors with different readout architectures. With this software, the particle tracks are reconstructed from the information provided by the telescope planes. One particle interaction can produce a signal in several pixels. The pixel that registers the highest signal is called the _seed pixel_, and together with the surrounding pixels it constitutes a _cluster_. After correlating the tracks with the DUT information, clusters in the DUT are associated with the reconstructed tracks and it is possible to perform efficiency and resolution studies. The devices were tested under different operational conditions. Given the maximum beam rate of 5 kHz, a beam cross section of the order of centimeters and the 100 \(\times\) 100 um active area of the 25 um pitch ALICE APTS, it was necessary to record data for several hours for each investigated setting. It was discovered that, due to the long acquisition time and temperature changes in the test beam area, the relative position of the DUT with respect to the reference was changing. This required the development of a new alignment method that corrected the relative drift within the same acquisition run. This method has shown promising results, but it is still under evaluation. For this reason, the quantitative results are still preliminary, but it has been possible to observe encouraging qualitative results that are reported in the following.

Fig. 6: Waveform of a MIP measured with the ALICE APTS in the _n_-blanket layout at -3.6 V _p_-well and substrate bias.

Fig. 7: Calibrated spectrum of an \({}^{55}\)Fe source acquired with an ALICE APTS in the _n_-blanket layout at -3.6 V _p_-well and substrate bias.

After applying the calibration to test beam data of an _n_-blanket design, the charge distribution of the seed pixels was obtained, as shown in Fig. 8. The distribution corresponds approximately to a Landau distribution convolved with a Gaussian. A _most probable value_ of around 600 electrons is observed. The peak at the end of the distribution is due to saturation of the ADCs. The _cluster size_ depends on the charge-sharing between pixel cells and the chosen thresholds. The detectors with sensor modifications (\(n\)-blanket and \(n\)-gap layouts) exhibited a lower cluster size in comparison to the standard layout, indicating a trend similar to the observations made in simulations. The _detection efficiency_ of the detector was measured by relating the particle hits in the DUT to the reconstructed tracks. As expected from simulations and previous works [8, 20], an improvement in efficiency was observed for the \(n\)-blanket and \(n\)-gap layouts. ## V Conclusion The Tangerine project participates in the investigation of monolithic pixel detectors in 65 nm CMOS Imaging Technology led by the CERN EP Strategic R&D Programme on Technologies for Future Experiments. A monolithic active pixel sensor in 65 nm CMOS imaging technology is being developed within the Tangerine project. Simulations and prototype testing are complementary activities carried out within the project and the collaboration institutes. Device simulations using generic doping profiles provide valuable insight for sensor optimization and understanding sensor behaviour for novel technologies. The combination with Monte-Carlo simulations produces results which can be directly compared with experimental tests.
Generic TCAD simulations have confirmed that the sensor-modification principles developed in the 180 nm technology are also generally applicable in the 65 nm technology. Generic simulations have allowed for establishing the parameters of relevance for the sensor optimization and their effect on the operation of the detector, such as the \(p\)-well opening, the \(n\)-gap size and the sensor bias voltage. The results show an overall agreement with what has been observed in other technologies investigated for MAPS. Efficiency and resolution studies using generic TCAD simulations combined with Monte-Carlo simulations are ongoing. The ALICE APTS has been evaluated in the laboratory and in test beams. A custom DAQ system based on Caribou was developed and integrated for these measurements. The experimental activities have demonstrated the functionality of the test setup. Preliminary results on charge distribution, cluster size and detection efficiency have shown a qualitative agreement with simulations. More detailed studies, including spatial resolution and timing, are continuing. Further test campaigns are planned for the near future, as well as dedicated simulations of the timing performance. A fully integrated chip with a larger pixel matrix, designed jointly by CERN, DESY and IFAE, has been submitted to the foundry. This will allow for recording high-statistics data samples and for further improvement of the comparison with simulations. ## Acknowledgments The measurements presented have been performed at the Test Beam Facility at DESY Hamburg (Germany), a member of the Helmholtz Association (HGF). The authors wish to express their gratitude to the CERN EP R&D WP 1.2 and especially to the designers of the APTS and the ALICE ITS3 measurement team for their support. (c) All figures and pictures by the author(s) under a CC BY 4.0 license, unless otherwise stated.
2305.00492
Accelerating Genome Analysis via Algorithm-Architecture Co-Design
High-throughput sequencing (HTS) technologies have revolutionized the field of genomics, enabling rapid and cost-effective genome analysis for various applications. However, the increasing volume of genomic data generated by HTS technologies presents significant challenges for computational techniques to effectively analyze genomes. To address these challenges, several algorithm-architecture co-design works have been proposed, targeting different steps of the genome analysis pipeline. These works explore emerging technologies to provide fast, accurate, and low-power genome analysis. This paper provides a brief review of the recent advancements in accelerating genome analysis, covering the opportunities and challenges associated with the acceleration of the key steps of the genome analysis pipeline. Our analysis highlights the importance of integrating multiple steps of genome analysis using suitable architectures to unlock significant performance improvements and reduce data movement and energy consumption. We conclude by emphasizing the need for novel strategies and techniques to address the growing demands of genomic data generation and analysis.
Onur Mutlu, Can Firtina
2023-04-30T14:25:53Z
http://arxiv.org/abs/2305.00492v4
# _Invited:_ Accelerating Genome Analysis ###### Abstract High-throughput sequencing (HTS) technologies have revolutionized the field of genomics, enabling rapid and cost-effective genome analysis for various applications. However, the increasing volume of genomic data generated by HTS technologies presents significant challenges for computational techniques to effectively analyze genomes. To address these challenges, several algorithm-architecture co-design works have been proposed, targeting different steps of the genome analysis pipeline. These works explore emerging technologies to provide fast, accurate, and low-power genome analysis. This paper provides a brief review of the recent advancements in accelerating genome analysis, covering the opportunities and challenges associated with the acceleration of the key steps of the genome analysis pipeline. Our analysis highlights the importance of integrating multiple steps of genome analysis using suitable architectures to unlock significant performance improvements and reduce data movement and energy consumption. We conclude by emphasizing the need for novel strategies and techniques to address the growing demands of genomic data generation and analysis. ## 1 Introduction Genome analysis plays a crucial role in various fields such as personalized medicine [1], agriculture [2], evolutionary biology [3], pharmacogenomics [4], infectious disease control [5, 6], cancer research [7] and microbiome studies [8]. The advent of high-throughput sequencing (HTS) technologies, such as sequencing-by-synthesis (SBS) [9], Single Molecule Real-Time (SMRT) [10], and nanopore sequencing [11, 12, 13], has revolutionized genome analysis, enabling faster and more cost-effective sequencing of genomes by generating a large amount of genomic data at relatively low cost [14]. However, the analysis of genomic data is challenging due to a variety of reasons: 1) HTS technologies can only sequence relatively short fragments of genomes, called _reads_, whose locations in the entire genome are unknown, 2) these reads can contain _sequencing errors_[14, 15], leading to differences from their original sequences, 3) the sequenced genome may not (and usually does not) exactly match recorded genomes in a reference database, known as _reference genomes_, due to variations between individuals within and across species. Despite significant improvements in computational tools since the 1980s [16] to overcome such challenges, the rapid growth in genomic data [17] has led to ever larger computational overheads in the genome analysis pipeline, posing large challenges for efficient and timely analysis of genomes [18, 19]. A genome analysis pipeline consists of multiple key steps, each of which affects the accuracy, speed, and energy consumption of genome analysis. First, _basecalling_ translates the _raw sequencing data_ that HTS generates (e.g., measured electrical signals in nanopore sequencing) into sequences of genomic characters (e.g., A, C, G, and Ts in DNA). Basecalling is time-consuming because it relies heavily on compute-intensive approaches that process large chunks of noisy and error-prone raw data to accurately infer the actual nucleotide sequences [19, 20, 21, 22, 23, 24]. Second, _real-time analysis of raw sequencing data_[5, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34] aims to analyze the reads simultaneously while the read is being sequenced using a particular sequencing technology (e.g., nanopore sequencing). 
Although real-time analysis of raw sequencing data provides enormous advantages in significantly reducing the overall genome analysis time and cost [25], it introduces unique challenges as the analysis needs to meet stringent throughput and latency constraints to satisfy _real-time_ requirements [34]. Third, _read mapping_ aims to find similarities and differences between genomic sequences (e.g., between sequenced reads and reference genomes of one or more species). Read mapping includes several steps such as sketching [35, 36, 37, 38, 39, 40], seeding [41, 42, 43, 44, 45, 46, 47, 48, 49], and alignment [50, 51, 52, 53, 54, 55], which demand considerable processing power and memory due to the large scale of genomic sequences [16, 56, 57]. Fourth, subsequent steps of the genome analysis (i.e., _downstream analysis_) use the output generated in the read mapping step. An example of such downstream analysis is known as _variant calling_[58, 59, 60, 61, 62, 63, 64], which aims to identify genetic differences, known as _variants_, between an individual's genome and a reference genome. Variant calling is often followed by additional steps, such as _gene annotation_[65, 66, 67, 68, 69] and _enrichment analysis_[70, 71, 72, 73]. These steps aim to generate insights from the identified variants and determine if these variants show an unexpectedly high or low statistical correlation with specific functional behavior (e.g., association with a disease) that can be used in a clinical report [74]. Many pure algorithmic and software techniques aim to address the computational challenges in the genome analysis pipeline. These works improve the performance and accuracy of the computational tools by 1) reducing overall computational and space complexity [55, 75], 2) eliminating useless work [38, 39, 43, 44, 45, 56, 76, 77, 78], 3) optimizing data structures and memory access patterns [79, 80, 81], 4) exploiting parallelism in multi-core, many-core, and SIMD architectures [82, 83, 84, 77, 78, 85, 86, 87, 88, 89, 90], and 5) employing machine learning techniques [77, 64, 15]. These works fall short of greatly improving performance and energy consumption for at least three major reasons. First, many of these approaches incur significant data movement between computation units and memory units [87, 18]. Second, a large portion of the data becomes useless in downstream genome analysis [88], and performing computation on it wastes time and energy. Third, HTS technologies produce sequencing data at an increasingly high rate, which makes it challenging to keep up with the throughput of these sequencing technologies, especially in time-critical scenarios [18, 34]. Since software techniques alone are not effective enough at coping with huge amounts of genomic data and the stringent requirements of genome analysis, it is critical to design software-hardware cooperative techniques to accelerate genome analysis. To this end, several works co-design algorithms and architectures to substantially improve the performance and energy efficiency of the genome analysis pipeline.
These works 1) reduce data movement overheads by employing processing in memory (PIM) [89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106], or processing near storage (e.g., solid-state drives) [87], and 2) efficiently co-design and execute computationally complex algorithms with massive parallelism and efficient hardware design using specialized architectures, e.g., field programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs) [107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123]. In this paper (and the associated invited talk), we review the recent advancements in accelerating genome analysis via algorithm-architecture co-design and discuss emerging challenges that highlight the need for new acceleration techniques. We aim to provide a brief yet comprehensive overview of the current state of the field and inspire future research directions to further improve the efficiency of genome analysis and hopefully enable new use cases and computing platforms. ## 2 Accelerating Basecalling HTS technologies produce raw sequencing data, the content of which depends on the type of sequencing technology employed. There are three main types of sequencing technologies: sequencing by synthesis (SBS) [9], Single Molecule Real-Time (SMRT) [10], and nanopore sequencing [11]. SBS generates images where the color intensity at a particular position of an image represents the base of the read. Basecalling after SBS aims to accurately associate these colors with their corresponding bases while correcting sequencing errors [124]. SMRT sequencing generates continuous images in a movie format by sequencing the same read multiple times via a strategy known as circular consensus sequencing (CCS) [125]. Although these images can be quickly converted to their corresponding bases, the high noise associated with SMRT sequencing requires additional steps to correct sequencing errors [125]. These techniques include alignment [47], consensus assembly construction [125], and polishing [126, 15]. Nanopore sequencing generates raw electrical signals as DNA or RNA molecules pass through tiny pores (i.e., nanoscale holes) called _nanopores_ [11]. Changes in ionic current, measured as nucleotides pass through, are sampled in real-time and used to perform 1) basecalling and 2) real-time genome analysis. Recent basecalling works [24, 77, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159] propose algorithmic and architectural techniques to accelerate this translation of raw signals into bases. ## 3 Accelerating Real-Time Analysis of Raw Sequencing Data To enable real-time genome analysis, several works propose pure algorithmic techniques or algorithm-hardware co-design solutions. First, ReadFish [29], ReadBouncer [134], and RUBRIC [26] use costly basecalling mechanisms for adaptive sampling. These techniques require costly and energy-hungry computational resources. Such a requirement may cause practical challenges in 1) scaling genome analysis to lower energy and cost levels and 2) performing in-the-field sequencing using mobile sequencing devices such as ONT MinION [34]. Second, many works such as UNCALLED [27], Sigmap [28], and RawHash [34] use efficient techniques to utilize adaptive sampling in low-power devices with usually lower accuracy than the basecalling mechanisms. Among these works, RawHash can provide high accuracy for large genomes with an efficient and accurate hash-based similarity identification technique.
Third, several algorithm-architecture co-designs use FPGAs [31] or ASICs [121] to provide fast, accurate, and low-power real-time genome analysis. However, these works are applicable only to small genomes, such as viral genomes, as their algorithm designs lack efficient scalability to larger genomes. We believe that achieving accurate and real-time genome analysis still requires substantial developments in both efficient algorithms and architecture. This can be achieved by 1) designing efficient software that can be used in low-power devices for adaptive sampling and real-time genome analysis, 2) new techniques for genome analysis that do not require translating the raw sequencing data to nucleotide bases, and 3) combining and parallelizing several steps in real-time genome analysis using efficient algorithm-architecture co-designs to minimize the latency (and energy) of time-critical genomics applications. ## 4 Accelerating Read Mapping The goal of read mapping is to identify similarities and differences between genomic sequences, such as between a read and a representative sequence of a species, known as a _reference genome_. Due to genomic variants and sequencing errors, differences and similarities between these sequences (i.e., matches, substitutions, insertions, and deletions) are identified using an approximate string matching (ASM) algorithm to generate an _alignment score_ that quantifies the degree of similarity between a pair of sequences. This process is known as _sequence alignment_. A pair of sequences is said to be _aligned_ when their alignment score shows a sufficiently high degree of similarity. However, ASM algorithms often have quadratic time and space complexity, making them computationally challenging for both long genomic sequences and a large number of sequence pairs. To ease the identification of similarities within vast amounts of sequencing data, read mapping includes multiple steps, such as: 1) sketching [35, 36, 37, 38, 39, 40], 2) indexing and seeding [41, 42, 43, 44, 45, 47], 3) pre-alignment filtering [48, 49, 76, 90, 135, 46], and 4) sequence alignment (i.e., ASM) [50, 51, 52, 53, 54, 55]. Since read mapping is a crucial and computationally expensive step in many genome analysis pipelines, numerous works focus on accelerating it in various ways. First, a significant fraction of sequence pairs do _not_ align, which leads to wasted computation and energy during alignment [90]. To avoid this useless computation, several works propose _pre-alignment filtering_, another step in read mapping that can efficiently detect and eliminate highly dissimilar sequence pairs _without_ using alignment. Most pre-alignment filtering works [48, 49, 76, 90, 135, 46] provide algorithm-architecture co-design using FPGAs, GPUs, and PIM to substantially accelerate the entire read mapping process by exploiting massive parallelism, efficient bitwise operations, and specialized hardware logic for detecting similarities among a large number of sequences. Second, GenStore [87] observes that a large amount of sequencing data unnecessarily moves from the solid-state drive (SSD) to memory during read mapping, significantly increasing latency and energy consumption. To eliminate this wasteful data movement, GenStore uses specialized logic _within_ the SSD to identify two sets of reads: 1) reads that do not align due to high dissimilarity with the reference genome, and 2) reads that align by exactly matching the reference genome. 
Such reads are processed in the storage system and not moved to main memory or the CPU, thereby eliminating unnecessary data movement in the system. Third, numerous studies, including GenASM [54] and Darwin [117], focus on accelerating the underlying ASM algorithm employed in sequence alignment through efficient algorithm-architecture co-design. They do so by exploiting systolic arrays [115], GPUs [86], FPGAs [115, 118, 120], ASICs [116], high-bandwidth memory (HBM) [123], and PIM [89, 97, 105, 106]. These works provide substantial speedups of up to several orders of magnitude compared to software baselines. Among these works, SeGraM [123] is the _first_ to accelerate aligning sequences to graphs that are used to reduce population bias and improve genome analysis accuracy by representing a large population (instead of a few individuals) within a single reference genome. Despite recent advancements, read mapping remains a computational bottleneck in genome analysis [18, 19]. This is primarily due to the vast amount of sequencing data generated at an ever-increasing rate by sequencing machines, which puts significant pressure on the mapping step due to numerous unnecessary calculations between dissimilar pairs of sequences. Avoiding wasteful 1) data movement, 2) computation, and 3) memory space usage using efficient algorithm-architecture co-design is critical for minimizing the high energy, time, and storage costs associated with read mapping and the entire genome analysis pipeline. ## 5 Accelerating Variant Calling The objective of variant calling is to identify genomic variants between an individual's genome and a reference genome [58, 59, 60, 61, 62, 63, 64]. These variants are mainly categorized as single-nucleotide polymorphisms (SNPs), insertions, deletions, and larger structural variations (SVs). Accurate and efficient detection of these variants is vital for understanding the genetic basis of diseases [7], population genetics [63], evolutionary studies [3], personalized medicine [136], and pharmacogenomics [137]. Variant calling involves processing the read mapping output and detecting variants. First, read mapping output is processed by sorting and optionally identifying duplicate information to minimize bias introduced during the _polymerase chain reaction_ (PCR) step of sample preparation [138]. Second, mapped reads are analyzed to distinguish genuine variants from sequencing errors or misalignments using resource-intensive statistical techniques [59, 61, 63] or machine learning techniques [64]. Variant callers like GATK HaplotypeCaller [63] use costly probabilistic calculations to analyze the likelihood of specific variants in large sequencing datasets. DeepVariant [64], a DNN-based variant caller, processes read alignment information as images, demanding substantial GPU resources and memory. Reducing computational requirements through algorithmic optimizations, parallelization, and efficient data representation is crucial for faster, more accurate genetic variant analyses. To accelerate variant calling, several works propose algorithm-architecture co-designs. These include fast execution of Pair Hidden Markov Models (Pair HMMs) in FPGAs or ASICs [139, 140], reducing data movement overheads in GPUs [141], and pipelining processing steps with tools like elPrep [142] and system-on-chip designs [143].
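To give a concrete sense of the computational kernel that the Pair HMM accelerators [139, 140] target, below is a minimal, illustrative Python sketch of a Pair-HMM-style forward pass over match/insertion/deletion states. The error rate, gap probabilities, and initialization used here are simplifying assumptions for illustration only and do not reproduce the exact model of GATK HaplotypeCaller or of any particular accelerator.

```python
import numpy as np

def pair_hmm_forward(read, hap, base_err=0.01, gap_open=0.1, gap_ext=0.1):
    """Simplified Pair HMM forward pass: likelihood of `read` given haplotype `hap`.
    Illustrative sketch only; real variant callers use per-base quality scores
    and carefully calibrated transition/emission probabilities."""
    n, m = len(read), len(hap)
    # Forward matrices for the Match, Insertion (in read), and Deletion states.
    M = np.zeros((n + 1, m + 1))
    I = np.zeros((n + 1, m + 1))
    D = np.zeros((n + 1, m + 1))
    M[0, :] = 1.0 / m  # allow the read to start anywhere along the haplotype
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            emit = (1 - base_err) if read[i - 1] == hap[j - 1] else base_err / 3
            M[i, j] = emit * ((1 - 2 * gap_open) * M[i - 1, j - 1]
                              + (1 - gap_ext) * (I[i - 1, j - 1] + D[i - 1, j - 1]))
            I[i, j] = 0.25 * (gap_open * M[i - 1, j] + gap_ext * I[i - 1, j])
            D[i, j] = gap_open * M[i, j - 1] + gap_ext * D[i, j - 1]
    # Sum over all positions where the fully consumed read may end.
    return (M[n, :] + I[n, :]).sum()

print(pair_hmm_forward("ACGT", "ACGT"))  # high likelihood for a matching haplotype
print(pair_hmm_forward("ACGT", "AGGA"))  # orders of magnitude lower for a divergent one
```

Even this simplified recurrence is quadratic in the read and haplotype lengths and must be evaluated for many read-haplotype pairs per candidate site, which is why fixed-function FPGA and ASIC datapaths can yield large speedups over general-purpose CPUs for this kernel.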
Although several works focus on accelerating variant calling, there is an urgent need for further acceleration, e.g., for DNN-based variant callers that can provide highly accurate results while bypassing certain processing steps, potentially accelerating the entire genome analysis pipeline. ### Analysis of Variants Following variant calling, it is critical to analyze the identified variants to understand their functional impact on the organism and their role in diseases, population genetics, or evolution. This analysis involves gene annotation [65, 66, 67, 68, 69] and enrichment analysis [70, 71, 72, 73]. Gene annotation provides relevant information about variants, while enrichment analysis tools identify associations with biological processes, molecular functions, or cellular components. Although these tools need to handle large volumes of data, there is, to our knowledge, little work on accelerating these steps in the genome analysis pipeline. We believe these steps are critical for acceleration using hardware-software co-design. ## 6 Conclusion and Future Outlook Rapid advancements in genomic sequencing technologies have led to an exponential increase in generated genomic data. As data generation continues to grow, data movement bottlenecks will increasingly impact performance and waste energy [144, 145]. Future research in genome analysis acceleration should focus on at least three main directions. First, addressing data movement and storage challenges is crucial for reducing energy consumption and improving performance. Second, integrating and pipelining multiple genome analysis steps using hardware-software co-design can enhance efficiency by reducing both useless computation and data movement. Third, significant potential exists in enabling accurate and fast real-time genome analysis by co-developing efficient algorithms together with specialized hardware, resulting in low-power, high-performance and cost-effective (portable) sequencing with low latency. ## Acknowledgments We thank the organizers of the DAC-60 conference for the invitation to contribute this invited paper and deliver an associated invited talk. We acknowledge many SAFARI Research Group Members who have contributed to some of the works described in this paper, especially Mohammed Alser and Damla Senol Cali, who have completed their PhD dissertations on the general topic of accelerating genome analysis. We thank all members of the SAFARI Research Group for the stimulating and scholarly intellectual environment they provide. We acknowledge the generous gift funding provided by our industrial partners (especially by Google, Huawei, Intel, Microsoft, VMware), which has been instrumental in enabling the decade+ long research we have been conducting on accelerating genome analysis. This work is also partially supported by the Semiconductor Research Corporation (SRC), the European Union's Horizon programme for research and innovation [101047160 - BioPIM] and the Swiss National Science Foundation (SNSF) [200021213084].
2309.06336
Social \textit{vs.} individual age-dependent costs of imperfect vaccination
In diseases with long-term immunity, vaccination is known to increase the average age at infection as a result of the decrease in the pathogen circulation. This implies that a vaccination campaign can have negative effects when a disease is more costly (financial or health-related costs) for higher ages. This work considers an age-structured population transmission model with imperfect vaccination. We aim to compare the social and individual costs of vaccination, assuming that disease costs are age-dependent, while the disease's dynamic is age-independent. A model coupling pathogen deterministic dynamics for a population consisting of juveniles and adults, assumed to be rational agents, is introduced. The parameter region for which vaccination has a positive social impact is fully characterized and the Nash equilibrium of the vaccination game is obtained. Finally, collective strategies designed to promote voluntary vaccination, without compromising social welfare, are discussed.
Fabio A. C. C. Chalub, Paulo Doutor, Paula Patrício, Maria do Céu Soares
2023-09-12T15:53:07Z
http://arxiv.org/abs/2309.06336v2
# Social _vs._ individual age-dependent costs of imperfect vaccination ###### Abstract In diseases with long-term immunity, vaccination is known to increase the average age at infection as a result of the decrease in the pathogen circulation. This implies that a vaccination campaign can have negative effects when a disease is more costly (financial or health-related costs) for higher ages. This work considers an age-structured population transmission model with imperfect vaccination. Our aim is to compare the social and individual costs of vaccination, assuming that disease costs are age-dependent. A model coupling pathogen deterministic dynamics for a population consisting of juveniles and adults, both assumed to be rational agents, is introduced. The parameter region for which vaccination has a positive social impact is fully characterized and the Nash equilibrium of the vaccination game is obtained. Finally, collective strategies designed to promote voluntary vaccination, without compromising social welfare, are discussed. ## 1 Introduction In a voluntary vaccination scheme, in which the vaccine is perceived - truly or falsely - as risky, herd immunity will never be attained in a population composed of rational individuals [2]. Just before vaccine coverage reaches the herd immunity threshold, rational individuals will stop getting vaccinated, as the perceived risk of the vaccine will equal the perceived risk of the disease, which will be small at this point. Therefore, herd immunity will be obtained through vaccination only if there are incentives to be vaccinated (and to vaccinate the dependents) or punishments for non-vaccinated individuals (e.g., exclusion from the school system). Since the seminal work [2], other models have considered the coupling of the deterministic disease dynamics with game-theoretical models for individual decisions within the population, cf. [11, 19, 3, 8]. See also [5] for a previous work of the present group of authors on models for voluntary vaccination in seasonal diseases. A pathogen in a partially vaccinated population (i.e., below the herd immunity level) will circulate slower than in a non-vaccinated population. Assuming long-term immunity for recovered individuals, one consequence of partial vaccination will be an increase in the average age of infected individuals [13]. Furthermore, it is naive to expect, for any particular vaccine, a 100% efficacy, cf. [15]. Depending on the precise details of the disease dynamics and its effect on the population, an imperfect vaccination scheme may have adverse collective effects. Let us see a particular example. Consider a disease whose effects differ between juveniles and adults, as for chickenpox, rubella, or Zika. The infection has an overall mild effect in juveniles, but when the virus infects adults, particularly pregnant women, the health consequences can be more severe [14, 4, 1]. While full coverage with a perfect vaccine would prevent the disease from spreading, and free circulation of a highly infectious virus would asymptotically turn it into a childhood disease with mild economic and health effects, a partial vaccination may be pernicious. As a consequence, it is important to find the parameter region where vaccinating is better than not vaccinating, and to establish whether it is possible to move continuously within this parameter region so that full coverage can be attained at acceptable social cost. Models using game-theoretical arguments for the study of imperfect vaccination were presented in [10] and [12].
In both cases, three Nash equilibria were found in the model and the vaccination coverage for the Nash equilibrium may be higher than for the social optimum, depending on the costs of vaccination. In the former case, the authors determine whether the optimal vaccination coverage may be achieved through individual action, comparing two different vaccination scenarios for chickenpox (USA and Israel). In the latter, a model with reinfection is considered, and two of the three Nash equilibria are evolutionarily stable, with a catastrophe from the high-vaccination to the low-vaccination scenario, where the effect of vaccination is worse for the population as a whole. We will introduce a precise definition of Nash equilibria in the context of vaccination games shortly; for now, it is enough to consider a situation in which all individuals in the population simultaneously and freely minimize the joint cost of the disease and the vaccine. In this work, we compare social _vs_. individual interests regarding vaccination and disease costs and investigate if it is possible to promote voluntary vaccination and still satisfy both interests. For that, we consider an age-structured model with age-dependent costs, permanent immunity, and imperfect vaccination and use a game theory approach to analyze individual decisions. We finish the introduction with the outline of the paper. In Section 2, we introduce the model and present some basic results, including the explicit expression for the basic reproduction number, and the characterization of equilibria and their stability. In the sequel, we discuss the model, analyzing first the social costs of vaccination and then, using techniques from game theory, the effects of considering voluntary vaccination and individual interests; in particular, we define Nash equilibrium within the context of the present work. In Section 3, we present numerical simulations based on typical values for childhood diseases to study socially cost-efficient parameter regions, Nash equilibria of the vaccination games, parameter regions such that rational individuals accept or refuse to be vaccinated, and how shared costs between individuals and the society can dramatically influence the endemic equilibria of the model. We conclude in Section 4 with a summary. ## 2 Methods ### The model We consider an age-structured population divided into two groups: juveniles and adults. Each individual is vaccinated at birth with probability \(p\in[0,1]\). The vaccine is imperfect, with efficacy \(\lambda\in[0,1]\), meaning that with probability \(\lambda\) it confers life-long immunity, while with probability \(1-\lambda\) the immunity only lasts during the juvenile phase (\(1/\nu\) yrs). The model diagram is represented in Fig. 1. The relevant set of values is presented in Table 1, while model variables are defined in Table 2.
The model can be represented by the following system of differential equations: \[V^{\prime} =\mu p(1-\lambda)N_{\rm A}-\nu V\, \tag{1}\] \[S^{\prime}_{\rm J} =\mu(1-p)N_{A}-\nu S_{\rm J}-\beta(I_{\rm J}+I_{\rm A})S_{\rm J}\,\] (2) \[I^{\prime}_{\rm J} =\beta(I_{\rm J}+I_{\rm A})S_{\rm J}-\nu I_{\rm J}-\gamma I_{\rm J }\,\] (3) \[R^{\prime}_{\rm J} =\mu p\lambda N_{\rm A}+\gamma I_{\rm J}-\nu R_{\rm J}\,\] (4) \[S^{\prime}_{\rm A} =\nu(V+S_{\rm J})-\mu S_{\rm A}-\beta(I_{\rm J}+I_{\rm A})S_{\rm A}\,\] (5) \[I^{\prime}_{\rm A} =\beta(I_{\rm J}+I_{\rm A})S_{\rm A}+\nu I_{\rm J}-\mu I_{\rm A}- \gamma I_{\rm A}\,\] (6) \[R^{\prime}_{\rm A} =\nu R_{\rm J}+\gamma I_{\rm A}-\mu R_{\rm A}. \tag{7}\] The total population \(N=V+S_{\rm J}+I_{\rm J}+R_{\rm J}+S_{\rm A}+I_{\rm A}+R_{\rm A}\) is constant, and, therefore, we set \(N(t)=1\) for all \(t\geq 0\). Furthermore, we define the juvenile and adult population by \(N_{\rm J}:=V+S_{\rm J}+I_{\rm J}+R_{\rm J}\) and \(N_{\rm A}:=S_{\rm A}+I_{\rm A}+R_{\rm A}=1-N_{\rm J}\), respectively. Adding Eqs. (5), (6) and (7), we conclude that \(N^{\prime}_{\rm A}=\nu(1-N_{\rm A})-\mu N_{\rm A}\). We say that a population is in _demographic \begin{table} \begin{tabular}{c l c c} **Parameter** & **Description** & **Value** & **Unity** \\ \hline \hline \(\mu>0\) & birth/mortality rate & 1/70 & yrs\({}^{-1}\) \\ \hline \(\gamma>0\) & recovering rate & 365/12 & yrs\({}^{-1}\) \\ \hline \(\beta>0\) & transmission rate & such that \({\cal R}_{0}=8\) & yrs\({}^{-1}\) per capita \\ \hline \(\nu>0\) & rate of immunity loss & 1/15 & yrs\({}^{-1}\) \\ \hline \(p\in[0,1]\) & vaccine coverage & & non-dimensional \\ \hline \(\lambda\in[0,1]\) & vaccine efficacy & & non-dimensional \\ \end{tabular} \end{table} Table 1: Values used in this work. Parameters \(\mu\), \(\nu\), \(\gamma\), and \(\beta\) are not disease-specific and were chosen as an illustration in the range of Chickenpox and Rubella that served as motivation [6]. The value of \(\beta\) was obtained from Eq. (11) at demographic equilibrium. In Fig. 4 we consider a range of values \({\cal R}_{0}\). Figure 1: Schematic diagram of the SIR model for juveniles and adults. The transition rate between both age groups is given by \(\nu\). Vaccination (with coverage \(p\)) provides long term immunity for a fraction \(\lambda\) of the individuals and temporary (i.e., during the juvenile phase) for a fraction \(1-\lambda\). Disease transmission \(\beta\) and recovering \(\gamma\) are assumed to be independent of the age group. _equilibrium_ if \(N_{\rm J}\) and \(N_{\rm A}\) are constants. In that case \[N_{\rm J}(t)=N_{\rm J}^{*}:=\frac{\mu}{\mu+\nu}\,\quad N_{\rm A}(t)=N_{\rm A}^{*}:= \frac{\nu}{\nu+\mu}. \tag{8}\] Both the disease-free and endemic equilibrium can be readily obtained. Their stability depends on the value of the critical parameter \(\mathcal{R}_{p}\), obtained using the next generation matrix approach [17]. More explicitly, we state: **Theorem 1**.: _For any value of \(p\in[0,1]\), there is one equilibrium solution of Eqs. 
(1)-(7), called the disease-free solution, given by_ \[V^{\rm df} :=N_{\rm J}^{*}p(1-\lambda), S_{\rm J}^{\rm df} :=N_{\rm J}^{*}(1-p),\] \[R_{\rm J}^{\rm df} :=N_{\rm J}^{*}p\lambda, S_{\rm A}^{\rm df} :=N_{\rm A}^{*}(1-p\lambda),\] \[R_{\rm A}^{\rm df} :=N_{\rm A}^{*}p\lambda, I_{\rm J}^{\rm df} :=I_{\rm A}^{\rm df}=0.\] _Let the effective reproduction number be_ \[\mathcal{R}_{p} := \frac{\beta}{\gamma+\mu}\bigg{[}\frac{\mu+\nu+\gamma}{\nu+\gamma }S_{J}^{\rm df}+S_{A}^{\rm df}\bigg{]}\] \[= \frac{\beta}{\gamma+\mu}\frac{\mu(\mu+\gamma+\nu)(1-p)+\nu(\nu+ \gamma)(1-\lambda p)}{(\gamma+\nu)(\mu+\nu)}\.\] _Then_ \begin{table} \begin{tabular}{c l} \hline **Variable** & **Description** \\ \hline \hline \(V\) & Fraction of individuals vaccinated at birth \\ \hline \(S_{\rm J}\) & Fraction of susceptible juveniles \\ \hline \(I_{\rm J}\) & Fraction of infectious juveniles, \\ \hline \(R_{\rm J}\) & Fraction of juveniles with life-long immunity (due to recovery or vaccination) \\ \hline \(S_{\rm A}\) & Fraction of susceptible adults \\ \hline \(I_{\rm A}\) & Fraction of infectious adults \\ \hline \(R_{\rm A}\) & Fraction of adults with life-long immunity (due to recovery or vaccination) \\ \hline \(N_{\rm J}\) & Fraction of juveniles (equal to \(V+S_{\rm J}+I_{\rm J}+R_{\rm J}\)) \\ \hline \(N_{\rm A}\) & Fraction of adults (equal to \(S_{\rm A}+I_{\rm A}+R_{\rm A}\)) \\ \hline \end{tabular} \end{table} Table 2: Compartment variables used in the model; c.f. Eqs. (1)–(7). * _If_ \(\mathcal{R}_{p}<1\) _the only equilibrium solution of Eqs. (_1_)-(_7_) is the disease-free solution, which is locally asymptotically stable._ * _If_ \(\mathcal{R}_{p}>1\) _the disease-free solution is unstable. Furthermore, there is a second equilibrium solution of Eqs. (_1_)-(_7_), called_ the endemic solution_, given by_ \[V^{\rm en} :=N_{\rm J}^{*}p(1-\lambda)=\frac{\mu p(1-\lambda)}{\mu+\nu},\] \[S_{\rm J}^{\rm en} :=\frac{N_{J}^{*}(1-p)\nu}{\nu+\beta I^{\rm en}}=\frac{\mu\nu(1-p )}{(\mu+\nu)(\nu+\beta I^{\rm en})},\] \[R_{J}^{\rm en} :=\frac{\gamma}{\nu}I_{\rm J}^{\rm en}+N_{\rm J}^{*}p\lambda=N_{J }^{*}\left[\frac{(1-p)\gamma\beta I^{\rm en}}{(\gamma+\nu)(\nu+\beta I^{\rm en })}+p\nu\right],\] \[S_{\rm A}^{\rm en} :=\mu N_{\rm A}^{*}\frac{(1-p)\nu+p(1-\lambda)(\nu+\beta I^{\rm en })}{(\mu+\beta I^{\rm en})(\nu+\beta I^{\rm en})}\,\] \[R_{\rm A}^{\rm en} :=\frac{\gamma}{\mu}I^{\rm en}+N_{\rm A}^{*}p\lambda,\] \[I_{\rm J}^{\rm en} :=\frac{N_{\rm J}^{*}(1-p)\beta I^{\rm en}\nu}{(\nu+\gamma)(\nu+ \beta I^{\rm en})},\] \[I_{\rm A}^{\rm en} :=\frac{\mu N_{\rm A}^{*}\beta I^{\rm en}}{\mu+\gamma}\left[ \frac{p(1-\lambda)}{\mu+\beta I^{\rm en}}+\frac{(1-p)\nu}{(\mu+\beta I^{\rm en })(\nu+\beta I^{\rm en})}\right.\] \[\left.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad+\left.\frac{(1-p)\nu}{(\nu+\gamma)(\nu+\beta I^{\rm en})}\right].\] Finally, the total number of infectious individuals at the endemic equilibrium is given by \[I^{\rm en}:=I_{\rm J}^{\rm en}+I_{\rm A}^{\rm en}=\frac{b_{1}+\sqrt{b_{1}^{2 }+4b_{2}b_{0}}}{2b_{2}}\,\] (10) where \[b_{0}:=\mu\nu\left[\beta(\mu(\mu+\gamma+\nu)(1-p)+\nu(\nu+\gamma )(1-\lambda p))\right.\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\left.-(\gamma+\mu)(\gamma+\nu)(\mu+\nu)\right]\] \[(b_{0}>0\Leftrightarrow\mathcal{R}_{p}>1)\,\] \[b_{1}:=\beta^{2}\nu\mu((\gamma+\nu)(1-\lambda p)+\mu(1-p))-\beta( \gamma+\mu)(\gamma+\nu)(\mu+\nu)^{2},\] \[b_{2}:=\beta^{2}(\gamma+\mu)(\gamma+\nu)(\mu+\nu)\.\] Proof.: The 
disease-free solution is immediate after imposing \(I_{\rm J}^{\rm df}=I_{\rm A}^{\rm df}=0\) in the stationary (i.e., \({}^{\prime}=0\)) solution of the System (1)-(7). Following [17], we consider the compartments corresponding to infectious individuals to be \(x=(I_{\rm J},I_{\rm A})\) and the remaining compartments corresponding to non-infectious classes \(y=(V,S_{\rm J},R_{\rm J},S_{\rm A},R_{\rm A})\). We define the rate of appearance of new infections as \(\mathcal{F}(x,y)=(\beta(I_{\rm J}+I_{\rm A})S_{\rm J},\beta(I_{\rm J}+I_{\rm A })S_{\rm A}))\) and the remaining transition terms as \(\mathcal{V}(x,y)=(\nu I_{J}+\gamma I_{J},-\nu I_{J}+(\gamma+\mu)I_{\rm A})\). Hence, System (1) can be written as \[x^{\prime}=\mathcal{F}(x,y)-\mathcal{V}(x,y),\quad y^{\prime}=g(x,y)\,\] for an appropriate function \(g\). We define the matrices \[F=\begin{bmatrix}\dfrac{\partial\mathcal{F}_{i}}{\partial x_{j}}(x_{0},y_{0} )\end{bmatrix}=\begin{bmatrix}\beta S_{\rm J}^{\rm df}&\beta S_{\rm J}^{\rm df }\\ \beta S_{\rm A}^{\rm df}&\beta S_{\rm A}^{\rm df}\end{bmatrix}\] and \[V=\begin{bmatrix}\dfrac{\partial\mathcal{V}_{i}}{\partial x_{j}}(x_{0},y_{0} )\end{bmatrix}=\begin{bmatrix}\nu+\gamma&0\\ -\nu&\mu+\gamma\end{bmatrix}\,\] where \((x_{0},y_{0})\) represents the disease free equilibrium. It's straightforward to verify conditions \((A_{1})\) to \((A_{5})\) of Theorem 2 in [17], hence we conclude that the effective reproduction number \(\mathcal{R}_{p}\) is given by the spectral radius of the next generation matrix \[FV^{-1}=\dfrac{\beta}{(\gamma+\mu)(\gamma+\nu)}\begin{bmatrix}(\gamma+\mu+\nu )S_{\rm J}^{\rm df}&(\gamma+\nu)S_{\rm J}^{\rm df}\\ (\gamma+\mu+\nu)S_{\mathcal{A}}^{\rm df}&(\gamma+\nu)S_{\mathcal{A}}^{\rm df }\end{bmatrix}\,\] i.e., \[\mathcal{R}_{p}:=\dfrac{\beta}{\gamma+\mu}\left[\dfrac{\mu+\nu+\gamma}{\nu+ \gamma}S_{J}^{\rm df}+S_{A}^{\rm df}\right]\.\] The stability follows from [17], namely the disease-free equilibrium is locally asymptotically stable if \(\mathcal{R}_{p}<1\), and unstable if \(\mathcal{R}_{p}>1\). For the computation of the endemic equilibrium we follow the same techniques as before; in this case, however, the stationary solution implicitly depends on the value of \(I^{\rm en}\), the solution of \(\wp(I)=0\), where \(\wp(I):=-b_{2}I^{2}+b_{1}I+b_{0}\). As an immediate consequence of Theorem 1, we write \[\mathcal{R}_{p}=\mathcal{R}_{0}\left[1-p\left(1-\dfrac{(1-\lambda)\nu(\nu+ \gamma)}{\mu(\mu+\gamma+\nu)+\nu(\nu+\gamma)}\right)\right],\] with \[\mathcal{R}_{0}:=\left.\mathcal{R}_{p}\right|_{p=0}=\frac{\beta}{\gamma+\mu}\left[ \left(1+\frac{\mu}{\nu+\gamma}\right)N_{J}^{*}+N_{A}^{*}\right]. \tag{11}\] Furthermore, **Theorem 2**.: _Let \(\Gamma=\{(V,S_{\rm J},I_{\rm J},R_{\rm J},S_{\rm A},I_{\rm A},R_{\rm A}):S_{\rm J }\leq S_{\rm J}^{\rm df},S_{\rm A}\leq S_{\rm A}^{\rm df},\)\(V\leq V^{\rm df},N_{\rm A}\leq N_{\rm A}^{*}\}\), and consider the model given by Eqs. (1)-(7). Then_ * _If_ \(\mathcal{R}_{p}<1\) _the only equilibrium solution of the System (_1_)-(_7_) is the disease-free solution, which is globally asymptotically stable in_ \(\Gamma\)_._ * _If_ \(\mathcal{R}_{p}>1\) _the disease-free solution is unstable. The System (_1_)-(_7_) is uniformly persistence._ Proof.: The set \(\Gamma\) is positively invariant. Following the notation from the proof of Thm. 1 we define \(f(x,y)=(F-V)x-\mathcal{F}+\mathcal{V}\). We have that \(f(x,y)\geq 0\) with \(f(x,y_{0})=0\) in \(\Gamma\), \(F\geq 0\), \(V^{-1}\geq 0\) and \(V^{-1}F\) is irreducible. 
Moreover, \((0,y)=(0,N_{\rm J}^{*},0,N_{\rm A}^{*},0)\) is a globally asymptotically stable (GAS) equilibrium of the system \(y^{\prime}=g(0,y)\). Hence, by [16, Thm. 2.2], we conclude that the disease-free solution is GAS in \(\Gamma\) for \(\mathcal{R}_{p}<1\) and that, for \(\mathcal{R}_{p}>1\), the system is uniformly persistent. Finally, it is straightforward to prove that **Proposition 3**.: _Consider_ \[\lambda>\lambda_{c}:=1-\frac{(\gamma+\mu)(\mu+\nu)}{\beta\nu} \tag{12}\] _and \(\mathcal{R}_{0}>1\). Then, there is a critical vaccination coverage_ \[p_{\rm c}:=\frac{\mu(\mu+\gamma+\nu)+\nu(\nu+\gamma)}{\mu(\mu+\gamma+\nu)+ \lambda\nu(\gamma+\nu)}\left(1-\frac{1}{\mathcal{R}_{0}}\right)\in(0,1) \tag{13}\] _such that for any \(p>p_{\rm c}\) the disease free solution is globally asymptotically stable in \(\Gamma\)._ ### Social cost At the endemic equilibrium, we define a social cost function (per unit of time) depending on the disease incidence and disease cost for both juveniles and adults and on the vaccination costs: \[\phi(p,\lambda) :=c_{\mathrm{A}}^{\mathrm{d}}\beta(I_{\mathrm{J}}^{\mathrm{en}}+I_{ \mathrm{A}}^{\mathrm{en}})S_{\mathrm{A}}^{\mathrm{en}}+c_{\mathrm{J}}^{ \mathrm{d}}(\beta(I_{\mathrm{J}}^{\mathrm{en}}+I_{\mathrm{A}}^{\mathrm{en}})S_ {\mathrm{J}}^{\mathrm{en}}+\nu I_{\mathrm{J}}^{\mathrm{en}})+c^{\mathrm{v}} \delta\mu pN_{\mathrm{A}}^{*}\] \[=c_{\mathrm{A}}^{\mathrm{d}}(\gamma+\mu)I_{\mathrm{A}}^{\mathrm{ en}}+c_{\mathrm{J}}^{\mathrm{d}}(\gamma+\nu)I_{\mathrm{J}}^{\mathrm{en}}+c^{ \mathrm{v}}\delta\mu pN_{\mathrm{A}}^{*}\] \[=c_{\mathrm{A}}^{\mathrm{d}}[(\gamma+\mu)I_{\mathrm{A}}^{\mathrm{ en}}+\varepsilon(\gamma+\nu)I_{\mathrm{J}}^{\mathrm{en}}+r\delta\mu pN_{ \mathrm{A}}^{*}],\] where \(c_{\mathrm{A}}^{\mathrm{d}}>0\) and \(c_{\mathrm{J}}^{\mathrm{d}}>0\) are the disease costs for adults and juveniles, respectively, and \(c^{\mathrm{v}}>0\) is the vaccination cost. We define the relative costs \(\varepsilon=c_{\mathrm{J}}^{\mathrm{d}}/c_{\mathrm{A}}^{\mathrm{d}}\) and \(r=c^{\mathrm{v}}/c_{\mathrm{A}}^{\mathrm{d}}\). Upon normalization, we will assume from now on that \(c_{\mathrm{A}}^{\mathrm{d}}=1\). The fraction of the vaccination cost supported by the society is given by \(\delta\in[0,1]\), where \(\delta=1\) means that all cost is supported by the society (normally, the State), where \(\delta=0\) means that the entire cost of the vaccination is paid by the vaccinated individual. Note that \(I_{\mathrm{A,J}}^{\mathrm{en}}\) depend explicitly on \(p\) and \(\lambda\), cf. Thm. 1. We define the acceptable social-cost region as \[\mathcal{V}=\left\{(p,\lambda)\in[0,1]\times[0,1]:\Phi_{\varepsilon,r,\delta}( p,\lambda):=\phi(p,\lambda)-\phi_{0}\leq 0\right\}\;,\] where \(\phi_{0}=\phi(0,0)\) is the social cost of the disease in an unvaccinated population. \begin{table} \begin{tabular}{c l} \hline **Parameter** & **Description** \\ \hline \hline \(c_{\mathrm{A}}^{\mathrm{d}}\) & disease cost of an adult \\ \hline \(c_{\mathrm{J}}^{\mathrm{d}}\) & disease cost of a juvenile \\ \hline \(c^{\mathrm{v}}\) & vaccination cost \\ \hline \(\delta\) & Fraction of the vaccination costs supported by the society \\ \hline \(\varepsilon:=c_{\mathrm{J}}^{\mathrm{d}}/c_{\mathrm{A}}^{\mathrm{d}}\) & relative disease cost of juveniles vs. adults \\ \hline \(r:=c^{\mathrm{v}}/c_{\mathrm{A}}^{\mathrm{d}}\) & relative vaccination cost vs. adults disease cost \\ \hline \end{tabular} \end{table} Table 3: Cost variables used in the model. 
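As a numerical illustration, the closed-form quantities above (the effective reproduction number \(\mathcal{R}_{p}\) of Theorem 1, the endemic prevalence of Eq. (10), and the social cost \(\phi\)) can be evaluated directly. The following Python sketch uses the Table 1 values with \(\mathcal{R}_{0}=8\) and, purely as an illustrative assumption, the relative costs \(\varepsilon=0.15\), \(r=0.1\), and \(\delta=0\) of Fig. 2; it is a minimal sketch, not the authors' implementation.

```python
import math

# Parameters from Table 1; beta is obtained by inverting Eq. (11) at R0 = 8.
mu, gamma, nu, R0 = 1/70, 365/12, 1/15, 8.0
NJ, NA = mu/(mu + nu), nu/(mu + nu)                      # demographic equilibrium, Eq. (8)
beta = R0*(gamma + mu) / ((1 + mu/(nu + gamma))*NJ + NA)

def R_p(p, lam):
    """Effective reproduction number (Theorem 1)."""
    return beta/(gamma + mu) * (mu*(mu + gamma + nu)*(1 - p)
                                + nu*(nu + gamma)*(1 - lam*p)) / ((gamma + nu)*(mu + nu))

def endemic_I(p, lam):
    """Total and juvenile infectious fractions at the endemic equilibrium, Eq. (10)."""
    if R_p(p, lam) <= 1:
        return 0.0, 0.0
    b0 = mu*nu*(beta*(mu*(mu + gamma + nu)*(1 - p) + nu*(nu + gamma)*(1 - lam*p))
                - (gamma + mu)*(gamma + nu)*(mu + nu))
    b1 = beta**2*nu*mu*((gamma + nu)*(1 - lam*p) + mu*(1 - p)) \
         - beta*(gamma + mu)*(gamma + nu)*(mu + nu)**2
    b2 = beta**2*(gamma + mu)*(gamma + nu)*(mu + nu)
    I = (b1 + math.sqrt(b1**2 + 4*b2*b0)) / (2*b2)
    IJ = NJ*(1 - p)*beta*I*nu / ((nu + gamma)*(nu + beta*I))
    return I, IJ

def social_cost(p, lam, eps=0.15, r=0.1, delta=0.0):
    """phi(p, lambda) with the adult disease cost c_A^d normalized to 1."""
    I, IJ = endemic_I(p, lam)
    IA = I - IJ
    return (gamma + mu)*IA + eps*(gamma + nu)*IJ + r*delta*mu*p*NA

# Phi <= 0 means this (p, lambda) lies in the acceptable social-cost region.
Phi = social_cost(0.5, 0.8) - social_cost(0.0, 0.0)
print(R_p(0.5, 0.8), Phi)
```

Scanning a grid of \((p,\lambda)\) values and recording the sign of \(\Phi\) reproduces the kind of acceptable-cost map shown in Fig. 2.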
Upon normalization \(c_{\mathrm{A}}^{\mathrm{d}}=1\), results presented in this article will depend only on \(\delta\), a modeling parameter, \(\varepsilon\) and \(r\). The values for the relative costs \(\varepsilon\) and \(r\) used in this work are arbitrary and used for illustration purposes. We define two critical values: \(\lambda_{\rm sup}\), below which social-cost acceptance depends on vaccine coverage \(p\); and \(\lambda_{\rm inf}\), below which social-cost is unacceptable for any vaccine coverage \(p\). \[\lambda_{\rm sup}=\sup_{\Phi(p,\lambda)>0}\lambda,\quad\lambda_{\rm inf}=\inf_{ \Phi(p,\lambda)<0}\lambda\.\] Fig. 2 illustrates the acceptable social-cost region in the parameter space \((p,\lambda)\) when \(\delta=0\). Note that there is a subregion in which is is possible to eliminate the disease, i.e., \(\mathcal{R}_{p}<1\). ### Individual cost and Nash equilibria Following [2], we assume that individuals freely choose to be vaccinated according to the perceived relative costs of the disease and of the vaccination. Figure 2: The light-blue region indicates the acceptable cost region \(\Phi<0\), while the grey region is the disease-free region \(\mathcal{R}_{p}<1\). The number \(\lambda_{\rm inf}\) indicates the minimum value of vaccine efficacy such that a sufficiently high vaccine coverage will guarantee that the disease has an acceptable social cost at equilibrium. The number \(\lambda_{\rm sup}\) indicates the minimum value of \(\lambda\) such that _any_ vaccine coverage is in the acceptable social-cost region. We assume a juvenile/adult relative cost \(\varepsilon=0.15\), a vaccine/disease cost \(r=0.1\), and all vaccination costs are supported by the vaccinated individual, i.e., \(\delta=0\). For each \((p,\lambda)\), let us define \(\Pi^{\rm nv}_{\rm A}\) and \(\Pi^{\rm nv}_{\rm J}\) as the stationary (i.e., at equilibrium) probabilities of getting the disease as an adult and as a juvenile for unvaccinated individuals; and \(\Pi^{\rm v}_{\rm A}\) to be the stationary probability of getting the disease as an adult, if vaccinated at birth. These values are equal to zero at the disease-free equilibrium and non-zero at the endemic equilibrium. Furthermore, they are continuous functions from the model parameters, cf. [18, Sec. 3.4]. We obtain explicit expressions for each of these three parameters. For \(\Pi^{\rm nv}_{\rm J}\), we consider a given individual in the class \(S_{\rm J}\), from which there are two possible exits. Either that given individual contracts the disease (and move to the class \(I_{\rm J}\)) or he or she turns into an adult without being infected and moves to the class \(S_{\rm A}\). Explicitly, \[\Pi^{\rm nv}_{\rm J}(p,\lambda):=\frac{\beta I^{{}^{*}}S^{{}^{*}}_{\rm J}}{( \beta I^{{}^{*}}+\nu)S^{{}^{*}}_{\rm J}}=\frac{\beta I^{{}^{*}}}{\beta I^{{}^{ *}}+\nu}\.\] The probability that a non-vaccinated adult gets the disease is given by the probability that a previously non-vaccinated juvenile does not get the disease as a juvenile and then gets the disease as an adult. Therefore \[\Pi^{\rm nv}_{\rm A}:=(1-\Pi^{\rm nv}_{\rm J})\,\frac{\beta I^{{}^{*}}S^{{}^{* }}_{\rm A}}{(\beta I^{{}^{*}}+\mu)S^{{}^{*}}_{\rm A}}=\frac{\nu}{\beta I^{{}^{ *}}+\nu}\frac{\beta I^{{}^{*}}}{\beta I^{{}^{*}}+\mu}\,\] with \(I^{*}:=I^{*}_{\rm J}+I^{*}_{\rm A}\). 
Finally, the probability that a vaccinated adult gets the disease is the probability that the vaccine is effective only during the juvenile phase, \(1-\lambda\), times the probability to get the disease from the class \(S_{\rm A}\), i.e., \[\Pi^{\rm v}_{\rm A}:=(1-\lambda)\frac{\beta I^{{}^{*}}}{\beta I^{{}^{*}}+\mu}\.\] We define the _individual cost function_ at endemic equilibrium, which corresponds to the expected cost of the individual strategy of being vaccinating with probability \(q\) in a population with coverage \(p\): \[\Psi_{\varepsilon,r,\delta}(q,p,\lambda) :=(1-q)(\Pi^{\rm nv}_{\rm A}+\varepsilon\Pi^{\rm nv}_{\rm J})+q( \Pi^{\rm v}_{\rm A}+r(1-\delta))\] \[=\Pi^{\rm nv}_{\rm A}+\varepsilon\Pi^{\rm nv}_{\rm J}+q\left[r(1 -\delta)-\pi(p,\lambda)\right]\,\] where the _vaccination-infection risk index_, introduced in [12], is given by \[\pi(p,\lambda):=\Pi^{\rm nv}_{\rm A}(p,\lambda)+\varepsilon\Pi^{\rm nv}_{\rm J }(p,\lambda)-\Pi^{\rm v}_{\rm A}(p,\lambda)\.\] The individual _vaccination marginal expected payoff gain_\(E(q,p)\) of an individual that uses the strategy of vaccinating with probability \(q\) in a population that vaccinates with probability \(p\) is given by \[E(q,p):=E(q,p;\varepsilon,r,\delta,\lambda):=\Psi_{\varepsilon,r,\delta}(0,p, \lambda)-\Psi_{\varepsilon,r,\delta}(q,p,\lambda)\.\] **Definition 1**.: _The population vaccination strategy \(p_{*}\) is a vaccination Nash equilibrium, if_ \[E(q,p_{*})-E(p_{*},p_{*})=(p_{*}-q)\left[r(1-\delta)-\pi(p_{*},\lambda) \right]\leq 0,\] _for every strategy \(q\in[0,1]\)._ In simple words, we say that the system is at Nash equilibrium if the vaccination coverage \(p_{*}\) is such that for every individual that uses a strategy \(q\) the expected payoff is not larger than the one it would have if the strategy \(p_{*}\) were used. **Proposition 4**.: _The model given by Eqs. (2)-(7) has at least one Nash equilibrium._ Proof.: If \(\pi(0,\lambda)\leq r(1-\delta)\), then \(p_{*}=0\) is a Nash equilibrium. If \(\pi(1,\lambda)\geq r(1-\delta)\), then \(p_{*}=1\) is a Nash equilibrium. If both inequalities are false there is at least one value of \(p_{*}\in(0,1)\) such that \(\pi(p_{*},\lambda)=r(1-\delta)\) and \(p_{*}\) is a Nash equilibrium. For high vaccine efficacy \(\lambda>\lambda^{*}\) and \(\delta\in[0,1)\), the vaccination coverage that results from individuals' choices is below the elimination threshold \(p_{\mathrm{c}}\), defined in Prop. 3. **Proposition 5**.: _Let \(\varepsilon,r>0\), \(\delta\in[0,1)\), \(\lambda\in[\lambda_{\mathrm{c}},1]\), where \(\lambda_{\mathrm{c}}\) is given by Prop. 3. Let \(p_{\mathrm{c}}^{\lambda}\) given by Prop. 3 and \(p_{*}^{\lambda}\) a Nash equilibrium of the associated model. Then, \(p_{*}^{\lambda}<p_{\mathrm{c}}^{\lambda}\)._ Proof.: From Prop. 3, for any value \(p>p_{\mathrm{c}}^{\lambda}\) it is true that \(\pi(p,\lambda)=0\). From the continuity of \(\pi\), we conclude that \(\pi(p_{\mathrm{c}}^{\lambda},\lambda)=0\). Assume that \(p_{*}^{\lambda}\geq p_{\mathrm{c}}^{\lambda}>0\). From Def. 1 we have that \((p_{*}^{\lambda}-q)\left[r(1-\delta)-\pi(p_{*}^{\lambda},\lambda)\right]\leq 0\) for every \(q\in[0,1]\), therefore \(p_{*}^{\lambda}\leq q\), for every \(q\in[0,1]\), which is a contradiction. Note that this result generalizes for arbitrary efficacy \(\lambda\) the idea, already present in [2], that a Nash equilibrium of a vaccination game is always below the threshold to eradicate a disease. Inspired by the concept of evolutionary stable strategy in game dynamics, cf. 
[7], we define: **Definition 2**.: _The population vaccination strategy \(p_{*}\) is an evolutionary stable vaccination (ESV) strategy, if there is a \(\tau_{0}>0\), such that for every \(\tau\in(0,\tau_{0})\) and for every \(q\in[0,1]\), with \(q\neq p_{*}\),_ \[E(q,(1-\tau)p_{*}+\tau q)-E(p_{*},(1-\tau)p_{*}+\tau q)<0.\] We are ready to state the conditions for the Nash equilibrium to be ESV. **Proposition 6**.: _Let \(p_{*}\) be a Nash equilibrium of the vaccination game. If \(p_{*}=0\) or \(p_{*}=1\), then \(p_{*}\) is an ESV. Furthermore, if \(\pi(p_{*},\lambda)=r(1-\delta)\), \(p_{*}\) is an ESV if and only if \(\pi(p,\lambda)\) is decreasing at \(p=p_{*}\). In particular, \(p_{*}\in(0,1)\) is an ESV if and only if \(\pi(p,\lambda)\) is decreasing at \(p=p_{*}\)._ Proof.: This proof follows ideas from [12]. Let \(p_{*}=0\) (\(p_{*}=1\)) be a Nash equilibrium. From Def. (1), we conclude that \(\pi(0,\lambda)\leq r(1-\delta)\) (\(\pi(1,\lambda)\geq r(1-\delta)\), respect.). Assume that a strict inequality is valid. Let \(\tau_{0}\) be small enough such that for all \(\tau<\tau_{0}\), it is valid that \(\pi(\tau q,\lambda)<r(1-\delta)\) (\(\pi(1-\tau(1-q),\lambda)>r(1-\delta)\), respect.). It is clear that \(E(q,\tau q)-E(0,\tau q)=-q(r(1-\delta)-\pi(\tau q,\lambda))<0\) (\(E(q,1-\tau(1-q))-E(1,1-\tau(1-q))=(1-q)(r(1-\delta)-\pi(1-\tau(1-q),\lambda) )<0\), respect.), for all \(q\neq p_{*}\). For the second part, note that \(\pi(p_{*},\lambda)=r(1-\delta)\) implies that \[E(q,(1-\tau)p_{*}+\tau q)- E(p_{*},(1-\tau)p_{*}+\tau q)\] \[=-(q-p_{*})(\pi(p_{*},\lambda)-\pi((1-\tau)p_{*}+\tau q,\lambda) )\,\] and therefore \(p_{*}\) is an ESV if and only if \(\pi\) is decreasing in the first argument at \(p=p_{*}\). Finally, the final result follows from Def. 1. Fig. 3 illustrates two possible situations described in Prop. 6: (a) the two pure strategies are ESV and there exists an interior Nash equilibrium that is unstable; (b) for higher relative vaccination costs the interior Nash equilibrium is stable when condition on Prop. 6 is met. For both situations described, there is a range \(\left(\lambda_{\inf}^{\text{bi}},\lambda_{\sup}^{\text{bi}}\right)\) for the vaccine efficacy \(\lambda\) were the model presents bi-stability. ## 3 Discussion and Numerical Examples In this section, we present several numerical examples to discuss the present work. Parameters will be, except otherwise said, taken from Table 1. In Figure 3: Nash equilibria as a function of vaccine efficacy \(\lambda\) and vaccination coverage \(p\) for relative vaccination costs (a) \(r=0.25\) and (b) \(r=0.30\). The Light-red region is such that \(\pi(p)>r(1-\delta)\), i.e., in this region a rational individual will accept to be vaccinated with a probability larger than the population average. In particular, in that region, there is an individual incentive to increase the overall vaccination coverage. Red dashed and full lines correspond to unstable and stable Nash equilibria, respectively. The function \(\pi\) is decreasing (increasing) with respect to \(p\) in the full (dashed) red line, cf. Prop. 6. The grey region is the disease-free region where \(\mathcal{R}_{p}<1\), and the full black line is the disease-free threshold \(\mathcal{R}_{p}=1\). The horizontal black dotted line exemplifies the dynamics of rational individuals (indicated by the arrows) assuming a vaccine efficacy of \(\lambda=0.70\). 
The region between \(\lambda_{\inf}^{\text{bi}}\) and \(\lambda_{\sup}^{\text{bi}}\) is the region for model bistability, in which we find three Nash equilibria, two stable and one unstable in between. We assume a juvenile/adult relative cost \(\varepsilon=0.15\) and all vaccination costs are supported by the vaccinated individual \(\delta=0\). Note that it is not possible to reach the disease-free region through voluntary vaccination if there is no incentive to be vaccinated. However, in case (a), the region in which there is no individual incentive to increase the vaccination coverage close to the disease-free region is disconnected from the set of vaccination coverage \(p=0\). particular, chickenpox epidemiology fits our framework, as it is a mild disease for children that can have increased risk for adults and its use in a universal vaccination program is debatable [9]. However, the framework developed here may be applied to several other situations, such as Zika or rubella. Fig. 4 shows the proportion of infectious individuals at equilibrium as a function of the basic reproduction number without vaccination, i.e., \(p=0\). The total proportion of infectious individuals \(I^{\rm en}\) in the endemic equilibrium is an increasing function, as is the case of the proportion of juveniles \(I^{\rm en}_{\rm J}\) in the same equilibrium. The fraction of adults increases for small values of \({\cal R}_{0}\) and then decreases. We conclude that a highly transmissible disease associated with permanent immunity will be, in equilibrium, a childhood disease. If the effect of this disease is mild in juveniles, there is no severe economic cost associated with the endemic state. This is the main reason we will always compare the economic cost associated with a vaccine program with vaccine efficacy \(\lambda\) and vaccine coverage \(p\) with the no-vaccination endemic state, cf. definition of \(\Phi\) in Subsection 2.2. For that choice of parameters, most of the infectious individuals are below 15 years old, but a reasonable proportion of infectious individuals is above this value. Assuming \({\cal R}_{0}=8\), the inclusion of a vaccination scheme is illustrated in Fig. 5. It clearly shows that for the relevant set of parameters, the inclusion of a vaccination program will decrease the overall number of infectious individuals in the endemic equilibrium but it will increase the fraction of adults. Therefore, the introduction of the vaccination scheme should be pondered to avoid negative outcomes for the population. After the introduction of the vaccination, two natural questions arise: i) _are people willing to be vaccinated?_, and ii) _has the individual behavior a positive or negative effect on society?_ The first question is addressed by introducing an individual cost of being vaccinated (that includes the perceived risk of the vaccine, eventual absence to work to go or to take the children to the vaccination site, the financial cost of buying the vaccine, etc) and the cost of non-being vaccinated, i.e., all the costs associated to contracting the disease. If the first is larger, then rational individuals will be vaccinated, if it is smaller, they will not. The equality points correspond to the Nash equilibrium of the model. For the second question, we discuss if a given vaccination is in the acceptable social cost region. Ideally, we shall try to find a stable Nash equilibrium within that region, i.e., with \(\Phi<0\). However, this is not always possible. Figs. 
6a and 6b illustrate the regions on the parameter space \((p,\lambda)\) in Figure 4: Proportion of infectious individuals at endemic equilibrium without vaccination (\(p=0\)) as a function of the basic reproduction number \(\mathcal{R}_{0}\). The full line indicates \(I^{\mathrm{en}}=I_{\mathrm{J}}^{\mathrm{en}}+I_{\mathrm{A}}^{\mathrm{en}}\), while the dotted and dashed lines indicate \(I_{\mathrm{J}}^{\mathrm{en}}\) and \(I_{\mathrm{A}}^{\mathrm{en}}\). Note that for larger values of \(\mathcal{R}_{0}\), the fraction of juveniles approaches the full number of infectious, indicating that a highly-transmissible disease with permanent immunity will be, in the stationary state, a childhood disease. However, when the transmission is low, a significant number of infectious individuals is adult. Figure 5: (a) Fraction of infectious individuals at endemic equilibrium assuming vaccination coverage \(p\) and vaccine efficacy \(\lambda\). (b) Fraction of infected juveniles, with respect to the number of infected individuals at endemic equilibrium, \(I_{\rm J}^{\rm en}/I^{\rm en}\), as a function of the vaccine coverage \(p\) and vaccine efficacy \(\lambda\). Note that increasing the vaccine coverage implies a smaller number of infected individuals but the disease became more relevant among adults. The grey region in the upper left corner of both graphs indicates the disease-free region. Figure 6: Social _vs._ individual interest with vaccine coverage \(p\) and vaccine efficacy \(\lambda\), with fixed \(\varepsilon=0.15\). The grey region marks the disease-free region, i.e. \(\mathcal{R}_{p}<1\). We consider different scenarios: (a) with all vaccination costs assumed by the individual, i.e., \(\delta=0\), with a high-cost vaccine \(r=0.35\); (b) \(\delta=0\) and low-cost vaccine \(r=0.01\); or (c) with shared costs between the individual and the society \(\delta=0.36\) for the high-cost vaccine \(r=0.35\). In light-blue region, light-purple region and grey region the social cost is lower than the social cost of an unvaccinated population, i.e., \(\Phi<0\); the blue continuous line indicates the level set \(\Phi=0\). In the light-red region and light-purple region, it is in the individual interest to be vaccinated with a larger probability than the population average; in the light-red region the social cost of vaccination is positive. The red line corresponds to the set of Nash equilibria. The horizontal red dotted lines represent the range \(\lambda\in\left(\lambda_{\inf}^{\text{bi}},\lambda_{\sup}^{\text{bi}}\right)\) where the model has bi-stability and the horizontal blue dotted lines represent the range \(\lambda\in\left(\lambda_{\inf},\lambda_{\sup}\right)\) were the social cost of vaccination is acceptable only if the level of vaccination is sufficiently high. In (c) the vaccine costs are shared so that \(\lambda_{\sup}^{\text{bi}}=\lambda_{\sup}\) and the blue and red lines intersect for \(p=0\). In particular, all Nash equilibria \(p>0\) are in the socially cost-efficient region. which the individual and the social interests coincide and differ when all the individual vaccination costs are assumed by the beneficiary (i.e., there are no shared costs, as, for example, government subsidies). Fig. 6c introduces shared costs for high-cost vaccines, i.e., the society absorbs part of the individual cost. Close to the disease-free region \(\mathcal{R}_{p}<1\), there is always a barrier where there is no individual interest to be vaccinated, as the infection rates at that region will be residual. 
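The Nash equilibria displayed in these figures can also be located numerically by scanning for coverages \(p\) at which the risk index \(\pi(p,\lambda)\) crosses \(r(1-\delta)\). The self-contained Python sketch below performs such a scan; the values \(\lambda=0.70\), \(\varepsilon=0.15\), \(r=0.25\), and \(\delta=0\) are taken from the example highlighted in Fig. 3a and are used here purely for illustration.

```python
import math

# Table 1 parameters; the remaining values follow the Fig. 3a example (assumed here).
mu, gamma, nu, R0, lam = 1/70, 365/12, 1/15, 8.0, 0.70
eps, r, delta = 0.15, 0.25, 0.0
NJ, NA = mu/(mu + nu), nu/(mu + nu)
beta = R0*(gamma + mu) / ((1 + mu/(nu + gamma))*NJ + NA)   # inverts Eq. (11)

def I_en(p):
    """Total infectious fraction at the endemic equilibrium (Eq. (10)); 0 if R_p <= 1."""
    b0 = mu*nu*(beta*(mu*(mu + gamma + nu)*(1 - p) + nu*(nu + gamma)*(1 - lam*p))
                - (gamma + mu)*(gamma + nu)*(mu + nu))
    if b0 <= 0:
        return 0.0
    b1 = beta**2*nu*mu*((gamma + nu)*(1 - lam*p) + mu*(1 - p)) \
         - beta*(gamma + mu)*(gamma + nu)*(mu + nu)**2
    b2 = beta**2*(gamma + mu)*(gamma + nu)*(mu + nu)
    return (b1 + math.sqrt(b1**2 + 4*b2*b0)) / (2*b2)

def pi(p):
    """Vaccination-infection risk index pi(p, lambda)."""
    foi = beta*I_en(p)                        # force of infection at equilibrium
    pi_nvJ = foi/(foi + nu)
    pi_nvA = nu/(foi + nu) * foi/(foi + mu)
    pi_vA = (1 - lam)*foi/(foi + mu)
    return pi_nvA + eps*pi_nvJ - pi_vA

# Interior Nash equilibria: coverages where pi(p) - r(1 - delta) changes sign.
grid = [i/1000 for i in range(1001)]
crossings = [p for p, q in zip(grid, grid[1:])
             if (pi(p) - r*(1 - delta)) * (pi(q) - r*(1 - delta)) < 0]
print(crossings)
```

By Proposition 6, a crossing at which \(\pi\) is decreasing in \(p\) corresponds to a stable (evolutionarily stable) interior equilibrium, while a crossing at which \(\pi\) is increasing is unstable.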
In Fig. 6a, the region where there is a social interest in increasing the vaccination, but there is an individual rejection of it, is a connected set. In Fig. 6c this region is disconnected. In the former case, it is possible that a decrease in the value of \(\lambda\) (something not included in our model, but that may happen due, for instance, to the introduction of new variants or simply because it's perceived as so by the population) causes a near-perfect vaccination scheme to collapse into a non-vaccination situation (i.e., with \(p\approx 0\)), due only to rational individual behavior. This is not possible when this region becomes disconnected, bringing extra stability to a near-optimal vaccination scheme. In Fig. 6 we indicate the values of \(\lambda_{\inf}\) (\(\lambda_{\sup}\)), the minimum efficacy such that there is an individual incentive to be vaccinated for \(p\) large enough (for any value of \(p\), respect.) and \(\lambda_{\inf,\sup}^{\mathrm{bi}}\) the minimum and maximum values to the existence of bi-stable Nash vaccination equilibrium. Depending on the costs of the vaccine for society and for individuals, their interests may not always agree: for a certain range of vaccine efficacy, it may be favorable for society to increase vaccination coverage, but due to the high cost of the vaccine, individuals choose not to be vaccinated, cf. Fig. 6a for \(\lambda\in(\lambda_{\inf},\lambda_{\inf}^{\mathrm{bi}})\). For a different set of parameters it may be favorable for individuals to vaccinate, due to the low vaccine cost, but not be beneficial for society, cf. Fig. 6b for \(\lambda\in(\lambda_{\inf}^{\mathrm{bi}},\lambda_{\inf})\). This situation can be changed by allowing the vaccination costs to be shared. For example, in Fig. 6c, \(\delta\) was chosen such that \(\lambda_{\sup}^{\mathrm{bi}}=\lambda_{\sup}\). In this case, individual vaccination is enhanced for lower vaccine efficacy, as compared to Fig. 6a. Moreover, all Nash equilibrium \(p>0\) are in the acceptable social cost region, avoiding individuals choosing to be vaccinated where their choice would increase social costs. The effects of sharing costs are further explored in Figs. 7 and 8. Fig. 7 shows a particular example, highlighted by the yellow arrows. Starting with a vaccination coverage of \(p=0.5\), by changing the value of \(\delta\), it is possible to create incentives such that rational individuals accept to be vaccinated, moving from the light-blue region to the light-red region, i.e. from point (a) Figure 7: Consider the space of parameter given by \(\delta\), the fraction of the vaccination cost supported by the society, and \(p\), the vaccination coverage. The light-blue and light-purple are the regions such that the level of vaccination has a social cost lower than of no vaccination, while the light-red and light-purple regions are such that a rational individual will choose to be vaccinated with a larger probability than the average individual. The light-purple region is the objective of the health authorities, where individuals freely decide to be vaccinated and the overall coverage is cost-efficient, i.e., \(\Phi<0\). We assume \(\varepsilon=0.15\) and \(\lambda=0.6\). Consider the example illustrated by the yellow arrows. If we start in (a) with \(p=0.5\) (indicated by a dotted vertical line) and \(\delta=0\), rational individuals will not vaccinate, but vaccination will benefit society. 
However, if the vaccination cost starts to be shared between society and beneficiary, increasing the value of \(\delta\) to above approximately 0.5 [point (b)], rational individuals will start to vaccinate. Vaccination coverage will increase until close to 1, inside the acceptable social-cost region, and the shared costs can then be relaxed to \(\delta=0.3\) [point (c)]. The chosen yellow points correspond to Fig. 8a, 8b and 8c, respectively. Figure 8: Parameter space \((p,\lambda)\) for different values of the fraction of the cost supported by the society: (a) \(\delta=0\), (b) \(\delta=0.5\), (c) \(\delta=0.3\). Light-blue and light-purple indicate the region where the level of vaccination has a social cost lower than that of no vaccination, while light-red and light-purple indicate the region in which rational individuals find incentives to have themselves vaccinated. The neutral social-cost threshold \(\Phi=0\) and the Nash equilibria are indicated by blue and red curves, respectively. Red solid lines indicate stable Nash equilibria, while dotted lines indicate unstable equilibria. The vertical black dotted line represents the level of vaccination \(p=0.5\) in (a) and (b) and \(p=1\) in (c). The horizontal black dotted line represents the vaccine efficacy \(\lambda=0.6\), corresponding to the examples studied in points (a), (b), and (c) in Fig. 7. to point (b). The natural dynamics will lead the population to a state in which the level of vaccination is high and the population is in a cost-efficient equilibrium. After that, it is possible to decrease vaccination incentives without decreasing vaccination coverage, i.e., moving towards point (c). Figs. 8a, 8b and 8c show the superposition of the individual and social interests for the different shared-cost scenarios corresponding to the three points depicted in Fig. 7, respectively. ## 4 Summary of conclusions This work starts from the fact that imperfect vaccination can be worse than no vaccination for a specific group of diseases, and discusses the implementation of specific strategies that induce rational individuals to be vaccinated in a socially cost-efficient way. This is an important issue, as, at least in developed democracies, forced vaccination is considered unacceptable, but positive and negative incentives to boost vaccination coverage are routinely used. In fact, many restrictions on non-vaccinated individuals were implemented during the COVID-19 pandemic, even in developed democracies, showing that these strategies are always being considered, at least in extreme cases. For many reasons, in particular the near-extinction of many vaccine-preventable diseases, vaccine-skeptical groups are present in almost every country. In this work, we introduced an age-structured model, considered different effects of the disease in adults and juveniles, considered imperfect vaccines, and studied socially efficient vaccine coverage, Nash equilibrium vaccination strategies, and, more importantly, the intersection between these two groups. Finally, we showed how sharing the cost between individuals and society can boost vaccination coverage, moving, if not to the extinction of the disease, at least to its long-term control.
## Acknowledgements This work is funded by national funds through the FCT - Fundacao para a Ciencia e a Tecnologia, I.P., under the scope of the projects UIDB/00297/2020 and UIDP/00297/2020 (Center for Mathematics and Applications - NOVA Math) and 2022.03091.PTDC _Mathematical Modelling of Multi-scale Control Systems: applications to human diseases_ (CoSysM3).
2307.06842
Federated Multi-Agent Deep Reinforcement Learning for Dynamic and Flexible 3D Operation of 5G Multi-MAP Networks
This paper addresses the efficient management of Mobile Access Points (MAPs), which are Unmanned Aerial Vehicles (UAV), in 5G networks. We propose a two-level hierarchical architecture, which dynamically reconfigures the network while considering Integrated Access-Backhaul (IAB) constraints. The high-layer decision process determines the number of MAPs through consensus, and we develop a joint optimization process to account for co-dependence in network self-management. In the low-layer, MAPs manage their placement using a double-attention based Deep Reinforcement Learning (DRL) model that encourages cooperation without retraining. To improve generalization and reduce complexity, we propose a federated mechanism for training and sharing one placement model for every MAP in the low-layer. Additionally, we jointly optimize the placement and backhaul connectivity of MAPs using a multi-objective reward function, considering the impact of varying MAP placement on wireless backhaul connectivity.
Esteban Catté, Mohamed Sana, Mickael Maman
2023-06-30T12:09:34Z
http://arxiv.org/abs/2307.06842v1
Federated Multi-Agent Deep Reinforcement Learning for Dynamic and Flexible 3D Operation of 5G Multi-MAP Networks ###### Abstract This paper addresses the efficient management of Mobile Access Points (MAPs), which are Unmanned Aerial Vehicles (UAV), in 5G networks. We propose a two-level hierarchical architecture, which dynamically reconfigures the network while considering Integrated Access-Backhaul (IAB) constraints. The high-layer decision process determines the number of MAPs through consensus, and we develop a joint optimization process to account for co-dependence in network self-management. In the low-layer, MAPs manage their placement using a double-attention based Deep Reinforcement Learning (DRL) model that encourages cooperation without retraining. To improve generalization and reduce complexity, we propose a federated mechanism for training and sharing one placement model for every MAP in the low-layer. Additionally, we jointly optimize the placement and backhaul connectivity of MAPs using a multi-objective reward function, considering the impact of varying MAP placement on wireless backhaul connectivity. Mobile Access Points, Integrated access backhaul, Multi-agent Deep Reinforcement Learning, Federated Learning, mmWave Communications, Dynamic 5G Networks. ## I Introduction 5G aims to offer fair opportunities for User Equipments (UE) regardless of their location or mobility via efficient management. Mobile Access Points (MAPs), which are Unmanned Aerial Vehicles (UAV), are gaining attention as a flexible infrastructure, useful for various applications [1]. MAPs can collaborate to form a Multi-MAP network, but there is limited research on managing them in dynamic networks with user mobility, interference, varying traffic, and fluctuating MAP numbers. Our objective is to efficiently manage multiple MAPs in terms of their number, placement, and trajectory while considering dynamic constraints over a longer time scale than the current state-of-the-art approaches. Previous studies have explored different approaches leveraging the 3-dimensional (3D) mobility of MAPs, but often without accounting for all the dynamic network constraints simultaneously. For instance, in [2], the authors proposed an iterative optimization method for MAP placement based on user mobility. Another study by Ghanavi et al. [3] extended the scenario to multiple MAPs managed by a reinforcement Q-learning algorithm. Wang et al. [4] introduced a virtual forces algorithm based on statistical user distributions for computing network cartography. It is worth noting that user distribution can impact MAP numbers and deployment positions, even when the number of UEs remains constant. These diverse solutions demonstrate the variety of MAP management techniques, highlighting the need for iterative approaches to efficiently handle dynamic network constraints. However, ensuring long-term performance in a constantly changing network remains a challenge. The aforementioned papers highlight the potential of using a greedy MAPs deployment approach to determine their optimal number. For instance, in [5, 6, 7, 8, 9], proposed solutions adjust the number of deployed MAPs iteratively to meet network constraints. However, this approach may suffer from convergence delays and does not account for network evolution. In contrast, our study proposes a hierarchical architecture that dynamically determines the number of MAPs for user coverage, independent of the placement procedure. 
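Before the placement step, the high-layer agents must agree on how many MAPs to deploy. A minimal sketch of such a consensus step is given below; the per-MAP demand figures, the per-MAP capacity, and the Metropolis-weighted averaging rule are illustrative assumptions and not the decision process actually used by the proposed architecture.

```python
import numpy as np

def consensus_fleet_size(local_demand, capacity_per_map, adjacency, n_rounds=50):
    """Each MAP starts from its locally observed demand (e.g. offered traffic in its
    area). Average consensus over the MAP-to-MAP links drives every agent to the
    network-wide mean demand, from which all agents compute the same fleet size."""
    x = np.asarray(local_demand, dtype=float)
    n = len(x)
    deg = adjacency.sum(axis=1)
    # Metropolis weights are doubly stochastic, so the iteration preserves the average
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    for _ in range(n_rounds):
        x = W @ x
    total_demand = n * x[0]            # every entry now approximates the mean demand
    return int(np.ceil(total_demand / capacity_per_map))

# toy usage: 4 MAPs on a line topology, each observing a different offered load (Mbps)
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(consensus_fleet_size([120, 300, 80, 210], capacity_per_map=150, adjacency=adj))
```

Because the averaging matrix is doubly stochastic, all agents converge to the same estimate, so the fleet size can be agreed upon without a central controller.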
Our architecture aims to strike a balance between cost and coverage by determining both the number and the positions of MAPs, as these aspects affect each other. Obviously, MAP management must adapt to changing network conditions, including trajectory adjustments. In [10], the authors used successive convex optimization to optimize MAP trajectories and UE data rates under mobility constraints. However, a significant breakthrough in MAP trajectory optimization has been achieved with Multi-Agent Deep Reinforcement Learning (MADRL) models. In [11] and [12], the authors proposed target MADRL models based on the actor-critic architecture to handle multiple factors. The authors of [13] proposed a MADRL approach with pre-deployed MAPs on UE clusters. This approach takes advantage of a low-complexity deployment algorithm and the ability of the MADRL model to adjust positions in complex environments. Our paper presents a problem formulation and proposes a two-level hierarchical architecture based on joint optimization for a dynamic 5G network while considering Integrated Access-Backhaul (IAB) constraints. The decision process is scalable and distributed, and it determines the number of MAPs through consensus in the high-layer. In the low-layer, MAPs manage their placement using our previously proposed dual-attention based DRL model [14], which encourages cooperation without any a priori information or retraining procedure. To increase the generalization ability of the learned model, reduce complexity and improve performance in novel scenarios, we propose a federated mechanism that involves training and sharing one placement model for every MAP, as suggested in [15]. Additionally, we aim to jointly optimize the backhaul connectivity of MAPs using a multi-objective reward function, considering the impact of varying MAP placement on the wireless backhaul link, as highlighted in previous studies [16] and [17].
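As a rough sketch of this federated mechanism, the fragment below shows one FedAvg-style aggregation round in which the MAPs' locally trained copies of the placement model are averaged into a single shared model; the stand-in policy network, the experience-based weights, and the weighted-sum reward are assumptions made for illustration rather than details of the actual dual-attention model.

```python
import copy
import torch.nn as nn

class PlacementPolicy(nn.Module):
    """Stand-in for the shared placement model (a small MLP, not the dual-attention agent)."""
    def __init__(self, obs_dim=16, n_actions=7):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, obs):
        return self.net(obs)

def multi_objective_reward(access_rate, backhaul_rate, alpha=0.7):
    """Assumed weighted-sum reward trading access throughput against the quality of
    the wireless backhaul link, both of which depend on the MAP's position."""
    return alpha * access_rate + (1.0 - alpha) * backhaul_rate

def federated_average(local_models, weights):
    """One federation round: the shared model becomes the weighted average of the
    parameters trained locally by every MAP."""
    averaged = copy.deepcopy(local_models[0].state_dict())
    total = float(sum(weights))
    for key in averaged:
        averaged[key] = sum(w * m.state_dict()[key]
                            for w, m in zip(weights, local_models)) / total
    shared = PlacementPolicy()
    shared.load_state_dict(averaged)
    return shared

# toy round: three MAPs, weighted by how much local experience each one collected
maps = [PlacementPolicy() for _ in range(3)]
shared_model = federated_average(maps, weights=[120, 80, 200])
```

Weighting the average by the amount of locally collected experience is one common choice; after each round the shared model is redistributed to every MAP, so a single placement model serves the whole fleet without retraining from scratch.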
2309.10232
Fully parallel optical matrix-matrix multiplication
In recent years, with the rapid development of electro-optic modulators, optical computing has become a potential excellent candidate for various computing tasks. New structures and devices for optical computing are emerging one after another, but the computing method is still the optical vector-matrix multiplication method proposed decades ago. Here, we propose a novel optical computing paradigm that can implement the matrix-matrix multiplication operation in parallel, which can directly replace existing vector-matrix multiplication, greatly improving computational efficiency. This preprint presents the theoretical analysis; we will supplement experimental results and conclusions in the future.
Yufeng Zhang, Hao Yan, Kaizhi Wang
2023-09-19T01:07:12Z
http://arxiv.org/abs/2309.10232v1
# Fully parallel optical matrix-matrix multiplication

###### Abstract

In recent years, with the rapid development of electro-optic modulators, optical computing has become a potential excellent candidate for various computing tasks. New structures and devices for optical computing are emerging one after another, but the computing method is still the optical vector-matrix multiplication method proposed decades ago. Here, we propose a novel optical computing paradigm that can implement the matrix-matrix multiplication operation in parallel, which can directly replace existing vector-matrix multiplication, greatly improving computational efficiency. This preprint presents the theoretical analysis; we will supplement experimental results and conclusions in the future.

## 1 Introduction

Matrix-matrix multiplication (MMM) is one of the core operations in computing and processing applications, widely used in signal processing, image processing, and deep learning. Owing to its \(O(N^{3})\) time complexity, MMM, which is composed of multiple vector-matrix multiplications (VMM), is the most time-consuming operation in many computational tasks. Given the low performance of early computer technology, light emerged as one of the ideal media for replacing digital computing thanks to its excellent characteristics of low latency and low power consumption. Since Dr. Goodman innovatively proposed the optical vector-matrix multiplication (OVMM) prototype [1], many researchers have utilized methods such as time multiplexing, wavelength multiplexing, and light-source multiplexing to achieve more effective OVMM [2, 3, 4, 5, 6, 7, 8, 9, 10]. In addition to free space, Dr. Reck proposed a computational architecture based on Mach-Zehnder interferometers, verifying the possibility of calculations based on planar waveguides [11]. However, with the rapid development of integrated circuits and silicon-based chip technology, optical computing, whose structures are difficult to reconfigure and which cannot update data in real time, had no advantage over digital computing. Therefore, the academic community paid more attention to the field of optical communication, resulting in a long-term stagnation of the development of optical computing. In the 2010s, with the rise of technologies such as artificial intelligence and big data, massive MMM operations brought a huge computational burden, and silicon-based chips were unable to continue to meet computing needs. At the same time, with the development of reconfigurable spatial light modulators (SLM) and photonic integrated circuits [12, 13, 14], optical computing was once again recognized as a potential high-performance computing solution [15, 16, 17]. Many researchers have implemented various forms of optical computing systems using free space and planar waveguides and have used them for training or inference of optical neural networks (ONNs) [18, 19, 20]. However, these systems still use the decades-old OVMM methods, simply replacing traditional optical devices with higher-speed, larger-scale, and more reconfigurable advanced devices: for example, using lenslet arrays or Dammann gratings instead of traditional multiple light sources to implement convolutional neural networks [31, 32, 33], using waveshapers and optical frequency combs instead of traditional four-wave mixing to achieve wavelength-multiplexed computing [21, 23, 34, 35], and using SLMs instead of traditional LED arrays and phase masks to achieve OVMMs [18, 36, 37, 28].
Although the above works have greatly developed the equipment and combination strategies of optical computing, and effectively improved the actual performance of traditional optical computing methods, they have not fundamentally proposed more efficient optical computing principles. Here, we innovatively proposed a fully parallelized optical matrix-matrix multiplication (POMMM) paradigm, which fundamentally changed Dr. Goodman's OVMM method. The difference from previous methods is that our method does not require any pre-coding or pre-processing of matrixs, and the computing operation is completed through the propagation of light with single source and single wavelength. This method has a simple and exquisite architecture, making it a universal computing method that is very suitable for accelerating ONNs and other optical computing operations. POMMM adds an additional computational dimension to the ONN, enabling parallel training and inference of multiple samples and neurons. In addition to OVMM and 2-D convolution operations, this method has the potential to become another new parallel computing paradigm for Fourier optics. ## 2 Principle ### Architecture of POMMM For MMM, assuming matrix \(A\) (N rows, M columns) and matrix \(B\) (M rows, N columns), then matrix \(C=AB\) (N rows, N columns), the value \(c_{nm}\) of the n-th row and m-th column of \(C\) can be expressed as: \[c_{nm}=\sum_{i=1}^{M}a_{ni}b_{im}. \tag{1}\] Based on the above equation, we summarize the core steps of POMMM as (1) parallel implementation of row and column multiplication and addition (MAC) operations, and (2) moving the results to different positions, and design the architecture as shown in Fig.1. The matrix \(A\) is modulated to the amplitude of wavefront by an amplitude spatial light modulator (ASLM), and is imaged to the surface of phase spatial light modulator (PSLM) by a 4f system. PSLM will modulate a linearly changing phase along the x-direction (m) for each row of the matrix, and change its rate \(K(n)\) along the y-direction (n) direction: \(2\pi K(n)m\). So the modulated wavefront can be represented by complex exponents, as shown in matrix \(A^{\prime}\) in Fig.1. Then, the wavefront is imaged along the x-direction on the ASLM2 surface through a cylindrical lens, and the y-direction is the result of free diffraction. Therefore, every point on the ASLM2 surface is a complex sum of matrix \(A^{\prime}\) along the y-direction, while the relationship in the x-direction remains unchanged, as shown in matrix \(A^{\prime\prime}\) in Fig.1. Transpose matrix \(B\) and flip it along the y-direction to matrix \(B^{TF}\), which is modulated to the wavefront through ASLM2 to achieve dot product (\(\odot\)) with the corresponding position of matrix \(A^{\prime\prime}\): \[b_{nm}^{TF}\odot a_{nm}^{\prime\prime}=b_{nm}^{TF}\sum_{i=1}^{N}a_{im}e^{j2\pi K (i)m}. \tag{2}\] From the perspective of spatial frequency domain, the above equation indicates that each row of the matrix contains N spatial frequency components \(K(1)\)\(K(N)\), and the amplitude of the i-th frequency component on position (n,m) is the dot product of \(a_{nm}\) and \(b_{nm}^{TF}\). By combining a 2-D lens with a x-direction cylindrical lens, imaging along y-direction and focusing (optical Fourier transform) along x-direction can be achieved, which is similar to traditional OVMM. 
According to the principle of the optical Fourier transform, when focusing along the x-direction, the N frequency components correspond to N focal points along the x-direction, and the intensity of the n-th focal point is related to the summed amplitude of M points: \(\sum_{i=1}^{M}a_{in}b_{ni}^{TF}\). Fig. 2 illustrates this process more vividly in comparison with traditional OVMM. An \(N\times N\)-sized matrix is thus composed of the N frequency components in the N rows captured by the qCMOS, which is the transposition of matrix \(C\).

Figure 1: Basic principle. The optical architecture (left panel) and the corresponding matrix operation (right panel). ASLM, amplitude spatial light modulator; PSLM, phase spatial light modulator; qCMOS, quantitative Complementary Metal Oxide Semiconductor camera.

Figure 2: Comparison between POMMM and OVMM. (a) Principle of OVMM. (b) Principle of POMMM; different colors represent light modulated with different rates \(K(n)\).

### ONN based on POMMM

The most time-consuming parts of neural network training and inference are the convolutional layers and the fully connected layers, which are the parts that most ONNs attempt to accelerate. The principles of the convolutional layer and the fully connected layer are both based on VMM. As shown in Fig. 3, the convolutional layer takes the inner product of the input slices with the convolutional kernels to form a new sample, while the fully connected layer performs a VMM between the samples and the weight matrix (\(\omega_{nm}\) and \(\omega_{ij}\)). For convolutional layers, POMMM can process multiple convolutional kernels (\(k_{m1},k_{m2},\ldots\)) in parallel, which is very effective for multi-feature extraction. A fully connected layer based on POMMM can process multiple samples (\(a_{1m},a_{2m},\ldots\)) in parallel, greatly improving computational speed. By comparison, simply replacing the existing OVMM with our POMMM achieves parallel processing of convolutional and fully connected layers, greatly improving the training and inference of ONNs.

Figure 3: ONN based on POMMM. The convolutional layer (upper panel) and the fully connected layer (lower panel). The left part is based on traditional OVMM, while the right part is based on our POMMM.
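A minimal numerical sketch of the POMMM read-out described above, written in plain NumPy, reproduces the claim that the camera plane holds the transposed product. The oversampling factor \(P\) (sub-pixels per matrix column) and the carrier choice \(K(i)=i/P\) are assumptions made here so that the discrete Fourier transform separates the carriers exactly; any \(P\geq N\) works in this discrete model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 6                    # A is N x M, B is M x N, so C = A @ B is N x N
P = 8                          # sub-pixels per matrix column (carriers separate if N <= P)
A = rng.random((N, M))
B = rng.random((M, N))

L = M * P                      # samples along the x-direction
x = np.arange(L)
col = x // P                   # matrix column that each sample belongs to
K = np.arange(N) / P           # carrier spatial frequency K(i) for row i of A

# ASLM1 + PSLM: row i of A rides on carrier K(i); free diffraction along y sums the rows
carriers = np.exp(2j * np.pi * np.outer(K, x))      # N x L
a_pp = (A[:, col] * carriers).sum(axis=0)           # the summed field "A''", length L

# ASLM2: element-wise modulation by the rows of B^T (the y-flip is a physical detail)
field = B.T[:, col] * a_pp                          # N x L, one row per detector row

# Cylindrical lens: Fourier transform along x, sampled at the N carrier frequencies
analysis = np.exp(-2j * np.pi * np.outer(K, x))     # N x L
readout = (field @ analysis.T) / P                  # N x N focal-spot amplitudes

print(np.allclose(readout, (A @ B).T))              # True: the camera sees C^T
```

Within each column block the carrier differences complete an integer number of cycles, so the cross terms cancel and the \(q\)-th focal spot on detector row \(n\) collapses to \(\sum_{m}a_{qm}b_{mn}\), i.e., the \((q,n)\) entry of \(C\).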
2309.09763
Near-optimal Cloud-Network Integrated Resource Allocation for Latency-Sensitive B5G
Nowadays, while the demand for capacity continues to expand, the blossoming of Internet of Everything is bringing in a paradigm shift to new perceptions of communication networks, ushering in a plethora of totally unique services. To provide these services, Virtual Network Functions (VNFs) must be established and reachable by end-users, which will generate and consume massive volumes of data that must be processed locally for service responsiveness and scalability. For this to be realized, a solid cloud-network Integrated infrastructure is a necessity, and since cloud and network domains would be diverse in terms of characteristics but limited in terms of capability, communication and computing resources should be jointly controlled to unleash its full potential. Although several innovative methods have been proposed to allocate the resources, most of them either ignored network resources or relaxed the network as a simple graph, which are not applicable to Beyond 5G because of its dynamism and stringent QoS requirements. This paper fills in the gap by studying the joint problem of communication and computing resource allocation, dubbed CCRA, including VNF placement and assignment, traffic prioritization, and path selection considering capacity constraints as well as link and queuing delays, with the goal of minimizing overall cost. We formulate the problem as a non-linear programming model, and propose two approaches, dubbed B\&B-CCRA and WF-CCRA respectively, based on the Branch \& Bound and Water-Filling algorithms. Numerical simulations show that B\&B-CCRA can solve the problem optimally, whereas WF-CCRA can provide near-optimal solutions in significantly less time.
Masoud Shokrnezhad, Tarik Taleb
2023-09-18T13:43:55Z
http://arxiv.org/abs/2309.09763v1
# Near-optimal Cloud-Network Integrated Resource Allocation for Latency-Sensitive B5G ###### Abstract Nowadays, while the demand for capacity continues to expand, the blossoming of Internet of Everything is bringing in a paradigm shift to new perceptions of communication networks, ushering in a plethora of totally unique services. To provide these services, Virtual Network Functions (VNFs) must be established and reachable by end-users, which will generate and consume massive volumes of data that must be processed locally for service responsiveness and scalability. For this to be realized, a solid cloud-network Integrated infrastructure is a necessity, and since cloud and network domains would be diverse in terms of characteristics but limited in terms of capability, communication and computing resources should be jointly controlled to unleash its full potential. Although several innovative methods have been proposed to allocate the resources, most of them either ignored network resources or relaxed the network as a simple graph, which are not applicable to Beyond 5G because of its dynamism and stringent QoS requirements. This paper fills in the gap by studying the joint problem of communication and computing resource allocation, dubbed CCRA, including VNF placement and assignment, traffic prioritization, and path selection considering capacity constraints as well as link and queuing delays, with the goal of minimizing overall cost. We formulate the problem as a non-linear programming model, and propose two approaches, dubbed B&B-CCRA and WF-CCRA respectively, based on the Branch & Bound and Water-Filling algorithms. Numerical simulations show that B&B-CCRA can solve the problem optimally, whereas WF-CCRA can provide near-optimal solutions in significantly less time. Beyond 5G, 6G, Computing First Networking, Cloud-Network Integration, Resource Allocation, Path Selection, Traffic Prioritization, VNF Placement, and Optimization Theory. ## I Introduction As of today, the major reason for the evolution of networks has been a surge in data flow, which has resulted in a continuous 1000x gain in capacity. While this demand for capacity will continue to expand, the blossoming of Internet of Everything is forging a paradigm shift to new-born perceptions bringing a range of entirely novel services with rigorous deterministic criteria, such as connected robotics, smart healthcare, autonomous transportation, and extended reality [1]. The provision of these services will be accomplished by establishing several functional components, namely Virtual Network Functions (VNFs), which will generate and consume vast amounts of data that must be processed locally for service responsiveness and scalability. A distributed cloud architecture is critical in these situations [2], which could be realized through a solid cloud-network integrated infrastructure built of distinct domains in Beyond 5G (B5G) [3]. These domains can be distinguished by the technology employed, including radio access, transport, and core networks, as well as edge, access, aggregation, regional, and central clouds. Additionally, these resources can be virtualized through the use of technologies such as Network Function Virtualization (NFV), which allows for the creation of isolated virtual entities atop this physical infrastructure. 
Since distributed cloud and network domains would be diverse in terms of characteristics but limited in terms of capability, communication and computing resources should be jointly allocated, prioritized, and scheduled to ensure maximum Quality of Service (QoS) satisfaction while maximizing resource sharing and maintaining the system in a deterministic state, resulting in energy savings as one of the most significant examples of cost minimization objectives [4]. The joint problem of resource allocation in cloud-network integrated infrastructures has been extensively studied in the literature. In [5], the authors examined the VNF placement problem as an Integer Linear Programming (ILP) model that assures the minimal End-to-End (E2E) latency while maintaining QoS requirements by not exceeding an acceptable latency violation limit. They suggested an approach based on neural networks and demonstrated that it can produce near-optimal solutions in a timely manner. The authors in [6] investigated the same problem and proposed a hierarchical reinforcement learning method that includes local level prediction modules as well as a global learning component. They demonstrated that their method significantly outperforms conventional approaches. The similar topic was investigated in [7], with the goal of maximizing the number of accepted requests, and a Markov decision process design was presented. They asserted that the proposed method provides efficient placements. Although innovative approaches are presented in [5, 6, 7] to address computing resource constraints, the network is solely viewed as a pipeline in these papers with no cognitive ability to the cloud domains. However, there are also some studies in the literature that have been concentrating on communication and computing resources jointly. In [8], the joint problem of VNF placement and path selection was investigated to better utilize the network resources, and a heuristic approach was proposed to tackle it. The authors of [9] and [10] addressed the problem of VNF placement with the goal of maximizing the sum rate of accepted requests. In [9], an optimization solver is used to find the optimal solution, while the solution approach offered in [10] is a heuristic strategy. The authors of [11] formulated the latency-optimal placement of functions as an ILP problem and proposed a genetic meta-heuristic algorithm to solve it. In [12], to reduce the cost of computing resources, the problem of VNF placement and scheduling was addressed, and a latency-aware heuristic algorithm was devised. The methods proposed in the cited studies are clearly effective in addressing the resource allocation problem. They cannot, however, be used in B5G systems. Because of the stringent QoS requirements in the delay-reliability-rate space [13], the large number of concurrent services and requests, and the ever-changing dynamics of both infrastructure and end-user service consumption behavior across time and space, every detail of communication and computing resources should be decided and controlled towards achieving a deterministic B5G system [3]. In [8], latency-related constraints and requirements are simply disregarded. Despite the fact that delay is addressed in the rest of these studies, they simplified it to be a link feature, and queuing delay is completely eliminated. Furthermore, path selection is ignored in [9, 10, 11], and cost optimization is overlooked in [10, 11]. 
This paper fills in the gap in the existing works by studying the joint problem of allocating communication and computing resources, including VNF placement and assignment, traffic prioritization, and path selection while taking into account capacity constraints as well as link and queuing delays, with the goal of minimizing overall cost. Our main contributions in this paper are as follows: * Formulating the joint resource allocation problem of the cloud-network integrated infrastructure as a Mixed Integer Non-Linear Programming (MINLP) problem. * Proposing a method based on Branch & Bound (B&B) algorithm to find the optimal solution of the problem. * Devising a heuristic approach based on the Water-Filling (WF) algorithm in order to identify near-optimal solutions to the problem. The reminder of this paper is organized as follows. Section II introduces the system model. Formulating the resource allocation problem is provided in Section III. Next, the B&B and heuristic approaches are provided in Sections IV and V, respectively. Numerical results are illustrated and analyzed in Section VI, followed by concluding remarks in Section VII. ## II System Model In the following, we describe the main components of the system studied in this paper: infrastructure, services, and requests. The system model is also depicted in Fig 1. ### _Infrastructure Model_ The considered infrastructure is composed of the access and core network domains (non-radio domains) consisting of \(\mathcal{V}\) nodes, \(\mathcal{L}\) links, and \(\mathcal{P}\) paths denoted by \(\mathcal{G}=\langle\boldsymbol{\mathcal{V}},\boldsymbol{\mathcal{L}}, \boldsymbol{\mathcal{P}}\rangle\). \(\boldsymbol{\mathcal{V}}=\{1,2,...,\mathcal{V}\}\) is the set of nodes. \(\boldsymbol{\mathcal{L}}\subset\{l:(v,v^{\prime})|v,v^{\prime}\in\boldsymbol{ \mathcal{V}}\}\) indicates the set of links, and for each \(l\), its bandwidth capacity is constrained by \(\widehat{B_{l}}\), and it costs \(\Xi_{l}\) per capacity unit. \(\boldsymbol{\mathcal{P}}=\{p:(\vdash_{p},\dashv_{p})|p\subset\boldsymbol{ \mathcal{L}}\}\) denotes the set of all paths in the network, where \(\vdash_{p}\) and \(\dashv_{p}\) are the head and tail nodes of path \(p\), and \(l^{\prime}_{l,p}\) is a binary constant equal to \(1\) if path \(p\) contains link \(l\). Each node in the network is an IEEE 802.1 Time-Sensitive Networking (TSN) device comprising an IEEE 802.1 Qcr Asynchronous Traffic Shaper (ATS) at each of their egress ports. An ATS consists of two hierarchical queuing steps [14]: interleaved shaping, and scheduling through a set of prioritized queues. We consider \(\boldsymbol{\mathcal{K}}=\{1,2,...,\mathcal{K}\}\) as the set of priority levels and assume that \(k_{r}\) is the assigned priority of the traffic associated with request \(r\), and the size of the shaping queues for priority level \(k\) is the same and equal to \(\widehat{\mathcal{T}_{k}}\). Note that lower levels have higher priorities. Moreover, each node \(v\) is equipped with computing resources as one of the prospective hosts to deploy service VNFs and limited to a predefined capacity threshold \(\widehat{\zeta_{v}}\) which costs \(\Psi_{v}\) per capacity unit. It is worth mentioning that the network is divided into a number of tiers, with nodes distributed across them so that the entry nodes of requests are located in tier \(0\). The higher the tier index, the greater the capacity of the associated nodes, and the lower their cost. 
In other words, the nodes closest to end-users (or to the nodes that serve as entry points) are provisioned with high-cost, limited-capacity computing facilities, while low-cost, high-capacity depots are deployed in the core.

### _Service Model_

The set of services available to order is denoted by \(\boldsymbol{\mathcal{S}}=\{1,2,...,\mathcal{S}\}\), where \(\mathcal{S}\) indicates the number of services. If an end-user requests a service, its VNF has to be replicated in the network-embedded computing resources. Each VNF is empowered to serve more than one request, and \(\widehat{\mathcal{C}_{s}}\) indicates the maximum capacity of each VNF of service \(s\).

### _Request Model_

The set of requests asking for services is represented by \(\boldsymbol{\mathcal{R}}=\{1,2,...,\mathcal{R}\}\), where \(\mathcal{R}\) is the number of requests. Each request \(r\) arrives in the network through node \(v_{r}\), one of the nodes through which the infrastructure connects to the radio access network, and requests a service \(s_{r}\), specifying its minimum required service capacity, network bandwidth, and maximum tolerable delay, indicated by \(\widetilde{\mathcal{C}_{r}}\), \(\widetilde{\mathcal{B}_{r}}\), and \(\widetilde{\mathcal{D}_{r}}\), respectively. In addition, \(\widetilde{\mathcal{T}_{r}}\) and \(\widehat{\mathcal{H}_{r}}\), denoting the burstiness of traffic and the largest packet size for request \(r\), are also assumed to be known a priori. Utilizing historical data along with predictive data analytics methods is one of the viable options for obtaining such accurate and realistic statistical estimates of traffic.

Fig. 1: System model

## III Problem Definition

In this section, the joint problem of VNF placement and assignment, traffic prioritization, and path selection is described. The constraints and objective function are formulated as a MINLP problem in what follows, and the problem is stated at the end of the section.

### _VNF Placement and Assignment Constraints_

To begin, each request must be assigned a single node as its service location (C1). This assignment is acceptable if the assigned node hosts a VNF for the requested service (C2). Following that, it must be ensured that the assigned requests do not violate the capacity constraints of VNFs and nodes (C3 and C4). Note that the capacity constraints are intrinsically linked to avoiding congestion and ensuring the system's reliability. The formulation is as follows:

\[\sum_{\boldsymbol{\mathcal{V}}}g_{r,v}=1,\forall r\in\boldsymbol{\mathcal{R}},\] (C1)

\[g_{r,v}\leq z_{s_{r},v},\forall r\in\boldsymbol{\mathcal{R}},\forall v\in\boldsymbol{\mathcal{V}},\] (C2)

\[\sum_{\{r|r\in\boldsymbol{\mathcal{R}},s_{r}=s\}}\widetilde{\mathcal{C}_{r}}g_{r,v}\leq\widehat{\mathcal{C}_{s}},\forall v\in\boldsymbol{\mathcal{V}},\forall s\in\boldsymbol{\mathcal{S}},\] (C3)

\[\sum_{\boldsymbol{\mathcal{S}}}\widehat{\mathcal{C}_{s}}z_{s,v}\leq\widehat{\zeta_{v}},\forall v\in\boldsymbol{\mathcal{V}},\] (C4)

where \(g_{r,v}\) and \(z_{s,v}\) are binary variables. \(g_{r,v}\) is \(1\) if node \(v\) is selected as the service node of request \(r\), and \(z_{s,v}\) is \(1\) if service \(s\) is replicated on node \(v\).

### _Traffic Prioritization and Path Selection Constraints_

First, we must ensure that each request is assigned to exactly one priority level (C5). Then, the request and reply paths of each request are determined (C6 and C7). For each request, a single request path is chosen that starts at the request's entry node and ends at the request's VNF node.
The reply path follows the same logic but in reverse order. The following two constraints guarantee that the two paths are chosen on the priority level assigned to each request (C8 and C9). Finally, the constraints maintaining the maximum capacity of links and shaping queues are enforced (C10 and C11). With C10, the sum of the required bandwidth for all requests whose request or reply path, or both, contains link \(l\) is guaranteed to be less than or equal to the link's capacity, and in C11, the capacity of shaping queues is guaranteed in the same way for each link and each priority level. The set includes:

\[\sum_{\boldsymbol{\mathcal{K}}}\theta_{r,k}=1,\forall r\in\boldsymbol{\mathcal{R}},\] (C5)

### _Problem_

Considering the constraints and objective function, the problem of Communication and Computing Resource Allocation (CCRA) is:

\[\text{CCRA: }min\text{ OF }s.t.\text{ C1 - C15.} \tag{1}\]

## IV B&B-Ccra

The problem specified in (1) is NP-hard (the multidimensional knapsack problem [15] can be reduced to it, as detailed in [16]), and finding its optimal solution in polynomial time is mathematically intractable. One potential strategy for addressing such a problem is to restrict its solution space using the B&B algorithm, which relaxes and solves the problem to obtain lower bounds, and then improves the bounds using mathematical techniques to reach acceptable solutions. The method is described in Algorithm 1. In this algorithm, the solution space is discovered by maintaining an unexplored candidate list \(\boldsymbol{\mathcal{N}}=\{N_{t}|t\geq 1\}\), where each node \(N_{t}\) contains a problem, denoted by \(\Phi_{t}\), and \(t\) is the iteration number. This list only contains the root candidate \(N_{1}\) at the beginning, with the primary problem to be solved. To reduce its enormous computational complexity, instead of directly applying the B&B algorithm to CCRA, we consider its integer linear transformation as the problem of \(N_{1}\). CCRA comprises non-linear constraints C13 and C14. To linearize C13, the summations and max function with variable boundaries should be converted to a linear form. A simple, effective technique is to replace each term with an approximated upper bound. Since the aggregated traffic burstiness is bounded by \(\widehat{\mathcal{T}_{k}}\) for each priority level \(k\) in C11, \(\sum_{\boldsymbol{\mathcal{R}}_{i}}\widehat{\mathcal{T}_{r^{\prime}}}\) can be replaced by the sum of this bound for all priority levels greater than or equal to \(k\), that is \(\sum_{\{k^{\prime}|k^{\prime}\leq k\}}\widehat{\mathcal{T}_{k^{\prime}}}\). In a similar way, we define a new constraint (C13\({}^{\prime}\)) for the aggregated bandwidth allowed on priority level \(k\) over link \(l\), dubbed \(\widehat{f_{l,k}}\), and replace the sum of allocated bandwidths with \(\sum_{\{k^{\prime}|k^{\prime}<k\}}\widehat{f_{l,k^{\prime}}}\). Besides, the maximum packet size for a particular subset of requests can be replaced by the maximum permitted packet size in the network, denoted by \(\widehat{\mathcal{H}}\).
Therefore, the followings define the linear transformation of C13: \[\begin{array}{l}\sum_{\boldsymbol{\mathcal{R}}}\widehat{\mathcal{R}_{r}} \sum_{\boldsymbol{\mathcal{P}}}t^{\prime}_{l,p}(\widehat{f_{r,p,k}}+\widehat{ f_{r,p,k}})\leq\widehat{f_{l,k}},\forall k\in\boldsymbol{\mathcal{K}},\forall l \in\boldsymbol{\mathcal{L}},\\ \widehat{D_{r,k}},l=\frac{\sum_{\{k^{\prime}|k^{\prime}\leq k\}}\widehat{ \mathcal{T}_{k^{\prime}}}+\widehat{\mathcal{H}}}{\widehat{B_{l}}-\sum_{\{k^{ \prime}|k^{\prime}<k\}}\widehat{f_{l,k^{\prime}}}},\forall r\in\boldsymbol{ \mathcal{R}},\forall k\in\boldsymbol{\mathcal{K}},\\ \forall l\in\boldsymbol{\mathcal{L}},\end{array}\] (C13\({}^{\prime\prime}\)) where \(\widehat{D_{r,k,l}}\) is the delay upper bound for request \(r\) on link \(l\) with priority level \(k\). Since \(D_{r,s_{r}}\) is linear, C14 can be linearized by substituting the actual delay for the upper bound derived in C13\({}^{\prime\prime}\), and the new constraint for E2E delay is: \[\begin{array}{l}D_{r}=\sum_{\boldsymbol{\mathcal{P}},\boldsymbol{\mathcal{L }},\boldsymbol{\mathcal{K}}}\widehat{D_{r,k}}t^{\prime}_{l,p}(\widehat{f_{r,p,k }}+\widehat{f_{r,p,k}})+D_{r,s_{r}},\forall r\in\boldsymbol{\mathcal{R}}.\end{array}\] (C14\({}^{\prime}\)) Given this, the linear transformation of CCRA, dubbed LiCCRA, is as follows: \[\text{LiCCRA: }min\text{ OF }s.t.\text{ C1 - C12, C13}^{\prime},\text{ C13}^{\prime\prime},\text{ C14}^{\prime},\text{ C15.} \tag{2}\] Now, with LiCCRA as \(\Phi_{1}\), each iteration of the B&B algorithm begins with the selection and removal of a candidate from the unexplored list. Then, the problem of this candidate is naturally relaxed and solved, i.e., all the integer variables (\(\in\{0,1\}\)) are replaced with their continues equivalents restricted by the box constraint (\(\in[0,1]\)), and the relaxed problem is solved using a Linear Programming (LP) solver to obtain the solution of the relaxed problem \((\boldsymbol{\mu}_{t}^{\star},\boldsymbol{\lambda}_{t}^{\star})\) and the optimal objective value \(\phi_{t}^{\star}\), where \(\boldsymbol{\mu}\) is the relaxed integer variables set, and \(\boldsymbol{\lambda}\) is the set of continuous variables. Next, if all relaxed variables have integer values, the obtained objective in this iteration is considered to update the best explored weight solution. Otherwise, a variable index \(j\) is selected such that \(\boldsymbol{\mu}_{t}^{\star}[j]\) is fractional, and the feasible constraints set \(\pi_{t}\) is divided into two parts as \(\pi_{t}^{1}=\pi_{t}\cap\{\boldsymbol{\mu}_{t}[j]\leq\left\lfloor\boldsymbol{ \mu}_{t}^{\star}[j]\right\rfloor\}\) and \(\pi_{t}^{2}=\pi_{t}\cap\{\boldsymbol{\mu}_{t}[j]\geq\left\lceil\boldsymbol{ \mu}_{t}^{\star}[j]\right\rceil\}\). Then, two problems are formed as \(\Phi_{t}^{1}=min\) OF _s.t._\(\pi_{t}^{1}\) and \(\Phi_{t}^{2}=min\) OF _s.t._\(\pi_{t}^{2}\). Now, two child nodes \(N_{t}^{1}\) and \(N_{t}^{2}\), whose problems are \(\Phi_{t}^{1}\) and \(\Phi_{t}^{2}\) respectively, are put into the unexplored list. The B&B algorithm is iterated until \(\boldsymbol{\mathcal{N}}\) is empty. Alternatively, we can run this algorithm until a desired solving time is reached or an acceptable objective value is acquired. The prime advantage of this algorithm is that it produces at least a lower bound even when the solving time is limited. As a result, it may be used to establish baselines allowing for the evaluation of alternative approaches. 
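For readers who want to see the mechanics summarised above in executable form, the fragment below is a generic LP-relaxation branch-and-bound loop over binary variables. It is only a sketch in the spirit of Algorithm 1: the node-selection and branching rules of the actual method are not reproduced, the LP solver is `scipy.optimize.linprog`, and the small cost vector and capacity-style constraints are purely illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, tol=1e-6):
    """Minimise c @ x subject to A_ub @ x <= b_ub with x in {0, 1}^n.
    Every node carries box bounds; its LP relaxation supplies the lower bound."""
    n = len(c)
    best_val, best_x = np.inf, None
    nodes = [(np.zeros(n), np.ones(n))]                    # root node: 0 <= x <= 1
    while nodes:
        lo, hi = nodes.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=list(zip(lo, hi)), method="highs")
        if not res.success or res.fun >= best_val:         # prune: infeasible or dominated
            continue
        frac = np.abs(res.x - np.round(res.x))
        j = int(np.argmax(frac))
        if frac[j] < tol:                                   # integral solution -> incumbent
            best_val, best_x = res.fun, np.round(res.x)
            continue
        for v in (0.0, 1.0):                                # branch on the most fractional x_j
            lo2, hi2 = lo.copy(), hi.copy()
            lo2[j] = hi2[j] = v
            nodes.append((lo2, hi2))
    return best_val, best_x

# illustrative instance: pick cheap placements while meeting a coverage floor and a budget
c = np.array([3.0, 5.0, 4.0, 6.0])                          # placement costs
A_ub = np.array([[-2.0, -3.0, -1.0, -4.0],                  # coverage >= 5 (written as <=)
                 [ 1.0,  2.0,  2.0,  3.0]])                 # resource budget <= 4
b_ub = np.array([-5.0, 4.0])
print(branch_and_bound(c, A_ub, b_ub))
```

Because every node's relaxation is itself a valid lower bound, the search can be stopped at any time and still return a bound, which is what makes the approach usable as a baseline when the solving time is limited.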
## V WF-Ccra Since the B&B method searches the problem's solution space for the optimal solution, its complexity can grow up to the size of the solution space in the worst case [17]. Given that the size of the solution space in CCRA (or LiCCRA) for each request is \(\mathcal{V}^{2}|\boldsymbol{\mathcal{P}}|^{2}\mathcal{K}^{3}\) considering its integer variables, the problem's overall size is \(\mathcal{R}\mathcal{IV}^{2}|\boldsymbol{\mathcal{P}}|^{2}\mathcal{K}^{3}\). Therefore, finding its optimal solution for large-scale instances using B&B is impractical in a timely manner, and the goal of this section is to devise an efficient approach based on the WF concept in order to identify near-optimal solutions for this problem. The WF-CCRA method is elaborated in Algorithm 2. The first step is to initialize the vectors of parameters and variables used in (1) (or (2)). Following that, two empty sets, \(\boldsymbol{\mathcal{R}^{\prime}}\) and \(\boldsymbol{\Omega}\), are established. The former maintains the set of accepted requests, and the latter stores the feasible resource combinations for each request during its iteration. Now, the algorithm iterates through each request in \(\boldsymbol{\mathcal{R}}\), starting with the one with the most stringent delay requirement, and keeps track of the feasible allocations of VNF, priority, as well as request and reply paths based on the constraints of (1) (or (2)). The final steps of each iteration are to choose the allocation with the lowest cost and fix it for the request, as well as to update remaining resources and the set of pending and accepted requests. When there is no pending request, the algorithm terminates. The complexity of the WF-CCRA algorithm is \(O(\mathcal{R}\mathcal{V}\mathcal{K}|\boldsymbol{\mathcal{P}}|^{2})\). Although this approach is significantly more efficient than the B&B algorithm in terms of complexity (it can be executed within milliseconds), its complexity can be further reduced by restricting the number of valid paths between each pair of nodes to the \(\mathcal{P}\) paths with the lowest costs or smallest number of links. ## VI Simulation Results In this section, the accuracy of the B&B-CCRA and WF-CCRA methods is numerically investigated. The simulation parameters are listed in Table I. As long as the problem remains feasible, the values for the remaining parameters can be chosen arbitrarily. Note that the results were obtained on a computer with 8 processing cores, 16 GB of memory, and a 64-bit operating system. The results are illustrated in Fig 2. The proposed methods are evaluated based on the accuracy of the solutions they provide. Note that the accuracy of a solution for a scenario (\(\eta\)) is defined as \(1-((\eta-\eta^{\star})/\eta^{\star})\), where \(\eta^{\star}\) is the scenario's optimal solution, which is obtained by solving it with CPLEX 12.10. In Fig 2-A, the accuracy of B&B-CCRA is plotted vs. the solving time for five scenarios with different network sizes. In this simulation, the number of requests is set to \(200\). As illustrated, the accuracy of B&B-CCRA starts at 80% after the first iteration, which is obtained by solving the LP transformation of LiCCRA with CPLEX 12.10 in just a few milliseconds, and increases as the solving time passes, reaching 92% for all samples after 100 seconds. It proves that this method can be easily applied to provide baseline solutions for small and medium size use cases. 
However, the accuracy growth is slowed by increasing the network size, which is expected given the problem's NP-hardness and complexity. In the two remaining sub-figures, the accuracy of WF-CCRA is depicted against the number of requests and network size. In addition, these sub-figures illustrate the outcomes of two more approaches, called DlyMin and Rnd. In the DlyMin method, allocations are performed to minimize delay regardless of other constraints, while Rnd is used to allocate resources randomly to requests. Note that the number of requests in Fig 2-B is \(200\), and the number of network nodes in Fig 2-C is \(20\). For each number of nodes or requests, 50 random systems are formed, and the problem is solved for them using the aforementioned techniques. It is evident that regardless of network size, WF-CCRA has an average accuracy of greater than 99%, implying that it can be used to allocate resources in a near-optimal manner even for large networks. For different numbers of requests, the average accuracy remains significantly high and greater than 96%. It does, however, slightly decrease as the number of requests increases, which is the cost of decomplexifying the problem by allocating the resources through separating requests. For the Rnd method, because it consumes the resources of all tiers uniformly, its accuracy is slightly above \(50\%\). DlyMin is the least efficient method according to the results. The reason is that this method always utilizes the costly tier-one nodes to minimize E2E delay. In conclusion, it is shown that the WF CCRA algorithm is capable of efficiently allocating resources for large numbers of requests compared to other approaches. ## VII Conclusion In this paper, the joint problem of communication and computing resource allocation including VNF placement and assignment, traffic prioritization, and path selection considering capacity and delay constraints was studied. The primary goal was to minimize the overall cost. We first formulated the problem as a MINLP model and used a method, named B&B-CCRA, to solve it optimally. Then, a WF-based approach was developed to find near-optimal solutions in a timely manner. Numerical results demonstrated the efficiency of the proposed methods for large numbers of requests and nodes. As a potential future work, we plan to solve the problem considering the ever-changing characteristics of end-users and infrastructure resources. We intend to devise an online machine-learning approach for real-time adaptation of the allocation strategy to keep the overall cost minimized in such dynamic scenarios. Additionally, we are developing an access control strategy to reduce overall cost over time by predicting future requests. ## Acknowledgment This research work is partially supported by the Academy of Finland 6G Flagship, by the European Union's Horizon 2020 ICT Cloud Computing program under the ACCORDION project with grant agreement No. 871793, and by the European Union's Horizon 2020 research and innovation program under the CHARITY project with grant agreement No. 101016509. It is also partially funded by the Academy of Finland Project 6Genesis under grant agreement No. 318927.
2309.06450
Lambert series in analytic number theory
Annotated bibliography of 18th, 19th, and early 20th century works involving Lambert series. A tour of 19th and early 20th century analytic number theory.
Jordan Bell
2023-09-12T02:42:06Z
http://arxiv.org/abs/2309.06450v1
# Lambert series in analytic number theory ###### Abstract Tour of 19th and early 20th century analytic number theory. ## 1 Introduction Let \(d(n)\) denote the number of positive divisors of \(n\). For \(|z|<1\), \[\sum_{n=1}^{\infty}d(n)z^{n}=\sum_{n=1}^{\infty}\frac{z^{n}}{1-z^{n}}.\] ## 2 Euler The first use of the term "Lambert series" was by Euler to describe the roots of an equation. Euler writes in E25 [28] about the particular value of a Lambert series. ## 3 Lambert Bullynck [7, pp. 157-158]: "As he recorded in his scientific diary, the _Monats-buch_, Lambert started thinking about the divisors of integers in June 1756. An essay by G.W. Krafft (1701-1754) in the St. Petersburg _Novi Commentarii_ seems to have triggered Lambert's interest [Bopp 1916, p. 17, 40]." Bullynck [7, p. 163]: Lambert did more than deliver the factor table. He also addressed the absence of any coherent theory of prime numbers and divisors. Filling such a lacuna could be important for the discovery of new and more primality criteria and factoring tests. For Lambert the absence of such a theory was also an occasion to apply the principles laid out in his philosophical work. A fragmentary theory, or one with gaps, needed philosophical and mathematical efforts to mature. To this aim [prime recognition] and others I have looked into the theory of prime numbers, but only found certain isolated pieces, which did not seem possible to make easily into a connected and well formed system. Euclid has few, Fermat some mostly unproven theorems, Euler individual fragments, that anyway are farther away from the first beginnings, and leave gaps between them and the beginnings. [Lambert 1770, p. 20] Bullynck [7, pp. 164-165]: In 1770, Lambert presented two sketches of what would be needed for something like a theory of numbers. The first dealt mainly with factoring methods [Lambert 1765-1772, II, pp. 1-41], while the second gave a more axiomatic treatment [Lambert 1770, pp. 20-48]. In the first essay, Lambert explained how, for composite number with small factors, Eratosthenes' sieve could be used and optimised. For larger factors, Lambert explained that approximation from above, starting by division by numbers that are close to the square root of the tested number \(p\), was more advantageous. For both methods, Lambert advised the use of tables. The second essay had more theoretical bearings. Lambert rephrased Euclid's theorems for use in factoring, included the greatest common divisor algorithm, and put the idea of relatively prime numbers to good use. He also noted that binary notation, because of the frequent symmetries, could be helpful. Finally,Lambert also recognized Fermat's little theorem as a good, though not infallible criterion for primality, "but the negative example is very rare" [Lambert 1770, p. 43]. Monatsbuch, September 1764, "Singula haec in Capp. ult. Ontol. occurunt", and Ann. 5, Ann. 25, 1764, Ann. 12 1765, Ann. 19, 1765 [2]. Lambert [53, pp. 506-511, SS875] Youschkevitch [87] Lorey [59, p. 23] Lowenhaupt [60, p. 32] ## 4 Krafft Krafft [50, pp. 244-245] ## 5 Servois Servois [72] and [73, p. 166] ## 6 Lacroix Lacroix [51, pp. 465-466, SS1195] ## 7 Klugel Klugel [46, pp. 52-53, s.v. 
"Theiler einer Zahl", SS12]: * [leftmargin=0.5cm] * [leftmargin=0.5cm] We write \[\sum_{n=1}^{\infty}\frac{x^{n}}{1-x^{n}}=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}x^ {nm}.\] The series is \[\begin{array}{cccccccc}x&+x^{2}&+x^{3}&+x^{4}&+x^{5}&+x^{6}&+\text{etc.}\\ +x^{2}&+x^{4}&+x^{6}&+x^{8}&+x^{10}&+x^{12}&+\text{etc.}\\ +x^{3}&+x^{6}&+x^{9}&x^{12}&+x^{15}&+x^{18}&+\text{etc.}\\ +x^{4}&+x^{8}&+x^{12}&+x^{16}&+x^{20}&+x^{24}&+\text{etc.}\\ +x^{5}&+x^{10}&+x^{15}&+x^{20}&+x^{25}&+x^{30}&+\text{etc.}\\ +x^{6}&+x^{12}&+x^{18}&+x^{24}&+x^{30}&+x^{36}&+\text{etc.}\\ +\text{etc.}\end{array}\] We sum the terms in the first row and column: the sum of these is \[x+2x^{2}+2x^{3}+2x^{4}+\text{etc.}=x\left(\frac{1+x}{1-x}\right).\] Then, from what remains we sum the terms in the second row and column: the sum of these is \[x^{4}+2x^{6}+2x^{8}+2x^{10}+\text{etc.}=x^{4}\left(\frac{1+x^{2}}{1-x^{2}} \right).\] Then, from what remains, we sum the terms in the third row and column: the sum of these is \[x^{9}+2x^{12}+2x^{15}+2x^{18}+\text{etc.}=x^{9}\left(\frac{1+x^{3}}{1-x^{3}} \right),\] etc. ## 10 Eisenstein Eisenstein [27] states that for \(|z|<1\), \[\sum_{n=1}^{\infty}\frac{z^{n}}{1-z^{n}}=\frac{1}{(1-x)(1-x^{2})(1-x^{3}) \cdots}\sum_{n=1}^{\infty}(-1)^{n+1}\frac{nz^{n(n+1)/2}}{(1-x)\cdots(1-x^{n})}.\] For \(t=\frac{1}{z}\), Eisenstein states that \[\frac{z}{1-z}+\frac{z^{2}}{1-z^{2}}+\frac{z^{3}}{1-z^{3}}+\frac{z^{4}}{1-z^{4} }+\text{etc.}\] is equal to \[\frac{1}{t-1-\frac{(t-1)^{2}}{t^{2}-1-\frac{t(t-1)^{2}}{t^{3}-1-\frac{t (t^{2}-1)^{2}}{t^{4}-1-\frac{t^{2}(t^{2}-1)^{2}}{t^{5}-1-\frac{t^{2}(t^{3}-1)^{2}} {t^{6}-1-\frac{t^{3}(t^{3}-1)^{2}}{t^{7}-1-\text{etc.}}}}}}}}\] Expressing Lambert series using continued fractions is relevant to the irrationality of the value of the series. See Borwein [3]. See also Zudilin [90]. ## 11 Mobius Mobius [62] ## 12 Jacobi Jacobi's _Fundamenta nova_[44, SS40, 66 and p. 185] Chandrasekharan [20, Chapter X]: using Lambert series to prove the four squares theorem. ## 13 Dirichlet Dirichlet [25] Fischer [29] ## 14 Cauchy Cauchy [13] and [14] two memoirs in the same volume. ## 15 Burhenne Burhenne [8] says the following about Lambert series. For \[F(x)=\sum_{n=1}^{\infty}d(n)x^{n},\] we have \[d(n)=\frac{F^{(n)}(0)}{n!}.\] Define \[F_{k}(x)=\frac{x^{k}}{1-x^{k}},\] so that \[F(x)=\sum_{k=1}^{\infty}F_{k}(x).\] It is apparent that if \(k>n\), then \[F_{k}^{(n)}(0)=0,\] hence \[F^{(n)}(0)=\sum_{k=1}^{n}F_{k}^{(n)}(0).\] The above suggests finding explicit expressions for \(F_{k}^{(n)}(0)\). Burhenne cites Sohncke [74, pp. 
32-33]: for even \(k\), \[\frac{d^{n}\left(\frac{x^{p}}{x^{k}-a^{k}}\right)}{dx^{n}} =(-1)^{n}\frac{n!}{ka^{k-p-1}}\left(\frac{1}{(x-a)^{n+1}}-(-1)^{p} \frac{1}{(x+a)^{n+1}}\right)\] \[+(-1)^{n}\frac{n!}{\frac{1}{2}ka^{k-p-1}}\sum_{h=1}^{\frac{1}{2} k-1}\frac{\cos\left(\frac{2h(p+1)\pi}{k}+(n+1)\phi_{h}\right)}{\sqrt{\left(x^{2}-2xa \cos\frac{2h\pi}{n}+a^{2}\right)^{n+1}}}\] and for odd \(k\), \[\frac{d^{n}\left(\frac{x^{p}}{x^{k}-a^{k}}\right)}{dx^{n}} =(-1)^{n}\frac{n!}{ka^{k-p-1}}\frac{1}{(x-a)^{n+1}}\] \[+(-1)^{n}\frac{n!}{\frac{1}{2}ka^{k-p-1}}\sum_{h=1}^{\frac{k-1}{ 2}}\frac{\cos\left(\frac{2h(p+1)\pi}{k}+(n+1)\phi_{h}\right)}{\sqrt{\left(x^{ 2}-2xa\cos\frac{2h\pi}{n}+a^{2}\right)^{n+1}}},\] where \[\cos\phi_{h}=\frac{x-a\cos\frac{2h\pi}{k}}{\sqrt{x^{2}-2xa\cos\frac{2h\pi}{k} +a^{2}}},\quad\sin\phi_{h}=\frac{a\sin\frac{2h\pi}{k}}{\sqrt{x^{2}-2xa\cos \frac{2h\pi}{k}+a^{2}}}.\] For \(a=1\) and \(x=0\), \[\cos\phi_{h}=-\cos\frac{2h\pi}{k},\qquad\sin\phi_{h}=\sin\frac{2h\pi}{k},\] from which \[\phi_{h}=\pi-\frac{2h\pi}{k},\] and thus \[\cos\left(\frac{2h(k+1)\pi}{k}+(n+1)\phi_{h}\right) =\cos\left(\frac{2h(k+1)\pi}{k}+(n+1)\left(\pi-\frac{2h\pi}{k} \right)\right)\] \[=\cos\left(2h\pi+\frac{2h\pi}{k}+\pi-\frac{2h\pi}{k}+n\left(\pi- \frac{2h\pi}{k}\right)\right)\] \[=\cos\left((n+1)\pi-\frac{2nh\pi}{k}\right)\] \[=(-1)^{n+1}\cos\frac{2nh\pi}{k}.\] For even \(k\), taking \(p=k\) we have \[\frac{d^{n}\left(\frac{x^{k}}{1-x^{k}}\right)}{dx^{n}}=(-1)^{n+1}\frac{n!}{k} \left(\frac{1}{(-1)^{n+1}}-1\right)+(-1)^{n+1}\frac{n!}{\frac{1}{2}k}\sum_{h=1 }^{\frac{1}{2}k-1}(-1)^{n+1}\cos\frac{2nh\pi}{k},\] i.e., \[\frac{d^{n}\left(\frac{x^{k}}{1-x^{k}}\right)}{dx^{n}}=(-1)^{n+1}\frac{n!}{k} \frac{1}{(-1)^{n+1}}+(-1)^{n+1}\frac{n!}{\frac{1}{2}k}\sum_{h=1}^{\frac{k-1}{ 2}}(-1)^{n+1}\cos\frac{2nh\pi}{k},\] i.e., \[F_{k}^{(n)}(0)=\frac{n!}{k}+\frac{2\cdot n!}{k}\sum_{h=1}^{\frac{k-1}{2}}\cos \frac{2nh\pi}{k}.\] Using the identity, for \(h\not\in 2\pi\mathbb{Z}\), \[\sum_{h=1}^{M}\cos h\theta=-\frac{1}{2}+\frac{\sin\left(M+\frac{1}{2}\right) \theta}{2\sin\frac{\theta}{2}}=-\frac{1}{2}+\frac{1}{2}\left(\sin M\theta \cot\frac{\theta}{2}+\cos M\theta\right),\] we get for even \(k\), \[F_{k}^{(n)}(0) =\begin{cases}\frac{n!}{k}\cot\frac{n\pi}{k}\sin n\pi&k\not|n\\ \frac{n!}{k}(1-(-1)^{n+1})+\frac{2\cdot n!}{k}\left(\frac{1}{2}k-1\right)&k|n \end{cases}\] \[=\begin{cases}0&k\not|n\\ n!-\frac{n!}{k}(1+(-1)^{n+1})&k|n.\end{cases}\] For odd \(k\), \[F_{k}^{(n)}(0) =\begin{cases}\frac{n!}{k}\csc\frac{n\pi}{k}\sin n\pi&k\not|n\\ \frac{n!}{k}+\frac{2\cdot n!}{k}\frac{k-1}{2}&k|n.\end{cases}\] \[=\begin{cases}0&k\not|n\\ n!&k|n.\end{cases}\] ## 16 Zehfuss Zehfuss [88] ## 17 Bernoulli numbers The **Bernoulli polynomials** are defined by \[\frac{te^{tx}}{e^{t}-1}=\sum_{m=0}^{\infty}B_{m}(x)\frac{t^{m}}{m!}.\] The **Bernoulli numbers** are defined by \(B_{m}=B_{m}(0)\). We denote by \([x]\) the greatest integer \(\leq x\), and we define \(\{x\}=x-[x]\), namely, the fractional part of \(x\). We define \(P_{m}(x)=B_{m}(\{x\})\), the **periodic Bernoulli functions**. ## 18 Euler-Maclaurin summation formula Euler E47 and E212, SS142, for the summation formula. Euler's studies the gamma function in E368. In particular, in SS12 he gives Stirling's formula, and in SS14 he obtains \(\Gamma^{\prime}(1)=-\gamma\). Euler in SS142 of E212 states that \[\gamma=\frac{1}{2}+\sum_{n=1}^{\infty}\frac{(-1)^{n+1}B_{2n}}{2n}.\] Bromwich [6, Chapter XII] The Euler-Maclaurin summation formula [5, p. 280, Ch. VI, Eq. 
35] tells us that for \(f\in C^{\infty}([0,1])\), \[f(0)=\int_{0}^{1}f(t)dt+B_{1}(f(1)-f(0))+\sum_{m=1}^{k}\frac{1}{(2m)!}B_{2m}( f^{(2m-1)}(1)-f^{(2m-1)}(0))+R_{2k},\] where \[R_{2k}=-\int_{0}^{1}\frac{P_{2k}(1-\eta)}{(2k)!}f^{(2k)}(\eta)d\eta.\] Poisson and Jacobi on the Euler-Maclaurin summation formula. ## 19 Schlomilch Schlomilch [69] and [71, p. 238], [70] For \(m\geq 1\), \[\int_{0}^{\infty}\frac{t^{2m-1}}{e^{2\pi t}-1}dt=(-1)^{m+1}\frac{B_{2m}}{4m}. \tag{1}\] For \(\alpha>0\), \[\int_{0}^{\infty}\frac{\sin\alpha t}{e^{2\pi t}-1}dt=\frac{1}{4}+\frac{1}{2} \left(\frac{1}{e^{\alpha}-1}-\frac{1}{\alpha}\right) \tag{2}\] and \[\int_{0}^{\infty}\frac{1-\cos\alpha t}{e^{2\pi t}-1}\frac{dt}{t}=\frac{1}{4} \alpha+\frac{1}{2}\left(\log(1-e^{-\alpha})-\log\alpha\right). \tag{3}\] For \(\xi>0\) and \(n\geq 1\), using (2) with \(\alpha=\xi,2\xi,3\xi,\ldots,2n\xi\) and also using \[\sum_{k=1}^{N}\sin k\theta=\frac{1}{2}\cot\frac{\theta}{2}-\frac{\cos(N+ \frac{1}{2})\theta}{2\sin\frac{\theta}{2}},\] we get \[\sum_{m=1}^{2n}\left(\frac{1}{e^{m\xi}-1}-\frac{1}{m\xi}\right) =\sum_{m=1}^{2n}\left(-\frac{1}{2}+2\int_{0}^{\infty}\frac{\sin m \xi t}{e^{2\pi t}-1}dt\right)\] \[=-n+\int_{0}^{\infty}\frac{1}{e^{2\pi t}-1}\sum_{m=1}^{2n}2\sin m \xi tdt\] \[=-n+\int_{0}^{\infty}\frac{1}{e^{2\pi t}-1}\left(\cot\frac{\xi t }{2}-\frac{\cos(2n+\frac{1}{2})\xi t}{\sin\frac{\xi t}{2}}\right)dt.\] Using \(\cos(a+b)=\cos a\cos b-\sin a\sin b\), this becomes \[\sum_{m=1}^{2n}\left(\frac{1}{e^{m\xi}-1}-\frac{1}{m\xi}\right) =-n+\int_{0}^{\infty}\frac{1}{e^{2\pi t}-1}(1-\cos 2n\xi t)\cot\frac{ \xi t}{2}dt \tag{4}\] \[+\int_{0}^{\infty}\frac{1}{e^{2\pi t}-1}\sin 2n\xi tdt.\] For \(\alpha=2n\xi\), (3) tells us \[\int_{0}^{\infty}\frac{1-\cos 2n\xi t}{e^{2\pi t}-1}\frac{dt}{t}=\frac{1}{4} \cdot 2n\xi+\frac{1}{2}\left(\log(1-e^{-2n\xi})-\log 2n\xi\right).\] Rearranging, \[\frac{\log 2n}{\xi}=n+\frac{\log(1-e^{-2n\xi})-\log\xi}{\xi}-\frac{2}{\xi} \int_{0}^{\infty}\frac{1-\cos 2n\xi t}{e^{2\pi t}-1}\frac{dt}{t} \tag{5}\] Adding (4) and (5) gives \[\sum_{m=1}^{2n}\frac{1}{e^{m\xi}-1}-\frac{1}{\xi}\left(-\log 2n+ \sum_{m=1}^{2n}\frac{1}{m}\right)\] \[= \frac{\log(1-e^{-2n\xi})-\log\xi}{\xi}-\int_{0}^{\infty}\left( \frac{2}{\xi t}-\cot\frac{\xi t}{2}\right)\frac{1-\cos 2n\xi t}{e^{2\pi t}-1}dt\] \[+\int_{0}^{\infty}\frac{1}{e^{2\pi t}-1}\sin 2n\xi tdt.\] Writing \[C_{n}=-\log n+\sum_{m=1}^{n}\frac{1}{m}\] and using (2) this becomes \[\sum_{m=1}^{2n}\frac{1}{e^{m\xi}-1}-\frac{1}{\xi}C_{2n}\] \[= \frac{\log(1-e^{-2n\xi})-\log\xi}{\xi}-2\int_{0}^{\infty}\left( \frac{1}{\xi t}-\frac{1}{2}\cot\frac{\xi t}{2}\right)\frac{1-\cos 2n\xi t}{e^{2 \pi t}-1}dt\] \[+\frac{1}{4}+\frac{1}{2}\left(\frac{1}{e^{2n\xi}-1}-\frac{1}{2n \xi}\right).\] We write \[I_{2n}(\xi)=2\int_{0}^{\infty}\left(\frac{1}{\xi t}-\frac{1}{2}\cot\frac{\xi t }{2}\right)\frac{1-\cos 2n\xi t}{e^{2\pi t}-1}dt,\] and we shall obtain an asymptotic formula for \(I_{2n}(\xi)\). We apply the Euler-Maclaurin summation formula. Let \(h>0\), and for \(f(t)=\cos ht\) we have \(f^{\prime}(t)=-h\sin ht\), and for \(m\geq 1\) we have \(f^{(2m)}(t)=(-1)^{m}h^{2m}\cos ht\) and \(f^{(2m-1)}(t)=(-1)^{m}h^{2m-1}\sin ht\). 
Thus the Euler-Maclaurin formula yields \[1=\int_{0}^{1}\cos htdt-\frac{1}{2}(\cos h-1)+\sum_{m=1}^{k}\frac{1}{(2m)!}B_ {2m}(-1)^{m}h^{2m-1}\sin h+R_{2k}.\] Using the identity \(\cot\frac{\theta}{2}=\frac{1+\cos\theta}{\sin\theta}\) and dividing by \(\sin h\), this becomes \[\frac{1}{2}\cot\frac{h}{2}=\frac{1}{h}+\sum_{m=1}^{k}\frac{1}{(2m)!}B_{2m}(-1 )^{m}h^{2m-1}+\frac{1}{\sin h}R_{2k}. \tag{6}\] Because \(P_{m}(1-\eta)=P_{m}(\eta)\) for even \(m\), \[R_{2k} =-\int_{0}^{1}\frac{P_{2k}(\eta)}{(2k)!}(-1)^{k}h^{2k}\cos h\eta d\eta\] \[=-B_{2k}\int_{0}^{1}\frac{1}{(2k)!}(-1)^{k}h^{2k}\cos h\eta d\eta- \int_{0}^{1}\frac{(P_{2k}(\eta)-B_{2k})}{(2k)!}(-1)^{k}h^{2k}\cos h\eta d\eta\] \[=(-1)^{k+1}\frac{B_{2k}h^{2k}}{(2k)!}\frac{\sin h}{h}+(-1)^{k+1} \frac{h^{2k}}{(2k)!}\int_{0}^{1}(P_{2k}(\eta)-B_{2k})\cos h\eta d\eta.\] Since \(P_{2k}(\eta)-B_{2k}\) does not change sign on \((0,1)\), by the mean-value theorem for integration there is some \(\theta=\theta(h,k)\), \(0<\theta<1\), such that (using \(\int_{0}^{1}P_{2k}(\eta)d\eta=0\)) \[\int_{0}^{1}(P_{2k}(\eta)-B_{2k})\cos h\eta d\eta=\cos h\theta\int_{0}^{1}(P_ {2k}(\eta)-B_{2k})d\eta=-B_{2k}\cos h\theta.\] Therefore (6) becomes \[\frac{1}{2}\cot\frac{h}{2}-\frac{1}{h} =\sum_{m=1}^{k}\frac{1}{(2m)!}B_{2m}(-1)^{m}h^{2m-1}\] \[+(-1)^{k+1}\frac{B_{2k}h^{2k-1}}{(2k)!}+(-1)^{k+2}\frac{h^{2k}}{ (2k)!\sin h}B_{2k}\cos h\theta,\] i.e., \[\frac{1}{2}\cot\frac{h}{2}-\frac{1}{h}=\sum_{m=1}^{k-1}\frac{1}{(2m)!}B_{2m}(- 1)^{m}h^{2m-1}+(-1)^{k}\frac{h^{2k}}{(2k)!\sin h}B_{2k}\cos h\theta.\] Write \[E_{k}(h)=(-1)^{k+1}\frac{h^{2k}}{(2k)!\sin h}B_{2k}\cos h\theta.\] We apply the above to \(I_{2n}(\xi)\), and get, for any \(k\geq 1\), \[I_{2n}(\xi) =2\int_{0}^{\infty}\left(E_{k}(\xi t)-\sum_{m=1}^{k-1}\frac{1}{(2 m)!}B_{2m}(-1)^{m}(\xi t)^{2m-1}\right)\frac{1-\cos 2n\xi t}{e^{2\pi t}-1}dt\] \[=-2\sum_{m=1}^{k-1}\frac{1}{(2m)!}B_{2m}(-1)^{m}\xi^{2m-1}\int_{0 }^{\infty}t^{2m-1}\frac{1-\cos 2n\xi t}{e^{2\pi t}-1}dt\] \[+2\int_{0}^{\infty}E_{k}(\xi t)\frac{1-\cos 2n\xi t}{e^{2\pi t}-1}dt.\] Using (1), \[\int_{0}^{\infty}t^{2m-1}\frac{1-\cos 2n\xi t}{e^{2\pi t}-1}dt =\int_{0}^{\infty}\frac{t^{2m-1}}{e^{2\pi t}-1}dt-\int_{0}^{ \infty}\frac{t^{2m-1}\cos 2n\xi t}{e^{2\pi t}-1}dt\] \[=(-1)^{m+1}\frac{B_{2m}}{4m}-\int_{0}^{\infty}\frac{t^{2m-1}\cos 2 n\xi t}{e^{2\pi t}-1}dt.\] \[\left|\int_{0}^{\infty}E_{k}(\xi t)\frac{1-\cos 2n\xi t}{e^{2\pi t}-1}dt\right|\] \[\leq \frac{\pi}{2}\frac{|B_{2k}|}{(2k)!}\int_{0}^{\infty}(\xi t)^{2k-1} \frac{1-\cos 2n\xi t}{e^{2\pi t}-1}dt\] \[= \frac{\pi}{2}\frac{|B_{2k}|}{(2k)!}\xi^{2k-1}\cdot\frac{1}{2} \left((-1)^{k+1}\frac{B_{2k}}{2k}+(-1)^{k}f^{(2k-1)}(2n\xi)\right).\] Hence \[I_{2n}(\xi) =\sum_{m=1}^{k-1}\frac{B_{2m}^{2}}{(2m)!2m}\xi^{2m-1}-\sum_{m=1}^{k- 1}\frac{B_{2m}}{(2m)!}\xi^{2m-1}f^{(2m-1)}(2n\xi)\] \[+O\left(\frac{B_{2k}^{2}}{(2k)!2k}\xi^{2k-1}\right)+O\left(\frac{ |B_{2k}|}{(2k)!}\xi^{2k-1}f^{(2k-1)}(2n\xi)\right).\] Therefore we have \[\sum_{m=1}^{2n}\frac{1}{e^{m\xi}-1}-\frac{1}{\xi}C_{2n}\] \[= \frac{\log(1-e^{-2n\xi})-\log\xi}{\xi}+\frac{1}{4}+\frac{1}{2} \left(\frac{1}{e^{2n\xi}-1}-\frac{1}{2n\xi}\right)-I_{2n}(\xi)\] \[= \frac{\log(1-e^{-2n\xi})-\log\xi}{\xi}+\frac{1}{4}+\frac{1}{2} \left(\frac{1}{e^{2n\xi}-1}-\frac{1}{2n\xi}\right)\] \[-\sum_{m=1}^{k-1}\frac{B_{2m}^{2}}{(2m)!2m}\xi^{2m-1}+\sum_{m=1}^ {k-1}\frac{B_{2m}}{(2m)!}\xi^{2m-1}f^{(2m-1)}(2n\xi)\] \[+O\left(\frac{B_{2k}^{2}}{(2k)!2k}\xi^{2k-1}\right)+O\left(\frac{ |B_{2k}|}{(2k)!}\xi^{2k-1}f^{(2k-1)}(2n\xi)\right).\] Taking \(n\to\infty\), 
\[\sum_{m=1}^{\infty}\frac{1}{e^{m\xi}-1}-\frac{\gamma}{\xi}=-\frac{\log\xi}{ \xi}+\frac{1}{4}-\sum_{m=1}^{k-1}\frac{B_{2m}^{2}}{(2m)!2m}\xi^{2m-1}+O\left( \frac{B_{2k}^{2}}{(2k)!2k}\xi^{2k-1}\right).\] ## 20 Voronoi summation formula The Voronoi summation formula [22, p. 182] states that if \(f:\mathbb{R}\to\mathbb{C}\) is a Schwartz function, then \[\sum_{n=1}^{\infty}d(n)f(n) =\int_{0}^{\infty}f(t)(\log t+2\gamma)dt+\frac{f(0)}{4}\] \[+\sum_{n=1}^{\infty}d(n)\int_{0}^{\infty}f(t)(4K_{0}(4\pi(nt)^{1/ 2})-2\pi Y_{0}(4\pi(nt)^{1/2}))dt,\] where \(K_{0}\) and \(Y_{0}\) are Bessel functions. Let \(0<x<1\). For \(f(t)=e^{-tx}\), we compute \[\int_{0}^{\infty}f(t)(4K_{0}(4\pi(nt)^{1/2})-2\pi Y_{0}(4\pi(nt)^{ 1/2}))dt\] \[= -\frac{2}{x}\exp\left(\frac{4\pi^{2}n}{x}\right)\operatorname{Ei }\left(-\frac{4\pi^{2}n}{x}\right)-\frac{2}{x}\exp\left(-\frac{4\pi^{2}n}{x} \right)\operatorname{Ei}\left(\frac{4\pi^{2}n}{x}\right),\] where \[\operatorname{Ei}(x)=-\int_{-x}^{\infty}\frac{e^{-t}}{t}dt,\qquad x\neq 0,\] the exponential integral. Then the Voronoi summation formula yields \[\sum_{n=1}^{\infty}d(n)e^{-nx}\] \[= \frac{\gamma}{x}-\frac{\log x}{x}+\frac{1}{4}\] \[+\sum_{n=1}^{\infty}d(n)\left(-\frac{2}{x}\exp\left(\frac{4\pi^{2 }n}{x}\right)\operatorname{Ei}\left(-\frac{4\pi^{2}n}{x}\right)-\frac{2}{x} \exp\left(-\frac{4\pi^{2}n}{x}\right)\operatorname{Ei}\left(\frac{4\pi^{2}n}{ x}\right)\right).\] Egger and Steiner [26] give a proof of the Voronoi summation formula involving Lambert series. Kluyver [47] and [48] Guinand [36] ## 21 Curtze Curtze [23] ## 22 Laguerre Laguerre [52] ## 23 V. A. Lebesgue V. A. Lebesgue [56]: ## 24 Bouniakowsky Bouniakowsky [4] ## 25 Chebyshev Chebyshev [80] ## 26 Catalan Catalan [9] Catalan [10, p. 89] Catalan [11, p. 119, SSCXIV] and [12, pp. 38-39, SCCXXVI] ## 27 Pincherle Pincherle [63] ## 28 Glaisher Glaisher [34, p. 163] ## 29 Gunther Gunther [37, p. 83] and [38, p. 178] ## 30 Stieltjes Stieltjes [78] cf. Zhang [89] ## 31 Rogel Rogel [65] and [66] ## 32 Cesaro Cesaro [15] Cesaro [16] Cesaro [17] and [18, pp. 181-184] Bromwich [6, p. 201, Chapter VIII, Example B, 35] ## 33 de la Vallee-Poussin de la Vallee-Poussin [24] ## 34 Torelli Torelli [83] ## 35 Fibonacci numbers Landau [54] ## 36 Knopp Knopp [49] ## 37 Generating functions Hardy and Wright [41, p. 258, Theorem 307]: **Theorem 1**.: _For \(f(s)=\sum_{n=1}^{\infty}a_{n}n^{-s}\) and \(g(s)=\sum_{n=1}^{\infty}b_{n}n^{-s}\),_ \[\sum_{n=1}^{\infty}a_{n}\frac{x^{n}}{1-x^{n}}=\sum_{n=1}^{\infty}b_{n}x^{n}, \qquad|x|<1,\] _if and only if there is some \(\sigma\) such that_ \[\zeta(s)f(s)=g(s),\qquad\operatorname{Re}\left(s\right)>\sigma.\] For \(f(s)=\sum_{n=1}^{\infty}\mu(n)n^{-s}\) and \(g(s)=1\), using [41, p. 250, Theorem 287] \[\frac{1}{\zeta(s)}=\sum_{n=1}^{\infty}\mu(n)n^{-s},\qquad\operatorname{Re} \left(s\right)>1,\] we get \[\sum_{n=1}^{\infty}\frac{\mu(n)x^{n}}{1-x^{n}}=x. \tag{7}\] For \(f(s)=\sum_{n=1}^{\infty}\phi(n)n^{-s}\) and \[g(s)=\zeta(s-1)=\sum_{n=1}^{\infty}n^{-s+1}=\sum_{n=1}^{\infty}nn^{-s},\] using [41, p. 
250, Theorem 288] \[\frac{\zeta(s-1)}{\zeta(s)}=\sum_{n=1}^{\infty}\phi(n)n^{-s},\qquad \operatorname{Re}\left(s\right)>2,\] we get \[\sum_{n=1}^{\infty}\frac{\phi(n)x^{n}}{1-x^{n}}=\sum_{n=1}^{\infty}nx^{n}= \frac{x}{(1-x)^{2}}.\] For \(n=p_{1}^{a_{1}}\cdots p_{r}^{a_{r}}\), define \(\Omega(n)=a_{1}+\cdots+a_{n}\) and \[\lambda(n)=(-1)^{\Omega(n)}.\] For \(f(s)=\sum_{n=1}^{\infty}\lambda(n)n^{-s}\) and \[g(s)=\zeta(2s)=\sum_{n=1}^{\infty}n^{-2s}=\sum_{n=1}^{\infty}(n^{2})^{-s},\] using [41, p. 255, Theorem 300] \[\frac{\zeta(2s)}{\zeta(s)}=\sum_{n=1}^{\infty}\lambda(n)n^{-s},\qquad\operatorname {Re}\left(s\right)>1,\] we get \[\sum_{n=1}^{\infty}\frac{\lambda(n)x^{n}}{1-x^{n}}=\sum_{n=1}^{\infty}x^{n^{2}}.\] We define the **von Mangoldt function**\(\Lambda:\mathbb{N}\to\mathbb{R}\) by \(\Lambda(n)=\log p\) if \(n\) is some positive integer power of a prime \(p\), and \(\Lambda(n)=0\) otherwise. For example, \(\Lambda(1)=0\), \(\Lambda(12)=0\), \(\Lambda(125)=\log 5\). It is a fact [41, p. 254, Theorem 296] that for any \(n\), the von Mangoldt function satisfies \[\sum_{m|n}\Lambda(m)=\log n. \tag{8}\] For \(f(s)=\sum_{n=1}^{\infty}\Lambda(n)n^{-s}\) and \[g(s)=-\zeta^{\prime}(s)=\sum_{n=1}^{\infty}\log nn^{-s},\] using [41, p. 253, Theorem 294] \[-\frac{\zeta^{\prime}(s)}{\zeta(s)}=\sum_{n=1}^{\infty}\Lambda(n)n^{-s},\] we obtain \[\sum_{n=1}^{\infty}\frac{\Lambda(n)x^{n}}{1-x^{n}}=\sum_{n=1}^{\infty}\log nx ^{n}.\] ## 38 Mertens For \(\operatorname{Re}s>1\), we define \[P(s)=\sum_{p}\frac{1}{p^{s}}.\] We also define \[H=\sum_{m=2}^{\infty}\sum_{p}\frac{1}{mp^{m}}.\] Mertens [61] proves the following. **Theorem 2**.: _As \(\varrho\to 0\),_ \[P(1+\rho)=\log\left(\frac{1}{\rho}\right)-H+o(1).\] Proof.: As \(\varrho\to 0\), \[\zeta(1+\varrho)=\frac{1}{\varrho}+\gamma+O(\varrho)=\frac{1}{\varrho}(1+\gamma \varrho+O(\varrho^{2})).\] Taking the logarithm, \[\log\zeta(1+\varrho)=\log\left(\frac{1}{\varrho}\right)+\log(1+\gamma\varrho+O( \varrho^{2}))=\log\left(\frac{1}{\varrho}\right)+\gamma\varrho+O(\varrho^{2}). \tag{9}\] On the other hand, for \(\varrho>0\), \[\zeta(1+\varrho)=\prod_{p}\frac{1}{1-\frac{1}{p^{1+\varrho}}},\] and taking the logarithm, \[\log\zeta(1+\varrho) =-\sum_{p}\log\left(1-\frac{1}{p^{1+\varrho}}\right)\] \[=\sum_{p}\sum_{m=1}^{\infty}\frac{1}{mp^{m(1+\varrho)}}\] \[=P(1+\rho)+\sum_{m=2}^{\infty}\sum_{p}\frac{1}{mp^{m(1+\varrho)}}.\] Then as \(\varrho\to 0\), \[\log\zeta(1+\varrho)=P(1+\varrho)+H+o(1).\] Combining this with (9) we get that as \(\varrho\to 0\), \[P(1+\rho)=\log\left(\frac{1}{\rho}\right)-H+o(1).\] Mertens [61] also proves that for any \(x\) there is some \[|\delta|<\frac{4}{\log(x+1)}+\frac{2}{x\log x}\] such that \[\sum_{p\leq x}\frac{1}{p}=\log\log x+\gamma-H+\delta.\] Thus, \[\sum_{p\leq x}\frac{1}{p}=\log\log x+\gamma-H+O\left(\frac{1}{\log x}\right).\] Mertens shows that \[H=-\sum_{n=2}^{\infty}\mu(n)\frac{\log\zeta(n)}{n}.\] This can be derived using (7), and we do this now; see [58]. 
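As a numerical sanity check of the constant \(H\) above, the following sketch (not from any of the cited sources) compares its defining double sum \(\sum_{m\geq 2}\sum_p \frac{1}{mp^m}\) with Mertens' series \(-\sum_{n\geq 2}\mu(n)\frac{\log\zeta(n)}{n}\), which is derived in Lemma 3 and Theorem 4 below. It assumes mpmath is available for \(\zeta(n)\); the sieve bound \(10^6\), the series cutoff at \(n=60\), and the helper names are arbitrary choices of mine.

```python
# Numerical sanity check (not from the cited sources): compare the two expressions for
#   H = sum_{m>=2} sum_p 1/(m p^m)   and   H = -sum_{n>=2} mu(n) log(zeta(n)) / n.
# Assumes mpmath is installed; the sieve bound and truncations are arbitrary choices.
import math
from mpmath import zeta  # zeta(n) for integer n >= 2

def primes_upto(limit):
    """Simple Eratosthenes sieve returning all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, limit + 1) if sieve[p]]

def mobius(n):
    """Moebius function by trial factorisation (fine for the small n used here)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

# Direct definition: for each prime, -log(1 - 1/p) - 1/p = sum_{m>=2} 1/(m p^m).
H_direct = sum(-math.log1p(-1.0 / p) - 1.0 / p for p in primes_upto(10**6))

# Mertens' series: log(zeta(n)) decays like 2^-n, so 60 terms are more than enough.
H_series = -sum(mobius(n) * math.log(float(zeta(n))) / n for n in range(2, 61))

print(H_direct, H_series)  # both should be close to 0.315718...
```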
**Lemma 3**.: _For \(\operatorname{Re}s>1\),_ \[\frac{1}{s}\log\zeta(s)=\int_{2}^{\infty}\frac{\pi(t)dt}{t(t^{s}-1)}.\] Proof.: For \(p\) prime and \(\operatorname{Re}s>0\), \[\int_{p}^{\infty}\frac{dt}{t(t^{s}-1)} =\int_{p}^{\infty}t^{-s-1}\frac{1}{1-t^{-s}}dt\] \[=\int_{p}^{\infty}t^{-s-1}\sum_{n=0}^{\infty}(t^{-s})^{n}dt\] \[=\sum_{n=0}^{\infty}\int_{p}^{\infty}t^{-ns-s-1}dt\] \[=\sum_{n=0}^{\infty}\frac{t^{-ns-s}}{-ns-s}\bigg{|}_{p}^{\infty}\] \[=\frac{1}{s}\sum_{n=1}^{\infty}\frac{p^{-ns}}{n}\] \[=-\frac{1}{s}\log(1-p^{-s}),\] hence \[\log\left(\frac{1}{1-p^{-s}}\right)=s\int_{p}^{\infty}\frac{dt}{t(t^{s}-1)}.\] On the one hand, \[\sum_{p}\int_{p}^{\infty}\frac{dt}{t(t^{s}-1)}=\int_{2}^{\infty}\frac{\pi(t)dt }{t(t^{s}-1)}.\] On the other hand, for \(\operatorname{Re}s>1\) we have \[\sum_{p}\log\left(\frac{1}{1-p^{-s}}\right)=\log\prod_{p}\left(\frac{1}{1-p^{ -s}}\right)=\log\zeta(s).\] Combining these, for \(\operatorname{Re}s>1\), \[\frac{1}{s}\log\zeta(s)=\int_{2}^{\infty}\frac{\pi(t)dt}{t(t^{s}-1)}.\] **Theorem 4**.: \[H=-\sum_{n=2}^{\infty}\mu(n)\frac{\log\zeta(n)}{n}.\] Proof.: For any prime \(p\) and for \(m\geq 1\), \[\int_{p}^{\infty}t^{-m-1}dt=\frac{t^{-m}}{-m}\Big{|}_{p}^{\infty}=\frac{1}{mp^{ m}},\] and using this we have \[H =\sum_{m=2}^{\infty}\sum_{p}\frac{1}{mp^{m}}\] \[=\sum_{m=2}^{\infty}\sum_{p}\int_{p}^{\infty}t^{-m-1}dt\] \[=\sum_{m=2}^{\infty}\int_{2}^{\infty}\pi(t)t^{-m-1}dt\] \[=\int_{2}^{\infty}\pi(t)\left(\sum_{m=2}^{\infty}t^{-m-1}\right)dt\] \[=\int_{2}^{\infty}\pi(t)\frac{1}{t^{2}(t-1)}dt\] Rearranging (7), \[\frac{x^{2}}{1-x}=-\sum_{n=2}^{\infty}\frac{\mu(n)x^{n}}{1-x^{n}}.\] With \(x=t^{-1}\), \[\frac{1}{t(t-1)}=-\sum_{n=2}^{\infty}\frac{\mu(n)}{t^{n}-1},\] so \[\frac{1}{t^{2}(t-1)}=-\sum_{n=2}^{\infty}\frac{\mu(n)}{t(t^{n}-1)}.\] Thus we have \[H=-\int_{2}^{\infty}\pi(t)\left(\sum_{n=2}^{\infty}\frac{\mu(n)}{t(t^{n}-1)} \right)dt=-\sum_{n=2}^{\infty}\mu(n)\int_{2}^{\infty}\frac{\pi(t)dt}{t(t^{n}- 1)}dt.\] Using Lemma 3 for \(s=2,3,4,\ldots\), \[H=-\sum_{n=2}^{\infty}\mu(n)\cdot\frac{1}{n}\log\zeta(n),\] completing the proof. ## 39 Preliminaries on prime numbers We define \[\vartheta(x)=\sum_{p\leq x}\log p=\log\prod_{p\leq x}p\] and \[\psi(x)=\sum_{p^{m}\leq x}\log p=\sum_{n\leq x}\Lambda(n).\] One sees that \[\psi(x)=\sum_{p\leq x}[\log_{p}x]\log p=\sum_{p\leq x}\left[\frac{\log x}{\log p }\right]\log p.\] As well, \[\psi(x)=\sum_{m=1}^{\infty}\sum_{p\leq x^{1/m}}\log p=\sum_{m=1}^{\infty} \vartheta(x^{1/m}); \tag{10}\] there are only finitely many terms on the right-hand side, as \(\vartheta(x^{1/m})=0\) if \(x<2^{m}\). **Theorem 5**.: \[\psi(x)=\vartheta(x)+O(x^{1/2}(\log x)^{2}).\] Proof.: For \(x\geq 2\), \(\vartheta(x)<x\log x\), giving \[\sum_{2\leq m\leq\frac{\log x}{\log 2}}\vartheta(x^{1/m}) <\sum_{2\leq m\leq\frac{\log x}{\log 2}}x^{1/m}\frac{1}{m}\log x\] \[\leq x^{1/2}\log x\sum_{2\leq m\leq\frac{\log x}{\log 2}}\frac{1}{m}\] \[=O(x^{1/2}(\log x)^{2}).\] Thus, using (10) we have \[\psi(x)=\vartheta(x)+\sum_{2\leq m\leq\frac{\log x}{\log 2}}\vartheta(x^{1/m})= \vartheta(x)+O(x^{1/2}(\log x)^{2}).\] We prove that if \(\lim_{x\to\infty}\frac{\vartheta(x)}{x}=1\) then \(\frac{\pi(x)}{x/\log x}=1\). **Theorem 6**.: \[\liminf_{x\to\infty}\frac{\pi(x)}{x/\log x}=\liminf_{x\to\infty}\frac{\vartheta (x)}{x}\] _and_ \[\limsup_{x\to\infty}\frac{\pi(x)}{x/\log x}=\limsup_{x\to\infty}\frac{\vartheta (x)}{x}.\] Proof.: From (10), \(\vartheta(x)\leq\psi(x)\). 
And, \[\psi(x)=\sum_{p\leq x}\left[\frac{\log x}{\log p}\right]\log p\leq\sum_{p\leq x} \frac{\log x}{\log p}\log p=\log x\sum_{p\leq x}.\] Hence \[\frac{\vartheta(x)}{x}\leq\frac{\pi(x)\log x}{x},\] whence \[\liminf_{x\to\infty}\frac{\vartheta(x)}{x}\leq\liminf_{x\to\infty}\frac{\pi(x )}{x/\log x}\] and \[\limsup_{x\to\infty}\frac{\vartheta(x)}{x}\leq\limsup_{x\to\infty}\frac{\pi(x )}{x/\log x}.\] Let \(0<\alpha<1\). For \(x>1\), \[\vartheta(x)=\sum_{p\leq x}\log p\geq\sum_{x^{\alpha}<p\leq x}\log p>\sum_{x^ {\alpha}<p\leq x}\log x^{\alpha}=\alpha\log x(\pi(x)-\pi(x^{\alpha})).\] As \(\pi(x^{\alpha})<x^{\alpha}\), \[\vartheta(x)>\alpha\pi(x)\log x-\alpha x^{\alpha}\log x,\] i.e., \[\frac{\vartheta(x)}{x}>\alpha\frac{\pi(x)\log x}{x}-\alpha\frac{\log x}{x^{1- \alpha}}.\] This yields \[\liminf_{x\to\infty}\frac{\vartheta(x)}{x}\geq\alpha\liminf_{x\to\infty} \frac{\pi(x)\log x}{x}-\alpha\liminf_{x\to\infty}\frac{\log x}{x^{1-\alpha}}= \alpha\liminf_{x\to\infty}\frac{\pi(x)\log x}{x}\] and \[\limsup_{x\to\infty}\frac{\vartheta(x)}{x}\geq\alpha\limsup_{x\to\infty} \frac{\pi(x)\log x}{x}-\alpha\limsup_{x\to\infty}\frac{\log x}{x^{1-\alpha}}= \alpha\limsup_{x\to\infty}\frac{\pi(x)\log x}{x}.\] Since these are true for all \(0<\alpha<1\), we obtain respectively \[\liminf_{x\to\infty}\frac{\vartheta(x)}{x}\geq\liminf_{x\to\infty}\frac{\pi(x )\log x}{x}\] and \[\limsup_{x\to\infty}\frac{\vartheta(x)}{x}\geq\limsup_{x\to\infty}\frac{\pi(x )\log x}{x}.\] ## 40 Wiener's tauberian theorem Wiener [85, Chapter III]. Wiener-Ikehara [19] Rudin [67, p. 229, Theorem 9.7] We say that a function \(s:(0,\infty)\to\mathbb{R}\) is **slowly decreasing** if \[\liminf(s(\rho v)-s(v))\geq 0,\qquad v\to\infty,\quad\rho\to 1^{+}.\] Widder [84, p. 211, Theorem 10b]: Wiener's tauberian theorem tells us that if \(a\in L^{\infty}(0,\infty)\) and is slowly decreasing and if \(g\in L^{1}(0,\infty)\) satisfies \[\int_{0}^{\infty}t^{ix}g(t)dt\neq 0,\qquad x\in\mathbb{R},\] then \[\lim_{x\to\infty}\frac{1}{x}\int_{0}^{\infty}g\left(\frac{t}{x}\right)a(t)dt=A \int_{0}^{\infty}g(t)dt\] implies that \[\lim_{v\to\infty}a(v)=A.\] It is straightforward to check the following by rearranging summation. **Lemma 7**.: _If \(\sum_{n=1}^{\infty}a_{n}z^{n}\) has radius of convergence \(\geq 1\), then for \(|z|<1\),_ \[\sum_{n=1}^{\infty}a_{n}\frac{z^{n}}{1-z^{n}}=\sum_{n=1}^{\infty}\left(\sum_{ m|n}a_{m}\right)z^{n}.\] Using Lemma 7 with \(a_{n}=\Lambda(n)\) and \(z=e^{-x}\) and applying (8), we get \[\sum_{n=1}^{\infty}\Lambda(n)\frac{z^{n}}{1-z^{n}}=\sum_{n=1}^{\infty}\log(n) z^{n}. \tag{11}\] From (11), and Lemma 7 with \(a_{n}=1\), we have \[\sum_{n=1}^{\infty}(\Lambda(n)-1)\frac{e^{-nx}}{1-e^{-nx}}=\sum_{n=1}^{\infty }(\log n-d(n))e^{-nx}.\] We follow Widder [84, p. 231, Theorem 16.6]. 
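Before following that argument, the identity just obtained from Lemma 7 and (8), namely \(\sum_{n\geq 1}(\Lambda(n)-1)\frac{e^{-nx}}{1-e^{-nx}}=\sum_{n\geq 1}(\log n-d(n))e^{-nx}\), can be spot-checked numerically. The sketch below is mine, uses only the standard library, and the cutoff \(N\) and the value of \(x\) are arbitrary (both tails are of order \(e^{-Nx}\)).

```python
# Numerical spot check (not from the cited sources) of the identity obtained from Lemma 7 and (8):
#   sum_{n>=1} (Lambda(n) - 1) e^{-nx} / (1 - e^{-nx}) = sum_{n>=1} (log n - d(n)) e^{-nx}.
# The truncation N and the choice x = 0.5 are arbitrary; the neglected tails are ~e^{-Nx}.
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n is a positive power of the prime p, else 0."""
    if n < 2:
        return 0.0
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
        p += 1
    return math.log(n)  # n itself is prime

def num_divisors(n):
    return sum(1 for k in range(1, n + 1) if n % k == 0)

x, N = 0.5, 120
lhs = sum((von_mangoldt(n) - 1) * math.exp(-n * x) / (1 - math.exp(-n * x))
          for n in range(1, N + 1))
rhs = sum((math.log(n) - num_divisors(n)) * math.exp(-n * x) for n in range(1, N + 1))
print(lhs, rhs)  # the two truncated sums should agree to many decimal places
```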
**Theorem 8**.: _As \(x\to 0^{+}\),_ \[\sum_{n=1}^{\infty}(\log n-d(n))e^{-nx}=-\frac{2\gamma}{x}+O(x^{-1/2}).\] Proof.: Generally, \[(1-z)\sum_{n=1}^{\infty}z^{n}\sum_{m=1}^{n}a_{m} =(1-z)\sum_{m=1}^{\infty}a_{m}\sum_{n=m}^{\infty}z^{n}\] \[=(1-z)\sum_{m=1}^{\infty}a_{m}\frac{z^{m}}{1-z}\] \[=\sum_{m=1}^{\infty}a_{m}z^{m}.\] Using this with \(a_{m}=\log m-d(m)\) and \(z=e^{-x}\) gives \[\sum_{n=1}^{\infty}(\log n-d(n))e^{-nx} =(1-e^{-x})\sum_{n=1}^{\infty}e^{-nx}\left(\sum_{m=1}^{n}\log m- \sum_{m=1}^{n}d(m)\right)\] \[=(1-e^{-x})\sum_{n=1}^{\infty}e^{-nx}\left(\log(n!)-\sum_{m=1}^{n }d(m)\right).\] Using \[\log(n!)=n\log n-n+O(\log n)\] and \[\sum_{m=1}^{n}d(m)=n\log n+(2\gamma-1)n+O(n^{1/2}),\] we get \[\log(n!)-\sum_{m=1}^{n}d(m)=-2\gamma n+O(n^{1/2}).\] Therefore, \[\sum_{n=1}^{\infty}(\log n-d(n))e^{-nx}=(1-e^{-x})\sum_{n=1}^{\infty}e^{-nx}( -2\gamma n+O(n^{1/2})).\] One proves that there is some \(K\) such that for all \(0\leq y<1\), \[(1-y)\left(\log\frac{1}{y}\right)^{1/2}\sum_{n=1}^{\infty}n^{1/2}y^{n}\leq K,\] whence, with \(y=e^{-x}\), \[\sum_{n=1}^{\infty}n^{1/2}e^{-nx}\leq K\frac{x^{-1/2}}{1-e^{-x}}.\] Also, \[\sum_{n=1}^{\infty}ne^{-nx}=\frac{e^{-x}}{(1-e^{-x})^{2}},\] and thus we have \[\sum_{n=1}^{\infty}(\log n-d(n))e^{-nx} =-2\gamma\frac{e^{-x}}{1-e^{-x}}+O(x^{-1/2})\] \[=-2\gamma\frac{1}{e^{x}-1}+O(x^{-1/2}).\] But \[\frac{1}{e^{x}-1}=\frac{1}{x}-\frac{1}{2}+O(x),\] so \[\sum_{n=1}^{\infty}(\log n-d(n))e^{-nx}=-\frac{2\gamma}{x}+O(x^{-1/2}).\] Define \[f(x)=\sum_{n=1}^{\infty}(\Lambda(n)-1)\frac{e^{-nx}}{1-e^{-nx}},\] and \[h(x)=\sum_{n\leq x}\frac{\Lambda(n)-1}{n},\] and \[g(t)=\frac{d}{dt}\left(\frac{te^{-t}}{1-e^{-t}}\right).\] First we show that \(h\) is slowly decreasing. **Lemma 9**.: \(h(x)\) _is slowly decreasing._ Proof.: Using \[\sum_{1\leq n\leq x}\frac{1}{n}=\log x+\gamma+O(n^{-1}),\qquad x\to\infty,\] we have, for \(0<x<\infty\) and \(\rho>1\), \[h(\rho x)-h(x) =\sum_{x<n\leq\rho x}\frac{\Lambda(n)-1}{n}\] \[\geq-\sum_{x<n\leq\rho x}\frac{1}{n}\] \[=-\sum_{1\leq n\leq\rho x}\frac{1}{n}+\sum_{1\leq n\leq x}\frac{ 1}{n}\] \[=-\log(\rho x)+\log x+O((\rho x)^{-1})+O(x^{-1})\] \[=-\log\rho+O((\rho x)^{-1})+O(x^{-1}).\] Hence as \(x\to\infty\) and \(\rho\to 1^{+}\), \[h(\rho x)-h(x)\to 0,\] which shows that \(h\) is slowly decreasing. The following is from Widder [84, pp. 231-232]. **Lemma 10**.: _As \(x\to\infty\),_ \[\frac{1}{x}\int_{0}^{\infty}g\left(\frac{t}{x}\right)h(t)dt=2\gamma+O(x^{-1/2 }).\] Proof.: Let \(I(t)=0\) for \(t<0\) and \(I(t)=1\) for \(t\geq 0\). 
Writing \[h(x)=\sum_{n=1}^{\infty}I(x-n)\frac{\Lambda(n)-1}{n},\] we check that for \(x>0\), \[\int_{0}^{\infty}\frac{te^{-xt}}{1-e^{-xt}}dh(t) =\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{te^{-xt}}{1-e^{-xt}} \frac{\Lambda(n)-1}{n}d(I(t-n))\] \[=\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{te^{-xt}}{1-e^{-xt}} \frac{\Lambda(n)-1}{n}d\delta_{n}(t)\] \[=\sum_{n=1}^{\infty}\frac{ne^{-nx}}{1-e^{-nx}}\frac{\Lambda(n)-1} {n}\] \[=f(x).\] On the other hand, integrating by parts, \[f(x) =\int_{0}^{\infty}\frac{te^{-xt}}{1-e^{-xt}}dh(t)\] \[=\int_{0}^{\infty}\frac{1}{x}\frac{xte^{-xt}}{1-e^{xt}}dh(t)\] \[=\int_{0}^{\infty}\frac{1}{x}\frac{xte^{-xt}}{1-e^{-xt}}dh(t)\] \[=\int_{0}^{\infty}\frac{1}{x}\frac{te^{-t}}{1-e^{-t}}dh\left( \frac{t}{x}\right)\] \[=-\int_{0}^{\infty}\frac{1}{x}g(t)h\left(\frac{t}{x}\right)dt\] \[=-\int_{0}^{\infty}g(xt)h(t)dt.\] By Theorem 8, as \(x\to 0^{+}\), \[f(x)=-\frac{2\gamma}{x}+O(x^{-1/2}),\] i.e., as \(x\to 0^{+}\), \[\int_{0}^{\infty}g(xt)h(t)dt=\frac{2\gamma}{x}+O(x^{-1/2}).\] Thus, as \(x\to\infty\), \[\int_{0}^{\infty}g\left(\frac{t}{x}\right)h(t)dt=2\gamma x+O(x^{1/2}).\] The following is from Widder [84, p. 232]. **Lemma 11**.: \[\int_{0}^{\infty}t^{-ix}g(t)dt=\begin{cases}-1&x=0\\ ix\zeta(1-ix)\Gamma(1-ix)&x\neq 0.\end{cases}\] Proof.: \[\int_{0}^{\infty}t^{-ix}g(t)dt =\int_{0}^{\infty}t^{-ix}\frac{d}{dt}\left(\frac{te^{-t}}{1-e^{- t}}\right)dt\] \[=\lim_{\delta\to 0}\int_{0}^{\infty}t^{-ix+\delta}\frac{d}{dt} \left(\frac{te^{-t}}{1-e^{-t}}\right)dt\] \[=\lim_{\delta\to 0}\left(t^{-ix+\delta}\frac{te^{-t}}{1-e^{- t}}\Big{|}_{0}^{\infty}+(ix-\delta)\int_{0}^{\infty}t^{-ix+\delta-1}\frac{te^{- t}}{1-e^{-t}}dt\right)\] \[=\lim_{\delta\to 0}(ix-\delta)\int_{0}^{\infty}t^{-ix+\delta-1} \frac{te^{-t}}{1-e^{-t}}dt\] \[=\lim_{\delta\to 0}(ix-\delta)\int_{0}^{\infty}\frac{t^{(-ix+ \delta+1)-1}e^{-t}}{1-e^{-t}}dt.\] Using \[\int_{0}^{\infty}\frac{t^{s-1}}{e^{t}-1}dt=\zeta(s)\Gamma(s),\qquad\operatorname {Re}\left(s\right)>1,\] this becomes \[\int_{0}^{\infty}t^{-ix}g(t)dt=\lim_{\delta\to 0^{+}}(ix-\delta)\zeta(1+ \delta-ix)\Gamma(1+\delta-ix).\] If \(x=0\), then using \[\zeta(s)=\frac{1}{s-1}+\gamma+O(|s-1|),\qquad s\to 1,\] we get \[\lim_{\delta\to 0^{+}}(-\delta)\zeta(1+\delta)\Gamma(1+\delta)=-1.\] If \(x>0\), then \[\lim_{\delta\to 0^{+}}(ix-\delta)\zeta(1+\delta-ix)\Gamma(1+\delta-ix)=ix\zeta(1-ix) \Gamma(1-ix).\] By Wiener's tauberian theorem, it follows that \[\sum_{n=1}^{\infty}\frac{\Lambda(n)-1}{n}=-2\gamma.\] **Lemma 12**.: \[h(x)=\int_{\frac{1}{2}}^{x}\frac{d(\psi(t)-[t])}{t}.\] Proof.: Let \(I(t)=0\) for \(t<0\) and \(I(t)=1\) for \(t\geq 0\). Writing \[\psi(x)=\sum_{n=1}^{\infty}I(x-n)\Lambda(n),\qquad[x]=\sum_{n=1}^{\infty}I(x-n),\] we have \[\int_{\frac{1}{2}}^{x}\frac{d(\psi(t)-[t])}{t} =\int_{\frac{1}{2}}^{x}\frac{1}{t}d\left(\sum_{n=1}^{\infty}I(t-n )(\Lambda(n)-1)\right)\] \[=\int_{\frac{1}{2}}^{x}\frac{1}{t}\sum_{n=1}^{\infty}(\Lambda(n) -1)d\delta_{n}(t)\] \[=\sum_{1\leq n\leq x}\frac{\Lambda(n)-1}{n}\] \[=h(x).\] Thus, we have established that \[\int_{\frac{1}{2}}^{\infty}\frac{d(\psi(t)-[t])}{t}=-2\gamma.\] ## 41 Hermite Hermite [42] Hermite [43] ## 42 Gerhardt Gerhardt [33, p. 196] refers to Lambert's _Architectonic_. ## 43 Levi-Civita Levi-Civita [57] ## 44 Franel Franel [32] and [31] The next theorem shows that the set of points on the unit circle that are singularities of \(\sum_{n=1}^{\infty}\frac{z^{n}}{1-z^{n}}\) is dense in the unit circle. Titchmarsh [82, pp. 160-161, SS4.71]. 
**Theorem 13**.: _For \(|z|<1\), define_ \[f(z)=\sum_{n=1}^{\infty}\frac{z^{n}}{1-z^{n}}.\] _Suppose that \(p>0,q>1\) are relatively prime integers. As \(r\to 1^{-}\),_ \[(1-r)f(re^{2\pi i/q})\to\infty.\] Proof.: Set \(z=re^{2\pi ip/q}\) and write \[\sum_{n=1}^{\infty}\frac{z^{n}}{1-z^{n}}=\sum_{n\equiv 0\pmod{q}}\frac{z^{n}}{ 1-z^{n}}+\sum_{n\not\equiv 0\pmod{q}}\frac{z^{n}}{1-z^{n}}.\] On the one hand, \[(1-r)\sum_{n\equiv 0\pmod{q}}\frac{z^{n}}{1-z^{n}} =(1-r)\sum_{m=1}^{\infty}\frac{z^{mq}}{1-z^{mq}}\] \[=(1-r)\sum_{m=1}^{\infty}\frac{(re^{2\pi ip/q})^{mq}}{1-(re^{2\pi ip /q})^{mq}}\] \[=(1-r)\sum_{m=1}^{\infty}\frac{r^{mq}}{1-r^{mq}}\] \[=\frac{1-r}{1-r^{q}}\sum_{m=1}^{\infty}\frac{r^{mq}}{1+r^{q}+ \cdots+r^{(m-1)q}}\] \[=\frac{1}{1+r+\cdots+r^{q-1}}\sum_{m=1}^{\infty}\frac{r^{mq}}{1+r ^{q}+\cdots+r^{(m-1)q}}\] \[\geq\frac{1}{q}\sum_{m=1}^{\infty}\frac{r^{mq}}{m}\] \[=-\frac{1}{q}\log(1-r^{q})\] \[\to\infty\] as \(r\to 1\). On the other hand, for \(n\not\equiv 0\pmod{q}\) we have \[|1-z^{n}|^{2} =|1-r^{n}e^{2\pi ipn/q}|^{2}\] \[=(1-r^{n}e^{2\pi ipn/q})(1-r^{n}e^{-2\pi ipn/q})\] \[=1-r^{n}(e^{2\pi ipn/q}+e^{-2\pi ipn/q})+r^{2n}\] \[=1-2r^{n}\cos 2\pi pn/q+r^{2n}\] \[=1-2r^{n}+4r^{n}\sin^{2}\frac{\pi pn}{q}+r^{2n}\] \[=(1-r^{n})^{2}+4r^{n}\sin^{2}\frac{\pi pn}{q}.\] So far we have not used the hypothesis that \(n\equiv 0\pmod{q}\). We use it to obtain \[\sin\frac{\pi pn}{q}\geq\sin\frac{\pi}{q}.\] With this we have \[|1-z^{n}|^{2}\geq 4r^{n}\sin^{2}\frac{\pi}{q},\] and therefore, as \(r<1\), \[(1-r)\left|\sum_{n\not\equiv 0\pmod{q}}\frac{z^{n}}{1-z^{n}}\right| \leq(1-r)\sum_{n\not\equiv 0\pmod{q}}\frac{|z|^{n}}{|1-z^{n}|}\] \[\leq(1-r)\sum_{n\not\equiv 0\pmod{q}}\frac{r^{n}}{2r^{n/2}\sin \frac{\pi}{q}}\] \[\leq\frac{1-r}{2\sin\frac{\pi}{q}}\sum_{n=0}^{\infty}r^{n/2}\] \[=\frac{1-r}{2\sin\frac{\pi}{q}}\cdot\frac{1}{1-\sqrt{r}}\] \[=\frac{1+\sqrt{r}}{2\sin\frac{\pi}{q}}\] \[<\frac{1}{\sin\frac{\pi}{q}}.\] ## 45 Wigert The following result is proved by Wigert [86]. Our proof follows Titchmarsh [81, p. 163, Theorem 7.15]. Cf. Landau [55]. **Theorem 14**.: _For \(\lambda<\frac{1}{2}\pi\) and \(N\geq 1\),_ \[\sum_{n=1}^{\infty}d(n)e^{-nz}=\frac{\gamma}{z}-\frac{\log z}{z}+\frac{1}{4}- \sum_{n=0}^{N-1}\frac{B_{2n+2}^{2}}{(2n+2)!(2n+2)}z^{2n+1}+O(|z|^{2N})\] _as \(z\to 0\) in any angle \(|\arg z|\leq\lambda\)._ Proof.: For \(\sigma>1\), \(s=\sigma+it\), \[\zeta^{2}(s)=\sum_{n=1}^{\infty}\frac{d(n)}{n^{s}}.\] Using this, for \(\operatorname{Re}z>0\) we have \[\frac{1}{2\pi i}\int_{2-i\infty}^{2+i\infty}\Gamma(s)\zeta^{2}(s )z^{-s}ds =\sum_{n=1}^{\infty}d(n)\frac{1}{2\pi i}\int_{2-i\infty}^{2+i\infty }\Gamma(s)(nz)^{-s}ds\] \[=\sum_{n=1}^{\infty}d(n)e^{-nz}. \tag{12}\] Define \(F(s)=\Gamma(s)\zeta^{s}(s)z^{-s}\). \(F\) has poles at \(1,0\), and the negative odd integers. (At each negative even integer, \(\Gamma\) has a first order pole but \(\zeta^{2}\) has a second order zero.) First we determine the residue of \(F\) at \(1\). 
We use the asymptotic formula \[\zeta(s)=\frac{1}{s-1}+\gamma+O(|s-1|),\qquad s\to 1,\] the asymptotic formula \[\Gamma(s)=1-\gamma(s-1)+O(|s-1|^{2}),\qquad s\to 1,\] and the asymptotic formula \[z^{-s}=\frac{1}{z}-\frac{\log z}{z}(s-1)+O(|s-1|^{2}),\qquad s\to 1,\] to obtain \[\Gamma(s)\zeta^{s}(s)z^{-s} =(1-\gamma(s-1)+O(|s-1|^{2}))\cdot\left(\frac{1}{(s-1)^{2}}+\frac {2\gamma}{s-1}+O(|s-1|^{2})\right)\] \[\cdot\left(\frac{1}{z}-\frac{\log z}{z}(s-1)+O(|s-1|^{2})\right)\] \[=\frac{1}{z(s-1)^{2}}-\frac{\gamma}{z(s-1)}+\frac{2\gamma}{z(s-1 )}-\frac{\log z}{z(s-1)}+O(1)\] \[=\frac{1}{z(s-1)^{2}}+\frac{\gamma}{z(s-1)}-\frac{\log z}{z(s-1 )}+O(1).\] Hence the residue of \(F\) at \(1\) is \[\frac{\gamma}{z}-\frac{\log z}{z}.\] Now we determine the residue of \(F\) at \(0\). The residue of \(\Gamma\) at \(0\) is \(1\), and hence the residue of \(F\) at \(0\) is \[1\cdot\zeta^{2}(0)\cdot z^{0}=\zeta^{2}(0)=\left(-\frac{1}{2}\right)^{2}= \frac{1}{4}.\] Finally, for \(n\geq 0\) we determine the residue of \(F\) at \(-(2n+1)\). The residue of \(\Gamma\) at \(-(2n+1)\) is \(\frac{(-1)^{2n+1}}{(2n+1)!}\), hence the residue of \(F\) at \(-(2n+1)\) is \[\frac{(-1)^{2n+1}}{(2n+1)!}\cdot\zeta^{2}(2n+1)\cdot z^{2n+1}=-\frac{B_{2n+2} ^{2}}{(2n+2)!(2n+2)}z^{2n+1}\] using \[\zeta(-m)=-\frac{B_{m+1}}{m+1},\qquad m\geq 1.\] Let \(M>0\), and let \(C\) be the rectangular path starting at \(2-iM\), then going to \(2+iM\), then going to \(-2N+iM\), then going to \(-2N-iM\), and then ending at \(2-iM\). By the residue theorem, \[\int_{C}F(s)ds=2\pi i\left(\frac{\gamma}{z}-\frac{\log z}{z}+\frac{1}{4}+\sum _{n=0}^{N-1}-\frac{B_{2n+2}^{2}}{(2n+2)!(2n+2)}z^{2n+1}\right). \tag{13}\] Denote the right-hand sideof (13) by \(2\pi iR\). We have \[\int_{C}F(s)ds=\int_{2-iM}^{2+iM}F(s)ds+\int_{2+iM}^{-2N+iM}F(s)ds+\int_{-2N+iM}^ {-2N-iM}F(s)ds+\int_{-2N-iM}^{2-iM}F(s)ds.\] We shall show that the second and fourth integrals tend to \(0\) as \(M\to\infty\). For \(s=\sigma+it\) with \(-2N\leq\sigma\leq 2\), Stirling's formula [82, p. 151] tells us that \[|\Gamma(s)|\sim\sqrt{2\pi}e^{-\frac{\pi}{2}|t|}|t|^{\sigma-\frac{1}{2}},\qquad |t|\to\infty.\] As well [81, p. 95], there is some \(K>0\) such that in the half-plane \(\sigma\geq-2N\), \[\zeta(s)=O(|t|^{K}).\] Also, \[z^{-s} =e^{-s\log z}\] \[=e^{-(\sigma+it)(\log|z|+i\arg z)}\] \[=e^{-\sigma\log|z|+t\arg z-i(\sigma\arg z+t\log|z|)},\] and so for \(|\arg z|\leq\lambda\), \[|z^{-s}|=e^{-\sigma\log|z|+t\arg z}\leq e^{-\sigma\log|z|+\lambda|t|}=|z|^{- \sigma}e^{\lambda|t|}.\] Therefore \[\left|\int_{2+iM}^{-2N+iM}F(s)ds\right|\leq(2+2N)\sup_{-2N\leq\sigma\leq 2}|F( \sigma+iM)|=O(e^{-\frac{\pi}{2}M}M^{\sigma-\frac{1}{2}}M^{2K}|z|^{-\sigma}e^{ \lambda M}),\] and because \(\lambda<\frac{\pi}{2}\) this tends to \(0\) as \(M\to\infty\). Likewise, \[\left|\int_{-2N-iM}^{2-iM}F(s)ds\right|\to 0\] as \(M\to\infty\). It follows that \[\int_{2-i\infty}^{2+i\infty}F(s)ds+\int_{-2N+i\infty}^{-2N-i\infty}F(s)ds=2 \pi iR.\] Hence, \[\int_{2-i\infty}^{2+i\infty}F(s)ds=2\pi iR+\int_{-2N-i\infty}^{-2N+i\infty}F( s)ds.\] We bound the integral on the right-hand side. We have \[\int_{-2N-i\infty}^{-2N+i\infty}F(s)ds=\int_{\sigma=-2N,|t|\leq 1}F(s)ds+\int_{ \sigma=-2N,|t|>1}F(s)ds.\] The first integral satisfies \[\left|\int_{\sigma=-2N,|t|\leq 1}F(s)ds\right|\leq\int_{\sigma=-2N,|t|\leq 1}| \Gamma(s)\zeta^{2}(s)||z|^{-\sigma}e^{\lambda|t|}ds=|z|^{2N}\cdot O(1)=O(|z|^{2N}),\] because \(\Gamma(s)\zeta^{2}(s)\) is continuous on the path of integration. 
The second integral satisfies \[\left|\int_{\sigma=-2N,|t|>1}F(s)ds\right| \leq\int_{\sigma=-2N,|t|>1}e^{-\frac{\pi}{2}|t|}|t|^{\sigma-\frac {1}{2}}|t|^{K}|z|^{-\sigma}e^{\lambda|t|}ds\] \[=|z|^{2N}\int_{\sigma=-2N,|t|>1}e^{-\frac{\pi}{2}|t|}|t|^{-2N- \frac{1}{2}}|t|^{K}e^{\lambda|t|}dt\] \[=|z|^{2N}\cdot O(1)\] \[=O(|z|^{2N}),\] because \(\lambda<\frac{\pi}{2}\). This establishes \[\frac{1}{2\pi i}\int_{2-i\infty}^{2+i\infty}F(s)ds=R+O(|z|^{2N}).\] Using (12) and (13), this becomes \[\sum_{n=1}^{\infty}d(n)e^{-nz}=\frac{\gamma}{z}-\frac{\log z}{z}+\frac{1}{4} -\sum_{n=0}^{N-1}\frac{B_{2n+2}^{2}}{(2n+2)!(2n+2)}z^{2n+1}+O(|z|^{-2N}),\] completing the proof. For example, as \(B_{2}=\frac{1}{6},B_{4}=-\frac{1}{30},B_{6}=\frac{1}{42}\), the above theorem tells us that \[\sum_{n=1}^{\infty}d(n)e^{-nz}=\frac{\gamma}{z}-\frac{\log z}{z}+\frac{1}{4}- \frac{z}{144}-\frac{z^{3}}{86400}-\frac{z^{5}}{7620480}+O(|z|^{6}).\] ## 46 Steffensen Steffensen [75] ## 47 Szego Szego [79] ## 48 Polya and Szego Polya and Szego [64] ## 49 Partition function Let \[F(x)=\sum_{n=0}^{\infty}p(n)x^{n}=\prod_{n=1}^{\infty}\frac{1}{1-x^{n}}.\] Taking the logarithm, \[\log F(x)=\sum_{n=1}^{\infty}\log\frac{1}{1-x^{n}}=-\sum_{n=1}^{\infty}\log(1-x ^{n})=-\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}-\frac{(x^{n})^{m}}{m},\] and switching the order of summation gives \[\log F(x)=\sum_{m=1}^{\infty}\frac{1}{m}\sum_{n=1}^{\infty}(x^{m})^{n}=\sum_{m =1}^{\infty}\frac{1}{m}\frac{x^{m}}{1-x^{m}}.\] On the one hand, for \(0<x<1\) we have \(mx^{m-1}(1-x)<1-x^{m}\) and using this, \[\sum_{m=1}^{\infty}\frac{1}{m}\frac{x^{m}}{1-x^{m}}<\sum_{m=1}^{\infty}\frac{ 1}{m}\frac{x^{m}}{mx^{m-1}(1-x)}=\frac{x}{1-x}\sum_{m=1}^{\infty}\frac{1}{m^{ 2}}=\frac{\pi^{2}}{6}\frac{x}{1-x}.\] On the other hand, for \(-1<x<1\) we have \(1-x^{m}<m(1-x)\), and using this, for \(0<x<1\) we have \[\sum_{m=1}^{\infty}\frac{1}{m}\frac{x^{m}}{1-x^{m}}>\sum_{m=1}^{\infty}\frac{ 1}{m}\frac{x^{m}}{m(1-x)}=\frac{1}{1-x}\sum_{m=1}^{\infty}\frac{x^{m}}{m^{2}}.\] Thus, for \(0<x<1\), \[\sum_{m=1}^{\infty}\frac{x^{m}}{m^{2}}<(1-x)\log F(x)<\frac{\pi^{2}}{6}x.\] Taking \(x\to 1^{-}\) gives \[\frac{\pi^{2}}{6}\leq\lim_{x\to 1^{-}}(1-x)\log F(x)\leq\frac{\pi^{2}}{6},\] i.e., \[\log F(x)\sim\frac{\pi^{2}}{6}\frac{1}{1-x},\qquad x\to 1^{-}.\] See Stein and Shakarchi [76, p. 311]. ## 50 Hansen Hansen [39] ## 51 Kiseljak Kiseljak [45] ## 52 Unsorted In 1892, in volume VII, no. 23, p. 296 of the weekly _Naturwissenschaftliche Rundschau_, it is stated that for the year 1893, one of the six prize questions for the Belgian Academy of Sciences in Brussels is to determine the sum of the Lambert series \[\frac{x}{1-x}+\frac{x^{2}}{1-x^{2}}+\frac{x^{3}}{1-x^{3}}+\cdots,\] or if one cannot do this, to find a differential equation that determines the function. Gram [35] on distribution of prime numbers. Hardy [40] Bohr and Cramer [1, p. 820] Flajolet, Gourdon and Dumas [30]
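Referring back to Wigert's expansion (Theorem 14 and the explicit \(B_2,B_4,B_6\) form given above), the following sketch of mine evaluates both sides at a small real \(z\). The cutoff \(N\) is an arbitrary truncation of the divisor sum; \(e^{-Nz}\) is negligible for the chosen values.

```python
# Numerical spot check (not from the cited sources) of Wigert's expansion stated above:
#   sum_{n>=1} d(n) e^{-nz}  ~  gamma/z - (log z)/z + 1/4 - z/144 - z^3/86400 - z^5/7620480
# for a small real z. The truncation N is an arbitrary choice.
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def num_divisors(n):
    return sum(1 for k in range(1, n + 1) if n % k == 0)

z, N = 0.1, 600
lhs = sum(num_divisors(n) * math.exp(-n * z) for n in range(1, N + 1))
rhs = (GAMMA / z - math.log(z) / z + 0.25
       - z / 144 - z**3 / 86400 - z**5 / 7620480)
print(lhs, rhs)  # the two values should agree to many decimal places at z = 0.1
```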
2309.16643
Deep Geometrized Cartoon Line Inbetweening
We aim to address a significant but understudied problem in the anime industry, namely the inbetweening of cartoon line drawings. Inbetweening involves generating intermediate frames between two black-and-white line drawings and is a time-consuming and expensive process that can benefit from automation. However, existing frame interpolation methods that rely on matching and warping whole raster images are unsuitable for line inbetweening and often produce blurring artifacts that damage the intricate line structures. To preserve the precision and detail of the line drawings, we propose a new approach, AnimeInbet, which geometrizes raster line drawings into graphs of endpoints and reframes the inbetweening task as a graph fusion problem with vertex repositioning. Our method can effectively capture the sparsity and unique structure of line drawings while preserving the details during inbetweening. This is made possible via our novel modules, i.e., vertex geometric embedding, a vertex correspondence Transformer, an effective mechanism for vertex repositioning and a visibility predictor. To train our method, we introduce MixamoLine240, a new dataset of line drawings with ground truth vectorization and matching labels. Our experiments demonstrate that AnimeInbet synthesizes high-quality, clean, and complete intermediate line drawings, outperforming existing methods quantitatively and qualitatively, especially in cases with large motions. Data and code are available at https://github.com/lisiyao21/AnimeInbet.
Li Siyao, Tianpei Gu, Weiye Xiao, Henghui Ding, Ziwei Liu, Chen Change Loy
2023-09-28T17:50:05Z
http://arxiv.org/abs/2309.16643v1
# Deep Geometrized Cartoon Line Inbetweening ###### Abstract We aim to address a significant but understudied problem in the anime industry, namely the inbetweening of cartoon line drawings. Inbetweening involves generating intermediate frames between two black-and-white line drawings and is a time-consuming and expensive process that can benefit from automation. However, existing frame interpolation methods that rely on matching and warping whole raster images are unsuitable for line inbetweening and often produce blurring artifacts that damage the intricate line structures. To preserve the precision and detail of the line drawings, we propose a new approach, Animelnbet, which geometrizes raster line drawings into graphs of endpoints and reframes the inbetweening task as a graph fusion problem with vertex repositioning. Our method can effectively capture the sparsity and unique structure of line drawings while preserving the details during inbetweening. This is made possible via our novel modules, _i.e., vertex geometric embedding, a vertex correspondence Transformer, an effective mechanism for vertex repositioning and a visibility predictor. To train our method, we introduce Mixamo-Line240, a new dataset of line drawings with ground truth vectorization and matching labels. Our experiments demonstrate that Animelnbet synthesizes high-quality, clean, and complete intermediate line drawings, outperforming existing methods quantitatively and qualitatively, especially in cases with large motions. Data and code are available at [https://github.com/lisiyao21/AnimeInbet](https://github.com/lisiyao21/AnimeInbet). ## 1 Introduction Cartoon animation has undergone significant transformations since its inception in the early 1900s, when consecutive frames were manually drawn on paper. Although automated techniques now exist to assist with some specific procedures during animation production, such as colorization [22, 32, 10, 39, 4] and special effects [38], the core element - the line drawings of characters - still needs hand-drawing each frame individually, making 2D animation a labor-intensive industry. Developing an automated algorithm that can produce intermediate line drawings from two input key frames, commonly referred to as "inbetweening", has the potential to significantly improve productivity. Line inbetweening is not a trivial subset of general frame interpolation, as the structure of line drawings is extremely sparse. Unlike full-textured images, line drawings contain only around 3% black pixels, with the rest of the image being white background. As illustrated in Figure 2, this poses two significant challenges for existing raster-image-based frame interpolation methods. **1)** The lack of texture in line drawings makes it challenging to compute pixel-wise correspondence accurately in frame interpolation. One pixel can have many similar matching candidates, leading to inaccurate motion prediction. **2)** The warping and blending used in frame interpolation can blur the salient boundaries between the line and the background, leading to a significant loss of detail. To address the challenges posed by line inbetweening, we propose a novel deep learning framework called _AnimeInbet_, which inbetweens line drawings in a geometrized format instead of raster images. Specifically, the source images are transformed into vector graphs, and the goal is to synthesize an intermediate graph. This reformulation can overcome the challenges discussed earlier in this paper. 
As illustrated in Figure 2, the matching process in the geometric domain is conducted on concentrated geometric endpoint vertices, rather than all pixels, reducing potential ambiguity and leading to more accurate correspondence. Moreover, the repositioning does not change the topology of the line drawings, enabling preservation of the intricate and meticulous line structures. Compared to existing methods, our proposed _AnimeInbet_ framework can generate clean and complete intermediate line drawings, as demonstrated in Figure 1. The core idea of our proposed _AnimeInbet_ framework is to find matching vertices between two input line drawing graphs and then reposition them to create a new intermediate graph. To achieve this, we first design a vertex encoding strategy that embeds the geometric features of the endpoints of sparse line drawings, making them distinguishable from one another. We then apply a vertex correspondence Transformer to match the endpoints between the two input line drawings. Next, we propagate the shift vectors of the matched vertices to unmatched ones based on the similarities of their aggregated features to realize repositioning for all endpoints. Finally, we predict a visibility mask to erase the vertices and edges occluded in the inbetweened frame, ensuring a clean and complete intermediate frame. To facilitate supervised training on vertex correspondence, we introduce _MixamoLine240_, the first line art dataset with ground truth geometrization and vertex matching labels. The 2D line drawings in our dataset are selectively rendered from specific edges of a 3D model, with the endpoints indexed from the corresponding 3D vertices. By using 3D vertices as reference points, we ensure that the vertex matching labels in our dataset are accurate and consistent at the vertex level. In a conclusion, our work contributes a new and challenging task of line inbetweening, which could facilitate one of the most labor-intensive art production processes. We also propose a new method that outperforms existing solutions, and introduce a new dataset for comprehensive training. ## 2 Related Work **Frame Interpolation.** Frame interpolation is a widely studied task in recent years, involving synthesizing intermediate frames from existing ones. Many approaches have been proposed [13, 19, 20, 7, 17, 34, 18, 21, 26, 6, 23, 5, 14, 11], such as those that use optical flows or deep networks to search for matching areas and warp them to proper intermediate locations. Among the most recent algorithms, RIFE [6] directly predicts intermediate flows to warp the input frames and blends the warped frames into intermediate ones by a visible mask. VFIformer [14] adopts the same idea to predict the intermediate flows but proposes a Transformer to synthesize the intermediate from both warped images and features. Reda _et al_. [23] design a scale-agnostic feature pyramid to predict the intermediate flows and warp frames in a hierarchical manner to handle extreme large motions. Siyao and Zhao _et al_. [30] propose a frame interpolation pipeline specific for 2D cartoon in the wild, while Chen and Zwicker [5] improves the perceptual quality by embedding an optical-flow based line aggregator. While these methods achieve impressive performance on raster natural or cartoon videos, their pixel-oriented nature are not suitable for inbetweening concise and sparse line arts, which can yield severe artifacts and are not feasible for real usage in amine creation. 
**Research on Anime.** There has been increasing research interest in techniques to facilitate 2D cartoon creation, including sketch simplification [28, 27], vectorization Figure 3: **Geometrized line art in _MixamoLine240_. 2D endpoints and connected lines are projected from vertices and edges of ornial 3D mesh. Endpoints indexed to unique 3D vertices are matched (marked in the same colors).** Figure 2: **Raster _vs_ geometrized inbetweening. Top: search space of a pixel (left) _vs_ a vertex (right) in matching. Bottom: pixel warping/sampling (left) _vs_ vertex repositioning (right).** [40, 36, 15, 12], colorization [22, 32, 10, 39, 4], shading [38], head reenactment [8] and line-art-based cartoon generation [37]. While these studies may improve specific aspects of animation creation, the core line arts still rely on manual frame-by-frame drawing. Some sporadic rule-based methods have been developed for stroke inbetweening under strict conditions, but these methods lack the flexibility required for wider applications [35, 3]. Our work is the first to propose a deep learning-based method for inbetweening geometrized line arts. Additionally, we introduce vertex-wise correspondence datasets on line arts. It is noteworthy that existing datasets are not sufficiently 'clean' for our task since cartoon contour lines can cross the boundaries of motion, leading to incorrect corresponding labels at the vertex level [25, 29]. ## 3 Mixamo Line Art Dataset To facilitate training and evaluation of geometrized line inbetweening, we develop a large-scale dataset, named _MixamoLine240_, which consists of 240 sequences of consecutive line drawing frames, with 100 sequences for training and 140 for validation and testing. To obtain this vast amount of cartoon line data, we utilize a "Cel-shading" technique, _i.e_., to use computer graphics software (Blender in this work) to render 3D resources into an anime-style appearance that mimics the hand-drawn aristry. Unlike previous works [25, 29] that only provide raster images, _MixamoLine240_ also provides ground-truth geometrization labels for each frame, which include the coordinates of a group of vertices (\(V\)) and the connection topology (\(T\)). Additionally, we assign an index number (\(R[i]\)) to each 2D endpoint (\(V[i]\)) that refers to a unique vertex in the 3D mesh of the character, as illustrated in Figure 3, which can be further used to deduce the vertex-level correspondence. Specifically, given two frames \(I_{0}\) and \(I_{1}\) in a sequence, the 3D reference IDs reveal the vertex correspondence \(\{(i,j)\}\) for those vertices \(i\) in \(I_{0}\) and \(j\) in \(I_{1}\) having \(R_{0}[i]=R_{1}[j]\), while the rest unmatched vertices are marked as occluded. This strategy allows us to produce correspondence pairs with arbitrary frame gaps to flexibly adjust the input frame rate during training. Next, we discuss the construction and challenges inherent in the data. **Data Construction.** In Blender, the mesh structure of a 3D character remains stable, _i.e_., the number of 3D vertex and the edge topology keep constant, when moving without additional subdivision modifier. We employ this property to achieve consistent line art rendering and accurate annotations for geometrization and vertex matching. As shown in Figure 3, the original 3D mesh contains all the necessary line segments required to represent the character in line art. 
During rendering, the visible outline from the camera's perspective is selected based on the material boundary and the object's edge. This process ensures that every line segment in the resulting raster image corresponds to an edge in the original mesh. The 2D endpoints of each line segment are simply the relevant 3D vertices projected onto the camera plane, referenced by the unique and consistent index of the corresponding 3D vertex. Meanwhile, since the 3D mesh naturally defines the vertex connections, the topology of the 2D lines can be transferred from the selective edges used for rendering. To prevent any topological ambiguity that may be caused by overlapped vertices in 3D space, we merge the endpoints that are within a Euclidean distance of \(0.1\) in the projected 2D space. This enables us to obtain both the raster line drawings and the accurate labels of each frame. To create a diverse dataset, we used the open-source 3D material library Mixamo [1] and selected 20 characters and 20 actions, as shown in Figure 4. Each action has an average of 191 frames. We combined 10 characters and 10 actions to render 100 sequences, with a total of 19,930 frames as the training set. We then used the remaining 10 characters and 10 actions to render an 18,230-frame test set, ensuring that the training and testing partitions are exclusive. We also created a 44-sequence validation set, consisting of 20 unseen characters, 20 unseen actions, and 4 with both unseen character and action. To create this set, we combined the test characters "Swat"and "Warrok" and actions "sword slash" and "hip hop" with the training characters and actions. The \begin{table} \begin{tabular}{l l r r r r} \hline \hline \multicolumn{2}{c}{Frame gap\(\rightarrow\)} & 0 (60 fps) & 1 (30 fps) & 5 (10 fps) & 9 (6 fps) \\ \hline \multirow{3}{*}{**Test actions**} & Occlusion rate (\%) & 14.8 & 21.5 & 37.8 & 46.6 \\ & Avg. vtx shift & 8.6 & 16.4 & 42.6 & 62.8 \\ & Avg. max vtx shift & 26.0 & 48.9 & 129.7 & 192.3 \\ \hline \multirow{3}{*}{**Test actions**} & Occlusion rate (\%) & 18.4 & 26.5 & 44.2 & 53.5 \\ & Avg. vtx shift & 7.8 & 14.9 & 38.9 & 57.0 \\ \cline{1-1} & Avg. max vtx shift & 23.8 & 45.0 & 119.3 & 173.5 \\ \hline \hline \end{tabular} \end{table} Table 1: **Difficulty statistics with various frame gaps.** Figure 4: **Data composition. Training and test sets are separately composed by 10 characters \(\times\) 10 actions. First & second rows are training & test characters, respectively. Shaded are for validation.** validation set contains 11,102 frames and was also rendered at 1080p resolution with a frame rate of 60 fps. To ensure consistency across all frames, we cropped and resized each frame to a unified \(720\times 720\) character-centered image. **Challenges.** Table 1 summarizes the statistics that reflect the difficulty of the line inbetweening task under various input frame rates. With an increase in frame gaps, the inbetweening task becomes more challenging with larger motion magnitudes and higher occlusion percentages. For instance, when the frame gap is 9, the input frame rate becomes 6 fps, and the average vertex shift is 62.8 pixels. The mean value of the maximum vertex shift in a frame ("Avg. max vtx shift") reaches 192.3 pixels, which is 27% of the image width. Additionally, nearly half of the vertices are unmatched in such cases, making line inbetweening a tough problem. Furthermore, the image composition of the test set is more complex than that of the training set. 
A training frame has an average of 1,256 vertices and 1,753 edges, while a test frame has an average of 1,512 vertices and 2,099 edges since the test set has more complex characters such as "Maw". ## 4 Our Approach An overview of the proposed line inbetweening framework, _AnimeInbet_, is depicted in Figure 5. Unlike existing frame interpolation methods that use raw raster images \(I_{0}\) and \(I_{1}\), we process vector graphs \(G_{0}=\{V_{0},T_{0}\}\) and \(G_{1}=\{V_{1},T_{1}\}\) instead. The vertex coordinates in the images are represented by \(V\in\mathbb{R}^{K\times 2}\), and the binary adjacency matrix is denoted by \(T\in 0,1^{K\times K}\), where \(K\) denotes the number of vertices. The goal is to generate the intermediate graph \(G_{t}\) at time \(t\in(0,1)\). To this end, we first design a CNN-based vertex geometric embedding to encode \(V_{0}\) and \(V_{1}\) to features \(F_{0}\) and \(F_{1}\), respectively, as detailed in Section 4.1. Along with the embeddings, a vertex correspondence Transformer is proposed to aggregate the mutuality of vertex features to \(\hat{F}_{0}\) and \(\hat{F}_{1}\) by alternating self- and cross-attention layers (Section 4.2). The aggregated features are used to compute the correlation matrix \(\mathcal{C}\in\mathbb{R}^{K_{0}\times K_{1}}\) and to induce the vertex matching by row-wise and column-wise argmax. In cases where vertices are occluded during large motion, we adopt a self-attention-based layer to propagate the vertex shifts from matched vertices to the unmatched, and obtain repositioning vectors \(r_{0}\in\mathbb{R}^{K_{0}\times 2}\) and \(r_{1}\in\mathbb{R}^{K_{1}\times 2}\) for all vertices (Section 4.3). Finally, we superpose the two input graphs based on the predicted correspondence, and we further refine the output by predicting visibility maps \(m_{0}\in\{0,1\}^{K_{0}}\) and \(m_{1}\in\{0,1\}^{K_{1}}\) to mask off those vertices of \(V_{0}\) and \(V_{1}\) that disappear in the intermediate frame, respectively, to obtain the final inbetweened line drawing \(G_{t}\), as explained in Section 4.5. **Geometrizing Line Drawings.** The process of creating artwork has become largely digital, allowing for direct export in vectorized format. However, for line drawings that only appear in raster images, there are various commercial software and open-source research projects available [40, 36, 15, 12] that can be used to convert the raster images into the required vectorized input format. We will ablate the performance of line vectorization in our experiments. Figure 5: **Pipeline of proposed _AnimeInbet_. Our framework is composed of four main parts: the vertex geometric embedding, the vertex correspondence Transformer, repositioning propagation and graph fusion. Given a pair of line images \(I_{0}\) and \(I_{1}\) and their vector graphs \(G_{0}\) and \(G_{1}\), our method generates the intermediate frame \(G_{t}\) in geometrized format.** Figure 6: **Vertex Geometric Embedding. The goal is to obtain discriminative and meaningful features to describe each vertex.** ### Vertex Geometric Embedding Discriminative features for each vertex are desired to achieve accurate graph matching. Line graphs are different from general graphs as the spatial position of endpoint vertices, in addition to the topology of connections, determines the geometric shape of the line. 
The geometric graph embedding for line art is hence designed to comprise three parts: **1) image contextual embedding, 2) positional embedding**, and **3) topological embedding**, as shown in Figure 6. For image contextual embedding, we use a 2D CNN \(\mathcal{E}_{I}\) to extract deep contextual features within the same size of the input raster image \(I\). Then, for each vertex \(V_{0}[i]:=(x,y)\) we index feature \(\mathcal{E}_{I}(I)\left[(x,y)\right]\) as the image embedding for the \(i\)-th vertex. As to the positional embedding, we employ a 1D CNN \(\mathcal{E}_{P}\) to map each vertex coordinate \((x,y)\) to a \(C\)-dimensional feature. To include the topological information into a lower dimensional feature, we first conduct spectral embedding [2]\(\mathcal{S}\) on the binary adjacency matrix \(T\), which involves an eigenvector decomposition on the Laplacian matrix of the graph, then feed the spectral embedding to a subsequent 1D CNN \(\mathcal{E}_{T}\). The final geometric graph embedding is formulated as \[F_{0}=\mathcal{E}_{I}\left(I_{0}\right)\left[V_{0}\right]+\mathcal{E}_{P} \left(V_{0}\right)+\mathcal{E}_{T}\left(\mathcal{S}\left(T_{0}\right)\right). \tag{1}\] We obtain \(F_{1}\) in the same way. ### Vertex Correspondence Transformer We use geometric features \(F_{0}\) and \(F_{1}\) to establish a vertex-wise correspondence between \(G_{0}\) and \(G_{1}\). Specifically, we compute a correlation matrix between vertex features and identify the matching pair as those with the highest value across both the row and the column of the matrix. Prior to this step, we apply a Transformer that aggregates the mutual consistency both intra- and inter-graph. **Mutual Aggregation.** Following [24, 31], we employ a cascade of alternating self- and cross-attention layers to aggregate the vertex feature. In a self-attention layer, all queries, keys and values are derived from the single source feature, \[SA(F_{0})=\text{softmax}\left(\frac{\mathcal{Q}(F_{0})\mathcal{K}^{T}(F_{0})} {\sqrt{C}}\right)\mathcal{V}(F_{0}), \tag{2}\] where \(\mathcal{Q}\), \(\mathcal{K}\) and \(\mathcal{V}\) represent MLPs for query, key and value, respectively; while in the cross-attention layer, the keys and values are computed from another feature: \[CA(F_{0},F_{1})=\text{softmax}\left(\frac{\mathcal{Q}(F_{0})\mathcal{K}^{T}(F _{1})}{\sqrt{C}}\right)\mathcal{V}(F_{1}). \tag{3}\] After \(N\) layers of rotating self- and cross-attention layers as shown in Figure 7, we obtain aggregated feature \(\hat{F}_{0}\) and \(\hat{F}_{1}\). In the aggregation, each vertex is represented as an attentional pooling of all other vertices within the same graph and across the two graphs achieving a full fusion of information with mutual dependencies. **Correlation Matrix and Vertex Matching.** We compute the correlation matrix \(\mathcal{P}\) as \(\mathcal{P}=\frac{\hat{F}_{0}\hat{F}_{1}^{T}}{\sqrt{C}}\). We further apply a differentiable optimal transport (\(OT\)) [24] to improve the dual selection consistency and obtain \(\hat{\mathcal{P}}=OT(\mathcal{P})\). Then, we predict the one-way matching from \(G_{0}\) to \(G_{1}\) and vice versa as \(\arg\max\) indices across rows and columns: \[\left\{\begin{array}{l}\mathcal{M}_{0\to 1}=\{(i,j)|j=\arg\max\hat{ \mathcal{P}}_{i,:},i=0,...,K_{0}-1\}\\ \mathcal{M}_{1\to 0}=\{(i,j)|i=\arg\max\hat{\mathcal{P}}_{:,j},j=0,...,K_{1}-1\}. \end{array}\right. 
\tag{4}\] A vertex pair is selected into the final correspondence if it is mutually consistent and its correlation value is larger than \(\theta\): \[\hat{\mathcal{M}}=\left\{(i,j)|(i,j)\in\mathcal{M}_{0\to 1}\cap M_{1\to 0},\hat{ \mathcal{P}}_{i,j}>\theta\right\}. \tag{5}\] Otherwise, vertices will be considered to be occluded. ### Repositioning Propagation Fused vertices \((i,j)\) from vertex correspondence can be linearly relocated to \(tV_{0}[i]+(1-t)V_{1}[j]\) in intermediate graph \(G_{t}\) based on time \(t\). However, the positions of the unmatched vertices in \(G_{t}\) are still unknown. To reposition these vertices, we design an attention-based scheme similar to Xu _et al_. [33] to predict bidirectional shift vectors \(r_{0\to 1}\) and \(r_{1\to 0}\) for \(V_{0}\) and \(V_{1}\), respectively. Formally, \[\left\{\begin{array}{l}r_{0\to 1}=\text{softmax}\left(\frac{\hat{F}_{0} \hat{F}_{0}^{T}}{\sqrt{C}}\right)\left(\text{softmax}(\hat{\mathcal{P}})V_{1 }-V_{0}\right)\\ r_{1\to 0}=\text{softmax}\left(\frac{\hat{F}_{1}\hat{F}_{1}^{T}}{\sqrt{C}} \right)\left(\text{softmax}(\hat{\mathcal{P}}^{T})V_{0}-V_{1}\right).\end{array}\right. \tag{6}\] We then compute the final repositioning vectors as follows: \[r_{0}[i]=\left\{\begin{array}{l}V_{1}[j]-V_{0}[i],\;\;\;\text{if}\;\;\exists \;\;j\;s.t.\;(i,j)\in\hat{\mathcal{M}},\\ r_{0\to 1}[i],\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \text{otherwise},\end{array}\right. \tag{7}\] while \(r_{1}\) is computed in a similar way. In this step, the motion vector \(r_{0\to 1}\) of an unmatched vertex \(V_{0}[i]\) is computed as a softmax average of shifts to all vertices in \(G_{1}\), _i.e._, \(\text{softmax}(\hat{\mathcal{P}}_{i,:})V_{1}-V_{0}\). It is then refined by attention pooling from matched vertices, based on self-similarity given by \(\hat{F}_{0}\hat{F}_{0}^{T}/\sqrt{C}\). Vertices are reasonably repositioned in the new vector graph after this step. Figure 7: **Vertex Correspondence Transformer.** SA and CA represent self-attention and cross-attention, respectively. ### Visibility Prediction and Graph Fusion To handle occlusions in the source line arts, we use a three-layer MLP to predict binary visibility maps \(m_{0}\) and \(m_{1}\) for the input graphs, obtained as \(m_{0}=\text{MLP}(\hat{F}_{0})\) and \(m_{1}=\text{MLP}(\hat{F}_{1})\). Then, we merge the vertices to \(V_{t}\) in the two graphs according to the following rule: \[\begin{split} V_{t}&=\left\{(1-t)V_{0}[i]+tV_{1}[j] \,\Big{|}\,(i,j)\in\hat{\mathcal{M}}\right\}\\ &\cup\left\{V_{0}[i]+t\cdot r_{0}[i]\,\Big{|}\,i\notin\hat{ \mathcal{M}},m_{0}[i]=1\right\}\\ &\cup\left\{V_{1}[j]+(1-t)r_{1}[j]\,\Big{|}\,j\notin\hat{ \mathcal{M}},m_{1}[j]=1\right\},\end{split} \tag{8}\] where we implement the repositioning that is compatible with arbitrary time \(t\in(0,1)\). As to \(T_{t}\), we union all original connections if both endpoint vectors are both visible in \(G_{t}\). Or formally, \(T_{t}[\tilde{i}][\tilde{j}]=T_{t}[\tilde{j}][\tilde{i}]=1\) if \(T_{0}[i][j]=1\) or \(T_{1}[i][j]=1\), where \((i,j)\) and \((\tilde{i},\tilde{j})\) are the vertex indices in the original graph and the merged one. 
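To make the matching and repositioning steps concrete, the following is a minimal NumPy sketch of the mutual-argmax matching in Eqs. (4)-(5) and the attention-style shift propagation of Eqs. (6)-(7). It is not the authors' implementation: the optimal transport layer, the learned encoders, the visibility MLP and all trained weights are omitted, and the function and variable names are my own; the toy inputs are random.

```python
# Simplified NumPy sketch (not the authors' code) of Eqs. (4)-(7): mutual-argmax vertex
# matching with a confidence threshold, followed by attention-style repositioning.
# The optimal transport step and all learned components are intentionally omitted.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def match_and_reposition(F0, F1, V0, V1, theta=0.2):
    """F0:(K0,C), F1:(K1,C) aggregated vertex features; V0:(K0,2), V1:(K1,2) coordinates."""
    C = F0.shape[1]
    P = F0 @ F1.T / np.sqrt(C)                      # correlation matrix (OT refinement skipped)

    # Eqs. (4)-(5): keep mutually consistent argmax pairs whose correlation exceeds theta.
    row_best = P.argmax(axis=1)                     # best candidate in G1 for each vertex of G0
    col_best = P.argmax(axis=0)                     # best candidate in G0 for each vertex of G1
    matches = [(i, int(row_best[i])) for i in range(len(V0))
               if col_best[row_best[i]] == i and P[i, row_best[i]] > theta]

    # Eq. (6): propagate shift vectors to every vertex of G0 via feature self-similarity.
    soft_target = softmax(P, axis=1) @ V1           # soft correspondence position in G1
    r0 = softmax(F0 @ F0.T / np.sqrt(C), axis=1) @ (soft_target - V0)

    # Eq. (7): matched vertices use their exact shift instead of the propagated one.
    for i, j in matches:
        r0[i] = V1[j] - V0[i]
    return matches, r0

# Toy usage with random features and coordinates.
rng = np.random.default_rng(0)
K0, K1, C = 5, 6, 8
matches, r0 = match_and_reposition(rng.normal(size=(K0, C)), rng.normal(size=(K1, C)),
                                   rng.uniform(size=(K0, 2)), rng.uniform(size=(K1, 2)))
print(matches)
print(r0.shape)  # (K0, 2) repositioning vectors, which Eq. (8) scales by t when fusing graphs
```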
### Learning
The training objective of _AnimeInbet_ comprises three terms: \(\mathcal{L}=\mathcal{L}_{c}+\mathcal{L}_{r}+\mathcal{L}_{m}\), where \(\mathcal{L}_{c}\), \(\mathcal{L}_{r}\) and \(\mathcal{L}_{m}\) supervise the learning of the vertex matching \(\hat{\mathcal{M}}\), the repositioning vectors \(r_{0}\) and \(r_{1}\), and the visibility masks \(m_{0}\) and \(m_{1}\), respectively. \(\mathcal{L}_{c}\) enlarges the correlation values of ground-truth pairs and is defined as: \[\mathcal{L}_{c}=-\frac{1}{|\mathcal{M}^{GT}|}\sum_{(i,j)\in\mathcal{M}^{GT}}\log\hat{\mathcal{P}}_{i,j}, \tag{9}\] where \(\mathcal{M}^{GT}\) denotes the ground-truth matching labels. For \(\mathcal{L}_{r}\) and \(\mathcal{L}_{m}\), we regress \(r_{0\to 1}\), \(r_{1\to 0}\), \(m_{0}\), and \(m_{1}\) as follows: \[\begin{split}\mathcal{L}_{r}&=\frac{1}{K_{0}}\|r_{0\to 1}-r_{0\to 1}^{GT}\|_{1}+\frac{1}{K_{1}}\|r_{1\to 0}-r_{1\to 0}^{GT}\|_{1}\\ \mathcal{L}_{m}&=\text{BCE}^{w}\left(\sigma(m_{0}),m_{0}^{GT}\right)+\text{BCE}^{w}\left(\sigma(m_{1}),m_{1}^{GT}\right),\end{split} \tag{10}\] where \(\sigma\) represents the sigmoid function, and BCE\({}^{w}\) is the binary cross-entropy loss with bias weight \(w\). However, since the shift vectors of occluded vertices cannot be obtained directly by subtracting the positions of matched vertices, we conduct a frame-by-frame backtrack to generate pseudo labels that support the point-wise supervision of the repositioning vectors and visibility maps.
**Pseudo Labels of Repositioning and Visibility.** Assume \(G^{(0)}\) and \(G^{(Z)}\) are the \(0\)-th and the \(Z\)-th frames in a training sequence, which are used as the two input line art sources. Although there can exist many unmatched vertices between the two graphs when the gap \(Z\) is large, the matching rate between adjacent frames (gap = 0) is relatively high according to Table 1. Based on this, we iteratively backtrack a shift vector \(r^{(z)}\) from \(G^{(Z)}\) to \(G^{(0)}\): \[r^{(z)}[i]=\left\{\begin{array}{ll}V^{(z+1)}[j]-V^{(z)}[i]+r^{(z+1)},&\text{if }(i,j)\text{ is matched},\\ \frac{1}{|\mathcal{N}_{i}|}\sum_{k\in\mathcal{N}_{i}}r^{(z)}[k],&\text{otherwise},\end{array}\right. \tag{11}\] where \(\mathcal{N}_{i}\) denotes the neighbors of the \(i\)-th vertex in \(G^{(z)}\) and \(r^{(Z)}\) is initialized to \(0\). The terminal value \(r^{(0)}\) of the backtrack is taken as the pseudo repositioning label \(r_{0\to 1}^{GT}\). As for the visibility labels, we first deduce \(r_{0\to t}^{GT}\) as above and compute \(m_{0}^{GT}\) as \[m_{0}^{GT}[i]=\left\{\begin{array}{ll}1,&\text{if }V_{0}[i]+r_{0\to t}^{GT}\in\widetilde{I}_{t},\\ 0,&\text{otherwise},\end{array}\right. \tag{12}\] where \(\widetilde{I}_{t}\) is \(I_{t}\) dilated by a \(3\times 3\) kernel. \(r_{1\to 0}^{GT}\) and \(m_{1}^{GT}\) are computed in the reversed order.
## 5 Experiments
**Implementation Details.** In the vertex geometric embedding module, the image encoder \(\mathcal{E}_{I}\) is implemented as a three-layer 2D CNN, while the positional encoder \(\mathcal{E}_{P}\) and the topological encoder \(\mathcal{E}_{T}\) are 1D CNNs with a kernel size of \(1\). The encoding feature dimension \(C\) is \(128\) in our experiments. Before feeding the vertex coordinates \(V\) into \(\mathcal{E}_{P}\), \(V\) are first normalized to the range \((-1,1)\); the dimension of the spectral embedding feature is \(64\). The threshold \(\theta\) in Equation 5 is \(0.2\).
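As an illustration of the frame-by-frame backtrack used to build the pseudo labels in Equation (11), the following rough sketch (our own reading of the procedure, not the authors' code) accumulates per-vertex shifts from the last frame back to the first. The per-frame data structures (vertex arrays, match dictionaries, adjacency lists) and the fixed-point sweeps for unmatched vertices are our own choices for illustration.

```python
import numpy as np

def backtrack_pseudo_shifts(vertices, matches, neighbors):
    """Backtrack shift vectors from frame Z to frame 0 (cf. Eq. (11)).

    vertices[z]  : (K_z, 2) array of vertex positions in frame z
    matches[z]   : dict {i: j} mapping vertex i of frame z to vertex j of frame z+1
    neighbors[z] : dict {i: [k, ...]} adjacency lists of frame z

    Returns r^{(0)}, the accumulated shift of every vertex of frame 0, which is
    used as the pseudo repositioning label r^{GT}_{0->1}.
    """
    Z = len(vertices) - 1
    r_next = np.zeros_like(vertices[Z])            # r^{(Z)} initialised to 0
    for z in range(Z - 1, -1, -1):
        r = np.zeros_like(vertices[z])
        unmatched = []
        for i in range(len(vertices[z])):
            if i in matches[z]:
                j = matches[z][i]                  # matched vertex in frame z+1
                r[i] = vertices[z + 1][j] - vertices[z][i] + r_next[j]
            else:
                unmatched.append(i)
        # Eq. (11) is implicit for unmatched vertices (their shift equals the mean
        # of their neighbours' shifts); a few averaging sweeps approximate that.
        for _ in range(10):
            for i in unmatched:
                nbrs = neighbors[z].get(i, [])
                if nbrs:
                    r[i] = np.mean([r[k] for k in nbrs], axis=0)
        r_next = r
    return r_next                                  # r^{(0)}
```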
In both training and evaluation, the intermediate time \(t\) is \(0.5\), which corresponds to the center frame between \(I_{0}\) and \(I_{1}\). The detailed network structures are provided in the supplementary file.
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Validation Set} & \multicolumn{4}{c}{Test Set} \\ \cline{2-9} Method & gap = 1 & gap = 5 & gap = 9 & Avg. & gap = 1 & gap = 5 & gap = 9 & Avg. \\ \hline VFIformer [14] & 7.82 & 26.04 & 50.71 & 28.19 & 7.62 & 27.55 & 50.68 & 28.62 \\ RIFE [6] & 5.02 & 27.79 & 49.81 & 27.54 & 5.85 & 28.91 & 51.08 & 28.61 \\ EISAI [5] & 5.66 & 27.64 & 49.43 & 27.57 & 6.02 & 29.14 & 52.36 & 29.17 \\ FILM [23] & 3.18 & 16.84 & 30.74 & 16.92 & 3.50 & 17.94 & 33.51 & 18.31 \\ \hline _AnimeInbet_ (ours) & **2.20** & **11.12** & **21.27** & **11.53** & **2.80** & **12.69** & **23.21** & **12.90** \\ _AnimeInbet-VS_ (ours) & 2.62 & 11.43 & 22.36 & 12.14 & 3.44 & 13.41 & 23.67 & 13.51 \\ \hline \hline \end{tabular} \end{table} Table 2: **Quantitative evaluations of state-of-the-art frame interpolation methods using Chamfer Distance** (reported in units of \(\times 10^{-5}\), with lower values indicating better performance). The first place and runner-up are highlighted in bold and underlined, respectively.
We use the Adam [9] optimizer with a learning rate of \(1\times 10^{-4}\) to train _AnimeInbet_ for \(70\) epochs, where we first supervise the network solely with the correspondence loss \(\mathcal{L}_{c}\) for the first \(50\) epochs, and then adopt the full loss \(\mathcal{L}\) for the remaining \(20\) epochs. The bias weight \(w\) in \(\mathcal{L}_{m}\) is \(0.2\). Since vertex numbers differ across frames, we feed one pair of input frames at a time but adopt gradient accumulation for an effective mini-batch size of \(8\). The model is trained on an NVIDIA Tesla V100 GPU for about five days. During testing, \(G_{t}\) is visualized as a raster image with the cv2.line function with a line width of 2 pixels. We evaluate our model both on ground-truth vectorization labels (denoted "_AnimeInbet_") and on lines vectorized by VirtualSketcher [15] (denoted "_AnimeInbet-VS_"), to simulate the cases where the input anime drawings are vector and raster, respectively.
**Evaluation Metric.** Following [16, 5], we adopt the Chamfer distance (CD) as the evaluation metric, which was originally introduced to measure the similarity between two point clouds. Formally, CD is computed as: \[CD(I_{t},I_{t}^{GT})=\frac{1}{HWd}\sum(I_{t}\,\textit{DT}(I_{t}^{GT})+I_{t}^{GT}\,\textit{DT}(I_{t})), \tag{13}\] where \(I_{t}\) and \(I_{t}^{GT}\) are the predicted binary lines and the ground truth, while \(H\), \(W\) and \(d\) are the image height, width, and a search diameter [5], respectively. _DT_ denotes the Euclidean distance transform. To convert predicted raster images into binary sketches, we threshold pixels smaller than 0.99 times the maximum value to 0.
### Comparison to Existing Methods
Since there is no existing geometrized line inbetweening study that we can directly compare our proposed model with, we take several state-of-the-art raster-image-based frame interpolation methods as baselines, including VFIformer [14], RIFE [6], EISAI [5] and FILM [23]. Specifically, EISAI is originally intended for 2D animation and embeds an optical flow-based contour aggregator.
Figure 8: **Inbetweening results on the _MixamoLine240_ test set. Examples are arranged from small (top) to large (bottom) motion magnitudes.**
We test each model's performance on frame pairs within frame gaps of 1, 5 and 9, respectively.
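The Chamfer distance of Equation (13) can be computed with a Euclidean distance transform. Below is a minimal sketch (our own illustration, not the evaluation code of the paper) using scipy; the binarization helper follows the thresholding rule stated above, and the default search diameter is an assumption for the example.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_distance(pred, gt, d=None):
    """Chamfer distance between two binary line images (cf. Eq. (13)).

    pred, gt : (H, W) binary arrays where line pixels are 1.
    d        : search diameter used for normalisation; image diagonal by default.
    """
    H, W = gt.shape
    if d is None:
        d = np.hypot(H, W)
    # distance_transform_edt measures the distance to the nearest zero pixel,
    # so the masks are inverted to obtain the distance to the nearest line pixel.
    dt_gt = distance_transform_edt(1 - gt)
    dt_pred = distance_transform_edt(1 - pred)
    return (pred * dt_gt + gt * dt_pred).sum() / (H * W * d)

def binarize(img):
    # Following the stated rule: pixels below 0.99 * max are set to 0,
    # the remaining pixels are kept as line pixels.
    return (img >= 0.99 * img.max()).astype(np.float64)
```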
For fairness, we finetune each compared method on the training set of _MixamoLine240_ with the respective frame gaps, using a learning rate of \(1\times 10^{-6}\) for five epochs. As shown in Table 2, our _AnimeInbet_ clearly outperforms all compared methods on both the validation set and the test set of _MixamoLine240_. On the validation set, our approach achieves an average CD value of \(11.53\), representing a significant improvement over the best-performing compared method, FILM, with over \(30\%\) enhancement. Upon closer inspection, the advantage of _AnimeInbet_ becomes more pronounced as the frame gap increases (\(0.98\), \(5.72\) and \(9.47\) for gaps of 1, 5, and 9, respectively), indicating that our method is more robust in handling larger motions. On the test set, our method maintains its lead over the other compared methods, with improvements of \(0.70\) (\(20\%\)), \(5.25\) (\(29\%\)), and \(10.30\) (\(31\%\)) over the best-performing compared method FILM for the frame gaps of 1, 5, and 9, respectively. Given that both the characters and actions in the test set are new, our method's superiority on the test set provides more convincing evidence of its advantages over the existing frame interpolation methods. To illustrate the advantages of our method, we present several inbetweening results in Figure 8. We arrange these examples in increasing levels of difficulty from top to bottom. When the motion is simple, the compared methods can interpolate a relatively complete shape of the main body of the drawing. However, they tend to produce strong blurring (RIFE) or disappearance (VFIformer, EISAI, and FILM) of noticeably moving parts (indicated by red arrows). In contrast, our method maintains a concise line structure in these key areas. When the input frames involve whole-body movements of large magnitude, the intermediate frames predicted by the compared methods become indistinguishable and patchy, rendering the results invalid for further use. However, our _AnimeInbet_ method can still preserve the general shape in the correct positions, even with a partial loss of details, which can be easily fixed with minor manual effort. **User Study.** To further evaluate the visual performance of our method, we conduct a user study among 36 participants. For each participant, we randomly show 60 pairs, each composed of a result of _AnimeInbet_ and that of a compared method, and ask the participant to select the better one. To allow participants to take temporal consistency into the decision, we display these results as GIFs formed by the triplet of the two input frames and the inbetweened one. The winning rates of our method are shown in Figure 9, where _AnimeInbet_ wins in over \(92\%\) of the comparisons against the compared methods. Notably, for the "gap = 5" and "gap = 9" slots, the winning rates of our method are close to \(100\%\) with smaller deviations than for "gap = 1", suggesting the advantage of our method in cases with large motions.
### Ablation Study
**Embedding Features.** To investigate the effectiveness of the three types of embeddings mentioned in Section 4.1, we train several variants by removing the corresponding modules. As shown in Table 3, for each variant, we list the matching accuracy for all vertices ("Acc."), the accuracy for non-occluded vertices ("Valid Acc.") and the final CD values of inbetweening on the validation set (gap = 5). Removing the positional embedding \(\mathcal{E}_{P}\) lowers the "Valid Acc."
and the CD value by \(15.83\%\) and \(0.74\), respectively, while removing the topological embedding \(\mathcal{E}_{T}\) lowers "Valid Acc." by \(5.66\%\) and worsens CD by \(0.43\), which reveals the importance of these two components.
**Repositioning Propagation and Visibility Mask.** We demonstrate the contribution of the repositioning propagation (repos. prop.) and the visibility mask (vis. mask) both quantitatively and qualitatively.
\begin{table} \begin{tabular}{l c} \hline \hline Method & CD (\(\downarrow\)) \\ \hline w/o. repositioning propagation & 23.62 \\ w/o. visibility mask & 12.81 \\ full model & 11.12 \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation study on repositioning and visibility mask.**
\begin{table} \begin{tabular}{c c c c c c} \hline \hline \(\mathcal{E}_{I}\) & \(\mathcal{E}_{P}\) & \(\mathcal{E}_{T}\) & Acc. (\%) & Valid Acc. (\%) & CD (\(\downarrow\)) \\ \hline ✓ & ✗ & ✗ & 51.66 & 31.01 & 12.30 \\ ✓ & ✓ & ✗ & 61.87 & 55.62 & 11.55 \\ ✓ & ✗ & ✓ & 59.28 & 45.45 & 11.86 \\ ✓ & ✓ & ✓ & **65.51** & **61.28** & **11.12** \\ \hline \hline \end{tabular} \end{table} Table 3: **Ablation study on vertex encoding.**
Figure 9: **Statistics of user study. In the boxplot, triangles and colored lines represent mean and median values, respectively. Circles are outliers beyond \(1.5\times\) interquartile range (\(3\sigma\) in a normal distribution).**
As shown in Table 4, without repositioning propagation, the CD value is sharply worsened by \(12.50\) (\(112\%\)), while removing the visibility mask also causes a drop of \(1.69\) (\(15\%\)). An example is shown in Figure 10, where "w/o. repos. prop." appears with many messy lines due to the undefined positions of unmatched vertices, while "w/o. vis. mask" shows some redundant segments (red box) after repositioning; the complete _AnimeInbet_ resolves these issues and produces a clean yet complete result.
**Geometrizor.** As shown in Table 2, the quantitative metrics of _AnimeInbet-VS_ are generally worse by around \(0.6\) compared to _AnimeInbet_. This is because VirtualSketcher [15] does not vectorize the line arts as precisely as the ground truth labels (average vertex number 587 _vs_ 1,351). As shown in Figure 10, the curves in "_AnimeInbet-VS_" become sharper and lose some details, which decreases the quality of the inbetweened frame. Using a more accurate geometrizer would lead to higher-quality inbetweening results for raster image inputs.
**Data Influence.** As mentioned in Section 3, we created a validation set composed of 20 sequences of unseen characters but seen actions, 20 of unseen actions but seen characters and 4 with both unseen, to explore the influence of the data. Our experiments find that whether the characters or the actions are seen does not fundamentally influence the inbetweening quality, while the motion magnitude is the key factor. As shown in Table 5, the CD value for unseen characters is \(14.70\), which is over \(47\%\) worse than that for both unseen, due to larger vertex shifts (\(44.59\) _vs_ \(29.62\)), while the difference between the CD values of unseen actions and both unseen is around 10% under similar occlusion rates and shifts.
## 6 Conclusion
In this study, we address the practical problem of cartoon line inbetweening and propose a novel approach that treats line arts as geometrized vector graphs. Unlike previous frame interpolation tasks on raster images, our approach formulates the inbetweening task as a graph fusion problem with vertex repositioning.
We present a deep learning-based framework called _AnimeInbet_, which shows significant gains over existing methods in terms of both quantitative and qualitative evaluation. To facilitate training and evaluation on cartoon line inbetweening, we also provide a large-scale geometrized line art dataset, _MixamoLine240_. Our proposed framework and dataset facilitate a wide range of applications, such as anime production and multimedia design, and have significant practical implications. **Acknowledgement.** This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-PhD/2021-01-031[T]). It is also supported under the RIE2020 Industry Alignment Fund Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). This study is partially supported by NTU NAP, MOE AcRF Tier 1 (2021-T1-001-088).
2309.05159
The emergence of time from quantum interaction with the environment
The nature of time as emergent for a system by separating it from its environment has been put forward by Page and Wootters [D. N. Page and W. K. Wootters, Phys. Rev. D 27, 2885 (1983)] in a quantum mechanical setting neglecting interaction between system and environment. Here, we add strong support to the relational concept of time by deriving the time-dependent Schroedinger equation for a system from an energy eigenstate of the global Hamiltonian consisting of system, environment and their interaction. Our results are consistent with concepts for the emergence of time where interaction has been taken into account at the expense of a semiclassical treatment of the environment. Including the coupling between system and environment without approximation adds a missing link to the relational time approach opening it to dynamical phenomena of interacting systems and entangled quantum states.
Sebastian Gemsheim, Jan M. Rost
2023-09-10T22:30:36Z
http://arxiv.org/abs/2309.05159v1
# The emergence of time from quantum interaction with the environment ###### Abstract The nature of time as emergent for a system by separating it from its environment has been put forward by Page and Wootters [D. N. Page and W. K. Wootters, Phys. Rev. D 27, 2885 (1983)] in a quantum mechanical setting neglecting interaction between system and environment. Here, we add strong support to the relational concept of time by deriving the time-dependent Schrodinger equation for a system from an energy eigenstate of the global Hamiltonian consisting of system, environment _and_ their interaction. Our results are consistent with concepts for the emergence of time where interaction has been taken into account at the expense of a semiclassical treatment of the environment. Including the coupling between system and environment without approximation adds a missing link to the relational time approach opening it to dynamical phenomena of interacting systems and entangled quantum states. The nature and role of time to decipher the physical world is a basic and persisting research topic, in particular the question, if time is fundamental or emergent. For the latter, the starting point is a static description of the world. Time emerges from singling out a system from the rest of the world, its environment. As such, time is a meaningful tool to describe the relation of system and environment, both governed by Hamiltonians distinguished in physical or abstract (Hilbert) space. This has lead to two strands of research for the relational approach to time. One strand, initiated by Page and Wootters [1; 2; 3; 4; 5] deals with abstract state vectors in Hilbert space and is analytically exact, but remains to date unable to deal with general couplings of system and environment. The second strand uses a semiclassical approach typically in position space, arguing that the environment is "large enough" to allow for semiclassical approximations [6; 7; 8; 9; 10; 11; 12; 13; 14]. By these means, time also emerges as relation between system and environment which may be arbitrarily coupled. Here, we will show how time emerges quantum mechanically in the relation between system and environment without approximations, more specifically, by retaining arbitrary couplings between them and without the need to resort to semiclassical approximations. That is, starting from a static global state encompassing system and environment we derive the time-dependent Schrodinger equation including an arbitrary, time-dependent potential for the system in a few transparent steps. To this end, we will re-formulate the stationary (timeless) Schrodinger equation for the global state as an _invariance principle_ and single out a pure state of the system from its inevitable embedding in the environment by projecting a specific state of the environment onto the global state. As a by-product our approach constitutes a concept for analytical solutions of complicated time-dependent interaction potentials [15]. The invariance principle for the global state \(\left|\Psi\right\rangle\!\rangle\) as an eigenstate of the Hamiltonian \(\hat{H}\) with global eigenenergy \(E\) reads \[\exp\left[i\lambda(\hat{H}-E)\right]\,\left|\Psi\right\rangle\!\rangle=\, \left|\Psi\right\rangle\!\rangle \tag{1}\] for all complex \(\lambda\) with dimension of inverse energy, where \(\left\langle\!\left\langle.\right|.\right\rangle\!\rangle\) stands for the scalar product in the global Hilbert space. Differentiating (1) w.r.t. 
\(\lambda\) gives the (timeless) Schrodinger equation \((\hat{H}-E)\left|\Psi\right\rangle=0\), often referred to as TISE. In the following, we will only consider real-valued \(\lambda\) in (1) which is sufficient to demonstrate the emergence of time. Purely imaginary \(\lambda\) finds its natural application in the emergence of temperature [16]. In order to single out a system state from the global state, we first partition the global Hamiltonian \(\hat{H}\) into that of the system \(\hat{H}_{\mathrm{S}}\), its environment \(\hat{H}_{\mathrm{C}}\) and their possible interaction \(\hat{V}\), \[\hat{H}=\hat{H}_{\mathrm{S}}\otimes\hat{\mathbf{1}}_{\mathrm{C}}+\hat{\mathbf{ 1}}_{\mathrm{S}}\otimes\hat{H}_{\mathrm{C}}+\hat{V}\,. \tag{2}\] We will use environment and clock as synonyms to relate to the aforementioned two strands of research on the emergence of time. While the partition (2) of the global Hamiltonian is natural to define a system in the first place, it is not obvious how to single out a system state from the global, _entangled_ state \(\left|\Psi\right\rangle\!\rangle\). From a quantum mechanical point of view, the system is inevitably embedded in its environment on which it is therefore conditioned. Hence, a system state \(\left|\varphi\right\rangle_{\mathrm{S}}\) is created by projecting the global state onto a state of the environment, \(\left|\varphi\right\rangle_{\mathrm{S}}=\,\left\langle\chi|\Psi\right\rangle\!\rangle\)[17]. Here and in the following we use the convention that \(\left\langle.|.\right\rangle\) and \(\left\langle\!\left\langle.|.\right\rangle\!\rangle\right\rangle\) denote scalar products in environment and full Hilbert space, respectively, while \(\left|\varphi\right\rangle_{\mathrm{S}}\) and \(\left|\chi\right\rangle\) stand for states of system and environment, respectively and \(\left|\Psi\right\rangle\!\rangle\) is reserved for the global state. A sketch of this relational approach is shown in Fig. 1. Singling out the system by projection reduces the correlations and in particular breaks the global symmetry such that the system state does not obey the global invariance principle. Rather, the state becomes dependent on the symmetry parameter \(\lambda\). This can be seen by projecting the invariance equation (1) onto \(\left\langle\chi_{0}\right|\), which gives for the interaction free case, \(V=0\), \[\left\langle\chi_{0}|e^{i\lambda(\hat{H}_{\mathrm{C}}-E)}|\Psi\rangle\!\rangle=e^ {-i\lambda\hat{H}_{\mathrm{S}}}\left\langle\chi_{0}|\Psi\rangle\!\rangle\,, \tag{3}\] where we may write \[\left|\chi_{\lambda}\right\rangle=e^{-i\lambda(\hat{H}_{\mathrm{C}}-E)}\left| \chi_{0}\right\rangle\equiv\hat{U}_{\mathrm{C}}(\lambda)\left|\chi_{0}\right\rangle\,. \tag{4}\] The states \(\left|\chi_{\lambda}\right\rangle\) from the environment serve as markers to tag the system state with \(\lambda\), \[\left|\varphi(\lambda)\right\rangle_{\mathrm{S}}\equiv\,\left\langle\chi_{ \lambda}|\Psi\rangle\!\rangle\,. \tag{5}\] Consistent with \(\left|\varphi(0)\right\rangle_{\mathrm{S}}=\left\langle\chi_{0}|\Psi\rangle\!\rangle\), we arrive at \[\left|\varphi(\lambda)\right\rangle_{\mathrm{S}}=e^{-i\lambda\hat{H}_{ \mathrm{S}}}\left|\varphi(0)\right\rangle_{\mathrm{S}}\equiv\hat{U}_{\mathrm{ S}}(\lambda)\left|\varphi(0)\right\rangle_{\mathrm{S}} \tag{6}\] for all symmetry parameters \(\lambda\). Hence, we have _derived_ from the global invariance (1) without reference to any differential equations how states of the system (6) and the environment (4) evolve. 
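As a minimal numerical illustration of Eqs. (4)-(6) in the interaction-free case, the following short sketch uses a toy choice of Hamiltonians and states that is ours and not taken from the text: a qubit system with \(\hat{H}_{\mathrm{S}}=\hat{\sigma}_{z}\), a qubit clock with \(\hat{H}_{\mathrm{C}}=\hat{\sigma}_{z}\), and a global eigenstate from the degenerate \(E=0\) subspace. It checks directly that projecting the static global state with the rotated clock states reproduces the unitary system evolution of Eq. (6).

```python
import numpy as np

# Qubit system H_S = sigma_z and qubit clock H_C = sigma_z; the global state below
# lies in the degenerate E = 0 eigenspace of H = H_S (x) 1 + 1 (x) H_C.
sz = np.diag([1.0, -1.0])
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])

H_S, E = sz, 0.0
Psi = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)   # global eigenstate, H Psi = 0
chi0 = (up + dn) / np.sqrt(2)                            # initial clock state

def system_state(lam):
    """phi(lam) = <chi_lam | Psi>, with chi_lam = exp(-i lam (H_C - E)) chi_0."""
    chi_lam = np.array([np.exp(-1j * lam) * chi0[0], np.exp(1j * lam) * chi0[1]])
    Psi_mat = Psi.reshape(2, 2)                          # index order: (system, clock)
    return Psi_mat @ chi_lam.conj()                      # contract out the clock factor

for lam in np.linspace(0.0, 2.0, 5):
    lhs = system_state(lam)
    # Eq. (6): phi(lam) = exp(-i lam H_S) phi(0); H_S is diagonal here,
    # so the matrix exponential reduces to a diagonal phase matrix.
    rhs = np.diag(np.exp(-1j * lam * np.diag(H_S))) @ system_state(0.0)
    assert np.allclose(lhs, rhs), "relational evolution violated"
print("phi(lam) = exp(-i lam H_S) phi(0) holds for the toy model")
```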
This implies a peculiar consequence on the fundamental level: states with different \(\lambda\) do not have to be related, admitting also discrete symmetries with \(\lambda\) replaced by a set of parameters \(\{\lambda_{n}\}\). Using the property \(\hat{U}^{\dagger}(\lambda)=\hat{U}(-\lambda)\) of the unitary transformations in (4) and (6) we can rewrite the projected invariance equation (3) as \[\left\langle\chi_{0}|\Psi\rangle\!\rangle =\hat{U}_{\mathrm{S}}(-\lambda)\left\langle\chi_{0}|\hat{U}_{ \mathrm{C}}(-\lambda)|\Psi\rangle\!\rangle\right.\] \[=\hat{U}_{\mathrm{S}}(-\lambda)\left\langle\hat{U}_{\mathrm{C}}( \lambda)\chi_{0}|\Psi\rangle\!\rangle\,, \tag{7}\] which has the same form as the invariance for more familiar symmetry transformations, e.g., the invariance of a state \(\left|\psi\right\rangle\) in coordinate space \(\left\langle\mathbf{r}|\psi\right\rangle\) if it is rotated by an angle \(\theta\) about a vector \(\mathbf{u}\) with the unitary operator \(\hat{D}(\theta)=e^{-i\theta\mathbf{u}\cdot\mathbf{J}/\hbar}\) while the coordinate system is rotated backwards with the rotation matrix \(R(\theta)\): \(\hat{D}(\theta)\left\langle R(-\theta)\mathbf{r}|\psi\right\rangle=\left\langle \mathbf{r}|\psi\right\rangle\). This opposite behavior of states of the system and environment as a consequence of the global invariance was dubbed by Zurek "envariance" and used to motivate, why probabilities correspond to measurements, colloquially known as the Born Rule [18]. In our context of letting time emerge by projection of a globally static state, we may conclude that for the projected global invariance (3) the state \(\left|\chi\right\rangle\) from the environment plays the role of a coordinate which is transformed with \(\hat{U}_{\mathrm{C}}(\lambda)\) to compensate the transformation of the system state \(\left|\varphi\right\rangle_{\mathrm{S}}\) with \(\hat{U}_{\mathrm{S}}(-\lambda)\). Since \(\lambda\) in (1) is a continuous symmetry, (6) can be interpreted as the solution of the differential equation \[i\frac{\mathrm{d}}{\mathrm{d}\lambda}\left|\varphi(\lambda)\right\rangle_{ \mathrm{S}}=\hat{H}_{\mathrm{S}}\left|\varphi(\lambda)\right\rangle_{\mathrm{S}} \tag{8}\] with initial condition \(\left|\varphi(0)\right\rangle_{\mathrm{S}}=\,\left\langle\chi_{0}|\Psi\rangle\!\rangle\right\rangle\). Obviously, (8) is equivalent to the TDSE if time \(t\) is introduced through \(\lambda=t/\hbar\). What we have described so far is a short cut derivation of the Page-Wootters relational time approach [1] made possible by recognizing the crucial role of the invariance principle (1). Strictly speaking, \(\lambda\) is only a label without physical meaning: Any re-parametrization \(\lambda=f(\tilde{\lambda})\) leaves the relations between environment and system invariant. However, one can tag the system's evolution with a reparametrization invariant observable of the environment, \(\mathsf{A}_{\mathrm{C}}(\lambda)\equiv\,\left\langle\chi_{\lambda}|\hat{A}_{ \mathrm{C}}|\chi_{\lambda}\right\rangle:\mathcal{H}_{\mathrm{C}}\mapsto \mathds{R}\). Although \(\hat{A}_{\mathrm{C}}\) operating on the environment is arbitrary apart from being Hermitian, a good choice is one for which the relation between \(\lambda\) and \(\mathsf{A}_{\mathrm{C}}\) is simple, for example linear, if the environment is used as a clock. This idea goes back to Poincare [19]. 
For instance, the mean position \(\mathsf{R}(\lambda)=\lambda\,\mathsf{P}(0)/M+\mathsf{R}(0)\) of a free particle of mass \(M\) with \(\hat{H}_{\mathrm{C}}=\hat{P}^{2}/2M\) can reliably track dynamics for non-vanishing mean momentum \(\mathsf{P}(0)\neq 0\) since we can replace \(\lambda=M[\mathsf{R}(\lambda)-\mathsf{R}(0)]/\mathsf{P}(0)\) which represents a physical property of the environment, respectively clock. For a state \(\left|\chi_{\lambda}\right\rangle\) to clock the system, it must first of all have overlap with the global state (see Fig. 1). To provide a high resolution in \(\lambda\), the clock state \(\left|\chi_{\lambda}\right\rangle\propto\sum_{k}a_{k}e^{-i\lambda E_{C,k}} \left|E_{C,k}\right\rangle\) must be distributed over many eigenstates \(\left|E_{C,k}\right\rangle\) of \(\hat{H}_{\mathrm{C}}\), with ideally \(\left|a_{k}\right|\approx\mathrm{const}\)[3; 4; 20]. This is easy to realize, if the (physical) dimension of the clock is much larger than that of the system, which also has the effect that the global state can accommodate more complex system dynamics. We also re-emphasize that the entanglement in \(\left|\Psi\right\rangle\!\rangle\) with respect to the states of system and environment is crucial for non-trivial system dynamics and requires without interaction \(\hat{V}\) the existence of degenerate Figure 1: Sketch of the relational state formalism. A one-dimensional environment state \(\chi(x)\) projects out a two-dimensional system state \(\varphi(y,z)\propto\int\mathrm{d}x\,\chi^{*}(x)\Psi(x,y,z)\) from the three-dimensional global state \(\Psi(x,y,z)\). Schematically, the clock wavefunction is multiplied to each vertical column of \(\Psi\) and subsequently integrated along this direction to yield each value of \(\varphi\). With such an inherent clock dependence, the system state generally differs for different clock states. eigenspaces of the global Hamiltonian. Otherwise, system and environment fulfill separately a "global" invariance principle with \(\lambda_{\mathrm{S}}\) and \(\lambda_{\mathrm{C}}\), respectively, which leaves the relation \(\lambda_{\mathrm{S}}(\lambda_{\mathrm{C}})\) undetermined. Finally, it is remarkable that despite the global invariance having been broken by an arbitrary but specific choice of \(\ket{\chi_{0}}\), the properties of the latter do not influence the evolution of the system state other than specifying its initial condition. Hence, the standard procedure of getting rid of properties of the environment to achieve a universal system evolution, namely tracing over the environment, is not necessary. While it is contained in the present description (we could use any kind of mixed state for \(\ket{\chi_{0}}\)), choosing a rather structureless \(\ket{\chi_{0}}\) is not suitable for serving the purpose of a clock as just discussed. So far we have provided a clarification and short-cut to the TDSE for a system not interacting with its environment, enabled by recognizing the power of the invariance principle (1) which was not invoked in [1]. We have detailed our approach since we need it in the following to derive the TDSE for a system interacting with the environment. In reality, the environment, will inevitably interact with the system. This automatically ensures that the global state \(\ket{\Psi}\) is generically entangled. Hence, we should derive the TDSE for the system with interaction \(\hat{V}\neq 0\). 
To this end, we use \(\ket{\chi(\lambda)}=e^{-iS(\lambda)}\ket{\chi_{\lambda}}\) with \(\ket{\chi_{\lambda}}\) from (4) and the complex scalar \(S(\lambda)=\int^{\lambda}d\lambda^{\prime}\mathcal{E}(\lambda^{\prime})\), which can be viewed as a \(\lambda-\)dependent phase and normalization. Projected onto this state, the global TISE can be written as \[\left(-\hat{H}_{\mathrm{S}}+\mathcal{E}(\lambda)+i\frac{\mathrm{d}}{\mathrm{d} \lambda}\right)\,\bra{\chi(\lambda)}\ket{\Psi}=\,\bra{\chi(\lambda)}\hat{V} \ket{\Psi}. \tag{9}\] As a next step we decompose \(\,\bra{\chi(\lambda)}\hat{V}\ket{\Psi}\) into a Hermitian potential \(\hat{V}_{\mathrm{S}}(\lambda)\) for the system and a c-number which is an expectation value over the global state. The decomposition is facilitated with the operators \(\hat{P}_{\Psi}\equiv\ket{\Psi}\!\!\bra{\Psi}\), \(\hat{P}_{\chi}\equiv\hat{\mathds{1}}_{\mathrm{S}}\otimes\ket{\chi(\lambda)} \!\!\bra{\chi(\lambda)}\) and \(\hat{P}_{\Psi_{\chi}}=\hat{P}_{\Psi}\hat{P}_{\chi}/N_{\lambda}\), where \(\hat{P}_{\Psi_{\chi}}\ket{\Psi}=\,\ket{\Psi}\) since \(N_{\lambda}=\,\bra{\Psi}\!\!\ket{\hat{P}_{\chi}}\ket{\Psi}\). We obtain \[\bra{\chi}\hat{V}\ket{\Psi} =\,\bra{\chi}\hat{V}\hat{P}_{\Psi_{\chi}}\ket{\Psi}\] \[=\left[\hat{V}_{\mathrm{S}}(\lambda)-\,\bra{\Psi}\!\!\ket{\hat{V} \hat{P}_{\chi}}\ket{\Psi}/N_{\lambda}\right]\,\bra{\chi(\lambda)}\ket{\Psi} \tag{10a}\] where \[\hat{V}_{\mathrm{S}}(\lambda)=\frac{\bra{\chi}\!\left[\left(\hat{V}\hat{P}_{ \Psi}+\hat{P}_{\Psi}\hat{V}\right)\!\ket{\chi}\right]}{\bra{\Psi}\!\!\ket{\hat {P}_{\chi}}\ket{\Psi}}\,. \tag{10b}\] Inserting (10) into (9), setting \(\mathcal{E}(\lambda)\!\equiv\!\bra{\Psi}\!\!\ket{\hat{V}\hat{P}_{\chi}}\ket{ \Psi}/N_{\lambda}\) and rearranging terms gives the TDSE for the system with interaction, \[\left[\hat{H}_{\mathrm{S}}+\hat{V}_{\mathrm{S}}(\lambda)\right]\ket{\varphi( \lambda)}_{\mathrm{S}}=i\frac{\mathrm{d}}{\mathrm{d}\lambda}\ket{\varphi( \lambda)}_{\mathrm{S}}\,. \tag{11}\] The effective system potential \(\hat{V}_{\mathrm{S}}\) from (10b) depends explicitly on \(\lambda\) and implicitly on the state of the environment, \(\ket{\chi(\lambda)}=e^{-i\lambda(\hat{H}_{\mathrm{C}}-E)-iS(\lambda)}\ket{ \chi_{0}}\). One can easily retrieve the original TISE \(\left(\hat{H}-E\right)\ket{\Psi}=0\) by inserting the explicit expression for \(\ket{\varphi(\lambda)}_{\mathrm{S}}=\,\bra{\chi(\lambda)}\ket{\Psi}\) into (11), performing the differentiation w.r.t. \(\lambda\) followed by a functional derivative \(\delta/(\delta\langle\chi|)\) with respect to the state of the environment. Equation (11) is the main result of this work and represents, to the best of our knowledge, the first derivation of the time-dependent Schrodinger equation with a _fully general_, Hermitian time-dependent potential \(\hat{V}_{\mathrm{S}}\) from a static global state. A pictorial representation of our formalism is shown in Fig. 2. To stay as general as possible, we have made no further assumptions regarding the interaction potential \(\hat{V}\). Of course, it is reasonable (although we have seen not necessary!) to assume that the interaction potential has negligible influence on the state \(\ket{\chi}\) of the environment. Formally, this can be expressed by \([\hat{V},\hat{P}_{\chi}]\approx 0\). Thereby, \(\ket{\chi}\) becomes approximately an eigenstate of the interaction \(\hat{V}\), turning \(\ket{\chi}\) essentially into what has been described as a "pointer state" by Zurek [21]. 
Then we can write \[\bra{\chi}\hat{V}\ket{\Psi}=\frac{\bra{\chi}\hat{P}_{\chi}\hat{V}\ket{\Psi}}{\langle\chi|\chi\rangle}=\frac{\bra{\chi}\hat{V}\hat{P}_{\chi}\ket{\Psi}}{\langle\chi|\chi\rangle}=\frac{\bra{\chi}\hat{V}\ket{\chi}}{\langle\chi|\chi\rangle}\,\langle\chi|\Psi\rangle\!\rangle=\frac{\bra{\chi}\hat{V}\ket{\chi}}{\langle\chi|\chi\rangle}\,\ket{\varphi}_{\mathrm{S}}\,. \tag{12}\] The global state \(\ket{\Psi}\) no longer appears, which renders the calculation of \(\hat{V}_{\mathrm{S}}\) less involved. Moreover, \(\mathrm{Im}(\mathcal{E}(\lambda))=\bra{\Psi}[\hat{V},\hat{P}_{\chi}]\ket{\Psi}/(2iN_{\lambda})=0\), which reflects the negligible influence of the interaction on the environment state.
Figure 2: Emergence of system dynamics by means of the relational formalism. Unitary changes in the clock state induce the system evolution through the correlations contained in the global state. The invariance (1) of \(\Psi\) ensures the concurrent system motion, which is governed by an effective clock-dependent system Hamiltonian. Moreover, the entanglement in the global state admits intricate system evolutions even for relatively simple wavefunctions of the environment.
We close with the promised concept for analytical solutions of TDSEs involving complicated, time-dependent potentials. The following, very simple example of coupled two-level systems gives a flavor of the general strategy. We consider a global Hamiltonian (2) with \(\hat{H}_{\mathrm{S}}=0\), \(\hat{H}_{\mathrm{C}}=E_{\mathrm{C}}\hat{\sigma}_{\mathrm{C},z}\) and the interaction \(\hat{V}=V_{0}\left(\hat{\sigma}_{\mathrm{S},x}+\hat{\sigma}_{\mathrm{S},z}\right)\otimes\hat{\sigma}_{\mathrm{C},x}\), where \(\hat{\sigma}_{x},\hat{\sigma}_{y},\hat{\sigma}_{z}\) are the three Pauli matrices, with the additional label for system or environment. Setting for simplicity \(E_{\mathrm{C}}=V_{0}\equiv 1\), we explicitly get \[\hat{H}=\begin{pmatrix}1&1&0&1\\ 1&-1&1&0\\ 0&1&1&-1\\ 1&0&-1&-1\end{pmatrix} \tag{13}\] with eigenvalues \(E_{\pm}=\pm\sqrt{3}\), both of which are doubly degenerate. As the global state we take one eigenvector of \(E_{-}\) in the basis \(\{\,|\!\uparrow_{\mathrm{S}}\!\uparrow_{\mathrm{C}}\rangle\!\rangle,\,|\!\uparrow_{\mathrm{S}}\!\downarrow_{\mathrm{C}}\rangle\!\rangle,\,|\!\downarrow_{\mathrm{S}}\!\uparrow_{\mathrm{C}}\rangle\!\rangle,\,|\!\downarrow_{\mathrm{S}}\!\downarrow_{\mathrm{C}}\rangle\!\rangle\}\), namely \(\Psi=(1,0,-1,-a)^{\mathrm{T}}\), where \(a=1+\sqrt{3}\). Here, we use \(S(\lambda)=\int^{\lambda}d\lambda^{\prime}\operatorname{Im}\mathcal{E}(\lambda^{\prime})\) without loss of generality to simplify expressions. With \[|\chi(\lambda)\rangle=\frac{e^{iE_{-}\lambda}}{2\sqrt{1+a\cos^{2}(\lambda)}}\left[e^{-i\lambda}\,|\!\uparrow_{\mathrm{C}}\rangle+e^{i\lambda}\,|\!\downarrow_{\mathrm{C}}\rangle\right] \tag{14}\] we obtain from (10b) the effective potential \[\hat{V}_{\mathrm{S}}=\mathbf{V}_{S}(\lambda)\cdot\hat{\mathbf{\sigma}}_{\mathrm{S}}, \tag{15a}\] which enters the Schrodinger equation (11), where \[V_{S,x}=V_{S,z}\equiv\frac{\cos(2\lambda)+a\cos^{2}(\lambda)}{1+a\cos^{2}(\lambda)}, \tag{15b}\] \[V_{S,y}\equiv-\frac{(a/2)\sin(2\lambda)}{1+a\cos^{2}(\lambda)}\,, \tag{15c}\] and \(\hat{\mathbf{\sigma}}_{\mathrm{S}}\equiv(\hat{\sigma}_{\mathrm{S},x},\hat{\sigma}_{\mathrm{S},y},\hat{\sigma}_{\mathrm{S},z})^{T}\).
A physical realization would be the interaction of an electronic spin system with a magnetic field, \(\hat{V}_{\mathrm{S}}=-\mathbf{B}(\lambda)\cdot\hat{\mathbf{\mu}}\), with magnetic moment \(\hat{\mathbf{\mu}}=(-e\hbar/2m_{e})\hat{\mathbf{\sigma}}_{\mathrm{S}}\), or simply \(\hat{\mathbf{\mu}}=-\hat{\mathbf{\sigma}}_{\mathrm{S}}/2\) in atomic units. The magnetic field has a different time-dependent behavior along different directions, \(\mathbf{B}_{0}=2[\cos(2\lambda)+a\cos^{2}(\lambda)](\mathbf{e}_{x}+\mathbf{e}_{z})/[1+a\cos^{2}(\lambda)]\) and \(\mathbf{B}_{1}=-a\sin(2\lambda)\mathbf{e}_{y}/[1+a\cos^{2}(\lambda)]\). By construction, we know that the solution of the TDSE with the potential \(\hat{V}_{\mathrm{S}}(\lambda)\) is \[\left|\varphi(\lambda)\right\rangle_{\mathrm{S}}\equiv\,\left\langle\chi(\lambda)|\Psi\right\rangle\!\rangle=\frac{e^{ia\lambda}}{2\sqrt{1+a\cos^{2}(\lambda)}}\left[\left|\uparrow\right\rangle_{\mathrm{S}}-\left(a\,e^{-2i\lambda}+1\right)\left|\downarrow\right\rangle_{\mathrm{S}}\right]. \tag{16}\] Although the system for which we have constructed the time-dependent potential and the analytical solution of the ensuing TDSE is very simple, it admits, nevertheless, an entire class of time-dependent potentials and corresponding solutions by changing the state \(|\chi(\lambda)\rangle\) of the environment. Replacing the environment with a multi-level system is a straightforward extension with a semiclassical limit if the density of states of the environment in the energy interval defined by the two levels of the system becomes large. This renders the environment "large" as compared to the system and provides a direct link between the two research strands for the emergence of time as discussed in the introduction. One can also construct a more general semiclassical limit without reference to a specific (multi-level) system by taking a semiclassical state \(|\chi(\lambda)\rangle\) of the environment and subsequently applying the stationary phase approximation, implicitly breaking the symmetry of environment and system [22]. While these semiclassical limits are consistent with the corresponding strand for the emergence of time, the semiclassical approach cannot uncover the quantum roots of time, as we have worked them out here in the form of two conditions: (i) a global state exists which respects the invariance principle (1) with the global Hamiltonian, and (ii) the global Hamiltonian can be decomposed into a Hamiltonian \(\hat{H}_{\mathrm{S}}\) for the system, its environment \(\hat{H}_{\mathrm{C}}\), and their interaction \(\hat{V}\). By projecting the invariance principle onto an arbitrary state of the environment and all its \(\lambda-\)dependent variants generated by "rotating" the state with \(\hat{H}_{\mathrm{C}}\), these two conditions suffice to formulate a time-dependent Schrodinger equation for the system with a time-dependent potential. Thereby, we advance the relational approach to time by the crucial inclusion of the interaction of system and environment, which so far has been possible only under very special circumstances [20]. Since projection and separation of system and environment as well as entanglement and interaction are also major elements of decoherence, it is not surprising that our theory has points of contact with Zurek's decoherence theory [18], as we have mentioned before. However, decoherence requires time as a prerequisite: the literal meaning of decoherence reveals it as a process _in_ time.
The successful inclusion of interaction into the emergence of time, as laid out here, renders our framework suitable to ask whether decoherence can be established along with emergent time in the interaction of system and environment, a question we will pursue in future work.
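As a self-contained numerical check of the two-level example above (our own verification script, not part of the original text), the following sketch confirms that \(\Psi\) is an eigenvector of the Hamiltonian in Eq. (13) with \(E_{-}=-\sqrt{3}\) and that projecting the static global state onto \(|\chi(\lambda)\rangle\) reproduces the closed-form system state of Eq. (16).

```python
import numpy as np

a = 1 + np.sqrt(3)
H = np.array([[1, 1, 0, 1],
              [1, -1, 1, 0],
              [0, 1, 1, -1],
              [1, 0, -1, -1]], dtype=float)
Psi = np.array([1, 0, -1, -a], dtype=complex)   # basis: |uu>, |ud>, |du>, |dd> (system, clock)

# Psi is an (unnormalised) eigenstate with E_- = -sqrt(3)
assert np.allclose(H @ Psi, -np.sqrt(3) * Psi)

def phi(lam):
    """System state <chi(lambda)|Psi>> obtained by projecting out the clock, cf. Eq. (14)."""
    norm = 2 * np.sqrt(1 + a * np.cos(lam) ** 2)
    chi = np.exp(-1j * np.sqrt(3) * lam) / norm * np.array([np.exp(-1j * lam), np.exp(1j * lam)])
    Psi_mat = Psi.reshape(2, 2)                 # rows: system up/down, columns: clock up/down
    return Psi_mat @ chi.conj()

def phi_closed_form(lam):
    """Closed-form solution of the induced TDSE, Eq. (16)."""
    norm = 2 * np.sqrt(1 + a * np.cos(lam) ** 2)
    return np.exp(1j * a * lam) / norm * np.array([1, -(a * np.exp(-2j * lam) + 1)])

for lam in np.linspace(0, 3, 7):
    assert np.allclose(phi(lam), phi_closed_form(lam))
print("projection of the static global state reproduces Eq. (16)")
```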
2309.14054
Adapt then Unlearn: Exploiting Parameter Space Semantics for Unlearning in Generative Adversarial Networks
The increased attention to regulating the outputs of deep generative models, driven by growing concerns about privacy and regulatory compliance, has highlighted the need for effective control over these models. This necessity arises from instances where generative models produce outputs containing undesirable, offensive, or potentially harmful content. To tackle this challenge, the concept of machine unlearning has emerged, aiming to forget specific learned information or to erase the influence of undesired data subsets from a trained model. The objective of this work is to prevent the generation of outputs containing undesired features from a pre-trained GAN where the underlying training data set is inaccessible. Our approach is inspired by a crucial observation: the parameter space of GANs exhibits meaningful directions that can be leveraged to suppress specific undesired features. However, such directions usually result in the degradation of the quality of generated samples. Our proposed method, known as 'Adapt-then-Unlearn,' excels at unlearning such undesirable features while also maintaining the quality of generated samples. This method unfolds in two stages: in the initial stage, we adapt the pre-trained GAN using negative samples provided by the user, while in the subsequent stage, we focus on unlearning the undesired feature. During the latter phase, we train the pre-trained GAN using positive samples, incorporating a repulsion regularizer. This regularizer encourages the model's parameters to be away from the parameters associated with the adapted model from the first stage while also maintaining the quality of generated samples. To the best of our knowledge, our approach stands as first method addressing unlearning in GANs. We validate the effectiveness of our method through comprehensive experiments.
Piyush Tiwary, Atri Guha, Subhodip Panda, Prathosh A. P
2023-09-25T11:36:20Z
http://arxiv.org/abs/2309.14054v1
Adapt then Unlearn: Exploiting Parameter Space Semantics for Unlearning in Generative Adversarial Networks ###### Abstract The increased attention to regulating the outputs of deep generative models, driven by growing concerns about privacy and regulatory compliance, has highlighted the need for effective control over these models. This necessity arises from instances where generative models produce outputs containing undesirable, offensive, or potentially harmful content. To tackle this challenge, the concept of machine unlearning has emerged, aiming to forget specific learned information or to erase the influence of undesired data subsets from a trained model. The objective of this work is to prevent the generation of outputs containing undesired features from a pre-trained Generative Adversarial Network (GAN) where the underlying training data set is inaccessible. Our approach is inspired by a crucial observation: the parameter space of GANs exhibits meaningful directions that can be leveraged to suppress specific undesired features. However, such directions usually result in the degradation of the quality of generated samples. Our proposed method, known as '**Adapt-then-Unlearn,**'** excels at unlearning such undesirable features while also maintaining the quality of generated samples. This method unfolds in two stages: in the initial stage, we adapt the pre-trained GAN using negative samples provided by the user, while in the subsequent stage, we focus on unlearning the undesired feature. During the latter phase, we train the pre-trained GAN using positive samples, incorporating a repulsion regularizer. This regularizer actively encourages the model's learned parameters to move away from the parameters associated with the adapted model from the first stage while also maintaining the quality of generated samples. To the best of our knowledge, our approach stands as a pioneering method addressing unlearning within the realm of GANs. We validate the effectiveness of our method through comprehensive experiments, encompassing both class-level unlearning on the MNIST dataset and feature-level unlearning tasks on the CelebA-HQ dataset. ## 1 Introduction ### Unlearning Recent advancements in deep generative models such as GANs (Goodfellow et al., 2014; Arjovsky et al., 2017; Karras et al., 2018, 2020) and Diffusion models (Ho et al., 2020; Song and Ermon, 2019; Song et al., 2021) have showcased remarkable performance in diverse tasks, from generating high-fidelity images (Karras et al., 2018, 2020; 2021) to complex text-to-image translations (Ramesh et al., 2021, 2022; Rombach et al., 2022). Consequently, these models find application in various fields, including but not limited to medical imaging (Celard et al., 2023; Varoquaux and Cheplygina, 2022), remote sensing (Ball et al., 2017; Adegun et al., 2023), hyperspectral imagery (Jia et al., 2021; Wang et al., 2023), and many others (Choudhary et al., 2022; Yang and Xu, 2021; Liu et al., 2021). However, the extensive incorporation of data with undesired features and inherent biases (Tommasi et al., 2017)) cause these models to generate violent, racial, or explicit content which poses significant concerns. Thus, these models are subject to regulatory measures (Voigt & dem Bussche, 2017; Goldman, 2020). However, identifying and eliminating these undesired features from the model's knowledge representation poses a challenging task. 
The framework of Machine Unlearning (Xu et al., 2020; Nguyen et al., 2022b) tries to address the above-mentioned problems. Specifically, machine unlearning refers to the task of forgetting learned information (Sekhari et al., 2021; Ma et al., 2022; Ye et al., 2022; Cao & Yang, 2015; Golatkar et al., 2021, 2020; Ginart et al., 2019; Golatkar et al., 2020b), or erasing the influence (Wu et al., 2020; Guo et al., 2020; Graves et al., 2021; Wu et al., 2022; Wu, 2022; Chourasia & Shah, 2023) of a specific subset of the training data from a learned model in response to a user request. The task of unlearning can be challenging because we aim to _'unlearn'_ a specific undesired feature without negatively impacting the other previously acquired knowledge. In other words, unlearning could lead to catastrophic forgetting (Ginart et al., 2019; Nguyen et al., 2022a; Golatkar et al., 2020b), which would deteriorate the performance of the model significantly. Further, the level of difficulty of unlearning may vary depending on the specific features of the data that one is required to unlearn. For example, unlearning a particular class (e.g., the class of digit '9' in MNIST) can be relatively easier than unlearning a subtle feature (e.g., the beard feature in CelebA), because the representations of the undesired class are distinct from the representations of the other classes, whereas a subtle feature may be highly interconnected with other subtle features. In such a case, unlearning a particular class does not significantly deteriorate the performance of the model on other classes, whereas unlearning a subtle feature will impact the other subtle features negatively. For instance, in the CelebA (Liu et al., 2015) dataset, the feature of having a beard is closely linked to the concept of gender. So, unlearning this subtle feature while retaining other correlated features such as gender poses an increasingly difficult challenge. It is important to mention that re-training the model from scratch without the undesired input data is typically not feasible due to the unavailability of the training dataset.
### Motivation and Contribution
In this work, we try to solve the problem of unlearning undesired feature generation in pre-trained generative adversarial networks (GANs) where the underlying training dataset is inaccessible. We operate under the feedback-based unlearning framework. Particularly, we are provided with a pre-trained Generative Adversarial Network (GAN). The user is given a set of generated samples from this GAN, chooses a subset of these samples and identifies them as undesirable. The objective of the process of unlearning is to prevent the GAN from generating the undesirable characteristics, as identified by the user, in the future. In this work, we propose to unlearn the undesired features by following a two-step approach. Specifically, in the first step, we adapt the pre-trained generator to the undesired features by using the samples marked as undesired by the user (negative samples). This ensures that the _'adapted'_ generator exclusively generates samples that possess the undesired features. In the next step, we unlearn the original GAN by using the samples that were not marked as undesired by the user (positive samples). While unlearning the GAN, we add a repulsion loss that encourages the parameters of the unlearned generator to be far away from the parameters of the adapted generator, while also making sure that the quality of the generated samples does not deteriorate much. We call this two-stage process '**Adapt-then-Unlearn**', as in the first stage the GAN is adapted using negative samples, while in the second stage the actual unlearning takes place.
Figure 1: Illustration of linear interpolation and extrapolation in parameter space for unlearning undesired features: (a) Bangs and (b) Hats. We take a GAN pre-trained on CelebA-HQ with parameters \(\theta_{G}\). We adapt the model on undesired samples to get the parameters \(\theta_{N}\) (see Section 3.2). We present samples from generators with parameters \(\theta_{G}+\gamma(\theta_{G}-\theta_{N})\) for \(\gamma=0,0.5,1,1.5,2\). We can see that in the extrapolation region, \(\gamma=1.5\) (fourth column) and \(\gamma=2\) (fifth column), while the undesired features are suppressed, the quality of the generated samples deteriorates. This suggests that 'controlled' traversal in the parameter space away from \(\theta_{N}\) leads to unlearning.
The core idea behind the proposed method relies on the simple observation that there exist interpretable, meaningful directions in the parameter space of the generator (Cherepkov et al., 2021). This observation is the main source of motivation for the proposed method. In particular, the first stage of the proposed method leads to parameters that generate only negative samples, while the parameters of the original pre-trained generator generate both positive and negative samples. Hence, the difference between the original generator's parameters and the adapted generator's parameters can be interpreted as a direction in parameter space that decreases the generation of negative samples. Given this, it is sensible to move away from the adapted parameters along this direction to further reduce the generation of negative samples. This observation is shown in figure 1. However, such extrapolation doesn't guarantee the preservation of the quality of the other features in the generated images (see the last columns of figure 1) and leads to deterioration of the generation quality. Inspired by this observation, we propose to train the generator using the adversarial loss while encouraging the generator parameters to stay away from the adapted generator's parameters. An overview of the proposed method is shown in figure 2. We summarize our contributions as follows:
* We introduce a two-stage approach for machine unlearning in GANs, adhering to the feedback-based unlearning framework. In the first stage, our method adapts the pre-trained GAN to the negative samples. In the second stage, we train the GAN using a repulsion loss, ensuring that the generator's parameters diverge from those of the adapted GAN in stage 1. This guarantees that the newly learned parameters generate samples without the undesired features and leads to unlearning.
* By design, our method can operate in practical few-shot settings where the user provides a very small number of negative samples.
* The proposed method is thoroughly tested on multiple datasets, considering various types of unlearning scenarios such as class-level unlearning and feature-level unlearning. Throughout these tests, we empirically observe that the quality of the generated samples is not compromised.
## 2 Related Work
### Machine Unlearning
The task of machine unlearning is to forget specific learned information or to erase the influence of a particular subset of training data from a trained model.
This can be naively done by removing the unwanted data subset from the training dataset and then retraining the model from scratch. However, retraining is computationally costly and becomes impossible if unlearning requests arrive recursively for single data points. The task of recursively _'unlearning'_, i.e., removing the information of a single data point in an online manner (also known as decremental learning), was introduced for the SVM algorithm in (Cauwenberghs & Poggio, 2000). However, when multiple data points are added or removed, these algorithms become slow because they need to be applied to each data point individually. Therefore, (Karasuyama & Takeuchi, 2009) introduced a newer type of SVM training algorithm that can efficiently update an SVM model when multiple data points are added or removed. Later, inspired by the problem of protecting user privacy, (Cao and Yang, 2015) developed efficient ways to delete data from certain statistical query algorithms and coined the term "machine unlearning". However, their methods can only be used for very structured problems and are not applicable to more complex machine-learning algorithms such as the k-means algorithm considered in (Ginart et al., 2019) or random forest algorithms (Brophy and Lowd, 2021). (Ginart et al., 2019) gave an efficient deletion algorithm for the k-means clustering problem and gave the first definition of effective data deletion applicable to randomized algorithms, in terms of statistical indistinguishability. Depending on this statistical indistinguishability criterion, machine unlearning methods are widely classified into exact unlearning (Ginart et al., 2019; Brophy and Lowd, 2021) and approximate unlearning methods (Neel et al., 2021; Nguyen et al., 2020). The goal of exact unlearning is to completely eliminate the influence of unwanted data from the learned model; in this case, the parameter distributions of the unlearned model and the retrained model should match exactly in terms of probability. On the other hand, in approximate unlearning, the influence of the data is removed only partially, i.e., the distributions of the unlearned and retrained models' parameters are close up to small multiplicative and additive terms (Neel et al., 2021). To remove the influence of unwanted data, (Wu et al., 2020) proposed a parameter perturbation technique using the gradients cached during the training process. Even though it is fast in terms of computational time, it is quite memory intensive due to the storage of cached gradients. To mitigate this issue, (Guo et al., 2020; Graves et al., 2021) proposed to remove the influence using the method of influence functions (Koh and Liang, 2017). However, these methods are computationally expensive due to the Hessian inversion involved and are limited to small convex models. To extend the idea of influence removal of unwanted data to non-convex models such as deep neural networks, (Golatkar et al., 2020) proposed a scrubbing mechanism for deep networks in a classification setting. Inspired by the same motivation of unlearning in classification models, (Tanno et al., 2022) proposed a mechanism based on the variational Bayesian approach (Nguyen et al., 2020).
Even though all of these methods achieve unlearning, they fail to generalize to a setting where the underlying datasets are inaccessible. All these methods require full or partial access to the training dataset and sometimes even the test dataset (Tanno et al., 2022). To solve this problem, (Chundawat et al., 2023) extended classifier unlearning to a zero-shot setting where dataset access is not required. However, it is unknown how these techniques could be applied to unsupervised models such as state-of-the-art generative models. So, this work proposes to fill this gap by unlearning undesired features produced by a pre-trained GAN in a zero-shot setting.
### Few-Shot Generative Domain Adaptation
The area of few-shot generative domain adaptation deals with the problem where a pre-trained generative model is adapted to a target domain using very few samples. A general strategy is to fine-tune the model on target data using appropriate regularizers. E.g., Wang et al. (2018) observed that fine-tuning a single pre-trained GAN is good enough for adaptation. However, due to the limited amount of target data, this could lead to mode collapse, hence Noguchi and Harada (2019) proposed to fine-tune only the batch statistics of the model, i.e., only the scale and shift parameters of the normalization layers. However, such a strategy can be very restrictive in practice. To overcome this issue, Wang et al. (2020) proposed to append a 'miner' network before the generator. In particular, they propose a two-stage framework, where the miner network is first trained to appropriately transform the input latent space to capture the target domain distribution, and then the whole pipeline is re-trained using target data. While these fine-tuning based methods give equal weightage to all the parameters of the generator, Li et al. (2020) proposed to fine-tune the parameters using Elastic Weight Consolidation (EWC). In particular, EWC is used to penalize large changes in important parameters, where the importance is quantified using the Fisher information while adapting the pre-trained GAN. Mo et al. (2020) showed that fine-tuning a GAN by freezing the lower layers of the discriminator is also good enough in the few-shot setting. Recently, a string of work (Ojha et al., 2021; Xiao et al., 2022; Lee et al., 2021) focuses on few-shot adaptation by preserving cross-domain correspondence. Lastly, Mondal et al. (2022) suggested an inference-time optimization approach where they prepend a latent learner, which is optimized every time a new set of images is to be generated from the target domain. As mentioned earlier, our approach involves an adaptation stage, where we adapt the pre-trained GAN to the negative samples provided by the user. In practice, the number of negative samples provided by the user is very small; hence such an adaptation falls under the category of few-shot generative domain adaptation. Hence, we make use of EWC (Li et al., 2020) for this adaptation phase (cf. Section 3.2 for details).
## 3 Proposed Methodology
### Problem Formulation and Method Overview
Consider the generator \(G_{\theta_{G}}\) of a pre-trained GAN with parameters \(\theta_{G}\). The GAN is trained using a dataset \(\mathcal{D}=\{\mathbf{x}_{i}\}_{i=1}^{|\mathcal{D}|}\), where \(\mathbf{x}_{i}\stackrel{{ iid}}{{\sim}}p_{X}(x)\). Using the feedback-based framework (Moon et al., 2023), we obtain a few negative and positive samples, marked by the user.
Specifically, the user is provided with \(n\) samples \(\mathcal{S}=\{\mathbf{y}_{i}\}_{i=1}^{n}\), where \(\mathbf{y}_{i}\) are samples generated by the pre-trained GAN. The user identifies a subset of these samples \(\mathcal{S}_{n}=\{\mathbf{y}_{i}\}_{i\in s_{n}}\) as negative samples, i.e. samples with the undesired features, and the rest of the samples \(\mathcal{S}_{p}=\{\mathbf{y}_{i}\}_{i\in s_{p}}\) as positive samples, i.e. samples that do not possess the undesired features. Here, \(s_{p}\) and \(s_{n}\) are index sets such that \(s_{p}\cup s_{n}=\{1,2,\dots,n\}\) and \(s_{p}\cap s_{n}=\emptyset\). Given this, the goal of unlearning is to learn parameters \(\theta_{P}\) such that the generator \(G_{\theta_{P}}\) generates only positive samples. In other words, the parameters \(\theta_{P}\) should lead to unlearning of the undesired features. In this work, we adopt a two-stage approach for unlearning the undesired features. In Stage 1, we adapt the pre-trained generator \(G_{\theta_{G}}\) on the negative samples. This step gives us parameters \(\theta_{N}\) such that \(G_{\theta_{N}}\) generates only negative samples. In Stage 2, we actually unlearn the undesired feature by training the original generator \(G_{\theta_{G}}\) on the positive samples using the usual adversarial loss, while adding an additional regularization term that ensures that the learned parameters stay far from \(\theta_{N}\). We call this regularization term the _repulsion_ loss, as it repels the learned parameters from \(\theta_{N}\). We describe each of these stages in detail in the subsequent sections. ### Stage-1: Negative Adaptation Inspired by (Tanno et al., 2022), the first stage involves adapting the pre-trained generator \(G_{\theta_{G}}\) on the negative samples \(\mathcal{S}_{n}\) obtained through feedback from the user. The aim here is to obtain parameters \(\theta_{N}\) such that the generator \(G_{\theta_{N}}\) only generates samples that possess the undesired feature. However, one thing to note is that the number of negative samples marked by the user (\(|\mathcal{S}_{n}|\)) might be quite small (of the order of a few hundred). Directly adapting a pre-trained GAN with such a small number of samples could lead to catastrophic forgetting (McClelland et al., 1995; McCloskey and Cohen, 1989). Thankfully, there is a rich literature on few-shot generative domain adaptation; see Section 2.2 for a discussion. Here, we use one of the simplest methods, namely Elastic Weight Consolidation (EWC) based adaptation (Li et al., 2020), mainly because of its simplicity and ease of implementation. EWC-based adaptation relies on the simple observation that the 'rate of change' of weights is different for different layers; i.e., different layers need to be regularized differently. Further, this 'rate of change' is observed to be inversely proportional to the Fisher information \(F\) of the corresponding weights. As a consequence, the Fisher information can be used to penalize changes in the weights of different layers. In our context, we want to adapt the pre-trained GAN on the negative samples. 
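To make the notation concrete, the following minimal sketch (plain Python/NumPy; the helper name `is_negative` and the toy feedback rule are ours, purely for illustration) shows how the user feedback partitions the generated samples into \(\mathcal{S}_{p}\) and \(\mathcal{S}_{n}\):

```python
import numpy as np

def partition_feedback(samples, is_negative):
    """Split the n generated samples S into positive/negative subsets.

    samples     : array of n generated samples y_1, ..., y_n
    is_negative : callable returning True when a sample shows the
                  undesired feature (user feedback or a proxy classifier)
    """
    s_n = [i for i, y in enumerate(samples) if is_negative(y)]   # index set s_n
    s_p = [i for i in range(len(samples)) if i not in s_n]       # index set s_p
    assert set(s_p) | set(s_n) == set(range(len(samples)))       # s_p U s_n = {1,...,n}
    assert not (set(s_p) & set(s_n))                             # s_p and s_n are disjoint
    return samples[s_p], samples[s_n]                            # S_p, S_n

# Toy usage: 5000 stand-in "samples" and an arbitrary feedback rule.
samples = np.random.randn(5000, 64)
S_p, S_n = partition_feedback(samples, lambda y: y[0] > 1.5)
```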
Hence, the optimal parameters \(\theta_{N}\) for the adapted GAN can be obtained by solving the following optimization problem: \[\theta_{N},\phi_{N}=\arg\min_{\theta}\max_{\phi}\mathcal{L}_{adv}+\gamma\mathcal{L}_{adapt} \tag{1}\] where, \[\mathcal{L}_{adv}=\mathop{\mathbb{E}}_{\mathbf{x}\sim p_{\mathcal{S}_{n}}(x)}\left[\log D_{\phi}(\mathbf{x})\right]+\mathop{\mathbb{E}}_{\mathbf{z}\sim p_{Z}(z)}\left[\log(1-D_{\phi}(G_{\theta}(\mathbf{z})))\right] \tag{2}\] \[\mathcal{L}_{adapt}=\lambda\sum_{i}F_{i}(\theta_{i}-\theta_{G,i})^{2} \tag{3}\] \[F=\mathbb{E}\left[-\frac{\partial^{2}}{\partial\theta_{G}^{2}}\mathcal{L}(\mathcal{S}_{n}\mid\theta_{G})\right] \tag{4}\] Here, \(p_{Z}(z)\) is the standard Gaussian, \(p_{\mathcal{S}_{n}}(x)\) is the distribution induced by \(\mathcal{S}_{n}\), and \(\mathcal{L}(\mathcal{S}_{n}\mid\theta_{G})\) is the log-likelihood, which is calculated through the binary cross-entropy loss using the output of the discriminator, as in Li et al. (2020). In practice, we train multiple instances of the generator to obtain multiple \(\theta_{N}\). Specifically, given the negative samples \(\mathcal{S}_{n}\), we adapt the pre-trained GAN \(k\) times to obtain \(\{\theta_{N}^{j}\}_{j=1}^{k}\). ### Stage-2: Unlearning During the second stage of our method, the actual unlearning of the undesired features takes place. In particular, this stage is motivated by the observation that there exist meaningful directions in the parameter space of the generator: extrapolating the generator parameters away from \(\theta_{N}\) (i.e., moving along the direction \(\theta_{G}-\theta_{N}\)) suppresses the undesired feature, as shown in Fig. 1. However, such extrapolation-based schemes could lead to degradation in the quality of generated images. Nevertheless, the above observation indicates that traversing away from \(\theta_{N}\) helps us to erase or unlearn the undesired features. Therefore, a logical question to ask is: can we traverse the parameter space of a generator in such a way that the parameters remain far from \(\theta_{N}\) while making sure that the quality of generated samples does not degrade? To solve this problem, we make use of the positive samples \(\mathcal{S}_{p}\) provided by the user. In particular, we propose to re-train the given GAN on the positive samples while incorporating a repulsion loss component that _'repels'_ or keeps the learned parameters away from \(\theta_{N}\). Mathematically, we obtain the parameters after unlearning, \(\theta_{P},\phi_{P}\), by solving the following optimization problem: \[\theta_{P},\phi_{P}=\arg\min_{\theta}\max_{\phi}\mathcal{L}_{adv}^{{}^{\prime}}+\gamma\mathcal{L}_{repulsion} \tag{5}\] \[\text{where,}\ \ \mathcal{L}_{adv}^{{}^{\prime}}=\underset{\mathbf{x}\sim p_{\mathcal{S}_{p}}(x)}{\mathbb{E}}\left[\log D_{\phi}(\mathbf{x})\right]+\underset{\mathbf{z}\sim p_{Z}(z)}{\mathbb{E}}\left[\log(1-D_{\phi}(G_{\theta}(\mathbf{z})))\right] \tag{6}\] Here, \(p_{\mathcal{S}_{p}}(x)\) is the distribution induced by the positive samples \(\mathcal{S}_{p}\), and \(\mathcal{L}_{repulsion}\) is the repulsion loss. The repulsion loss is chosen such that it encourages the learned parameters to be far from the \(\theta_{N}\) obtained from Stage-1. Further, \(\mathcal{L}_{adv}^{{}^{\prime}}\) encourages the parameters to capture the desired distribution \(p_{\mathcal{S}_{p}}(x)\). Hence, the combination of these two terms makes sure that we traverse the parameter space in a way that maintains the quality of the generated samples while unlearning the undesired features. 
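A compact PyTorch-style sketch of the two objectives is given below. It is illustrative only: it assumes generic `generator`/`discriminator` modules, uses the common diagonal approximation of the Fisher information via squared log-likelihood gradients, and leaves the repulsion term as a user-supplied callable (one concrete choice is given in the next subsection). All function names are ours, not from the paper's code.

```python
import torch
import torch.nn.functional as F

def fisher_diagonal(generator, discriminator, z_batch):
    """Diagonal Fisher estimate for the generator parameters (cf. Eq. 4),
    using the discriminator output as a log-likelihood proxy (Li et al., 2020)."""
    logits = discriminator(generator(z_batch))
    loglik = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    grads = torch.autograd.grad(loglik, list(generator.parameters()))
    return [g.detach() ** 2 for g in grads]        # squared gradients ~ diagonal Fisher

def stage1_generator_loss(generator, discriminator, z, theta_G, fisher, lam, gamma):
    """Stage-1 (Eqs. 1-3): adversarial loss on negative samples + EWC anchor to theta_G.

    theta_G : list of detached tensors holding the pre-trained generator parameters.
    """
    logits = discriminator(generator(z))
    l_adv = torch.log(1.0 - torch.sigmoid(logits) + 1e-8).mean()   # theta-dependent part of Eq. 2
    l_adapt = lam * sum((f * (p - p0) ** 2).sum()
                        for f, p, p0 in zip(fisher, generator.parameters(), theta_G))
    return l_adv + gamma * l_adapt

def stage2_generator_loss(generator, discriminator, z, theta_N, repulsion_fn, gamma):
    """Stage-2 (Eqs. 5-6): adversarial loss on positive samples + repulsion from theta_N."""
    logits = discriminator(generator(z))
    l_adv = torch.log(1.0 - torch.sigmoid(logits) + 1e-8).mean()
    l_rep = repulsion_fn(list(generator.parameters()), theta_N)
    return l_adv + gamma * l_rep
```

In a full training loop, the discriminator parameters \(\phi\) would be updated in alternation to maximize \(\mathcal{L}_{adv}\), exactly as in standard GAN training; the maximization step is omitted here for brevity.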
### Choice of Repulsion Loss As mentioned above, the repulsion loss should encourage the learned parameters to traverse away from the \(\theta_{N}\) obtained from the negative adaptation stage. There is a lineage of research in Bayesian learning called Deep Ensembles, where multiple MAP estimates of a network are used to approximate the full-data posterior (Levin et al., 1990; Hansen and Salamon, 1990; Breiman, 1996; Lakshminarayanan et al., 2017; Ovadia et al., 2019; Wilson and Izmailov, 2020; D'Angelo and Fortuin, 2021a). The main issue faced in this area is the diversity of the members of the ensemble: if the members are not diverse enough, then the posterior approximation might not capture the multi-modal nature of the full-data posterior. As a consequence, several methods have been proposed to increase the diversity of the members of the ensemble (Huang et al., 2016; Von Oswald et al., 2020; D'Angelo and Fortuin, 2021b; Wenzel et al., 2020; D'Angelo and Fortuin, 2021a). Inspired by these developments, we make use of the technique proposed in D'Angelo and Fortuin (2021a), where the members of an ensemble interact with each other through a repulsive force that encourages diversity in the ensemble. In particular, we explore three choices for the repulsion loss: \[\mathcal{L}_{repulsion}^{\text{IL2}}=\frac{1}{||\theta-\theta_{N}||_{2}^{2}},\ \ \ \ \mathcal{L}_{repulsion}^{\text{NL2}}=-||\theta-\theta_{N}||_{2}^{2},\ \ \ \ \mathcal{L}_{repulsion}^{\text{EL2}}=\exp(-\alpha||\theta-\theta_{N}||_{2}^{2}) \tag{7}\] where \(\mathcal{L}_{repulsion}^{\text{IL2}}\), \(\mathcal{L}_{repulsion}^{\text{NL2}}\) and \(\mathcal{L}_{repulsion}^{\text{EL2}}\) are the inverse \(\ell 2\), negative \(\ell 2\) and exponential negative \(\ell 2\) loss between \(\theta\) and \(\theta_{N}\), respectively. It can be seen that minimizing any of these choices forces \(\theta\) away from \(\theta_{N}\), consequently serving our purpose.
```
Require: Pre-trained parameters (\(\theta_{G}\), \(\phi_{D}\)), Negative samples (\(\mathcal{S}_{n}\)), Number of adapted models (\(k\))
Initialize: \(j\gets 1\)
while \(j\leq k\) do
  \(\theta\leftarrow\theta_{G}\), \(\phi\leftarrow\phi_{D}\)
  repeat
    Sample \(\mathbf{x}\sim\mathcal{S}_{n}\) and \(\mathbf{z}\sim\mathcal{N}(0,I)\)
    \(\mathcal{L}_{adv}\leftarrow\log D_{\phi}(\mathbf{x})+\log\left(1-D_{\phi}(G_{\theta}(\mathbf{z}))\right)\)
    \(\mathcal{L}_{adapt}\leftarrow\lambda\sum_{i}F_{i}(\theta_{i}-\theta_{G,i})^{2}\)
    \(\theta\leftarrow\theta-\eta\nabla_{\theta}(\mathcal{L}_{adv}+\mathcal{L}_{adapt})\)
  until convergence
  \(\theta_{N}^{j}\leftarrow\theta\)
  \(j\gets j+1\)
endwhile
```
**Algorithm 1** Negative Adaptation ## 4 Experiments and Results ### Dataset In this section we demonstrate the results pertaining to our method both qualitatively and quantitatively. As discussed earlier, in unlearning we want the generator of the GAN to 'forget' a particular feature. In other words, after unlearning, the generator should not generate images containing the undesired (or unlearnt) feature. As discussed earlier, we look at two types of unlearning settings: (i) Class-level unlearning and (ii) Feature-level unlearning. We use the MNIST dataset (LeCun et al., 1998) for class-level unlearning. It consists of \(60,000\)\(28\times 28\) dimensional black and white images of handwritten digits. For our purpose, we take three digit classes, 1, 4, and 8, for unlearning. Similarly, we use the CelebA-HQ dataset (Liu et al., 2015) for feature-level unlearning. 
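The three candidates in Eq. 7 translate directly into code. The sketch below (our own helper names, comparing parameters as flattened vectors; the value of `alpha` is purely illustrative) can be plugged in as `repulsion_fn` in the Stage-2 loss sketched earlier; when several adapted parameter sets \(\{\theta_{N}^{j}\}\) from Algorithm 1 are available, one natural choice is to sum the loss over them.

```python
import torch

def _flat(params):
    """Concatenate a list of parameter tensors into a single vector."""
    return torch.cat([p.reshape(-1) for p in params])

def repulsion_il2(theta, theta_N):                 # inverse l2
    return 1.0 / (_flat(theta) - _flat(theta_N)).pow(2).sum()

def repulsion_nl2(theta, theta_N):                 # negative l2
    return -(_flat(theta) - _flat(theta_N)).pow(2).sum()

def repulsion_el2(theta, theta_N, alpha=1e-3):     # exponential negative l2
    return torch.exp(-alpha * (_flat(theta) - _flat(theta_N)).pow(2).sum())

# Minimizing any of the three pushes theta away from theta_N, e.g.:
# loss = stage2_generator_loss(G, D, z, theta_N, repulsion_el2, gamma=1.0)
```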
CelebA-HQ contains \(30,000\) RGB high-quality celebrity face images of dimension \(256\times 256\). Here, we unlearn the following subtle features: (a) Bangs, (b) Hats, (c) Bald, and (d) Eyeglasses. ### Experimental Details **Training Details:** We use the state-of-the-art and widely used StyleGAN2 (Karras et al., 2020) to demonstrate the performance of the proposed method on the tasks mentioned in the previous section. The StyleGAN2 is trained on the entire MNIST and CelebA-HQ datasets to obtain the pre-trained GAN from which specific features are to be unlearnt. The FID of samples generated by the pre-trained GAN for MNIST is \(5.4\), whereas for CelebA-HQ it is \(5.3\). The training details of StyleGAN2 are given in Appendix Section A.1.1. **Unlearning Details:** As mentioned earlier, we operate under the feedback-based framework. To obtain the feedback, we employ a pre-trained classifier. Specifically, we pre-train the classifier to classify a given image as desired or undesired (depending upon the feature under consideration). We classify 5000 generated images from the pre-trained GAN into positive and negative samples using this pre-trained classifier. The generated samples containing the undesired features are marked as negative samples and the rest of the images are marked as positive samples. These samples are then used in Stage-1 and Stage-2 of the proposed method for unlearning, as described in Section 3. We evaluate our results using all the choices of repulsion loss mentioned in Eq. 7. For reproducibility, we provide all the hyper-parameters and details in Appendix Sections A.1.2 and A.1.3. ### Baselines and Evaluation Metrics **Baselines**: To the best of our knowledge, ours is the first work that addresses the problem of unlearning in high-fidelity generative models such as StyleGAN. Hence, we evaluate and compare our method with all the candidates for the repulsion loss presented in Eq. 7. Further, we also include the results with extrapolation in the parameter space, as demonstrated in Figure 1. We evaluate the performance of each method across three independent runs and report the results in the form of mean \(\pm\) std. dev. **Evaluation Metrics**: Various metrics have been devised for assessing machine unlearning methods (Xu et al., 2020). To gauge the effectiveness of our proposed techniques and the baseline methods, we utilize three fundamental evaluation metrics: 1. **Percentage of Un-Learning (PUL)**: This metric quantifies the extent of unlearning by measuring the reduction in the number of negative samples generated by the GAN post-unlearning compared to the pre-unlearning state. PUL is computed as: \[\text{PUL}=\frac{(S_{n})_{\theta_{G}}-(S_{n})_{\theta_{P}}}{(S_{n})_{\theta_{G}}}\times 100\] (8) where \((S_{n})_{\theta_{G}}\) and \((S_{n})_{\theta_{P}}\) represent the number of negative samples generated by the original GAN and the GAN after unlearning, respectively. We generate 15,000 random samples from both GANs and employ a pre-trained classifier (as detailed in Section 4.2) to identify the negative samples. PUL provides a quantitative measure of how effective the unlearning algorithm is in eliminating the undesired feature from the GAN. 2. **Fréchet Inception Distance (FID)**: While PUL quantifies the degree of unlearning, it does not assess the quality of samples generated by the GAN post-unlearning. Hence, we calculate the FID (Heusel et al., 2017) between the generated samples and the original dataset. 
For correctness, samples containing the undesired features are removed from the original dataset, as the unlearning process aims to generate samples from the data distribution after removing the undesired features. 3. **Retraining FID (Ret-FID)**: Ultimately, the ideal objective of unlearning is to produce a model as if it were trained on data entirely devoid of undesired features. To illustrate this facet of unlearning, we compute the FID between the outputs of the GAN after unlearning and the GAN trained from scratch on the dataset obtained after eliminating the undesired features. Please note that the original dataset is unavailable during the unlearning process; consequently, it is used solely for evaluation purposes. ### Unlearning Results We present our results and observations on MNIST and CelebA-HQ in Tables 1 and 2, respectively. We observe that the choice of \(\mathcal{L}^{\text{EL2}}_{repulsion}\) as the repulsion loss provides the highest PUL in most cases for both datasets. Further, it also provides the best FID and Ret-FID compared to the other choices of repulsion loss. \(\mathcal{L}^{\text{NL2}}_{repulsion}\) stands out as the second best in these metrics in most cases. For MNIST, we observe in Table 1 that the proposed method with \(\mathcal{L}^{\text{EL2}}_{repulsion}\) as the repulsion loss consistently provides a PUL above \(95\%\) while giving the best FID and Ret-FID compared to the other methods. We also observe that Extrapolation in parameter space leads to significant PUL, albeit the FID and Ret-FID are considerably worse compared to the proposed method under the different repulsion losses. This shows that the proposed method solves the task of class-level unlearning reasonably well. Next, the feature-level unlearning results on CelebA-HQ are presented in Table 2. It can be seen that the proposed method with \(\mathcal{L}^{\text{EL2}}_{repulsion}\) as the repulsion loss consistently provides a PUL above \(90\%\), illustrating significant unlearning of undesired features. Further, the FID and Ret-FID using \(\mathcal{L}^{\text{EL2}}_{repulsion}\) stand out as the best among all the methods. Furthermore, we observe that the FID of the samples generated by the unlearnt GAN (on Hats) worsens by about \(4.15\) points when using \(\mathcal{L}^{\text{EL2}}_{repulsion}\), and by \(4.3\) and \(6.01\) points when using \(\mathcal{L}^{\text{NL2}}_{repulsion}\) and \(\mathcal{L}^{\text{IL2}}_{repulsion}\), as compared to the pre-trained GAN. This demonstrates that the proposed method is able to unlearn the undesired feature (Hats) while compromising only slightly on the quality of the generated samples. On the other hand, we notice that Extrapolation in parameter space provides a decent PUL; however, its FID and Ret-FID scores are much worse. This supports our claim that extrapolation might unlearn the undesired feature, but it deteriorates the quality of the generated samples significantly. A visual illustration of these methods is shown in Figure 3. Here, we observe that the proposed method effectively unlearns the undesired feature. Moreover, it can be seen that unlearning through extrapolation leads to unlearning of correlated features as well. For example, Bangs are correlated with the female attribute, and the unlearning of Bangs through extrapolation also leads to unlearning of the female attribute, which is not desired. In contrast, the proposed method unlearns only Bangs while keeping the other features intact. 
Similar visual results for MNIST are provided in Appendix Section A.2. ### Ablation Study Lastly, we present an ablation study to observe the effect of the repulsion loss. In particular, we examine whether adapting the pre-trained GAN only on the positive samples leads to the desired level of unlearning. Our observations on CelebA-HQ for Bangs and Hats are presented in Table 3. Here, we use \(\mathcal{L}_{repulsion}^{\text{EL2}}\) as the repulsion loss. It can be seen that using only the adversarial loss does not lead to significant unlearning of the undesired feature: adding the repulsion loss provides an increase of about \(10.56\%\) and \(9.72\%\) in PUL. The FID increases by a minor \(0.66\) points on Bangs, while it decreases by \(0.21\) points on Hats. Hence, we conclude that the repulsion loss is indeed crucial for unlearning. ## 5 Conclusion In this work, we present a methodology to prevent the generation of samples containing undesired features from a pre-trained GAN. It is worth mentioning that our method does not assume the availability of the training dataset of the pre-trained GAN, so it generalizes to zero-shot settings. In spite of these advantages, our methodology has some limitations, such as changes in correlated features while unlearning the undesired features. Due to the high entanglement between semantic features, this kind of impact on other features is visible in the generated outputs. Despite these limitations, we believe that our work is an important step towards unlearning in deep generative models and addresses the widespread societal concerns of biased, racist, and harmful content creation from these models.
2309.06478
Molecular and Ionized Gas in Tidal Dwarf Galaxies: The Spatially Resolved Star-Formation Relation
Tidal dwarf galaxies (TDGs) are low-mass objects that form within tidal and/or collisional debris ejected from more massive interacting galaxies. We use CO($1-0$) observations from ALMA and integral-field spectroscopy from MUSE to study molecular and ionized gas in three TDGs: two around the collisional galaxy NGC 5291 and one in the late-stage merger NGC 7252. The CO and H$\alpha$ emission is more compact than the HI emission and displaced from the HI dynamical center, so these gas phases cannot be used to study the internal dynamics of TDGs. We use CO, HI, and H$\alpha$ data to measure the surface densities of molecular gas ($\Sigma_{\rm mol}$), atomic gas ($\Sigma_{\rm atom}$) and star-formation rate ($\Sigma_{\rm SFR}$), respectively. We confirm that TDGs follow the same spatially integrated $\Sigma_{\rm SFR}-\Sigma_{\rm gas}$ relation of regular galaxies, where $\Sigma_{\rm gas} = \Sigma_{\rm mol} + \Sigma_{\rm atom}$, even though they are HI dominated. We find a more complex behaviour in terms of the spatially resolved $\Sigma_{\rm SFR}-\Sigma_{\rm mol}$ relation on sub-kpc scales. The majority ($\sim$60$\%$) of SF regions in TDGs lie on the same $\Sigma_{\rm SFR}-\Sigma_{\rm mol}$ relation of normal spiral galaxies but show a higher dispersion around the mean. The remaining fraction of SF regions ($\sim$40$\%$) lie in the starburst region and are associated with the formation of massive super star clusters, as shown by Hubble Space Telescope images. We conclude that the local SF activity in TDGs proceeds in a hybrid fashion, with some regions comparable to normal spiral galaxies and others to extreme starbursts.
Navyasree Kovakkuni, Federico Lelli, Pierre-alain Duc, Médéric Boquien, Jonathan Braine, Elias Brinks, Vassilis Charmandaris, Francoise Combes, Jeremy Fensch, Ute Lisenfeld, Stacy McGaugh, J. Chris Mihos, Marcel. S. Pawlowski, Yves. Revaz, Peter. M. Weilbacher
2023-09-12T18:00:02Z
http://arxiv.org/abs/2309.06478v1
# Molecular and Ionized Gas in Tidal Dwarf Galaxies: The Spatially Resolved Star-Formation Relation ###### Abstract Tidal dwarf galaxies (TDGs) are low-mass objects that form within tidal and/or collisional debris ejected from more massive interacting galaxies. We use CO\((1-0)\) observations from ALMA and integral-field spectroscopy from MUSE to study molecular and ionized gas in three TDGs: two around the collisional galaxy NGC 5291 and one in the late-stage merger NGC 7252. The CO and H\(\alpha\) emission is more compact than the H i emission and displaced from the H i dynamical center, so these gas phases cannot be used to study the internal dynamics of TDGs. We use CO, H i, and H\(\alpha\) data to measure the surface densities of molecular gas (\(\Sigma_{\rm mol}\)), atomic gas (\(\Sigma_{\rm atom}\)) and star-formation rate (\(\Sigma_{\rm SFR}\)), respectively. We confirm that TDGs follow the same spatially integrated \(\Sigma_{\rm SFR}-\Sigma_{\rm gas}\) relation of regular galaxies, where \(\Sigma_{\rm gas}=\Sigma_{\rm mol}+\Sigma_{\rm atom}\), even though they are H i dominated. We find a more complex behaviour in terms of the spatially resolved \(\Sigma_{\rm SFR}-\Sigma_{\rm mol}\) relation on sub-kpc scales. The majority (\(\sim\)60%) of SF regions in TDGs lie on the same \(\Sigma_{\rm SFR}-\Sigma_{\rm mol}\) relation of normal spiral galaxies but show a higher dispersion around the mean. The remaining fraction of SF regions (\(\sim\)40%) lie in the starburst region and are associated with the formation of massive super star clusters, as shown by Hubble Space Telescope images. We conclude that the local SF activity in TDGs proceeds in a hybrid fashion, with some regions comparable to normal spiral galaxies and others to extreme starbursts. keywords: galaxies: dwarf - galaxies: evolution - galaxies: formation - galaxies: interactions - galaxies: ISM -- galaxies: star formation ## 1 Introduction The process of star formation (SF) plays a key role in the formation and evolution of galaxies. Key insights into the SF process are given by empirical relations that connect the star formation rate (SFR) of a galaxy to the availability of gas in the interstellar medium (ISM). One such relation is the Kennicutt-Schmidt (KS) relation. In its original form, the Schmidt (1959) relation connected volume densities of SFR (\(\rho_{\rm SFR}\)) and atomic gas mass (\(\rho_{\rm HI}\)) in star-forming regions of the Milky Way: \[\rho_{\rm SFR}\propto\rho_{\rm HI}^{n}. \tag{1}\] Subsequent studies (Kennicutt Jr 1998b) of star-forming galaxies revealed a tight relation between the disk-averaged SFR surface densities (\(\Sigma_{\rm SFR}\)) and total (atomic plus molecular) gas mass surface densities (\(\Sigma_{\rm gas}\)): \[\Sigma_{\rm SFR}=A\,\Sigma_{\rm gas}^{N}. \tag{2}\] With the advent of multi-wavelength observations at high angular resolution, it has become possible to study the KS relation in a spatially resolved fashion on kpc-scales in a variety of environments, from spiral galaxies to interacting objects (e.g., Bigiel et al., 2008; Leroy et al., 2008; Boquien et al., 2011). These works suggested that the SFR surface density correlates more strongly with molecular gas (H\({}_{2}\)) than atomic gas in the inner H\({}_{2}\)-dominated regions of star-forming disks. The situation, however, remains unclear in the H I-dominated regime, typical of dwarf galaxies (Roychowdhury et al., 2014, 2015) as well as the outermost parts of spiral galaxies (Bigiel et al., 2010). 
In this regime, flares in the outer gaseous disks (i.e., a radial increase of the disk thickness) may play an important role (Bacchini et al., 2019, 2020), suggesting that the volume density of total gas (atomic plus molecular) best correlates with the volume density of SFR, in analogy to the original form in Eq. 1. A key question is whether every galaxy follows the same KS relation, thus whether the SF process is "universal" or not on sub-kpc scales. For example, there is a clear link between SF activity and galaxy interactions (e.g., Ellison et al., 2013). Tidal interactions lead to gas inflows towards the galaxy centers, which can temporarily enhance the SFRs, and move the resulting systems above the mean KS relation in the so-called starburst regime (e.g. Barnes & Hernquist, 1991; Renaud et al., 2014; Ellison et al., 2020). At the same time, galaxy collisions expel gas and stars into intergalactic space, leading to the formation of tails and bridges in which new stars can form. SF, indeed, can occur within gas debris surrounding interacting systems, sometimes even at 100 kpc away from the parent galaxies (Mirabel et al., 1992; Duc et al., 2006; Boquien et al., 2007, 2011). Is the SF occurring in such extreme environments proceeding in a similar way as in galaxy disks? Tidal dwarf galaxies (TDGs) are self-gravitating objects found within tidal debris, which show in-situ SF and have masses and sizes comparable to "normal" dwarf galaxies (Duc et al., 2006). Zwicky (1956) was the first to suggest the possible formation of TDGs around interacting galaxies; this hypothesis was later confirmed by several observational and theoretical studies (e.g. Mirabel et al., 1992; Barnes & Hernquist, 1992; Elmegreen et al., 1993). Hereafter, for simplicity, we will use the term TDG to also include newborn galaxies that form within collisional debris (rather than tidal ones), such as the SF complexes in the collisional ring of NGC 5291 (Bournaud et al., 2007). In addition, we will refer to "bona-fide TDGs" to indicate systems that show internal kinematics decoupled from the surrounding debris, pointing to self-gravity within a local potential well (e.g., Lelli et al., 2015). Bona-fide TDGs have been identified around several interacting systems (e.g., Duc et al., 2006; Lelli et al., 2015). TDGs form out of gas that has been pre-enriched in their parent massive galaxies, giving them higher gas-phase metallicities (about \(0.3-0.5Z_{\odot}\)) than "classical" dwarf galaxies (Duc & Mirabel, 1998). Thus, contrarily to most dwarf galaxies, TDGs are easily detected in the CO emission line, and it is sensible to use the Milky-Way \(X_{\rm CO}\) factor to convert the observed CO flux density into H\({}_{2}\) column densities (Braine et al., 2001; Lisenfeld et al., 2016; Querejeta et al., 2021). Braine et al. (2001) used single-dish CO observations of eight TDGs to conclude that they follow the same galaxy-averaged KS relation of usual galaxies. Boquien et al. (2011) reached the same conclusion for a TDG candidate in the interacting system Arp 158. On the other hand, Lisenfeld et al. (2016) presented a spatially resolved study of a TDG in the Virgo Cluster and found that it lies below the mean KS relation. Finally, Querejeta et al. (2021) analysed high-resolution ALMA observations of a TDG in the interacting system Arp 94, and found that it may lie either on or off the KS relation, depending on whether one considers the whole CO flux or only the one associated to giant molecular clouds. 
The diverse outcome of these works may point to different behaviour in the galaxy-averaged and spatially-resolved KS relation and/or intrinsic differences in the SF properties of individual TDGs, potentially related to their different formation and evolutionary histories. New studies are needed to clarify the situation. In this paper, we study the spatially resolved KS relation in a sample of three TDGs: NGC 5291N, NGC 5291S, and NGC 7252NW. We probe their molecular gas content using CO(1-0) observations from the Atacama Large Millimeter/submillimeter Array (ALMA), and their ionized gas content (H\(\alpha\) and H\(\beta\) emission) using integral-field spectroscopy (IFS) from the Multi-Unit Spectroscopic Explorer (MUSE) mounted on the Very Large Telescope (VLT). New observations and ancillary data are described in Sect. 2. Results on the gas distribution and kinematics as well as on the KS relation are presented in Sect. 3. Finally, we summarize our findings in Sect. 4 ## 2 Observations ### Galaxy sample We obtained high-resolution ALMA observations for three TDGs that were selected from the sample of Lelli et al. (2015), based on the availability of single-dish CO fluxes and high-quality H I maps. Two of them are part of the NGC 5291 system; the remaining one is part of the NGC 7252 merger. The location of these TDGs within the overall structure of the parent system can be appreciated in Figure 1 of Lelli et al. (2015). Table 1 lists the general properties of these TDGs. \begin{table} \begin{tabular}{c c c c c c c c c} \hline Galaxy & R.A. & Dec. & Dist. & \(z\) & V\({}_{\rm sys}\) & SFR & M\({}_{\rm mol}\) & M\({}_{\rm atom}\) & Area \\ & (J2000) & (J2000) & (Mpc) & & (km s\({}^{-1}\)) & (M\({}_{\odot}\) yr\({}^{-1}\)) & (\(10^{7}\) M\({}_{\odot}\)) & (\(10^{7}\) M\({}_{\odot}\)) & (kpc\({}^{2}\)) \\ \hline NGC 5291N & 13 47 20.3 & -30 20 54 & 62 & 0.014 & 4228.8 & 0.6\(\pm\)0.1 & 5.5\(\pm\)1.1 & 163.6\(\pm\)16.7 & 25.0 \\ NGC 5291S & 13 47 22.7 & -30 27 40 & 62 & 0.016 & 4780.4 & 0.2\(\pm\)0.1 & 2.6\(\pm\)0.4 & 124.1\(\pm\)16.1 & 16.0 \\ NGC 7252NW & 22 20 33.7 & -24 37 24 & 66.5 & 0.016 & 4771.6 & 0.06\(\pm\)0.02 & 3.2\(\pm\)0.6 & 11.7\(\pm\)2.2 & 12.3 \\ \hline \end{tabular} \end{table} Table 1: TDG sample. Distances are adopted from Lelli et al. (2015). Redshifts and systemic velocities are computed from H\(\alpha\) emission lines (Sect. 3.2). SFRs, molecular gas masses, and atomic gas masses are computed within the total CO emitting area, but the TDG size is significantly larger (see Sect. 3.3 for details). NGC 5291 is a perturbed early-type galaxy that is surrounded by a giant H I ring (Malphrus et al., 1997), suggesting a past head-on collision (Bournaud et al., 2007). The system is located in the outer region of the galaxy cluster Abell 3574. Early studies by Longmore et al. (1979) in the collisional ring of NGC 5291 revealed the presence of star-forming regions out to 100 kpc from the central galaxy. Subsequently, Duc & Mirabel (1998) found that these star-forming complexes have similar sizes and SFRs of dwarf galaxies but higher metallicities, suggesting a TDG origin. Three star-forming complexes (NGC 5291N, NGC 5291S and NGC 5291SW) are associated with strong H I concentrations that display a velocity gradient decoupled from the underlying collisional material, pointing to rotation within a self-gravitating potential well (Bournaud et al., 2007; Lelli et al., 2015). 
In this paper, we focus on NGC 5291N and NGC 5291S because no single-dish CO observations are available for NGC 5291SW. NGC 7252 (also known as "Atoms for Peace") is a late-stage merger remnant with two gas-rich tidal tails extending to the east and north-west. Numerical simulations (Borne & Richstone, 1991; Hibbard & Mihos, 1995; Chien & Barnes, 2010) were able to reproduce the observed morphology and kinematics of NGC 7252 from the merging of two disk galaxies. Hibbard et al. (1994) identified two TDG candidates in the North-Western and Eastern tails (NGC 7252NW and NGC 7252E). These star-forming complexes were confirmed as bona-fide TDGs based on their H\(\alpha\) kinematics (Bournaud et al., 2004), H I kinematics (Lelli et al., 2015) and relatively high gas metallicities (Lelli et al., 2015). In this paper, we focus on NGC 7252NW because no single-dish CO observations are available for NGC 7252SE. ### ALMA data The three TDGs were observed by the ALMA 12m array in January 2016 (Project 2015.1.00645.S; PI: F. Lelli). The time on source was about 2.2 hrs for both NGC 5291N and NGC 5291S, and about 4.5 hrs for NGC 7252NW. We used ALMA band 3 with a mixed spectral setup, using four spectral windows with a bandwidth of 1875 MHz each. A high-resolution spectral window was centered at the frequency of the redshifted CO(\(1-0\)) line and covered with 3480 channels, providing a spectral resolution of 976.6 kHz (\(\sim\)2.6 km s\({}^{-1}\)). Three low-resolution spectral windows were centered around 99, 100, and 110 GHz to target the mm continuum; they were covered with 128 channels providing a spectral resolution of 31.250 MHz (ranging from \(\sim\)84 to \(\sim\)94 km s\({}^{-1}\)). The observations were pointed at the H I kinematic center of the TDGs and have a field of view of \(\sim\)50'', set by the full-width half-maximum (FWHM) of the primary beam. The data reduction was performed with the Common Astronomy Software Applications (Casa) package (McMullin et al., 2007). The \(uv\) data were flagged and calibrated using the standard Casa pipeline. Both continuum and line data were imaged using the tclean task with a Hogbom deconvolved and Briggs weighting with a robust parameter of 0.5. Continuum images were constructed by combining all four spectral windows, excluding channels with line emission. Continuum emission was detected only in NGC 5291NW and subtracted from the CO(\(1-0\)) line channels using the task uvcontsub. The properties of the CO(\(1-0\)) line cubes are summarized in Table 2. In particular, the spatial resolution (FWHM of the synthesized beam) is about 2'' which corresponds to about 600 pc and 650 pc at the distances of NGC 5291 and NGC 7252, respectively. CO intensity (moment-zero) maps were constructed by summing channels with CO emission and are shown in Fig. 1. They are discussed in Sect. 3.1 for illustrative purpose only: CO fluxes are measured by extracting integrated spectra in various spatial regions as described in Sect. 3.2. No corrections for the primary beam are applied in the moment maps, instead they are applied to the derived fluxes as described in Sect. 3.2. Since the ALMA primary beam is significantly larger than the CO emitting area, the primary beam attenuation has little effect on the moment-zero maps, with the exception of NGC 5291S. For this TDG, the pointing center was chosen in-between two main SF complexes so that the CO emission in the Northern complex (the Southern is not detected) is \(\sim\)10'' offset from the pointing center (see Fig. 1). 
It is also possible that the two SF complexes are two distinct objects, whose individual kinematics cannot be discerned with the available H I observations (Lelli et al., 2015). Correction for primary beam attenuation is described in Sect. 3.2. The CO emission is very compact and confined to one or two major clumps that display no appreciable velocity gradients (Fig. 1), so moment-one and moment-two maps are not very useful. We will discuss the CO kinematics using position-velocity (PV) diagrams that provide the most direct representation of the 3D data (Sect. 3.1). Before extracting integrated spectra, to enhance the signal-to-noise (S/N) ratio of the CO line, we performed Hanning smoothing over three spectral channels, giving a final velocity resolution of \(\sim\)10 km/s. To investigate whether the ALMA interferometric observations may be missing diffuse flux on large scales, we compared spatially-integrated CO spectra from ALMA with those from previous single-dish observations (Braine et al., 2001). The low S/N of the previous single-dish observations do not allow us to quantify the amount of diffuse molecular gas missed by the ALMA interferometric observations. The comparison, therefore, was inconclusive, but we note that a substantial amount of diffuse CO emission has been found in another TDG (J1023+1952) using ALMA total power (TP) and Atacama Compact Array (ACA) observations in addition to the 12m array (Querejeta et al., 2021). It is possible, therefore, that our high-resolution observations are probing only the densest CO emission and may be missing some flux on larger scales. ACA and TP observations (or deep IRAM-30 observations) are needed to check this possibility. ### MUSE data NGC 5291N was observed on 26 June 2014 (Program ID: 60.A-9320; PI: P.-A. Duc) without adaptive optics (AO) for a total exposure time of 1800 sec. These observations are \begin{table} \begin{tabular}{c c c c} \hline Galaxy & Beam & Beam PA & \(\sigma_{\rm cube}\) \\ & (arcsec\(\times\)arcsec) & (degrees) & (mJy beam\({}^{-1}\)) \\ \hline NGC 5291N & \(2.1\times 1.7\) & 75.5 & 0.5 \\ NGC 5291S & \(2.1\times 1.8\) & -4.6 & 0.5 \\ NGC 7252NW & \(2.7\times 1.6\) & -8.3 & 0.5 \\ \hline \end{tabular} \end{table} Table 2: Properties of CO data cubes with a channel width of \(\sim\)5 km s\({}^{-1}\). presented in Fensch et al. (2016), who provides an in-depth study of line ratios and gas metallicity. NGC 5291N was re-observed on 19 June 2017 during a MUSE AO commissioning run (Program ID: 60.A-9100(G)), but the AO system could not significantly improve on the external seeing, so we will not use these data. NGC 5291S was observed on 22 January 2018 as part of Program ID 097.B-0152 (PI: M. Boquien) without AO for a total exposure time of 1800 sec. NGC 7252NW was observed on 16 July 2017 during a commissioning run of the MUSE AO wide-field mode (Program ID: 60.A-9100(H)) for a total exposure time of 900 sec. All data were reduced using Figure 1: Gas distribution and kinematics in NGC 5291N (top), NGC 5291S (middle), and NGC 7252NW (bottom). In all panels, the CO(\(1-0\)) data are from this work, while the other data come from Lelli et al. (2015). _Left panels:_ optical R\(-\)band image overlaid with the H I map (blue contours) and the CO(\(1-0\)) map (red contours). H I contours are the same as in Lelli et al. (2015). 
CO contours are at (3, 6, 12, 24) \(\sigma_{\rm map}\), where \(\sigma_{\rm map}=0.02\) Jy beam\({}^{-1}\) km s\({}^{-1}\) for NGC 7252NW and \(\sigma_{\rm map}=0.03\) Jy beam\({}^{-1}\) km s\({}^{-1}\) for both NGC 5291N and NGC 5291S. The cross shows the H I kinematic center which was chosen as the ALMA pointing center. The ALMA beam is shown by the red ellipse to the bottom-right corner; the H I beam (not shown) is about 4 times larger for NGC 5291 and 6 times for NGC 7252. _Middle panels:_ H I velocity field overlaid with the CO map. The dashed line shows the slit used to extract the PV diagram. The physical scale of 1 kpc is indicated by the bar in the bottom-right corner. _Right panels:_ PV diagrams from the H I cube (blue colorscale) overlaid with those from the CO cube (red contours). CO contours range from 3\(\sigma_{\rm cube}\) to 15\(\sigma_{\rm cube}\) in steps of 3\(\sigma_{\rm cube}\) (see Table 2). the MUSE pipeline v2.4 and following standard procedures (Weiblacher et al., 2020). For NGC 5291S, the sky was estimated using an offset field, while for NGC 7252NW, it was estimated within the science exposure. We refer to Fensch et al. (2016) for further details on the data reduction. Moment-zero maps were constructed by summing channels with H\(\alpha\) and H\(\beta\) emission. These maps are intended to show the morphology of the ionized gas; line fluxes will be measured by extracting integrated spectra and subtracting the stellar continuum (Sect. 3.2). ## 3 Results ### Distribution and kinematics of multiphase gas Figure 1 compares the distribution and kinematics of molecular (CO) and atomic (H I) gas in our sampled TDGs. In all galaxies, the detected CO emission is much more compact than the H I emission (left panels) and associated with intense SF, as we describe later on. The CO extent is of the order of \(1-2\) kpc, while the H I disks have diameters ranging from \(\sim\)10 to \(\sim\)15 kpc (Lelli et al., 2015). The CO emission lies near the edges of the H I peaks, so there is no direct correspondence between the atomic and molecular gas distribution on kpc scales. Moreover, the CO emission is not at the dynamical center of the TDGs, so it cannot be used to trace the underlying large-scale gas rotation (middle panels). The same behaviour is often seen in typical dwarfs and in the outskirts of spirals, where the H I emission traces the object more globally, and there may be regions where atomic gas can cool, condense into H\({}_{2}\), and SF can commence (e.g., Hunt et al., 2023). The atomic gas mass of TDGs outweighs the molecular gas mass by factors of \(3-30\) within the CO emitting area (see Table 1) and the stellar mass by factors of \(5-15\) (see Table 5 in Lelli et al., 2015), so the H I center corresponds to the center of mass (in the case of negligible dark matter). Interestingly, position-velocity (PV) diagrams along the CO distribution (right panels) show that there is good agreement between CO and H I line-of-sight velocities, suggesting that the two gas phases are dynamically coupled. The most likely interpretation is that molecular gas is currently forming out of atomic gas, keeping the same kinematics. Figure 2 compares the distribution of molecular and ionized gas, specifically the H\(\alpha\) and H\(\beta\) lines that will be used to measure the SFRs. Molecular and ionized gas are roughly co-spatial on kpc scales but display different morphologies, so the spatial relation between current star-formation activity and molecular gas reservoir is complex. 
Importantly, both molecular and ionized gas are much more compact than the atomic gas and similarly displaced from the H I dynamical center. Thus, no firm statement on the large-scale dynamics of TDGs can be inferred from H\(\alpha\) emission. The same conclusion was drawn by Lelli et al. (2015) for NGC 7252NW using H\(\alpha\) data from GIRAFFE, which is in good agreement with the new MUSE data. On the contrary, Flores et al. (2016) inferred strong conclusions on the dynamics of the TDGs in NGC 5291 using H\(\alpha\) data from GIRAFFE. Fig. 1 and Fig.2 show that the bright H\(\alpha\) emission cannot be used to trace the large-scale gas kinematics of TDGs. The H I emission appears to be the most extended and most promising tracer to probe the internal dynamics in TDGs, provided that the kinematically decoupled part can be discerned from the tidal tails and properly resolved. The H I-to-H\({}_{2}\) conversion and the most intense SF activity, however, do not occur exactly at the dynamical center, possibly due to variable local conditions such as gas pressure, temperature, and volume density. A similar situation occurs in "regular" starburst dwarfs, such as blue compact dwarfs (BCDs), which often show offsets between the peak SF activity and the dynamical center of the galaxy (e.g., Lelli et al., 2014). ### Emission-Line Measurements We combine \(\Sigma_{\rm SFR}\) traced by H\(\alpha\) emission with \(\Sigma_{\rm mol}\) traced by CO emission to investigate the spatially resolved KS relation. We define a set of independent elliptical apertures with major and minor axes matching the FWHM of the ALMA synthesized beam (see Figure 3). We then extract integrated CO, H\(\alpha\), and H\(\beta\) spectra within each elliptical aperture from the ALMA and MUSE cubes, respectively. This is nearly equivalent to smoothing and/or re-binning the MUSE data to the lower spatial resolution of the ALMA data. The key advantage of this procedure is to ensure independent measurements of \(\Sigma_{\rm SFR}\) and \(\Sigma_{\rm mol}\) because the elliptical regions are equal to or larger than the angular resolution. The ellipses are chosen to cover the CO emission down to a contour with \({\rm S/N=3}\); for simplicity, they are oriented in the North-South direction rather than along the PA of the ALMA beam, but this choice has no appreciable effects on the final results. Using a circular aperture would also make no difference in our general results. We have 14, 8, and 6 apertures for NGC 5291N, NGC 5291S, and NGC 7252NW, respectively, for a total of 28 independent measurements. We extract the optical and CO\((1-0)\) spectra of each region using Casa. For each H\(\alpha\) and H\(\beta\) spectra, we subtract continuum emission by calculating the mean continuum flux from two narrow spectral regions on either side of the emission profile. In all selected regions, the CO\((1-0)\), H\(\alpha\), and H\(\beta\) lines are detected with a peak S/N ratio higher than 3.5, apart from region 8 of NGC 5291S in which H\(\beta\) emission is undetected. Figure 4 shows an example of the spectra from one region in NGC 5291N. The H\(\alpha\) and H\(\beta\) lines have a nearly Gaussian shape, but the CO lines do not (see Fig. 4). 
Most likely, the Gaussian shape of the MUSE profiles is driven by the instrumental spectral resolution (FWHM of \(\sim\)80 km s\({}^{-1}\) around the H\(\alpha\) line), while the non-Gaussian shape of CO profiles is intrinsic, given the ALMA spectral resolution of \(\sim\)10 km s\({}^{-1}\) (after Hanning smoothing). Rather than fitting a Gaussian function, therefore, we use direct integration to estimate the integrated flux from the emission lines of interest. The starting and ending frequency of integration were defined by visual inspection; they are typically about 100 km s\({}^{-1}\) wide. In regions with low S/N, it is not trivial to identify the proper integration range. This is especially the case for the H\(\alpha\) line because the [N II] doublet blends with the noise. In such cases, to avoid contamination from [N II] lines, we estimate the redshifted wavelength of the [N II] doublet and consider appropriate frequency ranges excluding the [N II] emission. To extract CO spectra and measure CO fluxes, we used cubes that are _not_ corrected for the ALMA primary-beam attenuation because these cubes have a uniform and well-defined noise structure. In primary-beam-corrected cubes, indeed, the noise varies from pixel to pixel, so it is challenging to define the S/N ratio of the line. To recover the correct CO fluxes, instead, we use the primary-beam map and compute the average primary-beam correction within each region, then multiply the uncorrected CO flux by this value. Table 3 summarizes all our measurements. ### Molecular gas masses & star-formation rates We convert the CO line flux of each region (\(\rm{S_{CO}}\Delta\nu\)) to a total molecular gas mass (\(\rm{M_{mol}}\), including Helium and heavier elements) assuming the Milky-Way CO-to-\(M_{\rm mol}\) conversion factor \(\alpha_{\rm CO}\) = 4.3 \(\rm{M_{\odot}}\) (\(\rm{K~{}km~{}s^{-1}~{}pc^{2}}\))\({}^{-1}\) or equivalently \(X_{\rm CO}=2\times 10^{20}\) cm\({}^{-2}\) (\(\rm{K~{}km~{}s^{-1}}\))\({}^{-1}\). This corresponds to the following equation (Bolatto et al., 2017): \[M_{\rm mol}=1.05\times 10^{4}\,\frac{\rm{S_{CO}}\Delta\nu\,D_{\rm L}^{2}}{(1+z)}, \tag{3}\] where \(M_{\rm mol}\) is in units of \(\rm{M_{\odot}}\), \(\rm{S_{CO}}\Delta\nu\) is in units of \(\rm{Jy~{}km~{}s^{-1}}\), \(D_{\rm L}\) is the luminosity distance in Mpc and \(z\) is the redshift. The choice of conversion factor in Eq. 3 holds for disk galaxies similar to the Milky Way. As discussed in Sect. 1, the same conversion factor is expected to hold in TDGs because they retain the metallicity of the parent spiral galaxies. Figure 2: Spatial distribution of \(\rm{H\alpha}\) (left), \(\rm{H\beta}\) (middle), and \(\rm{CO}(1-0)\) velocity-integrated emission (right) in NGC 5291N (top), NGC 5291S (middle) and NGC 7252NW (bottom). The elliptical area marked with the blue dashed line shows the integration region used to compute the total \(\rm{H\alpha}\), \(\rm{H\beta}\), and CO fluxes of the TDGs. Clearly, this is a simplifying assumption because the value of \(X_{\rm CO}\) varies even within "normal" disk galaxies and is known to increase with decreasing metallicity (Bolatto et al., 2013). The metallicities of our TDGs range from half solar to solar (Duc & Mirabel, 1998; Lelli et al., 2015), so the variation in \(X_{\rm CO}\) is expected to be null or small, depending on the adopted model (cf. Bolatto et al., 2013). In the worst case scenario, in some star-forming regions of our TDG sample, M\({}_{\rm mol}\) may be underestimated by a factor of \(\sim\)2\(-\)3. 
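As a rough illustration of the measurement steps described above, the sketch below (plain NumPy; the function names are ours, not from the actual analysis scripts, and the numerical example uses rounded values from Table 3) subtracts a mean continuum estimated from two side windows, integrates the line by direct summation, applies an average primary-beam correction, and converts the CO flux into a molecular gas mass with Eq. 3:

```python
import numpy as np

def integrate_line(velocity, flux, v_lo, v_hi, cont_windows):
    """Direct integration of a line profile after subtracting a mean continuum.

    velocity, flux : spectrum on a velocity grid [km/s, Jy]
    v_lo, v_hi     : integration limits chosen by visual inspection [km/s]
    cont_windows   : list of (v1, v2) windows on either side of the line
    Returns the integrated line flux in Jy km/s.
    """
    cont_mask = np.zeros_like(velocity, dtype=bool)
    for v1, v2 in cont_windows:
        cont_mask |= (velocity >= v1) & (velocity <= v2)
    continuum = flux[cont_mask].mean()                    # mean continuum level
    line_mask = (velocity >= v_lo) & (velocity <= v_hi)
    dv = np.abs(np.gradient(velocity))                    # channel widths
    return np.sum((flux - continuum)[line_mask] * dv[line_mask])

def molecular_mass(S_CO_dv, D_L_Mpc, z, pb_correction=1.0):
    """Eq. 3: M_mol [Msun] from the CO(1-0) flux [Jy km/s], assuming the
    Milky-Way conversion factor alpha_CO = 4.3 Msun (K km/s pc^2)^-1."""
    S_corr = S_CO_dv * pb_correction                      # average primary-beam correction
    return 1.05e4 * S_corr * D_L_Mpc**2 / (1.0 + z)

# Example: region 9 of NGC 5291N with the rounded values of Table 3.
# Gives ~8e6 Msun with these rounded inputs (Table 3 lists 7.4e6 Msun).
print(molecular_mass(S_CO_dv=0.2, D_L_Mpc=62.0, z=0.014))
```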
The molecular mass surface density (\(\Sigma_{\rm mol}\)) is derived by dividing M\({}_{\rm mol}\) by the ellipse area. The H\(\alpha\) flux provides an instantaneous measure of the SFR as nebular emission is produced around young massive stars with masses greater than 10 M\({}_{\odot}\) and lifetimes shorter than \(10-20\) Myr (Kennicutt Jr, 1998). SFR tracers probing longer timescales have been studied and discussed in Boquien et al. (2007, 2009, 2010). The primary contributor to systematic errors in H\(\alpha\)-based SFRs is dust extinction, which can be accounted for by using the Balmer decrement H\(\alpha\)/H\(\beta\). We follow Bolatto et al. (2017) to estimate the nebular extinction A\({}_{\rm H\alpha}\): \[A_{\rm H\alpha}=5.86\,\log\,\frac{F_{\rm H\alpha}}{2.86\,F_{\rm H\beta}}, \tag{4}\] where \(F_{\rm H\alpha}\) and \(F_{\rm H\beta}\) are the integrated fluxes. Here, the case B recombination value of intrinsic Balmer decrement is considered to be 2.86 as suggested by Storey & Hummer (1995) for H II regions at typical electron temperatures and densities. The extinction corrected SFR was computed as \[{\rm SFR}=7.9\times 10^{-42}\,L_{\rm H\alpha}\,10^{(A_{\rm H\alpha}/2.5)}, \tag{5}\] where SFR is in units of M\({}_{\odot}\) yr\({}^{-1}\) and \(L_{\rm H\alpha}\) is the luminosity of H\(\alpha\) in units of ergs s\({}^{-1}\). This equation assumes solar abundance and a Salpeter initial mass function (IMF) with a mass range of 0.1 to 100 M\({}_{\odot}\)(Kennicutt Jr, 1998). Finally, we compute \(\Sigma_{\rm SFR}\) (in units of M\({}_{\odot}\) yr\({}^{-1}\) kpc\({}^{-2}\)) dividing the SFR by the area of the region (equivalent to the beam size, see Table 2). Figure 4: The CO, H\(\alpha\)+[N II], and H\(\beta\) spectra of the star-forming region 9 in NGC 5291N. Red vertical lines show the range over which the spectra are integrated. Figure 3: Integration regions for spatially resolved flux measurements in NGC 5291N (left), NGC 5291S (middle), and NGC 7252NW (right). The CO\((1-0)\) intensity maps (red colorscale) are overlaid with the areas (green ellipses) within which integrated spectra are extracted. In NGC 5291N and NGC 5191S, the cyan crosses indicate the location of the ”secure” massive star clusters identified by Fensch et al. (2019) using HST images. The size of the ellipse shown in the bottom-left corner is equivalent to the CO\((1-0)\) beam. Red contours are the same as in Figure 1. ### The Kennicutt-Schmidt relation As a first step, we locate our three TDGs on the spatially integrated KS relation, which compares the total gas surface density (atomic plus molecular) with the SFR. We use the data from Kennicutt and Evans (2012) because they are internally self-consistent with our data: the SFRs are measured from extinction-corrected H\(\alpha\) fluxes and the molecular gas masses assume the MW \(X_{\rm CO}\) factor (cf. with Sect. 3.3). Figure 5 shows that TDGs follow the same KS relation as "normal" galaxies, in agreement with earlier results from Braine et al. (2001) and Boquien et al. (2011) using single-dish CO observations. Notably, if we consider only molecular gas masses, TDGs would strongly shift to the left of the relation because the total gas mass is heavily dominated by atomic gas, unlike typical spiral galaxies. Next, we study the spatially resolved SF relation (e.g., Bolatto et al., 2017; Pessa et al., 2021; Lin et al., 2020). We compare our TDGs to 14 spiral galaxies from the ALMA-MaNGA QUenching and STar formation (ALMaQUEST) survey (Lin et al., 2019, 2020). 
ALMAQUEST data represents the ideal comparison sample because (i) molecular gas masses are estimated using CO\((1-0)\) data from ALMA as in our work, (ii) SFRs are estimated using extinction-corrected H\(\alpha\) fluxes from IFS similar to our work, (iii) the same calibrations have been adopted (Eqs. 3 and 5), and (iv) the angular resolution of ALMAQUEST data (\(\sim\)2.5\({}^{\prime\prime}\)) is similar to that of our data (\(\sim\)2\({}^{\prime\prime}\)) albeit the ALMAQUEST physical resolution ranges from 0.5 - 6.5 kpc depending on the galaxy distance while our physical resolution is fixed at \(\sim\)0.6 kpc. In addition, unlike other samples (Bolatto et al., 2017; Pessa et al., 2021), the ALMAQUEST data covers low gas surface densities similar to those in TDGs. Unfortunately, we cannot study the spatially resolved SF relation considering the total gas surface densities (molecular plus atomic gas) because the angular resolution of the existing H i data of TDGs is too coarse. In fact, the entire CO emitting area is within one H i beam. The ALMAQUEST data, however, do not consider the H i surface densities as well, so our comparison in Fig. 6 is self-consistent. Figure 6 shows the location of TDGs on the spatially resolved SF relation from ALMAQUEST. We fit the ALMAQUEST data with a linear relation using the Markov-Chain Monte-Carlo (MCMC) software BayesLineFit (Lelli et al., 2019). The MCMC fit returns a slope of 1.024 \(\pm\) 0.008, an intercept of -2.964 \(\pm\) 0.009, and a vertical observed scatter \(\sigma_{\rm obs}\) = 0.23 dex. The majority of TDG points (16/27) lie on the same SF relation as spiral galaxies within \(\pm\)3\(\sigma_{\rm obs}\). The correlation between \(\Sigma_{\rm SFR}\) and \(\Sigma_{M_{\rm mol}}\), however, is not evident when considering only TDG data. On the one hand, this occurs because most TDG points cover a small dynamic range in gas surface density (less than 1 dex) and the SF relation has substantial scatter at fixed \(\Sigma_{\rm mol}\). On the other hand, TDG data display a larger scatter from the fitted line than the ALMAQUEST data: 0.7 dex considering all TDG points and 0.4 dex considering only those within \(\pm\)3\(\sigma_{\rm obs}\). 
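For orientation, the quoted ALMAQUEST fit can be turned into a simple classifier for individual regions. The sketch below is ours and assumes the usual units of \(\rm M_{\odot}\,pc^{-2}\) for \(\Sigma_{\rm mol}\) and \(\rm M_{\odot}\,yr^{-1}\,kpc^{-2}\) for \(\Sigma_{\rm SFR}\), which reproduce the \(\sim\)1 Gyr molecular depletion time quoted in Sect. 3.5:

```python
import numpy as np

# Best-fit spatially resolved SF relation from the ALMAQUEST sample (Sect. 3.4):
SLOPE, INTERCEPT, SIGMA_OBS = 1.024, -2.964, 0.23        # dex

def ks_offset(sigma_mol, sigma_sfr):
    """Vertical offset (dex) of a region from the mean relation
    log(Sigma_SFR) = SLOPE * log(Sigma_mol) + INTERCEPT."""
    predicted = SLOPE * np.log10(sigma_mol) + INTERCEPT
    return np.log10(sigma_sfr) - predicted

def classify_region(sigma_mol, sigma_sfr, n_sigma=3.0):
    """Label a region relative to the +/- n_sigma * SIGMA_OBS band."""
    off = ks_offset(sigma_mol, sigma_sfr)
    if off > n_sigma * SIGMA_OBS:
        return off, "starburst (above the band)"
    if off < -n_sigma * SIGMA_OBS:
        return off, "below the band"
    return off, "on the mean relation"

# Example: Sigma_mol = 10 Msun/pc^2 and Sigma_SFR = 0.2 Msun/yr/kpc^2
print(classify_region(10.0, 0.2))    # offset ~ +1.2 dex -> starburst
```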
Indeed, only \(\sim\)30%-35% of TDG points lie within \(\pm\)1\(\sigma_{\rm obs}\) of the \begin{table} \begin{tabular}{c c c c c c c c} \hline TDG & Region & Sc\({}_{\rm CO}\)\(\Delta\nu\) & M\({}_{\rm mol}\) & F\({}_{\rm H\alpha}\) & F\({}_{\rm H\beta}\) & Extinction & SFR \\ & & (Jy km s\({}^{-1}\)) & (\(10^{6}\) M\({}_{\odot}\)) & (\(10^{-16}\) ergs s\({}^{-1}\) cm\({}^{-2}\)) & (\(10^{-16}\) ergs s\({}^{-1}\) cm\({}^{-2}\)) & (mag) & (\(M_{\odot}\)\(yr^{-1}\)) \\ \hline \multirow{8}{*}{NGC 5291N} & 1 & 0.03 & 1.0 & 1.2 & 0.4 & 0.4 & 0.0006 \\ & 2 & 0.04 & 1.4 & 6.0 & 1.9 & 0.3 & 0.003 \\ & 3 & 0.07 & 2.8 & 4.0 & 1.2 & 0.5 & 0.002 \\ & 4 & 0.04 & 1.4 & 1.1 & 0.3 & 0.6 & 0.0007 \\ & 5 & 0.04 & 1.5 & 0.9 & 0.2 & 0.8 & 0.0007 \\ & 6 & 0.04 & 1.5 & 2.1 & 0.6 & 0.5 & 0.001 \\ & 7 & 0.07 & 2.7 & 65.4 & 18.4 & 0.5 & 0.04 \\ & 8 & 0.03 & 1.1 & 15.7 & 4.7 & 0.4 & 0.008 \\ & 9 & 0.2 & 7.4 & 111.6 & 29.8 & 0.7 & 0.08 \\ & 10 & 0.2 & 6.5 & 161.2 & 41.1 & 0.8 & 0.1 \\ & 11 & 0.04 & 1.5 & 7.1 & 2.1 & 0.4 & 0.004 \\ & 12 & 0.08 & 3.3 & 66.5 & 17.2 & 0.8 & 0.05 \\ & 13 & 0.04 & 1.5 & 11.8 & 3.6 & 0.4 & 0.006 \\ & 14 & 0.03 & 1.3 & 14.1 & 4.2 & 0.4 & 0.007 \\ \hline \multirow{8}{*}{NGC 5291S} & 1 & 0.03 & 1.2 & 55.0 & 14.5 & 0.7 & 0.04 \\ & 2 & 0.05 & 1.9 & 21.6 & 6.0 & 0.6 & 0.01 \\ \cline{1-1} & 3 & 0.04 & 1.5 & 7.6 & 2.3 & 0.3 & 0.004 \\ \cline{1-1} & 4 & 0.05 & 2.1 & 24.8 & 7.1 & 0.5 & 0.01 \\ \cline{1-1} & 5 & 0.05 & 2.0 & 4.7 & 1.0 & 1.3 & 0.006 \\ \cline{1-1} & 6 & 0.04 & 1.4 & 28.2 & 8.3 & 0.5 & 0.02 \\ \cline{1-1} & 7 & 0.04 & 1.6 & 22.7 & 6.5 & 0.5 & 0.01 \\ \cline{1-1} & 8 & 0.03 & 1.3 & 0.3 & - & - & 0.0001 \\ \hline \multirow{8}{*}{NGC 7252NW} & 1 & 0.03 & 1.5 & 6.9 & 1.9 & 0.6 & 0.005 \\ \cline{1-1} & 2 & 0.03 & 1.6 & 1.8 & 0.5 & 0.6 & 0.001 \\ \cline{1-1} & 3 & 0.05 & 2.1 & 0.4 & 0.08 & 1.3 & 0.0005 \\ \cline{1-1} & 4 & 0.1 & 5.2 & 2.0 & 0.5 & 1.0 & 0.002 \\ \cline{1-1} & 5 & 0.1 & 5.3 & 2.0 & 0.4 & 1.3 & 0.003 \\ \cline{1-1} & 6 & 0.05 & 2.1 & 0.6 & 0.2 & 0.8 & 0.0006 \\ \hline \end{tabular} \end{table} Table 3: Line fluxes, molecular masses, and SFRs within the independent regions identified in Fig. 3. best-fit relation rather than the expected 68% for a Gaussian distribution. The high scatter of TDG points may be due to small-number statistics or systematic differences between our work and the ALMAQUEST analysis. If real, instead, it could point to (i) the need of considering the total gas surface density (atomic plus molecular gas) in H i-dominated galaxies, as in the case of the spatially integrated SF relation in Fig. 5, (ii) high stochasticity in the SF history of TDGs on small spatial scales (e.g., Boquien et al., 2010), (iii) spatial variations in the \(X_{\rm CO}\) factor due to additional effects (beyond gas metallicity) such as gas temperature, gas pressure, and UV background (e.g., Bolatto et al., 2013), (iv) differences in 3D volume densities due to line-of-sight integration and variable disk thickness (e.g., Bacchini et al., 2019, 2020). Given the complex evolutionary status of TDGs, which are possibly out of dynamical equilibrium (Lelli et al., 2015), it is difficult to distinguish between these possibilities. Interestingly, a substantial fraction of TDG regions (10/28) strongly deviate from the observed SF relation and lie in the starburst zone above +3\(\sigma_{\rm obs}\). These starburst regions belong to NGC 5291N and NGC 5291S. 
Studies with the Hubble Space Telescope (HST) show that these regions are currently forming young star clusters with masses ranging from a few 10\({}^{3}\) M\({}_{\odot}\) to a few 10\({}^{5}\) M\({}_{\odot}\) and ages from \(\sim\)1 Myr to \(\sim\)100 Myr (Fensch et al., 2019). Thus, it is sensible that these areas have exceptionally high SFEs. Unlike NGC 5291N and NGC 5291S, NGC 7252NW does not show starburst regions with high SFE, but most of its SF regions (4/6) fall below the average SF relation. Consistently, visual inspection of the available HST images of NGC 7252NW does not reveal any clear young star cluster. Another two TDGs with spatially resolved CO data (VCC 2062 from Lisenfeld et al., 2016 and J1023+1952 from Querejeta et al., 2021b) were also found to lie systematically below the average SF relation, albeit Querejeta et al. (2021b) warn that the inclusion or exclusion of diffuse CO emission (not contained in giant molecular clouds) could result in a large difference. The different behaviours shown by different TDGs may be related to the evolutionary status of the parent system and the "age" of the specific TDG. According to numerical simulations, the gas ring around NGC 5291 was formed by a head-on galaxy collision about \(\sim\)360 Myr ago (Bournaud et al., 2007). On the other hand, NGC 7252 is a late-stage merger resulting from the interaction of two spiral galaxies about \(\sim\)700 Myr ago (Hibbard & Mihos, 1995; Chien & Barnes, 2010). One may speculate, therefore, that the TDGs around NGC 5291 are young and experiencing a period of peak SF activity due to efficient H I-to-H\({}_{2}\) conversion, whereas those around NGC 7252 are slightly older and more quiescent. A larger sample of TDGs around more diverse interacting systems, together with detailed numerical simulations of the system, is needed to study the relation between the interaction stage and SF activity in tidal debris. Broadly speaking, star-forming galaxies can be classified into "normal" and "starbursts" using a 3\(\sigma\) threshold from the mean KS relation. With such a definition, starbursts represent a SF process that falls outside the 99.7% interval expected for a Gaussian distribution of SFEs. The SF regions in our TDG sample display a continuous range of SFEs, but a very large fraction of them (\(\sim\)40%) proceed in starburst mode, resulting in molecular gas depletion times as short as \(10-100\) Myr.

Figure 5: The location of TDGs (blue diamond, yellow square, and pink pentagon) on the spatially integrated Kennicutt-Schmidt relation (grey symbols, from Kennicutt & Evans, 2012). The open symbols show the location of TDGs if one considers only molecular gas, neglecting atomic gas. The red line shows the best-fit line to the data; the dashed magenta lines correspond to \(\pm\)3\(\sigma_{\rm obs}\) where \(\sigma_{\rm obs}\) is the observed vertical scatter.

Figure 6: The location of TDGs (blue diamonds, yellow squares, and pink pentagons) on the spatially resolved Kennicutt-Schmidt relation from the ALMAQUEST survey (cyan circles, from Lin et al., 2019). The solid red line shows the best-fit line to the ALMAQUEST data; the dashed magenta lines correspond to \(\pm\)3\(\sigma_{\rm obs}\) where \(\sigma_{\rm obs}=0.23\) dex is the observed vertical scatter. Symbols with a thick border correspond to regions in which young massive star clusters have been identified (see Fig. 3).
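The quoted depletion times can be checked directly from the aperture measurements: dividing the molecular masses in Table 3 by the corresponding SFRs gives \(t_{\rm dep}=M_{\rm mol}/{\rm SFR}\). The snippet below illustrates this arithmetic for three NGC 5291N regions read off Table 3; it is only a consistency check, not part of the original analysis.

```
# M_mol (Msun) and SFR (Msun/yr) for selected NGC 5291N regions from Table 3.
regions = {
    7:  (2.7e6, 0.04),
    9:  (7.4e6, 0.08),
    10: (6.5e6, 0.1),
}

for region, (m_mol, sfr) in regions.items():
    t_dep_myr = m_mol / sfr / 1e6        # depletion time in Myr
    print(f"region {region}: t_dep ~ {t_dep_myr:.0f} Myr")
# All three come out between ~65 and ~95 Myr, consistent with the
# 10-100 Myr range quoted above for the starburst regions.
```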
In the remaining \(\sim\)60% of TDG regions, the SF activity proceeds in a similar way as in normal spiral galaxies, regardless of the different environmental conditions. In this sense, TDGs are "hybrid" systems because they contain some regions behaving as normal galaxies and others as starbursts. ### Timescales and evolution of TDGs A spatially resolved KS relation with a slope of one (as observed) implies that the molecular gas depletion time (\(t_{\rm mol}=M_{\rm mol}/{\rm SFR}\)) is nearly constant across spiral galaxies. Then, the intercept and observed scatter imply that \(t_{\rm mol}\simeq 1\pm 0.5\) Gyr. For our three TDGs, the molecular gas depletion times are substantially smaller: 100 Myr for NGC 5291N, 70 Myr for NGC 5291S, and 300 Myr for NGC 7252NW. The molecular gas of these TDGs, therefore, will soon be consumed by the SF activity unless it is replenished by efficiently converting the substantial H I reservoir into H\({}_{2}\) gas. Considering the total H I mass associated with the TDG potential well (Lelli et al., 2015, their Table 8), the atomic gas depletion time (\(t_{\rm atom}=M_{\rm atom}/{\rm SFR}\)) is about 2 Gyr for NGC 5291N, 4 Gyr for NGC 5291S, and 7 Gyr for NGC 7252NW. These values of \(t_{\rm atom}\) are substantially smaller than those of low-surface-brightness (LSB) star-forming galaxies, ranging between \(10-100\) Gyr (e.g., McGaugh et al., 2017), but are comparable to those of starburst dwarf galaxies (Lelli et al., 2014), such as blue compact dwarfs (BCDs). It is conceivable that both TDGs and BCDs are only able to sustain the intense SF activity for a short period of time (\(\sim\)0.5-1 Gyr, e.g., McQuinn et al., 2010), so the starburst will not have enough time to consume their entire H I reservoir. For example, if their SFR decreases in the next 500 Myr by a factor of \(\sim\)10, their \(t_{\rm atom}\) will increase by a similar factor, reaching the high values observed in LSB galaxies. Furthermore, there is a diffuse H I reservoir in the tidal debris around the TDGs, which might replenish their gas content. Another interesting timescale is the stellar mass growth time (\(t_{\star}=M_{\star}/{\rm SFR}\)), which we compute using the stellar masses from Lelli et al. (2015, their Table 8). For NGC 7252NW, we find \(t_{\star}\simeq 940\) Myr. This is larger than the dynamical timescale of the galaxy merger (\(\sim\)700 Myr) inferred from numerical simulations (Chien & Barnes, 2010). Assuming that the age of the TDG is equal to that of the merger, the current SFR cannot explain the present-day stellar mass: the SFR was probably higher in the past, indicating that the SF activity has been declining over time. A possible caveat is that NGC 7252NW may contain old stars from the disk of the parent galaxies, which would contribute to \(M_{\star}\) beyond the mass formed over the past 700 Myr. In any case, the situation of NGC 5291 appears different: we find \(t_{\star}\simeq 180\) Myr for NGC 5291N and \(t_{\star}\simeq 270\) Myr for NGC 5291S, which are smaller than the dynamical timescale of the galaxy collision (\(\sim\)360 Myr, Bournaud et al., 2007). Thus, the current SFR can amply explain the present-day \(M_{\star}\) of these two TDGs. These facts are in line with the speculation in Sect.
3.4 that the TDGs around NGC 5291 may be representative of early galaxy formation with efficient H I-to-H\({}_{2}\) conversion, high SFE, and short gas depletion times, while those around NGC 7252 may represent a subsequent stage with more typical SF activity. In addition, the initial conditions in the two systems may have been different: while the TDGs around NGC5291 were born out of pure gaseous condensation, those around NGC7252 may have been born in a less H I dominated environment with both gas and stars from their progenitors. ## 4 Conclusions We studied the molecular and ionized gas content of three bona-fide TDGs using CO(\(1-0\)) observations from ALMA and IFS data from MUSE. For the first time, we locate TDGs on the spatially resolved KS relation. Our results can be summarized as follows: 1. CO(\(1-0\)) and H\(\alpha\) emissions in TDGs are very compact and cover a much smaller area than the H I emission. Both CO and H\(\alpha\) lines are not suitable to study the internal kinematics of these TDGs. Most likely, molecular gas is forming out of the more extended H I disk of TDGs, having similar line-of-sight velocities. 2. TDGs lie on the same spatially-integrated \(\Sigma_{\rm SFR}-\Sigma_{\rm gas}\) relation of spiral galaxies but display a substantial scatter on the spatially resolved \(\Sigma_{\rm SFR}-\Sigma_{\rm mol}\) relation (which neglects atomic gas due to the lack of high-resolution H I data). 3. The majority (60%) of SF regions in TDGs lie on the same spatially resolved SF relation as spiral galaxies within \(\pm 3\) times the observed scatter but display a larger dispersion from the mean relation. A substantial fraction (\(\sim\)40%) of SF regions have exceptionally high SF efficiencies, lying in the starburst regime of the KS relation. These regions belong to NGC 5291N and NGC 5291S, and are associated with the formation of massive super star clusters, which were previously identified by HST imaging (Fensch et al., 2019). The growing evidence about the existence of a fundamental SF relation compels us to put the relation to test in a variety of star-forming environments. The three TDGs analyzed here confirm the fundamental nature of the KS relation on sub-kpc scales, albeit regions with exceptionally high SF efficiencies do exist. Future studies may investigate the spatially resolved KS relation in other bona-fide TDGs, probing even more diverse star-forming environments, such as younger and/or older tidal debris. ## Acknowledgements We are grateful to the anonymous referee for the useful comments that helped to improve the paper. We thank Rob Kennicutt and Lin Li-Hwai Lin for providing the KS-relation data in tabular form. N.K. and F.L. thank the School of Physics and Astronomy of Cardiff University, where this work started as part of a Master thesis project. N. K. acknowledges support from the Programa de doctorado en Astrofisica y Astronomica of Universidad de Antofagasta. M.B. acknowledges support from FONDECYT regular grant 1211000 and by the ANID BASAL project FB210003. U.L. acknowledges support by the research projects AYA2017-84897-P and PID2020-114414GB-I00 from the Spanish Ministerio de Economia y Competitividad, from the European Regional Development Funds (FEDER) and the Junta de Andalucia (Spain) grants FQM108. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2015.1.00645.S. 
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO MUSE programmes 60.A-9320(A) and 097.B-0152(A). Based on public data released from the MUSE WFM-AO commissioning observations at the VLT under Programme IDs 60.A-9100 runs G & H. ## Data Availability The raw data used in this work are available in the ALMA and ESO archives. Reduced data are available on request.
2305.19855
Lattice-Aided Extraction of Spread-Spectrum Hidden Data
This paper discusses the problem of extracting spread spectrum hidden data from the perspective of lattice decoding. Since the conventional blind extraction scheme multi-carrier iterative generalized least-squares (M-IGLS) and the non-blind extraction scheme minimum mean square error (MMSE) suffer from performance degradation when the carriers lack sufficient orthogonality, we present two novel schemes from the viewpoint of lattice decoding, namely multi-carrier iterative successive interference cancellation (M-ISIC) and sphere decoding (SD). The better performance of M-ISIC and SD is confirmed by both theoretical justification and numerical simulations.
Fan Yang, Shanxiang Lyu, Hao Cheng, Jinming Wen, Hao Chen
2023-05-31T13:44:52Z
http://arxiv.org/abs/2305.19855v1
# Lattice-Aided Extraction of Spread-Spectrum Hidden Data ###### Abstract This paper discusses the problem of extracting spread spectrum hidden data from the perspective of lattice decoding. Since the conventional blind extraction scheme multi-carrier iterative generalized least-squares (M-IGLS) and the non-blind extraction scheme minimum mean square error (MMSE) suffer from performance degradation when the carriers lack sufficient orthogonality, we present two novel schemes from the viewpoint of lattice decoding, namely multi-carrier iterative successive interference cancellation (M-ISIC) and sphere decoding (SD). The better performance of M-ISIC and SD is confirmed by both theoretical justification and numerical simulations. Spread spectrum, Lattices, Successive interference cancellation, Sphere decoding. ## I Introduction Data hiding describes the process of embedding secret messages into different forms of multimedia and transmitting them over an open channel. As an important complement to conventional cryptographic systems, it provides flexible solutions for copyright protection, integrity verification, covert communication and other information security fields. To meet the requirements of various scenarios, the researchers' goals include reducing the distortion of the cover to achieve imperceptibility, increasing the hidden capacity, and improving the robustness of the embedding scheme. Watermark embedding and extraction are two crucial parts of the data hiding model. Many data hiding schemes have been described in the literature over the past three decades [1, 2, 3, 4, 5]; one of the mainstream directions is spread-spectrum (SS) steganography, because it has good robustness and security. By introducing a principle analogous to spread-spectrum communication, the concept of SS in data hiding was first proposed by Cox et al. [6]. The basic idea of SS in data hiding is to disperse the message into many frequency bins of the host data by pseudorandom sequences, so as to make the energy in each one extremely small and certainly undetectable. This is similar to transmitting a narrowband signal with a much larger bandwidth and a lower power density. Some schemes have been proposed to improve upon SS, e.g., using the technique of minimum-mean-square error to reduce the interference caused by the host itself [7], improving signature design to reduce the decoding error rate [2], and using multiple carriers instead of a single carrier to increase the number of payloads [8, 9]. Depending on whether the receiver has the pre-shared keys, the extraction of information from the multicarrier SS watermarking system consists of blind and non-blind extraction. **i**) Blind extraction amounts to steganalysis via "Watermarked Only Attack (WOA)" [10]. It is one of the scenarios that has attracted a lot of attention since it models most of the practical problems. It assumes that the attacker only has access to the composed signal, without any information about the host data and the spreading codes. Under this premise, the process of fully recovering the embedded data is called blind extraction. To break the single-carrier SS method, Gkizieli et al. [11] proposed a blind method named iterative generalized least squares (IGLS) to recover unknown messages hidden in images, which has remarkable recovery performance and low complexity. However, steganographers may prefer multi-carrier SS transform-domain embedding to improve security or the amount of information in a single transmission.
The steganalysis for this situation seems more worthy of study. Since the underlying mathematical problem of extracting multiple message sequences from a mixed observation is akin to blind source separation (BSS) in speech signal processing, classical BSS algorithms such as independent component analysis (ICA) [12] and Joint Approximate Diagonalization of Eigenmatrix (JADE) [13] can also be used to extract the hidden data. Regrettably, these algorithms are far from satisfactory due to the correlated signal interference caused by the multi-carrier SS problem. In this regard, Li Ming et al. [8] developed an improved IGLS scheme referred to as multi-carrier iterative generalized least-squares (M-IGLS). The crux inside M-IGLS is a linear estimator referred to as zero-forcing (ZF) in lattice decoding literature [14, 15]. M-IGLS exhibits satisfactory performance only when the carriers/signatures show sufficient orthogonality. For instance, M-IGLS shows the case of embedding (and extracting) \(4\) data streams by modifying \(63\) host coefficients [8]. **ii**) Non-blind extraction of SS watermarking adopts linear minimum mean square error (MMSE) estimator as the default option [8, 16]. However, linear MMSE is optimal only when the prior symbols admit Gaussian distributions, rather than the discrete distribution over \(\{\pm 1\}\)[15]. MMSE also works well when the embedding matrix defined by carriers features sufficient orthogonality, but this property may not be guaranteed in the transmitter's side. As the discrete symbols (i.e., \(\{\pm 1\}\)) in multicarrier SS naturally induces lattices, it becomes tempting to adopt more sophisticated lattice decoding algorithms to improve upon the blind and non-blind extraction of multicarrier SS watermarking. For this reason, this paper contributes in the following aspects: * First, we propose a new hidden data blind extraction algorithm referred to as multi-carrier iterative successive interference cancellation (M-ISIC). Like M-IGLS, M-ISIC also estimates the mixing matrix and the integer messages iteratively by alternating minimization principle. However, in the step when the mixing matrix has been estimated, M-ISIC adopts successive interference cancellation (SIC) rather than ZF. Due to the larger decoding radius of SIC over ZF, the proposed M-ISIC is deemed to enjoy certain performance gains. Moreover, M-ISIC also features low complexity. * Second, we present a sphere decoding (SD) algorithm for the legit extraction of multicarrier SS watermarking. While maximum-likelihood (ML) extraction can be implemented via a brute-force enumeration, sphere decoding (SD) [17] is the better implementation of ML to save computational complexity. The magic of SD is to restrict the search space to within a sphere enclosing the query vector. Simulation results show that SD outperforms the default MMSE estimator especially when the channel matrix lacks sufficient orthogonality. * Third, by formulating the problem of extracting multi-carrier SS as a lattice decoding problem, it fosters a deeper connection between the data hiding community and the post-quantum cryptography community. Lattice-based constructions are currently important candidates for post-quantum cryptography [18]. The analysis of the security level of lattice-based cryptographic schemes also relies on sophisticated lattice decoding algorithms. This implies that, in the future, a novel algorithm for one community may also be explored for the other. The rest of this paper is organized as follow. 
In Section II, preliminaries on SS embedding and lattice decoding are briefly introduced. In Section III, M-ISIC is presented and comparisons between M-IGLS and M-ISIC are made. Section IV discusses sphere decoding and MMSE. Simulation results and conclusions are given in Section V and Section VI, respectively. The following notation is used throughout the paper. Boldface upper-case and lower-case letters represent matrices and column vectors, respectively. \(\mathbb{R}\) denotes the set of real numbers, while \(\mathbf{I}\) denotes an identity matrix. \((\cdot)^{\top}\) is the matrix transpose operator, and \(||\cdot||\), \(||\cdot||_{F}\) denote the vector norm and the matrix Frobenius norm, respectively. \(\mathrm{sign}(\cdot)\) represents a quantization function with respect to \(\{-1,1\}\). ## II Preliminaries ### _Basics of Multicarrier SS_ #### II-A1 Embedding Without loss of generality, a standard gray-scale image \(\mathbf{H}\in\mathcal{M}^{N_{1}\times N_{2}}\) is chosen as the host, where \(\mathcal{M}\) denotes a finite image alphabet and \(N_{1}\times N_{2}\) denotes the size of the image. Then \(\mathbf{H}\) is partitioned into \(M\) non-overlapping blocks \(\mathbf{H}_{1},...,\mathbf{H}_{M}\) (of size \(\frac{N_{1}\times N_{2}}{M}\)). After performing a DCT transformation and zig-zag scanning for each block, the cover object in each block can be generated as \(\mathbf{x}(m)\in\mathbb{R}^{L}\), where \(L\leq\frac{N_{1}\times N_{2}}{M}\) and \(m=1,...,M\). The multicarrier SS embedding scheme employs \(K\) distinct carriers (signatures) \(\mathbf{s}_{1},...,\mathbf{s}_{K}\) to implant \(K\) bits of information \(b_{1},...,b_{K}\in\{\pm 1\}\) into each \(\mathbf{x}(m)\). Subsequently, the modified cover (stego) is generated by \[\mathbf{y}(m)=\sum_{k=1}^{K}A_{k}b_{k}(m)\mathbf{s}_{k}+\mathbf{x}(m)+\mathbf{n}(m),\ m=1,2,...,M, \tag{1}\] where \(A_{k}\) denotes the embedding amplitude of \(\mathbf{s}_{k}\), \(b_{k}(m)\) denotes the message bits of the \(m\)th block, and \(\mathbf{n}(m)\) represents the additive white Gaussian noise vector of mean \(\mathbf{0}\) and covariance \(\sigma_{n}^{2}\mathbf{I}_{L}\). For notational simplicity, we can express the embedding of \(\mathbf{b}(1),...,\mathbf{b}(M)\) in the matrix form as \[\mathbf{Y}=\mathbf{V}\mathbf{B}+\mathbf{Z}, \tag{2}\] where \(\mathbf{Y}\triangleq[\mathbf{y}(1),...,\mathbf{y}(M)]\in\mathbb{R}^{L\times M}\), \(\mathbf{B}\triangleq[\mathbf{b}(1),...,\mathbf{b}(M)]\in\{\pm 1\}^{K\times M}\), \(\mathbf{V}\triangleq[A_{1}\mathbf{s}_{1},...,A_{K}\mathbf{s}_{K}]\in\mathbb{R}^{L\times K}\), \(\mathbf{Z}\triangleq[\mathbf{x}(1)+\mathbf{n}(1),...,\mathbf{x}(M)+\mathbf{n}(M)]\in\mathbb{R}^{L\times M}\). In general, \(K\leq L\), which avoids an underdetermined system of equations. By taking the expectation over the randomness of \(\mathbf{s}_{k}\), the embedding distortion due to \(A_{k}b_{k}(m)\mathbf{s}_{k}\) is \[D_{k}=\mathbb{E}\{||A_{k}b_{k}(m)\mathbf{s}_{k}||^{2}\}=A_{k}^{2},\ k=1,2,...,K. \tag{3}\] Based on the statistical independence of the signatures \(\mathbf{s}_{k}\), the averaged total distortion per block is defined as \(D=\sum_{k=1}^{K}D_{k}=\sum_{k=1}^{K}A_{k}^{2}\). #### II-A2 Legitimate Extraction On the receiver's side, with the knowledge of the secret carriers \(\mathbf{s}_{k}\), legitimate users can obtain a high-quality recovery of the embedded messages \(b_{k}(m)\).
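Before describing the extraction filters, the embedding model of Eqs. (1)-(3) can be made concrete with a short sketch. The dimensions, amplitudes, and distributions below are illustrative placeholder choices, not the settings used in the experiments of Section V.

```
import numpy as np

rng = np.random.default_rng(0)
L, K, M = 12, 4, 1000          # host coefficients per block, carriers, blocks
sigma_n = 1.0                  # noise standard deviation
A = np.full(K, 2.0)            # embedding amplitudes A_k (placeholder values)

S = rng.standard_normal((L, K))
S /= np.linalg.norm(S, axis=0)           # unit-norm carriers s_1, ..., s_K
V = S * A                                # V = [A_1 s_1, ..., A_K s_K]

B = rng.choice([-1, 1], size=(K, M))     # hidden bits b_k(m), one column per block
X = rng.standard_normal((L, M))          # stand-in for the host DCT coefficients x(m)
N = sigma_n * rng.standard_normal((L, M))

Y = V @ B + X + N                        # Eq. (2): Y = V B + Z with Z = X + N
D = np.sum(A ** 2)                       # averaged total distortion per block, Eq. (3)
```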
The auto-correlation matrix of the observation data \(\mathbf{Y}\) can be written in terms of the auto-correlation matrices of the host data and the noise in the following form \[\mathbf{R_{y}}=\mathbf{R_{x}}+\sum_{k=1}^{K}A_{k}^{2}\mathbf{s}_{k}\mathbf{s}_{k}^{\top}+\sigma_{n}^{2}\mathbf{I}_{L}. \tag{4}\] For ease of analysis, equation (4) can be further written as \[\mathbf{R_{y}}=\frac{1}{M}\mathbf{X}\mathbf{X}^{\top}+\mathbf{V}\mathbf{V}^{\top}+\sigma_{n}^{2}\mathbf{I}_{L} \tag{5}\] where \(\mathbf{V}=(A_{1}\mathbf{s}_{1},A_{2}\mathbf{s}_{2},\cdots,A_{K}\mathbf{s}_{K})\) and \(\mathbf{R_{x}}=\frac{1}{M}\mathbf{X}\mathbf{X}^{\top}\). The linear MMSE detector is capable of minimizing the mean square error between the true and estimated values by taking into account the trade-off between noise amplification and interference suppression [19]. Via the linear MMSE filter, the embedded symbols are estimated by \[\hat{\mathbf{B}}_{MMSE}=\mathrm{sign}\{(\mathbf{V}^{\top}\mathbf{R_{y}^{-1}})\mathbf{Y}\}. \tag{6}\] Using sample averaging over the \(M\) received vectors, the estimate \(\hat{\mathbf{R}}_{\mathbf{y}}=\frac{1}{M}\sum_{m=1}^{M}\mathbf{y}(m)\mathbf{y}(m)^{\top}\) of \(\mathbf{R_{y}}\) can be obtained. Replacing \(\mathbf{R_{y}}\) in (6) with \(\hat{\mathbf{R}}_{\mathbf{y}}\), we get the sample-matrix-inversion MMSE (SMI-MMSE) detector [16]. ### _Basics of Lattices_ #### II-B1 Lattice Decoding Problem Lattices are discrete additive subgroups of the \(m\)-dimensional Euclidean space \(\mathbb{R}^{m}\), which can be defined as the set of integer-coefficient linear combinations of \(K\) linearly independent vectors \[\mathcal{L}(\mathbf{G})=\left\{\sum_{k=1}^{K}x_{k}\mathbf{g}_{k}\;\mid\;x_{k}\in\mathbb{Z}\right\} \tag{7}\] where \(\mathbf{G}\triangleq[\mathbf{g}_{1},...,\mathbf{g}_{K}]\) is called a lattice basis. Computationally hard problems can be defined over lattices. The one related to this work is called the closest vector problem (CVP) [20]: given a query vector \(\mathbf{t}\), it asks to find the closest vector to \(\mathbf{t}\) from the set of lattice vectors \(\mathcal{L}(\mathbf{G})\). Let the closest vector be \(\mathbf{G}\mathbf{x}\), \(\mathbf{x}\in\mathbb{Z}^{K}\), then we have \[\|\mathbf{G}\mathbf{x}-\mathbf{t}\|\leq\|\mathbf{G}\tilde{\mathbf{x}}-\mathbf{t}\|,\,\forall\tilde{\mathbf{x}}\in\mathbb{Z}^{K}. \tag{8}\] In general, solving CVP for a random lattice basis incurs exponential computational complexity in the order of \(\mathcal{O}(2^{K})\), but for lattice bases whose \(\mathbf{g}_{1},...,\mathbf{g}_{K}\) are close to being orthogonal, fast low-complexity algorithms can approximately achieve the performance of maximum likelihood decoding. #### II-B2 Lattice Decoding Algorithms Zero-forcing (ZF) [14] and successive interference cancellation (SIC) [21] are fast low-complexity algorithms to detect the transmitted signals at the receiving end. The former obtains the output by multiplying the pseudo-inverse of \(\mathbf{V}\) to the left of \(\mathbf{Y}\). The latter introduces decision feedback to decode each symbol successively, achieving better performance than the former. Fig. 1 plots the decision boundaries for ZF and SIC. The elongated and narrow parallelogram is the decision region of ZF. Because the basis vectors are highly correlated, a slight perturbation of the noise can lead to a detection error. For SIC, the decision region is a rectangle, as only one symbol is decoded at a time [22].
Both ZF and SIC have worse performance than the optimal maximum-likelihood (ML) estimation due to their inherent nature of polynomial complexity. More comparisons of ZF and SIC are presented in the section of blind extraction, while the application of SD will be addressed in the non-blind extraction. ``` Input:\(\mathbf{Y}\), \(\mathbf{R}_{\mathbf{y}}\). Output:\(\hat{\mathbf{V}}=\mathbf{V}^{(d)}\), \(\hat{\mathbf{B}}=\mathbf{B}^{(d)}\). 1\(d=0\), \(\mathbf{B}^{(0)}\)\(\sim\{\pm 1\}^{K\times M}\); 2whileach stopping criterion has not been reacheddo 3\(d\gets d+1\) ; 4\(\mathbf{V}^{(d)}\leftarrow\mathbf{Y}(\mathbf{B}^{(d-1)})^{\mathrm{T}}[ \mathbf{B}^{(d-1)}(\mathbf{B}^{(d-1)})^{\mathrm{T}}]^{-1}\); 5\(\mathbf{B}^{(d)}\leftarrow\) \(\mathrm{sign}\left\{\left((\mathbf{V}^{(d)})^{\mathrm{T}}\mathbf{R}_{\mathbf{ y}}^{-1}\mathbf{V}^{(d)}\right)^{-1}(\mathbf{V}^{(d)})^{\mathrm{T}} \mathbf{R}_{\mathbf{y}}^{-1}\mathbf{Y}\right\}\); \(\triangleright\) Approximate lattice decoding via GLS/ZF. ``` **Algorithm 1**The M-IGLS data extraction algorithm. ## III Blind Extraction The task of blind extraction requires estimating both \(\mathbf{V}\) and \(\mathbf{B}\) from the observation \(\mathbf{Y}\), which is known as the noisy BSS problem: \[\mathcal{P}_{1}:\min_{\begin{subarray}{c}\mathbf{n}\in\{\pm 1\}^{K\times M} \\ \mathbf{V}\in\mathbb{Z}^{K}\end{subarray}}||\mathbf{R}_{\mathbf{z}}^{-\frac{1}{ 2}}(\mathbf{Y}-\mathbf{V}\mathbf{B})||_{F}^{2}, \tag{9}\] where \(\mathbf{R}_{\mathbf{z}}\triangleq\mathbf{R}_{\mathbf{x}}+\sigma_{\mathbf{z}} ^{2}\mathbf{I}_{L}\) denotes the pre-whitening matrix. Nevertheless, enumerating all the feasible candidates of \(\mathbf{V}\) and \(\mathbf{B}\) is infeasible as it incurs exponential complexity. In the following, we briefly describe the M-IGLS that was proposed in [8] to solve \(\mathcal{P}_{1}\). Then we improve the ZF detector in M-IGLS from the viewpoint of lattices. ### _M-Igls_ The pseudo-code of M-IGLS is shown in Algorithm 1. Specifically, M-IGLS estimates \(\mathbf{V}\) and \(\mathbf{B}\) iteratively by using an MMSE criterion: by either fixing \(\mathbf{B}^{(d)}\) or \(\mathbf{V}^{(d)}\) and using convex optimization, the formulas for \(\mathbf{B}^{(d)}\) or \(\mathbf{V}^{(d)}\) are derived. ``` Input:\(\mathbf{Y}\), \(\mathbf{R}_{\mathbf{y}}\). Output:\(\hat{\mathbf{V}}=\mathbf{V}^{(d)}\), \(\hat{\mathbf{B}}=\mathbf{B}^{(d)}\). 1\(d=0\), \(\mathbf{B}^{(0)}\)\(\sim\{\pm 1\}^{K\times M}\); 2whileach stopping criterion has not been reacheddo 3\(d\gets d+1\) ; 4\(\mathbf{V}^{(d)}\leftarrow\mathbf{Y}(\mathbf{B}^{(d-1)})^{\mathrm{T}}[ \mathbf{B}^{(d-1)}(\mathbf{B}^{(d-1)})^{\mathrm{T}}]^{-1}\); 5\(\mathbf{B}^{(d)}\leftarrow\) \(\mathrm{sign}\left\{\left((\mathbf{V}^{(d)})^{\mathrm{T}}\mathbf{R}_{\mathbf{ y}}^{-1}\mathbf{V}^{(d)}\right)^{-1}(\mathbf{V}^{(d)})^{\mathrm{T}} \mathbf{R}_{\mathbf{y}}^{-1}\mathbf{Y}\right\}\); \(\triangleright\) Approximate lattice decoding via GLS/ZF. ``` **Algorithm 2**The M-IGLS data extraction algorithm. Observe the step of estimating \(\mathbf{B}^{(d)}\) in Algorithm 1, which asks to solve the following problem: \[\mathcal{P}_{2}:\min_{\mathbf{B}\in\{\pm 1\}^{K\times M}}||\mathbf{R}_{\mathbf{z}}^{- \frac{1}{2}}\mathbf{Y}-\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{V}\mathbf{ B}||_{F}^{2}. 
\tag{10}\] Since \(\{\pm 1\}^{K\times M}\subset\mathbb{Z}^{K\times M}\), \(\mathcal{P}_{2}\) is a special case of CVP, which asks to find the \(M\) closest lattice vectors to \(\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{Y}\), where the lattice is defined by the basis \(\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{V}\). Considering \(\mathcal{P}_{2}\), define the set of query vectors as \(\overline{\mathbf{Y}}\triangleq\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{Y}\), and the lattice basis as \(\overline{\mathbf{V}}\triangleq\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{V}\), then the ZF estimator is \[\hat{\mathbf{B}}_{\mathrm{ZF}}=\overline{\mathbf{V}}^{\dagger}\overline{\mathbf{Y}}=(\overline{\mathbf{V}}^{\mathrm{T}}\overline{\mathbf{V}})^{-1}\overline{\mathbf{V}}^{\mathrm{T}}\overline{\mathbf{Y}}. \tag{11}\] In Appendix A, we show that the generalized least-squares (GLS) step in line \(5\) of M-IGLS is the same as ZF. The ZF estimator is linear: it behaves like a linear filter that separates the data streams and thereafter independently decodes each stream. The drawback of ZF is the effect of noise amplification when the lattice basis \(\overline{\mathbf{V}}\) is not orthogonal.

Fig. 1: The decision regions of ZF (parallelogram) and SIC (rectangle) in a 2-dimensional lattice.

### _M-Isic_ By using decision feedback in the decoding process, the nonlinear successive interference cancellation (SIC) detector has better performance than ZF. Recall that for \(\mathcal{P}_{2}\), the lattice basis is \(\overline{\mathbf{V}}\), and the set of query vectors is \(\overline{\mathbf{Y}}\). The SIC algorithm consists of the following steps: **Step i**) Use the QR decomposition to factorize \(\overline{\mathbf{V}}\): \(\overline{\mathbf{V}}=\mathbf{Q}\mathbf{R}\)1, where \(\mathbf{Q}\in\mathbb{R}^{L\times L}\) denotes a unitary matrix and \(\mathbf{R}\in\mathbb{R}^{L\times K}\) is an upper triangular matrix of the form: Footnote 1: For better performance, this paper adopts a sorted version of the QR decomposition, where the column vectors in \(\overline{\mathbf{V}}\) are sorted from short to long. \[\mathbf{R}=\begin{bmatrix}R_{1,1}&R_{1,2}&\cdots&R_{1,K}\\ 0&R_{2,2}&\cdots&R_{2,K}\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&R_{K,K}\\ 0&0&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&0\\ \end{bmatrix}. \tag{12}\] **Step ii**) Construct \(\mathbf{Y}^{\prime}=\mathbf{Q}^{\top}\overline{\mathbf{Y}}\in\mathbb{R}^{L\times M}\), which consists of vectors \(\mathbf{y}^{{}^{\prime}}(1),...,\mathbf{y}^{{}^{\prime}}(M)\). **Step iii**) For \(m=1,...,M\), generate the estimates as \[\hat{b}_{K}(m) =\mathrm{sign}\left(\frac{y^{\prime}_{K}(m)}{R_{K,K}}\right), \tag{13}\] \[\hat{b}_{k}(m) =\mathrm{sign}\left(\frac{y^{\prime}_{k}(m)-\sum_{l=k+1}^{K}R_{k,l}\hat{b}_{l}(m)}{R_{k,k}}\right), \tag{14}\] where \(k=K-1,K-2,...,1\), and \(y^{\prime}_{k}(m)\) denotes the \(k\)th component of \(\mathbf{y}^{{}^{\prime}}(m)\). By substituting Step 5 in Algorithm 1 with the SIC steps, we obtain a new algorithm referred to as multi-carrier iterative successive interference cancellation (M-ISIC). Its pseudo-code is presented in Algorithm 2. Notably, \(\mathbf{V}^{(d)}\) is estimated in the same way as in M-IGLS, and the performance improvements rely on SIC decoding. The stopping criterion can be set as \(||\mathbf{B}^{(d)}-\mathbf{B}^{(d-1)}||_{F}^{2}<10^{-5}\). _Remark 1_.: The rationale of SIC is explained as follows.
When detecting multiple symbols, if one of them can be estimated first, the interference caused by the already decoded can be eliminated when solving another, so as to reduce the effective noise of the symbol to be solved and to improve the bit error rate performance. To be concise, denote the observation equation corresponding to \(\mathcal{P}_{2}\) as \[\overline{\mathbf{Y}}=\overline{\mathbf{V}}\mathbf{B}+\overline{\mathbf{Z}}, \tag{15}\] with \(\overline{\mathbf{Z}}\) being the effective noise. Then the multiplication of \(\mathbf{Q}^{\top}\) to (15) is simply a rotation, which maintain the Frobenius norm of the objective function: \[||\overline{\mathbf{Y}}-\overline{\mathbf{V}}\mathbf{B}||_{F}^{2} =||\overline{\mathbf{Z}}||_{F}^{2} \tag{16}\] \[=||\mathbf{Q}^{\top}\overline{\mathbf{Z}}||_{F}^{2}\] (17) \[=||\mathbf{Q}^{\top}\overline{\mathbf{Y}}-\mathbf{R}\mathbf{B}||_ {F}^{2}. \tag{18}\] Regarding Step iii), \(\hat{b}_{K}(m),...,\hat{b}_{1}(m)\) are estimated in descending order because the interference caused by these symbols can be canceled. Moreover, the divisions of \(R_{K,K},...,R_{1,1}\) in Eqs. (13) (14) imply that the effective noise level hinges on the quality of \(R_{K,K},...,R_{1,1}\). ### _Performance Analysis_ We show that M-ISIC theoretically outperforms M-IGLS, as SIC has better decoding performance than ZF when approximately solving \(\mathcal{P}_{2}\). With a slight abuse of notations, \(\mathcal{P}_{2}\) can be simplified as \(M\) instances of the following observation: \[\mathbf{y}=\mathbf{R}^{\prime}\mathbf{b}^{*}+\mathbf{z} \tag{19}\] where \(\mathbf{y}\in\mathbb{R}^{K}\), \(\mathbf{b}^{*}\in\{\pm 1\}^{K}\) is the transmitted message, \(\mathbf{R}^{\prime}\in\mathbb{R}^{K\times K}\) includes only the first \(K\) rows of (12), and we assume that \(\mathbf{z}\) also admits a Gaussian distribution with mean \(\mathbf{0}\) and covariance \(\sigma_{n}^{2}\mathbf{I}_{K}\). Then the lattice decoding task becomes \[\mathcal{P}_{3}:\min_{\mathbf{b}\in\{\pm 1\}^{K}}||\mathbf{y}-\mathbf{R} \mathbf{b}||^{2}. \tag{20}\] It has been demonstrated in the literature [14, 23] that SIC outperforms ZF if the constraint of \(\mathbf{b}\) in \(\mathcal{P}_{3}\) is an integer set \(\mathbb{Z}^{K}\) and a box-constrained (truncated continuous integer) set \(\mathcal{B}\). Therefore, we employ a model reduction technique to show that SIC has higher success probability when decoding \(\mathcal{P}_{3}\). **Proposition 2**.: _Let the SIC and ZF estimates of \(\mathcal{P}_{3}\) be \(\mathbf{b}^{\mathrm{SIC}}\) and \(\mathbf{b}^{\mathrm{ZF}}\), respectively. Then the averaged decoding success probability of SIC is higher than that of ZF:_ \[\mathbb{E}_{\mathbf{b}^{*}}\{\mathrm{Pr}(\mathbf{b}^{\mathrm{SIC}}=\mathbf{b}^ {*})\}\geq\mathbb{E}_{\mathbf{b}^{*}}\{\mathrm{Pr}(\mathbf{b}^{\mathrm{ZF}}= \mathbf{b}^{*})\}, \tag{21}\] _where the expectation is taken over uniform random \(\mathbf{b}^{*}\in\{\pm 1\}^{K}\)._ Proof.: Firstly, Eq. (19) is rewritten as \[(\mathbf{y}+\mathbf{R}\times\mathbf{1})/2=\mathbf{R}(\mathbf{b}^{*}+\mathbf{1} )/2+\mathbf{z}/2. \tag{22}\] By updating the query vector \(\mathbf{y}\) as \(\mathbf{y}^{\prime}\triangleq(\mathbf{y}+\mathbf{R}\times\mathbf{1})/2\), the bipolar constraint model \(\mathcal{P}_{3}\) is transformed to the following box-constrained model \(\mathcal{P}_{4}\): \[\mathcal{P}_{4}:\min_{\mathbf{b}\in\mathcal{B}}||\mathbf{y}^{\prime}-\mathbf{R }\mathbf{b}||^{2}, \tag{23}\] where the constraint of the variable is \(\mathcal{B}=\{0,1\}^{K}\). 
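The SIC step just described, i.e., a QR factorization followed by back-substitution with a sign decision at each level (Eqs. (13)-(14)), can be sketched as follows. This is a plain, unsorted variant operating on a single whitened observation vector and is meant only as an illustration, not as the exact implementation used in the paper.

```
import numpy as np

def sic_detect(V_bar, y_bar):
    """Successive interference cancellation over the alphabet {-1, +1}.

    V_bar : (L, K) whitened carrier matrix (lattice basis).
    y_bar : (L,)   whitened observation vector.
    """
    Q, R = np.linalg.qr(V_bar)          # reduced QR: V_bar = Q R, R is K x K upper triangular
    y_prime = Q.T @ y_bar               # rotate the observation (Step ii)
    K = R.shape[1]
    b_hat = np.zeros(K)
    for k in range(K - 1, -1, -1):      # decode b_K, ..., b_1 (Eqs. (13)-(14))
        interference = R[k, k + 1:] @ b_hat[k + 1:]
        b_hat[k] = np.sign((y_prime[k] - interference) / R[k, k])
    return b_hat
```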
Since [23, Thm. 9] has shown that Eq. (21) holds in this type of box-constrained model, the proposition is proved. If \(\overline{\mathbf{V}}\) is close to being an orthogonal matrix, then ZF and SIC detection can both achieve maximum likelihood estimation. The reason is that they are then both solving a much simpler quantization problem \(\min_{\mathbf{b}\in\{\pm 1\}^{K}}||\mathbf{y}-\mathbf{I}_{K}\mathbf{b}||^{2}\). In general, the performance gap between ZF and SIC depends on the degree of orthogonality of the lattice basis \(\overline{\mathbf{V}}\). To quantify this, we introduce the normalized orthogonality defect of a matrix as \[\delta(\overline{\mathbf{V}})=\left(\frac{\prod_{k=1}^{K}||\overline{\mathbf{v}}_{k}||}{\sqrt{\det(\overline{\mathbf{V}}^{\top}\overline{\mathbf{V}})}}\right)^{1/K}, \tag{24}\] where the column vectors of \(\overline{\mathbf{V}}=[\overline{\mathbf{v}}_{1},...,\overline{\mathbf{v}}_{K}]\) are linearly independent. From Hadamard's inequality, \(\delta(\overline{\mathbf{V}})\) is always larger than or equal to \(1\), with equality if and only if the columns are orthogonal to each other. Summarizing the above, SIC performs better than ZF in general, and their performance gap decreases as \(\delta(\overline{\mathbf{V}})\to 1\). ### _Computational Complexity_ To compare with M-IGLS and existing schemes, we give the computational complexity of M-ISIC based on the following conditions:

* The complexity of the multiplication of two matrices \(\mathbf{A}\in\mathbb{R}^{M\times N}\) and \(\mathbf{B}\in\mathbb{R}^{N\times K}\) is \(\mathcal{O}(MNK)\).
* The complexity of an inversion of a square matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\) is \(\mathcal{O}(N^{3})\).
* The complexity of performing a QR decomposition on a matrix \(\mathbf{A}\in\mathbb{R}^{M\times N}\), \(M>N\), is \(\mathcal{O}(2MN^{2})\).

Noticing that \(\mathbf{Y}\in\mathbb{R}^{L\times M}\), \(\mathbf{V}\in\mathbb{R}^{L\times K}\) and \(\mathbf{B}\in\mathbb{R}^{K\times M}\), the computational complexity of Step 4 in M-ISIC is \[\mathcal{O}(K^{3}+K^{2}(L+M)+LMK).\] The computational complexity of Step 5 is dominated by the QR decomposition, which is \[\mathcal{O}\left(K^{2}L+M(LK+K)\right).\] The computational complexity of each iteration of the algorithm is therefore \[\mathcal{O}\left(K^{3}+2LMK+K^{2}(3L+M)+KM\right).\] With a total of \(T\) iterations, the overall complexity is \[\mathcal{O}\left(T(K^{3}+2LMK+K^{2}(3L+M)+KM)\right).\] ## IV Non-blind Extraction The difference between legitimate/non-blind extraction and blind extraction lies in the availability of \(\mathbf{V}\). In the case of legitimate extraction, one asks to solve \[\mathcal{P}_{4}:\min_{\mathbf{B}\in\{\pm 1\}^{K\times M}}||\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}(\mathbf{Y}-\mathbf{V}\mathbf{B})||_{F}^{2}. \tag{25}\] With the knowledge of the carriers, non-blind algorithms exhibit higher accuracy than blind algorithms. In this section, we describe the similarity between the ZF and MMSE criteria by using an extended system model. To achieve better extraction performance when the channel matrix lacks sufficient orthogonality, we introduce a sphere decoding algorithm to extract the SS hidden data. Subsequently, its computational complexity is discussed. ### _Equivalence of linear MMSE and ZF_ By introducing an extended system model, it is straightforward to show the similarity between linear MMSE and zero forcing (ZF).
The channel matrix \(\mathbf{V}\) and the received matrix \(\mathbf{Y}\) can be reconstructed through \[\underline{\mathbf{V}}=\left[\mathbf{V}^{\top}\ \sigma_{n}\mathbf{I}_{L}\ \tfrac{1}{\sqrt{M}}\mathbf{X}^{\top}\right]^{\top}\quad\text{and}\quad \underline{\mathbf{Y}}=\left[\mathbf{Y}^{\top}\ \mathbf{0}\right]^{\top}. \tag{26}\] Therefore, the output of the linear MMSE filter (6) can be re-expressed as \[\hat{\mathbf{B}}_{MMSE} =\mathrm{sign}\{\underline{\mathbf{V}}^{\top}(\underline{\mathbf{V}}^{\top}\underline{\mathbf{V}})^{-1}\underline{\mathbf{Y}}\} \tag{27}\] \[=\mathrm{sign}\{\underline{\mathbf{V}}^{\top}\underline{\mathbf{Y}}\}. \tag{28}\] It is not difficult to find that (28) is analogous to the familiar linear zero-forcing detector \(\hat{\mathbf{B}}_{ZF}=\mathrm{sign}\{\mathbf{V}^{\dagger}\mathbf{Y}\}\)[19], except that \(\mathbf{V}\) and \(\mathbf{Y}\) are replaced by \(\underline{\mathbf{V}}\) and \(\underline{\mathbf{Y}}\), respectively. Since \(\mathcal{P}_{4}\) amounts to the CVP of lattices, there is no free lunch for linear-complexity algorithms (such as ZF, SIC, linear MMSE) to provide high-quality or exact solutions for \(\mathcal{P}_{4}\). The default linear MMSE algorithm shows satisfactory performance only when \(\mathbf{V}\) has sufficient orthogonality (i.e., \(\mathbf{V}^{\top}\mathbf{V}\) is close to an identity matrix). To solve \(\mathcal{P}_{4}\) exactly, exponential-complexity algorithms are indispensable. ### _Sphere Decoding_ Sphere decoding is a popular approach to solve CVP in the lattice community [20]. We only have to adjust a few steps of conventional sphere decoding to solve \(\mathcal{P}_{4}\): i) the quantization of symbols is to \(\{\pm 1\}\) rather than \(\mathbb{Z}\); ii) we initialize the search radius of sphere decoding via the solution of linear MMSE. The principle behind sphere decoding is quite simple: we attempt to search over those lattice points that lie inside a sphere with radius \(r\) and find the nearest vector to the given query vector \(\mathbf{x}\). An example in 2-dimensional space is given in Fig. 2. If the channel matrix \(\overline{\mathbf{V}}\) represents a lattice basis, then the lattice point \(\overline{\mathbf{V}}\mathbf{b}\) is located in the sphere of radius \(d\) and center \(\mathbf{x}\) if and only if \[d^{2}\geq\|\overline{\mathbf{y}}-\overline{\mathbf{V}}\mathbf{b}\|^{2},\quad\mathbf{b}\in\{\pm 1\}^{K} \tag{29}\] where \(\|\cdot\|\) represents the Euclidean norm, and \(\overline{\mathbf{y}}\) and \(\mathbf{b}\) are the columns of \(\overline{\mathbf{Y}}\) and \(\mathbf{B}\) respectively. Although it seems complicated to determine which lattice points are contained within the sphere in \(m\)-dimensional space, it becomes effortless to do so when \(m=1\).

Fig. 2: Sphere decoding in 2-dimensional Euclidean space.

The reason is that the sphere degenerates into a fixed-length interval in one dimension. Then the lattice points are the integer values falling in the interval centered on \(\mathbf{x}\). With this basic observation, we can generalize from dimension \(k\) to dimension \(k+1\). Assume that the lattice points contained in the sphere with radius \(r\) are obtained. Then, for the sphere with the same radius in dimension \(k+1\), the admissible values of the \((k+1)\)-th coordinate of these lattice points form a finite set or a fixed-length interval.
It implies that one can obtain the lattice points contained in the sphere with radius \(r\) in \(m\) dimension by means of solving all the lattice points in the sphere of the same radius from \(1,\cdots,m-1\) dimension successively. Through the above brief introduction, the sphere decoding algorithm consists of the following steps: **Step i)** Perform QR decomposition to factorize \(\overline{\mathbf{V}}\): \(\overline{\mathbf{V}}=\mathbf{QR}.\mathbf{Q}=\left[\mathbf{Q}_{1}\,\mathbf{Q}_ {2}\right]\in\mathcal{R}^{L\times L}\) denotes a unitary matrix with pair-wise orthogonal columns and \(\mathbf{R}\in\mathcal{R}^{L\times K}\) denotes an upper triangular matrix. **Step ii)** On the basis of QR decomposition, (29) can be rewritten as \[d^{2} \geq\|\overline{\mathbf{y}}-\left[\mathbf{Q}_{1}\,\,\mathbf{Q}_{2} \right]\left[\begin{matrix}\mathbf{R}_{1}\\ \mathbf{0}\end{matrix}\right]\mathbf{b}\|^{2}=\|\left[\begin{matrix} \mathbf{Q}_{1}^{H}\\ \mathbf{Q}_{2}^{H}\end{matrix}\right]\overline{\mathbf{y}}-\left[\begin{matrix} \mathbf{R}_{1}\\ \mathbf{0}\end{matrix}\right]\mathbf{b}\|^{2} \tag{30}\] \[=\|\mathbf{Q}_{1}^{H}\overline{\mathbf{y}}-\mathbf{R}_{1}\mathbf{ b}\|^{2}+\|\mathbf{Q}_{2}^{H}\overline{\mathbf{y}}\|^{2} \tag{31}\] where \((\cdot)^{H}\) denotes Hermitian transpose. To simplify the symbol, let us define \(\overline{d}^{2}=d^{2}-\|\mathbf{Q}_{2}^{H}\overline{\mathbf{y}}\|^{2}\) and \(\overline{\mathbf{y}}=\mathbf{Q}_{1}^{H}\overline{\mathbf{y}}\) to represent (31) as \[\overline{d}^{2}\geq\|\overline{\mathbf{y}}-\mathbf{R}_{1}\mathbf{b}\|^{2}. \tag{32}\] In accordance with the upper triangular property of \(\mathbf{R}_{1}\), the right hand side of (32) can be expanded to a polynomial \[\overline{d}^{2} \geq(\overline{y}_{K}-R_{K,K}b_{K})^{2}+(\overline{y}_{K-1}- \sum_{j=K-1}^{K}R_{K-1,j}b_{j})^{2}\] \[+\cdots+(\overline{y}_{1}-\sum_{j=1}^{K}R_{1,j}b_{j})^{2}. \tag{33}\] **Step iii)** Provided that \(\mathbf{Y}\) and \(\mathbf{V}\) are known, then \(\overline{\mathbf{y}}\) is also known for the receiver. By observing the right hand side of (33), it is straightforward to deduce that the first term of (33) only hinges on \(\{b_{K}\}\), while the second hinges on \(\{b_{K},b_{K-1}\}\), and so on. If the admissible values of \(b_{K}\) have been estimated, the decoder will exploit this set to further estimate \(b_{K-1}\). Let's start from one dimension. The first necessary condition for \(\mathbf{Vb}\) to fall in the sphere is \(\overline{d}^{2}\geq(\overline{y}_{K}-R_{K,K}b_{K})^{2}\), i.e., \(b_{K}\) must meet \[\lceil\frac{-\overline{d}+\overline{y}_{K}}{R_{K,K}}\rceil\leq b_{K}\leq\lfloor \frac{\overline{d}+\overline{y}_{K}}{R_{K,K}}\rfloor \tag{34}\] where \(\lceil\cdot\rceil\) and \(\lfloor\cdot\rfloor\) denote rounding up and rounding down, respectively. There is no doubt that (34) is definitely not sufficient enough. We need stronger constraints in order to keep the search space shrinking. For any \(b_{K}\) satisfying (34), let \(\overline{d}_{K}^{2}=\overline{d}^{2}-(\overline{y}_{K}-R_{K,K}b_{K})^{2}\) and \(\overline{y}_{K-1}^{\prime}=\overline{y}_{K-1}-R_{K-1,K}b_{K}\), the integer interval that \(b_{K-1}\) belongs to can be found. \[\lceil\frac{-\overline{d}_{K}+\overline{y}_{K-1}^{\prime}}{R_{K-1,K-1}} \rceil\leq b_{K-1}\leq\lfloor\frac{\overline{d}_{k}+\overline{y}_{K-1}^{\prime }}{R_{K-1,K-1}}\rfloor \tag{35}\] Similarly, the intervals that the remaining symbols \(b_{K-2},\cdots,b_{1}\) belong to can be calculated in the same way recursively. 
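Before the full pseudo-code is given in Algorithm 3 below, the recursion of Steps i)-iii) can be sketched as a depth-first search over \(\{\pm 1\}^{K}\) with radius-based pruning. The sketch below is a simplified stand-in: for instance, it starts from an infinite radius instead of the linear MMSE solution and does not use the sorted QR decomposition.

```
import numpy as np

def sphere_decode(V_bar, y_bar):
    """Exact closest-vector search over b in {-1, +1}^K (depth-first, with pruning)."""
    Q, R = np.linalg.qr(V_bar)           # reduced QR: R is K x K upper triangular
    y_prime = Q.T @ y_bar
    K = R.shape[1]
    best = {"radius2": np.inf, "b": None}

    def search(k, b, dist2):
        if dist2 >= best["radius2"]:     # prune: partial distance already too large
            return
        if k < 0:                        # all symbols fixed: shrink the radius
            best["radius2"], best["b"] = dist2, b.copy()
            return
        for cand in (-1.0, 1.0):         # enumerate the two admissible values of b_k
            b[k] = cand
            resid = y_prime[k] - R[k, k:] @ b[k:]
            search(k - 1, b, dist2 + resid ** 2)
        b[k] = 0.0

    search(K - 1, np.zeros(K), 0.0)
    return best["b"]
```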
Finally, we obtain all the candidate lattice points, the potential closest vectors to \(\overline{\mathbf{y}}\), after the program terminates.

```
Input: \(\overline{\mathbf{y}}=\mathbf{Q}_{1}^{H}\overline{\mathbf{y}}\), \(\mathbf{R}_{1}\), \(Radius\).
Output: \(\hat{\mathbf{b}}\).
1  Initialization: \(K=size(\mathbf{R}_{1},2)\), \(dist=0\), \(k=K\), \(\hat{\mathbf{b}}=zeros(K,1)\);
2  if \(k==K\) then
3      \(\overline{\mathbf{y}}^{\prime}=\overline{\mathbf{y}}\);
4  else
5      \(\overline{\mathbf{y}}^{\prime}=\overline{\mathbf{y}}-\mathbf{R}_{1}(:,k+1:end)*\hat{\mathbf{b}}(k+1:end)\);
6  \(c=sign\left(\overline{\mathbf{y}}^{\prime}(k)/\mathbf{R}_{1}(k,k)\right)\), \(\hat{\mathbf{b}}(k)=c\);
7  \(\overline{d}^{2}=\left(\overline{\mathbf{y}}^{\prime}(k)-\mathbf{R}_{1}(k,k:end)\hat{\mathbf{b}}(k:end)\right)^{2}+dist\);
8  if \(\overline{d}^{2}\leq Radius\) then
9      go to 12;
10 else
11     go to 19;
12 if \(k==1\) then
13     save \(\hat{\mathbf{b}}\);
14     \(Radius=\overline{d}^{2}\);
15 else
16     \(k=k-1\);
17     \(dist=\overline{d}^{2}\);
18     go to 2;
19 \(ci=c*(-1)\), \(\hat{\mathbf{b}}(k)=ci\), go to 7;
```

**Algorithm 3** The M-SD data extraction algorithm.

Algorithm 3 gives the pseudo-code of sphere decoding. The radius is initialized to the solution of linear MMSE. As the message space is \(\{\pm 1\}\), the rounding operation simply invokes \(sign(\cdot)\). ### _Computational Complexity_ The computational complexity of sphere decoding has been studied thoroughly in the literature [17, 24]. In the worst case, we have to visit all \(2^{K}\) nodes. But in general the actual complexity is significantly smaller than that, as the algorithm constantly updates the search radius. According to [24], the expected complexity of sphere decoding is polynomial. Fig. 3 shows a binary search tree in 3-dimensional space, where the nodes in the \(k\)-th layer correspond to the lattice points in dimension \(k\) and the height of the tree is \(K\).

Fig. 3: A 3-dimensional binary search tree.

## V Experimental Studies This section performs numerical simulations to validate the effectiveness and accuracy of the proposed algorithms. The simulations investigate the scenarios with blind extraction in part A and non-blind extraction in part B separately. The experimental setup is described in detail below. **Datasets**: Without loss of generality, we use the images in the BOWS-2 [25] database and the audio files in [26] and [27] as the embedding covers. The BOWS-2 database consists of \(10,000\) grey-level images with different statistical properties. Fig. 4 displays some typical samples, and the labels of the subfigures indicate their ordinal numbers in the dataset. The audio datasets consist of 9 MP3 and 70 FLAC files, containing different types of audio. Fig. 5 shows three audio signal samples, and the labels of the subfigures indicate their file names in the datasets. **Indicator**: The bit-error-rate (BER), as a common performance index, is employed to measure the extraction performance. **Preprocessing**: By performing an \(8\times 8\) block-wise DCT transform, zig-zag scanning and coefficient selection on the original images, we obtain transform-domain hosts for embedding. Then, invoking additive SS embedding, the watermarked images are generated. Similarly, we can use the same process to generate the watermarked audio signals. For image covers, the entries in the matrix \(\mathbf{V}\) are drawn from a standard Gaussian distribution; for audio, the elements in the matrix \(\mathbf{V}\) are taken from \(\{-1,1\}\).
The normalized orthogonality defect of the simulated carriers is shown in Table I. By varying the size \(L\times K\), the carriers exhibit different \(\delta(\overline{\mathbf{V}})\). The noise powers for image and audio are fixed as \(\sigma_{n}^{2}=3\) and \(\sigma_{n}^{2}=1\), respectively, and the signal-to-noise ratio is controlled by varying the distortion \(D\). ### _Blind Extraction_ Benchmark algorithms in this subsection include: _i)_ M-IGLS [8], _ii)_ SMI-MMSE [16], _iii)_ Ideal-MMSE, _iv)_ JADE [13], where Ideal-MMSE represents the ideal version of SMI-MMSE because the autocorrelation matrix \(\mathbf{R_{x}}\) is exactly known. For image covers, we consider the cases with \(\delta(\overline{\mathbf{V}})=1.6887,\,1.3819,\,1.3475\) for the sake of showing the impact of the orthogonality of the lattice bases. In the first example, we consider the case with \(L=8\), \(K=8\), \(\delta(\overline{\mathbf{V}})=1.6887\). The BER versus distortion performance of the different algorithms is plotted in Fig. 6(a). With the exact carrier information, the SMI-MMSE and Ideal-MMSE algorithms serve as the performance upper bounds. The BSS approach, JADE, fails to exhibit satisfactory performance. Moreover, M-ISIC outperforms M-IGLS in the whole distortion range of \(24\sim 38\mathrm{dB}\). The second example examines the case with \(L=12\), \(K=10\), \(\delta(\overline{\mathbf{V}})=1.3819\). As depicted in Fig. 6(b), when the carriers become more orthogonal, both M-IGLS and M-ISIC get closer to SMI-MMSE and Ideal-MMSE. The performance gap between M-IGLS and M-ISIC becomes smaller. Similar results can be replicated when we further reduce the normalized orthogonality defect. We show one such figure in Fig. 6(c).

Fig. 4: Representative images in the BOWS-2 database.

Fig. 5: Representative audio signals in [26] and [27].

\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline \(L\times K\) & \(8\times 8\) & \(12\times 10\) & \(15\times 12\) & \(10\times 4\) & \(12\times 6\) & \(15\times 8\) \\ \hline \hline \(\delta(\overline{\mathbf{V}})\) & \(1.6887\) & \(1.3819\) & \(1.3475\) & \(1.0892\) & \(1.1417\) & \(1.1695\) \\ \hline \hline \end{tabular} \end{table} TABLE I: The normalized orthogonality defect of the simulated carriers.

Audio signals are another type of cover in our experiments, with a sampling frequency of 44.1 kHz. We also compare the BER performance for three cases with different carrier quality, i.e., \(\delta(\overline{\mathbf{V}})=1.0892,\)\(1.1417,\)\(1.1695\). Fig. 7 depicts the corresponding experimental results in turn, where the value on the horizontal axis controls the distortion degree of the audio signal. It is obvious that for the whole alpha range of \(0.5\sim 1.2\), M-ISIC has a lower BER than M-IGLS in the cases we have listed. From the audio experiments, it can be found that our scheme is superior to M-IGLS even if the value of \(\delta(\overline{\mathbf{V}})\) is relatively small. From the above, we observe that M-ISIC performs better than M-IGLS in general. When the carriers \(\overline{\mathbf{V}}\) represent a bad lattice basis, M-ISIC apparently outperforms M-IGLS. On the other hand, when the carriers are highly orthogonal, the decision regions of M-IGLS and M-ISIC become similar in shape, and then the performance of the two algorithms tends to be the same. ### _Non-blind Extraction_ When the carriers are known to the receiver, more sophisticated decoding algorithms can be employed to achieve higher accuracy.
Next, we examine the BER comparison of each algorithm for non-blind extraction. We adopt the same setting of carriers as in the previous subsection. From Fig. 8(a), we observe that ZF and SIC struggle to exhibit satisfactory performance due to the bad orthogonality of the lattice basis. Sphere decoding significantly outperforms SMI-MMSE and even Ideal-MMSE in the whole distortion range from 24 to 38 dB. If \(\delta(\overline{\mathbf{V}})\) decreases, as shown in Fig. 8(b) and Fig. 8(c), the distance between the BER curves of sphere decoding and SMI-MMSE/Ideal-MMSE gradually becomes smaller. However, in the low distortion range, sphere decoding still enjoys the best performance. The experimental results for audio signals are given in Fig. 9. In these examples, we find that SIC performs much better on audio than on images. Notably, sphere decoding shows a more prominent advantage, especially when alpha increases. Even if the value of \(\delta(\overline{\mathbf{V}})\) tends to 1, meaning the basis vectors are more orthogonal, the BER of sphere decoding is still much lower than that of the other algorithms. From the above, we observe that the performance of sphere decoding is generally not worse than that of SMI-MMSE and Ideal-MMSE. When the carriers \(\overline{\mathbf{V}}\) represent a bad lattice basis, sphere decoding apparently outperforms SMI-MMSE and Ideal-MMSE. On the other hand, when the carriers are highly orthogonal, the performance of MMSE and SD tends to be the same. ## VI Conclusions This paper studies both blind and non-blind extraction of spread-spectrum hidden data from the perspective of lattices. To achieve better decoding performance, we employ more accurate lattice decoding algorithms in blind and non-blind extraction. The experimental results demonstrate that our schemes are superior to the existing solutions, especially when the channel matrix lacks sufficient orthogonality.

Fig. 6: BER comparison between M-IGLS and M-ISIC in image blind extraction.

Fig. 7: BER comparison between M-IGLS and M-ISIC in audio blind extraction.

## Appendix Assuming \(\mathbf{V}\) is known, the least-squares estimation [8] of \(\mathbf{B}\) used in Step 5 of Algorithm 1 is: \[\hat{\mathbf{B}}_{\mathrm{GLS}} =\left(\mathbf{V}^{\mathrm{T}}\mathbf{R}_{\mathbf{y}}^{-1}\mathbf{V}\right)^{-1}\mathbf{V}^{\mathrm{T}}\mathbf{R}_{\mathbf{y}}^{-1}\mathbf{Y}\] \[=\left(\left(\mathbf{V}^{\mathrm{T}}\mathbf{R}_{\mathbf{z}}^{-1}\mathbf{V}\right)^{-1}+\mathbf{I}\right)\mathbf{V}^{\top}\] \[\quad\times\left(\mathbf{R}_{\mathbf{z}}^{-1}-\mathbf{R}_{\mathbf{z}}^{-1}\mathbf{V}\left(\mathbf{V}^{\mathrm{T}}\mathbf{R}_{\mathbf{z}}^{-1}\mathbf{V}+\mathbf{I}\right)^{-1}\mathbf{V}^{\top}\mathbf{R}_{\mathbf{z}}^{-1}\right)\mathbf{Y}\] \[=\left(\mathbf{V}^{\mathrm{T}}\mathbf{R}_{\mathbf{z}}^{-1}\mathbf{V}\right)^{-1}\mathbf{V}^{\mathrm{T}}\mathbf{R}_{\mathbf{z}}^{-1}\mathbf{Y}\] \[=\left(\mathbf{V}^{\mathrm{T}}\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{V}\right)^{-1}\mathbf{V}^{\mathrm{T}}\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{Y}\] \[=\left[(\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{V})^{\mathrm{T}}(\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{V})\right]^{-1}(\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{V})^{\mathrm{T}}(\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{Y}).
\tag{36}\] In the language of ZF, recall that \(\overline{\mathbf{Y}}=\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{Y}\) and \(\overline{\mathbf{V}}=\mathbf{R}_{\mathbf{z}}^{-\frac{1}{2}}\mathbf{V}\). Thus Eq. (36) equals \((\overline{\mathbf{V}}^{\mathrm{T}}\overline{\mathbf{V}})^{-1}\overline{\mathbf{V}}^{\mathrm{T}}\overline{\mathbf{Y}}\), which justifies \(\hat{\mathbf{B}}_{\mathrm{GLS}}=\hat{\mathbf{B}}_{\mathrm{ZF}}\).
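The chain of equalities above, in particular the step from \(\mathbf{R}_{\mathbf{y}}\) to \(\mathbf{R}_{\mathbf{z}}\), is easy to verify numerically, since \(\mathbf{R}_{\mathbf{y}}=\mathbf{R}_{\mathbf{z}}+\mathbf{V}\mathbf{V}^{\top}\) by Eq. (5). The following check uses small random matrices and is only an illustration of the algebra, not code from the paper.

```
import numpy as np

rng = np.random.default_rng(1)
L, K, M = 10, 4, 50
V = rng.standard_normal((L, K))
Y = rng.standard_normal((L, M))
A = rng.standard_normal((L, L))
R_z = A @ A.T + np.eye(L)                  # an arbitrary positive-definite R_z
R_y = R_z + V @ V.T                        # Eq. (5): R_y = R_z + V V^T

# GLS filter with R_y (line 5 of Algorithm 1, before the sign decision).
gls = np.linalg.solve(V.T @ np.linalg.solve(R_y, V),
                      V.T @ np.linalg.solve(R_y, Y))

# ZF filter on the whitened system (Eq. (11)): (Vb^T Vb)^{-1} Vb^T Yb.
w, U = np.linalg.eigh(R_z)
W = U @ np.diag(1.0 / np.sqrt(w)) @ U.T    # R_z^{-1/2}
Vb, Yb = W @ V, W @ Y
zf = np.linalg.lstsq(Vb, Yb, rcond=None)[0]

print(np.allclose(gls, zf))                # True: B_GLS = B_ZF
```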
2303.00052
Algorithmic Solutions for Maximizing Shareable Costs
This paper addresses the optimization problem to maximize the total costs that can be shared among a group of agents, while maintaining stability in the sense of the core constraints of a cooperative transferable utility game, or TU game. When maximizing total shareable costs, the cost shares must satisfy all constraints that define the core of a TU game, except for being budget balanced. The paper first gives a fairly complete picture of the computational complexity of this optimization problem, its relation to optimization over the core itself, and its equivalence to other, minimal core relaxations that have been proposed earlier. We then address minimum cost spanning tree (MST) games as an example for a class of cost sharing games with non-empty core. While submodular cost functions yield efficient algorithms to maximize shareable costs, MST games have cost functions that are subadditive, but generally not submodular. Nevertheless, it is well known that cost shares in the core of MST games can be found efficiently. In contrast, we show that the maximization of shareable costs is NP-hard for MST games and derive a 2-approximation algorithm. Our work opens several directions for future research.
Rong Zou, Boyue Lin, Marc Uetz, Matthias Walter
2023-02-28T19:49:30Z
http://arxiv.org/abs/2303.00052v3
# Algorithmic Solutions ###### Abstract This paper addresses the optimization problem to maximize the total costs that can be shared among a group of agents, while maintaining stability in the sense of the core constraints of a cooperative transferable utility game, or TU game. This means that all subsets of agents have an outside option at a certain cost, and stability requires that the cost shares are defined so that none of the outside options is preferable. When maximizing total shareable costs, the cost shares must satisfy all constraints that define the core of a TU game, except for being budget balanced. The paper gives a fairly complete picture of the computational complexity of this optimization problem, in relation to classical computational problems on the core. We also show that, for games with an empty core, the problem is equivalent to computing minimal core relaxations for several relaxations that have been proposed earlier. As an example for a class of cost sharing games with non-empty core, we address minimum cost spanning tree games. While it is known that cost shares in the core can be found efficiently, we show that the computation of maximal cost shares is \(\mathsf{NP}\)-hard for minimum cost spanning tree games. We also derive a 2-approximation algorithm. Our work opens several directions for future work. Cost Sharing, Core, Minimum Cost Spanning Tree Game, Approximation ## 1 Introduction The fundamental algorithmic question that is addressed in this paper is: Can we maximize the total costs that can be shared among a set of agents, while maintaining coalitional stability? Here, coalitional stability refers to the core constraints of an underlying cooperative transferable utility game, or TU game: Any proper subset of the set of all agents has an outside option at a certain cost, and coalitional stability of cost shares means that all subsets of agents are willing to accept the cost shares, because their outside option is less attractive, meaning that it is at least as costly as the sum of their cost shares. This question is arguably a fundamental question for the design of cost sharing mechanisms, and the main goal of this paper is to give more insight into its algorithmic complexity. Several closely related results exist and will be discussed in the next section. In the literature, these are a bit scattered and sometimes ignorant of each other. The main contributions of this paper are as follows. We introduce a basic polyhedral object that we refer to as the _almost core_ of a cooperative game. It is obtained from the core by simply relaxing the requirement that the cost shares must be budget balanced. By definition, this polyhedron is non-empty. The algorithmic problem that we address is to maximize cost shares that lie in the almost core, which is a linear optimization problem over that polyhedron. For the case that the underlying core of the cooperative game is empty, it turns out that the computational problem to maximize shareable costs is equivalent to finding a minimal non-empty core relaxation for several of the core relaxations that have been proposed earlier in the literature. This is maybe not surprising, yet a good overview of how all these relaxations relate to each other does not seem to appear anywhere in the literature. The paper further establishes complexity theoretic results that relate computational problems for the almost core with corresponding problems for the classical core. 
While it turns out that general linear optimization over almost core and core share the same algorithmic complexity, we show that there are classes of games where core elements can be efficiently computed, while the computation of maximal shareable costs cannot be done in polynomial time, unless \(\mathsf{P}=\mathsf{NP}\). That hardness result is obtained for a well-studied class of games with non-empty core, namely minimum cost spanning tree games. This class of games is interesting also because the resulting cost function is subadditive but generally not submodular. And while submodularity yields polynomial-time algorithms, our hardness result shows that subadditivity does not suffice. For minimum cost spanning tree games, we further show how to obtain a 2-approximation algorithm for maximizing shareable costs, and we show that our analysis of that algorithm is tight. The structure of this paper is as follows. The basic notions and definitions are given in Section 2. As previous papers have mostly focused on core relaxations for unbalanced games (that is, with an empty core), we briefly review these in Section 3, and discuss how they relate to the problem to compute maximal "almost core" cost shares. A novel aspect of our approach is to also address games that have a non-empty core. Section 4 therefore relates linear optimization over the "almost core" to the core, and we derive some algorithmic consequences. Section 5 then addresses the problem to compute maximal cost shares for minimum cost spanning tree (MST) games, showing \(\mathsf{NP}\)-hardness, as well as giving a 2-approximation algorithm. We conclude with some open problems in Section 6. ## 2 Core and Almost Core for TU Games A cooperative game with transferable utility (henceforth TU game) is described by a pair \((N,c)\) where \(N=\{1,\ldots,n\}\) denotes the set of agents, and \(c:2^{N}\to\mathbb{R}_{\geq 0}\) is the characteristic function that assigns to every coalition \(S\) a value \(c(S)\) representing the cost of an "outside option", which is the minimum total cost that the agents in \(S\) can achieve if they cooperate amongst themselves. With a slight overload of notation write \(n=|N|\) for the number of agents. An _allocation_ for \((N,c)\) is a vector \(x\in\mathbb{R}^{n}\) with \(x_{i}\) being the cost share allocated to agent \(i\in N\). For convenience, we write \(x(S)=\sum_{i\in S}x_{i}\). An allocation \(x\) is said to be _budget balanced_ if \(x(N)=c(N)\). That means that the total cost of the so-called _grand coalition_\(N\) is being distributed over the individual agents. It is called stable if it satisfies _coalitional stability_, i.e., \(x(S)\leq c(S)\) for all \(S\subsetneqq N\). The _core_[20] of game \((N,c)\), arguably one of the most important concepts in cooperative game theory, consists of all budget balanced allocations satisfying coalitional stability. The core of a TU game is given by \[C_{(N,c)}\coloneqq\{x\in\mathbb{R}^{n}:x(S)\leq c(S)\ \forall S\subsetneqq N,\ x(N)=c(N)\}\,.\] The core of a TU game is non-empty if and only if the game is balanced [7, 42]. In fact, being balanced is just a dual characterization of the non-emptiness of the polyhedron \(C_{(N,c)}\). When we drop the equality constraint that a core allocation is budget balanced, so do not require that \(x(N)=c(N)\), it allows to vary the total cost that is distributed over the set of agents, resulting in a problem that always has a feasible solution. 
This captures the idea that, depending on the underlying game, one may have to, or want to, distribute either less or more than \(c(N)\). For convenience, we refer to the set of all such allocations as the _almost core_. Formally, given a TU game \((N,c)\), define the almost core for \((N,c)\) by \[AC_{(N,c)}\coloneqq\left\{x\in\mathbb{R}^{n}:x(S)\leq c(S)\ \forall S\subsetneqq N \right\}.\] Obviously, \(C_{(N,c)}\subseteq AC_{(N,c)}\). The major motivation for this definition is to systematically study the algorithmic complexity of cooperative games without having to obey to budget balance, so optimization over the polyhedron \(AC_{(N,c)}\). Let us motivate the relevance of this problem. On the one hand, if the total cost \(c(N)\) of the grand coalition _cannot_ be distributed over the set of agents while maintaining coalitional stability, i.e., the game is unbalanced, it is a natural question to ask what fraction of the total cost \(c(N)\) can be maximally distributed while maintaining coalitional stability. This problem has been addressed under different names, among them the _cost of stability_ of a cooperative game [3]. It has received quite some attention in the literature, e.g. [1, 2, 3, 4, 8, 21, 24, 31, 29, 30, 34, 35]. Indeed, for games with empty core, maximizing \(x(N)\) over the almost core is equivalent to computing the cost of stability, and also to computing some other minimal core relaxations proposed earlier in the literature; see Section 3 for details. On the other hand, also if the core is non-empty one may be interested in maximizing the total cost that can be distributed over the set of agents. It reveals the maximal value for \(c(N)\) that would still yield a non-empty core. One motivation for this maximization problem is to determine the maximal tax rate that could be levied on a given \(c(N)\), without any subset of agents \(S\subsetneqq N\) wanting to deviate. That said, the object of interest of this paper is the following linear program. \[\max\{x(N):x\in AC_{(N,c)}\}. \tag{1}\] The objective value of this linear program indicates the largest total cost that can be shared among the agents while retaining stability in the sense that no subset of agents \(S\subsetneqq N\) would prefer to deviate to the outside option. We call an optimal solution value for this linear program the _almost core optimum_, and any maximizer is called an _optimal almost core allocation_. Sometimes we also consider the restricted problem where we also require that \(x\geq 0\), which means that agents must not receive subsidies. Clearly, the core of a game is non-empty if and only if the almost core optimum is larger than or equal to \(c(N)\). We study problem (1) mainly for games with non-empty core, while for games with empty core we next give a fairly complete overview of its relation to earlier proposed core relaxations. ## 3 Equivalent and Related Relaxations of the Core We review several well-known and related concepts that were introduced in order to deal with games having an empty core and discuss their relationship to the almost core (optimum). The first relaxation of the core, introduced by Shapley and Shubik [41], is the _strong \(\varepsilon\)-core_, defined as \[C^{\varepsilon}_{\mathrm{s}}(N,c)\coloneqq\left\{x\in\mathbb{R}^{n}:x(S)\leq c (S)+\varepsilon\ \forall S\subsetneqq N,\ x(N)=c(N)\right\}.\] We denote the smallest \(\varepsilon\geq 0\) for which this set is non-empty by \(\varepsilon^{\star}_{\mathrm{s}}\). 
The corresponding set \(C^{\varepsilon^{\star}_{\mathrm{s}}}_{\mathrm{s}}(N,c)\) is called the _least core_[32]. Shapley and Shubik [41] also introduced the _weak \(\varepsilon\)-core_ as \[C^{\varepsilon}_{\mathrm{w}}(N,c)\coloneqq\left\{x\in\mathbb{R}^{n}:x(S)\leq c(S)+\varepsilon\cdot|S|\ \forall S\subsetneqq N,\ x(N)=c(N)\right\}\,.\] We denote the smallest \(\varepsilon\geq 0\) for which this set is non-empty by \(\varepsilon^{\star}_{\mathrm{w}}\). Note that by definition, for any \(\varepsilon\geq 0\), \(C^{\varepsilon}_{\mathrm{s}}(N,c)\subseteq C^{\varepsilon}_{\mathrm{w}}(N,c)\), and hence \(\varepsilon^{\star}_{\mathrm{w}}\leq\varepsilon^{\star}_{\mathrm{s}}\). Instead of using an additive relaxation of the constraints, Faigle and Kern [17] defined the multiplicative \(\varepsilon\)-core as \[C^{\varepsilon}_{\mathrm{m}}(N,c)\coloneqq\left\{x\in\mathbb{R}^{n}:x(S)\leq(1+\varepsilon)\cdot c(S)\ \forall S\subsetneqq N,\ x(N)=c(N)\right\}.\] Denote the smallest \(\varepsilon\geq 0\) for which this set is non-empty by \(\varepsilon^{\star}_{\mathrm{m}}\). A different viewpoint is the _approximate core_ or _\(\gamma\)-core_[24]; for some \(\gamma\in[0,1]\), it is defined as \[C_{\mathrm{a}}^{\gamma}(N,c)\coloneqq\{x\in\mathbb{R}^{n}:x(S)\leq c(S)\ \forall S\subseteq N,\ \gamma\cdot c(N)\leq x(N)\}\,.\] Denote the largest \(\gamma\leq 1\) for which this set is non-empty by \(\gamma_{\mathrm{a}}^{\star}\). The gap between the almost core optimum and the total cost of the grand coalition \(c(N)\) was called the _cost of stability_ for an unbalanced cooperative game by Bachrach et al. [3]. For (unbalanced) cost sharing games it is defined by Meir et al. [33] as \[\delta_{\mathrm{CoS}}^{\star}\coloneqq c(N)-\max\{x(N):x(S)\leq c(S)\ \forall S\subseteq N\}\,.\] An alternative viewpoint was independently introduced in a paper by Bejan and Gomez [4] who considered, for profit sharing games, the so-called _extended core_. In order to define it for cost sharing games, let \[\delta_{\mathrm{ce}}^{\star}\coloneqq\min\{t(N):\exists(x,t)\in\mathbb{R}^{n}\times\mathbb{R}^{n}_{\geq\,0},\ x(N)=c(N),(x-t)(S)\leq c(S)\ \forall S\subsetneqq N\}\,. \tag{2}\] The _extended core_ is now the set of all budget balanced \(x\in\mathbb{R}^{n}\), so all \(x\) with \(x(N)=c(N)\) for which the minimum above is attained (for suitable \(t\in\mathbb{R}^{n}_{\geq\,0}\)). Yet another concept to stabilize an unbalanced game was considered by Zick, Polukarov, and Jennings [43]. Interpreting \(t_{i}\) in the definition of the extended core of Bejan and Gomez [4] as a discount offered to agent \(i\), in [43] a coalitional discount \(t_{S}\) is offered to each agent set \(S\). This is an exponential blowup of the solution space, which however gives more flexibility. For unbalanced games, i.e., games with an empty core, computing the almost core optimum is clearly the same as computing the cost of stability \(\delta_{\mathrm{CoS}}^{\star}\). The following theorem further shows how the different core relaxations relate to each other with respect to optimization. Some of these relations were known, e.g., Liu et al.
[28, 29] mention the equivalence of computing \(\gamma_{\mathrm{a}}^{\star}\) and \(\delta_{\mathrm{CoS}}^{\star}\) and \(\varepsilon_{\mathrm{m}}^{\star}\), and also the relation between the cost of stability \(\delta_{\mathrm{CoS}}^{\star}\) and the smallest weak \(\varepsilon\)-core \(\varepsilon_{\mathrm{w}}^{\star}\) appears in [3, 34], yet we are not aware of a summarizing overview of how the different relaxations relate. Hence we give this summary here for the sake of completeness, and also give the short proof. **Theorem 1** (in parts folklore).: _For any TU game \((N,c)\) with empty core, the optimization problems for the weak \(\varepsilon\)-core, the multiplicative \(\varepsilon\)-core, the cost of stability and the extended core are equivalent. In particular, the values satisfy_ \[\delta_{\mathrm{ce}}^{\star}=(1-\gamma_{a}^{\star})\cdot c(N)=\frac{ \varepsilon_{m}^{\star}}{1+\varepsilon_{m}^{\star}}\cdot c(N)=\delta_{ \mathrm{CoS}}^{\star}=\varepsilon_{w}^{\star}\cdot n\,.\] Proof.: First, we establish \(\delta_{\mathrm{CoS}}^{\star}=\delta_{\mathrm{ce}}^{\star}\). We substitute \(x-t\) by \(x^{\prime}\) in (2) and obtain \[\delta_{\mathrm{ce}}^{\star} =\min\{t(N)\ :\ \exists(x^{\prime},t)\in\mathbb{R}^{n}\times \mathbb{R}^{n}_{\geq\,0},\ x^{\prime}(N)+t(N)\ =\ c(N),\ x^{\prime}(S)\ \leq\ c(S)\ \forall S \subsetneqq N\}\,.\] Now it is easy to see that the actual entries of \(t\) do not matter (except for nonnegativity), but only the value \(t(N)\) is important. This yields \(\delta_{\mathrm{CoS}}^{\star}=\delta_{\mathrm{ce}}^{\star}\). Second, we show \(\delta_{\mathrm{CoS}}^{\star}=(1-\gamma_{\mathrm{a}}^{\star})\cdot c(N)\). To this end, observe \[\gamma_{\mathrm{a}}^{\star}=\max\{\gamma\in\mathbb{R}:\exists x\in\mathbb{R}^ {n},\ x(S)\leq c(S)\ \forall S\subseteq N,\ x(N)=\gamma c(N)\}\,.\] Clearly, the maximum is attained by \(x^{\star}\in\mathbb{R}^{n}\) with \(x^{\star}(N)\) maximum. Moreover, the value of \(\gamma_{\mathrm{a}}^{\star}\) is then equal to \(x^{\star}(N)/c(N)\). This shows \(\delta_{\mathrm{CoS}}^{\star}/c(N)=1-\gamma_{\mathrm{a}}^{\star}\). Third, we show \(1-\gamma_{\mathrm{a}}^{\star}=\varepsilon_{\mathrm{m}}^{\star}/(1+\varepsilon_ {\mathrm{m}}^{\star})\). Observe that the map \(\pi:\mathbb{R}^{n}\to\mathbb{R}^{n}\) defined by \(\pi(x)=(1+\varepsilon)x\) induces a bijection between allocations \(x\in\mathbb{R}^{n}\) with \(x(S)\leq c(S)\) for all \(S\subseteq N\) and allocations \(\pi(x)\) with \(\pi(x)(S)\leq(1+\varepsilon)c(S)\) for all \(S\subseteq N\). Moreover, \(\pi(x)(N)=(1+\varepsilon)x(N)\). Hence, \(C_{\mathrm{m}}^{\star}(N,c)\) is (non-)empty if and only if \(C_{\mathrm{a}}^{\star}(N,c)\) is (non-)empty, where \(\gamma=1/(1+\varepsilon)\) holds. This implies \(\gamma_{\mathrm{a}}^{\star}=1/(1+\varepsilon_{\mathrm{m}}^{\star})\). We finally show \(\delta_{\mathrm{CoS}}^{\star}=\varepsilon_{\mathrm{w}}^{\star}\cdot n\). To this end, in \[\varepsilon_{\mathrm{w}}^{\star}=\min\{\varepsilon\geq 0:\exists x,\ x(S)\leq c(S)+ \varepsilon\cdot|S|\ \forall S\subsetneqq N,\ x(N)=c(N)\}\] we substitute \(x\) by \(x^{\prime}+(\varepsilon,\varepsilon,\ldots,\varepsilon)\) which yields \[\varepsilon_{\mathrm{w}}^{\star}=\min\{\varepsilon\geq 0:\exists x^{\prime},\ x^{ \prime}(S)\leq c(S)\ \forall S\subsetneqq N,\ x^{\prime}(N)+\varepsilon\cdot n=c(N)\}\,.\] Clearly, the minimum \(\varepsilon_{\mathrm{w}}^{\star}\) is attained if and only if \(\varepsilon\cdot n=\delta_{\mathrm{CoS}}^{\star}\) holds. 
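All quantities in Theorem 1 are optimal values of small linear programs, so the identities can be checked numerically on toy instances. The sketch below (not from the paper) computes the almost core optimum of problem (1) and the resulting cost of stability by brute-force enumeration of all proper coalitions; it assumes `scipy` is available and is only meant for very small \(n\), since the number of constraints grows exponentially. The example game is made up for illustration.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def proper_coalitions(n):
    """All non-empty proper subsets S of N = {0, ..., n-1}."""
    for size in range(1, n):
        yield from combinations(range(n), size)

def almost_core_optimum(n, cost):
    """max x(N) s.t. x(S) <= cost(S) for all proper S (problem (1))."""
    subsets = list(proper_coalitions(n))
    A = np.zeros((len(subsets), n))
    for row, S in enumerate(subsets):
        A[row, list(S)] = 1.0
    b = np.array([cost[frozenset(S)] for S in subsets])
    res = linprog(c=-np.ones(n), A_ub=A, b_ub=b, bounds=[(None, None)] * n)
    return -res.fun  # linprog minimizes, so negate to obtain max x(N)

# Toy 3-agent game (values made up): singletons cost 4, pairs cost 5, c(N) = 8.
cost = {frozenset(S): v for S, v in [((0,), 4), ((1,), 4), ((2,), 4),
                                     ((0, 1), 5), ((0, 2), 5), ((1, 2), 5)]}
c_N = 8.0
opt = almost_core_optimum(3, cost)   # 7.5 here, so the core of this game is empty
delta_CoS = max(c_N - opt, 0.0)      # cost of stability, 0.5 in this example
print(opt, delta_CoS)
```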
Moreover, it was shown in [34, Section 4] that \(\varepsilon_{\mathrm{w}}^{*}\geq\frac{1}{n-1}\varepsilon_{\mathrm{s}}^{*}\). Further relations between the cost of stability \(\delta_{\mathrm{CoS}}^{*}\) and other core relaxations for specific classes of games appear in [34, 2]. For instance, it is true that for superadditive (profit sharing) games, \(\delta_{\mathrm{CoS}}^{*}\leq\sqrt{n}\varepsilon_{\mathrm{s}}^{*}\) and \(\sqrt{n}\varepsilon_{\mathrm{w}}^{*}\leq\varepsilon_{\mathrm{s}}^{*}\). Indeed, much of the previous work in this direction was about determining bounds on the cost of stability [3, 33, 34, 35, 8] or other structural insights [4]. However, algorithmic considerations were also made for specific (unbalanced) games such as showing hardness for computing the price of stability of general weighted voting games [3], and showing hardness for computing the price of stability of threshold network flow games [39]. Moreover, Aziz, Brandt and Harrenstein [1] give several results on the computational complexity of computing the cost of stability (and other measures) for several combinatorial games such as weighted graph, voting or matching games, as well as their threshold versions. One of the few papers which considers the impact of restrictions on possible coalition formation in relation to algorithmic questions, and in that respect also related to our work, is by Chalkiadakis, Greco and Markakis [10]. In the spirit of Myerson [36], they assume that the formation of coalitions is restricted by a so-called interaction graph, and analyze how the computational complexity of several core-related concepts such as core membership, core non-emptiness, or cost of stability depends on the structure of that graph. Under different assumptions on polynomial-time compact representations of the underlying game, their results include hardness as well as tractability results that depend on the interaction graph. Their results also imply hardness of computing the cost of stability for _arbitrary_ subadditive (cost) games. Also approximations of \(\varepsilon_{\mathrm{m}}^{*}\) for the multiplicative \((1+\varepsilon)\)-core and corresponding allocations have been obtained, e.g., for the symmetric traveling salesperson game by Faigle et al. [16], and for the asymmetric case also by Blaser et al. [6]. There are also papers that attack the problem from a mathematical optimization and computational point of view. Under the name "optimal cost share problem" (OCSP), Caprara and Letchford [9] suggest how to obtain \(\gamma\)-core solutions for a generalization of certain combinatorial games, named integer minimization games, using column or row generation techniques. Under the name "optimal cost allocation problem" (OCAP), also Liu, Qi and Xu [28] follow the line of research initiated by [9] and give computational results using Lagrangian relaxations. A related line of research [29] is to consider the strong \(\varepsilon\)-core relaxation parameterized by \(\varepsilon\) as given by the function \[\omega(\varepsilon)\coloneqq\min_{x\in\mathbb{R}^{n}}\{c(N)-x(N):x(S)\leq c(S)+\varepsilon\ \forall S\subsetneqq N\}\,,\] and to approximate it computationally. This so-called "penalty-subsidy function" [29] is further studied in another variant in a follow-up paper [30], there approximating it using Lagrangian relaxation techniques, and with computational results specifically for traveling salesperson games. Also the problem to compute allocations in the least core has been considered in the literature.
For cooperative games with submodular cost functions, it can be computed in polynomial time [12], while for supermodular cost cooperative games it is NP-hard to compute, and even hard to approximate [40]. Specifically relevant for our work are results by Faigle et al. [19] who show hardness to compute a cost allocation in the so-called \(f\)-least core for minimum cost spanning tree games, which is a tightening of the core constraints to \(x(S)\leq c(S)-\varepsilon f(S)\) for certain non-negative functions \(f\). As we will argue, their result also implies hardness of computing optimal almost core allocations for the class of minimum cost spanning tree games. ## 4 Computational Complexity Considerations In this section we investigate the computational complexity of optimization problems related to the (non-negative) core and almost core. To capture results for the general and the nonnegative case, we consider linear optimization over the polyhedra \[AC_{(N,c)}\quad\text{ and }\quad P_{(N,c)}\coloneqq\{x\in\mathbb{R}^{n}:x(S) \leq c(S)\ \forall S\subseteq N\}.\] as well as optimization over \(P_{(N,c)}\cap\mathbb{R}^{n}_{\geq\,0}\) and \(AC_{(N,c)}\cap\mathbb{R}^{n}_{\geq\,0}\) for families of games \((N,c)\). Note that if the core is non-empty then it is the set of optimal solutions when maximizing \(\mathbb{1}\cdot x\) over \(P_{(N,c)}\). Also note that whenever the core of a game \((N,c)\) is empty, this means that the constraint \(x(N)\leq c(N)\) is implied by the set of constraints \(x(S)\leq c(S)\), \(S\subsetneqq N\), which in turn implies \(P_{(N,c)}=AC_{(N,c)}\). For games with non-empty core, we get the following correspondence between the optimization problems for the two polyhedra. **Theorem 2**.: _For a family of games \((N,c)\), linear optimization problems over \(AC_{(N,c)}\) can be solved in polynomial time if and only if linear optimization problems over \(P_{(N,c)}\) can be solved in polynomial time._ Proof.: In order to prove the result we make use of the equivalence of optimization and separation [23, 25, 37]. This means, we only need to show that we can solve the separation problem for \(P_{(N,c)}\) if and only if we can solve the separation problem for \(AC_{(N,c)}\). Since \(P_{(N,c)}=\{x\in AC_{(N,c)}:x(N)\leq c(N)\}\) holds, separation over \(P_{(N,c)}\) reduces to separation over \(AC_{(N,c)}\) plus an explicit check of a single inequality. Hence, it remains to show how to solve the separation problem for \(AC_{(N,c)}\). For given \(\hat{x}\in\mathbb{R}^{n}\), we construct \(n\) points \(\hat{x}^{k}\in\mathbb{R}^{n}\) (\(k=1,2,\ldots,n\)) which are copies of \(\hat{x}\) except for \(\hat{x}^{k}_{k}\coloneqq\min(\hat{x}_{k},c(N)-\sum_{i\in N\setminus\{k\}}\hat{x }_{i})\). Note that by construction \(\hat{x}^{k}\leq\hat{x}\) and \(\hat{x}^{k}(N)\leq c(N)\) hold. We then query a separation oracle of \(P_{(N,c)}\) with each \(\hat{x}^{k}\). Suppose such a query yields \(\hat{x}^{k}(S)>c(S)\) for some \(S\subseteq N\). Due to \(\hat{x}^{k}(N)\leq c(N)\) we have \(S\neq N\). Moreover, \(\hat{x}\geq\hat{x}^{k}\) implies \(\hat{x}(S)>c(S)\), and we can return the same violated inequality. Otherwise, we have \(\hat{x}^{k}\in P_{(N,c)}\) for all \(k\in N\) and claim \(\hat{x}\in AC_{(N,c)}\). To prove this claim we assume that, for the sake of contradiction, \(\hat{x}(S)>c(S)\) holds for some \(S\subsetneqq N\). Let \(k\in N\setminus S\). Since \(\hat{x}^{k}_{i}=\hat{x}_{i}\) holds for all \(i\in S\), we have \(\hat{x}^{k}(S)=\hat{x}(S)>c(S)\). 
This contradicts the fact that \(\hat{x}^{k}\in P_{(N,c)}\) holds. It turns out that almost the same result is true when we also require that there are no subsidies, that is \(x\geq 0\). For linking the non-negative core to the non-negative almost core, it requires an assumption on the characteristic function. \[c(N\setminus\{k\})\leq c(N)\qquad\forall k\in N. \tag{3}\] This condition holds, for instance, for monotone functions \(c\), and implies that the core is contained in \(\mathbb{R}^{n}_{\geq\,0}\) (see Lemma 2 and Theorem 1 in [14]). **Theorem 3**.: _For a family of games \((N,c)\) satisfying (3), linear optimization problems over \(AC_{(N,c)}\cap\mathbb{R}^{n}_{\geq\,0}\) can be solved in polynomial time if and only if linear optimization problems over \(P_{(N,c)}\cap\mathbb{R}^{n}_{\geq\,0}\) can be solved in polynomial time._ The proof is a rather straightforward extension of that of Theorem 2, additionally making use of condition (3) to guarantee nonnegativity. We obtain an immediate consequence from these two theorems. **Corollary 1**.: _For a family of games \((N,c)\) for which \(c(\,\cdot\,)\) is submodular (and (3) holds) one can find a (non-negative) optimal almost core allocation in polynomial time._ Proof.: For submodular \(c(\,\cdot\,)\) one can optimize any linear objective function over \(P_{(N,c)}\) using the Greedy algorithm [15]. The result follows from Theorems 2 and 3. These results only make statements about optimizing arbitrary objective vectors over these polyhedra. In particular we cannot draw conclusions about hardness of the computation of an almost core allocation, which is maximizing \(\mathbb{1}\cdot x\) over \(AC_{(N,c)}\). However, it is easy to see that this problem cannot be easier than deciding non-emptiness of the core, as the core of a game \((N,c)\) is non-empty if and only if the almost core optimum is at least \(c(N)\). Hence we immediately get the following. **Theorem 4**.: _Consider a family of games \((N,c)\) for which deciding (non-)emptiness of the core is \(\mathsf{NP}\)-hard. Then an efficient algorithm to compute an optimal almost core allocation cannot exist, unless \(\mathsf{P}=\mathsf{NP}\)._ It is well known that there exist games for which it is \(\mathsf{NP}\)-hard to decide non-emptiness of the core, e.g., the weighted graph game [13], or the unrooted metric traveling salesperson game [9]. Hence, we cannot hope for a polynomial-time algorithm that computes an optimal almost core allocation for arbitrary games. In contrast, the maximization of \(x(N)\) becomes trivial for games \((N,c)\) with superadditive characteristic function \(c(\,\cdot\,)\), as the set of constraints \(x(\{i\})\leq c(\{i\})\), \(i=1,\ldots,n\), already imply all other constraints \(x(S)\leq c(S)\), \(S\subseteqq N\), and one can simply define \(x_{i}\coloneqq c(\{i\})\). In particular, \(x(N)\leq c(N)\) is implied and \(P_{(N,c)}=AC_{(N,c)}\). Generalizing, the same is true for classes of games where a polynomial number of constraints can be shown to be sufficient to define the complete core. As an example we mention _matching games_ in undirected graphs [26], where the core is completely described by the polynomially many core constraints induced by all edges of the underlying graph, as these can be shown to imply all other core constraints. 
**Proposition 1**.: _Whenever \(P_{(N,c)}\) is described by a polynomial number of inequalities, finding an optimal (almost) core allocation can be done in polynomial time by linear programming._ Note that Proposition 1 includes supermodular cost functions. It is therefore interesting to note that for supermodular cost games, it is \(\mathsf{NP}\)-hard to approximate the least core value \(\varepsilon_{\mathrm{s}}^{*}\) better than a factor \(17/16\)[40]. The reason for this discrepancy is the simple fact that the least core is based on the strong \(\varepsilon\)-core, while the almost core relates to the weak \(\varepsilon\)-core as per Theorem 1. It also turns out that condition (3) implies that the value of an almost core allocation cannot exceed that of a core allocation by much. **Proposition 2**.: _Let \((N,c)\) be a game that satisfies (3). Then every \(x\in AC_{(N,c)}\) satisfies_ \[x(N)\leq\big{(}1+\tfrac{1}{n-1}\big{)}c(N)\.\] Proof.: Let \(x\in AC_{(N,c)}\). We obtain \[(n-1)\cdot x(N)= \sum_{k\in N}x(N\setminus\{k\})\] \[\leq \sum_{k\in N}c(N\setminus\{k\})\leq\sum_{k\in N}c(N)=n\cdot c(N),\] where the first inequality follows from feasibility of \(x\) and the second follows from (3). Condition (3) implies non-negativity for all core allocations and all optimal almost core allocations, as \(x(N)\geq c(N)\), so \(x_{i}\geq c(N)-c(N\setminus\{i\})\geq 0\) for all \(i\in N\). However, this does not mean that a non-negativity requirement implies that the almost core optimum is close to \(c(N)\). In the next section, we will see that this gap can be arbitrarily large (see Proposition 3). ## 5 Minimum Cost Spanning Tree Games and Approximation In this section we address a well known special class of games known as minimum cost spanning tree (MST) games [11, 5, 22]. In MST games, the agents are nodes in an edge-weighted undirected graph \(G\), and the cost of the outside option for a set of agents \(S\) is determined by the cost of a minimum cost spanning tree in the subgraph induced by these agents. MST games are known to have a non-empty core. Moreover, it is known that finding some element in the core is computationally easy and can be done by computing a minimum cost spanning tree [22]. The optimization problem that we address asks for the maximal amount that can be charged to the agents while no proper subset of agents would prefer the outside option. While in general, maximizing shareable costs is the same as asking for the maximum value \(c(N)\) that still yields a non-empty core, the question may appear a bit artificial for minimum cost spanning tree games, as there, the value \(c(N)\) is computed as the minimum cost of a spanning tree for all agents \(N\). From a practical viewpoint this can be motivated by assuming there are exogenous physical or legal restrictions that prohibit the grand coalition \(N\) to form, so that the player set \(N\) has no bargaining power. Apart from that, there is a more theoretical perspective that motivates studying the almost core for MST games. One can easily see that for MST games the cost function \(c(\,\cdot\,)\) is subadditive, yet it is not submodular in general; see [27] for a characterization when it is. Recalling that the computation of maximum shareable costs can be done in polynomial time when \(c(\,\cdot\,)\) is submodular, it is a natural question to ask if this still holds for subadditive cost functions. 
In that respect, note that the weighted graph games as studied in [13] have polynomial-time algorithms to decide non-emptiness of the core whenever \(c(\,\cdot\,)\) is subadditive. Hence Theorem 4 applied to weighted graph games does not give an answer to this question. Also the results of [10] yield that there exist subadditive cost games for which the computation of the price of stability, hence computation of the almost core optimum is hard, yet that result also relies on hardness of the problem to decide non-emptiness of the core. We next show that even for MST games with monotone cost function \(c(\,\cdot\,)\), despite always having a non-empty core, maximizing shareable costs cannot be done efficiently unless \(\mathsf{P}=\mathsf{NP}\). ### Preliminaries Let us first formally define the problem and recall what is known. We are given an edge-weighted, undirected graph \(G=(N\cup\{0\},E)\) with non-negative edge weights \(w:E\rightarrow\mathbb{R}_{\geq 0}\), where node \(0\) is a special node referred to as "supplier" node. Without loss of generality we may assume that the graph is complete by adding dummy edges with large enough cost. The agents of the game are the vertices \(N\) of the graph, and the characteristic function of the game is given by minimum cost spanning trees. That is, the cost of any subset of vertices \(S\subseteq N\) is defined as the cost of a minimum cost spanning tree on the subgraph induced by vertex set \(S\cup\{0\}\). So if we let \(\mathcal{T}(S)\) be the set of spanning trees for the subgraph induced by vertex set \(S\cup\{0\}\), then the characteristic function is defined as: \[c(S)\coloneqq\min_{T\in\mathcal{T}(S)}\left\{w(T)\right\}\,.\] Following [22], the associated monotonized minimum cost spanning tree game \((N,\bar{c})\) is obtained by defining the characteristic function using the monotonized cost function \(\bar{c}(S)\coloneqq\min_{R\supseteq S}c(R)\,.\) This is motivated by assuming that agents can also use other agents as "Steiner nodes". Indeed, note that \(\bar{c}(S)\leq\bar{c}(R)\) for \(S\subseteq R\), and for the associated cores of these two games, we have that \(C_{(N,\bar{c})}\subseteq C_{(N,c)}\). Moreover, it is well known that the core of both games is non-empty, and a core allocation \(x\in C_{(N,\bar{c})}\) is obtained in polynomial time by just one minimum cost spanning tree computation: if \(T\) is some MST, let \(e_{v}\in T\) be the edge incident with \(v\) on the unique path from \(v\) to the supplier node \(0\) in \(T\), then letting \[x_{v}\coloneqq w(e_{v})\,,\] one gets an element \(x\) in the core of the monotonized minimum cost spanning tree game \((N,\bar{c})\)[22], and hence also a core element for the game \((N,c)\). One convenient way of thinking about this core allocation is a run of Prim's algorithm to compute minimum cost spanning trees [38]: starting to build the tree with vertex \(0\), whenever a new vertex \(v\) is added to the spanning tree constructed so far, \(v\) gets charged the cost of the edge \(e_{v}\) that connects \(v\). In summary, computing _some_ core allocation can be done efficiently, while linear optimization over the core of MST games is co-NP hard (under Turing reductions) [18]. We are interested in the same questions but for the case that the budget balance constraint is absent. So we seek solutions to the almost core maximization problem \[\max x(N)\ s.t.\ x\in AC_{(N,c)}\,, \tag{4}\] when \(c(\cdot)\) is the characteristic function defined by minimum cost spanning trees. 
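To make the preceding definitions concrete, here is a small self-contained Python sketch (not from the paper) of the MST characteristic function and of the Prim-based core allocation \(x_v=w(e_v)\) described above; the example instance and all names are made up for illustration.

```python
def prim(nodes, w, root=0):
    """Prim's algorithm on {root} ∪ nodes for a complete graph with weights w.
    Returns the MST cost and, per node, the weight of the edge connecting it."""
    todo, in_tree = set(nodes), {root}
    connect, total = {}, 0.0
    while todo:
        i, j = min(((a, b) for a in in_tree for b in todo), key=lambda e: w[e])
        total += w[(i, j)]
        connect[j] = w[(i, j)]       # e_j: edge used to attach j to the tree
        in_tree.add(j)
        todo.remove(j)
    return total, connect

def c(S, w):
    """Characteristic function: MST cost on S ∪ {0}."""
    return prim(S, w)[0] if S else 0.0

# Illustrative instance on agents {1, 2, 3} with supplier node 0.
raw = {(0, 1): 1.0, (0, 2): 3.0, (0, 3): 3.0, (1, 2): 1.0, (1, 3): 2.0, (2, 3): 2.0}
w = {}
for (a, b), v in raw.items():
    w[(a, b)] = w[(b, a)] = v

agents = (1, 2, 3)
c_N, connect = prim(agents, w)
core_alloc = {v: connect[v] for v in agents}   # x_v := w(e_v), a core element by [22]
print(c_N, core_alloc, c((1, 2), w))           # 4.0 {1: 1.0, 2: 1.0, 3: 2.0} 2.0
```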
The interpretation of the lacking constraint \(x(N)=c(N)\) is that the grand coalition cannot establish the solution with cost \(c(N)\) on its own, say by legal restrictions. ### Computational Complexity As a first result, and not surprisingly, linear optimization over the almost core is hard for MST games. **Corollary 2**.: _For minimum cost spanning tree games \((N,c)\), a polynomial-time algorithm for linear optimization over \(AC_{(N,c)}\) would yield \(\mathsf{P}=\mathsf{NP}\)._ Proof.: The result follows from Theorem 2 and the fact that the membership problem for the core of \((N,c)\) is a \(\mathsf{coNP}\)-hard problem for MST games [18]. What is more interesting is that optimizing \(\mathbb{1}\cdot x\) over the almost core remains hard for MST games. **Theorem 5**.: _Computing an optimal almost core allocation in (4) for minimum cost spanning tree games is \(\mathsf{NP}\)-hard, and this is also true for monotonized minimum cost spanning tree games \((N,\bar{c})\)._ Proof.: Let \(\varepsilon^{*}\) be the largest \(\varepsilon\) for which the linear inequality system \[x(S)\leq(1-\varepsilon)c(S)\ \forall S\subsetneqq N,\quad x(N)=c(N) \tag{5}\] has a solution. In [19] it is shown that finding a feasible solution \(x\) for (5) with respect to \(\varepsilon^{*}\) is \(\mathsf{NP}\)-hard. Note that in the reduction leading to this hardness result, \(c(N)>0\). Then, given an optimum almost core allocation \(x^{\mathrm{AC}}\), \(x^{\mathrm{AC}}(N)\geq c(N)>0\), and we can obtain \(\varepsilon^{*}\coloneqq 1-c(N)/x^{\mathrm{AC}}(N)\). It is now easy to see that the vector \(x^{\prime}\coloneqq(1-\varepsilon^{*})x^{\mathrm{AC}}\) is a feasible solution for (5). To see that the so-defined \(\varepsilon^{*}\) is indeed maximal, observe that scaling any feasible vector in (5) by \(1/(1-\varepsilon^{*})\) yields an almost core allocation. Hence, computation of an almost core optimum for MST games yields a solution for an \(\mathsf{NP}\)-hard problem. To see the last claim about monotonized minimum cost spanning tree games, observe that the underlying reduction from the \(\mathsf{NP}\)-hard minimum cover problem in [19] yields a minimum cost spanning tree game that has a monotone cost function \(c(\cdot)\) by definition. Next, we note that in general, the almost core optimum may be arbitrarily larger than \(c(N)\) for MST games. This is remarkable in view of Proposition 2, which shows that under condition (3), any core allocation yields a good approximation for an optimal almost core allocation, as they differ by a factor at most \(n/(n-1)\). A fortiori, the same holds for the monotonized MST games \((N,\bar{c})\). For general MST games \((N,c)\), and without condition (3), this gap can be large. **Proposition 3**.: _The almost core optimum can be arbitrarily larger than \(c(N)\), even for minimum cost spanning tree games and when we require that \(x\geq 0\)._ Proof.: Consider the instance depicted in Figure 0(a), for some value \(k>0\). Then \(c(N)=0\), while \(x=(0,0,k)\) is an optimal non-negative almost core allocation with value \(k\). In the following we consider problem (4) but with the added constraint that \(x\geq 0\). \[\max x(N)\ s.t.\ x\in AC_{(N,c)},\,\text{and}\ x\geq 0\,. \tag{6}\] The presence of the constraint \(x\geq 0\) means that agents must not be subsidized. As can be seen from the example in Figure 0(b), such subsidies may indeed be necessary in the optimal solution to (4), as there, the only optimal solution uses cost shares \((-k,k,k)\).
However, we next show that such subsidies are in some sense an artifact of "small edge costs" in graph \(G\), and for optimization purposes can be neglected in the following sense. **Theorem 6**.: _Every instance of the almost core optimization problem (4) can be reduced in polynomial time to an instance of problem (6). Consequently, problem (6) is also \(\mathsf{NP}\)-hard for minimum cost spanning tree games._ As a matter of fact, the \(\mathsf{NP}\)-hardness also follows from the fact that for monotonized MST games, we have that \(x\geq 0\) is redundant in (6), recalling that \(x_{i}\geq c(N)-c(N\setminus\{i\})\geq 0\) for all \(i\in N\). Proof.: Given an instance of (4) with edge costs \(w\), define new edge costs \(w^{\prime}(e)\coloneqq w(e)+M\), \(e\in E\), for a large enough constant \(M\) to be defined later. Note that \(c^{\prime}(S)=c(S)+|S|\cdot M\) for \(S\subseteq N\). We argue that this renders \(x\geq 0\) redundant. Consider an optimal solution \(x^{\prime}\) to problem (6) for edge costs \(w^{\prime}\), and define \(x\coloneqq x^{\prime}-(M,\dots,M)\). Now we have \(x(S)=x^{\prime}(S)-|S|\cdot M\leq c^{\prime}(S)-|S|\cdot M=c(S)\) for all \(S\subsetneqq N\), so \(x\) is feasible for problem (4) with edge costs \(w\). We show that \(x\) must be optimal for problem (4). Considering any solution \(y\) _optimal_ for (4), there exists a number \(M\) that can be computed in polynomial time so that \(y^{\prime}\coloneqq y+(M,\dots,M)\geq 0\), and \(y^{\prime}(S)=y(S)+|S|\cdot M\leq c(S)+|S|\cdot M=c^{\prime}(S)\), so \(y^{\prime}\) is feasible for (6) with cost function \(w^{\prime}\). Hence \(y(N)>x(N)\) yields the contradiction \(y^{\prime}(N)>x^{\prime}(N)\). To argue about \(M\), observe that in (4) we maximize \(x(N)\), hence for any optimal solution \(y\) in (4), and any \(i\in N\) there exists \(S\ni i\) so that \(y(S)=c(S)\), hence \(y_{i}=c(S)-\sum_{j\in S,j\neq i}y_{j}\geq-\sum_{j\in N}c(\{j\})\), where the last inequality holds because \(c(S)\geq 0\), \(y_{j}\leq c(\{j\})\) for all \(j\in N\), and \(c(\{j\})=w(\{0,j\})\geq 0\). In other words, letting \(M\coloneqq\sum_{j\in N}c(\{j\})\) suffices so that \(y_{i}\geq-M\) for all \(i\in N\), and hence \(y^{\prime}\geq 0\) as required. Remark.: The above reduction of computing arbitrary allocations to computing non-negative allocations generalizes to all cost sharing games \((N,c)\) by defining \(c^{\prime}(S)\coloneqq c(S)+|S|\cdot M\) for all subsets \(S\subsetneqq N\). ### Two-Approximation Algorithm We next propose the following polynomial time algorithm to compute an approximately optimal almost core allocation for problem (6). For notational convenience, let us define for all \(K\subset N\) \[N_{-K}\coloneqq N\setminus K\,,\] and write \(N_{-i}\) instead of \(N_{-\{i\}}\).
```
Input: Agents \(N\), edge set \(E\) of the complete graph on \(N\cup\{0\}\) and edge weights \(w:E\to\mathbb{R}_{\geq 0}\).
Output: Almost core allocation \(x\).
1: Initialize \(I_{0}\coloneqq\{0\}\) and \(T\coloneqq\emptyset\).
2: for \(k=1,2,\ldots,n\) do
3:   Let \(i\in I_{k-1}\), \(j\in N\setminus I_{k-1}\) with minimum \(w(i,j)\) (among those \(i,j\)).
4:   Let \(I_{k}\coloneqq I_{k-1}\cup\{j\}\) and augment the tree \(T\coloneqq T\cup\{\{i,j\}\}\).
5:   Assign agent \(j\) the cost share \(x_{j}\coloneqq w(i,j)\).
6: end for
7: Let \(\ell\in I_{n}\setminus I_{n-1}\) be the last assigned agent.
8: Update agent \(\ell\)'s cost share \(x_{\ell}\coloneqq\min_{k\in N_{-\ell}}\{c(N_{-k})-x(N\setminus\{k,\ell\})\}\).
``` **Algorithm 1**Approximation algorithm for the almost core maximization problem (6) for MST games The backbone of Algorithm 1 is effectively Prim's algorithm to compute a minimum cost spanning tree [38]. The additional line 5 yields the core allocation by Granot and Huberman [22], which we extend by adding lines 7 and 8. Let us first collect some basic properties of Algorithm 1. Henceforth, we assume w.l.o.g. that the agents get assigned their cost shares in the order \(1,\ldots,n\) (so that \(\ell=n\) in lines 7 and 8). We denote by \(x^{\text{ALG}}\) a solution computed by Algorithm 1. **Lemma 1**.: _We have that \(x^{\text{ALG}}(I_{k})=c(I_{k})\) for all \(k=1,\ldots,n-1\), and for all \(S\subseteq\{1,\ldots,n-1\}\) we have \(x^{\text{ALG}}(S)\leq c(S)\)._ Figure 1: Two MST games with \(n=3\) players for insights into optimal almost core solutions. Proof.: The first claim follows directly because Algorithm 1 equals Prim's algorithm to compute a minimum cost spanning tree on the vertex set \(\{0,\ldots,n-1\}\), and \(x^{\mathrm{ALG}}(I_{k})\) equals the cost of the minimum cost spanning tree on vertex set \(\{1,\ldots,k\}\), Hence by correctness of Prim's algorithm [38], \(x^{\mathrm{ALG}}(I_{k})=c(I_{k})\). The second claim follows by [22, Thm. 3], since the cost allocation for agents \(\{1,\ldots,n-1\}\) is the same as in [22]. **Lemma 2**.: _Suppose \(x^{\mathrm{ALG}}(S)>c(S)\) for some set \(S\) with \(n\in S\subsetneqq N\). Then there is a superset \(T\supseteq S\) with \(|T|=n-1\) such that \(x^{\mathrm{ALG}}(T)>c(T)\)._ Proof.: Recall the agents got assigned their cost shares in order \(1,\ldots,n\). Define \(k\coloneqq\max\{i\mid i\notin S\}\) to be the largest index of a agent not in \(S\). Let \(i_{1},\ldots,i_{\ell}\) be the set of agents so that \(N_{-k}=S\cup\{i_{1},\ldots,i_{\ell}\}\) and w.l.o.g. \(i_{1}<\cdots<i_{\ell}\). We show that \(x^{\mathrm{ALG}}(S)>c(S)\) implies \(x^{\mathrm{ALG}}(S\cup\{i_{1}\})>c(S\cup\{i_{1}\})\). Then repeating the same argument, we inductively arrive at the conclusion that \(x^{\mathrm{ALG}}(N_{-k})>c(N_{-k})\). So observe that \[x^{\mathrm{ALG}}(S\cup\{i_{1}\})=x^{\mathrm{ALG}}(S)+x_{i_{1}}>c(S)+x_{i_{1}}\,,\] and \(c(S)\) is the cost of a minimum cost spanning tree for \(S\), call it \(\mathrm{MST}(S)\). Moreover, as \(i_{1}\neq n\), \(x_{i_{1}}\) is the cost of the edge, call it \(e\), that the algorithm used to connect agent \(i_{1}\). We claim that \(\mathrm{MST}(S)\cup\{e\}\) is a tree spanning vertices \(S\cup\{0,i_{1}\}\), hence \(c(S)+x_{i_{1}}\) is the cost of some tree spanning \(S\cup\{0,i_{1}\}\). Then, as required we get \[x^{\mathrm{ALG}}(S\cup\{i_{1}\})>c(S)+x_{i_{1}}\geq c(S\cup\{i_{1}\})\,,\] because \(c(S\cup\{i_{1}\})\) is the cost of a _minimum cost_ tree spanning \(S\cup\{0,i_{1}\}\). If \(\mathrm{MST}(S)\cup\{e\}\) was not a spanning tree for vertices \(S\cup\{0,i_{1}\}\), then edge \(e\) would connect \(i_{1}\) to some vertex outside \(S\cup\{0\}\), but this contradicts the choice of \(i_{1}\) as the vertex outside \(S\) with smallest index. **Lemma 3**.: _We have \(x^{\mathrm{ALG}}\geq 0\)._ Proof.: Recall that in minimum cost spanning tree games [11, 22], the weight of edges are non-negative. Since Algorithm 1 computes the allocation for agents in line 5 by the edge weight of the first edge on the unique path to \(0\), there is \(x_{k}^{\mathrm{ALG}}\geq 0\) for all \(k=1,2,\cdots,n-1\). So we only need to argue about \(x_{n}^{\mathrm{ALG}}\). 
To that end, note that an equivalent definition of \(x_{n}^{\mathrm{ALG}}\) in line 8 of the algorithm is \[\text{max. }x_{n}\text{ s.t. }x_{n}\leq c(N_{-k})-x^{\mathrm{ALG}}(N\setminus\{k, n\})\text{ for all }k=1,\ldots,n-1\,. \tag{7}\] We claim that \(\tilde{x}_{n}\coloneqq c(N)-c(N_{-(n-1)})\geq 0\) is a feasible solution to this maximization problem, hence the actual value of \(x_{n}^{\mathrm{ALG}}\) after the update in line 8 can only be larger, and therefore in particular it is non-negative. First, note that indeed, \(\tilde{x}_{n}\geq 0\), as this is the cost of the last edge that Prim's algorithm uses to connect the final vertex \(n\) to the minimum cost spanning tree. That \(\tilde{x}_{n}\) is feasible in (7) follows from the fact that \(\tilde{x}_{n}\) is the cost share that is assigned to agent \(n\) in the core allocation of [22]. Indeed, letting \(\tilde{x}\) be equal to \(x\) except for \(\tilde{x}_{n}=c(N)-c(N_{-(n-1)})\), we have that \(\tilde{x}\) is precisely the cost allocation as proposed in [22]. By the fact that this yields a core allocation, we have that \(\tilde{x}(S)\leq c(S)\) for all \(S\subseteq N\), so in particular for all \(k=1,\ldots,n-1\), \[\tilde{x}_{n}+x^{\mathrm{ALG}}(N\setminus\{k,n\})=\tilde{x}(N_{-k})\leq c(N_{ -k})\,,\] and hence the claim follows. **Theorem 7**.: _Algorithm 1 is a 2-approximation for the almost core maximization problem (6) for minimum cost spanning tree games, and this performance bound is tight for Algorithm 1._ Proof.: Denote by \(x^{\mathrm{ALG}}\) a solution by Algorithm 1. We first argue that Algorithm 1 yields a feasible solution. For \(S\not\ni n\), this follows from Lemma 1. For \(S\ni n\), assume \(x(S)>c(S)\). Then Lemma 2 yields that there exists some \(N_{-k}\ni n\) with \(x^{\mathrm{ALG}}(N_{-k})>c(N_{-k})\). However by definition of \(x_{n}\) in line 8 of the algorithm, we have for all \(k=1,\ldots,n-1\) \[x_{n}^{\mathrm{ALG}}\leq c(N_{-k})-x^{\mathrm{ALG}}(N\setminus\{k,n\})\,,\] which yields a contradiction to \(x^{\mathrm{ALG}}(N_{-k})>c(N_{-k})\). To show that the performance guarantee is indeed 2, let \(x^{\mathrm{OPT}}\) be some optimal solution to the almost core maximization problem. Let \(k^{*}\in N_{-n}\) be the index for which the minimum in line 8 is attained. Observe that \(x_{n}^{\mathrm{ALG}}\) is updated such that \(x^{\mathrm{ALG}}(N_{-k^{*}})=c(N_{-k^{*}})\) holds. Then by non-negativity of \(x^{\mathrm{OPT}}\) and because of Lemma 3, \[x_{n}^{\mathrm{OPT}}\leq x^{\mathrm{OPT}}(N_{-k^{*}})\leq c(N_{-k^{*}})=x^{ \mathrm{ALG}}(N_{-k^{*}})\ \leq x^{\mathrm{ALG}}(N)\,.\] Moreover, by definition of \(x^{\mathrm{ALG}}\), we have \(x^{\mathrm{ALG}}(N_{-n})=c(N_{-n})\), and by Lemma 3, \[x^{\mathrm{OPT}}(N_{-n})\leq c(N_{-n})=x^{\mathrm{ALG}}(N_{-n})\leq x^{ \mathrm{ALG}}(N)\,.\] Hence we get \(x^{\mathrm{OPT}}(N)=x_{n}^{\mathrm{OPT}}+x^{\mathrm{OPT}}(N_{-n})\leq 2x^{ \mathrm{ALG}}(N)\). To see that the performance bound 2 is tight for Algorithm 1, consider the instance in Figure 1(a). Here, Algorithm 1 computes the solution \(x^{\mathrm{ALG}}=(1,0,\varepsilon)\) with value \(1+\varepsilon\), as the order in which agents get assigned their cost shares is \(1,2,3\), and in line 8 of the algorithm we get \(x_{3}^{\mathrm{ALG}}=c(\{1,3\})-x_{1}=(1+\varepsilon)-1=\varepsilon\). An almost core optimum solution would be \(x^{\mathrm{OPT}}=(0,1,1)\) with value 2. Even though Theorem 6 suggests that the non-negativity requirement \(x\geq 0\) is irrelevant for optimization, it is important for Theorem 7. 
Without it, so allowing \(x_{i}<0\) for some agents \(i\), the above algorithm does not provide an approximation guarantee in general. To see that, consider again the instance given in Figure 0(b), and observe that Algorithm 1 yields a cost allocation \(x^{\mathrm{ALG}}=(0,0,0)\), while \(x=(-k,k,k)\) is a feasible solution for the almost core. It remains to remark that Algorithm 1 does generally _not_ compute an allocation in the almost core of the corresponding monotonized game \((N,\bar{c})\), as can be seen for the instance in Figure 1(b). Here, we have that \(c(S)=1\) for all \(S\subseteq N\) except for \(c(\{2,3\})=2\). An optimal almost core allocation is \(x=(0,1,1)\), and depending on how ties are broken, Algorithm 1 yields \(x^{\mathrm{ALG}}=(0,1,1)\) or \(x^{\mathrm{ALG}}=(1,0,0)\). The monotonized game has \(\bar{c}(S)=1\) for all \(S\subseteq N\), and then an optimal almost core allocation is \(\bar{x}=(\frac{1}{2},\frac{1}{2},\frac{1}{2})\). Note that this example also shows that Proposition 2 is tight (for \(n=3\)), as \(c(N)=1\). ## 6 Conclusions In the literature, one also finds minimum cost spanning tree games defined as _profit sharing_ games, where one defines the value of a coalition \(S\) by the cost savings that it can realize in comparison to the situation where all agents in \(S\) connect directly to the source, \[v(S)\coloneqq\sum_{i\in S}c(\{i\})-c(S)\,.\] Then the core constraints, for profit shares \(x^{v}\in\mathbb{R}^{n}\), are \(x^{v}(S)\geq v(S)\). It is not hard to see that all our results also hold for that version of the problem via the simple transformation \(x_{i}^{v}\coloneqq c(\{i\})-x_{i}\). In particular, note that for value games all feasible solutions \(x^{v}\) are non-negative, as core stability requires that \(x_{i}^{v}\geq v(\{i\})\geq 0\). Our results imply NP-hardness for computation of minimum profit shares that are coalitionally stable, and the corresponding profit version of Algorithm 1 can be shown to yield a 2-approximation, which also can be shown to be tight. Figure 2: Two MST games with \(n=3\) players for the analysis of Algorithm 1. We collect some open problems which we believe are interesting. First, we would like to gain more insight into the computational complexity for the almost core problem (1), also for other classes of games. Moreover, we could give a 2-approximation for cost MST games under the additional assumption that subsidies are not allowed. It would be interesting to extend this result to the general, unconstrained case. Also giving lower bounds on the approximability does seem plausible, as the "hard cases" for maximizing shareable costs are those where the minimum cost spanning tree is a (Hamiltonian) path. Finally, both in general and for MST games one could define an even more general class of problems in the spirit of cooperative games with restricted coalition formation, by defining a (downward-closed) set system that describes all those subsets of agents that are able to cooperate and hence have access to an outside option, while all other subsets do not have that option. This is the same basic idea as that of restricted coalition formation by Myerson [36] or Chalkiadakis et al. [10]. The almost core as studied in this paper is the special case where this set system is particularly simple, namely the \((n-1)\)-uniform matroid. ## Acknowledgements Rong Zou and Boyue Lin acknowledge the support of the China Scholarship Council (Grants No. 202006290073, 202106290010). 
The authors also thank the anonymous reviewers of an earlier draft of this paper for some constructive comments.
2309.15362
MicroBooNE Public Data Sets: a Collaborative Tool for LArTPC Software Development
Among liquid argon time projection chamber (LArTPC) experiments MicroBooNE is the one that continually took physics data for the longest time (2015-2021), and represents the state of the art for reconstruction and analysis with this detector. Recently published analyses include oscillation physics results, searches for anomalies and other BSM signatures, and cross section measurements. LArTPC detectors are being used in current experiments such as ICARUS and SBND, and being planned for future experiments such as DUNE. MicroBooNE has recently released to the public two of its data sets, with the goal of enabling collaborative software developments with other LArTPC experiments and with AI or computing experts. These data sets simulate neutrino interactions on top of off-beam data, which include cosmic ray background and noise. The data sets are released in two formats: the native art/ROOT format used internally by the collaboration and familiar to other LArTPC experts, and the HDF5 format which contains reduced and simplified content and is suitable for usage by the broader community. This contribution presents the open data sets, discusses their motivation, the technical implementation, and the extensive documentation -- all inspired by FAIR principles. Finally, opportunities for collaborations are discussed.
Giuseppe Cerati
2023-09-27T02:06:44Z
http://arxiv.org/abs/2309.15362v2
# MicroBooNE Public Data Sets: a Collaborative Tool for LArTPC Software Development ###### Abstract Among liquid argon time projection chamber (LArTPC) experiments MicroBooNE is the one that continually took physics data for the longest time (2015-2021), and represents the state of the art for reconstruction and analysis with this detector. Recently published analyses include oscillation physics results, searches for anomalies and other BSM signatures, and cross section measurements. LArTPC detectors are being used in current experiments such as ICARUS and SBND, and being planned for future experiments such as DUNE. MicroBooNE has recently released to the public two of its data sets, with the goal of enabling collaborative software developments with other LArTPC experiments and with AI or computing experts. These data sets simulate neutrino interactions on top of off-beam data, which include cosmic ray background and noise. The data sets are released in two formats: the native art/ROOT format used internally by the collaboration and familiar to other LArTPC experts, and the HDF5 format which contains reduced and simplified content and is suitable for usage by the broader community. This contribution presents the open data sets, discusses their motivation, the technical implementation, and the extensive documentation - all inspired by FAIR principles. Finally, opportunities for collaborations are discussed. ## 1 Introduction MicroBooNE [1] is a neutrino experiment at Fermilab designed to test the MiniBooNE anomaly [2] as it is located along the same Booster Neutrino Beam (BNB) and at a similar distance from the source. MicroBooNE's goals are not limited to probing this anomaly, and span a broader experimental program including tests of short-baseline oscillations as part of the Short Baseline Neutrino program (SBN) [3], searches for beyond-Standard Model particles, and the measurement of neutrino-argon interaction cross sections. MicroBooNE's physics operations took place between 2015 and 2021. To date, MicroBooNE has analyzed about half of the collected beam data and produced over 50 publications. MicroBooNE's detector [1] is a liquid argon time projection chamber (LArTPC). The working principle of the MicroBooNE LArTPC is the following: charged particles produced in neutrino-argon interactions ionize the argon as they travel in the detector active volume (\(2.56\times 2.32\times 10.36\) m). Ionization electrons drift in an electric field towards anode planes. Here, sense wires detect the incoming charge. About 8200 wires are arranged in three planes oriented in different directions (0, \(\pm\)60 degrees), allowing for 3D reconstruction of the particle trajectories with \(\mathcal{O}\)(mm) spatial resolution. The amplitude of the signal detected on the wires provides calorimetric information for energy measurements. The fast scintillation light emitted by the argon is detected by the optical system, made of 32 photo-multiplier tubes (PMTs), and is used for triggering and cosmic rejection. In this document we describe the release of MicroBooNE data sets for the purpose of collaborative software development. This release is motivated by the following arguments:
* Establish MicroBooNE as state of the art LArTPC technology. This is already attested by our publication record, but public data sets provide a direct reference point for any LArTPC software development.
* Efficient collaboration of members of MicroBooNE with colleagues in other LArTPC experiments, as well as with computer scientists. Until this data release, software development collaborations were required to have an approved memorandum of understanding to share data sets outside the Collaboration or to use other public data sets. Being able to use MicroBooNE data sets implies that the output of external collaborations is directly usable within MicroBooNE without further tuning. * Potentially attract developments from beyond our community, through public initiatives such as data challenges. The next sections describe the technical implementation of the data release, as well as the relative documentation. Finally, conclusions and future prospects are discussed. ## 2 Implementation of open samples With this data release, MicroBooNE aims at reaching out to the largest possible set of developers and at enabling the widest range of applications. In the process, we therefore followed as much as possible the FAIR principles for scientific data management (findable, accessible, interoperable, reusable data). For more information on these principles, see e.g. ref. [4]. The MicroBooNE open samples are advertized from the MicroBooNE website1, and are made available on the Zenodo open data repository. The MicroBooNE website contains a brief description of the data set, links to Zenodo and to documentation, and information about license and citation. Zenodo, provides citable DOI (digital object identifier) and versioning. Samples are made available under the "cc-by" license, allowing users to utilize the data in any way, including modifying and redistributing, as long as credit is given to the original authors. A suggested text for acknowledging the Collaboration is provided. The Collaboration requests that, whenever possible, resulting software products are also made publicly available, although this is not required by the license. Footnote 1: [https://microboone.fnal.gov/documents-publications/public-datasets/](https://microboone.fnal.gov/documents-publications/public-datasets/) The data sets release consists of "overlay" samples, where with overlay we refer to events from off-beam data taking with an overlaid simulated neutrino interaction. These events provide data-driven cosmic ray background and noise, as well as Monte Carlo truth information for the neutrino interaction. Neutrino interactions in the open samples are either the inclusive set of neutrino interactions as expected from the BNB nominal flux or the subset of charged-current electron neutrino interactions in the BNB. We will refer to these as "inclusive BNB" and "intrinsic \(\nu_{\mathrm{e}}\)", respectively. Inclusive BNB interactions are simulated in the full cryostat volume, while intrinsic \(\nu_{\mathrm{e}}\) interactions are simulated in the LArTPC active volume. The data is released in two formats. The first one is the art/ROOT data format [5; 6]. These are the same files used within the collaboration, thus making available to the public all reconstructed and simulated data products. This data format targets HEP physicists, and in particular the LArTPC community, that are likely already familiar with this data format. art/ROOT files are stored on a dedicated persistent dCache pool area that is accessible with xrootd [7] without requiring any virtual organization credentials. The list of xrootd urls is stored on Zenodo. The second format is HDF5 [8], targeting usage by the broader data and computer science communities. 
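Scripted access to the art/ROOT samples follows directly from the setup just described. The snippet below is only a sketch of ours: the URL is a placeholder standing in for one entry of the URL list published on Zenodo, it assumes the XRootD client (`xrdcp`) is installed, and interpreting the art data products afterwards still requires the LArSoft/uboonecode environment covered by the documentation.

```python
# Minimal sketch: copy one art/ROOT file locally over xrootd.
import subprocess

# Placeholder for an entry of the xrootd URL list stored on the Zenodo record.
url = "root://fndca1.fnal.gov:1094/pnfs/uboone/path/to/overlay_file.root"

# xrdcp ships with the XRootD client; per the text above, no virtual
# organization credentials are required for this persistent dCache pool.
subprocess.run(["xrdcp", url, "overlay_file.root"], check=True)
```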
HDF5 files include a subset of the art/ROOT information, with a simplified layout for ease of use. Nevertheless, this data format contains the most useful information and is designed to allow a wide range of applications. The following information is stored in the HDF5 files: 1. Noise-filtered and deconvolved wire waveforms in regions of interest. 2. LArTPC hit information. 3. Optical hit and flash information. 4. Monte Carlo truth information (incoming neutrino properties, energy deposits as associated to hits, Geant4 [9] particles). In addition, we provide information for the purpose of benchmarking new developments against the state-of-the-art reconstruction performance provided by the Pandora multi-algorithm pattern recognition toolkit [10]. These include the interaction and cluster hit mapping, a multivariate track-shower classification, and the neutrino flavor identification. HDF5 files are stored on Zenodo, at the same DOI as the xrootd urls of the corresponding art/ROOT data set. Each HDF5 sample comes in two flavors: with and without the wire waveform information listed as item 1 above. Due to size requirements, samples with this information contain fewer events. A summary of the samples can be found in table 1. ## 3 Documentation Documentation is of utmost importance for an open data release, as it allows external users to become familiar with the content of the data and to understand which applications it can be used for. The art/ROOT format targets users from the LArTPC community, i.e. physicists already familiar with the LArSoft [11] software environment. Therefore, the documentation for this format assumes prior knowledge of LArSoft-related tools, and consists of: * A description of the samples and list of data products stored2. Footnote 2: [https://github.com/uboone/OpenSamples/blob/v01/file-content-artroot.md](https://github.com/uboone/OpenSamples/blob/v01/file-content-artroot.md) * Links to websites with documentation about the related software tools (LArSoft, xrootd, etc.). * A recipe to set up the software release (uboonecode and LArSoft) from CVMFS. * A link to the module for creating HDF5 files, providing an example of how to access the art/ROOT content. \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline Interaction & DOI & Events & & \multicolumn{2}{c}{HDF5} & \multicolumn{2}{c}{art/ROOT} \\ & & & Wire & Files & Size & Files & Size \\ \hline Inclusive BNB & 10.5281/zenodo.7261798 & 141,260 & No & 20 & 34 GB & 3400 & 787 GB \\ Inclusive BNB & 10.5281/zenodo.7262009 & 24,332 & Yes & 18 & 44 GB & 720 & 136 GB \\ Intrinsic \(\nu_{\mathrm{e}}\) & 10.5281/zenodo.7261921 & 89,339 & No & 20 & 31 GB & 2151 & 761 GB \\ Intrinsic \(\nu_{\mathrm{e}}\) & 10.5281/zenodo.7262140 & 19,940 & Yes & 20 & 39 GB & 540 & 170 GB \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of MicroBooNE Open Data Sets. The column “Wire” refers to whether the wire waveform information is stored in the HDF5 files or not. Documentation for the HDF5 data sets instead needs to be more detailed, as this format targets users not necessarily familiar with LArTPC experiments and the software they use. We chose to document the usage of this format mainly through a demonstration with a few Jupyter notebooks3. These notebooks are described in the subsection below. A recipe for installing the required packages in a Conda environment to run the notebooks is provided. 
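Once an HDF5 file has been downloaded from Zenodo, its layout can be inspected with generic tools before turning to the notebooks. A minimal sketch of ours, assuming only h5py; the file name is a placeholder and no group names are hard-coded, since the authoritative layout is the file-content table in the documentation.

```python
import h5py

path = "microboone_open_sample.h5"   # placeholder file name

with h5py.File(path, "r") as f:
    # Print every dataset in the file with its shape and dtype, which is a
    # quick way to map the stored quantities onto the documented content.
    def describe(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"{name:60s} shape={obj.shape} dtype={obj.dtype}")
    f.visititems(describe)
```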
This environment has minimal dependencies so that users can easily install it and add any other package they may need for their applications on top. Dependencies include pynuml4 for file I/O handling. All notebooks contain a brief introduction to clarify their purpose. We also provide a set of auxiliary tools, such as functions for basic detector navigation and minimal plotting utilities. Documentation also includes a description of the file content, formatted as a table with a brief explanation of each element stored in the data set. Footnote 3: [https://github.com/uboone/OpenSamples/tree/v01](https://github.com/uboone/OpenSamples/tree/v01) Footnote 4: [https://github.com/viewes/pynuml](https://github.com/viewes/pynuml) ### Notebooks Demonstrating Usage of HDF5 Files The first notebook, "Sample Exploration", is meant to help the user become familiar with the sample content and with the tools provided to understand the detector properties. As shown in figure 1, the notebook demonstrates how to determine the wire positions and their intersections, as well as properties of the neutrino interaction such as the interaction position (vertex) in the cryostat and the multiplicities of simulated particles. Figure 1: Example of plots from the "Sample Exploration" notebook: the detector positions of wires in each plane, displaying one wire every 200 (a), position of the simulated neutrino interaction vertex in the plane transverse to the beam (b), number of protons with momentum above 300 MeV produced in BNB neutrino interactions (c). The second notebook, "Hit Labeling", provides examples of ground-truth labels of TPC hits according to different categorizations. Each categorization can be the target of specific algorithms or network training. These categorizations are the neutrino identification (hits from the neutrino interaction vs from noise or cosmic ray background), semantic segmentation (categorization of hits based on the type of particle that produced them), and instance segmentation (labeling of hits according to the particle instance that produced them). Examples are shown in figure 2. The "WireImage" notebook is the only one that requires HDF5 files with the wire waveform information. This notebook demonstrates the LArTPC data visualization in image format. It can be used for visual data processing methods, such as Convolutional Neural Networks (CNN). Examples of CNNs developed by the MicroBooNE Collaboration can be found at refs [12, 13, 14, 15]. Ground truth at waveform level is not directly provided, but the notebook demonstrates how it can be extracted by matching the waveform with the hit-level ground truth information. Example images of the wire waveform and of the extracted ground truth for a specific event are shown in figure 3. The purpose of the "Pandora metrics" notebook is to introduce the definition of important metrics used to benchmark the reconstruction software performance, and to produce results for these metrics using Pandora. As shown in figure 4, these metrics include the neutrino vertex position resolution, as well as purity and completeness at the level of the neutrino interaction or at the level of particle instances. Purity and completeness are respectively defined as the fraction of reconstructed or true hits correctly identified. Finally, as the other notebooks focused on the LArTPC information, the "Optical Information" notebook demonstrates the usage of the optical detector information. 
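The purity and completeness definitions quoted above reduce to a few lines of array code. The sketch below is not taken from the notebooks; the two boolean arrays (whether a reconstructed hit is correctly identified, and whether a true hit was found by the reconstruction) are hypothetical inputs used only to make the definitions concrete.

```python
import numpy as np

def purity_completeness(reco_is_correct, true_hit_found):
    """Purity: fraction of reconstructed hits correctly identified.
    Completeness: fraction of true hits correctly identified.
    Both inputs are boolean arrays with hypothetical names."""
    purity = np.count_nonzero(reco_is_correct) / len(reco_is_correct)
    completeness = np.count_nonzero(true_hit_found) / len(true_hit_found)
    return purity, completeness

# toy example: 90 of 100 reconstructed hits are truly neutrino-induced,
# and 90 of 120 true neutrino hits were picked up by the reconstruction
reco_is_correct = np.array([True] * 90 + [False] * 10)
true_hit_found = np.array([True] * 90 + [False] * 30)
print(purity_completeness(reco_is_correct, true_hit_found))  # (0.9, 0.75)
```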
This notebook shows how to access optical hit properties, and how the MicroBooNE optical reconstruction clusters them in time into "flash" objects. It also shows how to use the light information to help identify the neutrino hits in the TPC, e.g. by comparing the flash barycenter with the one from the neutrino TPC hits. Examples are shown in figure 5. Figure 2: Example of plots from the "Hit Labeling" notebook: categorization of hits in the 3 planes based on whether they originate from a neutrino or cosmic ray interaction (a), categorization of neutrino hits in different particle instances (b). Figure 3: Example of plots from the "WireImage" notebook: for the same event as in figure 2, the wire waveform image for plane 0 (a) and the corresponding ground truth in terms of neutrino identification (b) are shown. Figure 4: Example of plots from the "Pandora metrics" notebook: the purity and completeness for the neutrino interaction identification (a) and for the reconstruction and clustering of hits from photons (b), resolution for the neutrino vertex \(x\) coordinate (c). Figure 5: Example of plots from the "Optical Information" notebook: the time of the optical hits with respect to the beam trigger (a), the difference in wire units between the barycenter of the optical flash and the one of the neutrino hits as identified by Pandora (b), display of the 32 PMT detectors with size and color weighted by the number of photoelectrons in a specific event, as well as the flash barycenter and its width (c). ## 4 Conclusions and Outlook MicroBooNE has released data sets for collaborative software development, and made them available on Zenodo and via xrootd. A wide range of applications can be developed based on these samples, with data formats for both image processing (e.g. CNN) and hit-based algorithms (GNN or other traditional algorithms). The size of the sample is large enough for neural network training, and examples for extracting the ground truth for supervised learning are provided. The rich documentation for the usage of these data sets includes a set of performance metrics with reference results from the Pandora algorithms. The statistics on the Zenodo website indicate that the data sets have been downloaded hundreds of times already. The first developments based on these samples were presented at the conference. Since the released sample is a subset of what is internally available, a larger BNB inclusive sample may be released in the future to further increase the number of events available for training of neural networks.
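As a toy version of the flash-to-TPC comparison performed in the "Optical Information" notebook above, the sketch below computes a photoelectron-weighted flash barycenter and a charge-weighted barycenter of candidate neutrino hits and takes their difference; every array and the cm-to-wire conversion are hypothetical placeholders, not values from the release.

```python
import numpy as np

def weighted_barycenter(positions, weights):
    """Weighted mean position (e.g. PMT z weighted by photoelectrons, or
    collection-plane wire number weighted by hit charge)."""
    positions = np.asarray(positions, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(positions * weights) / np.sum(weights))

rng = np.random.default_rng(1)
pmt_z = rng.uniform(0.0, 1036.0, size=32)      # hypothetical PMT z positions [cm]
pmt_pe = rng.exponential(10.0, size=32)        # photoelectrons per PMT in the flash
hit_wire = rng.uniform(3000, 3200, size=500)   # hypothetical neutrino-hit wires
hit_charge = rng.exponential(50.0, size=500)   # hit charges

wire_pitch_cm = 0.3                            # placeholder cm -> wire conversion
flash_wire = weighted_barycenter(pmt_z, pmt_pe) / wire_pitch_cm
nu_wire = weighted_barycenter(hit_wire, hit_charge)
print("flash - hits barycenter difference [wires]:", flash_wire - nu_wire)
```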
2309.03500
Subdivision schemes based on weighted local polynomial regression. A new technique for the convergence analysis
The generation of curves and surfaces from given data is a well-known problem in Computer-Aided Design that can be approached using subdivision schemes. They are powerful tools that allow obtaining new data from the initial one by means of simple calculations. However, in some applications, the collected data are given with noise and most schemes are not adequate to process them. In this paper, we present some new families of binary univariate linear subdivision schemes using weighted local polynomial regression. We study their properties, such as convergence, monotonicity, polynomial reproduction and approximation and denoising capabilities. For the convergence study, we develop some new theoretical results. Finally, some examples are presented to confirm the proven properties.
Sergio López-Ureña, Dionisio F. Yáñez
2023-09-07T06:17:50Z
http://arxiv.org/abs/2309.03500v1
# Subdivision schemes based on weighted local polynomial regression. ###### Abstract The generation of curves and surfaces from given data is a well-known problem in Computer-Aided Design that can be approached using subdivision schemes. They are powerful tools that allow obtaining new data from the initial one by means of simple calculations. However, in some applications, the collected data are given with noise and most of schemes are not adequate to process them. In this paper, we present some new families of binary univariate linear subdivision schemes using weighted local polynomial regression. We study their properties, such as convergence, monotonicity, polynomial reproduction and approximation and denoising capabilities. For the convergence study, we develop some new theoretical results. Finally, some examples are presented to confirm the proven properties. keywords: Weighted-least squares method, binary linear subdivision, noisy data, convergence criteria. + Footnote †: journal: Computer Science ## 1 Introduction In past years, many techniques have been designed and developed in order to construct curves or surfaces with some properties such as polynomial reproduction or monotonicity-preservation. For example, splines, non-uniform rational B-splines (NURBS) and others (see, e.g. [2; 8]). In this context, linear subdivision schemes appears as useful and efficient instruments due to their simple computation (see e.g. [7; 17; 22]). They consist in obtaining new points from given data using refinement operators and can be classified depending on such operators: if a single operator is used for all the iterations, then the subdivision scheme is called stationary or level-independent (see e.g. [7; 14]), otherwise it is denominated non-stationary or level dependent (see e.g. [9; 10; 16]). They are also classified by the linearity of the operators (see e.g. [12; 13]). There is a vast literature on the generation of subdivision schemes and the study of their properties. An essential property is convergence, which means that the process converges uniformly to a continuous function, for any initial values. Deslauriers and Dubuc, in [11], analysed that the scheme based on centered Lagrange interpolation is convergent using Fourier transform techniques to prove it. One of the most common studied properties is the reproduction of polynomials, i.e., if the given data are point-values of a polynomial, then the subdivision scheme generates more point-values of such polynomial. This is studied in detail in [15]. Its study is interesting since the reproduction is linked with convergence properties and the approximation capability of the scheme. In some real applications, the given data come from measures that are contaminated by noise and, as a consequence, a suitable subdivision scheme should be used to converge to an appropriate limit function. To this purpose, Dyn et al. in [14] propose a new linear scheme based on least-square methods where the noise is reduced by applying the scheme several times. These schemes are determined by two parameters \(m\) and \(d\) with \(d<m\): For each \(m\) consecutive data values, \((y_{1},\ldots,y_{m})\), attached to some equidistant knots \((x_{1},\ldots,x_{m})\), a polynomial regression is performed. 
The search is constrained to polynomials of degree \(d\) and leads to a unique solution, \(\hat{p}\), that minimizes the regression error concerning the \(\ell^{2}\)-norm (least-squares): \[\hat{p}=\operatorname*{arg\,min}_{p\in\Pi_{d}(\mathbb{R})}\sum_{l=1}^{m}(y_{l }-p(x_{l}))^{2}. \tag{1}\] The subdivision refinement rules can be obtained by evaluating \(\hat{p}\) at a certain point, which, in this work, is assumed to be \(0\) without loss of generalization. The resulting schemes are linear, which implies some benefits and drawbacks. In [14], the convergence is proved for \(d=0,1\), as well as some properties such as polynomials reproduction. In many applied situations, the location of the data is relevant to obtain the approximation, hence a weight function is considered to assign values depending on the distance from the knots \(x_{l}\) to \(0\). These methods, as Shepard's algorithm (see [24]), are called moving least squares (see [20]). In [4; 5], the weighted local polynomial regression (WLPR) was used to design a _prediction operator_ for a multiresolution algorithm, leading to good results on image processing when the data was contaminated with some noise. Prediction operators can be considered subdivision operators and their properties can be studied [9]. In this paper, we study the family of subdivision schemes based on the prediction operators in [4] and develop a new technique to study their convergence based on some asymptotic behaviour. Also, some properties such as polynomial reproduction, the Gibbs phenomenon in discontinuous data, monotonicity preservation and denoising and approximation capabilities are analysed. We provide some examples to check the theoretical results. The paper is organized as follows: Firstly, we briefly review the classical components of linear subdivision schemes with the aim to be self-contained in this work. In Section 3, we explain the WLPR and define a general form, leading to new subdivision schemes definitions. Afterward, we study different properties in some particular cases: Starting with \(d=0,1\), we analyse the convergence, the smoothness of the limit functions, the monotonicity preservation and the Gibbs phenomenon when the initial data present large gradients. In Section 5, we develop a new technique to study the convergence of a family of schemes and apply it to the case \(d=2,3\). We analyse the approximation and noise reduction capabilities of the new schemes in Sections 7 and 8. Finally, some numerical experiments are performed to confirm the theoretical properties, in Section 9, and some conclusions and future work are proposed. ## 2 Preliminaries: A brief review of linear subdivision schemes Let us denote by \(\ell_{\infty}(\mathbb{Z})\) the set of bounded real sequences with indices in \(\mathbb{Z}\). A _linear binary univariate subdivision operator_\(S_{\mathbf{a}}:\ell_{\infty}(\mathbb{Z})\to\ell_{\infty}(\mathbb{Z})\) with finitely supported _mask_\(\mathbf{a}=\{a_{l}\}_{l\in\mathbb{Z}}\subset\mathbb{R}\) is defined to refine the data on the level \(k\), \(\mathbf{f}^{k}=\{f^{k}_{j}\}_{j\in\mathbb{Z}}\in\ell_{\infty}(\mathbb{Z})\), as: \[f^{k+1}_{2j+i}:=(S_{\mathbf{a}}\mathbf{f}^{k})_{2j+i}:=\sum_{l\in\mathbb{Z}}a _{2l-i}f^{k}_{j+l},\quad j\in\mathbb{Z},\quad i=0,1. \tag{2}\] In this work, we only consider level-independent subdivision schemes, meaning that the successive application of a unique operator \(S_{\mathbf{a}}\) constitutes the _subdivision scheme_. 
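Rule (2) is straightforward to transcribe into code. The following is a minimal sketch of ours (not from the paper): the mask is stored as a dictionary index \(\to\) coefficient, and one refinement step produces only the new values whose full stencil is available, so that no boundary rule has to be chosen.

```python
def refine(f, mask):
    """One step of (2): f^{k+1}_{2j+i} = sum_l a_{2l-i} f^k_{j+l}.
    `f` and `mask` are dicts index -> value; a new value is produced only
    when its whole stencil is available."""
    out = {}
    for j in f:
        for i in (0, 1):
            total, ok = 0.0, True
            for m, a in mask.items():
                if (m + i) % 2:        # keep exactly the coefficients a_{2l-i}
                    continue
                l = (m + i) // 2
                if j + l in f:
                    total += a * f[j + l]
                else:
                    ok = False
                    break
            if ok:
                out[2 * j + i] = total
    return out

dd = {-1: 0.5, 0: 1.0, 1: 0.5}                # the mask [1, 2, 1]/2
f0 = {j: float(j) for j in range(-4, 5)}      # f^0_j = j
print(refine(f0, dd))                         # every produced value equals (index)/2
```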
Hence, we will refer to \(S_{\mathbf{a}}\) as the subdivision scheme as well. The _binary_ adjective refers to the two formulas/rules of (2) (corresponding to \(i=0\) and \(i=1\)) which are characterized by the _even_ mask \(\mathbf{a}^{0}=\{a_{2l}\}_{l\in\mathbb{Z}}\) and the _odd_ mask \(\mathbf{a}^{1}=\{a_{2l-1}\}_{l\in\mathbb{Z}}\). It is called _length_ of a mask to the number of elements that are between the first and the last non-zero elements, both included. _Remark 2.1_.: If a linear subdivision scheme is applied to some data \(\widetilde{\mathbf{g}}=\{G(j)+\epsilon_{j}\}_{j\in\mathbb{Z}}\), where \(G\) is a smooth function and \(\boldsymbol{\epsilon}=\{\epsilon_{j}\}_{j\in\mathbb{Z}}\) is random data, also called _noise_, the result is \[S_{\mathbf{a}}\widetilde{\mathbf{g}}=S_{\mathbf{a}}\mathbf{g}+S_{\mathbf{a}} \boldsymbol{\epsilon},\] which implies that we can study separately the smooth and the pure noisy cases. If we apply these rules recursively to some initial data \(\mathbf{f}^{0}\), it is desirable that the process converges to a continuous function, in the following sense. **Definition 1**.: A subdivision scheme \(S_{\mathbf{a}}\) is _uniformly convergent_ if for any initial data \(\mathbf{f}^{0}\in\ell_{\infty}(\mathbb{Z})\), there exists a continuous function \(F:\mathbb{R}\to\mathbb{R}\) such that \[\lim_{k\to\infty}\sup_{j\in\mathbb{Z}}|(S_{\mathbf{a}}^{k}\mathbf{f}^{0})_{j}- F(2^{-k}j)|=0.\] Then, we denote by \(S_{\mathbf{a}}^{\infty}\mathbf{f}^{0}=F\) to the limit function generated from \(\mathbf{f}^{0}\). We write \(S_{\mathbf{a}}\in\mathcal{C}^{d}\) if all the limit functions have such smoothness, \(S_{\mathbf{a}}^{\infty}\mathbf{f}^{0}\in\mathcal{C}^{d}\), \(\forall\mathbf{f}^{0}\in\ell_{\infty}(\mathbb{Z})\). A usual tool for the analysis of linear schemes is the _symbol_, that we define as follows. **Definition 2**.: The _symbol_ of a subdivision scheme \(S_{\mathbf{a}}\) is the Laurent polynomial \(a(z)=\sum_{j\in\mathbb{Z}}a_{j}z^{-j}\). We can determine if a subdivision scheme is convergent depending on the sum of the absolute values of some even and odd masks. Therefore, we use the norm of the operator \(S_{\mathbf{a}}\). **Lemma 2.1**.: _The norm of \(S_{\mathbf{a}}:\ell_{\infty}(\mathbb{Z})\to\ell_{\infty}(\mathbb{Z})\), as a linear endomorphism in the space of bounded sequences, is the maximum between \(\|\mathbf{a}^{0}\|_{1}\) and \(\|\mathbf{a}^{1}\|_{1}\):_ \[\|S_{\mathbf{a}}\|_{\infty}=\max_{i=0,1}\{\sum_{j\in\mathbb{Z}}|a_{2j-i}|\}= \max\{\|\mathbf{a}^{0}\|_{1},\|\mathbf{a}^{1}\|_{1}\}.\] According to the Definition 5.1 of [15], a subdivision scheme \(S_{\mathbf{a}}\) is _odd-symmetric_ if \(a_{j}=a_{-j},\ \forall j\in\mathbb{Z}\), or _even-symmetric_ if \(a_{j}=a_{1-j},\ \forall j\in\mathbb{Z}.\) In terms of the symbol, these is translated as \(a(z)=a(1/z)\) or \(a(z)=za(1/z)\), respectively. The schemes, that we will construct in this paper, are odd-symmetric, but to simplify some equations, we consider a more relaxed definition of odd-symmetry and even-symmetry. **Definition 3**.: A subdivision scheme \(S_{\mathbf{a}}\) is _symmetric_ if \(a_{j}=a_{j_{0}-j},\ \forall j\in\mathbb{Z}\), for some \(j_{0}\in\mathbb{Z}\). It is _even(odd)-symmetric_ if \(j_{0}\) is odd (even). A useful property for a subdivision scheme is the reproduction of polynomials. 
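The quantities just introduced are equally easy to evaluate for a concrete mask. A small sketch of ours, again with a dictionary-based mask, computing the norm of Lemma 2.1 and checking odd-symmetry in the sense of Definition 3 (with \(j_{0}=0\)):

```python
import numpy as np

def submask_norms(mask):
    """Lemma 2.1: ||S_a|| = max(||a^0||_1, ||a^1||_1) over the even/odd sub-masks."""
    even = sum(abs(v) for m, v in mask.items() if m % 2 == 0)
    odd = sum(abs(v) for m, v in mask.items() if m % 2 != 0)
    return max(even, odd)

def is_odd_symmetric(mask):
    """Definition 3 with j_0 = 0, i.e. a_j = a_{-j} for every j."""
    return all(np.isclose(v, mask.get(-m, 0.0)) for m, v in mask.items())

dd = {-1: 0.5, 0: 1.0, 1: 0.5}                  # the mask [1, 2, 1]/2 used later
print(submask_norms(dd), is_odd_symmetric(dd))  # 1.0 True
```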
**Definition 4**.: A subdivision scheme \(S_{\mathbf{a}}\)_reproduces1\(\Pi_{d}\)_ (polynomials up to degree \(d\)) if Footnote 1: Technically, this is the definition of _step-wise reproduction_, which is a stronger condition, [15]. \[S_{\mathbf{a}}\{p(2j)\}_{j\in\mathbb{Z}}=\{p(j)\}_{j\in\mathbb{Z}},\qquad \forall p\in\Pi_{d}.\] A necessary condition for convergence is the reproduction of constants. The following lemma determines the relation between the mask, the symbol and the reproduction of the constants. **Lemma 2.2**.: _The following facts are equivalent:_ **(a)**: \(S_{\mathbf{a}}\) _reproduces_ \(\Pi_{0}\) _(constant functions)._ **(b)**: \(\sum_{j\in\mathbb{Z}}a_{j}^{0}=\sum_{j\in\mathbb{Z}}a_{j}^{1}=1\)_._ **(c)**: \(a(z)=(1+z)q(z)\) _for some Laurent polynomial_ \(q\)_._ _In such case, the \(S_{\mathbf{q}}\) scheme is well-defined and called difference scheme. If \(\|S_{\mathbf{q}}\|<1\), then \(S_{\mathbf{a}}\) is convergent._ There exists a direct relationship between the symmetry of \(S_{\mathbf{a}}\) and the symmetry of its difference scheme, \(S_{\mathbf{q}}\). We introduce it in the following result. **Lemma 2.3**.: _If a scheme is odd-symmetric, then its difference scheme is even-symmetric._ Proof.: It can be easily checked using the _symbols_. Next theorem by Dyn and Levin, [17], links the smoothness of \(S_{\mathbf{a}}\) and \(S_{2\mathbf{q}}\). **Theorem 2.4**.: _If the scheme based on \(S_{2\mathbf{q}}\) is convergent and \(\mathcal{C}^{m-1}\), then \(S_{\mathbf{a}}\) is convergent and \(\mathcal{C}^{m}\)._ _Remark 2.2_.: We give now a more explicit formula to compute \(\mathbf{q}\) for the kind of schemes we consider in this paper. We will analyze odd-symmetric subdivision schemes, which implies that the length of the mask is always odd and two possible situations may occur, depending on which sub-mask has the largest support. Since the sub-masks \(\mathbf{a}^{0}\) and \(\mathbf{a}^{1}\) are finitely supported, from now on we will treat them as vectors containing _only_ their support, which will be important for the theoretical results in Section 5. The first situation is that, for some \(n\in\mathbb{N}\), the sub-masks are \(\mathbf{a}^{0}=\{a_{l}^{0}\}_{l=1-n}^{n-1}\) and \(\mathbf{a}^{1}=\{a_{l}^{1}\}_{l=1-n}^{n}\), while the second one corresponds to \(\mathbf{a}^{0}=\{a_{l}^{0}\}_{l=n}^{n}\) and \(\mathbf{a}^{1}=\{a_{l}^{1}\}_{l=1-n}^{n}\) (pay attention to the supports). To compute \(\mathbf{q}\) with a unique formula for both cases, we redefine the mask for the second case, consisting in \(\bar{a}_{l}^{0}:=a_{l}^{1}\), \(l=1-n,\ldots,n\), and \(\bar{a}_{l}^{1}:=a_{l-1}^{0}\), \(l=1-n,\ldots,n+1\). Now the first indices of the supports are \(1-n\), in both situations, and the last indices are \(n-1\) and \(n\) (for the first and second sub-mask, respectively) for the first situation and \(n\) and \(n+1\) for the second one. Now, in both cases, the second sub-masks is the largest and we can affirm that there exists some \(n\in\mathbb{N}\) such that \[(S_{\mathbf{a}}\mathbf{f})_{2j}=\sum_{l=1-n}^{L_{n}}a_{l}^{0}f_{j+l},\quad(S_{ \mathbf{a}}\mathbf{f})_{2j+1}=\sum_{l=1-n}^{L_{n}+1}a_{l}^{1}f_{j+l},\quad j\in \mathbb{Z}, \tag{3}\] with \(L_{n}=n-1\) or \(L_{n}=n\), so that \(\mathbf{a}^{0}=\{a_{l}^{0}\}_{l=1-n}^{L_{n}}\) and \(\mathbf{a}^{1}=\{a_{l}^{1}\}_{l=1-n}^{L_{n}+1}\). 
In any case, now the odd-symmetry is written as \[a_{l}^{0}=a_{L_{n}+1-n-l}^{0},\quad a_{l}^{1}=a_{L_{n}+2-n-l}^{1}.\] Finally, for a subdivision operator written as (3), the difference mask \(\mathbf{q}\) can be computed as follows: \[\begin{split} q_{j}^{0}&:=q_{2j}=\sum_{l=-n+1}^{j} a_{l}^{n,0}-a_{l}^{n,1},\qquad j=1-n,\dots,L_{n},\\ q_{j}^{1}&:=q_{2j+1}=\sum_{l=j}^{L_{n}}a_{l}^{n,0} -a_{l+1}^{n,1},\qquad j=1-n,\dots,L_{n}.\end{split} \tag{4}\] According to Lemma 2.3, \(S_{\mathbf{q}}\) is an even-symmetric scheme. In particular, \[q_{j}^{0}=q_{L_{n}+1-n-j}^{1},\qquad j=1-n,\dots,L_{n}. \tag{5}\] ## 3 Weighted local polynomial regression (WLPR) The schemes analysed in the present work has been applied to image processing in a multiresolution context as prediction operator both for point-values as for cell-average discretizations, (see, e.g. [4; 5]). They are based on weighted local polynomial regression (WLPR) and they can be defined by inserting a weight function in the minimization problem (1), which emphasizes the points closer to where the new data is attached. In this section, we briefly introduce WLPR and describe some of its properties. For a more detailed description, see [19; 21]. Firstly, we fix the space of functions where the regression is performed: \(\Pi_{d}\), the space of polynomials of degree at most \(d\). Other function spaces could be used as well (see [19]). We can parametrize the polynomials in \(\Pi_{d}\) as \[p(x)=\beta_{0}+\beta_{1}x+\dots+\beta_{d}x^{d}=A(x)^{T}\boldsymbol{\beta}\] where the superscript \(T\) is the matrix transposition, \(A(x)^{T}=(1,x,\dots,x^{d})\) and \(\boldsymbol{\beta}\in\mathbb{R}^{d+1}\). The vectors are considered column vectors in order to perform the matrix multiplication. With this notation, the regression problem (1) can be expressed as \[\hat{\boldsymbol{\beta}}=\operatorname*{arg\,min}_{\boldsymbol{\beta}\in \mathbb{R}^{d+1}}\sum_{i=1}^{m}L_{2}(y_{i},A(x_{i})^{T}\boldsymbol{\beta}), \quad\hat{p}=A(x)^{T}\hat{\boldsymbol{\beta}},\quad L_{2}(s,t):=(s-t)^{2}.\] The second ingredient is the weight function, \(\omega:\mathbb{R}\to[0,1]\), which assigns a value to the distance between \(x_{i}\) and \(0\), which is the location where \(\hat{p}\) is evaluated in this work. We define \(\omega\) as \[\omega(x)=\begin{cases}\phi(|x|),&|x|\leq 1,\\ 0,&\text{in other case},\end{cases}\] and we impose that \(\phi:[0,1]\to[0,1]\) is a decreasing function such that \(\phi(0)=1\). With these assumptions it is clear that \(\omega\) has compact support, \([-1,1]\), is even, increasing in \([-1,0]\) and decreasing in \([0,1]\), and it reaches the maximum at point \(x=0\). The choice \(w(0)=1\) assigns the highest weight to the point where \(\hat{p}\) is evaluated. In [21], some functions are proposed, which we compile in Table 1. Observe that many of them have the form \(\phi(x)=(1-x^{p})^{q}\) with \(p,q>0\). The third component is the _bandwidth_, \(\lambda\in\mathbb{R}_{+}\backslash\mathbb{N}\). We define \[\mathbf{w}^{\lambda}=\{w_{l}^{\lambda}\}_{l\in\mathbb{Z}},\quad w_{l}^{ \lambda}:=\omega\left(\frac{l}{\lambda}\right)=\phi\left(\frac{|l|}{\lambda} \right),\ l\in\mathbb{Z}.\] The parameter \(\lambda\) determines how many data values are used in the regression and allows to distribute the weights of the points used in the rank \([-\lambda,\lambda]\). By the properties of the function \(\omega\), if \(\lambda_{1}\leq\lambda_{2}\), then \(w_{l}^{\lambda_{1}}\leq w_{l}^{\lambda_{2}}\) for any \(l\in\mathbb{Z}\). 
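The weight sequences \(\mathbf{w}^{\lambda}\) are one line of code for the compactly supported choices of \(\phi\) collected later in Table 1. A small sketch of ours with the triangular weight, also illustrating that enlarging the bandwidth never decreases a weight:

```python
import numpy as np

def weights(phi, lam):
    """w_l^lam = phi(|l| / lam) on the integers with |l| < lam (zero outside)."""
    n = int(np.floor(lam))
    ls = np.arange(-n, n + 1)
    return ls, np.array([phi(abs(l) / lam) for l in ls])

tria = lambda t: 1.0 - t                       # the 'tria' weight of Table 1
ls, w35 = weights(tria, 3.5)
_, w45 = weights(tria, 4.5)
print(dict(zip(ls.tolist(), np.round(w35, 3))))
print(bool(np.all(w35 <= w45[1:-1])))          # w_l^{3.5} <= w_l^{4.5} on shared indices
```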
Finally, we choose a vector norm; typically \(\ell^{2}\) is taken for its simplicity, but any \(\ell^{p}\)-norm can be used depending on the characteristics of the problem. The loss function is defined accordingly: \(L_{p}(s,t)=|s-t|^{p}\). With the above elements, we propose these two problems to design the two subdivision rules: \[\hat{\boldsymbol{\beta}}^{i}=\operatorname*{arg\,min}_{\boldsymbol{\beta}\in \mathbb{R}^{d+1}}\sum\{w_{2l-i}^{\lambda}L_{p}(f_{j+l}^{k},A(2l-i)^{T} \boldsymbol{\beta})\,:\,l\in\mathbb{Z},|2l-i|<\lambda\},\quad i=0,1. \tag{6}\] Once the fitted polynomial is obtained, it is evaluated at \(0\) to define the new data: \[(S_{d,\mathbf{w}^{\lambda}}f^{k})_{2j+i}=A(0)^{T}\hat{\boldsymbol{\beta}}^{i }=(1,0,\ldots,0)\hat{\boldsymbol{\beta}}^{i}=\hat{\beta}_{0}^{i},\quad i=0,1, \tag{7}\] so that only the first coordinate of \(\hat{\boldsymbol{\beta}}^{i}\) is needed. **Proposition 3.1**.: _For \(d=-1+2\left\lfloor\frac{\lambda+1}{2}\right\rfloor\), the resulting subdivision scheme is the Deslauriers-Dubuc subdivision scheme._ Proof.: Let us discuss when this scheme is well defined. Two situations may occur, depending on whether or not \(d\) (the polynomial degree) is smaller than the amount of data \(f_{j+l}^{k}\) in the minimization problem (6). For \(i=0\), if \(d<2\left\lfloor\frac{\lambda}{2}\right\rfloor\) then (6) is a least squares problem and there is a unique solution [6]; otherwise, a polynomial that interpolates the data can be found. Even if the interpolating polynomial is not unique, its evaluation at \(0\) is exactly \(f_{j}^{k}\). Hence, the even rule is well defined for any \(\lambda\in\mathbb{R}_{+}\backslash\mathbb{N}\), coinciding with the even rule of the Deslauriers-Dubuc subdivision scheme for \(d\geq 2\left\lfloor\frac{\lambda}{2}\right\rfloor\), i.e. \(f_{2j}^{k+1}=f_{j}^{k}\). For \(i=1\), a least squares problem is solved if \(d+1<2\left\lfloor\frac{\lambda+1}{2}\right\rfloor\), and an interpolation problem with a unique solution is solved when the equality is reached, coinciding with the Deslauriers-Dubuc odd rule in the latter case. However, neither the polynomial nor its value at zero is unique when \(d+1>2\left\lfloor\frac{\lambda+1}{2}\right\rfloor\), so that the scheme is not well defined in this case. In conclusion, only if the polynomial degree is \(d=-1+2\left\lfloor\frac{\lambda+1}{2}\right\rfloor\) is the resulting scheme the Deslauriers-Dubuc interpolatory subdivision scheme, independently of the choice of \(\omega\) and the loss function \(L_{p}\). The scheme (7) coincides with the one proposed by Dyn et al. in [14] if \(p=2\) and \(\phi(x)=1\) are used (corresponding to rect in Table 1). Also, the non-linear subdivision scheme presented by Mustafa et al. in [23] can be obtained with the same choice of \(\phi(x)=1\) but with \(p=1\). We will analyse the properties of our schemes specifically for the polynomial degrees \(d=0,1,2,3\), the loss function \(L_{2}\) and several choices of \(\phi\). We will study how the choice of \(\phi\) affects the approximation and noise reduction capabilities. We will show that it is not possible to define a \(\phi\) giving both the best approximation and the greatest denoising. In fact, one may decide how much importance to assign to each property and find an equilibrium. This decision may be based on the magnitude of the noise and the smoothness of the underlying function. 
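Since for \(p=2\) the problems (6) are ordinary weighted least-squares problems, a refined value can also be obtained directly from the data, without first deriving the mask. The following is a minimal sketch of (6)–(7) (our own transcription; the triangular weight and the concrete data are only illustrative):

```python
import numpy as np

def wlpr_value(fk, j, i, d, lam, phi):
    """Refined value (S_{d,w^lam} f^k)_{2j+i} of (7), obtained by solving (6)
    with the L_2 loss.  `fk` is a dict index -> value of the level-k data."""
    ls = [l for l in range(-int(lam) - 1, int(lam) + 2) if abs(2 * l - i) < lam]
    x = np.array([2.0 * l - i for l in ls])                 # knots 2l - i
    w = np.array([phi(abs(xi) / lam) for xi in x])          # weights w^lam_{2l-i}
    y = np.array([fk[j + l] for l in ls])
    X = np.vander(x, N=d + 1, increasing=True)              # rows A(2l - i)^T
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return beta[0]                                          # hat beta_0^i

tria = lambda t: 1.0 - t
f0 = {j: float((2 * j) ** 2) for j in range(-6, 7)}         # f^0_j = p(2j), p(x) = x^2
# with d = 2 the quadratic is reproduced: the new odd value equals p(1) = 1
print(wlpr_value(f0, j=0, i=1, d=2, lam=3.5, phi=tria))
```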
Observe that, when \(2n-1<\lambda<2n\), for some \(n\in\mathbb{N}\), the even rule (\(i=0\)) support is shorter than the odd (\(i=1\)) one, and just the opposite occurs when \(2n<\lambda<2n+1\). To simplify, we will discuss in detail the first case, where even and odd masks have lengths \(2n-1\) and \(2n\), respectively, since the second one is analogue and the Remark 2.2 can be taken into account for the consequent analysis. Nevertheless, we deal with both situations along the paper when it can be do it without additional effort. \begin{table} \begin{tabular}{l l} \hline rect & \(\phi(x)=1\) \\ tria & \(\phi(x)=1-x\) \\ epan & \(\phi(x)=1-x^{2}\) \\ bisq & \(\phi(x)=(1-x^{2})^{2}\) \\ tcub & \(\phi(x)=(1-x^{3})^{3}\) \\ trwt & \(\phi(x)=(1-x^{2})^{3}\) \\ exp\((\xi)\) & \(\phi(x)=e^{-\xi x}\) & \(\xi\in\mathbb{R}_{+}\) \\ \hline \end{tabular} \end{table} Table 1: Weight functions, see [21]. To give a more explicit definition of the schemes, we solve the quadratic problem posed in (6) with \(p=2\). In this case, it is a weighted least square problem and its solution is well-known. Let us start with the derivation of the odd sub-mask, \(\mathbf{a}^{1}\), for \(2n-1<\lambda<2n+1\). For the sake of simplicity, we omit the dependence on \(d,\omega,\lambda\) for the following vectors and matrices. If we denote as \(\mathbf{W}^{1}\) the diagonal matrix consisting on the vector \[\mathbf{w}^{1}=(w_{2n-1}^{\lambda},\ldots,w_{1}^{\lambda},w_{1}^{\lambda}, \ldots,w_{2n-1}^{\lambda}), \tag{8}\] we call \[\mathbf{x}^{1}=\left(\begin{array}{c}-2n+1\\ \vdots\\ -1\\ 1\\ \vdots\\ 2n-1\end{array}\right),\quad\mathbf{X}^{1}=\big{(}(\mathbf{x}^{1})^{0},( \mathbf{x}^{1})^{1},\ldots,(\mathbf{x}^{1})^{d}\big{)}=\left(\begin{array}{c} A(-2n+1)^{T}\\ \vdots\\ A(-1)^{T}\\ A(1)^{T}\\ \vdots\\ A(2n-1)^{T}\end{array}\right), \tag{9}\] where the powers \((\mathbf{x}^{1})^{t}\), \(t=0,\ldots,d\), are computed component-wisely, so that \(\mathbf{X}^{1}\) is a \(2n\times(d+1)\) matrix, and we denote \(\mathbf{f}^{1,j,k}=(f_{j-n+1}^{k},\ldots,f_{j}^{k},f_{j+1}^{k},\ldots,f_{j+n}^ {k})^{T}\), then the problem of (6) can be write as: \[\hat{\boldsymbol{\beta}}^{1}=\operatorname*{arg\,min}_{\boldsymbol{\beta} \in\mathbb{R}^{d+1}}||(\mathbf{W}^{1})^{\frac{1}{2}}\mathbf{f}^{1,j,k}-( \mathbf{W}^{1})^{\frac{1}{2}}\mathbf{X}^{1}\boldsymbol{\beta}||_{2}^{2},\] whose solution is \[\hat{\boldsymbol{\beta}}^{1}=((\mathbf{X}^{1})^{T}\mathbf{W}^{1}\mathbf{X}^{1} )^{-1}(\mathbf{X}^{1})^{T}\mathbf{W}^{1}\mathbf{f}^{1,j,k}. 
\tag{10}\] For the sake of clarity, we write down the above terms: \[(\mathbf{X}^{1})^{T}\mathbf{W}^{1}\mathbf{X}^{1}=\left(\begin{array}{llll} \sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1) &\cdots&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{d}\\ \sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}( 2i-1)^{2}&\cdots&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{d+1}\\ \vdots&\vdots&\vdots&\vdots\\ \sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{d}&\sum_{i=-n+1}^{n}w_{2i-1}^{ \lambda}(2i-1)^{d+1}&\cdots&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{2d} \end{array}\right) \tag{11}\] and \[(\mathbf{X}^{1})^{T}\mathbf{W}^{1}\mathbf{f}^{1,j,k}=(\sum_{i=-n+1}^{n}w_{2i- 1}^{\lambda}f_{j+i}^{k},\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)f_{j+i}^{k}, \cdots,\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{d}f_{j+i}^{k})^{T}.\] Since we only need the first coordinate \(\hat{\beta}_{0}^{1}\), we can use the Cramer's formula instead of solving the full system: \[(S_{d,\mathbf{w}^{\lambda}}\mathbf{f}^{k})_{2j+1}=\hat{\beta}_{0}^{1}=\frac{ \left|\begin{array}{ll}\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}f_{j+i}^{k}&\sum_{ i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)&\cdots&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{d} \\ \sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)f_{j+i}^{k}&\sum_{i=-n+1}^{n}w_{2i-1}^ {\lambda}(2i-1)^{2}&\cdots&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{d+1}\\ \vdots&\vdots&\ddots&\vdots\\ \sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{d}f_{j+i}^{k}&\sum_{i=-n+1}^{n}w_{2i- 1}^{\lambda}(2i-1)^{d+1}&\cdots&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{2d} \end{array}\right|}.\] Observe that, since the vector \(\mathbf{w}^{1}\) is symmetric, \(w_{2i-1}^{\lambda}=w_{1-2i}^{\lambda}\), then \(\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{t}=0\) for any odd value of \(t\), and \(\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{p}=2\sum_{i=1}^{n}w_{2i-1}^{\lambda}( 2i-1)^{t}\) for the even values. Thus, the above expressions can be simplified by placing many zeros and by shorting the range of the remaining sums. Using the linearity of the determinant respect to the first column, \[(S_{d,\mathbf{w}^{\lambda}}\mathbf{f}^{k})_{2j+1}=\sum_{l=-n+1}^{n}\frac{w_{2l- 1}^{\lambda}f_{j+l}^{k}}{|(\mathbf{X}^{1})^{T}\mathbf{W}^{1}\mathbf{X}^{1}|} \left|\begin{array}{ll}1&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)&\cdots& \sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{d}\\ \vdots&\vdots&\ddots&\vdots\\ (2l-1)^{d}&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{d+1}&\cdots&\sum_{i=-n+1}^ {n}w_{2i-1}^{\lambda}(2i-1)^{2d}\end{array}\right|,\] we conclude that the sub-masks coefficients are \[a_{l}^{1}=|(\mathbf{X}^{1})^{T}\mathbf{W}^{1}\mathbf{X}^{1}|^{-1}w_{2l-1}^{ \lambda}\left|\begin{array}{lll}1&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)& \cdots&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{d}\\ 2l-1&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{2}&\cdots&\sum_{i=-n+1}^{n}w_{2 i-1}^{\lambda}(2i-1)^{d+1}\\ \vdots&\vdots&\ddots&\vdots\\ (2l-1)^{d}&\sum_{i=-n+1}^{n}w_{2i-1}^{\lambda}(2i-1)^{d+1}&\cdots&\sum_{i=-n+1} ^{n}w_{2i-1}^{\lambda}(2i-1)^{2d}\\ \end{array}\right|.\] By (10), it can also be expressed as \(\mathbf{a}^{1}=(\boldsymbol{\beta}^{1})^{T}\mathbf{e}_{1}=\mathbf{W}^{1} \mathbf{X}^{1}((\mathbf{X}^{1})^{T}\mathbf{W}^{1}\mathbf{X}^{1})^{-1}\mathbf{ e}_{1}\), where \(\mathbf{e}_{1}\) is the first element of the canonical basis of \(\mathbb{R}^{d+1}\). 
Analogously, for \(2n-2<\lambda<2n\), we can prove that \(\mathbf{a}^{0}=\mathbf{W}^{0}\mathbf{X}^{0}((\mathbf{X}^{0})^{T}\mathbf{W}^{ 0}\mathbf{X}^{0})^{-1}\mathbf{e}_{1}\), so that \[a_{l}^{0}=|(\mathbf{X}^{0})^{T}\mathbf{W}^{0}\mathbf{X}^{0}|^{-1}w_{2l}^{ \lambda}\left|\begin{array}{lll}1&\sum_{i=-n+1}^{n-1}w_{2i}^{\lambda}(2i)& \cdots&\sum_{i=-n+1}^{n-1}w_{2i}^{\lambda}(2i)^{d}\\ 2l&\sum_{i=-n+1}^{n-1}w_{2i}^{\lambda}(2i)^{2}&\cdots&\sum_{i=-n+1}^{n-1}w_{2 i}^{\lambda}(2i)^{d+1}\\ \vdots&\vdots&\ddots&\vdots\\ (2l)^{d}&\sum_{i=-n+1}^{n-1}w_{2i}^{\lambda}(2i)^{d+1}&\cdots&\sum_{i=-n+1}^{n- 1}w_{2i}^{\lambda}(2i)^{2d}\\ \end{array}\right|,\] where \(\mathbf{W}^{0}\) is the diagonal matrix with diagonal \[\mathbf{w}^{0}=(w_{2n-2}^{\lambda},\ldots,w_{2}^{\lambda},1,w_{2}^{\lambda}, \ldots,w_{2n-2}^{\lambda}), \tag{12}\] and \[\mathbf{x}^{0}=\left(-2(n-1),\ldots,2,0,2,\ldots,2(n-1)\right)^{T},\quad \mathbf{X}^{0}=\left((\mathbf{x}^{0})^{0},(\mathbf{x}^{0})^{1},\ldots,( \mathbf{x}^{0})^{d}\right).\] Collecting these developments, for \(2n-1<\lambda<2n\), we can define our weighted local polynomial regression-based subdivision as: \[(S_{d,\mathbf{w}^{0}}f^{k})_{2j+i}=\sum_{l=1-n}^{n-1+i}a_{l}^{i}f_{j+l}^{k}, \quad i=0,1.\] A direct consequence, by construction, is that the scheme reproduces polynomials up to degree \(d\). **Proposition 3.2**.: _The scheme \(S_{d,\mathbf{w}^{\lambda}}\) reproduces \(\Pi_{d}\)._ _Remark 3.1_.: Observe that we have considered \(\{1,x,\ldots,x^{d}\}\) as basis of \(\Pi_{d}\), which has led a the linear system with matrix (11). It is possible to consider an orthonormal basis of \(\Pi_{d}\) in a way that the matrix is diagonal, leading to a cleaner mathematical description. However, we preferred the basis \(\{1,x,\ldots,x^{d}\}\) because the resulting expression of the subdivision operator is more explicit. A possible benefit of considering an orthonormal basis is that the next results might be more intuitive. Now we prove that \((\mathbf{W}^{i})^{-1}\mathbf{a}^{i}\), \(i=0,1\), are exactly the evaluations of some polynomial at the grid points \(\mathbf{x}^{i}\). **Lemma 3.3**.: _For \(i=0,1\), the sub-masks are_ \[\mathbf{a}^{i}=\mathbf{W}^{i}\mathbf{X}^{i}\boldsymbol{\alpha}^{i}=\left\{w_{2 j-i}^{\lambda}\sum_{t=0}^{d}\alpha_{t}^{i}x_{j}^{t}\right\}_{j=1-n}^{L_{n}+i} \tag{13}\] _That is, the vector \((\mathbf{W}^{i})^{-1}\mathbf{a}^{i}\) coincides with the evaluation of the polynomial \(A(x)^{T}\boldsymbol{\alpha}^{i}\) at the points \(\mathbf{x}^{i}\), being \(\boldsymbol{\alpha}^{i}=((\mathbf{X}^{i})^{T}\mathbf{W}^{i}\mathbf{X}^{i})^{-1 }\mathbf{e}_{1}\), which expression depends on \(n,\lambda,\omega\)._ Proof.: By the previous computations, \[\mathbf{a}^{i}=\mathbf{W}^{i}\mathbf{X}^{i}((\mathbf{X}^{i})^{T}\mathbf{W}^{i} \mathbf{X}^{i})^{-1}\mathbf{e}_{1}=\mathbf{W}^{i}\mathbf{X}^{i}\boldsymbol{ \alpha}^{i}.\] For \(\mathbf{a}^{1}\) (for \(\mathbf{a}^{0}\) is analogous), using (9) we obtain \[(\mathbf{W}^{1})^{-1}\mathbf{a}^{1}=\mathbf{X}^{1}\boldsymbol{\alpha}^{1}= \left(\begin{array}{c}A(-2n+1)^{T}\\ \vdots\\ A(-1)^{T}\\ A(1)^{T}\\ \vdots\\ A(2n-1)^{T}\end{array}\right)\boldsymbol{\alpha}^{1}=\left(\begin{array}{c}A(- 2n+1)^{T}\boldsymbol{\alpha}^{1}\\ \vdots\\ A(-1)^{T}\boldsymbol{\alpha}^{1}\\ \vdots\\ A(2n-1)^{T}\boldsymbol{\alpha}^{1}\end{array}\right).\] That is, the coordinates of \((\mathbf{W}^{1})^{-1}\mathbf{a}^{1}\) are the evaluations of the polynomial \(A(x)^{T}\boldsymbol{\alpha}^{1}\) at the \(\mathbf{x}^{1}\) grid points. 
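The closed-form expressions \(\mathbf{a}^{i}=\mathbf{W}^{i}\mathbf{X}^{i}((\mathbf{X}^{i})^{T}\mathbf{W}^{i}\mathbf{X}^{i})^{-1}\mathbf{e}_{1}\) can be evaluated numerically in a few lines. The sketch below is our own transcription of the formulas for the case \(2n-1<\lambda<2n\), and it checks the two facts established next: odd-symmetry of the sub-masks and reproduction of constants.

```python
import numpy as np

def wlpr_submask(d, lam, phi, i):
    """a^i = W^i X^i ((X^i)^T W^i X^i)^{-1} e_1, assuming 2n-1 < lam < 2n."""
    n = int(np.ceil(lam / 2.0))                      # so that 2n-1 < lam < 2n
    if i == 0:
        x = 2.0 * np.arange(-(n - 1), n)             # knots 2l,   l = 1-n, ..., n-1
    else:
        x = 2.0 * np.arange(-(n - 1), n + 1) - 1.0   # knots 2l-1, l = 1-n, ..., n
    w = np.array([phi(abs(xi) / lam) for xi in x])
    X = np.vander(x, N=d + 1, increasing=True)       # rows A(x)^T = (1, x, ..., x^d)
    e1 = np.zeros(d + 1)
    e1[0] = 1.0
    alpha = np.linalg.solve(X.T @ (w[:, None] * X), e1)
    return (w[:, None] * X) @ alpha

tria = lambda t: 1.0 - t                              # the 'tria' weight of Table 1
a0 = wlpr_submask(d=2, lam=5.5, phi=tria, i=0)
a1 = wlpr_submask(d=2, lam=5.5, phi=tria, i=1)
print(np.round(a0, 4))
print(np.round(a1, 4))
print(np.allclose(a0, a0[::-1]), np.allclose(a1, a1[::-1]))   # odd-symmetry
print(np.isclose(a0.sum(), 1.0), np.isclose(a1.sum(), 1.0))   # reproduces constants
```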
Moreover, these sub-masks are the only ones that lead to polynomial reproduction and verify that \((\mathbf{W}^{i})^{-1}\mathbf{a}^{i}\) are polynomial evaluations. This property can be used in practice to easily determine the sub-masks, as we do in Section 6. **Theorem 3.4**.: _The scheme \(S_{d,\mathbf{w}^{\lambda}}\) is the unique scheme that reproduces \(\Pi_{d}\) polynomials and its sub-masks have the form \(\mathbf{a}^{i}=\mathbf{W}^{1}\mathbf{X}^{1}\boldsymbol{\alpha}^{i}\), for some \(\boldsymbol{\alpha}^{i}\in\mathbb{R}^{d+1}\), \(i=0,1\)._ Proof.: It is a consequence of Lemma 3.3 together with Proposition 3.2. Suppose that some rule \(\hat{\mathbf{a}}=\mathbf{W}^{i}\mathbf{X}^{i}\hat{\boldsymbol{\alpha}}\), for some \(\hat{\boldsymbol{\alpha}}\in\mathbb{R}^{d+1}\), fulfils the reproduction conditions for \(\Pi_{d}\). Then \[\sum_{j}\hat{a}_{j}(x_{j}^{i})^{t}=\delta_{0,t},\qquad t=0,1,\ldots,d,\quad i=0,1,\] or, written with matrix multiplications, \((\mathbf{X}^{i})^{T}\hat{\mathbf{a}}=\mathbf{e}_{1}.\) Then, \[(\mathbf{X}^{i})^{T}\hat{\mathbf{a}}=(\mathbf{X}^{i})^{T}\mathbf{W}^{i} \mathbf{X}^{1}\hat{\boldsymbol{\alpha}}=\mathbf{e}_{1}\rightarrow\hat{ \boldsymbol{\alpha}}=((\mathbf{X}^{i})^{T}\mathbf{W}^{i}\mathbf{X}^{i})^{-1} \mathbf{e}_{1}=\boldsymbol{\alpha}^{i}.\] The symmetry of the scheme is another consequence of being based on a polynomial regression problem. **Lemma 3.5**.: _The scheme \(S_{d,\mathbf{w}^{\lambda}}\) is odd-symmetric._ Proof.: We prove that \(a_{j}^{1}=a_{1-j}^{1}\) for \(2n-1<\lambda<2n+1\) (it can be analogously proven that \(a_{j}^{0}=a_{-j}^{0}\) for \(2n-2<\lambda<2n\)). Let us consider \(\mathbf{f}^{j}=\{\delta_{j,l}\;:\;l\in\{-n+1,\ldots,n\}\}\). The coordinates of the sub-mask \(\mathbf{a}^{1}\) can be obtained by applying the rule to \(\mathbf{f}^{j}\), for \(j=-n+1,\ldots,n\), and take the first coordinate, \[a_{j}^{1}=\sum_{l=-n+1}^{n}a_{l}^{1}\delta_{j,l}=\sum_{l=-n+1}^{n}a_{l}^{1}f_{ l}^{j}=(S_{d,\mathbf{w}^{\lambda}}\mathbf{f}^{j})_{1}=\hat{p}^{j}(0),\] where, by (6) and (7), \[\hat{p}^{j}=\operatorname*{arg\,min}_{p\in\Pi_{d}(\mathbb{R})}\sum_{l=-n+1}^{ n}w_{2l-1}^{\lambda}(\delta_{j,l}-p(2l-1))^{2}. \tag{14}\] Then, \(a_{j}^{1}=a_{1-j}^{1}\) provided that \((S_{d,\mathbf{w}^{\lambda}}\mathbf{f}^{j})_{1}=(S_{d,\mathbf{w}^{\lambda}} \mathbf{f}^{1-j})_{1}\), or in other words, \(\hat{p}^{j}(0)=\hat{p}^{1-j}(0)\). Observe that, \[\hat{p}^{1-j}=\operatorname*{arg\,min}_{q\in\Pi_{d}(\mathbb{R})}\sum_{l=-n+1}^ {n}w_{2l-1}^{\lambda}(\delta_{1-j,l}-q(2l-1))^{2},\] and, performing the change in the summation index \(l\) by \(1-l\) and using \(w_{2l-1}^{\lambda}=w_{1-2l}^{\lambda}\) and \(\delta_{j,l}=\delta_{1-j,1-l}\), \[\hat{p}^{1-j}=\operatorname*{arg\,min}_{q\in\Pi_{d}(\mathbb{R})}\sum_{l=-n+1}^ {n}w_{1-2l}^{\lambda}(\delta_{1-j,1-l}-q(1-2l))^{2}=\operatorname*{arg\,min}_{ q\in\Pi_{d}(\mathbb{R})}\sum_{l=-n+1}^{n}w_{2l-1}^{\lambda}(\delta_{j,l}-q(1-2l))^{2}. \tag{15}\] Observe the similarity between (14) and (15). Since the minimum is unique, it is reached in (15) by \(\hat{p}^{j}(-t)\). Thus \(\hat{p}^{1-j}(t)=\hat{p}^{j}(-t)\) and, then, \(a_{j}^{1}=\hat{p}^{j}(0)=\hat{p}^{1-j}(0)=a_{1-j}^{1}\). By Lemma 3.3, we know that \((\mathbf{W}^{1})^{-1}\mathbf{a}^{i}\) are the evaluations of a polynomial at \(\mathbf{x}^{i}\). 
To take profit of the symmetry, let us write as in (13): \[(\mathbf{W}^{0})^{-1}\mathbf{a}^{0}=\left\{\sum_{t=0}^{d}\alpha_{t}^{0}(2l)^{t }\right\}_{l=-n+1}^{n-1},\quad(\mathbf{W}^{1})^{-1}\mathbf{a}^{1}=\left\{ \sum_{t=0}^{d}\alpha_{t}^{1}(2l-1)^{t}\right\}_{l=-n+1}^{n}. \tag{16}\] Since \(a_{i}^{0}=a_{-i}^{0}\), \(\forall i=-n+1,\ldots,n-1\), and \(a_{i}^{1}=a_{1-i}^{1}\), \(\forall i=-n+1,\ldots,n\), and \(\omega\) is even, it can be deduced that the polynomials only have even powers. That is \[\alpha_{2t-1}^{i}=0,\quad\forall 1\leq t\leq(d+1)/2.\] A direct consequence is that the subdivision schemes obtained for any weight function of degree \(d\) (even number) coincides with the one for \(d+1\), proven in the following lemma. **Proposition 3.6**.: _Let \(\omega\) a weight function, \(d\in 2\mathbb{Z}_{+}\) and \(\lambda\in\mathbb{R}_{+}\backslash\mathbb{N}\) be such that \(d\leq-2+2\left\lfloor\frac{\lambda+1}{2}\right\rfloor\), then_ \[S_{d,\mathbf{w}^{\lambda}}=S_{d+1,\mathbf{w}^{\lambda}}.\] Proof.: The sub-masks of \(S_{d+1,\mathbf{w}^{\lambda}}\) can be written in terms of the evaluation of a \((d+1)\)-degree polynomial, according to Lemma 3.3. Since the odd coefficients are zero, then the leading coefficient is zero, for both rules \(i=0,1\). Then, both \(S_{d,\mathbf{w}^{\lambda}}\) and \(S_{d+1,\mathbf{w}^{\lambda}}\) fulfils the conditions of Theorem 3.4, for the same polynomial degree \(d\), hence they must coincide. Therefore, we can just study the properties of the subdivision schemes based on the space of polynomials \(\Pi_{d}(\mathbb{R})\) with \(d\) an even number. ## 4 WLPR-Subdivision schemes for \(d=0,1\) In this section we present the WLPR-Subdivision schemes for \(d=0,1\) and their properties, by Proposition 3.6, we can just consider \(d=0\). To simplify the notation, in this section we omit \(d\), \(\mathbf{w}\) and \(\lambda\) in some variables, such as \(S:=S_{0,\mathbf{w}^{\lambda}}=S_{1,\mathbf{w}^{\lambda}}\). In this case, the coefficients of the subdivision schemes are easily obtained from \(\omega\) thanks to Lemma 3.3: If we denote as \(||\mathbf{w}^{i}||_{1}\) the sum of the components of the vector \(\mathbf{w}^{i}\) with \(i=0,1\), defined in (12) and (8), \[||\mathbf{w}^{0}||_{1}=1+2\sum_{l=1}^{n-1}w_{2l}^{\lambda},\quad||\mathbf{w}^{ 1}||_{1}=2\sum_{l=0}^{n-1}w_{2l+1}^{\lambda},\] \[\mathbf{a}^{i}=\mathbf{W}^{i}\mathbf{X}^{i}\boldsymbol{\alpha}^{i}=\mathbf{w}^ {i}\boldsymbol{\alpha}^{i},\quad\boldsymbol{\alpha}^{i}=((\mathbf{X}^{i})^{T} \mathbf{W}^{i}\mathbf{X}^{i})^{-1}\mathbf{e}_{1}=||\mathbf{w}^{i}||_{1}^{-1},\] thus \(\mathbf{a}^{i}=\mathbf{w}^{i}/||\mathbf{w}^{i}||_{1}\). Another way to obtain \(\boldsymbol{\alpha}^{i}\) is based on Theorem 3.4: Since \(\mathbf{a}^{i}=\mathbf{w}^{i}\boldsymbol{\alpha}^{i}\) and the scheme must reproduce \(\Pi_{0}\) (constant functions), then \(1=\sum_{j}a_{j}^{i}=\boldsymbol{\alpha}^{i}\|\mathbf{w}^{i}\|_{1}\) by Lemma 2.2, thus \(\boldsymbol{\alpha}^{i}=||\mathbf{w}^{i}||_{1}^{-1}\). 
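For \(d=0,1\) the rules are simply normalized weight vectors, which makes it easy to cross-check the example masks listed next. A short sketch of ours with the triangular weight:

```python
import numpy as np

def mask_d01(lam, phi):
    """Full mask of S_{1,w^lambda}: the even and odd rules are the normalized
    even- and odd-indexed weights w_l^lambda = phi(|l| / lam)."""
    n = int(np.floor(lam))
    idx = np.arange(-n, n + 1)
    w = np.array([phi(abs(l) / lam) for l in idx])
    a = np.empty_like(w)
    even = idx % 2 == 0
    a[even] = w[even] / w[even].sum()
    a[~even] = w[~even] / w[~even].sum()
    return idx, a

tria = lambda t: 1.0 - t
for lam in (2.5, 3.5):
    idx, a = mask_d01(lam, tria)
    print(lam, np.round(a, 4))
# lam = 2.5 gives [1/7, 1/2, 5/7, 1/2, 1/7] and lam = 3.5 gives
# [1/12, 3/13, 5/12, 7/13, 5/12, 3/13, 1/12], matching the masks listed below.
```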
The explicit form of the resulting WLPR-subdivision scheme is, if \(2n-1<\lambda<2n\), \[(Sf^{k})_{2j}=\sum_{l=1-n}^{n-1}\left(\frac{w_{2l}^{\lambda}}{||\mathbf{w}^{0 }||_{1}}\right)f_{j+l}^{k},\quad(Sf^{k})_{2j+1}=\sum_{l=1-n}^{n}\left(\frac{w_ {2l-1}^{\lambda}}{||\mathbf{w}^{1}||_{1}}\right)f_{j+l}^{k}, \tag{17}\] and, for \(2n<\lambda<2n+1\), it can be written in the following way, in agreement with Remark 2.2, \[(Sf^{k})_{2j}=\sum_{l=1-n}^{n}\left(\frac{w_{2l-1}^{\lambda}}{||\mathbf{w}^{1 }||_{1}}\right)f_{j+l}^{k},\quad(Sf^{k})_{2j+1}=\sum_{l=1-n}^{n+1}\left(\frac {w_{2l-2}^{\lambda}}{||\mathbf{w}^{0}||_{1}}\right)f_{j+l}^{k}. \tag{18}\] Note that if \(\lambda\in(1,2)\) then \(\mathbf{w}^{0}=1\) and \(\mathbf{w}^{1}=(\frac{1}{2},\frac{1}{2})\), so that the mask for any function \(\omega\) of the subdivision scheme is \(\mathbf{a}=[1,2,1]/2\), in other words, the interpolatory Deslauriers-Dubuc scheme [11] (as stated in Proposition 3.1). For \(\lambda>2\), if \(\omega(x)=1\) for \(|x|\leq 1\), then the schemes presented by Dyn et al. in [14] are recovered as we mentioned above. These schemes are for \(2n-1<\lambda<2n\): \[(S_{\mathtt{rect}}f^{k})_{2j+1}=\sum_{l=1-n}^{n}\frac{f_{j+l}^{k}}{2n},\quad( S_{\mathtt{rect}}f^{k})_{2j}=\sum_{l=-n+1}^{n-1}\frac{f_{j+l}^{k}}{2n-1}.\] We list some masks for the weight function \(\omega(x)=1-|x|\), \(|x|\leq 1\), and for several values of \(\lambda\): \[\mathbf{a}_{0,\mathtt{tria}^{1.5}} =[1,2,1]/2,\] \[\mathbf{a}_{0,\mathtt{tria}^{2.5}} =[1/7,1/2,5/7,1/2,1/7],\] \[\mathbf{a}_{0,\mathtt{tria}^{3.5}} =[1/12,3/13,5/12,7/13,5/12,3/13,1/12],\] \[\mathbf{a}_{0,\mathtt{tria}^{4.5}} =[1/21,3/20,5/21,7/20,3/7,7/20,5/21,3/20,1/21],\] \[\mathbf{a}_{0,\mathtt{tria}^{5.5}} =[1/30,3/31,1/6,7/31,3/10,11/31,3/10,7/31,1/6,3/31,1/30].\] As we can see, all subdivision schemes in this section present a positive mask, since \(\omega\) is a positive function. Then, the following result on convergence proved in [18], (see also [22; 26]) can be applied. **Proposition 4.1**.: _([18]) Let \(\mathbf{a}=\{a_{l}\}_{l\in\mathbb{Z}}\) be a mask with support \([q,q+k]\), being \(q\) and \(k\) fixed integers, \(k\geq 3\). Suppose that \(a_{q},a_{q+1},\ldots,a_{q+k-1},a_{q+k}>0\) and \(\sum_{l\in\mathbb{Z}}a_{2l}=\sum_{l\in\mathbb{Z}}a_{2l+1}=1,\) then the subdivision scheme converges._ As a direct consequence, the schemes in this section, (17) and (18), are convergent because the masks are positive. Observe that the condition \(k\geq 3\) in Proposition 4.1 requires considering \(\lambda>2\). **Corollary 4.2**.: _The subdivision scheme \(S_{1,\mathbf{w}^{\lambda}}\), defined in (17) or (18), is convergent for any \(\lambda\in(1,+\infty)\backslash\mathbb{N}\) and any positive function \(\omega\) with support \([-1,1]\)._ In Figure 1 we show some examples of the limit functions for some weight functions, \(\lambda\in\{3.2,3.4,3.6,3.8\}\) and \(\mathbf{f}^{0}=\{\delta_{0,l}\}_{l\in\mathbb{Z}}\). The support of all these limit functions is \([-3,3]\) because the mask support length does not vary. To analyse the smoothness of the limit functions generated by \(S\), we consider the Theorem 2.4. In particular, we will prove that the mask of the difference scheme \(S_{\mathbf{q}}\) is positive and apply again Proposition 4.1. Thanks to the odd-symmetry of the scheme, the study can be reduced to a half of its coefficients. **Lemma 4.3**.: _Let \(n\) be a natural number, \(n\geq 2\), \(\lambda\in(2n-1,2n)\) and \(\omega\) a weight function. 
The coefficients of the difference scheme \(S_{\mathbf{q}}\) are positive if_ \[\frac{\sum_{l=j_{0}}^{n-1}w_{2l+1}^{\lambda}}{\sum_{l=j_{0}}^{n-1}w_{2l}^{ \lambda}}<\frac{||\mathbf{w}^{1}||_{1}}{||\mathbf{w}^{0}||_{1}}<\frac{\sum_{l =j_{1}}^{n-1}w_{2l+1}^{\lambda}}{\sum_{l=j_{1}+1}^{n-1}w_{2l}^{\lambda}},\qquad j _{0}=1,\ldots,n-1,\quad j_{1}=1,\ldots,n-2.\] Proof.: By Lemma 2.3, \(S_{\mathbf{q}}\) is even-symmetric. Since \(2n-1<\lambda<2n\), then \(L_{n}=n-1\) in (5) and we have \[q_{j}^{0}=q_{-j}^{1},\qquad j=1-n,\ldots,n-1.\] Then, if \(q_{j}^{0}>0\), for \(j=1-n,\ldots,0\), and \(q_{j}^{1}>0\), for \(j=1-n,\ldots,-1\), the result is proved. First, we check that the coefficients \(q_{0}^{0}\) and \(q_{1-n}^{1}\) are always positive. \[q_{0}^{0}=\sum_{l=-n+1}^{0}\frac{w_{2l}^{\lambda}}{||\mathbf{w}^{0}||_{1}}- \frac{w_{2l-1}^{\lambda}}{||\mathbf{w}^{1}||_{1}}=\sum_{l=-n+1}^{0}\frac{w_{2l} ^{\lambda}}{||\mathbf{w}^{0}||_{1}}-\frac{1}{2}=\frac{1+\sum_{l=-n+1}^{-1}w_{2 l}^{\lambda}}{1+2\sum_{l=-n+1}^{-1}w_{2l}^{\lambda}}-\frac{1}{2}>0,\] Figure 1: Limit functions of the subdivision schemes \(S_{1,\mathbf{w}^{\lambda}}\) for some weight functions (see Table 1) and \(\lambda=3.2\) (blue), \(\lambda=3.4\) (orange), \(\lambda=3.6\) (yellow) and \(\lambda=3.8\) (purple). since \(w_{0}^{\lambda}=1\) and \(||\mathbf{w}^{0}||_{1}=1+2\sum_{l=-n+1}^{-1}w_{2l}^{\lambda}.\) Analogously, by (4), we have that \[q_{1-n}^{1}=\sum_{l=1-n}^{n-1}\frac{w_{2l}^{\lambda}}{||\mathbf{w}^{0}||_{1}}- \frac{w_{2l+1}^{\lambda}}{||\mathbf{w}^{1}||_{1}}=1-\sum_{l=1-n}^{n-1}\frac{w_ {2l+1}^{\lambda}}{||\mathbf{w}^{1}||_{1}}=\frac{w_{1-2n}^{\lambda}}{||\mathbf{ w}^{1}||_{1}}>0.\] Now we check \(q_{j}^{0}>0\), for \(j=1-n,\ldots,-1\). From (4), we have that \[q_{j}^{0}=\sum_{l=-n+1}^{j}a_{l}^{n,0}-a_{l}^{n,1}=\sum_{l=-n+1}^{j}\frac{w_{2 l}^{\lambda}}{||\mathbf{w}^{0}||_{1}}-\frac{w_{2l-1}^{\lambda}}{||\mathbf{w}^{1} ||_{1}},\qquad j=1-n,\ldots,-1,\] then \[0<q_{j}^{0}=\sum_{l=-n+1}^{j}\frac{w_{2l}^{\lambda}}{||\mathbf{w}^{0}||_{1}}- \frac{w_{2l-1}^{\lambda}}{||\mathbf{w}^{1}||_{1}}\Leftrightarrow\sum_{l=-n+1 }^{j}\frac{w_{2l-1}^{\lambda}}{||\mathbf{w}^{1}||_{1}}<\sum_{l=-n+1}^{j}\frac{ w_{2l}^{\lambda}}{||\mathbf{w}^{0}||_{1}}\Leftrightarrow\frac{\sum_{l=-n+1}^{j}w_{2l-1}^{ \lambda}}{||\mathbf{w}^{0}||_{1}}<\frac{||\mathbf{w}^{1}||_{1}}{||\mathbf{w}^{ 0}||_{1}}. \tag{19}\] As \(w_{l}^{\lambda}=w_{-l}^{\lambda}\) for all \(l\in\mathbb{Z}\), we have that, if \(j=1-n,\ldots,-1\), \[\sum_{l=-n+1}^{j}w_{2l-1}^{\lambda}=\sum_{l=-n+1}^{j}w_{1-2l}^{\lambda}=\sum_{ l=-j}^{n-1}w_{2l+1}^{\lambda},\quad\sum_{l=-n+1}^{j}w_{2l}^{\lambda}=\sum_{l=-n+1 }^{j}w_{-2l}^{\lambda}=\sum_{l=-j}^{n-1}w_{2l}^{\lambda}. \tag{20}\] Therefore, by (19) we obtain: \[0<q_{j}^{0}\Leftrightarrow\frac{\sum_{l=j}^{n-1}w_{2l+1}^{\lambda}}{\sum_{l= j}^{n-1}w_{2l}^{\lambda}}<\frac{||\mathbf{w}^{1}||_{1}}{||\mathbf{w}^{0}||_{1}}, \quad j=1,\ldots,n-1. \tag{21}\] Now we check \(q_{j}^{1}>0\), for \(j=2-n,\ldots,-1\). 
By (4): \[q_{j}^{1} =\sum_{l=j}^{n-1}a_{l}^{n,0}-a_{l+1}^{n,1}=\sum_{l=j}^{n-1}\frac{ w_{2l}^{\lambda}}{||\mathbf{w}^{0}||_{1}}-\sum_{l=j}^{n-1}\frac{w_{2l+1}^{ \lambda}}{||\mathbf{w}^{1}||_{1}}\] \[=1-\sum_{l=1-n}^{j-1}\frac{w_{2l}^{\lambda}}{||\mathbf{w}^{0}||_{ 1}}-\left(1-\sum_{l=-n}^{j-1}\frac{w_{2l+1}^{\lambda}}{||\mathbf{w}^{1}||_{1}}\right)\] \[=\sum_{l=-n}^{j-1}\frac{w_{2l+1}^{\lambda}}{||\mathbf{w}^{1}||_{1 }}-\sum_{l=1-n}^{j-1}\frac{w_{2l}^{\lambda}}{||\mathbf{w}^{0}||_{1}}.\] Following the same reasoning: \[0<q_{j}^{1}=\sum_{l=-n}^{j-1}\frac{w_{2l+1}^{\lambda}}{||\mathbf{w}^{1}||_{1}} -\sum_{l=1-n}^{j-1}\frac{w_{2l}^{\lambda}}{||\mathbf{w}^{0}||_{1}} \Leftrightarrow\sum_{l=1-n}^{j-1}\frac{w_{2l}^{\lambda}}{||\mathbf{w}^{0}||_{1 }}<\sum_{l=-n}^{j-1}\frac{w_{2l+1}^{\lambda}}{||\mathbf{w}^{1}||_{1}} \Leftrightarrow\frac{||\mathbf{w}^{1}||_{1}}{||\mathbf{w}^{0}||_{1}}<\frac{ \sum_{l=-n}^{j-1}w_{2l+1}^{\lambda}}{\sum_{l=1-n}^{j-1}w_{2l}^{\lambda}}.\] Again, by (20), we have \[0<q_{j}^{1}\Leftrightarrow\frac{||\mathbf{w}^{1}||_{1}}{||\mathbf{w}^{0}||_{1 }}<\frac{\sum_{l=1-n}^{j}w_{2l-1}^{\lambda}}{\sum_{l=1-n}^{j-1}w_{2l}^{\lambda }}=\frac{\sum_{l=-j}^{n-1}w_{2l+1}^{\lambda}}{\sum_{l=1-j}^{n-1}w_{2l}^{ \lambda}},\quad j=2-n,\ldots,-1. \tag{22}\] Collecting conditions (21) and (22), we get the result: \[\frac{\sum_{l=j_{0}}^{n-1}w_{2l+1}^{\lambda}}{\sum_{l=j_{0}}^{n-1}w_{2l}^{ \lambda}}<\frac{||\mathbf{w}^{1}||_{1}}{||\mathbf{w}^{0}||_{1}}<\frac{\sum_{l= j_{1}}^{n-1}w_{2l+1}^{\lambda}}{\sum_{l=j_{1}+1}^{n-1}w_{2l}^{\lambda}},\] with \(j_{0}=1,\ldots,n-1\) and \(j_{1}=1,\ldots,n-2\). **Lemma 4.4**.: _Let \(n\in\mathbb{N}\), \(n\geq 2\), \(\lambda\in(2n-1,2n)\), and \(\omega\) a weight function be. Let us consider_ \[p_{0}^{\omega^{\lambda}}:[0,n-1]\to\mathbb{R},\quad p_{0}^{\omega^{\lambda}}(l): =\frac{\phi\left(\frac{2l+1}{\lambda}\right)}{\phi\left(\frac{2l}{\lambda} \right)},\] _so that \(p_{0}^{\omega^{\lambda}}(l)=\frac{w_{2l+1}^{\lambda}}{w_{2l}^{\lambda}}\) for \(l=0,1,\ldots,n-1\). If \(p_{0}^{\omega^{\lambda}}\) is a decreasing function, then the coefficients of the difference scheme are positive._ Proof.: Note that: \[\frac{||\mathbf{w}^{1}||_{1}}{||\mathbf{w}^{0}||_{1}}=\frac{2\sum_{l=0}^{n-1}w _{2l+1}^{\lambda}}{1+2\sum_{l=0}^{n-1}w_{2l}^{\lambda}}=\frac{\sum_{l=0}^{n-1} w_{2l+1}^{\lambda}}{\frac{1}{2}+\sum_{l=0}^{n-1}w_{2l}^{\lambda}}\] Consider this basic property: For any \(a,b,c,d>0\), \[\frac{a}{b}\leq\frac{c}{d}\Rightarrow\frac{a}{b}\leq\frac{a+c}{b+d}\leq\frac{ c}{d}. 
\tag{23}\] Firstly, since \(p_{0}^{\omega^{\lambda}}\) is decreasing, we get by (23): \[\frac{w_{2n-1}^{\lambda}}{w_{2n-2}^{\lambda}}=p_{0}^{\omega^{\lambda}}(n-1) \leq p_{0}^{\omega^{\lambda}}(n-2)=\frac{w_{2n-3}^{\lambda}}{w_{2n-4}^{ \lambda}}\Rightarrow\frac{w_{2n-1}^{\lambda}}{w_{2n-2}^{\lambda}}\leq\frac{w _{2n-1}^{\lambda}+w_{2n-3}^{\lambda}}{w_{2n-2}^{\lambda}+w_{2n-4}^{\lambda}} \leq\frac{w_{2n-3}^{\lambda}}{w_{2n-2}^{\lambda}+w_{2n-4}^{\lambda}}\leq\frac {w_{2n-3}^{\lambda}}{w_{2n-4}^{\lambda}}.\] And, again using the monotony of function \(p_{0}^{\omega^{\lambda}}\) and (23): \[\frac{w_{2n-1}^{\lambda}+w_{2n-3}^{\lambda}}{w_{2n-2}^{\lambda}+w_{2n-4}^{ \lambda}}\leq\frac{w_{2n-3}^{\lambda}}{w_{2n-4}^{\lambda}}\leq\frac{w_{2n-5}^{ \lambda}}{w_{2n-6}^{\lambda}}\Rightarrow\frac{w_{2n-1}^{\lambda}+w_{2n-3}^{ \lambda}}{w_{2n-2}^{\lambda}+w_{2n-4}^{\lambda}}\leq\frac{w_{2n-1}^{\lambda}+w _{2n-3}^{\lambda}+w_{2n-5}^{\lambda}}{w_{2n-2}^{\lambda}+w_{2n-4}^{\lambda}+w _{2n-6}^{\lambda}}\leq\frac{w_{2n-5}^{\lambda}}{w_{2n-6}^{\lambda}}\] Repeating this process, we get by (23): \[\frac{w_{2n-1}^{\lambda}}{w_{2n-2}^{\lambda}}\leq\frac{w_{2n-1}^{ \lambda}+w_{2n-3}^{\lambda}}{w_{2n-2}^{\lambda}+w_{2n-4}^{\lambda}}\leq\ldots \leq\frac{\sum_{l=1}^{n-1}w_{2l+1}^{\lambda}}{\sum_{l=1}^{n-1}w_{2l}^{\lambda}} \leq\frac{w_{1}^{\lambda}}{w_{0}^{\lambda}}\Rightarrow\] \[\frac{w_{2n-1}^{\lambda}}{w_{2n-2}^{\lambda}}\leq\frac{w_{2n-1}^{ \lambda}+w_{2n-3}^{\lambda}}{w_{2n-2}^{\lambda}+w_{2n-4}^{\lambda}}\leq\ldots \leq\frac{\sum_{l=0}^{n-1}w_{2l+1}^{\lambda}}{w_{0}^{\lambda}+\sum_{l=1}^{n-1 }w_{2l}^{\lambda}}<\frac{\sum_{l=0}^{n-1}w_{2l+1}^{\lambda}}{\frac{1}{2}+\sum_ {l=1}^{n-1}w_{2l}^{\lambda}}=\frac{||\mathbf{w}^{1}||_{1}}{||\mathbf{w}^{0}||_ {1}}.\] Secondly, we define \(p_{1}^{\omega^{\lambda}}:[0,n-2]\to\mathbb{R}\) as \[p_{1}^{\omega^{\lambda}}(l)=1/p_{0}^{\omega^{\lambda}}(l+1/2),\quad\forall l \in[0,n-2],\] which is an increasing function since \(p_{0}^{\omega^{\lambda}}\) is decreasing. We have that \[\frac{w_{2n-5}^{\lambda}}{w_{2n-4}^{\lambda}}=p_{1}^{\omega^{\lambda}}(n-3) \leq p_{1}^{\omega^{\lambda}}(n-2)=\frac{w_{2n-3}^{\lambda}}{w_{2n-2}^{ \lambda}}<\frac{w_{2n-3}^{\lambda}+w_{2n-1}^{\lambda}}{w_{2n-2}^{\lambda}} \Rightarrow\frac{w_{2n-5}^{\lambda}}{w_{2n-4}^{\lambda}}<\frac{w_{2n-1}^{ \lambda}+w_{2n-3}^{\lambda}+w_{2n-5}^{\lambda}}{w_{2n-2}^{\lambda}+w_{2n-4}^{ \lambda}}<\frac{w_{2n-3}^{\lambda}+w_{2n-1}^{\lambda}}{w_{2n-2}^{\lambda}}.\] Again, using the same strategy, we get: \[\frac{w_{1}^{\lambda}}{w_{2}^{\lambda}}<\frac{\sum_{l=1}^{n-1}w_{2 l+1}^{\lambda}}{\sum_{l=2}^{n-1}w_{2l}^{\lambda}}<\ldots<\frac{w_{2n-3}^{ \lambda}+w_{2n-1}^{\lambda}}{w_{2n-2}^{\lambda}}\Rightarrow\frac{\sum_{l=0}^{n-1 }w_{2l+1}^{\lambda}}{\sum_{l=1}^{n-1}w_{2l}^{\lambda}}<\frac{\sum_{l=1}^{n-1}w_ {2l+1}^{\lambda}}{\sum_{l=2}^{n-1}w_{2l}^{\lambda}}<\ldots<\frac{w_{2n-3}^{ \lambda}+w_{2n-1}^{\lambda}}{w_{2n-2}^{\lambda}}\Rightarrow\] \[\frac{||\mathbf{w}^{1}||_{1}}{||\mathbf{w}^{0}||_{1}}=\frac{ \sum_{l=0}^{n-1}w_{2l+1}^{\lambda}}{\frac{1}{2}+\sum_{l=1}^{n-1}w_{2l}^{\lambda}} <\frac{\sum_{l=0}^{n-1}w_{2l+1}^{\lambda}}{\sum_{l=1}^{n-1}w_{2l}^{\lambda}}< \frac{\sum_{l=1}^{n-1}w_{2l+1}^{\lambda}}{\sum_{l=2}^{n-1}w_{2l}^{\lambda}}< \ldots<\frac{w_{2n-3}^{\lambda}+w_{2n-1}^{\lambda}}{w_{2n-2}^{\lambda}}.\] Then, by Lemma 4.3, we conclude that the coefficients of the difference scheme, (4), are positive. Next lemma allows to easily check the monotonicity of \(p_{0}^{\omega^{\lambda}}\). 
**Lemma 4.5**.: _Let \(n\geq 2\) and \(\lambda\in(2n-1,2n)\) be. If \(\phi:[0,1]\to[0,1]\) is continuous and differentiable in \((0,1)\) and the quotient function \(\phi^{\prime}/\phi\) is decreasing, then \(p_{0}^{\omega^{\lambda}}\) is decreasing._ Proof.: By hypothesis, the function \(p_{0}^{\omega^{\lambda}}(l)=\frac{\phi((2l+1)/\lambda)}{\phi(2l/\lambda)}\) is continuous for \([0,n-1]\) and differentiable in \((0,n-1)\) (observe that we are considering \(l\) a real number here). Hence, it is decreasing provided that its derivative is negative. In addition, \[p_{0}^{\omega^{\lambda}}(l)=\frac{2}{\lambda}\frac{\phi^{\prime} ((2l+1)/\lambda)\phi(2l/\lambda)-\phi((2l+1)/\lambda)\phi^{\prime}(2l/\lambda )}{\phi(2l/\lambda)^{2}}<0\] \[\Leftrightarrow \phi^{\prime}((2l+1)/\lambda)\phi(2l/\lambda)-\phi((2l+1)/ \lambda)\phi^{\prime}(2l/\lambda)<0\] \[\Leftrightarrow \frac{\phi^{\prime}((2l+1)/\lambda)}{\phi((2l+1)/\lambda)}< \frac{\phi^{\prime}(2l/\lambda)}{\phi(2l/\lambda)},\] and \(\phi^{\prime}/\phi\) is decreasing by hypothesis. Therefore, we prove the following corollary. **Corollary 4.6** (\(\mathcal{C}^{1}\) limit functions).: _Let \(n\in\mathbb{N}\), \(n\geq 2\) and \(\lambda\in(2n-1,2n)\) be. The scheme \(S_{1,\mathbf{w}^{\lambda}}\) is \(\mathcal{C}^{1}\) for \(\phi(x)=1\) and for any weight function \(\phi(x)=(1-x^{p})^{q}\) with \(p\geq 1\) and \(q>0\)._ Proof.: From Table 2, the function \(p_{0}^{\omega^{\lambda}}\) is decreasing for any \(\phi(x)=(1-x^{p})^{q}\) with \(p\geq 1\) and \(q>0\). Then, by Lemma 4.3 the coefficients of the difference scheme are positive and by Proposition 4.1 the subdivision scheme \(S_{2\mathbf{q}}\) is convergent. The case \(\phi(x)=1\) is studied in [14]. In order to finish this section, we study two properties. Firstly, we analyse if the new family of schemes conserves monotonicity. In our case, the result presented by Yad-Shalom in [25] can be used: **Proposition 4.7**.: _Let \(S_{\mathbf{a}}\) be a convergent subdivision scheme and \(S_{\mathbf{q}}\) its corresponding difference scheme with a positive mask. If the initial data, \(\mathbf{f}^{0}\), is non-decreasing then the limit function \(S^{\infty}\mathbf{f}\) is non-decreasing._ With this proposition, we can enunciate the following corollary. **Corollary 4.8** (Monotonicity preservation).: _For \(\lambda\in(1,+\infty)\backslash\mathbb{N}\) and any weight function introduced in Table 1, the scheme \(S_{1,\mathbf{w}^{\lambda}}\) conserves the monotonicity._ Finally, when the initial data presents an isolated discontinuity and a linear subdivision scheme is applied several times some non-desirable effects may appear near the discontinuity, some kind of Gibbs phenomenon (see e.g. [1]). In [1] it is proved that if the mask of the scheme is non-negative then the Gibbs phenomenon does not appear in the limit function. **Corollary 4.9** (Avoiding Gibbs phenomenon).: _For \(\lambda\in(1,+\infty)\backslash\mathbb{N}\), the scheme \(S_{1,\mathbf{w}^{\lambda}}\) avoids the Gibbs phenomenon._ In Section 9, we present some examples checking these theoretical results. For \(d=0,1\), the resulting mask is positive and we have used classic tools to study its properties. However, for \(d\geq 2\), the mask are no longer positive. In the next section, we will develop a novel technique based on numerical integration for this goal and we will apply it to prove the convergence of the schemes based on weighted-least squares. 
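The properties established in this section are easy to experiment with numerically. The following is a minimal sketch (Python/NumPy assumed) of one refinement step of \(S_{1,\mathbf{w}^{\lambda}}\) for \(2n-1<\lambda<2n\), taking the even and odd rules to be the normalized weights at even and odd offsets, respectively (cf. (17)–(18) and the expressions used in the proofs above); the helper name `refine_step`, the choice of weight function and the sample data are illustrative assumptions, and boundary handling is simply truncated to interior points.

```python
import numpy as np

def refine_step(f, lam, phi):
    """One refinement step of S_{1, w^lambda} (degrees d = 0, 1): every new
    value is a normalized weighted average of the old ones; the even rule
    uses the weights w_{2l}, the odd rule the weights w_{2l-1}.
    Only new points with a full stencil are returned (no boundary rule)."""
    n = int(np.ceil(lam / 2))                 # 2n - 1 < lam < 2n
    l0 = np.arange(1 - n, n)                  # even-rule offsets
    l1 = np.arange(1 - n, n + 1)              # odd-rule offsets
    a0 = phi(np.abs(2 * l0) / lam)
    a1 = phi(np.abs(2 * l1 - 1) / lam)
    a0, a1 = a0 / a0.sum(), a1 / a1.sum()     # normalization: constants are reproduced
    out = []
    for j in range(n - 1, len(f) - n):        # indices with a full stencil
        out.append(a0 @ f[j + l0])            # new even point, attached to f_j
        out.append(a1 @ f[j + l1])            # new odd point, between f_j and f_{j+1}
    return np.array(out)

# non-decreasing data stays non-decreasing after refinement (cf. Corollary 4.8)
tria = lambda u: 1.0 - u                      # 'tria' weight on [0, 1]
f0 = np.array([0.0, 0.1, 0.15, 0.4, 1.0, 1.8, 2.0, 2.05, 2.1, 3.0, 3.2])
print(np.all(np.diff(refine_step(f0, lam=3.7, phi=tria)) >= 0))
```

Since the masks are positive for these weights and \(\lambda\), every refined value is a convex combination of the initial ones, which is also the intuition behind Corollaries 4.8 and 4.9.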
## 5 A tool for the convergence analysis The purpose of this section is to provide new theoretical results to analyse the convergence. In Section 4, the convergence was easily proven by the positivity of the mask. However, in Section 6 we will prove the convergence of the scheme based on the regression with polynomials of degrees \(d=2,3\), which are no longer positive, so that we cannot follow the same strategy. Nevertheless, as a consequence of Lemma 3.3, the sub-masks can be seen as the evaluation of a second degree polynomial and this fact is advantageous and we will take profit of it in this section. For any particular value of \(n\), a fixed \(\omega\) and considering some \(\lambda_{n}\) such that \(2n-1<\lambda_{n}<2n+1\), \(\lambda_{n}\neq 2n\), it can be easily computed the difference scheme using the formula (4) and checked if its norm is less than \(1\), which would imply convergence. Let us call this method the _direct inspection_. But it serves to prove convergence only for the chosen \(n\), and we wish to prove it for all \(n\in\mathbb{N}\). Our strategy will consist in proving converge asymptotically, that is, to prove convergence for \(\forall n>n_{0}\), for some \(n_{0}\in\mathbb{N}\), and then check the converge for each \(n\leq n_{0}\) by direct inspection. First, we would like to give a general idea about this asymptotic convergence. Thanks to the properties of the space of polynomials \(\Pi_{d}\), the problem (6) can be formulated using equidistant knots in the interval \([-1,1]\), such as \[\hat{\boldsymbol{\beta}}^{i}=\operatorname*{arg\,min}_{\boldsymbol{\beta}\in \mathbb{R}^{d+1}}\sum_{l=1-n}^{n-1+i}\omega((2l-i)/\lambda_{n})L_{p}(f_{j+l}^{ k},A\left(\frac{2l-i}{2n}\right)^{T}\boldsymbol{\beta}),\quad i=0,1.\] The last sum is, in fact, a composite integration rule. So that, if \(n\to\infty\), then \(2n/\lambda_{n}\to 1\) and the problem _seems_ (this is not a rigorous argument, but it serves to understand the situation) to converge to \[\operatorname*{arg\,min}_{\boldsymbol{\beta}\in\mathbb{R}^{d+1}}\int_{-1}^{ 1}L_{p}(f(x),A(x)^{T}\boldsymbol{\beta})\ \omega(x)dx,\] for both \(i=0,1\). On the one hand, the given data is now a function \(f(x)\) which is approximated by a polynomial \(A(x)^{T}\boldsymbol{\beta}\in\Pi_{d}\) in the \(L_{p}\) norm with a weight function \(\omega\). On the other hand, by Lemma 3.3 the corresponding subdivision sub-masks, say \(\mathbf{a}^{n,i}\), fulfils \(a_{l}^{n,i}=\omega((2l-i)/\lambda_{n})^{-1}A(x)^{T}\boldsymbol{\alpha}^{n,i}\), for some coefficients \(\boldsymbol{\alpha}^{n,i}\in\mathbb{R}^{d+1}\). Then, the sub-masks also seem to converge to some continuos function, if some normalization is performed since the sub-masks supports increase with \(n\) (see later Remark 5.1 and Section 6 for more details). The results presented in this section exploit this kind of situations. From now on, we consider a family of subdivision schemes \(\{S_{\mathbf{a}^{n}}\}_{n=1}^{\infty}\) as in (3). The results in this section allow to prove convergence for \(n>n_{0}\), for some \(n_{0}\in\mathbb{N}\), and also provides the value of \(n_{0}\), so that it can be checked convergence for \(n\leq n_{0}\) by direct inspection. Combining both proofs, we obtain convergence for all \(n\in\mathbb{N}\). In particular, \(\lim_{n\to\infty}\|S_{\mathbf{q}^{n}}\|_{\infty}\) will be computed, which ensures the asymptotic convergence when that limit is less than \(1\). 
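As a concrete illustration of this direct inspection, the sketch below (Python/NumPy assumed) builds the two sub-masks from the normal-equation form \(\mathbf{a}^{n,i}=\mathbf{W}^{i}\mathbf{X}^{i}((\mathbf{X}^{i})^{T}\mathbf{W}^{i}\mathbf{X}^{i})^{-1}\mathbf{e}_{1}\) of Lemma 3.3, assembles the difference coefficients through the partial sums of (4) used in the proofs, and evaluates \(\max\{\|\mathbf{q}^{n,0}\|_{1},\|\mathbf{q}^{n,1}\|_{1}\}\). The function names are ours and the abscissae convention \(2l-i\) is an assumption consistent with Section 6; exact (rational) arithmetic could replace floating point if a rigorous certificate is needed.

```python
import numpy as np

def wls_mask(d, lam, phi, i):
    """Sub-mask of S_{d, w^lambda} for the even (i = 0) or odd (i = 1) rule,
    via a = W X (X^T W X)^{-1} e_1 with abscissae 2l - i (half-grid units).
    For d = 3 the even rule needs n >= 3, otherwise the normal matrix is singular."""
    n = int(np.ceil(lam / 2))                    # 2n - 1 < lam < 2n
    l = np.arange(1 - n, n + i)                  # stencil offsets
    x = (2 * l - i).astype(float)
    w = phi(np.abs(x) / lam)
    X = np.vander(x, d + 1, increasing=True)     # rows A(x_l)^T = [1, x_l, ..., x_l^d]
    e1 = np.zeros(d + 1); e1[0] = 1.0
    alpha = np.linalg.solve(X.T @ (w[:, None] * X), e1)
    return w * (X @ alpha)                       # a_l = w_l * A(x_l)^T alpha

def direct_inspection(d, lam, phi):
    """Norm of the difference scheme; a value < 1 implies convergence."""
    a0 = wls_mask(d, lam, phi, 0)                # offsets 1-n, ..., n-1
    a1 = wls_mask(d, lam, phi, 1)                # offsets 1-n, ..., n
    q0 = np.cumsum(a0 - a1[:-1])                 # q_j^0 = sum_{l <= j} (a_l^0 - a_l^1)
    q1 = np.cumsum((a0 - a1[1:])[::-1])[::-1]    # q_j^1 = sum_{l >= j} (a_l^0 - a_{l+1}^1)
    return max(np.abs(q0).sum(), np.abs(q1).sum())

print(direct_inspection(d=3, lam=5.8, phi=lambda u: np.ones_like(u)))  # expected < 1
```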
Here we denote by \(\mathbf{a}^{n,0},\mathbf{a}^{n,1},\mathbf{q}^{n,0},\mathbf{q}^{n,1}\) the sub-masks of the masks \(\mathbf{a}^{n},\mathbf{q}^{n}\). **Theorem 5.1**.: _Let \(\{S_{\mathbf{a}^{n}}\}_{n=1}^{\infty}\) be a sequence of subdivision schemes that reproduces \(\Pi_{0}\), which odd rules are longer than (or as long as) the even rules, as in (3). Let \(r:[-1,1]\to\mathbb{R}\) be a \(\mathcal{C}^{1}\) function and let \(R(t):=\int_{-1}^{t}r(s)ds\) be. If_ \[a_{j}^{n,0}-a_{j}^{n,1} =r(j/n)n^{-2}+\varepsilon_{j}^{n}, j=1-n,\ldots,L_{n}, \tag{24}\] \[|\varepsilon_{j}^{n}| \leq\mu n^{-\alpha}, j=1-n,\ldots,L_{n},\] (25) \[\|R\|_{1} =\int_{-1}^{1}|R(t)|dt<1, \tag{26}\] _for some \(\alpha>2\), \(\mu>0\), then the first sub-masks of the difference schemes fulfil_ \[\lim_{n\to\infty}\|\mathbf{q}^{n,0}\|_{1}=\lim_{n\to\infty}\sum_{l=1-n}^{L_{n} }|q_{l}^{n,0}|\leq\|R\|_{1},\] _thus there exists \(n_{0}\in\mathbb{N}\) such that_ \[\|\mathbf{q}^{n,0}\|_{1}<1,\quad\forall n>n_{0}. \tag{27}\] _Moreover, if (25) holds true for \(\alpha=3\), then_ \[n_{0}=\begin{cases}\frac{\sqrt{(\|r\|_{\infty}+2(\mu+\|r^{\prime}\|_{\infty}))^ {2}+4(\|R\|_{1}-1)(\mu+\|r^{\prime}\|_{\infty})}+\|r\|_{\infty}+2(\mu+\|r^{ \prime}\|_{\infty})}{2(1-\|R\|_{1})},&\text{if }L_{n}=n-1,\\ \frac{\sqrt{(\|r\|_{\infty}+2(\mu+\|r^{\prime}\|_{\infty}))^{2}+4(1-\|R\|_{1}) \mu}+\|r\|_{\infty}+2(\mu+\|r^{\prime}\|_{\infty})}{2(1-\|R\|_{1})},&\text{if }L_{n}=n,\end{cases} \tag{28}\] _where_ \[\|r\|_{\infty}=\max_{t\in[-1,1]}|r(t)|,\quad\|r^{\prime}\|_{\infty}=\max_{t\in[-1,1]}|r^{\prime}(t)|.\] Proof.: First, we may write \(q_{j}^{n,0}\) in terms of \(r\): \[q_{j}^{n,0}=\sum_{l=1-n}^{j}\{a_{l}^{n,0}-a_{l}^{n,1}\}=\sum_{l=1-n}^{j}\{r(l/ n)n^{-2}+\varepsilon_{l}^{n}\}.\] Using the composite (backward) rectangle rule, we obtain \[n^{-1}\sum_{l=1-n}^{j}r(l/n)=\int_{-1}^{j/n}r(t)dt+\theta_{j}^{n}=R(j/n)+\theta _{j}^{n},\] where \(\theta_{j}^{n}\) is the integration error, which fulfils \(|\theta_{j}^{n}|\leq n^{-1}\|r^{\prime}\|_{\infty}\). Then, \[q_{j}^{n,0}=n^{-1}R(j/n)+n^{-1}\theta_{j}^{n}+\sum_{l=1-n}^{j}\varepsilon_{l} ^{n}.\] With this computation, we will prove (27) first: \[\|\mathbf{q}^{n,0}\|_{1} =\sum_{j=1-n}^{L_{n}}|n^{-1}R(j/n)+n^{-1}\theta_{j}^{n}+\sum_{l=1- n}^{j}\varepsilon_{l}^{n}|\] \[\leq n^{-1}\sum_{j=1-n}^{L_{n}}|R(j/n)|+n^{-1}\sum_{j=1-n}^{L_{n} }|\theta_{j}^{n}|+\sum_{j=1-n}^{L_{n}}\sum_{l=1-n}^{j}|\varepsilon_{l}^{n}|.\] Now, if \(L_{n}=n-1\), we use that \(R(-1)=0\) and the composite (forward) rectangle rule, thus obtaining that \[n^{-1}\sum_{j=1-n}^{L_{n}}|R(j/n)|=n^{-1}\sum_{j=-n}^{n-1}|R(j/n)|=\int_{-1}^{ 1}|R(t)|dt+\rho^{n}=\|R\|_{1}+\rho^{n},\] where \(\rho^{n}\) is the integration error of \(R(t)\), \[|\rho^{n}|\leq n^{-1}\max_{t\in[-1,1]}|R^{\prime}(t)|=n^{-1}\|r\|_{\infty}.\] If \(L_{n}=n\), we use the composite (backward) rectangle rule and we obtain a similar result: \[n^{-1}\sum_{j=1-n}^{L_{n}}|R(j/n)|=n^{-1}\sum_{j=1-n}^{n}|R(j/n)|=\|R\|_{1}+ \widetilde{\rho}^{n},\quad|\widetilde{\rho}^{n}|\leq n^{-1}\|r\|_{\infty}.\] Using all the upper bounds we found, we obtain: \[\|\mathbf{q}^{n,0}\|_{1}\leq\|R\|_{1}+n^{-1}\|r\|_{\infty}+n^{-2}(L_{n}+n)\|r ^{\prime}\|_{\infty}+\frac{1}{2}(L_{n}+n)(L_{n}+n+1)\mu n^{-\alpha}. \tag{29}\] From here we deduce that, if \(\alpha>2\), then the limit when \(n\to\infty\) of the right part of (29) is \(\|R\|_{1}\), which is less than \(1\). Hence, there exists \(n_{0}\geq 1\) such that \(\|\mathbf{q}^{n,0}\|_{1}<1\), \(\forall n>n_{0}\). 
In particular, for \(\alpha=3\), we can find for which value of \(n_{0}\) the right part of (29) is equal to \(1\), by solving a second degree equation, arriving to (28). _Remark 5.1_.: In practice, if the expressions of \(a_{j}^{n,0},a_{j}^{n,1}\) are well defined for any \(j\in\mathbb{R}\) (this is the case of \(S_{3,\mathbf{w}^{\lambda}}\), see (35)), then a practical way to compute \(r(t)\) is \[r(t):=\lim_{n\to\infty}(a_{tn}^{n,0}-a_{tn}^{n,1})n^{2}.\] In Section 6, a complete example of the application of the results of this section will be performed. A similar condition will be derived from the last result to ensure that \(\|\mathbf{q}^{n,1}\|_{1}<1\). First, we prove a result that will be useful for symmetric subdivision operators. **Theorem 5.2**.: _Let \(\{S_{\mathbf{a}^{n}}\}_{n=1}^{\infty}\) be as in \((\ref{eq:1})\) and consider a flipped version of them, \(\{S_{\mathbf{a}^{n}}\}_{n=1}^{\infty}\), defined as_ \[\bar{a}_{j}^{n,0} :=a_{L_{n}+1-n-j}^{n,0} j=1-n,\ldots,L_{n},\] \[\bar{a}_{j}^{n,1} :=a_{L_{n}+2-n-j}^{n,1} j=1-n,\ldots,L_{n}+1.\] _Then_ \[q_{j}^{n,0}=\bar{q}_{L_{n}+1-n-j}^{n,1},\qquad\|\mathbf{q}^{n,0}\|_{1}=\| \bar{\mathbf{q}}^{n,1}\|_{1}.\] _Moreover, \(\{S_{\mathbf{a}^{n}}\}_{n=1}^{\infty}\) fulfil the conditions of Theorem 5.1 if, and only if, \(\{S_{\bar{\mathbf{a}}^{n}}\}_{n=1}^{\infty}\) fulfil_ \[\bar{a}_{j}^{n,0}-\bar{a}_{j+1}^{n,1} =\bar{r}(j/n)n^{-2}+\bar{\varepsilon}_{j}^{n}, j=1-n,\ldots,L_{n}, \tag{30}\] \[|\bar{\varepsilon}_{j}^{n}| \leq\mu n^{-\alpha}, j=1-n,\ldots,L_{n},\] (31) \[\|\bar{R}\|_{1} <1, \tag{32}\] _where \(\bar{r}(t):=r(-t)\), \(\bar{\varepsilon}_{j}^{n}=\varepsilon_{-j}^{n}\) and \(\bar{R}(t):=\int_{t}^{1}\bar{r}(s)ds=R(-t)\)._ Proof.: Observe that \[\bar{a}_{j}^{n,0}-\bar{a}_{j+1}^{n,1} =a_{L_{n}+1-n-j}^{n,0}-a_{L_{n}+2-n-(j+1)}^{n,1}=a_{L_{n}+1-n-j}^{ n,0}-a_{L_{n}+1-n-j}^{n,1},\qquad j=1-n,\ldots,L_{n},\] so that, defining \(\bar{r}(t):=r(-t)\), \(\bar{\varepsilon}_{j}^{n}:=\varepsilon_{-j}^{n}\), the equivalence between (24)-(25) and (30)-(31) is clear. Then \[\bar{R}(t)=\int_{t}^{1}\bar{r}(s)ds=\int_{t}^{1}r(-s)ds\stackrel{{ [u=-s]}}{{=}}\int_{-t}^{-1}-r(u)du=\int_{-1}^{-t}r(u)du=R(-t),\] and \[\int_{-1}^{1}|\bar{R}(t)|dt=\int_{-1}^{1}|R(-t)|\,dt=\int_{-1}^{1}|R(t)|\,dt,\] thus, the equivalence between (26) and (32) also holds true. On the other hand, \(S_{\mathbf{a}}^{n}\) reproduces \(\Pi_{0}\) if, and only if, \(S_{\bar{\mathbf{a}}^{n}}\) does. Hence, the finite difference scheme exists and can be computed with the formula (4). \[\bar{q}_{j}^{n,0}=\sum_{l=1-n}^{j}a_{L_{n}+1-n-l}^{n,0}-a_{L_{n}+2-n-l}^{n,1} \stackrel{{[k=L_{n}\pm 1-n-l]}}{{=}}\sum_{k=L_{n}+1-n-j}^{L_{n}}a_{k}^{n,0}-a_{k+1}^{n,1}=q_{L_{n}+1-n-j}^{n,1}.\] Hence, \[\sum_{j=1-n}^{L_{n}}|q_{j}^{n,1}|=\sum_{j=1-n}^{L_{n}}|q_{L_{n}+1-n-j}^{n,1}|= \sum_{j=1-n}^{L_{n}}|\bar{q}_{j}^{n,0}|.\] Since \(R(t)=\bar{R}(-t)\) and \(r(t)=\bar{r}(-t)\), we deduce that the formula to compute \(n_{0}\), (28), can be used here as well. The next result is a direct consequence of the previous one. **Corollary 5.3**.: _Let \(\{S_{\mathbf{a}^{n}}\}_{n=1}^{\infty}\) be, as in \((\ref{eq:1})\), that reproduce \(\Pi_{0}\). Let \(r:[-1,1]\to\mathbb{R}\) be a \(\mathcal{C}^{1}\) function and let \(R(t):=\int_{t}^{1}r(s)ds\) be. 
If_ \[a_{j}^{n,0}-a_{j+1}^{n,1} =r(j/n)n^{-2}+\varepsilon_{j}^{n}, j=1-n,\ldots,L_{n},\] \[|\varepsilon_{j}^{n}| \leq\mu n^{-\alpha}, j=1-n,\ldots,L_{n},\] \[\|R\|_{1} <1,\] _for some \(\alpha>2\), \(\mu>0\), then there exists \(n_{0}\in\mathbb{N}\) such that_ \[\|\mathbf{q}^{n,1}\|_{1}<1,\quad\forall n\geq n_{0}.\] _In case that \(\alpha=3\), \(n_{0}\) can be obtained as in (28)._ Proof.: By the Theorem 5.2, the flipped version of this scheme fulfils Theorem 5.1 and the claimed inequality is true. For odd-symmetric subdivision operators, due to Theorem 5.2, the satisfaction of the hypothesis of Theorem 5.1 or Corollary 5.3 is sufficient to ensure convergence. **Theorem 5.4**.: _Let \(\{S_{\mathbf{a}^{n}}\}_{n=1}^{\infty}\) be a set of odd-symmetric subdivision schemes fulfilling the hypothesis of Theorem 5.1. Then, the subdivision scheme \(S_{\mathbf{a}^{n}}\) is convergent if \(n>n_{0}\) with \(n_{0}\) as in (28)._ ## 6 WLPR-Subdivision schemes for \(d=2,3\) We consider \(\{\lambda_{n}\}_{n\geq 2}\) such that \(2n-1<\lambda_{n}<2n\), then \(L_{n}=1-n\). The following computations could be done for \(2n<\lambda_{n}<2n+1\) as well. First, we compute the coefficients of \(S_{3,\mathbf{w}^{\lambda}}\) (denote it by \(S^{n}\) from now on). According to Lemma 3.3, the sub-masks are \(\mathbf{a}^{n,i}=\mathbf{W}^{i}\mathbf{X}^{i}\boldsymbol{\alpha}^{i}\), \(i=0,1\), where \(\boldsymbol{\alpha}^{i}=((\mathbf{X}^{i})^{T}\mathbf{W}^{i}\mathbf{X}^{i})^{- 1}\mathbf{e}_{1}\). Then, to compute \(\alpha\) we may solve the system \[(\mathbf{X}^{i})^{T}\mathbf{W}^{i}\mathbf{X}^{i}\boldsymbol{\alpha}^{i}= \mathbf{e}_{1}.\] We start with \(i=1\). Using (11) and the symmetry of \(\mathbf{w}^{1}\) and \(\mathbf{x}^{1}\), \[(\mathbf{X}^{1})^{T}\mathbf{W}^{1}\mathbf{X}^{1}=\left(\begin{array}{ll}\| \mathbf{w}^{1}\|_{1}&0&2\sum_{i=1}^{n}w_{2i-1}^{\lambda}(2i-1)^{2}\\ 0&2\sum_{i=1}^{n}w_{2i-1}^{\lambda}(2i-1)^{2}&0\\ 2\sum_{i=1}^{n}w_{2i-1}^{\lambda}(2i-1)^{2}&0&2\sum_{i=1}^{n}w_{2i-1}^{\lambda} (2i-1)^{4}\end{array}\right),\] \[\Delta^{1}:=\left|(\mathbf{X}^{1})^{T}\mathbf{W}^{1}\mathbf{X}^{1}\right|=4 \left(\|\mathbf{w}^{1}\|_{1}\sum_{i=1}^{n}w_{2i-1}^{\lambda}(2i-1)^{4}-2\left( \sum_{i=1}^{n}w_{2i-1}^{\lambda}(2i-1)^{2}\right)^{2}\right)\sum_{i=1}^{n}w_{2 i-1}^{\lambda}(2i-1)^{2}.\] Hence, using the Kramer's formula, the three coefficients of \(\boldsymbol{\alpha}^{1}\) are: \[\alpha_{0}^{1} =(\Delta^{1})^{-1}\left|\begin{array}{ll}1&0&2\sum_{l=1}^{n}w_ {2l-1}^{\lambda}(2l-1)^{2}\\ 0&2\sum_{l=1}^{n}w_{2l-1}^{\lambda}(2l-1)^{2}&0\\ 0&2\sum_{l=1}^{n}w_{2l-1}^{\lambda}(2l-1)^{4}\end{array}\right|\] \[=4(\Delta^{1})^{-1}\left(\sum_{l=1}^{n}w_{2l-1}^{\lambda}(2l-1)^{ 2}\right)\left(\sum_{l=1}^{n}w_{2l-1}^{\lambda}(2l-1)^{4}\right)\] \[=\frac{\sum_{l=1}^{n}w_{2l-1}^{\lambda}(2l-1)^{4}}{\|\mathbf{w}^{1 }\|_{1}\sum_{l=1}^{n}w_{2l-1}^{\lambda}(2l-1)^{4}-2\left(\sum_{l=1}^{n}w_{2l-1} ^{\lambda}(2l-1)^{2}\right)^{2}}\] \[=\frac{\sum_{l=1}^{n}w_{2l-1}^{\lambda}(l-\frac{1}{2})^{4}}{\| \mathbf{w}^{1}\|_{1}\sum_{l=1}^{n}w_{2l-1}^{\lambda}(l-\frac{1}{2})^{4}-2\left( \sum_{l=1}^{n}w_{2l-1}^{\lambda}(l-\frac{1}{2})^{2}\right)^{2}},\] \[\alpha_{1}^{1} =0,\] \[\alpha_{2}^{1} =(\Delta^{1})^{-1}\left|\begin{array}{ll}\|\mathbf{w}^{1}\|_{1} &0&1\\ 0&2\sum_{l=1}^{n}w_{2l-1}^{\lambda}(2l-1)^{2}&0\\ 2\sum_{l=1}^{n}w_{2l-1}^{\lambda}(2l-1)^{2}&0&0\end{array}\right|\] \[=-4(\Delta^{1})^{-1}\left(\sum_{l=1}^{n}w_{2l-1}^{\lambda}(2l-1)^ {2}\right)^{2}\] \[=-\frac{\sum_{l=1}^{n}w_{2l-1}^{\lambda}(2l-1)^{2}}{\|\mathbf{w}^{ 
1}\|_{1}\sum_{l=1}^{n}w_{2l-1}^{\lambda}(2l-1)^{4}-2\left(\sum_{l=1}^{n}w_{2l-1 }^{\lambda}(2l-1)^{2}\right)^{2}}\] \[=-\frac{1}{4}\frac{\sum_{l=1}^{n}w_{2l-1}^{\lambda}(l-\frac{1}{2 })^{2}}{\|\mathbf{w}^{1}\|_{1}\sum_{l=1}^{n}w_{2l-1}^{\lambda}(l-\frac{1}{2})^ {4}-2\left(\sum_{l=1}^{n}w_{2l-1}^{\lambda}(l-\frac{1}{2})^{2}\right)^{2}}.\] Then, by (16), the sub-mask coefficients are \[a_{j}^{n,1} =w_{2j-1}^{\lambda}(\alpha_{0}^{1}+\alpha_{2}^{1}(2j-1)^{2})=w_{2j-1} ^{\lambda}(\alpha_{0}^{1}+4\alpha_{2}^{1}(j-\frac{1}{2})^{2})\] \[=w_{2j-1}^{\lambda}\frac{\sum_{l=1}^{n}w_{2l-1}^{\lambda}(l-\frac {1}{2})^{4}-(j-\frac{1}{2})^{2}\sum_{l=1}^{n}w_{2l-1}^{\lambda}(l-\frac{1}{2}) ^{2}}{\|\mathbf{w}^{1}\|_{1}\sum_{l=1}^{n}w_{2l-1}^{\lambda}(l-\frac{1}{2})^{4 }-2\left(\sum_{l=1}^{n}w_{2l-1}^{\lambda}(l-\frac{1}{2})^{2}\right)^{2}}, j=1-n,\ldots,n. \tag{33}\] Similarly, \[a_{j}^{n,0}=w_{2j}^{\lambda}(\alpha_{0}^{0}+4\alpha_{2}^{0}j^{2})=w_{2j}^{ \lambda}\frac{\sum_{l=1}^{n-1}w_{2l}^{\lambda}l^{4}-j^{2}\sum_{l=1}^{n-1}w_{2 l}^{\lambda}l^{2}}{\|\mathbf{w}^{0}\|_{1}\sum_{l=1}^{n-1}w_{2l}^{\lambda}l^{4}-2 \left(\sum_{l=1}^{n-1}w_{2l}^{\lambda}l^{2}\right)^{2}}, j=1-n,\ldots,n-1, \tag{34}\] where \[\alpha_{0}^{0}=\frac{\sum_{l=1}^{n-1}w_{2l}^{\lambda}l^{4}}{\|\mathbf{w}^{0}\| _{1}\sum_{l=1}^{n-1}w_{2l}^{\lambda}l^{4}-2\left(\sum_{l=1}^{n-1}w_{2l}^{ \lambda}l^{2}\right)^{2}},\quad\alpha_{2}^{0}=-\frac{1}{4}\frac{\sum_{l=1}^{n -1}w_{2l}^{\lambda}l^{2}}{\|\mathbf{w}^{1}\|_{1}\sum_{l=1}^{n-1}w_{2l}^{ \lambda}l^{4}-2\left(\sum_{l=1}^{n-1}w_{2l}^{\lambda}l^{2}\right)^{2}}.\] We first prove convergence \(\forall n\geq 2\) in the simplest case, \(\phi(x)=1\), in order to be used to the new convergence analysis tools, and later we discuss the general case. Convergence of the subdivision schemes based on weighted least squares with \(d=2,3\) and \(\phi(x)=1\) In this case, \(w_{l}=1\), \(1-2n\leq l\leq 2n-1\), so that the mask coefficients can be simplified to \[a_{j}^{n,0} =-\frac{3\left(5j^{2}-3n^{2}+3n+1\right)}{8n^{3}-12n^{2}-2n+3}, \qquad j=-n+1,\ldots,n-1, \tag{35}\] \[a_{j}^{n,1} =\frac{15(j-1)j-9n^{2}+9}{8n-8n^{3}},\qquad j=-n+1,\ldots,n.\] It can be easily checked that these operators are odd-symmetric, which for sure we knew by Lemma 3.5. Hence, to prove convergence we can apply Theorem 5.4. Observe that the algebraic expressions of \(a_{j}^{n,0}\) and \(a_{j}^{n,1}\) are well defined even for \(j\in\mathbb{R}\). Then, for any \(t\in[-1,1]\), we define \[r(t):=\lim_{n\to\infty}(a_{tn}^{n,0}-a_{tn}^{n,1})n^{2}=-\frac{45t^{2}}{16}- \frac{15t}{8}+\frac{9}{16},\] that we obtained with the aid of a symbolic computation program. We also computed that \[n^{3}\varepsilon_{j}^{n}= n^{3}(a_{j}^{n,0}-a_{j}^{n,1}-r(j/n)n^{-2})= \tag{36}\] \[3\rho(n)^{-1}(-120j^{2}n^{4}-120j^{2}n^{3}+225j^{2}n^{2}+30j^{2}n -45j^{2}-80jn^{4}+120jn^{3}\] \[+20jn^{2}-30jn+32n^{6}-12n^{5}-41n^{4}+12n^{3}+9n^{2}),\] where \[\rho(n):=16(n-1)n(n+1)(2n-3)(2n-1)(2n+1).\] Now we should find \(\mu\) such that \(|\varepsilon_{j}^{n}n^{3}|\leq\mu\) for \(1-n\leq j\leq n-1\). 
On the one hand, \[\rho(n)>16(n-2)^{3}(2n-4)(2n-4)(2n-4)=128(n-2)^{6}\geq 0,\qquad\forall n\geq 2.\] On the other hand, the numerator of (36) can be easily bounded using that \(|j|\leq n\) and increasing to \(6\) the degree of every monomial: \[|\rho(n)n^{3}\varepsilon_{j}^{n}/3| \leq 120n^{6}+120n^{6}+225n^{6}+30n^{6}+45n^{6}+80n^{6}+120n^{6}\] \[+20n^{6}+30n^{6}+32n^{6}+12n^{6}+41n^{6}+12n^{6}+9n^{6}\] \[=896n^{6}.\] As conclusion, \[|n^{3}\varepsilon_{j}^{n}|\leq 3\frac{896n^{6}}{128(n-2)^{6}}=\frac{21n^{6}}{(n- 2)^{6}},\quad\forall n>2.\] Then, for any \(n_{1}\geq 3\), \[|\varepsilon_{j}^{n}|\leq n^{-3}\mu_{1},\quad\mu_{1}=\frac{21n_{1}^{6}}{(n_{1} -2)^{6}},\qquad\forall n\geq n_{1}. \tag{37}\] To compute \(n_{0}\), it is also necessary to compute: \[\|R\|_{1} =\int_{-1}^{1}\left|\int_{-1}^{t}r(s)ds\right|dt=\frac{1}{10} \left(3\sqrt{15}-5\right)\simeq 0.661895,\] \[\|r\|_{\infty} =\max_{t\in[-1,1]}|r(t)|=33/8,\quad\|r^{\prime}\|_{\infty}=\max_ {t\in[-1,1]}|r^{\prime}(t)|=15/2.\] Now, using formula (28) (case \(L_{n}=n-1\)) for \(\mu=\mu_{1}\), \[n_{0}=\frac{1}{6}\left(\sqrt{15}+5\right)\left(\frac{42n_{1}^{6}}{(n_{1}-2)^{ 6}}+\sqrt{\left(\frac{42n_{1}^{6}}{(n_{1}-2)^{6}}+\frac{153}{8}\right)^{2}+ \frac{6}{5}\left(\sqrt{15}-5\right)\left(\frac{21n_{1}^{6}}{(n_{1}-2)^{6}}+ \frac{15}{2}\right)}+\frac{153}{8}\right).\] It is desirable to prove convergence for as much values of \(n\) as possible, so \(n_{1}\) should be chosen such that \(n_{0}\) is as small as possible, but greater or equal than \(n_{1}\), due to (37). We computationally found that the compromise is achieved for \(n_{1}=188\), leading to \(n_{0}\simeq 188.506\). Hence, according to Theorem 5.4, the subdivision schemes are convergent for \(n\geq 189\). For smaller values of \(n\), we have computationally checked that \[\|\mathbf{q}^{n,0}\|_{1}=\|\mathbf{q}^{n,1}\|_{1}\leq 29/42\simeq 0.690476, \qquad\forall 2\leq n\leq 189.\] This symbolic computation is quick and without rounding errors, so this can be considered a rigorous proof of the convergence. We can perform some additional computations in order to provide an upper bound of \(\|\mathbf{q}^{n,0}\|_{1}=\|\mathbf{q}^{n,1}\|_{1}\) valid for any \(n\geq 2\). According to (29), \[\|\mathbf{q}^{n,0}\|_{1}\leq\frac{1}{10}(3\sqrt{15}-5)+n^{-1}33/8+(2n-1)n^{-2} 15/2+(2n^{-1}-n^{-2})21\left(\frac{94}{93}\right)^{6},\quad\forall n\geq 189.\] We checked that the right side is less than \(29/42\) for any \(n\geq 2236\), and we explicitly computed that for \(n\leq 2236\), \(\|\mathbf{q}^{n,0}\|_{1}\leq 29/42\). As conclusion, \[\|\mathbf{q}^{n,0}\|_{1}=\|\mathbf{q}^{n,1}\|_{1}\leq 29/42,\qquad\forall n\geq 2,\] and the equality is reached only for \(n=4\). We tried to prove \(\mathcal{C}^{1}\) regularity with this technique by applying the results to the divided difference schemes, \(S_{2\mathbf{q}^{n}}\), but they do not satisfy (24). Convergence of the subdivision schemes based on weighted least squares with \(d=2,3\) and a general function \(\phi(x)\) In this situation, we will study the convergence only for large \(n\) values, so that we will not calculate \(n_{0}\), because we have not been able to perform the _direct inspection_ without specifying \(\phi\). 
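Before treating general \(\phi\), note that the closed-form \(r(t)\) quoted above for \(\phi(x)=1\) can be reproduced with any computer algebra system; a minimal sketch with SymPy (the paper does not state which symbolic tool was used) is:

```python
import sympy as sp

n = sp.symbols('n', positive=True)
t = sp.symbols('t', real=True)
j = t * n
a0 = -3*(5*j**2 - 3*n**2 + 3*n + 1) / (8*n**3 - 12*n**2 - 2*n + 3)   # even rule in (35)
a1 = (15*(j - 1)*j - 9*n**2 + 9) / (8*n - 8*n**3)                    # odd rule in (35)
r = sp.limit(sp.together((a0 - a1) * n**2), n, sp.oo)
print(sp.expand(r))   # -45*t**2/16 - 15*t/8 + 9/16
```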
In order to compute \(r(t):=\lim_{n\to\infty}(a_{tn}^{n,0}-a_{tn}^{n,1})n^{2}\), we will define a \(\mathcal{C}^{1}\) function \(U_{j}^{n}\) such that \(a_{j}^{n,i}=U_{j}^{n}(1-i/2),\,i=0,1\), which will allow to write \[a_{tn}^{n,0}-a_{tn}^{n,1}=U_{tn}^{n}(1)-U_{tn}^{n}(1/2)=\frac{1}{2}(U_{tn}^{n })^{\prime}(\xi_{t,n}),\quad\xi_{t,n}\in(1/2,1).\] For that purpose, we define \(\sigma_{\lambda_{n}}(\phi,x,k):=\sum_{l=1}^{n}\phi(\frac{l-x}{\lambda_{n}/2})(l-x)^ {k}\), \(k\in\mathbb{N}\), \(x\in[1/2,1]\). Recall that \(w_{0}^{\lambda_{n}}=1\), \(w_{l}^{\lambda_{n}}=\omega\left(\frac{l}{\lambda_{n}}\right)\) and \(\omega(x)=\phi(|x|)\). Observe that the sub-masks (33) and (34) can be expressed as \[a_{j}^{n,1} =\phi\left(\frac{j-1/2}{\lambda_{n}/2}\right)\frac{\sigma_{\lambda _{n}}(\phi,1/2,4)-(j-\frac{1}{2})^{2}\sigma_{\lambda_{n}}(\phi,1/2,2)}{\| \mathbf{w}^{\mathbf{1}}\|_{1}\sigma_{\lambda_{n}}(\phi,1/2,4)-2\sigma_{\lambda _{n}}(\phi,1/2,2)^{2}}=U_{j}^{n}(1/2),\] \[a_{j}^{n,0} =w_{2j}^{\lambda_{n}}\frac{\sum_{l=1}^{n}w_{2(l-1)}^{2}(l-1)^{4} -j^{2}\sum_{l=1}^{n}w_{2(l-1)}^{\lambda_{n}}(l-1)^{2}}{\|\mathbf{w}^{\mathbf{ 0}}\|_{1}\sum_{l=1}^{n}w_{2(l-1)}^{\lambda_{n}}(l-1)^{4}-2\left(\sum_{l=1}^{n} w_{2(l-1)}^{\lambda_{n}}(l-1)^{2}\right)^{2}}\] \[=\phi\left(\frac{j-0}{\lambda_{n}/2}\right)\frac{\sigma_{\lambda _{n}}(\phi,1,4)-(j-0)^{2}\sigma_{\lambda_{n}}(\phi,1,2)}{\|\mathbf{w}^{\mathbf{ 0}}\|_{1}\sigma_{\lambda_{n}}(\phi,1,4))-2\sigma_{\lambda_{n}}(\phi,1,2)^{2}}= U_{j}^{n}(1).\] Thus, we may define the link function as \[U_{j}^{n}(x):=\phi\left(\frac{j+x-1}{\lambda_{n}/2}\right)\frac{\sigma_{ \lambda_{n}}(\phi,x,4)-(j+x-1)^{2}\sigma_{\lambda_{n}}(\phi,x,2)}{(\|\mathbf{ w}^{\mathbf{1}}\|_{1}+(2x-1)(\|\mathbf{w}^{\mathbf{0}}\|_{1}-\|\mathbf{w}^{ \mathbf{1}}\|_{1}))\sigma_{\lambda_{n}}(\phi,x,4)-2\sigma_{\lambda_{n}}(\phi, x,2)^{2}}.\] Observe that \(U_{j}^{n}\in\mathcal{C}^{1}([1/2,1])\) provided that \(\phi\in\mathcal{C}^{1}((0,1))\) (\(\phi^{\prime}\) may not exist at \(0\) or \(1\)). To follow more easily the next computations, we write \(U_{j}^{n}(x)=\phi(\frac{j+x-1}{\lambda_{n}/2})U_{\mathrm{num}}(x)/U_{\mathrm{ den}}(x)\), where \(U_{\mathrm{num}}(x),U_{\mathrm{den}}(x)\) are the numerator and denominator that appear in the last formula. Taking into account that \[\frac{\partial}{\partial x}\sigma_{\lambda_{n}}(\phi,x,k)=-\frac{2}{\lambda_{n }}\sigma_{\lambda_{n}}(\phi^{\prime},x,k)-k\sigma_{\lambda_{n}}(\phi,x,k-1), \qquad k>1,\] we proceed to compute the derivative. 
\[(U_{j}^{n})^{\prime}(x)=\frac{2}{\lambda_{n}}\phi^{\prime}\left(\frac{j+x-1}{ \lambda_{n}/2}\right)\frac{U_{\mathrm{num}}(x)}{U_{\mathrm{den}}(x)}+\phi \left(\frac{j+x-1}{\lambda_{n}/2}\right)\frac{U_{\mathrm{num}}^{\prime}(x)}{U_ {\mathrm{den}}(x)}-\phi\left(\frac{j+x-1}{\lambda_{n}/2}\right)\frac{U_{ \mathrm{num}}(x)U_{\mathrm{den}}^{\prime}(x)}{U_{\mathrm{den}}^{2}(x)},\] where \[U_{\mathrm{num}}^{\prime}(x)= -\frac{2}{\lambda_{n}}\sigma_{\lambda_{n}}(\phi^{\prime},x,4)-4 \sigma_{\lambda_{n}}(\phi,x,3)-2(j+x-1)\sigma_{\lambda_{n}}(\phi,x,2)\] \[-(j+x-1)^{2}\left(-\frac{2}{\lambda_{n}}\sigma_{\lambda_{n}}(\phi ^{\prime},x,2)-2\sigma_{\lambda_{n}}(\phi,x,1)\right),\] \[U_{\mathrm{den}}^{\prime}(x)= 2(\|\mathbf{w}^{0}\|_{1}-\|\mathbf{w}^{1}\|_{1})\sigma_{\lambda _{n}}(\phi,x,4)+(\|\mathbf{w}^{1}\|_{1}+(2x-1)(\|\mathbf{w}^{0}\|_{1}-\|\mathbf{ w}^{1}\|_{1}))\left(-\frac{2}{\lambda_{n}}\sigma_{\lambda_{n}}(\phi^{\prime},x,4)-4 \sigma_{\lambda_{n}}(\phi,x,3)\right)\] \[-4\sigma_{\lambda_{n}}(\phi,x,2)\left(-\frac{2}{\lambda_{n}} \sigma_{\lambda_{n}}(\phi^{\prime},x,2)-2\sigma_{\lambda_{n}}(\phi,x,1)\right).\] Finally, we proceed to compute \(r(t)=\lim_{n\to\infty}\frac{n^{2}}{2}(U_{tn}^{n})^{\prime}(\xi_{t,n}).\) To this purpose, we define \[I_{k}(\phi):=\int_{0}^{1}\phi(x)x^{k}dx,\quad k\in\mathbb{N},\] we observe \(\lim_{n\to\infty}2n/\lambda_{n}=1\) and we use the following composite integration rule \[n^{-k-1}\sigma_{\lambda_{n}}(\phi,x,k)=n^{-1}\sum_{l=1}^{n}\phi\left(\frac{l-x }{n}\frac{2n}{\lambda_{n}}\right)\left(\frac{l-x}{n}\right)^{k}=I_{k}(\phi)+ \mathcal{O}(n^{-1}),\qquad\forall k\in\mathbb{N}\cup\{0\},\quad\forall x\in[ \frac{1}{2},1].\] Defining \(\sigma_{\lambda_{n}}(\phi,x,0):=\sum_{l=1}^{n}\phi(\frac{l-x}{\lambda_{n}/2})\), so that \(\frac{\partial\sigma_{\lambda_{n}}}{\partial x}(\phi,x,0)=-\frac{2}{\lambda_{n} }\sigma_{\lambda_{n}}(\phi^{\prime},x,0)\), we note that \[n^{-1}\|\mathbf{w}^{1}\|_{1}=2n^{-1}\sum_{l=1}^{n}\phi(\frac{l-1/2}{\lambda_{n} /2})=2n^{-1}\sigma_{\lambda_{n}}(\phi,1/2,0)=2I_{0}(\phi)+\mathcal{O}(n^{-1}), \quad i=0,1,\] \[\sigma_{\lambda_{n}}(\phi,1,0)-\sigma_{\lambda_{n}}(\phi,1/2,0)=\frac{1}{2}\frac{ \partial\sigma_{\lambda_{n}}}{\partial x}(\phi,\xi_{n},0)=-\frac{1}{2}\frac{2}{ \lambda_{n}}\sigma_{\lambda_{n}}(\phi^{\prime},\xi_{n},0)=-\frac{1}{2}\int_{0 }^{1}\phi^{\prime}(x)dx+\mathcal{O}(n^{-1})=\frac{1}{2}(\phi(0)-\phi(1))+ \mathcal{O}(n^{-1}),\] so that \[\|\mathbf{w}^{0}\|_{1}-\|\mathbf{w}^{1}\|_{0}=2\sigma_{\lambda_{n}}(\phi,1/2,1 )-\phi(0)-2\sigma_{\lambda_{n}}(\phi,1/2,0)=-\phi(1)+\mathcal{O}(n^{-1}).\] Taking these comments into account and taking \(j=tn\), we find out that \[\lim_{n\to\infty}\phi\left(\frac{tn+\xi_{t,n}-1}{\lambda_{n}/2} \right)=\phi(t),\quad\lim_{n\to\infty}\phi^{\prime}\left(\frac{tn+\xi_{t,n}-1 }{\lambda_{n}/2}\right)=\phi^{\prime}(t),\] \[\|\mathbf{w}^{1}\|_{1}+(2x-1)(\|\mathbf{w}^{0}\|_{1}-\|\mathbf{w }^{1}\|_{1})=2nI_{0}(\phi)+\mathcal{O}(n^{0})+(2x-1)(-\phi(1)+\mathcal{O}(n^{ -1}))=2nI_{0}(\phi)+\mathcal{O}(n^{0})\] \[U_{\text{num}}(x)=n^{5}I_{4}(\phi)-t^{2}n^{5}I_{2}(\phi)+ \mathcal{O}(n^{4}),\] \[U_{\text{den}}(x)=2n^{6}I_{0}(\phi)I_{4}(\phi)-2n^{6}I_{2}(\phi) ^{2}+\mathcal{O}(n^{4}),\] \[U^{\prime}_{\text{num}}(x)=-n^{4}I_{4}(\phi^{\prime})-4n^{4}I_{3 }(\phi)-2tn^{4}I_{2}(\phi)-t^{2}n^{2}(-n^{2}I_{2}(\phi^{\prime})-2n^{2}I_{1}( \phi))+\mathcal{O}(n^{3}),\] \[U^{\prime}_{\text{den}}(x)=2n^{5}(-\phi(1))I_{4}(\phi)+2nI_{0}( \phi)(-n^{4}I_{4}(\phi^{\prime})-4n^{4}I_{3}(\phi))-4n^{3}I_{2}(\phi)(-n^{2}I_ 
{2}(\phi^{\prime})-2n^{2}I_{1}(\phi))+\mathcal{O}(n^{4}).\] Hence, \[r(t) =\lim_{n\to\infty}\frac{1}{2}n^{2}(U^{n}_{tn})^{\prime}(\xi_{t,n} )=\frac{1}{2}\phi^{\prime}(t)\lim_{n\to\infty}n\frac{n^{5}}{n^{6}}\frac{I_{4}( \phi)-t^{2}I_{2}(\phi)}{2I_{0}(\phi)I_{4}(\phi)-2I_{2}(\phi)^{2}}\] \[+\frac{1}{2}\phi(t)\lim_{n\to\infty}n^{2}\frac{n^{4}}{n^{6}}\frac {-I_{4}(\phi^{\prime})-4I_{3}(\phi)-2tI_{2}(\phi)-t^{2}(-I_{2}(\phi^{\prime})- 2I_{1}(\phi))}{2I_{0}(\phi)I_{4}(\phi)-2I_{2}(\phi)^{2}}\] \[-\frac{1}{2}\phi(t)\lim_{n\to\infty}n^{2}n^{5}(I_{4}(\phi)-t^{2}I _{2}(\phi))\cdot\frac{n^{5}}{n^{12}}\frac{2(-\phi(1))I_{4}(\phi)+2I_{0}(\phi)( -I_{4}(\phi^{\prime})-4I_{3}(\phi))-4I_{2}(\phi)(-I_{2}(\phi^{\prime})-2I_{1} (\phi))}{(2I_{0}(\phi)I_{4}(\phi)-2I_{2}(\phi)^{2})^{2}}\] \[=\frac{1}{4}\phi^{\prime}(t)\frac{I_{4}(\phi)-t^{2}I_{2}(\phi)}{I _{0}(\phi)I_{4}(\phi)-I_{2}(\phi)^{2}}-\frac{1}{4}\phi(t)\frac{I_{4}(\phi^{ \prime})+4I_{3}(\phi)+2tI_{2}(\phi)-t^{2}(I_{2}(\phi^{\prime})+2I_{1}(\phi))}{ I_{0}(\phi)I_{4}(\phi)-I_{2}(\phi)^{2}}\] \[-\frac{1}{4}\phi(t)(I_{4}(\phi)-t^{2}I_{2}(\phi))\frac{(-\phi(1))I _{4}(\phi)-I_{0}(\phi)(I_{4}(\phi^{\prime})+4I_{3}(\phi))+2I_{2}(\phi)(I_{2}( \phi^{\prime})+2I_{1}(\phi))}{(I_{0}(\phi)I_{4}(\phi)-I_{2}(\phi)^{2})^{2}}.\] Clearly, the former expression is valid provided that \(I_{0}(\phi)I_{4}(\phi)-I_{2}(\phi)^{2}\neq 0\). Fortunately, we can use the Schwartz's inequality for the inner product \(\langle f,g\rangle:=\int_{0}^{1}f(x)g(x)\phi(x)dx\) to deduced that \[I_{0}(\phi)I_{4}(\phi)-I_{2}(\phi)^{2}=\langle 1,1\rangle\langle x^{2},x^{2} \rangle-\langle 1,x^{2}\rangle^{2}>0.\] We gather in Table 3 the computation of \(r(t)\) and \(\|R\|_{1}\) for several choices of \(\phi\). Since \(\|R\|_{1}<1\) for all of them, we conclude that, for \(n\) large enough, any of the corresponding subdivision schemes converge. We realized that the value of \(\|R\|_{1}\) could be greater than one for some extreme choices of \(\phi\). An example is \(\phi(x)=1+1000x^{2}\), but thus kind of functions were discarded in Section 3 due to its practical meaning. The next two sections are devoted to study the approximation and the noise suppression capability depending on the chosen weight function ## 7 Approximation capability To study the approximation capability, we consider the subdivision scheme \(S_{d,\mathbf{w}^{\lambda}}\) defined in (7) with \(d\geq 0\) and \(\lambda\) satisfying the conditions requested in Proposition 3.1. Let \(F\in\mathcal{C}^{d+2}\) be and consider the initial data \(\mathbf{f}^{h}=\{f_{j}^{h}\}_{j\in\mathbb{Z}}\) with \(h>0\) and \[f_{j}^{h}=F\left(jh\right),\quad j\in\mathbb{Z}.\] Let \(j_{0}\in\mathbb{Z}\) be any integer, we calculate the approximation error between \((S_{d,\mathbf{w}^{\lambda}}\mathbf{f}^{h})_{2j_{0}+i}\) and \(F((j_{0}+i/2)h)\), with \(i=0,1\), and analyse the largest contribution term. 
By Taylor's theorem, we have that there exist \(p_{i}\in\Pi_{d}\) such that: \[f_{j}^{h}=F(jh)=p_{i}(jh)+\frac{F^{(d+1)}((j_{0}+i/2)h)}{(d+1)!}(j-(j_{0}+i/2))^{d +1}h^{d+1}+\mathcal{O}(h^{d+2}).\] Applying the subdivision operator and considering its polynomial reproduction capability, \[(S_{d,\mathbf{w},\mathbf{x}}\mathbf{f}^{h})_{2j_{0}+i} =\sum_{l=1-n}^{L_{n}+i}a_{l}^{i}f_{j_{0}+l}^{h}=\sum_{l=1-n}^{L_{n} +i}a_{l}^{i}\left(p_{i}((j_{0}+l)h)+\frac{F^{(d+1)}((j_{0}+i/2)h)}{(d+1)!}(l-i/2 )^{d+1}h^{d+1}+\mathcal{O}(h^{d+2})\right)\] \[=\sum_{l=1-n}^{L_{n}+i}a_{l}^{i}p_{i}((j_{0}+l)h)+\frac{F^{(d+1)}( (j_{0}+i/2)h)}{(d+1)!}h^{d+1}\sum_{l=1-n}^{L_{n}+i}a_{l}^{i}(l-i/2)^{d+1}+ \mathcal{O}(nh^{d+2})\] \[=p_{i}((j_{0}+i/2)h)+\frac{F^{(d+1)}((j_{0}+i/2)h)}{(d+1)!}h^{d+1} \sum_{l=1-n}^{L_{n}+i}a_{l}^{i}(l-i/2)^{d+1}+\mathcal{O}(nh^{d+2})\] \[=F((j_{0}+i/2)h)+\frac{F^{(d+1)}((j_{0}+i/2)h)}{(d+1)!}h^{d+1} \sum_{l=1-n}^{L_{n}+i}a_{l}^{i}(l-i/2)^{d+1}+\mathcal{O}(nh^{d+2})\] Therefore, the largest contribution to the approximation error is given by \[\frac{F^{(d+1)}((j_{0}+i/2)h)}{(d+1)!}h^{d+1}\sum_{l=1-n}^{L_{n}+i}a_{l}^{i}(l -i/2)^{d+1}.\] We conclude that if two linear schemes are given, with the same approximation order, then the scheme with lesser value of \[\eta=\max\left\{\sum_{l=1-n}^{L_{n}}a_{l}^{0}d^{d+1},\sum_{l=1-n}^{L_{n}+1}a_{ l}^{1}(l-\frac{1}{2})^{d+1}\right\}\] \begin{table} \begin{tabular}{l l l} \(\phi(x)\) & \(r(t)\) & \(\|R\|_{1}\) \\ \hline 1 & \(-\frac{45t^{2}}{16}-\frac{15t}{8}+\frac{9}{16}\) & \(\frac{1}{10}\left(3\sqrt{15}-5\right)\simeq 0.661895\) \\ \(1-x\) & \(\frac{45t|t|}{7}-\frac{68m(t)}{7}\) & \(\frac{30t}{7}\) & \(\frac{1}{10}\left(32\sqrt{10}-59\right)\simeq 0.602756\) \\ \(1-x^{2}\) & \(\frac{105t^{3}}{16}-\frac{75t}{16}\) & \(\frac{12\sqrt{5}}{7}-\frac{1}{2}\simeq 0.622263\) \\ \((1-x^{2})^{2}\) & \(-\frac{945t^{5}}{64}+\frac{735t^{3}}{32}-\frac{525t}{64}\) & \(\frac{1}{36}\left(23\sqrt{3}-18\right)\simeq 0.606588\) \\ \((1-x^{3})^{3}\) & \(\frac{889350t|t|^{9}}{32099}-\frac{229635t|t|^{7}}{32099}-\frac{1940400t|t|^{6} }{32099}+\frac{459270t|t|^{4}}{32099}\) & \(\frac{308899569297600972310}{1241212509364900000}\) \\ \((1-x^{2})^{3}\) & \(\frac{3465t^{7}}{128}-\frac{8505t^{3}}{128}+\frac{6615t^{3}}{128}-\frac{1575t}{1 28}\) & \(\frac{2799\sqrt{5}}{1331}-\frac{1}{2}\simeq 0.598219\) \\ \((1-x^{p})^{q}\) & (large expression involving \(\Gamma\) function) \\ \(e^{-x}\) & \(\frac{e^{1-|t|}\left((\epsilon(20e-69)+40)((2e-5)t^{2}-24e+65)8m(t)-2(e-4)(11e-3 0)t^{2}\right)}{4(e(20e-69)+40)^{2}}\) & \\ & \(+\frac{e^{1-|t|}\left(-(2e-5)(e(20e-69)+40)t+4(30-11e)^{2}\right)}{4(e(20e-69)+ 40)^{2}}\) & \(\sim 0.621749\) \\ \(e^{-10x}\) & (explicit but large expression) & \(\sim 0.529404\) \\ \(e^{-\xi x}\) & (explicit but large expression) & \\ \(1+1000x^{2}\) & \(-\frac{23822324150625t^{4}}{734488968098}-\frac{15776250t^{3}}{606007}+\frac{81 269240847795t^{2}}{5875911744784}\) & \\ & \(+\frac{44999895t}{4848056}+\frac{81459819441}{5875911744784}\) & \(\sim 1.00621\) \\ \hline \end{tabular} \end{table} Table 3: The function \(r(t)\) and the value \(\|R\|_{1}\) of Theorem 5.1 for \(S_{3,\mathbf{w}^{h}}\) several choices of \(\phi\) and \(2n-1<\lambda_{n}<2n\). provides better approximators, in general. 
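For a rough numerical illustration, these moments can be evaluated directly from the masks. The sketch below (NumPy assumed) reuses the hypothetical `wls_mask` helper sketched in Section 5 and compares the constant for rect and epan; since the \(d=2\) and \(d=3\) rules coincide, \(d=3\) is used and the relevant exponent is \(d+1=4\); absolute values are taken because only the magnitude of the constant matters here.

```python
import numpy as np

def eta(d, lam, phi):
    """max over the two rules of |sum_l a_l^i (l - i/2)^(d+1)|."""
    n = int(np.ceil(lam / 2))
    vals = []
    for i in (0, 1):
        l = np.arange(1 - n, n + i)
        a = wls_mask(d, lam, phi, i)             # sub-mask from the Section 5 sketch
        vals.append(abs(np.sum(a * (l - i / 2.0) ** (d + 1))))
    return max(vals)

rect = lambda u: np.ones_like(u)
epan = lambda u: 1.0 - u**2
print(eta(3, 9.5, rect), eta(3, 9.5, epan))      # epan expected to be smaller (see Section 8.1)
```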
We observe that, if \(a_{l}^{i}=n^{-1}H(l/n)+\mathcal{O}(n^{-2})\approx n^{-1}H(l/n)\), for some function \(H\), \(i=0,1\), (in that case, \(H(t):=\lim_{n\to\infty}na_{tn}^{i}\)), then \[\sum_{l=1-n}^{L_{n}+i}a_{l}^{i}(l-i/2)^{d+1}=n^{-1}\sum_{l=1-n}^{L_{n}+i}H(l/n )(l-i/2)^{d+1}=n^{d}\sum_{l=1-n}^{L_{n}+i}H(l/n)(l/n-\frac{i}{2n})^{d+1}=n^{d+1} \int_{-1}^{1}t^{d+1}H(t)dt+\mathcal{O}(n^{d}).\] Since the proposed schemes are odd-symmetric, then \(H(t)=H(-t)\) and \(\int_{-1}^{1}t^{d+1}H(t)dt=2I_{d+1}(H)\) and the approximation error is given by \[2h^{d+1}n^{d+1}I_{d+1}(H)\frac{F^{(d+1)}((j_{0}+i/2)h)}{(d+1)!}+\mathcal{O}( nh^{d+2})+\mathcal{O}(n^{d}h^{d+1}), \tag{38}\] which increases with \(h,n\) and \(I_{d+1}(H)\). We will test this formula in Section 9.2. Now, we explore how the selection of \(\phi\) influences \(H\), with the aim of determining which \(\phi\) is the best from an approximation point of view. For \(d=0,1\), it is easy to compute \(H\) from the expression of \(\mathbf{a}^{0},\mathbf{a}^{1}\) in (17) and (18). For instance, for \(2n-1<\lambda<2n\), \[H(t)=\lim_{n\to\infty}na_{tn}^{i}=\lim_{n\to\infty}n\frac{\phi(|2tn+i|/\lambda )}{\sum_{j=1-n}^{L_{n}}\phi(|2j+i|/\lambda)}=\lim_{n\to\infty}n\frac{\phi(|2 tn+i|/\lambda)}{2n\int_{0}^{1}\phi(t)dt+\mathcal{O}(1)}=\frac{\phi(|t|)}{2I_{0}( \phi)}.\] Hence, \(2I_{2}(H)=I_{2}(\phi)/I_{0}(\phi)\). In Table 4, we see that the smallest values are reached for \(\phi(x)=e^{-\xi x}\) with large \(\xi\) and for \(\phi(x)=(1-x^{p})^{q}\) with large \(q\) or small \(p\). We add for comparison \(\|H\|_{2}^{2}\), that according to Section 8, the smaller it is, the greater is its noise reduction capability. We can see for any scheme that the greater is the approximation capability, the smaller is the noise reduction capability. As conclusion, approximation and noise reduction are incompatible, in this sense, and some equilibrium may be found. This is further discussed in Section 8.1. For \(d=2,3\), using the results in Section 6.2: \[H(t)=\phi(|t|)\frac{1}{2}\frac{I_{4}(\phi)-t^{2}I_{2}(\phi)}{I_{0}(\phi)I_{4}( \phi)-I_{2}(\phi)^{2}}.\] Then, \(2I_{4}(H)=-(I_{2}(\phi)I_{6}(\phi)-I_{4}(\phi)^{2})/(I_{0}(\phi)I_{4}(\phi)-I_{ 2}(\phi)^{2})\). The same conclusion can be obtain as in the case \(d=0,1\) from Table 5: The smallest values are reached for \(\phi(x)=e^{-\xi x}\) with large \(\xi\) and for \(\phi(x)=(1-x^{p})^{q}\) with large \(q\) or small \(p\). A great approximation power implies a low noise reduction capability, which will be studied in Section 8.1. ## 8 Noise reduction In this section, we study the application of a subdivision operator to purely noisy data, \(S_{\mathbf{a}}\boldsymbol{\epsilon}\) where all the values \(\epsilon_{j}\) follows a random distribution \(E\), and are mutually uncorrelated. The results of this study can be applied to any data contaminated with noise due to Remark 2.1. A direct result is that \[\|S_{\mathbf{a}}\boldsymbol{\epsilon}\|_{\infty}\leq\|S_{\mathbf{a}}\|_{ \infty}\|\boldsymbol{\epsilon}\|_{\infty}.\] Since \(\|S_{\mathbf{a}}\|_{\infty}\geq 1\) for any convergent schemes, the best condition is reached for \(d=0,1\), for which \(\|S_{d,\mathbf{w}^{\lambda}}\|_{\infty}=1\), since the mask is positive. Hence, it cannot be concluded from this formula that the noise is reduced. To reveal the denoising capabilities, a basic statistical analysis can be carried out. 
If the variance of the refined data is lesser than the variance of the given data, \(\operatorname{var}(E)\), it indicates a reduction of randomness. Using that \[\operatorname{var}(\alpha X+\beta Y)=\alpha^{2}\operatorname{var}(X)+\beta^{ 2}\operatorname{var}(Y),\qquad\alpha,\beta\in\mathbb{R},\] provided that \(X,Y\) are two uncorrelated random distributions, the variance after one subdivision step is \[\operatorname{var}\left(\sum_{l\in\mathbb{Z}}a_{2l+i}E\right)=\sum_{l\in \mathbb{Z}}a_{2l+i}^{2}\operatorname{var}\left(E\right)=\|\mathbf{a}^{i}\|_{ 2}^{2}\operatorname{var}\left(E\right),\quad i=0,1.\] Hence, the variance reduction is given by \[\|S_{\mathbf{a}}\|_{2}^{2}=\max\{\|\mathbf{a}^{0}\|_{2}^{2},\|\mathbf{a}^{1}\|_{ 2}^{2}\}.\] For some schemes studied in this work, this quantity is: For \(d=0,1\), if \(2n-1<\lambda<2n\), \[\|S_{1,\mathbf{w}^{\lambda}}\|_{2}^{2}=\max\left\{\sum_{l=-n+1}^{n-1}\left(\frac {w_{2l}^{\lambda}}{||\mathbf{w}_{0}^{\lambda}||_{1}}\right)^{2},\;\sum_{l=-n+1} ^{n}\left(\frac{w_{2l-1}^{\lambda}}{||\mathbf{w}_{1}^{\lambda}||_{1}}\right)^{ 2}\right\}<1.\] The last quantity is less than one owned to the constant reproduction and the positivity of the coefficients. In case that \(\phi(x)=1\), then \(\|S_{1,\mathbf{rect}^{\lambda}}\|_{2}^{2}=\lfloor\lambda\rfloor^{-1}=(2n-1)^{-1}\), which is the lowest value that can be obtained with a rule of this length. For \(d=2,3\), \(\phi(x)=1\) and \(2n-1<\lambda<2n\), \[\|S_{3,\mathbf{rect}^{\lambda}}\|_{2}^{2}=\frac{9n^{2}-9n-3}{8n^{3}-12n^{2}-2n +3}>(2n-1)^{-1},\qquad\forall n\geq 2,\] which maximum is achieved for \(n=2\) (i.e. \(3<\lambda<4\), corresponding to the interpolatory DD4 scheme), which is \(1\). Two results can be derived: First, if the variance is reduced in each iteration by a factor \(\|S_{d,\mathbf{w}^{\lambda}}\|_{2}^{2}<1\), then the limit function has variance \(0\). Second, since \(\lim_{n\to\infty}\|S_{3,\mathbf{rect}^{\lambda}}\|_{2}^{2}=0\), the noise tends to be completely remove when the mask support tends to \(\infty\). For any choice of \(\phi(x)\), an asymptotic result can be given for the noise reduction using an argument similar to Section 7: If \(a_{l}^{i}=n^{-1}H(l/n)+\mathcal{O}(n^{-2})\), for some function \(H\), \(i=0,1\), then \[\lim_{n\to\infty}\|\mathbf{a}^{i}\|_{2}^{2}=\lim_{n\to\infty}n^{-2}\sum_{l=1-n }^{L_{n}+i}H(l/n)^{2}=n^{-1}\int_{-1}^{1}H(t)^{2}dt,\] so that the noise reduction factor behaves asymptotically as \[\|S_{\mathbf{a}}\|_{2}^{2}=n^{-1}\|H\|_{2}^{2}+\mathcal{O}(n^{-2}).\] Under these assumptions, we observe that the noise is always removed after an iteration when \(n\to\infty\). In the Tables 4 and 5 we compute \(H(t):=\lim_{n\to\infty}na_{tn}^{i}\) and the factor \(\|H\|_{2}^{2}\) for several \(\phi\) functions, \(d=0,1,2,3\). ### An equilibrium between approximating and denoising We have seen that, in order to maximize the approximation and denoising capabilities, the values \(I_{4}(H)\) and \(\|H\|_{2}\) should be minimized. This is a multi-objective minimization problem, which solutions form a Pareto front that we have estimated using the MATLAB optimization toolbox. Here we will only consider the case \(d=2,3\), but a similar analysis can be performed with \(d=0,1\). First, observe Figure 2-left. 
We find out that \(\phi(x)=(1-x^{p})^{q}\) is always more convenient than \(\phi(x)=e^{-\xi x}\), meaning that for each value of \(\xi\) there exists some pair \((p,q)\) for which \(\phi(x)=(1-x^{p})^{q}\) approximates and denoises better than \(\phi(x)=e^{-\xi x}\). It can also be affirm that \(\phi(x)=1\) is in the Pareto front and it the best for noise reduction and the worst for approximating. In the other extreme would be an interpolatory scheme, with the best approximation capability but the worst denoising power. The Pareto-optimal values \((p,q)\) for \(\phi(x)=(1-x^{p})^{q}\) form a curve (see Figure 2-right) which seems to interpolate the integer values \((2,1)\) and \((4,5)\). In conclusion, we recommend the use of rect to obtain the best denoising. However, with epan the noise increases by \(11.11\%\) while the approximation error is reduced by \(44.44\%\) compared to rect. If the approximation is desired to be prioritized, \(\phi(x)=(1-x^{4})^{5}\) is a good choice, since the noise increases by \(31.58\%\) while the approximation error is reduced by \(71.43\%\), compared to rect. The rest of the \((p,q)\) values related to Table 1 are near to be optimal and can be used as well for other approximating-denoising balances. We recommend to never use exp(\(\xi\)). Just to mention that for \(d=0,1\) similar conclusions can be obtained. For that polynomial degrees, the weight functions \(\phi(x)=\exp(-\xi x)\) are also worse than \(\phi(x)=(1-x^{p})^{q}\). The weight function epan is still Pareto optimal, but the pair \((p,q)=(4,5)\) is not. ## 9 Numerical experiments In this section, we present some numerical examples to show how the new schemes work for the generation of curves. We check that the subdivision schemes are convergent for \(d=0,1,2,3\) and that the curve present \(\mathcal{C}^{1}\) smoothness (but not \(\mathcal{G}^{1}\), meaning that kinks can be produced). We analysed the approximating and denoising capabilities to numerically validate the results in Sections 7 and 8. Only for \(d=0,1\), we test the conservation of the monotonicity applying the schemes to fit a non-decreasing initial data. Finally, we perform a numerical test using the discretization of a discontinuous function and observe that the proposed methods avoid Gibbs phenomenon in the neighbourhood of an isolated discontinuity for \(d=0,1\). ### Application to noisy geometric data We start with one of the experiments presented in [14] which consists of a star-shaped curve given by: \[F(t)=(4\cos(t)+\cos(4t),4\sin(t)-\sin(4t)), \tag{39}\] with samples taken at \(t_{j}^{0}=j\pi/25\) with \(j\in\mathbb{Z}\). That is, we consider \(\mathbf{f}^{0}:=F|_{\mathbf{t}^{0}}\), \(\mathbf{t}^{0}=\{t_{j}^{0}\}_{j\in\mathbb{Z}}\), i.e. \(f_{j}^{0}=F(t_{j}^{0})\). Because of the periodicity of the function, we can focus on \(j=0,\ldots,49\). We add Gaussian noise in each component, defining \(\mathbf{\hat{f}}^{0}=\mathbf{f}^{0}+\boldsymbol{\epsilon}^{\sigma}\) with \(\boldsymbol{\epsilon}^{\sigma}=\{(\varepsilon_{j}^{\sigma,1},\varepsilon_{j} ^{\sigma,2})\}_{j=0}^{49}\), being \(\varepsilon_{j}^{\sigma,l}\sim\mathcal{N}(0,\sigma)\), \(l=1,2\), \(j=0,\ldots,49\) and \(\sigma\in\{0.5,1\}\). In Figure 3, we illustrate the results only for two interesting choices of \(\phi\), according to the conclusions in Section 8.1. Nevertheless, the results obtained with the rest of weight functions are graphically similar and they are shown in detail in Table 6. 
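A sketch of how this experiment can be set up (NumPy assumed; for brevity the \(d=0,1\) rules are used, with periodic indexing since the curve is closed; \(\sigma\), \(\lambda\) and the weight function are one of the combinations reported in Table 6):

```python
import numpy as np

def refine_periodic(f, lam, phi):
    """One step of S_{1, w^lambda} applied to closed-curve data of shape (N, 2),
    with periodic index wrapping."""
    n = int(np.ceil(lam / 2))
    l0, l1 = np.arange(1 - n, n), np.arange(1 - n, n + 1)
    a0 = phi(np.abs(2 * l0) / lam); a0 = a0 / a0.sum()
    a1 = phi(np.abs(2 * l1 - 1) / lam); a1 = a1 / a1.sum()
    g = np.empty((2 * len(f),) + f.shape[1:])
    for j in range(len(f)):
        g[2 * j] = a0 @ f.take((j + l0) % len(f), axis=0)
        g[2 * j + 1] = a1 @ f.take((j + l1) % len(f), axis=0)
    return g

rng = np.random.default_rng(0)
t = np.arange(50) * np.pi / 25
F = np.column_stack([4*np.cos(t) + np.cos(4*t), 4*np.sin(t) - np.sin(4*t)])   # (39)
noisy = F + rng.normal(0.0, 0.5, size=F.shape)                                # sigma = 0.5
curve = noisy
for _ in range(5):                                                            # five steps
    curve = refine_periodic(curve, lam=5.8, phi=lambda u: 1.0 - u**2)         # 'epan'
```

Evaluating \(F\) on the refined parameter grid and taking maximum deviations yields quantities analogous to those reported in Table 6.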
In Figure 3, we can see how important the choice of the weight function is to increase the approximation capability (and only losing a bit of denoising capability). In turn, taking \(d=2,3\) gives better approximations and \(\lambda\) can also be increased to reduce noise. Of course, during our study we generated much more graphics than the ones here presented. In some of them, specially in presence of noise, artefacts may appear, such as auto-intersections or kinks, proving that it does not provide \(\mathcal{G}^{1}\) curves, even if the scheme is \(\mathcal{C}^{1}\). By taking \(\lambda\) larger, the artefacts usually disappear and curves become softer. ### Approximation error when \(\lambda\) is being increased In this section we challenge formula (38) with a suited experiment. Let us consider \(G(x)=\cos(\pi x)\) and the initial data \(\mathbf{g}^{0,h}=\{g_{j}^{0,h}\}_{j\in\mathbb{Z}}\) and \(\widetilde{\mathbf{g}}^{0,h}=\{\widetilde{g}_{j}^{0,h}\}_{j\in\mathbb{Z}}\) with \(g_{j}^{0,h}=G(jh)\), \(\widetilde{g}_{j}^{0,h}=g_{j}^{0,h}+\epsilon_{j}\), \(\epsilon_{j}\sim U\left(\left[-\frac{1}{4},\frac{1}{4}\right]\right),\) where \(U(I)\) is the uniform distribution in the interval \(I\). We consider the spacings \(h_{k}=10^{-k}\) and the support parameters \(\lambda_{k}=3.5+10^{k-1}=3.5+0.1/h_{k}\), \(k=1,2,3,4\). The value \(\lambda_{k}\) is modified accordingly to \(h_{k}\) to maintain almost constant the support of the basic limit function, which determines the influence of each data point on the limit function. The results of applying 5 iterations of the scheme \(S_{3,\mathtt{rect}^{\lambda_{k}}}\) to \(\widetilde{\mathbf{g}}^{0,h_{k}}\), for \(k=1,2,3,4\), are shown in Figure 4. On the one hand, it shows how the noise after five iterations tends to \(0\) if \(k\rightarrow\infty\), but slowly, since the variance decay speed is \(\mathcal{O}(n^{-1})\). On the other hand, the approximation error does not decay to zero, as can be observed in Table 7, where the numbers are never smaller than (and seems to tend to) the asymptotic error estimation in (38), which is (for \(j=0\), \(i=0\)) \[\left|2I_{4}(H)\frac{G^{(4)}(0)}{4!}h_{k}^{4}n_{k}^{4}\right|=\frac{3}{35} \frac{|G^{(4)}(0)|}{24}h_{k}^{4}(3+0.1/h_{k})^{4}\overset{k\rightarrow+\infty }{\longrightarrow}\frac{\pi^{4}}{24}\cdot 0.1^{4}\cdot\frac{3}{35}\simeq 3.4789\text{e-}05.\] Figure 3: Several subdivision schemes (by columns) applied to the star-shaped data in (39). In the first row, they are applied to the original data. In the second and third row, the data is contaminated by normal noise with \(\sigma=0.5\) and \(\sigma=1\), respectively. This threshold is not a real constrain in practice, since the noise is usually greater than the approximation error (see first row of Table 7). If an approximation error tending to zero is needed, \(n\propto h^{-\frac{1}{2}}\) can be chosen, for instance. ### Avoiding Gibbs phenomenon In this section we confirm that the subdivision schemes based on weighted-least squares with \(d=0,1\) avoid Gibbs phenomenon, as stated in Corollary 4.9. To study it, we propose the following experiment. We discretize the function: \[f(x)=\left\{\begin{array}{ll}\sin(\pi x),&x\in[0,0.5];\\ -\sin(\pi x),&x\in(0.5,1],\end{array}\right.\] in the interval \([0,1]\) with \(33\) equidistant points, \(x_{i}=i\cdot h\), \(i=0,\ldots,32\) and \(h=\frac{1}{32}\) and apply the subdivision schemes. We show the results in Figure 5. 
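A quick numerical check (NumPy assumed, reusing the hypothetical `refine_step` helper sketched at the end of Section 4, which keeps interior points only): since the \(d=0,1\) masks are positive, every refined value is a convex combination of the initial ones, so no over- or undershoot can appear near the jump.

```python
import numpy as np

x = np.arange(33) / 32
f0 = np.where(x <= 0.5, np.sin(np.pi * x), -np.sin(np.pi * x))
f = f0
for _ in range(5):
    f = refine_step(f, lam=3.7, phi=lambda u: np.ones_like(u))        # 'rect'
print(f.min() >= f0.min() - 1e-12, f.max() <= f0.max() + 1e-12)       # both expected True
```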
It is clearly visualize that the Gibbs phenomenon does not appear around the discontinuity, but there is diffusion, instead. The larger is \(\lambda\), the more diffusion, specially when rect is used. \begin{table} \begin{tabular}{|l|r r r r|r r r r|} \hline & \multicolumn{4}{c|}{\(d=0,1\)} & \multicolumn{4}{c|}{\(d=2,3\)} \\ \(\lambda\) & 3.7 & 5.8 & 9.5 & 15.5 & 3.7 & 5.8 & 9.5 & 15.5 \\ \hline \multicolumn{8}{|c|}{rect} & \multicolumn{4}{c|}{rect} & \multicolumn{4}{c|}{rect} \\ \(\|S^{5}\mathbf{f}^{0}-F|_{\mathbf{t}^{*}}\|_{\infty}\) & 1.943e-1 & 4.578e-1 & 1.095e-0 & 1.844e-0 & 1.487e-3 & 1.038e-2 & 9.402e-2 & 4.899e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{0.5}^{0}\|_{\infty}\) & 7.496e-1 & 5.256e-1 & 3.272e-1 & 2.506e-1 & 1.459e-0 & 9.363e-1 & 7.312e-1 & 4.073e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{1}^{0}\|_{\infty}\) & 1.263e-0 & 9.790e-1 & 6.786e-1 & 4.712e-1 & 3.143e-0 & 1.691e-0 & 1.151e-0 & 9.408e-1 \\ \hline \multicolumn{8}{|c|}{tria} & \multicolumn{4}{c|}{tria} & \multicolumn{4}{c|}{tria} \\ \(\|S^{5}\mathbf{f}^{0}-F|_{\mathbf{t}^{*}}\|_{\infty}\) & 1.158e-1 & 2.695e-1 & 6.393e-1 & 1.254e-0 & 1.487e-3 & 6.683e-3 & 4.927e-2 & 2.624e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{0.5}^{0}\|_{\infty}\) & 7.604e-1 & 6.518e-1 & 4.235e-1 & 2.859e-1 & 1.459e-0 & 9.048e-1 & 8.035e-1 & 5.298e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{1}^{0}\|_{\infty}\) & 1.514e-0 & 1.170e-0 & 8.941e-1 & 6.159e-1 & 3.143e-0 & 1.957e-0 & 1.327e-0 & 1.086e-0 \\ \hline \multicolumn{8}{|c|}{bisq} & \multicolumn{4}{c|}{bisq} & \multicolumn{4}{c|}{bisq} \\ \(\|S^{5}\mathbf{f}^{0}-F|_{\mathbf{t}^{*}}\|_{\infty}\) & 1.012e-1 & 2.363e-1 & 5.648e-1 & 1.152e-0 & 1.487e-3 & 5.986e-3 & 3.876e-2 & 2.157e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{0.5}^{0}\|_{\infty}\) & 7.785e-1 & 6.816e-1 & 4.603e-1 & 2.957e-1 & 1.459e-0 & 9.140e-1 & 8.382e-1 & 5.642e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{0.1}^{0}\|_{\infty}\) & 1.580e-0 & 1.209e-0 & 9.301e-1 & 6.546e-1 & 3.143e-0 & 1.959e-0 & 1.379e-0 & 1.101e-0 \\ \hline \multicolumn{8}{|c|}{trvt} & \multicolumn{4}{c|}{trvt} & \multicolumn{4}{c|}{trvt} \\ \(\|S^{5}\mathbf{f}^{0}-F|_{\mathbf{t}^{*}}\|_{\infty}\) & 7.892e-2 & 1.859e-1 & 4.551e-1 & 9.729e-1 & 1.487e-3 & 4.134e-3 & 2.725e-2 & 1.575e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{0.5}^{0}\|_{\infty}\) & 8.353e-1 & 7.111e-1 & 5.256e-1 & 3.068e-1 & 1.459e-0 & 9.860e-1 & 8.553e-1 & 6.289e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{0.1}^{0}\|_{\infty}\) & 1.794e-0 & 1.290e-0 & 1.002e-0 & 7.340e-1 & 3.143e-0 & 2.128e-0 & 1.440e-0 & 1.130e-0 \\ \hline \multicolumn{8}{|c|}{open} & \multicolumn{4}{c|}{open} & \multicolumn{4}{c|}{open} \\ \(\|S^{5}\mathbf{f}^{0}-F|_{\mathbf{t}^{*}}\|_{\infty}\) & 1.402e-1 & 3.209e-1 & 7.481e-1 & 1.416e-0 & 1.487e-3 & 8.265e-3 & 6.033e-2 & 3.161e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{0.5}^{0}\|_{\infty}\) & 7.497e-1 & 6.224e-1 & 3.738e-1 & 2.832e-1 & 1.459e-0 & 9.054e-1 & 7.975e-1 & 4.861e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{1}^{0}\|_{\infty}\) & 1.395e-0 & 1.113e-0 & 8.341e-1 & 5.575e-1 & 3.143e-0 & 1.798e-0 & 1.279e-0 & 1.051e-0 \\ \hline \multicolumn{8}{|c|}{tcub} & \multicolumn{4}{c|}{tcub} & \multicolumn{4}{c|}{tcub} \\ \(\|S^{5}\mathbf{f}^{0}-F|_{\mathbf{t}^{*}}\|_{\infty}\) & 1.010e-1 & 2.382e-1 & 5.716e-1 & 1.171e-0 & 1.487e-3 & 5.726e-3 & 3.656e-2 & 2.072e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{0.5}^{0}\|_{\infty}\) & 7.787e-1 & 6.872e-1 & 4.554e-1 & 3.023e-1 & 1.459e-0 & 9.214e-1 & 8.547e-1 & 5.677e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{1}^{0}\|_{\infty}\) & 1.547e-0 & 1.203e-0 & 9.221e-1 & 6.453e-1 & 3.143e-0 & 1.935e-0 & 1.391e-0 & 1.104e-0 \\ \hline 
\multicolumn{8}{|c|}{p4q5} & \multicolumn{4}{c|}{p4q5} \\ \(\|S^{5}\mathbf{f}^{0}-F|_{\mathbf{t}^{*}}\|_{\infty}\) & 9.509e-2 & 2.286e-1 & 5.533e-1 & 1.147e-0 & 1.487e-3 & 4.666e-3 & 3.188e-2 & 1.840e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{0.5}^{0}\|_{\infty}\) & 7.928e-1 & 6.993e-1 & 4.649e-1 & 3.080e-1 & 1.459e-0 & 9.606e-1 & 8.729e-1 & 5.885e-1 \\ \(\|S^{5}\mathbf{\epsilon}_{1}^{0}\|_{\infty}\) & 1.569e-0 & 1.214e-0 & 9.299e-1 & 6.542e-1 & 3.143e-0 & 1.984e-0 & 1.413e-0 & 1.118e-0 \\ \hline \end{tabular} \end{table ### Monotonicity Finally, we introduce the last example in order to see numerically that the new family of the schemes conserves the monotonicity of the data, for \(d=0,1\), proved in Corollary 4.8. We apply \(S_{1,\mathsf{rect}}\) and \(S_{1,\mathsf{trwt}}\) to the data collected in Table 8 (see [3]) and obtain Figure 6. ## 10 Conclusions and future work In this work, a family of subdivision schemes based on weighted local polynomial regression has been analysed. We introduced the general form of this type of schemes and prove that the schemes corresponding to the polynomial degrees \(d=2k\) and \(d=2k+1\) coincide, for \(k=0,1,2\ldots\) In particular, we analysed in detail the cases \(d=0,1,2,3\) with positive weight functions, \(\omega\), with compact support. In the first part of the paper, for \(d=0,1\), we took advantage of the positivity of the mask to prove the convergence. Also, under some conditions of the \(\omega\) functions, the \(\mathcal{C}^{1}\) regularity of the limit function was demonstrated. Afterward, some properties were proved as monotonicity and elimination of the Gibbs phenomenon effect. In the second part, we developed a general technique to analyse the convergence of a family of linear schemes and used it in the case \(d=2,3\). The last sections have been dedicated to discussing noise removal and approximation capabilities. We showed how the weight function \(\phi\) determines these properties and that it is not possible to find a \(\phi\) maximizing both capabilities approximation and noise reduction. This led to a multi-objective optimization problem in which optimal solutions were found along a Pareto front. Some numerical tests were presented to confirm the theoretical results. For future works, we can consider the following ideas: The \(\mathcal{C}^{1}\) regularity of the cases \(d=2,3\) were not proven. New theoretical tools such as those presented in Section 5 and their application to these schemes can be done. We considered several weight functions \(\phi\) from the literature. Now that we know the influence of \(\phi\) in the approximation and denoising capabilities, it could be designed \(\phi\) trying to improve them. Taking into account that the noise contribution is usually greater than the approximation error on the final curve, the use of an optimized weight function can be even more interesting than augmenting the polynomial degree, since some properties related to the monotonicity and the Gibbs phenomenon are only available for \(d=0,1\). If the data present some outliers, a different loss function can provide better results. Mustafa et al. in [23] proposed a variation of Dyn's schemes changing the \(\ell^{2}\)-norm by the \(\ell^{1}\)-norm in the polynomial regression but they do not prove their properties. The theoretical study of this scheme, as well as the use of different weight functions, can be considered in the future. ## 11 Declarations ### Conflict of interest The authors declare that they have no conflict of interest. 
#### Data Availability Statement

Data sharing is not applicable to this article, as no datasets were generated or analysed during the current study.
2309.03266
AGNs and Host Galaxies in COSMOS-Web. I. NIRCam Images, PSF Models and Initial Results on X-ray-selected Broad-line AGNs at $0.35\lesssim z \lesssim 3.5$
We present detailed and comprehensive data reduction and point-spread-function (PSF) model construction for all public JWST NIRCam imaging data from the COSMOS-Web treasury program (up to June 2023, totaling 0.28 ${\rm deg}^2$). We show that the NIRCam PSF has significant short-timescale temporal variations and random spatial variations in all four filters (F115W, F150W, F277W, and F444W). Combining NIRCam with archival HST imaging, we perform multiwavelength AGN+host image decomposition to study the properties of 143 X-ray-selected ($L_{\rm bol}=10^{43.6-47.2}$ erg s$^{-1}$) broad-line AGNs at $0.35\lesssim z \lesssim 3.5$. Leveraging the superb resolution, wavelength coverage, and sensitivity of NIRCam, we successfully detect host stellar emission after decomposing the central AGN point source in 142 objects. $\sim 2/3$ AGNs are in star-forming galaxies based on the UVJ diagram, suggesting no instantaneous negative AGN feedback. X-ray-selected broad-line AGN hosts follow a similar stellar mass-size relation as inactive galaxies, albeit with slightly smaller galaxy sizes. We find that although major mergers are rare ($\sim$7-22%) among the sample, more subtle non-axisymmetric features from stellar bars, spiral arms, and minor mergers are ubiquitous, highlighting the importance of secular processes and minor mergers in triggering AGN activity. For a subsample of 30 AGNs at $1<z<2.5$ with black hole mass measurements from single epoch spectra, they follow a similar black hole mass-stellar mass relation as local inactive early-type galaxies but reside preferentially near the upper envelope of nearby AGNs. We caution that selection biases and intrinsic differences of AGN populations at different redshifts may significantly affect their location on the black hole mass-stellar mass plane.
Ming-Yang Zhuang, Junyao Li, Yue Shen
2023-09-06T18:00:01Z
http://arxiv.org/abs/2309.03266v1
AGNs and Host Galaxies in COSMOS-Web. I. NIRCam Images, PSF Models and Initial Results on X-ray-selected Broad-line AGNs at \(0.35\lesssim z\lesssim 3.5\)

###### Abstract

We present detailed and comprehensive data reduction and point-spread-function (PSF) model construction for all public JWST NIRCam imaging data from the COSMOS-Web treasury program (up to June 2023, totaling 0.28 \(\rm deg^{2}\)). We show that the NIRCam PSF has significant short-timescale temporal variations and random spatial variations in all four filters (F115W, F150W, F277W, and F444W). Combining NIRCam with archival HST imaging, we perform multiwavelength AGN+host image decomposition to study the properties of 143 X-ray-selected (\(L_{\rm bol}=10^{43.6-47.2}\) erg s\({}^{-1}\)) broad-line AGNs at \(0.35\lesssim z\lesssim 3.5\). Leveraging the superb resolution, wavelength coverage, and sensitivity of NIRCam, we successfully detect host stellar emission after decomposing the central AGN point source in 142 objects. \(\sim 2/3\) AGNs are in star-forming galaxies based on the UVJ diagram, suggesting no instantaneous negative AGN feedback. X-ray-selected broad-line AGN hosts follow a similar stellar mass-size relation as inactive galaxies, albeit with slightly smaller galaxy sizes. We find that although major mergers are rare (\(\sim 7\)-22%) among the sample, more subtle non-axisymmetric features from stellar bars, spiral arms, and minor mergers are ubiquitous, highlighting the importance of secular processes and minor mergers in triggering AGN activity. For a subsample of 30 AGNs at \(1<z<2.5\) with black hole mass measurements from single epoch spectra, they follow a similar black hole mass-stellar mass relation as local inactive early-type galaxies but reside preferentially near the upper envelope of nearby AGNs. We caution that selection biases and intrinsic differences of AGN populations at different redshifts may significantly affect their location on the black hole mass-stellar mass plane.

## 1 Introduction

The discovery of tight correlations between the masses of the supermassive black holes (BHs) and the properties of their host galaxies (such as stellar velocity dispersion and bulge/total stellar mass) in the local Universe suggests that BHs and galaxies may coevolve with each other (e.g., Magorrian et al., 1998; Gebhardt et al., 2000; Kormendy & Ho, 2013). Popular scenarios propose that the feedback from active galactic nuclei (AGNs) plays an important role in regulating the growth of the BH and its host galaxy by injecting energy and momentum into their environment (e.g., McNamara & Nulsen, 2007; Hopkins et al., 2008; King & Pounds, 2015). However, the details of these feedback processes and their impact on host galaxies are still being debated. Investigating when and how these correlations are established, as well as obtaining robust properties of AGN host galaxies, such as morphology, structure, environment, and stellar population, is key to understanding galaxy and BH evolution. The close track of the cosmic BH accretion history to the cosmic star formation history suggests that star formation and black hole growth are closely connected across cosmic time (e.g., Boyle & Terlevich, 1998; Silverman et al., 2008; Kormendy & Ho, 2013; Madau & Dickinson, 2014), at least in the global sense.
At cosmic noon (\(z\approx 2\)), when star formation and BH accretion reach their peak epoch, we would expect stronger AGN feedback at play, making it the ideal epoch to investigate the cosmic evolution of the BH-galaxy scaling relations. However, previous studies of BH mass (\(M_{\rm BH}\))-host stellar mass (\(M_{*}\)) relations of AGNs during this epoch tend to produce contradictory results, with AGNs lying above, on, or below the local relation (e.g., Borys et al., 2005; Alexander et al., 2008; Jahnke et al., 2009; Merloni et al., 2010; Sun et al., 2015; Suh et al., 2020; Ding et al., 2020; Zhang et al., 2023). Various factors may account for the large, apparent discrepancies among different works, including measurement uncertainties and limited dynamical ranges of \(M_{\rm BH}\) and \(M_{*}\), small sample statistics, and selection biases (e.g., Lauer et al., 2007; Shen & Kelly, 2010; Schulze & Wisotzki, 2011; Shankar et al., 2016; Li et al., 2021).
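The AGN+host image decomposition referred to in the abstract separates a central point source (a scaled PSF model) from the extended host-galaxy light. As a rough, self-contained illustration of the underlying idea only (not the actual pipeline or the empirical PSF models used in this work), a single-band least-squares decomposition of a centred cutout into a point source plus a PSF-convolved Sérsic host might be sketched as follows; all function names, parameter choices, and bounds here are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import fftconvolve

def sersic(shape, x0, y0, Ie, Re, n):
    """Circular Sersic profile on a pixel grid; b_n ~ 2n - 1/3 is a common approximation."""
    yy, xx = np.indices(shape)
    b_n = 2.0 * n - 1.0 / 3.0
    r = np.hypot(xx - x0, yy - y0)
    return Ie * np.exp(-b_n * ((r / Re) ** (1.0 / n) - 1.0))

def model(params, shape, psf):
    """AGN point source (scaled PSF, assumed centred) plus PSF-convolved Sersic host."""
    amp, x0, y0, Ie, Re, n = params
    host = fftconvolve(sersic(shape, x0, y0, Ie, Re, n), psf, mode="same")
    return amp * psf + host  # psf is assumed to have the same shape as the cutout

def fit_decomposition(image, psf, sigma):
    """Weighted least-squares fit; returns (amp, x0, y0, Ie, Re, n)."""
    ny, nx = image.shape
    p0 = [image.max(), nx / 2.0, ny / 2.0, np.median(np.abs(image)), 5.0, 2.0]
    resid = lambda p: ((model(p, image.shape, psf) - image) / sigma).ravel()
    lower = [0.0, 0.0, 0.0, 0.0, 0.3, 0.3]
    upper = [np.inf, nx, ny, np.inf, float(nx), 8.0]
    return least_squares(resid, p0, bounds=(lower, upper)).x
```

In practice, a study such as this one requires fitting multiple filters simultaneously, position-dependent empirical PSF models, elliptical and possibly multi-component host profiles, and careful uncertainty propagation; the sketch above only conveys the basic point-source-plus-host idea.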
2309.13075
SCREWS: A Modular Framework for Reasoning with Revisions
Large language models (LLMs) can improve their accuracy on various tasks through iteratively refining and revising their output based on feedback. We observe that these revisions can introduce errors, in which case it is better to roll back to a previous result. Further, revisions are typically homogeneous: they use the same reasoning method that produced the initial answer, which may not correct errors. To enable exploration in this space, we present SCREWS, a modular framework for reasoning with revisions. It is comprised of three main modules: Sampling, Conditional Resampling, and Selection, each consisting of sub-modules that can be hand-selected per task. We show that SCREWS not only unifies several previous approaches under a common framework, but also reveals several novel strategies for identifying improved reasoning chains. We evaluate our framework with state-of-the-art LLMs (ChatGPT and GPT-4) on a diverse set of reasoning tasks and uncover useful new reasoning strategies for each: arithmetic word problems, multi-hop question answering, and code debugging. Heterogeneous revision strategies prove to be important, as does selection between original and revised candidates.
Kumar Shridhar, Harsh Jhamtani, Hao Fang, Benjamin Van Durme, Jason Eisner, Patrick Xia
2023-09-20T15:59:54Z
http://arxiv.org/abs/2309.13075v1
# Screws : A Modular Framework for Reasoning with Revisions ###### Abstract Large language models (LLMs) can improve their accuracy on various tasks through iteratively refining and revising their output based on feedback. We observe that these _revisions_ can introduce errors, in which case it is better to roll back to a previous result. Further, revisions are typically homogeneous: they use the same reasoning method that produced the initial answer, which may not correct errors. To enable exploration in this space, we present SCREWS, a modular framework for reasoning with revisions. It is comprised of three main modules: _Sampling_, _Conditional Resampling_, and _Selection_, each consisting of sub-modules that can be hand-selected per task. We show that SCREWS not only unifies several previous approaches under a common framework, but also reveals several novel strategies for identifying improved reasoning chains. We evaluate our framework with state-of-the-art LLMs (ChatGPT and GPT-4) on a diverse set of reasoning tasks and uncover useful new reasoning strategies for each: arithmetic word problems, multi-hop question answering, and code debugging. Heterogeneous revision strategies prove to be important, as does selection between original and revised candidates. ## 1 Introduction Large Language Models (LLMs) have proven effective on a variety of reasoning tasks (OpenAI, 2023). However, the LLM output is not always correct on its first attempt, and it is often necessary to iteratively refine the outputs to ensure that the desired goal is achieved (Madaan et al., 2023; Welleck et al., 2022; Zheng et al., 2023). These refinement methods assume that subsequent outputs (either by the same model, or by an external model or some tool) lead to better performance. However, there is no guarantee that subsequent versions must be better; as Figure 1 illustrates, refinement can lead to a wrong answer. This motivates a _Selection_ strategy whereby the model can select an earlier output. In addition, past work on iterative refinement typically assumes a single, fixed reasoning strategy (Welleck et al., 2022; Huang et al., 2022; Madaan et al., 2023; Zheng et al., 2023). Humans, however, are more flexible. A student preparing for an exam may use deductive reasoning to solve problems and inductive reasoning to verify the results; or a product manager may use a brainstorming strategy to list several ideas and then switch to a prioritization strategy to rank them based on their feasibility or impact. Thus, we propose a _modular_ approach to answer refinements, allowing us to test different strategies. In this work, we introduce SCREWS, a modular framework for reasoning with revisions.1 Figure 2 introduces the three main modules of the framework in detail, namely _Sampling_, _Conditional Resampling_, and _Selection_. For a given task and input sequence, we instantiate SCREWS by fixing the submodules for each module (for example, we might select "Chain of Thought" for _Sampling_). The initial outputs generated by _Sampling_ are passed to _Conditional Resampling_, which decides whether to generate a revision _conditioned_ on the initial sample, and does so if needed. Finally, all samples and revisions are given to the _Selection_ module, which selects the best one. Given the modular nature of our framework, several recently proposed self-refining methods can be improved by using other components of the framework. 
An example is the combination of the self-refinement method (Madaan et al., 2023) with our model-based selection strategy, which can improve overall performance; more such strategies are described in section 5. We evaluate SCREWS on a variety of reasoning tasks: arithmetic reasoning, multi-hop question answering, and code debugging, using ChatGPT (Brown et al., 2020) or GPT-4 (OpenAI, 2023). Our proposed strategies achieve substantial improvements (10-15%) over vanilla strategies of sampling and resampling. We demonstrate the usefulness of heterogeneous resampling, which can help the model modify its reasoning, leading to a substantial improvement over the baselines at a very low overall cost. We also discuss the importance of a model-based selection strategy that allows the model to roll back to its previous more confident outputs, an important component for modern LLMs. ## 2 Background SamplingPrompting LLMs to generate a series of intermediate steps has proven to be effective for improving their reasoning capabilities (Wei et al., 2022; Lewkowycz et al., 2022; Kojima et al., 2022; Wang et al., 2022). Some approaches in this direction include Chain of Thought (Wei et al., 2022; Zhang et al., 2022; Wang et al., 2022) and adding "Let's think step by step" to the prompt (Kojima et al., 2022). Another approach is "question decomposition", which decomposes the main problem into simpler problems and solves them iteratively (Min et al., 2019; Shridhar et al., 2022; Zhou et al., 2022; Jhamtani et al., 2023; Radhakrishnan et al., 2023). Each of these approaches has Figure 1: An example demonstrating that _Conditional Resampling_ (also known as “_refinement_”) can lead to incorrect modification of the original answer. A _Selection_ module can decide to retract the modification and instead choose the original answer, which in this case is the correct one. its own advantages depending on the underlying task (Shridhar et al., 2023). However, we are not aware of work combining these methods. Conditional ResamplingThe use of feedback to improve generated samples has been well studied, where the feedback can come either from humans (Tandon et al., 2021; Bai et al., 2022; Elgohary et al., 2021), from reward models (Ziegler et al., 2019; Lu et al., 2022; Shridhar et al., 2022; Christiano et al., 2017; Lightman et al., 2023), from external tools such as code interpreters (Schick et al., 2023; Chen et al., 2022), or from other LLMs (Madaan et al., 2023; Welleck et al., 2022; Fu et al., 2023; Peng et al., 2023; Yang et al., 2022; Zheng et al., 2023; Cohen et al., 2023; Ling et al., 2023; Khalifa et al., 2023). However, even if these feedback mechanisms are infallible, the resulting revisions may introduce new errors.2 Footnote 2: Prior work uses the term “refinement,” which we do not use because refinement implies finer (improved) responses, which does not always occur. Figure 2: Overview of our modular framework for reasoning with revisions, SCREWS. Each of the three large boxes (“modules”) contains several alternatives (“submodules”). A lot of past works can be viewed as instances of our framework, namely Self-Refine (Madaan et al., 2023), Least to Most (Zhou et al., 2022), LLMs Know (Mostly) (Kadavath et al., 2022), Self-Consistency (Wang et al., 2022), Self-Improve (Huang et al., 2022), PHP CoT (Zheng et al., 2023), Self-Correct (Welleck et al., 2022), Socratic CoT (Shridhar et al., 2022), Program of Thoughts (Chen et al., 2022), among many others. (...) 
represents other sub-components that can be added to each module, like cached memory or web search for _Sampling_, fine-tuned model or external verifier for _Conditional Resampling_, and human- or oracle-based selection for the _Selection_ module, among others. SelectionWhen using LLMs to evaluate and revise the output, the most common selection technique is to always select the final output (Madaan et al., 2023; Shinn et al., 2023; Zheng et al., 2023; Yao et al., 2022; Chen et al., 2023; Weng et al., 2022). However, this can lead to accepting incorrect changes made to previously correct outputs. Other selection methods involve ranking multiple sampled outputs (Cobbe et al., 2021) or majority voting (Wang et al., 2022; Lewkowycz et al., 2022; Zheng et al., 2023). These methods often use a homogeneous sampling strategy with changes in temperature or other similar hyper-parameters. Our work extends the strategy to heterogeneous sampling and selection. ## 3 SCREWS: Methodology In this section, we describe SCREWS, our proposed modular framework for reasoning with revisions to tackle different reasoning tasks. Given a problem \(x\), the goal is to generate an _answer_\(a\), which in our experiments may be a string or a number. SCREWS consists of three main modules: _Sampling_, _Conditional Resampling_, and _Selection_. Different variants of SCREWS are obtained by instantiating these modules in different ways. The options for each module are described below and illustrated schematically in Figure 2. All of our methods will invoke one or more stochastic functions, where each function \(\psi\) maps a tuple of input strings to a _result_ string \(y\) that contains useful information. In practice, \(\psi\) deterministically constructs a prompt from the input strings and then samples \(y\) from a large pretrained language model as a stochastic continuation of this prompt. For a given tuple of input strings, the prompt constructed for \(\psi\) will typically be a formatted encoding of this tuple, preceded by a task specific instruction and several demonstrations (few-shot examples) that illustrate how \(\psi\) should map other encoded input tuples to their corresponding continuations (Brown et al., 2020). For concreteness, the prompts we use in our experiments are illustrated in Appendix B. ### Sampling We consider three instantiations of the sampling module. Different instantiations may be appropriate for different tasks. Answer OnlyIn this method, for a given problem \(x\), the model \(\psi\) directly generates the answer \(y=\psi(x)\) without any intermediate steps. This is the simplest and most naive sampling method. The value of \(y\) is returned as the answer \(a\) (if there is no further revision of \(y\)). Chain of Thought (CoT)For many reasoning tasks today, generating explanations improves the quality of the final answer (Wei et al., 2022; Kojima et al., 2022). Chain of Thought sampling encourages the model to explain the intermediate step-by-step reasoning en route to a decision. This approach is now commonly used in several reasoning tasks. Again, we define \(y=\psi(x)\), but now we expect the prompt continuation to consist of step-by-step reasoning culminating in the step by step answer \(y\), as demonstrated by the few-shot examples included in the prompt. The answer \(a\) is extracted from \(y\) using a simple deterministic pattern-matching heuristic. Sub-question decompositionThis method decomposes the problem \(x\) into simpler sub-questions \([x_{1},x_{2},\ldots,x_{n}]\). 
For each sub-question \(x_{i}\) in turn (\(i=1,2,\ldots,n\)), the model is called to generate the corresponding sub-answer \(y_{i}=\psi(x,x_{1},y_{1},\ldots,x_{i-1},y_{i-1},x_{i})\). Note that we generate all questions before seeing any answers; that choice follows Shridhar et al. (2023), who found this approach to work better than interleaved generation of questions and answers. The sequence of questions may be generated in a single step, either by a call to a stochastic function \(\psi_{\text{question}}\), or by a custom question generation module that has been fine-tuned on human-written questions as in Cobbe et al. (2021). The answer \(a\) is extracted from \(y_{n}\) with a simple heuristic as in CoT. ### Conditional Resampling The result \(y\) from the _Sampling_ module can be viewed as a _provisional result_, \(y_{\text{curr}}\). This is passed to the _Conditional Resampling_ module where a decision is made whether or not to revise it. This is done in two steps: first deciding whether or not to revise, and then if so, resampling a new result \(y_{\text{next}}\) using one of the sampling methods mentioned above. The resampling is conditional because \(y_{\text{next}}\) may depend on \(y_{\text{curr}}\). While there are many methods for _Conditional Resampling_, our work focuses on the following instantiations: Self-AskKadavath et al. (2022) uses a function \(\psi_{\text{ask}}(x,y_{\text{curr}})\). The first token of the result indicates whether \(y_{\text{curr}}\) is correct, for example by starting with "Yes" or "No". If "Yes", we do not resample; if "No", we must resample a revised answer \(y_{\text{next}}\). In principle, the revision could be iterated, although Kadavath et al. (2022) did not do this, nor do our experiments in this paper. In our version of self-ask, \(\psi_{\text{ask}}\) is formulated so that \(y_{\text{next}}\) appears in the result string \(\psi_{\text{ask}}(x,y_{\text{curr}})\) following the word "No". Thus, both steps are efficiently performed by a single call to \(\psi_{\text{ask}}(x,y_{\text{curr}})\). For this method, we always use greedy decoding (temperature 0), which deterministically selects whichever of "Yes" or "No" is more probable.3 Demonstrations for the prompt are shown in Appendix B.2. Footnote 3: A threshold other than 50% could be tuned to optimize the downstream reward of the whole system. This compensates for bias toward the “Yes” or “No” token, and also considers how much resampling followed by selection will actually improve the final accuracy and harm the speed of the system. Orthogonally, the correctness probability of \(y_{\text{curr}}\) could be assessed by a dedicated \(\psi_{\text{check}}(x,y_{\text{curr}})\), but we were unsuccessful with this as \(\psi_{\text{check}}\) was poorly calibrated, mirroring findings on model calibration (Kadavath et al., 2022; Xiong et al., 2023). When the sampling module (Section 3.1) used sub-question decomposition to produce a chain of sub-answers \(y_{\text{curr}}=[y_{1},\dots,y_{n}]\), rather than checking and revising only the final result step \(y_{n}\) by calling \(\psi_{\text{ask}}(x,y_{n})\), we can instead check and revise each step, at the cost of more calls to \(\psi_{\text{ask}}\). For each provisional sub-answer \(y_{i}\) in turn (starting with \(i=1\)), we predict whether it is correct by calling \(\psi_{\text{ask}}(x,x_{1},y_{1},\dots,x_{i-1},y_{i-1},x_{i},y_{i})\). 
The first time the output is "No", we resample \(y^{\prime}_{i}\) through \(y^{\prime}_{n}\), yielding the revised result \(y_{\text{next}}=[y_{1},\dots,y_{i-1},y^{\prime}_{i},\dots,y^{\prime}_{n}]\). In principle, self-ask could then be applied again at later steps \(>i\) of both the original and revised chains; then choosing among the many resulting chains, using the selection procedures of the next section, would resemble branching in a reasoning tree (Yao et al., 2023). Tool-Based LLMFor some tasks, we construct \(\psi_{\text{ask}}\) so that it is allowed to use tools (Schick et al., 2023). The reason is that in tasks like fact-checking, it is futile to ask the LLM to check \(y_{\text{curr}}\) because it might not have the requisite knowledge for evaluation. The tools can be used to collect additional information or facts to help the model detect and fix problems in its own generated answer. Tools like search engines or fact retrievers can be used to evaluate correctness and generate a new revision. Some other tools like code interpreters are not capable of generating text, but can still be used to evaluate correctness. ### Selection The last module in SCREWS is the _Selection_ module. In this step, we use either a model \(\psi_{\text{select}}\) or simple heuristics to select the _final_ result \(y\) from which we then extract the _final_ answer \(a\). In effect, this allows us to construct a simple ensemble of multiple systems. LLM-Based SelectionJust as an LLM was used above to evaluate whether \(y_{\text{curr}}\) is good, an LLM can be used to evaluate whether \(y_{\text{next}}\) is better. We call \(\psi_{\text{select}}(x,y_{\text{curr}},y_{\text{next}})\) to choose between two result strings.4 Note that it could be naturally extended to choose among more than two answers. When selection and sampling are implemented using the same LLM, we refer to the method as _self-select_ (e.g., in Figure 2). The prompts for \(\psi_{\text{select}}\) in our experiments are shown in Appendix B.3. Footnote 4: We found that the order of \(y_{\text{curr}}\) and \(y_{\text{next}}\) in the prompt was unimportant; in our reported results, we randomized this order. Rule-Based SelectionWe consider the other methods we study to be rule-based. Past work on iterative refinement (Madaan et al., 2023; Huang et al., 2022; Zheng et al., 2023) always selects the most recent revision. Majority voting is a simple traditional ensembling method that has been used for selection (Wang et al., 2022; Lewkowycz et al., 2022), but it is costly because it requires several samples. ### Other Possibilities There are other possible ways to instantiate each module. Tools like web-based search or cache-based retrieval could be used to generate the initial attempt in the _Sampling_ module. A fine-tuned classification model could be used to verify outputs in the _Conditional Resampling_ module. Similarly, a fine-tuned model could be used for the _Selection_ module. In this paper, however, we study only the instantiations described above. ## 4 Experiments ### Tasks We test the effectiveness and flexibility of SCREWS on three categories of reasoning tasks: GSM8K (Cobbe et al., 2021) for arithmetic reasoning, StrategyQA (Geva et al., 2021) for multi-hop question answering, and Big-Bench (BIG-bench authors, 2023) AutoDebugging5 for code debugging. The GSM8K dataset is a grade-school-level math word problem dataset with a test set of 1319 samples, each requiring two to eight steps to solve. 
GSM8K includes sub-questions that were generated by a fine-tuned GPT-3 model and correspond to the steps in a particular correct CoT solution. Since these sub-questions were generated with oracle knowledge of a correct CoT solution, we refer to experiments using them as "Subq (Or)". We use "Subq (QG)" for the fairer experimental condition where we instead generated the subquestions from ChatGPT using two-shot prompts (which are provided in Appendix B.4).6 Footnote 5: [https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/auto_debugging/](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/auto_debugging/) Footnote 6: Unsurprisingly, the Subq (Or) sub-questions proved to be consistently better, as we will see in Section 5. In addition to their oracle knowledge of a human-written answer, some of the sub-questions themselves may also have been human-written: the sub-question generation model was fine-tuned on around 800 human-written examples, and some of those examples may also be included in the released dataset ([https://github.com/openai/grade-school-math#socratic-dataset](https://github.com/openai/grade-school-math#socratic-dataset)). Following Magister et al. (2023) and Shridhar et al. (2023), we test on the first 490 samples from the training set of StrategyQA (since their test set is unlabeled). The demonstration examples for our various stochastic functions \(\psi\) were drawn randomly from the rest of the training set. StrategyQA also includes human-annotated oracle subquestions (which we again use for "Subq (Or)" results) and related facts that can assist in answering the main question (which we use for tool-based conditional resampling as in Section 3.2). Finally, the Auto Debugging dataset tests whether a model can answer questions about the intermediate state of a program without executing the code. The dataset consists of 34 coding examples, of which 33 were used as test examples and 1 as a demonstration example in the prompt.

### Experimental Setup

We always report exact-match accuracy: the percentage of examples on which our final answer \(a\) matches the gold answer. For all of our experiments, we use the ChatGPT API (Brown et al., 2020) from July 2023 (gpt-3.5-turbo-0301). This model is a decoder-only Transformer LLM (Vaswani et al., 2017) that was fine-tuned using reinforcement learning with human feedback (Ziegler et al., 2019; Christiano et al., 2017). Some experiments were also performed using GPT-4 (OpenAI, 2023) to show the scaling capabilities of our framework.

**Sampling.** With all choices of the _Sampling_ module, we use 5-shot sampling for GSM8K and StrategyQA and 1-shot sampling for Auto Debugging. Greedy decoding (temp = 0) is used for the main experiments while higher temperature (0.7) is used for the majority voting experiments (one sample was generated with temp = 0 and the other four at temp = 0.7). All prompts are provided in Appendix B.1.

**Conditional Resampling.** Greedy decoding is used to first make a binary resampling decision and then to sample. 4-shot prompts (with two correct and two incorrect samples) are used for the GSM8K and StrategyQA datasets, while a 2-shot prompt (with one correct and one incorrect sample) is used for Auto Debugging. For StrategyQA, we use tool-based resampling by including the provided facts from the dataset into the prompt (Appendix B.2) to simulate a (perfect) fact retrieval tool.
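As a concrete illustration of how the three modules described in Section 3 chain together in an experiment of this kind, a minimal sketch using the 2023-era `openai` Python client might look as follows. The prompts are illustrative placeholders rather than the actual prompts from Appendix B, and the function names are hypothetical.

```python
import openai  # assumes the 2023-era client (openai<1.0); prompts below are placeholders

def chat(prompt, temperature=0.0, model="gpt-3.5-turbo-0301"):
    resp = openai.ChatCompletion.create(
        model=model, temperature=temperature,
        messages=[{"role": "user", "content": prompt}])
    return resp["choices"][0]["message"]["content"]

def sample_cot(question):
    # Sampling: chain-of-thought answer (few-shot demonstrations omitted here)
    return chat(f"Solve step by step, then state the final answer.\nQ: {question}\nA:")

def conditional_resample(question, y_curr):
    # Conditional Resampling (self-ask): the first token decides whether to revise
    out = chat(f"Q: {question}\nProposed solution: {y_curr}\n"
               "Is the proposed solution correct? Answer 'Yes' or 'No'; if 'No', "
               "give a corrected step-by-step solution.")
    return y_curr if out.strip().startswith("Yes") else out

def select(question, y_curr, y_next):
    # Selection: ask the model to pick between the original result and the revision
    out = chat(f"Q: {question}\nCandidate A: {y_curr}\nCandidate B: {y_next}\n"
               "Which candidate answers the question correctly? Reply 'A' or 'B'.")
    return y_curr if out.strip().startswith("A") else y_next

def screws(question):
    y_curr = sample_cot(question)
    y_next = conditional_resample(question, y_curr)
    return y_curr if y_next == y_curr else select(question, y_curr, y_next)
```

In this sketch the resampling step reuses the same reasoning method as the initial sample; swapping in a different _Sampling_ method there (heterogeneous resampling), or replacing the `select` call with majority voting, gives the variants compared in Section 5.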
**Selection.** For the _self-select_ strategy, the prompts include two examples and selection was produced with greedy decoding (prompts in Appendix B.3). For majority voting, a majority vote on the final answers was taken over \(k\in\{1,3,4,5\}\) samples. Ties were broken randomly.

## 5 Results

### GSM8K

**Conditional Resampling Works Better with Method Change.** Previous work (Madaan et al., 2023) has shown that when a chain-of-thought method is used for initial _Sampling_, reasoning ability is improved by _Conditional Resampling_ with the same method. The benefit comes from taking the previous sample into account. We reproduced this previous finding: the CoT scores for GSM8K improved by 1.4 points after resampling with CoT (71.6 to 73.0), as shown in Table 1. However, when the initial _Sampling_ used subquestion decomposition, we found that resampling with subquestion decomposition actually harmed accuracy. It decreased the score by about 0.5 points (71.9 to 71.3 with generated subquestions, 78.6 to 78.2 with oracle subquestions). What gave the best results--for all three _Sampling_ methods--was _Conditional Resampling_ with a _different_ method from the originally chosen one. It gave a large gain over Sampling when the original Sampling used CoT and Resampling used subquestion decomposition (71.6 to 73.7, with generated subquestions) and vice versa (71.9 to 74.0). Even with oracle subquestions, moderate gains are still seen when resampling with CoT (78.6 to 79.0). This demonstrates that it is useful to change methods using _Conditional Resampling_, a novel finding with our framework.7

Footnote 7: In principle, we could also use resampling and selection to combine Subq (QG) with Subq (Or); we may try this in a future version of this paper.

\begin{table} \begin{tabular}{c c c} \hline \hline
**Sampling** & **Conditional Resampling** & **Accuracy** \\ \hline
\multirow{4}{*}{CoT} & - & 71.64 \\
 & CoT & 73.00 \\
 & Subq (QG) & 73.69 \\
 & Subq (Or) & **73.99** \\ \hline
\multirow{3}{*}{Subq (QG)} & - & 71.87 \\
 & CoT & **73.99** \\
 & Subq (QG) & 71.26 \\ \hline
\multirow{3}{*}{Subq (Or)} & - & 78.62 \\
 & CoT & **78.99** \\ \cline{1-1}
 & Subq (Or) & 78.24 \\ \hline \hline
\end{tabular} \end{table} Table 1: The improvements achieved by using _Conditional Resampling_ for the GSM8K dataset, where \(y_{\text{next}}\) is always selected. **CoT** refers to the Chain of Thought method, while **Subq** refers to the Subquestion Decomposition method. **Subq (QG)** refers to the case where subquestions are generated by the ChatGPT model, while **Subq (Or)** refers to the Oracle questions present in the Socratic version of the dataset.

**Importance of Selection Module.** _Conditional Resampling_ does not invariably improve every output. In fact, we saw in Table 1 that for some settings, it may harm the output quality even on average. This is why the _Selection_ module is useful--to detect and reject cases of harmful revisions. First, as a starting point, the left half of Table 2 considers using Selection only as an ensembling technique to combine the outputs of two _independent_ Sampling strategies. (Note that this matrix is symmetric.) Although CoT and subquestion decomposition are about equally good Sampling strategies (71.6 and 71.9), using a Selection module to select the better of the two achieves a 3-point gain (to 74.9). Much larger gains (up to 85.4) are potentially available from improving Selection--the upper bound on performance (if Selection always chose the better option) is shown in square brackets.
This shows that the two Sampling strategies have largely complementary errors. A similar pattern applies when the subquestion decomposition method is permitted to use oracle subquestions, which improves performance across the board to 81.34. The right half of Table 2 shows _Selection_ between the _Sampled_ and _Conditionally Resampled_ predictions from Table 1. (This matrix is asymmetric.) For CoT, the results remain the same at 73.99, which is due to the fact that the upper bound is at 73.99, showing no room for further improvement. For other cases with subquestioning, we see an improvement of up to 1 point. Finally, we observe that the _Selection_ module is far from perfect and has room for further improvement, as seen from the upper bounds. A _Selection_ method ought to look at features of the two answers that turn out to be correlated with correctness, and we hypothesize that models fine-tuned specifically for _Selection_ may prove more effective than few-shot learning at identifying these features. The right half of Table 2 is the cheaper method, because we observe \(\psi_{\text{ask}}\) resamples on only 5-15% of the examples rather than all of them. A tradeoff between accuracy and cost is shown in Figure 4.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline
**Method** & \multicolumn{3}{c}{**Independent Sampling**} & \multicolumn{3}{c}{**Conditional Resampling**} \\ \cline{2-7}
 & CoT & Subq (QG) & Subq (Or) & CoT & Subq (QG) & Subq (Or) \\ \hline
CoT & 71.64 & 74.90 [85.36] & **81.34**[89.08] & 72.93 [73.08] & 73.76 [73.76] & **73.99**[73.99] \\
Subq (QG) & **74.90**[85.36] & 71.87 & - & **73.99**[75.43] & 72.40 [72.40] & - \\
Subq (Or) & **81.34**[89.08] & - & 78.62 & 78.99 [81.50] & - & **79.22**[79.22] \\ \hline \hline
\end{tabular} \end{table} Table 2: Impact of _Selection_ on the GSM8K data set on _Independent Sampling_ and _Conditional Resampling_. The upper bound from using a _Selection_ oracle is given in square brackets.

**Selection and Voting.** Unweighted majority vote has been one of the most popular _Selection_ methods in past work (Wang et al., 2022; Lewkowycz et al., 2022; Zheng et al., 2023), since it requires no training. The two lines in Figure 3(a) generally show improvement from _Sampling_ more times from the same model (at temperature 0.7) and _Selecting_ by majority vote.

Figure 3: The + in graph (a) shows that majority voting with 3 diverse samples (CoT + Subq(Or) + Subq(QG)) outperforms both CoT and Subq(Or) even with 5 samples. Graph (b) shows the potential of the _selection_ method when a perfect selector is used. It can be thought of as the upper bound of the selection mechanism. Both figures are for the GSM8K dataset.

Recalling that the left half of Table 2 showed benefit from ensembling independent samples from 2 different _Sampling_ methods (up to 81.34 accuracy when oracle subquestions are allowed), we observe that majority vote is a convenient way to do so for 3 different methods (where all methods can now use temperature 0). This achieves 83.62 accuracy, as shown by the \(\star\) in Figure 3(a). Of course, model-based _Selection_ could potentially do even better than majority voting. The 7 points for \(k\geq 3\) in (a) are repeated as the dark bars in Figure 3(b), with the light bars showing the upper bounds that could be achieved by replacing majority voting with a perfect _Selection_ method. The best upper bound corresponds again to the use of 3 different methods.
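To make the two quantities compared in Figure 3(b) concrete, a hypothetical helper for computing the majority-vote accuracy (with random tie-breaking, as described in Section 4.2) and the perfect-selector upper bound (an example counts as correct whenever any candidate answer is correct) could look like the following sketch; it is illustrative only and is not code from the paper.

```python
import random
from collections import Counter

def majority_vote(answers):
    """Unweighted majority vote over final answers; ties broken randomly."""
    counts = Counter(answers)
    top = max(counts.values())
    return random.choice([a for a, c in counts.items() if c == top])

def accuracy(predictions, gold):
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

def vote_vs_oracle(candidates_per_example, gold):
    """candidates_per_example[i] holds the candidate answers for example i."""
    voted = [majority_vote(c) for c in candidates_per_example]
    oracle = [g if g in c else c[0] for c, g in zip(candidates_per_example, gold)]
    return accuracy(voted, gold), accuracy(oracle, gold)
```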
In principle, one could ensemble over a larger set by allowing each of the 3 methods to contribute multiple samples.

### StrategyQA

**Vanilla resampling does not improve what the model does not know \(\rightarrow\) a need for tools.** For the StrategyQA dataset, we observe in Table 3 that accuracy is harmed by _Conditional Resampling_ with the same _Sampling_ method, without _Selection_, as was sometimes the case for GSM8K. On StrategyQA, however, even _Selection_ usually does not repair the problem, perhaps because StrategyQA requires multi-hop question answering. When the model lacks the necessary factual knowledge, Self-Ask will be insufficient. A real example at the bottom of Figure 5 shows how resampling can preserve an incorrect claim generated by the model. To help the model decide whether and how to revise the answer, we try including relevant facts (provided by StrategyQA) into the resampling prompt, as shown in Appendix B.2.1, to simulate the result one may get by using an external tool like a fact retriever. As Table 3 shows, this yields a 2-point improvement ("Facts\({}_{\text{re}}\)" vs. "Internal\({}_{\text{re}}\)") over _Sampling_, for both CoT and Subq (QG). We assume that tool invocations are expensive, which is why we include facts only during _Conditional Resampling_. In practice, the initial result is revised only 10-35% of the time, and therefore "Facts" does not need to invoke a tool call for every input example.8 To achieve this speedup, we do not include facts in the prompt when initially calling \(\psi_{\text{ask}}\) to decide whether to resample, but only when we actually generate \(y_{\text{next}}\).

Footnote 8: However, if the facts were included during _Sampling_, the performance can increase beyond 90%.

### Code Debugging

**The effectiveness of SCREWS.** For the code debugging task, we observed that the Answer Only method achieves similar scores to CoT,9 as reported in the bottom half of Table 3, suggesting that no particular _Sampling_ method is superior on all datasets. However, we see the benefits of using SCREWS, as we find that with Answer Only, adding _Conditional Resampling_ followed by _Selection_ leads to a performance boost of 15 points (from 73.52 to 88.23).

\begin{table} \begin{tabular}{l c c c c c} \hline \hline
**Method** & **Sampling** & \multicolumn{2}{c}{**Conditional Resampling**} & \multicolumn{2}{c}{**Selection**} \\ \cline{3-6}
Knowledge Source: & Internal\({}_{\text{s}}\) & Internal\({}_{\text{re}}\) & Facts\({}_{\text{re}}\) & Int\({}_{\text{s}}\) vs. Int\({}_{\text{re}}\) & Int\({}_{\text{s}}\) vs. Facts\({}_{\text{re}}\) \\ \hline
\multicolumn{6}{c}{**StrategyQA**} \\ \hline
CoT & 77.18 & 74.54 & **79.02** & 75.76 & 78.41 \\
Subq (Or) & 85.91 & 78.97 & 84.69 & 85.30 & **86.30** \\
Subq (QG) & 78.16 & 74.69 & **80.40** & 78.78 & 80.00 \\ \hline
\multicolumn{6}{c}{**Code Debugging**} \\ \hline
Answer Only & 73.52 & 82.35 & 88.23 [91.20] & \\
CoT & 70.58 & 73.52 & 73.52 [73.52] & \\
Answer Only + CoT & - & - & 85.29 [88.23] & \\ \hline \hline
\end{tabular} \end{table} Table 3: Comparing different strategies for the StrategyQA (top) and Big Bench Code Debugging (bottom) datasets. For StrategyQA, external facts are provided to the model ("Facts") versus relying on the model's internal capabilities ("Internal"). The numbers in square brackets indicate upper bound performance, assuming perfect selection. Subscripts "s" and "re" refer to Sampling and Resampling respectively.
While the dataset size limits our ability to make concrete conclusions, the findings here support the conclusions drawn on other datasets: _Resampling_ and _Selection_ lead to benefits and heterogeneous sampling can prove effective.

## 6 Additional Analysis

### Total Cost

SCREWS supports many methods with different cost/accuracy tradeoffs. Figure 4 displays the strategies that use CoT and Subq (QG) on GSM8K. The cost is represented as the total count of input tokens (prompt + query) and output tokens for all LLM calls needed by that strategy, averaged over test examples. Generally, Subq (QG) is expensive as it is costly to call \(\psi_{\text{question}}\). However, it is affordable to use it in _Conditional Resampling_ only (), since resampling only occurs 10-15% of the time. This method is both cheaper and more accurate than _Sampling_ either with Subq (QG) (+) or 3 times with CoT (\(\bullet\)). Appendix A discusses a detailed breakdown of each module's input and output token costs.

### More Revision Steps

We saw in Section 5.1 on GSM8K that _Sampling_ with Subq (Or) (78.62 accuracy) is improved slightly by _Conditional Resampling_ with CoT (78.99) and then _Selection_ (79.22). Like Madaan et al. (2023), we did not find much benefit from additional iterations of _Conditional Resampling_+_Selection_: a second iteration gives 79.45, and a third gives 79.52. These small improvements probably do not justify the added cost.

### Larger LLMs

Replacing ChatGPT with GPT-4 greatly increased the _Sampling_ accuracy on GSM8K, to 91.45 for CoT and 90.80 for Subq (Or). Choosing between those two samples with GPT-4-based _Selection_ further increased the accuracy to 93.10, which falls between the accuracy of majority voting over \(k=3\) and \(k=4\) CoT samples from GPT-4 (92.94 and 93.93 respectively). Even using ChatGPT-based _Selection_ achieved 92.58, which is still an improvement over CoT alone.

### Selected Examples

The top two examples of Figure 5, on the GSM8K dataset, demonstrate the effectiveness of the _Selection_ module. The first example shows how an error introduced by _Conditional Resampling_ can be reverted by _Selection_. The second example shows how a correction found by _Conditional Resampling_ can be kept by _Selection_. The last example in Figure 5, on the StrategyQA dataset, illustrates that ordinary _Resampling_ is unlikely to correct an incorrect fact generated by the LLM. However, providing the correct facts during _Resampling_ gives the model access to new information, leading to the correct answer.

Figure 4: On GSM8K, sampling cost vs. accuracy. The blue line (copied from Figure 3(a)) shows a baseline of majority voting over \(k\in\{1,3,4,5\}\) CoT samples. The shaped points are the other strategies from Section 5.1 that use CoT and Subq (QG).

## 7 Discussion

### Key Findings

Based on our experiments with three reasoning datasets using our framework, we conclude the following:

* _Selection_ **plays an important role**: Although _Conditional Resampling_ often improves the result of _Sampling_, _Selection_ can help avoid errors from the case where it does not. It was beneficial on all three datasets.
* **Heterogeneous vs. homogeneous resampling**: Using different reasoning methods for _Sampling_ and _Conditional Resampling_ can lead to higher accuracy, with or without _Selection_.
* **Missing external knowledge hurts _Conditional Resampling_**: Resampling cannot fix incorrect facts generated by the model.
Tool-based resampling can therefore get better results (as simulated using StrategyQA).
* **No uniformly best strategy**: There was no clear winning method for each of the modules. Simple baseline methods sometimes beat more complex ones: CoT uses only one call to \(\psi\) and beats Subq (QG) in GSM8K, always selecting \(y_{\text{next}}\) beats self-select for StrategyQA with "Facts," and Answer Only works surprisingly well for Code Debugging.

### Future Work

SCREWS combines the three important modules _Sampling_, _Conditional Resampling_ and _Selection_ in a modular framework. The best configuration of modules will vary by task and could be identified through a method such as exhaustive search, Monte Carlo Tree Search, or reinforcement learning. The modules themselves could be fine-tuned to improve end-to-end performance. If we want to optimize cost along with accuracy, Chen et al. (2023a) proposed several methods for speeding up the stochastic functions \(\psi\). Their "LLM Cascade" strategy in particular is a heterogeneous (but unconditional) resampling method that starts with smaller, cheaper models. It is possible that for some reasoning tasks, additional modules could be useful. For instance, _Resampling_ or _Selection_ might be preceded by _Critiquing_, or _Selection_ might be generalized to _Combination_.

Figure 5: The top two examples demonstrate the importance of the _Selection_ module for the GSM8K dataset. The last example shows how tool use ("Facts") can be helpful for the StrategyQA dataset.

### Conclusion

We have proposed SCREWS, a modular reasoning-with-revisions framework to answer reasoning questions with LLMs. We demonstrated the usefulness of the three main components of the framework--_Sampling_, _Conditional Resampling_, and _Selection_--on three reasoning datasets. The flexible nature of our framework allows it to be configured for each task and extended to other tasks in the future.
2309.05617
Hidden symmetries of generalised gravitational instantons
For conformally K\"ahler Riemannian four-manifolds with a Killing field, we develop a framework to solve the field equations for generalised gravitational instantons corresponding to conformal self-duality and to cosmological Einstein-Maxwell. We obtain generic identities for the curvature of such manifolds without assuming field equations. After applying the framework to recover standard solutions, we find conformally self-dual generalisations of the Page-Pope, Plebanski-Demianski, and Chen-Teo solutions, which are neither hyper-K\"ahler nor quaternionic-K\"ahler, giving new self-dual gravitational instantons in conformal gravity.
Bernardo Araneda
2023-09-11T17:05:43Z
http://arxiv.org/abs/2309.05617v1
# Hidden symmetries of generalised gravitational instantons ###### Abstract For conformally Kahler Riemannian four-manifolds with a Killing field, we develop a framework to solve the field equations for generalised gravitational instantons corresponding to conformal self-duality and to cosmological Einstein-Maxwell. We obtain generic identities for the curvature of such manifolds without assuming field equations. After applying the framework to recover standard solutions, we find conformally self-dual generalisations of the Page-Pope, Plebanski-Demianski, and Chen-Teo solutions, which are neither hyper-Kahler nor quaternionic-Kahler, giving new self-dual gravitational instantons in conformal gravity. ## 1 Introduction Gravitational instantons are four-dimensional, complete, Ricci-flat Riemannian manifolds with sufficiently fast curvature decay, typically ALE, ALF, or AF (cf. [1] for precise definitions). They are expected to give the dominant contributions to the path integral for Euclidean quantum gravity. Particular cases are metrics with self-dual Riemann tensor (i.e. hyper-Kahler manifolds), while more general cases correspond to generalisations of the Ricci-flat condition. These generalisations include the addition of a cosmological constant, solutions to Einstein-Maxwell theory, conformally self-dual geometries (i.e. metrics with self-dual Weyl tensor), Bach-flat metrics, etc. Such solutions are interesting in high-energy physics, as Einstein-Maxwell theory coincides with the bosonic sector of \(N=2\) supergravity in four dimensions, and conformally self-dual and Bach-flat geometries are solutions to conformal gravity. Examples of cosmological Einstein-Maxwell instantons have been studied in [2, 3, 4, 5], while instantons in conformal gravity were considered in [6, 7] and more recently in [8, 9, 10, 11, 12]. Generalised instantons are also interesting in Riemannian geometry, concerning open problems about the classification of these spaces. Examples of classifications in the Ricci-flat case include ALE hyper-Kahler [13], and ALF toric-Hermitian [14]. In the non-Ricci-flat case, there are classifications of _compact_ complex surfaces, including compact Einstein-Hermitian [15] (i.e. with non-trivial cosmological constant) and compact Bach-flat Kahler [16]. A curious property about Ricci-flat gravitational instantons (also common to the more general classifications mentioned above) is that all known examples are Hermitian, cf. [17, Question 1.4], which implies (using Bianchi identities) that they are conformally Kahler, and have at least one Killing field (as long as they are not Kahler themselves). Motivated by this, in this work we study generalised gravitational instantons corresponding to the conformally self-dual and cosmological Einstein-Maxwell equations, under the assumption of a geometry which is conformally Kahler with a Killing field. We will show that in both cases the field equations reduce to a single scalar equation: the \(SU(\infty)\) (continuous) Toda equation. (Or the modified Toda equation in the case of Einstein-Maxwell with non-zero cosmological constant.) We derive a number of useful identities for conformally Kahler metrics, cf. in particular our main Theorem 2.9 for the Ricci form. Our results provide a generalisation of Tod's work [18], which is for the (non-conformally-self-dual) Ricci-flat case. In the conformally-self-dual (non-Ricci-flat) case, the reduction was already known from LeBrun's work [19]. 
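For readers who have not met it before, the \(SU(\infty)\) (continuous) Toda equation referred to here is, in the normalization most commonly used in this literature (the paper's own conventions may differ by rescalings of the coordinates), the single scalar PDE
\[u_{xx}+u_{yy}+(e^{u})_{zz}=0\]
for a function \(u(x,y,z)\); the modified version mentioned above is its \(\lambda\)-dependent deformation arising in the Einstein-Maxwell case with non-zero cosmological constant.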
We apply the construction to the study of a large number of metric ansatze, including the spherically symmetric, Kerr-Newman, Page-Pope [20], Plebanski-Demianski [21], and Chen-Teo [22, 23] classes. In particular, we show that the Page-Pope class of metrics on bundles over Riemann surfaces is generically ambi-Kahler without assuming any field equations, and we classify all conformally self-dual solutions. We also construct a Plebanski-Demianski self-dual gravitational instanton in conformal gravity, which depends on 5 parameters and is not Einstein, so it is different from the standard self-dual limit of Plebanski-Demianski. More generally, part of our motivation comes from open questions concerning the Chen-Teo instanton [22], which is a 2-parameter, Ricci-flat AF metric that gives a counterexample to the classical Euclidean Black Hole Uniqueness Conjecture [1]. This instanton was generalised in [23] to a 5-parameter, Ricci-flat (singular) family, which includes both the Plebanski-Demianski and the (triple-collinearly-centred) Gibbons-Hawking spaces. The construction of the cosmological Einstein-Maxwell Chen-Teo solution is a challenging open problem [23], and in future works we will apply the framework developed in this work to obtain that solution. In the current paper, we give a family of conformally self-dual generalisations. Our work also provides an explicit Toda formulation of all the examples mentioned above. In particular, we give a simple trick to solve the Toda equation (with an extra symmetry) for complicated metric ansatze. Concerning conformal gravity, the field equations are the vanishing of the Bach tensor, which is a conformally invariant condition. Any conformally (anti-)self-dual space satisfies these equations. Einstein metrics are also Bach-flat, so if one has an Einstein space then any conformal transformation of it will be a solution to conformal gravity, but this will simply be coming from a solution to ordinary Einstein gravity. Bach-flat metrics which are _not_ conformally Einstein are thus more intriguing from the conformal gravity point of view. Now, in this work we are interested in conformally Kahler metrics, and Derdzinski showed [24, Proposition 4] that a Kahler metric with non-self-dual Weyl tensor is Bach-flat _if and only if_ it is (locally) conformally Einstein1. Thus, since we restrict to conformally Kahler geometry, we will not worry about the Bach-flat equations. In particular, since we show that the Page-Pope class [20] is always ambi-Kahler, this implies that Bach-flat instantons such as generalised Eguchi-Hanson and generalised Taub-NUT (considered recently in [12] in the conformal gravity context) are conformally Einstein. Footnote 1: That is: if \(\hat{g}\) is Kähler, with Ricci scalar \(\hat{R}\neq 0\), then the Bach tensor vanishes iff \(\hat{R}^{-2}\hat{g}\) is Einstein [24]. A natural question is then whether there are non-self-dual Bach-flat instantons which are not conformally Kahler: such solutions would be Bach-flat but not conformally Einstein, so more interesting for conformal gravity. In fact, at least in Lorentz signature such solutions exist: see [25, 26]. (We mention however that many of these solutions are Petrov type N and thus do not have Euclidean sections, so the situation for instantons is less clear.) 
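Since the Bach tensor is referred to above without being displayed, we recall for reference that in four dimensions it can be written, up to an overall normalization and with index conventions that vary between references, as
\[B_{ab}=\nabla^{c}\nabla^{d}C_{acbd}+\frac{1}{2}R^{cd}C_{acbd},\]
whose vanishing is the conformally invariant field equation of conformal gravity mentioned above.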
Overview.The core of our framework is developed in section 2, where we obtain a number of identities for conformally Kahler metrics whose Ricci tensor is invariant under the complex structure: we give a reduction of the conformally self-dual and of the cosmological Einstein-Maxwell equations (Prop. 2.1 and Prop. 2.2 resp.), and obtain generic expressions for the metric (Prop. 2.4), Ricci scalar (Prop. 2.8) and Ricci form (Theorem 2.9). We also comment on the special case of ambi-Kahler structures (section 2.5), and we give some basic examples (section 2.6). In particular, the Kerr-Newman example in section 2.6 allows us to illustrate in a simple case the trick to solve the Toda equation mentioned above; this will be used in more complicated cases in later sections of the paper. In section 3 we study the Page-Pope class [20], solving the conformally self-dual and cosmological Einstein-Maxwell equations, and in section 4 we do the same for the Plebanski-Demianski class [21]. In section 5 we analyse the Chen-Teo class [22, 23], giving a Toda formulation and finding conformally self-dual generalisations. We present our conclusions in section 6. We include appendices A, B with some basic background, definitions, and identities. We also mention that our construction is purely local2. Footnote 2: In particular, some of our examples include the 4-sphere \(S^{4}\), which does not admit a global complex structure. ## 2 Conformally Kahler geometry ### Preliminaries For general definitions and background, we refer to appendix A. Let \((M,g_{ab})\) be a conformally Kahler 4-manifold, with complex structure \(J^{a}{}_{b}\) and fundamental 2-form \(\kappa_{ab}=g_{bc}J^{c}{}_{a}\). Recall that \(\kappa_{ab}\) is necessarily self-dual (SD) or anti-self-dual (ASD) w.r.t. to the Hodge star; we choose \(\kappa_{ab}\) ASD for concreteness. Then it can be written in 2-spinor language as \(\kappa_{ab}=j_{AB}\epsilon_{A^{\prime}B^{\prime}}\), where \(j_{AB}\) is symmetric (and satisfies \(j^{A}{}_{C}j^{C}{}_{B}=-\delta^{A}_{B}\)). The conformally rescaled 2-form is \(\hat{\kappa}_{ab}=\Omega j_{AB}\hat{\epsilon}_{A^{\prime}B^{\prime}}\), where \(\hat{\epsilon}_{A^{\prime}B^{\prime}}=\Omega\epsilon_{A^{\prime}B^{\prime}}\). The conformal Kahler property is \(\hat{\nabla}_{a}\hat{\kappa}_{bc}=0\), where \(\hat{\nabla}_{a}\) is the Levi-Civita connection of \(\hat{g}_{ab}\). In spinors, this translates into \(\hat{\nabla}_{AA^{\prime}}(\Omega j_{BC})=0\). Using the relation between \(\hat{\nabla}_{AA^{\prime}}\) and \(\nabla_{AA^{\prime}}\) (see [27, 28, 29]), one deduces that \((M,g_{ab})\) possesses a valence-2 Killing spinor: \[\nabla_{A^{\prime}(A}K_{BC)}=0,\qquad K_{AB}:=\Omega^{-1}j_{AB}. \tag{2.1}\] Define now \(Z_{ab}:=K_{AB}\epsilon_{A^{\prime}B^{\prime}}\). A calculation using the Killing spinor equation (see [28, Eq. (6.4.6)]) shows that \[\nabla_{a}Z_{bc}=\nabla_{[a}Z_{bc]}-2g_{a[b}\xi_{c]},\qquad\xi_{a}:={ \frac{1}{3}}\nabla^{b}Z_{ab}. \tag{2.2}\] The first equation is the conformal Killing-Yano (CKY) equation. In terms of the fundamental 2-form, the CKY tensor is \(Z_{ab}=\Omega^{-1}\kappa_{ab}\). Notice that \(\xi_{b}\) has always zero divergence, \(\nabla^{a}\xi_{a}=0\) (this follows from \(\nabla^{a}\nabla^{b}Z_{ab}=0\) since \(Z_{ab}\) is a 2-form). In addition, a calculation shows that \(\xi_{b}\) can be expressed as \[\xi_{b}=J^{a}{}_{b}\partial_{a}\Omega^{-1}. 
\tag{2.3}\] From this expression, we can deduce that the vector field \(\xi=\xi^{a}\partial_{a}\) preserves both the conformal factor \(\Omega\) and the fundamental 2-form \(\kappa_{ab}\). For the first, we notice from (2.3) that \(\pounds_{\xi}\Omega=\xi^{a}\partial_{a}\Omega=0\). For the second, recall Cartan's formula for a generic vector field \(v\) and 2-form \(\omega\): \(\pounds_{v}\omega=\mathrm{d}(v\lrcorner\omega)+v\lrcorner\mathrm{d}\omega\). Then \(\pounds_{\xi}\hat{\kappa}=\mathrm{d}(\xi\lrcorner\hat{\kappa})\). Now, \((\xi\lrcorner\hat{\kappa})_{b}=\xi^{a}\hat{\kappa}_{ab}=-\Omega^{2}\xi_{a}J^{ a}{}_{b}=-\partial_{b}\Omega\) (we use \(g_{ab}\) to lower indices). Thus \(\pounds_{\xi}\hat{\kappa}=0\), and since \(\pounds_{\xi}\Omega=0\), it follows that also \(\pounds_{\xi}\kappa=0\). So the conformal factor \(\Omega\) is a Hamiltonian for \(\xi^{a}\) w.r.t. the symplectic structure \(\hat{\kappa}_{ab}\). Let us now show that the vector field \(\xi^{a}\) is a Killing vector of \(g_{ab}\) if and only if the Ricci tensor is invariant under the complex structure, meaning that \(R_{ab}=R_{cd}J^{c}{}_{a}J^{d}{}_{b}\). (Notice that can be replaced by its trace-free part in this equation.) First, we apply an additional covariant derivative to the Killing spinor equation (2.1), \(0=\nabla_{AA^{\prime}}\nabla^{(A}_{B^{\prime}}K^{BC)}\). Symmetrizing over \(A^{\prime}B^{\prime}\), this leads to \[\Phi_{A^{\prime}B^{\prime}C}{}^{(A}K^{B)C}=-\nabla^{(A}_{(A^{ \prime}}\xi^{B)}_{B^{\prime})}, \tag{2.4}\] where \(\Phi_{A^{\prime}B^{\prime}AB}\) represents the trace-free Ricci tensor, \(\Phi_{ab}=-\frac{1}{2}(R_{ab}-\frac{R}{4}g_{ab})\). The right hand side of (2.4) is the conformal Killing operator applied to \(\xi_{b}\), since \(\nabla_{(A|(A^{\prime}\xi_{B^{\prime})|B)}}=\nabla_{(a}\xi_{b)}-\frac{1}{4}g_{ ab}\nabla_{c}\xi^{c}\). But we noticed that \(\nabla_{c}\xi^{c}=0\), so it reduces to the ordinary Killing operator. The left hand side of (2.4) can be written (lowering the indices \(AB\)) as \(-\Omega^{-1}\Phi_{c(a}J^{c}{}_{b)}\), where we used that \(Z_{a}{}^{b}=-\Omega^{-1}J^{b}{}_{a}\). Multiplying by \(J^{a}{}_{d}\) and renaming indices, (2.4) is equivalent to \[R_{ab}-R_{cd}J^{c}{}_{a}J^{d}{}_{b}=4J^{c}{}_{a}\nabla_{(c}\xi_{b )}, \tag{2.5}\] which proves our assertion about the Killing property of \(\xi^{a}\). The conformal Kahler condition also imposes restrictions on the (ASD) Weyl tensor. Again this can be seen from the Killing spinor equation: the condition \(0=\nabla_{A^{\prime}(A}\nabla^{A^{\prime}}_{B}K_{CD)}\) leads to \(\Psi_{(ABC}{}^{E}K_{D)E}=0\) (where \(\Psi_{ABCD}\) is the ASD Weyl curvature spinor), which implies that \(\Psi_{ABCD}\) is type D in the Petrov classification (the full Weyl tensor is generically type \(D\otimes I\)). A simple way to show this is to use \(K_{AB}=\Omega^{-1}j_{AB}\) and decompose \(j_{AB}\) into principal spinors as in the first identity in (A.5): \(j_{AB}=2{\rm i}o_{(A}o_{B)}^{\dagger}\) (where \(o_{A}o^{\dagger A}=1\)). Then the condition \(\Psi_{(ABC}{}^{E}j_{D)E}=0\) implies \(\Psi_{ABCD}=6\Psi_{2}o_{(A}g_{B}o_{C}^{\dagger}o_{D)}^{\dagger}\), where \[\Psi_{2}:=\Psi_{ABCD}o^{A}o^{B}o^{\dagger C}o^{\dagger D}=C_{abcd }\ell^{a}m^{b}\tilde{m}^{c}n^{d}=-\frac{1}{8}C_{abcd}J^{ac}J^{bd} \tag{2.6}\] and \(\ell^{a},n^{a},m^{a},\tilde{m}^{a}\) is a (complex) null tetrad associated to \(o^{A}\) (cf. equation (A.6)). ### Field equations We will focus on the conformally self-dual and cosmological Einstein-Maxwell equations. 
The former are automatically solutions to conformal gravity. In view of Derdzinski's result [24], cf. the introduction 1, we will not be interested in the Bach-flat equations per se. #### 2.2.1 Conformal self-duality We say that a 4-dimensional, orientable Riemannian manifold is conformally (A)SD (or conformally half-flat) if the Weyl tensor satisfies \[C_{abcd}=\pm^{*}C_{abcd}=\pm\tfrac{1}{2}\varepsilon_{ab}{}^{mn}C _{mncd} \tag{2.7}\] where \(\varepsilon_{abcd}\) is the volume form, and where SD corresponds to the \(+\) sign and ASD to the \(-\) sign. In spinors, the SD equation is equivalent to \(\Psi_{ABCD}\equiv 0\), and the ASD equation is equivalent to \(\tilde{\Psi}_{A^{\prime}B^{\prime}C^{\prime}D^{\prime}}\equiv 0\). For a conformally Kahler manifold \((M,g_{ab},\kappa_{ab})\), we saw in (2.6) that the only non-trivial component of the ASD Weyl spinor is \(\Psi_{2}\), so conformal self-duality reduces simply to the scalar equation \(\Psi_{2}=0\). A convenient form for this equation can be obtained from the following: **Proposition 2.1**.: _Let \((M,g_{ab},\kappa_{ab})\) be conformally Kahler. Let \(\hat{g}_{ab}=\Omega^{2}g_{ab}\) be the corresponding Kahler metric, and let \(\hat{R}\) be its Ricci scalar. Then_ \[\Psi_{2}=\Omega^{2}\frac{\hat{R}}{12}. \tag{2.8}\] Proof.: If \(J^{a}{}_{b}=\kappa_{bc}g^{ca}\) is the complex structure and \(\hat{\nabla}_{a}\) is the Levi-Civita connection of \(\hat{g}_{ab}\), then \(\hat{\nabla}_{a}J^{b}{}_{c}=0\). From the integrability condition \([\hat{\nabla}_{a},\hat{\nabla}_{b}]J^{c}{}_{d}=0\), we get \(\hat{R}_{abcd}=\hat{R}_{abef}J^{e}{}_{c}J^{f}{}_{d}\), where \(\hat{R}_{abcd}\) is the Riemann tensor of \(\hat{g}_{ab}\). Contracting with \(\hat{g}^{ac}\hat{g}^{bd}\) and defining \(\hat{J}^{ac}=\hat{g}^{ec}J^{a}{}_{e}\), we find the Ricci scalar \(\hat{R}=\hat{R}_{abcd}\hat{J}^{ac}\hat{J}^{bd}\). Writing this in terms of the Weyl tensor (cf. [30, Eq. (3.2.28)]), one gets \(\hat{R}=-\frac{3}{2}\hat{C}_{abcd}\hat{J}^{ac}\hat{J}^{bd}\). The conformal transformation of (2.6) is \(\hat{\Psi}_{2}=-\frac{1}{8}\hat{C}_{abcd}\hat{J}^{ac}\hat{J}^{bd}\). Since the conformal weights of \(\hat{C}_{abcd}\) and \(\hat{J}^{ac}\) are \(+2\) and \(-2\) respectively, we get \(\hat{\Psi}_{2}=\Omega^{-2}\Psi_{2}\). Putting everything together, (2.8) follows. #### 2.2.2 Cosmological Einstein-Maxwell Given a 4-manifold \((M,g_{ab})\) and a 2-form \(F_{ab}\), the Einstein-Maxwell equations with cosmological constant \(\lambda\) (or cosmological Einstein-Maxwell, or Einstein-Maxwell-\(\lambda\) for short) are \[\begin{split} R_{ab}-\frac{R}{2}g_{ab}+\lambda g_{ab}& =2F_{ac}F_{b}{}^{c}-\frac{1}{2}g_{ab}F_{cd}F^{cd},\\ \nabla^{a}F_{ab}&=0=\nabla_{[a}F_{bc]}.\end{split} \tag{2.9}\] If \(\lambda<0\), the system (2.9) is the bosonic part of the field equations of gauged \(N=2\) supergravity in four dimensions. **Proposition 2.2**.: _Let \((M,g_{ab},\kappa_{ab})\) be a conformally Kahler Riemannian 4-manifold, whose Ricci tensor is invariant under the complex structure (equivalently, (2.3) is a Killing vector). Then the cosmological Einstein-Maxwell equations (2.9) are equivalent to the constancy of the scalar curvature: \(R=4\lambda\). 
The corresponding Maxwell field is \(F_{ab}=F_{ab}^{-}+F_{ab}^{+}\), where_ \[F_{ab}^{-}=\Omega^{2}\kappa_{ab},\qquad F_{ab}^{+}=\tfrac{1}{4}\Omega^{-2}( \rho_{ab}-\lambda\kappa_{ab}), \tag{2.10}\] _and \(\rho_{ab}=R_{bc}J^{c}{}_{a}\) is the Ricci form._ **Remark 2.3**.: _Proposition 2.2 is a generalisation of Flaherty's result [31], who showed that scalar-flat Kahler metrics are automatically solutions to the Einstein-Maxwell system. The extension to the conformally Kahler case has been a subject of interest in the mathematical literature of recent years, see e.g. [32, 33, 34]._ Proof of Proposition 2.2.: Let \(\kappa_{ab}=j_{AB}\epsilon_{A^{\prime}B^{\prime}}\) be the fundamental 2-form. The symplectic form is \(\hat{\kappa}_{ab}=\Omega^{2}\kappa_{ab}\). Since \(\hat{\kappa}_{ab}\) is ASD and closed, it satisfies Maxwell equations \(\nabla_{[a}\hat{\kappa}_{bc]}=0=\nabla^{a}\hat{\kappa}_{ab}\). In spinors, we have \(\hat{\kappa}_{ab}=\varphi_{AB}\epsilon_{A^{\prime}B^{\prime}}\), with \(\varphi_{AB}=\Omega^{2}j_{AB}\) and \[\nabla^{AA^{\prime}}\varphi_{AB}=0. \tag{2.11}\] The Ricci tensor is constrained by eq. (2.4) (or equiv. (2.5)). We see from (2.4)-(2.5) that \(\xi^{a}\) is a Killing vector if and only if the Ricci tensor \(R_{ab}\), or equivalently its trace-free part \(\Phi_{ab}\), is invariant under \(J^{a}{}_{b}\). That is, iff \(\Phi_{A^{\prime}B^{\prime}C(A}\hat{J}^{C}{}_{B)}=0\). Assuming this to be the case, we get \[\Phi_{ABA^{\prime}B^{\prime}}=2\varphi_{AB}\phi_{A^{\prime}B^{\prime}}, \tag{2.12}\] where \(\phi_{A^{\prime}B^{\prime}}\equiv\frac{1}{4}\Omega^{-4}\varphi^{AB}\Phi_{ABA^ {\prime}B^{\prime}}\). Now, the contracted Bianchi identities in spinor form are \(\nabla^{AA^{\prime}}\Phi_{ABA^{\prime}B^{\prime}}+\frac{1}{8}\nabla_{BB^{ \prime}}R=0\), see [27, Eq. (4.10.8)]. In view of (2.11), a short calculation gives \[\nabla^{AA^{\prime}}\phi_{A^{\prime}B^{\prime}}=-\tfrac{\Omega^{-4}}{16}\varphi ^{AB}\nabla_{BB^{\prime}}R.\] Thus, we see that \(\phi_{A^{\prime}B^{\prime}}\) also satisfies Maxwell equations \(\nabla^{AA^{\prime}}\phi_{A^{\prime}B^{\prime}}=0\) if and only if \(R\) is a constant, say \(R\equiv 4\lambda\). But (2.12) together with \(R=4\lambda\) are precisely the Einstein-Maxwell equations [27, Eq. (5.2.6)] (adapted to Euclidean signature, and setting Newton's gravitational constant equal to one). ### An expression for the metric **Proposition 2.4**.: _Let \((M,g_{ab},\kappa_{ab})\) be a conformally Kahler Riemannian 4-manifold, whose Ricci tensor is invariant under the complex structure. Then there are local coordinates \((\psi,x,y,z)\), real functions \(W(x,y,z),u(x,y,z)\), and a 1-form \(A(x,y,z)\) (with \(\partial_{\psi}\lrcorner A=0\)) such that the metric and the fundamental 2-form can be written respectively as_ \[g =W^{-1}(\mathrm{d}\psi+A)^{2}+W[\mathrm{d}z^{2}+e^{u}(\mathrm{d}x ^{2}+\mathrm{d}y^{2})], \tag{2.13}\] \[\kappa =(\mathrm{d}\psi+A)\wedge\mathrm{d}z+We^{u}\mathrm{d}x\wedge \mathrm{d}y. \tag{2.14}\] **Remark 2.5**.: _The expression (2.13) appears in many constructions related to Kahler geometry in four dimensions, under different assumptions. LeBrun [19] deduced (2.13) for scalar-flat Kahler metrics with symmetry, and Tod deduced (2.13) for one-sided-type-D Ricci-flat metrics [18]. 
In the current work, we only assume the conformally Kahler condition with symmetry._ Proof of Proposition 2.4.: We start by choosing an orthonormal coframe \((\beta^{0},\beta^{1},\beta^{2},\beta^{3})\), and we define the almost-complex structure \(J^{a}{}_{b}=\kappa_{bc}g^{ca}\), where \(\kappa=\beta^{0}\wedge\beta^{1}+\beta^{2}\wedge\beta^{3}\) as in (A.2). We assume that we chose the coframe such that \(J\) is the integrable complex structure of the hypothesis, and that the fundamental 2-form \(\kappa\) satisfies \(\mathrm{d}(\Omega^{2}\kappa)=0\) for some non-constant scalar field \(\Omega\). The hypothesis of \(J\)-invariance of the Ricci tensor of \(g_{ab}\) implies that the covector field \(\xi_{a}\) given by (2.3) is Killing, \(\nabla_{(a}\xi_{b)}=0\). We now construct a new orthonormal coframe \((\theta^{0},\theta^{1},\theta^{2},\theta^{3})\) as described in appendix A, using an almost-hyper-Hermitian structure \((J_{1},J_{2},J_{3})\) with \(J_{1}\equiv J\). First, introduce a coordinate \(\psi\) parametrizing the orbits of \(\xi^{a}\), that is \(\xi^{a}\partial_{a}=\partial_{\psi}\). Defining \[W^{-1}:=g_{ab}\xi^{a}\xi^{b} \tag{2.15}\] and lowering an index, we have \(\xi_{a}\mathrm{d}x^{a}=W^{-1}(\mathrm{d}\psi+A)\) for some 1-form \(A\). We normalize as \(e_{0}:=W^{1/2}\xi\), and we define \(\theta^{0}:=g(e_{0},\cdot)=W^{-1/2}(\mathrm{d}\psi+A)\). We also put \(e_{1}:=Je_{0}\) and \(\theta^{1}:=g(e_{1},\cdot)\). From (2.3) we see that \(\xi_{a}J^{a}{}_{b}=-\partial_{b}\Omega^{-1}\), so it follows that \(\theta^{1}=W^{1/2}\xi_{\lrcorner\kappa}=W^{1/2}\mathrm{d}z\), where \[z:=\Omega^{-1}. \tag{2.16}\] The remaining two elements \(\theta^{2},\theta^{3}\) of the new coframe are obtained by first defining \(e_{2}:=J_{2}e_{0}\), \(e_{3}:=J_{3}e_{0}\), and then \(\theta^{2}:=g(e_{2},\cdot)=W^{1/2}\xi_{\lrcorner\kappa_{2}}\), \(\theta^{3}:=g(e_{3},\cdot)=W^{1/2}\xi_{\lrcorner\kappa_{3}}\). We see that \[\theta^{2}+\mathrm{i}\theta^{3}=W^{1/2}\,\xi\lrcorner(\kappa_{2}+\mathrm{i} \kappa_{3}).\] Now, we see from (A.3) that \(\kappa_{2}+\mathrm{i}\kappa_{3}=2\ell\wedge m\), where \(\ell=\frac{1}{\sqrt{2}}(\beta^{0}+\mathrm{i}\beta^{1})\), \(m=\frac{1}{\sqrt{2}}(\beta^{2}+\mathrm{i}\beta^{3})\) are type-\((1,0)\) forms of \(J_{1}\). Integrability of \(J_{1}\) implies the existence of holomorphic coordinates \(z^{0},z^{1}\) such that \(\mathrm{d}z^{0},\mathrm{d}z^{1}\) span type-\((1,0)\) forms. So \(\ell\) and \(m\) can be expressed as linear combinations of \(\mathrm{d}z^{0},\mathrm{d}z^{1}\). In particular, this implies that \(\mathrm{d}z^{0}\wedge\mathrm{d}z^{1}=\chi\ell\wedge m\) for some real scalar field \(\chi\). On the other hand, using Cartan's formula for the Lie derivative, we have \(\pounds_{\xi}(\mathrm{d}z^{0}\wedge\mathrm{d}z^{1})=\mathrm{d}[\xi\lrcorner( \mathrm{d}z^{0}\wedge\mathrm{d}z^{1})]\). Since \(\xi\) preserves the complex structure, we can choose holomorphic coordinates such that \(\pounds_{\xi}(\mathrm{d}z^{0}\wedge\mathrm{d}z^{1})=0\), thus, there is a complex scalar \(\zeta\) such that \(\xi\lrcorner(\mathrm{d}z^{0}\wedge\mathrm{d}z^{1})=\mathrm{d}\zeta\). So: \[\xi\lrcorner(\kappa_{2}+\mathrm{i}\kappa_{3})=2\,\xi\lrcorner(\ell\wedge m)=2 \chi^{-1}\xi\lrcorner(\mathrm{d}z^{0}\wedge\mathrm{d}z^{1})=2\chi^{-1} \mathrm{d}\zeta. 
\tag{2.17}\] Separating \(\zeta\) into real and imaginary parts as \(\zeta\equiv\frac{1}{\sqrt{2}}(x+\mathrm{i}y)\), we thus get \[\theta^{2}+\mathrm{i}\theta^{3}=\sqrt{2}W^{1/2}\chi^{-1}(\mathrm{d}x+\mathrm{ i}\mathrm{d}y).\] Finally, defining a real function \(u\) by \(e^{u}:=2\chi^{-2}\), and putting everything together, we get (2.13). The expression (2.14) follows form \(\kappa=\beta^{0}\wedge\beta^{1}+\beta^{2}\wedge\beta^{3}=\theta^{0}\wedge\theta^ {1}+\theta^{2}\wedge\theta^{3}\). To summarize, the key variables are defined by: \[z=\Omega^{-1},\qquad W^{-1}=g_{ab}\xi^{a}\xi^{b},\qquad e^{u/2}(\mathrm{d}x+ \mathrm{i}\mathrm{d}y)=2\xi\lrcorner(\ell\wedge m). \tag{2.18}\] #### 2.3.1 The monopole equation We now derive a few identities that will be useful for the proof of some results below. The Hermitian expression of the metric is \(g=2g_{\alpha\bar{\beta}}\mathrm{d}z^{\alpha}\mathrm{d}\bar{z}^{\beta}\), where \(g_{\alpha\bar{\beta}}=g(\partial_{\alpha},\partial_{\bar{\beta}})\) (with \(\partial_{\alpha}=\partial/\partial z^{\alpha}\), \(\partial_{\bar{\alpha}}=\partial/\partial\bar{z}^{\alpha}\)), and \(z^{\alpha}=(z^{0},z^{1})\) are complex holomorphic coordinates. These coordinates can be obtained from the fact that \(\mathrm{d}z^{\alpha}\) must be linear combinations of type-\((1,0)\) forms, which are spanned e.g. by \(\theta^{0}+\mathrm{i}\theta^{1}\) and \(\theta^{2}+\mathrm{i}\theta^{3}\). Recalling that \(\partial_{\psi}\) is Killing, we have \[\mathrm{d}z^{0} =\tfrac{1}{\sqrt{2}}\left[\mathrm{d}\psi+A+iW\mathrm{d}z+(f+ih) \mathrm{d}x+(-h+\mathrm{i}f)\mathrm{d}y\right], \tag{2.19a}\] \[\mathrm{d}z^{1} =\tfrac{1}{\sqrt{2}}(\mathrm{d}x+\mathrm{i}\mathrm{d}y), \tag{2.19b}\] for some real functions \(f,h\) (which must exist due to integrability of \(J\)). Using that (by definition) \(J(\mathrm{d}z^{1})=\mathrm{i}\mathrm{d}z^{1}\), and recalling (2.3) and (2.16), we also note the identities \[J(\mathrm{d}x)=-\mathrm{d}y,\qquad J(\mathrm{d}y)=\mathrm{d}x,\qquad J( \mathrm{d}z)=\xi. \tag{2.20}\] The vector fields \(\partial_{\alpha}\) can be computed using \(\mathrm{d}z^{\alpha}(\partial_{\beta})=\delta_{\beta}^{\alpha}\), \(\mathrm{d}\bar{z}^{\alpha}(\partial_{\beta})=0\). Tedious calculations then give \[\partial_{z^{0}} =\tfrac{1}{\sqrt{2}}\left[(1+\tfrac{\mathrm{i}A_{z}}{W}) \partial_{\psi}-\tfrac{\mathrm{i}}{W}\partial_{z}\right], \tag{2.21a}\] \[\partial_{z^{1}} =\tfrac{1}{\sqrt{2}}\left[-\left[(1+\tfrac{\mathrm{i}A_{z}}{W})( f+\mathrm{i}h)+(A_{x}-\mathrm{i}A_{y})\right]\partial_{\psi}+\mathrm{i}\tfrac{(f+ \mathrm{i}h)}{W}\partial_{z}+\partial_{x}-\mathrm{i}\partial_{y}\right], \tag{2.21b}\] where we decomposed \(A\equiv A_{x}\mathrm{d}x+A_{y}\mathrm{d}y+A_{z}\mathrm{d}z\). We then find the metric coefficients \(g_{\alpha\bar{\beta}}\) to be \[g_{0\bar{0}}=\frac{1}{W},\qquad g_{0\bar{1}}=-\frac{(f-\mathrm{i}h)}{W},\qquad g _{1\bar{0}}=-\frac{(f+\mathrm{i}h)}{W},\qquad g_{1\bar{1}}=\frac{f^{2}+h^{2}}{ W}+We^{u}. \tag{2.22}\] We deduce from here that \[e^{u}=g_{0\bar{0}}g_{1\bar{1}}-g_{0\bar{1}}g_{1\bar{0}}=\det(g_{\alpha\bar{ \beta}}). 
\tag{2.23}\] The condition \(\mathrm{d}^{2}z^{\alpha}=0\) gives \[f_{x} =h_{y}, \tag{2.24a}\] \[f_{z} =W_{y}=\partial_{x}A_{z}-\partial_{z}A_{x},\] (2.24b) \[h_{z} =W_{x}=-\partial_{y}A_{z}+\partial_{z}A_{y},\] (2.24c) \[h_{x}+f_{y}=\partial_{x}A_{y}-\partial_{y}A_{x}=-z^{2}\partial_ {z}(\tfrac{We^{u}}{z^{2}}), \tag{2.24d}\] where the last equality in (2.24d) follows from the conformal Kahler condition \(\mathrm{d}(z^{-2}\kappa)=0\) (using that \(\kappa\) is given by (2.14)). Noting that the last three equations give an expression for \(\mathrm{d}A\) in terms of derivatives of \(W\), the integrability condition \(\mathrm{d}^{2}A=0\) then leads to \[W_{xx}+W_{yy}+\partial_{z}\left[z^{2}\partial_{z}(\tfrac{We^{u}}{z^{2}}) \right]=0. \tag{2.25}\] All of the above identities are valid for a generic Hermitian metric \(g\) of the form (2.13). In particular this also applies to the Kahler metric \(\hat{g}=z^{-2}g\) and the (closed) Kahler form \(\hat{\kappa}=z^{-2}\kappa\), which can be written as \[\hat{g} =\hat{W}^{-1}(\mathrm{d}\psi+A)^{2}+\hat{W}(\mathrm{d}\hat{z}^{2} +e^{\hat{u}}(\mathrm{d}x^{2}+\mathrm{d}y^{2})), \tag{2.26}\] \[\hat{\kappa} =(\mathrm{d}\psi+A)\wedge\mathrm{d}\hat{z}+\hat{W}e^{\hat{u}} \mathrm{d}x\wedge\mathrm{d}y, \tag{2.27}\] where we defined \[\hat{W}=z^{2}W,\qquad\hat{z}=-\frac{1}{z},\qquad e^{\hat{a}}=\frac{e^{ u}}{z^{4}}. \tag{2.28}\] The expression for \(\mathrm{d}A\) obtained before can now be written as \[\mathrm{d}A=(\hat{W}e^{\hat{u}})_{\hat{z}}\mathrm{d}x\wedge\mathrm{ d}y+\hat{W}_{x}\mathrm{d}y\wedge\mathrm{d}\hat{z}+\hat{W}_{y}\mathrm{d}\hat{z} \wedge\mathrm{d}x, \tag{2.29}\] whereas eq. (2.25) becomes a monopole equation: \[\hat{W}_{xx}+\hat{W}_{yy}+(\hat{W}e^{\hat{a}})_{\hat{z}\hat{z}}=0. \tag{2.30}\] **Remark 2.6**.: _The above identities relate the three unknowns \(u,W,A\). If we know \(u\), then we solve (2.30) to find \(W\), and then we find \(A\) by integrating (2.29) (or (2.24b),(2.24c),(2.24d)). To find an equation for \(u\), we must impose field equations, since so far the only assumption is the conformal Kahler condition with symmetry._ ### Curvature #### 2.4.1 The Ricci scalar and the \(Su(\infty)\) Toda equation **Proposition 2.7**.: _Consider a Kahler 4-manifold \((M,\hat{g}_{ab},\hat{\kappa}_{ab})\), where the metric and Kahler form can be written as in (2.26)-(2.27), and \(\partial_{\psi}\) is a Killing field. Then the Ricci scalar of \(\hat{g}_{ab}\) is_ \[\hat{R}=-\frac{1}{\hat{W}e^{\hat{a}}}\left[\hat{u}_{xx}+\hat{u}_{ yy}+(e^{\hat{a}})_{\hat{z}\hat{z}}\right]. \tag{2.31}\] Proof.: We use a well-known formula for the Ricci scalar of any Kahler metric: \[\hat{R}=-2\hat{g}^{\alpha\bar{\beta}}\partial_{\alpha}\partial_{ \bar{\beta}}\log\hat{\Delta}, \tag{2.32}\] where \(\hat{\Delta}:=\det\hat{g}_{\alpha\bar{\beta}}\), and \(\hat{g}^{0\bar{0}}=\hat{\Delta}^{-1}\hat{g}_{1\bar{1}}\), \(\hat{g}^{0\bar{1}}=-\hat{\Delta}^{-1}\hat{g}_{1\bar{0}}\), \(\hat{g}^{1\bar{0}}=-\hat{\Delta}^{-1}\hat{g}_{0\bar{1}}\), \(\hat{g}^{1\bar{1}}=\hat{\Delta}^{-1}\hat{g}_{0\bar{0}}\). From the hatted version of (2.23) we see that \(\hat{\Delta}=e^{\hat{a}}\), and using also (2.22) we find \[\hat{R}=-\frac{2}{e^{\hat{a}}}\left[(f^{2}+h^{2}+\hat{W}^{2}e^{ \hat{a}})\partial_{0}\partial_{\bar{0}}+f(\partial_{0}\partial_{\bar{1}}+ \partial_{1}\partial_{\bar{0}})+\mathrm{i}h(\partial_{0}\partial_{\bar{1}}- \partial_{1}\partial_{\bar{0}})+\partial_{1}\partial_{\bar{1}}\right]\hat{u}.\] A lengthy and tedious computation of \(\partial_{0}\partial_{\bar{0}}\hat{u}\), etc. 
(using (2.21) and recalling \(\partial_{\psi}\hat{u}=0\)) gives \[\hat{R}=-\frac{1}{\hat{W}e^{\hat{a}}}\left[\hat{u}_{xx}+\hat{u}_{ yy}+e^{\hat{a}}\hat{u}_{\hat{z}\hat{z}}-\frac{1}{\hat{W}}(e^{\hat{a}}\hat{W}_{ \hat{z}}+h_{x}+f_{y})\hat{u}_{\hat{z}}\right].\] Using the hatted version of (2.24d), we see that \(h_{x}+f_{y}=-\partial_{\hat{z}}(\hat{W}e^{\hat{a}})\), thus (2.31) follows. **Proposition 2.8**.: _Let \((M,g_{ab},\kappa_{ab})\) be a conformally Kahler Riemannian 4-manifold, where the metric and fundamental 2-form can be written as in (2.13)-(2.14), and where the conformal factor is \(\Omega=z^{-1}\) and \(\partial_{\psi}\) is a Killing field. (In particular, the covector (2.3) is not assumed to be Killing.) Then the Ricci scalar of \(g_{ab}\) is_ \[R=-\frac{1}{We^{u}}\left[u_{xx}+u_{yy}+(e^{u})_{zz}\right]. \tag{2.33}\] Proof.: For a general metric \(g_{ab}\), if \(\hat{g}_{ab}\equiv\Omega^{2}g_{ab}\), the Ricci scalars of \(g_{ab}\) and \(\hat{g}_{ab}\) are related by a standard formula [30, Eq. (D.9)], which can be written as \[R=\Omega^{2}(\hat{R}+6\Omega^{-3}\square\Omega). \tag{2.34}\] In our case, we assume \(\hat{g}_{ab}\) to be Kahler, and \(\Omega=z^{-1}\). \(\hat{R}\) was computed in (2.31), so we see that we must compute \(z^{3}\square z^{-1}\). To do this, we use the general formula \(\square\Phi=\frac{1}{\sqrt{8}}\partial_{a}(\sqrt{\mathbb{g}}\,g^{ab}\partial_ {b}\Phi)\) valid for an arbitrary function \(\Phi\), with \(\sqrt{\mathbb{g}}=\sqrt{\det(g_{ab})}\). In terms of \(g_{\alpha\bar{\beta}}\), we have \(\det(g_{ab})=[\det(g_{\alpha\bar{\beta}})]^{2}\), so \(\sqrt{\mathbb{g}}=e^{u}\), and \[\square\Phi=e^{-u}[ \partial_{0}(g_{1\bar{1}}\partial_{\bar{0}}-g_{1\bar{0}}\partial _{\bar{1}})\Phi+\partial_{1}(g_{0\bar{0}}\partial_{\bar{1}}-g_{0\bar{1}} \partial_{\bar{0}})\Phi\] \[+\partial_{\bar{0}}(g_{1\bar{1}}\partial_{0}-g_{0\bar{1}} \partial_{1})\Phi+\partial_{\bar{1}}(g_{0\bar{0}}\partial_{1}-g_{1\bar{0}} \partial_{0})\Phi].\] Using formulas (2.21) and (2.22), and assuming that \(\Phi\) depends only on \(z\), \(\Phi=\Phi(z)\), we get \(\square\Phi=\frac{1}{We^{u}}\partial_{z}(e^{u}\partial_{z}\Phi)\). Replacing now \(\Phi=1/z\), \[\square z^{-1}=\frac{2}{Wz^{3}}\left(1-\frac{zu_{z}}{2}\right). \tag{2.35}\] Using then (2.34) and (2.31), \[R=\frac{1}{z^{2}}\left[-\frac{z^{2}}{We^{u}}\left[u_{xx}+u_{yy}+z^{2}\partial_ {z}\left(z^{2}\partial_{z}\left(\frac{e^{u}}{z^{4}}\right)\right)\right]- \frac{12}{W}\left(\frac{zu_{z}}{2}-1\right)\right]\] which then gives (2.33), after using the identity \(z^{2}\partial_{z}(z^{2}\partial_{z}(F/z^{4}))=F_{zz}-6F_{z}/z+12F/z^{2}\) valid for any function \(F\). The \(SU(\infty)\) Toda equation for a function \(v\) is \(v_{xx}+v_{yy}+(e^{v})_{zz}=0\). In view of Propositions 2.1, 2.2, 2.7 and 2.8, we see that both the conformal self-duality equations and the Einstein-Maxwell-\(\lambda\) equations with \(\lambda=0\) reduce to the Toda equation, for \(\hat{u}\) and \(u\) respectively. #### 2.4.2 The Ricci form **Theorem 2.9**.: _Let \((M,g_{ab},\kappa_{ab})\) be a conformally Kahler Riemannian 4-manifold whose Ricci tensor is invariant under the complex structure, so that \(\xi_{a}\) given by (2.3) is a Killing field and \(g_{ab}\) and \(\kappa_{ab}\) have the expressions (2.13)-(2.14). 
Then the Ricci form \(\rho_{ab}=R_{bc}J^{c}{}_{a}\) is_ \[\rho=\frac{1}{2}We^{u}R\,\mathrm{d}x\wedge\mathrm{d}y-\frac{W}{z^{2}}\left[ \tilde{\ast}\mathrm{d}-\xi\wedge\mathrm{d}\right]\left(\frac{W_{0}}{W}\right), \tag{2.36}\] _where \(R\) is the Ricci scalar (2.33), we defined_ \[W_{0}:=z\left(1-\frac{zu_{z}}{2}\right) \tag{2.37}\] _and, for an arbitrary function \(\phi\), the operator \(\tilde{\ast}\mathrm{d}\) is_ \[\tilde{\ast}\mathrm{d}\phi:=\phi_{x}\mathrm{d}y\wedge\mathrm{d}z+\phi_{y} \mathrm{d}z\wedge\mathrm{d}x+e^{u}\phi_{z}\mathrm{d}x\wedge\mathrm{d}y. \tag{2.38}\] _In addition, the trace-free Ricci form can be expressed as_ \[\begin{split}\rho-\frac{R}{4}\kappa=&\left[\frac{R }{4}-\frac{1}{z^{2}}\partial_{z}(\frac{W_{0}}{W})\right](-(\mathrm{d}\psi+A) \wedge\mathrm{d}z+We^{u}\mathrm{d}x\wedge\mathrm{d}y)\\ &+\frac{W}{z^{2}}\left[\xi\wedge(\mathrm{d}x\partial_{x}+ \mathrm{d}y\partial_{y})-\mathrm{d}z\wedge(\mathrm{d}x\partial_{y}-\mathrm{d} y\partial_{x})\right](\frac{W_{0}}{W}).\end{split} \tag{2.39}\] **Remark 2.10**.: 1. _From (_2.36_) we see that Ricci-flatness_ \(\rho_{ab}=0\) _reduces to_ \(\frac{W_{0}}{W}=\gamma=\mathrm{const}\)_, together with the_ \(SU(\infty)\) _Toda equation_ \(u_{xx}+u_{yy}+(e^{u})_{zz}=0\)_, so we recover Tod's result_ _[_18_]__. Comparison to the Schwarzschild case (cf. section_ 2.6 _below) suggests to use the notation_ \(\gamma\equiv-M\)_, where_ \(M\) _is Schwarzschild's mass._ 2. _From (_2.39_), the Einstein condition_ \(\rho-\frac{R}{4}\kappa=0\) _(i.e._ \(R_{ab}=\lambda g_{ab}\)_) is satisfied if and only if_ \(u\) _satisfies the modified Toda equation_ \[u_{xx}+u_{yy}+(e^{u})_{zz}=-4\lambda We^{u}\] (2.40) _and_ \(\frac{W_{0}}{W}\) _is a function of only_ \(z\) _satisfying_ \(\frac{1}{z^{2}}\frac{\mathrm{d}}{\mathrm{d}z}(\frac{W_{0}}{W})=\lambda\)_, whose solution is_ \[W\equiv W_{\lambda}:=\frac{W_{0}}{\frac{\lambda}{3}z^{3}+\gamma}=\frac{z\left( 1-\frac{zu_{z}}{2}\right)}{\frac{\lambda}{3}z^{3}+\gamma}\] (2.41) _where_ \(\gamma\) _is an integration constant. This was also obtained by Tod in_ _[_18_]__._ 3. _In the Einstein-Maxwell-_\(\lambda\) _case_ \(R=4\lambda\)_, in view of (_2.10_), formula (_2.39_) gives us an explicit expression for the SD part of the Maxwell field._ Proof of Theorem 2.9.: We start by recalling that for any two metrics \(g_{ab}\) and \(\hat{g}_{ab}=\Omega^{2}g_{ab}\), whose Ricci tensors are \(R_{ab}\) and \(\hat{R}_{ab}\) respectively, the relation between \(R_{ab}\) and \(\hat{R}_{ab}\) is given by a standard conformal transformation formula (our reference is [30, Eq. (D.8)]), that in four dimensions can be written as \[R_{ab}=\hat{R}_{ab}-2\Omega\hat{\nabla}_{a}\hat{\nabla}_{b}\Omega^{-1}+4 \Omega^{2}(\hat{\nabla}_{a}\Omega^{-1})(\hat{\nabla}_{b}\Omega^{-1})-\Sigma \hat{g}_{ab}, \tag{2.42}\] with \[\Sigma:=\Omega\hat{g}^{ab}\hat{\nabla}_{a}\hat{\nabla}_{b}\Omega^{-1}+\Omega^ {2}\hat{g}^{ab}(\hat{\nabla}_{a}\Omega^{-1})(\hat{\nabla}_{b}\Omega^{-1}). \tag{2.43}\] Assuming now that \(\hat{g}_{ab}\) is Kahler, we recall from the previous sections that \(\hat{\nabla}_{a}J^{b}{}_{c}=0\), \(\hat{\nabla}_{a}\Omega^{-1}=-\xi_{b}J^{b}{}_{a}\), where \(\xi_{a}\) is defined in (2.2). At this point we are not assuming that \(\xi_{a}\) is Killing. Contracting (2.42) with \(J^{b}{}_{c}\), we get \[R_{ab}J^{b}{}_{c}=-\hat{\rho}_{ac}-2\Omega\hat{\nabla}_{a}\xi_{c}-4\Omega^{2}J ^{b}{}_{a}\xi_{b}\xi_{c}+\Sigma\hat{\kappa}_{ac}, \tag{2.44}\] where \(\hat{\rho}_{ac}\equiv\hat{R}_{cb}J^{b}{}_{b}\) is the Ricci form of \(\hat{g}_{ab}\). 
From (2.5), we know that the Ricci tensor \(R_{ab}\) is \(J\)-invariant if and only if \(\xi_{a}\) is Killing, \(\nabla_{(a}\xi_{b)}=0\). Assuming this to be the case, \(R_{ab}J^{b}{}_{c}\) is anti-symmetric and so we can define the Ricci form of \(g_{ab}\), \(\rho_{ac}:=R_{cb}J^{b}{}_{a}=-\rho_{ca}\). Using that the two terms with \(\xi_{a}\) in (2.44) are anti-symmetric in \(ac\) (not separately but together), after some manipulations we get the formula \[\rho=\hat{\rho}-\Sigma\hat{\kappa}+\Omega^{-1}\mathrm{d}(\Omega^{2}\xi). \tag{2.45}\] Now, we want to express (2.45) in terms of \(u,W\). For the scalar \(\Sigma\), defined in (2.43), note that it can be written as \(\Sigma=\Omega\hat{\Box}\Omega^{-1}+W^{-1}\), where \(\hat{\Box}=\hat{g}^{ab}\hat{\nabla}_{a}\hat{\nabla}_{b}\). Alternatively, we have \(\Omega\hat{\Box}\Omega^{-1}=-\Omega^{-3}\Box\Omega=-z^{3}\Box z^{-1}\). We already computed \(\Box z^{-1}\) in (2.35), so \[\Sigma=\frac{zu_{z}-1}{W}. \tag{2.46}\] For the Ricci form \(\hat{\rho}\) of the Kahler metric \(\hat{g}_{ab}\), we use the well-known formula \(\hat{\rho}=-\mathrm{i}\partial\bar{\partial}\log\det(\hat{g}_{\alpha\bar{ \beta}})\), where \(\partial=\mathrm{d}z^{\alpha}\wedge\partial_{\alpha}\) and \(\bar{\partial}=\mathrm{d}\bar{z}^{\alpha}\wedge\partial_{\bar{\alpha}}\) are Dolbeault operators defined by the complex structure. Recalling from (the hatted version of) formula (2.23) that \(\log\det(\hat{g}_{\alpha\beta})=\hat{u}\), we have \(\hat{\rho}=-i\partial\bar{\partial}\hat{u}\). In addition, for any smooth function \(f\), we have the identity \(-i\partial\bar{\partial}f=\frac{1}{2}\mathrm{d}(J\mathrm{d}f)\). Putting then \(f=\hat{u}\) and using identities (2.20), we get \[\hat{\rho}=\tfrac{1}{2}\left[-(\hat{u}_{xx}+\hat{u}_{yy})\mathrm{d}x\wedge \mathrm{d}y-\hat{u}_{xz}\mathrm{d}z\wedge\mathrm{d}y+\hat{u}_{yz}\mathrm{d}z \wedge\mathrm{d}x+\mathrm{d}\hat{u}_{z}\wedge\xi+\hat{u}_{z}\mathrm{d}\xi\right].\] Replacing this expression, together with (2.46) and (2.27), in equation (2.45): \[\rho= \left[-\tfrac{1}{2}(\hat{u}_{xx}+\hat{u}_{yy})-\tfrac{(zu_{z}-1)} {z^{2}}e^{u}\right]\mathrm{d}x\wedge\mathrm{d}y-\tfrac{1}{2}\hat{u}_{xz} \mathrm{d}z\wedge\mathrm{d}y+\tfrac{1}{2}\hat{u}_{yz}\mathrm{d}z\wedge \mathrm{d}x\] \[+\left[\tfrac{1}{2}\mathrm{d}\hat{u}_{z}+\tfrac{(zu_{z}-1)}{z^{2 }}\mathrm{d}z-\tfrac{2}{z^{2}}\mathrm{d}z\right]\wedge\xi+\left(\tfrac{1}{2} \hat{u}_{z}+\tfrac{1}{z}\right)\mathrm{d}\xi.\] Using the relation (2.28) between \(\hat{u}\) and \(u\), after some tedious computations we arrive at the unenlightening expression \[\rho= -\tfrac{1}{2}\left[u_{xx}+u_{yy}+\tfrac{2(zu_{z}-1)}{z^{2}}e^{u}+ \tfrac{2z}{W}(\tfrac{zu_{z}}{2}-1)\partial_{z}(\tfrac{We^{u}}{z^{2}})\right] \mathrm{d}x\wedge\mathrm{d}y\] \[+\tfrac{1}{2}\left[u_{yz}-\tfrac{2}{zW}(\tfrac{zu_{z}}{2}-1)W_{y} \right]\left(\mathrm{d}z\wedge\mathrm{d}x+\mathrm{d}y\wedge\mathrm{d}\xi\right)\] \[+\tfrac{1}{2}\left[u_{xz}-\tfrac{2}{zW}(\tfrac{zu_{z}}{2}-1)W_{x} \right]\left(-\mathrm{d}z\wedge\mathrm{d}y+\mathrm{d}x\wedge\mathrm{d}\xi\right)\] \[+\tfrac{1}{2}\left[\tfrac{1}{z^{2}}(z^{2}u_{zz}+2zu_{z}-2)-\tfrac {2}{zW}(\tfrac{zu_{z}}{2}-1)W_{z}\right]\mathrm{d}z\wedge\xi.\] Now, defining \(W_{0}\) as in (2.37), we have the identities \[u_{yz}-\tfrac{2}{zW}\left(\tfrac{zu_{z}}{2}-1\right)W_{y}= -\tfrac{2W}{z^{2}}\partial_{y}\left(\tfrac{W_{0}}{W}\right),\] \[u_{xz}-\tfrac{2}{zW}\left(\tfrac{zu_{z}}{2}-1\right)W_{x}= -\tfrac{2W}{z^{2}}\partial_{x}\left(\tfrac{W_{0}}{W}\right),\] 
\[\tfrac{1}{z^{2}}(z^{2}u_{zz}+2zu_{z}-2)-\tfrac{2}{zW}(\tfrac{zu_{ z}}{2}-1)W_{z}= -\tfrac{2W}{z^{2}}\partial_{z}\left(\tfrac{W_{0}}{W}\right),\] which lead to \[\rho= -\tfrac{1}{2}\left[u_{xx}+u_{yy}+\tfrac{2e^{u}}{z^{2}}(1-\tfrac{W _{0}}{W}W_{z}-W_{0}u_{z})\right]\mathrm{d}x\wedge\mathrm{d}y\] \[-\tfrac{W}{z^{2}}\mathrm{d}z\wedge(\mathrm{d}x\,\partial_{y}- \mathrm{d}y\,\partial_{x})(\tfrac{W_{0}}{W})-\tfrac{W}{z^{2}}\mathrm{d}( \tfrac{W_{0}}{W})\wedge\xi.\] Defining the operator \(\tilde{\ast}\mathrm{d}\) as in (2.38), we have \[\mathrm{d}z\wedge(\mathrm{d}x\,\partial_{y}-\mathrm{d}y\,\partial_{x})( \tfrac{W_{0}}{W})=\tilde{\ast}\mathrm{d}(\tfrac{W_{0}}{W})-e^{u}\partial_{z}( \tfrac{W_{0}}{W})\mathrm{d}x\wedge\mathrm{d}y,\] which then leads to our final formula (2.36). Having shown this, the proof of (2.39) requires only a few more tedious but straightforward computations, so we will omit them. #### 2.4.3 The ASD Weyl tensor From eq. (2.6) we know that the only non-trivial component of the ASD Weyl tensor is \(\Psi_{2}\). In addition, from eq. (2.8) we also know that \(\Psi_{2}\) is essentially given by the Ricci scalar \(\hat{R}\) of the Kahler metric, which in turn is given by (2.31) in terms of \(u,W\). An alternative expression that can be useful in practice can be given in terms of the function \(W_{0}\) defined in (2.37): a short calculation gives \[\Psi_{2}=-\frac{1}{z^{3}}\frac{W_{0}}{W}+\frac{R}{12}. \tag{2.47}\] In particular, notice that in the Ricci-flat and Einstein cases, we recover a well-known relation between \(\Psi_{2}\) and the conformal factor: \(\Psi_{2}\propto\Omega^{3}\) (recall \(\Omega=z^{-1}\)). ### Ambi-Kahler structures It may happen that a geometry \((M,g_{ab})\) is conformally Kahler w.r.t. _both_ ASD and SD orientations: this is called an ambi-Kahler structure [35]. In this case we have two integrable complex structures \((J_{\pm})^{a}{}_{b}\) and two Kahler metrics \(\hat{g}^{\pm}_{ab}=\Omega^{2}_{\pm}g_{ab}\). As in (2.3), we now have \[\xi^{\pm}_{b}=(J_{\pm})^{a}{}_{b}\partial_{a}\Omega^{-1}_{\pm}. \tag{2.48}\] If at least one of \(\xi^{a}_{\pm}\) is a Killing vector, then all of the results of the previous sections apply w.r.t. the corresponding orientation \(\pm\). If both \(\xi^{a}_{\pm}\) are Killing, then we will have two Toda formulations, one for each orientation: the corresponding Toda variables \((u_{\pm},W_{\pm},z_{\pm},x_{\pm},y_{\pm})\) are the analogue of (2.18), \[z_{\pm}=\Omega^{-1}_{\pm},\qquad W^{-1}_{\pm}=g_{ab}\xi^{a}_{\pm}\xi^{b}_{\pm},\qquad e^{u_{\pm}/2}(\mathrm{d}x_{\pm}+\mathrm{i}\mathrm{d}y_{\pm})=2\xi_{\pm }\lrcorner(\ell\wedge m^{\pm}), \tag{2.49}\] where in the last equality we defined \(m^{+}\equiv m\), \(m^{-}\equiv\bar{m}\). Analogously to (2.37), we also put \(W^{\pm}_{0}=z_{\pm}(1-\frac{1}{2}z_{\pm}\partial_{z_{\pm}}u_{\pm})\). The Weyl tensor of an ambi-Kahler structure is of Petrov type \(D\otimes D\): this means that both Weyl curvature spinors \(\Psi_{ABCD}\) and \(\tilde{\Psi}_{A^{\prime}B^{\prime}C^{\prime}D^{\prime}}\) are type D. The only non-trivial components are \(\Psi^{-}_{2}\equiv\Psi_{2}\) and \(\Psi^{+}_{2}\equiv\tilde{\Psi}_{2}\), which can be computed as \[\Psi^{\pm}_{2}=\Omega^{2}_{\pm}\frac{\hat{R}_{\pm}}{12}=-\frac{1}{z^{3}_{\pm} }\frac{W^{\pm}_{0}}{W_{\pm}}+\frac{R}{12}, \tag{2.50}\] where \(\hat{R}_{\pm}\) are the Ricci scalars of the Kahler metrics \(\hat{g}^{\pm}_{ab}\). (Recall that (2.50) is valid regardless of whether (2.48) are Killing or not.) 
### Examples Flat space.Consider the function \(u=u(x,z)\) given by \[e^{u}=\frac{z^{2}}{\cosh^{2}x}. \tag{2.51}\] Replacing in (2.33) and (2.37), we get \(R=0=W_{0}\). Using then formulas (2.36) and (2.47), we see that \(\rho_{ab}=0=\Psi_{2}\), which means \(R_{ab}=0=\Psi_{ABCD}\), thus the solution is hyper-Kahler (as it is self-dual and Ricci-flat). The remaining function \(W\) is determined by solving (2.25), which then determines the 1-form \(A\) by integrating (2.24b)-(2.24c)-(2.24d). Different choices of solutions to (2.25) will give different hyper-Kahler metrics. The simple case \(W=z^{-1}\), \(A=\tanh(x)\mathrm{d}y\) gives (locally) flat space. This can be seen by making the coordinate transformation \(x=\log\tan(\theta/2)\), \(y=-\varphi\), \(z=\varrho^{2}/4\), which brings the metric to the form \[g=\mathrm{d}\varrho^{2}+\frac{\varrho^{2}}{4}[(\mathrm{d}\psi+\cos\theta \mathrm{d}\varphi)^{2}+(\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\varphi^{ 2})], \tag{2.52}\] which is Euclidean 4-space expressed in terms of Euler angles \((\psi,\theta,\varphi)\). Conformally hyper-Kahler.Consider \(u=u(z)\) given by \[e^{u}=z^{4}. \tag{2.53}\] Using (2.28), this gives \(e^{\hat{u}}=1\), so \(\hat{u}=0\). The Kahler metric \(\hat{g}\) thus satisfies \(\hat{\rho}=0\) (i.e. \(\hat{R}_{ab}=0\)), so it is Ricci-flat and therefore hyper-Kahler. Thus, (2.53) corresponds to the case in which \(g\) is conformally hyper-Kahler (which, in particular, implies that \(g\) is self-dual). Alternatively, replacing \(\hat{u}=0\) in (2.26) we see that the Kahler metric adopts a Gibbons-Hawking form, and eq. (2.29) becomes \(\mathrm{d}A=*_{3}\mathrm{d}\hat{W}\) (where \(*_{3}\) is the Hodge star in \(\mathbb{R}^{3}\)), which implies that \(\hat{g}\) is hyper-Kahler, see [36, Chapter 9]. Spherical symmetry.We now start from a metric Ansatz: we consider a manifold with local real coordinates \((\tau,r,\theta,\phi)\) and a Riemannian metric \[g=f(r)\mathrm{d}\tau^{2}+\frac{\mathrm{d}r^{2}}{f(r)}+r^{2}( \mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\varphi^{2}), \tag{2.54}\] where \(f\) is an arbitrary smooth function of \(r\). Choose the coframe \(\beta^{0}=f^{1/2}\mathrm{d}\tau\), \(\beta^{1}=f^{-1/2}\mathrm{d}r\), \(\beta^{2}=r\mathrm{d}\theta\), \(\beta^{3}=r\sin\theta\mathrm{d}\varphi\), define the fundamental 2-forms \(\kappa^{\pm}=\beta^{0}\wedge\beta^{1}\mp\beta^{2}\wedge\beta^{3}\) (\(\kappa^{+}\) is SD and \(\kappa^{-}\) is ASD), and the associated almost-complex structures \((J_{\pm})^{a}{}_{b}=\kappa^{\pm}_{bc}g^{ca}\). The type-\((1,0)\) eigenspaces of \(J_{\pm}\) are generated by \(\ell=\frac{1}{\sqrt{2}}(\beta^{0}+\mathrm{i}\beta^{1})\), \(m^{\pm}=\frac{1}{\sqrt{2}}(\beta^{2}\mp\mathrm{i}\beta^{3})\). Putting \(a^{0}_{\pm}=f^{-1/2}\), \(b^{0}_{\pm}=0\), and \(a^{1}_{\pm}=0\), \(b^{1}_{\pm}=(r\sin\theta)^{-1}\), we see that the type \((1,0)\)-forms \(a^{\alpha}_{\pm}\ell+b^{\alpha}_{\pm}m^{\pm}\) (\(\alpha=0,1\)) are closed, so both \(J_{+}\) and \(J_{-}\) are integrable. Furthermore, if \(\Omega_{\pm}\equiv r^{-1}\) then it is straightforward to see that \(\mathrm{d}[\Omega^{2}_{\pm}\kappa^{\pm}]=0\), so the geometry (2.54) is ambi-Kahler. Finally, computing the vector fields (2.48), we get \(\xi^{a}_{\pm}\partial_{a}=\partial_{\tau}\), which is Killing. Thus, regardless of the form of the arbitrary function \(f(r)\), the geometry is conformally Kahler (in fact ambi-Kahler) with a \(J\)-invariant Ricci tensor. 
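Before computing the curvature of this ansatz, we pause to record a quick independent check of the flat-space example above (an illustration on our part, not part of the argument). The following minimal SymPy sketch verifies that \(e^{u}=z^{2}/\cosh^{2}x\) solves the Toda equation, that \(W=z^{-1}\) solves the monopole-type equation (2.25), and that \(A=\tanh(x)\mathrm{d}y\) is compatible with (2.24d); the variable names are ours.

```python
# Minimal cross-check of the flat-space example (2.51): e^u = z^2/cosh^2(x),
# with W = 1/z and A = tanh(x) dy.  Assumes SymPy; names are ours (illustration only).
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

eu = z**2 / sp.cosh(x)**2            # e^u
u = sp.log(eu)
W = 1 / z
A_y = sp.tanh(x)                     # the only non-zero component of A

# Toda equation u_xx + u_yy + (e^u)_zz = 0, i.e. R = 0 in (2.33)
toda = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(eu, z, 2)

# Monopole-type equation (2.25): W_xx + W_yy + d_z[ z^2 d_z(W e^u / z^2) ] = 0
monopole = sp.diff(W, x, 2) + sp.diff(W, y, 2) + sp.diff(z**2*sp.diff(W*eu/z**2, z), z)

# Compatibility (2.24d): d_x A_y - d_y A_x = -z^2 d_z(W e^u / z^2); here A_x = 0
twist = sp.diff(A_y, x) + z**2*sp.diff(W*eu/z**2, z)

print([sp.simplify(expr.rewrite(sp.exp)) for expr in (toda, monopole, twist)])  # expected: [0, 0, 0]
```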
We then compute the variables (2.49): \[z_{\pm}=r,\qquad W^{-1}_{\pm}=f,\qquad e^{u_{\pm}}=fr^{2}\sin^{2}\theta,\qquad\mathrm{d}x_{\pm}=\frac{\mathrm{d}\theta}{\sin\theta},\qquad\mathrm{d}y_{\pm}=\mp\mathrm{d}\varphi. \tag{2.55}\] Using the formulas of previous sections, a short calculation gives \[R=\frac{2-(r^{2}f)^{\prime\prime}}{r^{2}},\qquad\Psi^{\pm}_{2}=\frac{\{2-r^{2}[r^{2}(f/r^{2})^{\prime}]^{\prime}\}}{12r^{2}},\qquad\frac{W^{\pm}_{0}}{W_{\pm}}=-\frac{r^{2}f^{\prime}}{2}, \tag{2.56}\] where a prime \({}^{\prime}\) represents a derivative w.r.t. \(r\). The Einstein-Maxwell-\(\lambda\) equations are \(R=4\lambda\), which gives \[f(r)=1+\frac{a_{1}}{r}+\frac{a_{2}}{r^{2}}-\frac{\lambda}{3}r^{2} \tag{2.57}\] for arbitrary constants \(a_{1},a_{2}\). The Weyl scalars \(\Psi^{\pm}_{2}\) and the two pieces \(F^{\pm}\) of the Maxwell field are: \[\Psi^{\pm}_{2}=-\frac{a_{1}}{2r^{3}}-\frac{a_{2}}{r^{4}},\qquad F^{\pm}=(-a_{2}/4)^{\frac{1\pm 1}{2}}\left(\frac{1}{r^{2}}\mathrm{d}\tau\wedge\mathrm{d}r\mp\sin\theta\mathrm{d}\theta\wedge\mathrm{d}\varphi\right). \tag{2.58}\] Setting \(a_{1}\equiv-2M\), \(a_{2}\equiv Q^{2}\), we recognise the Euclidean Reissner-Nordstrom-(A)dS solution. We can alternatively look for \(f(r)\) such that the ansatz (2.54) is conformally self-dual, eq. (2.7). Recall that this is equivalent to \(\Psi^{-}_{2}\equiv\Psi_{2}=0\). Since \(\Psi^{-}_{2}=\Psi^{+}_{2}\), we see that \(\Psi_{2}=0\) gives \(C_{abcd}\equiv 0\), so the self-dual solution to the ansatz (2.54) is conformally flat. The condition \(\Psi^{\pm}_{2}=0\) gives \(f(r)=1+b_{1}r+b_{2}r^{2}\) for arbitrary constants \(b_{1},b_{2}\). The Ricci scalar and trace-free Ricci form are \[R=-6\left(\frac{b_{1}}{r}+2b_{2}\right),\qquad\rho^{\pm}-\frac{R}{4}\kappa^{\pm}=\frac{2b_{1}}{r}\left(\mathrm{d}\tau\wedge\mathrm{d}r\pm r^{2}\sin\theta\mathrm{d}\theta\wedge\mathrm{d}\varphi\right). \tag{2.59}\] We see that the geometry is Einstein iff \(b_{1}=0\), in which case it reduces to Euclidean (anti-)de Sitter space with cosmological constant \(-3b_{2}\) (which is \(S^{4}\) if \(b_{2}<0\)).

**The Kerr-Newman ansatz.** Our final example will allow us to illustrate a trick to solve the Toda equation, that is also useful for more complicated metric ansatze. Consider the metric \[g=\frac{\Delta}{\Sigma}(\mathrm{d}\tau-a\sin^{2}\theta\mathrm{d}\varphi)^{2}+\frac{\sin^{2}\theta}{\Sigma}[a\mathrm{d}\tau+(r^{2}-a^{2})\mathrm{d}\varphi]^{2}+\frac{\Sigma}{\Delta}\mathrm{d}r^{2}+\Sigma\mathrm{d}\theta^{2}, \tag{2.60}\] where \(a\) is a real constant, \(\Sigma=r^{2}-a^{2}\cos^{2}\theta\), and \(\Delta=\Delta(r)\) is an arbitrary function of \(r\).
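Before analysing this ansatz, we note that the spherically symmetric formulas above can be cross-checked symbolically. The sketch below is an illustration on our part (assuming SymPy is available; the curvature routine and names are ours): it recomputes the Ricci scalar of the metric (2.54) directly from its Christoffel symbols and compares it with the first expression in (2.56).

```python
# Independent check of the first formula in (2.56): the Ricci scalar of the
# ansatz (2.54) equals [2 - (r^2 f)'']/r^2.  Sketch only; assumes SymPy.
import sympy as sp

tau, r, th, ph = sp.symbols('tau r theta phi')
f = sp.Function('f')(r)
coords = [tau, r, th, ph]
g = sp.diag(f, 1/f, r**2, r**2*sp.sin(th)**2)   # metric (2.54)
ginv = g.inv()
n = len(coords)

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (d_c g_{db} + d_b g_{dc} - d_d g_{bc})
Gamma = [[[sp.Rational(1, 2)*sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                                             + sp.diff(g[d, c], coords[b])
                                             - sp.diff(g[b, c], coords[d]))
                                 for d in range(n))
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                       + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    return sum(sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][b][a], coords[c])
               + sum(Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][b][a]
                     for d in range(n))
               for a in range(n))

R = sp.simplify(sum(ginv[b, c]*ricci(b, c) for b in range(n) for c in range(n)))
print(sp.simplify(R - (2 - sp.diff(r**2*f, r, 2))/r**2))    # expected: 0
```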
As in the previous example, we start by choosing a coframe: \(\beta^{0}=(\frac{\Delta}{\Sigma})^{1/2}(\mathrm{d}\tau-a\sin^{2}\theta\mathrm{d}\varphi)\), \(\beta^{1}=(\frac{\Sigma}{\Delta})^{1/2}\mathrm{d}r\), \(\beta^{2}=\sqrt{\Sigma}\mathrm{d}\theta\), \(\beta^{3}=\frac{\sin\theta}{\sqrt{\Sigma}}[a\mathrm{d}\tau+(r^{2}-a^{2})\mathrm{d}\varphi]\); we define \(\kappa^{\pm}=\beta^{0}\wedge\beta^{1}\mp\beta^{2}\wedge\beta^{3}\) and the almost-complex structures \((J_{\pm})^{a}{}_{b}=\kappa^{\pm}_{bc}g^{ca}\). As in the spherically symmetric case, one checks that both \(J_{\pm}\) are integrable and that the geometry (2.60) is ambi-Kahler for any function \(\Delta(r)\), with the vector fields (2.48) Killing, so the results of section 2 apply. The Einstein-Maxwell equations with \(\lambda=0\) then reduce to the Toda equation \(u_{xx}+(e^{u})_{zz}=0\) for \(u\equiv u_{-}\) (the coordinate \(y\) drops out by symmetry). The trick to solve it is to introduce an auxiliary variable \(\sigma\) by \[u_{x}=\sigma_{z},\qquad(e^{u})_{z}=-\sigma_{x}, \tag{2.63}\] and to impose the integrability condition \(\partial_{x}\partial_{z}\sigma=\partial_{z}\partial_{x}\sigma\): for the ansatz (2.60) this forces \(\ddot{\Delta}=2\) (a dot denotes a derivative w.r.t. \(r\)), so \(\Delta=r^{2}+c_{1}r+c_{2}\) for some constants \(c_{1},c_{2}\), and the remaining curvature then follows from the formulas of section 2.4. Setting \(c_{1}\equiv-2M\), \(a^{2}+c_{2}\equiv Q^{2}\), we recognise the Euclidean Kerr-Newman solution. If, instead of solving the Einstein-Maxwell equations, we focus on the conformal SD equation (2.7), or equivalently \(\hat{R}=0\), one in principle expects that this may give a different form for \(\Delta\). The condition \(\hat{R}=0\) is again equivalent to the Toda equation \(\hat{u}_{xx}+(e^{\hat{u}})_{\hat{z}\hat{z}}=0\) (where \(\hat{u},\hat{z}\) are defined in (2.28)), so we can solve it by doing the same trick as in (2.63). We now get \[(r+a\cos\theta)^{2}\ddot{\Delta}-6(r+a\cos\theta)\dot{\Delta}+12\Delta=2(r+a\cos\theta)^{2}-12a\cos\theta(r+a\cos\theta)-12a^{2}\sin^{2}\theta. \tag{2.69}\] Assuming \(a\neq 0\), and applying \(\partial_{\theta}^{2}\partial_{r}^{2}\) to the above equation, we are led to \(\ddot{\Delta}=2\), so again we find \(\Delta=r^{2}+c_{3}r+c_{4}\) for some constants \(c_{3},c_{4}\). Replacing back in (2.69), we find \(c_{3}=0\), \(c_{4}=-a^{2}\), so \(\Delta=r^{2}-a^{2}\). We already computed the curvature of the metric when \(\Delta\) is a quadratic polynomial: the Ricci scalar vanishes, and \(\Psi_{2}^{\pm}\) and the Ricci form are (2.67), (2.68) with \(c_{1},c_{2}\) replaced by \(c_{3},c_{4}\) respectively. Since \(c_{3}=0\), \(c_{4}=-a^{2}\), we see that the rest of the curvature vanishes. Therefore, the self-dual solution of the Kerr-Newman ansatz (2.60) is simply flat space.

## 3 The Page-Pope class

### Preliminaries

Consider a Riemann surface \(\Sigma\) with a Riemannian metric \(g_{\Sigma}=2h\,\mathrm{d}\zeta\mathrm{d}\bar{\zeta}\), where \(\zeta=\frac{1}{\sqrt{2}}(x+\mathrm{i}y)\) is a holomorphic coordinate and \(h\) is a real positive function. Let \(\kappa_{\Sigma}=\mathrm{i}h\,\mathrm{d}\zeta\wedge\mathrm{d}\bar{\zeta}\) be the Kahler form. Since \(\mathrm{d}\kappa_{\Sigma}=0\), there is, locally, a 1-form \(A\) such that \(\kappa_{\Sigma}=\mathrm{d}A\) and a Kahler potential \(K_{\Sigma}\) with \(h=\partial_{\zeta}\partial_{\bar{\zeta}}K_{\Sigma}\).
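For instance, for the examples below it is useful to keep in mind the round-sphere case \(\Sigma\cong S^{2}\); the following explicit choice of data is ours, for illustration. In the coordinates \((x,y)\) one may take \[K_{\Sigma}=2\log(1+x^{2}+y^{2}),\qquad h=\partial_{\zeta}\partial_{\bar{\zeta}}K_{\Sigma}=\frac{4}{(1+x^{2}+y^{2})^{2}},\qquad A=-\frac{2}{1+x^{2}+y^{2}}\,\mathrm{d}\varphi,\] where \(\varphi\) is the polar angle in the \((x,y)\) plane, so that \(\kappa_{\Sigma}=h\,\mathrm{d}x\wedge\mathrm{d}y=\mathrm{d}A\) and \(g_{\Sigma}=h(\mathrm{d}x^{2}+\mathrm{d}y^{2})\) is the unit round metric, with constant curvature \(R_{\Sigma}=2\) in the notation of (3.9) below.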
We now define a manifold \(M\) as the total space of a fibre bundle over \(\Sigma\) with 2-dimensional fibers parametrized by \(r,\psi\), and we introduce a Riemannian metric \(g\) on \(M\) by \[g=F(r)\mathrm{d}r^{2}+G(r)(\mathrm{d}\psi+A)^{2}+H(r)g_{\Sigma} \tag{3.1}\] where \(F,G,H\) are arbitrary (non-negative) functions of \(r\). Note that, by redefining the coordinate \(r\), the three functions \(F,G,H\) can be reduced to two. For the moment we will focus on the form (3.1), but we will later make use of this freedom. The class of metrics (3.1) includes geometries such as Fubini-Study, Eguchi-Hanson, Taub-NUT, Kahler surfaces of Calabi type (cf. [37, 38]), particular cases of the Bianchi IX class, etc. It is the restriction to four dimensions of the geometries considered by Page and Pope in [20]. In [20], the conditions on the functions \(F,G,H\) so that the metric (3.1) is Einstein are determined, and they find that, under the Einstein assumption, the metric is conformal to two different Kahler metrics. We will first show that this ambi-Kahler structure is actually independent of the form of \(F,G,H\), and so it is independent of field equations; then we will use this result to study generalised instantons. **Proposition 3.1**.: _For any functions \(F,G,H\), the class of metrics (3.1) is (locally) ambi-Kahler._ Proof.: Write the metric on the Riemann surface as \(g_{\Sigma}=h(\mathrm{d}x^{2}+\mathrm{d}y^{2})\), and choose the coframe \(\beta^{0}=\sqrt{G}(\mathrm{d}\psi+A)\), \(\beta^{1}=\sqrt{F}\mathrm{d}r\), \(\beta^{2}=\sqrt{Hh}\;\mathrm{d}x\), \(\beta^{3}=\sqrt{Hh}\;\mathrm{d}y\). We define the fundamental 2-forms (with opposite orientation) \[\kappa^{\pm}=\beta^{0}\wedge\beta^{1}\mp\beta^{2}\wedge\beta^{3}=\sqrt{FG}( \mathrm{d}\psi+A)\wedge\mathrm{d}r\mp H\kappa_{\Sigma}, \tag{3.2}\] and the associated almost-complex structures \((J_{\pm})^{a}{}_{b}=\kappa^{\pm}_{bc}g^{ca}\). Type-\((1,0)\) forms for \(J_{\pm}\) are \(\ell=\frac{1}{\sqrt{2}}(\beta^{0}+\mathrm{i}\beta^{1})\), \(m^{\pm}=\frac{1}{\sqrt{2}}(\beta^{2}\mp\mathrm{i}\beta^{3})\). Let \(a^{0}_{\pm}=\frac{1}{\sqrt{G}}\), \(b^{0}_{\pm}=\frac{\mp i}{\sqrt{2Hh}}\partial_{\zeta}K_{\Sigma}\), \(a^{1}_{\pm}=0\), \(b^{1}_{\pm}=\frac{1}{\sqrt{Hh}}\). Then a short calculation shows that the type-\((1,0)\) forms \(a^{a}_{\pm}\ell+b^{a}_{\pm}m^{\pm}\) (\(\alpha=0,1\)) are closed, so \(J_{+}\) and \(J_{-}\) are both integrable. Furthermore, using (3.2) we find that regardless of the form of \(F,G,H\) we always have \[\mathrm{d}[\Omega^{2}_{\pm}\kappa^{\pm}]=0,\qquad\Omega^{2}_{\pm}\equiv\frac{c _{\pm}}{H}\exp\left[\pm\,\int\frac{\sqrt{FG}}{H}\mathrm{d}r\right], \tag{3.3}\] where \(c_{\pm}\) are arbitrary constants. Thus (3.1) is (locally) ambi-Kahler, independently of the form of \(F,G,H\). The vector fields (2.48) are \[\xi^{a}_{\pm}\partial_{a}=\frac{1}{\sqrt{FG}}\frac{\mathrm{d}(\Omega^{-1}_{\pm })}{\mathrm{d}r}\partial_{\psi}. \tag{3.4}\] Since \(\partial_{\psi}\) is a Killing vector of (3.1), we see that (3.4) are in general not Killing. Requiring (3.4) to be Killing imposes restrictions on \(\Omega_{\pm}\), which means restrictions on \(F,G,H\). ### Conformally self-dual solutions Let \(\hat{R}_{\pm}\) be the Ricci scalars of the Kahler metrics \(\hat{g}^{\pm}_{ab}\). Recall that the SD equation \(*C=+C\) is equivalent to \(\hat{R}_{-}=0\), and the ASD equation \(*C=-C\) is equivalent to \(\hat{R}_{+}=0\). To solve the equation \(\hat{R}_{\pm}=0\), we use Proposition 2.7. 
Defining \[\hat{F}_{\pm}:=\Omega^{2}_{\pm}F,\qquad\hat{G}_{\pm}:=\Omega^{2}_{\pm}G,\qquad \hat{H}_{\pm}:=\Omega^{2}_{\pm}H \tag{3.5}\] and using (3.3), we see that \(\mathrm{d}\hat{H}_{\pm}=\pm\sqrt{\hat{F}_{\pm}\hat{G}_{\pm}}\mathrm{d}r\). Thus, if we further define \[\hat{z}_{\pm}:=-\hat{H}_{\pm},\qquad\hat{W}_{\pm}:=\hat{G}^{-1}_{\pm},\qquad e ^{\hat{u}_{\pm}}:=\hat{G}_{\pm}\hat{H}_{\pm}h, \tag{3.6}\] then the Kahler metrics \(\hat{g}^{\pm}=\Omega^{2}_{\pm}g\) and Kahler forms \(\hat{\kappa}^{\pm}=\Omega^{2}_{\pm}\kappa\) become \[\hat{g}^{\pm} =\hat{W}^{-1}_{\pm}(\mathrm{d}\psi+A)^{2}+\hat{W}_{\pm}[ \mathrm{d}\hat{z}^{2}_{\pm}+e^{\hat{u}_{\pm}}(\mathrm{d}x^{2}+\mathrm{d}y^{2 })], \tag{3.7}\] \[\hat{\kappa}^{\pm} =\,\mp\left[(\mathrm{d}\psi+A)\wedge\mathrm{d}\hat{z}_{\pm}+\hat{ W}_{\pm}e^{\hat{u}_{\pm}}\mathrm{d}x\wedge\mathrm{d}y\right]. \tag{3.8}\] A straightforward calculation using (2.31) gives \[\hat{R}_{\pm}=\frac{1}{\hat{H}_{\pm}}\left[R_{\Sigma}+\frac{\mathrm{d}^{2}}{ \mathrm{d}\hat{z}^{2}_{\pm}}(\hat{G}_{\pm}\hat{z}_{\pm})\right],\qquad R_{ \Sigma}:=-h^{-1}(\partial^{2}_{x}+\partial^{2}_{y})\log h. \tag{3.9}\] The (A)SD equations then reduce to \[\frac{\mathrm{d}^{2}}{\mathrm{d}\hat{z}^{2}_{\pm}}(\hat{G}_{\pm}\hat{z}_{\pm })=-R_{\Sigma}. \tag{3.10}\] The left side is a function of \(\hat{z}_{\pm}\) only, while the right side is a function of \((x,y)\) only. Thus, (3.10) demands \(R_{\Sigma}\) to be constant. Assuming \(\Sigma\) to be simply connected, this implies that \(g_{\Sigma}\) is isometric to the standard metric of either the 2-sphere (\(R_{\Sigma}>0\)), the Euclidean 2-plane (\(R_{\Sigma}=0\)), or the hyperbolic plane (\(R_{\Sigma}<0\)). The solution to (3.10) is \(\hat{G}_{\pm}\hat{z}_{\pm}=-\frac{R_{\Sigma}}{2}\hat{z}^{2}_{\pm}+b_{\pm}\hat{ z}_{\pm}-a_{\pm}\), so \[\hat{G}_{\pm}=-\frac{R_{\Sigma}}{2}\hat{z}_{\pm}+b_{\pm}-\frac{a_{\pm}}{\hat{z} _{\pm}}, \tag{3.11}\] where \(a_{\pm},b_{\pm}\) are arbitrary constants. Recalling \(\hat{z}_{\pm}=-\hat{H}_{\pm}\), expressing the above equation in terms of \(G,H,\Omega_{\pm}\), and summarising: **Theorem 3.2**.: _The metric (3.1) is a solution to the conformally (A)SD equations \(*C=\mp C\) (i.e. \(\hat{R}_{\pm}=0\)) iff \(g_{\Sigma}\) has constant curvature \(R_{\Sigma}\) and the functions \(F,G,H\) satisfy_ \[G=\frac{R_{\Sigma}}{2}H+\frac{b_{\pm}}{\Omega_{\pm}^{2}}+\frac{a_{\pm}}{\Omega_ {\pm}^{4}H}. \tag{3.12}\] _where \(a_{\pm},b_{\pm}\) are arbitrary constants and \(\Omega_{\pm}\) are defined in (3.3)._ **Remark 3.3** (Classification).: _It is worth mentioning a different perspective on the above solutions. From (3.6), the function \(\hat{u}_{\pm}\) is "separable" in the sense that \(\hat{u}_{\pm}=f(x,y)+g(z)\), where \(f(x,y)=\log h\) and \(g(z)=\log(\hat{G}_{\pm}\hat{H}_{\pm})\). Thus, we solved the Toda equation (here \(\hat{R}_{\pm}=0\)) when the Toda variable is separable. All separable solutions to the Toda equation were classified by Tod in [39]: the classification is in terms of three constants \(k,a,b\), which in our notation are \(k\equiv-\frac{R_{\Sigma}}{2}\), \(a\equiv b_{\pm}\), \(b\equiv-a_{\pm}\)._ Let us see some examples. In all three examples that follow, we take \(\Sigma=\mathbb{CP}^{1}\cong S^{2}\) with the round 2-metric, which has \(R_{\Sigma}=2\). Fubini-Study.Taking the functions \(F=\frac{1}{(1+r^{2})^{2}}\), \(G=\frac{r^{2}}{4(1+r^{2})^{2}}\), \(H=\frac{r^{2}}{4(1+r^{2})}\), the ambi-Kahler class (3.1) becomes the Fubini-Study metric on \(M=\mathbb{CP}^{2}\). 
Putting \(c_{+}=\frac{1}{4}\), \(c_{-}=4\) in (3.3), we find \(\Omega_{+}^{2}=1\) and \(\Omega_{-}^{2}=H^{-2}\), so the metric is actually Kahler w.r.t. \(J_{+}\) (and of course conformally Kahler w.r.t. \(J_{-}\)). We have \(\hat{G}_{+}=G\), \(\hat{H}_{+}=H\), \(\hat{G}_{-}=\frac{4}{r^{2}}\), \(\hat{H}_{-}=\frac{4(1+r^{2})}{r^{2}}\). This gives \(\hat{z}_{\pm}=H^{\pm 1}\). Replacing in (3.9), we find \(\hat{R}_{+}=24\), \(\hat{R}_{-}=0\). This is of course consistent with the fact that Fubini-Study is Einstein with cosmological constant equal to 6 and has a self-dual Weyl tensor. Using (3.4), we also note that \(\xi_{+}^{a}\) vanishes and \(\xi_{-}^{a}\partial_{a}\equiv\partial_{\psi}\) is Killing. Generalised Eguchi-Hanson.Let \(F=\frac{1}{f(r)}\), \(G=\frac{r^{2}f(r)}{4}\), \(H=\frac{r^{2}}{4}\), where \(f(r)\) is an arbitrary function of \(r\). The metric (3.1) is then a "generalised Eguchin-Hanson" space. Setting \(c_{+}=\frac{1}{4}\), \(c_{-}=4\) in (3.3), we find \(\Omega_{+}^{2}=1\) and \(\Omega_{-}^{2}=H^{-2}\) (so the metric is Kahler w.r.t. \(J_{+}\)). The form of \(f(r)\) that makes the space conformally (A)SD can now be easily found by solving the algebraic equation (3.12): \[*C=\mp C\quad\iff\quad f(r)=1+b_{\pm}\left(\frac{2}{r}\right)^{\pm 2}+a_{ \pm}\left(\frac{2}{r}\right)^{\pm 4}. \tag{3.13}\] We also note that \(\xi_{+}^{a}\) vanishes and \(\xi_{-}^{a}\partial_{a}\equiv\partial_{\psi}\) is Killing, so the rest of the curvature can be computed using the results of section 2. The ordinary Eguchi-Hanson instanton corresponds to (3.13) with \(*C=-C\), \(b_{+}=0\) and \(a_{+}=-a/16\) (the Ricci tensor then vanishes and the space is hyper-Kahler). The case (3.13) with \(*C=-C\) and \(b_{+}\neq 0\) was studied in [12] in the context of conformal gravity, where the term \(4b_{+}/r^{2}\) is referred to as the \(b\)-mode. Generalised Taub-NUT.Letting \(F=\frac{1}{f(r)}\), \(G=4n^{2}f(r)\), \(H=r^{2}-n^{2}\), where \(f(r)\) is an arbitrary function of \(r\) and \(n\) is a constant, the metric (3.1) is a "generalised Taub-NUT" space. Putting \(c_{\pm}=1\) in (3.3), we find \(\Omega_{\pm}^{2}=(r\pm n)^{-2}\). The algebraic equation (3.12) now gives: \[*C=\mp C\quad\iff\quad f(r)=\frac{1}{4n^{2}}\left[r^{2}-n^{2}+b_{\pm}(r\pm n)^ {2}+a_{\pm}\frac{(r\pm n)^{3}}{(r\mp n)}\right]. \tag{3.14}\] Using (3.4), we get \(\xi_{\pm}^{a}\partial_{a}=\frac{1}{2n}\partial_{\psi}\), so \(\xi_{+}^{a}=\xi_{-}^{a}\) is Killing and we can compute the rest of the curvature using the results of section 2. ### Cosmological Einstein-Maxwell solutions For concreteness, let us focus on the ASD side \(\kappa^{-}\). Introducing new variables \((z,W,u)\) by \[\mathrm{d}z=\sqrt{FG}\mathrm{d}r,\qquad W=G^{-1},\qquad e^{u}=GHh, \tag{3.15}\] the metric (3.1) and fundamental 2-form \(\kappa^{-}\) (3.2) adopt the form (2.13)-(2.14). From (3.4), we get \(\xi_{-}^{a}\partial_{a}=\frac{\mathrm{d}(\Omega_{-}^{-1})}{\mathrm{d}z} \partial_{\psi}\). To apply the construction of section 2.2.2, we need to restrict to the case in which \(\xi_{-}^{a}\) is Killing. This is true iff \(\mathrm{d}(\Omega_{-}^{-1})/\mathrm{d}z\) is a constant. Given that \(z\) and \(\Omega_{-}\) are defined up to addition and multiplication by a constant respectively, we can then simply set \(\Omega_{-}^{-1}\equiv z\). 
From the conformally Kahler condition \(\Omega_{-}^{2}+\frac{\mathrm{d}}{\mathrm{d}z}(\Omega_{-}^{2}H)=0\) it follows that \(\frac{\mathrm{d}}{\mathrm{d}z}(\frac{H}{z^{2}})=-\frac{1}{z^{2}}\), so we deduce \[H=z+kz^{2}, \tag{3.16}\] where \(k\) is an arbitrary constant. Now, from (3.3) we have \(\Omega_{+}^{2}=\frac{c_{+}c_{-}}{(H\Omega_{-})^{2}}\). Setting \(c_{+}c_{-}=k\) for later convenience, we deduce \(\Omega_{+}^{-1}=\frac{(1+kz)}{k}\). This implies \(\mathrm{d}(\Omega_{+}^{-1})/\mathrm{d}z=1\), so \(\xi_{+}^{a}\equiv\xi_{-}^{a}\). The Einstein-Maxwell-\(\lambda\) equations are \(R=4\lambda\), where \(R\) is given by (2.33). Using that formula and the definitions (3.15), we find \[R=\frac{1}{H}\left[R_{\Sigma}-\frac{\mathrm{d}^{2}(GH)}{\mathrm{d}z^{2}}\right] \tag{3.17}\] where \(R_{\Sigma}\) was defined in (3.9). It is convenient to have a formulation that is more symmetric in the SD and ASD sides. To this end, introduce \(\varrho\) by \(z=\frac{1}{\sqrt{k}}(\varrho-n)\), where \(n:=\frac{1}{2\sqrt{k}}\). Then \(H=\varrho^{2}-n^{2}\) and \(\Omega_{\pm}^{-1}=\frac{1}{\sqrt{k}}(\varrho\pm n)\). The equation \(R=4\lambda\) can be easily solved: from (3.17), we find that \(R_{\Sigma}\) must be constant and \[kG=\frac{-\frac{\lambda}{3}\varrho^{4}+(\frac{R_{\Sigma}}{2}+2\lambda n^{2}) \varrho^{2}+\alpha\varrho+\beta}{\varrho^{2}-n^{2}}, \tag{3.18}\] where \(\alpha,\beta\) are arbitrary constants of integration. The solution then depends on 5 parameters: \(k\) (or \(n\)), \(R_{\Sigma},\lambda,\alpha,\beta\). To interpret them, we compute the rest of the curvature. Recalling formulas (2.50) and (3.9), and using \(\hat{G}_{\pm}\hat{z}_{\pm}=-\Omega_{\pm}^{4}GH\), we have \[\Psi_{2}^{\pm}=\frac{1}{12(\varrho^{2}-n^{2})}\left[R_{\Sigma}-(\varrho\pm n) ^{2}\frac{\mathrm{d}}{\mathrm{d}\varrho}\left((\varrho\pm n)^{2}\frac{\mathrm{ d}}{\mathrm{d}\varrho}\left(\frac{kGH}{(\varrho\pm n)^{4}}\right)\right) \right].\] Using (3.18), we find \[\Psi_{2}^{\pm}=-\frac{\frac{1}{2}(\alpha\mp(R_{\Sigma}n+\frac{8}{3}\lambda n^{ 3}))}{(\varrho\pm n)^{3}}-\frac{(\beta-\frac{R_{\Sigma}}{2}n^{2}-\lambda n^{4 })}{(\varrho\mp n)(\varrho\pm n)^{3}}. \tag{3.19}\] The SD piece of the Maxwell field is \(F^{+}=\frac{z^{2}}{4}(\rho-\lambda\kappa)\). Recalling (2.39), we get \[F^{+}=-\frac{1}{4k}\frac{(\beta-\frac{R_{\Sigma}}{2}n^{2}-\lambda n^{4})}{( \varrho+n)^{2}}\left[(\mathrm{d}\psi+A)\wedge\frac{\mathrm{d}\varrho}{\sqrt{k }}-(\varrho^{2}-n^{2})h\mathrm{d}x\wedge\mathrm{d}y\right]. \tag{3.20}\] Formulas (3.19)-(3.20) suggest to define \[Q:=\beta-\tfrac{R_{\Sigma}}{2}n^{2}-\lambda n^{4},\qquad\mu:=-\tfrac{1}{2} \alpha,\qquad\nu:=\tfrac{1}{2}(R_{\Sigma}n+\tfrac{8}{3}\lambda n^{3}), \tag{3.21}\] and to identify \(Q\) with "electromagnetic charge", \(\mu\) with "mass", and \(\nu\) with a sort of "NUT charge". The geometry is Einstein (\(R_{ab}=\lambda g_{ab}\)) iff \(Q=0\), and it is self-dual (\(\Psi_{ABCD}=0\)) iff \(\mu=\nu\) and \(Q=0\). In the latter case, the space is actually quaternionic-Kahler (that is, \(R_{ab}=\lambda g_{ab}\) and \(\Psi_{ABCD}=0\)). The hyper-Kahler case (\(R_{ab}=0=\Psi_{ABCD}\)) corresponds to \(Q=\mu-\nu=\lambda=0\), and, assuming \(\Sigma=\mathbb{CP}^{1}\) (so \(R_{\Sigma}=2\)), it reduces to the Taub-NUT instanton with NUT charge \(\nu=n\). 
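The relation \(R=4\lambda\) obtained above can also be verified directly with a couple of lines of computer algebra. The following sketch (assuming SymPy; the notation is ours and the check is purely illustrative) substitutes (3.18) and \(H=\varrho^{2}-n^{2}\) into formula (3.17):

```python
# Sketch: check that G in (3.18), with H = rho^2 - n^2 and z = (rho - n)/sqrt(k),
# gives R = 4*lambda via formula (3.17).  Assumes SymPy; illustration only.
import sympy as sp

rho, n, k, lam, RS, alpha, beta = sp.symbols('rho n k lambda R_Sigma alpha beta')

H = rho**2 - n**2
kG = (-lam/3*rho**4 + (RS/2 + 2*lam*n**2)*rho**2 + alpha*rho + beta) / (rho**2 - n**2)
G = kG / k

# z = (rho - n)/sqrt(k)  =>  d/dz = sqrt(k) d/drho, so d^2/dz^2 = k d^2/drho^2
d2_GH_dz2 = k * sp.diff(G*H, rho, 2)

R = sp.simplify((RS - d2_GH_dz2) / H)     # formula (3.17)
print(sp.simplify(R - 4*lam))             # expected: 0
```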
## 4 The Plebanski-Demianski class

### Preliminaries

The Plebanski-Demianski ansatz [21] is the 4-dimensional family of Riemannian metrics given in local real coordinates \((\tau,\phi,p,q)\) by \[g=\frac{1}{(p-q)^{2}}\left[-Q\frac{(\mathrm{d}\tau-p^{2}\mathrm{d}\phi)^{2}}{(1-p^{2}q^{2})}+P\frac{(\mathrm{d}\phi-q^{2}\mathrm{d}\tau)^{2}}{(1-p^{2}q^{2})}+(1-p^{2}q^{2})\left(\frac{\mathrm{d}p^{2}}{P}-\frac{\mathrm{d}q^{2}}{Q}\right)\right], \tag{4.1}\] where \(P\) and \(Q\) are arbitrary functions of \(p\) and \(q\) respectively, and we assume \(P>0\), \(Q<0\). The vector fields \(\partial_{\tau},\partial_{\phi}\) are Killing. We will first show that regardless of the form of \(P,Q\), the geometry is ambi-Kahler. Consider the following orthonormal coframe: \[\begin{split}\beta^{0}&=\tfrac{1}{(p-q)}\sqrt{\tfrac{-Q}{1-p^{2}q^{2}}}(\mathrm{d}\tau-p^{2}\mathrm{d}\phi),\ \ \beta^{1}=\tfrac{1}{(p-q)}\sqrt{\tfrac{1-p^{2}q^{2}}{-Q}}\mathrm{d}q,\\ \beta^{2}&=\tfrac{1}{(p-q)}\sqrt{\tfrac{1-p^{2}q^{2}}{P}}\mathrm{d}p,\ \ \beta^{3}=\tfrac{1}{(p-q)}\sqrt{\tfrac{P}{1-p^{2}q^{2}}}(\mathrm{d}\phi-q^{2}\mathrm{d}\tau),\end{split} \tag{4.2}\] and define the fundamental 2-forms \(\kappa^{\pm}=\beta^{0}\wedge\beta^{1}\mp\beta^{2}\wedge\beta^{3}\) together with the almost-complex structures \((J_{\pm})^{a}{}_{b}=\kappa^{\pm}_{bc}g^{ca}\). Proceeding as in the previous examples, one checks that both \(J_{\pm}\) are integrable and that the geometry (4.1) is ambi-Kahler regardless of the form of \(P,Q\); moreover, the corresponding vector fields (2.48) turn out to be Killing, so the Toda variables \((x_{\pm},y_{\pm},z_{\pm},u_{\pm},W_{\pm})\) of (2.49) can be computed explicitly in terms of \(p,q,P,Q\).

### Conformally self-dual solutions

**Theorem 4.1**.: _The metric (4.1) satisfies the conformally self-dual equation \(*C=C\) if and only if the functions \(P\) and \(Q\) are given by_ \[\begin{split} P&=a_{0}+a_{1}p+a_{2}p^{2}+a_{3}p^{3}+a_{4}p^{4},\\ Q&=a_{4}+a_{3}q+a_{2}q^{2}+a_{1}q^{3}+a_{0}q^{4},\end{split} \tag{4.9}\] _where \(a_{0},...,a_{4}\) are arbitrary constants. The solution is conformally half-flat but generically non-Einstein. Furthermore, the space is:_

* \((i)\) _Quaternionic-Kahler (i.e. Einstein) iff_ \(a_{1}=a_{3}\)_,_
* \((ii)\) _Hyper-Kahler (i.e. Ricci-flat) iff_ \(a_{1}=a_{3}\) _and_ \(a_{0}=a_{4}\)_,_
* \((iii)\) _Flat iff_ \(a_{1}=a_{3}=0\) _and_ \(a_{0}=a_{4}\)_._

**Remark 4.2**.: _We stress that the conformally self-dual solution (4.9) is different from the self-dual limit of the standard Plebanski-Demianski solution, which is a quaternionic-Kahler space corresponding to case \((i)\) above (see the next subsection). The solution (4.9) can be regarded (locally) as a generalisation of the standard Plebanski-Demianski space to a self-dual gravitational instanton in conformal gravity._

Proof of Theorem 4.1.: Recall that the SD equation \(*C=C\) is equivalent to \(\hat{R}_{-}=0\), where \(\hat{R}_{-}\) is given by (2.31). For notational convenience, let us denote \(x\equiv x_{-}\), \(y\equiv y_{-}\), \(z\equiv z_{-}\), \(u\equiv u_{-}\). Since \(\partial_{y}\) is Killing, we see that the metric (4.1) is SD if and only if \[\hat{u}_{xx}+(e^{\hat{u}})_{\hat{z}\hat{z}}=0, \tag{4.10}\] where \(\hat{u}=u-4\log z\) and \(\hat{z}=-\frac{1}{z}\). If one writes the Toda equation (4.10) in terms of the variables \(p,q,P,Q\) (using (4.8) for the \(-\) sign) and tries to solve for \(P,Q\) by brute force, the equation becomes too complicated and we were not able to solve it in this way. Instead, in order to solve (4.10) we recall the trick (2.63) that we used to solve the Toda equation in the Kerr-Newman case (section 2.6): we introduce an auxiliary variable \(\hat{\sigma}\) by \[\hat{u}_{x}=\hat{\sigma}_{\hat{z}},\qquad(e^{\hat{u}})_{\hat{z}}=-\hat{\sigma}_{x}. \tag{4.11}\] The vector fields \(\partial_{x},\partial_{z}\) can be computed from (4.8): we find \[\partial_{x}=\frac{PQ}{F}\left[(1-p^{2})\partial_{p}+(1-q^{2})\partial_{q}\right],\qquad\partial_{z}=\frac{(p-q)^{2}}{F}\left[(1-q^{2})P\partial_{p}+(1-p^{2})Q\partial_{q}\right], \tag{4.12}\] where \(F\equiv(1-p^{2})^{2}Q-(1-q^{2})^{2}P\). Noticing that \(\partial_{\hat{z}}=z^{2}\partial_{z}\), eqs.
(4.11) lead, respectively, to \[\frac{(1-p^{2})Q\dot{P}}{(1-pq)^{2}}+\frac{(1-q^{2})P\dot{Q}}{(1- pq)^{2}}+\frac{4(p+q)PQ}{(1-pq)^{2}} =(1-q^{2})P\frac{\partial\hat{\sigma}}{\partial p}+(1-p^{2})Q \frac{\partial\hat{\sigma}}{\partial q}, \tag{4.13a}\] \[\frac{(1-q^{2})\dot{P}}{(1-pq)^{2}}+\frac{(1-p^{2})\dot{Q}}{(1-pq) ^{2}}+\frac{4q(1-q^{2})P}{(1-pq)^{3}}+\frac{4p(1-p^{2})Q}{(1-pq)^{3}} =(1-p^{2})\frac{\partial\hat{\sigma}}{\partial p}+(1-q^{2})\frac{ \partial\hat{\sigma}}{\partial q}, \tag{4.13b}\] where \(\dot{P}=\frac{\mathrm{d}P}{\mathrm{d}\dot{P}}\), \(\dot{Q}=\frac{\mathrm{d}Q}{\mathrm{d}q}\). Now, from (4.13b) we find an expression for \(\partial_{p}\hat{\sigma}\), and we then replace this in (4.13a). When we do this, \(\dot{Q}\) disappears from the resulting equation, leaving us with an equation for \(\partial_{q}\hat{\sigma}\) and \(\dot{P}\) only. We then replace this new expression for \(\partial_{q}\hat{\sigma}\) in (4.13b), and we end up with an equation for \(\partial_{p}\hat{\sigma}\) and \(\dot{Q}\) only. Explicitly, we find: \[\frac{\partial\hat{\sigma}}{\partial q}=\frac{\dot{P}}{(1-pq)^{2}}+\frac{4qP}{( 1-pq)^{3}},\qquad\frac{\partial\hat{\sigma}}{\partial p}=\frac{\dot{Q}}{(1-pq) ^{2}}+\frac{4pQ}{(1-pq)^{3}}.\] Using these equations and the identity \(\partial_{p}\partial_{q}\hat{\sigma}=\partial_{q}\partial_{p}\hat{\sigma}\), a short calculation leads to \[(1-pq)^{2}\ddot{Q}+6p(1-pq)\dot{Q}+12p^{2}Q=(1-pq)^{2}\ddot{P}+6q(1-pq)\dot{P}+ 12q^{2}P. \tag{4.14}\] Applying \(\partial_{q}^{2}\) to this equation, and then \(\partial_{p}^{2}\) to the resulting expression, we get \[q^{2}\dddot{Q}-2q\dddot{Q}+2\ddot{Q}=p^{2}\dddot{P}-2p\dddot{P}+2\ddot{P},\] which can be rewritten as \[q^{3}\frac{\mathrm{d}^{2}}{\mathrm{d}q^{2}}\left(\frac{\ddot{Q}}{q}\right)=p ^{3}\frac{\mathrm{d}^{2}}{\mathrm{d}p^{2}}\left(\frac{\ddot{P}}{p}\right).\] Since the left side is a function of \(q\) only, and the right side is a function of \(p\) only, the equation is easy to solve: we find that \(P,Q\) must be fourth order polynomials in \(p\) and \(q\), respectively. In addition, the fact that \(P,Q\) must satisfy (4.14) imposes relations between the coefficients of the polynomials: this then leads to the form (4.9). It remains to prove the assertion concerning the special limits \((i),(ii),(iii)\). This can be done using formula (B.2) with \(b_{0}=a_{4}\), \(b_{1}=a_{3}\), \(b_{2}=a_{2}\), \(b_{3}=a_{1}\), \(b_{4}=a_{0}\). We find \[\frac{W_{0}^{-}}{W_{-}}=z_{-}^{3}\left[(a_{4}-a_{0})+\frac{(a_{3}-a_{1})}{2} \frac{(p+q)}{(1+pq)}\right]. \tag{4.15}\] Now we use Theorem 2.9, from where we see that the solution will be Einstein iff \(\frac{1}{z_{-}^{2}}\partial_{z_{-}}(\frac{W_{0}^{-}}{W_{-}})=\frac{R}{4}=\lambda\). Since it is conformally self-dual, the Einstein condition will imply that it is quaternionic-Kahler. From (4.15) we see that this is true iff \(a_{1}=a_{3}\). The cosmological constant is \(\lambda=3(a_{4}-a_{0})\), and the only non-vanishing component of the SD Weyl spinor is \[\Psi_{2}^{+}=-\frac{a_{1}}{z_{+}^{3}}. \tag{4.16}\] In addition, the solution will be hyper-Kahler iff \(R_{ab}=0\), which from the above reduces to \(a_{1}=a_{3}\) and \(a_{0}=a_{4}\). The only non-trivial part of the curvature is now (4.16). Finally, from these considerations and eq. (4.16) we see that the solution will be flat iff \(a_{1}=a_{3}=0\) and \(a_{0}=a_{4}\). 
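As a cross-check of the key step in the proof, the following sympy sketch (our notation) confirms that the quartic pair (4.9), with the coefficients of \(Q\) reversed relative to those of \(P\), satisfies the compatibility equation (4.14).

```python
# Sympy sketch: the polynomials (4.9) satisfy (4.14).
import sympy as sp

p, q = sp.symbols('p q')
a = sp.symbols('a0:5')

P = sum(a[i]*p**i for i in range(5))            # P = a0 + a1 p + ... + a4 p^4
Q = sum(a[i]*q**(4 - i) for i in range(5))      # Q = a4 + a3 q + ... + a0 q^4

lhs = (1 - p*q)**2*sp.diff(Q, q, 2) + 6*p*(1 - p*q)*sp.diff(Q, q) + 12*p**2*Q
rhs = (1 - p*q)**2*sp.diff(P, p, 2) + 6*q*(1 - p*q)*sp.diff(P, p) + 12*q**2*P
assert sp.expand(lhs - rhs) == 0
print("(4.9) satisfies (4.14)")
```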
Note that in the flat limit there are still two parameters left (\(a_{0}\) and \(a_{2}\)), so we actually get a 2-parameter family of flat metrics, as is expected from the analysis in [21]. ### Cosmological Einstein-Maxwell solutions Although the Plebanski-Demianski solution to the system (2.9) is well-known [21], here we rederive the result as an application of the framework developed in section 2. This illustrates that one actually does not need to solve the full Einstein equations as in [21], but just \(R=4\lambda\). This example also allows us to give a trick to solve the modified Toda equation. **Proposition 4.3**.: _The metric (4.1) satisfies the cosmological Einstein-Maxwell equations (2.9) if and only if the functions \(P\) and \(Q\) are given by_ \[\begin{split}& P=a_{0}+a_{1}p+a_{2}p^{2}+a_{3}p^{3}+a_{4}p^{4},\\ & Q=(a_{0}+\tfrac{1}{3}\lambda)+a_{1}q+a_{2}q^{2}+a_{3}q^{3}+(a_{ 4}-\tfrac{1}{3}\lambda)q^{4},\end{split} \tag{4.17}\] _where \(a_{0},...,a_{4}\) are arbitrary constants._ Proof.: For concreteness, we choose to work with the ASD side, and we denote \(x\equiv x_{-}\), \(y\equiv y_{-}\), \(z\equiv z_{-}\), \(u\equiv u_{-}\), \(W\equiv W_{-}\). Since the metric (4.1) is conformally Kahler with symmetry, from Propositions 2.2 and 2.8 we know that the Einstein-Maxwell-\(\lambda\) equation reduces to \[u_{xx}+(e^{u})_{zz}=-4\lambda We^{u} \tag{4.18}\] (as \(\partial_{y}\) is Kiling). To solve the modified Toda equation (4.18), we use a slight variation of the trick used in (4.11): we introduce two variables \(\sigma,T\) by \[u_{x}=\sigma_{z}+T,\qquad(e^{u})_{z}=-\sigma_{x}. \tag{4.19}\] Equation (4.18) becomes \(T_{x}=-4\lambda We^{u}\), and, using (4.12), this gives \[(1-p^{2})\partial_{p}T+(1-q^{2})\partial_{q}T=-4\lambda\frac{(1-p^{2}q^{2})}{ (p-q)^{2}}. \tag{4.20}\] Equations (4.19) lead to \[\begin{split}\tfrac{(1-p^{2})}{(p-q)^{2}}Q\dot{P}+\tfrac{(1-q^{2 })}{(p-q)^{2}}P\dot{Q}+\tfrac{4(p+q)}{(p-q)^{2}}PQ=(1-q^{2})P\partial_{p}\sigma +(1-p^{2})Q\partial_{q}\sigma+\tfrac{F}{(p-q)^{2}}T,\\ \tfrac{(1-q^{2})}{(p-q)^{2}}\dot{P}+\tfrac{(1-p^{2})}{(p-q)^{2}} \dot{Q}-\tfrac{4(1-q^{2})}{(p-q)^{3}}P+\tfrac{4(1-p^{2})}{(p-q)^{3}}Q=(1-p^{2} )\partial_{p}\sigma+(1-q^{2})\partial_{q}\sigma,\end{split}\] where \(F=(1-p^{2})^{2}Q-(1-q^{2})^{2}P\). Proceeding as in the proof of Theorem 4.1, we now arrive at the system \[\frac{\partial\sigma}{\partial q}=\frac{\dot{P}}{(p-q)^{2}}-\frac{4P}{(p-q)^{ 3}}-\frac{(1-p^{2})}{(p-q)^{2}}T,\qquad\frac{\partial\sigma}{\partial p}=\frac {\dot{Q}}{(p-q)^{2}}+\frac{4Q}{(p-q)^{3}}+\frac{(1-q^{2})}{(p-q)^{2}}T.\] Using the identity \(\partial_{p}\partial_{q}\sigma=\partial_{q}\partial_{p}\sigma\) and eq. (4.20), we get \[(p-q)^{2}\ddot{Q}+6(p-q)\dot{Q}+12Q=(p-q)^{2}\ddot{P}-6(p-q)\dot{P}+12P+4 \lambda(1-p^{2}q^{2}). \tag{4.21}\] Applying \(\partial_{q}^{2}\) and then \(\partial_{p}^{2}\) we are led to \[\dddot{Q}-\dddot{P}=-8\lambda. \tag{4.22}\] Taking additional derivatives \(\partial_{p}\) and \(\partial_{q}\), and using that \(P\) and \(Q\) depend only on \(p\) and \(q\) respectively, we see that \(P\) and \(Q\) must be fourth order polynomials, \(P=\sum_{i=0}^{4}a_{i}p^{i}\), \(Q=\sum_{i}^{4}b_{i}q^{i}\). Replacing back in (4.21), we get \(b_{0}=a_{0}+\tfrac{1}{3}\lambda\), \(b_{1}=a_{1}\), \(b_{2}=a_{2}\), \(b_{3}=a_{3}\), \(b_{4}=a_{4}-\tfrac{1}{3}\lambda\), so the result (4.17) follows. 
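Analogously to the previous theorem, the final step can be machine-checked: the sympy sketch below (our notation) verifies that the polynomials (4.17) satisfy the compatibility equation (4.21), including the cosmological term \(4\lambda(1-p^{2}q^{2})\).

```python
# Sympy sketch: the polynomials (4.17) satisfy (4.21).
import sympy as sp

p, q, lam = sp.symbols('p q lambda')
a = sp.symbols('a0:5')

P = sum(a[i]*p**i for i in range(5))
Q = (a[0] + lam/3) + a[1]*q + a[2]*q**2 + a[3]*q**3 + (a[4] - lam/3)*q**4

lhs = (p - q)**2*sp.diff(Q, q, 2) + 6*(p - q)*sp.diff(Q, q) + 12*Q
rhs = (p - q)**2*sp.diff(P, p, 2) - 6*(p - q)*sp.diff(P, p) + 12*P + 4*lam*(1 - p**2*q**2)
assert sp.expand(lhs - rhs) == 0
print("(4.17) satisfies (4.21)")
```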
Using formulas (B.2) and (2.50), we find: \[\frac{W_{0}^{\pm}}{W_{\pm}} =\frac{(a_{3}\pm a_{1})}{2}-(a_{0}-a_{4}+\frac{1}{3}\lambda)\left( \frac{p+q}{1\mp pq}\right)+\frac{\lambda}{3}\left(\frac{1\pm pq}{p-q}\right)^ {3}, \tag{4.23}\] \[\Psi_{2}^{\pm} =\,-\,\frac{(a_{3}\pm a_{1})}{2}\left(\frac{p-q}{1\pm pq}\right)^ {3}+(a_{0}-a_{4}+\frac{1}{3}\lambda)\left(\frac{p+q}{1\mp pq}\right)\left( \frac{p-q}{1\pm pq}\right)^{3}. \tag{4.24}\] From the above formulas we see that the conformally SD limit \(\Psi_{2}^{-}=0\) corresponds to \(a_{3}=a_{1}\) and \(a_{0}-a_{4}+\frac{\lambda}{3}=0\), which implies \(\frac{W_{0}^{\pm}}{W_{\pm}}=\frac{\lambda}{3}z_{\pm}^{3}\). Using then Theorem 2.9, in this limit we get \(\rho=\lambda\kappa\), so the space is Einstein. Thus, the conformally SD limit of the standard Plebanski-Demianski solution (4.17) is indeed different from the generalisation found in Theorem 4.1. The Chen-Teo class In this section we show how to apply the framework of section 2 to the Chen-Teo class [22, 23], but we leave the detailed construction of the generalised solutions for future works. Unlike all examples considered so far, the Chen-Teo class is generically _not_ ambi-Kahler, but at most conformally Kahler w.r.t. _only one_ orientation. (Correspondingly, in general it does not have Lorentzian sections.) ### Toda formulation Consider the 4-dimensional family of metrics given in local coordinates \((\tau,\phi,x_{1},x_{2})\) by \[g=\frac{(F\mathrm{d}\tau+G\mathrm{d}\phi)^{2}}{(x_{1}-x_{2})HF}+ \frac{kH}{(x_{1}-x_{2})^{3}}\left(\frac{\mathrm{d}x_{1}^{2}}{X_{1}}-\frac{ \mathrm{d}x_{2}^{2}}{X_{2}}-\frac{X_{1}X_{2}}{kF}\mathrm{d}\phi^{2}\right), \tag{5.1}\] where \(k\) is a constant, \(G(x_{1},x_{2}),H(x_{1},x_{2}),X_{1}(x_{1}),X_{2}(x_{2})\) are arbitrary functions of their arguments, and \[F=x_{2}^{2}X_{1}-x_{1}^{2}X_{2}. \tag{5.2}\] The vector fields \(\partial_{\tau},\partial_{\phi}\) are Killing. For a specific choice of the functions \(G,H,X_{1},X_{2}\), the metric (5.1) is the Ricci-flat Chen-Teo geometry, see [23, Eq. (2.1)] 4. Footnote 4: To compare our notation to that of [23], set \(X_{1}\equiv X\), \(X_{2}\equiv Y\), \(x_{1}\equiv x\), \(x_{2}\equiv y\). Let \(c\) be an arbitrary constant, and define new variables \[\begin{split} W&:=\frac{k}{c^{2}}\frac{(x_{1}-x_{2} )H}{F},\qquad\psi:=\frac{\sqrt{k}}{c}\tau,\qquad y:=\frac{c}{\sqrt{k}}\phi, \qquad\tilde{G}:=\frac{k}{c^{2}}\frac{G}{F},\qquad A:=\tilde{G}\mathrm{d}y\\ \mathrm{d}x&:=c\left[\frac{x_{1}}{X_{1}}\mathrm{d}x_ {1}-\frac{x_{2}}{X_{2}}\mathrm{d}x_{2}\right],\qquad\mathrm{d}z:=c\frac{(x_{2} \mathrm{d}x_{1}-x_{1}\mathrm{d}x_{2})}{(x_{1}-x_{2})^{2}},\qquad e^{u}:=\frac{ -X_{1}X_{2}}{(x_{1}-x_{2})^{4}}.\end{split} \tag{5.3}\] Then a calculation shows that (5.1) adopts the form (2.13): \[g=W^{-1}(\mathrm{d}\psi+A)^{2}+W[\mathrm{d}z^{2}+e^{u}(\mathrm{d }x^{2}+\mathrm{d}y^{2})]. \tag{5.4}\] The Killing fields are now \(\partial_{\psi},\partial_{y}\). **Remark 5.1** (The Chen-Teo parameter \(\nu\)).: _From the expression for \(\mathrm{d}z\) in (5.3) we can find \(z\) by integration: the solution is \(z=\frac{cx_{2}}{x_{2}-x_{1}}+\nu\), where \(\nu\) is an arbitrary constant. We are free to choose any relation between \(c\) and \(\nu\) we want; in particular, setting_ \[c\equiv-(1+\nu), \tag{5.5}\] _we get \(z=\frac{\nu x_{1}+x_{2}}{x_{1}-x_{2}}\), which, in the Ricci-flat Chen-Teo case, is the (inverse of the) conformal factor that makes the metric Kahler. 
The parameter \(\nu\) is particularly important in the Ricci-flat case [23]: the Chen-Teo solution is a one-parameter (\(-1\leq\nu\leq 1\)) family of metrics interpolating between the Plebanski-Demianski (\(\nu=1\)) and Gibbons-Hawking (\(\nu=-1\)) spaces._ The fact that the metric (5.1) can be written as (5.4) does not imply, of course, that the geometry (5.1) is necessarily conformally Kahler. To investigate this, we choose the coframe \(\beta^{0}=W^{-1/2}(\mathrm{d}\psi+A)\), \(\beta^{1}=\sqrt{W}\mathrm{d}z\), \(\beta^{2}=\sqrt{W}e^{u/2}\mathrm{d}x\), \(\beta^{3}=\sqrt{W}e^{u/2}\mathrm{d}y\) for (5.4). The 2-form \(\kappa=\beta^{0}\wedge\beta^{1}+\beta^{2}\wedge\beta^{3}\) is equal to (2.14), and defines the almost-complex structure \(J^{a}{}_{b}=\kappa_{bc}g^{ca}\). The type-\((1,0)\) eigenspace is spanned by \(\ell=\frac{1}{\sqrt{2}}(\beta^{0}+{\rm i}\beta^{1})\), \(m=\frac{1}{\sqrt{2}}(\beta^{2}+{\rm i}\beta^{3})\). In particular, the following are type-(1,0) forms: \[\omega^{0}={\rm d}\psi+A+{\rm i}W{\rm d}z+B({\rm d}x+{\rm i}{\rm d}y),\qquad \omega^{1}={\rm d}x+{\rm i}{\rm d}y \tag{5.6}\] where \(B\) is an arbitrary complex function. Since \({\rm d}\omega^{1}=0\), we see that \(J\) will be integrable if \({\rm d}\omega^{0}=0\). This gives the Hermitian condition, and, assuming that it holds, the conformally Kahler condition is \({\rm d}(z^{-2}\kappa)=0\). These two conditions lead respectively to: \[Z_{1} :=\tilde{G}_{z}-W_{x}=0, \tag{5.7a}\] \[Z_{2} :=\tilde{G}_{x}+z^{2}\partial_{z}\left(\tfrac{Ve^{u}}{z^{2}} \right)=0 \tag{5.7b}\] (recall (2.24c), (2.24d)). The geometry (5.1) will be conformally Kahler (for the given choice of almost-complex structure (5.6)) iff the conditions (5.7) are satisfied. The vector field (2.3) is the Killing vector \(\partial_{\psi}\). **Remark 5.2**.: _To have some intuition about (5.7), we express \(Z_{1}\) in terms of the original variables:_ \[Z_{1}=\frac{k(x_{1}-x_{2})X_{1}X_{2}}{c^{2}F}\left[\partial_{x_{2}}\left( \frac{x_{1}G}{X_{1}F}+\frac{x_{2}H}{(x_{1}-x_{2})F}\right)+\partial_{x_{1}} \left(\frac{x_{2}G}{X_{2}F}+\frac{x_{1}H}{(x_{1}-x_{2})F}\right)\right]. \tag{5.8}\] _Then, for the original Chen-Teo Ricci-flat metric [23], using [17, Eqs. (3.7a)-(3.7b)] we see that indeed \(Z_{1}=0\), which justifies our choice of almost-complex structure (5.6) for the general class (5.1). (In the Ricci-flat case, \(Z_{2}=0\) follows form \(Z_{1}=0\).)_ Having identified the Toda variables for (5.1), the Ricci-flat Chen-Teo metric [23] can now be obtained as an application of the framework of section 2; we briefly sketch the procedure in what follows. The metric (5.1) will be Ricci-flat iff \(u\) satisfies the Toda equation and \(W=\gamma W_{0}\), where \(\gamma\) is a constant and \(W_{0}\) is given by (2.37). The Toda equation is \(u_{xx}+(e^{u})_{zz}=0\). The trick to solve it is the same that we used in previous cases, see e.g. the proof of Theorem 4.1: we introduce an auxiliary variable \(\sigma\) by \(u_{x}=\sigma_{z}\), \((e^{u})_{z}=-\sigma_{x}\). 
Using \[\partial_{x}=-\frac{X_{1}X_{2}}{cF}(x_{1}\partial_{x_{1}}+x_{2}\partial_{x_{2} }),\qquad\partial_{z}=\frac{(x_{1}-x_{2})^{2}}{cF}(x_{2}X_{1}\partial_{x_{1}}+ x_{1}X_{2}\partial_{x_{2}}), \tag{5.9}\] we deduce \[\frac{x_{1}X_{2}\dot{X}_{1}}{(x_{1}-x_{2})^{2}}+\frac{x_{2}X_{1} \dot{X}_{2}}{(x_{1}-x_{2})^{2}}-\frac{4X_{1}X_{2}}{(x_{1}-x_{2})^{2}}=-x_{2}X_ {1}\frac{\partial\sigma}{\partial x_{1}}-x_{1}X_{2}\frac{\partial\sigma}{ \partial x_{2}},\] \[\frac{x_{2}\dot{X}_{1}}{(x_{1}-x_{2})^{2}}+\frac{x_{1}\dot{X}_{2} }{(x_{1}-x_{2})^{2}}-\frac{4x_{2}X_{1}}{(x_{1}-x_{2})^{3}}+\frac{4x_{1}X_{2}}{ (x_{1}-x_{2})^{3}}=-x_{1}\frac{\partial\sigma}{\partial x_{1}}-x_{2}\frac{ \partial\sigma}{\partial x_{2}}\] where \(\dot{X}_{1}\equiv\frac{{\rm d}X_{1}}{{\rm d}x_{1}}\), \(\dot{X}_{2}\equiv\frac{{\rm d}X_{2}}{{\rm d}x_{2}}\). This leads to \[\frac{\partial\sigma}{\partial x_{2}}=-\frac{\dot{X}_{1}}{(x_{1}-x_{2})^{2}}+ \frac{4X_{1}}{(x_{1}-x_{2})^{3}},\qquad\frac{\partial\sigma}{\partial x_{1}}=- \frac{\dot{X}_{2}}{(x_{1}-x_{2})^{2}}-\frac{4X_{2}}{(x_{1}-x_{2})^{3}}.\] Using then \(\partial_{x_{1}}\partial_{x_{2}}\sigma=\partial_{x_{2}}\partial_{x_{1}}\sigma\), after some calculations we arrive at \[(x_{1}-x_{2})^{2}\ddot{X}_{2}+6(x_{1}-x_{2})\dot{X}_{2}+12X_{2}=(x_{1}-x_{2})^ {2}\ddot{X}_{1}-6(x_{1}-x_{2})\dot{X}_{2}+12X_{1}. \tag{5.10}\] Applying \(\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}\), we get \(\dddot{X}_{1}=\dddot{X}_{2}\), which implies that \(X_{1},X_{2}\) are fourth order polynomials, and replacing in (5.10) we see that they must have the same coefficients, \[X_{1}=a_{0}+a_{1}x_{1}+a_{2}x_{1}^{2}+a_{3}x_{1}^{3}+a_{4}x_{1}^{4},\qquad X_{2} =a_{0}+a_{1}x_{2}+a_{2}x_{2}^{2}+a_{3}x_{2}^{3}+a_{4}x_{2}^{4}. \tag{5.11}\] This determines \(u\), which in turn determines \(W\) via \(W=\gamma z(1-\frac{z}{2}u_{z})\). Using (5.3) we find \(H=\frac{c^{2}}{k}\frac{FW}{(x_{1}-x_{2})}\). Finally, \(G\) is found via equations (5.7). This way we recover [23, Eq. (2.1)]. ### A conformally self-dual family **Theorem 5.3**.: _Assume the family of metrics (5.1) to be conformally Kahler w.r.t the almost-complex structure (5.6) (that is, conditions (5.7) are satisfied). Choose the relation (5.5) between the parameters \(c\) and \(\nu\). Then (5.1) is conformally self-dual \(*C=C\) if and only if the functions \(X_{1},X_{2}\) are given by_ \[\begin{split} X_{1}&=a_{0}+a_{1}x_{1}+a_{2}x_{1}^{2 }+a_{3}x_{1}^{3}+a_{4}x_{1}^{4},\\ X_{2}&=a_{0}\nu^{2}-a_{1}\nu x_{2}+a_{2}x_{2}^{2}- \tfrac{a_{3}}{\nu}x_{2}^{3}+\tfrac{a_{4}}{\nu^{2}}x_{2}^{4}.\end{split} \tag{5.12}\] Proof.: The proof is very similar to the proof of Theorem 4.1. We use that the SD equation \(*C=C\) is equivalent to \(\hat{R}=0\), where \(\hat{R}\) is given by (2.31): \(\dot{u}_{xx}+(e^{\hat{u}})_{\hat{z}\hat{z}}=0\), with \(\hat{u}=u-4\log z\), \(\hat{z}=-1/z\) (we used that \(\partial_{y}\) is Killing). 
Introducing \(\hat{\sigma}\) by \(\hat{u}_{x}=\hat{\sigma}_{\hat{z}}\), \((e^{\hat{u}})_{\hat{z}}=-\hat{\sigma}_{x}\), and using that the vector fields \(\partial_{x},\partial_{z}\) are given by (5.9), we are led to \[\frac{x_{1}X_{2}\dot{X}_{1}}{(\nu x_{1}+x_{2})^{2}}+\frac{x_{2}X _{1}\dot{X}_{2}}{(\nu x_{1}+x_{2})^{2}}-\frac{4X_{1}X_{2}}{(\nu x_{1}+x_{2})^{ 2}} = -x_{2}X_{1}\frac{\partial\hat{\sigma}}{\partial x_{1}}-x_{1}X_{2} \frac{\partial\hat{\sigma}}{\partial x_{2}},\] \[\frac{x_{2}\dot{X}_{1}}{(\nu x_{1}+x_{2})^{2}}+\frac{x_{1}\dot{X} _{2}}{(\nu x_{1}+x_{2})^{2}}-\frac{4\nu x_{2}X_{1}}{(\nu x_{1}+x_{2})^{3}}- \frac{4x_{1}X_{2}}{(\nu x_{1}+x_{2})^{3}} = -x_{1}\frac{\partial\hat{\sigma}}{\partial x_{1}}-x_{2}\frac{ \partial\hat{\sigma}}{\partial x_{2}},\] from where we deduce \[\frac{\partial\hat{\sigma}}{\partial x_{2}}=-\frac{\dot{X}_{1}}{(\nu x_{1}+x_{ 2})^{2}}+\frac{4\nu X_{1}}{(\nu x_{1}+x_{2})^{3}},\qquad\frac{\partial\hat{ \sigma}}{\partial x_{1}}=-\frac{\dot{X}_{2}}{(\nu x_{1}+x_{2})^{2}}+\frac{4X_{ 1}}{(\nu x_{1}+x_{2})^{3}}.\] Using \(\partial_{x_{1}}\partial_{x_{2}}\hat{\sigma}=\partial_{x_{2}}\partial_{x_{1}} \hat{\sigma}\), we find \[(\nu x_{1}+x_{2})^{2}\ddot{X}_{1}-6(\nu x_{1}+x_{2})\dot{X}_{1}+12\nu^{2}X_{ 1}=(\nu x_{1}+x_{2})^{2}\ddot{X}_{2}-6(\nu x_{1}+x_{2})\dot{X}_{2}+12X_{2}. \tag{5.13}\] Applying \(\partial_{x_{2}}^{2}\partial_{x_{1}}^{2}\), we get \[\dddot{X}_{1}=\nu^{2}\dddot{X}_{2}, \tag{5.14}\] which implies that \(X_{1}\) and \(X_{2}\) are fourth order polynomials in \(x_{1}\) and \(x_{2}\) respectively. Replacing in (5.13), we find relations between the coefficients and we get (5.12). **Remark 5.4**.: _Similarly to Theorem 4.1, the solution (5.12) can be regarded (locally) as a generalisation of the Ricci-flat Chen-Teo metric to a self-dual gravitational instanton in conformal gravity. However, unlike (4.9), the conformally self-dual equation now determines \(X_{1},X_{2}\) in (5.1) to be given by (5.12), but it does not determine the other arbitrary functions \(G,H\) in (5.1). These are constrained by the conformally Kahler condition (5.7), but this restriction does not determine \(G,H\) uniquely. A detailed analysis of this issue is left for future work._ ## 6 Final comments We studied generalised gravitational instantons corresponding to conformally Kahler 4-manifolds whose Ricci tensor is invariant under the complex structure. (The latter condition is equivalent to the existence of a Killing vector.) We obtained generic identities for the metric, Ricci scalar and Ricci form, and we used this to show that a class of field equations reduce to the scalar \(SU(\infty)\) Toda equation. More precisely, we showed this for the conformally self-dual and cosmological Einstein-Maxwell field equations. (In the latter case, the scalar equation is the modified Toda equation if the cosmological constant is non-zero.) We applied the construction to a large number of examples, and we gave a trick to solve the Toda equation (with an extra symmetry) in the most complicated cases among these. The reduction in the conformally SD case was already known from the work of LeBrun [19] on scalar-flat Kahler geometry, so the novelty in this sense is the application to the construction of conformally self-dual generalisations of the Page-Pope, Plebanski-Demianski, and Chen-Teo metrics, which give new self-dual (generically non-Einstein) gravitational instantons in conformal gravity. In the Page-Pope case (3.1) the solutions can be classified, cf. Theorem 3.2 and Remark 3.3. 
For the Plebanski-Demianski ansatz (4.1), the solution can be found in closed form, cf. Theorem 4.1: it is a 5-parameter non-Einstein metric (and thus different from the self-dual limit of the standard Plebanski-Demianski space). For the Chen-Teo class (5.1), we found a family of conformally SD solutions, but the metric cannot be given in closed form: the functions \(G,H\) in the Chen-Teo ansatz (although restricted by the conformally Kahler condition (5.7)) remain undetermined. The analysis of this issue is left for future work, together with thermodynamical aspects of the new solutions. For the cosmological Einstein-Maxwell equations, we showed that the solution for the Page-Pope ansatz is a generalised Taub-NUT geometry that depends on 5 parameters, which can be identified with mass, electromagnetic charge, NUT charge, cosmological constant, and curvature of the Riemann surface over which it is fibered. In the Plebanski-Demianski case, we recovered the standard Euclidean version of the cosmological electro-vacuum solution [21]. For the Chen-Teo class, the construction of the corresponding cosmological Einstein-Maxwell solution remains an open problem, but in future works we will apply the framework developed in this paper to achieve this goal. The purely Einstein and purely Einstein-Maxwell cases are independently interesting, and the difficulties in their construction are different. In particular, the purely Einstein case is likely to be relevant for a possible generalisation of the (Ricci-flat) instanton classification of [14] to Einstein metrics and its relation to the compact case [15]. **Acknowledgements.** I am very grateful to Maciej Dunajski and Paul Tod for very helpful conversations about the topics of this work, and to the Institut Mittag-Leffler in Djursholm, Sweden for hospitality during the conference "Einstein Spaces and Special Geometry" in July 2023. I would also like to thank the Alexander von Humboldt Foundation and the Max Planck Society for financial support. ## Appendix A Some background Basic definitions.Let \((M,g_{ab})\) be a 4-dimensional, orientable Riemannian manifold (signature \((++++)\)), and let \(\nabla_{a}\) be the Levi-Civita connection of \(g_{ab}\). We say that \((M,g_{ab})\) is almost-Hermitian if there is an almost-complex structure \(J^{a}{}_{b}\) which is compatible with \(g_{ab}\) (i.e. \(J^{a}{}_{c}J^{c}{}_{b}=-\delta^{a}_{b}\) and \(g_{cd}J^{c}{}_{a}J^{d}{}_{b}=g_{ab}\)). The tensor field \(J^{a}{}_{b}\) produces a decomposition \(TM=T^{+}\oplus T^{-}\), where \(T^{\pm}\) is the eigenspace with eigenvalue \(\pm\mathrm{i}\). Similarly, the cotangent bundle splits as \(T^{*}M=T^{*+}\oplus T^{*-}\). Elements of \(T^{*+}\) are referred to as type-\((1,0)\) forms, and elements of \(T^{*-}\) are type-\((0,1)\) forms. We say that the manifold \(J\) is Hermitian if \(J\) is integrable, that is, if \(T^{+}\) is involutive under the Lie bracket. Equivalently, \(J\) is integrable iff the differential ideal generated by type-\((1,0)\) forms is closed under exterior differentiation. If the integrability condition is satisfied, there exist complex scalars \(z^{\alpha}=(z^{0},z^{1})\) (called holomorphic coordinates) such that \(T^{*+}\) is spanned by \(\mathrm{d}z^{0},\mathrm{d}z^{1}\). The Hermitian condition is common to the conformal class of \(g_{ab}\). We say that \((M,g_{ab},J^{a}{}_{b})\) is Kahler if it is Hermitian and the fundamental 2-form \(\kappa_{ab}\equiv g_{bc}J^{c}{}_{a}\) is closed, \(\mathrm{d}\kappa=0\). 
Alternatively, the Kahler condition is equivalent to \(\nabla_{a}J^{b}{}_{c}=0\). We say that \((M,g_{ab},J^{a}{}_{b})\) is conformally Kahler if there is a positive scalar field \(\Omega\) such that \((M,\hat{g}_{ab},J^{a}{}_{b})\) is Kahler, where \(\hat{g}_{ab}=\Omega^{2}g_{ab}\). The fundamental 2-form \(\kappa\) is always an eigenform of the Hodge star \(*\), i.e., it is self-dual (SD) or anti-self-dual (ASD), \(*\kappa=\pm\kappa\), and we say that the SD and ASD cases have opposite orientation. We say that \((M,g_{ab})\) is ambi-Hermitian if there are two integrable almost-complex structures \(J^{\pm}\), with opposite orientation, which are compatible with \(g_{ab}\). We say that \((M,g_{ab})\) is ambi-Kahler if it is ambi-Hermitian and there are two positive scalar fields \(\Omega_{\pm}\) such that the metrics \(g_{ab}^{\pm}=\Omega_{\pm}^{2}g_{ab}\) are Kahler. Frames.An orthonormal coframe is a set of four 1-forms \((\beta^{0},\beta^{1},\beta^{2},\beta^{3})\) such that \[g=\beta^{0}\otimes\beta^{0}+\beta^{1}\otimes\beta^{1}+\beta^{2}\otimes\beta^{ 2}+\beta^{3}\otimes\beta^{3}.\] (A.1) A null coframe \((\ell,n,m,\tilde{m})\) can be constructed as \(\ell:=\frac{1}{\sqrt{2}}(\beta^{0}+{\rm i}\beta^{1})\), \(m:=\frac{1}{\sqrt{2}}(e^{2}+{\rm i}e^{3})\), \(n=\bar{\ell}\), \(\tilde{m}=-\tilde{m}\), so that the metric is \(g=2(\ell\odot n-m\odot\tilde{m})\). The volume form is \(\varepsilon=-\beta^{0}\wedge\beta^{1}\wedge\beta^{2}\wedge\beta^{3}\) (we follow the conventions of [27, 28]). With this convention, a basis of ASD 2-forms is given by \[\kappa_{1}=\beta^{0}\wedge\beta^{1}+\beta^{2}\wedge\beta^{3},\qquad\kappa_{2} =\beta^{0}\wedge\beta^{2}-\beta^{1}\wedge\beta^{3},\qquad\kappa_{3}=\beta^{0} \wedge\beta^{3}+\beta^{1}\wedge\beta^{2}.\] (A.2) Raising an index with the inverse metric \(g^{ab}\), we get three almost-Hermitian structures \((J_{i})^{a}{}_{b}:=(\kappa_{i})_{bc}g^{ca}\), satisfying the quaternion algebra \(J_{i}J_{j}=-\delta_{ij}+\epsilon_{ijk}J_{k}\). The triple \((J_{1},J_{2},J_{3})\) is called an almost-hyper-Hermitian structure. The type-\((1,0)\) forms of \(J_{1}\) are spanned by \(\ell,m\). We also note that \[\kappa_{2}+{\rm i}\kappa_{3}=2\ell\wedge m.\] (A.3) Given an arbitrary vector \(\xi\), the triple \((J_{1},J_{2},J_{3})\) can be used to construct a new (non-normalized) orthogonal frame: \((\xi,J_{1}\xi,J_{2}\xi,J_{3}\xi)\). Defining \(W^{-1}:=g(\xi,\xi)\) and \(e_{0}:=W^{1/2}\xi\), \(e_{i}:=J_{i}e_{0}\), the set \(\theta^{\bf a}:=g(e_{\bf a},\cdot)\) (\({\bf a}=0,...,3\)) is a new orthonormal coframe. Spinors.The spin group in four dimensions and Riemannian signature is \(SU(2)_{L}\times SU(2)_{R}\). Spinors transforming under \(SU(2)_{L}\) (resp. \(SU(2)_{R}\)) have unprimed (resp. primed) indices. The spin spaces are equipped with symplectic structures \(\epsilon_{AB},\epsilon_{A^{\prime}B^{\prime}}\) (with inverses \(\epsilon^{AB},\epsilon^{A^{\prime}B^{\prime}}\)), and with an anti-holomorphic involution denoted by \(\dagger\), so that the complex conjugates of \(o^{A},\alpha^{A^{\prime}}\) are \(o^{\dagger A},\alpha^{\dagger A^{\prime}}\) respectively. 
If \(\mathbb{S},\mathbb{S}^{\prime}\) are the spin bundles, and \(\Lambda_{\pm}^{2}\) the bundles of (anti-)self-dual 2-forms, we have the isomorphisms \[TM\otimes\mathbb{C}\cong\mathbb{S}\otimes\mathbb{S}^{\prime},\qquad\Lambda_{+}^ {2}\cong\mathbb{S}^{\prime*}\odot\mathbb{S}^{\prime*},\qquad\Lambda_{-}^{2} \cong\mathbb{S}^{*}\odot\mathbb{S}^{*}.\] (A.4) Locally, the space of almost-Hermitian structures (with a given orientation) is the projective spin bundle, whose fibers are \(\mathbb{C}\mathbb{P}^{1}\)s. This means that an almost-Hermitian structure is locally represented by a projective spinor field. With our conventions, ASD orientation corresponds to unprimed spinors. The triple \((\kappa_{1},\kappa_{2},\kappa_{3})\) can be defined using a single spinor, say \(o_{A}\), together with its complex conjugate \(o_{A}^{\dagger}\). Explicitly, choosing the normalization \(\epsilon^{AB}o_{A}o_{B}^{\dagger}=1\), we have (recall (A.4)) \[(\kappa_{1})_{ab}=2{\rm i}o_{(A}o_{B)}^{\dagger}\epsilon_{A^{\prime}B^{\prime} },\quad(\kappa_{2})_{ab}=(o_{A}o_{B}+o_{A}^{\dagger}o_{B}^{\dagger})\epsilon_{A^ {\prime}B^{\prime}},\quad(\kappa_{3})_{ab}={\rm i}(o_{A}^{\dagger}o_{B}^{ \dagger}-o_{A}o_{B})\epsilon_{A^{\prime}B^{\prime}}.\] (A.5) Choosing also an arbitrary primed spinor \(\alpha^{A^{\prime}}\), with complex conjugate \(\alpha^{\dagger A^{\prime}}\), a null frame can be constructed as \[\ell^{a}=o^{A}\alpha^{A^{\prime}},\qquad n^{a}=o^{\dagger A}\alpha^{\dagger A^{ \prime}},\qquad m^{a}=o^{A}\alpha^{\dagger A^{\prime}},\qquad\tilde{m}^{a}=o^ {\dagger A}\alpha^{A^{\prime}}.\] (A.6) Additional details for the Plebanski-Demianski ansatz Consider the variables (4.8), and take \(P,Q\) to be given by \[P=a_{0}+a_{1}p+a_{2}p^{2}+a_{3}p^{3}+a_{4}p^{4},\qquad Q=b_{0}+b_{1}q+b_{2}q^{2}+ b_{3}q^{3}+b_{4}q^{4},\] (B.1) where \(a_{i},b_{i}\) are arbitrary constants. Then we find: \[\begin{split}\frac{W_{0}^{\pm}}{W_{\pm}}&=\frac{(- 1)}{2(1\mp pq)(p-q)^{3}}\left\{2(a_{0}-b_{0})+(a_{1}-b_{1})(p+q)+(b_{3}\pm a_{1} )(q^{3}\mp pq^{4})\right.\\ &\quad+2(b_{4}-a_{0})q^{4}+(b_{1}\pm a_{3})(\mp p^{3}+p^{4}q)+2(b_ {0}-a_{4})p^{4}+(a_{3}-b_{3})(p^{4}q^{3}+p^{3}q^{4})\right.\\ &\quad+2(a_{4}-b_{4})p^{4}q^{4}+2[a_{2}-b_{2}\pm 2(a_{0}-b_{0})] pq\pm 2[a_{2}-b_{2}\mp 2(b_{4}-a_{0})]pq^{3}\\ &\quad\mp 2[b_{2}-a_{2}\mp 2(a_{4}-b_{0})]p^{3}q+2[a_{2}-b_{2}\pm 2 (a_{4}-b_{4})]p^{3}q^{3}\\ &\quad\left.\mp 3(b_{3}\pm b_{1})(p^{3}q^{2}\pm pq^{2})+3(a_{3}\pm a _{1})(p^{2}q\pm p^{2}q^{3})\right\}.\end{split}\] (B.2)
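As a small numerical cross-check of the frame conventions of Appendix A, the following numpy sketch verifies that the ASD basis (A.2), with the Euclidean metric and our assumed component conventions (namely \((\kappa_{1})_{01}=+1\) and \(J^{a}{}_{b}=\kappa_{bc}g^{ca}\)), generates the quaternion algebra \(J_{i}J_{j}=-\delta_{ij}+\epsilon_{ijk}J_{k}\).

```python
# Numerical sanity check of the quaternion algebra for the ASD basis (A.2).
# The component conventions used here are our assumptions.
import numpy as np

def two_form(pairs):
    """Antisymmetric 4x4 matrix with +1 at the listed (a, b) slots."""
    m = np.zeros((4, 4))
    for a, b in pairs:
        m[a, b], m[b, a] = 1.0, -1.0
    return m

kappa = [
    two_form([(0, 1), (2, 3)]),                 # kappa_1 = b0^b1 + b2^b3
    two_form([(0, 2)]) - two_form([(1, 3)]),    # kappa_2 = b0^b2 - b1^b3
    two_form([(0, 3), (1, 2)]),                 # kappa_3 = b0^b3 + b1^b2
]
J = [m.T for m in kappa]                        # (J_i)^a_b = (kappa_i)_{ba} since g = delta

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

for i in range(3):
    for j in range(3):
        expected = -(i == j)*np.eye(4) + sum(eps[i, j, k]*J[k] for k in range(3))
        assert np.allclose(J[i] @ J[j], expected)
print("quaternion algebra verified")
```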
2302.14468
SAINE: Scientific Annotation and Inference Engine of Scientific Research
We present SAINE, a Scientific Annotation and Inference ENgine based on a set of standard open-source software, such as Label Studio and MLflow. We show that our annotation engine can benefit the further development of a more accurate classification. Based on our previous work on hierarchical discipline classifications, we demonstrate the application of SAINE to understanding the space of scholarly publications. A user study of our annotation results shows that user input collected with the help of our system can help us better understand the classification process. We believe that our work will help foster greater transparency and a better understanding of scientific research. Our annotation and inference engine can further support downstream meta-science projects. We welcome collaboration and feedback from the scientific community on these projects. The demonstration video can be accessed from https://youtu.be/yToO-G9YQK4. A live demo website is available at https://app.heartex.com/user/signup/?token=e2435a2f97449fa1 upon free registration.
Susie Xi Rao, Yilei Tu, Peter H. Egger
2023-02-28T10:19:57Z
http://arxiv.org/abs/2302.14468v2
# SAINE: Scientific Annotation and Inference Engine of Scientific Research ###### Abstract We present **SAINE**, an **S**cientific **A**nnotation and **I**nference **EN**gine based on a set of standard open-source software, such as Label Studio and MLflow. We show that our annotation engine can benefit the further development of a more accurate classification. Based on our previous work on hierarchical discipline classifications, we demonstrate its application using SAINE in understanding the space for scholarly publications. The user study of our annotation results shows that user input collected with the help of our system can help us better understand the classification process. We believe that our work will help to foster greater transparency and better understand scientific research. Our annotation and inference engine can further support the downstream meta-science projects. We welcome collaboration and feedback from the scientific community on these projects. The demonstration video can be accessed from [https://youtu.be/yTo0-G9YQK4](https://youtu.be/yTo0-G9YQK4). A live demo website is available at [https://app.heartex.com/user/signup/?token=e2435a2f97449fa1](https://app.heartex.com/user/signup/?token=e2435a2f97449fa1) upon free registration. ## 1 Introduction A precise classification of publications across and within disciplines is key not only for a fast and comprehensive search to guide researchers to relevant material but also to identify the novelty of research, the standing and significance of scholars as well as their home institutions, and of the relative growth of fields of work. Machine learning develops into being not only \(a\) but _the_ customary approach to establish such a classification. Clearly, one would expect a search that is geared to identifying a high-quality corpus of keywords to benefit crucially from supervision. Existing classifications of academic output are based on a blend of (supervised) author-chosen and (unsupervised) machine-chosen keyword lists, where the composition of the blend is unknown to the researcher. Prevailing systems of keywords for academic publications are lists based on abstracts in a discipline, field, and subfield distilled from * unsupervised machine learning (from word or phrase frequencies); * supervised learning (mostly from keyword self-reporting by authors); * semi-supervised learning (a mixture of the two; e.g., as done by Microsoft Academic Graph (MAG) described in Sinha et al. (2015); Wang et al. (2019, 2020)). To design an annotation and inference engine that helps us understand the publication space better, **we should cater to the following needs:** (1) a simple user interface with clear annotation instructions; (2) a reproducible pipeline across various disciplines; (3) good support for inference tailored to downstream tasks (e.g., model retraining) in meta-science studies. Among the existing open-source annotation tools, Label Studio (Tkachenko et al., 2020-2022) suits our needs. Note that Gayoso-Cabada et al. (2019) have reviewed extensively on the annotation tools that facilitate classification tasks. The reviewed tools are either not open-sourced or are domain-specific and hence do not suit our purposes. In this system demonstration, we utilize a set of standard open-source software, mainly Label Studio (Tkachenko et al., 2020-2022), MLflow and FastAPI to configure an annotation and inference engine for scientific publication annotations. 
In this demonstration, we illustrate the benefit of using supervised learning based on pre-established keyword lists and abstracts, and how annotators can help us better understanding the importance of _supervised_ learning in establishing a classification of academic publications. This system is built on top of the hitherto largest scale of _multi-class_ hierarchical classification studies across all disciplines in both _single-label_ and _multi-label_ settings (cf. Rao et al. (2023)). There, we have built a supervised hierarchical classification system that associates every publication with at least one and potentially several disciplines, fields, and subfields. With the annotations above, we conduct a small user study with domain experts using our **annotation** engine. We then invoke our **inference** engine to fine-tune the base models in Rao et al. (2023). The comparison between the base and fine-tuned models shows that the proposed annotation and inference system is able to benefit the further development of a more accurate classification. To summarize, the paper presents a scientific annotation and inference engine called SAINE, which is based on open-source software like Label Studio and MLflow. **The main contributions of the paper are:** (1) The demonstration of using SAINE in understanding the space for scholarly publications, particularly in hierarchical discipline classifications. (2) The result of a user study, which shows that user input collected with the help of SAINE can help better understand the classification process. (3) The ability of SAINE to benefit the further development of a more accurate classification, demonstrated through the comparison between the base and fine-tuned models. (4) The potential of SAINE to support downstream meta-science projects and foster greater transparency and understanding of scientific research. Overall, the paper presents the benefits of supervised learning and the importance of having a simple user interface with clear annotation instructions, reproducible pipelines, and good support for inference in scientific publication annotations. The live demo website and demonstration video are also available for those interested in further exploring SAINE. The codebase for development is publically available under this link and collocates with the codebase of Rao et al. (2023). In Figure 1 we illustrate the workflow in SAINE by assigning the roles of "Administrator", "Annotators", "Label Studio", and "MLflow" to each task in the pipeline. The sections are organized as follows. Section 2 introduces the functionality of Label Studio and its fit to our annotation needs, as well as our annotation guidelines for experts. Section 3 specifies the annotation design for the field of _Economics_ and discusses the annotation results. Section 4 discusses the integration of annotation results into the pre-trained base models and fine-tuned ones with MLflow. We then conclude this system demonstration with a discussion of system limitations, ethics, and broader impact statements. ## 2 Annotating Scientific Articles with Label Studio We briefly introduce functionalities of Label Studio and illustrate why Label Studio is a suitable tool to annotate scientific articles. **The goals of annotations in the present context are three-fold: (1) [To judge the appropriateness of an assigned category.]** Experts judge whether a scientific article has a correctly assigned category. 
If a category does not suit the abstract's content, the annotator will pick a new category from a pre-defined list. **(2) [To evaluate keywords.]** Experts evaluate keywords assigned by MAG and mark missing keywords in the abstract. **(3) [To calculate inter-annotator agreement.]** The annotation engine should permit an efficient way of calculating Figure 1: SAINE Workflow and Pipeline. inter-annotator agreement (IAA) scores among annotators. ### Label Studio Label Studio is a powerful and versatile annotation tool that can handle various types of annotation tasks. Here are some of the features of Label Studio that make it a suitable tool for the annotation tasks. * **[Customizable interface.]** Label Studio allows to design a customized annotation interface. One can create a pre-defined list of categories for the experts to choose from and provide them with clear instructions on how to evaluate the assigned category. * **[Multiple annotation types.]** Label Studio supports various types of annotation, including text classification, entity recognition, and sequence labeling. Therefore, different types of annotation can be used to evaluate keywords, mark missing keywords, and judge predicted categories. * **[Collaboration and version control.]** Label Studio enables multiple experts to work on the same project simultaneously, allowing efficient and collaborative annotation. It also includes a version control system that tracks changes to the annotations, facilitating easy comparison and IAA evaluation. * **[Inter-annotator agreement (IAA).]** Label Studio has built-in tools to calculate IAA scores. These tools can help evaluate the consistency and reliability of chosen annotations. * **[Integrations with machine-learning models.]** Label Studio also provides integration with various machine learning (ML) models. Although we do not use the integrated ML functions, Label Studio allows us to export the annotation results in JSON, with which we improve the classification models using the annotated data in the inference engine. Overall, Label Studio offers a powerful and customizable annotation platform that can handle relevant annotation tasks, facilitate efficient collaboration among experts, and efficiently compute IAA. In Figure 2 we demonstrate the administrative panel of the project manager. The project manager uses this panel to assign annotation tasks to each registered annotator and can monitor the annotation progress. The manager can also adjust the assigned annotations based on individual progress. The "Filters" and "Order (Annotation results)" tabs make it easy to inspect tasks by annotation progress (e.g., "Annotators", "Agreement", "Completed", "Total annotations per task"). ### Annotation Guidelines When a publication is annotated, each annotator is provided with the abstract, the keywords offered by MAG, and the assigned category based on the keywords provided by MAG. The categories of a discipline classification (like JEL) are assigned to MAG publications on the basis of the keywords. Therefore, MAG's keywords help us identify potential misalignments and better understand the classifiers we built. The annotation samples provided in the annotation engine are stratified sampled (ratio: 2e-5) across all classes of the training set introduced by Rao et al. (2023) for one discipline. Each annotator is required to judge whether a category is correctly assigned to an abstract. If not, the annotator is required to select the suitable one from a predefined list. 
The annotator is also required to evaluate MAG-generated keywords and make corrections (by removing unqualified keywords/marking suitable keywords from the abstract). Figure 3 shows two annotations of one publication. Label Studio makes it easy to navigate among the annotations generated by various annotators on an identical instance. Note that, as we discussed in Rao et al. (2023), our multi-class hierarchical classification system is modularized in both _single-label_ and _multi-label_ settings. The current annotation engine is equipped with both annotation functionalities. For the sake of system demonstration and user study in Section 3, we discuss the _single-label_ setting. More details on the multi-label setting are provided in Appendix B. ## 3 Implementation: User Study in _Economics_ We now use _Economics_ as a discipline to show how we utilize the annotation engine to collect expert annotations. ### Annotation Design We invited three economist experts from the Chair of Applied Economics at ETH Zurich to join the annotation project by accessing this link. The an notation guidelines are stated at here. Of the three experts, one has annotated all provided instances (Annotator 1), one has annotated 10% of the instances (Annotator 2), and one has annotated a sub Figure 3: Publication Annotations by Multiple Annotators. Figure 2: Administrative Panel of Annotation Tasks in Label Studio. set of instances with an ex-ante denomination in Urban and Spatial Economics only (Annotator 3). Each annotator was provided a user panel shown in Figure 5 in Appendix A. ### Annotation Results in Label Studio Altogether, 788 instances of abstracts and keywords from MAG had to be annotated for a _single-label_ classification. In Economics, a standardized field and subfield system with keywords exists, and it is called the Journal of Economic Literature (JEL) classification system. This system is known to all academic economists and serves as a guiding principle to associate an article or a topic with a specific subfield in _Economics_. The subfields in the JEL categories are associated with keywords. We report the annotation time and IAA scores that are automatically calculated by Label Studio (see the official documentation for the steps). The final task agreement score is calculated by averaging all IAA scores for each annotation pair. Table 1 illustrates the IAA scores amongst three experts. Annotators 1, 2, and 3 have annotated 788, 181, and 99 instances, respectively. The annotations overlap between the pairs is 4 or 7% of the overlapping instances (Annotators 2 and 3), 99 or 100% of the overlapping instances (Annotators 1 and 3), and 181 or 100% of the overlapping instances (Annotators 1 and 2). The median annotation time of Annotators 1-3 per instance was 17.7s, 29.8s, and 40.9s, respectively. The annotators were entitled to disapprove of the assigned category based on MAG upon suggesting an alternative category. Marking and filling in missing keywords is time consuming, reading the MAG-generated keywords can help to some extent the annotation speed. However, all annotators reported that the MAG-provided keywords could be a source of error for wrongly assigned categories. As discussed among the annotators after they underwent the annotations separately, the category they found the best was for _Mathematical & Quantitative Methods_, and it was worst for _Macroeconomics_ and _Public Economics_. 
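For readers who want to reproduce a Table 1-style agreement matrix outside of Label Studio, the sketch below recomputes simple pairwise percent agreement on overlapping tasks from exported annotations. It is only an illustration: Label Studio's built-in agreement metric is more elaborate, and the (task_id, annotator, label) layout used here is a hypothetical simplification of the exported JSON.

```python
# Illustrative recomputation of a pairwise agreement matrix from exported annotations.
from itertools import combinations
from collections import defaultdict

def agreement_matrix(annotations):
    """annotations: iterable of (task_id, annotator, label) tuples."""
    by_annotator = defaultdict(dict)
    for task_id, annotator, label in annotations:
        by_annotator[annotator][task_id] = label

    scores = {}
    for a, b in combinations(sorted(by_annotator), 2):
        shared = set(by_annotator[a]) & set(by_annotator[b])   # overlapping tasks
        if shared:
            agree = sum(by_annotator[a][t] == by_annotator[b][t] for t in shared)
            scores[(a, b)] = agree / len(shared)
    return scores

# toy usage
demo = [(1, "A1", "C1"), (1, "A2", "C1"), (2, "A1", "D3"), (2, "A2", "E2")]
print(agreement_matrix(demo))   # {('A1', 'A2'): 0.5}
```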
## 4 Inference Engine: Incorporating Annotation Results into the Existing Classification Pipeline We illustrate the pipeline using the discipline _Economics_ as discussed in Section 3. ### Post-processing of Annotation Results We downloaded the annotation results in JSON (here) of all experts and post-processed them following the protocols below, before feeding them into the pre-trained base models of various neural networks as discussed in Rao et al. (2023). In total, we obtained 1,068 partly overlapping annotations (incl. "Skip", "(Dis)agree", keywords, added categories). The basic statistics on the number of instances of "Agree", "Disagree" and "Not ECON" are 498, 297, and 268, respectively. Here is the post-processing procedure. (1) We removed abstracts that were inadequately classified as belonging in Economics from the sample (206 of 788 instances). Additionally, we deleted 5 instances due to bad annotations. For example, no one labeled this sample ("Skip"), or an annotator chose "Disagree" but did not choose a new category. (2) For each remaining instance, we counted the percentages of _"Agree"_ and _"Disagree"_ verdicts relative to the label generated on the basis of MAG keywords. If strictly more experts agreed than disagreed with MAG, the original label was preserved (for 351 of the 577 valid instances). Otherwise, we took the label suggested by the majority of annotating experts (for 226 of the 577 valid instances). (3) In the case of ties, we randomly pick a label from the suggested annotations (for 22 of the 226 category-renewed instances). Following this protocol, we obtained 561 instances with expert-curated labels to fine-tune the base models. ### Fine-tuning Pre-trained Base Models We used the 561 labels generated by the experts as a fine-tuning set on the base models reported in Rao et al. (2023) on the discipline of _Economics_ (model-1). We compared the inference performances of the base model (**Model** in Table 2) with those of the fine-tuned model (**Model_FT** in Table 2) on various neural network architectures, Deep Neural Network (DNN), Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), and Transformers. To benchmark the differences in performances between **Model** and **Model_FT**, we created a small test set from the Social Science Re \begin{table} \begin{tabular}{l c c c} \hline \hline & **Annotator 1** & **Annotator 2** & **Annotator 3** \\ \hline **Annotator 1** & & 55\% & 55\% \\ **Annotator 2** & 55\% & & 27\% \\ **Annotator 3** & 55\% & 27\% & \\ \hline \hline \end{tabular} \end{table} Table 1: Annotator Agreement Matrix Among Three Expert Annotators. search Network (SSRN), which is a website that provides a platform for researchers to share and distribute their research papers and other scholarly work in the social sciences and other related fields. We decided to use the _Economics_ SSRN publications because they come with human-currated JEL categories, keywords, and abstracts. Concretely, we built a crawler to download the publication space in Economics publications in SSRN, where all contained research articles in Economics are multi-category-indexed. This means, each publication there is indexed by at least one JEL code and it allows multiple JEL codes per publication. We could easily validate with our multi-label engine in principal, but we focus on _single-label_ classifications for this user study. 
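The label-curation protocol of Section 4.1 above can be summarized in a few lines of Python. The sketch below is a simplified illustration under our own assumptions about the exported record layout (field names such as `mag_label`, `votes`, and `verdict` are hypothetical); in particular, the rule used to drop non-Economics instances is our reading of step (1).

```python
# Simplified sketch of the label-curation protocol described in Section 4.1.
import random
from collections import Counter

def curate(instance):
    """instance: dict with 'mag_label' and a list of expert 'votes'
    ({'verdict': 'agree'|'disagree'|'not_econ'|'skip', 'new_label': str|None})."""
    votes = [v for v in instance["votes"] if v["verdict"] != "skip"]
    if not votes or sum(v["verdict"] == "not_econ" for v in votes) > len(votes) / 2:
        return None                                   # removed from the sample
    agree = sum(v["verdict"] == "agree" for v in votes)
    disagree = sum(v["verdict"] == "disagree" for v in votes)
    if agree > disagree:
        return instance["mag_label"]                  # keep the MAG-based label
    suggested = Counter(v["new_label"] for v in votes if v["new_label"])
    if not suggested:
        return None                                   # bad annotation, discard
    top = max(suggested.values())
    return random.choice([lab for lab, c in suggested.items() if c == top])
```

Running this over all exported instances and dropping the `None` results would reproduce, in spirit, the 561-instance fine-tuning set described above.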
To create this test set, we randomly sampled 10 instances from each of 19 JEL field classes, which resulted in a sample of 190 test instances. In the implementation of hierarchical classifications reported in Rao et al. (2023), we have used MLflow to track and manage ML experiments, with which we have saved all pre-trained base models. Now, based on them, we could seamlessly integrate model fine-tuning and inference with various models. The inference engine API has been implemented using FastAPI with help from Pydantic. We illustrate the batch inference API in Figure 4, with which users can feed the test set into various models (base or fine-tuned) and obtain predictions. In Appendix C we provide more details about the inference engine. ### Benefits of Expert Annotations We present the results of user studies in Table 2. Specifically, we inspect two types of statistics, the correct predictions of the base and fine-tuned models in Columns (1)-(2), and the identical predictions of the base and fine-tuned models in Column (4). Since each publication is multi-JEL-category-indexed, we count the prediction as "correct" if the indices include the predicted category. Column (1) is the base model trained with the model type specified in Column (6). Column (2) presents the results of the fine-tuned (supervised) model. Column (4) shows that out of a total of 190 test instances, identical predictions were generated by the base and fine-tuned models. We see that fine-tuning with \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Model** & **Model\_FT** & \(\Delta\) **(Model\_FT - Model)** & **Model = Model\_FT** & **Total** & **Model Type** \\ (1) & (2) & (3) & (4) & (5) & (6) \\ \hline 54 & 58 & 4 & 108 & & CNN \\ 59 & 69 & 10 & 148 & & **RNN** \\ 39 & 39 & 0 & 190 & & DNN \\ 31 & 33 & 2 & 37 & & Transformer \\ \hline \hline \end{tabular} \end{table} Table 2: Results of the User Study. _FT_: Fine-tuned. Figure 4: Inference Engine with _MLflow_ Integration. API: Batch Inference by Model. user-generated results has brought benefits to all models except DNN because DNN predicts for all test examples only one class. RNN is the best performer when considering the benefits resulting from expert supervision, because the \(\Delta\) in correct predictions has increased the most according to Column (3). Interestingly, fine-tuning a pre-trained Transformer model may not always result in a significant improvement in performance, as we see from a comparison with other base models. However, the current fine-tuning set is too small to draw firm conclusions in this regard. ## 5 Conclusions In this system demonstration, we utilize a set of standard open-source software (mainly Label Studio (Tkachenko et al., 2020-2022), MLflow and FastAPI) to configure an annotation and inference engine for scientific publications (SAINE). This system is built on top of hitherto largest multi-class hierarchical classification study across all disciplines in both single-label and multi-label settings (cf. Rao et al. (2023)). We illustrate the functionality of the system with a user study in _Economics_ and show that the expert inputs into our system can help better understanding the classification process, which benefits the development of a stronger model in the next iteration. We plan to open-source the data and codebase and invite collaborative work in the direction of meta-science. 
### Limitations Label Studio has some limitations in incorporating existing ML pipelines into the annotation engine, especially, when using customary code. We will discuss this with the developers at Label Studio and see how we can bring the annotation engine and the ML pipeline closer to each other. In terms of annotator selection, at the moment we have to select the experts for each discipline. However, we would wish to rank the annotators by their field expertise and weigh their annotations, accordingly. One future idea is to automatically compute an associative score between a third-party academic product such as Google Scholar and the publication space. The project PeopleMap provides interesting techniques to generate researcher profiles based on their research interests and publications taking as input the Google Scholar profile URLs of researchers. At this stage, Label Studio developers suggest that we add a self-declarative questionnaire to each annotator, which can be used as meta-data on annotators when quantifying the annotation confidence score. Due to time constraints, we have not yet added this questionnaire, as the experts in the current user study are selected by our project PI and have strong expertise in _Economics_. In terms of annotation efforts, given enough time and annotators, we can better benchmark annotation quality in terms of JEL categories and annotator expertise. Annotator 1 has compared the user experience and performance of ChatGPT in conducting annotations on a random sample of 100 instances. It turns out that ChatGPT captures high-quality keywords and achieves more than 90% of the IAA score with that annotator. Considering our annotators' feedback that it is time-consuming to extract keywords for humans, it makes sense to use ChatGPT as an annotation-assisting engine for keyword extraction and label assignment. ### Ethics Statement We acknowledge that our system may involve processing potentially sensitive data, and we take data privacy and ethical considerations very seriously. In accordance with ethical guidelines of "ACM Code of Ethics", we have taken steps to protect the privacy of experts who may be affected by our research. We have also made efforts to ensure that our system and its annotations are unbiased and fair. We believe that our work will help foster greater transparency and understanding in scientific research, and we welcome collaboration and feedback from the scientific community to further advance the ethical and responsible use of AI in research. ### Broader Impact Statement Our annotation engine and inference engine can further support downstream meta-science projects. We list a few interesting questions we can answer using our pipeline. 1. **[For students.]** Which fields of research are more impactful/growing? 2. **[For policy makers.]** How to design education for cross-/inter-/pluridiscilinary studies? 3. **[For department and tenure committees.]** How to benchmark output and impact levels of an untenured scholar? 4. **[For funding institutions.]** How to measure/quantify inter-/pluri-disciplinary standards for institutions such as SNIS and SNSF which emphasize the interdisciplinarity of research? 5. **[For librarians.]** How can one effectively organize bibliographical resources across disciplines and departments in one university? We plan to add other disciplines covered by Rao et al. (2023) to our annotation engine. 
We would also like to incorporate subjective (self-declaration) and objective measurements (e.g., Google Scholar profile integration) into the annotation pipeline. This may help develop confidence scores for individual annotations and annotators. ## Acknowledgements We thank the colleagues at DS3Lab for providing valuable feedback when prototyping the system design. Without the strong support of our expert group at the Chair of Applied Economics, the user study would not have been possible. We thank Ms. Piriyakorn Piriyatamwong for her technical support of the project. We appreciate that Label Studio has offered us an academic license for the project, which allows us to invite more experts to contribute in the long run. The user agreement and terms of the academic license are listed here.
2309.11662
Investigating the Correlation Between Presence and Reaction Time in Mixed Reality
Measuring presence is critical to improving user involvement and performance in Mixed Reality (MR). \emph{Presence}, a crucial aspect of MR, is traditionally gauged using subjective questionnaires, leading to a lack of time-varying responses and susceptibility to user bias. Inspired by the existing literature on the relationship between presence and human performance, the proposed methodology systematically measures a user's reaction time to a visual stimulus as they interact within a manipulated MR environment. We explore the user reaction time as a quantity that can be easily measured using the systemic tools available in modern MR devices. We conducted an exploratory study (N=40) with two experiments designed to alter the users' sense of presence by manipulating \emph{place illusion} and \emph{plausibility illusion}. We found a significant correlation between presence scores and reaction times with a correlation coefficient -0.65, suggesting that users with a higher sense of presence responded more swiftly to stimuli. We develop a model that estimates a user's presence level using the reaction time values with high accuracy of up to 80\%. While our study suggests that reaction time can be used as a measure of presence, further investigation is needed to improve the accuracy of the model.
Yasra Chandio, Noman Bashir, Victoria Interrante, Fatima M. Anwar
2023-09-20T22:02:38Z
http://arxiv.org/abs/2309.11662v1
# Investigating the Correlation Between Presence and Reaction Time in Mixed Reality ###### Abstract Measuring presence is critical to improving user involvement and performance in Mixed Reality (MR). _Presence_, a crucial aspect of MR, is traditionally gauged using subjective questionnaires, leading to a lack of time-varying responses and susceptibility to user bias. Inspired by the existing literature on the relationship between presence and human performance, the proposed methodology systematically measures a user's reaction time to a visual stimulus as they interact within a manipulated MR environment. We explore the user reaction time as a quantity that can be easily measured using the systemic tools available in modern MR devices. We conducted an exploratory study (N-40) with two experiments designed to alter the users' sense of presence by manipulating place _illusion_ and _plausibility illusion_. We found a significant correlation between presence scores and reaction times with a correlation coefficient -0.65, suggesting that users with a higher sense of presence responded more swiftly to stimuli. We develop a model that estimates a user's presence level using the reaction time values with high accuracy of up to 80%. While our study suggests that reaction time can be used as a measure of presence, further investigation is needed to improve the accuracy of the model. Mixed reality, Presence ## 1 Introduction Mixed Reality (MR) is gaining importance in science, education, training, and entertainment, offering new ways of interaction and engagement with the real and virtual worlds. The technological advancements in MR tools have facilitated an enhanced sense of _presence_, allowing users to behave within an MR environment as they would in the real world. _Presence_ is typically described as the subjective experience being in a simulated place or environment and the user's readiness to respond to virtually generated sensory data as if they were real [19, 21, 63]. This includes interacting naturally and appropriately with virtually generated sensory data. Just as in the real world, an individual should be able to bend down, grab an object on the floor in a virtual environment, feel its weight, and lift it if desired. This is achieved through the sense of one's body movement and position, which matches the sensory data presented in the virtual environment. High presence does not necessarily require high fidelity to physical reality but rather that individuals can behave as if the sensory data they are experiencing is real. This approach to measuring presence allows for observing and evaluating an individual's behavior in real and virtual environments. A high _presence_ is desired in any simulated virtual environment, as it allows the user to engage in an immersive, realistic, and involved experience. Many studies investigate the notion of _presence_ and describe the factors contributing to the sense of _presence_. According to Witmer [92], Slater & Steck [75] and others [19, 63, 21], there are two main aspects of _presence: place illusion_ and _plausibility illusion_. _Place illusion_ refers to the _sense of being there_. In an MR environment, this corresponds to how the virtual content appears indistinguishable from the real world. _Plausibility illusion_ refers to the sensation that the observed events in a virtual environment occur. Users will feel involved in the environment when both _place illusion_ and _plausibility illusion_ occur. 
This involvement leads to users responding realistically to the environment, resulting in a greater sense of engagement in an MR environment. Immersion and participation are necessary to experience _presence_ [92]. A prerequisite to improving the _presence_ of a user is the ability to measure and quantify _presence_. While conventional measures of presence have been defined for virtual environments that surround and isolate a user from the real world [84], we are measuring presence as the subjective experience that a particular object exists in a user's environment, even when that object does not [83]. Due to the subjective nature of _presence_, the most popular method to measure _presence_ is the use of subjective questionnaires [28, 64, 65, 92]. Questionnaires ask users to self-report their sense of _presence_ by answering questions that attempt to assess _presence_, usually after the user has left the virtual environment. The subjective questionnaires can be quickly administered, graded, and interpreted without affecting the user experience. However, questionnaires cannot measure the time-varying qualities of presence and can produce unstable, inconsistent, and irreproducible responses due to the prior experience of the participants [22]. The well-known shortcomings of presence questionnaires have led researchers to explore alternative approaches to assessing presence, including behavioral responses such as postural response [24], hand and eye response [89], and startle response [91], which are produced automatically, without conscious thought, thus avoiding user bias. However, the assessment of behavioral responses is susceptible to experimenter bias and is highly sensitive to environmental factors and content [37, 76]. Various physiological responses can be measured to assess _presence_, such as a change in heart rate [27, 42], skin conductance [49], and body temperature [36]. However, physiological measures can also be noisy and unreliable, especially under non-stationary conditions, and may not capture differences in presence in situations of low emotional valence [49]. Given the state of the art, there is a need for an approach to quantify _presence_ that is objective, quantitative, not consciously affected by the participant and/or experimenter, and could be used at runtime without interfering with the virtual experience. It should take advantage of existing interactions (and underlying quantities) in the virtual scene and measure _presence_ through these interactions rather than making additional external interventions. One such underlying quantity is _reaction time_. _Reaction time_, or response time, refers to the time between when humans perceive something and when they respond to it. _Reaction time_ is dictated by the cognitive ability to detect, process, and respond to a stimulus [18, 90], and can be easily measured using the systemic tools available in modern MR devices such as the Microsoft HoloLens 2 [4]. Our work investigates a fundamental question in MR: would an individual experiencing more _presence_ systematically show faster _reaction times_? If the answer is yes, we could use a systemic metric such as _reaction time_ to quantify _presence_ in a non-intrusive, objective, and unbiased manner. There is a large body of work investigating the relationship between _presence_ and human performance [16, 39, 47, 48, 49, 53]. Natalia et al. 
showed a negative correlation between task completion time and _presence_ when the sense of _presence_ was altered by multisensory feedback [16]. Matteo et al. showed a negative correlation between performance and _presence_ when the sense of _presence_ was changed by varying the perceptual load [48]. Maneuverrier et al. showed that presence promoted spatial cognition performance and that the presence-performance relationship was not mediated by other human factors [47]. Furthermore, human performance is often used as an argument for the good predictive validity of questionnaires [28]. To understand the relationship between _presence_ and _reaction time_, we conducted a study in which we varied the _presence_ of users by manipulating _place illusion_ and _plausibility illusion_ while they were interacting with an MR environment. We designed two sets of experiments. In one set, we only manipulated the appearance of the virtual object, and in the other set, we manipulated a non-task-relevant behavior of the virtual object. All other aspects of the experiments, such as the interaction mechanism, frequency of interactions, and physical environments, were kept the same. We systematically measured the _reaction time_ of users in response to a visual stimulus. Our post-experience questionnaires show a significant change in presence in each experiment between the manipulation conditions. Similarly, we observed a significant change in user _reaction time_ as the sense of _presence_ changed. Our analysis shows a correlation between _presence_ and _reaction time_. In our attempt to understand the relationship between _presence_ and _reaction time_, this work makes the following contributions. **Contribution 1:** We propose the use of the _reaction time_ of a user as a measure of _presence_. We also develop a non-intrusive, systemic approach to measuring the user _reaction time_ that relies on existing interactions in MR environments. **Contribution 2:** We devise experiments that alter the sense of _presence_ of a user by manipulating _place illusion_ and _plausibility illusion_. In designing the experiments, we control for other factors that are known to impact user performance. While we use the experiments to demonstrate a change in presence, we also demonstrate, as a byproduct, that presence questionnaires typically used in fully virtual environments can also be used in MR environments. **Contribution 3:** We conduct an exploratory lab study (\(N=40\)) that demonstrates a negative correlation between the sense of _presence_ and the user _reaction time_ when responding to a visual stimulus. **Contribution 4:** We develop a model that estimates a user's presence level using the reaction time values as input. Our evaluations demonstrate that the model has high accuracy (up to 80%), which can be further improved with data from a larger number of users. ## 2 Background and Related Work In this section, we define the relevant terminology, discuss the existing work on the concept of presence in MR, and present existing methods for quantifying presence. ### Terminology Different forms of _reality_ depend on how much of the physical world is part of the user's experience and how the user interacts with the virtual objects in the scene, as shown in Figure 1. Defining MR remains challenging, with no universally agreed-upon comprehensive definition [80]. Virtual Reality (VR) immerses users in a wholly digital realm, while Augmented Reality (AR) superimposes digital elements onto our real world. 
Augmented Virtuality (AV) is an immersive experience, complete or partial, with added elements of 'reality' such as video or texture mapping. MR is an umbrella term that encompasses both AR and AV. Our MR concept leans towards AR on Milgram's reality-virtuality spectrum, where users interact primarily with virtual objects while being able to see the real world around them. MR represents a spatial alignment between the real and virtual worlds, allowing users to interact with and manipulate both real and virtual environments. We will use MR as a blanket term throughout the paper [52]. ### Factors Affecting Presence _Presence_ is a phenomenon of awareness based on the interaction between sensory stimulation, environmental factors that encourage involvement and allow immersion, and internal tendencies to become involved and interact with virtual objects [92]. Sheridan [72] laid the foundation for determining the underlying presence factors, such as sensory information, sensor control, and motor control. Slater and Wilbur [73, 78] expanded on Sheridan's work to determine the major factors affecting user presence. According to them, two main factors contribute the most to user presence. (1) _Place illusion_ refers to the appearance of a virtual environment (or virtual object in the case of AR/MR). It can be affected by the realism of the virtual content, the consistency of the view between the headset and the direct view from the users' eyes (including displacement or latency issues), the lack of haptic feedback, and the awareness of the headset. (2) _Plausibility illusion_ refers to the behavior of a virtual environment (or virtual object in the case of AR/MR). It can be affected by scene elements not obeying the laws of physics, cause-effect relationships not coupled as expected, actions that do not have the expected outcome, and events in the virtual environment not conforming to familiar expectations. It is argued that when both _place illusion_ and _plausibility illusion_ occur, users will feel involved and respond realistically to the environment, which will lead users to experience greater engagement in a virtual environment [92]. ### Presence Measures Previous research highlights the importance of presence as an outcome of virtual environments (AR/VR/MR) [19, 63, 72, 92]. The most commonly accepted method of evaluating _presence_ is the self-report questionnaire [87]. Typically, presence questionnaires are used after participants engage in an AR/VR/MR environment, making them post-experience questionnaires. Over time, previous research has developed, refined, and validated questionnaires for measuring presence. Bystrom et al. [14] proposed an integrated theoretical framework for studying presence that includes aspects related to task performance. In their work, they used two questionnaires to measure interaction fidelity. The Witmer & Singer Presence Questionnaire (PQ) is one of the most widely used [92]; it assesses involvement/control, naturalness, and interface quality. Other popular post-experience questionnaires include the Igroup Presence Questionnaire (IPQ) [64, 65] and the Slater-Usoh-Steed Questionnaire (SUS) [75]. The PQ questions are based on the following factors: the ability to control the environment and the "naturalness" of control over the environment, the coherence and consistency of information from different senses, and the distractions a participant may experience in a virtual environment. 
They also consider environment realism and meaningfulness, as well as the sense of disorientation when returning to the real world. On the other hand, the SUS and IPQ questions are based on three factors: the sense of physically being in the virtual environment, the extent to which the virtual environment feels real, and the extent to which the participant feels involved. There are additional questionnaires that measure various aspects of realism relating to the virtual scene, such as the ITC-Sense of Presence Inventory [44], the Kim and Biocca questionnaire [40], the Object presence questionnaire [8], the reality judgment and presence questionnaire [8], the Swedish viewer-user presence questionnaire (SVUP) [43], and others [10, 15, 56].

Figure 1: _The Reality-Virtuality Continuum [52]. Virtual objects are colored green, and real-world objects are colored blue._

However, these questionnaires do not aim to measure the time-varying qualities of presence. They can produce unstable, inconsistent, and unreproducible responses, and are susceptible to user bias. To address the issue of missing temporal information, Schwind et al. [66] used integrated questionnaires to measure presence. Integrating questionnaires directly into the virtual experience has been explored to assess the virtual experience as the user is going through it [62, 66, 68]. However, the problem of defining a clear baseline remains. Other measures include continuous assessments [34, 61] and psychophysical measures like cross-modality matching, free-modulus magnitude estimation, and paired comparisons. Subjective qualitative methods include autoconfrontation [60], focus group exploration [23], interaction analysis [79], free-format self-reports [86], and the repertory grid technique [81]. There are also several subjective corroborative measures to evaluate presence indirectly, such as Break In Presence (BIP) [13], duration estimations by users [33], attention awareness [17], and the simulator sickness questionnaire [38]. Most of these methods ask participants to complete questionnaires that often contribute to the phenomenon of break-in presence [13] and are prone to participants' bias. Objective measures are mostly captured by evaluating the behavioral and physiological responses of the users and are often used as corroborative measures. Popular physiological measures are cardiovascular measures [42], skin measures [49], ocular measures [41], and facial electromyography [32]. Presence is also known to be measured through neural correlates like Electroencephalogram (EEG) and Functional Magnetic Resonance Imaging (fMRI) [31]. Physiological measures can reduce user bias but may also be prone to inaccuracy and unreliability [49]. Some commonly used behavioral measures are based on assessing facial expressions [76], postural [24] and startle [91] responses, reflex and social responses, and pointing at conflicting cues [37]. ### Presence and Performance Many studies have discussed the association of presence with task performance in a virtual environment [10, 11, 12, 13, 45, 74, 75]; we discuss only a few of them here. Slater et al. [74] conducted experiments to assess the influence of presence on performance while the participants learned to play three-dimensional chess. They noted that presence refers to the behavioral and psychological responses of people. Similarly, Barfield et al. [10] suggested that task performance measures can be used as objective corroborative indicators of presence. 
A few such methods are task completion time and error rate [11], the number of actions [74], and secondary task performance [35]. Though it is generally assumed that higher levels of presence are associated with better task performance [12], the exact causal link between presence and task performance is unclear. Slater et al. [74] explored the relationship between presence and performance. While keeping other factors such as relevant background knowledge and users' ability the same, the results suggested that increasing presence by increasing the richness of the virtual environment improved task performance. The study also found that reported presence was higher in the egocentric condition, but a causal relationship between presence and task performance was not established [78]. It is also noted that motor behavior is strongly influenced by perceptual uncertainty and the expected consequences of actions [26], as well as by user characteristics, such as ability and motivation, that influence task performance [30]. IJsselsteijn et al. [35] noted that it is reasonable to assume that several characteristics of a virtual environment will similarly influence presence and task performance. They further expanded that performance on a secondary task can serve as a measure of the amount of effort and attention allocated to the primary task. The more effort is dedicated to the primary task, the more performance on the secondary task will decrease. A similar argument can be made in the case of presence: if more attention is allocated to the mediated virtual environment, performance on a secondary task will decrease. Szczurowski and Smith argued that it is reasonable to assume that the nature of presence is subconscious [85]. They argue that if presence exists outside the subjective feeling domain, it is unlikely to be a conscious process. No one needs to remind themselves to stay present in the real world. It is also unlikely that it is possible to force yourself into feeling present in a virtual environment. IJsselsteijn, Szczurowski, Smith, and others [9, 85, 90] concluded that reaction time or error rate could be used as task performance measures for presence evaluation. Therefore, we make a case for measuring subjective feelings of presence with an objective, reproducible approach that produces a stable response without interference in the virtual scene. Despite being aware of the problems associated with a subjective measurement method, we decided to use questionnaires as the baseline for our current study to test our hypotheses about the relations between different constituents of _presence_ and users' _reaction time_. ### Presence in Less-immersive AR/MR Environments Presence governs aspects of a user's autonomic responses and behavior in a virtual environment, whereas immersion refers to a quantifiable description of a technology [77]. Wilbur and Slater argue that the degree of immersion can be objectively assessed as a characteristic of a technology [78], with dimensions such as the extent to which a display system can deliver an inclusive, extensive, surrounding, and vivid illusion of a virtual environment to a user [82]. AR/MR elicits a different sense of presence: "It is here" presence [46]. Although AR, VR, and MR are very different from a technical perspective, a common feature they share is that virtual objects exist in a curated environment: real (in the case of AR and MR) or virtual (in the case of VR). 
Therefore, the common approach to measuring presence in various virtual environments, from the least immersive environment (AR) to the fully immersive environment (VR), questions whether one has a sense of being in or interacting with the virtual environment. Although there have been attempts to develop presence methods [25, 57, 84] and measurement tools [59, 83] exclusively for AR/MR, these tools only measure factors that may influence presence, rather than directly measuring the subjective sense of presence itself [88]. While conventionally, presence has been defined for virtual environments that surround and isolate a user from the real world [84], Slater et al. [87] conducted a study on VR questionnaires (PQ and SUS) and concluded that the questionnaires developed for VR can still be useful when all users experience the same type of environment, even if the environment is not fully immersive (AR/MR). They also concluded that the utility of questionnaires might be doubtful for comparing experiences across environments - such as immersive virtual compared to real, or desktop compared to immersive virtual, or a real environment with virtual objects compared to a fully virtual environment. Presence questionnaires are often utilized in research to explore the subjective experience of presence rather than the link between perceived presence and aspects of technology; therefore, they can be employed anywhere on the virtuality continuum in technological or real-world contexts. To this end, in this study, we refer to _presence_ as the subjective experience that a particular object exists in a user's environment, even when that object does not [83]. This definition is more appropriate for assessing non-immersive displays such as AR/MR headsets [57]. We use this definition of _presence_ for the rest of the paper. ### Taxonomy of a Virtual Scene As described by Wilbur and Slater [78], the place illusion (appearance realism) and the plausibility illusion (behavioral realism) are the main aspects of any virtual experience in any alternative reality medium. _Virtual scene_ represents the semantics of the virtual environment in three dimensions (3D) placed within the real environment. _Event_ refers to something that happens in the computer system. For instance, adding or removing a virtual object from a virtual scene is considered an event. _Task_ refers to an observable activity with a start and an endpoint. In MR, tasks will be aligned with the start or end of an event, depending on the semantics of the virtual scene. _Interaction_ is defined as performing a physical action to complete the task in a virtual scene. _Cue_ is defined as the signal (visual or auditory) that is sent to the participant to initiate a task. _Feedback_ is the visual or auditory confirmation sent to the participant that the task is completed. To create a virtual scene with context, immersion, and interaction, we need to craft our experiments so that participants feel engaged in the virtual world. However, the scene should neither be so complex that a participant's cognitive load is consumed in understanding the scene nor so simple that the participant feels disengaged [35, 77]. For example, we want neither a scene as heavy as a Warcraft-style game nor a simple box with no semantic value to the participant. We need to strike a balance between a semantically overly complex and an overly simple virtual scene. The same applies to events, tasks, cues, interactions, and feedback from the virtual scene. 
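To make the taxonomy concrete, the sketch below shows one possible way to represent a scene and its trials (cue, interaction, feedback) in code. This is purely illustrative and not the authors' implementation; all type and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Trial:
    """One task iteration: cue -> interaction (air tap) -> feedback."""
    cue_time_ms: float                    # onset of the cue shown to the participant
    interaction_time_ms: Optional[float]  # air tap timestamp; None if no tap was registered
    feedback_time_ms: Optional[float]     # when the confirmation was shown to the participant

    @property
    def reaction_time_ms(self) -> Optional[float]:
        # Reaction time is only defined when an interaction was registered.
        if self.interaction_time_ms is None:
            return None
        return self.interaction_time_ms - self.cue_time_ms


@dataclass
class VirtualScene:
    """Semantics of the virtual content placed within the real environment."""
    objects: List[str]                    # e.g., ["banana", "carrot"] or ["coffee mug"]
    appearance: str                       # "realistic" or "abstract" (place illusion)
    behavior: str                         # "plausible" or "implausible" (plausibility illusion)
    trials: List[Trial] = field(default_factory=list)
```

Keeping the cue and interaction timestamps per trial also anticipates the reaction-time definition introduced later in DC4.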
## 3 Approach To validate _reaction time_ as a measure of _presence_, we investigate the _correlation between presence and reaction time_ in MR. We reference Insko et al.'s [61] criteria for a useful measure: _sensitivity_ (to detect different levels of 'presence'), _reliability_ (providing repeatable results), _validity_ (correlating with existing 'presence' measures), and _objectivity_ (free from participant's and experimenter's bias). Accordingly, we navigate the following design challenges (DCs): **DC1:** How can we induce different feelings of _presence_? (sensitivity) **DC2:** How do we _minimize confounding variables_ while varying feelings of _presence_? (reliability) **DC3:** How do we _establish a baseline_ measure of presence and what should that baseline be? (validity) **DC4:** What user _interaction mechanism_ should we use to assess reaction times? (objectivity) ### _DC1 - Varying presence_ We need an empirical setup that can measure presence in real time while depicting a practical scenario for measuring varying feelings of presence. But first, we need to understand the main aspects of presence and what dimensions could be measured in those distinct but overlapping aspects. We describe the two main aspects (place illusion and plausibility illusion) of presence in §2. Since we are using Mixed Reality (MR) as our experiment medium, place illusion and plausibility illusion need to be refined. In MR, measuring the place illusion could mean to what extent the virtual object appears indistinguishable from reality. The sense of plausibility can be described when users select a dominant space as the reference frame. Then virtual objects in real space or real objects in a virtual space would be perceived as plausible if the object behaves coherently with the dominantly perceived space, as noted for place illusion. For example, plausibility would be lessened if gravitational forces were applied horizontally rather than vertically. In summary, place illusion refers to the elements related to the _appearance_ of the environment. In contrast, plausibility illusion refers to the elements related to the _behavior_ of the objects in the environment. Therefore, we suggest that to vary the feelings of presence, we could manipulate the _appearance_ and the _behavior_ of the object in the scene. As Slater describes, presence is affected by both realism and plausibility. To establish the relationship between response time and presence, irrespective of why the presence changes, we altered realism and plausibility to induce various levels of presence. Other factors can also impact presence, and we plan to explore their effect in the future. ### _DC2 - Controlling confounding variables_ Various options exist for manipulating objects' appearance and behavior in the virtual environment. We must pick scenes and manipulations carefully to satisfy the reliability criterion. We have identified two constraints that will help limit the introduction of external variables and maintain symmetry across experiments and users. * **Constraint 1:** Manipulations should not affect (increase or decrease) the overall complexity of the scene, including tasks, events, and interactions. (**simplicity** in scene) * **Constraint 2:** Manipulations should also be free from additional confounding variables. Confounding variables are extra variables that affect the actual relationship between the variables under study [54]. In our study, the output variables are _presence_ and _reaction time_. 
(**symmetry** in the scene and users) To satisfy the simplicity constraint, we avoid unnecessary details in the scene (the features mentioned above) and enforce consistency in the non-manipulated conditions of our experiment. The selection of simple scenes is a conscious design choice to control the effect of confounding variables. In a complex scene, it can be hard to isolate the effect of various factors on presence. These experimental conditions are the placement and duration of the visual stimulus, the time and appearance of the cue, the time and appearance of the feedback, the number of tasks, the duration of the experiment, and the type of interaction. We also suggest that to keep the scene simple, we should pick a scene familiar to most college students (the intended participant pool) and have only one virtual object in the scene at a time. However, we want to keep the participant engaged throughout the experiment, and we need a scene with some familiar semantics. Similarly, the manipulations that will vary the presence should be subtle and change only one aspect of the scene to isolate the effect on our output variables. In [67], the authors proposed manipulating the scene's appearance by manipulating the visual fidelity. We vary presence by making the scene appear _realistic_ in one scenario and _abstract_ in another. Similarly, as shown in [13], plausibility illusion can be affected by challenging the physical laws in a scene. In behavior manipulation, we make the behavior of a scene naturally plausible in one scenario and implausible in another. We will maintain a controlled physical environment to minimize interference and satisfy the symmetry constraint. We keep the same room, lighting, seating arrangement, study session duration, breaks, and order of the experiment across users and experiments. To this end, we divide our experiment into two sets. In the first set, we vary the sense of presence by manipulating the _appearance_ of the virtual object (place illusion). In the second set, we vary the sense of presence by manipulating the _behavior_ of the virtual object (plausibility illusion). Each experiment will contain two blocks of trials: a _control_ block and a _manipulated_ block. In the _control_ block, the scene contains virtual objects with a natural appearance and plausible behavior. However, in the _manipulated_ block, the scene contains virtual objects whose appearance is manipulated to be visually unnatural (unrealistic) or in which the virtual objects exhibit implausible behavior. ### _DC3 - Establishing a baseline_ Since presence itself has a subjective nature, it is logical that we also establish our baseline with a subjective measure. To understand the level of presence, we use the questionnaire as our baseline, which researchers commonly accept as the standard measure. This helps us set a starting point for our study [13, 58, 67]. Subjective questionnaires have been the standard measure of presence for many years. They are sensitive enough to find differences in presence when used to examine the difference between two visually similar fidelities [61]. The post-experience questionnaire provides scores that reflect the level of perceived presence in the scene. While subjective questionnaires have limitations, as discussed in §2.3, they are currently the only widely accepted method of quantifying presence. We measure various feelings of presence with the three most-cited and widely used questionnaires that measure presence. 
We use the Igroup Presence Questionnaire (IPQ) [64, 65], the Slater-Usoh-Steed questionnaire (SUS) [75], and the Witmer & Singer questionnaire (PQ) [92]. ### _DC4 - Selecting an interaction mechanism_ We need a task that engages the participant and helps us measure our output variable, _reaction time_. Through interactions, the participant can be physically involved in the scene. In designing the interaction, we want to avoid imposing differential barriers to task completion, such as placing a button at a height that is easier to reach for some participants than others. Similarly, we want to avoid designing unnecessarily complex interactions that could cause task performance to vary unpredictably, independent of the targeted manipulations. Additionally, we ensure that external interventions do not affect the reaction time. Therefore, we need to leverage existing elements in the taxonomy of the scene (cue, interactions, and feedback). To solve these challenges, we employ the HoloLens "air tap" gesture [1] as our interaction mechanism. It is precise, and its ease of use is independent of a typical participant's height, reaching range, or any other physical attribute. The air tap gesture requires specific movements performed in a particular order. According to the instructions for the air tap in the HoloLens 2 manual [50], the user needs to "hold your hand straight out in front of you in a loose fist, point your index finger straight up toward the ceiling, tap your finger down, and then quickly raise it back up again." This specific sequence of movements reduces the probability of mistriggering due to random motions. Additionally, in the scene depicted in Figure 3, there is no movement other than the participant performing an air tap in response to a cue. Therefore, the closed position of the participant's fingers in the image is not an error and does not pose a potential for inaccurate gesture recognition. We designed a task that supports participants in knowing what to do (cue), knowing that the system is working (interaction/air tap), and knowing if their action was understood by the system (feedback). In the _realistic_ vs. _abstract_ scenario, the appearance of the object in the scene is the cue for the user to take action (air tap), and the disappearance of the object from the scene is the feedback to the user that their action was successful. In the _plausible_ vs. _implausible_ scenario, the cessation of change in the height of the coffee in the cup cues the user to initiate their action. The coffee cup changes color to provide feedback to the user about the success of their action (details in §4.6). Next, we formally define the reaction time in our study for trial \(i\) as \(reaction_{i} = interaction_{i} - cue_{i}\), where trial \(i\) refers to one task iteration by the participant (cue, air tap, feedback), \(interaction_{i}\) is the time when the air tap is recorded in trial \(i\), and \(cue_{i}\) is the time of onset of the task cue in trial \(i\). ## 4 User Study In this section, we detail the study measures, participants, equipment, and procedure for the user study. ### Participants We recruited 40 participants (23 male-identifying, 16 female-identifying, and one non-binary identifying) with a mean age of 26.6 years (standard deviation of 5.5 years). All participants volunteered and provided written informed consent. They received $25 for their participation. All but one participant had a technical background in computer science or engineering. 
All the participants had normal or corrected-to-normal vision with contact lenses or glasses. Twelve participants had 1 to 4 days per week of MR experience, 24 participants had less than 1 hour per week of experience, and nine had never experienced MR before. Only 12 participants had used HoloLens 2 before the study. The study was granted ethics clearance according to the ethics and privacy regulations of our Institutional Review Board (IRB). ### Material The study utilized an ergonomic, untethered, self-contained holographic device, the HoloLens 2 [4], equipped with a second-generation Holographic Processing Unit (HPU) for real-time computer vision and a Qualcomm Snapdragon 850 CPU for running applications. The virtual scenes were developed with the Unity 3D game development engine (10.0.19362.0) using the Universal Windows Platform API on a Windows 10 PC. The HoloLens 2 accepts eye, spatial, and hand-tracking inputs with a field of view of \(43^{\circ}\) horizontal, \(29^{\circ}\) vertical, and \(52^{\circ}\) diagonal. Its dual see-through displays have a resolution of \(1440\times 936\) pixels each, a 60Hz refresh rate, and a tinted visor to minimize environmental light interference. We chose the HoloLens 2 for our experiment because its see-through setup allows the user a direct view of the real world. Other MR headsets that leverage video to show the physical world cannot be used for safety-critical applications like surgery. ### Variables In our study design, we change two variables to test our hypothesis, but we only manipulate one variable at a time. We chose the _appearance_ and _behavior_ of virtual objects as our variables in the first and second experiment sets, respectively. ### Experiment Set 1: Realistic vs. Abstract In the _control_ trials, all virtual objects in the scene have a realistic appearance (textured and natural) and plausible behavior. In the rest of the paper, we refer to this block of trials as _realistic_. In the _manipulated_ trials, all virtual objects depict plausible natural behavior, but their appearance is abstract (untextured and geometric). We refer to this block of trials as _abstract_ in the remainder of the paper. For the _realistic_ vs. _abstract_ trials, we chose a scene that contains textured virtual objects. To satisfy our simplicity constraint, we modified the popular Fruit Ninja game [3]. To mimic the semantics of a multi-object environment, we use two objects that are similar in shape and size but different in textural properties. As we wanted a simple scene and one virtual object at a time, we made the virtual objects appear in the scene one after the other. Ultimately, we used one fruit (banana) and one vegetable (carrot) for this experiment. We removed complex interactions (slicing) and additional cognitive loads (scores) from the original game. In the _realistic_ condition, we make the carrots and bananas appear as natural as possible in terms of color, texture, and shape. 
In the _abstract_ condition, we render the carrots and bananas with muted colors, without texture, and with geometrically blocky shapes, as shown in Figure 2.

Figure 2: **Realistic, abstract, plausible, and implausible virtual objects used in the experiments.**

Figure 3: **Illustration of the experimental task using air tap.**

### Experimental Task The experimental task in both sets can be divided into three sub-tasks, as shown in Figure 3. The experiment objects are placed in the participant's field of view as a prerequisite. The first step for participants is to view the object with their hand in a neutral position without raising their elbow, as illustrated in Figure 3(1) and captured in Figure 2. The second step for the participants is to lift their index finger and react upon the cue by air-tapping the virtual objects, as shown in Figure 3(2). Finally, the third step for participants is to return their hands to the neutral position after seeing the feedback. To avoid double triggers, we instructed participants to perform a single air tap in response to a cue. ### Measures Presence scores for the questionnaires were obtained using 39 items (6 SUS, 14 IPQ, 19 PQ) on a 7-point scale. We did not modify any of the questions. The reaction time is recorded by our software on HoloLens 2. On average, we collected 50 reaction time measurements per block of trials, totaling 200 data points per participant for the four blocks. The reaction time is measured in milliseconds (\(ms\)). ### Pilot Study Before starting the formal study, we conducted a pilot study with two participants to tune the parameters to minimize environmental variables and maximize reliability and objectivity. We used a talk-aloud protocol and asked participants questions to tune the experiment for general comfort and ease for the participants, but not to bias the experiment toward a specific set of users. We tuned the following session-specific parameters. _Participant position._ We experimented with standing, walking, and sitting positions. The participants reported feeling tired while standing. While walking, they reported that the scene kept changing around them, necessitating extra attention to locate the virtual object. Participants reported the sitting position as the most comfortable position for the experiment. We asked the participant to sit on a chair with relaxed shoulders, an arm on the lap or armrest, and feet flat on the floor. A chair with an armrest and backrest was reported to be the most comfortable for the air tap interaction [1]. _Room lighting._ Lighting affects a user's ability to see the virtual environment and its objects [20]. It also affects the rendering of virtual objects and user interaction with those objects, as they rely on the tracking module of the HoloLens 2. For optimal visuals through the HoloLens 2, lighting should be even and sufficiently bright so that a participant can see without effort but not so bright that a participant has difficulty looking into the environment. To compensate for the darkness of the visor, dim lights reflecting in the direction of the participant's head are deemed most effective because a tinted visor may cause a loss of contrast in the physical environment. _Experiment duration._ How long should an experiment run with the same repetitive task? It should be long enough for the participant to feel involved and for us to collect sufficient data on _reaction time_. It should be short enough that the participant does not feel tired or disengaged, which would impact the accuracy of their interactions. 
We tested for a duration of 2 to 10 minutes. At 2 minutes, the participants reported that they could not get acquainted with the environment. At 10 minutes, the participants reported feeling disengaged after a while. We also wanted to test the recovery time of the presence or reaction time (discussed in detail in §5.1.2). We picked 5 minutes per experiment, as it allowed us to obtain at least 60 reaction time readings per experiment and the participant did not feel tired. _Wait time between blocks and experiments._ After each block of an experiment, the participant was asked to complete the questionnaires. We explored 0-20 minutes of wait times after filling out the questionnaire and before starting the next block. Participants reported 5-minute wait times as sufficient, but a longer break was available upon request. In the main study, we explicitly asked participants if they needed more downtime before each experiment. _Experiment and block order._ For an experiment, we could expose the participant to a control block and then to manipulated trials, or vice versa. In either case, there is a risk of obtaining better performance due to greater experience. Out of an abundance of caution, we decided to run all participants with the control block first and the manipulated block second so that any underlying tendency for performance to improve over time would work against our hypotheses. However, in hindsight, we recognize that counterbalancing would have been a more appropriate way to control such potential effects. _Virtual object placement._ We use Fitts's law [29] to calculate the expected time of motor movement for several different positions in the scene. We placed the virtual object parallel to a sitting participant's eye level. The object was placed at a \(45\) centimeter distance, as recommended by the HoloLens 2 interactable object guidelines for the air tap interaction [5]. The object's size was tested between \(1.4\times 1.4\,cm\) and \(3.5\times 3.5\,cm\). Both participants felt comfortable with \(1.4\times 1.4\,cm\). _Event and task period._ Our experiments involve repeating trials in each block. However, repeating events too quickly can make the task more difficult and decrease accuracy, while long breaks between events can break the presence [13]. To find the appropriate interval, we tested intervals from \(1-15\) seconds (\(s\)) but found that periods shorter than \(3s\) were too short and caused confusion, while periods longer than \(8s\) were boring for participants. Therefore, we settled on a \(5s\) interval between cue onsets to balance user comfort and task accuracy. _Cue appearance._ We initially tested using a color change as a visual cue for the participant's interaction in both sets of experiments (see details in §3.2), but it was found to be distracting. We then tried other cues and settled on using objects appearing in the scene to initiate an air tap gesture in the Fruit Ninja game in the realistic and abstract scenarios. A glowing button prompt was tested but found distracting for the coffee mug experiment. The cue was ultimately changed to filling the coffee in the mug, which participants found more engaging.

Figure 4: _User study timeline consisting of pre- and post-questionnaires and four blocks across the two experiments._

_Proximity of interaction._ We tested the air tap interaction at a close and a far distance. The participant struggled with the far air tap. 
This could be because the virtual object was placed near the user, and the far-touch interaction moved the virtual object farther from the participant and created an unnecessary distraction. The participant found the near air-tap interaction more natural, so we kept it as the mode of interaction for all of our experiments. _Accuracy of interaction._ To assess the potential for mistriggers, we conducted experiments with the pilot study participants. In this pilot, we asked participants to perform random movements with their hands for 10 minutes while avoiding the air tap gesture. Throughout the pilot, our system did not register any unintended interactions by the participants, indicating a low-to-no chance of mistriggers. We took several steps to mitigate the possibility of erroneous gesture detection. First, before the experiment, we coached participants on performing a successful single air tap gesture. Second, we did not record any timing data for a trial where an air tap was undetected. Our data shows an average of 50 detected air taps out of 60 possible during the 5-minute interaction, as indicated in Table 2 and Table 4. Third, if a participant tapped twice in response to a single cue, we considered only the first measure as the response time. _Feedback._ In the realistic vs. abstract experiment, we employed virtual objects disappearing as feedback for a successful interaction, which participants found acceptable, similar to the original Fruit Ninja game. For the plausible vs. implausible experiment, we initially used a subtle coffee-poured sound as feedback, but one participant found it distracting. We then changed to using the mug color change as feedback, which was found to be more subtle and less distracting. Interestingly, participants preferred the "change in color" being used as feedback rather than as a prompt for a gesture. ## 5 Results ### Experiment Set 1: Realistic vs. Abstract #### 5.1.1 Presence Questionnaire Scores **Questionnaire scores.** Our presence scores are collected from the same set of participants under two different conditions: realistic vs. abstract. We use a paired samples t-test, with the _null hypothesis_ that _the mean presence scores of the two experiments are equal_, to determine if the presence scores changed between the realistic and abstract experiments. Before applying the t-test, we verified the normality of the difference between the presence scores for the two experiments. The results of this experiment are reported in Table 1. The difference in presence score between the realistic experiment (M = 4.97; MAD = 0.90) and the abstract experiment (M = 2.74; MAD = 0.74) was significant (t (40) = 12.85; p < 1.32e-15). Therefore, we can reject the null hypothesis and state that the presence of subjects changed across experiments. Figure 5 shows the histogram of the scores across users and its probability distribution. **Subscales.** While our aggregate results demonstrate that the presence score changed as we altered the realism of the objects, we want to investigate the factors that contributed to the change in presence. The mean scores for all subscales and the aggregate realism scores across questionnaires are shown in Figure 6. First, the realism questions constitute 11 of the 39 questions and show a significant change in the presence scores. This suggests that the realism significantly changed across the two experiments, which is also confirmed by the t-test. We also report the disaggregated presence scores for realism- and presence-related questions to analyze how factors other than realism and the feeling of presence affect the overall presence score. 
While the null hypothesis could not be rejected for the remaining questions, the p-value is small, indicating that other aspects had a lesser impact but require further investigation. **User characteristics and presence.** We performed a regression analysis using an F-test1 to check if age, gender, and familiarity with MR had any impact on _presence_ scores. In this analysis, the null hypothesis is that a regression model based on a given variable is not a better fit than a simple intercept-only model. We observed that age, gender, and familiarity with MR did not have any effect on overall _presence_ scores, as we obtained values of \(F(1,38)=0.96,p=0.34\), \(F(1,38)=1.64,p=0.96\), and \(F(1,38)=2.19,p=0.71\), respectively. Footnote 1: \(F(X,Y)\): \(X\), \(Y\) are the degrees of freedom between and within groups, respectively. \(X=\) total groups \(-1\), \(Y=\) group size \(-\) total groups. #### 5.1.2 Reaction Time To evaluate participants' reaction time, we record a time-stamp when the cue appears and a time-stamp when the user's action is recorded. We use the difference between these two timestamps as the reaction time. There are instances of "no-triggers" where either the participant does not respond, or the air tap does not register. We remove the corresponding cue from our observations if the air tap is not registered. As a result, our experiments recorded, on average, 50 responses out of the maximum possible 55-60, indicating that around 10-15% of the time, participants did not respond, or the air tap was not recorded. Removing this data ensures that "no-triggers" do not impact our findings. Table 2 presents the high-level results of the experiments. **Reaction time values.** Figure 7 shows the distribution of average reaction time scores across the users. Similar to the presence scores, we used a t-test to evaluate the null hypothesis that the mean reaction time across the two experiments is equal. Our results show a significant difference in average reaction times between the appearance conditions: 954ms in the realistic condition and 1313ms in the abstract condition, with a t-statistic of t(40) = 8.71 and \(p<1.09e^{-16}\). This rejects the null hypothesis and also presents a significant difference of 37.63%. **Reaction time recovery.** Next, we examined the reaction time of users over time. In Figure 8, we observe that participants took significantly longer to respond to the cues at the start of the experiment. The reaction time drastically dropped and settled to a steady state within the first 30 seconds. After that, there was only a modest improvement in the reaction time over the duration of the experiment. **Additional statistical analyses.** We also measured the number of times a user was able to respond to the cue. Since the number of cues between experiments differed due to slight variations in the experiment duration, we only considered the first 60 cues to collect these statistics. We see that users could complete more interactions in the realistic appearance condition than in the abstract appearance condition. #### 5.1.3 Experiment Set 1: Discussion Our results suggest that the place illusion part of our first hypothesis, "_H1: manipulating the place illusion (appearance of a virtual object) leads to change in presence_", is valid. We successfully altered the presence by manipulating the object's appearance from realistic to abstract. Prior work on manipulating the place illusion to alter the presence also supports our results [13]. 
Additionally, the high correlation between questionnaire scores across conditions suggests that the users who felt a greater presence in one condition also felt a lower presence in the second condition. Our second hypothesis, "_H2: Change in presence for a participant leads to change in participant's reaction time_", is also valid. Our first hypothesis confirms that the presence changed from the experiment with a realistic object to the experiment with an abstract object. Simultaneously, we observed a significant increase in the user reaction time as participants moved from the realistic to the abstract object experiments. Finally, the recovery time analysis also yields interesting points. The reaction time for the manipulated experiment block is always higher than for the control experiment block. Furthermore, the reaction time is steady after the initial few seconds. This means the participants initially took some time to get acquainted with the environment, leading to an increase in their reaction time. As time went on, they interacted with the virtual objects in a fairly consistent manner.

| Experiment | Reaction Time (ms), \(\mu\pm\) MAD | Change (%) | No. of Interactions |
| --- | --- | --- | --- |
| realistic | 954 \(\pm\) 170 | - | 50 |
| abstract | 1313 \(\pm\) 290 | 37.63 | 44 |

Table 2: _Average, MAD, %age change of user reaction times, and average number of interactions across the two sets of experiments._

Figure 8: _Average user reaction time for different experimental settings. User reaction time recovers (decreases) over time._

Figure 6: _Subscale Scores. IPQ: general presence (GP), spatial presence (SP), involvement (INV), and realism (REAL); PQ: possibility to act (ACT), interface quality (IFQUL), realism (REAL), possibility to examine (EXAM), and self-evaluation of performance (EVAL)._

### Experiment Set 2: Plausible vs. Implausible As in the previous experiment set, we get presence scores from questionnaires and reaction times from the HoloLens 2. #### 5.2.1 Presence Questionnaire Scores In this experiment, we changed the feelings of _presence_ by manipulating the behavior of the object (i.e., _plausibility illusion_). The experiment and analysis setup is the same as in the previous experiment set. **Questionnaire scores.** Our presence scores are collected from the same set of participants under two different conditions: plausible vs. implausible. We use a paired samples t-test, with the _null hypothesis_ that _the mean presence scores of the two experiments are equal_, to determine if the presence scores changed between the plausible and implausible experiments. Before applying the t-test, we verified the normality of the difference between the presence scores for the two experiments. The results of this experiment are reported in Table 3. The difference in presence score between the plausible experiment (M = 5.17; MAD = 0.93) and the implausible experiment (M = 2.81; MAD = 0.91) was significant (t (40) = 8.05; p < 8.11e-10). Therefore, we can reject the null hypothesis and state that the presence of subjects changed across experiments. Figure 9 shows the histogram of the scores across users and its probability distribution. **Subscales.** While our aggregate results demonstrate that the presence score changed as we altered the plausibility of the objects, we want to investigate the factors that contributed to the change in presence. 
The mean scores for all subscales and the aggregate plausibility scores across questionnaires are shown in Figure 10. First, we filtered the questions relating to plausibility, comprising 8 out of the 39 questions, which show a significant change in the presence scores. This suggests that plausibility significantly changed across the two experiments, which is also confirmed by the t-test. We also report the disaggregated presence scores for plausibility- and presence-related questions and the rest of the questions to analyze how factors other than plausibility and the feeling of presence affect the overall presence score. While the null hypothesis could not be rejected for the remaining questions, the p-value is small, indicating that other aspects had a lesser impact but require further investigation. #### 5.2.2 Reaction Time We collected 50 data points per user per experiment on average after post-processing the data as described in Section 5.1.2 for the first experiment. Table 4 presents the results of the experiments. **Reaction time values.** Figure 11 shows the distribution of average reaction time scores across the users. Similar to the presence scores, we used a t-test to evaluate the null hypothesis that the mean reaction time across the two experiments is equal. Our results show a significant difference in average reaction times between the two plausibility conditions: 930ms in the plausible condition and 1182ms in the implausible condition, with a t-statistic of t(40) = 11.67 and \(p<1.93e^{-13}\). This rejects the null hypothesis and also presents a significant difference of 27.10%. As in the first experiment set, we see decreased interactions after the manipulation. **Reaction time recovery.** Next, we examined the reaction time of users over time. In Figure 12, participants took slightly longer than 60 seconds to reach a steady state for the plausible vs. implausible experiment, which was longer than in the first experiment set. #### 5.2.3 Experiment Set 2: Discussion This experiment aimed to establish that presence can be modified in ways other than the manipulations to the place illusion done in the previous experiment. For this experiment, the plausibility illusion part of our hypothesis "_H1: manipulating the plausibility illusion (behavior of virtual object) leads to change in presence_" is valid. We altered the presence by manipulating the object's behavior from plausible to implausible. Gravity is a crucial aspect of our lives, and we expect objects to behave in specific ways under gravity. If their behavior is not plausible according to the laws of physics, it leads to degraded presence. Similar to the previous experiment, we saw a high correlation between questionnaire scores, suggesting that the plausibility illusion was the factor in altering presence. The second hypothesis, H2, can be accepted based on our results. The only difference is that the change in presence was due to the change in plausibility illusion and not the place illusion. The recovery time analysis for this experiment set yields similar results as the previous experiment. However, users took longer to reach the steady state, and reaction times varied over time. This means that manipulating the appearance of the objects has less effect on participants than behavior manipulation. This is understandable, as humans are more accustomed to seeing objects with a non-standard appearance than to observing objects that do not conform to gravity. ## 6 Discussion In this section, we discuss the implications of the quantitative results presented in the previous section. 
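For concreteness, the sketch below illustrates the kind of analysis summarized in the preceding section: paired t-tests on per-participant presence scores and reaction times, followed by the presence-reaction-time correlation. It is not the authors' analysis code, and the generated values are hypothetical placeholders for the study's logged measurements.

```python
# Minimal sketch of the Section 5 analysis; data below are placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-participant means (N = 40) for a control and a manipulated block.
presence_control = rng.normal(5.0, 0.9, 40)      # e.g., realistic / plausible block
presence_manipulated = rng.normal(2.8, 0.9, 40)  # e.g., abstract / implausible block
rt_control = rng.normal(950, 170, 40)            # mean reaction time (ms), control block
rt_manipulated = rng.normal(1310, 290, 40)       # mean reaction time (ms), manipulated block

# Paired t-tests: did presence and reaction time change between the two blocks?
t_presence, p_presence = stats.ttest_rel(presence_control, presence_manipulated)
t_rt, p_rt = stats.ttest_rel(rt_control, rt_manipulated)

# Pearson correlation between presence and reaction time across all observations.
# With these placeholders, any correlation is driven purely by the simulated block effect;
# the study computes this on the actual per-participant measurements.
presence_all = np.concatenate([presence_control, presence_manipulated])
rt_all = np.concatenate([rt_control, rt_manipulated])
r, p_r = stats.pearsonr(presence_all, rt_all)

print(f"presence: t = {t_presence:.2f}, p = {p_presence:.3g}")
print(f"reaction time: t = {t_rt:.2f}, p = {p_rt:.3g}")
print(f"correlation: r = {r:.2f}, p = {p_r:.3g}")
```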
Our first hypothesis, "_H1: Manipulating the place illusion (appearance of a virtual object) leads to change in the presence_", can be accepted for the first experiment set. The presence score of participants for the manipulated experiment was more than 2 points lower on a 7-point scale than for the control experiments. As we used everyday virtual objects, bananas in this case, a change in their texture significantly alters our strong experience-based prior. The results from the second experiment set further support this statement and result in the acceptance of H1. As all the participants experience gravity at all times, an object defying gravity is implausible and causes a break in presence. This statement is supported by prior work asserting that a break in plausibility, such as gravity-defying behavior, leads to a change in presence [13]. Finally, our hypothesis is acceptable for the individual questionnaires and their subscales, as the change in presence is consistent with the overall results.

| Experiment | Reaction Time (ms), \(\mu\pm\) MAD | Change (%) | No. of Interactions |
| --- | --- | --- | --- |
| plausible | 930 \(\pm\) 240 | - | 54 |
| implausible | 1182 \(\pm\) 210 | 27.10 | 51 |

Table 4: _Average, MAD, %age change of user reaction times, and average number of interactions across the two sets of experiments._

| Questions | plausible (\(\mu\), MAD) | implausible (\(\mu\), MAD) | t-test | p \(<\) 0.05 |
| --- | --- | --- | --- | --- |
| ALL | 5.17, 0.93 | 2.81, 0.91 | 8.05 | yes |
| SUS | 5.34, 1.09 | 2.70, 1.13 | 7.52 | yes |
| IPQ | 4.51, 1.35 | 2.63, 0.90 | 6.10 | yes |
| PQ | 5.61, 0.79 | 2.98, 1.30 | 7.88 | yes |
| Plausible | 4.73, 0.89 | 2.77, 0.57 | 5.23 | yes |
| Plaus., GP, SP | 4.94, 0.43 | 2.62, 0.77 | 9.05 | yes |
| ALL - (Plaus., GP, SP) | 5.36, 1.34 | 2.97, 1.16 | 4.18 | 0.06 |

Table 3: _Mean, MAD, and t-test results for all questionnaires._

Figure 10: _Subscale Scores. See Figure 6 caption for subscale acronyms._

Figure 9: _Histogram, means (left y-axis), and fitted Gaussian distribution (right y-axis) using mean questionnaire scores._

Our user _reaction time_ results suggest that our second hypothesis "_H2: Change in presence for a participant leads to change in participant's reaction time_" can be accepted. As the users became more familiar with the environment and setup, the reaction times from one set of experiments to the other decreased. We also observed an improvement in the number of interactions across the two sets. However, this improvement could not overcome the increase in reaction times due to our manipulations within a set of experiments. Within a block, across both experiments, the reaction time drastically improved at the start but quickly reached a steady state. The difference in reaction time between the blocks of an experiment remained constant over time. This indicates that the recovery effect is consistent across blocks, and our manipulations influenced the change in reaction times of the participants. Our reaction-time-to-presence model (Figure 14) estimates a user's presence level from reaction time values; its classification accuracy decreases as the number of presence classes increases. However, our model shows good accuracy despite the small data set of 128 observations. The accuracy is significantly higher than a purely random predictor with 1/(no. of presence levels), e.g., 50% for two levels, 33% for three levels, and 14.28% for seven levels. 
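To illustrate how such a mapping from reaction time to presence level could be set up, the sketch below trains a small classifier on hypothetical reaction-time features. The specific estimator (a random forest), the feature set, and the binning into presence classes are assumptions made for illustration and do not reproduce the authors' model architecture (Figure 14).

```python
# Minimal sketch of a reaction-time-to-presence classifier on 128 hypothetical observations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical features: per-block reaction-time summaries (e.g., mean, MAD, number of taps).
X = rng.normal(size=(128, 3))
# Presence levels obtained by binning questionnaire scores; here two classes (low vs. high).
y = rng.integers(0, 2, size=128)

model = RandomForestClassifier(n_estimators=100, random_state=0)
accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
print(f"mean cross-validated accuracy: {accuracy:.2f}")
```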
**Effect of Training Data Size.** Figure 15(b) demonstrates the effect of training data size (on the \(x\)-axis) on classification accuracy (on the \(y\)-axis). In this experiment, we set the number of classes to two. We observe that model accuracy improves as the training data size increases. However, even with a very small training data size, the model performs quite well and achieves an accuracy of 73.09%. The accuracy improves to 79% when the number of users increases to 36. The upward trend shows that more data can further improve the model performance. _Key Takeaway. Our ensemble classification model estimates presence levels using the reaction time values with high accuracy, which depends on the number of presence levels and training data size. However, the accuracy can be further improved by using data from more users._ ## 7 Limitations and Future Work For this study, we used two scenes consisting of simple one-object scenarios, a design choice to minimize the effect of variables other than presence on the reaction time. This study establishes the relationship between reaction time and presence when presence is altered by changing the realism and plausibility of virtual objects. While we have no indication that this relationship will not hold if presence is altered by other methods in a more complex environment, there is a possibility that the setup may not be sensitive to the broader effects of varying the feelings of presence on reaction time in a multi-object virtual scene. In future work, we plan to experimentally investigate how presence is affected by factors other than realism and plausibility and how it relates to reaction time with different degrees of scene complexity, cognitive load, and dynamic physical environments. In this study, we investigated the effects of varying the feelings of presence with only periodic tasks and active interaction (air tap). It is worth investigating the relationship between presence and reaction time under different conditions, such as with non-periodic tasks, or with different response measures, such as eye gaze. Participants reported their presence levels using questionnaires that leverage Likert scales. Other questionnaires that rely on different assessment mechanisms, such as open-ended questions, might reveal additional insights. We also acknowledge that our proposed approach depends on user interactions to measure the reaction time. Our technique may fail to produce any measurement in virtual scenes with little or no interaction. Future work should consider using other backup mechanisms like eye-gaze tracking to combat low-interaction scenarios. We did not find any effect of gender, age, or familiarity with MR on presence or reaction time in this study. This may be due to our purposefully simple design; in more complex application scenarios, these user characteristics may affect presence or reaction time. In addition, we have not tested the effect of breaks in presence, which may impact both presence and reaction time. In the future, breaks in presence could be tested with reaction time as a measure, adding to the discussion of the presence-reaction-time relationship. This study did not ask participants to complete a cybersickness questionnaire. However, we asked participants to report any discomfort they felt during or at the end of the experiments. None of the participants reported any feeling of discomfort. 
This may be due to the limited exposure time and the wider adjustable interpupillary distance range that the HoloLens 2 offers [4]. However, this variable could be tested separately to isolate the potential impact of cybersickness on the feelings of presence. Despite the limitations of the work and the opportunity for improvements, we argue that our results present sufficient evidence of a relationship between _presence_ and _reaction time_ to justify a further discussion of whether a performance-based metric such as _reaction time_ can be used to describe _presence_. Post-experience questionnaires are the most commonly used measures of _presence_ in previous work. However, a significant disadvantage of such questionnaires is that they are based on the subjects' memories of the experience. Such memories can reflect an inconsistent and incomplete picture of the experience. _Reaction time_, on the other hand, is a passive and objective measure that does not depend on the user's comprehension or memory of the experience. In our work, we have developed a preliminary model that maps the reaction time to presence. In the future, with additional investigation, this model could serve as a measure of _presence_ and as a feedback loop that developers can use to improve the run-time experience, since it measures the phenomenon as it is perceived. Additionally, a model could be developed that takes presence as input and yields reaction time as output, which could describe how much of an adverse effect a decrease in presence might cause. However, it must be noted that reaction time is an objective metric that can be measured, whereas presence is a subjective sensation that even people themselves have difficulty quantifying reliably. This is why our paper aims to use reaction time as an alternative and potentially more robust mechanism for assessing presence, not for predicting the effect of lower presence on reaction time. Figure 14: _The architecture of the reaction time–to–presence model._ Figure 13: _Presence vs. Reaction Time: Presence decreases as reaction time increases. Reaction time and presence also show a modest correlation: overall (-0.65), realistic (-0.51), abstract (-0.63), plausible (-0.57), and implausible (-0.59). Each red circle represents a study participant. The black line is the linear regression fit for the data._ ## 8 Conclusion We presented a user study (N=40) to understand the relationship between _presence_ and _reaction time_. We changed the sense of _presence_ of the participants by manipulating the _appearance (place illusion)_ and the non-task-relevant _behavior (plausibility illusion)_ of the virtual object and systematically measured the _reaction time_ of the participants in response to a visual stimulus. Our post-experience questionnaires show a significant change in the _presence_ across experiments. Similarly, we see a significant change in user _reaction time_ as we vary the feeling of _presence_. Our analysis shows a negative correlation between presence and _reaction time_. Furthermore, our study provides insight into the considerations for using _reaction time_ as a possible measure of _presence_, as well as preliminary recommendations on the possibilities of future research to understand better the relationship between _presence_ and _reaction time_. 
We found that as the average _presence_ score for the two illusions decreased from 4.97 to 2.74 and 5.17 to 2.81 (on a 7-point scale), the average _reaction time_ increased by 37.63% and 27.10%, respectively. We developed a model that estimates a user's presence level using reaction time values with high accuracy of up to 80%. While our study suggests that reaction time can be used as a measure of presence, further investigation is needed to improve the accuracy of the model.
2309.07000
Nonlinear Hall effect on a disordered lattice
The nonlinear Hall effect has recently attracted significant interest due to its potential as a promising spectral tool and device applications. A theory of the nonlinear Hall effect on a disordered lattice is a crucial step towards explorations in realistic devices, but has not been addressed. We study the nonlinear Hall response on a lattice, which allows us to introduce strong disorder numerically. We reveal a disorder-induced fluctuation of the Berry curvature that was not discovered in the previous perturbation theories. The fluctuating Berry curvature induces a fluctuation of the nonlinear Hall conductivity, which anomalously increases as the Fermi energy moves from the band edges to higher energies. More importantly, the fluctuation may explain those observations in the recent experiments. We also discover an "Anderson localization" of the nonlinear Hall effect. This work shows a territory of the nonlinear Hall effect yet to be explored.
Rui Chen, Z. Z. Du, Hai-Peng Sun, Hai-Zhou Lu, X. C. Xie
2023-09-13T14:54:45Z
http://arxiv.org/abs/2309.07000v3
# Fluctuation and localization of the nonlinear Hall effect on a disordered lattice ###### Abstract The nonlinear Hall effect has recently attracted significant interest due to its potentials as a promising spectral tool and device applications. A theory of the nonlinear Hall effect on a disordered lattice is a crucial step towards explorations in realistic devices, but has not been addressed. We study the nonlinear Hall response on a lattice, which allows us to introduce disorder numerically and reveal a mechanism that was not discovered in the previous momentum-space theories. In the mechanism, disorder induces an increasing fluctuation of the nonlinear Hall conductance as the Fermi energy moves from the band edges to higher energies. This fluctuation is a surprise, because it is opposite to the disorder-free distribution of the Berry curvature. More importantly, the fluctuation may explain those unexpected observations in the recent experiments. We also discover an "Anderson localization" of the nonlinear Hall effect. This work shows an emergent territory of the nonlinear Hall effect yet to be explored. _Introduction.-_ The nonlinear Hall effect behaves as a transverse Hall voltage nonlinearly responding to a longitudinal driving current. It has attracted much attention [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26], because quite different from the Hall effect, it does not need a magnetic field or magnetism to break time-reversal symmetry. More importantly, it sensitively depends on real-space symmetries [18], thus may provide a new experimental tool to probe the condensed matter phases. A theory on a disordered lattice is a crucial step towards explorations and applications of the nonlinear Hall effect in realistic devices, but has not been addressed. In this Letter, we study the nonlinear Hall effect on a disordered lattice [Fig. 1(a)], by calculating the Berry curvature dipole and Berry connection polarizability contributions of the 2D tilted Dirac model. Disorder plays an important role in the nonlinear Hall effect [17; 18; 19; 27; 28; 29; 30; 31; 32]. Our calculations reveal two distinguishing features from disorder. (i) An increasingly enhanced fluctuation of the nonlinear Hall effect as the Fermi energy moves from the band edges to higher energies [Fig. 1(b)]. It could be a different mechanism of the nonlinear Hall effect, arising from disorder-enhanced Berry curvature [Fig. 1(c)]. Its energy dependence is quite opposite to that of the disorder-free Berry curvature, thus it cannot be revealed in the momentum-space theories or measured in the linear Hall conductance. This fluctuation may explain the recent experiments [Figs. 1(d) and 1(e)], where larger nonlinear Hall conductance fluctuations were observed at higher energies [5; 6], quite distinct from the Fermi-energy-irrelevant universal conductance fluctuations in the linear conductance regime [33; 34; 35]. (ii) The second feature is an "Anderson localization" [36; 37; 38], but in the nonlinear response and along the perpendicular direction, Figure 1: (a) We use a super cell to calculate the nonlinear Hall effect (double-frequency transverse current \(J^{2\omega}\) induced by an electric field \(E^{\omega}\)) on a real-space lattice, which allows to introduce disorder (the colors of the lattice sites) numerically and reveals a different mechanism of the nonlinear Hall effect. 
The mechanism exhibits increasing fluctuations of the nonlinear Hall effect (in terms of the Berry curvature dipole \(D_{yxx}\) (b) or Berry curvature of the \(m\)-th state \(\bar{\Omega}\left(E_{m}\right)\) (c)] as the Fermi energy \(E_{F}\) moves from the edges (\(E_{F}=\pm 40\) meV) of the energy bands to higher energies, and may explain those unexpected experimental observations (d)-(e), adopted from Refs. [5] and [6]. quite different from the previous scenarios. Our findings reveal a territory of the nonlinear Hall effect yet to be explored. _Model and single-\(k\) approximation.-_ We adopt the minimal model for the nonlinear Hall effect, i.e., the tilted 2D massive Dirac model [3], \[H=tk_{x}+\left(m-\alpha k^{2}\right)\sigma_{z}+\eta vk_{x}\sigma_{y}+ \upsilon k_{y}\sigma_{x}+V(\mathbf{r}), \tag{1}\] where \(V(\mathbf{r})\) depicts the Anderson disorder [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52] uniformly distributed within \([-W/2,W/2]\), with the disorder strength \(W\). In the numerical calculations, we discretize the system on a two-dimensional square lattice, with a lattice constant of \(a=1\) nm. The other parameters are of the same orders of those in typical massive Dirac systems [6; 53; 54], as given in the caption of Fig. 2. To save computational power, we use the single \(k\)-point approximation in a super cell. On a periodic lattice, a physical quantity \(O\) can be found as an integral \(O=\int_{\mathrm{BZ}}d\mathbf{k}F\left(\mathbf{k}\right)\) of a momentum-dependent function \(F\left(\mathbf{k}\right)\) in the Brillouin zone (BZ) for the minimal unit cell. A disordered system can be treated similarly by using a large super cell [Fig. 1(a)] [55; 56; 57]. As the volume of the super cell \(V\rightarrow\infty\), the integral can be approximated by the integrand at \(\mathbf{k=0}\) (see Sec. SI of [58] for more details), with \(O=\lim_{V\rightarrow\infty}(4\pi^{2}/V)F\left(\mathbf{k=0}\right)\), \(V=L^{2}\) the volume of the super cell, the side lengths \(L=na\), the number of lattice sites \(n^{2}\), and the lattice constant \(a\). _Nonlinear Hall conductivity-Berry curvature dipole.-_ One of the major contributions to the nonlinear Hall conductivity (defined as a current density \(j_{a}=\sigma_{abc}E_{b}E_{c}\) induced by two electric fields \(E_{b}\) and \(E_{c}\), with \(a,b,c\in\{x,y,z\}\)) is from the Berry curvature dipole (BCD) \(\sigma_{abc}^{\mathrm{BCD}}=(\tau e^{3}/\hbar^{2})D_{abc}\)[1; 2], where the Berry curvature dipole \(D_{abc}\) under the single \(k\)-point approximation can be found as \[D_{abc}=\frac{1}{V}\sum_{m,p}^{E_{m}^{\mathbf{0}}\neq E_{p}^{\mathbf{0}}}v_{ mm}^{c,\mathbf{0}}\Omega_{mp}^{ab,\mathbf{0}}f_{E_{m}^{\mathbf{0}}}^{\prime}, \tag{2}\] where \(v_{mm}^{c,\mathbf{k}}=\partial E_{m}^{\mathbf{k}}/\partial k_{c}\), the Berry curvature \(\Omega_{mp}^{ab,\mathbf{k}}=2\operatorname{Im}\left[\mathcal{R}_{pm}^{a, \mathbf{k}}R_{mp}^{b,\mathbf{k}}\right]\), \(\mathcal{R}_{mp}^{a,\mathbf{k}}=iv_{mp}^{a,\mathbf{k}}/E_{mp}^{\mathbf{k}}\), \(E_{mp}^{\mathbf{k}}=E_{m}^{\mathbf{k}}-E_{p}^{\mathbf{k}}\), \(v_{mp}^{a,\mathbf{k}}=\left\langle m\right|\partial H^{\mathbf{k}}/\hbar \partial k_{a}\left|p\right\rangle\), \(f_{E_{m}^{\mathbf{k}}}^{\prime}=\partial f_{E_{m}}/\partial E_{m}^{\mathbf{k}}\), and \(f\) is the Fermi function. Figure 2(a) shows the Berry curvature dipole of the tilted Dirac model as a function of the Fermi energy \(E_{F}\). 
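As a point of reference before the numerical results, the following minimal clean-limit (\(W=0\)) sketch, assuming only NumPy, illustrates how a Fermi-surface integral of the product of velocity and Berry curvature, i.e., the quantity that Eq. (2) approximates on a super cell, can be evaluated on a momentum grid for the two-band model of Eq. (1). The parameter values follow the caption of Fig. 2, but this is only an illustration, not the real-space super-cell calculation used in this work, and sign and normalization conventions may differ.

```
# Hedged clean-limit sketch (W = 0), NumPy only: momentum-grid evaluation of a
# Berry curvature dipole of the form sum_k sum_n v_x * Omega_xy * f'(E) for the
# tilted Dirac model of Eq. (1). Not the paper's super-cell code; conventions
# (signs, prefactors) may differ from Eq. (2).
import numpy as np

t, v, alpha, m, eta = 50.0, 100.0, 100.0, 40.0, -1.0   # meV, nm (cf. Fig. 2)
kT = 0.12 * m
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def hamiltonian(kx, ky):
    return t*kx*I2 + (m - alpha*(kx**2 + ky**2))*sz + eta*v*kx*sy + v*ky*sx

def fermi_prime(E, EF):
    x = np.clip((E - EF) / kT, -60.0, 60.0)
    return -np.exp(x) / (kT * (1.0 + np.exp(x))**2)      # df/dE

def berry_dipole(EF, kmax=1.0, N=201):
    ks = np.linspace(-kmax, kmax, N)
    dk = ks[1] - ks[0]
    D = 0.0
    for kx in ks:
        for ky in ks:
            E, U = np.linalg.eigh(hamiltonian(kx, ky))
            vx = U.conj().T @ (t*I2 - 2*alpha*kx*sz + eta*v*sy) @ U   # dH/dkx
            vy = U.conj().T @ (-2*alpha*ky*sz + v*sx) @ U             # dH/dky
            for n in (0, 1):
                p = 1 - n
                omega = -2.0*np.imag(vx[n, p]*vy[p, n]) / (E[n] - E[p])**2
                D += vx[n, n].real * omega * fermi_prime(E[n], EF) * dk * dk
    return D / (2*np.pi)**2

print(berry_dipole(EF=-50.0))
```

Such a momentum-space evaluation corresponds to the clean-limit benchmark against which the single \(k\)-point super-cell results are compared below.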
The results under the single \(k\)-point approximation (the solid curves) in a super cell large enough, e.g., \(L\geq 60\) nm (the red solid curve), can reproduce the results obtained from the momentum integral (the blue dashed curve). With decreasing temperature, a larger super cell is required to keep the results of the two methods consistent (see Sec. SII of [58] for more details). Figure 2: (a) The nonlinear Hall response, in terms of the Berry curvature dipole \(D_{yxx}\) [Eq. (2)], as a function of the Fermi energy \(E_{F}\). The solid lines are obtained by the single \(k\)-point approximation in the super cell of different sizes \(L\). The blue dashed lines are obtained by integrating in momentum space. [(b)-(c)] The disorder-averaged Berry curvature dipole \(\langle D_{yxx}\rangle\) and corresponding fluctuation (standard error) \(\delta D_{yxx}\) as functions of \(E_{F}\) for different disorder strengths \(W\). The super-cell size is taken as \(L=60\) nm. (d) \(\log\langle D_{yxx}\rangle\) as a function of \(L\) for different \(W\) at \(E_{F}=-50\) meV. [(e)-(h)] Disorder-averaged velocity \(\langle\tilde{v}_{x}\rangle\) and Berry curvature \(\langle\tilde{\Omega}\rangle\) and corresponding fluctuations as functions of \(E_{F}\) for different \(W\) at \(L=60\) nm. [(i)-(j)] \(D_{yxx}\) and \(\delta D_{yxx}\) as functions of \(W\) for different \(L\) at \(E_{F}=-50\) meV. (k) \(L^{3}\delta D_{yxx}\) as functions of \(W\) for different \(L\) at \(E_{F}=-50\) meV. [(l)-(n)] \(L^{3}\delta D_{yxx}\) as functions of \(E_{F}\) for different \(L\) at (l) \(W=20\) meV, (m) \(W=50\) meV, and (n) \(W=100\) meV, respectively. The parameters are \(a=1\) nm, \(t=50\) meV nm, \(\upsilon=100\) meV nm, \(\alpha=100\) meV nm\({}^{2}\), \(\eta=-1\), \(m=40\) meV, and the temperature \(k_{B}T=0.12m\). Figures 2(b) and 2(c) show the disorder-averaged value of the Berry curvature dipole \(\langle D_{yxx}\rangle\) and the corresponding fluctuation \(\delta D_{yxx}\) as functions of the Fermi energy \(E_{F}\) for different strengths of disorder, respectively. The Berry curvature dipole drops immediately as the Anderson disorder is turned on. This behavior is seen more clearly in Fig. 2(i), where we plot \(\langle D_{yxx}\rangle\) as a function of disorder strength \(W\) for different super-cell sizes. Figure 2(d) shows \(\langle D_{yxx}\rangle\) as a function of the super-cell size \(L\) for different disorder strengths. When disorder comes into play, \(\langle D_{yxx}\rangle\) exhibits a nearly exponential decay with increasing super-cell size. This behavior of the nonlinear Hall conductance is reminiscent of the Anderson localization [59; 60], but the difference is that the previous Anderson localization concerns the linear longitudinal conductance, which exhibits an exponential decay with increasing system size. We further show that the drop of \(\langle D_{yxx}\rangle\) has an origin similar to the Anderson localization. According to Eq. (2), the Berry curvature dipole is determined by the electron velocity \(v\) and Berry curvature \(\Omega\) near the Fermi surface, which are shown in Figs. 
2(e)-2(h) for their disorder-averaged distributions, defined as \(\widetilde{v}_{x}\left(E_{F}\right)=\left(1/V\right)\sum_{m,p}\left|v_{mm}^{ \mathbf{x},\mathbf{0}}\right|f_{E_{m}^{\mathbf{0}}}^{\mathbf{x},\mathbf{0}}\) and \(\widetilde{\Omega}\left(E_{F}\right)=\left(1/V\right)\sum_{m,p}^{E_{\mathbf{x}}^{ \mathbf{x}}\neq E_{p}^{\mathbf{k}}}\Omega_{mp}^{xy,\mathbf{0}}f_{E_{m}^{\mathbf{0}}}^{\prime}\), as functions of the Fermi energy \(E_{F}\). Here, \(v_{mm}^{\mathbf{x},\mathbf{0}}\) is the \(x\)-direction velocity of the \(m\)-th state. With increasing disorder strength, the Berry curvature distribution is robust against weak disorder. In contrast, the velocity decreases with increasing disorder strength, indicating that the drop of the Berry curvature dipole has an origin similar to that of the Anderson localization. _Nonlinear Hall conductance fluctuation.-_ Figure 2(c) shows the fluctuations of the Berry curvature dipole \(\delta D_{yxx}\) as a function of the Fermi energy \(E_{F}\) in the presence of disorder. Surprisingly, \(\delta D_{yxx}\) increases as \(E_{F}\) moves away from the band edges (at \(E_{F}=\pm 40\) meV) to higher energies. The fluctuation is a surprise because the Berry curvature dipole reaches the maximum near the band edges and decays at higher energies, but its fluctuation shows an opposite behavior. The fluctuation \(\delta D_{yxx}\) can be even several times larger than the average value of \(\langle D_{yxx}\rangle\) when \(E_{F}=200\) meV. This phenomenon is more clear in Fig. 1(b), where the fluctuations increase with the increases of both the disorder strength \(W\) and Fermi energy \(E_{F}\). Such a fluctuation is invisible in the linear Hall conductance [see the blue dashed curves in Fig. 3]. In Figs. 1(b)-1(c), the results for \(W=5\) meV and \(W=10\) meV are obtained from a single disorder configuration with a system size of \(L=80\) nm. We attribute the fluctuation to the disorder-enhanced Berry curvature, which exists locally in the energy scale. Figure 1(c) shows the Berry curvature distributions \(\widetilde{\Omega}\left(E_{m}\right)\) of the \(m\)-th state in the super cell, where \(\widetilde{\Omega}\left(E_{m}\right)=\frac{1}{V}\sum_{p}^{E_{\mathbf{x}}^{\mathbf{k}} \neq E_{F}^{\mathbf{k}}}\Omega_{mp}^{xy,\mathbf{0}}\). In the absence of disorder for \(W=0\), the states host smoothly distributed Berry curvatures, respectively. However, even for weak disorder, the Berry curvature shows significant fluctuations and the amplitude of the fluctuation increases when \(E_{m}\) is far away from the Dirac point at \(E_{F}=0\), which results in the significant fluctuations of the nonlinear conductance as shown in Fig. 1(b). The mechanism can be illustrated in Fig. 3, where we consider a small system with \(L=10\) nm, such that the energy states of the system are well separated. In order to show the fluctuation more clearly, we also set the tilted term as \(t=0\) and thus the system has no Berry curvature dipole in the clean limit [Figs. 3(a)-3(b)]. For weak disorder with \(W=1\) meV [Figs. 3(c)-3(d)], the degenerate states exhibit an extremely small splitting. These splitted states with opposite velocities host disorder-induced Berry curvatures with opposite signs [Fig. 3(d)]. The combined effect of the states with opposite Berry curvatures yields no significant effect in the Hall conductance (the blue dashed curves). 
However, the interplay of the two states will give rise to a prominent fluctuation in the nonlinear Hall conductance (the red solid curves), which is determined by the product of velocity and Berry curvature. The phenomenon is clearer for a stronger disorder with \(W=2\) meV, which induces stronger hidden Berry curvatures [Fig. 3(f)] and more pronounced fluctuations in the nonlinear Hall effect [Fig. 3(e)]. Figure 3: (a) The Berry curvature dipole \(D_{yxx}\) (solid), Hall conductance \(\sigma_{xy}\) (dashed), and (b) Berry curvature of the \(m\)-th state \(\widetilde{\Omega}(E_{m})\) as functions of the Fermi energy \(E_{F}\). (c)-(d) and (e)-(f) are the same as (a)-(b), except that the disorder amplitude \(W=1\) meV in (c) and (d), and \(W=2\) meV in (e) and (f), respectively. The blue arrows in (d) and (f) depict the magnitudes and directions of the velocities \(v_{x}\) of the degenerate states. The parameters are the same as those in Fig. 2, except that the tilted term \(t=0\). The system size is \(L=10\) nm. The results in (c)-(f) are calculated by adopting the same single disorder configuration but with a different strength. Moreover, we note that the amplitudes of the disorder-enhanced Berry curvature, as well as the corresponding Berry curvature dipole fluctuations, are much more prominent with the increasing density of states [see Figs. 1(b) and 1(c)]. Figures 1(d) and 1(e) illustrate the experimentally measured Berry curvature dipole in two distinct systems. One is the bilayer graphene [5] and the other is the bilayer WTe\({}_{2}\)[6]. Remarkably, increasingly significant fluctuations in the Berry curvature dipole are observed in both systems when the Fermi energy moves away from the Dirac points [i.e., \(E_{F}=0\) in (d) and \(V_{\rm g}-V_{\rm NP}=0\) in (e)]. Our theory provides a potential mechanism to understand the experimental results. _Scaling of nonlinear Hall conductance fluctuation.-_ The above results show that fluctuations in the nonlinear Hall conductance (in terms of the Berry curvature dipole) are sensitive to the Fermi energy and disorder strength. These behaviors are distinct from those in the linear conductivity, which may exhibit the universal conductance fluctuation [33]. Moreover, in the Anderson localization regime (about \(W>50\) meV) we reveal that, as the system size \(L\) changes, \(L^{3}\) times the fluctuation of the nonlinear Hall conductance (i.e., \(L^{3}\delta D_{yxx}\)) remains invariant as a function of the disorder strength [Fig. 2(k)] or Fermi energy [Figs. 2(l)-2(n)]. This intriguing behavior differs significantly from the linear conductance fluctuations and suggests a unique scaling law in the nonlinear Hall regime. _Nonlinear Hall conductivity--Berry connection polarizability.-_ In a \(PT\)-symmetric metal (\(P\) for spatial inversion and \(T\) for time-reversal), the nonlinear Hall effect can also emerge as a result of the Berry connection polarizability [14; 16], which measures the distance between quantum states and thus may also deflect electronic carriers in the perpendicular direction. 
Under the single-\(k\) approximation in the super cell, the Berry connection polarizability can be found as \[\sigma_{abc}^{\rm BCP}=\frac{4\pi^{2}\Gamma}{V}\sum_{m,p}^{E_{m}^{0}\neq E_{p} ^{0}}\left(\frac{\mathcal{G}_{mp}^{bc,\mathbf{0}}v_{mm}^{a,\mathbf{0}}- \mathcal{G}_{mp}^{ac,\mathbf{0}}v_{mm}^{b,\mathbf{0}}}{E_{mp}^{0}}\right)f_{E _{m}^{0}}^{\prime}, \tag{3}\] where \(\Gamma=e^{3}/2\hbar\pi^{2}\) and \(\mathcal{G}_{mp}^{bc,\mathbf{k}}=\mathrm{Re}\,\mathcal{R}_{pm}^{b,\mathbf{k}} \mathcal{R}_{mp}^{c,\mathbf{k}}\). To have the \(PT\)-symmetry, we consider a four-band tilted Dirac model [14; 16; 61] \[H^{\prime}=tk_{x}+\left(m-\alpha k^{2}\right)\tau_{z}+\upsilon k_{x}\tau_{x}+ \upsilon k_{y}\tau_{y}\sigma_{x}+V(\mathbf{r}), \tag{4}\] which obeys \(PTH^{\prime}\left(\mathbf{k}\right)(PT)^{-1}=H^{\prime}\left(\mathbf{k}\right)\), where the \(PT\)-symmetry operator \(PT=-i\sigma_{y}K\) and \(K\) means the complex conjugate. Figures 4(a)-4(e) show the results for the Berry connection polarizability \(\sigma_{xyy}^{\rm BCP}\). In the clean limit, our numerical calculations using the super cell reproduce that using the momentum integral, verifying the validity of the single \(k\)-point approximation [Fig. 4(a)]. In the presence of disorder, the Berry connection polarizability also drops immediately when disorder is introduced [Figs. 4(b)-4(c)]. We further find that the Berry curvature dipole and Berry connection polarizability share similar behaviors, i.e., they both exhibit an exponential decay with increasing system size and significant Fermi-energy-dependent fluctuations [Figs. 4(d)-4(e)]. In Sec. V of Supplemental Material [58], we provide more numerical results for the Berry connection polarizability. _Linear Hall conductance.-_ To justify the single \(k\)-point approximation in the super cell, we calculate the linear Hall conductance by using the Kubo formula on the tilted Dirac model in Eq. (4). In the presence of disorder, the results by the single \(k\) point approximation [see Figs. 4(d)-4(g)]] agree well with those by the previous real-space methods [see Sec. VI of Supplemental Material [58] for more details], indicating that the single \(k\)-point approximation in the super cell can describe the disorder effects well and build a bridge between the momentum and real-space calculations and can facilitate further investigations on various physical quantities that were calculated only as momentum integrals. Figure 4: (a) The Berry connection polarizability \(\sigma_{xyy}^{\rm BCP}/\Gamma\) as a function of the Fermi energy \(E_{F}\). The solid lines are obtained by the single \(k\)-point approximation in the super cell of different sizes \(L\). The blue dashed lines are obtained by the integral in momentum space. [(b)-(c)] Disorder-averaged \(\langle\sigma_{xyy}^{\rm BCP}/\Gamma\rangle\) (b) and its fluctuations (c) at \(E_{F}=-50\) meV as functions of disorder strength \(W\) for different super-cell sizes. [(d)-(e)] The disorder-averaged Berry curvature dipole and corresponding fluctuation as functions of \(E_{F}\) for different \(W\). The super-cell size is taken as \(L=60\) nm. [(f)-(g)] The same as (b-c), but for the linear Hall conductance \(\sigma_{xy}\). 
This work was supported by the National Key R&D Program of China (2022YFA1403700), the Innovation Program for Quantum Science and Technology (2021ZD0302400), the National Natural Science Foundation of China (11925402), Guangdong province (2020KCXTD001 and 2016ZT06D348), and the Science, Technology and Innovation Commission of Shenzhen Municipality (ZDSYS20170303165926217, JAY20170412152620376, and KYTDPT20181011104202253). Rui Chen thanks Bo Fu for helpful discussions and acknowledges the support of the National Natural Science Foundation of China (12304195) and the Chutian Scholars Program in Hubei Province. The numerical calculations were supported by the Center for Computational Science and Engineering of SUSTech.
2309.08333
Let's Predict Who Will Move to a New Job
Any company's human resources department faces the challenge of predicting whether an applicant will search for a new job or stay with the company. In this paper, we discuss how machine learning (ML) is used to predict who will move to a new job. First, the data is pre-processed into a suitable format for ML models. To deal with categorical features, data encoding is applied and several MLAs (ML algorithms) are evaluated, including Random Forest (RF), Logistic Regression (LR), Decision Tree (DT), and eXtreme Gradient Boosting (XGBoost). To improve the performance of the ML models, the synthetic minority oversampling technique (SMOTE) is used to retrain them. Models are assessed using decision support metrics such as precision, recall, F1-Score, and accuracy.
Rania Mkhinini Gahar, Adel Hidri, Minyar Sassi Hidri
2023-09-15T11:43:09Z
http://arxiv.org/abs/2309.08333v1
# Let's Predict Who Will Move to a New Job ###### Abstract Any company's human resources department faces the challenge of predicting whether an applicant will search for a new job or stay with the company. In this paper, we discuss how machine learning (ML) is used to predict who will move to a new job. First, the data is pre-processed into a suitable format for ML models. To deal with categorical features, data encoding is applied and several MLAs (ML algorithms) are evaluated, including Random Forest (RF), Logistic Regression (LR), Decision Tree (DT), and eXtreme Gradient Boosting (XGBoost). To improve the performance of the ML models, the synthetic minority oversampling technique (SMOTE) is used to retrain them. Models are assessed using decision support metrics such as precision, recall, F1-Score, and accuracy. Machine Learning, Oversampling, Dummy Encoding, SMOTE. ## I Introduction A number of factors, including job market competition and personal preferences, lead to people changing jobs over the course of their careers. However, changing jobs is a difficult decision that may be influenced by a variety of elements, including pay, job description, and location. A successful professional career requires making smooth job changes. The objective of this work is to accurately predict whether an applicant will move to a new job or not using supervised machine learning (ML) models [6, 7, 9]. To evaluate the model, several different implementations of classification were compared to determine which model suits this type of data best. These were trained on a subset of data originating from available person profiles collected through web scraping. The different steps of our approach are as follows: * Data preprocessing (data cleaning): in this step, null values are removed from the training dataset. Input values are converted to the required data types. Categorical variables are encoded as dummy variables. * Model building: in this step, several MLAs are applied, including Random Forest (RF), Logistic Regression (LR), Decision Tree (DT), and eXtreme Gradient Boosting (XGBoost). Finally, SMOTE is used to improve the performance of the MLAs. * Model evaluation: in this step, models are assessed using decision support metrics such as precision, recall, F1-Score, and accuracy. The remainder of this paper is organized as follows. The methodology adopted is detailed in Section II. Section III evaluates the proposed predictive model. Conclusions and directions for future work are given in Section IV. ## II Methodology MLAs expect the input features to be provided as a single numeric vector. Similarly, the value to be predicted (label), especially when dealing with categorical data, must be encoded [7]. Thus, one of the objectives of data preparation is to obtain the data in the format expected by the MLAs. ### _Human Resources data_ Publicly available Human Resources (HR) data have been used in this work. The data has 10 features. Data features include the _City Development Index_, the _Gender_, the _Relevant Experience_, the _Enrolled University_, the _Education Level_, the _Major Discipline_, the _Total Years of Experience_, the _Company Size_, and the _Company Type_. The last feature is the _Target_, which indicates whether the employee seeks a new job (1) or not (0). The employees are divided into two classes. The first class contains all those employees who want to move to a new job and the second class consists of all those who did not seek a new job. 
Table I presents the number of instances used to train and test the model. \begin{table} \begin{tabular}{c c} \hline \#Non job seekers & \#Job seekers \\ \hline \hline 1511 & 280 \\ \hline \end{tabular} \end{table} TABLE II: Data description \begin{table} \begin{tabular}{c c} \hline \#Training Data (80\%) & \#Test Data (20\%) \\ \hline \hline 7164 & 1791 \\ \hline \end{tabular} \end{table} TABLE I: Training and Test data ### _Data encoding_ A categorical variable takes on values called categories, modalities, or levels that have no quantitative meaning. For example, the gender of an individual is a categorical variable with two (or more) modalities: male and female. Most categorical variables are nominal. These variables are used to categorize and label attributes. Variables contain different values, and each value represents a separate category. Many MLAs are unable to deal with categorical variables directly. It is therefore important to encode the data in an appropriate form so that these variables can be processed. To fit and evaluate a model, the categorical data must be encoded and all input and output variables converted to numeric values. The model is then able to extract the information that generates the desired result. The appropriate encoding depends on the number of possible values, and the way this transformation is carried out is very important: a poor encoding of categorical variables can harm the performance of learning algorithms, and one encoding may be better than another. For example, the RF model struggles to capture information from categorical variables with a large number of categories if they are processed with the one-hot encoding technique. This is how more specialized learning algorithms such as XGBoost came into existence. We have used different methods and tricks to manage the categorical variables present in the dataset, namely: * **One-hot encoding**: It consists of coding each categorical variable with different Boolean variables (also called dummy variables) which take the values 0 or 1, indicating whether a category is present in an observation. Consider a categorical variable \(X\) which admits \(K\) modalities \(m_{1}\), \(m_{2}\),..., \(m_{K}\). One-hot encoding consists of creating \(K\) indicator variables, i.e. a vector of size \(K\) which has 0s everywhere and a 1 at position \(i\) corresponding to modality \(m_{i}\). The categorical variable, therefore, is replaced with \(K\) numerical variables. Some algorithms, in particular some implementations of decision tree (DT) forests, fail to make the best use of the information contained in these variables when the number of modalities is too large. * **Reduction in the number of modalities**: Business knowledge can help reduce the number of modalities. Indeed, an understanding of the categories can allow them to be grouped effectively. A natural grouping is possible when the modalities are hierarchical, that is to say, it is possible to define a new category that includes other categories. Suppose a variable whose categories are the neighborhoods of a city: these categories can, for example, be grouped by district, i.e., neighborhoods in the same district share the same modality. This is a fairly common case. However, these groupings can introduce a bias into the model [2]. A second way to deal with a high number of categories is to merge the categories with low counts. Modalities that appear very infrequently in the data can be combined. 
A frequency table of the modalities is drawn up, and those whose frequency is below a certain threshold are put together in the same _other_ category, for example. Then, a one-hot encoding can be applied to the new variable. * **Impact encoding**: When the number of categories becomes very large, encoding by dummy variables can become inconvenient. An alternative to clustering or truncating categories consists in characterizing the categories by the link they maintain with the target variable \(y\): this is impact encoding [3]. This method is also known as likelihood encoding, target encoding, conditional probability encoding, or weight of evidence. For a regression problem with target variable \(y\), let \(X\) be a categorical variable with \(K\) categories \(m_{1}\), \(m_{2}\),..., \(m_{K}\). Each category \(m_{k}\) is encoded by its impact value: \[impact(m_{k})=E[y|X=m_{k}]-E[y]\] (1) Here, \(E[y|X=m_{k}]\) corresponds to the expectation of the target \(y\) given that the variable \(X\) is fixed to the modality \(m_{k}\). For a training set of size \(n\) containing samples \(\{(x_{i},y_{i})\}\) that are independent and identically distributed, the estimator of this expectation is the mean of the values of \(y_{i}\) for which the modality \(x_{i}\) is equal to \(m_{k}\): \[E[y|X=m_{k}]=\frac{1}{n_{k}}\sum_{i\in S_{k}}y_{i}\] (2) where \(S_{k}\) is the set of indices \(i\) of the observations such that \(x_{i}\) is equal to \(m_{k}\) and \(n_{k}\) is the cardinality of this set. The estimator of the expectation of \(y\) is simply its empirical mean: \[E[y]=\frac{1}{n}\sum_{i=1}^{n}y_{i}\] (3) * **Embedding methods**: This method uses deep learning techniques; it draws its inspiration from models like word2vec on textual data, which give very impressive results [1]. It involves creating a representation of each modality of a categorical variable as a numeric vector of fixed size. The use of embeddings allows, among other things, a reduction in dimensionality, since the embedding size \(e\) can be chosen to be very small compared to the number of modalities. Concretely, these embeddings are obtained by training a neural network (often a multilayer perceptron) with only the categorical variables as input. First, a one-hot encoding is applied to the variable so that it can be fed to the network. Generally, one or two hidden layers are sufficient. The first hidden layer has \(e\) neurons. The network is then trained on the same task as that initially defined. The output of the first hidden layer then constitutes the embedding vector. This vector is then concatenated with the initial data. These data are then used in the fitting of the final model. There are several variants in the literature on how to obtain these embeddings. In addition, nothing prevents using more than two hidden layers and retaining the output of the second rather than that of the first. Furthermore, the network can be trained on a task other than the initial task. 
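The following minimal sketch illustrates two of the encodings above, dummy (one-hot) encoding and the impact (target) encoding of Eq. (1). It assumes pandas; the column name and values are hypothetical stand-ins for the HR data, and it is an illustration rather than the authors' implementation.

```
# Minimal sketch (not from the paper): dummy (one-hot) and impact (target)
# encoding with pandas. The column "major_discipline" and its values are
# hypothetical stand-ins for the categorical HR features.
import pandas as pd

df = pd.DataFrame({
    "major_discipline": ["STEM", "Arts", "STEM", "Business", "Arts", "STEM"],
    "target":           [1,      0,      0,      1,          0,      1],
})

# One-hot (dummy) encoding: one 0/1 indicator column per modality.
dummies = pd.get_dummies(df["major_discipline"], prefix="major", dtype=int)

# Impact (target) encoding, Eq. (1): E[y | X = m_k] - E[y], estimated per modality.
global_mean = df["target"].mean()
impact = df.groupby("major_discipline")["target"].mean() - global_mean
df["major_impact"] = df["major_discipline"].map(impact)

print(pd.concat([df, dummies], axis=1))
```

In practice, impact encoding is usually estimated on the training split only (possibly with smoothing or cross-fitting) to avoid leaking the target into the features.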
### _Handling Imbalanced Data with SMOTE_ Imbalanced datasets hinder the predictive capability of ML models due to the bias towards the majority class [14]. The preprocessed dataset is imbalanced with a bias towards the majority class. To optimize the predictive capability of the ML models and enhance the generalizability of this study, the preprocessed (imbalanced sample) data was undersampled and oversampled to create a balanced distribution. Undersampling involves removing cases of the majority class and oversampling involves adding instances to the minority class [13]. Undersampling and oversampling have associated benefits and costs. Undersampling can increase accuracy by decreasing the complexity of the dataset [12]. Conversely, undersampling can hinder predictive performance by using a diminutive dataset compared to the full sample [14]. Oversampling enhances predictability through increased data but can result in over-fitting if observations are duplicated [18]. Class imbalance is common in machine learning: one of the classes has a much greater or smaller number of observations than the other classes, so the distribution of the data is unbalanced. Standard MLAs do not take the class distribution into account, since they increase accuracy by reducing the overall error. Fraud detection, anomaly detection, and facial recognition are common examples of this problem. The majority-class bias in standard ML techniques like DT and LR tends to exclude the minority class. Therefore, minority-class instances are often misclassified as the majority class, due to the tendency of these models to predict only the majority. We are more likely to see negligible or very low recall for the minority class if we have an unbalanced distribution of data in our dataset. There are two widely used algorithms to handle unbalanced class distributions: SMOTE (Synthetic Minority Oversampling Technique) and the NearMiss algorithm. SMOTE oversampling aims to balance the class distribution by synthetically generating new examples of the minority class through interpolation [10]. Algorithm 1 illustrates the different steps of SMOTE. The main idea of the NearMiss methods is to keep a set of points of the majority class which are close to the points of the minority class, in order to better represent the border which separates the majority class \(C0\) and the minority class \(C1\) [4]. ``` 1:Define the set of minority class samples \(A\) 2:for\(x\in A\)do 3:Find the \(k\) nearest neighbors of \(x\) using the Euclidean distance between \(x\) and all the other samples of the set \(A\). 4:endfor 5:Adjust the sampling rate \(N\) according to the unbalanced proportion. 6:for\(x\in A\)do 7: Randomly choose \(N\) examples (i.e. \(x_{1}\), \(x_{2}\),...,\(x_{N}\)) among its \(k\) nearest neighbors 8: Build the set \(A_{1}\). 9:endfor 10:for\(x_{k}\in A_{1}\)\((k=1\), \(2\), \(3\),...,\(N)\)do 11: Generate a new example: \(x^{\prime}=x+rand(0,1)*\mid x-x_{k}\mid\) where \(rand(0,1)\) represents a random number between 0 and 1. 12:endfor ``` **Algorithm 1** SMOTE Algorithm The NearMiss-1 method selects the elements of the majority class which have the smallest average distance with respect to the \(k\) closest examples of the minority class (\(k\) being a tuning parameter to be chosen by the user). The NearMiss-2 method retains the cases of the majority class which have the smallest average distance with respect to the \(k\) points farthest from the minority class. Finally, the NearMiss-3 method keeps the \(k\) examples of class \(C0\) which are the nearest neighbors of each element of class \(C1\) [4]. The basic intuition behind how the NearMiss methods work is presented in Algorithm 2. ``` 1:Calculate the distances between the instances of the majority class and the instances of the minority class. 2:Select the \(n\) instances of the majority class that have the smallest distances to those of the minority class. 3:if There are \(k\) instances in the minority class then 4: The closest method will result in \(k*n\) instances of the majority class. 
5:endif ``` **Algorithm 2** NearMiss Algorithm ### _Model Building_ Fig. 1 shows the predictive modeling process. The following MLAs are used to develop the predictive model. * LR [16]: LR is a statistical model used to study the relationships between a set of explanatory variables \(X_{i}\) and a qualitative variable \(Y\). It is a generalized linear model using a logistic function as a link function. * DT [17]: DTs are among the most widely used non-parametric supervised learning methods in classification and regression, on the one hand because of their algorithmic simplicity, and on the other hand because they are easy to interpret and their results easy to explain. Decision trees are built through an algorithmic approach and can be viewed as a tree of rules that identify ways to split a data set. The goal is to create a model that predicts the value of a target variable by learning the decision rules. * RF [15]: This classification algorithm reduces the variance of the predictions of a single decision tree, thereby improving performance. To do so, it combines many decision trees in a bagging-type approach. * XGBoost [11]: XGBoost is an optimized distributed gradient boosting method. Although gradient boosting methods are sequential algorithms, XGBoost uses multithreading to search in parallel for the best split among the features. Compared to other gradient boosting implementations, XGBoost performs well due to this use of multithreading. ## III Model Evaluation As for any ML task, we need to evaluate the performance of the MLAs used in order to decide which algorithm fits our situation best. The following metrics are used in our approach [5, 8]: * Confusion matrix. * Precision. * Recall. * Accuracy. The goal is to predict employees who want to change jobs, so we have two classes (0 for non-job seekers, and 1 for job seekers). Fig. 2 shows the confusion matrix for RF (Fig. 2(a)), LR (Fig. 2(b)), DT (Fig. 2(c)), and XGBoost (Fig. 2(d)) without data balancing (SMOTE). Fig. 3 shows the confusion matrix for LR (Fig. 3(a)) and RF (Fig. 3(b)) with data balancing (SMOTE). According to Fig. 3, SMOTE significantly improves the prediction performance. SMOTE-LR outperforms the competition on all metrics, particularly recall (56.34%) and accuracy (86.26%). Table III presents the model performance given recall, precision, F1-score, and accuracy values. According to Table III, SMOTE-LR gives the best recall and accuracy. ## IV Conclusion In this paper, we tried to predict from HR data who will move to a new job. We evaluated six ML models. The classification performance is assessed using decision-support metrics including recall, precision, F1-score, and accuracy. As a general rule, most models perform well if the proportions of the classes in a dataset are relatively similar. Since MLAs struggle to correctly identify the minority class, the slight class imbalance was managed using SMOTE. SMOTE-LR can be extremely useful for the HR department as it has the highest recall. As a future direction, we will apply a Convolutional Neural Network-based deep learning (CNN-based DL) model to predict who intends to move to a new job. Fig. 1: Predictive modelling process. Fig. 3: Confusion Matrix with data balancing. 
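For completeness, the following minimal end-to-end sketch ties together the steps described in Sections II and III (dummy encoding, SMOTE applied to the training split only, logistic regression, and the decision support metrics). It assumes scikit-learn and imbalanced-learn; the dataframe and its column names are synthetic stand-ins, not the authors' data or code.

```
# Hedged end-to-end sketch (synthetic data, not the paper's HR dataset):
# dummy encoding -> train/test split -> SMOTE on the training split -> LR -> metrics.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "relevant_experience": rng.choice(["yes", "no"], n),
    "education_level": rng.choice(["Graduate", "Masters", "PhD"], n),
    "city_development_index": rng.uniform(0.4, 0.95, n),
    "target": rng.choice([0, 1], n, p=[0.85, 0.15]),   # imbalanced classes
})

X = pd.get_dummies(df.drop(columns=["target"]), dtype=float)  # dummy-encode categoricals
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Oversample only the training split, so the test set keeps its original distribution.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

model = LogisticRegression(max_iter=1000).fit(X_res, y_res)
print(classification_report(y_test, model.predict(X_test)))   # precision, recall, F1, accuracy
```

Since SMOTE interpolates numerically, imbalanced-learn also provides SMOTENC for mixed categorical and numeric features, which may be preferable when the dummy-encoded columns should remain binary.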
\begin{table} \begin{tabular}{l c c c c} \hline \hline **MLA** & **Precision** & **Recall** & **F1-score** & **Accuracy** \\ \hline RF & 35.4\% & 50.24\% & 41.53\% & 84\% \\ LR & 40\% & 54.9\% & 46.28\% & 85.48\% \\ DT & 35.71\% & 32.05\% & 33.78\% & 78.11\% \\ XGBoost & 46.8\% & 53.9\% & 50.09\% & 85.42\% \\ SMOTE-RF & 35.71\% & 51.62\% & 42.22\% & 84.75\% \\ SMOTE-LR & 53.9\% & 56.34\% & 55.09\% & 86.26\% \\ \hline \hline \end{tabular} \end{table} TABLE III: Model performance. Fig. 2: Confusion Matrix without data balancing.
2309.17213
The local character expansion as branching rules: nilpotent cones and the case of $\mathrm{SL}(2)$
We show there exist representations of each maximal compact subgroup $K$ of the $p$-adic group $G=\mathrm{SL}(2,F)$, attached to each nilpotent coadjoint orbit, such that every irreducible representation of $G$, upon restriction to a suitable subgroup of $K$, is a sum of these five representations in the Grothendieck group. This is a representation-theoretic analogue of the analytic local character expansion due to Harish-Chandra and Howe. Moreover, we show for general connected reductive groups that the wave front set of many irreducible positive-depth representations of $G$ are completely determined by the nilpotent support of their unrefined minimal $K$-types.
Monica Nevins
2023-09-29T13:10:33Z
http://arxiv.org/abs/2309.17213v2
# The local character expansion as branching rules: ###### Abstract. We show there exist representations of each maximal compact subgroup \(K\) of the \(p\)-adic group \(G=\mathrm{SL}(2,F)\), attached to each nilpotent coadjoint orbit, such that every irreducible representation of \(G\), upon restriction to a suitable subgroup of \(K\), is a sum of these five representations in the Grothendieck group. This is a representation-theoretic analogue of the analytic local character expansion due to Harish-Chandra and Howe. Moreover, we show for general connected reductive groups that the wave front set of many irreducible positive-depth representations of \(G\) are completely determined by the _nilpotent support_ of their unrefined minimal \(K\)-types. 2010 Mathematics Subject Classification: 22E50 Supported by NSERC Discovery grant RGPIN-2020-05020 ## 1. Introduction The distribution character of an admissible representation of a \(p\)-adic group can be expressed, in a neighbourhood of the identity, as a linear combination of Fourier transforms of the finitely many nilpotent orbital integrals in the dual of the Lie algebra. This remarkable theorem, known as the Harish-Chandra-Howe local character expansion, has many variations (such as expansions on neighbourhoods of other semisimple elements, or expansions in terms of other collections of orbital integrals [13, 14, 15]) and many applications (such as determining the Gel'fand-Kirillov dimension of a representation, or relating to conjectural classifications such as the orbit method, or the local Langlands correspondence [1, 10, 11]). Though it is primarily considered in characteristic zero, it also holds when the characteristic is sufficiently large and a suitable substitute for the exponential map exists [10]. In this paper, we interpret the local character expansion as a statement in the Grothendieck group of representations of a maximal compact open subgroup, upon restriction to a subgroup of suitable depth, for the case that \(G=SL(2,F)\), where \(F\) is a local nonarchimedean field of residual characteristic at least \(3\). In particular, we construct for each nilpotent orbit \(\mathcal{O}\) of \(G\) in the dual of its Lie algebra \(\mathfrak{g}^{*}\) a (highly reducible) representation \(\tau_{x}(\mathcal{O})\) of each maximal compact open subgroup \(G_{x}\) with the following property. **Theorem 1.1**.: _Let \(\pi\) be an irreducible admissible representation of \(G=\mathrm{SL}(2,F)\) of depth \(r\geq 0\) and let \(x\) be a vertex in the building of \(G\). Then there exist integers \(c_{x,\mathcal{O}}(\pi)\) such that in the Grothendieck group of representations we have_ \[\mathrm{Res}^{G}_{G_{x,r+}}\pi=\sum_{\mathcal{O}}c_{x,\mathcal{O}}(\pi) \mathrm{Res}^{G_{x}}_{G_{x,r+}}\tau_{x}(\mathcal{O}) \tag{1.1}\] _where \(G_{x,r+}\) is the Moy-Prasad filtration subgroup of \(G_{x}\) of depth \(r+\), and the sum is over all nilpotent orbits in \(\mathfrak{g}^{*}\)._ Moreover, the coefficients corresponding to the regular nilpotent orbits in this expansion are nonnegative integers and agree with those of the Harish-Chandra-Howe local character expansion (subject to suitable normalizations). Note that while inherently expressing the same local nature of representations, our statement holds with fewer restrictions on \(F\) than does the local character expansion, because it does not depend on the existence of a \(G\)-equivariant map, such as the exponential or a Cayley transform, from the Lie algebra to the group. 
If \(G\) is an inner form of \(\operatorname{GL}_{n}(F)\), then an explicit decomposition of the form (1.1) has been proven by Henniart and Vigneras in [14]; moreover, their local expansion holds for representations of \(G\) over any field \(R\) of characteristic not \(p\). They obtain the representations we denote here by \(\operatorname{Res}^{G_{x}}_{G_{x,r+}}\tau_{x}(\mathcal{O})\) as \(\operatorname{Res}^{G}_{G_{x,r+}}\operatorname{Ind}^{G}_{P}\mathbf{1}\), for a suitable parabolic subgroup attached to \(\mathcal{O}\), vastly generalizing a result of Howe [13]. Though such an elegant description is not directly available here (see also Remark 8.2), our work does answer [14, Questions 1.1, 1.2] for complex representations of \(\operatorname{SL}(2,F)\). Now suppose \(G\) is a general connected reductive group. In Section 3, we develop some theory towards establishing the direct relationship from the LCE to a decomposition like (1.1), as follows. The set of maximal orbits appearing in the local character expansion for an admissible representation \(\pi\) is denoted \(\mathcal{WF}(\pi)\); the closure of the union of these orbits is the wave front set of \(\pi\). For depth-zero representations \(\pi\), Barbasch and Moy [1] proved that \(\mathcal{WF}(\pi)\) is determined by the depth-zero components of the restriction of \(\pi\) to various maximal compact subgroups, through the theory of Gel'fand-Graev representations. For a positive-depth representation with minimal \(K\)-type \(\Gamma\) (in the sense of Moy and Prasad [15]), we should instead infer \(\mathcal{WF}(\pi)\) from the _nilpotent support_\(\operatorname{Nil}(\Gamma)\) (Definition 3.2) of \(\Gamma\). This definition, of independent interest, depends strongly on the classification of nilpotent orbits using Bruhat-Tits theory [1, 2]. In fact, in Proposition 3.4 we show that the algebraic notion of nilpotent support can be characterized as the set of nonzero nilpotent orbits appearing in the asymptotic cone on \(\Gamma\), as defined in [1]. In Theorem 3.5 (proof due to Fiona Murnaghan), we prove that \(\mathcal{WF}(\pi)\) is the set of maximal orbits of \(\operatorname{Nil}(\Gamma)\) whenever the \(\Gamma\)-asymptotic expansion [10] reduces to a single term. This last result is similar to recent work of Ciubotaru and Okada, who show that the depth-\(r\) components of the restriction to certain compact open subgroups determine the wave front set of \(\pi\)[2]. The idea of the nilpotent support is also central to [2], where they develop it using, among other things, the geometry of the associated finite reductive group. Now again suppose that \(G=\operatorname{SL}(2,F)\). Our result gives a second characterization of \(\mathcal{WF}(\pi)\): it can be entirely determined from the _non-typical_ representations occurring in the restriction of \(\pi\) to a maximal compact open subgroup, for \(\pi\) of any depth. That is, the asymptotic decomposition of \(\operatorname{Res}_{G_{x}}\pi\) unfolds exactly as the representations \(\tau_{x}(\mathcal{O})\) for \(\mathcal{O}\in\mathcal{WF}(\pi)\). For the case of a positive-depth representation \(\pi\), our main theorem is stated in Theorem 6.5, with the explicit values of the constant coefficient given in Proposition 6.7. To prove the theorem, we first show that the restriction of \(\pi\) to a maximal compact subgroup can be expressed entirely in terms of twists of the pair \((\Gamma,\chi)\) used in the construction of \(\pi\) (Theorem 6.2), using results from [20, 21]. 
Here, \(\chi\) is a character of a torus \(T=\operatorname{Cent}_{G}(\Gamma)\) that is realized by \(\Gamma\in\mathfrak{g}^{*}\), and the realization of the irreducible components of the restriction is framed in terms of a generalization (Proposition 5.4) of a construction due to Shalika in his thesis. From this characterization, and a key technical result (Lemma 5.5), it follows that the expansion (1.1) exists and has leading terms corresponding to the nilpotent support of \(\Gamma\). Since \(\Gamma\) represents a minimal \(K\)-type of \(\pi\) in the sense of Moy and Prasad [10], we independently recover from Theorem 3.5 that the maximal orbits in \(\operatorname{Nil}(\Gamma)\) coincide with \(\mathcal{WF}(\pi)\). For representations of depth zero, the principal technical difficulties lie in matching the depth-zero components with nilpotent orbits, particularly in the case of the twelve "exceptional" representations: the reducible principal series, the principal series composed of the trivial and the Steinberg representation, and the four special supercuspidal representations. Once these are addressed, Theorem 7.5 follows by carefully extracting the necessary branching rules from [14, 15]. Again, the orbits in \(\mathcal{WF}(\pi)\) are obtained both from the depth-zero components (via [1]) and from the asymptotic development of the branching rules. At two crucial junctures we use information that is currently only known for \(G=\operatorname{SL}(2,F)\) and a handful of other small rank groups: one is the explicit calculation of the asymptotic cone on any semisimple element of \(\mathfrak{g}^{*}\) (Section 4); the other is the full knowledge of the representation theory of the maximal compact subgroups of \(G\) (Section 5). While the former seems a tractable and interesting question in general, the latter is quite daunting: it is not expected that we will achieve a classification of the representations of maximal compact open subgroups of \(p\)-adic reductive groups. Note that a full classification is not necessary to prove the theorem: what is needed is a construction of an appropriate representation of \(G_{x}\) attached to each nilpotent orbit, and we explore how this might be done in Section 5.2. There are many interesting applications and open directions left to pursue. Evidently the overarching goal is to establish a result like (1.1) for a large class of groups, using the tools presented here, or those developed in [10]. To extend the work here, it may be fruitful to build representations of the groups \(G_{x,0+}\) directly, rather than to construct representations of \(G_{x,0}\); this has the advantage of avoiding the difficulties inherent at depth zero. It may also allow for a more uniform treatment of all points \(x\) of the building; in this paper, we consider only vertices, and the union of all \(G_{x,r+}\) as \(x\) runs over vertices is not equal to \(G_{r+}\) in general. In another direction: the \(\Gamma\)-asymptotic expansions of [12, 13] describe the character of a positive-depth representation in a larger neighbourhood than does the local character expansion, by incorporating a minimal \(K\)-type \(\Gamma\). Then Theorem 6.2 can be interpreted as analogously formulating these expansions in terms of branching rules. It would be interesting to explore this idea further. The paper is organized as follows. We set our notation in Section 2 and then present some background on the local character expansion that provide the motivation and context for our results. 
In Section 3 we consider a general connected reductive group \(G\). We define the nilpotent support of an element \(\Gamma\) of \(\mathfrak{g}^{*}\), show that it determines the asymptotic cone of \(\Gamma\), and relate this to the wave front set via the theory of \(\Gamma\)-asymptotic expansions. We then specialize to \(G=\operatorname{SL}(2,F)\). In Section 4 we characterize the nilpotent cones \(\operatorname{Nil}(\Gamma)\) in many ways (Proposition 4.1) and compute them explicitly. In Section 5 we recall the construction of certain irreducible representations of \(\operatorname{SL}(2,\mathcal{R})\) by Shalika in his 1966 thesis [20], and then rephrase it using Bruhat-Tits theory and derive some consequences. This allows us to define, for each vertex \(x\in\mathcal{B}(G)\), each nilpotent orbit \(\mathcal{O}\subset\mathfrak{g}^{*}\), and each central character \(\zeta\), a representation \(\tau_{x}(\mathcal{O},\zeta)\) of \(G_{x}\). We prove our main theorems for representations of positive depth in Section 6 and for representations of depth zero in Section 7. We conclude with two brief applications of Theorem 1.1 in Section 8: an explicit formula for the functions \(\widehat{\mu_{\mathcal{O}}}\) in terms of the trace character of the representation \(\tau_{x}(\mathcal{O})\) of the compact group \(G_{x}\); and an explicit polynomial expression for \(\dim(\pi^{G_{x,2n}})\) (in the spirit of [14]) whose existence is predicted by the local character expansion.

### Acknowledgements

This work was instigated by a question posed to the author by David Vogan and has benefitted enormously from many conversations with him in the online research community on Representation Theory and Noncommutative Geometry sponsored by the American Institute of Mathematics. The approach to \(\operatorname{Nil}(\Gamma)\) given here was significantly refined through conversations with Fiona Murnaghan and Loren Spice. This work progressed over a period of visits to many colleagues, and benefitted from their comments and interest: Vincent Sécherre, Laboratoire de Mathématiques de Versailles, Université Paris-Saclay; Anne-Marie Aubert, Institut de Mathématiques de Jussieu-Paris Rive Gauche, Université de Paris/Sorbonne Université; and Jessica Fintzen, Universität Bonn. It is a true pleasure to thank all of these generous people.

## 2. Notation and background

Let \(F\) be a local nonarchimedean field of residual characteristic \(p\neq 2\), with integer ring \(\mathcal{R}\), maximal ideal \(\mathcal{P}\) and residue field \(\mathfrak{f}\) of cardinality \(q\). We impose additional hypotheses on \(p\) in Section 2.2, below. Fix once and for all an additive character \(\psi\) of \(F\) that is trivial on \(\mathcal{P}\) and nontrivial on \(\mathcal{R}\). Fix a uniformizer \(\varpi\) and normalize the valuation on \(F\) (and any extension thereof) by \(\operatorname{val}(\varpi)=1\). We write \(\operatorname{val}(0):=\infty\).

Let \(\mathbf{G}\) denote a connected reductive algebraic group defined over \(F\) whose group of \(F\)-rational points is denoted \(G\); we use \(\mathfrak{g}=\operatorname{Lie}(\mathbf{G})(F)\) to denote its Lie algebra over \(F\). We simplify notation by referring to tori, Borel subgroups and parabolic subgroups of \(G\) when we mean the \(F\)-points of such algebraic \(F\)-subgroups of \(\mathbf{G}\), and denote them in roman font. Let \(G^{\operatorname{reg}}\), respectively \(\mathfrak{g}^{\operatorname{reg}}\), denote the set of regular elements of \(G\), respectively \(\mathfrak{g}\). 
The group \(G\) acts on \(\mathfrak{g}\) via the adjoint action \(Ad\) and on its dual \(\mathfrak{g}^{*}\) via the coadjoint action \(Ad^{*}\); we abbreviate these by both \(g\cdot X\) or \({}^{g}X\) for \(g\in G\) and \(X\) in \(\mathfrak{g}\) or \(\mathfrak{g}^{*}\). Similarly, if \(H\) is a subgroup of \(G\) we write \({}^{g}H\) for the group \(gHg^{-1}\). An element \(X\in\mathfrak{g}^{*}\) or \(\mathfrak{g}\) is called _semisimple_ (or _almost stable_) if its \(G\)-orbit is closed. We define \(X\in\mathfrak{g}^{*}\) or \(\mathfrak{g}\) to be _nilpotent_ if there exists an \(F\)-rational one-parameter subgroup \(\lambda\in X_{*}(\mathbf{G})\) such that \(\lim_{t\to 0}{}^{\lambda(t)}X=0\). By [1, SS2.5], this is equivalent to a more usual definition that the closure of the coadjoint orbit in the rational topology contains \(0\). We say the one-parameter subgroup \(\lambda\) is _adapted_ to \(X\)[1, Definition 4.5.6], if \({}^{\lambda(t)}X=t^{2}X\). We write \(\mathcal{N}^{*}\) for the set of nilpotent elements of \(\mathfrak{g}^{*}\) and \(\mathscr{O}(0)\) for the (finite) set of \(G\)-orbits in \(\mathcal{N}^{*}\). We sometimes specify a group of matrices merely by the sets in which its entries lie; in this case, that the resulting subgroup is the intersection of this set with \(G\) is understood. We write \(\lceil t\rceil=\min\{n\in\mathbb{Z}\mid n\geq t\}\) and \(\lfloor t\rfloor=\max\{n\in\mathbb{Z}\mid n\leq t\}\). Write \(\operatorname{Cent}_{G}(S)\) for the centralizer in \(G\) of the element or set \(S\). We may write \([\sigma]\) for the trace character of a representation \(\sigma\) of a finite or compact group. The trivial representation is denoted \(\mathbf{1}\), and the characteristic function of a subset \(S\) is denoted \(\mathbf{1}_{S}\). ### The Bruhat-Tits building and Moy-Prasad filtration subgroups Let \(\mathcal{B}(G)=\mathcal{B}(\mathbf{G},F)\) denote the (enlarged) Bruhat-Tits building of \(G\); then to each \(x\in\mathcal{B}(G)\) we associate its stabilizer \(G_{x}\), which is a compact subgroup of \(G\) containing the parahoric subgroup \(G_{x,0}\). These admit a Moy-Prasad filtration by normal subgroups \(G_{x,r}\) with \(r\in\mathbb{R}_{\geq 0}\) defined relative to the valuation on \(F\). We briefly recap the definition; for a careful and detailed summary, see for example [11, SS2]. To define \(G_{x,r}\), choose an apartment \(\mathcal{A}\subset\mathcal{B}(G)\) containing \(x\); this is the affine space over \(X_{*}(T)\otimes_{\mathbb{Z}}\mathbb{R}\) for some maximal split torus \(T\) of \(G\) and we write \(\mathcal{A}=\mathcal{A}(G,T)\). Let \(\Phi=\Phi(G,T)\) denote the corresponding root system and \(\Psi\) the set of affine roots, viewed as functions on \(\mathcal{A}\). For each root \(\alpha\in\Phi\), let \(U_{\alpha}\) denote the corresponding root subgroup. The affine roots \(\psi\) with gradient \(\alpha\) define a filtration of \(U_{\alpha}\) by compact open subgroups \(U_{\psi}\). Let \(C=\mathrm{Cent}_{G}(T)\); it contains a parahoric subgroup \(C_{0}\), and a filtration by compact open normal subgroups \(C_{r}\), \(r>0\), that is independent of the point \(x\in\mathcal{A}\). Then for any \(r\geq 0\) we define compact open subgroups \[G_{x,r}=\langle C_{r},U_{\psi}\mid\psi\in\Psi,\psi(x)\geq r\rangle;\] if \(r=0\) this is the parahoric subgroup and for \(r>0\) it is a Moy-Prasad filtration subgroup of \(G_{x,0}\). It is independent of the choice of apartment containing \(x\). 
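For orientation, we record what this definition yields in the basic case used later in the paper; this is a routine computation, included only for the reader's convenience. For \(G=\operatorname{SL}(2,F)\), with \(x_{0}\) the vertex of the apartment of the diagonal split torus whose stabilizer is \(\operatorname{SL}(2,\mathcal{R})\), the affine roots take integer values at \(x_{0}\), and one finds

\[G_{x_{0},0}=\operatorname{SL}(2,\mathcal{R})\quad\text{and}\quad G_{x_{0},r}=\{g\in\operatorname{SL}(2,\mathcal{R})\mid g\equiv I\pmod{\mathcal{P}^{\lceil r\rceil}}\}\ \text{ for }r>0,\]

so the filtration at this vertex jumps only at integers. The corresponding explicit description of the Lie algebra filtration at \(x_{0}\) (traceless matrices with entries in \(\mathcal{P}^{\lceil r\rceil}\)) is the one invoked in the proof of Lemma 4.2 below.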
The Moy-Prasad filtration is \(G\)-equivariant; for example \({}^{g}G_{x,r}=G_{gx,r}\) for all \(x\in\mathcal{B}(G)\) and \(r\geq 0\). Similarly, the Lie algebra \(\mathfrak{g}\) admits a filtration \(\mathfrak{g}_{x,r}\) by \(\mathcal{R}\)-modules indexed by \(r\in\mathbb{R}\), as follows. Let \(\mathfrak{t}\) denote the Lie algebra of \(T\), \(\mathfrak{c}\) its centralizer in \(\mathfrak{g}\) and for each \(\alpha\in\Phi\), let \(\mathfrak{g}_{\alpha}\) denote the corresponding root subspace. These subspaces admit filtrations by \(\mathcal{R}\)-submodules \(\mathfrak{c}_{r}\) with \(r\in\mathbb{R}\) and \(\mathfrak{g}_{\psi}\) for \(\psi\in\Psi\), respectively, such that \[\mathfrak{g}_{x,r}=\mathfrak{c}_{r}\oplus\bigoplus_{\alpha}\mathfrak{g}_{ \alpha,x,r}, \tag{2.1}\] where \(\mathfrak{g}_{\alpha,x,r}\) is the union of the \(\mathcal{R}\)-submodules \(\mathfrak{g}_{\psi}\) such that \(\psi\in\Psi\), the gradient of \(\psi\) is \(\alpha\) and \(\psi(x)\geq r\). We write \[G_{x,r+}=\bigcup_{s>r}G_{x,s},\quad\text{and}\quad\mathfrak{g}_{x,r+}= \bigcup_{s>r}\mathfrak{g}_{x,s}.\] By [1, SS1.6], there exists a mock exponential map \(e=e_{x}\colon\mathfrak{g}_{x,0+}\to G_{x,0+}\) that induces the Moy-Prasad isomorphism \(\mathfrak{g}_{x,r}/\mathfrak{g}_{x,2r}\cong G_{x,r}/G_{x,2r}\) for any \(r>0\) (among other desirable properties). Writing \(\langle X,Y\rangle\) for the natural pairing of \(X\in\mathfrak{g}^{*}\) with \(Y\in\mathfrak{g}\), the Moy-Prasad filtration on the dual of the Lie algebra is defined by \(\mathfrak{g}_{x,r}^{*}=\{X\in\mathfrak{g}^{*}\mid\forall Y\in\mathfrak{g}_{x, (-r)+},\langle X,Y\rangle\in\mathcal{P}\}\). We again define \(\mathfrak{g}_{x,r+}^{*}=\cup_{s>r}\mathfrak{g}_{x,s}^{*}\). Finally, for any \(r\geq 0\) we define \(G\)-stable subsets \[G_{r}=\bigcup_{x\in\mathcal{B}(G)}G_{x,r}\quad\text{and}\quad G_{r+}=\bigcup _{x\in\mathcal{B}(G)}G_{x,r+}.\] For any real number \(r\) we do the same to define \(\mathfrak{g}_{r}\) and \(\mathfrak{g}_{r+}\). If \((\pi,V)\) is an irreducible admissible representation of \(G\), then its depth is defined as the least real number \(r\geq 0\) such that there exists \(x\in\mathcal{B}(G)\) for which \(V^{G_{x,r+}}\neq\{0\}\). We define the depth of a smooth irreducible representation \(\rho\) of \(G_{x}\), for fixed \(x\), in the same way; this is equivalent to the least \(r\geq 0\) for which \(\rho\) factors through \(G_{x}/G_{x,r+}\). ### Restrictions on \(p\) We impose the restriction that \(G\) splits over a tamely ramified extension of \(F\) and that \(p\) does not divide the order of the absolute Weyl group of \(\mathbf{G}\). One of the main results of [11] is that this is sufficient to ensure that all irreducible admissible representations are tame. Combining [11, Lemma 2.2, Table 1] and [1, SS1], one sees that the hypotheses of [1, Hypothesis 2.1.1] or [1, Prop 4.1] hold, so that there is a non-degenerate \(G\)-invariant bilinear form on \(\mathfrak{g}\) under which \(\mathfrak{g}_{x,r}^{*}\) and \(\mathfrak{g}_{x,r}\) are identified for all \(x\) and \(r\). For \(\mathbf{G}=\mathrm{SL}(2)\) and \(p\neq 2\) we may take the trace form, and define for each \(\dot{X}\in\mathfrak{g}\) the element \(X\in\mathfrak{g}^{*}\) by \(\langle X,\cdot\rangle=\mathrm{tr}(\dot{X}\cdot)\). We also impose the hypotheses of [1, SS4] to obtain the classification of nilpotent orbits; this requires the use of \(\mathfrak{sl}_{2}(F)\) triples over the residue field as well as some properties of a mock exponential map. 
By recent work of Stewart and Thomas [16], the former condition is satisfied for \(p>h\), where \(h\) is the Coxeter number of \(G\). To satisfy all hypotheses for \(G=\mathrm{SL}(2,F)\), it suffices to take \(p\geq 3\).

In contrast, to state the local character expansion, which relates a function on the group to one on the Lie algebra, one needs a \(G\)-equivariant map \(\mathfrak{g}_{0+}\to G_{0+}\) satisfying [1, Hypothesis 3.2.1]. Such a map, which we'll simply denote \(\exp\), can exist in large positive characteristic (see, for example, the discussion in [1, SS2]); in characteristic zero, [1, Lemma B.0.3] gives an effective lower bound on \(p\). For \(G=\mathrm{SL}(2,F)\), this entails in characteristic zero that \(p>e+1\) where \(e\) is the ramification index of \(F\) over \(\mathbb{Q}_{p}\), for example.

### The local character expansion

As detailed in the expanded notes [1], Harish-Chandra proved in the 1970s that the distribution character of an irreducible admissible representation \(\pi\) of \(G\), which is given on \(f\in C_{c}^{\infty}(G)\) by

\[\Theta_{\pi}(f)=\mathrm{tr}\int f(g)\pi(g)\,dg,\]

is well-defined and representable by a function, which we also denote \(\Theta_{\pi}\), that is locally integrable on \(G\) and locally constant on the set \(G^{\mathrm{reg}}\) of regular semisimple elements of \(G\) (see [1, SS13] and the discussion therein). Similarly, to each coadjoint orbit \(\mathcal{O}\subset\mathfrak{g}^{*}\) we associate its orbital integral, given on \(f\in C_{c}^{\infty}(\mathfrak{g}^{*})\) by

\[\mu_{\mathcal{O}}(f)=\int_{\mathcal{O}}f(X)\,d\mu_{\mathcal{O}}(X) \tag{2.2}\]

where \(d\mu_{\mathcal{O}}\) is a Radon measure [10]. Relative to \(\psi\), the fixed additive character of \(F\), the Fourier transform of \(f\in C_{c}^{\infty}(\mathfrak{g})\) is a function \(\hat{f}\in C_{c}^{\infty}(\mathfrak{g}^{*})\). The Fourier transform of the orbital integral \(\mu_{\mathcal{O}}\) is the distribution given on \(f\in C_{c}^{\infty}(\mathfrak{g})\) by \(\widehat{\mu_{\mathcal{O}}}(f)=\mu_{\mathcal{O}}(\hat{f})\). Then \(\widehat{\mu_{\mathcal{O}}}\) is representable by a locally integrable function on \(\mathfrak{g}\) that is locally constant on \(\mathfrak{g}^{\mathrm{reg}}\) [1, Theorem 4.4]. We set \(\mathfrak{g}_{r+}^{\mathrm{reg}}:=\cup_{x\in\mathcal{B}(G)}\mathfrak{g}_{x,r+}\cap\mathfrak{g}^{\mathrm{reg}}\).

The local character expansion expresses that these finitely many functions \(\widehat{\mu_{\mathcal{O}}}\), for \(\mathcal{O}\in\mathscr{O}(0)\), form a basis, in a neighbourhood of \(0\), for the space of locally integrable \(G\)-invariant functions that are locally constant on \(\mathfrak{g}^{\mathrm{reg}}\). Such an expansion was first proven for \(G=\mathrm{GL}(n,F)\) in characteristic \(0\) by Roger Howe [11] and then in the generality of connected reductive groups in characteristic zero by Harish-Chandra [1]. Cluckers, Gordon and Halupczok proved its validity in large positive characteristic in [1]. Adler and Korman proved an analogous result for expansions centered at other semisimple elements in [1]. The precise domain on which the local character expansion holds was conjectured by Hales, Moy and Prasad [12] and proven in [13] for a large class of groups and by [1] in the following generality. 
**Theorem 2.1** (The Local Character Expansion).: _If \(\pi\) is an irreducible admissible representation of \(G\) of depth \(r\), then there exist unique \(c_{\mathcal{O}}(\pi)\in\mathbb{C}\) such that for all \(X\in\mathfrak{g}_{r+}^{\mathrm{reg}}\) we have_

\[\Theta_{\pi}(\exp(X))=\sum_{\mathcal{O}\in\mathscr{O}(0)}c_{\mathcal{O}}(\pi)\widehat{\mu_{\mathcal{O}}}(X). \tag{2.3}\]

We denote by \(\mathcal{WF}(\pi)\) the set of _maximal_ nilpotent orbits \(\mathcal{O}\) such that \(c_{\mathcal{O}}(\pi)\neq 0\), where the ordering is taken in the local topology; this is the set denoted \(\mathrm{WF}^{\mathrm{rat}}(\pi)\) in [14]. In [15], Heifetz defined and developed the analytic notion of the _wave front set_ of a representation of a \(p\)-adic group, in analogy with the work of Howe [13] in the real case. In [12], Przebinda proved that the wave front set coincides with the support of the right side of (2.3), which is the closure of the union of these orbits. Recent work of Tsai [14, 14, 15] has shown that the orbits of \(\mathcal{WF}(\pi)\) may fail to be stably conjugate.

Finally, note that for \(G=\mathrm{GL}(n,F)\), Howe proved that for each \(\mathcal{O}\in\mathscr{O}(0)\), there is a corresponding parabolic subgroup \(P\) such that in a neighbourhood \(O\) of \(0\in\mathfrak{g}\) we have

\[\widehat{\mu_{\mathcal{O}}}|_{O}=\Theta_{\pi}\circ\exp|_{O}\]

where \(\pi=\mathrm{Ind}_{P}^{G}\mathbf{1}\) [13, Lemma 5]. This has recently been generalized to all inner forms of \(G\), and representations over fields of characteristic not equal to \(p\), in [12, Theorem 1.3]. In the same vein, for \(\mathrm{SL}(2,F)\), the functions \(\widehat{\mu_{\mathcal{O}}}\) are almost equal to the characters of special unipotent representations (see (8.1)). We cannot expect such equalities in general as, for example, for classical groups nonspecial orbits cannot occur in \(\mathcal{WF}(\pi)\) for any \(\pi\) [16, Thm 1.4]. The main goal in this paper is to propose an example of a weaker form of the Howe-Henniart-Vigneras theorem, based on representations of a maximal compact open subgroup, that one may hope can hold true in general.

## 3. Nilpotent orbits and nilpotent support

In this section, \(G\) is an arbitrary connected reductive group, subject to the hypothesis on \(p\) of Section 2.2. We define the (local) nilpotent support of an element of \(\mathfrak{g}^{*}\), and relate this both to the asymptotic cone and to the wave front set of a representation of positive depth.

### Degenerate cosets and nilpotent orbits

In [1, SS3], Adler and DeBacker generalize ideas of Moy and Prasad to establish, for connected reductive groups, that for all \(r\in\mathbb{R}\)

\[\mathfrak{g}_{r}^{*}=\bigcap_{x\in\mathcal{B}(G)}(\mathfrak{g}_{x,r}^{*}+\mathcal{N}^{*}),\]

where \(\mathfrak{g}_{r}^{*}:=\bigcup_{x\in\mathcal{B}(G)}\mathfrak{g}_{x,r}^{*}\). They further show that

\[\mathcal{N}^{*}=\bigcap_{r\in\mathbb{R}}\mathfrak{g}_{r}^{*}.\]

Given \(x\in\mathcal{B}(G)\) and \(X\in\mathfrak{g}^{*}\setminus\{0\}\), the _depth of \(X\) at \(x\)_ is the unique value \(t=d_{x}(X)\) such that \(X\in\mathfrak{g}_{x,t}^{*}\setminus\mathfrak{g}_{x,t+}^{*}\). When \(X\) is not nilpotent, they prove that the _depth of \(X\)_, given by

\[d(X)=\max\{d_{x}(X)\mid x\in\mathcal{B}(G)\}=\max\{r\mid X\in\mathfrak{g}_{r}^{*}\}\]

is well-defined and rational. For \(X\) nilpotent, we set \(d(X)=\infty\). Depth is \(G\)-invariant. 
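To illustrate these notions in the setting to which we will specialize (where, as in Section 4, \(\mathfrak{g}=\mathfrak{sl}_{2}(F)\) is identified with \(\mathfrak{g}^{*}\) via the trace form), consider the split element \(\Gamma=\operatorname{diag}(a,-a)\) with \(a\in F^{\times}\). At every point \(x\) of the apartment of the diagonal torus the torus part of the filtration is independent of \(x\), so

\[d_{x}(\Gamma)=\operatorname{val}(a)\ \text{ for all such }x,\qquad\text{and in fact}\qquad d(\Gamma)=\operatorname{val}(a),\]

the maximum being attained on this apartment (compare the discussion of good elements below). By contrast, a nonzero nilpotent element lies in \(\mathfrak{g}_{r}^{*}\) for every \(r\) by the second displayed equality above, consistent with the convention \(d(X)=\infty\).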
For semisimple \(\Gamma\in\mathfrak{g}^{*}\), let \(T\subset\operatorname{Cent}_{G}(\Gamma)\) be a maximal torus with associated absolute root system \(\Phi(G,T)\). Then \(\Gamma\) is called _good_ if for all \(\alpha\in\Phi(G,T)\), we have \(\operatorname{val}(\Gamma(d\alpha^{\vee}(1)))\in\{d(\Gamma),\infty\}\). By [13, Thm 2.3.1], if \(\Gamma\) is good then the set of points \(x\in\mathcal{B}(G)\) at which \(d_{x}(\Gamma)\) attains its maximum value \(d(\Gamma)\) is exactly \(\mathcal{B}(\operatorname{Cent}_{G}(\Gamma))\subset\mathcal{B}(G)\). For any \(\Gamma\in\mathfrak{g}^{*}\) set \(d=d_{x}(\Gamma)\). The coset \(\Gamma+\mathfrak{g}^{*}_{x,d+}\) is called _degenerate_ if it contains a nilpotent element \(X\in\mathcal{N}^{*}\). From the relations above it follows that this happens if and only if \(d<d(\Gamma)\). In [10, SS5], DeBacker proves that the set of nilpotent \(G\)-orbits meeting a degenerate coset \(\Gamma+\mathfrak{g}^{*}_{x,d+}\) has a unique minimal element with respect to the (rational) closure relation on orbits, which we'll denote \(\mathcal{O}(\Gamma,x)\). This generalizes a result of Barbasch and Moy [1, Prop 3.1.6] for \(d=0\), which was integral to their determination of the wave front set of a depth zero representation. To classify nilpotent orbits in this way, DeBacker proceeds as follows. Identify \(\mathfrak{g}\) and \(\mathfrak{g}^{*}\). Given a nilpotent element \(X\in\mathfrak{g}\), complete \(X\) to an \(\mathfrak{sl}_{2}(F)\) triple \((X,H,Y)\). Choose \(r\in\mathbb{R}\) and create the building set \[\mathcal{B}_{r}(X,H,Y)=\{x\in\mathcal{B}(G)\mid X\in\mathfrak{g}_{x,r},\ H\in \mathfrak{g}_{x,0},\ Y\in\mathfrak{g}_{x,-r}\};\] he proves this set is a nonempty, closed, convex subset of \(\mathcal{B}(G)\) with the property that for all \(x\in\mathcal{B}_{r}(X,H,Y)\) we have \(\mathcal{O}(X,x)=G\cdot X\). **Remark 3.1**.: Note that for each \(g\in G\), we have \(\mathcal{B}_{r}({}^{g}X,{}^{g}H,{}^{g}Y)={}^{g}\mathcal{B}_{r}(X,H,Y)\), and for fixed \(X\) the union of these need not cover \(\mathcal{B}(G)\). Moreover, if \(\mu\) is a one-parameter subgroup adapted to this triple, then by [10, Remark 5.1.5], \(\mathcal{B}_{r}(X,H,Y)=\mathcal{B}_{0}(X,H,Y)+\frac{r}{2}\mu\), where this sum is taken in any apartment in \(\mathcal{B}(C_{G}(\mu))\). It follows that (if the rank of \(G\) is greater than \(1\)) there exist orbits \(\mathcal{O}\) (such as ones for which \(\mathcal{B}_{r}(X,H,Y)\) is a point) for which there exist \(y\in\mathcal{B}(G)\) such that \(\mathcal{O}\neq\mathcal{O}(X,y)\) for any \(X\in\mathcal{O}\). For example, in \(\operatorname{Sp}(4,F)\), the principal nilpotent orbits are only obtained along certain lines emanating from vertices. ### Nilpotent support and nilpotent cones We now explore different ways to understand the asymptotic nilpotent support of a general element \(\Gamma\in\mathfrak{g}^{*}\) and show their equivalence. **Definition 3.2**.: Let \(\Gamma\in\mathfrak{g}^{*}\). If \(x\in\mathcal{B}(G)\), then the _local nilpotent support at \(x\)_ of \(\Gamma\) is \[\operatorname{Nil}_{x}(\Gamma)=\{\mathcal{O}({}^{g}\Gamma,x)\mid g\in G,d_{x }(g\cdot\Gamma)<d(\Gamma)\},\] which is the set of nilpotent orbits defined by degenerate cosets at \(x\) of elements of the \(G\)-orbit of \(\Gamma\). 
On the other hand, the _nilpotent support_ of \(\Gamma\) is \[\operatorname{Nil}(\Gamma)=\{\mathcal{O}(\Gamma,x)\mid x\in\mathcal{B}(G),d_ {x}(\Gamma)<d(\Gamma)\},\] the set of nilpotent orbits corresponding to any (nontrivial) degenerate coset of \(\Gamma\). Note that if \(\Gamma\) is nilpotent, then \(\operatorname{Nil}(\Gamma)=G\cdot\Gamma\). More generally, for any \(g\in G\), we have \(d_{x}(\Gamma)=d_{gx}({}^{g}\Gamma)\) and \({}^{g}(\Gamma+\mathfrak{g}^{*}_{x,d+})={}^{g}\Gamma+\mathfrak{g}^{*}_{gx,d+}\). Thus \(\mathcal{O}(\Gamma,x)=\mathcal{O}({}^{g}\Gamma,gx)\) and \[\operatorname{Nil}(\Gamma)=\bigcup_{x\in\mathcal{B}(G)}\operatorname{Nil}_{x} (\Gamma),\] that is, the nilpotent support is the union of the local nilpotent supports, and \(\operatorname{Nil}(\Gamma)\) is an invariant of the \(G\)-orbit of \(\Gamma\). One may alternately restrict this union to one over the points in a fundamental domain for the action of \(G\) on \(\mathcal{B}(G)\). By Remark 3.1, when the rank of \(G\) is greater than \(1\), not all nilpotent orbits will occur as some \(\mathcal{O}(\Gamma,x)\) for a given point \(x\in\mathcal{B}(G)\), so \(\operatorname{Nil}_{x}(\Gamma)\neq\operatorname{Nil}(\Gamma)\) in general. Even when these sets are equal, as for \(\operatorname{SL}(2,F)\) (see Proposition 4.1), they are interesting subsets of the nilpotent cone (see Lemma 4.2). On the other hand, the asymptotic cone on an element \(\Gamma\) is defined in [1, Def 3.9] analytically as follows. **Definition 3.3**.: Let \(\Gamma\in\mathfrak{g}^{*}\). The _asymptotic cone_ on \(\Gamma\) is the set \[Cone(\Gamma)=\{X\in\mathfrak{g}^{*}\mid\exists\varepsilon_{i}\to 0, \varepsilon_{i}\in F^{\times},\exists g_{i}\in G,\lim_{i\to\infty}\varepsilon_{ i}^{2}Ad^{*}(g_{i})\Gamma=X\}.\] This is a closed, nonempty union of nilpotent orbits of \(G\) on \(\mathfrak{g}^{*}\). **Proposition 3.4**.: _Let \(\Gamma\in\mathfrak{g}^{*}\). Then the nonzero \(G\)-orbits occurring in the asymptotic cone of \(\Gamma\) are those in its nilpotent support, that is,_ \[Cone(\Gamma)=\bigcup_{\mathcal{O}\in\operatorname{Nil}(\Gamma)}\mathcal{O} \cup\{0\}.\] Proof.: It suffices to prove this result for \(\Gamma\in\mathfrak{g}\), where we may apply the theory of \(\mathfrak{sl}_{2}(F)\) triples. Let \(\Gamma\in\mathfrak{g}\) have depth \(r\leq\infty\) and let \(\mathcal{O}\in\operatorname{Nil}(\Gamma)\). Then there exists \(x\in\mathcal{B}(G)\) and \(d<r\) such that \(d_{x}(\Gamma)=d\) and \(\mathcal{O}=\mathcal{O}(\Gamma,x)\). Choose a representative \[X\in\mathcal{O}(\Gamma,x)\cap(\Gamma+\mathfrak{g}_{x,d+}).\] Choose an \(\mathfrak{sl}_{2}(F)\) triple \((X,H,Y)\) and the corresponding one-parameter subgroup \(\mu\) adapted to \(X\). By [1, Lemma 5.2.1], we have \[X+\mathfrak{g}_{x,d+}=Ad(G_{x,0+})(X+C_{\mathfrak{g}_{x,d+}}(Y)).\] Therefore there exist \(g\in G_{x,0+}\) and \(C\in C_{\mathfrak{g}_{x,d+}}(Y)\) for which \[\Gamma=Ad(g^{-1})(X+C).\] Note that \(C_{\mathfrak{g}}(Y)\) is spanned by the lowest weight vectors of \(\operatorname{ad}(H)\), so we may decompose \(C=\sum_{i\leq 0}C_{i}\) where \(Ad(\mu(t))C_{i}=t^{i}C_{i}\) for all \(t\in F^{\times}\). Similarly, for all \(t\in F^{\times}\) we have \(Ad(\mu(t))X=t^{2}X\). Therefore \[\lim_{t\to 0}t^{2}\,Ad(\mu(t^{-1})g)\Gamma=\lim_{t\to 0}t^{2}\,Ad(\mu(t^{-1}))(X+C)=X\] so \(X\in Cone(\Gamma)\). Since \(Cone(\Gamma)\) is \(G\)-invariant, we deduce \(\mathcal{O}\subset Cone(\Gamma)\). 
Conversely, let \(X\in Cone(\Gamma)\) be nonzero, so that there exists a sequence of elements \(\varepsilon_{i}\in F^{\times}\), with \(\varepsilon_{i}\to 0\), and a sequence of elements \(g_{i}\in G\), such that

\[\lim_{i\to\infty}\varepsilon_{i}^{2}\,Ad(g_{i})\Gamma=X.\]

Complete \(X\) to an \(\mathfrak{sl}_{2}(F)\) triple \((X,H,Y)\) and choose a point \(x\in\mathcal{B}_{0}(X,H,Y)\). Since the given sequence converges to \(X\), it enters the neighbourhood \(X+\mathfrak{g}_{x,0+}\) so we may choose \(i\in\mathbb{N}\) such that

\[\varepsilon_{i}^{2}\,Ad(g_{i})\Gamma\in X+\mathfrak{g}_{x,0+}.\]

It follows that \(Ad(g_{i})\Gamma\in\varepsilon_{i}^{-2}X+\mathfrak{g}_{x,-2\mathrm{val}(\varepsilon_{i})+}\), a nontrivial degenerate coset of depth \(-2\mathrm{val}(\varepsilon_{i})\). Since \((\varepsilon_{i}^{-2}X,H,\varepsilon_{i}^{2}Y)\) is again an \(\mathfrak{sl}_{2}(F)\) triple and \(\mathcal{B}_{-2\mathrm{val}(\varepsilon_{i})}(\varepsilon_{i}^{-2}X,H,\varepsilon_{i}^{2}Y)=\mathcal{B}_{0}(X,H,Y)\), we infer that the minimal nilpotent orbit meeting this coset is \(Ad(G)(\varepsilon_{i}^{-2}X)=Ad(G)X\). Thus \(Ad(G)X=\mathcal{O}({}^{g_{i}}\Gamma,x)\in\operatorname{Nil}_{x}(\Gamma)\subset\operatorname{Nil}(\Gamma)\), as required.

### Connection with the wave front set of a positive depth representation

Suppose now that \(\pi\) is an irreducible admissible representation of \(G\) of depth \(r\) with good minimal \(K\)-type \(\Gamma\) of depth \(-r\) (in the sense of [13, Def 2.4.3, 2.4.6]). Then, under suitable hypotheses (that are satisfied if \(F\) has characteristic zero and the exponential map converges on \(\mathfrak{g}_{0+}\)), Kim and Murnaghan prove a version of the local character expansion that is valid on the strictly larger neighbourhood \(\mathfrak{g}_{r}^{\mathrm{reg}}\). The \(\Gamma\)-asymptotic expansion [10, Thm 5.3] asserts that there exist complex coefficients \(c_{\mathcal{O}^{\prime}}(\pi)\) such that for any \(X\in\mathfrak{g}_{r}^{\mathrm{reg}}\) we have

\[\Theta_{\pi}(\exp(X))=\sum_{\mathcal{O}^{\prime}\in\mathscr{O}(\Gamma)}c_{\mathcal{O}^{\prime}}(\pi)\widehat{\mu_{\mathcal{O}^{\prime}}}(X), \tag{3.1}\]

where \(\mathscr{O}(\Gamma)\) denotes the set of \(G\)-orbits in \(\mathfrak{g}^{*}\) with \(\Gamma\) in their closure, and for \(\mathcal{O}^{\prime}\in\mathscr{O}(\Gamma)\), \(\widehat{\mu_{\mathcal{O}^{\prime}}}\) denotes the Fourier transform of the corresponding orbital integral (2.2).

This yields a special case of interest: that of the expansion (3.1) having a single nonzero term \(c_{\mathcal{O}^{\prime}}(\pi)\widehat{\mu_{\mathcal{O}^{\prime}}}\) corresponding to \(\mathcal{O}^{\prime}=G\cdot\Gamma\). We claim this happens, for example, when \(G^{\prime}=\mathrm{Cent}_{G}(\Gamma)\) is compact modulo the centre, such as when \(\Gamma\) is a regular element. Namely, let \(\mathfrak{g}^{\prime}\) denote the Lie algebra of \(G^{\prime}\). Then the set \(\mathscr{O}(\Gamma)\) indexing the sum in (3.1) is in bijective correspondence with the set of nilpotent \(G^{\prime}\)-orbits in \((\mathfrak{g}^{\prime})^{*}\); under this hypothesis the latter is \(\{0\}\), so \(\mathscr{O}(\Gamma)\) is the singleton \(\{G\cdot\Gamma\}\).

**Theorem 3.5**.: _Let \(\pi\) be an irreducible representation of \(G\) of depth \(r>0\), and let \(\Gamma\in\mathfrak{g}^{*}\) be a good minimal \(K\)-type of \(\pi\) such that \(\pi\) admits a \(\Gamma\)-asymptotic expansion. Suppose further that this expansion has a unique nonzero term, corresponding to the Fourier transform of the orbital integral corresponding to \(\Gamma\) itself. 
Then \(\mathcal{WF}(\pi)\) coincides with the maximal elements of \(\mathrm{Nil}(\Gamma)\); that is, the asymptotic cone on \(\Gamma\) is the wave front set of \(\pi\)._ The following proof was communicated to me by Fiona Murnaghan. Proof.: We are given that on \(\mathfrak{g}^{\mathrm{reg}}\cap\mathfrak{g}_{r}\), \(\Theta_{\pi}\circ\exp=t\widehat{\mu_{G\cdot\Gamma}}\) for some nonzero scalar \(t\). Applying the inverse Fourier transform to the local character expansion (2.3) of \(\Theta_{\pi}\) (which is valid on the smaller set \(\mathfrak{g}^{\mathrm{reg}}\cap\mathfrak{g}_{r+}\)) we may write the equality of distributions \[t\mu_{G\cdot\Gamma}=\sum_{\mathcal{O}\in\mathscr{O}(0)}c_{\mathcal{O}}(\pi) \mu_{\mathcal{O}} \tag{3.2}\] which by [14, Cor 3.4.6] holds in particular for all compactly supported functions on \(\mathfrak{g}^{*}/\mathfrak{g}^{*}_{x,-r}\), for any \(x\in\mathcal{B}(G)\). So let \(x\in\mathcal{B}(G)\) and let \(d\) be such that \(\mathfrak{g}^{*}_{x,d+}\supset\mathfrak{g}^{*}_{x,-r}\). Given a nonzero coset \(\xi\in\mathfrak{g}^{*}_{x,d}/\mathfrak{g}^{*}_{x,d+}\) let \(\mathbf{1}_{\xi}\) denote the characteristic function of this subset of \(\mathfrak{g}^{*}\). Note that if \(X\in\xi\cap\mathcal{O}\) for some (not necessarily nilpotent) \(G\)-orbit \(\mathcal{O}\), then this intersection contains the open set \(G_{x,0+}\cdot X\) as well. Thus we have \[\mu_{\mathcal{O}}(\mathbf{1}_{\xi})=0\iff\xi\cap\mathcal{O}=\emptyset. \tag{3.3}\] Now suppose that \(\mathcal{O}\in\mathscr{O}(0)\), and choose \(x\in\mathcal{B}(G)\) and \(\xi=X+\mathfrak{g}^{*}_{x,d+}\) with \(\mathfrak{g}^{*}_{x,d+}\supset\mathfrak{g}^{*}_{x,-r}\) with the property that \(\mathcal{O}=\mathcal{O}(X,x)\). The minimality of \(\mathcal{O}(X,x)\) proven by DeBacker implies that any nilpotent orbit \(\mathcal{O}^{\prime}\) meeting \(\xi\) (or equivalently, by (3.3), satisfying \(\mu_{\mathcal{O}^{\prime}}(\mathbf{1}_{\xi})\neq 0\)) must contain \(\mathcal{O}\) in its closure. Suppose first that \(\mathcal{O}\) is not in the wave front set \(\cup_{\mathcal{O}^{\prime}\in\mathcal{WF}(\pi)}\overline{\mathcal{O}^{\prime}}\) of \(\pi\). Let \(\mathcal{O}^{\prime}\in\mathscr{O}(0)\) be such that \(c_{\mathcal{O}^{\prime}}(\pi)\neq 0\); then \(\mathcal{O}^{\prime}\) is in the wave front set, so \(\mathcal{O}\not\subset\overline{\mathcal{O}^{\prime}}\). This implies by the preceding paragraph that \(\mu_{\mathcal{O}^{\prime}}(\mathbf{1}_{\xi})=0\). As this holds for all such \(\mathcal{O}^{\prime}\), we conclude from (3.2) that \(\mu_{G\cdot\Gamma}(\mathbf{1}_{\xi})=0\), whence by (3.3) we have \(\xi\cap G\cdot\Gamma=\emptyset\), and thus \(\mathcal{O}\notin\operatorname{Nil}(\Gamma)\). It follows that every \(\mathcal{O}\in\operatorname{Nil}(\Gamma)\) lies in the wave front set of \(\pi\). Now suppose \(\mathcal{O}\in\mathcal{WF}(\pi)\); that is, it is maximal among nilpotent orbits with nonzero coefficient in (3.2). Thus the preceding argument implies \(\mu_{\mathcal{O}^{\prime}}(\mathbf{1}_{\xi})=0\) for all \(\mathcal{O}^{\prime}\neq\mathcal{O}\) in the wave front set. Therefore (3.2) yields \(t\mu_{G\cdot\Gamma}(\mathbf{1}_{\xi})=c_{\mathcal{O}}(\pi)\mu_{\mathcal{O}}( \mathbf{1}_{\xi})\neq 0\), so by (3.3), \(\xi\) must meet \(G\cdot\Gamma\) and thus \(\mathcal{O}\in\operatorname{Nil}(\Gamma)\). Hence, the maximal elements of \(\operatorname{Nil}(\Gamma)\) coincide with \(\mathcal{WF}(\pi)\). 
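To indicate the shape of this result in the case studied in the rest of the paper (this illustration is not used in the sequel), take \(G=\operatorname{SL}(2,F)\) and let \(\pi\) be an irreducible representation of positive depth admitting a \(\Gamma\)-asymptotic expansion whose good minimal \(K\)-type \(\Gamma\) has anisotropic, hence compact, centralizer, so that the expansion has a single term as discussed before the theorem. Up to conjugacy \(\Gamma\) corresponds to \(\dot{X}(u,v)\) of (4.2) with \(uv\notin(F^{\times})^{2}\), and Lemma 4.2 below gives \(\operatorname{Nil}(\Gamma)=\{\mathcal{O}_{u},\mathcal{O}_{u\gamma}\}\). Since there are no closure relations among the nonzero nilpotent orbits of \(\mathfrak{sl}_{2}(F)\), both orbits are maximal, and the theorem asserts \(\mathcal{WF}(\pi)=\{\mathcal{O}_{u},\mathcal{O}_{u\gamma}\}\); this is recovered, under weaker hypotheses on \(F\), from the main theorem of Section 6 (see the remarks below).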
In fact, the key to the proof is that the maximal nilpotent orbits occurring in the Shalika germ expansion of \(\mu_{G\cdot\Gamma}\) are the maximal orbits of \(\operatorname{Nil}(\Gamma)\). In [10], Ciubotaru and Okada obtain a similar result directly, by analysing the asymptotic nilpotent cone of the characters of \(G_{x,r}/G_{x,r+}\) appearing in \(\pi^{G_{x,r+}}\).

**Remark 3.6**.: One might ask if Theorem 3.5 could be extended to show that \(\mathcal{WF}(\pi)\) is the union of the nilpotent supports of the maximal orbits occurring in the \(\Gamma\)-asymptotic expansion (3.1). The answer is expected to be negative. In the supercuspidal case, the key result is [21, Cor 10.2.3(1)], which implies that this latter set of orbits (in \(\mathscr{O}(\Gamma)\)) corresponds exactly to \(\mathcal{WF}(\pi^{0})\) (in \(\mathscr{O}(0)\) for \(G^{0}=\operatorname{Cent}_{G}(\Gamma)^{\circ}\)), where \(\pi^{0}\) is the associated depth-zero supercuspidal representation of \(G^{0}\). Cheng-Chiang Tsai\({}^{1}\) has constructed explicit examples of supercuspidal representations where the wave front set does not follow such a pleasant inductive structure. In effect, one expects that when substituting Shalika germ expansions into the \(\Gamma\)-asymptotic expansion, cancellations among coefficients may occur.

Footnote 1: private correspondence and forthcoming work

While the proof of Theorem 3.5 entails some additional hypotheses on \(F\), a consequence of the main theorem of Section 6 is that, for \(G=\operatorname{SL}(2,F)\), the conclusion of the theorem holds whenever the characteristic and residual characteristic of \(F\) are not \(2\).

## 4. Nilpotent orbits and nilpotent cones of \(G=\operatorname{SL}(2,F)\)

For the rest of this paper we suppose that \(\mathbf{G}=\operatorname{SL}(2)\) and \(\mathfrak{g}=\mathfrak{sl}(2,F)\). In this section, we derive some additional properties of the nilpotent support of an element \(\Gamma\in\mathfrak{g}^{*}\). We identify \(\mathfrak{g}\) and \(\mathfrak{g}^{*}\) with the trace form.

There are five nilpotent orbits: the zero orbit, and four two-dimensional principal (or regular) orbits that are in bijection with the rational square classes \(F^{\times}/(F^{\times})^{2}\). Representatives of these five orbits in \(\mathfrak{g}\) are

\[\dot{X}_{u}=\begin{bmatrix}0&u\\ 0&0\end{bmatrix} \tag{4.1}\]

where \(u\) runs over the set \(\{0,1,\varepsilon,\varpi,\varepsilon\varpi\}\) modulo \((F^{\times})^{2}\) and \(\varepsilon\in\mathcal{R}^{\times}\) is a fixed nonsquare. For each \(u\), write \(\mathcal{O}_{u}\) for the orbit in \(\mathfrak{g}^{*}\) corresponding to \(\dot{X}_{u}\).

The following proposition relaxes the conditions for identifying the orbits in the nilpotent support of an element \(\Gamma\).

**Proposition 4.1**.: _Let \(\mathfrak{g}=\mathfrak{sl}_{2}(F)\) and \(\Gamma\in\mathfrak{g}^{*}\setminus\{0\}\). Set \(r=d(\Gamma)\in\mathbb{R}\cup\{\infty\}\). Then_

(a) _every_ \(\mathcal{O}\in\operatorname{Nil}(\Gamma)\) _meets_ \(\Gamma+\mathfrak{g}^{*}_{x,r}\) _for some_ \(x\in\mathcal{B}(G)\) _such that_ \(d_{x}(\Gamma)<r\)_;_

(b) _for each_ \(x\in\mathcal{B}(G)\) _such that_ \(d_{x}(\Gamma)<r\)_, if_ \(\Gamma+\mathfrak{g}^{*}_{x,r}\) _meets a nilpotent orbit_ \(\mathcal{O}\)_, then_ \(\mathcal{O}\in\operatorname{Nil}(\Gamma)\)_; and_

(c) 
_for each_ \(x\in\mathcal{B}(G)\)_,_ \(\operatorname{Nil}(\Gamma)=\{\mathcal{O}(^{g}\Gamma,x)\mid g\in G\}= \operatorname{Nil}_{x}(\Gamma)\)_, that is, every nonzero nilpotent orbit in_ \(Cone(\Gamma)\) _appears in the local nilpotent support at every_ \(x\)_._ Proof.: The first two statements use that there are no closure relations between the principal orbits of \(\mathfrak{sl}_{2}(F)\), and so the uniqueness of the minimal nilpotent orbit meeting any degenerate coset implies that any nontrivial degenerate coset meets only one nilpotent orbit. For (a), suppose \(\mathcal{O}\in\operatorname{Nil}(\Gamma)\); then \(\mathcal{O}=\mathcal{O}(\Gamma,x)\) for some \(x\in\mathcal{B}(G)\), implying \(d_{x}(\Gamma)<r\). Since \(\Gamma\in\mathfrak{g}_{r}^{*}\subset\mathfrak{g}_{x,r}^{*}+\mathcal{N}^{*}\), the set \(\Gamma+\mathfrak{g}_{x,r}^{*}\) contains a (nonzero) nilpotent element \(Y\). Since \(Y\in\Gamma+\mathfrak{g}_{x,r}^{*}\subset\Gamma+\mathfrak{g}_{x,d_{x}(\Gamma)}^ {*}\), it lies in \(\mathcal{O}\), so \(\mathcal{O}\) meets the smaller coset, as required. For (b), note that if \(d_{x}(\Gamma)<r\) then \(0\notin\Gamma+\mathfrak{g}_{x,r}^{*}\subset\Gamma+\mathfrak{g}_{x,d(\Gamma)+}^ {*}\); any nilpotent orbit meeting the smaller set meets the larger one, and thus by uniqueness this orbit is \(\mathcal{O}(\Gamma,x)\in\operatorname{Nil}(\Gamma)\). To prove (c), let \(x\in\mathcal{B}(G)\) and let \(\operatorname{Nil}_{x}(\Gamma)\) be the local nilpotent support of \(\Gamma\) at \(x\); we have already noted that \(\operatorname{Nil}_{x}(\Gamma)\subset\operatorname{Nil}(\Gamma)\). The reverse inclusion follows from the one-dimensionality of \(\mathcal{B}(G)\). Let \(\mathcal{O}\in\operatorname{Nil}(\Gamma)\); then \(\mathcal{O}=\mathcal{O}(\Gamma,y)\) for some \(y\in\mathcal{B}(G)\). Let \(S\) be a split torus with associated root system \(\Phi(G,S)=\{\pm\alpha\}\) such that \(y\in\mathcal{A}(G,S)\). Set \(d=d_{y}(\Gamma)\) and let \(\dot{\Gamma}\in\mathfrak{g}\) correspond to \(\Gamma\) via the trace form. Choose \(\dot{X}\in\mathcal{O}\) such that \(\dot{\Gamma}\in\dot{X}+\mathfrak{g}_{y,d+}\). Conjugating both \(\dot{\Gamma}\) and \(\dot{X}\) by \(G_{y}\) as necessary we may assume \(\dot{X}\in\mathfrak{g}_{\alpha}\). Relative to the pinning of a fixed base point, we have the decomposition of \(\mathcal{R}\)-modules \[\mathfrak{g}_{y,d}=\mathfrak{g}_{-\alpha,d+\alpha(y)}\oplus\mathfrak{s}_{d} \oplus\mathfrak{g}_{\alpha,d-\alpha(y)}.\] Let \(\alpha^{\vee}\) denote the positive coroot, and choose \(g\in G\) so that \(gx\in\mathcal{A}(G,S)\) and \(gx=y-\ell\alpha^{\vee}\) for some \(\ell\geq 0\). Therefore if \(d^{\prime}=d-2\ell\) then \(\mathfrak{g}_{y,d}\subset\mathfrak{g}_{gx,d^{\prime}}\). Since \(\dot{X}\in\mathfrak{g}_{\alpha,d-\alpha(y)}\setminus\mathfrak{g}_{\alpha,(d- \alpha(y))+}\) and \(d^{\prime}-\alpha(gx)=d-\alpha(y)\), we conclude \(d_{gx}(\dot{\Gamma})=d_{gx}(\dot{X})=d^{\prime}\) and \(\dot{\Gamma}-\dot{X}\in\mathfrak{g}_{gx,d^{\prime}+}\). By uniqueness, we infer that \(\mathcal{O}=\mathcal{O}(\dot{\Gamma},gx)=\mathcal{O}(^{g^{-1}}\dot{\Gamma}, x)\in\operatorname{Nil}_{x}(\dot{\Gamma})\), yielding the result. We next determine \(\operatorname{Nil}(\Gamma)\) explicitly, for any \(\Gamma\in\mathfrak{g}=\mathfrak{sl}_{2}(F)\) (identified with its dual via the trace form). There is nothing to do if \(\Gamma\) is nilpotent. 
If \(\Gamma\neq 0\) is semisimple, then it is \(G\)-conjugate to a matrix of the form \[\dot{X}(u,v)=\begin{bmatrix}0&u\\ v&0\end{bmatrix}, \tag{4.2}\] for some \(u,v\in F^{\times}\). Its centralizer is a maximal torus. There is one \(G\)-conjugacy class of split torus, represented by any diagonal element, and two classes of unramified anisotropic tori, represented by \(\dot{X}(1,\varepsilon)\in\mathfrak{g}\) and \(\dot{X}(\varpi^{-1},\varepsilon\varpi)\in\mathfrak{g}\), respectively. The classes of ramified tori are represented by \(\dot{X}(1,t)\in\mathfrak{g}\) with \(t\in\{\varpi,\varepsilon\varpi,\varepsilon^{2}\varpi,\varepsilon^{3}\varpi\}\), noting that if \(-\varepsilon\in F^{2}\) then there are only two classes. We can now describe the nilpotent support of each such element, using the parametrization given in (4.1). **Lemma 4.2**.: _Let \(G=\operatorname{SL}(2,F)\) and \(\Gamma\in\mathfrak{g}\setminus\{0\}\) semisimple. If \(\Gamma\) splits over \(F\), then_ \[\operatorname{Nil}(\Gamma)=\{\mathcal{O}_{1},\mathcal{O}_{\varepsilon}, \mathcal{O}_{\varpi},\mathcal{O}_{\varepsilon\varpi}\}.\] _Otherwise, \(\Gamma\) is conjugate to \(\dot{X}(u,v)\) for some \(u,v\in F^{\times}\), and splits over \(E=F[\sqrt{uv}]\). Let \(\operatorname{Norm}_{E/F}(E^{\times})/(F^{\times})^{2}\) be represented by \(\{1,\gamma\}\). Then \(u\) and \(v\) are uniquely defined mod \(\operatorname{Norm}_{E/F}(E^{\times})\) and_ \[\operatorname{Nil}(\Gamma)=\{\mathcal{O}_{u},\mathcal{O}_{u\gamma}\}.\] Proof.: By Proposition 4.1, we may fix the choice \(x=x_{0}\in\mathcal{B}(G)\) to be the vertex such that \(\mathfrak{g}_{x,r}\) is the set of traceless \(2\times 2\) matrices with entries in \(\mathcal{P}^{\lceil r\rceil}\), and replace \(\Gamma\) by any \(G\)-conjugate. First suppose \(\Gamma=\operatorname{diag}(a,-a)\) with \(\operatorname{val}(a)=r\). Let \(u\in F^{\times}\) and note that if \(g_{u}=\left[\begin{smallmatrix}1&-\frac{1}{2}a^{-1}u\\ 0&1\end{smallmatrix}\right]\in G\) then \(\left.{}^{g_{u}}\Gamma=\left[\begin{smallmatrix}a&u\\ 0&-a\end{smallmatrix}\right].\) Therefore, for any \(u\) such that \(\operatorname{val}(u)=d<r\), we have \(\left.{}^{g_{u}}\Gamma\in\dot{X}_{u}+\mathfrak{g}_{x,d+}\right.\). Thus \(\operatorname{Nil}(\Gamma)\) contains every nonzero nilpotent orbit. Now suppose \(\Gamma=\dot{X}(u,v)\) for some \(u,v\in F^{\times}\) such that \(uv\notin(F^{\times})^{2}\) and set \(E=F[\sqrt{uv}]\). We calculate directly that the upper triangular entry of any \(G\)-conjugate of \(\Gamma\) takes the form \[u^{\prime}=a^{2}u-b^{2}v=u(a^{2}-b^{2}vu^{-1})\in u\text{Norm}_{E/F}(E^{\times})\] for some \(a,b\in F\), not both zero, from which it follows that \(\operatorname{Nil}(\Gamma)\subset\{\mathcal{O}_{u},\mathcal{O}_{u\gamma}\}\). For the reverse inclusion, first note that \(\dot{X}(u,v)\) is \(G\)-conjugate to \(\dot{X}(u\varpi^{-2n},v\varpi^{2n})\) for all \(n\in\mathbb{Z}\) and for \(n\) sufficiently large \(\dot{X}(u\varpi^{-2n},v\varpi^{2n})-\dot{X}_{u\varpi^{-2n}}\in\mathfrak{g}_{x,r}\). Thus \(\mathcal{O}_{u}\in\operatorname{Nil}(\Gamma)\). Now note that when \(E\) is ramified, we may take \(\gamma=-uv\) so \(\mathcal{O}_{u\gamma}=\mathcal{O}_{-v}\); since \(\dot{X}(u,v)\) is \(G\)-conjugate to \(\dot{X}(-v,-u)\) we are done by the preceding. If \(E\) is unramified, we have instead \(\gamma=uv\), whence \(\mathcal{O}_{u\gamma}=\mathcal{O}_{v}\). 
As \(-1\) is a norm, we may choose \(\alpha,\beta\in F\) such that \(-1=\beta^{2}-\alpha^{2}uv^{-1}\); then \(g=\left[\begin{smallmatrix}\alpha&\beta\\ \beta&\alpha uv^{-1}\end{smallmatrix}\right]\in G\) satisfies \(\left.{}^{g}\dot{X}(u,v)=\dot{X}(v,u)\right.\), and again by the preceding we may conclude \(\mathcal{O}_{v}\in\operatorname{Nil}(\Gamma)\).

## 5. Representations of \(G_{x}\) associated to nilpotent orbits

### Shalika's representations of \(\operatorname{SL}(2,\mathcal{R})\)

In his thesis, Shalika constructed all irreducible representations of \(K=\operatorname{SL}(2,\mathcal{R})\). In this section we recap his explicit construction for the so-called ramified case, which attaches an irreducible representation of \(K\) to certain \(K\)-orbits in \(\mathfrak{g}^{*}\); we'll then provide a coordinate-free generalization more suited to our needs in the next section.

Let \(S\) be the diagonal split torus, \(B\) the upper triangular Borel subgroup and \(U\) its unipotent radical. We use a subscript \(0\) to indicate their intersections with \(K\): \(S_{0}=S\cap K\), \(B_{0}=B\cap K\) and \(U_{0}=U\cap K\). Let \(x_{0}\in\mathcal{A}(G,S)\) be such that \(K=G_{x_{0}}\) and \(z_{0}\) the barycentre of the positive alcove adjacent to \(x_{0}\) (relative to \(B\)).

Let \(d\) be a positive integer. Choose \(u\in\mathcal{P}^{-d}\setminus\mathcal{P}^{-d+1}\) and \(v\in\mathcal{P}^{-d+1}\), and consider the anti-diagonal matrix \(\dot{X}:=\dot{X}(u,v)\in\mathfrak{g}_{x_{0},-d}\) of (4.2). Identify this with the element \(X\in\mathfrak{g}_{x_{0},-d}^{*}\) by the rule \(X(Z)=\operatorname{tr}(\dot{X}Z)\) for all \(Z\in\mathfrak{g}\). If \(v=0\) then \(X\) is nilpotent and its centralizer \(C_{K}(X)\) in \(K\) coincides with \(ZU_{0}\), where \(Z=\{\pm I\}\). Otherwise, \(X\) is semisimple and \(C_{K}(X)\) is a torus. Note that every \(X\in\mathfrak{g}_{x_{0},-d}^{*}\) that represents a degenerate coset is \(K\)-conjugate to one of this form.

Define an open subgroup of \(K\) by

\[J_{d}=\begin{bmatrix}1+\mathcal{P}^{\lceil d/2\rceil}&\mathcal{P}^{\lceil d/2\rceil}\\ \mathcal{P}^{\lceil(d+1)/2\rceil}&1+\mathcal{P}^{\lceil d/2\rceil}\end{bmatrix}\cap K.\]

It is straightforward to verify that \(X\) gives a well-defined character \(\eta_{X}\) of \(J_{d}\), trivial on \(G_{x_{0},d+}\), by the rule

\[\eta_{X}(g)=\psi(\operatorname{tr}(\dot{X}(g-I))). \tag{5.1}\]

This character is trivial on \(K_{d+}\) and depends only on the classes \(u+\mathcal{P}^{\lceil(-d+1)/2\rceil}\) and \(v+\mathcal{P}^{\lceil-d/2\rceil}\). For any choice of character \(\theta\) of \(C_{K}(X)\) agreeing with \(\eta_{X}\) on \(C_{K}(X)\cap J_{d}\), write \(\eta(X,\theta)\) for the resulting extension to a character of \(C_{K}(X)J_{d}\). Then Shalika proves the following result with an intricate elementary argument [22, Thm 4.2.1, Thm 4.2.5, SS4.3].

**Proposition 5.1**.: _Let \(X\in\mathfrak{g}_{x_{0},-d}^{*}\) be as above, and \(\theta\) any character of its centralizer \(C_{K}(X)\) agreeing with \(\eta_{X}\) on \(C_{K}(X)\cap J_{d}\). Then the representation_

\[\mathcal{S}_{x_{0}}(X,\theta)=\operatorname{Ind}_{C_{K}(X)J_{d}}^{K}\eta(X,\theta)\]

_is irreducible and independent (up to equivalence) of the choice of representative in the \(K\)-orbit of \(\dot{X}(u+\mathcal{P}^{\lfloor(d+1)/2\rfloor},v+\mathcal{P}^{\lceil(d+1)/2\rceil})\). 
It is of degree \(\frac{1}{2}q^{d-1}(q^{2}-1)\) and of depth \(d\), meaning it is nontrivial on \(K_{d}\) but trivial on \(K_{d+}\)._ ### Irreducible representations of \(G_{x}\) parametrized by degenerate cosets at \(x\) Our goal in this section is to give a coordinate-free interpretation of Shalika's construction that allows us to unambiguously attach representations of \(G_{x}\) to any degenerate coset of negative depth. Note that \(\operatorname{GL}(2,F)\) acts on \(\mathcal{B}(G)\), and all vertices are conjugate under this action. This conjugacy does not in general preserve the \(\operatorname{SL}(2,F)\)-orbit of \(\Gamma\) or \(X\). _Example 1_.: Let \(x_{0},z_{0}\) be as in Section 5.1 and \(x_{1}\) the other vertex of the chamber containing \(x_{0}\) in its closure. The element \(\omega=\left[\begin{smallmatrix}0&1\\ \varpi&0\end{smallmatrix}\right]\) used in [20] is an affine reflection such that \(\omega\cdot x_{0}=x_{1}\), and \({}^{\omega}\dot{X}(u,v)=\dot{X}(\varpi^{-1}v,\varpi u)\). Thus in particular in the case of nilpotent orbits, where \(\dot{X}(0,1)\sim\dot{X}(-1,0)\), we have \({}^{\omega}\mathcal{O}_{1}=\mathcal{O}_{-\varpi}\). On the other hand, the element \(\eta=\left[\begin{smallmatrix}1&0\\ 0&\varpi\end{smallmatrix}\right]\) used in [20] is a translation such that \(\eta\cdot x_{0}=x_{1}\), but now \({}^{\eta}\dot{X}(u,v)=\dot{X}(\varpi^{-1}u,\varpi v)\) so \({}^{\eta}\mathcal{O}_{1}=\mathcal{O}_{\varpi}\) instead. We begin by showing that any degenerate coset determines a chamber of \(\mathcal{B}(G)\) adjacent to \(x\). **Lemma 5.2**.: _Let \(G=\operatorname{SL}(2,F)\). Let \(x\in\mathcal{B}(G)\) be any vertex and let \(\Gamma\in\mathfrak{g}_{x,-d}^{*}\setminus\mathfrak{g}_{x,-d+}^{*}\) represent a degenerate coset for some \(d>0\). Then there exists a unique chamber \(\mathcal{C}=\mathcal{C}_{\Gamma}\) of \(\mathcal{B}(G)\) adjacent to \(x\), independent of the choice of representative of \(\Gamma+\mathfrak{g}_{x,-d+}^{*}\), such that for any \(z\in\mathcal{C}\) we have \(\Gamma\in\mathfrak{g}_{x,-d}^{*}\cap\mathfrak{g}_{z,-d+}^{*}\). Moreover, we have \(\operatorname{Cent}_{G_{x}}(\Gamma)=\operatorname{Cent}_{G_{z}}(\Gamma)\)._ Proof.: Uniqueness is immediate: given \(z^{\prime}\) in any other chamber adjacent to \(x\), the geodesic from \(z\) to \(z^{\prime}\) contains \(x\); hence \(\mathfrak{g}_{z,-d+}^{*}\cap\mathfrak{g}_{z^{\prime},-d+}^{*}\subset\mathfrak{ g}_{x,-d+}^{*}\), so does not contain \(\Gamma\). Identify \(\Gamma\) with an element \(\dot{\Gamma}\in\mathfrak{g}_{x,-d}\) via the trace form. Choose a nilpotent element \(\dot{X}\in\dot{\Gamma}+\mathfrak{g}_{x,-d+}\). By [13, SS5], we may complete \(\dot{X}\) to an \(\mathfrak{sl}_{2}(F)\)-triple \(\{\dot{X},\dot{H}\in\mathfrak{g}_{x,0},\dot{Y}\in\mathfrak{g}_{x,d}\}\) and find a split torus \(S\) and corresponding apartment \(\mathcal{A}(G,S)\) containing \(x\), such that if \(\Phi(G,S)=\{\pm\alpha\}\), then \(\dot{X}\in\mathfrak{g}_{\alpha}\) and \(\dot{Y}\in\mathfrak{g}_{-\alpha}\). Let \(\mathcal{C}\) be the positive alcove adjacent to \(x\) in this apartment. Note that we have \(\operatorname{Cent}_{\mathfrak{g}}(\dot{Y})=\mathfrak{g}_{-\alpha}\). From [13, Lemma 5.2.1] we know that \[\dot{X}+\mathfrak{g}_{x,-d+}={}^{G_{x,0+}}\left(\dot{X}+\operatorname{Cent}_{ \mathfrak{g}_{x,-d+}}(\dot{Y})\right);\] thus there exists \(g\in G_{x,0+}\) such that \(\dot{\Gamma}\in{}^{g}(\dot{X}+\mathfrak{g}_{-\alpha}\cap\mathfrak{g}_{x,-d+})\). 
Since \(G_{x,0+}\) fixes \(\mathcal{C}\) and the coset \(\dot{\Gamma}+\mathfrak{g}_{x,-d+}\), we may without loss of generality replace the Lie triple and torus of the preceding paragraph with their \(g\)-conjugate, so that we have \(\dot{\Gamma}\in\dot{X}+\mathfrak{g}_{-\alpha}\cap\mathfrak{g}_{x,-d+}\). For any \(z\in\mathcal{C}\) we have \(0<\alpha(z-x)<1\); thus since \(\alpha(x),d\in\mathbb{Z}\) we may conclude

\[\mathfrak{g}_{\alpha}\cap\mathfrak{g}_{x,-d}=\mathfrak{g}_{\alpha}\cap\mathfrak{g}_{z,-d+}\quad\text{and}\quad\mathfrak{g}_{-\alpha}\cap\mathfrak{g}_{x,-d+}=\mathfrak{g}_{-\alpha}\cap\mathfrak{g}_{z,-d+}.\]

Since \(\dot{\Gamma}\) lies in the sum of these two spaces we have \(\dot{\Gamma}\in\mathfrak{g}_{z,-d+}\), whence \(\Gamma\in\mathfrak{g}_{x,-d}^{*}\cap\mathfrak{g}_{z,-d+}^{*}\).

Finally, note that \(\operatorname{Cent}_{G}(\dot{X})=ZU_{\alpha}\) and \(U_{\alpha}\cap G_{x}=U_{\alpha}\cap G_{z}\). Since \(\dot{\Gamma}\in\dot{X}+\mathfrak{g}_{x,-d+}\), we have \(\operatorname{Cent}_{G_{x}}(\dot{\Gamma})\subset\operatorname{Cent}_{G_{x}}(\dot{X})G_{x,0+}=\operatorname{Cent}_{G_{z}}(\dot{X})G_{x,0+}\subset G_{z}\).

**Definition 5.3**.: Let \(d=-d_{x}(\Gamma)\) be such that \(\Gamma+\mathfrak{g}_{x,-d+}^{*}\) is a degenerate coset. Let \(z\) be the barycentre of the associated alcove \(\mathcal{C}_{\Gamma}\). Define the subgroup

\[J_{x,\Gamma}=\begin{cases}G_{x,d/2}&\text{if $d$ is odd;}\\ G_{z,d/2}&\text{if $d$ is even.}\end{cases} \tag{5.2}\]

Note that when \(x=x_{0}\) and \(z=z_{0}\) we have \(J_{x,\Gamma}=J_{d}\). Since \(G_{x,n+}\subseteq G_{z,n}\subseteq G_{x,n}\) for any integer \(n\), it follows directly that for all \(d\), we have

\[G_{x,d/2+}\subseteq J_{x,\Gamma}\subseteq G_{x,d/2}.\]

Since \(\Gamma\in\mathfrak{g}_{x,-d}^{*}\cap\mathfrak{g}_{z,-d+}^{*}\), it defines a character \(\eta_{\Gamma}\) of \(J_{x,\Gamma}\) that is trivial on \(G_{x,d+}\) via the corresponding Moy-Prasad isomorphism. The character depends only on the coset \(\Gamma+\mathfrak{g}_{x,-d/2}^{*}\) if \(d\) is odd and on \(\Gamma+\mathfrak{g}_{z,-d/2+}^{*}\) otherwise. Moreover, since \(\operatorname{Cent}_{G_{x}}(\Gamma)=\operatorname{Cent}_{G_{z}}(\Gamma)\) we deduce directly that \(J_{x,\Gamma}\) is normalized by \(C_{x}(\Gamma):=\operatorname{Cent}_{G_{x}}(\Gamma)\). Thus, for any character \(\theta\) of \(C_{x}(\Gamma)\) coinciding with \(\eta_{\Gamma}\) on the intersection of their domains there is a unique extension \(\eta(\Gamma,\theta)\) of \(\eta_{\Gamma}\) to \(C_{x}(\Gamma)J_{x,\Gamma}\). Define

\[\mathcal{S}_{x}(\Gamma,\theta)=\operatorname{Ind}_{C_{x}(\Gamma)J_{x,\Gamma}}^{G_{x}}\eta(\Gamma,\theta).\]

**Proposition 5.4**.: _Suppose \(\Gamma\) represents a degenerate coset at a vertex \(x\in\mathcal{B}(G)\) and \(-d=d_{x}(\Gamma)<0\). Suppose \(\theta\) is a character of the centralizer \(C_{x}(\Gamma)\) of \(\Gamma\) in \(G_{x}\) defining a character \(\eta(\Gamma,\theta)\) of \(C_{x}(\Gamma)J_{x,\Gamma}\). Then_

1. \(\mathcal{S}_{x}(\Gamma,\theta)\) _is an irreducible representation of_ \(G_{x}\) _of depth_ \(d\) _and degree_ \(\frac{1}{2}q^{d-1}(q^{2}-1)\)_;_

2. \(\mathcal{S}_{x}(\Gamma,\theta)\cong\mathcal{S}_{x}(\Gamma^{\prime},\theta^{\prime})\) _if and only if there exists_ \(g\in G_{x}\) _such that_ \(\eta(\Gamma,\theta)={}^{g}\eta(\Gamma^{\prime},\theta^{\prime})\)_; and_

3. 
_for any_ \(\nu\in\operatorname{GL}(2,F)\) _we have_ (5.3) \[{}^{\nu}\mathcal{S}_{x}(\Gamma,\theta)\cong\mathcal{S}_{\nu.x}({}^{\nu}\Gamma, {}^{\nu}\theta).\] Proof.: When \(x=x_{0}\) and \(\Gamma\in\mathfrak{g}^{*}\) corresponds to some \(\dot{X}(u,v)\in\mathfrak{g}_{x_{0},-d}\setminus\mathfrak{g}_{x_{0},-d+}\), then this construction coincides with Shalika's. If \(g\in G_{x}\), then \({}^{g}C_{x}(\Gamma)=C_{x}({}^{g}\Gamma)\) and \({}^{g}J_{x,\Gamma}=J_{x,{}^{g}\Gamma}\), so we obtain the invariance of \(\mathcal{S}_{x}(\Gamma,\theta)\) under \(G_{x}\)-conjugacy and the choice of representative of the appropriate coset of \(\Gamma\). More generally, for any \(\nu\in\operatorname{GL}(2,F)\) such that \(\nu\cdot x_{0}=x\), we have \({}^{\nu}(\mathfrak{g}_{x_{0},d}^{*})=\mathfrak{g}_{x,d}^{*}\), \({}^{\nu}C_{x_{0}}(\Gamma)=C_{x}({}^{\nu}\Gamma)\) and \({}^{\nu}J_{x_{0},\Gamma}=J_{x,{}^{\nu}\Gamma}\). Thus \[{}^{\nu}\mathcal{S}_{x_{0}}(\Gamma,\theta)\cong\mathcal{S}_{x}({}^{\nu}\Gamma, {}^{\nu}\theta),\] where we have identified a \(\nu\)-conjugate of a representation of \(G_{x_{0}}\) with a representation of \(G_{x}\) under the group isomorphism \({}^{\nu}G_{x_{0}}\cong G_{x}\). Since \(\operatorname{GL}(2,F)\) acts transitively on the set of vertices of \(\mathcal{B}(\operatorname{SL}(2,F))\), the rest of the statements follow from Proposition 5.1. The simple nature of the representations \(\mathcal{S}_{x}(\Gamma,\theta)\) is revealed as follows. **Lemma 5.5**.: _Suppose \(x\) is a vertex of \(\mathcal{B}(G)\) and \(\Gamma_{1},\Gamma_{2}\in\mathfrak{g}_{x,-d}^{*}\) represent nonzero but degenerate cosets of \(\mathfrak{g}_{x,-d}^{*}/\mathfrak{g}_{x,-d+}^{*}\) for some \(d>0\). Suppose \(s\in\mathbb{R}\) satisfies \(\Gamma_{1}\in\Gamma_{2}+\mathfrak{g}_{x,-s}^{*}\). Then for any choice of characters \(\theta_{i}\) of \(C_{x}(\Gamma_{i})\) such that the characters \(\eta(\Gamma_{i},\theta_{i})\) agree upon restriction to \(C_{x}(\Gamma_{i})J\cap G_{x,s+}\) for \(i\in\{1,2\}\), we have_ \[\operatorname{Res}_{G_{x,s+}}\mathcal{S}_{x}(\Gamma_{1},\theta_{1})\cong \operatorname{Res}_{G_{x,s+}}\mathcal{S}_{x}(\Gamma_{2},\theta_{2}). \tag{5.4}\] _In particular, if \(s\geq d/2\) then (5.4) holds independent of \(\theta_{i}\)._ Proof.: For any \(\Gamma_{i}\), the two representations have the same degree \(\frac{1}{2}q^{d-1}(q^{2}-1)\) and the same depth \(d\). If \(s\geq d\) then both sides are \(1\)-isotypic of the same degree hence equivalent. Suppose \(s<d\). Since \(\Gamma_{1}\in\Gamma_{2}+\mathfrak{g}_{x,-s}^{*}\), we have \(C_{x}(\Gamma_{1})\subset C_{x}(\Gamma_{2})G_{x,d-s}\). Since \(\Gamma_{1}\in\Gamma_{2}+\mathfrak{g}_{x,-d+}^{*}\), Lemma 5.2 yields \(J_{x,\Gamma_{1}}=J_{x,\Gamma_{2}}\); let us denote this group \(J\). Thus \(\eta_{\Gamma_{i}}\) for \(i\in\{1,2\}\) are characters of \(J\) that agree on \(J\cap G_{x,s+}\). If \(s\geq d/2\) then \(G_{x,s+}\subset J\), and so \(\operatorname{Res}_{G_{x,s+}\cap C_{x}(\Gamma_{i})J}\)\(\eta(\Gamma_{i},\theta_{i})=\eta_{\Gamma_{i}}\) is independent of \(\theta_{i}\). Mackey theory thus yields the decomposition \[\operatorname{Res}_{G_{x,s+}}\mathcal{S}_{x}(\Gamma_{i},\theta_{i})\cong \bigoplus_{\gamma\in G_{x}/C_{x}(\Gamma_{i})J}{}^{\gamma}\eta_{\Gamma_{i}}|_ {G_{x,s+}}. \tag{5.5}\] Each \(\gamma\in C_{x}(\Gamma_{i})G_{x,d-s}/C_{x}(\Gamma_{i})J\) fixes the character \(\eta_{\Gamma_{i}}|_{G_{x,s+}}\). 
The elements \(\gamma^{\prime}\in G_{x}/C_{x}(\Gamma_{1})G_{x,d-s}=G_{x}/C_{x}(\Gamma_{2})G_{x,d-s}\) parametrize the orbit of the coset \(\Gamma_{1}+\mathfrak{g}_{x,-s}^{*}=\Gamma_{2}+\mathfrak{g}_{x,-s}^{*}\). Thus (5.5) gives the same sum of characters for \(i\in\{1,2\}\).

If instead \(s<d/2\), then \(G_{x,d-s}\subseteq J\) so \(C_{x}(\Gamma_{1})J=C_{x}(\Gamma_{2})J\). Since \(J\subseteq G_{x,s+}\), the double coset space \(G_{x,s+}\backslash G_{x}/C_{x}(\Gamma_{i})J\) is now equal to \(G_{x}/C_{x}(\Gamma_{i})G_{x,s+}\), and is independent of \(i\). So again by Mackey theory we have

\[\operatorname{Res}_{G_{x,s+}}\mathcal{S}_{x}(\Gamma_{i},\theta_{i})=\bigoplus_{\gamma\in G_{x}/C_{x}(\Gamma_{i})G_{x,s+}}\operatorname{Ind}_{G_{x,s+}\cap{}^{\gamma}(C_{x}(\Gamma_{i})J)}^{G_{x,s+}}{}^{\gamma}\eta(\Gamma_{i},\theta_{i})=\bigoplus_{\gamma\in G_{x}/C_{x}(\Gamma_{i})G_{x,s+}}{}^{\gamma}\left(\operatorname{Ind}_{G_{x,s+}\cap C_{x}(\Gamma_{i})J}^{G_{x,s+}}\eta(\Gamma_{i},\theta_{i})\right).\]

When the restriction of \(\eta(\Gamma_{i},\theta_{i})\) to \(G_{x,s+}\cap C_{x}(\Gamma_{1})J=G_{x,s+}\cap C_{x}(\Gamma_{2})J\) is independent of \(i\), we infer (5.4).

### Representations attached to nilpotent orbits

Let \(X\in\mathcal{N}^{*}\setminus\{0\}\) and let \(\lambda\) be a corresponding adapted one-parameter subgroup, whose centralizer in \(G\) is a maximal split torus \(S\). In fact, \(S\) is generated by \(S_{0}\) and \(\lambda(\varpi)\), and \(\operatorname{Cent}_{G}(X)=ZU\) where \(Z\) is the center of \(G\) and \(B=SU\) is a Borel subgroup. For any vertex \(x\in\mathcal{B}(G)\), applying the Cartan decomposition yields

\[\mathcal{O}=G\cdot X=\bigsqcup_{n\in\mathbb{Z}}G_{x}\cdot(\lambda(\varpi)^{n}\cdot X)=\bigsqcup_{n\in\mathbb{Z}}G_{x}\cdot(\varpi^{2n}X) \tag{5.6}\]

as the decomposition of the \(G\)-orbit of \(X\) into disjoint \(G_{x}\)-orbits.

**Proposition 5.6**.: _Let \(x\) be a vertex in \(\mathcal{B}(\mathbf{G},F)\), \(\mathcal{O}\) a nonzero nilpotent \(G\)-orbit in \(\mathfrak{g}^{*}\) and \(\zeta\) a character of \(Z\). Let \(\{X_{-d}\mid d_{x}(X_{-d})=-d<0\}\) be any set of representatives of the \(G_{x}\)-orbits in \(\mathcal{O}\setminus\mathfrak{g}_{x,0}^{*}\). Then the representation of \(G_{x}\) attached to \(\mathcal{O}\) with central character \(\zeta\), given by_

\[\tau_{x}(\mathcal{O},\zeta)=\bigoplus_{d>0}\mathcal{S}_{x}(X_{-d},\zeta), \tag{5.7}\]

_is independent of choices up to \(G_{x}\)-equivalence._

Proof.: The chamber \(\mathcal{C}_{X}\) associated to \((X_{-d},x)\) by Lemma 5.2 defines an Iwahori subgroup of \(G_{x}\) with pro-p unipotent radical \(U_{x}\); by construction, we have \(C_{x}(X)=ZU_{x}\). Since \(\eta_{X}\) is trivial on \(ZU_{x}\cap J_{x,X}\), the character \(\zeta\) of \(ZU_{x}\) defined by \(\zeta(zu)=\zeta(z)\) for all \(z\in Z\) and \(u\in U_{x}\) extends \(\eta_{X}\). Thus Proposition 5.4 applies.

It follows from (5.6) that the parity of \(d_{x}(Y)\), for any \(Y\in\mathcal{O}\), is an invariant of the \(G\)-orbit, and we call this the _parity depth of \(\mathcal{O}\) at \(x\)_. Therefore the depths \(d\) of the components of \(\tau_{x}(\mathcal{O},\zeta)\) all have parity equal to the parity depth of \(\mathcal{O}\) at \(x\). Note that the restriction of \(\tau_{x}(\mathcal{O},\zeta)\) to any subgroup of \(G_{x,0+}\) is independent of the choice of \(\zeta\), so we may drop \(\zeta\) from the notation in such cases. As needed, we associate to the zero nilpotent orbit the trivial representation of \(G_{x}\), and denote it \(\tau_{x}(\{0\})\). 
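To make (5.6) and the parity statement above concrete, we record the elementary computation behind them. For the representative \(\dot{X}_{u}\) of (4.1), the cocharacter \(\lambda(t)=\operatorname{diag}(t,t^{-1})\), for instance, satisfies

\[{}^{\lambda(t)}\dot{X}_{u}=\begin{bmatrix}t&0\\ 0&t^{-1}\end{bmatrix}\begin{bmatrix}0&u\\ 0&0\end{bmatrix}\begin{bmatrix}t^{-1}&0\\ 0&t\end{bmatrix}=\begin{bmatrix}0&t^{2}u\\ 0&0\end{bmatrix}=t^{2}\dot{X}_{u},\]

so \(\lambda\) is adapted to \(\dot{X}_{u}\) and \(\lambda(\varpi)^{n}\cdot X=\varpi^{2n}X\), as used in (5.6). Since multiplication by \(\varpi^{2n}\) shifts \(d_{x}\) by \(2n\), the depths \(d\) of the summands \(\mathcal{S}_{x}(X_{-d},\zeta)\) in (5.7) run over all positive integers of a single fixed parity, namely the parity depth of \(\mathcal{O}\) at \(x\), with exactly one summand for each such \(d\).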
## 6. The case of positive-depth representations of \(\operatorname{SL}(2,F)\) We briefly recap the classification of irreducible representations \(\pi=\pi(\chi,\Gamma)\) of \(\operatorname{SL}(2,F)\) of positive depth, then establish that their explicit branching to a maximal compact open subgroup \(G_{x}\) can be described as twists of the datum \((\chi,\Gamma)\) defining \(\pi\). This allows us to state and prove our main theorem in this case, and to explicitly compute the constant terms that arise. ### Representation of \(\operatorname{SL}(2,F)\) of positive depth The classification of the irreducible admissible representations of \(G=\operatorname{SL}(2,F)\) was given in Shalika's 1966 thesis [25], and Sally and Shalika obtained many explicit results on their characters in several papers, including [26]. An excellent overview is given [1]. In this section we address the positive-depth supercuspidal representations, using the parametrization of Adler and Yu [1, 24]. Because the tori in \(\operatorname{SL}(2,F)\) are one-dimensional, the correcting twist to this construction given by Fintzen, Kaletha and Spice in [14, Definition 3.1] is trivial in this case. **Proposition 6.1**.: _The isomorphism classes of irreducible representations of \(\operatorname{SL}(2,F)\) of positive depth \(r\) are parametrized by the \(G\)-conjugacy classes of pairs \((T,\chi)\), where \(T\) is a maximal torus of \(G\) and \(\chi\) is a character of \(T\) of depth \(r\)._ This is equivalent to the well-known classification of the irreducible positive depth representations of \(\operatorname{SL}(2,F)\) in terms of unrefined minimal \(K\)-types. To construct the representations explicitly we first recall some facts about the maximal tori and their characters. Let \(T\) be a maximal torus of \(G\) and let \(\chi\) be a character of \(T\) of depth \(r>0\). The building \(\mathcal{B}(T)\) of \(T\) embeds into \(\mathcal{B}(G)\) as the apartment \(\mathcal{A}(G,T)\) if \(T\) is split and as a single point \(\{x_{T}\}\) otherwise, which is a vertex if \(T\) is unramified and the midpoint of a chamber if \(T\) is ramified. It follows that the depth \(r\) is an integer if \(T\) splits over an unramified extension and an element of \(\frac{1}{2}+\mathbb{Z}\) otherwise. To each pair \((T,\chi)\) we associate an element \(\Gamma\) as follows. If \(\mathfrak{t}\) denotes the Lie algebra of \(T\), then via the Moy-Prasad isomorphism \(e\colon\mathfrak{t}_{r/2+}/\mathfrak{t}_{r+}\to T_{r/2+}/T_{r+}\) there exists a nonzero element \(\Gamma=\Gamma_{\pi}\in\mathfrak{t}_{-r}^{*}\), uniquely defined modulo \(\mathfrak{t}_{-r/2}^{*}\), such that \[\chi(t)=\psi(\Gamma(e^{-1}(t))).\] We identify \(\Gamma\) with an element of \(\mathfrak{g}^{*}\) that is zero on the \(T\)-invariant complement of \(\mathfrak{t}\) in \(\mathfrak{g}\). Then \(\Gamma\in\mathfrak{g}_{x,-r}^{*}\) for any \(x\in\mathcal{B}(T)\) and we recover \(T\) as \(\mathrm{Cent}_{G}(\Gamma)\). Moreover, \(\Gamma\) thus defines a character of \(G_{x,r}/G_{x,r+}\cong\mathfrak{g}_{x,r}/\mathfrak{g}_{x,r+}\), and following the work of Moy and Prasad, the pair \((G_{x,r},\Gamma)\) is called an _unrefined minimal \(K\)-type_. Proof of Proposition 6.1.: The classification is known; we only wish to construct the representations \(\pi=\pi(T,\chi)\) variously as follows. If \(T\) is a split torus, then choose a Borel subgroup \(B=TN\) of \(G\) containing \(T\) and extend \(\chi\) trivially across \(N\) to a character of \(B\). 
Set \[\mathrm{Ind}_{TN}^{G}(\chi)=\{f\colon G\to\mathbb{C}\mid f(tng)=\chi(t)\nu(t)f(g)\ \forall t\in T,n\in N,g\in G\}\] where \(\nu\) is the square root of the modular character and is given on \(T\cong F^{\times}\) by the \(p\)-adic norm. Then \(\pi(T,\chi)=\mathrm{Ind}_{B}^{G}(\chi)\) is an irreducible principal series representation. If \(T\) is anisotropic, with associated point \(x_{T}\in\mathcal{B}(G)\), then we first extend \(\chi\) to a character of \(TG_{x_{T},r/2+}\), by setting \[\chi(tg)=\chi(t)\psi(\Gamma(e^{-1}(g)))\] where \(e\colon\mathfrak{g}_{x_{T},r/2+}/\mathfrak{g}_{x_{T},r+}\to G_{x_{T},r/2+}/G_{x_{T},r+}\) is the Moy-Prasad isomorphism. When \(G_{x_{T},r/2}\neq G_{x_{T},r/2+}\) (which will happen only if \(T\) is unramified and \(r\in 2\mathbb{Z}\)), we take a certain Weil-Heisenberg lift of \(\chi|_{G_{x_{T},r}}\) to form a \(q\)-dimensional representation \(\omega\) of \(T\ltimes G_{x_{T},r/2}\), and set \(\kappa(tg)=\chi(t)\omega(t,g)\). Then \(\pi(T,\chi)=\mathrm{c}\text{-}\mathrm{Ind}_{TG_{x_{T},r/2}}^{G}\kappa\) is an irreducible supercuspidal representation. Given \(\pi=\pi(T,\chi)\), we let \(\Gamma=\Gamma_{\pi}\) denote a choice of element in \(\mathfrak{g}^{*}\) realizing the character \(\chi\), as in the discussion preceding the proof. Then since \(T=\mathrm{Cent}_{G}(\Gamma)\) we may also say that \((\chi,\Gamma)\) is the data defining \(\pi\).
### Branching rules obtained as twists of the inducing datum
We begin by proving that the branching rules obtained in [15, Theorem 7.4] and [15, Theorem 6.2] are in fact constructible from twists of the data \((\chi,\Gamma)\).
**Theorem 6.2**.: _Let \(\pi=\pi(T,\chi)\) be an irreducible representation of \(G\) of depth \(r>0\). Let \(\Gamma=\Gamma_{\pi}\in\mathfrak{g}^{*}\) realize \(\chi\) as above. Then for any vertex \(x\in\mathcal{B}(G)\) we have_ \[\mathrm{Res}_{G_{x}}\pi=\pi^{G_{x,r+}}\oplus\bigoplus_{g\in[G_{x}\backslash G/\mathrm{Cent}(\Gamma)]^{\text{deg}}}\mathcal{S}_{x}({}^{g}\Gamma,{}^{g}\chi) \tag{6.1}\] _where \([G_{x}\backslash G/\mathrm{Cent}(\Gamma)]^{\text{deg}}\) denotes a parameter set for the \(G_{x}\)-orbits in \(G\cdot\Gamma\) that do not meet \(\mathfrak{g}_{x,-r}^{*}\), that is, such that the coset \({}^{g}\Gamma+\mathfrak{g}_{x,d_{x}({}^{g}\Gamma)+}^{*}\) is degenerate._ Proof.: We begin with the case that \(T=\mathrm{Cent}(\Gamma)\) is anisotropic. Set \(y=x_{T}\) and let \(\pi(T,\chi)=\mathrm{c}\text{-}\mathrm{Ind}_{TG_{y,r/2}}^{G}\kappa\) be the corresponding supercuspidal representation. First suppose that we are in the special case that \(x=x_{0}\) and \(y\in\overline{\mathcal{C}}\), the closure of the fundamental alcove, which was the case considered in [20]. By [20, Prop 4.4], the double coset space \(G_{x}\backslash G/TG_{y,r/2}\) that arises in the Mackey decomposition \[\operatorname{Res}_{G_{x}}\pi=\bigoplus_{g\in G_{x}\backslash G/TG_{y,r/2}}\operatorname{Ind}_{G_{x}\cap g(TG_{y,r/2})}^{G_{x}}{}^{g}\kappa\] is independent of \(r\) and is given by \(G_{x}\backslash G/T\). Since \(T=\operatorname{Cent}_{G}(\Gamma)\), this latter space parametrizes the \(G_{x}\)-orbits in the \(G\)-orbit of \(\Gamma\) in \(\mathfrak{g}^{*}\). By [20, Thm 6.1], each of these Mackey components is irreducible. Now since \(\Gamma\) has depth \(-r\) and depth is \(G\)-invariant, \({}^{g}\Gamma\) meets \(\mathfrak{g}^{*}_{x,-r}\) if and only if \(d_{x}({}^{g}\Gamma)=-r\).
Since \(\operatorname{Cent}_{G}(\Gamma)=T\), this happens if and only if \(gx=x_{T}=y\) in which case the corresponding Mackey component has depth \(r\) and so lies in \(\pi^{G_{x,r+}}\). Note that \(\pi^{G_{x,r+}}\neq\{0\}\) thus arises only when \(T\) is an unramified torus attached to a vertex \(y\) in the same \(G\)-conjugacy class as \(x\). When \(gx\neq y\), then by [20, Thm 6.2],2 the corresponding Mackey component satisfies Footnote 2: Errata to [20, Thm 6.2]: the decomposition in case \(y=1\) is missing the term corresponding to the double coset representative \(\mathfrak{e}^{\eta}\). \[\operatorname{Ind}_{G_{x}\cap{}^{g}(TG_{y,r/2})}^{G_{x}}{}^{g}\kappa\cong\mathcal{S}_{x}({}^{g}\Gamma,{}^{g}\chi),\] as required, yielding (6.1) for the fundamental case. Now suppose that \(x\in\mathcal{B}(G)\) is an arbitrary vertex. Then there exists \(k\in\operatorname{GL}(2,F)\) such that \(kx=x_{0}\). Choose \(h\in\operatorname{SL}(2,F)\) such that \(hy\in k^{-1}\overline{\mathcal{C}}\). Then via the identification \({}^{k}G_{x}=G_{x_{0}}\), and then the isomorphism \({}^{h}\pi\cong\pi\), we may write \[\operatorname{Res}_{G_{x_{0}}}{}^{kh}\pi=\operatorname{Res}_{G_{x}}{}^{h}\pi\cong\operatorname{Res}_{G_{x}}\pi.\] Even when \(kh\notin\operatorname{SL}(2,F)\), the data defining the representation \({}^{kh}\pi\) is simply \(({}^{kh}T,{}^{kh}\chi,{}^{kh}\Gamma)\). We may therefore apply the decomposition (6.1) to \(\operatorname{Res}_{G_{x_{0}}}{}^{kh}\pi\). Since \({}^{k}G_{x,r+}=G_{x_{0},r+}\), we have \(({}^{kh}\pi)^{G_{x_{0},r+}}=({}^{h}\pi)^{G_{x,r+}}\). Moreover, for any \(g\) defining a \(G_{x_{0}}\)-orbit of \({}^{kh}\Gamma\) that does not meet \(\mathfrak{g}^{*}_{x_{0},-r}\), we have \[S_{x_{0}}({}^{gkh}\Gamma,{}^{gkh}\chi)={}^{k}(S_{x_{0}}({}^{(k^{-1}gk)h}\Gamma,{}^{(k^{-1}gk)h}\chi))=S_{x}({}^{g^{\prime}h}\Gamma,{}^{g^{\prime}h}\chi),\] where \(g^{\prime}=k^{-1}gk\); then \(g^{\prime}h\) defines a \(G_{x}\)-orbit of \(\Gamma\) that does not meet \(\mathfrak{g}^{*}_{x,-r}\), showing the index sets correspond. We now consider the case that \(T\) is a split torus, so that \(\pi=\pi(T,\chi)=\operatorname{Ind}_{B}^{G}\chi\) for some Borel subgroup \(B=TU\) containing \(T\) having \(U\) as its unipotent radical. Since \(G=G_{x}B\), there is a unique (highly reducible) Mackey component in this case. Instead, in the special case that \(x\in\{x_{0},x_{1}\}\) and \(T=S\), the decomposition of \(\operatorname{Res}_{G_{x}}\pi\) into irreducibles is found in [20] by explicitly decomposing the \(G_{x}\)-subrepresentations \(\pi^{G_{x,n}}\) as \(n\to\infty\). We need to show that this decomposition is in fact of the form (6.1). We begin with \(x=x_{0}\) and \(T=S\). First note that as \(\pi\) has depth \(r\) at \(x\), the subrepresentation \(\pi^{G_{x,r+}}\) is nonzero, and in fact is irreducible by [20, Prop 4.4] (where this space is denoted \(V^{K_{m}}\), for \(m=r+1\) the conductor of \(\chi\)). By depth we know that \({}^{g}\Gamma\in\mathfrak{g}^{*}_{x,-r}\) if and only if \({}^{g}T\subset G_{x}\), so the \(G_{x}\)-orbits in \(G\cdot\Gamma\) meeting \(\mathfrak{g}^{*}_{x,-r}\) correspond to the trivial double coset of \(G_{x}\backslash G/T\). The irreducible representations of \(G_{x}\) of depth greater than \(r\) appearing in \(\operatorname{Res}_{G_{x}}\pi\) are classified in [20, Thm 7.4]. The notation in that paper relates to ours as follows. Identify \(\mathfrak{g}\) and \(\mathfrak{g}^{*}\) via the trace form.
The conductor \(n\) is \(d+1\), and for any \(u\in\mathcal{R}^{\times},v\in\mathcal{P}\) we have \[\mathcal{D}_{n}(\rho,\dot{X}(u,v)):=\mathcal{S}_{x}(\varpi^{-d}\dot{X}(u,v),\rho). \tag{6.2}\] If \(\dot{\Gamma}\) is the diagonal matrix \(\operatorname{diag}(a,-a)\in\mathfrak{g}_{x,-r}\), set \(\gamma_{0}=a\varpi^{d}\), \(\gamma_{1}=a\varepsilon^{-1}\varpi^{d}\) and write \[g_{i}=\begin{bmatrix}1&-\frac{1}{2}\gamma_{i}^{-1}\\ \gamma_{i}&\frac{1}{2}\end{bmatrix}.\] Following the notation in _loc. cit._, define \(Y_{0}=\dot{X}(1,\gamma_{0}^{2})\) and \(Y_{1}=\dot{X}(\varepsilon,\varepsilon\gamma_{1}^{2})\); then \[\varpi^{-d}Y_{0}={}^{g_{0}}\Gamma\quad\text{and}\quad\varpi^{-d}Y_{1}={}^{g_{1 }}\Gamma. \tag{6.3}\] It follows that \(\rho_{i}:={}^{g_{i}}\chi\) is a character of \(C_{x}(Y_{i})=C_{x}({}^{g_{i}}\Gamma)\) extending \(\eta_{{}^{g_{i}}\Gamma}\). Then, in our notation, [22, Thm 7.4] asserts that for each integer \(d>r\), \(\operatorname{Res}_{G_{x}}\pi\) has two irreducible components of depth \(d\), denoted \(W^{\pm}_{d-1}\), and these are explicitly given by \[W^{+}_{d-1}\oplus W^{-}_{d-1}\cong\mathcal{S}_{x}({}^{g_{0}}\Gamma,{}^{g_{0}} \chi)\oplus\mathcal{S}_{x}({}^{g_{1}}\Gamma,{}^{g_{1}}\chi). \tag{6.4}\] Noting the factorization \[\begin{bmatrix}1&-\frac{1}{2}\gamma^{-1}\\ \gamma&\frac{1}{2}\end{bmatrix}=\begin{bmatrix}1&0\\ \gamma&1\end{bmatrix}\begin{bmatrix}1&-\frac{1}{2}\gamma^{-1}\\ 0&1\end{bmatrix},\] we see that as \(\gamma\) runs over the distinct square classes in \(\gamma\in\mathcal{P}^{d}\setminus\mathcal{P}^{d+1}\), for all \(d>0\), we obtain representatives of the distinct nontrivial cosets of \(G_{x}\backslash G/T\), which is the index set \([G_{x}\backslash G/\mathrm{Cent}(\Gamma)]^{\deg}\). The same result is obtained for \(x=x_{1}\) in [22, Cor 4.6, Thm 7.4] via conjugation by \(\omega=[\begin{smallmatrix}0&1\\ \varpi&0\end{smallmatrix}]\in\mathrm{GL}(2,F)\), as \(\omega x_{0}=x_{1}\). For arbitrary \(x\), we proceed as in the previous case, this time choosing \(k,h\in\mathrm{SL}(2,F)\) for which \(kx\in\{x_{0},x_{1}\}\) and for which \({}^{h}T=S\). ### Main theorem for representations of positive depth Before stating the next theorem, we require a short lemma about filtrations of tori. **Lemma 6.3**.: _Let \(T\) be a maximal torus of \(G=\mathrm{SL}(2,F)\), let \(x\in\mathcal{B}(G)\). Let \(\dot{X}\in\mathfrak{g}\) be nilpotent and denote its centralizer in \(G_{x}\) by \(U_{x}\). Then for any \(\ell\in\mathbb{Z}_{+}\), we have_ \[T\cap U_{x}G_{x,\ell}\subseteq ZT_{\ell}.\] Proof.: As in the proof of Lemma 5.2, we may choose an apartment \(\mathcal{A}(G,S)\subset\mathcal{B}(G)\) containing \(x\) with root system \(\Phi=\{\pm\alpha\}\) such that \(\dot{X}\in\mathfrak{g}_{\alpha}\); then \(U_{x}=(G_{x}\cap U_{\alpha})Z\). Given \(t\in T\cap U_{x}G_{x,\ell}\), write \(t=uzg\) with \(z\in Z\), \(u\in G_{x}\cap U_{\alpha}\) and \(g\in G_{x,\ell}\). Then \(z^{-1}t\in(G_{x,\ell}\cap U_{-\alpha})S_{\ell}(G_{x}\cap U_{\alpha})\), so its trace is that of an element of \(S_{\ell}\). Since for any torus \(T\) of \(G\) and any \(\ell>0\), we have \(t^{\prime}\in T_{\ell}\) if and only if the trace of \(t^{\prime}\) lies in \(2+\mathcal{P}^{2\ell}\), we conclude that \(t\in T_{\ell}\). **Lemma 6.4**.: _Let \(T=\mathrm{Cent}_{G}(\Gamma)\), where \(\Gamma\) is a semisimple element of depth \(-r\). Suppose that at \(x\in\mathcal{B}(G)\) we have \(d_{x}(\Gamma)=-d<-r\). 
Then \(T\cap G_{x}=ZT_{d-r}\), so that \(T\cap G_{x,\ell}=T_{d-r+\ell}\) for any \(\ell\geq 0\)._ Proof.: Let \(\mathfrak{t}\) be the Lie algebra of \(T\). The hypotheses imply that \(d_{x}(\varpi^{k}\Gamma)=d(\varpi^{k}\Gamma)-(d-r)\) for any \(k\in\mathbb{Z}\). Thus since \(\mathfrak{t}\) is one-dimensional, for any element \(\dot{X}\in\mathfrak{t}_{\ell}\setminus\mathfrak{t}_{\ell+}\) we have \(d_{x}(\dot{X})=\ell-(d-r)\), yielding \(\mathfrak{t}\cap\mathfrak{g}_{x,\ell}=\mathfrak{t}_{d-r+\ell}\). Passage to the group yields the desired result, where at depth zero, we observe that \(Z\subset T\cap G_{x}\) for all \(x\) **Theorem 6.5**.: _Let \(\pi=\pi(T,\chi)\) be an irreducible representation of \(G=\operatorname{SL}(2,F)\) of depth \(r>0\) and let \(\Gamma=\Gamma_{\pi}\in\mathfrak{g}^{*}\). Then for each maximal compact \(G_{x}\), there is an integer \(n_{x}(\pi)\) such that in the Grothendieck group of representations we have_ \[\operatorname{Res}_{G_{x,r+}}\pi\cong n_{x}(\pi)\mathbf{1}+\sum_{\mathcal{O} \in\operatorname{Nil}(\Gamma)}\operatorname{Res}_{G_{x,r+}}\tau_{x}(\mathcal{ O}). \tag{6.5}\] _That is, up to some copies of the trivial representation, the representation \(\pi\) is locally completely determined by the nilpotent support of \(\Gamma\)._ Proof.: The restriction to \(G_{x,r+}\) will be trivial on any \(G_{x}\)-representations of depth less than or equal to \(r\), so our first step is to match components of depth \(d>r\) in \(\operatorname{Res}_{G_{x}}\pi\) and in \(\sum_{\mathcal{O}\in\operatorname{Nil}(\Gamma)}\tau_{x}(\mathcal{O})\). Note that the restriction to \(G_{x,r+}\) is independent of the choice of central character \(\zeta\) so it is omitted from the notation. Theorem 6.2 gives the decomposition (6.1) of the left side: the components of depth greater than \(r\) are parametrized by the degenerate \(G_{x}\)-orbits of \(\Gamma\) at \(x\). From (5.7) we infer the decomposition of the right side: the components are parametrized by nilpotent \(G_{x}\)-orbits in \(\mathcal{O}\setminus\mathfrak{g}^{*}_{x,0}\) for each \(\mathcal{O}\in\operatorname{Nil}(\Gamma)\). By Proposition 4.1, each degenerate coset \(\xi={}^{g}\Gamma+\mathfrak{g}^{*}_{x,-d+}\), where \(d=-d_{x}({}^{g}\Gamma)\), is represented by a nilpotent element \(X\in\mathcal{O}({}^{g}\Gamma,x)\) such that moreover \({}^{g}\Gamma-X\in\mathfrak{g}^{*}_{x,-r}\). The \(G_{x}\)-orbit of \(\xi\) determines the \(G_{x}\)-orbit of \(X\) and by definition \(G\cdot X\in\operatorname{Nil}(\Gamma)\). Thus for each \(d>r\) there is a one-to-one correspondence between the \(G_{x}\)-orbits in \(G\cdot\Gamma\) whose depth at \(x\) is \(-d\) and the \(G_{x}\)-orbits in \(\operatorname{Nil}(\Gamma)\) whose depth at \(x\) is \(-d\). To complete the proof we need to show the corresponding representations are isomorphic upon restriction to \(G_{x,r+}\). Let \(\zeta=\chi|_{Z}\) be the central character of \(\pi\). If \(r<d\leq 2\lfloor r\rfloor+1\) then applying Lemma 5.5 to the pair \(\Gamma_{1}={}^{g}\Gamma\) and \(\Gamma_{2}=X\), with \(s=r\) gives \(\operatorname{Res}_{G_{x,r+}}\mathcal{S}_{x}({}^{g}\Gamma,{}^{g}\chi)\cong \operatorname{Res}_{G_{x,r+}}\mathcal{S}_{x}(X,\zeta)\) as required. If \(d>2r\), then we have a stronger result. Lemma 6.4 implies that \(\operatorname{Cent}_{G_{x}}({}^{g}\Gamma)=Z\ {}^{g}T_{d-r}\subseteq Z\ {}^{g}T_{r+}\), and thus \({}^{g}\chi\) is given on this subgroup by the central character \(\zeta\). 
Since the chambers attached to \({}^{g}\Gamma\) and to \(X\) by Lemma 5.2 coincide, we have \(J:=J_{x,{}^{g}\Gamma}=J_{x,X}\). Since \({}^{g}\Gamma-X\in\mathfrak{g}^{*}_{x,-r}\subset\mathfrak{g}^{*}_{x,-d/2+}\), we have \(\eta_{\gamma\Gamma}=\eta_{X}\) as characters of \(J\). Moreover, since \({}^{g}\Gamma\in X+\mathfrak{g}^{*}_{x,-r}\), we have \(C_{G_{x}}({}^{g}\Gamma)\subseteq C_{G_{x}}(X)G_{x,d-r}\subseteq C_{G_{x}}(X)J\). Therefore \(\eta({}^{g}\Gamma,{}^{g}\chi)=\eta(X,\zeta)\) as characters of this common group so that \(\mathcal{S}_{x}({}^{g}\Gamma,{}^{g}\chi)=\mathcal{S}_{x}(X,\zeta)\) as representations of \(G_{x}\). In the course of the proof we established that the components of depth \(d>2r\) occurring in \(\operatorname{Res}_{G_{x}}\pi\) coincide as representations of \(G_{x}\), not just as representations of \(G_{x,r+}\), with the components of depth \(d>2r\) in \(\sum_{\mathcal{O}\in\operatorname{Nil}(\Gamma)}\tau_{x}(\mathcal{O},\zeta)\), where \(\zeta\) is the central character of \(\pi\). This was proven case by case in [22, Rem 7.5] and [22, Prop 7.6]. **Remark 6.6**.: Comparing (6.5) with the known values of \(\mathcal{WF}(\pi)\) from [10, Tables 1-4] reveals that \(\operatorname{Nil}(\Gamma)=\mathcal{WF}(\pi)\) in all cases (_cf_ Theorem 3.5). Moreover, with the standard normalization chosen in [16, I.8], the coefficients of the leading terms of the local character expansion agree with those of (6.5); namely \(c_{\mathcal{O}}(\pi)=1\) for all \(\mathcal{O}\in\mathcal{WF}(\pi)\). Thus Theorem 6.5 is a representation-theoretic analogue of the analytic local character expansion. On the other hand, the constant term \(n_{x}(\pi)\) of the decomposition (6.5) does not (and could not) agree with the constant term \(c_{0}(\pi)\) of the local character expansion. For one, \(n_{x}(\pi)\in\mathbb{Z}\) whereas \(c_{0}(\pi)\) may be half-integral; see Table 5. For another, \(n_{x}(\pi)\) depends on the dimension of \(\pi^{G_{x,r+}}\), which may vary based on the \(G\)-conjugacy class of the vertex \(x\in\mathcal{B}(G)\). Let us compute the constant terms \(n_{x}(\pi)\) explicitly. **Proposition 6.7**.: _Let \(\pi=\pi(T,\chi)\) be an irreducible representation of depth \(r>0\) as in Theorem 6.2. Then for each vertex \(x\in\mathcal{B}(G)\), the dimension of the subspace of \(G_{x,r+}\)-fixed vectors, as well as the value of the coefficient \(n_{x}(\pi)\) appearing in (6.5) are as given in Table 1._ Proof.: Let \(\pi=\pi(T,\chi)\) have depth \(r>0\). From Theorem 6.2 we have the equality \[n_{x}(\pi)=\dim(\pi^{G_{x,r+}})-\sum_{\mathcal{O}\in\operatorname{Nil}(\Gamma )}\dim(\tau_{x}(\mathcal{O}))^{G_{x,r+}}. \tag{6.6}\] Let us first compute \(\dim(\pi^{G_{x,r+}})\) in each case. If \(\pi\) is a principal series representation and \(B\) is a Borel subgroup containing \(T\), then \(\pi^{G_{x,r+}}=\operatorname{Ind}_{(B\cap G_{x})G_{x,r+}}^{G_{x}}\chi\), whence \(\dim(\pi^{G_{x,r+}})=|G_{x}/(B\cap G_{x})G_{x,r+}|=(q+1)q^{r}\). If \(\pi=\pi(T,\chi)=\operatorname{c-Ind}_{TG_{x_{T},r/2}}^{G}\kappa\) is an unramified supercuspidal representation such that some \(G\)-conjugate of \(T\) is contained in \(G_{x}\), then (replacing \(T\) and \(\pi\) by this conjugate) we have \(x_{T}=x\) and \(\pi^{G_{x,r+}}=\operatorname{Ind}_{TG_{x,r/2}}^{G_{x}}\kappa(\chi)\). It follows from a calculation in [23, Prop 4.8] that independently of the parity of \(r\) we have \(\dim(\pi^{G_{x,r+}})=(q-1)q^{r}\). 
On the other hand, if \(T\) is not conjugate to a torus contained in \(G_{x}\), then \(\operatorname{Res}_{G_{x}}\pi(T,\chi)\) has no components of depth \(r\) and \(\dim(\pi^{G_{x,r+}})=0\). Similarly, if \(\pi=\pi(T,\chi)\) is a ramified supercuspidal representation, then its depth is half-integral, whence for a vertex \(x\) we have \(G_{x,r}=G_{x,r+}\), and thus by definition of depth \(\pi^{G_{x,r+}}=\{0\}\). On the other hand, the space of \(G_{x,r+}\)-fixed vectors of \(\tau_{x}(\mathcal{O})\) is exactly the sum of its irreducible components of depth \(d\leq r\). These have total dimension \(\frac{1}{2}q^{d-1}(q^{2}-1)\) and correspond to the \(G_{x}\)-orbits of \(\mathcal{O}\) whose depths \(-d\) at \(x\) satisfy \(-r\leq-d\leq-1\). Thus if the parity depth of \(\mathcal{O}\) at \(x\) is even then \[\dim(\tau_{x}(\mathcal{O})^{G_{x,r+}})=\sum_{e=1}^{\lfloor r/2\rfloor}\frac{1 }{2}q^{2e-1}(q^{2}-1)=\frac{1}{2}q(q^{2\lfloor r/2\rfloor}-1).\] whereas if it is odd we have \[\dim(\tau_{x}(\mathcal{O})^{G_{x,r+}})=\sum_{e=1}^{\lceil r/2\rceil}\frac{1}{ 2}q^{2e-2}(q^{2}-1)=\frac{1}{2}(q^{2\lceil r/2\rceil}-1).\] Note that \(\dim(\tau_{x}(\mathcal{O})\oplus\tau_{x}(\mathcal{O}^{\prime}))^{G_{x,r+}}= \frac{1}{2}(q+1)(q^{\lfloor r\rfloor}-1)\) when the \(G_{x}\)-orbits in \(\mathcal{O}\) and \(\mathcal{O}^{\prime}\) have opposite parity depths at \(x\). \begin{table} \begin{tabular}{|l|c c c c|} \hline Type of torus \(T\) & split & unramified, \(x_{T}\sim x\) & unramified, \(x_{T}\not\sim x\) & ramified \\ depth \(r\) & \(r\in\mathbb{Z}_{>0}\) & \(r\in\mathbb{Z}_{>0}\) & \(r\in\mathbb{Z}_{>0}\) & \(r\in\frac{1}{2}+\mathbb{Z}_{\geq 0}\) \\ \hline \(\dim(\pi(T,\chi)^{G_{x,r+}})\) & \((q+1)q^{r}\) & \((q-1)q^{r}\) & \(0\) & \(0\) \\ \hline \(n_{x}(\pi)\) & \(q+1\) & \(q-q^{r}\) if \(r\) is even & \(1-q^{r}\) if \(r\) is even & \((1-q^{r-1/2})(q+1)/2\) \\ & & \(1-q^{r}\) if \(r\) is odd & \(q-q^{r}\) if \(r\) is odd & \\ \hline \end{tabular} \end{table} Table 1. The values of \(n_{x}(\pi)\) appearing in (6.5) for each irreducible representation of \(\operatorname{SL}(2,F)\) of depth \(r>0\). Consequently, we can compute \(n_{x}(\pi)\) using (6.6) and the explicit determination of \(\operatorname{Nil}(\Gamma)\) in Lemma 4.2, as follows. If \(\pi\) is a principal series representation, then \(T\) is a torus and \(\Gamma\) is split. Thus all principal nilpotent orbits occur, yielding \[n_{x}(\pi)=(q+1)q^{r}-2\left(\frac{1}{2}(q+1)(q^{r}-1)\right)=q+1.\] If \(\pi\) is a supercuspidal representation corresponding to a ramified torus, then \(\operatorname{Nil}(\Gamma)\) consists of two nonzero orbits which will be of opposite parity at any vertex \(x\), and \(\lfloor r\rfloor=r-\frac{1}{2}\). Thus we simply have \(n_{x}(\pi)=0-\frac{1}{2}(q+1)(q^{r-1/2}-1)\). On the other hand, if \(T\) is unramified, then \(\operatorname{Nil}(\Gamma)\) consists of two nonzero orbits and at any vertex \(x\), the parity of the depths of elements of these orbits is that of \(d_{x}(\Gamma)\). Thus if \(x\) is \(G\)-conjugate to \(x_{T}\), the orbits that occur have the same parity as \(-r=d_{x_{T}}(\Gamma)\), so \[n_{x}(\pi)=\begin{cases}(q-1)q^{r}-2\left(\frac{1}{2}q(q^{r}-1)\right)=q-q^{r }&\text{if $r$ is even;}\\ (q-1)q^{r}-2\left(\frac{1}{2}(q^{r+1}-1)\right)=1-q^{r}&\text{if $r$ is odd.}\end{cases}\] Finally, if \(x\) is not \(G\)-conjugate to \(x_{T}\), then \(d_{x}(\Gamma)\) and \(d_{x_{T}}(\Gamma)\) have opposite parity. 
Thus we conclude \[n_{x}(\pi)=\begin{cases}0-2\left(\frac{1}{2}(q^{r}-1)\right)=1-q^{r}&\text{if $r$ is even;}\\ 0-2\left(\frac{1}{2}q(q^{r-1}-1)\right)=q-q^{r}&\text{if $r$ is odd.}\end{cases}\]
## 7. The case of depth-zero representations of \(\operatorname{SL}(2,F)\)
To establish our result for depth-zero representations of \(\operatorname{SL}(2,F)\), we apply a result by Barbasch and Moy relating the wave front set of \(\pi\) to that of \(\pi^{G_{x,0+}}\), viewed as a representation of \(\operatorname{SL}(2,\mathfrak{f})\cong G_{x,0}/G_{x,0+}\) (Proposition 7.2). We begin by recalling the representation theory of \(\operatorname{SL}(2,\mathfrak{f})\) and then the classification of depth-zero representations of \(\operatorname{SL}(2,F)\).
### Representations of \(\operatorname{SL}(2,\mathfrak{f})\)
This theory is well-known and is beautifully recapped in [1, §15]. Let \(\mathsf{G}=\operatorname{SL}(2,\mathfrak{f})\), \(\mathsf{T}\) a maximal torus of \(\mathsf{G}\) and \(\overline{\chi}\) a character of \(\mathsf{T}\) (which is assumed to be nontrivial if \(\mathsf{T}\) is anisotropic). The irreducible representations of \(\mathsf{G}\) are parametrized by these pairs \((\mathsf{T},\overline{\chi})\) as follows. If \(\mathsf{T}\) is split and \(\overline{\chi}^{2}\neq\mathbf{1}\) then \(\sigma(\mathsf{T},\overline{\chi})\) is an irreducible principal series representation; if \(\mathsf{T}\) is anisotropic and \(\overline{\chi}^{2}\neq\mathbf{1}\) then \(\sigma(\mathsf{T},\overline{\chi})\) is a (Deligne-Lusztig) cuspidal representation. If \(\mathsf{T}\) is split and \(\overline{\chi}=\mathbf{1}\) then \(\sigma(\mathsf{T},\overline{\chi})=\mathbf{1}\oplus\overline{\mathrm{St}}\) where \(\overline{\mathrm{St}}\) denotes the Steinberg representation of \(\mathsf{G}\). For either \(\mathsf{T}\), when \(\overline{\chi}\) is a strictly quadratic character, we obtain two irreducible representations \(\sigma^{u}(\mathsf{T},\overline{\chi})\) for \(u\in\{1,\varepsilon\}\) (as the components of the restriction to \(\operatorname{SL}(2,\mathfrak{f})\) of a corresponding (irreducible) representation of \(\operatorname{GL}(2,\mathfrak{f})\)). They are distinguished by the theory of Gel'fand-Graev representations, as follows. Let \(X\in\mathfrak{g}(\mathfrak{f})^{*}\setminus\{0\}\) be nilpotent, and identify this with a nilpotent element \(\dot{X}\in\mathfrak{g}(\mathfrak{f})\). Complete this to an \(\mathfrak{sl}(2,\mathfrak{f})\) triple \(\{\dot{Y},\dot{H},\dot{X}\}\) and let \(\mathfrak{u}(\mathfrak{f})=\mathfrak{f}\dot{Y}\). Then \(X\) defines a character of \(\mathsf{U}=\exp(\mathfrak{u}(\mathfrak{f}))\) by \(\psi_{X}(\exp(W))=\psi(X(W))\) for all \(W\in\mathfrak{u}(\mathfrak{f})\). Then the representation of \(\operatorname{SL}(2,\mathfrak{f})\) given by \[\gamma_{\mathcal{O}}=\operatorname{Ind}_{\mathsf{U}}^{\mathsf{G}}\psi_{X} \tag{7.1}\] depends (up to equivalence) only on the nonzero orbit \(\mathcal{O}=\mathsf{G}\cdot X\), and is called the Gel'fand-Graev representation of \(\mathsf{G}\) associated to \(\mathcal{O}\). Contrary to convention, we parametrize our nonzero nilpotent orbits by upper triangular matrices \(\dot{X}_{u}\in\mathfrak{g}(\mathfrak{f})\) as in (4.1), where \(u\in\mathfrak{f}^{\times}/(\mathfrak{f}^{\times})^{2}\sim\{1,\varepsilon\}\).
With respect to this choice, one can compute directly that the character \([\gamma_{\mathcal{O}_{u}}]\) of the Gel'fand-Graev representation associated to \(\mathcal{O}=\mathsf{G}\cdot X_{u}\) is given by \[[\gamma_{\mathcal{O}_{u}}](g)=\begin{cases}q^{2}-1&\text{if }g=I;\\ 2\sigma_{-us}=2\sum_{t\in(\mathfrak{f}^{\times})^{2}}\psi(-ust)&\text{if }g\sim[\begin{smallmatrix}1&s\\ 0&1\end{smallmatrix}];\\ 0&\text{otherwise}.\end{cases}\] By [1, Thm 14.30], the decomposition into irreducible subrepresentations of \(\gamma_{\mathcal{O}_{u}}\) is multiplicity-free. Using character tables it is straightforward to see that it contains all irreducible principal series representations, all Deligne-Lusztig cuspidal representations, the Steinberg representation, and exactly one from each pair of representations arising from quadratic characters. Our parametrization is therefore as follows: for \(u\in\mathfrak{f}^{\times}/(\mathfrak{f}^{\times})^{2}\), and a quadratic character \(\overline{\chi}\) of \(\mathsf{T}\), let \(\sigma^{u}(\mathsf{T},\overline{\chi})\) denote the component of \(\sigma(\mathsf{T},\overline{\chi})\) occurring in \(\gamma_{\mathcal{O}_{u}}\). In the notation of [1, §15], the characters of these representations are labeled \(\overline{\chi}^{\pm}\) where \([\sigma^{-1}(\mathsf{T},\overline{\chi})]=\overline{\chi}^{+}\) and \([\sigma^{-\varepsilon}(\mathsf{T},\overline{\chi})]=\overline{\chi}^{-}\).
### Depth-zero representations of \(\operatorname{SL}(2,F)\)
Now let \(G=\operatorname{SL}(2,F)\) and let \(\chi\) be a depth-zero character of a maximal split or unramified torus. Assume \(\chi\) is nontrivial if the torus is nonsplit. There are two nonconjugate choices \(T_{1},T_{2}\) for an unramified anisotropic torus. If \(x_{0}\) and \(x_{1}\) are the vertices of the standard alcove, as before, then we can choose representatives \(T^{i}\) of the conjugacy classes of maximal tori such that \(T^{i}\subset G_{x_{i}}\), for \(i\in\{0,1\}\). Then \(\mathsf{T}_{i}=T^{i}_{0}/T^{i}_{0+}\) is a maximal anisotropic torus of \(G_{x_{i},0}/G_{x_{i},0+}=:\mathsf{G}_{i}\cong\operatorname{SL}(2,\mathfrak{f})\). Let \(T\) denote the split torus corresponding to the standard apartment and set \(\mathsf{T}=T_{0}/T_{0+}\), which is a maximal split torus of both \(\mathsf{G}_{0}\) and \(\mathsf{G}_{1}\). In each case, the character \(\chi\) factors to a character \(\overline{\chi}\) of the quotient. In the nonsplit case, for each \(i\in\{0,1\}\) inflate the representation \(\sigma(\mathsf{T}_{i},\overline{\chi})\) of \(\mathsf{G}_{i}\) to a representation of \(G_{x_{i}}\) and define \(\pi(T^{i},\chi)=\operatorname{c-Ind}_{G_{x_{i}}}^{G}\sigma(\mathsf{T}_{i},\overline{\chi})\) when \(\chi^{2}\neq\mathbf{1}\). When \(\chi^{2}=\mathbf{1}\) set \(\pi^{u}(T^{i},\chi)=\operatorname{c-Ind}_{G_{x_{i}}}^{G}\sigma^{u}(\mathsf{T}_{i},\overline{\chi})\) for \(u\in\{1,\varepsilon\}\). These representations are supercuspidal and irreducible; the latter four were called the special representations. For \(\eta=[\begin{smallmatrix}1&0\\ 0&\varpi\end{smallmatrix}]\in\operatorname{GL}(2,F)\) we have \({}^{\eta}\pi^{*}(T^{0},\chi)=\pi^{*}(T^{1},{}^{\eta}\chi)\), where \(*\) indicates that this applies both to the special and nonspecial representations.
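Concretely, one standard choice of representatives (the precise matrices are not fixed here, and any \(G\)-conjugates would do) is
\[T^{0}=\left\{\begin{bmatrix}a&\varepsilon b\\ b&a\end{bmatrix}:a,b\in F,\ a^{2}-\varepsilon b^{2}=1\right\}\subset G_{x_{0}},\qquad T^{1}={}^{\eta}T^{0}\subset G_{x_{1}},\]
that is, the norm-one torus of the unramified quadratic extension \(F[\sqrt{\varepsilon}]\) and its \(\eta\)-conjugate; the reduction of \(T^{0}\) modulo \(\mathcal{P}\) is then the anisotropic torus \(\mathsf{T}_{0}\) of \(\mathsf{G}_{0}\).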
It follows from [13, Thm 5.3] that for any vertex \(x=gx_{i}\), with \(g\in G\), the depth-zero component \(\pi^{*}(T^{i},\chi)^{G_{x,0+}}\) is the inflation to \(G_{x}\) of \({}^{g}\sigma^{*}(\mathsf{T}_{i},\overline{\chi})\), but \(\pi^{*}(T^{i},\chi)^{G_{x,0+}}=\{0\}\) if \(x\) is not in the \(G\)-orbit of \(x_{i}\). If \(T\) is split, contained in a Borel subgroup \(B\), then \(\pi(T,\chi)=\operatorname{Ind}_{B}^{G}\chi\) is again in the principal series. It is immediate to see that for any vertex \(x\), \(\pi(T,\chi)^{G_{x,0+}}\cong\sigma(\mathsf{T},\overline{\chi})\) under the isomorphism \(G_{x,0}/G_{x,0+}\cong\operatorname{SL}(2,\mathfrak{f})\). In fact, \(\pi(T,\chi)\) is irreducible if and only if \(\sigma(\mathsf{T},\overline{\chi})\) is; its factors in the remaining cases are as follows. When \(\chi\in\{\nu,\nu^{-1}\}\), its Jordan-Holder factors are the trivial representation and the Steinberg representation \(\operatorname{St}\), and we have \(\operatorname{St}^{G_{x,0+}}=\overline{\operatorname{St}}\) and \(\mathbf{1}^{G_{x,0+}}=\mathbf{1}\). When \(\chi\) is quadratic, it is the sign character \(\mathtt{sgn}_{\tau}\) corresponding to the extension \(E=F[\sqrt{\tau}]\). As described in [13, SS8], there is in this case a realization of \(\pi(T,\chi)\) on the space \(L^{2}(F^{\times})\) such that its two irreducible summands \(H_{i}^{\tau}\), with \(i\in\{+,-\}\), are the functions supported \(N_{i}^{\tau}=\{u\in F^{\times}\mid i\mathsf{sgn}_{\tau}(u)=1\}\), respectively. Note that the cases \(-1\in(F^{\times})^{2}\) and \(-1\notin(F^{\times})^{2}\) thus need to be considered separately while computing the components of depth zero, but in fact the result may be stated uniformly, as follows. **Proposition 7.1**.: _For each \(\tau\in\{\varepsilon,-\varpi,-\varepsilon\varpi\}\) and \(i\in\{0,1\}\), the \(\mathsf{G}_{i}\)-representations \((H_{\pm}^{\tau})^{G_{x_{i},0+}}\) are irreducible and their isomorphism classes are given in Table 2._ Proof.: Without loss of generality we may assume \(x_{0},x_{1}\in\mathcal{A}(G,T)\). Since \(\overline{\mathsf{sgn}_{\varepsilon}}=\mathbf{1}\) and \(\overline{\mathsf{sgn}_{-\varpi}}=\overline{\mathsf{sgn}_{-\varepsilon\varpi} }=\mathsf{sgn}\), the unique quadratic character of \(\mathsf{T}\), we immediately have the relation \[(H_{+}^{\tau})^{G_{x,0+}}\oplus(H_{-}^{\tau})^{G_{x,0+}}\cong\sigma^{1}( \mathsf{T},\overline{\chi})\oplus\sigma^{\varepsilon}(\mathsf{T},\overline{ \chi}),\] for any vertex \(x\in\mathcal{A}(G,T)\). The restriction to \(G_{x_{0},0+}\) of these components was determined via character computations in [22, Thm 9.1], where (in the notation of that paper, of [13, SS15], and ours, respectively), \(\Xi_{\mathsf{sgn}}^{+}=\chi_{\alpha_{0}}^{-}=[\sigma^{-\varepsilon}(\mathsf{T },\mathsf{sgn})]\) and \(\Xi_{\mathsf{sgn}}^{-}=\chi_{\alpha_{0}}^{+}=[\sigma^{-1}(\mathsf{T},\mathsf{ sgn})]\). The negative signs used in our parametrizations simplify the statement of the result, yielding the first row of the table. For the restriction to \(G_{x_{1},0+}\), the proof of [22, Cor 9.3] showed that twisting \(\pi(T,\mathsf{sgn}_{\tau})\) by \(\omega=[\begin{smallmatrix}0&1\\ \varpi&0\end{smallmatrix}]\in\mathrm{GL}(2,F)\) preserves \(H_{\pm}^{\tau}\) when \(\mathsf{sgn}_{\tau}(-\varpi)=1\) and interchanges them otherwise. Applying this twist to \(\pi^{G_{x_{0},0+}}\) yields \(\pi^{G_{x_{1},0+}}\) and sends a representation \(\sigma\) of \(G_{x_{0}}\) to the representation \({}^{\omega}\sigma\) of \(G_{x_{1}}\). 
Twisting by \(\omega\) sends the inflation of the representation \(\sigma^{u}(\mathsf{T},\overline{\chi})\) of \(G_{x_{0}}\) to the inflation of the representation \(\sigma^{-u}(\mathsf{T},\overline{\chi})\) of \(G_{x_{1}}\), since it takes \(\mathcal{O}_{u}\) to \(\mathcal{O}_{-u\varpi}\), which in turn determines the choice of Gel'fand-Graev representation \(\gamma_{\mathcal{O}_{u}}\).3 A careful accounting of signs yields the second row of the table. Footnote 3: This calculation was neglected in the proof of [22, Cor 9.3], yielding an incorrect statement for the depth-zero components. For convenience, we list the isomorphism class of \(\pi^{G_{x,0+}}\) for the remaining irreducible representations \(\pi\) in Table 3. \begin{table} \begin{tabular}{|c||c c|c c|c c|} \hline \(\pi\) & \(H_{+}^{\varepsilon}\) & \(H_{-}^{\varepsilon}\) & \(H_{+}^{-\varpi}\) & \(H_{-}^{-\varpi}\) & \(H_{+}^{-\varepsilon\varpi}\) & \(H_{-}^{-\varepsilon\varpi}\) \\ \hline \(\pi^{G_{x_{0},0+}}\) & \(\overline{\mathsf{St}}\) & \(\mathbf{1}\) & \(\sigma^{1}(\mathsf{T},\mathsf{sgn})\) & \(\sigma^{\varepsilon}(\mathsf{T},\mathsf{sgn})\) & \(\sigma^{1}(\mathsf{T},\mathsf{sgn})\) & \(\sigma^{\varepsilon}(\mathsf{T},\mathsf{sgn})\) \\ \(\pi^{G_{x_{1},0+}}\) & \(\mathbf{1}\) & \(\overline{\mathsf{St}}\) & \(\sigma^{1}(\mathsf{T},\mathsf{sgn})\) & \(\sigma^{\varepsilon}(\mathsf{T},\mathsf{sgn})\) & \(\sigma^{\varepsilon}(\mathsf{T},\mathsf{sgn})\) & \(\sigma^{1}(\mathsf{T},\mathsf{sgn})\) \\ \hline \end{tabular} \end{table} Table 2. The isomorphism classes of the depth-zero representations of \(G_{x_{i}}\) occuring in the restriction to \(G_{x_{i}}\) of the decomposable principal series. \begin{table} \begin{tabular}{|c|c|c|} \hline \(T\) & \(\pi\) & \(\pi^{G_{x,0+}}\) \\ \hline \hline \(T\) split & \(\pi=\pi(T,\chi)\) & \(\sigma(\mathsf{T},\overline{\chi})\) \\ \hline & \(\pi=\mathrm{St}\) & \({}^{g}\mathrm{St}\) \\ \hline \hline \(T^{i}\) unramified & \(\pi=\pi(T^{i},\chi)\) & \(\sigma(\mathsf{T}_{i},\overline{\chi})\) & if \(x\sim x_{i}\) \\ \(i\in\{0,1\}\) & & \(\{0\}\) & if \(x\not\sim x_{i}\) \\ \hline \end{tabular} \end{table} Table 3. The depth-zero representations of \(G_{x}\) occuring in the restriction to \(G_{x}\) of the irreducible principal series, Steinberg and supercuspidal representations, for any vertex \(x\in\mathcal{B}(G)\). ### Wave front sets The wave front set is determined with the following result that is based on [1, Thm 4.5]. **Proposition 7.2**.: _Let \(\pi\) be an irreducible representation of depth zero of \(\mathrm{SL}(2,F)\). Suppose \(\mathrm{char}(F)=0\) and \(p>3e+1\), where \(e\) is the absolute ramification index of \(F\) over \(\mathbb{Q}_{p}\). Then we have_ \[\mathcal{WF}(\pi)=\{\mathcal{O}\in\mathscr{O}(0)\mid\exists x\text{ a vertex of }\mathcal{B}(G),\sigma\in\pi^{G_{x,0+}}\text{such that }\sigma\text{ occurs in }\gamma_{\mathcal{O}}\},\] _where \(\gamma_{\mathcal{O}}\) is the Gel'fand-Graev representation (7.1) of \(\mathsf{G}_{x}\cong\mathrm{SL}(2,\mathfrak{f})\)._ Proof.: The hypotheses imply that \(\exp\) converges on \(\mathfrak{g}_{0+}\) and that the local character expansion holds. In [1], Barbasch and Moy used (generalized) Gel'fand-Graev characters as test functions to determine the wave front set of \(\pi\). 
For each nilpotent orbit \(\mathcal{O}\) that is represented by a depth-zero coset at some \(x\), let \([\gamma_{\mathcal{O}}]\) denote the lift to \(G_{x,0}\) of the character of the corresponding Gel'fand-Graev representation of \(\mathsf{G}_{x}=G_{x,0}/G_{x,0+}\), viewed as a function on \(G\). It is supported on the subset \(G_{0+}\cap G_{x,0}\) of topologically unipotent elements. Let \(f_{x,\mathcal{O}}\) be the function on \(\mathfrak{g}\), with support in \(\mathfrak{g}_{0+}\cap\mathfrak{g}_{x,0}\), that is given by \(f_{x,\mathcal{O}}=[\gamma_{\mathcal{O}}]\circ\exp\). Then they show that \(\widehat{\mu_{\mathcal{O}^{\prime}}}(f_{x,\mathcal{O}})=0\) if \(\mathcal{O}\not\subset\overline{\mathcal{O}^{\prime}}\) and is nonzero if \(\mathcal{O}=\mathcal{O}^{\prime}\). Thus \(\Theta_{\pi}([\gamma_{\mathcal{O}}])=0\) for all \(\mathcal{O}\) that do not meet the wave front set of \(\pi\) and is nonzero when \(\mathcal{O}\in\mathcal{WF}(\pi)\). For any irreducible representation \(\sigma\) of \(\mathsf{G}_{x}\), let \(m(\sigma,\pi)\) denote the multiplicity of (the inflation of) \(\sigma\) in \(\pi^{G_{x,0+}}\) and \(m(\sigma,\gamma_{\mathcal{O}})\) the multiplicity of \(\sigma\) in \(\gamma_{\mathcal{O}}\). Then [1, Thm 4.5(4)] becomes \[\Theta_{\pi}([\gamma_{\mathcal{O}}])=\sum_{\sigma}m(\sigma,\pi)m(\sigma,\gamma_{\mathcal{O}}),\] whence our result for the case of \(\mathrm{SL}(2,F)\). **Corollary 7.3**.: _Under the hypothesis of Proposition 7.2, the wave front sets corresponding to the depth-zero representations of \(\mathrm{SL}(2,F)\) are as given in Table 4._ Proof.: For \(u\in\{1,\varepsilon\}\) and \(j\in\{0,1\}\), the nilpotent orbit \(\mathcal{O}_{u\varpi^{j}}\) is represented by a depth-zero coset at \(x_{i}\) if and only if \(i=j\), and in this case it corresponds to the nilpotent orbit in the quotient \(\mathfrak{g}_{x_{i},0}/\mathfrak{g}_{x_{i},0+}\cong\mathfrak{sl}(2,\mathfrak{f})\) under \(\mathsf{G}_{i}\cong\mathrm{SL}(2,\mathfrak{f})\) that we denoted \(\mathcal{O}_{u}\). Therefore the Gel'fand-Graev representations \(\gamma_{\mathcal{O}}\) referred to in Proposition 7.2 are \(\gamma_{\mathcal{O}_{1}}\) and \(\gamma_{\mathcal{O}_{\varepsilon}}\) for \(x=x_{0}\), and \(\gamma_{\mathcal{O}_{\varpi}}\) and \(\gamma_{\mathcal{O}_{\varepsilon\varpi}}\) for \(x=x_{1}\). By conjugacy, these two vertices suffice. The decomposition of \(\pi^{G_{x,0+}}\) for \(x\in\{x_{0},x_{1}\}\) was given in Tables 2 and 3 for all irreducible depth-zero representations \(\pi\), and matching these with the decomposition of the Gel'fand-Graev representations of the corresponding groups \(\mathsf{G}_{i}\) yields Table 4. In light of Proposition 7.2, we may define \(\mathcal{WF}(\pi)\) by Table 4, even over fields where Proposition 7.2 does not apply. Theorem 7.5 below expresses that, just as in the positive-depth case, this is consistent (for all fields with residual characteristic different from 2). **Corollary 7.4**.: _For each depth-zero irreducible representation \(\pi\) of \(\operatorname{SL}(2,F)\) there exists an element \(\Gamma\in\mathfrak{g}_{x,0}^{*}\), for some \(x\in\mathcal{B}(G)\), such that \(\mathcal{WF}(\pi)=\operatorname{Nil}(\Gamma)\)._ Existence follows immediately from Table 4 and Lemma 4.2, though the elements \(\Gamma\) for which \(\mathcal{WF}(\pi)=\operatorname{Nil}(\Gamma)\) do not correspond to minimal \(K\)-types for \(\pi\) (as these latter are not realized by elements on the Lie algebra).
However, on an _ad-hoc_ basis, we can make this association of \(\pi\) with \(\Gamma\) more explicit, as follows. For \(T\) unramified or split, and \(x\in\mathcal{B}(T)\subset\mathcal{B}(G)\), we can in the same spirit attach to any _regular_\(\pi(T,\chi)\) (in the sense of Kaletha [19, Prop 3.4.27]) any element \(\Gamma\in\mathfrak{g}_{x,0}^{*}\) whose centralizer in \(G\) is \(T\). The same holds for \(\pi=\operatorname{St}\), whereas we associate \(\Gamma=0\) to \(\mathbf{1}\). When \(\pi^{u}(T^{i},\mathsf{sgn})\) is a special representation (for some \(u\in\{1,\varepsilon\}\) and \(i\in\{0,1\}\)) then it is a supercuspidal unipotent representation and \(\Gamma\) is chosen to be a nilpotent element in the lift to \(\mathfrak{g}_{x_{i},0}^{*}\) of the nilpotent orbit corresponding to \(\sigma^{u}(\mathsf{T}_{i},\mathsf{sgn})\). When \(\pi\in\{H_{\pm}^{\tau}\mid\tau\in\{\varepsilon,\varpi,\varepsilon\varpi\}\}\), \(\Gamma\) is a choice of element of an anisotropic torus \(T\) that splits over \(F[\sqrt{\tau}]\). However, while the orbit of \(\Gamma\) satisfies \(\operatorname{Nil}(\Gamma)=\mathcal{WF}(\pi)\), its centralizer in this case need not correspond to \(\pi\): when \(-1\in(F^{\times})^{2}\) and \(\tau\in\{\varpi,\varepsilon\varpi\}\), the centralizer may be one of two possible tori \(T=\operatorname{Cent}_{G}(\Gamma)\) up to conjugacy, and neither one is expressly associated to \(\pi\). We conclude this section with our main result. **Theorem 7.5**.: _Let \(\pi\) be an irreducible representation of \(G\) of depth zero with central character \(\zeta\). For any vertex \(x\in\mathcal{B}(G)\), we have_ \[\operatorname{Res}_{G_{x}}\pi\cong\pi^{G_{x,0+}}\oplus\bigoplus_{\mathcal{O} \in\mathcal{WF}(\pi)}\tau_{x}(\mathcal{O},\zeta), \tag{7.2}\] _where \(\mathcal{WF}(\pi)\) is as in Table 4. It follows that \(\operatorname{Res}_{G_{x,0+}}\pi\) takes the form of (6.5) with constant coefficient \(n_{x}(\pi)=\dim(\pi^{G_{x,0+}})\)._ Proof.: The decomposition will follow from the main results of [19, 19] as in the proof of Theorem 6.2. Let \(\pi\) be a depth-zero representation of \(G\) with central character \(\zeta\), and let \(x\in\{x_{0},x_{1}\}\). For irreducible depth-zero principal series, one has \(\gamma_{0}=\gamma_{1}=0\) in [19, Thm 7.4]. Matching notation as is (6.2), we conclude that \(\mathcal{S}_{x}(X_{u\varpi^{-d}},\zeta)\) occurs in \(\operatorname{Res}_{G_{x}}\pi\), for \(u\in\{1,\varepsilon\}\), for each \(d>0\), and that these exhaust the irreducible summands. Therefore the summands can be regrouped as the sum of \(\tau_{x}(\mathcal{O},\zeta)\), as defined in (5.7), over all regular nilpotent orbits, as required. As the positive-depth summands of \(\operatorname{Res}_{G_{x}}\pi\) are identical for all depth-zero irreducible principal series, the case of \(\pi=\operatorname{St}\) follows since \(\operatorname{Res}_{G_{x}}\mathbf{1}\) has no positive-depth components. 
The results of [15, Thm 9.1, Thm 9.2] yield \[\operatorname{Res}_{G_{x_{0}}}H_{+}^{\tau}=\begin{cases}\operatorname{St}\oplus \bigoplus_{d>0}(\mathcal{S}_{x_{0}}(\varpi^{-2d}X_{1},\zeta)\oplus\mathcal{S}_{x _{0}}(\varpi^{-2d}X_{\varepsilon},\zeta))&\text{if $\tau=\varepsilon$;}\\ \sigma^{1}(\mathsf{T},\mathsf{sgn})\oplus\bigoplus_{d>0}\mathcal{S}_{x_{0}}( \varpi^{-d}X_{1},\zeta)&\text{if $\tau=-\varpi$;}\\ \sigma^{1}(\mathsf{T},\mathsf{sgn})\oplus\bigoplus_{d>0}(\mathcal{S}_{x_{0}}( \varpi^{-2d}X_{1},\zeta)\oplus\mathcal{S}_{x_{0}}(\varpi^{-2d+1}X_{\varepsilon },\zeta))&\text{if $\tau=-\varepsilon\varpi$.}\end{cases}\] Regrouping the positive-depth summands yields the decomposition \[\operatorname{Res}_{G_{x_{0}}}H_{+}^{\tau}=\begin{cases}\operatorname{St} \oplus\tau_{x_{0}}(\mathcal{O}_{1},\zeta)\oplus\tau_{x_{0}}(\mathcal{O}_{ \varepsilon},\zeta)&\text{if $\tau=\varepsilon$;}\\ \sigma^{1}(\mathsf{T},\mathsf{sgn})\oplus\tau_{x_{0}}(\mathcal{O}_{1},\zeta )\oplus\tau_{x_{0}}(\mathcal{O}_{\varpi},\zeta)&\text{if $\tau=-\varpi$;}\\ \sigma^{1}(\mathsf{T},\mathsf{sgn})\oplus\tau_{x_{0}}(\mathcal{O}_{1},\zeta )\oplus\tau_{x_{0}}(\mathcal{O}_{\varepsilon\varpi},\zeta)&\text{if $\tau=- \varepsilon\varpi$,}\end{cases}\] which coincides with the wave front set computed in Table 2. Since the positive-depth summands of \(H_{+}^{\tau}\oplus H_{-}^{\tau}\) form \(\bigoplus_{\mathcal{O}\in\mathscr{O}(0)\setminus\{0\}}\tau_{x_{0}}(\mathcal{ O},\zeta)\), and the wave front sets of these representations are complementary, this yields the result for \(\operatorname{Res}_{G_{x_{0}}}H_{-}^{\tau}\) as well. To determine \(\operatorname{Res}_{G_{x_{1}}}\pi\) we proceed as in the proof of Proposition 7.1. Conjugation by \(\omega\) interchanges the components of the principal series _except_ when: \(\tau=-\varpi\) and \(-1\in(F^{\times})^{2}\); or \(\tau=-\varepsilon\varpi\) and \(-1\notin(F^{\times})^{2}\). 
Since the depth-zero components were computed in Proposition 7.1 and \({}^{\omega}\tau_{x_{0}}(\mathcal{O}_{u},\zeta)=\tau_{x_{1}}(\mathcal{O}_{-u \varpi},\zeta)\), we deduce that \[\operatorname{Res}_{G_{x_{1}}}H_{-}^{\tau}=\begin{cases}\operatorname{St} \oplus\tau_{x_{1}}(\mathcal{O}_{\varpi},\zeta)\oplus\tau_{x_{1}}(\mathcal{O} _{\varepsilon\varpi},\zeta)&\text{if $\tau=\varepsilon$;}\\ \sigma^{-1}(\mathsf{T},\mathsf{sgn})\oplus\tau_{x_{1}}(\mathcal{O}_{-\varpi}, \zeta)\oplus\tau_{x_{1}}(\mathcal{O}_{-\varpi^{2}},\zeta)&\text{if $\tau=- \varpi$ and $-1\notin(F^{\times})^{2}$;}\\ \sigma^{-1}(\mathsf{T},\mathsf{sgn})\oplus\tau_{x_{1}}(\mathcal{O}_{- \varpi},\zeta)\oplus\tau_{x_{1}}(\mathcal{O}_{-\varepsilon\varpi^{2}},\zeta) &\text{if $\tau=-\varepsilon\varpi$ and $-1\in(F^{\times})^{2}$;}\\ \sigma^{-\varepsilon}(\mathsf{T},\mathsf{sgn})\oplus\tau_{x_{1}}(\mathcal{O} _{-\varepsilon\varpi},\zeta)\oplus\tau_{x_{1}}(\mathcal{O}_{-\varpi^{2}}, \zeta)&\text{if $\tau=-\varepsilon\varpi$ and $-1\notin(F^{\times})^{2}$;}\\ \sigma^{-\varepsilon}(\mathsf{T},\mathsf{sgn})\oplus\tau_{x_{1}}(\mathcal{O} _{-\varepsilon\varpi},\zeta)\oplus\tau_{x_{1}}(\mathcal{O}_{-\varpi^{2}}, \zeta)&\text{if $\tau=-\varepsilon\varpi$ and $-1\notin(F^{\times})^{2}$;}\\ \end{cases}\] Thus, in any case, the nilpotent orbits arising in \(\operatorname{Res}_{G_{x_{1}}}H_{-}^{\varepsilon}\) are \(\{\mathcal{O}_{\varpi},\mathcal{O}_{\varepsilon\varpi}\}\); those arising in \(\operatorname{Res}_{G_{x_{1}}}H_{-}^{-\varpi}\) are \(\{\mathcal{O}_{\varepsilon},\mathcal{O}_{\varepsilon\varpi}\}\); and those arising in \(\operatorname{Res}_{G_{x_{1}}}H_{-}^{-\varepsilon\varpi}\) are \(\{\mathcal{O}_{\varpi},\mathcal{O}_{\varepsilon}\}\), which again is consistent with Table 4, as required. Now suppose that \(\pi_{i}=\operatorname{c-Ind}_{G_{i}}^{G}\sigma\) is a supercuspidal representation. We use [15, Cor 5.2, Thm 5.3], where \(\eta=[\begin{smallmatrix}1&0\\ 0&\varpi\end{smallmatrix}]\), \(\sigma_{0}^{+}\) corresponds to our \(\sigma^{-1}(T^{0},\chi)\), and \(\sigma_{0}^{-}\) is our \(\sigma^{-\varepsilon}(T^{0},\chi)\). Since \({}^{\eta}\mathcal{O}_{u}=\mathcal{O}_{u\varpi}\), the \(G_{x_{1}}\)-representation \({}^{\eta}(\sigma_{0}^{+})\) is the inflation of \(\sigma^{-1}(T^{1},\chi)\) to \(G_{x_{1}}\). We thus infer the decompositions \[\operatorname{Res}_{G_{x_{0}}}\pi=\begin{cases}\sigma\oplus\bigoplus_{t>0}\big{(} \mathcal{S}_{x_{0}}(-\varpi^{-2t}X_{1},\zeta)\oplus\mathcal{S}_{x_{0}}(-\varpi^{ -2t}X_{\varepsilon},\zeta)\big{)}&\text{if $\pi$ nonspecial and $i=0$;}\\ \bigoplus_{t>0}\big{(}\mathcal{S}_{x_{0}}(-\varpi^{-2t+1}X_{1},\zeta)\oplus \mathcal{S}_{x_{0}}(-\varpi^{-2t+1}X_{\varepsilon},\zeta)\big{)}&\text{if $\pi$ nonspecial and $i=1$;}\\ \sigma\oplus\bigoplus_{t>0}\mathcal{S}_{x_{0}}(-\varpi^{-2t}X_{u},\zeta)&\text{ if $\pi=\pi^{-u}(T^{0},\chi)$ is special and $i=0$;}\\ \bigoplus_{t>0}\mathcal{S}_{x_{0}}(-\varpi^{-2t+1}X_{u},\zeta)&\text{if $\pi=\pi^{-u}(T^{1},\chi)$ is special and $i=1$.}\end{cases}\] Comparing with Table 4, we conclude that (7.2) holds for \(\operatorname{Res}_{G_{x_{0}}}\pi_{i}\) in each case. The result for general \(x\) now follows as in the proof of Theorem 6.2. 
Finally, the values \(n_{x}(\pi)=\dim(\pi^{G_{x,0+}})\) can be deduced from Tables 2 and 3: it is \(q+1\) for irreducible principal series, \(q-1\) for Deligne-Lusztig cuspidal representations, \(q\) for \(\overline{\operatorname{St}}\), \((q-1)/2\) for the special unipotent representations and \((q+1)/2\) for the components of the reducible principal series. ## 8. Applications ### The Fourier transform of a nilpotent orbital integral As a first application, we derive a formula for the Fourier transform of a nilpotent orbital integral in any open set of the form \(\mathfrak{g}_{x,0+}\) in terms of the trace characters of the representations \(\tau_{x}(\mathcal{O},\zeta)\). **Proposition 8.1**.: _Let \(x\in\mathcal{B}(G)\) be a vertex. Let \([\tau_{x}(\mathcal{O})]\) denote the restriction to \(G^{\mathrm{reg}}_{x,0+}\) of the trace character of the representation \(\tau_{x}(\mathcal{O},\zeta)\), for either choice of central character \(\zeta\). Assume \(\exp\) converges on \(\mathfrak{g}_{x,0+}\). Then for each nonzero nilpotent orbit \(\mathcal{O}\) and \(X\in\mathfrak{g}^{\mathrm{reg}}_{x,0+}\) we have_ \[\widehat{\mu_{\mathcal{O}}}(X)=\begin{cases}q/2+[\tau_{x}(\mathcal{O})](\exp X )&\text{if $\mathcal{O}$ has even parity depth at $x$;}\\ 1/2+[\tau_{x}(\mathcal{O})](\exp X)&\text{if $\mathcal{O}$ has odd parity depth at $x$.}\end{cases}\] _As \(x\) ranges over the vertices of \(\mathcal{B}(G)\), these expressions determine the function \(\widehat{\mu_{\mathcal{O}}}\) on \(\mathfrak{g}^{\mathrm{reg}}_{1/2+}\)._ Proof.: Let \(\Theta_{\pi}\) denote the character of the depth-\(r\) representation \(\pi\). We assume the functions \(\widehat{\mu}_{\mathcal{O}}\) are normalized as in [10], so that the coefficients \(c_{\mathcal{O}}\) corresponding to \(\mathcal{O}\in\mathcal{WF}(\pi)\) in the local character expansion of \(\Theta_{\pi}\circ\exp\) are all equal to \(1\). Thus on \(\mathfrak{g}^{\mathrm{reg}}_{x,r+}\) we have \[\Theta_{\pi}\circ\exp=c_{0}(\pi)+\sum_{\mathcal{O}\in\mathcal{WF}(\pi)}\hat{ \mu}_{\mathcal{O}}.\] The constant coefficients for supercuspidal representations are given in Table 5, following, for example, [13, Tables 1-4]. For (irreducible components of) principal series, the constant term of the local character expansion is trivial, except in the case of the trivial and Steinberg representations, which have constant terms \(1\) and \(-1\), respectively. Theorem 7.5, on the other hand, gives a formula for the character of any irreducible depth-zero representation on \(G_{x,0+}\). Matching these for the special unipotent representations \(\pi=\pi^{u}(T,\chi)\) yields the given formula. It is moreover direct to verify the consistency of this expression across the local character expansions of all irreducible representations, including those of positive depth (on \(G_{x,r+}\) as in Theorem 6.5). Finally, we note that for \(G=\mathrm{SL}(2,F)\) we have \(\mathfrak{g}_{1/2+}=\dot{G}(\mathfrak{g}_{x_{0},0+}\cup\mathfrak{g}_{x_{1},0+ })\subsetneq\mathfrak{g}_{0+}\), which limits the \(G\)-domain on which the formulas hold. **Remark 8.2**.: Much more explicit formulae for the functions \(\widehat{\mu}_{\mathcal{O}}\) have been computed for the group \(\mathrm{SL}(2,F)\) in [1, 1] among others. They have also noted that, under the exponential map, the characters of the five representations \(\{1,\pi^{u}(T^{i},\chi)\mid u\in\{1,\varepsilon\},i\in\{0,1\}\}\) form another basis for the span of the functions \(\widehat{\mu}_{\mathcal{O}}\). 
In fact the special representations have local character expansions of the form \[\Theta_{\pi}(\exp(X))=\widehat{\mu_{\mathcal{O}}}(X)-1/2, \tag{8.1}\] \begin{table} \begin{tabular}{|c|c|} \hline Representation of \(\mathrm{SL}(2,F)\) & coefficient \(c_{0}\) of \(\mu_{\{0\}}\) \\ of depth \(r\geq 0\) & in local character expansion \\ \hline \(\pi(T,\chi)\), \(T\) unramified & \(-q^{r}\) \\ \(\pi^{u}(T^{i},\chi)\), \(i\in\{0,1\}\), \(u\in\{1,\varepsilon\}\) & \(-1/2\) \\ \(\pi(T,\chi)\), \(T\) ramified & \(q^{r-1/2}(q+1)/2\) \\ St, Steinberg representation & \(-1\) \\ \hline \end{tabular} \end{table} Table 5. Values of the constant term in the local character expansion of supercuspidal and Steinberg representations of \(\mathrm{SL}(2,F)\). for the single corresponding orbit \(\mathcal{O}\), and this holds on the strictly larger set \(\mathfrak{g}_{0+}^{\mathrm{reg}}\). An advantage to Proposition 8.1 is the simplicity and explicitness of the construction, which uses no more than a vertex and a representative of the orbit as input. In this, it recalls some of the original formulae for these Fourier transforms of nilpotent orbital integrals in [10]. ### Computing the polynomial \(\dim(\pi^{G_{x,2n}})\) This arose from a question posed to me by Marie-France Vigneras in 2022 and answers [14, Question 1.1] to the negative (not unexpectedly) for this case. If \(\pi\) is an irreducible representation of \(G\), the local character expansion implies that \(\dim(\pi^{G_{x,2n}})\) is expressible as a polynomial in \(q\), as described in [1, SS5.1]; see also [14, Remark 11.8]. Here we can obtain this polynomial as a corollary of Theorem 6.5 and 7.5, using the explicit values computed in Proposition 6.7. **Corollary 8.3**.: _Let \(\pi\) be an irreducible representation of \(G=\mathrm{SL}(2,F)\) of depth \(r\). Then for each integer \(n>0\), we have_ \[\dim(\pi^{G_{x,2n}})=\begin{cases}q^{2n}+q^{2n-1}&\text{ if $\pi$ is an irreducible principal series},\\ q^{2n-1}-q^{r}&\text{ if $\pi$ is supercuspidal nonspecial, from a vertex $\sim x$},\\ q^{2n}-q^{r}&\text{ if $\pi$ is supercuspidal nonspecial, from a vertex $\not x$},\\ \frac{1}{2}(q+1)(q^{2n-1}-q^{r-\frac{1}{2}})&\text{ if $\pi$ is supercuspidal, from a nonvertex}.\end{cases}\] _On the other hand, if \(\pi_{s}=H_{s}^{\varepsilon}\) then \(\dim(\pi^{G_{x,2n}})=q^{2n-1}\) when the parity depth at \(x\) of the orbits in \(\mathcal{WF}(\pi_{s})\) is even, and equals \(q^{2n}\) otherwise; and if \(\pi=\mathrm{St}\), then \(\dim(\pi^{G_{x,2n}})=q^{2n}+q^{2n-1}-1\). In all other cases, \(\dim(\pi^{G_{x,2n}})\) is exactly half of that of a corresponding nonspecial representation._
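As a consistency check (assuming \(2n>r\), so that every depth between \(r+1\) and \(2n-1\) occurs), the first entry can be recovered directly from Theorem 6.2 and Proposition 6.7 (for depth \(r>0\); the depth-zero case works the same way via Theorem 7.5): an irreducible component of depth \(d\) has nonzero \(G_{x,2n}\)-fixed vectors exactly when \(d<2n\), and an irreducible principal series contributes two components of each depth \(d>r\), each of dimension \(\frac{1}{2}q^{d-1}(q^{2}-1)\), so that
\[\dim(\pi^{G_{x,2n}})=(q+1)q^{r}+\sum_{d=r+1}^{2n-1}q^{d-1}(q^{2}-1)=(q+1)q^{r}+(q+1)\bigl(q^{2n-1}-q^{r}\bigr)=q^{2n}+q^{2n-1}.\]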
2309.16119
ModuLoRA: Finetuning 2-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers
We propose a memory-efficient finetuning algorithm for large language models (LLMs) that supports finetuning LLMs with 65B parameters in 2/3/4-bit precision on as little as one 24GB GPU. Our method, modular low-rank adaptation (ModuLoRA), integrates any user-specified weight quantizer with finetuning via low-rank adapters (LoRAs). Our approach relies on a simple quantization-agnostic backward pass that adaptively materializes low-precision LLM weights from a custom black-box quantization module. This approach enables finetuning 2-bit and 3-bit LLMs for the first time -- leveraging state-of-the-art 2-bit QuIP# quantization and 3-bit OPTQ quantization -- outperforming finetuning that relies on less sophisticated 4-bit and 8-bit methods. In our experiments, ModuLoRA attains competitive performance on text classification, natural language inference, and instruction following tasks using significantly less memory than existing approaches, and we also surpass the state-of-the-art ROUGE score on a popular summarization task. We release ModuLoRA together with a series of low-precision models as part of LLMTools, a user-friendly library for quantizing, running, and finetuning LLMs on consumer GPUs.
Junjie Yin, Jiahao Dong, Yingheng Wang, Christopher De Sa, Volodymyr Kuleshov
2023-09-28T02:55:01Z
http://arxiv.org/abs/2309.16119v2
# ModuLoRA: Finetuning 3-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers
###### Abstract
We propose a memory-efficient finetuning algorithm for large language models (LLMs) that supports finetuning LLMs with 65B parameters in 3-bit or 4-bit precision on as little as one 48GB GPU. Our method, modular low-rank adaptation (ModuLoRA), integrates any user-specified weight quantizer with finetuning via low-rank adapters (LoRAs). Our approach relies on a simple quantization-agnostic backward pass that adaptively materializes low-precision LLM weights from a custom black-box quantization module. This approach enables finetuning 3-bit LLMs for the first time--leveraging state-of-the-art 3-bit OPTQ quantization--often outperforming finetuning that relies on less sophisticated 4-bit and 8-bit methods. In our experiments, ModuLoRA attains competitive performance on text classification, natural language inference, and instruction following tasks using significantly less memory than existing approaches, and we also surpass the state-of-the-art ROUGE score on a popular summarization task. We release ModuLoRA together with a series of low-precision models--including the first 3-bit instruction following Alpaca LLMs--as part of LLMTools, a user-friendly library for quantizing, running, and finetuning LLMs on consumer GPUs.
## 1 Introduction
Large language models (LLMs) excel across diverse tasks such as code generation, instruction following, and reasoning (Brown et al., 2020; Scao et al., 2023; Zhang et al., 2022). However, the massive size of these models--often reaching into hundreds of billions of parameters--makes them challenging to deploy on downstream tasks and motivates research into efficient finetuning algorithms (Li and Liang, 2021; Hu et al., 2022). Here, we propose modular low-rank adaptation (ModuLoRA), a memory-efficient finetuning algorithm for large language models (LLMs) that runs on consumer-grade hardware. For example, ModuLoRA finetunes a LLAMA-30B model (Touvron et al., 2023) on one Nvidia RTX 3090 24GB GPU and a LLAMA-65B on one RTX A6000 48GB GPU. Our approach adds high-precision low-rank adapters to the low-precision 3-bit or 4-bit weights of a frozen base LLM obtained via modern quantization algorithms (Hubara et al., 2021; Yao et al., 2021; Frantar et al., 2023). Crucially, ModuLoRA does not specify its own quantization procedure--rather, it integrates with user-defined quantizers via a simple quantization-agnostic backward pass. This backward pass adaptively materializes low-precision LLM weights obtained from a black-box quantizer and integrates them with high-precision low-rank adapters. We release ModuLoRA as part of LLMTools, a user-friendly library that enables finetuning LLMs on consumer GPUs. When paired with the modern OPTQ quantizer (Frantar et al., 2023), ModuLoRA enables finetuning 3-bit LLMs for the first time, often outperforming methods based on less sophisticated 4-bit and 8-bit quantization. Across tasks in classification, natural language inference, and instruction following, our 3-bit and 4-bit models achieve competitive performance using significantly less memory than existing approaches. On a popular summarization benchmark, we attain a new state-of-the-art ROUGE score using a quantized LLAMA-65B model. We open-source all our low-precision models, including the first 3-bit family of Alpaca models that feature strong instruction-following performance at multiple model sizes.
Our findings reveal that high performance can be achieved using smaller quantized LLMs than previously thought. **Contributions.** In summary, this paper makes the following contributions: (1) we propose ModuLoRA, a memory-efficient finetuning method that operates over low-precision weights obtained via a user-specified black-box quantization module; (2) we release LLMTools, a user-friendly Python library that features an implementation of ModuLoRA and that enables users to easily finetune the largest LLMs on consumer GPUs; (3) we provide empirical evidence that high performance on downstream tasks can be achieved with a smaller LLM than previously thought. ## 2 Background and Related Work We are interested in finetuning a pre-trained LLM for downstream tasks (Li and Liang, 2021; Lester et al., 2021; Houlsby et al., 2019; Rebuffi et al., 2017). LLMs use a transformer architecture where almost all of the learnable weights--and almost all of the memory used to store these weights--appear in linear layers.1 We let the weights and biases of these \(n\) linear layers be denoted \(\textbf{W}^{(i)}\) and \(\textbf{b}^{(i)}\) for \(i\in\{1,2,...,n\}\). Given a pretrained network, our goal is to finetune it for downstream tasks using much less working memory than would be needed to store all of the **W** in full precision. Footnote 1: These layers include the \(K\), \(V\), \(Q\), and \(O\) projection matrices of attention blocks and the linear layers of MLP blocks. ### Large Language Model Finetuning Because of the high memory requirements needed to fine-tune and store all the weights of a LLM, practitioners have developed a variety of _parameter-efficient fine tuning_ methods that learn in a lower dimensional space. These methods include tuning only the output layer (Devlin et al., 2018) and tuning the prompt or prefix passed as input to an LLM (Lester et al., 2021; Li and Liang, 2021; Liu et al., 2023; 2), as well as LoRA, which is the focus of this work. Low-Rank Adaptation (LoRA)The LoRA algorithm (Hu et al., 2022) decomposes the weights **W** into a sum of frozen base model weights \(\textbf{W}_{0}\in\mathbb{R}^{d\times d}\) and a small additive low-rank adapter \(\textbf{AB}^{\top}\) consisting of the product of two rectangular matrices \(\textbf{A},\textbf{B}\in\mathbb{R}^{d\times r}\), where \(r>0\) indicates the rank2: Footnote 2: For simplicity here we consider square weight matrices **W**; the rectangular case is a straightforward generalization. \[\textbf{W}=\textbf{W}_{0}+\textbf{AB}^{\top}. \tag{1}\] LoRA reduces the number of trained parameters by a factor of \(2r/d\), lowering the storage, transmission, and task-switching overhead of inference on a system that already maintains the base model. However, LoRA must hold the base weights \(\textbf{W}_{0}\) in memory, which requires multiple high-end GPUs and precludes tuning large LLMs on commodity hardware. ### Low-Precision Machine Learning The computational requirements of modern machine learning models motivate a wide range of efficient machine learning algorithms (Li & Liang, 2021; Hu et al., 2022; Frantar et al., 2023). QuantizationQuantization methods for neural networks reduce the number of bits required to store model weights (Dong et al., 2019, 2020; Yao et al., 2022; Park et al., 2023). A \(b\)-bit quantization method has the form \[(\hat{\mathbf{W}}_{q},\mathbf{z},\mathbf{s})=\mathcal{Q}(\mathbf{W})\qquad \qquad\qquad\qquad\hat{\mathbf{W}}=\mathcal{D}(\hat{\mathbf{W}}_{q},\mathbf{z},\mathbf{s}). 
\tag{2}\] Here, the quantization algorithm \(\mathcal{Q}\) takes a weight matrix \(\mathbf{W}\in\mathbb{R}^{d\times d}\) (or its subset) and outputs a quantized version \(\hat{\mathbf{W}}_{q}\in\{0,1,\ldots,2^{b-1}\}^{d\times d}\) (using \(b\) bits to represent each entry of \(\mathbf{W}\)), as well as zero and scale parameters \(\mathbf{z},\mathbf{s}\in\mathbb{R}^{d}\) (in full precision). The dequantization algorithm \(\mathcal{D}(\hat{\mathbf{W}}_{q},\mathbf{z},\mathbf{s})\) recovers an approximation \(\hat{\mathbf{W}}\in\mathbb{R}^{d\times d}\) by rescaling the quantized weights as \(\hat{\mathbf{W}}=\mathbf{s}\odot\hat{\mathbf{W}}_{q}+\mathbf{z}\), where \(\odot\) denotes the Hadamard product, and \(\odot,+\) are extended with numpy-style broadcasting. Recently, Frantar et al. (2023) proposed OPTQ, a quantization algorithm that scales to modern LLMs. The method iteratively runs two steps over the weight columns: (1) quantize with nearest rounding and compute the error, (2) update the remaining weights with a scaled error. Many of our experiments finetune LLMs quantized with OPTQ. In concurrent work, Dettmers et al. (2023) proposed QLoRA, an approach for tuning quantized LLMs based on LoRA. While our work seeks to integrate with any user-defined quantization module (such as OPTQ), QLoRA defines its own quantization scheme, which is simpler than, say, OPTQ. One advantage of our approach is support for 3-bit finetuning (and potentially 2-bit via new quantizers; Chee et al. (2023)); QLoRA only supports 4-bit finetuning. We will also identify settings where using advanced quantizers yields performance gains over QLoRA. See Section 5.1 for details. ## 3 Low-Precision Low-Rank Adaptation with a Modular Quantizer In this section, we describe modular low-rank adaptation (ModuLoRA), a memory-efficient finetuning algorithm for large language models (LLMs) that leverages custom quantization algorithms and runs on consumer GPU hardware. Figure 1: PyTorch pseudocode for ModuLoRA. ### Low-Rank Adaptation of Low-Precision Models The first step of our approach is _quantization_: we apply a black-box quantization algorithm \(\mathcal{Q}\) to a set of pretrained weight matrices \(\mathbf{W}^{(i)}\). This yields quantized weights, zeros, and scales \((\hat{\mathbf{W}}^{(i)}_{q},\mathbf{z}^{(i)},\mathbf{s}^{(i)})=\mathcal{Q}(\mathbf{W}^{(i)})\). We use \(\hat{\mathbf{W}}^{(i)}_{q}\) to denote the quantized weights stored in low precision, while \(\hat{\mathbf{W}}^{(i)}\) denotes the same weights materialized in high precision (both approximate the original weights \(\mathbf{W}^{(i)}\)). Crucially, we do not specify a quantization procedure \(\mathcal{Q}\) as part of ModuLoRA--rather, we seek to support user-defined quantizers that are treated by our method as a black-box. The core of our efforts focuses on _finetuning_ the base quantized model. Our method first modifies the network by replacing each linear layer--originally defined by the affine map \(x\mapsto x(\mathbf{W}^{(i)})^{\top}+\mathbf{b}^{(i)}\)--with the reparameterized low-precision ModuLoRALinear layer in Figure 1, given by \[x\mapsto x(\hat{\mathbf{W}}^{(i)})^{\top}+x\mathbf{B}^{(i)}(\mathbf{A}^{(i)})^{\top}+\mathbf{b}^{(i)}. \tag{3}\] Here \(\mathbf{A}^{(i)},\mathbf{B}^{(i)}\in\mathbb{R}^{d\times r}\) are learnable parameters initialized as in Hu et al. (2022), and \(\hat{\mathbf{W}}^{(i)}=\mathcal{D}(\hat{\mathbf{W}}^{(i)}_{q},\mathbf{z}^{(i)},\mathbf{s}^{(i)})\) is the fixed dequantized weight matrix.
Note that this is algebraically (but not computationally) equivalent to transforming the quantized matrix as given in (1). Lastly, ModuLoRA fits the \(\mathbf{A}^{(i)}\) and \(\mathbf{B}^{(i)}\) using backprop and gradient-based learning. A key challenge in this procedure is to efficiently perform computations with high-precision and low-precision tensors. Clearly, the forward pass requires multiplying by weights stored in quantized \(\hat{\mathbf{W}}^{(i)}_{q}\)'s. Below, we derive the backward pass for \(\mathbf{A}^{(i)},\mathbf{B}^{(i)}\) and show that it also requires multiplying by the transpose of the \(\hat{\mathbf{W}}^{(i)}_{q}\)'s. #### 3.1.1 The Structure of a Quantized Backward Pass We illustrate the technical challenges that arise in the design of a quantized backward pass in the context of a network of \(n\) ModuLoRALinear layers. Each ModuLoRALinear is effectively a fully connected layer with reparameterized dense weights defined as \[\mathbf{W}^{(i)}_{l}=\hat{\mathbf{W}}^{(i)}+\mathbf{A}^{(i)}(\mathbf{B}^{(i)})^{\top}, \tag{4}\] biases \(\mathbf{b}^{(i)}\), and outputs \(\mathbf{y}_{i}\) for \(i=1,2,...,n\). We use \(\bar{\mathbf{y}}_{i}=\mathbf{W}^{(i)}_{l}\mathbf{x}+\mathbf{b}^{(i)}\) to denote the pre-activation output of the \(i\)-th step and we use \(L\) to denote the loss. The backward pass seeks to compute gradients \(\mathrm{d}L/\mathrm{d}\mathbf{A}^{(i)}\) and \(\mathrm{d}L/\mathrm{d}\mathbf{B}^{(i)}\), where we overload the Leibniz notation for derivatives to also denote gradients. By the chain rule, \[\frac{\mathrm{d}L}{\mathrm{d}\mathbf{A}^{(i)}}=\frac{\mathrm{d}L}{\mathrm{d}\bar{\mathbf{y}}_{i}}\cdot\frac{\mathrm{d}\bar{\mathbf{y}}_{i}}{\mathrm{d}\mathbf{A}^{(i)}}. \tag{5}\] Because of the additive structure of the weights \(\mathbf{W}^{(i)}_{l}\) in (4), the second factor \(\mathrm{d}\bar{\mathbf{y}}_{i}/\mathrm{d}\mathbf{A}^{(i)}\) is straightforward to handle as it is not a function of the quantized weights \(\hat{\mathbf{W}}^{(i)}_{q}\). The first factor, \(\mathrm{d}L/\mathrm{d}\bar{\mathbf{y}}_{i}\), can be computed via the chain rule of calculus as \[\frac{\mathrm{d}L}{\mathrm{d}\bar{\mathbf{y}}_{i}}=\frac{\mathrm{d}L}{\mathrm{d}\bar{\mathbf{y}}_{i+1}}\cdot\frac{\mathrm{d}\bar{\mathbf{y}}_{i+1}}{\mathrm{d}\mathbf{y}_{i}}\cdot\frac{\mathrm{d}\mathbf{y}_{i}}{\mathrm{d}\bar{\mathbf{y}}_{i}}, \tag{6}\] where \(\mathrm{d}\mathbf{y}_{i}/\mathrm{d}\bar{\mathbf{y}}_{i}\) is the derivative of the activation function, and \(\mathrm{d}\bar{\mathbf{y}}_{i+1}/\mathrm{d}\mathbf{y}_{i}=(\mathbf{W}^{(i)}_{l})^{\top}=(\hat{\mathbf{W}}^{(i)})^{\top}+\mathbf{B}^{(i)}(\mathbf{A}^{(i)})^{\top}\). The above derivations indicate that computing the gradient \(\mathrm{d}L/\mathrm{d}\mathbf{A}^{(i)}\) (the argument for \(\mathrm{d}L/\mathrm{d}\mathbf{B}^{(i)}\) is identical) requires performing a matrix-vector multiply \(\frac{\mathrm{d}L}{\mathrm{d}\bar{\mathbf{y}}_{i+1}}\cdot(\hat{\mathbf{W}}^{(i)})^{\top}\) between a high-precision vector \(\frac{\mathrm{d}L}{\mathrm{d}\bar{\mathbf{y}}_{i+1}}\) and a quantized matrix \((\hat{\mathbf{W}}^{(i)})^{\top}\). Performing this multiplication in a stable and efficient way is a challenge that we must address.
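The derivation above interacts with the quantized weights only through the abstract interface of Eq. (2). For concreteness, the following is a minimal PyTorch sketch of that interface, using simple round-to-nearest quantization as a stand-in for a sophisticated quantizer such as OPTQ; the per-row placement of the zeros and scales and the choice of bit width are illustrative assumptions, not the storage layout of any particular quantizer.

```python
import torch

def quantize_rtn(W: torch.Tensor, bits: int = 4):
    """A toy Q(W): per-row round-to-nearest quantization, standing in for OPTQ (Eq. 2)."""
    w_min = W.min(dim=1, keepdim=True).values
    w_max = W.max(dim=1, keepdim=True).values
    s = ((w_max - w_min) / (2**bits - 1)).clamp_min(1e-8)   # per-row scale
    z = w_min                                               # per-row zero point
    W_q = torch.round((W - z) / s).clamp(0, 2**bits - 1).to(torch.uint8)
    return W_q, z.squeeze(1), s.squeeze(1)

def dequantize(W_q: torch.Tensor, z: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """D(W_q, z, s): materialize a high-precision approximation of W (Eq. 2)."""
    return s[:, None] * W_q.float() + z[:, None]
```

Any module exposing such a (quantize, dequantize) pair, whatever its internals, can in principle be plugged into the mixed-precision forward and backward passes described next.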
#### 3.1.2 Efficient Mixed-Precision Computation of Forward and Backward Passes If we could precompute all dequantized weight matrices \((\hat{\mathbf{W}}^{(i)})^{\top}\) in a high-precision format, our challenge would be solved: the matrix-vector multiplication \(\frac{\mathrm{d}L}{\mathrm{d}\mathbf{y}_{i+1}}\cdot(\hat{\mathbf{W}}^{(i)})^{\top}\) in the backward pass would operate over two high-precision arrays, and would not introduce questions of efficiency and stability. Unfortunately, precomputing all dequantized weight matrices \((\hat{\mathbf{W}}^{(i)})^{\top}\) requires the same amount of GPU memory as it would take to store the original high-precision LLM. For this computation to fit on consumer GPU hardware, we need to avoid manifesting all the \(\hat{\mathbf{W}}^{(i)}\) in memory at once. Using (3) naively, backprop would store all the \(\hat{\mathbf{W}}^{(i)}\) from the forward pass to use them in the backward pass. Efficient Mixed Precision Computation.Our strategy is to _recompute_ the high-precision materialization \(\hat{\mathbf{W}}^{(i)}\) of the quantized \(\hat{\mathbf{W}}^{(i)}_{q}\) in the backward pass rather than save it (Figure 1). In the LPLinear function, the forward method dequantizes \(\hat{\mathbf{W}}^{(i)}\) and performs multiplication. Similarly, backward re-dequantizes \(\hat{\mathbf{W}}^{(i)}\) and computes the gradient derived in Appendix A.2 via dynamic programming. The hatW goes out of scope and can be freed at the end of each method, so only one \(\hat{\mathbf{W}}^{(i)}\) is ever stored in memory at any given time. The amount of memory used in the forward pass of the LPLoRA module is small: all the intermediates are either the same size as the input \(x\), or even smaller (e.g. if \(x\in\mathbb{R}^{m\times d}\) then x @ self.B is of size \(\mathbb{R}^{m\times r}\) for \(r\ll d\)). The amount of additional computation involved is also small: the dequantization procedure \(\hat{\mathbf{W}}=\mathbf{s}\odot\hat{\mathbf{W}}_{q}+\mathbf{z}\) only requires multiplying and adding a scalar to each row of \(\hat{\mathbf{W}}_{q}\). Increasing Efficiency Further.Figure 1 depicts a _weight materialization_ strategy in which \(\hat{\mathbf{W}}^{(i)}\) is fully materialized at each layer in both forward and backward passes. To further reduce memory, we can materialize elements of \(\hat{\mathbf{W}}^{(i)}\) only as needed. For many quantization algorithms (Nagel et al., 2020; Frantar et al., 2023), we can perform _row materialization_: dequantize \(\hat{\mathbf{W}}^{(i)}\) one row at a time and immediately multiply it with an input \(\mathbf{x}\). ModuLoRA also naturally generalizes to any direct vector-by-quantized-matrix product _subroutine provided by the quantizer \(\mathcal{Q}\)_, in which case materializing any part of \(\hat{\mathbf{W}}^{(i)}\) may be unnecessary. ### LLMTools: A Library for Efficient LLM Finetuning Using ModuLoRA. We implement ModuLoRA as part of LLMTools, a user friendly library that enables users to interact with the largest LLMs on consumer hardware. The LLMTools library enables finetuning LLMs in 3-bit and 4-bit precision using the ModuLoRA algorithm. It also provides an easy-to-use Python API for quantization, inference, and finetuning, as well as modular support for multiple quantizers, LLMs (including LLAMA1, LLAMA2, BLOOM, and OPT), and optimization algorithms (including all that are compatible with the Hugging Face Trainer class). Lastly, LLMTools supports easily loading datasets and sharing models via the HuggingFace Hub. 
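To complement the description above, here is a hedged sketch of the recomputation strategy in the spirit of the paper's Figure 1 (whose pseudocode is not reproduced in this text): a custom autograd function saves only the low-precision state and re-dequantizes \(\hat{\mathbf{W}}\) in both the forward and backward passes, so at most one materialized weight matrix is alive at any time. The class and argument names mirror the LPLinear/ModuLoRALinear naming used above but are illustrative, not the exact LLMTools implementation.

```python
import torch

class LPLinearFn(torch.autograd.Function):
    """y = x @ W_hat^T, with W_hat dequantized on the fly and never cached."""

    @staticmethod
    def forward(ctx, x, W_q, z, s):
        W_hat = s[:, None] * W_q.to(x.dtype) + z[:, None]   # materialize
        ctx.save_for_backward(W_q, z, s)                     # keep only low-precision state
        return x @ W_hat.T                                   # W_hat is freed when it leaves scope

    @staticmethod
    def backward(ctx, grad_out):
        W_q, z, s = ctx.saved_tensors
        W_hat = s[:, None] * W_q.to(grad_out.dtype) + z[:, None]  # re-dequantize
        return grad_out @ W_hat, None, None, None            # gradient w.r.t. the input only


class ModuLoRALinear(torch.nn.Module):
    """Frozen quantized base weights plus a trainable low-rank adapter, as in Eq. (3)."""

    def __init__(self, W_q, z, s, bias, rank: int = 8):
        super().__init__()
        d_out, d_in = W_q.shape
        self.register_buffer("W_q", W_q)
        self.register_buffer("z", z)
        self.register_buffer("s", s)
        self.register_buffer("bias", bias)
        # One adapter factor starts at zero so the adapter is a no-op at initialization (LoRA-style).
        self.A = torch.nn.Parameter(torch.zeros(d_out, rank))
        self.B = torch.nn.Parameter(0.01 * torch.randn(d_in, rank))

    def forward(self, x):
        base = LPLinearFn.apply(x, self.W_q, self.z, self.s)
        return base + (x @ self.B) @ self.A.T + self.bias
```

Gradients for the adapters flow through ordinary PyTorch operations, while the base path only propagates a gradient to its input, matching the frozen quantized weights.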
Our code is available at: [https://github.com/kuleshov-group/llmtools](https://github.com/kuleshov-group/llmtools); our evaluation code to reproduce our results is available at: [https://github.com/kuleshov-group/MODULoRA-Experiment](https://github.com/kuleshov-group/MODULoRA-Experiment). A key quantization algorithm implemented in LLMTools is OPTQ (Frantar et al., 2023). In order to integrate OPTQ with LoRA-based finetuning, LLMTools provides efficient CUDA implementations of mixed-precision matrix-vector multiplication, including row and weight materialization. We provide CUDA kernels for both row and weight materialization in both the forward and backward passes. For maximum efficiency, we materialize elements of \(\hat{\mathbf{W}}^{(i)}_{q}\) in float16. The base quantized LLM models are represented via weights \(\hat{\mathbf{W}}^{(i)}_{q}\) stored in 3 or 4 bits, with scales and zeros \(\mathbf{s}^{(i)},\mathbf{z}^{(i)}\) as well as biases \(\mathbf{b}^{(i)}\) all stored as float16. ## 4 Experiments ### Setup Models.We evaluate ModuLoRA and LLMTools on the recent LLAMA (Touvron et al., 2023) family of models, as well as open-source BLOOM (Scao et al., 2023) and OPT models (Zhang et al., 2022). We quantize the models to 3 bits and 4 bits using OPTQ as in Frantar et al. (2023) with 128 calibration samples from C4 (Raffel et al., 2020). Baseline.We use LoRA (as implemented in the PEFT library (Mangrulkar et al., 2022)) to finetune models quantized in 8 bits using the BitsAndBytes library (Dettmers et al., 2022); we also compare to full-precision results from the literature. In concurrent work, Dettmers et al. (2023) proposed QLoRA, a related 4-bit finetuning algorithm implemented in the BitsAndBytes library. Accordingly, we present an experimental comparison of QLoRA with our approach, along with an in-depth discussion. Training.We finetune all models for 3 epochs on NVIDIA TITAN, 3090, and A6000 GPUs (depending on the model) with a LoRA rank of \(r=8\) and alpha of \(a=32\), and report results from 3 random seeds. See Appendix D for details on the hyperparameters used for each of our experiments. ### Text Classification Data & Metrics.We start with a simple text classification task where we seek to classify a short text snippet (up to 50 words) into its genre (e.g., fiction, telephone chat, etc.). We finetune 13B to 65B LLAMA models on 392,702 snippets from five genres and evaluate on 9,815 held-out instances (Williams et al., 2018), reporting accuracy. This yields a challenging classification task for LLMs of all sizes. Results.We observe that classification accuracy consistently improves as we increase the number of parameters of the LLM. ModuLoRA combined with a 3-bit or a 4-bit LLM offers comparable performance to 8-bit finetuning in Bits&Bytes while using significantly less memory (Table 1). ### Natural Language Inference Data & Metrics.Next, we finetune LLMs on natural language inference tasks. The model is asked to predict a label from a small set (entailment, contradiction, or neutral) after being presented with a sentence pairing (a hypothesis and premise sentence pair). We finetune 7B to 65B LLAMA models on the Multi-Genre Natural Language Inference Corpus (MNLI) (Williams et al., 2018) and evaluate on the matched test sets (in-domain examples), reporting accuracy. Baselines from GPT-3 and T5 are included, as presented in Hu et al. (2022) and Chung et al. (2022). Results.Our 3-bit 65B LLAMA model matches the performance of a full-precision GPT-3+LoRA baseline.
We also find that **3-bit and 4-bit models from LLMTools outperform 8-bit models from the Bits&Bytes library for the entire model size range**. Both 3-bit and 4-bit ModuLoRA models either match or outperform their 4-bit QLoRA counterparts, often using less memory. \begin{table} \begin{tabular}{l c c c} _Baselines_ & & & \\ \hline Models & Finetuning Adaptation & Model Size & \# Trainable Parameters & MNLI-m (_accuracy_) \\ \hline GPT-3 & Full Finetuning & 175B & 175,255.8M & 89.5 \(\pm\) 0.1 \\ GPT-3 & Adapter & 175B & 40.1M & 91.5 \(\pm\) 0.1 \\ GPT-3 & LoRA & 175B & 4.7M & 91.7 \(\pm\) 0.1 \\ T5 & Full Finetuning & 11B & 11,307.4M & **92.2**\(\pm\) 0.1 \\ \hline \hline LLAMA Finetuning & 7B & 13B & 30B & 65B \\ \hline LLMTools (3-bit) & 88.98 \(\pm\) 0.2 & 90.20 \(\pm\) 0.2 & 91.09 \(\pm\) 0.2 & 91.42 \(\pm\) 0.1 \\ LLMTools (4-bit) & 89.31 \(\pm\) 0.2 & 90.41 \(\pm\) 0.2 & 91.31 \(\pm\) 0.1 & **91.59**\(\pm\) 0.2 \\ \hline Bits\&Bytes 4-bit (QLoRA) & 89.28 \(\pm\) 0.2 & 89.67 \(\pm\) 0.2 & 91.22 \(\pm\) 0.1 & 91.36 \(\pm\) 0.2 \\ Bits\&Bytes 8-bit (LLM.int8()) & 88.95 \(\pm\) 0.1 & 90.08 \(\pm\) 0.1 & 91.15 \(\pm\) 0.1 & 91.55 \(\pm\) 0.1 \\ \hline \end{tabular} \end{table} Table 2: Natural language inference on the MNLI-m dataset evaluated using classification accuracy (%). Our LLAMA-65B-3bit model approaches state-of-the-art scores using significantly less memory. \begin{table} \begin{tabular}{l c c c} \hline LLAMA Tuning & 13B & 30B & 65B \\ \hline LLMTools (3-bit) & 93.5 \(\pm\) 0.7 & 97.0 \(\pm\) 0.9 & 97.2 \(\pm\) 0.8 \\ LLMTools (4-bit) & 92.9 \(\pm\) 0.7 & 96.3 \(\pm\) 1.0 & 98.0 \(\pm\) 0.9 \\ \hline Bits\&Bytes 8-bit (LLM.int8()) & 93.0 \(\pm\) 0.7 & 93.7 \(\pm\) 1.0 & 98.6 \(\pm\) 1.0 \\ \hline \end{tabular} \end{table} Table 1: Text classification accuracy (%) for LLAMAs finetuned with LoRA & ModuLoRA in 3, 4, 8 bits. ### Abstractive Summarization Data & Metrics.We finetune 7B-65B LLAMA and 7B-13B OPT models on the SAMSum dataset (Gliwa et al., 2019), consisting of 14,732 (text, summary) training pairs and 819 test pairs. Our methodology fully mirrors the evaluation of GPT-style models finetuned using LoRA (Hu et al., 2022). We evaluate summarization quality using ROUGE-1/2/L; we include GPT-3 baselines from Hu et al. (2022). Results.Our 4-bit 65B LLAMA models finetuned with ModuLoRA outperform the GPT-3 baseline and even **reach new state-of-the-art performance** on this dataset (Table 3). Importantly, ModuLoRA demonstrates performance improvements over the 4-bit QLoRA and the 8-bit BitsAndBytes methods. In the 7B to 65B model size range, ModuLoRA models (3-bit or 4-bit) outperform 8-bit LoRAs in BitsAndBytes and LLM.int8() and 4-bit LoRAs in BitsAndBytes and QLoRA. We argue that a data-driven lower precision quantization scheme can improve over a higher precision zero-shot quantizer like LLM.int8(). Switching from 4-bit to 3-bit precision within ModuLoRA reduces ROUGE by only about 1%. Round-to-Nearest QuantizationWe also perform an ablation where we replace the OPTQ quantizer with a round-to-nearest (RTN) approach (Table 4); OPTQ performs better than RTN, highlighting the importance of advanced quantizers. Other Model FamiliesWe also apply LLMTools to the OPT (Zhang et al., 2022) family of models (Table 5). Although these models perform worse than LLAMA, ModuLoRA matches or outperforms more memory-intensive 4-bit and 8-bit finetuning, which is consistent with our results on LLAMA.
### Instruction Following Data & Metrics.We finetune 7B-65B LLAMA models on the Alpaca dataset (Taori et al., 2023), consisting of 52,000 instructions, as well as on the CodeAlpaca dataset (Chaudhary, 2023), consisting of 20K code generation instructions (see Table 8). We evaluate our Alpaca instruction-tuned models on the BigBench-Hard (BBH) benchmark (Suzgun et al., 2022), consisting of 23 challenging tasks on which LLMs do not exceed human performance. We evaluate 3-shot performance via "answer-only" prompting and use exact match accuracy as our measurement standard, testing on 6,511 samples (\(\sim\) 1.5k tokens each). We include Flan and LLAMA baselines from Chia et al. (2023). Results.We find that 3-bit and 4-bit performance drops only slightly relative to 8-bit and 16-bit. Crucially, **4-bit and 3-bit 65B models outperform 8-bit and 16-bit 30B models**, despite using fewer total bits. Furthermore, 4-bit ModuLoRA compares well to 4-bit QLoRA, and provides consistent performance improvements, especially at smaller model sizes, where sophisticated quantization ought to provide greater benefits. This further highlights the benefits of one-shot quantization methods. Appendix C also reports experiments on the CodeAlpaca dataset. \begin{table} \begin{tabular}{l c c c} _Baselines_ & & & \\ \hline Models & Finetuning Adaptation & \# Trainable Parameters & SAMSum (_Rouge 1/2/L_) \\ \hline GPT-3 & Full Finetuning & 175,255.8M & 52.0 / 28.0 / 44.5 \\ GPT-3 & Adapter & 40.1M & 53.2 / 29.0 / 45.1 \\ GPT-3 & LoRA & 4.7M & 53.8 / 29.8 / 45.9 \\ Pegasus & SLiC & 2B & **54.4 / 29.9 / 45.9** \\ \hline \hline LLAMA Finetuning & 7B & 13B & 30B & 65B \\ \hline LLMTools (3-bit) & 51.2 / 28.2 / 44.0 & 52.4 / 29.6 / 45.1 & 53.6 / 30.8 / 46.3 & 54.1 / 30.9 / 46.5 \\ LLMTools (4-bit) & 51.7 / 28.3 / 44.4 & 53.2 / 30.2 / 46.1 & 53.9 / 31.2 / 46.9 & **55.9 / 32.7 / 49.0** \\ \hline Bits\&Bytes 4-bit (QLoRA) & 51.6 / 28.3 / 44.5 & 51.3 / 28.1 / 44.1 & 53.0 / 30.2 / 45.7 & 53.8 / 30.5 / 45.9 \\ Bits\&Bytes 8-bit (LLM.int8()) & 51.9 / 28.1 / 44.5 & 51.3 / 28.2 / 43.6 & 50.8 / 28.4 / 44.1 & 53.9 / 30.4 / 46.3 \\ \hline \hline \end{tabular} \end{table} Table 3: Abstractive summarization on the SAMSum dataset evaluated using ROUGE 1/2/L. Our LLAMA-65B-3bit model obtains state-of-the-art ROUGE 1/2 scores. All metrics have \(\pm 0.5\) confidence intervals. ## 5 Discussion ### Comparison to Related Work Comparison to QLoRAIn concurrent work, Dettmers et al. (2023) proposed QLoRA, a related approach for finetuning a quantized LLM. We highlight methodological and experimental differences below. From a methods perspective, ModuLoRA integrates with a user-specified black-box quantization module. In our experiments, we find that using a sophisticated data-driven quantizer like OPTQ improves performance over simpler zero-shot strategies, e.g., a round-to-nearest baseline. Unlike ModuLoRA, QLoRA defines a quantization approach similar to RTN, but also introduces a specialized packing routine, quantization of zeros and scales, and other innovations. From an experiments and capabilities perspective, integrating with OPTQ enables ModuLoRA to finetune models quantized in 3-bits, which QLoRA cannot do. We anticipate ModuLoRA will enable finetuning 2-bit LLMs by integrating with new quantizers, such as Chee et al. (2023).
Lastly, we identify settings where ModuLoRA yields LLMs with better performance than LLMs from QLoRA; this gap is likely due to the use of improved quantizers. \begin{table} \begin{tabular}{l l c c c c} \multicolumn{2}{c}{_Baselines_} \\ \hline Model & Method & BASE (250M) & L (780M) & XL (3B) & XXL (11B) \\ \hline \multicolumn{2}{c}{FLAN-T5} & No Finetuning & 30.8 & 30.3 & 39.9 & 47.4 \\ \hline \multicolumn{2}{c}{Model} & Methods & 7B & 13B & 30B & 65B \\ \hline \multirow{6}{*}{LLAMA} & LLMTools (3-bit) & 31.1 \(\pm\) 0.4 & 35.3 \(\pm\) 0.2 & 37.2 \(\pm\) 0.6 & 43.3 \(\pm\) 0.4 \\ & LLMTools (4-bit) & 33.1 \(\pm\) 0.2 & 36.2 \(\pm\) 0.4 & 40.4 \(\pm\) 0.2 & 43.7 \(\pm\) 0.4 \\ \cline{1-1} \cline{2-6} & Bits\&Bytes 4-bit (QLoRA) & 31.9 \(\pm\) 0.1 & 35.4 \(\pm\) 0.2 & 39.0 \(\pm\) 0.4 & 43.5 \(\pm\) 0.5 \\ \cline{1-1} & Bits\&Bytes 8-bit (LLM.int8()) & 33.3 \(\pm\) 0.3 & 36.8 \(\pm\) 0.2 & 39.1 \(\pm\) 0.5 & 44.7 \(\pm\) 0.4 \\ \cline{1-1} & No Finetuning & 30.9 & 37.1 & 39.3 & 42.6 \\ \hline \hline \end{tabular} \end{table} Table 6: Instruction-tuned models evaluated on BigBench Hard (BBH). We finetune LLAMA models on the Alpaca dataset in 3 to 16 bits. We provide exact standard deviations here. \begin{table} \begin{tabular}{l c c} \hline \hline OPT Finetuning & 13B & 30B \\ \hline LLMTools (3-bit) & 48.8 / 26.7 / 41.9 & **49.9 / 27.1 / 42.5** \\ LLMTools (4-bit) & 49.3 / 26.8 / 42.0 & 49.6 / 27.1 / 42.4 \\ \hline Bits\&Bytes 4-bit (QLoRA) & 49.2 / 27.0 / 42.1 & 49.9 / 27.0 / 42.5 \\ Bits\&Bytes 8-bit (LLM.int8()) & 48.8 / 26.5 / 41.7 & 49.3 / 27.1 / 42.3 \\ \hline \hline \end{tabular} \end{table} Table 5: Abstractive summarization with OPT models finetuned on the SAMSum dataset, evaluated using ROUGE 1/2/L. \begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{2}{c}{SAMSum Performance} & Quantizer & 7B & 13B \\ \hline LLMTools (3-bit) & OPTQ & 51.2 / 28.2 / 44.0 / 44.2 & 52.4 / 29.6 / 45.1 / 45.1 \\ & RTN & 50.7 / 27.2 / 43.6 / 43.6 & 51.1 / 28.7 / 44.3 / 44.5 \\ LLMTools (4-bit) & OPTQ & 51.7 / 28.3 / 44.4 / 44.4 & 53.2 / 30.2 / 46.1 / 46.1 \\ & RTN & 51.2 / 28.5 / 44.2 / 44.2 & 52.5 / 29.9 / 45.5 / 45.5 \\ \hline \hline \end{tabular} \end{table} Table 4: OPTQ and RTN quantization with different LLaMA model sizes on the SAMSum dataset. The evaluation was done on ROUGE 1/2/L/LSum. Comparison to Other Parameter-Efficient Finetuning MethodsRecent Parameter-Efficient Finetuning (PEFT) methods have encompassed a range of techniques such as prompt tuning (Lester et al., 2021; Li and Liang, 2021; Qin and Eisner, 2021; Liu et al., 2022b), modification of the embedding layer inputs (An et al., 2022) or hidden states (Liu et al., 2022a), inclusion of full layers (Houlsby et al., 2019), only tuning biases (Zaken et al., 2021), and others (Sung et al., 2021; Karimi Mahabadi et al., 2021). An important shortcoming of these methods is the need to store in memory a significant amount of frozen base model parameters. This limits their ability to finetune the largest LLMs on consumer GPUs, a limitation that we address. ### Running LLMs on Consumer GPUs Efficient LLM AlgorithmsThe computational requirements of modern deep neural networks motivate a wide range of efficient machine learning algorithms. Quantization methods reduce the number of bits required to store weights (Dong et al., 2019, 2020; Hubara et al., 2021; Li et al., 2021; Yao et al., 2021), including via adaptive methods (Nagel et al., 2020).
SmoothQuant (Xiao et al., 2023) rescales between activations and weights to remove outliers from the activations and make quantization overall easier. ZeroQuant (Yao et al., 2022) proposes a per-layer knowledge distillation method. LLM.int8() (Dettmers et al., 2022) decomposes matrix multiplications into a majority of 8-bit and a minority of 16-bit operations. LUT-GEMM (Park et al., 2023) designs kernels to accelerate quantized matrix multiplications. RPTQ (Yuan et al., 2023) reorders activations and quantizes them in groups, reducing the impact of range differences between channels. Running LLMs on Consumer GPUsOur 3-bit and 4-bit methods enable finetuning a 65B LLM on one 48GB GPU and a 30B LLM on one 24GB GPU, bringing LLM finetuning to consumer hardware. Moreover, fitting an entire LLM on GPU unlocks data parallelism, which is more efficient than model parallelism. Previous 8-bit quantization methods required a 96GB GPU to fully fit a 65B model. Finetuning LLMs on consumer hardware holds promise to accelerate model iteration and to let a larger number of practitioners apply LLMs to a wider range of domains. ### What is a Good Base LLM for Finetuning? The traditional measure of a base LLM is perplexity. In the adjacent table, we report LLAMA perplexity (PPL) on Wiki2 as well as finetuning performance on BBH. Interestingly, the correlation is not perfect: large gaps in PPL admit small gaps in BBH. This calls into question how base LLMs should be evaluated when the goal is finetuning, and suggests exploring new training strategies. More generally, our results provide empirical evidence that high performance on downstream tasks can be achieved with a smaller quantized LLM than previously thought. While existing methods (e.g., LLM.int8()+LoRA; Dettmers et al. (2022)) operate in 8 bits, we find that 3-bit or 4-bit finetuning yields the best results for a fixed bit budget. For example, we find that 4-bit and 3-bit 65B models outperform 8-bit and 16-bit 30B models on instruction following tasks. Similarly, we find that 3-bit models are able to attain a new state-of-the-art ROUGE score on the SAMSum summarization task. The high performance of these models hints at the possibility of further pursuing 2-bit models. ### Limitations An advantage of LoRA is that it has low inference overhead, since the low-rank adaptor can be added into the full-precision weight matrix when deploying. One limitation of ModuLoRA is that it does not share this advantage relative to the black-box quantized model: the low-rank adaptor cannot be trivially added to the weight matrix because the weight matrix is quantized while the adaptor is not. So, the weight matrix and adaptor cannot be fused readily, and an implementation as in Figure 1 is required at inference time. \begin{table} \begin{tabular}{l c c c} \hline \hline Models & Quantization & BBH & PPL \\ \hline LLAMA (13B) & 3-bit & 35.3 & 6.63 \\ & 4-bit & 36.2 & 5.36 \\ \hline LLAMA (65B) & 3-bit & 43.3 & 5.04 \\ & 4-bit & 43.7 & 3.84 \\ \hline \hline \end{tabular} \end{table} Table 7: BBH vs. PPL. A second limitation of ModuLoRA is that making finetuning possible on widely available commodity hardware may make finetuning too easy, presenting potential problems related to LLM safety. Another limitation of ModuLoRA is that the largest models in use today (e.g.
GPT-4) can have up to 1 trillion parameters, and even at the minimum of 1 bit per parameter this still would take up 125 GB, which exceeds memory on commodity GPUs: thus a straightforward application of ModuLoRA will be unable to make these largest-scale models finetunable on commodity hardware. ## 6 Conclusion Finetuning large language models typically requires substantial hardware and storage resources. Our method, ModuLoRA, enables 3-bit finetuning of 65B models on a single 48GB consumer GPU. At the core of our approach is a simple, quantization-agnostic backward pass that enables integrating low-rank adapters with frozen LLM weights obtained from a user-defined quantization module. By integrating with modern quantizers, ModuLoRA achieves state-of-the-art performance compared to both parameter-efficient and full finetuning techniques. We anticipate ModuLoRA will enable finetuning 2-bit LLMs by integrating with new quantizers, such as Chee et al. (2023). ModuLoRA's flexibility and competitive performance make finetuning more accessible and cost-effective in a resource-constrained setting. This assists open-source model development and facilitates scientific research. More broadly, we believe that ModuLoRA will help democratize access to large language models and make them available to a broader audience.
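As a rough back-of-the-envelope check on the memory figures quoted in this paper (a 65B model finetuned on a single 48GB GPU, a 30B model on a 24GB GPU, and roughly 125 GB for a 1-trillion-parameter model at 1 bit per parameter), the following sketch counts weight storage only; actual footprints are somewhat larger because of scales, zeros, activations, and the small LoRA parameters.

```python
def weight_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight-only storage footprint in (decimal) GB."""
    return n_params * bits_per_param / 8 / 1e9

for name, n in [("30B", 30e9), ("65B", 65e9), ("1T", 1e12)]:
    for bits in (1, 3, 4, 8, 16):
        print(f"{name} @ {bits:>2}-bit: {weight_gb(n, bits):7.1f} GB")
# 30B @ 4-bit is about 15 GB and 65B @ 3-4 bits about 24-33 GB of weights,
# consistent with single consumer GPUs of 24-48 GB; 1T @ 1-bit is 125 GB, which is not.
```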
2309.09039
Microscale 3-D Capacitance Tomography with a CMOS Sensor Array
Electrical capacitance tomography (ECT) is a nonoptical imaging technique in which a map of the interior permittivity of a volume is estimated by making capacitance measurements at its boundary and solving an inverse problem. While previous ECT demonstrations have often been at centimeter scales, ECT is not limited to macroscopic systems. In this paper, we demonstrate ECT imaging of polymer microspheres and bacterial biofilms using a CMOS microelectrode array, achieving spatial resolution of 10 microns. Additionally, we propose a deep learning architecture and an improved multi-objective training scheme for reconstructing out-of-plane permittivity maps from the sensor measurements. Experimental results show that the proposed approach is able to resolve microscopic 3-D structures, achieving 91.5% prediction accuracy on the microsphere dataset and 82.7% on the biofilm dataset, including an average of 4.6% improvement over baseline computational methods.
Manar Abdelatty, Joseph Incandela, Kangping Hu, Joseph W. Larkin, Sherief Reda, Jacob K. Rosenstein
2023-09-16T16:24:58Z
http://arxiv.org/abs/2309.09039v3
# Microscale 3-D Capacitance Tomography with a CMOS Sensor Array ###### Abstract Electrical capacitance tomography (ECT) is a non-optical imaging technique in which a map of the interior permittivity of a volume is estimated by making capacitance measurements at its boundary and solving an inverse problem. While previous ECT demonstrations have often been at centimeter scales, ECT is not limited to macroscopic systems. In this paper, we demonstrate ECT imaging of polymer microspheres and bacterial biofilms using a CMOS microelectrode array, achieving spatial resolution of 10 microns. Additionally, we propose a deep learning architecture and an improved multi-objective training scheme for reconstructing out-of-plane permittivity maps from the sensor measurements. Experimental results show that the proposed approach is able to resolve microscopic 3-D structures, achieving 91.5% prediction accuracy on the microsphere dataset and 82.7% on the biofilm dataset, including an average of 4.6% improvement over baseline computational methods. tomography, 3-D, capacitance, ECT, CMOS, biofilm, deep learning, transposed convolution ## I Introduction Electrical capacitance tomography (ECT) is an imaging technique that estimates the internal distribution of permittivity in a volume by measuring capacitance between electrodes placed at its boundary [16]. It is closely related to electrical impedance tomography (EIT), which estimates the conductivity distribution using impedance measurements [7]. Both of these techniques are useful in applications where there is spatial contrast in conductivity or permittivity, including organ and tissue imaging [1, 8, 25, 30, 31], neural imaging and neural activity monitoring [3, 4, 29], and industrial process monitoring of fluid flows [17, 23]. The physics of the ECT problem, in 2-D, is governed by the Poisson PDE in Eq. 1, where \(\sigma(x,y)\) represents the permittivity distribution and \(u(x,y)\) represents the electric potential [2]. Mutual capacitance \(C_{ij}\) between electrodes \(i\), \(j\) is evaluated by the integral in Eq. 2, where \(V_{ij}\) is the potential difference between the two electrodes, and \(S\) is the path enclosing the sensing electrodes. The problem of estimating the capacitance given the permittivity distribution is referred to as the _forward problem_ [10]. Conversely, estimating the permittivity distribution from boundary capacitance measurements is referred to as the _inverse problem_. \[\nabla\cdot(\sigma(x,y)\nabla u(x,y))=0 \tag{1}\] \[C_{ij}=-\frac{Q}{V_{ij}}=-\frac{\oint_{S}\sigma(x,y)\nabla u(x,y)ds}{u_{i}-u_{j}} \tag{2}\] The _inverse problem_ of ECT is a non-linear and severely ill-posed problem, without unique numerical solutions [33]. Therefore, regularization priors are often used to impose an additional constraint on the estimated solution [22]. Traditional algorithms solve the inverse problem by minimizing a least-squares objective with an additional regularization term, refining an initial permittivity distribution through an iterative solver [5, 33, 34]. However, iterative algorithms are sensitive to noise in the capacitance measurements, which makes them more susceptible to divergence. Prior work demonstrates that deep learning models can be more robust to experimental noise and can provide accurate image reconstructions [37, 38, 39, 32]. Previous demonstrations of ECT have often resolved centimeter-scale targets.
If it could be appropriately miniaturized, one appealing application of ECT would be for 3-D visualization of cell cultures [12, 19, 24]. Optical confocal microscopy is a powerful tool for biologists to image the 3-D structure of complex cell cultures [26]. However, confocal imaging can be prohibitively expensive for routine use, usually relies on fluorescent labeling, and its intense light excitation introduces tradeoffs between the frame rate and risks of photobleaching and phototoxicity. Here we propose a microscale capacitance tomography system using a \(512\times 256\) CMOS sensor array [13, 14, 15], achieving the highest-resolution ECT reported to date (\(10\,\mu\)m). We apply deep learning to approximate the ECT inverse operator, using training data as a regularization prior to the ill-posed inverse problem. The proposed system enables imaging of micro 3-D structures of cell cultures with a high reconstruction accuracy. We present results for two experimental datasets of microscopic objects: polymer microspheres and bacterial biofilms. The ECT data are trained and evaluated using ground truth images acquired from 3-D confocal microscopy. ## II Methodology ### _Capacitance Tomography Hardware_ The tomography is implemented using measurements from an integrated CMOS microelectrode array described in [15]. This chip has a \(512\times 256\) planar array of microelectrodes on a \(10\,\mu\)m\(\times\)\(10\,\mu\)m rectangular grid. In one of its operating modes, the chip can efficiently measure the mutual capacitance between any two pixels in the array [13, 14]. Fig. 1(a) illustrates the one-sided planar detection using the sensor, where electrodes are only placed at the bottom boundary. Each capacitance \(C_{ij}\) represents fringing electric fields through the sample, and thus the permittivity and geometry of materials near the sensor influence these measurements. Samples to be imaged are placed on the chip surface in a liquid or gel electrolyte, as illustrated in Fig. 2(a). An image of the sensor is shown in Fig. 2(b). ### _Image Reconstruction Network_ Fig. 1(b) illustrates the architecture of the image reconstruction network. The input is an \(m\times n\) matrix containing pairwise capacitance measurements, where \(m\) is the number of spatial offsets considered when measuring the mutual capacitance values and \(n\) is the number of electrodes. Each entry in the matrix corresponds to the mutual capacitance \(C_{ij}\) between electrodes \(i\) and \(j\). For example, the first row contains \(n\) capacitance values measured between adjacent electrodes, and the second row contains measurements between electrodes separated by \(2\) positions. In this study, we only use capacitance measurements with \(|i-j|\leq 5\). To make the input matrix compatible with the transposed convolution layer, we reshape it to a 3-D feature map of size \((w,h,c)=(1,1,\text{num\_measurements})\). The input 3-D feature map is then repeatedly up-sampled by a factor of \(2\) until it reaches the spatial resolution of the predicted cross-sectional image \((w,h,c)=(200,100,1)\), which represents the permittivity distribution \(\sigma(z,y)\) of the medium above the CMOS sensor. The boundary capacitance measurements are up-sampled through a series of five transposed convolution blocks. Each block contains a transposed convolution layer, batch normalization layer, and a ReLU activation, except for the last block where sigmoid activation is used to constrain the output permittivity to be in the range \([0,1]\). 
The transposed convolution layer contains a learnable kernel that is used to reconstruct a high-resolution output from a low-resolution input [9]. Batch normalization is used for training stability and convergence speed-up. Additionally, a residual connection is added between the block input and output through a 1x1 convolution kernel. ### _Loss_ The loss function is important in training deep learning algorithms as it defines the optimization landscape and has a significant impact on the model convergence [35]. Class imbalance, where the foreground permittivity occupies a significantly smaller region relative to the background pixels, poses a challenge in training. As noted by [39], distribution-based loss functions like the focal loss [18] can help address the class-imbalance issue. However, region-based losses and compound losses have been shown to consistently provide better performance than distribution-based losses [35]. Therefore, we propose a compound loss function, shown in Eq. 3, that combines a distribution-based loss (Focal Loss \(L_{\text{FL}}\)), a region-based loss (Dice Loss \(L_{\text{Dice}}\)[36]), and a pixel-to-pixel loss (Smooth L1 Loss \(L_{\text{SmoothL1}}\)[11]). The weighting parameters (\(\lambda_{1},\lambda_{2},\lambda_{3}\)) define the tradeoff between the different loss-objectives and are learned during training. \[L(y,\hat{y})=\lambda_{1}L_{\text{SmoothL1}}(y,\hat{y})+\lambda_{2}L_{\text{FL}}(y,\hat{y})+\lambda_{3}L_{\text{Dice}}(y,\hat{y}) \tag{3}\] Each loss component in Eq. 3 represents a different objective that the model aims to optimize. The smooth L1 loss measures the absolute difference between the predicted and ground truth images, with added smoothing to make it differentiable and less sensitive to outliers. It addresses pixel-level differences and equally penalizes the error in the foreground and background pixels. The focal loss is a modified cross-entropy loss that dynamically scales during training to help the model focus on the hard-to-predict examples. This is done by adding a scaling factor that decays to zero as the model confidence increases in the easy-to-predict examples. Dice loss is used to emphasize the spatial agreement and boundary delineation between the predicted and ground truth images by maximizing the overlap region between the two images. ## III Experimental Results In order to obtain both ECT data and confocal 3-D images of the same objects, we sealed samples on the sensor with optically transparent windows, as shown in Fig. 2(a). The confocal images are useful as a ground truth for training the inverse algorithms, as well as for qualitative and quantitative comparisons of the reconstructed sample geometry. Using this setup, we performed experiments with both polymer microspheres and bacterial biofilms. Fig. 1: Overview of the tomography system. (a) Illustration of the one-sided planar ECT detection using the CMOS sensor. The sample, above the sensor, has a permittivity value \(\varepsilon_{1}\) distinct from the background permittivity \(\varepsilon_{0}\). (b) Image reconstruction network, based on transposed convolution. The input is a matrix of pairwise capacitance measurements acquired from the CMOS sensor. The output is a \(100\times 200\,\mu m\) cross-sectional image that represents the permittivity distribution \(\sigma(z,y)\). ### _Polymer Microspheres_ A sample containing 30 \(\mu\)m purple fluorescent polystyrene microspheres (Spherotech Inc., IL, USA) was positioned over the CMOS array. As shown in Fig.
2, a \(500\,\mathrm{\mu m}\) deep well was created around the CMOS sensor with a stack of two \(25\,\mathrm{\mu L}\) adhesive chambers (Gene Frame, Thermo Scientific). Microspheres were added to a buffered Minimal Salts Glycerol Glutamate (MSGG) media in a \(10\times\) dilution, along with agarose flakes. The mixture was autoclaved and \(50\,\mathrm{\mu L}\) of the hot solution was pipetted into the well, covering the sensor. The well was then sealed with a \(22\,\mathrm{mm}\times 22\,\mathrm{mm}\) coverslip, and the solution was allowed to solidify into a 2% agarose gel which immobilized the microspheres. Finally, the edges of the assembly were sealed with a fast-setting silicone elastomer (EcoFlex 5, Smooth-On, Inc.) to prevent the gel from drying which would introduce distortions during the imaging process. Due to the sparse distribution of the polymer microsphere on the chip surface, we obtained a limited number of capacitance and confocal cross-sectional image pairs (16 different pairs). Therefore, we augmented our dataset with a synthetic dataset of 5,975 capacitance and cross-sectional image pairs generated using pyEIT [20], which runs finite element electrostatic simulations that solve the PDE in Eq. 1. The synthetic dataset was mainly used for training, while the experimental dataset was used for testing. In order to make the model more robust to the noise present in the experimental data, the simulated capacitance values were perturbed with a Gaussian noise \(\epsilon_{i}\in\mathcal{N}(0,0.03)\) during training. Fig. 3 shows the model predictions on the experimental microsphere testing dataset. The results demonstrate the system's ability to accurately predict the shape and location of the microsphere from the experimental ECT measurements, despite being trained solely on synthetic data. ### _Bacterial Biofilm_ To further develop the tomography capabilities, we aimed to produce 3-D maps of biomass within bacterial biofilms. _Bacillus subtilis_ biofilms were grown on 500 \(\mathrm{\mu m}\)-thick substrates of 1.8% agarose MSGG media for roughly 24 hours. A biofilm was cut out along with a thin supporting agarose pad and transferred onto the CMOS sensor. Before transferring the biofilm, the sensor was treated with poly-L-lysine to improve cell adhesion. The biofilm was sealed with a glass coverslip and fast-setting silicone elastomer to prevent drying, and mounted on a confocal microscope (Stellaris 5, Leica). An illustration of the completed device is shown in Fig. 2. From the experimental biofilm dataset shown in Fig. 2, we generated 6,400 capacitance and confocal cross-sectional image pairs, which were divided into 80% training, 10% validation, and 10% testing. Fig. 4 shows the model predictions on the testing set. The results indicate that the system can accurately predict the biofilm thickness, shape, and depth from the experimental ECT measurements. Predictions were performed independently on \(100\,\mathrm{\mu m}\times 200\,\mathrm{\mu m}\) meshes. However, we can reconstruct larger areas by simply stitching the predicted local meshes together. Fig. 5 shows the model predictions along a linear array of 200 electrodes (\(2\,\mathrm{mm}\) total length), demonstrating the system's ability to resolve larger millimeter-scale features in the biofilm. 
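Before turning to the baseline comparisons, the multi-objective training scheme of Eq. 3 can be made concrete with a short PyTorch sketch. The focal-loss focusing parameter and the way the weights \(\lambda_{1},\lambda_{2},\lambda_{3}\) are parameterized as learnable quantities are illustrative assumptions; the paper does not spell out these details.

```python
import torch
import torch.nn.functional as F

class CompoundLoss(torch.nn.Module):
    """L = l1 * SmoothL1 + l2 * Focal + l3 * Dice, with learnable weights (Eq. 3)."""

    def __init__(self, gamma: float = 2.0, eps: float = 1e-6):
        super().__init__()
        self.gamma = gamma
        self.eps = eps
        # One learnable weight per objective; softplus keeps them positive (assumed parameterization).
        self.raw_lambdas = torch.nn.Parameter(torch.zeros(3))

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # pred, target: permittivity maps in [0, 1] with shape (batch, 1, H, W).
        smooth_l1 = F.smooth_l1_loss(pred, target)

        # Focal loss: cross-entropy down-weighted on easy, confidently predicted pixels.
        bce = F.binary_cross_entropy(pred, target, reduction="none")
        p_t = pred * target + (1 - pred) * (1 - target)
        focal = ((1 - p_t) ** self.gamma * bce).mean()

        # Dice loss: one minus the soft overlap between prediction and ground truth.
        inter = (pred * target).sum(dim=(1, 2, 3))
        union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        dice = 1.0 - ((2.0 * inter + self.eps) / (union + self.eps)).mean()

        l1, l2, l3 = F.softplus(self.raw_lambdas)
        return l1 * smooth_l1 + l2 * focal + l3 * dice
```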
### _Baselines_ We compare the reconstructions of the proposed model on the microsphere and biofilm testing datasets with one more traditional algorithm (iterative Tikhonov) and three deep learning algorithms which include the fully-connected auto-encoder (FNN-AE) presented in [38], the permittivity prediction network presented in [39] which is composed of two Fig. 4: Reconstructed permittivity images of sections of a bacterial biofilm (a) Confocal ground truth. (b) ECT model prediction. Fig. 3: Image reconstruction of polymer microspheres (a) Confocal microscopy ground truth. (b) ECT model prediction. Fig. 2: (a) Experimental samples were mounted to allow both ECT measurements as well as 3-D optical images from a confocal microscope. (b) The CMOS microelectrode array has 131,072 electrodes on a 10 micron grid. (c) A dataset with a _B. subtilis_ biofilm showing the confocal max projection (right) and one mutual capacitance image measured using a spatial offset of \(1\) (left). fully connected networks and a post-processing convolutional-based auto-encoder (FNN+CNN-AE), and the self-attention and UNet-based model (self-attn+UNet) presented in [32]. Quantitative comparisons are performed using mean squared error (MSE), and a set of perceptual metrics including structural similarity index measure (SSIM), peak signal-to-noise-ratio (PSNR), cross-correlation (CC), and intersection over union (IoU). Quantitative results are shown in Table. II and qualitative comparisons are displayed in Fig. 6. Judged by the IoU, the overall accuracy is 91.5% for the microsphere dataset, and 82.7% for the biofilm dataset. The Tikhonov algorithm provides a good estimate for the location of the shallow bead, but fails to recognize sharp boundaries and to predict the deeper bead. This is because for planar electrodes, changes at the boundary are very subtle for relatively deep objects [6], and the Tikhonov algorithm converges to a sub-optimal reconstruction. The FNN-AE falls into a local minimum mainly because fully connected layers are not suitable for the task. While the predictions are improved by the post-processing CNN-AE, the FNN+CNN-AE also fails to predict the deeper bead. The self-attn+UNET can correctly capture the presence of the two beads, however, it underestimates the diameter of the deep bead. This is because the self-attn+UNet model is trained with the MSE loss, which is known to produce blurred/smeared predictions [21]. By incorporating a region-based loss that enhances the spatial alignment between the predictions and the ground truth images and a distribution-based loss that addresses the class-imbalance problem, our proposed model (TCNN+MOL) can predict the shape and location of both the shallow and deeper beads [28]. ### _Ablation Study_ To analyze the effectiveness of the proposed approach, we conduct an ablation study on training the model with different combinations of the loss objective (Table III). In this experiment, the model was trained for 20 epochs on the biofilm dataset. We see the lowest CC when the model is trained with a per-pixel loss function (\(L_{\text{Smooth L1}}\)). CC is improved by adding the focal loss as it helps address the class imbalance issue. The dice loss further improves performance by maximizing overlap between the predictions and the ground truth. ## IV Conclusion We have presented a microscale electrical capacitance tomography (ECT) system using a CMOS biosensor that can predict the 3-D structure of objects over a large field of view. 
We proposed a deep learning architecture and a multi-objective training scheme for reconstructing out-of-plane images from the sensor array data. We demonstrated the effectiveness of the proposed approach by imaging polymer microspheres and bacterial biofilms. Compared to prior demonstrations (Table I), this work uses significantly smaller electrodes and achieves finer spatial resolution. Microscale ECT can be applied to a wide range of biomedical applications including low-cost non-optical label-free 3-D monitoring of cell cultures. \begin{table} \begin{tabular}{l l l l l l l} \hline \hline & Dataset & MSE \(\downarrow\) & SSIM \(\uparrow\) & PSNR \(\uparrow\) & CC \(\uparrow\) & IoU \(\uparrow\) \\ \hline Tikhonov & \(\Omega_{1}\) & 0.315 & 0.040 & 5.014 & 0.207 & 0.485 \\ & \(\Omega_{2}\) & 0.115 & 0.569 & 9.395 & 0.171 & 0.362 \\ \hline FNN-AE & \(\Omega_{1}\) & 0.026 & 0.422 & 15.732 & 0.238 & 0.485 \\ & \(\Omega_{2}\) & 0.145 & 0.678 & 8.360 & 0.447 & 0.591 \\ \hline FNN+CNN-AE [39] & \(\Omega_{1}\) & 0.017 & 0.930 & 17.452 & 0.679 & 0.765 \\ AE [39] & \(\Omega_{2}\) & 0.124 & 0.656 & 9.043 & 0.615 & 0.685 \\ \hline self- & \(\Omega_{1}\) & 0.005 & 0.914 & 22.681 & 0.898 & 0.679 \\ attm+UNet [32] & \(\Omega_{2}\) & 0.071 & 0.784 & 11.478 & 0.694 & 0.775 \\ \hline TCNN+MOL & \(\Omega_{1}\) & **0.004** & **0.975** & **23.036** & **0.910** & **0.915** \\ (Ours) & \(\Omega_{2}\) & **0.056** & **0.799** & **12.473** & **0.781** & **0.827** \\ \hline \hline \end{tabular} \end{table} TABLE II: Quantitative comparison to prior work using the microsphere (\(\Omega_{1}\)) and biofilm (\(\Omega_{2}\)) datasets. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \hline & [38] & [39] & [32] & [27] & [12] & **This Work** \\ \hline Imaging Domain & Circular & Circular & Circular & Planar & Planar & Planar \\ Reconstruction Algorithm & FNN-AE & FNN+CNN-AE & self-attn+UNet & Tikhonov & Linear Back-projection & TCNN+MOL \\ Imaging Application & 3D Objects & 3D objects & Cryogenic Fluids & 3D objects & Yeast cells & Bacterial biofilms \\ Electrode Size & \(cm\) scale & \(cm\) scale & \(cm\) scale & \(mm\) scale (68\(\sigma\)\(mm^{2}\)) & \(mm\) scale (1.4x0.8\(mm^{2}\)) & \(\mu m\) scale (10x10\(\mu m^{2}\)) \\ Array Size & 8 electrodes & 16 electrodes & 8 electrodes (4\(\times\)4) & 34 electrodes & 131,072 electrodes (512\(\times\)256) \\ \hline \hline \end{tabular} \end{table} TABLE I: Comparison to prior electrical capacitance tomography (ECT) systems. Fig. 5: Reconstruction of a larger-scale cross-sectional image of a _B. subtilis_ biofilm spanning 2 mm. (a) Confocal ground truth. (b) Model prediction, stitched together from ten 200 \(\mu\)m windows. Fig. 6: Qualitative comparison to prior algorithms, using a scene of two microspheres simulated using pyEIT [20].
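For reference, the quantitative metrics used in the comparison above (Table II) can be computed along the following lines. This is a minimal NumPy sketch; the binarization threshold of 0.5 for IoU is an assumption, and SSIM would typically come from an image-processing library such as scikit-image rather than being re-implemented.

```python
import numpy as np

def mse(pred, gt):
    return float(np.mean((pred - gt) ** 2))

def psnr(pred, gt, data_range=1.0):
    return float(10 * np.log10(data_range**2 / np.mean((pred - gt) ** 2)))

def cross_correlation(pred, gt):
    p, g = pred.ravel() - pred.mean(), gt.ravel() - gt.mean()
    return float(p @ g / (np.linalg.norm(p) * np.linalg.norm(g) + 1e-12))

def iou(pred, gt, thresh=0.5):
    p, g = pred > thresh, gt > thresh
    return float(np.logical_and(p, g).sum() / (np.logical_or(p, g).sum() + 1e-12))
```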
2309.07014
Using Lidar Intensity for Robot Navigation
We present Multi-Layer Intensity Map, a novel 3D object representation for robot perception and autonomous navigation. Intensity maps consist of multiple stacked layers of 2D grid maps, each derived from reflected point cloud intensities corresponding to a certain height interval. The different layers of intensity maps can be used to simultaneously estimate obstacles' height, solidity/density, and opacity. We demonstrate that intensity maps can help accurately differentiate obstacles that are safe to navigate through (e.g. beaded/string curtains, pliable tall grass), from ones that must be avoided (e.g. transparent surfaces such as glass walls, bushes, trees, etc.) in indoor and outdoor environments. Further, to handle narrow passages, and navigate through non-solid obstacles in dense environments, we propose an approach to adaptively inflate or enlarge the obstacles detected on intensity maps based on their solidity, and the robot's preferred velocity direction. We demonstrate these improved navigation capabilities in real-world narrow, dense environments using a real Turtlebot and Boston Dynamics Spot robots. We observe significant increases in success rates to more than 50%, up to a 9.5% decrease in normalized trajectory length, and up to a 22.6% increase in the F-score compared to current navigation methods using other sensor modalities.
Adarsh Jagan Sathyamoorthy, Kasun Weerakoon, Mohamed Elnoor, Dinesh Manocha
2023-09-13T15:12:52Z
http://arxiv.org/abs/2309.07014v3
# Using Lidar Intensity for Robot Navigation ###### Abstract We present Multi-Layer Intensity Map, a novel 3D object representation for robot perception and autonomous navigation. Intensity maps consist of multiple stacked layers of 2D grid maps, each derived from reflected point cloud intensities corresponding to a certain height interval. The different layers of intensity maps can be used to simultaneously estimate obstacles' height, solidity/density, and opacity. We demonstrate that intensity maps can help accurately differentiate obstacles that are safe to navigate through (e.g. beaded/string curtains, pliable tall grass), from ones that must be avoided (e.g. transparent surfaces such as glass walls, bushes, trees, etc.) in indoor and outdoor environments. Further, to handle narrow passages, and navigate through non-solid obstacles in dense environments, we propose an approach to adaptively inflate or enlarge the obstacles detected on intensity maps based on their solidity, and the robot's preferred velocity direction. We demonstrate these improved navigation capabilities in real-world narrow, dense environments using a real Turtlebot and Boston Dynamics Spot robots. We observe significant increases in success rates to more than 50%, up to a 9.5% decrease in normalized trajectory length, and up to a 22.6% increase in the F-score compared to current navigation methods using other sensor modalities. ## I Introduction Mobile robots have been used to navigate in indoor environments (such as households, offices, hospitals, etc. [1, 2, 3]), and outdoor environments such as agricultural fields, forests, etc. [4, 5, 6]. Such complex environments contain obstacles of various sizes, densities/solidities, and opacities that are challenging in terms of the robot's perception and navigation. For instance, contemporary indoor environments contain objects such as string/beaded curtains, transparent surfaces such as glass walls [7, 8], etc. Outdoor scenarios, on the other hand, have complex vegetation such as pliable tall grass, bushes, trees, etc. in close proximity to and intertwined with each other. A major challenge, and an important requirement for autonomous navigation, is differentiating _truly solid_ and impassable obstacles (furniture, glass surfaces, bushes, trees, etc.) from obstacles that can be passed through (beaded curtains, tall grass, etc.). To first detect obstacles, mobile robots have predominantly used RGB and depth images [9], 2D lidar scans [10], 3D point clouds [11], etc. The raw data from these sensors has been used to: (1) estimate the proximity of objects in the robot's vicinity; (2) compute a variation of an occupancy grid [12] or a cost map representation that indicates both an obstacle's size and distance from the robot; or (3) segment the scene to assess obstacles' size, distance, and semantic meaning [13, 14]. Although such approaches have been used to aid navigation, they may not work well in environments composed of thin, pliable/bendable, and transparent obstacles. For instance, time-of-flight sensors such as depth cameras and lidars tend to detect thin, pliable, and passable obstacles (e.g. string curtains, tall grass) as solid obstacles that the robot must avoid [15]. On the other hand, transparent objects such as glass remain undetected since the laser rays mostly pass through them, leading to collisions during navigation.
Similarly, perception methods using RGB images may not work well at detecting transparent objects and visually similar but structurally dissimilar vegetation. Moreover, they are severely affected by the environment's lighting changes. **Main Contributions:** In order to address these robot perception challenges, we present a novel obstacle representation called _multi-layer intensity map_ that can be used to simultaneously estimate the size/height, density/solidity, and opacity of objects in the environment. The intensity maps are constructed by stacking individual layers of 2D grid maps, each computed from reflected point cloud intensities corresponding to a certain height interval. It preserves the benefits of existing occupancy grids, such as indicating the size and proximity of obstacles around the robot, while also accurately detecting passable obstacles as such. The novel components of our approach include: * A novel obstacle representation called the multi-layer intensity map constructed from the intensities of the reflected points in a point cloud. In addition to an object's height and size, the multi-layer intensity map also reflects its true density/solidity. Our multi-layer intensity map can replace occupancy grids and other map representations in existing navigation methods to enable a robot to navigate through passable/navigable indoor and outdoor obstacles that are often misclassified by existing representations. * A novel method to detect transparent objects using the low-intensity reflected points in the multi-layer intensity map. Our approach accurately extrapolates the transparent surfaces from a small neighborhood of low-intensity points, which enables a robot's motion planner to avoid collisions, significantly improving its rate of safely reaching its goal. * A novel method using the multi-layer intensity map to accurately identify objects that are safe to navigate through, such as thin, passable curtains in indoor scenarios and pliable tall grass in outdoor environments. Our approach alleviates robot freezing behaviors in the presence of such objects. * An adaptive inflation strategy that assesses the detected solid obstacles (e.g. concrete and glass walls, furniture indoors, and bushes and trees outdoors) to enlarge them in the multi-layer intensity map for efficient planning. Our strategy handles narrow scenarios such as doors, corridors, and passages between dense vegetation and trees, where existing navigation schemes freeze. We demonstrate significant improvements in navigation using intensity maps in indoor scenes using a real Turtlebot, and in complex outdoor environments using a Boston Dynamics Spot robot. Fig. 1: Comparison of trajectories while navigating using our Multi-layer Intensity Maps, DWA with laser scan [10], DWA with occupancy map [12], Spot's Inbuilt Autonomy, and VERN [16] in complex vegetation. The intensity map for the scenario is shown in the top right, with the robot in pink, its goal in blue, passable obstacles such as tall grass in green, and the robot's trajectory overlaid for reference. Intensity maps help differentiate solid objects such as trees even when they are intertwined with tall grass. Other methods for robot perception in this case either freeze or collide with the solid obstacles. ## II Related Work In this section, we give a brief overview of perception methods that use point cloud intensities. Additionally, we review the existing obstacle detection research in indoor and outdoor settings.
### _Sensing using Point Cloud Intensity_ The importance of intensity information in LiDAR point cloud data has been an emerging focus in robotics [17, 18, 19]. Some methods investigate the use of intensity information alongside geometric features to enhance point cloud classification methods in outdoor settings [20]. These methods highlight the potential of using the intensity information to provide a better understanding of obstacles, particularly when scene illumination is not consistent. Other methods include the ISHOT descriptor [21], which combines geometric and intensity data for improved place recognition. Lidar intensity maps have been also used for localization [22, 23, 24]. In [22], the authors present a robust Graph-SLAM framework that improves map accuracy for autonomous vehicles by encoding road surfaces based on LIDAR reflectivity. Moreover, the application of LiDAR intensity in visual navigation tasks has been explored. For instance, [25] introduces a lidar-intensity-image pipeline and demonstrates its performance in visual odometry (VO) and visual teach and repeat (VT&R) tasks. Lidar intensity maps have also been leveraged for various other applications, including orthoimage generation [26] and anisotropic surface detection [27]. ### _Detecting Indoor Obstacles_ Object detection in indoor settings has been widely studied for numerous applications including robot navigation, mapping, and computer graphics. Popular solutions in the literature include vision-based object detection and semantic segmentation approaches due to the structuredness of indoor environments. Moreover, the generation of necessary image datasets is feasible due to the limited diversity of indoor objects. However, detecting non-opaque objects such as glass remains a formidable challenge for vision-based systems due to the lack of visual clues. The method in [7] proposes GDNet, a glass detection network that identifies abundant contextual cues for glass detection using a large-field contextual feature integration (LCFI) module. UCTNet [13] proposes a cross-modal transformer network for indoor RGB-D semantic segmentation to identify different objects such as curtains, doors, etc. However, such methods require large datasets with pixel-level ground truth labeling. ### _Detecting Outdoor Obstacles_ Over recent years, there has been significant progress in the development of robotic systems designed for outdoor navigation [14, 28, 29, 30, 31]. One early approach can be found at [32], where the usage of laser measurements enabled navigation capabilities for robots such as detecting short and grass-like vegetation. However, this method is not universally applicable, particularly in complex, unstructured vegetative terrains. Complementary approaches have tackled associated issues in off-road navigation, specifically concerning varying slopes [33] and different terrains [34]. Many studies integrate proprioceptive with exteroceptive sensory data to enhance outdoor navigation [35, 36, 37]. Machine learning techniques have also been incorporated to augment the robot's capabilities for navigating through pliable vegetative obstacles [38, 39]. To this end, [16] uses a few shot learning approach to classify RGB vegetation images based on their traversability. This classifier is then integrated with a 3D LiDAR to construct a vegetation-aware traversability cost map. 
## III Background ### _Definitions and Assumptions_ Our formulation assumes that a sensor capable of generating 3D point clouds (e.g., 3D lidar, depth camera) is mounted on a robot with a 2D linear \((v)\) and angular \((\omega)\) velocity space. Rigid coordinate frames are attached to the robot and sensor with the positive \(x,y,z\) directions facing forward, left, and upwards respectively, and for simplicity, we assume both frames to coincide. All positions, and velocities are measured relative to these frames. At any time \(t\), the robot has a preferred velocity direction aimed at its goal \((g_{x},g_{y})\) as \(\theta=\tan^{-1}(g_{y}/g_{x})\). Our approach is based on using the reflected intensities of point clouds that could be obtained from sensors such as 3D lidars, depth cameras, etc that have a laser source/transmitter and a receiver. We represent a point \(\mathbf{p}\) in a point cloud as \(\mathbf{p}=\{x,y,z,int\}\), where \(x,y,z\) denote the point's location relative to the sensor, and \(int\in[0,R]\) denotes its intensity. We define a 2D robot-centric grid map as containing \(n\times n\) grids. Each grid is denoted by a row \(r\) and column \(c\). Each grid represents a \(g\times g\) area in the real-world, and the value contained in it indicates the probability of the presence of an obstacle. Finally, we use \(j\) and \(k\) to denote indices. ### _Point Cloud Intensity_ Typically, a point's intensity is high (\(int>0.75R\)) when it is reflected from solid, opaque, 3D (length, width, and height dimensions are not infinitesimal) objects since they prevent the sensor's laser rays from passing through, or scattering away from the sensor. In contrast, objects that are low density (e.g. tall grass which is a collection of thin blades of grass that scatter laser rays), and transparent (e.g. glass where laser rays mostly pass through) lead to low intensities (\(int<0.5R\)) or in some cases, no intensity (\(int=0\)). ### _Obstacle Properties_ Our formulation leverages the property to accurately detect truly solid objects from the following categories defined based on how objects are sensed by existing perception modalities (2D lidar scan, RGB and depth images, etc) as: * **True Positives (TP)**: Solid, non-traversable objects detected as solid, e.g. walls, wooden furniture, etc. * **True Negatives (TN)**: Non-solid, traversable objects detected as passable or no obstacle, e.g. free space. * **False Positives (FP)**: Non-solid objects detected as solid, e.g. string/beaded curtains, pliable tall grass. * **False Negatives (FN)**: Solid objects detected as non-solid or as free space, e.g. transparent objects. ### _Obstacle Inflation_ Once a truly solid object is detected, it must be enlarged or _inflated_ for the robot's planner to ensure that it avoids it at a safe distance [40]. Inflation is performed prior to planning to expand obstacles uniformly in all dimensions by a certain amount (typically the robot's radius, or maximum(length, width) to ensure that the planner avoids obstacles by a safe distance. Standard methods for obstacle inflation include performing the Minkowski sum [41] between the robot's radius and the obstacle, cost propagation from the obstacle, dilating obstacles using convolutions, etc. on a grid map. 
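To make the uniform inflation described above concrete, the following minimal sketch dilates a binary obstacle grid with a disk-shaped structuring element, which corresponds to a Minkowski-sum inflation by the robot's radius. The grid size, cell size, and the use of SciPy's binary dilation are illustrative assumptions rather than details of our implementation.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def disk_kernel(radius_cells: int) -> np.ndarray:
    """Disk-shaped structuring element with the given radius in grid cells."""
    r = radius_cells
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return (x ** 2 + y ** 2) <= r ** 2

def inflate_uniform(obstacle_grid: np.ndarray, robot_radius: float, cell_size: float) -> np.ndarray:
    """Uniformly inflate obstacles by the robot radius (Minkowski sum with a disk)."""
    radius_cells = int(np.ceil(robot_radius / cell_size))
    return binary_dilation(obstacle_grid, structure=disk_kernel(radius_cells))

# Toy example: a single obstacle cell in a 20 x 20 grid with 0.05 m cells
grid = np.zeros((20, 20), dtype=bool)
grid[10, 10] = True
inflated = inflate_uniform(grid, robot_radius=0.2, cell_size=0.05)
print(int(inflated.sum()), "cells marked as obstacle after inflation")
```

Because such uniform inflation enlarges obstacles in every direction, it can close off narrow passages, which motivates the adaptive strategy introduced later in Section IV.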
With these preliminaries, we state our problem formulation as follows: **Formulation III.1**.: _To construct an \(n\times n\times m\) grid map representation \(I_{ML}^{t}\) of obstacles from points \(\mathbf{p}=\{x,y,z,int\}\) and classify each grid \((r,c)\in I_{ML}^{t}\) as a true positive (\(TP\)), false positive (\(FP\)), false negative (\(FN\)) obstacle, or true negative (\(TN\)) free space and enlarge \(TP\) and \(FN\) obstacles adaptively based on the robot's preferred velocity direction._ ## IV Our Approach In this section, we discuss how 3D point cloud intensities can be used to construct the multiple 2D grid map layers of an intensity map. The input point clouds could be obtained from any sensor such as a 3D lidar or a depth camera that also measures the intensity of the points reflected from surrounding objects. We show how different layers of the intensity map can be used to accurately differentiate solid obstacles from passable objects, and identify transparent obstacles. Finally, we detail our obstacle inflation strategy, which enables robots to navigate through passable obstacles, and narrow passages. Fig. 2 shows our overall proposed architecture. ### _Multi-Layer Intensity Map_ We construct a single layer of the intensity map corresponding to a height interval \(H_{j}\) at any time instant \(t\) as follows, \[\begin{split} I_{z\in H_{j}}^{t}(r,c)&=\frac{\sum_{ x}\sum_{y}int}{g^{2}}\\ \forall~{}x&\in[x_{low},x_{low}+g],~{}y\in[y_{low},y _{low}+g],\\ x_{low}&=\left\lfloor(r-\frac{n}{2})\cdot g\right\rfloor ~{}\text{and}~{}y_{low}=\left\lfloor(c-\frac{n}{2})\cdot g\right\rfloor\\ H_{j}&=[low_{j},high_{j}],~{}low_{j}\leq high_{j}, \end{split} \tag{1}\] where \(\lfloor\rfloor\) represents the floor operation, \(low_{j}\), and \(high_{j}\) are the limits for all the points whose intensities must be considered for the map along the \(z\) direction. Extending this definition, we construct a _multi-layer_ intensity map as a stacking of \(m\) layers as, \[I_{ML}^{t}(r,c)=[I_{z\in H_{1}}^{t}(r,c)\,|\,...\,|\,I_{z\in H_{j}}^{t}(r,c)\,| \,...\,|\,I_{z\in H_{m}}^{t}(r,c)], \tag{2}\] Fig. 2: Our approach’s overall system architecture. At time instant \(t\), the intensities of reflected point clouds from a height interval \(H_{j}\) (grey rectangles) are used to construct a 2D grid map layer \(I_{z\in H_{j}}^{t}\) according to equation 1. Several of such layers are stacked to form an intensity map. Certain layers from intensity maps can be used to detect TP, FP, and FN obstacles. The adaptive inflation enlarges the truly solid (TP, FN) obstacles that the robot must avoid in the direction opposite to the robot’s goal \((g_{x},g_{y})^{\dagger}\) direction, and ignores passable obstacles (FP). The planner finally uses the inflated map to compute collision-free linear and angular velocities \(v^{*},\omega^{*}\). where \(H_{1},...,H_{j},...,H_{m}\) are non-overlapping height intervals. We choose stack multiple non-overlapping layers at various heights instead of combining the points' intensities at all heights into a unified layer. This is due to the flexibility that multiple layers provide in analyzing and modifying them individually. Furthermore, individual layers can be combined after modification and used for planning the robot's trajectories. We highlight these benefits in the following sections. 
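As an illustration of Eqs. 1 and 2, the sketch below bins a point cloud, given as rows of \((x,y,z,int)\), into one 2D intensity layer per height interval and stacks the layers. The array input format, grid resolution, and parameter values are assumptions made for this example rather than the exact implementation.

```python
import numpy as np

def intensity_layer(points: np.ndarray, low: float, high: float,
                    n: int = 200, g: float = 0.05) -> np.ndarray:
    """One 2D intensity layer (Eq. 1): sum of reflected intensities per g x g
    cell, normalized by the cell area, using points with z in [low, high]."""
    layer = np.zeros((n, n))
    mask = (points[:, 2] >= low) & (points[:, 2] <= high)
    pts = points[mask]
    # Robot-centric grid: cell (r, c) covers x in [(r - n/2) g, (r - n/2) g + g]
    rows = np.floor(pts[:, 0] / g).astype(int) + n // 2
    cols = np.floor(pts[:, 1] / g).astype(int) + n // 2
    valid = (rows >= 0) & (rows < n) & (cols >= 0) & (cols < n)
    np.add.at(layer, (rows[valid], cols[valid]), pts[valid, 3])
    return layer / g ** 2

def multi_layer_intensity_map(points: np.ndarray, intervals, n: int = 200, g: float = 0.05) -> np.ndarray:
    """Stack one layer per non-overlapping height interval into an n x n x m map (Eq. 2)."""
    return np.stack([intensity_layer(points, lo, hi, n, g) for lo, hi in intervals], axis=-1)

# Toy point cloud: columns are x, y, z, intensity
cloud = np.array([[1.0, 0.5, 0.30, 120.0],
                  [1.0, 0.5, -0.20, 40.0],
                  [2.5, -1.0, 0.05, 200.0]])
mim = multi_layer_intensity_map(cloud, intervals=[(-0.5, -0.01), (0.0, 0.5)])
print(mim.shape)  # (200, 200, 2)
```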
### _Obstacle Detection_ In this section, we describe how challenging obstacles such as tall grass (FP), string/beaded curtains (FP), and transparent objects (FN) can be detected using our multi-layer intensity map. #### Iii-B1 Differentiating True and False Positive Obstacles Existing methods that use various sensor modalities typically detect many thin, pliable obstacles such as tall grass, and passable objects such as string/beaded curtains as solid obstacles that must be avoided. During navigation, such inaccurate detections cause the robot to freeze or oscillate perpetually without reaching its goal. We show how such false positive obstacles can be detected using the multi-layer intensity map. Let us consider three layers \(I^{t}_{z=0},I^{t}_{z\in(0,h]},I^{t}_{z\in[-h,0)}\) of the intensity map. If a grid location \((r,c)\) belonging to all three layers satisfies the following condition \(\mathcal{C}\), we classify that grid as a false positive obstacle: \[\mathcal{C}(r,c):I^{t}_{z=0}(r,c),\ I^{t}_{z\in(0,h]}(r,c),\ I^{t}_{z\in[-h,0 )}(r,c)\leq\Gamma. \tag{3}\] Here, \(\Gamma\) is an intensity threshold. Using this condition, we construct TP and FP intensity maps for planning as, \[\begin{split} I^{t}_{TP}(r,c)=max(I^{t}_{z=0}(),& I^{t}_{z\in(0,h]}(),I^{t}_{z\in[-h,0)}())\\ &\forall\ (r,c)\ |\ \mathcal{C}(r,c)\ \text{is False}\end{split} \tag{4}\] \[\begin{split} I^{t}_{FP}(r,c)=max(I^{t}_{z=0}(),& I^{t}_{z\in(0,h]}(),I^{t}_{z\in[-h,0)}())\\ &\forall\ (r,c)\ |\ \mathcal{C}(r,c)\ \text{is True}\end{split} \tag{5}\] #### Iii-B2 Detecting False Negative Obstacles False negative obstacles are typically transparent objects that allow most of the laser rays from a sensor to pass through. However, a small neighborhood of points with low intensities are detected for laser rays that are incident \(\sim 0^{\lx@math@degree}\) on a transparent surface only along the \(z=0\) plane [42]. Our approach extrapolates this small neighborhood to detect solid transparent obstacles. Consider two layers \(I^{t}_{z=0}\) and \(I^{t}_{z=-\epsilon}\) in the multi-layer intensity map at time instant \(t\). Here, \(\epsilon\) is a small positive value. To isolate the low-intensity neighborhood of points reflected from the transparent object, we first calculate the element-wise difference between the two layers \(I^{t}_{z=0}\ominus I^{t}_{z=-\epsilon}\). This removes all the points reflected from the same obstacles in both the layers and retains only the points corresponding to the small glass neighborhood. To indicate the presence of transparent obstacles for subsequent time steps, we transform \(I^{t}_{z=0}\ominus I^{t}_{z=-\epsilon}\) based on the robot's motion as, \[I^{t}_{FN}=T\cdot(I^{t}_{z=0}\ominus I^{t}_{z=-\epsilon}), \tag{6}\] where T is a \(4\times 4\) transformation matrix whose rotational component is based on the robot's yaw, the translational component is based on its motion from time \(t\) to \(t+1\). Finally, we augment subsequent time's \(I^{t+1}_{FN}\) using \(I^{t}_{FN}\) as, \[I^{t+1}_{FN}=I^{t+1}_{FN}\ \bigcup\ I^{t}_{FN}. \tag{7}\] ### _Adaptive Obstacle Inflation_ In narrow and dense scenarios, uniformly inflating obstacles (as explained in section III-D) could close the available free space (see fig. 3) leading to the robot freezing problem [43]. If obstacles are not inflated, the robot could move close to the obstacles and even collide with them. Additionally, false positive obstacles that can be traversed need not be inflated. 
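The detection rules in Eqs. 3-5, together with the false-negative accumulation of Eq. 7, reduce to simple array operations; the sketch below is illustrative only, and the threshold value and the zero-filling of unselected cells are assumptions for the example.

```python
import numpy as np

def split_tp_fp(layer_zero: np.ndarray, layer_up: np.ndarray,
                layer_down: np.ndarray, gamma: float):
    """Separate true-positive and false-positive obstacle maps (Eqs. 3-5): a cell
    is a false positive (passable, e.g. tall grass) when all three height layers
    stay at or below the intensity threshold gamma."""
    stacked = np.stack([layer_zero, layer_up, layer_down], axis=0)
    condition_c = np.all(stacked <= gamma, axis=0)   # Eq. 3
    max_layer = stacked.max(axis=0)
    i_tp = np.where(~condition_c, max_layer, 0.0)    # Eq. 4: truly solid obstacles
    i_fp = np.where(condition_c, max_layer, 0.0)     # Eq. 5: passable obstacles
    return i_tp, i_fp

def accumulate_fn(i_fn_next: np.ndarray, i_fn_prev: np.ndarray) -> np.ndarray:
    """Carry detected transparent (false-negative) cells forward in time (Eq. 7),
    realized here as an element-wise maximum (union) of the two grids."""
    return np.maximum(i_fn_next, i_fn_prev)

# Toy layers: a low-intensity grass-like cell and a high-intensity solid cell
zero = np.zeros((5, 5)); up = np.zeros((5, 5)); down = np.zeros((5, 5))
zero[2, 2] = up[2, 2] = down[2, 2] = 30.0     # low in every layer -> FP
zero[1, 1] = up[1, 1] = down[1, 1] = 200.0    # high -> TP
i_tp, i_fp = split_tp_fp(zero, up, down, gamma=100.0)
print(i_tp[1, 1], i_fp[2, 2])                 # 200.0 30.0
```

With solid and passable obstacles separated in this way, only the truly solid ones need to be enlarged, which is what the adaptive inflation below does.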
Therefore, our approach adaptively inflates the obstacles based on the robot's goal direction. Obstacles are majorly inflated in the direction opposite to the robot's goal/preferred direction, and minorly inflated in all other directions. This ensures that the robot does not navigate too close to a solid obstacle, while also never closing free space near the narrow passages. Our inflation is performed using the convolution operation on obstacles by computing the appropriate kernel matrices \(K\) of size \(e\times e\) as follows. Let \((g_{x},g_{y})\) be the goal location relative to the robot's coordinate frame. The goal direction can be defined by the slope \(\tan\frac{g_{y}}{g_{x}}\). To design a kernel matrix \(K\) to inflate obstacles in the opposite direction, we first find the line along the goal direction relative to the kernel, passing through its center \((\frac{e}{2},\frac{e}{2})\) as, \[\begin{split} f(r^{K},c^{K}):&\ c^{K}-\tan(g_{y}/ g_{x})\cdot r^{K}+\text{const}=0\\ &\text{const}=\frac{e}{2}(\tan(g_{y}/g_{x})-1).\end{split} \tag{8}\] Here, \((r^{K},c^{K})\) represent the row and column on the kernel. Next, the kernel can be constructed as, \[K(r^{K},c^{K})=\begin{cases}1&\forall\{r^{K},c^{K}|f(r^{K}+j,c^{K}+j)= 0\}\\ 0&\text{Otherwise}.\end{cases} \tag{9}\] Fig. 3: The robot’s position and its goal are denoted by the pink and blue circles respectively. The robot’s heading and goal direction coincide in this case. **[Left]**: Uniformly inflating obstacles using an \(e\times e\) kernel with all ones near a narrow passage closing up the available free space (green rectangle). **[Right]**: Adaptively inflating the obstacles based on the robot’s goal direction preserves the free space while inflating the obstacles in the direction opposite to the robot’s goal direction. Here, \(j\in[\text{-padding},\text{padding}]\) is added to the row and column indices to control the level of thickness to inflate an obstacle. Finally, using \(K\), we convolve our multi-layer intensity map as, \[I^{t}_{inflate}=(I^{t}_{TP}\bigcup I^{t}_{FN})\varoccurlyeq K. \tag{10}\] \(I^{t}_{inflate}\) contains inflated True Positive and False Negative obstacles. To add information about false positive obstacles prior to planning, we perform, \[I^{t}_{plan}=I^{t}_{inflate}\ \bigcup\ I^{t}_{FP}, \tag{11}\] where \(I^{t}_{plan}\) is a 2D grid map containing TP, FP, FN, and free space represented by various grids, which can be used as a cost map for motion planning. ### _Integration with Planning Methods_ The final planning intensity map \(I^{t}_{plan}\) can be used with any motion planner as a cost map to evaluate a candidate linear and angular velocity's \((v,\omega)\) obstacle or collision cost. This can be computed by extrapolating the trajectory produced by \((v,\omega)\) relative to \(I^{t}_{plan}\) as in [34, 16] as, \[\begin{split} traj^{I^{t}_{plan}}=[(r_{1},c_{1}),...,(r_{j},c_{j} ),...,(r_{p},c_{p})]\\ \text{Obstacle Cost}=\sum_{j=1}^{p}I^{t}_{plan}(r_{j},c_{j}).\end{split} \tag{12}\] Additionally, \(I^{t}_{plan}\) can be used as an observation with deep reinforcement learning-based navigation methods [33] to improve the trained model's understanding of the solidity of surrounding obstacles. ## V Results and Analysis In this section, we summarize our method's implementation details and evaluation metrics. Then, we present the details of the experiments conducted to highlight the benefits of our approach. 
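Before detailing the implementation, the obstacle cost of Eq. 12 can be sketched for a candidate velocity by rolling it out with a simple unicycle model and summing \(I^{t}_{plan}\) over the traversed cells. The rollout horizon, time step, and grid parameters below are illustrative assumptions, not the exact planner settings used in our experiments.

```python
import numpy as np

def obstacle_cost(i_plan: np.ndarray, v: float, omega: float,
                  horizon: float = 2.0, dt: float = 0.1, g: float = 0.05) -> float:
    """Obstacle cost of a candidate (v, omega) (Eq. 12): roll the velocity out
    with a unicycle model and sum I_plan over the traversed grid cells."""
    n = i_plan.shape[0]
    x = y = theta = 0.0          # robot-centric frame: start at the grid center
    cost = 0.0
    for _ in range(int(horizon / dt)):
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += omega * dt
        r = int(np.floor(x / g)) + n // 2
        c = int(np.floor(y / g)) + n // 2
        if 0 <= r < n and 0 <= c < n:
            cost += i_plan[r, c]
    return cost

# Score a few candidate velocities against a toy cost map with an obstacle ahead
i_plan = np.zeros((200, 200))
i_plan[110:120, 98:103] = 1.0
for v, w in [(0.5, 0.0), (0.5, 0.4), (0.5, -0.4)]:
    print((v, w), obstacle_cost(i_plan, v, w))
```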
### _Implementation_ We use a Boston Dynamics Spot robot for outdoor experiments and a Turtlebot 2 robot for indoor experiments. The Spot robot is equipped with an Intel NUC 11 with an Intel i7 CPU and an NVIDIA RTX 2060 GPU. The Turtlebot is equipped with a laptop with an Intel i9 CPU and an Nvidia RTX 2080 GPU. Both robots are equipped with a Velodyne VLP16 lidar. We use the Velodyne lidar's dual-return mode in particular, since it better captures low intensities in the point cloud [42]. We use \(n=200\) and \(m=4\) layers in our implementation (one each for \(I^{t}_{z\in[-h,0)},I^{t}_{z\in(0,h]},I^{t}_{z=0},I^{t}_{z=-\epsilon}\)). The height ranges are adjusted based on the robot's height. ### _Comparison Methods and Evaluation Metrics_ We compare the improvements in navigation when using intensity maps against various existing methods. We use intensity maps with the dynamic window approach (DWA) [10] and calculate each candidate velocity's obstacle cost as described in section IV-D. For a fair comparison, we use DWA as the baseline planner and integrate the other methods for obstacle perception with it for navigation. We compare with various indoor and outdoor navigation methods that use a variety of perception inputs such as RGB/depth images, 2D laser scans, or occupancy grids created by any proximity sensor. DWA is a local planner that can use a 2D LiDAR scan [10] or an occupancy map [12] to perform obstacle avoidance. Spot's in-built autonomy incorporates a set of stereo cameras around the robot to estimate the obstacles and the ground plane to navigate to a goal. VERN is a vegetation-aware navigation approach that uses an RGB image-based vegetation classifier and a set of 2D occupancy grid maps for perception. In indoor scenarios, Glass-SLAM [44] uses the specular reflection of laser beams from the glass to map environments that include glass. GDNet [45] is a semantic segmentation method that uses RGB images to segment glass from a scene. We also perform ablation studies comparing adaptive and uniform inflation of intensity maps. We use the following evaluation metrics: **Success Rate** - Number of successful goal-reaching attempts (without collisions with solid objects or freezing behaviors) out of the total number of trials. **Normalized Traj. Length** - The ratio between the robot's trajectory length and the straight-line distance to the goal from the starting location, averaged over all runs. **F-Score** - A measure of object detection accuracy calculated as the harmonic mean of precision and recall. Values are between 0 and 1, where 1 indicates the best accuracy. We use human detection of TP, FP and FN obstacles to calculate precision and recall. **Inference Time** - Time taken from the instant an obstacle is viewed to the instant when it is detected. ### _Testing Scenarios_ * Indoor scenario with glass, concrete walls, and pillars (Fig. 3(a)). * Indoor scenario with a narrow passage covered with a beaded/string curtain (Fig. 3(b)). * Outdoor scenario with tall grass, bushes, and trees separated from each other (Fig. 3(c)). * Outdoor scenario with tall grass, bushes, and trees closely intertwined with each other, creating narrow passages (Fig. 1). ### _Analysis and Comparison_ We present the qualitative navigation experiment results for the four scenarios in Fig. 4 and the quantitative results in Table I. Scenarios 1 and 2 are complex indoor settings, whereas scenarios 3 and 4 are outdoor settings. We observe that the intensity map demonstrates the highest success rate compared to the other methods in all four scenarios.
In scenario 1, 2D laser scan-based and occupancy map-based DWA planners fail to identify the glass region since they do not incorporate lidar intensity for object detection. GDNet based indoor segmentation fails to identify the glass region from RGB inputs consistently due to lighting changes, and strong lights reflected from the floor (see Fig. 6). Hence, these methods lead to collisions with the glass by assuming it to be free space. Glass-SLAM can identify the glass region using lidar intensity. However, it does not avoid glass all the time since the glass is constructed slower than the robot's motion (see Fig. 7). Intensity map's multi-layer map formulation leads to consistent glass detection which also results in a higher F-score compared to the other methods. Scenario 2 includes a passable string curtain which is detected as an obstacle from the 2D laser scan and occupancy map based DWA methods. Hence, these methods attempt to avoid the curtain and collide with the glass during navigation. Further, the glass detection SLAM method identifies both the curtain and the glass as obstacles. Hence, all three methods demonstrate poor success rates in scenario 2. GDNet identified both the open door and the glass near it as glass preventing the robot from entering through the door. In contrast, our intensity map demonstrates a significantly high success rate irrespective of the lighting conditions due to the \(360^{\lx@math@degree}\) LiDAR-based intensity map formulation. Scenarios 3 and 4 includes traversable tall grass and non-traversable bushes and trees. However, the two DWA methods identify all such vegetation regions as obstacles due to 2D laser scan-based obstacle detection causing freezing behaviors and longer detours during navigation. Similarly, Spot's inbuilt autonomy struggles to estimate the ground and free space from its vision-based perception in both Scenario 3 and scenario 4. Hence, the robot demonstrates highly unstable motion in vegetation. In scenario 3, VERN is \begin{table} \begin{tabular}{|c|c|c|} \hline **Method** & **Inference Time (sec)** & **Training Time** \\ \hline Intensity map & 0.020 on CPU & NA \\ \hline Glass-SLAM & 4.5 on CPU & NA \\ \hline GDNet & 0.112 on GPU & 12:15 hours \\ \hline VERN & 0.083 on GPU & 6-8 hours \\ \hline \end{tabular} \end{table} TABLE II: The inference rates and training time (where applicable) for several methods that detect glass (Glass-SLAM [44], GDNet [45]), and pliable vegetation (VERN [16]), intensity maps are capable of detecting transparent obstacles such as glass, and passable tall grass in real-time, faster than existing methods that use lidars and RGB images. Fig. 4: Robot trajectories when navigating in different complex indoor and outdoor environments using various methods. (a) Intensity map identifies transparent objects such as glass in real-time and avoids it, (b) Intensity map identifies string curtains as safe, passable obstacle while other methods detect it as solid, and impassable. (c) Intensity map accurately identifies pliable tall grass regions (which registers lower intensities on \(I^{t}_{plan}\)) to navigate through, avoiding trees. (d,e,f) show the corresponding \(I^{t}_{plan}\) for the scenarios above. White, grey, and black colors represent intensities in a decreasing order of magnitude. The robot’s starting location is in pink, and goal is represented in blue. The yellow in (d) represents the glass (FN obstacle) extrapolated in real-time by our method in section IV-B2. 
Green in (e, f) represent passable, non-solid obstacles such as curtains and tall grass respectively. Intensity map-based navigation’s trajectory is overlayed for reference. (d, e, f) also depict obstacles inflated based on the robot’s goal direction. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Scenario** & **Method** & **Success** & **Norm. Trial.** & **F-Score** \\ & & **Rate (\%)** & **Length** & \\ \hline \multirow{5}{*}{**Sen.**} & DWA with linear scan [10] & 0 & 0.342 & 0.12 \\ & DWA with occupancy map [12] & 0 & 0.415 & 0.20 \\ & Glass-SLAM [44] & 30 & 0.895 & 0.76 \\ **Sen.** & GDNet [45] & 50 & 0.633 & 0.69 \\ & Intensity map (Uniform) & **80** & 1.256 & **0.82** \\ & Intensity map (Adaptive) & **80** & **1.052** & **0.82** \\ \hline \multirow{5}{*}{**Sen.**} & DWA with linear scan [10] & 0 & 0.765 & 0.17 \\ & DWA with occupancy map [12] & 0 & 0.355 & 0.16 \\ & Glass-SLAM [44] & 0 & 0.421 & 0.46 \\ & GDNet [45] & 0 & 0.318 & 0.65 \\ **2** & Intensity map (Uniform) & 30 & **1.134** & **0.79** \\ & Intensity map (Adaptive) & **70** & 1.141 & **0.79** \\ \hline \multirow{5}{*}{**Sen.**} & DWA with linear scan [10] & 20 & 1.571 & 0.35 \\ & DWA with occupancy map [12] & 0 & 1.482 & 0.33 \\ & IDSNet Anomaly & 10 & 0.311 & 0.38 \\ & VERN [16] & 70 & 1.154 & 0.75 \\ & Intensity map (Uniform) & 40 & 1.292 & **0.84** \\ & Intensity map (Adaptive) & **80** & **1.072** & **0.84** \\ \hline \multirow{5}{*}{**Sen.**} & DWA with linear scan [10] & 0 & 1.543 & 0.37 \\ & DWA with occupancy map [12] & 0 & 1.412 & 0.36 \\ & Sports’ height Anomaly & 0 & 0.298 & 0.31 \\ & VERN [16] & 50 & 1.267 & 0.68 \\ & Intensity map (Uniform) & 60 & 1.248 & **0.74** \\ & Intensity map (Adaptive) & **70** & **1.146** & **0.74** \\ \hline \end{tabular} \end{table} TABLE I: Performance comparison between using intensity maps for navigation versus other methods in indoor and outdoor scenarios using various metrics. We observe that intensity maps are versatile in detecting a variety of perceptionally challenging obstacles, and aiding the navigation. able to navigate through tall grass while avoiding trees and bushes using its vision-based classifier and the occupancy map formulation. However, VERN's vison-based classifier could not detect trees behind the grass region in scenario 4 since they are closely intertwined. In contrast, our multi-layer intensity map representation identifies such hidden solid objects to avoid during navigation. Hence, intensity map demonstrates a relatively higher success rate and F-score in scenario 4. In all scenarios, intensity maps (uniform and adaptive) have the highest F-score. The inaccuracies in detecting obstacles in intensity maps occur when the robot/lidar is closer than \(0.5\) meters away from an obstacle, where 3D lidars typically have a blindspot. **Benefits of Adaptive Inflation:** We use scenarios 2 and 3, which contain narrow passages to highlight the benefits of our adaptive inflation formulation. Laserscan and occupancy map-based DWA, and intensity map (Uniform) use uniform inflation around the obstacles to avoid collisions. This closes the narrow passages and represents them as obstacle regions, resulting in freezing or longer trajectories in the presence of narrow passages between the obstacles. However, our adaptive inflation preserves the narrow free spaces in the cost map while inflating the obstacles. 
Hence, navigation using the intensity map can pass through such spaces (e.g., through a small door in scenario 2 and between the trees in scenario 3) and reach the goals using shorter trajectories. **Inference Time:** We compared the inference times (Table II) of using intensity maps to detect glass and pliable vegetation with other methods that either detect glass (Glass-SLAM, GDNet) or vegetation (VERN) on the Intel NUC described in section V-A. We observe that, apart from being versatile in detecting obstacles, intensity maps are computationally light enough to be used with a robot's limited onboard computing resources. GDNet and VERN use RGB images passed through deep neural networks and require extensive prior training. While Glass-SLAM does not need training, it requires \(\sim 4.5\) seconds to update a map with obstacles and multiple runs to detect glass. Fig. 5: **[Top]:** Figures depict Turtlebot's navigation using MIMs in scenario 1 with two pillars in the glass wall marked in green and yellow. **[Bottom]:** The corresponding \(I_{plan}^{t}\) with uniform inflation. The pillars are marked in the same colors. (a) We observe that glass is misidentified as free space near the green pillar. (b) MIM's extrapolation of glass from a small neighborhood of reflected points shows up in white as the robot moves. (c) We observe that the glass between the green and yellow pillars is misidentified as free space at a time instant. (d) The glass between the pillars is extrapolated. Fig. 6: The results of GDNet [45] in various instances in scenario 1. While GDNet accurately segments glass in some instances ([middle]), it is often inaccurate due to reflections from the floor ([left], [right]). ## VI Conclusions, Limitations and Future Work We introduce a novel obstacle representation designed to enhance autonomous robot navigation in complex indoor and outdoor environments. Based on the intensity of reflected points from point clouds, the intensity map effectively characterizes obstacles by their height, solidity, and opacity. We also present an adaptive inflation technique that further refines navigation planning by considering obstacle solidity and available free space. We demonstrate significant improvements in navigation metrics such as success rates, trajectory lengths, and F-scores, validating our proposed approach. Our method has a few limitations. Since our multi-layer map representation is based on point cloud intensity, it cannot identify passable objects such as cloth curtains and metal fences. This is especially important because the navigability of such objects depends on the context (e.g., a window curtain may not be passable, but door curtains generally are). Hence, a semantic understanding of the environment is required for such cases. Further, our method cannot detect extremely thin objects such as thin poles, since the 3D point cloud may not capture a sufficient number of samples from them.
2306.17482
Graphtester: Exploring Theoretical Boundaries of GNNs on Graph Datasets
Graph Neural Networks (GNNs) have emerged as a powerful tool for learning from graph-structured data. However, even state-of-the-art architectures have limitations on what structures they can distinguish, imposing theoretical limits on what the networks can achieve on different datasets. In this paper, we provide a new tool called Graphtester for a comprehensive analysis of the theoretical capabilities of GNNs for various datasets, tasks, and scores. We use Graphtester to analyze over 40 different graph datasets, determining upper bounds on the performance of various GNNs based on the number of layers. Further, we show that the tool can also be used for Graph Transformers using positional node encodings, thereby expanding its scope. Finally, we demonstrate that features generated by Graphtester can be used for practical applications such as Graph Transformers, and provide a synthetic dataset to benchmark node and edge features, such as positional encodings. The package is freely available at the following URL: https://github.com/meakbiyik/graphtester.
Eren Akbiyik, Florian Grötschla, Beni Egressy, Roger Wattenhofer
2023-06-30T08:53:23Z
http://arxiv.org/abs/2306.17482v1
# Graphtester: Exploring Theoretical Boundaries of ###### Abstract Graph Neural Networks (GNNs) have emerged as a powerful tool for learning from graph-structured data. However, even state-of-the-art architectures have limitations on what structures they can distinguish, imposing theoretical limits on what the networks can achieve on different datasets. In this paper, we provide a new tool called graphtester for comprehensive analysis of the theoretical capabilities of GNNs for various datasets, tasks, and scores. We use graphtester to analyze over 40 different graph datasets, determining upper bounds on the performance of various GNNs based on the number of layers. Further, we show that the tool can also be used for Graph Transformers using positional node encodings, thereby expanding its scope. Finally, we demonstrate that features generated by Graphtester can be used for practical applications such as Graph Transformers, and provide a synthetic dataset to benchmark node and edge features, such as positional encodings. The package is freely available at the following URL: [https://github.com/meakbiyik/graphtester](https://github.com/meakbiyik/graphtester). Machine Learning, Graphtester, Graphtester, Graphtester ## 1 Introduction Graph-structured data is ubiquitous in various domains, including social networks (Newman, 2003), chemistry (Dobson & Doig, 2003), and transport (Barthelemy, 2011). Analyzing and learning from such data is critical for understanding complex systems and making informed decisions. Graph Neural Networks (GNNs) (Scarselli et al., 2009; Kipf & Welling, 2017) have emerged as popular tools for learning from graph data due to their ability to capture local and global patterns (Bronstein et al., 2017; Wu et al., 2020). The main approach for analyzing the theoretical power of GNN architectures relies on showing equivalence to the Weisfeiler-Lehman (WL) graph isomorphism test (Xu et al., 2019). Standard message-passing GNNs are bounded in power by the 1-WL test, and this can be used as a basis for calculating upper bounds on the performance of said GNNs on graph classification datasets (Zopf, 2022). We can extend this concept to different tasks, where these upper bounds can tell us what performance is achievable on a given graph dataset with a GNN. For example, in the task of categorical predictions for nodes, as long as we can predict the correct category for every node, we will get perfect accuracy. This only works if we can differentiate nodes that have to make different predictions. Once two nodes "see" precisely the same surrounding, they will always come to the same conclusion although they might need to make different predictions (refer Figure 1 for an example). By using the equivalence to the 1-WL algorithm, we can identify structurally identical nodes for a GNN and optimize the overall accuracy by assigning the majority label amongst similar nodes. We will thus get a theoretical upper bound that gives us insight into the solvability of a graph dataset. We present a tool called Graphtester that computes these metrics in an automated way and answers the question: What is the best performance one can achieve on a given dataset with a GNN? How does this upper bound improve once we add node or edge features? We further show that Graphtester is not only applicable to GNNs following the message-passing formula but also graph transformers (GTs), a more recent development integrating a generalized attention mechanism, if restricted to positional encodings for nodes. 
Our contributions can be summarized as follows: * We present and use Graphtester to analyze over 40 graph datasets to determine upper bound scores for various GNNs. The tool provides support for edge features, different performance measures, different numbers of GNN layers, and higher-order WL tests. It also comes with a synthetic dataset to benchmark features Figure 1: Two non-isomorphic graphs that cannot be distinguished by 1-WL. The colors stand for the stabilized 1-WL labels. for nodes and edges, such as positional encodings. * We prove that graph transformers making use of positional encodings for nodes are also bounded in power by the Weisfeiler-Lehman (1-WL) graph isomorphism test, thereby resulting in the same upper bounds as a GNN with additional encoding information and making our tool applicable as well. In addition, we extend the existing proofs for GNNs to cover edge features. The rest of this paper is organized as follows: Section 2 discusses related works; Section 3 presents our theoretical analysis; Section 4 introduces Graphtester package; and Section 5 concludes the paper. ## 2 Related Work ### Graph Neural Networks Graph Neural Networks (GNNs) have been widely studied as an effective way to model graph-structured data (Scarselli et al., 2009; Kipf and Welling, 2017; Bronstein et al., 2017; Wu et al., 2020). GNNs learn by propagating information through the graph and capturing local and global patterns. They have been applied to various tasks, such as node classification (Kipf and Welling, 2017), link prediction (Schlichtkrull et al., 2018), and graph classification (Duvenaud et al., 2015). There are now many variants, including Graph Convolutional Networks (GCNs) (Kipf and Welling, 2017), GraphSAGE (Hamilton et al., 2017), and Graph Attention Networks (GATs) (Velickovic et al., 2018). ### Graph Transformers Graph transformers are a more recent development in the GNN literature, inspired by the success of the Transformer architecture in natural language processing (Vaswani et al., 2017). They employ self-attention mechanisms to model interactions between all pairs of nodes in graphs (Dwivedi and Bresson, 2020; Kreuzer et al., 2021; Dwivedi et al., 2021; Wu et al., 2022b). This allows them to capture long-range dependencies with relatively few layers. However, this also comes with a high computational cost, limiting their applicability to very large graphs. Some approaches have been proposed to overcome such limitations and produce scalable architectures (Rampasek et al., 2022; Wu et al., 2022a). Graph transformers have shown promising results on various graph-based tasks, and their flexibility has led to a growing interest in understanding their capabilities and limitations. An important component when analyzing their capabilities is the choice of positional node encodings. ### Theoretical Analysis of GNNs The Weisfeiler-Lehman (WL) test is a well-known graph isomorphism test that has been linked to the expressive power of message-passing-based GNNs (Morris et al., 2019; Xu et al., 2019). There is a hierarchy of WL tests, and the 1-dimensional WL (1-WL) test has been shown to upperbound the expressive power of many GNN architectures (Xu et al., 2019). The test iteratively refines node labels based on the labels of neighboring nodes, providing a way to compare the structure of different nodes, graphs, or subgraphs. 
The theoretical connection to the WL test provides valuable insights into the representational power of GNNs, but it leaves the question of incorporating edge features largely unaddressed. In this paper, we extend this line of work by proving that in the absence of positional encodings, graph transformers are equivalent to the 2-WL algorithm, even when using edge features. Given the importance of specific subgraph patterns for certain applications, some works assess the theoretical power of GNNs on more tangible scales (Chen et al., 2020; Papp and Wattenhofer, 2022). ### Positional Encodings for GNNs GNNs, including graph transformers, face challenges distinguishing nodes and graphs. For example, a basic message-passing GNN is not able to distinguish two three-cycles from a six-cycle. One common approach to addressing this issue, especially with graph transformers, is to use positional encodings or pre-coloring methods to provide additional global information about node positions (Morris et al., 2019; Maron et al., 2020; Dwivedi and Bresson, 2020). Several positional encoding methods have been proposed, such as eigenvector-based encodings (Dwivedi et al., 2020), Poincare embeddings (Skopek et al., 2020), and sinusoidal encodings (Li et al., 2018). These methods aim to improve the expressivity of GNNs by enriching the node features with positional information that can help GNNs better capture the graph structure. In this paper, we focus on deterministic positional encoding methods, as they allow us to quantify fixed and provable improvements. ### Analysis of Graph Datasets Closest to our work is Zopf (2022), which analyses several standard GNN benchmark datasets for graph classification with respect to the 1-WL test. They compute upper bounds on the accuracy achievable by 1-WL GNNs and confirm that expressiveness is often not the limiting factor in achieving higher accuracy. Our work can be seen as an extension of this paper, going beyond simple graph classification to all graph-based tasks. Graphtester provides support for edge features, positional encodings, node/link targets, various performance measures, and higher-order WL tests. In addition, we lay the theoretical basis for extending the analysis to graph transformers, thereby further expanding the scope of graphtester. ## 3 Theoretical Analysis In this section, we provide the necessary theoretical analysis to use both edge features and graph transformers restricted to positional encodings for nodes in our framework. ### Preliminaries Let \(G=(V,E,\mathbf{X}^{V},\mathbf{X}^{E})\) be an undirected graph, where \(V\) and \(E\) denote the node and edge sets, and \(\mathbf{X}^{V},\mathbf{X}^{E}\) denote the node and edge feature matrices. The feature matrices have shapes (\(|V|\times d_{V}\)) and (\(|E|\times d_{E}\)) respectively, with \(d_{V}\) and \(d_{E}\) representing the number of different labels/colors each node and edge have. \(\mathcal{N}(v)\) represents the set of neighbours of node \(v\in V\). 1-Weisfeiler-LehmanThe 1-WL algorithm, also known as naive vertex classification or as color refinement, is one of the early attempts at the graph isomorphism (GI) problem. Variants of this algorithm are still employed in practical implementations of GI testers (Kiefer, 2020) such as _nauty_, _Traces_(McKay & Piperno, 2013), and _Bliss_(Junttila & Kaski, 2007). The iterative algorithm outputs a stable coloring of the nodes in a graph through a combination of neighborhood aggregation and hashing and is described in Algorithm 1. 
The function \(hash\) is an idealized perfect hash function that we assume to be injective. We know that the induced partitioning of the nodes by color stabilizes after at most \(|V|-1\) iterations (Kiefer, 2020), resulting in the maximum number of rounds we execute. Throughout the definitions, we use curly braces to refer to multisets. ``` 0:\(G=(V,E,\mathbf{X}^{V})\) \(c_{0}^{(0)}\gets hash(\mathbf{X}_{v}^{V})\)\(\forall v\in V\) for\(i\gets 1\)to\(|V|-1\)do \(c_{v}^{(i)}\gets hash(c_{v}^{(i-1)},\{c_{w}^{(i-1)}:w\in\mathcal{N}(v)\})\)\(\forall v\in V\) endfor return\(c_{v}^{i}\) ``` **Algorithm 1** 1-Weisfeiler-Lehman (1-WL) \(k\)-Weisfeiler-LehmanThe \(k\)-WL algorithm is a \(k\)-dimensional extension of Color Refinement where the algorithm hases over subgraphs of node k-tuples instead of single nodes. The algorithm can be seen in Algorithm 2. In addition to the standard notation, we use \(G[U]\) to represent the subgraph of \(G\) induced by selecting the set of nodes \(U\subseteq V\). Induced subgraphs include all edges between the selected nodes in the original graph, as well as node and edge attributes associated with them. Furthermore, we use \(\mathcal{N}_{i}(\mathbf{v})\) to represent the neighborhood of k-tuples of node \(\mathbf{v}\) at the index \(i\). That is, for a k-tuple \(\mathbf{v}=(v_{1},v_{2},...,v_{k})\) and \(v_{i}\in V\), the neighborhood at index \(i\) can be written as a multiset of k-tuples \[\mathcal{N}_{i}(\mathbf{v}):=\left\{(v_{1},...,v_{i-1},w,v_{i+1},...,v_{k}):w \in V\right\}.\] ``` 0:\(G=(V,E,\mathbf{X}^{V},\mathbf{X}^{E})\) \(c_{\mathbf{v}}^{(0)}\gets hash(G[\mathbf{v}])\)\(\forall\mathbf{v}\in V^{k}\) for\(i\gets 1\)to\((|V|^{k}-1)\)do \(c_{\mathbf{v},j}^{(i)}\leftarrow\{c_{\mathbf{w}}^{(i-1)}:\mathbf{w}\in \mathcal{N}_{j}(\mathbf{v})\}\)\(\forall j\in[1..k]\) \(c_{\mathbf{v}}^{(i)}\gets hash(\{c_{\mathbf{v}}^{(i-1)},c_{\mathbf{v},1}^{(i )},...,c_{\mathbf{v},k}^{(i)}\})\) endfor return\(c_{\mathbf{v}}^{(i)}\) ``` **Algorithm 2** \(k\)-Weisfeiler-Lehman (\(k\)-WL) Equivalence of 1-WL to \(k\)-WL for \(k=2\)It has been shown by Immerman and Lander (Immerman & Lander, 1990) that for graphs \(G\) and \(H\), 1-WL does not distinguish \(G\) and \(H\) if and only if \(G\) and \(H\) are \(\mathbb{C}^{2}\)-equivalent (Immerman & Lander, 1990). Furthermore, as shown by Cai, Furer and Immerman (Cai et al., 1989), \(k\)-WL does not distinguish \(G\) and \(H\) if and only if \(G\) and \(H\) are \(\mathbb{C}^{k}\)-equivalent. Consequently, 1-WL can distinguish \(G\) and \(H\) if and only if 2-WL can also distinguish \(G\) and \(H\). This insight is crucial when extending 1-WL to use edge features, as the preservation of this hierarchy is important to connect Graph Neural Networks and graph transformers to the \(k\)-WL literature in analyzing their expressivity. Graph Neural NetworksGNNs comprise multiple layers that repeatedly apply neighborhood aggregation and combine functions to learn a representation vector for each node in the graph. Rigorously, for an input graph \(G=(V,E,\mathbf{X}^{V})\), the _i_-th layer of a GNN can be written as \[c_{v}^{(i)}=\text{COMBINE}^{(i)}\left(c_{v}^{(i-1)},\right.\\ \left.\left.\left(\left\{c_{w}^{(i-1)}:w\in\mathcal{N}(v)\right\} \right)\right),\right.\] where \(c_{v}^{(i-1)}\) represents the state of node \(v\) after layer \((i-1)\). Graph TransformersTransformer models have been widely used in modeling sequence-to-sequence data in different domains (Vaswani et al., 2017). 
Although the attention mechanism has commonly been used to learn on graph-structured data (Velickovic et al., 2018), the use of transformers is relatively recent. A graph transformer layer relies on global self-attention and is parameterized by query, key, and value matrices \(\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V}\in\mathbb{R}^{d_{\text{in}}\times d_{\text{out}}}\), where \(d_{\text{in}}\) is the embedding dimension of nodes before the application of the transformer layer and \(d_{\text{out}}\) is the output dimension. For the sake of simplicity, we restrict ourselves to single-headed attention. We assume that node embeddings \(c_{v}^{(i-1)}\) are stacked in a matrix \(\mathbf{C}^{(i-1)}\in\mathbb{R}^{n\times d_{\text{un}}}\). \(\mathbf{C}\) is then projected with the query and key matrices before a softmax function is applied row-wise and the value matrix is multiplied: \[\text{Attn}(\mathbf{C}^{(i)})=\] \[\text{softmax}\left(\frac{\mathbf{C}^{(i-1)}\mathbf{W}^{Q}(\mathbf{C }^{(i-1)}\mathbf{W}^{K})^{T}}{\sqrt{d_{\text{out}}}}\right)\ \mathbf{C}^{(i-1)}\mathbf{W}^{V}\] States \(\mathbf{C}^{(i-1)}\) can be passed through a learnable function before and after the global attention Attn is applied. Positional encodings are commonly used with graph transformers to give every node a sense of where it is located in the graph. Positional encodings can come in the form of node encodings (Rampasek et al., 2022) that are essentially features added to the nodes before the attention block is applied or node-pair encodings, where each node-pair is endowed with features such as shortest-path distances (Ying et al., 2021). Node-pair encodings have the downside that the full attention matrix has to be materialized. In this case, one cannot profit from faster attention mechanisms (Rampasek et al., 2022) that scale better than \(\mathcal{O}(n^{2})\), making it practically infeasible. Here, we restrict ourselves to node encodings. **Theorem 3.1**.: _The 1-WL test is at least as powerful as GTs with positional encodings for nodes, e.g., GraphTrans (Wu et al., 2022) or GraphGPS (Rampasek et al., 2022), if node encodings are also provided as initial color classes for the 1-WL algorithm._ Proof sketch.: To prove that 1-WL is an upper bound in terms of expressiveness for GTs with node encodings, we first consider the color classes of a 1-WL execution on the fully-connected graph, instead of the original topology. As we input positional encodings as color classes to 1-WL, and we can reconstruct all attention scores from the 1-WL labels in every iteration, it becomes clear that 1-WL can simulate a GT. We then show that any two nodes with the same color class in a fully connected graph will stay in the same color class, meaning no more refinement of color classes is possible for a transformer layer. See proof on page 12. ### Edge-Feature-Aware 1-WL Algorithm We now present the edge-feature-aware 1-WL (1-WLE). At each iteration, the algorithm updates the node labels based on both the neighboring node labels and the edge labels of the connecting edges. Formally, the edge-feature-aware 1-WL is defined in Algorithm 3. 
``` 0:\(G=(V,E,\mathbf{X}^{V},\mathbf{X}^{E})\) \(c_{v}^{(0)}\gets hash(\mathbf{X}_{v}^{V})\ \forall v\in V\) for\(i\gets 1\)to (\(|V|-1\))do \(c_{v}^{(i)}\gets hash(c_{v}^{(i-1)},\{(\mathbf{X}_{(v,w)}^{E},c_{w}^{(i-1 )}):w\in\mathcal{N}(v)\})\) endfor return\(c_{v}^{i}\) ``` **Algorithm 3** Edge-Feature-Aware 1-WL (1-WLE) **Theorem 3.2**.: _The Edge-Feature-Aware 1-WL test is equivalent in power to a GIN with edge features, as proposed by Hu et al. (Hu et al., 2019)._ Proof sketch.: To show the equivalence between 1-WLE and GIN with edge features, one can extend the original proof for equivalence of 1-WL and GIN. What changes is that the aggregation now gets node states with additional edge labels, but an injective aggregation will still maintain this information. See proof on page 12. ### Equivalence to 2-WL Test We now provide a proof that our edge-feature-aware 1-WL extension is equivalent to the 2-WL test. First, for some simple operator, we show that 1-WLE over a graph with edge features is equivalent to 1-WL with the same graph when the operator is applied. Then, we point to the equivalence of 2-WL with 1-WL under the operator, and finally, show the equivalence of 2-WL over a graph with edge features, and the same graph under the given operator. Incidence graph operatorConsider the graphs in the form \(G=(V,E,\mathbf{X}^{V},\mathbf{X}^{E})\). We denote operator \(\mathcal{T}:G\to G\) as the incidence graph operator as follows: for the given input graph \(G\) with edge labels, \(\mathcal{T}\) creates a new node \(w\) for each edge \((u,v)\in E\) with the edge feature assigned as the node label, connects two ends of the edge to the node with new edges \((u,w),(w,v)\), and finally removes the original edge \((u,v)\). The final graph is also referred to as the "incidence graph" of graph G. In the output graph, there are no edge labels. For an example application, see Figure 2. **Theorem 3.3** (Equivalence of 1-WLE to 1-WL(\(\mathcal{T}\)(G))).: _1-WLE on graph G is equivalent to 1-WL over graph \(\mathcal{T}\)(G) so that there is an injective map \(f\) for which f(1-WLE(G)) = Figure 2: Incidence graph operator \(\mathcal{T}\) applied on the input graph on the right, outputs the final graph on the left with no edge features, and each edge converted to a node with the original edge label. ### Python Package: Graphtester We provide Graphtester to the research community in the form of a Python package that automates the process of evaluating graph datasets and positional encodings for their processing. This package provides an easy-to-use interface for practitioners, allowing them to perform in-depth analysis of their datasets and make informed decisions on the design of GNNs and graph transformers for their tasks. ### Data loading Graphtester admits datasets in various different formats such as PyG (Fey and Lenssen, 2019), DGL (Wang et al., 2019), or simply a list of NetworkX (Hagberg et al., 2008) graphs with associated labels. Internally, Graphtester converts them to igraph (Csardi and Nepusz, 2006) objects to efficiently run various isomorphism and labeling algorithms. The datasets analyzed in this paper can be simply loaded by their names. ### Running 1-WL Algorithm After preprocessing the input dataset and converting it to internal _Dataset_ format, Graphtester is able to run 1-WLE on all the graphs in the dataset efficiently until convergence. The final node labels then can be used to create graph-level, link-level and node-level hashes that stays stable across the dataset. 
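As a minimal illustration of the refinement loop in Algorithm 3, the sketch below runs an edge-feature-aware 1-WL on a NetworkX graph until the color partition stabilizes. It is a simplified stand-in for Graphtester's igraph-based implementation; the attribute names and the use of Python's built-in hash in place of an idealized injective hash are assumptions made for the example.

```python
import networkx as nx

def wl_refine(graph: nx.Graph, rounds: int = 5,
              node_attr: str = "label", edge_attr: str = "label") -> dict:
    """Edge-feature-aware 1-WL (Algorithm 3): repeatedly hash each node's color
    together with the multiset of (edge label, neighbor color) pairs."""
    colors = {v: hash(graph.nodes[v].get(node_attr)) for v in graph.nodes}
    for _ in range(rounds):
        new_colors = {}
        for v in graph.nodes:
            neigh = sorted((graph.edges[v, w].get(edge_attr), colors[w])
                           for w in graph.neighbors(v))
            new_colors[v] = hash((colors[v], tuple(neigh)))
        stabilized = len(set(new_colors.values())) == len(set(colors.values()))
        colors = new_colors
        if stabilized:  # refinement can only split classes, so equal counts mean convergence
            break
    return colors

# Toy molecule-like graph with node and edge labels
g = nx.Graph()
g.add_nodes_from([(0, {"label": "C"}), (1, {"label": "C"}), (2, {"label": "O"})])
g.add_edges_from([(0, 1, {"label": "single"}), (1, 2, {"label": "double"})])
print(wl_refine(g))
```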
In addition to 1-WL, Graphtester can run the \(k\)-WL algorithm for any \(k\leq 6\). To the knowledge of the authors, there are no other functional open-source implementations of \(k\)-WL that take \(k\) as an input parameter. Graphtester can also run the folklore variant of \(k\)-WL, which is more expressive than the default variant. ### Computing Score Upper Bounds For calculating the upper score bounds associated with a specified number of layers, we utilize the congruence of Graph Neural Networks (GNNs) and graph transformers with the edge-feature-aware 1-Weisfeiler-Lehman test (1-WLE), as established earlier in the paper. For each dataset and 1-WLE iteration \(k\geq 1\), Graphtester determines the hash values for every node within a particular dataset. From these node hashes, a graph hash can be calculated by hashing the lexicographically sorted hashes of all nodes in a graph. Similarly, link hashes are computed from the lexicographically sorted hashes of interconnected nodes. For classification measures such as F1 score and accuracy, Graphtester assigns a label to each node, graph or link hash based on the majority rule, following the methodology outlined by Zopf (Zopf, 2022). For regression measures like Mean Square Error (MSE), it assigns the value that minimizes the estimate, which most frequently happens to be the mean. A dataset can be loaded and evaluated through this approach in only a couple of lines of code.

```
import graphtester as gt

dataset = gt.load("ZINC")
evaluation = gt.evaluate(dataset)
print(evaluation.as_dataframe())
```

See Table 1 for the estimation of the maximum achievable target metrics for different datasets that have varying tasks and targets. We have estimated the best scores for these datasets in the presence of node and edge features separately by running the 1-WLE algorithm for up to 3 iterations. Considering that 1-WL converges in 2 iterations for nearly all graphs (Kiefer, 2020), we believe that our results paint a near-definitive picture of what is theoretically achievable on these datasets. Overall, a considerable number of datasets in the literature appear to be non-fully-solvable for the target metrics. For the ones that are solvable, often more than a single layer is required. One other important observation here is the need for using the available edge features in the 1-WL context to achieve better upper bounds -- note that this is not the same as combining edge features with node features as a pre-processing step and running a GNN over it, as required for some architectures that do not natively admit edge features (Xu et al., 2019), as shown in Theorem 3.7. ### Assessing the Impact of Additional Features Having evaluated the datasets with respect to the application of available node and edge features, an ensuing question emerges: can these upper bounds be enhanced, potentially improving overall GNN/Transformer performance on such graph datasets, by incorporating deterministic, pre-computed metrics derived from the literature? Such metrics have been examined in the context of both GNNs and Transformers, most notably as Subgraph Counting (Bouritsas et al., 2020) in the case of the former, and as positional encodings for the latter. For this purpose, Graphtester offers an interface for researchers to try out various metrics and their combinations in the context of the 1-WLE test to assess the potential upper bound improvements. A straightforward use case for this interface is to answer the question "how many dimensions of positional encoding do I need for my dataset?".
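Before answering that question on a concrete dataset, the following sketch (illustrative only, and not part of Graphtester's public API beyond the calls shown earlier) makes the majority-rule upper bound of the previous subsection concrete: it computes the best achievable node-classification accuracy when all nodes sharing a 1-WL hash must receive the same prediction.

```python
from collections import Counter, defaultdict

def accuracy_upper_bound(node_hashes, node_labels) -> float:
    """Best achievable accuracy when nodes with identical 1-WL hashes must be
    assigned the same label: predict the majority label within each hash class."""
    groups = defaultdict(list)
    for h, y in zip(node_hashes, node_labels):
        groups[h].append(y)
    correct = sum(Counter(ys).most_common(1)[0][1] for ys in groups.values())
    return correct / len(node_labels)

# Two structurally identical nodes (hash "a") carry conflicting labels,
# so at most one of them can be predicted correctly.
hashes = ["a", "a", "b", "c", "c", "c"]
labels = [0, 1, 1, 0, 0, 1]
print(accuracy_upper_bound(hashes, labels))  # (1 + 1 + 2) / 6 ~ 0.667
```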
To answer this question in the context of ZINC dataset, we analyzed the methods random-walk structural encoding (RWSE) and Laplacian eigenvectors encoding (LapPE) in Figure 4: Feature evaluation results for **ZINC** dataset from Graphtester framework for RWSE and LapPE encoding methods. ”Dimension count” refers to the number of eigenvectors used for LapPE method, and the walk steps evaluated in RWSE, both of which corresponds to the positional encoding dimensionality. The values are Mean Squared Error, for each node hash mapping to their graph’s label. Lower is better. Figure 3: An overview of the Graphtester framework. Overall, the package has four major components: preprocessing, 1-WLE algorithm, dataset evaluation and feature evaluation. identifiability of each graph through individual node hashes. The results can be seen in Figure 4. It is somewhat surprising to note that although RWSE highly benefits from increased step counts, LapPE provides optimal results even in a single iteration for ZINC dataset. Another possible use case for feature evaluation might appear to be to choose the encoding method that brings the most benefits in the target upper bound score. However, such use of 1-WL framework to improve GNN performance does not seem to have a strong basis. Indeed, Figure 4 is a counter-example here: although LapPE provides better upper bound scores in node-based matching of graphs to their labels, RWSE have proven to be the better encoding in GraphGPS study (Yuan et al., 2021). A possible reason for the mismatch is also noted in the referenced work, and is sourced from the domain mismatch of LapLE in molecular datasets. It is argued that improved node-based identify only leads the network to overfit to the spurious variance introduced by the positional encoding. We recommend to choose the encoding method according to the domain of the task, but possibly tune the parameters of the encoding via GraphHester, ideally choosing the minimal encoding that provides a sufficient level of identifiability. ### Injecting Arbitrary Features into Training Pipelines After analyzing a dataset and evaluating potential positional encodings, labeling methods implemented in GraphHester package can easily be embedded into the training pipeline. For this purpose, we expose a pretransform method in the package itself, that can admit different feature names, and be provided to PyG dataset objects to transform datasets before the training. ### Performance on Real-World Datasets GraphHester implements various classical centrality metrics and positional encoding methods for graph dataset analysis and pre-transform. To demonstrate that GraphHester features can be straightforwardly used in practice and even the simplest centrality metrics may carry value, we add them as positional encodings to GraphGPS (Rampasek et al., \begin{table} \begin{tabular}{l l l l l l l l l l l l l l l l l} \hline \hline Dataset name & Task & Metric & \multicolumn{4}{c}{without features} & \multicolumn{4}{c}{whole features} & \multicolumn{4}{c}{wildgle features} & \multicolumn{4}{c}{without features} \\ \cline{3-14} & & & 0 & 1 & 2 & 3 & 0 & 1 & 2 & 3 & 1 & 2 & 3 & 1 & 2 & 3 \\ \hline AIDS (Morris et al., 2020) & Graph Cl. & Accuracy \(\uparrow\) & 0.9985 & 0.9985 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 0.9995 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ Amacose(Ghay et al., 2015) & Node-Cl. 
& Accuracy \(\uparrow\) & 0.3751 & 0.7795 & 0.9777 & 0.9871 & 0.9966 & 1.0 & 1.0 & 1.0 & 0.9903 & 0.9903 & 1.0 & 1.0 & 1.0 \\ RZK (Morris et al., 2020) & Graph Cl. & Accuracy \(\uparrow\) & 0.6503 & 0.6503 & 0.6503 & 0.6503 & 0.6503 & 0.6503 & 0.6503 & 0.6503 & 0.6502 & 0.6502 & 0. 2022) and measure the performance of the resulting model. After the features are generated for nodes, we apply one linear layer to encode them together with input node features. Results for seven Graphtester features are summarized in Table 2. We can observe that all tested features can get relatively close to the best encodings on MNIST, CIFAR10, and PATTERN, even beating the best encodings on CIFAR10. ### A Synthetic Dataset for Benchmarking of Node and Edge Features Additional features provide a way for GNNs and GTs to identify nodes in a graph. Our findings reinforce the efficacy of these methods in enhancing node and graph identifiability. Nevertheless, there remains a notable gap in the existing literature: the absence of a dataset specifically designed to benchmark features such as positional encoding methods based on their performance across different graph classes. As the final contribution of this work, we address this shortfall by introducing a synthetic graph dataset that is provided as part of the Graphtester package. This dataset uniquely serves as a rigorous testing ground for the effectiveness of node and edge pre-coloring methods within the 1-Weisfeiler-Lehman (1-WL) framework. Using Graphtester framework, researchers can label this dataset with an arbitrary feature encoding of their own, and evaluate it to acquire its comparative standing with respect to other pre-coloring methods in the literature, as well as \(k\)-WL test for \(k\geq 2\). An overview of the graph classes contained in the Graphtester dataset, as well as the definitions of these graph classes can be found in Appendix B. As a baseline, we evaluated the methods available in the Graphtester framework against the dataset. We report the results for noteworthy graph classes and pre-coloring methods in Table 3. Refer to the Appendix C for a discussion on some of the results in this table, and how to overcome the difficulties of identifying graphs in 1-WL framework. ## 5 Conclusion This paper introduces Graphtester, a powerful tool designed for in-depth theoretical analysis of Graph Neural Networks (GNNs) and graph transformers. Graphtester has demonstrated its capability to discern the upper bounds of performance across various datasets, taking into account aspects like edge features, different performance measures, varying numbers of GNN layers, and higher-order Weisfeiler-Lehman (WL) tests. Together with the package, we make public a 55,000-graph synthetic dataset for the purpose of benchmarking positional encoding methods, that contains many graphs that are hard to distinguish in \(k\)-WL framework. We have also established critical theoretical insights regarding GNNs and graph transformers, proving that the latter's power is bounded by the 1-WL test if positional encodings are only used for nodes, and placed them on a theoretical basis in the presence of edge features. This underscores the fundamental role of positional encodings in amplifying the expressive power of these models. A key aspect of our work has been the comprehensive analysis of over 40 graph datasets from the literature using Graphtester. 
This extensive study has revealed that not all datasets are fully solvable with the tasks at hand, pointing to inherent complexities in graph data that may challenge even state-of-the-art GNN architectures. Furthermore, we found that even when a dataset is theoretically solvable, the effective use of edge features is vital. Our theoretical analysis underscores that edge features, when appropriately incorporated, can substantially enhance the expressiveness and performance of GNNs. Overall, Graphtester not only advances our theoretical understanding of GNNs and graph transformers but also offers practical guidance for their optimal deployment across a variety of tasks and datasets. Future work will aim to extend the capabilities of Graphtester to accommodate different graph dataset formats and tasks, and delve deeper into the role of positional encodings. \begin{table} \begin{tabular}{l c c c} \hline \hline **Feature** & **MNIST**\(\uparrow\) & **CIFAR10**\(\uparrow\) & **PATTERN**\(\uparrow\) \\ \hline Best GPS & 98.051 \(\pm\) 0.126 & 72.298 \(\pm\) 0.356 & 86.685 \(\pm\) 0.059 \\ \hline Local transitivity & 98.016 \(\pm\) 0.054 & 72.466 \(\pm\) 0.273 & 85.979 \(\pm\) 0.179 \\ Eccentricity & 98.030 \(\pm\) 0.148 & 71.830 \(\pm\) 0.695 & 85.805 \(\pm\) 0.405 \\ Eigenvector centrality & 97.996 \(\pm\) 0.084 & 72.514 \(\pm\) 0.268 & 86.234 \(\pm\) 0.215 \\ Burt’s constraint & 97.890 \(\pm\) 0.072 & 72.456 \(\pm\) 0.421 & 86.302 \(\pm\) 0.150 \\ Closeness centrality & 97.890 \(\pm\) 0.070 & 73.006 \(\pm\) 0.380 & 85.959 \(\pm\) 0.077 \\ Betweenness centrality & 97.956 \(\pm\) 0.067 & 72.140 \(\pm\) 0.538 & 86.267 \(\pm\) 0.288 \\ Harmonic centrality & 97.974 \(\pm\) 0.156 & 72.282 \(\pm\) 0.314 & 85.889 \(\pm\) 0.060 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of seven simple Graphtester features when used as positional encodings for GraphGPS. We keep the same setup as the best GPS model, only changing the positional encodings. Mean and standard deviation are reported over five runs each. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{graph classes} & \multicolumn{1}{c}{**Ad**} & \multicolumn{1}{c}{**Highly**} & \multicolumn{1}{c}{**Negative**} & \multicolumn{4}{c}{**Strongly**} \\ \cline{2-13} \multicolumn{1}{c}{**non-self**} & \multicolumn{1}{c}{7} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{9} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{11} & \multicolumn{1}{c}{12} & \multicolumn{1}{c}{13} & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{25} & \multicolumn{1}{c}{28} & \multicolumn{1}{c}{29} & \multicolumn{1}{c}{36} & \multicolumn{1}{c}{49} \\ \hline 1-WL1-WL2-WL3 & 4 & 2 & 30 & 0 & 8 & 0 & 165 & 0 & 1 & 1 & 1 & 45 & 6 & 6 & 320 & 1610 & 375 \\ 3-WL3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 45 & 6 & 6 & 6 & 320 & 1610 & 375 \\ 3-WL3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline Chinese causality & 2 & 14 & 20 & 1 & 0 & 2 & 0 & 1 & 1 & 1 & 1 & 45 & 6 & 6 & 6320 & 1610 & 375 \\ Eigenvector centrality & 2 & 30 & 35 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 45 & 6 & 6 & 320 & 1610 & 375 \\ Human causality & 2 & 14 & 30 & 2 & 0 & 2 & 0 & 3 & 0 & 1 & 1 & 45 & 6 & 6 & 320 & 1610 & 375 \\ Bi
2309.10568
Hierarchical Graph Modeling for Multi-Scale Optimization of Power Systems
Hierarchical optimization architectures are used in power systems to manage disturbances and phenomena that arise at multiple spatial and temporal scales. We present a graph modeling abstraction for representing such architectures and an implementation in the ${\tt Julia}$ package ${\tt Plasmo.jl}$. We apply this framework to a tri-level hierarchical framework arising in wholesale market operations that involves day-ahead unit commitment, short-term unit commitment, and economic dispatch. We show that graph abstractions facilitate the construction, visualization, and solution of these complex problems.
David L. Cole, Harsha Gangammanavar, Victor M. Zavala
2023-09-19T12:25:41Z
http://arxiv.org/abs/2309.10568v1
# Hierarchical Graph Modeling for ###### Abstract Hierarchical optimization architectures are used in power systems to manage disturbances and phenomena that arise at multiple spatial and temporal scales. We present a graph modeling abstraction for representing such architectures and an implementation in the Julia package Plasmo.jl. We apply this framework to a tri-level hierarchical framework arising in wholesale market operations that involves day-ahead unit commitment, short-term unit commitment, and economic dispatch. We show that graph abstractions facilitate the construction, visualization, and solution of these complex problems. Graph Theory, Hierarchical Optimization, Multiscale, Power Systems ## I Introduction Hierarchical optimization architectures are used in power systems (and many other industrial systems) for managing operations, disturbances, and phenomena that arise at multiple spatial and temporal scales. These architectures involve multiple decision-making layers where decisions of higher layers influence or inform lower layers (and vice versa); for example, market operations often involve the solution of a unit commitment (UC) problem whose solution informs an economic dispatch (ED) problem [1]. Hierarchical decomposition is often necessary for enabling scalable implementation (e.g., solving a combined UC/ED problem in real-time might be impossible) and for providing intuitive decomposition of functionalities (which can aid explainability). Capturing the unique characteristics of optimization problems in different hierarchical layers (i.e., space/time resolution, data, variables, objectives, constraints) and their hierarchical coupling is essential for enabling decision-making consistency across scales. This has motivated research in models and solution approaches that aim to identify how to best design hierarchical architectures to manage diverse types of features (e.g., identify the number of layers, resolutions, and decisions made by each layer). For example, Atakan and co-workers [2] presented a stochastic optimization framework that consists of a tri-level hierarchy of market operations (day-ahead UC, short-term UC, and ED) that aims to handle high renewable penetration. The authors demonstrate that the hierarchical framework provides significant operational improvement over competing architectures. Guo and co-workers [3] used a hierarchical architecture for decentralizing ED of a large power networks; this was a tri-level architecture where local, clustered agents (lowest layer) inform leader agents (middle layer), which in turn inform a coordinating agent (top layer). Kong and co-workers [4] proposed a hierarchical architecture for a network of electric vehicle charging stations connected to the grid; the formulation considers the placement of stations, the allocation of resources, and the operation policy of the stations on three separate hierarchical layers. They found that the framework provided improved system performance and quality of service. As power systems become increasingly complex (e.g., they include new assets and face new disturbances), it will become necessary to have modeling and solution tools that enable the seamless construction, evaluation, and benchmarking of different hierarchical architectures. In this work, we propose a graph-based modeling framework for representing hierarchical optimization structures arising in power system operations. 
The use of graphs to model structured optimization problems has been recently explored [5, 6, 7, 8, 9]. A variety of tools for exploiting graph and graph-like structures are also available in open-source packages such as Plasmo.jl (in Julia) [5] and Pyomo (in Python) [10, 11]. In this work, we focus on the use of Plasmo.jl; this package uses an OptiGraph abstraction, where nodes of the graph contain optimization subproblems (with their own objective functions, data, variables, and constraints) and where edges capture connectivity (constraints) across subproblems. The OptiGraph abstraction is flexible in that nodes can contain subproblems of different granularity; moreover, the abstraction enables the creation of hierarchical structures (a node can be a graph itself). The graph abstraction provides the ability to build complex structures in a modular manner (e.g., node by node) and the ability to visualize, decompose, and aggregate the overall problem graph. We provide a case study show how the graph representation can be used for expressing and solving complex hierarchical problems arising in power systems. ## II Graph-Based Modeling Overview Plasmo.jl is a Julia package that models general optimization problems as hypergraphs. This package has been described in detail by Jalving and co-workers [5], but here we provide a short overview of how this can be used for representing hierarchical problems. Plasmo.jl is built on an abstraction called OptiGraphs, which are graphs containing OptiNodes (\(\mathcal{N}\)) and OptiEdges (\(\mathcal{E}\)). OptiNodes contain subproblems (with their own variables, constraints, data, and objective functions), and OptiEdges are linking constraints that capture connectivity between Optinodes. We denote an OptiGraph as \(\mathcal{G}(\mathcal{N},\mathcal{E})\), where \(\mathcal{N}(\mathcal{G})\) is the set of OptiNodes in \(\mathcal{G}\) and \(\mathcal{E}(\mathcal{G})\) is the set of OptiEdges in \(\mathcal{G}\). A visualization of an OptiGraph containing three OptiNodes is shown in Figure 1. The optimization model associated with an OptiGraph can be represented as \[\begin{split}\min_{\{x_{n}\}_{n\in\mathcal{N}(\mathcal{G})}}& \sum_{n\in\mathcal{N}(\mathcal{G})}f_{n}(x_{n})\\ \text{s.t.}& x_{n}\in\mathcal{X}_{n},\quad n\in \mathcal{N}(\mathcal{G})\\ & g_{e}(\{x_{n}\}_{n\in\mathcal{N}(e)})\geq 0,\quad e\in\mathcal{E}( \mathcal{G})\end{split} \tag{1}\] where \(\mathcal{N}(e)\) is the set of OptiNodes that support OptiEdge \(e\). The notion of nodes and edges is highly flexible in this abstraction; for instance, in a power system context, a node can represent a spatial location, time instance, a specific asset, or an entire network. Moreover, each node can have its own independent features (e.g., data, objective functions, constraints). The edges (containing the constraints \(g_{e}\)) can be used to link nodes across time, space, or hierarchical layers. OptiGraphs enable hierarchical representations and modular model building via the use of subgraphs. Specifically, within Plasmo.jl, an OptiGraph can be embedded in another OptiGraph as a node. For example, consider the OptiGraphs \(\mathcal{G}_{i}\) and \(\mathcal{G}_{j}\), each with an independent set of OptiNodes and OptiEdges. We can consider these OptiGraphs as low-level graphs (also referred to as subgraphs) that can be used to build a higher-level OptiGraph which we denote by \(\mathcal{G}(\{\mathcal{G}_{i},\mathcal{G}_{j}\},\mathcal{N}_{g},\mathcal{E}_{ g})\). 
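The following is a schematic illustration of this abstraction, written in Python purely for exposition; Plasmo.jl itself is a Julia package, and the class and field names here are our own rather than its API. Nodes carry their own variables, objective, and local constraints; edges carry linking constraints over the nodes they support; and graphs can nest lower-level subgraphs.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class OptiNode:
    """A node holds its own variables, objective, and local constraints (X_n, f_n)."""
    name: str
    variables: Dict[str, float] = field(default_factory=dict)
    objective: Callable[[Dict[str, float]], float] = lambda x: 0.0
    local_constraints: List[Callable[[Dict[str, float]], bool]] = field(default_factory=list)

@dataclass
class OptiEdge:
    """An edge holds a linking constraint g_e over the variables of the nodes it supports."""
    nodes: List[str]
    linking_constraint: Callable[[Dict[str, Dict[str, float]]], bool] = lambda x: True

@dataclass
class OptiGraph:
    """A graph aggregates nodes, edges, and (optionally) lower-level subgraphs."""
    nodes: Dict[str, OptiNode] = field(default_factory=dict)
    edges: List[OptiEdge] = field(default_factory=list)
    subgraphs: List["OptiGraph"] = field(default_factory=list)

    def all_nodes(self) -> Dict[str, OptiNode]:
        collected = dict(self.nodes)
        for sub in self.subgraphs:
            collected.update(sub.all_nodes())
        return collected

    def total_objective(self) -> float:
        # Mirrors the sum over n in N(G) of f_n(x_n) in Eq. (1).
        return sum(n.objective(n.variables) for n in self.all_nodes().values())
```

Minimizing the summed node objectives subject to all local and linking constraints recovers formulation (1).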
The set of nodes \(\mathcal{N}_{g}\) are contained on \(\mathcal{G}\) and are separate from \(\mathcal{N}(\mathcal{G}_{i})\) and \(\mathcal{N}(\mathcal{G}_{j})\), such that \(\mathcal{N}_{g}=\mathcal{N}(\mathcal{G})/\{\mathcal{N}(\mathcal{G}_{i})\cup \mathcal{N}(\mathcal{G}_{j})\}\). Similarly, \(\mathcal{E}_{g}=\mathcal{E}(\mathcal{G})/\{\mathcal{E}(\mathcal{G}_{i})\cup \mathcal{E}(\mathcal{G}_{j})\}\), meaning \(\mathcal{E}_{g}\) may connect nodes across \(\mathcal{N}_{g}\), \(\mathcal{N}(\mathcal{G}_{i})\), and/or \(\mathcal{N}(\mathcal{G}_{j})\). The OptiGraph \(\mathcal{G}\) may also be placed in another higher-level OptiGraph \(\mathcal{G}^{\prime}(\{\mathcal{G}\},\mathcal{N}_{g}^{\prime},\mathcal{E}_{g}^ {\prime})\). Any subgraph can also be collapsed into a single OptiNode (containing the entire problem of the subgraph); this is useful for visualizing hierarchies. For instance, the hierarchical setting allows us to capture how assets can be aggregated in space (e.g., assets can be embedded at a network location), or how multiple time points can be embedded in another time point (e.g., multiple 5-min time periods can be embedded in an hour). This feature is key for representing hierarchical structures that span multiple scales. The general approach for representing hierarchical problems as graphs is illustrated in Figure 2. This is a bi-level hierarchical problem; the top-level OptiGraph is given by \(\mathcal{G}_{1}(\{\mathcal{G}_{1,1},\mathcal{G}_{1,2}\},\emptyset,\mathcal{E}_ {1})\), the lower-level OptiGraph by \(\mathcal{G}_{2}(\{\mathcal{G}_{2,1},\mathcal{G}_{2,2},\mathcal{G}_{2,3}, \mathcal{G}_{2,4}\},\emptyset,\mathcal{E}_{2})\), and the overall OptiGraph by \(\mathcal{G}_{0}(\{\mathcal{G}_{1},\mathcal{G}_{2}\},\emptyset,\mathcal{E}_{0})\), where \(\mathcal{E}_{0}\) are the constraints linking the solutions of the upper and lower layers. OptiGraphs enable flexible partitioning of hierarchical structures, and this can be used to implement different solution approaches. For instance, it has been recently shown that graph structures facilitate the development of decomposition algorithms [5, 6, 9, 12, 13, 14]. To be specific, any OptiNode or subgraph can be treated as an individual optimization problem; for example, the OptiGraph presented in Figure 2 can be solved in at least three different ways (each likely resulting in different solutions): i) \(\mathcal{G}_{0}\) could be solved as a single monolithic problem; ii) \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) can be solved sequentially with the solution of \(\mathcal{G}_{1}\) passed via \(\mathcal{E}_{0}\); iii) \(\mathcal{G}_{1,1}\) and \(\mathcal{G}_{1,2}\) can be solved sequentially with the solution of \(\mathcal{G}_{1,1}\) passed via \(\mathcal{E}_{1}\). Then those solutions to \(\mathcal{G}_{1}\) can be passed via \(\mathcal{E}_{0}\) to \(\mathcal{G}_{2}\), and \(\mathcal{G}_{2,1}\), \(\mathcal{G}_{2,2}\), \(\mathcal{G}_{2,3}\), and \(\mathcal{G}_{2,4}\) can be solved sequentially, with solutions passed via \(\mathcal{E}_{2}\). We can thus see that the OptiGraph abstraction offers significant flexibility in modeling and solving hierarchical problems. ## III Case Study ### _Problem Overview_ We consider the tri-level problem proposed in [2] for capturing coupling in market operations (see Figure 3). Each layer is composed of subproblems at different timescales and these are linked to subproblems in other layers. 
The top layer is a day-ahead unit commitment (DA-UC) problem that schedules a subset of conventional (non-renewable) generators (denoted as \(\Gamma_{c}^{d}\)). The DA-UC layer has a 1-hour resolution and a 24-hour horizon, and an entire time horizon is partitioned into periods of 24 hours (DA-UC is solved every 24 hours). The second layer includes a short-term unit commitment (ST-UC) problem; this schedules a subset of conventional generators Fig. 1: OptiGraph abstraction used in Plasmo.jl. Fig. 2: OptiGraph abstraction of bi-level problem. (denoted as \(\Gamma_{c}^{s}\)), such that \(\Gamma_{c}^{d}\cap\Gamma_{c}^{s}=\emptyset\), while also incorporating the commitment decisions of the DA-UC subproblems. The ST-UC layer has a 15-min resolution with subproblems containing a 4-hour horizons and solved every 3 hours (there is overlap). The bottom layer is an hour-ahead economic dispatch (HA-ED) layer which determines the generation levels for units committed in the DA-UC and ST-UC layers. The HA-ED subproblems have a 15-minute resolution and a 75-min time horizon, and are solved every 15 minutes (there is overlap). Thus, for a given day, there are: 1 DA-UC subproblem, 8 ST-UC subproblems, and 96 HA-ED subproblems (12 for each ST-UC subproblem). We highlight that this architecture is just one design (of many possible ones). In other words, one could design diverse hierarchical architectures (e.g., experimenting with the types of variables, resolutions, and time horizons that each layer uses). The detailed model can be found in [2]; here, we provide a high-level perspective to illustrate how complex models are embedded in the different layers and how coupling arises between layers. We use the sets \(\Gamma\) for the set of all generators, \(\Gamma_{r}\) for the set of renewable generators, and define \(\Gamma_{c}^{h}=\Gamma_{c}^{d}\cup\Gamma_{c}^{s}\). We also use \(*\) and \(**\) to define variables, sets, or functions corresponding to a layer; here, the symbols \(*\) or \(**\) are exchanged for \(d\), \(s\), or \(h\) to denote the DA-UC, ST-UC, or HA-ED layers, respectively. As each subproblem considers different sets of times, we use \(\mathcal{T}_{i}^{*}\) for the set of times (in hours) of the \(i\)th subproblem of the \(*\) layer. We also define the sets \(\mathcal{\bar{T}}_{i}^{*}\) as the set of times without the first time point of the subproblem. We define \(\Delta^{*}\) as the time step for the subproblem in hours(\(\Delta^{d}=1\), \(\Delta^{s}=0.25\), and \(\Delta^{h}=0.25\)). Note that we set \(\Delta^{s}=\Delta^{h}\), and this has a small influence on the formulation of the problem presented below. 
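To keep the multiscale bookkeeping explicit, the short Python sketch below (the helper name is ours) reproduces the subproblem counts implied by these horizons, resolutions, and solve frequencies.

```python
def subproblems_per_day(horizon_h, resolution_h, solve_every_h, day_h=24.0):
    """Number of subproblem instances per day and time points per instance."""
    count = int(day_h / solve_every_h)
    periods = int(horizon_h / resolution_h)
    return count, periods

layers = {
    "DA-UC": subproblems_per_day(horizon_h=24.0, resolution_h=1.0,  solve_every_h=24.0),
    "ST-UC": subproblems_per_day(horizon_h=4.0,  resolution_h=0.25, solve_every_h=3.0),
    "HA-ED": subproblems_per_day(horizon_h=1.25, resolution_h=0.25, solve_every_h=0.25),
}
# -> DA-UC: (1, 24), ST-UC: (8, 16), HA-ED: (96, 5); 1 + 8 + 96 = 105 subproblems per day.
```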
The decision variables are given by: \[\mathbf{x}_{i}^{d} =\big{(}x_{g,t},s_{g,t},z_{g,t}\big{)}_{\forall g\in\Gamma_{c}^{ d},t\in\mathcal{T}_{i}^{d}}\] \[\mathbf{y}_{i}^{d} =\Big{(}(G_{g,t}^{+,d},G_{g,t}^{-,d})_{\forall g\in\Gamma_{c}^{ d}\cup\Gamma_{r}},(F_{j,k,t}^{d})_{\forall(j,k)\in\mathcal{L}},\] \[\quad(D_{i,t}^{d},\theta_{j,t}^{d})_{\forall j\in\mathcal{B}} \Big{)}_{\forall t\in\mathcal{T}_{i}^{d}}\] \[\mathbf{x}_{i}^{s} =\big{(}x_{g,t},s_{g,t},z_{g,t}\big{)}_{\forall g\in\Gamma_{c}^{ s},t\in\mathcal{T}_{i}^{s}}\] \[\mathbf{y}_{i}^{s} =\Big{(}(G_{g,t}^{+,s},G_{g,t}^{-,s})_{\forall g\in\Gamma_{c}^{ s}\cup\Gamma_{r}},(F_{j,k,t}^{s})_{\forall(j,k)\in\mathcal{L}},\] \[\quad\quad\quad(D_{j,t}^{s},\theta_{j,t}^{s})_{\forall j\in \mathcal{B}}\Big{)}_{\forall t\in\mathcal{T}_{i}^{s}}\] \[\mathbf{y}_{i}^{h} =\Big{(}(G_{g,t}^{+,h},G_{g,t}^{-,h})_{\forall g\in\Gamma_{c}^{ h}\cup\Gamma_{r}},(F_{j,k,t}^{h})_{\forall(j,k)\in\mathcal{L}},\] \[\quad\quad\quad(D_{j,t}^{h},\theta_{j,t}^{h})_{\forall j\in \mathcal{B}}\Big{)}_{\forall t\in\mathcal{T}_{i}^{s}}\] Symbols \(\mathbf{x}_{i}^{d}\) and \(\mathbf{y}_{i}^{d}\) are binary and continuous decision variables for the \(i\)th subproblem of DA-UC, \(\mathbf{x}_{i}^{s}\) and \(\mathbf{y}_{i}^{s}\) are decision variables for the \(i\)th subproblem of ST-UC, and \(\mathbf{y}_{i}^{h}\) are decision variables for the \(i\)th subproblem o HA-ED. Symbols \(x_{g,t}\), \(s_{g,t}\), and \(z_{g,t}\) are binary variables indicating for time \(t\) whether generator \(g\) is on/off, was turned on, or was turned off. \(G_{g,t}^{+,*}\) is the power of generator \(g\) consumed by the grid at time \(t\) and \(G_{g,t}^{-,*}\) is the power overgenerated (for conventional generators) or curtailed (for renewable generators) from \(g\) at time \(t\). \(F_{j,k,t}^{*}\) is the power flow of transmission line \((j,k)\) from bus \(j\) to bus \(k\) during time \(t\). \(D_{j,t}^{*}\) is the amount of load shed at bus \(j\) for time \(t\), and \(\theta_{j,t}^{*}\) is the bus angle for bus \(j\) at time \(t\). We also use black, red, and blue color to denote variables for DA-UC, ST-UC, and HA-ED, respectively. The objective functions in the different layers are comprised of a UC part, \(f_{u}\), and an ED part, \(f_{e}\): \[f_{u}(\mathbf{x}_{i}^{*}):=\sum_{t\in\mathcal{T}_{i}^{*}}\sum_{g\in \Gamma_{c}^{s}}\phi_{g}^{*}s_{g,t}+\phi_{g}^{f}x_{g,t}\Delta^{*} \tag{2}\] \[f_{e}(\mathbf{y}_{i}^{*}):= \sum_{t\in\mathcal{T}_{i}^{*}}\Big{(}\sum_{g\in\Gamma_{c}^{s}} \phi_{g}^{v}G_{g,t}^{+,*}+\sum_{j\in\mathcal{B}}\Big{(}\phi_{j}^{u}D_{j,t}^{*}+ \tag{3}\] \[\sum_{g\in\Gamma_{j}\cap\Gamma_{c}^{s}}\phi_{g}^{o}G_{g,t}^{-,*}+ \sum_{g\in\Gamma_{j}\cap\Gamma_{r}}\phi_{g}^{c}G_{g,t}^{-,*}\Big{)}\Big{)}\] The function \(f_{u}\) accounts for the startup cost \(\phi_{g}^{s}\) and the no-load cost \(\phi_{g}^{f}\) for generator \(g\) at time \(t\). The no-load cost is multiplied by \(\Delta^{*}\) since the DA-UC and ST-UC levels have different time resolutions. The function \(f_{e}\) accounts for the variable cost, \(\phi_{g}^{v}\), of the energy consumed by the grid, the cost of overgeneration, \(\phi_{g}^{o}\), and the cost of curtailment, \(\phi_{g}^{c}\), for generator \(g\) at time \(t\). It also accounts for the cost of unmet demand \(\phi_{j}^{u}\). 
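As a small numerical illustration of (2), with dictionary-based inputs of our own choosing rather than the authors' code, the unit-commitment cost \(f_u\) can be evaluated as follows; \(f_e\) in (3) follows the same pattern with the variable-generation, overgeneration, curtailment, and unmet-demand terms.

```python
def uc_cost(on, started, startup_cost, no_load_cost, dt):
    """f_u of Eq. (2): start-up plus no-load cost over all generators g and periods t."""
    return sum(
        startup_cost[g] * started[g][t] + no_load_cost[g] * on[g][t] * dt
        for g in on
        for t in range(len(on[g]))
    )

# Two generators over three periods at a 1-hour resolution (DA-UC):
on      = {"g1": [1, 1, 1], "g2": [0, 1, 1]}   # x_{g,t}
started = {"g1": [1, 0, 0], "g2": [0, 1, 0]}   # s_{g,t}
cost = uc_cost(on, started,
               startup_cost={"g1": 100.0, "g2": 80.0},
               no_load_cost={"g1": 10.0, "g2": 8.0}, dt=1.0)
# cost = 100 + 80 + 10*3 + 8*2 = 226
```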
We next define constraints for the layers: \[c_{d}(\mathbf{y}_{i}^{*}):= \Big{(}\sum_{j\in\mathcal{B}:(j,k)\in\mathcal{L}}F_{j,k,t}^{*}- \sum_{j\in\mathcal{B}:(k,j)\in\mathcal{L}}F_{k,j,t}^{*}+ \tag{4}\] \[\sum_{g\in\Gamma_{k}}G_{g,t}^{+,*}+D_{k,t}^{*}-\hat{D}_{k,t}^{*}- \hat{R}_{k,t}^{*}\Big{)}_{k\in\mathcal{B},t\in\mathcal{T}_{i}^{*}}\] This requires that, at each time point, the flows coming into the bus, the power consumed by the grid, and the amount of unmet demand is equal to the demand, \(\hat{D}_{k,t}^{*}\), and the reserve requirements, \(\hat{R}_{k,t}^{*}\) for \(k\in\mathcal{B}\). The power flow equations use a DC approximation: \[c_{f}(\mathbf{y}_{i}^{*}):=\big{(}F_{j,k,t}^{*}-B_{j,k}(\theta_{j,t}^{*}-\theta_{k,t}^ {*})\big{)}_{(j,k)\in\mathcal{L},t\in\mathcal{T}_{i}^{*}} \tag{5}\] The renewable resources are also restricted to a specific value; this is enforced by constraint \(c_{r}\), where \(\hat{G}_{g,t}^{*}\) is the amount of power produced by renewable generator \(g\) at time \(t\): \[c_{r}(\mathbf{y}_{i}^{*}):=\Big{(}G_{g,t}^{+,*}+G_{g,t}^{-,*}-\hat{G}_{g,t}^{*}\Big{)} _{g\in\Gamma_{r},t\in\mathcal{T}_{i}^{*}} \tag{6}\] Figure 3: The tri-level hierarchical architecture of Atakan et al. [2] (Reproduced with permission from Elsevier). The ramp-up and ramp-down constraints (\(c_{ru}\) and \(c_{rd}\)) have different forms because DA-UC and ST-UC/HA-ED have different time resolutions. Ramping constraints containing start-up and shut-down constraints are: \[\begin{split} c_{su}(\mathbf{y}_{i}^{*},\mathbf{x}_{j}^{**}):=& \left(G_{g,t}^{+,*}+G_{g,t}^{-,*}-G_{g,t-\Delta^{*}}^{+,*}-G_{g,t- \Delta^{*}}^{-,*}-\right.\\ &\left.\left(\overline{S}_{g}-\overline{R}_{g}\Delta^{**}- \underline{C}_{g}\right)\right.\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! For each node representing line \((i,j)\in\mathcal{L}\), edges (linking constraints) are also placed connecting to bus \(i\) and to bus \(j\). The resulting subgraph is shown in Figure 4. The DA-UC, ST-UC, and HA-ED subproblems were constructed from these time point subgraphs. The DA-UC subgraph has 24 time point subgraphs (i.e., 24 replicates of the network shown in Figure 4) each representing one hour, the ST-UC subgraph had 16 time point subgraphs with each representing 15 minutes, and the HA-ED subgraph had 5 time point subgraphs with each representing 15 minutes. Linking constraints were also placed between time point subgraphs where applicable, such as for \(c_{ru}\), \(c_{rd}\), (11e) - (11g), or (12h) - (12j). 
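Structurally, each time-point subgraph and its replication across time can be sketched as follows. This uses plain Python with networkx and captures only the connectivity, not the Plasmo.jl model itself; the linking edges between consecutive time points stand in for the ramping constraints.

```python
import networkx as nx

def time_point_subgraph(buses, lines):
    """One node per bus and per transmission line; each line node is linked to its two buses."""
    g = nx.Graph()
    for b in buses:
        g.add_node(("bus", b))
    for (i, j) in lines:
        g.add_node(("line", i, j))
        g.add_edge(("line", i, j), ("bus", i))  # flow F_ij enters the balance at bus i ...
        g.add_edge(("line", i, j), ("bus", j))  # ... and at bus j
    return g

def subproblem_graph(buses, lines, n_time_points):
    """Concatenate time-point subgraphs and link consecutive ones (e.g., ramping constraints)."""
    g = nx.Graph()
    for t in range(n_time_points):
        tp = nx.relabel_nodes(time_point_subgraph(buses, lines), lambda n, t=t: (t, *n))
        g = nx.compose(g, tp)
        if t > 0:
            for b in buses:  # stand-in for c_ru / c_rd links between adjacent time points
                g.add_edge((t - 1, "bus", b), (t, "bus", b))
    return g
```

The day-level graph described next is obtained by composing such subproblem graphs and adding the inter-layer linking edges.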
After subproblem subgraphs were created, the subproblems were combined onto another OptiGraph corresponding to one day of operation. A single day graph contains one DA-UC subgraph, 8 ST-UC subgraphs, and 96 HA-ED subgraphs (105 subgraphs in total). Figure 5 shows the complexity of the resulting graph. With the single-day graph formed, subproblems are linked together according to the formulations given in (11), (12), and (13). Figure 6 shows an example of this linking for part of an ST-UC subproblem and one HA-ED subproblem. The linking constraints are highlighted in black; these constraints correspond to \(c_{su}\) and \(c_{sd}\) in (13e) and (13f) and to the linking constraints in (13i) and (13k). The full problem graph is shown in Figure 6(a), with accompanying representations highlighting the hierarchical structure. The full graph contains 192,432 OptiNodes and 292,587 OptiEdges. Subgraphs can be collapsed or aggregated into OptiNodes without changing the problem formulation and this facilitates visualization. Figure 6(b) shows the graph with all time subgraphs aggregated into nodes. Figure 6(c) shows all subproblem subgraphs aggregated into nodes; this reveals the hierarchical structure and the linking between layers, where the central black node corresponds to the single-day DA-UC subproblem (top layer), the red nodes correspond to the 8 ST-UC subproblems (middle layer), and the blue nodes correspond to the 96 HA-ED subproblems (bottom layer). ### _Decomposition Approaches_ Graph representations facilitate the implementation of different decomposition approaches. For example, [2] decomposed the hierarchy by solving the subproblems in each layer in series (in a receding horizon approach). This sequential decomposition approach can be easily visualized using graphs. For Figure 6(c), this is equivalent to solving the central DA-UC node, then solving the first ST-UC node, and then solving the 12 connected HA-ED nodes. The next ST-UC node is then solved followed by its 12 connected HA-ED nodes (again in order) and so forth until all 8 ST-UC subproblems and their corresponding HA-ED nodes are solved. The solutions of these problems are then passed to the next 1-day graph and the process is repeated. However, there are other decomposition approaches that could be used; for example, instead of solving each subproblem in a receding horizon approach, we could solve each time-point subgraph in a receding horizon approach. This would result in much smaller optimization problems, but likely worse economic performance. In contrast, we could instead solve the entire 1-day monolithic problem as a single optimization problem rather than solving each subproblem one at a time. The implementation of these strategies can help study trade-offs between tractability and performance. ### _Results_ In this section, we present the results for two different solution approaches. The first approach is to solve the subproblems in a receding horizon approach as done in [2] ("Receding Horizon"). The second approach is to solve each 1-day graph as a single, monolithic optimization problem ("Monolithic"). Because the monolithic problem is a very large mixed-integer problem (MIP), it took hours to solve, so we used a MIP gap termination criteria of 5%. In contrast, we Fig. 4: A graph visualization of the 118-bus system, where nodes correspond to buses and to transmission lines. Fig. 5: Representation of a single subproblem subgraph for the DA-UC, ST-UC and HA-ED subproblems. Fig. 
6: An example of the hierarchical linking between subproblems. Five time points from an ST-UC subproblem are shown linked to a full HA-ED subproblem, with linking constraints highlighted in black. used a MIP gap of 0.5% or less for the receding horizon MIPs as they were smaller and faster to solve. The 1-day monolithic graph contained 641,709 variables (41,727 binary) and 1,103,654 constraints. The code for reproducing these results can be found at [https://github.com/zavalab/JuliaBox/tree/master/hierarchical_graphs](https://github.com/zavalab/JuliaBox/tree/master/hierarchical_graphs). We used the data provided by the 118-bus case study [15] used in [2]. This included a day-ahead (forecasted) load demand, and a real time realized load demand. We used the day-ahead demand for the DA-UC subproblems and the real time demand for the HA-ED subproblems. Because there is no intermediate "short term" demand data, we used the average of the day ahead and real time demands for the ST-UC subproblems. The three load demands are shown in Figure 8. This data was on an hour resolution, so we interpolated the data for higher resolutions. In addition, reserve requirements can vary by system operator, but we chose to use 10% of the demand for the reserve requirement for UC subproblems and 2.5% of the demand for ED subproblems which corresponds to the "low reserve requirements" scenario in [2]. The results of the receding horizon and monolithic approaches are shown in Figure 9 which includes the number of committed DA-UC generators (a), the number of committed ST-UC generators (b), the overgenerated or curtailed power (c), and the amount of load shed (d). The overgenerated/curtailed power and the load shed shown are from the first time point of each HA-ED subproblem. As each HA-ED subproblem had significant overlap with the next problem, we only consider the first HA-ED time point (this is the realized operation). The overall cost of economic dispatch (based on the first time point of each HA-ED subproblem) was $ 219.4 million and $ 199.7 million for the receding horizon and monolithic approaches, respectively. ### _Discussion_ By constructing the hierarchical architecture as a graph, different solution schemes were enabled which provide different insights into the problem. Despite the higher MIP gap used for the monolithic approach, it still performed better than the receding horizon problem and had a lower cost in the economic dispatch by more than $19 million. The monolithic approach was expected to perform better as the lower layers and upper layers are solved in the same problem, allowing the performance of the lower layer to inform the upper layers. This is also likely why the monolithic approach has less load shedding compared with the receding horizon approach Fig. 8: Load demand used in the DA-UC, ST-UC, and HA-ED subproblems for the 32-day horizon. Fig. 7: Representation of (a) the 1-day monolithic graph highlighted by the subproblems (DA-UC, ST-UC, HA-ED), and equivalent representations where (b) individual time point subgraphs are aggregated into single nodes and where (c) subproblem subgraphs are aggregated into single nodes. Fig. 9: Results of the tri-level problem using a ”receding horizon” type approach and a ”monolithic” approach. a) number of committed DA-UC generators over time, b) number of committed ST-UC generators over time, d) overgenerated or curtailed power, and d) load shedding. (83.6 MWhr compared with 3113.5 MWhr). 
The monolithic approach did have a very large peak of overgenerated/curtailed power, but the cost of load shedding (using the costs from [2]) was 200 times more than the cost of overgenerated/curtailed power. In addition, the monolithic approach had less fluctuation in the number of generators turned on or off. The results on load shedding were dependent on the reserve requirements used. In this case, the higher reserve requirements on the UC layers compared with the ED layer (10% vs. 2.5 %) reduced some of the apparent differences between the demand in the HA-ED layer and the DA-UC layer (e.g., the gap between demand in the DA-UC and the HA-ED layers in day 20 in Figure 8 would be reduced). If we adjust the reserve requirements and use the "very low reserve requirements" scenario from [2] (5% of load for UC layer and 1.25% of load for ED layer), the load shed in the serial problem increases by more than 10 times. While not tested, it is possible that further increasing the reserve requirements could reduce load shedding and/or overgeneration. As expected, the monolithic approach took much longer to solve than the receding horizon decomposition approach. In addition, the monolithic approach experienced complications with memory management in the MIP solver. These computational issues, combined with the performance comparisons between the receding horizon and monolithic approaches, highlight the need for decomposition schemes. The receding horizon approach results in a suboptimal solution, but it could be possible to use a decomposition scheme that gives results closer to the monolithic approach but with the computational performance closer to that of the receding horizon problem. Constructing these problems as graphs provides a framework under which a decomposition scheme could be optimized. Overall, this work highlights the utility of representing hierarchical optimization problems using graphs. These graph representations provide a modular way to construct complex (but structured) problems. Each time point can be constructed in a modular manner, and then each time point can be embedded to a modular representation of each subproblem. Graphs are also intuitive to visualize, potentially leading to insights into the problem structure. They provide a framework for manipulating problem structure, such as partitioning/aggregating subgraphs. Graphs also provide a structure that could be exploited via decomposition schemes such as Benders decomposition and Lagrangian relaxation. ## IV Conclusions and Future Work We discussed how hierarchical optimization problems can be represented with graphs. We used the package Plasmo.jl to build a tri-level hierarchical optimization problem arising in market operations and presented different approaches to solve the problem. We presented visualizations of these graph representations in Plasmo.jl, and we presented the results of the two solution approaches. As part of future work, we are interested in using the graph representation for applying and combining decomposition schemes (e.g., Lagrangian decomposition, Benders decomposition, dual dynamic integer programming) to solve large-scale problem instances. ## V Acknowledgements This work was supported by the U.S. Department of Energy under grant DE-0002722.
2309.17207
Memory Gym: Towards Endless Tasks to Benchmark Memory Capabilities of Agents
Memory Gym presents a suite of 2D partially observable environments, namely Mortar Mayhem, Mystery Path, and Searing Spotlights, designed to benchmark memory capabilities in decision-making agents. These environments, originally with finite tasks, are expanded into innovative, endless formats, mirroring the escalating challenges of cumulative memory games such as ``I packed my bag''. This progression in task design shifts the focus from merely assessing sample efficiency to also probing the levels of memory effectiveness in dynamic, prolonged scenarios. To address the gap in available memory-based Deep Reinforcement Learning baselines, we introduce an implementation that integrates Transformer-XL (TrXL) with Proximal Policy Optimization. This approach utilizes TrXL as a form of episodic memory, employing a sliding window technique. Our comparative study between the Gated Recurrent Unit (GRU) and TrXL reveals varied performances across different settings. TrXL, on the finite environments, demonstrates superior sample efficiency in Mystery Path and outperforms in Mortar Mayhem. However, GRU is more efficient on Searing Spotlights. Most notably, in all endless tasks, GRU makes a remarkable resurgence, consistently outperforming TrXL by significant margins. Website and Source Code: https://github.com/MarcoMeter/endless-memory-gym/
Marco Pleines, Matthias Pallasch, Frank Zimmer, Mike Preuss
2023-09-29T12:59:28Z
http://arxiv.org/abs/2309.17207v5
# Memory Gym: Partially Observable Challenges to Memory-Based Agents in Endless Episodes ###### Abstract Memory Gym introduces a unique benchmark designed to test Deep Reinforcement Learning agents, specifically comparing Gated Recurrent Unit (GRU) against Transformer-XL (TrXL), on their ability to memorize long sequences, withstand noise, and generalize. It features partially observable 2D environments with discrete controls, namely Mortar Mayhem, Mystery Path, and Searing Spotlights. These originally finite environments are extrapolated to novel endless tasks that act as an automatic curriculum, drawing inspiration from the car game "I packed my bag". These endless tasks are not only beneficial for evaluating efficiency but also intriguingly valuable for assessing the effectiveness of approaches in memory-based agents. Given the scarcity of publicly available memory baselines, we contribute an implementation driven by TrXL and Proximal Policy Optimization. This implementation leverages TrXL as episodic memory using a sliding window approach. In our experiments on the finite environments, TrXL demonstrates superior sample efficiency in Mystery Path and outperforms in Mortar Mayhem. However, GRU is more efficient on Searing Spotlights. Most notably, in all endless tasks, GRU makes a remarkable resurgence, consistently outperforming TrXL by significant margins. Website and Source Code: [https://github.com/MarcoMeter/endless-memory-gym/](https://github.com/MarcoMeter/endless-memory-gym/) **Keywords:** deep reinforcement learning, proximal policy optimization, memory, transformer-xl, gated recurrent unit ## 1 Introduction In the evolving landscape of Deep Reinforcement Learning (DRL), the significance of memory-based tasks is increasingly evident. Our research introduces "Memory Gym", a benchmark that emphasizes assessing memory capabilities in endless tasks, drawing inspiration from the classic car game "I packed my bag". Central to our contribution is a Transformer-XL (Vaswani et al., 2017) driven baseline implementation for Proximal Policy Optimization (PPO) (Schulman et al., 2017), making it readily accessible to the community. This introduction delves into the intricate role of memory in decision-making agents, setting the stage for our key contributions. We conclude this section by outlining the structure and flow of the subsequent content in this paper. ### "I Packed My Bag": A Game to Explore Memory Limits Imagine embarking on a long-awaited family vacation, with the open road stretching ahead. As the car hums along, a lively atmosphere envelops the passengers, young and old alike, as they engage in a playful game of "I packed my bag". However, as the game unfolds and the list lengthens, the passengers encounter mounting challenges illustrating the finite nature of their individual memories. Despite best efforts, varying strategies, and different capabilities, the weight of the growing list eventually leads to resignation. Due to the game's ever-growing or endless concept, it serves as a revealing test of memory effectiveness, showcasing the limitations that individuals face when confronted with the increasing demands of retention and recall. ### Memory in the Context of Decision-Making Agents Memory is not just a game - it is a critical tool for intelligent decision-making under imperfect information and uncertainty. Without the ability to recall past experiences, reasoning, creativity, planning, and learning may become elusive. 
In the realm of autonomously learning decision-making agents, the agent's memory involves maintaining a representation of previous observations, a knowledge bank that grounds its next decision. Memory mechanisms, be it through recurrent neural networks (Rumelhart et al., 1986) or transformers (Vaswani et al., 2017), have enabled these agents to master tasks both virtual and real. For instance, DRL methods have conquered complex video games as Capture the flag (Jaderberg et al., 2018), StarCraft II (Vinyals et al., 2019), and DotA 2 (Berner et al., 2019). Their success extends beyond virtual environments to real-world challenges as dexterous in-hand manipulation (Andrychowicz et al., 2020) and controlling tokamak plasmas (Degrave et al., 2022). The remarkable achievements of DRL agents empowered by memory often come coupled with substantial computational demands and additional techniques that extend beyond the scope of assessing an agent's memory interaction abilities. ### Contribution: Endless Environment Memory Benchmark In our prior research, we introduced Memory Gym (Pleines et al., 2023), an open-source benchmark, designed to challenge memory-based DRL agents to memorize events across long sequences, generalize, be robust to noise, and be sample efficient. Memory Gym encompasses three unique environments: Mortar Mayhem, Mystery Path, and Searing Spotlights. Each environment presents visual observations, discrete action spaces, and crucially, they cannot be mastered without memory, demanding frequent memory interactions. Building on this foundation, we further enhance Memory Gym's offerings by introducing endless tasks, drawing inspiration from the "I packed my bag" game. Such endless tasks, behaving as an automatic curriculum, are pivotal for not just evaluating efficiency but, more critically, assessing the effectiveness of memory mechanisms. As mirrored in the aforementioned game, capitulating to the ever-expanding list becomes inevitable. To our current understanding, no existing Deep Learning memory benchmark possesses such endless tasks targeting sequence models. Additionally, DRL tasks exhibit dynamic behaviors due to the evolving nature of the agent's policy during training. As a result, episodes can yield vastly different outcomes. For example, episode lengths can vary significantly based on the decisions made, particularly when dealing with stochastic agents or environments. It is important to clarify that "endless" should not be confused with "open-ended". Memory Gym's endless environments are not open-ended in nature; they possess clear objectives and defined structures. In contrast, open-ended environments offer agents freedom to develop unique skills and strategies without stringent constraints as demonstrated by the Paired Open-Ended Trailbazer algorithm (Wang et al., 2020) or platforms like Minecraft (Fan et al., 2022). ### Contribution: Transformer-XL PPO Baseline Many significant publications in memory-based DRL, particularly those utilizing transformers (Fortunato et al., 2019; Parisotto et al., 2020; Hill et al., 2021; Lampinen et al., 2021; Parisotto and Salakhutdinov, 2021), withhold their source code, impeding the replication of their findings and consequently hampering research progress. The absence of available baseline implementations is further discussed in section 5.2. 
In response to this challenge, we offer an accessible and easy-to-follow baseline implementation based on the PPO DRL algorithm (Schulman et al., 2017) and Transformer-XL (TrXL) (Vaswani et al., 2017). To allow for memory, we utilize TrXL as an episodic memory leveraging a sliding window approach. In our experimental analysis, we benchmark both the TrXL and a recurrent agent based on the Gated Recurrent Unit (GRU) (Cho et al., 2014) across the original (finite) and the new endless environments of Memory Gym. Our findings indicate that TrXL performs best in terms of sample efficiency on Mystery Path and effectiveness on Mortar Mayhem. However, in the Searing Spotlights environment, GRU demonstrates better sample efficiency. To our greatest surprise, in the endless environments, GRU consistently surpasses TrXL in effectiveness by large margins. ### Overview This paper proceeds by delving into the properties and dynamics of the Memory Gym environments, highlighting Mortar Mayhem, Mystery Path, and Searing Spotlights. Additionally, we introduce their innovative endless counterparts, offering a fresh perspective on memory challenges. Subsequently, we outline the memory baselines that are tested against Memory Gym. Delving deeper, we describe the actor-critic model architecture, shedding light on the associated loss functions and the foundational components of both the GRU and TrXL baselines. Our experimental analysis follows, presenting empirical results from both finite and endless tasks within Memory Gym. Moreover, we discuss the intriguing aspect of TrXL's suboptimal performance in endless tasks, offering insights, further experimentation, and future directions. Prior to our concluding remarks, we offer a comparative perspective by discussing related baselines and benchmarks in the memory-based DRL domain. ## 2 Memory Gym Environments Mini-games of the commercial video game Pummel Party1 inspired Memory Gym's environments. The agent perceives all environments from a top-down perspective using \(84\times 84\) RGB pixels, while its action space is multi-discrete, featuring two dimensions of size three as shown in Figure 1(a). Simpler grid-based locomotion variants are also incorporated in Mortar Mayhem and Mystery Path, as depicted in Figures 1(b) to 1(d). Subsequently, we dive into an in-depth exploration of the environments' dynamics, highlighting their peculiarities towards memory, and above all, showcasing their distinctive endless design. Annex A presents the reset parameters of the environments and their default values. Footnote 1: [http://rebuiltgames.com/](http://rebuiltgames.com/) ### Mortar Mayhem Mortar Mayhem (MM; Figure 2) takes place inside a grid-like arena and consists of two memory-dependent tasks. Initially, the agent is immobile and must memorize a sequence of ten commands (Clue Task), followed by executing each command in the observed order (Act Task). A command instructs the agent to move to an adjacent floor tile or remain at Figure 1: The skin-colored agent in Memory Gym utilizes a multi-discrete action space (a) with distinct dimensions for horizontal (red arrows) and vertical (green arrows) velocity. Both dimensions include a no-op action, enabling movement in eight distinct cardinal directions (all arrows). The agent maintains a fixed speed of 3 pixels per step, with its forward direction determined by the combined horizontal and vertical velocity. The agent’s forward direction is visually represented by white-colored hands. 
Additionally, Mortar Mayhem and Mystery Path offer a grid-like locomotion system with a discrete action space of four actions: moving forward one tile at a time (b), rotating by 90 degrees left (c) or right (d), and a no-op action. the current one. Failure to execute a command results in episode termination, whereas each successful execution yields a reward of \(+0.1\). MM can be simplified to solely provide the Act Task (MMAct), where the command sequence is fully observable as a one-hot encoded feature vector. The Act Task necessitates frequent memory utilization since it must discern which commands have been executed to fulfill the subsequent ones, while considering a maximum of nine target tiles. To overcome this challenge, the agent can learn to track time (e.g. count steps) where a short-term memory seems sufficient. MM's difficulty can be scaled by several parameters, such as the number of commands, the show duration of a single command, and the delay between showing commands. Furthermore, the time allocated for executing a single command can be modified. These options can also be sampled upon resetting an episode. To simplify the difficulty further, the agent can be equipped with grid-like movement (MMGrid, MMActGrid), allowing it to move one tile at a time. Doing so, the agent is not commanded to move to diagonal tiles. Episode lengths in MM vary based on the agent's proficiency, lasting longer as the agent improves until reaching an upper bound (i.e. max episode length). The following equations determine the maximum episode length in MM by considering the durations and delays of the Clue Task and Act Task. Figure 2: Mortar Mayhem’s relevant entities are depicted in the annotated ground truth of the environment (a). The ground truth includes a green circle that designates the next target tile to move to. At the beginning of an episode, the commands are sequentially rendered onto the agent’s observation (b) while the agent remains immobile. Once all commands are displayed, the agent must navigate to the target tile within a specified time frame. Upon completion of the current command’s execution time, the agent’s success is evaluated, as illustrated in figure (c). This visual feedback is perceptible to the agent. Standing on red colored tiles means failure. Following a brief delay of a few steps during evaluation, the agent can proceed to the next command unless the episode has terminated due to failure or completion of the entire command sequence. \[\text{Clue Task} = (\text{Show Duration}+\text{Show Delay})\times\text{Command Count}\] (1) Execution Length \[= (\text{Execution Duration}+\text{Execution Delay})\times\text{Command Count}\] (2) Act Task \[= \text{Execution Length}-\text{Execution Delay}+1\] (3) Max Episode Length \[= \text{Clue Task}+\text{Act Task} \tag{4}\] ### Endless Mortar Mayhem Endless Mortar Mayhem (EMM; Figure 3) introduces a unique concept inspired by the toddler's game "I packed my bag...". It extends the to-be-executed command sequence continuously, enabling potentially infinite sequences. Two significant modifications are made to MM to accommodate this feature. In the previous setup, the agent had to observe the entire sequence before executing it. However, in EMM, we adopt a different approach by alternating between command visualization and execution. As shown in Figure 3(a), the agent only perceives one command during visualization. During execution, the agent must execute all previous commands along with the new one. 
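To illustrate how the to-be-executed sequence grows, here is a small Python sketch of the alternating show/execute schedule; the command names and the generator interface are ours, not the environment's actual API.

```python
import random

def endless_mortar_mayhem_schedule(max_commands, rng=random.Random(0)):
    """Yield alternating (phase, commands) pairs: show one new command, then execute all so far."""
    moves = ["stay", "up", "down", "left", "right",
             "up-left", "up-right", "down-left", "down-right"]
    shown = []
    for _ in range(max_commands):
        new_command = rng.choice(moves)
        shown.append(new_command)
        yield ("show", [new_command])   # only the newest command is visualized
        yield ("execute", list(shown))  # the agent must replay the entire sequence

# Example: by the third execution phase the agent already has to recall three commands.
for phase, commands in endless_mortar_mayhem_schedule(3):
    print(phase, commands)
```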
### Endless Mortar Mayhem

Endless Mortar Mayhem (EMM; Figure 3) introduces a unique concept inspired by the toddler's game "I packed my bag...". It extends the to-be-executed command sequence continuously, enabling potentially infinite sequences. Two significant modifications are made to MM to accommodate this feature. In the previous setup, the agent had to observe the entire sequence before executing it. In EMM, however, we adopt a different approach by alternating between command visualization and execution. As shown in Figure 3(a), the agent only perceives one command during visualization. During execution, the agent must execute all previous commands along with the new one. An episode begins by displaying one command, followed by its execution. Subsequently, the next visualization reveals only the second command, requiring the agent to execute both the first and second commands. This automatic curriculum allows the to-be-executed command sequence to continually expand, while each new command is presented only once. A reward of \(+0.1\) is signaled for every single successful command. The second modification concerns the arena's boundaries. In the original MM, the agent's position would need to be reset after command execution, as certain commands are unavailable when the agent is adjacent to the arena's boundaries. In EMM, we eliminate the borders and seamlessly scale the arena to the screen's dimensions. Additionally, we implement a screen wrap feature that transforms the arena into a torus. Consequently, if the agent walks off the screen, it reappears on the opposite side without requiring relocation. The agent then executes the sequence of commands again from its new position.

Figure 3: In Endless Mortar Mayhem, the visualization and execution of commands are alternated (a) following the idea of an automatic curriculum. Commands (blue) are visualized only once in-between executing a new command and the previous ones. The arena (b) seamlessly occupies the entire screen, enabling the agent to reenter from the opposite side if it walks off.

### Mystery Path

Mystery Path (MP; Figure 4) challenges the agent to traverse an invisible path within a grid-like level of dimension \(7\times 7\). If the agent deviates from the path and thus falls into the pit, it is relocated to its origin during the next step. The episode terminates when the agent reaches the goal or exhausts its time budget of 512 steps. Reaching the goal yields a reward of \(+1\). If a dense reward function is desired, the agent can be rewarded with \(+0.1\) for visiting previously unexplored tiles. To effectively navigate the environment, the agent must remember its steps on the path as well as the locations where it fell off. The invisible path is procedurally generated using the \(A^{*}\) path-finding algorithm (Hart et al., 1968). Initially, the path's origin is randomly sampled from the grid's border, while the goal is placed on the opposing side in a random position. Instead of aiming for the shortest path, the cost function of \(A^{*}\) is sampled to generate a diverse range of paths. The path generation process randomizes some cells as untraceable (\(A^{*}\) walls), which adds variation to the generated paths. It is important to note that these walls are considered pits rather than physical obstacles. MP can be simplified into a grid variant called MPGrid, which features a single discrete action space of four actions. In this variant, the agent's time limit is reduced to 128 steps. Furthermore, MP can be made easier by including the path's goal and origin in the agent's observation. On the other hand, the difficulty can be increased by reducing the agent's scale, decreasing its speed, or enlarging the path. In contrast to MM, episodes in MPGrid terminate earlier as the agent improves.

Figure 4: The annotated ground truth depicts the relevant entities in Mystery Path (a). The walkable path is distinguished by its blue (origin), green (goal), and white color (path). Areas outside the path are designated as pits, represented by black and red colors, causing the agent to fall off and be relocated to the path's origin on the next step. The initial randomly chosen red tiles (\(A^{*}\) Walls) are only considered during path generation. The agent's observation includes just itself (b). If the agent deviates from the path, a red cross provides visual feedback (c).

### Endless Mystery Path

Endless Mystery Path (EMP; Figure 5) introduces a never-ending path for the agent to traverse. Based on the original MP environment, we make several modifications to the path generation, the agent's observation, and the terminal conditions. The path generation in EMP (Figure 5(a)) involves concatenating path segments. Each segment is generated similarly to MP, with the only difference being that the origin of a segment is based on the final tile of the previous segment. To ensure that path tiles do not have multiple neighbors, an additional tile is added between segments. The path is always generated from left to right. Consequently, we eliminate the agent's ability to move left. As the endless path cannot be fully captured in an \(84\times 84\) RGB pixel observation, we visually fix the agent's horizontal position. If nothing but the agent is shown in the observation, the agent lacks information about its horizontal motion. To address this, we render the path tiles behind the agent (Figure 5(a)), providing visual cues of the local horizontal position. For added difficulty, the visual horizontal position can be shifted further to the left, obscuring more information about the actual path. Regarding terminal conditions, we aim to shorten episodes if the agent performs poorly. To achieve this, we implement three terminal conditions. Firstly, the agent is given a time budget of 20 steps to reach the next tile. Moving in the wrong direction or taking too long to progress within the budget leads to episode termination. The time budget is reset when the agent moves to the next tile or falls off and is relocated to the path's origin. The second condition terminates the episode if the agent falls off before reaching its best progress. Lastly, the episode is terminated if the agent falls off at the same location for a second time. With these conditions in place, an episode can potentially last indefinitely if the agent consistently succeeds.

Figure 5: In Endless Mystery Path, the path (a) is established by concatenating path segments, following a similar generation process as in MP. To avoid multiple neighboring path tiles, we add a transition tile (blue) between segments. Due to the constraints of representing an endless path within the observation space (b), we opt to visually fix the agent's horizontal position. By rendering the path tiles behind the agent, the agent gains a sense of its horizontal motion, allowing it to perceive its progress along the path. Its horizontal position is adjustable, potentially obscuring more of the past path to increase the challenge.

### Searing Spotlights

Searing Spotlights (Figure 6) presents a pitch-black environment to the agent, where information is revealed only through roaming and threatening spotlights. The agent starts with a limited number of health points (e.g. five) and loses one health point per step if hit by a spotlight. The episode terminates if the agent runs out of health points or exceeds the time limit of 256 steps. To evade the imminent threats, the agent must strategically hide in the darkness. In order to escape from the encroaching spotlights, the agent needs to remember its past actions and at least one previous position to deduce its current location.
This requires the agent to carefully manage its health point budget to ascertain its location, ideally with minimal exposure. To avoid monotonous behavior, two additional tasks are introduced, requiring the agent to actively traverse the environment. The first task asks the agent to collect a predetermined number of coins, while the second task involves reaching an exit to finish the episode once all the coins have been collected. Both the exit and the coins are randomly positioned during the environment's reset phase. Collecting a coin rewards the agent with \(+0.25\), while successfully reaching the exit yields a reward of \(+1\). By utilizing its memory effectively, the agent should be able to deduce its current location and recall the positions of the coins and the exit. A simplified grid version of Searing Spotlights is not available. As a default simplification, the episode starts with perfect information, and after a few steps, the global light is gradually dimmed until off. Just like in MP, more successful episodes terminate earlier.

Figure 6: The environment's annotated ground truth (a) reveals information about all relevant entities in Searing Spotlights. The top rows of pixels display the agent's remaining health points, its last action, and indicate whether a positive reward was received during the last step. The last action is encoded using two chunks and three colors, while the last positive reward is represented by two colors. Yellow circles indicate coins, while the somewhat rounded gray shape represents the exit. When all coins are collected, the exit turns green. The floor of the environment is checkered blue and white. In the agent's observation (b), the spotlights play a role in revealing or hiding other entities. As an additional form of visual feedback (c), the blue floor tiles turn red if a spotlight detects the presence of the agent.

### Endless Searing Spotlights

Based on just a few modifications, we extrapolate the concept of Searing Spotlights to Endless Searing Spotlights (ESS). We eliminate the exit task and alter the coin collection task by spawning a new coin whenever the agent collects one. The newly spawned coin is always visible for six frames, and the agent has a time budget of 160 steps to collect it. Lastly, we raise the agent's health points from five to ten.

## 3 Memory Baselines

In our experiments, we utilize the popular Deep Reinforcement Learning (DRL) algorithm Proximal Policy Optimization (PPO) (Schulman et al., 2017). By default, PPO lacks a built-in memory mechanism. To overcome this limitation, either recurrent neural networks (RNNs) or self-attention mechanisms (i.e. transformers) are potential solutions. Prior works have extensively demonstrated the effectiveness of leveraging RNNs, such as Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit (GRU) (Cho et al., 2014), within the realm of DRL (Mnih et al., 2016; Espeholt et al., 2018; Huang et al., 2022). This also applies to previous studies employing transformers, like Gated Transformer-XL (GTrXL) (Parisotto et al., 2020) and Hierarchical Chunk Attention Memory (HCAM) (Lampinen et al., 2021). Adopting these memory mechanisms is not trivially plug-and-play. They entail an increase in implementation complexity due to gathering more data, processing sequential data, and extending model interfaces. This may lead to a higher risk of errors and an undesired memory overhead in terms of hardware memory usage.
To make these baselines more accessible to the community, we contribute two easy-to-follow baseline implementations of recurrent neural networks and Transformer-XL (TrXL) (Dai et al., 2019) in PPO. It is worth noting that there is a dearth of readily available and accessible transformer-based DRL implementations. Therefore, our realization of TrXL plus PPO is especially valuable in this context. In the following subsections, we provide an overview of the actor-critic model architecture that additionally leverages an auxiliary observation reconstruction loss. Finally, we delve into the recurrent PPO and transformer-based PPO baselines.

### General Model Architecture

Figure 7 presents a broad overview of the actor-critic model architecture. The encoding process begins with visual observations and, optionally, vector observations. To encode visual observations (i.e. pixel observations) of shape \(84\times 84\times 3\), we employ the Atari CNN, a convolutional neural network that adheres to the standard topology originally developed for solving Atari games (Mnih et al., 2015). The vector observation is encoded by a multi-layer perceptron (MLP). The outputs of these encoders are concatenated and subsequently embedded into the input space of the memory encoder, which is based on either GRU or Transformer-XL. Once the data is propagated through the memory encoder, it is further processed by each individual head, which may contain further hidden layers. The policy head (actor) samples multi-discrete actions as required by Memory Gym, while the value head (critic) approximates the state-value. Additionally, there is a head dedicated to reconstructing the original visual observation, leveraging the principles of self-supervised learning. Technically, the latent output of the memory encoder is fed to a transposed Atari CNN to achieve this reconstruction. The utility of such an auxiliary head is demonstrated by some of our results (Section 4) and prior works by Lampinen et al. (2021) and Hill et al. (2021). During development, we also tried to attach the reconstruction head directly to the encoder, but this led to catastrophic results.

Figure 7: Architecture of the actor-critic model featuring observation encoders and a memory encoder (green) that are shared among the policy, state-value function, and observation reconstruction heads.

### Loss Function

Our baselines utilize PPO's clipped surrogate objective (Schulman et al., 2017). Due to the usage of the memory encoder, the action \(a_{t}\) to be selected by the policy \(\pi_{\theta}\) depends on the current observation \(o_{t}\) and the memory encoder's output, which we refer to as the hidden state \(h_{t}\). \(\hat{A}_{t}\) denotes advantage estimates based on generalized advantage estimation (GAE) (Schulman et al., 2016), \(\theta\) the parameters of the neural net and \(\epsilon\) the clip range.

\[L_{t}^{C}(\theta)=-\hat{\mathbb{E}}_{t}[\min(q_{t}(\theta)\hat{A}_{t},\text{clip}(q_{t}(\theta),1-\epsilon,1+\epsilon)\hat{A}_{t})] \tag{5}\]
\[\text{with ratio }q_{t}(\theta)=\frac{\pi_{\theta}(a_{t}|o_{t},h_{t})}{\pi_{\theta_{\text{old}}}(a_{t}|o_{t},h_{t})}\]

\(t\) denotes the current time step. The value function is optimized using the squared-error loss \(L_{t}^{V}(\theta)\). \(\mathcal{H}[\pi_{\theta}](o_{t},h_{t})\) denotes an entropy bonus encouraging exploration (Schulman et al., 2017). The auxiliary observation reconstruction loss \(L_{t}^{R}\) is based on the binary cross entropy.
All three are weighted using the coefficients \(c_{1}\), \(c_{2}\), and \(c_{3}\) and are added to \(L_{t}^{C}\) to complete the loss:

\[L_{t}^{CVHR}(\theta)=\hat{\mathbb{E}}_{t}[L_{t}^{C}(\theta)+c_{1}L_{t}^{V}(\theta)-c_{2}\mathcal{H}[\pi_{\theta}](o_{t},h_{t})+c_{3}L_{t}^{R}(o_{t},\hat{o}_{t})] \tag{6}\]
\[\text{with }L_{t}^{V}(\theta)=(V_{\theta}(o_{t},h_{t})-V_{t}^{target})^{2}\]
\[\text{and }L_{t}^{R}(o_{t},\hat{o}_{t})=-[\hat{o}_{t}\log(o_{t})+(1-\hat{o}_{t})\log(1-o_{t})]\]

### Recurrent Neural Network Baseline

In general, RNNs process sequential data iteratively by propagating information through recurrent connections. When utilizing these in DRL, the forward pass of the model differs between inference (sampling training data) and optimization, as shown in Figure 8. During data sampling, the initial hidden state of the recurrent cell is initialized to zeros, and all output hidden states are stored for later optimization. During optimization, the sampled observations, actions, hidden states, and a loss mask are split into sequences based on a configurable sequence length. Sequences shorter than the designated length are zero-padded to optimize computational efficiency, at the cost of some memory overhead. Each sequence begins with its corresponding initial hidden state, enabling truncated back-propagation through time. Back-propagation through time, introduced by Werbos (1990), is an algorithm used to train recurrent neural networks by unfolding the network through time and applying the standard back-propagation algorithm (Rumelhart et al., 1986).

Figure 8: An overview of the distinct forward passes concerning inference and optimization when incorporating an RNN, such as GRU (stacked \(L\) times), as a memory encoder in a DRL model. During data sampling (a), the agent interacts with its environment step-by-step, updating the prior hidden state based on encoded features of the current observation. In the optimization phase (b), observations and formerly collected hidden states are divided into sequences. Each sequence is then processed by the RNN with its corresponding initial hidden state, followed by reshaping the resulting data back to its original batch dimensions.

Regarding the forward pass during optimization, the model is fed a batch of training data that is reshaped into sequences before being passed to the RNN. Once the data has propagated through the RNN, it is reshaped back to its original batch dimensions. Since only the RNN requires a sequence of data, it is more efficient to feed the non-recurrent components of the neural network the entire batch. Finally, during loss computation, the padded values are excluded using the loss mask, as defined below. \(L^{mask}\) is the average over all losses not affected by the padding.

\[L^{mask}(\theta)=\frac{\sum_{t}^{T}\left[mask_{t}\times L_{t}^{CVHR}(\theta)\right]}{\sum_{t}^{T}[mask_{t}]} \tag{7}\]
\[\text{with }mask_{t}=\begin{cases}0&\text{where padding is used}\\ 1&\text{where no padding is used}\end{cases}\]

Our final RNN baseline supports both GRU and LSTM architectures, along with truncated back-propagation through time. For our major experiments we chose GRU over LSTM for several reasons. Firstly, GRU is computationally less expensive, as demonstrated in the work by Morad et al. (2023). Additionally, it is more GPU memory efficient, as it relies solely on its hidden state without the need for an additional cell state as in an LSTM.
Lastly, previous DRL studies showed that GRU slightly outperforms LSTM (Lambrechts et al., 2022; Morad et al., 2023). We also explored LSTM during hyperparameter tuning but observed inferior performance compared to GRU. ### Transformer-XL Baseline While RNNs process sequential data by propagating information through recurrent connections, transformers employ self-attention mechanisms to capture global dependencies in parallel across the entire sequence (Vaswani et al., 2017). Next, we detail our transformer baseline that draws conceptual inspiration from TrXL (Dai et al., 2019), GTrXL (Parisotto et al., 2020), and HCAM (Lampinen et al., 2021). The original transformer architecture proposed by Vaswani et al. (2017) is designed as a sequence-to-sequence model with an encoder-decoder structure. However, for the purpose of DRL, we adapt the architecture to a sequence-to-one model, focusing solely on the encoder as done in GTrXL and HCAM. This modification is driven by the underlying Markov Decision Process, which maps the environment's current state to a single action rather than a sequence of actions. To begin with, the input data can be conceptualized as a sequence of observations. In the context of visual observations, this is commonly referred to as frame stacking, where consecutive frames are combined to form the input sequence. However, instead of expensively stacking raw frames, we leverage the past outputs of the model's embedding module (Figure 7), which serves as the input sequence to the first transformer block. The subsequent blocks make use of the past outputs from their preceding blocks. These sequences are stored in the agent's episodic memory (Figure 9(a)) and are collected during inference based on the agent's current timestep \(t\). To accommodate computational limitations, we employ a sliding memory window approach, where the input sequence is derived from a fixed-length Figure 9: Figure (a) illustrates the episodic memory that stores past inputs to all Transformer-XL blocks for every step taken by the agent. Input sequences to the Transformer-XL blocks are retrieved using a sliding memory window. \(T\) denotes the episode length. Figure (b) depicts the architecture of one Transformer-XL block that is stacked \(B\) times. The dashed lines indicate that there is no flow of gradients into the episodic memory, the positional encoding, and the mask. window. By leveraging past activations in the input sequence, the agent's memory facilitates the ability to attend to events that extend beyond the boundaries of the fixed-length memory window, aligning with the segment-level recurrence mechanism of TrXL (Dai et al., 2019). Note that gradients are not back propagated through the episodic memory. The architecture of our TrXL encoder block is depicted in Figure 9(b). At each timestep \(t\), the block receives an input that is solely based on the current timestep. This input is added to the episodic memory and serves as the query \(Q_{t}\) during the execution of multi-head attention (MHA) (Vaswani et al., 2017). The keys \(K\) and values \(V\) used in MHA are derived from the same data based on self-attention. Specifically, we retrieve \(K\) and \(V\) by slicing the episodic memory using the bounding indices \(i=\max(0,t-\text{window length})\) and \(t-1\). To ensure the positional information of \(K\) and \(V\) remains coherent, we add a positional encoding, following the approach from Vaswani et al. (2017). 
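To make the memory retrieval just described more tangible, here is a minimal sketch of how the keys and values could be sliced from the episodic memory for the first Transformer-XL block; the tensor shapes and names are illustrative assumptions rather than the exact implementation used for the baselines.

```python
import torch

def retrieve_memory_window(episodic_memory: torch.Tensor, t: int, window_length: int) -> torch.Tensor:
    """Slice the keys/values for time step t from the episodic memory.

    episodic_memory: (max_episode_length, embed_dim), holding the past inputs to a
    Transformer-XL block; the slice covers steps max(0, t - window_length) .. t - 1.
    """
    start = max(0, t - window_length)
    return episodic_memory[start:t]


# Hypothetical example: embedding dimension 384, memory window of 256 steps.
memory = torch.zeros(512, 384)               # episodic memory collected during one episode
t = 300                                      # current time step of the agent
kv = retrieve_memory_window(memory, t, 256)  # keys and values, shape (256, 384)
q = torch.randn(1, 384)                      # query: embedding of the current observation only
# Inside the block, a positional encoding is added to kv before multi-head attention,
# and a mask over the empty slots is only needed while t is smaller than the window length.
```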
While learning a positional encoding can be effective, we found it to be less sample efficient. To restrict attention to timesteps up to the current timestep \(t\), we employ a strictly lower triangular matrix as a mask in MHA. This mask is only necessary when \(t\) is smaller than the length of the sliding memory window. The remaining configuration of the block adheres to the findings of Parisotto et al. (2020), which suggest improved performance with pre-layer normalization and the identity map reordering. We encountered vanishing gradient issues when applying layer normalization after MHA and the Position-Wise MLP, which is referred to as post-layer normalization. Although our implementation supports the gating mechanism of GTrXL, it resulted in either lower or equivalent performance while being computationally more expensive (Appendix C). ## 4 Experimental Analysis To benchmark the effectiveness of our just introduced GRU and TrXL baselines, both powered by Proximal Policy Optimization (Schulman et al., 2017), we run empirical experiments on Memory Gym. Contrary to our previous study (Pleines et al., 2023), we exclude HELM (History comprEssion via Language Models) (Paischer et al., 2022a) and its successor, HELMv2 (Paischer et al., 2022b). The decision stems from HELM's suboptimal wall-time efficiency, which is six times more expensive than our GRU agent, making hyperparameter optimization impractical for our purposes. As we are strongly limited by our compute budget of 25,000 GPU hours, we rather dedicated the resources towards the development of our endless environments and TrXL baseline. Section 5.2 discusses further reasons for not considering more baselines in our experiments. We also conduct preliminary tests using Gated Transformer-XL (GTrXL) (Parisotto et al., 2020) and Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) on the finite environments without any hyperparameter tuning. These models usually underperformed when applied directly, suggesting that out-of-the-box applications might necessitate hyperparameter optimization. As part of our evaluation protocol, the following sample efficiency curves depict the interquartile mean (IQM) along with a stratified bootstrapped confidence interval of 0.95 (shaded area on the plots), following the guidelines of Agarwal et al. (2021). Each training run is repeated 5 times. To evaluate the generalization ability of our baselines, we extensively test them on 50 novel environment seeds that were excluded from the training process. This evaluation is repeated 3 times for each data point, resulting in a comprehensive assessment based on a total of 750 episodes for every data point presented. The next subsection presents a two-fold examination of the finite environments. Firstly, we reaffirm that memory is imperative in simpler environment instances, a conclusion previously drawn in our earlier study (Pleines et al., 2023). Enhanced hyperparameters allow us to report improved results. Secondly, we compare the performance of GRU and TrXL on the original finite tasks. TrXL exhibits stronger policies in Mortar Mayhem than GRU, while it is more sample efficient in Mystery Path. Searing Spotlights is an exception to this where TrXL is less sample efficient. Contradicting our initial expectations and the results on the finite environments, we unveil surprising findings that highlight GRU's superiority over TrXL in all endless environments. 
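As a rough, simplified illustration of this evaluation protocol, the following sketch computes an interquartile mean and a plain percentile-bootstrap confidence interval over per-episode scores; the actual evaluation follows the stratified bootstrap procedure of Agarwal et al. (2021), which is not reproduced here.

```python
import numpy as np

def interquartile_mean(scores: np.ndarray) -> float:
    """Mean over the middle 50% of the scores (IQM)."""
    lower, upper = np.percentile(scores, [25, 75])
    return float(scores[(scores >= lower) & (scores <= upper)].mean())

def bootstrap_ci(scores: np.ndarray, n_boot: int = 2000, alpha: float = 0.05, seed: int = 0):
    """Simple (non-stratified) percentile bootstrap confidence interval of the IQM."""
    rng = np.random.default_rng(seed)
    stats = [interquartile_mean(rng.choice(scores, size=scores.size, replace=True))
             for _ in range(n_boot)]
    return tuple(np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

# Hypothetical example: 5 training runs x 150 evaluation episodes = 750 scores per data point.
scores = np.random.default_rng(1).uniform(0.0, 1.0, size=750)
print(interquartile_mean(scores), bootstrap_ci(scores))
```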
For a comprehensive overview of the hyperparameters employed in our experiments, see Appendix B. Results for GTrXL and LSTM on the finite environments can be found in Appendix C. Additionally, a selection of wall-time efficiency statistics is available in Appendix D.

### Finite Environments: Transformer-XL is often more efficient than GRU

Before assessing GRU and TrXL on the default challenges of the finite environments, the first set of experiments focuses on confirming that agents trained on Memory Gym indeed require memory. To accomplish this, we train several minor baselines alongside GRU and TrXL on simplified versions of the environments. These baselines include PPO without memory, PPO with frame stacking (4 and 16 grayscale frames), and PPO with relative positional encoding (Vaswani et al., 2017) provided as a vector observation. Results on Mortar Mayhem (Figures 10(a)-(c)) show the number of commands that were properly executed during one episode. As mentioned in Section 2.1, the agent is not required to memorize the command sequence in Mortar Mayhem Act Grid, as it is included in the agent's vector observation. Under these simplified conditions, PPO without memory is ineffective, while the frame stacking agents achieve nearly 5 or 8 commands with a slight upward trend. Agents using GRU or TrXL are the most effective, completing all 10 commands, while those with relative positional encoding follow closely, completing 9.5 commands. The positional encoding baseline adds the episode's current time step to the agent's observation, allowing for temporal awareness within the sequential execution of commands. Once the Clue Task is present (Figure 10(b)), only GRU and TrXL prove to be effective, and therefore we conclude that Mortar Mayhem demands memory. We obtain a similar impression when training on Mystery Path Grid where the origin and goal are not perceivable by the agent (Figure 10(e)). In this case, TrXL outperforms GRU in terms of sample efficiency, while both methods are effective. If the origin and the goal are visible, a horizon of 16 frames is sufficient to train an effective policy, as shown by the frame stacking agent (Figure 10(d)). Stacking 4 frames or leveraging positional encoding leads to an IQM success rate of 40% with a slight trend upward. The memory-less agent's success rate goes up to about 11%. None of the minor baselines succeed in Searing Spotlights, as seen in Figure 10(g). Only in Searing Spotlights do the minor baselines also utilize observation reconstruction. Utilizing the observation reconstruction loss significantly improves the sample efficiency of GRU and TrXL. The reason for this can be found in rare events: the agent, the coin, and the exit are seldom visible. Hence, the visual encoder benefits from the auxiliary learning signal provided by the observation reconstruction.

Figure 10: Performance comparison across instances of Memory Gym's finite environments. Mortar Mayhem Act Grid (a) provides commands as a fully observable vector, bypassing the Clue Task. Both (a) and Mortar Mayhem Grid (b) utilize grid-like locomotion. (c) represents the default Mortar Mayhem task. Mystery Path Grid operates on grid-like locomotion, with (d) rendering the goal and origin to the agent's observation. (e) and (f) hide these, while (f) is the default Mystery Path task. Searing Spotlights (g) is not varied, while all conducted runs leverage observation reconstruction (Obs. Rec.) in this environment.
However, both GRU variants, with and without observation reconstruction, exhibit notably higher sample efficiency compared to their respective TrXL counterparts. When facing the default task of Mortar Mayhem, a different result becomes apparent (Figure 10(c)). TrXL with observation reconstruction solves all 10 commands after training for about 100 million steps, while GRU converges to only about 6 commands. GRU's performance drops to about 5 commands if the observation reconstruction loss is used. If TrXL trains without this auxiliary loss, its performance drops to about 2 successful commands. In the case of Mystery Path (Figure 10(f)), we opted not to train agents using observation reconstruction, as the always visible agent is the only entity that requires visual encoding. In this scenario, TrXL achieves the desired success rate in approximately 100 million steps, whereas GRU requires twice as many steps to achieve the same level of performance. In summary, TrXL proves to be more effective than GRU in Mortar Mayhem and exhibits higher sample efficiency in the case of Mystery Path. However, it is worth noting that GRU, both with and without observation reconstruction, outperforms its respective TrXL counterparts in Searing Spotlights.

### Endless Environments: GRU is more effective than Transformer-XL

Across all three endless environments, the recurrent agent (GRU) consistently outperforms the transformer-based agent (TrXL) by a significant margin. Notably, the most substantial performance gap between GRU and TrXL emerges in the results of Endless Mortar Mayhem (Figure 11(a)). Within this context, GRU attains an impressive IQM of 120 executed commands using roughly 300 million steps, whereas TrXL only attains an IQM of 18 commands. Incorporating observation reconstruction exacerbates this difference, deteriorating TrXL further to an IQM of 10. Furthermore, GRU exhibits consistently longer IQM episode lengths, with TrXL achieving an IQM of 288 steps and GRU reaching a peak IQM episode length of 3462 steps. The outcomes depicted in Figure 11(b) illustrate the results obtained from the Endless Mystery Path environment. In this case, GRU remains more effective than TrXL, although the gap between their performances is narrower compared to EMM. GRU achieves an IQM return of 2.65, with TrXL achieving 2.02. Although neither memory approach converges within the allocated training budget of 819 million steps, GRU displays a more pronounced upward trend in performance. In terms of episode length, neither agent achieves lengths as seen in EMM. GRU attains an IQM episode length of 600 steps in EMP, while TrXL reaches up to 400 steps. This divergence can be attributed to the episode's dynamics, where agents tend to fall off after significant progress, necessitating more steps to regain lost ground and accumulate further rewards. In Endless Searing Spotlights (Figure 11(c)), GRU is about 3.3 times more effective than TrXL. Both agents utilize the observation reconstruction loss, but TrXL converges to an IQM of 6 collected coins after approximately 200 million steps, while GRU achieves an IQM of about 20 coins, continuing to show an upward trend in its performance.

Figure 11: GRU's consistent superiority over TrXL is revealed in Memory Gym's endless environments. The less opaque line marked with crosses depicts the interquartile mean (IQM) episode length, which refers to the right Y-axis.
The IQM episode length for TrXL is 200 steps, while GRU achieves a longer IQM episode length of 600 steps.

### Investigating Transformer-XL's Surprisingly Low Effectiveness

The unexpectedly low performance of TrXL prompts several hypotheses that are explored in the next subsections. Endless Mortar Mayhem exhibits the largest performance gap, and therefore we utilize this environment for further investigations.

#### 4.3.1 Inadequate Network Capacity

Our TrXL baseline comprises 2.8 million trainable parameters, while GRU consists of 4.05 million, prompting the question of whether TrXL's model architecture lacks the necessary capacity. To investigate this, we conducted experiments varying the number of blocks (2, 3, 4), the embedding dimension size (256, 384, 512), and the memory window length (256, 384, 512). None of these adjustments closed the performance gap to GRU. Scaling up TrXL also increases its demand for GPU memory. When scaling up multiple architecture details, our implementation and hyperparameters may cause the training to exceed the available 40GB GPU memory of an NVIDIA A100. Workarounds include reducing the batch size or transferring training data between the GPU and CPU, but these options significantly worsen wall-time efficiency. Furthermore, another indication of untapped capacity lies in the amplification of the learning signal, which we elaborate on in the following section.

Figure 12: Experiments on Endless Mortar Mayhem with varied TrXL configurations. The learning rate schedule (LR) is adjusted to decay from \(2.75e-4\) to \(1.0e-4\) over 160 million steps, compared to the previous \(1.0e-5\). We also incorporate observation reconstruction (Obs. Rec.) and observe the most significant improvement upon introducing a ground truth estimation head (GT) as a sanity check. Another test augments TrXL's query with relative positional encoding (QPos). The final, but inferior, experiment combines the optimized learning rate, augmented query, and ground truth estimation.

#### 4.3.2 Weak Learning Signal

Figure 12 depicts several additional experiments aimed at amplifying the learning signal. We made a naive adjustment to the learning rate schedule, utilized observation reconstruction, and introduced a ground truth estimation head. This ground truth estimation head is added to the actor-critic model architecture and predicts the target position to move to. Labels are provided by the environment, and the estimates are optimized using the mean-squared error loss. While ground truth information is typically unavailable and considered inappropriate for this benchmark, it serves as a useful sanity check. It helps determine whether an additional learning signal is advantageous in this context. When training incorporates ground truth estimation, there is a noticeable improvement in the agent's policy. Previously, the IQM of completed commands stood at 18; with ground truth estimation, it reaches 50. This outcome implies that the model's capacity might not be the primary constraint. However, using the ground truth estimation head can lead to instability in training, with the agent's performance fluctuating between 20 and 50 completed commands after about 300 million steps. During the initial 160 million training steps, the learning rate linearly decays from 2.75e-4 to 1.0e-5. By setting the final decayed learning rate to 1.0e-4, the IQM performance hits 24 commands.
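Before turning to the remaining learning-signal experiments, the following minimal sketch illustrates how such a ground-truth estimation head could look on top of the shared memory encoder output; the layer sizes and names are hypothetical and only meant to show the mean-squared-error regression of the target position.

```python
import torch
import torch.nn as nn

class GroundTruthHead(nn.Module):
    """Auxiliary head regressing the current target position from the memory encoder output."""

    def __init__(self, latent_dim: int = 384, hidden_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),   # e.g. normalized (x, y) of the commanded target tile
        )

    def forward(self, memory_output: torch.Tensor) -> torch.Tensor:
        return self.net(memory_output)


# Hypothetical usage: labels are provided by the environment, and the MSE term is added
# to the PPO loss purely as a sanity check, not as a regular benchmark setting.
head = GroundTruthHead()
memory_output = torch.randn(16, 384)    # batch of memory encoder outputs
labels = torch.rand(16, 2)              # ground-truth target positions from the environment
gt_loss = nn.functional.mse_loss(head(memory_output), labels)
```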
Lower performances are observed when adding the observation reconstruction head to either just TrXL or TrXL with the optimized learning rate schedule. In summary, augmenting the learning signal -- either by adjusting the learning rate or by incorporating ground truth estimation -- leads to more performant policies. #### 4.3.3 Lack of Temporal Information in the Initial Query The next hypothesis pertains to the initial query of the first TrXL block. This query is constructed solely based on the features extracted from the observation encoders at the current time step, encompassing only minimal temporal information. In contrast, the query of the second block draws from the aggregated outcome of the memory window, thus capturing more substantial temporal information. However, we believe that the initial query could be further enriched with additional information. In Figure 12, one experiment augments the query with relative positional encoding, providing direct access to the current time step. Despite this enhancement, the agent's performance only moderately improves, reaching an IQM of 19 completed commands. We also explored embedding the original query with the previous output of the last TrXL block. However, this simple approach did not yield positive results. All of the aforementioned measures by far do not reach the level of the GRU agent. Even if combined, a low performance of 33 executed commands is achieved, which is also relatively unstable. #### 4.3.4 Harmful Off-Policy Data in the Episodic Memory At last it can be discussed whether the combination of PPO and TrXL's episodic memory pose a threat. PPO is an on-policy algorithm that consumes the training data for a few epochs. Once the second training epoch on the same batch commences, the batch is already considered off-policy. However, PPO's loss function can mitigate this issue and hence allows for several epochs. But this mitigation is likely limited and we wonder whether this issue is enlarged by the episodic memory. In our current training setup, an agent collects 512 steps of data. If an episode exceeds 512 steps and is still running, the earlier steps from that episode become outdated over several epochs. In future work, it needs to be investigated if a stale episodic memory, which is based on off-policy data, hinders training progress. Frequently refreshing the episodic memory is expected to be expensive in terms of wall-time. ## 5 Related Work We begin by describing different approaches aimed at improving agents' capabilities through memory mechanisms, and highlight the challenge of accessing related baselines. Subsequently, we present an overview of existing memory benchmarks, comparing them conceptually to Memory Gym. ### Memory-based Approaches The work of Mnih et al. (2015) on mastering Atari games (Bellemare et al., 2013) using Deep Q-Networks (DQN) marked a significant milestone in the advancement of deep reinforcement learning algorithms. To capture temporal information, such as velocities and accelerations, a frame stacking approach was employed. More advanced memory-based approaches were introduced, which we will briefly detail subsequently. **Deep Recurrent Q-Networks** (DRQN) (Hausknecht and Stone, 2015) combine DQN with an LSTM. The replay buffer stores sequences, while initializing the recurrent hidden state to zero during training. The effectiveness of this approach is demonstrated on Atari games that involve frame flickering. 
**Recurrent Replay Distributed DQN** (R2D2) (Kapturowski et al., 2019) extends DRQN with a distributed prioritized experience replay buffer, storing overlapping sequences of fixed length. A burn-in period is utilized to address staleness in recurrent hidden states by propagating a portion of the sequence for a more effective starting state. R2D2's performance is evaluated on Atari games (Bellemare et al., 2013) and DMLab-30 (Beattie et al., 2016). The **Memory Recall Agent** (MRA) (Fortunato et al., 2019), built on IMPALA (Espeholt et al., 2018), employs a two-fold memory mechanism: a working memory based on an LSTM and a slot-based episodic memory containing summaries of past experiences. By utilizing a query-key approach, information is read from and written to the episodic memory. MRA is evaluated on the Memory Task Suite contributed by the same publication. **Gated Transformer-XL** (GTrXL) (Parisotto et al., 2020) adds the concept of Transformer-XL (TrXL) to enable an agent, trained with V-MPO (Song et al., 2020), to leverage memory. By incorporating an identity map reordering and a gating mechanism inspired by GRU, TrXL is further improved. GTrXL's performance is demonstrated on DMLab-30 (Beattie et al., 2016), Numpad, and Memory Maze. The **Hierarchical Chunk Attention Mechanism** (HCAM) (Lampinen et al., 2021) builds upon TrXL by partitioning the agent's memory into chunks and selecting the most relevant chunks based on their summaries to attend to their stored information in detail. HCAM is trained using IMPALA (Espeholt et al., 2018) and evaluated on multiple environments including Dancing the Ballet, Object Permanence, Rapid Word Learning, Visual Match, Paired Associative Inference, and One-Shot StreetLearn Navigation. History comprEssion via Language Models (HELM) (Paischer et al., 2022) utilizes pre-trained language models to compress a history of observations into a single context vector (memory). It employs a frozen TrXL encoder, pre-trained on the WikiText 103 dataset (Merity et al., 2017). HELM is evaluated on the Procgen (Cobbe et al., 2020) and Minigrid (Chevalier-Boisvert et al., 2018) environments. ### Lack of Accessible Memory Baselines The accessibility of the approaches to memory described above poses a common issue. While HELM is fully open source and successfully integrated into our training framework, the implementations of DRQN, R2D2, MRA, and GTrXL have not been made publicly available by their original authors. HCAM is partially open source regarding its model architecture. The lack of open source implementations makes reproducing their results more challenging, as implementation details play a crucial role. This issue is particularly pronounced for GTrXL, as to the best of our knowledge, no one has yet been able to reproduce its underlying DRL algorithm, V-MPO. Within the community, RLlib (Liang et al., 2018), DI-engine (DI-engine Contributors, 2021), and Brain Agent (Lee et al., 2022) advertise to feature GTrXL. However, these frameworks are large and complex, supporting multiple DRL algorithms, model architectures, and environments, which hampers verification, debugging, gaining insights, and making modifications. We tested RLlib with Memory Gym, but obtained negative returns even though the environments do not signal negative rewards. DI-Engine's GTrXL implementation is based on R2D2, and our experiments revealed poor sample throughput by a magnitude of 10 compared to our GRU baseline. 
When debugging Brain Agent's implementation, we noticed an unexpected discrepancy in the query length between data sampling and optimization. While we do not claim that these frameworks are dysfunctional, we present our limited experience in exploring them. We believe that our contributed open source baselines are easy to follow, which facilitates reproducibility and further research in the field. ### Related Reinforcement Learning Memory Benchmarks Previous studies have explored the use of memory-based agents in a variety of environments. Some of them are originally fully observable but are turned into partially observable Markov Decision Processes (POMDP) by adding noise or masking out information from the agent's observation space. For instance, this was done for the Arcade Learning Environment (Bellemare et al., 2013) by using flickering frames (Hausknecht and Stone, 2015) and common control tasks by removing the velocity from the agent's observation (Heess et al., 2015; Meng et al., 2021; Shang et al., 2021). Control tasks also touch on the context of Meta-Reinforcement Learning, where memory mechanisms are prominent (Wang et al., 2021; Melo, 2022; Ni et al., 2022). The same applies to Multi-Agent Reinforcement Learning (Berner et al., 2019; Baker et al., 2020; Vinyals et al., 2019). As we solely focus on benchmarking the agent's memory and its ability to generalize, we do not compare Memory Gym to environments of more complex contexts such as DM Alchemy (Wang et al., 2021), Crafter (Hafner, 2021), or Obstacle Tower (Juliani et al., 2019). These might need additional components to the agent's architecture and its training paradigm, for instance to approach a notable exploration challenge. When skimming the approaches described in section 5.1, it appears that there is a lack of consensus regarding the choice of environments for evaluation as they assess their approaches on different benchmarks, thereby hindering comparisons between them. Reasons for this observation are either missing accessibility, slow simulation speeds, or weak memory challenges (Pleines et al., 2023). Subsequently we give a coarse overview of recently used benchmarks chronologically. **Deepmind Lab 30**(Beattie et al., 2016) presents a collection of 30 procedurally generated first-person 3D environments. Starting from a general motor-control navigation task and visual observations, each environment poses a different goal such as collecting fruit, playing laser tag or traversing a maze. Parisotto et al. (2020) categorize these environments into memory and reactive tasks. **Minigrid**(Chevalier-Boisvert et al., 2018) includes a 2D grid memory task inspired by the T-Maze (Wierstra et al., 2007). In this environment, the agent is required to memorize an initial goal cue, traverse a long alley, and correctly choose the exit once the end is reached. **Miniworld**(Chevalier-Boisvert, 2018) encompasses a set of first-person 3D environments based on the tasks introduced by Minigrid. **VizDoom**(Wydmuch et al., 2018) features first-person shooter 3D environments that revolve around motor-control navigation tasks, with the added challenge of computer-controlled enemies that may confront the agent. The **Memory Task Suite**(Fortunato et al., 2019) includes diverse memory-based environments across four categories: PsychLab (Leibo et al., 2018) for image-based tasks (e.g. 
detect change), Spot the Difference for cue memorization, goal navigation tasks inspired by the Morris water maze (D'Hooge and De Deyn, 2001), and transitive object ordering tasks. **Procgen**(Cobbe et al., 2020) consists of 16 2D environments that offer procedurally generated and fully observable levels. These environments encompass a combination of puzzle-solving, platforming, and action-based games. 6 of these environments have been modified to become partially observable by reducing and centering the agent's visual observation. **Numpad**(Parisotto et al., 2020) requires the agent to accurately press a series of keys in a specific sequence of fixed length. The sequence itself is not provided to the agent, resulting in the agent employing a trial-and-error approach while memorizing the underlying sequence. **Memory Maze**(Parisotto et al., 2020) shares similarities with the Morris Water maze (D'Hooge and De Deyn, 2001), as the agent must locate an apple, then undergoes random repositioning, requiring it to find the apple's location again and again. **Dancing the Ballet**(Ballet) (Lampinen et al., 2021) presents a sequence of up to 8 dances visually, with the agent remaining stationary. After all the dances are displayed, the agent is required to identify a specific one to successfully complete the task. **PopGym**(Morad et al., 2023) has been published at ICLR 2023 like our original work contributing Memory Gym (Pleines et al., 2023). It is a benchmark consisting of 15 environments that are tagged as diagnostic, control, noisy, game, or navigation. These environments are designed to quickly converge by providing vector observations instead of visual ones. ### Comparing Memory Gym to Related Benchmarks Memory Gym's environments offer a unique feature that sets them apart from related environments: an endless design tailored to rigorously test memory capabilities. It is crucial to differentiate between "endless" in this context and "open-ended" environments, such as Minecraft (Fan et al., 2022). While open-ended environments aim for agents to autonomously acquire a broad range of diverse and general skills without human oversight, Memory Gym's "endless" design remains anchored to a specific task structure. The finite environments Mortar Mayhem and Mystery Path exhibit both similarities and unique characteristics compared to related environments. Several environments, such as Ballet, PopGym's Autoencode, Spot the Difference, and Minigrid Memory, require the agent to initially memorize at least one cue to solve the task. While Ballet, Spot the Difference, and Minigrid Memory involve mapping clues to a single solution, Mortar Mayhem and PopGym's Autoencode challenge the agent to solve a sequential puzzle. In Mortar Mayhem, delays between showing cues and executing commands can be dynamically sampled, similar to the concept in Ballet. Mystery Path is most similar to the navigation concepts of the environments that are inspired by the Morris water maze. Once fallen off, the agent must return to the point before where it lastly fell off and then has to take different actions to make progress on the hidden path. Concerning Searing Spotlights we were unable to identify an already existing environment that shares conceptual details. ## 6 Conclusion In this study, we advanced Memory Gym's environments to novel endless tasks, tailored to benchmark memory-based Deep Reinforcement Learning algorithms. These tasks feature a mounting challenge similar to the car game "I packed my bag". 
As the agent's policy refines, the task dynamically expands, serving as an automatic curriculum. This innovative framework enables a thorough evaluation of memory-based agents, emphasizing their overall effectiveness beyond mere interaction efficiency with the environment. To foster a more inclusive research landscape, we contributed an open-source PPO baseline powered by Transformer-XL (TrXL). This baseline employs an attention mechanism applied to an episodic memory with a sliding window. When benchmarked against Memory Gym, the TrXL baseline competes head-to-head with another prominent baseline rooted in Gated Recurrent Unit (GRU). Notably, in finite environments, TrXL demonstrates superior effectiveness in Mortar Mayhem and enhanced sample efficiency in Mystery Path. However, in the Searing Spotlights environment, GRU emerges as the more efficient architecture. Perhaps our most unexpected revelation is the comeback of GRU in the endless environments. GRU surpassed TrXL by large margins, while also being computationally more efficient. Further probing into the Endless Mortar Mayhem environment revealed only marginal improvements for TrXL upon enriching the learning signal. This prompts future investigations into potential reasons behind TrXL's limitations, such as the possibility of episodic memory staleness or an initial query lacking temporal awareness. As we move forward, it will be compelling to discern the performance thresholds of other memory mechanisms and DRL algorithms. In addition to examining more attention-based and recurrent architectures, it may be worth exploring structured state space sequence models (Gu et al., 2022), given their recent debut in DRL (Lu et al., 2023). Yet, the research community still grapples with the absence of a standardized benchmark. While we do not consider Memory Gym to satisfy this role, we believe it to be an invaluable complement to benchmarking memory-based agents. Addressing this issue necessitates a thorough deliberation on the essential criteria that a complete memory benchmark should encompass. For instance, should an ideal memory benchmark incorporate environments with varied modalities in their observation and action spaces? ## Acknowledgements Our sincere appreciation goes to Vincent-Pierre Berges for his enlightening discussions that greatly enriched our work. We also extend our gratitude to Andrew Lampinen for his invaluable insights on utilizing Transformer-XL as episodic memory. Special thanks are due to Gunter Rudolph for his unwavering support. This project would not have been possible without the generous computing time provided by the Paderborn Center for Parallel Computing (PC2) and the support from the Linux-HPC-Cluster (LiDO3) at TU Dortmund. We are deeply thankful for all of their contributions. ## Appendix A Environment Parameters ## Appendix B Hyperparameters Table 3 presents the hyperparameters utilized in our final experiments unless otherwise specified. The learning rate and entropy coefficient undergo linear decay from their initial to final values. This decay occurs exclusively during the first 10,000 PPO updates (equivalent to 163,840,000 steps). As training progresses beyond this point, the final hyperparameter serves as a lower threshold. Regarding the sequence length and memory window length, they are determined based on the longest possible episode in the finite environments. In the case of endless environments, the so far best sequence length is fixed at 512 for GRU and 256 for TrXL. 
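The decay schedule described above can be summarized by a small helper like the following sketch, which decays a hyperparameter linearly over the first 10,000 PPO updates and afterwards clamps it at its final value; the function name and usage are illustrative, not part of the training framework.

```python
def linearly_decayed(initial: float, final: float, update: int, decay_updates: int = 10_000) -> float:
    """Linear decay from `initial` to `final` over the first `decay_updates` PPO updates."""
    fraction = min(update / decay_updates, 1.0)    # training progress, clipped to [0, 1]
    return initial + fraction * (final - initial)  # stays at `final` once fully decayed

# Example with the learning rate from Table 3 (2.75e-4 down to 1.0e-5):
for update in (0, 5_000, 10_000, 20_000):
    print(update, linearly_decayed(2.75e-4, 1.0e-5, update))
```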
Our hyperparameter search was conducted using a customized implementation built upon the optuna tuning framework (Akiba et al., 2019). The objective was to discover \begin{table} \begin{tabular}{|l r|l r|} \hline \multicolumn{2}{|c|}{**Endless Mortar Mayhem**} & \multicolumn{2}{c|}{**Endless Searing Spotlights**} \\ **Parameter** & **Default** & **Parameter** & **Default** \\ \hline Max Episode Length & -1 & Max Episode Length & -1 \\ Agent Scale & 0.25 & Agent Scale & 0.25 \\ Agent Speed & 3 & Agent Speed & 3 \\ No. Available Commands & 9 & Agent Always Visible & False \\ Command Show Duration* & [3] & Agent Health & 10 \\ Command Show Delay* & [1] & Sample Agent Position & True \\ Execution Duration* & [6] & Coin Scale & 0.375 \\ Execution Delay* & [18] & Coin Show Duration & 6 \\ Show Visual Feedback & True & Coin Always Visible & False \\ Reward Command Failure & 0 & Steps per Coin & 160 \\ Reward Command Success & 0.1 & No. Initial Spotlight Spawns & 3 \\ \multicolumn{2}{|c|}{**Endless Mystery Path**} & Spotlight Spawn Interval & 50 \\ **Parameter** & **Default** & Spotlight Radius* & (7.5-13.75) \\ \hline Max Episode Length & -1 & Spotlight Speed* & (0.0025-0.0075) \\ Agent Scale & 0.25 & Spotlight Damage & 1 \\ Agent Speed & 3 & Light Dim Off Duration & 6 \\ Show Origin & False & Light Threshold & 255 \\ Show Past Path & True & Show Visual Feedback & True \\ Show Background & False & Render Background Black & False \\ Show Stamina & False & Hide Checkered Background & False \\ Show Visual Feedback & True & Show Last Action & True \\ Camera Offset Scale & 5 & Show Last Positive Reward & True \\ Stamina Level & 20 & Reward Inside Spotlight & 0 \\ Reward Fall Off & 0 & Reward Outside Spotlights & 0 \\ Reward Path Progress & 0.1 & Reward Death & 0 \\ Reward Step & 0 & Reward Coin & 0.25 \\ \hline \end{tabular} \end{table} Table 2: Default reset parameters of the endless environments. Parameters marked with an asterisk (*) indicate uniform sampling. Values enclosed in square brackets represent discrete choices, while values in parentheses denote a range from which sampling is performed. If the max episode length is equal or less than zero, episodes will not terminate because of reaching the max episode length. a set of hyperparameters that demonstrate strong performance across both our GRU and TrXL baselines and Memory Gym's environments. Considering limited resources and the computational expense of tuning, we restricted the search space to a limited number of \begin{table} \begin{tabular}{|l|r|r|r|r|} \hline **Hyperparameter** & **Final** & **Search Space** & **Old** \\ \hline \hline Training Seeds & 100000 & & & \\ \hline Number of Workers & 32 & & & \\ \hline Worker Steps & 512 & & & \\ \hline Batch Size & 16384 & & & \\ \hline Discount Factor Gamma & 0.995 & & & 0.99 \\ \hline GAE Lamda & 0.95 & & & \\ \hline Optimizer & AdamW & & & \\ \hline Epochs & 3 & 2 & 4 & \\ \hline Number of Mini Batches & 8 & 4 & & \\ \hline Advantage Normalization & No & Batch & Mini Batch & Mini Batch \\ \hline Clip Range Epsilon & 0.1 & 0.2 & 0.3 & 0.2 \\ \hline Value Loss Coefficient & 0.5 & 0.25 & & 0.25 \\ \hline Initial Learning Rate & 2.75e-4 & 2.0e-4 & 3.0e-4 & 3.0e-4 \\ \hline Final Learning Rate & 1.0e-5 & & & 1.0e-4 \\ \hline Initial Entropy Coef. & 1.0e-4 & 1.0e-3 & 1.0e-2 & \\ \hline Final Entropy Coef. & 1.0e-6 & & & 1.0e-5 \\ \hline Reconstruction Loss Coef. 
& 0.1 & 0.5 & 1.0 & & n/a \\ \hline Maximum Gradient Norm & 0.25 & 0.35 & 0.5 & 1.0 & 0.5 \\ \hline \hline \multicolumn{4}{|c|}{**Recurrent Neural Network**} \\ \hline Number of Recurrent Layers & 1 & 2 & & \\ \hline Embedding Layer & Yes & & & No \\ \hline Layer Type & GRU & LSTM & & \\ \hline Residual & False & True & & \\ \hline Sequence Length & 512 or max & & & \\ \hline Hidden State Size & 512 & 256 & 384 & \\ \hline \hline \multicolumn{4}{|c|}{**Transformer-XL**} \\ \hline Number of TrXL Blocks & 3 & 2 & 4 & \\ \hline Block Dimension & 384 & 256 & 512 & \\ \hline Block Weight Initialization & Xavier & Orthogonal & Kaiming & T-Fixup & \\ \hline Positional Encoding & Relative & None & Learned & \\ \hline Number of Attention Heads & 4 & 8 & & \\ \hline Memory Window Length & 256 or max & & & \\ \hline \end{tabular} \end{table} Table 3: Hyperparameters and architectural details used in our final experiments. The “Final” column denotes the selected hyperparameter values, while the “Search Space” column represents additional discrete choices explored during the tuning process. The last column details the old hyperparameters of our previous study (Pleines et al., 2023). For this column alone, empty cells indicate that the values align with those in the “Final” column. T-Fixup is a transformer weight initialization approach by Huang et al. (2020). options. The majority of tuning experiments focused on the finite environments. Therefore, we do not claim to provide optimal hyperparameters. For each environment, we established individual tuning experiments corresponding to the choices presented in Table 3. Each experiment, or trial, involved a baseline set of hyperparameters that was modified by selecting a single parameter choice from the entire search space. Notably, we did not train permutations of all available choices at this stage. To optimize resource utilization, we implemented optuna's percentile pruner, which retained the top 25 percentile of trials. Trials that performed below this percentile were pruned, but only after surpassing 2,000 PPO updates as a warm-up phase. After completing the single experiments, we expanded the hyperparameter search by allowing for permutations sampled using optuna's Tree-structured Parzen Estimator. It is important to note that individual trials were not repeated, but since tuning was conducted across multiple environments, one could consider them as repetitions in a broader sense. The batch size is determined by multiplying the number of environment workers by the number of worker steps performed by each worker to gather training data. To efficiently utilize the resources of a system with 32 CPU cores and 40GB VRAM on an nVidia A100 GPU, we chose a batch size of 16,384 samples. However, when scaling up the model, it is possible to exceed the available GPU memory. In such cases, there are two options: reducing the batch size or outsourcing the data to CPU memory. Both of these workarounds come with increased expenses in terms of wall-time. ## Appendix C GTrXL and LSTM Results on the Finite Environments Figure 13 presents extended results for the finite environments. The performance metrics for TrXL and GRU agents remain consistent with those depicted in Figure 10. However, this figure displays twice as many data points on the x-axis. We include results from agents utilizing Gated Transformer-XL (GTrXL) (Parisotto et al., 2020) and the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997). 
These models are benchmarked without any hyperparameter tuning. For GTrXL, biases of both 0 and 2 are tested, based on the suggestion by Parisotto et al. (2020) that a larger bias might enhance learning. All additional agents in the Searing Spotlights environment employ the observation reconstruction loss (Obs. Rec.), while it is omitted in the Mystery Path environment. For Mortar Mayhem (as seen in Figure 13(a)), the LSTM agent settles at 3 commands, whereas the GTrXL agents using observation reconstruction both achieve an IQM of 2 commands. In the Mystery Path results (Figure C), the LSTM agent achieves full success, albeit requiring more samples compared to the GRU and TrXL agents. The GTrXL agent with a bias of 0 reaches a 94% success rate, while the one with a bias of 2 achieves only 81%, challenging the earlier assertion by Parisotto et al. (2020). The confidence interval for these results indicates potential training instability. In the Searing Spotlights environment, the LSTM agent with observation reconstruction is slightly less efficient than its GRU counterpart. However, GTrXL agents, though successful, lag behind TrXL with observation reconstruction. It is important to note that we avoid making conclusive statements based on these results. This caution stems from the fact that the LSTM and GTrXL baselines are not tuned, and the GTrXL implementation awaits further verification.

Figure 13: GTrXL and LSTM results on the finite environments.

## Appendix D Wall-Time Efficiency

Reliably reporting the wall-time of all conducted experiments is not feasible due to varying circumstances, such as different hardware or varying loads on the file system. Therefore, we only provide a selection of measurements on Endless Mortar Mayhem and Mystery Path. Table 4 presents wall-time efficiency metrics for various agents trained on Endless Mortar Mayhem, utilizing the noctua2 high-performance cluster2. We leverage 32 CPU cores (AMD Milan 7763), an NVIDIA A100 GPU, and 100GB RAM for one training run. With PyTorch version 2, models can be lazily compiled at training onset, reducing TrXL's training time by a significant 30%. Previously, plain TrXL training averaged 110 hours, but this has been reduced to 76 hours. Incorporating observation reconstruction adds roughly 3 hours, and with ground truth estimation, it extends to 81 hours. The GRU agents are the most wall-time efficient, completing in approximately 57 hours. Thus, recurrence emerges as the most effective and efficient architecture in both sample and wall-time metrics for the endless environments.

Footnote 2: [https://pc2.uni-paderborn.de/de/hpc-services/available-systems/noctua2](https://pc2.uni-paderborn.de/de/hpc-services/available-systems/noctua2)

We further take a look at the wall-time of various agents trained on Mystery Path (Figure D). It becomes apparent that TrXL and GRU are quite on par concerning wall-time efficiency, whereas TrXL needs fewer samples, as shown in Section 4.1. At first sight, it seems surprising that GTrXL is faster than TrXL (Table 5), even though GTrXL is more complex. This is due to the more expensive resets of the environment. Better policies have shorter episodes and thus reset more frequently, impairing wall-time. So, as GTrXL's and LSTM's policies are inferior, their wall-time is shorter. These results were obtained from the LiDo3 high-performance cluster3. Each run utilized 32 cores (AMD Epyc 7542), an A100 GPU, and 200GB RAM.

Footnote 3: [https://lido.itmc.tu-dortmund.de/](https://lido.itmc.tu-dortmund.de/)
2309.05603
D-Vine GAM Copula based Quantile Regression with Application to Ensemble Postprocessing
Temporal, spatial or spatio-temporal probabilistic models are frequently used for weather forecasting. The D-vine (drawable vine) copula quantile regression (DVQR) is a powerful tool for this application field, as it can automatically select important predictor variables from a large set and is able to model complex nonlinear relationships among them. However, the current DVQR does not always explicitly and economically allow to account for additional covariate effects, e.g. temporal or spatio-temporal information. Consequently, we propose an extension of the current DVQR, where we parametrize the bivariate copulas in the D-vine copula through Kendall's Tau which can be linked to additional covariates. The parametrization of the correlation parameter allows generalized additive models (GAMs) and spline smoothing to detect potentially hidden covariate effects. The new method is called GAM-DVQR, and its performance is illustrated in a case study for the postprocessing of 2m surface temperature forecasts. We investigate a constant as well as a time-dependent Kendall's Tau. The GAM-DVQR models are compared to the benchmark methods Ensemble Model Output Statistics (EMOS), its gradient-boosted extension (EMOS-GB) and basic DVQR. The results indicate that the GAM-DVQR models are able to identify time-dependent correlations as well as relevant predictor variables and significantly outperform the state-of-the-art methods EMOS and EMOS-GB. Furthermore, the introduced parameterization allows using a static training period for GAM-DVQR, yielding a more sustainable model estimation in comparison to DVQR using a sliding training window. Finally, we give an outlook of further applications and extensions of the GAM-DVQR model. To complement this article, our method is accompanied by an R-package called gamvinereg.
David Jobst, Annette Möller, Jürgen Groß
2023-09-11T16:36:02Z
http://arxiv.org/abs/2309.05603v1
# D-Vine GAM Copula based Quantile Regression with Application to Ensemble Postprocessing ###### Abstract Temporal, spatial or spatio-temporal probabilistic models are frequently used for weather forecasting. The D-vine (drawable vine) copula quantile regression (DVQR) is a powerful tool for this application field, as it can automatically select important predictor variables from a large set and is able to model complex nonlinear relationships among them. However, the current DVQR does not always explicitly and economically allow to account for additional covariate effects, e.g. temporal or spatio-temporal information. Consequently, we propose an extension of the current DVQR, where we parametrize the bivariate copulas in the D-vine copula through Kendall's \(\tau\) which can be linked to additional covariates. The parametrization of the correlation parameter allows generalized additive models (GAMs) and spline smoothing to detect potentially hidden covariate effects. The new method is called GAM-DVQR, and its performance is illustrated in a case study for the postprocessing of \(2\,\mathrm{m}\) surface temperature forecasts. We investigate a constant as well as a time-dependent Kendall's \(\tau\). The GAM-DVQR models are compared to the benchmark methods Ensemble Model Output Statistics (EMOS), its gradient-boosted extension (EMOS-GB) and basic DVQR. The results indicate that the GAM-DVQR models are able to identify time-dependent correlations as well as relevant predictor variables and significantly outperform the state-of-the-art methods EMOS and EMOS-GB. Furthermore, the introduced parameterization allows using a static training period for GAM-DVQR, yielding a more sustainable model estimation in comparison to DVQR using a sliding training window. Finally, we give an outlook of further applications and extensions of the GAM-DVQR model. To complement this article, our method is accompanied by an R-package called gamvinereg on GitHub. **Keywords:** conditional copula; vine copula; dependence modeling; quantile regression; covariate effects; ensemble postprocessing; probabilistic forecasting. Introduction Nowadays, weather forecasts are based on numerical weather prediction (NWP) models which suffer from various uncertainties. In practice, ensemble prediction systems (EPS) are commonly used to address these uncertainties. Therefore, the NWP model is run multiple times with different model and/or initial and boundary conditions (Gneiting et al., 2005; Leutbecher and Palmer, 2008). Afterwards a forecast ensemble is generated, which can be seen as a probabilistic forecast allowing to quantify forecast uncertainty (Palmer, 2002). However, the forecast ensemble usually suffers from biases and dispersion errors and thus may benefit from statistical postprocessing using past data to improve calibration and forecast skill. A popular postprocessing model is the so called Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005). This method is used to obtain a full predictive distribution from the ensemble forecasts. Originally, EMOS was developed for Gaussian distributed weather quantities, e.g. temperature or air pressure. Later, machine learning methods such as quantile regression forests (QRF, Taillardat et al., 2016) or gradient boosting EMOS (EMOS-GB, Messner et al., 2017) have been investigated to extend the classical EMOS setting. 
Rasp and Lerch (2018) compared distributional regression networks to QRF as well as EMOS-GB for the postprocessing of \(2\,\mathrm{m}\) surface temperature forecasts and found only minor differences among these methods for longer training periods. Recently, the D-vine copula based quantile regression (DVQR), which was developed by Kraus and Czado (2017) and further extended by Tepegjozova et al. (2022) and Sahin and Czado (2022), was used by Moller et al. (2018) and Demaeyer et al. (2023) for the postprocessing of \(2\,\mathrm{m}\) surface temperature forecasts. Jobst et al. (2023c) used the same method for the postprocessing of \(10\,\mathrm{m}\) surface wind speed forecasts. In all three analyses, DVQR showed comparable or sometimes even better results with respect to its competing methods. Reasons for the superior performance of DVQR are manifold. DVQR is a quantile regression model that overcomes typical issues of quantile regression such as quantile crossings, transformations, collinearity and the integration of interactions of variables (Kraus and Czado, 2017). In addition, DVQR is able to model complex nonlinear relationships between the explanatory variables and response while imposing less restrictive model assumptions. Last but not least, it can theoretically adapt any distribution shape. One drawback in the current DVQR is the fact that it is not straightforward to explicitly include covariate effects, such as temporal effects into the model. This is one reason for estimating DVQR by sliding training windows, where the complete model needs to be re-estimated for each prediction time point. This can become computationally expensive, as the optimal sliding window size is not known in advance and additionally needs to be determined. In the ensemble postprocessing context the sliding window size depends on various factors, such as considered variables, seasons, locations, etc., which should be ideally taken into account. Therefore, Jobst et al. (2023c) compared different types of training periods in the DVQR model estimation, and detected that a reduction in the computational complexity usually comes along with a worse predictive performance. In this work, we exactly tackle this problem and allow for arbitrary covariate effects in the DVQR model, such as temporal, spatial or spatio-temporal ones. To be more precise, the correlation among two variables according to Kendall's \(\tau\) can depend on covariates and is subsequently used to calculate the parameters for the bivariate copulas in the D-vine copula. For this, we combine the work of Vatter and Chavez-Demoulin (2015) and Vatter and Nagler (2018) who introduced parametric bivariate copulas and later vine copulas depending on covariates with the DVQR proposed by Kraus and Czado (2017). As the correlations linked to the copulas are parametrized in terms of generalized additive models (GAMs, Hastie and Tibshirani, 1990; Green and Silverman, 1993) and smoothing splines our method will be called GAM-DVQR. We apply the GAM-DVQR with covariates modeling temporal effect for the postprocessing of \(2\,\mathrm{m}\) surface temperature forecasts at 462 observation stations in Germany. The results show that our suggested method is able to capture temporal covariate effects and can select important predictor variables from a potentially large set. 
Furthermore, the correlation time-dependent GAM-DVQR models show better results in comparison to the GAM-DVQR model assuming constant correlations, and are able to significantly outperform the benchmark methods EMOS and EMOS-GB. Last but not least, due to the use of a static training period for GAM-DVQR, the model needs to be fitted only once which is more efficient than estimating DVQR on a sliding window and therefore makes it attractive for practical and operational use. To the best of the authors knowledge, GAM-DVQR has not been suggested yet and further analyzed in an application. Although the presented application of GAM-DVQR is concerned with the meteorological field, our suggested method is broadly applicable to various areas where any correlation dependent covariate effects are required to be integrated. The rest of the paper is organized as follows: Section 2 introduces the D-vine copula based quantile regression methods including DVQR and GAM-DVQR. In Section 3, the data set for our application is described. A brief overview of the competing ensemble postprocessing methods is given in Section 4. Section 5 provides a short introduction to the commonly used verification measures in the ensemble postprocessing field. In Section 6, we discuss the results of our application. We close with a conclusion and outlook in Section 7. ## 2 D-vine copula based quantile regression methods In this section, we outline the copula method requirements employed by our postprocessing approaches. ### Copulas Multivariate standard distributions, e.g. the multivariate normal are often restricted in their marginal behavior, as they assume that all marginals are of the same type. The application of copulas allows to overcome this problem. A \(p\)-dimensional _copula_\(C\) is a multivariate distribution function on \([0,1]^{p}\). According to Sklar's theorem (Sklar, 1959), for every multivariate distribution function \(F\) of \(p\) continuous variables \(\mathbf{X}:=(X_{1},\ldots,X_{p})\in\mathbb{R}^{p}\) there exists a copula \(C\), such that \[F(x_{1},\ldots,x_{p})=C(F_{1}(x_{1}),\ldots,F_{p}(x_{p})), \tag{2.1}\] where \(F_{j}\) for \(j=1,\ldots,p\) are the marginal distribution functions and \(\mathbf{x}:=(x_{1},\ldots,x_{p})\in\mathbb{R}^{p}\) are the realizations of \(\mathbf{X}\). If all distribution functions are differentiable, the corre sponding \(p\)-dimensional joint density function \(f\) can be expressed by \[f(x_{1},\ldots,x_{p})=c(F_{1}(x_{1}),\ldots,F_{p}(x_{p}))\cdot f_{1}(x_{1}) \cdots f_{p}(x_{p}), \tag{2.2}\] where \(c\) denotes the copula density function of the copula \(C\) and \(f_{j}\) for \(j=1,\ldots,p\) are the marginal density functions of the variables \(X_{1},\ldots,X_{p}\). Therefore, every multivariate distribution function \(F\) can be decomposed into a copula \(C\) modeling the dependence and its univariate marginal distributions allowing to construct a wide range of distributions. ### D-vine copula Multivariate copulas such as e.g. the elliptical and the archimedean copulas are often not adaptable enough, as they usually presume that the variables in all the pairs have homogeneous dependence structures. Bedford and Cooke (2001) and Bedford and Cooke (2002) extended the theory about multivariate copulas by developing the so-called pair-copula construction (PCC), where the joint dependence is build up by only bivariate copulas using conditioning. As a PCC is not unique, Bedford and Cooke (2002) introduced a graphical structure which is called _regular vine_. 
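As a brief aside on the copula factorization in Equation (2.2), the following hedged Python sketch (the paper itself works entirely in R) numerically verifies the decomposition for a bivariate Gaussian example: the joint density equals the Gaussian copula density evaluated at the PIT values times the two marginal densities. The correlation value and evaluation point are arbitrary.

```python
# Numerical check of Equation (2.2) for a bivariate Gaussian toy example.
import numpy as np
from scipy.stats import norm, multivariate_normal

rho = 0.6
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def gaussian_copula_density(u1, u2, rho):
    """Copula density c(u1, u2) of the Gaussian copula with correlation rho."""
    z1, z2 = norm.ppf(u1), norm.ppf(u2)
    num = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]]).pdf([z1, z2])
    return num / (norm.pdf(z1) * norm.pdf(z2))

x1, x2 = 0.3, -1.2
u1, u2 = norm.cdf(x1), norm.cdf(x2)                      # marginal PIT values
lhs = joint.pdf([x1, x2])                                # f(x1, x2)
rhs = gaussian_copula_density(u1, u2, rho) * norm.pdf(x1) * norm.pdf(x2)
print(np.isclose(lhs, rhs))                              # True
```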
A regular vine consists of a set of nested trees, where the edges in one tree become the nodes of the subsequent one. In a regular vine consisting of \(p\) variables, the nodes and edges in the first tree represent the \(p\) variables and unconditional dependence for \(p-1\) variables, respectively. In the subsequent trees the conditional dependence of a pair of variables conditioned on the variables they have in common is modeled. A _regular vine copula_ is obtained by specifying bivariate copulas, so called pair-copulas, on each edge of the trees. A _D-vine_ is special class of a regular vine in which each tree is a path, i.e. all nodes in the graph are connected to at most two others (see Figure 1). Therefore, a _D-vine copula_ is a regular vine copula, where the tree structure is a D-vine. A node in a D-vine copula represents a certain variable, while an edge between a pair of nodes corresponds to the dependence among the variables associated with the respective nodes expressed by a pair-copula. For a short overview about vine copulas see e.g. Czado and Nagler (2022) and for a detailed introduction see e.g. Czado (2019). Figure 1: 4-dimensional D-vine tree structure and corresponding pair-copula densities. ### D-vine copula based quantile regression D-vine copulas can be used in a univariate or multivariate regression context. Our focus will be on the univariate setting, where it is possible to derive a conditional D-vine copula density. For this, the leaf node in the first tree of a D-vine needs to be the response variable. In the following, we denote \(Y\) as response variable with marginal distribution function \(F_{Y}\) and the \(p\) predictor variables by \(X_{1},\ldots,X_{p}\) with marginal distribution functions \(F_{1},\ldots,F_{p}\). The lower case letters of the response and predictor variables represent the respective realizations. For a D-vine copula with node order \((0,1,\ldots,p)\) corresponding to the variable order \((Y,X_{1},\ldots,X_{p})\), \(p\geq 2\), the conditional density of \(Y\) given \(X_{1},\ldots,X_{p}\) can be obtained by \[f_{0|1,\ldots,p}(y|x_{1},\ldots,x_{p})=\prod_{j=2}^{p}c_{0,j;1, \ldots,j-1}(F_{0|1,\ldots,j-1}(y|x_{1},\ldots,x_{j-1}),F_{j|1,\ldots,j-1}(x_{j} |x_{1},\ldots,x_{j-1}))\] \[\cdot c_{0,1}(F_{Y}(y),F_{1}(x_{1}))\cdot f_{Y}(y), \tag{2.3}\] where \(F_{0|1,\ldots,j-1}\) and \(F_{j|1,\ldots,j-1}\) denote the distribution functions of the conditional random variables \(Y|X_{1}=x_{1},\ldots,X_{j-1}=x_{j-1}\) and \(X_{j}=x_{j}|X_{1}=x_{1},\ldots,X_{j-1}=x_{j-1}\), respectively, and can be calculated recursively. Furthermore, \(c_{0,j;1,\ldots,j-1}\) denotes the bivariate copula (pair-copula) density of the bivariate distribution of \((Y,X_{j})\) given \(X_{1}=x_{1},\ldots,X_{j-1}=x_{j-1}\). To allow for easy estimation, we make the simplifying assumption (Stober et al., 2013), i.e. we assume that the pair-copulas of conditional distributions are independent of the values of variables on which they are conditioned. Nevertheless, the pair-copula densities of the higher tree levels depend on the conditioning values by its arguments. Based on the conditional D-vine copula distribution, Kraus and Czado (2017) introduced a quantile regression (DVQR). 
The conditional quantile function for a D-vine copula with \(p\) predictor variables \(X_{1},\ldots,X_{p}\) at quantile level \(\alpha\in(0,1)\) can be calculated as \[F_{0|1,\ldots,p}^{-1}(\alpha|x_{1},\ldots,x_{p}):=F_{Y}^{-1}\left(C_{0|1, \ldots,p}^{-1}(\alpha|F_{1}(x_{1}),\ldots,F_{p}(x_{p}))\right), \tag{2.4}\] where \(F_{Y}^{-1}\) is the inverse marginal distribution of the response variable \(Y\) and \(C_{0|1,\ldots,p}^{-1}\) denotes the conditional D-vine copula quantile function. Estimation Procedure.The estimation of the D-vine copula quantile regression follows a two-step procedure called "inference for margins" (Joe and Xu, 1996). Firstly, the marginal distributions of all variables are estimated. This is necessary for transforming the raw data of each variable to uniformly distributed data in \([0,1]\) by the probability integral transformation (PIT). Consequently, we obtain the realizations \(v=F_{Y}(y)\) and \(u_{i}=F_{i}(x_{i})\) of the random variables \(V=F_{Y}(Y)\) and \(U_{i}=F_{i}(X_{i})\) for \(i=1,\ldots,p\). The marginal distributions can be estimated parametrically or non-parametrically. Secondly, the conditional copula function \(C_{0|1,\ldots,p}\) can be obtained in a closed form by a composition of so-called \(h\)-functions associated with the pair-copulas (Joe, 1996). This two-step approach is very often preferred over estimating the marginal distributions and copulas simultaneously, as the joint estimation may be harder to implement, can be very time consuming or is sometimes simply infeasible. Order of Variables.The only remaining question is in which order the PIT transformed variables \(V,U_{1},\ldots,U_{p}\) should appear in the D-vine. By construction, the transformed response variable \(V\) needs to be the leaf node in the first tree of the D-vine (node 0 in Figure 1). As the order of the predictors \(U_{1},\ldots,U_{p}\) is usually not obviously predetermined, one can select the most informative predictors and order them according to their predictive strength. To do so, Kraus and Czado (2017) propose a sequential forward selection approach to select the most important predictors by improving an (AIC/BIC)-corrected conditional log-likelihood. For the demonstration of the DVQR algorithm, we assume, that \(k-1\) predictors have already been selected and the current D-vine has the ordering \((V,U_{l_{1}},\ldots,U_{l_{k-1}})\), where \(\{l_{1},\ldots,l_{k-1}\}\subset\{1,\ldots,p\}\). Using each remaining predictor \(U_{j}\) with \(j\in\{1,\ldots,p\}\setminus\{l_{1},\ldots,l_{k-1}\}\), the current D-vine is estimated for \((V,U_{l_{1}},\ldots,U_{l_{k-1}},U_{j})\). In each step of the DVQR method, the optimal pair-copulas according to the minimum AIC/BIC-conditional log-likelihood or maximum conditional log-likelihood are chosen. Having estimated the necessary pair-copulas to extend the current D-vine for each of the \(U_{j}\), we update the model by adding the variable which yields to the lowest AIC/BIC- or highest conditional log-likelihood of the model. ### D-vine GAM copula based quantile regression In the original formulation of DVQR, Kraus and Czado (2017) assume parametric pair-copulas, where the copula parameters are constant. A natural extension of the parametric pair-copulas includes additional effects of covariates, e.g. in time and/or space into the copula parameters by modelling them as functions of such covariates. 
This statistical tool is called _conditional copula_, which was already discussed by Patton (2002) using a fully parametric approach for the copula parameter estimation. Later, Gijbels et al. (2011) proposed a non-parametric version and Acar et al. (2010) a semi-parametric conditional copula model. Vatter and Chavez-Demoulin (2015) were the first to suggest an alternative approach based on generalized additive models (GAMs, Hastie and Tibshirani, 1990; Green and Silverman, 1993) and spline smoothing for the copula parameter. While the previously mentioned approaches are restricted to bivariate copulas only, Vatter and Nagler (2018) extend the idea of conditional copulas to higher dimensions. More precisely, they used the GAM based bivariate copula framework as suggested by Vatter and Chavez-Demoulin (2015) for the construction of vine copulas. In our proposed D-vine GAM copula based quantile regression (GAM-DVQR), we use the GAM based bivariate copulas as suggested by Vatter and Chavez-Demoulin (2015) for the D-vine copula quantile regression to include effects of covariates. The sequential forward variable selection algorithm of DVQR is used in the same way for GAM-DVQR. The difference between these two methods lies mainly in the estimation of the bivariate copulas, which will be briefly illustrated in the following. For a vector of \(q\) covariates \(\mathbf{Z}:=(Z_{1},\ldots,Z_{q})\in\mathbb{R}^{q}\) with realizations \(\mathbf{z}:=(z_{1},\ldots,z_{q})\in\mathbb{R}^{q}\), a parametric form is assumed for the conditional copula densities \(c(\cdot,\ \cdot;\eta(\mathbf{z}))\), where the copula parameter \(\eta(\mathbf{z})\) depends on the covariates \(\mathbf{z}\). For frequently used copula families, bijective transformations between the copula parameter \(\eta(\mathbf{z})\) and Kendall's \(\tau(\mathbf{z})\) can be derived (see Table 1). Note that additional exogenous variables which do not belong to the set of predictor variables in the D-vine, such as temporal, spatial or other variables, can be chosen as covariates. Nonetheless, it is theoretically possible to incorporate predictor variables from the D-vine as covariates as well. Due to the one-to-one mappings between the copula parameters and Kendall's \(\tau\), we re-parameterize all conditional copulas in the D-vine copula as functions of the corresponding Kendall's \(\tau\) and write \(c(\cdot,\ \cdot;\tau(\mathbf{z}))\). The modeling of Kendall's \(\tau\) instead of the actual copula parameter might seem unnecessary at first sight. However, two useful properties arise from this approach (Vatter and Chavez-Demoulin, 2015). Firstly, a dependence measure such as Kendall's \(\tau\) is easier to interpret than a copula parameter. Secondly, this approach makes it simpler to compare different types of copula families, as there exists a natural relationship between the copula parameter and Kendall's \(\tau\). If the actual copula parameter is modeled, it often becomes necessary to specify a link function to ensure the predefined range of the copula parameter. Depending on the copula family, different link functions need to be selected, resulting in possible misspecifications of the link function (see, e.g., Li and Duan, 1989) and in a comparison which is not standardized.
Therefore, Vatter and Chavez-Demoulin (2015) suggest to model the change in the correlation with respect to the covariates as \[g^{-1}(\tau(\mathbf{u},\mathbf{v};\mathbf{\alpha},\mathbf{s})):=\mathbf{\alpha}\mathbf{u}^{T}+\sum_{k= 1}^{K}s_{k}(\mathbf{v}_{k}), \tag{2.5}\] where \(g^{-1}(\tau):=2\cdot\operatorname{artanh}(\tau)\) is the inverse link function between the GAM and Kendall's \(\tau\) to ensure the parameter range, \(\mathbf{u}\in\mathbb{R}^{J}\) and \(\mathbf{v}\in\mathbb{R}^{K}\) are subsets of the covariate \(\mathbf{z}\) or products thereof to consider interactions, and \(\mathbf{\alpha}\in\mathbb{R}^{J}\) is a vector of parameters for the linear component. The mappings \(s_{k}:\mathbb{S}_{k}\to\mathbb{R}\) are smooth functions supported on closed intervals \(\mathbb{S}_{k}\subset\mathbb{R}\) for \(k=1,\ldots,K\), i.e. \(s_{k}\in C^{2}(\mathbb{S}_{k})\) admits a finite-dimensional basis-quadratic penalty representation such as natural cubic splines, cyclic cubic splines or tensor product splines. Moreover, \(\mathbf{s}:=(\mathbf{s}_{1},\ldots,\mathbf{s}_{K})\in\mathbb{R}^{M}\) denotes the parameter vector for the \(K\) smooth functions \(s_{k}\) with a total of \(M:=\sum_{k=1}^{K}m_{k}\) parameters. Models as in Equation (2.5) are called partially linear models (Hardle et al., 2000), as they consist of a linear component \(\mathbf{\alpha}\mathbf{u}^{T}\) and a non-linear component \(\sum_{k=1}^{K}s_{k}(\mathbf{v}_{k})\). The maximum penalized log-likelihood estimates of the parameters \(\mathbf{\alpha},\mathbf{s}\) are obtained by iteratively reweighted generalized ridge regression. For a more technical description of the copula parameter estimation as well as for extensive simulation studies for the suggested \begin{table} \begin{tabular}{c c c} \hline \hline Copulas & \(\eta(\mathbf{z})\) & \(\tau(\mathbf{z})\) \\ \hline Gaussian, Student-\(t\) & \(\sin\left(\frac{\pi}{2}\tau(\mathbf{z})\right)\) & \(\frac{2}{\pi}\arcsin\left(\eta(\mathbf{z})\right)\) \\ Gumbel, Gumbel-180\({}^{\circ}\) & \(\frac{1}{1-\tau(\mathbf{z})}\) & \(1-\frac{1}{\eta(\mathbf{z})}\) \\ Gumbel-90\({}^{\circ}\), Gumbel-270\({}^{\circ}\) & \(\frac{1}{1+\tau(\mathbf{z})}\) & \(-1-\frac{1}{\eta(\mathbf{z})}\) \\ Clayton, Clayton-180\({}^{\circ}\) & \(\frac{2\tau(\mathbf{z})}{1-\tau(\mathbf{z})}\) & \(\frac{\eta(\mathbf{z})}{\eta(\mathbf{z})+2}\) \\ Clayton-90\({}^{\circ}\), Clayton-270\({}^{\circ}\) & \(\frac{2\tau(\mathbf{z})}{1+\tau(\mathbf{z})}\) & \(\frac{\eta(\mathbf{z})}{2-\eta(\mathbf{z})}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Mappings between the copula parameter and Kendall’s \(\tau\). The degrees represent the amount of rotation of the respective copula, e.g. a Gumbel-180\({}^{\circ}\) copula is a Gumbel copula rotated by 180\({}^{\circ}\) counterclockwise. conditional copulas and vine copulas, see Vatter and Chavez-Demoulin (2015) and Vatter and Nagler (2018). Moreover, it should be mentioned that Kendall's \(\tau\) will be modeled by only one single model specified in Equation (2.5) for all bivariate copulas in the D-vine copula. As stated above, we will use the term "predictor variables" for the variables in the D-vine and the term "covariates" for the variables included in Equation (2.5) for modeling the conditional bivariate copula. To complement this work, Jobst et al. (2023b) developed an R-package called gamvinereg for the GAM-DVQR, which is based on the R-package gamCopula by Vatter and Nagler (2018) and on the code of the R-package vinereg by Nagler (2022). 
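To make the parameterization above concrete, the following hedged Python sketch (illustrative only; the actual implementation is the R-package gamvinereg building on gamCopula) evaluates a seasonal linear predictor for Kendall's \(\tau\) as in Equation (2.5), inverts the link \(g^{-1}(\tau)=2\cdot\operatorname{artanh}(\tau)\) via \(\tau=\tanh(\eta/2)\), and maps \(\tau\) to copula parameters using selected rows of Table 1. The coefficient values are made up.

```python
# Illustrative sketch of the Kendall's tau parameterization; not the gamvinereg code.
import numpy as np

def seasonal_tau(doy, a0=0.8, a1=0.2, a2=-0.3):
    """Kendall's tau on day-of-year `doy` for illustrative coefficients a0, a1, a2."""
    u_sin = np.sin(2 * np.pi * doy / 365.25)
    u_cos = np.cos(2 * np.pi * doy / 365.25)
    eta_gam = a0 + a1 * u_sin + a2 * u_cos   # linear predictor, equals 2*artanh(tau)
    return np.tanh(eta_gam / 2.0)            # invert the link to obtain tau in (-1, 1)

def copula_parameter(tau, family):
    """Map Kendall's tau to the copula parameter (selected rows of Table 1)."""
    if family == "gaussian":
        return np.sin(np.pi * tau / 2.0)
    if family == "gumbel":                   # valid for tau in (0, 1)
        return 1.0 / (1.0 - tau)
    if family == "clayton":                  # valid for tau in (0, 1)
        return 2.0 * tau / (1.0 - tau)
    raise ValueError(f"unknown family: {family}")

doy = np.arange(1, 367)
tau = seasonal_tau(doy)
print(np.round(tau.min(), 3), np.round(tau.max(), 3))      # seasonal range of tau
print(np.round(copula_parameter(tau[180], "gumbel"), 3))   # Gumbel parameter at mid-year
```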
## 3 Data

To illustrate the capabilities of GAM-DVQR, we present a case study for the postprocessing of 2 m surface temperature forecasts initialized at 1200 UTC for a forecast lead time of 24 h. The 2 m surface temperature observations are provided by DWD Climate Data Center (CDC) (2018) with at most 5% missing observations at each synoptic observation station between January 2, 2015 and December 31, 2020, which leads to 462 observation stations (see Figure 2). The ensemble forecasts are provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) (2021), consisting of \(m=50\) perturbed ensemble members. These forecasts are initialized at 1200 UTC on a grid of \(0.25^{\circ}\times 0.25^{\circ}\). The gridded data is bilinearly interpolated to the observation stations.

Figure 2: Observation stations for 2 m surface temperature.

In addition to the target variable (2 m surface temperature) we add several auxiliary ensemble predictor variables, for an overview see Table 2. We calculate the 10 m surface wind speed by \(\mathrm{ws10m}:=\sqrt{\mathrm{u10m}^{2}+\mathrm{v10m}^{2}}\) and the 2 m surface relative humidity is approximated by \(\mathrm{r2m}:=\exp\left(\frac{17.625\cdot\mathrm{d2m}}{243.04+\mathrm{d2m}}\right)\big/\exp\left(\frac{17.625\cdot\mathrm{t2m}}{243.04+\mathrm{t2m}}\right)\) according to Alduchov and Eskridge (1996). Furthermore, we calculate the sine- and cosine-transformed day of the year (doy), abbreviated as \(\sin\) and \(\cos\), via \(\sin\left(\frac{2\pi\cdot\text{doy}}{365.25}\right)\) and \(\cos\left(\frac{2\pi\cdot\text{doy}}{365.25}\right)\) for \(\text{doy}\in\{1,2,\ldots,366\}\), respectively. In the following, ensemble forecasts are summarized by their mean and standard deviation, where for a weather variable \(v\), \[\overline{X}_{v}:=\frac{1}{m}\sum_{i=1}^{m}X_{v,i}\quad\text{and}\quad S_{v}:=\sqrt{\frac{1}{m-1}\sum_{i=1}^{m}(\overline{X}_{v}-X_{v,i})^{2}}, \tag{3.1}\] will denote the ensemble mean and standard deviation, respectively, from an \(m\)-member ensemble \(X_{v,1},\ldots,X_{v,m}\). The further variables \(w\in\{\text{doy},\sin,\cos\}\) will be denoted by \(X_{w}\). The corresponding realizations are indicated by lowercase letters. The response variable 2 m surface temperature is represented by \(Y_{\text{t2m}}\) with realization \(y_{\text{t2m}}\). In the following, we will use the term _reduced variable set_ for \(\overline{X}_{\text{t2m}},S_{\text{t2m}}\) including the response variable \(Y_{\text{t2m}}\). Furthermore, we designate the set of the means and standard deviations of the first 10 weather variables in Table 2, including the response variable, as _extended variable set_.

Finally, we use the period 2015-2019 as training set and the complete year 2020 as independent validation set. The implementation of some of the methods requires the tuning of specific hyperparameters and the selection of marginal distributions, for which the final specifications can be found on GitHub. To avoid overfitting in the model selection process, we further split the training set into the period 2015-2018, which is used for pure training of the models, while the year 2019 is used for testing in the model selection process. After finalizing the choice of the most suitable model variant based on the testing period, the entire training period from 2015-2019 is used to fit that model for the final evaluation on the validation set.
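The derived predictors above can be computed directly from the raw ensemble fields. The following Python sketch is purely illustrative (the study's data handling is done in R) and uses randomly generated toy arrays in place of the ECMWF forecasts; it shows the wind speed, relative humidity, day-of-year and ensemble mean/standard deviation transformations, the latter corresponding to Equation (3.1).

```python
# Illustrative computation of the derived predictors; toy data instead of ECMWF fields.
import numpy as np

rng = np.random.default_rng(1)
n_days, m = 5, 50                                    # forecast days x ensemble members
u10m = rng.normal(2.0, 1.0, (n_days, m))
v10m = rng.normal(-1.0, 1.0, (n_days, m))
t2m = rng.normal(12.0, 2.0, (n_days, m))             # degrees Celsius
d2m = t2m - rng.uniform(0.5, 3.0, (n_days, m))       # dewpoint below temperature

ws10m = np.sqrt(u10m**2 + v10m**2)                   # 10 m surface wind speed

def magnus(t):
    """Saturation vapour pressure term of the Magnus approximation."""
    return np.exp(17.625 * t / (243.04 + t))

r2m = magnus(d2m) / magnus(t2m)                      # relative humidity in (0, 1]

doy = np.arange(1, n_days + 1)
sin_doy = np.sin(2 * np.pi * doy / 365.25)
cos_doy = np.cos(2 * np.pi * doy / 365.25)

# Equation (3.1): ensemble mean and standard deviation per forecast day
t2m_mean = t2m.mean(axis=1)
t2m_sd = t2m.std(axis=1, ddof=1)
print(ws10m.shape, bool(r2m.max() <= 1.0), t2m_mean.shape, t2m_sd.shape)
```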
All computations on the data set (Jobst et al., 2023a) will be carried out using the statistical software R running version 3.6.3 by R Core Team (2020). \begin{table} \begin{tabular}{c c} \hline \hline Variable & Description \\ \hline t2m & \(2\,\text{m}\) surface temperature \\ d2m & \(2\,\text{m}\) surface dewpoint temperature \\ pr & surface pressure \\ sr & surface solar radiation \\ u10m & \(10\,\text{m}\) surface \(u\)-wind speed component \\ v10m & \(10\,\text{m}\) surface \(v\)-wind speed component \\ r2m & \(2\,\text{m}\) surface relative humidity \\ tcc & total cloud cover \\ ws10m & \(10\,\text{m}\) surface wind speed \\ wg10m & \(10\,\text{m}\) surface wind gust \\ doy & day of the year \\ sin & sine-transformed day of the year \\ cos & cosine-transformed day of the year \\ \hline \hline \end{tabular} \end{table} Table 2: Potential predictor variables. ## 4 Ensemble postprocessing methods In this section, we briefly describe the compared ensemble postprocessing techniques. All methods will be applied locally, i.e. for each station a separate model is estimated. ### Ensemble model output statistics Ensemble model output statistics (EMOS), also known as non-homogeneous regression, is a parametric postprocessing method proposed by Gneiting et al. (2005). This approach is based on the idea of distributional regression, assuming a predictive distribution family \(\mathcal{D}(\mu,\sigma,\nu,\varphi)\), where the parameters \(\mu\), \(\sigma\), \(\nu\), and \(\varphi\) indicate location, scale, shape, and degrees of freedom, respectively. There are link functions to connect parameters with corresponding predictors \(\mathbf{x}_{\mu},\mathbf{x}_{\sigma},\mathbf{x}_{\nu},\mathbf{x}_{\varphi}\) via \(\mu:=h_{\mu}(\mathbf{x}_{\mu}),\sigma:=h_{\sigma}(\mathbf{x}_{\sigma}),\nu:=h_{\nu}( \mathbf{x}_{\nu}),\varphi:=h_{\varphi}(\mathbf{x}_{\varphi})\) in order to retain parameter ranges. In this context the predictors are typically ensemble members and summary statistics thereof. The predictive distribution is selected with regard to the type of weather quantity, e.g. a Gaussian normal distribution for the variable \(2\,\mathrm{m}\) surface temperature as suggested by Gneiting et al. (2005). As the logistic and skewed logistic distribution as well as the skew normal distribution show only minor differences with respect to the performance of the Gaussian normal distribution (Gebetsberger et al., 2019; Taillardat, 2021) we assume the latter in the following, i.e. \(Y_{\mathrm{t2m}}\sim\mathcal{N}(\mu,\sigma)\) with location parameter \(\mu\in\mathbb{R}\), scale parameter \(\sigma>0\) as well as inverse link functions \(h_{\mu}^{-1}:=\mathrm{id}\), \(h_{\sigma}^{-1}:=\mathrm{log}\). In its basic formulation EMOS uses the reduced variable set with predictors \(\overline{X}_{\mathrm{t2m}}\) and \(S_{\mathrm{t2m}}\) and connects the Gaussian (transformed) distribution parameters to the predictors via the linear relationships \[\mu:=a_{0}+a_{1}\overline{x}_{\mathrm{t2m}},\quad\log(\sigma):=b_{0}+b_{1}\log (s_{\mathrm{t2m}}). \tag{4.1}\] The coefficients \(a_{0},a_{1},b_{0},b_{1}\in\mathbb{R}\) are estimated e.g. by a sliding training window. However, to take the strong seasonal periodic patterns of \(Y_{\mathrm{t2m}}\) (e.g. 
higher values in the summer period, lower values in the winter period) into account and to facilitate a fair comparison with the other methods, we add the sine- and cosine-transformed day of the year \(X_{\mathrm{sin}},X_{\mathrm{cos}}\) as further predictors to the equation of both parameters. Therefore, we assume the conditional predictive distribution \[f(y_{\mathrm{t2m}}|x_{\mathrm{t2m},1},\ldots,x_{\mathrm{t2m},m},x_{\mathrm{sin}},x_{\mathrm{cos}})\sim\mathcal{N}(\mu,\sigma), \tag{4.2}\] \[\mu:=a_{0}+a_{1}x_{\mathrm{sin}}+a_{2}x_{\mathrm{cos}}+a_{3}\overline{x}_{\mathrm{t2m}},\quad\log(\sigma):=b_{0}+b_{1}x_{\mathrm{sin}}+b_{2}x_{\mathrm{cos}}+b_{3}s_{\mathrm{t2m}}, \tag{4.3}\] with coefficients \(a_{i},b_{i}\in\mathbb{R}\) for \(i=0,\ldots,3\), as in, e.g., Hemri et al. (2014) and Dabernig et al. (2017). Consequently, we incorporate the seasonality by seasonally varying intercepts as introduced in Equation (4.3), in comparison to the basic formulation in Equation (4.1). The coefficients of the parameters in Equation (4.3) are estimated by optimizing the sum of a proper verification score over the training period between 2015 and 2018 with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. We investigate the optimization with respect to two different scores, namely the CRPS (continuous ranked probability score) and the LogS (logarithmic score). Then, we choose the best performing specification (CRPS or LogS) according to the mean CRPS over all testing days in 2019 and all stations. The implementation is based on the R-package crch by Messner et al. (2016).

### Gradient boosted ensemble model output statistics

Messner et al. (2017) suggested an extension of EMOS which allows selecting the most relevant predictor variables \(X_{1},\ldots,X_{p}\) for the model by a gradient-boosting approach (EMOS-GB). This approach is especially useful if the number of potential predictor variables is large. Similar to the EMOS model, the conditional normal distribution \(f(y|x_{1},\ldots,x_{p})\sim\mathcal{N}(\mu,\sigma)\) with \[\mu:=a_{0}+a_{1}x_{1}+\ldots+a_{p}x_{p},\quad a_{0},a_{1},\ldots,a_{p}\in\mathbb{R}, \tag{4.4}\] \[\log(\sigma):=b_{0}+b_{1}x_{1}+\ldots+b_{p}x_{p},\quad b_{0},b_{1},\ldots,b_{p}\in\mathbb{R}, \tag{4.5}\] is assumed. We include the sine- and cosine-transformed day of the year \(X_{\sin},X_{\cos}\) into the extended variable set to account for seasonality. This results in \(p=22\) predictor variables (see Table 2) for each distribution parameter. The boosting procedure initializes all coefficients for \(\mu\) and \(\sigma\) at zero and then iteratively updates only the coefficient corresponding to the predictor that improves predictive performance the most. Using the gradient of the loss function, the predictor with the highest correlation to the gradient is selected and the corresponding coefficient is updated by taking a step in the direction of steepest descent of the gradient. This procedure is carried out until a stopping criterion is reached to avoid overfitting. The implementation is based on the R-package crch by Messner et al. (2016). We tune the gradient-boosted EMOS (EMOS-GB) model by grid search with respect to the loss function (LogS or CRPS), maximum number of boosting iterations (100, 500, 1000, 2000), stopping criterion (AIC, BIC), and step size (0.05, 0.1, 0.2) on the training data set between 2015 and 2018. Then, we choose the best performing model version according to the mean CRPS over all testing days in 2019 and all stations for the validation period.
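For intuition, the following is a hedged Python sketch of the seasonal EMOS baseline in Equations (4.2)-(4.3): the eight coefficients are fitted by minimizing the closed-form CRPS of a Gaussian predictive distribution with BFGS on toy data. The paper's actual implementation relies on the R-package crch, so all names and data below are illustrative assumptions.

```python
# Hedged sketch of seasonal EMOS (Eqs. 4.2-4.3) fitted by mean-CRPS minimization.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of a N(mu, sigma) forecast for observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def mean_crps(theta, sin_doy, cos_doy, ens_mean, ens_sd, y):
    a0, a1, a2, a3, b0, b1, b2, b3 = theta
    mu = a0 + a1 * sin_doy + a2 * cos_doy + a3 * ens_mean
    sigma = np.exp(b0 + b1 * sin_doy + b2 * cos_doy + b3 * ens_sd)  # log link keeps sigma > 0
    return crps_gaussian(y, mu, sigma).mean()

# toy training data in place of the 2015-2018 station records
rng = np.random.default_rng(0)
n = 400
doy = rng.integers(1, 366, n)
sin_doy, cos_doy = np.sin(2 * np.pi * doy / 365.25), np.cos(2 * np.pi * doy / 365.25)
ens_mean = 10 + 8 * cos_doy + rng.normal(0, 1, n)
ens_sd = np.abs(rng.normal(1.2, 0.3, n))
y = ens_mean + rng.normal(0, 1.5, n)

fit = minimize(mean_crps, x0=np.zeros(8),
               args=(sin_doy, cos_doy, ens_mean, ens_sd, y), method="BFGS")
print(fit.success, round(float(fit.fun), 3))
```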
### D-vine copula based quantile regression We apply the D-vine copula based quantile regression (DVQR) as proposed by Kraus and Czado (2017) and further described in Section 2.4, where we use the reduced variable set by minimizing the BIC-corrected conditional log-likelihood. To take account of the seasonality of our variables, DVQR is first estimated on a refined rolling training period (Moller et al., 2018) with window size \(n\in\{10,15,20,\ldots,100\}\), for which we use the days \(\{t-n,\ldots,t-2,t-1\}\) in the year where the forecast day \(t\) lives and the days \(\{t-n,\ldots,t-2,t-1,t,t+1,t+2,\ldots,t+n\}\) in the previous \(k=4\) years. The optimal window size \(n\) for the validation period is determined based on the minimal mean CRPS over all testing days in 2019 and all stations. Afterwards, DVQR is estimated using \(k=5\) years with the estimated optimal length \(n\). Due to very high computational costs of approximately 7 hours to estimate the models in 2019 for one station and our limited resources of one CPU with 40 cores and 62.5 GB RAM to determine the optimal window length \(n\) we can not investigate DVQR on the extended variable set. This high computational burden underlines again the need of an alternative approach such as GAM-DVQR in a higher-dimensional context. Marginal distributions.The marginal distributions are fitted via kernel density estimates using the Gaussian kernel. Bivariate copulas.We allow all bivariate copulas in the R-package vinereg, that is, elliptical copulas (Gaussian and Student-\(t\)), archimedean copulas as well as rotated versions thereof (Clayton, Gumbel, Frank, Joe, BB1, BB6, BB7 and BB8), and the nonparametric Independence and Transformation Kernel copula (TLL). Elliptical copulas have an elliptical shape in the contour plot. In Figure 3 for example we can detect a Gaussian copula between \(S_{\mathrm{r2m}}\) and \(S_{\mathrm{wg10m}}\) and a Student-\(t\) copula for the variable pair \(S_{\mathrm{ws10m}}\) and \(S_{\mathrm{wg10m}}\). The Gaussian copula has no tail dependence, while the Student-\(t\) copula only captures symmetric tail dependence. Any departures from elliptical shapes may indicate to include non-Gaussian dependencies. Therefore, archimedean copulas are provided which exhibit a pear or bone shape in the contour plot allowing to detect lower and/or upper tail dependence (except of the Frank copula). In Figure 3 we see a contour shape indicating a Frank copula between \(\overline{X}_{\mathrm{t2m}}\) and \(\overline{X}_{\mathrm{sr}}\), a Gumbel copula between \(\overline{X}_{\mathrm{r2m}}\) and \(\overline{X}_{\mathrm{tcc}}\) and a Clayton copula between \(\overline{X}_{\mathrm{ws10m}}\) and \(S_{\mathrm{wg10m}}\). The nonparametric Independence copula has a circular shape in the contour plot (see e.g. between \(\overline{X}_{\mathrm{r2m}}\) and \(S_{\mathrm{ws10m}}\) in Figure 3) and the Transformation Kernel copula can approximate any dependence shape (see e.g. between \(\overline{X}_{\mathrm{u10m}}\) and \(\overline{X}_{\mathrm{ws10m}}\) in Figure 3). Consequently, we cover lots of possible dependence patterns by this copula set. The implementation of the DVQR is based on the R-package vinereg by Nagler (2022). ### D-vine GAM copula based quantile regression For the GAM-DVQR as explained in Section 2.4 we consider two cases: Estimation of the GAM-DVQR on the reduced variable set and on the extended variable set. 
Marginal distributions. In both variable sets, we determine the marginal distributions for each variable using distributional regression via generalized additive models for location (\(\mu\)), scale (\(\sigma\)) and shape (\(\nu\)) (GAMLSS, Rigby and Stasinopoulos, 2005). Therefore, we assume for all considered weather variables a distribution \(\mathcal{D}(\mu,\sigma,\nu,\varphi)\). As all weather variables show a seasonal periodic behavior, we model the distribution parameters, using the notation of Section 4.1, via \[h_{\mu}^{-1}(\mu)=a_{0}+a_{1}x_{\mathrm{sin}}+a_{2}x_{\mathrm{cos}},\qquad h_{\sigma}^{-1}(\sigma)=b_{0}+b_{1}x_{\mathrm{sin}}+b_{2}x_{\mathrm{cos}}, \tag{4.6}\] \[h_{\nu}^{-1}(\nu)=c_{0},\qquad h_{\varphi}^{-1}(\varphi)=d_{0}, \tag{4.7}\] with real-valued coefficients using the sine- and cosine-transformed day of the year \(X_{\mathrm{sin}},X_{\mathrm{cos}}\) as linear covariates. To keep the marginal models simple and to have comparable settings, the parameters \(\nu\) and \(\varphi\) are assumed to be constant. For each variable we allow a set of potential distribution families, from which the best is chosen with respect to the mean BIC over all stations in the whole training period. Although gamvinereg allows selecting the best performing marginal distribution for each station, we decided against this procedure for a fairer comparison of the methods and a more standardized verification. Besides the distribution families actually tested, censored or truncated versions, for example, could additionally be analyzed. However, initial tests showed only small differences to the considered distribution families. Additionally, Kim et al. (2007) outlined in a simulation study that misspecified margins are only problematic for copulas if they are severely misspecified. They studied, e.g., fitting normal margins when the true margins are exponential.

Bivariate copulas. The GAM-copula family set for modeling the dependencies consists of the Gaussian, Student-\(t\), double Clayton type I-IV and double Gumbel type I-IV copula as implemented in the R-package gamCopula by Nagler and Vatter (2020). The double Clayton and Gumbel copula types consist of additionally rotated versions of the Clayton and Gumbel copula, respectively, to cover negative dependence as well. All predictor variables and therefore copulas will be selected by minimizing the BIC-corrected conditional log-likelihood as for DVQR. In both the reduced and the extended variable set, each Kendall's \(\tau\) linked to a pair-copula is modeled by two different approaches: we assume either a constant or a time-dependent correlation in Equation (2.5) and link it to the covariates by one of the following linear models without a non-linear component \[g^{-1}(\tau(\mathbf{u},\mathbf{v};\mathbf{\alpha},\mathbf{s}))=\begin{cases}\alpha_{0},&\text{constant correlation (C)},\\ \alpha_{0}+\alpha_{1}u_{\text{sin}}+\alpha_{2}u_{\text{cos}},&\text{time-dependent correlation (T1)},\end{cases} \tag{4.8}\] where \(\alpha_{0},\alpha_{1},\alpha_{2}\in\mathbb{R}\), and the covariates \(u_{\text{sin}},u_{\text{cos}}\) denote the sine- and cosine-transformed day of the year.

Figure 3: Empirical normalized copula contour plots (lower triangle), PIT histograms (diagonal) and scatterplots including Kendall’s \(\tau\) correlation (upper triangle) for station Munich in the training data.

The need for a time-adaptive correlation between the predictor variables is illustrated in Figure 4, where the empirical Kendall's \(\tau\) correlation as well as its predictions
clearly change over the day of the year. This aspect will be further outlined in Section 6.

Figure 4: Empirical Kendall’s \(\tau\) (purple) using the refined rolling training period of window size \(n=25\), Kendall’s \(\tau\) prediction (darkgreen) and 95% confidence band (lightgreen) using the time-dependent correlation model (T1) for station Fürstenzell in the training data.

As the number of predictor variables can become large in the extended variable set, it might no longer be clear how the temporal correlation can be appropriately modelled by a linear component. Thus, we additionally investigate a time-dependent correlation model using a cyclic cubic spline \(s\) depending on the covariate day of the year (doy) via \[g^{-1}(\tau(\mathbf{u},\mathbf{v};\mathbf{\alpha},\mathbf{s}))=s(v_{\text{doy}}),\quad\text{time-dependent correlation (T2)}. \tag{4.9}\] With this time-dependent non-linear correlation model we can take account of even more flexible (unknown) changes in Kendall's \(\tau\) than with a linear model. This approach could be beneficial in higher trees of the D-vine as well. Consequently, with GAM-DVQR we introduce an ensemble postprocessing model which is able to select the most important predictor variables from a large set, while taking account of the temporal correlation changes among the predictor variables at the same time. As this model is estimated only once on a static training period, it benefits from a longer consistent training period, while being more efficient than DVQR using a sliding training window. An overview of the selected marginal and correlation parameter specifications can be found in Table 3. The estimation of GAM-DVQR is carried out using the R-package gamvinereg of Jobst et al. (2023b).

\begin{table} \begin{tabular}{c c c c} \hline \hline Model & Marginal specifications & Correlation specifications & Variable set \\ \hline GAM-DVQR-C & GAMLSS & constant correlation (C) & reduced \& extended \\ GAM-DVQR-T1 & GAMLSS & time-dependent correlation (T1) & reduced \& extended \\ GAM-DVQR-T2 & GAMLSS & time-dependent correlation (T2) & extended \\ \hline \hline \end{tabular} \end{table}

Table 3: Overview of marginal and correlation parameter specifications.

## 5 Verification methods

Gneiting et al. (2005) and Gneiting and Raftery (2007) claim that the general aim of probabilistic forecasting is to maximize the sharpness of the predictive distribution subject to calibration. _Calibration_ refers to the statistical consistency between the predictive cumulative distribution function (CDF) \(F\) and the associated observation \(Y\). _Sharpness_ concerns the spread of the predictive distribution \(F\). The more concentrated the forecast, the sharper the forecast, and the sharper the better, subject to calibration. In the following, we present methods to measure calibration and sharpness which will be used in the subsequent application.

Visual assessment of calibration. Dawid (1984) and Gneiting et al. (2007) call a continuous predictive probabilistic forecast \(F\) calibrated if \(F(Y)\) is uniformly distributed. A so-called probability integral transform (PIT) histogram can be used as a visual tool for the evaluation of the calibration, where the PIT values are obtained by evaluating the predictive CDF \(F\) at the validating observations. Any departures from uniformity of the PIT histogram can indicate that the predictive distribution \(F\) is miscalibrated in some way.
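As a minimal, hedged illustration of this PIT-based calibration check (toy Gaussian forecasts in Python instead of the R tooling used in the paper): evaluate each predictive CDF at its verifying observation and inspect the histogram of the resulting values for uniformity.

```python
# Toy PIT histogram check: a calibrated Gaussian forecast yields roughly flat bin counts.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 2000
mu = rng.normal(10.0, 5.0, n)            # predictive means
sigma = np.full(n, 1.5)                  # predictive standard deviations
y = rng.normal(mu, sigma)                # observations consistent with the forecasts

pit = norm.cdf(y, loc=mu, scale=sigma)   # PIT values in [0, 1]
counts, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
print(counts)                            # roughly equal counts indicate calibration
```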
A discrete counterpart of the PIT histogram is the so called verification rank histogram displaying the histogram of ranks of observations with respect to the corresponding ordered ensemble forecasts (Talagrand et al., 1997). In the case of a calibrated \(m\)-member ensemble, the ranks should be uniformly distributed on the set \(\{1,\ldots,m+1\}\). Uncertainty quantification.A further tool for assessing the calibration of a predictive distribution is the coverage of a \((1-\alpha)\cdot 100\%\) central prediction interval, \(\alpha\in(0,1)\), which is the proportion of validating observations between the lower and upper \(\frac{\alpha}{2}\)-quantiles of the predictive distribution (Gneiting and Raftery, 2007). Assuming a calibrated predictive distribution, then \((1-\alpha)\cdot 100\%\) of observations should fall within the range of the central prediction interval. Sharpness of a predictive distribution can be validated using the width of a \((1-\alpha)\cdot 100\%\) central prediction interval (Gneiting and Raftery, 2007). Sharper distributions correspond to narrower prediction intervals. Having a \(m\)-member forecast ensemble, we use a \(\frac{m-1}{m+1}\cdot 100\%\) central prediction interval corresponding to the nominal coverage of the raw forecast ensemble and consequently allowing for a direct comparison of all probabilistic forecasts. The target coverage rate for an \(m=50\) member ensemble is approximately \(96.08\%\). Figure 5: Verification rank histogram of the raw ensemble aggregated over all stations and time points in the validation period. Scoring rules.Proper scoring rules rate calibration and sharpness properties simultaneously and thus play important roles in the comparative evaluation and ranking of competing forecasts (Gneiting et al., 2007). An attractive proper scoring rule in weather forecasting is the _continuous ranked probability score_(CRPS, Matheson and Winkler, 1976), which is defined as \[\mathrm{CRPS}(F,y):=\int\limits_{-\infty}^{\infty}(F(z)-\mathds{1}\{z\geq y\} )^{2}\,\mathrm{d}z, \tag{5.1}\] where \(F\) is the predictive cumulative distribution function, \(y\) is the true/observed value and \(\mathds{1}\) denotes the indicator function. Gneiting et al. (2008) show that, if \(F\) has a finite first moment the CRPS can be approximated by \[\mathrm{CRPS}(F,y)\approx\frac{1}{K}\sum_{k=1}^{K}|z_{k}-y|-\frac{1}{2K^{2}} \sum_{k=1}^{K}\sum_{k^{\prime}=1}^{K}|z_{k}-z_{k^{\prime}}|, \tag{5.2}\] where \(z_{k}:=F^{-1}\left(\frac{k}{K+1}\right)\), \(z_{k^{\prime}}:=F^{-1}\left(\frac{k^{\prime}}{K+1}\right)\) for \(k,k^{\prime}\in\{1,\ldots,K\}\) and \(F^{-1}\) denotes the quantile function of \(F\). The mean CRPS over a set of forecast cases is denoted by \(\overline{\mathrm{CRPS}}\). In practice, a probabilistic forecast is sometimes reduced to a point forecast via a statistical summary function such as the mean or median. In this situation, _consistent scoring functions_ provide useful tools for forecast evaluation and generate proper scoring rules (Gneiting, 2011). For a set of \(n\) forecasts cases we employ \[\mathrm{RMSE}:=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(\mathrm{mean}(F_{i})-y_{i})^ {2}}\quad\text{and}\quad\mathrm{MAE}:=\frac{1}{n}\sum_{i=1}^{n}|\mathrm{ median}(F_{i})-y_{i}|. 
\tag{5.3}\] The relative improvement of a forecast with respect to a given reference forecast in terms of CRPS can be quantified by the _continuous ranked probability skill score_ (CRPSS) via \[\mathrm{CRPSS}:=1-\frac{\overline{\mathrm{CRPS}}}{\overline{\mathrm{CRPS}}_{\mathrm{ref}}}, \tag{5.4}\] where \(\overline{\mathrm{CRPS}}_{\mathrm{ref}}\) denotes the \(\overline{\mathrm{CRPS}}\) of the reference forecast.

Statistical tests to compare predictive performance. To evaluate the statistical significance of the differences in the forecasts between two competing postprocessing models, we make use of the _Diebold-Mariano test_ (Diebold and Mariano, 1995) for the verification score time series of both models separately at each station. Afterwards we use the _Benjamini-Hochberg procedure_ (Benjamini and Hochberg, 1995), as suggested by Wilks (2016), which allows accounting for multiple testing regarding different stations and controlling the overall probability of type I error, for which we choose \(\alpha=0.05\) in the subsequent analysis. For the verification of the methods we use the R-package eppverification by Jobst (2021).

Visual dependence assessment. For visually assessing the dependence of two variables \(Y\) and \(X\), a so-called _empirical normalized bivariate contour plot_ (see, e.g., Figure 3) can be produced with the R-package VineCopula of Nagler et al. (2020). This plot is obtained by approximating the copula density \(c\) and visualizing the contours of the bivariate density function \[d(z_{Y},z_{X}):=c(\Phi(z_{Y}),\Phi(z_{X}))\phi(z_{Y})\phi(z_{X}), \tag{5.5}\] for the \(\Phi^{-1}\)-transformed copula data \(Z_{Y}:=\Phi^{-1}(F_{Y}(Y))\) and \(Z_{X}:=\Phi^{-1}(F_{X}(X))\). For more details concerning these plots, see, e.g., Czado (2019).

## 6 Results

In the following two subsections the results of the considered methods based on the reduced and extended variable set are presented and discussed. Note that we additionally investigated time series models as marginal distributions for the GAM-DVQR. However, the respective results turned out to be worse than the ones presented here, so we do not show them.

### Reduced variable set

In this setting, we compare the raw ensemble, EMOS, DVQR and GAM-DVQR on the reduced variable set. Figure 6 shows the PIT histograms for all postprocessing methods. All methods show improved calibration properties in comparison to the raw ensemble in Figure 5. However, they are all slightly skewed to the right, causing a small overdispersion more or less in the middle of the PIT histograms and underdispersion at both ends. This impression is supported by the values of the coverage score shown in Table 4.

Figure 6: PIT histograms of the considered methods, where the PIT values are aggregated over all stations and time points in the validation period.

Verification scores. When looking at Table 4, we observe that all methods improve upon the raw ensemble with respect to CRPS by around 20%-29%, MAE by around 12%-22% and RMSE by around 13%-22%. While EMOS yields the lowest MAE and width, GAM-DVQR-T1 yields the lowest CRPS and RMSE. Furthermore, using the procedure for testing the significant differences in the performance between two methods (see Section 5), GAM-DVQR-T1 significantly outperforms EMOS at around 8% of all stations with respect to CRPS. A reason for the better performance of GAM-DVQR-T1 over EMOS might be that the GAM-DVQR models use non-Gaussian copulas (see Figure 7), while the basic assumption of EMOS is the Gaussian dependence.
The dependence between \(Y_{\text{t2m}}\) and \(\overline{X}_{\text{t2m}}\) in Figure 7 can be described by a Student-\(t\) copula, while for the relationship between \(\overline{X}_{\text{t2m}}\) and \(S_{\text{t2m}}\) a Gumbel-270\({}^{\circ}\) (e.g. double Gumbel type II) or Clayton-90\({}^{\circ}\) (e.g. double Clayton type I) copula could be estimated. A further reason for improved performance in comparison to DVQR could be traced back to the longer and more consistent training data, which might lead to more stable estimations (Lang et al., 2020) for GAM-DVQR-T1, while DVQR is estimated on a sliding window.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Method & CRPS & MAE & RMSE & Coverage & Width \\ \hline Raw ensemble & 1.017 & 1.255 & 1.730 & 63.071 & 2.922 \\ \hline EMOS & 0.718 & **0.985** & 1.387 & 95.573 & **5.441** \\ DVQR & 0.717 & 0.985 & 1.366 & **96.339** & 5.990 \\ \hline GAM-DVQR-C & 0.719 & 0.989 & 1.358 & 96.499 & 5.994 \\ GAM-DVQR-T1 & **0.713** & 0.985 & **1.350** & 96.594 & 5.889 \\ \hline \hline \end{tabular} \end{table}

Table 4: Verification scores aggregated over all stations and time points in the validation period. Bold values represent the best value for each score.

\begin{table} \begin{tabular}{c|c c c|c c c|c c c} \hline \hline Model & \(\alpha_{1,1,0}\) & \(\alpha_{1,1,1}\) & \(\alpha_{1,1,2}\) & \(\alpha_{1,2,0}\) & \(\alpha_{1,2,1}\) & \(\alpha_{1,2,2}\) & \(\alpha_{2,1,0}\) & \(\alpha_{2,1,1}\) & \(\alpha_{2,1,2}\) \\ \hline GAM-DVQR-T1 & 100 & 12 & 74 & 94 & 16 & 78 & 100 & 29 & 62 \\ \hline \hline \end{tabular} \end{table}

Table 5: Significance of the coefficients for the correlation time-dependent GAM-DVQR models in % after applying the Benjamini-Hochberg procedure to the \(p\)-value for each coefficient \(\alpha_{i,j,k}\) over all stations. \(\alpha_{i,j,k}\) represents the \(k\)-th coefficient for the \(j\)-th bivariate copula in the \(i\)-th tree of the D-vine with respect to Equation (4.8).

Figure 7: Empirical normalized contour plots (lower triangle), PIT histograms (diagonal) and scatterplots including Kendall’s \(\tau\) correlation (upper triangle) for station Fürstenzell in the training data.

CRPS comparisons. In the following, we compare the methods with respect to CRPS in more detail. The time-dependent correlation GAM-DVQR models (T1) perform better than the GAM-DVQR models with constant correlation (C) in terms of CRPS. These results underline the need for a time-dependent correlation model within the GAM-DVQR framework in this application, and also highlight that GAM-DVQR is able to capture the temporally varying empirical correlation. This is further illustrated in Figure 4, where the empirical and predicted Kendall's \(\tau\) for a D-vine GAM copula with order \(Y_{\mathrm{t2m}}-\overline{X}_{\mathrm{t2m}}-S_{\mathrm{t2m}}\) are plotted. Moreover, the percentages of significant coefficients for the correlation Equation (4.8) in Table 5 indicate that the suggested predictors for identifying the changes in the correlation parameter seem appropriate. We further investigated the station-specific performance of our methods with respect to CRPSS over the benchmark method EMOS. Figure 8 shows the method with the highest CRPSS over EMOS, where the CRPSS values are encoded by colours. The DVQR-based methods yield a positive CRPSS at around 68% of all stations (green colour scale), which implies a substantially better performance of these methods over EMOS.
Furthermore, we observe that GAM-DVQR-T1 outperforms the other models most frequently and yields the highest CRPSS at 219 stations, while DVQR is the preferred model at 159 stations.

Figure 8: Highest CRPSS of the considered methods over EMOS in % in the validation period. CRPSS \(>5\%\) are visualized in black for a better representation. Numbers in brackets denote the count.

### Extended variable set

In this section we investigate the results for the raw ensemble, EMOS-GB and GAM-DVQR on the extended variable set. The PIT histograms in Figure 9 indicate that all methods are able to improve the calibration, where the remaining deficiencies are less pronounced than in Section 6.1, but still visible. Moreover, GAM-DVQR-T1 and GAM-DVQR-T2 seem to yield the PIT histograms which are closest to a uniform distribution. This impression is further supported by the nearly perfect coverage score of 96.079% for GAM-DVQR-T1 in Table 6. All in all, it appears that the GAM-DVQR models provide a more pronounced calibration in terms of the PIT histograms as well as better coverage values.

Figure 9: PIT histograms of the considered methods, where the PIT values are aggregated over all stations and time points in the validation period.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Method & CRPS & MAE & RMSE & Coverage & Width \\ \hline Raw ensemble & 1.017 & 1.255 & 1.730 & 63.071 & 2.922 \\ \hline EMOS-GB & 0.706 & 0.979 & 1.336 & 96.756 & 5.670 \\ \hline GAM-DVQR-C & 0.710 & 0.980 & 1.337 & 96.145 & 5.680 \\ GAM-DVQR-T1 & 0.684 & 0.942 & 1.282 & **96.079** & **5.449** \\ GAM-DVQR-T2 & **0.681** & **0.939** & **1.278** & 96.157 & 5.462 \\ \hline \hline \end{tabular} \end{table}

Table 6: Verification scores aggregated over all stations and time points in the validation period. Bold values represent the best value for each score.

Verification scores. All methods are able to improve upon the raw ensemble by around 30%-33% in terms of CRPS, around 22%-25% in terms of MAE and around 23%-26% in terms of RMSE, i.e. they all yield a more pronounced improvement in comparison to the methods using only the reduced variable set in Section 6.1. This result outlines that an appropriate selection of predictor variables can enhance model performance. Furthermore, it should be pointed out that GAM-DVQR-T2 clearly outperforms all other methods with respect to CRPS, MAE and RMSE, followed by GAM-DVQR-T1, which yields the best coverage and width scores in Table 6. We conclude that GAM-DVQR-T1 and GAM-DVQR-T2 perform better than EMOS-GB as well as GAM-DVQR-C, as they can take better account of the temporal variation of the predictor variables and the correlation among them. Based on the results and on Figure 10 for GAM-DVQR-T2 in comparison to GAM-DVQR-T1, we deduce that the spline-based time-dependent correlation model can identify and describe the behavior of the time-varying Kendall's \(\tau\) more accurately in a higher-dimensional variable setting than the linear model for Kendall's \(\tau\). Furthermore, if the correlations among the predictor variables are correctly specified, this choice leads to a more reasonable variable selection for GAM-DVQR, and facilitates more appropriate dependencies as well as interactions between the variables than EMOS-GB.

CRPS comparisons. As in Section 6.1 we investigate the CRPS in more detail. We observe in the boxplots in Figure 11 that GAM-DVQR-T1 and GAM-DVQR-T2 lead to overall lower CRPS values than the other methods. Furthermore, both models have smaller variance in the CRPS values.
The reduced spread of the CRPS values becomes even clearer when looking at the CRPSS of the GAM-DVQR methods over EMOS-GB. While GAM-DVQR-C has less variation in the skill scores over EMOS-GB, which might be traced back to the constant correlation parameter, GAM-DVQR-T1 and GAM-DVQR-T2 show clearly more variance. GAM-DVQR-T2 yields the highest median CRPSS improvement of about 2.7% over EMOS-GB, followed by GAM-DVQR-T1 with about 2.4% and GAM-DVQR-C with about \(-0.5\%\). Figure 11: Left: Boxplots of station-specific mean CRPS values of the considered methods in the validation period. Right: Boxplots of the station-specific CRPSS of the considered methods over EMOS-GB in the validation period. Outliers \(\pm 1.5\cdot\text{IQR}\) are omitted for better visual representation. Figure 10: Kendall’s \(\tau\) predictions and its 95% confidence bands for station Arkona with D-vine GAM copula cutout \(Y_{\text{t2m}}-\overline{X}_{\text{t2m}}-\overline{X}_{\text{tcc}}-\overline{X }_{\text{ws10m}}\) in the training data. The green colour represents the linear correlation model (T1), the light blue colour the spline-based correlation model (T2) and the purple colour stands for the empirical Kendall’s \(\tau\) using the refined rolling training period with window size \(n=25\). The results with respect to CRPSS also show that especially the time-dependent GAM-DVQR models can appropriately capture the time-dependent correlation between the variables. Figure 12 shows the method with the highest CRPSS in colour over EMOS-GB. It should be pointed out that the CRPSS of the considered methods over EMOS-GB seems to depend on the elevation of the stations. The higher the stations are located, i.e. the further we go into the south of Germany, the higher the CRPSS of the GAM-DVQR methods over EMOS-GB becomes. At around \(89\%\) of the stations, the GAM-DVQR methods perform better with respect to CRPSS than EMOS-GB. Moreover, GAM-DVQR-T2 outperforms EMOS-GB at around \(49\%\) of all stations in terms of CRPSS, followed by GAM-DVQR-T1 (\(34\%\)) and GAM-DVQR-C (\(17\%\)). Testing CRPS differences. Finally, we have a look at the statistical significance of the differences in the predictive performance with respect to CRPS between the methods. In Figure 13 the \((i,j)\)-entry in the \(i\)-th row and \(j\)-th column indicates the percentage of tests where the null hypothesis of equal predictive performance of the corresponding one-sided DM test is rejected in favor of the model in the \(i\)-th row when compared to the model in the \(j\)-th column. The remainder of the sum of the \((i,j)\)- and \((j,i)\)-entries to \(100\%\) is the percentage of stations where the score differences are not significant. All ensemble postprocessing models significantly outperform the raw ensemble for more than \(90\%\) of all stations with respect to CRPS. Furthermore, GAM-DVQR-T2 yields significantly lower CRPS values than EMOS-GB for one third of all stations, followed by GAM-DVQR-T1 (\(33.12\%\)) and GAM-DVQR-C (\(1.52\%\)). Figure 12: Highest CRPSS of the considered methods over EMOS-GB in \(\%\) in the validation period. CRPSS \(>12\%\) are visualized in black for a better representation. Numbers in brackets denote the count. It should also be highlighted that 
EMOS-GB (90.04%) performs at around 8% of the stations significantly worse in comparison to the raw ensemble than EMOS (98.70%), while GAM-DVQR-T1/T2 (95.24/95.45%) shows only at 1% of all stations on the extended variable set significantly lower CRPS values in comparison to its version on the reduced variable set (96.32%). We conclude that the GAM-DVQR-T1/T2 models lead to more stable results over the raw ensemble compared to EMOS and its gradient-boosted extension EMOS-GB, regardless of whether the reduced or extended variable set is used. ## 7 Conclusion and outlook D-vine GAM copula quantile regression (GAM-DVQR) is a powerful statistical method which allows to select important predictor variables, to model nonlinear relationships between the considered variables and to simultaneously take account of covariate effects linked to the Kendall's \(\tau\) of a pair-copula. We complement the presentation of this new method with the R-package gamvinereg by Jobst et al. (2023b). In the application for ensemble postprocessing of 2 m surface temperature forecasts, GAM-DVQR is able to capture temporal correlation and to choose predictor variables accordingly. Furthermore, the main reasons for the overall better performance of GAM-DVQR over the other methods can be traced back to the modeling of the temporal behavior of the marginal distributions as well as the correlations among the predictor variables. For the reduced and extended variable set, the correlation time-dependent GAM-DVQR models outperform the constant correlation GAM-DVQR models. This indicates the presence of a non-constant correlation among the variables which needs to be included into the model. Furthermore, the time varying GAM-DVQR yields significant improvements over the benchmark methods EMOS and EMOS-GB. Due to the static training period used for GAM-DVQR in comparison to the conventional day-by-day sliding training window for DVQR, the estimation procedure is more economical, can even result in better fits Figure 13: Percentage of pair-wise Diebold-Mariano (DM) tests for the 2 m surface temperature forecasts indicating statistically significant CRPS differences after applying a Benjamini-Hochberg procedure to account for multiple testing for a nominal level of 0.05 of the corresponding one-sided tests. and makes it appealing for practical and possibly operational use. In future research, we will investigate an extension of the GAM-DVQR method allowing for spatial and spatio-temporal effects for Kendall's \(\tau\). This can be specifically relevant in the field of ensemble postprocessing, and the new approach can be compared with other spatial or spatio-temporal postprocessing models, such as e.g. Markovian EMOS by Moller et al. (2015). It might also be beneficial to test other covariate effects besides of the mentioned ones, and to use GAM-DVQR for the postprocessing of non-Gaussian weather quantities, such as wind speed or precipitation. In terms of the method itself, the extension of GAM-DVQR to very high-dimensional settings is on the top of our agenda. The work of Sahin and Czado (2022) can serve as a starting point and we plan to compare our extension with suitable methods in various fields. Additionally, the method could be further refined to deal with discrete variables, where the work of Panagiotelis et al. (2012) can be considered. Last but not least, the extension of our method to more general vine structures, e.g. C-vine (canonical vine) or R-vine (regular vine) based on Tepegjozova et al. (2022) and Zhu et al. 
(2021), respectively, would allow for different dependence structures. ## Acknowledgements We are grateful to the European Centre for Medium-Range Weather Forecasts (ECMWF) and the German Weather Service (DWD) for providing forecasts and observation data, respectively. Furthermore, the authors acknowledge support of the research by Deutsche Forschungsgemeinschaft (DFG) Grant Number MO 3394/1-1, and by the Hungarian National Research, Development and Innovation Office under Grant Number NN125679. Annette Moller acknowledges support by the Helmholtz Association's pilot project "Uncertainty Quantification". ## Appendix A Hyperparameter specifications ## Appendix B Marginal distributions Distribution \(\mathcal{D}\)-sets: * \(A:=\{\mathcal{N}(\mu,\sigma),\mathcal{SN}(\mu,\sigma,\nu),\mathcal{S}t(\mu,\sigma,\nu,\tau)\}\), * \(B:=\{\text{logit}\mathcal{N}(\mu,\sigma),\text{logit}\mathcal{SN}(\mu,\sigma, \nu),\text{logit}\mathcal{S}t(\mu,\sigma,\nu,\tau),\mathcal{B}(\mu,\sigma,\nu, \tau)\}\), * \(C:=\{\log\mathcal{N}(\mu,\sigma),\log\mathcal{SN}(\mu,\sigma,\nu),\log\mathcal{ S}t(\mu,\sigma,\nu,\tau)\}\), where \(\mathcal{N}\) denotes the Gaussian normal distribution, \(t\) denotes the Student-\(t\) distribution and \(\mathcal{B}\) represents the Beta distribution. Furthermore \(\mathcal{S}\) abbreviates the skewed version of a distribution, e.g. \(\mathcal{SN}\) denotes the skew Gaussian normal distribution and logit as well as log denote the transformation of the response with the logit- or log-function. \begin{table} \begin{tabular}{c c c c} \hline \hline Variable & Tested \(\mathcal{D}\)-set & Selected \(\mathcal{D}\) & Remark \\ \hline \(Y_{\text{t2m}}\) & \(A\) & \(\mathcal{N}(\mu,\sigma)\) & \\ \hline \(\overline{X}_{\text{t2m}}\) & \(A\) & \(\mathcal{N}(\mu,\sigma)\) & \\ \(\overline{X}_{\text{d2m}}\) & \(A\) & \(\mathcal{S}t(\mu,\sigma,\nu,\tau)\) & \\ \(\overline{X}_{\text{pr}}\) & \(A\) & \(\mathcal{N}(\mu,\sigma)\) & \\ \(\overline{X}_{\text{sr}}\) & \(A\) & \(\mathcal{SN}(\mu,\sigma,\nu)\) & \\ \(\overline{X}_{\text{u10m}}\) & \(A\) & \(\mathcal{N}(\mu,\sigma)\) & \\ \(\overline{X}_{\text{v10m}}\) & \(A\) & \(\mathcal{N}(\mu,\sigma)\) & \\ \(\overline{X}_{\text{r2m}}\) & \(B\) & \(\text{logit}\mathcal{S}t(\mu,\sigma,\nu,\tau)\) & \\ \(\overline{X}_{\text{tcc}}\) & \(B\) & \(\mathcal{B}(\mu,\sigma,\nu,\tau)\) & raw data transformation based on \\ \(\overline{X}_{\text{ws10m}}\) & \(C\) & \(\log\mathcal{S}t(\mu,\sigma,\nu,\tau)\) & \\ \(\overline{X}_{\text{wg10m}}\) & \(C\) & \(\log\mathcal{N}(\mu,\sigma)\) & \\ \hline \(S_{\text{t2m}}\) & \(C\) & \(\log\mathcal{N}(\mu,\sigma)\) & \\ \(S_{\text{d2m}}\) & \(C\) & \(\log\mathcal{N}(\mu,\sigma)\) & \\ \(S_{\text{pr}}\) & \(C\) & \(\log\mathcal{S}t(\mu,\sigma,\nu,\tau)\) & \\ \(S_{\text{sr}}\) & \(C\) & \(\log\mathcal{S}t(\mu,\sigma,\nu,\tau)\) & \\ \(S_{\text{u10m}}\) & \(C\) & \(\log\mathcal{N}(\mu,\sigma)\) & \\ \(S_{\text{v10m}}\) & \(C\) & \(\log\mathcal{N}(\mu,\sigma)\) & \\ \(S_{\text{r2m}}\) & \(B\) & \(\text{logit}\mathcal{N}(\mu,\sigma)\) & \\ & & & raw data min-max-transformation \& transformation \& transformation based on \\ \(S_{\text{tcc}}\) & \(B\) & \(\text{logit}\mathcal{S}t(\mu,\sigma,\nu,\tau)\) & \\ & & & (2006) & \\ \(S_{\text{ws10m}}\) & \(C\) & \(\log\mathcal{N}(\mu,\sigma)\) & \\ \(S_{\text{wg10m}}\) & \(C\) & \(\log\mathcal{N}(\mu,\sigma)\) & \\ \hline \hline \end{tabular} \end{table} Table 8: GAMLSS distribution selection.
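The distribution selection reported in Table 8 can be mimicked in a much simplified form: fit a small set of candidate families by maximum likelihood and keep the one with the lowest information criterion. The sketch below does this with scipy families as rough stand-ins for the distribution set \(A\); unlike the GAMLSS fits used in the paper, the parameters here do not depend on covariates such as the day of the year, and the data are synthetic.

```python
# Minimal sketch of a marginal distribution selection step in the spirit of
# Appendix B: fit a few candidate families by maximum likelihood and keep the
# one with the lowest AIC. The GAMLSS models used in the paper additionally let
# the parameters depend on covariates, which is omitted here; the data are
# synthetic placeholders.
import numpy as np
from scipy import stats

CANDIDATES = {                      # rough analogue of distribution set A
    "normal":      stats.norm,
    "skew-normal": stats.skewnorm,
    "student-t":   stats.t,
}

def select_marginal(x):
    """Return the candidate family with the smallest AIC on the sample x."""
    best_name, best_aic, best_params = None, np.inf, None
    for name, dist in CANDIDATES.items():
        params = dist.fit(x)                          # maximum likelihood fit
        loglik = np.sum(dist.logpdf(x, *params))
        aic = 2 * len(params) - 2 * loglik
        if aic < best_aic:
            best_name, best_aic, best_params = name, aic, params
    return best_name, best_aic, best_params

rng = np.random.default_rng(3)
x = stats.skewnorm.rvs(a=4, loc=10, scale=3, size=2000, random_state=rng)
name, aic, _ = select_marginal(x)
print(name, round(aic, 1))          # expected to prefer the skew-normal family
```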
2308.02523
Unique common fixed points of four generalized contractive mappings in ordered partial metric spaces
The existence and uniqueness of the common fixed point for generalized contractive mappings in ordered partial metric spaces is investigated. The existence of nonnegative solutions of implicit nonlinear integral equations is also studied. Some examples demonstrating the validity of our main results are constructed. The presented results extend and unify various comparable results in the existing literature.
Talat Nazir, Sergei Silvestrov
2023-07-31T03:15:17Z
http://arxiv.org/abs/2308.02523v1
Unique common fixed points of four generalized contractive mappings in ordered partial metric spaces ###### Abstract The existence and uniqueness of the common fixed point for generalized contractive mappings in order partial metric spaces is investigated. The existence of nonnegative solution of implicit nonlinear integral equations is also studied. Some examples demonstrating the validity of our main results are constructed. The presented results extend and unify various comparable results in the existing literature. Keywords:Common fixed point, generalized contractive mapping, partially ordered set, partial metric space : 47H09, 47H10, 54C60, 54H25 ## 1 Introduction Fixed point theory is a one of the powerful tool, noteworthy and stimulating themes of nonlinear functional analysis that blends topology, analysis and applied mathematics. In the fixed-point technique, the controllability problem is converted to a fixed-point problem for an applicable nonlinear operator in a function space. An essential part of this approach is to guarantee the solvability of the equations for an invariant subset for this operator. Alber and Guerre-Delabrere [5] introduced the concept of weakly contractive mappings and proved that weakly contractive mapping defined on a Hilbert space is a Picard operator. Later, Rhoades [39] proved that the corresponding result is also valid when Hilbert space is replaced by a complete metric space. Dutta _et al._[18] generalized the weak contractive condition and proved a fixed point theorem for a self map, which in turn generalizes [39, Theorem 1] and the corresponding result in [5]. The study of common fixed points of mappings satisfying certain contractive conditions has been at the center of rigorous research activity. The study of common fixed point theory for sets of several single valued maps, started with the assumption that all of the maps commuted. Sessa [44] generalized the concept of commuting maps and introducing the weakly commuting maps. Then, Jungck generalized this idea, first to compatible mappings [26] and then to weakly compatible mappings [27]. There are examples that show that each of these generalizations of commutativity is a proper extension of the previous definition. On the other hand, Beg and Abbas [12] obtained a common fixed point theorem extending weak contractive condition for two maps. In this direction, Zhang and Song [47] introduced the concept of a generalized \(\varphi\)-weak contraction condition and obtained a common fixed point for two maps. Doric [17] proved a common fixed point theorem for generalized \((\psi,\varphi)\)-weak contractions. Abbas and Doric [1] obtained a common fixed point theorem for four maps that satisfy contractive condition which is more general than that given in [47]. In 2004, Ran and Reurings [37] investigated the existence of fixed points in partially ordered metric spaces, and then by Nieto and Lopez [33]. Further results in this direction under weak contractive condition were proved (see for example [2; 8; 16; 21; 23; 32; 35; 36; 38]). In 2011, Abbas et al. [2] presented some common fixed point theorems for generalized \((\psi,\varphi)\)-weakly contractive mappings in partially ordered metric spaces. Further, Radenovic and Kadelburg [36] proved a result for generalized weak contractive mappings in partially ordered metric spaces. Partial metric space is a generalized metric space in which each object does not necessarily have to have a zero distance from itself [28]. 
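The definitions above are straightforward to experiment with numerically. The following sketch, which is not part of the paper, checks axioms 1)–4) of Definition 1 for the partial metric \(p(x,y)=\max\{x,y\}\) on a random sample of nonnegative reals, and confirms that the induced metric \(p^{S}(x,y)=2p(x,y)-p(x,x)-p(y,y)\) reduces to the usual distance \(|x-y|\) in this case.

```python
# Sketch (not from the paper): numerically check the partial metric axioms of
# Definition 1 for p(x, y) = max{x, y} on nonnegative reals, and verify that
# the induced metric p^S(x, y) = 2 p(x, y) - p(x, x) - p(y, y) equals |x - y|.
import itertools
import random

def p(x, y):
    return max(x, y)

def p_s(x, y):
    return 2 * p(x, y) - p(x, x) - p(y, y)

random.seed(0)
pts = [random.uniform(0, 10) for _ in range(40)]

for x, y, z in itertools.product(pts, repeat=3):
    assert p(x, x) <= p(x, y)                                  # axiom 2): small self-distances
    assert abs(p(x, y) - p(y, x)) < 1e-12                      # axiom 3): symmetry
    assert p(x, z) <= p(x, y) + p(y, z) - p(y, y) + 1e-12      # axiom 4): modified triangle inequality
    # axiom 1): p(x,x) = p(y,y) = p(x,y) forces x = y for this partial metric
    if abs(p(x, x) - p(x, y)) < 1e-12 and abs(p(y, y) - p(x, y)) < 1e-12:
        assert abs(x - y) < 1e-12

for x, y in itertools.product(pts, repeat=2):
    assert abs(p_s(x, y) - abs(x - y)) < 1e-12                 # induced metric is |x - y|

print("all axioms verified on the sample")
```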
A motivation behind introducing the concept of a partial metric was to obtain appropriate mathematical models in the theory of computation [11; 22; 29; 43]. Altun and Erduran [6], Oltra and Valero [34] and Valero [46] established some further generalizations of the results in [28], and Romaguera [40] proved a Caristi type fixed point theorem on partial metric spaces. Karapinar [24] proved some fixed point theorems for weak \(\varphi\)-contraction on partial metric spaces in partially ordered sets. Further results in the direction of partial metric space were proved in [3; 4; 9; 14; 15; 41; 45]. It was shown that, in some cases, the results of fixed point in partial metric spaces can be obtained directly from their induced metric counterparts [20; 25; 42]. However, some conclusions important for the application of partial metrics in information sciences cannot be obtained in this way. For example, if \(u\) is a fixed point of map \(f\), then, by using the method from [20], we cannot conclude that \(p(fu,fu)=0=p(u,u)\). For further details, we refer the reader to [30; 31]. Our aim is to study the unique common fixed point results for four mappings satisfying generalized contractive conditions in the setup of ordered partial metric spaces. In the sequel, \(\mathbb{R}\), \(\mathbb{R}_{\geq 0}\) and \(\mathbb{Z}_{\geq 0}\) will denote the set of all real numbers, the set of all nonnegative real numbers and the set of all non-negative integers, respectively. The usual order on \(\mathbb{R}\) (respectively, on \(\mathbb{R}_{\geq 0}\)) will be indistinctly denoted by \(\leq\) or \(\geq\). Consistent with [6] and [28], the following definitions and results will be needed in the sequel. **Definition 1**.: Let \(X\) be a nonempty set. A function \(p:X\times X\to\mathbb{R}_{\geq 0}\) is said to be a partial metric on \(X\) if for any \(x,y,z\in X,\) the following conditions hold true: 1) \(p(x,x)=p(y,y)=p(x,y)\) if and only if \(x=y;\) 2) \(p(x,x)\leq p(x,y);\) 3) \(p(x,y)=p(y,x);\) 4) \(p(x,z)\leq p(x,y)+p(y,z)-p(y,y).\) The pair \((X,p)\) is then called a partial metric space. If \(p(x,y)=0\), then (1) and (2) imply that \(x=y\). But the converse does not always hold. A trivial example of a partial metric space is the pair \((\mathbb{R}_{\geq 0},p)\), where the partial metric \(p:\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) is defined as \(p(x,y)=\max\{x,y\}.\) _Example 1_.: ([28]) If \(X=\{[a,b]:a,b\in\mathbb{R},a\leq b\},\) then \[p([a,b],[c,d])=\max\{b,d\}-\min\{a,c\}\] defines a partial metric \(p\) on \(X\). For some more examples of partial metric spaces, we refer to [3; 4; 6; 15; 40; 43]. Each partial metric \(p\) on \(X\) generates a \(T_{0}\) topology \(\tau_{p}\) on \(X\) which has as a base the family open \(p\)-balls \(\{B_{p}(x,\varepsilon):x\in X,\varepsilon>0\},\) where \(B_{p}(x,\varepsilon)=\{y\in X:p(x,y)<p(x,x)+\varepsilon\},\) for all \(x\in X\) and \(\varepsilon>0\). Observe (see [28, p. 187]) that a sequence \(\{x_{n}\}\) in a partial metric space \(X\) converges to a point \(x\in X\), with respect to \(\tau_{p},\) if and only if \(p(x,x)=\lim\limits_{n\to\infty}p(x,x_{n}).\) If \(p\) is a partial metric on \(X\), then the function \(p^{S}:X\times X\to\mathbb{R}_{\geq 0}\) given by \(p^{S}(x,y)=2p(x,y)-p(x,x)-p(y,y)\) defines a metric on \(X\). 
Furthermore, a sequence \(\{x_{n}\}\) converges in \((X,p^{S})\) to a point \(x\in X\) if and only if \[\lim\limits_{n,m\to\infty}p(x_{n},x_{m})=\lim\limits_{n\to\infty}p(x_{n},x)=p( x,x).\] **Definition 2** ([28]).: A sequence \(\{x_{n}\}\) in a partial metric space \(X\) is said to be a Cauchy sequence if \(\lim\limits_{n,m\to\infty}p(x_{n},x_{m})\) exists and is finite. A partial metric space \(X\) is said to be complete if every Cauchy sequence \(\{x_{n}\}\) in \(X\) converges with respect to \(\tau_{p}\) to a point \(x\in X\) such that \(\lim\limits_{n\to\infty}p(x,x_{n})=p(x,x).\) In this case, we say that the partial metric \(p\) is complete. **Lemma 1** ([6; 28]): _Let \(X\) be a partial metric space._ (i) _A sequence_ \(\{x_{n}\}\) _in_ \(X\) _is a Cauchy sequence in_ \(X\) _if and only if it is a Cauchy sequence in metric space_ \((X,p^{S})\)_._ (ii) _A partial metric space_ \((X,p)\) _is complete if and only if the metric space_ \((X,p^{S})\) _is complete._ Two self maps \(f\) and \(g\) on \(X\) are said to be compatible if, whenever \(\{x_{n}\}\) in \(X\) such that \(\lim\limits_{n\to\infty}p^{S}(fx_{n},x)=0\) and \(\lim\limits_{n\to\infty}p^{S}(gx_{n},x)=0\) for some \(x\in X\), then \(\lim\limits_{n\to\infty}p^{S}(fgx_{n},gfx_{n})=0\). If \(fx=gx\) for some \(x\) in \(X\), then \(x\) is called a coincidence point of \(f\) and \(g\). Furthermore, if the mappings are commuting on their coincidence point, then such mappings are called weakly compatible, [27]. Definition 3: Let \(X\) be a nonempty set. Then \((X,\preceq,p)\) is called an ordered partial metric space if (i) \(p\) is a partial metric on \(X\), (ii) \(\preceq\) is a partial order on \(X\). We say that the elements \(x,y\in X\) are called comparable if either \(x\preceq y\) or \(y\preceq x\) holds. Definition 4 ([2]): Let \((X,\preceq)\) be a partially ordered set and \(f\) and \(g\) be two self-maps of \(X\). Mapping \(f\) is said to be dominated if \(fx\preceq x\) for each \(x\) in \(X\). A mapping \(g\) is said to be dominating if \(x\preceq gx\) for each \(x\) in \(X\). Example 2: Let \(X=[0,1]\) be endowed with usual ordering. Let \(f,g:X\to X\) defined by \(fx=\dfrac{x}{k}\) and \(gx=kx\) for any positive real number \(k\geq 1\). It is easy to see that \(f\) is dominated and \(g\) is a dominating map. Zhang and Song [47] obtained the following common fixed point result in metric spaces for a generalized \(\varphi\)-weak contraction. Theorem 1 ([47]): _Let \((X,d)\) be a complete metric space, and let \(f,g:X\to X\) be two self-mappings such that for all \(x,y\in X\), \(d(fx,gy)\leq M(x,y)-\varphi(M(x,y))\) holds, where \(\varphi:[0,\infty)\to[0,\infty)\) is a lower semi-continuous function with \(\varphi(t)>0\) for \(t\in(0,\infty)\), \(\varphi(0)=0\), and_ \[M(x,y)=\max\{d(x,y),d(fx,x),d(gy,y),\dfrac{d(x,gy)+d(fx,y)}{2}\}.\] _Then there exists a unique point \(u\in X\) such that \(u=fu=gu\)._ Aydi in [9] obtained the following result in partial metric spaces endowed with a partial order. Theorem 2: _Let \((X,\leq_{X})\) be a partially ordered set and let \(p\) be a partial metric on \(X\) such that \((X,p)\) is complete. Let \(f:X\to X\) be a nondecreasing map with respect to \(\leq_{X}\). 
Suppose that the following conditions hold for \(y\leq_{X}x\):_ (i) _the inequality holds_ \[p(fx,fy)\leq p(x,y)-\varphi(p(x,y)),\] _where \(\varphi:[0,\infty)\to[0,\infty)\) is a continuous and non-decreasing function such that it is positive in \((0,\infty)\), \(\varphi(0)=0\) and \(\lim\limits_{t\to\infty}\varphi(t)=\infty\);_ (ii) _there exist \(x_{0}\in X\) such that \(x_{0}\leq_{X}fx_{0}\);_ (iii) \(f\) _is continuous in \((X,p)\), or;_ _if a non-decreasing sequence \(\{x_{n}\}\) converges to \(x\in X\), then \(x_{n}\leq_{X}x\) for all \(n\)._ _Then \(f\) has a fixed point \(u\in X\). Moreover, \(p(u,u)=0\)._ **Definition 5** ([17]): _The control functions \(\psi\) and \(\varphi\) are defined as_ 1. \(\psi:[0,\infty)\to[0,\infty)\) is a continuous nondecreasing function with \(\psi(t)=0\) if and only if \(t=0\), 2. \(\varphi:[0,\infty)\to[0,\infty)\) is a lower semi-continuous function with \(\varphi(t)=0\) if and only if \(t=0\). A subset \(W\) of a partially ordered set \(X\) is said to be well ordered if every two elements of \(W\) are comparable. Recently, Abbas et al. [3] obtained the following result in partial metric spaces. **Theorem 3**: _Let \((X,\preceq)\) be a partially ordered set such that there exist a complete partial metric \(p\) on \(X\) and \(f\) a nondecreasing self map on \(X\). Suppose that for every two elements \(x,y\in X\) with \(y\preceq x\), we have_ \[\psi(p(fx,fy))\leq\psi(M(x,y))-\phi(M(x,y)),\] \[M(x,y)=\max\{p(x,y),p(fx,x),p(fy,y),\frac{p(x,fy)+p(y,fx)}{2}\},\] _where \(\psi\) and \(\phi\) are control functions. If there exists \(x_{0}\in X\) with \(x_{0}\preceq fx_{0}\) and one of the following two conditions is satisfied:_ (i)_\(f\) is continuous self map on \((X,p^{S})\),_ (ii) _for any nondecreasing sequence \(\{x_{n}\}\) in \((X,\preceq)\) with \(\lim\limits_{n\to\infty}p^{S}(z,x_{n})=0\) it follows \(x_{n}\preceq z\) for all \(n\in\mathbb{Z}_{\geq 0},\)_ _then \(f\) has a fixed point. Moreover, the set of fixed points of \(f\) is well ordered if and only if \(f\) has one and only one fixed point._ ## 2 Common Fixed Point Results In this section, we obtain common fixed point theorems for four mappings defined on an ordered partial metric space. We start with the following result. **Theorem 4**.: _Let \((X,\preceq,p)\) be an ordered complete partial metric space. Let \(f,g,S\) and \(T\) be self maps on \(X\), \((f,g)\) be the pair of dominated and \((S,T)\) be the pair of dominating maps with \(f(X)\subseteq T(X)\) and \(g(X)\subseteq S(X)\). Suppose that, there exists control functions \(\psi\) and \(\varphi\) such that, for every two comparable elements \(x,y\in X\),_ \[\psi(p(fx,gy))\leq\psi(M_{p}(x,y))-\varphi(M_{p}(x,y)), \tag{1}\] _is satisfied where_ \[M_{p}(x,y)=\max\{p(Sx,Ty),p(fx,Sx),p(gy,Ty),\frac{p(Sx,gy)+p(fx,Ty)}{2}\}.\] _If for any nonincreasing sequence \(\{x_{n}\}\) in \((X,\preceq)\) with \(x_{n}\preceq y_{n}\) for all \(n\) and \(\lim\limits_{n\to\infty}p^{S}(x_{n},u)=0\) it holds that \(u\preceq y_{n}\) for all \(n\in\mathbb{Z}_{\geq 0}\) and either of the following conditions hold:_ 1. \(\{f,S\}\) _are compatible,_ \(f\) _or_ \(S\) _is continuous on_ \((X,p^{S})\) _and_ \(\{g,T\}\) _are weakly compatible;_ 2. \(\{g,T\}\) _are compatible,_ \(g\) _or_ \(T\) _is continuous on_ \((X,p^{S})\) _and_ \(\{f,S\}\) _are weakly compatible,_ _then \(f\),\(g\),\(S\) and \(T\) have a common fixed point. 
Moreover, the set of common fixed points of \(f\), \(g\), \(S\) and \(T\) is well ordered if and only if \(f\), \(g\), \(S\) and \(T\) have one and only one common fixed point._ Proof.: Let \(x_{0}\) be an arbitrary point in \(X\). We construct sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) in \(X\) such that \(y_{2n+1}=fx_{2n}=Tx_{2n+1}\), and \(y_{2n+2}=gx_{2n+1}=Sx_{2n+2}\). By given assumptions, \(x_{2n+2}\preceq Sx_{2n+2}=gx_{2n+1}\preceq x_{2n+1}\), and \(x_{2n+1}\preceq Tx_{2n+1}=fx_{2n}\preceq x_{2n}\). Thus, for all \(n\in\mathbb{Z}_{\geq 0}\) we have \(x_{n+1}\preceq x_{n}\). We suppose that \(p(y_{2n},y_{2n+1})>0\), for every \(n\). If not, then \(y_{2n}=y_{2n+1}\), for some \(n\). Further, since \(x_{2n}\) and \(x_{2n+1}\) are comparable, so from (1), we have \[\psi(p(y_{2n+1},y_{2n+2}))=\psi(p(fx_{2n},gx_{2n+1}))\\ \leq\psi(M_{p}(x_{2n},x_{2n+1}))-\varphi(M_{p}(x_{2n},x_{2n+1})),\] where \[M_{p}(x_{2n},x_{2n+1})=\max\{p(Sx_{2n},Tx_{2n+1}),p(fx_{2n},Sx_{ 2n}),p(gx_{2n+1},Tx_{2n+1}),\] \[\frac{p(Sx_{2n},gx_{2n+1})+p(fx_{2n},Tx_{2n+1})}{2}\}\] \[=\max\{p(y_{2n},y_{2n+1}),p(y_{2n+1},y_{2n}),p(y_{2n+2},y_{2n+1}),\] \[\frac{p(y_{2n},y_{2n+2})+p(y_{2n+1},y_{2n+1})}{2}\}\] \[=p(y_{2n+1},y_{2n+2}).\] Hence, \(\psi(p(y_{2n+1},y_{2n+2}))\leq\psi(p(y_{2n+1},y_{2n+2}))-\varphi(p(y_{2n+1},y_ {2n+2}))\) implies that \(\varphi(p(y_{2n+1},y_{2n+2}))=0\). As, \(\varphi(t)=0\) if and only if \(t=0\), it follows that \(y_{2n+2}\). Following the similar arguments, we get \(y_{2n+2}=y_{2n+3}\) and so on. Thus \(y_{2n}\) is the common fixed point of \(f,\,g,\,S\) and \(T\) as \(\{y_{n}\}\) became a constant sequence in \(X\). Taking \(p(y_{2n},y_{2n+1})>0\) for each \(n\). Since \(x_{2n}\) and \(x_{2n+1}\) are comparable, from (1), we obtain that \[\psi(p(y_{2n+2},y_{2n+1}))=\psi(p(y_{2n+1},y_{2n+2}))=\psi(p(fx_{2n},gx_{2n+1}))\] \[\leq\psi(M_{p}(x_{2n},x_{2n+1}))-\varphi(M_{p}(x_{2n},x_{2n+1})), \tag{2}\] where \[M_{p}(x_{2n}, x_{2n+1})=\max\{p(Sx_{2n},Tx_{2n+1}),p(fx_{2n},Sx_{2n}),\] \[p(gx_{2n+1},Tx_{2n+1}),\frac{p(Sx_{2n},gx_{2n+1})+p(fx_{2n},Tx_{ 2n+1})}{2}\}\] \[= \max\{p(y_{2n},y_{2n+1}),p(y_{2n+1},y_{2n}),\] \[p(y_{2n+2}, y_{2n+1}),\frac{p(y_{2n},y_{2n+2})+p(y_{2n+1},y_{2n+1})}{2}\}\] \[\leq \max\{p(y_{2n+1},y_{2n}),p(y_{2n+2},y_{2n+1}),\frac{p(y_{2n},y_{2 n+1})+p(y_{2n+1},y_{2n+2})}{2}\}\] \[= \max\{p(y_{2n+1},y_{2n}),p(y_{2n+2},y_{2n+1})\}.\] If \(\max\{p(y_{2n+1},y_{2n}),p(y_{2n+2},y_{2n+1})\}=p(y_{2n+2},y_{2n+1}),\) then \[M_{p}(x_{2n},x_{2n+1})\leq p(y_{2n+2},y_{2n+1}).\] But \(M_{p}(x_{2n},x_{2n+1})\geq p(y_{2n+2},y_{2n+1}),\) and so \[M_{p}(x_{2n},x_{2n+1})=p(y_{2n+2},y_{2n+1}),\] and (2) give \[\psi(p(y_{2n+2},y_{2n+1})) \leq \psi(M_{p}(x_{2n},x_{2n+1}))-\varphi(M_{p}(x_{2n},x_{2n+1}))\] \[= \psi(p(y_{2n+2},y_{2n+1}))-\varphi(p(y_{2n+2},y_{2n+1})),\] a contradiction. Hence \(p(y_{2n+2},y_{2n+1})\leq p(y_{2n+1},y_{2n})\). Moreover \(M_{p}(x_{2n},x_{2n+1})\leq p(y_{2n},y_{2n+1})\). But, since \(M_{p}(x_{2n},x_{2n+1})\geq p(y_{2n},y_{2n+1}),\) it follows that \[M_{p}(x_{2n},x_{2n+1})=p(y_{2n},y_{2n+1}).\] Similarly, \(p(y_{2n+3},y_{2n+2})\leq p(y_{2n+2},y_{2n+1})\). Thus, the sequence \(\{p(y_{2n+1},y_{2n})\}\) is nonincreasing. Hence, there exists \(c\geq 0\) such that \(\lim\limits_{n\to\infty}p(y_{2n+1},y_{2n})=c\). Suppose that \(c>0\). 
Then, \(\psi(p(y_{2n+2},y_{2n+1}))\leq\psi(M_{p}(x_{2n+1},x_{2n}))-\varphi(M_{p}(x_{2n +1},y_{2n})),\) and by lower semicontinuity of \(\varphi\), we have \[\limsup\limits_{n\to\infty}\psi(p(y_{2n+2},y_{2n+1}))\leq\limsup\limits_{n\to \infty}\psi(p(y_{2n},y_{2n+1}))-\liminf\limits_{n\to\infty}\varphi(p(y_{2n},y_ {2n+1})),\] which implies that \(\psi(c)\leq\psi(c)-\varphi(c)\), a contradiction. Therefore \(c=0\). So we conclude that \[\lim_{n\to\infty}p(y_{2n+1},y_{2n})=0. \tag{3}\] Now, we show that \(\lim_{n,m\to\infty}p(y_{2n},y_{2m})=0\). If not, there is \(\varepsilon>0\), and there exist even integers \(2n_{k}\) and \(2m_{k}\) with \(2m_{k}>2n_{k}>k\) such that \[p(y_{2m_{k}},y_{2n_{k}})\geq\varepsilon, \tag{4}\] and \(p(y_{2m_{k}-2},y_{2n_{k}})<\varepsilon\). Since \[\varepsilon \leq p(y_{2m_{k}},y_{2n_{k}})\] \[\leq p(y_{2n_{k}},y_{2m_{k}-2})+p(y_{2m_{k}-2},y_{2m_{k}})-p(y_{2m_{ k}-2},y_{2m_{k}-2})\] \[\leq p(y_{2n_{k}},y_{2m_{k}-2})+p(y_{2m_{k}-2},y_{2m_{k}-1})+p(y_{2m_{ k}-1},y_{2m_{k}})\] \[\quad-p(y_{2m_{k}-1},y_{2m_{k}-1})-p(y_{2m_{k}-2},y_{2m_{k}-2}),\] from (3) and (4), we have \[\lim_{k\to\infty}p(y_{2m_{k}},y_{2n_{k}})=\varepsilon. \tag{5}\] Also (5) and inequality \[p(y_{2m_{k}},y_{2n_{k}})\leq p(y_{2m_{k}},y_{2m_{k}-1})+p(y_{2m_{k}-1},y_{2n_{ k}})-p(y_{2m_{k}-1},y_{2m_{k}-1})\] give that \(\varepsilon\leq\lim_{k\to\infty}p(y_{2m_{k}-1},y_{2n_{k}})\), while inequality \[p(y_{2m_{k}-1},y_{2n_{k}})\leq p(y_{2m_{k}-1},y_{2m_{k}})+p(y_{2m_{k}},y_{2n_{ k}})-p(y_{2m_{k}},y_{2m_{k}})\] yields \(\lim_{k\to\infty}p(y_{2m_{k}-1},y_{2n_{k}})\leq\varepsilon\), and hence \[\lim_{k\to\infty}p(y_{2m_{k}-1},y_{2n_{k}})=\varepsilon. \tag{6}\] Now (6) and inequality \[p(y_{2m_{k}-1},y_{2n_{k}})\leq p(y_{2m_{k}-1},y_{2n_{k}+1})+p(y_{2n_{k}+1},y_{ 2n_{k}})-p(y_{2n_{k}+1},y_{2n_{k}+1})\] give \(\varepsilon\leq\lim_{k\to\infty}p(y_{2m_{k}-1},y_{2n_{k}+1})\), while inequality \[p(y_{2m_{k}-1},y_{2n_{k}+1})\leq p(y_{2m_{k}-1},y_{2n_{k}})+p(y_{2n_{k}},y_{2n_ {k}+1})-p(y_{2n_{k}},y_{2n_{k}})\] yields \(\lim_{k\to\infty}p(y_{2m_{k}-1},y_{2n_{k}+1})\leq\varepsilon\), and so \[\lim_{k\to\infty}p(y_{2m_{k}-1},y_{2n_{k}+1})=\varepsilon.\] As \[M_{p}(x_{2n_{k}},x_{2m_{k}-1})=\max\{p(Sx_{2n_{k}},Tx_{2m_{k}-1}),p (fx_{2n_{k}},Sx_{2n_{k}}),\] \[p(gx_{2m_{k}-1},Tx_{2m_{k}-1}),\frac{p(Sx_{2n_{k}},gx_{2m_{k}-1})+ p(fx_{2n_{k}},Tx_{2m_{k}-1})}{2}\}\] \[=\max\{p(y_{2n_{k}},y_{2m_{k}-1}),p(y_{2n_{k}+1},y_{2n_{k}}),\] \[p(y_{2m_{k}},y_{2m_{k}-1}),\frac{p(y_{2n_{k}},y_{2m_{k}})+p(y_{2n _{k}+1},y_{2m_{k}-1})}{2}\}.\] So, \(\lim\limits_{k\to\infty}M_{p}(x_{2n_{k}},x_{2m_{k}-1})=\max\{\epsilon,0,0, \epsilon\}=\epsilon\). From (1), we obtain \[\psi(p(y_{2n_{k}+1},y_{2m_{k}}))=\psi(p(fx_{2n_{k}},gx_{2m_{k}-1}))\\ \leq\psi(M_{p}(x_{2n_{k}},x_{2m_{k}-1}))-\varphi(M_{p}(x_{2n_{k}},x_{2m_{k}-1})).\] Taking upper limit as \(k\to\infty\) implies that \(\psi(\epsilon)\leq\psi(\epsilon)-\varphi(\epsilon)\), a contradiction as \(\epsilon>0\). Thus, we obtain \(\lim\limits_{n,m\to\infty}p(y_{2n},y_{2m})=0\), and it follows that \(\{y_{2n}\}\) is a Cauchy sequence in \((X,p)\), and hence Cauchy in \((X,p^{S})\) by Lemma 1. Since \((X,p)\) is complete, it follows from Lemma 1, \((X,p^{S})\) is also complete, so the sequence \(\{y_{2n}\}\) is convergent in the metric space \((X,p^{S})\). Therefore, there exists a point \(z\) in \(X\) such that \(\lim\limits_{n\to\infty}p^{S}(y_{2n},z)=0\). 
Hence, \[\lim\limits_{n\to\infty}y_{2n+1}=\lim\limits_{n\to\infty}Tx_{2n+1 }=\lim\limits_{n\to\infty}fx_{2n}=z,\] \[\lim\limits_{n\to\infty}y_{2n+2}=\lim\limits_{n\to\infty}Sx_{2n+2 }=\lim\limits_{n\to\infty}gx_{2n+1}=z.\] Equivalently, we have \(\lim\limits_{n,m\to\infty}p(y_{2n},y_{2m})=\lim\limits_{n\to\infty}p(y_{2n},z )=p(z,z)\). Assume that \(S\) is continuous on \((X,p^{S})\). Then \(\lim\limits_{n\to\infty}SSx_{2n+2}=\lim\limits_{n\to\infty}Sfx_{2n+2}=Sz\). Also, since \(\{f,S\}\) are compatible, we have \[\lim\limits_{n\to\infty}fSx_{2n+2}=\lim\limits_{n\to\infty}Sfx_{2n+2}=Sz.\] As, \(Sx_{2n+2}=gx_{2n+1}\preceq x_{2n+1}\), so from (1), we have \[\psi(p(fSx_{2n+2},gx_{2n+1}))\leq\psi(M_{p}(Sx_{2n+2},x_{2n+1}))-\varphi(M_{p}( Sx_{2n+2},x_{2n+1})), \tag{7}\] where \[M_{p}(Sx_{2n+2},x_{2n+1})=\max\{p(SSx_{2n+2},Tx_{2n+1}),p(fSx_{2n +2},SSx_{2n+2}),\\ p(gx_{2n+1},Tx_{2n+1}),\frac{p(SSx_{2n+2},gx_{2n+1})+p(fSx_{2n +2},Tx_{2n+1})}{2}\}.\] Now we show that \(\lim\limits_{n\to\infty}p(fSx_{2n+2},gx_{2n+1})=p(Sz,z)\). Indeed, \[p^{S}(fSx_{2n+2},gx_{2n+1})=\] \[2p(fSx_{2n+2},gx_{2n+1})-p(fSx_{2n+2},fSx_{2n+2})-p(gx_{2n+1},gx_{2n+1}),\] implies \[p(fSx_{2n+2},fSx_{2n+2})+p(gx_{2n+1},gx_{2n+1})+p^{S}(fSx_{2n+2},gx_ {2n+1})\\ =2p(fSx_{2n+2},gx_{2n+1}),\] which on taking limit as \(n\to\infty,\) implies that \[p(Sz,Sz)+p(z,z)+p^{S}(Sz,z)=2\lim_{n\to\infty}p(fSx_{2n+2},gx_{2n+1}).\] This further implies that \[p(Sz,Sz)+p(z,z)+[2p(Sz,z)-p(Sz,Sz)-p(z,z)]=2\lim_{n\to\infty}p(fSx_{2n+2},gx_ {2n+1}),\] that is, \[p(Sz,z)=\lim_{n\to\infty}p(fSx_{2n+2},gx_{2n+1}).\] From (7), on taking upper limit as \(n\to\infty,\) we obtain \[\psi(p(Sz,z))\leq\psi\left(p(Sz,z)\right)-\varphi(p(Sz,z)),\] and \(Sz=z.\) Now, as \(gx_{2n+1}\preceq x_{2n+1}\) and \(gx_{2n+1}\to z\) as \(n\to\infty,\) it follows that \(z\preceq x_{2n+1}.\) Hence from (1), we have \[\psi(p(fz,gx_{2n+1}))\leq\psi\left(M_{p}(z,x_{2n+1})\right)-\varphi(M_{p}(z,x_ {2n+1})),\] where \[M_{p}(z,x_{2n+1}) =\max\{p(Sz,Tx_{2n+1}),p(fz,Sz),p(gx_{2n+1},Tx_{2n+1}),\] \[\frac{p(Sz,gx_{2n+1})+p(fz,Tx_{2n+1})}{2}\}\] \[=\max\{p(z,Tx_{2n+1}),p(fz,z),p(gx_{2n+1},Tx_{2n+1}),\] \[\frac{p(z,gx_{2n+1})+p(fz,Tx_{2n+1})}{2}\}.\] On taking upper limit as \(n\to\infty,\) we have \(\psi(p(fz,z))\leq\psi\left(p(fz,z)\right)-\varphi(p(fz,z)),\) and \(fz=z.\) Since \(f(X)\subseteq T(X),\) there exists a point \(w\in X\) such that \(fz=Tw.\) Suppose that \(gw\neq Tw.\) Since \(w\preceq Tw=fz\preceq z\) implies \(w\preceq z.\) From (1), we obtain \[\psi(p(Tw,gw))=\psi(p(fz,gw))\leq\psi(M_{p}(z,w))-\varphi(M_{p}(z,w)), \tag{8}\] where \(M_{p}(z,w)=\max\{p(Sz,Tw),p(fz,Sz),p(gw,Tw),\frac{p(Sz,gw)+p(fz,Tw)}{2}\}\) \[= \max\{p(z,z),p(z,z),p(gw,Tw),\frac{p(Tw,gw)+p(Tw,Tw)}{2}\}\] \[= \ \max\{p(z,z),p(z,z),p(gw,Tw),\frac{p(Tw,gw)+p(Tw,Tw)}{2}\}\] \[= \ p(Tw,gw).\] Now (8) becomes \(\psi(p(Tw,gw))\leq\psi(p(Tw,gw))-\varphi(p(Tw,gw)),\) a contradiction. 
Hence, \(Tw=gw.\) Since \(g\) and \(T\) are weakly compatible, \(gz=gfz=gTw=Tgw=Tfz=Tz.\) Thus \(z\) is a coincidence point of \(g\) and \(T.\) Now, \(fx_{2n}\preceq x_{2n}\) and \(x_{2n}\to z\) as \(n\to\infty,\) imply that \(z\preceq fx_{2n}.\) Hence from (1), we get \(\psi(p(fx_{2n},gz))\leq\psi(M_{p}(x_{2n},z))-\varphi(M_{p}(x_{2n},z)),\) where \[M_{p}(x_{2n},z) = \max\{p(Sx_{2n},Tz),p(fx_{2n},Sx_{2n}),p(gz,Tz),\frac{p(Sx_{2n},gz )+p(fx_{2n},Tz)}{2}\}\] \[= \max\{p(Sx_{2n},gz),p(fx_{2n},Sx_{2n}),p(gz,gz),\frac{p(Sx_{2n},gz )+p(fx_{2n},gz)}{2}\}\] \[= \ p(z,gz)\text{ as }n\to\infty.\] On taking upper limit as \(n\to\infty,\) we have \(\psi(p(z,gz))\leq\psi(p(z,gz))-\varphi(p(z,gz)),\) and \(z=gz.\) Therefore \(fz=gz=Sz=Tz=z.\) The proof is similar when \(f\) is continuous. Similarly, the result follows when (ii) holds. Now, suppose that the set of common fixed points of \(f,\)\(g,\)\(S\) and \(T\) is well ordered. We are to show that the common fixed point of \(f,\)\(g,\)\(S\) and \(T\) is unique. Suppose that \(u\) and \(v\) be two fixed points of \(f,\)\(g,\)\(S\) and \(T\) i.e., \(fu=gu=Su=Tu=u\) and \(fv=gv=Sv=Tv=v\) with \(u\neq v.\) Then from (1), we have \[\psi(p(u,v))=\psi(p(fu,gv))\leq\psi(M_{p}(u,v))-\varphi(M_{p}(u,v)),\] where \[M_{p}(u,v) = \max\{p(Su,Tv),p(fu,Su),p(gv,Tv),\frac{p(Su,gv)+p(fu,Tv)}{2}\}\] \[= \max\{p(u,v),p(u,u),p(v,v),\frac{p(u,v)+p(u,v)}{2}\}\] \[= \ p(u,v).\] Thus \(\psi(p(u,v))\leq\psi(p(u,v))-\varphi(p(u,v)),\) a contradiction. Hence \(u=v.\) Conversely, if \(f,\)\(g,\)\(S\) and \(T\) have only one common fixed point then the set of common fixed point of \(f,\)\(g,\)\(S\) and \(T\) is well ordered being singleton. Example 3: Let \(X=[0,k]\) for a real number \(k\geq 9/10\) endowed with usual order \(\leq.\) Let \(p:X\times X\to\mathbb{R}_{\geq 0}\) be defined by \(p(x,y)=|x-y|\) if \(x,y\in[0,1),\text{and }p(x,y)=\max\{x,y\}\) otherwise. It is easily seen that \((X,p)\) is a complete partial metric space [3]. Consider \(\psi(t)=\left\{\begin{array}{ll}3t,&\text{if }0\leq t\leq\frac{1}{3}\\ 1,&\text{if }x\in(\frac{1}{3},1]\\ t,&\text{otherwise}\end{array}\right.\) and \(\varphi(t)=\left\{\begin{array}{ll}0,&\text{if }t=0\\ \frac{t}{3},&\text{if }0<t\leq\frac{1}{3}\\ \frac{1}{9},&\text{otherwise}\end{array}\right.\). Define the self mappings \(f,\)\(g,\)\(S\) and \(T\) on \(X\) by \[\begin{array}{l}f(x)=\left\{\begin{array}{l}\frac{1}{6}x,\mbox{ if }x\leq\frac{1}{3}\\ \frac{1}{18},\mbox{ if }x\in(\frac{1}{3},k]\end{array}\right.,\;gx=\left\{ \begin{array}{l}0,\mbox{ if }x\leq\frac{1}{3}\\ \frac{1}{3},\mbox{ if }x\in(\frac{1}{3},k]\end{array}\right.,\\ T(x)=\left\{\begin{array}{l}0,\mbox{ if }x=0\\ x,\mbox{ if }x\in(0,\frac{1}{3}]\end{array}\right.,\;Sx=\left\{\begin{array}{l}0, \mbox{ if }x=0\\ \frac{1}{3},\mbox{ if }x\in(0,\frac{1}{3}]\\ k,\mbox{ if }x\in(\frac{1}{3},k]\end{array}\right.\;.\end{array}\] Then \(f(X)\subseteq T(X)\) and \(g(X)\subseteq S(X)\) with \(f\) and \(g\) are dominated and \(S\) and \(T\) are dominating mappings as \[\begin{array}{|l|l|l|l|l|}\hline\mbox{for each }x\mbox{ in }X&fx\leq x&gx\leq x&x\leq Sx&x\leq Tx\\ \hline x=0&f(0)=0&g(0)=0&0=S(0)&0=T(0)\\ \hline x\in(0,\frac{1}{3}]&fx=\frac{1}{6}x\leq x&gx=0<x&x\leq\frac{1}{3}=S(x) &x=T(x)\\ \hline x\in(\frac{1}{3},k]&fx=\frac{1}{18}<x&gx=\frac{1}{3}<x&x\leq k=S(x)&x\leq k =T(x)\\ \hline\end{array}\] Also note that \(\{f,S\}\) are compatible, \(\{g,T\}\) are weakly compatible with \(f\) is a continuous map. 
To show that \(f,\,g,\,S\) and \(T\) satisfy (1) for all \(x,y\in X,\) we consider the following cases: (i) If \(x=0\) and \(y\in[0,\frac{1}{3}],\) then \(p(fx,gy)=0\) and (1) is satisfied. (ii) For \(x=0\) and \(y\in(\frac{1}{3},k],\) we have \[\begin{array}{l}\psi(p(fx,gy))=\psi(p(0,\frac{1}{3}))=\psi(\frac{1}{3})\\ =1<k-\frac{1}{9}=\psi(k)-\varphi(k)\\ =\psi(p(0,k))-\varphi(p(0,k))\\ =\psi(p(Sx,Ty))-\varphi(p(Sx,Ty))\\ =\psi(M_{p}(x,y))-\varphi(M_{p}(x,y)).\end{array}\] (iii) When \(x=(0,\frac{1}{3}]\) and \(y\in[0,\frac{1}{3}],\) then \[\begin{array}{l}\psi(p(fx,gy))=\psi(p(\frac{1}{6}x,0))=\psi(\frac{1}{6}x)= \frac{1}{2}x\\ \leq 3\max\{(\frac{1}{3}-\frac{1}{6}x),y\}-\frac{1}{3}\max\{(\frac{1}{3}-\frac{1} {6}x),y\}\\ =\psi(\max\{(\frac{1}{3}-\frac{1}{6}x),y\})-\varphi(\max\{(\frac{1}{3}-\frac{1} {6}x),y\})\\ =\psi(\max\{p(fx,Sx),p(gy,Ty)\})-\varphi(\max\{p(fx,Sx),p(gy,Ty)\})\\ =\psi(M_{p}(x,y))-\varphi(M_{p}(x,y)).\end{array}\] (iv) If \(x=(0,\frac{1}{3}]\) and \(y\in(\frac{1}{3},k],\) then \[\psi(p(fx,gy))=\psi(p(\frac{1}{6}x,\frac{1}{3}))=\psi(\frac{1}{3}(1-\frac{x} {2}))=1-\frac{1}{2}x\] \[<k-\frac{1}{9}=\psi(\max\{\frac{1}{3},k\})-\varphi(\max\{\frac{1}{3},k\})\] \[=\psi(p(gy,Ty))-\varphi(p(gy,Ty))\] \[=\psi(M_{p}(x,y))-\varphi(M_{p}(x,y)).\] Unique common fixed points of four generalized contractive mappings \[<k-\frac{1}{9}=\psi(\max\{\frac{1}{3},k\})-\varphi(\max\{\frac{1}{3},k\})\] \[=\psi(p(gy,Ty))-\varphi(p(gy,Ty))\] \[=\psi(M_{p}(x,y))-\varphi(M_{p}(x,y)).\] (v) For \(x\in(\frac{1}{3},k]\) and \(y\in[0,\frac{1}{3}],\) we obtain \[\psi(p(fx,gy)) =\psi(p(\frac{1}{18},0))=\psi(\frac{1}{18})=\frac{1}{6}<k-\frac{1} {9}\] \[=\psi(\max\{\frac{1}{18},k\})-\varphi(\max\{\frac{1}{18},k\})\] \[=\psi(p(fx,Sx))-\varphi(p(fx,Sx))\] \[=\psi(M_{p}(x,y))-\varphi(M_{p}(x,y)).\] (vi) Finally, when \(x,y\in(\frac{1}{3},k],\) then we have \[\psi(p(fx,gy)) =\psi(p(\frac{1}{18},\frac{1}{3}))=\psi(\frac{5}{18})=\frac{5}{6 }<k-\frac{1}{9}\] \[=\psi(\max\{\frac{1}{3},k\})-\varphi(\max\{\frac{1}{3},k\})\] \[=\psi(p(gy,Ty))-\varphi(p(gy,Ty))\] \[=\psi(M_{p}(x,y))-\varphi(M_{p}(x,y)).\] The mappings \(f,g,S\) and \(T\) satisfy (1). Thus all the conditions given in Theorem 4 are satisfied. Moreover, \(0\) is the unique common fixed point of \(f,\)\(g,\)\(S\) and \(T\). Corollary 1: _Let \((X,\preceq,p)\) be an ordered complete partial metric space. Let \(f,g,S\) and \(T\) be self maps on \(X\), \((f,g)\) be the pair of dominated and \((S,T)\) be the pair of dominating maps with \(f\left(X\right)\subseteq T\left(X\right)\) and \(g\left(X\right)\subseteq S\left(X\right)\). Suppose that, there exists control functions \(\psi\) and \(\varphi\) such that for every two comparable elements \(x,y\in X,\)_ \[\psi(p(fx,gy))\leq\psi(p(Sx,Ty))-\varphi(p(Sx,Ty))\] _is satisfied._ _If for any nonincreasing sequence \(\{x_{n}\}\) in \((X,\preceq)\) with \(x_{n}\preceq y_{n}\) for all \(n\) and \(\lim\limits_{n\rightarrow\infty}p^{S}(x_{n},u)=0\) it holds that \(u\preceq y_{n}\) for all \(n\in\mathbb{Z}_{\geq 0}\), and either of the following conditions hold:_ 1. \(\{f,S\}\) _are compatible,_ \(f\) _or_ \(S\) _is continuous on_ \((X,p^{S})\) _and_ \(\{g,T\}\) _are weakly compatible,_ 2. \(\{g,T\}\) _are compatible,_ \(g\) _or_ \(T\) _is continuous on_ \((X,p^{S})\) _and_ \(\{f,S\}\) _are weakly compatible,_ _then \(f,g,S\) and \(T\) have a common fixed point. 
Moreover, the set of common fixed points of \(f\), \(g\), \(S\) and \(T\) is well ordered if and only if \(f\), \(g\), \(S\) and \(T\) have one and only one common fixed point._ Consistent with the terminology in [36], we denote \(\Upsilon\) the set of all functions \(\phi:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0},\) where \(\phi\) is a Lebesgue integrable mapping with finite integral on each compact subset of \(\mathbb{R}_{\geq 0},\) nonnegative, and for each \(\varepsilon>0,\)\(\int\nolimits_{0}^{\varepsilon}\phi(t)dt>0\) (see also, [13]). As a consequence of Theorem 4, we obtain following fixed point result for a mapping satisfying contractive conditions of integral type in a complete partial metric space \(X\). Corollary 2: _Let \((X,\preceq,p)\) be an ordered complete partial metric space. Let \(f,g,S\) and \(T\) be self maps on \(X\), \((f,g)\) be the pair of dominated and \((S,T)\) be the pair of dominating maps with \(f\left(X\right)\subseteq T\left(X\right)\) and \(g\left(X\right)\subseteq S\left(X\right)\). Suppose that, there exists control functions \(\psi\) and \(\varphi\) such that for every two comparable elements \(x,y\in X,\)_ \[\int\nolimits_{0}^{\psi(p(fx,gy))}\phi(t)dt\leq\int\limits_{0}^{\psi(M_{p}(x, y))}\phi(t)dt-\int\limits_{0}^{\varphi(M_{p}(x,y))}\phi(t)dt, \tag{9}\] _is satisfied, where \(\phi\in\Upsilon\) and_ \[M_{p}(x,y)=\max\{p(Sx,Ty),p(fx,Sx),p(gy,Ty),\frac{p(Sx,gy)+p(fx,Ty)}{2}\}.\] _If for any nonincreasing sequence \(\{x_{n}\}\) in \((X,\preceq)\) with \(x_{n}\preceq y_{n}\) for all \(n\) and \(\lim\limits_{n\rightarrow\infty}p^{S}(x_{n},u)=0\), it holds that \(u\preceq y_{n}\) for all \(n\in\mathbb{Z}_{\geq 0}\), and either of the following conditions hold:_ 1. \(\{f,S\}\) _are compatible,_ \(f\) _or_ \(S\) _is continuous on_ \((X,p^{S})\) _and_ \(\{g,T\}\) _are weakly compatible_ 2. \(\{g,T\}\) _are compatible,_ \(g\) _or_ \(T\) _is continuous on_ \((X,p^{S})\) _and_ \(\{f,S\}\) _are weakly compatible,_ _then \(f,g,S\) and \(T\) have a common fixed point. Moreover, the set of common fixed points of \(f,\)\(g,\)\(S\) and \(T\) is well ordered if and only if \(f\), \(g,\)\(S\) and \(T\) have one and only one common fixed point._ Proof: Define \(\Psi:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) by \(\Psi(x)=\int\limits_{0}^{x}\phi(t)dt,\) then from (9), we have \[\Psi\left(\psi(p(fx,gy))\right)\leq\Psi\left(\psi(M_{p}(x,y))\right)-\Psi\left( \varphi(M_{p}(x,y))\right),\] which can be written as \[\psi_{1}(p(fx,gy))\leq\psi_{1}(M_{p}(x,y))-\varphi_{1}(M_{p}(x,y)),\] where \(\psi_{1}=\Psi\circ\psi\) and \(\varphi_{1}=\Psi\circ\varphi.\) Clearly, \(\psi_{1},\varphi_{1}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0},\)\(\psi_{1}\) is continuous and nondecreasing, \(\varphi_{1}\) is a lower semicontinuous, and \(\psi_{1}(t)=\varphi_{1}(t)=0\) if and only if \(t=0.\) Hence by Theorem 4, \(f,g,S\) and \(T\) have a unique common fixed point. Remark 1: We have the following remarks. 1) If we take \(f=g\) and \(S=T=I\) (an identity map) in Corollary 1, then it extends [18, Theorem 2.1] to ordered partial metric spaces. 2) We can not apply Corollary 1 in the setup of ordered metric space to the mappings given in Example 3. Indeed, if we take \(x,y\in(\frac{1}{3},2]\) contractive condition in the Corollary 1 in the setup of ordered metric space is not satisfied. 3) Theorem 4 generalizes [3, Theorem 2.1], [4, Theorem 2.1] and [17, Theorem 2.1] for four maps in the setup of ordered partial metric spaces. 
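Before turning to the application, condition (1) can also be checked numerically for the maps of Example 3. The sketch below evaluates \(\psi(p(fx,gy))\) and \(\psi(M_{p}(x,y))-\varphi(M_{p}(x,y))\) on a grid of points; it takes \(k=2\), consistent with the range \((\frac{1}{3},2]\) used in Remark 1, and reads the case \(T(x)=k\) for \(x\in(\frac{1}{3},k]\) off the dominating-map table of the example, since the displayed definition of \(T\) omits this case. With the usual order on \([0,k]\), every pair of points is comparable, so all grid pairs are checked.

```python
# Sketch only: numerical check of the contractive condition (1) for the maps
# of Example 3, with k = 2 (an assumption consistent with Remark 1(2)). The
# case T(x) = k for x in (1/3, k] is taken from the dominating-map table of
# the example. With the usual order on [0, k] all pairs (x, y) are comparable.
import numpy as np

K = 2.0

def p(x, y):                      # partial metric of Example 3
    return abs(x - y) if (x < 1 and y < 1) else max(x, y)

def psi(t):
    if t <= 1 / 3:
        return 3 * t
    return 1.0 if t <= 1 else t

def phi(t):
    if t == 0:
        return 0.0
    return t / 3 if t <= 1 / 3 else 1 / 9

f = lambda x: x / 6 if x <= 1 / 3 else 1 / 18
g = lambda x: 0.0 if x <= 1 / 3 else 1 / 3
S = lambda x: 0.0 if x == 0 else (1 / 3 if x <= 1 / 3 else K)
T = lambda x: 0.0 if x == 0 else (x if x <= 1 / 3 else K)

def M_p(x, y):
    return max(p(S(x), T(y)), p(f(x), S(x)), p(g(y), T(y)),
               (p(S(x), g(y)) + p(f(x), T(y))) / 2)

grid = np.linspace(0, K, 201)
worst = min(psi(M_p(x, y)) - phi(M_p(x, y)) - psi(p(f(x), g(y)))
            for x in grid for y in grid)
print("condition (1) holds on the grid:", worst >= -1e-12,
      "; smallest slack:", round(worst, 4))
```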
## 3 Application for Solutions of Implicit Integral Equations Let \(\Omega=[0,1]\) be bounded open set in \(\mathbb{R}\), and \(L^{2}(\Omega)\) be the set of comparable functions on \(\Omega\) whose square is integrable on \(\Omega\). Consider an integral equation \[F(t,x(t))=\int\limits_{\Omega}\kappa(t,s,x(s))ds \tag{10}\] where \(F:\Omega\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) and \(\kappa:\Omega\times\Omega\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) be two mappings. Feckan [19] obtained the nonnegative solutions of implicit integral equation (10) as an application of fixed point theorem. We shall study the sufficient condition for existence of solution of integral equation in framework of ordered complete partial metric space. Define \(p:X\times X\rightarrow\mathbb{R}_{\geq 0}\) by \[p(x,y)=\max\left(\sup\limits_{t\in\Omega}x(t),\sup\limits_{t\in\Omega}y(t) \right).\] Then \((X,p)\) is a complete partial metric space. We assume the following that there exists a positive number \(h\in[0,\frac{1}{4})\): 1. \(F(s,u(t))\leq hu(t)\) for each \(s,t\in\Omega\). 2. \(\int\limits_{\Omega}\kappa(t,s,v(s))ds\leq 2hv(t)\) for each \(s,t\in\Omega\). 3. The control functions \(\psi\) and \(\varphi\) are connected with relation that \[\psi(a)+\phi(2a)\leq\psi(2a),\] for every \(a\in\mathbb{R}_{\geq 0}\). Then integral equation (10) has a solution in \(L^{2}(\Omega)\). Proof.: Define \((fx)(t)=F(t,x(t))\) and \((gx)(t)=\int\limits_{\Omega}\kappa(t,s,x(s))ds\). Now \[\psi(p(fx,gy))=\psi\left(\max\left(\sup\limits_{t\in\Omega}\ (fx)\left(t \right),\sup\limits_{t\in\Omega}\ (fy)\left(t\right)\right)\right)\] \[= \psi\left(\max\left(\sup_{t\in\Omega}F\left(t,x(t)\right),\sup_{t\in \Omega}\ \int\limits_{\Omega}\kappa(t,s,y(t))dt\right)\right)\] \[\leq \psi\left(\max\left(\sup_{t\in\Omega}hx(t),\sup_{t\in\Omega}2hy(t) \right)\right)\] \[\leq \psi\left(2h\max\left(\sup_{t\in\Omega}x(t),\sup_{t\in\Omega}y(t) \right)\right)\] \[\leq \psi\left(\frac{1}{2}\max\left(\sup_{t\in\Omega}x(t),\sup_{t\in \Omega}y(t)\right)\right)\] \[= \psi\left(\max\left(\sup_{t\in\Omega}x(t),\sup_{t\in\Omega}y(t) \right)\right)-\phi\left(\max\left(\sup_{t\in\Omega}x(t),\sup_{t\in\Omega}y(t) \right)\right)\] \[= \psi\left(p(x,y)\right)-\phi\left(p(x,y)\right)\] \[= \psi(M_{p}(x,y))-\varphi(M_{p}(x,y)).\] Thus for every comparable elements \(x,y\in X\), \[\psi(p(fx,gy))\leq\psi(M_{p}(x,y))-\varphi(M_{p}(x,y)),\] is satisfies where \[M_{p}(x,y)=\max\{p(x,y),p(fx,x),p(gy,y),\frac{p(fx,y)+p(gy,x)}{2}\}.\] Now we can apply Theorem 4 by taking \(S\) and \(T\) as identity maps to obtain the solution of integral equation (10) in \(L^{2}(\Omega)\). ## 4 Fractals in Partial Metric Spaces Consistent with [10], let \(CB^{p}(X)\) be the family of all non-empty, closed and bounded subsets of the partial metric space \((X,p)\), induced by the partial metric \(p\). Note that closedness is taken from \((X,\tau_{p})\) (\(\tau_{p}\) is the topology induced by \(p\)) and boundedness is given as follows: \(A\) is a bounded subset in \((X,p)\) if there exists an \(x_{0}\in X\) and \(M\geq 0\) such that for all \(a\in A\), we have \(a\in B_{p}(x_{0},M)\), that is, \(p(x_{0},a)<p(a,a)+M\). 
For \(A,B\in CB^{p}(X)\) and \(x\in X\), define \(\delta_{p}:CB^{p}(X)\times CB^{p}(X)\rightarrow[0,\infty)\) and \[p(x,A) = \inf\{p(x,a):a\in A\},\] \[\delta_{p}(A,B) = \sup\{p(a,B):a\in A\},\] \[H_{p}(A,B) = \max\{\delta_{p}(A,B),\delta_{p}(B,A)\}.\] It can be verified that \(p(x,A)=0\) implies \(p^{S}(x,A)=0,\) where \[p^{S}(x,A)=\inf\{p^{S}(x,a):a\in A\}.\] **Lemma 2** ([7]).: _Let \((X,p)\) be a partial metric space and \(A\) be a non-empty subset of \(X,\) then \(a\in\overline{A}\) if and only if \(p(a,A)=p(a,a).\)_ **Proposition 1** ([10]).: _Let \((X,p)\) be a partial metric space. For any \(A,B,C\in CB^{p}(X)\),_ 1. \(\delta_{p}(A,A)=\sup\{p(a,a):a\in A\};\)__ 2. \(\delta_{p}(A,A)\leq\delta_{p}(A,B);\)__ 3. \(\delta_{p}(A,B)=0\) _implies_ \(A\subseteq B;\)__ 4. \(\delta_{p}(A,B)\leq\delta_{p}(A,C)+\delta_{p}(C,B)-\inf\limits_{c\in C}p(c,c).\)__ **Proposition 2** ([10]).: _Let \((X,p)\) be a partial metric space. For any \(A,B,C\in CB^{p}(X),\)_ 1. \(H_{p}(A,A)\leq H_{p}(A,B);\)__ 2. \(H_{p}(A,B)=H_{p}(B,A);\)__ 3. \(H_{p}(A,B)\leq H_{p}(A,C)+H_{p}(C,B)-\inf\limits_{c\in C}p(c,c);\)__ 4. \(H_{p}(A,B)=0\) _implies that_ \(A=B.\)__ The mapping \(H_{p}:CB^{p}(X)\times CB^{p}(X)\rightarrow[0,\infty)\) is called partial Hausdorff metric induced by partial metric \(p.\) Every Hausdorff metric is partial Hausdorff metric but converse is not true (see [10, Example 2.6]). **Theorem 5** ([10]).: _Let \((X,p)\) be a partial metric space. If \(T:X\to CB^{p}(X)\) be a multi-valued mapping such that for all \(x,y\in X,\) we have \(H_{p}(Tx,Ty)\leq kp(x,y),\) where \(k\in(0,1).\) Then \(T\) has a fixed point._ **Definition 6**.: Let \((X,p)\) be a partial metric space and and \(\mathcal{H}_{p}(X)\) denotes the set of all non-empty compact subsets of \(X.\) Let \(\{f_{n}:n=1,\ldots,N\}\) be a finite family of self-mappings on \(X\) that satisfy \[\psi(p(f_{i}x,f_{i}y))\leq\psi(M_{p}(x,y))-\varphi(M_{p}(x,y)),\] where \[M_{p}(x,y) = \max\{p(x,y),p(f_{i}x,x),p(f_{i}y,y),p(f_{i}^{2}x,f_{i}x),\] \[p(f_{i}^{2}y,y),p(f_{i}^{2}y,f_{i}y),\frac{p(f_{i}x,y)+p(f_{i}y, x)}{2}\}\] for every \(x,y\in X.\) We call these maps as a family of generalized \((\psi,\phi)\)-contraction mappings. Define \(T:\mathcal{H}_{p}(X)\rightarrow\mathcal{H}_{p}(X)\) by \[T(A) = f_{1}(A)\bigcup f_{2}(A)\bigcup\cdots\bigcup f_{N}(A)\] \[= \bigcup_{n=1}^{N}f_{n}(A),\text{ for each }A\in\mathcal{H}_{p}(X).\] If \(f_{n}:X\to X\), \(n=1,\ldots,N\) are generalized \((\psi,\phi)\)-contraction mappings, then \((X;f_{1},f_{2},\ldots,f_{N})\) is called generalized \((\psi,\phi)\)-iterated function system (\((\psi,\phi)\)-IFS). **Definition 7**.: A nonempty compact set \(A\subseteq X\) is said to be an attractor of the generalized \(\left(\psi,\phi\right)\)-IFS if 1) \(T(A)=A\) and 2) there is an open set \(U\subseteq X\) such that \(A\subseteq U\) and \(\lim\limits_{k\rightarrow\infty}T^{k}(B)=A\) for any compact set \(B\subseteq U\), where the limit is taken with respect to the partial Hausdorff metric. The largest open set \(U\) satisfying 2 is called a basin of attraction. **Theorem 6**.: _Let \((X,p)\) be a complete partial metric space and \((X;f_{n},n=1,\ldots,k)\) a generalized \(\left(\psi,\phi\right)\)-iterated function system. 
Let \(T:\mathcal{H}_{p}(X)\rightarrow\mathcal{H}_{p}(X)\) be a mapping defined by_ \[T(A)=\bigcup_{n=1}^{k}f_{n}(A),\text{ for all }A\in\mathcal{H}_{p}(X).\] _Suppose that, there exists control functions \(\psi\) and \(\varphi\) such that for every \(A\), \(B\in\mathcal{H}_{p}^{\prime}\left(X\right),\)_ \[\psi(H_{p}(T\left(A\right),T\left(B\right)))\leq\psi(M_{T}(A,B))-\phi(M_{T}(A, B)) \tag{11}\] _is satisfied, where_ \[M_{T}(A,B) = \max\{H_{p}(A,B),H_{p}(A,T\left(A\right)),H_{p}(B,T\left(B \right)),H_{p}(T^{2}\left(A\right),T\left(A\right)),\] \[H_{p}(T^{2}\left(A\right),B),H_{p}(T^{2}\left(A\right),T\left(B \right)),\frac{H_{p}(A,T\left(B\right))+H_{p}(B,T\left(A\right))}{2}\}.\] _Then \(T\) has a unique fixed point \(U\in\mathcal{H}_{p}\left(X\right),\) that is_ \[U=T\left(U\right)=\bigcup_{n=1}^{k}f_{n}(U).\] _Moreover, for any initial set \(A_{0}\in\mathcal{H}_{p}\left(X\right)\), the sequence \(\left\{A_{0},T\left(A_{0}\right),T^{2}\left(A_{0}\right),\ldots\right\}\) of compact sets converges to a fixed point of \(T\)._ Proof.: Let \(A_{0}\) be an arbitrary element in \(\mathcal{H}_{p}\left(X\right).\) If \(A_{0}=T\left(A_{0}\right),\) then the proof is finished. So we assume that \(A_{0}\neq T\left(A_{0}\right).\) Define, for \(m\in\mathbb{Z}_{\geq 0}\), \[A_{1}=T(A_{0}),\;A_{2}=T\left(A_{1}\right),\ldots,A_{m+1}=T\left(A_{m}\right).\] We may assume that \(A_{m}\neq A_{m+1}\) for all \(m\in\mathbb{Z}_{\geq 0}.\) If not, then \(A_{k}=A_{k+1}\) for some \(k\) implies \(A_{k}=T(A_{k})\) and this completes the proof. Take \(A_{m}\neq A_{m+1}\) for all \(m\in\mathbb{Z}_{\geq 0}\). From (11), we have \[\psi\left(H_{p}(A_{m+1},A_{m+2})\right) = \psi\left(H_{p}(T\left(A_{m}\right),T\left(A_{m+1}\right))\right)\] \[\leq \psi\left(M_{T}\left(A_{m},A_{m+1}\right)\right)-\phi\left(M_{T} \left(A_{m},A_{m+1}\right)\right),\] where \[M_{T}\left(A_{m},A_{m+1}\right)=\max\{H_{p}(A_{m},A_{m+1}),H_{p}\left(A_{m},T \left(A_{m}\right)\right),H_{p}\left(A_{m+1},T\left(A_{m+1}\right)\right),\] \[H_{p}\left(T^{2}\left(A_{m}\right),T\left(A_{m}\right)\right),H_{p} \left(T^{2}\left(A_{m}\right),A_{m+1}\right),H_{p}\left(T^{2}\left(A_{m}\right), T\left(A_{m+1}\right)\right),\] \[\qquad\qquad\frac{H_{p}\left(A_{m},T\left(A_{m+1}\right)\right)+H _{p}\left(A_{m+1},T\left(A_{m}\right)\right)}{2}\] \[=\max\{H_{p}(A_{m},A_{m+1}),H_{p}\left(A_{m},A_{m+1}\right),H_{p} \left(A_{m+1},A_{m+2}\right),\] \[H_{p}(A_{m+2},A_{m+1}),H_{p}\left(A_{m+2},A_{m+1}\right),H_{p} \left(A_{m+2},A_{m+2}\right),\] \[\qquad\qquad\frac{H_{p}\left(A_{m},A_{m+2}\right)+H_{p}\left(A_{m +1},A_{m+1}\right)}{2}\}\] \[\leq\max\{H_{p}(A_{m},A_{m+1}),H_{p}\left(A_{m+1},A_{m+2}\right),\frac{H_{p}\left(A_{m},A_{m+1}\right)+H_{p}\left(A_{m+1},A_{m+2}\right)}{2}\}\] \[=\max\{H_{p}\left(A_{m},A_{m+1}\right),H_{p}\left(A_{m+1},A_{m+2} \right)\}.\] As \(\max\{H_{p}\left(A_{m},A_{m+1}\right),H_{p}\left(A_{m+1},A_{m+2}\right)\}\leq M _{T}\left(A_{m},A_{m+1}\right).\) Therefore, \[M_{T}\left(A_{m},A_{m+1}\right)=\max\{H_{p}\left(A_{m},A_{m+1}\right),H_{p} \left(A_{m+1},A_{m+2}\right)\}.\] Now if \(M_{T}\left(A_{m},A_{m+1}\right)=H_{p}\left(A_{m+1},A_{m+2}\right),\) then (11) gives that \[\psi\left(H_{p}(A_{m+1},A_{m+2})\right)\leq\psi(H_{p}\left(A_{m+1},A_{m+2} \right))-\phi\left(H_{p}\left(A_{m+1},A_{m+2}\right)\right),\] a contradiction. 
Hence \(M_{T}\left(A_{m},A_{m+1}\right)=H_{p}\left(A_{m+1},A_{m+2}\right)\) and \[\psi\left(H_{p}(A_{m+1},A_{m+2})\right) \leq \psi(H_{p}\left(A_{m},A_{m+1}\right))-\phi\left(H_{p}\left(A_{m},A_{m+1}\right)\right)\] \[\leq \psi(H_{p}\left(A_{m},A_{m+1}\right)),\] that is, \(H_{p}\left(A_{m+1},A_{m+2}\right)\leq H_{p}\left(A_{m},A_{m+1}\right).\) Thus the sequence \(\{H_{p}\left(A_{m},A_{m+1}\right)\}\)is nonincreasing. Hence there exists \(c\geq 0\) such that \(\lim\limits_{n\rightarrow\infty}H_{p}(A_{n},A_{n+1})=c\). Suppose that \(c>0\). Then, \(\psi(H_{p}(A_{n+2},A_{n+1}))\leq\psi(H_{p}(A_{n+1},A_{n}))-\varphi(H_{p}(A_{n+1 },A_{n})),\) and by lower semicontinuity of \(\varphi,\) we have \[\limsup\limits_{n\rightarrow\infty}\psi(H_{p}(A_{n+2},A_{n+1}))\leq\limsup \limits_{n\rightarrow\infty}\psi(H_{p}(A_{n+1},A_{n}))-\liminf\limits_{n \rightarrow\infty}\varphi(H_{p}(A_{n+1},A_{n})),\] which implies that \(\psi(c)\leq\psi(c)-\varphi(c),\) a contradiction. Therefore \(c=0\). So we conclude that \[\lim\limits_{n\rightarrow\infty}H_{p}(A_{n+1},A_{n})=0. \tag{12}\] Now, we show that \(\lim\limits_{n,m\rightarrow\infty}H_{p}(A_{n},A_{m})=0\). If not, there is \(\varepsilon>0,\) and there exist even integers \(n_{k}\) and \(m_{k}\) with \(m_{k}>n_{k}>k\) such that \[H_{p}(A_{m_{k}},A_{n_{k}})\geq\varepsilon, \tag{13}\] and \(H_{p}(A_{m_{k}-2},A_{n_{k}})<\varepsilon\). Since \[\varepsilon \leq H_{p}(A_{m_{k}},A_{n_{k}})\] \[\leq H_{p}(A_{n_{k}},A_{m_{k}-2})+H_{p}(A_{m_{k}-2},A_{m_{k}})- \inf\limits_{a_{1}\in A_{m_{k}-2}}p(a_{1},a_{1})\] \[\leq H_{p}(A_{m_{k}-1},A_{m_{k}-2})+H_{p}(A_{m_{k}-2},A_{m_{k}-1})+H_{p}(A_{m_{k}- 1},A_{m_{k}})\] \[\quad-\inf_{a_{2}\in A_{m_{k}-1}}p(a_{2},a_{2})-\inf_{a_{1}\in A_{ m_{k}-2}}p(a_{1},a_{1}).\] From (12) and (13), we have \[\lim_{k\to\infty}H_{p}(A_{m_{k}},A_{n_{k}})=\varepsilon. \tag{14}\] Also (13) and inequality \[H_{p}(A_{m_{k}},A_{n_{k}})\leq H_{p}(A_{m_{k}},A_{m_{k}-1})+H_{p}(A_{m_{k}-1}, A_{n_{k}})-\inf_{a_{2}\in A_{m_{k}-1}}p(a_{2},a_{2})\] give that \(\varepsilon\leq\lim_{k\to\infty}H_{p}(A_{m_{k}-1},A_{n_{k}})\), while inequality \[H_{p}(A_{m_{k}-1},A_{n_{k}})\leq H_{p}(A_{m_{k}-1},A_{m_{k}})+H_{p}(A_{m_{k}},A_{n_{k}})-\inf_{a_{3}\in A_{m_{k}}}p(a_{3},a_{3})\] yields \(\lim_{k\to\infty}H_{p}(A_{m_{k}-1},A_{n_{k}})\leq\varepsilon\), and hence \[\lim_{k\to\infty}H_{p}(A_{m_{k}-1},A_{n_{k}})=\varepsilon. \tag{15}\] Now (15) and inequality \[H_{p}(A_{m_{k}-1},A_{n_{k}})\leq H_{p}(A_{m_{k}-1},A_{n_{k}+1})+H_{p}(A_{n_{k }+1},A_{n_{k}})-\inf_{a_{4}\in A_{n_{k}+1}}p(a_{4},a_{4})\] give \[\varepsilon\leq\lim_{k\to\infty}H_{p}(A_{m_{k}-1},A_{n_{k}+1}),\] while inequality \[H_{p}(A_{m_{k}-1},A_{n_{k}+1})\leq H_{p}(A_{m_{k}-1},A_{n_{k}})+H_{p}(A_{n_{k }},A_{n_{k}+1})-\inf_{a_{5}\in A_{n_{k}}}p(a_{5},a_{5})\] yields \(\lim_{k\to\infty}H_{p}(A_{m_{k}-1},A_{n_{k}+1})\leq\varepsilon\), and so \[\lim_{k\to\infty}H_{p}(A_{m_{k}-1},A_{n_{k}+1})=\varepsilon. \tag{16}\] As \[M_{T}(A_{n_{k}},A_{m_{k}-1}) = \max\{H_{p}(A_{n_{k}},A_{m_{k}-1}),H_{p}(A_{n_{k}},A_{n_{k}}),\] \[H_{p}(A_{m_{k}-1},A_{m_{k}-1}),H_{p}(A_{n_{k}+2},A_{n_{k}}),H_{p} (A_{n_{k}+2},A_{m_{k}-1}),\] \[H_{p}(A_{n_{k}+2},A_{m_{k}}),\frac{H_{p}(A_{n_{k}},A_{m_{k}-1})+H _{p}(A_{n_{k}},A_{m_{k}-1})}{2}\}.\] So, \(\lim_{k\to\infty}M_{T}(x_{n_{k}},x_{m_{k}-1})=\max\{\varepsilon,0,0,0, \varepsilon,\varepsilon,\varepsilon\}=\varepsilon\). 
From (11), we obtain \[\psi\left(H_{p}\left(A_{n_{k}+1},A_{m_{k}}\right)\right)=\psi(H_{p}(T\left(A_{n_{k}}\right),T\left(A_{m_{k}-1}\right)))\leq\psi(M_{T}(A_{n_{k}},A_{m_{k}-1}))-\varphi(M_{T}(A_{n_{k}},A_{m_{k}-1})).\] Taking upper limit as \(k\rightarrow\infty\) implies that \(\psi(\varepsilon)\leq\psi(\varepsilon)-\varphi(\varepsilon),\) a contradiction as \(\varepsilon>0.\) Therefore \(\left\{A_{n}\right\}\) is a Cauchy sequence in \(\mathcal{H}_{p}(X).\) Since \(\left(\mathcal{H}_{p}(X),H_{p}\right)\) is complete as \((X,p)\) is complete, so \(\lim\limits_{n\rightarrow\infty}H_{p}(A_{n},U)=H_{p}\left(U,U\right)\) for some \(U\in\mathcal{H}_{p}(X),\) that is, we have \(A_{n}\to U\) as \(n\rightarrow\infty.\) In order to show that \(U\) is the fixed point of \(T,\) we assume on the contrary that \(H_{p}\left(U,T\left(U\right)\right)\neq 0.\) Now \[\begin{array}{l}\psi\left(H_{p}(A_{n+1},T\left(U\right))\right)=\psi(H_{p}(T\left(A_{n}\right),T\left(U\right)))\\ \leq\psi\left(M_{T}\left(A_{n},U\right)\right)-\phi\left(M_{T}\left(A_{n},U\right)\right),\end{array} \tag{17}\] where \[M_{T}\left(A_{n},U\right) = \max\{H_{p}(A_{n},U),H_{p}(A_{n},T\left(A_{n}\right)),H_{p}(U,T\left(U\right)),H_{p}(T^{2}\left(A_{n}\right),T\left(A_{n}\right)),\] \[H_{p}(T^{2}\left(A_{n}\right),U),H_{p}(T^{2}\left(A_{n}\right),T\left(U\right)),\frac{H_{p}(A_{n},T\left(U\right))+H_{p}(U,T\left(A_{n}\right))}{2}\}\] \[= \max\{H_{p}(A_{n},U),H_{p}(A_{n},A_{n+1}),H_{p}(U,T\left(U\right)),H_{p}(A_{n+2},A_{n+1}),\] \[H_{p}(A_{n+2},U),H_{p}(A_{n+2},T\left(U\right)),\frac{H_{p}(A_{n},T\left(U\right))+H_{p}(U,A_{n+1})}{2}\}.\] Now we consider the following cases: 1. If \(M_{T}\left(A_{n},U\right)=H_{p}(A_{n},U),\) then on taking upper limit as \(n\rightarrow\infty\) in (17), we have \[\psi\left(H_{p}(T\left(U\right),U)\right)\leq\psi\left(H_{p}\left(U,U\right)\right)-\phi\left(H_{p}\left(U,U\right)\right),\] a contradiction. 2. When \(M_{T}\left(A_{n},U\right)=H_{p}(A_{n},A_{n+1}),\) then taking the upper limit as \(n\rightarrow\infty\) in (17) implies \[\psi\left(H_{p}(T\left(U\right),U)\right)\leq\psi\left(H_{p}\left(U,U\right)\right)-\phi\left(H_{p}\left(U,U\right)\right),\] a contradiction. 3. In case \(M_{T}\left(A_{n},U\right)=H_{p}(U,T\left(U\right)),\) then on taking upper limit as \(n\rightarrow\infty\) in (17), we get \[\psi\left(H_{p}(T\left(U\right),U)\right)\leq\psi\left(H_{p}\left(U,T\left(U\right)\right)\right)-\phi\left(H_{p}\left(U,T\left(U\right)\right)\right),\] a contradiction. 4. If \(M_{T}\left(A_{n},U\right)=\frac{H_{p}(A_{n},T\left(U\right))+H_{p}(U,A_{n+1})}{2},\) then on taking upper limit as \(n\rightarrow\infty,\) we have \[\psi\left(H_{p}(T\left(U\right),U)\right)\leq\psi(\frac{H_{p}\left(U,T\left(U\right)\right)+H_{p}\left(U,U\right)}{2})-\phi(\frac{H_{p}\left(U,T\left(U\right)\right)+H_{p}\left(U,U\right)}{2})=\psi(\frac{H_{p}\left(U,T\left(U\right)\right)}{2})-\phi(\frac{H_{p}\left(U,T\left(U\right)\right)}{2}),\] a contradiction. 5. When \(M_{T}\left(A_{n},U\right)=H_{p}(A_{n+2},A_{n+1}),\) then on taking upper limit as \(n\rightarrow\infty\) in (17), we get \[\psi\left(H_{p}(T\left(U\right),U)\right)\leq\psi\left(H_{p}\left(U,U\right)\right)-\phi\left(H_{p}\left(U,U\right)\right),\] a contradiction. 6. In case \(M_{T}\left(A_{n},U\right)=H_{p}(A_{n+2},U),\) then on taking upper limit as \(n\rightarrow\infty\) in (17), we get \[\psi\left(H_{p}(T\left(U\right),U)\right)\leq\psi\left(H_{p}\left(U,U\right)\right)-\phi\left(H_{p}\left(U,U\right)\right),\] a contradiction. 7.
Finally if \(M_{T}\left(A_{n},U\right)=H_{p}(A_{n+2},T\left(U\right)),\) then on taking upper limit as \(n\rightarrow\infty,\) we have \[\psi\left(H_{p}(T\left(U\right),U)\right)\leq\psi(H_{p}(U,T\left(U\right)))-\phi\left(H_{p}\left(U,T\left(U\right)\right)\right),\] a contradiction. Thus, \(U\) is a fixed point of \(T\). To show the uniqueness of the fixed point of \(T\), assume that \(U\) and \(V\) are two fixed points of \(T\) with \(H_{p}\left(U,V\right)\neq 0.\) From (11), we obtain that \[\psi(H_{p}(U,V)) = \psi(H_{p}(T\left(U\right),T\left(V\right)))\leq\psi\left(M_{T}\left(U,V\right)\right)-\phi\left(M_{T}\left(U,V\right)\right),\] where \[M_{T}\left(U,V\right)=\max\{H_{p}(U,V),H_{p}(U,T\left(U\right)),H_{p}(V,T\left(V\right)),\frac{H_{p}(U,T\left(V\right))+H_{p}(V,T\left(U\right))}{2},H_{p}(T^{2}\left(U\right),U),H_{p}(T^{2}\left(U\right),V),H_{p}(T^{2}\left(U\right),T\left(V\right))\}\] \[=\max\{H_{p}\left(U,V\right),H_{p}(U,U),H_{p}(V,V),\frac{H_{p}(U,V)+H_{p}(V,U)}{2},H_{p}\left(U,U\right),H_{p}(U,V),H_{p}(U,V)\}\] \[=H_{p}(U,V),\] that is, \[\psi(H_{p}(U,V))\leq\psi\left(H_{p}\left(U,V\right)\right)-\phi\left(H_{p}\left(U,V\right)\right),\] a contradiction. Thus \(T\) has a unique fixed point \(U\in\mathcal{H}_{p}(X)\). _Remark 2_.: In Theorem 6, if we take \(\mathcal{S}(X)\) the collection of all singleton subsets of \(X\), then clearly \(\mathcal{S}(X)\subseteq\mathcal{H}_{p}(X)\). Moreover, if we take \(f_{n}=f\) for each \(n\) (where \(f=f_{1}\)), then the mapping \(T\) becomes \[T(x)=f(x).\] With this setting, we obtain the following fixed point result. Corollary 3: _Let \((X,p)\) be a complete partial metric space and \((X;f_{n},n=1,\ldots,k)\) a generalized iterated function system. Let \(f:X\to X\) be a mapping defined as in Remark 2. Suppose that there exist control functions \(\psi\) and \(\varphi\) such that for any \(x,y\in X\), the following holds:_ \[\psi\left(p\left(fx,fy\right)\right)\leq\psi(M_{p}(x,y))-\phi(M_{p}(x,y)),\] _where_ \[M_{p}(x,y)= \max\{p(x,y),p(x,fx),p(y,fy),p(f^{2}x,y),p(f^{2}x,fx),p(f^{2}x,fy),\frac{p(x,fy)+p(y,fx)}{2}\}.\] _Then \(f\) has a unique fixed point \(x\in X\). Moreover, for any initial point \(x_{0}\in X\), the sequence \(\{x_{0},fx_{0},f^{2}x_{0},\ldots\}\) converges to the fixed point of \(f.\)_ Corollary 4: _Let \((X,p)\) be a complete partial metric space and \((X;f_{n},n=1,\ldots,k)\) be an iterated function system where each \(f_{i}\) for \(i=1,\ldots,k\) is a contraction self-mapping on \(X.\) Then \(T:\mathcal{H}(X)\rightarrow\mathcal{H}(X)\) defined in Theorem 6 has a unique fixed point in \(\mathcal{H}(X)\). Furthermore, for any set \(A_{0}\in\mathcal{H}(X)\), the sequence of compact sets \(\{A_{0},T\left(A_{0}\right),T^{2}\left(A_{0}\right),\ldots\}\) converges to a fixed point of \(T\)._ Proof: It follows from Theorem 2 that if each \(f_{i}\) for \(i=1,\ldots,k\) is a contraction mapping on \(X,\) then the mapping \(T:\mathcal{H}(X)\rightarrow\mathcal{H}(X)\) defined by \[T(A)=\bigcup_{n=1}^{k}f_{n}(A),\text{ for all }A\in\mathcal{H}(X)\] is a contraction on \(\mathcal{H}\left(X\right)\). Using Theorem 4, the result follows.
Corollary 5: _Let \((X,p)\) be a complete partial metric space and \((X;f_{n},n=1,\dots,k)\) an iterated function system where each \(f_{i}\) for \(i=1,\dots,k\) is a mapping on \(X\) satisfying_ \[d\left(f_{i}x,f_{i}y\right)e^{d\left(f_{i}x,f_{i}y\right)-d(x,y)}\leq e^{-\tau}d\left(x,y\right),\text{ for all }x,y\in X,\text{ }f_{i}x\neq f_{i}y,\] _where \(\tau>0.\) Then the mapping \(T:\mathcal{H}(X)\rightarrow\mathcal{H}(X)\) defined in Theorem 6 has a unique fixed point in \(\mathcal{H}(X)\). Furthermore, for any set \(A_{0}\in\mathcal{H}\left(X\right)\), the sequence of compact sets \(\{A_{0},T\left(A_{0}\right),T^{2}\left(A_{0}\right),\dots\}\) converges to a fixed point of \(T\)._ Proof: Take \(F\left(\lambda\right)=\ln\left(\lambda\right)+\lambda\), \(\lambda>0\), in Theorem 3; then each mapping \(f_{i}\) for \(i=1,\ldots,k\) on \(X\) satisfies \[d\left(f_{i}x,f_{i}y\right)e^{d\left(f_{i}x,f_{i}y\right)-d\left(x,y\right)}\leq e^{-\tau}d\left(x,y\right),\text{ for all }x,y\in X,\ f_{i}x\neq f_{i}y,\] where \(\tau>0.\) Again from Theorem 3, the mapping \(T:\mathcal{H}(X)\rightarrow\mathcal{H}(X)\) defined by \[T(A)=\bigcup_{n=1}^{k}f_{n}(A),\text{ for all }A\in\mathcal{H}(X)\] satisfies \[H\left(T\left(A\right),T\left(B\right)\right)e^{H\left(T\left(A\right),T\left(B\right)\right)-H\left(A,B\right)}\leq e^{-\tau}H\left(A,B\right),\] for all \(A,B\in\mathcal{H}(X),H\left(T\left(A\right),T\left(B\right)\right)\neq 0.\) Using Theorem 4, the result follows. Corollary 6: _Let \(\left(X,p\right)\) be a complete partial metric space and \(\left(X;f_{n},n=1,\dots,k\right)\) be an iterated function system such that each \(f_{i}\) for \(i=1,\dots,k\) is a mapping on \(X\) satisfying_ \[d\left(f_{i}x,f_{i}y\right)\left(d\left(f_{i}x,f_{i}y\right)+1\right)\leq e^{-\tau}d\left(x,y\right)\left(d\left(x,y\right)+1\right),\text{ for all }x,y\in X,\ f_{i}x\neq f_{i}y,\] _where \(\tau>0.\) Then the mapping \(T:\mathcal{H}(X)\rightarrow\mathcal{H}(X)\) defined in Theorem 6 has a unique fixed point in \(\mathcal{H}(X)\). Furthermore, for any set \(A_{0}\in\mathcal{H}(X)\), the sequence of compact sets \(\left\{A_{0},T\left(A_{0}\right),T^{2}\left(A_{0}\right),\dots\right\}\) converges to a fixed point of \(T.\)_ Proof: By taking \(F\left(\lambda\right)=\ln\left(\lambda^{2}+\lambda\right)+\lambda\), \(\lambda>0\), in Theorem 3, we obtain that each mapping \(f_{i}\) for \(i=1,\dots,k\) on \(X\) satisfies \[d\left(f_{i}x,f_{i}y\right)\left(d\left(f_{i}x,f_{i}y\right)+1\right)\leq e^{-\tau}d\left(x,y\right)\left(d\left(x,y\right)+1\right),\text{ for all }x,y\in X,\ f_{i}x\neq f_{i}y,\] where \(\tau>0.\) Again it follows from Theorem 3 that the mapping \(T:\mathcal{H}(X)\rightarrow\mathcal{H}(X)\) defined by \[T(A)=\bigcup_{n=1}^{k}f_{n}(A),\text{ for all }A\in\mathcal{H}(X)\] satisfies \[H\left(T\left(A\right),T\left(B\right)\right)\left(H\left(T\left(A\right),T\left(B\right)\right)+1\right)\leq e^{-\tau}H\left(A,B\right)\left(H\left(A,B\right)+1\right),\] for all \(A,B\in\mathcal{H}(X),H\left(T\left(A\right),T\left(B\right)\right)\neq 0.\) Using Theorem 4, the result follows.
Corollary 7: _Let \(\left(X,p\right)\) be a complete partial metric space and \(\left(X;f_{n},n=1,\dots,k\right)\) be an iterated function system such that each \(f_{i}\) for \(i=1,\dots,k\) is a mapping on \(X\) satisfying_ \[d\left(f_{i}x,f_{i}y\right)\leq\frac{1}{\left(1+\tau\sqrt{d\left(x,y\right)}\right)^{2}}d\left(x,y\right),\text{ for all }x,y\in X,\ f_{i}x\neq f_{i}y,\] _where \(\tau>0.\) Then the mapping \(T:\mathcal{H}(X)\rightarrow\mathcal{H}(X)\) defined in Theorem 6 has a unique fixed point in \(\mathcal{H}(X)\). Furthermore, for any set \(A_{0}\in\mathcal{H}(X)\), the sequence of compact sets \(\left\{A_{0},T\left(A_{0}\right),T^{2}\left(A_{0}\right),\dots\right\}\) converges to a fixed point of \(T.\)_ Proof: Take \(F\left(\lambda\right)=-1/\sqrt{\lambda}\), \(\lambda>0\), in Theorem 3; then each mapping \(f_{i}\) for \(i=1,\ldots,k\) on \(X\) satisfies \[d\left(f_{i}x,f_{i}y\right)\leq\frac{1}{(1+\tau\sqrt{d\left(x,y\right)})^{2}}d\left(x,y\right),\text{ for all }x,y\in X,\ f_{i}x\neq f_{i}y,\] where \(\tau>0\). Again it follows from Theorem 3 that the mapping \(T:\mathcal{H}(X)\rightarrow\mathcal{H}(X)\) defined by \[T(A)=\bigcup_{n=1}^{k}f_{n}(A),\text{ for all }A\in\mathcal{H}(X)\] satisfies \[H\left(T\left(A\right),T\left(B\right)\right)\leq\frac{1}{(1+\tau\sqrt{H\left(A,B\right)})^{2}}H\left(A,B\right),\] for all \(A,B\in\mathcal{H}(X),H\left(T\left(A\right),T\left(B\right)\right)\neq 0\). Using Theorem 4, the result follows.
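The convergence statement in these corollaries can also be checked numerically. The following is a minimal illustration (not part of the results above), run in the ordinary metric setting where \(p\) is the usual distance on \([0,1]\) so that \(H_{p}\) reduces to the Hausdorff distance, with the hypothetical contractions \(f_{1}(x)=x/3\) and \(f_{2}(x)=x/3+2/3\): it iterates \(A_{m+1}=T(A_{m})=f_{1}(A_{m})\cup f_{2}(A_{m})\) on finite point sets and prints the distance between successive iterates, which decays geometrically as the iterates approach the attractor.

```python
import numpy as np

def hausdorff(A, B):
    # Hausdorff distance between two finite point sets on the real line
    d = np.abs(A[:, None] - B[None, :])
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two contractions on [0, 1]; this illustrative IFS has the Cantor set as attractor
f1 = lambda x: x / 3.0
f2 = lambda x: x / 3.0 + 2.0 / 3.0

def T(A):
    # Hutchinson-type operator T(A) = f1(A) U f2(A), acting on a finite sample of A
    return np.unique(np.concatenate([f1(A), f2(A)]))

A = np.linspace(0.0, 1.0, 5)   # initial compact set A0 (sampled)
for m in range(8):
    B = T(A)
    print(f"iteration {m + 1}: H(A_m, A_m+1) = {hausdorff(A, B):.6f}")
    A = B
```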
2309.05165
Diffusio-phoretic fast swelling of chemically responsive hydrogels
Acid-induced release of stored ions from polyacrylic acid hydrogels (with a free surface fully permeable to the ion and acid flux) was observed to increase the gel osmotic pressure that leads to rapid, temporary swelling faster than the characteristic solvent absorption rate of the gel. Here we develop a continuum poroelastic theory that quantitatively explains the experiments by introducing a "gel diffusio-phoresis" mechanism: Steric repulsion between the gel polymers and released ions can induce a diffusio-osmotic solvent intake counteracted by the diffusio-phoretic expansion of the gel network. For applications ranging from drug delivery to soft robotics, engineering the gel diffusio-phoresis may enable stimuli-responsive hydrogels with amplified strain rates and power output.
Chinmay Katke, Peter A. Korevaar, C. Nadir Kaplan
2023-09-10T23:20:36Z
http://arxiv.org/abs/2309.05165v2
# Diffusio-phoretic fast swelling of chemically responsive hydrogels ###### Abstract Acid-induced release of stored ions from polyacrylic acid hydrogels (with a free surface fully permeable to the ion and acid flux) was observed to increase the gel osmotic pressure that leads to rapid, temporary swelling faster than the characteristic solvent absorption rate of the gel. Here we develop a continuum poroelastic theory that quantitatively explains the experiments by introducing a "gel diffusio-phoresis" mechanism: Steric repulsion between the gel polymers and released ions can induce a diffusio-osmotic solvent intake counteracted by the diffusio-phoretic expansion of the gel network. For applications ranging from drug delivery to soft robotics, engineering the gel diffusio-phoresis may enable stimuli-responsive hydrogels with amplified strain rates and power output. Osmosis, the diffusion of solvent in the direction of a steady solute concentration gradient across a semi-permeable membrane, underlies many biological processes ranging from turgor pressure regulation in walled cells for the development and mechanical stability of herbaceous plants to the separation of urea from water in the kidney [1; 2; 3; 4]. Owing to the simplicity of osmosis and its capacity to convert modest concentration differences into significant pressures according to the van't Hoff's law, these examples have inspired various applications of chemomechanical energy conversion [5; 6; 7; 8; 9; 10; 11; 12]. These also include the use of hydrogels, which are crosslinked polymer networks that imbibe a solvent and can expand up to a thousand times their dry weight. Capitalizing on large swelling, custom gel designs that can respond to external cues such as chemical, optical, magnetic stimuli, and humidity have enabled micron-scale proof-of-concept devices for soft robotic actuation, drug delivery, smart optical sensing, and synthetic homeostasis [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. Despite the potential of hydrogels as bioinspired soft platforms, an inherent drawback is that a common gel with the shortest dimension \(H\) and permeability \(k_{f}\) can deform only as fast as dictated by the rate of solvent absorption (a.k.a. poroelastic diffusion) at a timescale \(\tau\sim H^{2}/k_{f}\). Thus, increasing the gel size drastically lowers the strain rate and in turn the power output. A way to overcome this limitation is through the synthesis of hydrogels from a foam template with much bigger pores (i.e., higher \(k_{f}\)) than in most gels to achieve a strain rate of \(\sim 0.2\) s\({}^{-1}\) during the osmotic swelling of a centimeter-scale gel (cf. \(\sim 10^{-3}\)s\({}^{-1}\) in 100 \(\mu\)m-wide NIPAAm and mm-sized polyacrylate gels) with a power density comparable to that of micron-sized gel beads [24; 25; 26; 27]. However, larger pore size inevitably reduces the density of the polymers, compromising on their degree of functionalization. This in turn restricts the gel responsiveness to ambient fields, which on the contrary must be preserved and ideally be combined with large strains and strain rates for effective chemomechanical energy transduction as a sought-after property of biomimetic soft matter. One chemically responsive system with a deformation rate that can exceed the poroelastic diffusion rate \(\tau^{-1}\) through a tunable transient osmotic imbalance is the widely used polyacrylic acid (PAA) hydrogel [28]. 
Under neutral or basic pH, the PAA gel can arrest divalent copper ions Cu\({}^{2+}\) (or calcium ions Ca\({}^{2+}\)) and contract with respect to its equilibrium height \(H\) by the formation of COO\({}^{-}-\)Cu\({}^{2+}-\)COO\({}^{-}\) chelates that remain kinetically stable over months without external Cu\({}^{2+}\) (Fig. 1a) [29]. When HCl is delivered as a second stimulus, the dissolved acid rapidly displaces Cu\({}^{2+}\), releasing it to the fluid phase of the gel (Fig. 1b). Although the formation of the stable carboxyl (COOH) groups this time in an acidic condition favors gel contraction [30; 31], the gel temporarily overcomes these contractile stresses and swells by \(\sim\)10% of \(H\) over the total copper decomplexation time \(\tau_{total}\) if \(\tau_{total}<\tau\equiv H^{2}/D\) (\(D\) : poroelastic diffusion constant; Table I). The swollen state is maintained until the Cu\({}^{2+}\) concentration equilibrates between the gel and the initially copper-free supernatant domain. Eventually, the gel contracts to the height favored by the carboxyl groups (Fig. 1c). As a control experiment, adding CuSO\({}_{4}\) into the HCl solution suppressed the swelling, implying a reduction of the hypotonic character of the gel due to the temporary free copper gradient in the first place [28]. In this Letter, we theoretically address the following problem motivated by these experiments: Since osmosis is always associated with a degree of interface selectivity to a solute, how does osmotic pressure build up and then diminish across the gel-supernatant interface, which is fully permeable to the ions and acid? To that end, we hypothesize that osmosis can rather be manifested as a bulk effect in the gel: If the proton-doped polymer network and the free ions interact at the microscopic scale, the aqueous solution must undergo diffusio-osmotic flow due to the ion concentration gradient, inducing a diffusio-phoretic displacement of the polymers in the opposite direction akin to the diffusio-phoresis of colloidal particles in a background solution [32]. To test this hypothesis, we develop a linear poroelastic theory for diffusio-phoretic gel swelling caused by the repulsive interactions between the polymer and ions. To our knowledge, gel diffusi-phoresis has not been conceptualized experimentally or theoretically before. In fact, although polyacrylate and polyacrylamide hydrogels can transiently swell or shrink in response to the osmolarity of a solution with osmolytes as heavy as 20-200 kDa, this was reported to be due to the suppression of the osmolyte diffusion by the gel network [33; 34; 27; 35]. Moreover, a thin-film theory that we developed for the ion-release induced traveling deformation waves along PAA gel films was agnostic to the permeability of the gel-supernatant interface (since it cannot resolve the through-thickness pressure and concentration gradients) and thus assumed a temporary osmotic imbalance of the ions based on van't Hoff's law [28]. Here we show that, when adding strong acid, the gradient of the released copper ions upon acid complexation in the gel results in a rapid diffusio-phoretic swelling burst with a rate \(\tau_{total}^{-1}\) bigger than the poroelastic deformation rate \(\tau^{-1}\), in quantitative agreement with experiments. For sufficiently weak acid, swelling is suppressed (\(\tau_{total}^{-1}\lesssim\tau^{-1}\)). 
Accordingly, our theory confirms without free parameters that free agents with a much smaller molecular weight (\(\lesssim 100\) Da) than typical osmolytes can induce diffusi-osmotic stress in the gel and deform it in the absence of impeded diffusion or interface selectivity. Importantly, the inequality \(D_{DP}\gg D\) (\(D_{DP}:\) diffusion-phoretic mobility; Table 1) could be leveraged by large deformations of the PAA gel and also be engineered in other gels as such to scale up chemically responsive shape-shifting hydrogel actuators with high strain rates and power densities. We formulate the gel mechanics via a minimal Biot consolidation model [36; 37]. Defining \(\mathbf{v}\) as the fluid velocity relative to the solid matrix, \(\mathbf{u}\) as the matrix displacement vector, and \(p\) as the solution pressure, the mass and momentum conservation of the gel fluid are respectively given by the incompressibility condition and the Darcy's law for porous flow, i.e. (\(\mu_{f}:\) solution kinematic viscosity; Table 1), \[\nabla\cdot\left(\mathbf{v}+\frac{\partial\mathbf{u}}{\partial t}\right)=0\,, \quad\mathbf{v}=-\frac{k_{f}}{\mu_{f}}\nabla p\,. \tag{1}\] Eq. 1 yields the fluid variables \(p\,,\mathbf{v}\) when \(\mathbf{u}\) or equivalently the linear elastic gel strain tensor \(\mathbf{\epsilon}\equiv(\nabla\mathbf{u}+(\nabla\mathbf{u})^{T})/2\) is determined from the mechanical equilibrium condition for the gel stress tensor \(\mathbf{\sigma}\) \[\nabla\cdot\mathbf{\sigma}=0\,. \tag{2}\] In \(\mathbf{\sigma}\), the linear poroelastic terms comprise the elastic stresses (\(\mu\,,\lambda:\) Lame coefficients; Table 1) and the solution pressure \(p\). We add to these two chemical contractility terms associated with the bound copper volume fraction \(\phi^{(b)}\) and bound acid volume fraction \(\phi^{(b)}_{+}\) on the gel polymers as in Ref. [28] (\(\tilde{\gamma}\,,\tilde{\chi}:\) stress prefactors; Table 1), the osmotic pressure induced by the polymer volume fraction \(\phi_{p}\), and the diffusio-osmotic stress term due to the interstitial free copper volume fraction \(\phi^{(0)}\) (\(\mathbf{I}:\) rank-two identity tensor, \(k_{B}:\) Boltzmann's constant, \(T\) : temperature, \(v_{c}:\) effective molecular volume; Table 1): \[\begin{split}\mathbf{\sigma}=2\mu\mathbf{\epsilon}+\mathbf{I}\Big{[}& \lambda\operatorname{Tr}\left(\mathbf{\epsilon}\right)-p+\tilde{ \gamma}\phi^{(b)}+\tilde{\chi}\phi^{(b)}_{+}\\ &-\frac{k_{B}T}{v_{c}}\bigg{(}\eta_{DP}\phi^{(0)}+\frac{\phi_{p}^ {2}}{2}\bigg{)}\Big{]}\,.\end{split} \tag{3}\] The dimensionless diffusio-phoretic coefficient satisfies \(\eta_{DP}>0\) (\(\eta_{DP}<0\)) for repulsive (attractive) interactions between the polymer and the free copper, imposing a diffusio-osmotic flow of the solution towards higher interaction energy densities proportional to \(\phi^{(0)}\) similar to Marangoni flows [32]. Then, the gel network will undergo diffusio-phoretic displacement in the opposite direction to negate the diffusio-osmotic flow. Here, we consider purely steric repulsions between the polymer and the Cu\({}^{2+}\) ions with an exclusion radius \(R_{e}\), which leads to \(\eta_{DP}\equiv R_{e}^{2}/k_{f}>0\) and in turn gel swelling when the free copper gradient is in the \(-\hat{\mathbf{z}}\) direction. The diffusio-phoretic mobility coefficient \(D_{DP}\) is given in terms of the parameters in Eq. 1 and Eq. 3 as \(D_{DP}\equiv k_{B}TR_{e}^{2}/v_{c}\mu_{f}\) (Table 1). 
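As a heuristic consistency check (a sketch only, not an additional model assumption), substituting the diffusio-osmotic part of the stress in Eq. 3 as an effective pressure contribution into the Darcy flux of Eq. 1 reproduces the same mobility scale, \[\mathbf{v}_{DO}\sim-\frac{k_{f}}{\mu_{f}}\nabla\left(\frac{k_{B}T}{v_{c}}\eta_{DP}\phi^{(0)}\right)=-\frac{k_{B}T\,\eta_{DP}k_{f}}{v_{c}\mu_{f}}\nabla\phi^{(0)}=-\frac{k_{B}TR_{e}^{2}}{v_{c}\mu_{f}}\nabla\phi^{(0)}\equiv-D_{DP}\nabla\phi^{(0)}\,,\] where \(\eta_{DP}=R_{e}^{2}/k_{f}\) was used in the last step.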
We ignore an analogous diffusio-osmotic contribution of the acid since it equilibrates across the two domains much faster than the timescales of interest in this work. As with polymer solutions, the osmotic pressure quadratic in \(\phi_{p}\) imposes a permanent stress in the gel due to the cross-linked polymers, for which the gel-supernatant interface is effectively impermeable [38; 39]. This osmotic stress is predominantly interfacial because of the polymer concentration discontinuity across the gel boundary. Because the free-copper-induced diffusio-osmotic stress with a linear term in \(\phi^{(0)}\) in general dominates the polymeric osmotic stress quadratic in \(\phi_{p}\,\), the capability of controlled ionic release from the PAA gel can enable very high strain and strain rates compared to the gel swelling via mere osmotic solvent absorption.

Figure 1: **PAA gel response to two competing stimuli.** **(a)** Acid (red, volume fraction \(\phi^{(a)}_{+}\)) is delivered from the supernatant solution into a copper-laden PAA hydrogel (attached to a substrate) with a contracted initial height \(h(0)<H\) due to the stable complexation between the COO\({}^{-}\) groups and bound Cu\({}^{2+}\) (blue, volume fraction \(\phi^{(b)}\)), which turns the gel blue. **(b)** When the complexation of acid with a volume fraction \(\phi^{(b)}_{+}\) forms COOH groups, Cu\({}^{2+}\) ions are released, acquiring a volume fraction \(\phi^{(0)}\) in the gel solution. The gel swells with a time-dependent height \(h(t)>h(0)\) and loses the blue color while a \(\phi^{(0)}\) gradient along the \(z-\)axis emerges (right) [28]. **(c)** This gradient eventually vanishes due to Cu\({}^{2+}\) diffusion, and the gel relaxes to the final height \(h(\infty)\approx h(0)\) favored by the COOH-induced contraction.

The implicit time dependence of the stress tensor (Eq. 3) is governed by the advection and diffusion of the free copper volume fraction \(\phi^{(0)}\) and the free acid volume fraction \(\phi^{(0)}_{+}\) in the gel, as well as their conversion rates to/from the bound states \(\phi^{(b)}\,,\phi^{(b)}_{+}\) on the gel backbone. The solvent volume fraction \(\phi_{s}\) satisfies \(\phi_{s}+\phi_{p}+\phi^{(0)}+\phi^{(0)}_{+}=1\,.\) Our model captures the evolution of \(\phi^{(0)}\) and \(\phi^{(0)}_{+}\) through the reaction-transport equations (\(D_{x}\,:\) diffusion constant of species \(x\,,\,\tilde{r}:\) rate constant, \(\phi^{*}:\) COO\(-\) volume fraction; Table 1) \[\begin{split}\frac{\partial\phi^{(0)}}{\partial t}+\nabla\cdot&\overbrace{\left[\phi^{(0)}\left(\mathbf{v}+\frac{\partial\mathbf{u}}{\partial t}\right)-D_{Cu}\nabla\phi^{(0)}\right]}^{\equiv\mathbf{Q}_{Cu}}\\ &=\underbrace{\tilde{r}\phi^{(0)}_{+}\phi^{(b)}-\tilde{r}\phi^{(0)}\left[\phi^{*}-2\phi^{(b)}-\phi^{(b)}_{+}\right]}_{\equiv R_{Cu}},\end{split} \tag{4}\] \[\begin{split}\frac{\partial\phi^{(0)}_{+}}{\partial t}+\nabla\cdot&\overbrace{\left[\phi^{(0)}_{+}\left(\mathbf{v}+\frac{\partial\mathbf{u}}{\partial t}\right)-D_{+}\nabla\phi^{(0)}_{+}\right]}^{\equiv\mathbf{Q}_{+}}\\ &=-\underbrace{\tilde{r}\phi^{(0)}_{+}(\phi^{*}-\phi^{(b)}_{+})}_{\equiv R_{+}},\end{split} \tag{5}\] and the evolution of \(\phi^{(b)}\,,\phi^{(b)}_{+}\) through the rate equations \[\frac{\partial\phi^{(b)}}{\partial t}=-R_{Cu}\,,\quad\frac{\partial\phi^{(b)}_{+}}{\partial t}=R_{+}\,. \tag{6}\] We assume a single rate constant \(\tilde{r}\) for all reactions in Eqs. 4-6 since they must occur at comparable timescales.
The first term of \(R_{Cu}\) describes the acid-induced Cu\({}^{2+}\) decomplexation from the gel backbone, and the second term is the formation rate of a COO\({}^{-}-\)Cu\({}^{2+}-\)COO\({}^{-}\) chelate, hence the factor 2. The source term \(R_{+}\) is the COOH formation rate. With the impermeability condition at the gel-substrate interface, Eqs. 1-6 fully determine the time-dependent flow, chemical coupling, and deformations within the gel domain, provided that the flow of the supernatant solution is specified to impose flux, stress, and concentration continuity at the gel free surface. In the supernatant domain, denoting the fluid velocity by \(\mathbf{V}\), the fluid stress tensor by \(\boldsymbol{\sigma}^{(a)}\,,\) and the pressure by \(P\,,\) the incompressibility condition and the Stokes flow are given by \[\nabla\cdot\mathbf{V}=0\,,\quad\nabla\cdot\boldsymbol{\sigma}^{(a)}=0\,;\quad \boldsymbol{\sigma}^{(a)}=\mu_{f}\nabla\mathbf{V}-\mathbf{I}P\,. \tag{7}\] Unlike in the gel, the copper ions with a volume fraction \(\phi^{(a)}\) and acid with a volume fraction \(\phi^{(a)}_{+}\) in the supernatant domain merely undergo advection and diffusion, given by the mass conservation equations (\(D_{x}^{(a)}:\) diffusion constant of species \(x\) in the supernatant; Table 1) \[\frac{\partial\phi^{(a)}}{\partial t}+\nabla\cdot\overbrace{\left[\phi^{(a)} \mathbf{V}-D_{Cu}^{(a)}\nabla\phi^{(a)}\right]}^{\equiv\mathbf{Q}_{Cu}^{(a)}} =0\,, \tag{8}\] \[\frac{\partial\phi^{(a)}_{+}}{\partial t}+\nabla\cdot\overbrace{\left[\phi^{(a )}\mathbf{V}-D_{+}^{(a)}\nabla\phi^{(a)}_{+}\right]}^{\equiv\mathbf{Q}_{+}^{(a )}}=0\,. \tag{9}\] Next, we determine the boundary conditions. Eqs. 1-6 constitute a set of nonlinear differential equations eighth order in space and fifth order in time for the gel variables \(\boldsymbol{f}\equiv\{p,\mathbf{v},\mathbf{u},\phi^{(0)},\phi^{(0)}_{+},\phi^ {(b)}_{+}\}\,.\) They are coupled to Eqs. 7-9 for the supernatant domain variables \(\boldsymbol{f}^{(a)}\equiv\{P,\mathbf{V},\phi^{(a)},\phi^{(a)}_{+}\}\,,\) which are seventh order in space and second order in time, through the continuity conditions between the two domains, given by (\(\mathbf{\hat{n}}\,:\) unit normal vector of the gel surface) \[\begin{split} z=H\,:&\quad\mathbf{V}=\mathbf{v}+ \frac{\partial\mathbf{u}}{\partial t},\;P-p=\frac{k_{B}T}{v_{c}}\big{(}\eta_{ DP}\phi^{(0)}+\frac{\phi_{p}^{2}}{2}\big{)}\,,\\ &\mathbf{\hat{n}}\cdot\boldsymbol{\sigma}=\mathbf{\hat{n}}\cdot \boldsymbol{\sigma}^{(a)}\,,\quad\mathbf{\hat{n}}\cdot\mathbf{Q}_{Cu}=\mathbf{ \hat{n}}\cdot\mathbf{Q}_{Cu}^{(a)}\,,\\ &\mathbf{\hat{n}}\cdot\mathbf{Q}_{+}=\mathbf{\hat{n}}\cdot \mathbf{Q}_{+}^{(a)}\,,\quad\phi^{(0)}=\phi^{(a)}\,,\quad\phi^{(0)}_{+}=\phi^ {(a)}_{+}\,.\end{split} \tag{10}\] Here, the interfacial pressure jump between \(P\) and \(p\) merits discussion: When copper and acid are absent, the pressure difference is set by the polymer-induced gel osmotic stress that relaxes the gel to its equilibrium state, which we take as the reference state with zero strain. Adding a diffuso-osmotic agent such as copper will alter the solution pressure in the gel, and the pressure across the interface must equilibrate instantaneously [39; 40], leading to the second condition in Eq. 10. 
The boundary conditions for the gel attached to an impermeable and rigid substrate at \(z=0\) and for the impermeable supernatant domain boundary at \(z=H+H^{(a)}\) are given by (\(H^{(a)}:\) supernatant domain height; Table 1) \[\begin{split} z=0\,:&\quad\mathbf{\hat{n}}\cdot \mathbf{v}=0\,,\quad\mathbf{u}=0\,,\\ &\mathbf{\hat{n}}\cdot\mathbf{Q}_{Cu}=0\,,\quad\mathbf{\hat{n}} \cdot\mathbf{Q}_{+}=0\,,\end{split} \tag{11}\] \[\begin{split} z=H+H^{(a)}\,:&\quad\mathbf{V}=0\,, \quad P=0\,,\\ &\mathbf{\hat{n}}\cdot\mathbf{Q}_{Cu}^{(a)}=0\,,\quad\mathbf{\hat{n}} \cdot\mathbf{Q}_{+}^{(a)}=0\,.\end{split} \tag{12}\] To explain the vertical deformation dynamics in Ref. [28], we consider first, 1D uniaxial deformations in response to a uniform acid front advancing in the \(-\mathbf{\hat{z}}\) direction to the copper-laden gel and, second, 2D deformations due to an acid front with a Gaussian weak perturbation to investigate the effect of the potential nonuniformities during initial acid delivery in the experiments. In the linear elastic limit, we take the polymer volume fraction \(\phi_{p}\) and the COOH volume fraction \(\phi^{*}\) constant by ignoring the effect of small deformations on the concentrations (Table 1). In 1D, we denote the magnitudes of all the vector variables, which are along the \(\pm\mathbf{\hat{z}}\) direction, by \(u_{z}(z,t)\equiv\left|\mathbf{u}\right|,v(z,t)\equiv\left|\mathbf{v}\right|,V(z,t)\equiv\left|\mathbf{V}\right|.\) All non-vanishing tensors have a single scalar component, i.e., \(\boldsymbol{\sigma}_{zz}\equiv\mathbf{\hat{z}}\cdot\boldsymbol{\sigma}\cdot \mathbf{\hat{z}},\,\mathbf{\hat{z}}\cdot\boldsymbol{\epsilon}\cdot\mathbf{\hat {z}}=\partial u_{z}/\partial z\,,\) and \(\mathbf{\hat{z}}\cdot\mathbf{\hat{z}}=1\cdot\mathbf{\hat{z}}=1\,.\) When the initial conditions for \(u_{z},\phi^{(0)},\phi^{(0)}_{+},\phi^{(b)}_{+},\phi^{(a)},\phi^{(a)}_{+}\) are given, Eqs. 1-12 determine the uniaxial deformations as follows: Per Eqs. 10-12, the incompressibility conditions in Eqs. 1, 7 reduce to \(v+\partial u_{z}/\partial t=0\) and \(V=0\,,\) i.e., local gel deformations do not impose any net flow in the lab frame. This also leads to a diffusive stimulus dynamics in Eqs. 4, 5, 8, and 9. Then, using the unitless variables \(u_{z}^{\prime}\equiv u_{z}/H\,,z^{\prime}\equiv z/H\) (gel), \(z^{\prime}\equiv z/H^{(a)}\) (supernatant), \(t^{\prime}\equiv t/\tau\) where \(\tau\equiv\mu_{f}H^{2}/k_{f}\bar{p}\,,\,\bar{p}\equiv(2\mu+\lambda)\,,\) and dropping their primes, Eqs. 1-3 yield a dimensionless evolution equation for the gel displacement dynamics as (\(\gamma\equiv\tilde{\gamma}/\bar{p}\,,\chi\equiv\tilde{\chi}/\bar{p}\,,\nu_{ DP}\equiv k_{B}T\eta_{DP}/v_{c}\bar{p}\,;\) Table 1) \[\frac{\partial u_{z}}{\partial t}=\frac{\partial^{2}u_{z}}{\partial z^{2}}+ \gamma\frac{\partial\phi^{(b)}}{\partial z}+\chi\frac{\partial\phi^{(b)}_{+} }{\partial z}-\nu_{DP}\frac{\partial\phi^{(0)}}{\partial z} \tag{13}\] with the boundary conditions from Eqs. 10, 11 (\(\mathbf{\hat{n}}=\mathbf{\hat{z}}\)) \[u_{z}\big{|}_{z=0}=0\,,\quad\frac{\partial u_{z}}{\partial z}\bigg{|}_{z=1}=- \gamma\phi^{(b)}-\chi\phi^{(b)}_{+}\,. \tag{14}\] Eqs. 13, 14 are closed by the unitless forms of Eqs. 4-6, 8, 9 (i.e., Eqs. S1-S5) and the corresponding boundary conditions in Eqs. 10-12 with the unitless parameters defined in Table 1 (Sec. S1) [41]. 
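For orientation, Eq. 13 with the boundary conditions of Eq. 14 can be advanced with a standard explicit finite-difference scheme once the volume-fraction profiles are prescribed. The sketch below is an illustration only (it is not the finite-element implementation used for the results reported below, and the frozen profiles supplied for \(\phi^{(b)}\), \(\phi^{(b)}_{+}\), and \(\phi^{(0)}\) are hypothetical placeholders):

```python
import numpy as np

# Dimensionless parameters from Table 1
gamma, chi, nu_dp = 5.24, 2.62, 2.25e2

nz, dt = 201, 1.0e-6                 # grid points on z in [0, 1]; dt < dz^2/2 for stability
z = np.linspace(0.0, 1.0, nz)
dz = z[1] - z[0]

def step(u, phi_b, phi_b_plus, phi0):
    """One forward-Euler update of Eq. 13 with the boundary conditions of Eq. 14.

    phi_b, phi_b_plus, phi0 are the bound-copper, bound-acid, and free-copper
    volume-fraction profiles on the same grid (supplied externally here)."""
    d2u = np.zeros_like(u)
    d2u[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dz**2
    grad = lambda f: np.gradient(f, dz)
    rhs = d2u + gamma * grad(phi_b) + chi * grad(phi_b_plus) - nu_dp * grad(phi0)
    u_new = u + dt * rhs
    u_new[0] = 0.0                                            # u_z(0, t) = 0
    # du/dz at z = 1 equals -gamma*phi_b - chi*phi_b_plus (one-sided difference)
    u_new[-1] = u_new[-2] + dz * (-gamma * phi_b[-1] - chi * phi_b_plus[-1])
    return u_new

# Hypothetical frozen fields, only to make the sketch runnable:
phi_b = 0.018 * np.ones(nz)          # half of phi* = 0.036 bound as chelates
phi_b_plus = np.zeros(nz)
phi0 = 0.003 * z                     # an imposed free-copper gradient along +z
u = -gamma * phi_b * z               # contracted initial profile (cf. Eq. 15a below)
u = step(u, phi_b, phi_b_plus, phi0)
```

In the full model these profiles are of course updated self-consistently through the transport and rate equations at every step rather than held fixed.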
The seven initial conditions for the contracted gel with stored Cu\({}^{2+}\) are \[u_{z}=-\gamma\phi^{(b)}z\,,\phi^{(b)}=\frac{\phi^{*}}{2}\,,\phi^{(b)}_{+}=\phi ^{(0)}=\phi^{(0)}_{+}=0\,,\] (15a) and in the supernatant solution \[\phi^{(a)}=0\,,\phi^{(a)}_{+}(z)=\frac{\phi^{(a)}_{+,i}}{2}\left[1+\tanh\left( \Gamma(z-z_{0})\right)\right]\,, \tag{15b}\] where \(\Gamma=\Gamma^{\text{(1D)}}\) and \(z_{0}=z_{0}^{\text{(1D)}}\) are given in Table 1. In 2D, by making the horizontal coordinate \(x\) unitless with the domain length \(L\,,\) we complement the unitless forms of Eqs. 1-12 with periodic boundary conditions at \(x=0\) and \(x=1\) for \(\boldsymbol{f}\,,\boldsymbol{f}^{(a)},\boldsymbol{\sigma},\boldsymbol{ \sigma}^{(a)},\) and all flux terms. Eqs. 15a, 15b again hold with \(\Gamma=\Gamma^{\text{(2D)}}\) and \(z_{0}=z_{0}^{\text{(2D)}}\), which is given by a Gaussian profile, as well as with an additional initial condition for horizontal deformations \[u_{x}\equiv\mathbf{u}\cdot\mathbf{\hat{x}}=0\,,\quad z_{0}^{\text{(2D)}}=h_{1} -h_{2}e^{-\frac{(x-1/2)^{2}}{2\lambda^{2}}}\,. \tag{16}\] Here \(h_{2}\) is obtained from the initial supernatant acid amount constraint \(\phi^{(a)}_{+,i}(1-z_{0}^{\text{(1D)}})\) when \(h_{1}\) is fixed, and \(\lambda\) sets a perturbation with about the capillary length of water \(\ell_{c}\sim 1\) mm (see Fig. 3a for the initial acid profile). The values of \(h_{1}\,,h_{2}\,,\lambda\,,\) and \(\Gamma^{\text{(2D)}}\) are given in Table 1. To compare our simulations for a 1D uniform acid front and weakly perturbed 2D acid front with experiments, we used the uniform acid delivery and uniaxial deformation data illustrated in Fig. 3 of Ref. [28]. The experimental gel height \(h(t)\) and the total cross-sectional bound copper in the gel \(\phi^{(b)}_{total}(t)\) are post-processed as detailed in Sec. S2 [41]. For uniaxial deformations, we solved Eqs. 13-15b and Eqs. S1-S5 by using the FEniCS finite element analysis (FEA) library on Python 3.9 [41; 42]. For weakly perturbed 2D dynamics, we solved Eqs. 1-12, 15a-16 with periodic boundary conditions along the \(x-\)axis via the COMSOL Multiphysics 5.4 FEA package [43]. The two FEA libraries yield the same result for uniform deformations (Fig. S1) [41]. All characteristic physical scales of the system and the unitless simulation parameters are listed in Table 1. Our main results are demonstrated in Fig. 2. Upon the diffusion of 1 M acid (\(\phi^{(a)}_{+,i}=0.006\)) into the copper-laden gel from the supernatant domain, the gel height exhibits a temporal spike with a magnitude about \(\lesssim\)10% of the equilibrium height \(H\) in quantitative agreement with the experiments (Fig. 2a). This rapid swelling followed by the relatively slower contraction can be understood by considering the interplay among the flux of acid and its complexation with the gel, the subsequent release of bound copper and the diffusio-phoretic solvent inflow induced by it, and the poroelastic gel relaxation at longer times (Fig. S2-S4) [41]: Although our theory suggests that diffusio-phoresis can induce rapid gel deformations at a timescale \(\tau/\nu_{DP}\ll\tau\), the swelling rate is limited by the overall release time \(\tau_{total}\approx 0.82\tau\) of the height-averaged bound copper \(\phi_{total}^{(b)}\) (Fig. 2b). As a result, the gel undergoes continual diffusio-phoretic swelling until \(t\approx\tau_{total}\) when the height reaches maximum (Fig. 2a). 
This swelling time is still faster than \(\tau\) and can potentially be improved by considering nonlinear deformations driven by higher acid concentrations. Throughout swelling and subsequent relaxation, because the bound copper is instantaneously replaced by the acid on the gel backbone (\(r\gg 1\), Table 1), the combination of the second and third terms on the right-hand side of Eq. 13 is negligible, and the flux condition in Eq. 14 is nearly constant (Fig. S5) [41]. Therefore, the decay from the maximum height is governed by the competition between the diffusive relaxation of the deformations and the residual diffusio-phoretic swelling. This leads to a subdiffusive relaxation dynamics with a timescale \(\tau_{r,1}\approx 0.54\tau\), which is higher than the timescale \(\tau_{D}\equiv 4\tau/\pi^{2}\approx 0.4\tau\) of the purely diffusive relaxation dynamics at the leading order (Sec. S3, Fig. S6) [41]. The diffusive contribution ceases at \(t\gtrsim 4\tau\), and the long-time slow relaxation is henceforth governed by the ever-damping diffusio-phoretic term with a timescale \(\tau_{r,2}\approx 5.2\tau\) (Fig. S6) [41]. To validate that the swelling response in Fig. 2a is driven by the rapid release of the Cu\({}^{2+}\) ions, the same acid amount was slowly added over successive steps with concentrations ranging from 0.01 M to 1 M, which led to no deformation (Fig. 2c, circles) [28]. Here we simulate only the first two steps of acid addition with \(\phi_{+,i}^{(a)}=6\times 10^{-5}\) (\(\sim\)0.01M) at \(t=0\) and \(\phi_{+,i}^{(a)}=3\times 10^{-4}\) (\(\sim\)0.05M) at \(t=8.4\tau\). Our numerical results yield marginal deformations of about 0.1% and 0.5% of the equilibrium height \(H\) that fall within the experimental error of \(\pm 1\%\)\(H\) (Fig. 2c). Swelling is suppressed for the low acid concentrations since the bound Cu\({}^{2+}\) release rate is drastically reduced (Fig. 2d). In this limit, the gel poroelastic relaxation can balance the diffusio-phoretic swelling that is slowed down by the low bound Cu\({}^{2+}\) release rate. Consequently, both rate terms in Eq. 13 are at play, and our numerical analysis reveals that the relaxation of the minute deformations is always subdiffusive (Fig. S6) [41]. Although the 2D weakly perturbed gel swelling and bound copper release profiles deviate only slightly from the 1D uniaxial deformation results (Fig. 2a-d, blue curves), the 2D simulations reveal the nature of the surface deformation dynamics during the swelling and relaxation stages. Fig. 3 and the Movies S1, S2 demonstrate the hydrogel deformations, the accompanying fluid flow in the gel and the supernatant, and the bound copper as a function of time when adding 1M acid initially (Fig. 3a-e, Movie S1) and 0.05 M acid delivery at \(t=8.4\tau\) after the initial 0.01M acid addition step (Fig. 3f-j, Movie S2) [41].

Figure 3: **Gel response to a 2D acid stimulus.** **(a)** For the addition of 1M acid with a Gaussian perturbation (Eqs. 15b, 16), the gel dynamics at **(b)** \(t=0.1\tau\), **(c)** \(t=0.3\tau\), **(d)** \(t=0.5\tau\) and **(e)** \(t=0.7\tau\) within the boxed region shown in (a). **(f)** Upon adding 0.05M acid with a perturbed front at \(t=8.4\tau\), the gel dynamics at **(g)** \(t=8.5\tau\), **(h)** \(t=9.2\tau\), **(i)** \(t=16\tau\) and **(j)** \(t=24\tau\) within the boxed region shown in (f). The red streamlines indicate the computed fluid flow in the lab frame (line width: logarithm of the flow speed, arrows: flow direction). The blue color scale indicates the volume fraction of the bound copper, and \(\alpha=770\) (Table 1).

For strong acid, the penetration of the Gaussian stimulus front into the gel triggers a local swelling bump associated with a convective flow (Fig. 3b-c). Later, the flow reverses direction at the center (\(x=L/2\)) at the onset of break-up of the single bump into two swelling fronts (\(t=0.5\tau\), Fig. 3d), which then travel in opposite directions at the gel surface in phase with the copper decomplexation front (\(t=0.7\tau\), Fig. 3e) and decay at longer times along with the diminishing flow streamlines. Similar traveling deformation fronts sensitive to the acid progression rate and direction along the substrate were reported in Ref. [28]. Note that the swelling fronts are not solitons since, upon passing through each other, two such waves would end up in bound-copper-depleted regions and thus rapidly be annihilated. For weak acid, because the deformation and flow are negligible, a traveling front at the gel surface does not form (Fig. 3g-j). By introducing diffusio-phoretic mobility to account for the change in the PAA gel hypotonicity, our theory quantitatively captures the fast swelling dynamics due to the interplay between the copper and acid. This core non-equilibrium mechanism also underlies the swelling response when the copper ion is replaced by the calcium ion, a ubiquitous signal mediator in biology [51, 52, 28]. Thus, the generic reaction-transport pathways herein may enable adaptive and agile biomimetic soft actuators driven by gel diffusio-phoresis, which nevertheless requires further studies: First, it needs to be validated by microscopic approaches such as molecular dynamics simulations. Second, the linear poroelastic swelling in Fig. 2a only produces a strain rate of \(\sim 0.04\) s\({}^{-1}\) and a power density of \(\sim 11.1\) mW/kg (Sec. S4, Fig. S7), which are surpassed by certain porous gels that generate 0.2 s\({}^{-1}\) and 260 mW/kg and PAA microgel suspensions that achieve 230 mW/kg through the osmotic swelling of dry crosslinked polymers exposed to a solvent [41, 24, 26]. Therefore, our analysis must be extended to nonlinear large deformations to test high strain rates and power densities based on the inequality \(D_{DP}\gg D\) allowed by the gel diffusio-phoresis [53, 39]. Third, hydrogels that combine diffusio-phoresis with periodic actuations by means of cyclic chemical feedback (such as those in Belousov-Zhabotinsky gels [54, 25]) must be designed to demonstrate feasibility for engineering applications. Altogether, these steps will pave the way for internally powered gel-based proof-of-concept soft robots with enhanced precision, versatility, and dexterity. We thank J. Aizenberg, J. Barone, S. Cheng, J. Gray, A. Grinthal, M. Pleimling, and W. Shu for fruitful discussions and the Virginia Tech College of Science for financial support. We acknowledge the Virginia Tech Advanced Research Computing Center for computing resources.
\begin{table} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{**Parameters in real units**} \\ \hline gel height: \(H=10^{-5}\)m \({}^{(*)}\) & viscosity: \(\mu_{f}=10^{-3}\)Pa\(\cdot\)s \({}^{(*)}\) & permeability: \(k_{f}=4\times 10^{-19}m^{2}\) \\ gel length: \(L=2.5\times 10^{-2}\)m \({}^{(*)}\) & thermal energy: \(k_{B}T\approx 10^{-21}J\) & molecular volume: \(v_{c}=10^{-29}m^{3}\) \({}^{(*)}\) \\ \hline \multicolumn{3}{|c|}{**Poroelastic deformation scales**} \\ \hline elastic constant: \(\bar{p}\equiv 2\mu+\lambda=10^{5}\) Pa \({}^{(\dagger)}\) & Timescale: \(\tau\equiv\mu_{f}H^{2}/k_{f}\bar{p}=2.5\) s & Diffusivity: \(D\equiv H^{2}/\tau=4\times 10^{-11}m^{2}/s\) \\ \hline \multicolumn{3}{|c|}{**Unitless parameters for gel domain**} \\ \hline aspect ratio: \(\delta\equiv H/L=4\times 10^{-4}\) & reaction rate: \(r\equiv\tilde{r}\tau=1.25\times 10^{4}\) & mobility: \(\nu_{DP}\equiv D_{DP}/D=2.25\times 10^{2}\)\({}^{(\dagger)}\) \\ Cu\({}^{2+}\) diffusivity: \(\xi_{Cu}\equiv D_{Cu}/D=15\)\({}^{(\dagger)}\) & H\({}^{+}\) diffusivity: \(\xi_{+}\equiv D_{+}/D=750\)\({}^{(\dagger)}\) & Cu\({}^{2+}\) stress modulus: \(\gamma\equiv\tilde{\gamma}/\bar{p}=5.24\) \\ Acid stress modulus: \(\chi\equiv\tilde{\chi}/\bar{p}=2.62\) & polymer volume fraction: \(\phi_{p}=0.04\)\({}^{(*)}\) & COO\(-\) volume fraction: \(\phi^{*}=0.036\)\({}^{(*)}\) \\ \hline \multicolumn{3}{|c|}{**Unitless parameters for supernatant domain**} \\ \hline Cu\({}^{2+}\) diffusivity: \(\xi_{Cu}^{(a)}\equiv D_{Cu}^{(a)}/\alpha^{2}D=4\times 10^{-5}\)\({}^{(\dagger)}\) & height ratio: \(\alpha\equiv H^{(a)}/H=770\)\({}^{(*)}\) \\ H\({}^{+}\) diffusivity: \(\xi_{+}^{(a)}\equiv D_{+}^{(a)}/\alpha^{2}D=1.3\times 10^{-3}\)\({}^{(\dagger)}\) & \\ \hline \multicolumn{3}{|c|}{**Unitless parameters for initial supernatant acid distribution**\({}^{(\ddagger)}\)} \\ \hline \(\Gamma^{(\text{1D})}=1.25\times 10^{2}\), \(\Gamma^{(\text{2D})}=8\times 10^{3}\), \(z_{0}^{(\text{1D})}=1.04\), \(h_{1}=1.04158\), \(h_{2}=0.03158\), \(\lambda=0.02\). \\ \hline \end{tabular} \end{table} Table 1: **Simulation parameters**. The parameters denoted by \((*)\) correspond to the experimental values in Ref. [28]. For the parameters labeled by \((\dagger)\), the acid diffusivities were assumed \(D_{+}=D_{+}^{(a)}\approx 3\times 10^{-8}m^{2}/s\) within the experimental range given in Refs. [44, 45], the copper diffusivities were taken from Ref. [46] as \(D_{Cu}\approx 6\times 10^{-10}m^{2}/s\) and \(D_{Cu}^{(a)}\approx 10^{-9}m^{2}/s\) assuming that the Cu\({}^{2+}\) diffusion in the gel must be slower than in the supernatant due to the polymer presence. The diffusio-phoretic mobility is estimated as \(D_{DP}=k_{B}TR_{e}^{2}/v_{c}\mu_{f}\) for steric repulsions between the polymer and the Cu\({}^{2+}\) ions with an exclusion radius \(R_{e}=3\times 10^{-10}\) m, equivalent to \(\eta_{DP}=R_{e}^{2}/k_{f}=0.225\) (Eq. 3) [32]. The pressure scale \(\bar{p}\) is estimated from Refs. [47, 48, 49, 50]. The parameters denoted by \((\ddagger)\) are chosen to model an acid signal front as a step function in 1D, which is weakly perturbed with a Gaussian profile along the \(x-\)axis in 2D (Eq. 16). The value of \(\Gamma^{(\text{2D})}\) approximates the built-in step function in Comsol per Eq. 15b [43]. The standard deviation \(\lambda\) ensures that the width of the non-uniformity is comparable to the water capillary length \(\ell_{c}\approx 10^{-3}\)m.
Among the parameters not labeled by a superscript, \(k_{f}\), \(\bar{p}\) and \(r\) are chosen in physically plausible ranges, whereas the low aspect ratio \(\delta\ll 1\) models the gel as a thin film. The chemical stress moduli \(\gamma\) and \(\chi\) are found by fitting the experimental gel height of a copper-laden (\(t=0\)) and fully acid-complexed (\(t\rightarrow\infty\)) gel film, respectively (Sec. S2) [41].
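The derived scales quoted in Table 1 follow directly from the listed physical parameters; the short check below reproduces them using only numbers stated in the table and its caption.

```python
# Physical parameters quoted in Table 1 and its caption (SI units)
H, mu_f, k_f = 1.0e-5, 1.0e-3, 4.0e-19      # gel height, viscosity, permeability
kBT, v_c, R_e = 1.0e-21, 1.0e-29, 3.0e-10   # thermal energy, molecular volume, exclusion radius
p_bar = 1.0e5                               # elastic constant 2*mu + lambda

tau = mu_f * H**2 / (k_f * p_bar)           # poroelastic timescale      -> 2.5 s
D = H**2 / tau                              # poroelastic diffusivity    -> 4e-11 m^2/s
D_DP = kBT * R_e**2 / (v_c * mu_f)          # diffusio-phoretic mobility -> 9e-9 m^2/s
eta_DP = R_e**2 / k_f                       # dimensionless coefficient  -> 0.225
nu_DP = D_DP / D                            # mobility ratio             -> 225

print(tau, D, D_DP, eta_DP, nu_DP)
```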
2309.09987
TCGF: A unified tensorized consensus graph framework for multi-view representation learning
Multi-view learning techniques have recently gained significant attention in the machine learning domain for their ability to leverage consistency and complementary information across multiple views. However, there remains a lack of sufficient research on generalized multi-view frameworks that unify existing works into a scalable and robust learning framework, as most current works focus on specific styles of multi-view models. Additionally, most multi-view learning works rely heavily on specific-scale scenarios and fail to effectively comprehend multiple scales holistically. These limitations hinder the effective fusion of essential information from multiple views, resulting in poor generalization. To address these limitations, this paper proposes a universal multi-view representation learning framework named Tensorized Consensus Graph Framework (TCGF). Specifically, it first provides a unified framework for existing multi-view works to exploit the representations for individual view, which aims to be suitable for arbitrary assumptions and different-scales datasets. Then, stacks them into a tensor under alignment basics as a high-order representation, allowing for the smooth propagation of consistency and complementary information across all views. Moreover, TCGF proposes learning a consensus embedding shared by adaptively collaborating all views to uncover the essential structure of the multi-view data, which utilizes view-consensus grouping effect to regularize the view-consensus representation. To further facilitate related research, we provide a specific implementation of TCGF for large-scale datasets, which can be efficiently solved by applying the alternating optimization strategy. Experimental results conducted on seven different-scales datasets indicate the superiority of the proposed TCGF against existing state-of-the-art multi-view learning methods.
Xiangzhu Meng, Wei Wei, Qiang Liu, Shu Wu, Liang Wang
2023-09-14T19:29:14Z
http://arxiv.org/abs/2309.09987v1
# TCGF: A unified tensorized consensus graph framework for multi-view representation learning ###### Abstract Multi-view learning techniques have recently gained significant attention in the machine learning domain for their ability to leverage consistency and complementary information across multiple views. However, there remains a lack of sufficient research on generalized multi-view frameworks that unify existing works into a scalable and robust learning framework, as most current works focus on specific styles of multi-view models. Additionally, most multi-view learning works rely heavily on specific-scale scenarios and fail to effectively comprehend multiple scales holistically. These limitations hinder the effective fusion of essential information from multiple views, resulting in poor generalization. To address these limitations, this paper proposes a universal multi-view representation learning framework named Tensorized Consensus Graph Framework (TCGF). Specifically, it first provides a unified framework for existing multi-view works to exploit the representations for individual view, which aims to be suitable for arbitrary assumptions and different-scales datasets. Then, stacks them into a tensor under alignment basics as a high-order representation, allowing for the smooth propagation of consistency and complementary information across all views. Moreover, TCGF proposes learning a consensus embedding shared by adaptively collaborating all views to uncover the essential structure of the multi-view data, which utilizes view-consensus grouping effect to regularize the view-consensus representation. To further facilitate related research, we provide a specific implementation of TCGF for large-scale datasets, which can be efficiently solved by applying the alternating optimization strategy. Experimental results conducted on seven different-scales datasets indicate the superiority of the proposed TCGF against existing state-of-the-art multi-view learning methods. Multi-view learning, Unified framework, Consensus graph, Low-rank tensor representation, Large-scale datasets, Iterative alternating strategy ## I Introduction With the rapid development of the big data era, more and more data can be obtained from different domains or described from various perspectives, which have gained extensive attention from researchers in recent years. For examples, the document could be translated as different versions via various languages [1]; an image could be represented by different visual descriptors [2, 3] to reveal its color, texture, and shape information. Differing from single view, multi-view data are complementary to each other, which motivates the development of multi-view learning [4]. Considering the diversity of multiple views, it is essential for multi-view learning to exploit how to properly uncover rich information across views for improving practical performance. Multi-view learning methods have been extensively investigated in various applications, including classification [5, 6], clustering [7, 8], and reidentification [9, 10]. This paper primarily focuses on the background of unsupervised multi-view learning without label information, such as clustering tasks. Graph-based multi-view methods [11, 12, 13, 14, 15, 16, 17] are prevalent since graphs can effectively represent data structures. The most representative group of graph-based multi-view methods aim to fuse multiple graphs into a common latent space shared by all views. 
For example, Multiple Kernel Learning (MKL) [11, 14] proposes a natural way to integrate different views by directly combining different views for learning a common representation. Unlike MKL, parameter-free multi-view learning methods [12] provide a self-weighting strategy to fuse multiple graph information without additional parameters. Furthermore, learning a shared graph among all views is an efficient way to integrate the diverse information within multi-view data, e.g., Graph-based Multi-view Clustering (GMC) [8] and Multiview Latent Proximity Learning (MLPL) [16]. To handle large-scale multi-view datasets, bipartite graph-based fast methods [18, 19] have been proposed to obtain the consensus bipartite graph by linearly combining the bipartite graphs, which are adaptively learned from the corresponding views. Another group of representative multi-view learning methods is self-representation based multi-view learning, which learns a self-representation matrix to act as the affinity, such as Low-Rank Representation (LRR) [20, 21], Sparse Subspace Clustering (SSC) [22, 23], etc. Motivated by LRR and SSC, multi-view works [24, 25, 26, 27] are further developed to learn affinity matrices based on the self-representation property. For example, MDcR [24] utilizes the complementarity information of multiple views based on the Hilbert Schmidt Independence Criterion (HSIC) as a regularization term to enhance the correlations across views and explores the correlations within each view jointly. The work [25] learns a shared affinity representation for multi-view subspace clustering by simultaneously considering the diversity regularization and a rank constraint. Apart from the works on multi-view learning mentioned earlier, recent tensor-based multi-view learning methods [28, 29, 30, 31] have also played an important role in exploring diversity and complementary information across views. 
These methods attempt to capture high-order correlations among multiple views in tensor space by applying tensor nuclear norms [32, 33, 34] and tensor-Singular Value Decomposition (t-SVD) [35]. For example, T-SVD-MSC [26] models self-representation matrices of different views as tensors, which explores the consistency information of multi-view data in tensor space. The work [28] aggregates multiple subspace representations into a third-order tensor, imposing a low-rank constraint by combining the nuclear norms of all matrices unfolded along each view. Furthermore, LTBPL [30] utilizes the weighted tensor nuclear norm to recover the comprehensiveness in the low-rank constrained tensor stacked by multiple low-rank probability affinity matrices, and links consensus indicator graphs with view-specific representations carrying different adaptive confidences. The work [29] incorporates feature-space-based missing-view inferring with a low-rank tensor constraint to recover the missing views and explore the full information of the recovered views and available views. IMVTSC-MVI [31] utilizes tensorial modeling to capture both the pairwise correlations between samples and the higher-order correlations between features, resulting in a more accurate and robust multi-view clustering method.
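Since several of the tensor-based works above (and the weighted variant adopted by TCGF later in this paper) rely on t-SVD based tensor nuclear-norm minimization, the following is a minimal NumPy sketch of the standard, unweighted singular-value thresholding step that such formulations typically use; it is an illustration of the generic recipe (FFT along the third mode, slice-wise SVD shrinkage, inverse FFT) rather than the exact operator of any particular method cited here.

```python
import numpy as np

def tsvd_svt(X, tau):
    """Proximal operator of the (unweighted) t-SVD tensor nuclear norm.

    X   : real tensor of shape (n1, n2, n3), e.g. stacked view-wise affinity matrices
    tau : shrinkage threshold
    Returns the tensor whose frontal slices, in the Fourier domain along the
    third mode, have soft-thresholded singular values."""
    n1, n2, n3 = X.shape
    Xf = np.fft.fft(X, axis=2)
    Yf = np.zeros_like(Xf)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)
        Yf[:, :, k] = (U * s) @ Vh
    return np.real(np.fft.ifft(Yf, axis=2))

# Example: shrink a tensor stacking V view-specific N x N representations
N, V = 50, 3
Z = np.stack([np.random.rand(N, N) for _ in range(V)], axis=2)
Z_lowrank = tsvd_svt(Z, tau=1.0)
```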
### _Motivations_ Despite the significant progress made by current multi-view learning methods in practical applications, several limitations still require further attention. One such limitation is the heavy dependence of some methods on predefined adjacency matrices, which hinders their flexibility and limits their applicability to various multi-view scenarios. For example, many graph-based methods rely heavily on predefined similarity matrices of the different views. However, due to the complexity of and unknown prior information about the geometric structure of the views, manually constructing a suitable similarity matrix remains an open problem, often resulting in suboptimal performance. Meanwhile, we also observe that most multi-view learning works are tailored to scenarios of a specific scale, which prevents them from comprehensively handling multi-view scenarios of different scales. For instance, bipartite graph-based multi-view methods are often used to efficiently process large-scale multi-view datasets, but their performance is not as promising when applied to normal-scale scenarios. Similarly, graph-based or tensor-based multi-view learning methods encounter difficulties in dealing with large-scale datasets, owing to their computational complexity being quadratic or cubic in the data size. Given the success of multi-view learning methods in achieving impressive performance, it is evident that view fusion and tensor construction play important and essential roles in promoting correlation and consistency among multiple views. Therefore, how to design a more flexible and robust multi-view learning method on the basis of the works mentioned earlier is an essential yet challenging problem. To tackle these challenges, this paper proposes a unified multi-view learning framework that can transform a wide range of existing multi-view learning approaches into a unified formulation, thereby enhancing its flexibility and suitability for various multi-view scenarios. ### _Contributions_ To simultaneously address the above limitations, this paper proposes a novel multi-view representation learning framework named Tensorized Consensus Graph Framework (TCGF). It first provides a unified framework for existing multi-view works to learn the view-specific basics and representations, and then stacks the representations of all views into a tensor as a high-order representation. Then, TCGF utilizes weighted tensor singular value decomposition (t-SVD) based tensor rank minimization to exploit the complementary information among different views. Different from existing tensor-based multi-view learning works, the weighted t-SVD operator can be constructed along the axis of either the basics or the instances, enhancing its potential to uncover complementary information. Moreover, the view-consensus grouping effect is formulated between the view-specific representations and the consensus representation to capture the consensus information across different views, which enforces a multi-view smoothness regularization on the shared space. Consequently, TCGF not only casts most existing embedding works into a unified formulation but also simultaneously considers the diverse, complementary, and consensus information across multiple views. Notably, we additionally observe that it is feasible to select the appropriate embedding manner as well as its basics to handle multi-view datasets of different scales. 
To comprehensively validate the effectiveness of TCGF, we conduct extensive experiments on seven multi-view datasets. Experimental results demonstrate that the proposed TCGF can outperform the current state-of-the-art multi-view learning methods in most situations. The major contributions of this paper can be summarized as follows: * We propose a novel multi-view representation learning framework named TCGF to learn the shared embedding, which serves as a unified framework for existing multi-view works. * The axis-free weighted t-SVD operator is designed to exploit the complementary information among different views, which further extends the applications of tensor rank minimization. * We propose the view-consensus grouping effect to regularize the view-consensus representation, which enables the discovery of essential structure information within the multi-view data. * The experimental results conducted on seven different-scale datasets demonstrate that TCGF is not only capable of maintaining or surpassing the performance of other state-of-the-art multi-view methods, but is also adaptable to datasets with different scales. ## II Related work Existing multi-view methods can be divided into two categories according to the means of calculating the affinity representation, i.e., graph-based and self-representation-based multi-view models. Besides, tensor-based multi-view methods also play an important role in exploring complementary information across views. ### _Graph-based Multi-view Learning_ The graph-based learning framework involves learning an affinity matrix \(\mathbf{S}^{v}\) that encodes the similarity between different samples in the \(v\)th view. This affinity matrix \(\mathbf{S}^{v}\) is learned by minimizing the distance between samples in the latent space, which can be formulated as \[\min_{\mathbf{S}^{v}\in\mathbf{C}}\sum_{i=1}^{N}\sum_{j=1}^{N}d(\mathbf{x}^{v}_{i},\mathbf{x }^{v}_{j})\mathbf{S}^{v}_{i,j}+\lambda\mathbf{\Omega}(\mathbf{S}^{v}), \tag{1}\] where \(d(\mathbf{x}^{v}_{i},\mathbf{x}^{v}_{j})\) denotes the distance between two samples, which can be calculated by the \(L_{1}\)-norm, the Euclidean distance, the Mahalanobis distance, etc. \(\mathbf{C}\) and \(\mathbf{\Omega}(\mathbf{S}^{v})\) stand for the constraint and regularization terms on the affinity matrix \(\mathbf{S}^{v}\), respectively. Based on the view-specific affinity matrix \(\mathbf{S}^{v}\), graph-based multi-view methods aim to learn an intrinsic representation that captures both consistent and complementary information among multiple views. The most representative group of multi-view methods [11, 12, 13, 14, 15, 36, 37, 8] aims to fuse multiple features or graphs into one common latent space shared by all views. Multiple Kernel Learning (MKL) [11, 14] is a natural way to integrate different views by directly combining them and learning a common low-dimensional representation. Different from MKL, parameter-free multi-view learning methods [12] provide a self-weighting strategy to fuse the graph information of multiple views without additional parameters. Besides, learning a shared graph among all views is also an efficient manner to integrate the diverse information within multi-view data, e.g., Graph-based Multi-view Clustering (GMC) [8] and Multiview Latent Proximity Learning (MLPL) [16]. Due to the quadratic computational complexity of graph-based works, these methods might be inefficient in dealing with large-scale multi-view datasets. 
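To make the role of the predefined affinity matrix \(\mathbf{S}^{v}\) more concrete, the following minimal sketch (in Python/NumPy, not taken from any of the cited works) builds one common closed-form instantiation: a row-normalized k-nearest-neighbor Gaussian affinity for a single view. The neighborhood size \(k\) and the bandwidth heuristic are illustrative assumptions; other normalizations (e.g., column-stochastic graphs) are equally possible.

```python
import numpy as np

def knn_gaussian_affinity(X, k=10, sigma=None):
    """Row-normalized k-NN Gaussian affinity matrix S for a single view.

    X : (N, d) array holding the N samples of one view.
    Returns an (N, N) matrix with non-negative entries whose rows sum to 1.
    """
    sq = np.sum(X ** 2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    np.fill_diagonal(D2, np.inf)                      # exclude self-similarity
    if sigma is None:                                 # heuristic bandwidth
        sigma = np.sqrt(np.median(D2[np.isfinite(D2)]))
    N = X.shape[0]
    S = np.zeros((N, N))
    for i in range(N):
        idx = np.argpartition(D2[i], k)[:k]           # k nearest neighbors
        w = np.exp(-D2[i, idx] / (2.0 * sigma ** 2))
        S[i, idx] = w / (w.sum() + 1e-12)             # normalize row i
    return S
```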
Due to this quadratic complexity, bipartite graph-based multi-view methods [18, 19, 38] have aroused widespread research interest as a means of reducing both the computational and the storage complexity, where the bipartite graph can well represent the relationship between the \(N\) samples and \(K\) (\(K\ll N\)) anchors. It is worth noting that the performance of the aforementioned graph-based methods heavily depends on the quality of the predefined view-specific affinity matrix \(\mathbf{S}^{v}\). However, it is still an open problem to manually construct a suitable similarity matrix for each view due to the complex and unknown geometric structure of multi-view data, which limits their applicability. ### _Self-representation based Multi-view Learning_ Another line of graph-based methods exploits the so-called self-expressiveness property and learns affinity matrices based on self-representation. Specifically, self-representation is an important subspace learning technique in which each sample is expressed as a linear combination of the other samples, which can be formulated as \[\min_{\mathbf{S}^{v}\in\mathbf{C}}\left\|\mathbf{X}^{v}-\mathbf{X}^{v}\mathbf{S}^{v}\right\|_{2}+\lambda\mathbf{\Omega}(\mathbf{S}^{v}), \tag{2}\] where the regularization term \(\mathbf{\Omega}(\mathbf{S}^{v})\) on the affinity matrix \(\mathbf{S}^{v}\) is the core component. For example, the work [20] adopts a low-rank constraint as the regularization term \(\mathbf{\Omega}(\mathbf{S}^{v})\), which aims to approximately recover the row space with theoretical guarantees while removing arbitrary sparse errors. Motivated by LRR [20, 21] and SSC [22, 23], multi-view works [24, 25, 26] have been further developed to learn affinity matrices based on the self-representation property. For example, MDcR [24] utilizes the complementary information of multiple views, employing the Hilbert Schmidt Independence Criterion (HSIC) as a regularization term to enhance the correlations across views, and jointly explores the correlations within each view. The work [25] learns a shared affinity representation for multi-view subspace clustering by simultaneously considering a diversity regularization and a rank constraint. Even though the above self-representation multi-view works obtain impressive performance and efficiency, most of them share the following limitation: they aim to study a common representation or the pairwise correlations between views, leading to the loss of comprehensiveness and of deeper higher-order correlations among multi-view data, and hence miss important underlying semantic information. ### _Tensor-based Multi-view learning_ To capture the high-order correlations among multiple views, tensor-based multi-view learning methods [26, 28, 29, 31] have also been developed in recent years, which play an important role in effectively exploring the comprehensive information among multiple views. The core idea of tensor-based multi-view learning methods is to model high-order correlations across views in tensor space through a low-rank constraint based on different tensor nuclear norms [32, 33, 34]. For example, T-SVD-MSC [26] imposes a tensor low-rank constraint on the stacked subspace representation matrices to capture the high-order complementary information among multiple views by introducing the tensor singular value decomposition. The work [28] aggregates multiple subspace representations into a third-order tensor, imposing a low-rank constraint by combining the nuclear norms of all matrices unfolded along each view. 
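As a small illustration of the unfolding-based low-rank surrogate mentioned above, the sketch below stacks per-view affinity matrices into a third-order tensor and sums the nuclear norms of its mode unfoldings. This is a generic overlapped-nuclear-norm construction for illustration only, not the exact formulation or code of [28].

```python
import numpy as np

def stack_views(view_graphs):
    """Stack M view-specific (N x N) affinity matrices into an N x N x M tensor."""
    return np.stack(view_graphs, axis=2)

def unfold(T, mode):
    """Mode-k unfolding of a third-order tensor (rows indexed by mode k)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def sum_of_unfolding_nuclear_norms(T):
    """Sum of matrix nuclear norms of the three unfoldings -- a common convex
    surrogate for a low (Tucker) tensor rank."""
    return sum(np.linalg.norm(unfold(T, m), ord='nuc') for m in range(3))
```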
The work [29] studies t-SVD based weighted tensor nuclear norm minimization to shrink different matrix singular values with the corresponding confidence. Moreover, TMvC [31] proposes a high-order graph to uncover the essential information stored in multiple views, adopting low-rank constraints along the horizontal and vertical directions to better uncover the inter-view and inter-class correlations in multi-view data. The work [29] incorporates feature-space-based missing-view inferring with a low-rank tensor constraint to recover the missing views and exploit the full information of the recovered and available views. IMVTSC-MVI [31] utilizes tensorial modeling to capture both the pairwise correlations between samples and the higher-order correlations between features, resulting in a more accurate and robust multi-view clustering method. Although these tensor-based approaches have achieved promising results, most of them mainly focus on graph-based settings, resulting in cubic computational complexity. Meanwhile, they usually fail to simultaneously consider the inter-view and intra-view relationships among samples. ## III The Proposed Method In this section, we first introduce the main notations and definitions used in this paper. Then, we describe the construction process of the proposed TCGF in detail. Correspondingly, we provide a typical implementation of our proposed framework for large-scale datasets. For clarity, the flowchart of TCGF is shown in Fig. 1. ### _Notations and Problem Definition_ We use bold calligraphic letters for tensors, e.g., \(\mathbf{\mathcal{A}}\in\mathbb{R}^{n_{1}\times n_{2}\times n_{3}}\), and bold upper case letters for matrices, e.g., \(\mathbf{A}\in\mathbb{R}^{n_{1}\times n_{2}}\). \(\mathbf{A}(i,j)\) denotes the \((i,j)\)th entry of the matrix \(\mathbf{A}\). \(\mathbf{A}(i,:)\) denotes the \(i\)th row of the matrix \(\mathbf{A}\). \(\mathbf{\mathcal{A}}^{T}\in\mathbb{R}^{n_{2}\times n_{1}\times n_{3}}\) denotes the transpose tensor of the tensor \(\mathbf{\mathcal{A}}\). The fast Fourier transform (FFT) along the third axis of a tensor \(\mathbf{\mathcal{A}}\) and its inverse operation are \(\mathbf{\mathcal{A}}_{f}=\mathrm{fft}(\mathbf{\mathcal{A}},[\,],3)\) and \(\mathbf{\mathcal{A}}=\mathrm{ifft}(\mathbf{\mathcal{A}}_{f},[\,],3)\), respectively. The block vectorizing operation and its inverse are \(bvec(\mathbf{\mathcal{A}})=[\mathbf{\mathcal{A}}^{(1)};\mathbf{\mathcal{A}}^{(2)};\cdots;\mathbf{\mathcal{A}}^{(n_{3})}]\in\mathbb{R}^{n_{1}n_{3}\times n_{2}}\) and \(fold(bvec(\mathbf{\mathcal{A}}))=\mathbf{\mathcal{A}}\), where \(\mathbf{\mathcal{A}}^{(j)}\) denotes the \(j\)th frontal slice of \(\mathbf{\mathcal{A}}\). \(bcirc(\mathbf{\mathcal{A}})\in\mathbb{R}^{n_{1}n_{3}\times n_{2}n_{3}}\) denotes the block circulant matrix of the tensor \(\mathbf{\mathcal{A}}\). The \(L_{1}\) norm of a matrix \(\mathbf{A}\) is denoted as \(\left\|\mathbf{A}\right\|_{1}\). \(tr(\mathbf{A})\) denotes the trace of the matrix \(\mathbf{A}\). \(\mathbf{\mathcal{I}}\) denotes the \(n_{1}\times n_{2}\times n_{3}\) identity tensor. \(\mathbf{I}\) denotes the identity matrix. \(\mathbf{1}\) denotes a vector whose elements are all equal to 1. The tensor singular value decomposition (t-SVD) and the tensor nuclear norm are defined as follows. 
**Definition 1** (t-product): For two tensors \(\mathbf{\mathcal{A}}\in\mathbb{R}^{n_{1}\times n_{2}\times n_{3}}\) and \(\mathbf{\mathcal{B}}\in\mathbb{R}^{n_{2}\times n_{4}\times n_{3}}\), the t-product \(\mathbf{\mathcal{A}}*\mathbf{\mathcal{B}}=fold(bcirc(\mathbf{\mathcal{A}})bvec(\mathbf{\mathcal{B}}))\) is an \(n_{1}\times n_{4}\times n_{3}\) tensor. **Definition 2** (t-SVD): The t-SVD of a tensor \(\mathbf{\mathcal{A}}\in\mathbb{R}^{n_{1}\times n_{2}\times n_{3}}\) is defined as \(\mathbf{\mathcal{A}}=\mathbf{\mathcal{U}}*\mathbf{\mathcal{S}}*\mathbf{\mathcal{V}}^{T}\), where \(\mathbf{\mathcal{U}}\in\mathbb{R}^{n_{1}\times n_{1}\times n_{3}}\) and \(\mathbf{\mathcal{V}}\in\mathbb{R}^{n_{2}\times n_{2}\times n_{3}}\) are two orthogonal tensors, i.e., \(\mathbf{\mathcal{U}}*\mathbf{\mathcal{U}}^{T}=\mathbf{\mathcal{U}}^{T}*\mathbf{\mathcal{U}}=\mathbf{\mathcal{I}}\) and \(\mathbf{\mathcal{V}}*\mathbf{\mathcal{V}}^{T}=\mathbf{\mathcal{V}}^{T}*\mathbf{\mathcal{V}}=\mathbf{\mathcal{I}}\). \(\mathbf{\mathcal{S}}\in\mathbb{R}^{n_{1}\times n_{2}\times n_{3}}\) is an f-diagonal tensor, whose frontal slices are all diagonal matrices. **Definition 3** (Weighted t-SVD based tensor nuclear norm): For a tensor \(\mathbf{\mathcal{A}}\), the weighted t-SVD based tensor nuclear norm \(\left\|\mathbf{\mathcal{A}}\right\|_{\mathbf{\omega},*}\) is defined as the weighted sum of the singular values of all the frontal slices of \(\mathbf{\mathcal{A}}_{f}\), i.e., \(\left\|\mathbf{\mathcal{A}}\right\|_{\mathbf{\omega},*}=\sum_{i=1}^{min\{n_{1},n_{2}\}}\sum_{j=1}^{n_{3}}\mathbf{\omega}_{i}|\mathbf{\mathcal{S}}_{f}^{(j)}(i,i)|\), where \(\mathbf{\omega}\) denotes the weighting coefficients. ### _Model Formulation_ #### Iii-B1 View-specific Graph Learning Given a multi-view dataset consisting of \(M\) views, the data in the \(v\)th view (\(1\leq v\leq M\)) can be denoted as \(\mathbf{X}^{(v)}=\{\mathbf{x}_{1}^{(v)},\mathbf{x}_{2}^{(v)},\ldots,\mathbf{x}_{N}^{(v)}\}\), in which \(N\) is the number of samples. \(\mathbf{S}^{(v)}\in\mathbb{R}^{N\times N}\) denotes the initialized graph in the \(v\)th view, which reflects the relationships among the samples. For the construction of the view-specific graph \(\mathbf{S}^{(v)}\), different manners can be flexibly adopted, such as similarity and self-representation graphs. We use \(\mathbf{\Omega}(\mathbf{S}^{(v)})\) to generically represent the objective function that constructs the original graph \(\mathbf{S}^{(v)}\). Notably, there is usually noise and redundant information in the original multi-view data, which may introduce errors \(\mathbf{E}^{(v)}\) into the view-specific graph \(\mathbf{S}^{(v)}\). To eliminate the effect of these errors, the basic model of graph learning can be formulated as follows: \[\begin{split}\min_{\mathbf{G}^{(v)},\mathbf{E}^{(v)}}& \sum_{v=1}^{M}\mathbf{\Omega}(\mathbf{S}^{(v)})+\lambda_{E}\sum_{v=1}^{M}\left\| \mathbf{E}^{(v)}\right\|_{1},\\ s.t.\;\mathbf{S}^{(v)}&=\mathbf{G}^{(v)}+\mathbf{E}^{(v)},\mathbf{G }^{(v)}\geq 0,\mathbf{G}^{(v)^{T}}\mathbf{1}=\mathbf{1},\end{split} \tag{3}\] Fig. 1: Flowchart of the proposed TCGF. Given a collection of samples with \(M\) views, e.g., \(\{\mathbf{X}^{(1)},\mathbf{X}^{(2)},\ldots,\mathbf{X}^{(M)}\}\), TCGF first explores the view-specific graph \(\mathbf{G}^{(v)}\) by removing the disturbance of the noise error \(\mathbf{E}^{(v)}\) in the initialized graph \(\mathbf{S}^{(v)}\). 
After stacking the view-specific graphs \(\{\mathbf{G}^{(1)},\mathbf{G}^{(2)},\ldots,\mathbf{G}^{(M)}\}\) into one tensor \(\mathbf{\mathcal{G}}\), the tensor \(\mathbf{\mathcal{G}}\) can be updated by utilizing t-SVD based weighted tensor multi-rank minimization. Based on the graph agreement term, the consensus graph \(\mathbf{G}_{F}\) can be obtained by fusing the view-specific graphs \(\{\mathbf{G}^{(1)},\mathbf{G}^{(2)},\ldots,\mathbf{G}^{(M)}\}\) with adaptively allocated weights \(\mathbf{\alpha}\). Taking view-specific graph learning, tensorized graph learning and consensus graph learning into one whole framework, we can obtain the final latent embedding \(\mathbf{F}\) shared by all views. where the constraint \(\mathbf{G}^{(v)^{T}}\mathbf{1}=\mathbf{1}\) guarantees that, for each sample, the graph weights to all other samples sum to 1, and \(\lambda_{E}\) is a trade-off parameter. #### Iii-B2 Tensorized Graph Learning Inspired by the tensor nuclear norm, which well exploits the complementary information and spatial structure embedded in a tensor, we utilize the weighted t-SVD based tensor nuclear norm to investigate the high-order correlations among multiple views. Thus, we stack the affinity matrices of all views into a tensor \(\mathbf{\mathcal{G}}\in\mathbb{R}^{N\times N\times M}\). To better investigate the correlations and largely reduce the computational complexity, we further rotate \(\mathbf{\mathcal{G}}\) into an \(N\times M\times N\) tensor, where \(\mathbf{\mathcal{G}}(:,v,:)=\mathbf{G}^{(v)}\). After rotation, the dimension of the weighting coefficient \(\mathbf{\omega}\) decreases from \(N\) to \(M\) (\(M\ll N\)), so that the number of fine-tuned parameters in \(\mathbf{\omega}\) scales with the number of views. Considering the influence of noise on \(\mathbf{\mathcal{G}}\), we learn a low-rank tensor \(\mathbf{\mathcal{Z}}\) to approximate \(\mathbf{\mathcal{G}}\) as follows: \[\begin{split}\min_{\mathbf{\mathcal{Z}}}&\quad\left\|\mathbf{\mathcal{Z}}\right\|_{\mathbf{\omega},*},\\ s.t.&\quad\mathbf{\mathcal{Z}}=\mathbf{\mathcal{G}},\ \mathbf{\mathcal{G}}=\Phi\left(\mathbf{G}^{(1)},\cdots,\mathbf{G}^{(M)}\right),\end{split} \tag{4}\] where \(\Phi(\cdot)\) merges the graphs of multiple views into a tensor and then rotates it along the third axis. #### Iii-B3 Consensus Embedding Learning As multi-view features are extracted from the same objects, the different graphs \(\{\mathbf{G}^{(v)}\}\) should contain some similar information. Thus, it is essential to capture both consistent and complementary information among multiple views. For this reason, we propose the shared embedding \(\mathbf{F}\in\mathbb{R}^{N\times d}\) to uncover the rich information in each view. To preserve the locality of the learned graph \(\mathbf{G}^{(v)}\) in each view, we exploit the subspace-wise grouping effect [39] in the learned graph \(\mathbf{G}^{(v)}\) by means of a unified framework. **Definition 4** (Subspace-wise Grouping Effect): Given a set of d-dimensional data points \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{n}]\in\mathcal{R}^{d\times n}\), a self-representation matrix \(\mathbf{Z}=[\mathbf{z}_{1},\mathbf{z}_{2},\cdots,\mathbf{z}_{n}]\in\mathcal{R}^{n\times n}\) has the grouping effect if \(\left\|\mathbf{x}_{i}-\mathbf{x}_{j}\right\|^{2}\to 0\) implies \(\left\|\mathbf{z}_{i}-\mathbf{z}_{j}\right\|^{2}\to 0\). According to the above definition, we can formulate the view-consensus term as \(tr\left(\mathbf{F}^{T}\mathbf{G}^{(v)}\mathbf{F}\right)\). 
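Returning briefly to the tensorized term, the operator \(\Phi(\cdot)\) in Eq. (4) and the weighted tensor nuclear norm of Definition 3 can be illustrated with a few lines of NumPy. The sketch below is a direct, unoptimized transcription for illustration only; the weighting vector \(\mathbf{\omega}\) is assumed to be given.

```python
import numpy as np

def merge_and_rotate(view_graphs):
    """Phi(.): stack M view-specific graphs (N x N) into an N x N x M tensor
    and rotate it to N x M x N, so that G[:, v, :] equals the v-th graph."""
    G = np.stack(view_graphs, axis=2)      # N x N x M
    return np.transpose(G, (0, 2, 1))      # N x M x N

def weighted_tensor_nuclear_norm(T, w):
    """Weighted t-SVD based tensor nuclear norm (Definition 3): weighted sum of
    the singular values of all frontal slices of fft(T, axis=2).
    w has length min(n1, n2); after the rotation above this is only M entries."""
    Tf = np.fft.fft(T, axis=2)
    value = 0.0
    for j in range(T.shape[2]):
        s = np.linalg.svd(Tf[:, :, j], compute_uv=False)
        value += np.sum(w[:len(s)] * s)
    return value
```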
To further improve the flexibility of this view-consensus term, we construct the consensus graph based on the shared embedding \(\mathbf{F}\). Meanwhile, considering that different views have different contributions to learning \(\mathbf{G_{F}}\), we adaptively assign a weight \(\mathbf{\alpha}^{(v)}\geq 0\) to the \(v\)th view. The above considerations can be formulated as follows: \[\begin{split}&\max_{\mathbf{F}}\sum_{v=1}^{M}(\mathbf{\alpha}^{(v)})^{\gamma}tr\left(\mathbf{F}^{T}\mathbf{G}^{(v)}{\mathbf{D}^{(v)}}^{-1}\mathbf{F}\right),\\ s.t.&\quad\sum_{v=1}^{M}\mathbf{\alpha}^{(v)}=1,\mathbf{\alpha}^{(v)}\geq 0,\mathbf{F}^{T}\mathbf{F}=\mathbf{I},\end{split} \tag{5}\] where \(\gamma\) is a hyper-parameter and \(\mathbf{D}^{(v)}\) is a diagonal matrix whose diagonal elements are \(\mathbf{D}^{(v)}(i,i)=\sum_{j=1}^{N}\mathbf{G}^{(v)}(j,i)\). \(tr\left(\mathbf{G_{F}}\mathbf{G}^{(v)}{\mathbf{D}^{(v)}}^{-1}\right)\) can be seen as the graph agreement term that measures the consistency between the consensus graph \(\mathbf{G_{F}}\) and the normalized graph \(\mathbf{G}^{(v)}{\mathbf{D}^{(v)}}^{-1}\). Notably, the construction manner of the graph \(\mathbf{G_{F}}\) can also be flexibly chosen, such as the linear kernel function \(\mathbf{G_{F}}=\mathbf{F}\mathbf{F}^{T}\). #### Iii-B4 Overall Framework of TCGF In the above subsections, we discussed how to learn the intrinsic, consistent and complementary information in the multi-view data. To this end, we seamlessly couple the above learning processes, and the overall objective function is formulated as follows: \[\begin{split}&\min_{\mathbf{F},\mathbf{Z},\mathbf{S}^{(v)},\mathbf{G}^{(v)},\mathbf{E} ^{(v)},\mathbf{\alpha}^{(v)}}\underbrace{\sum_{v=1}^{M}\mathbf{\Omega}(\mathbf{S}^{(v)})+ \lambda_{E}\sum_{v=1}^{M}\left\|\mathbf{E}^{(v)}\right\|_{1}}_{View-specific\ Term}\\ &-\underbrace{\lambda_{C}\sum_{v=1}^{M}{(\mathbf{\alpha}^{(v)})}^{ \gamma}tr(\mathbf{G_{F}}\mathbf{G^{(v)}}{\mathbf{D}^{(v)}}^{-1})}_{Consensus\ Graph\ Learning}+\underbrace{\lambda_{R}\left\|\mathbf{\mathcal{Z}}\right\|_{\mathbf{\omega},*}}_{Tensorized\ Term},\\ s.t.&\quad\mathbf{S}^{(v)}=\mathbf{G}^{(v)}+\mathbf{E}^{(v)},\mathbf{G}^{ (v)}\geq 0,{\mathbf{G}^{(v)}}^{T}\mathbf{1}=\mathbf{1},\\ &\quad\sum_{v=1}^{M}\mathbf{\alpha}^{(v)}=1,\mathbf{\alpha}^{(v)}\geq 0, \mathbf{F}^{T}\mathbf{F}=\mathbf{I},\\ &\quad\mathbf{\mathcal{Z}}=\mathbf{\mathcal{G}},\mathbf{\mathcal{G}}=\Phi(\mathbf{G }^{(1)},\cdots,\mathbf{G}^{(M)}),\end{split} \tag{6}\] where \(\lambda_{E}\), \(\lambda_{C}\) and \(\lambda_{R}\) are trade-off parameters. As can be observed from the model in Eq. (6), the shared embedding \(\mathbf{F}\) and the view-specific graphs \(\mathbf{G}^{(v)}\), constrained by the low-rank tensor \(\mathbf{\mathcal{G}}\), can be learned simultaneously in a unified framework. The first aspect maintains the consensus information among different views to obtain the shared embedding, by fusing multiple graph agreement terms with different adaptive weights. The second aspect eliminates the effect of errors, which allows learning more robust graphs \(\mathbf{G}^{(v)}\). The final aspect exploits the low-rank property and higher-order correlations of the affinity tensor \(\mathbf{\mathcal{G}}\) to capture the complementary information among views. ### _Specific Implementation for Large-scale Datasets_ Since the computational complexity of the aforementioned methods is quadratic or cubic in the data size, they are inefficient in handling large-scale datasets. To solve this issue, we provide a specific implementation that extends the proposed TCGF to the scenario of large-scale datasets. 
Inspired by the bipartite graph, we attempt to reduce the scale of the initialized graph \(\mathbf{S}^{(v)}\) by selecting a subset of samples as anchors. Then, we construct the bipartite graph \(\mathbf{B}^{(v)}\) between samples and anchors to substitute the whole graph \(\mathbf{S}^{(v)}\). However, randomly selected anchors may fail to cover the entire point cloud of the data and to characterize its intrinsic structure. To solve this issue, we propose a simple and efficient anchor selection scheme based on the information volume. We combine all views into one view and then perform an SVD to obtain singular values for all samples. In this way, we select the \(K\) most representative samples according to their singular values. Besides, we find that K-means can also be utilized to find samples with high information volume in some situations. After that, we can construct the bipartite graph \(\mathbf{B}^{(v)}\in\mathcal{R}^{N\times K}\) to re-initialize the view-specific graph. Moreover, to further control the computational cost, we generate a similarity-induced graph to construct the fixed bipartite graph \(\mathbf{B}^{(v)}\). According to the above considerations, we can readily extend the proposed TCGF to the scenario of large-scale datasets, which can be formulated as follows: \[\begin{split}&\min_{\mathbf{\Theta}}-\sum_{v=1}^{M}(\mathbf{\alpha}^{(v)})^{ \gamma}tr(\mathbf{G_{F}}\mathbf{G}^{(v)}{\mathbf{D}^{(v)}}^{-1})+\lambda_{E}\sum_{v=1}^{M} \left\|\mathbf{E}^{(v)}\right\|_{1}\\ &+\lambda_{R}\left\|\mathbf{\mathcal{Z}}\right\|_{\mathbf{\omega},*},\\ & s.t.\quad\mathbf{B}^{(v)}=\mathbf{G}^{(v)}+\mathbf{E}^{(v)},\mathbf{G}^{(v)} \geq 0,\mathbf{G}^{(v)^{T}}\mathbf{1}=\mathbf{1},\\ &\quad\sum_{v=1}^{M}\mathbf{\alpha}^{(v)}=1,\mathbf{\alpha}^{(v)}\geq 0,[\mathbf{F}_{S};\mathbf{F}_{A}]^{T}[\mathbf{F}_{S};\mathbf{F}_{A}]=\mathbf{I},\\ &\quad\mathbf{\mathcal{Z}}=\mathbf{\mathcal{G}},\mathbf{\mathcal{G}}=\Phi( \mathbf{G}^{(1)},\cdots,\mathbf{G}^{(M)}),\end{split} \tag{7}\] where \(\mathbf{\Theta}=\{\mathbf{F}_{S},\mathbf{F}_{A},\mathbf{\mathcal{Z}},\mathbf{G}^{(v)},\mathbf{E}^{(v)},\mathbf{\alpha}^{(v)}\}\) denotes the set of variables to be solved. \(\mathbf{F}_{S}\in\mathcal{R}^{N\times d}\) and \(\mathbf{F}_{A}\in\mathcal{R}^{K\times d}\) represent the shared embeddings of the samples and the anchors, respectively. #### Iii-B1 Optimization Inspired by the augmented Lagrange multiplier method, the corresponding augmented Lagrangian function of Eq. (6) can be formulated as follows: \[\begin{split}&\mathbf{\mathcal{L}}(\mathbf{F}_{S},\mathbf{F}_{A},\mathbf{ \mathcal{Z}},\mathbf{G}^{(v)},\mathbf{E}^{(v)},\mathbf{\alpha}^{(v)})=\\ &-\sum_{v=1}^{M}(\mathbf{\alpha}^{(v)})^{\gamma}tr(\mathbf{F}^{T}\mathbf{G}^ {(v)}{\mathbf{D}^{(v)}}^{-1}\mathbf{F})+\lambda_{E}\sum_{v=1}^{M}\left\|\mathbf{E}^{(v)} \right\|_{1}+\lambda_{R}\left\|\mathbf{\mathcal{Z}}\right\|_{\mathbf{\omega},*}\\ &+\sum_{v=1}^{M}\left(\left\langle\mathbf{Y}^{(v)},\mathbf{S}^ {(v)}-\mathbf{G}^{(v)}-\mathbf{E}^{(v)}\right\rangle+\frac{\mu}{2}\left\|\mathbf{S}^{(v)}- \mathbf{G}^{(v)}-\mathbf{E}^{(v)}\right\|_{F}^{2}\right)\\ &+\left\langle\mathbf{\mathcal{Y}},\mathbf{\mathcal{G}}-\mathbf{\mathcal{Z}} \right\rangle+\frac{\rho}{2}\left\|\mathbf{\mathcal{G}}-\mathbf{\mathcal{Z}}\right\|_{ F}^{2},\end{split} \tag{8}\] where \(\mathbf{Y}^{(v)}\) and \(\mathbf{\mathcal{Y}}\) represent the Lagrange multipliers, and \(\mu\) and \(\rho\) are the penalty parameters. 
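Before detailing the update rules, the following sketch illustrates one plausible instantiation of the anchor selection and bipartite graph initialization described above. The paper only outlines its SVD-based "information volume" criterion, so the sketch uses the K-means alternative that is also mentioned; the Gaussian similarity, the bandwidth \(\sigma\), and the row normalization are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_anchor_indices(views, K, random_state=0):
    """Pick K anchor samples by running K-means on the concatenated views and
    taking the sample closest to each centroid."""
    X = np.hstack(views)                               # N x (d_1 + ... + d_M)
    km = KMeans(n_clusters=K, n_init=10, random_state=random_state).fit(X)
    idx = [int(np.argmin(np.sum((X - c) ** 2, axis=1)))
           for c in km.cluster_centers_]
    return np.unique(idx)                              # may be < K if ties occur

def bipartite_graph(Xv, anchor_idx, sigma=1.0):
    """Gaussian bipartite graph B^(v) in R^{N x K} between the samples of one
    view and the selected anchors, with each row normalized to sum to one."""
    A = Xv[anchor_idx]                                 # K x d_v
    D2 = np.maximum(np.sum(Xv ** 2, 1)[:, None] + np.sum(A ** 2, 1)[None, :]
                    - 2.0 * Xv @ A.T, 0.0)
    B = np.exp(-D2 / (2.0 * sigma ** 2))
    return B / (B.sum(axis=1, keepdims=True) + 1e-12)
```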
We adopt the Augmented Lagrangian Multiplier (ALM) method with Alternating Direction Minimization (ADM) to solve the above optimization problem; the updating rules for the different variables are as follows. \(\bullet\)**Updating \(\mathbf{F}_{S}\) and \(\mathbf{F}_{A}\).** For the convenience of optimization, we employ the linear kernel \(\mathbf{F}_{S}\mathbf{F}_{A}^{T}\) to construct the consensus graph \(\mathbf{G}_{F}\). By fixing the other variables, \(\mathbf{F}_{S}\) and \(\mathbf{F}_{A}\) can be updated by solving the following problem: \[\begin{split}&\max_{\mathbf{F}_{S},\mathbf{F}_{A}}\sum_{v=1}^{M}(\mathbf{ \alpha}^{(v)})^{\gamma}tr\left(\mathbf{G_{F}}\mathbf{G}^{(v)}{\mathbf{D}^{(v)}}^{-1} \right)\!,\\ & s.t.\quad\mathbf{F}_{S}^{T}\mathbf{F}_{S}+\mathbf{F}_{A}^{T}\mathbf{F}_{A}=\mathbf{ I}.\end{split} \tag{9}\] Eq. (9) has the closed-form solutions \(\mathbf{F}_{S}=\frac{\sqrt{2}}{2}\mathbf{U}\) and \(\mathbf{F}_{A}=\frac{\sqrt{2}}{2}\mathbf{V}\), in which \(\mathbf{U}\) and \(\mathbf{V}\) are the leading \(d\) left and right singular vectors of the matrix \(\sum_{v=1}^{M}{(\mathbf{\alpha}^{(v)})^{\gamma}\mathbf{G}^{(v)}\mathbf{D}^{(v)}}^{-1}\). \(\bullet\)**Updating \(\mathbf{\mathcal{Z}}\).** In this case, \(\mathbf{\mathcal{Z}}\) can be updated by solving the following problem: \[\begin{split}&\min_{\mathbf{\mathcal{Z}}}\frac{1}{2}\left\|\mathbf{ \mathcal{G}}+\frac{1}{\rho}\mathbf{\mathcal{Y}}-\mathbf{\mathcal{Z}}\right\|_{F}^{2}+ \frac{\lambda_{R}}{\rho}\left\|\mathbf{\mathcal{Z}}\right\|_{\mathbf{\omega},*}.\end{split} \tag{10}\] The optimal solution of Eq. (10) is \(\mathbf{\Gamma}_{\frac{\lambda_{R}}{\rho}}[\mathbf{\mathcal{G}}+\frac{1}{\rho}\mathbf{ \mathcal{Y}}]\). More details are given in **Appendix A**. \(\bullet\)**Updating \(\mathbf{G}^{(v)}\).** In this case, \(\mathbf{G}^{(v)}\) can be updated by solving the following problem: \[\begin{split}&\min_{\mathbf{G}^{(v)}}\left\langle\mathbf{Y}^{(v)},\mathbf{S}^ {(v)}-\mathbf{G}^{(v)}-\mathbf{E}^{(v)}\right\rangle+\frac{\mu}{2}\left\|\mathbf{S}^{(v)} -\mathbf{G}^{(v)}-\mathbf{E}^{(v)}\right\|_{F}^{2}\\ &\quad+\left\langle\mathbf{\mathcal{Y}}^{(v)},\mathbf{G}^{(v)}-\mathbf{ \mathcal{Z}}^{(v)}\right\rangle+\frac{\rho}{2}\left\|\mathbf{G}^{(v)}-\mathbf{ \mathcal{Z}}^{(v)}\right\|_{F}^{2}\\ &\quad-(\mathbf{\alpha}^{(v)})^{\gamma}tr(\mathbf{G_{F}}\mathbf{G}^{(v)}{\mathbf{D }^{(v)}}^{-1}),\\ & s.t.\quad\mathbf{G}^{(v)}\geq 0,\quad{\mathbf{G}^{(v)}}^{T}\mathbf{1}=\mathbf{1}, \end{split} \tag{11}\] where \(\mathbf{\mathcal{Z}}^{(v)}=\mathbf{\mathcal{Z}}(:,v,:)\) and \(\mathbf{\mathcal{Y}}^{(v)}=\mathbf{\mathcal{Y}}(:,v,:)\). It can be shown that the above problem admits a closed-form solution; the corresponding proof is given in **Appendix B**. \(\bullet\)**Updating \(\mathbf{E}^{(v)}\).** In this case, \(\mathbf{E}^{(v)}\) can be updated by solving the following problem: \[\begin{split}&\min_{\mathbf{E}^{(v)}}\frac{1}{2}\left\|\mathbf{E}^{(v)}- \mathbf{\Gamma}^{(v)}\right\|_{F}^{2}+\frac{\lambda_{E}}{\mu}\left\|\mathbf{E}^{(v)} \right\|_{1},\end{split} \tag{12}\] where \(\mathbf{\Gamma}^{(v)}=\mathbf{S}^{(v)}-\mathbf{G}^{(v)}-\frac{1}{\mu}\mathbf{Y}^{(v)}\). Based on the proximal gradient-descent method, the optimal solution \(\mathbf{E}^{(v)}\) of Eq. (12) is \(sign(\mathbf{\Gamma}^{(v)})\odot max(|\mathbf{\Gamma}^{(v)}|-\frac{\lambda_{E}}{\mu},0)\). 
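For completeness, the two proximal operators appearing in the \(\mathbf{\mathcal{Z}}\)- and \(\mathbf{E}^{(v)}\)-updates can be sketched in NumPy as follows. The weighted t-SVD shrinkage is written in its standard form; the exact operator \(\mathbf{\Gamma}_{\lambda_{R}/\rho}[\cdot]\) used by the paper is derived in its Appendix A, so the weighting convention below is an assumption. The elementwise soft-thresholding is the standard proximal operator of the \(L_{1}\) norm.

```python
import numpy as np

def weighted_tsvt(T, tau, w):
    """Weighted t-SVD singular value thresholding (sketch): shrink the singular
    values of every frontal slice of fft(T, axis=2) by tau * w_i, then invert
    the FFT. Generic form only; the paper's exact operator is in its Appendix A."""
    Tf = np.fft.fft(T, axis=2)
    Zf = np.zeros_like(Tf)
    for j in range(T.shape[2]):
        U, s, Vh = np.linalg.svd(Tf[:, :, j], full_matrices=False)
        s_shrunk = np.maximum(s - tau * w[:len(s)], 0.0)
        Zf[:, :, j] = (U * s_shrunk) @ Vh
    return np.real(np.fft.ifft(Zf, axis=2))

def soft_threshold(Gamma, thr):
    """Elementwise soft-thresholding, sign(Gamma) * max(|Gamma| - thr, 0),
    the closed-form solution of the E^(v)-subproblem in Eq. (12)."""
    return np.sign(Gamma) * np.maximum(np.abs(Gamma) - thr, 0.0)
```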
\(\bullet\)**Updating \(\mathbf{\alpha}^{(v)}\).** In this case, \(\mathbf{\alpha}^{(v)}\) can be updated by solving the following problem: \[\begin{split}&\max_{\mathbf{\alpha}}\sum_{v=1}^{M}{(\mathbf{\alpha}^{(v)})^{ \gamma}tr\left(\mathbf{G_{F}}\mathbf{G}^{(v)}{\mathbf{D}^{(v)}}^{-1}\right)},\\ & s.t.\quad\sum_{v=1}^{M}{\mathbf{\alpha}^{(v)}}=1,\mathbf{\alpha}^{(v)} \geq 0.\end{split} \tag{13}\] Using the Lagrange multiplier method, we can obtain a closed-form solution of Eq. (13). More details are given in **Appendix C**. \(\bullet\)**Updating the Lagrange multipliers and penalty parameters.** The Lagrange multipliers and penalty parameters can be updated as follows: \[\begin{split}&\mathbf{Y}^{(v)}:=\mathbf{Y}^{(v)}+\mu(\mathbf{S}^{(v)}-\mathbf{G}^{(v)}- \mathbf{E}^{(v)}),\\ &\mathbf{\mathcal{Y}}:=\mathbf{\mathcal{Y}}+\rho(\mathbf{\mathcal{G}}-\mathbf{ \mathcal{Z}}),\\ &\mu:=\eta\mu,\\ &\rho:=\eta\rho,\end{split} \tag{14}\] where \(\eta>1\) is used to boost the convergence speed [40]. #### Iii-B2 Time Complexity Analysis In this part, the computational complexity of solving the problem in Eq. (8) is analyzed. The main computation consists of five parts, corresponding to the updating steps in the optimization section. The time complexities of iteratively updating these variables are \(\mathbf{O}(MNK+K^{2}N)\), \(\mathbf{O}(MNK\log(MN)+M^{2}NK)\), \(\mathbf{O}(MNKd+MNK\log(K))\), and so forth for the remaining updates. #### Iii-B3 Convergence analysis Since the model in Eq. (8) is not a jointly convex problem in all variables, obtaining a globally optimal solution remains an open problem. Fortunately, the proposed model can be solved by means of the alternating optimization algorithm. Due to the convexity and the optimal solution of each sub-problem, the optimization can be shown to converge. In what follows, we discuss the convergence of each sub-problem. For updating \(\mathbf{F}_{S}\) and \(\mathbf{F}_{A}\), the sub-problem in Eq. (9) is equal to the following equation: \[\max_{\mathbf{F}\mathbf{F}^{T}=\mathbf{I}}\sum_{v=1}^{M}{(\mathbf{\alpha}^{(v)})^{ \gamma}tr\left(\mathbf{F}\mathbf{L}^{(v)}\mathbf{F}^{T}\right)}, \tag{15}\] where \(\mathbf{F}=[\mathbf{F}_{S};\mathbf{F}_{A}]\in\mathcal{R}^{(N+K)\times d}\) and \(\mathbf{L}^{(v)}=[\mathbf{0}\ (\mathbf{G}^{(v)}\mathbf{D}^{(v)^{-1}});(\mathbf{G}^{(v)}\mathbf{D}^{(v)^{-1}} )^{T}\ \mathbf{0}]\in\mathcal{R}^{(N+K)\times(N+K)}\). Obviously, the Hessian matrix \(\sum_{v=1}^{M}{(\mathbf{\alpha}^{(v)})^{\gamma}\mathbf{L}^{(v)}}\) of the above equation is positive semi-definite. Thus, the sub-problem in Eq. (9) is strictly convex. For updating \(\mathbf{\mathcal{Z}}\), \(\mathbf{\mathcal{Z}}=\mathbf{\Gamma}_{\frac{\lambda_{R}}{\rho}}[\mathbf{\mathcal{G}}+\frac {1}{\rho}\mathbf{\mathcal{Y}}]\) is a closed-form solution, so the sub-problem in Eq. (10) is a convex function. For updating \(\mathbf{G}^{(v)}\), the second-order derivative of the function in Eq. (11) with respect to \(\mathbf{G}^{(v)}(i,:)\) is equal to 1, so it is easy to check that the objective function of the sub-problem in Eq. (11) is also convex. For updating \(\mathbf{E}^{(v)}\), it is readily shown that the objective value in Eq. (12) monotonically decreases due to the convergence property of the proximal gradient-descent method [41]. For updating \(\mathbf{\alpha}^{(v)}\), the sub-problem in Eq. (13) is a linear convex function, and a closed-form solution can be assigned to \(\mathbf{\alpha}^{(v)}\). 
## IV Experiments and Analysis In this section, we report experimental results conducted to evaluate the performance of the proposed TCGF model using seven real-world datasets. Additionally, we provide a detailed analysis to illustrate the effectiveness and robustness of the proposed TCGF. ### _Experiment Settings_ #### Iv-A1 Datasets We evaluate our proposed framework on seven benchmark datasets: MSRC1, NGs2, Hdigit3, Caltech1014, ALOI_1005, ALOI_1K6 and YoutubeFace7. NGs is a document dataset, and the remaining datasets are image datasets; some samples from the image datasets are shown in Fig. 2. The detailed information of these datasets is summarized as follows: Footnote 1: [http://archive.ics.uci.edu/ml](http://archive.ics.uci.edu/ml) Footnote 2: [http://fig-membres.imag.fr/grimal/data.html](http://fig-membres.imag.fr/grimal/data.html) Footnote 3: [https://cs.nyu.edu/rowies/data.html](https://cs.nyu.edu/rowies/data.html) Footnote 4: [https://data.caltech.edu/records/mzzj-6wc02/files/caltech-101.zip](https://data.caltech.edu/records/mzzj-6wc02/files/caltech-101.zip) Footnote 5: [https://kbli-project.github.io/datasets/multi_view](https://kbli-project.github.io/datasets/multi_view) Footnote 6: [https://keli-project.github.io/datasets/multi_view](https://keli-project.github.io/datasets/multi_view) Footnote 7: [https://www.cs.tau.ac.il/wolfyfaces/](https://www.cs.tau.ac.il/wolfyfaces/) 
Fig. 2: Some example images from the datasets. #### Iv-A2 Compared Methods To demonstrate the excellent performance of the proposed TCGF, this paper introduces nine state-of-the-art multi-view algorithms as compared methods, including **Co-reg**[44], **MDcR**[24], **AMGL**[12], **GMC**[8], **GFSC**[15], **LMVSC**[18], **FPMVS**[19], **VCGA**[45] and **LTBPL**[30]. Moreover, two typical single-view methods, **SC**[46] and **GNMF**[47], are adopted to show the advantages of multi-view clustering algorithms; these single-view methods utilize the most informative view. The details of the compared methods can be summarized as follows: 1. **SC** is a standard spectral clustering method applied to each single view, which is used for recognizing the different confidences of the different views. 2. **GNMF** is a method that explores a matrix factorization respecting the graph structure, thereby taking the geometric structure of the data into account. 3. **Co-reg** is a multi-view spectral clustering method, which regularizes different views to be close to each other. 4. **MDcR** is a multi-view dimensionality reduction method, which explores the correlations of different views based on an HSIC term. 5. **AMGL** is an auto-weighted multiple graph learning method, which can allocate an ideal weight for each view automatically. 6. **GMC** is a multi-view graph-based method that learns the common graph shared by all views and directly gives the final clusters. 7. **GFSC** is a multi-view spectral embedding method based on multi-graph fusion, which approximates the original graph of each individual view. 8. **LMVSC** first learns a smaller graph for each view, and then integrates those graphs to transform the original multi-view problem into a single-view scenario. 9. **FPMVS** jointly learns anchor selection and subspace graph construction in a unified optimization formulation to promote clustering quality, which can automatically learn an optimal anchor subspace graph without any extra hyper-parameters. 10. **VCGA** first constructs the view-specific graphs and the shared graph from the original multi-view data and the hidden latent representation, and then the view-specific graphs of different views and the consensus graph are aligned into an informative target graph. 11. **LTBPL** stacks multiple low-rank probability affinity matrices in a low-rank constrained tensor to recover their comprehensiveness and higher-order correlations, and links a consensus indicator graph with view-specific representations carrying different adaptive confidences. For a fair comparison, we downloaded the released code of the compared algorithms from their original websites. Since all methods need to utilize k-means or connected-component algorithms to obtain the final clustering results, they can be disturbed by the initialization. 
Thus, we repeat each clustering experiment 10 times to eliminate the randomness of the initialization for all compared methods and report the average performance. For the selection of hyper-parameters, the details for the proposed TCGF are given in Section IV-D, and the hyper-parameters of the compared methods are tuned following the corresponding papers. #### Iv-A3 Evaluation Metrics Various metrics have been proposed from diverse perspectives to evaluate the quality of the obtained data clusters. In general, larger values indicate better clustering performance. To facilitate a comprehensive comparison, we adopt three metrics [48] that are commonly used in the clustering field - accuracy (ACC), normalized mutual information (NMI), and purity (PUR). ### _Experimental Results and Analysis_ (Tables II-IV: performance comparison of different methods; the best two scores are highlighted in bold; 'OOM' means out of memory.) From the experimental results in Tables II-IV, we have the following observations. * The proposed TCGF achieves the best performance in terms of the three metrics against the other counterparts in most circumstances. Taking the results on the MSRC, NGs, and Hdigit datasets as instances, TCGF is the strongest multi-view learning algorithm, obtaining 100% in terms of ACC, NMI and PUR. This indicates that TCGF is effective and well suited for multi-view features and can well uncover the intrinsic rich information in them. * For the ALOI_1K and YoutubeFace datasets, which contain over 100,000 samples, most multi-view learning methods, such as GMC and LTBPL, suffer from out-of-memory errors. The main reason is that the computational complexity of these works is quadratic or cubic in the data size. However, the proposed TCGF is capable of handling such large-scale datasets and obtains comparable or better performance than LMVSC and FPMVS. Meanwhile, the proposed TCGF outperforms these two methods on the other datasets. This implies the effectiveness and efficiency of TCGF, which is applicable to both normal- and large-scale multi-view datasets. ### _Ablation Study_ In this section, an ablation study is conducted to evaluate the effects of consensus graph learning, view-specific graph learning, and tensorized graph learning. Specifically, for each test, the corresponding term is removed while retaining the other terms. For notational simplicity, we denote these three tests as TCGF-v1, TCGF-v2, and TCGF-v3, respectively. These tests are performed on the MSRC and NGs datasets, and the results of the clustering performance comparison in terms of ACC, NMI and PUR are reported in Table V. According to the table, we can observe that TCGF achieves superior clustering performance compared with its variants in all testing cases. The ablation study thus demonstrates the necessity of the proposed model, which simultaneously takes view-specific graph learning, tensorized graph learning, and consensus graph learning into consideration. ### _Hyper-parameter Analysis_ In this subsection, a hyper-parameter analysis is conducted to investigate the effects of the two parameters \(\lambda_{E}\) and \(\lambda_{R}\) on the MSRC and NGs datasets with different settings; the experimental results in terms of ACC and NMI are reported in Figs. 3-4. 
From these results, we can observe that the appropriate choices of \(\lambda_{E}\) and \(\lambda_{R}\) differ across datasets, and the best values of these two parameters vary from one dataset to another. However, for each hyper-parameter there exists a wide range in which relatively stable and good results can readily be obtained. In the experiments, we set \(\lambda_{E}\) and \(\lambda_{R}\) to the values that give the proposed TCGF the best results. Fig. 3: Hyper-parameter analysis on the MSRC dataset. ### _Visualization_ Additional visualization results on the MSRC, NGs and Hdigit datasets are shown in Fig. 5, where t-SNE [49] is adopted to project the features into a 2-dimensional subspace. Obviously, the distributions of the original data are disordered. After TCGF is applied, the samples can readily be separated into several clusters, which further validates the effectiveness of TCGF. To show the superiority of the learned consensus graph, we visualize the consensus graphs of TCGF on the MSRC, NGs and Hdigit datasets, as shown in Fig. 6. As shown in Fig. 6, we can find clear and complete block-diagonal structures. Thus, TCGF can adaptively promote the learning of consensus graphs towards better attributes. Although there are some ineluctable noisy values in the consensus graphs, TCGF can achieve good results on multi-view clustering due to the superior attributes of the learned consensus graph. In this way, we can verify that the construction of the consensus graph is of vital importance for multi-view clustering. ## V Conclusion In this paper, we propose a novel unified consensus embedding framework for multi-view representation learning, termed Tensorized Consensus Graph Framework (TCGF). TCGF aims to serve as a universal learning framework for existing multi-view works under arbitrary assumptions and is applicable to multi-view applications of varying scales. TCGF first provides a unified framework to exploit the representations of the individual views, enabling it to be suitable for datasets of different scales. Then, it stacks these representations into a tensor under aligned basics as a high-order representation, allowing for the smooth propagation of consistency and complementary information across all views. Additionally, TCGF learns a consensus embedding shared by all views through adaptive collaboration to uncover the essential structure of the multi-view data, utilizing the view-consensus grouping effect to regularize the view-consensus representation. The proposed TCGF is evaluated through extensive experiments on seven datasets of different scales, demonstrating its effectiveness and superiority in maintaining or outperforming other state-of-the-art multi-view methods. ## Acknowledgements The authors would like to thank the anonymous reviewers for their insightful comments and suggestions to significantly improve the quality of this paper.
2305.19699
Isogeometric Multi-Resolution Full Waveform Inversion based on the Finite Cell Method
Full waveform inversion (FWI) is an iterative identification process that serves to minimize the misfit of model-based simulated and experimentally measured wave field data, with the goal of identifying a field of parameters for a given physical object. The inverse optimization process of FWI is based on forward and backward solutions of the (elastic or acoustic) wave equation. In a previous paper [1], we explored opportunities of using the finite cell method (FCM) as the wave field solver to incorporate highly complex geometric models. Furthermore, we demonstrated that the identification of the model's density outperforms that of the velocity -- particularly in cases where unknown voids characterized by homogeneous Neumann boundary conditions need to be detected. The paper at hand extends this previous study: The isogeometric finite cell analysis (IGA-FCM) -- a combination of isogeometric analysis (IGA) and FCM -- is applied for the wave field solver, with the advantage that the polynomial degree and subsequently also the sampling frequency of the wave field can be increased quite easily. Since the inversion efficiency strongly depends on the accuracy of the forward and backward wave field solution and of the gradient of the functional, consistent and lumped mass matrix discretizations are compared. The resolution of the grid describing the unknown material density is then decoupled from the knot span grid. Finally, we propose an adaptive multi-resolution algorithm that refines the material grid only locally using an image processing-based refinement indicator. The developed inversion framework allows fast and memory-efficient wave simulation and object identification. While we study the general behavior of the proposed approach on 2D benchmark problems, a final 3D problem shows that it can also be used to identify voids in geometrically complex spatial structures.
Tim Bürchner, Philipp Kopp, Stefan Kollmannsberger, Ernst Rank
2023-05-31T09:47:06Z
http://arxiv.org/abs/2305.19699v1
# Isogeometric Multi-Resolution Full Waveform Inversion based on the Finite Cell Method ###### Abstract Full waveform inversion (FWI) is an iterative identification process that serves to minimize the misfit of model-based simulated and experimentally measured wave field data, with the goal of identifying a field of parameters for a given physical object. For many years, FWI has been used very successfully in seismic imaging to deduce velocity models of the earth or of local geophysical exploration areas. FWI has also been successfully applied in various other fields, including non-destructive testing (NDT) and biomedical imaging. The inverse optimization process of FWI is based on forward and backward solutions of the (elastic or acoustic) wave equation, as well as on efficient computation of an adequate optimization direction. Many approaches use (low order) finite element or finite difference methods, often with a field of parameter values with a resolution corresponding to elements or nodes of the discretized wave field. In a previous paper [1], we explored opportunities of using the finite cell method (FCM) as the wave field solver, which has the advantage that highly complex geometric models can be incorporated easily. Furthermore, we demonstrated that the identification of the model's density outperforms that of the velocity - particularly in cases where unknown voids characterized by homogeneous Neumann boundary conditions need to be detected. The paper at hand extends this previous study in the following aspects: The isogeometric finite cell analysis (IGA-FCM) - a combination of isogeometric analysis (IGA) and FCM - is applied for the wave field solver, with the advantage that the polynomial degree and subsequently also the sampling frequency of the wave field can be increased quite easily. Since the inversion efficiency strongly depends on the accuracy of the forward and backward wave field solution and of the gradient of the functional, consistent and lumped mass matrix discretization are compared. The resolution of the grid describing the unknown material density - thus allowing to identify voids in a physical object - is then decoupled from the knot span grid. Finally, we propose an adaptive multi-resolution algorithm that refines the material grid only locally using an image processing-based refinement indicator. The developed inversion framework allows fast and memory-efficient wave simulation and object identification. While we study the general behavior of the proposed approach on 2D benchmark problems, a final 3D problem shows that it can also be used to identify void regions in geometrically complex spatial structures. _Keywords:_ full waveform inversion, isogeometric analysis, finite cell method, multi-resolution, scalar wave equation ## 1 Introduction Tom Hughes has initiated and driven countless innovations in computational science and engineering. Among the most important is undoubtedly the invention of isogeometric analysis. Originally motivated by the goal of uniting the separate worlds of geometric modeling and finite element analysis, the great value of this method is also demonstrated by the fact that new areas of application continue to emerge that were not part of the original objective. The present contribution is exactly of this kind. 
We show how unknown geometric features of a structure can be effectively identified, and how an inverse analysis benefits from superior inherent properties of IGA, such as the ability to obtain highly accurate results with a small number of degrees of freedom. With its origins in the 1980s [2; 3], full waveform inversion (FWI) has become a well-established method in the field of seismic tomography. Waves traveling through the interior of a medium are measured and compared to model-based simulated wave signals. Information about internal material properties is extracted in a nonlinear optimization problem. The adjoint method using forward and backward simulations of the wave field allows for an efficient use of gradient-based optimization [4]. A comprehensive introduction to FWI can be found in [5], reviews in [6; 7]. While FWI has been used very successfully in geophysics for a long time, its application to biomedical applications [8; 9; 10] and non-destructive testing (NDT) [11; 12; 13] has gained traction in recent years. In these problems, the goal often is to identify interior voids or fractures characterized by homogeneous Neumann boundary conditions. Such defects are challenging to detect using classical velocity-based FWI. In previous work [1], we showed that density inversion allows to efficiently identify and reconstruct these defects in possibly damaged samples. A thorough analysis of the distinct behaviors of density and velocity inversion based on a boundary layer description is provided in [14]. To ensure the success of FWI, it is crucial to efficiently and precisely solve the forward and backward wave problems - and to construct an accurate and sufficiently resolved material description. However, many numerical schemes couple the wave field discretization and material representation, which does not allow to freely adapt them independently of each other. It is common to use a finite difference grid or a finite element mesh for the wave field to describe a material as constant per element or interpolated by a nodal-based Ansatz using shape functions defined on sub-blocks of the elements (e.g. [15; 16]). Nevertheless, this description is often closely tied to the spectral element method (SEM) and not more than \(p+1\) sub-blocks can be captured by the integration per element. The focus of the present contribution is to investigate the interactions between the wave field discretization and a completely independent material representation. Consequently, two key questions arise: Which high-order schemes are suitable to solve the forward and backward wave equation, and how can an independent yet efficient description of the material field be realized? In general, high-order finite element methods outperform low-order approaches in approximating smooth wave solutions. Thanks to its diagonal mass matrix, the SEM is very popular in combination with explicit time stepping [17]. However, the diagonal structure of the mass matrix is a consequence of employing Lagrange basis functions in combination with Gauss Legendre Lobatto (GLL) quadrature [18; 19]. If a material discretization is chosen that requires a different integration scheme, the diagonality and hence the central advantage of SEM in explicit time integration is lost. As an alternative to SEM, we use isogeometric analysis (IGA) [20] and an independent voxel-based material representation in the paper at hand to study the interaction between the wave field discretization and the resolution of the material field. 
The higher spline continuity across knot span boundaries allows wave problems to be solved accurately with a significantly lower number of degrees of freedom [21]. To easily incorporate complex geometric features of a structure, the finite cell method (FCM) [22] is used. The isogeometric finite cell analysis (IGA-FCM) - a combination of trimmed IGA and FCM - has been previously studied in [23] and [24] in the context of linear elasticity, and has later been extended to dynamic problems [25; 26]. Similar approaches and in particular detailed mathematical analyses of IGA and immersed boundary methods are provided in the vast literature on CutFEM, e.g. [27; 28]. The paper at hand combines an IGA-FCM approximation of the wave field with a voxelized representation of the material parameter. The subsequent key aspects of this multi-resolution FWI approach are addressed: * First, consistent and row-sum lumped mass matrix discretizations of IGA-FCM are compared for the forward wave problem - with the goal to assess their suitability for an IGA-based FWI. * Second, the paper examines the interaction between the resolutions of the wave field and the material field for the inverse problem in terms of accuracy and computational cost. * Third, an adaptive locally refined material grid is introduced as part of the inversion process. The efficiency of this approach is demonstrated with 2D and 3D examples. This paper is the second contribution in a sequence applying immersed boundary methods to full waveform inversion. To ensure self-consistency, the basics introduced in the first paper [1] are briefly summarized. It is structured as follows: In Section 2, we introduce the scalar wave equation, its spatial and temporal discretization, and the corresponding optimization problem. Section 3 derives guidelines for the discretization of the wave field with IGA-FCM and its row-sum lumped variant for the forward wave problem. Section 4 deals with the inverse problem, where the multi-resolution approach is evaluated on a 2D example. We then introduce an adaptive local refinement in the material field and apply the developed methodology to 2D and 3D examples. Finally, we conclude the paper in Section 5. ## 2 Full waveform inversion by isogeometric finite cell analysis ### FCM for the scalar wave equation We briefly summarize the nomenclature used in [1], assuming an isotropic heterogeneous medium with density \(\rho(\mathbf{x})\) and wave speed \(c(\mathbf{x})\). Introducing the wave field \(u(\mathbf{x},t)\), its acceleration \(\ddot{u}(\mathbf{x},t)\) and the external force term \(f(\mathbf{x},t)\), the scalar wave equation is defined on a computational domain \(\Omega\) for the time interval \(\mathcal{T}=[0,T_{\text{max}}]\) \[\rho(\mathbf{x})\ddot{u}(\mathbf{x},t)-\nabla\cdot\left(\rho(\mathbf{x})c^{2}(\mathbf{x})\nabla u(\mathbf{x},t)\right)=f(\mathbf{x},t),\qquad\mathbf{x}\in\Omega,t\in\mathcal{T}. \tag{1}\] The initial conditions are \(u(\mathbf{x},0)=\dot{u}(\mathbf{x},0)=0\) for \(\mathbf{x}\in\Omega\), the boundary conditions \(u(\mathbf{x},t)=0,\mathbf{x}\in\partial\Omega_{\text{D}}\) and \(\mathbf{n}\cdot\nabla u(\mathbf{x},t)=0,\mathbf{x}\in\partial\Omega_{\text{N}}\) with \(\partial\Omega=\partial\Omega_{\text{D}}\cup\partial\Omega_{\text{N}}\). We assume that a density \(\rho_{0}\) and wave speed \(c_{0}\) of the background material are given. All known geometric features of a structure are incorporated in the initial domain \(\Omega\), which may itself already have a complex geometric shape. Applying the basic concept of the finite cell method, \(\Omega\) is embedded in a larger, yet simply shaped domain \(\Omega_{e}\).
The original domain \(\Omega\) is recovered through an indicator function \(\alpha(\mathbf{x})\), which assumes a small value \(\epsilon\) (typically \(10^{-5}\) to \(10^{-8}\)) representing a small density in the fictitious part of \(\Omega_{e}\). While \(\alpha(\mathbf{x})\) is known a priori, unknown defects in the structure are iteratively identified by reconstructing a second, unknown scaling function \(\gamma(\mathbf{x})\), see Figure 1. Since \(\alpha\) and \(\gamma\) only scale the density, the scalar wave equation takes the following form on the extended domain \(\Omega_{\text{e}}\) \[\alpha(\mathbf{x})\gamma(\mathbf{x})\rho_{0}\ddot{u}(\mathbf{x},t)-\nabla \cdot\left(\alpha(\mathbf{x})\gamma(\mathbf{x})\rho_{0}c_{0}^{2}\nabla u( \mathbf{x},t)\right)=f(\mathbf{x},t),\qquad\mathbf{x}\in\Omega_{e},t\in \mathcal{T}. \tag{2}\] Figure 1: A priori known geometry incorporated by the indicator function \(\alpha\) with an unknown void reconstructed by the scaling function \(\gamma\) (from [1]) Note that this approach is independent of the basis chosen to discretize the spatial solution field. The spectral cell method (SCM) [29, 30] uses Lagrange polynomials to approximate the wave field for both scalar and elastic wave equations. Isogeometric finite cell analysis (IGA-FCM) published in [23, 24] can be combined with mass lumping to be used in explicit dynamics [31, 26, 32]. Other immersed boundary methods (IBM) closely related to FCM are CutFEM [27, 33], IBRA [34], aggregated FEM [35], cgFEM [36], and the shifted boundary method [37, 38]. Common to all these approaches is the idea to circumvent the task of boundary-conforming mesh generation by generating a non-boundary conforming computational grid and recovering the boundary at the level of the integration of the underlying bilinear forms. Obviously, this does not come at zero cost. In FCM, the integrands of the element mass and stiffness matrices are discontinuous for cells cut by boundaries of the physical domain. Several suitable integration approaches have been proposed to overcome this difficulty, including space-trees [39, 40], moment-fitting [41], local integration meshes [42], or smart octrees [43]. In [44], it is shown that FCM can be combined with a voxelized representation of the material parameter \(\alpha\). The integration is performed on a finer voxel grid using pre-integration to mitigate the computational burden of computing the system matrices [45]. ### Spatial discretization of the wave field and material For the approximation of the wave field \(u(\mathbf{x},t)\), we use bivariate and trivariate B-spline discretizations in 2D and 3D [20, 46]. By defining a polynomial degree \(p\) and a set of parametric coordinates, called the knot vector \(\Xi=[\xi_{1},\xi_{2},...,\xi_{n+p+1}]\), the B-spline basis functions can be constructed - where \(\xi_{i}\) is the \(i\)th knot and \(n\) the number of basis functions. Using the Cox-de Boor recursion formula [47, 48], the B-splines are \[N_{i,0}(\xi) =\begin{cases}1,\quad\xi_{i}\leq\xi<\xi_{i+1}\\ 0,\quad\text{otherwise}\end{cases},\qquad\textbf{if }p=0 \tag{3}\] \[N_{i,p}(\xi) =\frac{\xi-\xi_{i}}{\xi_{i+p}-\xi_{i}}N_{i,p-1}(\xi)+\frac{\xi_ {i+p+1}-\xi}{\xi_{i+p+1}-\xi_{i+1}}N_{i+1,p-1}(\xi)\qquad\textbf{else}. \tag{4}\] The continuity \(C^{p-k}\) of B-splines across the knot boundaries is defined by the knot multiplicity \(k\). Henceforth, unless otherwise indicated we use open knot vectors with \(k=1\) for all inner knots and \(k=p+1\) for the end knot. 
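To make the recursion (3)-(4) concrete, the following minimal Python sketch evaluates a single B-spline basis function via the Cox-de Boor formula. The function name, the zero-based knot indexing, and the example knot vector are illustrative choices and not part of the implementation used in this paper; terms over zero-length knot spans are handled with the usual 0/0 := 0 convention.

```python
import numpy as np

def bspline_basis(i, p, xi, knots):
    """Evaluate the i-th B-spline basis function of degree p at xi
    via the Cox-de Boor recursion (zero-based knots, half-open spans)."""
    if p == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    left, right = 0.0, 0.0
    # Contributions over zero-length knot spans are defined as zero (0/0 := 0).
    if knots[i + p] > knots[i]:
        left = (xi - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, xi, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - xi) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, xi, knots)
    return left + right

# Example: quadratic (p=2) open knot vector on [0, 2] with one inner knot.
p = 2
knots = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 2.0, 2.0])
n = len(knots) - p - 1                                  # number of basis functions
values = [bspline_basis(i, p, 0.7, knots) for i in range(n)]
print(values, "sum =", sum(values))                     # non-negative partition of unity (sum = 1)
```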
With the set of all \(n^{\text{dof}}\) bi- or trivariate basis functions \(\mathbf{N}\), the spatially discretized wave solution is \[u(\mathbf{x},t)\approx\tilde{u}(\mathbf{x},t)=\sum_{i=1}^{n^{\text{dof}}}N_{i}(\mathbf{x})\hat{u}_{i}(t)=\mathbf{N}(\mathbf{x})\hat{\mathbf{u}}(t), \tag{5}\] where \(\hat{u}_{i}\) are the coefficients of the corresponding basis functions. Thanks to the non-negative partition of unity property of B-splines [49], row-sum lumping is readily applicable and has been revived for boundary-conforming [50] and immersed IGA [31, 26]. Unfortunately, row-summing leads to a breakdown of \(p\)-convergence. As shown in [50], the convergence of the first generalized eigenvalue is only of quadratic order for quadratic and cubic B-splines in 1D problems. Nevertheless, row-summing leads to a critical time step that becomes independent of the cut ratio of the knot spans if the physical domain is immersed [25, 26, 32] and, therefore - at least at first sight - seems to be an attractive option for explicit dynamics. For the discretization of the material parameters, we utilize piecewise constant functions \(N_{\text{m},i}\) defined on a voxel grid (see Figure 2). While this material grid can, in principle, be fully independent of the knot span grid for the spatial wave discretization, it is computationally advantageous to define it as a refinement with \(n^{\rm v}\) voxels per knot span in each spatial direction. With \(n^{\rm m}\) voxels discretizing the complete extended domain \(\Omega_{e}\), the material discretization is \[\gamma(\mathbf{x})\approx\hat{\gamma}(\mathbf{x})=\sum_{i=1}^{n^{\rm m}}N_{\rm m,i}(\mathbf{x})\hat{\gamma}_{i}=\mathbf{N}_{\rm m}(\mathbf{x})\hat{\gamma}, \tag{6}\] where \(\hat{\gamma}_{i}\) denotes the value of voxel \(i\). Since the material might be discontinuous within one knot span, the integration of the mass and stiffness matrices is performed by means of a composed integration at the voxel level, as done e.g. in [44]. ### Time integration Derived via the Bubnov-Galerkin approach, we introduce the mass matrix \(\mathbf{M}\), the stiffness matrix \(\mathbf{K}\) and the external force vector \(\hat{\mathbf{f}}\). The space-discrete form of the scalar wave equation is given by \[\mathbf{M}\ddot{\hat{\mathbf{u}}}(t)+\mathbf{K}\hat{\mathbf{u}}(t)=\hat{\mathbf{f}}(t). \tag{7}\] Applying second-order central differences (CDM), the next time step \(t_{i+1}=t_{i}+\Delta t\) is calculated from the previous two time steps \(t_{i}\) and \(t_{i-1}=t_{i}-\Delta t\): \[\hat{\mathbf{u}}(t_{i+1})=2\hat{\mathbf{u}}(t_{i})-\hat{\mathbf{u}}(t_{i-1})+\Delta t^{2}\mathbf{M}^{-1}\left[\hat{\mathbf{f}}(t_{i})-\mathbf{K}\hat{\mathbf{u}}(t_{i})\right]. \tag{8}\] CDM is an explicit, conditionally stable time integration method. The number of time steps is denoted as \(n^{\rm t}\). The critical time step is given by \[\Delta t_{\rm c}=\frac{2}{\sqrt{\lambda_{\rm max}(\mathbf{K},\mathbf{M})}}, \tag{9}\] where \(\lambda_{\rm max}(\mathbf{K},\mathbf{M})\) is the largest eigenvalue of the generalized eigenproblem [49]. For details see e.g. [51]. ### Full waveform inversion The goal of FWI is to find a set of unknown material coefficients \(\hat{\gamma}\) to minimize the nonlinear optimization problem \[\hat{\gamma}^{*}=\arg\min_{\hat{\gamma}}\chi(\hat{\gamma}).
\tag{10}\] Figure 2: Wave field mesh (thick blue lines and nodes) and material mesh (thin black lines) The cost function is defined by the squared residual between simulation and experiments, summed up over \(n^{\mathrm{r}}\) receiver positions in \(n^{\mathrm{s}}\) experiments \[\chi(\hat{\gamma})=\frac{1}{2}\sum_{s=1}^{n^{\mathrm{s}}}\sum_{r=1}^{n^{\mathrm{ r}}}\int_{T}\int_{\Omega}\left[\left(u^{s}(\hat{\gamma};\mathbf{x},t)-u^{0,s}( \mathbf{x},t)\right)^{2}\delta(\mathbf{x}-\mathbf{x}^{r})\right]d\Omega dt, \tag{11}\] where \(u^{s}(\hat{\gamma};\mathbf{x},t)\) is the solution of a wave simulation with the current material \(\hat{\gamma}\) and \(u^{0,s}(\mathbf{x}^{r},t)\) is the corresponding experimental measurement at the receiver position \(\mathbf{x}^{r}\). A typical experimental setup can be found in [1], and the computation of the gradient applying the adjoint method is derived according to [52, 53]. In the following, we revise the derived formulas from [1]. The sensitivity kernel with respect to the scaling function \(\gamma\) for a given set \(\hat{\gamma}\) is \[K_{\gamma}(\mathbf{x})=\sum_{s=1}^{n^{\mathrm{s}}}\int_{T}\left[-\alpha( \mathbf{x})\rho_{0}\hat{u}^{s,\dagger}(\hat{\gamma};\mathbf{x},t)\hat{u}^{s}( \hat{\gamma};\mathbf{x},t)+\alpha(\mathbf{x})\rho_{0}c_{0}^{2}\nabla u^{s, \dagger}(\hat{\gamma};\mathbf{x},t)\cdot\nabla u^{s}(\hat{\gamma};\mathbf{x},t) \right]dt, \tag{12}\] where \(u^{s,\dagger}\) is the adjoint solution of experiment \(s\). The gradient with respect to the voxel coefficients \(\gamma_{i}\) is approximated by evaluating the sensitivity kernel at the voxel mid positions \(\mathbf{x}_{\hat{\gamma},i}\) \[\frac{d\chi}{d\hat{\gamma}_{i}}\approx\int_{\Omega}K_{\gamma}\delta(\mathbf{x }_{\hat{\gamma},i}-\mathbf{x})d\Omega=K_{\gamma}(\mathbf{x}_{\hat{\gamma},i}) \tag{13}\] or in discretized form \[\frac{d\chi}{d\hat{\gamma}_{i}}\approx\sum_{s=1}^{n^{\mathrm{s}}}\int_{\Omega }\int_{T}\left[-\rho_{0}(\hat{\mathbf{u}}^{s,\dagger})^{T}\mathbf{N}^{T} \mathbf{N}\hat{\mathbf{u}}^{s}+\rho_{0}c_{0}^{2}(\hat{\mathbf{u}}^{s,\dagger} )^{T}\mathbf{B}^{T}\mathbf{B}\hat{\mathbf{u}}^{s}\right]dt\delta(\mathbf{x}_{ \hat{\gamma},i}-\mathbf{x})d\Omega \tag{14}\] The unknown material field \(\gamma\) is optimized only within the physical domain. Therefore, it is assumed that the indicator function \(\alpha\) at the considered positions \(\mathbf{x}_{\hat{\gamma},i}\) is equal to 1 and, consequently that it vanishes in the above equation. In gradient-based optimization, the material is iteratively improved with an update step \(\Delta\hat{\gamma}\). The superscript \(k\) denotes the current iteration \[\hat{\gamma}^{(k+1)}=\hat{\gamma}^{(k)}+\Delta\hat{\gamma}^{(k)} \tag{15}\] Quasi-Newton type methods take into account the current gradient \(\nabla_{\hat{\gamma}}\chi(\hat{\gamma}^{(k)})\) and an approximate of the inverse Hessian \(\mathbf{H}_{a}^{-1}(\hat{\gamma}^{k})\) in the model update \[\Delta\hat{\gamma}^{(k)}=-\mathbf{H}_{a}^{-1}(\hat{\gamma}^{k})\nabla_{\hat{ \gamma}}\chi(\hat{\gamma}^{(k)}). \tag{16}\] For an introduction to gradient-based optimization, we refer to [54]. In the paper at hand, the matrix-free and bounded L-BFGS-B of the Python library SciPy [55] is applied. The computational cost and memory requirement of the gradient computation can be readily estimated. Considering (14), the sensitivity kernel must be computed for all \(n^{\mathrm{m}}\) voxel mid points. 
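As stated above, the bounded, matrix-free L-BFGS-B implementation of SciPy is used to drive the update (15)-(16). The following sketch indicates one possible way to wire the misfit (11) and the adjoint gradient (14) into `scipy.optimize.minimize`. The `solver` object with its `compute_misfit` and `compute_gradient` methods is a hypothetical placeholder for the forward and adjoint wave simulations; it is not part of the original code, and the bounds and iteration limit mirror the values used in the numerical examples.

```python
import numpy as np
from scipy.optimize import minimize

def misfit_and_gradient(gamma_hat, solver):
    """Placeholder: run all forward and adjoint wave simulations for the current
    voxel coefficients gamma_hat and return (chi, dchi/dgamma)."""
    chi = solver.compute_misfit(gamma_hat)        # cost function (11)
    grad = solver.compute_gradient(gamma_hat)     # sensitivity kernel evaluated at the voxel midpoints, cf. (14)
    return chi, grad

def run_inversion(solver, n_voxels, gamma_min=1e-5, gamma_max=1.0, max_iter=10):
    gamma0 = np.ones(n_voxels)                    # homogeneous initial material
    result = minimize(
        misfit_and_gradient,
        gamma0,
        args=(solver,),
        method="L-BFGS-B",                        # matrix-free, bounded quasi-Newton update (15)-(16)
        jac=True,                                 # misfit_and_gradient returns (value, gradient)
        bounds=[(gamma_min, gamma_max)] * n_voxels,
        options={"maxiter": max_iter},
    )
    return result.x                               # reconstructed voxel coefficients
```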
Assuming that the forward and adjoint solutions \(\hat{\mathbf{u}}^{s}\) and \(\hat{\mathbf{u}}^{s,\dagger}\) have been computed and are temporarily stored for all time steps \(n^{\mathrm{t}}\), the integrand is evaluated and summed up for all time steps. Since the evaluation is done locally for every voxel, only the few basis functions whose support overlaps the respective voxel contribute. The effort of the gradient evaluation therefore grows proportionally to the number of voxels and the number of time steps, \[\mathrm{cost}\left(\nabla_{\hat{\gamma}}\chi\right)\propto n^{\mathrm{m}}\,n^{\mathrm{t}}, \tag{17}\] while the memory demand is dominated by the temporary storage of the solution coefficients of all time steps, \[\mathrm{memory}\propto n^{\mathrm{dof}}\,n^{\mathrm{t}}. \tag{18}\] ## 3 Solving the wave equation by IGA-FCM In this section, we first investigate whether IGA-FCM is generally suitable as a wave equation solver in the framework of full waveform inversion. The following observations are important: 1. A wave equation solver should allow a high convergence rate which can even be selected depending on the expected smoothness of the wave field. It is well known [20, 21] that increasing the polynomial degree of the Ansatz functions outperforms a refinement of the mesh (h-extension) by far. Therefore, IGA (like other high-order solvers) is well suited in the context under consideration. This applies in particular in combination with immersed methods such as the FCM, as restrictions on the geometric shape of the domain \(\Omega\) are minimal. 2. A solver should provide high accuracy per degree of freedom. \(k\)-extension of IGA (see [20]) combines the increase of the polynomial degree and the increase of the smoothness of the Ansatz in such a way that one degree of freedom per spatial direction is enough to gain one additional order of convergence. The computational effort associated with this additional degree of freedom may be significant due to a loss of sparsity and an increase of fill-in throughout the system matrices. Yet, the additional computational cost is offset by a drastic reduction in memory requirements. This is important because, in the adjoint gradient computation, the coefficients of the solution vectors of all time steps of the forward simulation must be stored temporarily. 3. The goal of FWI is to identify geometric features that may be small compared to the mesh size of the wave field discretization. This can be achieved (as will be shown in the following section) by using a material grid that is refined compared to the mesh of the wave field. Let us now take a look at the IGA-FCM solution of the scalar wave problem using consistent and lumped mass matrices. We consider a two-dimensional domain of \(l_{x}=10\) and \(l_{y}=5\) with a circular hole of radius \(r=0.5\) at position \(x_{c}=6\) and \(y_{c}=2.85\), shown in Figure 3. The density and wave speed of the background material are set to \(\rho_{0}=1\) and \(c_{0}=1\). A 2-cycle sine burst \[g(t)=\begin{cases}\sin\left(2\pi ft\right)\sin\left(\frac{\pi ft}{2}\right)&,\,t\leq\frac{2}{f}\\ 0&,\,\text{else}\end{cases} \tag{19}\] with a central frequency \(f=0.5\) and a spatial Gaussian distribution \[f(x,y)=e^{-\left(\frac{(x-x_{s})^{2}}{2\sigma_{x}^{2}}+\frac{(y-y_{s})^{2}}{2\sigma_{y}^{2}}\right)} \tag{20}\] is excited at position \(x_{s}=2\) and \(y_{s}=2.5\) with \(\sigma_{x}=\sigma_{y}=0.25\), leading to a dominant wavelength \(\lambda_{\text{dom}}=2\). The wave propagation is computed for \(T_{\text{max}}=10\).
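A minimal sketch of the forward-solver building blocks just described is given below: the 2-cycle sine burst (19) and the central-difference update (8). It assumes that the mass and stiffness matrices have already been assembled as SciPy sparse matrices and that the time-dependent load vector (the spatial Gaussian (20) scaled by the burst) is supplied by a user-defined function; the function names and the optional generalization to an arbitrary number of cycles are illustrative only.

```python
import numpy as np
from scipy.sparse.linalg import splu

def sine_burst(t, f=0.5, cycles=2):
    """Sine burst (19): carrier sin(2*pi*f*t) under a half-sine envelope, zero after cycles/f."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= cycles / f,
                    np.sin(2.0 * np.pi * f * t) * np.sin(np.pi * f * t / cycles),
                    0.0)

def run_cdm(M, K, load_vector_at, dt, n_steps):
    """Central-difference time stepping (8) for M*u'' + K*u = f(t) with zero initial conditions.
    M, K: scipy.sparse matrices; load_vector_at(t) returns the assembled load vector.
    dt must respect the critical time step (9)."""
    lu = splu(M.tocsc())                  # consistent mass matrix: factorize once, reuse every step
    n_dof = M.shape[0]
    u_prev = np.zeros(n_dof)
    u_curr = np.zeros(n_dof)
    history = [u_curr.copy()]             # stored temporarily for the adjoint gradient evaluation
    for i in range(n_steps):
        t = i * dt
        rhs = load_vector_at(t) - K @ u_curr
        u_next = 2.0 * u_curr - u_prev + dt**2 * lu.solve(rhs)
        u_prev, u_curr = u_curr, u_next
        history.append(u_curr.copy())
    return np.array(history)
```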
To compare the accuracy of consistent and lumped IGA-FCM for different polynomial orders, the wave solution at \(T_{max}\) is evaluated in the marked area \([7,10]\times[0,5]\) on the right side of the hole. Figure 3: 2D domain with a hole For this purpose, this area is sampled with \(N_{\mathrm{e}}=601\times 1001\) equidistant evaluation points in \(x\)- and \(y\)-direction. These evaluation points correspond to an arbitrary number of receiver positions in the FWI. The normalized error with respect to an overkill reference solution \(u_{\mathrm{ref}}\) \[\epsilon=\frac{\sqrt{\sum_{e=0}^{N_{\mathrm{e}}}\left(u(\mathbf{x}_{e},T_{ \mathrm{max}})-u_{\mathrm{ref}}(\mathbf{x}_{e},T_{\mathrm{max}})\right)^{2}}}{ \sqrt{\sum_{e=0}^{N_{\mathrm{e}}}\left(u_{\mathrm{ref}}(\mathbf{x}_{e},T_{ \mathrm{max}})\right)^{2}}}. \tag{21}\] can be interpreted as an error proportional to the \(\mathcal{L}_{2}\) error calculated using the Riemann sum for integration. The reference solution is obtained with quintic \(C^{0}\) continuous integrated Legendre polynomials defined on a mesh of mesh size \(h=\frac{1}{16}\), resulting in 160 elements in \(x\)-direction and 80 elements in \(y\)-direction. The FCM indicator function defining the physical part of the computational domain is set to \(\alpha=10^{-8}\) inside the hole. For the consistent and lumped version of IGA-FCM, the solution is computed for linear, quadratic, cubic and quartic \(C^{p-1}\) B-splines. The mesh size is varied using \(h=\frac{1}{2},\frac{1}{4},\frac{1}{8},\frac{1}{16},\frac{1}{32}\). In order to reduce spatial and temporal integration errors to a minimum, all simulations are carried out with \(n^{\mathrm{t}}=100\,000\) time steps - and the integration of the mass and stiffness matrices and the force vector is performed by a quadtree-quadrature applying a depth of \(d=10\). Figure 4 shows the results for the consistent version of IGA-FCM, referred to as 'c-IGA-FCM', and for the lumped version of IGA-FCM, denoted as 'l-IGA-FCM'. Reference lines proportional to \(h^{2}\), \(h^{3}\), \(h^{4}\), and \(h^{5}\) are depicted. Corresponding to the results concerning the generalized eigenvalue problem in [50], a collapse of p-convergence can be observed for the problem at hand if the lumped version of IGA-FCM is applied. The order of convergence remains quadratic regardless of the polynomial order \(p\), while the convergence constant even deteriorates as \(p\) increases. For the consistent version of IGA-FCM, however, we observe an increase in the order of convergence. As expected from results concerning the eigenfunction of the generalized eigenvalue problem, the error in time has an asymptotic convergence of \(\mathcal{O}(h^{p+1})\) as well. Additionally, improvement in the constant can be noticed as \(p\) increases. Since FWI requires to temporarily store full wave fields, a memory-efficient solver is of great advantage. Thus, lumped IGA-FCM is not suitable for this application - and we will not consider it for the inverse problem, due to the higher number of degrees of freedom that is required to achieve a desired accuracy. However, consistent IGA-FCM provides a very memory-efficient and accurate solution of the wave problem and, therefore, is our method of choice from here on. Figure 4: Convergence for consistent and lumped IGA-FCM with \(p=1,2,3,4\) ## 4 The inverse problem ### Multi-resolution approach To evaluate the applicability of the multi-resolution approach, we consider the embedded domain example of Figure 1. 
As given in [1], the sample has a size of \(100\,\mathrm{mm}\times 50\,\mathrm{mm}\), density and wave speed are \(2700\,\mathrm{kg/m^{3}}\) and \(6000\,\mathrm{m}\,\mathrm{s}^{-1}\), the lower boundary of the physical domain is defined by cubic splines interpolating the nine points \((0,10\,\mathrm{mm})\), \((10\,\mathrm{mm},1\,\mathrm{mm})\), \((25\,\mathrm{mm},7.5\,\mathrm{mm})\), \((35\,\mathrm{mm},2\,\mathrm{mm})\), \((50\,\mathrm{mm},15\,\mathrm{mm})\), \((60\,\mathrm{mm},3\,\mathrm{mm})\), \((75\,\mathrm{mm},12\,\mathrm{mm})\), \((90\,\mathrm{mm},1\,\mathrm{mm})\), and \((100\,\mathrm{mm},10\,\mathrm{mm})\), the circular hole is centered at \((35\,\mathrm{mm},20\,\mathrm{mm})\) with radius \(r=7.5\,\mathrm{mm}\), and the unknown ellipse is located at \((63\,\mathrm{mm},18\,\mathrm{mm})\) with semi-axes \(a=6\,\mathrm{mm}\) and \(b=1\,\mathrm{mm}\), rotated by \(67.5^{\circ}\). The sample is excited by 17 sources centered at the top surface with a spacing of \(4\,\mathrm{mm}\). Whenever a signal is sent from one of the sources, all source locations are used as receiver positions, mimicking the functionality of physical transducers. The central frequency of the 2-cycle sine burst is \(f=500\,\mathrm{kHz}\), corresponding to a dominant wavelength \(\lambda_{\mathrm{dom}}=12\,\mathrm{mm}\). The synthetic reference data are computed with a boundary-conforming mesh of linear quadrilateral elements with over \(50\frac{\mathrm{dof}}{\lambda_{\mathrm{dom}}}\). The inversion is done using full matrix capture (FMC, see [56]) including signals of all sources, and a maximum of 10 iterations is performed. No regularization of the inverse problem beyond the intrinsic one associated with the discretization with B-splines is applied. Taking into account the results of Section 3, the polynomial degree of the wave field is chosen to be \(p=2\) and \(p=3\). Wave field and material grids are discretized independently. The knot span length of the wave field mesh is varied between \(h=5\,\mathrm{mm}\), \(2.5\,\mathrm{mm}\), and \(1.25\,\mathrm{mm}\), leading to discretizations with \(2.4\), \(4.8\), and \(9.6\) knot spans per wavelength. These meshes are combined with independent material grids of voxel size \(h^{\mathrm{v}}=1.25\,\mathrm{mm}\), \(0.625\,\mathrm{mm}\), or \(0.3125\,\mathrm{mm}\). The nine resulting combinations of the wave field and material grids are listed in Table 1. The number of voxels in each dimension per knot span is denoted as \(n^{\mathrm{v}}\). In order to incorporate the a priori known geometry, the integration of the system matrices is carried out using a quadtree of depth \(d=p+1\) on each cut knot span. Inside the void domain, \(\alpha\) is set to \(10^{-5}\) and the inversion of \(\gamma\) is bounded between \(\gamma_{\mathrm{min}}=10^{-5}\) and \(\gamma_{\mathrm{max}}=1\). The wave field is computed for a time span of \(6.0\times 10^{-5}\,\mathrm{s}\) in 3000 time steps. Figure 5 shows the inversion results for \(p=2\), Figure 6 for \(p=3\), and Table 2 and Table 3 list the computation times. For the graphical representation, the material field is visualized throughout the computational domain on the level of the applied voxel size. As noted above, the a priori known geometric features (i.e., the lower boundary and the circular hole) are resolved more precisely in the FCM computation by a quadtree integration. Figure 5: Inversion results for polynomial degree \(p=2\) Figure 6: Inversion results for polynomial degree \(p=3\) As the figures show, a finer resolution of the material grid leads to a better reconstruction of the defect boundary.
This, however, comes at the cost of a much higher computational effort, see Table 2 for polynomial degree \(p=2\) and Table 3 for \(p=3\), since the sensitivity kernel (14) has to be evaluated at every voxel of the material grid. For example, considering column 1 (voxel size 1.25mm) of Table 2, the computational time is dominated by solving the wave fields. In contrast, column 3 (voxel size 0.3125mm) uses the same approximation of the wave fields, but now spends most of the optimization effort evaluating the gradient. The time measurements clearly confirm the complexity estimation of equation (17). Moreover, it can be seen that the reconstruction at undamaged regions is resolved quite well by a coarse voxel grid, since the reconstructed material varies only slightly there. According to these observations, we suggest using a locally refined material grid. The undamaged background can be resolved with a low number of voxels, while the areas of interest, i.e., where defects need to be detected, require a finer resolution. A corresponding refinement strategy and a suitable refinement indicator are presented in the following section. \begin{table} \begin{tabular}{|l|c|c|c|} \hline & \(h^{\mathrm{v}}=1.25\)mm & \(h^{\mathrm{v}}=0.625\)mm & \(h^{\mathrm{v}}=0.3125\)mm \\ \hline \hline \(h=5\)mm & 158.4 s & 460.7 s & 1648.5 s \\ \hline \(h=2.5\)mm & 283.4 s & 590.7 s & 2219.9 s \\ \hline \(h=1.25\)mm & 620.2 s & 1007.7 s & 2556.7 s \\ \hline \end{tabular} \end{table} Table 2: Computation times for polynomial degree \(p=2\) \begin{table} \begin{tabular}{|l|c|c|c|} \hline & \(h^{\mathrm{v}}=1.25\)mm & \(h^{\mathrm{v}}=0.625\)mm & \(h^{\mathrm{v}}=0.3125\)mm \\ \hline \hline \(h=5\)mm & 268.4 s & 790.5 s & 2600.6 s \\ \hline \(h=2.5\)mm & 425.1 s & 965.4 s & 3016.8 s \\ \hline \(h=1.25\)mm & 990.8 s & 1474.8 s & 4004.5 s \\ \hline \end{tabular} \end{table} Table 3: Computation times for polynomial degree \(p=3\) ### Adaptive refinement of the material grid For the refinement of the material grid, we introduce an indicator \(\eta\) corresponding to each voxel with value \(\hat{\gamma}_{i}\). Motivated by Sobel filters, which are used in image processing [57], the \(L_{2}\)-norm of the spatial gradient is used in [58] for the sharpness quantification of a reconstructed material parameter, i.e., \[\|\nabla\gamma(\mathbf{x})\|_{2}=\sqrt{\left(\frac{\partial\gamma(\mathbf{x})}{\partial x}\right)^{2}+\left(\frac{\partial\gamma(\mathbf{x})}{\partial y}\right)^{2}} \tag{22}\] in two spatial dimensions. High values indicate areas of rapidly changing material and, thus, boundaries of our regions of interest. Since the material parameter is discretized by constant shape functions defined on the voxel grid, we adapt the definition of the sharpness (22) to introduce a suitable indicator, replacing the derivatives in the spatial directions by \(G_{x}(\hat{\gamma}_{i})\) and \(G_{y}(\hat{\gamma}_{i})\).
This voxelized gradient is computed as the mean of the absolute jump values of the material parameter \(\gamma\) between neighboring voxels, i.e., \[G_{x}(\hat{\gamma}_{i}) =\frac{1}{2h^{\mathrm{v}}}\left(|\llbracket\hat{\gamma}_{i}\rrbracket^{(x,+)}|+|\llbracket\hat{\gamma}_{i}\rrbracket^{(x,-)}|\right)=\frac{1}{2h^{\mathrm{v}}}\left(|\hat{\gamma}_{r}-\hat{\gamma}_{i}|+|\hat{\gamma}_{i}-\hat{\gamma}_{l}|\right) \tag{23}\] \[G_{y}(\hat{\gamma}_{i}) =\frac{1}{2h^{\mathrm{v}}}\left(|\llbracket\hat{\gamma}_{i}\rrbracket^{(y,+)}|+|\llbracket\hat{\gamma}_{i}\rrbracket^{(y,-)}|\right)=\frac{1}{2h^{\mathrm{v}}}\left(|\hat{\gamma}_{o}-\hat{\gamma}_{i}|+|\hat{\gamma}_{i}-\hat{\gamma}_{u}|\right) \tag{24}\] where \(h^{\mathrm{v}}\) is the size of the voxel, \(\hat{\gamma}_{r}\) is the voxel value of the voxel adjacent in positive \(x\)-direction and \(\llbracket\hat{\gamma}_{i}\rrbracket^{(x,+)}\) is the jump of the material in that direction, while \(\hat{\gamma}_{l}\) and \(\llbracket\hat{\gamma}_{i}\rrbracket^{(x,-)}\) are the voxel value and jump in negative \(x\)-direction.
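Anticipating the \(y\)-direction analogue and the combined per-voxel indicator stated next, a compact NumPy sketch of this jump-based computation on a 2D voxel array could look as follows. The boundary treatment (edge replication, i.e., zero jump at the grid boundary), the default threshold fraction of one half, and the helper names are assumptions made for illustration only.

```python
import numpy as np

def refinement_indicator(gamma, h_v):
    """Jump-based sharpness indicator on a 2D voxel grid.
    gamma: (ny, nx) array of voxel values (axis 0 = y, axis 1 = x); h_v: voxel edge length.
    Edge voxels reuse their own value for the missing neighbour (zero jump at the boundary)."""
    padded = np.pad(gamma, 1, mode="edge")
    centre = padded[1:-1, 1:-1]
    jump_xp = np.abs(padded[1:-1, 2:] - centre)     # jump towards the +x neighbour
    jump_xm = np.abs(centre - padded[1:-1, :-2])    # jump towards the -x neighbour
    jump_yp = np.abs(padded[2:, 1:-1] - centre)
    jump_ym = np.abs(centre - padded[:-2, 1:-1])
    g_x = (jump_xp + jump_xm) / (2.0 * h_v)         # averaged absolute jumps, cf. (23)
    g_y = (jump_yp + jump_ym) / (2.0 * h_v)         # cf. (24)
    return np.sqrt(g_x**2 + g_y**2)                 # combined per-voxel indicator

def voxels_to_refine(gamma, h_v, fraction=0.5):
    """Flag voxels whose indicator exceeds a fraction of the maximum occurring value."""
    eta = refinement_indicator(gamma, h_v)
    return eta >= fraction * eta.max()
```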
For the first \(N^{i,1}=3\) iterations, only one voxel per knot span is used to model the material, i.e., \(n^{\mathrm{v}}=1\). This intermediate solution identifies the area of interest to be locally refined. The indicated voxels and one additional surrounding layer are subdivided into \(n^{\mathrm{v,s}}=4\) sub-voxels in each spatial direction. From here, two different variants are investigated. In the first one, the intermediate solution of the first three iterations is chosen as the initial guess for the following \(N^{i,2}=7\) iterations. In the second variant, a full restart, the inversion is again performed for \(N^{i,2}=10\) iterations starting from homogeneous material. Figure 8 shows the results of the inversion with the introduced local refinement strategies for \(p=2\) and Figure 9 for \(p=3\). The intermediate results from the first three iterations, the corresponding sharpness and refined areas, and the final inversion results for both strategies are depicted. It is obvious that the reconstruction quality of the restart variant is superior to that of the start with an initial guess. The reason for this is that the intermediate reconstruction can be already trapped in a local minimum of the optimization process, which can yet not be refined to a minimum on the finer grid. Figure 7: Refinement of the material grid Table 4 shows the computational times of the investigated refinement strategies. While the effort for the restart version is moderately larger than that of the refinement using the initial guess of the coarse material grid, both locally refined variants are over two times faster than the unrefined inversion. This is due to the fact that the computational effort for the gradient computation is greatly reduced. It should be noted that the restart version, in particular, does not compromise the quality of the reconstruction. \begin{table} \begin{tabular}{|l|c|c|} \hline & \(p=2\) & \(p=3\) \\ \hline \hline no refinement & 2556.7 s & 4004.5 s \\ \hline refinement – start with initial guess & 945.8 s & 1328.8 s \\ \hline refinement – restart version & 1019.1 s & 1813.5 s \\ \hline \end{tabular} \end{table} Table 4: Computation times with refinement for \(h=1.25\)mm Figure 8: Inversion results with refinement – \(h=1.25\)mm and \(p=2\) Figure 9: Inversion results with refinement – \(h=1.25\)mm and \(p=3\) ### 3D example The 3D structure under consideration is shown in Figure 10. The left pillar with the corresponding part of the roof (marked in blue) of the structure is examined locally in two inversions. The surface is given in STL ('Standard Triangle Language') format. At first only the left pillar is embedded in an extended computational domain of size \(2\,\mathrm{m}\times 1.25\,\mathrm{m}\times 1.25\,\mathrm{m}\). The density is set to \(2400\,\mathrm{kg/m^{3}}\), the wave speed to \(3000\,\mathrm{m\,s^{-1}}\). The setup of the inversion is shown in Figure 11. Three cavities are introduced at heights \(0.8\,\mathrm{m}\), \(0.95\,\mathrm{m}\), and \(1.15\,\mathrm{m}\) with radii \(0.05\,\mathrm{m}\), \(0.08\,\mathrm{m}\), and \(0.06\,\mathrm{m}\). Reference data are generated for twelve sources placed in three different heights, i.e., \(0.5\,\mathrm{m}\), \(1.0\,\mathrm{m}\), and \(1.5\,\mathrm{m}\), using a mesh of quadratic B-splines with knot span size \(h=0.025\,\mathrm{m}\). Within the Figure 11: FWI setup of the left pillar: The structure is colored in blue, the computational domain in gray, the circular cavities in red, and the search window in green. 
Within the void regions, \(\alpha\) is set to \(10^{-6}\). Integration of the system matrices is performed with an octree of depth 3. The source term is a 2-cycle sine burst with a central frequency \(f=10\,\mathrm{kHz}\), resulting in a dominant wave length \(\lambda_{\mathrm{dom}}=0.3\,\mathrm{m}\). In the inversion, the wave fields are discretized by cubic B-splines defined on a mesh with knot span size \(h=0.05\,\mathrm{m}\). The simulation of a time span of \(8.0\times 10^{-4}\,\mathrm{s}\) is carried out in 800 time integration steps. The material field is first defined on a grid with \(n^{\mathrm{v}}=2\) voxels per knot span in each direction, before it is locally refined after \(N^{i,1}=3\) iterations with \(n^{\mathrm{v,s}}=4\) sub-voxels per voxel in each direction. Integration of the system matrices is carried out on the voxel level, incorporating the a priori known geometry by an octree of depth 4 for the cut knot spans with \(\alpha=10^{-5}\). Both local refinement strategies are executed - with either \(N^{i,2}=7\) additional iterations if the intermediate solution is chosen as the initial model for the subsequent optimization, or \(N^{i,2}=10\) iterations if a homogeneous material is chosen as the new initial model. FWI can be applied to a region of interest by introducing a search window, see e.g., [14]. Consequently, \(\gamma\) is optimized only within the selected region. In this example, the search window has a size of \(0.4\,\mathrm{m}\times 0.3\,\mathrm{m}\times 1.5\,\mathrm{m}\) and is centered at \(1\,\mathrm{m}\) height, mimicking the interior of the pillar. The optimization is bounded between \(\gamma_{\mathrm{min}}=10^{-5}\) and \(\gamma_{\mathrm{max}}=1\). The reconstructed cavities are shown in Figure 12. For a better visualization, the Iso Volume filter of ParaView was applied to the voxelized representation of the defects. The threshold in \(\gamma\) is set to 0.5 to classify void regions. Both strategies are able to identify the positions and sizes of the defects precisely. Also, the spherical shape of the voids is accurately approximated. In the restart version, the surface is reconstructed more smoothly because the inversion is not trapped in a local minimum caused by the coarse material grid. In a second inversion, the geometrically more complex left roof part is investigated. The FWI setup is shown in Figure 13. The physical domain is embedded in a computational domain of size \(2.5\,\mathrm{m}\times 2.5\,\mathrm{m}\times 0.85\,\mathrm{m}\). Nine sources are positioned on the top surface, while six sources are located on the bottom surface. To improve the inversion process, a search window is defined, excluding the areas where the sources are attached. In the reference model, an ellipsoidal cavity is placed at position \((1.35\,\mathrm{m},1.2\,\mathrm{m},0.6\,\mathrm{m})\) with semi-axes \(0.15\,\mathrm{m},\,0.075\,\mathrm{m}\), and \(0.075\,\mathrm{m}\). The material parameters, spatial and time discretizations, and the source term remain the same as for the previous pillar example. Both refinement strategies are performed in the same manner as in the previous example. Figure 14 shows the identified cavities. In both strategies, the ellipsoid is detected at the right position with a proper shape. It has to be noted that the strategy that continues with the intermediate solution terminates after three refined iterations.
The optimization is trapped in a local minimum and is not able to find a suitable update. Consequently, the size of the defect is slightly underestimated. On the other hand, the restart version successfully reproduces position, shape, and size of the cavity. Figure 12: Reconstructed cavities in the left pillar ## 5 Conclusion In the paper at hand, we propose a multi-resolution FWI approach based on an IGA-FCM discretization of the wave field and a voxelized representation of the material. In Section 3, an introductory investigation of the forward problem shows that IGA-FCM, used with a consistent mass matrix, is well suited to solve the scalar wave equation with high accuracy per degree of freedom. Considering the inverse problem (Section 4), if the wave field is adequately resolved, the quality of the reconstruction mainly depends on the representation of the material field. By increasing the resolution of the independent material representation, a more precise identification of the defect boundaries is possible. However, this comes at the cost of rapidly increasing computation times, since the gradient has to be evaluated at each voxel. To mitigate this computational burden, we suggest a method to locally refine the material based on an indicator that accounts for the local changes of the material. The inversion is decomposed into a two-step optimization. At first, one optimization is performed on a coarse material grid. The resulting intermediate solution is then used to indicate regions of interest where the material field is refined. Finally, a second optimization is carried out on the locally refined material grid. The intermediate solution can serve as an initial model - or, alternatively, a restart is carried out starting with a homogeneous initial material. In particular, the restart strategy leads to an accurate reconstruction of the defect's location, size, and shape, despite coming at a slightly higher but still reasonable computational cost. Finally, the multi-resolution approach using local refinement is applied to a 3D specimen. Spherical and ellipsoidal cavities are identified and quantified accurately within a few iterations. Figure 13: FWI setup of the left roof part: The structure is colored in blue, the computational domain in gray, the circular cavities in red, and the search window in green. Top views are given in the top pictures, while front views are displayed in the bottom pictures. Figure 14: Reconstructed cavities in the roof
2309.08441
Novel Expressions for the Outage Probability and Diversity Gains in Fluid Antenna System
The flexibility and reconfigurability at the radio frequency (RF) front-end offered by the fluid antenna system (FAS) make this technology promising for providing remarkable diversity gains in networks with small and constrained devices. Toward this direction, this letter compares the outage probability (OP) performance of non-diversity and diversity FAS receivers undergoing spatially correlated Nakagami-$m$ fading channels. Although the system properties of FAS incur in complex analysis, we derive a simple yet accurate closed-form approximation by relying on a novel asymptotic matching method for the OP of a maximum-gain combining-FAS (MGC-FAS). The approximation is performed in two stages, the approximation of the cumulative density function (CDF) of each MGC-FAS branch, and then the approximation of the end-to-end CDF of the MGC-FAS scheme. With these results, closed-form expressions for the OP and the asymptotic OP are derived. Finally, numerical results validate our approximation of the MGC-FAS scheme and demonstrate its accuracy under different diversity FAS scenarios.
José~David~Vega-Sánchez, Arianna Estefanía López-Ramírez, Luis~Urquiza-Aguiar, Diana~Pamela~Moya~Osorio
2023-09-15T14:40:52Z
http://arxiv.org/abs/2309.08441v1
# Novel Expressions for the Outage Probability and Diversity Gains in Fluid Antenna System ###### Abstract The flexibility and reconfigurability at the radio frequency (RF) front-end offered by the fluid antenna system (FAS) make this technology promising for providing remarkable diversity gains in networks with small and constrained devices. Toward this direction, this letter compares the outage probability (OP) performance of non-diversity and diversity FAS receivers undergoing spatially correlated Nakagami-\(m\) fading channels. Although the system properties of FAS incur in complex analysis, we derive a simple yet accurate closed-form approximation by relying on a novel asymptotic matching method for the OP of a maximum-gain combining-FAS (MGC-FAS). The approximation is performed in two stages, the approximation of the cumulative density function (CDF) of each MGC-FAS branch, and then the approximation of the end-to-end CDF of the MGC-FAS scheme. With these results, closed-form expressions for the OP and the asymptotic OP are derived. Finally, numerical results validate our approximation of the MGC-FAS scheme and demonstrate its accuracy under different diversity FAS scenarios. Asymptotic matching, maximum-gain combining-FAS (MGC-FAS), nakagami-\(m\) fading, spatial correlation, outage probability. ## I Introduction In recent years, multipe-input multiple-output (MIMO) technology has been a fundamental part of the evolution of 5G and beyond to realize the impressive advancements in data rates and spectral efficiency. With MIMO, diversity gain is guaranteed as long as the antennas are spatially separated by at least half wavelength. However, this may be challenging in very small devices of some Internet of Things (IoT) applications. Recently, a technology that uses liquid metals (e.g., gallium-indium eutectic, mercury, Galinstan) to design a software-controllable fluidic structure that, in its most basic implementation with only one radio frequency (RF) chain, allows a fluid radiator to switch among different positions in a small linear space, which has been referred to as a fluid antenna system (FAS). In this way, FAS can outperform traditional MIMO regarding gains in diversity and multiplexing, specially when there exist space limitations at the receiver side [1]. The performance of FAS has been recently investigated in a number of works. For instance, in [2], Wong et al. introduced the novel concept of a single-antenna FAS over correlated Rayleigh fading channels inspired by the advancement in mechanically flexible antennas. Afterward, in [3], Mukherjee et al. proposed a general framework for the evaluation of the second-order statistic (i.e., the average level crossing rate) of the FAS by considering time-varying fading channels. In [4], Wong et al. revealed how the ergodic capacity scales with the system parameters of the FAS. In [5], Tlebaldiyeva et al. derived a single-integral form of the outage probability (OP) of a single-antenna FAS over spatially correlated Nakagami-\(m\) fading channels. A novel concept of fluid antenna multiple access (FAMA) was proposed in [6], which takes advantage of the deep fades suffered by the interference to attain a good channel condition without demanding complex signal processing. In [7], Skouroumounis et al. presented an analytical framework based on stochastic geometry for evaluation the performance of large-scale FAS-aided cellular networks in terms of the OP. In [8], New et al. 
investigated the limit of FAS performance via closed-form expressions of the OP and the diversity gain. In [9], Tlebaldiyeva et al. recently compared non-diversity and diversity FAS receivers undergoing \(\alpha\)-\(\mu\) fading channels. Specifically, the diversity FAS scheme considers enabling multiple ports of a fluid antenna and performing a combining technique with multi-port signals to further enhance FAS performance further. Therein, a maximum-gain combining-FAS (MGC-FAS) diversity scheme was investigated via Monte Carlo simulations due to the intricacy of the mathematical treatment for the underlying MGC-FAS. In this sense, the OP and diversity gain for the MGC-FAS are not known in closed-form expressions in the state-of-the-art. Motivated by the potential of the diversity FAS schemes to further enhance the capacity of future networks, with a great potential for IoT scenarios, we approximate the OP and asymptotic OP for the MGC-FAS scheme in a closed-form fashion, which is useful for further evaluations of this scheme. For this purpose, we first approximate the cumulative density function (CDF) of each MGC-FAS branch, and then, the CDF of the MGC-FAS over correlated Nakagami-\(m\) fading is derived. In both stages, the fitting parameters are estimated by employing the asymptotic matching method, proposed in [10] that render a simple yet accurate approximation. To the best of the author's current knowledge, no prior work has provided a closed-form expression for the OP of the MGC
2305.20018
Scalable Learning of Latent Language Structure With Logical Offline Cycle Consistency
We introduce Logical Offline Cycle Consistency Optimization (LOCCO), a scalable, semi-supervised method for training a neural semantic parser. Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text that are then used as new supervision. To increase the quality of annotations, our method utilizes a count-based prior over valid formal meaning representations and a cycle-consistency score produced by a neural text generation model as additional signals. Both the prior and semantic parser are updated in an alternate fashion from full passes over the training data, which can be seen as approximating the marginalization of latent structures through stochastic variational inference. The use of a count-based prior, frozen text generation model, and offline annotation process yields an approach with negligible complexity and latency increases as compared to conventional self-learning. As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model. We demonstrate the utility of LOCCO on the well-known WebNLG benchmark where we obtain an improvement of 2 points against a self-learning parser under equivalent conditions, an improvement of 1.3 points against the previous state-of-the-art parser, and competitive text generation performance in terms of BLEU score.
Maxwell Crouse, Ramon Astudillo, Tahira Naseem, Subhajit Chaudhury, Pavan Kapanipathi, Salim Roukos, Alexander Gray
2023-05-31T16:47:20Z
http://arxiv.org/abs/2305.20018v1
# Scalable Learning of Latent Language Structure ###### Abstract We introduce Logical Offline Cycle Consistency Optimization (LOCCO), a scalable, semi-supervised method for training a neural semantic parser. Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text that are then used as new supervision. To increase the quality of annotations, our method utilizes a count-based prior over valid formal meaning representations and a cycle-consistency score produced by a neural text generation model as additional signals. Both the prior and semantic parser are updated in an alternate fashion from full passes over the training data, which can be seen as approximating the marginalization of latent structures through stochastic variational inference. The use of a count-based prior, frozen text generation model, and offline annotation process yields an approach with negligible complexity and latency increases as compared to conventional self-learning. As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model. We demonstrate the utility of LOCCO on the well-known WebNLG benchmark where we obtain an improvement of \(2\) points against a self-learning parser under equivalent conditions, an improvement of \(1.3\) points against the previous state-of-the-art parser, and competitive text generation performance in terms of BLEU score. ## 1 Introduction Large language models (LLMs) have brought dramatic gains to semantic parsing-related tasks, allowing for more performant systems that require significantly less effort to adapt from one domain to the next. However, while their impact has been undeniable, they still face numerous challenges. First, LLMs are originally trained for text-only, sequence-to-sequence problems. In contrast, semantic parsing is inherently a text-to-structure problem, wherein the objective is to take in text as input and produce a logical form that is most commonly a tree or graph (see Figure 1 for an example). Beyond the need to account for explicit structure, LLMs must also overcome a paucity of training examples, which generally require costly expert-level knowledge to collect in this space. To better generalize to formal, structured representations and alleviate data-scarcity concerns, many high performing text-to-structure and structure-to-text models employ a form of bootstrapping. That is, they fine-tune an initial model using whatever supervised data is available and then subsequently use that model to annotate a large amount of unlabeled text to serve as additional training data [27; 45; 30; 6; 49; 39; 39; 29; 4]. This form of data augmentation is commonly referred to as _self-learning_, with the parsed data being referred to as pseudo-labels or _silver data_. Unfortunately, using fine-tuned models to generate data is not always straightforward, since, without specific modifications (e.g., [49; 12]) most pretrained neural models do not offer any well-formedness guarantees. While some approaches that are applied to simpler datasets can sidestep this issue by deriving synthetic examples from grammars induced from the supervised data [23; 3], such a strategy is untenable in more realistic open-ended domains. 
In addition to well-formedness concerns, self-learning models also introduce noise in the labels and are known to saturate in performance relatively quickly (only one round of self-learning labeling and training is used in state-of-the-art systems). More elaborate approaches leveraging latent variable models [47] are more robust to such noise and can improve silver data quality over multiple update rounds; however, they require marginalizing over latent discrete structures, which adds significant complexity and computational overhead. In this work, we introduce Logical Offline Cycle Consistency Optimization (LOCCO), a novel semi-supervised method for training a semantic parser that is designed to address the aforementioned issues. Our method predicts parses for a corpus of text; however, rather than treating the predictions as gold data, each prediction is weighted as a function of two scores: 1) an LLM-produced cycle-consistency score that provides a strong signal as to how faithful a predicted sample is to its original text and 2) a count-based prior probability that gives higher scores to parses that are syntactically valid and share common substructure with other sampled parses across the corpus. The result is a model that is incentivized to produce less-noisy parses that are both coherent with respect to the input text and structurally regular. LOCCO has a principled theoretical foundation as stochastic variational inference [20] and can also be related to offline reinforcement learning. Importantly, our method is straightforward to implement, trivial to parallelize, and comes with very little added computational cost to standard silver data training. In addition to producing a strong semantic parser, the output annotations produced by LOCCO can also be used to train a structure-to-text model. **Contributions:** (a) We introduce LOCCO, a semi-supervised method for training a neural semantic parser. (b) We demonstrate how the weakly-supervised output of LOCCO can be repurposed to train a strong text generation model. (c) We demonstrate the effectiveness of LOCCO on the well-known WebNLG 2020 [8] benchmark, where we improve semantic parsing by \(1.3\) points over the previous state-of-the-art parser while also achieving competitive text generation performance. (d) We compare LOCCO to similar semi-supervised models on the standard ATIS semantic parsing benchmark and demonstrate competitive performance without the need for expensive online sampling. (e) We perform an ablation analysis to determine how each component of LOCCO contributes to overall performance. ## 2 Related Work ### Cycle Consistency and Latent Variable Optimization End-to-end differentiable Cycle-Consistency (CC) losses concern two probabilistic models relating two domains \(p(x\mid z)\) and \(p(z\mid x)\), e.g., text / image or text / text. The parameters of both distributions are learned end-to-end via gradient descent to maximize \[\mathbb{E}_{p(z\mid x)}[p(x\mid z)]=\int_{z\in D_{z}}p(x\mid z)p(z\mid x)dz \quad\text{ or }\quad\mathbb{E}_{p(z\mid x)}[p(x\mid z)]=\sum_{z\in D_{z}}p(x\mid z)p(z\mid x)\] Figure 1: Text-to-RDF example from the WebNLG dataset for the sentence, ”Aarhus Airport is in Tristrup, Denmark which is part of the Central Denmark Region.” for continuous or discrete bottleneck variables, respectively. Approaches either optimize for one bottleneck, i.e., \(z\), or both \(x\) and \(z\) simultaneously. 
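For the discrete bottleneck case, the objective above reduces to a weighted sum over candidate structures. The following toy snippet only illustrates that computation; the probabilities for the three candidate parses of a single sentence are invented for illustration and do not correspond to any dataset.

```python
import numpy as np

# Toy discrete bottleneck: three candidate latent structures z for one fixed sentence x.
p_z_given_x = np.array([0.7, 0.2, 0.1])     # parser distribution p(z | x)
p_x_given_z = np.array([0.30, 0.05, 0.01])  # generator likelihood p(x | z) for each candidate

# Discrete cycle-consistency objective: E_{p(z|x)}[p(x|z)] = sum_z p(x|z) p(z|x).
cc_objective = float(np.dot(p_x_given_z, p_z_given_x))
print(cc_objective)                          # 0.221
```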
CC losses are often used in a semi-supervised fashion by combining datasets where only \(z\) or \(x\) are available with datasets where they are both available. CC has been shown to be successful in many areas of application, including image transformation [50], machine translation [18; 9], speech-to-text, and text-to-speech [21; 42]. For all of these domains, the expectations over the output sets are intractable. For continuous domains such as image or speech, it is possible to backpropagate directly through either reparametrization or by collapsing the distribution over the mean1[50]. For discrete domains, such as text or formal languages, this is not possible and approximations are needed like strong independence assumptions, straight-through approximations [5; 22] as in [42], the score-function estimator (i.e., REINFORCE [44]) used in [18; 21], or collapsing the distribution to \(K\)-best [9]. Footnote 1: Although not explicitly stated, the output of the composed networks can be interpreted as the mean of a constant variance Laplace distribution, reducing to \(||x-\mathbb{E}_{p(x|\mathbb{E}_{t}|x)}[x]||_{1}\) CC losses are related as well to semi-supervised end-to-end learning with latent variables when those variables correspond to interpretable domains, e.g., latent summarization models [34], trees [11; 47] and sequence labeling [48]. Most approaches leverage amortized variational inference in the form of Variational Autoencoders [26] and some modified Expectation Maximization [48]. They are restricted to particular structures (e.g. trees) and some require strong independence assumptions [48]. Here, we propose an offline version of variational inference without structure restrictions, that can learn a prior over the latent even when gradient learning is not possible (e.g., rule learning). We also integrate and outperform LLM approaches, which have generally displaced latent variable models. ### Semantic Parsing and Text Generation This work focuses on translating between natural language and formal language domains (see [17] for a recent survey), e.g., parsing between text and a knowledge graph expressing the semantics of the sentence (as in Figure 1). In the area of parsing, there is a large corpus of literature in parse induction [17] which often involves marginalization over latent structures. Although related to the presented work, these works have two fundamental differences. They are focused on the unsupervised case [10] with few works considering semi-supervised learning. They often require strong independence assumptions, e.g., context-free grammars. Beyond parsing, there are a large number of works focused on joint learning of semantic parsing and text generation [15; 1; 13; 16]. Similar to our work is CycleGT [16], which learns using a CC loss based on iterative back-translation [19]. Also relevant to our work is that of [13], which jointly learns both transformations without a CC loss, instead applying REINFORCE to approximate non-differentiable losses such as BLEU and METEOR. ## 3 Our Technique ### Desiderata The objective of this work is to provide an algorithm for parsing between text, \(x\), and formal structured representations, \(z\) (i.e., text-to-structure and structure-to-text). The method should be able to harness recent developments in neural network pretraining, as well as available inductive biases and learning algorithms in the formal domain. 
In short, we aim to * be able to leverage strong pretrained transformer models (e.g., BART [31] or T5 [38]) to learn functions mapping \(x\to z\) and \(z\to x\) * be able to scale training to large data sizes, which implies overcoming the lack of paired \((x,z)\) data samples * be able to incorporate arbitrary constraints into the formal domain \(D_{z}\), which may not be amenable to gradient-based learning, and further update these during training For this we propose a simple semi-supervised algorithm, inspired by Stochastic Variational Inference [20], that fulfills the desiderata above. The algorithm reduces to conventional cycle-consistency or self-learning under some simplifications but outperforms both algorithms under the same experimental conditions. ### Logical Offline Cycle Consistency Optimization To begin, we assume access to some supervised data consisting of pairs of plain text \(x\) and formal, structured representations \(z\), i.e., \((x,z)\in\mathcal{D}^{S}\). In addition, we also assume access to much larger quantities of only text, i.e., \(x\in\mathcal{D}^{U}\). We start from a probability distribution over sentences that arises from marginalizing over the space of all latent structures \(D_{z}\), e.g., all knowledge-graphs. \[p(x;\theta)=\sum_{z\in D_{z}}p(x,z;\theta) \tag{1}\] Following the usual variational formulation [43], one can express this marginalization in terms of the Evidence Lower Bound (ELBO) and reformulate it in a way that resembles a cycle consistency loss \[\log p(x;\theta) \geq\overline{\log p(x;\theta)-\mathrm{KL}(q(z\mid x;\phi)\mid \mid p(z\mid x;\theta))} \tag{2}\] \[=\mathbb{E}_{z\sim q(z\mid x;\phi)}[\log p(x\mid z;\theta)]- \mathrm{KL}(q(z\mid x;\phi)\mid\mid p(z;\theta))\] (3) \[=\mathbb{E}_{\underbrace{z\sim q(z\mid x;\phi)}_{\text{text-to- structure structure-to-text-text}}}\underbrace{\log p(x\mid z;\theta)}_{\text{text-to-text}}+\underbrace{\log p(z; \theta)}_{\text{reasoner}}\quad+\underbrace{\mathrm{H}(q_{\phi})}_{\text{ encoding entropy}} \tag{4}\] where \(\mathrm{KL}()\) is the Kullback-Leibler divergence and \(\mathrm{H}()\) the entropy. Variational methods alternate between maximizing the ELBO with respect to \(\phi\), bringing it closer to the marginal log-likelihood for current \(\theta^{i}\), and maximizing it with respect to \(\theta^{i}\). From Eq. 2 one can see that setting \(q_{\phi}\) equal to the posterior \(p(z\mid x;\theta)\) will make the bound tight yielding Expectation Maximization [35]. In this context \(q_{\phi}\) is an auxiliary distribution that is recomputed for each update of \(\theta\). With neural networks the alternate optimization of \(\phi\) and \(\theta\) with gradient ascent becomes costly. Stochastic Variational Inference (SVI) [20] alleviates this with updates based on a subset of the data, but requires a large number of optimization steps and presents optimization problems [25]. Amortized variational inference, best exemplified by Variational Autoencoders (VAEs) [26], solves this problem by reusing \(q_{\phi}\) across all steps of optimization of \(\theta\) and simultaneously updating \(\theta\) and \(\phi\) via gradient ascent of Eq. 3. VAEs set a parameter-less prior \(p(z)\) and do not update it during training. The approach proposed here takes the formulation in Eq. 
4 and the following design choices * \(q(z\mid x;\phi)\) is parametrized by a large language model with pretrained parameters \(\Omega\) that maps natural language to formal descriptions, i.e., a semantic parser * \(p(x\mid z;\rho)\) is parametrized with a separate copy of \(\Omega\). It acts as a conditional language model and is frozen after initialization to prevent adaptation to faulty structures (note here that \(\theta\) has been replaced with \(\rho\) to reflect separate parameters) * \(p(z;\theta)\) is a count-based model factorizing the space of possible substructures (e.g., into edges). It incorporates prior knowledge about the formal language, such as valid statements * as an initial step, all models \(q(z\mid x;\phi)\), \(p(x\mid z;\rho)\) and \(p(z;\theta)\) are fine-tuned or trained with the labeled dataset \(\mathcal{D}^{S}\) of \((x,z)\) pairs * as in SVI we then alternate optimizing \(\phi\) and \(\theta\), but on _full passes_ over the unlabeled \(\mathcal{D}^{U}\). We also use a counts estimator for \(\theta\), not gradient, and add \(\mathcal{D}^{S}\) for regularization As detailed2 in Algorithm 1, the approach thus combines alternate updates of parameters of SVI, but with full passes over the entire \(\mathcal{D}^{U}\cup\mathcal{D}^{S}\) with a count-based update. This has both negligible overhead and low variance due to the large amount of samples. Text to structure is a many to one mapping, which makes a count-based model also a good choice i.e. there are fewer labels than for the text counterpart. With a uniform \(p(z;\theta)\), LOCCO reduces to cycle-consistency, albeit with offline updates and frozen conditional language model. With a uniform \(p(x\mid z;\rho)\) it reduces to conventional self-learning. Footnote 2: For ease of explanation, gradient updates shown are just Stochastic Gradient Descent The gradient update of \(q(z\mid x;\phi^{i})\) includes an expectation over a set \(z\in D_{z}\) that is exponentially large as a function of the input (e.g., graphs) and requires back-propagating through \(p(x\mid z;\rho)\) and \(p(z;\theta)\). We overcome this with the score function estimator [44] which yields following Monte Carlo approximation for the gradient3 Footnote 3: The entropy term \(\mathrm{H}(q_{\phi})\) was empirically observed to have no effect and was removed \[\nabla_{\phi^{i}}\mathbb{E}_{z\sim q(z\mid x;\phi^{i})}[V(z,x)] =\mathbb{E}_{q(z\mid x;\phi^{i})}[\,V(z,x)\nabla_{\phi^{i}}\log q(z \mid x;\phi^{i})\,]\] \[\approx\frac{1}{N}\sum_{n=1}^{N}V(z_{n},x)\nabla_{\phi^{i}}\log q (z_{n}\mid x;\phi^{i}),\quad z_{n}\sim q(z\mid x;\phi^{i-1})\] where we make the additional _offline_ assumption of \(\phi^{i}\approx\phi^{i-1}\) for the purpose of sampling, and \[V(z,x)=\log p(x\mid z;\rho)+\log p(z;\theta^{i-1})\] This amounts to updating \(\phi^{i}\) with the samples from the previous iteration model \(q(z\mid x;\phi^{i-1})\) as if they were gold but weighted by \(V(z,x)\) to reflect their possible imperfection. This offline update allows for trivial parallelization of sampling and very delayed communication between the sampler and optimizer, which permits the use of normal disk storage for \(V(z,x)\) values (diplayed in Figure 2). The large variance of \(V(z,x)\) as an estimate is problematic [41], and thus in our implementation we make the following two adjustments from the reinforcement learning literature. 
First, we normalize the reward as \[A(z,x)=\frac{V(z,x)-\mu}{\sigma}\] where \(\mu\) and \(\sigma\) are the mean and standard deviation of the reward across all \(N\) samples drawn from \(q(z|x;\phi^{i-1})\). Second, following [40] we substitute \(V(z,x)\) by a clipped surrogate objective \[r_{z_{n}}=\frac{q(z_{n}|x;\phi^{i})}{q(z_{n}|x;\phi^{i-1})}\] \[R(z,x)=\min(r_{z_{n}}A(z,x),\;\mathrm{clip}(r_{z_{n}},1-\epsilon, 1+\epsilon)\;A(z,x))\] where \(\epsilon\) is a small constant (\(\epsilon=0.2\) in our experiments). This clipped objective limits the change to \(q(z|x;\phi)\) at each training iteration, thus helping to avoid catastrophic forgetting. The optimization of \(\theta\) is carried out with a simple count-based maximum likelihood estimator with smoothing factor \(\tau\) and a strong factorization into parts, e.g., subexpressions \[p(z;\theta)=\prod_{s\in\mathrm{parts}(z)}p(s;\theta)\quad\text{with}\quad p( s;\theta)=\theta_{s}=\frac{\Theta_{s}}{\sum_{s^{\prime}\in\mathcal{D}_{S}} \Theta_{s^{\prime}}}\] \(s\in\mathrm{parts}(z)\) are all subtrees of the input logical form, e.g., when the target forms are sets of triples (as in WebNLG) a subtree corresponds to an individual triple. \(\Theta_{s}\) contains a count of the number of times part \(s\) was observed in the entire corpus and is initialized with \(\tau\). \(\mathcal{D}_{S}\) is the set of all data types. ## 4 Experiments We performed an extensive series of evaluations utilizing two datasets, the english version of the WebNLG2020+ dataset [8] and the ATIS dataset as processed as in [14]. Our primary goals were to determine if LOCCO produces an effective semantic parser and to assess the contribution of each component of LOCCO to semantic parsing performance. In addition, we were also interested to learn if the outputs of LOCCO could be used to train a reasonable text generation system. For WebNLG we include a comparison with recent systems in both parsing and generation, including the state-of-the art. We also include a self-learning baseline, component ablation, and investigation into the effect of iterative training. For ATIS we assess the effect of training data size on performance. ### Datasets WebNLG is a dataset where each example is a pairing of text with a set of RDF triples. The dataset contains 13,211 training pairs, 1,667 validation pairs, 2,155 pairs for testing semantic parsing, and Figure 2: Parallelization details for LOCCO semi-supervised training 1,779 pairs for testing text generation. Its use in this work was motivated by it being a well-known, open-domain benchmark with several systems to compare against that tests both semantic parsing and text generation. For our WebNLG experiments, silver data consisted of 50,000 sentences randomly selected from the TekGen corpus [1]. TekGen is a large-scale dataset intended to provide a more realistic testbed for knowledge extraction. It is comprised of text instances from Wikipedia that have been annotated with a state-of-the-art RDF triple semantic parser. As our system is intended to operate with unlabeled data, we used _only_ the text from examples extracted from the corpus. ATIS is a semantic parsing dataset where each example is a pairing of text with a \(\lambda\)-calculus logical form. The dataset consists of 4,434 training pairs, 490 validation pairs, and 447 test pairs. We reproduce the StructVAE experimental setup in [47] where the training set is split into two disjoint subsets of varying sizes. 
One of the subsets is treated as the gold dataset (i.e., keeping both the text and logical form) and the other is considered the silver dataset (i.e., keeping only the text). This both tests LOCCO's performance for different data sizes and shows how the approach generalizes to more complex meaning representations than straightforward RDF-triples. We also provide StructVAE results for completeness4. Footnote 4: It is important to note that StructVAE preceded the use of LLMs and is thus at a clear disadvantage We performed minimal processing of both datasets. The parentheses of each logical form were replaced with <SE> and </SE> tags to demarcate expression boundaries, and each text-to-structure and structure-to-text example was prompted with either "Text to Graph:" or "Graph to Text:", respectively. For WebNLG, we applied the following transformations to each example: 1) The subject, relation, and object were marked with <S>, <R>, and <O> tags, respectively and 2) the camel-cased text of each triple element was split into individual words based on capitalization. For WebNLG, we used the provided evaluation scripts to assess performance. For semantic parsing, there were four types of scored matches; however, for space, we display only the Exact Match metric in our results section (we provide the full table of results in the Appendix). For text generation, we provide results for BLEU, METEOR, and chrF++, with BLEU being our primary metric. With ATIS, we report exact-match accuracy, i.e., whether or not the generated form exactly matched the target. ### Training Details For all experiments, we used pretrained BART-large [31] as our model. The semantic parser was taken to be the model produced at the last iteration of semi-supervised training. For each iteration, we evaluated the model on validation data after every 2500 update steps and kept only the top performing model. We list all hyperparameters in the Appendix. For our text generation experiments, we aimed to keep the training setup as simple as possible. We first used the final model from text-to-structure training to generate a new set of data (following the same setup as each of the prior iterations). Then, we flipped the generated annotations, converting each pair \((x,z)\) into \((z,x)\). Following the conversion, we trained a BART-large model from scratch on the sampled annotations in the same way as was done for the semantic parsing experiments. ## 5 Results ### WebNLG Our main results can be found in Tables 0(a) and 0(b), which show the performance of our model for both semantic parsing and text generation as compared to other approaches. As can be seen in Table 0(a), LOCCO achieves state-of-the-art performance on the semantic parsing task, with a notable improvement (0.13 F1) over the next best model ReGen [13]. Importantly, our model achieves these results without any special modifications to the underlying large language model (e.g., constrained output, triple reordering, etc.) that are common to the other approaches on this dataset [15; 2]. In Table 0(b), we see that our approach yields a reasonably performant text generation system. It outperforms all other approaches (many of which were specifically designed for RDF-to-text) but ReGen. This is significant, as the text generation system we use is functionally a byproduct of our process for producing a semantic parser. It has no tailored architectural features and is simply trained using the data produced by our semantic parser. 
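For concreteness, the linearization, prompting, and pair-flipping steps described in Sections 4.1 and 4.2 can be sketched as below. The tag placement and camel-case splitting are illustrative assumptions rather than the exact preprocessing code behind the reported numbers.

```python
import re
from typing import List, Tuple

def linearize_triples(triples: List[Tuple[str, str, str]]) -> str:
    """Tag each (subject, relation, object) and split camel-cased elements into words."""
    def split_camel(s: str) -> str:
        return re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", s)
    return " ".join(f"<S> {split_camel(s)} <R> {split_camel(r)} <O> {split_camel(o)}"
                    for s, r, o in triples)

def text_to_graph_example(text: str, graph: str) -> Tuple[str, str]:
    return ("Text to Graph: " + text, graph)

def graph_to_text_example(text: str, graph: str) -> Tuple[str, str]:
    # Flip an annotated (text, graph) pair into a structure-to-text training pair.
    return ("Graph to Text: " + graph, text)

if __name__ == "__main__":
    graph = linearize_triples([("AarhusAirport", "cityServed", "Aarhus")])  # illustrative triple
    text = "Aarhus Airport serves the city of Aarhus."
    print(text_to_graph_example(text, graph))
    print(graph_to_text_example(text, graph))
```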
#### 5.1.1 Ablation Experiments In addition to our main results, we also perform extensive ablation experiments to determine the contributions of each element of our training objective. In Table 2 we show various ablations of the reward function of our model, as well as self-learning (SL), where the annotated silver parses are drawn from either greedy or sampling-based decoding, and gold-only training, where no silver data is used. From the table, it can be seen that using silver data in any capacity leads to improved performance over gold-only training. This is a promising result, as it suggests that our approach could be used to improve the other state-of-the-art models that did not train with external data, e.g., [13]. The results indicate that greedy and count-based rewards produce roughly the same performance. This is somewhat unsurprising, as the count-based model should reward higher-probability triples that are sampled frequently (i.e., those that would be produced by greedy decoding). The most important result is that the combination of cycle-consistency and the count-based logic model produces the best performance, better than either score individually. #### 5.1.2 Performance Across Epochs Though the main results were based on the model trained at the final iteration, we were also interested in the performance of each model in the intermediate iterations of training. The across-iteration performance is shown in Figure 3, where we see that the unablated version of LOCCO demonstrates consistently higher performance than the other versions. In addition, consistent with our remarks in Section 1, we see that sampling-based self-learning produces strong results at first but then degenerates over time. Another interesting result is that greedy self-learning is largely equivalent in performance to LOCCO when only the prior \(p(z)\) is used for the reward. Again, we suspect that this is due to the nature of our count-based model, which upweights logical forms with frequent triples, i.e., those considered more likely by our neural model, and thus more likely to also be a part of the greedy decoding. \begin{table} \end{table} Table 1: WebNLG test set results for semantic parsing (F1 Strict) and text generation (BLEU, METEOR, chrF++). Dashed line includes existing results matching or outperforming LOCCO \begin{table} \end{table} Table 2: WebNLG ablation results for semantic parsing (in terms of Exact Match) and text generation
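To further illustrate how the two reward components studied in the ablation interact, the following sketch combines a cycle-consistency log-likelihood with a count-based prior over triples and applies the reward normalization of Section 3.2. The class below is a simplified, hypothetical implementation (in particular, the smoothing scheme is one convenient choice), not necessarily the exact estimator behind the reported numbers.

```python
import math
from collections import Counter
from typing import Iterable, List

class CountPrior:
    """Count-based prior over parse substructures (here: individual triple strings)."""
    def __init__(self, tau: float = 1.0):
        self.tau = tau            # smoothing factor, as in Section 3.2
        self.counts = Counter()   # corpus-level substructure counts

    def update(self, parses: Iterable[List[str]]) -> None:
        for parts in parses:
            self.counts.update(parts)

    def log_prob(self, parts: List[str]) -> float:
        # Add-tau smoothed relative frequencies, factorized over the parts of the parse.
        total = sum(self.counts.values()) + self.tau * max(len(self.counts), 1)
        return sum(math.log((self.counts[p] + self.tau) / total) for p in parts)

def normalized_rewards(cc_scores: List[float], prior_scores: List[float]) -> List[float]:
    """A(z, x) = (V - mean) / std for V = cycle-consistency score + prior score."""
    v = [c + p for c, p in zip(cc_scores, prior_scores)]
    mu = sum(v) / len(v)
    sigma = (sum((x - mu) ** 2 for x in v) / len(v)) ** 0.5 or 1.0
    return [(x - mu) / sigma for x in v]
```

Samples whose parses are both faithful to the input text and built from frequently observed triples receive the largest normalized weights.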
Key to note is that our results are achieved with offline sampling and scoring, while theirs requires sampling during training. Lastly, we emphasize that our objective with this experiment was not to compare raw performance, but was instead to determine if our approach yielded similar gains as compared to the supervised and self-learning settings. While our model demonstrated an overall improvement as compared to theirs, this is likely attributable to our much stronger pretrained model (they use an LSTM with GLOVE embeddings [37]) that provided a better baseline performance. ## 6 Conclusions In this paper, we introduced Logical Offline Cycle Consistency Optimization (LOCCO), a novel semi-supervised method for training a neural semantic parser. Our method was inspired by Stochastic Variational Inference, and designed from the ground up to be scalable, take advantage of powerful pretrained LLMs, and be able to incorporate inductive biases relevant to the formal domain. We demonstrated the effectiveness of our model on two standard benchmark datasets, where it achieved strong performance for both semantic parsing and text generation. Figure 3: Semantic parsing performance across training iterations as measured by Exact Match F1 \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \(|\mathcal{D}^{S}|\) & \multicolumn{2}{c}{LOCCO} & \multicolumn{3}{c}{StructVAE [47]} & SOTA [7] \\ \cline{2-7} & Gold-Only & Self-Learning & \(R(z,x)\) & Gold-Only & Self-Learning & \(R(z,x)\) & \\ \hline 500 & 71.9 & 76.8 & 75.9 & 63.2 & 65.3 & 66.0 & – \\ 1000 & 77.0 & 77.9 & 81.0 & 74.6 & 74.2 & 75.7 & – \\ 2000 & 86.1 & 86.4 & 87.1 & 80.4 & 83.3 & 82.4 & – \\ 3000 & 85.9 & 87.3 & 87.7 & 82.8 & 83.6 & 83.6 & – \\ 4434 & 86.3 & – & – & 85.3 & – & – & 89.1 \\ \hline \hline \end{tabular} \end{table} Table 3: Semantic parsing on ATIS for various training set sizes. The last row reflects when all supervised data is used (i.e., there is no additional data for semi-supervised training)
2309.11110
Variational Structures for Infinite Transition Orbits of Monotone Twist Maps
In this paper, we consider chaotic dynamics and variational structures of area-preserving maps. There is a large body of work on the dynamics of these maps, and the results of Poincare and Birkhoff are well-known. To consider variational structures of area-preserving maps, we define a special class of area-preserving maps called monotone twist maps. The variational structures determined by twist maps can be used to construct characteristic trajectories of these maps. Our goal is to prove the existence of an infinite transition orbit, which represents an orbit oscillating between fixed points infinitely many times, through minimizing methods.
Yuika Kajihara
2023-09-20T07:40:35Z
http://arxiv.org/abs/2309.11110v2
# Variational Structures for Infinite Transition Orbits of Monotone Twist Maps ###### Abstract In this paper, we consider chaotic dynamics and variational structures of area-preserving maps. There is a large body of work on the dynamics of these maps, and the results of Poincare and Birkhoff are well-known. To consider variational structures of area-preserving maps, we define a special class of area-preserving maps called _monotone twist maps_. The variational structures determined by twist maps can be used to construct characteristic trajectories of these maps. Our goal is to prove the existence of an _infinite transition orbit_, which represents an orbit oscillating between two fixed points infinitely many times, through minimizing methods. ## 1 Introduction In this paper, we consider chaotic dynamics and variational structures of area-preserving maps. The dynamics of such maps have been widely studied, with key findings by Poincare and Birkhoff. There are many related works; see [3, 4, 7], for example. To explore these variational structures, we define a special class of area-preserving maps called _monotone twist maps_: **Definition 1.1** (monotone twist maps).: _Set a map \(f\colon\mathbb{R}/\mathbb{Z}\times[a,b]\to\mathbb{R}/\mathbb{Z}\times[a,b]\) and assume that \(f\in C^{1}\) and that a lift \(\tilde{f}\colon\mathbb{R}\times[a,b]\to\mathbb{R}\times[a,b]\) of \(f\), \((x,y)\mapsto(f_{1}(x,y),f_{2}(x,y))(=(X,Y))\), satisfies the following:_ 1. \(\tilde{f}\) _is area-preserving, i.e.,_ \(dx\wedge dy=dX\wedge dY\)_;_ 2. \(\partial X/\partial y>0\) _(twist condition), and_ 3. _Both straight lines_ \(y=a\) _and_ \(y=b\) _are invariant curves of_ \(f\)_, i.e.,_ \(f_{2}(x,a)=a\) _and_ \(f_{2}(x,b)=b\) _for all_ \(x\in\mathbb{R}/\mathbb{Z}\)_._ _Then \(f\) is said to be a monotone twist map._ By Poincare's lemma, we get a generating function \(h\) for a monotone twist map \(f\) and it satisfies \(dh=YdX-ydx.\) That is, \[y=-\partial_{1}h(x,X),\ Y=\partial_{2}h(x,X),\] where \(\partial_{1}=\partial/\partial x\) and \(\partial_{2}=\partial/\partial X\). For the above \(h\), by abuse of notation, we define \(h\colon\mathbb{R}^{n+1}\to\mathbb{R}\) by: \[h(x_{0},x_{1},\cdots,x_{n})=\sum_{i=0}^{n-1}h(x_{i},x_{i+1}). \tag{1}\] We can regard \(h\) as a variational structure associated with \(f\), because any critical point of (1), say \((x_{0},\cdots,x_{n})\), gives us an orbit of \(\tilde{f}\) by \(y_{i}=-\partial_{1}h(x_{i},x_{i+1})=\partial_{2}h(x_{i-1},x_{i})\). This relation implies that the orbit \(\{f^{i}(x_{i},y_{i})\}_{i\in\mathbb{Z}}\) corresponds to a _stationary configuration_ defined below. This is known as the Aubry-Mather theory, which is so called because Aubry studied critical points of the action \(h\) in [1] and Mather developed the idea (e.g., [6, 8]). We briefly summarize Bangert's investigation [2] of conditions on \(h\) that are suitable for the study of minimal sets. We consider the space of bi-infinite sequences of real numbers, and define convergence of a sequence \(x^{n}=(x_{i}^{n})_{i\in\mathbb{Z}}\in\mathbb{R}^{\mathbb{Z}}\) to \(x=(x_{i})_{i\in\mathbb{Z}}\in\mathbb{R}^{\mathbb{Z}}\) by: \[\lim_{n\to\infty}|x_{i}^{n}-x_{i}|=0\ (^{\forall}i\in\mathbb{Z}). \tag{2}\] Now we treat a function \(h\) satisfying the _variational principle_ defined below. All of the results from [2] that we introduce assume the variational principle. **Definition 1.2** (variational principle).: _Let \(h\) be a continuous map from \(\mathbb{R}^{2}\) to \(\mathbb{R}\). We call the function \(h\) a variational principle if it satisfies the following:_ 1.
_For all_ \((\xi,\eta)\in\mathbb{R}^{2}\)_,_ \(h(\xi,\eta)=h(\xi+1,\eta+1)\)_;_ 2. \(\lim\limits_{\eta\to\infty}h(\xi,\xi+\eta)=\infty\) _(uniformly in_ \(\xi\)_);_ 3. _If_ \(\underline{\xi}<\bar{\xi}\) _and_ \(\underline{\eta}<\bar{\eta}\)_, then_ \(h(\underline{\xi},\underline{\eta})+h(\bar{\xi},\bar{\eta})<h(\underline{\xi},\bar{\eta})+h(\bar{\xi},\underline{\eta})\)_; and_ 4. _If_ \((x,x_{0},x_{1})\) _and_ \((\xi,x_{0},\xi_{1})\) _are minimal and_ \((x,x_{0},x_{1})\neq(\xi,x_{0},\xi_{1})\)_, then_ \((x-\xi)(x_{1}-\xi_{1})<0\)_._ In this paper, we call an element \(x=(x_{i})_{i\in\mathbb{Z}}\in\mathbb{R}^{\mathbb{Z}}\) a configuration. There are two distinguished classes of configurations, referred to as _minimal configurations_ and _stationary configurations_. **Definition 1.3** (minimal configuration/stationary configuration).: _Fix \(n\) and \(m\) with \(n<m\) arbitrarily. A finite sequence \(x=(x_{i})_{i=n}^{m}\) is said to be minimal if, for any (finite) configuration \((y_{i})_{i=n_{0}}^{n_{1}}\in\mathbb{R}^{n_{1}-n_{0}+1}\) with \(y_{n_{0}}=x_{n_{0}}\) and \(y_{n_{1}}=x_{n_{1}}\),_ \[h(x_{n_{0}},x_{n_{0}+1},\cdots,x_{n_{1}-1},x_{n_{1}})\leq h(y_{n_{0}},y_{n_{0}+ 1},\cdots,y_{n_{1}-1},y_{n_{1}}),\] _where \(n\leq n_{0}<n_{1}\leq m\). A configuration \(x=(x_{i})_{i\in\mathbb{Z}}\) is called minimal if, for any \(n<m\), the finite sequence \((x_{i})_{i=n}^{m}\) is minimal. Moreover, if \(h\in C^{1}\), a configuration \(x\) is called locally minimal or a stationary configuration if it satisfies:_ \[\partial_{2}h(x_{i-1},x_{i})+\partial_{1}h(x_{i},x_{i+1})=0\ (^{\forall}i\in\mathbb{Z}). \tag{3}\] For \(x=(x_{i})_{i\in\mathbb{Z}}\in\mathbb{R}^{\mathbb{Z}}\), we define \(\alpha^{+}(x)\) and \(\alpha^{-}(x)\) by: \[\alpha^{+}(x)=\lim\limits_{i\to\infty}\frac{x_{i}}{i},\ \alpha^{-}(x)=\lim \limits_{i\to-\infty}\frac{x_{i}}{i}.\] We only discuss the case of \(\alpha^{+}(x)=\alpha^{-}(x)\) in this paper. **Definition 1.4** (rotation number).: _If both \(\alpha^{+}(x)\) and \(\alpha^{-}(x)\) exist and \(\alpha^{+}(x)=\alpha^{-}(x)(=:\alpha(x))\), then we call \(\alpha(x)\) the rotation number of \(x\)._ Let \(\mathcal{M}_{\alpha}\) be the set consisting of all minimal configurations with rotation number \(\alpha\). It is known that for any \(\alpha\in\mathbb{R}\), the set \(\mathcal{M}_{\alpha}\) is non-empty and compact (see [2] for the proof). For \(\alpha\in\mathbb{Q}\), we define periodicity as follows: **Definition 1.5** (periodic configurations).: _For \(q\in\mathbb{N}\) and \(p\in\mathbb{Z}\), a configuration \(x=(x_{i})_{i\in\mathbb{Z}}\) is said to be \((q,p)\)-periodic if \(x=(x_{i})_{i\in\mathbb{Z}}\in\mathbb{R}^{\mathbb{Z}}\) satisfies:_ \[x_{i+q}=x_{i}+p,\] _for any \(i\in\mathbb{Z}\)._ It is easily seen that if \(x\) is \((q,p)\)-periodic, then its rotation number is \(p/q\). This paper discusses only the case where \(\alpha\in\mathbb{Q}\). For \(\alpha=p/q\in\mathbb{Q}\), we set: \[\mathcal{M}_{\alpha}^{\rm per}:=\{x\in\mathcal{M}_{\alpha}\ |\ x\ \text{is $(q,p)$-periodic}\}.\] **Definition 1.6** (neighboring pair).: _For a set \(A\subset\mathbb{R}^{\mathbb{Z}}\) and \(a,b\in A\) with \(a<b\), we call \((a,b)\) a neighboring pair of \(A\) if there is no other \(x\in A\) with \(a<x<b\).
Here \(a<b\) means \(a_{i}<b_{i}\) for any \(i\in\mathbb{Z}\)._ Given a neighboring pair \((x^{0},x^{1})\) of \(\mathcal{M}_{\alpha}^{\rm per}\), define: \[\mathcal{M}_{\alpha}^{+}(x^{0},x^{1}) =\{x\in\mathcal{M}_{\alpha}\ |\ |x_{i}-x_{i}^{0}|\to 0\ (i\to-\infty)\ and\ |x_{i}-x_{i}^{1}|\to 0\ (i\to\infty)\}\ \text{and}\] \[\mathcal{M}_{\alpha}^{-}(x^{0},x^{1}) =\{x\in\mathcal{M}_{\alpha}\ |\ |x_{i}-x_{i}^{0}|\to 0\ (i\to\infty)\ and\ |x_{i}-x_{i}^{1}|\to 0\ (i\to-\infty)\}.\] Bangert showed the following proposition. **Proposition 1.7** ([2]).: _Given \(\alpha\in\mathbb{Q}\), \(\mathcal{M}_{\alpha}^{\mathrm{per}}\) is nonempty. Moreover, if \(\mathcal{M}_{\alpha}^{\mathrm{per}}\) has a neighboring pair, then \(\mathcal{M}_{\alpha}^{+}\) and \(\mathcal{M}_{\alpha}^{-}\) are nonempty._ Although we have discussed minimal configurations in the preceding paragraph, there are also interesting works that treat non-minimal orbits between periodic orbits, particularly, [10] and [11]. In [10], Rabinowitz used minimizing methods to prove the existence of three types of solutions--periodic, heteroclinic and homoclinic--in potential systems with reversibility for time, i.e. \(V(t,x)=V(-t,x)\). Under an assumption called a _gap_, which is similar to a neighboring pair, for a set of periodic and heteroclinic solutions, non-minimal heteroclinic and homoclinic orbits can be given between two periodic orbits. Since the heteroclinic/homoclinic orbit is a trajectory that transits between two equilibrium points, we refer to these orbits as \(k\)_-transition orbits_ in this paper. For example, a monotone heteroclinic orbit is a one-transition orbit. Non-minimal orbits are realized as _n-transition orbits_ for \(n\geq 2\) and they are heteroclinic when \(n\) is odd and homoclinic when \(n\) is even. (You can regard each configuration \(x\in\mathbb{R}^{2}\) as its graph \(\{(i,x_{i})\ |\ i\in\mathbb{Z}\}\), see the following figures.) **Remark 1.8**.: _There are two remarks about one-transition orbits: (a) Each element of the sets \(\mathcal{M}_{\alpha}^{\pm}\) implies monotone heteroclinic orbits and Mather's result [9] is the first to discuss this problem. (b) The existence of one-transition orbits ( i.e., monotone heteroclinic orbits) does not require gaps for heteroclinic orbits. This can also be illustrated by considering a simple pendulum system since a set of heteroclinic orbits is dense in its system._ Rabinowitz's approach can be applied to variational methods for area-preserving maps. Yu [11] added variational principle \(h\) to the following assumption \((h_{5})-(h_{6})\) to \(h\): * There exists a positive continuous function \(p\) on \(\mathbb{R}^{2}\) such that: \[h(\xi,\eta^{\prime})+h(\eta,\xi^{\prime})-h(\xi,\xi^{\prime})-h(\eta,\eta^{ \prime})>\int_{\xi}^{\eta}\int_{\xi^{\prime}}^{\eta^{\prime}}p\] if \(\xi<\eta\) and \(\xi^{\prime}<\eta^{\prime}\). * There is a \(\theta>0\) satisfying the following conditions: * \(\xi\mapsto\theta\xi^{2}/2-h(\xi,\xi^{\prime})\) is convex for any \(\xi^{\prime}\), and * \(\xi^{\prime}\mapsto\theta\xi^{\prime 2}/2-h(\xi,\xi^{\prime})\) is convex for any \(\xi\). In the rest of this paper, we assume \((h_{1})-(h_{6})\) for \(h\). **Remark 1.9**.: * _One of a sufficient conditions for_ \((h_{2})-(h_{5})\) _is_ \[(\tilde{h})\ h\in C^{2}\ \text{and}\ \partial_{1}\partial_{2}h\leq-\delta<0\ \text{for some}\ \delta>0.\] _Bangert_ _[_2_]_ _shows that assuming_ \((h_{2})-(h_{4})\) _implies_ \((\tilde{h})\)_. 
To verify the assumption_ \((h_{5})\)_, we can ensure it by choosing a positive function_ \(\rho=\delta\)_. If a monotone twist map_ \(f\) _is of class_ \(C^{1}\) _and satisfies_ \(\partial X/\partial y\geq\delta\) _for some_ \(\delta>0\)_, a generating function_ \(h\) _for_ \(f\) _satisfies_ \((\tilde{h})\)_. However,_ \((\tilde{h})\) _is not a necessary condition for satisfying_ \((h_{2})-(h_{5})\)_._ Figure 1: One and two transition orbits _._ 2. _Assuming_ \((h_{6})\) _allows us to derive Lipschitz continuity for_ \(h\) _in the following meaning: there is a Lipschitz constant_ \(C\) _satisfying:_ \[h(\xi,\eta_{1})-h(\xi,\eta_{2}) \leq C|\eta_{1}-\eta_{2}|,\text{and}\] \[h(\xi_{1},\eta)-h(\xi_{2},\eta) \leq C|\xi_{1}-\xi_{2}|\] 3. _If_ \(h\) _is of class_ \(C^{1}\)_, we do not require_ \((h_{6})\)_._ Clearly, \((h_{5})\) implies \((h_{3})\). Mather [8] proved that if \(h\) satisfies \((h_{1})\)-\((h_{6})\), then \(\partial_{2}h(x_{i-1},x_{i})\) and \(\partial_{1}h(x_{i},x_{i+1})\) exist in the meaning of the left-sided limit (even if \(h\) is not differentiable). In addition, he proved that if \(x\) is a locally minimal configuration, then it satisfies (3). Hence we can treat a stationary configuration for non differentiable functions. Yu applied Rabinowitz's methods to monotone twist maps to show finite transition orbits of monotone twist maps for all \(\alpha=p/q\in\mathbb{Q}\). We will give a summary of his idea in the case of \(\alpha=0\) (i.e. \((q,p)=(1,0)\) in Definition 1.5). Let \((u^{0},u^{1})\) be a neighboring pair of \(\mathcal{M}_{0}^{\mathrm{per}}\). By abuse of notation, we then denote \(u^{j}\) for \(j=0,1\) by the constant configuration \(u^{j}=(x_{i})_{i\in\mathbb{Z}}\) where \(x_{i}=u^{j}\) for all \(i\in\mathbb{Z}\). We set: \[c:=\min_{x\in\mathbb{R}}h(x,x)(=h(u^{0},u^{0})=h(u^{1},u^{1})). \tag{4}\] And: \[I(x):=\sum_{i\in\mathbb{Z}}a_{i}(x), \tag{5}\] where \(a_{i}(x)=h(x_{i},x_{i+1})-c\). Yu [11] studied local minimizers of \(I\) to show the existence of finite transition orbits, i.e., heteroclinic or homoclinic orbits. Given a rational number \(\alpha\in\mathbb{Q}\) and a neighboring pair \((x^{0},x^{1})\) of \(\mathcal{M}_{\alpha}^{\mathrm{per}}\), we let: \[I_{\alpha}^{+}(x^{0},x^{1}) =\{x_{0}\in\mathbb{R}\mid x=(x_{i})_{i\in\mathbb{Z}}\in\mathcal{ M}_{\alpha}^{+}(u^{0},u^{1})\},\text{ and}\] \[I_{\alpha}^{-}(x^{0},x^{1}) =\{x_{0}\in\mathbb{R}\mid x=(x_{i})_{i\in\mathbb{Z}}\in\mathcal{ M}_{\alpha}^{-}(u^{0},u^{1})\}.\] Under the above setting, he showed: **Theorem 1.10** (Theorem 1.7, [11]).: _Given a rational number \(\alpha\in\mathbb{Q}\) and a neighboring pair \((x^{0},x^{1})\) of \(\mathcal{M}_{\alpha}^{\mathrm{per}}\). If_ \[I_{\alpha}^{+}(x^{0},x^{1})\neq(x_{0}^{0},x_{0}^{1})\text{ and }I_{\alpha}^{-}(x^{0},x^{1})\neq(x_{0}^{0},x_{0}^{1}), \tag{6}\] _then, for every \(\delta>0\) small enough, there is an \(m=m(\delta)\) such that for every sequence of integers \(q=(q_{i})_{i\in\mathbb{Z}}\) with \(q_{i+1}-q_{i}\geq 4m\) and for every \(j,k\in\mathbb{Z}\) with \(j<k\), there is a configuration \(x=(x_{i})_{i\in\mathbb{Z}}\) for \(h\) satisfying:_ 1. \(x_{i}^{0}<x_{i}<x_{i}^{1}\) _for all_ \(i\in\mathbb{Z}\)_;_ 2. \(|x_{q_{i}-m}-x_{q_{i}-m}^{i}|\leq\delta\) _and_ \(|x_{q_{i}+m}-x_{q_{i}+m}^{i}|\leq\delta\) _for all_ \(i=j,\ldots,k\)_;_ 3. 
\(|x_{i}-x_{i}^{j}|\to 0\) _as_ \(i\to-\infty\) _and_ \(|x_{i}-x_{i}^{k}|\to 0\) _as_ \(i\to+\infty\)_._ _Here, for any \(j\in\mathbb{Z}\), \(x^{j}=x^{0}\), if \(j\) is even, and \(x^{j}=x^{1}\), if \(j\) is odd._ Furthermore, Rabinowitz [10] proved the existence of an infinite transition orbit as a limit of sequences of finite transition orbits. However, the variational structure of infinite transition orbits for potential systems is an open question in his paper. To consider the question for twist maps, the following proposition is crucial. **Proposition 1.11** (Proposition 2.2, [11]).: _If \(I(x)<\infty\), then \(|x_{i}-u^{1}|\to 0\) or \(|x_{i}-u^{0}|\to 0\) as \(|i|\to\infty\)._ Since this implies that \(I(x)=\infty\) if \(x\) has infinite transitions, we need to fix the normalization of \(I\). Therefore, we focus on giving the variational structure and boundary condition that characterize infinite transition orbits of monotone twist maps. As a result, the function \(J\) and set \(X_{k,\rho}\) defined in Section 3 represent a variational structure and a configuration space for infinite transition orbits. Through this variational problem, we showed: **Theorem 1.12** (Our main theorem).: _Assume the same condition of Theorem 1.10. Then, for every positive sequence \(\epsilon=(\epsilon_{i})_{i\in\mathbb{Z}}\), there is an \(m=(m_{i})_{i\in\mathbb{Z}}\) such that for every sequence of integers \(k=(k_{i})_{i\in\mathbb{Z}}\) with \(k_{i+1}-k_{i}\geq m_{i}\), there is a configuration \(x=(x_{i})_{i\in\mathbb{Z}}\) for \(h\) satisfying:_ 1. \(x_{i}^{0}<x_{i}<x_{i}^{1}\) _for all_ \(i\in\mathbb{Z}\)_;_ 2. _for any_ \(j\in\mathbb{Z}\)_,_ \(|x_{i}-x_{i}^{2j}|\leq\epsilon_{2j}\) _if_ \(i\in[k_{4j},k_{4j+1}]\) _and_ \(|x_{i}-x_{i}^{2j+1}|\leq\epsilon_{2j+1}\) _if_ \(i\in[k_{4j+2},k_{4j+3}]\)_._ This paper is organized as follows. Section 2 deals with Yu's results in [11] and related remarks. In Section 3, our main results are stated. We first give the proof of the case of \(\alpha=0\) and then see the generalized cases. Section 4 provides, as additional discussions, a special example and the estimate of the number of the obtained infinite transition orbits. ## 2 Preliminary In this section, we would like to introduce properties of (5) and minimal configurations using several useful results in [11]. Moreover, we study estimates of monotone heteroclinic orbits (1-transition orbits). ### Properties of minimal configurations Let \((u^{0},u^{1})\) be a neighboring pair of \(\mathcal{M}_{0}^{\rm per}\) and: \[X =X(u^{0},u^{1})=\{x=(x_{i})_{i\in\mathbb{Z}}\mid u^{0}\leq x_{i} \leq u^{1}\ (^{\prime}i\in\mathbb{Z})\}, \tag{7}\] \[X(n) =X(n;u^{0},u^{1})=\{x=(x_{i})_{i=0}^{n}\mid u^{0}\leq x_{i}\leq u ^{1}\ (^{\prime}i\in\{0,\cdots,n\})\},and\] \[\hat{X}(n) =\hat{X}(n;u^{0},u^{1})=\{x=(x_{i})_{i=0}^{n}\mid x_{0}=x_{n},\ u ^{0}\leq x_{i}\leq u^{1}\ (^{\prime}i\in\{0,\cdots,n\})\}.\] **Definition 2.1** ([11]).: _For \(x\in X\), we set:_ \[d(x):=\max_{0\leq i\leq n}\min_{j\in\{0,1\}}|x_{i}-u^{j}|.\] _For any \(\delta>0\), let:_ \[\phi(\delta):=\inf_{n\in\mathbb{Z}^{+}}\inf\left\{\sum_{i=0}^{n-1}a_{i}(x)\mid x \in\hat{X}(n)\text{ and }d(x)\geq\delta\right\}. \tag{8}\] **Lemma 2.2** (Lemma 2.7, [11]).: _The function \(\phi\) is continuous and satisfies \(\phi(\delta)>0\) if \(\delta>0\); \(\phi(\delta)=0\) if \(\delta=0\). It increases monotonically with respect to \(\delta\). 
Moreover, for any \(n\in\mathbb{N}\) and \(x\in\hat{X}(n)\) satisfying_ \[\min_{j=0,1}|x_{i}-u^{j}|\geq\delta,\ (i=1,\cdots,n-1),\] _then_ \[\sum_{i=0}^{n-1}a_{i}(x)\geq n\phi(\delta)\] _and for any \(n\in\mathbb{N}\) and \(x\in X(n)\),_ **Lemma 2.3** (Lemma 2.8, [11]).: _For any \(n\in\mathbb{N}\) and \(x\in X(n)\) satisfying \(d(x)\geq\delta\),_ \[\sum_{i=0}^{n-1}a_{i}(x)\geq\phi(\delta)-C|x_{n}-x_{0}|\geq-C|x_{n}-x_{0}|.\] Proof.: See [11]. This proof requires \((h_{3})\). **Lemma 2.4** (Lemma 2.10, [11]).: _If \(x\in X\) satisfies \(|x_{i}-u^{0}|\) (resp. \(|x_{i}-u^{1}|\)) as \(|i|\to\infty\) and \(x_{i}\neq u^{0}\) (resp. \(x_{i}\neq u^{1}\)) for some \(i\in\mathbb{Z}\), then \(I(x)>0\)._ In using a minimizing method to get a stationary configuration, we need to check that each component of the obtained minimizer is not equal to \(u^{0}\) or \(u^{1}\). This follows from the next lemmas. **Lemma 2.5** (Lemma 2.11, [11]).: _For any \(\delta\in(0,u^{1}-u^{0}]\), if \((x_{i})_{i=0}^{2}\) satisfies:_ 1. \(x_{i}\in[u^{0},u^{1}]\) _for all_ \(i=0,1,2\)_;_ 2. \(x_{1}\in[u^{1}-\delta,u^{1}]\)_, and_ \(x_{0}\neq u^{1}\) _or_ \(x_{2}\neq u^{1}\)_; and_ 3. \(h(x_{0},x_{1},x_{2})\leq h(x_{0},\xi,x_{2})\) _for all_ \(\xi\in[u^{1}-\delta,u^{1}]\)_,_ _then \(x_{1}\neq u^{1}\). This still holds if we replace every \(u^{1}\) by \(u^{0}\) and every \([u^{1}-\delta,u^{1}]\) by \([u^{0},u^{0}+\delta]\)._ **Lemma 2.6** (Lemma 2.12, [11]).: _For any \(n_{0}\) and \(n_{1}\in\mathbb{N}\) with \(n_{0}<n_{1}\), if a finite configuration \(x=(x_{i})_{i=n_{0}}^{n_{1}}\) satisfies:_ 1. \(x_{i}\in[u^{0},u^{1}]\) _for all_ \(i=n_{0},\cdots,n_{1}\) _and_ 2. _for any_ \((y_{i})_{i=n_{0}}^{n_{1}}\) _satisfying_ \(y_{n_{0}}=x_{n_{0}}\)_,_ \(y_{n_{1}}=x_{n_{1}}\)_, and_ \(y_{i}\in[u^{0},u^{1}]\)_,_ \[h(x_{n_{0}},x_{n_{0}+1},\cdots,x_{n_{1}-1},x_{n_{1}})\leq h(y_{n_{0}},y_{n_{0} +1},\cdots,y_{n_{1}-1},y_{n_{1}}),\] _then \(x\) is a minimal configuration. Moreover, if \(x\) also satisfies \(x_{n_{0}}\notin\{u^{0},u^{1}\}\) or \(x_{n_{1}}\notin\{u^{0},u^{1}\}\), then \(x_{i}\notin\{u^{0},u^{1}\}\) for all \(i=n_{0}+1,\cdots,n_{1}-1\)._ Proof of the two lemmas above.: See [11]. These proofs require \((h_{4})\) and \((h_{5})\). Moreover, we can replace \(\alpha=0\) with arbitrarily other rational numbers as seen below. **Definition 2.7** (Definition 5.1, [11]).: _For \(\alpha=p/q\in\mathbb{Q}\backslash\{0\}\), we set:_ \[X_{\alpha}(x^{-},x^{+}):=\{x=(x_{i})_{i\in\mathbb{Z}}\mid x_{i}^{-}\leq x_{i} \leq x_{i}^{+}(i\in\mathbb{Z})\}.\] _where \(x^{-}\) and \(x^{+}\) is in \(\mathcal{M}_{\alpha}^{\mathrm{per}}\) and \((x_{0}^{-},x_{0}^{+})\) is a neighboring pair in \(\mathcal{M}_{\alpha}^{\mathrm{per}}\)._ **Definition 2.8** (Definition 5.2, [11]).: _Let \(h_{i}\colon\mathbb{R}^{2}\to\mathbb{R}\) be a continuous function for \(i=1,2\). For \(h_{1}\) and \(h_{2}\), we define \(h_{1}*h_{2}\colon\mathbb{R}^{2}\to\mathbb{R}\) by_ \[h_{1}*h_{2}(x_{1},x_{2})=\min_{\xi\in\mathbb{R}}(h_{1}(x_{1},\xi)+h_{2}(\xi,x_ {2})).\] _We call this the conjunction of \(h_{1}\) and \(h_{2}\)._ Using the conjunction, we define a function \(H\colon\mathbb{R}^{2}\to\mathbb{R}\) for \(\alpha=p/q\) by: \[H(\xi,\xi^{\prime})=h^{*q}(\xi,\xi^{\prime}+p),\] where \(h^{*q}(x,y)=h_{1}*h_{2}*\cdots*h_{q}(x,y)\) and \(h_{i}=h\) for all \(i=1,2,\cdots,q\). **Definition 2.9** (Definition 5.5, [11]).: _For any \(y=(y_{i})_{i\in\mathbb{Z}}\in X(x_{0}^{-},x_{0}^{+})\), we define \(x=(x_{i})_{i\in\mathbb{Z}}\in X_{\alpha}(x^{-},x^{+})\) as follows:_ 1. 
\(x_{iq}=y_{i}+ip\) _and_ 2. \((x_{j})_{j=iq}^{(i+1)q}\) _satisfies_ \[h(x_{iq},\cdots,x_{(i+1)q})=H(x_{iq},x_{(i+1)q})=H(y_{i},y_{i+1}),\] _i.e.,_ \((x_{j})_{j=iq}^{(i+1)q}\) _is a minimal configuration of_ \(h\)_._ Although we focus on the case of rotation number \(\alpha=0\), we may apply our proof to all rational rotation numbers from the following. **Proposition 2.10** (Proposition 5.6, [11]).: _Let \(y\in X(x_{0}^{-},x_{0}^{+})\) and \(x\in X_{\alpha}(x^{-},x^{+})\) be defined as above. If \(y\) is a stationary configuration of \(H\), then \(x\) must be a stationary configuration of \(h\)._ ### Some remarks for heteroclinic orbits Let \(X^{0}\) and \(X^{1}\) be given by: \[X^{0} =\{x\in X\mid|x_{i}-u^{1}|\to 0\ (i\to\infty),|x_{i}-u^{0}|\to 0\ (i\to-\infty)\}\ and\] \[X^{1} =\{x\in X\mid|x_{i}-u^{1}|\to 0\ (i\to-\infty),|x_{i}-u^{0}|\to 0\ (i\to\infty)\}.\] By considering a local minimizer (precisely, a global minimizer in \(X^{0}\) or \(X^{1}\)), Yu [11] proved the existence of heteroclinic orbits, which Bangert showed in [2], as per the following proposition. **Proposition 2.11** (Theorem 3.4 and Proposition 3.5, [11]).: _There exists a stationary configuration \(x\) in \(X^{0}\) (resp. \(X^{1}\)) satisfying \(I(x)=c_{0}\) (resp. \(I(x)=c_{1}\)), where_ \[c_{0}=\inf_{x\in X^{0}}I(x),\ c_{1}=\inf_{x\in X^{1}}I(x)\] _Moreover, \(x\) is strictly monotone, i.e., \(x_{i}<x_{i+1}\) (resp. \(x_{i}>x_{i+1}\)) for all \(i\in\mathbb{Z}\)._ Let: \[\mathcal{M}^{0}(u^{0},u^{1}) =\{x\in X\mid c_{0}=\inf_{x\in X^{0}}I(x)\}\ and\] \[\mathcal{M}^{1}(u^{0},u^{1}) =\{x\in X\mid c_{1}=\inf_{x\in X^{1}}I(x)\}.\] Set \[c_{*}:=I(x^{0})+I(x^{1}), \tag{9}\] where \(x^{i}\in\mathcal{M}^{i}(u^{0},u^{1})\)\((i=0,1)\). From the above and Lemma 2.4, we immediately obtain the following corollary. **Corollary 2.12**.: \(c_{*}>0\)__ Proof.: Choose \(x^{0}\in\mathcal{M}^{0}(u^{0},u^{1})\) and \(x^{1}\in\mathcal{M}^{1}(u^{0},u^{1})\) arbitrarily. From monotonicity, \(x^{0}\) and \(x^{1}\) intersect exactly once. We define \(x^{+}\) and \(x^{-}\) in \(X\) by \(x_{i}^{+}:=\max\{x_{i}^{0},x_{i}^{1}\}\) and \(x_{i}^{-}:=\min\{x_{i}^{0},x_{i}^{1}\}\). By \((h_{3})\) and Lemma 2.4, \[c_{*}=I(x^{0})+I(x^{1})\geq I(x^{+})+I(x^{-})>0.\] This completes the proof. **Lemma 2.13**.: _For any \(\epsilon>0\), there exist \(n_{0}\in\mathbb{N}\) and \(x\in\mathcal{M}^{0}(u^{0},u^{1})\) (resp. \(x\in\mathcal{M}^{1}(u^{0},u^{1})\)) such that \(\sum_{i=-n}^{n-1}a_{i}(x)\in(c_{0}-\epsilon,c_{0}+\epsilon)\)\((resp.\sum_{i=-n}^{n-1}a_{i}(x)\in(c_{1}-\epsilon,c_{1}+\epsilon))\) for all \(n\geq n_{0}\)._ Proof.: For sufficiently large \(n_{0}\), there exists \(y\in\mathcal{M}^{0}\) such that for any \(n\geq n_{0}\), \[y_{-n}-u^{0}<\epsilon/2C,\text{and }u^{1}-y_{n}<\epsilon/2C.\] Since \(c_{0}=\sum_{i\in\mathbb{Z}}a_{i}(y)\) by the minimality of \(y\), we get: \[\left|\sum_{i=-n}^{n-1}a_{i}(y)-c_{0}\right|=\left|\sum_{i<-n}a_{i}(y)+\sum_{ i\geq n}a_{i}(y)\right|\leq C((y_{-n}-u^{0})+(u^{1}-y_{n}))<\epsilon\] as desired. A similar way is valid for the rest of the proof. We will check the properties of the 'pseudo' minimal heteroclinic orbits. Under the assumption (6), the following lemma holds. **Lemma 2.14** (Proposition 4.1, [11]).: _Assume (6) holds. 
For any \(\epsilon>0\), there exist \(\delta_{i}\in(0,\epsilon)\)\((i=1,2,3,4)\) and positive constants \(e_{0}=e_{0}(\delta_{1},\delta_{2})\) and \(e_{1}=e_{1}(\delta_{3},\delta_{4})\) satisfying:_ \[\inf\{I(x)\mid x\in X^{0},x_{0}=u_{0}+\delta_{1}\text{ or }x_{0}=u_{1}- \delta_{2}\}=c_{0}+e_{0}\text{ and }\] \[\inf\{I(x)\mid x\in X^{1},x_{0}=u_{1}-\delta_{3}\text{ or }x_{0}=u_{0}+\delta_{4}\}=c_{1}+e_{1}.\] We omit the proof. As a result, we need to choose each \(\delta_{i}\) small enough satisfying \(\delta_{1},\delta_{2}\in I_{0}^{+}(u^{0},u^{1})\) and \(\delta_{3},\delta_{4}\in I_{0}^{-}(u^{0},u^{1})\). It is immediately shown that: **Lemma 2.15**.: _Let \(x\in X^{0}\) (resp. \(x\in X^{1}\)) be satisfy \(I(x)=c_{0}+e_{0}\) (resp. \(I(x)=c_{1}+e_{1}\)). Then, for any \(\epsilon>0\), there exist \(n_{0}\in\mathbb{Z}_{\geq 0}\) such that \(\sum_{i=-n}^{n-1}a_{i}(x)\in(c_{0}+e_{0}-\epsilon,c_{0}+e_{0}+\epsilon)\)\((resp.\sum_{i=-n}^{n-1}a_{i}(x)\in(c_{1}+e_{1}-\epsilon,c_{1}+e_{1}+\epsilon))\) for all \(n\geq n_{0}\)._ The proofs of our main theorem and some remarks ### Variational settings Let \((u^{0},u^{1})\) be a neighboring pair of \(\mathcal{M}_{0}^{\text{per}}\) and set: \[K =\left\{k=(k_{i})_{i\in\mathbb{Z}}\subset\mathbb{Z}\mid k_{0}=0,k_{i }<k_{i+1}\right\},and: \tag{10}\] \[\mathcal{C}(n;a,b) =\min\left\{\sum_{i=0}^{n-1}h(x_{i},x_{i+1})\mid x\in X(n),x_{0} =a,x_{n}=b\right\}.\] (See (7) for the definition of \(X\).) For \(k\in K\), set \(I_{i}=[k_{i},k_{i+1}-1]\cap\mathbb{Z}\) and \(|I_{i}|=|k_{i}-k_{i+1}|\) for each \(i\in\mathbb{Z}\). Now we define the renormalized function \(J\) by: \[J(x)=J_{k}(x)=\sum_{j\in\mathbb{Z}}A_{j}(x),\] where \(A_{j}(x)=h(x_{j},x_{j+1})-c(j)\) and: \[c(j)=\begin{cases}\dfrac{\mathcal{C}(|I_{2i+1}|,u^{i},u^{i+1})}{|I_{2i+1}|}& \text{ ($j\in I_{2i+1}$ for some $i\in\mathbb{Z}$)}\\ \dfrac{\mathcal{C}(|I_{2i}|,u^{i},u^{i})}{|I_{2i}|}&\text{ (otherwise)}\end{cases}.\] **Remark 3.1**.: \((a)\) _The existence of the minimum value \(\mathcal{C}(n;a,b)\) is guaranteed by \((h_{2})\). \((b)\) Theorem 5.1 in [2] shows that \(x\in\mathcal{M}_{\alpha}^{\text{per}}\) has minimal period \((q,p)\) with \(q\) and \(p\) relatively prime. It indicates that:_ \[c=\dfrac{\mathcal{C}(|I_{2i}|,u^{0},u^{0})}{|I_{2i}|}=\dfrac{\mathcal{C}(|I_{2 i}|,u^{1},u^{1})}{|I_{2i}|}.\] Next, we set: \[P=\left\{\rho=(\rho_{i})_{i\in\mathbb{Z}}\subset\mathbb{R}_{>0}\mid 0<\rho_{i} <\dfrac{u^{1}-u^{0}}{2}\ (^{\forall}i\in\mathbb{Z}),\ \sum_{i\in\mathbb{Z}}\rho_{i}<\infty \right\}.\] For \(k\in K\) and \(\rho\in P\), the set \(X_{k,\rho}\) is given by: \[X_{k,\rho}=\bigcap_{i\in\mathbb{Z}}\left\{\left(\bigcap_{i=0,1}Y^{0}(k_{i}, \rho_{i})\right)\cap\left(\bigcap_{i=-1,2}Y^{1}(k_{i},\rho_{i})\right)\right\},\] where \[Y^{j}(l,p)=\left\{x\in X\mid|x_{l}-u^{j}|\leq p\right\}\ (j=0,1)\] and \(a\equiv b\) means \(a\equiv b\ (\text{mod}\ 4)\). (See (7) for the definition of \(X(n)\).) It is easily seen that each element of \(X_{k,\rho}\) has infinite transitions. Notice that since compactness and sequential compactness are equivalent in the presence of the second countability axiom, \(X\) is a sequentially compact set by Tychonoff's theorem. Clearly, \(X_{k,\rho}\) is a closed subset of \(X\), so the set \(X_{k,\rho}\) is also sequentially compact. As a basic property of \(J\), we first show that \(J(x)\) can be finite unlike \(I(x)\) even if \(x\) is an infinite transition orbit. 
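Before stating Lemma 3.2, the following numerical sketch may help to illustrate the effect of the renormalization. It uses a toy generating function that is not taken from this paper, namely the Frenkel-Kontorova (standard map) action \(h(\xi,\eta)=\frac{1}{2}(\eta-\xi)^{2}+\frac{K}{(2\pi)^{2}}(1-\cos 2\pi\xi)\), for which the constant configurations at the integers are natural candidates for a neighboring pair \((u^{0},u^{1})=(0,1)\) with rotation number \(0\); everything below is only a sketch under these assumptions.

```python
# Toy comparison of the contribution of one transition block to I and to J.
import numpy as np
from scipy.optimize import minimize

K = 1.0

def h(xi, eta):
    # Frenkel-Kontorova / standard-map generating function; h(xi+1, eta+1) = h(xi, eta).
    return 0.5 * (eta - xi) ** 2 + K / (2 * np.pi) ** 2 * (1 - np.cos(2 * np.pi * xi))

def action(x):
    x = np.asarray(x, dtype=float)
    return float(np.sum(h(x[:-1], x[1:])))

def C(n, a, b):
    """Approximate minimal action C(n; a, b) over configurations with fixed endpoints."""
    obj = lambda inner: action(np.concatenate(([a], inner, [b])))
    inner0 = np.linspace(a, b, n + 1)[1:-1]
    return minimize(obj, inner0, method="L-BFGS-B").fun

u0, u1 = 0.0, 1.0      # candidate neighboring pair of constant minimal configurations
c = h(u0, u0)          # = h(u1, u1): per-step action of the constant configurations

n = 30                 # length of one transition block from u0 to u1
C_trans = C(n, u0, u1)
c_trans = C_trans / n  # the value c(j) used on a transition block in the definition of J

I_block = C_trans - n * c        # contribution of the block to I: strictly positive
J_block = C_trans - n * c_trans  # contribution of the block minimizer to J: exactly zero
print(f"I-contribution per transition: {I_block:.6f}")
print(f"J-contribution per transition: {J_block:.6f}")
```

Summing over infinitely many transition blocks, the positive \(I\)-contribution accumulates, consistent with Proposition 1.11, whereas the renormalized functional \(J\) can remain finite; this is the content of Lemma 3.2.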
**Lemma 3.2**.: _If \(\rho\in P\), then there exists \(y=(y_{i})_{i\in\mathbb{Z}}\in X_{k,\rho}\) such that \(J(y)=0\) for all \(k\in K\)._ Proof.: By the definition of \(J\), we can choose a configuration \(y\in X_{k,\rho}\) satisfying \(\sum_{i=k_{j}}^{k_{j+1}}A_{i}(y)=0\) for all \(j\in\mathbb{Z}\) by taking \(y\) such that it satisfies \(\sum_{i=k_{j}}^{k_{j+1}}h(y_{i},y_{i+1})=\mathcal{C}(|I_{2j+1}|,u^{j},u^{j+1})\) or \(\sum_{i=k_{j}}^{k_{j+1}}h(y_{i},y_{i+1})=\mathcal{C}(|I_{2j}|,u^{j},u^{j})\). The above lemma implies that \(J\) overcomes the problem referred to in Proposition 1.11. Next, we show that \(J\) is bounded below. **Lemma 3.3**.: _If \(\rho\in P\), then there is a constant \(M\in\mathbb{R}\) such that \(J(x)\geq M(>-\infty)\) for all \(x\in X_{k,\rho}\)._ Proof.: For each \(x\in X_{k,\rho}\) with \(J(x)\leq 0\), we define \(y=(y_{j})_{j\in\mathbb{Z}}\in X_{k,\rho}\) by \(y_{k_{i}}=u^{0}\); if \(i\equiv 0,1\), \(y_{k_{i}}=u^{1}\); if \(i\equiv-1,2\), and \(y_{j}=x_{j}\) otherwise. From the definition, \(0\leq J(y)<\infty\). Lipschitz continuity of \(h\) shows: \[-J(x)\leq J(y)-J(x)\leq 2C\sum_{i\in\mathbb{Z}}\rho_{i}<\infty\] and we get \(J(x)\geq-2C\sum_{i\in\mathbb{Z}}\rho_{i}\) for all \(x\in X_{k,\rho}\), thus completing the proof. **Remark 3.4**.: _In a similar way to the proof in the above lemma, we get \(\sum_{i=k_{n}}^{k_{m}}A_{i}(x)\geq-2C\sum_{i=n}^{m}\rho_{i}\) for any \(n<m\)._ To ensure that \(J\) has a minimizer in \(X_{k,\rho}\), we present the following lemma. **Lemma 3.5**.: _The function \(J\) is well-defined on \(\mathbb{R}\cup\{+\infty\}\), i.e.,_ \[\alpha:=\liminf_{n\to\infty}\sum_{|i|\leq n}A_{i}(x)=\limsup_{n\to\infty}\sum_ {|i|\leq n}A_{i}(x)=:\beta.\] Proof.: For the proof, we use a similar argument to Yu's proof of Proposition 2.9 and Lemma 6.1 in [11]. By contradiction, we assume \(\alpha<\beta\). First, we consider the case where \(\beta=+\infty\). Fix \(\gamma\in\mathbb{R}_{<0}\) arbitrarily. For \(\alpha<+\infty\), we take a constant \(\tilde{\alpha}\) with \(\tilde{\alpha}>\alpha+1-2\gamma\). Then there are constants \(n_{0}\) and \(n_{1}\) such that \(n_{0}<n_{1}\) and: \[\sum_{|i|\leq n_{0}}A_{i}(x)\geq\tilde{\alpha}\text{ and }\sum_{|i|\leq n_{1}}A_ {i}(x)\leq\alpha+1.\] Then, \[2\gamma>\alpha+1-\tilde{\alpha}\geq\sum_{|i|\leq n_{1}}A_{i}(x)-\sum_{|i|\leq n _{0}}A_{i}(x)=\sum_{i=-n_{1}}^{-n_{0}}A_{i}(x)+\sum_{i=n_{0}}^{n_{1}}A_{i}(x).\] Figure 2: An element of \(X_{k,\rho}\) Combining the first term and end terms implies: \[\sum_{i=-n_{1}}^{-n_{0}}A_{i}(x)<\gamma\text{ or }\sum_{i=n_{0}}^{n_{1}}A_{i}(x)<\gamma.\] For \(\gamma\) small enough, this contradicts Lemma 3.3. Next, we assume \(\beta<+\infty\). Since \(\alpha<\beta\), there are two sequences of positive integers \(\{m_{j}\to\infty\}_{j\in\mathbb{N}}\) and \(\{l_{j}\to\infty\}_{j\in\mathbb{N}}\) satisfying \(m_{j}<m_{j+1}\), \(l_{j}<l_{j+1}\) and \(m_{j}+1<l_{j}<m_{j+1}-1\) for all \(j\in\mathbb{Z}_{>0}\), and: \[\beta=\lim_{j\to\infty}\sum_{i\leq|m_{j}|}A_{i}(x)>\lim_{j\to\infty}\sum_{i\leq |l_{j}|}A_{i}(x)=\alpha.\] Then we can find \(j\gg 0\) such that \[\sum_{i\leq|l_{j}|}A_{i}(x)-\sum_{i\leq|m_{j}|}A_{i}(x)=\sum_{i=-l_{j}}^{-m_{j }}A_{i}(x)+\sum_{i=m_{j}}^{l_{j}}A_{i}(x)<\frac{\alpha-\beta}{2}. \tag{11}\] Since \(|l_{j}|\) and \(|m_{j}|\) are finite for fixed \(j\), the above calculation does not depend on the order of the sums. 
For sufficiently large \(j\), a similar argument in the proof of Lemma 3.3 shows: \[\sum_{i=-l_{j}}^{-m_{j}}A_{i}(x)\geq-2C\sum_{i=-l_{j}}^{-m_{j}}\rho_{i}>\frac{ \alpha-\beta}{4}\] and \[\sum_{i=m_{j}}^{l_{j}}A_{i}(x)\geq-2C\sum_{i=m_{j}}^{l_{j}}\rho_{i}>\frac{ \alpha-\beta}{4}\] because \(\rho\in P\) implies \[\sum_{|i|>n}\rho_{i}\to 0\text{ }(n\to\infty).\] and both \(m_{j}\) and \(l_{j}\) goes to infinity as \(j\to\infty\). Therefore: \[\sum_{i=-l_{j}}^{-m_{j}}A_{i}(x)+\sum_{i=m_{j}}^{l_{j}}A_{i}(x)>\frac{\alpha- \beta}{2},\] which contradicts (11). **Proposition 3.6**.: _For all \(k\in K\) and \(\rho\in P\), there exists a minimizer of \(J\) in \(X_{k,\rho}\)._ Proof.: By Lemma 3.2 and 3.3, we can take a minimizing sequence \(x=(x^{n})_{n\in\mathbb{N}}\) of \(J\) with each \(x^{n}\in X_{k,\rho}\). Since \(X_{k,\rho}\) is sequentially compact, there exists \(\tilde{x}\in X_{k,\rho}\) which \(x^{n_{k}}\) converges to \(\tilde{x}\) for some subsequence \((n_{k})_{k\in\mathbb{N}}\). Below, we assume \(n_{k}=k\) for simplicity. To ensure our claim, it is enough to show that for any \(\epsilon>0\), there exists \(j_{0}\) and \(n_{0}\in\mathbb{N}\) such that: \[\sum_{|i|>j_{0}}A_{i}(x^{n})>-\epsilon\text{ (for all }n\geq n_{0})\text{ and } \sum_{|i|>j_{0}}A_{i}(\tilde{x})<\epsilon, \tag{12}\] because if the above inequalities hold, we obtain: \[J(\tilde{x}) =\sum_{|i|\leq j_{0}}A_{i}(\tilde{x})+\sum_{|i|>j_{0}}A_{i}(\tilde {x})\] \[\leq\lim_{n\to\infty}\sum_{|i|\leq j_{0}}A_{i}(x^{n})+\epsilon= \lim_{n\to\infty}(\sum_{i\in\mathbb{Z}}A_{i}(x^{n})-\sum_{|i|>j_{0}}A_{i}(x^{n }))+\epsilon\] \[\leq\lim_{n\to\infty}\sum_{i\in\mathbb{Z}}A_{i}(x^{n})+2\epsilon =\lim_{n\to\infty}J(x^{n})+2\epsilon.\] Using an arbitrary value of \(\epsilon\), we have \(J(\tilde{x})\leq\lim_{n\to\infty}\sum_{i\in\mathbb{Z}}A_{i}(x^{n})\) and \(\tilde{x}\) is the infimum (or greatest lower bound) of \(J\). The step of the proof in Lemma 3.3 implies that for any \(n\in\mathbb{N}\): \[\lim_{j\to\infty}\sum_{|i|>j}A_{i}(x^{n})\geq 0 \tag{13}\] Hence, the first inequality holds. The second inequality is clear since \(\tilde{x}\in X_{k,\rho}\) and \(\sum_{i\in\mathbb{Z}}\rho_{i}<\infty\). ### Properties of the minimizers of \(J\) in \(X_{k,\rho}\) Let \(x^{*}=(x^{*}_{i})_{i\in\mathbb{Z}}\) be a minimizer (depending on \(k\in K\) and \(\rho\in P\)) in Proposition 3.6. Let \(x(n;a,b)=\{x_{i}(n;a,b)\}_{i=0}^{n}\) be a minimizing sequence of \(\sum_{i=0}^{n-1}h(x_{i},x_{i+1})\) on \(X(n)\) (defined by (7)) that satisfies \(x_{0}(n;a,b)=a\), and \(x_{n}(n;a,b)=b\), i.e., it holds that \(\sum_{i=0}^{n-1}h(x_{i}(n;a,b),x_{i+1}(n;a,b))=\mathcal{C}(n;a,b)\) (see (10)). **Lemma 3.7**.: _For any \(\epsilon\in(0,\min\{c_{*}/(2C),(u^{1}-u^{0})/2\})\) (\(c_{*}\) is given by (9)), there exist two positive real numbers \(r_{1}\) and \(r_{2}\) which satisfy that: for any \(n\geq 2\), \(a\in[u^{0},u^{0}+r_{1}]\) (resp. \(a\in[u^{1}-r_{1},u^{1}]\)), and \(b\in[u^{0},u^{0}+r_{2}]\) (resp. \(b\in[u^{1}-r_{2},u^{1}]\)),_ \[0<x_{i}(n;a,b)-u^{0}<\epsilon\ (resp.0<u^{1}-x_{i}(n;a,b)<\epsilon)\text{ for all }i\in\{0,\ldots,n\}.\] Proof.: For any \(\epsilon\in(0,\min\{c_{*}/(2C),(u^{1}-u^{0})/2\})\), we can take \(r_{1}\) and \(r_{2}\in(0,\epsilon)\) satisfying \[r_{1}+r_{2}<\min\left\{\frac{\phi(\epsilon)}{2C},\frac{c_{*}}{2C}-\epsilon \right\}.\] See (8) for the definition of \(\phi\). We demonstrate that the claim holds for the selected \(r_{1}\) and \(r_{2}\) in the above. 
(We only prove the case of \(a\in[u^{0},u^{0}+r_{1}]\) and \(b\in[u^{0},u^{0}+r_{2}]\). The proof of the other case is similar.) Set a finite sequence \(y=(y_{i})_{i=0}^{n}\) by \(y_{0}=x_{0}\), \(y_{n}=x_{n}\), and \(y_{i}=u^{0}\) otherwise. For any \(n\in\mathbb{N}\), \[\begin{split}\sum_{i=0}^{n-1}a_{i}(x)&=\mathcal{C}(n ;a,b)-nc=\sum_{i=0}^{n-1}(h(x_{i},x_{i+1})-h(u^{0},u^{0}))\\ &\leq\sum_{i=0}^{n-1}(h(y_{i},y_{i+1})-h(u^{0},u^{0}))\leq C(r_{ 1}+r_{2}).\end{split} \tag{14}\] If there exists \(i\in\{1,\ldots,n-1\}\) satisfying \(x_{i}-u^{0}\geq\epsilon\) and \(u^{1}-x_{i}\geq\epsilon\), Combining Lemma 2.2 with (14) yields: \[C(r_{1}+r_{2})\geq\mathcal{C}(n;a,b)-nc\geq\phi(\epsilon)-C|a-b|>\phi(\epsilon )-C(r_{1}+r_{2}).\] Thus we get \(\phi(\epsilon)<2C(r_{1}+r_{2})\), which is a contradiction. Next, we assume that there exist \(i\) and \(i+1\) such that \(u^{1}-x_{i}<\epsilon\) and \(x_{i+1}-u^{0}<\epsilon\). For simplicity, we can set \(i=1\) without loss of generality. Define a configuration \(z^{+}\) and \(z^{-}\) by: \[z^{+}=\begin{cases}u^{0}\ (i\leq 0)\\ x_{i}\ (i=1),\\ u^{1}\ (i\geq 2)\end{cases}\quad z^{-}=\begin{cases}u^{1}\ (i\leq 0)\\ x_{i}\ (1\leq i\leq n-1).\\ u^{0}\ (i\geq n)\end{cases}\] Applying \(c=h(u^{0},u^{0})=h(u^{1},u^{1})\), Lipschitz continuity, and (14), we see that: \[I(z^{+})+I(z^{-}) =a_{0}(z^{+})+\sum_{i=1}^{n-1}a_{i}(z^{-})\] \[<\sum_{i=0}^{n-1}a_{i}(x)+C(r_{1}+r_{2}+2\epsilon)\] \[\leq C(r_{1}+r_{2})+C(r_{1}+r_{2}+2\epsilon).\] On the other hand, \(z^{+}\in X^{0}\) and \(z^{-}\in X^{1}\) imply \(I(z^{+})+I(z^{-})\geq c_{*}\) and we get: \[c_{*}<2C(r_{1}+r_{2}+\epsilon),\] which is a contradiction. **Lemma 3.8**.: _Assume that both \(a-u^{0}\) and \(b-u^{0}\) (resp. both \(u^{1}-a\) and \(u^{1}-b\)) are small enough. Then, for any \(\delta>0\) and \(m\in\mathbb{N}\), there exists \(N\in\mathbb{N}\) such that for any \(n\geq N\), there exist \(i(1),\ldots,i(m)\in\{0,\cdots,n\}\) satisfying:_ \[x_{i(j)}(n;a,b)-u^{0}<\delta\ (resp.\ u^{1}-x_{i(j)}(n;a,b)<\delta)\text{ for all }j\in\{1, \cdots,m\} \tag{15}\] Proof.: If we replace (15) with the following: \[x_{i(j)}(n;a,b)-u^{0}<\delta\text{ or }u^{1}-x_{i(j)}(n;a,b)<\delta)\text{ for all }j\in\{1, \cdots,m\},\] our statement for \(m=1\) is immediately shown from Proposition 1.11 and (14). For \(m\geq 2\), since both \(a-u^{0}\) and \(b-u^{0}\) are small enough, Lemma 3.7 is valid and it implies (15). Those statements of the above two lemmas may seem a bit complicated. We will roughly summarize the statement of Lemma 3.7 and 3.8. The former states that for any \(\epsilon\), if two endpoints are close to \(u^{0}\) or \(u^{1}\), then a minimal configuration between them is in a band whose width is \(\epsilon\) independent of its length. On the other hand, Lemma 3.8 shows that no matter the width of the band, if we take the interval between the endpoints to be longer (i.e. if we make the length of the band sufficiently long), a minimal configuration can get arbitrarily close to either \(u^{0}\) or \(u^{1}\). Now we are ready to state our main theorem when the rotation number \(\alpha\) is zero: Proof of Theorem 1.12 for \(\alpha=0\).: To see that a minimizer \(x^{*}\in X_{k,\rho}\) is a stationary configuration, it suffices to show that \(x^{*}\) is not on the boundary of \(X_{k,\rho}\). For any positive sequence \(\epsilon=(\epsilon_{i})_{i\in\mathbb{Z}}\) with \(\epsilon_{i}<c_{*}/4C\), we choose \(\rho\in P\) and \(k\in K\) in the following steps: 1. 
Since we assume that \((u^{0},u^{1})\neq I_{0}^{+}(u^{0},u^{1})\) and \((u^{0},u^{1})\neq I_{0}^{-}(u^{0},u^{1})\), both \((u^{0},u^{1})\backslash I_{0}^{+}(u^{0},u^{1})\) and \((u^{0},u^{1})\backslash I_{0}^{-}(u^{0},u^{1})\) are nonempty and we can take \(\rho\in P\) so that: \[(p_{1})\ u^{i+1}+\sigma(i)\rho_{i}\in(u^{0},u^{1})\backslash I_{0}^{+}(u^{0}, u^{1})\text{ for all }i\equiv 1,2\text{ and }u^{i}-\sigma(i)\rho_{i}\in(u^{0},u^{1}) \backslash I_{0}^{-}(u^{0},u^{1})\text{ for all }i\equiv-1,0,\text{ where }\sigma(i)=1\text{ if }i\text{ is odd and }\sigma(i)=-1\text{ if }i\text{ is even. and}\] \[(p_{2})\ \text{ For any }i\in\mathbb{Z},\,\rho_{2i}+\rho_{2i+1}<\min\bigg{\{} \frac{\phi(\epsilon_{i})}{2C},\frac{c_{*}}{2C}-\epsilon_{i}\bigg{\}},\] where \(u^{i}=u^{0}\) when \(i\) is even, and \(u^{i}=u^{1}\) when \(i\) is odd. It easily follows from Lemma 3.7 that, for any \(k\in K\) and \(\rho\in P\) satisfying \((p_{2})\), a minimizer \(x^{*}=(x^{*}_{j})_{j\in\mathbb{Z}}\in X_{k,\rho}\) is on an \(\epsilon_{i}\)-neighborhood of \(u^{i}\) for each \(j\in[k_{2i},k_{2i+1}]\), i.e., \(|x^{*}_{j}-u^{i}|<\epsilon_{i}\) if \(j\in[k_{2i},k_{2i+1}]\). Notice that \(\rho_{i}\) can be chosen as arbitrarily small since a monotone heteroclinic configuration (one transition orbit), say \((x_{i})_{i\in\mathbb{Z}}\), satisfies \(|x_{i}-u^{j}|\to 0\) as \(|i|\to\infty\) for \(j=0\) or \(1\) and it is follows from \((h_{1})\) that if \(x=(x_{i})_{i\in\mathbb{Z}}\) is a stationary configuration whose rotation number is \(0\), then so is \(y=(y_{i})_{i\in\mathbb{Z}}\) with \(y_{i}=x_{i+l}\) for any \(l\in\mathbb{Z}\). 2. Next, we consider taking a \(k\in K\) dependent of the chosen \(\rho\in P\) in the previous step. Since \(k\in K\) is \(k_{0}=0\), to take \(k\) is to determine the values of \(|k_{2i-1}-k_{2i}|\) and \(|k_{2i}-k_{2i+1}|\) for all \(i\in\mathbb{Z}\). For each \(i\in\mathbb{Z}\), we take the value of \(|k_{2i-1}-k_{2i}|\) satisfying: \[\mathcal{M}^{i}(u^{0},u^{1})\cap Y^{i+1}(k_{2(i+1)},\rho_{2(i+1)})\cap Y^{i}(k_ {2i+1},\rho_{2i+1})\neq\emptyset,\] i.e., \[\mathcal{M}^{i}(u^{0},u^{1})\cap Y^{i}(0,\rho_{2i+1})\cap Y^{i+1}(|k_{2i+1}-k_ {2(i+1)}|,\rho_{2i})\neq\emptyset,\] where \(\mathcal{M}^{i}=\mathcal{M}^{0}\) and \(Y^{i}=Y^{0}\) when \(i\) is even, and \(\mathcal{M}^{i}=\mathcal{M}^{1}\) and \(Y^{i}=Y^{1}\) when \(i\) is odd. 3. Before describing how we choose the value of \(|k_{2i}-k_{2i+1}|\) for each \(i\in\mathbb{Z}\), we define several positive bi-infinite sequences. Set \(\tilde{e}=(\tilde{e}_{i})_{i\in\mathbb{Z}}\) by: \[\tilde{e}_{i}=\begin{cases}e_{0}(\rho_{2i+1},\rho_{2(i+1)}),&(i:\text{even})\\ e_{1}(\rho_{2i+1},\rho_{2(i+1)})&(i:\text{odd}).\end{cases}\] Furthermore, let's choose two positive sequence \(\delta=(\delta_{i})_{i\in\mathbb{Z}}\) and \(\tilde{\epsilon}=(\tilde{\epsilon}_{i})_{i\in\mathbb{Z}}\) satisfying, for each \(i\in\mathbb{Z}\), \[2\tilde{\epsilon}_{i}+C(\delta_{2i-1}+\delta_{2i})<\frac{\tilde{e}^{i}}{2}.\] * For each \(\tilde{\epsilon}_{i}\) in the one before step, Lemma 2.13 and 2.15 show that there exist two integers \(N_{2i-1},N_{2i}\in\mathbb{N}\) and \(x^{i}\in\mathcal{M}^{i}(u^{0},u^{1})\) satisfying the following: * For any \(i\in\mathbb{Z}\), if \(n_{2i+1}\geq N_{2i+1}\) and \(n_{2(i+1)}\geq N_{2(i+1)}\), then both the follllowing \(1\) and \(2\) hold: \[1. \sum_{\begin{subarray}{c}j=k_{2i+1}-n_{2i+1}-1\\ k_{2(i+1)}+n_{2(i+1)}+1\\ 2. 
\sum_{\begin{subarray}{c}k_{2(i+1)}+n_{2(i+1)}+1\\ k_{2(i+1)}+n_{2(i+1)}+1\\ \end{subarray}}^{k_{2(i+1)}+n_{2(i+1)}+1}a_{j}(y)\geq c_{i}+\tilde{\epsilon}_{i }-\tilde{\epsilon}_{i}\text{ for all: }\] \[y\in\{x=(x_{i})_{i\in\mathbb{Z}}\in X^{i}\cap Y^{i}(k_{2i+1}, \rho_{2i+1})\cap Y^{i+1}(k_{2(i+1)},\rho_{2(i+1)})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \ Though the previous discussion treats bi-infinite transition orbits, we can construct one-sided infinite transition orbits by replacing \(c(j)\) of \(J\) with: \[\tilde{c}(j)=\begin{cases}\frac{\mathcal{C}(|I_{2i+1}|,u^{i},u^{i+1})}{|I_{2i+1} |}&\text{ ($j\in I_{2i+1}$ for some $i\geq 0$)},\\ \frac{\mathcal{C}(|I_{2i}|,u^{i},u^{i})}{|I_{2i}|}&\text{ (otherwise)},\end{cases}\] and \(X_{k,\rho}\) with: \[\tilde{X}_{k,\rho}(a,b)=\bigcap_{i\in\mathbb{Z}}\left\{\left(\bigcap_{i<0}Y^{ a}(k_{bi},\rho_{0})\right)\cap\left(\bigcap_{i=0,1,i\geq 0}Y^{a}(k_{bi},\rho_{bi}) \right)\cap\left(\bigcap_{i=-1,2,i\geq 0}Y^{|1-a|}(k_{bi},\rho_{bi})\right) \right\},\] where \(a\in\{0,1\}\), \(b\in\{-1,1\}\), \(k\in K\) and \(\rho\in P\). Let \(\tilde{J}\) be the replaced function instead of \(J\), i.e., \[\tilde{J}(x)=\sum_{j\in\mathbb{Z}}(h(x_{j},x_{j+1})-\tilde{c}(j)).\] Notice that Proposition 1.11 implies that if \(x=(x_{i})_{i\in\mathbb{Z}}\) satisfies that \(\tilde{J}(x)\) is finite, then \(|x_{i}-x_{i}^{a}|\to 0\)\((bi\to\infty)\). Thus we get: **Theorem 3.9**.: _Assume the same condition of Theorem 1.10. Then, for any \(a\in\{0,1\}\), \(b\in\{-1,1\}\), and positive sequence \(\epsilon=(\epsilon_{i})_{i\in\mathbb{Z}}\) with \(\epsilon_{i}\) small enough, there is an \(m=\{m_{i}\}_{i\in\mathbb{Z}}\) such that for every sequence of integers \(k=(k_{i})_{i\in\mathbb{Z}}\) with \(k_{i+1}-k_{i}\geq m_{i}\), there is a stationary configuration \(x\) satisfying:_ 1. \(x_{i}^{0}<x_{i}<x_{i}^{1}\) _for all_ \(i\in\mathbb{Z}\)_;_ 2. _for any_ \(j\in\mathbb{Z}\)_,_ \(\ |x_{i}-x_{i}^{2j+a}|\leq\epsilon_{i}\) _if_ \(i\in[k_{4j},k_{4j+1}]\) _and_ \(\ |x_{i}-x_{i}^{2j-1+a}|\leq\epsilon_{i}\) _if_ \(i\in[k_{4j+2},k_{4j+3}]\)_;_ 3. \(|x_{i}-x_{i}^{a}|\to 0\)__\((bi\to\infty)\)_._ ## 4 Additional remarks ### The number of infinite transition orbits We first see that Theorem 1.12 and 3.9 show the existence of uncountable many infinite transition orbits. We can take \(k\in K\) and \(\rho\in P\) given in Theorem 1.12 so that for all \(i\in\mathbb{N}\), \(k_{i}-k_{i-1}<k_{i+1}-k_{i}\) and \(k_{-i}-k_{-(i+1)}<k_{-i+1}-k_{-i}\). 
For \(j=(j_{i})_{i\in\mathbb{Z}}\in K\), set: \[X_{j}=\bigcap_{i\in\mathbb{Z}}\left\{\left(\bigcap_{i=0,1}Y^{0}(k_{j_{i}}, \rho_{i})\right)\cap\left(\bigcap_{i=-1,2}Y^{1}(k_{j_{i}},\rho_{i})\right)\right\}\] Let \(x^{*}(j)\) be a minimizer of \(J\) on \(X_{j}\), i.e., \[J(x^{*}(j))=\inf_{x\in X_{j}}J(x).\] The previous section deals with the case of \(j^{0}=(j_{i}^{0}=i)_{i\in\mathbb{Z}}\). It is easily seen that if \(l\neq m\in K\), then \(x^{*}(l)\) and \(x^{*}(m)\) are different and we immediately get the following theorem. **Theorem 4.1**.: _Let \(\#\chi_{1}\) and \(\#\chi_{2}\) be the number of infinite transition orbits in Theorem 1.12 and 3.9. Then \(\#\chi_{1}=\#\chi_{2}=\#\mathbb{R}\)._ Proof.: We only discuss the case of Theorem 1.12. For any real number \(r\in\mathbb{R}_{>0}\), we can choose a corresponding bi-infinite sequence \((a_{i})_{i\in\mathbb{Z}}\subset\mathbb{Z}_{\geq 0}\). (For example, when \(r=12.34\), \(a_{-1}=1,a_{0}=2,a_{1}=3,a_{2}=4\) and \(a_{i}=0\) otherwise.) The proof is straightforward by setting \(a_{i}:=j_{i+1}-j_{i}-1\) for \(i\in\mathbb{Z}\). It is also clear that if \(r_{1}\neq r_{2}\), each corresponding stationary configuration is different. A similar proof is valid for Theorem 3.9. ### A special case We will give a special example at the end of this paper. In the previous section, we cannot generally show: \[h(x,y)-c\geq 0. \tag{16}\] Therefore the proof of Proposition 1.11 is somewhat technical. However, as we will see later, (16) holds if \(h\) satisfies: \[h(x,y)=h(y,x). \tag{17}\] This is kind of natural because the analogy of (16) for differential equations holds in variational structures of potential systems with reversibility (see [10]). One of the examples satisfying (17) is the Frenkel-Kontorova model [1, 5] and the corresponding \(h\) is given by: \[h(x,y)=\frac{1}{2}\left\{C(x-y)^{2}+V(x)+V(y)\right\}, \tag{18}\] where \(C\) is a positive constant and \(V(x)=V(x+1)\) for all \(x\in\mathbb{R}\). Since \(\partial_{1}\partial_{2}h\leq-C<0\), Remark 1.9 implies that (18) satisfies (\(h_{1}\))-(\(h_{5}\)). Using (17), we can easily show the following lemma, which implies \(h(x,y)-c\geq 0\). **Lemma 4.2**.: _If a continuous function \(h\colon\mathbb{R}^{2}\to\mathbb{R}\) satisfies (\(h_{1}\))-(\(h_{3}\)) and (17), then all minimizers of \(h\) are \((1,0)\)-periodic, i.e.,_ \[\inf_{x\in\mathbb{R}}h(x,x)=\inf_{(x,y)\in\mathbb{R}^{2}}h(x,y).\] Proof.: First, we see that it follows from (\(h_{2}\)) that there exists an infimum of \(h(x,y)\) on \(\mathbb{R}^{2}\). From (\(h_{1}\)), we can choice \(x^{*}\) satisfying \(h(x^{*},x^{*})=\min_{x\in[0,1]}h(x,x)=\inf_{x\in\mathbb{R}}h(x,x)\). By contradiction, there is \((x,y)\in\mathbb{R}^{2}\) such that \(x\neq y\) and \(h(x,y)<h(x^{*},x^{*})\). Then, (17) implies: \[h(x,y)+h(y,x)<h(x^{*},x^{*})+h(x^{*},x^{*})\leq h(x,x)+h(y,y),\] but it contradicts (\(h_{3}\)). If \(h\) satisfies (17), minimal configurations are 'almost' monotone in the following sense: **Proposition 4.3**.: _Let \(n\in\mathbb{N}\) be arbitrary number and \(x=(x_{i})_{i=0}^{n}\) be a finite configuration with \(x_{0}=a\), \(x_{n}=b\) and \(a<b\) (resp. \(a>b\)). If there exist two integers \(m\) and \(l\) satisfying \(0<m<n\), \(0\leq m-l<m+l+1\leq n\), \(x_{m}>x_{m+1}\) (resp. \(x_{m}<x_{m+1}\)) and \(x_{m-l}<x_{m+l+1}\) (resp. \(x_{m-l}>x_{m+l+1}\)), then \(x=(x_{i})_{i=0}^{n}\) is not minimal._ Proof.: We only consider the case where \(l=1\). 
To prove our statement, it suffices to construct a finite configuration \(y=(y_{i})_{i=m-1}^{m+2}\) satisfying \(\sum_{i=m-l}^{m+l}h(y_{i},y_{i+1})<\sum_{i=m-l}^{m+l}h(x_{i},x_{i+1})\). Set \(y=(y_{i})_{i=m-1}^{m+2}\) by \(y_{i}=x_{i}\) (\(i=m-1,m+2\)), \(y_{m}=x_{m+1}\), and \(y_{m+1}=x_{m}\). Applying (17) (so that the terms \(h(x_{m},x_{m+1})\) and \(h(y_{m},y_{m+1})=h(x_{m+1},x_{m})\) cancel) and (\(h_{3}\)), we have:

\[\begin{split}&\sum_{i=m-1}^{m+1}h(x_{i},x_{i+1})-\sum_{i=m-1}^{m+1}h(y_{i},y_{i+1})\\ &=h(x_{m-1},x_{m})+h(x_{m+1},x_{m+2})-h(x_{m-1},x_{m+1})-h(x_{m},x_{m+2})\\ &=h(x_{m-1},x_{m})+h(x_{m+2},x_{m+1})-h(x_{m-1},x_{m+1})-h(x_{m+2},x_{m})>0.\end{split}\]

The same reasoning applies to the remaining cases. This completes the proof.
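As a quick sanity check on the special case above (this verification is ours and is not part of the original argument), one can see directly that the Frenkel-Kontorova interaction (18) has the two features used in this subsection: the symmetry (17) is immediate from the formula, and, assuming \(V\) is differentiable (as is standard for this model), the twist condition invoked via Remark 1.9 follows from a one-line computation:

\[\partial_{2}h(x,y)=-C(x-y)+\tfrac{1}{2}V'(y),\qquad\partial_{1}\partial_{2}h(x,y)=-C<0.\]

Moreover, Lemma 4.2 gives \(\inf_{x\in\mathbb{R}}h(x,x)=\inf_{(x,y)\in\mathbb{R}^{2}}h(x,y)\), so \(h(x,y)-c\geq 0\) for all \((x,y)\), which is exactly (16).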
2305.19841
On the canonical bundle formula in positive characteristic
Let $f: X \rightarrow Z$ be a fibration from a normal projective variety $X$ of dimension $n$ onto a normal curve $Z$ over a perfect field of characteristic $p>2$. Let $(X, B)$ be a log canonical pair such that the induced pair on the general fibre is log canonical. Assuming the LMMP and the existence of log resolutions in dimension $\leq n$, we prove that, up to a birational map $Y \dashrightarrow X$, the moduli part is nef. As a corollary, we prove nefness of the moduli part in the $f$-trivial case, i.e. when $K_X+B \sim_{\mathbb{Q}} f^*L$ for some $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor $L$ on $Z$. In particular, consider a log canonical pair $(X, B)$ of dimension 3 over a perfect field of characteristic $p>5$ such that the induced pair on the general fibre is log canonical. Then, we conclude that the canonical bundle formula holds.
Marta Benozzo
2023-05-31T13:30:45Z
http://arxiv.org/abs/2305.19841v1
# On the canonical bundle formula in positive characteristic ###### Abstract Let \(f\colon X\to Z\) be a fibration from a normal projective variety \(X\) of dimension \(n\) onto a normal curve \(Z\) over a perfect field of characteristic \(p>2\). Let \((X,B)\) be a log canonical pair such that the induced pair on the general fibre is log canonical. Assuming the LMMP and the existence of log resolutions in dimension \(\leq n\), we prove that, up to a birational map \(Y\dasharrowright X\), the moduli part is nef. As a corollary, we prove nefness of the moduli part in the \(f\)-trivial case, i.e. when \(K_{X}+B\sim_{\mathbb{Q}}f^{*}L_{Z}\) for some \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor \(L_{Z}\) on \(Z\). In particular, consider a log canonical pair \((X,B)\) of dimension \(3\) over a perfect field of characteristic \(p>5\) such that the induced pair on the general fibre is log canonical. Then, we conclude that the canonical bundle formula holds. ###### Contents * 1 GCLC condition * 2 Frobenius base change * 3 Foliations * 4 Discriminant and moduli parts * 5 Property \((*)\) * 6 Bend and break for the moduli divisor * 7 Geometric log canonical centres * 8 The canonical bundle formula ## Introduction The classification of varieties is one of the main objectives of algebraic geometers. An important tool that birational geometers use to this end is the study of positivity properties of the canonical divisor. A natural question in the field is whether we can meaningfully relate the canonical divisors of the source and the target of a fibration. The canonical bundle formula tackles this problem. Kodaira's result on elliptic fibrations is the first instance of a formula in this direction (see for example [11, theorem 8.2.1, ch.8]). It states that, given an elliptic fibration \(f\colon X\to Z\) from a normal projective surface over an algebraically closed field of any characteristic, the canonical bundle of \(X\) is related to the canonical bundle of \(Z\) along with two other terms: the discriminant and moduli parts. The discriminant part measures the singularities of the fibration, while the moduli part is a divisor defined via the \(j\)-invariant of the fibres. Later, a similar formula was proven for "lc-trivial" fibrations in characteristic \(0\). More precisely, if \((X,B)\) is a log canonical pair and \(f\colon X\to Z\) is a fibration such that \(K_{X}+B\sim_{\mathbb{Q}}f^{*}L_{Z}\), for some \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor \(L_{Z}\) on \(Z\), then we can write \(L_{Z}=K_{Z}+B_{Z}+M_{Z}\). The divisor \(B_{Z}\) is defined according to the singularities of \(f\), whereas \(M_{Z}\) measures how far the fibration is from being a product, namely its variation. In general, it is difficult to construct a moduli space for the fibres, so \(M_{Z}\) does not have an explicit description as in the elliptic curves case. However, we can at least study whether it defines a meaningful map from \(Z\). The first step is looking at the positivity properties of \(M_{Z}\). These are essential to understand also for inductive purposes: if we control \(B_{Z}\) and \(M_{Z}\), we can infer properties of \(X\) from the study of \(Z\), which is lower dimensional. Using variation of Hodge structures, it is possible to show that, up to a birational base change, \(M_{Z}\) is nef (see [10], [12], [11], [13], [14]). Unfortunately, these techniques cannot be used over fields of positive characteristics. 
With the development of the theory of \(F\)-splitting singularities, it was possible to prove a canonical bundle formula using more algebraic techniques. If we ask that the pair \((X,B)\) be globally \(F\)-split, the splitting map gives us effectiveness on the base (see [17, theorem 5.2]). Since moduli spaces of curves are well understood in any characteristic, when the fibration has relative dimension one we can exploit this to get the desired positivity (see [18, lemma 6.6, lemma 6.7] and [15, theorem 3.2]). For this result we need to assume that the geometric generic fibre is smooth and the pair induced on it is log canonical. If only the generic fibre is log canonical, it is still possible to prove a weaker statement by considering purely inseparable covers of the base ([15, theorem 1]). Recently, a new approach has been taken in characteristic \(0\) in the paper [1]. The moduli part of a fibration \(f\colon X\to Z\), under some assumptions on the singularities ("property \((*)\)"), coincides with the canonical bundle of the foliation induced by \(f\). The birational geometry of the foliation can then be used to conclude. With these techniques, it is possible to prove the result for more general fibrations, not necessarily \(f\)-trivial. In this generality, assuming that the general fibre is log canonical, the discriminant divisor is still well-defined on \(Z\), while the moduli part is only defined on \(X\). This approach does not make use of variation of Hodge structures, but rather techniques coming from the minimal model program (MMP). Note that the assumption on the singularities of the general fibre is necessary. If we remove it, the canonical bundle formula fails to hold, as shown in [15, example 3.5]. When passing to positive characteristic, many results in birational geometry become open problems and in some cases fail to hold at all. Despite this, in recent years much progress has been made in low dimensions. In particular for threefolds over perfect fields of characteristic \(p>5\) it is known that we can run the LMMP for log canonical pairs (see [10], [14], [13], [15]) and log resolutions have been constructed. In view of this, it is natural to ask whether the techniques in [1] could be adapted to the positive characteristic setting. Assuming that the geometric generic fibre is log canonical, we get positivity of the moduli part. **Theorem 0.1**.: _(see theorem 8.5) Assume the LMMP and the existence of log resolutions in dimension \(\leq n\). Let \(f\colon X\to Z\) be a fibration from a normal projective variety \(X\) of dimension \(n\) onto a curve \(Z\) and \((X/Z,B)\) a GCLC pair associated with it over a perfect field of characteristic \(p>2\). Let \(B\) be an effective \(\mathbb{Q}\)-divisor on \(X\) such that \((X,B)\) is a log canonical pair. Suppose _that \(K_{X}+B\) is \(f\)-nef. Then, there exist a pair \((Y,C)\) and a commutative diagram_ _where \(b\) is a birational map such that_ 1. \((K_{X}+B)|_{X_{\eta}}=(K_{Y}+C)|_{X_{\eta}}\)_, where_ \(\eta\) _is the generic point of_ \(Z\)_;_ 2. _the moduli part_ \(M_{Y}\) _of_ \((Y/Z,C)\) _is nef._ When the pair \((X/Z,B)\) satisfies property \((*)\) and \(B\) is vertical, the moduli part \(M_{X}\) coincides with the canonical bundle of the foliation. The main tool that is used in [1] to prove the theorem in this case is the cone theorem for foliations, which does not generally hold in positive characteristic. 
However, the foliations we are interested in all come from fibrations, so we are able to exploit this additional structure. Considering a base change with a high enough power of the Frobenius morphism on \(Z\), we can compare the canonical bundle of the foliation induced by \(f\) to the canonical bundle of the resulting variety \(Y\). In particular, if the moduli part \(M_{X}\) is negative on a curve that is general enough, then \(K_{Y}\) is not nef and we can apply bend and break results on \(Y\) to find rational curves contained in the fibres of \(Y\to Z\). The images of these curves in \(X\) are negative on \(M_{X}\) and contained in the fibres of \(f\). This contradicts the assumption that \(M_{X}\) is \(f\)-nef. To conclude the proof in the case where the pair satisfies property \((*)\), we need to exclude some "bad cases". As in [1, lemma 3.12], we get rid of them by producing a log canonical centre and doing adjunction on it. We are then able to conclude by induction on the dimension. In positive characteristic however, we additionally need to control singularities on the geometric generic fibre, so the variety \(\bar{W}\) on which we do adjunction needs to be a geometric log canonical centre. Unfortunately we cannot always extract a geometric log canonical place over \(X\). To overcome this we apply the same construction as above, namely, we consider a base change with a high enough power of the Frobenius morphism on \(Z\). Using this technique it is possible to prove that we have a log canonical centre on the resulting variety \(Y\), which reduces to \(\bar{W}\) on the geometric generic fibre. Under property \((*)\) assumptions, we can do adjunction on a divisor \(E\) extracting this log canonical centre over \(Y\). Roughly, \(M_{X}\) pulls-back to the moduli part of the resulting pair on \(E\) with induced fibration \(E\to Z\). Finally, we reduce to the property \((*)\) case with a birational modification of \(f\). In characteristic \(0\), this is constructed thanks to the results in [1], which do not hold in positive characteristic. However, the fibres of a fibration \(f\colon X\to Z\) from a threefold to a curve are divisors inside \(X\). Thus, using log resolutions and running the LMMP, we can still find a birational modification of \(f\) that satisfies property \((*)\). As a corollary, we get the canonical bundle formula in the \(f\)-trivial case in a similar way as in [1, theorem 1.3]. **Theorem 0.2**.: _(see theorem 8.6) Assume the LMMP and the existence of log resolutions in dimension \(\leq n\). Let \(f\colon X\to Z\) be a fibration from a normal projective variety \(X\) of dimension \(n\) onto a curve \(Z\) and \((X/Z,B)\) a GCLC pair associated with it over a perfect field of characteristic \(p>2\). Assume also that \(K_{X}+B\sim_{\mathbb{Q}}f^{*}L_{Z}\) for some line bundle \(L_{Z}\) on \(Z\) and that \((X,B)\) is log canonical. Then, \(M_{X}=f^{*}M_{Z}\) is nef._ The proof uses log resolutions and the LMMP to reduce to the case where \((X/Z,B)\) satisfies property \((*)\) and is \(f\)-trivial. The conclusion then follows from the previous results. If \(X\) is a threefold over a perfect field of characteristic \(p>5\), we know the existence of log resolutions and that we can run the LMMP for log canonical pairs (see [1], [11], [12] and [13]), so we obtain the following corollary. 
**Corollary 0.3**.: _(see corollary 8.7) Let \(f\colon X\to Z\) be a fibration from a normal projective variety \(X\) of dimension \(3\) onto a curve \(Z\) and \((X/Z,B)\) a GGLC pair associated with it over a perfect field of characteristic \(p>5\). Let \(B\) be an effective \(\mathbb{Q}\)-divisor on \(X\) such that \((X,B)\) is a log canonical pair. Suppose that \(K_{X}+B\) is \(f\)-nef. Then, there exist a pair \((Y,C)\) satisfying property \((*)\) and a commutative diagram_ _where \(b\) is a birational map such that_ 1. \((K_{X}+B)|_{X_{\eta}}=(K_{Y}+C)|_{X_{\eta}}\), where \(\eta\) is the generic point of \(Z\);_ 2. _the moduli part_ \(M_{Y}\) _of_ \((Y/Z,C)\) _is nef._ _Moreover, if \(K_{X}+B\sim_{\mathbb{Q}}f^{*}L_{Z}\) for some line bundle \(L_{Z}\) on \(Z\), \(M_{X}=f^{*}M_{Z}\) is nef._ In [14, proposition 3.2], the author proves a weak canonical bundle formula for fibrations of relative dimension \(1\) with smooth log canonical fibres. This result, together with the above corollary 8.7, completes the picture in dimension \(3\) for fibrations with log canonical general fibres over algebraically closed fields of characteristic \(p>5\). ## Acknowledgements I would like to thank my PhD advisor Paolo Cascini for suggesting the problem, for his guidance throughout and for his indispensable support. I would like to thank Fabio Bernasconi, Iacopo Brivio, Calum Spicer and Jakub Witaszek for very helpful discussions and suggestions. I would like to thank my friends at Imperial College London for helpful discussions, especially Federico Bongiorno, Nick Manrique, Martin Ortiz, Stefania Vassiliadis, Pascale Voegtli and Aurore Boitrel from University of Angers. This work was supported by the Engineering and Physical Sciences Research Council [EP/S021590/1], the EPSRC Centre for Doctoral Training in Geometry and Number Theory (The London School of Geometry and Number Theory), University College London. I would also like to thank the NCTS, National Centre for Theoretical Sciences Mathematics division, for their support during my stay in Taipei. ## Notations * A **variety** over a field \(k\) is an integral scheme which is separated and of finite type over \(k\). * Given a polynomial \(p(x_{1},...,x_{n})\), \(V(p)\) denotes the variety defined by its zero locus. * The field of functions on a variety \(X\) will be denoted by \(K(X)\). * The canonical divisor of a normal variety \(X\) is denoted by \(K_{X}\). * A **fibration**\(f\colon X\to Z\) is a proper surjective morphism between normal projective varieties such that \(f_{*}\mathcal{O}_{X}=\mathcal{O}_{Z}\). * Given a fibration \(f\colon X\to Z\), we say that a property \(\mathcal{P}\) holds for the general fibre if there exists a non-empty open subset \(U\subseteq Z\) such that \(\mathcal{P}\) holds for all fibres over points in \(U\). * Given a fibration \(f\colon X\to Z\), a prime divisor \(D\subseteq X\) is called **horizontal** if \(f|_{D}\colon D\to Z\) is dominant, it is called **vertical** otherwise. Any \(\mathbb{Q}\)-divisor \(D\) can be decomposed as \(D=D^{h}+D^{v}\), where \(D^{h}\) is the sum of its horizontal components and \(D^{v}\) is the sum of the vertical ones. * A curve \(\xi\subseteq X\) is called **horizontal** if \(f(\xi)\) has dimension \(1\), **vertical** otherwise. * Given an equidimensional fibration \(f\colon X\to Z\) and a \(\mathbb{Q}\)-divisor \(D\) on \(Z\), we denote by \(f^{-1}(D)\) the divisor on \(X\) with support equal to \(f^{-1}(\operatorname{Supp}(D))\) and coefficients all equal to \(1\). 
* The geometric generic point of a variety \(X\) is the point corresponding to the algebraic closure of the function field of \(X\). * A **log pair**\((X,B)\) is a variety equipped with a \(\mathbb{Q}\)-divisor \(B\), such that \(K_{X}+B\) is \(\mathbb{Q}\)-Cartier. * For the definition of kawamata log terminal (klt), log canonical (lc), purely log terminal (plt) and divisorially log terminal (dlt) singularities, we refer to [13, definition 2.34]. * A non-klt (resp. log canonical, resp. non-log canonical) centre of a pair \((X,B)\) is a subvariety \(W\subset X\) such that there exists an exceptional divisor with discrepancy \(\leq-1\) (resp. \(=-1\), resp. \(<-1\)) and whose image in \(X\) is \(W\). * When we say **assume the LMMP in dimension \(n\)**, we assume that: 1. we can run the minimal model program for log canonical pairs of dimension \(n\) and it terminates; 2. inversion of adjunction holds for log canonical pairs (from dimension \(n-1\) to dimension \(n\)). _Remark 0.4_.: If \(X\) is a threefold over a perfect field of characteristic \(p>5\), we know the existence of log resolutions and that we can run the LMMP for log canonical pairs (see [1], [10], [11] and [12]). Moreover, we know that inversion of adjunction holds by [11, Corollary 10.1]. ## 1 Gglc condition In characteristic \(0\), to get a well-defined notion of discriminant divisor, it is enough to ask for the generic fibre to be log canonical (the "GLC" condition defined in [1]). Since generic smoothness does not hold in positive characteristic, however, we need to ask for a stronger condition, namely that the geometric generic fibre is log canonical. **Definition 1.1**.: We denote by \((X/Z,B)\) the data of a fibration between normal projective varieties \(f\colon X\to Z\) and a log pair \((X,B)\) with \(B\geq 0\) an effective \(\mathbb{Q}\)-divisor. We say \((X/Z,B)\) is **generically log canonical** or **GLC** if \(Z\) is irreducible and the pair \((X_{\eta},B_{\eta})\) is log canonical, where \(\eta\) is the generic point of \(Z\) and \(B_{\eta}\) is defined by restriction. We say \((X/Z,B)\) is **geometrically generically log canonical** or **GCLC** if \(Z\) is irreducible and the pair \((X_{\tilde{\eta}}^{\nu},B_{\tilde{\eta}}^{\nu})\) is log canonical, where \(X_{\tilde{\eta}}^{\nu}\) is the normalisation of the geometric generic fibre and \(B_{\tilde{\eta}}^{\nu}\) is the divisor on it defined by restriction. _Remark 1.2_.: While the generic fibre of a fibration \(f\colon X\to Z\) reflects properties of \(X\), the geometric generic fibre is strictly related to the general fibres of \(f\). We will see that the GGLC condition is equivalent to asking that the general fibre is log canonical. _Remark 1.3_.: If we only assume the GLC condition, the canonical bundle formula fails to hold, as shown in [22, example 3.5]. **Lemma 1.4**.: _Let \(f\colon X\to Z\) be a fibration between normal varieties over a perfect field of characteristic \(p>2\). Assume that \(B\) is a \(\mathbb{Q}\)-divisor on \(X\) such that \(K_{X}+B\) is \(\mathbb{Q}\)-Cartier. Suppose that the geometric generic fibre \(X_{\bar{\eta}}\) is integral and let \(X_{\bar{\eta}}{}^{\nu}\) be its normalisation. Let \(B_{\bar{\eta}}{}^{\nu}\) be the boundary divisor on \(X_{\bar{\eta}}{}^{\nu}\) defined by restriction. If \((X_{\bar{\eta}}{}^{\nu},B_{\bar{\eta}}{}^{\nu})\) is log canonical, then \(X_{\bar{\eta}}\) is normal. In particular, the geometric generic fibre of a GGLC pair is normal. 
1_ Footnote 1: This lemma was suggested to me by Fabio Bernasconi, with the following proof. The pair \((X_{\bar{\eta}},B_{\bar{\eta}})\) is slc and the normalisation of the geometric generic fibre is a universal homeomorphism by [17, lemma 2.2]. Thus, nodal singularities cannot appear. Note that this proof also needs the characteristic to be \(>2\). Proof.: Note that \(X_{\bar{\eta}}\) is demi-normal. In fact, the \(S_{2}\) property is invariant by flat base change. Moreover, if \(X_{\bar{\eta}}\) had singularities worse than nodal in codimension 1, then \(B_{\bar{\eta}}{}^{\nu}\) would have coefficients strictly bigger than 1 coming from the conductor over those singularities, contradicting the log canonical assumption. Thus, the divisor \(B_{\bar{\eta}}{}^{\nu}\) can be written as \(\bar{C}+\bar{B}\), where \(\bar{C}\) is the conductor of the normalisation (see [16, bullet point 5.7]). By [14, theorem 1.2], the coefficients of \(\bar{C}\) are divisible by \(p-1\). When \(p>2\), this contradicts the assumption that \((X_{\bar{\eta}}{}^{\nu},B_{\bar{\eta}}{}^{\nu})\) is log canonical. Hence, \(\bar{C}=0\) and the normalisation of the geometric generic fibre is an isomorphism. qed In characteristic 0, given a fibration, it is automatic that the generic fibre is geometrically reduced. In positive characteristic, this is no longer true and it is equivalent to asking that the fibration is "separable". **Definition 1.5**.: Let \(K\subseteq L\) be a field extension. It is called **separable** if there exists a transcendence basis \(t_{1},...,t_{\ell}\) such that \(L\) is a finite separable extension of \(K(t_{1},...,t_{\ell})\). Let \(f\colon X\to Z\) be a morphism between integral varieties. We say that \(f\) is **separable** if the field extension \(K(Z)\subseteq K(X)\) is separable; otherwise, \(f\) is called **inseparable**. **Proposition 1.6**.: _[_15_, proposition 2.15, ch.3]_ _Let \(K\) be a field. A variety \(f\colon X\to\operatorname{Spec}K\) is geometrically reduced if and only if \(f\) is separable._ _Remark 1.7_.: In particular, given a fibration \(f\colon X\to Z\), if there exists a \(\mathbb{Q}\)-divisor \(B\geq 0\) on \(X\) such that \((X/Z,B)\) is GGLC, then \(f\) must be separable. When \(Z\) is a curve, a theorem of MacLane [14] allows us to compare the notion of separability of a surjective morphism with its Stein factorisation. In this case it is therefore easier to control this condition. We write here a version of it restated in geometric terms. **Theorem 1.8**.: _[_16_, corollary 2.5]_ _Let \(f\colon X\to Z\) be a fibration onto a curve \(Z\) such that \(f_{*}\mathcal{O}_{X}=\mathcal{O}_{Z}\). Then \(f\) is separable._ **Definition 1.9**.: Given a variety \(X\) over a perfect field of characteristic \(p>0\), we can consider the **Frobenius morphism**\(F\colon X\to X\) (resp. its iterates \(F^{e}\colon X\to X\), with \(e\in\mathbb{N}\)). It is defined as the identity on points and as the \(p^{\text{th}}\) power (resp. \((p^{e})^{\text{th}}\) power) on the structure sheaf. **Definition 1.10**.: Let \(f\colon X\to Z\) be a surjective morphism onto a curve \(Z\) and let \(\varphi\circ g\) be its Stein factorisation, where \(\varphi\) is finite and \(g\) is a fibration. Then \(\varphi=F^{e}\circ\varphi^{\prime}\) for some \(e\in\mathbb{N}\) and \(\varphi^{\prime}\) finite separable. We say that \(p^{e}\) is the **purely inseparable degree** of \(f\). **Example 1.11**.: 2 In general, the condition \(f_{*}\mathcal{O}_{X}=\mathcal{O}_{Z}\) is not enough to have separability. 
Consider, for example, the threefold \(X=V(sx^{p}+ty^{p}+z^{p})\subset\mathbb{P}^{2}_{[x:y:z]}\times\mathbb{A}^{2}_{(s,t)}\) over a field of characteristic \(p>0\). Let \(f\) be the fibration induced by the natural projection onto \(Z:=\mathbb{A}^{2}_{(s,t)}\). Then \(f\) satisfies \(f_{*}\mathcal{O}_{X}=\mathcal{O}_{Z}\), but it is not separable. Footnote 2: This example was suggested to me by Fabio Bernasconi and Iacopo Brivio ## 2. Frobenius base change In this section we outline a construction that will be fundamental in the sequel and set some notation. () Construction Consider the following diagram: Where: * \(Y^{(e)}\) is the normalisation of the (reduction of) \(X^{(e)}:=X\times_{F^{e}}Z\) and \(g_{e}\) is the induced map; * \(\alpha_{e}\) is the \(e^{\text{th}}\)-power of the relative Frobenius of \(X\) over \(Z\), so that \(\beta_{e}\circ\alpha_{e}=F^{e}\). Given a \(\mathbb{Q}\)-divisor \(D\) on \(X\), denote by \(D_{e}:=\beta_{e}^{*}D^{h}+\frac{1}{p^{e}}\beta_{e}^{*}D^{v}\). For our applications, we need to ensure that \(X^{(e)}\) is integral. This always holds when the fibration \(f\) is flat and separable. In this case, we can also see that the conductor of the normalisation is vertical. **Lemma 2.1**.: _In the notations of, if \(f\colon X\to Z\) is a flat separable fibration, then \(X\times_{F^{e}}Z\) is integral. In particular, its normalisation \(Y^{(e)}\) is well-defined._ Proof.: By [22, remark 2.5], if in \(X\times_{F^{e}}Z\) there are some non-reduced components, they must dominate \(Z\) since \(f\) is flat. Thus, we can check reducedness at \(\eta\), the generic point of \(Z\). By proposition 1.6, \(X_{\eta}\) is geometrically reduced. Moreover, since purely inseparable morphisms are homeomorphisms, \(X\times_{F^{e}}Z\) is irreducible like \(X\). All in all, \(X\times_{F^{e}}Z\) is integral. qed **Proposition 2.2**.: _[_14_, proposition 2.1 and lemma 2.2]_ _Let \(f\colon X\to Z\) be a morphism of varieties over a perfect field. Then the geometric generic fibre is normal (resp. regular, reduced) if and only if a general fibre is normal (resp. regular, reduced). Moreover, let \(Y\to X\) be the normalisation of \(X\). If for a general point \(z\in Z\), \(Y_{z}\) is normal, then \(Y_{z}\) is the normalisation of \(X_{z}\)._ **Lemma 2.3**.: _Let \(f\colon X\to Z\) be a separable fibration and let \(\varphi\colon Z^{\prime}\to Z\) be a finite map. Let \(Y^{\prime}\) be the normalisation of the main component of the fibre product \(X^{\prime}:=X\times_{Z}Z^{\prime}\). If \(X_{\bar{\eta}}\) is normal, then the conductor of \(Y^{\prime}\to X^{\prime}\) is vertical. In particular, \(Y^{\prime}_{\bar{\eta}}=X_{\bar{\eta}}\)._ Proof.: Since \(X_{\bar{\eta}}\) is normal, by proposition 2.2, the general fibre of \(f\) is normal. Let \(f^{\prime}\colon X^{\prime}\to Z^{\prime}\) and \(g^{\prime}\colon Y^{\prime}\to Z^{\prime}\). Note that the fibres of \(f^{\prime}\) are isomorphic to the fibres of \(f\), thus they are normal. Moreover, the general fibre of \(g^{\prime}\) is the normalisation of the general fibre of \(f^{\prime}\). But this implies that, over the general fibre, \(Y^{\prime}\to X^{\prime}\) is an isomorphism. ## 3. Foliations The moduli divisor of a fibration \(f\colon X\to Z\) is strictly related to the canonical divisor of the foliation induced by \(f\). The latter can be compared to the canonical divisor of \(Y^{(e)}\) using the construction. 
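Since the construction will be used repeatedly from now on, we recall its commutative diagram here. The figure itself did not survive extraction; the sketch below is our reconstruction from the bullet points of Section 2 and introduces no new maps:

\[\begin{array}{ccccc}X&\xrightarrow{\ \alpha_{e}\ }&Y^{(e)}&\xrightarrow{\ \beta_{e}\ }&X\\ &{\scriptstyle f}\searrow&\Big\downarrow{\scriptstyle g_{e}}&&\Big\downarrow{\scriptstyle f}\\ &&Z&\xrightarrow{\ F^{e}\ }&Z\end{array}\qquad\beta_{e}\circ\alpha_{e}=F^{e},\qquad g_{e}\circ\alpha_{e}=f,\qquad f\circ\beta_{e}=F^{e}\circ g_{e}.\]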
Positivity properties of the moduli divisor of \(f\) are reflected by the canonical divisor of \(Y^{(e)}\), at least for \(e\) big enough. **Definition 3.1**.: Let \(X\) be a normal variety over a perfect field of characteristic \(p>0\). A **foliation** on \(X\) is a subsheaf of the tangent sheaf \(\mathcal{F}\subseteq T_{X}\) which is saturated, closed under Lie brackets and under \(p^{\mathrm{th}}\)-powers. Let \(\omega_{\mathcal{F}}:=\Lambda^{\mathrm{top}}\mathcal{F}^{*}\), the top exterior power of the dual of \(\mathcal{F}\). The canonical divisor \(K_{\mathcal{F}}\) of a foliation is any Weil divisor such that \(\mathcal{O}_{X}(K_{\mathcal{F}})=\omega_{\mathcal{F}}\). Let \(f\colon X\to Z\) be a fibration. Then \(f\) induces a foliation by taking the saturation of the kernel of \(df\). If \(f\) is a separable, equidimensional fibration, the canonical bundle of the foliation \(\mathcal{F}\) induced by \(f\) can be described as \[K_{\mathcal{F}}=K_{X}-f^{*}K_{Z}-R(f),\qquad R(f)=\sum(f^{*}P-f^{-1}(P))=\sum( \ell_{D}-1)D.\] The first sum above is taken over all prime divisors \(P\) of \(Z\), while the second sum is taken over all vertical divisors \(D\) on \(X\) and \(\ell_{D}\) is their multiplicity with respect to \(f\). In the next results, we use the same notations as in construction. **Proposition 3.2**.: _[_2_, proposition 9.1.2.3]_ _Let \(\mathcal{F}\) be the foliation induced by a separable flat fibration \(f\colon X\to Z\) over a perfect field of characteristic \(p>0\). Then, for \(e=1\),_ \[\alpha_{1}^{*}K_{Y^{(1)}}=(p-1)K_{\mathcal{F}}+K_{X}.\] **Corollary 3.3**.: _In the same setting,_ \[\alpha_{e}^{*}K_{Y^{(e)}}=(p^{e}-1)K_{\mathcal{F}}+K_{X}\quad\text{and}\quad \alpha_{e}^{*}K_{\mathcal{G}_{e}}=p^{e}K_{\mathcal{F}},\] _where \(\mathcal{G}_{e}\) is the foliation induced by \(g_{e}\)._ Proof.: We prove the statement by induction on \(e\), the case \(e=1\) being proposition 3.2. For \(e>1\), the diagram in, can be factorised in the following way: Let \(\mathcal{G}_{e-1}\) be the foliation induced by \(g_{e-1}\). By proposition 3.2 applied to the lower part of the diagram above, \[\alpha_{e}^{*}K_{Y^{(e)}}=\alpha_{e-1}^{*}\delta^{*}K_{Y^{(e)}}=(p-1)\alpha_{e-1 }^{*}K_{\mathcal{G}_{e-1}}+\alpha_{e-1}^{*}K_{Y^{(e-1)}}.\] Now, write \(K_{\mathcal{G}_{e-1}}=K_{Y^{(e-1)}}-g_{e-1}^{*}K_{Z}-R(g_{e-1})\). _Claim 3.4_.: For any \(e\geq 1\), \(\alpha_{e}^{*}R(g_{e})=R(f)\). Proof.: Let \(D\subseteq Z\) be a prime divisor. Possibly restricting \(Z\), we can assume that \(D\) is Cartier. Denote \(g_{e}^{*}D=\sum_{i=1}^{m}\ell_{i}D_{i}\), where the \(D_{i}\)s are prime divisors and \(\ell_{i}\) is their multiplicity in the pull-back. Then, \(\alpha_{e}^{*}g_{e}^{*}D=f^{*}D=\sum_{i=1}^{m}\ell_{i}\alpha_{e}^{*}D_{i}\). Since each \(D_{i}\) is vertical, \(\alpha_{e}^{*}D_{i}\) is still an irreducible and reduced divisor. Therefore the multiplicities of the prime divisors \(\alpha_{e}^{*}D_{i}\) in \(f^{*}D\) are still \(\ell_{i}\). Hence, \(R(f)=\alpha_{e}^{*}R(g_{e})\). qed By induction, we know that \[\alpha_{e-1}^{*}K_{Y^{(e-1)}}=(p^{e-1}-1)K_{\mathcal{F}}+K_{X}.\] Bringing everything together, we get the result. qed **Corollary 3.5**.: _Let \(f\colon X\to Z\) be a flat fibration and \((X/Z,B)\) a GCLC pair associated with it over a perfect field of characteristic \(p>2\). Keep the notations of construction. Then,_ \[(K_{Y^{(e)}}+B_{e})|_{X_{\bar{\eta}}}=K_{X_{\bar{\eta}}}+B_{\bar{\eta}},\] _where \(B_{e}:=\beta_{e}^{*}B^{h}+\frac{1}{p^{e}}\beta_{e}^{*}B^{v}\). 
In particular, \((Y^{(e)}/Z,B_{e})\) is a GGLC pair._ Proof.: By the above corollary 3.3: \[\alpha_{e}^{*}(K_{Y^{(e)}}+B_{e})=p^{e}(K_{X}+B)+D,\] where \(D\) is a vertical divisor. By lemma 2.3, \(Y_{\bar{\eta}}^{(e)}=X_{\bar{\eta}}\) and \(\alpha_{e}|_{X\bar{\eta}}\) is the \(e^{\text{th}}\) power of the Frobenius morphism. Thus, \[F^{e*}(K_{Y^{(e)}}+B_{e})|_{X_{\bar{\eta}}}=p^{e}(K_{X_{\bar{\eta}}}+B_{\bar{ \eta}}).\] qed **Lemma 3.6**.: _Let \(f\colon X\to Z\) be a fibration onto a curve \(Z\) and \(D\subset X\) an horizontal prime divisor. Suppose that \(f|_{D}\) has purely inseparable degree \(p^{d}\) and let \(D^{(e)}\) be the reduction of the base change of \(D\) inside \(Y^{(e)}\). Let \(\bar{\eta}\) be the geometric generic point of \(Z\), then_ \[D_{\bar{\eta}}=\begin{cases}p^{e}D_{\bar{\eta}}^{(e)}\text{ if }e\leq d;\\ p^{d}D_{\bar{\eta}}^{(e)}\text{ otherwise}\end{cases}\quad\text{ and }\quad\beta_{e}^{*}D=\begin{cases}p^{e}D^{(e)}\text{ if }e\leq d;\\ p^{d}D^{(e)}\text{ otherwise}.\end{cases}\] _Moreover, \(D_{\bar{\eta}}^{(d)}\) is reduced._ _Proof._ Let \(g_{e}|_{D}\colon D^{(e)}\to Z\) be the induced map on \(D^{(e)}\). By the universal property of the fibre product and since \(Z\) is a curve, \(g_{d}|_{D}\) is separable, thus \(D^{(d)}_{\tilde{\eta}}\) is reduced. We will prove the lemma by induction on \(d\). If \(d=0\), \(g_{e}|_{D}\) is separable for each \(e\in\mathbb{N}\), so \(D^{(e)}_{\tilde{\eta}}\) is always reduced and it must coincide with \(D_{\tilde{\eta}}\). If \(d>0\), consider the natural maps \(Y^{(d)}\to Y^{(1)}\to X\). By the universal properties of the fibre product and since \(Z\) is a curve, \(D\) and \(D^{(e)}\) are isomorphic for all \(e\leq d\) and \(f|_{D}=F^{e}\circ g_{e}|_{D}\), thus \(g_{e}|_{D}\) has purely inseparable degree \(p^{d-e}\) for \(e\leq d\). In particular, \(g_{d}|_{D^{(d)}}\) is separable and \(g_{1}|_{D^{(1)}}\) has purely inseparable degree \(p^{d-1}\). The map \(Y^{(1)}\to X\) is purely inseparable and finite of degree \(p\) and \(D^{(1)}\to D\) is an isomorphism. Thus \(\beta_{1}^{*}D=pD^{(1)}\). Then, we conclude by the inductive assumption. qed _Remark 3.7_.: Let \(f\colon X\to Z\) be a fibration onto a curve \(Z\) and \((X/Z,B)\) a GGLC pair associated with it. Keep the notations of construction. Let \(D\) be an horizontal prime divisor contained in the support of \(B\). Assume that \(f|_{D}\) has purely inseparable degree \(p^{d}\). Let \(c\) be the coefficient of \(D\) in \(B\). By the above lemma 3.6, \(D^{(d)}_{\tilde{\eta}}\) is reduced and \(D_{\tilde{\eta}}=p^{d}D^{(d)}_{\tilde{\eta}}\). Thus, \(B_{\tilde{\eta}}=cp^{d}D^{(d)}_{\tilde{\eta}}+B^{\prime}\), where \(B^{\prime}\) has support not containing \(D^{(d)}_{\tilde{\eta}}\). Since \((X_{\tilde{\eta}},B_{\tilde{\eta}})\) is log canonical, \(c\leq\frac{1}{p^{d}}\). Studying the purely inseparable degree of the map \(f\) on the horizontal components of \(B\), gives us a description of the horizontal part of \(B_{e}\) for any \(e\gg 0\). ## 4. Discriminant and moduli parts The aim of the canonical bundle formula is to relate the canonical divisors of the source and the base of a fibration. To do so, we define a divisor that encodes the singularities of the fibration, the "discriminat part" and a second divisor which captures the variation of the fibration, the "moduli part". **Lemma 4.1**.: _Let \(f\colon X\to Z\) be a separable fibration over a perfect field of characteristic \(p>0\) and let \(\bar{\eta}\) be the geometric generic point of \(Z\). 
Assume that \(X_{\bar{\eta}}\) is normal and let \(\bar{\sigma}\colon\bar{Y}\to X_{\bar{\eta}}\) be a birational morphism between normal projective varieties. Then, there exist \(U\subseteq Z\) open dense subset, a finite map \(\varphi\colon U^{\prime}\to U\) and a birational morphism \(\sigma\colon Y\to Y^{\prime}\), where:_ * \(Y^{\prime}\) _is the normalisation of the main component of_ \(f^{-1}(U)\times_{U}U^{\prime}\)_;_ * \(Y_{\bar{\eta}}=\bar{Y}\)_._ _Moreover, if \(Z\) is a regular curve, we can take \(U=Z\) and \(\sigma\) proper birational morphism between projective varieties._ _Proof._ There exists \(L\), finite extension of \(K(Z)\) such that \(\bar{Y}\) and \(\bar{\sigma}\) are defined over \(L\). By "spreading out techniques" (see for example [13, proof of corollary 1.10] and [11, lemma 2.25]), there exist \(U\subseteq Z\) dense open subset and a finite map \(\varphi\colon U^{\prime}\to U\) such that, if \(Y^{\prime}_{U}\) is the normalisation of the main component of \(f^{-1}(U)\times_{U}U^{\prime}\), there exists a birational map \(\sigma\colon Y_{U}\to Y^{\prime}_{U}\) with \(Y_{U,\bar{\eta}}=\bar{Y}\). If \(Z\) is a curve, there exists a unique regular curve \(Z^{\prime}\) with a finite map \(Z^{\prime}\to Z\) such that \(K(Z^{\prime})=L\). Once we have found maps as above over a dense open subset of \(Z^{\prime}\), we can extend them to proper maps between normal projective varieties. In particular, we get \(\sigma\colon Y\to Y^{\prime}\), where \(Y^{\prime}\) is the normalisation of the main component of \(X\times_{Z}Z^{\prime}\). qed **Proposition 4.2**.: _Assume the existence of log resolutions of singularities in dimension \(n\). Let \(X\) be a projective variety of dimension \(n\). A flat fibration \(f\colon X\to Z\) over a perfect field of characteristic \(p>2\) with a pair \((X,B)\) is GGLC if and only if:_ 1. _the general fibre_ \(\Phi\) _of_ \(f\) _is normal and_ 2. _the pair_ \((\Phi,B_{\Phi})\) _is log canonical, where_ \(B_{\Phi}\) _is the divisor obtained via adjunction from_ \((X,B)\)_._ Proof.: First of all note that, by proposition 2.2, condition \((i)\) is equivalent to asking that \(X_{\bar{\eta}}\) is normal, where \(\bar{\eta}\) is the geometric generic point of \(Z\). Let \(\bar{Y}\) be a log resolution of \((X_{\bar{\eta}}^{\nu},B_{\bar{\eta}}^{\nu})\). By the above lemma 4.1, up to shrinking \(Z\), there exist a finite map \(\varphi\colon Z^{\prime}\to Z\) and a birational map \(\sigma\colon Y\to Y^{\prime}\), where \(Y^{\prime}\) is the normalisation of the main component of \(X^{\prime}:=X\times_{Z}Z^{\prime}\) and \(Y_{\bar{\eta}}=\bar{Y}\). By proposition 2.2, the general fibre of \(Y\to Z\) is a log resolution of the general fibre of \(Y^{\prime}\to Z\). Up to taking a higher cover, we can suppose \(\varphi=F^{e}\circ\psi\), where \(\psi\) is a finite separable map and \(e\in\mathbb{N}\). Keep the same notations as in construction. Consider the diagram: Let \(B_{e}:=\beta_{e}^{*}B^{h}+\frac{1}{p^{k}}\beta_{e}^{*}B^{v}\) and define \(B^{\prime}\) on \(Y^{\prime}\) by log pull-back, so that \(K_{X^{\prime}}+B^{\prime}=\beta^{\prime*}(K_{Y^{(e)}}+B_{e})\). In this way, by corollary 3.5, \((K_{X^{\prime}}+B^{\prime})|_{X_{\bar{\eta}}}=K_{X_{\bar{\eta}}}+B_{\bar{\eta}}\). Let \(z\in Z\) be a general point and let \(\Phi:=f^{-1}(z)\). Let \(z^{\prime}\in Z^{\prime}\) be a point mapping to \(z\) and let \(\Phi^{\prime}:=f^{\prime-1}(z^{\prime})\). 
By lemma 2.3, the normalisation does not affect the general fibre, thus the pairs \((\Phi,B_{\Phi})\) and \((\Phi^{\prime},B^{\prime}_{\Phi^{\prime}})\) defined via adjunction from \((X,B)\) and \((X^{\prime},B^{\prime})\) respectively, coincide. So, the former is log canonical if and only if the latter is. Now, let \(E\) be an horizontal exceptional divisor of \(\sigma\). Since \(\sigma^{-1}(\Phi^{\prime})\) is a log resolution of \((\Phi^{\prime},B_{\Phi^{\prime}})\), the restriction of \(E\) to \(\sigma^{-1}(\Phi^{\prime})\) is still an irreducible exceptional divisor, call it \(E_{\Phi^{\prime}}\). The same holds true for the restriction of \(E\) to the geometric generic fibre, say \(E_{\bar{\eta}}\). Then, by adjunction, we can see that the discrepancies of \(E,E_{\Phi^{\prime}}\) and \(E_{\bar{\eta}}\) all coincide. qed **Definition 4.3**.: Let \(f\colon X\to Z\) be a surjective proper morphism. For each divisor \(P\subseteq Z\), define \[\gamma_{P}:=\sup\{t\in\mathbb{R}\,|\,(X,B+tf^{*}P)\,\text{is log canonical at the generic point of $P$}\}.\] **Definition 4.4**.: Let \(f\colon X\to Z\) be a flat separable fibration and \((X/Z,B)\) a GGLC pair associated with it over a perfect field of characteristic \(p>2\). The **discriminant divisor** of the fibration \(f\) is \[B_{Z}:=\sum_{P\subseteq Z}(1-\gamma_{P})P,\] where the sum is taken over all prime divisors of \(Z\). **Proposition 4.5**.: _Assume the existence of log resolutions of singularities in dimension \(n\). Let \(X\) be a projective variety of dimension \(n\). Let \(f\colon X\to Z\) be a flat fibration and \((X/Z,B)\) a GGLC pair associated with it over a perfect field of characteristic \(p>2\). Then, the discriminant divisor \(B_{Z}\) is well-defined._ Proof.: By proposition 4.2, the general fibre \(\Phi\) is normal and the pair defined by adjunction \((\Phi,B_{\Phi})\) is log canonical. Let \(U\subseteq Z\) be the open (non-empty) subset of \(Z\) such that \((\Phi_{z},B_{\Phi_{z}})\) is log canonical for all \(z\in U\), where \(\Phi_{z}=f^{-1}(z)\). Let \(P\) be a prime divisor in \(Z\) which is not contained in \(Z\setminus U\) and let \(\Phi_{P}\) be the fibre over \(P\). We claim that \(\gamma_{P}=0\). If this was not the case, there would exist a non-log canonical place of \((\Phi_{P},B_{\Phi_{P}})\), say \(E\), such that its centre contains the generic point of \(P\). In particular, by adjunction this would imply that, for \(z\in P\) general, there exists a non-log canonical place of \((\Phi_{z},B_{\Phi_{z}})\), contradiction. qed **Definition 4.6**.: Let \(f\colon X\to Z\) be a flat separable fibration and \((X/Z,B)\) a GGLC pair associated with it over a perfect field of characteristic \(p>2\). Let \(B_{Z}\) be the discriminant of \(f\). Then, the **moduli part** of \(f\) is \[M_{X}:=K_{X}+B-f^{*}(K_{Z}+B_{Z}).\] Note that it is defined only up to linear equivalence. ## 5 Property \((*)\) Likewise the approach to the canonical bundle formula in characteristic \(0\), the idea is to find a class of fibrations for which it is easier to get positivity properties of the moduli part. Then, we want to reduce to this case. The first idea in the field was to use some standard normal crossing assumptions, in this way it is possible to apply variation of Hodge structures to prove nefness of the moduli part. The authors of [1] introduce instead the notion of property \((*)\). When a fibration satisfies this property, the moduli part coincides with the canonical divisor of the foliation associated with the fibration. 
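Before continuing, a toy illustration of Definitions 4.3 and 4.4 may be helpful (this example is ours, not from the paper). Suppose \(B=0\) and that, over a prime divisor \(P\subseteq Z\), the fibre is \(f^{*}P=mF\) with \(F\) a prime divisor and \(X\) smooth at the generic point of \(F\). Then \((X,B+tf^{*}P)=(X,tmF)\) is log canonical at the generic point of \(P\) exactly when \(tm\leq 1\), so

\[\gamma_{P}=\frac{1}{m}\qquad\text{and the coefficient of }P\text{ in }B_{Z}\text{ is }1-\gamma_{P}=1-\frac{1}{m},\]

recovering the familiar multiple-fibre coefficient.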
At this point, instead of using variation of Hodge structures, we can use the birational geometry of the foliation to get the desired positivity. **Definition 5.1**.: Let \(f\colon X\to Z\) be a fibration and \((X/Z,B)\) a GLC pair on it. We say it satisfies **property \((*)\)** if: * there exists a divisor \(\Sigma_{Z}\) on \(Z\) such that \((Z,\Sigma_{Z})\) is log smooth and \(B^{v}=f^{-1}(\Sigma_{Z})\); * for any closed point \(z\in Z\) and any divisor \(\Sigma\geq\Sigma_{Z}\) such that \((Z,\Sigma)\) is log smooth around \(z\), \((X,B+f^{*}(\Sigma-\Sigma_{Z}))\) is log canonical around \(f^{-1}(z)\). _Remark 5.2_.: Recall that a pair \((X,\Delta)\) is log smooth in positive characteristic if \(X\) is _regular_ and \(\Delta\) has simple normal crossing support. In the next propositions, we collect some useful features that property \((*)\) pairs enjoy. The proofs in characteristic \(0\) go through in the exact same way also in positive characteristic. **Proposition 5.3**.: _[_1_, lemma 2.14 and proposition 2.18]_ _Let \(f\colon X\to Z\) be a flat fibration and \((X/Z,B)\) a GCLC pair associated with it over a perfect field of characteristic \(p>2\) or a GLC pair over an algebraically closed field of characteristic \(0\). Assume that \((X/Z,B)\) satisfies property \((*)\). Then the following properties hold._ * _The pair_ \((X,B)\) _is log canonical and the discriminant divisor_ \(B_{Z}\) _coincides with_ \(\Sigma_{Z}\)_._ * _Suppose_ \(B\geq 0\) _and let_ \(\varphi\colon X\dasharrow Y\) _be a sequence of steps of the_ \((K_{X}+B)\)_-MMP over_ \(Z\)_. Let_ \(C:=\varphi_{*}B\) _and_ \(g\colon Y\to Z\) _the induced fibration. Then,_ \((Y/Z,C)\) _satisfies property_ \((*)\) _and for any closed point_ \(z\in Z\)_, the map_ \(\varphi^{-1}\) _is an isomorphism along the generic point of any irreducible component of_ \(g^{-1}(z)\)_._ **Proposition 5.4**.: _[_1_, proposition 3.6]_ _Let \(f\colon X\to Z\) be a flat fibration and \((X/Z,B)\) a GCLC pair associated with it over a perfect field of characteristic \(p>2\) or a GLC pair over an algebraically closed field of characteristic \(0\). Assume that \((X/Z,B)\) satisfies property \((*)\). Let \(\mathcal{F}\) be the foliation induced by it and \(\Delta:=B^{h}\). Let \(M_{X}\) be the moduli part of \((X/Z,B)\). Then,_ 1. \(K_{\mathcal{F}}+\Delta\sim_{\mathbb{Q}}M_{X}\) _and_ 2. \(K_{\mathcal{F}}+\Delta\sim_{\mathbb{Q},Z}K_{X}+B\)_._ _Remark 5.5_.: In particular, if \(f\colon X\to Z\) is a flat fibration and \((X/Z,B)\) a GGLC pair on it satisfying property \((*)\) with \(B\) vertical: \[\alpha_{e}^{*}K_{Y^{(e)}}=(p^{e}-1)M_{X}+K_{X}.\] Given any GLC pair in characteristic \(0\), we can construct a birationally equivalent model that satisfies property \((*)\) thanks to the existence of toroidal modifications as proven in [1]. In characteristic \(p>0\) however, we cannot always use their construction. In fact, in one of the steps, they consider a quotient by the action of a group and that does not have good enough properties if the order of the group is divisible by \(p\) ([1, remark 0.3.2]). Nonetheless, when \(Z\) is a curve, we can still find property \((*)\) modifications by using \(\log\) resolutions. Indeed, since in this situation fibres are divisors, we can resolve them. This is one of the key reasons why we restrict our theorems (theorem 8.1, theorem 8.5 and theorem 8.6) to the case when \(Z\) is a curve. 
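For convenience, we spell out how the identity in Remark 5.5 follows from what has already been established; it is recorded here because it is precisely the identity used in the bend and break argument of the next section. If \((X/Z,B)\) satisfies property \((*)\) and \(B\) is vertical, then \(\Delta:=B^{h}=0\), so Proposition 5.4(1) gives \(K_{\mathcal{F}}\sim_{\mathbb{Q}}M_{X}\), and substituting into Corollary 3.3 yields

\[\alpha_{e}^{*}K_{Y^{(e)}}=(p^{e}-1)K_{\mathcal{F}}+K_{X}\sim_{\mathbb{Q}}(p^{e}-1)M_{X}+K_{X}.\]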
**Theorem 5.6** (Existence of property \((*)\) modifications).: _Assume the LMMP and the existence of \(\log\) resolutions in dimension \(n\). Let \(f\colon X\to Z\) be a fibration from a normal projective variety \(X\) of dimension \(n\) onto a curve \(Z\) and \((X/Z,B)\) a GCLC pair associated with it over a perfect field of characteristic \(p>2\). Then, there exists a GCLC pair \((Y/Z,C)\) satisfying property \((*)\) with \(Y\)\(\mathbb{Q}\)-factorial and \((Y,C)\) dlt together with a commutative diagram_ _where \(\mu\) is a birational map. Moreover, there exist a vertical effective exceptional \(\mathbb{Q}\)-divisor \(R\), whose image in \(X\) is supported in the non-log canonical locus of \((X,B)\), and a vertical effective \(\mathbb{Q}\)-divisor \(G\), such that_ \[K_{Y}+C+R=\mu^{*}(K_{X}+B)+G.\] Proof.: First, take a \(\log\) resolution \(\rho\colon X^{\prime}\to X\) of \((X,B)\). Then, write \(K_{X^{\prime}}+D=\rho^{*}(K_{X}+B)+E\), where \(D\) and \(E\) are both effective with no common components and \(E\) is exceptional. If \(D=\sum_{i}a_{i}D_{i}\), define \(B^{\prime}:=\sum_{i}\min\{a_{i},1\}D_{i}\) and \(R:=D-B^{\prime}\), so that \[K_{X^{\prime}}+B^{\prime}+R=\rho^{*}(K_{X}+B)+E.\] Note that \(R\) is supported on the non-log canonical locus of \((X,B)\) (thus it is a vertical divisor). Since \((X_{\bar{\eta}},B_{\bar{\eta}})\) is \(\log\) canonical, so is \((X^{\prime}_{\bar{\eta}},B^{\prime}_{\bar{\eta}})\). In particular, the general fibre of the induced map \(f^{\prime}\colon X^{\prime}\to Z\) is \(\log\) canonical by proposition 4.2. Let \(T\subset Z\) be the finite set of points over which the fibre is not normal or not \(\log\) canonical. Consider now a further \(\log\) resolution \(\sigma\colon X^{*}\to X\) so that the strict transform of \(\operatorname{Supp}(B^{\prime})\cup f^{\prime-1}(T)\), together with the support of the exceptional divisors of \(\sigma\), is simple normal crossing. Define effective \(\mathbb{Q}\)-divisors \(\bar{B},\bar{R},\bar{E}\) in the same way we defined \(B^{\prime},R,E\) respectively, so that \[K_{X^{*}}+\bar{B}+\bar{R}=\sigma^{*}(K_{X}+B)+\bar{E}.\] Let \(g\colon X^{*}\to Z\) be the induced map, then \((X^{*}/Z,\bar{B})\) is GCLC. Let \(\bar{B}_{Z}\) be the discriminant part of \((X^{*}/Z,\bar{B})\) and let \[G^{*}:=\left(\sum_{z\in T\cup\operatorname{Supp}(B_{Z})}g^{-1}(z)\right)-\bar {B}^{v}.\] Finally, let \(B^{*}:=\bar{B}+G^{*}\) and let \(\Sigma_{Z}\) be the discriminant part of \((X^{*}/Z,B^{*})\). Note that \(\Sigma_{Z}=\sum_{z\in T\cup\operatorname{Supp}(B_{Z})}(z)\), where \((z)\) is the divisor corresponding to the point \(z\in Z\). We claim that \((X^{*}/Z,B^{*})\) satisfies property \((*)\). 1. The pair is GGLC because \(G^{*}\) is vertical, so adding it does not affect the singularities at the generic fibre. 2. Since \(Z\) is a curve, \((Z,\Sigma_{Z})\) is log smooth. 3. If \(z\in Z\setminus\operatorname{Supp}(\Sigma_{Z})\), \((\Phi,B^{*}_{\Phi})\) is log canonical, where \(\Phi:=g^{-1}(z)\) is normal and \(B^{*}_{\Phi}\) is defined via adjunction. Thus, by inversion of adjunction \((X^{*},B^{*}+\Phi)\) is log canonical around \(\Phi\). 4. If \(z\in\operatorname{Supp}(\Sigma_{Z})\), by construction, \(\operatorname{Supp}(\bar{B})\cup\Phi\) is simple normal crossing, where \(\Phi:=g^{-1}(z)\). Thus, \((X^{*},B^{*})\) is log canonical (around \(\Phi\)). Now, run a \((K_{X^{*}}+B^{*})\)-MMP over \(X\) and let \(Y\) be the resulting variety, so that it fits in the following diagram: Let \(C:=\varphi_{*}B^{*}\). 
Then, by proposition 5.3, \((Y/Z,C)\) satisfies property \((*)\). Moreover, it is dlt and \(\mathbb{Q}\)-factorial since these properties are preserved under the MMP. We have \(K_{X^{*}}+B^{*}+\bar{R}=\sigma^{*}(K_{X}+B)+\bar{E}+G^{*}\). Define \(R:=\varphi_{*}\bar{R}\), let \(Q\) be the horizontal part of \(\varphi_{*}(\bar{E}+G^{*})\) and \(G\) its vertical part. We claim that \(Q=0\). In fact \[Q+G-R=K_{Y}+C-\mu^{*}(K_{X}+B)\] is \(\mu\)-nef, and the horizontal part of this divisor is exceptional, whence we can conclude by the negativity lemma [13, lemma 3.39] applied to \(Y_{\bar{\eta}}\). All in all, we get: \[K_{Y}+C+R=\mu^{*}(K_{X}+B)+G.\] ## 6 Bend and break for the moduli divisor A crucial step in the proof of nefness of the moduli part in characteristic \(0\) is the cone theorem for foliations [1, theorem 3.9]. If the canonical bundle of a foliation is not nef, we can find rational curves that are "tangent to the foliation". The idea is to find them by applying Miyaoka-Mori bend and break theorem [14] to the variety modulo big enough primes (see [10]). If our setting is in positive characteristic to start with, we do not have the possibility of changing the prime. However, we can base change our fibration \(f\colon X\to Z\) with high enough powers of the Frobenius morphism on \(Z\). In this way, we can translate questions about the moduli part to questions about the canonical divisor of the variety obtained after the base change. In particular, we apply the bend and break theorem there. **Theorem 6.1**.: _[_10_, theorem II.5.8]_ _Let \(X\) be a projective variety over an algebraically closed field and \(\xi\) a smooth projective curve such that \(X\) is smooth along \(\xi\). Assume that \(K_{X}\cdot\xi<0\) and let \(H\) be any nef \(\mathbb{R}\)-divisor. Then, for every \(x\in\xi\) there is a rational curve \(\zeta_{x}\subseteq X\) containing \(x\) such that_ \[H\cdot\zeta_{x}\leq 2\dim X\frac{H\cdot\xi}{-K_{X}\cdot\xi}.\] In the whole section, we use the notations of construction. The next lemma says that, given a general curve \(\xi\) on \(X\), studying the behaviour of \(\alpha_{e}|_{\xi}\) is enough to detect whether it is horizontal or vertical with respect to a separable fibration \(f\colon X\to Z\). **Lemma 6.2**.: _Let \(f\colon X\to Z\) be a flat separable fibration between normal projective varieties over a perfect field of positive characteristic and \(\xi\) a curve passing through a general point of \(X\). Then, \(\alpha_{e}|_{\xi}\) is birational if and only if \(\xi\) is horizontal._ Proof.: If \(\xi\) was vertical, then \(\alpha_{e}|_{\xi}\) would generically coincide with \(F^{e}\). Let \(z:=f(x)\) and \(\Phi:=f^{-1}(z)\). Since \(f\) is separable, for general \(x\), the map \[df_{x}\colon T_{X,x}\to f^{*}T_{Z,z}\] is surjective and its kernel is \(T_{\Phi,x}\). Thus, if \(\xi\) is horizontal, \(T_{\xi,x}\not\subseteq T_{\Phi,x}=\ker(df_{x})\). Consequently, \(f|_{\xi}\) must be separable. Let \(\xi^{\nu}\) be the normalisation of \(\xi\). Consider the diagram: where \(\xi^{\nu}_{e}\) is the normalisation of the image of \(\xi\) in \(Y^{(e)}\). Since \(f|_{\xi^{\nu}}\) is separable and \(\alpha_{e}|_{\xi^{\nu}}\) is purely inseparable, we conclude that the latter must be birational. qed **Example 6.3**.: The second bullet in the lemma above does not hold if we do not ask for \(\xi\) to pass through a general point. Below a counterexample using the construction of Tango-Raynaud surfaces (for more details about them, see [10, exercise 2.15, ch.V] or [11] or [12]). 
Let \(C\) be a curve of genus \(\geq 2\). Denote by \(\mathcal{B}\) the cokernel of the map induced by the Frobenius, \(\mathcal{O}_{C}\to F_{*}\mathcal{O}_{C}\). Thus, for any Cartier divisor \(D\), we have the exact sequence: \[0\to\mathcal{O}_{C}(-D)\to F_{*}(\mathcal{O}_{C}(-pD))\to\mathcal{B}(-D)\to 0.\] It is possible to see that \[H^{0}(C,\mathcal{B}(-D))=\{df|\,f\in K(C),\,(df)\geq pD\}.\] Moreover, if \(D\) is effective, \(H^{0}(C,\mathcal{B}(-D))\) is the kernel of \(F^{*}\colon H^{1}(C,\mathcal{O}_{C}(-D))\to H^{1}(C,\mathcal{O}_{C}(-pD))\). A **Tango-Raynaud curve** is a normal projective curve \(C\) on which we can find a rational map \(r\) such that the divisor defined by \(dr\) is \(pD\sim K_{C}\) for some D effective. This determines a non-zero element of \(H^{1}(C,\mathcal{O}_{C}(-D))\) which is mapped to zero by the Frobenius morphism. Hence \(dr\) determines a (non-split) short exact sequence \[0\to\mathcal{O}_{C}(-D)\to\mathcal{E}\to\mathcal{O}_{C}\to 0,\] which becomes split after applying the Frobenius morphism: \[0\to\mathcal{O}_{C}(-pD)\to F^{*}\mathcal{E}\to\mathcal{O}_{C}\to 0.\] Let \(P:=\mathbb{P}(\mathcal{E})\) be the \(\mathbb{P}^{1}\)-bundle defined by \(\mathcal{E}\), \(P^{\prime}:=\mathbb{P}(F^{*}\mathcal{E})\) the one defined by \(F^{*}\mathcal{E}\), \(f\colon P\to C\) and \(g\colon P^{\prime}\to C\) the structural maps. Thus, we have a commutative diagram: where the lower square is a fibre product diagram and \(\alpha\) is the relative Frobenius. The splitting of the last short exact sequence defines a section of \(g\), \(T^{\prime}\). Let \(T:=\alpha^{*}T^{\prime}\). The morphisms \(f|_{T}\) and \(\alpha|_{T}\) are both the Frobenius morphism, even though \(T\) is horizontal. **Lemma 6.4**.: _Assume the LMMP and the existence of log resolutions in dimension \(n\). Let \(f\colon X\to Z\) be a fibration from a normal projective variety \(X\) of dimension \(n\) and \((X/Z,B)\) a GGLC pair associated with it over a perfect field of characteristic \(p>2\), satisfying property \((*)\). Let \(\Delta\) be the horizontal part of \(B\). Suppose that \(M_{X}\) is \(f\)-nef and that there exists an horizontal curve \(\xi\) such that \(M_{X}\cdot\xi<0\) and \(\Delta\cdot\xi\geq 0\). Then, there exists a birational contraction (i.e. it does not extract divisors) \(\varphi\colon X\dasharrow Y\) over \(Z\), a fibration \(g\colon Y\to Z\) and a \(\mathbb{Q}\)-divisor \(C\) on \(Y\) such that:_ 1. \(C\) _is vertical;_ 2. \((Y/Z,C)\) _is a GGLC pair satisfying property_ \((*)\)_;_ 3. \(M_{Y}\) _is_ \(g\)_-nef and_ 4. _there exists an horizontal curve_ \(\xi_{Y}\) _with_ \(M_{Y}\cdot\xi_{Y}<0\)_._ Proof.: Let \(B^{v}\) be the vertical part of \(B\). The pair \((X/Z,B^{v})\) is still GGLC and satisfies property \((*)\). Let \(M^{\prime}_{X}\) be the moduli part associated with this new pair. Since \(\Delta\cdot\xi\geq 0\), \(M^{\prime}_{X}\cdot\xi<0\), but \(M^{\prime}_{X}\) might have lost \(f\)-nefness. Run a \((K_{X}+B^{v})\)-MMP over \(Z\). Note that this MMP does not contract \(\xi\) since this curve is horizontal. Call \(\xi_{Y}\) its image in \(Y\). Let \(\varphi\colon X\dasharrow Y\) be the resulting birational morphism, \((Y/Z,C)\) the resulting pair, where \(C\) is the push-forward of \(B^{v}\), and \(g\colon Y\to Z\). This pair is again GGLC and satisfies property \((*)\) by proposition 5.3, with same discriminant divisor as \((X/Z,B)\). Furthermore, \(M_{Y}\) is \(g\)-nef. 
If we consider a resolution of \(\varphi\), say \(V\) with projections \(p\) and \(q\) towards \(X\) and \(Y\) respectively, then \[p^{*}(K_{X}+B^{v}-f^{*}(K_{Z}+\Sigma_{Z}))=q^{*}(K_{Y}+C-g^{*}(K_{Z}+\Sigma_{Z }))+E,\] where \(\Sigma_{Z}\) is the discriminant part, \(E\) is effective by the negativity lemma [12, lemma 3.39] and vertical over \(Z\). As \(\xi_{Y}\) is horizontal, \(q_{*}E\cdot\xi_{Y}\geq 0\), whence \(M_{Y}\cdot\xi_{Y}<0\). qed **Proposition 6.5**.: _Let \(f\colon X\to Z\) be a fibration and \((X/Z,B)\) a GGLC pair associated with it over an algebraically closed field of characteristic \(p>2\), satisfying property \((*)\) and such that \(B\) is vertical. Suppose there exist very ample divisors \(H_{2},...,H_{n}\), where \(n=\dim X\), such that \(M_{X}\cdot H_{2}\cdot...\cdot H_{n}<0\). Let \(\xi\) be a general curve in the intersection of the linear systems \(|H_{2}|,...,|H_{n}|\). Then, through a general point of \(\xi\), we can find a rational vertical curve \(\zeta\) such that \(M_{X}\cdot\zeta<0\). In particular, if \(M_{X}\) is \(f\)-nef, such \(\xi\) cannot exist._ Proof.: Note that \(\xi\) is horizontal and we can assume it is smooth, it passes through a general point and that \(X\) is smooth along \(\xi\). Thus, \(\alpha_{e}|_{\xi}\) is birational by lemma 6.2. Let \(\xi_{e}:=\alpha_{e*}\xi\) and let \(\mathcal{F}\) be the foliation induced by \(f\). Since \(B\) is vertical and \((X/Z,B)\) satisfies property \((*)\), by corollary 3.3 and remark 5.5, \[\alpha_{e}^{*}K_{Y^{(e)}}\cdot\xi=(p^{e}-1)M_{X}\cdot\xi+K_{X}\cdot\xi.\] Therefore, for \(e\gg 0\), \(K_{Y^{(e)}}\cdot\xi_{e}<0\). Let \(G_{e}:=\beta_{e}^{*}H\), for some ample Cartier divisor on \(X\). We can thus apply theorem 6.1, to find, through a general point of \(\xi_{e}\), a rational curve \(\zeta_{e}\) such that \[G_{e}\cdot\zeta_{e}\leq 2\dim X\frac{G_{e}\cdot\xi_{e}}{-K_{Y^{(e)}}\cdot\xi_{e }}.\] If there exists an \(\hat{e}\gg 0\) such that \(\zeta_{\hat{e}}\) is vertical, we are done by setting \(\zeta\) to be the image of \(\zeta_{\hat{e}}\) in \(X\). In fact \(K_{X}\cdot\zeta_{\hat{e}}<0\) since \(\zeta_{\hat{e}}\) is rational and vertical, and \(B\cdot\zeta_{\hat{e}}=0\) since we can choose \(\zeta_{\hat{e}}\) through a point \(x\not\in\operatorname{Supp}(B)\). Therefore, \(M_{X}\cdot\zeta_{\hat{e}}<0\) as well. If such \(\hat{e}\) does not exist, let \(\zeta_{e}^{\prime}\) be the image of \(\zeta_{e}\) in \(X\). Then, by lemma 6.2, \(\alpha_{e}|_{\zeta_{e}^{\prime}}\) would be birational for all \(e\gg 0\) and we would have: \[p^{e}\leq p^{e}H\cdot\zeta_{e}^{\prime} =G_{e}\cdot\zeta_{e}\leq 2\dim X\frac{G_{e}\cdot\xi_{e}}{-K_{Y^{(e)} }\cdot\xi_{e}}\] \[=2\dim X\frac{p^{e}H\cdot\xi}{-(p^{e}-1)M_{X}\cdot\xi-K_{X}\cdot \xi}.\] The first term in the inequalities above goes to \(\infty\) as \(e\) grows, while the last term is bounded, contradiction. In conclusion, there must be an \(\hat{e}\gg 0\) for which \(\zeta_{\hat{e}}\) is vertical. qed _Remark 6.6_.: The bend and break theorem 6.1 needs the base field to be algebraically closed. If the base field \(k\) is only perfect, the rational curves we find may be defined over an extension of \(k\). Thus, the above proposition 6.5 holds only for varieties defined over an algebraically closed field. ## 7 Geometric log canonical centres In order to do adjunction on log canonical centres of the geometric generic fibre, we need to extract a divisor with discrepancy \(-1\) over them. 
This cannot always be done over \(X\); we may need to base change our fibration with a power of the Frobenius morphism as in construction 3.

Footnote 3: This example was suggested to me by Fabio Bernasconi.

**Example 7.1**.: Let \(X\) be \(V(x^{3}s+y^{3}s+xyzs+v^{3}t)\) inside \(\mathbb{P}^{3}_{[x:y:z:v]}\times\mathbb{P}^{1}_{[s:t]}\) and let \(f\colon X\to\mathbb{P}^{1}_{[s:t]}\) be the fibration induced by the projection onto the second factor. On the open set where \(s\neq 0\) and \(v\neq 0\), \(X\) is regular. If the characteristic is \(\neq 3\), then \(X_{\bar{\eta}}\) is also smooth; only the fibre over \(t=0\) has a singularity at the origin. On the other hand, if the characteristic of the base field is \(3\), the general fibre is a deformation of a cone and it has two singularities that come together over \(t=0\). On the geometric generic fibre they can be described by the equations \(W:=V(x,z,y^{3}+t)\) and \(W^{\prime}:=V(y,z,x^{3}+t)\). These are canonical centres. Note that \(f|_{W}\) and \(f|_{W^{\prime}}\) are the Frobenius morphism.

Now, consider the base change with the Frobenius morphism \(F\colon Z\to Z\) and let \(Y^{(1)}\) be the normalisation of the fibre product. Let \(\tau\) be an element of \(K(Y^{(1)})\) such that \(\tau^{3}=t\). Then, \(Y^{(1)}\) is singular along \(W^{(1)}:=V(x,z,y+\tau)\) and \(W^{\prime(1)}:=V(y,z,x+\tau)\).

**Example 7.2**.: Let \(X:=\mathbb{P}^{2}_{[x:y:z]}\times\mathbb{P}^{1}_{[s:t]}\), \(D=V(x^{p}-ty^{p})\subset\mathbb{P}^{2}_{[x:y:z]}\times\mathbb{P}^{1}_{[s:t]}\) and \(\Delta:=\frac{1}{p}D\) over an algebraically closed field of characteristic \(p>0\). Let \(f\colon X\to\mathbb{P}^{1}_{[s:t]}\) be the natural projection. Consider the base change with \(F\colon\mathbb{P}^{1}\to\mathbb{P}^{1}\), let \(Y^{(1)}\) be the normalisation of the fibre product and \(\beta_{1}\colon Y^{(1)}\to X\). Then \((X,\Delta)\) is klt, while \((Y^{(1)},\beta_{1}^{*}\Delta)\) has a log canonical centre: indeed, if \(\tau\in K(Y^{(1)})\) satisfies \(\tau^{p}=t\), then \(\beta_{1}^{*}D=V(x^{p}-\tau^{p}y^{p})=p\,V(x-\tau y)\), so \(\beta_{1}^{*}\Delta=V(x-\tau y)\) appears with coefficient \(1\).

_Remark 7.3_.: Let \(f\colon X\to Z\) be a separable fibration with a pair \((X/Z,B)\). Let \(W\subset X\) be a sub-variety such that the reduction of \(W_{\bar{\eta}}\) is a log canonical centre of \((X_{\bar{\eta}},B_{\bar{\eta}})\), where \(\bar{\eta}\) is the geometric generic point of \(Z\). Even if \(f|_{W}\) is separable, it may be that \(W\) is not a log canonical centre of \((X,B)\). This is because, in order to extract a place over the reduction of \(W_{\bar{\eta}}\) with discrepancy \(=-1\), we may have to blow-up centres \(\bar{V}\) defined over \(X_{\bar{\eta}}\) such that the variety \(V\) over \(X\) which reduces to \(\bar{V}\) is not geometrically reduced over \(Z\).

**Lemma 7.4**.: _Let \(f\colon X\to Z\) be a separable fibration such that \(X_{\bar{\eta}}\) is normal, where \(\bar{\eta}\) is the geometric generic point of \(Z\). Let \(B\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \(K_{X}+B\) is \(\mathbb{Q}\)-Cartier. Let \(\sigma\colon Z^{\prime}\to Z\) be a finite separable map and consider the diagram:_

_where \(X^{\prime}\) is the normalisation of the main component of the fibre product \(X\times_{Z}Z^{\prime}\). Define a \(\mathbb{Q}\)-divisor \(B^{\prime}\) on \(X^{\prime}\) by log pull-back, so that \(K_{X^{\prime}}+B^{\prime}=s^{*}(K_{X}+B)\). Then a horizontal subset \(W\subseteq X\) is a log canonical centre (resp. 
non-log canonical centre) of \((X,B)\) if and only if there exists \(W^{\prime}\subseteq X^{\prime}\), an irreducible component of \(s^{-1}(W)\), which is a log canonical centre (resp. non-log canonical centre) of \((X^{\prime},B^{\prime})\)._

Proof.: By lemma 2.3, the conductor of \(X^{\prime}\) is vertical, so, up to shrinking \(Z\), we can assume the fibre product is already normal. Note moreover that, up to further shrinking \(Z\), we can assume \(\sigma\) is etale, so \(s\) is etale as well. In particular, it does not have any (wild) ramification. Thus, the discrepancies over the two pairs are the same (see for example [13, proposition 5.20]). qed

**Definition 7.5**.: Let \(f\colon X\to Z\) be a separable flat fibration and \(B\) a \(\mathbb{Q}\)-divisor on \(X\) such that \(K_{X}+B\) is \(\mathbb{Q}\)-Cartier. Assume that \(X_{\bar{\eta}}\) is normal. Keep the notations of construction. A **geometric non-klt centre** of \((Y^{(e)},B_{e})\) is a subvariety \(W\subset Y^{(e)}\) such that, if \(\bar{\eta}\) is the geometric generic point of \(Z\), \(W_{\bar{\eta}}\) is reduced and is a non-klt centre of \((X_{\bar{\eta}},B_{\bar{\eta}})\). In this situation, if \(e^{\prime}\geq e\), then \(W^{(e^{\prime})}\), the base change of \(W\) inside \(Y^{(e^{\prime})}\), is a non-klt centre of \((Y^{(e^{\prime})},B_{e^{\prime}})\). Similarly, we can define **geometric log canonical centres** and **geometric non-log canonical centres**.

**Proposition 7.6**.: _Let \(f\colon X\to Z\) be a fibration onto a curve \(Z\) such that \(X_{\bar{\eta}}\) is normal, where \(\bar{\eta}\) is the geometric generic point of \(Z\). Let \(B\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \(K_{X}+B\) is \(\mathbb{Q}\)-Cartier. Let \(\bar{W}\) be a (non-)log canonical centre of \((X_{\bar{\eta}},B_{\bar{\eta}})\) and let \(W\) be a subvariety of \(X\) such that an irreducible component of the reduced structure of \(W_{\bar{\eta}}\) is exactly \(\bar{W}\). Keep the notations of construction. Then, for \(e\gg 0\), there exists \(W^{(e)}\subseteq Y^{(e)}\), lying over \(W\), which is a geometric (non-)log canonical centre of \((Y^{(e)},B_{e})\)._

Proof.: Consider the diagram: with \(\psi\) separable and \(e\in\mathbb{N}\). Let \(B_{e}:=\beta_{e}^{*}B^{h}+\frac{1}{p^{e}}\beta_{e}^{*}B^{v}\) and define \(B^{\prime}\) by log pull-back from \(Y^{(e)}\), so that \(K_{X^{\prime}}+B^{\prime}\) is the pull-back of \(K_{Y^{(e)}}+B_{e}\). By corollary 3.5, \(K_{X_{\bar{\eta}}}+B_{\bar{\eta}}=(K_{X^{\prime}}+B^{\prime})_{\bar{\eta}}\). By construction, there exists \(W^{\prime}\subseteq X^{\prime}\) log canonical centre of \((X^{\prime},B^{\prime})\) such that \(W^{\prime}_{\bar{\eta}}=\bar{W}\). Then, by lemma 7.4, there exists \(W^{(e)}\subseteq Y^{(e)}\), which is a geometric log canonical centre of \((Y^{(e)},B_{e})\). The case when \(\bar{W}\) is a non-log canonical centre is similar. qed

_Remark 7.7_.: In particular, since the non-klt centres of \((X_{\bar{\eta}},B_{\bar{\eta}})\) are finitely many, for \(e\gg 0\), they all appear as geometric non-klt centres of \((Y^{(e)},B_{e})\). 
Furthermore, assuming the existence of log resolutions, we can construct a log resolution of \((Y^{(e)},B_{e})\) which extracts all these geometric non-klt centres. **Theorem 7.8** (Existence of geometric property star modifications).: _Assume the LMMP and the existence of log resolutions in dimension \(n\). Let \(f\colon X\to Z\) be a fibration from a normal projective variety \(X\) of dimension \(n\) onto a curve \(Z\) over a perfect field of characteristic \(p>2\), such that \(X_{\bar{\eta}}\) is normal, where \(\bar{\eta}\) is the geometric generic point of \(Z\). Let \(B\) be a \(\mathbb{Q}\)-divisor on \(X\) such that \(K_{X}+B\) is \(\mathbb{Q}\)-Cartier. Let \(\bar{W}\) be a log canonical centre of \((X_{\bar{\eta}},B_{\bar{\eta}})\) and let \(W\) be a subvariety of \(X\) such that an irreducible component of the reduced structure of \(W_{\bar{\eta}}\) is exactly \(\bar{W}\). Keep the same notations of construction. Then, for \(e\gg 0\), there exists a property \((*)\) modification of \((Y^{(e)},B_{e})\) which extracts an exceptional divisor \(E\) over \(W^{(e)}\) with discrepancy \(=-1\). More precisely, there exist a dlt GGLC pair \((Y/Z,C+E)\) with \(Y\)\(\mathbb{Q}\)-factorial, and a diagram:_ _where \(\mu\) is a birational map and the centre of \(E\) is \(W^{(e)}\). The induced map \(E\to Z\) is separable. Moreover, there exist an effective exceptional \(\mathbb{Q}\)-divisor \(R\), whose image in \(Y^{(e)}\) is supported in the non-log canonical locus of \((Y^{(e)},B_{e})\), and a vertical effective \(\mathbb{Q}\)-divisor \(G\), such that_ \[K_{Y}+C+E+R=\mu^{*}(K_{Y^{(e)}}+B_{e})+G.\] Proof.: Choose \(e\in\mathbb{N}\) big enough, so that, for all \(\bar{V}\) non-klt centres of \((X_{\bar{\eta}},B_{\bar{\eta}})\), there exist \(V_{e}\subseteq Y^{(e)}\) geometric non-klt centres of \((Y^{(e)},B_{e})\). Such \(e\) exists by proposition 7.6. Let \(\sigma\colon Y^{\prime}\to Y^{(e)}\) be a log resolution of \((Y^{(e)},B_{e})\) which extracts all geometric non-klt centres. In particular, there is an exceptional divisor \(E\subset Y^{\prime}\) extracting \(W^{(e)}\), the reduction of the base change of \(W\) inside \(Y^{(e)}\). Now the proof goes through in the same way as the previous version of the existence of property \((*)\) modifications 5.6. Note that, since the discrepancy of \(E\) is \(-1\), we do not contract it when we run the MMP over \(Y^{(e)}\). qed Once we have property \((*)\), we can restrict the moduli part on geometric log canonical centres. **Lemma 7.9**.: _Let \(f\colon X\to Z\) be a fibration and \((X/Z,B+S)\) a GGLC pair associated with it over a perfect field of characteristic \(p>2\), where \(S\) is a prime horizontal divisor. Assume that \(Z\) is a curve and that \((X/Z,B+S)\) satisfies property \((*)\). Assume that the normalisation \(S^{\nu}\to S\) is an isomorphism in codimension \(1\). Then,_ \[R(f)|_{S^{\nu}}=R(f|_{S^{\nu}}).\] Proof.: First of all note that, since \((X_{\bar{\eta}},B_{\bar{\eta}})\) is log canonical, \(S_{\bar{\eta}}\) must be reduced. Moreover, for the purpose of the proof, we can suppose \(S\) is normal since \(S^{\nu}\to S\) is an isomorphism in codimension \(1\). Let \(D_{S}\) be a vertical divisor in \(S\), we want to compute its multiplicity in \(R(f|_{S})\), so we can restrict our study to a neighbourhood of its generic point. 
Let \(D\) be a vertical divisor in \(X\) whose restriction to \(S\) contains \(D_{S}\) Since \((X/Z,B)\) satisfies property \((*)\), in particular \((X,S+\Phi)\) is log canonical for all fibres \(\Phi\) (with their reduced scheme structure). Thus, by adjunction, \((\Phi^{\nu},S|_{\Phi^{\nu}})\) is log canonical as well. In particular, \(S|_{D^{\nu}}\) is reduced and \(S\) intersects \(D\) with multiplicity \(1\). Similarly, if \(D\) and \(D^{\prime}\) are two different vertical prime divisors contained in the same fibre, there cannot exist a divisor \(C\) of \(S\) which is contained both in \(D\) and in \(D^{\prime}\) because it would contradict the fact that \((X,S+D+D^{\prime})\) is log canonical (around \(D+D^{\prime}\)). Thus, the multiplicity \(\ell_{D}\) of a vertical divisor \(D\) with respect to \(f\) is the same as the multiplicity of \(D|_{S}\) with respect to \(f|_{S}\). qed **Lemma 7.10**.: _Let \(f\colon X\to Z\) be a fibration and \((X/Z,B+S)\) a GGLC pair associated with it over a perfect field of characteristic \(p>2\), where \(S\) is a prime horizontal divisor. Assume that \(Z\) is a curve and that \((X/Z,B+S)\) satisfies property \((*)\). Then \(f|_{S}=g\circ\varphi\), where \(g\) is a fibration and \(\varphi\colon Z^{\prime}\to Z\) is etale._ Proof.: Let \(g\circ\varphi\) be the Stein factorization of \(f|_{S}\). We need to show that \(\varphi\) is etale. Let \(\bar{\eta}\) be the geometric generic point of \(Z\). First of all note that, since \((X_{\bar{\eta}},B_{\bar{\eta}}+S_{\bar{\eta}})\) is log canonical, \(S_{\bar{\eta}}\) must be reduced. Thus, by proposition 1.6, \(\varphi\) is separable. Let \(z\in Z\) be a ramification point, so that \(\varphi^{*}(z)=\sum_{i}e_{i}z_{i}\) for \(e_{i}\in\mathbb{N}\) and \(z_{i}\in\varphi^{-1}(z)\). Let \(\Phi\) be a component of the fibre over \(z\), then by the property \((*)\) assumption, \((X,S+\Phi)\) is log canonical. Hence, by adjunction, \((S^{\nu},\Phi|_{S^{\nu}})\) is log canonical. But \(\Phi|_{S}=\sum_{i}e_{i}\Phi_{i}\), where \(\Phi_{i}\) is the corresponding component over \(z_{i}\). Thus, \(e_{i}=1\) for all \(i\) and \(\varphi\) is etale. qed _Remark 7.11_.: Keep the notations of the above lemma. Assume that the normalisation \(S^{\nu}\to S\) is an isomorphism in codimension \(1\). Consider the GGLC pair obtained via adjunction on \(S^{\nu}\), \((S^{\nu}/Z,B_{S^{\nu}})\) with morphism \(f|_{S^{\nu}}\colon S^{\nu}\to Z\) and let \(B_{Z}\) be the discriminant divisor computed with respect to this map. The above lemma tells us that \(\varphi^{*}(B_{Z})\) coincides with the discriminant divisor computed with respect to \(g|_{S^{\nu}}\colon S^{\nu}\to Z^{\prime}\), namely \(B_{Z^{\prime}}\). Hence, \(K_{Z^{\prime}}+B_{Z^{\prime}}=\varphi^{*}(K_{Z}+B_{Z})\). Thus, for the purpose of computing discriminant and moduli parts, we can always assume \(\varphi\) is the identity. We will tacitly do so in the sequel. **Proposition 7.12**.: _Let \(f\colon X\to Z\) be a fibration and \((X/Z,B+S)\) a GGLC pair associated with it over a perfect field of characteristic \(p>2\), where \(S\) is a prime horizontal divisor. Assume that \(Z\) is a curve and that \((X/Z,B+S)\) is dlt, satisfies property \((*)\) and \(X\) is \(\mathbb{Q}\)-factorial.. Assume that the normalisation \(S^{\nu}\to S\) is an isomorphism in codimension \(1\) and let \(B_{S^{\nu}}\) be the \(\mathbb{Q}\)-divisor on \(S^{\nu}\) obtained via adjunction. Then the pair \((S^{\nu}/Z,B_{S^{\nu}})\) is GGLC and satisfies property \((*)\). 
Moreover, if \(M_{X}\) and \(M_{S^{\nu}}\) are the moduli parts of \((X/Z,B)\) and \((S^{\nu}/Z,B_{S^{\nu}})\) respectively, then_ \[M_{X}|_{S^{\nu}}=M_{S^{\nu}}.\] Proof.: By the lemma above 7.10, we can suppose \(f|_{S^{\nu}}\colon S^{\nu}\to Z\) is a fibration. Since \(S_{\bar{\eta}}\) is a log canonical centre of \((X_{\bar{\eta}},B_{\bar{\eta}})\), the pair \((S_{\bar{\eta}}^{\ \nu},B_{S_{\bar{\eta}}}^{\ \nu})\) is log canonical, where \(B_{S_{\bar{\eta}}}^{\ \nu}\) is the boundary divisor defined on \(S_{\bar{\eta}}^{\ \nu}\) by restriction. By the universal properties of the normalisation, \((S^{\nu})_{\bar{\eta}}^{\ \nu}=S_{\bar{\eta}}^{\ \nu}\). Thus, \((S^{\nu}/Z,B_{S^{\nu}})\) is GGLC. Since \(S^{\nu}\to S\) is an isomorphism in codimension \(1\), for the rest of the proof, we can assume \(S\) is normal. Let \(\Phi_{S}\) be a vertical prime divisor in \(S\), and let \(\Phi\) be a vertical prime divisor in \(X\) such that \(\Phi|_{S}\) contains \(\Phi_{S}\). Since \((X,B+S)\) satisfies property \((*)\), if \(\Phi\leq B\), then \(\Phi_{S}\leq B_{S}\) and by adjunction \((S,B_{S})\) is log canonical (around \(\Phi_{S}\)). Otherwise, \((X,B+S+\Phi)\) is log canonical around \(\Phi\), so, again by adjunction, \((S,B_{S}+\Phi_{S})\) is log canonical as well. Hence \((S/Z,B_{S})\) satisfies property \((*)\). By proposition 5.4, we can compute the moduli parts of \((X/Z,B+S)\) and \((S/Z,B_{S})\) using the canonical bundle of the foliations induced by \(f\) and \(f|_{S}\) respectively. Since, by the above lemma 7.9, \(R(f)|_{S}=R(f|_{S})\), we have: \[M_{X}|_{S}=(K_{X}+B^{h}+S-f^{*}K_{Z}-R(f))|_{S}=\] \[K_{S}+B_{S}^{h}-f|_{S}^{*}K_{Z}-R(f|_{S})=M_{S}.\] qed **Lemma 7.13**.: _Let \((X,B+S)\) be a dlt pair over a perfect field of characteristic \(p>2\), where \(S\) is a prime divisor and \(X\) is \(\mathbb{Q}\)-factorial. Then, the normalisation morphism \(S^{\nu}\to S\) is an isomorphism in codimension \(1\). Moreover, if either \(X\) has dimension \(\leq 3\) and is defined over an algebraically closed field of characteristic \(p>5\), or \(S\) satisfies the \(S_{2}\) property, then \(S\) is normal._ Proof.: The pair \((S^{\nu},B_{S^{\nu}})\) induced on \(S^{\nu}\) is log canonical by adjunction. Moreover, by [11, lemma 2.1], \(S^{\nu}\to S\) is a universal homeomorphism. Thus, if \(S^{\nu}\to S\) was not an isomorphism in codimension \(1\), the conductor would have coefficients \(>1\), leading to a contradiction. If \(X\) is a threefold over an algebraically closed field of characteristic \(p>5\), the claim is proven in [1, lemma 5.2]. If \(S\) satisfies the \(S_{2}\) property, \((S,B_{S})\) is slc and, since \(S^{\nu}\to S\) is a universal homeomorphism, there cannot be nodal singularities in codimension \(1\), thus \(S\) is normal. qed _Remark 7.14_.: If \(X\) has dimension \(>3\), or the characteristic of the base field is \(p\leq 5\), then it is no longer true in general that plt centres are normal (see [1] and [15]). ## 8 The canonical bundle formula In this section, we prove nefness of the moduli part. First, we prove the result for pairs satisfying property \((*)\), then we use property \((*)\) modifications to recover this situation. ### Property \((*)\) case **Theorem 8.1**.: _Assume the LMMP and the existence of log resolutions in dimension \(\leq n\). Let \(f\colon X\to Z\) be a fibration from a normal projective variety \(X\) of dimension \(n\) onto a curve \(Z\) and \((X/Z,B)\) a GGLC pair associated with it over a perfect field of characteristic \(p>2\). 
Suppose that \((X/Z,B)\) satisfies property \((*)\). Assume that \(K_{X}+B\) is \(f\)-nef. Then, the moduli part \(M_{X}\) is nef._

_Outline of the proof._ The strategy to prove theorem 8.1 follows the proof of [1, lemma 3.12]. If \(M_{X}\) was not nef, there would exist an extremal ray \(\rho\) such that \(M_{X}\cdot\rho<0\). Let \(A\) be an ample divisor on \(X\) such that \(H_{\rho}:=M_{X}+A\) is a supporting hyperplane for \(\rho\). In particular, \(H_{\rho}\) is nef.

**Non-big case.** When \(H_{\rho}\) is not big, the idea is to find a sufficiently general curve on which \(M_{X}\) is negative. Then we apply the results in section 6. In order to apply proposition 6.5, we need to work over the algebraic closure of the base field, so we perform a base change. Thanks to the lemmas below, we can then recover the result over the original field.

**Big case.** Consider \(H_{\rho}-\varepsilon A\), for \(\varepsilon>0\) small enough so that this divisor is still big. Let \(D\) be an effective \(\mathbb{Q}\)-divisor \(\mathbb{Q}\)-linearly equivalent to \(H_{\rho}-\varepsilon A\). Since \(D\cdot\rho<0\), there exists a prime divisor \(S\) in the support of \(D\), which is negative on \(\rho\). The aim is to do induction on the dimension by producing a log canonical centre \(W\) containing \(\rho\). Since we need to control the geometric generic fibre of the restriction, we will produce this log canonical centre by perturbing the pair \((X_{\bar{\eta}},B_{\bar{\eta}})\) with \(S_{\bar{\eta}}\). Then, thanks to theorem 7.8, we can find a property \((*)\) modification over \(Y^{(e)}\), for some \(e\in\mathbb{N}\) big enough, which extracts this geometric log canonical centre. Finally, we apply adjunction to compare the moduli part of \(X\) and the one of the exceptional divisor over \(W\) and conclude by induction on the dimension. Note that the case \(\dim(X)=1\) is trivial.

**Definition 8.2**.: Let \(X\) be a variety over a field \(k\). Let \(\bar{k}\) be the algebraic closure of \(k\) and \(\bar{X}:=X\times_{k}\bar{k}\). Let \(W\subseteq\bar{X}\) be a subvariety. Let \(k\subseteq k^{\prime}\) be the minimal normal (finite) extension of \(k\) over which \(W\) is defined. Let \(G:=\operatorname{Gal}(\bar{k}/k)\). Then all subvarieties of \(\bar{X}\) in the Galois orbit of \(W\) are defined over \(k^{\prime}\). If \(G^{\prime}:=\operatorname{Gal}(k^{\prime}/k)\), then

\[W^{G}:=\sum_{g\in G^{\prime}}g(W)\]

descends to a cycle defined over \(k\); by abuse of notation, we also call it \(W^{G}\).

**Lemma 8.3**.: _Let \(f\colon X\to Z\) be a fibration between normal projective varieties over a perfect field \(k\) and \(D\) a \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor on it. Let \(\bar{k}\) be the algebraic closure of \(k\), \(\bar{f}\colon\bar{X}\to\bar{Z}\) the base change of \(f\) with \(\bar{k}\), \(\bar{D}:=D\times_{k}\bar{k}\). Then \(D\) is \(f\)-nef if and only if \(\bar{D}\) is \(\bar{f}\)-nef._

Proof.: Let \(G:=\operatorname{Gal}(\bar{k}/k)\). Let \(\xi\subset\bar{X}\) be a curve and \(k^{\prime}\) the minimal normal finite extension of \(k\) over which \(\xi\) is defined. If \(\bar{f}(\xi)\) has dimension \(0\), \(f(\xi^{G})\) has dimension \(0\) as well. Since \(D\cdot\xi^{G}=[k^{\prime}:k]\,\bar{D}\cdot\xi\), if \(D\) is \(f\)-nef, then \(\bar{D}\) is \(\bar{f}\)-nef. The converse is trivial. qed

**Lemma 8.4**.: _Assume the existence of log resolutions of singularities in dimension \(n\). 
Let \(f\colon X\to Z\) be a fibration from a normal projective variety of dimension \(n\) onto a curve \(Z\) and \((X/Z,B)\) a GGLC pair associated with it over a perfect field \(k\) of characteristic \(p>2\). Suppose that \((X/Z,B)\) satisfies property \((*)\). Let \(\bar{k}\) be the algebraic closure of \(k\), \(\bar{f}\cdot\bar{X}\to\bar{Z}\) the base change of \(f\) with \(\bar{k}\), \(\bar{B}:=B\times_{k}\bar{k}\). Then \(B_{\bar{Z}}=B_{Z}\times_{k}\bar{k}\), \(M_{\bar{X}}=M_{X}\times_{k}\bar{k}\), \((\bar{X}/\bar{Z},\bar{B})\) is GGLC and it satisfies property \((*)\)._ Proof.: If \(Y\) is a variety over \(k\), we denote by \(\bar{Y}:=Y\times_{k}\bar{k}\). Let \(G:=\operatorname{Gal}(\bar{k}/k)\). Note that, since \(k\) is perfect, \(\bar{X}\) is normal by lemma [10, Tag 0C3M]. Moreover, by lemma [10, Tag 01V0], \(K_{\bar{X}}=K_{X}\times_{k}\bar{k}\). First, we show that, given \(t\in\mathbb{R}_{\geq 0}\) and \(z\in Z\), if \((X,B+tf^{*}z)\) is log canonical around \(z\), then \((\bar{X},\bar{B}+\bar{f}^{*}\bar{z})\) is log canonical around \(\bar{z}\). Indeed, let \((Y,C)\) be a log resolution of \((X,B+tf^{*}z)\), since \(k\) is perfect, \((\bar{Y},\bar{C})\) is a log resolution of \((\bar{X},\bar{B}+t\bar{f}^{*}\bar{z})\). As we can check the type of singularities by computing discrepancies on a log resolution, we get the claim. To conclude, we show that, given \(t\in\mathbb{R}_{\geq 0}\) and \(z\in\bar{Z}\), if \((\bar{X},\bar{B}+\bar{f}^{*}z)\) is log canonical around \(z\), then \((X,B+f^{*}z^{G})\) is log canonical around \(z^{G}\). Let \(Y\to X\) be a birational map over \(X\) and \(E\) a place over \(f^{*}z^{G}\). Consider the base change with \(\bar{k}\), \(\bar{Y}\to\bar{X}\), \(\bar{E}:=E\times_{k}\bar{k}\subseteq\bar{Y}\). Since \(k\) is perfect, \(\bar{E}\) is reduced, thus the discrepancy of \(E\) over \(X\) coincides with the discrepancy of \(\bar{E}\) over \(\bar{Y}\). A component of \(\bar{E}\) is a place over \(\bar{f}^{*}z\), whence the claim. To prove the statement of the lemma, it is enough to show that \(B_{\bar{Z}}=B_{Z}\times_{k}\bar{k}\). This follows from the above discussion. Proof of the non big case.: First we prove the statement when \(k\) is algebraically closed. Let \(\nu\) be the numerical dimension of \(H_{\rho}\). Since \(H_{\rho}\) is not big, \(\nu<n=\dim(X)\). Define \(D_{i}:=H_{\rho}+\varepsilon A\) for \(2\leq i\leq\nu+1\) and \(0<\varepsilon\ll 1\), and \(D_{i}:=A\) for \(\nu+1<i\leq n\). Since \(H_{\rho}^{\nu+1}\cdot A^{n-\nu-1}=0\), \[M_{X}\cdot H_{\rho}^{\nu}\cdot A^{n-\nu-1}=(H_{\rho}-A)\cdot H_{\rho}^{\nu} \cdot A^{n-\nu-1}<0.\] Therefore, for \(\varepsilon>0\) small enough, we still have \[M_{X}\cdot D_{2}\cdot...\cdot D_{n}<0.\] By possibly substituting the \(D_{i}\)'s with a power of them, we can suppose they are very ample. Let \(\xi\) be a general curve in the intersection of the linear systems of \(D_{2},...,D_{n}\). In particular, we can choose it so that \(B^{h}\cdot\xi\geq 0\). By lemma 6.4, there exist \((Y/Z,C)\) a GCLC pair satisfying property \((*)\) and \(\xi_{Y}\) a curve in \(Y\), such that \(C\) is a vertical \(\mathbb{Q}\)-divisor, the moduli part \(M_{Y}\) is nef over \(Z\) and \(M_{Y}\cdot\xi_{Y}<0\). We can then substitute \((X/Z,B)\) with \((Y/Z,C)\) and repeat the process. More precisely, we can find an extremal ray \(\rho_{Y}\in\overline{\mathrm{NE}}(Y)\) such that \(M_{Y}\cdot\rho_{Y}<0\) and with supporting hyperplane \(H_{\rho_{Y}}\). 
If \(H_{\rho_{Y}}\) is big, we can apply the next step; if not, we can find a general enough curve \(\xi_{Y}^{\prime}\) such that \(M_{Y}\cdot\xi_{Y}^{\prime}<0\), with the process explained above. In this case, we can conclude by proposition 6.5. If \(k\) is not algebraically closed, let \(\bar{k}\) be its algebraic closure, \(\bar{f}\colon\bar{X}\to\bar{Z}\) the base change of \(f\) with \(\bar{k}\) and \(\bar{B}:=B\times_{k}\bar{k}\). By lemma 8.3 and lemma 8.4, \((\bar{X}/\bar{Z},\bar{B})\) is GCLC, it satisfies property \((*)\) and \(M_{\bar{X}}=M_{X}\times_{k}\bar{k}\) is \(\bar{f}\)-nef. Thus, we conclude that \(M_{\bar{X}}\) is nef by the previous step. By lemma 8.3, this implies that \(M_{X}\) is nef as well. qed Proof of the big case.: Recall that we have \(S\subseteq X\) such that \(S\cdot\rho<0\). Let \(\bar{\eta}\) be the geometric generic point of \(Z\). Consider the set \(\mathcal{S}\) of couples \((W,\lambda)\) such that: 1. \(W\subseteq S\) and the image of \(\overline{\mathrm{NE}}(W)\to\overline{\mathrm{NE}}(X)\) contains \(\rho\); 2. the reduction of \(W_{\bar{\eta}}\) is a log canonical centre of \((X_{\bar{\eta}},B_{\bar{\eta}}+\lambda S_{\bar{\eta}})\). This set is non-empty since \(S\cdot\rho<0\). Choose \(\lambda_{0}\) minimal such that there is \(W\) with \((W,\lambda_{0})\in\mathcal{S}\). Choose also \(W_{0}\) minimal with respect to the inclusion between those \(W\) such that \((W,\lambda_{0})\in\mathcal{S}\). Note that, if \(V\subseteq S\) is a subvariety different from \(W_{0}\), such that the reduction of \(V_{\bar{\eta}}\) is a non-log canonical centre of \((X_{\bar{\eta}},B_{\bar{\eta}}+\lambda_{0}S_{\bar{\eta}})\), then the image of \(\overline{\mathrm{NE}}(V)\to\overline{\mathrm{NE}}(X)\) does not contain \(\rho\). Keep the notations of construction. By theorem 7.8, there exists \(e\in\mathbb{N}\) big enough, a dlt GGLC pair \((Y/Z,C+E)\) with \(Y\)\(\mathbb{Q}\)-factorial, and a diagram: where \(\mu\) is a birational map and the centre of \(E\) is \(W^{(e)}\), the base change of \(W_{0}\) inside \(Y^{(e)}\). The induced fibration \(E\to Z\) is separable. Moreover, there exist an effective exceptional \(\mathbb{Q}\)-divisor \(R\), whose image in \(Y^{(e)}\) is supported in the non-log canonical locus of \((Y^{(e)},B_{e}+\lambda_{0}S_{e})\), and a vertical effective \(\mathbb{Q}\)-divisor \(G\), such that \[K_{Y}+C+E+R=\mu^{*}(K_{Y^{(e)}}+B_{e}+\lambda_{0}S_{e})+G.\] Recall that \(B_{e}:=\beta_{e}^{*}B^{h}+\frac{1}{p^{\kappa}}\beta_{e}^{*}B^{v}\) and \(S_{e}:=\beta_{e}^{*}S\). Now, we want to compare the moduli part \(M_{X}\) of \((X/Z,B)\) to the moduli part \(M_{Y}\) of \((Y/Z,C+E)\). Let \(\mathcal{F}\), \(\mathcal{G}_{e}\) and \(\mathcal{G}\) be the foliations induced by \(f,g_{e}\) and \(g\) respectively. The first step consists in comparing \(M_{X}\) to the canonical divisor of \(\mathcal{G}_{e}\). Then, we will compare the latter to \(M_{Y}\). Let \(\Delta:=B^{h}+\lambda_{0}S\). Since \((X/Z,B)\) has property \((*)\), by proposition 5.4, \(K_{\mathcal{F}}+\Delta\sim_{\mathbb{Q}}M_{X}+\lambda_{0}S\). 
Moreover, by corollary 3.3: \[\alpha_{e}^{*}(K_{\mathcal{G}_{e}}+\Delta_{e})=p^{e}(K_{\mathcal{F}}+\Delta).\] In particular, if \(\rho_{e}\) is the ray corresponding to \(\rho\) in \(Y^{(e)}\), then: \[(K_{\mathcal{G}_{e}}+\Delta_{e})\cdot\rho_{e}=p^{e}(M_{X}+\lambda_{0}S)\cdot \rho<0.\] Now, by claim 3.4, \(\frac{1}{p^{e}}\beta_{e}^{*}R(f)=R(g_{e})\) and, since \((X/Z,B)\) has property \((*)\), \(R(f)=B^{v}-f^{*}B_{Z}\), where \(B_{Z}\) is the discriminant part of \((X/Z,B)\). Hence: \[K_{\mathcal{G}_{e}}+\Delta_{e}=K_{Y^{(e)}}+B_{e}+\lambda_{0}S_{e}-g_{e}^{*}(K_ {Z}+B_{Z}).\] This, together with the construction of the property \((*)\) modification, gives: \[\mu^{*}(K_{\mathcal{G}_{e}}+\Delta_{e})=\] \[K_{Y}+C+E-g^{*}(K_{Z}+C_{Z})+R{+}g^{*}(C_{Z}-B_{Z})-G=M_{Y}+R+g^ {*}(C_{Z}-B_{Z})-G,\] where \(C_{Z}\) is the discriminant part of \((Y/Z,C+E)\) and the latter equality comes from the fact that \((Y/Z,C+E)\) satisfies property \((*)\). Now, we want to study more closely the divisor \(g^{*}(C_{Z}-B_{Z})-G\). Let \(z\in Z\) and \(b_{z}\) and \(c_{z}\) be its coefficients in \(B_{Z}\) and \(C_{Z}\), respectively. If \(b_{z}=1\), then the fibre over \(z\) is contained in \(B^{v}\), so \(c_{z}\) must be \(=1\) as well. If \(b_{z}=0\), it may happen that \(c_{z}=1\). Let \(D\) be a non \(\mu\)-exceptional vertical prime divisor contained in \(g^{-1}(z)\). By abuse of notation, call \(D\) also \(\beta_{e}(\mu(D))\). Then, by construction, the coefficient of \(D\) in \(G\) is \(1\) and, since \((X/Z,B)\) has property \((*)\) and \(b_{z}=0\), the coefficient of \(D\) in \(f^{*}(z)\) must be \(1\), hence its coefficient in \(g^{*}(z)\) is \(1\) as well (\(\mu\) is an isomorphism at the generic point of \(D\)). All in all, we get that \(g^{*}(C_{Z}-B_{Z})-G\) is \(\mu\)-exceptional. Furthermore, by construction, \(G\) does not have any components in common with \(R\). Then, applying negativity lemma [10, lemma 3.39], we conclude that \(g^{*}(C_{Z}-B_{Z})-G\) is effective. By abuse of notation, let \(\rho\) be a ray mapping to \(\rho_{e}\) which is in the image of \(\overline{\mathrm{NE}}(E)\to\overline{\mathrm{NE}}(Y)\). Then, since \(\rho\) can be represented by a horizontal curve, \((g^{*}(C_{Z}-B_{Z})-G)\cdot\rho\geq 0\). Moreover, by our choice of \(W_{0}\), \(R\cdot\rho\geq 0\). Indeed, \(R\) contains only places over non-log canonical centres. Thus, \[M_{Y}\cdot\rho<0.\] In the next step, we do adjunction on \(E\). Since \((Y,C+E)\) is \(\mathbb{Q}\)-factorial and dlt, the normalisation morphism \(E^{\nu}\to E\) is an isomorphism in codimension \(1\) by lemma 7.13. Let \(C_{E^{\nu}}\) be the boundary divisor on \(E^{\nu}\) defined by adjunction. By proposition 7.12, \((E^{\nu}/Z,C_{E^{\nu}})\) is GGLC and satisfies property \((*)\). Moreover, \(M_{Y}|_{E^{\nu}}=M_{E^{\nu}}\), the moduli part of \((E^{\nu}/Z,C_{E^{\nu}})\). By abuse of notation, let \(\rho\) be a ray in \(\overline{\mathrm{NE}}(E^{\nu})\) mapping onto \(\rho\). Then, \[M_{E^{\nu}}\cdot\rho<0.\] If \(M_{E^{\nu}}\) is not \(g|_{E^{\nu}}\)-nef, run a \((K_{E^{\nu}}+C_{E^{\nu}})\)-MMP over \(Z\) and let \((E^{\prime}/Z,C^{\prime})\) be the resulting pair with associated fibration \(g^{\prime}\colon E^{\prime}\to Z\). Note that this MMP does not contract \(\rho\); by abuse of notation, call \(\rho\) also the image of this ray in \(E^{\prime}\). By proposition 5.3, \((E^{\prime}/Z,C^{\prime})\) satisfies property \((*)\), \(M_{E^{\prime}}\cdot\rho<0\) and \(M_{E^{\prime}}\) is \(g^{\prime}\)-nef. 
By the inductive assumption then it is nef, contradiction. qed ## 8.2 General case In the general case, we may need to go to a higher model of \(X\) to achieve nefness. **Theorem 8.5**.: _Assume the LMMP and the existence of log resolutions in dimension \(\leq n\). Let \(f\colon X\to Z\) be a fibration from a normal projective variety \(X\) of dimension \(n\) onto a curve \(Z\) and \((X/Z,B)\) a GGLC pair associated with it over a perfect field of characteristic \(p>2\). Let \(B\) be an effective \(\mathbb{Q}\)-divisor on \(X\) such that \((X,B)\) is a log canonical pair. Suppose that \(K_{X}+B\) is \(f\)-nef. Then, there exist a pair \((Y,C)\) satisfying property \((*)\) and a commutative diagram_ _where \(b\) is a birational map such that_ 1. \((K_{X}+B)|_{X_{\eta}}=(K_{Y}+C)|_{X_{\eta}}\)_, where_ \(\eta\) _is the generic point of_ \(Z\)_;_ 2. _the moduli part_ \(M_{Y}\) _of_ \((Y/Z,C)\) _is nef._ Proof.: By theorem 5.6, there exists \((X^{\prime},B^{\prime})\) a \(\mathbb{Q}\)-factorial dlt pair satisfying property \((*)\) with a commutative diagram In the above, \(\mu\) is projective birational and \(K_{X^{\prime}}+B^{\prime}=\mu^{*}(K_{X}+B)+G\) with \(G\) a vertical effective \(\mathbb{Q}\)-divisor. The two pairs coincide over the generic point, so \((X^{\prime}_{\bar{\eta}},B^{\prime}_{\bar{\eta}})\) is log canonical. It may happen that \(K_{X^{\prime}}+B^{\prime}\) is not \(f^{\prime}\)-nef anymore. To fix this, run a \((K_{X^{\prime}}+B^{\prime})\)-MMP over \(Z\). Let \((Y,C)\) be the result of this MMP. Note that the MMP only contracts curves inside the support of \(G\), so \[(K_{X}+B)_{X_{\eta}}=(K_{X^{\prime}}+B^{\prime})_{X_{\eta}}=(K_{Y}+C)_{X_{\eta }}.\] Moreover, \((Y/Z,C)\) satisfies property \((*)\) by proposition 5.3. Thus, theorem 8.1 gives the conclusion. ## 8.3 The \(f\)-trivial case As a corollary of the previous results we get the canonical bundle formula in the classical setting, when the canonical bundle is \(f\)-trivial. The proof is very similar to the one in the characteristic \(0\) case ([1, theorem 1.3]). **Theorem 8.6**.: _Assume the LMMP and the existence of log resolutions in dimension \(\leq n\). Let \(f\colon X\to Z\) be a fibration from a normal projective variety \(X\) of dimension \(n\) onto a curve \(Z\) and \((X/Z,B)\) a GGLC pair associated with it over a perfect field of characteristic \(p>2\). Assume also that \(K_{X}+B\sim_{\mathbb{Q}}f^{*}L_{Z}\) for some line bundle \(L_{Z}\) on \(Z\) and that \((X,B)\) is log canonical. Then, \(M_{X}=f^{*}M_{Z}\) is nef._ Proof.: First, let \(\mu\colon X^{\prime}\to X\) be a property \((*)\) modification, constructed as in theorem 5.6. Let \(f^{\prime}\colon X^{\prime}\to Z\) be the induced morphism and \(B^{\prime}\) defined so that \(K_{X^{\prime}}+B^{\prime}=\mu^{*}(K_{X}+B)\). Let \(B^{\prime}_{Z}\) and \(M^{\prime}\) be respectively the discriminant and the moduli parts of \((X^{\prime}/Z,B^{\prime})\). Then \(M^{\prime}=\mu^{*}M_{X}\), so it is enough to show that \(M^{\prime}\) is nef. Note that it is \(f^{\prime}\)-trivial. There exists an effective vertical \(\mathbb{Q}\)-divisor \(G\) such that if \(B^{*}:=B^{\prime}+G\), then \((X^{\prime}/Z,B^{*})\) satisfies property \((*)\). Let \(\Sigma_{Z}\) be the discriminant part of \((X^{\prime}/Z,B^{*})\) and \(M^{*}\) its moduli part. Let \(\gamma_{z}^{*}\) be the log canonical threshold of \(f^{\prime*}(z)\) with respect to \((X^{\prime}/Z,B^{*})\) and \(\gamma_{z}^{\prime}\) the one with respect to \((X^{\prime}/Z,B^{\prime})\), for \(z\in Z\). 
Note that \(\gamma_{z}^{*}\leq\gamma_{z}^{\prime}\) for every \(z\in Z\); in particular, \(\Sigma_{Z}\geq B^{\prime}_{Z}\). Now, define \(B^{\prime\prime}:=B^{\prime}+f^{\prime*}(\Sigma_{Z}-B^{\prime}_{Z})\), so that \(K_{X^{\prime}}+B^{\prime\prime}\sim_{\mathbb{Q},Z}0\), the discriminant part of \((X^{\prime}/Z,B^{\prime\prime})\) is \(\Sigma_{Z}\) and its moduli part is \(M^{\prime}\). Note that \(B^{\prime\prime}\leq B^{*}\) and the difference is vertical. It is possible that \(K_{X^{\prime}}+B^{*}\) is not \(f^{\prime}\)-nef, in which case we perform a \((K_{X^{\prime}}+B^{*})\)-MMP over \(Z\). Let \(\varphi\colon X^{\prime}\dashrightarrow Y\) be the result of this MMP and \((Y,C)\) the resulting pair, where \(C:=\varphi_{*}B^{*}\). Call \(g\colon Y\to Z\) the induced fibration and \(M_{Y}\) the moduli part of \((Y/Z,C)\). Since \((X^{\prime},B^{*})\) is log canonical, so is \((Y,C)\). The discriminant part is again \(\Sigma_{Z}\). Let \(C^{\prime\prime}:=\varphi_{*}B^{\prime\prime}\). Since \(K_{X^{\prime}}+B^{\prime\prime}\sim_{\mathbb{Q},Z}0\), the divisor \((K_{Y}+C)-(K_{Y}+C^{\prime\prime})\) is \(g\)-nef; moreover, by construction it is effective and supported on a vertical \(\mathbb{Q}\)-divisor \(\Phi\) which does not contain any fibre. Let \(\Psi\) be an effective vertical \(\mathbb{Q}\)-divisor such that \(\Phi+\Psi\sim_{\mathbb{Q},Z}0\). Then, \((K_{Y}+C)-(K_{Y}+C^{\prime\prime})\sim_{\mathbb{Q},Z}-\Psi\) is \(g\)-nef, whence \(\Psi=0=\Phi\) and \(C=C^{\prime\prime}\). By proposition 5.3, \((Y/Z,C)\) satisfies property \((*)\) and \(M_{Y}\) is \(g\)-nef by construction. Therefore, \(M_{Y}\) is nef by theorem 8.1.

Now, we want to compare \(M_{Y}\) and \(M^{\prime}\). Let \(W\) be a common resolution of \(X^{\prime}\dashrightarrow Y\) with \(p\colon W\to X^{\prime}\), \(q\colon W\to Y\) and \(h\colon W\to Z\) the induced maps. We have that both \(p^{*}M^{\prime}-q^{*}M_{Y}\) and \(q^{*}M_{Y}-p^{*}M^{\prime}\) are \(q\)-nef since \(M^{\prime}\) is \(f^{\prime}\)-trivial. Moreover, \(q_{*}(p^{*}M^{\prime}-q^{*}M_{Y})=\varphi_{*}B^{\prime\prime}-C=0=q_{*}(q^{*}M_{Y}-p^{*}M^{\prime})\). Therefore, by the negativity lemma [12, lemma 3.39], \(p^{*}M^{\prime}=q^{*}M_{Y}\) is nef, whence the conclusion. qed

### 8.4 The canonical bundle formula in dimension \(3\)

For threefolds over perfect fields of characteristic \(p>5\), the LMMP and existence of log resolutions are known to hold, so our results hold unconditionally.

**Corollary 8.7**.: _Let \(f\colon X\to Z\) be a fibration from a normal projective variety \(X\) of dimension \(3\) onto a curve \(Z\) and \((X/Z,B)\) a GCLC pair associated with it over a perfect field of characteristic \(p>5\). Let \(B\) be an effective \(\mathbb{Q}\)-divisor on \(X\) such that \((X,B)\) is a log canonical pair. Suppose that \(K_{X}+B\) is \(f\)-nef. Then, there exist a pair \((Y,C)\) satisfying property \((*)\) and a commutative diagram_

_where \(b\) is a birational map such that_

1. \((K_{X}+B)|_{X_{\eta}}=(K_{Y}+C)|_{X_{\eta}}\)_, where_ \(\eta\) _is the generic point of_ \(Z\)_;_
2. _the moduli part_ \(M_{Y}\) _of_ \((Y/Z,C)\) _is nef._

_Moreover, if \(K_{X}+B\sim_{\mathbb{Q}}f^{*}L_{Z}\) for some line bundle \(L_{Z}\) on \(Z\), \(M_{X}=f^{*}M_{Z}\) is nef._

Proof.: By remark 0.4, we can conclude that theorems 8.1, 8.5 and 8.6 hold unconditionally for threefolds over a perfect field of characteristic \(p>5\). qed
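For the reader's orientation, the \(f\)-trivial statements above can be rewritten in the more familiar shape of the canonical bundle formula; this is merely a rearrangement of definition 4.6 together with theorem 8.6 (or corollary 8.7 in dimension \(3\)), and nothing new is claimed. Under the assumptions of theorem 8.6,

\[K_{X}+B\sim_{\mathbb{Q}}f^{*}\bigl(K_{Z}+B_{Z}+M_{Z}\bigr),\]

where \(B_{Z}\) is the discriminant divisor of definition 4.4 and \(M_{Z}\) is a \(\mathbb{Q}\)-divisor on \(Z\) with \(f^{*}M_{Z}=M_{X}\). Indeed, by definition 4.6, \(K_{X}+B\sim_{\mathbb{Q}}M_{X}+f^{*}(K_{Z}+B_{Z})\), and theorem 8.6 provides the descent \(M_{X}=f^{*}M_{Z}\) together with the nefness of \(M_{X}\).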
2308.16556
Batch test of MRPC3b for CBM-TOF/STAR-eTOF
The Compressed Baryonic Matter (CBM) experiment is one of the major scientific spectrometers of the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt. As one of the core sub-systems in the CBM experiment for charged hadron identification, the Time-of-Flight (TOF) system is required to have a time resolution better than 80 ps. According to the final state particle flux distribution, the CBM-TOF will be constructed with several types of Multigap Resistive Plate Chambers (MRPC). In the outer region of the TOF wall where the particle fluxes are around 1 kHz/cm², MRPCs with ultra-thin float glass electrodes are considered as a cost effective solution. MRPC3b prototypes have been developed and tested with excellent performance which could meet all the requirements. Before the construction of CBM-TOF, approximately 80 MRPC3bs are assembled for the STAR endcap TOF (STAR-eTOF) upgrade at RHIC as part of the FAIR Phase-0 programs for CBM-TOF which provides a valuable opportunity for detector stability test under high flux environments. This paper will introduce the batch test of the MRPC3bs for the STAR-eTOF upgrade. Time resolution of better than 70 ps and efficiency of around 95% are achieved. Notably, during the batch test, it has been observed that the noise rates of the two edge strips in each counter are significantly higher than those of the middle strips. Simulations with Computer Simulation Technology (CST) Studio Suite are carried out and several kinds of MRPC prototypes are designed and tested accordingly. Based on the simulation and test results, the design of the MRPC3b has been further optimized, resulting in a significant suppression of noise rates in the edge strips.
K. Wang, J. Zhou, X. Wang, X. Li, D. Hu, Y. Sun
2023-08-31T08:46:55Z
http://arxiv.org/abs/2308.16556v1
# Batch test of MRPC3b for CBM-TOF/STAR-eTOF ###### Abstract The Compressed Baryonic Matter (CBM) experiment is one of the major scientific spectrometers of the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt. As one of the core sub-systems in the CBM experiment for charged hadron identification, the Time-of-Flight (TOF) system is required to have a time resolution better than 80 ps. According to the final state particle flux distribution, the CBM-TOF will be constructed with several types of Multi-gap Resistive Plate Chambers (MRPC). In the outer region of the TOF wall where the particle fluxes are around 1 kHz/cm\({}^{2}\), MRPCs with ultra-thin float glass electrodes are considered as a cost-effective solution. MRPC3b prototypes have been developed and tested with excellent performance which could meet all the requirements. Before the construction of CBM-TOF, approximately 80 MRPC3bs are assembled for the STAR endcap TOF (STAR-eTOF) upgrade at RHIC as part of the FAIR Phase-0 programs for CBM-TOF, which provides a valuable opportunity for detector stability tests under high flux environments. This paper will introduce the batch test of the MRPC3bs for the STAR-eTOF upgrade. A time resolution better than 70 ps and an efficiency of around 95% are achieved. Notably, during the batch test, it has been observed that the noise rates of the two edge strips in each counter are significantly higher than those of the middle strips. Simulations with the Computer Simulation Technology (CST) Studio Suite are carried out and several kinds of MRPC prototypes are designed and tested accordingly. Based on the simulation and test results, the design of the MRPC3b has been further optimized, resulting in a significant suppression of noise rates in the edge strips. keywords: Resistive Plate Chamber, Time of flight, gaseous detectors, noise rate + Footnote †: journal: Journal of High Energy Physics ## 1 Introduction The Compressed Baryonic Matter (CBM) experiment is a future fixed-target heavy-ion experiment located at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt. The main physics goal of the CBM experiment is to map the phase diagram of strongly interacting matter at high baryon densities through the study of heavy-ion collisions in the beam energy range from 2 AGeV to 11 AGeV [1; 2]. Figure 1(a) shows the conceptual design of the CBM spectrometer. A 120 m\({}^{2}\) Time-of-Flight (TOF) system located 10 m downstream from the target will provide charged hadron (protons, kaons, pions) identification up to a particle momentum of about 4 GeV/\(c\) [3]. The Multi-gap Resistive Plate Chamber (MRPC) [4], as a gaseous detector with excellent timing performance and low cost, is selected as the basic detector component of the CBM-TOF. According to the flux rates, from tens of kHz/cm\({}^{2}\) to around one kHz/cm\({}^{2}\), the CBM-TOF is divided into several sub-regions, as schematically shown in Figure 1(b) [5]. In the outer region marked in blue, which covers over half of the total TOF wall, the estimated particle flux is around 1 kHz/cm\({}^{2}\). MRPCs made of ultra-thin float glass electrodes (referred to as MRPC3b and MRPC4 in the CBM TOF TDR) will be employed in this region, providing an appropriate rate capability at an economical cost. The MRPC3b prototype has been developed and tested with excellent performance [6; 7]. 
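As a rough, back-of-the-envelope illustration of why an 80 ps resolution is matched to hadron identification up to about 4 GeV/\(c\) over the roughly 10 m flight path quoted above, the following sketch computes the pion–kaon time-of-flight difference; the particle masses are standard PDG values and the script is an added illustration, not part of the original analysis.

```python
# Back-of-the-envelope check (not from the paper): pi/K time-of-flight
# difference over the ~10 m target-to-TOF distance quoted above, compared
# with the required 80 ps system time resolution.
import math

C_M_PER_NS = 0.299792458            # speed of light in m/ns
M_PI, M_K = 0.13957, 0.49368        # charged pion and kaon masses in GeV/c^2

def flight_time_ns(p_gev: float, mass_gev: float, path_m: float = 10.0) -> float:
    """Time of flight of a particle with momentum p and the given mass."""
    beta = p_gev / math.sqrt(p_gev**2 + mass_gev**2)
    return path_m / (beta * C_M_PER_NS)

for p in (1.0, 2.0, 3.0, 4.0):
    dt_ps = (flight_time_ns(p, M_K) - flight_time_ns(p, M_PI)) * 1e3
    print(f"p = {p:.0f} GeV/c : Delta t(K - pi) = {dt_ps:6.1f} ps")

# At 4 GeV/c the pi/K difference is still roughly 230 ps, i.e. about three
# times the 80 ps requirement, consistent with PID up to ~4 GeV/c.
```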
In order to further test the long-term stability under a high flux environment, the CBM-TOF modules were installed for the STAR endcap TOF (STAR-eTOF) upgrade as part of the FAIR Phase-0 program at the end of 2018 [8], among which 81 MRPC3bs were constructed and tested by USTC. This paper presents the design of the MRPC3b counter and details the batch test conducted on the 81 MRPC3bs for the STAR-eTOF upgrade, including the cosmic batch test platform and the batch test results. In addition, an issue was observed during the batch test: the noise rates of the two edge strips were much higher than those of the middle ones. Simulations based on the Printed Circuit Board (PCB) Studio of the Computer Simulation Technology (CST) Studio Suite were carried out, leading to the design and testing of multiple MRPC prototypes. Through these research and development efforts, the optimized MRPC3b structure demonstrates a notable improvement in edge strip noise rates. This optimized design will be implemented in the final construction of the CBM-TOF. Figure 1: (a) The conceptual design of the CBM spectrometer[9]. (b) The sketch of CBM-TOF wall. ## 2 MRPC3b counter MRPC3b is positioned at the top and bottom of the CBM TOF wall, corresponding to the low rate region shaded in Figure 1(b). A system time resolution below 80 ps and an efficiency above 95% are required for the TOF system of CBM [10]. Figure 2 shows the schematic of the MRPC3b structure. It is a two-stack MRPC counter with 10 gas gaps. The resistive plates are made of ultra-thin float glass, merely 0.28 mm thick, to enhance the rate capability [11]. In each stack, 6 glass sheets are separated by fishing lines to form the gas gaps of 0.23 mm. The outer surfaces of the glass stacks are sprayed with graphite layers, serving as the High Voltage (HV) electrodes. Figure 2(b) shows the readout strip pattern of MRPC3b. It has 32 double-end readout strips covering the active area of 32 cm \(\times\) 27.6 cm. The pitch of the strips is 1 cm while the strip width is 0.7 cm. Three layers of readout strips collect the induced signals from the avalanches, with the negative signals on the middle layer and the positive ones on the outer layers. The differential signals are then read out by PADI Front End Electronics (FEE) [12] boards from both strip ends. In order to suppress signal reflections, the characteristic impedance of the strips is carefully calculated and designed to be 50 \(\Omega\) differential, which matches the input impedance of PADI. The MRPC3b prototypes were tested at the Beijing Electron-Positron Collider (BEPC) E3 line in 2016 with a 700 MeV hadron beam [7]. The results show that MRPC3b performs well, with a time resolution better than 60 ps and an efficiency higher than 98%. Figure 2: (a) The schematic of MRPC3b structure. (b) The readout strip pattern of MRPC3b. ## 3 Batch test of 81 MRPC3bs ### STAR-eTOF upgrade The Solenoidal Tracker at RHIC (STAR) experiment at Brookhaven National Laboratory is one of the major heavy-ion collision experiments in the world. The STAR Collaboration proposed the Beam Energy Scan phase II (BES-II) program with relevant improvements to the STAR detectors [13]. The newly established eTOF is one of the major upgrades [14]. Figure 3: The conceptual design of eTOF wheel for STAR, numbered corresponding to the STAR inner-TPC sectors. Figure 4: The cosmic ray test system for MRPC3b batch test. The eTOF system expands
the pseudorapidity coverage and provides particle identification (PID) capabilities in the range of -1.6 \(<\)\(\eta\)\(<\)-1.1 for collider collision mode and at mid-rapidity with center-of-mass energies from 3.0 to 4.5 GeV for the fixed target mode. As part of the FAIR Phase-0 program for CBM-TOF, the STAR-eTOF project provides a unique opportunity to test the counter stability and the commissioning of the CBM-TOF modules before the operation of CBM experiment. Figure 3 shows the conceptual design of the eTOF wheel for STAR. The STAR-eTOF is composed of 12 sectors, each containing 3 modules with 3 MRPC detectors per module. Sectors labeled in blue are comprised of MRPC3as [15], while sectors labeled in red consist of MRPC3bs. In total, 81 MRPC3bs has been constructed and tested in USTC and subsequently sent to Heidelberg for module assembly. The eTOF wheel had been fully installed at the STAR experiment by November 2018. ### Batch test platform for MRPC3b A cosmic ray test system is built for MRPC3b batch test. The system consists of two plastic scintillator counters each coupled with one Photomultiplier Tube (PMT) on single end. The size of the scintillator is 20 cm \(\times\) 40 cm. An aluminum gas tight box, which can accommodate 4 MRPCs orderly, is positioned between the scintillators, as shown in Figure 4(a). The signals from the two PMTs are initially discriminated by the Low Threshold Discriminator (CAEN N845) and then sent to a coincidence unit to generate a trigger for the system. Two PADI boards are plugged on each end of the MRPC directly to amplify and discriminate the MRPC signals. The back-end signals are processed and acquired by a 320-channel time digitizing and readout electronic system (shown in Figure 4(b)) designed by the fast electronics laboratory of USTC[16]. In this system, the discriminated MRPC signals are digitized by the Time-to-Digital Converter (TDC). Then, the digitized time data are aggregated at the TDC Readout Motherboard (TRM) and transmitted to the Data Readout Modules (DRM) via optical links. The Clock Trigger Module (CTM) distributes the clock and trigger to TRM reversely. Finally, the DRMs relay the data to the Data Acquisition System (DAQ) through the Gigabit Ethernet ports in parallel. The laboratory test results indicate that the electronics can achieve a time resolution better than 20 ps. ### MRPC3b Batch test results 81 MRPC3bs have been constructed in USTC with a strict quality assurance (QA) and quality control (QC) process, including materials checking, process checking, and performance checking[17]. All these MRPC3bs had been tested in laboratory with the cosmic ray test platform described in 3.2 before they were installed at STAR-eTOF. For the MRPC3b performance check, we test the current of all the counters under the working condition. The working gas mixture is composed of 95% Freon and 5% iso-C\({}_{4}\)H\({}_{10}\). According to the former beam test and cosmic test results[6; 7], the operating HV is set to \(\pm\)6000 V and the PADI threshold is set to -347.2 mV. Figure 5 shows statistical working current under HV of \(\pm\)6000 V for all the 81 MRPC3bs, whose serial numbers ranged from #16 to #96. The current is read out by WIENER HV module (EHS8280P/N) with 1 nA accuracy. The results demonstrate that the currents for all MRPC3bs remained below 100 nA, but they increase with MRPC3b serial number. This trend might be caused by the variations in humidity during the batch test phase. 
The batch testing process spanned approximately six months, from spring to summer, during which the ambient humidity in Hefei, China, increased accordingly. Figure 5: Working current under HV of \(\pm\)6000 V of all the MRPC3bs. Due to the time constraint for the counter mass production, only a subset of the MRPC3bs could be tested for time resolution and efficiency performance. We randomly selected and tested 32 MRPC3bs out of the total 81 counters. The statistical results of efficiency (Figure 6(a)) and time resolution (Figure 6(b)) are shown. From the plots we can see that the mean value of the efficiency is around 95% and the time resolution is better than 70 ps, with a mean value of \(\sim\) 55 ps. These performances meet the CBM-TOF requirements. Figure 6: (a) Efficiency of 32 tested MRPC3bs, with the red dashed line representing 90% efficiency. (b) Time resolution of the 32 tested MRPC3bs, with the red dashed line representing a time resolution of 80 ps. The noise rate is an important parameter of the MRPC, especially for the free-streaming readout mode of the CBM experiment. To evaluate the noise rate, we utilize the batch test system with a random trigger generated by the CAEN V1718 module [18]. During the testing process, if the DAQ system records an event with signals detected at both ends of a strip within the matching window of the random trigger signal, it is classified as a noise count on that specific strip. The corresponding noise rate is evaluated in this way. Figure 7(a) shows the statistical analysis of the noise rate of the 32 MRPC3bs. All counters show a very low noise rate, with an average value as low as 1.3 Hz/cm\({}^{2}\) across the entire active area. In general, all the tested MRPC3bs pass the quality inspections and show excellent performance. However, it is noticed that the noise rates of the two edge strips are significantly higher than the others. Specifically, the near-end strip, which is the strip closest to the HV injection side, exhibits a noise rate approximately 8 times higher, while the far-end strip, which is the strip farthest from the HV injection side, shows a noise rate approximately 2 times higher. This is illustrated in Figure 7(b), where the strip serial number is defined in ascending numerical order from the HV injection side. For a better understanding of this effect, further investigations have been carried out, which will be discussed in the next section. Figure 7: (a) Statistics of the noise rate of 32 MRPC3bs, with the red dashed line representing a 3.0 Hz/cm\({}^{2}\) noise rate. (b) A typical noise rate distribution as a function of the strip serial number; the strip number is defined from the HV injection side in ascending numerical order, and the red dotted line represents the average noise rate excluding the two end strips.
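The random-trigger evaluation described above boils down to a simple normalization of the coincidence counts, sketched below. The trigger count and matching window are assumed, illustrative values, not the actual settings of the batch test; the per-strip area follows from the 1 cm pitch and the 27.6 cm strip length given in Section 2.

```python
# Minimal sketch of the random-trigger noise-rate normalization described
# above.  N_TRIGGERS and WINDOW_S are assumed, illustrative numbers, NOT the
# actual batch-test settings; the strip area uses the 1 cm pitch (rather than
# the 0.7 cm strip width) times the 27.6 cm strip length of MRPC3b.
N_TRIGGERS = 1_000_000          # number of random triggers (assumed)
WINDOW_S = 2.0e-6               # matching window per trigger in seconds (assumed)
STRIP_AREA_CM2 = 1.0 * 27.6     # cm^2 attributed to one strip

def noise_rate_hz_per_cm2(noise_counts: int) -> float:
    """Counts with both strip ends firing inside the window -> rate density."""
    open_gate_time_s = N_TRIGGERS * WINDOW_S
    return noise_counts / (open_gate_time_s * STRIP_AREA_CM2)

# Example: 110 coincidence counts on one strip with the assumed settings
print(f"{noise_rate_hz_per_cm2(110):.2f} Hz/cm^2")   # ~2.0 Hz/cm^2
```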
## 4 Noise rate study From the batch test, we found that the two edge strips show much higher noise rates than the others. This phenomenon is suspected to be caused by the cross-talk from the HV source and the edge effect. Figure 8 shows the design details around the HV injection point - a copper pad with a size of 4 mm \(\times\) 30 mm. The distance between the HV pad and the closest strip is merely 2 mm. Figure 8: The design of the HV injection pad. Simulations are carried out to study the noise rate issue correlated with the design parameters, and prototype MRPCs are built and tested following the simulation results. ### Simulation settings and results In order to study the cross-talk related to the HV pads, simulations are carried out using the PCB Studio of the CST Studio Suite. Figure 9 shows a typical simplified PCB model built in the simulation environment. The PCB model used in the simulation consists of three layers: copper signal strips, a copper HV connecting pad, and a graphite layer, arranged from top to bottom. Each pair of adjacent layers is separated by a corresponding insulating layer or masking layer, with the thicknesses employed in the simulation matching those of the actual MRPC3b. Since we focus mainly on the HV pad and the nearby strips, the models include only 3 strips with a 1 cm pitch and a 3 mm gap between strips. The length of the strips is set to 27.6 cm, identical to the design parameters of the MRPC3b. Figure 9: A typical simplified simulation model of MRPC3b focusing on the HV pad. In this simulation, a 5 mV sinusoidally modulated voltage source is injected into the HV pad and the HV pad is connected to the graphite layer. Firstly, the HV pad is set to a 4 mm \(\times\) 30 mm rectangular shape, as in the real MRPC3b. The distance between the HV pad and the nearest strip varies from 2 mm to 14 mm. Figure 10(a) shows typical output signals of the three readout strips when the distance is 6 mm. The red line P1 is the signal read out from the nearest strip and its amplitude is 0.17 mV. The green and blue lines show the signals of the second and third strips respectively, whose amplitudes are much lower. As a comparison, if the same stimulus source is injected into the graphite layer directly at the same distance without the HV pad, the output signals are shown in Figure 10(b). The signal amplitude from the first strip is about 0.008 mV, which is much smaller than in the model with an HV pad. These results indicate that the HV pad has a strong impact on the nearest strip but little on the others. Figure 10(c) summarizes the signal amplitudes of the nearest strip as a function of the distance to the HV pad. The amplitudes of the signals decrease as the distance increases. Several other models are also built in the simulation by changing the length, size, and shape of the HV pad. Table 1 lists the detailed structural information and the simulated amplitudes on the nearest strip for the four models. The distances between the HV pad and the strips are fixed at 6 mm for all these models. The HV pads of R6-1 and S6 have the same area, but the length of S6 is half that of R6-1. The HV pad of R6-2 has the same length as S6 and the same width as R6-1. For the cases of R6-2 and S6, whose lengths are the same, the results are very close to each other. But for the longer HV pad, R6-1, the amplitude is a bit larger. These results suggest that the length of the HV pad has a relevant effect on the cross-talk. The circular case with a radius of 6 mm, C6, shows the smallest cross-talk effect, with an amplitude of 0.08 mV. The simulation clearly indicates that both the distance between the HV pad and the strips and the dimensions of the HV pad affect the cross-talk on the nearest strip. Noise rate tests on different prototypes are carried out to confirm the simulated results. ### Prototype test Several MRPC prototypes are designed and built according to the simulation results. 
The detailed structural information of these prototypes is provided in Table 2, allowing for a convenient comparison of the effects resulting from \begin{table} \begin{tabular}{c c c c c} \hline Model & R6-1 & R6-2 & S6 & C6 \\ \hline HV pad & Rectangle & Rectangle & Rectangle & Circle \\ (size) & (4 mm\(\times\)30 mm) & (4 mm\(\times\)15 mm) & (8 mm\(\times\)15 mm) & (r = 6 mm) \\ Amplitude (mV) & 0.17 & 0.11 & 0.10 & 0.08 \\ \hline \end{tabular} \end{table} Table 1: Simulation results of different models. Figure 10: (a) Simulation results of output signals for a model with 6 mm distance between the HV pad and the strips. (b) Output signals of three strips with stimulus source injected directly into the graphite layer at a 6 mm distance to the strips. (c) Output signal amplitudes of the nearest strip as a function of the distance to the HV pad. variations in the distance between the HV pad and the nearest strip, as well as the shape of the HV pad. Figure 11 shows the photo of different types of designed PCBs. All the prototypes have 8 strips, each measuring 20 cm in length, with a pitch of 1 cm and an interval of 0.3 cm. To examine the influence of the edge effect, the distance between the glass edge and the far-end strip has been increased, ranging from 2.1 cm to 3.1 cm. This is considerably larger than the distance used in the previous design of MRPC3b, which was 0.7 cm. They closely resemble MRPC3b in terms of structure, with the only difference being the size. Among the prototypes, R4, R6-1, and R8 have varying distances between the HV pad and the nearest strip, while maintaining the same HV pad shape. Conversely, R6-1, S6, and C6 possess the same distance but exhibit different HV pad shapes. Subsequently, noise rate tests are conducted on all the constructed MRPC prototypes using the system discussed in Section 3.2. Table 3 lists the test results of all the prototypes. It can be observed that the noise rates of the far-end strips in all the tested prototypes are very low and comparable to the average of the other strips. This phenomenon is likely due to the reduced edge effect achieved in the design. For the near-end strips, a comparison of the MRPC3b, R4, R6-1, R8 prototypes reveals a significant decrease in the noise rate as the distance between the HV pad and the strip increases. This observation is consistent with the simulation results. The noise rate of R6-1 is slightly lower than that of R8, which could be due to the amplitude \begin{table} \begin{tabular}{c c c c c c} \hline \hline Model & R4 & R6-1 & R8 & S6 & C6 \\ \hline HV pad & Rectangle & Rectangle & Rectangle & Rectangle & Circle \\ & (4\(\times\)30 mm) & (4\(\times\)30 mm) & (4\(\times\)30 mm) & (8\(\times\)15 mm) & (r=6 mm) \\ Distance between HV pad & 4 mm & 6 mm & 8 mm & 6 mm & 6 mm \\ and near-end strip & 31 mm & 29 mm & 27 mm & 25 mm & 21 mm \\ Distance between glass edge & 31 mm & 29 mm & 27 mm & 25 mm & 21 mm \\ and far-end strip & & & & & \\ \hline \hline \end{tabular} \end{table} Table 2: Structural information of PCBs. change becomes negligible when the distance exceeds 6 mm, as indicated by the previous simulation. Comparing R6-1, S6, and C6, we find that the circular pad C6 prototype has the smallest noise rate. However, since the average noise rate level differs across detectors and strips, the impact of these prototypes on HV cross-talk appears to be negligible. Overall, these test results are well consistent with the findings from the previous simulations. 
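To make the comparison above easier to see at a glance, the small sketch below recomputes the edge-to-middle noise ratios directly from the numbers reported in Table 3; the script itself is an added illustration and not part of the original analysis.

```python
# Edge-strip noise relative to the average of the other strips, recomputed
# from the Table 3 values (Hz/cm^2); illustrative helper, not from the paper.
table3 = {
    #         (far-end, near-end, average of the other strips)
    "MRPC3b": (1.80, 5.54, 0.67),
    "R4":     (0.58, 1.09, 0.68),
    "R6-1":   (0.52, 0.36, 0.47),
    "R8":     (0.57, 0.59, 0.57),
    "S6":     (0.42, 0.49, 0.31),
    "C6":     (0.31, 0.29, 0.39),
}

for model, (far, near, middle) in table3.items():
    print(f"{model:7s} far/middle = {far/middle:4.1f}   near/middle = {near/middle:4.1f}")

# Only the original MRPC3b stands out (near-end ~8x, far-end ~2.7x the middle
# strips); the redesigned prototypes keep both edge strips close to the rest.
```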
### Optimization for MRPC3b To reduce the noise rate on edge strips, the structure of MRPC3b has been optimized based on the simulation and test results. The optimized structure is depicted in Figure 12(a). This optimization takes into account the edge effect, cross-talk of the HV pad, and the requirements for the active area. In the optimized design, the HV pad is rectangular in shape, measuring 8 mm \(\times\) 15 mm. The distance between the HV pad and the first strip is set to 6 mm, while the distance from the glass edge to both end strips is increased to 18 Figure 11: Photo of the prototype PCBs on top of the MRPC3b PCB. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Model & & MRPC3b & R4 & R6-1 & R8 & S6 & C6 \\ \hline \multirow{4}{*}{Noise Rate( Hz/cm\({}^{2}\))} & Far-end strip & 1.80 & 0.58 & 0.52 & 0.57 & 0.42 & 0.31 \\ & Near-end strip & 5.54 & 1.09 & 0.36 & 0.59 & 0.49 & 0.29 \\ & Average of other strips & 0.67 & 0.68 & 0.47 & 0.57 & 0.31 & 0.39 \\ \hline \hline \end{tabular} \end{table} Table 3: The noise rate test results of different prototypes. mm. To accommodate this design, the size of the glass has also been increased accordingly. It is important to note that this improvement has little impact on the effective coverage of the CBM-TOF module. The MRPC detectors in the CBM-TOF module are rotated towards the target point, resulting in the overlapping regions that cover the inefficient area. The optimized MRPC3b has been assembled and tested to evaluate the noise rate. Figure 12(b) provides a comparison between the optimized MRPC3b and the original MRPC3b, clearly demonstrating a significant reduction in the noise rate of the near-end and far-end strips. Specifically, the near-end strip exhibits a noise rate of 0.87 Hz/cm\({}^{2}\), the far-end strip shows a noise rate of 0.69 Hz/cm\({}^{2}\), both of which have significantly decreased compared to the previous design. Additionally, the noise rates of the edge strips are now comparable to the average noise rate of 0.68 Hz/cm\({}^{2}\) for the other strips. This validates the effectiveness of the approach in reducing noise rates at the edge strips. The optimized structure will be implemented in the CBM-TOF wall construction. Figure 12: (a) The optimized design of MRPC3b’s PCB structure. (b) The noise rate comparison of the optimized and the original MRPC3b, with the dot line representing the average value of optimized MRPC3b noise rate except for 2 end strips. ## 5 Conclusion In conclusion, the mass production of 81 MRPC3b counters for the STAR-eTOF upgrade has been successfully completed. HV training and batch testing using a cosmic ray test system have demonstrated the overall excellent performance of the counters. The statistical efficiency was found to be approximately 95%, and all tested counters exhibited a time resolution better than 70 ps, indicating their suitability for the intended application. During the testing phase, an issue regarding the abnormal increase in noise rate on the edge strips was observed. Through simulation using CST Studio and subsequent prototype tests, the influence of HV pad crosstalk and edge effects on the noise rate was investigated. The optimized MRPC3b structure, derived from the simulation and validated through prototype testing, has effectively addressed this issue. The noise rates of the edge strips have been brought to levels comparable to those of the middle strips, ensuring a more uniform performance across all strips. 
The optimized MRPC3b sets the stage for its implementation and will be installed in the future CBM-TOF wall. ## Acknowledgments The authors thank the high energy physics group of USTC. This project is supported by National Key Programme for S&T Research and Development under Grant NO. 2018YFE0205202, National Natural Science Foundation of China under Grant No. 11975228 and 12205296, and the State Key Laboratory of Particle Detection and Electronics under Grant No. SKLPDE-ZZ-202320.
2309.13288
On the residual Monge-Ampère mass of plurisubharmonic functions with symmetry, II
The aim of this article is to study the residual Monge-Amp\`{e}re mass of a plurisubharmonic function with an isolated singularity, provided with the circular symmetry. With the aid of Sasakian geometry, we obtain an estimate on the residual mass of this function with respect to its Lelong number and maximal directional Lelong number. This result partially answers the zero mass conjecture raised by Guedj and Rashkovskii.
Weiyong He, Long Li, Xiaowei Xu
2023-09-23T07:04:31Z
http://arxiv.org/abs/2309.13288v2
# On the residual Monge-Ampere mass of plurisubharmonic functions with symmetry, II ###### Abstract. The aim of this article is to study the residual Monge-Ampere mass of a plurisubharmonic function with an isolated singularity, provided with the circular symmetry. With the aid of Sasakian geometry, we obtain an estimate on the residual mass of this function with respect to its Lelong number and maximal directional Lelong number. This result partially answers the zero mass conjecture raised by Guedj and Rashkovskii. _In memory of Prof. Demailly_ ## 1. Introduction The zero mass conjecture for plurisubharmonic functions, raised by Guedj and Rashkovskii, is a fundamental but difficult problem in pluripotential theory, see [16], [33]. It states that the Lelong number of a plurisubharmonic function (with an isolated singularity) must be positive if its complex Monge-Ampere measure has a Dirac mass at the singularity. Equivalently, this says that the residual Monge-Ampere mass at the singularity of the plurisubharmonic function must be zero if it has zero Lelong number at this point. In the literature, there have been many important contributions towards this conjecture, including works like [9], [30], [31], [32], [25], [6], [1] and [20]. In particular, Rashkovskii [30] confirms it, provided that the plurisubharmonic function has the _toric symmetry_. As a generalization of the toric symmetry, the so-called _circular symmetry_ has been recently studied, and the zero mass conjecture is confirmed for all circular symmetric plurisubharmonic functions in \(\mathbb{C}^{2}\), see [5], [26] and [28]. In this paper, we study the zero mass conjecture for plurisubharmonic functions with circular symmetry in higher dimensions. Consider the family \(\mathcal{F}(B_{1})\) consisting of all circular symmetric (or \(S^{1}\)-invariant) plurisubharmonic functions on the unit ball \(B_{1}\) in \(\mathbb{C}^{n+1}\) that are locally bounded outside the origin, see Definition 2.1. Moreover, we denote \(\mathcal{F}^{\infty}(B_{1})\) by the sub-collection of \(\mathcal{F}(B_{1})\) that requires \(C^{2}\)-continuity of the functions outside the origin, see Definition 2.2. In the case of complex dimension two, the second-named author proved the following estimate. **Theorem 1.1** ([28]).: _For a function \(u\in\mathcal{F}(B_{1})\) in \(\mathbb{C}^{2}\), we have_ \[[\nu_{u}(0)]^{2}\leq\tau_{u}(0)\leq 2\lambda_{u}(0)\cdot\nu_{u}(0)+[\nu_{u}(0)]^{2}. \tag{1.1}\] Here \(\nu_{u}(0)\) and \(\tau_{u}(0)\) denote the Lelong number and the residual Monge-Ampere mass of \(u\) at the origin, respectively. Moreover, the constant \(\lambda_{u}(0)\) is the _maximal directional Lelong number_ of \(u\) at the origin. Let \(\ell_{\zeta}:=\mathbb{C}\cdot\zeta\) be the complex line through the origin in the complex direction \(\zeta\in(\mathbb{C}^{n+1})^{*}\). This terminology stems from taking the supremum of the Lelong number at the origin of the restriction \(u|_{\ell_{\zeta}}\) over all complex directions. For more details, see Definitions 2.6 and 2.7. However, taking suprema does not guarantee finiteness in general. Fortunately, it turns out that \(\lambda_{u}(0)\) is indeed a non-negative real number for any function \(u\in\mathcal{F}(B_{1})\), see Proposition 2.8. Then the zero mass conjecture in \(\mathbb{C}^{2}\) directly follows from the estimate in equation (1.1). More precisely, this estimate is due to a decomposition formula of the complex Monge-Ampere measure, see Theorem 4.4, [28].
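Before describing the strategy of proof, it may help to keep a standard example in mind (added here for orientation, and not taken from [28]): for \(u(z)=\log|z|\in\mathcal{F}^{\infty}(B_{1})\), the restriction to every complex line through the origin equals \(t=\log r\), so \(\nu_{u}(0)=\lambda_{u}(0)=1\), while with the normalization \(d^{c}=\frac{i}{2}(\bar{\partial}-\partial)\) used below one has \[(dd^{c}\log|z|)^{n+1}=\pi^{n+1}\,\delta_{0},\] so that \(\tau_{u}(0)=1\). In \(\mathbb{C}^{2}\) the estimate (1.1) then reads \(1\leq 1\leq 3\), and in every dimension the lower bound \([\nu_{u}(0)]^{n+1}\leq\tau_{u}(0)\) is attained with equality.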
To obtain this formula, we have utilized a particular local coordinate system (called the complex Hopf-coordinate) on \(\mathbb{C}^{2}\), which is naturally induced from the famous Hopf-fiberation \(S^{1}\hookrightarrow S^{3}\overset{p}{\longrightarrow}\mathbb{CP}^{1}\). In higher dimensions, the new observation is that there is a natural _Kahler cone structure_ on \((\mathbb{C}^{n+1})^{*}=S^{2n+1}\times\mathbb{R}_{+}\). That is to say, we can consider the _standard Sasakian structure_ of the unit sphere \(S^{2n+1}\) with the base manifold \(\mathbb{CP}^{n}\). It is nothing but the Hopf-fiberation, viewed as a principal circle bundle over the complex projective space: \[S^{1}\hookrightarrow S^{2n+1}\overset{p}{\longrightarrow}\mathbb{CP}^{n}.\] Sasakian geometry serves as a bridge connecting the two Kahler structures of the cone and of the base manifold. It enables us to achieve the decomposition formula (Theorem 4.4) in any dimension. Compared to the domain case, this formula can be viewed as the push forward of the complex Monge-Ampere measure of a \(u\in\mathcal{F}^{\infty}(B_{1})\) to the base manifold, under the Sasakian structure of \(S^{2n+1}\), see Section 5.3. For more discussion on Sasakian manifolds, the readers are referred to [34], [29], [7], [23] and [24]. Finally, we come up with the following estimate on the residual Monge-Ampere mass. **Theorem 1.2** (Theorem 5.11).: _For a function \(u\in\mathcal{F}(B_{1})\), there exists a dimensional constant \(C_{n}\geq n+1\) such that we have_ \[[\nu_{u}(0)]^{n+1}\leq\tau_{u}(0)\leq 2C_{n}[\lambda_{u}(0)]^{n}\cdot\nu_{u}(0). \tag{1.2}\] The left-hand side of the inequality in equation (1.2) was indicated by Cegrell [10], and the zero mass conjecture follows from the right-hand side. **Theorem 1.3** (Theorem 5.12).: _For a function \(u\in\mathcal{F}(B_{1})\), we have_ \[\nu_{u}(0)=0\Rightarrow\tau_{u}(0)=0. \tag{1.3}\] The estimate in equation (1.2) is stronger than the one in complex dimension two. This is because of a better understanding of the positivity conditions, see Lemma 4.5. However, this inequality is not sharp. Moreover, this estimate fails if the plurisubharmonic function is no longer circular symmetric, see Example 6.13, [28]. In the last section, we provide a different proof of the decomposition formula via Cartan's method of moving frames, see [12], [13]. This should be useful in future work when there are no symmetry conditions. **Acknowledgment:** We are very grateful to Prof. Xiuxiong Chen for his continuous support and encouragement in mathematics. This problem was posed to the second-named author when he was studying with Prof. Demailly at the Fourier Institute, Grenoble. It is also a great pleasure to have discussions with Song Sun, Chengjian Yao and Jian Wang. The third-named author is supported by the NSFC (No. 11871445), the Stable Support for Youth Team in Basic Research Field, CAS (YSBR-001), and the Fundamental Research Funds for the Central Universities. ## 2. Plurisubharmonic functions with circular symmetry In this section, we recall a few basic facts about \(S^{1}\)-invariant plurisubharmonic functions. Denote by \(z:=(z^{0},\cdots,z^{n})\) the complex Euclidean coordinate on \(\mathbb{C}^{n+1}\). There is a natural \(S^{1}\)-action that sends \[z\to e^{i\theta}z:=(e^{i\theta}z^{0},\cdots,e^{i\theta}z^{n}),\] for all angles \(\theta\in\mathbb{R}\). A domain \(D\) is called balanced if it is invariant under this \(S^{1}\)-action, and a function \(u\) on a balanced domain is said to be circular symmetric or \(S^{1}\)-invariant if it satisfies \(u(z)=u(e^{i\theta}z)\). Assume that \(D\) contains the origin. 
We say that a plurisubharmonic function \(u\) on \(D\) has an isolated singularity at the origin, if it is locally bounded on the punctured domain \(D^{*}:=D-\{0\}\) and \(u(0)=-\infty\). Then we can consider the following two families of plurisubharmonic functions. **Definition 2.1**.: _An \(S^{1}\)-invariant plurisubharmonic function belongs to the family \(\mathcal{F}(D)\), if it is \(L^{\infty}_{loc}\) on \(D^{*}\)._ **Definition 2.2**.: _An \(S^{1}\)-invariant plurisubharmonic function belongs to the family \(\mathcal{F}^{\infty}(D)\), if it is \(C^{2}\)-continuous on \(D^{*}\)._ We adopt the following normalization condition, \(\sup_{D}u=-1\), possibly after shrinking \(D\) to a smaller balanced domain. In the previous work [28], we discovered several useful properties of a function \(u\in\mathcal{F}(D)\). Although these properties were stated in complex dimension two, they adapt to all dimensions. We recall these facts. ### The residual mass Let \(B_{R}\subset\mathbb{C}^{n+1}\) be the open ball centered at the origin with radius \(R>0\), and \(B_{R}^{*}\) be the corresponding punctured ball. Denote \(S_{R}\) by the boundary of the ball, satisfying the equation \[\sum_{A=0}^{n}|z^{A}|^{2}=R^{2}. \tag{2.1}\] In this paper, we always take \(D:=B_{1}\), and focus on the local behaviors of a function near the origin. Consider a function \(u\in\mathcal{F}(B_{1})\); then its complex Monge-Ampere measure is a closed positive \((n+1,n+1)\)-current: \[\mathrm{MA}(u):=dd^{c}u\wedge\cdots\wedge dd^{c}u=(dd^{c}u)^{n+1}, \tag{2.2}\] where the wedge is taken in the sense of Demailly and Bedford-Taylor. For more details, see [15], [3], [2], [10] and [8]. Here we have used the notation \[d:=\partial+\bar{\partial};\ \ \ \ d^{c}:=\frac{i}{2}(\bar{\partial}-\partial).\] Fixing an \(R\in(0,1)\), we take this measure on the ball as \[\mathrm{MA}(u)(B_{R}):=\int_{B_{R}}(dd^{c}u)^{n+1}.\] The residual Monge-Ampere mass of \(u\) at the origin is equal to the limit: \[\tau_{u}(0)=\frac{1}{\pi^{n+1}}\mathrm{MA}(u)(\{0\})=\frac{1}{\pi^{n+1}}\lim_{R\to 0}\mathrm{MA}(u)(B_{R}). \tag{2.3}\] Next we introduce the standard regularization of a plurisubharmonic function. Let \(\rho(z):=\rho(|z|)\) be a non-negative smooth mollifier in \(\mathbb{C}^{n+1}\) satisfying \(\rho(r)=0\) for all \(r\geq 1\), and \[\int_{\mathbb{C}^{n+1}}\rho(z)\ d\lambda(z)=1,\] where \(d\lambda\) is the Lebesgue measure. Take its rescaling for each \(\varepsilon>0\) small as \[\rho_{\varepsilon}(z):=\varepsilon^{-2n-2}\rho(z/\varepsilon).\] For a function \(u\in\mathcal{F}(B_{1})\), we can define its regularization as a sequence of smooth plurisubharmonic functions that decreases to \(u\) pointwise: \[u_{\varepsilon}(z):=(u*\rho_{\varepsilon})(z)=\int_{|z-y|\leq\varepsilon}\rho_{\varepsilon}(z-y)u(y)d\lambda(y)=\int_{|w|\leq 1}u(z-\varepsilon w)\rho(w)d\lambda(w). \tag{2.4}\] This sequence converges to \(u\) in \(C^{2}\)-norm on any compact subset of \(B_{1}^{*}\) if \(u\in\mathcal{F}^{\infty}(B_{1})\). Fixing a small \(\delta>0\), the regularization \(u_{\varepsilon}\) is in \(\mathcal{F}^{\infty}(B_{1-\delta})\) for all \(\varepsilon\) small enough. Then we have the following convergence of the Monge-Ampere masses. 
**Lemma 2.3** ([28]).: _For a function \(u\in\mathcal{F}(B_{1})\), the complex Monge-Ampere measure of its regularization \(u_{\varepsilon}\) converges on a ball as_ \[\mathrm{MA}(u)(B_{R})=\lim_{\varepsilon\to 0^{+}}\mathrm{MA}(u_{ \varepsilon})(B_{R}), \tag{2.5}\] _for almost all \(R\in(0,1)\)._ This convergence follows the Portemanteau type inequalities below and the fact that the Monge-Ampere measure \(\operatorname{MA}(u)\) vanishes on the boundary hypersphere \(S_{R}\) for almost all \(R\in(0,1)\). In fact we have \[\operatorname{MA}(u)(B_{R}) \leq\liminf_{\varepsilon\to 0^{+}}\operatorname{MA}(u_{ \varepsilon})(B_{R}),\text{on any open ball }B_{R}\subsetneq B_{1}\] \[\operatorname{MA}(u)(\overline{B}_{R}) \geq\limsup_{\varepsilon\to 0^{+}}\operatorname{MA}(u_{ \varepsilon})(\overline{B}_{R}),\text{on any closed ball }\overline{B_{R}}\subset B_{1}. \tag{2.6}\] We note that the convergence in Lemma 2.3 holds for every \(R\in(0,1)\), if we assume \(u\in\mathcal{F}^{\infty}(B_{1})\). Then by the Stokes Theorem we have **Lemma 2.4** ([28]).: _For \(u\in\mathcal{F}^{\infty}(B_{1})\) and for all \(R\in(0,1)\),_ \[\int_{B_{R}}(dd^{c}u)^{n+1}=\int_{S_{R}}d^{c}u\wedge(dd^{c}u)^{n}, \tag{2.7}\] ### Maximal directional Lelong numbers Denote \(S_{u}(0,r)\) to be the average of \(u\) on the sphere \(S_{r}\), \[S_{u}(0,r):=\frac{1}{a_{2n+1}}\int_{|\xi|=1}u(r\xi)d\sigma(\xi), \tag{2.8}\] where \(d\sigma\) is the area form of the unit sphere \(S^{2n+1}\), and \(a_{2n+1}\) is its total area. The Lelong number of a plurisubharmonic function \(u\) at the origin is defined to be the following limit \[\nu_{u}(0)=\lim_{r\to 0^{+}}\nu_{u}(0,r),\text{ with }\nu_{u}(0,r):=r\partial_{r}^{-}S_{u}(0,r).\] In fact, \(S_{u}(0,r)\) is a convex and non-decreasing function of \(t:=\log r\), and hence the limit \(\nu_{u}(0)\) always exits. Assume the function \(u\) is in the family \(\mathcal{F}(B_{1})\) from now on. Denote \(\ell_{\zeta}\) by the complex line through the origin in the complex direction \(\zeta\in\mathbb{C}\mathbb{P}^{n}\) as \[\ell_{\zeta}:=\mathbb{C}\cdot[\zeta],\] where \([\zeta]\) means a homogeneous coordinate of \(\zeta\) in \((\mathbb{C}^{n+1})^{*}\). Thanks to the plurisubharmonicity and \(S^{1}\)-symmetry, the restriction \(u|_{\ell_{\zeta}}\) is a convex and non-decreasing function of the variable \(t\in(-\infty,0)\). It is more convenient to use a parametrization \((r,\theta,\zeta,\bar{\zeta})\) of the space \((\mathbb{R}^{2n+2})^{*}\cong(\mathbb{C}^{n+1})^{*}\) induced by the fiber maps of the Hopf-fiberation In this parametrization, the \(r\)-variable denotes the radius function, and \(\theta\) stands for the direction induced by the \(S^{1}\)-action. Then we can re-write the function \(u\) under this parametrization as \[u_{t}(\zeta):=\hat{u}(t,\zeta,\bar{\zeta})=u(e^{t},\zeta,\bar{\zeta}),\] and denote its derivative with respect to \(t\) by \[\dot{u}_{t}(\zeta):=\partial_{t}\hat{u}(t,\zeta,\bar{\zeta})=\frac{d}{dt}u|_{ \ell_{\zeta}}.\] Then it follows for each \(\zeta\in\mathbb{C}\mathbb{P}^{n}\) fixed \[\dot{u}_{t}\geq 0;\ \ \ \ \ddot{u}_{t}\geq 0, \tag{2.9}\] for almost all \(t\in(-\infty,0)\). Moreover, the Lelong number at zero of the restriction \(u|_{\ell_{\zeta}}\) is equal to \[\nu_{u|_{\ell_{\zeta}}}(0)=\lim_{t\to-\infty}\dot{u}_{t}(\zeta)\] It is a well-known fact that the Lelong number of a plurisubharmonic function is invariant under the restriction to almost all complex directions in \(\mathbb{CP}^{n}\). 
This means that we have \[\nu_{u}(0)=\left.\nu_{u|_{\ell_{\zeta}}}(0),\right.\] for almost all \(\zeta\in\mathbb{CP}^{n}\). In particular, the Lelong number \(\nu_{u}(0)\) is the infimum of all such restrictions. Then the following result follows from the dominated convergence theorem. **Lemma 2.5** ([28]).: _For any \(u\in\mathcal{F}^{\infty}(B_{1})\), we have_ \[[\nu_{u}(0)]^{p}=\lim_{t\to-\infty}\frac{1}{\pi^{n}}\int_{\mathbb{CP}^{n}}( \dot{u}_{t})^{p}\omega_{FS}^{n}, \tag{2.10}\] _for all \(p=1,\cdots,n+1\). Here \(\omega_{FS}\) stands for the Fubini-Study metric on \(\mathbb{CP}^{n}\) with the normalization_ \[\int_{\mathbb{CP}^{n}}\omega_{FS}^{n}=\pi^{n}.\] In order to apply the dominated convergence theorem to a general \(u\in\mathcal{F}(B_{1})\), we actually need an upper bound of \(\dot{u}_{t}(\zeta)\) for all \(\zeta\in\mathbb{CP}^{n}\). This leads us to consider the supremum of all such restrictions. **Definition 2.6**.: _The maximal directional Lelong number of a function \(u\in\mathcal{F}(B_{1})\) at a distance \(A>0\) is defined to be the following_ \[M_{A}(u):=\sup_{\zeta\in\mathbb{CP}^{n}}\partial_{t}^{+}u_{t}(\zeta)|_{t=-A} \in[0,+\infty]. \tag{2.11}\] Thanks to the log-convexity of each restriction \(u|_{\ell_{\zeta}}\), it is apparent that the number \(M_{A}(u)\) is well-defined, and is non-negative and non-increasing in \(A\). Then we can take its limit as \(A\to+\infty\). **Definition 2.7**.: _The maximal directional Lelong number of a function \(u\in\mathcal{F}(B_{1})\) at the origin is defined to be the following_ \[\lambda_{u}(0):=\lim_{A\to+\infty}M_{A}(u). \tag{2.12}\] A fundamental fact about \(\lambda_{u}(0)\) is that for a \(u\in\mathcal{F}^{\infty}(B_{1})\), \(\lambda_{u}(0)\) is always finite, which is crucial in our estimate of the residual Monge-Ampere mass. For the convenience of readers, we will recall the proof as follows. **Proposition 2.8** ([28]).: _For a function \(u\in\mathcal{F}(B_{1})\), its maximal directional Lelong number \(M_{A}(u)\) is finite for all \(A>0\). In particular, we have_ \[0\leq\lambda_{u}(0)<+\infty. \tag{2.13}\] Proof.: For each hypersphere \(S_{R}\) with \(R\in(0,1)\), there is a constant \(C_{R}>0\) such that we have \[u|_{S_{R}}>-C_{R}.\] This is because \(u\) is an \(L^{\infty}_{loc}\)-function defined everywhere in \(B_{1}^{*}\), and hence its restriction to the hypersphere is in the space \(L^{\infty}(S_{R})\). Suppose on the contrary, we have \(M_{A}(u)=+\infty\) for some \(A>0\). Then there exists a subsequence of points \(\zeta_{j}\in\mathbb{CP}^{n}\) such that the slope \[\partial_{t}^{+}u_{t}(\zeta_{j})|_{t=-A}=\frac{d}{dt}|_{t=(-A)^{+}}\left(u|_{ \ell_{\zeta_{j}}}\right) \tag{2.14}\] diverges to \(+\infty\) as \(j\to+\infty\). In particular, we can pick up a point \(\xi\) among this subsequence satisfying \[\partial_{t}^{+}u_{t}(\xi)|_{t=-A}>\frac{2C_{R}}{A}, \tag{2.15}\] for \(R=e^{-A}\). However, as a convex function of \(t\), the graph of \(u_{t}(\xi)\) is above the following straight line \[y(x)=\frac{2C_{R}}{A}(x+A)-C_{R},\] for all \(t\in[-A,-A/2]\). Hence we conclude \[u_{-A/2}(\xi)\geq y(-A/2)\geq 0.\] This contradicts to the fact that \(u\) is a negative function in \(B_{1}\), and then our result follows. The maximal directional Lelong numbers provide a way to measure the difference of the infimums of \(u\) on the hyperspheres. 
**Lemma 2.9**.: _For a function \(u\in\mathcal{F}(B_{1})\) and two constants \(1<A_{1}<A_{2}\), we have the estimate_ \[-\inf_{S_{R_{2}}}u\leq\int_{A_{1}}^{A_{2}}M_{T}(u)dT-\inf_{S_{R_{1}}}u, \tag{2.16}\] _where \(R_{i}=e^{-A_{i}}\) for \(i=1,2\)._ Proof.: For a complex direction \(\zeta\in\mathbb{CP}^{n}\), the function \(u_{t}(\zeta)\) is convex, and hence it is Lipschitz continuous. Then it follows from the Fundamental Theorem of Calculus \[u_{-A_{1}}(\zeta)-u_{-A_{2}}(\zeta) = \int_{-A_{2}}^{-A_{1}}\dot{u}_{t}(\zeta)dt \tag{2.17}\] \[\leq \int_{A_{1}}^{A_{2}}M_{T}(u)dT.\] Then we have \[\inf_{S_{R_{1}}}u-u_{-A_{2}}(\zeta)\leq\int_{A_{1}}^{A_{2}}M_{T}(u)dT. \tag{2.18}\] Take a sequence of points \(\zeta_{k}\in\mathbb{CP}^{n}\) satisfying \[u_{-A_{2}}(\zeta_{k})\to\inf_{S_{R_{2}}}u,\] as \(k\to+\infty\). Then our result follows. In particular, we have \(M_{A}(u)>0\) for all \(A>0\), if \(u\) has an isolated singularity at the origin. ### Regularization and convergence As a direct consequence of Proposition 2.8, \(\dot{u}_{t}=\partial_{t}\dot{u}=r\partial_{r}u\) is an \(L^{\infty}_{loc}\)-function in \(B_{1}^{*}\). Thanks to the slicing theory, we first conclude that Lemma 2.5 holds for all functions in \(\mathcal{F}(B_{1})\). Then it is legal to introduce the following functionals for almost all \(t\in(-\infty,0)\), \[\mathcal{I}(u_{t}):=\int_{\mathbb{C}\mathbb{P}^{n}}u_{t}\omega_{FS}^{n}; \hskip 14.226378ptI(u_{t}):=\int_{\mathbb{C}\mathbb{P}^{n}}(\dot{u}_{t}) \omega_{FS}^{n},u\in\mathcal{F}(B_{1}).\] We recall a few basic facts about these functionals. * the functional \(\mathcal{I}(u_{t})\) is a convex and non-decreasing function of \(t\in(-\infty,-1)\), and it is a primitive of the functional \(I(u_{t})\) along \(t\); * the functional \(I(u_{t})\) is a non-negative and non-decreasing \(L^{\infty}\)-function of \(t\in(-\infty,-1)\); * the functional \(I(u_{t})\) converges as \(t\to-\infty\) \[I(u_{t})\to\pi^{n}\nu_{u}(0).\] Next the standard regularization \(u_{\varepsilon}\) will be utilized to perform the approximation of these functionals. Write the \(t\)-derivatives of the regularization as \[\dot{u}_{\varepsilon,t}:=r\partial_{r}u_{\varepsilon}=r\partial_{r}(u*\rho_{ \varepsilon}),\] and then we have the following Friedrichs' type estimate. **Lemma 2.10** ([28]).: _For a function \(u\in\mathcal{F}(B_{1})\), a small number \(\delta>0\) and a point \(z\in B_{1-2\delta}^{*}\), we have, for all \(\varepsilon<\min\{|z|,\delta\}\),_ \[|r\partial_{r}(u*\rho_{\varepsilon})(z)-r(\partial_{r}u*\rho_{\varepsilon})(z )|\leq 2\varepsilon||\nabla u||_{L^{1}(B_{1-\delta})}. \tag{2.19}\] It is a standard fact that a function \(u\in\mathcal{F}(B_{1})\) is in the Sobolev space \(W^{1,p}_{loc}(B_{1})\) for any \(1\leq p<2\). Hence by Lemma 2.10 we can compare with the following two convolutions as \[r(\partial_{r}u)_{\varepsilon}(z)=\dot{u}_{\varepsilon,t}+O(\varepsilon), \tag{2.20}\] on any relatively compact domain \(\Omega\subset\subset B_{1}^{*}\). Therefore, Proposition 2.8 implies that the regularization converges on \(\Omega\) as \(\dot{u}_{\varepsilon,t}\to\partial_{t}u\) strongly in \(L^{p}\)-norm for any \(p\geq 1\). Thanks to the slicing theory again, we can infer the following convergence results. 
**Proposition 2.11** ([28]).: _For a function \(u\in\mathcal{F}(B_{1})\), and any constants \(1<A<B\), there exists a subsequence \(\{u_{\varepsilon_{k}}\}\) of its standard regularization satisfying \(I(u_{\varepsilon_{k},t})\to I(u_{t}),\) as \(k\to+\infty\) for almost all \(t\in[-B,-A]\)._ We end up this section with another application of Proposition 2.8 and Lemma 2.10, and it will provide the a priori estimate on the maximal directional Lelong numbers of the regularization. **Lemma 2.12** ([28]).: _Fixing any two constants \(1<A<B\), there exists a uniform constant \(C>0\) such that we have_ \[M_{B}(u_{\varepsilon})\leq 2M_{A}(u)+C\varepsilon, \tag{2.21}\] _for all \(\varepsilon<\varepsilon_{0}:=\frac{1}{2}\min\{(e^{-A}-e^{-B}),e^{-B}\}.\)_ ## 3. The Kahler cone structure In this section, we are going to use a different point of view to look at the complex hessian of a function \(u\in\mathcal{F}^{\infty}(B_{1})\). In fact, there is a natural Kahler cone structure on the space \((\mathbb{C}^{n+1})^{*}\cong(\mathbb{R}^{2n+2})^{*}\), that induces the standard Sasakian structure on the unit sphere \(S^{2n+1}\). In the following, we will decompose the usual complex structure on \(\mathbb{C}^{n+1}\) with respect to this Kahler cone structure. Let \(\mathbb{R}^{2n+2}\) be the \((2n+2)\)-dimensional Euclidean space with rectangular coordinates \[(x^{0},\cdots,x^{n},y^{0},\cdots,y^{n}),\] and we briefly write it as \((x^{A},y^{A})\) for \(A=0,1,\cdots,n\). Then the \((2n+1)\)-dimensional hypersphere \(S_{r}\) with radius \(r\) is defined by \[\sum_{A}\left\{(x^{A})^{2}+(y^{A})^{2}\right\}=r^{2},\] and denote \(S^{2n+1}\) by the unit sphere with \(r=1\). Put \[z^{A}:=x^{A}+iy^{A},\] and then \(z^{A}\)'s define a complex structure in \(\mathbb{R}^{2n+2}\). Its standard almost complex structure is given by the \((1,1)\)-tensor field \[I:=\sum_{A}\left(\frac{\partial}{\partial y^{A}}\otimes dx^{A}-\frac{\partial }{\partial x^{A}}\otimes dy^{A}\right)=\sum_{A}i\left(\frac{\partial}{ \partial z^{A}}\otimes dz^{A}-\frac{\partial}{\partial\bar{z}^{A}}\otimes d \bar{z}^{A}\right).\] Denote \(g\) by the flat metric on \(\mathbb{R}^{2n+2}\), and then its associated Kahler form on \(\mathbb{C}^{n+1}\) is defined as \[\omega_{e}:=\frac{i}{2}\partial\bar{\partial}r^{2}=\frac{i}{2}\sum_{A}dz^{A} \wedge d\bar{z}^{A}.\] ### Sasakian manifolds To talk about a _Sasakian structure_ on \(S^{2n+1}\), it is equivalent to describe a _Kahler cone structure_ on the product space \((\mathbb{R}^{2n+2})^{*}\cong S^{2n+1}\times\mathbb{R}_{+}\), see Chapter 6, [7]. First we note that the flat metric \(g\) splits as a metric cone as \[g=dr^{2}+r^{2}g_{0},\] for any radius \(r>0\). Here \(g_{0}\) is the canonical metric on \(S^{2n+1}\) with constant sectional curvature \(1\), and it is the restriction of \(g\) to the sphere. Denote \(\eta_{0}\) by the contact \(1\)-form: \[\eta_{0}: = I(r^{-1}dr)\] \[= \frac{1}{r^{2}}\sum_{A}\left(y^{A}dx^{A}-x^{A}dy^{A}\right)\] \[= -\frac{i}{2r^{2}}\sum_{A}\left(z^{A}d\bar{z}^{A}-\bar{z}^{A}dz^{A }\right).\] Then it is clear that we have \[\omega_{e}=-d(r^{2}\eta_{0})/2.\] Based on this Kahler structure on the metric cone, we say that the quadruple \[\left(S^{2n+1}\times\mathbb{R}_{+},dr^{2}+r^{2}g_{0},-d(r^{2}\eta_{0})/2,I\right) \tag{3.2}\] defines a Kahler cone structure on the manifold \((\mathbb{R}^{2n+2})^{*}\). 
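For the reader's convenience, the identity \(\omega_{e}=-d(r^{2}\eta_{0})/2\) stated above can be checked directly from the coordinate expression of \(\eta_{0}\): \[r^{2}\eta_{0}=\sum_{A}\left(y^{A}dx^{A}-x^{A}dy^{A}\right),\qquad d(r^{2}\eta_{0})=-2\sum_{A}dx^{A}\wedge dy^{A},\] so that \(-d(r^{2}\eta_{0})/2=\sum_{A}dx^{A}\wedge dy^{A}=\frac{i}{2}\sum_{A}dz^{A}\wedge d\bar{z}^{A}=\omega_{e}\).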
Moreover, there is another important ingredient, the _Reeb vector field_\(\xi_{0}\), that is defined as \[\xi_{0}: = -I(r\partial_{r})\] \[= \sum_{A}\left(y^{A}\frac{\partial}{\partial x^{A}}-x^{A}\frac{ \partial}{\partial y^{A}}\right)\] \[= (-i)\sum_{A}\left(z^{A}\frac{\partial}{\partial z^{A}}-\bar{z}^{A }\frac{\partial}{\partial\bar{z}^{A}}\right).\] It is a holomorphic Killing field on \((\mathbb{R}^{2n+2})^{*}\), and \(\eta_{0}\) is its dual 1-form. Moreover, the metric \(g\) has homothetic degree two, and the almost complex structure \(I\) has homothetic degree zero in the following sense: \[\mathcal{L}_{r\partial_{r}}g=2g;\ \ \ \ \mathcal{L}_{r\partial_{r}}I=0.\] Denote \((\eta_{0},\xi_{0})\) also by their restrictions to the manifold \(S^{2n+1}\), and then we have the following facts with the metric \(g_{0}\): * \(\eta_{0}\) is a contact 1-form, and \(\xi_{0}\) is a Killing vector field on \(S^{2n+1}\); * \(\eta_{0}(\xi_{0})=1\) and \(\ \iota_{\xi_{0}}d\eta_{0}(\cdot)=d\eta_{0}(\xi_{0},\cdot)=0\); * the integral curves of \(\xi_{0}\) are exactly the great circles on \(S^{2n+1}\). It follows that the Reeb vector field \(\xi_{0}\) defines a _regular foliation_\(\mathcal{F}_{\xi_{0}}\) of \(S^{2n+1}\) by the great circles, and it is nothing but the Hopf-fiberation: Let \(L_{\xi_{0}}\) be the trivial line bundle generated by \(\xi_{0}\), and then we have a splitting of the tangent space of the sphere as \[TS^{2n+1}=L_{\xi_{0}}\oplus\mathcal{D},\] where the contact sub-bundle \(\mathcal{D}:=\ker(\eta_{0})\) is the kernel of the contact 1-form. Moreover, it can be identified with the the normal bundle \(\nu(\mathcal{F}_{\xi_{0}})\) of the foliation via an isomorphism induced from the metric \(g_{0}\). Next we define an endomorphism of \(TS^{2n+1}\) by restricting the almost complex structure \(I\) to \(\mathcal{D}\), and extending it trivially to \(L_{\xi_{0}}\). Explicitly, it is a \((1,1)\)-tensor field as \[\Phi_{0} = \sum_{A,B}\left\{(x^{A}x^{B}-\delta^{AB})\frac{\partial}{\partial x ^{A}}\otimes dy^{B}+(\delta^{AB}-y^{A}y^{B})\frac{\partial}{\partial y^{A}} \otimes dx^{B}\right\}\] \[+ \sum_{A,B}\left\{y^{A}x^{B}\frac{\partial}{\partial y^{A}} \otimes dy^{B}-x^{A}y^{B}\frac{\partial}{\partial x^{A}}\otimes dx^{B}\right\}.\] Observe that \(\eta_{0}\circ\Phi_{0}=0\), and then we can infer the following equation \[\Phi_{0}^{2}=-\mathbb{I}+\xi_{0}\otimes\eta_{0}. \tag{3.5}\] That is to say, the restriction \(\Phi|_{\mathcal{D}}\) defines an almost complex structure on \(\mathcal{D}\), and it is compatible with the symplectic form \(d\eta_{0}\) in the following sense: \[d\eta_{0}(\Phi_{0}X,\Phi_{0}Y)=d\eta_{0}(X,Y)\text{ for all }X,Y\in \Gamma(\mathcal{D});\] \[d\eta_{0}(\Phi_{0}X,X)>0\text{ for all }X\neq 0. \tag{3.6}\] It follows that the pair \((\mathcal{D},\Phi_{0}|_{\mathcal{D}})\) defines an almost \(CR\)-structure, and its Levi form \(L_{\eta_{0}}:=d\eta_{0}\circ(\Phi_{0}\otimes\mathbb{I})\) is strictly pseudo-convex. Then it induces a Riemannian metric on the distribution transversal to \(\xi_{0}\), i.e. we define the transversal metric as \[g^{T}(X,\Phi_{0}Y):=d\eta_{0}(X,Y)\quad\text{for all }X,Y\in\Gamma(\mathcal{D}). \tag{3.7}\] The upshot is that the metric \(g_{0}\) on the sphere is compatible with the _almost contact structure_\((\xi_{0},\eta_{0},\Phi_{0})\) in the following sense: \[g_{0}=g^{T}+\eta_{0}\otimes\eta_{0}. 
\tag{3.8}\] Together with the Killing condition of \(\xi_{0}\), the quadruple \((\xi_{0},\eta_{0},\Phi_{0},g_{0})\) is called a _Sasakian structure_ on \(S^{2n+1}\). This means that the almost \(CR\)-structure \((\mathcal{D},\Phi_{0}|_{\mathcal{D}})\) is in fact integrable, and \(\Phi_{0}\) is invariant under \(\xi_{0}\). Then it follows a splitting \[\mathcal{D}\otimes\mathbb{C}=\mathcal{D}^{1,0}\oplus\mathcal{D}^{0,1}\quad \text{with}\ \ \overline{\mathcal{D}^{1,0}}=\mathcal{D}^{0,1}, \tag{3.9}\] where \(\mathcal{D}^{1,0}\) and \(\mathcal{D}^{0,1}\) are eigenspaces of \(\Phi_{0}\) with eigenvalues \(i\) and \(-i\), respectively. In particular, the eigenspace with eigenvalue \(0\) is exactly \(L_{\xi_{0}}\otimes\mathbb{C}\). Furthermore, this splitting induces a complex structure \(\bar{J}\) of the normal bundle via the isomorphism \((\mathcal{D},\Phi_{0}|_{\mathcal{D}})\cong(\nu(\mathcal{F}_{\xi_{0}}),\bar{J})\), and then it gives a transversal holomorphic structure on the foliation. It is clear from the construction (equation (3.7) and (3.8)) that the triple \((g^{T},\omega^{T},\bar{J})\) defines a transversal Kahler structure on the local leaf space of the foliation, where the transversal Kahler metric is \[\omega^{T}:=-d\eta_{0}.\] Since the foliation is regular, the push forward of this transversal Kahler structure to the base manifold (via the fiber map of the Hopf-fiberation) is exactly the Kahler structure on \(\mathbb{CP}^{n}\) with the usual Fubini-Study metric. For further discussion, the reader is referred to [34], [7], [23] and [18]. ### The complex Hopf-coordinate In order to illustrate the Sasakian structure on the sphere in an explicit way, we will invoke a particular local coordinate system on the cone \((\mathbb{R}^{2n+2})^{*}\cong(\mathbb{C}^{n+1})^{*}\). In fact, it is a generalization of the complex Hopf-coordinate in \(\mathbb{C}^{2}\), see [28]. First consider the following holomorphic functions on the set \(\{z^{0}\neq 0\}\): \[\zeta^{\alpha}:=\frac{z^{\alpha}}{z^{0}}=|\zeta^{\alpha}|e^{i\varphi_{\alpha}}\] for all \(\alpha:=1,\cdots,n\), and \(\varphi_{\alpha}\) is an argument of \(\zeta^{\alpha}\) as a complex number. Then the complex Hopf-coordinate on \((\mathbb{R}^{2n+2})^{*}\) is introduced as \[(r,\theta,\zeta,\bar{\zeta}):=(r,\theta,\zeta^{1},\cdots,\zeta^{n},\bar{\zeta }^{1},\cdots,\bar{\zeta}^{n}),\] for all \(r\in\mathbb{R}_{+}\), \(\theta\in\mathbb{R}\) and \(\zeta^{\alpha}\in\mathbb{C}\) with the following change of variables: \[z^{0}=re^{\frac{i}{2}\theta}\frac{\varrho(\zeta,\bar{\zeta})}{\left(1+\sum_{ \beta}|\zeta^{\beta}|^{2}\right)^{1/2}};\quad z^{\alpha}=re^{\frac{i}{2}\theta }\frac{\zeta^{\alpha}\cdot\varrho(\zeta,\bar{\zeta})}{\left(1+\sum_{\beta}| \zeta^{\beta}|^{2}\right)^{1/2}}, \tag{3.10}\] where the factor \(\varrho\) is defined as \[\varrho(\zeta,\bar{\zeta}):=\prod_{\alpha=1}^{n}\left(\frac{\bar{\zeta}^{\alpha}}{ \left|\zeta^{\alpha}\right|}\right)^{\frac{1}{2}}.\] This factor is introduced for the purpose to gain more symmetry while taking change of variables. 
For example, it enables us to write \[z^{0}=\frac{re^{\frac{i}{2}\left(\theta-\sum_{\alpha}\varphi_{\alpha}\right)}} {\left(1+\sum_{\beta}\left|\zeta^{\beta}\right|^{2}\right)^{1/2}};\quad\ z^{ \alpha}=\frac{re^{\frac{i}{2}\left(\theta+\varphi_{\alpha}-\sum_{\beta}\widehat {\varphi_{\alpha}}\right)}}{\left(1+\sum_{\beta}\left|\zeta^{\beta}\right|^{2} \right)^{1/2}}, \tag{3.11}\] where the notation \(\sum_{\gamma}\widehat{\varphi_{\alpha}}\) means taking the summation of all \(\varphi_{\gamma}\)'s without the angle \(\varphi_{\alpha}\). In fact, the complex Hopf-coordinate is induced from a local trivialization of \((\mathbb{C}^{n+1})^{*}\) as a principle \(\mathbb{C}^{*}\)-bundle over \(\mathbb{CP}^{n}\). That is to say, there is a homeomorphism of the fiber map \(p\) as \[\Psi:\mathbb{C}^{*}\times U\to p^{-1}(U),\] that sends \[(re^{i\theta/2},\zeta^{1},\cdots,\zeta^{n})\to(z^{0},\cdots,z^{n})\] via the map defined in equation (3.10). Here \(U\subset\mathbb{C}^{n}\) is an open set on which one branch of the factor \(\varrho\) is well-defined, and it can be identified with an open subset in \(\mathbb{CP}^{n}-[0:z^{1}:\cdots:z^{n}]\). With the aid of this complex Hopf-coordinate, we can compute the contact \(1\)-form in the cone \((\mathbb{R}^{2n+2})^{*}\). However, it is convenient to introduce the following notations first: \[J:=-I/2\quad\text{and}\quad\eta:=J(r^{-1}dr)=-\eta_{0}/2. \tag{3.12}\] This is because of the normalization in the \(d^{c}\)-operator, since we have \[d^{c}=\frac{i}{2}(\bar{\partial}-\partial)=Jd.\] Writing the complex variable \(z^{A}\) in the polar coordinate as \(z^{A}:=r^{A}e^{i\theta_{A}}\), it follows from equation (3.11) \[\theta_{0}=\theta-\sum_{\alpha=1}^{n}\varphi_{\alpha};\quad\ \theta_{\alpha}=\theta+\varphi_{\alpha}-\sum_{\gamma=1}^{n}\widehat{\varphi_{ \alpha}}. \tag{3.13}\] Hence it follows \[\eta=\frac{1}{2r^{2}}\sum_{A}\frac{x^{A}dy^{A}-y^{A}dx^{A}}{(x^{A})^{2}+(y^{A} )^{2}}\cdot(r^{A})^{2}=\frac{1}{2r^{2}}\sum_{A}(r^{A})^{2}d\theta_{A}, \tag{3.14}\] and we have \[2\sum_{A}(r^{A})^{2}d\theta_{A}\] \[= \frac{r^{2}}{1+\sum_{\beta}|\zeta^{\beta}|^{2}}\left\{d\theta-\sum_ {\alpha}d\varphi_{\alpha}+\sum_{\alpha}|\zeta^{\alpha}|^{2}\left(d\theta+d \varphi_{\alpha}-\sum_{\gamma}\widehat{d\varphi_{\alpha}}\right)\right\}\] \[= r^{2}\left\{d\theta-\sum_{\alpha}\left(1-\frac{2|\zeta^{\alpha}|^ {2}}{1+\sum_{\beta}|\zeta^{\beta}|^{2}}\right)d\varphi_{\alpha}\right\}.\] Observe that we can write \[d\varphi_{\alpha}=\mathrm{Im}\left(\frac{d\zeta^{\alpha}}{\zeta^{\alpha}} \right).\] Therefore, if we put \[\cos\kappa_{\alpha}:=1-\frac{2|\zeta^{\alpha}|^{2}}{1+\sum_{\beta}|\zeta^{ \beta}|^{2}}\in[-1,1],\] then the formula in equation (3.15) can be reduced to the following form, cf. Lemma 4.1, [28]. **Lemma 3.1**.: _The normalized contact \(1\)-form can be written under the complex Hopf-coordinate as_ \[\eta=\frac{1}{4}\left\{d\theta-\sum_{\alpha}\cos\kappa_{\alpha}\cdot\mathrm{ Im}\left(\frac{d\zeta^{\alpha}}{\zeta^{\alpha}}\right)\right\}. \tag{3.16}\] Moreover, one can also directly check the following useful identities: \[\frac{\partial\bar{z}^{0}}{\partial\zeta^{\beta}}=\frac{\bar{z}^{0}}{4\zeta^ {\beta}}\cos\kappa_{\beta};\ \ \ \ \frac{\partial\bar{z}^{\alpha}}{\partial\zeta^{\beta}}=\frac{\bar{z}^{\alpha}} {4\zeta^{\beta}}\cos\kappa_{\beta}, \tag{3.17}\] for each \(\alpha,\beta=1,\cdots,n\). 
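To make the identities in (3.17) transparent, here is the direct check in the simplest case \(n=1\) (an explicit verification added for concreteness; the general case follows from the same computation). By (3.10), treating \(\zeta\) and \(\bar{\zeta}\) as independent variables, \[\bar{z}^{0}=re^{-\frac{i}{2}\theta}\,\zeta^{\frac{1}{4}}\bar{\zeta}^{-\frac{1}{4}}\left(1+\zeta\bar{\zeta}\right)^{-\frac{1}{2}},\] and hence \[\frac{\partial\bar{z}^{0}}{\partial\zeta}=\bar{z}^{0}\left\{\frac{1}{4\zeta}-\frac{\bar{\zeta}}{2(1+\zeta\bar{\zeta})}\right\}=\frac{\bar{z}^{0}}{4\zeta}\left(1-\frac{2|\zeta|^{2}}{1+|\zeta|^{2}}\right)=\frac{\bar{z}^{0}}{4\zeta}\cos\kappa_{1},\] which is the first identity in (3.17) when \(n=1\); the computation for \(\bar{z}^{\alpha}\) is identical.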
Next we can introduce a _local basic function_ \(h\) with respect to the complex Hopf-coordinate as \[h(\zeta,\bar{\zeta}):=\log\left(1+\sum_{\alpha}|\zeta^{\alpha}|^{2}\right)-\sum_{\alpha}\log|\zeta^{\alpha}|,\] and then obtain a _local defining equation_ of the contact \(1\)-form as \[\eta=\frac{1}{4}\left\{d\theta-i\left(\partial_{\zeta}h-\bar{\partial}_{\zeta}h\right)\right\}, \tag{3.18}\] where the operators are defined by \[\partial_{\zeta}h:=\sum_{\alpha}\frac{\partial h}{\partial\zeta^{\alpha}}d\zeta^{\alpha};\ \ \ \ \bar{\partial}_{\zeta}h:=\sum_{\alpha}\frac{\partial h}{\partial\bar{\zeta}^{\alpha}}d\bar{\zeta}^{\alpha}.\] Finally, it follows that \[d\eta = \frac{1}{2}dd_{\zeta}^{c}h \tag{3.19}\] \[= \frac{1}{2}dd_{\zeta}^{c}\log\left(1+\sum_{\alpha}|\zeta^{\alpha}|^{2}\right)\] \[= \omega_{FS}.\] As expected, the transversal Kahler metric \(d\eta\) is exactly the Fubini-Study metric \(\omega_{FS}\) on the quotient space \(\mathbb{CP}^{n}\), and we have its total volume \[\int_{\mathbb{CP}^{n}}\omega_{FS}^{n}=\pi^{n}.\] Although everything is computed under the complex Hopf-coordinate, we emphasize that the local defining equation of \(\eta\) (equation (3.18)), and hence equation (3.19), actually follow from the Sasakian structure. This means that they are in fact independent of the chosen holomorphic coordinate on \(\mathbb{CP}^{n}\) (induced from a local trivialization), possibly with a different basic function \(h\) and a re-normalized angle \(\theta\), see [18], [19]. **Remark 3.2**.: _There is another local trivialization of the fiber map \(p\) as_ \[\Psi^{\prime}:\mathbb{C}^{*}\times\mathbb{C}^{n}\to p^{-1}(\mathbb{C}^{n}),\] _that sends \((r,\theta^{\prime},\zeta,\bar{\zeta})\) to_ \[z^{0}:=\frac{re^{i\theta^{\prime}}}{\left(1+\sum_{\beta}|\zeta^{\beta}|^{2}\right)^{1/2}};\quad z^{\alpha}=\frac{re^{i\theta^{\prime}}\cdot\zeta^{\alpha}}{\left(1+\sum_{\beta}|\zeta^{\beta}|^{2}\right)^{1/2}}.\] _Thus we have a different holomorphic coordinate \(\zeta\in\mathbb{CP}^{n}\) that is induced from the trivialization \(\Psi^{\prime}\). Moreover, the local basic function \(h^{\prime}\) corresponding to \(\Psi^{\prime}\) is defined as_ \[h^{\prime}(\zeta,\bar{\zeta}):=\log\left(1+\sum_{\alpha}|\zeta^{\alpha}|^{2}\right).\] _For an \(S^{1}\)-invariant function, these two trivializations make no difference to its value, since they only differ by an angle factor, see Remark 3.1, [28]. Therefore, we also refer to \(\Psi^{\prime}\) as a complex Hopf-coordinate._

### The complex structure splits

As we have seen in equation (3.9), the complex structure on the cone \((\mathbb{R}^{2n+2})^{*}\) splits with respect to the Sasakian structure on the sphere. Moreover, this splitting can be explicitly written down locally through the complex Hopf-coordinate as follows. It turns out that it is easier to first describe it on the cotangent bundle, namely, we consider the \((1,0)\)-part of the complexified cotangent bundle of the cone as \[(T^{*})^{1,0}(\mathbb{R}^{2n+2})^{*}\subset T^{*}(\mathbb{R}^{2n+2})^{*}\otimes\mathbb{C}.\] Then there is a well-defined \(1\)-form on the cone as \[\lambda^{0}:=dr-ir\eta_{0};\quad\ \bar{\lambda}^{0}:=dr+ir\eta_{0}, \tag{3.20}\] and it satisfies \[I(\lambda^{0})=i\lambda^{0};\quad\ I(\bar{\lambda}^{0})=(-i)\bar{\lambda}^{0}.\] Locally we introduce the following \(1\)-forms under the complex Hopf-coordinate: \[\lambda^{\alpha}:=d\zeta^{\alpha};\quad\ \bar{\lambda}^{\alpha}:=d\bar{\zeta}^{\alpha}, \tag{3.21}\] for all \(\alpha=1,\cdots,n\).
Since the \(\zeta^{\alpha}\)'s are holomorphic functions, it follows that \[I(\lambda^{\alpha})=i\lambda^{\alpha};\quad\ I(\bar{\lambda}^{\alpha})=(-i)\bar{\lambda}^{\alpha}.\] Then it is clear that the \(n\)-tuple \(\{\lambda^{1},\cdots,\lambda^{n}\}\) is a local coframe field of the bundle \((T^{*})^{1,0}(\mathbb{CP}^{n})\) over the base manifold. Thanks to the local defining equation of \(\eta\) (equation (3.18)), we can further infer that the \((n+1)\)-tuple \[\{\lambda^{0},\lambda^{1},\cdots,\lambda^{n}\}\] builds a local coframe field of the bundle \((T^{*})^{1,0}(\mathbb{R}^{2n+2})^{*}\) over the cone, and its complex conjugate is a local coframe of the bundle \((T^{*})^{0,1}(\mathbb{R}^{2n+2})^{*}\). Clearly, these coframes give a decomposition of the complex structure with respect to the Kahler cone structure. Moreover, this construction is due to the Sasakian structure, i.e. it is independent of the chosen holomorphic coordinate on \(\mathbb{CP}^{n}\), because the local defining equation of \(\eta\) is. On the other hand, we can also describe the splitting in the dual space: \[T(\mathbb{R}^{2n+2})^{*}\otimes\mathbb{C}=(L_{e_{0}}\oplus L_{\bar{e}_{0}})\oplus\mathcal{D}^{1,0}\oplus\mathcal{D}^{0,1}, \tag{3.22}\] where the vector fields \(e_{0}\) and \(\bar{e}_{0}\) are defined as \[e_{0}:=\frac{1}{2}\left\{\partial_{r}+ir^{-1}\xi_{0}\right\};\ \ \ \ \bar{e}_{0}:=\frac{1}{2}\left\{\partial_{r}-ir^{-1}\xi_{0}\right\}.\] In fact, the Reeb vector field can be written as \[\xi_{0}=-I(r\partial_{r})=-2\frac{\partial}{\partial\theta},\] and then it is clear that we have \[I(e_{0})=ie_{0};\ \ \ \ I(\bar{e}_{0})=(-i)\bar{e}_{0}.\] Moreover, we define the following vector fields under the complex Hopf-coordinate: \[e_{\alpha}:=\partial_{\zeta^{\alpha}}+\frac{i}{4\zeta^{\alpha}}\cos\kappa_{\alpha}\ \xi_{0};\ \ \ \ \bar{e}_{\alpha}:=\partial_{\bar{\zeta}^{\alpha}}-\frac{i}{4\zeta^{\alpha}}\cos\kappa_{\alpha}\ \xi_{0}.\] Then one can check \[\lambda^{A}(e_{B})=\delta^{A}_{B},\] for all \(A,B=0,\cdots,n\). In particular, the \(e_{\alpha}\)'s, and hence the \(\bar{e}_{\alpha}\)'s, belong to the complexified contact sub-bundle \(\mathcal{D}\otimes\mathbb{C}\). Moreover, it follows from equation (3.17) that we have \[I(e_{\alpha})=ie_{\alpha};\ \ \ \ I(\bar{e}_{\alpha})=(-i)\bar{e}_{\alpha}. \tag{3.23}\] It follows that the almost complex structure \(\Phi_{0}\) can be written locally as \[\Phi_{0}=i\sum_{\alpha}e_{\alpha}\otimes d\zeta^{\alpha}-i\sum_{\alpha}\bar{e}_{\alpha}\otimes d\bar{\zeta}^{\alpha},\] and then we have \[\Phi_{0}(e_{\alpha})=ie_{\alpha};\ \ \ \ \Phi_{0}(\bar{e}_{\alpha})=(-i)\bar{e}_{\alpha}. \tag{3.24}\] Therefore, the \(n\)-tuple \(\{e_{1},\cdots,e_{n}\}\) is a local frame of the bundle \(\mathcal{D}^{1,0}\). It follows that the \((n+1)\)-tuple \(\{e_{0},e_{1},\cdots,e_{n}\}\) builds a local frame of the bundle \(T^{1,0}(\mathbb{R}^{2n+2})^{*}\). Together with its complex conjugate, they describe the splitting (equation (3.22)) of the complex structure on the tangent space of the cone.

## 4. The decomposition formula

In this section, we will compute the following integral, for a function \(u\in\mathcal{F}^{\infty}(B_{1})\): \[\int_{S_{r}}d^{c}u\wedge(dd^{c}u)^{n},\] on the hypersphere \(S_{r}\subset\mathbb{R}^{2n+2}\) with radius \(r\in(0,1)\). Consider the space \((\mathbb{R}^{2n+2})^{*}\cong(\mathbb{C}^{n+1})^{*}\) as a Kahler cone over the Sasakian manifold \(S^{2n+1}\), and denote by \((\xi_{0},\eta_{0},\Phi_{0},g_{0})\) its standard Sasakian structure.
We recall the concept of basic forms. **Definition 4.1**.: _A \(k\)-form \(\vartheta\) on \(S^{2n+1}\) is called basic if it satisfies_ \[\iota_{\xi_{0}}\vartheta=0;\ \ \ \ \mathcal{L}_{\xi_{0}}\vartheta=0.\] The exterior differential preserves basic forms. There is a natural splitting of the complexification of the bundle of the basic \(k\)-forms \(\bigwedge_{B}^{k}(S^{2n+1})\) as \[\bigwedge_{B}^{k}(S^{2n+1})\otimes\mathbb{C}=\bigoplus_{i+j=k}\bigwedge_{B}^{ i,j}(S^{2n+1}),\] where \(\bigwedge_{B}^{i,j}(S^{2n+1})\) denotes the bundle of basic forms of type \((i,j)\). Then one can define the operators \(\partial_{B}\) and \(\bar{\partial}_{B}\). Put \[d_{B}:=d\mid_{\bigwedge_{B}^{k}};\ \ \ \ d^{c}_{B}:=\frac{i}{2}(\bar{\partial}_{B} -\partial_{B}),\] and it follows as usual \[d_{B}=\partial_{B}+\bar{\partial}_{B};\ \ \ d_{B}d^{c}_{B}=i\partial_{B}\bar{ \partial}_{B};\ \ \ (d_{B})^{2}=(d^{c}_{B})^{2}=0.\] In other words, the operators \(\partial_{B}\) and \(\bar{\partial}_{B}\) are exactly (the pull back of) the \(\partial\) and \(\bar{\partial}\) operators on \(\mathbb{C}\mathbb{P}^{n}\) under the isomorphism with the transversal holomorphic structures: \((\nu(\mathcal{F}_{\xi_{0}}),\bar{J})\cong(\mathcal{D},\Phi_{0}|_{\mathcal{D}})\). ### Computations It is clear that \(u\) is basic, if it is in the family \(\mathcal{F}^{\infty}(B_{1})\). Then we can decompose its exterior derivative as \[du=(u_{r})dr+d_{B}u,\] and hence it follows \[d^{c}u = (u_{r})Jdr+Jd_{B}u\] \[= (ru_{r})\eta+d^{c}_{B}u, \tag{4.1}\] where \(\eta=-\eta_{0}/2\) is the normalized contact \(1\)-form. We compute \[dd^{c}u = d\left\{(ru_{r})\eta+d^{c}_{B}u\right\} \tag{4.2}\] \[= (ru_{r})d\eta+(ru_{r})_{,r}dr\wedge\eta+d_{B}(ru_{r})\wedge\eta+ dr\wedge\partial_{r}(d^{c}_{B}u)+d_{B}d^{c}_{B}u\] \[= \Theta_{1}+\Theta_{2},\] where the \(2\)-forms are defined as \[\Theta_{1}:=(ru_{r})_{,r}dr\wedge\eta+dr\wedge\partial_{r}(d^{c}_ {B}u)+d_{B}(ru_{r})\wedge\eta; \tag{4.4}\] \[\Theta_{2}:=(ru_{r})d\eta+d_{B}d^{c}_{B}u. \tag{4.3}\] Then we first come up with the following decomposition formula of the complex hessian of \(u\) restricted to the sphere. **Lemma 4.2**.: _For a function \(u\in\mathcal{F}^{\infty}(B_{1})\), we have_ \[dd^{c}u|_{S_{r}} = (ru_{r})d\eta+d_{B}(ru_{r})\wedge\eta+d_{B}d_{B}^{c}u \tag{4.5}\] \[= \Theta_{2}+d_{B}(ru_{r})\wedge\eta,\] _on each hypersphere \(S_{r}\) with \(r\in(0,1)\)._ Next we can compute the \((n,n)\)-form on the hypersphere as \[(dd^{c}u|_{S_{r}})^{n} = (\Theta_{2}+d_{B}(ru_{r})\wedge\eta)^{n} \tag{4.6}\] \[= \Theta_{2}^{n}+n\Theta_{2}^{n-1}\wedge d_{B}(ru_{r})\wedge\eta.\] Since the two terms in \(\Theta_{2}\) are both of type \((1,1)\) in the transversal Kahler structure, we expand its \(n\)th power as \[\Theta_{2}^{n} = (ru_{r})^{n}(d\eta)^{n}+(d_{B}d_{B}^{c}u)^{n}\] \[+ \sum_{k=1}^{n-1}\left(\begin{array}{c}n\\ k\end{array}\right)(ru_{r})^{n-k}(d\eta)^{n-k}\wedge(d_{B}d_{B}^{c}u)^{k}.\] Observe that we have for each \(k=0,\cdots,n\) \[d_{B}^{c}u\wedge(d\eta)^{n-k}\wedge(d_{B}d_{B}^{c}u)^{k}=0. \tag{4.8}\] It follows \[d^{c}u\wedge(dd^{c}u|_{S_{r}})^{n}\] \[= \{(ru_{r})\eta+d_{B}^{c}u\}\wedge\left\{\Theta_{2}^{n}+n\Theta_{2 }^{n-1}\wedge d_{B}(ru_{r})\wedge\eta\right\}\] \[= (ru_{r})\eta\wedge\Theta_{2}^{n}+n\ d_{B}^{c}u\wedge\Theta_{2}^{ n-1}\wedge d_{B}(ru_{r})\wedge\eta.\] The first term on the R.H.S. 
of (4.9) can be computed as \[(ru_{r})\eta\wedge\Theta_{2}^{n} = (ru_{r})^{n+1}\eta\wedge(d\eta)^{n}+(ru_{r})\eta\wedge(d_{B}d_{B} ^{c}u)^{n}\] \[+ \sum_{k=1}^{n-1}\left(\begin{array}{c}n\\ k\end{array}\right)(ru_{r})^{n-k+1}\eta\wedge(d\eta)^{n-k}\wedge(d_{B}d_{B}^{c }u)^{k},\] and the second term is \[(-n)\ \eta\wedge d_{B}(ru_{r})\wedge d_{B}^{c}u\wedge\Theta_{2}^{ n-1}\] \[= (-n)\ \eta\wedge d_{B}(ru_{r})\wedge d_{B}^{c}u\wedge\sum_{j=1}^{ n}\left(\begin{array}{c}n-1\\ j-1\end{array}\right)(ru_{r})^{n-j}(d\eta)^{n-j}\wedge(d_{B}d_{B}^{c}u)^{j-1}\] (4.11) In conclusion, we have the following formula. **Lemma 4.3**.: _For a function \(u\in\mathcal{F}^{\infty}(B_{1})\), we have the expansion formula_ \[d^{c}u\wedge(dd^{c}u|_{S_{r}})^{n}\] \[= \eta\wedge\left\{\ \sum_{k=0}^{n}\left(\begin{array}{c}n\\ k\end{array}\right)(ru_{r})^{n-k+1}(d\eta)^{n-k}\wedge(d_{B}d_{B}^{c}u)^{k}\right.\] \[- \left.n\ d_{B}(ru_{r})\wedge d_{B}^{c}u\wedge\sum_{k=1}^{n}\left( \begin{array}{c}n-1\\ k-1\end{array}\right)(ru_{r})^{n-k}(d\eta)^{n-k}\wedge(d_{B}d_{B}^{c}u)^{k-1} \right\},\] Observe that all the terms in the bracket on the R.H.S. of equation (4.12) has transversal degree \(2n\). Then their wedge product with the contact \(1\)-form \(\eta\) is equivalent to taking the wedge with \(d\theta/4\). Thanks to equation (3.16) and (3.18), we can write down this \((2n+1)\)-form under the complex Hopf-coordinate as \[d^{c}u\wedge(dd^{c}u|_{S_{r}})^{n}\] \[= \frac{1}{4}d\theta\wedge\left\{\;\sum_{k=0}^{n}\left(\begin{array} []{c}n\\ k\end{array}\right)(ru_{r})^{n-k+1}\omega_{FS}^{n-k}\wedge(d_{B}d_{B}^{c}u)^{k}\right.\] \[- \left.n\ d_{B}(ru_{r})\wedge d_{B}^{c}u\wedge\sum_{k=1}^{n}\left( \begin{array}{c}n-1\\ k-1\end{array}\right)(ru_{r})^{n-k}\omega_{FS}^{n-k}\wedge(d_{B}d_{B}^{c}u)^{k- 1}\right\}. \tag{4.13}\] Here the operators \(d_{B}\) and \(d_{B}^{c}\) should be interpreted as the \(d\) and \(d^{c}\) operators on the complex projective space \(\mathbb{CP}^{n}\). Recall that we have taken a change of variables \(t:=\log r\), and denote the function \(u\) under this new variable as \[u_{t}(\zeta):=\hat{u}(t,\zeta,\bar{\zeta})=u(e^{t},\zeta,\bar{\zeta}),\] and then it follows \[\dot{u}_{t}(\zeta):=\partial_{t}\hat{u}(t,\zeta,\bar{\zeta})=r\partial_{r}u(r,\zeta,\bar{\zeta}).\] Finally, we obtain the following decomposition formula. **Theorem 4.4**.: _For a function \(u\in\mathcal{F}^{\infty}(B_{1})\), we have_ \[\frac{1}{\pi}\int_{S_{r}}d^{c}u\wedge(dd^{c}u)^{n}=\sum_{k=0}^{n}\binom{n+1} {k}\int_{\mathbb{CP}^{n}}(\dot{u}_{t})^{n+1-k}\omega_{FS}^{n-k}\wedge(d_{B}d _{B}^{c}u)^{k}, \tag{4.14}\] _on each hypersphere \(S_{r}\) with radius \(r=e^{t}\in(0,1)\)._ Proof.: Take integration on both sides of equation (4.13), and we can first integrate out the \(d\theta\)-direction over \([0,4\pi]\) due to Fubini's Theorem. Then perform the integration by parts on the second term in the bracket on the R.H.S. of this equation as \[(-n)\int_{\mathbb{CP}^{n}}\sum_{k=1}^{n}\left(\begin{array}{c}n -1\\ k-1\end{array}\right)(\dot{u}_{t})^{n-k}d_{B}\dot{u}_{t}\wedge d_{B}^{c}u \wedge\omega_{FS}^{n-k}\wedge(d_{B}d_{B}^{c}u)^{k-1}\] \[= \int_{\mathbb{CP}^{n}}\sum_{k=1}^{n}\frac{n}{n-k+1}\left(\begin{array []{c}n-1\\ k-1\end{array}\right)(\dot{u}_{t})^{n-k+1}\omega_{FS}^{n-k}\wedge(d_{B}d_{B}^{ c}u)^{k}. 
\tag{4.15}\] Plug in the first term in the bracket, and we obtain \[\left\{\begin{array}{l}\sum_{k=0}^{n}\left(\begin{array}{c}n\\ k\end{array}\right)+\sum_{k=1}^{n}\frac{n}{n-k+1}\left(\begin{array}{c}n-1\\ k-1\end{array}\right)\right\}(\dot{u}_{t})^{n-k+1}\omega_{FS}^{n-k}\wedge(d_{B}d_{B}^{c}u)^{k}\] \[= \sum_{k=1}^{n}\left(\begin{array}{c}n+1\\ k\end{array}\right)(\dot{u}_{t})^{n+1-k}\omega_{FS}^{n-k}\wedge(d_{B}d_{B}^{c}u)^{k}+(\dot{u}_{t})^{n+1}\omega_{FS}^{n}. \tag{4.16}\] Here we have used the combinatorial identity: \[\left(\begin{array}{c}n\\ k\end{array}\right)+\frac{n}{n-k+1}\left(\begin{array}{c}n-1\\ k-1\end{array}\right)=\left(\begin{array}{c}n+1\\ k\end{array}\right). \tag{4.17}\] Finally, combining with the two terms on the R.H.S. of equation (4.16), our result follows. The above formula (equation (4.14)) boils down to the decomposition formula in the two dimensional case if we take \(n=1\), see Theorem 4.4, [28].

### The positivity

The plurisubharmonicity of the function \(u\) induces the positivity of its complex hessian on \((\mathbb{C}^{n+1})^{*}\). As we have seen, there is a natural splitting of the complex structure of the cone \((\mathbb{C}^{n+1})^{*}\) under the standard Sasakian structure of \(S^{2n+1}\). Therefore, we can study the positivity of the complex hessian under this splitting. Recall that we decomposed the complex hessian of a function \(u\in\mathcal{F}^{\infty}(B_{1})\) in equation (4.2) as \[dd^{c}u=\Theta_{1}+\Theta_{2}.\] Here the \(2\)-form \(\Theta_{1}\) can be written as \[\Theta_{1} = (ru_{r})_{,r}dr\wedge\eta+dr\wedge d_{B}^{c}u_{r}+d_{B}(ru_{r})\wedge\eta\] \[= r^{-1}(ru_{r})_{,r}(i\lambda^{0}\wedge\bar{\lambda}^{0})+\frac{1}{2}\left(\lambda^{0}\wedge d_{B}^{c}u_{r}+\bar{\lambda}^{0}\wedge d_{B}^{c}u_{r}\right)\] \[+ \frac{i}{2r}\left\{\bar{\lambda}_{0}\wedge d_{B}(ru_{r})-\lambda_{0}\wedge d_{B}(ru_{r})\right\}. \tag{4.18}\] Moreover, the \(2\)-form \(\Theta_{2}\) is of degree \((1,1)\) under the transversal holomorphic structure. In particular, it descends to a \((1,1)\)-form with continuous coefficients on \(\mathbb{CP}^{n}\) as \[\Theta_{2}:=(\dot{u}_{t})\omega_{FS}+d_{B}d_{B}^{c}u.\] Then we are going to prove that this \((1,1)\)-form is positive for each \(t<0\). **Lemma 4.5**.: _The \(2\)-form \(\Theta_{2}\) is a positive \((1,1)\)-current on \(\mathbb{CP}^{n}\), i.e. we have_ \[(\dot{u}_{t})\omega_{FS}+d_{B}d_{B}^{c}u\geq 0, \tag{4.19}\] _for each \(t<0\) fixed._ Proof.: Let \(\gamma^{\alpha}\) be an arbitrary local \((1,0)\)-form on \(\mathbb{CP}^{n}\) for \(\alpha=2,\cdots,n\). Then we need to prove that the following \((n,n)\)-form \[T:=\Theta_{2}\wedge(i\gamma^{2}\wedge\bar{\gamma}^{2})\wedge\cdots\wedge(i\gamma^{n}\wedge\bar{\gamma}^{n})\] is a positive measure. Thanks to the splitting of the complexified cotangent bundle, each of the \(\gamma^{\alpha}\)'s can be written as a linear combination of the coframe \(\lambda^{\beta}\) for \(\beta=1,\cdots,n\), see Section 3.3. Then we can write \(T\) with respect to the local coframe, and obtain \[T=f(r,\zeta,\bar{\zeta})(i\lambda^{1}\wedge\bar{\lambda}^{1})\wedge\cdots\wedge(i\lambda^{n}\wedge\bar{\lambda}^{n}). \tag{4.20}\] Then everything boils down to proving that the continuous function \(f\) is always non-negative.
Take \(T\) as an \((n,n)\)-form on the cone \((\mathbb{C}^{n+1})^{*}\), and then we can wedge it with the positive \((1,1)\)-current \(i\lambda^{0}\wedge\bar{\lambda}^{0}\) as \[(i\lambda^{0}\wedge\bar{\lambda}^{0})\wedge T = \big{\{}(i\lambda^{0}\wedge\bar{\lambda}^{0})\wedge\Theta_{2} \big{\}}\wedge(i\gamma^{2}\wedge\bar{\gamma}^{2})\wedge\cdots\wedge(i\gamma^{ n}\wedge\bar{\gamma}^{n})\] \[= \big{\{}(i\lambda^{0}\wedge\bar{\lambda}^{0})\wedge(\Theta_{1}+ \Theta_{2})\big{\}}\wedge(i\gamma^{2}\wedge\bar{\gamma}^{2})\wedge\cdots\wedge( i\gamma^{n}\wedge\bar{\gamma}^{n})\] \[= dd^{c}u\wedge(i\lambda^{0}\wedge\bar{\lambda}^{0})\wedge(i \gamma^{2}\wedge\bar{\gamma}^{2})\wedge\cdots\wedge(i\gamma^{n}\wedge\bar{ \gamma}^{n})\] \[\geq 0. \tag{4.21}\] Here we have used equation (4.18) on the second line on the R.H.S. of equation (4.21). Hence we can infer \(f(r,\zeta,\bar{\zeta})\geq 0\), and then our result follows. ## 5. The general results The goal of this section is to provide a useful estimate on the complex Monge-Ampere mass \[\int_{B_{r}}(dd^{c}u)^{n+1}\] on each ball \(B_{r}\) inside the unit ball. It means that we need to find an upper bound of the decomposition formula (Theorem 4.4). This will be finished via an induction argument. Recall that the maximal directional Lelong number \(M_{A}(u)\) at a distance \(A>0\) is defined as the maximum of \(\dot{u}_{t}(\zeta)\) over all \(\zeta\in\mathbb{CP}^{n}\) with fixed \(t=-A\). It is always a non-negative and finite real number, and non-increasing in \(A\) as \(A\to+\infty\). ### A priori estimates In the following context, the positivity condition (Lemma 4.5) will be used repeatedly. Therefore, it is convenient to introduce the following brief notations for the \((1,1)\)-forms: \[\mathbf{a}:=\omega_{FS};\ \ \ \ \ \mathbf{b}:=d_{B}d_{B}^{c}u,\] and for the functions \[\mathbf{c}:=\dot{u}_{t};\ \ \ \ \ M:=M_{A}(u),\] for any \(t<-A\). Moreover, we also denote \[\mathbf{e}:=\mathbf{ca}+\mathbf{b}=\Theta_{2},\] and then it follows from the positivity conditions \[\mathbf{a}>0;\ \ \ \ \mathbf{e}\geq 0;\ \ \ \ M\geq\mathbf{c}\geq 0. \tag{5.1}\] During the combinatorial computation, we will delete the symbol \(\wedge\) between the 2-forms \(\mathbf{a}\), \(\mathbf{b}\) and \(\mathbf{e}\). In fact, there is no harm to pretend the wedge product between them as an ordinary multiplication between polynomials, since \(2\)-forms are commutative with respect to the wedge. Then we first have a rough estimate as follows. **Lemma 5.1**.: _For a function \(u\in\mathcal{F}^{\infty}(B_{1})\), we have the estimate_ \[\int_{B_{r}}(dd^{c}u)^{n+1}\leq(n+1)\pi^{n+1}M_{A}^{n+1}(u), \tag{5.2}\] _for all \(r<e^{-A}\)._ Proof.: A direct computation shows that we have the following equation \[\sum_{k=0}^{n}\left(\begin{array}{c}n+1\\ k\end{array}\right)M_{A}^{n+1-k}\omega_{FS}^{n-k}\wedge(d_{B}d_{B}^{c}u)^{k}\] \[- \sum_{k=0}^{n}\left(\begin{array}{c}n+1\\ k\end{array}\right)(\dot{u}_{t})^{n+1-k}\omega_{FS}^{n-k}\wedge(d_{B}d_{B}^{c} u)^{k}\] \[= (M_{A}-\dot{u}_{t})\left\{\sum_{k=0}^{n}(M_{A}\omega_{FS}+d_{B}d _{B}^{c}u)^{n-k}\wedge\Theta_{2}^{k}\right\}. \tag{5.3}\] In fact, we can infer it from the combinatorial identity \[\sum_{k=0}^{n}\left(\begin{array}{c}n+1\\ k\end{array}\right)M^{n+1-k}\mathbf{a}^{n-k}\mathbf{b}^{k}-\sum_{k=0}^{n}\left( \begin{array}{c}n+1\\ k\end{array}\right)\mathbf{c}^{n+1-k}\mathbf{a}^{n-k}\mathbf{b}^{k}\] \[= (M-\mathbf{c})\left\{\sum_{k=0}^{n}(M\mathbf{a}+\mathbf{b})^{n-k }\cdot(\mathbf{c}\mathbf{a}+\mathbf{b})^{k}\right\}\geq 0. 
\tag{5.4}\] Thanks to the Stokes Theorem, each term like \[\omega_{FS}^{n-k}\wedge(d_{B}d_{B}^{c}u)^{k}\] for \(k=1,\cdots,n\) vanishes upon integration over \(\mathbb{CP}^{n}\). Therefore, the R.H.S. of the decomposition formula (Theorem 4.4) can be estimated from above as \[\frac{1}{\pi}\int_{S_{r}}d^{c}u\wedge(dd^{c}u)^{n}\leq(n+1)M_{A}^{n+1}\int_{\mathbb{CP}^{n}}\omega_{FS}^{n}, \tag{5.5}\] and then our result follows. First take the radius \(r\to 0^{+}\) on the L.H.S. of equation (5.2), and then it converges to the residual Monge-Ampere mass of \(u\) at the origin. Next let the distance \(A\) converge to \(+\infty\) on the R.H.S. of this equation, and then we obtain the following rough estimate. **Theorem 5.2**.: _For a function \(u\in\mathcal{F}^{\infty}(B_{1})\), we can control its residual mass by the maximal directional Lelong number at the origin as_ \[\tau_{u}(0)\leq(n+1)\lambda_{u}^{n+1}(0). \tag{5.6}\] However, this rough estimate in Lemma 5.1 is not enough for our purpose to prove the zero mass conjecture, since there is no Lelong number that could possibly appear on the R.H.S. of equation (5.2). The difficulty is that we should keep at least one \(\mathbf{c}\), but no combination terms like \(\mathbf{c}\cdot\mathbf{b}\), at the end of the estimate. In order to overcome this difficulty, we will utilize the positivity of \(\mathbf{a}\), \(\mathbf{c}\) and \(\mathbf{e}\), and rewrite the decomposition formula as follows. **Corollary 5.3**.: _For a function \(u\in\mathcal{F}^{\infty}(B_{1})\), the decomposition formula can be written as_ \[\frac{1}{\pi}\int_{S_{r}}d^{c}u\wedge(dd^{c}u)^{n}\] \[= \sum_{k=0}^{n}\left(\begin{array}{c}n+1\\ k+1\end{array}\right)(-1)^{k}\int_{\mathbb{CP}^{n}}(\dot{u}_{t})^{k+1}\omega_{FS}^{k}\wedge\Theta_{2}^{n-k}, \tag{5.7}\] _on each hypersphere \(S_{r}\) with radius \(r\in(0,1)\)._ Proof.: It directly follows from the following combinatorial identity \[\sum_{k=0}^{n}\left(\begin{array}{c}n+1\\ k\end{array}\right)\mathbf{c}^{n-k+1}\mathbf{a}^{n-k}\mathbf{b}^{k}\] \[= \sum_{k=1}^{n+1}\left(\begin{array}{c}n+1\\ k\end{array}\right)(-1)^{k-1}(\mathbf{c}\mathbf{a}+\mathbf{b})^{n+1-k}\mathbf{c}^{k}\mathbf{a}^{k-1}\] \[= \sum_{j=0}^{n}\left(\begin{array}{c}n+1\\ j+1\end{array}\right)(-1)^{j}\mathbf{e}^{n-j}\mathbf{c}^{j+1}\mathbf{a}^{j}.\] The formula in Corollary 5.3 gives a new representation of the decomposition formula of the complex Monge-Ampere mass, such that each monomial in it like \(\mathbf{e}^{n-j}\mathbf{c}^{j+1}\mathbf{a}^{j}\) is positive. Then we can infer the following crucial estimate on the upper bound of such monomials on the R.H.S. of equation (5.7). Before moving on, we introduce the notation \[\{\mathbf{a},\mathbf{b}\}_{1}^{n}.\] It stands for a polynomial consisting of monomials like \(\mathbf{a}^{n-k}\cdot\mathbf{b}^{k}\) for \(k=1,\cdots,n\), and here we permit the constant \(M\) to appear as a coefficient of the monomials. We note that the integral of each monomial inside the polynomial \(\{\mathbf{a},\mathbf{b}\}_{1}^{n}\) vanishes, since we have \[\int_{\mathbb{CP}^{n}}\omega_{FS}^{n-k}\wedge(d_{B}d_{B}^{c}u)^{k}=0,\] for each \(k=1,\cdots,n\) by the Stokes Theorem. Moreover, the part in a polynomial without the factor \(\{\mathbf{a},\mathbf{b}\}_{1}^{n}\) will be called _the principal part_ of this polynomial.
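For instance, when \(n=2\) the notation \(\{\mathbf{a},\mathbf{b}\}_{1}^{2}\) collects monomials such as \(M\mathbf{a}\mathbf{b}\) and \(\mathbf{b}^{2}\); by the Stokes Theorem above, \[\int_{\mathbb{CP}^{2}}\omega_{FS}\wedge(d_{B}d_{B}^{c}u)=\int_{\mathbb{CP}^{2}}(d_{B}d_{B}^{c}u)^{2}=0,\] so such terms give no contribution once we integrate over \(\mathbb{CP}^{n}\), and only the principal part survives.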
**Lemma 5.4**.: _For each integer \(0\leq k\leq n\), there is a positive constant \(B_{k}:=B(k,n)\) such that we have_ \[\int_{\mathbb{CP}^{n}}(\dot{u}_{t})^{n+1-k}\omega_{FS}^{n-k}\wedge\Theta_{2}^{k}\leq B_{k}M_{A}^{n}(u)\int_{\mathbb{CP}^{n}}(\dot{u}_{t})\omega_{FS}^{n}, \tag{5.9}\] _for all \(r=e^{t}<e^{-A}\). Moreover, the constants can be constructed inductively by setting \(B_{0}=1\), and satisfying_ \[B_{k+1}=\sum_{j=0}^{k}\left(\begin{array}{c}k\\ j\end{array}\right)B_{j},\] _for each \(k=0,\cdots,n-1\)._ Proof.: First we claim that the following estimate holds \[\mathbf{c}\mathbf{a}^{n-k}\mathbf{e}^{k}\leq B_{k}M^{k}\mathbf{c}\mathbf{a}^{n}+\{\mathbf{a},\mathbf{b}\}_{1}^{n}, \tag{5.10}\] for each \(k=0,\cdots,n\). Then we have \[\mathbf{c}^{n-k+1}\mathbf{a}^{n-k}\mathbf{e}^{k}\leq B_{k}M^{n}\mathbf{c}\mathbf{a}^{n}+\{\mathbf{a},\mathbf{b}\}_{1}^{n}, \tag{5.11}\] for each \(k\), and our result follows from the Stokes Theorem. In order to prove the claim, we will invoke an induction argument on equation (5.10). First check the case \(k=0\), for which we have \(B_{0}=1\). Then we have for each \(k=1,\cdots,n\) \[\mathbf{a}^{n-k}\mathbf{e}^{k}\mathbf{c} \leq M\mathbf{a}^{n-k}(\mathbf{b}+\mathbf{a}\mathbf{c})(\mathbf{b}+M\mathbf{a})^{k-1} \tag{5.12}\] \[= \{\mathbf{a},\mathbf{b}\}_{1}^{n}+M\mathbf{a}^{n-k+1}\mathbf{c}(\mathbf{b}+M\mathbf{a})^{k-1}\] \[\leq \{\mathbf{a},\mathbf{b}\}_{1}^{n}+M\mathbf{a}^{n-k+1}\mathbf{c}(\mathbf{e}+M\mathbf{a})^{k-1},\] and then it follows from our induction hypothesis that \[M\mathbf{a}^{n-k+1}\mathbf{c}(\mathbf{e}+M\mathbf{a})^{k-1}=\sum_{j=0}^{k-1}\left(\begin{array}{c}k-1\\ j\end{array}\right)M^{k-j}\mathbf{a}^{n-j}\mathbf{e}^{j}\mathbf{c} \tag{5.13}\] \[\leq M^{k}\sum_{j=0}^{k-1}\left(\begin{array}{c}k-1\\ j\end{array}\right)B_{j}\mathbf{a}^{n}\mathbf{c}+\{\mathbf{a},\mathbf{b}\}_{1}^{n}.\] Here the constant \(B_{k}\) is defined inductively by \[B_{k}=\sum_{j=0}^{k-1}\left(\begin{array}{c}k-1\\ j\end{array}\right)B_{j}.\] Combining equations (5.12) and (5.13), our claim follows. Then we are ready to find a useful upper bound of the complex Monge-Ampere mass via the decomposition formula and estimate in Lemma 5.4. **Proposition 5.5**.: _For a function \(u\in\mathcal{F}^{\infty}(B_{1})\), there exists a dimensional constant \(C_{n}\geq(n+1)\) satisfying_ \[\frac{1}{\pi}\int_{B_{r}}(dd^{c}u)^{n+1}\leq C_{n}M_{A}^{n}(u)\int_{\mathbb{C}\mathbb{P}^{n}}(\dot{u}_{t})\omega_{FS}^{n}, \tag{5.14}\] _for each \(r=e^{t}<e^{-A}\)._ Proof.: Thanks to Theorem 4.4 and Corollary 5.3, it is equivalent to estimate the following polynomial from above \[\sum_{j=0}^{n}\left(\begin{array}{c}n+1\\ j+1\end{array}\right)(-1)^{j}\mathbf{e}^{n-j}\mathbf{c}^{j+1}\mathbf{a}^{j}, \tag{5.15}\] and we note that only the principal part will matter in the end. **Step 1**. Drop off all the negative terms. More precisely, there are two cases for even and odd dimensions. For instance, the polynomial in equation (5.15) can be estimated from above, if \(n=2m\) is even, by \[\sum_{k=0}^{m}\left(\begin{array}{c}2m+1\\ 2k+1\end{array}\right)\mathbf{e}^{2m-2k}\mathbf{c}^{2k+1}\mathbf{a}^{2k}. \tag{5.16}\] On the other hand, if \(n=2m+1\) is odd, we have \[\sum_{k=0}^{m}\left(\begin{array}{c}2m+2\\ 2k+1\end{array}\right)\mathbf{e}^{2m-2k+1}\mathbf{c}^{2k+1}\mathbf{a}^{2k}. \tag{5.17}\] Then everything boils down to estimating the upper bounds of the polynomials in equations (5.16) and (5.17). In the following, we will deal with the even case first. **Step 2**.
For an even \(n=2m\), we apply Lemma 5.4 to equation (5.16) as \[\sum_{k=0}^{m}\left(\begin{array}{c}2m+1\\ 2k+1\end{array}\right)\mathbf{c}^{2k+1}\mathbf{a}^{2k}\mathbf{e}^{2m-2k}\] \[\leq M^{n}\sum_{k=0}^{m}\left(\begin{array}{c}2m+1\\ 2k+1\end{array}\right)B_{2m-2k}\mathbf{c}\mathbf{a}^{2m}+\{\mathbf{a},\mathbf{b}\}_{1}^{2m}. \tag{5.18}\] Thanks to the Stokes Theorem, our result follows with the dimensional constant \[C_{2m}:=\sum_{k=0}^{m}\left(\begin{array}{c}2m+1\\ 2k+1\end{array}\right)B_{2m-2k}.\] **Step 3**. For an odd \(n=2m+1\), we apply Lemma 5.4 to equation (5.17) as \[\sum_{k=0}^{m}\left(\begin{array}{c}2m+2\\ 2k+1\end{array}\right)\mathbf{c}^{2k+1}\mathbf{a}^{2k}\mathbf{e}^{2m-2k+1}\] \[\leq M^{n}\sum_{k=0}^{m}\left(\begin{array}{c}2m+2\\ 2k+1\end{array}\right)B_{2m-2k+1}\mathbf{c}\mathbf{a}^{2m+1}+\{\mathbf{a},\mathbf{b}\}_{1}^{2m+1}. \tag{5.19}\] Then our result follows from the Stokes Theorem again, with the dimensional constant \[C_{2m+1}:=\sum_{k=0}^{m}\left(\begin{array}{c}2m+2\\ 2k+1\end{array}\right)B_{2m-2k+1}.\] Finally, it is straightforward to check that the constant \(C_{n}\) is no smaller than \((n+1)\) in both cases. The inequality in Proposition 5.5 is not sharp, since all the negative terms have been dropped. In the following examples, we calculate the dimensional constant \(C_{n}\) for small \(n\)'s. **Example 5.6**.: _For \(n=1\) odd and \(m=0\), we can first compute_ \[\mathbf{c}^{2}\mathbf{a}+2\mathbf{c}\mathbf{b}=2\mathbf{ec}-\mathbf{ac}^{2}\leq 2\mathbf{ec}, \tag{5.20}\] _and then it follows_ \[2\mathbf{ec} \leq 2M(\mathbf{b}+\mathbf{ac}) \tag{5.21}\] \[= 2M\mathbf{b}+2M\mathbf{ac}\] \[= \{\mathbf{a},\mathbf{b}\}_{1}^{1}+2M\mathbf{ac}.\] _Then we have the dimensional constant \(C_{1}=2\). Comparing with the upper bound that we obtained in the previous work [28], it is no surprise that the estimate has been improved, since a more accurate positivity condition (Lemma 4.5) has been used._ _Moreover, we note that the above estimate is not sharp, since we have removed all the negative terms. In fact, we can obtain a better estimate as_ \[\mathbf{c}^{2}\mathbf{a}+2\mathbf{cb}\leq\{\mathbf{a},\mathbf{b}\}_{1}^{1}+2M\mathbf{ac}-\mathbf{ac}^{2}. \tag{5.22}\] **Example 5.7**.: _For \(n=2\) even and \(m=1\), we compute_ \[\mathbf{c}^{3}\mathbf{a}^{2}+3\mathbf{c}^{2}\mathbf{ab}+3\mathbf{cb}^{2}\] \[= \mathbf{c}^{3}\mathbf{a}^{2}-3\mathbf{c}^{2}\mathbf{ae}+3\mathbf{ce}^{2}\leq M^{2}\mathbf{a}^{2}\mathbf{c}+3\mathbf{ce}^{2}, \tag{5.23}\] _and then it follows_ \[3\mathbf{ce}^{2} \leq 3M(\mathbf{b}+\mathbf{ac})(\mathbf{b}+M\mathbf{a}) \tag{5.24}\] \[\leq \{\mathbf{a},\mathbf{b}\}_{1}^{2}+3M\mathbf{ac}(\mathbf{e}+M\mathbf{a})\] \[= \{\mathbf{a},\mathbf{b}\}_{1}^{2}+3M\mathbf{ace}+3M^{2}\mathbf{a}^{2}\mathbf{c}.\] _Plug equation (5.21) into the last line on the R.H.S. of the above equation, and we obtain_ \[3\mathbf{ce}^{2}\leq\{\mathbf{a},\mathbf{b}\}_{1}^{2}+6M^{2}\mathbf{a}^{2}\mathbf{c}. \tag{5.25}\] _Then we have the dimensional constant \(C_{2}=7\)._ **Example 5.8**.: _For \(n=3\) odd and \(m=1\), we compute_ \[\mathbf{c}^{4}\mathbf{a}^{3}+4\mathbf{c}^{3}\mathbf{a}^{2}\mathbf{b}+6\mathbf{c}^{2}\mathbf{ab}^{2}+4\mathbf{cb}^{3}\] \[\leq 4M^{2}\mathbf{a}^{2}\mathbf{ec}+4\mathbf{e}^{3}\mathbf{c}.
\tag{5.26}\] _and then it follows_ \[\mathbf{ce}^{3} \leq M(\mathbf{b}+\mathbf{ac})(\mathbf{b}+M\mathbf{a})^{2} \tag{5.27}\] \[\leq \{\mathbf{a},\mathbf{b}\}_{1}^{3}+M\mathbf{ac}(\mathbf{e}+M\mathbf{a})^{2}\] \[= \{\mathbf{a},\mathbf{b}\}_{1}^{3}+M\mathbf{ace}^{2}+2M^{2}\mathbf{a}^{2}\mathbf{ec}+M^{3}\mathbf{a}^{3}\mathbf{c}.\] _Plug equations (5.21) and (5.25) into the last line on the R.H.S. of the above equation, and we obtain_ \[\mathbf{ce}^{3}\leq\{\mathbf{a},\mathbf{b}\}_{1}^{3}+5M^{3}\mathbf{a}^{3}\mathbf{c}. \tag{5.28}\] _Then we have the dimensional constant \(C_{3}=24\)._

### The results

For a function \(u\in\mathcal{F}^{\infty}(B_{1})\), recall that we have defined the non-negative functional \[I(u_{t}):=\int_{\mathbb{CP}^{n}}(\dot{u}_{t})\omega_{FS}^{n},\] for all \(t<0\). Then we can rewrite the estimate in Proposition 5.5 as \[\pi^{-1}\mathrm{MA}(u)(B_{r})\leq C_{n}M_{A}^{n}(u)\cdot I(u_{t}), \tag{5.29}\] for all \(r=e^{t}<e^{-A}\). As a direct consequence, our main result follows for an \(S^{1}\)-invariant plurisubharmonic function that is also \(C^{2}\)-continuous outside the origin. **Theorem 5.9**.: _For a function \(u\in\mathcal{F}^{\infty}(B_{1})\), there is a dimensional constant \(C_{n}\geq(n+1)\) satisfying_ \[\tau_{u}(0)\leq C_{n}[\lambda_{u}(0)]^{n}\cdot\nu_{u}(0). \tag{5.30}\] Proof.: First take \(r\to 0^{+}\), i.e. \(t\to-\infty\), on both sides of equation (5.14) and we obtain \[\tau_{u}(0)\leq C_{n}M_{A}^{n}(u)\nu_{u}(0). \tag{5.31}\] Then the result follows by taking \(A\to+\infty\). As a corollary, we confirm the zero mass conjecture in this special case. **Corollary 5.10**.: _For a function \(u\in\mathcal{F}^{\infty}(B_{1})\), we have_ \[\nu_{u}(0)=0\Rightarrow\tau_{u}(0)=0.\] For a general function \(u\in\mathcal{F}(B_{1})\), we can use the approximation of the functional by its standard regularization \(u_{\varepsilon}:=u*\rho_{\varepsilon}\). Then we are ready to prove the main theorem. **Theorem 5.11**.: _For a function \(u\in\mathcal{F}(B_{1})\), there is a dimensional constant \(C_{n}\geq(n+1)\) satisfying_ \[\tau_{u}(0)\leq 2C_{n}[\lambda_{u}(0)]^{n}\cdot\nu_{u}(0). \tag{5.32}\] Proof.: It is enough to prove the following estimate \[\tau_{u}(0)\leq C_{n}\left(2M_{A}(u)+\kappa\right)^{n}\cdot\nu_{u}(0), \tag{5.33}\] for each \(A>1\) large and \(\kappa>0\) small. Take two large constants as \[2<A<2A<B.\] Thanks to Lemma 2.3 and Proposition 2.11, we can extract a subsequence \(u_{\varepsilon_{k}}\) from the regularization of \(u\) satisfying \[\mathrm{MA}(u_{\varepsilon_{k}})(B_{r})\to\mathrm{MA}(u)(B_{r}) \tag{5.34}\] and \[I(u_{\varepsilon_{k},t})\to I(u_{t}), \tag{5.35}\] as \(k\to+\infty\) for almost all \(t=\log r\in[-B,-A]\). Meanwhile, Lemma 2.12 implies that we have a uniform control of the maximal directional Lelong numbers of the regularization as \[M_{A^{\prime}}(u_{\varepsilon})\leq 2M_{A}(u)+C\varepsilon, \tag{5.36}\] for all \(2A<A^{\prime}<B\) and \(\varepsilon<e^{-2A}/2\). Apply Proposition 5.5 to the subsequence, and then we obtain the estimate \[\pi^{-1}\mathrm{MA}(u_{\varepsilon_{k}})(B_{r}) \leq C_{n}M_{A^{\prime}}^{n}(u_{\varepsilon_{k}})I(u_{\varepsilon_{k},t})\] \[\leq C_{n}\left(2M_{A}(u)+Ce^{-2A}/2\right)^{n}I(u_{\varepsilon_{k},t}), \tag{5.37}\] for all \(-B<t<-2A\) and \(k\) large enough. Take \(k\to+\infty\) on both sides of equation (5.37), and combine with equations (5.34) and (5.35).
Hence we have the following inequality \[\pi^{n}\tau_{u}(0)\leq C_{n}\left(2M_{A}(u)+Ce^{-2A}/2\right)^{n}I(u_{t}), \tag{5.38}\] for almost all \(t=\log r\in[-B,-2A]\). Fix any \(A>\log(C/\kappa)\), and then equation (5.33) follows by taking \(B\to+\infty\). Hence our main result follows. Finally, the zero mass conjecture directly follows from Proposition 2.8 and the above estimate for an \(S^{1}\)-invariant plurisubharmonic function. **Theorem 5.12**.: _For a function \(u\in\mathcal{F}(B_{1})\), we have_ \[\nu_{u}(0)=0\Rightarrow\tau_{u}(0)=0.\] We remark that the estimate in Theorem 5.11 can be improved. With a stronger apriori estimate on the maximal directional Lelong numbers (Remark 6.7, [28]), we can further obtain \[\tau_{u}(0)\leq C_{n}[\lambda_{u}(0)]^{n}\cdot\nu_{u}(0), \tag{5.39}\] for any function \(u\in\mathcal{F}(B_{1})\). In \(\mathbb{C}^{2}\) with constant \(C_{1}=2\), one can directly check that the inequality in equation (5.39) is satisfied for all \(S^{1}\)-invariant examples provided in Section 6, [28]. However, this estimate is still not sharp, since all negative terms have been removed during the estimates. Finally, we note that this control of the residual Monge-Ampere mass (by its Lelong number and maximal directional Lelong number) fails if the function is no longer \(S^{1}\)-invariant, see Example 6.13, [28] and [14]. ### Energy functionals Through a variational approach, we have another point of view to describe the decomposition formula (Theorem 4.4) and the result of the zero mass conjecture. In fact, we can think this formula to be the push forward of the complex Monge-Ampere measure of a \(u\in\mathcal{F}^{\infty}(B_{1})\) from the Kahler cone \(S^{2n+1}\times\mathbb{R}_{+}\) to its base manifold \(\mathbb{CP}^{n}\). To see this, re-write equation (4.14) as follows: \[\pi^{-1}\mathrm{MA}(u)(B_{r})=\sum_{k=0}^{n}\binom{n+1}{k}\,E_{n,k}(u_{t}), \tag{5.40}\] where we define the functionals as \[E_{n,k}(u_{t}):=\int_{\mathbb{CP}^{n}}(\dot{u}_{t})^{n+1-k}\omega_{FS}^{n-k} \wedge(i\partial\bar{\partial}_{\zeta}u_{t})^{k},\] where \(\partial_{\zeta}\), \(\bar{\partial}_{\zeta}\) and \(\partial\bar{\partial}_{\zeta}\) mean operators on \(\mathbb{CP}^{n}\). In particular, the last term \(E_{n,n}\) is related to the _pluri-complex energy_\(\mathcal{E}\) (see [4], [5]) in the following way: \[(n+1)E_{n,n}(u_{t})=-\frac{d}{dt}\mathcal{E}(u_{t}),\quad\text{where}\quad \mathcal{E}(u):=\int_{\mathbb{CP}^{n}}(-u)(i\partial\bar{\partial}_{\zeta}u)^{ n}.\] In the domain case, it is a well-known fact that the second variation of \(-\mathcal{E}\) is the push forward of the complex Monge-Ampere operator. Hence the energy \(\mathcal{E}\) is a concave functional along the so called _sub-geodesic ray_. For more details about geodesics and sub-geodesics in the space of Kahler potentials, see [17], [35], [11] and [27]. In our case, it is plausible to take \(u_{t}\) for a function \(u\in\mathcal{F}^{\infty}(B_{1})\) as a sub-geodesic ray with \(C^{2}\)-regularity, and it is a _geodesic ray_ if the complex Monge-Ampere mass of \(u\) vanishes, i.e. we have \[(dd^{c}u)^{n+1}=0.\] Then we can take the primitives of the above functionals, and define the following energies (up to a constant) along a sub-geodesic ray: \[\frac{d}{dt}\mathcal{E}_{n,k}(u_{t}):=-\begin{pmatrix}n+1\\ k\end{pmatrix}E_{n,k}(u_{t}).\] In particular, we take \(\mathcal{E}_{n,n}=\mathcal{E}\). 
Then equation (5.40) can be rewritten as \[\mathcal{M}=-\sum_{k=0}^{n}\mathcal{E}_{n,k}, \tag{5.41}\] along a sub-geodesic ray up to a constant, where \(\mathcal{M}\) is a \(t\)-primitive of the measure \(\pi^{-1}\mathrm{MA}(u)(B_{r})\). It is clear now that the decomposition formula induces a push forward of the complex Monge-Ampere operator, under the Sasakian structure of the sphere. That is to say, the non-triviality of the Kahler cone structure is reflected by the terms like \(\mathcal{E}_{n,k}\) for \(k=0,\cdots,n-1\) in the formula. Furthermore, it implies the following concavity of the energy. **Corollary 5.13**.: _Along a sub-geodesic ray \(u_{t}\) with \(C^{2}\)-regularity, the energy functional \(\sum_{k=0}^{n}\mathcal{E}_{n,k}\) is concave. Moreover, it is affine if and only if \(u_{t}\) is a geodesic ray._ The two dimensional version of the above Corollary has been proved in Section 7, [28], and this is its higher dimensional analogue. Finally, we would like to point out that the first term \(E_{n,0}\) in equation (5.40) also has a meaning. It is in fact a kind of \(L^{p}\)-Lelong number for \(p=n+1\), since we have the following convergence due to Lemma 2.5: \[\lim_{t\to-\infty}E_{n,0}(u_{t})=[\nu_{u}(0)]^{n+1}. \tag{5.42}\] Recall that we introduce a primitive of the functional \(I\) as \[\mathcal{I}(u_{t}):=\int_{\mathbb{CP}^{n}}u_{t}\omega_{FS}^{n}.\] Thanks to equation (5.41) and (5.42), we can rephrase the zero mass conjecture in terms of the energies. **Corollary 5.14**.: _Along a sub-geodesic ray \(u_{t}\) with \(C^{2}\)-regularity, assume that the asymptote of the functional \(\mathcal{I}\) at \(-\infty\) is zero. Then we have_ \[\lim_{t\to-\infty}\frac{d}{dt}\mathcal{M}(u_{t})=\lim_{t\to-\infty}\sum_{k=1}^{ n}\frac{d}{dt}\mathcal{E}_{n,k}(u_{t})=0. \tag{5.43}\] It will be interesting to see if Corollary 5.14 also holds for a function \(u\) in the family \(\mathcal{F}(B_{1})\), i.e. along a bounded sub-geodesic ray. However, there is no a priori reason that these energies \(\mathcal{E}_{n,k}\) are well defined in that case. ## 6. The method of moving frames In this section, we are going to provide an alternative proof of the decomposition formula (Theorem 4.4) via the method of moving frames, see [12], [13]. Moreover, the complex hessian equation of a function without symmetry will be presented. In the language of moving frames, a real coordinate \((x^{A},y^{A})\) of a point \(p\in\mathbb{R}^{2n+2}\) should be interpreted as a vector \[x^{A}\frac{\partial}{\partial x^{A}}+y^{A}\frac{\partial}{\partial y^{A}}\in T _{p}\mathbb{R}^{2n+2},\] Then a complex coordinate \((z^{A})\) corresponds to a vector in \(T_{p}^{1,0}(\mathbb{C}^{n+1})\), and its complex conjugate \((\bar{z}^{A})\) is a vector in \(T_{p}^{0,1}(\mathbb{C}^{n+1})\). In the following, we are going to use the Einstein summation convention. ### The structure equations Let \(\{e_{0},\cdots,e_{n}\}\) be a unitary field of \(\mathbb{C}^{n+1}\), which means that we have for \(A,B=0,\cdots,n\) \[(e_{A}\,\ e_{B})=\delta_{AB}, \tag{6.1}\] where the hermitian inner product is taken on \(\mathbb{C}^{n+1}\). Taking the exterior derivative of the position vector \(z\), we can write \[dz=\omega^{A}e_{A};\ \ \ \ \omega^{A}=a_{B}^{A}\cdot dz^{B}, \tag{6.2}\] where \((\omega^{A})\) is a vector-valued \((1,0)\)-form, and \((a_{B}^{A})\) is a unitary matrix with coefficients as smooth functions in \(\mathbb{C}^{n+1}\). 
Then the Euclidean metric of \(\mathbb{C}^{n+1}\) is given as \[ds_{e}^{2}=\omega^{A}\overline{\omega^{A}}.\] Hence it follows from equations (6.1) and (6.2) that \[de_{A}=\omega_{A}^{B}e_{B};\ \ \ \ \omega_{C}^{B}=a_{A}^{B}\cdot d\left(\overline{a_{A}^{C}}\right), \tag{6.3}\] where \((\omega_{B}^{A})\) is a matrix-valued \(1\)-form on \(\mathbb{C}^{n+1}\), and then we have \[\omega_{B}^{A}+\overline{\omega_{A}^{B}}=0. \tag{6.4}\] Taking exterior derivatives on both sides of equations (6.3) and (6.2), we obtain the following structure equations \[d\omega^{A}+\omega_{B}^{A}\wedge\omega^{B}=0; \tag{6.6}\] \[d\omega_{B}^{A}+\omega_{C}^{A}\wedge\omega_{B}^{C}=0. \tag{6.5}\] The above equations reflect the fact that the Euclidean space is flat. Moreover, we can decompose the differential of a smooth function \(u\) as \[du=\partial u+\bar{\partial}u,\] where \[\partial u=u_{A}\omega^{A};\quad\bar{\partial}u=u_{\bar{A}}\overline{\omega^{A}};\quad u_{\bar{A}}=\overline{(u_{A})}.\]

### The Fubini-Study metric

Next we set the first element in the frame as the unit vector towards the point: \[e_{0}:=e^{-t}z,\] where the variable is \(t:=\log|z|\). Take exterior derivatives, and we obtain \[dz=e^{t}(dt+\omega_{0}^{0})\cdot e_{0}+e^{t}\omega_{0}^{\alpha}\cdot e_{\alpha}, \tag{6.7}\] for all \(\alpha=1,\cdots,n\). Then it follows \[\omega^{0}=e^{t}(dt+\omega_{0}^{0});\quad\quad\omega^{\alpha}=e^{t}\omega_{0}^{\alpha}. \tag{6.8}\] Fixing such an \(e_{0}\), a local unitary frame field \((e_{0},e_{1},\cdots,e_{n})\) is in fact a local section of the principal bundle \(U(n+1)\) over \(\mathbb{CP}^{n}\). That is to say, we have the projection map \[\pi:U(n+1)\to\mathbb{CP}^{n},\] defined as \[(e_{0},e_{1},\cdots,e_{n})\to[e_{0}],\] where \([e_{0}]\) is the equivalence class of \(z\) in \(\mathbb{CP}^{n}\). It is clear that each fiber of this projection map is isomorphic to \(U(1)\times U(n)\), and hence a local section of this principal bundle also gives such a frame field. Moreover, the Fubini-Study metric on \(\mathbb{CP}^{n}\) can be given by a direct computation as \[ds_{FS}^{2}=\theta^{\alpha}\overline{\theta^{\alpha}},\quad\quad\theta^{\alpha}=\omega_{0}^{\alpha}. \tag{6.9}\] Then it follows \[\theta^{\alpha}=e^{-t}\omega^{\alpha}, \tag{6.10}\] and the \((1,0)\)-forms \(\{\theta^{\alpha},1\leq\alpha\leq n\}\) build a local coframe field on \(\mathbb{CP}^{n}\). In fact, the \(1\)-form \(\omega_{0}^{0}\) exactly corresponds to the contact \(1\)-form in Sasakian geometry, and the coframe field gives the decomposition of the complex structure under the Kahler cone structure of \((\mathbb{C}^{n+1})^{*}\). Take exterior derivatives on equation (6.10), and then we obtain the structure equation for this coframe field: \[d\theta^{\alpha} = \omega_{0}^{0}\wedge\omega_{0}^{\alpha}+\omega_{0}^{\beta}\wedge\omega_{\beta}^{\alpha}\] \[= -(\omega_{\beta}^{\alpha}-\omega_{0}^{0}\delta_{\beta}^{\alpha})\wedge\omega_{0}^{\beta}\] \[= -\theta_{\beta}^{\alpha}\wedge\theta^{\beta}, \tag{6.11}\] where \(\theta_{\beta}^{\alpha}:=\omega_{\beta}^{\alpha}-\omega_{0}^{0}\delta_{\beta}^{\alpha}\) satisfies \[\theta_{\beta}^{\alpha}+\overline{\theta_{\alpha}^{\beta}}=0.\] These are the connection \(1\)-forms with respect to the coframe field \(\{\theta^{\alpha},1\leq\alpha\leq n\}\).
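Indeed, this skew-hermitian property is an immediate consequence of equation (6.4): \[\theta_{\beta}^{\alpha}+\overline{\theta_{\alpha}^{\beta}}=\left(\omega_{\beta}^{\alpha}+\overline{\omega_{\alpha}^{\beta}}\right)-\left(\omega_{0}^{0}+\overline{\omega_{0}^{0}}\right)\delta_{\beta}^{\alpha}=0.\]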
For a smooth function \(v\) on \(\mathbb{CP}^{n}\), we can write as follows: \[\partial v=v_{\alpha}\theta^{\alpha};\quad\bar{\partial}v=v_{\bar{\alpha}} \overline{\theta^{\alpha}};\quad\overline{(v_{\alpha})}=v_{\bar{\alpha}}.\] Then we infer \[\partial\bar{\partial}v = d(v_{\bar{\alpha}}\overline{\theta^{\alpha}})\] \[= \Big{(}\;dv_{\bar{\alpha}}-v_{\bar{\beta}}\overline{\theta^{ \beta}_{\alpha}}\;\Big{)}\wedge\overline{\theta^{\alpha}}\] \[= \nabla v_{\bar{\alpha}}\wedge\overline{\theta^{\alpha}}, \tag{6.12}\] where we put \[\nabla v_{\bar{\alpha}}: = dv_{\bar{\alpha}}-v_{\bar{\beta}}\overline{\theta^{\bar{\beta}} _{\alpha}}\] \[= v_{\gamma\bar{\alpha}}\theta^{\gamma}+v_{\bar{\gamma}\bar{ \alpha}}\overline{\theta^{\gamma}}. \tag{6.13}\] Thanks to the symmetry of \(v_{\bar{\gamma}\bar{\alpha}}\), we obtain the complex hessian of \(v\) as \[i\partial\bar{\partial}v=iv_{\gamma\bar{\alpha}}\theta^{\gamma}\wedge \overline{\theta^{\alpha}}. \tag{6.14}\] ### The complex hessian Take a unitary frame field \(\{e_{A},0\leq A\leq n\}\) on \(\mathbb{C}^{n+1}\) satisfying \[e^{t}e_{0}=z.\] Then we can compute for a function \(u\in\mathcal{F}^{\infty}(B_{1})\) as follows: \[\bar{\partial}u = u_{0}\overline{\omega^{0}}+u_{\bar{\alpha}}\overline{\omega^{ \alpha}}\] \[= e^{t}\left\{u_{\bar{0}}(dt-\omega^{0}_{0})+u_{\bar{\alpha}} \overline{\theta^{\alpha}}\right\}. \tag{6.15}\] Thanks to equation (6.10), we can decompose the exterior derivative on \(\mathbb{C}^{n+1}\) locally as \[du_{\bar{0}}=u_{0,\bar{0}}\omega^{0}e^{-t}+u_{\bar{0},\bar{0}} \overline{\omega^{0}}e^{-t}+d^{T}u_{\bar{0}}; \tag{6.17}\] \[du_{\bar{\alpha}}=u_{0,\bar{\alpha}}\omega^{0}e^{-t}+u_{\bar{0},\bar{\alpha}}\overline{\omega^{0}}e^{-t}+d^{T}u_{\bar{\alpha}}, \tag{6.16}\] where \(d^{T}\) denotes the exterior derivative in the transversal direction \(\mathbb{CP}^{n}\). Hence the complex hessian of \(u\) can be calculated as \[\partial\bar{\partial}u = d(\bar{\partial}u)\] \[= e^{t}dt\wedge\left(-u_{\bar{0}}\omega^{0}_{0}+u_{\bar{\alpha}} \overline{\theta^{\alpha}}\right)\] \[+ e^{t}\left\{du_{\bar{0}}\wedge(dt-\omega^{0}_{0})+u_{\bar{0}} \theta^{\beta}\wedge\overline{\theta^{\beta}}\right\}\] \[+ e^{t}d(u_{\bar{\alpha}}\overline{\theta^{\alpha}}). \tag{6.18}\] Here we note that \(u_{\bar{0}}=u_{0}\) is real since the function \(u\) is \(S^{1}\)-invariant, and then it follows on the hypersphere \(S_{r}\) \[du_{\bar{0}}|_{S_{r}}=d^{T}u_{\bar{0}};\quad\;du_{\bar{\alpha}}|_{S_{r}}=d^{T} u_{\bar{\alpha}}. \tag{6.19}\] Hence we have the restriction \[i\partial\bar{\partial}u|_{S_{r}}=ie^{t}\left\{(u_{\bar{0}}\delta_{\alpha \beta}+u_{\alpha\bar{\beta}})\theta^{\alpha}\wedge\overline{\theta^{\beta}}+d ^{T}u_{\bar{0}}\wedge\overline{\omega^{0}_{0}}\right\}, \tag{6.20}\] and this is the same formula in our Lemma 4.2. Moreover, it is clear to have \[d^{c}u=ie^{t}\left\{u_{\bar{0}}\overline{\omega^{0}_{0}}+\frac{1}{2}\left(u_{ \bar{\alpha}}\overline{\theta^{\alpha}}-u_{\alpha}\theta^{\alpha}\right)\right\} \tag{6.21}\] Then it is the same way to compute the \((2n+1)\)-form \[d^{c}u\wedge(dd^{c}u|_{S_{r}})^{n},\] and the decomposition formula follows by taking the integral on \(\mathbb{CP}^{n}\times\{t\}\) and perform the integration by parts. ### No symmetry When there is no symmetry of the function, the situation becomes more interesting. Then equation (6.19) fails, and we need to directly compute from equation (6.18). 
First we have \[dt=\frac{1}{2}e^{-t}(\omega^{0}+\overline{\omega^{0}}); \tag{6.23}\] \[\omega_{0}^{0}=\frac{1}{2}e^{-t}(\omega^{0}-\overline{\omega^{0} }), \tag{6.22}\] and then it follows \[dt\wedge\omega_{0}^{0}=-\frac{1}{2}e^{-2t}\omega^{0}\wedge\overline{\omega^{0}}. \tag{6.24}\] Hence the first term on the R.H.S. of equation (6.18) is equal to the following: \[\frac{1}{2}e^{-t}u_{\bar{0}}\omega^{0}\wedge\overline{\omega^{0}}+\frac{1}{2}u _{\bar{\alpha}}\omega^{0}\wedge\overline{\theta^{\alpha}}+\frac{1}{2}u_{\bar {\alpha}}\overline{\omega^{0}}\wedge\overline{\theta^{\alpha}}. \tag{6.25}\] The second term can be computed as \[du_{\bar{0}}\wedge(dt-\omega_{0}^{0})e^{t} \tag{6.26}\] \[= (u_{0,\bar{0}}\omega^{0}e^{-t}+u_{\bar{0},\bar{0}}\overline{ \omega^{0}}e^{-t}+d^{T}u_{\bar{0}})\wedge\overline{\omega^{0}}\] \[= e^{-t}u_{0,\bar{0}}\omega^{0}\wedge\overline{\omega^{0}}+u_{ \alpha,\bar{0}}\theta^{\alpha}\wedge\overline{\omega^{0}}+u_{\bar{\alpha}, \bar{0}}\overline{\theta^{\alpha}}\wedge\overline{\omega^{0}},\] where we put \[d^{T}u_{\bar{0}}=u_{\alpha,\bar{0}}\theta^{\alpha}+u_{\bar{\alpha},\bar{0}} \overline{\theta^{\alpha}}. \tag{6.27}\] Moreover, we can compute the third term as \[e^{t}d(u_{\bar{\alpha}}\overline{\theta^{\alpha}})=e^{t}\left( du_{\bar{\alpha}}\wedge\overline{\theta^{\alpha}}+u_{\bar{\alpha}}d\overline{ \theta^{\alpha}}\right) \tag{6.28}\] \[= (u_{0,\bar{\alpha}}\omega^{0}+u_{\bar{0},\bar{\alpha}}\overline {\omega^{0}})\wedge\overline{\theta^{\alpha}}+e^{t}(d^{T}u_{\bar{\alpha}}+u_{ \bar{\beta}}\overline{\theta^{\beta}_{\alpha}})\wedge\overline{\theta^{\alpha}}\] \[= u_{0,\bar{\alpha}}\omega^{0}\wedge\overline{\theta^{\alpha}}+u_ {0,\bar{\alpha}}\overline{\omega^{0}}\wedge\overline{\theta^{\alpha}}+e^{t}u_ {\gamma\bar{\alpha}}\theta^{\gamma}\wedge\overline{\theta^{\alpha}}.\] Combining with equation (6.25), (6.26) and (6.28), we obtain the complex hessian as \[\partial\bar{\partial}u = e^{-t}\left(u_{0,\bar{0}}+\frac{1}{2}u_{\bar{0}}\right)\omega^ {0}\wedge\overline{\omega^{0}} \tag{6.29}\] \[+ \left(u_{0,\bar{\alpha}}+\frac{1}{2}u_{\bar{\alpha}}\right)\omega ^{0}\wedge\overline{\theta^{\alpha}}+u_{\alpha,\bar{0}}\theta^{\alpha}\wedge \overline{\omega^{0}}\] \[+ \left(u_{\bar{\alpha},\bar{0}}-u_{\bar{0},\bar{\alpha}}-\frac{1} {2}u_{\bar{\alpha}}\right)\overline{\theta^{\alpha}}\wedge\overline{\omega^ {0}}\] \[+ e^{t}(u_{\bar{0}}\delta_{\alpha\beta}+u_{\alpha\bar{\beta}}) \theta^{\alpha}\wedge\overline{\theta^{\beta}}.\] Since the complex hessian is a \((1,1)\)-form, it follows \[u_{\bar{\alpha},\bar{0}}=u_{\bar{0},\bar{\alpha}}+\frac{1}{2}u_{\bar{\alpha}}. \tag{6.30}\] Moreover, we can use the identity \(-\partial\bar{\partial}u=\bar{\partial}\partial u\) to obtain the following commutation relations: \[u_{\bar{0}}\delta_{\alpha\beta}+u_{\alpha\bar{\beta}}=u_{0}\delta_{\alpha\beta} +u_{\bar{\beta}\alpha}; \tag{6.31}\] \[u_{0,\bar{0}}+\frac{1}{2}u_{\bar{0}}=u_{\bar{0},0}+\frac{1}{2}u_{0}; \tag{6.32}\] \[u_{\alpha,0}=u_{0,\alpha}+\frac{1}{2}u_{\alpha}, \tag{6.33}\] \[u_{\bar{\alpha},0}=u_{0,\bar{\alpha}}+\frac{1}{2}u_{\bar{\alpha}};\ \ \ \ u_{ \alpha,\bar{0}}=u_{\bar{0},\alpha}+\frac{1}{2}u_{\alpha}. \tag{6.34}\] Finally, we end up with the following complex hessian equation for a general function. 
**Theorem 6.1**.: _For a smooth function \(u\) on \((\mathbb{C}^{n+1})^{*}\), its complex hessian can be decomposed as_ \[\partial\bar{\partial}u = e^{-t}\left(u_{0,\bar{0}}+\frac{1}{2}u_{\bar{0}}\right)\omega^{ 0}\wedge\overline{\omega^{0}}\] \[+ u_{\bar{\alpha},0}\omega^{0}\wedge\overline{\theta^{\alpha}}+u_ {\alpha,\bar{0}}\theta^{\alpha}\wedge\overline{\omega^{0}}\] \[+ e^{t}(u_{\bar{0}}\delta_{\alpha\beta}+u_{\alpha\bar{\beta}}) \theta^{\alpha}\wedge\overline{\theta^{\beta}}. \tag{6.35}\] _In particular, we have_ \[u_{0,\bar{0}}+\frac{1}{2}u_{\bar{0}}\geq 0\ \ \ \text{and}\ \ \ (u_{\bar{0}}\delta_{\alpha\beta}+u_{\alpha\bar{\beta}})(i\theta^{\alpha} \wedge\overline{\theta^{\beta}})\geq 0, \tag{6.36}\] _if \(u\) is further plurisubharmonic._
2309.15821
LGMCTS: Language-Guided Monte-Carlo Tree Search for Executable Semantic Object Rearrangement
We introduce a novel approach to the executable semantic object rearrangement problem. In this challenge, a robot seeks to create an actionable plan that rearranges objects within a scene according to a pattern dictated by a natural language description. Unlike existing methods such as StructFormer and StructDiffusion, which tackle the issue in two steps by first generating poses and then leveraging a task planner for action plan formulation, our method concurrently addresses pose generation and action planning. We achieve this integration using a Language-Guided Monte-Carlo Tree Search (LGMCTS). Quantitative evaluations are provided on two simulation datasets, and complemented by qualitative tests with a real robot.
Haonan Chang, Kai Gao, Kowndinya Boyalakuntla, Alex Lee, Baichuan Huang, Harish Udhaya Kumar, Jinjin Yu, Abdeslam Boularias
2023-09-27T17:45:49Z
http://arxiv.org/abs/2309.15821v3
# LGMCTS: Language-Guided Monte-Carlo Tree Search ###### Abstract We introduce a novel approach to the executable semantic object rearrangement problem. In this challenge, a robot seeks to create an actionable plan that rearranges objects within a scene according to a pattern dictated by a natural language description. Unlike existing methods such as StructFormer and StructDiffusion, which tackle the issue in two steps by first generating poses and then leveraging a task planner for action plan formulation, our method concurrently addresses pose generation and action planning. We achieve this integration using a Language-Guided Monte-Carlo Tree Search (LGMCTS). Quantitative evaluations are provided on two simulation datasets, and complemented by qualitative tests with a real robot. Our code and supplementary materials are accessible at [https://github.com/changhaonan/LG-MCTS](https://github.com/changhaonan/LG-MCTS). ## I Introduction In daily life, tasks like "Set up the kitchen" require arranging objects according to language cues, an intuitive process for humans but a complex challenge for robots. The semantic rearrangement problem aims to enable robots to reconstruct scenes based on linguistic descriptions. This necessitates the seamless integration of scene understanding, linguistic reasoning, and action planning, involving multiple disciplines such as robotics and natural language processing. Consider the task: "Set the dinnerware for dinner and place a candle in front of a plate." A robot must identify the items constituting 'dinnerware' and their correct arrangement, while also handling real-world constraints like initial object coverings and spatial obstacles. The command introduces two key constraints: 1) 'dinnerware' must be arranged appropriately for dinner, and 2) a candle must be placed in front of a plate. This example highlights the problem's complexity. One approach employs a multi-modality transformer [1, 2, 3] to learn the mapping between language and object poses through simulated arrangements and rule-based language descriptions. However, this method has limitations. It assumes that language descriptions map to precise ground-truth poses, which is often unrealistic. It is also less adaptable to free-form linguistic inputs, as it performs best with descriptions similar to its training data. Recent research employs diffusion models to capture how language tokens map to spatial distributions. DALL-E-Bot [4] uses text-to-image to generate target images and derive poses, while StructDiffusion [5] employs a diffusion model conditioned on language and point-cloud embeddings. Both approaches show promise but have limitations: DALL-E-Bot can be unstable and distracted by extraneous objects, whereas StructDiffusion is constrained to known training patterns and lacks zero-shot adaptability. Moreover, recent developments emphasize direct robot control through Large Language Models (LLMs) [6, 7, 8] and prompt techniques such as Chain of Thoughts (COT) [9]. The 'Code-as-policies' [10] approach shows remarkable zero-shot capabilities and could theoretically address the semantic rearrangement problem, leveraging LLMs' strength in pattern comprehension and object selection. Current methods such as StructFormer and Code-as-policies often fail to produce collision-free arrangements, whereas StructDiffusion incorporates a learning-based collision checker. However, collision-free does not mean executable, as shown in Fig. 2. 
For example, block 3 obstructs block 2's target position, complicating block 1's motion. This highlights the challenge of distinguishing between collision-free and executable goals, as some may be the former but not the latter, or may require excessive steps to execute. This study presents the **L**anguage-**G**uided **M**onte-**C**arlo **T**ree **S**earch (LGMCTS) technique, designed specifically Fig. 2: Illustration of infeasible start and goal configuration. In this scene, block 2 is in an infeasible start configuration. And the goals of blocks 3 and 2 are infeasible without an excessive number of intermediate steps. Fig. 1: Robot Setup. We use a UR5e robot equipped with a RealSense D455 camera. for executable semantic object rearrangement. LGMCTS leverages LLMs to interpret free-form language and consider object placements as probability distributions, not exact points. We frame the challenge as a sequential sampling problem, where each object's pose is drawn from a distribution influenced by language and the current scene state. The approach incorporates distracting objects, enabling plans that meet language criteria while also being executable. The primary contributions of this study are: 1) Introducing a novel approach that concurrently addresses semantic rearrangement goal generation and action planning 2) Presenting a unique method that facilitates zero-shot multi-pattern composition for semantic object rearrangement 3) Establishing a new benchmark tailored for executable semantic object rearrangement. ## II Related Works ### _Semantic Rearrangement_ The semantic rearrangement problem consists of devising a rearrangement plan that is both semantically congruent with a given language description and physically feasible. In recent years, this has gained increased traction, particularly as a pivotal application in language-driven robotics. CLIPort [1] took the initial step in this direction by merging CLIP features with a Transporter network. Yet, its design is limited to basic pick-and-place tasks. StructFormer [2] advanced the field using a transformer model, simulating rearrangements with hand-crafted rules and connecting language tokens to object poses. Leveraging StructFormer's dataset, StructDiffusion [5] introduced a pose diffusion model to predict poses from language, enhancing performance. Nonetheless, a common shortcoming amongst these methodologies is their limitation to a single structure or pattern that they have been trained on, making composite patterns a persistent challenge. Meanwhile, the rearrangement goals generated by these methods might be inexecutable. ### _LLM-driven Robot Control_ Recent advancements in large language models (LLMs) [6, 7, 8] have showcased stellar performance across a broad spectrum of tasks. This has led to a growing interest in LLM-driven robotics. SayCan [11] was among the first to integrate LLMs into robotic task planning, which resulted in impressive context comprehension and behavioral decision-making. Subsequently, Code-as-policies [10] merges LLM code generation with API planning, demonstrating remarkable zero-shot capabilities across various robotic tasks. Nonetheless, LLM-driven robotics faces several challenges. LLMs, while semantically adept, seem to lack a true physical scene understanding, leading to plans that, although meaningful, can be unfeasible. ### _Rearrangement Planning_ In rearrangement problems, determining the sequence of tasks presents a significant challenge. 
Several studies have tackled this by encoding collisions between initial and target arrangements into a graph [12, 13, 14]. These representations then transform the problem into established graph problems. Additionally, recent advances [15, 16] have adopted the Monte Carlo Tree Search (MCTS) for long-horizon planning. A common attribute among most prehensile planners is the need for specific goal states [17]. However, dictating a goal state in semantic rearrangement can restrict the solution space, leading to potential planning failures. To address this, our proposed LGMCTS planner obtains goal state distributions from a language model, which act as constraints for individual goal poses.

## III Preliminaries

### _Problem Formulation_

The task of semantic rearrangement can be succinctly defined as follows. We are given a scene with objects represented by \(O_{S}=\{o_{1},o_{2},\ldots,o_{N}\}\) and a command \(L\), where \(L\) is a pure natural language command that implies a desired distribution list \(D=\{d_{i}:p(o_{i})\sim f_{d_{i}}\mid o_{i}\in O_{R}\}\), and \(p(o_{i})\) refers to the position of object \(o_{i}\). Here, \(O_{R}\subset O_{S}\) denotes the objects designated for rearrangement, and \(d_{i}\) indicates the desired pose distribution for each object. The objective is to identify an optimal action sequence, \(A=(a_{t})_{t=1}^{H}\), where each action \(a_{t}\) corresponds to moving an object \(o_{i}\) to a sampled position \(p(o_{i})\), so as to achieve a goal arrangement that aligns with \(L\), i.e. \(\prod_{o_{i}\in O_{R}}f_{d_{i}}(p_{i})>0\), while minimizing the number of action steps \(H\). Notably, \(A\) includes movements not only of objects \(o\in O_{R}\) but also of distracting objects, denoted as \(O_{d}\), with \(O_{d}\subset O_{S}\).

### _Monte Carlo Tree Search_

A typical MCTS algorithm iteratively performs the following four operations:

1. **Selection.** On a fully expanded node (all the children nodes have been visited), MCTS selects a branch to explore with an Upper Confidence Bound (UCB) formula: \[\arg\max_{a}\left(\frac{w(f(s,a))}{n(f(s,a))}+C\sqrt{\frac{\log(n(s))}{n(f(s,a))}}\right)\] (1) where \(f(s,a)\) is the child node of state \(s\) after action \(a\), and \(w(\cdot)\) and \(n(\cdot)\) are the cumulative reward and the number of visits of a state.
2. **Expansion.** On a node that is not fully expanded, MCTS selects an action that has not been attempted yet.
3. **Simulation.** Given a node and the selected action, MCTS simulates and gets rewards.
4. **Back-Propagation.** MCTS passes the acquired reward to ancestor nodes to update the quality evaluation of the branch.

In each iteration, MCTS starts from the root node. When all the child nodes of the current node are visited, MCTS selects a child node with the UCB formula. When some child nodes of the current node are unvisited, MCTS expands by randomly selecting a new action and doing a simulation to reach a new child node. The new node returns a reward, which is then back-propagated to all the ancestor nodes.

## IV Method

For each pattern, we define two functions: \(\gamma\) and \(\kappa\), where \(\kappa\) is computed from two 2D positions, denoted as \(\kappa(p_{0},p_{1})\). Given that the pattern prior \(f_{prior}\) is used inside a sequential sampling process (see Section IV-D for more details), we want the distribution to capture the history of sampling. Consequently, we further categorize \(O_{R_{i}}\) based on whether the objects have been sampled in MCTS-Planner. The sets \(O^{i}_{R_{i}}\) and \(O^{n}_{R_{i}}\) denote the sampled and non-sampled objects, respectively.
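Before the pattern-specific sampling function is defined below, the generic UCB selection rule of Eq. (1) can be made concrete with a minimal sketch. The `Node` structure, the exploration constant `C`, and the toy action names are illustrative placeholders and not the paper's planner implementation.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    """Toy MCTS node: w = cumulative reward, n = visit count."""
    w: float = 0.0
    n: int = 0
    children: dict = field(default_factory=dict)  # action -> Node

def ucb_select(node: Node, C: float = 1.4):
    """Return the action maximizing w/n + C*sqrt(log(n_parent)/n_child), as in Eq. (1)."""
    assert node.children and all(c.n > 0 for c in node.children.values()), \
        "selection applies only to fully expanded nodes"
    def ucb(child: Node) -> float:
        return child.w / child.n + C * math.sqrt(math.log(node.n) / child.n)
    return max(node.children, key=lambda a: ucb(node.children[a]))

def backpropagate(path, reward):
    """Add the simulated reward to every node on the root-to-leaf path."""
    for node in path:
        node.n += 1
        node.w += reward

# Tiny usage example with two hypothetical actions.
root = Node(n=10, children={"move_obj_1": Node(w=3.0, n=5),
                            "move_obj_2": Node(w=1.0, n=5)})
print(ucb_select(root))  # -> "move_obj_1"
```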
A predefined sampling function \(f\) for \(o_{i}\) takes three parameters into account: 1) \(N=|O_{R_{i}}|\) represents the total number of objects forming this pattern 2) \(K=|O^{*}_{R_{i}}|\) indicates the number of objects already sampled 3) \(P^{*}_{R_{i}}\) refers to the poses of the sampled objects \(O^{i}_{R_{i}}\). The sampling function \(f\) has three distinct cases based on \(K\): 1. When \(K=0\), \((x_{i},y_{i},\theta_{i})\sim U\), suggesting that the first object can be placed arbitrarily. 2. For \(K=1\), \((x_{i},y_{i},\theta_{i})\sim U\) and \(\sqrt{(x_{i}-x_{0})^{2}+(y_{i}-y_{0})^{2}}\geq\delta\), imposing that the second object must be distanced from the first by at least \(\delta\). 3. When \(K\geq 2\), \((x_{i},y_{i})=\gamma\big{(}\frac{K}{N},\kappa(p_{0},p_{1})\big{)}+\varepsilon\) where \(\varepsilon\sim G(0,\sigma)\), here \(G\) represents an Gaussian distribution of variance \(\sigma\). \(\theta=atan2(1,\gamma^{\prime}(\frac{K}{N}))\), we use the angle of gradient represented as the rotation angle of the object. In our current implementation, we have defined patterns such as "line," "circle," "rectangle," "tower," "spatial:left," "spatial:right," and so on. Due to space constraints, we refrain from elaborating on the definitions of \(\gamma\) and \(\kappa\) for all these predefined patterns. However, to enhance clarity for the readers, we offer illustrative figures of the 'lines' pattern in Fig. 4 to shed light on the process of the prior generation. Noticeably, we divide patterns into 'ordered' and 'unordered' based on if the pattern requires an execution sequence. ### _Monte-Carlo Tree Search (MCTS) for Task Planning_ We propose a task planner based on the Monte Carlo Tree Search (MCTS) algorithm to move objects to desired collision-free poses. The distribution list \(D\) indicates the preference distributions of the goal poses. We define pose \(p_{i}\) as a desired pose of the object \(o_{i}\) if its probability \(f_{d_{i}}(p_{i})\) is higher than a threshold \(\varepsilon\). Our MCTS-Planner seeks an action sequence by maintaining a search tree. In MCTS-Planner, each state in the tree represents an arrangement of \(O\): \(\{p_{1},p_{2},...,p_{N}\}\), and a list of remaining pose distributions \(D\). The MCTS action set \(S_{A}\) of a state defines a finite number of branches that we expand the state with, which is computed as shown in Algorithm 1. Each MCTS action is labeled as \((d_{i},j)\), representing the \(j^{th}\) attempt to make progress in sampling \(d_{i}\). \(k\) is the number of attempts we try for each \(d_{i}\). Note that in ordered patterns, we will not consider sampling an object \(o_{i}\) into \(d_{i}\) if there is an object \(o_{j}\) in the constraints of \(d_{i}\) still away from the goal pose (Line 3). For example, in Fig. 3, there is a pattern "A is on the right and behind B". \(A\) will not be sampled to its goal distribution if \(B\) is still away from its goal. ``` Input :\(s\): An MCTS state, \((d_{i},j)\): The action for this simulation. Output :\((o,p)\): a rearrangement action. 
1:  \(o_{i}\leftarrow\) the sampling object in \(d_{i}\);
2:  if \(o_{i}\) not graspable then
3:      \(o\leftarrow\) randomly choose a graspable object on top of \(o_{i}\);
4:      \(p\leftarrow\) uniformSampling(\(o,s\));
5:      if \(p\) then return (\(o\), \(p\));
6:      else return None;
7:  else
8:      \(p\leftarrow\) sampling(\(o_{i},d_{i},s\));
9:      if \(f_{d_{i}}(p)\geq\varepsilon\) then return (\(o_{i}\), \(p\));
10:     else
11:         \(o\leftarrow\) an obstacle in \(d_{i}\);
12:         \(p\leftarrow\) uniformSampling(\(o,s\));
13:         if \(p\) then return (\(o\), \(p\));
14:         else return None;
```

**Algorithm 2** Simulation

The simulation stage of MCTS-Planner is presented in Algorithm 2. The algorithm consumes the MCTS state \(s\) and the attempted action \((d_{i},j)\). If the corresponding object \(o_{i}\) is not graspable, we randomly choose a graspable obstacle on top and uniformly sample a free space to place it (Lines 2-6). If \(o_{i}\) is graspable, we try to sample it in \(d_{i}\) (Line 8). If the sampled position is not preferred, i.e. \(f_{d_{i}}(p)<\varepsilon\), we find an obstacle \(o\) in \(d_{i}\) and uniformly sample a free space to place it (Lines 10-14). Similar to the formulation in [15], in LGMCTS, the reward of a node \(s\) is the number of finished samplers. That is, \(|root.D|-|s.D|\). While MCTS is an anytime search algorithm, in our implementation, MCTS-Planner returns the first found solution. Finally, we prove that MCTS-Planner is probabilistically complete under our setting.

**Proposition 4.1**: MCTS-Planner is probabilistically complete. For a semantic rearrangement task and the distribution list \(D\), assume that there is a feasible action sequence \(A^{*}\) moving objects to a final arrangement \(A_{f}\), such that \(f_{d_{i}}(A_{f}[o_{i}])\geq\varepsilon\ \forall o_{i}\in O_{R}\). Denote \(p\) as the probability that MCTS can find an action sequence satisfying the goal state criteria. We prove that as the number of samples \(k\) increases, \(p\) approaches 1. First, we prove that there is an action sequence \(A^{*}_{0}\) whose actions can all be generated by Algorithm 2. Note that in Algorithm 2, an MCTS action \((d_{i},j)\) satisfies two rules: **R1**: If \(o_{i}\) is not graspable, we move away obstacles of \(o_{i}\) (Lines 2-6); **R2**: If \(o_{i}\) is graspable, we move \(o_{i}\) into \(d_{i}\) or remove some obstacle in \(d_{i}\) for \(o_{i}\) (Lines 8-14). We construct \(A^{*}_{0}\) by reordering and deleting actions in \(A^{*}\) as follows: 1) If an action satisfies **R1** and **R2**, we add the action to \(A^{*}_{0}\). Otherwise, we delay the addition until it satisfies the rules; 2) If the action still cannot satisfy the requirements before we examine the next action in \(A^{*}\) for the same object, we delete the action. In this way, all the actions in \(A^{*}_{0}\) can be generated by Algorithm 2, and the final arrangement is also \(A_{f}\).
Let the rearrangement action \((o,p)\) that \(A^{*}_{0}\) chooses at state \(s\) be \((A^{*}_{0}[s].o,A^{*}_{0}[s].p)\). Let \(r=\frac{\textit{min}(C_{1},C_{2})}{|A^{*}_{0}|}\), where \(C_{1}\) is the minimum distance between objects and their nearest obstacles in \(A^{*}_{0}\), \(C_{2}\) is the minimum distance between each pose \(A_{f}[o_{i}]\) and their nearest position \(p\), s.t. \(d_{i}(p)<\epsilon\). As \(k\) increases, the probability that \(A^{*}_{0}[s].o\) is moved to the \(r-\)neighborhood of \(A^{*}_{0}[s].p\) at state \(s\) approaches 1. Then, given a tolerance of pose offset \(r\), the probability that all the intermediate states of \(A^{*}_{0}\) are in the MCTS tree approaches 1. When \(A_{f}\) is in the MCTS tree, the MCTS-Planner can find a solution after enough iterations. ## V Experiments In this part, we present a comprehensive evaluation of LGMCTS on: 1) Its capability to produce collision-free and semantically correct goal poses 2) The advantages of concurrently addressing pose generation and action planning 3) The stability of the LLM parse module, and 4) LGMCTS's performance in actual robotic systems. ### _Baselines_ We compare our approach with the following baselines and LGMCTS variants: **StructFormer [2]**: StructFormer is a multi-modal transformer architecture specifically designed for language-guided rearrangement tasks. **StructDiffusion [5]**: Recognized as the state-of-the-art, StructDiffusion employs a diffusion model combined with a learning-based collision checker for pattern pose generation. **Pose+MCTS**: The Pose+MCTS (PMCTS) approach assumes that a collision-free and semantically aligned goal pose is provided. However, direct execution of this pose might be hindered if the target space is already occupied. To address this, we utilize MCTS to search for a viable plan to place objects in their predetermined goal poses. **LGMCTS-T**: This is a variation of LGMCTS that uses ground-truth data for object and pattern language selection. Given that \(O_{R}\) and \(L_{i}\) are directly provided, LGMCTS-T functions as LGMCTS without the LLM parser. **LGMCTS-L**: This represents the full LGMCTS system. It employs the LLM to interpret input from natural language and subsequently produce an action plan. ### _Datasets_ **StructFormer Dataset [2]**: We use the test set from the StructFormer dataset to evaluate the goal pose generation ability. This dataset is composed of approximately \(11,500\) rearrangement tasks, categorized into four patterns: line, circle, tower, and dinner. \begin{table} \begin{tabular}{c c c c c} \hline & Line (4295) & Circle (3416) & Tower (1335) & Dimer (2440) \\ \hline StructFormer & 47.24\% & 62.64\% & 99.10\% & 28.36\% \\ StructDiffusion & 61.49\% & 81.41\% & 89.95\% & 69.38\% \\ **LGMCTS-T (ours)** & **95.99\%** & **95.25\%** & **100\%** & **100\%** \\ \hline \end{tabular} \end{table} TABLE II: Efficacy of StructFormer, StructDiffusion, and LGMCTS across diverse rearrangement tasks (task counts indicated) from the StructFormer dataset Fig. 5: Results with a real URS robot. The language instructions for the five scenes are: (a). “Move all blocks into a circle; while put the white bottle in front of one block;” (b). 
“Put all boxes into a rectangle; and move the white bottle to the left of one box;” (c) “Move bottles into a line; and formulate all phones into another line;” (d) “Formulate all yellow objects into a line;” (e) “Set all phones into a line;”, “Dotted lines imply a shape pattern and red arrows indicate a spatial pattern (left, right, front, back). These real robot experiments show that LGMCTS can parse complex language instructions and also deal with infeasible start configurations as well as pattern composition. Fig. 6: Compared to StructFormer and StructDiffusion, LGMCTS ensures a collision-free goal arrangement in a qualitative comparison. A rearrangement plan is regarded as successful if and only if it meets the language constraints while containing no collision. It is worth noting we do not apply collision checking for the 'tower' task, for collision is not avoidable in that specific task. We solve the 'dinner' pattern by treating it as a pattern composition. It involves rearranging objects such as plates, bowls, forks, spoons, cups, and knives. Within this pattern, the plate and bowl are arranged to form a 'tower,' while the remaining objects are positioned adjacent to this tower forming a 'line'. **LGR-Benchmark** Existing datasets for semantic object rearrangement tasks, like StructFormer, have limitations. They typically support only a single pattern per scene and lack crowded scenarios. Moreover, they often overlook the feasibility challenge, especially scenarios like infeasible starting configurations where one object might be placed under another from the outset. Addressing these gaps, we introduce LGR-Bench (**L**anguage-**G**uided **R**earrangement **Bench**mark). This new benchmark presents a novel task termed the "multi-pattern task", which requires multiple pattern goals to be satisfied during the rearrangement process. We also incorporate scenarios with infeasible starting configurations, where objects may initially be stacked. In each scene, we randomly select two patterns from "line", "circle", and "spatial". This LGR-Benchmark is modified from VIMA-Benchmark [3]. ### _Semantic Pattern Pose Generation_ In this assessment, we draw comparisons between StructFormer, StructDiffusion, and LGMCTS-T. We decided against using LGMCTS-L because, in the StructFormer Dataset, object selection demands an insight into the object's shape and size, which is not the focus of LGMCTS. To ensure an equitable comparison, we provide ground-truth object selections to both StructFormer and StructDiffusion. In this context, LGMCTS showcased exemplary performance across all four rearrangement task categories. As evidenced in TABLE II, LGMCTS posted outstanding success rates: 95.99% for the 'line' pattern, 95.25% for 'circle,' and a perfect 100% for both the 'tower' and 'dinner' patterns. On the other hand, although StructDiffusion improved upon StructFormer's results, it did not rival the success of our approach. A unique characteristic of StructDiffusion is its employment of a collision checker that filters out samples with collisions, providing some clarity to its enhanced performance over StructFormer. For a more tangible comparison, Fig. 6 displays a scene from the circle task, illuminating the distinctions between LGMCTS and the existing benchmarks. 
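As a rough illustration of the success criterion used in this evaluation (the language constraint must hold and the arrangement must be collision-free), the sketch below checks circular-footprint overlap and a simple "line" constraint via the residual to a best-fit line. The radii, tolerance, and example poses are arbitrary assumptions for illustration, not the benchmark's actual thresholds.

```python
import numpy as np

def collision_free(xy: np.ndarray, radii: np.ndarray) -> bool:
    """True if no two circular object footprints overlap."""
    n = len(xy)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(xy[i] - xy[j]) < radii[i] + radii[j]:
                return False
    return True

def satisfies_line(xy: np.ndarray, tol: float = 0.01) -> bool:
    """Crude 'line' pattern check: max orthogonal distance to the best-fit line < tol."""
    centered = xy - xy.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # principal direction
    residual = centered @ vt[1]                              # orthogonal component
    return np.max(np.abs(residual)) < tol

def plan_successful(xy, radii, pattern_check) -> bool:
    """Assumed success rule: language constraint holds and there is no collision."""
    xy = np.asarray(xy, dtype=float)
    radii = np.asarray(radii, dtype=float)
    return pattern_check(xy) and collision_free(xy, radii)

# Hypothetical 3-object arrangement that should pass both checks.
poses = [[0.0, 0.0], [0.10, 0.001], [0.20, -0.001]]
print(plan_successful(poses, [0.02, 0.02, 0.02], satisfies_line))  # True
```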
### _Benefit of Joint Modeling_ Contrary to previous methodologies like StructFormer and StructDiffusion, which viewed pose generation and action planning as separate challenges, LGMCTS concurrently addresses both pattern generation and action planning. We posit that this integrated approach will render our goal poses more executable than other 'correct' alternatives. To substantiate this idea, we juxtapose the performance of LGMCTS-T and PMCTS using the LGR-Benchmark, providing PMCTS with ground-truth goal poses. The comparative data is presented in TABLE III. Beyond assessing the success rates of planning and execution, we also evaluate the overall success rate and the number of actions suggested by each planning mechanism. Our findings highlight the superiority of LGMCTS over the two-step solutions, even when they are provided with a semantically accurate and collision-free goal pose. ### _Language Parse Stability_ We provide an evaluation of our whole pipeline, i.e. LGMCTS-L on LGR-Benchmark. Apart from the success rate, we also present the accuracy of LLM instruction parse. The result is shown in TABLE IV. From the result, we can find the LLM parsing module of LGMTS has high stability. While the performance of LGMCTS-L lags behind that of LGMCTS-T, it still surpasses that of PMCTS. ### _Physical Robot Experiment_ We qualitatively evaluated our system using a UR5e robot outfitted with a D455 depth camera as shown in Fig. 1. We employed the Recognize-Anything-Model (RAM) [18, 19] and an HSV-based color detector to detect object semantics and colors. Selected queries and their corresponding execution outcomes can be viewed in Fig. 5. These real-world robot experiments highlight the capabilities of LGMCTS in intricate real-world settings. ## VI Conclusion We introduced LGMCTS, a new framework for tabletop, semantic object rearrangement tasks. LGMCTS stands out by accepting free-form natural language input, accommodating multiple pattern requirements, and jointly solving goal pose generation and action planning. However, its main drawback is the extended execution time for complex scenes. Improving its Monte-Carlo tree search efficiency is a key research direction. Currently tailored for tabletop pick-place setups, future work should explore LGMCTS's adaptability to more complex rearrangement contexts. \begin{table} \begin{tabular}{l c c c c} \hline \hline & \(SR_{p}\) & \(SR_{e}\) & \(SR_{a}\) & \(Acc_{LLM}\) \\ \hline LGMCTS-L & 90.9\% & 83.1\% & 79.2\% & 89.3\% \\ \hline \hline \end{tabular} \end{table} TABLE IV: Performance of LGMTC-L on LGR-Benchmark. For \(Acc_{LLM}\), we compare the object selection and pattern selection, if they are the same, we return true. \begin{table} \begin{tabular}{c c c c c} \hline \hline & \(SR_{p}\) & \(SR_{e}\) & \(SR_{a}\) & Steps \\ \hline PMCTS & 82.9\% & 86.2\% & 74.1\% & 6.15 \\ **LGMCTS-T (ours)** & **97.3\%** & **93.4\%** & **92.8\%** & **5.99** \\ \hline \hline \end{tabular} \end{table} TABLE III: \(SR_{p}\) represents the success rate of planning. Both PMCTS and LGMTCS-T were set with an identical planning step limit, specifically, 10,000 steps. \(SR_{e}\) denotes the success rate of execution, assessed post-execution of the plan to determine if the outcome aligns with the language-derived constraints. \(SR_{e}\) is the overall success rate. The term ‘steps’ here refers to the average number of steps in the plan returned by each planner. Smaller Steps mean the action can executed faster.
2309.12828
Multiple Satellites Collaboration for Joint Code-aided CFOs and CPOs Estimation
Low Earth Orbit (LEO) satellites are being extensively researched in the development of secure Internet of Remote Things (IoRT). In scenarios with miniaturized terminals, the limited transmission power and long transmission distance often lead to low Signal-to-Noise Ratio (SNR) at the satellite receiver, which degrades communication performance. A solution to address this issue is the utilization of cooperative satellites, which can combine signals received from multiple satellites, thereby significantly improve SNR. However, in order to maximize the combination gain, the signal coherent combining is necessary, which requires the carrier frequency and phase of each receiving signal to be aligned. Under low SNR circumstances, carrier parameter estimation can be a significant challenge, especially for short burst transmission with no training sequence. In order to tackle it, we propose an iterative code-aided estimation algorithm for joint Carrier Frequency Offset (CFO) and Carrier Phase Offset (CPO). The Cram\'er-Rao Lower Bound (CRLB) is suggested as the limit on the parameter estimation performance. Simulation results demonstrate that the proposed algorithm can approach Bit Error Rate (BER) performance bound within 0.4 dB with regards to four-satellite collaboration.
Pingyue Yue, Yixuan Li, Yue Li, Rui Zhang, Shuai Wang, Jianping An
2023-09-22T12:28:30Z
http://arxiv.org/abs/2309.12828v2
# Code-aided CFOs and CPOs Estimation in Cooperative Satellite Communication ###### Abstract Low Earth Orbit (LEO) satellites have gained significant research attention in the development of secure Internet of Remote Things (IoRT). In scenarios where miniaturized terminals are involved, the limited transmission power and long transmission distance often result in a low Signal-to-Noise Ratio (SNR) at the satellite receiver, leading to degraded communication performance. To mitigate this issue, the use of cooperative satellites has been proposed, which can combine signals received from multiple satellites, thereby improving the SNR considerably. However, achieving the maximum combination gain requires synchronization of carrier frequency and phase for each receiving signal, which poses a challenge under low SNR conditions, particularly in short burst transmissions without training sequences. To address this challenge, we propose an iterative code-aided estimation algorithm for joint estimation of Carrier Frequency Offset (CFO) and Carrier Phase Offset (CPO). The algorithm incorporates a two-step estimation procedure that utilizes Iterative Cross Entropy (ICE) and Cooperative Expectation Maximization (CEM). The ICE method is employed initially to perform a coarse search for parameter estimation, followed by the CEM technique which refines the estimates. The performance limit of parameter estimation is evaluated using the Cramer-Rao Lower Bound (CRLB). Simulation results indicate that the proposed algorithm achieves estimation accuracy close to the CRLB within the frequency range of (\(-7.8125\times 10^{-3}\), \(+7.8125\times 10^{-3}\)) and phase range of (\(-\pi\), \(+\pi\)). Furthermore, the algorithm demonstrates the ability to approach the Bit Error Rate (BER) performance bounds, with deviations of \(0.3\) dB and \(0.4\) dB in scenarios involving two-satellite and four-satellite collaboration, respectively. Cooperative satellite communication, low signal to noise ratio, short burst transmission, signal coherent combining, carrier frequency offset, carrier phase offset. ## I Introduction Internet of Remote Things (IoRT) is a network of small objects which are often dispersed over wide geographical areas even inaccessible. In IoRT, satellite communication can provide a cost-effective solution to their interconnection and communication in comparison to terrestrial networks [1, 2, 3]. However, due to the limited link budgets caused by the constrained transmission power and long distance between the satellite and terminal, the satellite receiver has to operate at low Signal-to-Noise Ratio (SNR). The reception and processing of low SNR signals has traditionally been a challenging task in signal processing. One attractive approach to addressing this challenge is the use of cooperative diversity techniques. In recent years, Cooperative Satellite Communication (CSC) has garnered significant attention due to its potential to enhance communication reliability and efficiency in satellite networks [4]. This technique is particularly useful for remote or energy-limited terminals, such as those employed in IoRT. With the rapid expansion of Low Earth Orbit (LEO) constellations, it has become feasible to implement collaboration between multiple satellites. Consequently, multi-satellite signal combining can be an effective strategy to achieve efficient spectral resource utilization and improve communication performance in low SNR environment. 
In the context of IoRT systems utilizing CSC, uplink cooperation can facilitate the collaborative reception of signals from a single terminal by multiple satellites. This enables effective signal combination through the integration of received signals. However, due to factors such as varying distances between satellites and terminals, antenna directivity, relative velocities, and crystal oscillator drift on board the satellites, it is possible that the signal amplitude, propagation delay, Carrier Frequency Offset (CFO), and Carrier Phase Offset (CPO) may vary significantly among satellites. If these parameters are not aligned, diversity reception performance may deteriorate, leading to infeasible demodulation. To address this issue, accurate estimation of parameter differences among received signals and optimal combining weights is crucial for successful signal combination. The current studies on the estimation of CFO and CPO have primarily focused on individual reception scenarios. These estimation methods can be broadly categorized into two approaches: Data-Aided (DA) and Non-Data-Aided (NDA). DA parameter estimation requires the use of a known pilot symbol sequence as a training sequence, which may limit its applicability and reduce the effective data rate and spectral efficiency of the system [5]. In contrast, NDA estimation directly estimates the parameters based on signal characteristics without the need for additional known data. However, NDA methods can introduce demodulation noise that degrades synchronization accuracy, especially in low SNR scenarios [6]. Recent studies have focused on Code-Aided (CA) carrier synchronization algorithms that utilize the decoding results from the decoder to estimate carrier parameters [7, 8]. This innovative approach incorporates coding gain into the carrier synchronization process, resulting in improved performance, particularly in low SNR scenarios, without the need for pilot symbols. The CA technique has demonstrated effectiveness, especially in short-burst communication systems [9], making it a promising approach for IoRT applications. However, the CA synchronization method has its own limitations. Firstly, it is constrained by significant limitations on the estimation range of both CFO and CPO. As explained in Section III of this paper, to achieve a decoding performance with a Bit Error Rate (BER) below 0.5, the Normalized Frequency Offset (NFO) (which represents the CFO normalized to the symbol rate) must be smaller than \(10^{-4}\), while the CPO must fall within the range of \([-0.2\pi,0.2\pi]\). Consequently, the applicability of the CA method is inherently restricted in practical scenarios. In an effort to overcome this challenge, a two-dimensional exhaustive search within a narrow CFO range, with the CPO in \((-\pi,\pi]\), has been proposed in the literature [10], coupled with an interpolation algorithm to increase estimation accuracy during the search process. However, due to noise introduced during the single-step search and interpolation processes, as well as the high dependence of algorithm performance on predefined thresholds and initial search point selection, there may exist performance degradation at low SNR levels. Additionally, an algorithm presented in [11] strives to extend the CFO estimation range. It leverages a coarse search and a fine search with code assistance to achieve a refined estimation of the CFO.
However, this Gaussian process remains sensitive to the CFO as it is derived from decoding results. Secondly, the CA method exhibits reduced decoding reliability at low SNR levels, leading to a higher BER in the recovery of the modulation information from the transmitted data. This weakens estimation accuracy and highlights the sensitivity to CFO and CPO and the reliability of its decoding output. Despite the coding gain benefits provided by the CA synchronization method, its estimation accuracy is limited by residual frequency bias and random phase bias. with regards to CSC, existing research primarily focuses on mitigating jamming [12], capacity analysis [13, 14, 15], and capacity optimization [16]. However, these studies often assume that synchronization between satellites and terminals has already been achieved, which is not always the case in practical implementations. It is worth noting that there is a lack of comprehensive investigation into the estimation of CFO and CPO in the context of CSC. While previous work on CFO and CPO estimation in individual reception has shown improved performance using the CA method without a training sequence, this method is not suitable for scenarios with lower SNR levels and phase offsets within the range of \((-\pi,+\pi]\) in CSC. To overcome these limitations, we propose an iterative CA estimation algorithm comprised of two components: Iterative Cross Entropy (ICE) and Cooperative Expectation Maximization (CEM). ICE performs a coarse search for CFOs and CPOs, while CEM is responsible for fine estimation. Our proposed algorithm enables accurate estimation and compensation of CFOs and CPOs, as well as coherent combination of multiple satellites, even under challenging conditions with lower SNR levels and phase offsets within the range of \((-\pi,+\pi]\) in CSC scenarios. In summary, the contributions of this paper can be summarized as follows * We present a comprehensive estimation framework for CSC that allows for iterative estimation of CFOs and CPOs without the need for training sequences. It allows for precise estimation of CFOs and CPOs within large residual CFOs and random CPOs in the range of \((-\pi,+\pi]\). Furthermore, we leverage the combined decoding results during each iteration to enhance the estimation accuracy under lower SNR conditions for each satellite. * We propose an iterative estimation algorithm based on cross entropy, enabling parallel joint estimation of CFOs and CPOs while accounting for the large residual CFOs and random CPOs within the range of \((-\pi,+\pi]\). During the coarse estimation process, we quantize the potential CFO range and utilize the corrected CFOs to compensate the received signals. Subsequently, we demodulate and square the obtained results to estimate the CPOs. By iteratively combining the signals after CFO and CPO correction in a quasi-coherent manner, using the combined SNR loss as the objective function, we achieve joint estimation of CFOs and CPOs for multiple satellites with only a few iterations. * We propose a Maximum Likelihood Estimation (MLE) algorithm based on Expectation Maximization (EM) iteration to accurately estimate CFOs and CPOs under low SNR conditions. In each iteration, we compensate the received signal at each satellite using the estimated CFO and CPO to achieve coherent combination. The resulting combined decoding results are then utilized to assist in the following estimation of CFOs and CPOs. 
Through several iterations, we obtain accurate estimations of CFOs and CPOs for each satellite, as well as the coherent combined decoding results. The remainder of this paper is organized as follows. Section II outlines the system model for uplink CSC. Section III discusses the problem formulation for joint CFOs and CPOs estimation in CSC. In section IV, the iterative estimation algorithm is proposed, which consists of the coarse estimation based on iterative cross entropy and fine estimation based on cooperative expectation maximization. Simulation results and performance analysis are then shown in Section V. Ultimately, Section VI draws the conclusions. ## II System Model Direct Sequence Spread Spectrum (DSSS) is widely employed in secure satellite communication systems as it provides inherent resistance against interception, eavesdropping, and interference. Among the various modulation schemes, the Binary Phase Shift Keying-Direct Sequence Spread Spectrum (BPSK-DSSS) modulation scheme has garnered considerable attention owing to its superior performance in noisy environments. To further enhance the reliability of short-burst satellite communication systems operating in noisy channels, Polar codes can be employed. Polar codes belong to a class of error-correcting codes that can approach the Shannon capacity limit even with relatively short code lengths, while maintaining low complexity for both encoding and decoding processes [17, 18]. These codes are particularly suitable for high-reliability communications, especially in scenarios involving short-burst-frame transmission [19]. The transceiver system considered in this paper is depicted in Fig.1. At the transmitter, the short frame messages to be transmitted are in the form of bit sequence denoted as \(\boldsymbol{u}=(u_{0},u_{1},\cdots,u_{N-1})^{\mathrm{T}}\), where \(N\) is the number of the information bits. The information sequence is encoded using a binary code with rate \(R\), which generates the coded sequence \(\boldsymbol{x}=(x_{0},x_{1},\cdots,x_{K-1})^{\mathrm{T}}\). Wherein \(K=N/R\) and \(R<1\). Then, the encoded frame is spread as \(\boldsymbol{d}\) with the periodical spreading sequence \(c(\cdot)\), in which the time period is denoted as \(T_{c}\). Based on this, the spread sequence \(\boldsymbol{d}\) is modulated into a complex symbol frame \(\boldsymbol{q}=(q_{0},q_{1},\cdots,q_{KL-1})^{\mathrm{T}}\), where \(L\) is the length of the periodical spreading sequence and the value of symbols \(\boldsymbol{q}\) depends on the modulation constellation map. \(q_{k}=\mathrm{e}^{j\pi k}\in\{1,-1\}\) represents the BPSK-modulated symbol. The modulated symbol undergoes frequency up-conversion and D/A conversion. The signal is then transmitted through the wireless channel and is subsequently corrupted by Additive White Gaussian Noise (AWGN). There are a total of \(M\) satellites in the cooperation network, all of which are capable of receiving the signal transmitted from the same terminal. The wireless channel is assumed to be a line of sight channel. As a result, the received signal by a particular satellite (denoted as \(m\), \(m=1,2,\cdots,M\)) can be written as: \[\begin{split} r_{m}(t)&=\sum_{k=0}^{K-1}\sum_{l=0}^{ L-1}A_{m}x_{k}c\left(t-lT_{\mathrm{c}}-kT_{\mathrm{s}}\right)\\ &\quad\times\mathrm{e}^{j(2\pi(f_{0}+f_{m})t+\phi_{m})}+n_{m}(t),\end{split} \tag{1}\] where \(T_{\mathrm{s}}\) is the symbol time duration, and the relation between \(T_{\mathrm{c}}\) and \(T_{\mathrm{s}}\) is \(T_{\mathrm{s}}=L\cdot T_{\mathrm{c}}\). 
\(f_{0}\), \(f_{m}\) and \(\phi_{m}\) represent the signal carrier frequency, CFO and CPO, respectively. The CFO is determined primarily by the Doppler shift, while the range of CPO varies in the interval \((-\pi,+\pi]\) due to the differences in start-up time and the drift of crystal oscillators among the collaborating satellites. The term \(n_{m}(t)\) represents the complex additive Gaussian white noise at satellite \(m\), with a zero mean and a variance of \(\frac{N_{m}}{2}\) for both the real and imaginary components. It is assumed that the noise at each satellite is independent and uncorrelated, with equal variance. The variable \(A_{m}\) denotes the received signal power from the \(m\)-th satellite. Throughout this paper, it is assumed that the received signal powers are equal for all satellites. Considering that the noise powers are also identical at each satellite, the SNR of the received signal is assumed to be uniform across all satellites.

Fig. 1: System architecture of uplink CSC.

At the receiver, the first step in signal processing is acquisition. Typically, a low symbol rate \(R_{\mathrm{s}}\) is employed during acquisition to ensure accurate signal detection while satisfying the SNR requirements. Once successful signal acquisition has been accomplished for each satellite and a preliminary estimation of the carrier frequency has been obtained, the received signal at chip level after frequency compensation becomes: \[\begin{split} r_{m}(t)&=\sum_{k=0}^{K-1}\sum_{l=0}^{L-1}A_{m}x_{k}c\left(t-lT_{\mathrm{c}}-kT_{\mathrm{s}}\right)\\ &\quad\times\mathrm{e}^{j(2\pi\Delta f_{m}t+\phi_{m})}+n_{m}(t).\end{split} \tag{2}\] To strike a balance between acquisition probability and resource consumption, the residual CFO after compensation based on the acquisition operation is \(\Delta f_{m}\in\left(-\frac{R_{\mathrm{s}}}{2I},+\frac{R_{\mathrm{s}}}{2I}\right]\), where \(I\) represents the number of FFT points utilized in the algorithm mentioned in [20]. Consequently, the remaining NFO, obtained by multiplying the CFO with the symbol period \(T_{\text{s}}\), is reduced to the range of \(\left(-\frac{1}{2I},+\frac{1}{2I}\right]\). The primary objective of this paper is to evaluate the performance of CFO and CPO estimation. To focus on these specific parameters, it is assumed that perfect timing recovery during despreading has been achieved and the signal amplitude has been normalized. Therefore, \(r_{m}(t)\) is treated as a BPSK-modulated signal. After sampling, the \(k\)-th baseband complex symbol can be written as: \[r_{m,k}=s_{k}\mathrm{e}^{j(2\pi k\Delta f_{m}T_{\text{s}}+\phi_{m})}+n_{m}(k), \tag{3}\] where \(s_{k}\) is the equivalent BPSK transmitted \(k\)-th symbol after despreading. The \(k\)-th received signal symbol vector \(\mathbf{r}_{k}\) of all the \(M\) satellites can be described as \[\mathbf{r}_{k}=[r_{1,k},r_{2,k},\cdots,r_{M,k}]^{\mathrm{T}}. \tag{4}\] Thus, the received symbol vector of all the \(M\) satellites is: \[\mathbf{r}=[\mathbf{r}_{0}^{\mathrm{T}},\mathbf{r}_{1}^{\mathrm{T}},\cdots,\mathbf{r}_{K-1}^{\mathrm{T}}]^{\mathrm{T}}. \tag{5}\]

## III Problem Formulation

Accurately estimating the CFOs and CPOs of all the \(M\) satellites is essential for achieving coherent combination in CSC. These parameters can be denoted as: \[\mathbf{\theta} =[\mathrm{CFO}_{1},\cdots,\mathrm{CFO}_{m},\cdots,\mathrm{CFO}_{M},\phi_{1},\phi_{2},\cdots,\phi_{M}]^{\mathrm{T}}\] \[=[\mathbf{f},\mathbf{\phi}]^{\mathrm{T}}.
\tag{6}\] MLE is a well-established and effective technique used to solve parameter estimation problems. This method involves computing the probability density function of the parameter being estimated. In the context of CSC, where the received signals from each satellite are encoded and modulated using identical information bit sequences, and the noise in each satellite is both independent and unrelated, the conditional probability density function of the symbol can be expressed as (7). By removing the constant component, which is irrelevant to the parameter estimation, (7) can be simplified to: \[p\left(\mathbf{r}_{k}\mid\mathbf{\theta},s_{k}\right)=\exp\left\{\sum_{m=1}^{M}\frac{ 2\Re\left[s_{k}^{*}\mathrm{e}^{-j(2\pi k\Delta f_{m}T_{\text{s}}+\phi_{m})}r_ {m,k}\right]}{N_{m}}\right\}. \tag{8}\] The data symbol \(s_{k}\) is an independent and identically distributed discrete random variable, and its probability function can be represented as \(p_{s}(s_{k})\). By applying the law of total probability and (8), the probability density function of the \(k\)-th symbol vector \(\mathbf{r}_{k}\) can be derived as: \[p\left(\mathbf{r}_{k}\mid\mathbf{\theta}\right) =\mathbb{E}_{s_{k}}\left[p\left(\mathbf{r}_{k}\mid\mathbf{\theta},s_{k} \right)\right]\] \[=\sum_{s_{k}\in\mathfrak{M}}p_{k}(\mathfrak{m})p\left(\mathbf{r}_{k} \mid\mathbf{\theta},s_{k}\right). \tag{9}\] \(\mathfrak{M}\) represents the modulation order. In terms of BPSK-modulated signal, \(\mathfrak{M}=2\) and it has \[p_{k}(\mathfrak{m})=\frac{1}{2},\mathfrak{m}=\left\{0,1\right\}. \tag{10}\] Combining (8) to (10), the probability density function of the \(k\)-th data vector \(\mathbf{r}_{k}\) is expressed as: \[p\left(\mathbf{r}_{k}\mid\mathbf{\theta}\right)=\cosh\left\{\sum_{m=1}^{M}\frac{2\Re \left[\mathrm{e}^{-j(2\pi k\Delta f_{m}T_{\text{s}}+\phi_{m})}r_{m,k}\right]} {N_{m}}\right\}. \tag{11}\] Due to the independence of \(s_{k}\), the probability density function of the data vector of all the \(M\) satellites is written as: \[p(\mathbf{r}\mid\mathbf{\theta}) =\prod_{k=0}^{K-1}p\left(r_{k}\mid\mathbf{\theta}\right)\] \[=\prod_{k=0}^{K-1}\left\{\cosh\left\{\sum_{m=1}^{M}\frac{2\Re \left[\mathrm{e}^{-j(2\pi k\Delta f_{m}T_{\text{s}}+\phi_{m})}r_{m,k}\right]} {N_{m}}\right\}\right\}. \tag{12}\] Accordingly, the MLE of \(\mathbf{\theta}=[\mathbf{f},\mathbf{\phi}]^{\mathrm{T}}\) can be expressed as: \[\hat{\mathbf{\theta}}=\arg\max_{\mathbf{\theta}\in\mathbf{\mathcal{F}}}[\ln p(\mathbf{r}\mid \mathbf{\theta})], \tag{13}\] where \(\mathcal{F}\) is the range of the parameter \(\mathbf{\theta}\). It is observed that obtaining the MLE value from the log likelihood function is a challenging and nonlinear optimization problem, primarily due to non-linearity, a large search space, and the presence of multiple local minima. These factors make direct solutions difficult. In response, researchers have explored iterative approaches to address this issue, such as Sumple [21, 22], and Simple [21]. The SIMPLE algorithm achieves array coherence by employing a simple pair-wise correlation of antenna signals, while the SUMPLE algorithm performs a summation operation on multiple antenna signals prior to correlation. Both algorithms assume perfect compensation of carrier frequency offset and aim to estimate phase deviations through iterative cross-correlation of multi-channel received signals, maximizing combined SNR. 
Continuous signal updating is required during iteration, making these algorithms more suitable for continuous communication scenarios rather than short-burst communication. On the other hand, the CA method offers comparable estimation accuracy to DA methods, making it more desirable for limited data transmission scenarios. Nonetheless, it is susceptible to residual CFO and CPO, potentially leading to encoding and decoding errors as well as communication failure. Overall, for the considered communication system, there currently is not a suitable solving method available. ## IV Proposed CFO and CPO Estimation Algorithm To ensure signal coherent combination in a CSC system under low SNR conditions, we propose an iterative CA estimation algorithm for joint CFOs and CPOs estimation. The algorithm consists of two parts: Iterative estimation based on Cross Entropy (ICE) and Cooperative Expectation Maximization (CEM). The ICE is responsible for coarse search of CFOs and CPOs, while the CEM is responsible for CFOs and CPOs fine estimation. The overall structure of these two parts is depicted in Fig.2. The underlying principles of these two parts will be elaborated upon in the following subsections. ### _Iterative Cross-Entropy based Coarse CFOs and CPOs Estimation_ The initialization step of our proposed algorithm involves the utilization of a cross-entropy-based iteration method, which is commonly employed in parameter estimation tasks. This method measures the disparity between the true probability distribution and the predicted probability distribution of a given model, making it an efficient parallel search algorithm. It offers several advantages, including simplicity, ease of implementation, fewer parameters, and expandability, making it suitable for our purposes. In the context of CFO and CPO estimation, our algorithm aims to mitigate the uncertainties associated with their ranges. The CFO range falls within the interval \(\left(-\frac{1}{2I},+\frac{1}{2I}\right]\), while the CPO range falls within \(\left(-\pi,\pi\right]\). By applying the cross-entropy method, we can effectively conduct a parallel search to estimate these parameters. Our proposed ICE algorithm is depicted in Fig.3. To handle CFO estimation, we employ quantization with \(D\) bits, which allows us to compensate the received signal at each satellite using the quantized frequency offset. Subsequently, demodulation is performed to calculate the phase offset. However, the square demodulation of a BPSK-modulated signal introduces a phase ambiguity, which we resolve by introducing an additional bit. As a result, the cross-entropy iteration process involves the participation of the received signal at each satellite using \(D+1\) bits. During the iteration process, the objective function that drives the iteration and defines the convergence condition is the combined SNR loss. We select the optimal CFOs and CPOs when the objective function reaches its minimum. Accurate SNR estimation is crucial for optimization; hence, we incorporate a Polar code-aided estimation algorithm to enhance the precision of SNR estimation. By employing this approach, we can successfully complete the joint search for CFOs and CPOs of multiple satellites within a few iterations. #### Iii-A1 Objective Function Let \(\mathbf{b}_{1\times M(D+1)}=[\mathbf{b}_{1},\mathbf{b}_{2},\cdots,\mathbf{b}_{m},\mathbf{b}_{M}]\) represent the quantized binary set of CFOs and CPOs of the received signal at all \(M\) satellites. 
Here \(\mathbf{b}_{m}=[b_{m,1},b_{m,2},\cdots,b_{m,D},b_{m,\phi}]_{1\times(D+1)}\), with \(b_{m,d}\in\{0,1\}\) for \(d\in[1,D]\), represents the \(D+1\) quantized bits of the \(m\)-th satellite. Besides, \(b_{m,\phi}\in\{0,1\}\) represents the quantized value of the CPO of the received signal at the \(m\)-th satellite. For simplicity, let \(l=1,2,\cdots,M(D+1)\) represent the index of the quantized bit. After combining the signals, the \(k\)-th symbol is recorded as \(r_{\mathrm{com},k}\), which can be expressed in detail as: \[r_{\mathrm{com},k}=\sum_{m=1}^{M}r_{m,k}\mathrm{e}^{-j[2\pi f_{m,d}kT_{\mathrm{s}}+(2b_{m,\phi}-1)\pi+\phi_{m,d}]}, \tag{14}\] where \(f_{m,d}\) is the CFO estimation of the received signal at the \(m\)-th satellite. In this case, \(f_{m,d}\) is written as: \[f_{m,d}=\left(\sum\limits_{d=1}^{D}2^{b_{m,d}}-2^{D}\right)\frac{R_{\text{s}}}{I\cdot 2^{D}}. \tag{15}\] Additionally, \(\phi_{m,d}\) in (14) is the compensated result of the received signal at the \(m\)-th satellite, which is expressed by: \[\phi_{m,d}=\frac{1}{2}\arg\left(\sum\limits_{k=0}^{K-1}r_{m,k}^{2}\mathrm{e}^{-j2\pi\cdot 2f_{m,d}kT_{\mathrm{s}}}\right). \tag{16}\]

Fig. 2: The schematic diagram of the estimator.

In order to eliminate the phase ambiguity and achieve the coherent combination, each satellite introduces \(1\) bit to indicate selection between \(\phi_{m,d}\) and \(\phi_{m,d}+\pi\). Denoting the SNR of each satellite as \(\mathrm{SNR}_{\mathrm{Single}}\), the theoretical SNR after coherent combination is given by \(\mathrm{SNR}_{\mathrm{Single}}+10\log_{10}(M)\). In this context, the measure of combined SNR loss can be expressed as a function involving the estimated combined SNR \(\hat{\gamma}\): \[\mathrm{SNRLoss}=\mathrm{SNR}_{\mathrm{Single}}+10\log_{10}(M)-\hat{\gamma}. \tag{17}\] The calculation of the estimated combined SNR of a BPSK-modulated signal using the CA method can be expressed as: \[\hat{\gamma}=\frac{\left(K-\frac{3}{2}\right)\left(\sum\limits_{k=0}^{K-1}\Re\left\{r_{\mathrm{com},k}\zeta_{k}^{*}\right\}\right)^{2}}{K\left[K\left(\sum\limits_{k=0}^{K-1}|r_{\mathrm{com},k}|^{2}\right)-\left(\sum\limits_{k=0}^{K-1}\Re\left\{r_{\mathrm{com},k}\zeta_{k}^{*}\right\}\right)^{2}\right]}, \tag{18}\] where \(\zeta_{k}\) is the posterior expectation of the \(k\)-th combined signal, and the symbol \(*\) denotes the conjugation operation. The objective function aims to minimize the SNR loss by finding the optimal binary set \(\mathbf{b}\), namely: \[\hat{\mathbf{b}}=\arg\min_{\mathbf{b}\in\mathcal{B}}(\mathrm{SNRLoss}<T_{\mathrm{H}}), \tag{19}\] Here, the notation \(T_{\mathrm{H}}\) represents the threshold value. Additionally, \(\mathcal{B}\) denotes the complete set of quantized CFOs and CPOs of all \(M\) satellites.

#### Iii-A2 Algorithm Design

The parameter \(\zeta_{k}\) in (18) can be calculated by the posterior Log-Likelihood Ratio (LLR) from the output of the decoder. The posterior probability of the encoded data \(x_{k}\) is expressed as: \[L_{\mathrm{pos}}(x_{k})=\ln\frac{\text{Pr}\left\{x_{k}=0|r_{\mathrm{com},k}\right\}}{\text{Pr}\left\{x_{k}=1|r_{\mathrm{com},k}\right\}}. \tag{20}\] Without loss of generality, for a BPSK-modulated signal, there is: \[\text{Pr}\left\{s_{k}=\mathrm{e}^{j\cdot 0\cdot\pi}|r_{\mathrm{com},k}\right\}=\text{Pr}\left\{x_{k}=0|r_{\mathrm{com},k}\right\}, \tag{21}\] \[\text{Pr}\left\{x_{k}=0|r_{\mathrm{com},k}\right\}+\text{Pr}\left\{x_{k}=1|r_{\mathrm{com},k}\right\}=1.
\tag{22}\] By combining equations (20)-(22), we can derive the conditional probability of \(s_{k}=\mathrm{e}^{j\cdot 0\cdot\pi}\) given \(r_{\mathrm{com},k}\) as: \[\text{Pr}\left\{s_{k}=\mathrm{e}^{j\cdot 0\cdot\pi}|r_{\mathrm{com},k}\right\}=\frac{\mathrm{e}^{L_{\mathrm{pos}}(x_{k})}}{\mathrm{e}^{L_{\mathrm{pos}}(x_{k})}+1}. \tag{23}\] Similarly, the conditional probability of \(s_{k}=\mathrm{e}^{j\cdot 1\cdot\pi}\) given \(r_{\mathrm{com},k}\) can be expressed as: \[\text{Pr}\left\{s_{k}=\mathrm{e}^{j\cdot 1\cdot\pi}|r_{\mathrm{com},k}\right\}=\frac{1}{\mathrm{e}^{L_{\mathrm{pos}}(x_{k})}+1}. \tag{24}\]

Fig. 3: The schematic diagram of the ICE algorithm.

Based on this, by combining (23) and (24), the expectation \(\zeta_{k}\) can be described as: \[\zeta_{k}= \mathrm{e}^{j\cdot 0\cdot\pi}\cdot\text{Pr}\left\{s_{k}=\mathrm{e}^{j\cdot 0\cdot\pi}|r_{\text{com},k}\right\}\] \[+\mathrm{e}^{j\cdot 1\cdot\pi}\cdot\text{Pr}\left\{s_{k}=\mathrm{e}^{j\cdot 1\cdot\pi}|r_{\text{com},k}\right\}\] \[= 1\cdot\frac{\mathrm{e}^{L_{\text{pos}}(x_{k})}}{\mathrm{e}^{L_{\text{pos}}(x_{k})}+1}+(-1)\cdot\frac{1}{\mathrm{e}^{L_{\text{pos}}(x_{k})}+1}\] \[= \frac{\mathrm{e}^{L_{\text{pos}}(x_{k})}-1}{\mathrm{e}^{L_{\text{pos}}(x_{k})}+1}\] \[= \tanh\left(\frac{L_{\text{pos}}(x_{k})}{2}\right), \tag{25}\] where \(\tanh\) represents the hyperbolic tangent function, namely, \(\tanh(x)=\frac{\mathrm{e}^{x}-\mathrm{e}^{-x}}{\mathrm{e}^{x}+\mathrm{e}^{-x}}\). The computation of the hyperbolic tangent function involves a relatively high complexity. To mitigate this, a common approach is to approximate it using a linear piecewise function. This approximation can be mathematically represented as: \[\zeta_{k}=\begin{cases}1,&L_{\text{pos}}(x_{k})>T_{h}\\ \alpha L_{\text{pos}}(x_{k}),&-T_{h}<L_{\text{pos}}(x_{k})\leq T_{h}\\ -1,&L_{\text{pos}}(x_{k})\leq-T_{h},\end{cases} \tag{26}\] where \(L_{\text{pos}}(x_{k})\) indicates the posterior LLR of the \(k\)-th symbol \(r_{\text{com},k}\), \(\alpha\) represents the linear factor, and \(T_{h}\) denotes the segmentation point. To achieve a more accurate approximation of the hyperbolic tangent function, the value of \(\alpha\) is set to \(1/3\) and \(T_{h}\) is set to \(3\), as suggested in the literature [23]. In Fig. 3, the probability vector for the \(i\)-th iteration process is defined as \(\mathbf{p}_{1\times M(D+1)}^{i}=[\mathbf{p}_{1}^{i},\cdots,\mathbf{p}_{m}^{i},\cdots,\mathbf{p}_{M}^{i}]\). Here \(\mathbf{p}_{m}^{i}\) represents the probabilities of the \(D+1\) quantized CFO and CPO bits of the \(m\)-th satellite. To simplify the notation, we denote \(p_{l}^{i}\) as an element in \(\mathbf{p}^{i}\), where \(l\in\{1,\cdots,M(D+1)\}\). Subsequently, the probability vector \(\mathbf{p}_{1\times M(D+1)}^{i}\) is utilized to randomly generate a set of candidate observation matrices \(\{\mathbf{B}\}_{n_{c}=1}^{N_{\text{c}}}\). Here \(N_{\text{c}}\) denotes the number of candidate vectors. These \(N_{\text{c}}\) groups of candidate vectors are then used to combine the signals of the multiple satellites using equations (14)-(16), and the objective function is calculated to obtain the combined SNR loss vector SNRLossVec\({}_{1\times N_{\text{c}}}\). Based on this, the combined SNR loss vector is arranged in ascending order as \(\eta_{\text{seq},1}^{i}\leq\eta_{\text{seq},2}^{i}\leq\cdots\leq\eta_{\text{seq},N_{\text{c}}}^{i}\).
The \(N_{\text{e}}\) vectors with the smallest combined SNR loss are then selected and recorded as \(\{\mathbf{b}_{d_{e,n_{e}}}\}_{n_{e}=1}^{N_{\text{e}}}\), where \(d_{e,n_{e}}\) represents the index of the selected vectors. Subsequently, the probability generation vector is updated based on these indices: \[\mathbf{p}^{i+1}=(1-\bar{w})\mathbf{p}^{i}+\frac{\bar{w}}{N_{\text{e}}}(\mathbf{b}_{d_{e,1}}+\mathbf{b}_{d_{e,2}}+\cdots+\mathbf{b}_{d_{e,N_{\text{e}}}}). \tag{27}\] By introducing a smoothing parameter \(\bar{w}\), the iterative process is carried out for a total of \(N_{\text{iter}}\) iterations to obtain the optimal solution \(\mathbf{b}^{*}\). Based on this, the estimated matrix \(\mathbf{\hat{\Phi}}_{D}=[\mathbf{\hat{f}}_{D},\mathbf{\hat{\phi}}_{D}]^{T}\) can be obtained. Here \(\mathbf{\hat{f}}_{D}=[\hat{f}_{1,d},\hat{f}_{2,d},\cdots,\hat{f}_{M,d}]\) and \(\mathbf{\hat{\phi}}_{D}=[\hat{\phi}_{1,d},\hat{\phi}_{2,d},\cdots,\hat{\phi}_{M,d}]\) represent the estimated CFO and CPO parameters for each satellite. The algorithmic procedure described above is summarized in Algorithm 1.

```
Input: \(N_{\text{c}}\), \(N_{\text{e}}\), \(N_{\text{iter}}\), received signal of all \(M\) satellites \(\mathbf{r}=[\mathbf{r}_{1},\mathbf{r}_{2},\cdots,\mathbf{r}_{M}]^{\text{T}}\)
Output: \(\mathbf{\hat{b}}\), \(\mathbf{\hat{\Phi}}_{D}\)
Initialization: Set the initial probability for the CFOs and CPOs quantized bits of each satellite as \(\mathbf{p}^{i}=0.5\cdot\mathbf{1}_{1\times M(D+1)}\), where \(i=0\);
while \(i\leq N_{\text{iter}}-1\) do
  (1) Generate observation matrices \(\{\mathbf{B}\}_{n=1}^{N_{\text{c}}}\) to obtain the quantized CFOs and CPOs of the received signal symbols at each satellite according to (15) and (16);
  (2) Calculate the combined SNR loss, denoted as SNRLossVec\({}_{1\times N_{\text{c}}}\), and mark its elements as \(\eta_{1}^{i},\eta_{2}^{i},\ldots,\eta_{N_{\text{c}}}^{i}\);
  (3) Sort the elements in ascending order and update them as \(\eta_{\text{seq},1}^{i}\leq\eta_{\text{seq},2}^{i}\leq\cdots\leq\eta_{\text{seq},N_{\text{c}}}^{i}\);
  (4) Select the top \(N_{\text{e}}\) with the smallest combined SNR loss and calculate the probability of the next iteration by \(\mathbf{p}^{i+1}=(1-\bar{w})\mathbf{p}^{i}+\frac{\bar{w}}{N_{\text{e}}}(\mathbf{b}_{d_{e,1}}+\mathbf{b}_{d_{e,2}}+\ldots+\mathbf{b}_{d_{e,N_{\text{e}}}})\);
  (5) \(i=i+1\);
end while
return: \(\mathbf{\hat{b}}\), \(\mathbf{\hat{\Phi}}_{D}\)
```

**Algorithm 1** ICE algorithm

#### Iii-A3 Performance Simulation

According to the aforementioned algorithm design, the selection of parameters such as the frequency offset quantization bit width \(D\), the number of candidate vectors \(N_{\text{c}}\), the number of elite vectors \(N_{\text{e}}\), and the number of iterations \(N_{\text{iter}}\) is an open question that needs to be determined. In order to analyze the impact of these parameters on the algorithm performance, we conduct simulations using the ICE algorithm for joint estimation of CFOs and CPOs. The choice of quantization bit width for the frequency offset requires finding a balance between system performance and complexity. To illustrate the impact of CFO and CPO on the decoding performance of Polar code \(\mathcal{C}\)(1024,512) using the Belief Propagation (BP) decoding algorithm [24], we perform simulations and the obtained results are shown in Fig. 4.
Fig. 4(a) demonstrates that even with a small frequency offset value (\(\mathrm{NFO}=1\times 10^{-4}\)), there is already a noticeable degradation of more than \(1\) dB in the decoding performance (at BER = \(10^{-4}\)) compared to the ideal case. As the NFO further increases, the decoding performance deteriorates gradually. When \(\mathrm{NFO}=2\times 10^{-4}\) and \(E_{\text{b}}/N_{0}=1.5\) dB, the BER approaches \(0.5\), indicating a significant degradation in the decoding performance and rendering the decoder ineffective. In Fig. 4(b), the impact of CPO on the BER is illustrated. It can be observed that even small changes in the phase offset within the range of \([-0.2\pi,+0.2\pi]\) can have a considerable impact on the BER. Therefore, given the NFO value of \(1\times 10^{-4}\), the quantization bit width \(D\) can be determined by: \[D\geq\left\lfloor\log_{2}\frac{1}{\mathrm{NFO}\cdot I}\right\rfloor,D\in\mathbb{N}^{+} \tag{28}\] When \(R_{\text{s}}=1000\) sps and \(I=64\), the range of NFO for each satellite is within the interval (\(-7.8125\times 10^{-3}\), \(+7.8125\times 10^{-3}\)]. In the simulation presented in Fig. 5, we consider a random combination of \(N_{\mathrm{c}}=120\) with a candidate group size of \(N_{\mathrm{e}}=24\). We investigate the relationship between the combined SNR of multiple satellites and the SNR of a single satellite under different frequency offset quantization bit conditions. As shown in Fig. 5, when the frequency offset quantization bit width is set to \(5\), the combined SNR loss for CSC with \(4\) and \(6\) satellites in collaboration is approximately \(4.2\) dB and \(4\) dB, respectively. However, when the quantization bit width is increased to \(7\), the combined SNR loss for these two cases is reduced to \(0.5\) dB and \(0.4\) dB, respectively. It can be observed that increasing the quantization bit width improves the combined SNR, bringing it closer to its theoretical value. However, a higher quantization bit width also results in increased computational complexity. To strike a balance between combining performance and computational complexity, we choose a frequency offset quantization bit width of \(6\) bits for the subsequent simulations in this paper. This choice provides a reasonable compromise between achieving satisfactory combining performance and managing computational complexity effectively. Then, the relationship between the combined SNR loss and the number of iterations is simulated. Simulation parameters are set as \(D=6\), \(E_{\mathrm{s}}/N_{0}=-3\) dB, the number of satellites \(M=4\), and the range of CFO of the received signal at each satellite (\(-7.8125\times 10^{-3}\), \(+7.8125\times 10^{-3}\)]. As can be seen from Fig. 6, when \(N_{\mathrm{e}}=4\), it takes \(4\) iterations to converge, and the combined SNR loss is about \(1.3\) dB. When \(N_{\mathrm{e}}=32\), it takes \(8\) iterations to converge, and the combined SNR loss is only \(0.4\) dB. This is because when the number of optimal combinations is small, although the convergence rate is faster, it is possible to obtain only a locally optimal solution. With the increase of \(N_{\mathrm{e}}\), the global optimal solution can be obtained and a small combined SNR loss can be achieved. However, the number of iterations also increases, resulting in an increase in computation. To select the appropriate number of optimal combinations, it is necessary to ensure that the combined SNR loss converges to the global optimal solution under the condition of not increasing too much computation.
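To make the interplay between \(N_{\text{c}}\), \(N_{\text{e}}\), and the probability update of Algorithm 1 concrete, the following NumPy sketch runs a few cross-entropy iterations over the quantized CFO/CPO bits. The combined-SNR-loss objective is deliberately stubbed out with a placeholder, since the real objective requires the compensated signals and the Polar decoder's soft outputs (Eqs. (14)-(18)); the parameter values mirror the simulation settings above.

```python
import numpy as np

rng = np.random.default_rng(0)

M, D = 4, 6                 # satellites, CFO quantization bits
N_BITS = M * (D + 1)        # D CFO bits + 1 phase-ambiguity bit per satellite
N_C, N_E, W = 120, 24, 0.8  # candidates, elites, smoothing parameter (assumed value)

def snr_loss(bits: np.ndarray) -> float:
    """Placeholder objective. The actual ICE objective combines the compensated
    signals (Eq. 14) and measures the combined SNR loss (Eqs. 17-18)."""
    return float(np.sum(bits))  # stand-in; smaller is "better" here

def ice_iteration(p: np.ndarray) -> np.ndarray:
    """One cross-entropy update of the bit-probability vector, following Eq. (27)."""
    candidates = (rng.random((N_C, N_BITS)) < p).astype(float)   # sample N_c bit vectors
    losses = np.array([snr_loss(b) for b in candidates])
    elites = candidates[np.argsort(losses)[:N_E]]                # keep the N_e best vectors
    return (1.0 - W) * p + W * elites.mean(axis=0)               # smoothed probability update

p = np.full(N_BITS, 0.5)     # initialization, as in Algorithm 1
for _ in range(8):           # N_iter iterations
    p = ice_iteration(p)
print(np.round(p[: D + 1], 2))  # bit probabilities for the first satellite
```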
Given \(D=6\), \(E_{\mathrm{s}}/N_{0}=-3\) dB, Fig. 7 shows the relationship between the number of random combinations and the combined SNR loss. It can be seen from Fig. 7 that the more candidate combinations are generated in each iteration, the larger the range of CFOs and CPOs explored per iteration, and the more likely the iteration is to converge to the optimal value. However, when the number of random combinations reaches \(120\), the converged optimal value tends to stabilize, and further increasing the number of random combinations only increases the complexity. Additionally, Fig. 7 also depicts the relationship between the number of random combinations and the number of optimal combinations. When the number of optimal combinations is small, it is easy to converge to a local optimum. When the number of optimal signal combinations is \(1/5\) of the number of random signal combinations, it is more likely to obtain the optimal solution.

Fig. 4: The impact of CFO and CPO on the decoding performance of Polar code \(\mathcal{C}\)(1024,512).

Fig. 5: The relationship between combined SNR and single satellite SNR for varying frequency offset quantization bit widths in the context of multiple satellite systems.

### _Cooperative Expectation Maximization based Fine CFOs and CPOs Estimation_

#### Iv-B1 Objective Function

In the initial coarse estimation stage, the uncertain range of CFO is reduced from \(\left(-\frac{1}{2I},+\frac{1}{2I}\right]\) to \(\left(-\frac{1}{2^{D+1}I},+\frac{1}{2^{D+1}I}\right]\). However, due to the quantization of frequency offsets, accurate estimation becomes impossible within the existing quantization interval. To address this limitation and improve estimation accuracy, we propose a Cooperative Expectation Maximization (CEM) method. It leverages the principles of the EM algorithm, which is widely used for estimating parameters in statistical models involving latent variables. The EM algorithm is particularly suitable for solving the MLE problem when direct maximization of the likelihood function is challenging [25]. In the fine estimation stage, the initial uncertain range of CFOs and CPOs is based on the residual set obtained from compensating the received signals using the estimation results from the ICE algorithm. This set is denoted as \(\boldsymbol{\theta}_{\text{res}}\). It represents the remaining CFOs and CPOs that need to be accurately estimated in the fine estimation stage, which can be written as:
\[\boldsymbol{\theta}_{\text{res}}=[f_{\text{res},1},\phi_{\text{res},1};\cdots;f_{\text{res},m},\phi_{\text{res},m};\cdots;f_{\text{res},M},\phi_{\text{res},M}]=[\boldsymbol{\Phi}_{1};\boldsymbol{\Phi}_{2};\cdots;\boldsymbol{\Phi}_{m};\cdots;\boldsymbol{\Phi}_{M}], \tag{29}\]
where \(\boldsymbol{\Phi}_{m}=[f_{\text{res},m},\phi_{\text{res},m}]\) represents the residual CFO and CPO to be estimated for the \(m\)-th satellite. Define the \(k\)-th compensated received symbol of the \(m\)-th satellite as \(r_{m,k}\). Given the AWGN channel, the joint density function of \(r_{m,k}\) is expressed as (30). Following the same approach as (9), the influence of the transmitted data \(s_{k}\) can be removed from the joint density function in (30). The resulting conditional probability density function is solely dependent on the parameters \(\boldsymbol{\Phi}_{m}\).
Using this simplified representation, (30) can be updated as follows:
\[p\left(r_{m,k}\mid\boldsymbol{\Phi}_{m}\right)=\mathbb{E}_{s}\left[p_{k}(\mathfrak{m})p\left(r_{m,k}\mid f_{\text{res},m},\phi_{\text{res},m},s_{k}\right)\right]=\sum\limits_{s_{k}\in\mathfrak{M}}p_{k}(\mathfrak{m})p\left(r_{m,k}\mid f_{\text{res},m},\phi_{\text{res},m},s_{k}\right), \tag{31}\]
where \(\mathfrak{m}\) is the order of the modulation. Let \(\boldsymbol{r}_{m}\) represent all \(K\) received symbols of the \(m\)-th satellite. Due to the independence of \(r_{m,k}\), \(\forall k=0,1,\cdots,K-1\), the conditional probability density function of \(\boldsymbol{r}_{m}\) is written as \(p(\boldsymbol{r}_{m}\mid\boldsymbol{\Phi}_{m})=\prod\limits_{k=0}^{K-1}p(r_{m,k}\mid\boldsymbol{\Phi}_{m})\). By removing the constant terms that do not affect the parameter estimation, the logarithm of \(p(\boldsymbol{r}_{m}\mid\boldsymbol{\Phi}_{m})\) can be given by
\[\ln p\left(\boldsymbol{r}_{m}\mid\boldsymbol{\Phi}_{m}\right)=\sum\limits_{k=0}^{K-1}\ln\bigg\{\sum\limits_{s_{k}\in\mathfrak{M}}p_{k}(\mathfrak{m})\,\mathrm{e}^{\mathfrak{R}\left[s_{k}^{*}\mathrm{e}^{-j\left(2\pi kf_{\text{res},m}T_{s}+\phi_{\text{res},m}\right)}r_{m,k}\right]}\bigg\}. \tag{32}\]
It is a well-known fact that when \(x\) is small, approaching zero, certain approximations can be made. Specifically, \(\exp(x)\approx 1+x\) and \(\ln(1+x)\approx x\) [11]. In the communication scenarios being considered, the received signal \(r_{m,k}\) exhibits low values. As a result, based on this information, (32) can be expressed in the following form:
\[\ln p\left(\mathbf{r}_{m}\mid\mathbf{\Phi}_{m}\right)=\sum\limits_{k=0}^{K-1}\ln\bigg\{\sum\limits_{s_{k}\in\mathfrak{M}}p_{k}(\mathfrak{m})\,\mathrm{e}^{\mathfrak{R}\left[s_{k}^{*}\mathrm{e}^{-j\left(2\pi kf_{\text{res},m}T_{s}+\phi_{\text{res},m}\right)}r_{m,k}\right]}\bigg\}\approx\mathfrak{R}\bigg\{\sum\limits_{k=0}^{K-1}\sum\limits_{s_{k}\in\mathfrak{M}}p_{k}(\mathfrak{m})s_{k}^{*}r_{m,k}\mathrm{e}^{-j\left(2\pi kf_{\text{res},m}T_{s}+\phi_{\text{res},m}\right)}\bigg\}\approx\mathfrak{R}\left\{\sum\limits_{k=0}^{K-1}\eta_{k}^{*}r_{m,k}\mathrm{e}^{-j\left(2\pi kf_{\text{res},m}T_{s}+\phi_{\text{res},m}\right)}\right\}, \tag{33}\]
where \(\eta_{k}=\sum\limits_{s_{k}\in\mathfrak{M}}p_{k}(\mathfrak{m})s_{k}\) represents the expectation of the transmitted data. However, the prior probability distribution of the transmitted data is generally unknown or infeasible to determine, which renders the accurate calculation of \(\eta_{k}\) difficult. Therefore, alternative approaches must be employed to estimate this parameter.

Fig. 6: The relationship between \(N_{\text{c}}\), \(N_{\text{e}}\) and the number of iterations.

Fig. 7: The relationship between \(N_{\text{c}}\) and \(N_{\text{e}}\).

#### Iii-B2 Algorithm Design

In this paper, we utilize the EM algorithm to address the aforementioned problem. The EM algorithm follows a specific framework that involves defining the parameters of interest, the observed signals, and the hidden signals. Specifically, for each satellite indexed by \(m\), we define the parameter of interest as \(\mathbf{\Phi}_{m}=[f_{\text{res},m},\phi_{\text{res},m}]\). Additionally, we consider the observed signal \(\mathbf{r}_{m}\) and the hidden signal, which corresponds to the transmitted data denoted by \(\mathbf{s}\).
To facilitate the EM algorithm, we define the complete observed signal \(\mathbf{z}_{m}=[\mathbf{r}_{m},\mathbf{s}]\) as the combination of the observed signal \(\mathbf{r}_{m}\) and the transmitted data \(\mathbf{s}\) for the \(m\)-th satellite. This allows us to incorporate both observed and hidden information in our analysis. The EM algorithm operates through two main steps that alternate iteratively: the E-step and the M-step. These steps enable the estimation of parameters by iteratively updating their values based on the available observed and hidden information. The algorithm proceeds by iteratively performing these steps until convergence is achieved, providing estimates for the desired parameters. The steps of the EM algorithm are as follows: \[\mathrm{E-step}\] \[\mathcal{Q}(\mathbf{\Phi}_{m};\mathbf{\Phi}_{m}^{(n-1)}) =\mathbb{E}_{\mathbf{z}_{m}|\mathbf{r}_{m},\mathbf{\Phi}_{m}^{(n-1)}}[\ln p( \mathbf{z}_{m}\mid\mathbf{\Phi}_{m})]\] \[=\int_{\mathbf{s}}p\left(\mathbf{s}\mid\mathbf{r}_{m},\mathbf{\Phi}_{m}^{(n-1)} \right)\ln p(\mathbf{z}_{m}\mid\mathbf{\Phi}_{m})d\mathbf{s} \tag{34}\] \(\mathrm{M-step}\): \[\mathbf{\Phi}_{m}^{(n)}=\arg\max\limits_{\mathbf{\Phi}_{m}}\mathcal{Q}(\mathbf{\Phi}_{m}; \mathbf{\Phi}_{m}^{(n-1)}), \tag{35}\] In each iteration \(n\) of the EM algorithm, we have an E-step and an M-step. The E-step involves calculating the expectation \(\mathbb{E}_{\mathbf{z}_{m}|\mathbf{r}_{m},\mathbf{\Phi}_{m}^{(n-1)}}[\ln p(\mathbf{z}_{m}\mid \mathbf{\Phi}_{m})]\), which represents the conditional probability distribution \(p\left(\mathbf{s}\mid\mathbf{r}_{m},\mathbf{\Phi}_{m}^{(n-1)}\right)\) of the hidden data \(\mathbf{s}\) given the observed signal and the parameter of interest \(\mathbf{\Phi}_{m}\). This expectation is calculated by evaluating the logarithm of the likelihood function \(\ln p(\mathbf{z}_{m}\mid\mathbf{\Phi}_{m})\) can be given. \(\mathrm{M-step}\). After the E-step, we move to the M-step, where we aim to maximize the quantity \(\mathcal{Q}(\mathbf{\Phi}_{m};\mathbf{\Phi}_{m}^{(n-1)})\) with respect to the parameter of interest \(\mathbf{\Phi}_{m}\). This maximization step provides an estimate for the parameter \(\mathbf{\Phi}_{m}\), which then serves as the updated parameter for estimating \(\mathbf{\Phi}_{m}^{(n)}\) in the next iteration. The iterative process of performing the E-step and M-step continues until the algorithm converges, meaning that further iterations do not significantly improve the estimates. At convergence, we obtain the MLE results for the CFOs and CPOs of the system. Since the transmitted data \(\mathbf{s}\) and the parameter of interest \(\mathbf{\Phi}_{m}\) are mutually independent, the quantity \(\mathcal{Q}(\mathbf{\Phi}_{m};\mathbf{\Phi}_{m}^{(n-1)})\) can be expressed as: \[\mathcal{Q}(\mathbf{\Phi}_{m};\mathbf{\Phi}_{m}^{(n-1)}) =\int_{\mathbf{s}}p\left(\mathbf{s}\mid\mathbf{r}_{m},\mathbf{\Phi}_{m}^{(n-1)} \right)\ln p(\mathbf{z}_{m}\mid\mathbf{\Phi}_{m})d\mathbf{s}\] \[=\int_{\mathbf{s}}p\left(\mathbf{s}\mid\mathbf{r}_{m},\mathbf{\Phi}_{m}^{(n-1)} \right)\ln p(\mathbf{r}_{m},\mathbf{s}\mid\mathbf{\Phi}_{m})d\mathbf{s}\] \[=\int_{\mathbf{s}}p\left(\mathbf{s}\mid\mathbf{r}_{m},\mathbf{\Phi}_{m}^{(n-1)} \right)\ln p(\mathbf{r}_{m}\mid\mathbf{\Phi}_{m},\mathbf{s})d\mathbf{s}\] \[+\int_{\mathbf{s}}p\left(\mathbf{s}\mid\mathbf{r}_{m},\mathbf{\Phi}_{m}^{(n-1)} \right)\ln p(\mathbf{s})d\mathbf{s}. 
\tag{36}\] The right-hand side of (36) is independent of the parameter \(\mathbf{\Phi}_{m}\), hence \(\mathcal{Q}(\mathbf{\Phi}_{m};\mathbf{\Phi}_{m}^{(n-1)})\) can be simplified as: \[\mathcal{Q}(\mathbf{\Phi}_{m};\mathbf{\Phi}_{m}^{(n-1)})=\int_{\mathbf{s}}p\left(\mathbf{s} \mid\mathbf{r}_{m},\mathbf{\Phi}_{m}^{(n-1)}\right)\ln p(\mathbf{r}_{m}\mid\mathbf{\Phi}_{m}, \mathbf{s})d\mathbf{s} \tag{37}\] By substituting (30) into (37), \(\mathcal{Q}(\mathbf{\Phi}_{m};\mathbf{\Phi}_{m}^{(n-1)})\) can be updated as (38). Let \(\zeta_{k}(\mathbf{r}_{m},\mathbf{\Phi}_{m}^{(n-1)})=\sum\limits_{s_{k}\in\mathfrak{M}}s _{k}p\left(s_{k}\mid\mathbf{r}_{m},\mathbf{\Phi}_{m}^{(n-1)}\right)\) denotes the posterior expectation of the transmitted data given the received signal \(\mathbf{r}_{m}\) and the parameter estimation \(\mathbf{\Phi}_{m}^{(n-1)}\). Since each satellite in the system receives the same transmitted data, we can update the posterior expectation of the combined received signals at all \(M\) satellites as: \[\zeta_{k}(\mathbf{r}_{\mathrm{com}},\mathbf{\Phi}^{(n-1)}) =\sum_{s_{k}\in\mathfrak{M}}s_{k}p\left(s_{k}\mid\mathbf{r}_{\mathrm{com }},\mathbf{\Phi}^{(n-1)}\right)\] \[=\sum_{s_{k}\in\mathfrak{M}}s_{k}p\left(s_{k}\mid\sum_{m=1}^{M} \sum_{k=0}^{K-1}r_{m,k}\] \[\quad\times\mathrm{e}^{-j(2\pi kf_{\mathrm{res},m}^{(n-1)}T_{s}+ \phi_{\mathrm{res},m}^{(n-1)})},\mathbf{\Phi}^{(n-1)}\right) \tag{39}\] where \(f_{\mathrm{res},m}^{(n-1)}\) and \(\phi_{\mathrm{res},m}^{(n-1)}\) represents the residual CFO and CPO of the received signal at the \(m\)-th satellite in the \((n-1)\)-th iteration, respectively. Equation (39) captures the combination gain achieved through multi-satellite cooperation in the parameter estimation process, which characterizes the algorithm as Cooperative EM. It reflects how the posterior expectation of the transmitted symbols is calculated by utilizing the combined results from all satellites. By leveraging information from multiple receivers, the resulting expectation derived from this collective fusion of results exhibits improved reliability compared to individual satellite-based calculations. Therefore, \(\mathcal{Q}(\mathbf{\Phi}_{m};\mathbf{\Phi}_{m}^{(n-1)})\) can be updated as follows: \[\mathcal{Q}(\mathbf{\Phi}_{m};\mathbf{\Phi}_{m}^{(n-1)})= \Re\bigg{\{}\sum_{k=0}^{K-1}r_{m,k}\zeta_{k}^{*}(\mathbf{r}_{\mathrm{ com}},\mathbf{\Phi}^{(n-1)})\] \[\times\mathrm{e}^{-j(2\pi kf_{\mathrm{res},m}T_{s}+\phi_{\mathrm{ res},m})}\bigg{\}}. \tag{40}\] The expression \(\mathcal{Q}(\mathbf{\Phi}_{m};\mathbf{\Phi}_{m}^{(n-1)})\) shares the same mathematical form as (33). However, there exists a key distinction is that \(\eta_{k}\) is the prior expectation of the transmitted symbols, while \(\zeta_{k}(\mathbf{r}_{\mathrm{com}},\mathbf{\Phi}^{(n-1)})\) is the posterior expectation of the transmitted symbols obtained from the combined results of \(M\) satellites. The determination of this posterior expectation involves computing the posterior LLR information based on the combined decoding outcomes. The \(\mathrm{M-step}\) for finding the maximum value in an array becomes: \[\mathbf{\Phi}_{m}^{(n)}= \arg\max_{\mathbf{\Phi}_{m}}\Re\bigg{\{}\sum_{k=0}^{K-1}r_{m,k}\zeta_ {k}^{*}(\mathbf{r}_{\mathrm{com}},\mathbf{\Phi}^{(n-1)})\] \[\times\mathrm{e}^{-j(2\pi kf_{\mathrm{res},m}^{(n-1)}T_{s}+\phi_{ \mathrm{res},m}^{(n-1)})}\bigg{\}}. 
\tag{41}\] Accordingly, we can express the iterative process of the residual CFO and CPO of the received signal at the \(m\)-the satellite as: \[f_{\mathrm{res},m}^{(n)} =\arg\max_{f_{\mathrm{res},m}}\left|\sum_{k=0}^{K-1}r_{m,k}\zeta_{ k}^{*}(\mathbf{r}_{\mathrm{com}},\mathbf{\Phi}^{(n-1)})\mathrm{e}^{-j2\pi kf_{\mathrm{ res},m}T_{s}}\right|\] \[\phi_{\mathrm{res},m}^{(n)} =\arg\left\{\sum_{k=0}^{K-1}r_{m,k}\zeta_{k}^{*}(\mathbf{r}_{\mathrm{ com}},\mathbf{\Phi}^{(n-1)})\mathrm{e}^{-j2\pi kf_{\mathrm{res},m}^{(n)}T_{s}}\right\}\quad, \tag{42}\] (42) can be solved by quantizing the frequency within a specific range and searching for the maximum value to obtain \(f_{\mathrm{res},m}^{(n)}\). As aforementioned, in the ICE algorithm output, the range of CFO has been narrowed down from \(\left(-\frac{1}{2I},+\frac{1}{2I}\right]\) to \(\left(-\frac{1}{2^{D+1}I},+\frac{1}{2^{D+1}I}\right]\), where \(D\) represents the number of bits used for quantization. In this fine estimation process, each satellite utilizes the EM algorithm to perform a search for the residual frequency offset within the range \(\left(-\frac{1}{2^{D+1}I},+\frac{1}{2^{D+1}I}\right]\). In this scenario, a search step of \(f_{\mathrm{step}}\) is employed to calculate the combined results. Consequently, the residual CFO and CPO can be updated through an iterative as: \[\mathbf{r}_{\mathrm{com}}^{(n-1)}=\sum_{m=1}^{M}\sum_{k=0}^{K-1}r_{m,k}^{(n-1)} \mathrm{e}^{-j(2\pi kf_{\mathrm{res},m}^{(n-1)}T_{s}+\phi_{\mathrm{res},m}^{(n -1)})} \tag{43}\] \[f_{\mathrm{res},m}^{(n)}=\arg\max_{f_{\mathrm{res},m}}\left|\sum_{k=0}^{K-1}r_{ m,k}^{(n-1)}\zeta_{k}^{*}(\mathbf{r}_{\mathrm{com}}^{(n-1)},\mathbf{\Phi}^{(n)})\mathrm{e}^{-j2 \pi kf_{\mathrm{step}}T_{s}}\right| \tag{44}\] \[\phi_{\mathrm{res},m}^{(n)}=\arg\left\{\sum_{k=0}^{K-1}r_{m,k}\zeta_{k}^{*}( \mathbf{r}_{\mathrm{com}}^{(n-1)},\mathbf{\Phi}^{(n-1)})\mathrm{e}^{-j2\pi kf_{ \mathrm{res},m}^{(n)}T_{s}}\right\}. \tag{45}\] During each iteration of the CEM algorithm, the estimated results are utilized to compensate for the received signal of each satellite. This compensation enables coherent combination and decoding of the signals. Additionally, the posterior LLR information obtained from the decoder is employed to calculate the expected value of the transmitted data. As the number of iterations increases, the estimated values of the CFOs and CPOs gradually approach the actual values. Simultaneously, the reliability of the posterior LLR information from the decoded output after coherent combination improves. In summary, the joint utilization of the Iterative Cross Entropy (ICE) algorithm and the Cooperative Expectation Maximization (CEM) algorithm, referred to as ICE-CEM algorithm, enables accurate estimation of CFOs and CPOs within the range of \(\left(-\frac{1}{2I},+\frac{1}{2I}\right]\) and \(\left(-\pi,+\pi\right]\). This accurate estimation facilitates the completion of coherent combination among cooperative satellites. The flow of the ICE-CEM algorithm is presented in Algorithm 2. 
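The per-iteration computations in (43)-(45) reduce to a coherent combination, a soft-symbol update, and a one-dimensional grid search per satellite. The sketch below illustrates one such iteration for BPSK under simplifying assumptions: it computes the posterior soft symbols \(\zeta_{k}\) directly from the uncoded combined signal via \(\tanh(\mathrm{LLR}/2)\), whereas the algorithm above obtains them from the polar decoder's posterior LLRs, and the grid `f_grid` plays the role of the search step \(f_{\mathrm{step}}\).

```python
import numpy as np

def cem_iteration(r, f_res, phi_res, Ts, N0, f_grid):
    """One fine-estimation iteration for M satellites and K BPSK symbols each.

    r       : (M, K) despread symbols after the ICE coarse compensation
    f_res   : (M,) current residual CFO estimates
    phi_res : (M,) current residual CPO estimates
    f_grid  : candidate residual CFOs spanning (-1/(2^(D+1) I), +1/(2^(D+1) I)]
    """
    r = np.asarray(r)
    f_res = np.asarray(f_res, dtype=float)
    phi_res = np.asarray(phi_res, dtype=float)
    M, K = r.shape
    k = np.arange(K)
    # (43): compensate each satellite with its current estimates and combine coherently
    comp = r * np.exp(-1j * (2 * np.pi * np.outer(f_res, k) * Ts + phi_res[:, None]))
    r_com = comp.sum(axis=0)
    # posterior soft symbols for uncoded BPSK over AWGN (stand-in for decoder LLRs);
    # assumes unit-amplitude symbols and equal complex noise variance N0 per satellite
    llr = 4.0 * np.real(r_com) / N0
    zeta = np.tanh(llr / 2.0)
    # (44)-(45): per-satellite grid search for the residual CFO, then the CPO
    f_new, phi_new = np.zeros(M), np.zeros(M)
    for m in range(M):
        metric = [np.abs(np.sum(r[m] * np.conj(zeta) * np.exp(-1j * 2 * np.pi * k * f * Ts)))
                  for f in f_grid]
        f_new[m] = f_grid[int(np.argmax(metric))]
        phi_new[m] = np.angle(np.sum(r[m] * np.conj(zeta)
                                     * np.exp(-1j * 2 * np.pi * k * f_new[m] * Ts)))
    return f_new, phi_new, r_com
```

Iterating this update \(N_{\mathrm{iter,EM}}\) times, with the decoder in the loop, reproduces the overall flow summarized in Algorithm 2 below.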
``` Input: Received signal at all \(M\) satellites \(\mathbf{r}=[\boldsymbol{r}_{1},\boldsymbol{r}_{2},\cdots,\boldsymbol{r}_{M}]^{ \mathrm{T}}\) Output: Estimated frequency and phase offsets for the despreaded data of the \(m\)-th satellite \(\mathbf{\hat{\Phi}}=[\Delta\hat{f}_{1},\cdots,\Delta\hat{f}_{m},\Delta\hat{f}_ {M},\hat{\phi}_{1},\cdots,\hat{\phi}_{m},\hat{\phi}_{M}]\) and the decoded result after coherent combination, \(\Delta\hat{f}_{m}=\Delta\hat{f}_{m,D}+\hat{f}_{\mathrm{res},m}\); \(\hat{\phi}_{m}=\hat{\phi}_{m,D}+\hat{\phi}_{\mathrm{res},m}\), with the decoded symbols denoted as \(\hat{\boldsymbol{u}}\). Initialization: ICE algorithm parameters \(N_{\mathrm{c}}\) (initial candidate group number), \(N_{\mathrm{e}}\) (optimal group number), \(N_{\mathrm{iter}}\) (maximum number of iterations); \(N_{\mathrm{iter},\mathrm{EM}}\) (maximum number of iterations); (1) Apply the ICE algorithm to obtain the estimated results of frequency offset and phase offset for the decoded symbols \(\Delta\hat{f}_{m,D}\) and \(\phi_{m,D}\) of each satellite; Compensate the despreaded signal of each satellite using \(\Delta\hat{f}_{m,D}\) and \(\hat{\phi}_{m,D}\); for\(1<n\leq N_{\mathrm{iter},\mathrm{EM}}\)do (2)Merge the compensated results for each satellite using (43). Perform Polar code decoding and obtain the posterior LLR information. Calculate the mathematical expectation \(\zeta_{\mathrm{k}}(\boldsymbol{r}_{\mathrm{com}},\boldsymbol{\Phi}^{(n-1)})\) of the transmitted symbols using equation(26); (3) Execute (44) and (45) sequentially to obtain accurate estimations of residual frequency offset and phase offset for the decoded symbols of each satellite; (4) \(n=n+1\); end for return:\(\mathbf{\hat{\Phi}}\), the decoded data \(\hat{\boldsymbol{u}}\) ``` **Algorithm 2**ICE-CEM algorithm ## V Simulation and Performance Analysis In order to assess the performance of the proposed ICE-CEM algorithm in estimating CFO and CPO, we perform Root Mean Square Error (RMSE) simulations and compare the results with Cramer-Rao Lower Bound (CRLB), which represents the theoretical limits. Fig. 8 presents the performance of the ICE-CEM algorithm for cooperative scenarios involving 2 satellites and 4 satellites. Fig. 8(a) and Fig. 8(b) demonstrate that, for a fixed number of cooperative satellites, the estimated RMSE of CFO and CPO using the ICE-CEM algorithm closely approximate the CRLB as the SNR increases. This can be attributed to the fact that higher SNR leads to greater reliability of the posterior LLR information and more accurate calculation of \(\zeta_{k}\) using (26). Furthermore, as the number of cooperative satellites increases from 2 to 4, the required SNR for the ICE-CEM algorithm to approximate the CRLB decreases approximately 3 dB. This indicates the cooperative gain achieved by the ICE-CEM algorithm. With more cooperative satellites, the coherent combined SNR increases, resulting in improved reliability of \(\zeta_{k}\) after coherent combination and superior RMSE performance. Fig. 9 demonstrates the estimation performance of the ICE-CEM algorithm for different ranges of CFO and CPO, considering various values of \(E_{\mathrm{s}}/N_{0}\). Subfigure 9(a) illustrates the RMSE performance of NFO estimation within the range of \(-7.8125\times 10^{-3}\), + \(7.8125\times 10^{-3}\)]. As \(E_{\mathrm{s}}/N_{0}\) increases, the RMSE gradually approaches the CRLB across the entire NFO range. Similarly, subfigure 9(b) presents the RMSE performance of CPO estimation within the range of \((-\pi,+\pi]\). 
Again, as \(E_{\mathrm{s}}/N_{0}\) increases, the RMSE of the estimation results approaches the CRLB within the entire range. Additionally, Fig. 10 displays the BER simulation results for scenarios involving 2 and 4 satellites. In these simulations, the estimation of CFOs and CPOs for each satellite is completed and the results are combined to compensate and decode the received signals using the BP decoder [7]. The NFO of each satellite is randomly selected within the range of \((-7.8125\times 10^{-3},+7.8125\times 10^{-3}]\), while the CPO is randomly chosen from the range of \((-\pi,+\pi]\). It is assumed that the received SNR is identical for each satellite. It can be seen from Fig. 10 that the BER performance of the ICE-CEM algorithm incurs a loss of about \(0.3\) dB and \(0.4\) dB (@BER=\(1\times 10^{-4}\)) compared to the ideal scenario when the number of cooperative satellites is 2 and 4, respectively. In the 4-satellite scenario, where each satellite operates at a lower SNR, the increased RMSE in CFO and CPO estimation for each satellite contributes to a degraded coherent combination among the satellites. Consequently, this leads to a larger loss in the combining and decoding performance.

## VI Conclusions

This paper proposes an iterative code-aided estimation algorithm for CFOs and CPOs in the application of CSC, called ICE-CEM. The algorithm specifically aims to overcome the challenges associated with a wide range of CFOs and CPOs, poor synchronization accuracy under low SNR, and the absence of training sequences. The proposed algorithm begins with an ICE process, which quantizes the frequency offset and random phase offset of the received signal at each satellite. By employing cross-entropy iteration, it enables a parallel search of CFOs and CPOs. Subsequently, the algorithm incorporates the CEM iteration algorithm to achieve an accurate estimation of the CFO and CPO at each satellite. Simulation results utilizing the RMSE metric demonstrate the exceptional performance of the ICE-CEM algorithm in terms of both estimation range and accuracy. Specifically, the proposed algorithm achieves estimation accuracy close to the CRLB within the frequency range of \((-7.8125\times 10^{-3},+7.8125\times 10^{-3}]\) and phase range of \((-\pi,+\pi]\). Moreover, BER simulations reveal that the ICE-CEM algorithm incurs a mere loss of 0.3 dB and 0.4 dB (@BER=\(1\times 10^{-4}\)) in the 2-satellite and 4-satellite scenarios, respectively.
2305.19470
Label Embedding via Low-Coherence Matrices
Label embedding is a framework for multiclass classification problems where each label is represented by a distinct vector of some fixed dimension, and training involves matching model output to the vector representing the correct label. While label embedding has been successfully applied in extreme classification and zero-shot learning, and offers both computational and statistical advantages, its theoretical foundations remain poorly understood. This work presents an analysis of label embedding in the context of extreme multiclass classification, where the number of classes $C$ is very large. We present an excess risk bound that reveals a trade-off between computational and statistical efficiency, quantified via the coherence of the embedding matrix. We further show that under the Massart noise condition, the statistical penalty for label embedding vanishes with sufficiently low coherence. Our analysis supports an algorithm that is simple, scalable, and easily parallelizable, and experimental results demonstrate its effectiveness in large-scale applications.
Jianxin Zhang, Clayton Scott
2023-05-31T00:38:55Z
http://arxiv.org/abs/2305.19470v3
# Label Embedding by Johnson-Lindenstrauss Matrices ###### Abstract We present a simple and scalable framework for extreme multiclass classification based on Johnson-Lindenstrauss matrices (JLMs). Using the columns of a JLM to embed the labels, a \(C\)-class classification problem is transformed into a regression problem with \(\mathcal{O}(\log C)\) output dimension. We derive an excess risk bound, revealing a tradeoff between computational efficiency and prediction accuracy, and further show that under the Massart noise condition, the penalty for dimension reduction vanishes. Our approach is easily parallelizable, and experimental results demonstrate its effectiveness and scalability in large-scale applications. ## 1 Introduction Extreme classification refers to multiclass and multilabel classification problems involving thousands of classes or more, and has emerged as an essential research area in machine learning. This is due to an increasing number of real-world applications involving massive numbers of classes, such as image recognition (Zhou et al., 2014), natural language processing (Le and Mikolov, 2014; Jernite et al., 2017), and recommendation systems (Bhatia et al., 2015; Chang et al., 2019). Traditional classification methods often struggle to scale effectively in these scenarios due to the high computational cost and memory requirements associated with handling large label spaces. Consequently, there is a growing need for efficient and scalable algorithms that can tackle extreme classification problems without compromising on performance (Prabhu and Varma, 2014; Prabhu et al., 2018; Deng et al., 2018). In this paper, we introduce a simple and scalable framework for extreme multiclass classification using Johnson-Lindenstrauss random matrices. Our approach transforms the original \(C\)-class classification problem into a regression problem with \(\mathcal{O}(\log C)\) output dimension, substantially reducing computational complexity while preserving classification performance. The cornerstone of our framework is the use of the columns of a Johnson-Lindenstrauss random matrix as class-representative embedding vectors. Learning simply involves fitting a regression model to predict the embedded label of an instance, and our framework is thus compatible with any multi-output regression model, such as linear models, random forests, and neural networks. Given a test instance, the predicted label is the label of the nearest embedding vector to the output of the fitted regression model. A key contribution of this work is the derivation of an excess risk bound, offering theoretical guarantees for the performance of the proposed method. The bound reveals a tradeoff between computational efficiency and classification accuracy, wherein a logarithmic reduction in the output space dimension incurs only a small penalty in prediction accuracy. Furthermore, under the multiclass noise condition of Massart and Nedelec (2006), the penalty for dimension reduction vanishes. In additional to these performance guarantees, our approach is easily parallelizable, making it an attractive solution for large-scale applications that demand efficient processing. We validate the effectiveness and scalability of our proposed method through a series of experiments on various real-world datasets, demonstrating its potential to address the challenges posed by extreme multiclass classification tasks. 
The remainder of this paper is organized as follows: Section 2 introduces related work on extreme multiclass classification. Section 3 reviews Johnson-Lindenstrauss matrices. Section 4 presents our proposed framework in detail along with the excess risk bound. Section 5 presents experimental results and evaluations. Finally, Section 6 concludes the paper and outlines future research directions. ## 2 Related Work Existing methods for extreme multiclass classification can be grouped into four main categories: Label Hierarchy, Label Embedding, One-vs-all methods, and other methods. **Label Hierarchy.** Numerous methods such as Parabel (Prabhu et al., 2018), Bonsai (Khandagale et al., 2020), AttentionXML (You et al., 2019), lightXML (Jiang et al., 2021), XR-Transformer (Zhang et al., 2021), X-Transformer (Wei et al., 2019), XR-Linear (Yu et al., 2022), and ELIAS (Gupta et al., 2022) partition the label spaces into clusters. This is typically achieved by performing \(k\)-means clustering on the feature space. The training process involves training a cluster-level model to assign a cluster to a feature vector, followed by training a label-level model to assign labels within the cluster. As each cluster contains a relatively small number of labels, the training cost is effectively reduced. Notably, Parabel (Prabhu et al., 2018) represents labels using binary trees where each node represents a label cluster, and its children represent exclusive subsets of their parent. ELIAS (Gupta et al., 2022) allows label clusters to overlap and updates label-cluster assignments during training. However, a potential drawback of such methods that construct a label hierarchy is the often noticeable absence of robust theoretical support. **Label Embedding.** A natural approach involves representing each label as a vector in a low-dimensional space. LEML (Yu et al., 2014) leverages a low-rank assumption on linear models and effectively constrains the output space of models to a low-dimensional space. SLICE (Jain et al., 2019) is designed to train on low-dimensional dense features, with each label represented by the mean of all feature vectors associated with that label. SLEEC (Bhatia et al., 2015) proposes a local embedding framework that preserves the distance between label vectors. Guo et al. (2019) point out that low-dimensional embedding-based models could suffer from significant overfitting. Their theoretical insights inspire a novel regularization technique to alleviate overfitting in embedding-based models. WLSTS (Evron et al., 2018) proposes an extreme multiclass classification framework based on _error correcting output coding_, which embeds labels with codes induced by graphs. Hsu et al. (2009) use column vectors from a matrix with the _restricted isometry property_ (RIP), which is satisfied by all Johnson Lindenstrauss matrices, to represent labels. Their analysis is primarily tailored to multilabel classification and rooted in a compressed sensing framework. They deduce bounds for the conditional \(\ell_{2}\)-error, which measures the \(2\)-norm difference between the prediction and the label vector -- a metric that is not a standard measure of classification error. In contrast, our work analyzes the standard classification error. Embedding-based methods generally underperform compared to state-of-the-art approaches in empirical evaluations. 
**One-vs-all methods.** One-vs-all (OVA) algorithms address extreme classification problems with \(C\) labels by modeling them as \(C\) independent binary classification problems. For each label, a classifier is trained to predict its presence. DiSMEC (Babbar and Scholkopf, 2017) introduces a large-scale distributed framework to train linear OVA models, albeit at an expensive computational cost. ProXML (Babbar and Scholkopf, 2019) formulates the extreme classification problem as robust learning with adversarial perturbations to mitigate the impact of data scarcity. PD-Sparse (Yen et al., 2016) assumes both feature vectors and label vectors are sparse and designs an optimization algorithm to fully exploit the sparsity. PPD-Sparse (Yen et al., 2017) proposes a parallelized version of PD-Sparse. **Other methods.** Beyond the above categories, DeepXML (Daihya et al., 2021) proposes a framework based on a negative sampling procedure that shortlists \(O(\log C)\) relevant labels during training and prediction, where \(C\) is the total number of labels. VM (Choromanska and Langford, 2015) constructs trees with \(\mathcal{O}(\log C)\) depth that have leaves with low label entropy. Based on the standard random forest training algorithm, FastXML (Prabhu and Varma, 2014) proposes to directly optimize the Discounted Cumulative Gain to reduce the training cost. AnnexML (Tagami, 2017) constructs \(k\)-nearest neighbor graph of the label vectors and attempts to reproduce the graph structure in a lower-dimension feature space. We now mention theoretical contributions that are most related to our own. The seminal work of Allwein et al. (2001) generalizes the _error correcting output codes_ (ECOC) framework for multiclass classification, transforming the problem into multiple binary classification tasks. The authors establish a bound on the empirical multiclass loss based on the empirical loss of the individual binary learners and present a generalization error analysis when AdaBoost is employed as the binary learner. Drawing inspiration from the error bound proposed by Allwein et al. (2001), Evron et al. (2018) applies the ECOC principle to extreme multiclass classification. In their approach, labels are embedded with codes that are generated by graphs. In a different perspective, Ramaswamy et al. (2018) put forth a novel surrogate loss function for multiclass classification with an abstain option. This abstain option enables the classifier to opt-out from making predictions at a certain cost. Remarkably, their proposed methods not only demonstrate consistency but also effectively reduce the multiclass problems to \(\lceil\log C\rceil\) binary classification problems through encoding the classes with their binary representations. Our research can be interpreted as a continuous refinement of the concept of reducing multiclass problems to binary problems. We transform multiclass problems into regression problems on a \(\mathcal{O}(\log C)\) space. Crucially, our excess risk analysis unveils the intricate balance between statistical and computational efficiency. ## 3 Johnson-Lindenstrauss Matrices We first review the definition of a Johnson-Lindenstrauss matrix. **Definition 1**.: Let \(C,n\in\mathbb{N}\). 
We define the set of \(C\times n\) Johnson-Lindenstrauss matrices with parameters \(\epsilon\), \(\delta\), \(m\), denoted by JLM(\(\epsilon\), \(\delta\), \(m\)), to be the set of \(C\times n\) random matrices such that \(G\in\text{JLM}(\epsilon,\delta,m)\) if and only if, with probability at least \(1-\delta\), \(\forall\) \(m\)-element subsets \(V\subset\mathbb{R}^{n}\), \(\forall v,v^{\prime}\in V\), \(|\langle Gv,Gv^{\prime}\rangle-\langle v,v^{\prime}\rangle|\leq\epsilon\|v\|\|v^{\prime}\|\). Johnson-Lindenstrauss matrices have approximately orthonormal columns, meaning each column has approximately unit norm and every two distinct columns have inner product close to 0. In the standard approach to multiclass classification, labels can be viewed as embedded by the standard basis. Our framework, instead, embeds labels by the approximately orthonormal columns of a Johnson-Lindenstrauss matrix, where the embedding dimension is \(n\). Popular choices of Johnson-Lindenstrauss matrices include:

* Gaussian matrix: entries are sampled \(i.i.d.\) from a Gaussian distribution with 0 mean and \(\frac{1}{n}\) variance.
* Rademacher matrix: entries are sampled \(i.i.d.\) from a uniform distribution on \(\left\{\frac{1}{\sqrt{n}},-\frac{1}{\sqrt{n}}\right\}\).

The above examples are Johnson-Lindenstrauss matrices with shape \(C\times n\) and parameters \(\epsilon\), \(\delta\), \(m\) if \(n\geq\frac{c_{0}}{\epsilon^{2}}\log\frac{m}{\delta}\) for some constant \(c_{0}\) (Johnson and Lindenstrauss, 1984). In our embedding framework, the Johnson-Lindenstrauss matrix has dimensions \(C\times n\), where \(C\) is the total number of classes and \(n\) is the embedding dimension chosen by the user. The matrix is designed to embed a \(C\)-element set of vectors in \(\mathbb{R}^{C}\), in particular the standard basis, which makes \(m=C\), allowing for a reduction of the output dimension to \(\mathcal{O}(\log C)\).

## 4 Label Embedding by Johnson-Lindenstrauss Matrices

We first introduce the notation in Section 4.1. Then, we present our algorithm in Section 4.2. The excess risk bound and its interpretation are presented in Section 4.3.

### Preliminaries

Let \(\mathcal{X}\) denote the feature space and \(\mathcal{Y}=\{1,\ldots,C\}\) denote the label space, where \(C\in\mathbb{N}\). Let \((X,Y)\) be random variables in \(\mathcal{X}\times\mathcal{Y}\), and let \(P\) be the probability measure that governs \((X,Y)\). We use \(P_{\mathcal{X}}\) to denote the marginal distribution of \(P\) on \(\mathcal{X}\). Now, consider a \(C\times n\) matrix \(G\in\text{JLM}(\epsilon,\delta,C)\). The columns of \(G\) are denoted by \(g_{1},g_{2},\ldots,g_{C}\), and the column \(g_{i}\) is used to embed the \(i\)-th label. With this setup, our goal is to transform the original \(C\)-class classification problem into a regression problem with \(\mathcal{O}(\log C)\) outputs. For this, let \(\mathcal{F}=\{\text{all measurable }f:\mathcal{X}\rightarrow\mathbb{R}^{n}\}\) and let \(\eta(x)=(\eta_{1}(x),\ldots,\eta_{C}(x))\), where \(\eta_{i}(x)=P_{Y|X=x}(i)\). This flexible regression model setup enables our framework to be compatible with any model class. Let \(\beta:\mathbb{R}^{n}\rightarrow\mathcal{Y}\), \(\beta(p)=\min\!\left\{\arg\min_{i\in\mathcal{Y}}\left\|p-g_{i}\right\|_{2}\right\}\), be the decoding function, which maps a vector in the embedding space back to its corresponding label, where \(p\) is the output of a model. The corresponding label is determined by identifying the nearest embedding vector to \(p\). In the rare case where multiple nearest neighbors exist for \(p\), we resolve this ambiguity by considering the lexicographical order of the labels that they represent. In such cases, the function \(\beta\) returns the smallest label from the set of labels corresponding to the nearest embedding vectors. Define the 0-1 loss \(L_{01}:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}\) to be \(L_{01}(\hat{y},y)=\begin{cases}1,&\text{if }y\neq\hat{y}\\ 0,&o.w.\end{cases}\). The standard objective for classification is to solve \(\min_{h\in\mathcal{H}}\mathbb{E}[L_{01}(h(X),Y)]\), where \(\mathcal{H}=\{\text{measurable }h:\mathcal{X}\rightarrow\mathcal{Y}\}\) is the set of all measurable functions from \(\mathcal{X}\) to \(\mathcal{Y}\). We now claim that \(\beta\circ\mathcal{F}\) reparameterizes \(\mathcal{H}\), which stems from the following facts: (i) for any given \(f\in\mathcal{F}\), the function \(\beta\circ f\) is also measurable, (ii) for all \(h\in\mathcal{H}\), the function \(f(x)=g_{h(x)}\) ensures that \(\beta\circ f=h\), and (iii) such \(f\) are measurable because \(\forall\) measurable sets \(\mathcal{S}\), \(f^{-1}(\mathcal{S})=\bigcup_{i:g_{i}\in\mathcal{S}}h^{-1}(i)\). Thus, the problem \(\min_{f\in\mathcal{F}}\mathbb{E}_{P}[L_{01}(\beta(f(X)),Y)]\) is equivalent to the standard classification objective. To simplify notation, we introduce the loss function \(L:\mathbb{R}^{n}\times\mathcal{Y}\rightarrow\mathbb{R}\), such that \(L(p,y)=L_{01}(\beta(p),y)\). As a result, our learning objective can be equivalently written as \(\min_{f\in\mathcal{F}}\mathbb{E}_{P}[L(f(X),Y)]\). Given the impracticality of directly minimizing the 0-1 loss, we consider instead the surrogate loss function \(\ell:\mathbb{R}^{n}\times\mathcal{Y}\rightarrow\mathbb{R}\), defined as \(\ell(p,y)=\frac{1}{2}\|p-g_{y}\|_{2}^{2}\), and aim to optimize \(\mathbb{E}_{P}[\ell(f(X),Y)]\) over a specific model class \(\mathcal{F}_{0}\subset\mathcal{F}\) through empirical risk minimization. Following the introduction of our learning algorithm, we will present an analysis of the excess risk for \(L\) in terms of the excess risk associated with the surrogate loss function \(\ell\).

### Algorithm

Let the training dataset \(\{(x_{i},y_{i})\}_{i=1}^{N}\) be \(i.i.d.\) realizations of \((X,Y)\). To train a model that maps a feature vector to a vector in the embedding space, we first sample a \(C\times n\) Johnson-Lindenstrauss matrix \(G=[g_{1},g_{2},\ldots,g_{C}]\). We then replace each label \(y_{i}\) with its embedding vector \(g_{y_{i}}\) to form the regression dataset \(\{(x_{i},g_{y_{i}})\}_{i=1}^{N}\). Next, we train a regression model \(f\) on this regression dataset.
Given a new data point \(x\), to assign a label we search for the nearest embedding vector \(g_{y}\) of \(f(x)\) and annotate \(x\) with the label \(y\). The full algorithm is presented in Algorithm 1. It is crucial to emphasize that the regression step in Algorithm 1 is a regression problem with a response in \(\mathbb{R}^{n}\). This enables a novel parallel training scheme that distributes the response variables across a maximum of \(n\) machines without any need of inter-machine communication. Each machine is tasked with solving one or a small number of real-valued regression problems. We employ this parallel training scheme in the elastic net implementation of our framework in the experiments. ``` 1:Input: a model class \(\mathcal{F}_{0}\in\mathcal{F}\), the dataset \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{N}\), and embedding dimension \(n\). 2:Sample a \(C\times n\) Johnson-Lindenstrauss matrix with columns \(g_{1},g_{2},\ldots,g_{C}\). 3:Form the new regression dataset \(\mathcal{D}_{r}=\{(x_{i},g_{y_{i}})\}_{i=1}^{N}\). 4:Train a regression model \(f\) on \(\mathcal{D}_{r}\) with mean square error \(\ell\). 5:Return:\(\beta\circ f\). ``` **Algorithm 1** Label Embedding for Extreme Multiclass Classification ### Excess Risk Bound We present the excess risk bound and examine the trade-off between the reduction in dimensionality and the potential penalty in accuracy. Before delving into the excess risk analysis, we first introduce the concepts of _risk_ and _Bayes risk_. **Definition 2**.: Let \(\mathcal{L}:\mathbb{R}^{n}\times\mathcal{Y}\rightarrow\mathbb{R}\). Define the \(\mathcal{L}\)-risk of \(f\) with distribution \(P\) to be \[\mathcal{R}_{\mathcal{L},P}:\mathcal{F}\rightarrow\mathbb{R},\ \ \mathcal{R}_{ \mathcal{L},P}(f):=\mathbb{E}_{P}\left[\mathcal{L}(f(X),Y)\right]\] and the \(\mathcal{L}\)-Bayes risk to be \[\mathcal{R}_{\mathcal{L},P}^{\star}:=\inf_{f\in\mathcal{F}}\mathcal{R}_{ \mathcal{L},P}(f).\] We define \(d(x)=\max_{i}\eta_{i}(x)-\max_{i\notin\arg\max_{j}\eta_{j}(x)}\eta_{i}(x)\). \(d(x)\) is a "noise" measure at a point \(x\) and we discuss it after Theorem 3. The bound is as follows: **Theorem 3**.: _Let \(\delta\), \(\epsilon\in(0,1)\). If \(G\in\text{JLM}(\frac{\epsilon}{4}\), \(\delta\), \(C)\), then with probability at least \(1-\delta\),_ \[\mathcal{R}_{L,P}(f)-\mathcal{R}_{L,P}^{*} \leq\inf_{r>\epsilon}\biggl{\{}\epsilon P_{\mathcal{X}}(d(X)<r) \tag{1}\] \[+\sqrt{(8\epsilon+16)P_{\mathcal{X}}(d(X)<r)\Bigl{(}\mathcal{R}_ {\ell,P}(f)-\mathcal{R}_{\ell,P}^{*}\Bigr{)}}\] (2) \[+\frac{16+8\epsilon}{(r-\epsilon)^{2}}\bigl{(}\mathcal{R}_{\ell, P}(f)-\mathcal{R}_{\ell,P}^{*}\Bigr{)}\biggr{\}}. \tag{3}\] Prior to delving into the proof sketch, we first explicate the concept of _conditional risk_. The full proofs and associated lemmas are provided in the appendix. The remarks on the theorem and the quantity \(d(x)\) will come after the proof sketch. **Definition 4**.: Let \(\mathcal{L}:\mathbb{R}^{n}\times\mathcal{Y}\to\mathbb{R}\) be a loss function and \(P\) be a probability measure on \(\mathcal{X}\times\mathcal{Y}\). For a point \(x\in\mathcal{X}\), define the conditional risk at \(x\) as \[C_{\mathcal{L},x}:\mathbb{R}^{n}\to\mathbb{R},\,C_{\mathcal{L},x}(p)=\mathbb{ E}_{y\sim P_{Y|X=x}}\mathcal{L}(p,g_{y}),\] and \(C_{\mathcal{L},x}^{*}=\inf_{p\in\mathbb{R}^{n}}C_{\mathcal{L},x}(p)\). The conditional risk represents the expected value of a loss given the model output \(p\) and a feature vector \(x\). 
Note that \(\mathcal{R}_{\mathcal{L},P}(f)=\mathbb{E}_{X\sim P_{\mathcal{X}}}C_{\mathcal{L },X}(f(X))\). For brevity, we introduce the notations \(C_{1,x}(p)=C_{L,x}(p)-C_{L,x}^{*},C_{2,x}(p)=C_{\ell,x}(p)-C_{\ell,x}^{*}\). Proof outline for Theorem 3.: We begin by demonstrating in Lemma 10 that there exists a unique \(p_{x}^{*}\) such that \(C_{\ell,x}^{*}=C_{\ell,x}(p_{x}^{*})\). Here, \(p_{x}^{*}\) represents the optimal model output at a specific point \(x\). Utilizing the Johnson-Lindenstrauss property of the embedding matrix, we establish in Lemma 13 that: \[\forall x\in\mathcal{X},\forall j,k\in[C],\forall p\in\mathbb{R}^{n},\frac{ \eta_{k}(x)-\eta_{j}(x)-\epsilon}{2\sqrt{2+\epsilon}}>\left\|p_{x}^{*}-p\right\| \implies\left\|p-g_{j}\right\|_{2}>\left\|p-g_{k}\right\|_{2}.\] This implies that if a model output \(p\) is sufficiently close to \(p_{x}^{*}\) and the probability \(\eta_{k}(x)\) for label \(k\) to occur exceeds the probability \(\eta_{j}(x)\) for label \(j\) by a some margin, then \(p\) is closer to \(g_{k}\) than to \(g_{j}\). By leveraging the above property along with the convexity of \(C_{2,x}\), we demonstrate in Lemma 14 that: \[\forall x\in\mathcal{X},\forall r>\epsilon,\forall p\in\mathbb{R}^{n},C_{2,x} (p)<\frac{(r-\epsilon)^{2}}{16+8\epsilon}\implies C_{1,x}(p)<r. \tag{4}\] This means a small \(C_{2,x}(p)\) will lead to a small \(C_{1,x}(p)\) up to the noise tolerance \(\epsilon\). The final step involves expressing the excess risk as \[\mathcal{R}_{L,P}(f)-\mathcal{R}_{L,P}^{*}=\int_{\mathcal{X}}C_{1,x}(f(x))= \int_{x:d(x)<r}C_{1,x}(f(x))+\int_{x:d(x)\geq r}C_{1,x}(f(x)).\] By plugging (4) into the integral, the first integral leads to terms (1) and (2) and the second integral leads to term (3). The excess risk \(\mathcal{R}_{L,P}(f)-\mathcal{R}_{L,P}^{*}\) measures the gap between the \(L\)-risk of \(f\) and the Bayes risk. Our aim is to minimize \(\mathcal{R}_{L,P}(f)-\mathcal{R}_{L,P}^{*}\) by driving the quantity \(\mathcal{R}_{\ell,P}(f)-\mathcal{R}_{\ell,P}^{*}\) to 0 through empirical risk minimization over a sufficiently rich function space \(\mathcal{F}_{0}\). While terms (2) and (3) in Theorem 3 can be driven to 0 asymptotically through training, term (1) represents an irreducible error incurred in a "noisy" region of \(\mathcal{X}\). While \(\max_{i}\eta_{i}(x)\) represents the probability of the most likely label occurring and \(\max_{i\notin\arg\max_{j}\eta_{j}(x)}\eta_{i}(x)\) represents the probability of the second most likely label occurring, the quantity \(d(x)\), which is the difference between these probabilities, can be viewed as a measure of noisiness at a point \(x\). A large \(d(x)\) implies that \(\arg\max_{i}\eta_{i}(x)\) is unambiguously the correct prediction at \(x\). In contrast, if \(d(x)\) is small, our confidence in predicting the most likely label \(\arg\max_{i}\eta_{i}(x)\) is reduced, as the second most likely label has a probability of occurring that is only marginally smaller than \(\max_{i}\eta_{i}(x)\). A smaller \(d(x)\) highlights the increased difficulty of making accurate predictions in situations where the distinction between the most probable labels is less pronounced. The embedding framework introduces fuzziness to the problem as a trade-off for dimensionality reduction. This fuzziness is measured by the error tolerance of the embedding matrix, \(\epsilon\). 
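To make the role of \(\epsilon\) concrete, the small check below (an illustration, not part of the paper's experiments) draws a Rademacher embedding of the standard basis and reports the largest deviation of the Gram matrix of the columns from the identity, an empirical counterpart of the error tolerance; increasing \(n\) drives this deviation, and hence the irreducible term in Theorem 3, toward zero.

```python
import numpy as np

def empirical_epsilon(C, n, seed=0):
    """Max |<g_i, g_j> - delta_ij| over all label pairs for a Rademacher embedding."""
    rng = np.random.default_rng(seed)
    G = rng.choice([-1.0, 1.0], size=(n, C)) / np.sqrt(n)   # columns g_1, ..., g_C
    gram = G.T @ G                                           # C x C Gram matrix
    return np.abs(gram - np.eye(C)).max()

# a modest C keeps the C x C Gram matrix small enough to form explicitly
for n in (64, 256, 1024):
    print(n, round(empirical_epsilon(C=1000, n=n), 3))
```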
From the proof outline we can see that the model \(f^{*}\) minimizing the \(\ell\)-risk \(\mathcal{R}_{L,P}(f)\) may potentially make a suboptimal prediction at point \(x\) when \(d(x)<\epsilon\). Conversely, when \(d(x)>\epsilon\), \(f^{*}\) will always make the optimal prediction at point \(x\). Given a classification problem with \(C\) classes, a larger embedding dimension \(n\) will lead to a smaller error tolerance \(\epsilon\), making \(d(x)>\epsilon\) on a larger region in \(\mathcal{X}\) at the cost of increasing computational complexity. On the other hand, by choosing a smaller \(n\), \(d(x)<\epsilon\) on a larger region in \(\mathcal{X}\), increasing the first term in Theorem 3. This interpretation highlights the delicate balance between the benefits of dimensionality reduction and the potential impact on prediction accuracy, as a function of the embedding error tolerance, \(\epsilon\), and the noisiness measure, \(d(x)\). ### Lossless Dimensionality Reduction While Theorem 3 holds universally (for all distributions \(P\)), by considering a specific subset of distributions, we can derive a more conventional form of the excess risk bound. As a direct consequence of Theorem 3, under the multiclass extension of the Massart noise condition (Massart and Nedelec, 2006) which requires \(d(X)>c\) with probability 1 for some \(c\), our embedding framework achieves lossless logarithmic dimensionality reduction with respect to the 0-1 loss. This means that term 1 in Theorem 3 will be 0 under this condition. In this case, the difference \(\mathcal{R}_{L,P}(f)-\mathcal{R}_{L,P}^{*}\) tends to \(0\) as the excess risk \(\mathcal{R}_{\ell,P}(f)-\mathcal{R}_{\ell,P}^{*}\) also approaches \(0\). We present this result more formally with the following definition and corollary. **Definition 5** (Multiclass Massart Noise Condition).: The distribution \(P\) on \(\mathcal{X}\times\mathcal{Y}\) is said to satisfy the Multiclass Massart Noise Condition if and only if \(\exists c>0\) such that \(P_{\mathcal{X}}(d(X)>c)=1\). **Corollary 6**.: _Assume \(P\) satisfies the Multiclass Massart Noise Condition. Let \(\delta\in(0,1)\) and \(\epsilon\in(0,\operatorname{ess\,inf}d)\). If \(G\in\text{JLM}(\frac{\epsilon}{4},\,\delta,\,C)\), then with probability at least \(1-\delta\),_ \[\mathcal{R}_{L,P}(f)-\mathcal{R}_{L,P}^{*}\leq\frac{16+8\epsilon}{\left( \operatorname{ess\,inf}d-\epsilon\right)^{2}}\big{(}\mathcal{R}_{\ell,P}(f)- \mathcal{R}_{\ell,P}^{*}\big{)}\] _where \(\operatorname{ess\,inf}d\) is the essential infimum of \(d\), \(i.e.\operatorname{ess\,inf}d=\sup\{a\in\mathbb{R}:P_{\mathcal{X}}(d(X)<a)=0\}\)._ ## 5 Experiments 1 Footnote 1: Code is available at [https://github.com/Z-Jianxin/JOLLE](https://github.com/Z-Jianxin/JOLLE) In this section, we present an experimental evaluation of our proposed embedding framework, which we call JOLLE (Johnson-Lindenstrauss Label Embedding), on extreme multiclass classification problems. We have aimed to develop a general framework for extreme multiclass classification that is not specifically tailored to language datasets. Although methods like AttentionXML (You et al., 2019), lightXML (Jiang et al., 2021), XR-Transformer (Zhang et al., 2021), X-Transformer (Wei et al., 2019), and ELIAS (Gupta et al., 2022) have been designed for language datasets and leverage language models like BERT, our primary focus lies beyond achieving the highest performance on such datasets. 
Instead, we compare our method with more general approaches, including Parabel (Prabhu et al., 2018), PD-Sparse (Yen et al., 2016), PPD-Sparse (Yen et al., 2017), WLSTS (Evron et al., 2018), and AnnexML (Tagami, 2017), which are applicable across a broader range of multiclass extreme classification problems. We examine the performance of our method on three bag-of-words datasets: LSHTC1, Dmoz, and ODP. Notably, we abstain from integrating a language model into our framework due to the unavailability of raw text in these datasets, which prevents us from training a regression model on the dense feature representations extracted by language models. By evaluating our method against these general methods, we aim to demonstrate its versatility and effectiveness in various contexts. ### Experiment Setup We adapt our method to three widely used base models: elastic net, random forest, and fully-connected neural networks. It is important to note that elastic net is a linear method incorporating both \(\ell_{1}\) and \(\ell_{2}\) penalties. This choice of penalty is inspired by Yen et al. (2016). Elastic net and random forest are implemented in C++, both with single-node and multiprocess variants. Neural networks are implemented using Pytorch, with a 2-layer fully-connected neural network used for the LSHTC1 and DMOZ datasets and a 4-layer fully-connected neural network for the ODP dataset. We tune the hyperparameters for all models on a held-out dataset. we employ the Rademacher matrix as our embedding matrix, which has demonstrated superior empirical performance in our tests. For the competing methods, we use the hyperparameters as suggested in their respective papers or accompanying code. We conduct experiments on three large-scale datasets, DMOZ, LSHTC1, and ODP, which are extensively used for benchmarking extreme classification algorithms. The details of these datasets are provided in Table 1, with DMOZ and LSHTC1 available from Yen et al. (2016), and ODP from Medini et al. (2019). We compare our method against the following state-of-the-art methods: * PD-Sparse (Yen et al., 2016): an efficient solver designed to exploit the sparsity in extreme classification. * PPD-Sparse Yen et al. (2017): a multi-process extension of PD-Sparse (Yen et al., 2016). * Parabel (Prabhu et al., 2018): a tree-based method which builds a label-hierarchy. * WLSTS (Evron et al., 2018): a method based on _error correcting output coding_ which embeds labels by codes induced by graphs. * AnnexML (Tagami, 2017): a method which constructs a \(k\)-nearest neighbor graph of the label vectors and attempts to reproduce the graph structure from a lower-dimension feature space. * Standard multilayer perceptron classifier with cross-entropy loss. * Standard multilayer perceptron classifier with squared error loss. All neural network training is carried out on a single NVIDIA A40 GPU with 48GB of memory, whereas all other methods are executed on Intel Xeon Gold 6154 processors, equipped with 36 cores and 180GB of memory. Our distributed approach and the PPD-Sparse method -- also implemented in a distributed fashion -- are trained across 10 CPU nodes, harnessing 360 cores and 1.8TB of memory in total. For our CPU-based methods, we consistently set the embedding dimension to \(n=360\). In our distributed implementation, each node independently solves a subset of elastic nets with real-value output, effectively spreading out the computation. 
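Since the embedded targets have \(n\) independent coordinates, the distributed elastic-net variant only needs to hand each worker the feature matrix and a slice of the target columns. The sketch below is a single-machine analogue of that scheme (illustrative hyperparameters; the paper's implementation is in C++), using one scikit-learn ElasticNet per output dimension and joblib for the parallelism.

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.linear_model import ElasticNet

def fit_one_dim(X, t_col, alpha=1e-4, l1_ratio=0.5):
    """Fit a single real-valued elastic net for one embedding coordinate."""
    return ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=2000).fit(X, t_col)

def fit_parallel(X, targets, n_jobs=-1):
    """targets: (N, n) embedded labels; one independent model per column."""
    return Parallel(n_jobs=n_jobs)(
        delayed(fit_one_dim)(X, targets[:, j]) for j in range(targets.shape[1])
    )

def predict(models, X):
    """Stack the per-dimension predictions back into an (N, n) output matrix."""
    return np.column_stack([m.predict(X) for m in models])
```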
Similarly, the random forest model is distributed by assigning each core to compute a subset of trees. For deep neural networks, we explore different embedding dimensions and provide a plot showing the relationship between epoch count and accuracy. The full details of the experiments are presented in the appendix. ### Experimental Results The experimental results, presented in Table 2, highlight the superior performance of our proposed method across various models and datasets. The most significant improvement is observed when employing our method with neural networks, where we achieved the highest accuracy across all three datasets--0.3052, 0.4793, and 0.2309 for LSHTC1, DMOZ, and ODP, respectively. This outperforms the standard cross entropy and squared loss methods, emphasizing the effectiveness of our approach. Note that the cross entropy loss achieves better performance as we found that it achieves higher accuracy with much larger batch sizes. This phenomenon is not observed on other datasets. When \begin{table} \begin{tabular}{l c c c c} \hline Dataset & \(N_{\text{train}}\) & \(N_{\text{test}}\) & \(D\) & \(C\) \\ \hline LSHTC1 & 83805 & 5000 & 328282 & 12046 \\ Dmoz & 335068 & 38340 & 561127 & 11879 \\ ODP & 975936 & 493014 & 493014 & 103361 \\ \hline \end{tabular} \end{table} Table 1: Summary of the datasets used in the experiments. Here, \(N_{\text{train}}\) is the number of training data points, \(N_{\text{test}}\) the number of test data points, \(D\) the number of features, and \(C\) the number of classes. utilizing elastic net or random forest models, our method demonstrated a significant reduction in training time while maintaining competitive accuracy levels. Comparing linear methods, the JOLLE elastic net implementations outperform PD-Sparse and PPD-Sparse in both accuracy and training speed. The random forest models offer a trade-off point between linear models and neural networks in terms of accuracy and training time. This comprehensive comparison underscores the robustness of our method, providing a balance between accuracy and computational efficiency across different models and datasets. The experimental results emphasize the potential of our proposed method in tackling extreme multiclass classification problems effectively and efficiently. As demonstrated in Figures 1, 2, and 3, our methods introduce new trade-off points in extreme multiclass classification. On one hand, our methods can achieve speed improvements with a marginal compromise in accuracy. This is particularly beneficial for time-sensitive applications or when dealing with massive datasets where computational resources are limited. On the other hand, when the priority is maximized accuracy, our methods can still deliver, outperforming competing models while necessitating increased runtime. These flexible trade-offs underscore the adaptability of our techniques to a wide range of practical scenarios, providing valuable alternatives in the toolkit for tackling extreme multiclass classification problems. We compare our framework with varying embedding dimensions against neural networks employing both the standard cross-entropy loss and squared loss on a GPU. The results, as depicted in Figures 4, 5, and 6, demonstrate the robust performance of our method in terms of accuracy and a notably accelerated convergence speed. For the LSHTC1 dataset, we allow the cross entropy baseline to train a few more epochs to fully converge. 
In particular, JOLLE achieves superior performance in contrast to the standard cross-entropy and squared loss approaches. As evidenced by the research of Zhai and Wang (2018), there exists a linear relationship between the dimension of the output space and the generalization error bound. Therefore, it can be inferred that the enhanced performance of our method may be attributed to the reduction in the generalization error, a direct consequence of dimensionality reduction. \begin{table} \begin{tabular}{l l|c c c} \multirow{2}{*}{**Method**} & \multicolumn{4}{c}{**Dataset**} \\ \cline{3-5} & & **LSHTC1** & **DMOZ** & **ODP** \\ \hline \multirow{2}{*}{PD-Sparse (Single Node)} & Accuracy & 0.2210 & 0.3970 & N/A \\ & Time & 230s & 829s & \(>\) 50 hrs \\ \hline \multirow{2}{*}{PPD-Sparse (Multiple Nodes)} & Accuracy & 0.2260 & 0.3930 & 0.1366 \\ & Time & 135s & 656s & 668s \\ \hline \multirow{2}{*}{WLSTS (Single Node)} & Accuracy & 0.1640 & N/A & N/A \\ & Time & 12660s & \(>\) 20 hrs & \(>\) 28 hrs \\ \hline \multirow{2}{*}{AnnexML (Single Node)} & Accuracy & 0.2934 & 0.3972 & 0.2164 \\ & Time & 424s & 2072s & 10435s \\ \hline \multirow{2}{*}{Parabel (Single Node)} & Accuracy & 0.2224 & 0.3856 & 0.1709 \\ & Time & 96s & 600s & 1943s \\ \hline \multirow{2}{*}{JOLLE (Elastic Net, Single Node, \(n=360\))} & Accuracy & 0.2342 & 0.4109 & 0.1511 \\ & Time & 55s & 254s & 2045s \\ \hline \multirow{2}{*}{JOLLE (Elastic Net, Distributed, \(n=360\))} & Accuracy & 0.2338 & 0.4057 & 0.1506 \\ & Time & 14s & 68s & 350s \\ \hline \multirow{2}{*}{JOLLE (Random Forest, Single Node, \(n=360\))} & Accuracy & 0.2582 & 0.3265 & 0.1660 \\ & Time & 482s & 1799s & 2293s \\ \hline \multirow{2}{*}{JOLLE (Random Forest, Distributed, \(n=360\))} & Accuracy & 0.2664 & 0.3585 & 0.1660 \\ & Time & 63s & 226s & 359s \\ \hline \multirow{2}{*}{JOLLE (Neural Networks, GPU, \(n=512\))} & Accuracy & 0.3052 & 0.4793 & 0.2309 \\ & Time & 2260s & 5886s & 14590s \\ \hline \multirow{2}{*}{Standard Cross Entropy (GPU)} & Accuracy & 0.2940 & 0.4688 & 0.1720 \\ & Time & 529s & 5935s & 17374s \\ \hline \multirow{2}{*}{Standard Squared Loss (GPU)} & Accuracy & 0.2804 & 0.3798 & 0.1858 \\ & Time & 2301s & 5934s & 17635s \\ \end{tabular} \end{table} Table 2: Comparison of accuracy and training time across different methods and datasets. Each method has two entries, one for accuracy and one for training time. ## 6 Conclusion and Future Work In conclusion, we have proposed a theory-grounded embedding approach for extreme multiclass classification. Our analysis offers a deeper understanding of the trade-offs between dimensionality reduction and the potential penalty in accuracy. We derived an excess risk bound that reveals a small penalty for dimensionality reduction and that this penalty vanishes under the multiclass Massart condition. Through extensive experiments, we demonstrated that our method outperforms state-of-the-art techniques in both accuracy and run time. An interesting and immediate application is to extend the analysis to multilabel classification. Several avenues for future work include extending our framework to online learning scenarios, where adding an embedding dimension and scaling existing regressors can accommodate new classes as they emerge. Another potential extension involves learning with rejection, which would allow the model to reject samples with low confidence, thereby improving overall performance. 
Our embedding approach offers a promising direction for tackling extreme multiclass classification problems, contributing to a more robust understanding of the underlying trade-offs and providing a solid foundation for future research in this area. Figure 1: Plot of training time (log scale) vs accuracy for the LSHTC1 dataset. Figure 2: Plot of training time (log scale) vs accuracy for the Dmoz dataset. Figure 3: Plot of training time (log scale) vs accuracy for the ODP dataset. Figure 4: Plot of epoch vs accuracy for the LSHTC1 dataset. Figure 5: Plot of epoch vs accuracy for the Dmoz dataset. **Acknowledgment** This work was supported in part by the National Science Foundation under award 2008074 and the Department of Defense, Defense Threat Reduction Agency under award HDTRA1-20-2-0002.
2309.16496
CCEdit: Creative and Controllable Video Editing via Diffusion Models
In this paper, we present CCEdit, a versatile generative video editing framework based on diffusion models. Our approach employs a novel trident network structure that separates structure and appearance control, ensuring precise and creative editing capabilities. Utilizing the foundational ControlNet architecture, we maintain the structural integrity of the video during editing. The incorporation of an additional appearance branch enables users to exert fine-grained control over the edited key frame. These two side branches seamlessly integrate into the main branch, which is constructed upon existing text-to-image (T2I) generation models, through learnable temporal layers. The versatility of our framework is demonstrated through a diverse range of choices in both structure representations and personalized T2I models, as well as the option to provide the edited key frame. To facilitate comprehensive evaluation, we introduce the BalanceCC benchmark dataset, comprising 100 videos and 4 target prompts for each video. Our extensive user studies compare CCEdit with eight state-of-the-art video editing methods. The outcomes demonstrate CCEdit's substantial superiority over all other methods.
Ruoyu Feng, Wenming Weng, Yanhui Wang, Yuhui Yuan, Jianmin Bao, Chong Luo, Zhibo Chen, Baining Guo
2023-09-28T15:03:44Z
http://arxiv.org/abs/2309.16496v3
# CCEdit: Creative and Controllable Video Editing via Diffusion Models ###### Abstract In this paper, we present CCEdit, a versatile generative video editing framework based on diffusion models. Our approach employs a novel trident network structure that separates structure and appearance control, ensuring precise and creative editing capabilities. Utilizing the foundational ControlNet architecture, we maintain the structural integrity of the video during editing. The incorporation of an additional appearance branch enables users to exert fine-grained control over the edited key frame. These two side branches seamlessly integrate into the main branch, which is constructed upon existing text-to-image (T2I) generation models, through learnable temporal layers. The versatility of our framework is demonstrated through a diverse range of choices in both structure representations and personalized T2I models, as well as the option to provide the edited key frame. To facilitate comprehensive evaluation, we introduce the BalanceCC benchmark dataset, com _prising 100 videos and 4 target prompts for each video. Our extensive user studies compare CCEdit with eight state-of-the-art video editing methods. The outcomes demonstrate CCEdit's substantial superiority over all other methods._ ## 1 Introduction In recent years, the domain of visual content creation and editing has undergone a profound transformation, driven by the emergence of diffusion-based generative models [11, 19, 50]. A large body of prior research has demonstrated the exceptional capabilities of diffusion models in generating diverse and high-quality images [40, 42, 45] and videos [20, 48, 5], conditioned by text prompts. These advancements have naturally paved the way for innovations in generative video editing [7, 25, 35, 37, 54, 57, 58, 62]. Generative video editing, despite its rapid advancement, continues to face a series of significant challenges. These challenges include accommodating diverse editing requests, achieving fine-grained control over the editing process, and harnessing the creative potential of generative models. Diverse editing requirements include tasks such as stylistic alterations, foreground replacements, and background modifications. Generative models, while powerful and creative, may not always align perfectly with the editor's intentions or artistic vision, resulting in a lack of precise control. In response to these challenges, this paper introduces CCEdit, a versatile generative video editing framework meticulously designed to strike a harmonious balance between controllability and creativity while accommodating a wide range of editing requirements. CCEdit achieves its goal by effectively decoupling structure and appearance control in a unified _trident network_. This network comprises three essential components: the main text-to-video generation branch and two accompanying side branches dedicated to structure and appearance manipulation. The _main branch_ leverages a pre-trained text-to-image (T2I) diffusion model [42], which is transformed into a text-to-video (T2V) model through the insertion of temporal modules. The _structure branch_, implemented as ControlNet [59], is responsible for digesting the structural information extracted from each frame of the input video and seamlessly infusing it into the main branch. Simultaneously, the _appearance branch_ introduces an innovative mechanism for precise appearance control, when an edited reference frame is available. 
The structure and appearance branches are effectively integrated into the central branch through learnable temporal layers. These layers serve not only as a cohesive link, aggregating information from side branches, but also as a crucial element ensuring temporal consistency across the generated video frames. In highlighting the versatility of our framework, we provide a wide range of control choices for both structure and appearance manipulation. For structure control, users can choose from various types of structural information, including line drawings [8], PiDi boundaries [51], and depth maps [41], all of which can serve as input to the structure branch. On the appearance control front, the main branch already provides an inherent mechanism, allowing control through text prompts. Additionally, personalized T2I models from the Stable Diffusion community, such as Dream-Booth and LoRA [44, 21], can be integrated as plugins into CCEdit, offering greater flexibility and creativity. More importantly, the appearance branch can accommodate the referenced key frame, facilitating fine-grained appearance control. Notably, all these control options are seamlessly integrated within the same framework, yielding editing outcomes that demonstrate both temporal coherence and precision. This not only underscores the versatility of our solution but also ensures ease of adoption, making it a compelling choice for AI-assisted video editing. To address the challenges inherent in evaluating generative video editing methods, we introduce the _BalanceCC benchmark_ dataset. Comprising 100 diverse videos and 4 target prompts for each video, this dataset includes detailed scene descriptions and attributes related to video category, scene complexity, motion, among others. These descriptions are generated with the assistance of the cutting-edge GPT-4V(ision) model [1, 32, 33, 34] and then refined by human annotators. Through extensive experimental evaluations on this dataset, we not only confirm the outstanding functionality and editing capabilities of CCEdit, but also underscore the comprehensiveness of the benchmark dataset. We firmly believe that BalanceCC stands as a robust and all-encompassing evaluation platform for the dynamic field of generative video editing. ## 2 Related Work ### Diffusion-based Image and Video Generation Diffusion models (DM) [11, 19, 50] have demonstrated exceptional capabilities in the field of image synthesis. These models indeed help by learning to approximate a data distribution through the iterative denoising of a diffused input. What makes DMs truly practical is the incorporation of text prompt as condition to control the output image during the generative process [31, 39, 42, 45]. Apart from the proliferation of advanced techniques in the field of image synthesis, DMs have also excelled in video generation [20, 48, 5, 31]. This is achieved by integrating modulated spatial-temporal modules, enabling the synthesis of high-quality videos while maintaining temporal consistency. ### Video Editing with Diffusion Models Recent studies leverage the inherent generative priors of DMs for image editing [3, 10, 16, 27, 36, 52]. The same idea is also applied in the field of video editing. Unlike image editing, video editing involves not only the manipulation of appearance-based attributes but also requires the meticulous preservation of temporal coherence throughout frames. A lapse in maintaining this temporal coherence can result in visual artifacts, such as flickering and degradation. 
Some generative video editing methods [6, 14, 22, 37, 53, 58, 60] strive to achieve training-free temporal consistency. They accomplish this by transitioning from spatial self-attention mechanisms within T2I diffusion models to temporal-aware cross-frame attention techniques. Some other methods [26, 47, 55, 62] perform per-video fine-tuning. They focus on optimizing the parameters of pre-trained T2I models according to the input video, aiming to achieve temporal coherence within the target video. However, this optimization for each input video can be time-consuming, and inadequate tuning of the temporal modules might lead to suboptimal temporal coherence. Recent studies [15, 24, 57] have introduced trainable temporal layers to construct T2V generative models. These models are trained on extensive text-video paired datasets, and they are used in both video generation and editing tasks [12, 29]. Unlike previous work, this study does not seek a simple fix to existing T2I models for video editing, nor does it attempt to train a full-fledged T2V model. Instead, we introduce a unique network architecture tailored for video editing. Our approach involves dataset-level fine-tuning, circumvents the expenses associated with per-video tuning during inference time, and prioritizes the effective training of temporal layers to achieve robust model performance. ## 3 Approach ### Preliminary **Diffusion models**[19] are probabilistic generative models that approximate a data distribution \(p(\mathbf{x})\) by gradually denoising a normally distributed variable. Specifically, DMs aim to learn the reverse dynamics of a predetermined Markov chain with a fixed length of \(T\). The forward Markov chain can be conceptualized as a procedure of injecting noise into a pristine image. Empirically, DMs can be interpreted as an equally weighted sequence of denoising autoencoders \(\epsilon_{\theta}(\mathbf{x}_{t},t)\) where \(t=1,...,T\). These autoencoders are trained to predict a denoised variant of the noisy input \(\mathbf{x}_{t}\). The corresponding objective can be simplified to \[\mathbb{E}_{\mathbf{x}_{0},t,\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})}[\|\epsilon-\epsilon_{\theta}(\mathbf{x}_{t},t)\|_{2}^{2}]. \tag{1}\] **Latent diffusion models** (LDMs) are trained in the learned latent representation space. The bridge between this latent space and the original pixel-level domain is established via a perceptual compression model. The perceptual compression model is composed of an encoder \(\mathcal{E}\) and a decoder \(\mathcal{D}\), where \(\mathbf{z}=\mathcal{E}(\mathbf{x})\) and \(\mathbf{x}\approx\mathcal{D}(\mathcal{E}(\mathbf{x}))\). Then the optimization objective in Eq. (1) is modified as \[\mathbb{E}_{\mathbf{z}_{0},t,\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})}[\|\epsilon-\epsilon_{\theta}(\mathbf{z}_{t},t)\|_{2}^{2}]. \tag{2}\] Figure 2: **Illustration of our overall framework.** Structure and appearance information in the target video are modulated independently, and seamlessly integrated into the main branch. Structure control is conducted via the pre-trained ControlNet [59]. Appearance control is achieved precisely by the edited key frame. Details regarding the autoencoder and iterative denoising process are omitted for simplicity. “**P**”, “**S**”, “**B**”, “**L**” indicate prompt, structure, base model, and LoRA, respectively. ### The CCEdit Framework The primary objective of our work is to empower creative control in video editing.
Although creativity naturally emerges in generative models, achieving controllability is a more complex endeavor. To address this challenge, CCEdit strategically decouples the management of structure and appearance within a unified trident network. In Fig. 2, we provide an illustrative overview of the framework's architecture, which comprises three vital components. **The main branch.** The main branch of our model fundamentally operates as a text-to-video generation network. It is built upon the well-established text-to-image model, Stable Diffusion [42]. We transform this model into a text-to-video variant by incorporating temporal layers into spatial layers of both the encoder and decoder. This entails the addition of a one-dimensional _temporal layer_ with the same type as its previous _spatial layer_, _i.e._, convolution blocks and attention blocks. Besides, we also use the skip connection and zero-initialized _projection out layer_ of each newly added temporal layer for stable and progressive updating, which has been proven to be effective [15, 48, 59]. The zero-initialized projection out layer is instantiated as a linear layer. Formally, let \(\mathcal{F}(\cdot;\Theta_{s})\) be the 2D spatial block, \(\mathcal{F}(\cdot;\Theta_{t})\) be the 1D temporal block, and \(\mathcal{Z}(\cdot;\Theta_{z})\) be the zero-initialized projection out layer, where \(\Theta_{s}\), \(\Theta_{t}\), and \(\Theta_{z}\) represent corresponding network parameters. The complete process of one pseudo-3D block that maps the input feature \(\mathbf{u}\) to the output feature \(\mathbf{v}\) is written as \[\mathbf{v}=\mathcal{F}(\mathbf{u};\Theta_{s})+\mathcal{Z}(\mathcal{F}( \mathcal{F}(\mathbf{u};\Theta_{s});\Theta_{t});\Theta_{z}), \tag{3}\] where \(\mathbf{u}\) and \(\mathbf{v}\) are both 3D feature maps, _i.e._, \(\mathbf{u}\in\mathbb{R}^{l\times h\times w\times c}\) with \(\{l,h,w,c\}\) as the number of frames, height, width, and the number of channels, respectively. Moreover, we draw inspiration from AnimateDiff [15] and VideoLDM [5], which advocates the shared utilization of temporal layers among personalized T2I models such as DreamBooth [44] and LoRA [21]. The key aspect of it is training the temporal layers while keeping the spatial weights frozen. We follow this schedule to inherit the T2I model's compatibility and visual generation capability. **The structure branch.** The introduction of the structure branch is motivated by the common need in video editing tasks to preserve frame structure for non-edited or style-transferred segments. Striking a delicate balance between maintaining faithful frame structure and allowing the generative model ample creative freedom poses a significant challenge. The structure branch is implemented with the pretrained ControlNet [59]. To accommodate varying levels of structure control, we use various types of structure representation, including line drawings [8], PiDi boundaries [51], and depth maps [41], ensuring adaptability to control structure at different degrees. Specifically, the structure representation from all frames is extracted individually and injected into the main branch. Each frame undergoes preprocessing to derive a structure representation, and the weights of the ControlNet are held in a frozen state during training, emphasizing the preservation of learned structural features. 
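To make the temporal-layer construction of Eq. (3) concrete before the formal treatment of the side branches, the following is a minimal PyTorch-style sketch of one pseudo-3D block: a spatial layer (standing in for a frozen layer of the pre-trained T2I model) followed by a 1D temporal layer whose projection-out layer is zero-initialized. Module types and tensor shapes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PseudoSpatioTemporalBlock(nn.Module):
    """v = F(u; Theta_s) + Z(F(F(u; Theta_s); Theta_t); Theta_z), as in Eq. (3).

    u has shape (batch, frames, channels, height, width); the spatial layer is
    applied per frame, the temporal layer mixes the frame axis per location.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)   # frozen T2I layer stand-in
        self.temporal = nn.Conv1d(channels, channels, 3, padding=1)  # newly added temporal layer
        self.proj_out = nn.Linear(channels, channels)                # zero-initialized projection out
        nn.init.zeros_(self.proj_out.weight)
        nn.init.zeros_(self.proj_out.bias)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        b, f, c, h, w = u.shape
        s = self.spatial(u.reshape(b * f, c, h, w)).reshape(b, f, c, h, w)
        # Run the 1D convolution along the frame axis for every spatial location.
        t = s.permute(0, 3, 4, 2, 1).reshape(b * h * w, c, f)
        t = self.temporal(t).reshape(b, h, w, c, f).permute(0, 4, 3, 1, 2)
        # Zero-initialized projection applied over the channel dimension.
        t = self.proj_out(t.permute(0, 1, 3, 4, 2)).permute(0, 1, 4, 2, 3)
        return s + t
```

Because the projection out is zero-initialized, the block initially reproduces the frozen spatial computation, which is what allows stable, progressive training of the temporal weights.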
Formally, let \(\mathcal{F}(\cdot;\Phi_{c})\) denote the ControlNet that maps structure information into features, and \(\mathcal{Z}(\cdot;\Phi_{z1})\) and \(\mathcal{Z}(\cdot;\Phi_{z2})\) denote the two instances of zero convolutions in [59]. Then the process of adding structure control to the 3D-aware feature \(\mathbf{v}\) is \[\mathbf{v}_{s}=\mathbf{v}+\mathcal{Z}(\mathcal{F}(\mathbf{z}_{t}+\mathcal{Z}( \mathbf{c}_{s};\Phi_{z1});\Phi_{c});\Phi_{z2}), \tag{4}\] where \(\mathbf{z}_{t}\) denotes the noisy input in latent space, \(\mathbf{c}_{s}\) denotes the structure condition of the video sequence, and \(\mathbf{v}_{s}\) denotes the feature aware of structure information. **The appearance branch.** In addition to using text prompts and incorporating personalized models for appearance control, we introduce a novel design--the appearance branch. This architectural innovation introduces a pioneering approach for fine-grained appearance control, allowing for the integration of an edited frame as a detailed reference in the context of video editing. Since the editing of key frame can be accomplished through precise user edits or by using advanced off-the-shelf image editing algorithms, the introduction of appearance branch provides our framework with greater creativity and controllability. Specifically, a key frame is initially assigned to the latent variable by the encoder \(\mathcal{E}\). Subsequently, a neural network with similar architecture to the main branch's encoder extracts multi-scale features. The extracted features are incorporated into the main branch. Through this design, the appearance information from the edited key frame propagates to all frames via the temporal modules, effectively achieving the desired creative control in the output video. Formally, suppose \(\mathcal{F}(\cdot;\Psi)\) is the encoder that maps the pixel-wise appearance of the key frame into features, \(\mathcal{Z}(\cdot;\Psi_{z})\) denotes the zero convolution projection out layer, \(\mathbf{v}^{j}\) indicates the feature of the j-_th_ frame, and \(\mathbf{c}_{a}^{j}\) is the key frame. Then the process of adding appearance control to the features is as follows \[\mathbf{v}_{a}^{j}=\mathbf{v}^{j}+\mathcal{Z}(\mathcal{F}(\mathcal{E}( \mathbf{c}_{a}^{j});\Psi);\Psi_{z}), \tag{5}\] where \(\mathbf{v}_{a}^{j}\) is the j-_th_ feature, aware of the edited appearance. **Training.** Before training, we initialize the spatial weights of the main branch with pre-trained T2I models. Temporal weights are randomly initialized while the projection out layers are zero-initialized. We instantiate the model in the structure branch by pre-trained ControlNets [59]. As for the appearance branch, we copy the encoder of pre-trained T2I model and remove text cross-attention layers. During training, given the latent variables \(\mathbf{z}_{0}=\mathcal{E}(\mathbf{x}_{0})\) of an input video clip \(\mathbf{x}_{0}\). Diffusion algorithms progressively add noise to it and produce the noisy input \(\mathbf{z}_{t}\). 
Given conditions of time step \(t\), text prompt \(\mathbf{c}_{t}\), structure information \(\mathbf{c}_{s}\), and appearance information \(\mathbf{c}_{a}^{j}\) of the key frame, the overall optimization objective is \[\mathbb{E}_{\mathbf{z}_{0},t,\mathbf{c}_{t},\mathbf{c}_{s},\mathbf{c}_{a}^{j},\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})}[\lVert\epsilon-\epsilon_{\theta}(\mathbf{z}_{t},t,\mathbf{c}_{t},\mathbf{c}_{s},\mathbf{c}_{a}^{j})\rVert_{2}^{2}], \tag{6}\] where \(\epsilon_{\theta}\) denotes the whole network, which predicts the noise added to the noisy input \(\mathbf{z}_{t}\). We freeze the spatial weights in the main branch and the weights in the structure branch. Concurrently, we update the parameters of the newly incorporated temporal layers in the main branch, as well as the weights in the appearance branch. By default, the appearance branch takes the center frame of the video clip as input. **Inference with anchor prior.** We find that, in some challenging cases, the edited video may exhibit large areas of flickering. This is often caused by inconsistent structural representations extracted by image-level pre-processing modules. Therefore, we propose a simple yet efficient strategy to improve the stability and quality of the result by modifying the start noise. Specifically, consider the individual noise sequence \([\epsilon_{\text{ind}}^{1},...,\epsilon_{\text{ind}}^{l}]\) and the edited center frame \(\mathbf{c}_{a}^{j}\), where \(l\) and \(j\) indicate the number of frames and the index of the edited key frame, respectively. The start noise \(\epsilon^{i}\) for each frame is modified as \[\epsilon^{i}=\epsilon_{\text{ind}}^{i}+\alpha\mathcal{E}(\mathbf{c}_{a}^{j}), \tag{7}\] where \(\alpha\) is the hyperparameter that controls the strength of the prior, and \(\mathcal{E}(\mathbf{c}_{a}^{j})\) is the latent of the edited key frame. We call this strategy _anchor prior_; it is tailored for our pipeline of editing videos with a reference key frame. We empirically found that \(\alpha=0.03\) works well in most cases. The intuition is that the video frames are usually similar to each other. The operation of adding noise in diffusion models tends to rapidly destroy high-frequency information while slowly degrading low-frequency information. Therefore, the anchor prior can be seen as providing a bit of low-frequency information to all frames while ensuring that the distribution remains almost unchanged (achieved by a small \(\alpha\)), thus providing better starting points. ### Editing for Long Videos Video editing tools face a challenge in maintaining a consistent look and feel across clips that span tens of seconds, equivalent to hundreds of frames. The inherent limitation of generative models, processing only a dozen frames per inference due to memory constraints, introduces variability in results, even with a fixed random seed. CCEdit addresses this challenge with its fine-grained appearance control, enabling the editing of long videos into a cohesive look and feel through extension and interpolation modes. In essence, let \(L+1\) represent the number of frames CCEdit processes in one run. For videos exceeding \(L+1\) frames, we select one key frame for every \(L\) frames. In the initial run, the first \(L+1\) key frames undergo editing. Subsequent runs, in extension mode, treat the last edited frame from the previous run as the first frame. The edited result serves as a reference for the appearance branch. This process iterates until all key frames are processed.
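Before turning to the interpolation mode, the anchor prior of Eq. (7) is simple enough to state as code. The sketch below assumes the per-frame noise and the encoded key-frame latent are already available; the names and the latent shape are illustrative.

```python
import torch

def anchored_start_noise(key_frame_latent: torch.Tensor,
                         num_frames: int, alpha: float = 0.03) -> torch.Tensor:
    """Eq. (7): eps^i = eps_ind^i + alpha * E(c_a^j) for every frame i.

    A small alpha injects shared low-frequency content from the edited key
    frame into all start latents while leaving their distribution almost unchanged.
    """
    eps_ind = torch.randn(num_frames, *key_frame_latent.shape)   # independent per-frame noise
    return eps_ind + alpha * key_frame_latent.unsqueeze(0)       # broadcast over frames
```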
Transitioning to the interpolation mode, two adjacent frames become the first and last frames of an inference run to edit the \(L-1\) intermediate frames, and both edited frames serve as references for the appearance branch. This continues until all frames are edited. This meticulous process ensures consistent editing results throughout the entire video. ## 4 BalanceCC Benchmark ### Overview While generative video editing has gained considerable attention as a growing research field, the absence of a standardized benchmark for assessing the efficacy of different approaches poses a potential hindrance to the technical progression of the field. Despite the recent introduction of TGVE 2023 [56] as an evaluation benchmark, it is crucial to note that the videos within this benchmark present challenges such as severe camera shake, overly complex scenes, blur, and low frame rates. In response to this, we introduce _BalanceCC_, a benchmark that contains 100 videos with varied attributes, designed to offer a comprehensive platform for evaluating video editing, focusing on both controllability and creativity. ### Benchmark Establishment We curated a collection of 100 open-license videos suitable for legal, non-stigmatizing modifications. These videos range from 2 to 20 seconds in duration, each with a frame rate of about 30 fps. Besides, we utilize GPT-4V(ision) [1, 32, 33, 34] as an assistant to establish this benchmark. For each video, GPT-4V(ision) provides a description and assigns a complexity score to the scene using the center frame as a reference, with ratings from \(1\) (Simple) to \(3\) (Complex). Additionally, we manually annotate each video for camera movement, object movement, and categorical content, with motion rated on a scale from \(1\) (Stationary) to \(3\) (Quick), and categories that include humans, animals, objects, and landscapes. Following this, GPT-4V(ision) is tasked with crafting target prompts for video editing, encompassing style, object, and background alterations, along with compound changes. While this process is akin to that of TGVE 2023 [56], we additionally introduce a "Fantasy Level" to indicate the imaginative and creative degree of the target prompt. These measures are intended to assist researchers in appraising the applicability of various methods to source videos and in gauging their potential. See the supplementary for details on the prompting pipeline, specific instructions, principles of labeling, and illustrative examples. ### Statistics The overall distribution of BalanceCC is illustrated in Fig. 3. For the original videos, the distribution across categories tends towards uniformity, yet the "Human" category is slightly more prevalent than others. This was a deliberate choice, as editing human subjects is more practically significant and, due to the complexity of human and facial structures, editing in the "Human" category presents more challenges. Regarding "Scene Complexity" and "Object Motion", videos with moderate and slow levels are slightly more common. In terms of "Camera Motion", videos of lower levels predominate (Stationary: \(54\%\), Slow: \(38\%\)). Finally, regarding the "Fantasy Level" distribution in target prompts, there is a relatively balanced allocation, with a marginal inclination towards videos categorized at a moderate level.
We hope that the aforementioned categorization of the benchmark will better assist researchers and users in understanding the strengths and weaknesses of a method, thus enabling targeted improvements and fostering rapid development in the field. ## 5 Experiments ### Implementation Details Stable Diffusion-v1.5 is used as the base T2I model in the main branch. We use the pre-trained ControlNet [59] for the structure information guidance. The training dataset combines WebVid-10M [4] and a self-collected private dataset. We trained the temporal consistency modules and appearance ControlNet with various types of structural information, including line drawings [8], PiDi boundaries [51], depth maps detected by Midas [41], and human scribbles. Depth maps are used by default. The control scales are set to \(1\). For the temporal interpolation model, we train it exclusively on depth maps, employing a smaller control scale of \(0.5\). This approach is adopted because its requirement for structural information is lower than that of the other models. During the training process, we first resize the shorter side to \(384\) pixels, followed by a random crop to obtain video clips with a size of \(384\times 576\). \(17\) frames at \(4\) fps are sampled from each video. The batch size is \(32\) and the learning rate is \(3e-5\). We train each model for \(100\)K iterations. During inference, we employ the DDIM [49] sampler with \(30\) steps and classifier-free guidance [18] of magnitude \(9\). Figure 3: Illustration of the statistics on BalanceCC. Figure 4: **Results under different structural guidance.** Figure 5: **Results of video style translation.**\(\langle\cdot\rangle\) indicate the personalized T2I model we used. ### Applications **Controllable and creative style transfer.** In CCEdit, the controllability and creativity of video style transfer are manifested in various dimensions. Two basic aspects include the diversity of structural information and the availability of off-the-shelf personalized models [9, 13]. The former enables users to customize the granularity and type of structural information retained from the original video, as depicted in Fig. 4. The latter allows users to edit the video into their desired domain, as shown in Fig. 5. **Video editing with precise appearance control.** Sometimes, users require stronger control over the content they want to generate. For example, they may want to change only the foreground, alter just the background, or edit the texture content of a video in a specific way. Therefore, CCEdit focuses more on precise appearance control by initially modifying the key frame with image editing techniques and then using it as a reference for the entire video. As depicted in Fig. 6, we first edit the center frames of the videos by Stable Diffusion Web UI [2], followed by utilizing these edited center frames as guides for the video editing process. Thanks to end-to-end network training, our method coherently propagates edits from the key frame throughout the entire video. **Long video editing.** A seamless and visually appealing video typically necessitates a higher frame count and increased frame rate, elements that have been inadequately addressed by many contemporary video editing methodologies. CCEdit effectively resolves this through its hierarchical design for key frame editing, combined with iterative extension and a tailored temporal interpolation mechanism.
This approach enables the editing of videos comprising up to hundreds of frames at \(24\) fps (frames per second). An example is shown in Fig. 7. ### State-of-the-Art Comparisons **Datasets.** We employ a smaller segment of our proposed benchmark, designated as _mini-BalanceCC_. This subset encompasses \(50\) videos, each randomly selected from the original BalanceCC dataset, ensuring a representative distribution similar to that of the original collection. **Compared methods.** To conduct an exhaustive comparison, we have selected eight representative video editing methodologies: Tune-A-Video [55], vid2vid-zero [53], Text2Video-zero [22], FateZero [37], Pix2Video [6], ControlVideo [60], Rerender A Video [58], and TokenFlow [14]. Method details are omitted for brevity and can be found in the supplementary. Regarding our approach, we employ depth maps as structure control. For the appearance control, we adopt the off-the-shelf method of PnP-Diffusion [52] with the same hyper-parameters to automatically edit the center frame of each video clip. To ensure fairness in comparison, Stable Diffusion-v1.5 is used as the base model for all methods. Figure 6: **Video editing results with customized center frame as reference.** The first row corresponds to customizing the foreground, the second row corresponds to customizing the background, and the third row takes a given reference image to affect the entire picture. \(\langle\cdot\rangle\) indicate the personalized T2I model we used. Figure 7: **Illustration of long video editing.** CCEdit achieves good consistency across over 240 frames. Zoom in for best view. Figure 8: **Qualitative comparison results.** Red boxes reveal TokenFlow’s inadequate local detail preservation, in contrast to our method’s detailed, coherent output. Zoom in for best view. **Evaluation metrics.** In our preliminary study, we observed that automatic metrics, such as CLIP-Score [17] to assess text alignment and frame consistency, do not fully align with human preferences [29, 56, 61]. We focused on collecting human preferences for a comprehensive user study, comparing our method against recent state-of-the-art techniques based on mean opinion score (MOS) and direct comparisons. We gathered 1,119 scoring results from 33 volunteers, each reflecting all indicators for an edited video. For automatic metric results, refer to the supplementary. **Results.** As illustrated in Tab. 1, CCEdit excels in both editing accuracy and aesthetic quality, and is just slightly inferior to TokenFlow in temporal smoothness. For overall impression, our approach achieved a MOS of 3.87 on a scale from 1 to 5. Among the eight reference methods, TokenFlow performed closest to ours, with an overall MOS of 3.58. The remaining seven methods scored between 1.5 and 3.0 on the MOS scale. As for direct comparisons, our method outperforms all eight reference schemes significantly. While TokenFlow remains the closest competitor, our CCEdit prevails in 52.9% of test cases against it, trails in 32.4%, and ties in 14.7% of cases. Furthermore, Fig. 8 presents the qualitative results of the top three finalists (CCEdit, TokenFlow [14], and Pix2Video [6]). It shows that Pix2Video struggles to keep temporal coherence, while TokenFlow demonstrates noticeable blurring. In contrast, our method can accurately achieve the editing objective while maintaining the temporal coherence as well as the structure of the input video. ### Ablation Study **Appearance control.** Fig.
9 illustrates the importance of taking the edited key frame as a reference in certain scenarios. Initially, translating video scenes into "cyberpunk" style (1st row) solely through prompt adjustments appears challenging, as this word is unfamiliar to the pre-trained T2I model weights and the temporal consistency modules. Providing a customized center frame allows the network to smoothly extend its appearance to adjacent frames, creating a cohesive video. Besides, we replicated the user study pipeline from Sec. 5.3 to evaluate the effectiveness of appearance control. The model without appearance control received a mean opinion score (MOS) of 2.88, significantly lower than the 3.87 scored by the process of editing one key frame first and then propagating to surrounding frames. **Anchor prior.** Fig. 10 demonstrates the ablation study for our anchor prior. It reveals that the absence of the anchor prior may lead to regional flickering in the video sequence, while its presence effectively mitigates this issue. ## 6 Limitation and Future Works In our approach, structural control is exerted by explicitly extracting the structural representation from the source video and sustaining it via the structure branch. However, it may encounter challenges when tasked with substantial structural alterations-exemplified by the conversion of a "cute rabbit" into a "majestic tiger." Addressing these complexities will be a primary objective of our future work. \begin{table} \begin{tabular}{l|c c c c|c c c} Method & Edit & Aes. & Tem. & Ove. & Win & Tie & Lose \\ \hline Tune-A-Video [55] & 3.24 & 3.01 & 2.72 & 2.77 & 16.4 & 6.9 & 76.7 \\ vid2vid-zero [53] & 3.00 & 2.38 & 2.11 & 2.35 & 10.6 & 4.6 & 84.8 \\ Text2Video-Zero [22] & 2.10 & 1.40 & 1.40 & 1.50 & 16.5 & 1.3 & 86.2 \\ FateZero [37] & 2.47 & 3.16 & 3.30 & 2.79 & 16.6 & 3.6 & 79.8 \\ Pix2Video [6] & 3.68 & 2.97 & 2.80 & 2.97 & 29.9 & 5.2 & 64.9 \\ ControlVideo [60] & 3.01 & 2.71 & 2.60 & 2.66 & 13.8 & 5.6 & 80.6 \\ Rerender A Video [58] & 2.40 & 2.69 & 2.82 & 2.50 & 11.1 & 0.0 & 88.9 \\ TokenFlow [14] & 3.78 & 3.61 & **3.79** & 3.58 & 32.4 & 14.7 & 52.9 \\ \hline CCEdit (Ours) & **4.06** & **4.00** & 3.74 & **3.87** & - & - & - \\ \hline \end{tabular} \end{table} Table 1: **Left: Mean opinion scores (MOS) over different aspects of the generated video, including editing accuracy (Edit), aesthetics (Aes.), temporal consistency (Tem.), and overall impression (Ove.). Scores range from 1 to 5. Right: Win, Tie, and Lose percentage in side-by-side comparisons with CCEdit.** Figure 10: **Ablation study on anchor prior. Our proposed anchor prior helps a lot in stabilizing the appearance across frames. The red boxes demonstrate the localized flickering in the frames.** Figure 9: **Ablation study on appearance control. In some challenging cases, appearance control is crucial to achieving the expected results.** ## 7 Conclusion This paper presents an innovative trident network architecture specifically designed for generative video editing. This unified framework enables precise and controllable video editing while broadening creative possibilities. To address the challenges in evaluating generative video editing approaches, we introduce the meticulously curated BalanceCC benchmark dataset. Our aim is to pave the way for researchers in the generative video editing domain and equip practitioners with indispensable tools for their creative workflows.
2309.06685
A discrete uniformization theorem for decorated piecewise Euclidean metrics on surfaces, II
In this paper, we study a natural discretization of the smooth Gaussian curvature on surfaces, which is defined as the quotient of the angle defect and the area of a geodesic disk at a vertex of a polyhedral surface. It is proved that each decorated piecewise Euclidean metric on surfaces with nonpositive Euler number is discrete conformal to a decorated piecewise Euclidean metric with this discrete curvature constant. We further investigate the prescribing combinatorial curvature problem for a parametrization of this discrete curvature and prove some Kazdan-Warner type results. The main tools are Bobenko-Lutz's discrete conformal theory for decorated piecewise Euclidean metrics on surfaces and variational principles with constraints.
Xu Xu, Chao Zheng
2023-09-13T02:52:20Z
http://arxiv.org/abs/2309.06685v1
# A Discrete Uniformization Theorem for Decorated Piecewise Euclidean Metrics on Surfaces, II ###### Abstract. In this paper, we study a natural discretization of the smooth Gaussian curvature on surfaces, which is defined as the quotient of the angle defect and the area of a geodesic disk at a vertex of a polyhedral surface. It is proved that each decorated piecewise Euclidean metric on surfaces with nonpositive Euler number is discrete conformal to a decorated piecewise Euclidean metric with this discrete curvature constant. We further investigate the prescribing combinatorial curvature problem for a parametrization of this discrete curvature and prove some Kazdan-Warner type results. The main tools are Bobenko-Lutz's discrete conformal theory for decorated piecewise Euclidean metrics on surfaces and variational principles with constraints. Key words and phrases:Discrete uniformization; Prescribing combinatorial curvature problem; Polyhedral metrics; Decorated piecewise Euclidean metrics; Variational principle MSC (2020): 52C26 ## 1. Introduction The classical Gaussian curvature at a point \(p\) in a Riemann surface can be defined as \[R(p)=\lim_{r\to 0}\frac{12}{\pi r^{4}}(\pi r^{2}-A(r)),\] where \(A(r)\) is the area of the geodesic disk of radius \(r\) at \(p\). If we apply this definition to a vertex \(i\) of a piecewise Euclidean surface, this gives a natural discretization of the classical Gaussian curvature (up to a constant) \[R_{i}=\frac{K_{i}}{r_{i}^{2}}, \tag{1}\] where \(K_{i}=2\pi-\theta_{i}\) is the angle defect at \(i\), \(\theta_{i}\) is the cone angle at \(i\) and \(r_{i}\) is the radius of the geodesic disk at \(i\). We call \(R_{i}\) as the discrete Gaussian curvature or combinatorial curvature. It is natural to study the discrete uniformization theorem for the discrete Gaussian curvature \(R_{i}\). A good approach to this problem is working in the framework of decorated piecewise Euclidean metrics recently introduced by Bobenko-Lutz [1]. Suppose \(S\) is a connected closed surface and \(V\) is a finite non-empty subset of \(S\), the pair \((S,V)\) is called a marked surface. A piecewise Euclidean metric (PE metric) \(dist_{S}\) on the marked surface \((S,V)\) is a flat cone metric with the conic singularities contained in \(V\). A marked surface with a PE metric is called a PE surface, denoted by \((S,V,dist_{S})\). The points in \(V\) are called vertices of the PE surface. A decoration \(r\) on a PE surface \((S,V,dist_{S})\) is a choice of circle of radius \(r_{i}\) at each vertex \(i\in V\). These circles in the decoration are called vertex-circles. We denote a decorated PE surface by \((S,V,dist_{S},r)\) and call the pair \((dist_{S},r)\) a decorated PE metric on the marked surface \((S,V)\). In this paper, we focus on the case that each pair of vertex-circles is separated. **Theorem 1.1**.: Let \((dist_{S},r)\) be a decorated PE metric on a marked surface \((S,V)\) with Euler number \(\chi(S)\leq 0\). Let \(\overline{R}\leq 0\) be a function defined on \(V\) satisfying \(\overline{R}\not\equiv 0\) if \(\chi(S)<0\) and \(\overline{R}\equiv 0\) if \(\chi(S)=0\). Then there exists a unique discrete conformal equivalent decorated PE metric \((\widetilde{dist_{S}},\widetilde{r})\) on \((S,V)\) with the prescribed discrete Gaussian curvature \(\overline{R}\) (up to scaling if \(\chi(S)=0\)). Theorem 1.1 is a discrete analogue of Kazdan-Warner's results in [18, 19]. 
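For intuition about the curvature in (1) that these statements prescribe, note that it is straightforward to evaluate numerically: the cone angle \(\theta_i\) is the sum of the inner angles at \(i\) of the incident Euclidean triangles, each computed from edge lengths by the law of cosines. The following sketch only illustrates the definition; the triangle data and the radius are made up.

```python
import math

def inner_angle(a: float, b: float, c: float) -> float:
    """Angle between the sides of lengths a and b, opposite the side of length c."""
    return math.acos((a * a + b * b - c * c) / (2 * a * b))

def discrete_gaussian_curvature(incident_triangles, r_i: float) -> float:
    """R_i = K_i / r_i**2 with K_i = 2*pi - theta_i, as in (1).

    incident_triangles: edge-length triples (a, b, c) for the triangles meeting
    at the vertex i, where a and b are the two edges emanating from i.
    """
    theta_i = sum(inner_angle(a, b, c) for a, b, c in incident_triangles)
    return (2 * math.pi - theta_i) / r_i ** 2

# Six equilateral triangles give theta_i = 2*pi, hence zero curvature; shortening
# one opposite edge creates a positive angle defect and positive curvature.
flat = [(1.0, 1.0, 1.0)] * 6
cone = [(1.0, 1.0, 1.0)] * 5 + [(1.0, 1.0, 0.5)]
print(discrete_gaussian_curvature(flat, r_i=0.3))   # ~0.0 (flat vertex)
print(discrete_gaussian_curvature(cone, r_i=0.3))   # > 0 (cone point)
```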
As a corollary of Theorem 1.1, we have the following discrete uniformization theorem for the discrete Gaussian curvature \(R_{i}\) on decorated PE surfaces. **Corollary 1.2**.: For any decorated PE metric \((dist_{S},r)\) on a marked surface \((S,V)\) with Euler number \(\chi(S)\leq 0\), there exists a unique discrete conformal equivalent decorated PE metric \((\widetilde{dist_{S}},\widetilde{r})\) on \((S,V)\) with constant discrete Gaussian curvature \(R\) (up to scaling if \(\chi(S)=0\)). The combinatorial curvature \(R\) in (1) was first introduced by Ge-Xu [10] for Thurston's circle packing metrics on triangulated surfaces. After that, there are lots of research activities on the combinatorial curvature \(R\) on surfaces. See [7, 8, 9, 10, 27, 28, 29] and others for example. Most of these works gave equivalent conditions for the existence of discrete conformal metrics with prescribed combinatorial curvature \(R\) via combinatorial curvature flows. In Theorem 1.1 and Corollary 1.2, we give some sufficient conditions for the existence involving only the prescribed combinatorial curvatures and the topology of the surfaces. Following [10], we further introduce the following parameterized combinatorial curvature for the decorated PE metrics on surfaces \[R_{\alpha,i}=\frac{K_{i}}{r_{i}^{\alpha}}, \tag{2}\] where \(\alpha\in\mathbb{R}\) is a constant. If \(\alpha=2\), then \(R_{2,i}\) is the combinatorial curvature \(R_{i}\) defined by (1). We call \(R_{\alpha}\) as the combinatorial \(\alpha\)-curvature. **Theorem 1.3**.: Let \((dist_{S},r)\) be a decorated PE metric on a marked surface \((S,V)\). Suppose \(\alpha\in\mathbb{R}\) is a constant and \(\overline{R}:V\to\underline{\mathbb{R}}\) is a given function. There exists a discrete conformal equivalent decorated PE metric \((\widetilde{dist_{S}},\widetilde{r})\) with combinatorial \(\alpha\)-curvature \(\overline{R}\) if one of the following conditions is satisfied **(1):**: \(\chi(S)>0,\;\alpha<0,\;\overline{R}>0\); **(2):**: \(\chi(S)<0,\;\alpha\neq 0,\;\overline{R}\leq 0,\;\overline{R}\not\equiv 0\); **(3):**: \(\chi(S)=0,\;\alpha\neq 0,\;\overline{R}\equiv 0\); **(4):**: \(\alpha=0,\,\overline{R}\in(-\infty,2\pi)\), \(\sum_{i\in V}\overline{R_{i}}=2\pi\chi(S)\). If \(\alpha\overline{R}\leq 0\), the decorated PE metric \((\widetilde{dist_{S}},\widetilde{r})\) is unique (up to scaling if \(\alpha\overline{R}\equiv 0\)). Theorem 1.3 is a generalization of Theorem 1.1. Specially, if \(\alpha=2\), then the cases **(2)** and **(3)** in Theorem 1.3 are reduced to Theorem 1.1. By the relationship of the combinatorial \(\alpha\)-curvature \(R_{\alpha}\) and the angle defect \(K\), the cases **(3)** and **(4)** in Theorem 1.3 are covered by Bobenko-Lutz's work [1]. In the following, we just prove the cases **(1)** and **(2)** of Theorem 1.3. **Remark 1.4**.: Since Bobenko-Lutz's discrete conformal theory of decorated PE metrics also applies to Luo's vertex scalings and thus generalizes Gu-Luo-Sun-Wu's results in [16] and Springborn's results in [24], Theorem 1.3 also generalizes the authors' results in [29]. As a corollary of Theorem 1.3, we have the following discrete uniformization theorem for the combinatorial \(\alpha\)-curvature \(R_{\alpha}\). **Corollary 1.5**.: Suppose \((S,V)\) is a marked surface with a decorated PE metric \((dist_{S},r)\) and \(\alpha\in\mathbb{R}\) is a constant. 
**(1):**: If \(\alpha\underline{\chi}(\underline{S})\leq 0\), there exists a unique discrete conformal equivalent decorated PE metric \((dist_{S},\widetilde{r})\) with constant combinatorial \(\alpha\)-curvature \(R_{\alpha}\) (up to scaling if \(\alpha\chi(S)=0\)). **(2):**: If \(\alpha<0\) and \(\chi(S)<0\), there exists a discrete conformal equivalent decorated PE metric \((\widetilde{dist_{S}},\widetilde{r})\) with negative constant combinatorial \(\alpha\)-curvature \(R_{\alpha}\). The paper is organized as follows. In Section 2, we briefly recall Bobenko-Lutz's discrete conformal theory for the decorated PE metrics on surfaces. Then we prove the global rigidity of decorated PE metrics with respect to the combinatorial \(\alpha\)-curvature on a marked surface. In Section 3, we first deform the combinatorial \(\alpha\)-curvature \(R_{\alpha}\) in (2) and give Theorem 3.1, which is equivalent to Theorem 1.3. Then we translate Theorem 3.1 into an optimization problem with constraints, i.e., Theorem 3.4. Using a classical result from calculus, i.e., Theorem 3.5, we translate Theorem 3.4 into Theorem 3.6. In the end, with the help of the asymptotical expression for the energy function \(\mathcal{E}\) in Lemma 3.8 obtained by the authors in [30], we prove Theorem 3.6. ### Acknowledgements The first author thanks Professor Feng Luo for his invitation to the workshop "Discrete and Computational Geometry, Shape Analysis, and Applications" taking place at Rutgers University, New Brunswick from May 19th to May 21st, 2023. The first author also thanks Carl O. R. Lutz for helpful communications during the workshop. ## 2. Rigidity of decorated PE metrics ### Discrete conformal equivalence and Bobenko-Lutz's discrete conformal theory Let \(\mathcal{T}=(V,E,F)\) be a triangulation of a marked surface \((S,V)\), where \(V,E,F\) represent the sets of vertices, edges and faces respectively. A triangulation \(\mathcal{T}\) for a PE surface \((S,V,dist_{S})\) is a geodesic triangulation if the edges are geodesics in the PE metric \(dist_{S}\). We use one index to denote a vertex (such as \(i\)), two indices to denote an edge (such as \(\{ij\}\)) and three indices to denote a face (such as \(\{ijk\}\)) in the triangulation \(\mathcal{T}\). The PE metric \(dist_{S}\) on a PE surface with a geodesic triangulation \(\mathcal{T}\) defines a length map \(l:E\rightarrow\mathbb{R}_{>0}\) such that \(l_{ij},l_{ik},l_{jk}\) satisfy the triangle inequalities for any triangle \(\{ijk\}\in F\). Conversely, given a function \(l:E\rightarrow\mathbb{R}_{>0}\) satisfying the triangle inequalities for any face \(\{ijk\}\in F\), one can construct a PE metric on a triangulated surface by isometrically gluing Euclidean triangles along edges in pairs. In the following, we use \(l:E\rightarrow\mathbb{R}_{>0}\) to denote a PE metric and use \((l,r)\) to denote a decorated PE metric on a triangulated surface \((S,V,\mathcal{T})\). **Definition 2.1** ([1], Proposition 2.2).: Let \(\mathcal{T}\) be a triangulation of a marked surface \((S,V)\). Two decorated PE metrics \((l,r)\) and \((\widetilde{l},\widetilde{r})\) on \((S,V,\mathcal{T})\) are discrete conformal equivalent if and only if there exists a discrete conformal factor \(u\in\mathbb{R}^{V}\) such that \[\widetilde{r}_{i}=e^{u_{i}}r_{i}, \tag{3}\] \[\widetilde{l}_{ij}^{2}=(e^{2u_{i}}-e^{u_{i}+u_{j}})r_{i}^{2}+(e^{2u_{j}}-e^{u_ {i}+u_{j}})r_{j}^{2}+e^{u_{i}+u_{j}}l_{ij}^{2} \tag{4}\] for any edge \(\{ij\}\in E\). 
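A short numerical sketch of the conformal change (3)-(4) on a single decorated edge is given below; it also checks the invariance of the inversive distance discussed in Remark 2.2 below. The radii, length, and conformal factors are arbitrary illustrative values.

```python
import math

def conformal_change(r_i, r_j, l_ij, u_i, u_j):
    """Apply (3) and (4) to one decorated edge {ij}."""
    r_i_new = math.exp(u_i) * r_i
    r_j_new = math.exp(u_j) * r_j
    l_sq = ((math.exp(2 * u_i) - math.exp(u_i + u_j)) * r_i ** 2
            + (math.exp(2 * u_j) - math.exp(u_i + u_j)) * r_j ** 2
            + math.exp(u_i + u_j) * l_ij ** 2)
    return r_i_new, r_j_new, math.sqrt(l_sq)

def inversive_distance(r_i, r_j, l_ij):
    return (l_ij ** 2 - r_i ** 2 - r_j ** 2) / (2 * r_i * r_j)

# Separated vertex-circles (inversive distance > 1) and an arbitrary conformal factor u.
r_i, r_j, l_ij = 0.4, 0.6, 2.0
u_i, u_j = 0.3, -0.5
r_i2, r_j2, l_ij2 = conformal_change(r_i, r_j, l_ij, u_i, u_j)
print(inversive_distance(r_i, r_j, l_ij))     # 7.25
print(inversive_distance(r_i2, r_j2, l_ij2))  # 7.25 again: the inversive distance is preserved
```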
**Remark 2.2**.: For any two circles \(C_{i}\) and \(C_{j}\) in the Euclidean plane, one can define the inversive distance \(I_{ij}=\frac{l_{ij}^{2}-r_{i}^{2}-r_{j}^{2}}{2r_{i}r_{j}}\), where \(l_{ij}\) is the distance of the centers of the two circles and \(r_{i}\), \(r_{j}\) are the radii of \(C_{i},C_{j}\) respectively. The inversive distance is invariant under Mobius transformations [6]. Denote the inversive distance of two vertex-circles in \((l,r)\) and \((\widetilde{l},\widetilde{r})\) as \(I\) and \(\widetilde{I}\) respectively. If \((l,r)\) and \((\widetilde{l},\widetilde{r})\) are discrete conformal equivalent in the sense of Definition 2.1, it is shown [1] that \(I=\widetilde{I}\). Since each pair of vertex-circles is required to be separated, it is easy to see that \(I>1\). Therefore, the discrete conformal equivalent decorated PE metrics on triangulated surfaces in Definition 2.1 can be taken as the separated inversive distance circle packing metrics introduced by Bowers-Stephenson [3]. Please refer to [4, 17, 22, 25, 26] for more properties of the inversive distance circle packing metrics on triangulated surfaces. For any decorated triangle \(\{ijk\}\), there is a unique circle \(C_{ijk}\) simultaneously orthogonal to the three vertex-circles at the vertices \(i,j,k\)[13]. This circle \(C_{ijk}\) is called as the face-circle of the decorated triangle \(\{ijk\}\). Denote \(\alpha_{ij}^{k}\) as the interior intersection angle of the face-circle \(C_{ijk}\) and the edge \(\{ij\}\). The edge \(\{ij\}\), shared by two adjacent decorated triangles \(\{ijk\}\) and \(\{ijl\}\), is called weighted Delaunay if \[\alpha_{ij}^{k}+\alpha_{ij}^{l}\leq\pi.\] The triangulation \(\mathcal{T}\) is called weighted Delaunay in the decorated PE metric \((dist_{S},r)\) if every edge in the triangulation is weighted Delaunay. Here we take the definition of weighted Delaunay triangulation from Bobenko-Lutz [1]. There are other equivalent definitions for the weighted Delaunay triangulation using the signed distance of the center of \(C_{ijk}\) to the edges. Please refer to [4, 11, 12, 13, 14] and others. Note that the combinatorial \(\alpha\)-curvature \(R_{\alpha}\) in (2) is independent of the geodesic triangulations of a decorated PE surface \((S,V,dist_{S},r)\). In general, the existence of decorated PE metrics with prescribed combinatorial \(\alpha\)-curvatures on triangulated surfaces can not be guaranteed if the triangulation is fixed. In the following, we work with a generalization of the discrete conformal equivalence in Definition 2.1, introduced by Bobenko-Lutz [1], which allows the triangulation of the marked surface to be changed under the weighted Delaunay condition. 
**Definition 2.3** ([1], Definition 4.11).: Two decorated PE metrics \((dist_{S},r)\) and \((\widetilde{dist}_{S},\widetilde{r})\) on the marked surface \((S,V)\) are discrete conformal equivalent if there is a sequence of triangulated decorated PE surfaces \((\mathcal{T}^{0},l^{0},r^{0}),...,(\mathcal{T}^{N},l^{N},r^{N})\) such that **(1):**: the decorated PE metric of \((\mathcal{T}^{0},l^{0},r^{0})\) is \((dist_{S},r)\) and the decorated PE metric of \((\mathcal{T}^{N},l^{N},r^{N})\) is \((\widetilde{dist}_{S},\widetilde{r})\), **(2):**: each \(\mathcal{T}^{n}\) is a weighted Delaunay triangulation of the decorated PE surface \((\mathcal{T}^{n},l^{n},r^{n})\), **(3):**: if \(\mathcal{T}^{n}=\mathcal{T}^{n+1}\), then there is a discrete conformal factor \(u\in\mathbb{R}^{V}\) such that \((\mathcal{T}^{n},l^{n},r^{n})\) and \((\mathcal{T}^{n+1},l^{n+1},r^{n+1})\) are related by (3) and (4), **(4):**: if \(\mathcal{T}^{n}\neq\mathcal{T}^{n+1}\), then \(\mathcal{T}^{n}\) and \(\mathcal{T}^{n+1}\) are two different weighted Delaunay triangulations of the same decorated PE surface. Definition 2.3 gives an equivalence relationship for decorated PE metrics on a marked surface. The equivalence class of a decorated PE metric \((dist_{S},r)\) on \((S,V)\) is also called as the discrete conformal class of \((dist_{S},r)\) and denoted by \(\mathcal{D}(dist_{S},r)\). **Lemma 2.4** ([1]).: The discrete conformal class \(\mathcal{D}(dist_{S},r)\) of a decorated PE metric \((dist_{S},r)\) on the marked surface \((S,V)\) is parameterized by \(\mathbb{R}^{V}=\{u:V\to\mathbb{R}\}\). For simplicity, for any \((\widetilde{dist}_{S},\widetilde{r})\in\mathcal{D}(dist_{S},r)\), we denote it by \((dist_{S}(u),r(u))\) for some \(u\in\mathbb{R}^{V}\). Set \[\mathcal{C}_{\mathcal{T}}(dist_{S},r)=\{u\in\mathbb{R}^{V}|\ \mathcal{T}\ \text{is a weighted Delaunay triangulation of}\ (S,V,dist_{S}(u),r(u))\}.\] **Lemma 2.5** ([1]).: The set \[J=\{\mathcal{T}|\mathcal{C}_{\mathcal{T}}(dist_{S},r)\ \text{has non-empty interior in}\ \mathbb{R}^{V}\}\] is a finite set, \(\mathbb{R}^{V}=\cup_{\mathcal{T}_{i}\in J}\mathcal{C}_{\mathcal{T}_{i}}(dist_ {S},r)\) and each \(\mathcal{C}_{\mathcal{T}_{i}}(dist_{S},r)\) is homeomorphic to a polyhedral cone (with its apex removed) and its interior is homeomorphic to \(\mathbb{R}^{V}\). ### The extended energy function There exist geometric relationships between the decorated triangles and \(3\)-dimensional generalized hyperbolic polyhedra. Specially, there is a generalized hyperbolic tetrahedra in \(\mathbb{H}^{3}\) with one ideal vertex and three hyper-ideal vertices corresponding to a decorated triangle \(\{ijk\}\). Denote \(\mathrm{Vol}(ijk)\) as the truncated volume of this generalized hyperbolic tetrahedra. The truncated volume \(\mathrm{Vol}(ijk)\) can be characterized by an explicit formula. Please refer to [1, 23] for more details. Set \[F_{ijk}(u_{i},u_{j},u_{k})= -2\mathrm{Vol}(ijk)+\theta^{i}_{jk}u_{i}+\theta^{j}_{ki}u_{j}+ \theta^{k}_{ij}u_{k}\] \[+(\frac{\pi}{2}-\alpha^{k}_{ij})\lambda_{ij}+(\frac{\pi}{2}- \alpha^{j}_{ki})\lambda_{ki}+(\frac{\pi}{2}-\alpha^{i}_{jk})\lambda_{jk},\] where \(\theta^{i}_{jk}\) is the inner angle of the decorated triangle \(\{ijk\}\) at the vertex \(i\) and \(\lambda_{ij}=\cosh^{-1}I_{ij}\). By the Schlafli formula, we have \[\nabla F_{ijk}=(\theta^{i}_{jk},\theta^{j}_{ki},\theta^{k}_{ij})\] and \[F_{ijk}((u_{i},u_{j},u_{k})+c(1,1,1))=F_{ijk}(u_{i},u_{j},u_{k})+c\pi\] for \(c\in\mathbb{R}\). 
On a decorated PE surface \((S,V,l,r)\) with a weighted Delaunay triangulation \(\mathcal{T}\), Bobenko-Lutz [1] defined the following function \[\mathcal{H}_{\mathcal{T}}(u)=\sum_{\{ijk\}\in F}F_{ijk}(u_{i},u_{j},u_{k})=-2 \sum_{\{ijk\}\in F}\mathrm{Vol}(ijk)+\sum_{i\in V}\theta_{i}u_{i}+\sum_{\{ij \}\in E}(\pi-\alpha_{ij})\lambda_{ij}, \tag{5}\] where \(\theta_{i}=\sum_{\{ijk\}\in F}\theta^{i}_{jk}\) and \(\alpha_{ij}=\alpha^{k}_{ij}+\alpha^{l}_{ij}\). It should be mentioned that the function \(\mathcal{H}_{\mathcal{T}}(u)\) in (5) differs from its original definition in [1] (Equation 4-9) by some constant. Then \[\mathcal{H}_{\mathcal{T}}(u+c\mathbf{1})=\mathcal{H}_{\mathcal{T}}(u)+c|F|\pi \tag{6}\] for \(c\in\mathbb{R}\). By the definition of \(\mathcal{H}_{\mathcal{T}}\), the following energy function \[\mathcal{E}_{\mathcal{T}}(u)=-\mathcal{H}_{\mathcal{T}}(u)+2\pi\sum_{i\in V}u _{i}\] is well-defined on \(\mathcal{C}_{\mathcal{T}}(dist_{S},r)\) with \(\nabla_{u_{i}}\mathcal{E}_{\mathcal{T}}=2\pi-\theta_{i}=K_{i}\). Moreover, \[\mathcal{E}_{\mathcal{T}}(u+c\mathbf{1})=\mathcal{E}_{\mathcal{T}}(u)+2c\pi \chi(S) \tag{7}\] for \(c\in\mathbb{R}\). **Theorem 2.6** ([1], Proposition 4.13).: For a discrete conformal factor \(u\in\mathbb{R}^{V}\), let \(\mathcal{T}\) be a weighted Delaunay triangulation of the decorated PE surface \((S,V,dist_{S}(u),r(u))\). The map \[\mathcal{H}:\ \mathbb{R}^{V}\to\mathbb{R},\ \ \ \ u\mapsto\mathcal{H}_{ \mathcal{T}}(u) \tag{8}\] is well-defined, concave, and twice continuously differentiable over \(\mathbb{R}^{V}\). Therefore, the function \(\mathcal{E}_{\mathcal{T}}(u)\) defined on \(\mathcal{C}_{\mathcal{T}}(dist_{S},r)\) can be extended to be \(\mathcal{E}(u)\) defined on \(\mathbb{R}^{V}\) by the following formula \[\mathcal{E}(u)=-\mathcal{H}(u)+2\pi\sum_{i\in V}u_{i}. \tag{9}\] ### Rigidity of decorated PE metrics A basic problem on the combinatorial \(\alpha\)-curvature is to understand the relationships between the decorated PE metrics and the combinatorial \(\alpha\)-curvatures. The following theorem shows the global rigidity of decorated PE metrics with respect to the combinatorial \(\alpha\)-curvature on a marked surface, which corresponds to the rigidity parts of Theorem 1.3. **Theorem 2.7**.: Suppose \((S,V)\) is a marked surface with a decorated PE metric \((dist_{S},r)\), \(\alpha\in\mathbb{R}\) is a constant and \(\overline{R}:V\to\mathbb{R}\) is a given function. **(1):**: If \(\alpha\overline{R}\equiv 0\), then there exists at most one discrete conformal factor \(u^{*}\in\mathbb{R}^{V}\) up to scaling such that the decorated PE metric \((dist_{S}(u^{*}),r(u^{*}))\) in the discrete conformal class \(\mathcal{D}(dist_{S},r)\) has the combinatorial \(\alpha\)-curvature \(\overline{R}\). **(2):**: If \(\alpha\overline{R}\leq 0\) and \(\alpha\overline{R}\not\equiv 0\), then there exists at most one discrete conformal factor \(u^{*}\in\mathbb{R}^{V}\) such that the decorated PE metric \((dist_{S}(u^{*}),r(u^{*}))\) in the discrete conformal class \(\mathcal{D}(dist_{S},r)\) has the combinatorial \(\alpha\)-curvature \(\overline{R}\). Proof.: By Theorem 2.6, the following function \[\mathbb{E}(u)=-\mathcal{H}(u)+\int_{u_{0}}^{u}\sum_{i\in V}(2\pi-\overline{R} _{i}r_{i}^{\alpha})du_{i} \tag{10}\] is well-defined and twice continuously differentiable over \(\mathbb{R}^{V}\), where \(r_{i}=e^{u_{i}}r_{i}^{0}\) and \(r^{0}\) is the initial data. 
By direct calculations, we have \[\nabla_{u_{i}}\mathbb{E}=-\sum_{\{ijk\}\in F}\theta_{jk}^{i}+2\pi-\overline{R} _{i}r_{i}^{\alpha}=K_{i}-\overline{R}_{i}r_{i}^{\alpha}.\] Therefore, for \(u^{*}\in\mathbb{R}^{V}\), the decorated PE metric in \((dist_{S}(u^{*}),r(u^{*}))\) has the combinatorial \(\alpha\)-curvature \(\overline{R}\) if and only if \(\nabla_{u_{i}}\mathbb{E}(u^{*})=0,\forall i\in V\). Moreover, \[\operatorname{Hess}_{u}\mathbb{E}=-\operatorname{Hess}_{u}\mathcal{H}-\alpha \left(\begin{array}{cccc}\overline{R}_{1}r_{1}^{\alpha}&&&\\ &\ddots&\\ &&\overline{R}_{|V|}r_{|V|}^{\alpha}\end{array}\right).\] The equality (6) and Theorem 2.6 imply that \(\operatorname{Hess}_{u}\mathcal{H}\leq 0\) with kernel \(\{c\mathbf{1}^{\mathrm{T}}\in\mathbb{R}^{V}|c\in\mathbb{R}\}\). If \(\alpha\overline{R}\equiv 0\), then \(\operatorname{Hess}_{u}\mathbb{E}\) is positive semi-definite with kernel \(\{c\mathbf{1}^{\mathrm{T}}\in\mathbb{R}^{V}|c\in\mathbb{R}\}\) and hence \(\mathbb{E}\) is convex on \(\mathbb{R}^{V}\) and strictly convex on \(\{\sum_{i\in V}u_{i}=0\}\). If \(\alpha\overline{R}\leq 0\) and \(\alpha\overline{R}\not\equiv 0\), then \(\operatorname{Hess}_{u}\mathbb{E}\) is positive definite and hence \(\mathbb{E}\) is strictly convex on \(\mathbb{R}^{V}\). The conclusion follows from the following result from calculus. **Lemma:** If \(f:\Omega\to\mathbb{R}\) is a \(C^{1}\)-smooth strictly convex function on an open convex set \(\Omega\subset\mathbb{R}^{n}\), then its gradient \(\nabla f:\Omega\to\mathbb{R}^{n}\) is injective. Q.E.D. **Remark 2.8**.: For a decorated PE surfaces \((S,V,l,r)\) with a fixed triangulation \(\mathcal{T}\), the global rigidity of the inversive distance circle packing metrics with respect to the combinatorial \(\alpha\)-curvature \(R_{\alpha}\) has been proved by Ge-Jiang [7] and Ge-Xu [10]. They extended the function \(\mathbb{E}(u)\) by extending the inner angles of a triangle by constants. This approach was introduced by Bobenko-Pinkall-Springborn [2] for Luo's vertex scalings and further developed by Luo [22] for Bowers-Stephenson's inversive distance circle packings. Here we take another approach introduced by Bobenko-Lutz [1] to extend the function \(\mathbb{E}(u)\), in which we change the triangulation of the marked surface under the weighted Delaunay condition. This approach was first introduced by Gu-Luo-Sun-Wu [16] and Gu-Guo-Luo-Sun-Wu [15] to establish the discrete uniformization theorem for Luo's vertex scalings on surfaces. The first approach can not ensure the triangles being non-degenerate, while the second approach can. ## 3. Existence of decorated PE metrics ### Variational principles with constraints In this subsection, to simplify the calculations, we deform the combinatorial \(\alpha\)-curvature \(R_{\alpha}\) in (2) and give Theorem 3.1 which is equivalent to Theorem 1.3. Then we translate Theorem 3.1 into an optimization problem with inequality constraints by variational principles, which involves the function \(\mathcal{E}(u)\) defined in (9). For an initial decorated PE metric \((l^{0},r^{0})\), the combinatorial \(\alpha\)-curvature is \(K_{i}^{0}/(r_{i}^{0})^{\alpha}\). Suppose a decorated PE metric \((l,r)\) is discrete conformal equivalent to \((l^{0},r^{0})\), then \(r_{i}=e^{u_{i}}r_{i}^{0}\) by (3). 
The combinatorial \(\alpha\)-curvature of the decorated PE metric \((l,r)\) can be written as \[R_{\alpha,i}=\frac{K_{i}}{r_{i}^{\alpha}}=\frac{K_{i}}{(r_{i}^{0})^{\alpha}e^ {\alpha u_{i}}}.\] For simplicity, set \[\mathcal{R}_{\alpha,i}=R_{\alpha,i}(r_{i}^{0})^{\alpha}.\] Then \[\mathcal{R}_{\alpha,i}=\frac{K_{i}}{e^{\alpha u_{i}}}. \tag{11}\] We also call \(\mathcal{R}_{\alpha}\) as the combinatorial \(\alpha\)-curvature. Note that \((r_{i}^{0})^{\alpha}>0\), then the signs of \(\mathcal{R}_{\alpha,i}\) and \(R_{\alpha,i}\) are the same for any \(i\in V\). Denote \(\overline{\mathcal{R}}\) as the prescribed combinatorial \(\alpha\)-curvature corresponding to \(\mathcal{R}_{\alpha}\). Then \(\overline{\mathcal{R}}_{i}=\overline{R}_{i}(r_{i}^{0})^{\alpha}\) and the signs of \(\overline{\mathcal{R}}_{i}\) and \(\overline{R}_{i}\) are the same. Hence, to prove Theorem 1.3, we just need to prove the following theorem. **Theorem 3.1**.: For any decorated PE metric \((dist_{S},r)\) on a marked surface \((S,V)\), there is a discrete conformal equivalent decorated PE metric \((\widetilde{dist_{S}},\widetilde{r})\) with combinatorial \(\alpha\)-curvature \(\overline{\mathcal{R}}\) if one of the following conditions is satisfied **(1):**: \(\chi(S)>0,\ \alpha<0,\ \overline{\mathcal{R}}>0\); **(2):**: \(\chi(S)<0,\ \alpha\neq 0,\ \overline{\mathcal{R}}\leq 0,\ \overline{\mathcal{R}} \not\equiv 0\). Since the angle defect \(K\) satisfies the following discrete Gauss-Bonnet formula ([5], Proposition 3.1) \[\sum_{i\in V}K_{i}=2\pi\chi(S), \tag{12}\] then the combinatorial \(\alpha\)-curvature \(\mathcal{R}_{\alpha}\) in (11) satisfies the following discrete Gauss-Bonnet formula \[\sum_{i\in V}\mathcal{R}_{i}e^{\alpha u_{i}}=2\pi\chi(S).\] Therefore, if \(\overline{\mathcal{R}}\in\mathbb{R}^{V}\) is the combinatorial \(\alpha\)-curvature of some decorated PE metric discrete conformal to \((l,r)\) on \((S,V)\), then \[\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i}}=2\pi\chi(S).\] Let \(\alpha\in\mathbb{R}\) be a non-zero constant. Set \[\mathcal{A}=\{u\in\mathbb{R}^{V}|0>\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{ \alpha u_{i}}\geq 2\pi\chi(S),\ \overline{\mathcal{R}}\leq 0,\ \overline{\mathcal{R}}\not\equiv 0\}, \tag{13}\] \[\mathcal{B}=\{u\in\mathbb{R}^{V}|0<\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{ \alpha u_{i}}\leq 2\pi\chi(S),\ \overline{\mathcal{R}}>0\}, \tag{14}\] \[\mathcal{C}=\{u\in\mathbb{R}^{V}|\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{ \alpha u_{i}}\leq 2\pi\chi(S)<0,\ \overline{\mathcal{R}}\leq 0,\ \overline{\mathcal{R}}\not \equiv 0\}. \tag{15}\] **Proposition 3.2**.: The sets \(\mathcal{A},\ \mathcal{B}\) and \(\mathcal{C}\) are unbounded closed subsets of \(\mathbb{R}^{V}\). Proof.: We only prove this proposition for the set \(\mathcal{A}\) and the proofs for the sets \(\mathcal{B}\) and \(\mathcal{C}\) are similar. **(I):** To prove the closeness of the set \(\mathcal{A}\) in \(\mathbb{R}^{V}\), we just need to show \(\mathcal{A}=\overline{\mathcal{A}}\), where \(\overline{\mathcal{A}}\) represents the closure of the set \(\mathcal{A}\) in \(\mathbb{R}^{V}\). Suppose \(\{u_{i,n}\}_{n\in\mathbb{N}}\) is a sequence in \(\mathcal{A}\) such that \(\lim_{n\to+\infty}u_{i,n}=\lambda_{i}\in\mathbb{R},\forall i\in V\). It is direct to see that \(\lim_{n\to+\infty}\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i,n}}= \sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha\lambda_{i}}\geq 2\pi\chi(S)\). 
Note that the definition of \(\mathcal{A}\) in (13) shows \(\overline{\mathcal{R}}\leq 0,\ \overline{\mathcal{R}}\not\equiv 0\). This implies that there exists \(i_{0}\in V\) such that \(\overline{\mathcal{R}}_{i_{0}}<0\). Then \[\lim_{n\to+\infty}\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i,n}}= \sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha\lambda_{i}}\leq\overline{ \mathcal{R}}_{i_{0}}e^{\alpha\lambda_{i_{0}}}<0.\] This implies \(\lambda=(\lambda_{1},...,\lambda_{|V|})\in\mathcal{A}\) and hence \(\mathcal{A}=\overline{\mathcal{A}}\). Therefore, the set \(\mathcal{A}\) is a closed subset of \(\mathbb{R}^{V}\). **(II):** If \(u\in\mathcal{A}\), for any \(c\in\mathbb{R}\), we have \[\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha(u_{i}+c)}=e^{\alpha c}\sum_{ i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i}}<0.\] If \(\alpha<0\), \(u\in\mathcal{A}\), then \[\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha(u_{i}+c)}=e^{\alpha c}\sum_{ i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i}}\geq 2\pi\chi(S)\] is equivalent to \[c\geq\frac{1}{\alpha}\log\frac{2\pi\chi(S)}{\sum_{i\in V}\overline{\mathcal{ R}}_{i}e^{\alpha u_{i}}}.\] This implies that the ray \(\{u+c\mathbf{1}|c\geq\frac{1}{\alpha}\log\frac{2\pi\chi(S)}{\sum_{i\in V} \overline{\mathcal{R}}_{i}e^{\alpha u_{i}}},\ \alpha<0\}\) stays in the set \(\mathcal{A}\). Hence, the set \(\mathcal{A}\) is unbounded if \(\alpha<0\). If \(\alpha>0\), for \(u\in\mathcal{A}\), we have \[\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha(u_{i}+c)}=e^{\alpha c}\sum_{ i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i}}\geq 2\pi\chi(S)\] is equivalent to \[c\leq\frac{1}{\alpha}\log\frac{2\pi\chi(S)}{\sum_{i\in V}\overline{\mathcal{R}}_{ i}e^{\alpha u_{i}}}.\] This implies that the ray \(\{u+c\mathbf{1}|c\leq\frac{1}{\alpha}\log\frac{2\pi\chi(S)}{\sum_{i\in V} \overline{\mathcal{R}}_{i}e^{\alpha u_{i}}}\}\) stays in the set \(\mathcal{A}\). Hence, the set \(\mathcal{A}\) is unbounded if \(\alpha>0\). This completes the proof. Q.E.D. According to Proposition 3.2, we have following result. **Lemma 3.3**.: If one of the following three conditions is satisfied **(1):**: \(\alpha>0\) and the energy function \(\mathcal{E}\) attains a minimum in the set \(\mathcal{A}\), **(2):**: \(\alpha<0\) and the energy function \(\mathcal{E}\) attains a minimum in the set \(\mathcal{B}\), **(3):**: \(\alpha<0\) and the energy function \(\mathcal{E}\) attains a minimum in the set \(\mathcal{C}\), then the minimum value point of \(\mathcal{E}\) lies in the set \(\{u\in\mathbb{R}^{V}|\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i}}= 2\pi\chi(S)\}\). Proof.: Suppose \(\alpha>0\) and the function \(\mathcal{E}\) attains a minimum at \(u\in\mathcal{A}\). The definition of \(\mathcal{A}\) in (13) implies \(\chi(S)<0\). Set \[c_{0}=\frac{1}{\alpha}\log\frac{2\pi\chi(S)}{\sum_{i\in V}\overline{\mathcal{ R}}_{i}e^{\alpha u_{i}}},\] then \(c_{0}\geq 0\). By the proof of Proposition 3.2, \(u+c_{0}\mathbf{1}\in\mathcal{A}\). Therefore, by the additive property of the function \(\mathcal{E}\) in (7), we have \[\mathcal{E}(u)\leq\mathcal{E}(u+c_{0}\mathbf{1})=\mathcal{E}(u)+2\pi c_{0} \chi(S),\] which implies \(c_{0}\leq 0\) by \(\chi(S)<0\). Hence \(c_{0}=0\) and \(\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i}}=2\pi\chi(S)\). This proves the case **(1)**. The proofs for the cases **(2)** and **(3)** are similar, we omit the details here. Q.E.D. 
By Lemma 3.3, we translate Theorem 3.1 into the following theorem, which is a non-convex optimization problem with inequality constraints. **Theorem 3.4**.: Let \((dist_{S},r)\) be a decorated PE metric on a marked surface \((S,V)\) with \(\chi(S)\neq 0\). Suppose \(\alpha\in\mathbb{R}\) is a non-zero constant and \(\overline{\mathcal{R}}\) is a given function defined on \(V\). **(1):**: If \(\overline{\mathcal{R}}\leq 0\), \(\overline{\mathcal{R}}\not\equiv 0\), \(\alpha>0\) and the energy function \(\mathcal{E}\) attains a minimum in \(\mathcal{A}\), then there exists a decorated PE metric in the discrete conformal class \(\mathcal{D}(dist_{S},r)\) with combinatorial \(\alpha\)-curvature \(\overline{\mathcal{R}}\); **(2):**: If \(\overline{\mathcal{R}}>0\), \(\alpha<0\) and the energy function \(\mathcal{E}\) attains a minimum in \(\mathcal{B}\), then there exists a decorated PE metric in the discrete conformal class \(\mathcal{D}(dist_{S},r)\) with combinatorial \(\alpha\)-curvature \(\overline{\mathcal{R}}\). Proof.: Lemma 3.3 shows that if \(u\in\mathbb{R}^{V}\) is a minimum of the energy function \(\mathcal{E}\) defined on one of these sets, then \(\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i}}=2\pi\chi(S)\). The conclusion follows from the following claim. **Claim :** Up to scaling, the decorated PE metrics with combinatorial \(\alpha\)-curvature \(\overline{\mathcal{R}}\) in the discrete conformal class are in one-to-one correspondence with the critical points of the function \(\mathcal{E}\) under the constraint \(\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i}}=2\pi\chi(S)\). We use the method of Lagrange multipliers to prove this claim. Set \[G(u,\mu)=\mathcal{E}(u)-\mu\left(\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{ \alpha u_{i}}-2\pi\chi(S)\right),\] where \(\mu\in\mathbb{R}\) is a Lagrange multiplier. If \(u\) is a critical point of the function \(\mathcal{E}\) under the constraint \(\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i}}=2\pi\chi(S)\), then by the fact \(\nabla_{u_{i}}\mathcal{E}=K_{i}\), we have \[0=\frac{\partial G(u,\mu)}{\partial u_{i}}=K_{i}-\mu\alpha\overline{\mathcal{R }}_{i}e^{\alpha u_{i}},\] which implies \[\mathcal{R}_{\alpha,i}=\frac{K_{i}}{e^{\alpha u_{i}}}=\mu\alpha\overline{ \mathcal{R}}_{i}.\] By the discrete Gauss-Bonnet formula (12), the Lagrange multiplier \(\mu\) satisfies \[\mu=\frac{2\pi\chi(S)}{\alpha\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u _{i}}}=\frac{1}{\alpha}\] under the constraint \(\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i}}=2\pi\chi(S)\). This implies the combinatorial \(\alpha\)-curvature \[\mathcal{R}_{\alpha,i}=\mu\alpha\overline{\mathcal{R}}_{i}=\frac{2\pi\chi(S) }{\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i}}}\overline{\mathcal{ R}}_{i}=\overline{\mathcal{R}}_{i}\] under the constraint \(\sum_{i\in V}\overline{\mathcal{R}}_{i}e^{\alpha u_{i}}=2\pi\chi(S)\). Q.E.D. ### Reduction to Theorem 3.6 By Theorem 3.4, we just need to prove that the function \(\mathcal{E}(u)\) attains the minimum in the sets \(\mathcal{A},\ \mathcal{B}\) and \(\mathcal{C}\) respectively. Recall the following classical result from calculus. **Theorem 3.5**.: Let \(\Omega\subseteq\mathbb{R}^{m}\) be a closed set and \(f:\Omega\to\mathbb{R}\) be a continuous function. If every unbounded sequence \(\{u_{n}\}_{n\in\mathbb{N}}\) in \(\Omega\) has a subsequence \(\{x_{n_{k}}\}_{k\in\mathbb{N}}\) such that \(\lim_{k\to+\infty}f(x_{n_{k}})=+\infty\), then \(f\) attains a minimum in \(\Omega\). 
One can refer to [20] (Section 4.1) for a proof of Theorem 3.5. Most of the conditions in Theorem 3.5 are already satisfied: the sets \(\mathcal{A},\ \mathcal{B}\) and \(\mathcal{C}\) are closed subsets of \(\mathbb{R}^{V}\) by Proposition 3.2, and the energy function \(\mathcal{E}\) is continuous. To prove Theorem 3.1, we just need to prove the following theorem. **Theorem 3.6**.: Suppose \((S,V)\) is a marked surface with a decorated PE metric \((dist_{S},r)\), \(\alpha\in\mathbb{R}\) is a constant and \(\overline{\mathcal{R}}\) is a given function defined on \(V\). If one of the following three conditions is satisfied **(1):**: \(\alpha>0\) and \(\{u_{n}\}_{n\in\mathbb{N}}\) is an unbounded sequence in \(\mathcal{A}\), **(2):**: \(\alpha<0\) and \(\{u_{n}\}_{n\in\mathbb{N}}\) is an unbounded sequence in \(\mathcal{B}\), **(3):**: \(\alpha<0\) and \(\{u_{n}\}_{n\in\mathbb{N}}\) is an unbounded sequence in \(\mathcal{C}\), then there exists a subsequence \(\{u_{n_{k}}\}_{k\in\mathbb{N}}\) of \(\{u_{n}\}_{n\in\mathbb{N}}\) such that \(\lim_{k\to+\infty}\mathcal{E}(u_{n_{k}})=+\infty\). ### Behaviour of sequences of conformal factors Let \(\{u_{n}\}_{n\in\mathbb{N}}\) be an unbounded sequence in \(\mathbb{R}^{V}\). Denote its coordinate sequence at \(j\in V\) by \(\{u_{j,n}\}_{n\in\mathbb{N}}\). Motivated by [21], we call a sequence \(\{u_{n}\}_{n\in\mathbb{N}}\) with the following properties a "good" sequence. **(1):**: It lies in one cell \(\mathcal{C}_{\mathcal{T}}(dist_{S},r)\) of \(\mathbb{R}^{V}\); **(2):**: There exists a vertex \(i^{*}\in V\) such that \(u_{i^{*},n}\leq u_{j,n}\) for all \(j\in V\) and \(n\in\mathbb{N}\); **(3):**: Each coordinate sequence \(\{u_{j,n}\}_{n\in\mathbb{N}}\) either converges, diverges properly to \(+\infty\), or diverges properly to \(-\infty\); **(4):**: For any \(j\in V\), the sequence \(\{u_{j,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) either converges or diverges properly to \(+\infty\). By Lemma 2.5, every sequence of discrete conformal factors in \(\mathbb{R}^{V}\) possesses a "good" subsequence. Hence, a "good" sequence can be chosen without loss of generality. To prove Theorem 3.6, we further need the following two results obtained by the authors in [30]. **Lemma 3.7** ([30], Corollary 3.6).: For a discrete conformal factor \(u\in\mathbb{R}^{V}\), let \(\mathcal{T}\) be a weighted Delaunay triangulation of the decorated PE surface \((S,V,dist_{S}(u),r(u))\). For any decorated triangle \(\{ijk\}\in F\) in \(\mathcal{T}\), at least two of the three sequences \(\{u_{i,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\), \(\{u_{j,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\), \(\{u_{k,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) converge. **Lemma 3.8** ([30], Lemma 3.12).: There exists a convergent sequence \(\{D_{n}\}_{n\in\mathbb{N}}\) such that the function \(\mathcal{E}\) satisfies \[\mathcal{E}(u_{n})=D_{n}+2\pi\left(u_{i^{*},n}\chi(S)+\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right).\] **Proof of Theorem 3.6:** Let \(\{u_{n}\}_{n\in\mathbb{N}}\) be an unbounded "good" sequence. We just need to prove that \(\lim_{n\to+\infty}\mathcal{E}(u_{n})=+\infty\). **(1):** Let \(\alpha>0\) and \(\{u_{n}\}_{n\in\mathbb{N}}\) be an unbounded sequence in \(\mathcal{A}\). The definition of \(\mathcal{A}\) in (13) implies \(\chi(S)<0\), \(\overline{\mathcal{R}}\leq 0\) and \(\overline{\mathcal{R}}\not\equiv 0\). 
Since the sequence \(\{u_{n}\}_{n\in\mathbb{N}}\) lies in \(\mathcal{A}\), we have \[0>\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}=e^{- \alpha u_{i^{*},n}}\cdot\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha u_{j,n}}\geq 2\pi\chi(S)e^{-\alpha u_{i^{*},n}}. \tag{16}\] By the definition of "good" sequence, the sequence \(\left\{\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right\}_{n\in\mathbb{N}}\) converges to a finite positive number or diverges properly to \(+\infty\) If \(\left\{\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right\}_{n\in\mathbb{N}}\) converges to a finite positive number, then the sequence \(\{u_{j,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) converges for any \(j\in V\). This implies \(\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to a finite negative number by (16). Then by \(\chi(S)<0\), we have \[-\alpha u_{i^{*},n}\geq\ln\frac{\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{ \alpha(u_{j,n}-u_{i^{*},n})}}{2\pi\chi(S)}.\] Hence \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) is bounded from above by \(\alpha>0\). This implies \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) converges to a finite number or diverges properly to \(-\infty\). If \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) converges to a finite number, then by \(\{u_{j,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) converges for any \(j\in V\), we have \(\{u_{j,n}\}_{n\in\mathbb{N}}\) is bounded for any \(j\in V\). This contradicts the assumption that \(\{u_{n}\}_{n\in\mathbb{N}}\) is unbounded. Therefore, the sequence \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) diverges properly to \(-\infty\). Combining this with \(\chi(S)<0\) and Lemma 3.8, we have \(\lim_{n\to+\infty}\mathcal{E}(u_{n})=+\infty\). If \(\left\{\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right\}_{n\in\mathbb{N}}\) diverges properly to \(+\infty\), then there exists at least one vertex \(j\in V\) such that the sequence \(\{u_{j,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) diverges properly to \(+\infty\). By Lemma 3.7, for any vertex \(k\sim j\), the sequence \(\{u_{k,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) converges. Since \(\alpha>0\), then \(e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to a finite positive number or diverges properly to \(+\infty\) and for at least one vertex \(j\in V\) the term \(e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to a finite positive number. Since \(\overline{\mathcal{R}}\leq 0\) and \(\overline{\mathcal{R}}\not\equiv 0\), then \(\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to a finite negative number or diverges properly to \(-\infty\). \((i)\)**:**: Suppose \(\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to a finite negative number. Similar arguments imply \(u_{i^{*},n}\) is bounded from above, then \(u_{i^{*},n}\chi(S)\) is bounded from below by \(\chi(S)<0\) Combining with the assumption that \(\left\{\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right\}_{n\in\mathbb{N}}\) diverges properly to \(+\infty\), we have \(\lim_{n\to+\infty}\mathcal{E}(u_{n})=+\infty\) by Lemma 3.8. * Suppose \(\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}\) diverges properly to \(-\infty\). Then \(2\pi\chi(S)e^{-\alpha u_{i^{*},n}}\) diverges properly to \(-\infty\) by (16). Since \(\chi(S)<0\), then \(e^{-\alpha u_{i^{*},n}}\) diverges properly to \(+\infty\). By \(\alpha>0\), then \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) diverges properly to \(-\infty\). Hence \(u_{i^{*},n}\chi(S)\) diverges properly to \(+\infty\) by \(\chi(S)<0\). 
Then \(\lim_{n\to+\infty}\mathcal{E}(u_{n})=+\infty\) by Lemma 3.8. **(2):** Let \(\alpha<0\) and \(\{u_{n}\}_{n\in\mathbb{N}}\) be an unbounded sequence in \(\mathcal{B}\). The definition of \(\mathcal{B}\) in (14) implies \(\chi(S)>0\) and \(\overline{\mathcal{R}}>0\). Since the sequence \(\{u_{n}\}_{n\in\mathbb{N}}\) lies in \(\mathcal{B}\), we have \[0<\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}=e^{- \alpha u_{i^{*},n}}\cdot\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha u_{j,n}}\leq 2\pi\chi(S)e^{-\alpha u_{i^{*},n}}. \tag{17}\] If \(\left\{\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right\}_{n\in\mathbb{N}}\) converges, then the sequence \(\{u_{j,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) converges for any \(j\in V\). This implies \(\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to a finite positive number by (17). Since \(\chi(S)>0\), then the equation (17) implies \[-\alpha u_{i^{*},n}\geq\ln\frac{\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{ \alpha(u_{j,n}-u_{i^{*},n})}}{2\pi\chi(S)}.\] By \(\alpha<0\), then \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) is bounded from below. This implies \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) converges to a finite number or diverges properly to \(+\infty\). Combining this with \(\{u_{n}\}_{n\in\mathbb{N}}\) is unbounded and \(\{u_{j,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) converges for all \(j\in V\), we have the sequence \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) diverges properly to \(+\infty\). By \(\chi(S)>0\) and Lemma 3.8, we have \(\lim_{n\to+\infty}\mathcal{E}(u_{n})=+\infty\). If the sequence \(\left\{\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right\}_{n\in\mathbb{N}}\) diverges properly to \(+\infty\), then there exists at least one vertex \(j\in V\) such that the sequence \(\{u_{j,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) diverges properly to \(+\infty\). By Lemma 3.7, for any vertex \(k\sim j\), the sequence \((u_{k,n}-u_{i^{*},n})_{n\in\mathbb{N}}\) converges. Therefore, \(e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to zero or a finite positive number and for at least one vertex \(j\in V\) the term \(e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to a finite positive number. Since \(\alpha<0\) and \(\overline{\mathcal{R}}>0\), then \(\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to a finite positive number. This implies \(2\pi\chi(S)e^{-\alpha u_{i^{*},n}}\) has a positive lower bound by (17). By \(\alpha<0\) and \(\chi(S)>0\), then \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) is bounded from below. Then \(u_{i^{*},n}\chi(S)\) is bounded from below. Combining this with \(\left\{\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right\}_{n\in\mathbb{N}}\) diverges properly to \(+\infty\), we have \(\lim_{n\to+\infty}\mathcal{E}(u_{n})=+\infty\) by Lemma 3.8. **(3):** Let \(\alpha<0\) and \(\{u_{n}\}_{n\in\mathbb{N}}\) be an unbounded sequence in \(\mathcal{C}\). The definition of \(\mathcal{C}\) in (15) implies \(\chi(S)<0\) and \(\overline{\mathcal{R}}\leq 0\) and \(\overline{\mathcal{R}}\not\equiv 0\). Since the sequence \(\{u_{n}\}_{n\in\mathbb{N}}\) lies in \(\mathcal{C}\), we have \[\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}=e^{- \alpha u_{i^{*},n}}\cdot\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha u_{ j,n}}\leq 2\pi\chi(S)e^{-\alpha u_{i^{*},n}}<0. \tag{18}\] If \(\left\{\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right\}_{n\in\mathbb{N}}\) converges, then the sequence \(\{u_{j,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) converges for all \(j\in V\). 
This implies that \(\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to a finite negative number by (18). Since \(\chi(S)<0\), then the equation (18) implies \[-\alpha u_{i^{*},n}\leq\ln\frac{\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{ \alpha(u_{j,n}-u_{i^{*},n})}}{2\pi\chi(S)}.\] By \(\alpha<0\), we have \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) is bounded from above. Combining this with \(\{u_{n}\}_{n\in\mathbb{N}}\) is unbounded and \(\{u_{j,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) converges for all \(j\in V\), the sequence \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) diverges properly to \(-\infty\). Combining this with \(\chi(S)<0\) and Lemma 3.8, we have \(\lim_{n\to+\infty}\mathcal{E}(u_{n})=+\infty\). If \(\left\{\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right\}_{n\in\mathbb{N}}\) diverges properly to \(+\infty\), then there exists at least one vertex \(j\in V\) such that the sequence \(\{u_{j,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) diverges properly to \(+\infty\). By Lemma 3.7, for any vertex \(k\sim j\), the sequence \(\{u_{k,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) converges. Since \(\alpha<0\), then \(e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to zero or a finite positive number and for at least one vertex \(j\in V\) the term \(e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to a finite positive number. Note that \(\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}<0\) by (18). Therefore, \(\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to zero or a finite negative number. \((i)\)**:**: Suppose \(\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to zero. Then \(2\pi\chi(S)e^{-\alpha u_{i^{*},n}}\) converges to zero by (18). Since \(\alpha<0\) and \(\chi(S)<0\), then \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) diverges properly to \(-\infty\). Combining this with \(\chi(S)<0\) and Lemma 3.8, we have \(\lim_{n\to+\infty}\mathcal{E}(u_{n})=+\infty\). \((ii)\)**:**: Suppose \(\sum_{j\in V}\overline{\mathcal{R}}_{j}e^{\alpha(u_{j,n}-u_{i^{*},n})}\) converges to a finite negative number. Then \(2\pi\chi(S)e^{-\alpha u_{i^{*},n}}\) has a negative lower bound by (18). By \(\alpha<0\) and \(\chi(S)<0\), then \(u_{i^{*},n}\) is bounded from above. Combining this with \(\chi(S)<0\) and \(\left\{\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right\}_{n\in\mathbb{N}}\) diverges properly to \(+\infty\), we have \(\lim_{n\to+\infty}\mathcal{E}(u_{n})=+\infty\) by Lemma 3.8. Q.E.D.
2309.14295
Unwieldy Object Delivery with Nonholonomic Mobile Base: A Stable Pushing Approach
This paper addresses the problem of pushing manipulation with nonholonomic mobile robots. Pushing is a fundamental skill that enables robots to move unwieldy objects that cannot be grasped. We propose a stable pushing method that maintains stiff contact between the robot and the object to avoid consuming repositioning actions. We prove that a line contact, rather than a single point contact, is necessary for nonholonomic robots to achieve stable pushing. We also show that the stable pushing constraint and the nonholonomic constraint of the robot can be simplified as a concise linear motion constraint. Then the pushing planning problem can be formulated as a constrained optimization problem using nonlinear model predictive control (NMPC). According to the experiments, our NMPC-based planner outperforms a reactive pushing strategy in terms of efficiency, reducing the robot's traveled distance by 23.8\% and time by 77.4\%. Furthermore, our method requires four fewer hyperparameters and decision variables than the Linear Time-Varying (LTV) MPC approach, making it easier to implement. Real-world experiments are carried out to validate the proposed method with two differential-drive robots, Husky and Boxer, under different friction conditions.
Yujie Tang, Hai Zhu, Susan Potters, Martijn Wisse, Wei Pan
2023-09-25T17:10:16Z
http://arxiv.org/abs/2309.14295v1
# Unwieldy Object Delivery with Nonholonomic Mobile Base: ###### Abstract This paper addresses the problem of pushing manipulation with nonholonomic mobile robots. Pushing is a fundamental skill that enables robots to move unwieldy objects that cannot be grasped. We propose a stable pushing method that maintains stiff contact between the robot and the object to avoid consuming repositioning actions. We prove that a line contact, rather than a single point contact, is necessary for nonholonomic robots to achieve stable pushing. We also show that the stable pushing constraint and the nonholonomic constraint of the robot can be simplified as a concise linear motion constraint. Then the pushing planning problem can be formulated as a constrained optimization problem using nonlinear model predictive control (NMPC). According to the experiments, our NMPC-based planner outperforms a reactive pushing strategy in terms of efficiency, reducing the robot's traveled distance by 23.8% and time by 77.4%. Furthermore, our method requires four fewer hyperparameters and decision variables than the Linear Time-Varying (LTV) MPC approach, making it easier to implement. Real-world experiments are carried out to validate the proposed method with two differential-drive robots, Husky and Boxer, under different friction conditions. ## I Introduction With mobile robots increasingly being used, there are various scenarios in which the robots are expected to perform additional delivery tasks while maneuvering, for example, a robot conveying a package in a warehouse. In this regard, mobile robots equipped with robot arms have become progressively popular. However, the delivered object may be sometimes unwieldy, either too heavy or too large, for the robot arm to grasp. In this case, one option is to manipulate the object by pushing it with the robot arm [1]. Alternatively, the robot can push the object, as shown in Fig. 1. Without a robot arm, pushing with the robot expands its manipulation repertoire, making it not just a mobile base. Moreover, it reduces the cost, space, and payload by eliminating the robot arm [2]. Research on pushing with mobile robots is still limited, though pushing with robot arms has been extensively studied [3, 4, 5]. Mobile robots have nonholonomic constraints that restrict their ability to freely reach various planned contact points. As a result, the pushed object is prone to sliding away, requiring time-consuming and effort-consuming repositioning actions to restart pushing. To address this challenge, [6] proposed stable pushing, which involves maintaining a stiff robot-object contact to prevent frequent repositions. This approach can reduce the risk of losing control over the object resulting in improved efficiency. As concluded in [7], stable pushing with a single-point contact can be reducible to the Dubins car problem, where the sticking contact constraint is translated to bounded curvatures of the object's trajectory, represented as a motion cone for the object. However, we extend this conclusion by proving that stable pushing is not achievable for a differential-drive mobile robot pushing with a single-point contact, due to the limited friction cone and the nonholonomic constraint of the robot. It can not provide enough friction force to maintain a stiff robot-object contact. As a follow-up study to [7], we introduce a line contact to make stable pushing possible where a larger friction cone can be provided. 
Based on it, we prove that the stable pushing constraint and the robot's nonholonomic constraint can be combined into a linear motion constraint on the robot's control input, which greatly simplifies the pushing planning problem compared to [8], as stable pushing can be guaranteed implicitly by the control constraint. We formulate the goal-conditioned stable pushing problem as a constrained optimization problem by employing Nonlinear Model Predictive Control (NMPC). Our NMPC planner with the derived motion constraint guarantees that the object's motion stays within the motion cone for stable pushing and that the physical limitations of the robot are met. The main contributions of this paper can be summarized as follows: * We first propose a stable pushing approach for nonholonomic mobile robots that maintains a stiff robot-object contact so that the need for frequent repositioning actions can be minimized. * We then derive a concise linear motion constraint to simplify the stable pushing constraint in [8] and develop an algorithm that is easier to implement with commercial solvers. * Lastly, we evaluate the proposed method through real-world experiments using wheeled mobile robots (Clearpath Husky and Boxer) that showed significant reductions in traveled distance and time. Fig. 1: The wheeled mobile robots (Clearpath Husky and Boxer) push a paper box to a goal location and to track a reference path, respectively. Transparency of the robots and box indicates their movement. ## II Related Work In the class of non-prehensile manipulation, pushing has received the most attention for its high flexibility and efficiency in completing a task [9, 10, 11]. Early research on mobile robot pushing involved using compliance to push the object along the environment boundaries [11]. Instead of finding feasible paths in free space, compliance pushing simplifies the problem and provides additional options for finding a pushing path. However, the method is limited to disk-shaped pushers and objects, and can only be applied in environments with smooth boundaries, which are rare in the real world. In order to achieve practical mobile robot pushing, a reactive pushing controller is proposed in [12], where the basic idea is to keep the robot, the object, and the goal in a line so as to push the object toward the goal. Nevertheless, the method is limited to pushing small-sized objects with circular or point-sized robots such that it is easy to reposition around the object to change the pushing direction. Instead of pushing reactively, [13] presents a rapidly-exploring random tree (RRT) based planner that uses past pushing experiences to construct achievable and collision-free pushing plans. However, both [12] and [13] assume the use of omnidirectional mobile robots, which can freely move around the object to achieve the planned pushing actions. For widely-used differential drive robots, limited research has been conducted, as the nonholonomic constraint hinders their ability to smoothly push around the object, making pushing planning more complex. In addition to control and planning, a significant challenge in mobile robot pushing is the uncertainty about the object's pose after each action [14]. The methods discussed above rely on reactive actions taken after observing the resulting motion of the pushed object. The robot pusher and the object strive to maintain an equilibrium configuration to continue moving together, resembling a "catching" action during navigation [6]. 
Thus, the crucial aspect of designing a push/navigation controller is ensuring the stability of this "catching" action. The concept of stable pushing, which establishes a predictable stiff contact between the robot and the object, was proposed based on the mechanics of planar sliding in [15]. This idea has been widely used in the field of pushing manipulation, as demonstrated in [16, 17]. In this paper, we also adopt the concept of stable pushing and propose a method that enables a differential-drive robot to push an object without losing contact. The most related methodology is proposed in [8] where a Linear Time-Varying (LTV) MPC is used for mobile robots to push an object along a given path, where stable pushing is achieved by optimizing for both the pushing force and the robot control inputs, which explicitly imposes the friction cone constraints. However, we found that it is computationally expensive to solve this optimization problem due to the additional decision variables and constraints. Furthermore, it is not even solvable with commercial solvers such as ACADOS [18]. To address it, a reference trajectory and supplementary linearization are essential components in the solution process. In contrast, our proposed method implicitly constrains the stiff robot-object contact by deriving a concise motion constraint for the robot control input, making it easier to implement. The validation of the proposed method is also shown in both simulation and real-world experiments. ## III Preliminaries Throughout this paper, scalars are denoted by italic lowercase letters, e.g., \(x\), vectors by bold lowercase, e.g., \(\mathbf{x}\), matrices by plain uppercase, e.g., \(A\), and sets by calligraphic uppercase, e.g., \(\mathcal{C}\). The superscript \(\mathbf{x}^{\top}\) or \(A^{\top}\) denotes the transpose of a vector \(\mathbf{x}\) or a matrix \(A\). Denote by \(\{\mathcal{W}\}\), \(\{\mathcal{R}\}\), and \(\{\mathcal{O}\}\), the global world frame, the robot body frame, and the object body frame, respectively. ### _Robot dynamics model_ Consider a nonholonomic differential-drive robot. Let \(\mathbf{x}_{\text{r}}=[x_{\text{r}},y_{\text{r}},\theta_{\text{r}},v_{\text{r }},\omega_{\text{r}}]^{\top}\in\mathbb{R}^{5}\) denote the robot state vector, where \(\mathbf{p}_{\text{r}}=[x_{\text{r}},y_{\text{r}}]^{\top}\) represents the robot position in the world frame \(\{\mathcal{W}\}\), \(\theta_{\text{r}}\) its orientation and \(v_{\text{r}}\) and \(\omega_{\text{r}}\) its linear and angular velocities referring to the world frame, as shown in Fig. 4. Denote by \(\mathbf{u}_{\text{r}}=[a_{\text{r}},\xi_{\text{r}}]^{\top}\in\mathbb{R}^{2}\) the robot's control input vector, in which \(a_{\text{r}}\) and \(\xi_{\text{r}}\) are its linear and angular accelerations, respectively. The robot dynamics are described by the following nonlinear differential equations [19]: \[\begin{bmatrix}\dot{x}_{\text{r}}\\ \dot{y}_{\text{r}}\\ \dot{\theta}_{\text{r}}\\ \dot{v}_{\text{r}}\\ \dot{\omega}_{\text{r}}\end{bmatrix}=\begin{bmatrix}v_{\text{r}}\cos\theta_{ \text{r}}\\ v_{\text{r}}\sin\theta_{\text{r}}\\ \omega_{\text{r}}\\ 0\\ 0\end{bmatrix}+\begin{bmatrix}0\\ 0\\ 0\\ a_{\text{r}}\\ \xi_{\text{r}}\end{bmatrix}, \tag{1}\] which can further be written in a nonlinear discrete form \(\mathbf{x}_{\text{r}}^{t+1}=\mathbf{f}_{\text{r}}(\mathbf{x}_{\text{r}}^{t}, \mathbf{u}_{\text{r}}^{t})\), where \(t\in\mathbb{N}\) denotes the time step. 
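As an illustration of how the discrete form \(\mathbf{x}_{\text{r}}^{t+1}=\mathbf{f}_{\text{r}}(\mathbf{x}_{\text{r}}^{t},\mathbf{u}_{\text{r}}^{t})\) can be obtained from Eq. (1), the following minimal Python sketch applies a forward-Euler discretization with sampling time \(\Delta t\); the Euler scheme and the function name are our illustrative choices and are not prescribed by the paper.

```python
import numpy as np

def f_r(x, u, dt):
    """One forward-Euler step of the differential-drive dynamics in Eq. (1).

    x = [x_r, y_r, theta_r, v_r, omega_r]  (robot state)
    u = [a_r, xi_r]                        (linear and angular accelerations)
    """
    x_r, y_r, th, v, w = x
    a, xi = u
    return np.array([
        x_r + dt * v * np.cos(th),  # planar position driven by the current velocity
        y_r + dt * v * np.sin(th),
        th + dt * w,                # heading driven by the angular velocity
        v + dt * a,                 # velocities driven by the acceleration inputs
        w + dt * xi,
    ])

# Example: one 0.1 s step starting from rest with a small acceleration command.
x_next = f_r(np.zeros(5), np.array([0.5, 0.1]), 0.1)
```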
The robot velocity expressed in the robot frame is \({}^{\mathcal{R}}\mathbf{v}_{\text{r}}=[v_{\text{r}},0]^{\top}\). By transforming it into the world frame, we obtain \[{}^{\mathcal{W}}\mathbf{v}_{\text{r}}={}^{\mathcal{W}}R_{\mathcal{R}}{}^{\mathcal{R}}\mathbf{v}_{\text{r}}=\begin{bmatrix}\cos\theta_{\text{r}}&-\sin\theta_{\text{r}}\\ \sin\theta_{\text{r}}&\cos\theta_{\text{r}}\end{bmatrix}\begin{bmatrix}v_{\text{r}}\\ 0\end{bmatrix}, \tag{2}\] where \({}^{\mathcal{W}}R_{\mathcal{R}}\) represents the rotation matrix that transforms from the robot frame, \(\mathcal{R}\), to the world frame, \(\mathcal{W}\). ### _Quasi-static pushing_ Pushed by the mobile robot, the object slides with friction interaction with both the ground and the robot. The friction interaction is assumed to conform to Coulomb's law. A quasi-static assumption is made here that the motion of the system is slow and the wrenches are balanced with negligible inertia effects. Then, a force-motion mapping can be given according to the Limit Surface theory proposed in [20]. All the possible static and sliding friction wrenches form a convex set whose boundary is called the limit surface. Under a uniform pressure distribution, the limit surface is a closed convex surface and can be approximated by an ellipsoid [21]. In this case, the applied push wrench that quasi-statically balances the friction wrench satisfies: \[{}^{\mathcal{O}}\mathbf{w}_{\text{p}}^{\top}H^{\mathcal{O}}\mathbf{w}_{\text{p}}=1, \tag{3}\] in which \(H=\text{diag}(\frac{1}{(\mu_{\text{g}}N_{\text{o}})^{2}},\frac{1}{(\mu_{\text{g}}N_{\text{o}})^{2}},\frac{\gamma_{\text{g}}^{2}}{(\mu_{\text{g}}N_{\text{o}})^{2}})\), where \({}^{\mathcal{O}}\mathbf{w}_{\text{p}}=[{}^{\mathcal{O}}f_{\text{p},x},{}^{\mathcal{O}}f_{\text{p},y},{}^{\mathcal{O}}m_{\text{p}}]^{\top}\in\mathbb{R}^{3}\) denotes the wrench (two force components and a moment) applied by the pusher that quasi-statically balances the friction wrench exerted by the ground planar surface, and the left superscript \({}^{\mathcal{O}}\cdot\) represents variables in the object body frame. \(\mu_{\text{g}}\) is the friction coefficient between the object and the ground planar surface, \(N_{\text{o}}\) the weight of the object, and \(\gamma_{\text{g}}\) an integration constant related to the contact surface area 1. Footnote 1: \(\gamma_{\text{g}}=\frac{A(\mathcal{S}_{\text{g}})}{\int_{\mathcal{S}_{\text{g}}}\sqrt{x^{2}+y^{2}}\,dxdy}\), where \(\mathcal{S}_{\text{g}}\) is the contact patch between the object and the ground planar surface, and \(A(\mathcal{S}_{\text{g}})\) its area. The friction wrench is a point on the limit surface when the object is sliding. Moreover, the direction of the object's twist \({}^{\mathcal{O}}\mathbf{v}_{\text{o}}=[{}^{\mathcal{O}}v_{\text{o},x},{}^{\mathcal{O}}v_{\text{o},y},{}^{\mathcal{O}}\omega_{\text{o}}]^{\top}\in\mathbb{R}^{3}\) is given by the normal to the limit surface at that point [20]. Hence, \[{}^{\mathcal{O}}\mathbf{v}_{\text{o}}\propto\frac{\partial}{\partial^{\mathcal{O}}\mathbf{w}_{\text{p}}}({}^{\mathcal{O}}\mathbf{w}_{\text{p}}^{\top}H^{\mathcal{O}}\mathbf{w}_{\text{p}})\propto H^{\mathcal{O}}\mathbf{w}_{\text{p}}. \tag{4}\] ### _Dubins car model with a single-point contact pusher_ As concluded in [7], stable pushing with a single-point contact can be reduced to the Dubins car problem [22]. As shown in Fig. 
2(a), a round pusher pushes a rectangle-shaped object at point \(C\) with a pushing force \({}^{\mathcal{O}}\mathbf{f}_{\text{p}}=[f_{\text{p}x},f_{\text{p}y}]\), which is limited within the friction cone. The resulting twist of the object, \({}^{\mathcal{O}}\mathbf{v}_{\text{o}}\), can be represented as an instantaneous center of rotation \(IRC=[{}^{\mathcal{O}}v_{\text{o},x}/{}^{\mathcal{O}}\omega_{\text{o}},{}^{\mathcal{O}}v_{\text{o},y}/{}^{\mathcal{O}}\omega_{\text{o}}]\). Given a pushing force \({}^{\mathcal{O}}\mathbf{f}_{\text{p}}\) at contact point \(C\), the distance from the object frame origin \(O_{o}\) to the line of force is \(r_{\text{f}}=\frac{\big{|}{}^{\mathcal{O}}\mathbf{x}_{C}\times{}^{\mathcal{O}}\mathbf{f}_{\text{p}}\big{|}}{\sqrt{f_{\text{p}x}^{2}+f_{\text{p}y}^{2}}}\), where \({}^{\mathcal{O}}\mathbf{x}_{C}\) is the position vector of the contact point \(C\). According to the limit surface theory, the distance from the center of rotation to the origin is inversely proportional to \(r_{\text{f}}\), that is, \(\tilde{r}_{\text{f}}=\sqrt{\frac{{}^{\mathcal{O}}v_{\text{o},x}^{2}+{}^{\mathcal{O}}v_{\text{o},y}^{2}}{{}^{\mathcal{O}}\omega_{\text{o}}^{2}}}=\frac{\gamma_{\text{g}}^{2}}{r_{\text{f}}}\). It is demonstrated in [7] that, as in projective geometry, the dual of the line of the pushing force \(\mathbf{f}_{\text{p}}\) about the origin \(O_{o}\) is the instantaneous center of rotation, \(IRC\). So the dual of \(\mathbf{f}_{p}\) in all directions forms a line, as the set of all the possible instantaneous rotation centers, which is perpendicular to the vector from the origin to the contact point and is represented as the dashed orange line, \(l_{1}\), in Fig. 2(a). But due to the friction cone constraint, the rotation center will not be positioned on the line segment \(Z_{l}Z_{r}\), whose two vertices correspond to pushing forces along the edges of the friction cone. In other words, the stable pushing constraint is translated to a bounded curvature of the object's trajectory, which makes the stable pushing planning a Dubins car problem, as depicted in Fig. 2(b). However, [7] only considers omnidirectional pushers. If we take a differential-drive wheeled robot as the pusher, the robot can only rotate about a point that lies along its common left and right wheel axis [23], as shown in Fig. 2(b). Here lies the conflict: the shared rotation center of the robot and the object can only be the intersection of \(l_{1}\) and \(l_{2}\), which means the robot and the object can only move straight forward together or rotate about the intersection point of the two lines of rotation centers to maintain stable pushing. ## IV Sticking contact constraint As shown in Section III-C, the maneuverability of the pushing system with a single-point contact is greatly restricted by using a nonholonomic rectangular mobile base. We focus on pushing with line contact to improve maneuverability under stable pushing. Due to the complexity of directly imposing the friction cone constraint, we instead derive a simplified linear motion constraint tailored for the differential-drive robot. This approach allows us to solve the stable pushing problem effectively. The Clearpath Husky and Boxer robots are used here, as shown in Fig. 1. The schematic of the pusher-slider system can be found in Fig. 4. ### _Graphical derivation_ Building upon the derivation for point contact based on the graphical approach presented in Section III-C, we extend it to the line contact case, as depicted in Fig. 3. 
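Before turning to the line-contact case, the force-motion mapping of Eqs. (3)-(4) can be illustrated with a small Python sketch; the parameter values below are made up for illustration, and only the direction of the object twist is meaningful under the quasi-static model.

```python
import numpy as np

# Illustrative (made-up) parameters: ground friction coefficient, object weight,
# and the integration constant of the ellipsoidal limit surface in Eq. (3).
mu_g, N_o, gamma_g = 0.3, 10.0, 2.0
H = np.diag([1.0 / (mu_g * N_o) ** 2,
             1.0 / (mu_g * N_o) ** 2,
             gamma_g ** 2 / (mu_g * N_o) ** 2])

def twist_direction(w_p):
    """Direction of the object twist [v_x, v_y, omega] for a pusher wrench w_p, Eq. (4)."""
    v = H @ w_p
    return v / np.linalg.norm(v)

# A pure force whose line of action passes through the center of friction (zero moment)
# produces a pure translation of the object.
print(twist_direction(np.array([1.0, 0.0, 0.0])))  # -> [1. 0. 0.]
```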
The line contact can be simplified as two point contacts at the extreme points [24], \({}^{\mathcal{O}}C_{\text{i}}=[-W_{\text{o}}/2,d_{\text{i}}],i\in\{1,2\}\). The pushing force at contact points is denoted by \(\mathbf{f}_{\text{p},i}=[f_{\text{p},i}^{\text{L}},f_{\text{p},i}^{\text{R}}] ^{T}\in\mathbb{R}^{2}\), including two components along the two edges of the friction cone. To ensure stiff contact between the robot and the object, the pushing forces, \(\mathbf{f}_{\text{p},i}\), are limited within the friction cone. Fig. 2: Illustration of the possible center of rotation. The circle and the rectangle represent the robot and the object in a 2D plane. The grey area and the blue arrows respectively indicate the friction cone and its edges. The orange line indicates the set of rotation centers of the object under stable push. While the green line is the robot’s common left and right wheel axis and the line of its possible rotation centers. In (a), the object is pushed by an omni-directional pusher with a point contact. The set of its possible rotation centers lies on the orange line. In (b), the object is pushed by a nonholonomic robot with its rotation centers on the green line. However, there is no overlap between the possible rotation center of the wheeled robot and the object under this contact configuration. The red bricks, connected by grey dashed lines, represent the wheels of a car model in Dubin’s car problem. A total generalized force, \(\mathbf{f}_{\text{p}}=[f_{\text{p}}^{\text{L}},f_{\text{p}}^{\text{R}}]\in\mathbb{R} ^{2}\), and a corresponding generalized contact point, \({}^{\mathcal{O}}C=[-W_{\text{o}}/2,d],d\in[-\frac{L_{\text{o}}}{2},\frac{L_{ \text{o}}}{2}]\), can be found, which are equivalent to the two pushing forces, \(\mathbf{f}_{\text{p},i},i\in 1,2\), ensuring that the contact wrench exerted by the generalized force, \({}^{\mathcal{O}}\mathbf{w}_{\text{p}}\), matches that of the pushing forces, \({}^{\mathcal{O}}\mathbf{w}_{\text{p},1}\) and \({}^{\mathcal{O}}\mathbf{w}_{\text{p},2}\): \({}^{\mathcal{O}}\mathbf{w}_{\text{p}}={}^{\mathcal{O}}\mathbf{w}_{\text{p},1} +{}^{\mathcal{O}}\mathbf{w}_{\text{p},2}\). The generalized contact point shifts on the line segment \(C_{1}C_{2}\), causing a tilt in the line of rotation centers \(l\) (for details, please refer to [7]). Consequently, this tilted \(l\) intersects with the wheel axis of the robot, as illustrated in Fig. 3. Under the friction cone constraint, all the possible intersections form the line segment \([-\infty,R_{\text{f}}]\) and \([R_{\text{r}},+\infty]\). Obviously, the sticking constraint is transformed to a constrained motion set for the robot-object system. ### _Algebraic derivation_ Now we derive the constrained motion set boundary using an algebraic approach. The friction cone of the pushing force is \[\mathcal{F}_{\text{p},i}=\{\mathbf{f}_{\text{p},i}\in\mathbb{R}^{2}\ |\ f_{\text{p},i}^{\text{L}}>0,f_{\text{p},i}^{\text{R}}>0\},\ i=1,2. \tag{5}\] Equivalently, the friction cone on \(\mathbf{f}_{\text{p},i}\) can be written in a form of \(\mathbf{f}_{\text{p},i}=\lambda_{1,i}\begin{bmatrix}1\\ 0\end{bmatrix}+\lambda_{2,i}\begin{bmatrix}0\\ 1\end{bmatrix}\ |\ \lambda_{1,i},\lambda_{2,i}>0\) where \(\lambda_{1,i},\lambda_{2,i}\) are non-negative real numbers [25]. 
For each feasible friction force \(\mathbf{f}_{\text{p},i}\in\mathcal{F}_{\text{p},i}\), it generates a wrench \({}^{\mathcal{O}}\mathbf{w}_{\text{p},i}=J_{\text{p},i}\mathbf{f}_{\text{p},i}\) with \(J_{\text{p},i}\) the matrix that maps the contact friction force to a pusher wrench in the object's body frame: \[J_{\text{p},i}=\begin{bmatrix}\cos(\theta_{\mu})&\cos(\theta_{\mu})\\ \sin(\theta_{\mu})&-\sin(\theta_{\mu})\\ d_{i}\cos(\theta_{\mu})+\frac{1}{2}W_{\text{o}}\sin(\theta_{\mu})&d_{i}\cos(\theta_{\mu})-\frac{1}{2}W_{\text{o}}\sin(\theta_{\mu})\end{bmatrix}. \tag{6}\] The friction cones at the contact points lead to wrench cones. For each friction cone \(\mathcal{F}_{\text{p},i},i=1,2\), the pusher wrenches \({}^{\mathcal{O}}\mathbf{w}_{\text{p},i}^{\text{L}}\) and \({}^{\mathcal{O}}\mathbf{w}_{\text{p},i}^{\text{R}}\) corresponding to the two unit edges \(\mathbf{f}_{\text{p},i}^{\text{L}}=[0,1]^{\top}\) and \(\mathbf{f}_{\text{p},i}^{\text{R}}=[1,0]^{\top}\) give the edges of the wrench cones, as shown in Fig. 5b. \[{}^{\mathcal{O}}\mathcal{W}_{\text{p},i}=\{^{\mathcal{O}}\mathbf{w}_{\text{p},i}=J_{\text{p},i}\mathbf{f}_{\text{p},i}\ |\ \mathbf{f}_{\text{p},i}\in\mathcal{F}_{\text{p},i}\},\ i=1,2, \tag{7}\] where \({}^{\mathcal{O}}\mathbf{w}_{\text{p},i}=\lambda_{1,i}{}^{\mathcal{O}}\mathbf{w}_{\text{p},i}^{\text{L}}+\lambda_{2,i}{}^{\mathcal{O}}\mathbf{w}_{\text{p},i}^{\text{R}}=[\lambda_{1,i}\cos(\theta_{\mu})+\lambda_{2,i}\cos(\theta_{\mu}),\ \lambda_{1,i}\sin(\theta_{\mu})-\lambda_{2,i}\sin(\theta_{\mu}),\ \lambda_{1,i}(d_{i}\cos(\theta_{\mu})+\frac{1}{2}W_{\text{o}}\sin(\theta_{\mu}))+\lambda_{2,i}(d_{i}\cos(\theta_{\mu})-\frac{1}{2}W_{\text{o}}\sin(\theta_{\mu}))]^{\top}\). Then the generalized wrench of the two pushing forces is \[{}^{\mathcal{O}}\mathbf{w}_{\text{p}} =\lambda_{3}{}^{\mathcal{O}}\mathbf{w}_{\text{p},1}+\lambda_{4}{}^{\mathcal{O}}\mathbf{w}_{\text{p},2} \tag{8}\] \[=\lambda_{3}(\lambda_{1,1}{}^{\mathcal{O}}\mathbf{w}_{\text{p},1}^{\text{R}}+\lambda_{2,1}{}^{\mathcal{O}}\mathbf{w}_{\text{p},1}^{\text{L}})\] \[+\lambda_{4}(\lambda_{1,2}{}^{\mathcal{O}}\mathbf{w}_{\text{p},2}^{\text{R}}+\lambda_{2,2}{}^{\mathcal{O}}\mathbf{w}_{\text{p},2}^{\text{L}}),\] where \(\lambda_{j}>0,\ j=3,4\). Since \(\lambda_{1,i}\lambda_{j}>0\), the feasible set of the generalized wrench in Eq. (8) can be represented as a convex hull \({}^{\mathcal{O}}\mathcal{W}_{\text{p}}\), as shown in Fig. 5c: \[{}^{\mathcal{O}}\mathcal{W}_{\text{p}}=\textbf{cvx\_hull}({}^{\mathcal{O}}\mathbf{w}_{\text{p},1}^{\text{L}},{}^{\mathcal{O}}\mathbf{w}_{\text{p},1}^{\text{R}},{}^{\mathcal{O}}\mathbf{w}_{\text{p},2}^{\text{L}},{}^{\mathcal{O}}\mathbf{w}_{\text{p},2}^{\text{R}}). \tag{9}\] As mentioned in Eq. (4), the limit surface theory gives the mapping between the pushing force and the resulting object sliding motion. The direction of the object's twist is parallel to \(H^{\mathcal{O}}\mathbf{w}_{\text{p}}\). Combining this with Eq. (7), the set of all possible object twists \({}^{\mathcal{O}}\mathbf{v}_{\text{o}}=[{}^{\mathcal{O}}v_{\text{o},x},{}^{\mathcal{O}}v_{\text{o},y},{}^{\mathcal{O}}\omega_{\text{o}}]^{\top}\) can be written as: \[{}^{\mathcal{O}}\mathcal{V}_{\text{o}}=\{k_{\text{o}}H^{\mathcal{O}}\mathbf{w}_{\text{p}}\ |\ ^{\mathcal{O}}\mathbf{w}_{\text{p}}\in{}^{\mathcal{O}}\mathcal{W}_{\text{p}},\ k_{\text{o}}\in\mathbb{R}^{+}\}, \tag{10}\] where \(k_{\text{o}}\) is a magnitude parameter. For all pusher wrenches \({}^{\mathcal{O}}\mathbf{w}_{\text{p}}\in{}^{\mathcal{O}}\mathcal{W}_{\text{p}}\) that are on the ellipsoidal limit surface, the set of mapped object twists \({}^{\mathcal{O}}\mathcal{V}_{\text{o}}\) is also a polyhedral cone since the mapping in Eq. (10) is linear. Thus, we can compute the motion cone \({}^{\mathcal{O}}\mathcal{V}_{\text{o}}\) by computing its edges, as shown in Fig. 5d. Additionally, since the object is pushed by the robot, which has a linear velocity \(v_{\text{r}}\) and angular velocity \(\omega_{\text{r}}\), without losing or sliding the contact, we have the object velocity \[{}^{\mathcal{W}}\mathbf{v}_{\text{o}}={}^{\mathcal{W}}\mathbf{v}_{\text{r}}+{}^{\mathcal{W}}R_{\mathcal{R}}\cdot(\boldsymbol{\omega_{\text{r}}}\times{}^{\mathcal{R}}\mathbf{x}_{\text{o}})_{(1:2)}, \tag{11}\] where \({}^{\mathcal{R}}\mathbf{x}_{\text{o}}=[d_{\text{ro}},{}^{\mathcal{R}}y_{\text{o}},0]^{\top}\) denotes the object position in the robot frame and \(\boldsymbol{\omega_{\text{r}}}=[0,0,\omega_{\text{r}}]^{\top}\) corresponds to the pure rotation velocity vector of the robot. The subscript (1:2) indicates taking the first two dimensions of the vector. After substituting Eq. (2) in Eq. (11), the velocity of the object expressed in the object frame can be obtained by multiplying both sides of Eq. (11) by \({}^{\mathcal{W}}R_{\mathcal{R}}^{-1}\), which yields: \[{}^{\mathcal{O}}v_{\text{o},x}=v_{\text{r}}-\omega_{\text{r}}{}^{\mathcal{R}}y_{\text{o}},\ \ \ {}^{\mathcal{O}}v_{\text{o},y}=\omega_{\text{r}}d_{\text{ro}},\ \ \ {}^{\mathcal{O}}\omega_{\text{o}}=\omega_{\text{r}}. \tag{12}\] Eq. (12) shows that the object twists achievable under sticking contact with the nonholonomic robot lie in a plane \({}^{\mathcal{O}}\mathcal{P}_{\text{o}}=\{{}^{\mathcal{O}}v_{\text{o},y}=d_{\text{ro}}{}^{\mathcal{O}}\omega_{\text{o}}\}\) in the twist space. We therefore compute the intersection of the motion cone \({}^{\mathcal{O}}\mathcal{V}_{\text{o}}\) 
For all pusher wrenches \({}^{\mathcal{O}}\mathbf{w}_{\text{p}}\in{}^{\mathcal{O}}\mathcal{W}_{\text{p}}\) that are on the ellipsoidal limit surface, the set of mapped object twists \({}^{\mathcal{O}}\mathcal{V}_{\text{o}}\) is also a polyhedral cone since the mapping in Eq. (10) is linear. Thus, we can compute the motion cone \({}^{\mathcal{O}}\mathcal{V}_{\text{o}}\) by computing its edges, as shown in Fig. 5d. Additionally, since the object is pushed by the robot, which has a linear velocity \(v_{\text{r}}\) and angular velocity \(\omega_{\text{r}}\), without losing or sliding the contact, we have the object velocity \[{}^{\mathcal{W}}\mathbf{v}_{\text{o}}={}^{\mathcal{W}}\mathbf{v}_{\text{r}}+{} ^{\mathcal{W}}R_{\mathcal{R}}\cdot(\boldsymbol{\omega_{\text{r}}}\times{}^{ \mathcal{R}}\mathbf{x}_{\text{o}})_{(1:2)} \tag{11}\] where \({}^{\mathcal{R}}\mathbf{x}_{\text{o}}=[d_{\text{ro}},{}^{\mathcal{R}}y_{ \text{o}},0]^{\top}\) denotes the object position in the robot frame and \(\boldsymbol{\omega_{\text{r}}}=[0,0,\omega_{\text{r}}]^{\top}\) corresponds to the pure rotation velocity vector of the robot. The subscript (1:2) indicates taking the first two dimensions of the vector. After substituting Eq. (2) in Eq. (11), the velocity of the object expressed in the object frame can be achieved by multiplying \({}^{\mathcal{W}}R_{\mathcal{R}}^{-1}\) at both sides of Eq. (11), which yields: \[{}^{\mathcal{O}}v_{0,x}=v_{\text{r}}-\omega_{\text{r}}{}^{\mathcal{R}}y_{\text{o }},\ \ \ ^{\mathcal{O}}v_{0,y}=\omega_{\text{r}}d_{\text{ro}},\ \ \ ^{\mathcal{O}}\omega_{\text{o}} and the plane \({}^{\mathcal{O}}\mathcal{P}_{\text{o}}\), which results in two edge vectors, \({}^{\mathcal{O}}\mathbf{v}_{\text{o}}^{\prime}\) and \({}^{\mathcal{O}}\mathbf{v}_{\text{o}}^{\prime}\). \[{}^{\mathcal{O}}\mathbf{v}_{\text{o}}^{\prime}=(^{\mathcal{O}} \mathbf{v}_{\text{o},1}^{L}\times{}^{\mathcal{O}}\mathbf{v}_{\text{o},2}^{L}) \times\mathbf{\vec{n}}=k_{\text{o}}\begin{bmatrix}-d_{\text{ro}}\text{cos}( \theta_{\mu})\\ -d_{\text{ro}}\text{sin}(\theta_{\mu})\\ -\text{sin}(\theta_{\mu})\end{bmatrix} \tag{13}\] \[{}^{\mathcal{O}}\mathbf{v}_{\text{o}}^{\prime}=(^{\mathcal{O}} \mathbf{v}_{\text{o},1}^{R}\times{}^{\mathcal{O}}\mathbf{v}_{\text{o},2}^{R}) \times\mathbf{\vec{n}}=k_{\text{o}}\begin{bmatrix}-d_{\text{ro}}\text{cos}( \theta_{\mu})\\ d_{\text{ro}}\text{sin}(\theta_{\mu})\\ \text{sin}(\theta_{\mu})\end{bmatrix}\] where \(\mathbf{\vec{n}}=[0,1,-d_{\text{ro}}]\) is the normal vector to plane \({}^{\mathcal{O}}\mathcal{P}_{\text{o}}\). The object motion cone can then be written as \({}^{\mathcal{O}}\bar{\mathcal{V}}_{\text{o}}=\lambda_{5}{}^{\mathcal{O}} \mathbf{v}_{\text{o}}^{\prime}+\lambda_{6}{}^{\mathcal{O}}\mathbf{v}_{\text{o }}^{\prime}\mid\lambda_{5},\lambda_{6}\in\mathbb{R}_{\geq 0}\). According to Eq. (12), we can achieve the corresponding motion cone for the robot, \(\mathcal{V}_{\text{r}}\), with a linear mapping \(\begin{bmatrix}v_{\text{r}}\\ w_{\text{r}}\end{bmatrix}=\begin{bmatrix}1&0&{}^{\mathcal{R}}y_{0}\\ 0&0&1\end{bmatrix}\begin{bmatrix}v_{\text{o}x}\\ v_{\text{o}y}\\ w_{\text{o}}\end{bmatrix}\mid\begin{bmatrix}v_{\text{o}x}\\ v_{\text{o}y}\\ w_{\text{o}}\end{bmatrix}\in{}^{\mathcal{O}}\bar{\mathcal{V}}_{\text{o}}\). 
Expressing the robot motion cone as a conical combination, \[\begin{bmatrix}v_{\text{r}}\\ \omega_{\text{r}}\end{bmatrix}= \lambda_{5}\begin{bmatrix}-{}^{\mathcal{R}}y_{\text{o}}\sin(\theta_{\mu})-d_{\text{ro}}\cos(\theta_{\mu})\\ -\sin(\theta_{\mu})\end{bmatrix}+ \tag{14}\] \[\lambda_{6}\begin{bmatrix}{}^{\mathcal{R}}y_{\text{o}}\sin(\theta_{\mu})-d_{\text{ro}}\cos(\theta_{\mu})\\ \sin(\theta_{\mu})\end{bmatrix}\] from which we obtain the motion constraint on the robot input by finding the bounds on \(\omega_{\text{r}}/v_{\text{r}}\): \[k^{\prime\prime}v_{\text{r}}^{t}\leq \omega_{\text{r}}^{t}\leq k^{\prime}v_{\text{r}}^{t} \tag{15}\] where \(v_{\text{r}}\geq 0\), \(k^{\prime}=\frac{\sin(\theta_{\mu})}{{}^{\mathcal{R}}y_{\text{o}}\sin(\theta_{\mu})+d_{\text{ro}}\cos(\theta_{\mu})}\), and \(k^{\prime\prime}=\frac{\sin(\theta_{\mu})}{{}^{\mathcal{R}}y_{\text{o}}\sin(\theta_{\mu})-d_{\text{ro}}\cos(\theta_{\mu})}\). Eq. (15) can also be regarded as a constraint on the curvature of the robot's trajectory, \(k\). For simplification, we only plan for pushes at the middle of the contact surface, where \({}^{\mathcal{R}}y_{\text{o}}=0\). ## V Planning for Robot Pushing With the motion constraint derived in Eq. (15), we now present an NMPC-based motion planner for robot pushing that keeps the object's motion within its motion cone. #### V-1 NMPC formulation We formulate a receding-horizon optimization problem with \(N\) time steps and planning horizon \(N\Delta t\): \[\min_{\mathbf{x}_{\text{r}}^{1:N},\,\mathbf{u}_{\text{r}}^{0:N-1}} \sum_{t=0}^{N-1}J^{t}(\mathbf{x}_{\text{r}}^{t},\mathbf{u}_{\text{r}}^{t})+J^{N}(\mathbf{x}_{\text{r}}^{N})\] (16a) s.t. \[\mathbf{x}_{\text{r}}^{0}=\mathbf{x}_{\text{r}}(t_{0}), \tag{16b}\] \[\mathbf{x}_{\text{r}}^{t}=\mathbf{f}_{\text{r}}(\mathbf{x}_{\text{r}}^{t-1},\mathbf{u}_{\text{r}}^{t-1}),\] (16c) \[\mathbf{h}_{\text{pushing}}(\mathbf{x}_{\text{r}}^{t})\leq 0,\] (16d) \[\mathbf{h}_{\text{avoidance}}(\mathbf{x}_{\text{r}}^{t})\leq 0,\] (16e) \[\mathbf{u}_{\text{r}}^{t-1}\in\mathcal{U}_{\text{r}},\ \forall t\in\{1,\dots,N\},\] where \(\Delta t\) is the sampling time, \(J^{t}\) denotes the cost term at stage \(t\), \(J^{N}\) denotes the terminal cost, \(\mathbf{x}_{\text{r}}(t_{0})\) is the initial state of the robot, \(\mathbf{f}_{\text{r}}\) is the robot dynamics model, and \(\mathcal{U}_{\text{r}}\) represents the robot's acceleration and angular acceleration limits. \(\mathbf{h}_{\text{pushing}}\) and \(\mathbf{h}_{\text{avoidance}}\) respectively represent the path constraints for stable pushing and obstacle avoidance, which are described in detail in the following. #### V-2 Cost functions Let \(\mathbf{p}_{\text{o}}^{\text{g}}\) be the goal location to which the object needs to be pushed. We minimize the displacement between the object's terminal position and this goal. To this end, the terminal cost is defined as \(J^{N}(\mathbf{x}_{\text{r}}^{N})=q_{\text{goal}}\left\|\mathbf{p}_{\text{o}}^{N}-\mathbf{p}_{\text{o}}^{\text{g}}\right\|\), where the object's terminal position is \(\mathbf{p}_{\text{o}}^{N}=\mathbf{p}_{\text{r}}^{N}+R(\theta_{\text{r}}^{N})[d_{\text{ro}},0]^{\top}\) with \(R(\cdot)\) the two-dimensional rotation matrix, and \(q_{\text{goal}}\) is a tuning weight.
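Before detailing the remaining cost and constraint terms, the structure of problem (16) can be sketched in plain Python. This is an illustrative single-shooting evaluation for a unicycle-like robot, not the solver used in the paper (the actual NLP is solved with ACADOS); the state convention and the numeric values are assumptions of the sketch, and the quadratic velocity stage cost anticipates the stage cost introduced in the next paragraph.

```python
import numpy as np

def rollout(x0, u_seq, dt):
    """Unicycle-like robot dynamics f_r of (16c); state x = [px, py, theta, v, w],
    input u = [a, alpha] (linear and angular accelerations)."""
    xs = [np.asarray(x0, dtype=float)]
    for a, alpha in u_seq:
        px, py, th, v, w = xs[-1]
        xs.append(np.array([px + v * np.cos(th) * dt,
                            py + v * np.sin(th) * dt,
                            th + w * dt,
                            v + a * dt,
                            w + alpha * dt]))
    return xs

def objective(xs, p_goal, d_ro, q_goal, q_v, q_w):
    """Terminal goal cost of (16a) plus a quadratic velocity stage cost."""
    px, py, th, v, w = xs[-1]
    p_obj_N = np.array([px + d_ro * np.cos(th), py + d_ro * np.sin(th)])
    stage = sum(q_v * x[3] ** 2 + q_w * x[4] ** 2 for x in xs[:-1])
    return stage + q_goal * np.linalg.norm(p_obj_N - np.asarray(p_goal))

def h_pushing(xs, k_lo, k_hi):
    """Stable-pushing path constraints from Eq. (15): v >= 0 and k'' v <= w <= k' v.
    Returns a list of values that must all be <= 0."""
    h = []
    for _, _, _, v, w in xs[1:]:
        h += [-v, k_lo * v - w, w - k_hi * v]
    return h

# Illustrative parameters for the sketch (the numbers echo those reported in the experiments).
dt, N, d_ro = 0.1, 20, 0.66
k_hi, k_lo = 0.32, -0.32          # k' and k'' for R_y_o = 0
u_guess = [(0.2, 0.0)] * N        # candidate input sequence (would be optimized by the NLP solver)
xs = rollout([0, 0, 0, 0, 0], u_guess, dt)
print(objective(xs, p_goal=(2.0, 1.0), d_ro=d_ro, q_goal=1.0, q_v=0.1, q_w=0.1))
print(max(h_pushing(xs, k_lo, k_hi)) <= 0.0)   # True iff the pushing constraints hold
```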
The stage cost penalizes the robot's linear and angular velocities so that it does not move too fast: \(J^{t}(\mathbf{x}_{\text{r}}^{t},\mathbf{u}_{\text{r}}^{t})=q_{\text{v}}(v_{\text{r}}^{t})^{2}+q_{\omega}(\omega_{\text{r}}^{t})^{2}\), where \(q_{\text{v}}\) and \(q_{\omega}\) are tuning weights. #### V-3 Pushing constraints To make the robot maintain contact with the object while pushing, the object's motion has to be within its motion cone at each time step. By combining the computed motion cone in Eq. (15) with the continuous pushing constraint, the sticking contact constraints can be derived as follows: \[v_{\text{r}}^{t}\geq 0, \tag{17}\] \[k^{\prime\prime}v_{\text{r}}^{t}\leq \omega_{\text{r}}^{t}\leq k^{\prime}v_{\text{r}}^{t}.\] This indicates that the robot has to push the object forward while keeping its angular velocity within a cone determined by its forward speed, which formulates the stable pushing constraints \(\mathbf{h}_{\text{pushing}}\). Fig. 5: Illustration of the motion cone construction for planar pushing using a nonholonomic robot. (a) Friction cones. (b) Individual generalized friction cones. (c) Convex hull of the individual generalized friction cones (blue region) and the limit surface (light purple ellipsoid). (d) Feasible pusher wrenches (on the green surface) and force-motion model (orange vectors). (e) Motion cone of the object (area marked red). #### V-4 Collision avoidance constraints For collision avoidance, we use two discs with radius \(r=r_{\text{r}}\) or \(r_{\text{o}}\) to enclose the robot and the object, respectively, as shown in Fig. 6. Each known obstacle \(j=1,\dots\) in the environment is modeled as an ellipse [26] located at \(\mathbf{p}_{j}\) with semi-axes \((a_{j},b_{j})\) and orientation \(\theta_{j}\). Hence, the collision avoidance constraints \(\mathbf{h}_{\text{avoidance}}\) are formulated as: \((R(\theta_{j})\mathbf{d}_{j}^{t})^{\top}\begin{bmatrix}\frac{1}{(a_{j}+r)^{2}}&0\\ 0&\frac{1}{(b_{j}+r)^{2}}\end{bmatrix}R(\theta_{j})\mathbf{d}_{j}^{t}\geq 1\), where \(\mathbf{d}_{j}^{t}\) denotes either the robot-obstacle relative position \(\mathbf{p}_{\text{r}}^{t}-\mathbf{p}_{j}\) or the object-obstacle relative position \(\mathbf{p}_{\text{o}}^{t}-\mathbf{p}_{j}\), in which the object position is \(\mathbf{p}_{\text{o}}^{t}=\mathbf{p}_{\text{r}}^{t}+R(\theta_{\text{r}}^{t})[d_{\text{ro}},0]^{\top}\). ## VI Experimental Results To validate the efficacy of our proposed method, we performed experiments using two robots, Clearpath Husky and Boxer, to test the stable pushing performance (Figs. 7 and 11). Both the Husky and the Boxer are differential-drive wheeled robots with rectangular footprints, sized \(0.97\times 0.67\) m and \(0.75\times 0.55\) m, respectively. Our experimental results demonstrated that sticking contact was maintained in 100% of the trials when applying the proposed concise stable pushing constraint. Additionally, we compared the proposed method with state-of-the-art pushing baselines to showcase the conciseness of our constraint and the efficiency of stable pushing in controlling the object's motion. ### _Real-world Experiments using Husky and Boxer_ We carried out real-world experiments with two robots to demonstrate the efficacy of our proposed sticking contact constraint when stably pushing paper boxes. Our experiments utilized a motion capture system (OptiTrack), operating at 120 Hz, and a Kalman filter to track the robots, objects, and obstacles.
Control commands were calculated using our NMPC-based method on a laptop and sent to robots through WiFi and ROS, which operate at a frequency of 20Hz. We use the open source solver ACADOS [18] to solve the NMPC problem, with a sampling time of \(\Delta t=0.1\) seconds, a planning horizon of \(N=20\) and tuning weights \(q_{\text{goal}}=1,q_{\text{v}}=q_{\omega}=0.1\). The Husky robot was equipped with a line bumper in the front, which acts as a pushing effector. It was used to push a large paper box measuring \(0.32\times 0.48\times 0.48\) meters and weighing 2.8 kilograms. At the beginning of the push, the box was placed in contact with the robot center at a distance of \(d_{\text{ro}}=0.66\) meters. The angle of the friction cone was set to \(\theta_{\mu}=12.00\) degrees. It is estimated by measuring the force which could pull the box at a constant speed, such that the pulling force is equal to the friction force: \(F_{\text{pull}}=F_{f}=\tan\theta_{\mu}\cdot m_{\text{o}}g\). Then \(\theta_{\mu}\) can be achieved as \(\arctan(\frac{F_{\text{pull}}}{m_{\text{o}}g})\). Using the above setup, the limits of the robot trajectory curvature are calculated as \(k^{\prime}=0.32\) and \(k^{\prime\prime}=-0.32\). Due to the size limitation of the motion capture system, we selected six pushing goals with coordinates (2,1), (2,0), (2,-1), (0,1), (0,0), and (0,-1) to evaluate the stable pushing performance, as shown in Fig. 6(a). Starting from the initial position (-2,1), the Husky robot was tasked with pushing the paper box to the designated goal positions, as shown in Fig. 8 (a-f). The robot successfully maintained sticking contact with the object in all cases. Compared to trajectories without the stiff contact constraint (Fig. 8 (h-j)), the object easily slides away while the robot moves (intuitive comparison can be found in Fig. 6(b) and 6(c)). However, the contact constraint also limited the maneuverability of the pushing system, so that the maximum curvature of the planned trajectory was bounded. Fig. 9 illustrates the relationship between maneuverability and motion cone. As a result, some pushing targets (e.g., Goal c in Fig. 8) were unattainable within a limited time with the local NMPC planner. Reposition actions are required, so a global pushing planner will be the focus of our future research. Additionally, the proposed method can be easily extended to an obstacle-aware case, as shown in Fig. 1 and Fig. 10. A static obstacle is placed in front of the robot, and the object's goal location is behind it. The robot can successfully avoid the obstacle by maintaining both the stiff contact and obstacle avoidance constraints while pushing the object to the goal location. Furthermore, we aimed to comprehensively validate the effectiveness of our proposed stable pushing method under varying friction conditions using the Boxer robot within a distinct environment. A series of experiments were conducted to this end. In the initial phase, we conducted ablation studies to assess the effectiveness of the sticking contact constraint with box sized \(0.39\times 0.59\) m. Three pushing targets were selected, with five pushing trials conducted for each target. The outcomes of these ablation experiments are illustrated in Fig. 11, demonstrating an impressive 100% success rate across all trials. Subsequently, we tried a new box sized \(0.32\times 0.48\) m and proceeded to an experiment where the robot pushed an object around the room. 
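The friction-angle estimate and the curvature limits quoted above can be reproduced in a few lines. In the sketch below the pull-force reading `F_pull` is synthesized to be consistent with the reported \(\theta_{\mu}\) (it is not a measured value); the remaining numbers are the parameters stated in the text.

```python
import numpy as np

m_o, g = 2.8, 9.81                            # box mass [kg], gravity [m/s^2]
F_pull = np.tan(np.deg2rad(12.0)) * m_o * g   # hypothetical constant-speed pull-force reading [N]

theta_mu = np.arctan(F_pull / (m_o * g))      # friction-cone half angle, theta_mu = arctan(F_pull / (m_o g))
d_ro, R_y_o = 0.66, 0.0                       # robot-object distance [m], lateral offset [m]

k_prime  = np.sin(theta_mu) / (R_y_o * np.sin(theta_mu) + d_ro * np.cos(theta_mu))
k_pprime = np.sin(theta_mu) / (R_y_o * np.sin(theta_mu) - d_ro * np.cos(theta_mu))
print(np.degrees(theta_mu), round(k_prime, 2), round(k_pprime, 2))   # ~12.0, 0.32, -0.32
```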
The implementation of the stiff contact constraint ensured that the robot maintained stiff contact with the object throughout the process. This strategic approach significantly reduced the need for frequent repositioning actions and requires only two designed switches. To further gauge the stability and robustness of our method, we designed a path tracking experiment. In this setup, the robot meticulously followed a predefined path while engaging in stable pushing. Both sets of experimental results are depicted in Fig. 12 (shown in the attached video as well), illustrating the method's consistent performance across diverse scenarios. Overall, the outcomes of these comprehensive experiments Fig. 6: Illustration of collision avoidance between the robot-object system and the obstacle. demonstrate the robustness and efficacy of our proposed method across different friction conditions and robot platforms, underscoring its potential for real-world applications in robotics. ### _Comparison with the baseline approaches_ What's more, to assess the performance of our proposed stable pushing method, we compared it to two existing baseline approaches, namely the reactive pushing strategy [12] and a Linear Time-Varying Model Predictive Control (LTV MPC) based stable pushing approach [8]. The comparison results are presented in Table I. During the pushing process, the reactive pushing strategy attempts to minimize the angle between the object's movement direction and its direction toward the goal location. As a result, the robot must maneuver around the object to adjust its angle and sometimes reposition itself when the robot-object contact is lost. However, the core of the controller is a Proportional-Integral-Derivative (PID) controller, which is challenging to tune for optimal performance. Due to safety concerns, we tested this approach only in simulation. As shown in Fig. 8 (k-m), the robot often loses contact with the object, requiring time-consuming repositioning actions. Moreover, since the approach was originally designed for omnidirectional robots, it does not account for the motion constraints of nonholonomic robots. The robot sometimes bumps into the object while repositioning, adversely affecting pushing performance. In contrast, our proposed approach has demonstrated superior efficiency and pushing success Fig. 8: (a-f) illustrate the stable pushing outcomes for the six chosen goals depicted in Fig. 6(a). For goals d, e, and f, Fig. (h-j) additionally exhibit the pushing path without the sticking contact constraint, and Fig. (k-m) showcase the performance of the reactive pushing strategy. Fig. 10: Experimental results of obstacle-aware robot pushing. The red and blue curves with dots represent the trajectories of the robot and the pushed object, respectively. The obstacle is marked in gray. Fig. 7: (a) shows the selected pushing goals which are represented as white crosses on the floor. The corresponding goal-oriented pushing results are shown in Fig. 8 a-f. (b) and (c) separately show the experimental results of the robot pushing without and with the stiff contact constraint. The transparency of the robot and box in the image indicates their movement. Fig. 9: Robot trajectories for pushing considering various limits of the robot trajectory curvature, where \(k=k^{\prime}=-k^{\prime\prime}\). The blue square and the red diamond represent the start and the goal locations, respectively. The smaller the motion cone, the maneuverability of the robot is more limited. 
rate for all three goals while maintaining a higher pushing success rate. The reactive pushing approach only achieves high success rates when the goal position is directly in front of the robot and is close to the initial position. To achieve the goals d, e, f, it has an average distance traveled by the robot and a time of \(8.53\) m and \(58.4\) s, respectively, while our proposed approach only takes \(6.53\) m and \(13.2\) s which saves \(23.8\%\) and \(77.4\%\) in these metrics. The LTV MPC-based pushing method shares the same motivation and mechanics as our proposed approach which is to add the friction cone constraint to guarantee stable pushing. However, the LTV MPC approach directly adds the stiff contact constraint to the optimization problem without any preprocessing. Consequently, it has four additional independent decision variables and four more hyperparameters to tune in the MPC formulation. We utilized the open source ACADOS solver to solve the MPC problem proposed in LTV MPC, which is unsolvable due to extra independent variables. Compared to other models, our concise stiff contact constraint requires only one hyperparameter (\(k^{\prime}=-k^{\prime\prime}\)) to tune and can be easily added to MPC-based navigation controllers. ### _Sensitivity analysis_ Recognizing the inherent challenges in accurately measuring friction coefficients, we conducted a comprehensive sensitivity analysis. The primary goal was to determine the parameter \(k\) without prior knowledge of the friction coefficient between the robot and the object. Additionally, we sought to comprehend how variations in the estimation of \(k\) would impact the effectiveness of stable pushing. Subsequently, we assessed stable pushing performance for objects with distinct surface characteristics, including sponge sheet, foam sheet, and cardboard. Furthermore, recognizing the common occurrence of non-uniform mass distribution in unwieldy objects, we conducted experiments involving the rearrangement of the same set of objects within the box, thus achieving diverse mass distributions. This enabled us to investigate the method's robustness in scenarios where the assumption of uniform mass distribution is not perfectly upheld. Because \(k\) represents the limit of the robot trajectory curvature, our experimental setup entailed pushing various objects at a uniform speed of 0.1 m/s around a predetermined rotation center for a duration of 4 seconds. This rotation center, in turn, determines moving along a certain trajectory with curvature \(k=w_{t}/v_{t}\). By measuring the displacement of the object's position in the robot frame at both the start and end of the trajectory, we quantified the cumulative slid distance of the object at different k. The outcomes \begin{table} \begin{tabular}{p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}} \hline \hline & Number of hyper-parameters & Success rate (For Goal 1, 2, 3) & Decision (for in MPC (at time t) & Solvable with commercial solver \\ \hline Proposed approach & 1 & 100\%, 100\%, 0\% & 7 & Yes \\ \hline Reactive pushing & 5 & 100\%, 60\%, 0\% & - & - \\ \hline LTV MPC & 5 & - & 11 & No \\ \hline \hline \end{tabular} \end{table} TABLE I: Comparison to the baselines Fig. 11: Goal-targeted stable pushing with Boxer. Stiff contact is successfully maintained under the sticking contact constraint. Fig. 12: Stable pushing across different scenarios. The red and blue curves represent the trajectories of the robot and the pushed object, respectively. 
The reference waypoints are marked in green. In (b), a sponge sheet is attached to the box to augment friction in the robot-object interaction, where \(k^{\prime}=-k^{\prime\prime}=0.4\). For detailed information, we direct readers to our accompanying video. of the experiments are depicted in Fig. 13. Notably, when \(k<k^{\prime}=0.32\) (for \({}^{\mathcal{R}}y_{\text{o}}=0\); changing direction represents a symmetric case that we omit here), the object's slid distance remains at zero, so that stable pushing is attainable. Conversely, when \(k>k^{\prime}=0.32\), the assurance of stable pushing diminishes and the object slides. This observed trend persists across all tested friction conditions and mass distributions, underscoring the approach's capacity for generalization. Even when \(k\) deviates by as much as \(\pm\)20%, the slid distance remains constrained to within 0.05 m. ### _Discussion_ The proposed approach introduces a simple analytical stable pushing constraint, ensuring pushing stability under the line contact between the robot and the object. It is well suited for objects with uniform mass distributions, and it can potentially be extended to handle cases with slightly nonuniform mass distributions and indeterminate anisotropic friction. Its simplicity is a notable feature, with only one hyperparameter requiring approximation. However, stable pushing imposes limitations on the maximum trajectory curvature, which is determined by the friction condition at the robot-object interface. Adding a high-friction coating helps to improve the system's maneuverability. In contrast, widely used learning-based pushing controllers utilize data-driven pushing dynamics models, which do not consider the shape or mass distribution of the object [13, 27, 5]. However, data-driven methods are known for their data dependency, challenges in generalization, and susceptibility to model drift. Moreover, they neglect pushing stability, resulting in frequent object sliding and the need for time-consuming repositioning actions, which is especially problematic for nonholonomic mobile robots with limited maneuverability. The choice between stable pushing for regular-shaped objects and intermittent pushing for complex objects should be made based on the specific application's requirements and the characteristics of the objects involved. ## VII Conclusion This paper addresses the problem of using a differential-drive mobile robot to push an object to a goal location. We start by revisiting the pushing mechanics and highlighting the challenges posed by the robot's nonholonomic constraints. To overcome these challenges, we propose a stable pushing approach that maintains a stiff line contact between the robot and the object, enforced by a stable pushing constraint. As a key contribution of this work, we provide an algorithm that simplifies this constraint into a concise motion constraint for the robot. An NMPC-based planner is presented for stable pushing by considering the motion constraint. Our proposed method is more efficient than reactive pushing strategies, with a 23.8% reduction in the traveled trajectory length and a 77.4% reduction in time. Furthermore, our method is more concise than the LTV MPC-based stable pushing method, making it easier to implement. We validate our proposed method through real-world experiments with Husky and Boxer robots under different friction conditions. However, the stable pushing method has limitations in maneuverability.
Our future research aims to design global policies that can further switch between contact surfaces to improve maneuverability.
2307.16648
LLMs4OL: Large Language Models for Ontology Learning
We propose the LLMs4OL approach, which utilizes Large Language Models (LLMs) for Ontology Learning (OL). LLMs have shown significant advancements in natural language processing, demonstrating their ability to capture complex language patterns in different knowledge domains. Our LLMs4OL paradigm investigates the following hypothesis: \textit{Can LLMs effectively apply their language pattern capturing capability to OL, which involves automatically extracting and structuring knowledge from natural language text?} To test this hypothesis, we conduct a comprehensive evaluation using the zero-shot prompting method. We evaluate nine different LLM model families for three main OL tasks: term typing, taxonomy discovery, and extraction of non-taxonomic relations. Additionally, the evaluations encompass diverse genres of ontological knowledge, including lexicosemantic knowledge in WordNet, geographical knowledge in GeoNames, and medical knowledge in UMLS.
Hamed Babaei Giglou, Jennifer D'Souza, Sören Auer
2023-07-31T13:27:21Z
http://arxiv.org/abs/2307.16648v2
# LLMs4OL: Large Language Models for Ontology Learning ###### Abstract We propose the LLMs4OL approach, which utilizes Large Language Models (LLMs) for Ontology Learning (OL). LLMs have shown significant advancements in natural language processing, demonstrating their ability to capture complex language patterns in different knowledge domains. Our LLMs4OL paradigm investigates the following hypothesis: _Can LLMs effectively apply their language pattern capturing capability to OL, which involves automatically extracting and structuring knowledge from natural language text?_ To test this hypothesis, we conduct a comprehensive evaluation using the zero-shot prompting method. We evaluate nine different LLM model families for three main OL tasks: term typing, taxonomy discovery, and extraction of non-taxonomic relations. Additionally, the evaluations encompass diverse genres of ontological knowledge, including lexicosemantic knowledge in WordNet, geographical knowledge in GeoNames, and medical knowledge in UMLS. The obtained empirical results show that foundational LLMs are not sufficiently suitable for ontology construction that entails a high degree of reasoning skills and domain expertise. Nevertheless, when effectively fine-tuned they just might work as suitable assistants, alleviating the knowledge acquisition bottleneck, for ontology construction. Keywords:Large Language Models LLMs Ontologies Ontology Learning Prompting Prompt-based Learning. ## 1 Introduction Ontology Learning (OL) is an important field of research in artificial intelligence (AI) and knowledge engineering, as it addresses the challenge of knowledge acquisition and representation in a variety of domains. OL involves automatically identifying terms, types, relations, and potentially axioms from textual information to construct an ontology [29]. Numerous examples of human-expert created ontologies exist, ranging from general-purpose ontologies to domain-specific ones, e.g., Unified Medical Language System (UMLS) [8], WordNet [40], GeoNames [52], Dublin Core Metadata Initiative (DCMI) [64], schema.org [19], etc. Traditional ontology creation relies on manual specification by domain experts, which can be time-consuming, costly, error-prone, and impractical when knowledge constantly evolves or domain experts are unavailable. Consequently, OL techniques have emerged to automatically acquire knowledge from unstructured or semi-structured sources, such as text documents and the web, and transform it into a structured ontology. A quick review of the field shows that traditional approaches to OL are based on lexico-syntactic pattern mining and clustering [65; 41; 36; 25; 4; 20; 59; 53; 27; 2; 23; 22]. In contrast, recent advances in natural language processing (NLP) through Large Language Models (LLMs) [45] offer a promising alternative to traditional OL methods. The ultimate goal of OL is to provide a cost-effective and scalable solution for knowledge acquisition and representation, enabling more efficient and effective decision-making in a range of domains. To this end, we introduce the LLMs4OL paradigm and empirically ground it as a foundational first step. Currently, there is no research explicitly training LLMs for OL. Thus to test LLMs for OL for the first time, we made some experimental considerations. The first being: _Do the characteristics of LLMs justify ontology learning?_ First, LLMs are trained on extensive and diverse text, similar to domain-specific knowledge bases [50]. 
This aligns with the need for ontology developers to have extensive domain knowledge. Second, LLMs are built on the core technology of transformers that have enabled their higher language modeling complexity by facilitating the rapid scaling of their parameters. These parameters represent connections between words, enabling LLMs to comprehend the meaning of unstructured text like sentences or paragraphs. Further, by extrapolating complex linguistic patterns from word connections, LLMs exhibit human-like response capabilities across various tasks, as observed in the field of "emergent" AI. This behavior entails performing tasks beyond their explicit training, such as generating executable code, diverse genre text, and accurate text summaries [57; 62]. Such ability of LLMs to extrapolate patterns from simple word connections, encoding language semantics, is crucial for OL. Ontologies often rely on analyzing and extrapolating structured information connections, such as term-type taxonomies and relations, from unstructured text [17]. Thus LLMs4OL hypothesis of LLMs' fruitful application for OL appeared conceptually justified. LLMs are being developed at a rapid pace. At the time of writing of this work, at least 60 different LLMs are reported [5]. This led to our second main experimental consideration. _Which LLMs to test for the LLMs4OL task hypothesis?_ Empirical validation of various LLMs is crucial for NLP advancements and selecting suitable models for research tasks. Despite impressive performances in diverse NLP tasks, LLM effectiveness varies. For the foundational groundwork of LLMs4OL, we comprehensively selected eight diverse model families based on architecture and reported state-of-the-art performances at the time of this writing. The three main LLM architectures are encoder, decoder, and encoder-decoder. The selected LLMs for validation are: BERT [15] (encoder-only); BLOOM [55], MetaAI's LLaMA [58], OpenAI's GPT-3 [9], GPT-3.5 [45], GPT-4 [46] (all decoder-only); and BART [32] and Google's Flan-T5 [10] (encoder-decoder). Recent studies show that BERT excels in text classification and named entity recognition [15], BART is effective in text generation and summarization [32], and LLaMA demonstrates high accuracy in various NLP tasks, including reason ing, question answering, and code generation [58]. Flan-T5 emphasizes instruction tuning and exhibits strong multi-task performance [10]. BLOOM's unique multilingual approach achieves robust performance in tasks like text classification and sequence tagging [55]. Lastly, the GPT series stands out for its human-like text generation abilities [9, 45, 46]. In this work, we aim to comprehensively unify these LLMs for their effectiveness under the LLMs4OL paradigm for the first time. With the two experimental considerations in place, we now introduce the LLMs4OL paradigm and highlight our contributions. LLMs4OL is centered around the development of ontologies that comprise the following primitives [38]: **1.** a set of strings that describe terminological lexical entries \(L\) for conceptual types; **2.** a set of conceptual types \(T\); **3.** a taxonomy of types in a hierarchy \(H_{T}\); **4.** a set of non-taxonomic relations \(R\) described by their domain and range restrictions arranged in a heterarchy of relations \(H_{R}\); and **5.** a set of axioms \(A\) that describe additional constraints on the ontology and make implicit facts explicit. 
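As a minimal sketch, the five primitives above can be pictured as a simple container data structure; the field names and the toy entries below are our own, purely for illustration, and are not a representation prescribed by the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Ontology:
    """Container mirroring the five primitives: L, T, H_T, R (with H_R), and A."""
    lexical_entries: set[str] = field(default_factory=set)            # L: lexical entries for types
    types: set[str] = field(default_factory=set)                      # T: conceptual types
    type_taxonomy: set[tuple[str, str]] = field(default_factory=set)  # H_T: (supertype, subtype) pairs
    relations: set[tuple[str, str, str]] = field(default_factory=set) # H_R: (head type, relation, tail type)
    axioms: set[str] = field(default_factory=set)                     # A: constraints / inference rules

# Hypothetical usage with made-up geographical types.
onto = Ontology()
onto.types.update({"body of water", "lake"})
onto.type_taxonomy.add(("body of water", "lake"))
```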
The LLMs4OL paradigm, introduced in this work, addresses three core aspects of OL as tasks, outlined as the following research questions (RQs). * How effective are LLMs for automated type discovery to construct an ontology? * How effective are LLMs to recognize a type taxonomy i.e. the "is-a" hierarchy between types? * How effective are LLMs to discover non-taxonomic relations between types? The diversity of the empirical tests of this work are not only w.r.t. LLMs considered, but also the ontological knowledge domains tested for. Specifically, we test LLMs for lexico-semantic knowledge in WordNet [40], geographical knowledge in GeoNames [1], biomedical knowledge in UMLS [7], and web content type representations in schema.org [47]. For our empirical validation of LLMs4OL, we seize the opportunity to include PubMedBERT [18], a domain-specific LLM designed solely for the biomedical domain and thus applicable only to UMLS. This addition complements the eight domain-independent model families introduced earlier as a ninth model type. Summarily, our main contributions are: * The LLMs4OL task paradigm as a conceptual framework for leveraging LLMs for OL. * An implementation of the LLMs4OL concept leveraging tailored prompt templates for zero-shot OL in the context of three specific tasks, viz. term typing, type taxonomic relation discovery, and type non-taxonomic relation discovery. These tasks are evaluated across unique ontological sources well-known in the community. Our code source with templates and datasets per task are released here [https://github.com/HamedBabaei/LLMs4OL](https://github.com/HamedBabaei/LLMs4OL). * A thorough out-of-the-box empirical evaluation of eight state-of-the-art domain-independent LLM types (10 models) and a ninth biomedical domain-specific LLM type (11th model) for their suitability to the various OL tasks considered in this work. Furthermore, the most effective overall LLM is finetuned and subsequently finetuned LLM results are reported for our three OL tasks. ## 2 Related Work There are three avenues of related research: ontology learning from text, prompting LLMs for knowledge, and LLM prompting methods or prompt engineering. **Ontology Learning from Text.** One of the earliest approaches [22] used lexicosyntactic patterns to extract new lexicosemantic concepts and relations from large collections of unstructured text, enhancing WordNet [40]. WordNet is a lexical database comprising a lexical ontology of concepts (nouns, verbs, etc.) and lexico-semantic relations (synonymy, hyponymy, etc.). Hwang [23] proposed an alternative approach for constructing a dynamic ontology specific to an application domain. The method involved iteratively discovering types and taxonomy from unstructured text using a seed set of terms representing high-level domain types. In each iteration, newly discovered specialized types were incorporated, and the algorithm detected relations between linguistic features. The approach utilized a simple ontology algebra based on inheritance hierarchy and set operations. Agirre et al.[2] enhanced WordNet by extracting topically related words from web documents. This unique approach added topical signatures to enrich WordNet. Kietz et al.[27] introduced the On-To-Knowledge system, which utilized a generic core ontology like GermaNet [21] or WordNet as the foundational structure. It aimed to discover a domain-specific ontology from corporate intranet text resources. 
For concept extraction and pruning, it employed statistical term frequency count heuristics, while association rules were applied for relation identification in corporate texts. Roux et al.[53] proposed a method to expand a genetics ontology by reusing existing domain ontologies and enhancing concepts through verb patterns extracted from unstructured text. Their system utilized linguistic tools like part-of-speech taggers and syntactic parsers. Wagner [59] employed statistical analysis of corpora to enrich WordNet in non-English languages by discovering relations, adding new terms to concepts, and acquiring concepts through the automatic acquisition of verb preferences. Moldovan and Girju [42] introduced the Knowledge Acquisition from Text (KAT) system to enrich WordNet's finance domain coverage. Their method involved four stages: (1) discovering new concepts from a seed set of terms, expanding the concept list using dictionaries; (2) identifying lexical patterns from new concepts; (3) discovering relations from lexical patterns; and (4) integrating extracted information into WordNet using a knowledge classification algorithm. In [4], an unsupervised method is presented to enhance ontologies with domain-specific information using NLP techniques such as NER and WSD. The method utilizes a general NER system to uncover a taxonomic hierarchy and employs WSD to enrich existing synsets by querying the internet for new terms and disambiguating them through cooccurrence frequency. Khan and Luo [25] employed clustering techniques to find new terms, utilizing WordNet for typing. They used the self-organizing tree algorithm [16], inspired by molecular evolution, to establish an ontology hierarchy. Additionally, Xu et al. [65] focused on automatically acquiring domain-specific terms and relations through a TFIDF-based single-word term classifier, a lexico-syntactic pattern finder based on known relations and collocations, and a relation extractor utilizing discovered lexico-syntactic patterns. Predominantly, the approaches for OL [60] that stand out so far are based on lexico-syntactic patterns for term and relation extraction as well as clustering for type discovery. Otherwise, they build on seed-term-based bootstrapping methods. The reader is referred to further detailed reviews [6, 37] on this theme for a comprehensive overall methodological picture for OL. Traditional NLP was defined by modular pipelines by which machines were equipped step-wise with annotations at the linguistic, syntactic, and semantic levels to process text. LLMs have ushered in a new era of possibilities for AI systems that obviate the need for modular NLP systems to understand natural language which we tap into for the first time for the OL task in this work. **Prompting LLMs for Knowledge.** LLMs can process and retrieve facts based on their knowledge which makes them good zero-shot learners for various NLP tasks. Prompting LLMs means feeding an input \(x\) using a _template function_\(f_{prompt}(x)\), a textual string prompt input that has some unfilled slots, and then the LLMs are used to probabilistically fill the unfilled information to obtain a final string \(x^{\prime}\), from which the final output \(y\) can be derived [34]. The LAMA: LAnguage Model Analysis [51] benchmark has been introduced as a probing technique for analyzing the factual and commonsense knowledge contained in unidirectional LMs (i.e. Transformer-XL [12]) and bidirectional LMs (i.e. 
BERT and ELMo [48]) with cloze prompt templates from knowledge triples. They demonstrated the potential of pre-trained language models (PLMs) in probing facts - where facts are taken into account as subject-relation-object triples or question-answer pairs - with querying LLMs by converting facts into a cloze template which is used as an input for the LM to fill the missing token. Further studies extended LAMA by the automated discovery of prompts [24], finetuning LLMs for better probing [3, 31, 66], or a purely unsupervised way of probing knowledge from LMs [49]. These studies analyzed LLMs for their ability to encode various linguistic and non-linguistic facts. This analysis was limited to predefined facts that reinforce the traditional linguistic knowledge of the LLMs, and as a result do not reflect how concepts are learned by the LLMs. In response to this limitation, Dalvi et al. [13] put forward a proposal to explore and examine the latent concepts learned by LLMs, offering a fresh perspective on BERT. They defined concepts as "a group of words that are meaningful," i.e. that can be clustered based on relations such as lexical, morphological, etc. In another study [54], they propose the framework _ConceptX_ by extending their studies on seven LLMs in latent space analysis with the alignment of the grouped concepts to human-defined concepts. These works show that using LLMs and accessing the concept's latent spaces, allows us to group concepts and align them to pre-defined types and type relations discovery. **Prompt Engineering.** As a novel discipline, prompt engineering focuses on designing optimal instructions for LLMs to enable successful task performance. Standard prompting [61] represents a fundamental approach for instructing LLMs. It allows users to craft their own customized "self-designed prompts" to effectively interact with LLMs [9] and prompt them to respond to the given prompt instruction straightaway with an answer. Consider the manually crafted FLAN collection [35] addressing diverse NLP tasks other than OL as an exemplar. Notably, the nature of some problems naturally encompass a step-by-step thought process for arriving at the answer. In other words, the problem to be solved can be decomposed as a series of preceding intermediate steps before arriving at the final solution. E.g., arithmetic or reasoning problems. Toward explainability and providing language models in a sense "time to think" helping it respond more accurately, there are advanced prompt engineering methods as well. As a first, as per the Chain-of-Thought (CoT) [63] prompting method, the prompt instruction is so crafted that the LLM is instructed to break down complex tasks as a series of incremental steps leading to the solution. This helps the LLM to reason step-by-step and arrive at a more accurate and logical conclusion. On the other hand Tree-of-Thoughts (ToT) [67] has been introduced for tasks that require exploration or strategic lookahead. ToT generalizes over CoT prompting by exploring thoughts that serve as intermediate steps for general problem-solving with LLMs. Both CoT and ToT unlock complex reasoning capabilities through intermediate reasoning steps in combination with few-shot or zero-shot [28] prompting. Another approach for solving more complex tasks is using decomposed prompting [26], where we can further decompose tasks that are hard for LLMs into simpler solvable sub-tasks and delegate these to sub-task-specific LLMs. 
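For a concrete picture of such cloze-style probing, a minimal sketch with the Hugging Face `transformers` fill-mask pipeline is shown below; the model choice and the example fact are illustrative assumptions, not part of the evaluations reported later.

```python
from transformers import pipeline

# Cloze prompting: a template with one unfilled slot is handed to a masked LM,
# which ranks candidate fillers for the [MASK] token.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

prompt = "The Eiffel Tower is located in [MASK]."
for candidate in fill_mask(prompt, top_k=3):
    print(candidate["token_str"], round(candidate["score"], 3))
```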
Given the LLMs4OL task paradigm introduced in this work, complex prompting is not a primary concern, as our current focus is on the initial exploration of the task to identify the areas where we need further improvement. We want to understand how much we have accomplished so far before delving into more complex techniques like CoT, ToT, and decomposed prompting. Once we have a clearer picture of the model's capabilities and limitations in a standard prompting setting, we can then consider other than standard prompt engineering approaches by formulating OL as a stepwise reasoning task. ## 3 The LLMs4OL Task Paradigm The Large Language Models for Ontology Learning (LLMs4OL) task paradigm offers a conceptual framework to accelerate the time-consuming and expensive construction of ontologies exclusively by domain experts to a level playing field involving powerful AI methods such as LLMs for high-quality OL results; consequently and ideally involving domains experts only in validation cycles. In theory, with the right formulations, all tasks pertinent to OL fit within the LLMs4OL task paradigm. OL tasks are based on ontology primitives [38], including lexical entries \(L\), conceptual types \(T\), a hierarchical taxonomy of types \(H_{T}\), non-taxonomic relations \(R\) in a heterarchy \(H_{R}\), and a set of axioms \(A\) to describe the ontology's constraints and inference rules. To address these primitives, OL tasks [44] include: 1) Corpus preparation - selecting and collecting source texts for ontology building. 2) Terminology extraction - identifying and extracting relevant terms. 3) Term typing - grouping similar terms into conceptual types. 4) Taxonomy construction - establishing "is-a" hierarchies between types. 5) Relationship extraction - identifying semantic relationships beyond "is a." 6) Axiom discovery - finding constraints and inference rules for the ontology. This set of six tasks forms the LLMs4OL task paradigm. See Figure 1 for the proposed LLMs4OL conceptual framework. In this work, we empirically ground three core OL tasks using LLMs as a foundational basis for future research. However, traditional AI paradigms rely on testing models only on explicitly trained tasks, which is not the case for LLMs. Instead, we test LLMs for OL as an "emergent" behavior [57, 62], where they demonstrate the capacity to generate responses on a wide range of tasks despite lacking explicit training. The key to unraveling the emergent abilities of LLMs is to prompt them for their knowledge, as popularized by GPT-3 [9], via carefully designed prompts. As discussed earlier (see section 2), prompt engineering for LLMs is a new AI sub-discipline. In this process, a pre-trained language model receives a prompt, such as a natural language statement, to generate responses without further training or gradient updates to its parameters [34]. Prompts can be designed in two main types based on the underlying LLM pretraining objective: cloze prompts [50, 11], which involve filling in blanks in an incomplete sentence or passage and suit masked language modeling pre-training; and prefix prompts [33, 30], which generate text following a given starting phrase and offer more design adaptability to the underlying model. The earlier introduced LLMs4OL paradigm is empirically validated for three select OL tasks using respective prompt functions \(f_{prompt}(.)\) suited to each task and model. **Task A -**_Term Typing_. A generalized type is discovered for a lexical term. 
The generic cloze prompt template is \(f^{A}_{c-prompt}(L):=[S?]\). \([L]\)\([P_{domain}]\)\(is\)\(a\)\([MASK]\). where \(S\) is an optional context sentence, \(L\) is the lexical term prompted for, \(P_{domain}\) is a domain specification, and the special \(MASK\) token is the type output expected from the model. Since prompt design is an important factor that determines how the LLM responds, eight different prompt template instantiations of the generic template were leveraged with final results reported Figure 1: The LLMs4OL task paradigm is an end-to-end framework for ontology learning in various knowledge domains, i.e. lexicosemantics (WordNet), geography (GeoNames), biomedicine (NCI, MEDICIN, SNOMEDCT), and web content types (schema.org). The three OL tasks empirically validated in this work are depicted within the blue arrow, aligned with the greater LLMs4OL paradigm. for the best template. E.g., if WordNet is the base ontology, the part-of-speech type for the lexical term is prompted. In this case, template 1 is "[S]. [L] POS is a [MASK]." Note here "[\(P_{domain}\)]" is POS. Template 2 is "[S]. [L] part of speech is a [MASK]." Note here "[\(P_{domain}\)]" is "part of speech." In a similar manner, eight different prompt variants from the generic template were created. However, the specification of "[\(P_{domain}\)]" depended on the ontology's knowledge domain. The prefix prompt template reuses the cloze prompt template but appends an additional "instruction" sentence and replaces the special [MASK] token with a blank or a "?" symbol. Generically, it is \(f^{A}_{p-prompt}(T)=[instruction]+f^{A}_{c-prompt}(T)\), where the instruction is "Perform a sentence completion on the following sentence:" Based on the eight variations created from the generic cloze template prompt, subsequently eight template variations were created for the prefix prompting of the LLMs as well with best template results reported. **Task B -**_Taxonomy Discovery_. Here a taxonomic hierarchy between pairs of types is discovered. The generic cloze prompt template is \(f^{B}_{c-prompt}(a,b):=[a|b]\;is\;[P_{hierarchy}]\;of\;[b|a]\). \(This\;statement\;is\;[MASK]\). Where \((a,b)\) or \((b,a)\) are type pairs, \(P_{hierarchy}\) indicates superclass relations if the template is initialized for top-down taxonomy discovery, otherwise indicates subclass relations if the template is initialized for bottom-up taxonomy discovery. In Task B, the expected model output for the special [MASK] token for a given type pair was true or false. Similar to term typing, eight template variations of the generic template were created. Four of which were predicated on the top-down taxonomy discovery. E.g., "[a] is the superclass of [b]. This statement is [MASK]." Note here, [\(P_{hierarchy}\)] is "superclass". Other three templates were based on [\(P_{hierarchy}\)] \(\in\) parent class, supertype, ancestor class. And four more template instantiations predicated on the bottom-up taxonomy discovery were based on [\(P_{hierarchy}\)] \(\in\) subclass, child class, subtype, descendant class. Thus eight experiments per template instantiation for the applicable LLM were run and the results from the best template were reported. The prefix prompt template, similarly, reuses the cloze prompt template with the [MASK] token replaced with a blank or "?" symbol. 
It is \(f^{B}_{p-prompt}(a,b)=[instruction]+f^{B}_{c-prompt}(a,b)\), with instruction "Identify whether the following statement is true or false:" **Task C -**_Non-Taxonomic Relation Extraction_. This task discovers non-taxonomic semantic heterarchical relations between types. The cloze prompt template is \(f^{C}_{c-prompt}(h,r,t):=[h]\;is\;[r]\;[t]\). \(This\;statement\;is\;[MASK]\). Where \(h\) is a head type, \(t\) is a tail type, and \(r\) is a non-taxonomic relationship between \(h\) and \(r\). To support the discovery of a heterarchy that can consist of a 1-M relational cardinality, for a given relation, all possible type pairs of the ontology were created. The expected output for the [MASK] token was again true or false. Note, unlike in Task A and B, the given template was used as is and no variations of it were created. Again, the prefix prompt template reuses the cloze prompt template as the other tasks, with instructions similar to task B. It is \(f^{C}_{p-prompt}(h,r,t)=[instruction]+f^{C}_{c-prompt}(h,r,t)\) ## 4 LLMs4OL - Three Ontology Learning Tasks Evaluations ### Evaluation Datasets - Ontological Knowledge Sources To comprehensively assess LLMs for the three OL tasks presented in the previous section, we cover a variety of ontological knowledge domain sources. Generally, across the tasks, four knowledge domains are represented, i.e. lexicosemantic - WordNet [40], geographical - GeoNames [1], biomedicine - Unified Medical Language System (UMLS) [7] teased out as the National Cancer Institute (NCI) [43], MEDCIN [39], and Systematized Nomenclature of Medicine - Clinical Terms United States (SNOMEDCT_US) [56] subontologies, and content representations in the web - schema.org [47]. Tasks A, B, and C applied only to UMLS. In other words, the ontology has a supporting knowledge base with terms that can be leveraged in the test prompts for term typing as Task A, taxonomic hierarchical relational prompts as Task B, and non-taxonomic heterarchical relational prompts as Task C. The GeoNames source came with a knowledge base of terms instantiated for types and taxonomic relations, therefore, was leveraged in the Task A and B as OL tests with LLMs of this work. The WordNet source could be leveraged only in Task A since it came with an instantiated collection of lexical terms for syntactic types. It was not applicable in the Tasks B and C for OL defined in this work since the semantic relations in WordNet are lexicosemantic, in other words, between terms directly and not their types. Finally, since the schema.org source offered only typed taxonomies as standardized downloads, it was leveraged only in the OL Task B of this work. In this case, we refrained from scraping the web for instantiations of the schema.org taxonomy. For all other ontological knowledge sources considered in this work that were relevant to Task A, the term instantiations were obtained directly from the source. This facilitates replicating our Task A dataset easily. Detailed information on the ontological knowledge sources per task with relevant dataset statistics are presented next. **Task A Datasets.** Table 1 shows statistical insights for the Task A dataset where we used terms from WordNet, GeoNames, and UMLS. For WordNet we used the WN18RR data dump [14] that is derived from the original WordNet but released as a benchmark dataset with precreated train and test splits. Overall, it consists of 40,943 terms with 18 different relation types between the terms and four term types (noun, verb, adverb, adjective). 
We combined the original validation and test sets as a single test dataset. GeoNames comprises 680 categories of geographical locations, which are classified into 9 higher-level categories, e.g. H for stream, lake, and sea, and R for road and railroad. UMLS contains almost three million concepts from various sources which are linked together by semantic relationships. UMLS is unique in that it is a greater semantic ontological network that subsumes other biomedical problem-domain restricted subontolo gies. We grounded the term typing task to the semantic spaces of three select subontological sources,i.e. NCI, MEDCIN, and SNOMEDCT_US. The train datasets were reserved for LLM fine-tuning. Among the 11 models, we selected the most promising one based on its zero-shot performance. The test datasets were used for evaluations in both zero-shot and fine-tuned settings. **Task B Datasets.** From GeoNames, UMLS, and schema.org we obtained 689, 127, and 797 term types forming type taxonomies. Our test dataset was constructed as type pairs, where half represented the taxonomic hierarchy while the other half were not in a taxonomy. This is based on the following formulations. \[\forall(a\in T_{n},b\in T_{n+1})\longmapsto(aRb\wedge b\neg Ra)\] \[\forall(a\in T_{n},b\in T_{n+1},c\in T_{n+2});(aRb\wedge bRc)\longmapsto aRc\] \[\forall(a\in T_{n},b\in T_{n+1},c\in T_{n+2});(c\neg Rb\wedge b\neg Ra)\longmapsto c\neg Ra\] Where \(a\), \(b\), and \(c\) are types at different levels in the hierarchy. \(T\) is a collection of types at a particular level in the taxonomy, where \(n+2>n+1>n\) and \(n\) is the root. The symbol \(R\) represents "\(a\) is a super class of type \(b\)" as a true taxonomic relation. Conversely, the \(\neg R\) represents "\(b\) is a super class of type \(a\)" as a false taxonomic relation. Furthermore, transitive taxonomic relations, \((aRb\wedge bRc)\longmapsto aRc\), were also extracted as true relations, while their converse, i.e. \((c\neg Rb\wedge b\neg Ra)\longmapsto c\neg Ra\) were false relations. **Task C Datasets.** As alluded to earlier, Task C evaluations, i.e. non-taxonomic relations discovery, were relegated to the only available ontological knowledge source among those we considered i.e. UMLS. It reports 53 non-taxonomic relations across its 127 term types. The testing dataset comprised all pairs of types for each relation, where for any given relation some pairs are true while the rest are false candidates. Task B and Task C datasets' statistics are in Table 2. ### Evaluation Models - Large Language Models (LLMs) As already introduced earlier, in this work, we comprehensively evaluate eight main types of domain-independent LLMs reported as state-of-the-art for different tasks in the community. They are: BERT [15] as an encoder-only architecture, BLOOM [55], LLaMA [58], GPT-3 [9], GPT-3.5 [45], and GPT-4 [46] as decoder-only models, and finally BART [32] and Flan-T5 [10] as encoder-decoder models. \begin{table} \begin{tabular}{l r r r r} \hline Parameter & **WordNet** & **GeoNames** & **NCI** & **MEDCIN** & **SNOMEDCT\_US** \\ \hline _Train Set Size_ & 40,559 & 8,078,865 96,177 & 277,028 & 278,374 \\ _Test Set Size_ & 9,470 & 702,510 24,045 & 69,258 & 69,594 \\ _Types_ & 4 & 680 & 125 & 87 & 125 \\ \hline \end{tabular} \end{table} Table 1: Task A term typing dataset counts across three core ontological knowledge sources, i.e. WordNet, GeoNames, and UMLS, where for Task A UMLS is represented only by the NCI, MEDCIN, and SNOMEDCT_US subontological sources. 
The unique term types per source that defined Task A Ontology Learning is also provided. Note these LLMs are released at varying parameter sizes. Thus qualified by the size in terms of parameters written in parenthesis, in all, we evaluate seven LLMs: 1. BERT-Large (340M), 2. BART-Large (400M), 3. Flan-T5-Large (780M), 4. Flan-T5-XL (3B), 5. BLOOM-1b7 (1.7B), 6. BLOOM-3b (3B), 7. GPT-3 (175B), 8. GPT-3.5 (174B), 9. LLaMA (7B), and GPT-4 (\(>\)1T). Additionally, we also test an eleventh biomedical domain-specific model PubMedBERT [18]. In this work, since we propose the LLMs4OL paradigm for the first time, in a sense postulating OL as an emergent ability of LLMs, it is important for us to test different LLMs on the new task. Evaluating different LLMs supports: 1) Performance comparison - this allows us to identify which models are effective for OL, 2) Model improvement - toward OL one can identify areas where the models need improvement, and 3) Research advancement - with our results from testing and comparing different models, researchers interested in OL could potentially identify new areas of research and develop new techniques for improving LLMs. ### Evaluations #### 4.3.1 Metrics. Evaluations for Task A are reported as the mean average precision at k (MAP@K), where k = 1, since this metric was noted as being best suited to the task. Specifically, in our case, for term typing, MAP@1 measures the average precision of the top-1 ranked term types returned by an LLM for prompts initialized with terms from the evaluation set. And evaluations for Tasks B and C are reported in terms of the standard F1-score based on precision and recall. #### 4.3.2 Results - Three Ontology Learning Tasks Zero-shot Evaluations. The per task overall evaluations are reported in Table 3. The three main rows of the table marked by alphabets A, B, and C correspond to term typing, type taxonomy discovery, and type non-taxonomic relational hetterchy discovery results, respectively. The five subrows against Task A shows term typing results for WordNet, GeoNames, and the three UMLS subontologies, viz. NCI, SNOMEDCT_US, and MEDCIN. The three subrows against Task B shows type taxonomy discovery results for GeoNames, UMLS, and schema.org, respectively. Task C evaluation \begin{table} \begin{tabular}{c c c c c} \hline **Task** & Parameter & **GeoNames** & **UMLS** & **schema.org** \\ \hline \multirow{3}{*}{_Task B_} & Types & 689 & 127 & 797 \\ & Levels & 2 & 3 & 6 \\ & Positive/Negative Samples & 680/680 & 254/254 & 2,670/2,670 \\ & _Train/Test split_ & 272/1,088 & 101/407 & 1,086/4,727 \\ \hline \multirow{3}{*}{_Task C_} & Non-Taxonomic Relations & - & 53 & - \\ & Positive/Negative Samples & - & 5,641/1,896 & - \\ \cline{1-1} & _Train/Test Split_ & - & 1,507/6,030 & - \\ \hline \end{tabular} \end{table} Table 2: Dataset statistics as counts per reported parameter for Task B type taxonomic hierarchy discovery and Task C type non-taxonomic hetterarchy discovery across the pertinent ontological knowledge sources respectively per task. results are provided only for UMLS. We first examine the results in the zero-shot setting, i.e. for LLMs evaluated out-of-the-box, w.r.t. three RQs. **RQ1: How effective are LLMs for Task A, i.e. automated type discovery?** We examine this question given the results in 5 subrows against the row A, i.e. corresponding to the various ontological datasets evaluated for Task A. 
Of the five ontological sources, the highest term typing results were achieved on the 4-typed WordNet at 91.7% MAP@1 by GPT-3.5. This high performance can be attributed in part to the simple type space of WordNet with only 4 types. However, looking across the other LLMs evaluated on WordNet, in particular even GPT-3, scores in the range of 30% MAP@1 seem to be the norm with a low of 2.2% by BART-Large. Thus LLMs that report high scores on WordNet should be seen as more amenable to syntactic typing regardless of the WordNet simple type space. Considering all the ontological sources, Geonames presents the most fine-grained types taxonomy of 680 types. Despite this, the best result obtained on this source is 39.4% from GPT-4 with BERT-Large second at a close 38.3%. This is better than the typing evaluations on the three biomedical datasets. Even the domain-specific PubMedBERT underperforms. In this regard, domain-independent models with large-scale parameters such a BLOOM (3B) are more amenable to this complex task. Since biomedicine entails deeper domain-specific semantics, we hypothesize better performance not just from domain-specific finetuning but also strategically for task-specific reasoning. The results overview is: 91.7% WordNet by GPT-3.5 \(>\) 39.4% GeoNames by GPT-4 \(>\) 37.7% SNOMEDCT_US by BLOOM-3b \(>\) 29.8% MEDCIN by BLOOM-3b \(>\) 16.1% NCI by GPT-4. \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline & & & & & \multicolumn{4}{c}{Zero-Shot Testing} & \multicolumn{4}{c}{Finetuned} \\ \cline{4-13} \multirow{-2}{*}{**X1**} & \multirow{-2}{*}{**X2**} & \multirow{-2}{*}{**X3**} & \multirow{-2}{*}{**X4**} & \multirow{-2}{*}{**X5**} & \multirow{-2}{*}{**X6**} & \multirow{-2}{*}{**X7**} & \multirow{-2}{*}{**X8**} & \multirow{-2}{*}{**X9**} & \multirow{-2}{*}{**X10**} & \multirow{-2}{*}{**X10**} & \multirow{-2}{*}{**X10**} & \multirow{-2}{*}{**X10**} \\ \cline{3-3} \cline{8-13} **RQ2: How effective are LLMs to recognize a type taxonomy i.e. the "is-a" hierarchy between types?** We examine this question given the results in the 3 subrows against the main row B, i.e. corresponding to the three ontological sources evaluated for Task B. The highest result was achieved for UMLS by GPT-4 at 78.1%. Of the open-source models, Flan-T5-XL achieved the best result at 64.3%. Thus for the term taxonomy discovery LLMs on average have proven most effective in the zero-shot setting on the biomedical domain. The results overview is: 78.1% UMLS by GPT-4 \(>\) 74.4% schema.org by GPT-3.5 \(>\) 67.8% GeoNames by GPT-3.5. Note the three GPT models were not open-sourced and thus we tested them with a paid subscription. For the open-source models, the results overview is: 64.3% UMLS by Flan-T5-XL \(>\) 59.6% GeoNames by Flan-T5-XL \(>\) 54.8% schema.org by Flan-T5-Large. **RQ3: How effective are LLMs to discover non-taxonomic relations between types?** We examine this question given the results in Table 3 row for Task C, i.e. for UMLS. The best result achieved is 49.5% by Flan-T5-XL. We consider this a fairly good result over a sizeable set of 7,537 type pairs that are in true non-taxonomic relations or are false pairs. Finally, over all the three tasks considered under the LLMs4OL paradigm, term typing proved the hardest obtaining the lowest overall results for most of its ontological sources tested including the biomedical domain in particular. Additionally in our analysis, GPT, Flan-T5, and BLOOM variants showed improved scores with increase in parameters, respectively. 
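For reference, MAP@1 in this top-1 setting reduces to the fraction of terms whose highest-ranked predicted type matches a gold type; a minimal sketch (with hypothetical predictions, not the actual model outputs) is:

```python
def map_at_1(predictions, gold_types):
    """MAP@1: fraction of terms whose top-1 predicted type is among the gold types."""
    hits = sum(1 for term, top1 in predictions.items()
               if top1 in gold_types.get(term, set()))
    return hits / max(len(predictions), 1)

# Hypothetical example, illustrative only.
gold = {"run": {"verb"}, "lake": {"noun"}, "quickly": {"adverb"}}
pred = {"run": "verb", "lake": "noun", "quickly": "adjective"}
print(map_at_1(pred, gold))   # 2/3 ~= 0.667
```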
This held true for the closed-source GPT models, i.e. GPT-3 (175B) and GPT-3.5 (175B) to GPT-4 (\(>\)1T), and the open-source models, i.e. Flan-T5-Large (780M) to Flan-T5-XL (3B) and BLOOM from 1.7B to 3B. Thus it seems apparent that with an increased number of LLM parameters, we can expect an improvement in ontology learning. **Results - Three Ontology Learning Tasks Finetuned LLM Evaluations.** Our zero-shot test results indicate that while LLMs seem promising for OL, they would need task-specific finetuning to be a practically viable solution. To this end, we adopt the method of "instruction tuning" proposed in the FLAN collection, which is the only known systematically deconstructed, effective way to finetune LLMs [35]. For finetuning, we choose the Flan-T5 LLM for two reasons: 1) it is open-source: we intend to foster future research directions for models not hidden behind paywalls to aid in democratizing LLM research, and 2) it showed consistently good performance across all tasks. The finetuning instructions were instantiated from a small selection of eight samples of each knowledge source's reserved training set and fed into the finetuning workflow shown in Figure 2. Figure 2: An illustration of the LLM finetuning workflow on tasks for ontology learning. The finetuned Flan models' results (see last two columns in Table 3) are significantly boosted across almost all tasks. For Task A, we observed an average improvement of 25% from zero-shot to the finetuned model for both Flan-T5 variants. Notably, SNOMEDCT_US showed the least improvement at 9%, while WordNet showed the most improvement at 45%. For Task B we marked an average improvement of 18%, and for Task C 3%. An illustration of the results in Figure 3 shows that, on average, finetuned models, even with fewer parameters, outperform models with 1000x or more parameters across the three OL tasks. These insights appear crucial to expedite developmental research progress toward practical tools for OL using LLMs, which we plan to leverage in our future work. ## 5 Conclusions and Future Directions Various initiatives benchmark LLM performance, revealing new task abilities [57, 62]. These benchmarks advance computer science's understanding of LLMs. We explore LLMs' potential for Ontology Learning [17, 38] through our introduced conceptual framework, LLMs4OL. Extensive experiments on 11 LLMs across three OL tasks demonstrate the paradigm's proof of concept. Our code-base facilitates replication and extension of methods for testing new LLMs. Our empirical results are promising and pave the way for future work on OL. Future research directions in the field of OL with LLMs can focus on several key areas. First, there is a need to enhance LLMs specifically for ontology learning tasks, exploring novel architectures and fine-tuning to capture ontological structures better. Second, expanding the evaluation to cover diverse knowledge domains beyond the ones examined in the current work would provide a broader understanding of LLMs' generalizability. Third, hybrid approaches that combine LLMs with traditional ontology learning techniques, such as lexico-syntactic pattern mining and clustering, could lead to more accurate and comprehensive ontologies. Fourth, further research can delve into the extraction of specific semantic relations, like part-whole relationships or causality, to enhance the expressiveness of learned ontologies. 
Standardizing evaluation metrics, creating benchmark datasets, exploring dynamic ontology evolution, and domain-specific learning are important directions. Additionally, integrating human-in-the-loop approaches with expert involvement would enhance ontology relevance and accuracy. Exploring these research directions will advance LLM-based ontology learning, enhancing knowledge acquisition and representation across domains. _Supplemental Material Statement:_ Our LLM templates, detailed results, and codebase are publicly released as supplemental material on Github [https://github.com/HamedBabaei/LLMs4OL](https://github.com/HamedBabaei/LLMs4OL). ## Author Contributions Hamed Babaei Giglou: Conceptualization, Methodology, Software, Validation, Investigation, Resources, Data Curation, Writing - Original Draft, Visualization. Jennifer D'Souza: Conceptualization, Methodology, Investigation, Resources, Writing - Original Draft, Writing - Review & Editing, Supervision, Project administration, Funding acquisition. Soren Auer: Conceptualization, Methodology, Investigation, Resources, Review & Editing, Supervision, Project administration, Funding acquisition. ## Acknowledgements A 16-page final version of this paper has been accepted for publication in the research track of the 22nd International Semantic Web Conference (ISWC 2023). We thank the anonymous reviewers for their detailed and insightful comments on an earlier draft of the paper. This work was jointly supported by the German BMBF project SCINEXT (ID 01IS22070), DFG NFDI4DataScience (ID 460234259), and ERC ScienceGraph (ID 819536).
2309.10965
DPpack: An R Package for Differentially Private Statistical Analysis and Machine Learning
Differential privacy (DP) is the state-of-the-art framework for guaranteeing privacy for individuals when releasing aggregated statistics or building statistical/machine learning models from data. We develop the open-source R package DPpack that provides a large toolkit of differentially private analysis. The current version of DPpack implements three popular mechanisms for ensuring DP: Laplace, Gaussian, and exponential. Beyond that, DPpack provides a large toolkit of easily accessible privacy-preserving descriptive statistics functions. These include mean, variance, covariance, and quantiles, as well as histograms and contingency tables. Finally, DPpack provides user-friendly implementation of privacy-preserving versions of logistic regression, SVM, and linear regression, as well as differentially private hyperparameter tuning for each of these models. This extensive collection of implemented differentially private statistics and models permits hassle-free utilization of differential privacy principles in commonly performed statistical analysis. We plan to continue developing DPpack and make it more comprehensive by including more differentially private machine learning techniques, statistical modeling and inference in the future.
Spencer Giddens, Fang Liu
2023-09-19T23:36:11Z
http://arxiv.org/abs/2309.10965v1
# DPpack: An R Package for Differentially Private Statistical Analysis and Machine Learning ###### Abstract Differential privacy (DP) is the state-of-the-art framework for guaranteeing privacy for individuals when releasing aggregated statistics or building statistical/machine learning models from data. We develop the open-source R package _DPpack_ that provides a large toolkit of differentially private analysis. The current version of _DPpack_ implements three popular mechanisms for ensuring DP: Laplace, Gaussian, and exponential. Beyond that, _DPpack_ provides a large toolkit of easily accessible privacy-preserving descriptive statistics functions. These include mean, variance, covariance, and quantiles, as well as histograms and contingency tables. Finally, _DPpack_ provides user-friendly implementation of privacy-preserving versions of logistic regression, SVM, and linear regression, as well as differentially private hyperparameter tuning for each of these models. This extensive collection of implemented differentially private statistics and models permits hassle-free utilization of differential privacy principles in commonly performed statistical analysis. We plan to continue developing _DPpack_ and make it more comprehensive by including more differentially private machine learning techniques, statistical modeling and inference in the future. **Keywords:** differential privacy, empirical risk minimization, support vector machines, privacy-preserving, R, randomized mechanism, regression ## 1 Introduction Data is an invaluable resource harnessed to inform impactful technology development and guide decision-making. However, utilizing data that contain personally sensitive information (e.g., medical or financial records) poses privacy challenges. Anonymized datasets, as well as statistics and models derived from sensitive datasets are susceptible to attacks that may result in the leakage of private information (Narayanan and Shmatikov, 2008; Ahn, 2015; Sweeney, 2015; Shokri et al., 2017; Zhao et al., 2021). As technology continues to evolve to become more data-reliant, privacy issues will become increasingly more prevalent, necessitating easy access to tools that provide privacy guarantees when releasing information from sensitive datasets. Differential privacy (DP) (Dwork et al., 2006b) is a popular state-of-the-art framework for providing provable guarantees of privacy for outputs from a statistical or machine learning (ML) procedure. A variety of randomized procedures and mechanisms exist to achieve DP guarantees for a wide range of analyses. These include, to list some examples, summary statistics (Dwork et al., 2006b; Smith, 2011), empirical risk minimization (Chaudhuri et al., 2011; Kifer et al., 2012), classifiers (Chaudhuri and Monteleoni, 2009; Vaidya et al., 2013), deep learning (Abadi et al., 2016; Bu et al., 2020), Bayesian networks (Zhang et al., 2017a), Bayesian procedures (Dimitrakakis et al., 2014; Wang et al., 2015b), statistical hypothesis testing (Gaboardi et al., 2016; Couch et al., 2019; Barrientos et al., 2019), confidence interval construction (Karwa and Vadhan, 2018; Wang et al., 2019), and synthetic data generation (Zhang et al., 2017b; Torkzadehmahani et al., 2019; Bowen and Liu, 2020). Privacy-preserving analysis has also been adopted by many companies in the technology sector, including Google (Guevara et al., 2020), Apple (Apple, 2017), Meta (Nayak, 2020), as well as government agencies like the U.S. Census Bureau (Bureau, 2021). 
Given the popularity of DP, many open-source projects have been devoted to developing tools and code for privacy-preserving analysis with DP. The _OpenDP Library_ (The OpenDP Team), Google's DP libraries (Google) (with accompanying OpenMined Python wrapper _PyDP_ (OpenMined)), the _TensorFlow Privacy_ library (TensorFlow), and IBM's DP library _diffprivlib_ (Holohan et al., 2019) collectively provide tools for DP analysis for the Rust, Python, C++, Go, and Java programming languages. This paper presents the _DPpack_ R package ([https://github.com/sgiddens/DPpack](https://github.com/sgiddens/DPpack)) (Giddens and Liu, 2023), which provides convenient implementations of common DP procedures. R is arguably the most popular language among statisticians. Prior to _DPpack_, the R packages _diffpriv_ (Rubinstein and Alda, 2017) and _PrivateLR_ (Vinterbo, 2018) were the only available DP R packages. Both of these packages are limited in scope compared to _DPpack_; _PrivateLR_, in fact, implements only a single function for DP logistic regression. Additionally, neither package has seen an update in the last five years. Meanwhile, _DPpack_ has been downloaded from CRAN by R users \(\sim\)4,000 times as of September 2023, averaging 242 downloads per complete month of being available, overtaking _diffpriv_ and _PrivateLR_ to become the most downloaded DP-focused R package in the past 10 months. While _diffpriv_ does implement several randomized mechanisms for DP, _DPpack_ goes well beyond these basic mechanisms by specifically implementing privacy-preserving versions of various commonly used descriptive statistics, as well as statistical analysis and machine learning procedures. The implemented functions are accessible even to individuals without a strong background in DP because sensitivity calculations are handled internally based on proven theoretical results and user-provided bounds on the input data. This makes _DPpack_ more user-friendly than _diffpriv_ for non-expert users. Even for DP experts, _DPpack_ is attractive due to its scope. No other R package implements as extensive a collection of privacy-preserving functions. We plan to continue to develop and update the package by adding more privacy-preserving analysis procedures in the future. ## 2 Capabilities ### Randomized Mechanisms _DPpack_ provides the LaplaceMechanism, GaussianMechanism, and ExponentialMechanism functions for implementing general mechanisms for ensuring DP for a desired output. The LaplaceMechanism function implements the Laplace mechanism (Dwork et al., 2006b) for ensuring \(\epsilon\)-DP for a statistical analysis or function by adding to the output Laplacian noise with a scale parameter dependent on the function's \(\ell_{1}\)-global sensitivity and the privacy budget \(\epsilon\). The function generalizes using DP composition to multidimensional function inputs, in which case it allows the user to specify the allocation of the privacy budget across the multiple computations. The GaussianMechanism function implements the Gaussian mechanism (Dwork and Roth, 2014). It can be used to ensure either approximate \((\epsilon,\delta)\)-DP (Dwork et al., 2006a), or probabilistic \((\epsilon,\delta)\)-DP (Machanavajjhala et al., 2008; Liu, 2019a), depending on user input. It adds Gaussian noise with a variance dependent on \(\epsilon\), \(\delta\), and the function's \(\ell_{2}\)-global sensitivity, and can be generalized to multidimensional inputs. 
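For a quick sense of how these two functions are called in practice, the following minimal sketch (not taken from the paper; the data, bounds, and budget values are purely illustrative) sanitizes the same sample mean with each mechanism. Full walkthroughs, including budget allocation across multiple statistics, are given in Appendix B.

```r
library(DPpack)
set.seed(1)

# Hypothetical sensitive data with assumed public bounds [c0, c1]
n <- 100
c0 <- 0
c1 <- 1
D <- runif(n, c0, c1)
sensitivity <- (c1 - c0) / n  # global sensitivity of a scalar mean (l1 = l2 here)

# epsilon-DP release of the mean via the Laplace mechanism
LaplaceMechanism(mean(D), 1, sensitivity)

# approximate (epsilon, delta)-DP release via the Gaussian mechanism
# (epsilon must lie in (0, 1) for approximate DP)
GaussianMechanism(mean(D), 0.9, 0.01, sensitivity)
```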
The ExponentialMechanism function implements the exponential mechanism (McSherry and Talwar, 2007), which guarantees \(\epsilon\)-DP and returns a result randomly from a set of possible candidates, with probability proportional to its "utility." This allows for DP releases of non-numeric information, to which adding numerical noise would be nonsensical. ### Privacy-preserving Descriptive Statistics One of the unique aspects of _DPpack_ compared to the other DP R packages is that it provides direct support for DP-satisfying versions of many common descriptive statistics. The meanDP, varDP, covDP, and sdDP functions of _DPpack_ provide DP counterparts to the analogously named R functions for calculating mean, variance, covariance, and standard deviation of a data vector. Pooled variances and covariances are also available with pooledVarDP and pooledCovDP. Through function arguments, a user specifies whether the output should satisfy \(\epsilon\)-DP via the Laplace mechanism or \((\epsilon,\delta)\)-DP via the Gaussian mechanism and global bounds on the data, from which appropriate \(\ell_{p}\)-global sensitivities are computed internally based on known theoretical results (Liu, 2019b). The histogramDP and tableDP functions compute DP histograms and contingency tables. Similar to previously described statistics, users may specify which mechanism and type of DP are used for the output, and additional arguments help format the output. Global bounds on the data are unnecessary as the global sensitivity is a fixed constant for frequency output. _DPpack_ implements differentially private quantiles and medians using the quantileDP and medianDP functions, respectively. By again only requiring the user to input global bounds on the data, these release \(\epsilon\)-DP values via the exponential mechanism using the private quantile algorithm (Smith, 2011; Gillenwater et al., 2021). ### Privacy-preserving Statistical Models and Machine Learning Empirical risk minimization (ERM) is a statistical learning principle to find the best model from a given set of models. The goal of ERM is to minimize the empirical risk that measures the goodness of fit of a model to the training data. We implemented privacy-preserving procedures for a few ERM problems in supervised learning. Specifically, for binary classification, we create the EmpiricalRiskMinimizationDP.CMS class by employing the methods from Chaudhuri et al. (2011) for guaranteeing \(\epsilon\)-DP for the output of training via ERM under necessary regularity conditions. Either the output or objective perturbation methods can be used. For linear regression, we employ the methods from Kifer et al. (2012) to create the EmpiricalRiskMinimizationDP.KST class for guaranteeing either \(\epsilon\)-DP or \((\epsilon,\delta)\)-DP under necessary regularity conditions. The intent is that these classes are used through an inheritance structure to implement binary classifiers or regressors as instances of ERM. Specifically, logistic regression and support vector machine (SVM) models with \(\epsilon\)-DP guarantees (Chaudhuri and Monteleoni, 2009; Chaudhuri et al., 2011) are implemented via the LogisticRegressionDP and svmDP classes, respectively. Each of these classes inherits from EmpiricalRiskMinimizationDP.CMS. Released trained model coefficients or predictions made on new data satisfy \(\epsilon\)-DP. 
Linear regression of either \(\epsilon\)-DP or \((\epsilon,\delta)\)-DP (Kifer et al., 2012) is implemented in _DPpack_ via the LinearRegressionDP class, which inherits from the EmpiricalRiskMinimizationDP.KST class. Released trained model coefficients from those classes or predictions made on new data using these coefficients also satisfy user-specified DP guarantees. The svmDP class currently supports \(\epsilon\)-DP training via the linear and radial (Gaussian) kernels, with the radial kernel method being based on an approximation technique from Rahimi and Recht (2007, 2008); Chaudhuri et al. (2011). Training with individually weighted loss function contributions with \(\epsilon\)-DP guarantees is also supported (Giddens et al., 2023). Each of these methods is user-friendly, even to those without a strong DP background, as they only require the user to specify certain hyperparameters (such as \(\epsilon\), \(\delta\), and \(\gamma\)) and global bounds on each feature contained in \(\mathbf{x}_{i}\) (and \(y_{i}\), in the case of linear regression). Sensitivity calculations and scaling necessary to satisfy regularity conditions for DP guarantees are handled internally. When the selection of hyperparameter values (e.g., the regularization constant in the ERM loss function) uses information from the sensitive dataset itself, the incurred privacy loss needs to be accounted for. _DPpack_ provides the tune_classification_model function for privacy-preserving hyperparameter tuning for binary classifiers based on the exponential mechanism (Chaudhuri et al., 2011) and the tune_linear_regression_model function for hyperparameter tuning for linear regression. ## 3 Summary and Future Work The _DPpack_ package implements three general mechanisms for DP (Laplace, Gaussian, and exponential), a variety of DP descriptive statistics, and some privacy-preserving regression and classification methods. Making these functions accessible independent of the mechanisms they are based on permits code simplicity and ease-of-use (since users do not need to know how to compute sensitivities for their desired statistics, but only need to give global bounds on the data as inputs). Compared with other options for DP in R, _DPpack_ offers a more complete set of privacy-preserving functions and models in a user-friendly manner that makes them easily accessible even to those without a strong background in DP. We plan to keep developing the package and make it more comprehensive. For example, for ML techniques, we may include functionality for DP principal component analysis (Dwork et al., 2014; Chaudhuri et al., 2013), Bayesian networks (Zhang et al., 2016), and stochastic gradient descent based on the concepts of moment accountant (Abadi et al., 2016) and Gaussian DP (Bu et al., 2020), to name a few. For statistical analysis, we plan to include functionality for differently private \(z\)-tests (Gaboardi et al., 2019), \(t\)-tests (Ding et al., 2018), and some nonparametric tests (e.g. Wilcoxon rank sum test) (Couch et al., 2019), as well as hypothesis testing for linear regression (Barrientos et al., 2019; Chen et al., 2016) and confidence interval construction for certain problems (Karwa and Vadhan, 2018; Wang et al., 2019). We note that the list above is not comprehensive nor are the cited references the only existing work on each respective topic. ## Acknowledgments and Disclosure of Funding This work is supported by the University of Notre Dame Schmitt Fellowship and Lucy Graduate Scholarship. 
## Appendix A Differential Privacy (DP) DP protects the information of each individual whose information is contained in a dataset by ensuring that the results of a mechanism acting on the dataset would be almost identical to the results had their information not been present in the dataset. To formalize the notion of DP, we first define _neighboring datasets_: \(D_{1}\) and \(D_{2}\) are _neighboring datasets_ if they differ in at most one observation. There are two equally valid methods by which a neighboring dataset \(D_{2}\) may be constructed from a given dataset \(D_{1}\), depending on if the number of elements of each dataset must remain the same (i.e. is bounded), or if the number is allowed to vary (i.e. is unbounded) (Kifer and Machanavajjhala, 2011). **Definition 1** (Bounded neighboring datasets): _We consider \(D_{1}\) and \(D_{2}\) to be bounded neighboring datasets if they are neighboring datasets and \(D_{1}\) can be obtained from \(D_{2}\) by modifying at most one observation._ **Definition 2** (Unbounded neighboring datasets): _We consider \(D_{1}\) and \(D_{2}\) to be unbounded neighboring datasets if they are neighboring datasets and \(D_{1}\) can be obtained from \(D_{2}\) by adding or removing at most one observation._ The two definitions of neighboring datasets may necessitate different amounts of calibrated noise to achieve the same level of privacy guarantees when releasing the same statistics (e.g, histograms). When the sample size is large, the difference between the two is largely ignorable. We can now formally define a few different types of differential privacy. **Definition 3** (Differential privacy): (Dwork et al., 2006b,a) _A randomized mechanism \(\mathcal{M}\) satisfies \((\epsilon,\delta)\)-differential privacy if for all \(S\subseteq\mathrm{Range}(\mathcal{M})\),_ \[P(\mathcal{M}(D_{1})\in S)\leq e^{\epsilon}P(\mathcal{M}(D_{2})\in S)+\delta \tag{1}\] _for any neighboring datasets \(D_{1}\) and \(D_{2}\), where \(\epsilon>0\) and \(\delta\geq 0\) are privacy loss parameters. It is common to refer to \((\epsilon,0)\)-DP (or \(\epsilon\)-DP) as "pure" DP, and \((\epsilon,\delta)\)-DP (\(\delta>0\)) as "approximate" DP._ **Definition 4** (Probabilistic differential privacy): (Machanavajjhala et al., 2008) _A randomized mechanism \(\mathcal{M}\) satisfies \((\epsilon,\delta)\) probabilistic differential privacy if for all \(S\subseteq\text{Range}(\mathcal{M})\),_ \[P\bigg{(}\bigg{|}\log\bigg{(}\frac{P(\mathcal{M}(D_{1})\in S)}{P(\mathcal{M}( D_{2})\in S)}\bigg{)}\bigg{|}>\epsilon\bigg{)}\leq\delta \tag{2}\] _for any neighboring datasets \(D_{1}\) and \(D_{2}\), \(\epsilon>0\) and \(\delta\geq 0\)._ Intuitively, DP guarantees that the distributions of outputs from a randomized mechanism operating on neighboring datasets are similar. Thus, information gained from the mechanism will be essentially the same (within tunable bounds given by \(\epsilon\) and \(\delta\)) whether a given individual's data is used in the dataset or not. The individual that differs between datasets is arbitrary, meaning that differential privacy provides these individual-level privacy guarantees for all members of the dataset simultaneously. DP has several nice properties to which its popularity in research and applications is attributed. We will briefly mention three here that are relevant to the package, and refer the interested reader to Dwork and Roth (2014) for other properties and additional information. 
The first two are composition theorems, which provide differential privacy bounds for the use of multiple randomized mechanisms on the same dataset. **Theorem 5** (Basic sequential composition): (McSherry, 2009) _Let \(\mathcal{M}_{1}\), \(\mathcal{M}_{2},\ldots,\mathcal{M}_{n}\) be \(n\) randomized mechanisms such that each \(\mathcal{M}_{i}\) satisfies \((\epsilon_{i},\delta_{i})\)-differential privacy. \(\mathcal{M}(D)=(\mathcal{M}_{1}(D),\ldots,\mathcal{M}_{n}(D))\) satisfies \((\sum_{i=1}^{n}\epsilon_{i},\sum_{i=1}^{n}\delta_{i})\)-differential privacy._ **Theorem 6** (Parallel composition): (McSherry, 2009) _Let \(\mathcal{M}_{1},\mathcal{M}_{2},\ldots,\mathcal{M}_{n}\) be \(n\) randomized mechanisms such that each \(\mathcal{M}_{i}\) satisfies \((\epsilon_{i},\delta_{i})\)-differential privacy, and let \(D_{1},D_{2},\ldots,D_{n}\) be \(n\) disjoint datasets such that their union is \(D\). Then we have \(\mathcal{M}(D)=(\mathcal{M}_{1}(D_{1}),\ldots,\mathcal{M}_{n}(D_{n}))\) satisfies \((\max_{i}\{\epsilon_{i}\},\max_{i}\{\delta_{i}\})\)-differential privacy._ The second is immunity to post-processing, which ensures that there is no manipulation (not relying on the data itself) that can be performed on the results of a differentially private mechanism to weaken the privacy guarantees. **Theorem 7** (Immunity to post-processing): (Dwork and Roth, 2014) _Let \(\mathcal{M}\) be a randomized mechanism satisfying \((\epsilon,\delta)\)-differential privacy. \(f\circ\mathcal{M}\) satisfies \((\epsilon,\delta)\)-differential privacy for any arbitrary function \(f\)._ We conclude this section by defining the global sensitivity of statistics, which is used in some of the most general randomized mechanisms for achieving DP. Global sensitivity was originally defined using the \(\ell_{1}\) norm by Dwork et al. (2006b). Here, we use a more general definition. **Definition 8** (\(\ell_{p}\)-global sensitivity): (Liu, 2019a) _Let the distance between two datasets (denoted \(d(D_{1},D_{2})\)) be defined to be the number of observations that differ between the datasets. Note that \(d(D_{1},D_{2})=1\) if \(D_{1}\) and \(D_{2}\) are neighboring datasets. The \(\ell_{p}\)-global sensitivity of a function \(f\) is defined to be_ \[\Delta_{p,f}=\max_{\begin{subarray}{c}D_{1},D_{2}\\ d(D_{1},D_{2})=1\end{subarray}}||f(D_{1})-f(D_{2})||_{p}, \tag{3}\] _where \(||\cdot||_{p}\) is the \(\ell_{p}\) norm._ The global sensitivity may be different depending on if the bounded or the unbounded neighboring dataset definition is used. For example, consider the function that outputs a histogram (a list of counts for each bin) from a given dataset. If the bounded neighboring dataset definition is used, the \(\ell_{1}\)-global sensitivity of the histogram function is 2 since modifying a dataset observation can at most change the count of two bins. However, under the unbounded neighboring dataset definition, the \(\ell_{1}\)-global sensitivity of the histogram function is 1 since adding or removing a dataset observation can at most change the count of one bin. For functions where there may be a difference between the two definitions of neighboring datasets, _DPpack_ allows the user to choose which one to use. The global sensitivity sets a bound on the amount the statistics can change in the worst-case scenario between two neighboring data sets. 
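As a concrete worked instance (added here for illustration; the value agrees with the sensitivity used in the Appendix B examples below), consider the sample mean \(\bar{x}\) of \(n\) observations known to lie in the public range \([c_{0},c_{1}]\). Under the bounded neighboring definition, two neighboring datasets differ in exactly one observation, so \[\Delta_{1,\bar{x}}=\max_{\begin{subarray}{c}D_{1},D_{2}\\ d(D_{1},D_{2})=1\end{subarray}}\left|\bar{x}(D_{1})-\bar{x}(D_{2})\right|=\max_{x,x^{\prime}\in[c_{0},c_{1}]}\frac{|x-x^{\prime}|}{n}=\frac{c_{1}-c_{0}}{n},\] which is exactly the kind of bound-driven sensitivity that _DPpack_ computes internally from user-supplied bounds (Liu, 2019b).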
The higher the sensitivity for a statistic is, the larger the amount of noise that will be injected into the original observed statistics to achieve the pre-specified level of privacy guarantees defined by \(\epsilon\) and \(\delta\). For many statistics (e.g., mean, quantiles, regression coefficients), the global sensitivity is dependent on the global range of values that can occur in the dataset. In these cases, _DPpack_ assumes the existence of known or reasonably inferred global or public bounds on the dataset, from which the global sensitivity is computed. ## Appendix B DP Mechanisms There exist many general mechanisms for ensuring DP for a given analysis procedure or output. We introduce in this section three popular mechanisms: the Laplace mechanism (Dwork et al., 2006b), the Gaussian mechanism (Dwork et al., 2006a), and the exponential mechanism (McSherry and Talwar, 2007). We also provide examples of implementing these mechanisms using _DPpack_. ### Laplace Mechanism **Definition 9** (Laplace mechanism): (Dwork et al., 2006b) _Let \(D\) be a sensitive database. Let \(f\) be a given function with \(\ell_{1}\)-global sensitivity \(\Delta_{1,f}\) and range \(\mathbb{R}^{n}\). The Laplace mechanism of \(\epsilon\)-differential privacy is defined to be_ \[\mathcal{M}_{L}(D,f,\epsilon)=f(D)+\mathbf{e}, \tag{4}\] _where \(\mathbf{e}=(e_{1},\ldots,e_{n})^{T}\) and \(e_{i}\) is drawn independently from distribution \(\text{Lap}(0,\Delta_{1,f}/\epsilon)\)._ The LaplaceMechanism function in _DPpack_ implements the Laplace mechanism. For a given scalar or a vector of observed statistic(s), the corresponding \(\ell_{1}\)-global sensitivity, and \(\epsilon\), it releases a real number or numeric vector of values satisfying \(\epsilon\)-DP. Global sensitivity calculated based either on bounded or unbounded neighboring datasets can be used. The following example uses the Laplace mechanism to release the sample mean with \(\epsilon\)-DP guarantees. Consider a sensitive dataset of \(n=100\) observations with one attribute, the (public) global range of which is \([c_{0},c_{1}]=[5,10]\). For the sample mean, the \(\ell_{1}\)-global sensitivity is the same for both bounded and unbounded DP and equals \((c_{1}-c_{0})/n=0.05\) (Liu, 2019b). library(DPpack) set.seed(42) # For reproducibility # Simulate a dataset n <- 100 c0 <- 5 c1 <- 10 D <- runif(n, c0, c1) epsilon <- 1 # Privacy budget sensitivity <- (c1-c0)/n private.mean <- LaplaceMechanism(mean(D), epsilon, sensitivity) cat("Privacy preserving mean: ", private.mean, "\nTrue mean: ", mean(D)) #> Privacy preserving mean: 7.636944 #> True mean: 7.622394 The LaplaceMechanism function can also be used to release privacy-preserving multi-dimensional statistics which are the composition of scalar statistics, each with their own \(\ell_{1}\) sensitivity. For example, let \(\mathbf{f}(D)=(f_{1}(D),\ldots,f_{n}(D))\), where each \(f_{i}\) has \(\ell_{1}\)-global sensitivity \(\Delta_{1,f_{i}}\). By default, the LaplaceMechanism function sanitizes \(\mathbf{f}\) by drawing Laplace noise from \(\text{Lap}(0,\Delta_{1,\mathbf{f}}/\epsilon)\), where \(\Delta_{1,\mathbf{f}}=\sum_{i=1}^{n}\Delta_{1,f_{i}}\). This approach corresponds to allocating a privacy budget of \(\epsilon\Delta_{1,f_{i}}/\Delta_{1,\mathbf{f}}\) to sanitizing each scalar function \(f_{i}\). If desired, users may specify how to divide the total budget \(\epsilon\) among the elements in \(\mathbf{f}\) by passing a vector of proportions to the alloc.proportions argument instead of using the default allocation. 
The following example demonstrates this functionality for the same situation as the previous example, but with an additional variance computation. The \(\ell_{1}\)-global sensitivity of the variance is also the same for both bounded and unbounded DP and equals \((c_{1}-c_{0})^{2}/n=0.25\)(Liu, 2019b). # Simulate a dataset n <- 100 c0 <- 5 c1 <- 10 D <- runif(n, c0, c1) f <- function(D) c(mean(D), var(D)) sensitivities <- c((c1-c0)/n, (c1-c0)^2/n) epsilon <- 1 # Total privacy budget for f # Here, privacy budget is split relative to the individual sensitivities # of the sample mean and sample variance. Collectively, the computation # satisfies 1-differential privacy. private.vals <- LaplaceMechanism(f(D), epsilon, sensitivities) cat("Privacy preserving values: ", private.vals, "\nTrue values: ", f(D)) #> Privacy preserving values: 7.623156 2.401604 #> True values: 7.61271 2.036525 # Here, privacy budget is split so that 25% is given to the mean # and 75% is given to the variance private.vals <- LaplaceMechanism(f(D), epsilon, sensitivities, alloc.proportions = c(0.25, 0.75)) cat("Privacy preserving values: ", private.vals, "\nTrue values: ", f(D)) #> Privacy preserving values: 7.58841 1.652268 #> True values: 7.61271 2.036525 ### Gaussian Mechanism Another popular mechanism for DP implemented in _DPpack_ is the Gaussian mechanism. This mechanism can be used to provide either \((\epsilon,\delta)\) approximate DP (Dwork et al., 2006a) or \((\epsilon,\delta)\) probabilistic DP (Machanavajjhala et al., 2008). **Definition 10** (Gaussian mechanism): (Dwork et al., 2006a) _Let \(D\) be a sensitive database. Let \(f\) be a given function with \(\ell_{2}\)-global sensitivity \(\Delta_{2,f}\) and range \(\mathbb{R}^{n}\). The Gaussian mechanism is defined to be_ \[\mathcal{M}_{G}(D,f,\epsilon,\delta)=f(D)+\mathbf{e}, \tag{5}\] _where \(\mathbf{e}=(e_{1},\ldots,e_{n})^{T}\) and \(e_{i}\) is drawn independently from \(\mathcal{N}(0,\sigma^{2})\). In the case that \(\epsilon\in(0,1)\) and_ \[\sigma\geq c\Delta_{2,f}/\epsilon \tag{6}\] _for a constant \(c\) such that \(c^{2}>2\log(1.25/\delta)\), this mechanism was proven to satisfy approximate \((\epsilon,\delta)\)-DP (Dwork et al., 2006a). Additionally, when_ \[\sigma\geq(2\epsilon)^{-1}\Delta_{2,f}\bigg{(}\sqrt{(\Phi^{-1}(\delta/2))^{2} +2\epsilon}-\Phi^{-1}(\delta/2)\bigg{)}, \tag{7}\] _where \(\Phi\) is the CDF of the standard normal distribution, this mechanism was proven to satisfy \((\epsilon,\delta)\) probabilistic DP (Liu, 2019a)._ Note the requirement that \(\epsilon<1\) for approximate DP, which is not required for the Gaussian mechanism to satisfy probabilistic DP. It is also worth highlighting that the Laplace mechanism requires \(\ell_{1}\)-sensitivity, while the Gaussian mechanism requires \(\ell_{2}\)-sensitivity. If \(f\) is scalar-valued, \(\Delta_{1,f}=\Delta_{2,f}\), but they are generally different for vector-valued \(f\) except in some special cases. The GaussianMechanism function in _DPpack_ implements the Gaussian mechanism by adding Gaussian noise to a given scalar (or vector) of observed statistic(s) according to specified values of \(\epsilon\), \(\delta\), and \(\ell_{2}\)-global sensitivity. It releases a scalar (or vector) satisfying either \((\epsilon,\delta)\) approximate DP if the type.DP argument is 'aDP', or \((\epsilon,\delta)\) probabilistic DP if the type.DP argument is 'pDP'. Global sensitivity calculated based either on bounded or unbounded neighboring datasets can be used. 
We use the same example as for the Laplace mechanism to demonstrate the Gaussian mechanism for \((\epsilon,\delta)\) approximate DP and \((\epsilon,\delta)\) probabilistic DP for a sample mean. Consider again a sensitive dataset of \(n=100\) elements drawn uniformly from the range \([c_{0},c_{1}]=[5,10]\). Since the mean is a scalar in this case, the \(\ell_{2}\)-global sensitivity is equal to the \(\ell_{1}\)-global sensitivity, which is \((c_{1}-c_{0})/n=0.05\)(Liu, 2019b). # Simulate a dataset n <- 100 c0 <- 5 c1 <- 10 D <- runif(n, c0, c1) Privacy budget epsilon <- 0.9 # eps must be in (0, 1) for approximate DP delta <- 0.01 sensitivity <- (c1-c0)/n Approximate differential privacy private.approx <- GaussianMechanism(mean(D), epsilon, delta, sensitivity) cat("Privacy-preserving mean (approximate): ", private.approx, "\nTrue mean: ", mean(D)) #> Privacy preserving mean (approximate): 7.426412 #> True mean: 7.170852 Probabilistic differential privacy private.prob <- GaussianMechanism(mean(D), epsilon, delta, sensitivity, type.DP = 'pDP') cat("Privacy preserving mean (probabilistic): ", private.prob, "\nTrue mean: ", mean(D)) #> Privacy-preserving mean (probabilistic): 7.018747 #> True mean: 7.170852 The GaussianMechanism function can also be used to release privacy-preserving multi-dimensional statistics analogously to the LaplaceMechanism function with only one difference. If we again consider \(\mathbf{f}(D)=(f_{1}(D),\ldots,f_{n}(D))\) to be the multi-dimensional statistics of interest, then \(\Delta_{2,\mathbf{f}}\) for the Gaussian mechanism is computed as \(\Delta_{2,\mathbf{f}}=\sqrt{\sum_{i=1}^{n}\Delta_{2,f_{i}}^{2}}\) by default. If desired, users can specify their own privacy budget allocation (which applies to both \(\epsilon\) and \(\delta\)) using the alloc.proportions argument. ### Exponential Mechanism The third privacy-preserving mechanism implemented in _DPpack_ is the exponential mechanism, developed in McSherry and Talwar (2007). This mechanism is preferred for situations where it is not possible to inject numerical noise (such as when the function output is categorical) or not appropriate to add noise directly to the result of a given function or algorithm. The exponential mechanism resolves this issue by assigning real-valued utilities to data/output pairs by specifying a utility function \(u\) An output is chosen and released with probability proportional to its corresponding utility. **Definition 11** (Exponential mechanism): (McSherry and Talwar, 2007) _Let \(D\) be a sensitive database, \(f\) be a given function with range \(\mathcal{R}\), and \(u\) be a utility function mapping data/output pairs to \(\mathbb{R}\) with \(\ell_{1}\)-global sensitivity \(\Delta_{1,u}\). For output values \(r\in\mathcal{R}\), the exponential mechanism achieving \(\epsilon\)-DP is_ \[\mathcal{M}_{E}(D,u,\mathcal{R},\epsilon)=r\text{ with probability }\propto\exp \bigg{(}\frac{\epsilon u(D,r)}{2\Delta_{1,u}}\bigg{)}. \tag{8}\] The ExponentialMechanism function in _DPpack_ implements the exponential mechanism for differential privacy for a given sensitive dataset \(D\) and for finite \(\mathcal{R}\). It takes as input a numeric vector utility representing the values of the utility function \(u\) for each \(r\in\mathcal{R}\), as well as a privacy budget \(\epsilon\) and the \(\ell_{1}\)-global sensitivity of \(u\). It releases the index corresponding to the value \(r\in\mathcal{R}\) randomly selected according to (8). 
Global sensitivity of \(u\) calculated based either on bounded or unbounded neighboring datasets can be used. The ExponentialMechanism function also has two optional arguments: measure and candidates. Each of these arguments, if provided, should be of the same length as utility. If measure is given, the probabilities of selecting each value \(r\) are weighted according to the numeric values in measure before the value \(r\) is randomly chosen. If candidates is provided, ExponentialMechanism returns the value in candidates at the randomly chosen index rather than the index itself. We demonstrate the ExponentialMechanism function with a toy example. Assume that a function \(f\) has range \(\mathcal{R}=\{\)'a', 'b', 'c', 'd', 'e'\(\}\). Numerical noise cannot be added directly to the output of \(f\) due to the non-numeric nature of its range. Instead, we define a utility function \(u\) that yields the following values when applied to the sensitive dataset \(D\) and each element of \(\mathcal{R}\), respectively: \((0,1,2,1,0)\). Finally, assume the \(\ell_{1}\)-sensitivity of \(u\) is 1. We can use the ExponentialMechanism function to release an element of \(\mathcal{R}\) as follows. candidates <- c('a', 'b', 'c', 'd', 'e') # Range of f # Utility function values in same order as corresponding candidates utility <- c(0, 1, 2, 1, 0) epsilon <- 1 # Privacy budget sensitivity <- 1 # Release privacy-preserving index of chosen candidate idx <- ExponentialMechanism(utility, epsilon, sensitivity) candidates[idx] #> 'b' Release privacy-preserving candidate directly ExponentialMechanism(utility, epsilon, sensitivity, candidates = candidates) #> 'a' ## Appendix C Implementation of DP Descriptive Statistics Descriptive statistics are popular and effective ways to summarize data. However, if these statistics are computed from a sensitive dataset and released directly, they could be susceptible to attacks that reveal private information about the individuals in the data, even if the dataset itself is not breached. Many of these statistics can be made differentially private through the application of one or more of the mechanisms discussed in the previous section. For ease of use, _DPpack_ implements privacy-preserving versions of many descriptive statistics directly, utilizing the previously defined mechanisms under the hood. ### Mean, Standard Deviation, Variance, and Covariance The meanDP, sdDP, and varDP, functions can be used to release differentially private means, standard deviations, and variances respectively, calculated from a sensitive dataset. These functions all share the same set of arguments: a dataset x, a privacy budget eps (and possibly delta), as well as bounds on the attributes in the dataset lower.bound and upper.bound. Any values of x that happen to fall outside the bounds are clipped to the bounds before the mean is computed. These bounds are used to compute the global sensitivity of the desired statistic function based on proven values (Liu, 2019b). By default, each function releases sanitized values satisfying eps-DP via the Laplace mechanism. The mechanism argument defaults to 'Laplace', indicating to use the Laplace mechanism. However, the output can be changed by modifying the value of some additional arguments and setting mechanism to 'Gaussian'. In this case, the delta argument must be positive. 
The type.DP argument can be either 'aDP' (default) or 'pDP' for satisfying (eps, delta) approximate DP and (eps, delta) probabilistic DP, respectively, and indicates the type of DP provided when the Gaussian mechanism is used. The which.sensitivity argument can be one of 'bounded' (default), 'unbounded', or 'both', indicating whether to release results satisfying bounded and/or unbounded DP. The following example demonstrates how these functions can be used. # Simulate a dataset D <- rnorm(500, mean=3, sd=2) lower.bound <- -3 # 3 standard deviations below mean upper.bound <- 9 # 3 standard deviations above mean # Get mean satisfying bounded 1-differential privacy private.mean <- meanDP(D, 1, lower.bound, upper.bound) cat("Privacy preserving mean: ", private.mean, "\nTrue mean: ", mean(D)) #> Privacy preserving mean: 2.872637 #> True mean: 2.857334 # Get variance satisfying unbounded approximate (0.5, 0.01)-DP private.var <- varDP(D, 0.5, lower.bound, upper.bound, which.sensitivity = 'unbounded', mechanism = 'Gaussian', delta = 0.01) cat("Privacy preserving variance: ", private.var, "\nTrue variance: ", var(D)) #> Privacy preserving variance: 3.276551 #> True variance: 4.380399 # Get std dev satisfying bounded probabilistic (0.5, 0.01)-DP private.sd <- sdDP(D, 0.5, lower.bound, upper.bound, mechanism='Gaussian', delta=0.01, type.DP='pDP') cat("Privacy preserving standard deviation: ", private.sd, "\nTrue standard deviation: ", sd(D)) #> Privacy preserving standard deviation: 1.978296 #> True standard deviation: 2.09294 The pooledVarDP function in _DPpack_ can be used to compute a differentially private pooled variance for multiple groups of data. The inputs are similar to those of meanDP, varDP, and sdDP with a few differences. First, the function accepts multiple numeric vectors representing different data groups, rather than a single dataset x. The function uses provided lower and upper bounds on the entire collection of data to compute the sensitivity of the function based on the derived formulas in Liu (2019b), then releases a privacy preserving pooled variance of the entire collection of data based on provided privacy budget parameters. The formulas to compute the function's sensitivity require a value \(n_{\max}\) representing the size of the largest provided dataset vector. If the value itself is sensitive, it can be approximated by setting the approx.n.max argument to TRUE. The following examples demonstrate this function's use. # Simulate three datasets from the same distribution D1 <- rnorm(500, mean=3, sd=2) D2 <- rnorm(200, mean=3, sd=2) D3 <- rnorm(100, mean=3, sd=2) lower.bound <- -3 # 3 standard deviations below mean upper.bound <- 9 # 3 standard deviations above mean # Get private pooled variance without approximating n.max private.pooled.var <- pooledVarDP(D1, D2, D3, eps=1, lower.bound = lower.bound, upper.bound = upper.bound) cat("Privacy preserving pooled variance: ", private.pooled.var, "\nTrue pooled variance: ", var(c(D1, D2, D3))) #> Privacy preserving pooled variance: 3.682308 #> True pooled variance: 3.931237 If n.max is sensitive, we can also use private.pooled.var <- pooledVarDP(D1, D2, D3, eps=1, lower.bound = lower.bound, upper.bound = upper.bound, approx.n.max = TRUE) _DPpack_ also implements functions for privacy-preserving covariance and pooled covariance: covDP and pooledCovDP, which have similar arguments to the previously described functions. 
The covDP function accepts two numeric vector datasets x1 and x2, as well as upper and lower bounds on each of these two datasets individually. The function then returns the sanitized covariance between x1 and x2, based on provided privacy budget values and sensitivity computed using the bounds via the proven formula from Liu (2019b). The pooledCovDP function accepts any number of matrices. These matrices can have a variable number of rows, but must each have two columns. Two sets of bounds for the entire collection of data from each column must also be provided. The function releases a sanitized pooled covariance between the columns of the provided matrices based on the privacy budget, bounds, and the sensitivity computed according to the formula from Liu (2019b). Finally, pooledCovDP utilizes the value \(n_{\max}\) in the computation of the sensitivity similar to the pooledVarDP function, so the approx.n.max argument is also present in this function and indicates the same thing. The following examples show the use of both of these functions. Simulate datasets D1 <- sort(rnorm(500, mean=3, sd=2)) D2 <- sort(rnorm(500, mean=-1, sd=0.5)) lb1 <- -3 # 3 std devs below mean lb2 <- -2.5 # 3 std devs below mean ub1 <- 9 # 3 std devs above mean ub2 <-.5 # 3 std devs above mean Covariance satisfying 1-differential privacy private.cov <- covDP(D1, D2, 1, lb1, ub1, lb2, ub2) cat("Privacy preserving covariance: ", private.cov, "\nTrue covariance: ", cov(D1, D2)) #> Privacy preserving covariance: 0.9598711 #> True covariance: 0.9908612 We can also find a sanitized pooled covariance with additional datasets D3 <- sort(rnorm(200, mean=3, sd=2)) D4 <- sort(rnorm(200, mean=-1, sd=0.5)) M1 <- matrix(c(D1, D2), ncol=2) M2 <- matrix(c(D3, D4), ncol=2) Pooled covariance satisfying (1,0)-differential privacy private.pooled.cov <- pooledCovDP(M1, M2, eps = 1, lower.bound1 = lb1, lower.bound2 = lb2, upper.bound1 = ub1, upper.bound2 = ub2) ### Counting Functions _DPpack_ supports differentially private histograms and contingency tables via the functions histogramDP and tableDP, respectively. The functions release privacy-preserving results based on given sensitive input data (in the same form required by the standard hist and table functions) and privacy budget parameters. Bounds on the dataset are not necessary as the global sensitivity for both functions is a constant independent of the data. As with many of the previously described functions, the guaranteed DP for both of these functions can be bounded or unbounded, as well as pure, approximate, or probabilistic depending on the values given for the which.sensitivity, mechanism, and type.DP arguments. Due to noise added to the typical output by both the Laplace and Gaussian mechanisms, it is possible that some counts obtained directly from the chosen mechanism are negative. By default, both of these functions coerce any such values to 0. However, if in a particular application it is preferred that negative counts be allowed, this can be done by setting the allow.negative argument to TRUE. The histogramDP function has two additional arguments: breaks and normalize. The breaks argument is equivalent to the argument with the same name in the standard hist function, while the normalize argument indicates whether the outputs should correspond to frequencies (if set to FALSE) or if they should be normalized so that the total area under the histogram is 1 (if set to TRUE). The following examples demonstrate the proper use of the histogramDP and tableDP functions. 
Note that histogramDP returns an object similar to that returned by the standard hist function, but does not plot the histogram by default. Plotting the result is as easy as calling the plot function on the object released from histogramDP. The results are shown in Figure 1. x <- rnorm(500) # Simulate dataset hist(x, main = "Non-private histogram", ylim=c(0, 110), col="gray") private.hist <- histogramDP(x, 1) # Satisfies (1,0)-DP plot(private.hist, main = "Private histogram", ylim=c(0, 110), col="gray") Figure 1: Original and privacy-preserving histograms from the histogram example. We use a subset of variables from the Cars93 dataset in the _MASS_ R package to demonstrate the generation of a privacy-preserving contingency table. The results are shown in Table 1. x <- MASS::Cars93$Type y <- MASS::Cars93$Origin z <- MASS::Cars93$AirBags table(x, y, z) # Non-private contingency table tableDP(x, y, z, eps=1) # Private contingency table ### Quantiles _DPpack_ also implements differentially private quantiles and medians using the quantileDP and medianDP functions, respectively. The quantileDP function accepts a sensitive dataset as a numeric vector, a real number between 0 and 1 indicating the desired quantile, a single privacy budget parameter eps, and global bounds on the values in the dataset. It implements the private quantile algorithm from Smith (2011), which defines a utility function, and utilizes the exponential mechanism to release a quantile satisfying \(\epsilon\)-DP based on the proven \(\ell_{1}\)-global sensitivity of the utility function (Smith, 2011; Gillenwater et al., 2021). The algorithm from Smith (2011) used in quantileDP uses the exponential mechanism to select a specific dataset value from the given dataset but releases a value drawn uniformly from the interval between the selected value and the subsequent value in ascending order. This means that the released value may not necessarily be a value present in the original dataset. If this behavior is not desirable for a certain application, the uniform.sampling argument can be set to FALSE, in which case the function releases the result of the exponential mechanism step directly without the uniform sampling step. The medianDP function is present in _DPpack_ for convenience, and works identically to the quantileDP function with the quantile argument set to 0.5. Both functions accept two additional arguments. The which.sensitivity argument operates analogously to the identically named argument in the other functions described in this section. The mechanism argument indicates which mechanism should be used to satisfy DP when running the function. Currently, only the exponential mechanism (the default for this argument) is supported for quantileDP, but this argument was still included for symmetry with the other descriptive statistic functions, as well as for robustness in future versions of _DPpack_. The following examples show the use of both of these functions. 
# Simulate a dataset D <- rnorm(500) lower.bound <- -3 # 3 standard deviations below mean upper.bound <- 3 # 3 standard deviations above mean quant <- 0.25 eps <- 1 # Get 25th quantile satisfying 1-differential privacy private.quantile <- quantileDP(D, quant, eps, lower.bound, upper.bound) \begin{table} \begin{tabular}{l l l l l l l l} \hline Airbag & Origin & \multicolumn{6}{c}{Type} \\ \cline{3-8} & & Compact & Large & Midsize & Small & Sporty & Van \\ \hline Driver & USA & 0 (1) & 4 (4) & 7 (2) & 0 (0) & 1 (2) & 0 (0) \\ \& Passenger & non-USA & 1 (1) & 0 (0) & 5 (5) & 0 (0) & 1 (1) & 0 (0) \\ \hline Driver only & USA & 0 (2) & 5 (7) & 6 (5) & 2 (2) & 11 (5) & 3 (2) \\ & non-USA & 7 (7) & 4 (0) & 4 (6) & 2 (3) & 3 (3) & 1 (1) \\ \hline None & USA & 4 (4) & 0 (0) & 6 (3) & 4 (5) & 0 (1) & 2 (3) \\ & non-USA & 0 (1) & 1 (0) & 3 (1) & 16 (11) & 2 (2) & 7 (3) \\ \hline \end{tabular} \end{table} Table 1: The outputs from the contingency table example. The privacy-preserving cell counts are listed with the original (non-private) values in parentheses. cat("Privacy preserving quantile: ", private.quantile, "\nTrue quantile: ", quantile(D, 0.25)) #> Privacy preserving quantile: -0.7768781 #> True quantile: -0.7685687 # Get median requiring released value to be in dataset private.median <- medianDP(c(1,0,3,3,2), eps, lower.bound = 0, upper.bound = 4, uniform.sampling = FALSE) cat("Privacy preserving median: ", private.median, "\nTrue median: ", median(c(1,0,3,3,2))) #> Privacy preserving median: 1 #> True median: 2 ## Appendix D Implementation of DP Statistical and ML Methods _DPpack_ implements privacy-preserving versions of some commonly used classification and regression models. Many such models can be formulated as empirical risk minimization (ERM) problems, which have been generally shown to have privacy-preserving counterparts under certain assumptions (Chaudhuri et al., 2011; Kifer et al., 2012). This section first provides a brief introduction to differentially private ERM algorithms and their necessary assumptions, then discusses the specific implementation in _DPpack_ of logistic regression, support vector machines (SVM) and their extension to outcome weighted learning (OWL), and linear regression. Each of the ERM-based methods implemented in _DPpack_ requires the selection of various hyperparameter values that can impact model performance. A variety of techniques exist to tune these parameters, but many of these techniques threaten to leak private database information themselves. Thus, privacy-preserving hyperparameter tuning methods for both the classification and regression models are implemented in _DPpack_. These are also described in this section. ### Empirical Risk Minimization Assume that we have a set of \(n\) input-output pairs \((\mathbf{x}_{i},y_{i})\in(\mathcal{X},\mathcal{Y})\) representing a sensitive training dataset \(\mathcal{D}\). Additionally, define \(\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}\) to be a loss function over pairs of values from the output space. In general, ERM attempts to produce an effective predictor function \(f:\mathcal{X}\rightarrow\mathcal{Y}\) by minimizing the empirical risk \[\frac{1}{n}\sum_{i=1}^{n}\ell(f_{\boldsymbol{\theta}}(\mathbf{x}_{i}),y_{i})= \frac{1}{n}\sum_{i=1}^{n}\ell_{i}(\boldsymbol{\theta}). \tag{9}\] For the algorithms implemented in _DPpack_, we assume there exists a one-to-one mapping from a \(p\)-dimensional vector \(\boldsymbol{\theta}\) to \(f\), where \(p\) is the length of \(\mathbf{x}_{i}\) (i.e. 
the number of predictors). In order to mitigate overfitting, it is also common to introduce a regularizer function \(R\). This produces the regularized ERM model \[\frac{1}{n}\sum_{i=1}^{n}\ell(f_{\mathbf{\theta}}(\mathbf{x}_{i}),y_{i})+\frac{ \gamma}{n}R(\mathbf{\theta})=\ell_{i}(\mathbf{\theta})+\frac{\gamma}{n}R(\mathbf{\theta}), \tag{10}\] where \(\ell_{i}(\mathbf{\theta})\!=\!\ell(f_{\mathbf{\theta}}(\mathbf{x}_{i}),y_{i})\) and \(\gamma\) is a tunable hyperparameter known as the regularization constant. For binary classification problems, Chaudhuri et al. (2011) proved \(\epsilon\)-DP can be satisfied for regularized ERM by two different algorithms if certain assumptions are met. The first algorithm is an _output_ perturbation method, and the second is an _objective_ perturbation method. We briefly mention the assumptions here and refer the interested reader to Chaudhuri et al. (2011) for more information and for the proofs. Both algorithms assume \(\left\lVert\mathbf{x}_{i}\right\rVert_{2}\leq 1\) for all \(i\), that the regularizer \(R\) is differentiable and 1-strongly convex, and that the loss function \(\ell\) is differentiable and convex with \(\left\lvert\frac{\partial}{\partial f}\ell(f,y)\right\rvert\leq 1\) for all \(f\) and \(y\). The objective perturbation algorithm has additional assumptions that \(R\) and \(\ell\) are doubly differentiable and that \(\left\lvert\frac{\partial^{2}}{\partial f^{2}}\ell(f,y)\right\rvert\leq c\) for some constant \(c\). The output perturbation method first solves Eqn (10) and then perturbs the output \(\hat{\mathbf{\theta}}\) by adding noise determined by the values of \(n\), \(\epsilon\), and \(\gamma\). The objective function perturbation method adds random noise determined by \(n\), \(\epsilon\), \(\gamma\), and \(c\) directly to the objective function, then finds \(\hat{\mathbf{\theta}}\) minimizing the perturbed function. This amounts to the privacy-preserving regularized ERM model \[\frac{1}{n}\sum_{i=1}^{n}\ell_{i}(\mathbf{\theta})+\frac{\gamma}{n}R(\mathbf{\theta}) +\frac{\Delta}{2n}||\mathbf{\theta}||_{2}^{2}+\frac{\mathbf{b}^{T}\mathbf{\theta}}{n}, \tag{11}\] where \(\mathbf{b}\) is the injected random noise and \(\frac{\Delta}{2n}\left\lVert\mathbf{\theta}\right\rVert_{2}^{2}\) is an additional slack term necessary for DP via objective perturbation. Chaudhuri et al. (2011) show that objective perturbation generally provides better utility guarantees than output perturbation for the same privacy budget. _DPpack_ implements both algorithms using the EmpiricalRiskMinimizationDP.CMS_R6_ class. This class provides a general framework for running these algorithms, but is not intended to be utilized directly. Rather, it should be used as the parent class in an inheritance structure where the child class implements a specific realization of ERM for binary classification (i.e. logistic regression). Examples of this will be discussed in the subsequent sections. For regression problems, Kifer et al. (2012) proposed a slightly different algorithm that satisfies DP for regularized ERM in Eqn (11). 
Assume that \(\ell_{i}(\mathbf{\theta})\) is convex with a continuous Hessian, \(R\) is convex, and the following conditions hold for all \(\mathbf{x}_{i},y_{i}\) and for all \(\mathbf{\theta}\in\mathbb{F}\) (\(\mathbb{F}\) is a closed convex subset of \(\mathbb{R}^{p}\)): \(\left\lVert\nabla_{\mathbf{\theta}}\ell_{i}(\mathbf{\theta})\right\rVert_{2}\leq\zeta\) for some constant \(\zeta\), the eigenvalues of \(\nabla_{\mathbf{\theta}}^{2}\ell_{i}(\mathbf{\theta})\) are bounded above by some constant \(\lambda\), and the rank of \(\nabla_{\mathbf{\theta}}^{2}\ell_{i}(\mathbf{\theta})\) is at most one. Then the solutions \(\hat{\mathbf{\theta}}\in\mathbb{F}\) from minimizing the perturbed objective Eqn (11) satisfy DP1. The algorithm can be used to satisfy either \(\epsilon\)-DP or approximate \((\epsilon,\delta)\)-DP. If \(\epsilon\)-DP is desired, the noise vector is drawn from a Gamma distribution depending on the values of \(\epsilon\) and \(\zeta\), while if approximate DP is desired, the noise vector is drawn from a Gaussian distribution depending on the values of \(\epsilon\), \(\delta\), and \(\zeta\). We emphasize that for this algorithm, the resulting value \(\hat{\mathbf{\theta}}\) is restricted to the set \(\mathbb{F}\). Footnote 1: Though the objective perturbation algorithms from Chaudhuri et al. (2011) and Kifer et al. (2012) can both be written in the general form of Eqn (11), it is worth emphasizing that the former requires \(R\) to be 1-strongly convex, while the latter only requires \(R\) to be convex. The popular \(\ell_{1}\) regularizer is an example of a convex, but not 1-strongly convex regularizer. _DPpack_ implements this algorithm using the EmpiricalRiskMinimizationDP.KST_R6_ class. Similar to the EmpiricalRiskMinimizationDP.CMS class, this class provides a general framework for using the algorithm, but is not intended to be used directly. Child classes inheriting from this class and implementing a specific realization of the ERM for regression algorithm should be used instead. _DPpack_ implements linear regression in this way, which will be discussed in the subsequent section on regression methods. ### Logistic Regression The two algorithms described in the previous section for privacy preserving ERM for binary classification can be applied to logistic regression. The loss function given a single observation is the cross entropy loss (or the negative log-likelihood) \[\ell_{i}(\mathbf{\theta})=-(y_{i}\log(f_{\mathbf{\theta}}(\mathbf{x}_{i}))+(1-y_{i}) \log(1-f_{\mathbf{\theta}}(\mathbf{x}_{i}))), \tag{12}\] where \(f_{\mathbf{\theta}}(\mathbf{x}_{i})=\left(1+e^{-\mathbf{x}_{i}\mathbf{\theta}}\right)^ {-1}\) is the predicted value of \(y\). The regularized objective function given data \(\mathcal{D}=(\mathbf{x},\mathbf{y})\) is \[\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}\log(1+e^{-\mathbf{x}_{i}\mathbf{\theta}})+(1 -y_{i})\log(1+e^{\mathbf{x}_{i}\mathbf{\theta}})\right)+\frac{\gamma}{n}R(\mathbf{ \theta}). \tag{13}\] The loss function in Eqn (13) meets all of the regularity conditions necessary for both the output perturbation and the objective perturbation algorithms to satisfy DP2. Footnote 2: with \(c=1/4\) for the objective perturbation algorithm (Chaudhuri et al., 2011). Also noted is that differentially private logistic regression was first proved outside of the ERM setting in Chaudhuri and Monteleoni (2009). 
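To make Eqn (13) concrete, the regularized logistic objective can be written in a few lines of R. The sketch below is purely illustrative (it is not a function exported by _DPpack_), and it assumes y is a numeric vector of 0/1 labels and that the \(\ell_{2}\) regularizer \(R(\boldsymbol{\theta})=\frac{1}{2}\left\|\boldsymbol{\theta}\right\|_{2}^{2}\) is used.

```r
# Illustrative (non-private) evaluation of the regularized logistic objective
# in Eqn (13) with the l2 regularizer R(theta) = ||theta||_2^2 / 2.
# This is our own sketch, not a function exported by DPpack.
logistic.objective <- function(theta, X, y, gamma) {
  Xtheta <- as.matrix(X) %*% theta   # linear predictor x_i theta for each row
  mean(y * log(1 + exp(-Xtheta)) + (1 - y) * log(1 + exp(Xtheta))) +
    gamma / nrow(X) * sum(theta^2) / 2
}
```

The privacy-preserving algorithms described above perturb either the minimizer of this quantity (output perturbation) or the quantity itself (objective perturbation).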
_DPpack_ uses the LogisticRegressionDP_R6_ class to implement differentially private logistic regression in three steps using the EmpiricalRiskMinimizationDP.CMS framework that is based on the algorithm from Chaudhuri et al. (2011) 3. The first step is to construct a LogisticRegressionDP object. The constructor for this class accepts a callable function regularizer for the regularizer function, a privacy budget parameter eps, a regularization constant gamma, and a string perturbation.method indicating whether to use the output or the objective perturbation algorithm. If the argument perturbation.method is set to 'output', the output perturbation algorithm is run. The user must ensure in this case that the regularizer meets the necessary requirements, namely that it is differentiable and 1-strongly convex. If perturbation.method is set to 'objective', the objective perturbation algorithm is run. In this case, the user must ensure that the regularizer is doubly differentiable and 1-strongly convex. One popular regularization function is the \(\ell_{2}\) regularizer \(R(\boldsymbol{\theta})=\frac{1}{2}\left\|\boldsymbol{\theta}\right\|_{2}^{2}\). For convenience, this regularization function (and its gradient) can be used by simply setting regularizer to '12'. An optional callable function regularizer.gr representing the gradient of the regularizer can also be provided. After constructing a LogisticRegressionDP object, the second step is to train the model with a dataset. To do this, the user should call the $fit method of the constructed object. This method accepts as arguments a sensitive dataset X and corresponding sensitive labels for each row y. It also accepts numeric vectors giving the global or public bounds on the data in each column of X. There are several points to note regarding $fit. First, the method assumes that the binary labels provided by y are either 0 or 1. Second, both the output and objective perturbation algorithms assume that for each row \(\mathbf{x}_{i}\) of the input dataset we have \(\left\|\mathbf{x}_{i}\right\|_{2}\leq 1\). Given that this requirement is not met by most practical datasets, to allow for more realistic datasets to train the model, the $fit method utilizes the provided upper and lower bounds on the columns of X to pre-process and scale the values of X in such a way that this constraint is met. The privacy-preserving algorithm is then run, producing differentially private coefficients for the scaled dataset. After the private coefficients are generated, these are then post-processed and un-scaled before being stored as the object attribute $coeff, so that the stored coefficients correspond to the original data. Because both the pre-processing and the post-processing steps rely solely on the global or public bounds, DP is maintained by the post-processing theorem. Specifically, X is pre-processed as follows. First, the largest in absolute value of the upper and lower bounds on each column are used to scale each column individually such that the largest value in each column is at most 1 in absolute value. Second, each value in X is divided by \(\sqrt{p}\), the square root of the number of predictors of X. These two scalings ensure that each row of X satisfies the necessary constraints for DP. 
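Schematically, this scaling amounts to the following sketch (an illustration of the idea only; it is not the package's internal code):

```r
# Sketch of the pre-processing described above (illustration only, not DPpack
# internals): scale each column of X by its largest absolute bound, then divide
# by sqrt(p), so that every row satisfies ||x_i||_2 <= 1.
scale.for.dp <- function(X, lower.bounds, upper.bounds) {
  p <- ncol(X)
  col.scale <- pmax(abs(lower.bounds), abs(upper.bounds))  # per-column scale
  sweep(as.matrix(X), 2, col.scale, "/") / sqrt(p)
}
```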
After training, the post-processing of the private coefficients is then accomplished by dividing each element of the trained vector by the same value used to scale the corresponding column individually in the pre-processing step, then dividing the entire vector by \(\sqrt{p}\). The original privacy-preserving ERM algorithms assume there is no bias term present in the predictor function. If a bias term is necessary, this issue can be partially circumvented by prepending a column of 1s to X before fitting the model. In this case, the first element of the fitted vector $coeff is essentially the bias term. The $fit method does this when the add.bias argument is set to TRUE. We caution that adding a column of 1s to X results in an additional column that must be scaled in the pre-processing step, and we recommend not using a bias term if at all possible. After training the model, the third and final step is to release the trained coefficients or to use them to predict the labels of new datapoints. The privacy-preserving coefficients are stored in the attribute $coeff, which can be directly released without violating privacy guarantees. Alternatively, the $predict method can be used. This method accepts a set of data X of the same form (i.e. dimensions, variable order, etc.) as the one provided to the $fit method, as well as boolean add.bias and boolean raw.value arguments. The method then returns a matrix of predicted values corresponding to each row of X based on the logistic regression predictor function \(f_{\boldsymbol{\theta}}\) and the trained and stored coefficients $coeff. The add.bias argument should be set to the same value as the identically named argument was when the $fit method was called. The raw.value argument is used to indicate whether the returned matrix should consist of the raw scores from the logistic regression predictor function (i.e. real numbers between 0 and 1), or whether it should consist of predicted labels for the rows (i.e. 0 or 1 values) obtained by rounding the scores. The following example shows the usage of the LogisticRegressionDP class on a 2-dimensional toy dataset. # Simulate train dataset X and y, and test dataset Xtest and ytest N <- 200 K <- 2 X <- data.frame() y <- data.frame() for (jin (1:K)){ t <- seq(-.25,.25, length.out = N) if (j==1) m <- rnorm(N,-.2,.1) if (j==2) m <- rnorm(N,.2,.1) Xtemp <- data.frame(x1 = 3*t, x2 = m - t) ytemp <- data.frame(matrix(j-1, N, 1)) X <- rbind(X, Xtemp) y <- rbind(y, ytemp) } # Bounds for X based on construction upper.bounds <- c( 1, 1) lower.bounds <- c(-1,-1) # Train-test split Xtest <- X[seq(1,(N*K),10),] ytest <- y[seq(1,(N*K),10),drop=FALSE] X <- X[-seq(1,(N*K),10),] y <- y[-seq(1,(N*K),10),,drop=FALSE] # Construct object for logistic regression regularizer <- function(coeff) coeff%*%coeff/2 regularizer.gr <- function(coeff) coeff eps <- 1 gamma <- 0.1 lrdp <- LogisticRegressionDP$new(regularizer, eps, gamma, regularizer.gr = regularizer.gr) # Fit with data lrdp$fit(X, y, upper.bounds, lower.bounds) # No bias term lrdp$coeff # Gets private coefficients #> 1.449110 5.562798 # Predict new data points predicted.y <- lrdp$predict(Xtest) n.errors <- sum(predicted.y!=ytest) ### Support Vector Machine (SVM) The privacy-preserving binary classification ERM algorithms can also be applied to linear and nonlinear SVM. 
For notational simplicity, we let \(\{-1,1\}\) be the binary labels for \(y\) when defining loss functions in SVM; for the implementation, for consistency with the LogisticRegressionDP class, we require \(y\) in the input dataset to be coded in \(\{0,1\}\). For linear SVM, the loss function given a single observation is the hinge loss \[\ell_{i}(\boldsymbol{\theta})=\max(0,1-y_{i}f_{\boldsymbol{\theta}}(\mathbf{x }_{i})), \tag{14}\] where \(f_{\boldsymbol{\theta}}(\mathbf{x}_{i})=\mathbf{x}_{i}\boldsymbol{\theta}\) is the predicted value of \(y\). The regularized objective function given data \(\mathcal{D}=(\mathbf{x},\mathbf{y})\) is \[\frac{1}{n}\sum_{i=1}^{n}\max(0,1-y_{i}\mathbf{x}_{i}\boldsymbol{\theta})+ \frac{\gamma}{n}R(\boldsymbol{\theta}). \tag{15}\] Unfortunately, the hinge loss is not differentiable everywhere and therefore does not satisfy the requirements for privacy-preserving ERM. One solution to this (used by _DPpack_) is to use the smooth Huber loss approximation to the hinge loss (Chapelle, 2007) defined by \[\ell_{\text{Huber}}(z)=\begin{cases}0,&\text{if}\;\;z>1+h\\ \frac{1}{4h}(1+h-z)^{2},&\text{if}\;\;|1-z|\leq h\\ 1-z,&\text{if}\;\;z<1-h\end{cases} \tag{16}\] for a given Huber loss parameter \(h\). Figure 2 shows a comparison between the Huber loss and the hinge loss for various values of \(h\). For linear SVM, the described predictor function and the Huber loss meet all of the requirements necessary for both the output perturbation and the objective perturbation algorithms for DP4. Footnote 4: with \(c=1/2h\) for the objective perturbation algorithm (Chaudhuri et al., 2011) Linear SVM implicitly assumes that the given dataset is (at least approximately) linearly separable. When this is not the case, nonlinear SVM is a better choice. Intuitively, nonlinear SVM first maps the potentially linearly non-separable input data in the original space to a higher dimension in which the data is linearly separable, then uses linear SVM in the higher-dimensional space. Performing this mapping directly suffers from the curse of dimensionality, and computations on the higher-dimensional dataset quickly become prohibitively expensive. For that reason, the kernel trick is used so that SVM can be applied easily in practice. Briefly, rather than explicitly transforming the data to the higher-dimensional space, the kernel trick utilizes a kernel function \(k\) to produce a similarity score between two datapoints in the original dimension. This is more computationally efficient than performing computations in the higher-dimensional space. One popular kernel function is the Gaussian or radial kernel \[k(\mathbf{x},\mathbf{x}^{\prime})=\exp\big{(}-\beta\|\mathbf{x}-\mathbf{x}^{ \prime}\|_{2}^{2}\big{)}, \tag{17}\] where \(\beta\) is a Gaussian kernel hyperparameter and equals to \(p^{-1}\) by default. The optimized predictor function at \(\mathbf{x}\) is a linear combination of kernel functions \[\hat{y}=f(\mathbf{x})=\sum_{i=1}^{n}a_{i}k(\mathbf{x}_{i},\mathbf{x}), \tag{18}\] where \(\mathbf{x}_{i}\) is the input of the original dataset. When there is no privacy concern, one may release estimated \(a_{i}\) and the observed input \(\mathbf{x}_{i}\), which can be plugged in Eqn (18) to predict the label of a given data point Figure 2: Comparison between Huber and hinge loss assuming \(y=1\). **x**. For privacy-preserving analysis, this practice poses problems due to the direct release of \(\mathbf{x}_{i}\). Chaudhuri et al. 
(2011) avoids this issue by using random projections to approximate the desired kernel function. Specifically, the algorithm first randomly samples \(D\) vectors \(\mathbf{z}_{j}\) based on the desired kernel function according to the approximation technique from Rahimi and Recht (2007, 2008). The algorithm then produces \(D\)-dimensional data \(\mathbf{v}_{i}\) (using \(\mathbf{z}_{j}\)) for each \(\mathbf{x}_{i}\), representing an approximate projection of each of the original \(\mathbf{x}_{i}\) onto the kernel space. Finally, the differentially private ERM algorithm for linear SVM is run on the new dataset \((\mathbf{v}_{i},y_{i})\). The vectors \(\mathbf{z}_{j}\) are not functions of the observed dataset, meaning the privacy-preserving linear SVM algorithm satisfies \(\epsilon\)- DP for a given \(\epsilon\). Therefore, this algorithm also satisfies \(\epsilon\)-DP when releasing the sampled vectors \(\mathbf{z}_{j}\) and the estimated coefficients from the linear SVM. _DPpack_ implements differentially private SVM via the svmDP _R6_ class using the framework provided by EmpiricalRiskMinimizationDP.CMS. Like the logistic regression model, using the SVM model requires three steps. The first step is to construct an svmDP object. The constructor for this class accepts a callable function regularizer for the regularizer function, a privacy budget parameter eps, a regularization constant gamma, a string perturbation.method indicating whether to use the output or the objective perturbation algorithm, a string kernel for the kernel used in SVM, and a constant huber.h defining the \(h\) value in the Huber loss in Eqn (16). Setting regularizer to 'l2' uses the \(\ell_{2}\) regularization function and its gradient. The perturbation.method argument operates identically to the argument of the same name used in constructing a LogisticRegressionDP object, and expects the user to verify the same requirements for the regularizer and regularizer.gr functions. The kernel argument can be set to either 'linear' or 'Gaussian'. In the former case, linear SVM is run using the specified predictor function and the Huber loss; for the latter, the Gaussian kernel approximation algorithm is run, where the constructor also requires the specification of two additional arguments: D to indicate the dimensionality of the projection dataset \(\mathbf{v}_{i}\) and kernel.param to indicate the value of \(\beta\) in Eqn (17). After constructing the svmDP object, the second step is to train the model on a dataset. Users should call the $fit method of the constructed object. This method accepts input data X with labels y, numeric vector global or public bounds for each column of X, and a boolean add.bias indicating whether to add a column of 1s to X to act as a bias variable5. If kernel is set to 'linear' when the object is constructed in the first step, the method finds \(\hat{\boldsymbol{\theta}}\) satisfying eps-DP, where eps is the privacy budget provided when the object is initialized. If kernel is set to 'Gaussian', the method first converts X to the \(D\)-dimensional new dataset V, then finds \(\hat{\boldsymbol{\theta}}\) corresponding to V satisfying eps-DP. For linear SVM, the same pre-processing of X using the provided bounds on its columns and subsequent post-processing of the private coefficients is performed. The results are again stored in the $coeff attribute. 
For Gaussian kernel nonlinear SVM, the mapping from X to V ensures that each row \(\mathbf{v}_{i}\) satisfies \(\left\|\mathbf{v}_{i}\right\|_{2}\leq 1\) regardless of the values of \(\mathbf{x}_{i}\). For this reason, no pre-processing of X is needed. In fact, providing bounds on the columns of X when calling the $fit method is unnecessary for the Gaussian kernel approximation. The third and final step is to release the estimated coefficients $coeff6 or use them to predict the labels given a set of new datapoints. For the latter, the $predict method can be used, which accepts input X of the same form (i.e. dimensions, variable order, etc.) as the one provided to the $fit method, as well as boolean add.bias and boolean raw.value arguments7. It returns a matrix of predicted values corresponding to each row of X. Footnote 6: For Gaussian kernel SVM, the dimension conversion function, $XtoV, can also be released in conjunction with $coeff. Footnote 7: The add.bias and raw.value arguments operate analogously to the respective arguments for LogisticRegressionDP. The following example shows how to use the svmDP class. # Simulate training dataset X and y, and testing dataset Xtest and ytest N <- 400 X <- data.frame() y <- data.frame() for (i in (1:N)){ Xtemp <- data.frame(x1 = rnorm(1,sd=.28), x2 = rnorm(1,sd=.28)) if (sum(Xtemp^2)<.15) ytemp <- data.frame(y=0) else ytemp <- data.frame(y=1) X <- rbind(X, Xtemp) y <- rbind(y, ytemp) } # Train-test split Xtest <- X[seq(1,N,10),] ytest <- y[seq(1,N,10),,drop=FALSE] X <- X[[-seq(1,N,10),] y <- y[-seq(1,N,10),,drop=FALSE] # Construct object for SVM regularizer <- 'l2' eps <- 1 gamma <- 0.1 kernel <- 'Gaussian' D <- 20 svmdp <- svmDP$new(regularizer, eps, gamma, kernel=kernel, D=D) # Fit with data (note no bounds necessary because kernel='Gaussian') svmdp$fit(X, y) # No bias term # Predict new data points predicted.y <- svmdp$predict(Xtest) n.errors <- sum(predicted.y!=ytest) ### Outcome Weighted Learning Outcome weighted learning (OWL) (Zhao et al., 2012) is a technique used for determining individualized treatment rules (ITRs), and can be categorized broadly as a method for causal inference ML. The primary goal of ITR is to derive a treatment assignment function that maps an individual's set of characteristics to a treatment that maximizes the expected benefit to that individual. A significant strength of OWL is its ability to tailor treatment assignments in response to individual characteristics, rather than using a one-size-fits-all approach. Its development was motivated by developing techniques for precision medicine (Council et al., 2011; Collins and Varmus, 2015) in randomized clinical trials, though other potential applications include personalized advertising (Wang et al., 2015; Sun et al., 2015), and recommender systems (Schnabel et al., 2016; Lada et al., 2019). The original ITR problem considered in Zhao et al. (2012) is to find the treatment assignment function \(T\) by maximizing the expected treatment benefit \(E[\frac{B}{P(A|\mathbf{x})}1(A=T(\mathbf{x}))]\), where \(A\) and \(B\) are random variables representing the randomly assigned treatment and observed benefit, respectively, \(P\) is the conditional probability function, and \(1\) is the indicator function. The key insight in Zhao et al. 
(2012) that produces the OWL framework is that the expected benefit problem can be reformulated as the weighted SVM problem \[\frac{1}{n}\sum_{i=1}^{n}\frac{B_{i}}{P(A_{i}|\mathbf{x}_{i})}\max(0,1-A_{i} \mathbf{x}_{i}\boldsymbol{\theta})+\frac{\gamma}{n}\left\|\boldsymbol{\theta} \right\|_{2}, \tag{19}\] where \(\mathcal{D}=(\mathbf{x},\mathbf{A},\mathbf{B})\). It is straightforward to see that this is a generalization of the standard SVM case to the case where individual observations are unevenly weighted according to the weights \(w_{i}=\frac{B_{i}}{P(A_{i}|\mathbf{x}_{i})}\). Giddens et al. (2023) showed that weighted ERM in general can be made to satisfy \(\epsilon\)-DP via output perturbation, as long as a global bound on the weights is provided. In order to incorporate OWL into _DPpack_, we generally implement DP weighted ERM through the general WeightedERMDP.CMS class. The svmDP class described in the previous section inherits from WeightedERMDP.CMS, which permit users to provide unequal observation weights such as those found in OWL. The following example shows how to use the svmDP class with weighted observations. Simulate train dataset X and y, and test dataset Xtest and ytest N <- 200 K <- 2 X <- data.frame() y <- data.frame() for (j in (1:K)){ t <- seq(-.25,.25, length.out = N) if (j==1) m <- rnorm(N,-.2,.1) if (j==2) m <- rnorm(N,.2,.1) Xtemp <- data.frame(x1 = 3*t, x2 = m - t) ytemp <- data.frame(matrix(j-1, N, 1)) X <- rbind(X, Xtemp) y <- rbind(y, ytemp) } # Bounds for X based on construction upper.bounds <- c( 1, 1) lower.bounds <- c(-1,-1) Train-test split Xtest <- X[seq(1,(N*K),10),] ytest <- y[seq(1,(N*K),10),drop=FALSE] X <- X[-seq(1,(N*K),10),] y <- y[-seq(1,(N*K),10),,drop=FALSE] Weights weights <- rep(1, nrow(y)) # Uniform weighting weights[nrow(y)] <- 0.5 # Half weight for last observation wub <- 1 # Upper bound on weights Construct object for logistic regression regularizer <- function(coeff) coeff%*%coeff/2 regularizer.gr <- function(coeff) coeff eps <- 1 gamma <- 0.1 perturbation.method <- 'output' svmdp <- svmDP$new(regularizer, eps, gamma, perturbation.method, regularizer.gr = regularizer.gr) Fit with data svmdp$fit(X, y, upper.bounds, lower.bounds, weights=weights, weights.upper.bound=wub) svmdp$coeff # Gets private coefficients #> 1.547518 13.456029 # Predict new data points predicted.y <- svmdp$predict(Xtest) n.errors <- sum(predicted.y!=ytest) ### Linear Regression The differentially private ERM algorithm for regression problems can be applied to linear regression. The loss function given a single observation is the squared error \[\ell_{i}(\boldsymbol{\theta})=\frac{(f_{\boldsymbol{\theta}}(\mathbf{x}_{i})-y _{i})^{2}}{2}, \tag{20}\] where the \(f_{\boldsymbol{\theta}}(\mathbf{x}_{i})=\mathbf{x}_{i}\boldsymbol{\theta}\) is the predicted value of \(y\). The regularized objective function given data \(\mathcal{D}=(\mathbf{x},\mathbf{y})\) is \[\frac{1}{n}\sum_{i=1}^{n}\frac{(\mathbf{x}_{i}\boldsymbol{\theta}-y_{i})^{2}}{ 2}+\frac{\gamma}{n}R(\boldsymbol{\theta}). \tag{21}\] In order to satisfy all of the assumptions needed to ensure privacy for the ERM algorithm, we must assume that each \(\mathbf{x}_{i}\), as well as the coefficient vector \(\boldsymbol{\theta}\) have a bounded \(\ell_{2}\) norm. 
For the purposes of _DPpack_, we choose to bound \(\mathbf{x}_{i}\) by \(\left\|\mathbf{x}_{i}\right\|_{2}\leq\sqrt{p}\) and the coefficient vector by \(\left\|\boldsymbol{\theta}\right\|_{2}\leq\sqrt{p}\), where \(p\) is the number of predictors, following Kifer et al. (2012). This implies that each value of the output \(y\) is contained in \([-p,p]\) automatically. With these assumptions, the conditions for the differentially private regression ERM algorithm are satisfied for linear regression with parameters \(\mathbb{F}=\left\{\boldsymbol{\theta}\in\mathbb{R}^{p}:\left\|\boldsymbol{ \theta}\right\|_{2}\leq\sqrt{p}\right\}\), \(\zeta=2p^{3/2}\), and \(\lambda=p\). _DPpack_ implements differentially private linear regression via the LinearRegressionDP_R6_ class using the framework of EmpiricalRiskMinimizationDP.KST. Similar to the classification models, this is done in three steps: constructing a LinearRegressionDP object, training the model by calling the $fit method of the constructed object, and releasing the trained coefficients $coeff or using them for prediction via $predict. The arguments and specification of the construction and prediction steps are similar to the those for LogisticRegressionDP and svmDP, so we refer the reader to those sections for explanations of the arguments. There are a few minor differences in the training step via the $fit method when compared to LogisticRegressionDP and svmDP. First, the arguments lower.bounds and upper.bounds should be vectors representing the global or public bounds on both the columns of X and the values of y. If X has \(n\) columns, then each vector of bounds should be of length \(n+1\). The first \(n\) elements of the vectors correspond to the bounds on the \(n\) columns of X, and are in the same order as the respective columns. The last element of the vectors corresponds to the bounds on the values in y. Similar to the training step for LogisticRegressionDP and svmDP, these bounds are used to pre-process X and y so that they satisfy the necessary constraints for privacy. The pre-processing/post-processing is essentially the same for LinearRegressionDP as it is for the classification methods, except that y is also shifted (and the resulting coefficients unshifted) to be centered at 0 if add.bias is set to TRUE. The following example shows how to use the LinearRegressionDP class. # Simulate an example dataset n <- 500 X <- data.frame(X=seq(-1,1,length.out = n)) true.theta <- c(-.3,.5) # First element is bias term p <- length(true.theta) y <- true.theta[1] + as.matrix(X)%*%true.theta[2:p] + rnorm(n=n,sd=.1) # Bounds based on construction. We assume y has values between -p and p upper.bounds <- c(1, p) # Bounds for X and y lower.bounds <- c(-1, -p) # Bounds for X and y # Construct object for linear regression regularizer <- 'l2' eps <- 1 delta <- 0.01 # Indicates to use approximate (1,0.01)-DP gamma <- 1 lrdp <- LinearRegressionDP$new('l2', eps, delta, gamma) # Fit with data lrdp$fit(X, y, upper.bounds, lower.bounds, add.bias=TRUE) lrdp$coeff # Gets private coefficients #> -0.3812353 0.3704237 # Predict new data points Xtest <- data.frame(X=c(-.5, -.25,.1,.4)) predicted.y <- lrdp$predict(Xtest, add.bias=TRUE) ### Hyperparameter Tuning Model training often involves the selection of hyperparameter values such as, for example, the constant \(\gamma\) for the regularizer in Eqn (10) or (11). Poorly selected values for these hyperparameters can result in models with poor performance. 
Often, hyperparameter selection relies on the observed dataset itself, resulting in privacy costs in the setting of privacy-preserving analysis. Chaudhuri et al. (2011) presents an algorithm for privacy-preserving hyperparameter tuning based on the exponential mechanism, which is implemented in _DPpack_. For binary classification models, differentially private hyperparameter tuning is realized in _DPpack_ via the tune_classification_model function. It accepts as inputs a list of model objects models of the same type8, each constructed with a different value from the set of potential hyperparameter values, observed input X, labels y, vectors representing global or public bounds on the columns of X, and a boolean add.bias argument. The function splits X and y into \(m+1\) equally sized sub-datasets, where \(m\) is the number of candidate models, and trains each model on one of the sub-datasets. The negative of the misclassification frequency by each model on the labels of the final sub-dataset is used as the utility function \(u\) for the exponential mechanism. It can be easily seen that the \(\ell_{1}\)-global sensitivity of \(u\) is \(\Delta_{1,u}=1\). The exponential mechanism is used to select and return one of the trained models provided with \(\epsilon\)-DP. Footnote 8: Such as a list of LogisticRegressionDP objects. Each model object must have the same privacy budget parameters. For example, assume one wishes to select a constant for the \(l_{2}\) regularizer from the set \(\{100,1,0.0001\}\) for privacy-preserving logistic regression. To do this, three objects from the LogisticRegressionDP class are constructed with the same privacy budget parameter eps and initialized with one of the three constant values. The three model objects are then passed into the tuning function, and the exponential mechanism returns one of them. The remaining arguments for the tuning function, X, y, upper.bounds, lower.bounds, and add.bias, should be given values according to their respective descriptions in the $fit method of the corresponding _R6_ class being used. An example of this situation follows. 
# Simulate a training dataset (X, y), and testing dataset (Xtest, ytest) N <- 200 K <- 2 X <- data.frame() y <- data.frame() for (j in (1:K)){ t <- seq(-.25,.25,length.out = N) if (j==1) m <- rnorm(N,-.2,.1) if (j==2) m <- rnorm(N,.2,.1) Xtemp <- data.frame(x1 = 3*t, x2 = m - t) ytemp <- data.frame(matrix(j-1, N, 1)) X <- rbind(X, Xtemp) y <- rbind(y, ytemp) } # Bounds for X based on construction upper.bounds <- c( 1, 1) lower.bounds <- c(-1,-1) # Train-test split Xtest <- X[seq(1,(N*K),10),] ytest <- y[seq(1,(N*K),10),,drop=FALSE] X <- X[-seq(1,(N*K),10),] y <- y[-seq(1,(N*K),10),drop=FALSE] y <- as.matrix(y) # Grid of gamma values for tuning logistic regression model grid.search <- c(100, 1,.0001) # Construct objects for logistic regression parameter tuning eps <- 1 # Privacy budget should be the same for all models lrdp1 <- LogisticRegressionDP$new("l2", eps, grid.search[1]) lrdp2 <- LogisticRegressionDP$new("l2", eps, grid.search[2]) lrdp3 <- LogisticRegressionDP$new("l2", eps, grid.search[3]) models <- c(lrdp1, lrdp2, lrdp3) # Tune using data and bounds for X based on its construction tuned.model <- tune_classification_model(models, X, y, upper.bounds, lower.bounds) tuned.model$gamma # Gives resulting selected hyperparameter #> 0.0001 # tuned.model can be used in the same way as any # LogisticRegressionDP model predicted.y <- tuned.model$predict(Xtest) n.errors <- sum(predicted.y!=ytest) _DPpack_ also implements differentially private hyperparameter tuning for linear regression via the tune_linear_regression_model function. This function was inspired by the binary classification hyperparameter tuning algorithm from Chaudhuri et al. (2011) as well as the feature selection algorithm for high-dimensional regression from Kifer et al. (2012). This function accepts the same input arguments as the tune_classification_model function, except that the models argument should be a list of constructed LinearRegressionDP objects with the same privacy budget parameters eps and delta. The function then splits the provided data X and y into \(m+1\) equally sized sub-datasets, where \(m\) is the number of provided models, and trains each model on one of the sub-datasets. The negative of the square of the Euclidean distance between the predicted values and the true values for the remaining sub-dataset is defined to be the utility function \(u\) for each of the models, the \(\ell_{1}\)-global sensitivity for which is given in Theorem 12. Finally, the exponential mechanism is used to select and return one of the trained models provided with (eps, delta)-DP. **Theorem 12**: _Let \(c_{0}\) and \(c_{1}\) be the global or public lower and upper bounds, respectively, on the possible values of \(y_{i}\). Let \(g\) be the linear regression model with coefficient parameters \(\boldsymbol{\theta}\). For a dataset \(\mathcal{D}=(\mathbf{x}_{i},y_{i})\) with \(n\) rows, define \(-\sum_{i=1}^{n}(g(x_{i})-y_{i})^{2}\). The \(\ell_{1}\)-global sensitivity of \(u\) is given by_ \[\Delta_{1,u}=(c_{1}-c_{0})^{2}. \tag{22}\] **Proof** Let \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) be (bounded) neighboring datasets. Without loss of generality, assume they differ only in their first element and define \((x_{1},y_{1})\in\mathcal{D}_{1}\) and \((x_{1}^{\prime},y_{1}^{\prime})\in\mathcal{D}_{2}\). 
Then \[\Delta_{1,u} =\max_{g}\max_{\mathcal{D}_{1},\mathcal{D}_{2}}|u(\mathcal{D}_{1},g)-u(\mathcal{D}_{2},g)|\] \[=\max_{g}\max_{\mathcal{D}_{1},\mathcal{D}_{2}}|(g(x_{1})-y_{1})^ {2}-(g(x_{1}^{\prime})-y_{1}^{\prime})^{2}|.\] Given that \((g(x_{1})-y_{1})^{2}\geq 0\) and \((g(x_{1}^{\prime})-y_{1}^{\prime})^{2}\geq 0\) for all \(g\) and for all \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) with \(d(\mathcal{D}_{1},\mathcal{D}_{2})=1\), \[\Delta_{1,u}\leq\max_{x_{1},y_{1}}(g(x_{1})-y_{1})^{2}=\max_{x_{1},y_{1}}(x_{ 1}\boldsymbol{\theta}-y_{1})^{2}=(c_{1}-c_{0})^{2},\] where we note the last step is a result of the assumptions made on the bounds of \(\left\|x_{1}\right\|_{2}\), \(\left\|\boldsymbol{\theta}\right\|_{2}\), and \(|y_{1}|\) in order to ensure DP for linear regression. For unbounded DP, \(\Delta_{1,u}=\max_{g}\max_{\mathcal{D}_{1},\mathcal{D}_{2}}(g(x_{1})-y_{1})^ {2}=(c_{1}-c_{0})^{2}\), the same as in the bounded case. \(\blacksquare\) Similar to the tune_classification_model function, the list of models provided to tune_linear_regression_model should be a list of objects constructed using the _R6_ class LinearRegressionDP with a different hyperparameter value and the same privacy budget parameters provided to each model. The remaining arguments for the tuning function, X, y, upper.bounds, lower.bounds, and add.bias, should be given values according to their respective descriptions in the $fit method of the LinearRegressionDP class. An example of using the tuning function for the regularization constant for linear regression follows. # Simulate an example dataset n <- 500 X <- data.frame(X=seq(-1,1,length.out = n)) true.theta <- c(-.3,.5) # First element is bias term p <- length(true.theta) y <- true.theta[1] + as.matrix(X)%*%true.theta[2:p] + rnorm(n=n,sd=.1) # Bounds for X and y based on their construction upper.bounds <- c( 1, 2) # Bounds for X and y lower.bounds <- c(-1,-2) # Bounds for X and y # Grid of possible gamma values for tuning linear regression model grid.search <- c(100, 1,.0001) # Construct objects for logistic regression parameter tuning Privacy budget should be the same for all models eps <- 1 delta <- 0.01 linrdp1 <- LinearRegressionDP$new("l2", eps, delta, grid.search[1]) linrdp2 <- LinearRegressionDP$new("l2", eps, delta, grid.search[2]) linrdp3 <- LinearRegressionDP$new("l2", eps, delta, grid.search[3]) models <- c(linrdp1, linrdp2, linrdp3) tuned.model <- tune_linear_regression_model(models, X, y, upper.bounds, lower.bounds, add.bias=TRUE) tuned.model$gamma # Gives resulting selected hyperparameter #> 100 tuned.model result can be used the same as a trained # LogisticRegressionDP model tuned.model$coeff # Gives coefficients for tuned model #> -0.5038190 0.2589978 Simulate a test dataset for prediction Xtest <- data.frame(X=c(-.5, -.25,.1,.4)) predicted.y <- tuned.model$predict(Xtest, add.bias=TRUE)
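As a quick sanity check (our addition, not part of the original example), the predictions of the tuned private model can be compared with the data-generating coefficients true.theta defined above:

```r
# Our addition: rough accuracy check of the tuned private model against the
# coefficients used to simulate the data (true.theta, Xtest, p, predicted.y
# are all defined in the example above).
ytest.true <- true.theta[1] + as.matrix(Xtest) %*% true.theta[2:p]
mean((predicted.y - ytest.true)^2)  # mean squared prediction error
```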
2306.17657
Diffraction of acoustic waves by multiple semi-infinite arrays
Analytical methods are fundamental in studying acoustics problems. One of the important tools is the Wiener-Hopf method, which can be used to solve many canonical problems with sharp transitions in boundary conditions on a plane/plate. However, there are some strict limitations to its use, usually the boundary conditions need to be imposed on parallel lines (after a suitable mapping). Such mappings exist for wedges with continuous boundaries, but for discrete boundaries, they have not yet been constructed. In our previous article, we have overcome this limitation and studied the diffraction of acoustic waves by a wedge consisting of point scatterers. Here, the problem is generalised to an arbitrary number of periodic semi-infinite arrays with arbitrary orientations. This is done by constructing several coupled systems of equations (one for every semi-infinite array) which are treated independently. The derived systems of equations are solved using the discrete Wiener-Hopf technique and the resulting matrix equation is inverted using elementary matrix arithmetic. Of course, numerically this matrix needs to be truncated, but we are able to do so such that thousands of scatterers on every array are included in the numerical results. Comparisons with other numerical methods are considered, and their strengths/weaknesses are highlighted.
Matthew Nethercote, Anastasia Kisil, Raphael Assier
2023-06-30T13:41:44Z
http://arxiv.org/abs/2306.17657v2
Diffraction of acoustic waves by multiple semi-infinite arrays: a generalisation of the point scatterer wedge1 ###### Abstract Analytical methods are fundamental in studying acoustics problems. One of the important tools is the Wiener-Hopf method, which can be used to solve many canonical problems with sharp transitions in boundary conditions on a plane/plate. However, there are some strict limitations to its use, usually the boundary conditions need to be imposed on parallel lines (after a suitable mapping). Such mappings exist for wedges with continuous boundaries, but for discrete boundaries, they have not yet been constructed. In our previous article, we have overcome this limitation and studied the diffraction of acoustic waves by a wedge consisting of point scatterers. Here, the problem is generalised to an arbitrary number of periodic semi-infinite arrays with arbitrary orientations. This is done by constructing several coupled systems of equations (one for every semi-infinite array) which are treated independently. The derived systems of equations are solved using the discrete Wiener-Hopf technique and the resulting matrix equation is inverted using elementary matrix arithmetic. Of course, numerically this matrix needs to be truncated, but we are able to do so such that thousands of scatterers on every array are included in the numerical results. Comparisons with other numerical methods are considered, and their strengths/weaknesses are highlighted. ## 1 Introduction Analytical solutions for acoustic/electromagnetic wave scattering problems by different combinations of finite, infinite and semi-infinite plates/arrays (Lawrie and Abrahams, 2007) are of special interest. These are difficult problems, and we will briefly review some of the work on this subject. Generalising the famous Sommerfeld's half-plane problem (consisting of one semi-infinite plate), Heins (1948a,b) uses the Wiener-Hopf (WH) technique to find the exact solution for the problem where an electric-polarised wave is incident on a pair of parallel semi-infinite plates, symbolising a receiving and transmitting antenna for parts I and II respectively. This WH formulation has been extended to a matrix version in (Abrahams and Wickham, 1988, 1990a,b), for the equivalent acoustic problem with a pair of staggered plates, and in (Jones, 1986) for three semi-infinite planes. Also, by employing appropriate mappings/transformations, it was possible to use the matrix WH techniques for wedges (Shanin, 1998; Daniele and Zich, 2014; Nethercote et al., 2020b). Additionally, since finding exact solutions is difficult for the matrix WH technique, there have been many developments on approximate and asymptotic factorisation techniques (Rogosin and Mishuris, 2016; Kisil and Ayton, 2018; Kisil, 2018). Alternatively, researchers resort to numerical and asymptotic schemes in their models (see (Peake and Cooper, 2001; Kirby, 2008; Adams et al., 2008; Craster et al., 2009) for waveguides and ducts for example). One of the advantages of the WH technique is that a solution for a scattering problem can also be used in aeroacoustics setting to study the interaction of plates with gusts. In particular, (Peake, 1992) considered an infinite staggered cascade of finite thin blades which were aligned with a uniform subsonic mean flow, and found an iterative solution by an infinite sequence of coupled, semi-infinite WH problems in the high frequency limit. 
More recently, this work has been extended for non-aligned mean flow (Peake and Kerschen, 1997, 2004), thin aerofoils Baddoo and Ayton (2018, 2020) and for high staggering angles Maierhofer and Peake (2020, 2022). In an elastic setting, the scattering and localisation of flexural waves on an elastic Kirchhoff plate is another important problem, especially determining waves that are blocked or trapped (Movchan et al., 2009, Haslinger et al., 2014, 2016). Jones et al. (2017) considered a pair of parallel semi-infinite gratings of rigid pins on a Kirchhoff plate. A follow-up article (Haslinger et al., 2018) studied problems with configurations of parallel semi-infinite gratings. In particular, it featured a problem where four parallel semi-infinite gratings form a waveguide in a herringbone pattern. Both these articles form and use the solution to the discrete WH functional equation but also identify trapped modes from analysing the kernel. The latter of the two articles also assumes that for each side of the waveguide, the two gratings are closely spaced which allowed them to use a dipole approximation. These diffraction problems can also be considered on a lattice governed by a discrete Helmholtz equation. For instance, waves diffracted in a square lattice by a semi-infinite crack or a semi-infinite rigid constant was solved recently using discrete WH technique in Sharma (2015) and Sharma (2015) respectively. These two problems are analogous to the classic Sommerfeld's half-plane problem (sound-hard and sound-soft respectively). This work has been extended to two staggered semi-infinite cracks Maurya and Sharma (2019) which was not solved exactly but asymptotically due to the notorious difficulties in factorising a matrix WH kernel. In this article, we are interested in arbitrary arrays of small Dirichlet cylinders within a continuum. This means that we use the continuous Helmholtz equation subject to boundary conditions imposed on a discrete set of scatterers. The semi-infinite array (the analogue of Sommerfeld's half-plane) problem was solved using the discrete WH technique long ago (Hills and Karp, 1965, Linton and Martin, 2004). But there has been little work to generalise this to other configurations of arrays as was the case in the continuous and the discrete case outlined above. In our previous work (Nethercote et al., 2022), we combined two semi-infinite arrays to form a wedge. Unlike the continuous boundary wedge (Nethercote et al., 2020), this did not lead to a matrix WH problem due to the difficulties in finding an appropriate mapping. Instead, we considered two coupled systems of equations which were solved using the discrete WH technique, followed by an effective numerical iterative procedure. In this article, we will study problems involving any number of independent semi-infinite arrays comprised of equidistant scatterers. This is very general since the position and orientation of the arrays are arbitrary, which allows us to model many types of interesting problems. As in (Nethercote et al., 2022), we will use the WH technique for each of the arrays and then couple them together. But this time, the coupling is encoded directly in the matrix inversion which means that there is no need for an iterative scheme. For any arbitrary configuration of scatterers, one can use numerical techniques to determine the scattering behaviour. 
Some examples of these techniques include finite element methods, a T-matrix reduced order model (Ganesh and Hawkins, 2017, Hawkins, 2023) and a least square collocation approach that was used in (Chapman et al., 2015) and (Hewett and Hewitt, 2016) to study the electrostatic and electromagnetic shielding by Faraday cages. While these methods are very efficient at modelling the interactions between individual scatterers, they do not work very well at modelling the infinite nature of periodic arrays. The structure of the paper is as follows. We start by setting up and solving the Wiener-Hopf problem in section 2.1, which results in a matrix equation that is inverted in section 2.2 to find the scattering coefficients. In section 2.3, we proceed by looking into the special case with two semi-infinite arrays and link with the point scatterer wedge from (Nethercote et al., 2022). We also analyse the determinants and condition numbers of the matrices involved in section 2.4 as well as discuss the use of fast multipole methods for efficient computations of their components in section 2.5. Finally, we showcase several different test cases in section 2.6 and compare the Wiener-Hopf solution with other numerical techniques in section 2.7 and highlight their strengths and weaknesses. ## 2 Multiple semi-infinite arrays Viewed as a three-dimensional problem, the scatterers are all cylinders of infinite height, have a small radius and satisfy homogeneous Dirichlet boundary conditions. This problem can naturally be reduced to two dimensions for non-skew incidence and this is what we will be considering here. Throughout this article, we will exploit the methodology depicted in (Nethercote et al., 2022), since the point scatterer wedge can be considered as a particular case of what is presented here. Similarly to (Nethercote et al., 2022a), we are looking for time-harmonic solutions to the linear wave equation by assuming and then suppressing the time factor \(e^{-i\omega t}\), where \(\omega\) is the angular frequency, and use a polar coordinate system \((r,\theta)\) with the position vector given by \(\mathbf{r}\). We let \(\Phi\) be the total wave field and decompose it into an incident wave field \(\Phi_{\rm I}\) and the resulting scattered field \(\Phi_{\rm S}\) by the equation \(\Phi=\Phi_{\rm I}+\Phi_{\rm S}\). Both of these fields satisfy the Helmholtz equation with wavenumber \(k\). The incident wave field \(\Phi_{\rm I}(\mathbf{r})\) takes the form of a unit amplitude plane wave given by, \[\Phi_{\rm I}=e^{-ikr\cos(\theta-\theta_{\rm I})} \tag{2.1}\] where \(\theta_{\rm I}\) is the incoming incident angle. An important assumption of this study is the use of Foldy's approximation (Foldy, 1945; Martin, 2006): where we assume that the cylinders are isotropic point scatterers. That requires them to be small in comparison to the wavelength (i.e. \(ka\ll 1\)) and this allows us to write the scattered field \(\Phi_{\rm S}\) in the form of a monopole expansion. This article is focused on the problem of an incident wave scattered by \(\mathcal{J}\) arbitrary periodic semi-infinite arrays. The \(j^{\rm th}\) array starts at an arbitrary position \(\mathbf{R}_{0}^{(j)}\), makes an arbitrary angle \(\alpha_{j}\) with the \(x\)-axis, and the scatterers have radius \(a_{j}>0\) and are arranged with a spacing \(s_{j}>0\). 
The position of the \(n^{\rm th}\) scatterer in the \(j^{\rm th}\) array is hence given by, \[\mathbf{R}_{n}^{(j)} =\mathbf{R}_{0}^{(j)}+ns_{j}(\cos(\alpha_{j})\mathbf{\hat{x}}+\sin(\alpha _{j})\mathbf{\hat{y}}),\quad n=0,1,2,...\] \[\mathbf{R}_{0}^{(j)} =R_{0}^{(j)}(\cos(\theta_{0}^{(j)})\mathbf{\hat{x}}+\sin(\theta_{0}^ {(j)})\mathbf{\hat{y}}) \tag{2.2}\] where \((\mathbf{\hat{x}},\mathbf{\hat{y}})\) are the unit basis vectors of a Cartesian coordinate system. We further introduce \(\Lambda^{(j,\ell)}(m,n)\) as the distance between the \(m^{\rm th}\) scatterer on the \(j^{\rm th}\) array and the \(n^{\rm th}\) on the \(\ell^{\rm th}\) array \[\Lambda^{(j,\ell)}(m,n)= |\mathbf{R}_{m}^{(j)}-\mathbf{R}_{n}^{(\ell)}|\] \[= \Big{(}\Big{(}R_{0}^{(j)}\Big{)}^{2}\!+\!\Big{(}R_{0}^{(\ell)} \Big{)}^{2}\!+\!(ms_{j})^{2}\!+\!(ns_{\ell})^{2}\!-\!2R_{0}^{(j)}R_{0}^{(\ell) }\cos(\theta_{0}^{(j)}\!-\!\theta_{0}^{(\ell)})\] \[-2ms_{j}s_{\ell}\cos(\alpha_{j}\!-\!\alpha_{\ell})\!+\!2ms_{j} \Big{(}\!R_{0}^{(j)}\cos(\theta_{0}^{(j)}\!-\!\alpha_{j})\!-\!R_{0}^{(\ell)} \cos(\theta_{0}^{(\ell)}\!-\!\alpha_{j})\Big{)}\] \[-2ns_{\ell}\Big{(}R_{0}^{(j)}\cos(\theta_{0}^{(j)}\!-\!\alpha_{ \ell})\!-\!R_{0}^{(\ell)}\cos(\theta_{0}^{(\ell)}\!-\!\alpha_{\ell})\Big{)} \Big{)}^{\frac{1}{2}} \tag{2.3}\] Figure 1: Diagram of a plane wave interacting with multiple arbitrary semi-infinite arrays. For simplicity here, the first array is positioned on the positive \(x\)-axis (i.e. \(R_{0}^{(1)}=\alpha_{1}=0\)). This distance function satisfies the identity \(\Lambda^{(j,\ell)}(m,n)=\Lambda^{(\ell,j)}(n,m)\). It is important to note that while we allow the arrays to cross, we do _not_ want the scatterers to overlap. This means that we need the condition \(a_{j}<s_{j}/2\) for all \(j=1,2,...\mathcal{J}\) to prevent overlapping between scatterers belonging to the \(j^{\text{th}}\) array and \(a_{j}+a_{\ell}<\Lambda^{(j,\ell)}(m,n)\) for all \(m,n\in\mathbb{Z}\) and \(j,\ell=1,2,...\mathcal{J}\) which prevents overlapping between the \(j^{\text{th}}\) and \(\ell^{\text{th}}\) arrays. Using Foldy's approximation, the scattered field \(\Phi_{\text{S}}\) is written in the form of a monopole expansion. This means that the total field \(\Phi\) is given by \[\Phi(\mathbf{r})=\Phi_{\text{I}}+\sum_{j=1}^{\mathcal{J}}\sum_{n=0}^{\infty}\left[ A_{n}^{(j)}H_{0}^{(1)}(k|\mathbf{r}-\mathbf{R}_{n}^{(j)}|)\right], \tag{2.4}\] where \(A_{n}^{(j)}\) is the scattering coefficient associated with the \(n^{\text{th}}\) scatterer of the \(j^{\text{th}}\) array. To obtain the systems of equations, we use the procedure described in [15, eqns (3.4), (3.6)] for the point scatterer wedge. As a result, we obtain \(\mathcal{J}\) systems of infinitely many equations governing the scattering coefficients. The \(m^{\text{th}}\) equation (\(m\geq 0\)) of the \(j^{\text{th}}\) system (\(j\in\{1,2,...\mathcal{J}\}\)) is given by \[A_{m}^{(j)}H_{0}^{(1)}(ka_{j})+\sum_{\begin{subarray}{c}n=0\\ n\neq m\end{subarray}}^{\infty}\left[A_{n}^{(j)}H_{0}^{(1)}(ks_{j}|m-n|)\right] =-\sum_{\begin{subarray}{c}\ell=1\\ \ell\neq j\end{subarray}}^{\mathcal{J}}\sum_{n=0}^{\infty}\left[A_{n}^{(\ell)} H_{0}^{(1)}\left(k\Lambda^{(j,\ell)}(m,n)\right)\right]-e^{i\mathbf{k}\cdot\mathbf{R}_{m}^{(j)}} \tag{2.5}\] where \(\mathbf{k}\cdot\mathbf{R}_{m}^{(j)}=-kR_{0}^{(j)}\cos(\theta_{0}^{(j)}-\theta_{\text{ I}})-ks_{j}m\cos(\alpha_{j}-\theta_{\text{I}})\) and \(\mathbf{k}=-k(\cos(\theta_{\text{I}})\mathbf{\hat{x}}+\sin(\theta_{\text{I}})\mathbf{\hat {y}})\) is the incident wavevector. 
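Before deriving the Wiener-Hopf solution, note that a truncated version of (2.5) can be solved directly by collecting the scattering coefficients of all arrays into one linear system and inverting it numerically. The following R sketch is our own illustration of this brute-force approach (the wavenumber, array geometry and truncation level N are arbitrary assumptions, and this is not the authors' code):

```r
# Brute-force solution of the truncated Foldy system (2.5) for two arrays.
# All parameter values below are illustrative assumptions.
H0 <- function(x) besselJ(x, 0) + 1i * besselY(x, 0)      # Hankel function H_0^(1)

k <- 5; a <- 0.01; s <- 1; thetaI <- pi / 4; N <- 50      # N scatterers per array
R0 <- list(c(0, 0), c(0, 1.5))                            # starting points R_0^(j)
alpha <- c(0, 2 * pi / 3)                                 # array angles alpha_j
pos <- do.call(rbind, Map(function(r0, al)
  cbind(r0[1] + (0:(N - 1)) * s * cos(al),
        r0[2] + (0:(N - 1)) * s * sin(al)), R0, alpha))   # scatterer positions

M <- matrix(0 + 0i, 2 * N, 2 * N)                         # interaction matrix
for (p in 1:(2 * N)) for (q in 1:(2 * N)) {
  d <- sqrt(sum((pos[p, ] - pos[q, ])^2))
  M[p, q] <- if (p == q) H0(k * a) else H0(k * d)
}
kvec <- -k * c(cos(thetaI), sin(thetaI))                  # incident wavevector
A <- solve(M, -exp(1i * (pos %*% kvec)))                  # scattering coefficients

r <- c(2, 1)                                              # a field evaluation point
Phi <- exp(1i * sum(kvec * r)) +
  sum(A * H0(k * sqrt(rowSums(sweep(pos, 2, r)^2))))      # total field, Eqn (2.4)
```

Such direct solves are representative of the numerical methods that the Wiener-Hopf solution is compared against later; unlike them, the Wiener-Hopf route does not require truncating the semi-infinite arrays.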
### Solving the \(j^{\text{th}}\) system of equations To solve the system of equations (2.5) for a specific \(j\), we use the discrete analogue of the WH technique. We start by extending (2.5) for negative \(m\) using some unknown coefficients \(F_{j,m}\) and state that \(A_{m}^{(j)}=0\) for \(m<0\), \[A_{m}^{(j)}H_{0}^{(1)}(ka_{j})+\sum_{\begin{subarray}{c}n=0\\ n\neq m\end{subarray}}^{\infty}\left[A_{n}^{(j)}H_{0}^{(1)}(ks_{j}|m-n|)\right] \tag{2.6}\] \[=\begin{cases}-\sum_{\begin{subarray}{c}\ell=1\\ \ell\neq j\end{subarray}}^{\mathcal{J}}\sum_{n=0}^{\infty}\left[A_{n}^{(\ell)} H_{0}^{(1)}\left(k\Lambda^{(j,\ell)}(m,n)\right)\right]-e^{i\mathbf{k}\cdot\mathbf{R}_{m}^{(j)}},&m\geq 0,\\ F_{j,m},&m<0.\end{cases}\] Here, the \(A_{m}^{(j)}\) scattering coefficients are the unknowns to find and all others are assumed to be known. Noting that the forward Z-transform is given for any sequence \(G_{m}\) by, \[G(z)=\sum_{m=-\infty}^{\infty}G_{m}z^{m},\ \ \text{with inverse},\ \ G_{m}=\frac{1}{2\pi i}\oint_{C}G(z)z^{-m-1}\text{d}z, \tag{2.7}\] we apply it to (2.6) to obtain the Wiener-Hopf equation \[K_{j}(z)A_{j}^{+}(z)= F_{j,\text{pole}}^{+}(z)+F_{j}^{-}(z)+\sum_{\begin{subarray}{c}\ell=1 \\ \ell\neq j\end{subarray}}^{\mathcal{J}}F_{\ell,A}^{+}(z), \tag{2.8}\] where \(A_{j}^{+}(z)\) is the Z-transform of the unknown scattering coefficients of the \(j^{\text{th}}\) array; \[A_{j}^{+}(z)=\sum_{m=-\infty}^{\infty}A_{m}^{(j)}z^{m}=\sum_{m=0}^{\infty}A_{m} ^{(j)}z^{m}. \tag{2.9}\] As in [Nethercote et al., 2022a], it is useful to assume that \(k\) has a small positive imaginary part to help with the convergence of the Z-transform. We also define the two regions, \[\Omega_{j}^{+} =\left\{z\in\mathbb{C}:|z|<e^{-\mathrm{Im}\{k\}s_{j}\cos(\alpha_{j }-\theta_{l})}\right\},\] \[\Omega_{j}^{-} =\left\{z\in\mathbb{C}:|z|>e^{-\mathrm{Im}\{k\}s_{j}}\right\}, \tag{2.10}\] in which a function with a \(+\) or \(-\) superscript is analytic. In these Wiener-Hopf problems, a crucial function is the Wiener-Hopf _kernel_\(K_{j}(z)\) given by \[K_{j}(z)=H_{0}^{(1)}(ka_{j})+\sum_{\ell=1}^{\infty}\left[(z^{\ell}+z^{-\ell})H_ {0}^{(1)}(ks_{j}\ell)\right], \tag{2.11}\] for \(j=1,2,...\mathcal{J}\), which has the exact same definition and properties as in [Nethercote et al., 2022a, eq (2.15)], including the important identity \(K_{j}(z)=K_{j}(1/z)\) and the singular points \(z=e^{\pm iks_{j}}\). Furthermore, the kernel is analytic and zero-free on an annulus which contains \(\Omega_{j}^{+}\cap\Omega_{j}^{-}\). Since \(K_{j}(z)\) is a slow-convergent infinite series, it is very impractical for numerical evaluation. To counter this, there are alternative methods of evaluation, including the use of the method of tail-end asymptotics [Lynott et al., 2019] and rewriting the Schlomilch series to a fast-convergent version [Linton, 1998, 2006, 2010] (see also the appendix in [Nethercote et al., 2022a] for specifics). The three forcing terms on the right-hand side of (2.8) are defined by \[F_{j,\mathrm{pole}}^{+}(z) =\frac{e^{ik\boldsymbol{k}\cdot\boldsymbol{R}_{0}^{(j)}}}{ze^{-ik s_{j}\cos(\alpha_{j}-\theta_{1})}-1}, \tag{2.12}\] \[F_{\ell,A}^{+}(z) =-\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\left[A_{n}^{(\ell)}z^{m }H_{0}^{(1)}\left(k\Lambda^{(j,\ell)}(m,n)\right)\right],\] (2.13) \[F_{j}^{-}(z) =\sum_{m=-\infty}^{-1}F_{j,m}z^{m}. \tag{2.14}\] Note that by design, \(F_{j}^{-}(z)=O\left(\frac{1}{z}\right)\) as \(|z|\to\infty\). 
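For orientation, the kernel (2.11) can be evaluated by brute-force truncation of its series, as in the sketch below (our own illustration; as noted above, the series converges slowly, so the accelerated Schlomilch forms or tail-end asymptotics are preferred in practice):

```r
# Naive truncation of the kernel series (2.11) -- illustration only, since the
# text notes this converges slowly; accelerated forms (Linton 1998-2010) or
# tail-end asymptotics (Lynott et al. 2019) are preferred in practice.
H0 <- function(x) besselJ(x, 0) + 1i * besselY(x, 0)   # Hankel function H_0^(1)
K.naive <- function(z, k, s, a, L = 1e5) {
  l <- 1:L
  H0(k * a) + sum((z^l + z^(-l)) * H0(k * s * l))
}
K.naive(exp(1i * pi / 3), k = 5, s = 1, a = 0.01)      # a point on the unit circle
```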
We proceed with the WH technique by factorising \(K_{j}(z)\) in the exact same way as in [Nethercote et al., 2022a, eq (2.15)]. This means writing \(K_{j}(z)=K_{j}^{+}(z)K_{j}^{-}(z)\) in such a way that the two factors satisfy \(K_{j}^{-}(1/z)=K_{j}^{+}(z)\) and are defined by Cauchy's integral formulae (2.15)-(2.17). \[\ln(K_{j}^{+}(z))=\ln(K_{j}^{0})-\frac{1}{2\pi i}\int_{C}\frac{\ln(K_{j}(\xi))}{\xi-1/z}\mathrm{d}\xi, \tag{2.15}\] \[\ln(K_{j}^{-}(z))=\ln(K_{j}^{0})-\frac{1}{2\pi i}\int_{C}\frac{\ln(K_{j}(\xi))}{\xi-z}\mathrm{d}\xi, \tag{2.16}\] \[\text{where},\quad\ln(K_{j}^{0})=\ln(K_{j}^{+}(0))=\frac{1}{4\pi i}\int_{C}\frac{\ln(K_{j}(\xi))}{\xi}\mathrm{d}\xi. \tag{2.17}\] Here, the integration contour \(C\) (see FIG. 2) is the anticlockwise circular path contained inside \(\Omega_{j}^{+}\cap\Omega_{j}^{-}\) on the \(\xi\) complex plane. Additionally, \(C\) will also run radially below the pole at \(\xi=z^{\pm 1}\). Both kernel factors \(K_{j}^{\pm}(z)\) are also analytic and zero-free inside the regions \(\Omega_{j}^{\pm}\). Now let us divide (2.8) by \(K_{j}^{-}(z)\) to obtain, \[K_{j}^{+}(z)A_{j}^{+}(z)=\frac{F_{j,\mathrm{pole}}^{+}(z)}{K_{j}^{-}(z)}+\frac{F_{j}^{-}(z)}{K_{j}^{-}(z)}+\sum_{\begin{subarray}{c}\ell=1\\ \ell\neq j\end{subarray}}^{\mathcal{J}}\frac{F_{\ell,A}^{+}(z)}{K_{j}^{-}(z)}. \tag{2.18}\] Next, we sum-split the pole term using the pole removal technique to get, \[\frac{F_{j,\mathrm{pole}}^{+}(z)}{K_{j}^{-}(z)}=\frac{F_{j,\mathrm{pole}}^{+}(z)}{K_{j}^{-}(e^{iks_{j}\cos(\alpha_{j}-\theta_{1})})}+F_{j,\mathrm{pole}}^{+}(z)\left[\frac{1}{K_{j}^{-}(z)}-\frac{1}{K_{j}^{-}(e^{iks_{j}\cos(\alpha_{j}-\theta_{1})})}\right], \tag{2.19}\] where the first and second terms are analytic in \(\Omega_{j}^{+}\) and \(\Omega_{j}^{-}\) respectively. We can also sum-split all of the \(\frac{F_{\ell,A}^{+}(z)}{K_{j}^{-}(z)}\) terms for every \(\ell\) by first noting that it can be rewritten as a Laurent series given by \[\frac{F_{\ell,A}^{+}(z)}{K_{j}^{-}(z)}=D_{j,\ell}(z)=\sum_{n=-\infty}^{\infty}D_{j,\ell,n}z^{n},\ \ \text{where}\ \ D_{j,\ell,n}=\frac{1}{2\pi i}\int_{C}D_{j,\ell}(z)z^{-n-1}\text{d}z, \tag{2.20}\] because it is analytic on an annulus region of \(z\) containing \(\Omega_{j}^{+}\cap\Omega_{j}^{-}\). The sum-split of \(D_{j,\ell}(z)\) is trivial, \[D_{j,\ell}^{+}(z)=\sum_{n=0}^{\infty}D_{j,\ell,n}z^{n},\ \ \text{and}\ \ D_{j,\ell}^{-}(z)=\sum_{n=-\infty}^{-1}D_{j,\ell,n}z^{n}. 
\tag{2.21}\] After the sum-split, we obtain the final WH equation, \[K_{j}^{+}(z)A_{j}^{+}(z)- \frac{F_{j,\text{pole}}^{+}(z)}{K_{j}^{-}(e^{ikz_{j}\cos(\alpha_ {j}-\theta_{1})})}-\sum_{\begin{subarray}{c}\ell=1\\ \ell\neq j\end{subarray}}^{\mathcal{J}}D_{j,\ell}^{+}(z) \tag{2.22}\] \[=F_{j,\text{pole}}^{+}(z)\left[\frac{1}{K_{j}^{-}(z)}-\frac{1}{K _{j}^{-}(e^{ikz_{j}\cos(\alpha_{j}-\theta_{1})})}\right]+\sum_{\begin{subarray} {c}\ell=1\\ \ell\neq j\end{subarray}}^{\mathcal{J}}D_{j,\ell}^{-}(z)+\frac{F_{j}^{-}(z)}{K _{j}^{-}(z)}, \tag{2.23}\] where the left (right) hand side are analytic in \(\Omega_{j}^{+}\) and \(\Omega_{j}^{-}\) respectively. The two sides of the WH equation are used to construct an entire function \(\Psi_{j}(z)\) defined by \[\Psi_{j}(z)=\left\{\begin{array}{ll}(\ref{eq:2.22})&z\in\Omega_{j}^{+},\\ (\ref{eq:2.23})&z\in\Omega_{j}^{-},\\ (\ref{eq:2.22})=(\ref{eq:2.23})&z\in\Omega_{j}^{+}\cap\Omega_{j}^{-}.\end{array}\right. \tag{2.24}\] Each term of (2.23) is \(O\left(\frac{1}{z}\right)\) as \(|z|\to\infty\), which implies that \(\Psi_{j}\) is bounded and tends to zero at infinity as well as entire. Therefore, Liouville's theorem ensures that both (2.22) and (2.23) are equivalently zero, and then we obtain the solution for \(A_{j}^{+}(z)\), \[A_{j}^{+}(z)= \frac{F_{j,\text{pole}}^{+}(z)}{K_{j}^{+}(z)K_{j}^{-}(e^{ikz_{j} \cos(\alpha_{j}-\theta_{1})})}+\sum_{\begin{subarray}{c}\ell=1\\ \ell\neq j\end{subarray}}^{\mathcal{J}}\frac{D_{j,\ell}^{+}(z)}{K_{j}^{+}(z)}. \tag{2.25}\] Figure 2: Diagram of the integration contour \(C\) on the \(\xi\) complex plane. Here, the border of the regions \(\Omega_{j}^{\pm}\) are shown as blue and red circles respectively and the grey dashed circle is the unit circle \(|\xi|=1\). The smaller diagram is the limiting case when the imaginary part of \(k\) tends to zero. To recover the scattering coefficients we use the inverse Z-transform (2.7), which gives us, \[A_{m}^{(j)} =\frac{e^{i\mathbf{k}\cdot\mathbf{R}_{0}^{(j)}+iks_{j}\cos(\alpha_{j}-\theta _{1})}}{2\pi iK_{j}^{-}(e^{iks_{j}\cos(\alpha_{j}-\theta_{1})})}\oint_{C}\frac{z^ {-m-1}}{K_{j}^{+}(z)(z-e^{iks_{j}\cos(\alpha_{j}-\theta_{1})})}\mathrm{d}z\] \[+\sum_{\begin{subarray}{c}\ell=1\\ \ell\neq j\end{subarray}}^{\mathcal{J}}\frac{1}{2\pi i}\oint_{C}\frac{D_{j, \ell}^{+}(z)}{K_{j}^{+}(z)}z^{-m-1}\mathrm{d}z \tag{2.26}\] Next, we let the imaginary part of \(k\) tend to zero and then the integration contour \(C\) in (2.26) is an indented anticlockwise unit circle (by passing any singularities on the unit circle radially below) with the pole \(z=0\) being the only singularity inside. The smaller diagram in FIG. 2 illustrates \(C\) in (2.26), but note that we do not have the branch point at \(e^{iks}\) here. To evaluate these integrals, we need to recall the identity \(K_{j}^{-}(z)=K_{j}^{+}(1/z)\) and note the expansion of \(\left(K_{j}^{+}(z)\right)^{-1}\) given by \[\frac{1}{K_{j}^{+}(z)}=\sum_{n=0}^{\infty}\lambda_{j,n}z^{n},\ \ \text{where}\ \ \lambda_{j,n}=\frac{1}{n!}\frac{\mathrm{d}^{n}}{\mathrm{d}z^{n}}\left[\frac{ 1}{K_{j}^{+}(z)}\right]_{z=0}. 
\tag{2.27}\] For the first integral of (2.26), the evaluation is equivalent to the associated semi-infinite array problem (see [10, eqn (2.22)]), where the extra factor \(e^{i\mathbf{k}\cdot\mathbf{R}_{0}^{(j)}}\) accounts for the off-centre start of the array, \[\frac{e^{i\mathbf{k}\cdot\mathbf{R}_{0}^{(j)}+iks_{j}\cos(\alpha_{j}- \theta_{1})}}{2\pi iK_{j}^{-}(e^{iks_{j}\cos(\alpha_{j}-\theta_{1})})}\oint_{C} \frac{z^{-m-1}}{K_{j}^{+}(z)(z-e^{iks_{j}\cos(\alpha_{j}-\theta_{1})})}\mathrm{ d}z\] \[= -\frac{e^{-ikR_{0}^{(j)}\cos(\theta_{0}^{(j)}-\theta_{1})}}{K_{j} ^{+}(e^{-iks_{j}\cos(\alpha_{j}-\theta_{1})})}\sum_{n=0}^{m}\left[\lambda_{j,n }e^{-iks_{j}(m-n)\cos(\alpha_{j}-\theta_{1})}\right]. \tag{2.28}\] Each remaining term in (2.26) adds the interaction from the \(\ell^{\text{th}}\) array to the \(j^{\text{th}}\) array and is evaluated in much the same way as in [10, eqns (3.25-3.27)]: \[\frac{1}{2\pi i}\oint_{C}\frac{D_{j,\ell}^{+}(z)}{K_{j}^{+}(z)}z^{ -m-1}\mathrm{d}z =\sum_{n=0}^{\infty}\sum_{p=0}^{\infty}\left[\lambda_{j,p}D_{j,\ell,n}\frac{1}{2\pi i}\oint_{C}z^{p+n-m-1}\mathrm{d}z\right]\] \[=\sum_{n=0}^{m}\left[\lambda_{j,m-n}D_{j,\ell,n}\right]. \tag{2.29}\] The coefficients \(D_{j,l,n}\) are given by (2.20): \[D_{j,\ell,n} =\frac{1}{2\pi i}\int_{C}\frac{F_{\ell,A}^{+}(z)}{K_{j}^{-}(z)}z^ {-n-1}\mathrm{d}z\] \[=-\sum_{q=0}^{\infty}\sum_{p=0}^{\infty}\sum_{l=0}^{\infty}\left[ \lambda_{j,p}A_{q}^{(\ell)}H_{0}^{(1)}\left(k\Lambda^{(j,\ell)}(l,q)\right) \frac{1}{2\pi i}\int_{C}z^{l-n-p-1}\mathrm{d}z\right], \tag{2.30}\] where the integral is non-zero only when \(l-n-p=0\) which implies that, \[D_{j,\ell,n}=-\sum_{q=0}^{\infty}\sum_{p=0}^{\infty}\left[\lambda_{j,p}A_{q}^{( \ell)}H_{0}^{(1)}\left(k\Lambda^{(j,\ell)}(p+n,q)\right)\right], \tag{2.31}\] and then the scattering coefficients are equal to \[A_{m}^{(j)}= -\frac{e^{i\mathbf{k}\cdot\mathbf{R}_{0}^{(j)}}}{K_{j}^{+}(e^{-iks_{j} \cos(\alpha_{j}-\theta_{1})})}\sum_{n=0}^{m}\left[\lambda_{j,n}e^{-iks_{j}(m- n)\cos(\alpha_{j}-\theta_{1})}\right]\] \[-\sum_{\begin{subarray}{c}\ell=1\\ \ell\neq j\end{subarray}}^{\mathcal{J}}\sum_{q=0}^{\infty}\sum_{p=0}^{\infty} \sum_{n=0}^{m}\left[\lambda_{j,m-n}\lambda_{j,p}A_{q}^{(\ell)}H_{0}^{(1)} \left(k\Lambda^{(j,\ell)}(p+n,q)\right)\right]. \tag{2.32}\] ### Writing and solving the Wiener-Hopf solution as a matrix equation We can write the Wiener-Hopf solution (2.32) in the form of an infinite matrix equation, \[\mathbf{A}^{(j)}=\mathbf{A}^{(j)}_{0}-\sum_{\begin{subarray}{c}\ell=1\\ \ell\neq j\end{subarray}}^{\mathcal{J}}\mathcal{M}^{(j,\ell)}\mathbf{A}^{(\ell)}, \tag{2.33}\] where \(\mathbf{A}^{(j)}\) and \(\mathbf{A}^{(j)}_{0}\) are infinite column vectors of scattering coefficients with entries \(A^{(j)}_{m}\) and \[A^{(j)}_{0,m}=-\frac{e^{ik\cdot\mathbf{R}^{(j)}_{0}}}{K_{j}^{+}(e^{-iks_{j}\cos( \alpha_{j}-\theta_{1})})}\sum_{n=0}^{m}\left[\lambda_{j,n}e^{-iks_{j}(m-n)\cos( \alpha_{j}-\theta_{1})}\right],\ \ m\geq 0, \tag{2.34}\] respectively. The infinite matrices \(\mathcal{M}^{(j,\ell)}\) have entries \[\mathcal{M}^{(j,\ell)}_{mq}=\sum_{p=0}^{\infty}\sum_{n=0}^{m} \left[\lambda_{j,m-n}\lambda_{j,p}H^{(1)}_{0}\left(k\Lambda^{(j,\ell)}(p+n,q) \right)\right],\ \ m,q\geq 0. 
\tag{2.35}\] Putting together all values of \(j\) gives us a system of matrix equations which can also be written in block matrix form, \[\begin{pmatrix}\mathcal{I}&\mathcal{M}^{(1,2)}&\ldots&\mathcal{M}^{(1, \mathcal{J})}\\ \mathcal{M}^{(2,1)}&\mathcal{I}&\ldots&\mathcal{M}^{(2,\mathcal{J})}\\ \vdots&\vdots&\ddots&\vdots\\ \mathcal{M}^{(\mathcal{J},1)}&\mathcal{M}^{(\mathcal{J},2)}&\ldots&\mathcal{I }\end{pmatrix}\begin{pmatrix}\mathbf{A}^{(1)}\\ \mathbf{A}^{(2)}\\ \vdots\\ \mathbf{A}^{(\mathcal{J})}\end{pmatrix}=\begin{pmatrix}\mathbf{A}^{(1)}_{0}\\ \mathbf{A}^{(2)}_{0}\\ \vdots\\ \mathbf{A}^{(\mathcal{J})}\end{pmatrix}, \tag{2.36}\] where \(\mathcal{I}\) is the identity matrix. In principle, (2.36) can be inverted to get, \[\begin{pmatrix}\mathbf{A}^{(1)}\\ \mathbf{A}^{(2)}\\ \vdots\\ \mathbf{A}^{(\mathcal{J})}\end{pmatrix}=\begin{pmatrix}\mathcal{I}&\mathcal{M}^{(1,2)}&\ldots&\mathcal{M}^{(1,\mathcal{J})}\\ \mathcal{M}^{(2,1)}&\mathcal{I}&\ldots&\mathcal{M}^{(2,\mathcal{J})}\\ \vdots&\vdots&\ddots&\vdots\\ \mathcal{M}^{(\mathcal{J},1)}&\mathcal{M}^{(\mathcal{J},2)}&\ldots&\mathcal{ I}\end{pmatrix}^{-1}\begin{pmatrix}\mathbf{A}^{(1)}_{0}\\ \mathbf{A}^{(2)}_{0}\\ \vdots\\ \mathbf{A}^{(\mathcal{J})}_{0}\end{pmatrix}. \tag{2.37}\] Note that to evaluate these matrices and their inverses in practice, we will need to truncate the summations in the entries as well as the block matrices themselves. We will do this by ensuring that all blocks in the big matrix are of the same size. ### Two semi-infinite arrays In this section, we would like to analyse the form of the inverse matrix in (2.37) in terms of its blocks. This is difficult to do analytically, especially for larger \(\mathcal{J}\). Note that (2.37) reduces to the standard solution to the semi-infinite array problem if \(\mathcal{J}=1\). Let us say that we have just two semi-infinite arrays (i.e. \(\mathcal{J}=2\)), then we have the following matrix system, \[\mathbf{A}^{(1)} =\mathbf{A}^{(1)}_{0}-\mathcal{M}^{(1,2)}\mathbf{A}^{(2)},\] \[\mathbf{A}^{(2)} =\mathbf{A}^{(2)}_{0}-\mathcal{M}^{(2,1)}\mathbf{A}^{(1)}. \tag{2.38}\] Here, the entries of the first terms are still given by (2.34), and the entries of the matrices are given by, \[\mathcal{M}^{(1,2)}_{mq} =\sum_{p=0}^{\infty}\sum_{n=0}^{m}\left[\lambda_{1,m-n}\lambda_{ 1,p}H^{(1)}_{0}\left(k\Lambda^{(1,2)}(p+n,q)\right)\right],\] \[\mathcal{M}^{(2,1)}_{mq} =\sum_{p=0}^{\infty}\sum_{n=0}^{m}\left[\lambda_{2,m-n}\lambda_{ 2,p}H^{(1)}_{0}\left(k\Lambda^{(1,2)}(q,p+n)\right)\right]. \tag{2.39}\] We can solve the system (2.38) using simple matrix arithmetic, \[\mathbf{A}^{(1)} =\left(\mathcal{I}-\mathcal{M}^{(1,2)}\mathcal{M}^{(2,1)}\right)^{- 1}\left(\mathbf{A}^{(1)}_{0}-\mathcal{M}^{(1,2)}\mathbf{A}^{(2)}_{0}\right),\] \[\mathbf{A}^{(2)} =\mathbf{A}^{(2)}_{0}-\mathcal{M}^{(2,1)}\mathbf{A}^{(1)}. \tag{2.40}\] There is also an equivalent alternative solution to (2.4) where all the 1's and 2's have switched places. Although in theory, it is possible to write such formulae for \(\mathcal{J}>2\), it quickly becomes very convoluted. **Remark**.: _It is fairly simple to match these results with the specific case of the point scatterer wedge studied in [Nethercote et al., 2022a]. That wedge configuration is produced from these parameter choices: \(a_{1}=a_{2}=a\), \(s_{1}=s_{2}=R^{(2)}_{0}=s\), \(R^{(1)}_{0}=0\), \(\theta^{(2)}_{0}=\alpha_{2}=-\alpha_{1}=-\alpha\) which gives the distance function \(\Lambda^{(1,2)}(m,n)=s\left(m^{2}+(n+1)^{2}-2m(n+1)\cos(2\alpha)\right)^{\frac {1}{2}}\). 
This leads to the two WH kernels \((K_{1}(z)\) and \(K_{2}(z))\) as well as the resulting coefficients \((\lambda_{1,n}\) and \(\lambda_{2,n})\) becoming identical. There are two key differences left. One is that the second array here has one extra scatterer after truncation, i.e. the vector \(\mathbf{A}^{(2)}\) has one extra entry. However, this extra entry can be neglected and consequently, one must ignore the last column in \(\mathcal{M}^{(1,2)}\) and the last row in \(\mathcal{M}^{(2,1)}\) as well. The other difference is that we have revoked the need for the iterative scheme and have instead inverted the matrix equation (2.38) directly. Note that the inverted matrix can be expanded, \(\left(\mathcal{I}-\mathcal{M}^{(1,2)}\mathcal{M}^{(2,1)}\right)^{-1}= \mathcal{I}+\mathcal{M}^{(1,2)}\mathcal{M}^{(2,1)}+\left(\mathcal{M}^{(1,2)} \mathcal{M}^{(2,1)}\right)^{2}+...\), which can then be used to recover every iteration in the iterative scheme (although this requires the spectral radius \(\rho\left(\mathcal{M}^{(1,2)}\mathcal{M}^{(2,1)}\right)<1\) to converge)._ ### Uniqueness of Solution To be able to solve the matrix equation (2.36), we will need the determinant of that matrix to be non-zero. We have tried to find cases where the matrix is not invertible or show that it can always be inverted. While it seems to be the latter, it is clear from numerical experimentation that the individual block matrices are singular. The matrix entries can be written as follows, \[\mathcal{M}^{(j,\ell)} =\sum_{n=0}^{m}\lambda_{j,m-n}\bar{\mathcal{M}}^{(j,\ell)}_{nq},\] \[\text{where}\ \ \bar{\mathcal{M}}^{(j,\ell)}_{nq} =\sum_{p=0}^{\infty}\left[\lambda_{j,p}H^{(1)}_{0}\left(k \Lambda^{(j,\ell)}(p+n,q)\right)\right]. \tag{2.41}\] Say that we truncated the matrices at \(N\) such that \(\mathcal{M}^{(j,\ell)}\) is an \((N+1)\) by \((N+1)\) matrix, then by using Gaussian elimination, it can be shown that \[\det(\mathcal{M}^{(j,\ell)})=\lim_{N\to\infty}\left(\lambda_{j,0}^{N+1}\det( \bar{\mathcal{M}}^{(j,\ell)})\right), \tag{2.42}\] For small \(N\), the determinant of \(\bar{\mathcal{M}}^{(j,\ell)}\) is not generally zero. From numerical experimentation however, we find that as \(N\) increases, the extra eigenvalues (due to a bigger matrix) have very small absolute values and decay to zero. This means that \(\det(\bar{\mathcal{M}}^{(j,\ell)})\) (being the product of all eigenvalues) decays to zero very fast (at least exponentially), which implies that \(\det(\mathcal{M}^{(j,\ell)})\) decays to zero as well (assuming that \(\lambda_{j,0}\) is sufficiently small in the worst case scenarios). In addition to this, the condition numbers for \(\mathcal{M}^{(j,\ell)}\) should be infinite because these are singular matrices. FIG. 3 plots the absolute value of \(\det(\mathcal{M}^{(j,\ell)})\) w.r.t. \(N\) for several different test cases. We find that the case with two parallel arrays has the slowest decay but it is still exponential. The behaviour of the full matrix is naturally very different because the condition number for the matrix in (2.36) is quite moderate (approximately \(10^{0\text{--}2}\)) and the determinants are non-zero for all cases that we tested for this article. If the system had a zero determinant, it could either be due to the modelling assumption being insufficient [Nethercote et al., 2022b] or due to a specific physical phenomena. 
Since a zero determinant implies a non-unique solution to the matrix equation, and considering that this mostly relies on the chosen geometry of the problem, one could conjecture that this could imply the presence of homogeneous (Rayleigh-Bloch) waves. However, since it was proven that semi-infinite arrays with Dirichlet boundary conditions cannot support these waves [Bonnet-Ben Dhia and Starling, 1994, Bonnet-Ben Dhia et al., 2016], we are not expecting a zero determinant. ### Computational optimisation To calculate the matrices \(\bar{\mathcal{M}}^{(j,\ell)}\), we can use the fast multipole methods (FMM) library which is accessible from (Greengard and Gimbutas, 2012) (see also (Beatson and Greengard, 1997) for details on the algorithm). This method is very accurate at computing sums of the form (2.41). For the matrices \(\mathcal{M}^{(j,\ell)}\), the algorithm is able to reduce the computational cost with respect to the truncation (\(N\)) from \(O(N^{3})\) to as low as \(O(N^{2}\ln(N))\). However, it is only able to do this if the values of \(ks_{j}\) are sufficiently small for all \(j\). To demonstrate this, FIG. 4 plots the computation times to calculate the matrix in (2.36) for the point scatterer wedge given in FIG. 5 (a). This figure shows the difference between both methods w.r.t. the truncation \(N\) and \(ks\). On the left side, we see that for smaller \(ks\) the computational order has been reduced. On the right side, we see that FMM becomes slower than using direct methods when \(ks\) is approximately larger than \(\pi\). Although there are developments for larger wavenumbers (Crutchfield et al., 2006), this has not been implemented in the library we used for the current work. As a result, we will only use FMM to calculate \(\bar{\mathcal{M}}^{(j,\ell)}\) if the values of \(ks_{j}\) are sufficiently small for all \(j\). If not, then we will calculate it directly. Figure 4: Plots of the computation times to calculate the matrix in (2.36) for the point scatterer wedge given in FIG. 5. On the left (resp. right) side, these times are plotted w.r.t. the truncation \(N\) (resp. \(ks\)). Figure 3: Plot of the absolute value of \(\det(\mathcal{M}^{(j,\ell)})\) w.r.t. the truncation \(N\), compared with \(\lambda_{j,0}^{N+1}\). For all cases, we have \(k=5\pi\), \(s_{j}=0.1\) and \(a_{j}=0.001\) for all \(j\) so the value of \(\lambda_{j,0}\) is unchanged. ### Test cases In this section, we consider and showcase several different examples of test cases in FIG. 5. We look at five different configurations of semi-infinite arrays, where the array parameters are given in Table 1. In all of these test cases, we plot the real part of the total wave field and the incident wave has the same parameters, \(k=5\pi\) and \(\theta_{\text{I}}=\frac{\pi}{4}\) except for the last one (FIG. 5 (f)) which has \(k=7.5\pi\) instead. The first case (FIG. 5 (a)) is a exact recreation of the same point scatterer wedge that was considered in Figure 6 of (Nethercote et al., 2022). This time however, we are able to create gaps in the wedge interface (FIG. 5 (b)) as well as add additional scatterers to one or both of the arrays (FIG. 5 (c)), provided that we do not have overlapping scatterers. With this new generalised formulation, we can also consider cases with additional arrays. For example, we can have twelve outwardly pointing arrays where the ends are positioned to create a cage (FIG. 5 (d)). This cage is able to shield the middle region from an incident wave with a low wavenumber. 
In particular, the sound pressure level inside the cage (given by the formula \(20\log_{10}\) (root mean square(\(\Phi\)))) is approximately \(-26.36\) compared to \(0\) when there are no scatterers at all. This configuration is analogous to electrostatic and electromagnetic shielding problems using Faraday cages (see (Chapman et al., 2015; Hewett and Hewitt, 2016)). Although the WH technique is not typically used outside of semi-infinite problems, it is important to model the wave scattering by other types of arrays including circular arrays (Martin, 2014), infinite arrays with defects (Thompson and Linton, 2008) and long finite arrays (Thompson et al., 2008). Another case of special interest is determining the band-gap structure of doubly periodic lattices (Botten et al., 2001). For example, McIver (2007) and Krynkin and McIver (2009) study an infinite doubly periodic lattice of small scatterers with Dirichlet boundary conditions. With that in mind, the final case that we consider is a series of stacked infinite arrays to create a finitely thick doubly periodic lattice. The reason why we chose to look into this case is because we wanted to know if this configuration has similar properties to the fully periodic lattice as told by the band gap diagrams of (Krynkin and McIver, 2009). Specifically, we wanted to see the behaviour of an incident wave that can not penetrate the lattice (i.e. in a stop band) and the Bloch waves resulting from one that can (i.e. in a pass band). Choosing a wavenumber within a stop band (FIG. 5 (e)) causes the incident wave to be almost fully reflected from the lattice. The alternative choice of a wavenumber within a pass band (FIG. 5 (f)) does cause some reflection but most of the energy goes into the lattice and forms a Bloch wave inside before becoming a transmission out of the other side. ### Comparison with numerical methods Here, we seek to find a means of comparison with numerical methods that is both as fair and comprehensive as possible. We have three possible methods to compare with: finite element software COMSOL, a T-matrix solver (TMAT) by (Hawkins, 2023) and a least square collocation (LSC) method which was used in Chapman et al. (2015), Hewett and Hewitt (2016) for solving Laplace's and Helmholtz's equation respectively. In previous articles (Nethercote et al., 2020, 2022), we have extensively used COMSOL for comparison. However, there are limitations to the comparison since COMSOL is not able to find a solution with thousands \begin{table} \begin{tabular}{c|c|c|c|c|c|c} Case name & \(j\) & \(a_{j}\) & \(s_{j}\) & \(\alpha_{j}\) & \(\theta_{0}^{(j)}\) & \(R_{0}^{(j)}\) \\ \hline Point scatterer wedge & 1 & 0.001 & 0.1 & \(\frac{5\pi}{6}\) & 0 & 0 \\ (FIG. 5 (a)) & 2 & 0.001 & 0.1 & \(-\frac{5\pi}{6}\) & 0.1 \\ \hline Wedge with missing scatterers & 1 & 0.001 & 0.1 & \(\frac{5\pi}{6}\) & \(\frac{5\pi}{6}\) & 0.3 \\ (FIG. 5 (b)) & 2 & 0.001 & 0.1 & \(-\frac{5\pi}{6}\) & \(-\frac{5\pi}{6}\) & 0.3 \\ \hline Wedge with extra scatterers & 1 & 0.001 & 0.1 & \(\frac{5\pi}{6}\) & \(-\frac{5\pi}{6}\) & 0.45 \\ (FIG. 5 (c)) & 2 & 0.001 & 0.1 & \(-\frac{6\pi}{6}\) & \(\frac{5\pi}{6}\) & 0.45 \\ \hline Multilayered Faraday cage & 1,...7 & 0.001 & 0.05 & \(\frac{(j-1)\pi}{6}\) & \(\frac{(j-1)\pi}{6}\) & 0.1 \\ (FIG. 5 (d)) & 8,...12 & 0.001 & 0.05 & \(\frac{(j-1)3\pi}{6}\) & \(\frac{(j-1)3\pi}{6}\) & 0.1 \\ \hline Multiple infinite arrays & 1,...11 & 0.001 & 0.1 & 0 & \(-\frac{\pi}{2}\) & 0.1 \(\left(\frac{j-1}{2}\right)\) \\ (FIG. 
5 (e) and (f)) & 2,...12 & 0.001 & 0.1 & \(\pi\) & \(-\pi\!+\!\tan^{-1}\!\left(\frac{j}{2}\!-\!1\right)\) & 0.1\(\sqrt{1\!+\!\left(\frac{j}{2}\!-\!1\right)^{2}}\) \\ \end{tabular} \end{table} Table 1: The parameters for all semi-infinite arrays of the six test cases displayed in FIG. 5. of scatterers to fully compare with our method. Here, we will compare with results computed using TMAT or LSC where we are able to have the same number of scatterers and also the computation is performed on the same computer. We are also able to compute the scattering coefficients with these methods which is not possible with COMSOL. The T-matrix software package provides an object-oriented implementation of an efficient reduced order Figure 5: Real part of total field for six different test cases. Here, the incident wave is given by the parameters \(k=5\pi\) and \(\theta_{\text{I}}=\frac{\pi}{4}\) (except for (f) plot which has \(k=7.5\pi\)), and the array parameters are given in Table 1. model framework for modelling two- and three-dimensional wave scattering simulations. For the scattered field, TMAT uses the multipole expansion which is truncated depending on the scatterer size relative to the wavelength. TMAT also creates a Bessel function expansion of the incident field about every single scatterer which is truncated to the same number of terms as the multipole expansion. Following the construction of the T-matrix for every scatterer, a system of matrix equations is formed and solved for the scattering coefficients. However, the current version of this software restricts the truncation such that dipole coefficients must be included as well as monopole coefficients. The idea of the least squares collocation method is to create and solve an overdetermined matrix system to find the scattering coefficients of a truncated multipole expansion. Here, the known data is a collection of boundary data from a number of collocation points for every scatterer. We would need more collocation points per scatterer than the number of multipole terms to guarantee that the matrix system is overdetermined. The TMAT and LSC methods are both able to consider a large number of scatterers and despite the different methods, they are comparable in their results such that we only need to compare the WH method with one of them. We choose the LSC method because it will be a fairer comparison since we can restrict that one to monopole coefficients only. If we were to compare each of the plots in FIG. 5 with the equivalent determined by the LSC method, they would look identical without closer inspection. However, one can have a better idea of the differences between them by looking at the scattering coefficients produced by both methods. Let us consider a case with a single infinite array where the parameters are the same as the multiple infinite array case in Table 1. The advantage here is that the infinite array problem has a known exact solution given by \[A_{m}^{(1)}=-\frac{e^{-iksm\cos(\theta_{\mathrm{I}})}}{K(e^{iks \cos(\theta_{\mathrm{I}})})},\quad m\geq 0\] \[A_{m}^{(2)}=-\frac{e^{iks(m+1)\cos(\theta_{\mathrm{I}})}}{K(e^{ iks\cos(\theta_{\mathrm{I}})})},\quad m\geq 0 \tag{2.43}\] where \(s=s_{1}=s_{2}\) and \(K=K_{1}=K_{2}\). This means that we are able compare both the WH and LSC solutions with the exact solution as well as each other. With this in mind, FIG. 6 is a collection of plots of the absolute value difference between the two sets of scattering coefficients where the truncation is chosen to be 1000. 
In these plots, the index \(n\) of the coefficients is ordered such that \(A_{-1001},...A_{-1},A_{0},...A_{1000}\) corresponds to \(A_{1000}^{(2)},...A_{0}^{(2)},A_{0}^{(1)},...A_{1000}^{(1)}\) respectively. On the top row of FIG. 6, we look at the infinite array problem and have an incident wave with wavenumber \(k=5\pi\) but two different incident angles; \(\theta_{\mathrm{I}}=\frac{\pi}{4}\) and \(\theta_{\mathrm{I}}=\frac{\pi}{12}\) for the left and right side respectively. Note that equivalent plots of the relative error share the same shape and are on similar scale as FIG. 6. On the bottom row, we look at the point scatterer wedge given in Table 1 and use the same incident waves as in Figure 6 of (Nethercote et al., 2022a) (i.e. \(k=5\pi\) and \(\theta_{\mathrm{I}}=0\) for the left side and \(k=15\pi\) and \(\theta_{\mathrm{I}}=\frac{\pi}{2}\) for the right side). The top row of FIG. 6 is especially interesting because the comparison with the exact solution allows us to decompose the error between the WH and LSC methods and assess their strengths and weaknesses. One conclusion of this decomposition can be seen in the middle of the plots (\(n\approx 0\)) where the WH seems to be at its weakest and the LSC method at its strongest. This is due to how each method sees the problem as the WH method considers each array separately and adds the interaction between arrays when solved, whereas the LSC method considers each scatterer separately and solves for the individual interactions. The other conclusion of this decomposition can be seen at the truncated ends of the arrays (\(n\approx\pm 1000\)) where the WH is at its strongest and the LSC method at its weakest. This is again due to how the methods see the problem as separate semi-infinite arrays or individual scatterers. It is likely that we will come to similar conclusions for most (if not all) of the potential configurations of this setup. It is also likely that the weak region for the WH method will improve when the semi-infinite arrays are well separated. It is important to note that the overall error does converge to zero as the truncation increases and shape of these error graphs scales with the truncation as well. ## 3 Conclusions To summarise, we have generalised the WH method used for diffraction by a wedge of point scatterers (Nethercote et al., 2022), by considering any number of semi-infinite arrays with an arbitrary set of parameters. The method remains essentially unchanged with the additional benefit of having removed the need for an iterative scheme. The MATLAB scripts we created from this solution are quite versatile and we are able to consider a very wide range of cases, some of which are illustrated in FIG. 5. We have also compared the WH method with some numerical approaches such as the LSC method, and FIG. 6 has shown that the two methods have some good agreement and highlighted the strengths and weaknesses between them. We found that the LSC method is better at modelling the interactions between the scatterers at the ends of the arrays and the WH method is better at modelling the infiniteness of the arrays. Knowing this, one could propose to use a hybrid of the two methods to get accurate coefficients for all scatterers (in other words, use LSC to get the coefficients \(A_{m}^{(j)}\) for small \(m\) and WH for large \(m\)). In theory, this hybrid would have the strengths of both methods but neither of the weaknesses. 
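A minimal sketch of how such a stitching could be assembled is given below (Python is used purely for illustration); it assumes the coefficients of one array have already been computed by both methods and truncated to the same length, with the switch index left as a free parameter.

```python
import numpy as np

def hybrid_coefficients(A_lsc, A_wh, m_switch):
    """Hypothetical WH/LSC hybrid for one array, as proposed above: LSC coefficients
    for m < m_switch (where LSC resolves the local interactions best) and WH
    coefficients for m >= m_switch (where WH captures the semi-infinite behaviour best)."""
    A = np.asarray(A_wh, dtype=complex).copy()
    A[:m_switch] = np.asarray(A_lsc, dtype=complex)[:m_switch]
    return A
```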
While finding the optimal \(m\) where we should switch methods would be simple for infinite array cases, more general configurations will be more difficult. This is because we will not have an exact solution to decompose the Figure 6: Absolute value of the difference between the different methods used to produce the scattering coefficients where the truncation is at 1000. The index \(n\) of the coefficients is ordered such that \(A_{-1001},...A_{-1},A_{0},...A_{1000}\) corresponds to \(A_{1000}^{(2)},...A_{0}^{(2)},A_{0}^{(1)},...A_{1000}^{(1)}\) respectively. The top row considers an infinite array case with the incident wave having wavenumber \(k=5\pi\) and incident angle \(\theta_{\rm I}=\frac{\pi}{4}\) (left) or \(\theta_{\rm I}=\frac{\pi}{12}\) (right). The bottom row consider the point scatterer wedge case given by Table 1 with the incident wave having wavenumber \(k=5\pi\) and incident angle \(\theta_{\rm I}=0\) (left) or \(k=15\pi\) and \(\theta_{\rm I}=\frac{\pi}{2}\) (right). error quantity \(|\mathrm{WHT}-\mathrm{LSC}|\) and this optimal \(m\) will not be unique. It is also possible to reformulate the entries of \(\bar{\mathcal{M}}^{(j,\ell)}\) (2.41) by rewriting the Hankel function in its integral form, evaluating the sum and then approximating the result using the method of steepest descent. This would lead to, \[\bar{\mathcal{M}}^{(j,\ell)}_{nq}\approx\sigma^{(j,\ell)}(n,q) \sqrt{\frac{2}{\pi k\Lambda^{(j,\ell)}(n,q)}}e^{ik\Lambda^{(j,\ell)}(n,q)- \frac{i\pi}{4}}, \tag{3.1}\] where \(\sigma^{(j,\ell)}(n,q)\) is a function of \(K_{j}^{+}\) which can change depending on the positioning of the scatterers at \(\mathbf{R}^{(j)}_{n}\) and \(\mathbf{R}^{(\ell)}_{q}\). The idea here is that an efficient approximation could improve the computation time for large truncations by reducing the computational order with respect to the truncation without sacrificing too much accuracy. Although we have not explicitly discussed resonance in the current article (see (Nethercote et al., 2022b) for an overview), it is nonetheless of special interest. By exclusively using the methods discussed in this article, we are capable of numerically evaluating cases where inward resonance is occurring, \((ks_{j}/2\pi)(1+\cos(\alpha_{j}-\theta_{\mathrm{I}}))\in\mathbb{Z}\). However, we cannot numerically evaluate outward resonance (and by extension double resonance) cases, \((ks_{j}/2\pi)(1-\cos(\alpha_{j}-\theta_{\mathrm{I}}))\in\mathbb{Z}\), because the associated scattering coefficients will tend to zero but still lead to a non-trivial wave field. Finding a general procedure which can find and extract outward resonant waves is a topic for future work. Another interesting avenue to pursue is to find a way to use the solution for the scattering coefficients to identify special features. Examples of this could include; the parameters of Bloch waves in lattices (see the multiple infinite array case given by FIG. 5 (f)), trapped modes in waveguides or the constructive/destructive interference caused by waveguides or the acoustic Faraday cage. ## Acknowledgements This research was supported by EPSRC grant EP/W018381/1. A.V.K. is supported by a Royal Society Dorothy Hodgkin Research Fellowship and a Dame Kathleen Ollerenshaw Fellowship. M.A.N. was supported by a David Crighton fellowship at Cambridge University and thanks Nigel Peake for many insightful discussions which greatly refined this article. 
The authors would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme _Mathematical theory and applications of multiple wave scattering_ when work on this paper was undertaken. This programme was supported by EPSRC grant EP/R014604/1. The authors also give thanks to Stuart Hawkins and David Hewett for providing the source code for the TMAT and LSC methods respectively.
2308.16783
Neutron Star vs Quark Star in the Multimessenger Era
Neutron stars (NSs) which could contain exotic degrees of freedom in the core and the self-bound quark stars (QSs) made purely of absolutely stable deconfined quark matter are still two main candidates for the compact objects observed in pulsars and gravitational wave (GW) events in binary star mergers. We perform a Bayesian model-agnostic inference of the properties of NSs and QSs by combining multi-messenger data of GW170817, GW190425, PSR J0030+0451, PSR J0740+6620, PSR J1614-2230, PSR J0348+0432 as well as ab initio calculations from perturbative quantum chromodynamics and chiral effective field theory. We find the NS scenario is strongly favored against the QS scenario with a Bayes factor of NS over QS $\mathcal{B}^\text{NS}_\text{QS} = 11.5$. In addition, the peak of the squared sound velocity $c_s^2 \sim 0.5c^2$ around $3.5$ times nuclear saturation density $n_0$ observed in the NS case disappears in the QS case which suggests that the $c_s^2$ first increases and then saturates at $c_s^2 \sim 0.5c^2$ above $\sim 4n_0$. The sound velocity and trace anomaly are found to approach the conformal limit in the core of heavy NSs with mass $M \gtrsim 2M_{\odot}$, but not in the core of QSs.
Zheng Cao, Lie-Wen Chen
2023-08-31T15:00:49Z
http://arxiv.org/abs/2308.16783v1
# Neutron Star vs Quark Star in the Multimessenger Era ###### Abstract Neutron stars (NSs) which could contain exotic degrees of freedom in the core and the self-bound quark stars (QSs) made purely of absolutely stable deconfined quark matter are still two main candidates for the compact objects observed in pulsars and gravitational wave (GW) events in binary star mergers. We perform a Bayesian model-agnostic inference of the properties of NSs and QSs by combining multi-messenger data of GW170817, GW190425, PSR J0030+0451, PSR J0740+6620, PSR J1614-2230, PSR J0348+0432 as well as _ab initio_ calculations from perturbative quantum chromodynamics and chiral effective field theory. We find the NS scenario is strongly favored against the QS scenario with a Bayes factor of NS over QS \(\mathcal{B}_{\rm QS}^{\rm NS}=11.5\). In addition, the peak of the squared sound velocity \(c_{s}^{2}\sim 0.5c^{2}\) around 3.5 times nuclear saturation density \(n_{0}\) observed in the NS case disappears in the QS case which suggests that the \(c_{s}^{2}\) first increases and then saturates at \(c_{s}^{2}\sim 0.5c^{2}\) above \(\sim 4n_{0}\). The sound velocity and trace anomaly are found to approach the conformal limit in the core of heavy NSs with mass \(M\gtrsim 2M_{\odot}\), but not in the core of QSs. _Introduction.--_ Understanding the nature of compact stars (CSs) observed in pulsars and gravitational wave (GW) events in binary star mergers is one of fundamental questions in contemporary nuclear physics, astrophysics and cosmology. The baryon number density in the core of CSs can reach several times nuclear saturation density (\(n_{0}=0.16\;{\rm fm}^{-3}\)), making the CSs be ideal laboratories to study the properties of dense nuclear matter and QCD phase diagram at extreme high densities and low temperatures [1; 2; 3; 4; 5; 6; 7; 8; 9], which is unaccessible in terrestrial labs. Theoretically, it is still a big challenge to determine the properties of dense nuclear matter at several times of \(n_{0}\) from _ab initio_ QCD calculations due to the complicated nonperturbative feature of QCD [10], and thus the composition inside the CSs is largely unknown. As discussed in Ref. [2], the CSs could be neutron stars (NSs) for which besides the conventional neutrons and protons, some exotic degrees of freedom such as hyperons, meson condensates and even quark matter may appear in the core. A popular alternative for CSs is the self-bound quark stars (QSs) made purely of absolutely stable deconfined quark matter (QM) composed of \(u\), \(d\) and \(s\)[11; 12; 13; 14] or \(u\) and \(d\)[15; 16; 17] quarks with some leptons. The CSs could be even self-bound strangeon stars in a solid state comprised of strangeons (quark-clusters with three-light-flavor symmetry) [18; 19]. Thanks to the fast development in astrophysical observation facilities, significant progress has been made in the last decades for the measurement of CSs. For example, the mass of several heavy pulsars with mass \(M\sim 2M_{\odot}\) was measured precisely by Shapiro delay [20]. The mass and radius of PSR J0030+0451 with \(M\sim 1.4M_{\odot}\) and PSR J0740+6620 with \(M\sim 2M_{\odot}\) were determined simultaneously by NICER via pulse-profile modeling [21; 22; 23; 24]. Especially, in recent years, two gravitational wave (GW) events GW170817 [25; 26] and GW190425 [27] from binary star mergers were reported by the LIGO Scientific and Virgo Collaborations (LVC), which inaugurates a new era of multimessenger astronomy. 
Theoretically, _ab initio_ calculations of dense matter have also made significant progress in recent years. At the low density limit, chiral effective field theory (ChEFT) [28; 29], which is the low-energy realization of QCD, provides a satisfactory constraint on the equation of state (EOS) of the NS matter up to densities \(n\approx 1\sim 2n_{0}\) with controllable uncertainties [30; 31]. At asymptotically high densities with baryon chemical potential of multi GeV, perturbative QCD (pQCD) computations become feasible [32] and provide potential constraints on the EOS of dense matter at intermediate densities inside CSs by combining the results at low densities [33; 34; 35; 36]. Based on these state-of-the-art _ab initio_ calculations together with the multi-messenger data, it is extremely interesting to perform a comparative study on NSs and QSs, which may provide valuable information on the nature of CSs and the properties of dense matter. We perform here a Bayesian model-agnostic inference of the properties of NSs and QSs by combining the data on the mass of heavy pulsars with \(M\sim 2M_{\odot}\) determined by Shapiro delay, the mass and radius of PSR J0030+0451 and PSR J0740+6620 from NICER, the tidal deformabilities of CSs from GW170817 and GW190425 together with _ab initio_ calculations from pQCD and ChEFT. We find the current multi-messenger data and constraints from pQCD and ChEFT strongly favor the NS scenario against the QS scenario. In addition, our analyses on the sound velocity and trace anomaly suggest that the conformal limit is violated inside QSs, but reached in the core of heavy NSs with \(M\gtrsim 2M_{\odot}\). _Model-agnostic EOS--_ To construct model-agnostic EOSs for NS matter, we adopt the EOS derived from ChEFT at low densities and extrapolate it to high den sity to match the pQCD constraints by speed of sound extension approach [37]. In particular, following Ref. [34], in the density region of \(n\in[0.58n_{0},1.1n_{0}]\), we choose "Soft", "Intermediate" and "Stiff" EOSs of Ref. [30], and match them to BPS EOS [38] below \(0.58n_{0}\). The speed of sound extension [37] is then utilized to obtain EOS of NS matter from \(1.1n_{0}\) to \(12n_{0}\), in which we uniformly sample a sequence of stitching points \(\{(n_{i},c_{s,i}^{2})\}_{i=1}^{N}\) (\(n_{j}>n_{k}\) for \(j>k\)). These matching points are then connected using piecewise-linear function to obtain \(c_{\text{s}}^{2}(n)\) as \[c_{\text{s}}^{2}(n)=\frac{(n_{i+1}-n)\,c_{\text{s},i}^{2}+(n-n_{i})\,c_{\text{s },i+1}^{2}}{n_{i+1}-n_{i}}, \tag{1}\] and the EOS of NS matter from \(1.1n_{0}\) to \(12n_{0}\) can then be obtained using the fundamental thermodynamic relation (see, e.g., Ref. [36]) \[\mu(n) =\mu_{1}\exp\left[\int_{n_{1}}^{n}dn^{\prime}\frac{c_{s}^{2}\left( n^{\prime}\right)}{n^{\prime}}\right], \tag{2}\] \[\varepsilon(n) =\varepsilon_{1}+\int_{n_{1}}^{n}dn^{\prime}\mu\left(n^{\prime} \right),\] (3) \[p(n) =-\varepsilon(n)+\mu(n)n. \tag{4}\] We set \(n_{1}=1.1n_{0}\) and \(n_{N}=12n_{0}\), and \(c_{s,1}^{2}\) (also the corresponding chemical potential \(\mu_{1}\) and energy density \(\varepsilon_{1}\)) is fixed at the corresponding value from ChEFT, \(n_{i}\) (\(i=2,\,\cdots,\,N-1\)) and \(c_{s,i}^{2}\) (\(i=2,\,\cdots,\,N\)) is uniformly sampled in \([1.1n_{0},12n_{0}]\) and \([0,\,1]\) respectively. In the following we use \(N=6\) and we note that the results just change slightly when \(N\) vary from \(5\) to \(10\). 
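For concreteness, a minimal Python sketch of this construction (Eqs. (1)-(4)) is given below; the knot values as well as \(\mu_{1}\) and \(\varepsilon_{1}\) are illustrative placeholders rather than the ChEFT boundary values actually used, and the integrals are approximated by cumulative trapezoidal rules.

```python
import numpy as np

def eos_from_sound_speed(n_knots, cs2_knots, mu1, eps1, n_grid):
    """Reconstruct (mu, eps, p) on n_grid from a piecewise-linear c_s^2(n), Eqs. (1)-(4).
    Units: n in fm^-3, mu in MeV, eps and p in MeV fm^-3, c_s^2 in units of c^2."""
    cs2 = np.interp(n_grid, n_knots, cs2_knots)                      # Eq. (1)
    dn = np.diff(n_grid)
    f = cs2 / n_grid
    I = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dn)))
    mu = mu1 * np.exp(I)                                             # Eq. (2)
    J = np.concatenate(([0.0], np.cumsum(0.5 * (mu[1:] + mu[:-1]) * dn)))
    eps = eps1 + J                                                   # Eq. (3)
    p = -eps + mu * n_grid                                           # Eq. (4)
    return mu, eps, p

# Illustrative knots (N = 6) between 1.1 n0 and 12 n0; mu1 and eps1 are placeholders.
n0 = 0.16
n_knots = np.array([1.1, 3.0, 5.0, 7.0, 9.0, 12.0]) * n0
cs2_knots = np.array([0.10, 0.45, 0.50, 0.40, 0.35, 0.40])
n_grid = np.linspace(n_knots[0], n_knots[-1], 1000)
mu, eps, p = eos_from_sound_speed(n_knots, cs2_knots, mu1=980.0, eps1=170.0, n_grid=n_grid)
```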
The nested sampler _pymultinest_[39] (which is installed in _bilby_[40]) is then used to sample over parameter \(\{(n_{i},c_{s,i}^{2})\}_{i=1}^{N}\) and generate posterior distribution. For QS matter (also for strangeon matter), its EOS is unknown even at low densities, and so we only consider its basic self-bound property with minimum assumption that the pressure becomes to zero at finite baryon number density \(n_{1}\) corresponding to the QS surface. Assuming \(n_{1}>n_{0}\) and \(n_{N}=12n_{0}\), we uniformly sample \(n_{i}\) (\(i=1\), \(\cdots,\,N-1\)) (\(n_{j}>n_{k}\) for \(j>k\)) and \(c_{s,i}^{2}\) (\(i=1,\,\cdots,\,N\)) in \([n_{0},12n_{0}]\) and \([0,\,1]\), respectively. At the same time, we also uniformly sample the chemical potential \(\mu_{1}\) at \(n_{1}\) in \([\mu_{1,\text{min}},930]\) MeV with \(\mu_{1,\text{min}}=500\) MeV, considering the fact that \(\mu_{1}\) should be less than the binding energy per baryon of the observed stable nuclei (i.e., \(930\) MeV) to satisfy the absolutely stable condition [11; 12; 13; 14]. The energy density at \(n_{1}\) can then be obtained as \(\varepsilon_{1}=\mu_{1}n_{1}\). The full EOS of QS matter can be then obtained similarly as in the case of NS. _Bayesian analysis_-- We use Bayesian hierarchical model to combine constraints from multiple observations with uncertainties and then make parameter estimate. According to Bayes' theorem, as discussed in Ref [41; 42], for the given data set \(\vec{d}\) and hypothesis \(\mathcal{H}\), the posterior distribution of EOS parameters \(\theta\) can be written as \[p(\theta|\vec{d},\mathcal{H})=\frac{\prod_{i}\mathcal{L}(d_{i}|\theta, \mathcal{H})\pi(\theta|\mathcal{H})}{\mathcal{Z}_{\mathcal{H}}(\vec{d})}, \tag{5}\] where \(i\) runs over individual constraints and each constraint is independent of each other, \(\pi(\theta|\mathcal{H})\) is the hyperprior distribution for \(\theta\) and here is chosen as uniform distribution, \(\mathcal{L}(d_{i}|\theta,\mathcal{H})\) is the likelihood of the EOS parameters under the assumption of \(\mathcal{H}\) for data \(d_{i}\), and the \(\mathcal{Z}_{\mathcal{H}}(\vec{d})\equiv\int\prod_{i}\mathcal{L}(d_{i}|\theta, \mathcal{H})\pi(\theta|\mathcal{H})\mathrm{d}\theta\) is a normalization factor called _evidence_ which quantifies how much the hypothesis is preferred by the data. Based on the data set \(\vec{d}\), the Bayes factor \(\mathcal{B}_{\text{QS}}^{\text{NS}}\) for CSs as NSs against QSs can be obtained as [43] \[\mathcal{B}_{\text{QS}}^{\text{NS}}=\mathcal{Z}_{\text{NS}}(\vec{d})/\mathcal{ Z}_{\text{QS}}(\vec{d}). \tag{6}\] In the present Bayesian analyses, we combine the Shapiro delay mass measurements of heavy CSs, the NICER mass-radius of PSR J0030+0451 and PSR J0740+6620 analyzed by Miller _et al._[21; 22] (Similar to Ref. [44]), the tidal information of GW170817 [25; 26] and GW190425 [27] together with the pQCD constraints at ultra high densities [36; 45; 32; 46] (and ChEFT results for NSs) as our default data set \(\vec{d}_{\text{def}}\). For the mass measurements, the prior distribution of CS mass for a given EOS parameter \(\theta\) can be written as [47; 48]\(\pi(m|\theta)=\frac{1_{\text{i}\text{d}_{\text{low}},M_{\text{TOV}}(\theta)}}{M_{ \text{TOV}}(\theta)-M_{\text{low}}}\), where \(M_{\text{low}}=0.1M_{\odot}\) is the assumed lower bound of the mass of CSs and \(M_{\text{TOV}}(\theta)\) is the maximum mass of static CSs determined by the EOS. 
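As a simple illustration (a sketch only, assuming SciPy), this flat prior can be marginalised analytically against a Gaussian approximation of a Shapiro-delay mass measurement, which is one way the mass-measurement likelihoods described next can be evaluated; the Gaussian parameters below are those quoted for PSR J1614-2230, while \(M_{\rm TOV}=2.2M_{\odot}\) is an arbitrary example value.

```python
import numpy as np
from scipy.stats import norm

M_LOW = 0.1  # assumed lower mass bound in solar masses

def mass_prior(m, M_tov):
    """Uniform prior pi(m|theta) on [M_LOW, M_TOV(theta)], zero outside."""
    return np.where((m >= M_LOW) & (m <= M_tov), 1.0 / (M_tov - M_LOW), 0.0)

def gaussian_mass_likelihood(mu_obs, sigma_obs, M_tov):
    """int dm N(m; mu_obs, sigma_obs^2) * pi(m|theta), evaluated analytically for the flat prior."""
    return (norm.cdf(M_tov, mu_obs, sigma_obs) - norm.cdf(M_LOW, mu_obs, sigma_obs)) / (M_tov - M_LOW)

# e.g. PSR J1614-2230 approximated by N(1.908, 0.016^2); M_TOV = 2.2 Msun is illustrative.
print(gaussian_mass_likelihood(1.908, 0.016, 2.2))
```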
For a given mass measurement data \(d_{M}\), the likelihood is \(\mathcal{L}(d_{M}|\theta)=\int\mathrm{d}m\mathcal{L}(d_{M}|m)\pi(m|\theta)\). We consider here the precise Shapiro delay mass measurement of PSR J1614-2230 [49; 50; 51], PSR J0348+0432 [52], PSR J0740+6620 [20] and use Gaussian function \(\mathcal{N}(1.908,0.016^{2})\), \(\mathcal{N}(2.01,0.04^{2})\), \(\mathcal{N}(2.08,0.07^{2})\) to approximate these mass measurements \(\mathcal{L}(d_{M}|m)\), respectively. Similarly, for the NICER mass-radius measurement data \(d_{\text{R}}\) of PSR J0030+0451 or PSR J0740+6620, the likelihood can be written as \(\mathcal{L}(d_{R}|\theta)=\int\mathrm{d}m\mathcal{L}[d_{R}|m,R(m,\theta)]\pi(m|\theta)\) and we use Kernel Density Estimation (KDE) to approximate the posterior mass-radius distribution [53; 54]. To avoid double counting, for PSR J0740+6620 [20], we do not use its Shapiro delay mass data if we include its NICER mass-radius data. For PSR J0437-4715, we use 2D Gaussian distribution \(\mathcal{N}([13.6,1.44]^{\top},\text{diag}(0.85^{2},0.07^{2}))\) to approximate its mass-radius joint distribution. For the measurements of GW events, the likelihood can be written as \(\mathcal{L}(d_{\text{GW}}|\theta)=\int\mathrm{d}\omega\mathcal{L}(d_{\text{GW}}| \omega)\pi(\omega|\theta)\)[55], where \(\omega=\{\mathcal{M}_{c},q,\Lambda_{1},\Lambda_{2}\}\) and \(\mathcal{L}(d_{\text{GW}}|\omega)\) is the nuisance-marginalized likelihood [41] which has marginalized over extrinsic parameters of the source. With the convention \(m_{1}\geq m_{2}\), the tidal parameter of each compact object could be uniquely determined by the chirp mass \(\mathcal{M}_{c}\), mass ratio \(q\), and EOS parameter \(\theta\), i.e. \(\Lambda_{i}(\mathcal{M}_{c},q,\theta)\). Based on thermodynamic stability and causality, the results from pQCD can be utilized to constrain the EOS at intermediate density region by fully taking advantage of thermodynamic potentials [45]. At a high chemcial potential \(\mu_{H}=2.6\) GeV (i.e. \(n\approx 40n_{0}\)), the uncertainties of thermodynamics quantity could be parameterized by a dimensionless parameter \(X\) and could be expressed by a set \(\vec{\beta}_{\rm pQCD}(X)=\left\{p_{\rm pQCD}\left(\mu_{H},X\right),n_{\rm pQCD }\left(\mu_{H},X\right),\mu_{H}\right\}\). By integrating the uncertainties, one can obtain the corresponding likelihood \(\mathcal{L}(\rm pQCD|\theta)=\int d\vec{\beta}_{H}P(\vec{\beta}_{H})\mathbf{1} _{[\Delta p_{\rm min},\Delta p_{\rm max}]}(\Delta p)\)[36], with \(\Delta p=p_{\rm pQCD}-p_{L}\) and \(p_{L}\) is the pressure of the last point (i.e., \(n_{N}=12n_{0}\) here) of interpolated EOS. It should be noted that the \(p_{L}\) value depends on the EOS at the low density \(n_{1}\) as well as the sequence of stitching points \(\{(n_{i},c_{s,i}^{2})\}_{i=1}^{N}\) in the speed of sound extension. _Results and discussions--_ Using the default data set \(\vec{d}_{\rm def}\), we perform Bayesian model-agnostic inference of the properties of NSs and QSs. Firstly, for \(M_{\rm TOV}\), our present analyses indicate that it is \(M_{\rm TOV,NS}=2.17^{+0.26}_{-0.15}M_{\odot}\) for NSs and \(M_{\rm TOV,QS}=2.49^{+0.47}_{-0.35}M_{\odot}\) for QSs in 90% credible interval (CI), indicating the QSs would have a significantly larger \(M_{\rm TOV}\) than the NSs. 
The \(M_{\rm TOV,NS}=2.17^{+0.26}_{-0.15}M_{\odot}\) is in nice agreement with the value of \(2.18^{+0.27}_{-0.13}M_{\odot}\) estimated recently by taking advantage of the various structures sampling by a single-layer feed-forward neural network model embedded in the Bayesian nonparametric inference [56], implying our present result is independent of the detailed realization of the model-agnostic EOSs. Secondly, for the radius \(R_{1.4}\) of CSs with canonical mass of \(1.4M_{\odot}\), its value is estimated to be \(R_{1.4,\;\rm NS}=12.44^{+0.74}_{-0.71}\) km for NSs and \(R_{1.4,\;\rm QS}=11.41^{+0.64}_{-0.61}\) km for QSs in 90% CI, suggesting NSs have a larger \(R_{1.4}\) than QSs. \(R_{1.4,\;\rm NS}=12.44^{+0.74}_{-0.71}\) km well agrees with \(R_{1.4}=12.42^{+0.52}_{-0.99}\) km (95% CI) reported in Ref. [57] where an ensemble of EOSs is generated in advance and weight each EOS according to the likelihood. Our result on \(R_{1.4,\;\rm NS}\) is also consistent with \(R_{1.4}=11.98^{+0.35}_{-0.40}\) (90% CI) [58] obtained recently by combining all the EOS-sensitive observations, including data of the kilonovae and the GRB afterglow. In addition, though different methods are adopted to construct EOS, our result on \(R_{1.4,\;\rm NS}\) is in agreement with previous work [44; 59; 60; 61; 62; 63; 64; 65; 66] within the uncertainty. For the case of QSs, our result also agrees with \(R_{1.4}=11.50^{+0.52}_{-0.55}\) km obtained by Miao _et al._[67] within the MIT bag model. Thirdly, the tidal deformability \(\Lambda_{1.4}\) of a \(1.4M_{\odot}\) CS is estimated to be \(\Lambda_{1.4,\rm NS}=504^{+223}_{-174}\) for NSs and \(\Lambda_{1.4,\rm QS}=642^{+260}_{-204}\) for QSs in 90% CI, and thus QSs would have a significantly larger \(\Lambda_{1.4}\) than NSs. The obtained \(\Lambda_{1.4,\rm NS}\) nicely agrees with the result \(507^{+234}_{-242}\) in Ref. [60] where the Gaussian processes are applied to construct the model-independent EOSs. Our result on \(\Lambda_{1.4,\rm QS}\) is also consistent with \(\Lambda_{1.4}=650^{+230}_{-190}\) obtained in Ref. [67] within the MIT bag model. The above discussions indicate that the NS and QS scenarios lead to different predictions of \(M_{\rm TOV}\), \(R_{1.4}\) and \(\Lambda_{1.4}\) for CSs. To assess the preference of the NS and QS hypotheses in the description of the current multi-messenger data under the constraints from pQCD and ChEFT, we evaluate the Bayes factor of NS over QS and find \(\mathcal{B}^{\rm NS}_{\rm QS}=11.5\). This value of \(\mathcal{B}^{\rm NS}_{\rm QS}\) means that the NS hypothesis for CSs is strongly preferred against the QS hypothesis according to the interpretation of Bayes factor, i.e., the \(\mathcal{B}^{H_{1}}_{H_{0}}\in[10,30]\) indicates strong evidence for hypothesis \(H_{1}\)[68]. To see the main reason leading to the large value of \(\mathcal{B}^{\rm NS}_{\rm QS}=11.5\), we calculate the \(\mathcal{B}^{\rm NS}_{\rm QS}\) by removing individually the data from the default data set \(\vec{d}_{\rm def}\), and we find the value of \(\mathcal{B}^{\rm NS}_{\rm QS}\) changes to 16.1, 5.1, 10.6, 12.4 and 1.3 by removing the data/constraints of pQCD, GW170817, GW190425, PSR J0740+6620 and PSR J0030+0451, respectively. The large value of \(\mathcal{B}^{\rm NS}_{\rm QS}=11.5\) is thus mainly due to the constraint from PSR J0030+0451, and next from GW170817. To see more clearly the influence of PSR J0030+0451, we show in Fig. 1 the 90% CI of radius at different masses for NSs and QSs. 
It is seen that the NS hypothesis can indeed describe the NICER measurements of PSR J0030+0451 much better than the QS hypothesis as the latter just marginally overlaps with the NICER mass-radius of PSR J0030+0451 where the probability density is relatively low. Therefore, our results suggest that the multi-messenger data prefer CSs as NSs over QSs, implying the conjecture of QM (as well as the strangeon matter) as the true ground state of QCD mat Figure 1: Posterior distribution of radius of NSs and QSs at different masses (90% CI). The NICER mass-radius posterior distributions (90% CI) of PSR J0030+0451 [21] and PSR J0740+6620 [22] are also shown for comparison. ter [11; 12; 13; 14] is disfavored. This may provide a natural explanation on the fact that there is so far no definite evidence for the existence of strangelet-like exotic objects after decades experimental and observational searching (See, e.g., Refs. [69; 70]). The sound velocity \(c_{s}\) is an important quantity to feature the EOS of density matter. Shown in Fig. 2(a) is the 68% CI of \(c_{s}^{2}\) as a function of baryon density for NS matter and QS matter. One sees the squared speed of sound \(c_{\rm s,NS}^{2}\) for NS matter first increases with baryon density and reaches a peak value of \(c_{\rm s,NS,max}^{2}\sim 0.5c^{2}\) (\(c\) is the speed of light in vacuum) around \(n\approx 3.5n_{0}\) (i.e., \(n_{\rm pk,NS}=0.55^{+0.19}_{-0.14}\) fm\({}^{-3}\)), then decreases and approaches the conformal limit \(c^{2}/3\) above \(n\approx 4.5n_{0}\). This peak structure may be related to the quarkyonic matter [71] or the high density behavior of the symmetry energy [72]. It is interesting to mention that according to percolation theory, the critical density that nucleons begin overlap with each other is estimated to be \(0.57^{+0.12}_{-0.09}\) fm\({}^{-3}\)[73], very close to the \(n_{\rm pk,NS}\). We have checked that the peak structure will disappear if the pQCD constraint is removed from \(\vec{d}_{\rm def}\). Furthermore, if the constraints of heavy CSs with \(M\approx 2M_{\odot}\) are excluded from \(\vec{d}_{\rm def}\), the \(c_{\rm s,NS}^{2}\) will increase monotonously until \(12n_{0}\). Therefore, the constraints from pQCD and heavy CSs with \(M\approx 2M_{\odot}\) are necessary conditions for the sound velocity peak structure in NS matter. In contrast to \(c_{\rm s,NS}^{2}\), it is interesting to see from Fig. 2(a) that the squared speed of sound \(c_{\rm s,QS}^{2}\) for QS matter first increases with baryon density and then essentially saturates at about \(0.5c^{2}\) above \(n\sim 4n_{0}\). Therefore, the peak structure is not present in \(c_{\rm s,QS}^{2}\) although the constraints of pQCD and heavy CSs with \(M\approx 2M_{\odot}\) are both considered. This feature clearly shows that the pQCD limits on the EOS and \(c_{\rm s}^{2}\) of dense matter at intermediate densities inside CSs significantly depends on the input EOS at low densities. Indeed, the low density EOS is very different for NS and QS matters, as illustrated in Fig. 2(b) where the 68% CI of pressure as a function of baryon density is dislayed for NS and QS matters. One sees from Fig. 2(b) that the pressure of NS matter is well constrained by BPS EOS and ChEFT below \(1.1n_{0}\) but the pressure of QS matter around \(n_{0}\) rapidly drops to zero due to absolutely stable condition although with large uncertainties. 
Quantitatively, the energy per baryon \(\mu_{1}\) and the number density \(n_{1}\) at zero pressure point of QS matter are estimated to be \([654;812]\) MeV and \([0.24,0.35]\) fm\({}^{-3}\) (68% CI), respectively. Very recently, the trace anomaly normalized by the energy density, i.e., \(\Delta=1/3-p/\epsilon\), is proposed as a new measure of conformality [74]. Shown in Fig. 2(c) is the 68% CI of the (normalized) trace anomaly \(\Delta\) as a function of baryon density for NS and QS matters. One sees the \(\Delta\) for NS matter first decreases with baryon density and then essentially approaches the conformal limit \(\Delta=0\) above \(n\approx 4.5n_{0}\). On the other hand, for QS matter, the \(\Delta\) decreases monotonously and becomes negative above \(n\approx 5n_{0}\). As pointed out in Ref. [74], the sound velocity can be decomposed into the derivative and the nonderivative terms in terms of \(\Delta\), i.e., \(c_{\rm s}^{2}/c^{2}=1/3-\Delta-\epsilon d\Delta/d\epsilon\), and the sound velocity peak observed in NS matter can be attributed to the derivative term from \(\Delta\). In addition, our results indicate while the NS matter seems to obey the conjecture [74] that the matter part of the trace anomaly is positive definite, the QS matter violates it. Furthermore, we note that the central density in \(2M_{\odot}\) and maximum mass (\(M=2.17^{+0.26}_{-0.15}M_{\odot}\)) NS are estimated to be \(n_{c,{\rm NS},2M_{\odot}}=0.56^{+0.14}_{-0.10}\) fm\({}^{-3}\) and \(n_{c,{\rm NS},{\rm max}}=0.90^{+0.11}_{-0.12}\) fm\({}^{-3}\), respectively. The corresponding values for QS are \(n_{c,{\rm QS},2M_{\odot}}=0.54^{+0.11}_{-0.09}\) fm\({}^{-3}\) and \(n_{c,{\rm QS},{\rm max}}=0.97^{+0.16}_{-0.14}\) fm\({}^{-3}\), respectively. These results imply that the conformal symmetry may be restored in the core of heavy NSs with \(M\gtrsim 2M_{\odot}\), consistent with the conclusion obtained recently from other groups [73; 74; 75]. _Conclusion._-- Based on Bayesian model-agnostic inference of the properties of NSs and QSs by combining the multi-messenger data and _ab initio_ calculations Figure 2: Posterior distributions (68% CI) and the corresponding median values for squared speed of sound \(c_{s}^{2}\) (a), pressure \(p\) (b) and trace anomaly \(\Delta\) (c) of NS matter and QS matter as functions of baryon number density. from pQCD and ChEFT, we find that the NS scenario is strongly favored against the QS scenario for the CSs, and the NS and QS matters display rather different density behaviors of sound velocity and trace anomaly. Our finding sheds light on the nature of CSs observed in pulsars and gravitational wave events in binary star mergers and provides valuable information on the properties of dense matter inside CSs. _Acknowledgments._-- The authors would like to thank Tyler Gorda, Sophia Han, Aleksi Kurkela, Ang Li, Yifeng Sun, Renxin Xu, Zhen Zhang and Zhenyu Zhu for useful discussions. This work was supported by the National SKA Program of China No. 2020SKA0120300 and the National Natural Science Foundation of China under Grant Nos. 12235010 and 11625521. The computations in this paper were run on the Siyuan-1 cluster supported by the Center for High Performance Computing at Shanghai Jiao Tong University.
2303.17937
STFAR: Improving Object Detection Robustness at Test-Time by Self-Training with Feature Alignment Regularization
Domain adaptation helps generalizing object detection models to target domain data with distribution shift. It is often achieved by adapting with access to the whole target domain data. In a more realistic scenario, target distribution is often unpredictable until inference stage. This motivates us to explore adapting an object detection model at test-time, a.k.a. test-time adaptation (TTA). In this work, we approach test-time adaptive object detection (TTAOD) from two perspective. First, we adopt a self-training paradigm to generate pseudo labeled objects with an exponential moving average model. The pseudo labels are further used to supervise adapting source domain model. As self-training is prone to incorrect pseudo labels, we further incorporate aligning feature distributions at two output levels as regularizations to self-training. To validate the performance on TTAOD, we create benchmarks based on three standard object detection datasets and adapt generic TTA methods to object detection task. Extensive evaluations suggest our proposed method sets the state-of-the-art on test-time adaptive object detection task.
Yijin Chen, Xun Xu, Yongyi Su, Kui Jia
2023-03-31T10:04:44Z
http://arxiv.org/abs/2303.17937v1
STFAR: Improving Object Detection Robustness at Test-Time by Self-Training with Feature Alignment Regularization ###### Abstract Domain adaptation helps generalize object detection models to target domain data with distribution shift. It is often achieved by adapting with access to the whole target domain data. In a more realistic scenario, the target distribution is often unpredictable until the inference stage. This motivates us to explore adapting an object detection model at test-time, a.k.a. test-time adaptation (TTA). In this work, we approach test-time adaptive object detection (TTAOD) from two perspectives. First, we adopt a self-training paradigm to generate pseudo labeled objects with an exponential moving average model. The pseudo labels are further used to supervise adapting the source domain model. As self-training is prone to incorrect pseudo labels, we further incorporate aligning feature distributions at two output levels as regularizations to self-training. To validate the performance on TTAOD, we create benchmarks based on three standard object detection datasets and adapt generic TTA methods to the object detection task. Extensive evaluations suggest our proposed method sets the state-of-the-art on the test-time adaptive object detection task. ## 1 Introduction Object detection is a fundamental task in computer vision research and has enabled numerous applications including autonomous driving, robotics, etc. With the advent of deep neural networks, we have observed unprecedented performance of object detection on different types of natural images [10, 35, 2]. Despite the encouraging progress on developing more efficient and accurate object detection algorithms, the robustness of these algorithms is often overlooked. Recent studies have revealed that by injecting photorealistic corruptions into natural images, the accuracy of existing object detection algorithms suffers greatly [31]. To remedy the robustness of object detection models, unsupervised domain adaptation approaches are employed to learn domain invariant features to improve model generalization [9]. UDA assumes both source and target domain data samples are available when training a domain-generalizable model. This assumption, however, is only applicable to scenarios where source domain data is accessible and the target domain distribution is static. Unfortunately, in a more realistic scenario, source domain data may not be available for adaptation due to privacy issues [21]. Hence, simultaneously training invariant representations on both source and target domain data is prohibited. Alternative to the strong assumptions made in UDA, source-free domain adaptation (SFDA) [25] relaxes the access to source domain data for domain adaptation. Extension to object detection, namely source-free object detection (SFOD), has been attempted by self-training with pseudo labels [24] or style transfer [23]. Although SFDA advances further towards a more realistic domain adaptation setup, we argue that there are still realistic challenges remaining unresolved by SFDA.

Figure 1: We illustrate the difference between the adopted test-time adaptation protocol and existing UDA and SFOD protocols. UDA requires access to both source and target domain data for adaptation. SFOD requires access to all target domain testing data for adaptation. In contrast, TTA sequentially adapts to target domain testing data on-the-fly.

First, the target domain distribution, e.g. the types of corruptions, is often unpredictable before testing begins.
For example, it is unrealistic to assume the specific corruption that could happen subject to changing weather or lighting conditions on a new camera, until the testing samples are observed. SFDA will thus struggle to adapt to a testing distribution that is totally unknown before testing starts. Moreover, testing samples arrive in a sequential manner, and predictions on testing samples should be made instantly upon the arrival of a new testing sample [39]. Since SFDA requires access to all target domain samples for adaptation, it fails to enable simultaneous inference and adaptation on-the-fly. In response to the unpredictability of the target domain distribution and the demand for simultaneous inference and adaptation, test-time adaptation (TTA) [41, 44, 39] emerged as a solution to adapt model weights to the target distribution on-the-fly. TTA advocates a protocol in which adaptation is carried out sequentially at test-time and predictions are made instantly [39]; an illustration of the difference between UDA, SFOD and TTA object detection is presented in Fig. 1. It is often achieved by dynamically aligning source and target distributions [39], self-training with pseudo labels [4] or introducing self-supervised tasks [41, 30]. However, existing TTA approaches are almost exclusively developed for image classification tasks [41, 44, 30, 4, 39]. It remains unclear how to adapt TTA methods to object detection tasks. In this work, we approach test-time adaptive object detection (TTAOD) from two perspectives. First of all, as self-training (ST) has demonstrated great success in semi-supervised learning [38, 49] and domain adaptation [29] by exploiting unlabeled data, we propose to employ self-training for TTAOD by learning from the unlabeled testing samples. This is often achieved by first predicting pseudo labelled objects on the testing sample, which are then used for supervising the network training. However, as we empirically reveal in Fig. 4, the performance of applying ST alone may gradually degrade upon seeing more unlabeled data. This is probably caused by learning from an accumulation of incorrect pseudo labels, which is also referred to as confirmation bias [1]. Therefore, additional regularization is required to stabilize self-training for TTAOD. As an alternative to self-training, distribution alignment has demonstrated success in test-time adaptation [30, 39]. Hence, we introduce distribution alignment as a regularization to self-training for TTAOD. Specifically, we first propose to align the backbone feature distribution between source and target domains, which is referred to as **global feature alignment**. By doing so, the target domain backbone features will be better aligned with the source distribution, thus easing the difficulty of reusing the downstream RPN and predictor networks. In contrast to TTA approaches for classification, we further notice that a generic object detection predictor involves a classifier for predicting semantic labels and a regressor for predicting spatial location. Therefore, reusing the source domain classifier and regressor, a commonly adopted practice in TTA, requires eliminating covariate shift in the foreground features. For this purpose, we further propose to align distributions at the ROI feature map level, which is referred to as **foreground feature alignment**. To validate the effectiveness of the proposed method, we establish a benchmark for the test-time adaptive object detection task.
We created corrupted target domain data from three standard object detection datasets and adapted state-of-the-art TTA methods to the object detection task. Extensive experiments are carried out on these datasets. The contributions of this work are summarized as follows: * We aim to improve the robustness of object detection algorithms to corruptions that are not predictable before testing. The model must be adapted to the testing data distribution at test-time, which is referred to as test-time adaptive object detection (TTAOD). * Test-time adaptive object detection is enabled by self-training (ST) on the testing data. For more stable ST we introduce source and target domain feature alignment at both the global and foreground level as regularization. The combined model enables more stable and effective TTA performance. * We adapted existing TTA methods to object detection tasks and created a benchmark for the test-time adaptive object detection task. Evaluations on three object detection datasets demonstrated the effectiveness of the proposed method. ## 2 Related Works ### Domain Adaptive Object Detection In recent years, several Unsupervised Domain Adaptive Object Detection (UDAOD) studies [5, 36, 3, 14, 50, 48, 18, 22, 20, 45, 47, 19, 13] have been proposed to alleviate the impact of the domain gap in the object detection task. These methods can be roughly divided into the following categories. i) Aligning the distributions of the source and target domain in different layers and levels, e.g., DA-Faster [5], a pioneer in UDAOD, proposes a domain adaptive Faster R-CNN to reduce the domain discrepancy on both image and instance levels by adopting two different-level domain classifiers and employing the adversarial training strategy. SWDA [36] proposes to align local features on shallow layers and image-level features by Focal Loss [27] on deep layers, i.e. strong local and weak global alignments. Similar to SWDA, Dense-DA [47], MAF [14], HTCN [3] and SSA-DA [50] align features on multiple layers by adversarial training. ICR-CCR [48] leverages the categorical consistency between image-level and instance-level predictions to re-weight the instance-level alignment. ii) Training from noisy labels, a.k.a. self-training, e.g., NL [18] and WST-BSR [19]. iii) Sample generation strategies. In this line of work, DD-MRL [20] leverages image-to-image translation via GAN to generate various distinctive shifted domains from the source domain. AFAN [45] obtains the intermediate domain (fusing the source and target domains) by interpolation. UMT [47] and TDD [13] utilize both the source-like and target-like images to perform cross-domain distillation. To enhance the robustness of the cross-domain model, both UMT and TDD utilize the teacher-student learning scheme, in which UMT adopts Mean Teacher [42] and TDD adopts a dual-branch detection network. In this work, a momentum-updated Faster R-CNN is employed for more stability in test-time adaptation. Although excellent performance is reached, all UDAOD methods require access to the source domain data during the adaptation process. When the source data is not accessible due to privacy issues or storage overhead, more challenging settings emerge, namely source-free domain adaptation [21, 25, 23] and test-time adaptation [41, 44].
### Source-Free Object Detection Without access to the source data, Source-Free Domain Adaptation (SFDA) aims to explore how to rely only on unlabeled target data to adapt the source pre-trained model to the target domain. In the classification task, 3C-GAN [21] generates labeled target-like samples through a conditional GAN [32] for training. SHOT [25] generates pseudo labels for each target sample and performs a self-training process and information maximization to ensure class balance. Recently, a few SFDA methods have been used for alleviating the domain gap in the object detection task when source data is not accessible, a setting called Source-Free Object Detection (SFOD). SED [24] proposes a self-entropy descent policy to search for a confidence threshold for pseudo label generation. HCL [15] proposes historical contrastive instance discrimination to encourage consistency between the current representation and historical representations. LODS [23] enhances the style of the target image via a style enhancement module and reduces the style degree difference between the original image and the enhanced one. More recently, A\({}^{2}\)SFOD [6] was proposed to split target data into source-similar and source-dissimilar parts and align them by adversarial training. It has been demonstrated that SFOD methods perform well on cross-domain object detection even compared against UDAOD methods [23]. Nevertheless, SFOD requires adaptation to be performed in the target domain for multiple epochs. In a more realistic DA scenario where inference and adaptation must be implemented simultaneously, in other words, where real-time target domain data cannot be collected in bulk in advance, SFOD will no longer be effective.

Figure 2: The overview of the proposed method, STFAR. In the source domain, STFAR computes the feature distributions at both the global and foreground level in an offline manner. During test-time adaptation, self-training is applied by predicting pseudo labels with the teacher network. The student network is then supervised by the pseudo labels. Self-training is further regularized by distribution alignment for improved robustness.

### Test-Time Adaptation Collecting target domain samples in bulk in advance and transferring the source model in an offline manner restricts the application to adapting to a static, known target domain. To allow fast and online adaptation to an unlabeled target domain, Test-Time Adaptation (TTA) [41, 44] emerged. TTT-R [41], a pioneer in this line, adapts the model on-the-fly via an auxiliary self-supervised task. Tent [44] first proposes a fully test-time adaptation method without any auxiliary branch. Following TTT-R and Tent, many effective methods, e.g., aligning source and target distributions [30, 39], self-training with pseudo labels [4], test-time normalization [26], anti-forgetting test-time adaptation [33], prototype learning [17], more realistic test-time training/adaptation [39, 40, 34], etc., have been proposed. However, existing TTA methods are almost exclusively specific to the image classification task rather than the object detection task, which brings more challenges to adaptation on-the-fly. In this work, we study the more realistic and practical problem of adapting to the real-time target domain for object detection on-the-fly, and denote this setting as Test-Time Adaptive Object Detection (TTAOD). ## 3 Methodology In this section, we first provide an overview of the test-time adaptation protocol.
Then, we introduce self-training for object detection and how feature alignment can regularize self-training for more resilient test-time training. ### Overview of Test-Time Adaptation Test-time training aims to adapt model weights to the target domain distribution in parallel with inference. We denote the source domain labeled data as \(\mathcal{D}_{s}=\{x_{i},y_{i}\}\) where \(y_{i}=\{\mathcal{B}_{i},\mathcal{C}_{i}\}\) are the ground-truth box annotations and class labels respectively. We further denote the backbone network as \(f_{i}=f(x_{i};\Theta)\in\mathbb{R}^{H\times W\times C}\), proposals after RPN and ROI pooling as \(a_{i}\in\mathbb{R}^{N_{a}\times D}\) and the predictors as \(h_{c}(a_{i})\) for semantic label classification and \(h_{r}(a_{i})\) for location and size regression. An object detection model is trained on the source domain labeled data by optimizing the classification and regression losses. When the model is deployed for testing on the target domain unlabeled data \(\mathcal{D}_{t}=\{x_{j}\}\), we assume the testing samples are sequentially streamed and predicted by the model first, and the model weights \(\Theta\) are then updated after observing a batch of testing samples. In the following sections, we elaborate on the details of achieving test-time adaptation for object detection by self-training with distribution alignment regularization. ### Test-Time Adaptation by Self-Training Self-training (ST) has demonstrated tremendous effectiveness for semi-supervised learning [38]. ST often predicts pseudo labels on the unlabeled data samples, and the most confident pseudo labels are used for supervising model training. In this work, we adopt an approach similar to semi-supervised learning [43]: two networks are maintained throughout the training stage, namely the student network \(f(x;\Theta)\) and the teacher network \(f(x;\hat{\Theta})\). The teacher network weights are the exponential moving average of the student ones as below. \[\hat{\Theta}=\beta\hat{\Theta}+(1-\beta)\Theta \tag{1}\] An input image is first applied with one strong augmentation \(\mathcal{S}(x)\) and one weak augmentation \(\mathcal{W}(x)\). We adopt the augmentation strategy proposed in [49] as the strong augmentation. The teacher model predicts objects on the weakly augmented sample \(\mathcal{W}(x)\) and obtains a number of pseudo labeled objects \(\mathcal{P}=\{\hat{b}_{i},\hat{y}_{i}\}\) where \(\hat{b}_{i}\) and \(\hat{y}_{i}\) refer to the bounding box coordinates and object class label. The student model treats the pseudo labeled objects as ground-truth and standard supervised learning losses apply to the student model. Specifically, we optimize the classification \(\mathcal{L}_{st}^{cls}\) and regression \(\mathcal{L}_{st}^{reg}\) losses on the student model branch, where the classification and regression losses follow the definitions in [35].
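As an illustration of the teacher update in Eq. 1, a minimal PyTorch sketch is given below; the function name and the concrete value of \(\beta\) are illustrative assumptions, since the text does not fix them.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, beta=0.999):
    """Exponential moving average of the student weights into the teacher (cf. Eq. 1).

    The value of beta is an assumption; the text only states that the teacher
    weights are the exponential moving average of the student weights.
    """
    for p_teacher, p_student in zip(teacher.parameters(), student.parameters()):
        p_teacher.mul_(beta).add_(p_student, alpha=1.0 - beta)
```

In this protocol the update would be applied once after each gradient step on a minibatch of testing samples, keeping the teacher a smoothed copy of the student.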
### Test-Time Adaptation by Distribution Alignment Self-training (ST) alone is prone to the influence of incorrect pseudo labels, a.k.a. confirmation bias [1]. The situation is less severe in semi-supervised learning, as the labeled loss serves as a strong regularization to self-training. As no labeled data exists in test-time adaptation, direct ST without regularization is exposed to the risk of failing on unlabeled testing data. As empirically revealed in Sect. 4.5, the performance of ST may degrade after certain training iterations. Therefore, to improve the robustness of self-training we further incorporate distribution alignment, which has demonstrated success for test-time adaptation/training [30, 39], as regularization for self-training. Existing distribution alignment is developed for the classification task. For the object detection task, we propose to align two types of features: the backbone feature and the foreground feature. Aligning distributions at these two levels allows better reuse of the RPN network and box predictors. Specifically, we use multi-variate Gaussian distributions in the source domain, \(N(\mu_{s}^{f},\Sigma_{s}^{f})\) and \(N(\mu_{s}^{a},\Sigma_{s}^{a})\), to characterize the global and foreground feature distributions. In a typical Faster R-CNN framework, the backbone network outputs a global feature map \(f_{i}\in\mathbb{R}^{C\times H\times W}\) and proposal features after RPN and ROI pooling \(a_{i}\in\mathbb{R}^{N_{p}\times D}\). To obtain a single vectorized backbone feature and foreground feature for each individual image \(x_{i}\) for estimating the distribution, we first do average pooling over the global and foreground features respectively as below. \[g^{f}(x_{i})=\frac{1}{HW}\sum_{h,w}z_{ihw};\ g^{a}(x_{i})=\frac{1}{N_{a}}\sum_{j=1\cdots N_{a}}a_{ij} \tag{2}\] With vectorized features for each image, we estimate the distribution information by Eq. 3; the same estimation applies to the foreground features, denoted as \(\mu_{s}^{a},\ \Sigma_{s}^{a}\). \[\begin{split}\mu_{s}^{f}&=\frac{1}{|\mathcal{D}_{s}|}\sum_{x_{i}\in\mathcal{D}_{s}}g^{f}(x_{i}),\\ \Sigma_{s}^{f}&=\frac{1}{|\mathcal{D}_{s}|}\sum_{x_{i}\in\mathcal{D}_{s}}(g^{f}(x_{i})-\mu_{s}^{f})(g^{f}(x_{i})-\mu_{s}^{f})^{\top}\end{split} \tag{3}\] **Distribution Alignment**: Aligning the target distribution to the source domain is achieved by minimizing the symmetric KL-Divergence between two multi-variate Gaussian distributions as in Eq. 4. As the KL-Divergence between two Gaussian distributions has a closed-form solution, we can directly use gradient descent methods to optimize the distribution alignment objective. Alignment between the foreground feature distributions follows the same definition by substituting \(\mu^{f}\) and \(\Sigma^{f}\) with \(\mu^{a}\) and \(\Sigma^{a}\), resulting in \(\mathcal{L}_{al}^{a}\). \[\begin{split}\mathcal{L}_{al}^{f}=& D_{KL}(\mathcal{N}(\mu_{s}^{f},\Sigma_{s}^{f})||\mathcal{N}(\mu_{t}^{f},\Sigma_{t}^{f}))\\ &+D_{KL}(\mathcal{N}(\mu_{t}^{f},\Sigma_{t}^{f})||\mathcal{N}(\mu_{s}^{f},\Sigma_{s}^{f}))\end{split} \tag{4}\]
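Since the alignment loss in Eq. 4 relies on the closed-form KL-Divergence between two multivariate Gaussians, a minimal differentiable sketch is given below; the function names are illustrative, and full-rank covariance matrices are assumed.

```python
import torch

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """Closed-form KL( N(mu_p, cov_p) || N(mu_q, cov_q) ) for full-rank covariances."""
    d = mu_p.shape[0]
    cov_q_inv = torch.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (
        torch.trace(cov_q_inv @ cov_p)   # trace term
        + diff @ cov_q_inv @ diff        # Mahalanobis term
        - d
        + torch.logdet(cov_q) - torch.logdet(cov_p)
    )

def alignment_loss(mu_s, cov_s, mu_t, cov_t):
    """Symmetric KL between source and target feature Gaussians (cf. Eq. 4)."""
    return gaussian_kl(mu_s, cov_s, mu_t, cov_t) + gaussian_kl(mu_t, cov_t, mu_s, cov_s)
```

In practice a small multiple of the identity may be added to the covariances to keep the inverse and log-determinant numerically stable.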
**Incremental Target Domain Update**: Although the source domain distributions can be updated in an off-line manner on all available source-domain training samples, it is not equally trivial to estimate the distribution in the target domain under a TTT protocol. Estimating the distribution within a single minibatch of testing samples as the target domain distribution is subject to the randomness of the testing data distribution within a small temporal window, e.g. testing samples are not drawn i.i.d. from the target domain distribution [11]. Therefore, we propose to estimate the true target domain distribution in an exponential moving average manner. Specifically, we use a hyperparameter \(\gamma\) to control the contribution of the current minibatch, and the target domain distribution can be derived incrementally following the rules in Eq. 5, where \(\mathcal{B}\) indicates a minibatch of testing samples. \[\begin{split}\mu_{t}&=\mu_{t}+\delta\\ \Sigma_{t}&=\Sigma_{t}+\gamma\sum_{x_{i}\in\mathcal{B}}[(g(x_{i})-\mu_{t})(g(x_{i})-\mu_{t})^{\top}-\Sigma_{t}]-\delta\delta^{\top}\\ \delta&=\gamma\sum_{x_{i}\in\mathcal{B}}(g(x_{i})-\mu_{t})\end{split} \tag{5}\] ### TTA for Object Detection Algorithm In this section, we summarize the overall algorithm for test-time adaptation for object detection. On the source domain, we summarize the backbone features and foreground features with two Gaussian distributions in an offline manner. During test-time adaptation, we make object detection predictions for each testing sample for instant inference and simultaneously update the distribution estimations in the target domain. When a minibatch of testing samples is accumulated, we update the model weights through gradient descent. A detailed description of the test-time adaptation algorithm is summarized in Alg. 1. ``` Input: Testing sample batch \(\mathcal{B}^{t}=\{x_{i}\}_{i=1\cdots N_{B}}\). # Inference Stage: for \(x_{i}\gets 1\) to \(N_{B}\) do Predict objects: \(h_{c}(a(f(x_{i})),h_{r}(a(f(x_{i}))\) # Adaptation Stage: for \(x_{i}\gets 1\) to \(N_{B}\) do Augmentation: \(\mathcal{W}(x_{i}),\mathcal{S}(x_{i})\); Pseudo label prediction: \(\mathcal{P}=\{\hat{b}_{ij},\hat{y}_{ij}\}\); Incremental distribution update: Update \(\mathcal{N}(\mu_{t}^{f},\Sigma_{t}^{f})\) and \(\mathcal{N}(\mu_{t}^{a},\Sigma_{t}^{a})\) by Eq. 5; Test-time adaptation loss: \(\mathcal{L}_{tta}=\lambda_{st}^{cls}\mathcal{L}_{st}^{cls}+\lambda_{st}^{reg}\mathcal{L}_{st}^{reg}+\lambda_{al}^{f}\mathcal{L}_{al}^{f}+\lambda_{al}^{a}\mathcal{L}_{al}^{a}\); Gradient descent update: \(\Theta=\Theta-\alpha\nabla\mathcal{L}_{tta}\) ``` **Algorithm 1** Test-time adaptive object detection algorithm.
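To make the incremental distribution update step of Alg. 1 concrete, a minimal NumPy sketch of Eq. 5 is given below; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def incremental_gaussian_update(mu_t, sigma_t, feats, gamma):
    """One EMA-style update of the target-domain Gaussian (cf. Eq. 5).

    mu_t:    (D,)   running target mean
    sigma_t: (D, D) running target covariance
    feats:   (B, D) pooled features g(x_i) of the current test minibatch
    gamma:   contribution of the current minibatch
    """
    diff = feats - mu_t                                   # g(x_i) - mu_t for each sample
    delta = gamma * diff.sum(axis=0)
    sigma_new = (
        sigma_t
        + gamma * (diff[:, :, None] * diff[:, None, :] - sigma_t).sum(axis=0)
        - np.outer(delta, delta)
    )
    mu_new = mu_t + delta
    return mu_new, sigma_new
```

The same routine would be applied separately to the global features \(g^{f}\) and the foreground features \(g^{a}\), as prescribed by Alg. 1.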
## 4 Experiments In this section, we validate the effectiveness of STFAR on the test-time adaptive object detection task. We adopt corrupted versions of 3 standard object detection datasets to create a test-time adaptation benchmark for object detection. Existing test-time adaptation methods are adapted to the object detection task for comparison. ### Dataset and Evaluation Protocol We evaluate on three standard object detection datasets. **MS-COCO**[28] provides two training sets, where train2017 contains 118k labeled images and unlabeled2017 contains 123k unlabeled images. In addition, the val2017 dataset containing 5k images is provided for validation. We create a corrupted version of COCO, termed COCO-C, by employing an image corruption package [31], which consists of 15 types of corruptions. For TTA experiments on COCO-C, we use the standard COCO training set to pre-train a source model; the target domain is created by applying the corruptions to the validation set of COCO. **Pascal**[8] contains 20 categories of natural images. We employ approximately 15K images from the training and validation sets of PASCAL VOC 2007 and 2012 to pre-train the source model. We follow a similar procedure to COCO-C to generate PASCAL-C on the test set of PASCAL 2007 containing 4952 images as the target domain. **Cityscapes**[7] consists of 2,975 training images and 500 testing images with 8 categories of objects. The **Foggy-Cityscapes**[37] dataset was created by simulating images captured under foggy weather conditions. Three levels of corruption are generated in Foggy-Cityscapes for each image, controlled by a hyper-parameter \(\beta=0.005,0.01,0.02\). In this work, we choose the most difficult corruption level \(\beta=0.02\) for evaluation. ### Implementation Details **Hyperparameters**: We evaluate with ResNet-50 and ResNet-101 [12] as backbone networks following the Faster R-CNN object detection framework [35]. We optimize the backbone network by SGD with momentum on the three datasets. On COCO-C, we set the batch size to 8 and the learning rate to 1e-4, on Pascal-C we set the batch size to 8 and the learning rate to 1e-5, and on Foggy-Cityscapes we set the batch size to 4 and the learning rate to 1e-5. For the other hyperparameters, we set \(\lambda_{st}^{cls}=\lambda_{st}^{reg}\) to 1.0 on all three datasets. Specifically, on the COCO-C dataset we set \(\lambda_{al}^{f}\) to 0.1, \(\lambda_{al}^{a}\) to 0.01 and \(\gamma\) to \(\frac{1}{64}\), on the Foggy-Cityscapes dataset we set \(\lambda_{al}^{f}\) to 0.1, \(\lambda_{al}^{a}\) to 1.0 and \(\gamma\) to \(\frac{1}{256}\), and on the Pascal-C dataset we set \(\lambda_{al}^{f}\) to 2.0, \(\lambda_{al}^{a}\) to 0.1 and \(\gamma\) to \(\frac{1}{128}\). **Data Augmentation**: The strong augmentation consists of common augmentations, including scale jitter, solarize jitter, brightness jitter, contrast jitter, sharpness jitter, translation and rotation. In addition, the strong augmentation also includes RandErase, which randomly samples a few patches (less than 5) at random locations and erases their pixels with a fixed value, simulating occlusions. The weak augmentation only contains random resize and random flipping, similar to the test augmentation. Under the weak augmentation, the teacher model can provide more precise pseudo labels to instruct the student model.

Figure 3: Illustration of the detection results on the target domain; the first two rows and the last two rows represent the scenarios of **COCO**\(\rightarrow\)**COCO-C** (Snow corruption) and **Cityscapes**\(\rightarrow\)**Foggy-Cityscapes**.

**Competing Methods**: We adapt the following generic state-of-the-art test-time adaptation methods to the object detection task. Direct testing (**Direct Test**) without adaptation simply does inference on the target domain with the source domain model. Test-time normalization (**BN**) [16] updates the batch normalization statistics in the backbone network with a moving average over the testing data. Test-time entropy minimization (**TENT**) [44] updates the parameters of all batch normalization layers in the backbone network by minimizing the entropy of the model predictions on the testing data. Test-time classifier adjustment (**T3A**) [17] computes a target prototype representation for each category using testing data and makes predictions with the updated prototypes. We allow T3A to update the classification predictor only. Source Hypothesis Transfer (**SHOT**) [25] freezes the linear classification head and trains the target-specific feature extraction module by exploiting a balanced category assumption and self-supervised pseudo-labeling in the target domain. **Self-Training** [49] was originally developed for semi-supervised object detection. We adapt it to TTA by only exploiting the unsupervised learning component. **TTAC** [39] discovers clusters in both source and target domains and matches the target clusters to the source ones to improve generalization. Finally, we present our own approach **STFAR**, which performs self-training with distribution alignment regularization. STFAR, SHOT and TTAC update the backbone weights during test-time adaptation.
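For reference, the per-dataset settings listed in the Implementation Details above can be collected in a single configuration; the values are taken from the text, while the dictionary layout and key names are illustrative assumptions.

```python
# Per-dataset TTA hyperparameters as stated in the Implementation Details;
# the structure and key names are assumptions made for illustration.
TTA_CONFIG = {
    "COCO-C":           {"batch_size": 8, "lr": 1e-4, "lambda_al_f": 0.1, "lambda_al_a": 0.01, "gamma": 1 / 64},
    "PASCAL-C":         {"batch_size": 8, "lr": 1e-5, "lambda_al_f": 2.0, "lambda_al_a": 0.1,  "gamma": 1 / 128},
    "Foggy-Cityscapes": {"batch_size": 4, "lr": 1e-5, "lambda_al_f": 0.1, "lambda_al_a": 1.0,  "gamma": 1 / 256},
}
LAMBDA_ST_CLS = LAMBDA_ST_REG = 1.0  # shared across all three datasets
```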
### Test-Time Adaptation for Corrupted Target Domain We evaluate test-time adaptation (TTA) performance on COCO-C with results presented in Tab. 1. When the target domain is contaminated with corruptions, the mAP drops substantially from \(44.6\%\to 15.9\%\) with ResNet50 as backbone, indicating that the corruptions in the target domain pose a great challenge to the generalization of the source domain model. When BN, TENT and T3A are adapted to TTA for object detection, we observe a further drop in performance. This suggests that only updating the batchnorm components or classifier weights is not enough for tackling the distribution shift caused by corruptions. SHOT and Self-Training demonstrate improved results over direct testing (Direct Test), suggesting that self-supervision with pseudo labels provides a viable way to TTA for object detection. TTAC serves as another strong baseline for TTA. This indicates that distribution alignment is a very effective way to adapt the source domain model under corruption-induced distribution shift. Finally, our STFAR combines self-training with distribution alignment and achieves the best results. We further report TTA performance on the PASCAL-C dataset in Tab. 2 and on Foggy-Cityscapes in Tab. 10, respectively. We draw similar conclusions from these results. STFAR consistently outperforms the other competing methods and improves on the baseline by a large margin. **Qualitative Results**: We visualize object detection results on the corrupted target domain in Fig. 5. As seen from the qualitative results, directly testing the source domain model (Direct Test) tends to miss small objects under strong corruption, e.g. the objects in the sky in row 1 and the faraway vehicles in rows 3 and 4. TTA with self-training [49] and TTAC [39] improves detection results over the baseline, being able to identify smaller objects and localize their spatial extent better. Finally, STFAR achieves more stable detection results, with more small objects being detected with more accurate spatial extent. ### Ablation Studies We carry out an ablation study on the COCO-C dataset to validate the effectiveness of the individual components. Specifically, we ablate self-training, global feature alignment and foreground feature alignment. We observe from the results in Tab. 4 that self-training is effective for adapting the source domain model to the target distribution, as evidenced by the improvement over direct testing (\(19.8\%\to 27.6\%\)). When global feature alignment is independently applied, we also observe a significant improvement over the baseline (\(19.8\%\to 26.7\%\)), though it is slightly behind self-training. As self-training is more prone to incorrect pseudo labels, we additionally utilize global feature alignment to regularize self-training, which results in an improvement over both self-training and global feature alignment alone. Finally, when foreground distribution alignment is incorporated we achieve the best performance on TTA, suggesting the effectiveness of combining self-training with distribution alignment for test-time adaptive object detection. ### Further Analysis In this section, we provide further insights into some alternative designs of the framework, more qualitative results and a more detailed analysis of the results. **Cumulative Performance**: We investigate the cumulative performance of ablated models on the COCO-C dataset. Good TTA methods should maintain stable or gradually increasing performance during TTA. As shown in Fig. 4, we make the following observations. First, self-training (ST) alone is a strong baseline as it picks up accuracy at a very early stage of TTA.
However, ST alone may suffer from the accumulation of incorrect pseudo labels, and as more testing samples are seen, the performance of ST starts to decrease, partially owing to confirmation bias. Second, global feature alignment alone is a relatively stable method for TTA; however, it struggles to further improve the performance at the late stage of TTA. Finally, STFAR benefits from the advantage of ST in fast convergence and still maintains its performance at the late stage of TTA due to the regularization of feature alignment.

Figure 4: The cumulative test-time adaptation performance (mAP) on the COCO-C dataset.

**Alternative Weights Update**: By default, STFAR only updates the backbone weights during TTA, as this allows reusing the RPN and RCNN networks, which are less likely to be affected by the corruptions in the target domain. In this section, we examine alternative subsets of weights to update during TTA. As shown in Tab. 5, when batchnorm statistics are allowed to be updated, we observe a significant compromise of mAP. When BN statistics are frozen and only the affine projection parameters are allowed to be updated, we observe an increase over the baseline method. When all model weights, including RPN and RCNN, are updated, the performance is still inferior to STFAR, which only updates the backbone weights. To conclude, updating only the backbone weights during TTA is more effective than the alternative updating strategies. **Alternative Backbone**: We further implemented test-time adaptive object detection on COCO-C with ResNet-101 as the backbone network, with results presented in Tab. 1. Again, we observe consistent improvement of STFAR over TTAC or Self-Training alone, suggesting the effectiveness of the proposed method. ## 5 Conclusion In this work, we investigate a realistic setting for adapting a source domain model to target distribution data subject to natural corruptions. The testing data distribution is assumed to be unknown before the inference stage. Model weights are then adapted to the target domain at test-time by combining self-training with feature distribution alignment regularization. Three standard object detection datasets are converted for the evaluation of the test-time adaptive object detection task.
Extensive comparisons with test-time adaptation methods confirm the effectiveness of the proposed method.

\begin{table} \begin{tabular}{c|c c c c c c c c c c c c c c c|c} \hline \hline
Backbone & Methods & Brit & Contr & Defoc & Elast & Fog & Frost & Gauss & Glass & Impul & Jpeg & Mton & Pixel & Shot & Snow & Zoom & mAP \\ \hline
\multirow{5}{*}{ResNet-50} & Clean & - & - & - & - & - & - & - & - & - & - & - & - & - & - & 44.6 \\ \cline{2-13}
& Direct Test & 38.4 & 22.9 & 12.9 & 16.5 & 38.9 & 24.0 & 8.2 & 4.7 & 9.1 & 13.2 & 9.1 & 6.2 & 10.0 & 19.8 & 4.9 & 15.9 \\
& BN [16] & 15.2 & 3.4 & 1.7 & 7.3 & 13.6 & 8.3 & 1.4 & 0.8 & 1.5 & 3.1 & 1.8 & 2.2 & 1.8 & 5.8 & 2.0 & 4.7 \\
& TENT [44] & 8.5 & 5.6 & 0.5 & 5.0 & 9.7 & 6.4 & 1.5 & 0.5 & 1.6 & 2.2 & 1.6 & 2.4 & 1.7 & 5.4 & 0.8 & 3.6 \\
& T3A [17] & 28.8 & 15.9 & 8.3 & 11.3 & 28.9 & 17.2 & 4.6 & 3.1 & 5.2 & 9.0 & 5.8 & 4.1 & 5.8 & 13.8 & 3.5 & 11.0 \\
& SHOT [25] & **40.9** & 26.6 & 14.7 & 19.7 & **41.5** & 26.7 & 11.0 & 7.2 & 12.1 & 16.4 & 11.0 & 9.7 & 13.0 & 22.0 & 6.4 & 18.6 \\
& Self-Training [49] & 38.1 & 28.4 & 14.7 & 25.5 & 38.5 & 27.9 & 16.7 & 11.4 & 18.8 & 23.8 & 16.0 & 24.5 & 18.6 & 27.6 & 7.8 & 22.6 \\
& TTAC [39] & 38.3 & 29.5 & 15.1 & 28.2 & 39.0 & 28.5 & 16.8 & 14.3 & 18.0 & 23.2 & 14.3 & 24.8 & 19.3 & 26.7 & 8.7 & 23.0 \\
& STFAR (Ours) & 39.1 & **31.1** & **16.8** & **29.0** & 39.0 & **29.2** & **19.2** & **15.4** & **20.1** & **26.1** & **17.2** & **28.3** & **21.0** & **29.5** & **10.2** & **24.7** \\ \hline
\multirow{5}{*}{ResNet-101} & Clean & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & 47.6 \\ \cline{2-13}
& Direct Test & 41.8 & 26.8 & 15.1 & 18.9 & 42.5 & 26.9 & 11.7 & 7.1 & 12.2 & 16.0 & 10.9 & 8.7 & 13.8 & 23.3 & 5.5 & 18.7 \\ \cline{1-1}
& Self-Training [49] & 41.6 & 31.8 & 18.4 & 28.9 & 41.5 & 31.6 & 18.3 & 17.1 & 22.2 & 22.8 & 17.9 & 27.8 & 21.1 & 31.5 & 8.5 & 25.4 \\ \cline{1-1}
& TTAC [39] & 42.3 & 33.5 & 18.3 & 30.7 & 42.6 & 31.5 & 21.2 & 17.7 & 22.1 & 24.9 & 16.9 & 26.3 & 23.1 & 30.0 & 9.7 & 26.1 \\ \cline{1-1}
& STFAR (Ours) & **42.9** & **34.2** & **19.2** & **32.7** & **43.0** & **33.1** & **23.5** & **19.1** & **24.2** & **28.4** & **19.5** & **30.8** & **25.3** & **33.4** & **11.2** & **28.0** \\ \hline \hline
\end{tabular} \end{table} Table 1: Test-time adaptation object detection results on the **COCO**\(\rightarrow\)**COCO-C** dataset.

\begin{table} \begin{tabular}{c|c} \hline
Methods & mAP \\ \hline
Direct Test & 19.8 \\ \hline
Update BatchNorm (BN) statistics & 5.8 \\
Update BN affine projection (Update the stats) & 7.3 \\
Update BN affine projection (Freeze the stats) & 24.6 \\
Update all network except BN Layer & 27.2 \\
Update backbone except BN Layer (Ours) & **29.5** \\ \hline \hline
\end{tabular} \end{table} Table 5: Analysis of the choice of model update parameters. The mean Average Precision (mAP) is reported on the Snow corruption on the COCO-C validation set.

\begin{table} \begin{tabular}{c|c c c c c c c c c c c c c c|c} \hline \hline
Methods & Brit & Contr & Defoc & Elast & Fog & Frost & Gauss & Glass & Impul & Jpeg & Mton & Pixel & Shot & Snow & Zoom & mAP \\ \hline
Clean & - & - & - & - & - & - & - & - & - & - & - & - & - & 80.4 \\ \hline
Direct Test & 69.5 & 23.8 & 16.7 & 42.7 & 64.2 & 41.7 & 11.9 & 13.0 & 13.6 & 35.8 & 18.4 & 26.0 & 16.0 & 38.2 & 25.7 & 30.5 \\
BN [16] & 39.5 & 20.6 & 7.4 & 17.1 & 35.3 & 22.1 & 4.7 & 4.5 & 5.1 & 10.5 & 9.8 & 9.1 & 6.8 & 19.1 & 13.4 & 15.0 \\
TENT [44] & 19.6 & 9.9 & 2.6 & 11.0 & 19.0 & 13.7 & 3.1 & 2.5 & 3.3 & 4.5 & 5.3 & 8.8 & 4.0 & 12.8 & 4.8 & 8.3 \\
T3A [17] & 36.9 & 12.5 & 11.0 & 19.7 & 32.7 & 20.6 & 6.1 & 6.4 & 6.5 & 14.8 & 10.1 & 13.2 & 8.4 & 16.8 & 13.8 & 15.3 \\
SHOT [25] & 72.0 & 31.7 & 18.9 & 46.6 & 67.5 & 45.8 & 12.0 & 11.6 & 16.4 & 41.8 & 19.7 & 33.1 & 19.9 & 42.5 & 27.6 & 33.8 \\
Self-Training [49] & 67.9 & 39.3 & 2.6 & 52.5 & 65.7 & 47.2 & 11.9 & 20.2 & 12.1 & 29.3 & 4.1 & 6.9 & 17.4 & 44.9 & 9.5 & 28.8 \\
TTAC [39] & **72.2** & 40.4 & 29.3 & **58.1** & **68.7** & 50.4 & 29.8 & 28.7 & 33.6 & 46.4 & 29.2 & 46.1 & 35.1 & 48.0 & **34.9** & 43.4 \\
STFAR (Ours) & 67.3 & **51.8** & **34.8** & 55.7 & 65.2 & **50.7** & **32.4** & **34.6** & **36.3** & **49.4** & **34.6** & **55.7** & **37.8** & **50.9** & 34.8 & **46.1** \\ \hline \hline
\end{tabular} \end{table} Table 2: Test-time adaptation object detection results on the **PASCAL**\(\rightarrow\)**PASCAL-C** dataset with ResNet-50 as backbone.
2308.16797
Simple LLM Prompting is State-of-the-Art for Robust and Multilingual Dialogue Evaluation
Despite significant research effort in the development of automatic dialogue evaluation metrics, little thought is given to evaluating dialogues other than in English. At the same time, ensuring metrics are invariant to semantically similar responses is also an overlooked topic. In order to achieve the desired properties of robustness and multilinguality for dialogue evaluation metrics, we propose a novel framework that takes advantage of the strengths of current evaluation models with the newly-established paradigm of prompting Large Language Models (LLMs). Empirical results show our framework achieves state of the art results in terms of mean Spearman correlation scores across several benchmarks and ranks first place on both the Robust and Multilingual tasks of the DSTC11 Track 4 "Automatic Evaluation Metrics for Open-Domain Dialogue Systems", proving the evaluation capabilities of prompted LLMs.
John Mendonça, Patrícia Pereira, Helena Moniz, João Paulo Carvalho, Alon Lavie, Isabel Trancoso
2023-08-31T15:19:28Z
http://arxiv.org/abs/2308.16797v2
# Simple LLM Prompting is State-of-the-Art for Robust and Multilingual Dialogue Evaluation ###### Abstract Despite significant research effort in the development of automatic dialogue evaluation metrics, little thought is given to evaluating dialogues other than in English. At the same time, ensuring metrics are invariant to semantically similar responses is also an overlooked topic. In order to achieve the desired properties of robustness and multilinguality for dialogue evaluation metrics, we propose a novel framework that takes advantage of the strengths of current evaluation models with the newly-established paradigm of prompting Large Language Models (LLMs). Empirical results show our framework achieves state-of-the-art results in terms of mean Spearman correlation scores across several benchmarks and ranks **first place on both the Robust and Multilingual tasks** of the DSTC11 Track 4 "Automatic Evaluation Metrics for Open-Domain Dialogue Systems", proving the evaluation capabilities of prompted LLMs. ## 1 Introduction Automatic dialogue evaluation has largely been focused on evaluating a select few languages. The main reason for this constraint is the lack of linguistic diversity in dialogue corpora, which leads to a lack of chatbots that cover other languages. As a result, the need for multilingual metrics has also been limited. A possible solution to this issue is to leverage the latest batch of Large Language Models (LLMs) to synthetically generate multilingual dialogues. Some research has already been conducted to study the capabilities of these models Guo et al. (2023); Bubeck et al. (2023) and the consensus appears to be that these models have achieved a proxy of a _formal_ linguistic competence in the most studied languages. That is, their responses follow linguistic conventions and are fluent and grammatical, but they might be inaccurate or even hallucinate Guerreiro et al. (2023). More importantly, pertaining to dialogue, they also show signs of _functional_ linguistic competence in their responses, i.e., discursive coherence, narrative structure and linguistic knowledge, even if not fully consistent (sometimes they do not consider context or situated information, and fail to adapt to users and domains). Irrespective of these models' limitations, it is clear their emergent capabilities allow for the development of chatbots with capabilities vastly beyond what earlier models were able to achieve.

Figure 1: Proposed framework architecture. The **Response**, **Context** and **Quality Aspect** under evaluation are fed to the submetrics: **VSP** (Valid Sentence Prediction), **NSP** (Next Sentence Prediction), **MLM** (Masked Language Modelling), **ENG** (Engagement) and **Chat-GPT**. Each submetric score is then weighted according to the aspect, yielding the final metric.

Yet, an interesting research question lingers: _If these models are able to write responses that follow formal and functional linguistic rules, are they also capable of evaluating responses/dialogues in terms of these same rules?_ Prior work has confirmed the language understanding capabilities of instruction-based LLMs for dialogue evaluation Huynh et al. (2023). However, we are the first to study the evaluation capabilities of the newest batch of LLMs in terms of multilinguality and paraphrase robustness. This paper presents our contribution to the DSTC11 track on Robust and Multilingual Automatic Evaluation Metrics for Open-Domain Dialogue Systems Rodriguez-Cantelar et al.
(2023), where we participated in both the Multilingual and Robustness tasks. This track is an excellent venue to benchmark the capabilities of these new LLMs for dialogue evaluation, as it evaluates properties that have been observed in these models. We propose a comprehensive framework, incorporating earlier encoder-based approaches and ChatGPT, as illustrated in Figure 1. By combining multiple models and submetrics through ensembling, our approach aims to improve the performance and robustness of dialogue evaluation, ultimately contributing to the advancement of dialogue system research and development. Overall, our contributions are the following: * We show that ChatGPT is a _strong_ evaluator of dialogues, outperforming typical encoder frameworks. * We propose a new framework for dialogue evaluation that is multilingual and robust to paraphrases. In fact, our combined Encoder and ChatGPT framework ranks **1st place** on both the Multilingual and Robust metrics task. * We discuss the outlook of Dialogue Evaluation in this new realm of LLMs. * We open source the code and checkpoints of the submetrics at github.com/johndmendonca/DialEvalML. ## 2 Related work ### Automatic Evaluation Metrics Statistic-based metrics such as BLEU Papineni et al. (2002), ROUGE Lin (2004), and METEOR Banerjee and Lavie (2005), are a popular choice to evaluate NLG (Natural Language Generation) models as they are easy to employ. These metrics assume valid responses have significant word-overlap with the ground truth. However, this is not a valid assumption for dialogue: there are many equally good responses for a single utterance. As such, the correlation with Human Evaluation (HE) annotations is very low for these metrics Liu et al. (2016), and they cannot be used to evaluate models whenever a gold-response is not available. Earlier learned metrics such as ADEM Lowe et al. (2017) and RUBER Tao et al. (2018) explicitly predict HE annotations by initialising pretrained Recurrent Neural Network response generators. Unlike ADEM, which is trained with HE-annotated data in a supervised manner, RUBER leverages negative samples. In both cases, a reference response is used to score the candidate response. As such, these metrics still suffer the same issues as word-overlap metrics. The primary motivation for the negative sampling approach in RUBER was the need for extensive HE annotations in ADEM. Approaches similar to this are now the norm for training open-domain dialogue evaluation metrics. By using well-defined self-supervised tasks which correlate well with their corresponding aspects, the annotation limitations are mostly circumvented. The most widely used self-supervised task is Next Sentence Prediction (NSP), as it is known to correlate well with HE that evaluate _"Context Awareness"_. The typical approach is to finetune a pretrained encoder model with this automatically generated data Mehri and Eskenazi (2020); Phy et al. (2020); Mendonca et al. (2022); Zhao et al. (2020); Zhang et al. (2022). More complex approaches leverage graph representations to model dialogue interactions explicitly Huang et al. (2020); Zhang et al. (2021). Another typically employed self-supervised task is Valid Sentence Prediction (VSP), which uses word-level noising techniques to generate negative samples and correlates well with HE that evaluate _Fluency_Phy et al. (2020); Mendonca et al. (2022); Zhang et al. (2022). Parallel to this trend, other annotation-free approaches in the literature have surfaced. 
For instance, qualities such as _Specificity_ correlate reasonably well with metrics obtained directly from the MLM (Masked Language Modelling) loss calculated using pretrained encoder models Mehri and Eskenazi (2020); Phy et al. (2020); Zhang et al. (2022). Given the multifaceted nature of dialogue, dialogue quality metrics typically employ a combination of submetrics. Mehri and Eskenazi (2020) leverage follow-up utterances from a pretrained decoder model to calculate 18 turn and dialogue-level submetrics, which are then used as inputs to a regression model for overall quality. In fact, Linear Regression is frequently used as a feature aggregation method in the literature Jiang et al. (2022); Mehri and Eskenazi (2020). Alternatively, Phy et al. (2020) propose a hierarchical composition where they incorporate the quality aspects together in a way that aspects in the lower hierarchy need to be satisfied before aspects higher up are considered. Also worth mentioning is the work of Zhang et al. (2022), which proposes the so-called Correlation Re-scaling method. Here, the contribution of each aspect is calculated from the individual correlations of the submetrics, obtained from a subset of HE. ### Large Language Models The widespread use of LLMs was established, practically speaking, with the work of Devlin et al. (2019), where a transformer architecture Vaswani et al. (2017) is pretrained with substantial amounts of unlabelled text with a Masked Language Modelling (MLM) objective. With this architecture, a new paradigm in NLP surfaced, where the adaptation to downstream tasks was conducted by fine-tuning the pretrained model with supervised data. Later on, GPT-3 Brown et al. (2020), which is trained with an autoregressive objective, showed competitive results by leveraging few-shot prompting. Nevertheless, given their training objective function, it was difficult for autoregressive LLMs to successfully perform downstream NLP tasks without substantial prompt engineering. Ouyang et al. (2022) propose finetuning GPT-3 using a 3-step approach named Reinforcement Learning through Human Feedback (RLHF). In detail, the model is (1) initially finetuned using supervised data obtained from labelling prompts (SFT); (2) a reward model is trained using ranked responses given a prompt; (3) the policy is optimised against the reward model using the Proximal Policy Optimisation reinforcement learning algorithm Schulman et al. (2017). As a testament to the power of this approach, ChatGPT took the world by storm in late 2022 thanks to its incredible human-like generation capabilities. This was achieved by including dialogues in all steps of RLHF. ## 3 Problem Formulation The main goal of this track was to develop and benchmark automatic open-ended dialogue evaluation metrics. Two tasks were proposed this year: Metrics for Multilingual Data and Robust Metrics. For the Metrics for Multilingual Data task, participants were asked to construct quality metrics that perform well in a multilingual setup. For the Robust Metrics task, the goal was to develop metrics that perform robustly when evaluated over back-translated/paraphrased sentences in English. In both tasks, the proposed metrics were evaluated at the turn and dialogue level, without access to a reference.
In a _turn-level_ evaluation setting, the goal is, given a prior dialogue history (frequently denoted as context) \(c\) of a varying number of turns and a response \(r\), to learn a scoring function (also known as a metric) that assigns a score \(f(c,r)\to s\). Conversely, in a _dialogue-level_ evaluation setting, the goal is to evaluate the performance throughout the full dialogue. Irrespective of the level of evaluation, the proposed metrics' outputs are typically compared against HE annotations that use a Likert scale, where the lowest value means lowest quality and the highest value maximum quality. For this track, the performance of these metrics was evaluated by calculating the Pearson correlation between the calculated score and HE. ## 4 Methodology Our framework, which we call DialEvalML, can be viewed as a dual-layered ensemble, performed at the **model** and **submetric** level, that employs strong multilingual pretrained encoder and decoder models which were finetuned or prompted1. In this section, we describe the step-by-step process of DialEvalML, detailing the various components and methods employed. Footnote 1: We tried experimenting with metrics that use graph representations, but found implementing these metrics to be Multilingual and Robust, and including them in our framework, to be impractical, not to mention detrimental to performance in some instances. ### Submetrics Similar to other frameworks, including the best performing ones in last year's track Zhang et al. (2022); Jiang et al. (2022), which take inspiration from the works of Phy et al. (2020); Sinha et al. (2020); Mehri and Eskenazi (2020), we employ several submetrics to evaluate dialogue responses - ranging from zero-shot prediction using pretrained LLMs to trained models using self-supervised and supervised methods - and weigh them according to the aspect we wish to predict. #### 4.1.1 VSP: Valid Sentence Prediction Following Sinha et al. (2020), we train a regression model that is optimised to differentiate between positive samples and synthetic negative samples. **Positive** samples are perturbed by randomly applying one of the following: (1) no perturbation, (2) punctuation removal, (3) stop-word removal. **Negative** samples are generated by randomly applying one of the following rules: (1) word reorder (shuffling the ordering of the words); (2) word-drop; and (3) word-repeat (randomly repeating words). #### 4.1.2 NSP: Next Sentence Prediction With the binary **NSP** (Next Sentence Prediction) task, the goal is to distinguish a positive example from a semantically negative one, given a context. We train a discriminative regression model using the following sampling strategy: **positive** responses are drawn directly from the dialog; **negative** responses are randomly selected and a token coverage test discards semantically similar sentences. All responses are processed using the positive-sample heuristic used by **VSP**. For both tasks, the underlying goal is that paraphrased and/or translated responses should have the same coherence score as the original response, since they (in theory) convey the same message. In order to increase the robustness of our framework to paraphrased responses, we propose a Siamese Neural Network. Simply put, we train an encoder model (denoted NSP-Siamese) to jointly optimise a Cosine Embedding Loss between the hidden states of the encoder model for the original and a paraphrase, and the individual errors between the predictions and the ground truth. We hypothesise this enables the model to compare the semantic coherence of the responses w.r.t the context, instead of more spurious features such as syntax. A similar approach could have been employed for multilingual metrics; however, scaling to more languages is computationally expensive: one would either need a new model for each language, or a training procedure requiring a forward pass for each language, for each example.
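Returning to the VSP sampling of Section 4.1.1, a minimal sketch of the word-level perturbations is given below; the function name, tokenisation and perturbation probabilities are assumptions, as the text does not specify them.

```python
import random

def vsp_negative(tokens):
    """Create a VSP negative sample by one random word-level perturbation:
    word reorder, word drop, or word repeat (cf. Section 4.1.1).
    The drop/repeat probabilities are illustrative assumptions."""
    rule = random.choice(["reorder", "drop", "repeat"])
    tokens = list(tokens)
    if rule == "reorder":
        random.shuffle(tokens)
    elif rule == "drop":
        kept = [t for t in tokens if random.random() > 0.3]
        tokens = kept or tokens  # avoid returning an empty sentence
    else:  # repeat
        tokens = [t for tok in tokens for t in ([tok, tok] if random.random() < 0.3 else [tok])]
    return tokens
```

Positive samples would analogously receive no perturbation, punctuation removal, or stop-word removal, as described above.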
#### 4.1.3 MLM: Masked Language Modelling Similar to Mehri and Eskenazi (2020b); Phy et al. (2020), we use a pretrained encoder model to calculate the MLM loss of all tokens of the response. The resulting **MLM** submetric is calculated as the sum of the individual losses. #### 4.1.4 ENG: Engagement An important quality aspect of dialogue that is frequently overlooked is _Engagement_. Some works attempt to equate this aspect with _Specificity_ and related metrics. However, we argue this is a reductive solution, as engagement is an abstract and multi-dimensional concept, thereby making a surface-level evaluation of the response in terms of diversity insufficient. As such, and following the methodology used for **VSP** and **NSP**, we train a discriminative model using RED (Reddit-based Engagement Dataset) (Xu et al., 2022), which we then use as a submetric denoted in our framework as **ENG**. This dataset is sourced from Reddit and is curated using a novel distant-supervision framework. This framework aggregates emotional, attentional, behavioural and reply engagement onto a single score denoted EncLex, which then has a hyperparameter threshold applied to it to cluster posts into positive and negative samples. ### Exploiting Data Augmentation for Robust and Multilingual Evaluation The main novelty of this year's track is the release of training and development dialogue data that has been augmented with MT (Machine Translation) - for the Multilingual task - and Paraphrases - for the Robust task. These augmentations are subsequently scored to determine similarity against the original data: for MT, several COMET QE (Quality Estimation) scores (Rei et al., 2020; Zerva et al., 2021; Rei et al., 2022) were provided; for Paraphrases, the organisers provided cosine similarity scores of the sentence embeddings. A naive approach to obtain competitive metrics in both tasks would be to simply introduce the full amount of augmented data during self-supervised and supervised training. However, Mendonca et al. (2023) showed that low quality augmentation affects the performance of models trained on MT augmented data, especially for **VSP**. Following this work, we select 5% and 75% of the best translated data (ranked using COMET QE) for training of the **VSP** and **NSP** models respectively. For **ENG**, we train different proportions of data and select the best performing ones. ### ChatGPT We briefly experimented with different prompts, and found the best performing prompt (irrespective of language) on a held-out internal set to be simply: * Turn-level: _"Given the Context, evaluate from 1-5 the Response in terms of [aspect]. Provide a single score and nothing else."_ * Dialogue-level: _"Evaluate the following dialogue from 1-5 in terms of [aspect]. Provide a single score and nothing else."_ Unlike GPT-3, the API for ChatGPT does not output the log probabilities of the most likely tokens. As such, the measurement of quality is non-deterministic. We attempt to reduce output variability by reinforcing the desired output in the prompt (_"Provide a single score and nothing else."_) and by setting the temperature to 0. We report a mean absolute deviation of 0.0182 across 3 runs when querying _Appropriateness_ on the provided en/dailydiag-grade dataset included in the development set. To facilitate ensembling in later stages, we normalise the predictions to [0,1]. The default processing step consists of searching for an integer in the response. However, there are some instances where ChatGPT fails to output the desired score: (1) When conducting dialogue-level evaluation, the model sometimes outputs scores for each individual response. In these cases, we calculate the average score, similar to the dialogue-level encoder scores. (2) Less frequently, ChatGPT ignores the task and continues the conversation. Here, we prompt the model again until a score is provided.
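As an illustration of the turn-level prompting just described, a minimal sketch using the legacy `openai` (pre-1.0) Python client is given below; the exact message layout, score parsing and normalisation to [0,1] are assumptions, not the authors' released implementation.

```python
import openai

TURN_PROMPT = ("Given the Context, evaluate from 1-5 the Response in terms of {aspect}. "
               "Provide a single score and nothing else.")

def chatgpt_turn_score(context, response, aspect):
    """Query gpt-3.5-turbo at temperature 0 and map the returned 1-5 score to [0, 1]."""
    message = (f"{TURN_PROMPT.format(aspect=aspect)}\n\n"
               f"Context: {context}\nResponse: {response}")
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": message}],
        temperature=0,
    )["choices"][0]["message"]["content"]
    digits = [int(ch) for ch in reply if ch.isdigit()]
    if not digits:
        return None  # caller re-prompts until a score is provided
    return (min(max(digits[0], 1), 5) - 1) / 4
```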
### Submetric Ensembling Despite having a key role in NLG evaluation, HE has been performed while suffering from nontransparent and inconsistent annotation procedures. As such, annotations from different works that one would expect to report the same quality are frequently only nominally the same. A good example is _Coherence_, with some definitions referring to it as (1) semantic relevance with respect to a previous sentence; (2) a theme/topic; or even (3) _Readability_, which is considered a different quality in other guidelines. Howcroft et al. (2020) provide an in-depth survey of 165 NLG papers with human evaluations where these issues are highlighted. Taking these facts into account, it is not clear we can successfully apply an empirical surjective mapping function from our submetrics to the quality aspects. Instead, we take a data-driven approach to generate this mapping, similar to the one proposed in Zhang et al. (2022). The main difference between the original Correlation Re-Scaling method and our approach is that, instead of zeroing the weights of submetrics that have a negative correlation with the given aspect, we take a probabilistic approach where we conduct a statistical significance test, i.e., we zero the weight of a submetric whenever its \(p\)-value is higher than a given threshold. This ensures submetrics which are strongly and negatively correlated with the aspect (for example, **MLM** and _Fluency_) are still included in the ensembling 2. Footnote 2: For some annotations, none of the metrics were statistically significant. In these cases, we resort to the original proposed approach. ### Dialogue-level Evaluation We obtain dialogue-level quality predictions from the encoder models - **NSP**, **VSP**, **MLM** and **ENG** - by averaging the individual turn-level predictions. These are combined with the dialogue-level predictions obtained by prompting ChatGPT with the full dialogue in the prompt.
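A minimal sketch of this aspect-level weighting, as we read the modified Correlation Re-Scaling described above, is given below; the choice of Spearman correlation, the renormalisation and the handling of negatively correlated submetrics are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def aspect_weights(submetric_scores, human_scores, p_threshold=0.05, power=2):
    """Per-submetric weights for one quality aspect: squared correlation with the
    human annotation, masked to zero when the correlation is not significant."""
    weights = []
    for scores in submetric_scores:            # one array of scores per submetric
        rho, p = spearmanr(scores, human_scores)
        weights.append(abs(rho) ** power if p <= p_threshold else 0.0)
    weights = np.asarray(weights)
    total = weights.sum()
    # If everything is masked (all-zero), the paper resorts to the original
    # Correlation Re-Scaling approach instead; that fallback is omitted here.
    return weights / total if total > 0 else weights
```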
The **ENG** model was trained using the RED dataset, more specifically on the 80k split with negatively sampled data (Xu et al., 2022). Given that it is an English dataset, we use mBART50 (Liu et al., 2020) to augment the original dataset with Spanish and Chinese MT. Finally, we score it using the WMT20-COMET-QE-DA model (Rei et al., 2020). For the paraphrase augmentation, we follow the organisers' approach of using Parrot Paraphraser (Damodaran, 2021) and scoring the paraphrases with cosine similarity.

### 5.2 Training and Hyperparameters

We used XLM-RoBERTa-large (Conneau et al., 2020) as the encoder model for the experiments. This model is the multilingual version of RoBERTa, pretrained on CommonCrawl data containing 100 languages. We used a single Quadro RTX 6000 24GB GPU for the encoder experiments, and accessed ChatGPT (gpt-3.5-turbo) in late March using the OpenAI API.

For the **VSP**, **NSP** and **ENG** metrics, a token representing the speaker was added for each turn, and a maximum history length of 3 turns was used during training. For predictions in the development and test sets we include the full conversational context whenever possible. If it surpasses input size limitations, we iteratively remove turns from the context, starting from the oldest one. We applied a regression head consisting of a 2-layer MLP with a hidden size of 1024 and a hyperbolic tangent function as activation for prediction. All parameters were trained/finetuned using the Adam optimiser (Kingma and Ba, 2015). The fully finetuned models used a learning rate of 3e-6 and were trained for 3 epochs using a batch size of 16. Evaluation was conducted every 10,000 steps. The best-performing model on the evaluation set was selected for testing. For the **MLM** metric, we used the existing LM head available in the Transformers library (Wolf et al., 2020).

With respect to the model-level ensembling, we conduct simple unweighted averaging of the predictions of the models. For the submetric-level ensembling, we define the mask threshold as \(p>0.05\) and square the correlations following Zhang et al. (2022). For testing, we define a mapping from the development quality aspects to the test-set aspects and obtain the final weights by averaging the weights obtained on the test set.

### 5.3 Model ensembling

In order to determine the best combination of models to include in our model ensemble, all encoder-based models that require training were trained using different subsets of data. This includes the original (EN) English data, the corresponding augmentations in Chinese (ZH), Spanish (ES) and paraphrases (PA), and the QE-ranked multilingual augmentation (MLXX)5.

Footnote 5: We only include the best-performing ML models.

Spearman correlation results are presented in Table 1. For the **VSP** submetric, we note that the inclusion of translations is detrimental to performance. In fact, the best-performing model is PA, followed by EN. This contrasts with **NSP**, where we observe that the inclusion of more translated data improves performance. For **ENG**, the best performance is obtained with 20% of translated data. We also include the 10 and 50% models in our framework to take advantage of ensembling.

### 5.4 Track Results

For the track we submitted 4 different systems, exploring the contribution of the different components of our framework:

* **System 1 (DialEvalML)**: Submetric ensembling of ChatGPT + XLM-R.
* **System 2**: Submetric ensembling of XLM-R.
* **System 3**: Submetric ensembling of ChatGPT.
* **System 4**: Direct mapping of ChatGPT submetrics. Table 2 identifies the turn-level weights calculated for testing for System 1. \begin{table} \begin{tabular}{l|l|c c c c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{**Submetric**}} & \multicolumn{6}{c}{**Language**} \\ \hline \multicolumn{1}{c}{\multirow{2}{*}{**Submetric**}} & \multicolumn{1}{c}{**Model**} & \multicolumn{1}{c}{**EN**} & \multicolumn{1}{c}{**ES**} & \multicolumn{1}{c}{**ZH**} & \multicolumn{1}{c}{**PA**} & \multicolumn{1}{c}{**ALL**} \\ \hline \multirow{4}{*}{**VSP**} & **EN** & 0.195 & 0.173 & 0.161 & **0.067** & 0.149 \\ & ES & 0.156 & 0.183 & 0.158 & 0.012 & 0.127 \\ & **ZH** & 0.179 & 0.111 & 0.102 & 0.086 & 0.119 \\ & **PA** & **0.212** & **0.193** & **0.198** & 0.062 & **0.166** \\ & **ML5** & 0.195 & 0.168 & 0.157 & 0.040 & 0.140 \\ \hline \multirow{4}{*}{**NSP**} & EN & 0.279 & 0.256 & 0.286 & 0.267 & 0.272 \\ & ES & 0.266 & 0.257 & 0.282 & 0.251 & 0.264 \\ & **ZH** & 0.246 & 0.238 & 0.298 & 0.232 & 0.254 \\ & **PA** & **0.307** & 0.279 & 0.286 & **0.279** & 0.288 \\ & **ML75** & 0.300 & **0.284** & **0.311** & 0.272 & **0.292** \\ \hline \multirow{4}{*}{**ENG**} & EN & 0.319 & 0.275 & 0.251 & 0.260 & 0.276 \\ & ML5 & 0.310 & 0.268 & 0.214 & 0.275 & 0.267 \\ \cline{1-1} & **ML10** & 0.334 & 0.296 & 0.243 & 0.279 & 0.288 \\ \cline{1-1} & **ML20** & **0.379** & **0.324** & **0.274** & **0.316** & **0.324** \\ \cline{1-1} & **ML50** & 0.340 & 0.263 & 0.258 & 0.289 & 0.287 \\ \cline{1-1} & PA & 0.265 & 0.245 & 0.213 & 0.265 & 0.247 \\ \hline \hline \end{tabular} \end{table} Table 1: Spearman Correlation scores of our trained model variants on all Language benchmarks on the full development set. The best score for each submetric and language is highlighted in **bold**. Models included in the final ensemble are in **bold**, except for NSP, which also includes NSP-Siamese. Task 1: Multilingual MetricsThe results for each team for Task 1 are presented in Table 3, together with all of our submissions. In all languages at both the dialogue and turn level, our submissions vastly outperform others, with the exception of S2, which has comparable results with other participants. This clearly demonstrates the conversational understanding ChatGPT possesses. As expected, the best submission is S1, which conducts submetric ensembling with the XLM-R submetrics. This is followed by S3 and S4, which are exclusive ChatGPT submissions with and without ensembling, respectively. Task 2: Robust MetricsThe results for each team for Task 2 are presented in Table 4. Similar to Task 1, in Task 2, our ChatGPT submissions outperform other teams. However, at the dialogue level, the best performing model is AM-FM. ### Example predictions Given the widely publicised emergent capabilities of current LLMs, it is worthwhile exploring where their quality predictions diverge from the annotators. To do so, we checked all instances where ChatGPT (System 4) diverges from the Human Evaluation (HE) annotations by more than 3 points. In all of the detected examples, we noted ChatGPT consistently underestimated the quality of the response when compared to HE. We present in Table 5 two representative examples. In the first example, we see that ChatGPT erroneously underestimates quality due to the inclusion of _"seed"_ in the response. We posit this is due to the RLHF finetuning, which conditions the model to avoid inappropriate or divisive topics. In the second example, we see ChatGPT has trouble understanding the conversation. 
Although one could argue the HE scores for _Correctness_ and _Appropriateness_ are too high, it seems clear the response is undeserving of a minimum score for all aspects. In fact, if one prompts the model to provide an explanation for _Content Richness_, it replies the following: _"The response attempts to provide some content related to the topic of adolescent sadness, but it is vague and lacks depth. The mention of "Qibing" without any explanation or context \begin{table} \begin{tabular}{l|c c} **Team** & **Turn (rank)** & **Dial (rank)** \\ \hline Baseline (AM-FM) & 0.3387 (4) & **0.4800** (1) \\ Team 1 & 0.1537 (6) & 0.1111 (4) \\ Team 3 & 0.2697 (5) & 0.2196 (3) \\ Team 4 (us) & _0.4890 (1)_ & _0.3031 (2)_ \\ - S2 & 0.3320 & 0.2335 \\ - S3 & 0.4756 & 0.2979 \\ - S4 & 0.4427 & 0.2492 \\ Team 6 & 0.4190 (2) & - \\ Team 7 & 0.3833 (3) & - \\ \hline \end{tabular} \end{table} Table 4: Average Spearman correlation and corresponding rank across the 4 dimensions evaluated for the baseline Deep AM-FM and all participating teams on the Task 2 (Robust metrics) test set. **Bold** denotes the best result for the corresponding Language, _italic_ denotes our best submission. \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline **Aspect** & **VSP** & **NSP** & **MLM** & **ENG** & **cGPT-A** & **cGPT-R** & **cGPT-C** & **cGPT-G** \\ \hline **Appropriateness** & 0.039 & **0.176** & 0.017 & 0.0511 & **0.165** & **0.185** & 0.181 & **0.185** \\ **Relevance** & 0.014 & **0.214** & 0.003 & 0.023 & 0.188 & 0.210 & 0.160 & 0.190 \\ **Content Richness** & 0.176 & 0.085 & 0.181 & **0.238** & 0.039 & 0.022 & 0.210 & 0.048 \\ **Grammatical Correctness** & 0.021 & 0.084 & -0.06 & 0.061 & 0.238 & 0.242 & 0.155 & **0.258** \\ \hline \end{tabular} \end{table} Table 2: Calculated submetric weights of System 1 for test set quality aspects. Highest weight per aspect in **bold**. \begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c} \hline & \multicolumn{2}{c|}{**EN**} & \multicolumn{2}{c|}{**ZH**} & \multicolumn{2}{c|}{**ES**} & \multicolumn{2}{c|}{**ML-AVG**} & \multicolumn{2}{c}{**Rank**} \\ \hline **Team** & **Turn** & **Dial** & **Turn** & **Dial** & **Turn** & **Dial** & **Turn** & **Dial** & **Turn** & **Dial** \\ \hline Baseline (AM-FM) & 0.2940 & 0.2414 & 0.0753 & 0.4648 & 0.1826 & **0.8080** & 0.1840 & 0.5047 & 4 & 2 \\ Team 2 & 0.1469 & - & 0.1054 & - & 0.0808 & - & 0.1110 & - & 5 & - \\ Team 4 (us) & & & & & & & & & & 1 & 1 \\ - _- SI (DialEvalML)_ & _0.4818_ & _0.5342_ & _0.3936_ & _0.7133_ & _0.5590_ & _0.8080_ & _0.4881_ & _0.6852_ & & \\ - S2 & 0.2625 & 0.3295 & 0.3096 & 0.7030 & 0.5056 & 0.2500 & 0.3592 & 0.4275 & & \\ - S3 & 0.4795 & 0.5251 & 0.3656 & 0.6701 & 0.5409 & 0.8080 & 0.4620 & 0.6677 & & \\ - S4 & 0.4586 & 0.5039 & 0.3618 & 0.5859 & 0.5412 & 0.5915 & 0.4539 & 0.5604 & & \\ Team 5 & 0.3702 & 0.1865 & 0.0701 & 0.1356 & 0.1983 & 0.6830 & 0.2129 & 0.3350 & 3 & 3 \\ Team 7 & 0.2214 & - & 0.3112 & - & 0.5644 & - & 0.3657 & - & 2 & - \\ \hline \end{tabular} \end{table} Table 3: Average Spearman correlation across the 4 dimensions evaluated for the baseline Deep AM-FM (Zhang et al., 2021) and all participating teams on the Task 1 (Multilingual metrics) test set. **Bold** denotes the best result for the corresponding Language, _italic_ denotes our best submission. leaves the reader confused. The response could benefit from more specific and informative details about the topic to increase its content richness."_. 
However, if anything, the inclusion of the last sentence increases the richness of the response. Yet, it seems ChatGPT is conflating _Content Richness_ with _Relevance_. We observe the same behaviour in all other instances we studied, which is in line with the submetric weights (Table 2).

## 6 Discussions

The results from our work on both tasks (Section 5.4) reveal that ChatGPT vastly outperforms typical encoder approaches that are trained to discriminate positive samples from artificially generated negative ones. It is important to note that, compared to the months' worth of research dedicated to optimising our encoder models (including curation, training and selection), we were able to easily outperform all other teams and our own encoder models with a day's worth of prompt engineering. This is, in our opinion, a turning point in the paradigm of dialogue evaluation. In any case, we do find instances where ChatGPT fails to accurately evaluate aspects of quality, as identified in Section 5.5. Future research directions may attempt to tackle the issues of score calibration by providing prompts that include examples and/or explicitly provide guidelines for scoring.

However, given the current landscape of dialogue generation and, as our submission suggests, dialogue evaluation, it is important to reflect on the value of current quality estimation frameworks. One might argue that performing HE or developing metrics that evaluate responses and/or dialogues in terms of linguistic competence (e.g. _Grammatical Correctness_ or _Coherence_) is no longer informative for the current and future crop of LLMs. Besides it becoming ever clearer that these models no longer output responses that are incoherent or incorrect, we are reaching the point where these models are better evaluators than humans themselves (Gilardi et al., 2023). As such, developing metrics that correlate well with HE is becoming increasingly questionable.

One of the main points of contention w.r.t. the deployment of these models to the public pertains to their "safety" and "trustworthiness". But while "trustworthiness" can be evaluated by connecting the outputs to external and verifiable sources, the notion of "safety" is much more ambiguous. Kempt et al. (2023) suggest considering Positionality, Acceptability, and Value Alignment (PAVA) as features chatbots should have to fulfil appropriateness requirements. However, automatically evaluating whether a chatbot has these features using current dialogue evaluation protocols seems implausible. Instead, the development of challenge sets for validation (such as the ones proposed in Valmeekam et al. 2023) appears to be the logical next step for the evaluation of future chatbots6.

Footnote 6: See OpenAI Evals for recent collaborative research efforts in this direction.

## 7 Conclusion

This paper presents a novel open-domain and reference-free dialogue evaluation framework that leverages strong pretrained LLMs using finetuning and zero-shot prompting. These models, combined with effective ensembling strategies, substantially outperform the previous automatic evaluation paradigm of only training LMs with semi-supervised training objectives. In fact, DialEvalML ranks 1st on both the Robust (1st turn-level, 2nd dialogue-level) and Multilingual (1st on both levels) tasks of Track 4 at DSTC11.

\begin{table} \begin{tabular}{l} \hline \hline **Chatbot:** Tennis racquets used to be made from cloth strips stuffed with feathers haha!
Then eventually evolved into today’s technology \\ **Human:** oh i never knew that \\ **Chatbot:** Indeed, there was also aluminium construction improvements allowed for the introduction of oversized rackets by 1975 which was manufactured by weed \\ **Label:** **A:** 4.7 **C:** 4.7 **G:** 4.3 **R:** 5.0 **ChatGPT:** **A:** 1.0 **C:** 2.0 **G:** 2.0 **R:** 1.0 **R:** 1.0 \\ _The mention of ”weed” as the manufacturer may cause confusion or be received as inappropriate without further context.”_ \\ **Best:** **A:** 2.2 **C:** 2.6 **G:** 1.9 **R:** 2.2 \\ \hline \hline **Human:** Don’t cry for breakfast. \\ **Chatbot:** Cry t for comfort \\ **Human:** In the end, young people are always inexpixably said adolescence. Which one is Qibing? \\ **Label:** **A:** **C:** 5.0 **G:** 5.0 **R:** 5.0 **R:** 5.0 **ChatGPT:** **A:** 1.0 **C:** 1.0 **G:** 1.0 **R:** 1.0 **R:** 1.0 _”The response does not directly relate to the context or provide a meaningful answer. It seems unrelated and out of place. The mention of ”Qibing” without any explanation further adds to the confusion._ \\ **Best:** **A:** 1.3 **C:** 1.9 **G:** 1.1 **R:** 1.2 \\ \hline \hline \end{tabular} \end{table} Table 5: Example turn-level predictions for _Appropriateness_, _Content Richeness_, _Grammatical Correctness_ and _Relevance_. We include the ChatGPT explanation for _Appropriateness_. ## Acknowledgements This research was supported by the Portuguese Recovery and Resilience Plan through project C645008882-00000055 (Responsible.AI), and by national funds through _Fundacao para a Ciencia e a Tecnologia_ (FCT) with references PRT/BD/152198/2021 and UIDB/50021/2020, and by the P2020 program MAIA (LISBOA-01-0247-FEDER-045909).
2307.16674
Orbifolds of topological quantum field theories
The orbifold construction via topological defects in quantum field theory can either be understood as a state sum construction internal to a given ambient theory, or as the procedure of (identifying and) gauging ordinary and "non-invertible" symmetries. Here we explain how this is rigorously understood in the case of topological QFTs. We provide various examples and outline general features, also of relevance for full QFT.
Nils Carqueville
2023-07-31T13:50:35Z
http://arxiv.org/abs/2307.16674v2
# Orbifolds of topological quantum field theories ###### Abstract The orbifold construction via topological defects in quantum field theory can either be understood as a state sum construction internal to a given ambient theory, or as the procedure of (identifying and) gauging ordinary and "non-invertible" symmetries. Here we explain how this is rigorously understood in the case of topological QFTs. We provide various examples and outline general features, also of relevance for full QFT. This is a contribution to the Encyclopedia of Mathematical Physics (editors-in-chief: M. Bojowald and R. Szabo), for the section edited by C. Meusburger. ###### Contents * 1 Introduction and overview * 2 Reminder on closed TQFTs * 3 Defect TQFTs * 4 Orbifolds of defect TQFTs * 4.1 Orbifold data * 4.2 Orbifold construction * 4.3 Orbifold completion ## 1 Introduction and overview The generalised orbifold construction takes as input a quantum field theory \({\cal Z}\) and a collection of topological defects \({\cal A}\) of \({\cal Z}\), to produce a new quantum field theory \({\cal Z}_{\cal A}\). In particular, correlation functions which \({\cal Z}_{\cal A}\) associates to a closed spacetime \(M\) are computed by filling \(M\) with a network or "foam" of defects of type \({\cal A}\) (of all codimensions), and then evaluating with \({\cal Z}\), e.g. schematically (1.1) The construction should not depend on the choice of defect foam, and this condition imposes constraints on the labels \({\cal A}_{j}\) for defects supported on \(j\)-dimensional strata (of which \({\cal Z}\) detects only isotopy classes due to the topological nature of the defects). If the foam is taken to be Poincare dual to a triangulation of \(M\), triangulation invariance imposes finitely many constraints. A collection of defects \({\cal A}\) satisfying these constraints is called an orbifold datum. The eponymous source of orbifold data \({\cal A}\) are (gaugeable) symmetries of the theory \({\cal Z}\), in which case all \({\cal A}\)-defects of non-zero dimension are invertible with respect to their fusion. Then \({\cal Z}_{\cal A}\) is the associated gauged theory, or orbifold theory, obtained by averaging over all gauge connections. Examples 4.8, 4.12 and 4.25 give the relation to orbifold stacks in the context of sigma models. The defects in an orbifold datum however need not come from a group action, and they need not be invertible. Indeed, (lattice or) state sum models are other special cases of the generalised orbifold construction, namely when \({\cal Z}\) is the "trivial" theory as explained in Examples 4.7 and 4.11, see also Remark 4.30. It is thus natural to think of the orbifold construction as a state sum construction "internal" to any given theory \({\cal Z}\), and we drop the attribute "generalised". Alternatively, even if \({\cal A}\) does not arise from a group action one may still think of it as encoding a "generalised" symmetry of \({\cal Z}\). Other common designations are "non-invertible", "topological", or "categorical" symmetries. A foundational result in [FFRS] is that a huge class of 2-dimensional conformal field theories are orbifolds of one another, but that those from group actions do not suffice. More recently, \(n\)-dimensional orbifold data have appeared (at least behind the scenes) as non-invertible symmetries of \((n-1)\)-dimensional quantum field theories. A mathematically rigorous account of the orbifold construction is lacking for general quantum field theories \({\cal Z}\). 
However, much about (the gauging of) non-invertible symmetries can be separated from \({\cal Z}\) and discussed only with respect to a higher-dimensional topological quantum field theory, cf. e.g. [FMT]. Over the last decade or so, the orbifold construction has been developed rigorously for TQFTs of arbitrary dimension \(n\). The purpose of this chapter is to give an overview of this theory, as well as some of its applications for \(n\leqslant 4\). Further motivations include the phenomenon of "gauging topological phases of matter" (via the relation between the latter and invertible TQFTs, cf. [FH, Yon]) and the higher representation theory of orbifold data mentioned below.

More precisely, we will work with (non-extended) defect TQFTs (reviewed in Section 3, after the less structure-rich closed TQFTs in Section 2), formalised as symmetric monoidal functors on stratified and labelled bordism categories. Such TQFTs are expected (in low dimension: known) to give rise to higher categories, and one finds that orbifold data are naturally certain types of algebras in this context. This is explained in Section 4.1, along with the basics of Pachner moves, their algebraic incarnations in orbifold data, and several examples. The construction of the (closed) orbifold TQFT \({\cal Z}_{\cal A}\) from a defect TQFT \({\cal Z}\) and an orbifold datum \({\cal A}\) is described in Section 4.2. This is then illustrated by identifying state sum models as well as gaugings of (higher) symmetry group actions as special cases of the construction, and we give several examples that are of neither extreme type. We stress that a higher-categorical approach is often convenient, but it is not necessary to construct \({\cal Z}_{\cal A}\).

Since orbifold data can be viewed as (higher) algebras, it is natural to consider their higher representation category, called "orbifold completion". In Section 4.3 we describe how this allows one to further generalise the orbifold construction to output defect TQFTs (understood in detail for \(n\leqslant 3\)), which in turn provides a more powerful technology for applications, and to develop the theory conceptually.

It is worth noting that the orbifold construction has so far mostly been developed for oriented TQFTs, i.e. the bordisms on which they evaluate come with the structure of an orientation. It is expected that analogous constructions can be worked out for other tangential structures - a promising avenue for future research. For example, the "condensation monads" of [GJF] are thought to be "framed" variants of orbifold data, and one can build spin quantum field theories from oriented ones in an orbifold-esque way, see [NR, RSW]. Relatedly but separately, it would be interesting to build a general theory of orbifolds for extended TQFTs, defined as symmetric monoidal functors on higher bordism categories. At least in the fully extended case, the setting of \((\infty,n)\)-categories and the ideas described in [Lu, Sect. 4.3] seem to make this a technical, but non-conceptual challenge, given the simplicial origins of orbifold data. The case of orbifolds from group actions of once-extended TQFTs is described in [SW].

**Acknowledgements.** I am grateful to all the colleagues with whom I have collaborated and from whom I have learned in connection with the topic of orbifold TQFT, especially to V. Mulevicius, I. Runkel, and G. Schaumann, who also contributed insightful comments on an earlier version of this manuscript. Moreover, I thank B. Bartlett, C. Lieberum, C. Meusburger, L. Muller, and C.
Schweigert for further helpful comments, and I acknowledge support from the DFG Heisenberg Programme. ## 2 Reminder on closed TQFTs A quantum field theory can be thought of as a structure-preserving map from spacetime to the algebraic description of physical processes therein. By discarding most of the geometry while retaining all of the topology of spacetime, one can define an \(n\)-dimensional closed topological quantum field theory (TQFT) for topological structure \(X\) and with values in \(\mathcal{C}\) to be a symmetric monoidal functor \[\mathcal{Z}\colon\mathrm{Bord}^{X}_{n,n-1}\longrightarrow\mathcal{C}\,. \tag{2.1}\] The codomain \(\mathcal{C}\) is often taken to be the category of (super) vector spaces, while the domain of \(\mathcal{Z}\) has smooth \((n-1)\)-dimensional closed \(X\)-manifolds as objects and (equivalence classes of) smooth \(n\)-dimensional \(X\)-bordisms as morphisms. The composition of bordisms \(M\colon E\longrightarrow E^{\prime}\) and \(N\colon E^{\prime}\longrightarrow E^{\prime\prime}\) is the glueing \(N\sqcup_{E^{\prime}}M\), the monoidal structure on \(\mathrm{Bord}^{X}_{n,n-1}\) is given by the disjoint union, and the symmetric braiding is obtained from its universal property. We refer to [At, Ko, T'ire2] for details, and illustrate the structure of the bordism category with an example for \(n=2\): (2.2) The topological structure \(X\) may e.g. be chosen to be a spin or string structure, a framing, or a principal bundle for some group. Here we will exclusively consider orientations. In this case classification results are available for \(n\leqslant 3\): for \(n=1\) the groupoid of symmetric monoidal functors \(\mathrm{Bord}^{\mathrm{or}}_{n,n-1}\longrightarrow\mathcal{C}\) is equivalent to that of dualisable objects in \(\mathcal{C}\), for \(n=2\) the classification is via commutative Frobenius algebras in \(\mathcal{C}\), while for \(n=3\) via "\(J\)-algebras"; see [Ju]. A certain type of algebra internal to any given rigid monoidal category \(\mathcal{C}\) plays an important role for many examples below, namely \(\Delta\)-separable symmetric Frobenius algebras. They consist of an object \(A\) together with (co)multiplication maps \(\mu\colon A\otimes A\longrightarrow A\) and \(\Delta\colon A\longrightarrow A\otimes A\) as well as (co)units \(\mathbb{1}\longrightarrow A\) and \(A\longrightarrow\mathbb{1}\), subject to the defining relations (see e.g. [FS] for a review on such algebras and their modules, and [Ca] for the string diagrammatic calculus) (2.3) An algebra is separable if its multiplication has a right inverse as a bimodule map; a Frobenius algebra is \(\Delta\)-separable if this is given by the comultiplication. **Example 2.1**.: 1. A dualisable object in \(\mathrm{Vect}_{\Bbbk}\) or \(\mathrm{sVect}_{\Bbbk}\) is precisely a finite-dimensional (super) \(\Bbbk\)-vector space. 2. The centre of a separable symmetric Frobenius algebra in \(\mathcal{C}\) is a commutative Frobenius algebra, describing a state sum model in dimension \(n=2\), cf. [LP] and [Mul, Sect. 3.2]. The origin of this term is reviewed in Section 4.2. Non-semisimple examples in \(\mathrm{sVect}_{\mathbb{C}}\) include B-twisted sigma models, where the commutative Frobenius algebras are constructed from Dolbeault cohomology of Calabi-Yau manifolds. 
Non-semisimple examples in \(\mathrm{Vect}_{\Bbbk}\) are Landau-Ginzburg models \(\mathcal{Z}_{W}\) with underlying commutative Frobenius algebra \(\mathrm{Jac}_{W}=\Bbbk[x_{i}]/(\partial_{x_{i}}W)\) for some polynomial \(W\) such that \(\mathrm{Jac}_{W}\) is finite-dimensional, see e.g. [HKK+, Ch. 16]. 3. State sum models in \(n=3\) dimensions are constructed from spherical fusion categories [TViro, BW, TVire1]. They turn out to be special cases of \(\mathrm{Vect}_{\Bbbk}\)-valued Reshetikhin-Turaev models [RT, Tu, Ba, TVire2] built from modular fusion categories, which are however often "anomalous" in the sense that the bordism category needs refinement, see [Tu]. Conjecturally, Rozansky-Witten models [RoW] built from compact holomorphic symplectic manifolds are also 3-dimensional TQFTs valued in \(\mathrm{sVect}_{\mathbb{C}}\). 4. State sum models in \(n=4\) dimensions are constructed from spherical fusion 2-categories \(\mathfrak{S}\)[DR], see also Example 4.11(4) below. Crane-Yetter models are the special case when \(\mathfrak{S}=\mathrm{B}\mathcal{M}\) is the delooping of a modular fusion category \(\mathcal{M}\), i.e. the 2-category with a single object \(*\) and \(\mathrm{End}_{\mathrm{B}\mathcal{M}}(*)=\mathcal{M}\). Conjecturally, topological twists of \(\mathcal{N}=4\) supersymmetric Yang-Mills theory are also 4-dimensional TQFTs [KW]. 5. For any \(n\), Dijkgraaf-Witten models [DW, FQ] are \(n\)-dimensional TQFTs built from finite (gauge) groups. They are special cases of state sum models. A simpler class of examples are Euler TQFTs \(\mathcal{Z}_{\psi}^{\mathrm{eu}}\colon\mathrm{Bord}_{n,n-1}^{\mathrm{or}} \longrightarrow\mathrm{Vect}_{\Bbbk}\) for some \(\psi\in\Bbbk^{\times}\). They assign \(\Bbbk\) to every object, and on bordisms \(M\) we have \[\mathcal{Z}_{\psi}^{\mathrm{eu}}(M):=\psi^{\chi(M)-\frac{1}{2}\chi(\partial M)}\] (2.4) where \(\chi\) is the Euler characteristic, see e.g. [Qu]. The trivial closed TQFT is the special case with \(\psi=1\). ## 3 Defect TQFTs The kind of "defect" which motivates "defect TQFT" is rooted in physics, where it describes a localised substance or physical system which is typically radically different from its immediate surroundings, and it often separates other regions, or mediates between them. Domain walls in ferromagnets are a standard (non-topological) example. We formalise this in terms of bordism categories \(\operatorname{Bord}_{n,n-1}^{\operatorname{def}}(\mathds{D})\) whose morphisms are represented by bordisms which are stratified into submanifolds that in turn are labelled by prescribed "defect data". The main idea is captured by the following example for \(n=2\): (3.1) Note that all top-dimensional strata have an orientation induced from the underlying manifold, while lower-dimensional strata come with their own orientation. Every stratum is also labelled: the prescribed set of defect data \(\mathds{D}\) consists of sets \(D_{0},D_{1},\dots,D_{n}\), and every \(j\)-stratum is labelled by an element in \(D_{j}\). Moreover, \(\mathds{D}\) also comes with adjacency rules about how strata may meet; for example in (3.1) the "source" and "target" of \(X_{1}\in D_{1}\) are \(u_{1}\in D_{2}\) and \(u_{2}\in D_{2}\), respectively, and the 1-strata adjacent to the 0-stratum labelled by \(\varphi\in D_{0}\) must carry labels \(X_{2},X_{4},X_{5}\in D_{1}\) with the given cyclic order and orientations. 
The set \(D_{n}\) is interpreted as a set of closed (or "bulk") TQFTs, while elements of \(D_{n-k}\) label "defects" of codimension \(k\); in particular, \(D_{n-1}\) is comprised of "domain walls" (including "boundary conditions", which are domain walls that have the trivial bulk TQFT on one side). The construction of the symmetric monoidal category \(\operatorname{Bord}_{n,n-1}^{\operatorname{def}}(\mathds{D})\) is made precise in [11, Sect. 2.1-2.2] (see [11, DKR] for important earlier work on \(n=2\), and [12] for a review), here we point out only a few more basic aspects. Objects are \((n-1)\)-dimensional closed manifolds \(E\) with \(\mathds{D}\)-labelled stratification such that \(j\)-strata of \(E\) are boundary components of \((j+1)\)-strata of bordisms \(M\) with (co)domain \(E\) that end transversally on \(\partial M\). By definition, all strata \(\sigma\) of a bordism \(M\) are open away from the boundary of \(M\), i.e. \(\partial\sigma=\sigma\cap\partial M\), and all strata of objects are open. We only admit stratifications which locally are either cylinders over (lower-dimensional) defect balls, or cones over defect spheres. For example, every 2-dimensional defect bordism locally looks like one of these neighbourhoods: (3.2) where \(u_{i}\in D_{2}\), \(X_{j}\in D_{1}\), \(\varphi\in D_{0}\), and \(m\in\mathds{Z}_{\geqslant 0}\). Again, which labels are allowed in such neighbourhoods is encoded in the adjacency rules of \(\mathds{D}\). We refer to [10] for the general case, and to [11, CMS] for details on the cases \(n=2\) and \(n=3\); see also Remark 3.11 for a higher categorical source of defect data. For a fixed choice of symmetric monoidal category \(\mathcal{C}\) and defect data \(\mathds{D}\) we have **Definition 3.1**.: An \(n\)-dimensional defect TQFT is a symmetric monoidal functor \[\mathcal{Z}\colon\mathrm{Bord}^{\mathrm{def}}_{n,n-1}(\mathds{D})\longrightarrow \mathcal{C}\,. \tag{3.3}\] If \(D_{n}=\{*\}\) is a one-element set and \(D_{j}=\varnothing\) for \(j\leqslant n-1\), then a defect TQFT (3.3) reduces to a closed TQFT (2.1) (after forgetting the label \(*\)). Slightly more generally, for arbitrary \(\mathds{D}\) we have a non-full embedding \(\mathrm{Bord}^{\mathrm{or}}_{n,n-1}\longleftrightarrow\mathrm{Bord}^{ \mathrm{def}}_{n,n-1}(\mathds{D})\) for every \(u\in D_{n}\), which views any bordism as trivially stratified and \(u\)-labelled. Pre-composition produces closed TQFTs from defect TQFTs. **Example 3.2**.: For \(n=1\) we may choose \(D_{1}\) to consist of finite-dimensional \(\Bbbk\)-vector spaces, \(D_{0}\) of their linear maps \(f\colon V\longrightarrow W\), while the adjacency rules simply read off the (co)domain \(V\) or \(W\) of \(f\). Then immersing defect bordisms into \(\mathds{R}^{2}\) and interpreting them as string diagrams in \(\mathrm{Vect}_{\Bbbk}\), e.g. (3.4) gives a defect TQFT \(\mathrm{Bord}^{\mathrm{def}}_{1,0}(\mathds{D})\longrightarrow\mathrm{Vect}_{ \Bbbk}\). **Example 3.3**.: For \(n=2\), fix a rigid symmetric monoidal category \(\mathcal{C}\). 1. For \(D_{2}^{\mathrm{triv}_{2}}=\{*\}\), \(D_{1}^{\mathrm{triv}_{2}}=\mathrm{Ob}(\mathcal{C})\) and \(D_{0}^{\mathrm{triv}_{2}}=\mathrm{Mor}(\mathcal{C})\), the trivial defect TQFT \(\mathcal{Z}_{2}^{\mathrm{triv}}\) simply disregards all 2-strata and interprets the remaining defects as a string diagram in \(\mathcal{C}\), e.g. 
\[\mathcal{Z}_{2}^{\mathrm{triv}}\colon\raisebox{-10.0pt}{\includegraphics[width=100.0pt]{fig/2-triv}}\mapsto\left(\varphi\otimes\mathrm{ev}_{V}\colon X\otimes V ^{\dagger}\otimes V\longrightarrow Y\otimes Z^{\dagger}\right).\] (3.5) Here in general both the symmetric monoidal structure and the rigidity of \(\mathcal{C}\) are needed. Note that \(\mathcal{Z}_{2}^{\mathrm{triv}}\) is "trivial" only on 2-strata. 2. We define \(D_{2}^{\mathrm{ss}_{2}}\) to be the set of separable symmetric Frobenius algebras in \(\mathcal{C}\), while \(D_{1}^{\mathrm{ss}_{2}}\) and \(D_{0}^{\mathrm{ss}_{2}}\) are their finite-dimensional bimodules and bimodule maps, respectively. By combining closed state sum models with \(\mathcal{Z}_{2}^{\mathrm{triv}}\) above, one obtains the defect state sum model \(\mathcal{Z}_{2}^{\mathrm{ss}}\colon\mathrm{Bord}^{\mathrm{def}}_{2,1}(\mathds{D }^{\mathrm{ss}_{2}})\longrightarrow\mathcal{C}\), see e.g. [DKR]. Below in Section 4.3 we will be in a better position to give details on \(\mathcal{Z}_{2}^{\rm ss}\), as a special case of the orbifold construction. In particular, by restricting \(\mathcal{Z}_{2}^{\rm ss}\) to defect bordisms all of whose \(2\)-strata are labelled by the trivial Frobenius algebra \(\mathbb{1}\in\mathcal{C}\), we essentially recover \(\mathcal{Z}_{2}^{\rm riv}\). **Example 3.4**.: For \(n=2\) and \(\mathcal{C}=\operatorname{Vect}_{\Bbbk}\), Landau-Ginzburg models give rise to a defect TQFT \(\mathcal{Z}^{\rm LG}\) whose bulk theory label set \(D_{2}^{\rm LG}\) consists of polynomials as in Example 2.1(2), but depending only on an even number of variables, cf. [CMu, Ca, CMoMo]. The sets \(D_{1}^{\rm LG}\) and \(D_{0}^{\rm LG}\) are made up of matrix factorisations and their maps up to homotopy, respectively, as reviewed e.g. in [CR]. **Example 3.5**.: (1) For \(n=3\), \(\mathcal{C}=\operatorname{Vect}_{\Bbbk}\) and an algebraically closed field \(\Bbbk\), Reshetikhin-Turaev models associated to a modular fusion category \(\mathcal{M}\) can be lifted to a defect TQFT \(\mathcal{Z}_{\mathcal{M}}^{\rm RT}\)[KS, FSV, CRS2, KMRS, CMu]. \(D_{3}^{\rm RT}\) consists of pairs \((\mathcal{M},A)\) where \(A\) is a commutative \(\Delta\)-separable Frobenius algebra in \(\mathcal{M}\); the case \(A=\mathbb{1}\) on trivially stratified bordisms recovers closed Reshetikhin-Turaev models. The surface defect label set \(D_{2}^{\rm RT}\) is made up of \(\Delta\)-separable symmetric Frobenius algebras \(F\) which have simultaneous bimodule structures, \(D_{1}^{\rm RT}\) consists of multimodules over such \(F\), and \(D_{0}^{\rm RT}\) of multimodule maps. One is naturally led to this somewhat intricate structure by carrying out the orbifold construction, see Section 4.3. 2. A special case is the trivial defect TQFT \(\mathcal{Z}_{3}^{\rm riv}\), which may be identified with (the Euler completion of, cf. Example 3.6) \(\mathcal{Z}_{\operatorname{vect}_{\Bbbk}}^{\rm RT}\) restricted to defect bordisms all of whose \(3\)-strata are labelled with the trivial modular category \(\operatorname{vect}_{\Bbbk}\) and trivial algebra \(\Bbbk\), see [CRS3]. Again, \(\mathcal{Z}_{3}^{\rm riv}\) is quite non-trivial away from top-dimensional strata, where it is essentially given by \(\mathcal{Z}_{3}^{\rm ss}\). The \(3\)-dimensional defect state sum model \(\mathcal{Z}_{3}^{\rm ss}\) is situated "between" \(\mathcal{Z}_{3}^{\rm riv}\) and \(\mathcal{Z}_{\mathcal{M}}^{\rm RT}\). This is explained in Section 4.3 as another special case of the orbifold construction. 
**Example 3.6**.: For arbitrary dimension \(n\), the Euler TQFT \(\mathcal{Z}_{\psi}^{\rm eu}\) of Example 2.1(\(n\)) can be generalised as follows [CRS1]. Fix \(\Psi:=(\psi_{1},\ldots,\psi_{n})\in(\Bbbk^{\times})^{n}\). Let \(\operatorname{Bord}_{n,n-1}^{\rm def}\) denote the category of stratified bordisms without any labels for strata. The Euler defect TQFT \(\mathcal{Z}_{\Psi}^{\rm eu}\colon\operatorname{Bord}_{n,n-1}^{\rm def} \longrightarrow\operatorname{Vect}_{\Bbbk}\) assigns \(\Bbbk\) to every object, while for a defect bordism \(M\) we set \[\mathcal{Z}_{\Psi}^{\rm eu}(M)=\prod_{j=1}^{n}\ \prod_{j\text{-strata}\, \sigma_{j}\,\subset\,M}\psi_{j}^{\chi(\sigma_{j})-\frac{1}{2}\chi(\partial \sigma_{j})}\,. \tag{3.6}\] For an arbitrary defect TQFT \(\mathcal{Z}\), its Euler completion \(\mathcal{Z}^{\odot}\) is constructed by allowing for extra insertions of invertible point defects \(\psi_{j}\) on all \(j\)-strata, in complete analogy to (3.6). This is made precise in [CRS1, Sect. 2.5], where it is also shown that \((\mathcal{Z}^{\odot})^{\odot}\cong\mathcal{Z}^{\odot}\) and \(\mathcal{Z}^{\odot}\otimes\mathcal{Z}_{\Psi}^{\rm eu}\cong\mathcal{Z}^{\odot}\) for every defect TQFT \(\mathcal{Z}\). All of the above examples have in common that their underlying defect data can be extracted from a corresponding higher category. In general, it is expected that a defect TQFT \(\mathcal{Z}\) as in (3.3) gives rise to a \(\mathcal{C}\)-enriched \(n\)-category \(\mathcal{D}_{\mathcal{Z}}\) with coherent adjoints for all morphisms. These higher categories in practice often admit a symmetric monoidal structure, making them natural codomains of fully extended TQFTs, cf. [Lu, Ka] and [CMoMo, Sect. 1]. Objects of \(\mathcal{D}_{\mathcal{Z}}\) are bulk theories, i.e. elements of \(D_{n}\), \(k\)-morphisms with \(1\leqslant k\leqslant n-1\) are defects of codimension \(k\), i.e. elements of \(D_{n-k}\) and more generally \((n-k)\)-fold cylinders over defect \(k\)-balls, while \(n\)-morphisms (contain \(D_{0}\) and) are obtained by evaluating \(\mathcal{Z}\) on defect \((n-1)\)-spheres. Adjunctions come from orientation reversal and "folding". There is also a notion of Euler completion \(\mathcal{D}_{\mathcal{Z}}^{\odot}\) on the level of higher categories, equivalent to \(\mathcal{D}_{\mathcal{Z}^{\odot}}\). For \(n=2\) and \(\mathcal{C}=\operatorname{Vect}_{\Bbbk}\) it is a rigorous result that \(\mathcal{D}_{\mathcal{Z}}\) is a (planar) pivotal 2-category, i.e. all 1-morphisms \(X\colon a\longrightarrow b\) have coherently isomorphic left and right adjoints \({}^{\dagger}\!X\cong X^{\dagger}\colon b\longrightarrow a\) (see e.g. [Ca] for a review): **Theorem 3.7** ([Dkr]).: Every defect TQFT \(\mathcal{Z}\colon\operatorname{Bord}_{2,1}^{\operatorname{def}}(\mathbb{D}) \longrightarrow\operatorname{Vect}_{\Bbbk}\) gives rise to a \(\Bbbk\)-linear pivotal 2-category \(\mathcal{D}_{\mathcal{Z}}\). **Example 3.8**.: 1. The pivotal 2-category associated to the \(\mathcal{C}\)-valued trivial defect TQFT \(\mathcal{Z}_{2}^{\operatorname{triv}}\) is the delooping of the full subcategory \(\mathcal{C}^{\operatorname{d}}\) of all dualisable objects in \(\mathcal{C}\), \(\mathcal{D}_{\mathcal{Z}_{2}^{\operatorname{triv}}}=\operatorname{B}\! \mathcal{C}^{\operatorname{d}}\), whose single object \(*\) is the single element of \(D_{2}^{\operatorname{triv}}\). 2. 
For state sum models, \(\mathcal{D}_{\mathcal{Z}_{2}^{\operatorname{ss}}}\) is the 2-category \(\operatorname{ssFrob}(\mathcal{C})\) of separable symmetric Frobenius algebras, finite-dimensional bimodules and bimodule maps in \(\mathcal{C}\), cf. [DKR]. 3. The 2-category \(\mathcal{L}\mathcal{G}_{\Bbbk}\) associated to Landau-Ginzburg models admits a natural pivotal structure on its full subcategory of polynomials in an even number of variables [CMu]. (All of \(\mathcal{L}\mathcal{G}_{\Bbbk}\) is only "graded pivotal", cf. [CMu, Def. 7.1], this relates to the fact that Landau-Ginzburg models are spin TQFTs [CS].) For \(n=3\) and \(\mathcal{C}=\operatorname{Vect}_{\Bbbk}\) the higher category associated to a defect TQFT has also been rigorously constructed. Recall [GPS, Gu] that every 3-category is equivalent to a Gray category \(\mathcal{T}\), i.e. a category enriched in 2-categories and strict 2-functors with the Gray tensor product. Hence only the interchange law is not necessarily strict in \(\mathcal{T}\). A Gray category with duals [BMS] then has pivotal Hom 2-categories and compatible adjoints for all 1-morphisms. **Theorem 3.9** ([Cms]).: Every defect TQFT \(\mathcal{Z}\colon\operatorname{Bord}_{3,2}^{\operatorname{def}}(\mathbb{D}) \longrightarrow\operatorname{Vect}_{\Bbbk}\) gives rise to a \(\Bbbk\)-linear Gray category with duals \(\mathcal{D}_{\mathcal{Z}}\). **Example 3.10**.: 1. The 3-category associated to the trivial defect TQFT \(\mathcal{Z}_{3}^{\operatorname{triv}}\) valued in \(\mathcal{C}\) is the delooping of \(\mathcal{D}_{\mathcal{Z}_{2}^{\operatorname{su}}}\), i.e. \(\mathcal{D}_{\mathcal{Z}_{3}^{\operatorname{triv}}}=\operatorname{B}\operatorname {ssFrob}(\mathcal{C})\). 2. The 3-category associated to the defect state sum model \(\mathcal{Z}_{3}^{\rm ss}\) is described in Section 4.3 below. The 3-category \(\mathrm{sFus}_{\Bbbk}\) of spherical fusion categories, bi-module categories with trace, bimodule functors and natural transformations introduced in [Sc] is a sub-3-category of \(\mathcal{D}_{\mathcal{Z}_{3}^{\rm ss}}\). 3. The 3-category associated to defect Reshetikhin-Turaev theory \(\mathcal{Z}_{\mathcal{M}}^{\rm RT}\) was constructed in [FSV, KMRS, CMu]. Its objects and \(k\)-morphisms are given by the sets \(D_{3}^{\rm RT}\) and \(D_{3-k}^{\rm RT}\) of Example 3.5(1), respectively, see Section 4.3 below. 4. Conjecturally, the 3-category associated to Rozansky-Witten models is the one described in [KRS, KR]. The pivotal symmetric monoidal structure of the homotopy sub-2-category for affine target spaces \(T^{*}\mathds{C}^{n}\) is worked out rigorously in [BCR, BCFR]. **Remark 3.11**.: The defect data \(\mathds{D}\) of a TQFT \(\mathcal{Z}\colon\mathrm{Bord}_{n,n-1}^{\mathrm{def}}(\mathds{D})\longrightarrow \mathcal{C}\) can essentially be reconstructed from the associated higher category \(\mathcal{D}_{\mathcal{Z}}\). Conversely, from any \(n\)-category with duals \(\mathcal{D}\) one naturally extracts defect data \(\mathds{D}^{\mathcal{D}}\) whose label sets \(D_{j}^{\mathcal{D}}\) consist of \((n-j)\)-cells in \(\mathcal{D}\), and the adjacency rules are supplied by source and target maps as well as other composition rules in \(\mathcal{D}\). 
## 4 Orbifolds of defect TQFTs

The orbifold construction takes as input a defect TQFT \(\mathcal{Z}\colon\mathrm{Bord}_{n,n-1}^{\mathrm{def}}(\mathds{D})\longrightarrow\mathcal{C}\) for arbitrary \(n\geqslant 1\), as well as a set of defect labels \(\mathcal{A}_{j}\in D_{j}\) for \(j\in\{1,\ldots,n\}\) and \(\mathcal{A}_{0}^{+},\mathcal{A}_{0}^{-}\in D_{0}\) that are subject to constraints described below. As output it produces a closed TQFT \(\mathcal{Z}_{\mathcal{A}}\colon\mathrm{Bord}_{n,n-1}^{\mathrm{or}}\longrightarrow\mathcal{C}\) which on any given bordism \(M\) is constructed in three main steps:

1. choose a nice stratification \(M^{t}\) of \(M\),
2. label the strata of \(M^{t}\) with \(\mathcal{A}\), obtaining a morphism \(M^{t,\mathcal{A}}\) in \(\mathrm{Bord}_{n,n-1}^{\mathrm{def}}(\mathds{D})\),
3. define \(\mathcal{Z}_{\mathcal{A}}(M)\) by taking the colimit of \(\mathcal{Z}(M^{t,\mathcal{A}})\) over all stratifications \(M^{t}\).

This is made precise below. In Section 4.1 we take "nice stratifications" to be Poincare dual to oriented triangulations, and explain how independence of choice of triangulation imposes (algebraic) defining conditions on "orbifold data" \(\mathcal{A}\), thus covering steps (1) and (2) from above. In Section 4.2, we describe step (3) in detail, and we provide several examples of orbifold TQFTs \(\mathcal{Z}_{\mathcal{A}}\), including state sum models and gaugings of (higher) symmetry groups as special cases. Finally, in Section 4.3, we explain how the above orbifold construction can be generalised, at least for \(n\leqslant 3\), to produce a single orbifold defect TQFT that subsumes all the orbifold closed TQFTs \(\{\mathcal{Z}_{\mathcal{A}}\}_{\mathcal{A}}\) as well as all their defects. Algebraically, this is captured by the \(n\)-category \((\mathcal{D}_{\mathcal{Z}})_{\mathrm{orb}}\) of representations of all orbifold data \(\mathcal{A}\).

### 4.1 Orbifold data

There are several contenders for "nice stratifications" of bordisms in relation to orbifold TQFTs. For example, the "admissible skeleta" of [13, 14] are a good choice in practice for low dimensions. Here we will exclusively consider the more traditional choice of stratifications that are Poincare dual to triangulations, which are particularly useful for the general development of the theory.

Recall that \(\Delta^{n}:=\{\sum_{i=1}^{n+1}t_{i}e_{i}\,|\,t_{i}\in\mathds{R}_{\geqslant 0},\ \sum_{i}t_{i}=1\}\subset\mathds{R}^{n+1}\) is the standard \(n\)-simplex, where \(\{e_{i}\}_{i}\) is the standard basis of \(\mathds{R}^{n+1}\). See e.g. [15, Sect. 3.1] for more details on the (affine) simplicial notions that we use in the following. By an oriented \(n\)-simplex we mean an \(n\)-simplex with a total order on the set of its vertices up to equivalence (given by even permutations of vertices). For example, (4.1) represent oriented \(n\)-simplices for \(n\in\{1,2,3\}\), and similarly for the oppositely oriented simplices \(\Delta^{n}_{-}\) that are represented, say, by swapping (only) \(1\) and \(2\). Note that before taking equivalence classes, a total order on vertices induces orientations on all subsimplices. To avoid clutter, we usually denote oriented \(n\)-simplices simply by \(\Delta^{n}\), and we note that the boundary \(\partial\Delta^{n}\) of \(\Delta^{n}\) consists of precisely \(n+1\) \((n-1)\)-simplices.
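For concreteness, here is a small worked illustration of these conventions (our own spelling-out, using the standard simplicial sign rule): for \(n=2\) and vertex order \(1<2<3\), the two orientation classes are
\[
\Delta^{2}_{+}=\big\{(1,2,3),\,(2,3,1),\,(3,1,2)\big\}\,,\qquad\Delta^{2}_{-}=\big\{(2,1,3),\,(1,3,2),\,(3,2,1)\big\}\,,
\]
and the three 1-simplices in \(\partial\Delta^{2}\) carry the induced orientations familiar from the simplicial boundary formula \(\partial[1,2,3]=[2,3]-[1,3]+[1,2]\).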
A simplicial complex \(C\) is a finite collection of simplices which is closed with respect to taking faces, and such that for all \(\delta,\delta^{\prime}\) in \(C\), \(\delta\cap\delta^{\prime}\) is either empty or a face of \(\delta\) and \(\delta^{\prime}\). Since \(\Delta^{n}\) can naturally be viewed as a topological space \(|\Delta^{n}|\) (with topology induced by \(\mathds{R}^{n}\)), for any simplicial complex \(C\) we get a topological space \(|C|\), called its geometric realisation. By definition, an (oriented) triangulation\(t\) of an \(n\)-manifold \(M\) is a simplicial complex \(C\) with a total ordering on its vertices, together with a homeomorphism \(|C|\longrightarrow M\); we also ask that all simplices \(\delta\) satisfy either \(\delta\subset\partial M\) or \(\delta^{\circ}\cap\partial M=\varnothing\). It is customary to depict \(t\) as the image of \(|C|\) in \(M\), thus "approximating" \(M\) by a mesh of \(n\)-simplices glued along their faces. For example, (4.2) where we suppress orientations, and we do not show the triangulation on the rear side of the torus. An important result is that every two triangulations of a given piecewise linear (PL) manifold can be related by a finite sequence of "moves" that make only local changes. To make this precise on the level of an \(n\)-dimensional simplicial complex \(C\), let \(K\subset C\) be an \(n\)-dimensional subcomplex together with an isomorphism \(\varphi\colon K\longrightarrow F\) to an \(n\)-dimensional subcomplex \(F\subset\partial\Delta^{n+1}\) with precisely \(k\)\(n\)-simplices. The associated \(k\)-\((n+2-k)\)Pachner move is the replacement \[C\longmapsto\big{(}C\setminus K\big{)}\cup_{\varphi}\big{(}\partial\Delta^{n+ 1}\setminus\overset{\circ}{F}\big{)} \tag{4.3}\] which exchanges \(K\) by the \(n+2-k\) faces on "the other side of \(\partial\Delta^{n+1}\)". Pachner's theorem [Pa] then states that if two triangulated PL manifolds are PL isomorphic, then there exists a finite sequence of Pachner moves between them. From this it is straightforward to obtain an analogous result for oriented triangulations, cf. [CRS1, Prop. 3.3]. From now on we assume all triangulations to be oriented. **Example 4.1**.: 1. There is only one type of Pachner move (and its inverse) for \(n=1\): \(\partial\Delta^{2}\) has precisely three faces, and the 2-1 Pachner move replaces two 1-simplices joined at one vertex with a single 1-simplex (opposite to that vertex in \(\partial\Delta^{2}\)), (4.4) where \(a,b,c\) are pairwise distinct real numbers that give a total order (and induced orientation). Note that there is one oriented Pachner move for each of the three relative orders that \(b\) can have with respect to \(a<c\), and there are three further moves for \(a>c\). 2. In dimension \(n=2\), there are 2-2 and 1-3 Pachner moves, corresponding to partitions of the four faces of \(\partial\Delta^{3}\): (4.5) 3. In dimension \(n=3\), there are 2-3 and 1-4 Pachner moves, corresponding to partitions of the five faces of \(\partial\Delta^{4}\): (4.6) We can now explain what we mean by "nice stratification" in step (1) of the orbifold construction. Let \(M\) be an oriented \(n\)-bordism with an oriented triangulation \(t\). By definition, the \(j\)-strata of the Poincare dual stratification\(M^{t}\) are transversal to the \((n-j)\)-simplices of \(t\), and oriented such that for all strata \(\sigma\) in \(M^{t}\) the orientation of \(\sigma\) together with that of its dual simplex (in that order) gives the orientation of \(M\). 
Moreover, if a stratum \(\sigma\) intersects \(\partial M\), it does so transversally. Hence if \(M\) is \(|\Delta^{1}_{+}|\), \(|\Delta^{2}_{+}|\), or \(|\Delta^{3}_{+}|\), then \(M^{t}\) is given by (4.7) respectively. Another example (suppressing orientations to avoid clutter) is: \[(M,t)= \tag{4.8}\] **Example 4.2**.: The Poincare duals of \(n\)-dimensional Pachner moves relate different stratifications of the \(n\)-ball with identical boundary stratification: 1. The duals of the three 1-dimensional Pachner moves for \(a<c\) in (4.4) are \[\begin{CD}+&+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\begin{CD}+\\ -\bullet&\bullet\end{CD}\quad\ 3. The dual of one the oriented 2-3 Pachner moves in (4.6) is (4.11) and analogously for the other dual 2-3 and 1-4 moves, which are harder to picture. We are now ready to give the first definition of "orbifold datum", namely as a collection of defects that encode invariance under choice of triangulation (see [11, Def. 3.5] for details): **Definition 4.3**.: An orbifold datum \(\mathcal{A}\) for a defect \(\operatorname{TQFT}\mathcal{Z}\colon\operatorname{Bord}^{\operatorname{def}}_{ n,n-1}(\mathds{D})\longrightarrow\mathcal{C}\) consists of elements \(\mathcal{A}_{j}\in D_{j}\) for all \(j\in\{1,\dots,n\}\) and \(\mathcal{A}_{0}^{+},\mathcal{A}_{0}^{-}\in D_{0}\) subject to: Compatibility: \(\mathcal{A}_{j}\) can label \(j\)-strata in the dual of an (oriented) triangulation, such that all adjacent \(k\)-strata can be labelled by \(\mathcal{A}_{k}\) for \(j\in\{1,\dots,n\}\), and analogously for \(\mathcal{A}_{0}^{\pm}\) labelling the 0-strata dual to the two oriented \(n\)-simplices. Invariance: Let \(B\) and \(B^{\prime}\) be two stratified \(n\)-balls of a dual Pachner move. Viewing them as bordisms with domain \(\varnothing\), and labelling their strata with \(\mathcal{A}\) gives two morphisms \(B_{\mathcal{A}},B^{\prime}_{\mathcal{A}}\) in \(\operatorname{Bord}^{\operatorname{def}}_{n,n-1}(\mathds{D})\), and we demand \[\mathcal{Z}(B_{\mathcal{A}})=\mathcal{Z}(B^{\prime}_{\mathcal{A}})\,. \tag{4.12}\] **Remark 4.4**.: Since there are only finitely many Pachner moves, there are only finitely many invariance conditions on orbifold data. Moreover, the compatibility condition implies that \(\mathcal{A}_{n-1}\)-labelled \((n-1)\)-strata separate two \(\mathcal{A}_{n}\)-labelled \(n\)-strata (because the dual simplex \(\Delta^{1}\) has two faces), while \(\mathcal{A}_{n-2}\)-labelled \((n-2)\)-strata have three adjacent \(\mathcal{A}_{n-1}\)-strata (because \(\Delta^{2}\) has three faces). More generally, \(\mathcal{A}_{j}\)-labelled \(j\)-strata have precisely \(n-j+1\) adjacent \((j+1)\)-strata, because \(\Delta^{n-j}\) has \(n-j+1\) faces. 
**Example 4.5**.: (1) For an orbifold datum \(\mathcal{A}=(\mathcal{A}_{1},\mathcal{A}_{0}^{+},\mathcal{A}_{0}^{-})\) for a 1-dimensional \(\operatorname{TQFT}\mathcal{Z}\), the invariance condition from the first move in (4.9) reads (4.13) where we suppress the labels \({\cal A}_{1}\) for 1-strata. Hence \[{\cal Z}({\cal A}_{0}^{+}):={\cal Z}(\raisebox{-1.0pt}{\includegraphics[scale=0. 5]{fig/2-3-1}}^{+}_{{\cal A}_{0}^{+}})\colon{\cal Z}({\cal A}_{1})\longrightarrow{ \cal Z}({\cal A}_{1}):={\cal Z}(\raisebox{-1.0pt}{\includegraphics[scale=0. 5]{fig/2-3-1}}^{+}_{{\cal A}_{1}})\in{\cal C}\] (4.14) is an idempotent in the codomain \({\cal C}\) of \({\cal Z}\). The other conditions from Example 4.1(1) imply that \({\cal Z}({\cal A}_{0}^{-})={\cal Z}({\cal A}_{0}^{+})\). 2. An orbifold datum \({\cal A}=({\cal A}_{2},{\cal A}_{1},{\cal A}_{0}^{+},{\cal A}_{0}^{-})\) for a 2-dimensional TQFT \({\cal Z}\) satisfies the invariance conditions (4.15) (where we suppress the labels \({\cal A}_{1},{\cal A}_{2}\) for all 1- and 2-strata) as well as those coming from all the other total orders on vertices in (4.5). It follows from the first identity in (4.15) that \({\cal Z}({\cal A}_{0}^{+})\) is an associative multiplication on \({\cal A}_{1}\) viewed as a 1-morphism \({\cal A}_{1}\colon{\cal A}_{2}\longrightarrow{\cal A}_{2}\) in \({\cal D}_{\cal Z}\) (recall the discussion around Theorem 3.7). Here we identified \({\cal A}_{0}^{+}\) with the defect bordism (4.16) The other invariance conditions similarly impose that \({\cal Z}({\cal A}_{0}^{-})\) is a coassociative comultiplication on \({\cal A}_{1}\), and more generally all the invariance conditions (4.12) for \(n=2\) are equivalent to \(({\cal A}_{1},{\cal Z}({\cal A}_{0}^{+}),{\cal Z}({\cal A}_{0}^{-}))\) being a \(\Delta\)-separable symmetric Frobenius algebra in \({\cal D}_{\cal Z}\), see [CR, Prop. 3.4] for details. 3. An orbifold datum \({\cal A}=({\cal A}_{3},{\cal A}_{2},{\cal A}_{1},{\cal A}_{0}^{+},{\cal A}_{0}^ {-})\) for a 3-dimensional TQFT \({\cal Z}\) satisfies the invariance condition (4.17) (where we suppress the labels \(\mathcal{A}_{1},\mathcal{A}_{2},\mathcal{A}_{3}\) for all 1-, 2, and 3-strata) as well as those coming from the other dual 2-3 moves in (4.11) and those dual to the 1-4 moves in (4.6). Expressed internally to the 3-category \(\mathcal{D}_{\mathcal{Z}}\) (recall Theorem 3.9), it follows that the 1-morphism \(\mathcal{A}_{2}\colon\mathcal{A}_{3}\longrightarrow\mathcal{A}_{3}\) comes with the structure of a (not necessarily unital) \(E_{1}\)-algebra. Indeed, the multiplication \(\mathcal{A}_{1}\colon\mathcal{A}_{2}\circ\mathcal{A}_{2}\longrightarrow \mathcal{A}_{2}\) is associative up to the associator \(\mathcal{Z}(\mathcal{A}_{0}^{+})\), and the condition (4.17) precisely states that the pentagon axiom for \(\mathcal{Z}(\mathcal{A}_{0}^{+})\) holds. More generally, all the invariance conditions (4.12) for \(n=3\) are equivalent to \((\mathcal{A}_{2},\mathcal{A}_{1},\mathcal{Z}(\mathcal{A}_{0}^{+}),\mathcal{Z }(\mathcal{A}_{0}^{-}))\) being a categorification of \(\Delta\)-separable symmetric Frobenius algebras in \(\mathcal{D}_{\mathcal{Z}}\), where the uncategorified defining conditions only hold up to coherent 3-isomorphisms built from \(\mathcal{Z}(\mathcal{A}_{0}^{\pm})\) and the adjunction data in \(\mathcal{D}_{\mathcal{Z}}\), see [11, 12] for details. 
Remark 4.4 and Example 4.5 illustrate that orbifold data for \(\mathcal{Z}\) naturally give rise to \(E_{1}\)-algebras in the associated higher category \(\mathcal{D}_{\mathcal{Z}}\), i.e. algebras that are associative up to higher coherences which are part of their structure. Hence we have the following expected equivalent characterisation, which is rigorous for \(n=2\) and \(n=3\), see [11, Sect. 3.3] and [11, Sect. 4.2], respectively: **Definition 4.6**.: An orbifold datum for an \(n\)-dimensional defect TQFT \(\mathcal{Z}\) consists of an object \(\mathcal{A}_{n}\in\mathcal{D}_{\mathcal{Z}}\) together with \((n-j)\)-morphisms \(\mathcal{A}_{j}\), \(j\in\{1,\ldots,n-1\}\), and \(n\)-morphisms \(\mathcal{A}_{0}^{+},\mathcal{A}_{0}^{-}\) which can label the dual stratifications of simplices in the graphical calculus of \(\mathcal{D}_{\mathcal{Z}}\), such that dual Pachner moves become identities of \(n\)-morphisms in that calculus. In short, orbifold data \(\mathcal{A}\) for \(\mathcal{Z}\) are \(E_{1}\)-algebras in the monoidal \((n-1)\)-category \(\mathcal{D}_{\mathcal{Z}}(\mathcal{A}_{n},\mathcal{A}_{n})\), subject to further constraints from Pachner moves. **Example 4.7** (Orbifold data for state sum models).: We consider the trivial \(n\)-dimensional defect TQFT \(\mathcal{Z}_{n}^{\text{\tiny triv}}\) valued in \(\mathcal{C}=\operatorname{Vect}_{\Bbbk}\). The following is preparation for the construction of state sum models in Example 4.11 below. 1. For \(n=1\), an orbifold datum for \(\mathcal{Z}_{1}^{\text{\tiny triv}}\) is equivalent to \(1\in\Bbbk\), the only linear idempotent on \(\mathcal{Z}_{1}^{\text{\tiny triv}}(\operatorname{pt})=\Bbbk\), cf. Example 4.5(1). 2. For \(n=2\), an orbifold datum in \(\mathcal{D}_{\mathcal{Z}_{2}^{\text{\tiny triv}}}=\operatorname{B}\operatorname{ Vect}_{\Bbbk}\) is a \(\Delta\)-separable symmetric Frobenius \(\Bbbk\)-algebra, cf. Examples 3.8(1) and 4.5(2). More generally, all separable symmetric Frobenius \(\Bbbk\)-algebras are obtained by taking the Euler completion, cf. Example 3.6 and [15]. 3. For \(n=3\), orbifold data in the Euler completion of \(\mathcal{D}_{\mathcal{Z}_{3}^{\text{\tiny triv}}}=\operatorname{B}\operatorname{ ssFrob}(\operatorname{Vect}_{\Bbbk})\) can be obtained from spherical fusion categories, cf. Examples 3.10(1), 4.5(3) and [11, Sect. 4]. 4. For \(n=4\), orbifold data in the Euler completion of \({\cal D}_{{\cal Z}_{4}^{\rm triv}}\) can be obtained from spherical fusion 2-categories, as explained in [DR, CMuMu]. **Example 4.8** (Orbifold data from (higher) group actions).: Let \({\cal Z}\) be an \(n\)-dimensional defect TQFT, and let \(G\) be a finite group which we view as a discrete strict monoidal \((n-1)\)-category (whose only morphisms are identities). A \(G\)-action on \({\cal Z}\) is an \(n\)-functor \(\rho\colon{\rm B}G\longrightarrow{\cal D}_{\cal Z}\). Assuming \({\cal D}_{\cal Z}\) to have appropriate direct sums we set \({\cal A}_{n}^{\rho}=\rho(*)\), \({\cal A}_{n-1}^{\rho}=\bigoplus_{g\in G}\rho(g)\) and define \({\cal A}_{j}^{\rho}\) for \(j\leqslant n-2\) in terms of the coherence data of \(\rho\). In particular, \({\cal A}_{n-2}^{\rho}\) is obtained from sums over the 2-isomorphisms \(\rho(g)\circ\rho(h)\longrightarrow\rho(gh)\). This gives rise to an \(E_{1}\)-algebra \({\cal A}^{\rho}\) in \({\cal D}_{\cal Z}\), which may or may not be an orbifold datum. 
For \(n=2\), orbifold data of type \({\cal A}^{\rho}\) are studied in [BCP3], including twists of the (co)multiplication \(({\cal A}_{0}^{\rho})^{\pm}\) by elements in group cohomology \(H^{2}(G;{\Bbbk}^{\times})\). In the context of B-twisted sigma models and Landau-Ginzburg models, (the underlying algebras of) \({\cal A}^{\rho}\) had been considered earlier in [Po, Sect. 2.2] and [CR, Sect. 7.1] (in the 2-categories of [CW] and [CMu]), respectively. For \(n=3\), orbifold data of type \({\cal A}^{\rho}\) are constructed from ribbon crossed \(G\)-categories, cf. [CRS3, Sect. 5]. A related analysis in the context of once-extended TQFTs is carried out in [SW]. More generally, actions \({\rm B}{\cal G}\longrightarrow{\cal D}_{\cal Z}\) of \(n\)-groups \({\cal G}\) give candidates of orbifold data; for \(p\in{\mathds{Z}}_{\geqslant 1}\), a \(p\)-form symmetry is the special case when \({\cal G}={\rm B}^{p}H\) is the \(p\)-fold delooping of an abelian group \(H\). Just as in the group case, there may be obstructions; see [CR, Rem. 7.5(ii)] for an example where \({\cal A}^{\rho}\) is a \(\Delta\)-separable Frobenius algebra which is however not symmetric. **Example 4.9** (Orbifold data from invertible spheres).: Let \({\cal Z}\) be an \(n\)-dimensional defect TQFT, and let \(X\in{\cal D}_{\cal Z}(a,b)\) be a 1-morphism such that \(X\)-labelled \((n-1)\)-spheres are invertible \(n\)-morphisms in the graphical calculus of \({\cal D}_{\cal Z}\), cf. [CRS1, Rem. 3.19]. Then defining \({\cal A}_{n}^{X}=a\), \({\cal A}_{n-1}^{X}=X^{\dagger}\circ X\), and \({\cal A}_{j}^{X}\) for \(j\leqslant n-2\) in terms of (higher) adjunction data of \(X\) gives rise to an orbifold datum for \({\cal Z}\). Here \({\cal A}_{j}^{X}\) can be thought of as the way \(n-j+1\)\(j\)-spheres can touch at a \(j\)-stratum. For \(n=2\), invertibility of the two oriented \(X\)-labelled 1-spheres means that the left and right quantum dimensions of \(X\) are invertible. Writing \(\star=\dim_{\rm r}(X)^{-1}\), it is checked in [CR, Sect. 4] that \[({\cal A}_{0}^{X})^{+}=\] make \({\cal A}_{1}^{X}=X^{\dagger}\circ X\) into a \(\Delta\)-separable symmetric Frobenius algebra. As shown in [CRCR, ReW], there are such orbifold data which are neither related to state sum models nor to group actions, cf. Example 4.14 below. ### Orbifold construction Let \(\mathcal{Z}\colon\mathrm{Bord}^{\mathrm{def}}_{n,n-1}(\mathrm{D})\longrightarrow \mathcal{C}\) be a defect TQFT where idempotents in \(\mathcal{C}\) split, and let \(\mathcal{A}\) be an orbifold datum for \(\mathcal{Z}\). From this we construct a closed TQFT \(\mathcal{Z}_{\mathcal{A}}\colon\mathrm{Bord}^{\mathrm{or}}_{n,n-1} \longrightarrow\mathcal{C}\) as follows. For \(\Sigma\in\mathrm{Bord}^{\mathrm{or}}_{n,n-1}\), choose a triangulation \(t\) of the cylinder \(\Sigma\times[0,1]\colon\Sigma\longrightarrow\Sigma\), and denote the induced triangulations of its incoming and outgoing boundary components by \(\tau\) and \(\tau^{\prime}\), respectively. By labelling the Poincare dual stratification with \(\mathcal{A}\), we obtain a defect bordism \(C^{t,\mathcal{A}}_{\Sigma,\tau^{\prime},\tau}\colon\Sigma^{\tau,\mathcal{A}} \longrightarrow\Sigma^{\tau^{\prime},\mathcal{A}}\) in \(\mathrm{Bord}^{\mathrm{def}}_{n,n-1}(\mathrm{D})\). For example, if \(n=2\) and \(\Sigma=S^{1}\), we can choose \[C^{t,\mathcal{A}}_{\Sigma,\tau^{\prime},\tau}= \tag{4.19}\] where on the left we suppress \(\mathcal{A}\)-labels for the stratification, as well as orientations. 
The map \(\Phi^{\tau^{\prime},\tau}_{\Sigma,\mathcal{A}}:=\mathcal{Z}(C^{t,\mathcal{A}}_{\Sigma,\tau^{\prime},\tau})\colon\mathcal{Z}(\Sigma^{\tau,\mathcal{A}})\longrightarrow\mathcal{Z}(\Sigma^{\tau^{\prime},\mathcal{A}})\) does not depend on the triangulation \(t\) away from the boundary thanks to the invariance condition on \(\mathcal{A}\), and we have \(\Phi^{\tau^{\prime\prime},\tau}_{\Sigma,\mathcal{A}}=\Phi^{\tau^{\prime\prime},\tau^{\prime}}_{\Sigma,\mathcal{A}}\circ\Phi^{\tau^{\prime},\tau}_{\Sigma,\mathcal{A}}\) for all triangulations \(\tau,\tau^{\prime},\tau^{\prime\prime}\) of \(\Sigma\). Then we define \[\mathcal{Z}_{\mathcal{A}}(\Sigma)=\mathrm{colim}_{\tau,\tau^{\prime}}\left(\Phi^{\tau^{\prime},\tau}_{\Sigma,\mathcal{A}}\right). \tag{4.20}\] Concretely, \(\mathcal{Z}_{\mathcal{A}}(\Sigma)\) can be computed (up to isomorphism) as the image of the idempotent \(\Phi^{\tau,\tau}_{\Sigma,\mathcal{A}}\) for any triangulation \(\tau\) of \(\Sigma\), which exists by assumption on \(\mathcal{C}\) (which in turn holds e.g. if \(\mathcal{C}=\mathrm{Vect}_{\Bbbk}\) or \(\mathcal{C}=\mathrm{sVect}_{\Bbbk}\)). Similarly, for a morphism \(M\colon\Sigma\longrightarrow\Sigma^{\prime}\) in \(\mathrm{Bord}^{\mathrm{or}}_{n,n-1}\), we may choose an arbitrary triangulation \(t\) which induces triangulations \(\tau\) and \(\tau^{\prime}\) on \(\Sigma\) and \(\Sigma^{\prime}\), respectively. Then by definition \[\mathcal{Z}_{\mathcal{A}}(M)=\Big(\,\mathcal{Z}_{\mathcal{A}}(\Sigma)\longrightarrow\mathcal{Z}\big(\Sigma^{\tau,\mathcal{A}}\big)\xrightarrow{\;\mathcal{Z}(M^{t,\mathcal{A}})\;}\mathcal{Z}\big({\Sigma^{\prime}}^{\tau^{\prime},\mathcal{A}}\big)\longrightarrow\mathcal{Z}_{\mathcal{A}}(\Sigma^{\prime})\,\Big) \tag{4.21}\] where the last map is part of the data of the colimit \(\mathcal{Z}_{\mathcal{A}}(\Sigma^{\prime})\), and the first map is obtained from the universal property of the colimit \(\mathcal{Z}_{\mathcal{A}}(\Sigma)\). This means that if e.g. \(\mathcal{C}=\mathrm{Vect}_{\Bbbk}\), \(\mathcal{Z}_{\mathcal{A}}(M)\) is given by pre- and post-composing \(\mathcal{Z}(M^{t,\mathcal{A}})\) with the inclusion and surjection maps which split the idempotents \(\Phi^{\tau,\tau}_{\Sigma,\mathcal{A}}\) and \(\Phi^{\tau^{\prime},\tau^{\prime}}_{\Sigma^{\prime},\mathcal{A}}\), respectively. As explained in more detail in [11, Sect. 3.2], the thus defined functor \(\mathcal{Z}_{\mathcal{A}}\) inherits a symmetric monoidal structure from \(\mathcal{Z}\), and we have:

**Definition and Theorem 4.10**.: Let \(\mathcal{A}\) be an orbifold datum for a defect TQFT \(\mathcal{Z}\colon\mathrm{Bord}^{\mathrm{def}}_{n,n-1}(\mathrm{D})\longrightarrow\mathcal{C}\) such that the colimits (4.20) exist in \(\mathcal{C}\). Then (4.20) and (4.21) assemble into the orbifold (TQFT) \(\mathcal{Z}_{\mathcal{A}}\colon\mathrm{Bord}^{\mathrm{or}}_{n,n-1}\longrightarrow\mathcal{C}\).

**Example 4.11**.: State sum models are (Euler completed) orbifolds of the trivial defect TQFT valued in \(\mathcal{C}=\operatorname{Vect}_{\Bbbk}\): 1. There are no non-trivial state sum models in dimension \(n=1\). This is consistent with the fact that \(\mathcal{Z}_{1}^{\text{\tiny triv}}(\Sigma)=\Bbbk\) for all objects (points) \(\Sigma\in\operatorname{Bord}_{1,0}^{\operatorname{def}}(\mathds{D}^{\text{\tiny triv}_{1}})\), and the only idempotent on \(\Bbbk\) is \(1\), cf. Example 4.7(1). 2.
In dimension \(n=2\), orbifold data \(\mathcal{A}\) in \(\mathcal{D}_{\mathcal{Z}_{2}^{\text{\tiny triv}}}=\operatorname{B} \operatorname{vect}_{\Bbbk}\) are \(\Delta\)-separable symmetric Frobenius \(\Bbbk\)-algebras, for which the orbifold construction \(\mathcal{Z}_{\mathcal{A}}\) coincides with the state sum model construction of [BP, FHK, LP]. Hence \(\mathcal{Z}_{\mathcal{A}}\) is the closed TQFT equivalently described by the commutative Frobenius \(\Bbbk\)-algebra which is the centre of \(\mathcal{A}\). The construction of [BP, FHK, LP] refined by [Mul, Sect. 3.2] in fact takes arbitrary separable symmetric Frobenius algebras as input, not necessarily \(\Delta\)-separable ones. As explained in [Mul, CMu], such algebras correspond to the Euler completion of \((\mathcal{Z}_{2}^{\text{\tiny triv}})_{\mathcal{A}}\) (recall Example 3.6, see also Example 4.22). This appearance of Euler completion continues in higher dimensions. To explain the name "state sum model", let us choose a basis \(\{a_{i}\}\) of the vector space \(\mathcal{A}_{1}\). Hence there are scalars \(\mu_{ij}^{k},\Delta_{i}^{jk}\) such that \(\mathcal{A}_{0}^{+}(a_{i}\otimes a_{j})=\sum_{k}\mu_{ij}^{k}\cdot a_{k}\) and \(\mathcal{A}_{0}^{-}(a_{i})=\sum_{j,k}\Delta_{i}^{jk}\cdot a_{j}\otimes a_{k}\). This in turn means that the main ingredient \((\mathcal{Z}_{2}^{\text{\tiny triv}})(M^{t,\mathcal{A}})\) in \((\mathcal{Z}_{2}^{\text{\tiny triv}})_{\mathcal{A}}(M)\) for any bordism \(M\), which is basically a string diagram between tensor powers of \(\mathcal{A}_{1}\) whose only vertices are \(\mathcal{A}_{0}^{\pm}\), is a sum (one for each \(\mathcal{A}_{1}\)-labelled strand) over the states \(a_{i},a_{j}\), etc. 3. In dimension \(n=3\), orbifold data \(\mathcal{A}^{\mathcal{S}}\) in the Euler completion of \(\mathcal{D}_{\mathcal{Z}_{3}^{\text{\tiny triv}}}=\operatorname{B}\operatorname{ ssFrob}(\operatorname{Vect}_{\Bbbk})\) can be extracted from spherical fusion categories \(\mathcal{S}\). Indeed, if \(I\) is a set of representatives of isomorphism classes of simple objects in \(\mathcal{S}\), we have \(\mathcal{A}_{3}^{\mathcal{S}}=*\), \(\mathcal{A}_{2}^{\mathcal{S}}=\bigoplus_{i\in I}\Bbbk\) is a direct sum of trivial Frobenius algebras, \(\mathcal{A}_{1}^{\mathcal{S}}=\bigoplus_{i,j,k\in I}\mathcal{S}(i\otimes j,k)\) as an \(\mathcal{A}_{2}^{\mathcal{S}}\)-\((\mathcal{A}_{2}^{\mathcal{S}}\otimes_{\Bbbk}\mathcal{A}_{2}^{\mathcal{S}})\)-bimodule, and \((\mathcal{A}_{0}^{\mathcal{S}})^{\pm}\) is basically given by the associator of \(\mathcal{S}\), see [CRS3, Prop. 4.2] for details. As shown in [CRS3, Thm. 4.5] the orbifold \((\mathcal{Z}_{3}^{\text{\tiny triv}})_{\mathcal{A}^{\mathcal{S}}}^{\odot}\) is equivalent to the Turaev-Viro-Barrett-Westbury TQFT [TViro, BW] for \(\mathcal{S}\). Its evaluation on bordisms may be expressed as a sum of states, where now the latter involve both a basis of \(\mathcal{A}_{1}^{\mathcal{S}}\) and the simple objects in \(I\). 4. In dimension \(n=4\), orbifold data \(\mathcal{A}^{\mathfrak{S}}\) for the Euler completion \((\mathcal{Z}_{4}^{\text{\tiny triv}})^{\odot}\) can be extracted from spherical fusion \(2\)-categories \(\mathfrak{S}\) in a way analogous to the above \(3\)-dimensional case (see Remark 4.30 for more on \(\mathcal{Z}_{4}^{\text{\tiny triv}}\)). 
As shown in [CMuMu], the state sum model \((\mathcal{Z}_{4}^{\text{\tiny triv}})_{\mathcal{A}^{\mathfrak{S}}}^{\odot}\) precisely reproduces the Douglas-Reutter invariants of closed \(4\)-manifolds [DR], and lifts them to a TQFT. **Example 4.12**.: Orbifolds from group actions are eponymous for the (generalised) orbifold construction: if \(\mathcal{Z}\) is a twisted sigma model whose target manifold \(Y\) comes with a \(G\)-action, one may consider the corresponding sigma model \(\mathcal{Z}^{G}\) whose target is the orbifold stack \(Y\!/\!\!/G\), see e.g. [1]. Alternatively, the \(G\)-action on \(Y\) may lift to one on \(\mathcal{Z}\), \(\mathrm{B}G\longrightarrow\mathcal{D}_{\mathcal{Z}}\), giving a candidate orbifold datum \(\mathcal{A}_{G}\). If and only if \(\mathcal{A}_{G}\) is indeed an orbifold datum, we say that the \(G\)-action can be gauged (without anomaly). In this case the orbifold TQFT \(\mathcal{Z}_{\mathcal{A}_{G}}\) is expected to be equivalent to \(\mathcal{Z}^{G}\). In dimension \(n=2\), this expectation has been verified for many twisted sigma models and Landau-Ginzburg models, see e.g. [1, 2]. In particular, the state space \(\mathcal{Z}^{G}(S^{1})\) is naturally recovered as the endomorphisms of \(\mathcal{A}_{G}\) viewed as a bimodule over itself, thus effortlessly including all "twisted sectors", cf. Example 4.25 below. We end this section with some results and examples that are specific to dimensions \(2\) and \(3\), most of which are however expected to generalise to higher dimensions. A key tool behind the scenes here is the theory of "orbifold completion" discussed in Section 4.3 below. In particular, orbifolding with \(\mathcal{A}\) can be undone by orbifolding with a "quantum symmetry defect" \(\widetilde{\mathcal{A}}\), at least for \(n=2\): **Theorem 4.13** ([1, 1]).: Let \(\mathcal{A}\) be an orbifold datum for a \(2\)-dimensional defect TQFT \(\mathcal{Z}\). The orbifold \(\mathcal{Z}_{\mathcal{A}}\) naturally lifts to a defect TQFT, and there exists an orbifold datum \(\widetilde{\mathcal{A}}\) such that \((\mathcal{Z}_{\mathcal{A}})_{\widetilde{\mathcal{A}}}\cong\mathcal{Z}\). **Example 4.14** (Orbifolds from invertible quantum dimensions).: In Example 4.9 we constructed an orbifold datum \(\mathcal{A}^{X}\) for a \(2\)-dimensional defect TQFT \(\mathcal{Z}\) for every \(X\in\mathcal{D}_{\mathcal{Z}}(a,b)\) with invertible quantum dimensions. This has been applied to the case of the Landau-Ginzburg \(2\)-category \(\mathcal{L}\mathcal{G}_{\mathbb{C}}\) (cf. Example 3.8(3)), where checking the invertibility condition reduces to computations with square matrices with polynomial entries. Recall that simple isolated singularities over \(\mathbb{C}\) admit an ADE classification (see e.g. [21, Prop. 8.5]), with examples such as: (4.22) Viewed as objects in \(\mathcal{L}\mathcal{G}_{\mathbb{C}}\), these polynomials are far from equivalent. But as shown in [1] there is \(X\in\mathcal{L}\mathcal{G}_{\mathbb{C}}(W^{\mathrm{A}_{11}},W^{\mathrm{E}_{6}})\), i.e. a matrix factorisation of \(W^{\mathrm{E}_{6}}-W^{\mathrm{A}_{11}}\), that induces an equivalence \((\mathcal{Z}_{W^{\mathrm{A}_{11}}})_{\mathcal{A}^{X}}\cong\mathcal{Z}_{W^{ \mathrm{E}_{6}}}\) between the TQFTs, and analogously for \(W^{\mathrm{A}_{17}}\sim W^{\mathrm{D}_{10}}\sim W^{\mathrm{E}_{7}}\) and \(W^{\mathrm{A}_{29}}\sim W^{\mathrm{D}_{16}}\sim W^{\mathrm{E}_{8}}\). 
While the above examples were already expected from the "CFT/LG correspondence" combined with [1], Recknagel-Weinreb [1] have algorithmically constructed several entirely novel examples between non-simple isolated singularities, e.g. \(W^{\mathrm{E}_{13}}\sim W^{\mathrm{Z}_{11}}\) and \(W^{\mathrm{E}_{18}}\sim W^{\mathrm{Q}_{12}}\). These give new relations in singularity theory, and between Landau-Ginzburg models. Orbifolds in dimension 3 have been studied most extensively for Reshetikhin-Turaev models. A key technical result obtained in [14] is that to any simple orbifold datum \(\mathcal{A}\) for \(\mathcal{Z}_{\mathcal{M}}^{\mathrm{RT}}\) (recall Example 3.5(1)) one naturally associates another modular fusion category \(\mathcal{M}_{\mathcal{A}}\). With this it can be made precise that "Reshetikhin-Turaev TQFTs close under generalised orbifolds": **Theorem 4.15** ([13, 14]).: Let \(\mathcal{M}\) be a modular fusion category, and let \(\mathcal{A}\) be an orbifold datum for \(\mathcal{Z}_{\mathcal{M}}^{\mathrm{RT}}\). The orbifold \((\mathcal{Z}_{\mathcal{M}}^{\mathrm{RT}})_{\mathcal{A}}\) naturally lifts to a defect TQFT, and \((\mathcal{Z}_{\mathcal{M}}^{\mathrm{RT}})_{\mathcal{A}}\cong\mathcal{Z}_{ \mathcal{M}_{\mathcal{A}}}^{\mathrm{RT}}\). In addition to those related to state sum models (cf. Examples 4.7(3) and 4.11(3)) and those coming from group actions (cf. Examples 4.8 and 4.14), Reshetikhin-Turaev models admit orbifold data \(\mathcal{A}^{B}\) obtained from condensable algebras, i.e. commutative haploid separable symmetric Frobenius algebras \(B\) in \(\mathcal{M}\), see [12, Sect. 3.4]. The associated condensation \(\mathcal{M}_{\mathcal{A}^{B}}\) is equivalent to the category of local \(B\)-modules, and at least these types of orbifolds can be inverted: **Theorem 4.16** ([14]).: Let \(\mathcal{M}\) be a modular fusion category, and let \(B\) be a condensable algebra in \(\mathcal{M}\). There exists an orbifold datum \(\widetilde{\mathcal{A}}\) for \(\mathcal{Z}_{\mathcal{M}_{\mathcal{A}^{B}}}^{\mathrm{RT}}\) such that \((\mathcal{M}_{\mathcal{A}^{B}})_{\widetilde{\mathcal{A}}}\cong\mathcal{M}\) as ribbon categories. **Example 4.17**.: There are several modular fusion categories \(\mathcal{I}\) of "Ising type", with precisely three isomorphism classes of simple objects \(\mathbb{1},\sigma,\varepsilon\) and fusion rules \(\varepsilon\otimes\varepsilon\cong\mathbb{1}\) and \(\sigma\otimes\sigma\cong\mathbb{1}\oplus\varepsilon\). Via an algorithmic search, explicit orbifold data \(\widetilde{\mathcal{A}}\) of "Fibonacci type" were found in [14], such that \(\mathcal{I}_{\widetilde{\mathcal{A}}}\) is a condensation inversion for an orbifold of the modular fusion category associated to \(\mathfrak{sl}(2)\) at level \(k=10\). The fact that \(\widetilde{\mathcal{A}}\) comes neither from condensable algebras nor from group actions again illustrates the usefulness of the general orbifold theory. Building on Theorem 4.16, one obtains an equivalent characterisation of orbifold data for Reshetikhin-Turaev TQFTs in terms of Witt equivalence. Recall that two modular fusion categories \(\mathcal{M},\mathcal{M}^{\prime}\) are Witt equivalent if there exists a spherical fusion category together with a ribbon equivalence between its Drinfeld centre and \(\mathcal{M}^{\prime}\boxtimes\mathcal{M}^{\mathrm{rev}}\), where \((-)^{\mathrm{rev}}\) denotes the reversed braiding and twist. 
**Theorem 4.18** ([14]).: Two modular fusion categories \(\mathcal{M},\mathcal{M}^{\prime}\) are Witt equivalent if and only if there exists an orbifold datum \(\mathcal{A}\) for \(\mathcal{Z}_{\mathcal{M}}^{\mathrm{RT}}\) such that \(\mathcal{M}^{\prime}\cong\mathcal{M}_{\mathcal{A}}\) as ribbon categories. ### Orbifold completion An \(n\)-dimensional orbifold datum for \({\cal Z}\) is a type of algebra internal to the \(n\)-category \({\cal D}_{\cal Z}\). It is then natural to consider the \(n\)-category whose objects are all orbifold data, and whose (higher) morphisms capture their (higher) representation theory. It is also precisely this higher Morita category \(({\cal D}_{\cal Z})_{\rm orb}\) which allows us to lift the output of the orbifold construction from mere closed TQFTs \({\cal Z}_{\cal A}\) to a proper defect TQFT \({\cal Z}_{\rm orb}\), whose defects are the morphisms in \(({\cal D}_{\cal Z})_{\rm orb}\). While expected to hold in general, the representation theory of orbifold data has so far been rigorously developed only in dimension \(n\leqslant 3\). The case \(n=1\) is trivial, so we start with \(n=2\). **Definition 4.19** ([Cr]).: Let \({\cal B}\) be a pivotal 2-category with idempotent complete Hom categories. The orbifold completion \({\cal B}_{\rm orb}\) of \({\cal B}\) is the 2-category whose * objects are orbifold data \({\cal A}=({\cal A}_{2},{\cal A}_{1},{\cal A}_{0}^{\pm})\) in \({\cal B}\), i.e. \(\Delta\)-separable symmetric Frobenius algebras, * 1-morphisms \({\cal A}\longrightarrow{\cal A}^{\prime}\) in \({\cal B}_{\rm orb}\) are 1-morphisms \({\cal A}_{2}\longrightarrow{\cal A}^{\prime}_{2}\) in \({\cal B}\) together with an \({\cal A}^{\prime}\)-\({\cal A}\)-bimodule structure, * horizontal composition of \(X\colon{\cal A}\longrightarrow{\cal A}^{\prime}\) and \(Y\colon{\cal A}^{\prime}\longrightarrow{\cal A}^{\prime\prime}\) is the relative tensor product \(Y\otimes_{{\cal A}^{\prime}}X\) in \({\cal B}\) (that exists as Hom categories are idempotent complete), * the identity 1-morphism on \({\cal A}\in{\cal B}_{\rm orb}\) is \({\cal A}\) viewed as a bimodule over itself, * 2-morphisms in \({\cal B}_{\rm orb}\) are bimodule maps in \({\cal B}\). **Theorem 4.20** ([Cr]).: The pivotal structure on \({\cal B}\) induces a pivotal structure on \({\cal B}_{\rm orb}\), and there is a pivotal equivalence \(({\cal B}_{\rm orb})_{\rm orb}\cong{\cal B}_{\rm orb}\). Of course we want to apply this to the case \({\cal B}={\cal D}_{\cal Z}\) for some 2-dimensional defect TQFT \({\cal Z}\colon{\rm Bord}_{2,1}^{\rm def}({\mathbb{D}})\longrightarrow{\cal C}\). Then \(({\cal D}_{\cal Z})_{\rm orb}\) is naturally \({\cal C}\)-enriched, and we obtain a larger set of defect data \({\mathbb{D}}^{\rm orb}\) whose label sets \(D_{i}^{\rm orb}\) are defined to consist of the \((2-j)\)-cells of \(({\cal D}_{\cal Z})_{\rm orb}\). As shown in [CR, Sect. 
3.4], this allows us to lift the orbifold construction \({\cal Z}\longmapsto{\cal Z}_{\cal A}\) from closed to defect TQFTs: **Definition 4.21**.: The orbifold defect TQFT \({\cal Z}_{\rm orb}\colon{\rm Bord}_{2,1}^{\rm def}({\mathbb{D}}^{\rm orb}) \longrightarrow{\cal C}\) is given on morphisms by replacing \({\cal A}\)-labelled 2-strata \(\sigma\) by \({\cal A}\)-labelled substratifications \(\sigma^{t,{\cal A}}\), connecting \({\cal A}_{1}\)-labelled 1-substrata to adjacent \(D_{1}^{\rm orb}\)-labelled 1-strata via the corresponding bimodule structure morphisms, evaluating with \({\cal Z}\), and taking the colimit over all substratifications. Hence for \(X,Y\in(\mathcal{D}_{\mathcal{Z}})_{\mathrm{orb}}(\mathcal{A}^{\prime},\mathcal{A})\) and \(\varphi\colon X\longrightarrow Y\), locally the evaluation of \(\mathcal{Z}_{\mathrm{orb}}\) near a \(\varphi\)-labelled \(0\)-stratum is (suppressing some labels on the right) (4.23) The fact that this is independent of the choice of substratification near \(X,Y,\varphi\) is precisely due to the defining properties of bimodules and bimodule maps. Hence \(\mathcal{Z}_{\mathrm{orb}}\) is well-defined by construction of the orbifold completion \((\mathcal{D}_{\mathcal{Z}})_{\mathrm{orb}}\). Moreover, by design we have \((\mathcal{D}_{\mathcal{Z}})_{\mathrm{orb}}\cong\mathcal{D}_{\mathcal{Z}_{ \mathrm{orb}}}\). **Example 4.22**.: Recall from Example 3.8(1) that the \(2\)-category associated to the \(\mathcal{C}\)-valued trivial defect TQFT \(\mathcal{Z}_{2}^{\mathrm{triv}}\) is \(\mathcal{D}_{\mathcal{Z}_{2}^{\mathrm{triv}}}=\mathrm{B}\mathcal{C}^{\mathrm{d}}\). Hence it directly follows from Definition 4.19 that \((\mathcal{D}_{\mathcal{Z}_{2}^{\mathrm{triv}}})_{\mathrm{orb}}=\mathrm{AssFrob }(\mathcal{C})\). Combining this with Examples 3.3(2), 3.8(2) and 4.11(2), we realise that the defect state sum model is the (Euler completed) orbifold of the trivial defect TQFT: \(\mathcal{Z}_{2}^{\mathrm{ss}}=(\mathcal{Z}_{2}^{\mathrm{triv}})_{\mathrm{ orb}}^{\odot}\). The completion property \((\mathcal{B}_{\mathrm{orb}})_{\mathrm{orb}}\cong\mathcal{B}_{\mathrm{orb}}\) can be viewed as an "oriented" categorification of idempotent completion of \(1\)-categories (replacing identities \(e\circ e=e\) by \(2\)-morphisms \(\mathcal{A}_{0}^{+}\colon\mathcal{A}_{1}\circ\mathcal{A}_{1}\longrightarrow \mathcal{A}_{1}\)) - as opposed to the slightly different categorification in [DR, GJF] inspired by framed TQFTs. Intuitively, this property should follow from the fact that making a given choice of substratification finer does not affect the orbifold construction (because of the invariance condition). One way to rigorously establish the completion property is to use the universal property of \(\mathcal{B}_{\mathrm{orb}}\). To state it concisely we say that for \(a\in\mathcal{B}\), an orbifold condensation of \(a\) (onto \(b\in\mathcal{B}\)) is \(X\in\mathcal{B}(a,b)\) such that \(\widetilde{\mathrm{ev}}_{X}\circ\mathrm{coev}_{X}=1_{1_{b}}\), and that an orbifold datum \(\mathcal{A}\in\mathcal{B}_{\mathrm{orb}}\) splits if there exists an orbifold condensation \(X\) of \(\mathcal{A}_{2}\) such that \(X^{\dagger}\circ X\cong\mathcal{A}\) as Frobenius algebras (recall Example 4.9). 
**Proposition 4.23** ([CMul]).: The inclusion \(\mathcal{B}\longleftrightarrow\mathcal{B}_{\mathrm{orb}}\), \(a\longmapsto(a,1_{a},\lambda_{1_{a}}^{\pm 1})\), satisfies the universal property that for every pivotal \(2\)-functor \(\mathcal{B}\longrightarrow\overline{\mathcal{D}}\) in whose codomain every orbifold condensation splits, there exists an essentially unique pivotal \(2\)-functor \(\mathcal{B}_{\mathrm{orb}}\longrightarrow\overline{\mathcal{D}}\) such that the following commutes up to equivalence: (4.24) **Example 4.24** (Orbifold equivalence).: The orbifold datum \(\mathcal{A}^{X}\) constructed from \(X\in\mathcal{B}(a,b)\) with invertible quantum dimensions (recall Examples 4.9 and 4.14) is equivalent to the image of \(b\in{\cal B}\) under \({\cal B}\longleftrightarrow{\cal B}_{\rm orb}\), as shown in [CR, Thm. 4.8]. We denote this orbifold equivalence \({\cal A}^{X}\cong(b,1_{b},\lambda_{1_{b}}^{\pm 1})\equiv 1_{b}\) as \(a\sim b\). It immediately follows that everything about \(b\in{\cal B}\) can be described in terms of the algebra \({\cal A}^{X}\) on \(a\); in particular we have equivalences \({\cal B}(b,c)\cong{\cal B}_{\rm orb}({\cal A}^{X},1_{c})={\rm mod}_{{\cal B}(a, c)}({\cal A}^{X})\) for all \(c\in{\cal B}\). Applying this to, say, \(X\in{\cal L}{\cal G}_{\rm C}(W^{{\rm A}_{11}},W^{{\rm E}_{6}})\) of Example 4.14, the orbifold equivalence \(W^{{\rm A}_{11}}\sim W^{{\rm E}_{6}}\) implies (by choosing \(c=0\in{\cal L}{\cal G}_{\rm C}\)) that the category of matrix factorisations of \(W^{{\rm E}_{6}}\) is equivalent to the category of right \({\cal A}^{X}\)-modules internal to the category of matrix factorisations of \(W^{{\rm A}_{11}}\). **Example 4.25** (\(G\)-equivariantisation).: Let \({\cal Z}\) be a \(2\)-dimensional defect TQFT with a \(G\)-action \(\rho\colon{\rm B}G\longrightarrow{\cal D}_{\cal Z}\) that gives rise to an orbifold datum \({\cal A}_{G}\) as in Example 4.12. Since the "bulk state space is given by endomorphisms of the identity defect" (as reviewed e.g. in [Ca, Sect. 3.1]), we have that \({\cal Z}^{G}(S^{1})={\cal Z}_{{\cal A}_{G}}(S^{1})={\rm End}_{({\cal D}_{\cal Z })_{\rm orb}}(1_{{\cal A}_{G}})={\rm End}_{{\cal A}_{G},{\cal A}_{G}}({\cal A} _{G})\). Using that by definition \(({\cal A}_{G})_{1}=\bigoplus_{g\in G}\rho(g)\), we find that "twisted sectors" are automatically included in the formalism, namely as the summands corresponding to \(g\neq e\). The \(G\)-action \(\rho\colon{\rm B}G\longrightarrow{\cal D}_{\cal Z}\) induces a \(G\)-action on the Hom categories \({\cal D}_{\cal Z}(\rho(*),c)\) for all \(c\in{\cal D}_{\cal Z}\), namely by horizontal composition (from the right) with \(\rho(g)\), \(g\in G\). Then one finds that the \(G\)-equivariantisation of these Hom categories is equivalent to the categories of right \({\cal A}_{G}\)-modules, \({\cal D}_{\cal Z}(\rho(*),c)^{G}\cong{\rm mod}_{{\cal D}_{\cal Z}(\rho(*),c)}( {\cal A}_{G})\). This is explained in detail for the case of Landau-Ginzburg models in [CR, Sect. 7.1], where it is also shown that \({\cal A}_{G}\) is an orbifold datum if the quantum dimensions of \(\rho(g)\) are all identities. **Example 4.26** (McKay correspondence and orbifold equivalence).: By enlarging the \(2\)-category \({\cal L}{\cal G}_{\rm C}\) to include more general maps than polynomials as objects, it is shown in [Io] that the McKay correspondence gives rise to many (new) orbifold equivalences in \({\cal L}{\cal G}_{\rm C}\). 
Recall that if a finite group \(G\) acts on a variety \(V\) such that \(V/G\) has a crepant resolution \(Y\), then the McKay correspondence states that (under certain technical assumptions) the bounded derived category of \(Y\) is equivalent to the \(G\)-equivariantisation of the derived category of \(V\). On the other hand, it is known that quotienting the derived category of a variety \(\{W=0\}\) by the subcategory of perfect complexes is equivalent to the homotopy category of matrix factorisations of \(W\). Hence one may expect that for a \(G\)-equivariant map \(f\colon V\longrightarrow{\mathds{C}}\) the McKay correspondence induces a relation between the homotopy categories of \(G\)-equivariant matrix factorisations of \(f\) and of matrix factorisations of the function \(\hat{f}\) on \(Y\) induced by \(f\). And indeed, \(f\sim\hat{f}\) as shown in [Io, Thm. 3.7]. We now turn to dimension \(3\). Recall from Theorem 3.9 that a \(3\)-dimensional defect TQFT \({\cal Z}\) gives rise to a Gray category with duals \({\cal D}_{\cal Z}\). An orbifold datum \({\cal A}\) for \({\cal Z}\) is in particular an \(E_{1}\)-algebra in the monoidal \(2\)-category \({\cal D}_{\cal Z}({\cal A}_{3},{\cal A}_{3})\). It is hence natural to consider, for any Gray category with duals \({\cal T}\) (with appropriate conditions on certain colimits), the Morita 3-category of [JFS] of such algebras in \(\mathcal{T}\), as spelled out in [CMu, Sect. 3]. Contrary to the 2-dimensional case in Definition 4.19, we need to impose additional constraints on the 1- and 2-morphisms of this 3-category to ensure invariance under Pachner moves in the construction of the orbifold defect TQFT \(\mathcal{Z}_{\mathrm{orb}}\) below. These constraints are identified in [CMu, Sect. 4], to which we refer for the precise definition of the orbifold completion 3-category \(\mathcal{T}_{\mathrm{orb}}\) of \(\mathcal{T}\). Below we only give a broad sketch. Using the graphical calculus of [BMS] (with the conventions of [CMS]), an orbifold datum \(\mathcal{A}\) in \(\mathcal{T}\) consists of 1-, 2- and 3-morphisms (4.25) Such data, subject to the constraints of [CRS1, Def. 3.13] (see also [CMu, Fig. 4.1]), are the objects of \(\mathcal{T}_{\mathrm{orb}}\). A 1-morphisms \(\mathcal{A}\longrightarrow\mathcal{A}^{\prime}\) in \(\mathcal{T}_{\mathrm{orb}}\) is a 1-morphism \(M\colon\mathcal{A}_{3}\longrightarrow\mathcal{A}^{\prime}_{3}\) in \(\mathcal{T}\) together with 2- and 3-morphisms (4.26) etc., that make \(M\) into an \(\mathcal{A}^{\prime}\)-\(\mathcal{A}\)-bimodule \(\mathcal{M}\). Similarly, a 2-morphism \(\mathcal{M}\longrightarrow\mathcal{M}^{\prime}\) in \(\mathcal{T}_{\mathrm{orb}}\) is a 2-morphism \(F\colon M\longrightarrow M^{\prime}\) in \(\mathcal{T}\) together with 3-isomorphisms (4.27) etc., and 3-morphisms in \(\mathcal{T}_{\mathrm{orb}}\) are those in \(\mathcal{T}\) which are compatible with the above. With all details about \(\mathcal{T}_{\mathrm{orb}}\) as laid out in [CMu], one finds that the first part of Theorem 4.20 generalises to **Theorem 4.27** ([CMu]).: \(\mathcal{T}_{\mathrm{orb}}\) admits adjoints for all 1- and 2-morphisms. The completion property \((\mathcal{T}_{\mathrm{orb}})_{\mathrm{orb}}\cong\mathcal{T}_{\mathrm{orb}}\) is expected to follow from a universal property analogous to that in Proposition 4.23. 
**Example 4.28**.: Recall from Examples 4.22 and 3.10(1) that \(({\cal D}_{({\cal Z}_{2}^{\rm triv})_{\rm orb}^{\odot}})\cong\operatorname{ssFrob}( \operatorname{Vect}_{\Bbbk})\) and \({\cal D}_{{\cal Z}_{3}^{\rm triv}}=\operatorname{B}\operatorname{ssFrob}( \operatorname{Vect}_{\Bbbk})\). Taking the (Euler completion of the) orbifold completion \(({\cal D}_{{\cal Z}_{3}^{\rm triv}})_{\rm orb}\) recovers the 3-category (with duals) of spherical fusion categories, bimodule categories with trace, bimodule functors, and bimodule natural transformations defined in [Sc]. By design, the orbifold completion \(({\cal D}_{\cal Z})_{\rm orb}\) allows us to construct the 3-dimensional orbifold defect TQFT \({\cal Z}_{\rm orb}\colon\operatorname{Bord}_{3,2}^{\rm def}(\mathbb{D}^{\rm orb })\longrightarrow{\cal C}\) in close analogy to Definition 4.21. Defects of dimension \(j\) are \((3-j)\)-cells in \(({\cal D}_{\cal Z})_{\rm orb}\), whose defining conditions ensure well-definedness of \({\cal Z}_{\rm orb}\) when taking the colimit over all triangulations; the details, including compatibility with given stratifications, are in [CMu, Sect. 6.2]. Again one has \(({\cal D}_{\cal Z})_{\rm orb}\cong{\cal D}_{{\cal Z}_{\rm orb}}\). **Example 4.29**.: The 3-dimensional defect state sum model is the (Euler completed) orbifold of the trivial defect TQFT: \({\cal Z}_{3}^{\rm ss}=({\cal Z}_{3}^{\rm triv})_{\rm orb}^{\odot}\). In particular, surface defects between Turaev-Viro-Barrett-Westbury models are given by bimodule categories with trace, and line defects at which an arbitrary number of surface defects meet are given by bimodule functors between appropriate relative Deligne products, in line with [KK]. The case of line defects between precisely two surface defects is studied in detail in [Me], which also provides explicit examples for the special case of Dijkgraaf-Witten models. **Remark 4.30**.: The theme of Examples 4.22 and 4.29 is expected to continue in higher dimensions: the \(n\)-dimensional trivial defect TQFT is obtained from the delooping \(\operatorname{B}\!{\cal D}_{{\cal Z}_{n-1}^{\rm ss}}\), and the \(n\)-dimensional defect state sum model is the (Euler completed) orbifold of the trivial defect TQFT, \[{\cal Z}_{n}^{\rm ss}=({\cal Z}_{n}^{\rm triv})_{\rm orb}^{\odot}\,. \tag{4.28}\] For \(n=4\), this is explained in detail for closed state sum models in [DR, CMuMu]. **Example 4.31**.: Let \({\cal M}\) be a modular fusion category. The Reshetikhin-Turaev defect TQFT \({\cal Z}_{\cal M}^{\rm RT}\) of [KMRS] mentioned in Example 3.5(1) is the defect TQFT obtained from the orbifold completion \((\operatorname{B}\!\Delta\!\operatorname{ssFrob}({\cal M}))_{\rm orb}\), as shown in [CMu, Sect. 6.2].
2309.15809
Fair Canonical Correlation Analysis
This paper investigates fairness and bias in Canonical Correlation Analysis (CCA), a widely used statistical technique for examining the relationship between two sets of variables. We present a framework that alleviates unfairness by minimizing the correlation disparity error associated with protected attributes. Our approach enables CCA to learn global projection matrices from all data points while ensuring that these matrices yield comparable correlation levels to group-specific projection matrices. Experimental evaluation on both synthetic and real-world datasets demonstrates the efficacy of our method in reducing correlation disparity error without compromising CCA accuracy.
Zhuoping Zhou, Davoud Ataee Tarzanagh, Bojian Hou, Boning Tong, Jia Xu, Yanbo Feng, Qi Long, Li Shen
2023-09-27T17:34:13Z
http://arxiv.org/abs/2309.15809v1
# Fair Canonical Correlation Analysis ###### Abstract This paper investigates fairness and bias in Canonical Correlation Analysis (CCA), a widely used statistical technique for examining the relationship between two sets of variables. We present a framework that alleviates unfairness by minimizing the correlation disparity error associated with protected attributes. Our approach enables CCA to learn global projection matrices from all data points while ensuring that these matrices yield comparable correlation levels to group-specific projection matrices. Experimental evaluation on both synthetic and real-world datasets demonstrates the efficacy of our method in reducing correlation disparity error without compromising CCA accuracy.

## 1 Introduction Canonical Correlation Analysis (CCA) is a multivariate statistical technique that explores the relationship between two sets of variables [30]. Given two datasets \(\mathbf{X}\in\mathbb{R}^{N\times D_{x}}\) and \(\mathbf{Y}\in\mathbb{R}^{N\times D_{y}}\) on the same set of \(N\) observations,1 CCA seeks the \(R\)-dimensional subspaces where the projections of \(\mathbf{X}\) and \(\mathbf{Y}\) are maximally correlated, i.e. finds \(\mathbf{U}\in\mathbb{R}^{D_{x}\times R}\) and \(\mathbf{V}\in\mathbb{R}^{D_{y}\times R}\) such that Footnote 1: The columns of \(\mathbf{X}\) and \(\mathbf{Y}\) have been standardized. \[\text{maximize }\ \ \mathrm{trace}\left(\mathbf{U}^{\top}\mathbf{X}^{\top}\mathbf{Y}\mathbf{V}\right)\quad\text{subject to}\quad\mathbf{U}^{\top}\mathbf{X}^{\top}\mathbf{X}\mathbf{U}=\mathbf{V}^{\top}\mathbf{Y}^{\top}\mathbf{Y}\mathbf{V}=\mathbf{I}_{R}.\] (CCA) CCA finds applications in various fields, including biology [51], neuroscience [2], medicine [79], and engineering [14], for unsupervised or semi-supervised learning. It improves tasks like clustering, classification, and manifold learning by creating meaningful dimensionality-reduced representations [70]. However, CCA can exhibit _unfair_ behavior when analyzing data with protected attributes, like sex or race. For instance, in Alzheimer's disease (AD) analysis, CCA can establish correlations between brain imaging and cognitive decline. Yet, if it does not consider the influence of sex, it may result in disparate correlations among different groups because AD affects males and females differently, particularly in cognitive decline [36; 81]. The influence of machine learning on individuals and society has sparked a growing interest in the topic of fairness [42]. While fairness techniques are well-studied in supervised learning [5; 18; 20], attention is shifting to equitable methods in unsupervised learning [11; 12; 15; 34; 49; 55; 64]. Despite extensive work on fairness in machine learning, fair CCA (F-CCA) remains unexplored. This paper investigates F-CCA and introduces new approaches to mitigate bias in (CCA). For further discussion, we compare CCA with our proposed F-CCA in sample projection, as illustrated in Figure 1. In Figure 1(a), we have samples \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) from matrix \(\mathbf{X}\), and in Figure 1(b), their corresponding samples \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\) are from matrix \(\mathbf{Y}\). CCA learns \(\mathbf{U}\) and \(\mathbf{V}\) to maximize correlation, inversely related to the angle between the sample vectors.
Figure 1(c) demonstrates the proximity within the projected sample pairs \((\mathbf{U}^{\top}\mathbf{x}_{1},\mathbf{V}^{\top}\mathbf{y}_{1})\) and \((\mathbf{U}^{\top}\mathbf{x}_{2},\mathbf{V}^{\top}\mathbf{y}_{2})\). In Figure 1(d)-(i), we compare the results of different learning strategies. There are five pairs of samples, with female pairs highlighted in red and male pairs shown in blue. Random projection (Figure 1(e)) leads to randomly large angles between corresponding sample vectors. CCA reduces angles compared to random projection (Figure 1(f)), but significant angle differences between male and female pairs indicate bias. Using sex-based projection matrices heavily biases the final projection, favoring one sex over the other (Figures 1(g) and 1(h)). To address this bias, our F-CCA maximizes correlation within pairs and ensures equal correlations across different groups, such as males and females (Figure 1(i)). Note that while this illustration represents individual fairness, the desired outcome in practice is achieving similar average angles for different groups. **Contributions.** This paper makes the following key contributions: * We introduce fair CCA (F-CCA), a model that addresses fairness issues in (CCA) by considering multiple groups and minimizing the correlation disparity error of protected attributes. F-CCA aims to learn global projection matrices from all data points while ensuring that these projection matrices produce a similar amount of correlation as group-specific projection matrices. * We propose two optimization frameworks for F-CCA: multi-objective and single-objective. The multi-objective framework provides an automatic trade-off between global correlation and equality in group-specific correlation disparity errors. The single-objective framework offers a simple approach to approximate fairness in CCA while maintaining a strong global correlation, requiring a tuning parameter to balance these objectives. * We develop a gradient descent algorithm on a generalized Stiefel manifold to solve the multi-objective problem, with convergence guarantees to a Pareto stationary point. This approach extends Riemannian gradient descent [8; 9] to multi-objective optimization, accommodating a broader range of retraction maps than exponential retraction [23; 6]. Furthermore, we provide a similar algorithm for single-objective problems, also with convergence guarantees to a stationary point. * We provide extensive empirical results showcasing the efficacy of the proposed algorithms. Comparison against the CCA method on synthetic and real datasets highlights the benefits of the F-CCA approach, validating the theoretical findings 2. Footnote 2: Code is available at [https://github.com/PennShenLab/Fair_CCA](https://github.com/PennShenLab/Fair_CCA). **Organization:** Section 2 covers related work. Our proposed approach is detailed in Section 3, along with its theoretical guarantees. Section 4 showcases numerical experiments, while Section 5 discusses implications and future research directions. Figure 1: Illustration of CCA and F-CCA, with the sensitive attribute being sex (female and male). Figures (a)–(c) demonstrate the general framework of CCA, while Figures (d)–(i) provide a comparison of the projected results using various strategies. It is important to note that the correlation between two corresponding samples is inversely associated with the angle formed by their projected vectors. F-CCA aims to equalize the angles among all pairs \((\mathbf{x},\mathbf{y})\). 
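Before turning to related work, the following minimal NumPy sketch shows how the classical problem (CCA) above can be solved via whitening and a singular value decomposition of the cross-covariance. This is the standard textbook construction, included only to fix notation; it is not the authors' released implementation (see Footnote 2), and all variable names are illustrative.

```python
import numpy as np

def cca(X, Y, R, eps=1e-8):
    """Solve (CCA): maximize trace(U^T X^T Y V) subject to
    U^T X^T X U = V^T Y^T Y V = I_R, via whitening + SVD."""
    X = X - X.mean(axis=0)                       # center columns
    Y = Y - Y.mean(axis=0)
    Cxx = X.T @ X + eps * np.eye(X.shape[1])     # small ridge for invertibility
    Cyy = Y.T @ Y + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y

    def inv_sqrt(C):                             # symmetric inverse square root
        w, Q = np.linalg.eigh(C)
        return Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    A, s, Bt = np.linalg.svd(Wx @ Cxy @ Wy)      # whitened cross-covariance
    U = Wx @ A[:, :R]                            # un-whiten top-R left directions
    V = Wy @ Bt[:R, :].T                         # un-whiten top-R right directions
    return U, V, s[:R]                           # s[:R]: canonical correlations

# toy usage with a shared low-dimensional signal
rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 3))
X = Z @ rng.standard_normal((3, 5)) + 0.1 * rng.standard_normal((200, 5))
Y = Z @ rng.standard_normal((3, 4)) + 0.1 * rng.standard_normal((200, 4))
U, V, rho = cca(X, Y, R=2)
print(np.round(rho, 3))
```

The singular values of the whitened cross-covariance are the canonical correlations; F-CCA will trade this global correlation off against group-wise disparity errors introduced in Section 3.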
Related work **Canonical Correlation Analysis (CCA).** CCA was first introduced by [28; 29]. Since then, it has been utilized to explore relations between variables in various fields of science, including economics [72], psychology [19; 27], geography [45], medicine [39], physics [76], chemistry [69], biology [62], time-series modeling [26], and signal processing [57]. Recently, CCA has demonstrated its applicability in modern fields of science such as neuroscience, machine learning, and bioinformatics [59; 60]. CCA has been used to explore relations for developing brain-computer interfaces [10; 46] and in the field of imaging genetics [22]. CCA has also been applied for feature selection [47], feature extraction and fusion [61], and dimension reduction [71]. Additionally, numerous studies have applied CCA in bioinformatics and computational biology, such as [54; 56; 58]. The broad range of application domains highlights the versatility of CCA in extracting relations between variables, making it a valuable tool in scientific research. **Fairness.** Fairness in machine learning has been a growing area of research, with much of the work focusing on fair supervised methods [5; 16; 18; 20; 67; 78]. However, there has also been increasing attention on fair methods for unsupervised learning tasks [11; 12; 15; 34; 33; 49; 55; 64; 50; 66]. In particular, Samadi et al. [55] proposed a semi-definite programming approach to ensure fairness in PCA. Kleindessner et al. [33; 34] focused on fair PCA formulation for multiple groups and proposed a kernel-based fair PCA. Kamani et al. [32] introduced an efficient gradient method for fair PCA, addressing multi-objective optimization. In this paper, we propose a novel multi-objective framework for F-CCA, converting constrained F-CCA problems to unconstrained ones on a generalized Riemannian manifold. This framework enables the adaptation of efficient gradient techniques for numerical optimization on Riemannian manifolds. **Riemannian Optimization.** Riemannian optimization extends Euclidean optimization to smooth manifolds, enabling the minimization of \(f(\mathbf{x})\) on a Riemannian manifold \(\mathcal{M}\) and converting constrained problems into unconstrained ones [1; 8]. It finds applications in various domains such as matrix/tensor factorization [31; 63], PCA [21], and CCA [77]. Specifically, CCA can be formulated as Riemannian optimization on the Stiefel manifold [13; 43]. In our work, we utilize Riemannian optimization to develop a multi-objective framework for F-CCAs on generalized Stiefel manifolds. ## 3 Fair Canonical Correlation Analysis This section introduces the formulation and optimization algorithms for F-CCA. ### Preliminary Real numbers are represented as \(\mathbb{R}\), with \(\mathbb{R}_{+}\) for nonnegative values and \(\mathbb{R}_{++}\) for positives. Vectors and matrices use bold lowercase and uppercase letters (e.g., \(\mathbf{a}\), \(\mathbf{A}\)) with elements \(a_{i}\) and \(a_{ij}\). For \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{m}\), \(\mathbf{x}\prec\mathbf{y}\) and \(\mathbf{x}\preceq\mathbf{y}\) mean \(\mathbf{y}-\mathbf{x}\in\mathbb{R}_{++}^{m}\) and \(\mathbf{y}-\mathbf{x}\in\mathbb{R}_{+}^{m}\), respectively. For a symmetric matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\), \(\mathbf{A}\succ 0\) and \(\mathbf{A}\succeq 0\) denote positive definiteness and positive semidefiniteness (PSD), respectively. \(\mathbf{I}_{D}\), \(\mathbf{J}_{D}\), and \(\mathbf{0}_{D}\) are \(D\times D\) identity, all-ones, and all-zeros matrices. 
\(\Lambda_{i}(\mathbf{A})\) stands for the \(i\)-th singular value of \(\mathbf{A}\). Matrix norms are defined as \(\|\mathbf{A}\|_{1}=\sum_{ij}|a_{ij}|\), \(\|\mathbf{A}\|=\max_{i}\Lambda_{i}(\mathbf{A})\), and \(\|\mathbf{A}\|_{\mathrm{F}}:=(\sum_{ij}|a_{ij}|^{2})^{1/2}\). We introduce some preliminaries on manifold optimization [1; 6; 8]. Given a PSD matrix \(\mathbf{B}\in\mathbb{R}^{D\times D}\), the generalized Stiefel manifold is defined as \[\mathtt{St}(D,R,\mathbf{B})=\left\{\mathbf{Z}\in\mathbb{R}^{D\times R}\;\big{|}\;\mathbf{Z}^{\top}\mathbf{B}\mathbf{Z}=\mathbf{I}_{R}\right\}. \tag{1}\] The tangent space of the manifold \(\mathcal{M}=\mathtt{St}(D,R,\mathbf{B})\) at \(\mathbf{Z}\in\mathcal{M}\) is given by \[\mathcal{T}_{\mathbf{Z}}\mathcal{M}=\left\{\mathbf{W}\in\mathbb{R}^{D\times R}\big{|}\;\mathbf{Z}^{\top}\mathbf{B}\mathbf{W}+\mathbf{W}^{\top}\mathbf{B}\mathbf{Z}=\mathbf{0}_{R}\right\}. \tag{2}\] The tangent bundle of a smooth manifold \(\mathcal{M}\), which consists of \(\mathcal{T}_{\mathbf{Z}}\mathcal{M}\) at all \(\mathbf{Z}\in\mathcal{M}\), is defined as \[\mathcal{T}\mathcal{M}=\left\{(\mathbf{Z},\mathbf{W})\big{|}\;\mathbf{Z}\in\mathcal{M},\;\mathbf{W}\in\mathcal{T}_{\mathbf{Z}}\mathcal{M}\right\}. \tag{3}\] **Definition 1**.: _A retraction on a differentiable manifold \(\mathcal{M}\) is a smooth mapping from its tangent bundle \(\mathcal{T}\mathcal{M}\) to \(\mathcal{M}\) that satisfies the following conditions, with \(R^{\mathbf{Z}}\) being the restriction of \(R\) to \(\mathcal{T}_{\mathbf{Z}}\mathcal{M}\):_ 1. \(R^{\mathbf{Z}}(\mathbf{0})=\mathbf{Z}\)_, for all_ \(\mathbf{Z}\in\mathcal{M}\)_, where_ \(\mathbf{0}\) _denotes the zero element of_ \(\mathcal{T}_{\mathbf{Z}}\mathcal{M}\)_._ 2. _For any_ \(\mathbf{Z}\in\mathcal{M}\)_, it holds that_ \(\lim_{\mathcal{T}_{\mathbf{Z}}\mathcal{M}\ni\boldsymbol{\xi}\to 0}\frac{\|R^{\mathbf{Z}}(\boldsymbol{\xi})-(\mathbf{Z}+\boldsymbol{\xi})\|_{F}}{\|\boldsymbol{\xi}\|_{F}}=0\)_._ In the numerical experiments, this work employs a generalized polar decomposition-based retraction. Given a PSD matrix \(\mathbf{B}\in\mathbb{R}^{D\times D}\), for any \(\boldsymbol{\xi}\in\mathcal{T}_{\mathbf{Z}}\mathcal{M}\) with \(\mathcal{M}=\mathtt{St}(D,R,\mathbf{B})\), it is defined as: \[R^{\mathbf{Z}}(\boldsymbol{\xi})=\bar{\mathbf{U}}(\mathbf{Q}\boldsymbol{\Lambda}^{-\frac{1}{2}}\mathbf{Q}^{\top})\bar{\mathbf{V}}^{\top}, \tag{4}\] where \(\bar{\mathbf{U}}\Sigma\bar{\mathbf{V}}^{\top}=\boldsymbol{\xi}\) is the singular value decomposition of \(\boldsymbol{\xi}\), and \(\mathbf{Q}\), \(\boldsymbol{\Lambda}\) are obtained from the eigenvalue decomposition \(\mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^{\top}=\bar{\mathbf{U}}^{\top}\mathbf{B}\bar{\mathbf{U}}\). Further details on retraction choices are in Appendix A.1.

### Correlation Disparity Error As previously mentioned, applying CCA to the entire dataset could lead to a biased result, as some groups might dominate the analysis while others are overlooked. To avoid this, we can perform CCA separately on each group and compare the results. Indeed, we can compare the performance of CCA on each group's data with the performance of CCA on the whole dataset, which includes all groups' data. The goal is to find a balance between the benefits and sacrifices of different groups so that each group's contribution to the CCA analysis is treated fairly.
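To make the retraction (4) introduced above concrete, here is a minimal NumPy sketch. One point is left implicit in (4): for the retraction to satisfy \(R^{\mathbf{Z}}(\mathbf{0})=\mathbf{Z}\), it is natural to apply the formula to the moved point \(\mathbf{Z}+\boldsymbol{\xi}\); that reading is our assumption, and the sketch is illustrative rather than the authors' implementation.

```python
import numpy as np

def generalized_polar_retraction(Z, xi, B):
    """Retraction onto St(D, R, B) = {Z : Z^T B Z = I_R}, following Eq. (4).

    Assumption (ours): Eq. (4) is applied to the moved point W = Z + xi,
    which gives R^Z(0) = Z as required by Definition 1.
    """
    W = Z + xi
    Ubar, _, Vbar_t = np.linalg.svd(W, full_matrices=False)  # thin SVD of W
    lam, Q = np.linalg.eigh(Ubar.T @ B @ Ubar)                # Q Lam Q^T = Ubar^T B Ubar
    middle = Q @ np.diag(lam ** -0.5) @ Q.T
    return Ubar @ middle @ Vbar_t                             # lies on St(D, R, B)

# sanity check: the output satisfies the generalized Stiefel constraint
rng = np.random.default_rng(0)
D, R = 6, 2
M = rng.standard_normal((D, D))
B = M @ M.T + np.eye(D)            # a positive definite "metric" matrix
Z0 = generalized_polar_retraction(np.zeros((D, R)), rng.standard_normal((D, R)), B)
xi = rng.standard_normal((D, R))   # not necessarily tangent; the map still lands on the manifold
Z1 = generalized_polar_retraction(Z0, xi, B)
print(np.allclose(Z1.T @ B @ Z1, np.eye(R), atol=1e-8))      # True
```

The check only verifies the constraint \(\mathbf{Z}^{\top}\mathbf{B}\mathbf{Z}=\mathbf{I}_{R}\); smoothness and the first-order condition in Definition 1 hold by the algebra behind Eq. (4).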
In particular, suppose the datasets \(\mathbf{X}\in\mathbb{R}^{N\times D_{x}}\) and \(\mathbf{Y}\in\mathbb{R}^{N\times D_{y}}\), collected on the same set of \(N\) observations, belong to \(K\) different groups \(\{(\mathbf{X}^{k},\mathbf{Y}^{k})\}_{k=1}^{K}\) with \(\mathbf{X}^{k}\in\mathbb{R}^{N_{k}\times D_{x}}\) and \(\mathbf{Y}^{k}\in\mathbb{R}^{N_{k}\times D_{y}}\), based on demographics or some other semantically meaningful clustering. These groups need not be mutually exclusive; each group can be defined as a different weighting of the data. To determine how each group is affected by F-CCA, we can compare the structure learned from each group's data \((\mathbf{X}^{k},\mathbf{Y}^{k})\) with the structure learned from all groups' data combined \((\mathbf{X},\mathbf{Y})\). A fair CCA approach seeks to balance the benefits and drawbacks of each group's contribution to the analysis. Specifically, while the global subspaces \(\mathbf{U}\in\mathbb{R}^{D_{x}\times R}\) and \(\mathbf{V}\in\mathbb{R}^{D_{y}\times R}\) are learned from all data, for the \(k\)-th group dataset \((\mathbf{X}^{k},\mathbf{Y}^{k})\) we can also identify the group-specific (local) weights \((\mathbf{U}^{k},\mathbf{V}^{k})\) that perform best on that dataset alone. Thus, an F-CCA algorithm should learn global weights \((\mathbf{U},\mathbf{V})\) from all data points while ensuring that the correlation each group attains under these global weights is comparable to the correlation attained under the group-specific subspaces learned from its own data alone. To define these fairness criteria, we introduce correlation disparity error as follows: **Definition 2** (**Correlation Disparity Error**).: _Consider a pair of datasets \((\mathbf{X},\mathbf{Y})\) with \(K\) sensitive groups, with data matrices \(\{(\mathbf{X}^{k},\mathbf{Y}^{k})\}_{k=1}^{K}\) representing each sensitive group's data samples. Then, for any \((\mathbf{U},\mathbf{V})\), the correlation disparity error for each sensitive group \(k\in[K]\) is defined as:_ \[\mathcal{E}^{k}\left(\mathbf{U},\mathbf{V}\right):=\mathrm{trace}\left({\mathbf{U}^{k,\star}}^{\top}{\mathbf{X}^{k}}^{\top}\mathbf{Y}^{k}\mathbf{V}^{k,\star}\right)-\mathrm{trace}\left({\mathbf{U}^{\top}}{\mathbf{X}^{k}}^{\top}\mathbf{Y}^{k}\mathbf{V}\right),\qquad 1\leq k\leq K. \tag{5}\] _Here, \((\mathbf{U}^{k,\star},\mathbf{V}^{k,\star})\) is the maximizer of the following group-specific CCA problem:_ \[\text{maximize}\ \ \mathrm{trace}\left({\mathbf{U}^{k}}^{\top}{\mathbf{X}^{k}}^{\top}{\mathbf{Y}^{k}}\mathbf{V}^{k}\right)\ \ \text{subj. to}\ \ \ {\mathbf{U}^{k}}^{\top}{\mathbf{X}^{k}}^{\top}{\mathbf{X}^{k}}\mathbf{U}^{k}={\mathbf{V}^{k}}^{\top}{\mathbf{Y}^{k}}^{\top}{\mathbf{Y}^{k}}\mathbf{V}^{k}={\mathbf{I}_{R}}. \tag{6}\] This measure shows how much correlation is sacrificed by any global \((\mathbf{U},\mathbf{V})\) relative to the optimal local \((\mathbf{U}^{k,\star},\mathbf{V}^{k,\star})\) that can be learned from the data points \((\mathbf{X}^{k},\mathbf{Y}^{k})\) alone. Using Definition 2, we can define F-CCA as follows: **Definition 3** (**Fair CCA**).: _A CCA pair \((\mathbf{U}^{\star},\mathbf{V}^{\star})\) is called fair if the correlation disparity error among \(K\) different groups is equal, i.e.,_ \[\mathcal{E}^{k}\left(\mathbf{U}^{\star},\mathbf{V}^{\star}\right)=\mathcal{E}^{s}\left(\mathbf{U}^{\star},\mathbf{V}^{\star}\right),\qquad\forall k\neq s,\quad k,s\in[K].
\tag{7}\] _A CCA pair \((\mathbf{U}^{\star},\mathbf{V}^{\star})\) that achieves the same disparity error for all groups is called a fair CCA._ Next, we introduce the concept of pairwise correlation disparity error for CCA, which measures the variation in correlation disparity among different groups. **Definition 4** (**Pairwise Correlation Disparity Error**).: _The pairwise correlation disparity error for any global \((\mathbf{U},\mathbf{V})\) and group-specific subspaces \(\{(\mathbf{U}^{k,\star},\mathbf{V}^{k,\star})\}_{k=1}^{K}\) is defined as_ \[\Delta^{k,s}\left(\mathbf{U},\mathbf{V}\right):=\phi\left(\mathcal{E}^{k}\left(\mathbf{U},\mathbf{V}\right)-\mathcal{E}^{s}\left(\mathbf{U},\mathbf{V}\right)\right),\qquad\forall k\neq s,\quad k,s\in[K]. \tag{8}\] _Here, \(\phi:\mathbb{R}\to\mathbb{R}_{+}\) is a penalty function such as \(\phi(x)=\exp(x)\), \(\phi(x)=x^{2}\), or \(\phi(x)=|x|\)._ The motivation for incorporating disparity error regularization in our approach can be attributed to the work by [40; 55] in the context of PCA. To facilitate convergence analysis, we will primarily consider smooth penalization functions, such as squared or exponential penalties.

### A Multi-Objective Framework for Fair CCA In this section, we introduce an optimization framework for balancing correlation and disparity errors. Let \(f_{1}\left(\mathbf{U},\mathbf{V}\right):=-\operatorname{trace}\left(\mathbf{U}^{\top}\mathbf{X}^{\top}\mathbf{Y}\mathbf{V}\right),f_{2}\left(\mathbf{U},\mathbf{V}\right):=\Delta^{1,2}\left(\mathbf{U},\mathbf{V}\right),\ldots,f_{M}\left(\mathbf{U},\mathbf{V}\right):=\Delta^{K-1,K}\left(\mathbf{U},\mathbf{V}\right)\). The optimization problem of finding an optimal Pareto point of \(\mathbf{F}\) is denoted by \[\begin{array}{cc}\underset{\mathbf{U},\mathbf{V}}{\text{minimize}}&\mathbf{F}(\mathbf{U},\mathbf{V}):=\left[f_{1}\left(\mathbf{U},\mathbf{V}\right),f_{2}\left(\mathbf{U},\mathbf{V}\right),\ldots,f_{M}\left(\mathbf{U},\mathbf{V}\right)\right],\\ \text{subj. to}&\mathbf{U}\in\mathcal{U},\;\;\;\mathbf{V}\in\mathcal{V},\end{array} \tag{9}\] where \(\mathcal{U}:=\{\mathbf{U}\big{|}\mathbf{U}^{\top}\mathbf{X}^{\top}\mathbf{X}\mathbf{U}=\mathbf{I}_{R}\}\) and \(\mathcal{V}:=\{\mathbf{V}\big{|}\mathbf{V}^{\top}\mathbf{Y}^{\top}\mathbf{Y}\mathbf{V}=\mathbf{I}_{R}\}\). A point \((\mathbf{U},\mathbf{V})\in\mathcal{U}\times\mathcal{V}\) satisfying \(\mathtt{Im}(\nabla\mathbf{F}(\mathbf{U},\mathbf{V}))\cap(-\mathbb{R}_{++}^{M})=\emptyset\) is called _critical Pareto_. Here, \(\mathtt{Im}\) denotes the image of the Jacobian of \(\mathbf{F}\). An _optimal Pareto point_ of \(\mathbf{F}\) is a point \((\mathbf{U}^{\star},\mathbf{V}^{\star})\in\mathcal{U}\times\mathcal{V}\) such that there exists no other \((\mathbf{U},\mathbf{V})\in\mathcal{U}\times\mathcal{V}\) with \(\mathbf{F}(\mathbf{U},\mathbf{V})\prec\mathbf{F}(\mathbf{U}^{\star},\mathbf{V}^{\star})\). Moreover, a point \((\mathbf{U}^{\star},\mathbf{V}^{\star})\in\mathcal{U}\times\mathcal{V}\) is a _weak optimal Pareto_ of \(\mathbf{F}\) if there is no \((\mathbf{U},\mathbf{V})\in\mathcal{U}\times\mathcal{V}\) with \(\mathbf{F}(\mathbf{U},\mathbf{V})\preceq\mathbf{F}(\mathbf{U}^{\star},\mathbf{V}^{\star})\). The multi-objective framework (9) addresses the challenge of handling conflicting objectives and achieving optimal trade-offs between them. To effectively solve Problem (9), we propose utilizing a gradient descent method on the Riemannian manifold that ensures convergence to a _Pareto stationary point_.
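As a concrete reading of Definitions 2 and 4 and of the objective vector in (9), the following NumPy sketch evaluates the global correlation term \(f_{1}\) and the pairwise disparities \(\Delta^{k,s}\) for a given global pair \((\mathbf{U},\mathbf{V})\). It assumes the group-specific optima \((\mathbf{U}^{k,\star},\mathbf{V}^{k,\star})\) have already been computed, e.g. by solving the group-specific problem (6) with any CCA solver (for instance, a per-group version of the whitening-and-SVD construction sketched in the introduction). The function names are illustrative, not the authors' API.

```python
import numpy as np

def group_correlation(X_k, Y_k, U, V):
    # trace(U^T X_k^T Y_k V): the correlation objective restricted to group k
    return np.trace(U.T @ X_k.T @ Y_k @ V)

def disparity_errors(groups, group_optima, U, V):
    """E^k(U, V) from Definition 2, for every group k.

    groups:       list of (X_k, Y_k) pairs
    group_optima: list of (U_k_star, V_k_star) pairs solving (6) per group
    """
    return [
        group_correlation(Xk, Yk, Uk, Vk) - group_correlation(Xk, Yk, U, V)
        for (Xk, Yk), (Uk, Vk) in zip(groups, group_optima)
    ]

def objective_vector(X, Y, groups, group_optima, U, V, phi=np.square):
    """The vector F(U, V) = [f_1, f_2, ..., f_M] of problem (9): f_1 is the
    negated global correlation, the rest are the pairwise disparities
    Delta^{k,s} = phi(E^k - E^s) from Definition 4 (here with phi(x) = x^2)."""
    E = disparity_errors(groups, group_optima, U, V)
    f1 = -np.trace(U.T @ X.T @ Y @ V)
    deltas = [
        phi(E[k] - E[s])
        for k in range(len(E)) for s in range(len(E)) if k < s
    ]
    return np.array([f1] + deltas)
```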
The proposed gradient descent algorithm for solving (9) is provided in **Algorithm 1**. For each \((\mathbf{U},\mathbf{V})\in\mathcal{U}\times\mathcal{V}\), let \(\mathbf{P}:=(\mathbf{P}^{\mathbf{u}},\mathbf{P}^{\mathbf{v}})\) with \(\mathbf{P}^{\mathbf{u}}\in\mathcal{T}_{\mathbf{U}}\mathcal{U}\) and \(\mathbf{P}^{\mathbf{v}}\in\mathcal{T}_{\mathbf{V}}\mathcal{V}\). The iterates \((\mathbf{P}^{\mathbf{u}}_{t},\mathbf{P}^{\mathbf{v}}_{t})\) in Step 4 are obtained by solving the following subproblem in the joint tangent plane \(\mathcal{T}_{\mathbf{U}}\mathcal{U}\times\mathcal{T}_{\mathbf{V}}\mathcal{V}\): \[\min_{\mathbf{P}\in\mathcal{T}_{\mathbf{U}}\mathcal{U}\times\mathcal{T}_{\mathbf{V}}\mathcal{V}}\;\;Q_{t}(\mathbf{P}),\;\;\text{where}\;\;Q_{t}(\mathbf{P}):=\left\{\max_{i\in[M]}\operatorname{trace}\left(\mathbf{P}^{\top}\nabla f_{i}((\mathbf{U}_{t},\mathbf{V}_{t}))\right)+\frac{1}{2}\|\mathbf{P}\|_{\mathrm{F}}^{2}\right\}. \tag{10}\] If \((\mathbf{U}_{t},\mathbf{V}_{t})\in\mathcal{U}\times\mathcal{V}\) is not a Pareto stationary point, Problem (10) has a unique nonzero solution \(\mathbf{P}_{t}\) (see Lemma 7), known as the _steepest descent direction_ for \(\mathbf{F}\) at \((\mathbf{U}_{t},\mathbf{V}_{t})\). In Steps 5 and 6, \(R^{\mathbf{u}}\) and \(R^{\mathbf{v}}\) denote the retractions that map the tangent spaces \(\mathcal{T}_{\mathbf{U}}\mathcal{U}\) and \(\mathcal{T}_{\mathbf{V}}\mathcal{V}\) back onto \(\mathcal{U}\) and \(\mathcal{V}\), respectively; refer to Definition 1. **Assumption A**.: _For a given subset \(\mathcal{S}\) of the tangent bundle \(\mathcal{T}\mathcal{U}\times\mathcal{T}\mathcal{V}\), there exists a constant \(L_{F}\) such that, for all \((\mathbf{Z},\mathbf{P})\in\mathcal{S}\), we have \(\mathbf{F}(R^{\mathbf{z}}(\mathbf{P}))\preceq\mathbf{F}(\mathbf{Z})+\boldsymbol{\nabla}+\left(L_{F}/2\right)\left\|\mathbf{P}\right\|_{\mathrm{F}}^{2}\mathbf{1}_{M},\) where \(\nabla_{i}:=\left\langle\nabla f_{i}(\mathbf{Z}),\mathbf{P}\right\rangle\), \(\boldsymbol{\nabla}:=[\nabla_{1},\cdots,\nabla_{M}]^{\top}\in\mathbb{R}^{M}\), and \(R^{\mathbf{z}}\) is the retraction._ The above assumption extends [8; A 4.3] to multi-objective optimization, and it always holds for the _exponential_ map (exponential retraction) if the gradient of \(\mathbf{F}\) is \(L_{F}\)-Lipschitz continuous [23; 6]. **Theorem 5**.: _Suppose Assumption A holds. Let \((\mathbf{U}_{t},\mathbf{V}_{t})\) be the sequence generated by MF-CCA. Let \(f_{i}^{*}:=\inf\{f_{i}(\mathbf{U},\mathbf{V}):\;(\mathbf{U},\mathbf{V})\in\mathcal{U}\times\mathcal{V}\}\) for all \(i\in[M]\) and define \(f_{i_{*}}(\mathbf{U}_{0},\mathbf{V}_{0})-f_{i_{*}}^{*}:=\min\left\{f_{i}(\mathbf{U}_{0},\mathbf{V}_{0})-f_{i}^{*}:\;i\in[M]\right\}\). If \(\eta_{t}^{\mathbf{u}}=\eta_{t}^{\mathbf{v}}=\eta\leq 1/L_{F}\) for all \(t\in\{0,\ldots,T-1\}\), then_ \[\min\Big\{\left\|\mathbf{P}_{t}\right\|_{\mathrm{F}}:\;t=0,\ldots,T-1\Big\}\leq\frac{2}{\eta}\left[\frac{f_{i_{*}}(\mathbf{U}_{0},\mathbf{V}_{0})-f_{i_{*}}^{*}}{T}\right]^{\frac{1}{2}}.\] Proof Sketch.: We employ Lemma 7 to establish the unique solution \(\mathbf{P}_{t}\) of subproblem (10). Lemmas 9 and 10 provide estimates for the decrease of the function \(\mathbf{F}\) along \(\mathbf{P}_{t}\): for any \(\eta_{t}\geq 0\), we have \(\mathbf{F}(\mathbf{U}_{t+1},\mathbf{V}_{t+1})\preceq\mathbf{F}(\mathbf{U}_{t},\mathbf{V}_{t})-\left(\eta_{t}-L_{F}\eta_{t}^{2}/2\right)\left\|\mathbf{P}_{t}\right\|_{\mathrm{F}}^{2}\mathbf{1}_{M}\).
Summing this inequality over \(t=0,1,\ldots,T-1\) and applying our step size condition yields the desired result. Theorem 5 provides a generalization of [8, Corollary 4.9] to multi-objective optimization, showing that the norm of the Pareto descent directions converges to zero. Consequently, the solutions produced by the algorithm converge to a stationary fair subspace. It is worth mentioning that multi-objective optimization in [23, 6] relies on the Riemannian exponential map, whereas the above theorem covers broader (and practical) retraction maps. ### A Single-Objective Framework for Fair CCA In this section, we introduce a straightforward and effective single-objective framework. This approach simplifies F-CCA optimization, lowers computational requirements, and allows for fine-tuning the fairness-accuracy trade-off using the hyperparameter \(\lambda\). Specifically, by employing a regularization parameter \(\lambda>0\), our proposed fairness model for F-CCA is expressed as follows: \[\begin{array}{ll}\underset{\mathbf{U},\mathbf{V}}{\text{minimize}}&f(\mathbf{U},\mathbf{V}):=-\operatorname{trace}\left(\mathbf{U}^{\top}\mathbf{X}^{\top}\mathbf{Y}\mathbf{V}\right)+\lambda\Delta\left(\mathbf{U},\mathbf{V}\right),\\ \text{subj. to}&\mathbf{U}\in\mathcal{U},\;\;\;\mathbf{V}\in\mathcal{V},\end{array} \tag{11}\] where \(\Delta\left(\mathbf{U},\mathbf{V}\right)=\sum_{i,j\in[K],i\neq j}\Delta^{i,j}\left(\mathbf{U},\mathbf{V}\right)\); see Definition 4. The choice of \(\lambda\) in the model determines the emphasis placed on different objectives. When \(\lambda\) is large, the model prioritizes fairness over minimizing subgroup errors. Conversely, if \(\lambda\) is small, the focus shifts towards minimizing subgroup correlation errors rather than achieving perfect fairness. In other words, it is possible to obtain perfectly fair CCA subspaces; however, this may come at the expense of larger errors within the subgroups. The constant \(\lambda\) in the model allows for a flexible trade-off between fairness and minimizing subgroup correlation errors, enabling us to find a balance based on the specific requirements and priorities of the problem at hand. The proposed gradient descent algorithm for solving (11) is provided as **Algorithm 2**. For each \((\mathbf{U},\mathbf{V})\in\mathcal{U}\times\mathcal{V}\), let \(\mathbf{G}:=(\mathbf{G}^{\mathbf{u}},\mathbf{G}^{\mathbf{v}})\) with \(\mathbf{G}^{\mathbf{u}}\in\mathcal{T}_{\mathbf{U}}\mathcal{U}\) and \(\mathbf{G}^{\mathbf{v}}\in\mathcal{T}_{\mathbf{V}}\mathcal{V}\). The iterates \((\mathbf{G}_{t}^{\mathbf{u}},\mathbf{G}_{t}^{\mathbf{v}})\) are obtained by solving the following problem in the joint tangent plane \(\mathcal{T}_{\mathbf{U}}\mathcal{U}\times\mathcal{T}_{\mathbf{V}}\mathcal{V}\): \[\min_{\mathbf{G}\in\mathcal{T}_{\mathbf{U}}\mathcal{U}\times\mathcal{T}_{\mathbf{V}}\mathcal{V}}\;\;q_{t}(\mathbf{G}),\;\;\text{where}\;\;q_{t}(\mathbf{G}):=\left\{\operatorname{trace}\left(\mathbf{G}^{\top}\nabla f((\mathbf{U}_{t},\mathbf{V}_{t}))\right)+\frac{1}{2}\|\mathbf{G}\|_{\mathrm{F}}^{2}\right\}. \tag{12}\] The solutions \((\mathbf{G}_{t}^{\mathbf{u}},\mathbf{G}_{t}^{\mathbf{v}})\) are maintained on the manifolds using the retraction operations \(R^{\mathbf{u}}\) and \(R^{\mathbf{v}}\).
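To give a concrete feel for the kind of update SF-CCA performs, the sketch below carries out one descent step: a Euclidean gradient of \(f\) is mapped into the tangent spaces of the generalized Stiefel manifolds \(\mathcal{U}\) and \(\mathcal{V}\), and the iterates are pulled back with a retraction. This is a minimal sketch under our own assumptions (a polar-type retraction, \(\phi(x)=x^{2}\), and hypothetical helper names), not the paper's Algorithm 2; the exact retraction, step-size rule, and subproblem handling in the paper may differ. The constants `local_corr` are \(\mathrm{trace}({\mathbf{U}^{k,\star}}^{\top}{\mathbf{X}^{k}}^{\top}\mathbf{Y}^{k}\mathbf{V}^{k,\star})\), which can be precomputed, e.g., with the `cca_weights` helper sketched earlier.

```python
import numpy as np

def sym(M):
    return 0.5 * (M + M.T)

def tangent_project(U, G, B):
    # One standard tangent-space map for {U : U^T B U = I_R}:
    # xi = G - U sym(U^T B G) satisfies U^T B xi + xi^T B U = 0.
    return G - U @ sym(U.T @ B @ G)

def retract(U, xi, B):
    # Polar-type retraction: R_U(xi) = (U + xi) [(U + xi)^T B (U + xi)]^{-1/2},
    # which lands back on {U : U^T B U = I_R}.
    M = U + xi
    S = M.T @ B @ M
    w, Q = np.linalg.eigh(S)
    return M @ Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T

def sfcca_step(U, V, groups, Sx, Sy, Sxy, local_corr, lam, eta):
    """One illustrative SF-CCA-style update for f = -trace(U^T Sxy V) + lam * Delta, phi(x) = x^2."""
    S_k = [Xk.T @ Yk for Xk, Yk in groups]
    E = np.array([c - np.trace(U.T @ S @ V) for c, S in zip(local_corr, S_k)])  # E^k(U, V)
    gU, gV = -Sxy @ V, -Sxy.T @ U              # Euclidean gradient of the correlation term
    K = len(groups)
    for i in range(K):
        for j in range(K):
            if i != j:                          # gradient of (E^i - E^j)^2
                gU += lam * 2.0 * (E[i] - E[j]) * (S_k[j] @ V - S_k[i] @ V)
                gV += lam * 2.0 * (E[i] - E[j]) * (S_k[j].T @ U - S_k[i].T @ U)
    Gu = tangent_project(U, -eta * gU, Sx)      # descent directions in the tangent spaces
    Gv = tangent_project(V, -eta * gV, Sy)
    return retract(U, Gu, Sx), retract(V, Gv, Sy)
```

Iterating `sfcca_step` with a decaying `eta` and stopping once the descent-direction norm is small mirrors the overall structure described above.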
**Assumption B**.: _For a subset \(\mathcal{S}\subseteq\mathcal{T}\mathcal{U}\times\mathcal{T}\mathcal{V}\), there exists a constant \(L_{f}\) such that for all \((\mathbf{Z},\mathbf{G})\in\mathcal{S}\), \(f(R^{\mathbf{z}}(\mathbf{G}))\leq f(\mathbf{Z})+\left\langle\nabla f(\mathbf{Z}),\mathbf{G}\right\rangle+\left(L_{f}/2\right)\left\|\mathbf{G}\right\|_{\mathrm{F}}^{2},\) with \(R^{\mathbf{z}}\) as the retraction._ **Theorem 6**.: _Suppose Assumption B holds. Let \((\mathbf{U}_{t},\mathbf{V}_{t})\) be the sequence generated by_ SF-CCA_. Let \(f^{*}:=\inf\{f(\mathbf{U},\mathbf{V}):\;(\mathbf{U},\mathbf{V})\in\mathcal{U}\times\mathcal{V}\}\). If \(\eta_{t}^{\mathbf{u}}=\eta_{t}^{\mathbf{v}}=\eta\leq 1/L_{f}\) for all \(t\in[T]\), then_ \[\min\left\{\|\mathbf{G}_{t}\|_{\mathrm{F}}:\;t=0,\ldots,T-1\right\}\leq\frac{2}{\eta}\left[\frac{f(\mathbf{U}_{0},\mathbf{V}_{0})-f^{*}}{T}\right]^{\frac{1}{2}}.\] **Comparison between MF-CCA and SF-CCA:** MF-CCA addresses conflicting objectives and achieves optimal trade-offs automatically, but it necessitates the inclusion of \(\binom{K}{2}\) additional objectives. SF-CCA, on the other hand, provides a simpler approach but requires tuning an extra hyperparameter \(\lambda\). When choosing between the two methods, it is crucial to consider the trade-off between complexity and simplicity, as well as the number of objectives and the need for hyperparameter tuning. ## 4 Experiments In this section, we provide empirical results showcasing the efficacy of the proposed algorithms. ### Evaluation Criteria and Selection of Tuning Parameter F-CCA's performance is evaluated on correlation and fairness for each dimension of the subspaces. Let \(\mathbf{U}=[\mathbf{u}_{1},\cdots,\mathbf{u}_{R}]\in\mathbb{R}^{D_{x}\times R}\) and \(\mathbf{V}=[\mathbf{v}_{1},\cdots,\mathbf{v}_{R}]\in\mathbb{R}^{D_{y}\times R}\). The \(r\)-th canonical correlation is defined as follows: \[\rho_{r}=\frac{\mathbf{u}_{r}^{\top}\mathbf{X}^{\top}\mathbf{Y}\mathbf{v}_{r}}{\sqrt{\mathbf{u}_{r}^{\top}\mathbf{X}^{\top}\mathbf{X}\mathbf{u}_{r}\,\mathbf{v}_{r}^{\top}\mathbf{Y}^{\top}\mathbf{Y}\mathbf{v}_{r}}},\quad r=1,\ldots,R.\] (13a) Next, in terms of fairness, we establish the following two key measures: \[\Delta_{\max,r}=\max_{i,j\in[K]}|\mathcal{E}^{i}(\mathbf{u}_{r},\mathbf{v}_{r})-\mathcal{E}^{j}(\mathbf{u}_{r},\mathbf{v}_{r})|,\quad r=1,\ldots,R,\] (13b) \[\Delta_{\text{sum},r}=\sum_{i,j\in[K]}|\mathcal{E}^{i}(\mathbf{u}_{r},\mathbf{v}_{r})-\mathcal{E}^{j}(\mathbf{u}_{r},\mathbf{v}_{r})|,\quad r=1,\ldots,R.\] (13c) Here, \(\Delta_{\max,r}\) measures the maximum disparity error, while \(\Delta_{\text{sum},r}\) represents the aggregate disparity error. The aim is to reach \(\Delta_{\max,r}\) and \(\Delta_{\text{sum},r}\) of \(0\) without sacrificing correlation (\(\rho_{r}\)) compared to CCA. We conduct a detailed analysis using the component-wise measurements (13) instead of matrix versions; for more discussion, see Appendix C.2. The canoncorr function from MATLAB and [35] is used to solve (CCA). For MF-CCA and SF-CCA, the learning rate is searched on a grid in \(\{1e-1,5e-2,1e-2,\ldots,1e-5\}\), and for SF-CCA, \(\lambda\) is searched on a grid in \(\{1e-2,1e-1,0.5,1,2,\ldots,10\}\). A sensitivity analysis of \(\lambda\) is provided in Appendix B.2. The learning rate decreases with the square root of the iteration number. The algorithms terminate when the descent-direction norm is below \(1e-4\).
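As a concrete reading of (13a)-(13c), the sketch below evaluates a trained pair \((\mathbf{U},\mathbf{V})\) per projection dimension \(r\). It is our own illustration with hypothetical helper names, not the paper's evaluation code (which uses MATLAB's canoncorr for the CCA baseline); `local_cols[k]` is assumed to hold group-specific weights \((\mathbf{U}^{k,\star},\mathbf{V}^{k,\star})\), e.g., from a per-group CCA.

```python
import numpy as np

def per_dimension_metrics(X, Y, groups, U, V, local_cols):
    """rho_r (13a), Delta_max_r (13b), Delta_sum_r (13c) for r = 1..R."""
    R = U.shape[1]
    rho, d_max, d_sum = [], [], []
    for r in range(R):
        u, v = U[:, r:r+1], V[:, r:r+1]
        num = float(u.T @ X.T @ Y @ v)
        den = np.sqrt(float(u.T @ X.T @ X @ u) * float(v.T @ Y.T @ Y @ v))
        rho.append(num / den)                                    # (13a)
        E = []                                                   # E^k(u_r, v_r) per Definition 2
        for (Xk, Yk), (Uk, Vk) in zip(groups, local_cols):
            uk, vk = Uk[:, r:r+1], Vk[:, r:r+1]
            E.append(float(uk.T @ Xk.T @ Yk @ vk) - float(u.T @ Xk.T @ Yk @ v))
        E = np.array(E)
        gaps = np.abs(E[:, None] - E[None, :])                   # |E^i - E^j| for all i, j
        d_max.append(gaps.max())                                 # (13b)
        d_sum.append(gaps.sum())                                 # (13c)
    return np.array(rho), np.array(d_max), np.array(d_sum)
```

The ideal outcome described above corresponds to `d_max` and `d_sum` close to zero while `rho` stays essentially unchanged relative to plain CCA.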
### Dataset #### 4.2.1 Synthetic Data Following [44; 4], our synthetic data are generated using the Gaussian distribution \[\begin{pmatrix}\mathbf{X}\\ \mathbf{Y}\end{pmatrix}\sim N\left(\begin{bmatrix}\mu_{\mathbf{X}}\\ \mu_{\mathbf{Y}}\end{bmatrix},\begin{bmatrix}\mathbf{\Sigma_{X}}&\mathbf{\Sigma_{XY}}\\ \mathbf{\Sigma_{YX}}&\mathbf{\Sigma_{Y}}\end{bmatrix}\right).\] Here, \(\mu_{\mathbf{X}}\in\mathbb{R}^{D_{x}\times 1}\) and \(\mu_{\mathbf{Y}}\in\mathbb{R}^{D_{y}\times 1}\) are the means of the data matrices \(\mathbf{X}\) and \(\mathbf{Y}\), respectively; the covariance matrices \(\mathbf{\Sigma_{X}},\mathbf{\Sigma_{Y}}\) and the cross-covariance matrix \(\mathbf{\Sigma_{XY}}\) are constructed as follows. Given ground-truth projection matrices \(\mathbf{U}\in\mathbb{R}^{D_{x}\times R},\mathbf{V}\in\mathbb{R}^{D_{y}\times R}\) and canonical correlations \(\boldsymbol{\rho}=(\rho_{1},\rho_{2},\ldots,\rho_{R})\) as defined in (13a), let \(\mathbf{U}=\mathbf{Q_{X}}\mathbf{R_{X}}\) and \(\mathbf{V}=\mathbf{Q_{Y}}\mathbf{R_{Y}}\) be the QR decompositions of \(\mathbf{U}\) and \(\mathbf{V}\); then we have \[\mathbf{\Sigma_{XY}} =\mathbf{\Sigma_{X}U}\text{ diag}(\boldsymbol{\rho})\ \mathbf{V}^{\top}\mathbf{\Sigma_{Y}}, \tag{14a}\] \[\mathbf{\Sigma_{X}} =\mathbf{Q_{X}}\mathbf{R_{X}}^{-\top}\mathbf{R_{X}}^{-1}\mathbf{Q_{X}}^{\top}+\tau_{x}\mathbf{T_{X}}(\mathbf{I}_{D_{x}}-\mathbf{Q_{X}}\mathbf{Q_{X}}^{\top})\mathbf{T_{X}}^{\top},\] (14b) \[\mathbf{\Sigma_{Y}} =\mathbf{Q_{Y}}\mathbf{R_{Y}}^{-\top}\mathbf{R_{Y}}^{-1}\mathbf{Q_{Y}}^{\top}+\tau_{y}\mathbf{T_{Y}}(\mathbf{I}_{D_{y}}-\mathbf{Q_{Y}}\mathbf{Q_{Y}}^{\top})\mathbf{T_{Y}}^{\top}. \tag{14c}\] Here, \(\mathbf{T_{X}}\in\mathbb{R}^{D_{x}\times D_{x}}\) and \(\mathbf{T_{Y}}\in\mathbb{R}^{D_{y}\times D_{y}}\) are randomly generated from normal distributions, and \(\tau_{x}=1\) and \(\tau_{y}=0.001\) are scaling hyperparameters. For subgroup distinction, we added noise to the canonical vectors and adjusted the sample sizes, with 300, 350, 400, 450, and 500 observations in the five subgroups, respectively. In the numerical experiment, different canonical correlations are assigned to each subgroup alongside two global canonical vectors \(\mathbf{U}\) and \(\mathbf{V}\) to generate five distinct subgroups. #### 4.2.2 Real Data **National Health and Nutrition Examination Survey (NHANES).** We utilized the 2005-2006 subset of the NHANES database [https://www.cdc.gov/nchs/nhanes](https://www.cdc.gov/nchs/nhanes), including physical measurements and self-reported questionnaires from participants. We partitioned the data into two distinct subsets: one with 96 phenotypic measures and the other with 55 environmental measures. Our objective was to apply F-CCA to explore the interplay between phenotypic and environmental factors in contributing to health outcomes, considering the impact of education. Thus, we segmented the dataset into three subgroups based on educational attainment (i.e., lower than high school, high school, higher than high school), with 2,495, 2,203, and 4,145 observations in the three subgroups, respectively. **Mental Health and Academic Performance Survey (MHAAPS).** This dataset is available at [https://github.com/marks/convert_to_csv/tree/master/sample_data](https://github.com/marks/convert_to_csv/tree/master/sample_data). It consists of three psychological variables and four academic variables, as well as sex information, for a cohort of 600 college freshmen (327 females and 273 males).
The primary objective of this investigation revolves around examining the interrelationship between the psychological variables and academic indicators, with careful consideration given to the potential influence exerted by sex. **Alzheimer's Disease Neuroimaging Initiative (ADNI).** We utilized AV45 (amyloid) and AV1451 (tau) positron emission tomography (PET) data from the ADNI database ([http://adni.loni.usc.edu](http://adni.loni.usc.edu)) [73; 74]. ADNI data are analyzed for fairness in medical imaging classification [41; 53; 81], and sex disparities in ADNI's CCA study can harm generalizability, validity, and intervention tailoring. We utilized F-CCA to account for sex differences. Our experiment links 52 AV45 and 52 AV1451 features in 496 subjects (255 females, 241 males). ### Results and Discussion In the simulation experiment, we follow the methodology described in Section 4.2.1 to generate two sets of variables, each containing two subgroups of equal size. Canonical weights are trained and used to project the two sets of variables into a 2-dimensional space using CCA, SF-CCA, and MF-CCA. From Figure 2, it is clear that the angle between the distributions of the two subgroups, as projected by SF-CCA and MF-CCA, is smaller in comparison. This result indicates that F-CCA has the ability to reduce the disparity between distinct subgroups. Table 1 shows the quantitative performance of the three models: CCA, MF-CCA, and SF-CCA. They are evaluated based on \(\rho_{r}\), \(\Delta_{\max,r}\), and \(\Delta_{\text{sum},r}\) defined in (13) across five experimental sets. Table 2 displays the mean runtime of each model. Several key observations emerge from the analysis. Firstly, MF-CCA and SF-CCA demonstrate substantial improvements in fairness compared to CCA. However, it is important to note that F-CCA, employed in both MF-CCA and SF-CCA, compromises some degree of correlation due to its focus on fairness considerations during computations. Secondly, \begin{table} \begin{tabular}{c|c|c c c|c c c|c c} \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{**Dataset**}} & \multicolumn{3}{c|}{\(\rho_{r}\uparrow\)} & \multicolumn{3}{c|}{\(\Delta_{\max,r}\downarrow\)} & \multicolumn{3}{c}{\(\Delta_{\text{sum},r}\downarrow\)} \\ \cline{3-10} & \((r)\) & CCA & MF-CCA & SF-CCA & CCA & MF-CCA & SF-CCA & CCA & MF-CCA & SF-CCA \\ \hline Synthetic & 2 & **0.7533** & 0.7475 & 0.7309 & 0.3555 & 0.2866 & **0.2241** & 3.3802 & 2.8119 & **2.2722** \\ Data & 5 & **0.4717** & 0.4681 & 0.4581 & 0.4385 & 0.3313 & **0.2424** & 4.1649 & 3.1628 & **2.2304** \\ \hline NHANES & 2 & **0.6392** & 0.6360 & 0.6334 & 0.0485 & 0.0359 & **0.0245** & 0.1941 & 0.1435 & **0.0980** \\ & 5 & **0.4416** & 0.4393 & 0.4392 & 0.1001 & **0.0818** & 0.0824 & 0.4003 & **0.3272** & 0.3297 \\ \hline MHAAPS & 1 & **0.4464** & 0.4451 & 0.4455 & 0.0093 & 0.0076 & **0.0044** & 0.0187 & 0.0152 & **0.0088** \\ & 2 & **0.1534** & 0.1529 & 0.1526 & 0.0061 & 0.0038 & **0.0019** & 0.0122 & 0.0075 & **0.0039** \\ \hline ADNI & 2 & **0.7778** & 0.7776 & 0.7753 & 0.0131 & 0.0119 & **0.0064** & 0.0263 & 0.0238 & **0.0127** \\ & 5 & **0.6810** & 0.6798 & 0.6770 & 0.0477 & 0.0399 & **0.0324** & 0.0954 & 0.0799 & **0.0648** \\ \hline \hline \end{tabular} \end{table} Table 1: Numerical results in terms of Correlation (\(\rho_{r}\)), Maximum Disparity (\(\Delta_{\max,r}\)), and Aggregate Disparity (\(\Delta_{\text{sum},r}\)) metrics. Best values are in bold, and second-best are underlined. 
We focus on the initial five projection dimensions, but present only two dimensions here; results for the other dimensions are in the supplementary material. “\(\uparrow\)” means the larger the better and “\(\downarrow\)” means the smaller the better. Note that MHAAPS has only 3 features, so we report results for its 1 and 2 dimensions. \begin{table} \begin{tabular}{c|c c c} \hline \hline **Dataset** & **CCA** & **MF-CCA** & **SF-CCA** \\ \hline Synthetic Data & 0.0239\(\pm\)0.0026 & 109.0693\(\pm\)5.5418 & 29.1387\(\pm\)2.0828 \\ \hline NHANES & 0.0483\(\pm\)0.0059 & 42.3186\(\pm\)1.9045 & 14.9156\(\pm\)1.8941 \\ \hline MHAAPS & 0.0021\(\pm\)0.0047 & 3.5235\(\pm\)2.0945 & 0.8238\(\pm\)0.8155 \\ \hline ADNI & 0.0039\(\pm\)0.0032 & 2.7297\(\pm\)0.5136 & 1.8489\(\pm\)1.0519 \\ \hline \hline \end{tabular} \end{table} Table 2: Mean computation time in seconds (\(\pm\)std) of 10 repeated experiments for \(R=5\) on the real dataset and \(R=7\) on the synthetic dataset. Experiments are run on Intel(R) Xeon(R) CPU E5-2660. SF-CCA outperforms MF-CCA in terms of fairness improvement, although it sacrifices correlation. This highlights the effectiveness of the single-objective optimization approach in SF-CCA. Moreover, the datasets consist of varying subgroup quantities (5, 3, 2, and 2) and an imbalanced number of samples in distinct subgroups. F-CCA consistently performs well across these datasets, confirming its inherent scalability. Lastly, although SF-CCA requires more effort to tune hyperparameters, SF-CCA still exhibits a notable advantage in terms of time complexity compared to MF-CCA, demonstrating computational efficiency. Disparities among various CCA methods are visually represented in Figure 3. Notably, the conventional CCA consistently demonstrates the highest disparity error. Conversely, SF-CCA and MF-CCA consistently outperform CCA across all datasets, underscoring their efficacy in promoting fairness within analytical frameworks. In Table 1, we define the _percentage change_ of correlation (\(\rho_{r}\)), maximum disparity gap (\(\Delta_{\max,r}\)), and aggregate disparity (\(\Delta_{\text{sum},r}\)), respectively, as follows: \(P\rho_{r}:=(\rho_{r}\text{ of F-CCA }-\rho_{r}\text{ of CCA})/(\rho_{r}\text{ of CCA})\times 100\), \(P\Delta_{\max,r}:=-(\Delta_{\max,r}\text{ of F-CCA }-\Delta_{\max,r}\text{ of CCA})/(\Delta_{\max,r}\text{ of CCA})\times 100\), and \(P\Delta_{\text{sum},r}:=-(\Delta_{\text{sum},r}\text{ of F-CCA }-\Delta_{\text{sum},r}\text{ of CCA})/(\Delta_{\text{sum},r}\text{ of CCA})\times 100\). Here, F-CCA is replaced with either MF-CCA or SF-CCA to obtain the percentage change for MF-CCA or SF-CCA, respectively. Figure 3: Aggregate disparity of CCA, MF-CCA, and SF-CCA (results from Table 1). Figure 2: Scatter plot of the synthetic data points after projection to the 2-dimensional space. The distributions of the two groups after projection by CCA are orthogonal to each other. Our SF-CCA and MF-CCA can make the distributions of the two groups close to each other. Figure 4 illustrates the percentage changes of each dataset. \(P\rho_{r}\) is slight, while the \(P\Delta_{\max,r}\) and \(P\Delta_{\text{sum},r}\) changes are substantial, signifying fairness improvement without significant accuracy sacrifice. ## 5 Conclusion, Limitations, and Future Directions We propose F-CCA, a novel framework to mitigate unfairness in CCA.
F-CCA aims to rectify the bias of CCA by learning global projection matrices from the entire dataset, concurrently guaranteeing that these matrices generate correlation levels akin to group-specific projection matrices. Experiments show that F-CCA is effective in reducing correlation disparity error without sacrificing much correlation. We discuss potential extensions and future problems stemming from our work. * While F-CCA effectively reduces unfairness while maintaining CCA model accuracy, its potential to achieve a minimum achievable disparity correlation remains unexplored. A theoretical exploration of this aspect could provide valuable insights. * F-CCA holds promise for extensions to diverse domains, including multiple modalities [80], deep CCA [3], tensor CCA [44], and sparse CCA [25]. However, these extensions necessitate novel formulations and in-depth analysis. * Our approach of multi-objective optimization on smooth manifolds may find relevance in other problems, such as fair PCA [55]. Further, bilevel optimization approaches [37; 68; 65] can be designed on a smooth manifold to learn a single Pareto-efficient solution and provide an automatic trade-off between accuracy and fairness. * With applications encompassing clustering, classification, and manifold learning, F-CCA ensures fairness when employing CCA techniques for these downstream tasks. It can also be jointly analyzed with fair clustering [15; 66; 34] and fair classification [78; 18]. ## 6 Acknowledgements This work was supported in part by the NIH grants U01 AG066833, U01 AG068057, RF1 AG063481, R01 LM013463, P30 AG073105, and U01 CA274576, and the NSF grant IIS 1837964. The ADNI data were obtained from the Alzheimer's Disease Neuroimaging Initiative database ([https://adni.loni.usc.edu](https://adni.loni.usc.edu)), funded by NIH U01 AG024904. Moreover, the NHANES data were sourced from the NHANES database ([https://www.cdc.gov/nchs/nhanes](https://www.cdc.gov/nchs/nhanes)). We appreciate the reviewers' valuable feedback, which significantly improved this paper. Figure 4: Percentage change from CCA to F-CCA (results from Table 1). Each dataset panel shows two cases with projection dimensions (\(r\)). \(P\rho_{r}\) is slight, while \(P\Delta_{\max,r}\) and \(P\Delta_{\text{sum},r}\) changes are substantial, signifying fairness improvement without significant accuracy sacrifice.
2301.06931
Isomorphisms of groups of periodic infinite matrices
We describe isomorphisms of groups of several periodic infinite matrices and isomorphisms of groups of invertible elements of unital locally matrix algebras.
Oksana Bezushchak
2022-12-08T15:37:50Z
http://arxiv.org/abs/2301.06931v1
# Isomorphisms of groups of periodic infinite matrices ###### Abstract. We describe isomorphisms of groups of several periodic infinite matrices and isomorphisms of groups of invertible elements of unital locally matrix algebras. Key words and phrases:infinite matrix; isomorphism; locally matrix algebra; Steinitz number 2020 Mathematics Subject Classification: 20E34, 20H20 ## Introduction Let \(\mathbb{N}\) be the set of positive integers. A _Steinitz number_[13] is an infinite formal product of the form \[\prod_{p\in\mathbb{P}}p^{r_{p}},\] where \(\mathbb{P}\) is the set of all primes, \(r_{p}\in\mathbb{N}\cup\{0,\infty\}\) for all \(p\in\mathbb{P}\). We can define the product of two Steinitz numbers by the rule: \[\prod_{p\in\mathbb{P}}p^{r_{p}}\cdot\prod_{p\in\mathbb{P}}p^{k_{p}}=\prod_{p\in \mathbb{P}}p^{r_{p}+k_{p}},\quad r_{p},k_{p}\in\mathbb{N}\cup\{0,\infty\},\] where we assume that \[r_{p}+k_{p}=\begin{cases}r_{p}+k_{p}&\text{if $r_{p}<\infty$ and $k_{p}<\infty$},\\ \infty&\text{in other cases}.\end{cases}\] The Steinitz number \(s_{2}\)_divides_ the Steinitz number \(s_{1}\) (denote as \(s_{2}|s_{1}\)) if there exists the Steinitz number \(s_{3}\in\mathbb{SN}\) such that \(s_{1}=s_{2}\cdot s_{3}\). Let \(\mathbb{F}\) be a field. In what follows we always assume that \(\operatorname{char}\mathbb{F}\neq 2,3.\) We call an infinite \((\mathbb{N}\times\mathbb{N})\)-matrix \(A\) over the field \(\mathbb{F}\)_periodic_ if it is block-diagonal \(A=\operatorname{diag}(a,a,\ldots)\), where \(a\) is an \((n\times n)\)-matrix for some \(n\in\mathbb{N}\). The number \(n\) is called a _period_ of the matrix \(A\) and the matrix \(A\) is called \(n\)-_periodic_. Let \(M_{n}^{p}(\mathbb{F})\) be the algebra of all \(n\)-periodic \((\mathbb{N}\times\mathbb{N})\)-matrices, and let \(M_{n}(\mathbb{F})\) be the algebra of all \((n\times n)\)-matrices over a field \(\mathbb{F}\). Clearly, \[M_{n}^{p}(\mathbb{F})\cong M_{n}(\mathbb{F}),\quad\text{and}\quad M_{n}^{p}( \mathbb{F})\subseteq M_{m}^{p}(\mathbb{F})\quad\text{if and only if}\quad n \quad\text{divides}\quad m.\] Let \(GL_{n}^{p}(\mathbb{F})\) be the group of all invertible matrices in \(M_{n}^{p}(\mathbb{F}),\) and let \(GL_{n}(\mathbb{F})\) be the group of all invertible matrices in \(M_{n}(\mathbb{F}).\) It is easy to see that \(GL_{n}^{p}(\mathbb{F})\cong GL_{n}(\mathbb{F}).\) For a Steinitz number \(s\), we consider the algebra \[M_{s}^{p}(\mathbb{F})=\bigcup_{n|s}M_{n}^{p}(\mathbb{F}),\] and the group \[GL_{s}^{p}(\mathbb{F})=\bigcup_{n|s}GL_{n}^{p}(\mathbb{F})\] that consists of all invertible elements of the algebra \(M_{s}^{p}(\mathbb{F}).\) Let \[SL_{n}(\mathbb{F})=[GL_{n}(\mathbb{F}),GL_{n}(\mathbb{F})],\] \[SL_{n}^{p}(\mathbb{F})=[\,GL_{n}^{p}(\mathbb{F}),GL_{n}^{p}(\mathbb{F})\,] \quad\text{and}\quad SL_{s}^{p}(\mathbb{F})=[\,GL_{s}^{p}(\mathbb{F}),GL_{s} ^{p}(\mathbb{F})\,]\] be commutator subgroups of groups \(GL_{n}(\mathbb{F}),\)\(GL_{n}^{p}(\mathbb{F})\) and \(GL_{s}^{p}(\mathbb{F}),\) respectively. 
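As a purely illustrative aside (our own sketch, not part of the original text), the following Python snippet works with finite truncations of periodic matrices to make the containment \(M_{n}^{p}(\mathbb{F})\subseteq M_{m}^{p}(\mathbb{F})\) for \(n\mid m\) tangible: an \(n\)-periodic matrix, read in blocks of size \(m\), is again a repeated block-diagonal matrix exactly when \(n\) divides \(m\). The helper names and the integer entries are only for demonstration.

```python
import numpy as np

def periodic(block, copies):
    """Finite truncation diag(block, ..., block) of an n-periodic infinite matrix."""
    return np.kron(np.eye(copies, dtype=int), block)

def is_m_periodic(A, m):
    """Check whether a truncation repeats its leading m x m block along the diagonal."""
    size = A.shape[0]
    if size % m:
        return False
    b = A[:m, :m]
    return np.array_equal(A, np.kron(np.eye(size // m, dtype=int), b))

if __name__ == "__main__":
    a = np.array([[1, 2], [3, 4]])     # a 2 x 2 block, so diag(a, a, ...) is 2-periodic
    A = periodic(a, 6)                 # 12 x 12 truncation of the infinite matrix
    print([is_m_periodic(A, m) for m in (2, 4, 6, 3)])
    # [True, True, True, False]: the truncation is m-periodic exactly when 2 divides m
```

On such truncations, the product of two \(n\)-periodic matrices is again \(n\)-periodic, mirroring the isomorphism \(M_{n}^{p}(\mathbb{F})\cong M_{n}(\mathbb{F})\) stated above.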
If \(n_{1}<n_{2}<\cdots\) is a sequence of positive integers such that \(n_{i}|n_{i+1},\)\(i\geq 1,\) and \(s\) is the least common multiple of \(n_{1},\)\(n_{2},\)\(\dots,\) then \[GL_{n_{1}}^{p}(\mathbb{F})\subset GL_{n_{2}}^{p}(\mathbb{F})\subset\cdots, \quad\bigcup_{i\geq 1}GL_{n_{i}}^{p}(\mathbb{F})=GL_{s}^{p}(\mathbb{F});\] \[SL_{n_{1}}^{p}(\mathbb{F})\subset SL_{n_{2}}^{p}(\mathbb{F})\subset\cdots, \quad\bigcup_{i\geq 1}SL_{n_{i}}^{p}(\mathbb{F})=SL_{s}^{p}(\mathbb{F}).\] For more information about groups \(GL_{s}^{p}(\mathbb{F}),\)\(SL_{s}^{p}(\mathbb{F}),\) see [1, 6]. Recall some definitions and facts concerning locally matrix algebras; see [2, 3, 4, 5]. An associative \(\mathbb{F}\)-algebra \(A\) with the unit \(1\) is said to be a _unital locally matrix algebra_ (see [11]) if for an arbitrary finite collection of elements \(a_{1},\dots,a_{t}\in A\) there exists a subalgebra \(A^{\prime}\subset A\) such that \(1,a_{1},\dots,a_{t}\in A^{\prime}\) and \(A^{\prime}\cong M_{n}(\mathbb{F})\) for some \(n\in\mathbb{N}.\) For a unital locally matrix algebra \(A,\) let \(D(A)\) be the set of all positive integers \(n\) such that there exists a subalgebra \(A^{\prime},\)\(1\in A^{\prime}\subset A,\)\(A^{\prime}\cong M_{n}(\mathbb{F}).\) The least common multiple of the set \(D(A)\) is called the _Steinitz number_\(\mathbf{st}(A)\)_of the algebra_\(A;\) see [5]. J. G. Glimm [9] proved that every countable-dimensional unital locally matrix algebra is uniquely determined by its Steinitz number. Remark, that the algebra \(M_{s}^{p}(\mathbb{F})\) is a countable-dimensional unital locally matrix algebra and \(\mathbf{st}(M_{s}^{p}(\mathbb{F}))=s.\) Let \(A\) be a unital locally matrix algebra over a field \(\mathbb{F}\). Let us denote by the symbol \(A^{*}\) the group of invertible elements of the algebra \(A\) and let \([A^{*},A^{*}]\) be its commutator subgroup. Our aim now is description of isomorphisms of the group \(A^{*}.\) Let \(R\), \(S\) be rings. A mapping \(\varphi:R\to S\) is called an _anti-isomorphism_ if 1. \(\varphi\) is an isomorphism of additive groups of \(R\) and \(S\), 2. \(\varphi(ab)=\varphi(b)\varphi(a)\) for arbitrary elements \(a,b\in R.\) **Theorem 1**.: _Let \(A\) and \(B\) be unital locally matrix \(\mathbb{F}\)-algebras. If groups \([A^{*},A^{*}]\) and \([B^{*},B^{*}]\) are isomorphic, then rings \(A\) and \(B\) are isomorphic or anti-isomorphic. Moreover, for an arbitrary isomorphism \(\varphi:[A^{*},A^{*}]\rightarrow[B^{*},B^{*}]\) either there exists an isomorphism of rings \(\theta_{1}:A\to B\) such that \(\varphi\) is the restriction of \(\theta_{1}\) to \([A^{*},A^{*}]\) or there exists an anti-isomorphism of rings \(\theta_{2}:A\to B\) such that for an arbitrary element \(g\in[A^{*},A^{*}]\) we have_ \[\varphi(g)=\theta_{2}(g^{-1}).\] If algebras \(A\) and \(B\) are countable-dimensional, then Theorem 1 can be made more precise. In this case, without loss of generality, we can assume that \(A=M_{s}^{p}(\mathbb{F}),\) where \(s\) is the Steinitz number of the algebra \(A.\) The algebra \(M_{s}^{p}(\mathbb{F})\) is invariant with respect to transpose \(t,\) which is an anti-isomorphism. **Theorem 2**.: _Let \(A\) and \(B\) be countable-dimensional unital locally matrix \(\mathbb{F}\)-algebras. If groups \([A^{*},A^{*}]\) and \([B^{*},B^{*}]\) are isomorphic, then rings \(A\) and \(B\) are isomorphic. 
Moreover, an arbitrary isomorphism \(\varphi:[A^{*},A^{*}]\rightarrow[B^{*},B^{*}]\) either extends to an isomorphism of rings \(A\to B\) or there exists an isomorphism of rings \(\theta:A\to B\) such that for an arbitrary element \(g\in[A^{*},A^{*}]\) we have_ \[\varphi(g)=\theta\big{(}(g^{-1})^{t}\big{)}.\] If countable-dimensional unital locally matrix algebras are isomorphic as rings then they are isomorphic as \(\mathbb{F}\)-algebras; see Lemma 2 below. Therefore, Theorem 2 implies **Theorem 3**.: _Groups \(SL_{s_{1}}^{p}(\mathbb{F})\) and \(SL_{s_{2}}^{p}(\mathbb{F})\) are isomorphic if and only if \(s_{1}=s_{2}.\)_ For description of isomorphisms between groups \(GL_{s}^{p}(\mathbb{F}),\) we need to introduce the concept of a central homothety. For a unital \(\mathbb{F}\)-algebra \(A,\) by a _central homothety_ of its multiplicative group \(A^{*}\) we mean a multiplicative homomorphism \[A^{*}\big{/}[A^{*},A^{*}]\rightarrow\mathbb{F}^{*}.\] **Theorem 4**.: _Let \(A\) and \(B\) be unital locally matrix \(\mathbb{F}\)-algebras. For an arbitrary isomorphism of multiplicative groups \(\varphi:A^{*}\to B^{*}\) there exists a central homothety \(\chi:A^{*}\big{/}[A^{*},A^{*}]\to\mathbb{F}^{*}\) and an isomorphism of rings \(\theta_{1}:A\to B\) such that_ \[\varphi(g)=\chi(g)\theta_{1}(g),\quad g\in A^{*},\] _or an anti-isomorphism \(\theta_{2}:A\to B\) such that_ \[\varphi(g)=\chi(g)\theta_{2}(g^{-1}),\quad g\in A^{*}.\] As above, in the countable-dimensional case we can be more precise. **Theorem 5**.: _Let \(A\) and \(B\) be countable-dimensional unital locally matrix \(\mathbb{F}\)-algebras, and let \(s\) be the Steinitz number of \(A\). For an arbitrary isomorphism of multiplicative groups \(\varphi:A^{*}\to B^{*}\) there exists a central homothety \(\chi:A^{*}\big{/}[A^{*},A^{*}]\to\mathbb{F}^{*}\) and an isomorphism of rings \(\theta:A\to B\) such that_ \[\varphi(g) =\chi(g)\theta(g)\quad\text{for all}\quad g\in A^{*},\quad\text{or}\] \[\varphi(g) =\chi(g)\theta\big{(}(g^{-1})^{t}\big{)}\quad\text{for all}\quad g \in A^{*}.\] Our final goal is description of the group of automorphisms of \(SL_{s}^{p}(\mathbb{F}).\) The automorphisms group \(\mathrm{Aut}_{\mathbb{F}}\big{(}M_{s}^{p}(\mathbb{F})\big{)}\) of the \(\mathbb{F}\)-algebra \(M_{s}^{p}(\mathbb{F})\) has been described in [3]. **Theorem 6**.: _Let \(H\) be the cyclic group of order \(2\) generated by the automorphism \(\psi:g\mapsto(g^{-1})^{t},\)\(g\in SL_{s}^{p}(\mathbb{F}).\) Then_ \[\mathrm{Aut}\big{(}SL_{s}^{p}(\mathbb{F})\big{)}=H\cdot\mathrm{Aut}_{\mathbb{ F}}\big{(}M_{s}^{p}(\mathbb{F})\big{)}\cdot\mathrm{Aut}(\mathbb{F}).\] ## 1. Isomorphisms of invertible elements groups of unital locally matrix algebras Description of isomorphisms of groups \(SL_{n}(\mathbb{F})\) and \(GL_{n}(\mathbb{F})\) is well known; see [7]. It is also easy to see that a group \(SL_{n}(\mathbb{F})\) is not isomorphic to the union of an infinite ascending chain \[SL_{n_{1}}(\mathbb{F})\subset SL_{n_{2}}(\mathbb{F})\subset\cdots,\quad n_{1} <n_{2}<\cdots.\] Therefore, in proofs of Theorems 1, 2 and Lemma 1 (see below) we will assume that the algebras \(A\) and \(B\) are infinite-dimensional. Hence, there exists a matrix subalgebra \(M_{n}(\mathbb{F})\subset A\), \(n\geq 4.\) Let \(A^{\prime}\) be the centralizer of the subalgebra \(M_{n}(\mathbb{F})\) in \(A\). By Joseph H. M. 
Wedderburn's Theorem (see [8, 10, 12]), the algebra \(A\) is isomorphic to \[M_{n}(\mathbb{F})\otimes_{\mathbb{F}}A^{\prime}\cong M_{n}(A^{\prime}).\] We will identify the algebra \(A\) with \(M_{n}(A^{\prime}).\) Recall that for an arbitrary associative ring \(R\) with \(1\) and an arbitrary positive integer \(k\geq 2\) the _elementary linear group_\(E_{k}(R)\) is the group generated by all transvections \[t_{ij}(a)=I_{k}+e_{ij}(a),\] where \(I_{k}\) is the identity \((k\times k)\)-matrix, \(1\leq i\neq j\leq k\), \(a\in R,\) and \(e_{ij}(a)\) is the \((k\times k)\)-matrix that has the element \(a\) at the intersection of the \(i\)-th row and \(j\)-th column and zeros everywhere else. **Lemma 1**.: \([A^{*},A^{*}]=E_{n}(A^{\prime}).\)__ Proof.: Consider an arbitrary transvection \(t_{ij}(a),\)\(1\leq i\neq j\leq n,\)\(a\in A^{\prime}.\) There exists a positive integer \(r,\)\(1\leq r\leq n,\) that is distinct from \(i\) and \(j.\) Then \(t_{ij}(a)=[t_{ir}(1),t_{rj}(a)].\) We showed that \(E_{n}(A^{\prime})\subseteq[A^{*},A^{*}].\) Now, consider an arbitrary element \(g\in[A^{*},A^{*}].\) There exists a subalgebra \(M_{n}(\mathbb{F})\subset M_{q}(\mathbb{F})\subset A\) such that \(g\in SL_{q}(\mathbb{F})=E_{q}(\mathbb{F}).\) Consider a transvection \(t_{ij}(\alpha)\) of the algebra \(M_{q}(\mathbb{F}),\)\(1\leq i\neq j\leq q,\)\(\alpha\in\mathbb{F}.\) The algebra \(M_{n}(\mathbb{F})\) is embedded in the algebra \(M_{q}(\mathbb{F})\) diagonally, \[a\to\operatorname{diag}(\underbrace{a,a,\ldots,a}_{k}),\quad k=\frac{q}{n}, \quad a\in M_{n}(\mathbb{F}).\] Hence, the matrix unit \(e_{ii}(1)\) of the algebra \(M_{n}(\mathbb{F}),\)\(1\leq i\leq n,\) is mapped into the element \[\overline{e}_{i}=\operatorname{diag}(\underbrace{e_{ii}(1),\ldots,e_{ii}(1)}_ {k})\in M_{q}(\mathbb{F}).\] We have \(\overline{e}_{i}M_{q}(\mathbb{F})\overline{e}_{i}\cong M_{k}(\mathbb{F})\) and the algebra \(M_{q}(\mathbb{F})\) can be identified with the algebra \(M_{n}\big{(}M_{k}(\mathbb{F})\big{)},\) where \(M_{k}(\mathbb{F})\cong A^{\prime}\cap M_{q}(\mathbb{F})\) is the centralizer of the subalgebra \(M_{n}(\mathbb{F})\) in \(M_{q}(\mathbb{F}).\) Consider integers \(l\) and \(r\) such that \((l-1)n<i\leq ln,\)\((r-1)n<j\leq rn,\) and let \(\overline{i}=i-(l-1)n,\)\(\overline{j}=j-(r-1)n.\) Then \[e_{ij}(\alpha)=\overline{e}_{\overline{i}}\cdot e_{ij}(\alpha)\cdot\overline{ e}_{\overline{j}}.\] If \(i-j\) is not divisible by \(n\), then \(\overline{i}\neq\overline{j}\) and, therefore, \(t_{ij}(\alpha)\) is a transvection of the ring \(M_{n}\big{(}M_{k}(\mathbb{F})\big{)}\). If \(i-j\) is divisible by \(n\), then there exists an integer \(m\), \(1\leq m\leq q\), such that \(i-m\) is not divisible by \(n\). In this case \(t_{ij}(\alpha)=[t_{im}(1),t_{mj}(\alpha)].\) In any case \(t_{ij}(\alpha)\in E_{n}(A^{\prime}).\) This completes the proof of the lemma. I. Z. Golubchik and A. V. Mikhalev [10], and E. I. Zelmanov [14] described isomorphisms of elementary linear groups over rings. **Theorem 7** (I. Z. Golubchik, A. V. Mikhalev, E. I. Zelmanov).: _Let \(R,\)\(S\) be rings with \(\frac{1}{6},\) and let \(n\geq 4,\)\(m\geq 4\) be integers. 
If \(\varphi:E_{n}(R)\to E_{m}(S)\) is an isomorphism of elementary linear groups, then there exist central idempotents \(e,\)\(f\) in the matrix rings \(M_{n}(R),\)\(M_{m}(S),\) respectively, an isomorphism \(\theta_{1}:eM_{n}(R)\to fM_{m}(S)\) and an anti-isomorphism \(\theta_{2}:(1-e)M_{n}(R)\to(1-f)M_{m}(S)\) such that_ \[\varphi(g)=\theta_{1}(eg)+\theta_{2}((1-e)g^{-1})\quad\text{for an arbitrary element}\quad g\in E_{n}(R).\] Proof of Theorem 1.: Let \(A\) and \(B\) be unital locally matrix algebras, and let \(\varphi:[A^{*},A^{*}]\to[B^{*},B^{*}]\) be an isomorphism. As above, we assume that algebras \(A\) and \(B\) are infinite-dimensional. Choose positive integers \(n\geq 4\) and \(m\geq 4\) dividing the Steinitz numbers \(\mathbf{st}(A)\) and \(\mathbf{st}(B),\) respectively. Then algebras \(A\) and \(B\) can be identified with matrix algebras \(M_{n}(A^{\prime})\) and \(M_{m}(B^{\prime}),\) where \(A^{\prime},\)\(B^{\prime}\) are centralizers of subalgebras \(1\in M_{n}(\mathbb{F})\) and \(1\in M_{m}(\mathbb{F})\) in \(A\) and \(B,\) respectively. By Lemma 1, \[[A^{*},A^{*}]=E_{n}(A^{\prime}),\quad[B^{*},B^{*}]=E_{m}(B^{\prime}),\] and \(\varphi:E_{n}(A^{\prime})\to E_{m}(B^{\prime})\) is an isomorphism. Since algebras \(A,\)\(B\) are simple, their only central idempotents are \(0\) and \(1.\) By the Golubchik-Mikhalev-Zelmanov Theorem (see Theorem 7), there exists an isomorphism \(\theta_{1}:A\to B\) such that \(\varphi(g)=\theta_{1}(g)\) for an arbitrary \(g\in[A^{*},A^{*}]\) or there exists an anti-isomorphism \(\theta_{2}:A\to B\) such that \[\varphi(g)=\theta_{2}(g^{-1})\quad\text{for an arbitrary element}\quad g\in[A^{*},A^{*}].\] This completes the proof of Theorem 1. Proof of Theorem 2.: Let \(A,\)\(B\) be countable-dimensional unital locally matrix algebras, and let \(\varphi:[A^{*},A^{*}]\to[B^{*},B^{*}]\) be an isomorphism. We have already mentioned that the algebra \(A\) can be identified with the algebra \(M_{s}^{p}(\mathbb{F}),\) where \(s=\mathbf{st}(M_{s}^{p}(\mathbb{F})),\) and that the algebra \(A=M_{s}^{p}(\mathbb{F})\) is invariant with respect to the transpose \(t.\) If \(\theta:A\to B\) is an anti-isomorphism, then the mapping \(\theta\,^{\prime}:A\to B,\)\(\theta\,^{\prime}(a)=\theta(a^{t})\) is an isomorphism and \(\theta(g^{-1})=\theta^{\prime}\big{(}(g^{-1})^{t}\big{)}\) for an arbitrary element \(g\in[A^{*},A^{*}].\) This completes the proof of Theorem 2. **Lemma 2**.: _Let \(A\) and \(B\) be countable-dimensional unital locally matrix algebras. If \(A\) and \(B\) are isomorphic as rings, then they are isomorphic as \(\mathbb{F}\)-algebras._ Proof.: Since the algebra \(B\) is countable-dimensional, without loss of generality, we can assume that \(B=M_{s}^{p}(\mathbb{F}),\) where \(s=\mathbf{st}(M_{s}^{p}(\mathbb{F})).\) An arbitrary automorphism \(\tau\) of the field \(\mathbb{F}\) extends to an automorphism \(\widetilde{\tau}\) of the ring \(M_{s}^{p}(\mathbb{F}),\) \[\widetilde{\tau}:(a_{ij})_{\mathbb{N}\times\mathbb{N}}\mapsto\big{(}\,\tau(a_ {ij})\,\big{)}_{\mathbb{N}\times\mathbb{N}}\,.\] Let \(\theta:A\to B\) be an isomorphism of rings. 
Then \(\theta\) maps the center \(\mathbb{F}\cdot 1_{A}\) of the algebra \(A\) to the center \(\mathbb{F}\cdot 1_{B}\) of the algebra \(B.\) Then there exists an automorphism \(\tau\) of the field \(\mathbb{F}\) such that \(\theta(\alpha\cdot 1_{A})=\tau(\alpha)\cdot 1_{B}\) for an arbitrary element \(\alpha\in\mathbb{F}.\) The composition \(\theta\circ\widetilde{\tau}^{-1}\) is an isomorphism of \(\mathbb{F}\)-algebras \(A\to B.\) This completes the proof of the lemma. Before we prove Theorems 4 and 5 we will show how central homotheties arise in locally matrix algebras. Let \(s\) be a Steinitz number. Suppose that for all integers \(n\geq 1\) such that \(n|s\) there exist homomorphisms \(\tau_{n}:\mathbb{F}^{*}\rightarrow\mathbb{F}^{*}\) such that 1. \(\big{(}\tau_{n}(\alpha)\big{)}^{n}=\alpha\) for all \(\alpha\in\mathbb{F}^{*},\) 2. if \(m|n,\)\(n|s,\)\(n=m\cdot k,\) then \(\tau_{n}(\alpha)=\tau_{k}\big{(}\tau_{m}(\alpha)\big{)}\) for all \(\alpha\in\mathbb{F}^{*}.\) For example, if \(\mathbb{F}=\mathbb{R}\) is the field of real numbers and \(s\) is a Steinitz number that is not divisible by \(2,\) then the mapping \(a\mapsto a^{\frac{1}{n}},\)\(a\in\mathbb{R},\) is well defined and satisfies (i), (ii). For an arbitrary element \(a\in A,\) we will define its _relative determinant_\(\det_{r}(a)\) in the following way. There exists a matrix algebra \(M_{n}(\mathbb{F})\subset A\) such that \(1,a\in M_{n}(\mathbb{F}).\) Let \(\det_{M_{n}(\mathbb{F})}(a)\) be the determinant of the matrix \(a\) in \(M_{n}(\mathbb{F}).\) Then \[\det_{r}(a)=\tau_{n}\big{(}\det_{M_{n}(\mathbb{F})}(a)\big{)}\] does not depend on a choice of the subalgebra \(M_{n}(\mathbb{F}).\) Indeed, if \(1,a\in M_{m}(\mathbb{F})\subset M_{n}(\mathbb{F}),\) then the subalgebra \(M_{m}(\mathbb{F})\) is embedded in \(M_{n}(\mathbb{F})\) diagonally. Hence, \[\det\nolimits_{M_{n}(\mathbb{F})}(a)=\big{(}\det\nolimits_{M_{m}(\mathbb{F})}( a)\big{)}^{k},\quad k=n/m.\] Denote \(\alpha=\det\nolimits_{M_{m}(\mathbb{F})}(a).\) Then, by (i) and (ii), \[\tau_{n}(\alpha^{k})=\tau_{m}\big{(}\tau_{k}(\alpha^{k})\big{)}=\tau_{m}( \alpha).\] It is easy to see that \(\det\nolimits_{r}(ab)=\det\nolimits_{r}(a)\det\nolimits_{r}(b)\) for arbitrary elements \(a,b\in A,\) and \(\det\nolimits_{r}(1)=1.\) An element \(a\in A\) is invertible if and only if \(\det\nolimits_{r}(a)\neq 0.\) The mapping \[A^{*}\to\mathbb{F}^{*},\quad a\mapsto\det\nolimits_{r}(a),\] is central homothety. Let \(R\) be a ring with \(1,\) and let \(n\geq 2\) be an integer. Along with the elementary linear group \(E_{n}(R)\) consider the group \(GE_{n}(R)\) that is generated by \(E_{n}(R)\) and all invertible diagonal matrices over \(R.\) The following result is due to I. Z. Golubchik and A. V. Mikhalev [10], and E. I. Zelmanov [14]. **Theorem 8** (**I. Z. Golubchik, A. V. Mikhalev, E. I. Zelmanov**).: _Let \(R,\)\(S\) be rings with \(\frac{1}{6},\) let \(m\geq 4,\)\(n\geq 4\) be integers, and let \(\varphi:GE_{n}(R)\to GE_{m}(S)\) be an isomorphism. 
Then there exist central idempotents \(e\) and \(f\) of matrix rings \(M_{n}(R)\) and \(M_{m}(S),\) respectively, an isomorphism \(\theta_{1}:eM_{n}(R)\to fM_{m}(S),\) an anti-isomorphism \(\theta_{2}:(1-e)M_{n}(R)\to(1-f)M_{m}(S)\) and a homomorphism \(\chi:GE_{n}(R)\to Z(GE_{m}(S))\) to the center of the group \(GE_{m}(S)\) such that_ \[\varphi(g)=\chi(g)\big{(}\theta_{1}(eg)+\theta_{2}((1-e)g^{-1})\big{)}\quad \text{for an arbitrary element}\quad g\in GE_{n}(R).\] Let \(A\) be a unital locally matrix algebra, let \(1\in M_{n}(\mathbb{F})\subset A,\) and let \(n\geq 4.\) Consider the centralizer \(A^{\prime}\) of the subalgebra \(M_{n}(\mathbb{F})\) in \(A.\) As above, we identify the algebra \(A\) with the matrix algebra \(M_{n}(A^{\prime}).\) **Lemma 3**.: \(A^{*}=GE_{n}(A^{\prime}).\)__ Proof.: Let \(g\in A^{*}.\) There exists a matrix subalgebra \(M_{q}(\mathbb{F})\subset A\) such that \(M_{n}(\mathbb{F})\subset M_{q}(\mathbb{F}),\)\(g\in M_{q}(\mathbb{F}).\) Then \(g\in\big{(}M_{q}(\mathbb{F})\big{)}^{*}=GL_{q}(\mathbb{F}).\) It is well known that the group \(GL_{q}(\mathbb{F})\) is generated by transvections and diagonal matrices \[d_{11}(\alpha)=\operatorname{diag}\big{(}\underbrace{\alpha,1,1,\ldots,1}_{q} \big{)},\quad 0\neq\alpha\in\mathbb{F}.\] As in the proof of Lemma 1, consider matrix units \(e_{ii}(1)\), \(1\leq i\leq n\), of the algebra \(M_{n}(\mathbb{F})\). They are embedded in \(M_{q}(\mathbb{F})\) as idempotents \[\overline{e}_{i}=\operatorname{diag}\big{(}\underbrace{e_{ii}(1),\ldots,e_{ii }(1)}_{q/n}\big{)}.\] Clearly, \[d_{11}(\alpha)-I_{q}=\overline{e}_{1}\left(d_{11}(\alpha)-I_{q}\right) \overline{e}_{1}.\] Hence, \(d_{11}(\alpha)\) is an invertible diagonal matrix of \(M_{n}\big{(}A^{\prime}\cap M_{q}(\mathbb{F})\big{)}\), \(d_{11}(\alpha)\in GE_{n}(A^{\prime}).\) We showed that \(GL_{q}(\mathbb{F})\subset GE_{n}(A^{\prime})\) and, therefore, \(g\in GE_{n}(A^{\prime}).\) This completes the proof of the lemma. Now, Theorems 4 and 5 immediately follow from simplicity of the algebras \(A\) and \(B\), Lemma 3 and the results of I. Z. Golubchik and A. V. Mikhalev [10], and E. I. Zelmanov [14] (Theorem 8). ## 2. Isomorphisms of groups of periodic infinite matrices Now, let us describe the group of automorphisms of \(SL_{s}^{p}(\mathbb{F})\). Proof of Theorem 6.: Let \(\operatorname{Aut}_{\operatorname{ring}}\bigl{(}M_{s}^{p}(\mathbb{F})\bigr{)}\) be the group of ring automorphisms of \(M_{s}^{p}(\mathbb{F})\). We claim that \[\operatorname{Aut}\bigl{(}SL_{s}^{p}(\mathbb{F})\bigr{)}=H\cdot\operatorname{ Aut}_{\operatorname{ring}}\bigl{(}M_{s}^{p}(\mathbb{F})\bigr{)}.\] Indeed, by Theorem 2, an arbitrary automorphism \(\varphi\in\operatorname{Aut}\bigl{(}SL_{s}^{p}(\mathbb{F})\bigr{)}\) either extends to an automorphism of the ring \(M_{s}^{p}(\mathbb{F})\), in which case \(\varphi\in\operatorname{Aut}_{\operatorname{ring}}\bigl{(}M_{s}^{p}(\mathbb{ F})\bigr{)}\), or there exists an automorphism \(\theta\in\operatorname{Aut}_{\operatorname{ring}}\bigl{(}M_{s}^{p}(\mathbb{F}) \bigr{)}\) such that \[\varphi(g)=\theta\bigl{(}(g^{-1})^{t}\bigr{)}\quad\text{for all}\quad g\in SL _{s}^{p}(\mathbb{F}),\] in which case \(\varphi=\psi\circ\theta\in H\cdot\operatorname{Aut}_{\operatorname{ring}} \bigl{(}M_{s}^{p}(\mathbb{F})\bigr{)}\). 
Let us show that \[\operatorname{Aut}_{\operatorname{ring}}\bigl{(}M_{s}^{p}(\mathbb{F})\bigr{)} =\operatorname{Aut}_{\mathbb{F}}\bigl{(}M_{s}^{p}(\mathbb{F})\bigr{)}\cdot \operatorname{Aut}(\mathbb{F}).\] In the proof of Lemma 2 we showed that an arbitrary automorphism \(\tau\) of the field \(\mathbb{F}\) gives rise to the automorphism \(\widetilde{\tau}\) of the ring \(M_{s}^{p}(\mathbb{F}).\) The mapping \(\tau\mapsto\widetilde{\tau}\) is an embedding of the group \(\mathrm{Aut}(\mathbb{F})\) into the group \(\mathrm{Aut}_{\mathrm{ring}}\big{(}M_{s}^{p}(\mathbb{F})\big{)}.\) If \(\varphi\) is an automorphism of the ring \(M_{s}^{p}(\mathbb{F}),\) then its restriction \(\varphi\big{|}_{\,\mathbb{F}\cdot 1}\) to the center \(\mathbb{F}\cdot 1\) of the ring \(M_{s}^{p}(\mathbb{F})\) is an automorphism of the field \(\mathbb{F},\) and it gives rise to the automorphism \(\widetilde{\varphi\big{|}_{\,\mathbb{F}\cdot 1}}.\) Clearly, \[\varphi\cdot\widetilde{\big{(}\varphi\big{|}_{\,\mathbb{F}\cdot 1}\big{)}}^{-1} \in\mathrm{Aut}_{\mathbb{F}}\big{(}M_{s}^{p}(\mathbb{F})\big{)},\] which implies the assertion of Theorem 6.
2309.04904
On real hyperelliptic solutions of focusing modified KdV equation
We study the real hyperelliptic solutions of the focusing modified KdV (MKdV) equation of the genus three. Since the complex hyperelliptic solutions of the focusing MKdV equation over $\mathbb{C}$ are associated with the real gauged MKdV equation, we present a novel construction of the real hyperelliptic solutions of the gauged MKdV equation. When the gauge field is constant, it can be regarded as the real solution of the focusing MKdV equation, and thus we also discuss the behavior of the gauge field numerically.
Shigeki Matsutani
2023-09-10T01:15:48Z
http://arxiv.org/abs/2309.04904v5
# An algebro-geometric model for the shape of supercoiled DNA II ###### Abstract. Following the previous paper (Matsutani and Previato, Physica D **430** (2022) 133073), the hyperelliptic solutions of generalized elastica of genus three are investigated; their curvature obeys the modified KdV (MKdV) equation. This article shows a novel construction of the hyperelliptic solutions of the MKdV equation and illustrates them numerically. The shapes reproduce some properties of the shapes in the AFM images of the supercoiled DNAs observed by Japaridze et al (Nano Lett. **17** 3, (2017) 1938). ## 1. Introduction In the previous paper [20], the author, with Emma Previato, investigated an algebro-geometric model for the shape of supercoiled DNA. As mentioned there, the mathematical description of the shape of the supercoiled DNA is a challenging problem: no one has been able to find the shape mathematically. Since the shape of the supercoiled DNA plays crucial roles in life [4, 6, 13, 25, 28], there are many studies on the shape [3, 9, 12, 13, 26, 27]. Electron microscope images show that the shape of the loop is much more complicated than Euler's elastica. Further, it is neither squeezed nor tight but is characterized by voids between the intersections, governed only weakly by elastic forces. These properties mean that it cannot be realized as a minimal state of its Euler-Bernoulli energy functional even by considering its three-dimensional effect; the voids cannot appear mathematically if we consider the minimal state of a certain energy functional. The minimal state cannot have any further parameters and thus is expressed by elliptic functions, which have only double periods and no capability to express complicated shapes. The author proposed a model of the statistical mechanics of elastica to express the shapes of supercoiled DNA in 1998 [15]: the shapes can only be realized if thermal effects are taken into account, and must be the excitation states of the elastica rather than minimal ones. The excitation states of elastica on the plane are described well by the hyperelliptic solutions \(\phi\) of the modified KdV (MKdV) equation [1], \[(\partial_{t}+\alpha\partial_{s})\phi+\frac{1}{8}\left(\partial_{s}\phi\right)^{3}+\frac{1}{4}\partial_{s}^{3}\phi=0, \tag{1.1}\] where \(t\) and \(s\) are the real axes and \(\alpha\) is a real parameter. Here \(t\) does not mean the physical time-axis but one of the inner-space directions of the excited states due to the thermal fluctuations. \(\phi\) corresponds to the tangential angle of the real curve on the plane for the generalized elastica. This contrasts with the elastica in three-dimensional space, whose excitation states are given by the nonlinear Schrödinger (NLS) equation and the complex MKdV (CMKdV) equation [16]. We have also referred to them as generalized elasticae. We have sometimes called them the quantized elastica due to the analogy between the Planck constant \(\hbar\) and the inverse of the temperature \(\beta\). However, the hyperelliptic solutions have not been obtained explicitly and concretely, whereas the elliptic function solutions have been studied well since Euler's discovery [7]. The author and Emma Previato decided to solve the problem in 2004 based on the papers [17, 24]. To solve the problem, they considered that a novel approach directly connected to the algebraic curves was required, rather than the theta function approach [23].
For two decades, they have refined and reconstructed the Abelian function theory, including the hyperelliptic function theory, as problems in algebraic geometry [18, 21]. The NLS and CMKdV equations are much more complicated than the MKdV equation, and the analytic solution of the MKdV equation over \(\mathbb{C}\) in terms of the data of the hyperelliptic curves was obtained in [17]; in contrast, we have no such solutions of the NLS and CMKdV equations. Hence, the MKdV equation has been the focus [19]. Since the tools had been polished for the final target as in [21], they attempted to find the hyperelliptic solutions of the generalized elastica [20]. However, it was shown that no non-degenerate hyperelliptic curve of genus two can provide the generalized elastica because of the reality conditions [20]. As concluded in [20], higher-genus hyperelliptic curves (\(g\geq 3\)) are required to find the solution of (1.1). Hence this paper is devoted to finding the solutions of (1.1) in terms of the meromorphic functions of hyperelliptic curves of genus three. This paper demonstrates that some hyperelliptic curves of genus three supply the shapes of the generalized elastica, which have never been obtained. This paper provides a novel method to obtain the hyperelliptic solution of the MKdV equation based on 1) a generalization of the Weierstrass sigma function theory [11, 21], 2) Baker's hyperelliptic function theory [5, 17], and 3) Euler's numerical integration method, though 1) and 2) are not touched on in this paper. We heuristically found the remarkable Proposition 4.1, which guarantees the reality conditions of both \(S^{3}X\) and the Jacobian \(J_{X}\), and Theorem 5.1. This relation enables us to present a novel algorithm to obtain a hyperelliptic solution of the generalized elastica. Finally, we demonstrate some computational results, one for a closed curve and others for open curves. They exhibit typical shapes: modulations of a repeated figure-eight and of an inverted 'S'. It is quite surprising that we find a similar shape in a part of the AFM images of a supercoiled DNA in [10, Figure 4]. The figure-eight given by Euler in 1744 appears similar to the shapes of the short closed supercoiled DNAs, e.g., in [25], but no one has ever mathematically reproduced any more complicated shape of supercoiled DNA with voids. We emphasize that this demonstration shows that an important first step has been made toward the complete mathematical expression of the supercoiled DNAs. The content is as follows: Section 2 reviews the previous results, which are the same as in [20]. Section 3 is devoted to the geometry of the hyperelliptic curves of genus three. Section 4 provides solutions of the gauged MKdV equation. Based on them, the key fact to obtain the hyperelliptic solutions of genus three of the MKdV equation (1.1) is described in Section 5. There we show the numerical algorithm to obtain them. Section 6 shows the computational results and the relations to the supercoiled DNA. Section 7 gives the conclusion of this paper. ## 2. Hyperelliptic solutions of generalized elastica We review the solution of the generalized elastica problem [17] for a hyperelliptic curve \(X_{g}\) of genus \(g\) over \(\mathbb{C}\), \[\big{\{}(x,y)\in\mathbb{C}^{2}\ |\ y^{2}=(x-b_{1})(x-b_{2})\cdots(x-b_{2g+1})\big{\}}\cup\{\infty\}, \tag{2.1}\] where the \(b_{i}\) are mutually distinct complex numbers. Let \(\lambda_{2g}=-\sum_{i=1}^{2g+1}b_{i}\) and \(S^{k}X_{g}\) be the \(k\)-th symmetric product of the curve \(X_{g}\).
The Abel integral \(v:S^{k}X_{g}\to\mathbb{C}^{g}\), \((k=1,\ldots,g)\) is defined by its \(i\)-th component \(v_{i}\) (\(i=1,\ldots,g\)), \[v_{i}((x_{1},y_{1}),\ldots,(x_{k},y_{k}))=\sum_{j=1}^{k}v_{i}(x_{j},y_{j}),\] \[v_{i}(x,y)=\int_{\infty}^{(x,y)}\nu_{i}^{\rm I},\quad\nu_{i}^{\rm I}=\frac{x^{i- 1}dx}{2y}. \tag{2.2}\] [17] shows the hyperelliptic solutions of the MKdV equation over \(\mathbb{C}\), **Theorem 2.1**.: _[_17_]_ _For \(((x_{1},y_{1}),\cdots,(x_{g},y_{g}))\in S^{g}X_{g}\), a fixed branch point \(b_{a}\)\((a=1,2,\ldots,2g+1)\), and \(u:=v((x_{1},y_{1}),\)\(\cdots,(x_{g},y_{g}))\),_ \[\psi(u):=-\sqrt{-1}\log(b_{a}-x_{1})(b_{a}-x_{2})\cdots(b_{a}-x_{g})\] _satisfies the MKdV equation over \(\mathbb{C}\),_ \[(\partial_{u_{g-1}}-\frac{1}{2}(\lambda_{2g}+3b_{a})\partial_{u_{g}})\psi- \frac{1}{8}\left(\partial_{u_{g}}\psi\right)^{3}-\frac{1}{4}\partial_{u_{g}}^ {3}\psi=0, \tag{2.3}\] _where \(\partial_{u_{i}}:=\partial/\partial u_{i}\) as an differential identity in \(S^{g}X_{g}\) and \(\mathbb{C}^{g}\)._ We, here, emphasize the difference between the MKdV equations (1.1) over \(\mathbb{R}\) and (2.3) over \(\mathbb{C}\). The difference is crucial since we want to obtain solutions of (1.1), not (2.3). However, the latter is expressed well in terms of the hyperelliptic function theory. We will construct the solutions of (1.1) based on the solutions of (2.3). As mentioned in [20, (11)], we describe the difference. By introducing real and imaginary parts, \(u_{b}=u_{b\,{\rm r}}+\sqrt{-1}u_{b\,{\rm i}}\) and \(\psi=\psi_{\rm r}+\sqrt{-1}\psi_{\rm i}\), the real part of (2.3) is reduced to the gauged MKdV equation with gauge field \(A(u)=(\lambda_{2g}+3b_{a}-\frac{3}{4}(\partial_{u_{g\,{\rm r}}}\psi_{\rm i})^{ 2})/2\), \[-(\partial_{u_{g-1}\,{\rm r}}-A(u)\partial_{u_{g\,{\rm r}}})\psi_{\rm r}+ \frac{1}{8}\left(\partial_{u_{g\,{\rm r}}}\psi_{\rm r}\right)^{3}+\frac{1}{4} \partial_{u_{g\,{\rm r}}}^{3}\psi_{\rm r}=0 \tag{2.4}\] by the Cauchy-Riemann relations as mentioned in [20, (11)]. In order to obtain a solution of (1.1) or a generalized elastica in terms of the data in Theorem 2.1, the following conditions must be satisfied [20]: CI \(\prod_{i=1}^{g}|x_{i}-b_{a}|=\) a constant \((>0)\) for all \(i\) in Theorem 2.1, CII \(du_{g\,{\rm i}}=du_{g-1\,{\rm i}}=0\) in Theorem 2.1, and CIII \(A(u)\) is a real constant: if \(A(u)=\) constant, (2.4) is reduced to (1.1). ## 3. Hyperelliptic Curves of Genus Three [20] concludes that it turns out that in order to obtain the solution of (1.1) based on Theorem 2.1, we should handle hyperelliptic curves \(X\) of genus \(g>2\). In this paper, we investigate the conditions CI-III for hyperelliptic curves \(X_{3}\) of genus \(g=3\), \[\begin{split} y^{2}&=x^{7}+\lambda_{6}x^{6}+\cdots +\lambda_{2}x^{2}+\lambda_{1}x+\lambda_{0}\\ &=(x-b_{0})(x-b_{1})(x-b_{2})\cdots(x-b_{5})(x-b_{6}).\end{split} \tag{3.1}\] We restrict the moduli (rather, parameter) space of the curve \(X\) by the following. We choose coordinates \(u={}^{t}(u_{1},u_{2},u_{3})\) in \(\mathbb{C}^{3}\); \(u_{i}=u_{i}^{(1)}+u_{i}^{(2)}+u_{i}^{(3)}\), where \(u_{i}^{(j)}=v_{i}((x_{j},y_{j}))\) for \((x_{j},y_{j})\in X_{3}\). We let \(b_{0}=-\gamma=-1\) and \(e_{j}:=b_{j}-b_{0}\) (\(j=1,2,\ldots,6\)) satisfying the following relations, \[\sqrt{e_{2a-1}}=\alpha_{a}+\sqrt{-1}\beta_{a},\quad\sqrt{e_{2a}}=\alpha_{a}- \sqrt{-1}\beta_{a},\] where \(\alpha_{a},\beta_{a}\in\mathbb{R}\), \(a,b=1,2,3\), satisfying \(\alpha_{a}^{2}+\beta_{a}^{2}=\gamma\). 
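To make the above parametrization concrete, the following numerical sketch (our own illustration with assumed values of \((\alpha_{a},\beta_{a})\), not taken from the paper) builds the branch points \(b_{j}=b_{0}+e_{j}\), expands the right-hand side of (3.1) to recover real coefficients \(\lambda_{i}\), and computes the moduli \(k_{a}=\sqrt{\gamma}/\beta_{a}\) appearing in Lemma 3.2; the \(\alpha_{a}\) are chosen decreasing so that \(k_{1}>k_{2}>k_{3}\).

```python
import numpy as np

gamma = 1.0
b0 = -gamma                                    # b_0 = -gamma = -1
alphas = np.array([0.8, 0.5, 0.2])             # assumed alpha_1 > alpha_2 > alpha_3
betas = np.sqrt(gamma - alphas**2)             # alpha_a^2 + beta_a^2 = gamma

# sqrt(e_{2a-1}) = alpha_a + i beta_a and sqrt(e_{2a}) = alpha_a - i beta_a
sqrt_e = np.empty(6, dtype=complex)
sqrt_e[0::2] = alphas + 1j * betas
sqrt_e[1::2] = alphas - 1j * betas
e = sqrt_e**2
b = b0 + e                                     # branch points b_1, ..., b_6 (with b_0 = -1)

# expand (x - b_0)(x - b_1)...(x - b_6) = x^7 + lambda_6 x^6 + ... + lambda_0
coeffs = np.poly(np.concatenate(([b0], b)))    # leading coefficient first
print("lambda_6, ..., lambda_0:", np.round(coeffs[1:].real, 6))
print("imaginary parts vanish:", np.max(np.abs(coeffs.imag)) < 1e-12)

k = np.sqrt(gamma) / betas                     # k_a = sqrt(gamma) / beta_a
print("k_1, k_2, k_3:", np.round(k, 6))        # approx. 1.6667, 1.1547, 1.0206
```

Because the \(e_{2a-1}\), \(e_{2a}\) come in complex-conjugate pairs and \(b_{0}\) is real, the expanded coefficients \(\lambda_{i}\) are real, which is what makes the real form (3.2) possible.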
For a real expression of (3.1), we use the following transformation, which is a generalization of 'the sine function expression' of the elliptic integral as mentioned in [20]. **Lemma 3.1**.: \((w^{2}-e_{1})(w^{2}-e_{2})=-4\frac{\gamma}{k_{1}^{2}}\mathrm{e}^{2\sqrt{-1} \varphi}(1-k^{2}\sin^{2}\varphi)\)_, where_ \[w=\mathrm{e}^{\sqrt{-1}\varphi},\quad k_{1}=\frac{2\sqrt{-1}\sqrt[4]{e_{1}e_{2 }}}{\sqrt{e_{1}}-\sqrt{e_{2}}}=\frac{\sqrt{\gamma}}{\beta_{a}},\quad\gamma=e_ {1}e_{2}=1.\] Proof.: Let \(\gamma^{2}:=e_{1}e_{2}=1\). We recall the double angle formula \(\cos 2\varphi=1-2\sin^{2}\varphi\). \[(w^{2}-e_{1})(w^{2}-e_{2}) =w^{2}(w^{2}-(e_{1}+e_{2})+e_{1}e_{2}w^{-2})\] \[=w^{2}\gamma\left(\mathrm{e}^{2\sqrt{-1}\varphi}+\mathrm{e}^{-2 \sqrt{-1}\varphi}-\frac{e_{1}+e_{2}}{\gamma}\right)\] \[=2w^{2}\gamma\left(\cos(2\varphi)-\frac{e_{1}+e_{2}}{2\gamma}\right)\] \[=-w^{2}\gamma\left(\frac{e_{1}+e_{2}-2\sqrt{e_{1}e_{2}}}{\gamma}+4 \sin^{2}\varphi\right)\] \[=-4w^{2}\frac{\gamma}{k_{1}^{2}}\left(1-k_{1}^{2}\sin^{2}\varphi \right),\] where \((e_{1}+e_{2}-2\sqrt{e_{1}e_{2}})=(\sqrt{e_{1}}-\sqrt{e_{2}})^{2}=e_{1}^{-1}(e_ {1}+\gamma)^{2}=-4\gamma/k_{1}^{2}\). Under these assumptions, we have the real extension of the hyperelliptic curves \(X\) by \((\varphi,y)\). The direct computation shows the following: **Lemma 3.2**.: _Let \(\gamma\mathrm{e}^{2\sqrt{-1}\varphi}:=(x-b_{0})\), (3.1) is written by_ \[y^{2}=-64\frac{\gamma^{4}\mathrm{e}^{8\sqrt{-1}\varphi}}{k_{1}^{2}k_{2}^{2}k_{ 3}^{2}}(1-k_{1}^{2}\sin^{2}\varphi)(1-k_{2}^{2}\sin^{2}\varphi)(1-k_{3}^{2} \sin^{2}\varphi), \tag{3.2}\] _where \(k_{a}=\frac{2\sqrt{-1}\sqrt[4]{e_{2a-1}e_{2a}}}{\sqrt{e_{2a-1}}-\sqrt{e_{2a}}} =\frac{\sqrt{\gamma}}{\beta_{a}}\), \((a=1,2,3)\). \((\varphi,y)\) has six branch points \(\varphi_{\mathtt{bi}}^{\pm}\), \((i=1,2,3)\) corresponding to \(k_{i}\) modulo \(\pi\)._ We assume that \(b_{0}=-1\), \(k_{1}>k_{2}>k_{3}>0\) and \(\varphi_{\mathtt{b}}:=\sin^{-1}(1/k_{1})\), here though later we also consider \(k_{3}>k_{2}>k_{1}>1\) case. We consider a point \(((x_{1},y_{1}),\ldots,(x_{3},y_{3}))\) in \(S^{3}X\) under the condition CI, \(|x_{j}-b_{0}|=\gamma=1\). We define the variable \(\varphi_{j}\) by \(x_{j}=\gamma\mathrm{e}^{\sqrt{-1}\varphi_{j}}(\mathrm{e}^{\sqrt{-1}\varphi_{ j}}+(b_{0}/\gamma)\mathrm{e}^{-\sqrt{-1}\varphi_{j}})=2\sqrt{-1}\gamma\mathrm{e}^{ \sqrt{-1}\varphi_{j}}\sin\varphi_{j}\), \((j=1,2,3)\). Noting \(dx_{j}=2\sqrt{-1}\gamma\mathrm{e}^{2\sqrt{-1}\varphi_{j}}d\varphi_{j}\) and \(x_{j}^{\ell}dx_{j}=(2\sqrt{-1})^{\ell+1}\gamma\mathrm{e}^{(2+\ell)\sqrt{-1} \varphi_{j}}\sin^{\ell}\varphi_{j}\;d\varphi_{j}\), we have the holomorphic one forms \((\nu_{1}^{1(j)},\nu_{2}^{1(j)},\nu_{3}^{1(j)})\)\((j=1,2,3)\), \[\left(\frac{\mathrm{e}^{-2\sqrt{-1}\varphi_{j}}\;d\varphi_{j}}{8\gamma^{2}K( \varphi_{j})},\frac{-\sqrt{-1}\mathrm{e}^{-\sqrt{-1}\varphi_{j}}\sin(\varphi_ {j})\;d\varphi_{j}}{4\gamma K(\varphi_{j})},\frac{-\sin^{2}\varphi_{j}\;d \varphi_{j}}{2K(\varphi_{j})}\right), \tag{3.3}\] where \(K(\varphi):=\widetilde{\gamma}\widetilde{K}(\varphi)\), \(\widetilde{K}(\varphi):=\frac{\sqrt{\gamma(1-k_{1}^{2}\sin^{2}\varphi)(1-k_{ 2}^{2}\sin^{2}\varphi)(1-k_{3}^{2}\sin^{2}\varphi)}}{k_{1}k_{2}k_{3}}\), and \(\widetilde{\gamma}=\pm 1\). Using the ambiguity \(\widetilde{\gamma}\), we handle \(-(\nu_{1}^{1(j)},\nu_{2}^{1(j)},\nu_{3}^{1(j)})\) rather than \((\nu_{1}^{1(j)},\nu_{2}^{1(j)},\nu_{3}^{1(j)})\)\((j=1,2,3)\) from here. 
Then we obviously have the following lemmas: **Lemma 3.3**.: _Let \(K_{j}:=K(\varphi_{j})\), \(j=1,2,3\). The following holds:_ \[\begin{pmatrix}du_{1}\\ du_{2}\\ du_{3}\end{pmatrix}=-\begin{pmatrix}\frac{\mathrm{e}^{-2\sqrt{-1}\varphi_{1}}}{8 \gamma^{2}K_{1}}&\frac{\mathrm{e}^{-2\sqrt{-1}\varphi_{2}}}{8\gamma^{2}K_{2}} &\frac{\mathrm{e}^{-2\sqrt{-1}\varphi_{3}}}{8\gamma^{2}K_{3}}\\ \frac{\sqrt{-1}\mathrm{e}^{-\sqrt{-1}\varphi_{1}}\sin(\varphi_{1})}{4\gamma K _{1}}&\frac{\sqrt{-1}\mathrm{e}^{-\sqrt{-1}\varphi_{2}}\sin(\varphi_{2})}{4 \gamma K_{2}}&\frac{\sqrt{-1}\mathrm{e}^{-\sqrt{-1}\varphi_{3}}\sin(\varphi_{ 3})}{4\gamma K_{3}}\\ -\frac{\sin^{2}(\varphi_{1})}{2K_{1}}&\frac{-\sin^{2}(\varphi_{2})}{2K_{2}}& \frac{-\sin^{2}(\varphi_{3})}{2K_{3}}\end{pmatrix}\begin{pmatrix}d\varphi_{1} \\ d\varphi_{2}\\ d\varphi_{3}\end{pmatrix}.\] _Let the matrix be denoted by \(\mathcal{L}\)._ We also have the inverse of Lemma 3.3: **Lemma 3.4**.: _For \(\varphi_{j}\in(-k_{\mathfrak{b}},k_{\mathfrak{b}})\), \((j=1,2,3)\), we have_ \[\begin{pmatrix}d\varphi_{1}\\ d\varphi_{2}\\ d\varphi_{3}\end{pmatrix}=\mathcal{KM}\begin{pmatrix}du_{1}\\ du_{2}\\ du_{3}\end{pmatrix},\qquad\mathcal{L}^{-1}=\mathcal{KM}, \tag{3.4}\] _where \(\mathcal{K}:=-\begin{pmatrix}\frac{K_{1}}{\sin(\varphi_{2}-\varphi_{1})\sin( \varphi_{3}-\varphi_{1})}&0&0\\ 0&\frac{K_{2}}{\sin(\varphi_{3}-\varphi_{2})\sin(\varphi_{1}-\varphi_{2})}&0 \\ 0&0&\frac{K_{3}}{\sin(\varphi_{1}-\varphi_{3})\sin(\varphi_{2}-\varphi_{3})} \end{pmatrix}\) and,_ \[\mathcal{M}:=\begin{pmatrix}8\gamma^{2}\sin\varphi_{2}\sin\varphi_{3}&-4\sqrt {-1}\gamma(2\sqrt{-1}\sin\varphi_{2}\sin\varphi_{3}-\sin(\varphi_{2}+\varphi_ {3}))&-2\mathrm{e}^{-\sqrt{-1}(\varphi_{2}+\varphi_{3})}\\ 8\gamma^{2}\sin\varphi_{1}\sin\varphi_{3}&-4\sqrt{-1}\gamma(2\sqrt{-1}\sin \varphi_{1}\sin\varphi_{3}-\sin(\varphi_{3}+\varphi_{1}))&-2\mathrm{e}^{- \sqrt{-1}(\varphi_{1}+\varphi_{3})}\\ 8\gamma^{2}\sin\varphi_{1}\sin\varphi_{2}&-4\sqrt{-1}\gamma(2\sqrt{-1}\sin \varphi_{1}\sin\varphi_{2}-\sin(\varphi_{1}+\varphi_{2}))&-2\mathrm{e}^{- \sqrt{-1}(\varphi_{1}+\varphi_{2})}\end{pmatrix}.\] Proof.: The straightforward computations show it. We remark that (3.4) in Lemma 3.4 means that even if \(\varphi_{i}\)\((i=1,2,3)\) is real, \(d\varphi\) is complex valued one-form. We let it decomposed to \(d\varphi_{j}=d\varphi_{j,\mathrm{r}}+\sqrt{-1}d\varphi_{j,\mathrm{i}}\). Further we introduce \(\varphi:=\varphi_{1}+\varphi_{2}+\varphi_{3}\in\mathbb{R}\) and \(d\varphi=d\varphi_{\mathrm{r}}+\sqrt{-1}d\varphi_{\mathrm{i}}\); \(\psi_{\mathrm{r}}=2\varphi\), \(d\psi_{\mathrm{r}}=2d\varphi_{\mathrm{r}}\) and \(d\psi_{\mathrm{i}}=2d\varphi_{\mathrm{i}}\) for \(\psi\) in (2.3) and (2.4). ## 4. 
Hyperelliptic solutions of the gauged MKdV equation over \(\mathbb{R}\) Let us focus on the relation, \[\begin{pmatrix}d\varphi_{1,\mathrm{r}}\\ d\varphi_{2,\mathrm{r}}\\ d\varphi_{3,\mathrm{r}}\end{pmatrix}=\mathfrak{Re}\,\mathcal{C}\,\mathcal{K}\,\mathcal{M}\begin{pmatrix}0\\ 0\\ ds\end{pmatrix}=\begin{pmatrix}\frac{2K_{1}\widetilde{\gamma}_{1}\cos(\varphi_{2}+\varphi_{3})ds}{\sin(\varphi_{2}-\varphi_{1})\sin(\varphi_{3}-\varphi_{1})}\\ \frac{2K_{2}\widetilde{\gamma}_{2}\cos(\varphi_{3}+\varphi_{1})ds}{\sin(\varphi_{3}-\varphi_{2})\sin(\varphi_{1}-\varphi_{2})}\\ \frac{2K_{3}\widetilde{\gamma}_{3}\cos(\varphi_{1}+\varphi_{2})ds}{\sin(\varphi_{1}-\varphi_{3})\sin(\varphi_{2}-\varphi_{3})}\end{pmatrix}, \tag{4.1}\] where \(ds\) is the one-form on the real axis, or \(s\in\mathbb{R}\), \(\mathcal{C}\) is a diagonal matrix whose diagonal components are \((\widetilde{\gamma}_{1},\widetilde{\gamma}_{2},\widetilde{\gamma}_{3})\), \((\widetilde{\gamma}_{a}=\pm 1)\), and \(\mathfrak{Re}\) means the real part. Here \(s\) corresponds to the arclength of the generalized elastica and to \(u_{g,\mathrm{r}}\) in (2.4) for \(g=3\). (4.1) means that we consider a projection of \(d\varphi_{i}\) to \(d\varphi_{i,\mathrm{r}}\). We note \(\mathcal{C}^{-1}=\mathcal{C}\). A direct computation leads to a remarkable property, which is connected with the reality conditions CI, CII, and CIII: **Proposition 4.1**.: \(\Xi:=\begin{pmatrix}1&0&0\\ -1&1&0\\ 0&0&1\end{pmatrix}\)_,_ \[\begin{pmatrix}ds\\ ds\\ ds\end{pmatrix}=\mathcal{L}\,\mathcal{C}\begin{pmatrix}d\varphi_{1,\mathrm{r}}\\ d\varphi_{2,\mathrm{r}}\\ d\varphi_{3,\mathrm{r}}\end{pmatrix}=\mathcal{L}\mathfrak{Re}\mathcal{K}\mathcal{M}\begin{pmatrix}0\\ 0\\ ds\end{pmatrix},\] \[\begin{pmatrix}ds\\ 0\\ ds\end{pmatrix}=\Xi\,\mathcal{L}\,\mathcal{C}\begin{pmatrix}d\varphi_{1,\mathrm{r}}\\ d\varphi_{2,\mathrm{r}}\\ d\varphi_{3,\mathrm{r}}\end{pmatrix}=\Xi\mathcal{L}\mathfrak{Re}\mathcal{K}\mathcal{M}\begin{pmatrix}0\\ 0\\ ds\end{pmatrix}.\] Since \(d\varphi_{i,\mathrm{r}}\) is the one-form of the real axis in the hyperelliptic curve \(X\), whereas \(ds\) is the one-form of the real axis in the Jacobian \(J_{X}\), Proposition 4.1 shows the correspondence between the real subspace of \(S^{3}X\) and \(J_{X}\). We had required such a relation but considered it too difficult for genus \(g=2\) in the previous paper [20]; though we implicitly found it in [20], we could not handle it well because of the situation explained in Remark 5.2. This correspondence opens up novel real solutions of the gauged MKdV equation (2.4). We are concerned only with the real part in (4.1), i.e., \(d\varphi_{\mathrm{r}}:=d\varphi_{1,\mathrm{r}}+d\varphi_{2,\mathrm{r}}+d\varphi_{3,\mathrm{r}}\) is equal to \[\left(\frac{2K_{1}\widetilde{\gamma}_{1}\cos(\varphi_{2}+\varphi_{3})}{\sin(\varphi_{2}-\varphi_{1})\sin(\varphi_{3}-\varphi_{1})}\right.+\frac{2K_{2}\widetilde{\gamma}_{2}\cos(\varphi_{3}+\varphi_{1})}{\sin(\varphi_{3}-\varphi_{2})\sin(\varphi_{1}-\varphi_{2})}\] \[\left.+\frac{2K_{3}\widetilde{\gamma}_{3}\cos(\varphi_{1}+\varphi_{2})}{\sin(\varphi_{1}-\varphi_{3})\sin(\varphi_{2}-\varphi_{3})}\right)ds.\] Noting \(\varphi_{\mathrm{r}}=\varphi=\varphi_{1}+\varphi_{2}+\varphi_{3}\in\mathbb{R}\), we find that \(\psi_{\mathrm{r}}(s)=2\varphi_{\mathrm{r}}(s)\) satisfies the gauged MKdV equation (2.4) with \(s=u_{3\,\mathrm{r}}\), because it is a differential identity for the meromorphic functions on the hyperelliptic curves. \(2\varphi(s)\) corresponds to an equi-curve with respect to \(u_{2}\). 
Proposition 4.1 means that if we solve the differential equation (4.1), i.e., integrate \(d\varphi_{i,\mathrm{r}}\) with respect to \(s\), we obtain the real vector valued \((\varphi_{1,\mathrm{r}}(s),\varphi_{2,\mathrm{r}}(s),\varphi_{3,\mathrm{r}}(s))\in\mathbb{R}^{3}\), which is connected with \(\psi_{\mathrm{r}}=2(\varphi_{1,\mathrm{r}}(s)+\varphi_{2,\mathrm{r}}(s)+\varphi_{3,\mathrm{r}}(s))\). \(\psi_{\mathrm{r}}\) satisfies the gauged MKdV equation (2.4) because (2.4) is a differential identity for the meromorphic function \(\psi\) on any hyperelliptic curve \(X\) given by (2.1). More precisely, we have the following proposition. **Proposition 4.2**.: _For a solution of the differential equation (4.1), \((\varphi_{1,\mathrm{r}}(s),\varphi_{2,\mathrm{r}}(s),\varphi_{3,\mathrm{r}}(s))\in\mathbb{R}^{3}\), we let \(\psi_{\mathrm{r}}=2(\varphi_{1,\mathrm{r}}(s)+\varphi_{2,\mathrm{r}}(s)+\varphi_{3,\mathrm{r}}(s))\), and_ \[\begin{pmatrix}t_{3}\\ t_{2}\\ s\end{pmatrix}=\mathfrak{Re}\Xi\begin{pmatrix}u_{1}\\ u_{2}\\ u_{3}\end{pmatrix}\in\mathbb{R}^{3},\] _and then we have a solution of the gauged MKdV equation (2.4), with gauge field \(\widetilde{A}(t)=(\lambda_{6}-3-\frac{3}{4}(\partial_{s}\psi_{\mathrm{i}})^{2})/2\),_ \[-(\partial_{t_{2}}-\widetilde{A}(s,t_{2})\partial_{s})\psi_{\mathrm{r}}+\frac{1}{8}\left(\partial_{s}\psi_{\mathrm{r}}\right)^{3}+\frac{1}{4}\partial_{s}^{3}\psi_{\mathrm{r}}=0. \tag{4.2}\] Proof.: Let \(s=t_{1}=u_{3,\mathrm{r}}\) formally. Since \(\Xi_{ij}=(\frac{\partial t_{4-i}}{\partial u_{j}})\), we have \[\begin{pmatrix}\partial_{u_{1}\,\mathrm{r}}\\ \partial_{u_{2}\,\mathrm{r}}\\ \partial_{u_{3}\,\mathrm{r}}\end{pmatrix}={}^{t}\Xi\begin{pmatrix}\partial_{t_{3}}\\ \partial_{t_{2}}\\ \partial_{s}\end{pmatrix}=\begin{pmatrix}\partial_{t_{3}}-\partial_{t_{2}}\\ \partial_{t_{2}}\\ \partial_{s}\end{pmatrix}.\] We note that in the MKdV equation, there are no differential terms with respect to \(u_{1}\). Hence we obtain the identity (4.2). ## 5. Hyperelliptic solutions of the MKdV equation over \(\mathbb{R}\) We obtain a solution of the gauged MKdV equation (4.2). If it satisfies the condition CIII, we have a hyperelliptic solution of the real MKdV equation (1.1). This yields the following theorem: **Theorem 5.1**.: \(\psi_{\rm r}:=2(\varphi_{1}+\varphi_{2}+\varphi_{3})\) _of the quadrature \(d\varphi_{i,{\rm r}}\)\((i=1,2,3)\) of (4.1) is a local solution of the MKdV equation (1.1) if \(\partial_{s}\psi_{\rm i}\) vanishes._ More precisely, though the above theorem holds if \(\partial_{s}\psi_{\rm i}\) is a constant number \(c\in\mathbb{R}\), we focus on the case \(c=0\). Here we note that (4.1) holds for every point of the \(\varphi\)'s, but each sign \(\widetilde{\gamma}\) in (3.3) is determined by the configuration of \((\varphi_{1},\varphi_{2},\varphi_{3})\in\mathbb{R}^{3}\), so the orbit in \(S^{3}X\) proceeds accordingly. We weaken the vanishing condition on \(\partial_{s}\psi_{\rm i}\) and replace it with the condition that the maximum of \(\partial_{s}\psi_{\rm i}\) is much smaller than the maximum of \(\partial_{s}\psi_{\rm r}\), which we check numerically. Based on these results, we present an algorithm for computing hyperelliptic solutions of the MKdV equation of genus three. Assume that \((k_{1},k_{2},k_{3})\) is given. We note that the coefficients of the matrix \(\mathcal{CKM}\) consist of \(\varphi_{1}\), \(\varphi_{2}\), and \(\varphi_{3}\). 
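To make the quadrature concrete before stating the algorithm, the following minimal Python sketch (our illustration, not the authors' code) performs one Euler step of the real part of (4.1), with \(K(\varphi)=\widetilde{\gamma}\widetilde{K}(\varphi)\) as in Lemma 3.2; the sign bookkeeping for \(\widetilde{\gamma}_{a}\) at the branch points (Table 1 and Figure 1) and the local parameter used near the branch points (Subsection 5.2) are omitted here.

```python
import numpy as np

def K_tilde(phi, k1, k2, k3, gamma=1.0):
    # \tilde{K}(phi) of Lemma 3.2; the sign \tilde{gamma}_a is carried by `signs` below.
    s2 = np.sin(phi) ** 2
    return np.sqrt(gamma * (1 - k1**2 * s2) * (1 - k2**2 * s2) * (1 - k3**2 * s2)) / (k1 * k2 * k3)

def euler_step(phi, signs, ks, ds):
    """One Euler step of d(phi_a,r)/ds from (4.1); phi and signs are length-3 arrays."""
    K = signs * np.array([K_tilde(p, *ks) for p in phi])
    new_phi = phi.copy()
    for a in range(3):
        b, c = (a + 1) % 3, (a + 2) % 3
        new_phi[a] += 2 * K[a] * np.cos(phi[b] + phi[c]) / (
            np.sin(phi[b] - phi[a]) * np.sin(phi[c] - phi[a])) * ds
    return new_phi

# psi_r(s) = 2 * (phi_1 + phi_2 + phi_3); the curve is then traced by
# X_{n+1} = X_n + cos(psi_n) * ds,  Y_{n+1} = Y_n + sin(psi_n) * ds.
```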
For a certain initial condition \((d\varphi_{1,{\rm i}},d\varphi_{2,{\rm i}},\,d\varphi_{3,{\rm i}})|_{s=0}\) such that \(\partial_{s}\psi_{\rm i}\) vanishes, we integrate the real part of \((d\varphi_{1},d\varphi_{2},d\varphi_{3})\) in (4.1) with respect to \(s\). The gauge field \(A\) depends on \(\partial_{s}\psi_{\rm i}\), which is the imaginary part of \(2\partial_{s}(\varphi_{1}+\varphi_{2}+\varphi_{3})\) in (4.1) and is given by \[\begin{split} 2\left[\frac{K_{1}\widetilde{\gamma}_{1}\sin(\varphi_{2}+\varphi_{3})}{\sin(\varphi_{2}-\varphi_{1})\sin(\varphi_{3}-\varphi_{1})}\right.&+\frac{K_{2}\widetilde{\gamma}_{2}\sin(\varphi_{3}+\varphi_{1})}{\sin(\varphi_{3}-\varphi_{2})\sin(\varphi_{1}-\varphi_{2})}\\ &\left.+\frac{K_{3}\widetilde{\gamma}_{3}\sin(\varphi_{1}+\varphi_{2})}{\sin(\varphi_{1}-\varphi_{3})\sin(\varphi_{2}-\varphi_{3})}\right].\end{split} \tag{5.1}\] Each term is determined by \(\partial_{s}\varphi_{j,{\rm i}}\), \(j=1,2,3\). In other words, if \(\partial_{s}\varphi_{\rm i}\) as a function of \((\varphi_{1},\varphi_{2},\varphi_{3})\) is numerically small enough, \(\psi_{\rm r}(s)=2\varphi_{\rm r}(s)\) is a solution of the MKdV equation (1.1). This means that (4.1) satisfies the reality conditions CI, CII, and CIII. **Remark 5.2**.: Even for genus two, we have a relation similar to Proposition 4.1, but it is difficult to find a situation in which the imaginary part of \(\partial_{s}\psi(s)\) vanishes. The vanishing condition means that \(\varphi_{1}\) is a function of \(\varphi_{2}\), and it determines a real curve in \(S^{2}X_{2}\). Since the integration with respect to \(ds\) must then stay on this curve, \(\psi(s)\) contradicts the reality condition CIII. In other words, for genus two, it is difficult to obtain the hyperelliptic solution of the MKdV equation (1.1) except for the degenerate curves associated with the soliton solutions given by \(y^{2}=x^{2}(x-a)^{2}(x-b)^{2}\). ### Algorithm for hyperelliptic solutions of the generalized elastica We perform the numerical integration of (4.1) as follows. 1. We set \((k_{1},k_{2},k_{3})\) to determine a hyperelliptic curve. We assume the case of Figure 1 (a) or (b), i.e., \(k_{1}>k_{2}>k_{3}>1.0\) or \(k_{3}>k_{2}>k_{1}>1.0\), respectively. We explain mainly the case (a). 2. We set the initial condition \((\varphi_{1},\varphi_{2},\varphi_{3})|_{s=0}\) such that the imaginary part of \(2(d\varphi_{1}+d\varphi_{2}+d\varphi_{3})/ds\) in (5.1) vanishes, as in Subsection 5.5. 3. We employ the Euler method for the numerical quadrature with a sufficiently small real value \(\delta s\). (Rigorously, \(\delta s\) could be assumed to be infinitesimal.) We find \(\delta\varphi_{a,{\rm r}}=\mathfrak{Re}(d\varphi_{a}/ds)\delta s\) for the real part of the component of \(\mathcal{KM}\), e.g., \[\delta\varphi_{1,r}=\frac{2\widetilde{\gamma}_{1}K_{1}\cos(\varphi_{2}+\varphi_{3})}{\sin(\varphi_{2}-\varphi_{1})\sin(\varphi_{3}-\varphi_{1})}\delta s.\] In the computation, we check Table 1 and the sign of \(\cos(\varphi_{a}+\varphi_{b})\) in the numerator in (4.1) so that the orbit of \(\varphi_{a}\) moves back and forth between the branch points \((-\varphi_{\sf b},\varphi_{\sf b})\) in (3.2) as in Figure 1 (a). 
We note that at the branch point, the orbit of \(\varphi_{a}\) turns the direction by changing the sign of \(\widetilde{\gamma}_{a}\) so that \(\widetilde{\gamma}_{a}\sin^{2}(\varphi_{a})\delta\varphi_{a}/2K_{a}\) is positive as in Figure 1; it moves the different leaf of the Riemann surface with respect to the projection \(X_{3}\to\mathbb{P}\) (\((x,y)\mapsto x\)) after passing the branch points. Following the Euler method of the numerical quadrature method for the \(n\)-step, we obtain the \(\varphi_{a}\) development, \[\varphi_{a,n+1}:=\varphi_{a,n}+\delta\varphi,\quad(a=1,2,3).\] We let \(\psi_{n}=2(\varphi_{1,n}+\varphi_{2,n}+\varphi_{3,n})+\psi_{c}\) so that \(\psi_{0}\) is a certain value (in the following results, \(\psi_{0}=0\)), and numerically integrate \[X_{n+1}=X_{n}+\cos(\psi_{n})\delta s,\quad Y_{n+1}=Y_{n}+\sin(\psi_{n})\delta s,\] to obtain the generalized elastica \((X(s),Y(s))\) of a certain \((u_{1},u_{2},u_{3})\) point in \(J_{X}\). 1. We monitor the imaginary part of \(\partial_{s}\psi_{1}=2\mathfrak{Im}(\delta\varphi_{1}+\delta\varphi_{2}+ \delta\varphi_{3})/\delta s\) in (5.1). 2. Since in the branch point, there appears the singular computation, we use the following local parameter \(\mathfrak{t}\) as in Subsection 5.2. 3. At the intersection between two orbits, the component in \(\mathcal{KM}\) is singular. However, since \(d\varphi_{1,\mathrm{r}}+d\varphi_{2,\mathrm{r}}+d\varphi_{3,\mathrm{r}}\) is well-defined even at the intersection as follows, we avoid the numerical error so that we obtain the correct data of \(d\varphi_{\mathrm{r}}\) and \(d\varphi_{\mathrm{i}}\) as in Subsections 5.3 and 5.4. 4. We obtain the shape of the generalized elastica, and if needs, we set different \((k_{1},k_{2},k_{3})\) and to obtain the different shapes. ### At the branch points We consider the behavior at the branch point here. Assume \(k_{1}>k_{2}>k_{3}\). Let \(\varphi_{\mathfrak{b},1}:=\sin^{-1}(1/k_{1})\), simply \(\varphi_{\mathfrak{b}}\). We consider the one-forms at \(\varphi=\pm\varphi_{\mathfrak{b}}\): **Lemma 5.3**.: _Let \(\varphi=\pm(\varphi_{\mathfrak{b}}-\widetilde{\varphi})\) and \(\mathfrak{t}:=\sqrt{\widetilde{\varphi}}\). At \(\varphi=\pm\varphi_{\mathfrak{b}}\),_ \[\nu_{1}^{\mathrm{I}}=\frac{2\mathrm{e}^{-2\sqrt{-1}\varphi_{\mathfrak{b}}}dt }{8\gamma^{2}K_{\mathfrak{b}}},\quad\nu_{2}^{\mathrm{I}}=\frac{2\sqrt{-1} \mathrm{e}^{-\sqrt{-1}\varphi_{\mathfrak{b}}}\sin(\varphi_{\mathfrak{b}})dt} {4\gamma K_{\mathfrak{b}}},\quad\nu_{3}^{\mathrm{I}}=\frac{-2\sin^{2}(- \varphi_{\mathfrak{b}})dt}{K_{\mathfrak{b}}},\] _where \(K_{\mathfrak{b}}:=\widetilde{\gamma}\widetilde{K}_{\mathfrak{b}}(\varphi)\),_ \[\widetilde{K}_{\mathfrak{b}}(\varphi):=\frac{\sqrt{\gamma\xi_{\mathfrak{b}}(t)(1 \pm k_{1}\sin\varphi)(1-k_{2}^{2}\sin^{2}\varphi)(1-k_{3}^{2}\sin^{2}\varphi)}}{ k_{1}k_{2}k_{3}},\] \[\xi(t):=(k_{1}\cos\varphi_{\mathfrak{b}})+\frac{1}{2!}t^{2}-\frac{k_{1}}{3!} \cos\varphi_{c}t^{4}+\mathcal{O}(t^{5})\text{ and }\widetilde{\gamma}=\pm 1.\] Proof.: \((1\mp k_{1}\sin\varphi)=(1\mp k_{1}(\sin\varphi_{\mathfrak{b}}-\widetilde{ \varphi}\cos\varphi_{\mathfrak{b}}+\mathcal{O}(\widetilde{\varphi}^{2}))\). \(d\widetilde{\varphi}=-2td\). We consider \(\varphi_{1}=\varphi_{\mathfrak{b}}\) case: **Lemma 5.4**.: _For \(\varphi_{j}\in(-k_{\mathfrak{b}},k_{\mathfrak{b}})\), \((j=2,3)\), \(\varphi_{1}=\pm\varphi_{\mathfrak{b}}\), and \(\mathfrak{t}=\sqrt{\mp\varphi_{1}-\varphi_{\mathfrak{b}}}\). Let \(K_{j}:=K(\varphi_{j})\), \(j=2,3\). 
The following holds:_ \[\begin{pmatrix}du_{1}\\ du_{2}\\ du_{3}\end{pmatrix}=\begin{pmatrix}\frac{2e^{-2\sqrt{-1}\varphi_{\mathfrak{b} }}}{8\gamma^{2}K_{\mathfrak{b}}}&\frac{e^{-2\sqrt{-1}\varphi_{2}}}{8\gamma^{2 }K_{2}}&\frac{e^{-2\sqrt{-1}\varphi_{3}}}{8\gamma^{2}K_{3}}\\ \frac{2\sqrt{-1}e^{-\sqrt{-1}\varphi_{\mathfrak{b}}}\sin(\varphi_{\mathfrak{b} })}{4\gamma K_{\mathfrak{b}}}&\frac{\sqrt{-1}e^{-\sqrt{-1}\varphi_{2}}\sin( \varphi_{2})}{4\gamma K_{\mathfrak{b}}}&\frac{\sqrt{-1}e^{-\sqrt{-1}\varphi_{ 3}}\sin(\varphi_{3})}{4\gamma K_{3}}\\ \frac{-2\sin(\varphi_{\mathfrak{b}})}{2K_{\mathfrak{b}}}&\frac{-\sin^{2}( \varphi_{2})}{2K_{2}}&\frac{-\sin^{2}(\varphi_{3})}{2K_{3}}\\ \end{pmatrix}\begin{pmatrix}dt\\ d\varphi_{2}\\ d\varphi_{3}\end{pmatrix}.\] **Lemma 5.5**.: _For \(\varphi_{j}\in(-k_{\mathfrak{b}},k_{\mathfrak{b}})\), \((j=2,3)\), \(\varphi_{1}=\varphi_{\mathfrak{b}}\), and \(\mathfrak{t}=\sqrt{\mp\varphi_{1}-\varphi_{\mathfrak{b}}}\). we have_ \[\begin{pmatrix}dt_{1}\\ d\varphi_{2}\\ d\varphi_{3}\end{pmatrix}=\mathcal{K}_{\mathfrak{b}}\mathcal{M}_{\mathfrak{b}} \begin{pmatrix}du_{1}\\ du_{2}\\ du_{3}\end{pmatrix},\] _where \(\mathcal{K}_{\mathfrak{b}}:=\begin{pmatrix}\frac{K_{\mathfrak{b}}/2}{\sin( \varphi_{2}-\varphi_{\mathfrak{b}})\sin(\varphi_{3}-\varphi_{\mathfrak{b}})}&0 &0\\ 0&\frac{K_{2}}{\sin(\varphi_{3}-\varphi_{2})\sin(\varphi_{\mathfrak{b}}-\varphi_ {2})}&0\\ 0&0&\frac{K_{3}}{\sin(\varphi_{\mathfrak{b}}-\varphi_{3})\sin(\varphi_{2}- \varphi_{3})}\end{pmatrix}\) and,_ \[\mathcal{M}_{\mathfrak{b}}:=\begin{pmatrix}8\gamma^{2}\sin\varphi_{2}\sin \varphi_{3}&-4\sqrt{-1}\gamma(2\sqrt{-1}\sin\varphi_{2}\sin\varphi_{3}-\sin( \varphi_{2}+\varphi_{3}))&-2{\rm e}^{-\sqrt{-1}(\varphi_{2}+\varphi_{3})}\\ 8\gamma^{2}\sin\varphi_{\mathfrak{b}}\sin\varphi_{3}&-4\sqrt{-1}\gamma(2\sqrt{-1 }\sin\varphi_{\mathfrak{b}}\sin\varphi_{3}-\sin(\varphi_{3}+\varphi_{\mathfrak{b }}))&-2{\rm e}^{-\sqrt{-1}(\varphi_{\mathfrak{b}}+\varphi_{3})}\\ 8\gamma^{2}\sin\varphi_{\mathfrak{b}}\sin\varphi_{2}&-4\sqrt{-1}\gamma(2\sqrt{-1 }\sin\varphi_{\mathfrak{b}}\sin\varphi_{2}-\sin(\varphi_{\mathfrak{b}}+ \varphi_{2}))&-2{\rm e}^{-\sqrt{-1}(\varphi_{\mathfrak{b}}+\varphi_{2})}\end{pmatrix}.\] ### Intersection; real part We consider the behavior at the intersection point of two orbits \(\varphi_{a}(s)\) and \(\varphi_{b}(s)\), \((a\neq b)\). The intersection means that we consider the the integral \(\int_{(x,y)}^{(x,-y)}\nu^{1}\), which must be the value associated with the period of the lattice in the Jacobi variety due to the Abel theorem [8]. Algebraically, the point in \(S^{3}X\) is reduced to a point in \(X\); it may be considered as 'an algebraic jumping'. 
\begin{table} \begin{tabular}{|c|c|c|c|} \hline & \(\varphi_{1}>\varphi_{2}>\varphi_{3}\) & \(\varphi_{1}>\varphi_{3}>\varphi_{2}\) & \(\varphi_{2}>\varphi_{1}>\varphi_{3}\) \\ \hline \(\sin(\varphi_{2}-\varphi_{1})\) & \(-\) & \(-\) & \(+\) \\ \(\sin(\varphi_{3}-\varphi_{1})\) & \(-\) & \(-\) & \(-\) \\ \(\sin(\varphi_{3}-\varphi_{2})\) & \(-\) & \(+\) & \(-\) \\ \(\sin(\varphi_{2}-\varphi_{1})\sin(\varphi_{3}-\varphi_{1})\) & \(+\) & \(+\) & \(-\) \\ \(\sin(\varphi_{3}-\varphi_{2})\sin(\varphi_{1}-\varphi_{2})\) & \(-\) & \(+\) & \(+\) \\ \(\sin(\varphi_{1}-\varphi_{3})\sin(\varphi_{2}-\varphi_{3})\) & \(+\) & \(-\) & \(+\) \\ \hline \hline & \(\varphi_{2}>\varphi_{3}>\varphi_{1}\) & \(\varphi_{3}>\varphi_{1}>\varphi_{2}\) & \(\varphi_{3}>\varphi_{2}>\varphi_{1}\) \\ \hline \(\sin(\varphi_{2}-\varphi_{1})\) & \(+\) & \(-\) & \(+\) \\ \(\sin(\varphi_{3}-\varphi_{1})\) & \(+\) & \(+\) & \(+\) \\ \(\sin(\varphi_{3}-\varphi_{2})\) & \(-\) & \(+\) & \(+\) \\ \(\sin(\varphi_{2}-\varphi_{1})\sin(\varphi_{3}-\varphi_{1})\) & \(+\) & \(-\) & \(+\) \\ \(\sin(\varphi_{3}-\varphi_{2})\sin(\varphi_{1}-\varphi_{2})\) & \(+\) & \(+\) & \(-\) \\ \(\sin(\varphi_{1}-\varphi_{3})\sin(\varphi_{2}-\varphi_{3})\) & \(-\) & \(+\) & \(+\) \\ \hline \end{tabular} \end{table} Table 1. The sign of the factor Hence the intersection in our method is crucial. However it occurs near the branch point. At the branch point \(\int_{(b_{i},0)}^{(b_{i},0)}\nu^{\mathbb{I}}\) can be regarded as zero in this method. Even though it generates the jumping as \(\partial_{s}\psi\), the behavior of \(\psi\) is not so worse, and the shape of the generalized elastica has smooth shapes as in the following results. Let us assume that two orbits \((\varphi_{1},K_{1}>0)\) with the positive direction \(d\varphi_{1,\text{r}}\) and \((\varphi_{2},K_{2}<0)\) with the negative direction \(d\varphi_{2,\text{r}}\) intersect at \(\varphi_{0}\) as \(\varpi_{x}:X\to\mathbb{P}\). We consider the intersection point \(\varphi_{0}\) in \((s,\varphi)\)-plane. Then let \(\varphi_{1}=\varphi_{0}+\eta_{1}\), \(\eta_{1}\in(-\varepsilon,\varepsilon)\) and \(\varphi_{2}=\varphi_{0}-\eta_{2}\), \(\eta_{2}\in(-\varepsilon,\varepsilon)\) for \(1\gg\varepsilon>0\). 
Let \(-\widetilde{\gamma}_{1}=\widetilde{\gamma}_{2}=1\), and thus we have \(K_{1}=K_{0}+\partial_{\varphi}K_{0}\eta_{1}+o(\eta_{1})\) and \(K_{2}=-K_{0}+\partial_{\varphi}K_{0}\eta_{2}+o(\eta_{2})\), where \(K^{\prime}:=\dfrac{\partial K(\varphi)}{\partial\varphi}\) is equal to \[-\dfrac{\sin(2\varphi)(3(k_{1}k_{2}k_{3})^{2}\sin(\varphi)^{4}-2(k_{1}^{2}k_{ 2}^{2}+k_{1}^{2}k_{3}^{2}+k_{2}^{2}k_{3}^{2})\sin(\varphi)^{2}+(k_{1}^{2}+k_{2 }^{2}+k_{3}^{2}))}{2K(\varphi)}.\] \[\begin{pmatrix}d\eta_{1}\\ -d\eta_{2}\end{pmatrix} =\begin{pmatrix}\frac{-K_{1}\cos(\varphi_{2}+\varphi_{3})ds}{ \sin(\varphi_{2}-\varphi_{3}\sin(\varphi_{3}-\varphi_{1})}\\ -K_{2}\cos(\varphi_{i}+\varphi_{3})ds\\ \frac{-K_{2}\cos(\varphi_{1}+\varphi_{3})ds}{\sin(\varphi_{1}-\varphi_{2}) \sin(\varphi_{3}-\varphi_{2})}\end{pmatrix}\] \[=\begin{pmatrix}\frac{-(K_{0}+K_{0}\eta_{1})\cos(\varphi_{0}- \eta_{2}+\varphi_{3})ds}{\sin(\eta_{2}+\eta_{1})\sin(\varphi_{3}-\varphi_{0}) }+d_{>0}(\eta_{1},\eta_{2})\\ \frac{(K_{0}-K_{0}\eta_{2})\cos(\varphi_{0}-\eta_{1}+\varphi_{3})ds}{\sin( \eta_{1}+\eta_{2})\sin(\varphi_{3}-\varphi_{0}+\eta_{2})}+d_{>0}(\eta_{1}, \eta_{2})\end{pmatrix}\] \[=\begin{pmatrix}\frac{(K_{0}\cos(\varphi_{0}+\varphi_{3})+K_{0}^{ \prime}\cos(\varphi_{0}+\varphi_{3})\eta_{1}+K_{0}\sin(\varphi_{0}+\varphi_{3} )\eta_{2})ds}{(\eta_{2}+\eta_{1})\sin(\varphi_{3}-\varphi_{0})}(1+\frac{\cos (\varphi_{3}-\varphi_{0})\eta_{1}}{\sin(\varphi_{3}-\varphi_{0})})\\ -\frac{(K_{0}\cos(\varphi_{0}+\varphi_{3})-K_{0}\cos(\varphi_{0}+\varphi_{3} )\eta_{2}-K_{0}\sin(\varphi_{0}+\varphi_{3})\eta_{1})ds}{(\eta_{1}+\eta_{2}) \sin(\varphi_{3}-\varphi_{0})}(1-\frac{\cos(\varphi_{3}-\varphi_{0})\eta_{2} }{\sin(\varphi_{3}-\varphi_{0})})\\ \end{pmatrix}\] \[\qquad+d_{>0}(\eta_{1},\eta_{2})\] \[=\begin{pmatrix}\frac{K_{0}\cos(\varphi_{0}+\varphi_{3})ds}{( \eta_{2}+\eta_{1})\sin(\varphi_{3}-\varphi_{0})}\left(1+\frac{K_{0}^{\prime} \cos(\varphi_{0}+\varphi_{3})}{K_{0}\cos(\varphi_{0}+\varphi_{3})}+\frac{\cos (\varphi_{3}-\varphi_{0})}{\sin(\varphi_{3}-\varphi_{0})}\right)\eta_{1}+\frac {\sin(\varphi_{0}+\varphi_{3})}{\cos(\varphi_{0}+\varphi_{3})}\eta_{2}\right)\\ -\frac{K_{0}\cos(\varphi_{0}+\varphi_{3})ds}{(\eta_{1}+\eta_{2})\sin(\varphi_{ 3}-\varphi_{0})}\left(1-\frac{K_{0}^{\prime}\cos(\varphi_{0}+\varphi_{3})}{K_ {0}\cos(\varphi_{0}+\varphi_{3})}+\frac{\cos(\varphi_{3}-\varphi_{0})}{\sin( \varphi_{3}-\varphi_{0})}\right)\eta_{2}-\frac{\sin(\varphi_{0}+\varphi_{3})}{ \cos(\varphi_{0}+\varphi_{3})}\eta_{1}\right)\end{pmatrix}\] \[\qquad+d_{>0}(\eta_{1},\eta_{2})\end{pmatrix}.\] Here \(d_{>\ell}(t_{1},t_{2})\) denotes an element in the formal power series \(\mathbb{C}[[t_{1},t_{2}]]\) whose smallest order is \(\ell+1\). They can be expressed as \[\begin{split}(\eta_{1}+\eta_{2})(1-\mathfrak{b}_{1}\eta_{1}- \mathfrak{b}_{2}\eta_{2}+d_{>0}(\eta_{1},\eta_{2}))d\eta_{1}&=\mathfrak{a}ds, \\ (\eta_{1}+\eta_{2})(1+\mathfrak{b}_{1}\eta_{2}+\mathfrak{b}_{2}\eta_{1}+d_{>0} (\eta_{1},\eta_{2}))d\eta_{2}&=\mathfrak{a}ds,\end{split} \tag{5.2}\] where \[\mathfrak{a}=\frac{K_{0}\cos(\varphi_{0}+\varphi_{3})}{\sin(\varphi_{3}- \varphi_{0})},\quad\mathfrak{b}_{1}=\frac{K_{0}^{\prime}}{K_{0}}+\frac{\cos( \varphi_{3}-\varphi_{0})}{\sin(\varphi_{3}-\varphi_{0})},\quad\mathfrak{b}_{2}= \tan(\varphi_{0}+\varphi_{3})\] \[\frac{d\eta_{2}}{d\eta_{1}}=(1-2\mathfrak{b}_{1}\eta_{1}-2\mathfrak{b}_{2}\eta_ {2})+d_{>0}(\eta_{1},\eta_{2})).\] We note \(\eta_{2}(0)=0\). 
We substitute the expansion \(\eta_{2}=\eta_{1}+\mathfrak{c}_{1}\eta_{1}^{2}+d_{>2}(\eta_{1})\) into the equation, \[1+2\mathfrak{c}_{1}\eta_{1}=1-2\mathfrak{b}_{1}\eta_{1}-2\mathfrak{b}_{2}\eta_{1} +d_{>1}(\eta_{1}).\] We have \(\eta_{2}=\eta_{1}-(\mathfrak{b}_{1}+\mathfrak{b}_{2})\eta_{1}^{2}+d_{>2}(\eta_{ 1})\), Thus the behavior of the \(s\) at the crossing point, \[s=\frac{(\mathfrak{b}_{1}+\mathfrak{b}_{2})}{2\mathfrak{a}_{1}}\eta_{1}^{2}+d_{>2 }(\eta_{1}).\] We note that \(\eta_{2}\) is defined as the function of \(\eta_{1}\) for \((-\varepsilon,\varepsilon)\). It means that though \(\partial_{s}\varphi_{i,\text{r}}\) seems to diverge at the point due to (5.2) if the point is far from the branch point where \(K_{0}\) nearly vanishes. Even for the case, we have \[d\varphi_{1,\text{r}}+d\varphi_{2,\text{r}}=d(\eta_{1}-\eta_{2})=2(\mathfrak{b}_ {1}+\mathfrak{b}_{2})\eta_{1}+d_{>1}(\eta_{1},\eta_{2})).\] In the following numerical computations, we have such situations. ### Intersection; imaginary part Similarly, we consider the case of the imaginary part. \[\begin{pmatrix}d\varphi_{1,\mathrm{i}}\\ d\varphi_{2,\mathrm{i}}\end{pmatrix} =\begin{pmatrix}-K_{1}\sin(\varphi_{2}+\varphi_{3})ds\\ \hline\frac{K_{2}\sin(\varphi_{1}+\varphi_{2})ds}{\sin(\varphi_{1}-\varphi_{2}) \sin(\varphi_{3}-\varphi_{2})}\end{pmatrix}\] \[=\begin{pmatrix}\frac{-(K_{0}+K_{0}\gamma_{1})\sin(\varphi_{0}- \eta_{2}+\varphi_{3})ds}{-\sin(\eta_{2}+\eta_{1})\sin(\varphi_{3}-\varphi_{0}- \eta_{1})}+d_{>0}(\eta_{1},\eta_{2})\\ \frac{(-K_{0}+K_{0}^{\prime}\eta_{2})\sin(\varphi_{0}+\eta_{1}+\varphi_{3})ds }{\sin(\eta_{1}+\eta_{2})\sin(\varphi_{3}-\varphi_{0}+\eta_{2})}+d_{>0}(\eta_{ 1},\eta_{2})\end{pmatrix}\] By substituting the above results, \(\eta_{2}(\eta_{1})\) and \(s(\eta_{1})\) into the relation, we have \[d\varphi_{1,\mathrm{i}}+d\varphi_{2,\mathrm{i}}=0+\eta_{1}+d_{>1}(\eta_{1}). \tag{5.3}\] It vanishes at the cross point \(\eta_{1}=0\). ### Initial condition We obtain the configuration of \((\varphi_{1},\varphi_{2},\varphi_{3})\) such that \(d\varphi_{\mathrm{i}}(\varphi_{1},\varphi_{2},\varphi_{3})/ds\) vanishes as an initial condition as follows. Assume that \(\varphi_{1}=\varphi_{\mathfrak{b}}\) such that \(K_{1}=K(\varphi_{\mathfrak{b}})=0\), (5.1) is equal to \[-\frac{K_{2}\sin(\varphi_{3}+\varphi_{\mathfrak{b}})}{\sin(\varphi_{3}- \varphi_{2})\sin(\varphi_{\mathfrak{b}}-\varphi_{2})}+\frac{K_{3}\sin( \varphi_{\mathfrak{b}}+\varphi_{2})}{\sin(\varphi_{\mathfrak{b}}-\varphi_{3} )\sin(\varphi_{2}-\varphi_{3})}=0,\] whose solution is \((\phi_{2},K_{2})=(\phi_{3},K_{3})\) due to (5.3). ## 6. Numerical results We demonstrate the shapes of the generalized elastica of genus three, a closed solution and three open solutions. To obtain the closed generalized elasticae, we employed the so-called shooting method: By changing the \(k\)'s and initial conditions, we computed several shapes of the generalized elasticae, and picked up the closed ones. The first result is displayed in Figure 2. For the hyperelliptic curve given by \((k_{1},k_{2},k_{3})=(1.02,1.015,1.010)\), we put the initial condition \((\varphi_{1},\varphi_{2},\varphi_{3})=(\varphi_{\mathfrak{b}},-1.35,-1.35)\). Figure 2 (a) shows an open shape of the generalized elastica Figure 2 (b) displays these \(\varphi_{i,r}\) and \(\psi_{\mathrm{r}}/2\). As in Figure 2 (c), the maximum of the absolute value of \(\partial_{s}\psi_{\mathrm{r}}\) is much larger than that of \(\partial_{s}\psi_{\mathrm{i}}\). 
In other words, the orbit is regarded as one of the MKdV equation (1.1) rather than of the gauged MKdV equation (2.4). The second result is illustrated in Figure 3. We used the hyperelliptic curve given by \((k_{1},k_{2},k_{3})=(1.04,1.038,1.019)\). The initial condition is set to \((\varphi_{1},\varphi_{2},\varphi_{3})=(\varphi_{\mathfrak{b}},-1.292,-1.292)\). Figure 3 (a) shows the shape of the generalized elastica, which is closed and is a part of an open one as in Figure 3 (b). Figure 3 (c) shows these \(\varphi_{i,r}\) and \(\psi_{\mathrm{r}}/2\). As in Figure 3 (d), the maximum of the absolute value of \(\partial_{s}\psi_{\mathrm{r}}\) is also larger than that of \(\partial_{s}\psi_{\mathrm{i}}\). The orbit might approximately be regarded as one of the MKdV equation (1.1) rather than of the gauged MKdV equation (2.4). Figures 3 (e) and (f) show the parts \(\partial_{s}\varphi_{i,r}\) and \(\partial_{s}\varphi_{i,\mathrm{i}}\), respectively. The third result, which is not closed, is displayed in Figure 4. We used the hyperelliptic curve given by \((k_{1},k_{2},k_{3})=(1.04,1.039,1.010)\), and the initial condition is \((\varphi_{1},\varphi_{2},\varphi_{3})=(\varphi_{\mathfrak{b}},-0.90,-0.90)\). Figure 4 (a) gives the shape of the open generalized elastica. The orbit might also approximately be regarded as one of the MKdV equation (1.1) rather than of the gauged MKdV equation (2.4). The third example consists of a figure eight and an inverse 'S', as in Figure 4. The pattern is regarded as a repeated modulation of the figure eight and the inverse 'S'. Similarly, we can use \(\sqrt{-1}ds\) instead of \(ds\) and consider the case in Figure 1 (b). Since replacing \(ds\) with \(\sqrt{-1}ds\) does not seriously affect the MKdV equation (1.1), up to replacing \((dt,\alpha)\) with \((-\sqrt{-1}dt,-\alpha)\), we can use the same algorithm to compute the shape of the generalized elastica by considering \(\sqrt{-1}K_{a}\) instead of \(K_{a}\) for Figure 1 (b). We obtained Figure 5. Figures 5 (a) and (b) show a shape of the generalized elastica with parameters \((k_{1},k_{2},k_{3})=(4.00,5.00,6.00)\) and initial condition \((\varphi_{1},\varphi_{2},\varphi_{3})=(\pi-\varphi_{\mathfrak{b}},1.4,1.4)\). The distributions of \(\partial_{s}\psi_{\mathrm{r}}\) and \(\partial_{s}\psi_{\mathrm{i}}\) in Figure 5 (b) show that there exists a major part where the local maximum of \(|\partial_{s}\psi_{\rm i}|\) is smaller than that of \(|\partial_{s}\psi_{\rm r}|\), and there \(\psi_{\rm r}\) might be regarded as a solution of the MKdV equation (1.1); of course, \(\psi_{\rm r}\) is a solution of the gauged MKdV equation (2.4). The fourth example also consists of a figure eight and an inverse 'S', a repeated modulation of the figure eight and the inverse 'S', as in Figure 5 (a). It is surprising that we can find a similar shape, a repeated modulation of the figure eight and the inverse 'S', in the supercoiled DNAs shown in Figure 5 (c) [10, Figure 4]. In their study of a new type of highly ordered DNA organization, Japaridze et al. investigated the conformation of DNA both experimentally and theoretically. They found a new type of order, called "hyperplectonemes." The shape in the figure is presented as a case with fewer hyperplectonemes. In fact, in this case there are similar shapes whose parts show figure-eight and inverse-'S' shapes. Following that study, some of the authors found a parameter that governs the order and controls its degree in [22]. 
We also find shapes similar to Figure 5 (c) in [22, Figure 7 (c)], part of which is expressed by the modulation of the figure eight and the inverse 'S', as in Figure 5 (a). Thus we may consider that our model expresses a certain class of shapes of supercoiled DNA. We emphasize that, except for the figure-eight given by Euler in 1744, which is found in short closed supercoiled DNAs, e.g. in [25], no one has ever mathematically reproduced any shape of supercoiled DNA with voids. Thus, this demonstration provides the first step toward the mathematical representation of the conformations of the supercoiled DNA. In other words, the generalized elastica of genus three reproduces the geometrical property of the supercoiled DNA. The shapes of the supercoiled DNA are not tight but obey weak elastic forces in general. As the MKdV equation preserves the Euler-Bernoulli energy \(\int(\partial_{s}\psi)^{2}ds\), it can be regarded as describing excited states of elasticity rather than the ground state or minimal-energy configuration. This means that we are beginning to step beyond the shape of Euler's elastica, including thermal effects [18]. ## 7. Conclusion In this paper, we showed a novel algebro-geometric method to obtain the solutions of the generalized elastica of the gauged MKdV equation (2.4) as in Proposition 4.2. Theorem 5.1 shows that they are also regarded as solutions of the MKdV equation (1.1) if \(\partial_{s}\psi_{\rm i}\) is small. Based on them, we provided a concrete algorithm to obtain the numerical solutions of the generalized elastica in Subsection 5.1. We demonstrated typical conformations of the generalized elastica by numerical computations; no one had ever drawn such complicated shapes before. We found that one of them is very similar to the AFM image of the supercoiled DNAs in [10, Figure 4]. Thus we consider that our model expresses a certain class of shapes of supercoiled DNA. Only the shapes of supercoiled DNA related to Euler's figure eight and circle had been expressed theoretically and numerically; the more complicated ones with voids had never been obtained. We showed such shapes by the algebro-geometric method. Figure 4. The shape of the generalized elastica with \((k_{1},k_{2},k_{3})=(1.04,1.039,1.010)\) and the initial condition \((\varphi_{1},\varphi_{2},\varphi_{3})=(\varphi_{\sf b},-0.90,-0.90)\). Figure 5. The generalized elastica and the shape of supercoiled DNA: (a) is a shape of the generalized elastica with \((k_{1},k_{2},k_{3})=(4.00,5.00,6.00)\) and the initial condition \((\varphi_{1},\varphi_{2},\varphi_{3})=(\pi-\varphi_{\sf b},1.4,1.4)\), and (b) is its distribution of \(\partial_{s}\psi\). (c) is the shape of a supercoiled DNA, which is a part of the AFM images in [10, Figure 4]. However, while the results of this paper are certainly novel and intriguing, a number of issues need to be resolved in the future. Though we have computed it, the behavior of the imaginary part \(\partial_{s}\psi_{\rm i}\) is unclear. This behavior should be considered more precisely in the future to find the solutions of the MKdV equation. Based on that knowledge, we should find the generalized elastica of higher genus \(g>3\). Furthermore, we should find the hyperelliptic solutions of the NLS equation beyond [23] to obtain the generalized elastica in \(\mathbb{R}^{3}\) as in [16]. **Acknowledgment:** This project was started with Emma Previato in Montreal in 2004 and was carried out in collaboration with her until she passed away on June 29, 2022. 
Though the author took the step to genus three curves without her, he appreciates the contributions and suggestions that she gave to this project during her lifetime. Further, it is acknowledged that John McKay, who passed away in April 2022, invited the author and her to his private seminar in Montreal in 2004, since he considered that this project [15] must be related to his Monster group problem [14, 19]. Thus this study is dedicated to Emma Previato and John McKay. The author thanks Junkichi Satsuma, Takashi Tsuboi, and Tetsuji Tokihiro for inviting him to the Musashino Center of Mathematical Engineering Seminar and for valuable discussions, and Yuta Ogata, Yutaro Kabata, and Kaname Matsue for helpful discussions and suggestions. He is also grateful to Aleksandre Japaridze, Giovanni Longo, and Giovanni Dietler, the authors of [10], for helpful comments on Figure 4 in [10] and for sending him its interesting follow-up article [22]. He also acknowledges support from the Grant-in-Aid for Scientific Research (C) of the Japan Society for the Promotion of Science, Grant No. 21K03289.
2309.05429
Improving Information Extraction on Business Documents with Specific Pre-Training Tasks
Transformer-based Language Models are widely used in Natural Language Processing related tasks. Thanks to their pre-training, they have been successfully adapted to Information Extraction in business documents. However, most pre-training tasks proposed in the literature for business documents are too generic and not sufficient to learn more complex structures. In this paper, we use LayoutLM, a language model pre-trained on a collection of business documents, and introduce two new pre-training tasks that further improve its capacity to extract relevant information. The first is aimed at better understanding the complex layout of documents, and the second focuses on numeric values and their order of magnitude. These tasks force the model to learn better-contextualized representations of the scanned documents. We further introduce a new post-processing algorithm to decode BIESO tags in Information Extraction that performs better with complex entities. Our method significantly improves extraction performance on both public (from 93.88 to 95.50 F1 score) and private (from 84.35 to 84.84 F1 score) datasets composed of expense receipts, invoices, and purchase orders.
Thibault Douzon, Stefan Duffner, Christophe Garcia, Jérémy Espinas
2023-09-11T13:05:23Z
http://arxiv.org/abs/2309.05429v1
# Improving Information Extraction on Business Documents with Specific Pre-Training Tasks ###### Abstract Transformer-based Language Models are widely used in Natural Language Processing related tasks. Thanks to their pre-training, they have been successfully adapted to Information Extraction in business documents. However, most pre-training tasks proposed in the literature for business documents are too generic and not sufficient to learn more complex structures. In this paper, we use LayoutLM, a language model pre-trained on a collection of business documents, and introduce two new pre-training tasks that further improve its capacity to extract relevant information. The first is aimed at better understanding the complex layout of documents, and the second focuses on numeric values and their order of magnitude. These tasks force the model to learn better-contextualized representations of the scanned documents. We further introduce a new post-processing algorithm to decode BIESO tags in Information Extraction that performs better with complex entities. Our method significantly improves extraction performance on both public (from 93.88 to 95.50 F1 score) and private (from 84.35 to 84.84 F1 score) datasets composed of expense receipts, invoices, and purchase orders. Keywords:Business Documents Document Understanding Information Extraction Pre-Training BIESO Decoding Transformer ## 1 Introduction Business documents are paper-sized files containing useful information about interactions between companies. They may take the form of invoices, purchase orders, various reports, and agreements. The exact layout of a document depends on the issuer, but the contained information is conventionally structured. For example invoices and purchase orders share the same header, table, footer structure that almost all issuers have adopted. Because such documents trace every transaction made by companies, they are the key to business process automation. With the emergence of modern resource planning systems, accurate Information Extraction (IE) has become one of the core problems of Document Intelligence. Initially, information extraction was done by human operators, but software solutions have been developed since the early days of document analysis to tackle the problem. Their intent was to ease the work of human operators with hard-coded extraction rules. Unfortunately, these rules needed to be adapted for each and every layout of documents. This limitation has led to the rise of Machine Learning (ML) models for automatic document IE. First ML approaches relied on Optical Character Recognition (OCR) systems to provide the textual content of the document. This transition from image to text allowed for standard Natural Language Processing (NLP) methods to be applied by adopting a Sequence Labeling problem. The amount of labeled data necessary to train accurate NLP models has always been a problem. Business documents are inherently private which strongly limits the quantity of publicly available data. Thus, only companies selling business process automation software are able to collect larger amounts of such data. Moreover, they often rely on their customer to implicitly label the documents. Most recent proposals often include a pre-training step, where the model is trained on a pretext task. Those pretext tasks are self-supervised problems that teach the model many useful "skills" for manipulating the data. 
Usually, these tasks are as broad as possible, teaching the model common sense about language, grammar, and global structure. In this work, we focus on LayoutLM [31], a pre-trained Transformer that is specialized in business documents. As shown in Fig. 1, it reuses the same Transformer layer with multi-head attention with the addition of a 2D positional encoding. Its larger version achieved state-of-the-art performance in both document classification and information extraction. However, the required hardware to train it can be repelling. In this paper, we propose new pre-training tasks specific to business documents that will provide additional skills to the model. We also propose a new decoding post-processing algorithm that prevents many errors made by the model due to ambiguities. Figure 1: LayoutLM architecture. Token embeddings are enriched with 1D positional encoding and 2D spatial encoding specific to this architecture. The number of blocks \(N\) varies from 12, for the base model, to 24, for the large one. Combined, our contributions3 allow for the base LayoutLM model to perform on par with the large version. Footnote 3: Code available here: [https://github.com/thibaultdouzon/business-document-pre-training](https://github.com/thibaultdouzon/business-document-pre-training) ## 2 Related Work ### Information Extraction Rule-based approaches [10] have been supplanted by Deep Learning models in the last decade. Document IE first capitalized on the state of the art in Named Entity Recognition for NLP [8]. Recurrent Neural Networks with Long-Short Term Memories were first used to encode documents at a word level [16, 22], allowing a simple classifier to predict each word's associated label. Instead of a softmax and cross-entropy loss, a Conditional Random Field [25] model has been used in addition to BIESO tags. Other architectures have also been proposed to better adapt to the specificity of the document. For example, graphs [3, 12, 13, 33] and convolutions over a grid [1, 7, 11] constrained the model based on the words' positional information. Because most architectures relied on textual representations, they benefited from pre-trained word embeddings like Word2Vec [14] or GloVe [17]. With the emergence of Transformers [26] and text encoders like BERT [2], attention-based document analysis models [5, 30, 31] evolved quickly, which resulted in a large improvement of state-of-the-art performance. In line with [7] which included both textual and visual representations, multi-modal Transformers [11, 30] superseded conventional textual models. In parallel to the rise of Transformers, end-to-end IE models tried to reduce the labeling cost. First using RNNs with attention layers [15, 23], then shifting to Transformers [19]. Adopting at the same time the Question Answering [4] (QA) format, instead of the usual Sequence Labeling, provided more flexibility on the predicted labels. ### Pre-Training Semi-supervised training and pre-trained models were popularised in NLP with enriched word embeddings [14, 17, 18]. With the emergence of Transformers, large pre-trained models have been proposed [29]. Thanks to their pre-training, they can efficiently adapt to various tasks [27, 28] and data types. In general, these models are pre-trained on large unlabeled datasets in a self-supervised manner. This self-supervision removes parts of the burden of data labeling [24] and leverages the huge quantities of available data. A wide variety of pre-training tasks have been proposed. 
General-purpose tasks aiming at learning the language and grammar were used first. Auto-regressive tasks [20] and Masked Language Modeling [2] are still frequently used in new pre-trained models as they have proven to be effective in most situations. In addition to incremental improvements [21, 32], some new pre-training tasks were designed to align representations of multi-modal inputs [19, 30]. ## 3 Models ### Architecture We used the well-established LayoutLM architecture [31], which itself is based on the BERT Transformer [2]. More specifically, we chose the base model4 with 12 layers and 512 dimensions for token embeddings. This model is computationally much more efficient compared to the larger version while still giving very good performance. Footnote 4: Pre-trained weights available here: [https://huggingface.co/microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) Transformer models work on tokens that are in between characters and words. LayoutLM and BERT both use the WordPiece algorithm. We use the same tokenizer as LayoutLM in order to compare our performance with the base LayoutLM model. It uses a vocabulary size of 30000, and we limit the sequence length to 512 tokens, including the special tokens [CLS] and [SEP]. This limitation, due to the GPU memory consumption of self-attention operations, often forces us to cut documents into multiple pieces of 512 tokens and process them separately. Contrary to RNNs, all positions in the sequence are equivalent in a Transformer model. To provide information about position inside the sequence, a linear positional encoding [26] is added for each token. LayoutLM then adapted this positional encoding to a 2D version that can represent the positions of words on a page. For both pre-training tasks and fine-tuning, we use a simple dense layer to map each token's final internal representation to the dimension of the prediction space. A softmax layer is applied to produce the final model confidence scores. For training, the cross-entropy loss is used on the model confidence scores. ### ConfOpt Post-Processing We model the Information Extraction task as sequence tagging on tokens. Predictions are done at the token level and then aggregated by a lightweight post-processing step to give the model's final prediction. In all experiments, we use BIESO tagging. That is, each field to extract is composed of a sequence of target tags of the following types: B for the beginning of the entity, I for inside, E for its end, or otherwise S for a single-token entity. O is used for any token that is outside any target label. BIESO is widely used in IE as it provides structure to the target sequence that helps the model. Instead of the trivial post-processing, which consists of simply following the maximum confidence of the model, we decided to decode a model's prediction by solving a basic optimization problem. We will refer to this method as ConfOpt in the remainder of the paper. The predicted sequence for a target label is the sequence that maximizes model confidence over the whole input sequence. There is a constraint to decode a prediction: it must match the following regular pattern: (BI*E) \(|\)S where \(*\) denotes zero or many occurrences and \(|\) denotes an alternative. This optimization problem can be solved with a dynamic programming approach. The model's predictions for one target label can be represented as a \(4\times N\) dimensional matrix where \(N\) is the sequence length and 4 comes from the 4 tags B,I,E,S. By noting \(C_{T,i}\) the model's confidence in tag T at position \(i\) and \(P_{T,i}\) the best prediction confidence ending at token \(i\) with tag T, the objective is to determine \(S\ =\ \max\limits_{\begin{subarray}{c}0\leq i<N\\ T\in\{E,S\}\end{subarray}}P_{T,i}\) where \[P_{B,i}=C_{B,i}\ ;\ P_{I,i}=C_{I,i}+\max\begin{cases}P_{B,i-1}\\ P_{I,i-1}\end{cases}\] \[P_{S,i}=C_{S,i}\ ;\ P_{E,i}=C_{E,i}+\max\begin{cases}P_{B,i-1}\\ P_{I,i-1}\end{cases}\] One drawback of this post-processing is dealing with no prediction and non-unique predictions. It can be solved with an empirically determined threshold below which no predictions are made. However, this is not studied further in this paper because fields are mandatory in a document and always unique. ## 4 Pre-training Transformer models provide great performance when first pre-trained on pretext tasks on very large unlabelled datasets. This pre-training is most of the time done in a self-supervised manner in order to avoid the labeling cost. LayoutLM uses Masked Visual-Language Modeling [31], which is adapted from BERT's Masked Language Modeling [2]. It teaches the model how text and documents are formed at a token level. In practice, at each training step, 15% of the tokens are randomly chosen and are either replaced by a [MASK] token, replaced by a random token, or left unchanged. The model tries to guess which token is the most probable right replacement at those positions. For all pre-training tasks, when a document is too long to be processed at once, we randomly select a continuous span of words of maximum size and provide it to the model instead. We expect the model to learn useful features on various parts of documents thanks to the long training. For very short documents, the input is padded to the maximum size. We introduce two new specific pre-training tasks in addition to Masked Visual-Language Modeling (MVLM). The first one, Numeric Ordering, teaches the model how to compare and order numbers. The second one, Layout Inclusion, focuses on words in the 2D plane and their relative positioning. We chose to avoid regression tasks, even though their implementation would have been simpler. For example, simply removing the 2D positioning of some tokens and asking the model to predict tokens' positions is an alternative to what we propose. But this does not behave well for a token that could appear either at the top or the bottom of the document: the model would learn its mean position - the middle - where the token would never appear. In the following, we will describe the two pre-training tasks in detail. ### Numeric Ordering Task Numeric Ordering (NO) focuses on numeric figures in the document and their relative values. Contrary to MLM, which only relies on self-supervised data, NO relies on a handcrafted number parser to find and parse all numbers that appear in a document. Because business documents mostly contain decimal numbers written with digits, we ignore those written out in words. The numeric value of each token is determined by parsing beforehand each word in the document, looking for numbers and ignoring irrelevant characters. As shown in Fig. 2, the model must predict for every numeric figure in the document whether its parsed value is smaller than, equal to, or greater than a randomly selected number in the document. 
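To make the target construction concrete, here is a minimal sketch of how Numeric Ordering labels could be derived for the words of one document. This is our illustration, not the authors' implementation: the handcrafted parser is reduced to a toy version, labels are shown per word rather than per token, and the masking noise described below is omitted.

```python
import random

IGNORE_INDEX = -100  # label ignored by the loss, e.g. for non-numeric words
SMALLER, EQUAL, GREATER = 0, 1, 2

def parse_number(word):
    """Toy stand-in for the handcrafted number parser (digits only, no number words)."""
    cleaned = word.replace(",", "").replace("$", "")
    try:
        return float(cleaned)
    except ValueError:
        return None

def numeric_ordering_targets(words):
    """One target per word: compare its parsed value to a randomly chosen number."""
    values = [parse_number(w) for w in words]
    numeric_positions = [i for i, v in enumerate(values) if v is not None]
    if not numeric_positions:
        return [IGNORE_INDEX] * len(words)
    ref = values[random.choice(numeric_positions)]
    targets = []
    for v in values:
        if v is None:
            targets.append(IGNORE_INDEX)
        elif v < ref:
            targets.append(SMALLER)
        elif v == ref:
            targets.append(EQUAL)
        else:
            targets.append(GREATER)
    return targets
```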
The model's predictions for one target label can represented as a \(4\times N\) dimensional matrix where \(N\) is the sequence length and 4 comes from the 4 tags B,I,E,S. By noting \(C_{T,0}\) the model's confidence in T tag at position 0 and \(P_{T,i}\) the best prediction confidence ending at token \(i\) with tag T, the objective is to determine \(S\ =\ \max\limits_{\begin{subarray}{c}0\leq i<N\\ T\in\{E,S\}\end{subarray}}P_{T,i}\) where \[P_{B,i}=C_{B,i}\ ;\ P_{I,i}=C_{I,i}+\max\begin{cases}P_{B,i-1}\\ P_{I,i-1}\end{cases}\] \[P_{S,i}=C_{S,i}\ ;\ P_{E,i}=C_{E,i}+\max\begin{cases}P_{B,i-1}\\ P_{I,i-1}\end{cases}\] One drawback of this post-processing is dealing with no prediction and non-unique predictions. It can be solved with an empirically determined threshold below which no predictions are made. Though in this paper this is not further studied because fields are mandatory in a document and always unique. ## 4 Pre-training Transformer models provide great performance when first pre-trained on pretext tasks on very large unlabelled datasets. This pre-training is most of the time done in a self-supervised manner in order to avoid the labeling cost. LayoutLM uses Masked Visual-Language Modeling [31] which is adapted from BERT's Masked Language Modeling [2]. It teaches the model how text and documents are formed at a token level. In practice, at each training step, 15% of the tokens are randomly chosen and replaced by either a [MASK] token, a random token, or not replaced at all. The model tries to guess which token is the most probable right replacement at those positions. For all pre-training tasks when a document is too long to be processed at once, we randomly select a continuous span of words of maximum size and provide it to the model instead. We expect the model to learn useful features on various parts of documents thanks to the long training. For very short documents, the input is padded to the maximum size. We introduce two new specific pre-training tasks in addition to Masked Visual-Language Modeling (MVLM). The first one, Numeric Ordering teaches the model how to compare and order numbers. The second one, Layout Inclusion focuses on words in the 2D plane and their relative positioning. We chose to avoid regression tasks, even though their implementation would have been simpler. For example, simply removing the 2D positioning of some tokens, and asking the model to predict tokens' position is an alternative to what we propose. But this does not behave well for a token that could appear either at the top or the bottom of the document: the model would learn its mean position - the middle - where the token would never appear. In the following, we will describe the two pre-training tasks in detail. ### Numeric Ordering Task Numeric Ordering (NO) focuses on numeric figures in the document and their relative values. Contrary to MLM which only relies on self-supervised data, NO relies on a handcrafted number parser to find and parse all numbers that appear in a document. Because business documents are mostly made of decimal numbers written with digits, we ignore those written out in words. The numeric value of each token is determined by parsing beforehand each word in the document, looking for numbers and ignoring irrelevant characters. As shown in Fig. 2, the model must predict for every numeric figure in the document if its parsed value is smaller, equal or greater than a randomly selected number among the document. 
The loss is only computed on tokens starting a new word, but tokens continuing a word are important to determine the value represented by a word. We want the model not only to reason on the textual features, but also on the spatial context surrounding each figure in the document. Therefore, we randomly mask the textual representations of 15% of the numbers in the document and replace them with the [MASK] token as shown in Fig. 2. For the same reason, we also mask the spatial encoding of 15% of the numbers and make sure both text and position are not masked at the same time. All masked positions are replaced with (1000, 1000, 1000, 1000). Figure 2: A pre-training example with Numeric Ordering task. A random token containing a number is selected, then the target is to predict whether other numbers are smaller or bigger. Some random noise can be added by masking tokens’ textual or spatial representations. Only a small part of the document’s input is represented in this illustration. ### Layout Inclusion Task We introduce another pre-training task focusing on the 2D positional encoding, which we called Layout Inclusion (LI). Its purpose is to provide a better understanding of document layouts and complex structures. In fact, most business documents, including invoices, purchase orders, and expense receipts, contain tables where the meaning of tokens is mostly driven by their position relative to headers. As shown in Fig. 3, Layout Inclusion is formatted like a question answering prompt: a question followed by the content of the document. The question is simply a special token [LAYOUT] positioned at random coordinates \((x_{1},y_{1},x_{2},y_{2})\). The model must then classify every token in the document into 2 groups: either inside or outside of the question token. More precisely, the target answer is whether the middle point of a document token is inside or outside the rectangle described by the coordinates of the question. Again, the objective is for the model to not only reason on the 2D positions of tokens but also use their textual embedding. In order to force the model to use both representations, we randomly replace 15% of documents token positions with (1000, 1000, 1000, 1000). In case of a random position replacement, the target value is still computed based on the real position of the token, and the model must make its prediction based on the token's text and the neighboring tokens using the classical 1D positional encoding. Figure 3: A pre-training example with Layout Ordering task. Coordinates of the purple rectangle are drawn uniformly. Random noise is added by masking the 2D position of some tokens. Only a small part of the document is represented. ## 5 Datasets We used 2 different collections of documents to build 3 datasets for training and evaluation as described in the following. They all contain business documents: invoices and purchase orders for the private collection and expense receipts for the public one. The largest dataset used for pre-training isn't labeled, document samples with their target fields for the others datasets are shown in Fig. 4. ### Business Documents Collection The Business Documents Collection (BDC) is a large private dataset composed of 100k invoices and 300k purchase orders. Those real documents were submitted and processed on a commercial document automation solution in the last 3 years. It contains English-only documents divided into 70000 different issuers. All documents sharing the same issuer usually use the same information template. 
Therefore, we limited the maximum number of documents of the same issuer to 50. It is important to keep the number of similar layouts in the collection low and the variety of examples high. We used this collection for pre-training language models on business documents that are closer to our final objective than RVL-CDIP [9]. Textual and positional information have been extracted using a commercial OCR system. It achieves excellent accuracy on properly scanned documents and Figure 4: A document sample for each training dataset annotated with the expected predictions. For BDC-PO, we replaced the document with a fictive one due to privacy reasons. provides accurate word positions. We also use the provided read order to determine the order of tokens when feeding the network. This order determines the 1D positional encoding given to each token that complements the 2D positional encoding. Because we only used this collection for pre-training models on self-supervised tasks, most documents do not have extraction labeling. Only a subset composed of purchase orders is labeled for the IE task. ### Business Documents Collection - Purchase Orders We selected a subset of the Business Documents Collection to build a labeled dataset of English purchase orders called BDC-PO. It contains almost 9000 different issuers split into training, validation, and test set. In order to not introduce bias for models pre-trained on the BDC, we removed from BDC all documents emitted by a supplier contained in the test set. This means that document layouts contained in the test set have never been seen before by the model at pre-training or training time. Long purchase orders are rare but can sometimes be longer than 20 pages. If we wanted to train models and make predictions on such documents, we would have to evaluate the model on dozens of inputs for one document. Instead, we chose to limit documents to one page and crop the remaining. It only concerns roughly 25% of the dataset and sometimes impacts the prediction because labels are missing from the input. The extraction task consists of 3 fields: document number, delivery date, and total amount. Those fields were chosen because they are mandatory for most customers and thus are well labeled at the word level by the end-user. We controlled the labeling quality at the issuer level and rejected from the dataset some issuers with undesirable labeling practices. ### ICDAR 2019 - Scanned Receipts We also trained and evaluated our model on the public Scanned Receipts OCR and Information Extraction [6] (SROIE) dataset that was published for ICDAR 2019. We focus on the third task which consists in extracting information from the documents. SROIE contains Malaysian receipts split into 626 train and 347 test documents. Unfortunately, we do not have control over the composition of the test set, and most of the test layouts also occur in the training set. We used the OCR text provided with the dataset instead of using our own OCR system. As others have pointed out [31], it contains numerous little errors that negatively affect the final performance. For a fair comparison with the leaderboard, we manually fixed them such that the expected string appears in the input, at least. These fixes mostly concern addresses and company names. It almost exclusively involves fixing errors related to white-spaces and commas. ## 6 Experiments All experiments were performed on a single machine equipped with two Nvidia RTX A6000 with 48Go of video memory each. 
This allowed us to boost the batch size up to 32 per device on a base transformer model. To further increase the batch size, we also aggregated 6 batches together before propagating the gradient for a total batch size of 192. We used the Adam optimizer with a learning rate of \(1e-5\) and 200 linear warm-up steps as it improved our model's convergence. We used 1500 training steps for SROIE and 3000 steps for BDC-PO. Finally, we ran each fine-tuning 10 times in each setup to get a precise idea of the performance of the models and the variability of the results. For the different pre-training scenarios, we performed only two runs and the best model was kept. ### Post-Processing This first set of experiments aims at comparing the post-processing used to decode the sequence produced by the model. We want to determine whether our proposed ConfOpt algorithm is competitive with other decoding methods. We decided to use the LayoutLM base model and compare the proposed ConfOpt against two other decoding algorithms as shown in Table 1. We named Ad-Hoc the basic decoding using the label with maximal confidence for each token. When decoding with this method, a B tag starts a new entity, a I tag continues the previous entity, a E closes the previous entity, and a S tag produces a new entity and closes it right away. Ad-Hoc and ConfOpt use the same model weights in this experiment as they do not introduce any trainable parameters. The second decoding algorithm uses a Conditional Random Field (CRF) [8, 25] that processes LayoutLM's predictions. In this particular case, we did not use the classical cross-entropy loss but the score provided by the CRF layer. Because the CRF required specific training and did not optimize the same loss, its weights are different from the two other post-processing methods. We evaluated these algorithms on both SROIE and BDC-PO. The results in Table 1 show a tiny improvement using a CRF instead of the Ad-Hoc post-processing (0.13 and 0.05 F1 points) but those differences are always within one standard deviation range. We would need more evidence to conclude on the effect of adding a CRF layer for the post-processing. \begin{table} \begin{tabular}{c|c|c} & \multicolumn{2}{c}{Fine Tuning (F1 score)} \\ Post Processing & SROIE & BDC-PO \\ \hline Ad-Hoc & \(93.88\pm 0.59\) & \(84.35\pm 0.12\) \\ CRF & \(94.01\pm 0.55\) & \(84.40\pm 0.16\) \\ ConfOpt & \(\mathbf{94.94\pm 0.38}\) & \(\mathbf{84.57\pm 0.10}\) \\ \end{tabular} \end{table} Table 1: Performance comparison on SROIE and BDC-PO between multiple post-processing algorithm. Score is computed on the exact match between the prediction and the target string. On both datasets, using ConfOpt significantly increases performance (1.06 and 0.22 F1 points) compared to the Ad-Hoc post-processing, even though the model is strictly identical. In light of these results, we decided to use the ConfOpt for the next experiment. ### Business Document-Specific Pre-training We conducted another set of experiments in order to study the effects of the new business data-specific pre-training tasks on the model performance. At the same time, we controlled the performance gap obtained by pre-training with the basic MVLM task on the same new dataset. Both comparisons are insightful to decide whether it is useful to pre-train on clients' data and/or with data-specific pre-training tasks. For the pre-training part, we always initialize the model's weights with the base version [31]. We pre-train models for 20 epochs on 80% of BDC. 
When using multiple pre-training tasks at the same time, we chose to provide batches of single tasks to the model. Gradient aggregation over multiple batches helps smoothing the update between different tasks. We pre-trained 2 models on the BDC, one with MVLM only and another with MVLM+NO+LI. We evaluated each pre-trained model on both datasets, the results are available in Table 2 for BDC-PO and Table 3 for SROIE. Each cell contains the means of 10 runs with different seeds and the standard deviation is provided for the F1 score. There are a few interesting things to notice. The first important remark is the importance of the pre-training dataset. Pre-training on BDC significantly improves performance on both SROIE and \begin{table} \begin{tabular}{c c|c|c c c} \multicolumn{2}{c|}{Pre Training} & \multicolumn{4}{c}{Accuracy per field} \\ Task(s) & Dataset & F1 Score & PO Number & Total & Date \\ \hline MVLM & RVL-CDIP & \(84.57\pm 0.10\) & 89.98 & 89.10 & 93.59 \\ MVLM & BDC & \(84.77\pm 0.12\) & 90.61 & 89.33 & 93.59 \\ MVLM+NO+LI & BDC & \(\mathbf{84.84\pm 0.08}\) & \(\mathbf{90.71}\) & \(\mathbf{89.36}\) & \(\mathbf{93.83}\) \\ \end{tabular} \end{table} Table 2: Model performance when fine-tuning on BDC-PO \begin{table} \begin{tabular}{c|c c|c c c c c} \multicolumn{2}{c|}{Architecture} & \multicolumn{2}{c|}{Pre Training} & \multicolumn{4}{c}{Accuracy per field} \\ & Task(s) & Dataset & F1 Score & Company Address & Total & Date \\ \hline LayoutLM base * & MVLM & RVL-CDIP & \(94.94\pm 0.38\) & 92.91 & 90.81 & 89.25 & 99.48 \\ LayoutLM base * & MVLM & BDC & \(95.18\pm 0.23\) & \(\mathbf{93.72}\) & 91.00 & 89.48 & \(\mathbf{99.68}\) \\ LayoutLM base * & MVLM+NO+LI & BDC & \(\mathbf{95.50\pm 0.22}\) & 93.60 & \(\mathbf{91.41}\) & \(\mathbf{90.89}\) & 99.57 \\ \hline \hline LayoutLM base [31] & MVLM & RVL-CDIP & 94.38 & & & & \\ LayoutLM large [31] & MVLM & RVL-CDIP & 95.24 & & & & \\ LayoutLMv2 large [30] & MVLM+TIA+TIM & RVL-CDIP & \(\mathbf{97.81}\) & & & & \\ \end{tabular} \end{table} Table 3: Model performance when fine-tuning on SROIE. Models name ending with a * are our contribution. The second part contains published scores of the original LayoutLM and LayoutLMv2 as a comparison. BDC-PO, even though the pretext training task is the same as what was used for LayoutLM. BDC is more homogeneous and focuses on invoices and purchase orders. Contrary to our expectations, we observe a greater improvement on SROIE than on BDC-PO (0.24 vs 0.2 F1 points). But the overall improvement by using BDC can be explained because RVL-CDIP contains a broader panel of document types and is not specialized like BDC. Even though BDC does not contain expense receipts, its global structure is similar to invoices. Next, we can compare the pre-training tasks. Introducing Numeric Ordering (NO) and Layout Inclusion (LI) tasks also improves the performance over the previously pre-trained model. We observe a 0.32 F1 point improvement on SROIE but only 0.07 on BDC-PO. We suspect the small improvement introduced by the new tasks can be explained because most useful skills to process purchase orders were learned by pre-training on such documents. The new pre-training tasks help more for generalizing on new types of documents. We also can look at the results on a field per-field basis. We observe that using BDC over RVL-CDIP improved the recognition of all fields except for the dates in BDC-PO. 
If introducing new training tasks did not improve all fields, we notice that some fields were greatly enhanced like the total amount in SROIE (1.41 F1 points difference). We expected to observe a greater improvement in the total field with the new pre-training tasks. But it does not seem to improve performance much on BDC-PO's total. Finally it is interesting to compare on Table 3 our results with the published scores of LayoutLM and LayoutLMv2. Our pre-trained model with NO and LI tasks performs better than LayoutLM large which contains 3 times more parameters. However, LayoutLMv2 - which uses both textual and visual information - performance level is still unreachable for a textual-only model. ## 7 Conclusion In this work, we showed significant improvements are accessible without introducing more trainable parameters and computational complexity. Only using the base transformer architecture, we achieved a performance that is comparable to the large version which contains 3 times more parameters. Pre-trained models can be further specialized through in-domain datasets and specific pre-text training tasks. We demonstrated that by introducing a new collection of business documents and training tasks focusing on documents' layout and number understanding. We showed that performance improvements can be imputed to both pre-training tasks (Numeric Ordering and Layout Inclusion) and new pre-training dataset. In the future, we will investigate on IE as a Question Answering problem. It has already been proposed in the past [4] as an alternative to Sequence Labeling when fine-tuning models. It should improve the model's generalization capabilities and enable few-shot learning. But nowadays all models are pre-trained, and we would like to study the impact on generalization of a QA-only pre-training.
2309.14524
Sidon sequences and nonpositive curvature
A sequence $a_0<a_1<\ldots<a_n$ of nonnegative integers is called a Sidon sequence if the sums of pairs $a_i+a_j$ are all different. In this paper we construct CAT(0) groups and spaces from Sidon sequences. The arithmetic condition of Sidon is shown to be equivalent to nonpositive curvature, and the number of ways to represent an integer as an alternating sum of triples $a_i-a_j+a_k$ of integers from the Sidon sequence, is shown to determine the structure of the space of embedded flat planes in the associated CAT(0) complex.
Sylvain Barré, Mikaël Pichot
2023-09-25T20:59:18Z
http://arxiv.org/abs/2309.14524v1
# Sidon sequences and nonpositive curvature ###### Abstract. A sequence \(a_{0}<a_{1}<\ldots<a_{n}\) of nonnegative integers is called a Sidon sequence if the sums of pairs \(a_{i}+a_{j}\) are all different. In this paper we construct \(\operatorname{CAT}(0)\) groups and spaces from Sidon sequences. The arithmetic condition of Sidon is shown to be equivalent to nonpositive curvature, and the number of ways to represent an integer as an alternating sum of triples \(a_{i}-a_{j}+a_{k}\) of integers from the Sidon sequence, is shown to determine the structure of the space of embedded flat planes in the associated \(\operatorname{CAT}(0)\) complex. ## 1. Introduction A sequence \(a_{0}<a_{1}<\ldots<a_{n}\) of nonnegative integers is called a Sidon sequence if the sums of pairs \(a_{i}+a_{j}\) (for \(i\leq j\)) are pairwise different. Here we assume that \(a_{0}=0\). An example is \(0,2,7,8,11\). Every Sidon sequence \(a_{0}=0<a_{1}<\ldots<a_{n}\) can be extended by letting \(a_{n+1}\) be the smallest integer such that \(a_{0}<\ldots<a_{n+1}\) satisfies the condition of Sidon. Starting at \(a_{0}=0\), this process defines an infinite Sidon sequence \[0,1,3,7,12,20,30,44,65,80,96,...\] called the Mian-Chowla sequence. Sidon, in connection with his investigations on Fourier series, was interested in estimates of the number of terms not exceeding \(N\) a Sidon sequence can have. In general, there are at most \(<<N^{1/2}\) terms in the interval \([0,N]\), and the Mian-Chowla sequence, for example, which verifies \(a_{n}\leq n^{3}\), contains \(>>N^{1/3}\) terms in this interval (see [13]). Erdos conjectured that for every \(\varepsilon>0\), there should exist denser infinite Sidon sequences with \(>>N^{1/2-\varepsilon}\) terms. This problem is well studied. We quote here the well-known theorem of Ruzsa [11] which constructs sequences with \(>>N^{\gamma+o(1)}\) terms, where \(\gamma=\sqrt{2}-1\). For finite sequences, a famous theorem of Singer [8, 12] in projective geometry provides, for any fixed \(\varepsilon>0\), Sidon sequences with \(>N^{1/2}(1-\varepsilon)\) terms for any large \(N\). This uses the fact that the asymptotic ratio of consecutive primes is \(1\), can be improved by using a sharper estimate for \(p-p^{\prime}\) where \(p\) and \(p^{\prime}\) are consecutive primes (see [9, SS2.3]). For the purpose of the present paper, we shall require the stronger condition that the sums of pairs \(a_{i}+a_{j}\) (\(i\leq j\)) are all different modulo some integer \(N\geq 2\). Such a sequence is called a _Sidon sequence modulo \(N\)_. Clearly, every finite Sidon sequence is a Sidon sequence modulo \(N\) for every large enough integer \(N\). The present paper describes a connection between Sidon sequences and nonpositive curvature. The basic idea is to associate with a Sidon sequence \(a_{0}<a_{1}<\ldots<a_{n}\) modulo \(N\) a group \(G\) acting geometrically a \(2\)-complex \(X\), whose geometric properties depend on the arithmetic properties of the sequence \(a_{0}<a_{1}<\ldots<a_{n}\), as follows: 1. the sum of pairs condition of Sidon ensures that the ambient space \(X\) is nonpositively curved; 2. the alternating sums of triples \(a_{i}-a_{j}+a_{k}\) of elements of the Sidon sequence regulates the structure of embedded flat planes in the space \(X\). We shall refer to [7] for nonpositive curvature and CAT(0) spaces. Before we state our main theorem, we indicate how the structure of embedded flat planes can be related to Sidon sequences. 
In some CAT(0) 2-complexes, including the ones defined in the present paper, the space of embedded flat planes can encoded by means of a ring puzzle problem [4]. A ring puzzle is a tessellation of the Euclidean plane obtained by using a (finite) set of shapes which are constrained locally by a set of conditions around every vertex. In this context, these local conditions are appropriately described by a (finite) set of length \(2\pi\) circles called the rings. Every ring is labeled and specifies unambiguously the shapes that can be used in the neighbourhood of the given vertex. Here we require ring puzzles in which all the shapes are equilateral triangles. We are given a set of \(n+1\) equilateral triangles, labeled by integers in \(\{0,\ldots,n\}\), together with a set of rings of the form: where \(e,f,g,h,i,j\in\{0,\ldots,n\}\) which specify which triangle label can be used around a vertex. The local condition ensures that the labels on the triangles and links are consistent. Given these remarks, the following theorem describes a connection between Sidon sequences, nonpositive curvature, and the structure of flats in the associated complexes. The construction offers some freedom. Thus, one may use more than one Sidon sequence (here we shall have three possibly different sequences, all of the same length \(n\geq 2\)), and one may twist the construction by using permutations in the symmetric group \(S_{n}\). **Theorem 1.1**.: _To every integer \(n\geq 2\), every triple of increasing sequences_ \[0\leq a_{0}^{j}<a_{1}^{j}<\ldots<a_{n}^{j},\ \ j=1,2,3,\] _every triple of integers \(r_{j}\), \(j=1,2,3\), such that for every \(j\in\{1,2,3\}\), the sequence \(a_{0}^{j}<a_{1}^{j}<\ldots<a_{n}^{j}\) is a Sidon sequence modulo \(r_{j}\), and every triples of bijections_ \[\sigma_{j}\colon\{a_{0}^{j},\ldots,a_{n}^{j}\}\to\{0,\ldots,n\},\ \ j=1,2,3\] _is associated a countable group \(G\) acting properly and totally discontinuously on a CAT(0) complex \(X\) of dimension 2 with compact quotient, and a \(G\)-invariant labelling \(X^{2}\to\{0,\ldots,n\}\) of the faces of \(X\), such that the following are **equivalent** for a flat plane with face labelled by the elements of \(\{0,\ldots,n\}\):_ 1. \(\Pi\) _embeds in_ \(X\) _in a label preserving way_ _._ 2. \(\Pi\) _is the solution, with respect to the induced labelling, of a triangle ring puzzle problem with rings of the form:_ _such that_ \(\sigma=\sigma_{j}\)_,_ \(k=\tau_{j}(0)\)_,_ \(l=\tau_{j}(1)\)_,_ \(j\in\{1,2,3\}\)_, for every triples_ \((a,b,c)\) _and_ \((a^{\prime},b^{\prime},c^{\prime})\) _of elements of_ \(\{a_{0}^{j},\ldots,a_{n}^{j}\}\) _with equal alternating sum_ \[a-b+c=a^{\prime}-b^{\prime}+c^{\prime}\mod r_{j}\] _such that_ \(a\neq b\neq c\)_,_ \(a^{\prime}\neq b^{\prime}\neq c^{\prime}\) _and_ \((a,b,c)\neq(a^{\prime},b^{\prime},c^{\prime})\)_._ In the simplest situation, we shall introduce, for every Sidon sequence \(a_{0}=0,\ldots,a_{n}\), a CAT(0) complex \(X(a_{0},\ldots,a_{n})\), called _the modular complex of the Sidon sequence \(a_{0},\ldots,a_{n}\)_ (see Definition 7.8). It follows by Theorem 1.1 that the automorphism group of \(X(a_{0},\ldots,a_{n})\) is transitive on the vertex set. These complexes appear to be new geometric objects for the most part, and we shall describe some examples (old and new) in SS7 below. Informally, Theorem 1.1_reduces the description of embedded flat planes in the resulting complexes (including in modular complexes) to the resolution of a paving problem in \(\mathbb{R}^{2}\)_. 
The latter problem is purely 2-dimensional in nature and has a number theoretic component, since the rings are obtained from the representations of a modular integer as an alternating sum of triples from the given Sidon sequences. Assuming that the Sidon sequences \((a_{j}^{j})\) and the integers \(r_{j}\) are fixed, we note that the choice of the \(\sigma_{j}\)'s and \(\tau_{j}\)'s generates a priori \((n+1)!^{3}\) distinct groups and spaces associated with these data. The associated 2-dimensional paving problems depends on the choice of these elements a priori, and so does the space of embedded flats in \(X\). The paper is structured as follows. The proof of Theorem 1.1 has been divided into five steps, occupying SS2 to SS6. It is shown in SS2 that the nonpositive curvature condition on links (namely, having girth at least \(2\pi\)) is equivalent to the arithmetic condition of Sidon. A new class of complexes, called completely homogeneous complexes, is introduced in SS3. The modular complexes are example of completely homogeneous complexes. In SS4 we prove a weaker version of Theorem 1.1 in which has an additional "sign redundancy"; the latter is removed in SS5, by showing that the complexes in Theorem 1.1 always admit sufficiently many "polarities"; in particular, the various constructions in SS4 are in fact pairwise isomorphic. It is then possible to establish the equivalence stated in Theorem 1.1 in SS6. The last section of the paper, SS7, concludes with some examples and applications. **Acknowledgment.** The authors are partially funded by NSERC Discovery Fund 234313. ###### Contents * 1 Introduction * 2 Sidon sequences * 3 Completely homogeneous colourings of cell complexes * 4 Main theorem with signs * 5 Existence of polarities * 6 Every ring puzzle embeds in \(X\) * 7 Examples and applications ## 2. Sidon sequences We refer to [9, 10] for surveys on finite and infinite Sidon sequences; here we begin the proof of Theorem 1.1. The first step in the proof is to relate the arithmetic condition of Sidon to a geometric condition on spaces (i.e., nonpositive curvature via the link condition [7]). Let \(a_{0}=0<a_{1}<\ldots<a_{n}\) be a sequence of positive integers. We consider the graph \(S\) on \(\mathbb{Z}\) in which \(n+1\) edges are issued from every even vertex with an increment of \(2a_{r}-1\), for every \(r\in[0,\ldots,n]\). We write \(S=S(a_{1},\ldots,a_{n})\) to indicate the dependency in the sequence of integers. **Theorem 2.1**.: _Let \(a_{0}=0<a_{1}<\ldots<a_{n}\) be an increasing sequence of positive integers and let \(S=S(a_{1},\ldots,a_{n})\). Let \(N\geq 1\) be an integer. The following are equivalent:_ 1. _the sequence_ \(a_{0},a_{1},\ldots,a_{n}\) _is a Sidon sequence modulo_ \(N\)_;_ 2. _the quotient of_ \(S\) _by the action of_ \(2N\mathbb{Z}\) _by translations has girth at least 6._ Proof.: Note that the condition of Sidon, that the sums of pairs \[a_{i}+a_{j}\ \ (i\leq j)\] are all different modulo \(N\), is verified if and only if the following alternating sums do not vanish \[a-b+c-d\neq 0\mod N\] for every quadruple \(a\neq b\neq c\neq d\neq a\) chosen from the set \(\{a_{0},a_{1},\ldots,a_{n}\}\). We write \(S_{N}\) for the quotient graph of \(S\) by the translation action of \(2N\mathbb{Z}\). By definition, \(S_{N}\) is a bipartite graph. Suppose that \[a-b+c-d\neq 0\mod N\] for every \(a\neq b\neq c\neq d\neq a\) as above. We claim that \(S_{N}\) has no repeated edges. 
Indeed, a repeated edge corresponds arithmetically to the existence of distinct integers \(a\neq b\) such that \(2a-1=2b-1\mod 2N\). Then \[a-b+a-b=0\mod N.\] This contradicts the Sidon condition modulo \(N\), since the chain condition \(a\neq b\neq c\neq d\neq a\) is verified for the quadruple \(c=a\), \(d=b\) with \(a\neq b\). Next we claim that \(S_{N}\) contains no square. Observe that a square in \(S_{N}\) corresponds arithmetically to the choice of four elements \(a,b,c,d\) in the Sidon sequence, such that \[a\neq b\neq c\neq d\neq a,\] which indicates that consecutive edges in the square are distinct, and \[(2a-1)-(2b-1)+(2c-1)-(2d-1)=0\mod 2N,\] which indicates that the edges form a square. However, the equality \[2(a-b+c-d)=0\mod 2N\] again contradicts the Sidon condition modulo \(N\). This establishes that the condition of Sidon translates geometrically into the absence of (possibly degenerate) squares in the graph \(S_{N}\). Since the graph \(S_{N}\) is bipartite, it contains no cycle of odd length. This implies that the girth of \(S_{N}\) is \(\geq 6\). Conversely, suppose that \(S_{N}\) has girth at least \(6\). Since \(S_{N}\) contains no square, it follows that \[(2a-1)-(2b-1)+(2c-1)-(2d-1)\neq 0\mod 2N\] whenever \(a\neq b\neq c\neq d\neq a\). This implies that \(a-b+c-d\neq 0\mod N\). Therefore, the Sidon condition holds modulo \(N\). We refer to [6] for an earlier use of Sidon sequences in graph theory (in a different context). ## 3. Completely homogeneous colourings of cell complexes The second step constructs a complex of dimension \(2\), together with a homogeneous colouring of its cells in every dimension, assuming that a similar homogeneous colouring exists in dimension \(1\). We shall first introduce some useful definitions. **Definition 3.1**.: A _complete colouring_ of an \(n\)-complex \(X\) is an assignment of a colour to every cell of \(X\) in such a way that any two incident cells of a given dimension have different colours. More precisely, if \(C\) is a set, a complete colouring of \(X\) (with colours in \(C\)) is a map \(\delta\colon X\to C\) such that 1. \(\delta(e)\neq\delta(f)\) if \(e\neq f\in X^{k}\) and \(e,f\subset g\) where \(g\in X^{l}\) for some \(l>k\); and, 2. \(\delta(e)\neq\delta(f)\) if \(e\neq f\in X^{k}\) and \(g\subset e,f\) where \(g\in X^{l}\) for some \(l<k\). We call a \((c_{0},\ldots,c_{n})\)-colouring of \(X\) a complete colouring \(\delta\) such that \(c_{k}=|\delta(X^{k})|\). **Definition 3.2**.: If \((X,\delta)\) and \((X^{\prime},\delta^{\prime})\) are completely coloured complexes, a homomorphism \(f\colon X\to X^{\prime}\) of \(n\)-complexes is a coloured homomorphism if \(\delta=\delta^{\prime}\circ f\). We let \(\operatorname{Aut}(X,\delta)\) denote the group of coloured automorphisms of \(X\). We are interested in the following. **Definition 3.3**.: An \(n\)-complex \(X\) is _completely homogeneous_ if it admits a complete colouring \(\delta\) such that \(\operatorname{Aut}(X,\delta)\) acts transitively on the sets \(X_{c}^{k}:=\{e\in X^{k}:\delta(e)=c\}\) of monochromatic cells in every dimension \(k\) for every \(c\in\delta(X^{k})\). In this paper we are interested in the cases \(n=1\) and \(n=2\). Namely, we use completely homogeneous \((2,m)\)-colourings in dimension \(1\) to construct completely homogeneous \((3,3,m)\)-colourings in dimension \(2\), for every \(m\geq 3\). **Theorem 3.4**.: _Let \(m\geq 3\). Suppose we are given:_ 1.
_three real numbers_ \(\alpha,\beta,\gamma>0\) _such that_ \(\alpha+\beta+\gamma=\pi\)_, and three metric graphs_ \(L_{1}\)_,_ \(L_{2}\)_, et_ \(L_{3}\) _of order_ \(m\) _and respective edge length_ \(\alpha\)_,_ \(\beta\)_, and_ \(\gamma\)_, each having girth at least_ \(2\pi\)_;_ 2. _a completely homogeneous_ \((2,m)\)_-colouring on these graphs, with a common set of colours on the edges, and such that the set of colours on vertices on_ \(L_{k}\) _is_ \(\{1,2,3\}\setminus\{k\}\)_._ _Then there exists a unique \((3,3,m)\)-coloured complex \(X\), with vertex colour set \(\{1,2,3\}\), for which \(L_{k}\) is coloured isomorphic to the link of every vertex of \(X\) of colour \(k\). Furthermore, this complex is completely homogeneous._ Proof.: We let \(G_{k}\) denote the group of coloured automorphisms of the graph \(L_{k}\). By our assumption on \(L_{k}\), the action of \(G_{k}\) on \(L_{k}\) admits two orbits of vertices and \(n\) orbits of edges. We establish some claims first. **Claim 3.5**.: \(L_{k}\) _is bipartite._ Proof.: This follows since the extremities of an edge contain both vertex colours. **Claim 3.6**.: \(G_{k}\) _acts freely on the sets of vertices._ Proof.: The stabilizer of a vertex is trivial since the edges incident to a vertex have pairwise distinct colours. **Claim 3.7**.: \(G_{k}\) _acts transitively on the set of edges of a single colour._ Proof.: This follows since \(L_{k}\) is bipartite and \(G_{k}\) acts transitively on the set of vertices of a single colour. **Claim 3.8**.: _Every element stabilizing an edge of \(G\) is an involution._ Proof.: If \(\theta\) stabilizes an edge, then \(\theta^{2}\) fixes the end points of this edge, and therefore \(\theta^{2}=\operatorname{Id}\) by the previous lemma. Suppose we are given three completely homogeneous graphs as in (2) and let us construct \(X\). The first step is the construct a ball \(B_{1}\) of simplicial radius \(1\) with links isomorphic to \(L_{1}\) as a graph. We assign to the faces of \(B_{1}\) the colour of the edges in \(L_{1}\) they correspond to. We give to the center of \(B_{1}\) colour \(1\), and to every vertex \(y\) incident to \(x\) the colour of the vertex in \(L_{1}\) associated with the edge \([x,y]\) in \(B_{1}\). We obtain in this way a completely coloured complex \(B_{1}\), by associating to every edge in \(B_{1}\) the unique pair in \(\{\{1,2\},\{2,3\},\{1,3\}\}\) corresponding to the colour of its extremities. We proceed by induction. Suppose a completely coloured simplicial ball \(B_{n}\) is constructed in such a way that it satisfies the assumptions of the theorem in its interior vertices. We write \(S_{n}\) for its boundary and call \(\tilde{B}_{n}\) the coloured complex obtained by adding \(q\) coloured triangles above every edge in \(S_{n}\). **Lemma 3.9**.: _The link of a vertex \(y\) of \(S_{n}\) in \(\tilde{B}_{n}\) is a metric tree \(T_{y}\) which is completely coloured of diameter \(\leq 5\pi/3\)._ Proof.: Since \(B_{n}\) is a simplicial ball, the intersection of \(T_{y}\) with \(B_{n}\) has diameter \(\leq\pi\) (the diameter exactly \(\pi\) at the vertices in \(\tilde{B}_{n-1}\) and \(2\pi/3\) elsewhere). Adding the faces of \(\tilde{B}_{n}\), the diameter is now at most \(5\pi/3\). Finally, if \(y\) has colour \(k\), the colour of a vertex in \(T_{y}\) is the unique \(k^{\prime}\) such that \(\{k,k^{\prime}\}\) is the colour of the corresponding edge in \(\tilde{B}_{n}\). 
**Lemma 3.10**.: _Every completely coloured metric tree (with edges of length 1) of diameter \(\leq 5\pi/3\) having the same vertex and edge colours as \(L_{k}\) embeds in \(L_{k}\) in a colour preserving way (for every \(k=1,2,3\))._ Proof.: Since the girth of \(L_{k}\) is at least \(2\pi\), it follows by assumption that the balls of diameter \(5\pi/3\) are coloured trees, with \(m\) edges adjacent to every vertex. This shows every completely coloured tree (with the same sets of colours) embeds. Let \(B_{n+1}\) denote the simplicial ball obtained from \(\tilde{B}_{n}\) by fixing a completion \(\varphi_{y}\colon T_{y}\to L_{k}\) for every vertex \(y\) of \(S_{n}\) of colour \(k\), adding the coloured cells corresponding to \(L_{k}\smallsetminus\varphi_{y}(T_{y})\). **Claim 3.11**.: _The direct limit \(X\) of the \(B_{n}\) is a \((3,3,m)\)-coloured complex with vertex colour set \(\{1,2,3\}\), for which \(L_{k}\) is coloured isomorphic to the link of every vertex of \(X\) of colour \(k\)._ Proof.: This is clear since the requirements are local and satisfied by \(B_{n}\) viewed as a subcomplex of \(B_{n+1}\). Let us now turn the uniqueness of \(X\) up to isomorphism. **Lemma 3.12**.: _Suppose \(\varphi_{1}\) and \(\varphi_{2}\) are two completely coloured isometric embeddings of a metric completely coloured tree of diameter at most \(5\pi/3\) (with edges of length \(\pi/3\) in \(L_{k}\). Then \(\varphi_{2}\circ\varphi_{1}^{-1}\) extends in a unique way to a completely coloured isometric isomorphism of \(L_{k}\) (for every \(k=1,2,3\))._ Proof.: It follows by Claim 3.7 that the balls of radius exactly \(5\pi/3\) in \(L_{k}\) are pairwise coloured isometrically isomorphic. Since the codomains of \(\varphi_{1}\) and \(\varphi_{2}\) are coloured isometrically isomorphic and included in such balls, the map \(\varphi_{2}\circ\varphi_{1}^{-1}\) extends in a unique way. In order to show that two complexes \(X\) et \(X^{\prime}\) as described in the theorem are coloured isometrically isomorphic, one then proceeds by induction, by showing that the balls \(B_{n}(x)\) et \(B_{n}(x^{\prime})\) of radius \(n\), where \(x\in X\) and \(x^{\prime}\in X^{\prime}\) are two vertices of the same colour, are uniquely coloured isometrically isomorphic. This results from the previous lemma. Finally, uniqueness implies that \(X\) is completely homogeneous. More precisely, for any pair of vertices \((x,y)\), there exists a unique colour preserving isometric isomorphism of \(X\) taking \(x\) to \(y\). ## 4. Main theorem with signs The third step in the construction of \(G\) and \(X\) is to establish a version of Theorem 1.1 "with signs". The signs (defined below) appear the vertices in a fundamental domain and are necessary for the group \(G\) and the complex \(X\) to be well-defined in general. There are three choices for the signs for a total of eight possibles constructions \((\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})\), where \(\varepsilon_{i}\in\{\pm\}\), for every fixed choice of Sidon sequences and bijections \(\sigma_{i}\). In the formulation with signs, the puzzle problem is slightly more elaborate, because it must account for the fact that the rings must have markings on their vertices in addition to having markings on edges as in the situation described in Theorem 1.1. 
**Theorem 4.1**.: _For every integer \(n\geq 2\), every triple of increasing sequences_ \[0\leq a_{0}^{j}<a_{1}^{j}<\ldots<a_{n}^{j},\ \ j=1,2,3,\] _every triple of integers \(r_{j}\), \(j=1,2,3\), such that for every \(j\in\{1,2,3\}\), the sequence \(a_{0}^{j}<a_{1}^{j}<\ldots<a_{n}^{j}\) is a Sidon sequence modulo \(r_{j}\), and every family of bijections_ \[\sigma_{j}\colon\{a_{0}^{j},\ldots,a_{n}^{j}\}\to\{0,\ldots,n\},\ \ j=1,2,3\] _and_ \[\tau_{j}\colon\{0,1\}\to\{1,2,3\}\setminus\{j\},\ \ j=1,2,3\] _there exists a countable group \(G\), acting properly discontinuous on a CAT(0) complex \(X\) of dimension 2 with compact quotient, a \(G\)-invariant labelling \(X^{2}\to\{0,\ldots,n\}\) of the faces of \(X\), and a \(G\)-invariant labelling \(X^{1}\to\{1,2,3\}\) of the edges of \(X\), such that the following holds: if a flat plane embeds in \(X\), then it is the solution, with respect to the induced labelling, of a triangle ring puzzle problem with rings of the form:_ _such that \(\sigma=\sigma_{j}\), \(l=\tau_{j}(0)\), \(k=\tau_{j}(1)\), \(j\in\{1,2,3\}\), for every triples \((a,b,c)\) and \((a^{\prime},b^{\prime},c^{\prime})\) of elements of \(\{a_{0}^{j},\ldots,a_{n}^{j}\}\) with equal alternating sum_ \[a-b+c=a^{\prime}-b^{\prime}+c^{\prime}\mod r_{j}\] _such that \(a\neq b\neq c\), \(a^{\prime}\neq b^{\prime}\neq c^{\prime}\) and \((a,b,c)\neq(a^{\prime},b^{\prime},c^{\prime})\)._ Proof.: We apply Theorem 3.4 with \(\alpha=\beta=\gamma=\pi/3\). Consider the three graphs \(S_{r_{j}}(a_{0}^{j},a_{1}^{j},\ldots,a_{n}^{j})\) for \(j=1,2,3\). It is easy to verify that the colouring \(\delta_{j}\) defined by 1. \(\delta_{j}(v)=\tau_{j}(v\mod 2)\) for every vertex \(v\); 2. \(\delta_{j}(e)=\sigma_{j}(a)\) for every edge \(e\) labelled by \(a\in\{a_{0}^{j},a_{1}^{j}\ldots,a_{n}^{j}\}\). is completely homogeneous. Since the sequences \(a_{0}^{j},a_{1}^{j},\ldots,a_{n}^{j}\) are Sidon, the graphs \(S_{r_{j}}\) have girth at least \(2\pi\). This shows Theorem 3.4 applies. We let \(X\) be the corresponding \((3,3,m)\)-coloured complex and \(G\) the group of completely coloured automorphisms of \(X\). Using the maps \(\sigma_{j}\) we obtain a \(G\)-invariant labelling \(X^{2}\to\{0,\ldots,n\}\) of the faces of \(X\), and using the maps \(\tau_{j}\) we obtain a \(G\)-invariant labelling \(X^{1}\to\{1,2,3\}\) of the edges of \(X\). It remains to prove the assertion on the structure of the space of flats. We first show that every flat which embeds in \(X\) is the solution of a ring puzzle problem. A ring of \(X\) is a cycle of length \(2\pi\) included in a vertex link of \(X\). Every ring is endowed with the induced vertex and edge labeling from \(X\). Since \(S_{r_{j}}(a_{0}^{j},a_{1}^{j},\ldots,a_{n}^{j})\) is bipartite, the vertex labels of a ring at a vertex of type \(j\) alternate \(k\) and \(l\) where \(l=\tau_{j}(0)\) and \(k=\tau_{j}(1)\). We must show that the edge labelling has the given form, for some triples \((a,b,c)\) and \((a^{\prime},b^{\prime},c^{\prime})\) of elements of \(\{a_{0}^{j},\ldots,a_{n}^{j}\}\) with equal alternating sum \[a-b+c=a^{\prime}-b^{\prime}+c^{\prime}\mod r_{j}\] such that \(a\neq b\neq c\), \(a^{\prime}\neq b^{\prime}\neq c^{\prime}\) and \((a,b,c)\neq(a^{\prime},b^{\prime},c^{\prime})\). Let \(R\) be a ring in \(S_{r_{j}}(a_{0}^{j},a_{1}^{j},\ldots,a_{n}^{j})\). 
We fix an arbitrary base vertex \(m\in\mathbb{N}\) of \(R\) with label \(l\) and consider the two triples \((a,b,c)\) and \((a^{\prime},b^{\prime},c^{\prime})\) of elements of \(\{a_{0}^{j},\ldots,a_{n}^{j}\}\) describing the edge labelling of \(R\). Then by the definition of \(S_{r_{j}}(a_{0}^{j},a_{1}^{j},\ldots,a_{n}^{j})\) the following integers: \[m+(2a-1)-(2b-1)+(2c-1)\text{ and }m+(2a^{\prime}-1)-(2b^{\prime}-1)+(2c^{ \prime}-1)\] must coincide modulo \(2r_{j}\). This shows that \[2a-2b+2c=2a^{\prime}-2b^{\prime}+2c^{\prime}\mod 2r_{j}\] which implies the desired result. It is clear conversely that if this condition is satisfied, then the two integers: \[m+(2a-1)-(2b-1)+(2c-1)\text{ and }m+(2a^{\prime}-1)-(2b^{\prime}-1)+(2c^{ \prime}-1)\] coincide modulo \(2r_{j}\) for every \(m\in\mathbb{N}\) with label \(l\). The corresponding paths in \(S_{r_{j}}(a_{0}^{j},a_{1}^{j},\ldots,a_{n}^{j})\) are distinct assuming that \((a,b,c)\neq(a^{\prime},b^{\prime},c^{\prime})\), and therefore they define a cycle, which is a ring with the correct labelling. As mentioned above, we may encode the bijections \(\tau_{j}\) by a sign \(\varepsilon_{j}\in\{\pm\}\) at every vertex, where by convention \(\varepsilon_{j}=+\) if and only if the bijection \(\tau_{j}\) is increasing. Given the Sidon sequences and the bijections, the above theorem construct eight complexes \(X_{(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})}\). In the next section we prove these complexes are pairwise isomorphic. ## 5. Existence of polarities The fourth step concerns the automorphism group of \(X:=X_{(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})}\). We prove the existence of sufficiently many "polarities". This implies that \[X_{(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})}\simeq X_{(\varepsilon_{ 1}^{\prime},\varepsilon_{2}^{\prime},\varepsilon_{3}^{\prime})}\] for every \(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3},\varepsilon_{1}^{\prime}, \varepsilon_{2}^{\prime},\varepsilon_{3}^{\prime}\in\{\pm\}\). **Definition 5.1**.: Let \(S\) be a bipartite graph. A _polarity_ of \(S\) is an automorphism of \(S\) of order \(2\) which permutes the vertex types of \(S\). If \(e\) is an edge of \(S\), we call _polarity at \(e\)_ a polarity which permutes the extremities of \(e\). In the next two lemmas we let \(a_{0}<\ldots<a_{n}\) be a Sidon sequence modulo \(N\) and consider the quotient graph \(S_{N}\) of \(S=S(a_{0},\ldots,a_{n})\) by the action of \(2N\mathbb{Z}\) by translations. We assume \(S_{N}\) is endowed with a fixed edge labelling \[\sigma\colon\{a_{0},\ldots,a_{n}\}\to\{0,\ldots,n\}\] (as in Theorem 4.1). **Lemma 5.2**.: \(S_{N}\) _admits an edge label preserving polarity at every edge._ Proof.: For every \(0\leq p\leq n\) consider the map \(\varphi_{p}\colon S\to S\) taking \(k\) to \(-k+2a_{p}-1\). For every \(0\leq q\leq n\) and \(l\) even we have \[\varphi_{p}([l,l+2a_{q}-1])=[-l+2a_{p}-1,-l+2a_{p}-1-(2a_{q}-1)].\] This shows \(\varphi_{p}\) takes edges to edges and defines an automorphism of \(S\). This automorphism factorizes to an automorphism of \(S_{N}\), which is a label preserving polarity that exchanges the end vertices of edge \([0,2a_{p}-1]\). By transitivity of the group of edge label preserving automorphisms, such a polarity exists at every edge of \(S_{N}\). **Lemma 5.3**.: _Let \(T,T^{\prime}\) be isomorphic subtrees of \(S_{N}\) containing at least a tripod, and let \(\varphi_{0}\colon T\to T^{\prime}\) be a isomorphism preserving the edge labels. 
Then \(\varphi_{0}\) extends in a unique way to an automorphism of \(S_{N}\) preserving the edge labels._ Proof.: If \(T\) and \(T^{\prime}\) are adjacent tripod it suffices to choose the polarity at the common edge. If \(T\) and \(T^{\prime}\) are arbitrary tripods, one can use transitivity of the group of automorphisms on vertices of the same type. If \(T\) and \(T^{\prime}\) are arbitrary trees, one may fix a tripod \(T_{0}\subset T\) can consider the unique edge label preserving automorphism of \(S_{N}\) whose restriction to \(T_{0}\) is \(\varphi_{0}\). It takes \(T\) to \(T^{\prime}\). Uniqueness is clear since we have required the automorphisms to preserve be the edge labelling. **Theorem 5.4**.: _There exists an isomorphism \(X_{(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})}\simeq X_{(\varepsilon_{ 1}^{\prime},\varepsilon_{2}^{\prime},\varepsilon_{3}^{\prime})}\) which preserve the labels on faces, for every \(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3},\varepsilon_{1}^{\prime}, \varepsilon_{2}^{\prime},\varepsilon_{3}^{\prime}\in\{\pm\}\)._ Proof.: We write \(X=X_{(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3})}\) and \(X^{\prime}=X_{(\varepsilon_{1}^{\prime},\varepsilon_{2}^{\prime},\varepsilon_ {3}^{\prime})}\). By symmetry, it is enough to consider the case \(\varepsilon_{1}\neq\varepsilon_{1}^{\prime}\), \(\varepsilon_{2}=\varepsilon_{2}^{\prime}\), \(\varepsilon_{3}=\varepsilon_{3}^{\prime}\). We let \(x\) and \(x^{\prime}\) respectively denote a vertex of type \(1\) in \(X\) and \(X^{\prime}\). By Lemma 5.2, there exists an edge label perserving isomorphism of the corresponding links. This isomorphism induces an isomorphism \(\varphi_{1}\colon B_{1}(x)\to B_{1}(x^{\prime})\) between the ball of radius \(1\), which preserves the labels on the faces. Then, it follows by Lemma 5.3 that \(\varphi_{1}\) extends in a unique way to label preserving isomorphisms \(\varphi_{n}\) between successive balls \(B_{n}(x)\) and \(B_{n}(x^{\prime})\). The direct limit of these maps is an isomorphism which preserve the labels of the faces. ## 6. Every ring puzzle embeds in \(X\) The fifth and last step is to prove that every solution to the given ring puzzle problem defines a flat in \(X\) (showing (b) \(\Rightarrow\) (a) in Th. 1.1, which is the last assertion that remains to be established). **Theorem 6.1**.: _Suppose \(\Pi\) is a labeled flat obtained as a solution of the puzzle problem described in Th. 1.1. Then \(\Pi\) embeds in \(X\) in a label preserving way._ In fact, every labeled preserving embedding of a \(1\)-ball of \(\Pi\) into \(X\) extends uniquely to an embedding of \(\Pi\) into \(X\). Proof.: Let \(\Pi\) be a coloured flat obtained as a solution of the puzzle problem described in Th. 1.1, and let \(B_{1}\) be a \(1\)-ball of \(\Pi\). Consider a labeled preserving embedding \(\varphi_{1}\) of \(B_{1}\) into \(X\). We show that \(\varphi_{1}\) extends uniquely to an embedding of \(\Pi\) into \(X\). The proof is by induction using the following lemma. **Lemma 6.2**.: _Let \(R\) be a ring of type \(j=1,2,3\) (as described in Theorem 1.1) and let \(P\subset R\) be a segment in \(R\). Then every label preserving embedding \(\psi\colon P\to L_{j}\) extends to a labeled preserving embedding \(\psi^{\prime}\colon R\to L_{j}\)._ Proof of Lemma 6.2.: By assumption there exists an edge label preserving embedding \(\psi_{0}\colon R\to L_{j}\). We let \(e\) denote the initial edge in \(P\). 
Since \(L_{j}\) is completely homogeneous, there exists an edge label preserving automorphism \(\theta\colon L_{j}\to L_{j}\) taking \(\psi_{0}(e)\) to \(\psi(e)\). Up to composing with a polarity of \(L_{j}\) fixing \(\psi(e)\), we may assume that \(\theta\) takes \(\psi_{0}(f)\) to \(\psi(f)\), where \(f\) is the edge adjacent to \(e\) in \(P\) (assuming that \(P\) has length at least \(2\)). It follows from the fact that \(\theta\) is label preserving that \(\theta\) takes \(\psi_{0}(P)\) to \(\psi(P)\). Then \(\psi^{\prime}:=\theta\circ\psi_{0}\) is a label preserving embedding extending \(\psi\). Suppose \(\varphi_{n}\) is an embedding of the \(n\)-ball of \(\Pi\) concentric to \(B_{1}\) into \(X\) and let us show that \(\varphi_{n}\) can be extended to an embedding of the \((n+1)\)-ball \(B_{n+1}\). For every vertex \(x\) of \(S_{n}=\partial\,B_{n}\), we let \(R_{x}\) denote the ring at \(x\) in \(\Pi\) and \(P_{x}\subset R_{x}\) denote the path in \(R_{x}\) associated to \(B_{n}\). Then \(\varphi_{n}\) induces a label preserving embedding \(\psi_{x}\colon P_{x}\to L_{\varphi_{n}(x)}\). By the previous lemma, \(\psi_{x}\) extends uniquely to a label preserving embedding \(\psi_{x}^{\prime}\colon R_{x}\to L_{\varphi_{n}(x)}\). We let \(B_{n+1}\) be the union of \(B_{n}\) and \(\bigcup_{x\in S_{n}}[\psi_{x}^{\prime}(R_{x})]\), where \([\psi_{x}^{\prime}(R_{x})]\) is the \(1\)-disk in \(X\) corresponding to \(\psi_{x}^{\prime}(R_{x})\). Since the maps \(\psi_{x}^{\prime}\) are colour preserving, this is a disk of radius \(n+1\) in \(X\) which extends \(B_{n}\). Thus, \(\varphi_{n}\) extends in a unique way. ## 7. Examples and applications ### Mian-Chowla complexes Let \((a_{n})_{n\geq 0}\) denote the Mian-Chowla sequence (obtained from \(a_{0}=0\) by the greedy algorithm). Theorem 1.1 associates groups and complexes with the truncated Mian-Chowla sequence \(a_{0},\ldots,a_{n}\) and an integer \(N\) such that \(a_{0},\ldots,a_{n}\) is a Sidon sequence modulo \(N\). In general, this holds for every \(N\) large enough: **Proposition 7.1**.: _Let \(a_{0},\ldots,a_{n}\) be a Sidon sequence. Then \(N_{0}=2a_{n}+1\) is the smallest integer with the property that \(a_{0},\ldots,a_{n}\) is a Sidon sequence modulo \(N\) for every \(N\geq N_{0}\)._ Proof.: It is clear that \(N_{0}\geq 2a_{n}+1\), since \(a_{n}+a_{n}=a_{0}+a_{0}\mod 2a_{n}\). Conversely, if \(a_{0},\ldots,a_{n}\) is not Sidon modulo \(N\), then there exist \(a_{i},a_{j},a_{p},a_{q}\in\{a_{0},\ldots,a_{n}\}\) such that \[a_{i}+a_{j}\neq a_{p}+a_{q}\] and \[N\text{ divides }a_{i}+a_{j}-a_{p}-a_{q}.\] Since \(0\leq a_{i}+a_{j},a_{p}+a_{q}\leq 2a_{n}\), this fails if \(N\geq 2a_{n}+1\). While the proposition shows that the general bound \(2a_{n}+1\) is sharp, we note that there exist Sidon sequences which are Sidon sequences modulo \(N\) for some values of \(N\) which are smaller than \(N_{0}\). Consider for example the Sidon sequence \(0,2,7\). Then \(N_{0}=15\), and it is easily verified that \(0,2,7\) is a Sidon sequence modulo \(N\) for several values of \(N<N_{0}\) (e.g., \(N=8\)). **Question 7.2**.: Given a Sidon sequence \(a_{0},\ldots,a_{n}\), what is the value of \[N_{00}(a_{0},\ldots,a_{n}):=\min\{N:a_{0},\ldots,a_{n}\text{ is a Sidon sequence modulo }N\}?\] In some cases \(N_{00}=N_{0}\). Consider for example the Sidon sequence \(0,1,3\). Then \(N_{0}=7\), and it is easily verified that the Mian-Chowla sequence \(0,1,3\) fails to be a Sidon sequence modulo \(N\) for every \(N\leq 6\).
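Since the minimum in Question 7.2 is bounded above by \(N_{0}=2a_{n}+1\), it can be found by a finite search. The following short computational sketch (ours, not part of the paper, with function names chosen for illustration) checks the Sidon condition modulo \(N\) and computes \(N_{00}\) by brute force.

```python
from itertools import combinations_with_replacement

def is_sidon_mod(seq, N):
    """Check whether the pairwise sums a_i + a_j (i <= j) are distinct modulo N."""
    sums = [(a + b) % N for a, b in combinations_with_replacement(seq, 2)]
    return len(sums) == len(set(sums))

def n00(seq):
    """Smallest modulus N >= 2 for which seq is a Sidon sequence modulo N.
    The search terminates since N_0 = 2 * max(seq) + 1 works by Proposition 7.1."""
    N = 2
    while not is_sidon_mod(seq, N):
        N += 1
    return N

# Examples from the text: 0,1,3 should give N_00 = N_0 = 7, while 0,2,7
# should already be a Sidon sequence modulo 8, even though N_0 = 15.
print(n00([0, 1, 3]), is_sidon_mod([0, 2, 7], 8))
```

The same check applies to the example that follows.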
On the other hand, it is not difficult to check, for example, that the Mian-Chowla sequence \(0,1,3,7,20\) is a Sidon sequence modulo \(35\). For every \(n\geq 2\) we call Mian-Chowla complex the CAT(0) 2-complex \(X_{n}\) associated by Theorem 1.1with the following data: 1. \(a_{0}^{j},\ldots,a_{n}^{j}\) is the truncated Mian-Chowla sequence; 2. \(\sigma_{j}\colon\{a_{0}^{j},\ldots,a_{n}^{j}\}\to\{0,\ldots,n\}\) is the increasing bijection; 3. \(N_{j}=N_{00}(a_{0}^{j},\ldots,a_{n}^{j})\); for every \(j=1,2,3\). Due to the symmetry in these data, we have: **Proposition 7.3**.: _The automorphism groups of the Mian-Chowla complexes are vertex transitive._ It would be interesting to study the geometric structure of the Mian-Chowla complexes \(X_{n}\), and solve the associated ring puzzle problems. We observe that the Mian-Chowla complex \(X_{2}\) is in fact a Bruhat-Tits building. ### Moebius-Kantor complexes We say that a CAT(0) 2-complex \(X\) is a _Moebius-Kantor complex_ if its faces are equilateral triangles and its vertex links are isomorphic to the Moebius-Kantor graph (namely, the unique bipartite cubic symmetric graph on \(16\) vertices). Consider the Sidon sequence \(a_{0}=0\), \(a_{1}=1\), \(a_{2}=3\) (of length \(3\)) modulo \(N=8\), and the bijections \[\sigma_{i}\colon\{0,1,3\}\to\{0,1,2\},\ \ i=1,2,3\] given by \(\sigma_{1}(0):=0\), \(\sigma_{1}(1):=1\), \(\sigma_{1}(3):=2\), and \(\sigma_{i}:=(0\ 1\ 2)^{i}\sigma_{1}\) for \(i=2,3\). Applying Theorem 1.1, we find a group \(G\) and a CAT(0) complex \(X\) with the indicated properties. We claim: **Proposition 7.4**.: \(X\) _is a Moebius-Kantor complex._ Proof.: By Theorem 1.1, the space \(X\) is a CAT(0) space with equilateral triangle faces. The links in \(X\) are determined by the Sidon sequence. By definition, they are isomorphic to the graph with vertex set \(\mathbb{Z}/16\mathbb{Z}\) and edge set \([n,n+1]\) for every \(n\) and \([n,n+5]\) for every \(n\) even. It is easy to verify that this graph is the Moebius-Kantor graph. Thus, Th. 1.1 provides a new construction method for Moebius-Kantor complexes. Here we shall describe here some properties of \(X\), and in particular motivate our choice of bijections. We call _root_ of \(X\) an isometric embedding \(\alpha\) of a path of length \(\pi\) in a link of \(X\), such that \(\alpha(0)\) is a vertex. Thus, the image of every root consists of three edges, which we shall endow with the induced labeling in \(\{0,1,2\}\). We say that \(\alpha\) is a root of rank \(2\) if there exist precisely two roots distinct from \(\alpha\) with the same end points. This definition is a particular case of the notion of rank of a root in a CAT(0) 2-complex--and we shall refer the interested reader to [5, SS4] for this generalization. We write \(S_{i}\) for the link in \(X\) at a vertex of type \(i\). For the complex \(X\), the rank of a root in \(S_{i}\) is a function of its labels; the following statement is the most relevant for our purpose. **Lemma 7.5**.: _Let \(\alpha\) be a root in \(S_{i}\), \(i=1,2,3\), with consecutive labels \(b\), \(a\), and \(b\) where \(a\neq b\). Then \(\alpha\) is a root of rank 2 if and only if \(a=\sigma_{i}(0)\)._ Proof.: Suppose an edge \(e=[r,r+1]\) has label \(a=\sigma_{i}(0)\) and let \(b\neq a\). Then, by our definition of \(S_{i}\), \(r\) is odd. Furthermore, since they have the same label \(b\), then the two edges adjacent to \(e\) in \(\alpha\) have the same increment, which is either \(1\) or \(5\). 
It is easily seen that \(\alpha\) is of rank 2 in both cases. Suppose now an edge \(e=[r,r+1]\) has label \(a=\sigma_{i}(1)\) or \(a=\sigma_{i}(3)\) and let \(b\neq a\). By the same argument, the two edges adjacent to \(e\) in \(\alpha\) have the same increment. It is easily seen that \(\alpha\) is not of rank 2 in either case. Let \(t\) be a triangle in \(X\). For every side of \(t\), choose a triangle in \(X\) adjacent to \(t\). This defines three roots in the links of the vertices of \(t\). We say that \(t\) is _odd_ if the number of such roots of rank 2 is odd (this is well-defined by [3, §2]). We say that \(X\) is _odd_ if every triangle is odd. **Proposition 7.6**.: \(X\) _is odd._ Proof.: Consider a triangle \(t\) in \(X\) with label \(a\in\{0,1,2\}\). Let \(b\in\{0,1,2\}\), \(b\neq a\). Adjacent to \(t\) are three triangles with label \(b\). These triangles form, together with \(t\), a larger triangle which we call \(T\). Suppose \(a=\sigma_{1}(k)\) for some \(k\in\{0,1,3\}\). Then \(\sigma_{2}(k)=(0\;1\;2)a\), \(\sigma_{3}(k)=(0\;1\;2)^{2}a\); therefore \(\sigma_{2}((0\;1\;3)k)=a\) and \(\sigma_{3}((0\;1\;3)^{2}k)=a\). This implies that \(a=\sigma_{i}(0)\) for a unique \(i=1,2,3\), which in turn shows that \(T\) contains a single root of rank 2. This proves that \(t\) is odd. ### Modular complexes Let \(a_{0}=0<a_{1}<\ldots<a_{n}\) be a sequence and \(N_{1}\), \(N_{2}\), and \(N_{3}\) be integers such that \(a_{0}=0<a_{1}<\ldots<a_{n}\) satisfies the condition of Sidon modulo \(N_{i}\) for \(i=1,2,3\). Let \(G(a_{0},a_{1},\ldots,a_{n}:N_{1},N_{2},N_{3})\) and, respectively, \(X(a_{0},a_{1},\ldots,a_{n}:N_{1},N_{2},N_{3})\) denote the group and complex obtained by applying Th. 1.1 with respect to these data and the increasing bijection \(\sigma\colon\{a_{0},\ldots,a_{n}\}\to\{0,\ldots,n\}\). To illustrate, we explain how Th. 1.1 provides an alternative approach to [1, §13] for a construction "mixing" the Moebius-Kantor local geometry with that of \(\tilde{A}_{2}\) buildings in the same complex. Such a complex was said to be "of strict type \(A_{\mathrm{MK}}+\tilde{A}_{2}\)". It was obtained in [1] by a surgery construction, relying on the classification of collars between two "partial complexes" of both types. Here we obtain similar results as a direct consequence of Theorem 1.1. For instance, using the terminology of [1], we claim: **Proposition 7.7** (Compare [1, Prop. 13.1]).: _The modular complex \(X(0,1,3:7,7,8)\) is of strict type \(A_{\mathrm{MK}}+\tilde{A}_{2}\)._ The proof follows as in Prop. 7.4 above. Similarly, the complex \(X(0,1,3:7,8,8)\) is a complex of strict type \(A_{\mathrm{MK}}+\tilde{A}_{2}\), and it is not isomorphic to \(X(0,1,3:7,7,8)\). These "modular complexes" seem particularly interesting when \(N_{i}\) is small relative to \(a_{n}\). Furthermore, one can use bijections other than \(\sigma\) to twist the construction of modular complexes (for example, the complex \(X\) described in §7.2 can be viewed as a "twisted modular complex"). We define "the" modular complex to be as untwisted as possible: **Definition 7.8**.: Let \(a_{0}=0<a_{1}<\ldots<a_{n}\) be a Sidon sequence. We let \[X(a_{0},\ldots,a_{n}):=X(a_{0},a_{1},\ldots,a_{n}:N_{00},N_{00},N_{00}),\] where \(N_{00}:=N_{00}(a_{0},\ldots,a_{n})\), and call this complex _the modular complex_ associated with \(a_{0}=0<a_{1}<\ldots<a_{n}\). By definition, the Mian-Chowla complexes are modular complexes in this sense.
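To make the ring data entering this definition concrete, the following short sketch (ours, not from the paper) enumerates the coincidences of alternating sums \(a-b+c\equiv a'-b'+c'\pmod N\) among triples from a Sidon sequence; classes containing at least two triples correspond to the ring conditions of Theorem 1.1.

```python
from itertools import product
from collections import defaultdict

def ring_data(seq, N):
    """Group the triples (a, b, c) with a != b and b != c by their alternating
    sum a - b + c modulo N; classes with at least two triples give rise to the
    rings of the associated puzzle problem."""
    classes = defaultdict(list)
    for a, b, c in product(seq, repeat=3):
        if a != b and b != c:
            classes[(a - b + c) % N].append((a, b, c))
    return {s: triples for s, triples in classes.items() if len(triples) >= 2}

# Example: the Sidon sequence 0, 1, 3 modulo 8 used for the Moebius-Kantor
# complex above; for a modular complex one would take N = N_00 instead.
for s, triples in sorted(ring_data([0, 1, 3], 8).items()):
    print(s, triples)
```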
Due to symmetry in the data, follows by Theorem 1.1 that the automorphism group of the modular complex \(X(a_{0},\ldots,a_{n})\) is transitive on the vertex set. (This would not be true of generalized modular complex, for example, \(X(0,1,3:7,7,8)\) is not vertex transitive.) **Remark 7.9**.: These examples can be further generalized. If \(A\) is a finite abelian group, one says a set \(\{a_{0},a_{1},\ldots,a_{n}\}\) in \(A\) is Sidon if the number of pairs of elements in \(\{a_{0},a_{1},\ldots,a_{n}\}\) with a given sum is at most two. It is not difficult to extend our results to such Sidon sets; this gives additional, natural generalizations of our Sidon complexes. It would be interesting to study the asymptotic properties of the \(X(a_{0},\ldots,a_{n})\) complexes, for a fixed infinite Sidon sequence \(a_{0},a_{1},a_{2},\ldots\), including for example, the Mian-Chowla sequence or the Rusza sequence. We shall not pursue this direction on the present occasion, and conclude this paper with an application to the study of Moebius-Kantor complexes. ### Uniqueness of the odd Moebius-Kantor complex In this section we prove that the twisted modular complex constructed in SS7.2 is the unique odd Moebius-Kantor complex up to isomorphism; furthermore, we establish an unique extension theorem for automorphisms. This is done by "mapping" the data associated with the Sidon sequence to an arbitrary odd complex, and may compared to [2], in which a similar result is proved in the even case: the even Moebius-Kantor complex is unique up to isomorphism. This was established in [2] by "mapping" Pauli matrices from the (even) Pauli complex to an arbitrary even complex. **Theorem 7.10**.: _Let \(X\) and \(X^{\prime}\) be odd Moebius-Kantor complexes and let \(x\in X\) and \(x^{\prime}\in X^{\prime}\) be vertices. Let \(B_{1}(x)\) and \(B_{1}(x^{\prime})\) denote the ball of radius 1 with center \(x\) and \(x^{\prime}\), respectively, and let \(\varphi_{1}\colon B_{1}(x)\to B_{1}(x^{\prime})\) be an isomorphism. Then there exists a unique isomorphism \(\varphi\colon X\to X^{\prime}\) which coincides with \(\varphi_{1}\) on \(B_{1}(x)\)._ Proof.: We may assume that \(X\) is the complex constructed in SS7.2; furthermore, by symmetry, we may assume that \(x\) is a vertex of type 1 in this complex (associated with the map \(\sigma_{1}\)). Let us first extend \(\varphi_{1}\) to the ball \(B_{2}(x)\) of radius 2. We begin with the following lemma. **Lemma 7.11**.: _Suppose \(S\) and \(S^{\prime}\) are Moebius-Kantor graphs, \(T\subset S\) and \(T^{\prime}\subset S^{\prime}\) are tripods, and \(\psi_{0}\colon T\to T^{\prime}\) is an isomorphism. Then there exists a unique isomorphism \(\psi\colon S\to S^{\prime}\) which coincides with \(\psi_{0}\) on \(T\)._ Proof.: Existence follows because the Moebius-Kantor graph is 2-arc-transitive. If \(e\) and \(f\) are consecutive edges in \(T\), then there exists a graph isomorphism \(S\to S^{\prime}\) taking respectively \(e\) and \(f\) to \(\psi_{0}(e)\) and \(\psi_{0}(f)\). This isomorphism is unique since the stabilizer of a tripod is trivial. For every vertex \(y\) in the sphere \(\partial\,B_{1}(x)\) of radius \(1\) centred at \(x\), we let \(\psi_{y}\) denote the unique isomorphism between the ball \(B_{1}(y)\) and the ball \(B_{1}(\varphi_{0}(y))\) induced by the previous lemma, which extends \(\varphi_{0}\) on \(B_{1}(x)\). 
**Lemma 7.12**.: _The maps \(\psi_{y}\) are consistent._ Proof.: We must show that for every edge \([y,z]\) in \(\partial\,B_{1}(x)\), the maps \(\psi_{y}\) and \(\psi_{z}\) coincide on set of triangles adjacent to \([y,z]\). Since \(x\) is of type \(1\), we may assume that \(y\) is of type \(2\) and \(z\) of type \(3\). Let \([t_{1},y,z]\) be a triangle distinct from \([x,y,z]\), and consider the two triangles \([t_{2},x,y]\) and \([t_{3},x,z]\) whose labels coincide with that of \([t_{1},y,z]\). We write \(\alpha_{x}\), \(\alpha_{y}\) and \(\alpha_{z}\) for the three roots, respectively at \(x\), \(y\) and \(z\), associated with this configuration. There are two cases. Suppose first that \([x,y,z]\) is labeled by \(a=\sigma_{1}(0)\). Since \(a\neq\sigma_{2}(0)\) and \(a\neq\sigma_{3}(0)\), both roots \(\alpha_{y}\) and \(\alpha_{z}\) fail to be of rank \(2\). Since \(\varphi_{0}\), \(\psi_{y}\) and \(\psi_{z}\) are isomorphism, \(\varphi_{0}(\alpha_{x})\) is of rank \(2\) in \(X^{\prime}\), while \(\psi_{y}(\alpha_{y})\) and \(\psi_{z}(\alpha_{z})\) are not. Since the triangle \(\varphi_{0}([x,y,z])\) is odd, the two triangles \(\psi_{y}([t_{1},y,z])\) and \(\psi_{z}([t_{1},y,z])\) must coincide. Suppose next that \([x,y,z]\) is labeled by \(a=\sigma_{2}(0)\) or \(a=\sigma_{3}(0)\). The two cases are symmetric and we assume \(a=\sigma_{2}(0)\) to fix the ideas. Then \(\alpha_{y}\) is of rank \(2\), while \(\alpha_{x}\) and \(\alpha_{z}\) are not, and the same must be true of their images. Again, since the triangle \(\varphi_{0}([x,y,z])\) is odd, the two triangles \(\psi_{y}([t_{1},y,z])\) and \(\psi_{z}([t_{1},y,z])\) coincide. By Lemma 7.12, the map \[\varphi_{2}:=\varphi_{1}\vee\bigvee_{y\in\delta B_{1}(x)}\psi_{y}\] is well defined. It induces an isomorphism between \(B_{2}(x)\) and \(B_{2}(x^{\prime})\) which extends \(\varphi_{1}\) by definition. Furthermore, this extension is unique by Lemma 7.11. Let \(n\geq 2\). Let \(\varphi_{n}\colon B_{n}(x)\to B_{n}(x^{\prime})\) is an isomorphism, and fix a vertex \(y\) in the sphere \(\partial\,B_{n}(x)\) of radius \(n\) centred at \(x\). If there does not exist a triangle \([y,y_{1},y_{2}]\) in \(B_{n}\) such that \([y,y_{1},y_{2}]\cap\partial\,B_{n}=\{y\}\), we let \(\psi_{y}\) denote the unique isomorphism between the ball \(B_{1}(y)\) and the ball \(B_{1}(\varphi_{n}(y))\) induced by the Lemma 7.11, which extends \(\varphi_{n}\) on \(B_{n}(x)\). Suppose that there exists a triangle \([y,y_{1},y_{2}]\) in \(B_{n}\) such that \([y,y_{1},y_{2}]\cap\partial\,B_{n}=\{y\}\). Lemma 7.11 provides two maps \(\psi_{y}^{1}\) and \(\psi_{y}^{2}\) between the ball \(B_{1}(y)\) and the ball \(B_{1}(\varphi_{n}(y))\) induced by the Lemma 7.11, which extends the restriction \(\varphi_{n}\) to the set of triangles adjacent to \([y,y_{1}]\) and \([y,y_{2}]\), respectively. We show that: **Lemma 7.13**.: \(\psi_{y}^{1}=\psi_{y}^{2}\)_; furthermore, they extend the restriction of \(\varphi_{n}\) to \(B_{n}\cap B_{1}(y)\)._ Proof.: Let \(a\) denote the label of \([y,y_{1},y_{2}]\). Consider triangles \([t_{0},y_{1},y_{2}]\), \([t_{1},y,y_{1}]\) and \([t_{2},y,y_{2}]\) with the same label \(b\neq a\), and the corresponding roots \(\alpha_{y}\), \(\alpha_{y_{1}}\) and \(\alpha_{y_{2}}\), respectively. We assume that \(y\) is of type \(1\). The other cases are similar by symmetry. Suppose \(a=\sigma_{1}(0)\). 
Then \(\alpha_{y}\) is a root of rank \(2\), while \(\alpha_{y_{1}}\) and \(\alpha_{y_{2}}\) are not, and \(\psi_{y}^{1}\) takes \([t_{2},y,y_{2}]\) to the unique triangle in \(X^{\prime}\) such that \(\psi_{y}^{1}(\alpha_{y})\) is a root of rank \(2\) in the link of \(\varphi_{n}(y)\). Since the triangle \(\varphi_{n}([y,y_{1},y_{2}])\) is odd in \(X^{\prime}\), this shows that \(\varphi_{n}\) and \(\psi_{y}^{1}\) coincide on \(B_{n}\cap B_{1}(y)\). By symmetry, \(\varphi_{n}\) and \(\psi_{y}^{2}\) coincide on \(B_{n}\). Since \(\psi_{y}^{1}\) and \(\psi_{y}^{2}\) coincide on (at least) a tripod, they must coincide everywhere by Lemma 7.11. The two other cases, namely, \(a=\sigma_{2}(0)\) and \(a=\sigma_{3}(0)\), are similar. In the case that there exists a triangle \([y,y_{1},y_{2}]\) in \(B_{n}\) such that \([y,y_{1},y_{2}]\cap\partial\,B_{n}=\{y\}\), we let \(\psi_{y}:=\psi_{y}^{1}=\psi_{y}^{2}\); this now defines \(\psi_{y}\) for all \(y\in\delta B_{n}(x)\). A direct generalization of Lemma 7.12 to larger balls show that the maps \(\psi_{y}\) are consistent. It follows that \[\varphi_{n+1}:=\varphi_{n}\vee\bigvee_{y\in\delta B_{n}(x)}\psi_{y}\] is well defined, and induces an isomorphism between \(B_{n+1}(x)\) and \(B_{n+1}(x^{\prime})\) which extends \(\varphi_{n}\) by definition. This extension is unique by Lemma 7.11. Thus, \(\varphi:=\varinjlim\varphi_{n}\) is an isomorphism from \(X\) to \(X^{\prime}\) which extends \(\varphi_{1}\) uniquely.
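The uniqueness argument above ultimately rests on Lemma 7.11: the Moebius-Kantor graph is 2-arc-transitive and the pointwise stabilizer of a tripod is trivial, so that its automorphism group, of order \(96=16\cdot 3!\), acts simply transitively on labelled tripods. The sketch below is an independent computational check of these counts and is not part of the paper; it assumes a tripod is a vertex together with its three neighbours, and builds the Moebius-Kantor graph as the generalized Petersen graph \(GP(8,3)\).

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

# Moebius-Kantor graph built as the generalized Petersen graph GP(8, 3):
# an outer 8-cycle u_0..u_7, spokes u_i - v_i, and inner edges v_i - v_{i+3 mod 8}.
G = nx.Graph()
for i in range(8):
    G.add_edge(("u", i), ("u", (i + 1) % 8))   # outer cycle
    G.add_edge(("u", i), ("v", i))             # spokes
    G.add_edge(("v", i), ("v", (i + 3) % 8))   # inner {8/3} star polygon

# All automorphisms, enumerated by matching G against itself with VF2.
autos = list(GraphMatcher(G, G).isomorphisms_iter())
print("automorphisms:", len(autos))            # expected 96 = 16 * 3!

# Pointwise stabilizer of one tripod: maps fixing a vertex and each of its neighbours.
centre = ("u", 0)
legs = list(G.neighbors(centre))
stab = [a for a in autos if a[centre] == centre and all(a[x] == x for x in legs)]
print("tripod stabilizer size:", len(stab))    # expected 1, i.e. only the identity
```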
2309.06120
Dimensions: Calculating Disruption Indices at Scale
Evaluating the disruptive nature of academic ideas is a new area of research evaluation that moves beyond standard citation-based metrics by taking into account the broader citation context of publications or patents. The "$CD$ index" and a number of related indicators have been proposed in order to characterise mathematically the disruptiveness of scientific publications or patents. This research area has generated a lot of attention in recent years, yet there is no general consensus on the significance and reliability of disruption indices. More experimentation and evaluation would be desirable, however is hampered by the fact that these indicators are expensive and time-consuming to calculate, especially if done at scale on large citation networks. We present a novel method to calculate disruption indices that leverages the Dimensions cloud-based research infrastructure and reduces the computational time taken to produce such indices by an order of magnitude, as well as making available such functionalities within an online environment that requires no set-up efforts. We explain the novel algorithm and describe how its results align with preexisting implementations of disruption indicators. This method will enable researchers to develop, validate and improve mathematical disruption models more quickly and with more precision, thus contributing to the development of this new research area.
Michele Pasin, Joerg Sixt
2023-09-12T10:37:09Z
http://arxiv.org/abs/2309.06120v1
# Dimensions: Calculating Disruption Indices at Scale

###### Abstract.

Evaluating the disruptive nature of academic ideas is a new area of research evaluation that moves beyond standard citation-based metrics by taking into account the broader citation context of publications or patents. The "\(CD\) index" and a number of related indicators have been proposed in order to characterise mathematically the disruptiveness of scientific publications or patents. This research area has generated a lot of attention in recent years, yet there is no general consensus on the significance and reliability of disruption indices. More experimentation and evaluation would be desirable; however, this is hampered by the fact that these indicators are expensive and time-consuming to calculate, especially if done at scale on large citation networks. We present a novel method to calculate disruption indices that leverages the Dimensions cloud-based research infrastructure and reduces the computational time taken to produce such indices by an order of magnitude, as well as making available such functionalities within an online environment that requires no set-up efforts. We explain the novel algorithm and describe how its results align with preexisting implementations of disruption indicators. This method will enable researchers to develop, validate and improve mathematical disruption models more quickly and with more precision, thus contributing to the development of this new research area.

Key words and phrases: \(CD\) index, SQL, disruption, Dimensions database, Google BigQuery

## 1. Introduction

Evaluating the disruptive nature of academic ideas is a new and promising area of research evaluation that moves beyond standard citation-based metrics by taking into account the broader citation context of publications or patents. The idea of characterising scientific innovation in terms of its 'disruptive' property dates back to the work of Popper, 2019 and Kuhn, 1996 in the sociology and philosophy of science. These authors drew a fundamental distinction between contributions that improve pre-established scientific theories, and hence _consolidate_ their status as accepted truths, versus contributions that propose new or alternative methods that break away from the tradition, thus _disrupting_ it. In recent years, researchers working in the scientometrics and science of science (Fortunato et al., 2018) communities have been proposing quantitative approaches for identifying disruptiveness. Identifying or predicting disruptive scientific ideas makes it possible to understand the significance of scientists' work in novel ways, as disruptive ideas not only impact the trajectory of scientific research, but also contribute to rendering obsolete the science that predates them. Two technological advancements underlie these developments: firstly, the growth of large, programmatically accessible bibliometric databases such as those provided by Dimensions, Crossref, Scopus and Web of Science (Thelwall, 2018, Visser et al., 2021); secondly, major advances in computing capabilities that facilitate the aggregation and processing of large-scale data sets, often using off-the-shelf infrastructure that requires minimal set-up efforts for the scientometric researcher as it is available in the 'cloud' (Hook and Porter, 2021). These two aspects combined make it possible to develop disruption metrics at scale, that is, by taking into account not just a subset of the citation network but the network as a whole.

## 2. The \(CD\) index

The \(CD\) index is defined as an indicator for quantifying the degree to which future work is focused on a focal publication itself rather than on the earlier work that the focal publication cites, and hence the degree to which the focal publication disrupts, rather than consolidates, the existing literature.

Figure 1. Example of a publication citation network around a focal publication. The x-axis is the timeline and indicates when the publications represented by squares, circles and pentagons are published. An arrow points from a citing publication to a publication it cites.

In its original form, the \(CD\)-index can be calculated as follows:

1. Fix a focal publication \(f\) (the full circle in the diagram) published in year \(T\) for which we want to calculate \(CD_{t}\).
2. Fix an integer \(t\) that determines the time frame for which we want to measure impact: we will look at citations that occur at most \(t\) years after the publication of \(f\).
3. Find all publications \(r_{1},\ldots,r_{k}\) that are cited by \(f\) (the empty circles in the diagram), or in other words the "predecessors" or references of \(f\).
4. Find all \(n\) distinct publications \(c_{1},\ldots,c_{n}\) that cite at least one of \(f\), \(r_{1},\ldots,r_{k}\) in the years \(T+1\) until and including \(T+t\) (in other words the "successors" of \(f\), or the union of all citations to \(f\), \(r_{1},\ldots,r_{k}\) that occurred in the \(t\) years after the publication of \(f\)).
5. Assign a score \(s(c_{i})\) to each \(c_{i}\) depending on what publication it cites:
   1. Set \(s(c_{i}):=1\) if and only if \(c_{i}\) cites \(f\) but none of the references \(r_{1},\ldots,r_{k}\) (the grey triangles in the diagram): the idea here is that such a citation does not care about the references but only about the focal paper \(f\), highlighting the disruptive character of \(f\).
   2. Set \(s(c_{i}):=-1\) if and only if \(c_{i}\) cites at least one of the references \(r_{1},\ldots,r_{k}\) and, in addition, also cites \(f\) (the grey square in the diagram): the idea here is that such a citation cares about the references and the focal publication \(f\) because \(f\) consolidates the literature.
   3. Set \(s(c_{i}):=0\) if and only if \(c_{i}\) cites at least one of the references \(r_{1},\ldots,r_{k}\) but does not cite \(f\) (the grey pentagons in the diagram): this means that \(c_{i}\) covers similar topics as \(f\) (after all it cites one or more references of \(f\)) but it ignores \(f\) because \(f\) is not significant.
6. The \(CD\)-index is the average of all those scores, i.e. \[CD_{t}:=\frac{1}{n}\sum_{i=1}^{n}s(c_{i})\]

Clearly \(CD_{t}\) is a number between \(-1\) and \(1\). Accordingly, the two-year index \(CD_{2}\) for the above diagram can be calculated as follows: the pentagon citations only cite the references and therefore receive a score \(0\), the triangles cite only \(f\) and therefore receive score \(1\), and the square cites both \(f\) and its references and receives a score \(-1\). All in all we have \(5\) citations and therefore \(CD_{2}=\frac{1}{5}(0+0+(-1)+1+1)=0.2\). Note that \(CD_{1}\) would only consider the \(2\) citations taking place at \(T+1\) and therefore \(CD_{1}=\frac{1}{2}(0+1)=0.5\). (This also illustrates that the parameter \(t\) can have a significant effect on the index.)

### The challenge

From a purely algorithmic perspective there are no issues with this method. It can be implemented in Python (R.
Funk, 2017) or other languages and run on datasets usually provided from third parties like Elsevier Scopus or Clarivate in the form of CSV files, etc. Calculations of the index for a few publications will be fast. However, anecdotal evidence suggests that calculations for a large set of publications can take many days. An alternative approach is to store the publication and citation information in a database and run the calculation via SQL. Dimensions' publication data is already available as a GBQ table (Dimensions, 2023) and can be queried in SQL. GBQ and SQL are very fast and can handle vast amounts of data. This led us to hope that this is a quicker way to calculate the index. The challenge here is the restrictive nature of SQL. Unlike Python, Java, etc. iterative routines and procedures are difficult to implement in SQL. Therefore the original algorithm needs to be translated into a different method compatible with SQL. The following sections explain this alternative way of calculation and the resulting SQL query. ### An alternative calculation method The original method requires us to first collect all citations \(c_{i}\) to the focal paper \(f\) and all its references \(r_{i}\) and then in a next step check each of the \(c_{i}\) again if they cite \(f\) or not and if they cite one of the \(r_{i}\) or not. In a sense we need to either go through all citations of the focal paper and its references twice or somehow remember where the citations have come from. We are not aware if this approach can be easily implemented in SQL. In contrast we propose a different method that does not require cross-checking citations to \(f\) and the citations to its references. Instead we run through all citations to \(f\) and assign an intermediate score to each of them. In a next step we independently run through all citations to the focal paper's references and assign another intermediate score to each of them. Summing up these scores then gives us the final \(CD\)-index. As a result, this algorithm can be successfully expressed via SQL. We walk through this alternative algorithm step by step. See figure 2 below for a visual summary of this approach. 1. Just like in the original method we fix a focal paper \(f\) published in year \(T\), an integer \(T\), \(f\)'s references \(r_{1},\ldots,r_{k}\) and the citations \(c_{1},\ldots,c_{n}\) of any of the \(f\), \(r_{1},\ldots,r_{k}\) that occurred between \(T+1\) and \(T+t\). 2. Assign each citation \(c\) to \(f\) (regardless if they cite any of the \(r_{1},\ldots,r_{k}\) or not) a score \(s^{\prime}(c):=-1\) and \(s^{\prime}(x):=0\) for all other publications \(x\). 3. Assign each citation \(c\) to any of the \(r_{1},\ldots,r_{k}\) (regardless if they cite \(f\) or not) a score \(s^{\prime\prime}(c)=-2\) and \(s^{\prime\prime}(x)=0\) for all other publications \(x\). 4. The \(CD\)-index is then \[CD_{t}=\frac{1}{n}\left(\sum_{i=1}^{n}s^{\prime}(c_{i})+\sum_{i=1}^{n}s^{ \prime\prime}(c_{i})\right)+2\] This method is more complex but will help us to create a SQL statement in the next section. Before we look at an implementation however we need to prove that both methods lead indeed to the same result. First of all observe that we can rewrite the formula as \[CD_{t}=\frac{1}{n}\sum_{i=1}^{n}\left(s^{\prime}(c_{i})+s^{\prime\prime}(c_{i} )+2\right)\] Therefore it is enough to show that \(s(c)=s^{\prime}(c)+s^{\prime\prime}(c)+2\) for any \(c\) in \(\{c_{1},\ldots,c_{n}\}\). This can be easily verified by running through all the cases. 
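Before going through the cases, the equivalence of the two scoring schemes can also be checked numerically. The sketch below is ours, for illustration only: the publication identifiers and years are made up so as to match the description of Figure 1 (five citing publications of the focal publication \(f\) or its references within two years). It evaluates both the original score \(s\) and the split scores \(s^{\prime}\), \(s^{\prime\prime}\), and both give \(CD_{1}=0.5\) and \(CD_{2}=0.2\).

```python
# Toy citation network mimicking Figure 1: focal publication "f" published in year 0,
# two references "r1" and "r2", and five citing publications within two years.
# cites[p] = publications cited by p; year[p] = publication year of p.
cites = {
    "f": {"r1", "r2"},
    "c1": {"r1"},        # pentagon: cites a reference but not f
    "c2": {"f"},         # triangle: cites f only
    "c3": {"r2"},        # pentagon
    "c4": {"f"},         # triangle
    "c5": {"f", "r1"},   # square: cites f and a reference
}
year = {"f": 0, "r1": -3, "r2": -2, "c1": 1, "c2": 1, "c3": 2, "c4": 2, "c5": 2}

def citing_set(f, t):
    """Publications citing f or one of its references in years T+1 .. T+t."""
    refs = cites[f]
    return {p for p, targets in cites.items()
            if targets & (refs | {f}) and 1 <= year[p] - year[f] <= t}

def cd_original(f, t):
    refs, citing = cites[f], citing_set(f, t)
    def s(c):
        hits_f, hits_refs = f in cites[c], bool(cites[c] & refs)
        return 1 if hits_f and not hits_refs else -1 if hits_f and hits_refs else 0
    return sum(s(c) for c in citing) / len(citing)

def cd_alternative(f, t):
    refs, citing = cites[f], citing_set(f, t)
    s1 = sum(-1 for c in citing if f in cites[c])       # s'
    s2 = sum(-2 for c in citing if cites[c] & refs)     # s''
    return (s1 + s2) / len(citing) + 2

for t in (1, 2):
    print(t, cd_original("f", t), cd_alternative("f", t))   # expect 0.5 and 0.2 for both
```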
Each \(c\) in \(\{c_{1},\ldots,c_{n}\}\) falls in exactly one of the following categories: 1. If \(c\) cites \(f\) but none of the \(r_{1},\ldots,r_{k}\). (i.e. \(c\) is one of the grey square in the illustration) then \(s^{\prime}(c)+s^{\prime\prime}(c)+2=(-1)+0+2=1\) which is exactly \(s(c)\) from the original algorithm 2. If \(c\) does not cite \(f\) but it cites at least one of the \(r_{1},\ldots,r_{k}\) (i.e. \(c\) is one of the the grey pentagons in the illustration) then \(s^{\prime}(c)+s^{\prime\prime}(c)+2=0+(-2)+2=0\) which is exactly \(s(c)\) from the original algorithm. 3. If \(c\) cites \(f\) and also cites at least one of the \(r_{1},\ldots,r_{k}\) (i.e. \(c\) is one of the the empty squares in the illustration) then \(s^{\prime}(c)+s^{\prime\prime}(c)+2=(-1)+(-2)+2=-1\) which is exactly \(s(c)\) from the original algorithm. Hence the two methods lead to the same result. ### The SQL statement In this section we will translate the alternative algorithm into SQL. The starting point is a table with a row for each publication with the following fields: Figure 2. Example of a publication citation network around a focal publication 1. **Publication ID**: a unique identifier for this publication e.g. DOI, PubMed ID or Dimensions publication ID 2. **Publication year**: the year the publication has been published 3. **Citations**: an array of all unique citations to this publication where each entry is a pair of a publication ID and citation (i.e. publication) year. 4. **References**: an array of all unique publication IDs cited by this publication Each of the IDs in citations and references needs to be an ID that is also included in the table. In the Dimensions publications GBQ table dimensions-ai.data_analytics.publications this data is already structured in that way with the fields id, year, citations and reference_ids (see Dimensions, 2023). The listing below is a simplified SQL statement that calculates \(CD_{5}\) for BALTIMORE, 1970. -- This is the focal publication f DECLARE focal_publication_id STRING DEFAULT "pub.1019844293"; -- This is the impact span t DECLARE time_diff INT64 DEFAULT 5; WITH cd_raw_data AS ( -- Calculating s' for each citation to the focal publication -- All are assigned a score s'=-1. Any other publications appearing in -- the second SELECT and aren't included here -- implicitly get a score s'= 0 ( SELECT DISTINCT -- make sure we list unique citations otherwise we may double count publications.id AS focal_id, -- focal publication citation.id AS citation_id, -- citing publication to focal publication -1 AS score -- s' -- the Dimensions GBQ table for publications FROM 'dimensions-ai.data_analytics.publications' AS publications -- fetch all its citing publications: id and year LEFT JOIN UNNEST(publications.citations) AS citation -- for this experiment we only look at one publication WHERE publications.id = focal_publication_id -- we only consider citations that appear at most time_diff years after -- the focal publication has been published AND citation.year - publications.year BETWEEN 1 AND time_diff ) UNION ALL -- Calculating s' for each citation to the references of -- the focal publication -- All are assigned a score s'=-2. 
Any other publications appearing in -- the first SELECT and aren't included here -- implicitly get a score s'= 0 SELECT DISTINCT publications.id as focal_id, -- focal publication reference_citation.id as citation_id,-- citing publication to references -2 as score -- s'' FROM 'dimensions-ai.data_analytics.publications' as publications -- get all the reference publication IDs of the focal publication LEFT JOIN UNNEST(publications.reference_ids) as reference_id -- get the references' meta data - mainly citations to it INNER JOIN 'dimensions-ai.data_analytics.publications' as references ON references.id = reference_id -- get the citations to the references LEFT JOIN UNNEST(references.citations) as reference_citation WHERE publications.id = focal_publication_id AND reference_citation.year - publications.year BETWEEN 1 AND time_diff ) ) -- Now add up all scores, count the distinct ids of the citations in both SELECTs -- above and use that information to calculate the $CD$-index SELECT focal_id, ((SUM(score)/COUNT(DISTINCT citation_id))+2) as cd_index FROM cd_raw_data GROUP BY focal_id At time of calculation the result was \(-0.44\) which is not so far away from \(-0.55\) listed in Park et al., 2023 (which also uses a different indexing service's publications and citation data). It is important to point out that one issue with this method is that COUNT DISTINCT in Google Big Query is a statistician function and may not always be exact. For our purposes where we are looking at trends this is sufficient but if you need exact results for each and every publication you may need to use the computationally much more expensive EXACT_COUNT_DISTINCT (see Google, 2023). ## 3. Results ### Calculating the \(Cd\)-index for all publications The query that allows you to calculate the \(CD\)-index for all publications can be found in Pasin and Sixt, 2023. Being able to access the Dimensions GBQ data is a prerequisite for running the query. Free of charge access for non-commercial scientometrics projects is available; also, it is possible to run the query on the freely available COVID dataset, although you will get different results. We have run these queries in July 2023 to calculate \(CD_{5}\) (\(t=5\) is used most widely in the literature) for several citation networks: 1. **All publications** (dim_all): We computed the index for the complete list of 138m publications. Since not all publications have references and citations in the 5 year time frame the resulting table lists only 79m publications. 1. Query: [https://github.com/digital-science/dimensions-gbq-lab/blob/master/archive/](https://github.com/digital-science/dimensions-gbq-lab/blob/master/archive/) CD-index/CD_index_query1_all.sql2. Citation network: all publications 3. Run: 28 July 2023, 18:00:06 UTC+1 4. Duration: 4 hr 24 min * Bytes processed: 69.62 GB * Number of rows/publications: 79,095,524 * Total logical bytes 2.36 GB 2. **Journal articles** (dim_journals): In order to make the results more compatible with the calculation in the literature and in order to avoid artefacts in the metadata we also ran the algorithm for only journal articles with some references: (i.e. type is article and the journal ID is not null) with at least 10 references. The restriction of references is important because the definition of the \(CD\)-index gives any publication with no references and at least one citation immediately an index of 1. However, lack of references is usually just a result of missing metadata for a publication. 1. 
Query:[https://github.com/digital-science/dimensions-gbq-lab/blob/master/archive/CD-index/CDindex_query2_journals.sql](https://github.com/digital-science/dimensions-gbq-lab/blob/master/archive/CD-index/CDindex_query2_journals.sql) 2. Citation network: all publications with type = article, journal.id not null, at least 10 references 3. Start: 28 July 2023, 13:36:48 UTC+1 4. Duration: 3hr 58min 5. Bytes processed: 72.26 GB 6. Number of rows/publications: 38,612,179 7. Total logical bytes: 1.15 GB 3. **PubMed** (dim_pubmed): For a later comparison we also run the calculation for all publications listed in PubMed. 1. Query: [https://github.com/digital-science/dimensions-gbq-lab/blob/master/archive/CD-index/CDindex_query3_pubmed.sql](https://github.com/digital-science/dimensions-gbq-lab/blob/master/archive/CD-index/CDindex_query3_pubmed.sql) 2. Citation network: all publications with a pubmed ID 3. Start: 29 Jul 2023, 07:52:46 UTC+1 4. Duration:3 hr 4 min 5. Bytes processed: 69.96 GB 6. Number of rows: 28,165,474 7. Total logical bytes: 859.54 MB Please note that it is in the nature of the \(CD\)-index that changing the underlying publication network will also change the resulting index. Mathematically our results should be correct however implementation mistakes, etc. can happen and therefore we decided to validate our results. In the following we use different approaches to validate the data. ### Comparison with selected publications Park et al., 2023 provides explicit calculations of 3 publications which are very similar to our results (see Table 1). Note that Park et al., 2023 uses data from Web of Science and PubMed whereas we use Dimensions data yet the results are in a similar range. We list our results for the two Since the values for the \(CD\)-index are closely concentrated around zero for most publications (see below) with values between \(-1\)and \(1\) the results are quite close. ### Comparison with Russel Funk's Python Library We created a small sub citation network based on Dimensions data for the above sample publications. We have fed the citation network both into the cdimdex Python library and our SQL and we arrived at exactly the same numbers in both instances. The precise implementation can be found in Pasin and Sivat, 2023. ### Comparison with calculations by Russel Funk We received two sample data sets with publication identifiers and DOIs and their \(CD\)-index calculated by Russel Funk. These are based on Web of Science data (1m publications) and PubMed (2.3m publications). Different indexing services will have different citation networks which will affect the \(CD\)-index: 1. Type of publications considered will restrict to certain references and citations e.g. PubMed covers (bio)medical and life science literature 2. Time of running the query: a calculation run at time \(T\) compared to another calculation at \(T+x\) will miss out on considering citations that happened between \(T\) and \(T+x\). Even older citations may suddenly appear or disappear e.g. if the indexing service improves data processing or if older publications are disqualified or additional older data sources get included. 3. Different indexation services use different ways to extract references and citations which can lead to differences in how citations are recognised In Table 2 we can see some basic statistical information: Source describes where the information came from: 1. funk_pubmed: an example dataset based on PubMed provided by R. Funk with 2.3m publications 2. 
funk_wos: an example dataset based on Web of Science provided by R. Funk with ca. 1m publications (however a number of them have no \(CD\)-index). Both datasets only included PubMed ID, DOI and the calculated \(CD\)-indices but no citation or references. 3. dim_all,dim_journals,dim_pubmed: our calculations, as per section 3.1 above 4. Count: number of publications with non-NULL \(CD_{5}\) index 5. \(q_{*}\): quantiles \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline \multirow{2}{*}{publication} & \multicolumn{1}{p{56.9pt}|}{From Park et al., 2023} & \multicolumn{1}{p{56.9pt}|}{dim\_all} & \multicolumn{1}{p{56.9pt}|}{dim\_journals} & \multicolumn{1}{p{56.9pt}|}{dim\_pubmed} \\ \hline Baltimore & \(-0.55\) & \(-0.44\) & \(-0.50\) & \(-0.44\) \\ \hline Kohn Sham 1965 & \(-0.22\) & \(-0.26\) & \(-0.29\) & NULL & (not indexed by PubMed) \\ \hline Watson Crick 1953 & \(0.52\) & \(0.60\) & NULL & (less than 5 references) & \(0.63\) \\ \hline \end{tabular} \end{table} Table 1. A comparison of \(CD_{5}\) from Park et al., 2023 and our calculations \begin{table} \begin{tabular}{|p{56.9pt}||p{56.9pt}||p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline source & count & mean & std & \(q_{25}\) & \(q_{50}\) & \(q_{75}\) & \(q_{95}\) & \(q_{99}\) \\ \hline funk\_pubmed & 2326769 & \(-0.01\) & \(0.11\) & \(-0.02\) & \(-0.00\) & \(-0.00\) & \(0.04\) & \(0.44\) \\ \hline funk\_wos & 836576 & \(0.01\) & \(0.12\) & \(-0.01\) & \(-0.00\) & \(0.00\) & \(0.02\) & \(1.00\) \\ \hline dim\_all & 79095505 & \(0.17\) & \(0.38\) & \(-0.00\) & \(0.00\) & \(0.01\) & \(1.00\) & \(1.00\) \\ \hline dim\_journals & 38612176 & \(0.00\) & \(0.08\) & \(-0.01\) & \(-0.00\) & \(0.00\) & \(0.03\) & \(0.29\) \\ \hline dim\_pubmed & 28165467 & \(0.15\) & \(0.37\) & \(-0.00\) & \(-0.00\) & \(0.00\) & \(1.00\) & \(1.00\) \\ \hline \end{tabular} \end{table} Table 2. Some basic statistical information of the various data fields. The minimum and maximum for all computations is -1 and 1. We observe that all versions of the \(CD\)-index behave very similarly: a distribution around 0 which is concentrated around 0. An exception is dim_all which seem to have many more publications with a high \(CD\)-index. This is mainly due to the fact that there are 13m publications in the dim_all dataset that have no references because Dimensions (or other services like CrossRef Dimension relies on) has not received the necessary metadata or full-text to extract references and citations. A simple histogram (Figure 3) of the 5 versions makes this even more evident. In Figure 4 we also reproduced the decline of disruptive papers over time visualised in R. J. Funk and Owen-Smith, 2017. Figure 4. The average \(CD_{5}\) index from our calculations over time. The data is also available in the file cd_trends.csv in Pasin and Sivt, 2023 Figure 3. The distribution of the \(CD_{5}\) index from the different sources. This is a histogram with bins of size 0.01 for the \(CD\)-index. The data is also available in the file cd_histogram.csv in Pasin and Sivt, 2023 At last we can also look at how much the sample data by R. Funk agrees with the one calculated by our method. As an example we use the PubMed data from R. Funk and Dimensions. These are 2.3m publications. We classify the top (bottom) 1% for each \(CD\)-index as disruptive (consolidating) and the rest as neutral (following the terminology of R. J. 
Funk and Owen-Smith, 2017 where high \(CD\)-indices indicate disruption and low \(CD\)-indices consolidation). We considered if our methodology and the \(CD\)-index computation on PubMed data on Dimensions GBQ can mimic the Funk's \(CD\)-index created via PubMed data and his own calculations. Although precision and recall are not very impressive at least there are very few cases where one \(CD\)-index labels a publication as disruptive and the other \(CD\)-index labels it as consolidating and vice versa. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline & precision & recall & f1-score & support \\ \hline consolidating & 0.53 & 0.44 & 0.48 & 28270 \\ \hline disruptive & 0.63 & 0.63 & 0.63 & 23237 \\ \hline neutral & 0.99 & 0.99 & 0.99 & 2275205 \\ \hline \end{tabular} \end{table} Table 3. Precision and recall if we use our calculations of \(CD_{5}\) for PubMed publications to predict R. Funk’s \(CD_{5}\). Figure 5. The consolidation matrix for consolidating, neutral and disruptive publications according to Funk’s PubMed calculations and our calculations. ## 4. Conclusion In this article we presented a novel method for calculating disruption metrics based on SQL and the Dimensions on Google BigQuery data set, which reduces the computation time by an order of magnitude, when compared to traditional methods based on Python or other programming languages. Moreover, by leveraging the cloud-based architecture of Dimensions, this approach does not require specialised knowledge for setting up specialised computing infrastructure that can handle large-scale analytical tasks. Being able to calculate disruption metrics of publications and patents at scale, using multiple configurations and within reasonable amounts of time, makes it easier for researchers to focus on experimentation and analyses of these indicators, thus enabling the science of science community to assess and refine the usefulness of disruption indicators with increased confidence and speed. We validate our method against the original implementation of the CD index, both mathematically and by comparing the results of the calculations. The CD index results for the PubMed dataset and the code used to generate them is available online for review. Conflicts of interestThe authors are employees of Digital Science, the owner and commercial operator of Dimensions. AcknowledgementsWe thank Russel Funk for providing us with some of his results for comparison with our own data and Daniel Hook for his advice. Data AvailabilityJupyter notebooks and SQL queries for Dimensions on Google BigQuery are available on Github Pasin and Sixt, 2023 Dimensions on Google BigQuery data is available for non-commercial scientometrics research projects FundingThis research was not funded. The Open Access fees have been covered by Digital Science. CRediT 1. Joerg Sixt: Conceptualization, Data curation, Formal Analysis, Investigation, Validation, Writing - original draft, Writing - review and editing 2. Michele Pasin: Conceptualization, Project administration, Supervision, Validation, Writing - original draft, Writing - review and editing
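As a complement to the comparison above, the classification of publications into consolidating, neutral and disruptive by the bottom and top 1% of the \(CD_{5}\) distribution, together with the resulting precision and recall figures, can be reproduced with a few lines of Python once the two sets of \(CD\) values are joined on a common identifier. The sketch below is only illustrative: the arrays cd_funk and cd_dimensions stand in for the real joined data and are generated synthetically here.

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

def label_by_percentile(cd, low=1, high=99):
    """Bottom `low` percent -> consolidating, top (100 - high) percent -> disruptive."""
    lo, hi = np.percentile(cd, [low, high])
    return np.where(cd <= lo, "consolidating",
                    np.where(cd >= hi, "disruptive", "neutral"))

# cd_funk and cd_dimensions should hold CD_5 values for the same publications,
# joined e.g. on PubMed ID; synthetic stand-in data are used here for illustration.
rng = np.random.default_rng(0)
cd_funk = rng.normal(0.0, 0.11, 100_000)
cd_dimensions = np.clip(cd_funk + rng.normal(0.0, 0.05, 100_000), -1, 1)

labels = ["consolidating", "neutral", "disruptive"]
y_true = label_by_percentile(cd_funk)
y_pred = label_by_percentile(cd_dimensions)
print(confusion_matrix(y_true, y_pred, labels=labels))
print(classification_report(y_true, y_pred, labels=labels))
```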
2302.14656
Spatial Propagation of Weak Lensing Shear Response Corrections
In this paper we show how response function corrections to shear measurements (e.g. as required by Metacalibration) propagate into cosmic shear power spectra. We investigate a 2-sphere pixel (also known as 'HEALpixels') correction and a forward-modelling approach using simple Gaussian simulations. In the 2-sphere pixel-correction approach we find a free parameter that is the tolerated condition number of the local response matrices: if this is too large then this can cause an amplification of the shot noise power spectrum, if too small it can lead to a loss of area (and a possible selection bias). In contrast, by forward-modelling the power spectrum this choice can be avoided. This also applies to map-based inference methods using shear-response calibrated maps.
T. D. Kitching, N. Tessore, P. L. Taylor
2023-02-28T15:27:15Z
http://arxiv.org/abs/2302.14656v1
# Spatial Propagation of Weak Lensing Shear Response Corrections ###### Abstract In this paper we show how response function corrections to shear measurements (e.g. as required by Metacalibration) propagate into cosmic shear power spectra. We investigate a 2-sphere pixel (also known as 'HEALpixels') correction and a forward-modelling approach using simple Gaussian simulations. In the 2-sphere pixel-correction approach we find a free parameter that is the tolerated condition number of the local response matrices: if this is too large then this can cause an amplification of the shot noise power spectrum, if too small it can lead to a loss of area (and a possible selection bias). In contrast by forward-modelling the power spectrum this choice can be avoided. This also applies to map-based inference methods using shear-response calibrated maps. Version November 8, 2021 ## 1. Introduction Weak lensing is the effect whereby the apparent observed ellipticity (third flattening, or third eccentricity) of galaxies is altered by the presence of matter along the line-of-sight. The effect can be approximated by an additional ellipticity added to the unlensed (intrinsic) ellipticity that is known as shear. Measurement of the weak lensing effect from data, to infer the shear, can be biased by several effects e.g. inaccuracies in the algorithms used (Heymans et al., 2006; Massey et al., 2007; Bridle et al., 2010; Kitching et al., 2012; Mandelbaum et al., 2015), detector effects (Antilogus et al., 2014), the size of the point spread function (Hoekstra et al., 2017; Kannawadi et al., 2019; Gatti et al., 2021), and detection effects (Hoekstra et al., 2015; Hoekstra, 2021). Kitching et al. (2019, 2020) demonstrated how biases, parameterised by multiplicative and additive terms that modify the observed ellipticity (unlensed ellipticity plus shear), affect the observed power spectrum of weak lensing data (known as cosmic shear). However, in several methods such as lensfit (Miller et al., 2007) and Metacalibration (MetaCal; Sheldon and Huff, 2017; Huff and Mandelbaum, 2017), there is the concept of an additional multiplicative response matrix that multiplies only the shear and not the unlensed ellipticity measurement. In particular to account for such a term in MetaCal a local pixel-level correction has been proposed, but the effect of the local noise on these corrections, and alternatively the propagation of the response terms through to the cosmic shear power spectrum has not been shown, and this is what we address in this paper. In Section 2 we present the methodology, in Section 3 we present results of testing on simulations, and in Section 4 we discuss conclusions. ## 2. Method The MetaCal (Huff and Mandelbaum, 2017; Sheldon and Huff, 2017) correction can be written as a local transformation of the ellipticity field like \[[\widetilde{e}_{1}+\mathrm{i}\widetilde{e}_{2}]_{p}=[e_{1}+\mathrm{i}e_{2}+(R _{11}\gamma_{1}+R_{12}\gamma_{2})+\mathrm{i}(R_{21}\gamma_{1}+R_{22}\gamma_{2 })]_{p}, \tag{1}\] where \(R_{ij}\) are elements of the \(2\times 2\) response matrix \(R\), \(\gamma_{i}\) are the true shear, \(\widetilde{e}\) are the observed ellipticities, and \(e_{i}\) are the unlensed uncorrelated intrinsic ellipticity component (i.e. the shot noise term). This is a locally defined transform within a angular pixel \(p\) where \(i=\{1,2\}\) refer to local distortions relative to a Cartesian tangent plane, where \(i=1\) are distortions along the axes and \(i=2\) are distortions at 45 degrees to the axes. 
Locally, the elements of the response matrix can be related to spin-0 and spin-4 multiplicative biases to the shear where \[[\widetilde{e}_{1}+\mathrm{i}\widetilde{e}_{2}]_{p} =[e_{1}+\mathrm{i}e_{2}+r_{0}(\gamma_{1}+\mathrm{i}\gamma_{2})+r_ {4}(\gamma_{1}-\mathrm{i}\gamma_{2})]_{p}\] \[r_{0} =\frac{1}{2}[(R_{11}+R_{22})+\mathrm{i}(R_{21}-R_{12})]\] \[r_{4} =\frac{1}{2}[(R_{11}-R_{22})+\mathrm{i}(R_{21}+R_{12})]. \tag{2}\] This can be compared to a global expression on the Celestial sphere in which \[\widetilde{\mathbf{c}}(\mathbf{\Omega})=\mathbf{e}(\mathbf{\Omega})+r_{0}( \mathbf{\Omega})\gamma(\mathbf{\Omega})+r_{4}(\mathbf{\Omega})\gamma^{*}( \mathbf{\Omega}) \tag{3}\] where \(\mathbf{x}(\mathbf{\Omega})=x_{E}(\mathbf{\Omega})+\mathrm{i}x_{B}(\mathbf{ \Omega})\), where each field is defined relative to an \(E\) and \(B\)-mode field on the sphere, where \(\mathbf{\Omega}\) are angular coordinates. \(r_{0}\) is a spin-0 term and \(r_{4}\) is a spin-4 term, where the total spin-2 nature of all terms is conserved. ### Power Spectra The power spectra estimates for the measured ellipticity can now be computed by taking the correlation of the spherical harmonic coefficients, computed using a spin-weight spherical harmonic transform for a spin-2 field, where \[\widetilde{C}^{GH}_{\ell,ij}\equiv\frac{1}{2\ell+1}\sum_{m}\widetilde{\epsilon} ^{G}_{\ell m,i}\widetilde{\epsilon}^{H,*}_{\ell m,j} \tag{4}\] for \(G=\{E,B\}\) and \(H=\{E,B\}\), where \(i\) and \(j\) labels for tomographic bins delineating galaxy populations defined by redshift or colour (Kitching et al., 2019). We will assume that the true \(EB\) and \(BE\) power spectra are zero \(C^{EB}_{\ell,ij}=C^{BE}_{\ell,ij}=0\), which should be the case in all but the most exotic dark energy models that cause parity-violating modes (Amendola et al., 2013). Given this assumption, the estimated \(EE\) power spectra is given by \[\widetilde{C}^{EE}_{\ell,ij}=\left[\sum_{\ell^{\prime}}\mathcal{M}^{++}_{\ell ^{\prime},ij}C^{EE}_{\ell^{\prime},ij}+\mathcal{M}^{--}_{\ell^{\prime},ij}C^{ BB}_{\ell^{\prime},ij}\right]+N_{\ell},\quad\text{ and}\quad\widetilde{C}^{BB}_{\ell,ij}=\left[\sum_{\ell^{\prime}}\mathcal{M}^{+-}_{ \ell^{\prime},ij}C^{BB}_{\ell^{\prime},ij}+\mathcal{M}^{-+}_{\ell^{\prime}, ij}C^{EE}_{\ell^{\prime},ij}\right]+N_{\ell} \tag{5}\] where \(N_{\ell}=\sigma_{e}^{2}/N_{\text{gal}}\) is the shot noise power spectrum (where \(N_{\text{gal}}\) is the number of galaxies used to compute the power spectrum and \(\sigma_{e}^{2}\) is the variance of the unlensed ellipticity), and \(C^{EE}_{\ell,ij}\) and \(C^{BB}_{\ell^{\prime},ij}\) are the \(EE\) and \(BB\) power spectra of shear. If the data is masked or there is a further multiplicative bias then a further mixing of modes would occur to the \(\widetilde{C}^{EE}_{\ell,ij}\) and \(\widetilde{C}^{BB}_{\ell,ij}\), as described in Kitching et al. (2020); these additional mixing matrices would affect both the signal and the noise terms. Following Brown et al. 
(2005) and Appendix A the calculation of mixing matrices can be written like \[\mathcal{M}^{++}_{\ell^{\prime},ij}=\frac{2\ell^{\prime}+1}{8\pi }\sum_{\ell^{\prime\prime}}(2\ell^{\prime\prime}+1)\left[1+(-1)^{\ell+\ell^{ \prime}+\ell^{\prime\prime}}\right]\left[C^{\prime\prime\prime}_{\ell^{\prime \prime},ij}\left(\begin{array}{cc}\ell^{\prime}\ell^{\prime\prime}\\ -2\end{array}0\right)^{2}+C^{\prime\prime}_{\ell^{\prime\prime},ij}\left( \begin{array}{cc}\ell^{\prime}\ell^{\prime\prime}\\ 2\end{array}-4\right)^{2}\pm 2C^{\prime\prime}_{\ell^{\prime\prime},ij} \left(\begin{array}{cc}\ell^{\prime}\ell^{\prime\prime}\\ -2\end{array}0\right)\left(\begin{array}{cc}\ell^{\prime}\ell^{\prime \prime}\\ 2\end{array}-4\right)\right]\] \[\mathcal{M}^{-\pm}_{\ell^{\prime\prime},ij}=\frac{2\ell^{\prime}+ 1}{8\pi}\sum_{\ell^{\prime\prime}}(2\ell^{\prime\prime}+1)\left[(-1)^{\ell+ \ell^{\prime}+\ell^{\prime\prime}}-1\right]\left[C^{\prime\prime}_{\ell^{ \prime\prime},ij}\left(\begin{array}{cc}\ell^{\prime}\ell^{\prime\prime}\\ -2\end{array}0\right)^{2}+C^{\prime\prime}_{\ell^{\prime\prime},ij}\left( \begin{array}{cc}\ell^{\prime}\ell^{\prime\prime}\\ 2\end{array}-4\right)^{2}\pm 2C^{\prime\prime}_{\ell^{\prime\prime},ij} \left(\begin{array}{cc}\ell^{\prime}\ell^{\prime\prime}\\ -2\end{array}0\right)\left(\begin{array}{cc}\ell^{\prime}\ell^{\prime\prime} \\ 2\end{array}-4\right)\right] \tag{6}\] where \(C^{\prime\times r_{Y}}_{\ell,ij}\) is the (cross) power spectrum of \(r_{X}\) and \(r_{Y}\), where \(X\) and \(Y=\{0,4\}\), and the matrices are Wigner-\(3j\) symbols. It is noted that, since the values of \(R_{ij}\) are not small compared to the shear, all terms including cross-terms need to be included. ### Pixel Correction An alternative is to correct the observed ellipticity field directly. In this case we label quantities with a subscript \(p\) to mean an angular/2-sphere pixels (we use pixelisation of the 2-sphere defined in McEwen & Wiaux, 2011) on the sky e.g. \(\mathbf{e}_{p}\). A correction for the response function then can be constructed as \(\hat{\gamma}_{p}\simeq R_{p}^{-}\mathbf{e}_{p}+\gamma_{p}\), where \(\hat{\gamma}_{p}\) is the estimated shear after correction, which in the case of no noise (\(\mathbf{e}\to 0\)) is equal to the shear. However, the first term in the correction \(R_{p}^{-1}\mathbf{e}\) represents a local amplification of the shot noise; this is true even if one only applies an average correct by computing the mean of the inverse-response over the sky. This is equivalent of applying a matrix transformation to the unlensed ellipticity where \[\left[\hat{\gamma}_{1}+\hat{\gamma}\hat{\gamma}_{2}\right]_{p} =\left[\gamma_{1}+\hat{\gamma}_{2}+r_{0}^{-}(e_{1}+\hat{\imath}e_{2})+r_ {4}^{\prime}(e_{1}-\hat{\imath}e_{2})\right]_{p}\] \[r_{0}^{\prime} =\frac{1}{2}\big{[}(R_{11}^{-1}+R_{22}^{-1})+\text{i}(R_{21}^{-1} -R_{12}^{-1})\big{]}\] \[r_{4}^{\prime} =\frac{1}{2}\big{[}(R_{11}^{-1}-R_{22}^{-1})+\text{i}(R_{21}^{-1} +R_{12}^{-1})\big{]}, \tag{7}\] where \(R_{ij}^{-1}\) are elements of the locally-inversed \(R\) field on the sphere (i.e. that field which is constructed by computing a pixel-by-pixel inverse of the \(R\) field). Therefore when taking the power spectrum the noise-amplification term needs to be corrected. 
Taking the power spectrum of equation (7) one finds a analogous equation to before where \[C^{EE}_{\ell,ij}=\widetilde{C}^{EE}_{\ell,ij}-\left[\sum_{\ell^{\prime}} \mathcal{N}^{++}_{\ell\ell^{\prime},ij}N^{EE}_{\ell^{\prime},ij}+\mathcal{N}^{--}_{ \ell\ell^{\prime},ij}N^{BB}_{\ell^{\prime},ij}\right],\quad\text{ and}\quad C^{BB}_{\ell,ij}=\widetilde{C}^{BB}_{\ell,ij}-\left[\sum_{ \ell^{\prime}}\mathcal{N}^{+-}_{\ell\ell^{\prime},ij}N^{BB}_{\ell^{\prime},ij}+ \mathcal{N}^{-+}_{\ell\ell^{\prime},ij}N^{EE}_{\ell^{\prime},ij}\right] \tag{8}\] where the \(\mathcal{N}^{++}_{\ell\ell^{\prime},ij}\) and \(\mathcal{N}^{-+}_{\ell\ell^{\prime},ij}\) are defined in the same way as equation (6) except using the \(r_{0}^{\prime}\) and \(r_{4}^{\prime}\) fields defined in equation (7). We note that \(N^{++}_{\ell\ell^{\prime},ij}\simeq N^{EE}_{\ell,ij}\) so some terms will cancel. Thus we find that a local pixel correction requires a correction for the noise amplification effect at the power spectrum level. We note that in doing such a pixel-correction an inverse of the local \(R\) matrices is required, which for close-to-singular matrices may lead to an ill-conditioned computational procedure; and since the distribution of \(R_{ij}\) values is broad and can cross \(R_{ij}=0\) it is possible the matrices may be singular. Hence a free parameter in such an approach is the acceptable condition number allowed, below which a pixel would be masked (thus decreasing the usable area of the survey); we define the conditional number1 for \(x\) as the norm of \(x\) multiplied by the norm of the inverse of \(x\), i.e. \(c_{R}=|x|/|x^{-1}|\). To explain further: in the pixel-correction case one needs to invert the \(R_{ij}\) matrices locally, the matrices might not be numerically invertible everywhere, we use the condition number to determine what is invertible, we consider a pixel unobserved if its matrix is not invertible. In the next Section we create simple simulations to test the forward-modelling approach, the pixel-correction approach, and an approach of taking the mean response over the sky. ## 3. Tests on Simulations We model \(\gamma(\Omega)\) as a Gaussian random field (generated using the massmappy code Wallis et al., 2017), assuming a DES Year 1 cosmology (Flaugher et al., 2015; Abbott et al., 2018; Morganson et al., 2018) to compute the EE cosmic shear power spectrum. We assume the the Limber (Limber, 1953; Kitching et al., 2017; Lemos et al., 2017), reduced shear (Deshpande and Kitching, 2020), flat-Universe (Taylor et al., 2018), flat-sky (Kamionkowski et al., 1998) and prefactor-unity (Kitching et al., 2017) approximations. Therefore the EE power spectrum is given by: \[C_{\ell}^{EE}=\int_{0}^{\chi n}\mathrm{d}\chi\frac{q^{2}(\chi)}{\chi^{2}}P_{ \delta}\left(\frac{\ell+1/2}{\chi},\chi\right),\quad\mathrm{where}\quad q(\chi )=\frac{3}{2}\Omega_{\mathrm{M}}\frac{H_{0}^{2}}{c^{2}}\frac{\chi}{a(\chi)} \int_{\chi}^{\chi n}\mathrm{d}\chi^{\prime}\,n(\chi^{\prime})\,\frac{\chi^{ \prime}-\chi}{\chi}; \tag{9}\] where \(P_{\delta}\) is the power spectrum of matter overdensities that we calculate using CAMB(Lewis et al., 2000) (we include the corrections from Mead et al., 2015, for the non-linear corrections). \(H_{0}\) is the Hubble constant, \(\chi\) and \(\chi_{\mathrm{H}}\) are the comoving distance and comoving distance to the horizon respectively (calculated using the astropy package Astropy Collaboration et al. 
2018, 2013), \(a\) is the scale factor of the Universe, \(\Omega_{\mathrm{M}}\) is the dimensionless total matter density of the Universe, and \(c\) is the speed of light in a vacuum. \(n(\chi)\) is the galaxy distribution function of the survey (where we use the photometric DES Year 1 galaxy distribution. Abbott et al., 2018), and we assume only a single tomographic bin. We assume that \(EB\) and \(BB\) shear power spectra are zero. We also make a mask that removes data within \(20^{\circ}\) in both the galactic and ecliptic planes; and also \(20\%\) of pixels at random, to represent an all-sky mask with random patches removed, resulting in a total observed sky fraction of \(f_{\mathrm{sky}}=0.4\). Finally we assume \(\sigma_{e}=0.3\), and \(N_{\mathrm{gal}}=30f_{\mathrm{sky}}3600(4\pi[180/\pi]^{2})\) for the shot noise modelling. This is is similar to upcoming Stage-IV surveys (Albrecht et al., 2006). We then model the response matrix elements as Gaussian random fields with a white-noise power spectrum with mean \(\mu=1\) and a standard deviation of \(\sigma=0.6\) for all \(R_{ij}\); which approximates the amplitude observed in real shape measurement methods (Huff and Mandelbaum, 2017). This is meant as the simplest test of the approaches outlined in this paper, in future much more realistic values should be used that also correlate with galaxies, instrument and telescope properties. In Figure 1 we show the observed power spectrum, that includes the effects of the response matrix and noise (blue lines); and shear-only power spectrum that does not include noise or the response matrix terms (red lines). We then compare this the pixel-correction approach (equation 7) that should reproduce the shear-only power spectrum, and the forward modelling approach (equation 5) that should produce the observed power spectrum. We test the pixel-correction approach for two cases where the condition number of the response matrices is limited to \(c_{R}\leq 10\) and \(c_{R}\leq 100\); where \(c_{R}\) is the condition number of the local response matrices, defined using the numpy function numpy.linalg.cond1. We find that both approaches reproduce the expected results, however in the pixel-correction approach setting a condition number limit on the matrices that is too large can lead to an amplification of the noise term in equation (7) and hence a deviation from the shear-only power spectrum. Thus there is a trade-off in the pixel-correction approach between accuracy (a tight limit on the condition number) and area coverage (i.e. pixels not included because they are excluded by the limit). In our simple simulation we find a \(20\%\) change in area caused by going from \(c_{R}\leq 100\) to \(c_{R}\leq 10\). Such a trade-off is not required in the forward modelling approach. In the case of tomographic binning the shot-noise amplification will be larger because there will be fewer galaxies in each pixel. Footnote 1: The use of the inverse-scattering method is not necessary for the calculation of the inverse-scattering method. Finally a different approach could instead use a global correction, by dividing all observed ellipticities by the mean of the response function over the sky, hoping to derive the true shear power spectrum. In this case, since in our simulations the mean is unity, this would result in an error between the inferred shear power spectrum and the true shear power spectrum that was equivalent to the difference between the uncorrected case (blue lines) and the true case (red lines). 
Clearly in such an approach there would be a large bias. Figure 1.— We show the observed power spectrum, that includes the effects of the response matrix and noise (blue lines); and shear-only power spectrum that does not include noise or the response matrix terms (red lines). We then compare this the pixel-correction approach (equation 7, green lines) that should reproduce the shear-only power spectrum, and the forward modelling approach (equation 5; blue lines) that should produce the observed power spectrum. The left-hand plot is for a condition number limit of \(c_{R}\leq 10\), and the right-hand for \(c_{R}\leq 100\) in the pixel-corrected approach. ## 4 Conclusions In this paper we show how response functions propagate into cosmic shear power spectra computed from pixelised maps, and we investigate pixel-correction and forward-modelling approaches using simulations. In the pixel-correction approach there is a free parameter that is the tolerated condition number of the local response matrices - if this is too large then this can cause an amplification of the shot noise power spectrum, if too small it can lead to a loss of area. In contrast forward-modelling the power spectrum avoids this choice. Forward-modelling involves measuring the response of each galaxy as a function of local conditions at the map level and the propagation of angular variation through to the power spectrum. In more complex approaches one could also draw from the probability distribution function of the response-function map to propagate uncertainties in these measurements into the power spectrum. Alternatively one could infer the power spectrum directly, rather than via a map, but in this case the angular variation of the response function would still need to be accounted for. ###### Acknowledgements. We thank the developers of SBHT, massamppy, astropy and CAMB. NT is supported by UK Space Agency grants ST/W002574/1 and ST/X00208X/1.
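To make the condition-number trade-off of the pixel-correction approach concrete, the following sketch (ours, not the analysis pipeline used in the paper) draws per-pixel \(2\times 2\) response matrices with the same statistics as in the simulations (elements independently Gaussian with mean \(1\) and standard deviation \(0.6\)), computes condition numbers with numpy.linalg.cond, masks pixels above a tolerated value \(c_{R}\), and applies the local inverse response to a toy observed-ellipticity field. The retained area fraction and the noise amplification can then be compared for \(c_{R}\leq 10\) and \(c_{R}\leq 100\); the shear and ellipticity amplitudes below are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(42)
n_pix = 100_000                                   # number of observed sky pixels (toy value)

# Per-pixel 2x2 response matrices: elements drawn as N(mu=1, sigma=0.6), as in Section 3.
R = rng.normal(1.0, 0.6, size=(n_pix, 2, 2))
cond = np.linalg.cond(R)                          # condition number of each local matrix

# Toy observed ellipticity: intrinsic part (sigma_e = 0.3) plus response-multiplied shear.
gamma = rng.normal(0.0, 0.02, size=(n_pix, 2))    # illustrative "true" shear amplitude
e_obs = rng.normal(0.0, 0.3, size=(n_pix, 2)) + np.einsum("pij,pj->pi", R, gamma)

for c_max in (10, 100):
    keep = cond <= c_max                          # tolerated condition number c_R
    # Pixel correction: apply the local inverse response where it is well conditioned.
    gamma_hat = np.einsum("pij,pj->pi", np.linalg.inv(R[keep]), e_obs[keep])
    print(f"c_R <= {c_max:3d}: area kept = {keep.mean():.2f}, "
          f"rms of corrected shear = {gamma_hat.std():.3f}")
```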
2309.16455
Signatures of criticality in turning avalanches of schooling fish
Moving animal groups transmit information through propagating waves or behavioral cascades, exhibiting characteristics akin to systems near a critical point from statistical physics. Using data from freely swimming schooling fish in an experimental tank, we investigate spontaneous behavioral cascades involving turning avalanches, where large directional shifts propagate across the group. We analyze several avalanche metrics and provide a detailed picture of the dynamics associated to turning avalanches, employing tools from avalanche behavior in condensed matter physics and seismology. Our results identify power-law distributions and robust scale-free behaviour through data collapses and scaling relationships, confirming a necessary condition for criticality in fish schools. We explore the biological function of turning avalanches and link them to collective decision-making processes in selecting a new movement direction for the school. We report relevant boundary effects arising from interactions with the tank walls and influential roles of boundary individuals. Finally, spatial and temporal correlations in avalanches are explored using the concept of aftershocks from seismology, revealing clustering of avalanche events below a designated timescale and an Omori law with a faster decay rate than observed in earthquakes.
Andreu Puy, Elisabet Gimeno, David March-Pons, M. Carmen Miguel, Romualdo Pastor-Satorras
2023-09-28T14:05:45Z
http://arxiv.org/abs/2309.16455v3
# Self-similarity of turning avalanches in schooling fish ###### Abstract Groups of animals are observed to transmit information across them with propagating waves or avalanches of behaviour. These behavioral cascades often display scale-free signatures in their duration and size, ranging from activating a single individual to the whole group, signatures that are commonly related to critical phenomena from statistical physics. A particular example is given by turning avalanches, where large turns in the direction of motion of individuals are propagated. Employing experimental data of schooling fish, we examine characteristics of spontaneous turning avalanches and their dependency with schools of different number of individuals. We report self-similar properties in the avalanche duration, size and inter-event time distributions, as well as in the avalanche shape. We argue that turning avalanches are a result of collective decision-making processes to select a new direction to move. They start with the group having low speed and decreasing the coordination, but once a direction is chosen, speed increases and coordination is restored. We report relevant boundary effects given by wall interactions and by individuals at the border of the group. We conclude investigating spatial and temporal correlations using the concept of aftershocks from seismology. Contrary to earthquakes, turning avalanches display statistically significant clustered events only below a given time scale and follow an Omori law for aftershocks with a faster decay rate exponent than that observed in real earthquakes. ## I Introduction A fascinating and controversial hypothesis in biology is that some systems may operate close to a critical point from statistical physics, separating an ordered from a disordered state of the system [1; 2; 3]. Biological systems at a critical point are believed to posses functional advantages such as optimality in signal detection, storing and processing, large correlations in coordinated behaviour and widest spectrum of possible responses [4; 5; 6]. Criticality is often associated to scale invariance, exemplified by power-law distributions lacking relevant characteristic scales besides natural cut-offs [1; 2; 7]. In particular, this is observed for systems exhibiting spatiotemporal activity in the form of cascades or avalanches with variable duration and size, which at the critical point are distributed as power laws with anomalously large variance. There has been evidence of criticality signatures in many different biological systems, ranging from neural activity and brain networks, gene regulatory networks, collective behaviour of cells or collective motion [4; 5; 8]. The field of collective motion, in particular, studies the group movement patterns exhibited by social organisms, such as flocks of birds, fish schools, insect swarms, herds of mammals and human crowds [9; 10]. In this context, analytical and experimental studies of moving animal groups suggest the existence of phase transitions between phases of coherent and incoherent motion [11; 12; 13; 14; 15]. Moreover, groups of animals can transmit information across the group in the form of propagating waves or avalanches of behaviour, as occurs in fish schools [16; 17; 18; 19; 20; 21; 22; 23], honeybees [24], bird flocks [25], sheep herds [26] or macaque monkeys [27]. 
Such behavioural cascades are typically represented by behavioral shifts in the speed, acceleration or heading of individuals, and can be generated from responses to the detection of predators or sources of food, or even arise spontaneously. From a physical perspective, they can be related to critical systems with large susceptibility, but from a biological point of view they can occur when individuals follow the behaviour of others without regarding their own information [28]. Avalanche dynamics can transition from being supercritical with local changes propagating through the entire group, critical with changes propagating at all possible scales of the system, or subcritical with changes remaining local [29]. There is evidence that the state of criticality can be regulated by moving animal groups [20; 21; 27]. An example of behavioural cascades is given by turning avalanches [22], consisting of the propagation across the group of large changes in the heading direction of individuals, where large is defined by comparison with a predefined turning threshold. Studying schooling fish of the species black neon tetra _Hyphessobrycon herbertaxelrodi_ [22], the duration and size distributions of turning avalanches were observed to display scale-free signatures and fulfill a scaling relationship for different turning thresholds, resembling a critical system. In addition, the scale-free nature of turning avalanches was related to the presence of leadership structures, where leaders were identified as individuals displaying an unusually large probability to start a turning avalanche [22]. Here we study empirically spontaneous turning avalanches in schooling fish considering the effects of schools with different numbers of individuals. We first revise the definition of turning avalanches from individual turning rates and analyze basic avalanche metrics. We explore their statistical distributions and their dependency on schools of different numbers of individuals, finding robust scale-free distributions for the size, duration and time between avalanches. The scaling of the distributions as a function of the turning threshold and the number of individuals in the school is related to the density of avalanches in time, which allows us to collapse the distributions for a fixed avalanche density. Next, we investigate how avalanches are triggered in space, time and by individual initiators. We also explore the dynamical evolution of avalanches and its relation with the state of the school. Finally, we analyze spatial and temporal correlations in avalanches borrowing the concept of aftershocks from seismology. ## II Experimental data We employ schooling fish of the species black neon tetra (_Hyphessobrycon herbertaxelrodi_), a small freshwater fish of average body length \(2.5\) cm that has a strong tendency to form cohesive, highly polarized and planar schools [30]. The experiments, performed at the Scientific and Technological Centers UB (CCiTUB), University of Barcelona (Spain), consisted of schools of \(N=8,16,32\) and \(50\) individuals freely swimming in a square tank of side \(L=100\) cm with a water column of \(5\) cm of depth, resulting in an approximately two-dimensional movement. Videos of the fish movement were recorded with a digital camera at \(50\) frames per second, with a resolution of \(5312\times 2988\) pixels per frame, the side of the tank measuring \(L=2730\) pixels. Digitized individual trajectories were obtained from the video recordings using the open source software idtracker.ai [31]. 
Invalid values returned by the program, caused by occlusions, were corrected in a supervised way, semi-automatically interpolating with spline functions (now incorporated in the Validator tool from version 5 of idtracker.ai). For better accuracy, we projected the trajectories in the plane of the fish movement, warping the tank walls of the image into a proper square (for details see Ref. [32]). We smoothed the trajectories with a Gaussian filter with \(\sigma=2\), truncating the filter at \(5\sigma\) [33]. Individual velocities and accelerations were obtained from the Gaussian filter using derivatives of the Gaussian kernel. We discarded recordings where fish stop for prolonged periods. We implement this quantitatively by applying a Gaussian filter with \(\sigma=200\) to the mean speed of individuals \(\left<v\right>\) and discarding sequences that go below a given threshold \(\left<v\right>_{th}=1.5\). The remaining experiments we analyze consist of \(6\) independent recordings (performed on different days and with different individuals) of \(N=8\) fish during \(30\) min (\(90000\) frames), \(3\) recordings of \(N=16\) fish during \(30\) min, \(3\) recordings of \(N=32\) fish during \(30\) min and \(3\) recordings of \(N=50\) fish during \(60\) min (\(180000\) frames). The data with \(N=8\) fish was previously used in Ref. [32]. ## III Avalanche definition and basic observables Behavioral avalanches in fish have been defined by measuring changes of different quantities. Here we follow Ref. [22], where cascades were computed in terms of large changes in the heading of individuals, defined by their velocity vector. In order to remove the dependency on the experimental frame rate of the recordings, we measure the changes in time of the heading in terms of the _turning rate_ \(\omega\), defined as the absolute value of the angular velocity, i.e. \[\omega=\frac{|\vec{v}\times\vec{a}|}{v^{2}}, \tag{1}\] where \(\vec{v}\) and \(\vec{a}\) are the instantaneous velocity and acceleration of an individual respectively, and \(v\) is the modulus of the instantaneous velocity. See Appendix A for a derivation of this expression. Figure 1: (a) PDF of the turning rate \(\omega\) and (b) activity rate \(r\) of turning avalanches as a function of the turning threshold \(\omega_{th}\). The different curves correspond to experimental data from schools with different number of individuals \(N\). Quantities are expressed in natural units of frames and pixels. We consider the absolute value due to symmetry in the turning direction. Notice also that this definition operates in continuous time and is defined, once velocity and acceleration are computed, for a single frame, in contrast to the turning angle used in Ref. [22]. In Fig. 1a we show the probability density function (PDF) of the turning rate \(P(\omega)\), observed in schools of different number of individuals \(N\). Here and in the following, we work in natural units of pixels and frames for distance and time, respectively. In addition, error bands in the PDF plots are calculated from the standard deviation of a Bernoulli distribution with the probability given by the fraction of counts in each bin of the numerical PDF [34]. As we can see, schools of different number of individuals show essentially the same behavior in their turning rate distributions. Most of the time, the turning rate is very small and uniformly distributed, corresponding to fish swimming locally in a straight trajectory. 
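For concreteness, the turning rate of Eq. (1) can be evaluated directly from the smoothed trajectories described above. The following minimal Python sketch is not the authors' analysis code; the data layout (one array per coordinate, one sample per frame) and the circular-trajectory self-check are illustrative assumptions.

```python
# Sketch: turning rate (Eq. 1) for one planar trajectory sampled once per frame.
# Positions x, y are 1D numpy arrays in pixels; results are in radians per frame.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def turning_rate(x, y, sigma=2.0, truncate=5.0):
    # Velocity and acceleration from first and second derivatives of the
    # Gaussian smoothing kernel (sigma = 2 frames, truncated at 5 sigma).
    vx = gaussian_filter1d(x, sigma, order=1, truncate=truncate)
    vy = gaussian_filter1d(y, sigma, order=1, truncate=truncate)
    ax = gaussian_filter1d(x, sigma, order=2, truncate=truncate)
    ay = gaussian_filter1d(y, sigma, order=2, truncate=truncate)
    cross = np.abs(vx * ay - vy * ax)   # |v x a| for 2D vectors
    return cross / (vx**2 + vy**2)      # omega = |v x a| / v^2

# Quick self-check on a circular trajectory with angular velocity w0.
t = np.arange(3000)
R, w0 = 300.0, 0.02
omega = turning_rate(R * np.cos(w0 * t), R * np.sin(w0 * t))
print(omega[200:-200].mean())           # close to w0 away from the boundaries
```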
In rare instances, however, large turning rates can be observed, in which individuals swiftly rearrange their headings and reorient their direction of motion. Inspired by avalanche behavior in condensed matter physics [35], we introduce a _turning threshold_\(\omega_{th}\) separating small from large turns. Considering an _active_ fish as one with a turning rate \(\omega>\omega_{th}\), we introduce the dynamical variable \(n_{t}\) defined as the number of active fish observed at frame \(t\). Then, sequences of consecutive frames in which \(n_{t}>0\) (i.e. in which there is at least one active fish) define a _turning avalanche_. In the Supplemental Material Video S1 we show some examples of large turning avalanches for a school of \(N=50\) fish [36]. The most basic characterization of turning avalanches is given by the duration \(T\) and size \(S\) of avalanches, and by their inter-event time \(t_{i}\). An avalanche starting at frame \(t_{0}\) has _duration_\(T\) if the sequence of dynamic variables \(n_{t}\) fulfills \(n_{t_{0}-1}=0\), \(n_{t}>0\) for \(t=t_{0},\ldots,t_{0}+T-1\), and \(n_{t_{0}+T}=0\). The _size_\(S\) of an avalanche is given by the total number of active fish in whole duration of the avalanche, i.e. \(S=\sum_{t=t_{0}}^{t_{0}+T-1}n_{t}\). The _inter-event time_\(t_{i}\) between two consecutive avalanches is given by the number of frames between the end of one avalanche and the start of the next one, that is, by a sequence fulfilling \(n_{t_{f}}>0\), \(n_{t}=0\) for \(t=t_{f}+1,\ldots,t_{f}+t_{i}\), and \(n_{t_{f}+t_{i}+1}>0\), where \(t_{f}\) indicates the last frame of the first avalanche [37]. The effects of the turning threshold in avalanches can be measured with the _activity rate_\(r\), defined as the probability that a randomly chosen frame belongs to an avalanche. We compute it as the ratio between the number of frames with activity \(n_{t}>0\) and the total number of frames in the experimental series. As we can see from Fig. 1b, for fixed \(N\) the activity rate decreases with the turning threshold \(\omega_{th}\), since by increasing \(\omega_{th}\) we are decreasing the turning rates that we consider large and we find less frames with \(n_{t}>0\). On the other hand, increasing the number of individuals \(N\) at fixed \(\omega_{th}\) results in an increase of the activity rate. We can interpret this as a school with larger number of individuals has a higher probability for any of them to display a large turning rate. ## IV Statistical distributions In Ref. [22] the statistical distributions of duration \(T\) and size \(S\) for turning avalanches were studied for different turning thresholds \(\omega_{th}\) and for schools of fixed number of individuals \(N=40\), using recordings of short duration of 12000 frames (corresponding to 10 minutes). The results obtained were compatible with long-tailed power-law distributions of the form: \[P(T)\sim T^{-\alpha},\qquad P(S)\sim S^{-\tau}. \tag{2}\] The scaling exponents \(\alpha\) and \(\tau\) were estimated using an approach inspired by the finite-size scaling method [38], leading to the average values \(\alpha=2.4\pm 0.1\) and \(\tau=2.0\pm 0.1\). With the larger statistics of our experiments, for a fixed turning threshold \(\omega_{th}=0.1\) and for schools of different number of individuals \(N\), we show the duration \(T\) and size \(S\) distributions in Figs. 2a and 2b, respectively. 
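The observables entering these distributions are obtained directly from the activity series \(n_{t}\) defined above. A minimal sketch is given below; it is not the authors' code, and the assumed data layout (a single array of per-frame, per-fish turning rates) is an illustrative choice.

```python
# Sketch: durations T, sizes S, inter-event times t_i and activity rate r from
# the activity series n_t. `omega` is assumed to be an array of shape
# (n_frames, N) holding the turning rate of every fish at every frame.
import numpy as np

def avalanche_observables(omega, omega_th):
    n_t = (omega > omega_th).sum(axis=1)           # active fish per frame
    busy = (n_t > 0).astype(np.int8)
    edges = np.diff(np.concatenate(([0], busy, [0])))
    starts = np.flatnonzero(edges == 1)            # first frame of each avalanche
    ends = np.flatnonzero(edges == -1)             # one past the last frame
    durations = ends - starts                      # T
    sizes = np.array([n_t[s:e].sum() for s, e in zip(starts, ends)])   # S
    inter_event = starts[1:] - ends[:-1]           # t_i: quiet frames in between
    activity_rate = busy.mean()                    # r
    return durations, sizes, inter_event, activity_rate
```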
We find that both PDFs show the same power-law scaling behavior for intermediate values of the corresponding variables, limited by a peak for low values and a fast decaying (exponential) tail. Interestingly, distributions for schools of different number of individuals collapse onto the same functional form with the exception of the exponential tail, which can be interpreted in terms of finite size effects, as larger schools tend to create avalanches of larger duration and size. The average exponents, obtained from a linear regression in double logarithmic scale in the scaling region, take the values \(\alpha=2.37\pm 0.11\) and \(\tau=1.97\pm 0.07\), which are statistically compatible with the ones obtained in Ref. [22]. Different values of \(\omega_{th}\) lead to similar average exponents (e.g \(\alpha=2.9\pm 0.4\) and \(\tau=2.4\pm 0.2\) for \(\omega_{th}=0.15\), see Supplemental Material Fig. S1). The size and duration of individual avalanches are not independent, as we can check by plotting the average size \(\left\langle S\right\rangle_{T}\) of avalanches of duration \(T\), see Fig. 2c. From this figure we can observe a superlinear behavior \[\left\langle S\right\rangle_{T}\sim T^{m}, \tag{3}\] with \(m=1.41\pm 0.03\). The value of \(m\) can be related to the exponents of the duration and size distributions as [22; 39] \[m=\frac{\alpha-1}{\tau-1}. \tag{4}\] Our experimental value \(m\) is fully compatible with the theoretical prediction \(m=1.41\pm 0.15\) for \(\omega_{th}=0.1\) (experimental \(m=1.35\pm 0.08\) and theoretical prediction \(m=1.4\pm 0.4\) for \(\omega_{th}=0.15\), see Supplemental Material Fig. S1c). Finally, in Fig. 2d we show the PDF of the inter-event time \(t_{i}\) for \(\omega_{th}=0.1\) and for schools of different number of individuals \(N\). We find again an intermediate scale-free region, limited between the small time behavior and an exponentially decreasing tail. Here also plots for different number of individuals \(N\) collapse on the same functional form, with the exception of the fast decaying tail. A fit to the form \[P(t_{i})\sim t_{i}^{-\gamma} \tag{5}\] in the scaling region leads to an average exponent \(\gamma=1.62\pm 0.04\). The value of this exponent is independent of the value of the turning threshold (see Supplemental Material Fig. S1d). It is noteworthy that the behavior of the decaying tails with \(N\) is reversed with respect to the duration and size PDFs, with larger number of individuals leading to smaller inter-event times. This observation is consistent with the behavior of the activity rate \(r\), as schools with larger number of individuals have a higher probability to be in an avalanche. The dependency of the exponential tails in the duration and size distributions with the turning rate threshold \(\omega_{th}\) reported on Ref. [22] and with the school size \(N\) observed here, suggests the possibility of a relationship between \(\omega_{th}\) and \(N\) resulting in avalanches with equivalent distributions. In order to test for this hypothesis, we select the threshold \(\omega_{th}\) that, for each value of \(N\), leads to a fixed activity rate \(r=r_{0}\). From Fig. 1b we estimate, for \(r_{0}=0.4\), \(\omega_{th}=0.055,0.076,0.11,0.13\) for \(N=8,16,32,50\), respectively. We plot the resulting distributions in Figs 3a, 3b and 3c for the duration \(T\), size \(S\) and inter-event time \(t_{i}\), respectively. 
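The exponent estimates quoted above correspond to straight-line fits in double-logarithmic scale inside the scaling region. A possible implementation with logarithmic binning is sketched below; the binning and the window limits passed as `xmin`/`xmax` are illustrative placeholders, not the values used for the figures.

```python
# Sketch: power-law exponent from a log-binned PDF and a linear fit in log-log
# scale inside a hand-picked scaling window [xmin, xmax].
import numpy as np

def powerlaw_exponent(samples, xmin, xmax, bins_per_decade=8):
    samples = np.asarray(samples, dtype=float)
    lo, hi = samples.min(), samples.max()
    nbins = max(int(bins_per_decade * np.log10(hi / lo)), 1)
    edges = np.logspace(np.log10(lo), np.log10(hi), nbins + 1)
    counts, edges = np.histogram(samples, bins=edges)
    pdf = counts / (counts.sum() * np.diff(edges))        # normalized density
    centers = np.sqrt(edges[:-1] * edges[1:])             # geometric bin centers
    mask = (centers >= xmin) & (centers <= xmax) & (pdf > 0)
    slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(pdf[mask]), 1)
    return -slope                                          # alpha, tau or gamma

# e.g. alpha_hat = powerlaw_exponent(durations, xmin=5, xmax=100)
```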
In a system with no temporal correlations in the activity of individuals, a fixed activity rate results in duration and inter-event time distributions collapsing onto the same functional, exponential forms, see Appendix B. Surprisingly, even if this is not the case for empirical turning avalanches in schooling fish, both the duration and inter-event time distributions achieve a data collapse at fixed \(r\). On the other hand, the size distributions do not collapse perfectly, possibly because of correlations in the turning rates of individuals at a given frame, which result in more active individuals in an avalanche frame for schools of larger number of individuals. Figure 2: (a) PDF of the duration \(T\), (b) PDF of the size \(S\), (c) average size \(\left<S\right>_{T}\) as a function of the duration \(T\) and (d) PDF of the inter-event time \(t_{i}\) for \(\omega_{th}=0.1\). The different curves correspond to schools of different number of individuals \(N\). The exponents from the green dashed power laws are (a) \(\alpha=2.37\pm 0.11\), (b) \(\tau=1.97\pm 0.07\), (c) \(m=1.41\pm 0.03\) and (d) \(\gamma=1.62\pm 0.04\). Interestingly, also in the uncorrelated case the size distributions are not expected to collapse, see Appendix B. On a similar note, for avalanches in different contexts, it has been found that the inter-event time distributions can be collapsed into a scaling form [40, 41], \[P(t_{i})=\frac{1}{\langle t_{i}\rangle}\Phi\left(\frac{t_{i}}{\langle t_{i}\rangle}\right), \tag{6}\] where \(\Phi(x)\) is a universal scaling function, and the only characteristic scale is the average inter-event time \(\langle t_{i}\rangle\). In Fig. 3d we show this sort of collapse for a turning threshold \(\omega_{th}=0.1\); as we can see, it also applies to turning avalanches in schooling fish. This reveals self-similar behaviour typical of critical systems, with the inter-event time distributions only differing in their average value for schools of different number of individuals. In the uncorrelated case, this collapse is also recovered, but now only in the limit of a large average inter-event time, see Appendix B. ## V Avalanche triggering In this Section we explore whether avalanches are triggered at preferential points in space or time, as well as by particular individuals in the group. Here and in the following sections we show results for avalanches in a school of \(N=50\) individuals, which has the longest recording time, and a turning threshold \(\omega_{th}=0.1\). A plausible hypothesis is that avalanches are more frequently triggered near the tank walls due to boundary effects. These could arise when fish are approaching a wall and need to perform a large turn in order to avoid colliding with it. To check this hypothesis we consider the position of the center of mass (CM) \(\vec{x}_{CM}\) of the school, defined as \[\vec{x}_{CM}\equiv\frac{1}{N}\sum_{i}\vec{x}_{i}, \tag{7}\] Figure 3: Data collapse for the PDFs of (a) the duration \(T\), (b) the size \(S\) and (c) the inter-event time \(t_{i}\) for schools of different number of individuals \(N\) considering avalanches with a fixed activity rate \(r=0.4\) (corresponding to \(\omega_{th}=0.055,0.076,0.11,0.13\) for \(N=8,16,32,50\) respectively). (d) Data collapse of the inter-event time given by Eq. (6) for \(\omega_{th}=0.1\). Figure 4: Avalanche triggering in space, time and within the group. 
(a) Density for the position of the center of mass (CM) \(\vec{x}_{CM}\) at the start \(t_{0}\) of an avalanche (the triggering location) normalized against all trajectories of the center of mass, (b) average size \(S\) for triggering locations of avalanches, (c) in blue the temporal evolution for the center of mass speed \(v_{CM}\) and in dots avalanches triggered at the given speed \(v_{CM}\) and time \(t_{0}\) and coloured by their size \(S\), (d)-(e) density for the position of initiators normalized against the positions of all individuals at the start \(t_{0}\) of an avalanche for (d) the laboratory reference frame and (e) the center of mass reference frame and only for centered individuals. In (a), (d) and (e) the grey colour in the colormap corresponds to the expected density in the absence of correlations, given by the total counts of the quantity considered divided by the total counts of the normalization. In (c) we only plot avalanches that propagated to individuals other than the ones active in the first frame of the avalanche. In (e) the \(y\)-coordinate is oriented along the direction of motion of the group given by the center of mass velocity. where \(\vec{x}_{i}\) are the positions of the fish at a given instant of time. We define the _triggering location_ of an avalanche as the position of the CM at the first frame \(t_{0}\) of the avalanche, and study the distribution of these triggering locations on the surface of the tank. Because fish do not swim uniformly all around the tank, in order to extract a statistically significant density of triggering locations we normalize their counts against the counts of all observed positions of CM along the time evolution of the school. We show this in Fig. 4a, where the axis orientations correspond to the tank walls. The grey region in the colormap, separating the low density (red) and high density (blue) values, corresponds to the expected density in the absence of correlations, which we calculate from the total counts of triggering locations divided by the total counts of positions of CM. As we can see in this plot, the distribution of avalanches in the tank is quite homogeneous, although there is a slight tendency for avalanches to occur away from the walls. However, if we display the average size \(S\) of avalanches generated at the different triggering locations, we obtain a different picture, Fig. 4b, in which avalanches of larger sizes tend to occur more frequently near the tank corners. This observation suggests that interactions with the tank walls indeed promote the emergence of large turning avalanches, resulting in important orientation rearrangements of the school. Since larger avalanches seem to be originating from interactions with the walls, we explore whether these interactions are responsible for the features of the fast decaying tails observed in the duration and size distributions. To do so, we measure the statistical distributions of avalanches with triggering locations away from the walls, which we restrict to occur inside the square positioned at the center of the tank with side \(L/3\), where \(L\) is the side of the tank, see Supplemental Material Fig. S2. Although we have limited statistics, particularly for schools of small number of individuals, we obtain distributions that have longer power law regions with the same characteristic exponents as in the unrestricted case. 
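The normalized density of Fig. 4(a) amounts to the ratio of two occupancy histograms of the center-of-mass position. A sketch of this normalization follows; variable names, the number of bins and the data layout are assumptions for illustration.

```python
# Sketch: density of triggering locations normalized by the overall occupancy
# of the center of mass, relative to the uncorrelated expectation.
import numpy as np

def triggering_density(cm_xy, start_frames, L=2730.0, nbins=20):
    """cm_xy: (n_frames, 2) center-of-mass trajectory; start_frames: avalanche t0's."""
    edges = np.linspace(0.0, L, nbins + 1)
    all_counts, _, _ = np.histogram2d(cm_xy[:, 0], cm_xy[:, 1], bins=[edges, edges])
    trig_counts, _, _ = np.histogram2d(cm_xy[start_frames, 0], cm_xy[start_frames, 1],
                                       bins=[edges, edges])
    # Expected counts per bin in the absence of correlations (the grey level).
    expected = all_counts * (len(start_frames) / len(cm_xy))
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(expected > 0, trig_counts / expected, np.nan)
    return ratio          # >1: triggering more frequent than expected by chance
```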
To understand temporal triggerings of avalanches, we study how the avalanche starting time \(t_{0}\) relates to the group dynamics represented by the _center of mass speed_\(v_{CM}\), which is defined as \[v_{CM}\equiv\left|\frac{1}{N}\sum_{i}\vec{v}_{i}\right|. \tag{8}\] The center of mass speed is characterised for having oscillations due to a burst-and-coast mechanism of the individuals [42; 43; 44], with increases associated to an active phase powered by the fish muscles and decreases coming from a passive gliding phase. In Fig. 4c we plot, for a time window of 5 min, the temporal evolution of the center of mass speed as the blue line. We mark with dots avalanches triggered at the corresponding time \(t_{0}\) and speed \(v_{CM}\), color-code by their size \(S\). We only consider avalanches that propagated to individuals other than the ones active in the first frame of the avalanche. As we can observe, while small size avalanches tend to be randomly distributed over different values of \(v_{CM}\), large avalanches are more often located near the minima of the speed, even when the minimum changes across time. We notice that this behavior does not originate from small speeds being related to large turning rates, because we find the turning rate is inversely related to the speed only for \(v_{CM}<4\) and appears to be independent for larger speeds (see Supplemental Material Fig. S3). Instead, this suggests that large avalanches may emerge from turnings related to decision-making processes occurring at the onset of the active phase of the burst-and-coast mechanism [45; 46; 43]. Apart from the spatiotemporal triggering of avalanches, we can study how avalanches are triggered at the individual level within the school considering avalanche _initiators_, defined as the individuals that are active on the first frame of the avalanche. Previously it was observed that some individuals have a probability larger than random fluctuations to be initiators [22]. Here instead we focus on the location of individual initiators within the experimental tank and inside the school. Again, we have to keep in mind that individuals are not located uniformly around the tank at the start of an avalanche. Therefore, in order to extract a statistically significant density of initiators locations within the group, we normalize their counts against the counts of the positions of all individuals at the onset time \(t_{0}\) of the avalanche. We show the resulting plot in Fig. 4d. We find that initiators tend to accumulate near the tank walls, and particularly at the corners. This is compatible with the idea that large turning avalanches are promoted by interactions with the tank walls. In order to explore the natural relative position of avalanche initiators within the school, we select individuals that do not have relevant interactions with the tank walls. We define _centered individuals_ as those that are positioned in the central square of the tank with side \(L/3\), where \(L\) is the side of the tank. If we plot the density of the positions of centered initiators within the tank normalised by the positions of all centered individuals at the onset time \(t_{0}\) of an avalanche (see Supplemental Material Fig. S4), indeed we see a uniform pattern that confirms the idea that centered individuals do not experience significant interactions with the tank walls. We study the relative position of centered initiators within the school in Fig. 
4e, where we plot the density of the positions of centered initiators normalized against all centered individuals at the triggering time \(t_{0}\) of the avalanche in the center of mass reference frame. In this plot the \(y\)-coordinate is directed along the direction of motion of the center of mass. As we can see, initiators of avalanches away from the tank walls accumulate on the boundary of the school and without any preferred direction along the movement of the group. ## VI Dynamical evolution of avalanches In this Section we examine characteristics of the school evolution during the development of an avalanche. In Ref. [22] it was suggested that turning avalanches were related to changes of the global orientation of the school, accompanied by an increase in the group speed and a decrease and a delayed increase of the global order. Here we further test these claims in a more systematic way. In order to compare avalanches with different sizes \(S\), we first normalize the temporal evolution of the avalanche by its duration \(T\), and then average the dynamics over groups of avalanches with similar sizes. First, we investigate the speed of the group given by the center of mass speed, \(v_{CM}\), defined in Eq. (8). We show how it evolves during a turning avalanche, averaged for different sizes \(S\), in Fig. 5a. For comparison, we plot the average value over the whole experiment as the green dashed horizontal line. We observe that avalanches tend to start below the average \(v_{CM}\), and that avalanches of small size do not alter the school speed noticeably. On the other hand, larger size avalanches tend to originate at lower values of \(v_{CM}\) and increase the school speed during their evolution. As a second characteristic of the school we consider the global order measured in terms of the _polarization_ \(\phi\) [12], \[\phi\equiv\left|\frac{1}{N}\sum_{i}\frac{\vec{v}_{i}}{v_{i}}\right|, \tag{9}\] which tends to \(1\) if the school is ordered and all individuals move in the same direction, and takes a value close to zero if the school is disordered and fish move in random and independent directions [12]. We show its evolution within an avalanche in Fig. 5b. Small size avalanches tend to start in highly polarized configurations and do not change significantly the level of order. Contrarily, large avalanches tend to start with less ordered configurations than the average and further reduce the order as the avalanche spreads. However, at later stages this trend is reversed and the school recovers a highly ordered state. To gain further information about the possible role of the walls, we study the dynamical evolution of avalanches with respect to the distance to the tank walls. We define the _directed wall distance_ \(d_{w}^{\vec{v}}\) as the distance from the center of mass of the school to the tank walls along the direction of the velocity of the center of mass. For a square tank, this distance is defined as \[d_{w}^{\vec{v}}\equiv\min\left[\sqrt{1+\left(\frac{v_{y}}{v_{x}}\right)^{2}}\left(\Theta(v_{x})(L-x)+\Theta(-v_{x})x\right),\ \sqrt{1+\left(\frac{v_{x}}{v_{y}}\right)^{2}}\left(\Theta(v_{y})(L-y)+\Theta(-v_{y})y\right)\right], \tag{10}\] where the positions \(\vec{x}\) and velocities \(\vec{v}\) refer to the center of mass, \(\Theta(x)\) is the Heaviside step function, which discriminates the forward and backward motion, \(L\) is the side of the tank, and the two terms in the min function refer to the walls on the \(x\) and \(y\) coordinates, respectively. 
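The group-level observables of Eqs. (8)-(10) can be evaluated frame by frame along the following lines; the sketch below is not the authors' code, and the assumed array shapes are illustrative.

```python
# Sketch: group observables for one frame. `pos` and `vel` are (N, 2) arrays
# (pixels and pixels/frame), with coordinates measured from a corner of the
# square tank of side L, as in the text.
import numpy as np

def group_observables(pos, vel, L=2730.0):
    x_cm, v_cm = pos.mean(axis=0), vel.mean(axis=0)
    speed_cm = np.linalg.norm(v_cm)                           # Eq. (8)
    headings = vel / np.linalg.norm(vel, axis=1, keepdims=True)
    polarization = np.linalg.norm(headings.mean(axis=0))      # Eq. (9)
    # Directed wall distance, Eq. (10): length of the ray from the center of
    # mass along v_cm up to the first wall it crosses.
    (x, y), (vx, vy) = x_cm, v_cm
    times = []
    if vx != 0.0:
        times.append((L - x) / vx if vx > 0 else -x / vx)
    if vy != 0.0:
        times.append((L - y) / vy if vy > 0 else -y / vy)
    d_wall = min(times) * speed_cm if times else np.inf
    return speed_cm, polarization, d_wall
```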
We plot the evolution of this quantity during turning avalanches in Fig. 5c. As we can observe, small size avalanches do not alter the directed wall distance. On the other hand, large avalanches tend to start closer to the wall and end at higher directed distances. This indicates that large turning avalanches typically produce a large change of the group orientation from facing a nearby wall to facing a farther away wall. We have also studied the evolution of the distance to the nearest wall, which we refer as the _minimum wall distance_\(d_{w}\), \[d_{w}\equiv\min(x,L-x,y,L-y). \tag{11}\] We observe (see Supplemental Material Fig. S5) that this quantity decreases and has a minimum for large avalanche sizes, indicating that during the avalanche evolution the school tends to approach the closest wall, to later move away from it. Finally, in Fig. 5d we consider the _avalanche shape_\(n_{t}\), defined by the number of active individuals at each frame of a turning avalanche [47]. As we can see, the avalanche shape shows a convex form, with small values at the beginning and the end, and a maximum in between, with a larger value for larger sizes. Many scale-free avalanche systems exhibit a collapse behavior in the avalanche shape given by the scaling relation \[n_{t}=T^{m-1}\Phi(t/T), \tag{12}\] where \(m\) is the exponent relating the average avalanche size with the duration \(T\), Eq. 3[48; 47; 6; 39]. In the case of turning avalanches, this scaling behavior is recovered in avalanches within the power-law scaling regime of the size distribution, as shown in Fig. 6. In this plot we use the value \(m=1.41\) numerically obtained. ## VII Aftershock correlations Another important aspect in avalanche behavior is the presence of _correlations_, namely, whether the occurrence of an avalanche induces the occurrence of other avalanches, such that they appear clustered in space and/or time [41]. The idea of correlations and clustering in avalanches is closely linked to the concept of main events and aftershocks in seismology [49]. In this context, _aftershocks_ are typically smaller events that occur after a main event in nearby locations and stand-out from the background noise. A relevant result here is the observation of the Omori law, which states that the probability rate to observe an aftershock at a given time \(t\) after a main event, follows the distribution \[P(t)=\frac{K}{(t+c)^{p}}, \tag{13}\] where \(K\), \(c\) and \(p\) are constants, with \(p\sim 1\)[50]. In seismology, earthquakes are quantified by their magnitude, which is a measure related to the logarithm of the energy released. Analogously, for turning avalanches we can introduce the _magnitude_\(m\) as \[m\equiv\ln S, \tag{14}\] where \(S\) is the size of the avalanche. Considering the observed size distribution from Eq. (2), magnitudes for turning avalanches follow the distribution \[P(m)\sim e^{-bm}, \tag{15}\] with \(b=\tau-1\), which is analogous to the well-known Gutenberg-Richter law for earthquakes [51]. In order to classify events (either earthquakes or avalanches) into main events and aftershocks, it is commonly employed the method proposed by Baiesi and Paczuski [52; 53]. This method is based in the definition of the _proximity_\(\eta_{ij}\) in space-time-magnitude domain from an event \(j\) to a previous (in time) event \(i\)[52; 54; 55]. 
Assuming that events are ordered in time, \(t_{1}<t_{2}<t_{3}\cdots\), the proximity is defined as \[\eta_{ij}\equiv\begin{cases}t_{ij}\,r_{ij}^{d}\,P(m_{i}),&\text{if }i<j\\ \infty,&\text{otherwise}\end{cases}, \tag{16}\] where \(t_{ij}\) is the time interval between events \(i\) and \(j\), \(r_{ij}\) is the spatial distance between the events' locations, \(d\) is the fractal dimension of the set of events positions and \(P(m_{i})\) is the Gutenberg-Richter law for event \(i\), which in our case is given by Eq. (15). Figure 5: Dynamics within turning avalanches of (a) the center of mass speed \(v_{CM}\), (b) the polarization \(\phi\), (c) the directed wall distance \(d_{w}^{\vec{v}}\) and (d) the avalanche shape \(n_{t}\) depending on the normalized time \(t/T\) and averaged for similar sizes \(S\). The green dashed horizontal line is the average of the given variable over the whole experiment. Figure 6: Rescaled avalanche shape \(T^{1-m}n_{t}\) as a function of the normalized time \(t/T\). Avalanche shapes are averaged over similar sizes \(S\) within the power-law scaling region of the size distribution. In the context of turning avalanches, we have to consider two facts: (i) Avalanches have a finite duration that is comparable to the inter-event time between consecutive avalanches. We therefore consider \(t_{ij}\), \(i<j\), as the number of frames between the end of avalanche \(i\) and the start of avalanche \(j\); (ii) During an avalanche, the school moves. We thus consider the distance \(r_{ij}\), \(i<j\), as the distance between the center of mass of the school at the end of avalanche \(i\) and the center of mass of the school at the beginning of avalanche \(j\). Additionally, the distribution of the positions of the center of mass at the start of avalanches does not seem to show a fractal structure, so we use here \(d=2\). The proximity \(\eta_{ij}\) is a measure of the expected number of events of magnitude \(m_{i}\) to occur, looking backward in time from event \(j\) within a time interval \(t_{ij}\) and distance \(r_{ij}\), in the absence of correlations, in such a way that the time and position of previous avalanches behave as independent Poisson processes [52]. Thus, the lower the value of the proximity, the more unlikely the events \(i\) and \(j\) should have occurred by chance and the higher the probability that they are correlated. For this reason, one can define the _correlation_ \(c_{ij}\) of event \(j\) to a previous event \(i\) as [52] \[c_{ij}\equiv\frac{1}{\eta_{ij}}. \tag{17}\] In Fig. 7(a) we show the PDF of the correlations \(c_{ij}\) for all pairs of turning avalanches with magnitudes \(m\geq 1.6\) (i.e. of size \(S\geq 5\)). As is observed in earthquakes, this distribution has a power-law tail spanning more than ten orders of magnitude [52, 53]. Using the correlations \(c_{ij}\) or the proximity \(\eta_{ij}\), every event \(j\) can be associated with a _nearest neighbour_ or _parent_ \(p_{j}\), defined as the event in the past (\(p_{j}<j\)) that maximizes the correlation or minimizes the proximity with \(j\), namely \(c_{p_{j}j}\geq c_{ij}\), \(\forall i<j\) (\(\eta_{p_{j}j}\leq\eta_{ij}\), \(\forall i<j\)). This proximity is denoted the _nearest-neighbour proximity_ \(\eta_{j}\), its time interval \(t_{j}\) and the spatial distance \(r_{j}\). The set of events with the same parent are considered the aftershocks of that parent. 
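A direct implementation of this parent assignment is compact. The sketch below is not the authors' code; it uses the unnormalized Gutenberg-Richter weight \(e^{-bm_{i}}\) with \(b=\tau-1\), since the normalization constant of \(P(m)\) only rescales all proximities by a common factor and does not affect which parent minimizes \(\eta_{ij}\).

```python
# Sketch: nearest-neighbour (parent) assignment from the proximity of Eq. (16).
import numpy as np

def assign_parents(t_start, t_end, cm_start, cm_end, sizes, b, d=2):
    """t_start/t_end: first/last frame of each avalanche; cm_start/cm_end: (n, 2)
    center-of-mass positions at avalanche start/end; sizes: avalanche sizes S."""
    m = np.log(sizes)                                  # magnitudes, Eq. (14)
    n = len(sizes)
    parent = np.full(n, -1)
    eta = np.full(n, np.inf)
    for j in range(1, n):
        t_ij = t_start[j] - t_end[:j]                  # frames between end of i and start of j
        r_ij = np.linalg.norm(cm_start[j] - cm_end[:j], axis=1)
        prox = t_ij * r_ij**d * np.exp(-b * m[:j])     # Eq. (16), up to a constant factor
        i = int(np.argmin(prox))
        parent[j], eta[j] = i, prox[i]
    return parent, eta                                 # parent[j] = -1 for the first event
```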
In Fig. 7(b) we examine the distribution of the triggering locations of parents depending on their number of aftershocks \(a\). We find a possible influence of the tank walls, as parents with larger number of aftershocks tend to be located nearer the corners. In addition, we consider the measure of clustering proposed within this framework in Ref. [54]. This formalism is based on the _rescaled time_ \(T_{j}\) and _rescaled space_ \(R_{j}\) [55, 54], defined as \[T_{j} \equiv t_{j}\sqrt{P(m_{p_{j}})}, \tag{18}\] \[R_{j} \equiv(r_{j})^{d}\sqrt{P(m_{p_{j}})}, \tag{19}\] where \(p_{j}\) is the parent of event \(j\) and such that \[\eta_{j}=T_{j}R_{j}. \tag{20}\] In real earthquakes, it is observed that the joint distribution of \(T_{j}\) and \(R_{j}\) is bimodal. One mode corresponds to background events, and is compatible with a random (Poisson) distribution of times and positions of events. The other mode, on the other hand, corresponds to clustered events occurring closer in space and time [55]. In Fig. 7(c) we show the joint distribution of \(T_{j}\) and \(R_{j}\) for turning avalanches in terms of a color density plot. In the same figure, we display in terms of a contour plot the joint distribution obtained for randomized data, in which avalanche positions, inter-event times and magnitudes have been shuffled. We find that the experimental data shows clearly two modes in the distribution. In one mode, for large values of \(T_{j}\), increasing the rescaled time \(T_{j}\) results in a decrease of the rescaled space \(R_{j}\). This is almost identical to the distribution obtained for the shuffled data, indicating that it corresponds essentially to background, uncorrelated noise. The other mode occurs for smaller values of \(T_{j}\) and displays the opposite behaviour, increasing the rescaled time \(T_{j}\) results in a higher rescaled space \(R_{j}\). This behaviour is different from the background noise and corresponds to clustered (correlated) avalanches. We can understand the time scale separation between the modes taking into account that turning avalanches take place inside a school that is moving around the tank. The school typically performs a recurrent movement on the tank, visiting a given point in the tank with some average period. We can quantitatively analyse this behaviour looking at the mean square displacement of the position of the center of mass, which measures the average displaced distance of the group in time starting from any point in the trajectory (see Supplemental Material Fig. S6). The first maximum occurs around \(t_{c}=250\) frames and corresponds to the average time the school needs to perform a half-turn around the tank and becomes maximally separated from its initial position. Aftershocks with a lower time interval show an increase in their spatial distance, as the school is moving away from the parent location. After this time and up to very large time intervals, the school may return towards the parent position and we can find aftershocks occurring at lower spatial distances. However, these tend to occur rather randomly and can not be distinguished from random events. This highlights a major difference with earthquakes, where significant correlations can occur in the same location at widely separate instants of time. Finally, we examine the Omori law by displaying the distribution of the time interval \(t_{j}\) between parents and aftershocks in Fig. 7(d). The distribution is computed considering the sequences of aftershocks for each parent, shifting the sequences to set each parent at a common time zero, and stacking all sequences in a single common sequence [56]. 
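A sketch of this stacking and of the least-squares fit to the Omori law of Eq. (13) discussed next is given below; the bin width, the initial guess of the fit and the use of the cutoff \(t_{c}\) as a function argument are illustrative choices, not the authors' settings.

```python
# Sketch: stacked aftershock rate and least-squares Omori-law fit. `t_j` holds
# the parent-to-aftershock time intervals of all parents, already shifted to a
# common time origin; only intervals below t_c are kept.
import numpy as np
from scipy.optimize import curve_fit

def omori_fit(t_j, t_c=250, bin_width=5):
    t_j = np.asarray(t_j, dtype=float)
    t_j = t_j[t_j < t_c]
    edges = np.arange(0, t_c + bin_width, bin_width)
    rate, edges = np.histogram(t_j, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    def omori(t, K, c, p):
        return K / (t + c) ** p
    (K, c, p), _ = curve_fit(omori, centers, rate, p0=(1.0, 1.0, 1.5), maxfev=10000)
    return K, c, p
```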
For the above reasons, we only consider time intervals below \(t_{c}=250\) that correspond to significant correlated aftershocks. A least-squares fitting of the empirical data to the Omori law given by Eq. (13) (green dashed line) yields the parameters \(c=4.3\pm 0.4\) and \(p=2.2\pm 0.1\). This indicates a value \(p>1\), implying a faster decay rate of aftershocks than in earthquakes. ## VIII Discussion In this paper we have presented an empirical analysis of spontaneous behavioral cascades in schooling fish considering turning avalanches, where large turns in the direction of motion of individuals are propagated across the group. We have analyzed different avalanche metrics, employing tools from avalanche behavior in condensed matter physics and seismology. At the level of the probability distributions of simple observables, such as avalanche duration, size and inter-event times, we have found clear evidence of scale-free behavior, with distributions showing long tails compatible with a power-law form, as well as scaling relations and a data collapse for a fixed activity rate of the school, relating schools with different number of individuals and the turning threshold defining the avalanche. In addition, the inter-event times display a simple scaling behaviour when normalized by their mean, which has been previously observed in other avalanche systems. Another common observable in avalanche behaviour is given by the avalanche shape, which also exhibits a data collapse given by a scaling relationship with the duration. The presence of such scale-free signatures can be interpreted in terms of the fish school operating in the vicinity of a critical point. A possible advantage for the school to be near a critical point is efficient collective decision-making and transfer of information across the group. In this regard, we can understand turning avalanches as a process where fish decide collectively the direction to follow. For this reason, it is not surprising that we find large avalanches tend to occur at the onset of the active phase of the burst-and-coast mechanism of the fish locomotion, where decision-making processes to change the direction are believed to occur [43; 45; 46]. Figure 7: Correlation measures of aftershocks. (a) PDF of the correlation \(c_{ij}\) between all avalanche pairs, (b) number of aftershocks \(a\) per parent depending on the triggering location of the parent, (c) counts for the joint distribution of the rescaled space \(R_{j}\) and time \(T_{j}\) (the contour plot corresponds to randomized avalanches, in which avalanche positions, inter-event times and magnitudes have been shuffled) and (d) PDF for the time interval \(t_{j}\) between parents and aftershocks for \(t_{j}<250\). We only considered avalanches with magnitudes \(m\geq 1.6\). In (d), the red dashed line corresponds to a fit to the Omori law Eq. (13) with \(c=4.3\pm 0.4\) and \(p=2.2\pm 0.1\). In the process of deciding a new collective direction, coordination and group order decrease. However, once a new direction is chosen, speed increases and coordination emerges again. A similar result was found in the phenomenon of collective U-turns, which consists of directional switches for fish swimming in a ring-shaped tank [18; 57]. We argue that collective U-turns can be understood as a particular example of turning avalanches. 
Boundary effects represented by interactions with the tank walls or by a distinct behaviour of individuals at the border of the group are commonly disregarded in the context of animal collective motion. Here we report significant effects of the tank walls in avalanche behavior. Thus, while the walls do not promote a larger number of avalanches, avalanches in their vicinity tend to have larger sizes and result in correlated avalanche clusters, giving rise to a larger number of aftershocks. This can occur as an obstacle in the movement of the school, such as a tank wall, may disrupt the movement of the group and precipitate the need to decide the subsequent direction [58], which will be necessarily away from the tank walls. Interestingly, however, these large avalanches induced by the tank walls affect mostly the exponential tail of the duration, size and inter-event time distributions, showing that the intermediate scale-free behavior of these distributions is not promoted by the walls, but rather it is an intrinsic property of the turning avalanche mechanisms. In addition, we find boundary effects from individuals at the border of the group, as these are the preferred positions for the initiators of large turning avalanches. This is compatible with previous results that found these positions were related to higher social influence [17; 59]. Our results also connect the separate disciplines of seismology and animal collective motion, analyzing spatial and temporal correlations in turning avalanches employing the concept of aftershocks. Earthquakes, which are a manifestation of the underlying properties of the Earth's crust, can manifest significant correlations at widely separated time intervals in a given location. However, in turning avalanches of schooling fish we only found significant clustered and correlated events below a time interval corresponding to a half-turn of the school around the tank. This may point to a fundamental property related to lack of collective memory for larger time scales [13]. In addition, we found the probability rate to observe correlated aftershocks after a main event in turning avalanches followed an Omori law with a decay rate exponent \(p\sim 2\), which is significantly faster than in earthquakes (\(p\sim 1\)). We believe our work represents a relevant contribution to the long-standing question of criticality, in particular to animal collective motion and in general to biological systems. Analysis of large data sets of experimental data reporting evidences of criticality have been scarce and are necessary to further elucidate this topic. ###### Acknowledgements. We acknowledge financial support from the Spanish MCIN/AEI/10.13039/501100011033, under Projects No. PID2019-106290GB-C21, No. PID2019-106290GB-C22, No. PID2022-137505NB-C21, and No. PID2022-137505NB-C22. A. P. acknowledges a fellowship from the Secretaria d'Universitats i Recerca of the Departament d'Empresa i Coneixement, Generalitat de Catalunya, Catalonia, Spain. We thank P. Romanczuk, H. J. Herrmann, and E. Vives for helpful comments. A.P., M.C.M. and R.P.S. designed the study. E.G. and D.M. acquired the empirical data. D.M. and A.P. processed the empirical data. A.P. and R.P.S. analysed the empirical data. A.P., M.C.M. and R.P.S. analysed the results. A.P. and R.P.S. wrote the paper. All authors commented on the manuscript. 
## Appendix A Turning rate formula The turning rate \(\omega\) is defined as the rate of change of the orientation \(\theta\) of the velocity of an individual with time, i.e. \[\omega\equiv\frac{d\theta}{dt}. \tag{A1}\] Consider the velocity vector in two instants of time, \(t\) and \(t+\Delta t\). The change of orientation \(\Delta\theta\) is given by \[\sin(\Delta\theta)=\frac{|\vec{v}(t+\Delta t)\times\vec{v}(t)|}{v(t+\Delta t)v(t)}. \tag{A2}\] In the limit \(\Delta t\to 0\), \(\Delta\theta\to 0\), we have \[\sin(\Delta\theta)\simeq\frac{1}{v(t+\Delta t)v(t)}\left|\left[\vec{a}(t)\Delta t+\vec{v}(t)\right]\times\vec{v}(t)\right|=\frac{|\vec{a}(t)\times\vec{v}(t)|}{v(t+\Delta t)v(t)}\,\Delta t\simeq\Delta\theta, \tag{A3}\] where \(\vec{a}(t)\) is the fish acceleration. Then we can write \[\omega=\lim_{\Delta t\to 0}\frac{\Delta\theta}{\Delta t}=\lim_{\Delta t\to 0}\frac{|\vec{a}(t)\times\vec{v}(t)|}{v(t+\Delta t)v(t)}=\frac{|\vec{a}(t)\times\vec{v}(t)|}{v(t)^{2}}, \tag{A4}\] recovering the expression for the turning rate in Eq. (1). ## Appendix B Duration and inter-event time distributions in the absence of turning rate correlations Following Ref. [22], we can consider a null model of avalanche behavior in schooling fish in which individuals perform random uncorrelated turning rates, extracted from the empirical distribution \(P(\omega)\). In this case, the probability \(q\) that, at a given frame, a fish performs a turning rate larger than a threshold \(\omega_{th}\) (i.e. a fish is active) is given by \[q=\int_{\omega_{th}}^{\infty}P(\omega)\;d\omega, \tag{B1}\] while the probability that, at a given frame, at least one fish in a school of \(N\) individuals performs a turning rate larger than \(\omega_{th}\) (i.e. there is at least one active fish) is \[Q=1-(1-q)^{N}. \tag{B2}\] In this null model, an avalanche of duration \(T\) implies \(T\) consecutive frames with at least one active fish, followed by a frame with no active fish. Thus the duration distribution has the normalized form \[P_{0}(T)=\frac{1-Q}{Q}Q^{T},\;\;\;T\in[1,\infty). \tag{B3}\] An inter-event time \(t_{i}\) consists, analogously, of \(t_{i}\) consecutive frames with no active fish, followed by a frame with at least one active fish. Therefore the inter-event time distribution has the form \[P_{0}(t_{i})=\frac{Q}{1-Q}(1-Q)^{t_{i}},\;\;\;t_{i}\in[1,\infty). \tag{B4}\] Finally, the size distribution can be estimated as follows [22]: At each frame during an avalanche, the average number of active fish is \(Nq/Q\), where the normalization factor \(Q\) accounts for the fact that at least one fish was active in the frame considered. Thus, an avalanche of duration \(T\) has an average size \(S=TNq/Q\). Transforming the duration distribution Eq. (B3), we then have [22] \[P_{0}(S)=\frac{1-Q}{Nq}Q^{\frac{QS}{Nq}}. \tag{B5}\] In all cases, we recover distributions with an exponentially decaying form. Now, the activity rate \(r\), defined as the probability that a randomly chosen frame belongs to an avalanche, is equal to the probability that in a randomly chosen frame there is at least one active fish. This trivially implies \[r=Q. \tag{B6}\] That is, the duration and inter-event time distributions depend only on the activity rate, and can be made to collapse for different values of \(N\) and \(\omega_{th}\) leading to the same value of \(r\). On the other hand, the size distribution depends additionally on \(N\) and \(q\) and thus cannot be made to collapse by fixing \(r\). 
For the inter-event time distribution Eq. (B4), we can write, in the limit of small \(Q\), \[P_{0}(t_{i})\simeq Q(1-Q)^{t_{i}}=Qe^{t_{i}\ln(1-Q)}\simeq Qe^{-Qt_{i}}. \tag{B7}\] From Eq. (B4), \(\langle t_{i}\rangle=\sum_{t_{i}=1}^{\infty}t_{i}P_{0}(t_{i})=1/Q\). Thus, from Eq. (B7), we have \[P_{0}(t_{i})\simeq\frac{1}{\langle t_{i}\rangle}e^{-t_{i}/\langle t_{i}\rangle}, \tag{B8}\] recovering the scaling relation Eq. (6) with \(\Phi(x)=e^{-x}\), in the limit of large \(\langle t_{i}\rangle\). Interestingly, the activity rate \(r\) in this null model follows the empirical behavior shown in Fig. 1(b), as \(Q\) is a growing function of \(N\) and a decreasing function of \(\omega_{th}\).
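The closed-form results of this appendix are easy to check numerically. In the sketch below (parameter values are arbitrary), fish activate independently with probability \(q\) per frame, and the measured mean duration, mean inter-event time and activity rate agree with \(1/(1-Q)\), \(1/Q\) and \(Q\), respectively.

```python
# Sketch: numerical check of the uncorrelated null model of Appendix B.
import numpy as np

rng = np.random.default_rng(0)
N, q, n_frames = 50, 0.02, 200_000
Q = 1.0 - (1.0 - q) ** N                      # Eq. (B2)

n_t = rng.binomial(N, q, size=n_frames)       # active fish per frame, uncorrelated
busy = (n_t > 0).astype(np.int8)
edges = np.diff(np.concatenate(([0], busy, [0])))
starts, ends = np.flatnonzero(edges == 1), np.flatnonzero(edges == -1)
durations = ends - starts
inter_event = starts[1:] - ends[:-1]

print(durations.mean(), 1.0 / (1.0 - Q))      # <T> = 1/(1-Q), from Eq. (B3)
print(inter_event.mean(), 1.0 / Q)            # <t_i> = 1/Q, from Eq. (B4)
print(busy.mean(), Q)                         # activity rate r = Q, Eq. (B6)
```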
2309.04051
Observation of Hybrid-Order Topological Pump in a Kekule-Textured Graphene Lattice
Thouless charge pumping protocol provides an effective route for realizing topological particle transport. To date, the first-order and higher-order topological pumps, exhibiting transitions of edge-bulk-edge and corner-bulk-corner states, respectively, are observed in a variety of experimental platforms. Here, we propose a concept of hybrid-order topological pump, which involves a transition of bulk, edge, and corner states simultaneously. More specifically, we consider a Kekul\'e-textured graphene lattice that features a tunable phase parameter. The finite sample of zigzag boundaries, where the corner configuration is abnormal and inaccessible by repeating unit cells, hosts topological responses at both the edges and corners. The former is protected by a nonzero winding number, while the latter can be explained by a nontrivial vector Chern number. Using our skillful acoustic experiments, we verify those nontrivial boundary landmarks and visualize the consequent hybrid-order topological pump process directly. This work deepens our understanding to higher-order topological phases and broadens the scope of topological pumps.
Tianzhi Xia, Yuzeng Li, Qicheng Zhang, Xiying Fan, Meng Xiao, Chunyin Qiu
2023-09-08T00:42:20Z
http://arxiv.org/abs/2309.04051v1
# Observation of Hybrid-Order Topological Pump ###### Abstract Thouless charge pumping protocol provides an effective route for realizing topological particle transport. To date, the first-order and higher-order topological pumps, exhibiting transitions of edge-bulk-edge and corner-bulk-corner states, respectively, are observed in a variety of experimental platforms. Here, we propose a concept of hybrid-order topological pump, which involves a transition of bulk, edge, and corner states simultaneously. More specifically, we consider a Kekule-textured graphene lattice that features a tunable phase parameter. The finite sample of zigzag boundaries, where the corner configuration is abnormal and inaccessible by repeating unit cells, hosts topological responses at both the edges and corners. The former is protected by a nonzero winding number, while the latter can be explained by a nontrivial vector Chern number. Using our skillful acoustic experiments, we verify those nontrivial boundary landmarks and visualize the consequent hybrid-order topological pump process directly. This work deepens our understanding to higher-order topological phases and broadens the scope of topological pumps. _Introduction_. Topology, originally a branch of mathematics, has become an important concept in different fields of physics [1; 2; 3; 4; 5; 6]. Thouless pump provides one of the simplest manifestations to understand the band topology in quantum systems. It was first proposed by Thouless when he studied the particle transport in one-dimensional (1D) periodic structures with adiabatic time evolution. He found out that the dynamic edge-bulk-edge pumping shares the same topological origin as the static two-dimensional (2D) Chern insulator [7; 8], where one of the momentum coordinates is replaced by an adiabatically varied cyclic parameter [Fig. 1(a)]. As such, this dynamic pumping process can be dictated by a nontrivial integer topological invariant (i.e., first Chern number). Similar theory has also been extended to explore four-dimensional (4D) quantum Hall effect characterized by a 4D topological invariant (known as second Chern number) [9; 10]. In this case, a 2D Thouless pump is implemented to a 2D spatial system, resorting to two additional external parameters [10]. With the discovery of higher-order topological insulators [11; 12; 13; 14; 15], referring to \(d\)-dimensional topological systems hosting nontrivial (\(d-N\))-dimensional boundary states (\(N>1\)), higher-order topological pumps have been proposed to connect the quantized corner to corner charge flow with the unconventional bulk-boundary correspondence [16; 17; 18; 19; 20; 21], as illustrated in Fig. 1(b). To date, both the conventional (or first-order) and higher-order Thouless pumps have been realized in diverse experimental platforms, from cold atom systems to various artificial crystals of classical waves [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. Note that both the first-order and higher-order pumping procedures involve only a simple transition between the bulk and edge (or corner) states. To the best of our knowledge, so far there is no realization of topological pump involving the states of more spatial dimensions simultaneously, e.g., the bulk, edge, and corner states in a 2D system. To achieve such new kind of topological pumps, dubbed hybrid-order pumps latter, a suitable band gap formed _between_ the 2D bulk and 1D edge bands should be designed for tolerating a continuous pumping of the zero-dimensional (0D) corner states [Fig. 
1(c)]. This is much more difficult than that demanded for a higher-order pump: the validity of the higher-order topological invariant relies on a concrete selection of the periodical unit cell [11; 12; 13; 14; 15], which makes the boundary morphology and thus the edge modes predetermined by the corner configuration in the finite system formed by repeating unit cells. Figure 1: Schematic illustrations for three distinct topological pumps. (a) Conventional (first-order) pump in 1D systems. (b) Higher-order topological pump in 2D systems. (c) Hybrid-order topological pump in 2D systems. The red and blue lines sketch 0D states confined to different edges (or corners). In contrast to the conventional and higher-order pumps that exhibit edge-bulk-edge and corner-bulk-corner transports within a parameter cycle, the hybrid-order pump exhibits a transition among the bulk, edge, and corner states. In this work, we report an experimental observation of hybrid-order topological pump in acoustic systems. As shown in Fig. 2(a), we consider a 2D Kekule-textured graphene (KTG) lattice featuring three inequivalent hoppings, where the variable phase (\(\phi\)) acts as the cyclic pumping parameter. Particularly, we focus on the obtuse angle corners intersected by two zigzag boundaries. In this unusual boundary configuration, which is inaccessible by repeating any minimal unit cells, the system exhibits corner states that traverse the edge and bulk states continuously with the phase parameter. Using acoustic cavity-tube structures, we experimentally observe the highly appealing hybrid-order topological pumping process, accompanying convincing experimental evidence for the topological responses at the 1D edges and 0D corners. _Tight-binding model_. As shown in Fig. 2(a), we start with a 2D KTG model that is distorted from the graphene lattice of uniform hopping \(t_{\text{o}}\)[33, 34]. The KTG lattice features three inequivalent nearest-neighboring hoppings \(t_{n}=t_{\text{o}}+\delta t\cos(\chi_{n}+\phi)\), where \(\delta t\) characterizes the modulation strength of the hoppings, and \(\phi\in[-\pi,\pi]\) is a tunable phase parameter under the specific bias \(\chi_{n}=2(n-1)\pi/3\) (\(n=1\!\!-\!3\)). Note that the lattice constant \(a\) of the honeycomb lattice becomes \(\sqrt{3}a\) for the KTG lattice. In addition to time-reversal symmetry, the system of a general \(\phi\) has sublattice symmetry (or chiral symmetry). It is of particular interest that, as exhibited in Fig. 2(b), the band structure of the KTG lattice always hosts a wide spectral gap centered at the zero energy, besides the mini gaps appearing repeatedly between the highest (lowest) two bands. Now we consider the topological boundary manifestations in finite-sized samples. Armchair and zigzag boundaries are two typical boundary truncations for general honeycomb lattices. In our KTG lattice, however, unlike the armchair boundary accessible by simply duplicating unit cells, the zigzag boundary cannot be achieved without destroying the integrity of the 6-orbital KTG unit cell. Instead, it can be realized by repeating rhombus supercells of 18 orbitals. Figure 2(c) presents the spectral flow against the phase parameter \(\phi\) for the case of natural armchair boundary. It shows that near \(\phi=0\), two degenerate zero-energy states emerge at the corners of the obtuse angle, either symmetric or antisymmetric about the center of the finite diamond-shaped structure. 
The corner states can be explained by a \(Z_{2}\) topological invariant or a topological index defined at the high-symmetry points in momentum space [34, 35, 36, 37]. For the system with the unusual zigzag boundary, however, novel phenomena emerge in the primary bulk gap. As shown in Fig. 2(d), highly degenerate flat bands (cyan curves) appear at zero energy and divide the primary gap into two pieces. They are edge states inherited from the original graphene lattice with zigzag boundaries, which can be characterized by a nontrivial winding number [38, 39] (see _Supplemental Materials_). More intriguingly, there are two sets of corner states spanning the two separate primary gaps with the continuous evolution of \(\phi\), each of which emerges pairwise in energy at the same corner due to the chiral symmetry. Moreover, the corner states associated with the samples of \(\pm\phi\) are localized at opposite obtuse-angle corners, as exemplified by the field distributions for \(\phi=\pm 5\pi/8\). Therefore, for either of the two divided primary gaps, one can imagine a hybrid-order topological pumping process that involves the bulk, edge, and corner states simultaneously. Figure 2: Tight-binding model. (a) Unit cell of the KTG lattice, which features textured hoppings \(t_{1}\), \(t_{2}\), and \(t_{3}\). (b) Evolution of the band structure with the phase parameter \(\phi\), evaluated with fixed \(t_{0}=-1\) and \(\delta t=-0.7\). (c) Left: \(\phi\)-dependent energy spectrum for a finite system with armchair boundary. Right: Eigenfields for the degenerate zero-energy corner states highlighted in the spectrum. (d) Similar to (c), but for the system with zigzag boundary. Note that the corner configuration cannot be formed by simply repeating KTG unit cells or any other minimal unit cells of 6 orbitals, but can be achieved by duplicating the rhombus supercell of 18 orbitals. The coexistence of the nontrivial edge (cyan) and corner (red and blue) modes offers an opportunity to realize the hybrid-order topological pumping in the zigzag-boundary KTG lattice. _Topological nature of the gapless corner states_. Knowing that the edge states are protected by a nontrivial winding number, below we explain the physical origin of the gapless corner states in the zigzag-boundary systems, inspired by the top-to-bottom scheme introduced in Ref. [40], in which one constructs 2D corner states from two independent 1D systems with 0D edge states. To do this, we consider a finite system of \(N\times N\) rhombus supercells, and decompose it into \(N\) 1D supercell chains in the \(\mathbf{a}\) and \(\mathbf{b}\) directions [Fig. 3(a)]. Each supercell chain constitutes a 2D Chern insulator when the phase parameter \(\phi\) is introduced as an effective momentum for the second, synthetic dimension. Figure 3(b) shows the bulk band structure for the synthetic 2D Chern insulator, where the chiral symmetry-related bands are plotted with the same color for clarity. In particular, the two nearly degenerate flat bands (black) share a similar physical origin with the edge modes exhibited in Fig. 2(d), while the states are now localized at the _supercell's_ zigzag boundaries. According to the bulk-boundary correspondence, the topological edge states of the synthetic Chern insulator are related to its gap Chern number, which can be defined as \[C_{g}^{s}=\frac{1}{2\pi i}\int_{BZ}\text{Tr}[F(k_{s},\phi)]\,dk_{s}\,d\phi\,.
\tag{1}\] Here, \(s=\mathbf{a},\mathbf{b}\) and \(F(k_{s},\phi)\) represents the non-Abelian Berry curvature calculated for all bands below the target gap [40, 41]. Specifically, the gap Chern number \(C_{g}^{s}=1\) (or \(-1\)) for the primary gap above (below) the flat bands. This suggests that each gap hosts one gapless edge band when the synthetic Chern insulator is truncated in the \(\mathbf{a}\) (\(\mathbf{b}\)) direction. This can be seen in Fig. 3(c), the projected energy spectrum along the synthetic momentum \(\phi\). Figure 3(d) exemplifies the edge state distributions for \(\phi=\pm 5\pi/8\) in the lower primary gap. When both the \(\mathbf{a}\)- and \(\mathbf{b}\)-directed supercell chains host nontrivial edge states decaying exponentially away from the ends, corner states follow in the 2D zigzag-boundary sample, whose couplings can be decomposed into two independent components along these directions [40, 41]. (More specifically, the top edge localizations at \(\phi=5\pi/8\) give rise to the top corner states, while the bottom edge localizations at \(\phi=-5\pi/8\) give rise to the bottom corner states [Fig. 2(d)].) Therefore, the corner states of the original zigzag-boundary KTG lattice can be characterized by the two non-zero gap Chern numbers \(C_{g}^{\mathbf{a}}\) and \(C_{g}^{\mathbf{b}}\), or, written together, as a vector Chern number \(\mathbf{C}=\left(C_{g}^{\mathbf{a}},C_{g}^{\mathbf{b}}\right)\). More specifically, the vector Chern numbers \(\mathbf{C_{1}}=\left(-1,-1\right)\) and \(\mathbf{C_{2}}=\left(1,1\right)\) explain the corner states of the primary gaps below and above the zero energy, respectively. Ultimately, the coexistence of the topological corner and edge states in the diamond-shaped sample, protected respectively by the nontrivial vector Chern numbers and winding numbers, enables a unique hybrid-order pumping process when considering a continuous evolution of the phase parameter \(\phi\). _Acoustic realization of the tight-binding model_. The above tight-binding model can be emulated precisely by our acoustic system, where the orbitals and hoppings are mimicked by air-filled cavity resonators and narrow tubes [42, 43, 44, 45], respectively. As shown in Fig. 4(a), each unit cell of our acoustic KTG lattice includes six identical hexagonal prism cavities of side length \(l=5.0\) mm and height \(H=32.8\) mm. The latter results in a dipole resonant mode at 5.27 kHz, which is far from the other resonant modes. (In our simulations, the sound speed is set to \(346\) m/s.) Every two nearest acoustic resonators are connected by two rectangular tubes of constant aspect ratio \(1\!:\!3\). Provided that the couplings are approximately proportional to the cross-sectional areas of the narrow tubes, as shown in Fig. 4(b), we realize the desired couplings by modulating the cross-sectional areas \(S_{n}=S_{0}+\delta S\cos(\chi_{n}+\phi)\), with \(S_{0}=20.3\) mm\({}^{2}\) and \(\delta S=12.0\) mm\({}^{2}\). The resultant effective hoppings (colored dots) capture well the cosine line shape \(t_{n}=t_{0}+\delta t\cos(\chi_{n}+\phi)\), associated with \(t_{0}=-424.5\) Hz and \(\delta t=-216.7\) Hz. To demonstrate the topological response of the system, Fig. 4(c) exemplifies the eigenvalue spectrum with \(\phi=5\pi/8\), simulated for a sample of \(3\times 3\) supercells.
In addition to the midgap edge modes, the energy spectrum features two chiral symmetry-related 0D modes at the top corner of the sample, as visualized further by their acoustic field distributions (see insets). Figure 3: Physical origin of the gapless corner states. (a) A zigzag-boundary KTG lattice formed by duplicating rhombus supercells (shadowed). It can be decoupled into supercell chains along the \(\mathbf{a}\) and \(\mathbf{b}\) directions by removing the inter-supercell hoppings in the \(\mathbf{b}\) and \(\mathbf{a}\) directions, respectively. (b) Global evolution of the band structure for the 1D periodic chain in the \(\mathbf{a}\) (\(\mathbf{b}\)) direction, which emulates the bulk dispersion of a 2D Chern insulator with the assistance of the effective momentum \(\phi\). Here \(C_{g}^{s}\) highlights the Chern number for the two primary gaps near the zero-energy flat bands. (c) \(\phi\)-dependent energy spectrum for the \(\mathbf{a}\) (\(\mathbf{b}\))-directed chain of finite length, which exhibits in-gap edge modes (blue and red lines) protected by the gap Chern numbers. (d) Field distributions of the edge modes labeled in (c), which together form the top (A) and bottom (B) corner modes in Fig. 2(d). Below we present our experimental evidence for the topological edge and corner states. Here, we focus first on the sample with \(\phi=5\pi/8\). As shown in Fig. 4(d), our sample, fabricated by 3D printing, consists of \(3\times 3\) supercells, i.e., 162 acoustic cavity resonators in total. To implement local measurements, two small holes are perforated in each cavity for inserting the sound source and probe, which are sealed when not in use. Both the input and output signals are recorded and frequency-resolved with a multi-analyzer system (B&K Type 4182). The samples are divided into non-overlapping spatial domains to extract the information of the bulk, edge, and corner states. The top panel of Fig. 4(e) presents the site-averaged bulk, edge, and corner spectra. Clearly, the bulk spectrum (gray curve) shows a wide band gap centered at 5.27 kHz (associated with the zero energy in the tight-binding model), around which the edge spectrum (cyan curve) shows a predominant peak. By contrast, a pair of chiral symmetry-related peaks appear in the spectrum of corner 1 (red curve), which are absent in the spectrum of corner 2 (blue curve), as predicted in Fig. 2(d). Similar phenomena can be observed in the case of \(\phi=-5\pi/8\) [Fig. 4(e), bottom panel], where the corner states appear at corner 2 (blue curve). To further identify the bulk, edge, and corner states, we have integrated the pressure intensities over several typical frequency ranges for each site individually. As shown in Fig. 4(f), the site-resolved acoustic patterns exhibit clear spatial characteristics of the bulk, edge, and corner states, as expected. It is worth pointing out that, although higher-order corner states in graphene-like lattices have been extensively unveiled in armchair-boundary configurations, they have not been observed so far in any system with zigzag boundaries. Figure 4: Acoustic emulation of the tight-binding model and experimental evidence for the topological states. (a) Unit cell geometry of our acoustic KTG lattice, which features three types of coupling tubes of different cross-sectional areas, \(S_{1}\), \(S_{2}\), and \(S_{3}\). (b) Top: \(\phi\)-dependent cross-sectional areas designed for the three coupling tubes.
Bottom: Associated effective couplings (colored dots), which are fitted well by cosine functions (colored lines). (c) Eigenfrequency spectra simulated for a finite sample with \(\phi=5\pi/8\). Insets: Pressure field distributions extracted for the two chiral symmetry-related corner states. (d) A photograph of the zigzag-boundary sample, where the corner and edge sites are colored for clarity. The inset shows an enlarged view around corner 1. (e) Pressure intensity spectra averaged for the bulk, edge, and corner sites. The results are exemplified with the samples of \(\phi=\pm 5\pi/8\). (f) Intensity patterns extracted for the bulk (B), edge (E), and corner (C) states, respectively, which are averaged over the frequency windows shadowed in (e). _Observation of the hybrid-order topological pump_. To reflect the global evolution of the bulk, edge, and corner states within one pumping cycle \(\phi\in[-\pi,\pi]\), we fabricated 17 diamond-shaped samples at a \(\phi\)-step of \(\pi/8\), and measured their topological manifestations individually as before. All experimental data match well with our full-wave simulated bulk, edge, and corner spectra (see _Supplemental Materials_). For conciseness, here we combine the bulk, edge, and corner spectra and present the superimposed data (color scale) in Fig. 5(a). It exhibits excellent agreement with our theoretical prediction (colored lines), especially for the corner modes across the two primary band gaps around 5.3 kHz. The band broadening stems mostly from the unavoidable viscous and thermal dissipations inside the acoustic structure. More importantly, the spectral flow unveils a hybrid-order topological pumping process where the corner states traverse the bulk and edge states smoothly, as guided by the white arrows in the lower primary gap. To directly visualize the intricate hybrid-order topological pump, we display the site-resolved intensity patterns measured for a set of typical configurations [Fig. 5(b), left panel]. Clearly, the acoustic state evolves from the top corner 1 to the bulk 2, subsequently to the bottom corner 3, and then returns to the initial state 5 through the edge 4, during which the band gap is never closed. This reproduces the pumping process predicted by the tight-binding model [Fig. 5(b), right panel]. _Conclusions_. By fully exploiting the controllability of acoustic metamaterials, we have designed and fabricated a series of acoustic KTG lattices to realize the novel hybrid-order topological pumping protocol. Tracking the global evolution of the acoustic states with the cyclic parameter, our experimental results exhibit an intact corner-bulk-corner-edge-corner transition, which involves the 0D corner, 1D edge, and 2D bulk states simultaneously. All experimental data agree well with the theoretical predictions. One may expect that both the corner states and edge states involved here would be considerably robust against disorder and defects, given the protection by the gap Chern number and winding number. Our findings open a new path for unveiling more complex topological pumping physics.
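As a numerical companion to the invariant in Eq. (1), the sketch below evaluates a gap Chern number on a discretized \((k,\phi)\) torus using the standard link-variable (Fukui-Hatsugai-Suzuki) construction. The Bloch Hamiltonian `h_func` of the supercell chain is left as a user-supplied callable and is not reconstructed here; only the generic algorithm is shown.

```python
import numpy as np

def gap_chern_number(h_func, n_occ, n_k=60, n_phi=60):
    """Discretized Chern number of the lowest n_occ bands of h_func(k, phi) on the torus."""
    ks = np.linspace(-np.pi, np.pi, n_k, endpoint=False)
    phis = np.linspace(-np.pi, np.pi, n_phi, endpoint=False)
    # Occupied eigenvector frames at every grid point (columns = occupied states).
    frames = [[np.linalg.eigh(h_func(k, p))[1][:, :n_occ] for p in phis] for k in ks]

    def link(a, b):                      # U(1) link variable between neighboring frames
        d = np.linalg.det(a.conj().T @ b)
        return d / abs(d)

    total = 0.0
    for i in range(n_k):
        for j in range(n_phi):
            f00 = frames[i][j]
            f10 = frames[(i + 1) % n_k][j]
            f11 = frames[(i + 1) % n_k][(j + 1) % n_phi]
            f01 = frames[i][(j + 1) % n_phi]
            # Berry flux through one plaquette, accumulated modulo 2*pi.
            total += np.angle(link(f00, f10) * link(f10, f11) * link(f11, f01) * link(f01, f00))
    return total / (2.0 * np.pi)
```

For a sufficiently fine grid the returned value converges to an integer, e.g., \(\pm 1\) for the two primary gaps discussed above.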
2309.12633
Learning to Coordinate with Anyone
In open multi-agent environments, the agents may encounter unexpected teammates. Classical multi-agent learning approaches train agents that can only coordinate with seen teammates. Recent studies attempted to generate diverse teammates to enhance the generalizable coordination ability, but were restricted by pre-defined teammates. In this work, our aim is to train agents with strong coordination ability by generating teammates that fully cover the teammate policy space, so that agents can coordinate with any teammates. Since the teammate policy space is too huge to be enumerated, we find only dissimilar teammates that are incompatible with controllable agents, which highly reduces the number of teammates that need to be trained with. However, it is hard to determine the number of such incompatible teammates beforehand. We therefore introduce a continual multi-agent learning process, in which the agent learns to coordinate with different teammates until no more incompatible teammates can be found. The above idea is implemented in the proposed Macop (Multi-agent compatible policy learning) algorithm. We conduct experiments in 8 scenarios from 4 environments that have distinct coordination patterns. Experiments show that Macop generates training teammates with much lower compatibility than previous methods. As a result, in all scenarios Macop achieves the best overall coordination ability while never significantly worse than the baselines, showing strong generalization ability.
Lei Yuan, Lihe Li, Ziqian Zhang, Feng Chen, Tianyi Zhang, Cong Guan, Yang Yu, Zhi-Hua Zhou
2023-09-22T06:01:26Z
http://arxiv.org/abs/2309.12633v1
# Learning to Coordinate with Anyone ###### Abstract In open multi-agent environments, the agents may encounter unexpected teammates. Classical multi-agent learning approaches train agents that can only coordinate with seen teammates. Recent studies attempted to generate diverse teammates to enhance the generalizable coordination ability, but were restricted by pre-defined teammates. In this work, our aim is to train agents with strong coordination ability by generating teammates that fully cover the teammate policy space, so that agents can coordinate with any teammates. Since the teammate policy space is too huge to be enumerated, we find only _dissimilar_ teammates that are _incompatible_ with controllable agents, which highly reduces the number of teammates that need to be trained with. However, it is hard to determine the number of such incompatible teammates beforehand. We therefore introduce a continual multi-agent learning process, in which the agent learns to coordinate with different teammates until no more incompatible teammates can be found. The above idea is implemented in the proposed Macop (**M**ulti-**a**gent **c**ompatible **p**olicy learning) algorithm. We conduct experiments in 8 scenarios from 4 environments that have distinct coordination patterns. Experiments show that Macop generates training teammates with much lower compatibility than previous methods. As a result, in all scenarios Macop achieves the best overall coordination ability while never significantly worse than the baselines, showing strong generalization ability. ## 1 Introduction Cooperative Multi-Agent Reinforcement Learning (MARL) [1] has garnered significant attention due to its demonstrated potential in various real-world applications. Recent studies have showcased MARL's exceptional performance in tasks such as pathfinding [1], active voltage control [21], and dynamic algorithm configuration [23]. However, these achievements are typically made within closed environments where teammates are pre-defined. The coordination ability of such a system declines when the trained policies are deployed in real-world scenarios, where agents may encounter unexpected teammates in open environments [16]. Training with diverse teammates presents a promising avenue for tackling the aforementioned challenge. Various methods have emerged in domains such as ad-hoc teamwork [14], zero-shot coordination [17], and few-shot teamwork [20]. Addressing this challenge effectively involves two crucial factors. Firstly, to enhance generalization and avoid overfitting to specific partners, it is essential for agents to be exposed to diverse teammates during the training process. Diversity can be achieved through various techniques, such as hand-crafted policies [18], objective regularizers designed among agents [13; Lupu et al. (2021); Charakorn et al. (2023)], or population-based training (PBT) [24; Xue et al. (2022a)]. Secondly, when dealing with multiple teammates, especially in the context of multi-modal scenarios, specialized consideration is necessary. Naive approaches, like self-play (or "self-training") [25; Silver et al. (2018)], Fictitious Co-Play (FCP) [14; Strouse et al. (2021)], or coevolving agent and partner populations [14], have been explored (see related work in App. A.1). Nevertheless, complex scenarios often present substantial challenges arising from both the complexity and vastness of the teammate policy space.
On one hand, enumerating all possible teammate groups is a daunting task, and training the agents can be time-consuming. On the other hand, even when we pre-define only representative and diverse teammates, we may still accidentally omit some instances. The exact number of such teammates cannot be determined in advance as well. This prompts a crucial question: Can we design a more efficient training paradigm that ensures our controllable agents are trained alongside partners in a policy space that guarantees coverage, ultimately enabling high generalization and effective coordination ability with diverse teammates? To tackle the mentioned issue, we propose a novel coordination paradigm known as Macop, with which we can obtain a multi-agent compatible policy via incompatible teammates evolution. The core principle of Macop is the adversarial generation of new teammate instances, which are strategically crafted to challenge and refine the ego-system's (the agents we control) coordination policy. However, the exact number of representative teammates can not be determined beforehand, and maintaining a sufficiently diverse population requires significant computing and storage resources. We therefore introduce Continual Teammate Dec-POMDP (CT-Dec-POMDP), wherein the ego-system is trained with groups of teammates generated sequentially until convergence is reached. Our approach is rooted in two crucial factors: instance diversity and incompatibility between the newly generated teammates and the ego-system. During the training process, we iteratively refine teammate generation and optimize the ego-system until convergence is reached. This approach empowers the ego-system, leading to a coordination policy capable of seamlessly handling a wide array of team compositions and promptly adapting to new teammates. We conduct experiments on different MARL benchmarks that have distinct coordination patterns, including Level-based Foraging (LBF) [17], Predator-Prey (PP), Cooperative Navigation (CN) from MPE [18], and two customized maps from StarCraft Multi-agent Challenge (SMAC) [19]. Experimental results show that our proposed Macop exhibits remarkable improvement in comparison to existing methods, achieving nearly 20% average performance improvement in the conducted benchmarks compared to multiple baselines, and more experiments reveal it from multiple aspects. ## 2 Problem Formulation As we aim to solve a continual coordination problem, where the controllable agents are required to cooperate with diverse teammates which arise sequentially, we formalize it as a Continual Teammate Dec-POMDP (CT-Dec-POMDP) by extending the Dec-POMDP [15]. The CT-Dec-POMDP can be described as a tuple \(\mathcal{M}=\langle\mathcal{N},\mathcal{S},\mathcal{A},P,\{\mathbf{\pi}_{\text{m} }^{k}\}_{k=1}^{\infty},m,\)\(\Omega,O,R,\gamma\rangle\), here \(\mathcal{N}=\{1,\dots,n\}\), \(\mathcal{S}\), \(\mathcal{A}=\mathcal{A}^{1}\times...\times\mathcal{A}^{n}\) and \(\Omega\) are the sets of corresponding agents, global state, joint action, observation. \(P\) is the transition function, \(\{\mathbf{\pi}_{\text{m}}^{k}\}_{k=1}^{\infty}\) represents the \(k\) groups of teammates encountered sequentially during the training phase until time \(t\), \(m\) is the number of controllable agents, and \(\gamma\in[0,1)\) represents the discounted factor. At each time step, agent \(i\) receives the observation \(o^{i}=O(s,i)\) and outputs the action \(a^{i}\in\mathcal{A}^{i}\). 
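For readers who prefer code, the tuple above can be mirrored by a lightweight container. The sketch below is an illustrative data structure only; the type aliases and field names are our own assumptions rather than an interface taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Sequence

State = Sequence[float]          # an encoding of s in S (assumed representation)
Observation = Sequence[float]    # o in Omega
Action = int                     # a^i in A^i, discrete case
Policy = Callable[[Observation], Action]

@dataclass
class CTDecPOMDP:
    """Container mirroring <N, S, A, P, {pi_tm^k}, m, Omega, O, R, gamma>."""
    n_agents: int                                              # |N| = n
    n_controllable: int                                        # m ego agents
    transition: Callable[[State, List[Action]], State]         # samples s' ~ P(.|s, a)
    observe: Callable[[State, int], Observation]               # O(s, i)
    reward: Callable[[State, List[Action]], float]             # R(s, a)
    gamma: float = 0.99                                        # discount factor
    teammate_groups: List[List[Policy]] = field(default_factory=list)  # {pi_tm^k}, grown over time
```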
Concretely, when training to cooperate with a group of teammates \(\mathbf{\pi}_{\text{tm}}^{k}\), the agents do not have access to previous teammate groups \(\mathbf{\pi}_{\text{tm}}^{k^{\prime}},k^{\prime}=1,...,k-1\). However, they are expected to remember how to cooperate with all previously encountered teammate groups. For simplicity, we denote a group of teammates as "teammate" when no ambiguity arises. The training phase of cooperating with teammate \(\mathbf{\pi}_{\text{tm}}^{k}\) can be described as \(\mathcal{M}_{k}=\langle\mathcal{N},\mathcal{S},\mathcal{A},P,\mathbf{\pi}_{\text{tm}}^{k},m,\Omega,O,R,\gamma\rangle\). The controllable agents \(\mathbf{\pi}_{\text{ego}}=\{\pi_{\text{ego}}^{1},...,\pi_{\text{ego}}^{m}\}\in\Pi_{\text{ego}}=\otimes_{i=1}^{m}\Pi_{i}\) and the teammate \(\mathbf{\pi}_{\text{tm}}^{k}=\{\pi_{\text{tm}}^{k,m+1},..,\pi_{\text{tm}}^{k,n}\}\in\Pi_{\text{tm}}=\otimes_{i=m+1}^{n}\Pi_{i}\) form a new joint policy \(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}^{k}\rangle\). The joint action \(\langle\mathbf{a}_{\text{ego}},\mathbf{a}_{\text{tm}}^{k}\rangle=\langle\mathbf{\pi}_{\text{ego}}(\mathbf{\tau}_{\text{ego}}),\mathbf{\pi}_{\text{tm}}^{k}(\mathbf{\tau}_{\text{tm}}^{k})\rangle\) leads to the next state \(s^{\prime}\sim P(\cdot|s,\langle\mathbf{a}_{\text{ego}},\mathbf{a}_{\text{tm}}^{k}\rangle)\) and the global reward \(R(s,\langle\mathbf{a}_{\text{ego}},\mathbf{a}_{\text{tm}}^{k}\rangle)\), where \(\mathbf{\tau}_{\text{ego}}=\{\tau^{i}\}_{i=1}^{m}\) and \(\mathbf{\tau}_{\text{tm}}^{k}=\{\tau^{i}\}_{i=m+1}^{n}\) are the corresponding joint action-observation histories. The controllable agents are optimized to maximize the expected return when cooperating with teammate \(\mathbf{\pi}_{\text{tm}}^{k}\): \[\max_{\mathbf{\pi}_{\text{ego}}}\mathcal{J}(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}^{k}\rangle)=\mathbb{E}_{\mathbf{\tau}\sim\rho(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}^{k}\rangle)}[G(\mathbf{\tau})], \tag{1}\] where \(G(\mathbf{\tau})=\sum_{t=0}^{T}\gamma^{t}R(s_{t},\mathbf{a}_{t})\) is the return of a joint trajectory. At the same time, for a formal characterization of the relationship between the policy spaces of \(\mathbf{\pi}_{\text{ego}}\) and \(\mathbf{\pi}_{\text{tm}}\), we introduce the concept of a complementary policy class: **Definition 1** (complementary policy class).: _For any sub policy \(\mathbf{\pi}\in\Pi_{i:j}=\otimes_{h=i}^{j}\Pi_{h},i\leq j\), we define its complementary policy class as \(\Pi_{\mathbf{\pi}}^{c}=\otimes_{h=1}^{i-1}\Pi_{h}\times\otimes_{h=j+1}^{n}\Pi_{h}\). We denote the complementary policy classes of the controllable agents and the teammate as \(\Pi_{\text{ego}}^{c}\) and \(\Pi_{\text{tm}}^{c}\) for simplicity. We also refer to \(\mathcal{J}_{\text{sp}}(\mathbf{\pi}_{\text{ego}})=\max_{\mathbf{\pi}_{\text{tm}}\in\Pi_{\text{ego}}^{c}}\mathcal{J}(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}\rangle)\) and \(\mathcal{J}_{\text{sp}}(\mathbf{\pi}_{\text{tm}})=\max_{\mathbf{\pi}_{\text{ego}}\in\Pi_{\text{tm}}^{c}}\mathcal{J}(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}\rangle)\) as the "self-play return" of \(\mathbf{\pi}_{\text{ego}}\) and \(\mathbf{\pi}_{\text{tm}}\), respectively._ ## 3 Method In this section, we will present the detailed design of our proposed method Macop (cf. Fig. 1). First, we introduce a novel continual teammate generation module by combining population-based training and incompatible policy learning (Fig. 1(a)).
Next, we outline the design of our continual coordination policy learning paradigm, which consists of a shared backbone and a dynamic head expansion module (Fig. 1(b)). These two phases proceed alternately to train a robust multi-agent coordination policy that is capable of effectively cooperating with diverse teammates (Fig. 1(c)). ### Incompatible teammate generation The objective of Macop is to develop a joint policy that can effectively cooperate with diverse teammates. Since the policy space of teammate groups is too huge to be enumerated, we focus on identifying dissimilar teammate groups. To achieve this, we begin by establishing a complementary-policy-agnostic measure capable of effectively quantifying the similarity between two teammate groups, ensuring that it remains unaffected by complementary policies. In particular, we pair two teammate groups with an arbitrary complementary policy, as defined in Def. 1. These groups are considered similar if the probabilities of any trajectory produced under the two groups stay within a predefined threshold of each other. **Definition 2** (\(\epsilon\)-similar policies).: _We measure the similarity between two different teammates \(\mathbf{\pi}_{\text{tm}}^{i},\mathbf{\pi}_{\text{tm}}^{j}\) with the probability of the trajectory induced by them when paired with any complementary policy. Specifically, for any fixed complementary policy \(\mathbf{\bar{\pi}}\in\Pi_{\text{tm}}^{c}\), the probability of the trajectory produced by the joint policy is \(P(\mathbf{\tau}|\langle\mathbf{\bar{\pi}},\mathbf{\pi}_{\text{tm}}\rangle)=\prod_{t=0}^{T-1}\mathbf{\bar{\pi}}(\mathbf{\bar{a}}_{t}|\mathbf{\bar{\tau}}_{t})\,\mathbf{\pi}_{\text{tm}}(\mathbf{a}_{\text{tm},t}|\mathbf{\tau}_{\text{tm},t})\,P(s_{t+1}|s_{t},\langle\mathbf{\bar{a}}_{t},\mathbf{a}_{\text{tm},t}\rangle)\). Accordingly, we define the dissimilarity between the two teammates as \(d(\mathbf{\pi}_{\text{tm}}^{i},\mathbf{\pi}_{\text{tm}}^{j})=\max_{\mathbf{\tau}}|1-\frac{P(\mathbf{\tau}|\langle\mathbf{\bar{\pi}},\mathbf{\pi}_{\text{tm}}^{i}\rangle)}{P(\mathbf{\tau}|\langle\mathbf{\bar{\pi}},\mathbf{\pi}_{\text{tm}}^{j}\rangle)}|=\max_{\mathbf{\tau}}|1-\prod_{t=0}^{T-1}\frac{\mathbf{\pi}_{\text{tm}}^{i}(\mathbf{a}_{\text{tm},t}|\mathbf{\tau}_{\text{tm},t})}{\mathbf{\pi}_{\text{tm}}^{j}(\mathbf{a}_{\text{tm},t}|\mathbf{\tau}_{\text{tm},t})}|\). Teammates \(\mathbf{\pi}_{\text{tm}}^{i}\) and \(\mathbf{\pi}_{\text{tm}}^{j}\) are \(\epsilon\)-similar policies if and only if \(d(\mathbf{\pi}_{\text{tm}}^{i},\mathbf{\pi}_{\text{tm}}^{j})\leq\epsilon\), \(0\leq\epsilon\leq 1\), which implies that \(1-\epsilon\leq\frac{P(\mathbf{\tau}|\langle\mathbf{\bar{\pi}},\mathbf{\pi}_{\text{tm}}^{i}\rangle)}{P(\mathbf{\tau}|\langle\mathbf{\bar{\pi}},\mathbf{\pi}_{\text{tm}}^{j}\rangle)}\leq 1+\epsilon,\forall\mathbf{\tau}\)._ Based on Def. 2 above, our approach involves the identification of representative teammate groups, ensuring that the dissimilarity between them surpasses the specified threshold \(\epsilon\). We continually generate such dissimilar teammate groups in order to gradually cover the space of teammate policies. Drawing inspiration from the proven efficacy of population-based training (PBT) [1] and evolutionary algorithms (EA) [1], we formulate teammate generation as an evolutionary process, maintaining a population of teammates \(\mathcal{P}_{\text{tm}}=\{\mathbf{\pi}_{\text{tm}}^{j}\}_{j=1}^{n_{p}}\) under the changing controllable agents \(\mathbf{\pi}_{\text{ego}}\).
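Since the maximum over all trajectories in Def. 2 is intractable to compute exactly, one practical proxy is a Monte-Carlo estimate over a batch of sampled trajectories. The sketch below illustrates that idea with log-probability ratios; the function names and the use of sampled rollouts are our own assumptions, not a procedure prescribed by the paper.

```python
import numpy as np

def empirical_dissimilarity(logp_i, logp_j):
    """Sample-based proxy for d(pi_tm^i, pi_tm^j) from Def. 2.

    logp_i, logp_j: arrays of shape (num_trajectories,), each entry holding
    sum_t log pi_tm(a_{tm,t} | tau_{tm,t}) for the same sampled trajectories
    under teammate groups i and j, respectively.
    """
    ratio = np.exp(np.asarray(logp_i) - np.asarray(logp_j))   # probability ratios
    return float(np.max(np.abs(1.0 - ratio)))                  # max over the sampled batch

def are_eps_similar(logp_i, logp_j, eps):
    """Check the epsilon-similarity condition d <= eps on the sampled batch."""
    return empirical_dissimilarity(logp_i, logp_j) <= eps
```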
By ensuring that the teammate groups exhibit dissimilarity between instances not only within the current population but also with respect to previous ones, our aim is to systematically explore and cover the entire teammate policy space over time. Figure 1: The overall workflow of Macop. Specifically, in each generation, the current population is first initialized through a customized parent selection mechanism (details provided later). We focus on promoting diversity within the teammate population, striving to enhance the dissimilarity between individuals, i.e., to maximize \(\sum_{i\neq j}d(\mathbf{\pi}_{\text{tm}}^{i},\mathbf{\pi}_{\text{tm}}^{j})\). To achieve this goal, we take the Jensen-Shannon divergence (JSD) [10] as a reliable proxy to effectively measure the dissimilarity between teammates' policies, as introduced in [23]: \[\begin{split}\mathcal{L}_{\text{div}}=&\mathbb{E}_{s}[\text{JSD}(\{\mathbf{\pi}_{\text{tm}}^{i}\}_{i=1}^{n_{p}})]\\ =&\mathbb{E}_{s}\Big[\frac{1}{n_{p}}\sum_{i=1}^{n_{p}}D_{KL}\big(\mathbf{\pi}_{\text{tm}}^{i}(\cdot|s)\,||\,\mathbf{\bar{\pi}}_{\text{tm}}(\cdot|s)\big)\Big],\end{split} \tag{2}\] where \(\mathbf{\bar{\pi}}_{\text{tm}}(\cdot|s)=\frac{1}{n_{p}}\sum_{i=1}^{n_{p}}\mathbf{\pi}_{\text{tm}}^{i}(\cdot|s)\) is the average policy of the population, and \(D_{KL}\) is the Kullback-Leibler (KL) divergence between two distributions. We provide proofs that the JSD proxy is a certifiable lower bound of the original dissimilarity objective in App. A.2. The advantages of JSD are immediately apparent. Unlike the TV divergence or KL divergence, which only allow pairwise comparisons between two distributions, JSD enables a more comprehensive assessment of the diversity within a population by accommodating multiple distributions. Meanwhile, JSD is symmetric, i.e., invariant under interchange of the distributions being compared, which simplifies the implementation. Despite the effectiveness of population-based training with \(\mathcal{L}_{\text{div}}\) in Eqn. 2, without further guarantees the continual generation could still produce teammate groups with similar behaviors across generations. Meanwhile, the size of the population \(n_{p}\) might also have a significant impact. Inspired by the relationship between similarity and compatibility proved in [1], we extend the theorem to our CT-Dec-POMDP: **Definition 3** (\(\epsilon\)-compatible teammates).: _For the controllable agents \(\mathbf{\pi}_{\text{ego}}\), let \(\mathcal{J}_{\text{sp}}(\mathbf{\pi}_{\text{ego}})=\alpha\). We refer to \(\mathbf{\pi}_{\text{tm}}\) as an \(\epsilon\)-compatible teammate of \(\mathbf{\pi}_{\text{ego}}\) if and only if \(\mathcal{J}(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}\rangle)\geq(1-\epsilon)\alpha\)._ **Theorem 1**.: _Given the controllable agents \(\mathbf{\pi}_{\text{ego}}\), a teammate policy \(\mathbf{\pi}_{\text{tm}}\), and any \(\mathbf{\pi}_{\text{tm}}^{\prime}\) such that \(\mathbf{\pi}_{\text{tm}}\) and \(\mathbf{\pi}_{\text{tm}}^{\prime}\) are \(\epsilon\)-similar policies, we have \((1-\epsilon)\mathcal{J}(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}\rangle)\leq\mathcal{J}(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}^{\prime}\rangle)\leq(1+\epsilon)\mathcal{J}(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}\rangle)\)._ The underlying idea behind Thm. 1 is that controllable agents, when effectively collaborating with a specific teammate group, will also be compatible with the teammate group's \(\epsilon\)-similar policies.
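As an illustration of the diversity objective in Eq. (2), the following sketch computes the JSD proxy from a batch of per-teammate action distributions evaluated on sampled states. The tensor shapes and function name are our own assumptions, not an interface from the paper's implementation.

```python
import numpy as np

def jsd_diversity(probs):
    """Monte-Carlo estimate of L_div in Eq. (2).

    probs: array of shape (n_p, batch, n_actions), where probs[i, b] is the action
    distribution pi_tm^i(. | s_b) of teammate group i at sampled state s_b.
    Returns the batch average of (1/n_p) * sum_i KL(pi_tm^i(.|s_b) || pi_bar(.|s_b)).
    """
    probs = np.asarray(probs, dtype=float)
    eps = 1e-12                                                # numerical safeguard
    mixture = probs.mean(axis=0, keepdims=True)                # average policy pi_bar
    kl = np.sum(probs * (np.log(probs + eps) - np.log(mixture + eps)), axis=-1)
    return float(kl.mean())                                    # mean over teammates and states
```

In practice, as noted later in the paper, value-based teammates can supply these distributions via a softmax over their Q-values.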
Proofs are given in App. A.2. We thus have the following corollary: **Corollary 1**.: _Given the controllable agents \(\mathbf{\pi}_{\text{ego}}\) and teammates \(\mathbf{\pi}_{\text{tm}}\). If \(\mathcal{J}(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}^{\prime}\rangle)<(1-\epsilon)\mathcal{J}(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}\rangle)\), then \(\mathbf{\pi}_{\text{tm}}\) and \(\mathbf{\pi}_{\text{tm}}^{\prime}\) are not \(\epsilon\)-similar policies, i.e., \(d(\mathbf{\pi}_{\text{tm}},\mathbf{\pi}_{\text{tm}}^{\prime})>\epsilon\)._ The result from Cor. 1 shows that we can ensure that teammate groups generated in the current population are different from those before by decreasing their compatibility with the controllable agents \(\mathbf{\pi}_{\text{ego}}\), which are trained to effectively collaborate with the teammates generated so far. Assuming that the controllable agents are fixed during the teammate population evolution stage, the optimization objective can be written as: \[\mathcal{L}_{\text{incom}}=-\frac{1}{n_{p}}\sum_{i=1}^{n_{p}}\mathcal{J}(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}^{i}\rangle). \tag{3}\] To ensure the meaningful learning of teammate groups' policies, it is crucial for each individual in the population to be capable of cooperating with complementary policies. Thus, the optimization of the teammates also focuses on maximizing the following objective: \[\mathcal{L}_{\text{sp}}=\frac{1}{n_{p}}\sum_{i=1}^{n_{p}}\mathcal{J}_{\text{sp}}(\mathbf{\pi}_{\text{tm}}^{i}). \tag{4}\] Considering the specified objectives, the complete objective function for the teammate population is as follows: \[\mathcal{L}_{\text{tm}}=\mathcal{L}_{\text{sp}}+\alpha_{\text{div}}\mathcal{L}_{\text{div}}+\alpha_{\text{incom}}\mathcal{L}_{\text{incom}}, \tag{5}\] where \(\alpha_{\text{div}}\) and \(\alpha_{\text{incom}}\) are adjustable hyper-parameters that control the balance between the three objectives. ### Compatible Coordination Policy Learning After generating a new teammate population that is diverse and incompatible with the controllable agents, we aim to train the controllable agents to effectively cooperate with the newly generated teammate groups, while maintaining the coordination ability with previously trained ones. This requires the controllable agents to possess continual learning ability, as introduced in Sec. 2, where teammate policies appear sequentially in the CT-Dec-POMDP. In the context of evolutionarily generated teammate groups appearing sequentially, employing a single generalized policy network poses challenges due to the existence of multi-modality and varying behaviors among teammate groups. Consequently, conflicts and degeneration in the controllable agents' policies may arise. To address this issue, recent approaches like MACPro [23] have adopted a solution where customized heads are learned for each specific task. Building upon this idea, our approach involves designing a policy network with a shared backbone denoted as \(f_{\phi}\), complemented by multiple output heads represented as \(\{h_{\psi_{i}}\}_{i=1}^{m}\). The shared backbone is responsible for extracting relevant features, while each output head makes the final decisions. With the structured policy network, when paired with the new teammate group's policy \(\mathbf{\pi}_{\text{tm}}^{k+1}\), we first instantiate a new output head \(h_{\psi_{m+1}}\).
Subsequently, our focus shifts to training the controllable agents to effectively cooperate with the new teammate group. \[\mathcal{L}_{\text{com}}=\mathcal{J}((\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm }}^{k+1})). \tag{6}\] It is worth noting that once trained, the output heads \(\{h_{\psi_{i}}\}_{i=1}^{m}\) remain fixed, and during the training process, the gradient \(\mathcal{L}_{\text{com}}\) only propagates through the parameters \(\phi\) and \(\psi_{m+1}\). Training the best response via \(\mathcal{L}_{\text{com}}\) enables us to derive a policy that is capable of cooperating with the new teammate group \(\mathbf{\pi}_{\text{tm}}^{k+1}\). However, the use of one shared backbone poses a challenge as it inevitably leads to forgetting previously learned cooperation, especially when encountering teammates with different behaviors, resulting in failure to cooperate with teammates seen before. One straightforward approach to address this issue is to fix the parameters of the backbone upon completing the training of the first policy head. However, this approach has obvious drawbacks. On one hand, the fixed backbone might fail to extract common features adequately due to the limited coverage of training data. On the other hand, the output head's capacity might be insufficient, leading to suboptimal performance when training to cooperate with new teammates. To mitigate the problem of catastrophic forgetting and enhance the policy's expressiveness, we apply a regularization objective by constraining the parameters from changing abruptly while learning the new output head \(h_{\psi_{m+1}}\): \[\mathcal{L}_{\text{reg}}=\frac{1}{m}\sum_{i=1}^{m}||\phi-\phi_{i}||_{p}, \tag{7}\] where \(\phi_{i}\) is the saved snapshot of the backbone \(\phi\) after obtaining the \(i^{\text{th}}\) output head, and \(||\cdot||_{p}\) is \(l_{p}\) norm. This regularization mechanism helps to retain previously learned knowledge and ensures that the shared backbone adapts to the new teammate. Striking a balance between adaptability and retaining relevant knowledge, we can effectively enhance the cooperative performance of the policy with diverse teammates. The overall objective of the controllable agents when encountering the \((k+1)^{\text{th}}\) teammate group is defined as: \[\mathcal{L}_{\text{ego}}=\mathcal{L}_{\text{com}}+\alpha_{\text{reg}}\mathcal{ L}_{\text{reg}}, \tag{8}\] where \(\alpha_{\text{reg}}\) is a tunable weight. Despite the effectiveness of combining the proposed \(\mathcal{L}_{\text{ego}}\) and the carefully designed policy network architecture, a major limitation lies in its poor scalability as the number of output heads increases linearly with the dynamically generated teammate groups. To address this limitation and achieve better scalability, we propose a resilient head expansion strategy that effectively reduces the number of output heads while maintaining the policy's compatibility: * Upon completing the training of the output head \(h_{\psi_{m+1}}\), we proceed to evaluate the coordination performance of this head and all the existing ones \(\{h_{\psi_{i}}\}_{i=1}^{m+1}\) when paired with the new teammate group's policy \(\mathbf{\pi}_{\text{tm}}^{k+1}\). The coordination performance is measured using the empirical average return \(\{\hat{R}_{i}\}_{i=1}^{m+1}\), where \(\hat{R}_{i}=\frac{1}{N}\sum_{j=1}^{N}G(\tau_{j}^{i})\) represents the average return obtained by executing trajectories \(\tau_{j}^{i}\) generated by applying the \(i^{\text{th}}\) output head. 
* To manage the number of output heads and prevent uncontrolled growth, we choose to retain the newly trained head only if its performance surpasses a certain threshold compared to the best-performing existing head. Formally, we keep the newly trained head if \(\frac{\hat{R}_{m+1}-\max_{i}\{\hat{R}_{i}\}_{i=1}^{m}}{\max_{i}\{\hat{R}_{i}\}_{i=1}^{m}}\geq\lambda\). This approach ensures that we only expand the number of output heads when there is a substantial improvement in performance, indicating that the new teammate group's behavior requires a distinct policy. Otherwise, if the existing output heads are sufficiently generalized to cooperate effectively with the new teammate, no new head will be expanded. By adopting this resilient head expansion strategy, we strike a balance between reducing the number of output heads and maintaining the policy's adaptability, resulting in a more scalable and efficient approach to handling dynamic teammate groups under the continual coordination setting. ### Overall Algorithm In this section, we present a comprehensive overview of the Macop (Multi-agent Compatible Policy Learning) procedure. Macop aims to train controllable agents to effectively cooperate with various teammate groups. During the training phase, Macop employs an evolutionary method to generate diverse and incompatible teammate groups and trains the controllable agents to be compatible with the teammates under the continual setting. In each iteration (generation) \(k\ (k>1)\), we first select the \((k-1)^{\text{th}}\) teammate population \(\mathcal{P}_{\text{tm}}^{k-1}\) as the parent population. Then, the offspring population is derived by training the parent population with \(\mathcal{L}_{\text{tm}}\) in Eqn. 5, i.e., mutation. The teammate groups are constructed based on value-based methods [16, 17], and \(\mathbf{\pi}_{\text{tm}}^{i}(\cdot|s)\) is replaced with \(\mathrm{softmax}(Q_{\text{tm}}^{i}(\cdot|s))\) in \(\mathcal{L}_{\text{div}}\) for practical use. With \(n_{p}\) teammate groups of the parent population and \(n_{p}\) teammate groups of the offspring population, we apply a carefully designed selection scheme as follows. To expedite the training of meaningful teammate groups, we first eliminate the \(\lfloor\frac{n_{p}}{2}\rfloor\) teammate groups with the lowest self-play return, i.e., \(\mathcal{J}_{\text{sp}}(\mathbf{\pi}_{\text{tm}}^{i})=\max_{\mathbf{\pi}_{\text{ego}}\in\Pi_{\text{tm}}^{c}}\mathcal{J}(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}^{i}\rangle)\). Next, we proceed to eliminate the \(\lceil\frac{n_{p}}{2}\rceil\) teammate groups with the highest cross-play return under the controllable agents, i.e., \(\mathcal{J}(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}^{i}\rangle)\), so as to improve incompatibility. Finally, we utilize the remaining \(n_{p}\) teammate groups as the new teammate population of iteration \(k\), i.e., \(\mathcal{P}_{\text{tm}}^{k}\). With the teammate population \(\mathcal{P}_{\text{tm}}^{k}\) in place, we construct \(n_{p}\) continual coordination processes in sequential order and train the controllable agents to learn compatible policies. The controllable agents are optimized using \(\mathcal{L}_{\text{ego}}\) (defined in Eqn. 8), and the output head is expanded as introduced in Sec. 3.2. To determine when the continual process should be terminated, a carefully designed stopping criterion is employed.
The training phase terminates at the \(k^{\text{th}}\) iteration if the minimum cross-play return between \(\mathcal{P}_{\text{tm}}^{k+1}\) and the controllable agents of iteration \(k\) exceeds a certain value, i.e., \(C=\frac{\min_{i}\mathcal{J}(\langle\mathbf{\pi}_{\text{ego}},\mathbf{\pi}_{\text{tm}}^{i}\rangle)}{\frac{1}{n_{p}}\sum_{i=1}^{n_{p}}\mathcal{J}_{\text{sp}}(\mathbf{\pi}_{\text{tm}}^{i})}\geq\xi,\ \mathbf{\pi}_{\text{tm}}^{i}\in\mathcal{P}_{\text{tm}}^{k+1}\). It indicates that the controllable agents at the \(k^{\text{th}}\) iteration can effectively cooperate with the \((k+1)^{\text{th}}\) teammate population even though they have been trained to decrease compatibility, and that the teammate policy space is covered for the given environment. During the testing phase, a meta-testing paradigm is employed to determine which output head is selected to pair with an unknown teammate group. Initially, all output heads are allowed to interact with the teammate group to collect a few trajectories, and their cooperation abilities are evaluated based on empirical returns. The output head with the highest performance is then chosen for testing. The pseudo-codes for both the training and testing phases of our Macop procedure are provided in App. A.3. ## 4 Experiments In this section, we conduct a series of experiments to answer the following questions: 1) Can Macop generate controllable agents capable of effectively collaborating with diverse teammates in different scenarios, surpassing the performance of other methods? 2) Does the evolutionary generation of teammates bring about a noticeable increase in diversity, and how do our controllable agents compare to other baseline models in terms of compatibility? 3) What is the detailed training process of Macop? 4) How does each component and hyperparameter influence Macop? Figure 2: Environments used in this paper; all details can be found in App. A.4. ### Environments and Baselines Here we select four multi-agent coordination environments and design eight scenarios as evaluation benchmarks (Fig. 2). Level-based Foraging (LBF) [14] presents a challenging multi-agent cooperative game, where agents with varying levels navigate through a grid world, collaboratively striving to collect food with different levels. Successful collection occurs when the sum of the levels of the participating agents matches or exceeds the level of the food item. Predator Prey (PP) and Cooperative Navigation (CN) are two benchmarks from the popular MPE environment [13]. In the PP scenario, agents (predators) must jointly pursue the moving adversaries (prey).
First, to assess the impact of the teammate generation process on the coordination ability of the controllable agents, we compare Macop with FCP [16], which initially generates a set of teammate policies independently and then trains the controllable agents to be the best response to the set of teammates. The diversity among teammate polices is achieved solely through network random initialization. Additionally, we examine another population-based training mechanism that trains the teammate population using both \(\mathcal{L}_{\text{sp}}\) and \(\mathcal{L}_{\text{div}}\), aiming to generate teammates with enhanced diversity. This approach, which aligns with existing literature [13], Ding et al.(2023)], is referred to as TrajeDi for convenience. On the other hand, LIPO [1] induces teammate diversity by reducing the compatibility between the teammate policies in the population. Concretely, it trains the teammate population with an auxiliary objective \(\mathcal{J}_{\text{LIPO}}=-\sum_{i\neq j}\mathcal{J}(\langle\mathbf{\pi}_{\text{ tm}}^{i},\mathbf{\pi}_{\text{ tm}}^{j}\rangle)\), where the indices \(i,j\) refer to two randomly sampled teammates in the population. Furthermore, with the teammate generation module held constant, we proceed to compare Macop with Finetune. Finetune directly tunes all the parameters of the controllable agents to coordinate with the currently paired teammate group. We also investigate two other approaches: Single Head, which applies regularization \(\mathcal{L}_{\text{reg}}\) to the backbone but does not utilize the multi-head architecture, and Random Head, which randomly selects an existing head during evaluation, thus verifying the necessity of Macop's testing paradigm. Finally, we employ the popular continual learning method EWC [17] to learn to coordinate with the teammates generated by TrajeDi, thereby providing an overall validation of the effectiveness of Macop. More details are illustrated in App. A.4. ### Competitive Results In this section, we analyze the effectiveness of the controllable agents learned from different methods from two aspects: coordination performance with diverse seen/unseen teammates, and continual learning ability on a sequence of incoming teammates. **Overall Coordination Performance** To ensure a fair comparison of coordination performance, we aggregate all the teammate groups generated by Macop and baselines into an _evaluation set_. For each method, we pair the learned controllable agents with teammate groups in this _evaluation set_ to run 32 episodes for each pairing. The average episodic return over all episodes when pairing with different teammate groups is calculated as the evaluation metric. \begin{table} \begin{tabular}{c|c c c c c c c|c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\begin{tabular}{c} LBF \\ \end{tabular} } & \multicolumn{2}{c}{PP} & \multicolumn{2}{c}{CN} & \multicolumn{2}{c|}{SMAC} & \multirow{2}{*}{Avg. 
Performance} \\ \cline{2-2} \cline{5-8} & LBF1 & LBF4 & & & & & & & \\ \hline \multirow{2}{*}{\begin{tabular}{c} Macop (ours) \\ Single Head \\ Random Head \\ LIFO [14] \\ \end{tabular} } & \(1.14\pm 0.02\) & \(\mathbf{1.64}\pm\mathbf{0.03}\) & \(\mathbf{1.73}\pm\mathbf{0.11}\) & \(\mathbf{2.14}\pm\mathbf{0.53}\) & \(\mathbf{1.66}\pm\mathbf{0.03}\) & \(\mathbf{1.70}\pm\mathbf{0.06}\) & \(\mathbf{1.26}\pm\mathbf{0.42}\) & \(1.56\pm\mathbf{0.17}\) & \(\mathbf{60.44}\) \\ \hline \multirow{2}{*}{\begin{tabular}{c} Single Head \\ Random Head \\ LIFO [14] \\ \end{tabular} } & \(0.98\pm 0.07\) & \(1.10\pm 0.32\) & \(0.87\pm 0.58\) & \(1.44\pm 0.52\) & \(1.01\pm 0.49\) & \(0.99\pm 0.24\) & \(1.06\pm 0.14\) & \(1.25\pm 0.40\) & \(8.92\) \\ \hline \multirow{2}{*}{\begin{tabular}{c} LIPO [1] \\ \end{tabular} } & \(0.92\pm 0.05\) & \(0.85\pm 0.10\) & \(0.88\pm 0.17\) & \(1.18\pm 0.39\) & \(0.98\pm 0.23\) & \(0.92\pm 0.11\) & \(0.97\pm 0.14\) & \(1.28\pm 0.21\) & \(-0.25\) \\ \hline \multirow{2}{*}{\begin{tabular}{c} LIPO [14] \\ \end{tabular} } & \(0.09\pm 0.09\) & \(1.53\pm 0.14\) & \(1.64\pm 0.21\) & \(1.59\pm 0.52\) & \(1.13\pm 0.41\) & \(1.33\pm 0.25\) & \(1.19\pm 0.18\) & \(1.08\pm 0.21\) & \(36.27\) \\ \hline \multirow{2}{*}{\begin{tabular}{c} TripD [14] \\ EWC [14] \\ \end{tabular} } & \(1.16\pm 0.06\) & \(1.34\pm 0.11\) & \(1.68\pm 0.33\) & \(1.56\pm 0.52\) & \(1.29\pm 0.23\) & \(1.53\pm 0.11\) & \(1.25\pm 0.12\) & \(1.57\pm 0.16\) & \(42.26\) \\ \hline \multirow{2}{*}{\begin{tabular}{c} FMC [14] \\ \end{tabular} } & \(0.97\pm 0.08\) & \(0.99\pm 0.16\) & \(0.83\pm 0.48\) & \(0.77\pm 0.43\) & \(0.57\pm 0.37\) & \(0.71\pm 0.27\) & \(1.03\pm 0.13\) & \(0.61\pm 0.09\) & \(-18.82\) \\ \hline \multirow{2}{*}{ \begin{tabular}{c} Finetune \\ \end{tabular} } & \(1.00\pm 0.16\) & \(1.00\pm 0.27\) & \(1.00\pm 0.58\) & \(1.00\pm 0.68\) & \(1.00\pm 0.31\) & \(1.00\pm 0.24\) & \(1.00\pm 0.17\) & \(1.00\pm 0.23\) & / \\ \hline \hline \end{tabular} \end{table} Table 1: Average test return \(\pm\) std when paired with teammate groups from _evaluation set_ in different scenarios. We re-scale the value by taking the result of Finetune as an anchor and present average performance improvement w.r.t Finetune. The best result of each column is highlighted in **bold**. The symbols ‘+’, ‘\(\approx\)’, and ‘-’ indicate that the result is significantly inferior to, almost equivalent to, and superior to Macop, respectively, based on the Wilcoxon rank-sum test [12] with confidence level \(0.05\). This metric serves as a comprehensive measure of the overall coordination performance and generalization ability of the controllable agents. We run each method for five distinct random seeds. As depicted in Tab. 1, we observe that approaches such as FCP, TrajeDi, and LIPO exhibit limited coordination generalization ability in different scenarios, especially when the population size is restricted. This highlights the need for ample coverage in teammate policy space to establish a robust coordination policy. Intriguingly, among the three methods mentioned, we found no significant differences, indicating that certain design elements, such as instance diversity among teammates, fail to fundamentally address this challenge. In contrast, when using generated teammates, simply finetuning the multi-agent policy or employing widely-used continual approaches like EWC exhibits inferior coordination performance, as confirmed by our experiments and in line with the findings in MACPro [21]. 
This suggests that specialized designs tailored for multi-agent continual settings play a crucial role. On the other hand, Macop exhibits a remarkable performance advantage over nearly all baselines across various scenarios, demonstrating that controllable agents trained by Macop possess robust coordination abilities. Furthermore, we discovered that the Single Head architecture struggles due to the presence of multi-modality in teammate behavior, underscoring the necessity of a multi-head architecture. An effectively designed testing paradigm, utilizing multiple available learned heads, proves indispensable. It is worth noting that Random Head fails to select the optimal head for evaluation, resulting in a degradation in performance. Our pipeline relies on efficient design for continual learning, and more comprehensive results on the necessity of each component can be found in Sec. 4.5. **Continual Learning Ability** To further investigate the continual learning ability of different methods, we utilize all teammate groups generated by Macop and all baselines to construct a fixed teammate sequence. Four continual learning methods are applied to train the controllable agents to coordinate with this teammate sequence in a continual manner, including Macop, CLEAR [17], EWC [14] and Finetune. CLEAR is a replay-based method which stores some data of previously trained teammates to rehearse the controllable agents when training with the current teammate group. For a principled assessment of the continual learning ability, we introduce two metrics inspired by the concepts used in continual learning [23] within our CT-Dec-POMDP framework: 1) BWT\(=\frac{1}{K-1}(\sum_{k=2}^{K}\frac{1}{k-1}\sum_{j=1}^{k-1}(\alpha_{k}^{j}- \alpha_{j}^{j}))\). BWT (Backward Transfer) evaluates the average influence of learning to cooperate with the newest teammate group on previously encountered teammates. 2) FWT\(=\frac{1}{K-1}(\sum_{k=2}^{K}\frac{1}{k-1}\sum_{j=2}^{k}(\alpha_{j}^{j}- \tilde{\alpha}_{j}))\). FWT (Forward Transfer) assesses the average influence of all previously encountered teammate groups on the coordination performance of the new teammate. \begin{table} \begin{tabular}{c|c c c c c c c} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{LBF4} & \multicolumn{2}{c}{PP1} & \multicolumn{2}{c}{CN3} & \multicolumn{2}{c}{SMAC1} \\ \cline{2-9} & BWT & FWT & BWT & FWT & BWT & FWT & BWT & FWT \\ \hline Macop & \(\mathbf{-0.01\pm 0.02}\) & \(\mathbf{0.07\pm 0.07}\) & \(\mathbf{0.03\pm 0.04}\) & \(-0.16\pm 0.18\) & \(\mathbf{0.04\pm 0.06}\) & \(\mathbf{0.10\pm 0.09}\) & \(\mathbf{-0.02\pm 0.11}\) & \(0.07\pm 0.19\) \\ CLEAR & \(-0.05\pm 0.07\) & \(\mathbf{0.07\pm 0.06}\) & \(0.01\pm 0.08\) & \(\mathbf{-0.05\pm 0.11}\) & \(-0.16\pm 0.15\) & \(0.00\pm 0.20\) & \(-0.50\pm 0.32\) & \(0.04\pm 0.35\) \\ EWC & \(-0.30\pm 0.08\) & \(0.05\pm 0.07\) & \(-0.34\pm 0.08\) & \(-0.05\pm 0.13\) & \(-0.20\pm 0.11\) & \(0.03\pm 0.11\) & \(-1.02\pm 0.47\) & \(0.05\pm 0.31\) \\ Finetune & \(-0.34\pm 0.07\) & \(0.04\pm 0.06\) & \(-0.37\pm 0.07\) & \(-0.05\pm 0.22\) & \(-0.31\pm 0.11\) & \(0.05\pm 0.10\) & \(-1.24\pm 0.51\) & \(\mathbf{0.33\pm 0.35}\) \\ \hline \end{tabular} \end{table} Table 2: Continual Learning Ability. Average BWT/FWT \(\pm\) std of four different methods in different evaluated environments. Figure 3: Teammate policy space analysis. (a) The t-SNE projections of the self-play trajectory features of Macop’s generated teammate groups in CN2. 
(b)(c) The cross-play returns of Macop’s and TrajeDi’s generated teammate groups in LBF4. (d) The change in TrajeDi’s coordination ability with varying population sizes in LBF4 and CN2, compared with Macop. Here, \(\alpha_{k}^{j}\) represents the coordination performance of the controllable agents paired with the \(j^{\text{th}}\) teammate group after training to cooperate with the \(k^{\text{th}}\) teammate group, measured by the empirical episodic return. Additionally, \(\tilde{\alpha}_{j}\) denotes the coordination performance of a randomly initialized complementary policy trained with the \(j^{\text{th}}\) teammate group. We record experimental results in Tab. 2. At first glance, Finetune demonstrates the worst BWT among all methods, validating the necessity of algorithm design to prevent catastrophic forgetting. However, even popular continual learning methods, CLEAR and EWC, grapple with forgetting to some degree. In contrast, Macop achieves the best BWT in all evaluated environments. As for FWT, Macop obtains a competitive result compared with other methods. Taking both BWT and FWT into consideration, Macop demonstrates a robust and adept continual learning ability. This aptitude empowers controllable agents to progressively acquire coordination proficiency with diverse teammates, and aligns seamlessly with the expanding coverage of the teammate policy space. ### Teammate Policy Space Analysis To investigate whether Macop is capable of generating teammate groups with diverse behaviors, a straightforward method involves comparing the self-play trajectories of different teammate groups. Concretely, we first learn a transformer-based encoder to map trajectories into a low-dimensional feature space (details will be provided in App. A.4.3). We subsequently encode the teammates' self-play trajectories generated by Macop into the feature space. For visualization, we select 10 teammate groups from the CN2 scenario and extract their trajectory features, as shown in Fig. 3(a). The projection displays a notable dispersion, validating that teammate groups generated by Macop exhibit diverse behaviors as expected. Furthermore, we conducted experiments to assess the compatibility among the generated teammate groups. In accordance with Def. 3, we paired different teammate groups in LBF4. The cross-play returns are presented in Fig. 3(b)(c), generated by Macop and TrajeDi, respectively. It is evident that when pairing two distinct groups from Macop, there is a noticeable drop in returns outside the main diagonal, indicating a lack of compatibility among the teammate groups generated by Macop. Conversely, the cross-play returns of TrajeDi's teammate groups are nearly identical to their self-play returns, suggesting a significantly lower level of incompatibility among teammate groups generated by TrajeDi because of poorer coverage of the teammate policy space. To further explore whether methods without dynamically generating teammates can address policy space coverage by increasing the population size, we trained controllable agents using TrajeDi, varying in population size from 1 to 15. Subsequently, we evaluated the coordination ability using the _evaluation set_, as depicted in Fig. 3(d). The results clearly illustrate that coordination ability improves as the population size increases until convergence is reached. However, a considerable performance gap between TrajeDi and Macop persists. 
Our analysis leads us to the conclusion that in intricate scenarios with multi-modality, vanilla methods that lack dynamic teammate generation struggle with new and unfamiliar teammates due to inadequate coverage of the teammates' policy space. On the contrary, Macop's deliberate generation of incompatible teammates contributes to a more comprehensive coverage of the teammate policy space, ultimately enhancing its coordination ability. ### Learning Process Analysis To gain a comprehensive understanding of Macop's functioning, it's essential to delve into its learning process, which involves generating incompatible teammates and refining controllable agents until convergence is achieved. Fig. 4 illustrates the process in PP1, showcasing key aspects, including the number of teammate groups generated, the number Figure 4: Macop’s learning process analysis. (a)(b) The self-play trajectories of the first four/five teammate groups. (c) The change of the number of trained teammate groups, the number of existing heads, and the stop criterion \(C\) on each iteration. (d) Coordination performance comparison with different teammate groups in the _evaluation set_. of existing heads, and the stop criterion \(C\), all presented for each iteration (Fig. 4(c)). In the first iteration, the teammate generation module produces a population of four distinct teammate groups, with three specializing in capturing the first prey and one focused on the second prey (Fig. 4(a)). However, the population lacks desired diversity, as none of the groups learn to catch the remaining third prey. As for the controllable agents, they acquire the ability to collaborate with their teammates: Head 1 coordinates with those capturing the first prey, while Head 2 interacts with the group targeting the second prey. During the second iteration, the teammate generation module generates new teammates incompatible with the controllable agents, expanding the coverage of the teammate policy space. As shown in Fig. 4(b), a new teammate group (identified as "tm5" in blue) successfully acquires the skill to capture the last prey, showcasing a completely novel behavior. Consequently, when the controllable agents complete their training with this new group, they establish a new head for better coordination. The dynamic interplay between the adversarial teammate generation module and the training of controllable agents persists until the seventh iteration, resulting in an increased number of teammate groups and output heads. In this final iteration, the teammate generation module endeavors to generate seemingly "incompatible" teammates as it has throughout the training process, but it encounters failure. The generated teammate groups up to this point have already effectively covered a wide range of the teammate policy space. The controllable agents have successfully acquired the ability to coordinate with a sufficiently diverse array of teammates. The newly generated teammate groups do not exhibit enough incompatibility, as indicated by the stop criterion surpassing the specified threshold \(\xi\). This signifies that the cross-play performance between the controllable agents and these new "incompatible" teammates is comparable to the self-play performance of the teammate groups. It's worth noting that the \(C\) value from the second iteration also exceeds the threshold, yet a minimum iteration count of 4 is enforced to ensure thorough exploration of the teammate policy space. 
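For readers who prefer code, the alternating generate-then-train procedure just described can be summarised by the sketch below; every callable, together with the default threshold and iteration limits, is a placeholder we introduce for illustration rather than the authors' implementation.

```python
from typing import Any, Callable, List, Tuple

def macop_outer_loop(
    agents: Any,
    generate_teammates: Callable[[Any], List[Any]],  # adversarial generation of incompatible teammate groups
    train_agents: Callable[[Any, List[Any]], Any],   # multi-head continual training of the controllable agents
    stop_value: Callable[[Any, List[Any]], float],   # criterion C comparing cross-play to self-play performance
    xi: float = 0.9,            # threshold on C (assumed value, not the paper's setting)
    min_iterations: int = 4,    # minimum iteration count mentioned above
    max_iterations: int = 50,
) -> Tuple[Any, List[Any]]:
    """Sketch of the generate-then-train loop described in the text.
    Each callable stands in for an environment-specific component."""
    all_teammates: List[Any] = []
    for it in range(1, max_iterations + 1):
        new_groups = generate_teammates(agents)    # teammates designed to be incompatible with current agents
        all_teammates.extend(new_groups)
        agents = train_agents(agents, new_groups)  # may add new output heads for novel teammate behaviors
        C = stop_value(agents, new_groups)         # C near self-play level: no real incompatibility remains
        if C > xi and it >= min_iterations:
            break
    return agents, all_teammates
```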
This automated and self-regulating learning process within Macop concludes after the seventh iteration. As a result of this process, Macop produces a notable set of 28 teammate groups with remarkable diversity, along with controllable agents that possess 10 heads. This is evidenced by their robust coordination abilities, which are prominently illustrated in Fig. 4(d). ### Ablation and Sensitivity Studies We here conduct ablation studies on CN2 and SMAC1 to comprehensively assess the impacts of multiple modules. _No Incom_, _No Div_, and _No Incom & Div_, are derived by setting \(\alpha_{\text{incom}}=0\), \(\alpha_{\text{div}}=0\), and \(\alpha_{\text{incom}}=\alpha_{\text{div}}=0\), respectively. Furthermore, we examine the impact of \(\mathcal{L}_{\text{reg}}\), and designate this variant as _No Reg_ to explore the effects of regularization on the backbone network \(\phi\). To ensure a fair comparison, we incorporate the teammate groups generated by the four ablations into the _evaluation set_. The results, as illustrated in Fig. 5(a), reveal essential insights into the functioning of Macop. Removing \(\mathcal{L}_{\text{incom}}\) or \(\mathcal{L}_{\text{div}}\) leads to a performance degradation compared to the complete Macop, highlighting the significant contributions to the teammate diversity. Moreover, No Incom & Div exhibits a substantial performance degradation, verifying the necessity of actively generating diverse teammates, instead of relying solely on random network initialization. Furthermore, No Reg demonstrates the poorest performance among all the variants. The absence of regularization on the backbone network undermines the controllable agents' continual Figure 5: Ablation and sensitivity studies. learning ability, weakening their coordination capability with diverse teammates. These findings emphasize that each module plays an indispensable role in Macop. As Macop includes multiple hyperparameters, we conduct experiments to investigate their sensitivity. The teammate groups generated by different hyperparameter settings are also incorporated into the _evaluation set_ for a fair comparison. One important hyperparameter is the population size \(n_{p}\). On one hand, with a very small population, Macop cannot cover the teammate policy space in an efficient manner. On the other hand, setting the population size to an excessively large number will unnecessarily increase the running time of Macop, reducing the overall efficiency. As shown in Fig. 5(b), we can find that when \(n_{p}\leq 4\), the performance of Macop does improve with increasing population size. However, there is no further improvement as we continue to increase \(n_{p}\), proving that \(n_{p}=4\) is the best setting in scenario PP2. More detailed analysis of other important hyperparameters is provided in App. A.5. ## 5 Final Remarks We propose a novel approach to multi-agent policy learning called Macop, which is designed to enhance the coordination abilities of controllable agents when working with diverse teammates. Our approach starts by framing the problem as an CT-Dec-POMDP. This framework entails training the ego-system with sequentially generated groups of teammates until convergence is achieved. Empirical results obtained across various environments, compared against multiple baseline methods, provide strong evidence of its effectiveness. 
Looking ahead, in few-shot settings where a handful of trajectories must be collected at deployment time in order to select an optimal head, developing mechanisms such as context-based recognition is a promising direction. Another intriguing avenue for future research is to harness the capabilities of large language models [20] such as ChatGPT [17] to speed up the learning process and further enhance the generalization ability of our approach.
2302.00145
Controllability of discrete-time linear systems on solvable Lie groups
The objective of this paper is to study the controllability of discrete-time linear control systems on solvable Lie groups. In the special case of nilpotent Lie groups, a necessary and sufficient condition for controllability is established. Furthermore, the class of discrete-time linear systems on the two-dimensional affine Lie group is constructed and a condition for the controllability of these systems is also stated.
Thiago Cavalheiro, Alexandre Santana, João Cossich, Victor Ayala
2023-01-31T23:42:36Z
http://arxiv.org/abs/2302.00145v6
# Controllability of discrete-time linear systems on solvable Lie groups ###### Abstract The aim of this paper is to study the controllability of discrete-time linear control systems on solvable Lie groups. In the special case of nilpotent Lie groups, a necessary and sufficient condition for controllability is established. Moreover, all discrete-time linear systems on the two-dimensional affine Lie group are constructed and a condition for the controllability of these systems is stated. ## 1 Introduction Continuous-time linear control systems on \(\mathbb{R}^{d}\) are given by a family of differential equations of the form \[\Sigma_{c}:\ \dot{x}(t)=Ax(t)+Bu(t),\ t\in\mathbb{R},\] parametrized by control functions \(u\in\mathcal{U}:=\{u:\mathbb{R}\to U\subset\mathbb{R}^{m};\ u\mbox{ is piecewise continuous}\}\), where \(A\in\mathbb{R}^{d\times d}\) and \(B\in\mathbb{R}^{d\times m}\). This class of systems is widely known and widely applied (see Ogata [16]). The study of the controllability of this system can be found e.g. in Sontag [20], where a classical result is proved establishing that the necessary and sufficient conditions for controllability are \(\mbox{rank}[B\ AB\ \cdots A^{d-1}B]=d\) and that \(A\) admits only eigenvalues with zero real part. However, when it comes to modelling digital signal processing, for example, the discrete-time version of the above system becomes more effective (see Wilsky [21]). Such a class of systems is given by \[\Sigma_{d}:\ x_{k+1}=Ax_{k}+Bu_{k},\ k\in\mathbb{N}_{0},\ u=(u_{k})_{k\in \mathbb{N}_{0}}\in\mathcal{U}:=U^{\mathbb{N}_{0}},\] where \(U\subset\mathbb{R}^{m}\) is non-empty, \(A\in\mbox{Gl}(d,\mathbb{R})\) and \(B\in\mathbb{R}^{d\times m}\). Regarding the controllability of \((\Sigma_{d})\), Colonius et al. [8] proved that such a system is controllable if, and only if, \(\mbox{rank}[B\ AB\ \cdots A^{d-1}B]=d\) and all eigenvalues of \(A\) have absolute value equal to \(1\). Moreover, they characterized the control sets, that is, proper subsets of the state space where the approximate controllability property holds. A natural extension of the system \((\Sigma_{c})\) is the so-called linear system on a Lie group \(G\), that is, a family of differential equations \(\dot{g}(t)=\mathcal{X}(g(t))+u(t)X(g(t))\), where \(\mathcal{X}\) comes from an automorphism of \(G\), \(X\) is a right-invariant vector field and \(u\in\mathcal{U}\). This class of systems has been extensively studied, and Lie theory has proved powerful in obtaining results about the controllability, conjugacy and invariance entropy of such systems (e.g. Ayala and Da Silva [2, 3], Ayala, Da Silva and Zsigmond [4], Da Silva [10, 11], Jouan [13] and Jouan and Dath [14]). The discrete-time version of linear systems on Lie groups was introduced by Colonius et al. in [9]. Essentially, its dynamics are given by a family of difference equations \[\Sigma\ :\ g_{k+1}=f(g_{k},u_{k}),\ k\in\mathbb{N}_{0},\ u_{k}\in U,\] on a connected Lie group \(G\), where \(0\in U\subset\mathbb{R}^{m}\), \(f_{0}:=f(\cdot,0)\) is an automorphism of \(G\) and, for each \(u\in U\), \(f_{u}:=f(\cdot,u):G\to G\) satisfies \[f_{u}(g)=f_{u}(e)\cdot f_{0}(g)\ \text{for all}\ g\in G,\] where "\(\cdot\)" denotes the product of \(G\). The authors established a formula for the outer invariance entropy of admissible pairs in terms of the eigenvalues of the Lie algebra automorphism \(d(f_{0})_{e}:\mathfrak{g}\rightarrow\mathfrak{g}\). 
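Before moving to the Lie group setting, the Euclidean criterion quoted above for \((\Sigma_{d})\) is easy to check numerically. The sketch below (our own illustration, not taken from the cited works) tests the Kalman rank condition together with the unit-modulus eigenvalue condition; the rotation example and the tolerance are arbitrary choices.

```python
import numpy as np

def kalman_rank(A: np.ndarray, B: np.ndarray) -> int:
    """Rank of the controllability matrix [B, AB, ..., A^{d-1}B]."""
    d = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ B for k in range(d)]
    return int(np.linalg.matrix_rank(np.hstack(blocks)))

def is_controllable_discrete(A: np.ndarray, B: np.ndarray, tol: float = 1e-9) -> bool:
    """Criterion quoted above for x_{k+1} = A x_k + B u_k: full Kalman rank
    and all eigenvalues of A with absolute value equal to 1."""
    d = A.shape[0]
    full_rank = kalman_rank(A, B) == d
    on_unit_circle = bool(np.all(np.abs(np.abs(np.linalg.eigvals(A)) - 1.0) < tol))
    return full_rank and on_unit_circle

# A planar rotation has both eigenvalues of modulus 1; with B = (1, 0)^T the pair is controllable.
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
B = np.array([[1.0], [0.0]])
print(is_controllable_discrete(A, B))  # True
```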
In this context, the aim of our paper is to study the controllability of the above system \((\Sigma)\). Our results were inspired by results of Da Silva [10], where continuous-time linear systems are considered. Specifically, we show that if all eigenvalues of the automorphism \(d(f_{0})_{e}\) associated with a discrete-time linear system on a solvable Lie group have absolute value equal to \(1\), and the reachable and the controllable set from the identity are open, then the system is controllable. It is also proved that the converse of this result holds when \(G\) is nilpotent. This paper is structured as follows: Section 2 presents Lie-theoretic notation and facts, as well as some general properties of discrete-time linear control systems. Section 3 gives a sufficient condition for the controllability of discrete-time linear systems on connected solvable Lie groups. In addition, the class of linear systems on the two-dimensional Lie group \(\mathrm{Aff}(2,\mathbb{R})\) is constructed and a condition for controllability is derived. Section 4 deals with nilpotent Lie groups, where it is proved that the sufficient condition for controllability obtained for solvable Lie groups is also necessary. ## 2 Preliminaries This section establishes the notation, basic concepts and results needed for the development of this work. It also presents the definition of discrete-time linear control systems on Lie groups together with some of their properties. ### Decompositions of Lie algebras and Lie groups In this subsection we recall the dynamical decompositions of Lie algebras and connected Lie groups introduced by Ayala, Roman-Flores and Da Silva [1] for a given automorphism. For a Lie algebra \(\mathfrak{g}\), the generalized eigenspace of an automorphism \(\xi:\mathfrak{g}\rightarrow\mathfrak{g}\) associated with an eigenvalue \(\alpha\) is given by \[\mathfrak{g}_{\alpha}=\{X\in\mathfrak{g}:(\xi-\alpha)^{n}X=0,\ \text{for some}\ n\in\mathbb{N}\}.\] Due to [1, Proposition 2.1], the subspaces \[\mathfrak{g}^{+}=\bigoplus_{|\alpha|>1}\mathfrak{g}_{\alpha},\ \mathfrak{g}^{-}= \bigoplus_{|\alpha|<1}\mathfrak{g}_{\alpha},\ \mathfrak{g}^{0}=\bigoplus_{|\alpha|=1}\mathfrak{g}_{\alpha} \tag{1}\] are Lie subalgebras of \(\mathfrak{g}\), called the **unstable**, **stable** and **center** Lie subalgebras of \(\mathfrak{g}\) with respect to \(\xi\), respectively. Moreover, the decomposition \[\mathfrak{g}=\mathfrak{g}^{+}\oplus\mathfrak{g}^{0}\oplus\mathfrak{g}^{-} \tag{2}\] holds. One can also define the **center-unstable** and **center-stable** Lie subalgebras \[\mathfrak{g}^{+,0}=\mathfrak{g}^{+}\oplus\mathfrak{g}^{0}\ \ \text{and}\ \ \mathfrak{g}^{-,0}=\mathfrak{g}^{-}\oplus\mathfrak{g}^{0}, \tag{3}\] respectively. If \(G\) is a connected Lie group with Lie algebra \(\mathfrak{g}\), we denote by \(G^{+}\), \(G^{-}\), \(G^{0}\), \(G^{+,0}\) and \(G^{-,0}\) the connected Lie subgroups of \(G\) corresponding to \(\mathfrak{g}^{+}\), \(\mathfrak{g}^{-}\), \(\mathfrak{g}^{0}\), \(\mathfrak{g}^{+,0}\) and \(\mathfrak{g}^{-,0}\), respectively. We refer to these subgroups as the **unstable**, **stable**, **center**, **center-unstable** and **center-stable** subgroups, respectively. **Remark 1**: 1. _It follows from_ _[_1_, Proposition 2.1]_ _that the stable and unstable subalgebras are nilpotent. Moreover, since_ \([\mathfrak{g}^{+},\mathfrak{g}^{0}]\subset\mathfrak{g}^{+}\)_, then_ \(\mathfrak{g}^{+}\) _is an ideal of_ \(\mathfrak{g}^{+,0}\)_. 
Consequently,_ \(G^{+}\) _is a normal subgroup of_ \(G^{+,0}\)_. The same holds for_ \(\mathfrak{g}^{-}\subset\mathfrak{g}^{-,0}\) _and_ \(G^{-}\subset G^{-,0}\)_._ 2. _Item 1 above implies that_ \(G^{+,0}=G^{+}G^{0}=G^{0}G^{+}\) _and_ \(G^{-,0}=G^{-}G^{0}=G^{0}G^{-}\)_._ Given a homomorphism \(\xi:\mathfrak{g}\to\mathfrak{g}\), a Lie subalgebra is \(\xi\)-invariant if \(\xi(\mathfrak{h})\subset\mathfrak{h}\). In case when \(\xi\) is an automorphism and \(\mathfrak{h}\) is \(\xi\)-invariant, it is clear that \(\xi^{k}(\mathfrak{h})=\mathfrak{h}\), for all \(k\in\mathbb{Z}\). On the Lie group level, given a homomorphism \(\phi\) of \(G\), we say that a Lie subgroup \(H\) of \(G\) is \(\phi\)-invariant if \(\phi(H)\subset H\). Analogously, if \(\phi\) is an automorphism and \(H\) is \(\phi\)-invariant, it also holds that \(\phi^{k}(H)=H\), for any \(k\in\mathbb{Z}\). Note also that if \(H\) is connected and \(\mathfrak{h}\) is its Lie algebra, \(H\) is \(\phi\)-invariant iff \(\mathfrak{h}\) is \(d\phi_{e}\)-invariant, where \(e\) denotes the identity of \(G\). Next lemma shows that the above decompositions are preserved by a surjective Lie algebra homomorphism since this homomorphism commutes with two automorphisms. **Lemma 2**: _Let \(\eta:\mathfrak{g}\to\mathfrak{h}\) a surjective Lie algebra homomorphism, \(\xi_{1}\) and \(\xi_{2}\) automorphisms of \(\mathfrak{g}\) and \(\mathfrak{h}\), respectively. If \(\eta\circ\xi_{1}=\xi_{2}\circ\eta\), then_ \[\eta(\mathfrak{g}^{+})=\mathfrak{h}^{+},\ \eta(\mathfrak{g}^{-})=\mathfrak{h}^{ -}\ \text{and}\ \eta(\mathfrak{g}^{0})=\mathfrak{h}^{0}.\] _In addition, if \(G\) and \(H\) are connected Lie groups associated with \(\mathfrak{g}\) and \(\mathfrak{h}\), respectively, and \(\pi:G\to H\) is a surjective homomorphism such that \(d\pi_{e}\circ\xi_{1}=\xi_{2}\circ d\pi_{e}\), then_ \[\pi(G^{+})=H^{+},\ \pi(G^{-})=H^{-}\ \text{and}\ \pi(G^{0})=H^{0}.\] **Proof.** For an eigenvalue \(\alpha\) of \(\xi_{1}\), there exists \(n\in\mathbb{N}\) with \((\xi_{1}-\alpha)^{n}X=0\), for all \(X\in\mathfrak{g}\). By hypothesis, \[(\xi_{2}-\alpha)^{n}d\pi_{e}(X)=d\pi_{e}(\xi_{1}-\alpha)^{n}(X)=0.\] Since \(\eta\) is surjective, \(\alpha\) is also an eigenvalue of \(\xi_{2}\), hence \(\eta(\mathfrak{g}_{\alpha})\subset\mathfrak{h}_{\alpha}\) which shows that \(\eta(\mathfrak{g}^{+})\subset\mathfrak{h}^{+}\), \(\eta(\mathfrak{g}^{-})\subset\mathfrak{h}^{-}\) and \(\eta(\mathfrak{g}^{0})\subset\mathfrak{h}^{0}\). The equality follows from subjectivity of \(\eta\) and decomposition (2) applied to \(\mathfrak{g}\) and \(\mathfrak{h}\). The equalities at group level holds due subjectivity of \(\pi\) and the equality \(\pi(\exp_{G}(X))=\exp_{H}(d\pi_{e}X)\). \(\square\) In sequence we get that solvable groups can be decomposable as a product of subgroups \(G^{+}\), \(G^{0}\) and \(G^{-}\). **Proposition 3**: _If \(G\) is a connected solvable Lie group with Lie algebra \(\mathfrak{g}\) and \(\phi\) an automorphism of \(G\), then the Lie subgroups associated to the decomposition (2) of \(d\phi_{e}\) satisfy \(G=G^{+,0}G^{-}=G^{-,0}G^{+}\)._ **Proof.** In order to prove that \(G=G^{+,0}G^{-}\), we proceed by induction on \(\dim G\). If \(G\) is unidimensional, the group is abelian and the result follows. Now, suppose the result is true for all connected solvable Lie group with dimension less than \(d\). 
Let \(G\) be a Lie group with \(\dim G=d\) and consider the derivative series of its Lie algebra \(\mathfrak{g}\) \[\mathfrak{g}=\mathfrak{g}^{(0)}\supset\mathfrak{g}^{(1)}\supset\cdots \supset\mathfrak{g}^{(k)}\supset\mathfrak{g}^{(k+1)}=\{0\},\] where \(\mathfrak{g}^{(i)}=[\mathfrak{g}^{(i-1)},\mathfrak{g}^{(i-1)}]\) for \(i\in\{1,\ldots,k\}\). Each \(\mathfrak{g}^{(i)}\) is an ideal of \(\mathfrak{g}\), hence each Lie subgroup \(G^{(i)}\) associated to \(\mathfrak{g}^{(i)}\) is normal in \(G\). The \(d\phi_{e}\)-invariance of \(\mathfrak{g}^{(i)}\) implies the \(\phi\)-invariance of \(G^{(i)}\). In particular, \(G^{(k)}\) is \(\phi\)-invariant, abelian and normal, so its closure \(\overline{G^{(k)}}\) is a closed Lie subgroup with the same properties. The Lie group \(H:=G/\overline{G^{(k)}}\) is solvable with \[\dim H=\dim G-\dim\overline{G^{(k)}}\leq\dim G-\dim G^{(k)}<\dim G,\] because \(\dim G^{(k)}>0\). Denote by \(\pi\) the natural projection of \(G\) on \(H\) and note that \[H=H^{+,0}H^{-}=\pi(G^{+,0})\pi(G^{-})=\pi(G^{+,0}G^{-})\] by Lemma 2 and induction hypothesis. Then \(G=G^{+,0}G^{-}\overline{G^{(k)}}=G^{+,0}\overline{G^{(k)}}G^{-}\). If \(\overline{\mathfrak{g}^{(k)}}\) denotes the Lie algebra of \(\overline{G^{(k)}}\), one has that \(\overline{\mathfrak{g}^{(k)}}^{+,0}=\overline{\mathfrak{g}^{(k)}}\cap \mathfrak{g}^{+,0}\) and \(\overline{\mathfrak{g}^{(k)}}^{-}=\overline{\mathfrak{g}^{(k)}}\cap \mathfrak{g}^{-}\) by \(d\phi_{e}\)-invariance of \(\overline{\mathfrak{g}^{(k)}}\). Moreover, since it is abelian, it holds that \(\overline{G^{(k)}}=\overline{G^{(k)}}^{+,0}\overline{G^{(k)}}^{-}\), which shows that \(\overline{G^{(k)}}^{+,0}\subset G^{+,0}\) and \(\overline{G^{(k)}}^{-}\subset G^{-}\), therefore \[G=G^{+,0}\overline{G^{(k)}}G^{-}=G^{+,0}\overline{G^{(k)}}^{+,0}\overline{G^{ (k)}}^{-}\ G^{-}\subset G^{+,0}G^{-}\subset G,\] and the statement holds. The equality \(G=G^{-,0}G^{+}\) follows analogously. \(\square\) **Proposition 4**: _Every compact, connected and \(\phi\)-invariant Lie subgroup of \(G\) is contained in \(G^{0}\)._ **Proof.** Let \(H\) be a Lie subgroup of \(G\) satisfying the conditions above. If \(\mathfrak{h}\) denotes the Lie algebra of \(H\), then [12, Corollary 4.25] implies that \(\mathfrak{g}=\mathfrak{z}(\mathfrak{h})\oplus[\mathfrak{h},\mathfrak{h}]\), where \(\mathfrak{z}(\mathfrak{h})\) is the center of h and \([\mathfrak{h},\mathfrak{h}]\) is semisimple. Note that the \(\phi\)-invariance of \(H\) implies the \(d\phi_{e}\) invariance of \(\mathfrak{h}\). The connected Lie group associated to \(\mathfrak{z}(\mathfrak{h})\) is the connected component of \(Z(G)\), denoted by \(Z(G)_{0}\). Since \(Z(G)_{0}\) is compact and abelian, the subset \(S\subset Z(G)_{0}\) of all elements with finite order is dense in \(Z(G)_{0}\). In this case, given \(X\in\mathfrak{z}(\mathfrak{h})\) such that \(\exp X\in S\), there is \(k\in\mathbb{N}\) with \((\exp X)^{k}=e\), hence \[e=\phi((\exp X)^{k})=(\phi(\exp X))^{k}=\exp(d\phi_{e})^{k}X.\] Since the exponential map of an abelian Lie group is a diffeomorphism and \((d\phi_{e})^{k}X=0\), then \(X=0\). Hence \(\mathfrak{z}(\mathfrak{h})\) is trivial. In addition, by compactness of \(H\) and semisimplicity of \([\mathfrak{h},\mathfrak{h}]\) it follows that the Cartan-Killing form restricted to \([\mathfrak{h},\mathfrak{h}]\) is non-degenerated and negative defined (see [12, Theorem 1.45 and Corollary 4.26]). 
Since \(d\phi_{e}\) is an automorphism, it is an isometry by Cartan-Killing form, hence all eigenvalues of \(d\phi_{e}|_{[\mathfrak{h},\mathfrak{h}]}\) has absolute value equal to \(1\), which yields \(\mathfrak{h}=[\mathfrak{h},\mathfrak{h}]\subset\mathfrak{g}^{0}\). \(\square\) By Proposition 4 the following result is immediate. **Corollary 5**: _If \(\phi\) is an automorphism of a compact Lie group \(G\), then \(d\phi_{e}\) has only eigenvalues with absolute value equal to \(1\)._ The next lemma will often be used and can be found in [20, Lemma 3.1]. **Lemma 6**: _Let \(G\) be a Lie group with Lie algebra \(\mathfrak{g}\) and \(N\) a normal Lie subgroup of \(G\) with Lie algebra \(\mathfrak{n}\). Then for every \(X\in\mathfrak{g}\), we have that_ \[\exp\left(X+\mathfrak{n}\right)\subset\exp\left(X\right)N.\] ### Linear control systems on Lie groups This subsection presents some general properties of discrete-time control systems. We start by recalling that a discrete-time control system on a topological space \(M\) is given by a difference equation \[x_{k+1}=f(x_{k},u_{k}),\ k\in\mathbb{N}_{0}:=\mathbb{N}\cup\{0\},\ u_{k}\in U, \tag{4}\] where \(U\) is a non-empty set and \(f:M\times U\to M\) is a map such that for each \(u\in U\), \(f_{u}(\cdot):=f(\cdot,u):M\to M\) is continuous function. The set \(M\) is called **state space** of (4) and \(U\) is the **control range**. Moreover, the **shift space** is defined as the set of all sequences in \(U\), that is, \({\cal U}:=\prod_{i=0}^{\infty}U\). The elements in \({\cal U}\) are called **controls**. In this case, the following solutions of (4) is well-defined \[\varphi(k,x_{0},u)=\left\{\begin{array}{ll}x_{0},&\mbox{if}\ \ k=0\\ f_{u_{k-1}}\circ\cdots\circ f_{u_{1}}\circ f_{u_{0}}(x_{0}),&\mbox{if}\ \ k\geq 1 \end{array},\right.\] for all \(k\in\mathbb{N}_{0}\), \(x_{0}\in M\) and \(u=(u_{i})_{i\in\mathbb{N}_{0}}\in{\cal U}\). Note that \(\varphi(k,\cdot,u):M\to M\) is continuous for each \(k\in\mathbb{N}_{0}\) and \(u=(u_{i})_{i\in\mathbb{N}_{0}}\in{\cal U}\). The **shift** map \(\Theta:\mathbb{N}_{0}\times{\cal U}\to{\cal U}\) given by \(\Theta(k,(u_{j}))=\Theta_{k}((u_{j})):=(u_{j+k})\) defines a dynamical system on \({\cal U}\). It is well known that \(\varphi\) satisfies the cocycle property, that is, \[\varphi(k+l,x,u)=\varphi(k,\varphi(l,x,u),\Theta_{l}(u)),\ \forall\ k,l\in \mathbb{N}_{0},\ \forall\ x\in M,\ \forall\ u\in{\cal U}.\] **Definition 7**: _For \(x\in M\), the **reachable** and the **controllable set from x at time k** are given by_ \[{\cal R}_{k}(x)=\{y\in M:\mbox{ there is }u\in{\cal U}\mbox{ with }\varphi(k,x,u)=y\}\] _and_ \[{\cal C}_{k}(x)=\{y\in M:\mbox{ there is }u\in{\cal U}\mbox{ with }\varphi(k,y,u)=x\},\] _respectively. Moreover, the **reachable** and the **controllable set from x up to time k** are given by \({\cal R}_{\leq k}(x)=\bigcup_{t\leq k}{\cal R}_{t}(x)\) and \({\cal C}(x)=\bigcup_{t\leq k}{\cal C}_{t}(x)\), resp. The sets \({\cal R}(x)=\bigcup_{k\in\mathbb{N}}{\cal R}_{k}(x)\) and \({\cal C}(x)=\bigcup_{k\in\mathbb{N}}{\cal C}_{k}(x)\) denote the **reachable** and the **controllable set from x**, respectively._ The system (4) is **forward accessible** (resp. **backward accessible**) if \({\rm int}{\cal R}(x)\neq\emptyset\) (resp. \({\rm int}{\cal C}(x)\neq\emptyset\)), for all \(x\in M\) and it is called **accessible** if both conditions are satisfied. 
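As a rough numerical illustration of the reachable sets just defined, one can approximate \(\mathcal{R}_{k}(x)\) for a concrete system by sampling admissible control sequences. The sketch below (not part of the paper) instantiates the transition map of (4) with the Euclidean system \((\Sigma_{d})\) from the Introduction; the matrices, control range and sample sizes are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_reachable_set(f, x0, k, U_samples, n_traj=2000):
    """Monte-Carlo approximation of R_k(x0): apply k controls drawn at random
    from the control range and record the endpoints. `f(x, u)` is the
    transition map of system (4)."""
    points = []
    for _ in range(n_traj):
        x = np.array(x0, dtype=float)
        for _ in range(k):
            u = U_samples[rng.integers(len(U_samples))]
            x = f(x, u)
        points.append(x)
    return np.array(points)

# Euclidean instance: a planar rotation with additive scalar control.
theta = 0.5
A = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
B = np.array([[1.0], [0.0]])
f = lambda x, u: A @ x + (B @ np.atleast_1d(u))
U_samples = np.linspace(-1.0, 1.0, 21)  # compact control range U = [-1, 1]
cloud = sample_reachable_set(f, [0.0, 0.0], k=10, U_samples=U_samples)
print(cloud.shape, cloud.min(axis=0), cloud.max(axis=0))
```

A two-dimensional scatter of `cloud` with nonempty interior is consistent with forward accessibility of this particular system; the sampling of course only gives a lower estimate of the reachable set.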
According to Wirth [22], for any \(k\in\mathbb{N}\) and \(g\in G\), define the smooth map \(G_{k}:G\times U^{k}\to G\) given by \[G_{k}(g,u)=\varphi(k,g,u).\] A pair \((g,u)\in G\times\mathrm{int}U^{k}\) such that \(\mathrm{rank}\left[\frac{\partial}{\partial u}G_{k}(g,u)\right]=\dim G\) is called **regular**. The **regular reachable set at time k** is defined by \[\hat{\mathcal{R}}_{k}(g)=\{\varphi(k,g,u):(x,u)\text{ is regular}\}\] and the **regular reachable set**\(\hat{\mathcal{R}}(g)=\cup_{k\in\mathbb{N}}\hat{\mathcal{R}}_{k}(g)\). One can prove that \(\hat{\mathcal{R}}(g)\) is open, for every \(g\in G\). Now, let us define the main object of this work. **Definition 8**: _Consider a discrete-time control system_ \[g_{k+1}=f(g_{k},u_{k}),u_{k}\in U, \tag{5}\] _on a connected Lie group \(G\) with \(U\subset\mathbb{R}^{m}\) a compact neighborhood of \(0\). The system (5) is called **linear** if \(f_{0}\) is an automorphism of \(G\) and for each \(g\in G\)_ \[f_{u}(g)=f_{u}(e)\cdot f_{0}(g). \tag{6}\] _where \("\ \cdot"\) denotes de product on \(G\)._ The group product will be omitted when it is clear. Moreover, the equation (6) can be write as \[f_{u}(g)=f_{u}(e)f_{0}(g)=L_{f_{u}(e)}\circ f_{0}(g),\] where \(L_{g}\) is the left translation by \(g\in G\). Considering the expression above, we can see that \(f_{u}\) is a diffeomorphism of \(G\), for each \(u\in U\), with inverse \[f_{u}^{-1}(g)=f_{0}^{-1}\circ L_{(f_{u}(e))^{-1}}(g)=f_{0}^{-1}((f_{u}(e))^{-1 }\cdot g).\] **Remark 9**: _It follows from Definition 8 that:_ 1. \(\mathcal{U}\) _endowed with the product topology is a compact space._ 2. _for all_ \(k\in\mathbb{N}_{0}\) _and_ \(u\in\mathcal{U}\)_, the map_ \(\varphi(k,\cdot,u)\) _is a diffeomorphism of_ \(G\)_._ 3. _Since_ \(f_{0}\) _is an automorphism of_ \(G\)_, its differential_ \(d(f_{0})_{e}:\mathfrak{g}\rightarrow\mathfrak{g}\) _is an automorphism and one has the Lie subalgebras (_1_) and (_3_) related to_ \(d(f_{0})_{e}\)_._ **Example 10**: _The standard example of discrete-time linear system is the difference equation_ \[x_{k+1}=Ax_{k}+Bu_{k},\ \ u_{k}\in U,\] _where \(G\) is the additive euclidean space \(\mathbb{R}^{d}\), \(A\in GL(d,\mathbb{R}),B\in\mathbb{R}^{d\times m}\), and \(0\in U\subset\mathbb{R}^{m}\). The map \(f:\mathbb{R}^{d}\times U\rightarrow\mathbb{R}^{d}\) is given by \(f(x,u)=Ax+Bu\) and satisfies_ * \(f_{0}(x)=Ax\) _is an automorphism of_ \(\mathbb{R}^{d}\)_;_ * \(f_{u}(x)=Ax+Bu=f_{0}(x)+f_{u}(0)=f_{u}(0)+f_{0}(x)\)_, since_ \(f_{u}(0)=Bu\)__ _In this case, the solutions are given by_ \[\varphi(k,x,u)=A^{k}x+\sum_{j=0}^{k-1}A^{k-1-j}Bu_{j}.\] The next proposition shows that the solutions of 5, from an element \(g\in G\) can be given by a translation of the solution from \(e\in G\) (see [9, Proposition 3]) **Proposition 11**: _Consider the discrete-time linear system (5) on a Lie group \(G\). Then, for all \(g\in G\) and \(u\in\mathcal{U}\) it holds that_ \[\varphi(k,g,u)=\varphi(k,e,u)f_{0}^{k}(g).\] In case of linear systems, the identity \(e\) of \(G\) satisfies \(e\in\mathcal{R}_{k}(e)\) for all \(k\in\mathbb{N}\), because it is a fixed point of \(f_{0}\). In addition, using the notation \(\mathcal{R}(e)=\mathcal{R}\), \(\mathcal{R}_{k}(e)=\mathcal{R}_{k}\) and \(\mathcal{R}_{\leq k}=\mathcal{R}_{\leq k}(e)\), we get the following proposition. **Proposition 12**: _For all \(k,k_{1},k_{2}\in\mathbb{N}\)\(g\in G\) and \(u\in\mathcal{U}\), it holds that_ 1. \(\mathcal{R}_{k}=\mathcal{R}_{\leq k}\)_;_ 2. 
_If_ \(k_{1}\leq k_{2}\)_, then_ \(\mathcal{R}_{k_{1}}\subset\mathcal{R}_{k_{2}}\)_;_ 3. \(\mathcal{R}_{k}(g)=\mathcal{R}_{k}f_{0}^{k}(g)\)_;_ 4. _If_ \(k_{1},k_{2}\in\mathbb{N}\)_, then_ \(\mathcal{R}_{k_{1}+k_{2}}=\mathcal{R}_{k_{1}}f_{0}^{k_{1}}(\mathcal{R}_{k_{2} })=\mathcal{R}_{k_{2}}f_{0}^{k_{2}}(\mathcal{R}_{k_{1}})\)_;_ 5. _For any_ \(u\in\mathcal{U}\)_,_ \(g\in G\) _and_ \(k\in\mathbb{N}\)_, then_ \[\varphi(k,\mathcal{R}(g),u)\subset\mathcal{R}(g);\] 6. \(e\in\text{int}\mathcal{R}\) _if and only if_ \(\mathcal{R}\) _is open._ **Proof.** 1. It is clear that \(\mathcal{R}_{k}\subset\mathcal{R}_{\leq k}.\) On the other hand, taking \(t\in[1,k)\cap\mathbb{N}\), an arbitrary \(u\in\mathcal{U}\) and considering the control \[v=\left\{\begin{array}{ll}0,&\text{for}\quad j<k-t\\ u_{j-k+t},&\text{for}\quad j\geq k-t\end{array},\right.\] we have \[\varphi(t,e,u)=\varphi(t,\varphi(k-t,e,0),u)=\varphi(t,\varphi(k-t,e,v),\Theta _{k-t}(v))=\varphi(k,e,v),\] which implies that \(\mathcal{R}_{k}\subset\mathcal{R}_{t}\), proving the statement. 2. Consequence of (1). 3. It follows from Proposition 11. 4. Note that \[\varphi(k_{1}+k_{2},e,u) = \varphi(k_{1},\varphi(k_{2},e,u),\Theta_{k_{2}}(u))\] \[= \varphi(k_{1},e,\Theta_{k_{2}}(u))f_{0}^{k_{1}}(\varphi(k_{2},e,u ))\in\mathcal{R}_{k_{1}}f_{0}^{k_{1}}(\mathcal{R}_{k_{2}})\] which means that \(\mathcal{R}_{k_{1}+k_{2}}\subset\mathcal{R}_{k_{1}}f_{0}^{k_{1}}(\mathcal{R} _{k_{2}})\). Now, given \(u,v\in\mathcal{U}\), we have \[\varphi(k_{1},e,u)f_{0}^{k_{1}}(\varphi(k_{2},e,v)) = \varphi(k_{1},\varphi(k_{2},e,v),u)\] \[= \varphi(k_{1}+k_{2},e,w),\] with \[w=\left\{\begin{array}{ll}v_{j},&j<k_{2}\\ u_{j-k_{2}},&j\geq k_{2}\end{array}.\right.\] The inclusion \(\mathcal{R}_{k_{2}}f_{0}^{k_{2}}(\mathcal{R}_{k_{1}})\subset\mathcal{R}_{k_{1 }+k_{2}}\) follows by the same arguments above. * Take \(k\in\mathbb{N}\), \(g\in G\) and \(u\in\mathcal{U}\). If \(h\in\mathcal{R}(g)\), then \(h=\varphi(t,g,v)\) for some \(t\in\mathbb{N}\) and \(v\in\mathcal{U}\). Thus \[\varphi(k,h,u)=\varphi(k,\varphi(t,g,v),u)=\varphi(k+t,g,w)\in\mathcal{R}(g),\] with \[w=\left\{\begin{array}{ll}v_{j},&j<t\\ u_{j-t},&j\geq t\end{array}\right..\] * As \(e\in\mathcal{R}\) if \(\mathcal{R}\) is open then \(e\in\mathrm{int}\mathcal{R}\). Suppose that \(e\in\mathrm{int}\mathcal{R}\) and take \(g\in\mathcal{R}\). Then there are \(t\in\mathbb{N}\) and \(u\in\mathcal{U}\) such that \[\varphi(t,e,u)=g.\] Take \(V_{e}\subset\mathcal{R}\) a neighborhood of \(e\). As \(f_{u}\) is diffeomorphism, the set \(V_{g}=\varphi(t,V_{e},u)\) is a neighborhood of \(g\) and \(\varphi(t,V_{e},u)\subset\varphi(t,\mathcal{R},u)\subset\mathcal{R}\). \(\square\) Since the map \(f_{u}\) of the linear system (5) is a diffeomorphism of \(G\) for each \(u\in U\) we can define its **reversed counterpart** by \[h_{k+1}=\tilde{f}_{u_{k}}(h_{k}),\ u\in\mathcal{U}, \tag{7}\] where \(\tilde{f}_{u}(h)=f_{u}^{-1}(e)f_{0}^{-1}(h)\) for all \(h\in G\). Note that (7) is also a linear system on \(G\) with \(\tilde{f}_{0}(x)=f_{0}^{-1}(x)\), hence \(d\tilde{f}_{0}=df_{0}^{-1}\). In this case, if \(\alpha\) is an eigenvalue of \(df_{0}\), \(\alpha^{-1}\) is an eigenvalue of \(d\tilde{f}_{0}\). 
Therefore, we can consider the generalized eigenspaces \[\mathfrak{g}_{*}^{+}=\sum_{|\alpha|>1}\mathfrak{g}_{\alpha}^{*},\ \mathfrak{g}_{*}^{-}=\sum_{|\alpha|<1}\mathfrak{g}_{\alpha}^{*},\ \mathfrak{g}_{*}^{0}=\sum_{|\alpha|=1}\mathfrak{g}_{\alpha}^{*}.\] where \(\mathfrak{g}_{\alpha}^{*}\) is the generalized eigenspace associated with the eigenvalue \(\alpha\) of \(d\tilde{f}_{0}\). One can easily see that \(\mathfrak{g}_{*}^{-}=\mathfrak{g}^{+},\mathfrak{g}_{*}^{+}=\mathfrak{g}^{-}\) and \(\mathfrak{g}_{*}^{0}=\mathfrak{g}^{0}\). Analogously, \(G_{*}^{+}=G^{-}\), \(G_{*}^{-}=G^{+}\) and \(G_{*}^{0}=G^{0}\). **Example 13**: _Consider the linear system presented in Example 10. In this case, the reverse counterpart of this system is given by_ \[x_{k+1}=A^{-1}x_{k}-A^{-1}Bu_{k},\ \ u_{k}\in U\subset\mathbb{R}^{m}.\] Let us denote by \(\mathcal{R}_{k}^{*}\) and \(\mathcal{C}_{k}^{*}\) the reachable and the controllable set from \(e\) up to time \(k\) of (7), respectively. Then the following lemma holds. **Lemma 14**: _If holds that \(\mathcal{R}_{k}^{*}=\mathcal{C}_{k}\) and \(\mathcal{R}_{k}=\mathcal{C}_{k}^{*}\), for all \(k\in\mathbb{N}\)._ **Proof.** Note that for any \(k\in\mathbb{N}\), \(g\in G\) and \(u\in\mathcal{U}\), we have \[\tilde{f}_{u_{k-1}}\circ\cdots\circ\tilde{f}_{u_{0}}(g)=e\Leftrightarrow\tilde {f}_{u_{0}}^{-1}\circ\cdots\circ\tilde{f}_{u_{k-1}}^{-1}(e)=g\Leftrightarrow f _{u_{0}}\circ\cdots\circ f_{u_{k-1}}(e)=g,\] because \(\tilde{f}_{u}^{-1}=f_{u}\), for all \(u\in U\). If \(\varphi^{*}\) denote the solution of (7), then \[\varphi^{*}(k,g,u)=e\Leftrightarrow\varphi(k,e,\tilde{u})=g,\] where \[\tilde{u}=\left\{\begin{array}{ll}u_{k-1-j},&j<k\\ 0,&j\geq k\end{array}\right.,\] This shows that \(\mathcal{R}_{k}=\mathcal{C}_{k}^{*}\). The other equality follows analogously. \(\square\) In sequence we present a lemma that will be widely used throughout the work. **Lemma 15**: _Let \(g\in{\cal R}\) such that \(f_{0}^{k}(g)\in{\cal R}\), for all \(k\in{\mathbb{Z}}\). Then \({\cal R}\cdot g\subset{\cal R}\)._ **Proof.** In fact, given \(k\in{\mathbb{N}}\) and \(u\in{\cal U}\) one has \[\varphi(k,e,u)g=\varphi(k,e,u)f_{0}^{k}(f_{0}^{-k}(g))=\varphi(k,f_{0}^{-k}(g), u),\] for all \(g\in G\). Hence, if we assume that \(g\) is an element in \({\cal R}\) satisfying the required assumptions, then there are \(l\in{\mathbb{N}}_{0}\) and \(v\in{\cal U}\) such that \(f_{0}^{-k}(g)=\varphi(l,e,v)\). Then \[\varphi(k,e,u)g=\varphi(k,f_{0}^{-k}(g),u)=\varphi(k,\varphi(l,e,v),u)= \varphi(k+l,e,w),\] with \[w=\left\{\begin{array}{ll}v_{j},&j<l\\ u_{j-l},&j\geq l\end{array},\right.\] that is \({\cal R}\cdot g\subset{\cal R}\). \(\square\) To simplify, we denote \(d(f_{0})_{e}\) by \(df_{0}\). Since \(f_{0}\) and \(df_{0}\) are automorphisms on \(G\) and \(\mathfrak{g}\), respectively, one has \[f_{0}^{n}(\exp X)=\exp df_{0}^{n}X,\] for all \(n\in{\mathbb{Z}}\) and all \(X\in\mathfrak{g}\). This fact is useful in the next result. **Corollary 16**: _Suppose that \(H\) is a connected \(f_{0}\)-invariant Lie subgroup of \(G\) with Lie algebra \(\mathfrak{h}\). If \(\exp X\in{\cal R}\), for any \(X\in\mathfrak{h}\) then \(H\subset{\cal R}\)._ **Proof.** Since \(H\) is \(f_{0}\)-invariant, then \(\mathfrak{h}\) is \(df_{0}\)-invariant. Hence \[f_{0}^{k}(\exp X)=\exp df_{0}^{k}(X)\in{\cal R},k\in{\mathbb{Z}}.\] The result follows by connectedness of \(H\) and Lemma 15. 
\(\square\) **Corollary 17**: _Suppose that \(H\) is a connected Lie subgroup of \(G\) and there is a neighborhood \(B\) of \(e\) in \(H\cap{\cal R}\) which is invariant by \(f_{0}\) and \(f_{0}^{-1}\), then \(H\) is \(f_{0}\)-invariant and \(H\subset{\cal R}\)._ **Proof.** The invariance of \(B\subset H\cap{\cal R}\) by \(f_{0}\) and \(f_{0}^{-1}\) yields that \(B^{n}\subset{\cal R}\) due Lemma 15. Since \(H\) is connected, then \(H=\bigcup_{n\in{\mathbb{N}}}B^{n}\subset{\cal R}\). \(\square\) Following [9, Section 3], we finish this section by recalling that given a linear system (5) on a connected Lie group \(G\) and \(H\) is a closed \(f_{0}\)-invariant subgroup of \(G\), one can induce the following control system on the homogeneous space \(G/H\) \[x_{k+1}=\bar{f}(x_{k},u_{k}),\ x_{k}\in G/H,\ u=(u_{i})_{i\in{\mathbb{N}}_{0}} \in{\cal U}, \tag{8}\] where \(\bar{f}_{u}\) is a diffeomorphism of \(G/H\) given by \(\bar{f}_{u}(gH)=f_{u}(g)H\) for each \(u\in U\), which satisfies \(\bar{f}_{u}^{k}\circ\pi=\pi\circ f_{u}^{k}\) for all \(k\in{\mathbb{Z}}\), where \(\pi:G\to G/H\) is the standard projection. If \(\varphi\) denotes the solution of (5) and \(\bar{\varphi}\) is the solution of (8), it holds that \(\pi(\varphi(k,g,u))=\bar{\varphi}(k,\pi(g),u)\) and \(\bar{\varphi}(k,gH,u)={\cal L}_{\varphi_{k},u}(e)(\bar{f}_{0}^{k}(gH))\), where \({\cal L}_{g}\) denotes the left translation in \(G/H\) given by \(xH\mapsto gxH\) for each \(g\in G\). The system (8) is called **induced linear system** on \(G/H\). ## 3 Controllability on solvable Lie groups From now on we consider a discrete-time linear system (5) on a connected solvable Lie group. This section presents some general controllability results and an application for linear systems on a non-abelian 2-dimensional solvable Lie group. ### General results The aim of this subsection is to present a sufficient condition for the controllability of discrete-time linear systems on solvable Lie groups. Consider \(N\subset G^{0}\) a solvable Lie subgroup of \(G\). If \(\mathfrak{n}\) is the Lie subalgebra of \(N\), there exists \(l\in\mathbb{N}\) such that its derived series satisfies \[\mathfrak{n}_{1}\supset\mathfrak{n}_{2}\supset\cdots\supset\mathfrak{n}_{l+1}= \{0\}.\] where \(\mathfrak{n}_{1}=\mathfrak{n}\) and \(\mathfrak{n}_{i+1}=[\mathfrak{n}_{i},\mathfrak{n}_{i}]\). Each set \(\mathfrak{n}_{i}\) can be considered as \[\mathfrak{n}_{i}=\bigoplus_{|\alpha|=1}\mathfrak{n}_{i,\alpha}, \tag{9}\] where \(\mathfrak{n}_{i,\alpha}=\mathfrak{g}_{\alpha}\cap\mathfrak{n}_{i}\). We also can consider \[\mathfrak{n}_{i,\alpha}=\bigcup_{j\in\mathbb{N}_{0}}\mathfrak{n}_{i,\alpha}^{ j},\mathfrak{n}_{i,\alpha}^{j}=\{X\in\mathfrak{n}_{i,\alpha}:(df_{0}-\alpha)^{j}X=0\}.\] The next lemma is important for the main result of this section. **Lemma 18**: _Let \(N\subset G^{0}\) be a solvable connected \(f_{0}\)-invariant Lie subgroup of \(G^{0}\). Then \(N\subset\mathcal{R}.\)_ **Proof.** As \(N\) is \(f_{0}\)-invariant, then \(\mathfrak{n}\) is \(df_{0}\)-invariant, which implies in \(df_{0}(\mathfrak{n})=\mathfrak{n}\). Consequently \(df_{0}(\mathfrak{n}_{i})=\mathfrak{n}_{i}\), for any \(i=1,\ldots,l+1.\) Let us denote by \(N_{i}=\langle\exp\mathfrak{n}_{i}\rangle\) the connected subgroup associated to the ideal \(\mathfrak{n}_{i}\), as in the expression (9). Each \(N_{i}\) is a normal subgroup of \(N\) and \(f_{0}\)-invariant, since \(f_{0}\) is a automorphism. 
Note that if for every \(X\in\mathfrak{n}_{i,\alpha}\) we have \(\exp X\in\mathcal{R}\), then \(N_{i}\in\mathcal{R}\). In fact, as \(\mathfrak{n}_{i}=\bigoplus_{|\alpha|=1}\mathfrak{n}_{i,\alpha}\), the set \[B=\prod_{|\alpha|=1}\exp\mathfrak{n}_{i,\alpha}\] is a neighborhood of \(e\) in \(N_{i}\). Besides, as \(df_{0}^{k}(\mathfrak{n}_{i,\alpha})=\mathfrak{n}_{i,\alpha}\) for every \(k\in\mathbb{Z}\) we have \[f_{0}^{k}(\exp\mathfrak{n}_{i,\alpha})=\exp\left(df_{0}^{k}(\mathfrak{n}_{i, \alpha})\right)=\exp\mathfrak{n}_{i,\alpha},\] for every \(k\in\mathbb{Z}\). It is clear that \(B\) is \(f_{0}\) and \(f_{0}^{-1}\)-invariant and as \(\exp X\in\mathcal{R}\) for every \(X\in\mathfrak{n}_{i,\alpha}\), the Lemma 15 ensures that \(B\subset\mathcal{R}\). By Corollary 17 we get \(N_{i}\subset\mathcal{R}\). Now, we claim that if \(N_{i+1}\subset\mathcal{R}\) then \(N_{i}\subset\mathcal{R}\), for every \(i=0,\ldots,l\). In fact, note that we just need to prove that \(\exp X\in\mathcal{R}\), for every \(X\in\mathfrak{n}_{i,\alpha}\) with \(|\alpha|=1\). Using the decomposition (9) it is enough to prove that for every \(j\in\mathbb{N}_{0}\) and \(X\in\mathfrak{n}_{i,\alpha}^{j}\) we have \(\exp X\in\mathcal{R}\). We prove this by induction. It is clear for \(j=0\) since \(\mathfrak{n}_{i,\alpha}^{0}=\{0\}\) and \(e\in\text{int}\mathcal{R}\). If it is true for \(j>0\) then \(\exp Z\in\mathcal{R}\) for every \(Z\in\mathfrak{n}_{i,\alpha}^{j-1}\). By [8, Lemma 13], there is a sequence of natural numbers \(S_{\alpha}=\{n_{k}\}_{k\in\mathbb{N}}\) such that \(n_{k}\to\infty\) and \(\alpha^{n_{k}}\to 1\), given that \(|\alpha|=1\). Let us take \(X\in\mathfrak{n}_{i,\alpha}^{j}\) and consider \(U\subset\mathfrak{g}\) such that \(\exp:U\to\exp U\) is a diffeomorphism with \(\exp U\subset\mathcal{R}\). There is a \(m\in\mathbb{N}\) such that \(v=\frac{X}{m}\in U\). Then, there is a \(\tau\in\mathbb{N}\) such that \(\exp v\in\mathcal{R}_{\tau}\). By Proposition 12, item 2, we have \(\mathcal{R}_{\tau}\subset\mathcal{R}_{\tau\tau}\) for any \(n\in\mathbb{N}\). Using the polar form of a complex number, we get \(\alpha^{\tau n}=e^{i(\tau n\theta)}=\cos\left(\tau n\theta\right)+i\sin\left( \tau n\theta\right)\) for some \(\theta\in(0,2\pi]\). The sequence \(\{n_{k}\}_{k\in\mathbb{N}}\) has the form \(n_{k}\theta=2\pi N_{k}+t_{k}\), for some \(t_{k}\to 0\) when \(k\to\infty\). Then \[\alpha^{n_{k}\tau}v=\left(\cos\left(n_{k}\tau\theta\right)+i\sin\left(n_{k} \tau\theta\right)\right)\frac{X}{m},\forall n_{k}\in S_{\alpha}. \tag{10}\] By the expression above, we can also consider a subsequence \(\{n_{s_{j}}\}_{j\in\mathbb{N}}\subset S_{\alpha}\) such that \(\sum_{j=1}^{m}\alpha^{rn_{k_{j}}}=\sum_{j=1}^{m}\left(\cos\left(n_{k_{j}}\tau \theta\right)+i\sin\left(n_{k_{j}}\tau\theta\right)\right)=m-\varepsilon_{k}+i \hat{\varepsilon}_{k}\), for any \(\{k_{j}\}_{j\in\mathbb{N}}\subset\{s_{j}\}_{j\in\mathbb{N}}\) and \(\varepsilon_{k},\hat{\varepsilon}_{k}\in\mathbb{R}^{+}\) small enough. Since \(\alpha^{rn_{k}}\to 1\), then \(\exp v\in\mathcal{R}_{\tau(n_{k_{2}}-n_{k_{1}})}\). As \(v\in\mathfrak{n}_{i,\alpha}^{j}\), given any \(k\in\mathbb{N}\) we have \[(df_{0})^{k}v=(df_{0}-\alpha+\alpha)^{k}v=\sum_{i=1}^{k}\binom{k}{i}(df_{0}- \alpha)^{k-i}\alpha^{i}v=\alpha^{k}v+v_{k}\] with \(\alpha^{k}v\in\mathfrak{n}_{i,\alpha}^{j}\) and \[v_{k}=\sum_{i=1}^{k-1}\binom{k}{i}(df_{0}-\alpha)^{k-i}\alpha^{i}v.\] Hence we can consider \[\alpha^{k}v=df_{0}^{k}v+u_{k},\forall k\in\mathbb{N}\] with \(u_{k}=-v_{k}\). 
The above expression above holds for every \(k\in\mathbb{N}\). Given \(Y,Z\in\mathfrak{g}\), we can consider the Baker-Campbell-Hausdorff (BCH) series for the expression \[\exp Y\exp Z=\exp\left(Y+Z+c(Y,Z)\right),\] with \(c(Y,Z)\) is the series associated with the iterated Lie brackets of \(Y\) and \(Z\). By using BCH series we have \[\exp\left(df_{0}^{\tau n_{k_{1}}}v\right)\exp\left(df_{0}^{\tau n_{k_{2}}}v \right)=\exp\left(df_{0}^{\tau n_{k_{1}}}v+df_{0}^{\tau n_{k_{2}}}v+O_{1}\right),\] where \(O_{1}\) is given by the Brackets between \(df_{0}^{\tau n_{k_{1}}}v\) and \(df_{0}^{\tau n_{k_{2}}}v\), which is in \([\mathfrak{n}_{i,\alpha}^{j},\mathfrak{n}_{i,\alpha}^{j}]\subset[\mathfrak{n}_ {i},\mathfrak{n}_{i}]=\mathfrak{n}_{i+1}\) and we have \(O_{1}\in\mathfrak{n}_{i+1}\). By the Lemma 6, we obtain \[\exp\left(df_{0}^{\tau n_{k_{1}}}v+df_{0}^{\tau n_{k_{2}}}v+O_{1}\right)=\exp \left(df_{0}^{\tau n_{k_{1}}}v+df_{0}^{\tau n_{k_{2}}}v\right)g_{1}\] for some \(g_{1}\in N_{i+1}\subset\mathcal{R}\). By item (4) in the Proposition 12 we get \[\exp\left(df_{0}^{\tau n_{k_{1}}}v\right)\exp\left(df_{0}^{\tau n_{k_{2}}}v \right)=f_{0}^{\tau n_{k_{1}}}(\exp v)f_{0}^{\tau n_{k_{2}}}(\exp v)\in \mathcal{R}_{2\tau n_{k_{2}}-\tau n_{k_{1}}}\subset\mathcal{R}.\] Once \(g_{1},g_{1}^{-1}\in N_{i+1}\subset\mathcal{R}\), we obtain \[\exp\left(df_{0}^{\tau n_{k_{1}}}v+df_{0}^{\tau n_{k_{2}}}v\right)=\exp\left( df_{0}^{\tau n_{k_{1}}}v+df_{0}^{\tau n_{k_{2}}}v+O_{1}\right)g_{1}^{-1}\in \mathcal{R}\cdot g_{1}^{-1}\subset\mathcal{R}.\] By Proposition 12 item 2, we can choose \(n_{k_{3}}\in\{n_{s_{j}}\}_{j\in\mathbb{N}}\) such that \[\exp\left(df_{0}^{\tau n_{k_{1}}}v+df_{0}^{\tau n_{k_{2}}}v\right)\in \mathcal{R}_{\tau n_{k_{3}}}.\] Using BCH series we have \[\exp\left(df_{0}^{\tau n_{k_{1}}}v+df_{0}^{\tau n_{k_{1}}}v\right)\exp\left( df_{0}^{\tau n_{k_{3}}}v\right)=\exp\left(\sum_{i=1}^{3}df_{0}^{\tau n_{k_{i}}}v+O_{2} \right),O_{2}\in\mathfrak{n}_{i,\alpha}.\] Again by the Lemma 6 we get \[\exp\left(\sum_{i=1}^{3}df_{0}^{\tau n_{k_{i}}}v+O_{2}\right)=\exp\left(\sum_{i =1}^{3}df_{0}^{\tau n_{k_{i}}}v\right)g_{2},g_{2}\in N_{i+1}.\] Then \[\exp\left(df_{0}^{\tau n_{k_{1}}}v+df_{0}^{\tau n_{k_{2}}}v\right)\exp\left( df_{0}^{\tau n_{k_{3}}}v\right)\in\mathcal{R}_{\tau(n_{k_{3}}+2n_{k_{2}}-n_{k_{1}})} \subset\mathcal{R}.\] and again by the \(f_{0}\)-invariance of \(N_{i+1}\) we have \[\exp\left(\sum_{i=1}^{3}df_{0}^{\tau n_{k_{i}}}v\right)=\exp\left(\sum_{i=1}^{3} df_{0}^{\tau n_{k_{i}}}v+O_{2}\right)g_{2}^{-1}\in\mathcal{R}\cdot g_{2}^{-1} \subset\mathcal{R}.\] Repeating the idea \(m-2\) times, we get \[\exp\left(\sum_{i=1}^{m}df_{0}^{\tau n_{k_{i}}}v\right)\in\mathcal{R}.\] The expression \(\alpha^{k}v=df_{0}^{k}v+u_{k}\) allow us to write \[\sum_{i=1}^{m}\alpha^{n_{k_{j}}}v=\sum_{i=1}^{m}df_{0}^{n_{k_{i}}}v+u,\] for some \(u^{\prime}\in\mathfrak{n}_{i,\alpha}^{j-1}\). Moreover, using (10) we have \[\sum_{i=1}^{m}\alpha^{\tau n_{k_{j}}}v=(m-\varepsilon+i\hat{\varepsilon}) \frac{X}{m}=\sum_{i=1}^{m}df_{0}^{\tau n_{k_{i}}}v+u^{\prime}.\] Then \[X=\sum_{i=1}^{m}df_{0}^{\tau n_{k_{i}}}v+u,\] with \(u=u^{\prime}+\frac{(\varepsilon-i\hat{\varepsilon})}{m}X\in\mathfrak{n}_{i, \alpha}^{j}\). Using BCH series we get \[\exp\left(\sum_{i=1}^{m}df_{0}^{\tau n_{k_{i}}}v\right)\exp u=\exp\left(X+O\right),\] with \(O\) is the expression of the brackets between \(\sum_{i=1}^{m}df_{0}^{n_{k_{i}}}v\) and \(u.\) In particular, \(df_{0}^{k}v\in\mathfrak{n}_{i,\alpha}^{j-1}\) for every \(k\in\mathbb{N}\). 
By the induction hypothesis \[f_{0}^{k}(\exp u)=\exp\left(df_{0}^{k}(u)\right)\in\mathcal{R},\forall k\in \mathbb{Z}.\] The Lemma 6 guarantees that \[\exp\left(X+O\right)=\exp\left(X\right)g,g\in N_{i+1}\] and by the \(f_{0}\)-invariance of \(N_{i+1}\) we get \[\exp\left(X\right)=\exp\left(X+O\right)g^{-1}\in\mathcal{R},\] as required. To prove that \(N\subset A\), as \(N_{l+1}=\{e\}\subset\mathcal{R}\), then \(N_{l}\subset\mathcal{R}\) by the previous affirmation. Using this reasoning \(l\) times, we get \(N\subset\mathcal{R}\). \(\Box\) From now on, we consider the dynamical Lie algebras \(\mathfrak{g}^{*}\), \(*\in\{+,-,0\}\) associated to the Lie algebra automorphism \(df_{0}\), and the corresponding connected Lie groups \(G^{*}\), \(*\in\{+,-,0\}\), unless stated otherwise. Moreover, we often assume that \(\mathcal{R}\) is an open set of \(G\). With the above lemma we can prove the following controllability results. **Theorem 19**: _Let \(G\) be a connected solvable Lie group and consider the system (5) defined on \(G\). If \(\mathcal{R}\) is open, then \(G^{+,0}\subset\mathcal{R}\)._ **Proof.** The space \(\mathfrak{g}^{+}\) is the unstable subspace associated with the differential \(df_{0}\). Since \(G^{+}=\exp\mathfrak{g}^{+}\) is nilpotent and taking \(g\in G^{+}\), then exists a \(X\in\mathfrak{g}^{+}\) such that \(g=\exp X\). As \(0\in\mathfrak{g}^{+}\) is stable in negative time there is a \(k\in\mathbb{N}\) such that \(df_{0}^{-k}X\) is as close as necessary of \(0\) for \(k\) large enough. By continuity, \[f_{0}^{-k}(\exp X)=\exp\left(df_{0}^{-k}X\right)\in\mathcal{R}.\] Then \(g=\exp X\in f_{0}^{-k}(\mathcal{R})\subset\mathcal{R}\), that is \(G^{+}\subset\mathcal{R}\). Solvability of \(G\) implies that \(G^{0}\) is solvable. Therefore \(G^{0}\subset\mathcal{R}\) by Lemma 18. Thus \(G^{+,0}\subset\mathcal{R}\). \(\square\) The final result of this section gives a sufficient condition for controllability of (5). Remember that the controllability of (5) can be seen by proving that \(G=\mathcal{R}\cap\mathcal{C}=\mathcal{R}\cap\mathcal{R}^{*}\). The following result present us a sufficient condition for the controllability of discrete-time linear systems on solvable Lie groups. **Theorem 20**: _The linear system (5) is controllable if \(\mathcal{R}\) and \(\mathcal{C}\) are open sets and \(G^{0}=G\)._ **Proof.** By Theorem 19, it follows that \(G^{0,+}\subset\mathcal{R}\) and \(G^{0,-}\subset\mathcal{R}^{*}=\mathcal{C}\). As \(\mathfrak{g}=\mathfrak{g}^{0}\) then \(G=\mathcal{R}\cap\mathcal{R}^{*}\). Following the ideas above the system (5) is controllable. \(\square\) ### Controllability and accessibility of affine Lie group In this subsection, as application of the previous results we study controllability of linear systems on two dimensional affine Lie group. The real abelian solvable Lie groups of dimension \(2\) are \(\mathbb{R}^{2}\), \(\mathbb{T}\times\mathbb{R}\) and \(\mathbb{T}^{2}\) (see e.g. [15]). In the non-abelian case, the unique (up to an isomorphism) real solvable lie group is the open half plane \(G=\mathbb{R}^{+}\ltimes\mathbb{R}\), endowed with the product \[(x_{1},y_{1})\cdot(x_{2},y_{2})=(x_{1}x_{2},y_{2}+x_{2}y_{1}).\] The Lie group \((G,\cdot)\) is called affine group and denoted by \(\mathrm{Aff}(2,\mathbb{R})\). In particular, the automorphisms of \(\mathrm{Aff}(2,\mathbb{R})\) are given by \[\phi(x,y)=(x,a(x-1)+dy),\] with \(d\in\mathbb{R}^{*}\) and \(a\in\mathbb{R}\). 
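The homomorphism property of \(\phi\) can be verified symbolically; the short check below (our own, not from the paper) confirms that \(\phi((x_{1},y_{1})\cdot(x_{2},y_{2}))=\phi(x_{1},y_{1})\cdot\phi(x_{2},y_{2})\) for the product defined above.

```python
import sympy as sp

x1, y1, x2, y2, a, d = sp.symbols('x1 y1 x2 y2 a d', real=True)

def mul(g, h):
    """Group product of Aff(2, R): (x1, y1) . (x2, y2) = (x1*x2, y2 + x2*y1)."""
    return (g[0] * h[0], h[1] + h[0] * g[1])

def phi(g):
    """Candidate automorphism phi(x, y) = (x, a*(x - 1) + d*y)."""
    return (g[0], a * (g[0] - 1) + d * g[1])

g, h = (x1, y1), (x2, y2)
lhs = phi(mul(g, h))        # phi(g . h)
rhs = mul(phi(g), phi(h))   # phi(g) . phi(h)
print([sp.simplify(l - r) for l, r in zip(lhs, rhs)])  # [0, 0]: homomorphism property holds
```

Since \(d\neq 0\), the map \(\phi\) is moreover invertible, with \(\phi^{-1}(x,y)=\left(x,-\frac{a}{d}(x-1)+\frac{y}{d}\right)\), in agreement with the expression for \(f_{0}^{-1}\) used below.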
Then, the linear systems on \(\mathrm{Aff}(2,\mathbb{R})\) can be defined by the functions \[f((x,y),u):=f_{u}(x,y)=(h(u)x,a(x-1)+dy+g(u)x), \tag{11}\] where \(h:\mathbb{R}^{m}\to\mathbb{R}^{+}\) and \(g:\mathbb{R}^{m}\to\mathbb{R}\) are \(\mathcal{C}^{\infty}\) functions satisfying \(h(0)=1\) and \(g(0)=0\). In fact, note that \(f_{0}(x,y)=(x,a(x-1)+dy)\) is an automorphism of \(\mathrm{Aff}(2,\mathbb{R})\). Note that \[f_{0}^{-1}(x,y)=\left(x,\frac{-a}{d}(x-1)+\frac{y}{d}\right)\] and \[f_{u}(x,y) = (h(u)x,a(x-1)+dy+g(u)x)\] \[= (h(u),g(u))(x,a(x-1)+dy)\] \[= f_{u}(1,0)f_{0}(x,y),\] for each \(u\in U\) and \((x,y)\in\mathrm{Aff}(2,\mathbb{R})\). Hence \(f\) defines a linear system on \(\mathrm{Aff}(2,\mathbb{R})\) given by \[x_{k+1}=f(x_{k},u_{k}),\ k\in\mathbb{N},\ u\in U, \tag{12}\] where we assume that \(U\) is a compact and convex neighborhood of \(0\in\mathbb{R}^{m}\). Note also that \[f_{0}^{k}(x,y)=(x,a(x-1)(\sum_{j=0}^{k-1}d^{j})+d^{k}y),\forall k\geq 1.\] From this previous equality and the Proposition 11, one can define the solution \(\varphi(k,(x,y),u)\) for every \((x,y)\in\mbox{Aff}(2,\mathbb{R})\) and \(u\in\mathcal{U}\). In the identity we have that \[\varphi(k,(1,0),u)=\left\{\begin{array}{ccc}(1,0),&if&k=0\\ (h(u_{0}),g(u_{0})),&if&k=1\\ \hat{f}_{k}(u),&if&k\geq 2\end{array}\right.,\] with \[\hat{f}_{k}(u)=\left(\prod_{j=0}^{k-1}h(u_{k-1-j}),-a\left(\sum_{j=0}^{k-2}d^{ j}\right)+d^{k-1}g(u_{0})+\sum_{j=0}^{k-2}\left(d^{k-2-j}(a+g(u_{j+1}))\prod_{i =0}^{j}h(u_{i})\right)\right).\] Then by Proposition 11, we get \[\varphi(k,(x,y),u) = (\prod_{j=0}^{k-1}h(u_{k-1-j})x,d^{k-1}(a+g(u_{0}))x+\sum_{j=0}^{ k-2}d^{k-2-j}(a+g(u_{j+1}))\prod_{i=0}^{j}h(u_{i})x\] \[+d^{k}y-a(\sum_{j=0}^{k-1}d^{k-1-j})).\] for every \((x,y)\in\mbox{Aff}(2,\mathbb{R})\) and \(u=(u_{i})\in\mathcal{U}\). **Remark 21**: _As we can see above, it is difficult to work with the solution of (12). However, by considering the case \(h(u)=1\), for every \(u=(u_{i})\in\mathcal{U}\), the solution are given by_ \[\varphi(k,(x,y),u)=\left(x,\sum_{j=0}^{k-1}d^{k-1-j}((a+g(u_{j}))x-a)+d^{k}y\right)\] _which means that for each \((x,y)\in\mbox{Aff}(2,\mathbb{R})\), the solutions are contained in the set \(\{x\}\times\mathbb{R}\). Then \(\mbox{int}\mathcal{R}(x,y)=\emptyset\), for every pair \((x,y)\in\mbox{Aff}(2,\mathbb{R})\). Therefore, the results of the previous sections does not apply in this case._ Consider the linear system (12) and define the vector fields \[X_{u}^{+}(x) = \frac{\partial}{\partial v}\bigg{|}_{v=0}f_{u}^{-1}\circ f_{u+v}(x)\] \[X_{u}^{-}(x) = \frac{\partial}{\partial v}\bigg{|}_{v=0}f_{u}\circ f_{u+v}^{-1} (x)\] \[\mbox{Ad}_{u_{k}\ldots u_{1}}X_{u_{0}}^{+}(x) = (df_{u_{k}}\circ\cdots\circ f_{u_{1}})_{e}^{-1}X_{u_{0}}^{+}(f_{u_ {k}}\circ\cdots\circ f_{u_{1}}(x)).\] \[\mbox{Ad}_{u_{k}\ldots u_{1}}^{-1\cdots-1}X_{u_{0}}^{-}(x) = (df_{u_{k}}^{-1}\circ\cdots\circ f_{u_{1}}^{-1})_{e}^{-1}X_{u_{0} }^{-}(f_{u_{k}}^{-1}\circ\cdots\circ f_{u_{1}}^{-1}(x)).\] Moreover, define the sets \[\Gamma^{+}=\{\mbox{Ad}_{u_{k}\cdots u_{1}}X_{u_{0}}^{+}:k\in \mathbb{N},u_{k},\ldots,u_{0}\in U\}\] \[\Gamma^{-}=\{\mbox{Ad}_{u_{k}\cdots u_{1}}^{-1\cdots-1}X_{u_{0}}^{ -}:k\in\mathbb{N},u_{k},\ldots,u_{0}\in U\}\] By Jakubcyk and Sontag [19] one have the following accessibility criteria. 
**Theorem 22**: _Consider a discrete-time control system of the form (4) on a \(n\)-dimensional smooth manifold \(M\), where the control range \(U\subset\mathbb{R}^{m}\) is a compact and convex neighborhood of \(0\) in \(\mathbb{R}^{m}\) and \(f_{u}:M\to M\) is a diffeomorphism for any \(u\in U\)._ 1. _The system is forward accessible if and only if_ \[\dim\Gamma^{+}(x)=n,\forall x\in M.\] 2. _The system is backward accessible if and only if_ \[\dim\Gamma^{-}(x)=n,\forall x\in M.\] 3. _The system is accessible if both conditions are satisfied._ For linear systems on \(\mathrm{Aff}(2,\mathbb{R})\), the previous result can be applied in order to get a sufficient condition for the accessibility of (12). **Proposition 23**: _The system (12) is accessible if \(-ah^{\prime}(0)\neq g^{\prime}(0)(d-1)\) and \(h^{\prime}(0)\neq 0\)._ **Proof.** Given any \(u\in U\) we have \[df_{u}=\begin{bmatrix}h(u)&0\\ a+g(u)&d\end{bmatrix}\text{ and }(df_{u})^{-1}=\begin{bmatrix}\dfrac{1}{h(u)}&0\\ -\dfrac{a+g(u)}{dh(u)}&\dfrac{1}{d}\end{bmatrix}.\] The compositions \(f_{u}^{-1}\circ f_{u+v}(x,y)\) and \(f_{u}\circ f_{u+v}^{-1}(x,y)\) are given by \[f_{u}^{-1}\circ f_{u+v}(x,y) = \left(\dfrac{h(u+v)}{h(u)}x,\dfrac{x}{d}\left(g(u+v)+a-\dfrac{h( u+v)}{h(u)}(a+g(u))\right)+y\right),\] \[f_{u}\circ f_{u+v}^{-1}(x,y) = \left(\dfrac{h(u)}{h(u+v)}x,y+x\left(\dfrac{g(u)}{h(u+v)}-\dfrac {g(u+v)}{h(u+v)}\right)\right).\] Then \[X_{u}^{+}(x,y) = \left(\dfrac{h^{\prime}(u)}{h(u)}x,\dfrac{x}{d}\left(\dfrac{h^{ \prime}(u)}{h(u)}(-a-g(u))+g^{\prime}(u)\right)\right),\] \[X_{u}^{-}(x,y) = \left(-\dfrac{h^{\prime}(u)}{h(u)}x,-\dfrac{g^{\prime}(u)}{h(u)} x\right).\] Besides, we have \[\mathrm{Ad}_{u_{1}}X_{u_{0}}^{+}(x,y) = \begin{bmatrix}\dfrac{h^{\prime}(u_{0})}{h(u_{0})}x\\ -\dfrac{a+g(u_{1})}{dh(u_{0})}h^{\prime}(u_{0})x-\dfrac{(a+g(u_{0}))h^{\prime }(u_{0})h(u_{1})}{d^{2}h(u_{0})}x+\dfrac{g^{\prime}(u_{0})h(u_{1})}{d^{2}}x \end{bmatrix},\] \[\mathrm{Ad}_{u_{1}}^{-1}X_{u_{0}}^{-}(x,y) = \begin{bmatrix}\dfrac{h^{\prime}(u_{0})}{h(u_{0})}x\\ -\dfrac{a+g(u_{1})}{h(u_{0})h(u_{1})}h^{\prime}(u_{0})x-\dfrac{dg^{\prime}(u_ {0})}{h(u_{0})h(u_{1})}x\end{bmatrix},\] \[\mathrm{Ad}_{u_{2}u_{1}}X_{u_{0}}^{+}(x,y)=\begin{bmatrix}\dfrac{h^{\prime}(u_ {0})}{h(u_{0})}x\\ T_{21}(x,u_{0},u_{1},u_{2})\end{bmatrix}\text{ and }\mathrm{Ad}_{u_{2}u_{1}}^{-1-1}X_{u_{0}}^{+}(x,y)= \begin{bmatrix}\dfrac{h^{\prime}(u_{0})}{h(u_{0})}x\\ S_{21}(x,u_{0},u_{1},u_{2})\end{bmatrix},\] with \[T_{21}(x,u_{0},u_{1},u_{2}) = -\frac{a+g(u_{2})}{dh(u_{0})}h^{\prime}(u_{0})x-\frac{(a+g(u_{1}))h^ {\prime}(u_{0})h(u_{2})}{d^{2}h(u_{0})}x\] \[-\frac{(a+g(u_{0}))h^{\prime}(u_{0})h(u_{1})h(u_{2})}{d^{3}h(u_{0} )}x+\frac{g^{\prime}(u_{0})h(u_{2})h(u_{1})}{d^{3}}x,\] \[S_{21}(x,u_{0},u_{1},u_{2}) = -\frac{(a+g(u_{1}))h^{\prime}(u_{0})x}{h(u_{0})h(u_{1})}-\frac{d( a+g(u_{2}))h^{\prime}(u_{0})x}{h(u_{0})h(u_{1})h(u_{2})}-\frac{d^{2}g^{\prime}(u_{0} )x}{h(u_{0})h(u_{1})h(u_{2})}.\] By taking \(u_{0}=u_{1}=u_{2}=0\) in the above vector fields above we get \[\mathrm{Ad}_{u_{2}u_{1}}X^{+}_{u_{0}}(x,y) = \left[\begin{matrix}h^{\prime}(0)x\\ -\frac{ah^{\prime}(0)x}{d}\left(\frac{1}{d^{2}}+\frac{1}{d}+1\right)+\frac{g^ {\prime}(0)}{d^{3}}x\end{matrix}\right],\] \[\mathrm{Ad}_{u_{1}}X^{+}_{u_{0}}(x,y) = \left[\begin{matrix}h^{\prime}(0)x\\ -\frac{ah^{\prime}(0)x}{d}\left(\frac{1}{d}+1\right)+\frac{g^{\prime}(0)}{d^{ 2}}x\end{matrix}\right],\] \[\mathrm{Ad}^{-1-1}_{u_{2}u_{1}}X^{-}_{u_{0}}(x,y) = \left[\begin{matrix}h^{\prime}(0)x\\ 
-ah^{\prime}(0)x-dag^{\prime}(0)x-d^{2}g^{\prime}(0)x\end{matrix}\right],\] and \[\mathrm{Ad}^{-1}_{u_{1}}X^{-}_{u_{0}}(x,y) = \left[\begin{matrix}h^{\prime}(0)x\\ -ah^{\prime}(0)x-dg^{\prime}(0)x\end{matrix}\right].\] If we consider the sets \[\alpha=\{\mathrm{Ad}^{-1-1}_{u_{2}u_{1}}X^{-}_{u_{0}}(x,y),\mathrm{Ad}^{-1}_{ u_{1}}X^{-}_{u_{0}}(x,y)\}\mbox{ and }\beta=\{\mathrm{Ad}_{u_{2}u_{1}}X^{+}_{u_{0}}(x,y),\mathrm{Ad}_{u_{1}}X^{+}_{u _{0}}(x,y)\}\] with \(u_{0}=u_{1}=u_{2}=0\), one can prove that \(\alpha\) and \(\beta\) are linear independent sets if, and only if, \(h^{\prime}(0)\neq 0\) and \(-ah^{\prime}(0)\neq g^{\prime}(0)(d-1)\) as required. \(\Box\) Now, we prove that such condition guarantees that \(\mathcal{R}\) and \(\mathcal{C}\) are open sets. **Proposition 24**: _If the system (12) is accessible then it is reachable and controllable sets are open._ **Proof.** Initially, note that \[\frac{\partial}{\partial(u,v)}f_{u,v}(e)=\begin{bmatrix}h^{\prime}(u)h(v)&h^ {\prime}(v)h(u)\\ g^{\prime}(u)h(v)&ah^{\prime}(v)+dg^{\prime}(v)+h^{\prime}(v)g(u)\end{bmatrix}\] and for \(u=v=0\) one get \[\frac{\partial}{\partial(u,v)}f_{u,v}(e)=\begin{bmatrix}h^{\prime}(0)&h^{ \prime}(0)\\ g^{\prime}(0)&ah^{\prime}(0)+dg^{\prime}(0)\end{bmatrix} \tag{13}\] since \(g(0)=0\) and \(h(0)=1\). Then the matrix above has rank \(2\) if, and only if, \(-ah^{\prime}(0)\neq g^{\prime}(0)(d-1)\) and \(h^{\prime}(0)\neq 0\). The matrix in (13) guarantees that \(e\in\hat{\mathcal{R}}\), which is open. Therefore, as \(\hat{\mathcal{R}}\subset\mathcal{R}\) we have \(e\in\mathrm{int}\mathcal{R}\). By the Proposition 12 item 6, \(\mathcal{R}\) is open. Analogously we show that \[g_{k+1}=f_{u_{k}}^{-1}(e)f_{0}^{-1}(g_{k})\] then \(\mathcal{C}=\mathcal{R}^{*}\) is also open. \(\Box\) The next theorem present a characterization of accessibility which is an immediate statement from the above results. **Theorem 25**: _The reachable and controllable sets of the system (12) are open if and only if (12) is accessible._ We conclude this section presenting a sufficient condition for the controllability of (12). **Theorem 26**: _Consider the discrete-time linear system (12). If \(h^{\prime}(0)\neq 0\), \(-ah^{\prime}(0)\neq g^{\prime}(0)(d-1)\) and \(d=1\) then (12) is controllable._ **Proof.** In fact, the conditions \(h^{\prime}(0)\neq 0\) and \(-ah^{\prime}(0)\neq g^{\prime}(0)(d-1)\) implies by the Theorem 25 that the sets \(\mathcal{R}\) and \(\mathcal{C}\) are open. Furthermore, since \(d\) and \(1\) are the only eigenvalues of \(f_{0}\), by Theorem 20 the assumption \(d=1\) implies that (12) is controllable. \(\square\) ## 4 Controllability of nilpotent Lie groups In this section we present and kind of converse of Theorem 20. It is assumed that \(G\) is connected, simply connected and nilpotent Lie group. Recall the following well known fact will be useful. For connected, simply connected and nilpotent Lie groups, each connected subgroups are closed and simply connected (see e.g. Knapp [12, Corollary 1.134] or San Martin [17, Corollary 10.10]). To achieve this goal we need the following lemma. **Lemma 27**: _Suppose that \(df_{0}\) has no eigenvalue with absolute value greater than \(1\) and \(\mathcal{R}\) is open. 
If \(M:=G/G^{0}\) admits a \(G\)-invariant metric, then \(\mathcal{R}_{G^{-}}=\mathcal{R}\cap G^{-}\) is a relative compact set._ **Proof.** Consider the induced linear system \[x_{k+1}=\bar{f}(x_{k},u_{k}),\ x_{k}\in M,\ u=(u_{i})_{i\in\mathbb{N}_{0}}\in \mathcal{U},\] on \(M:=G/G^{0}\) with solutions denoted by \(\bar{\varphi}\) which satisfies \(\pi(\varphi(k,g,u))=\bar{\varphi}(k,\pi(g),u)\), for all \(k\in\mathbb{N}_{0}\), \(g\in G\) and \(u\in\mathcal{U}\), where \(\pi:G\to G/G^{0}\) denotes the standard projection (see end of Section 2). If \(\varrho\) is the distance induced by the \(G\)-invariant Riemannian metric on \(M\), then given \(x,y\in M\) and a smooth curve \(\gamma:[0,1]\to M\) with \(\gamma(0)=x\) and \(\gamma(1)=y\) we have that \(\bar{f}_{0}^{k}\circ\gamma\) is a smooth curve with \(\bar{f}_{0}^{k}\circ\gamma(0)=\bar{f}_{0}^{k}(x)\) and \(\bar{f}_{0}^{k}\circ\gamma(1)=\bar{f}_{0}^{k}(y)\). Hence \[\varrho(\bar{f}_{0}^{k}(x),\bar{f}_{0}^{k}(y))\leq\int_{0}^{1}|(d\bar{f}_{0}^ {k})_{\gamma(t)}\gamma^{\prime}(t)|dt.\] Note that \(\|(d\bar{f}_{0}^{k})_{gG^{0}}\|=\|(d\bar{f}_{0}^{k})_{eG^{0}}\|\) holds for all \(x\in M\) since the metric is \(G\)-invariant. From [9, Remark 9 and Proposition 10] there exist a \(c\geq 1\) and \(\sigma\in(0,1)\) such that \[\varrho(\bar{f}_{0}^{k}(x),\bar{f}_{0}^{k}(y))\leq\int_{0}^{1}\|(d\bar{f}_{0} ^{k})_{eG^{0}}\|\|\gamma^{\prime}(t)|dt\leq c^{-1}\sigma^{k}\int_{0}^{1}| \gamma^{\prime}(t)|dt,\] that is, \[\varrho(\bar{f}_{0}^{k}(x),\bar{f}_{0}^{k}(y))\leq c^{-1}\sigma^{k}\varrho(x, y),\ \forall\ k\in\mathbb{N}. \tag{14}\] Moreover, the solutions \(\bar{\varphi}\) satisfy \[\bar{\varphi}(k+l,eG^{0},u)=\mathcal{L}_{\varphi(k,e,\Theta_{k}(u))}(\bar{f}_ {0}^{k}(\bar{\varphi}(l,eG^{0},u))). \tag{15}\] Now we prove by induction on \(k\) that \[\varrho(\bar{\varphi}(k,eG^{0},u),eG^{0})\leq c^{-1}a\sum_{i=0}^{k-1}\sigma^{i},\] where \(a:=\max\limits_{u\in U}\varrho(\bar{f}_{u}(eG^{0}),eG^{0})\). Assume without loss of generality that \(c^{-1}\geq 1\). For \(k=1\), it follows from (14) that \[\varrho(\bar{\varphi}(1,eG^{0},u),eG^{0})=\varrho(\bar{f}_{u}(eG^{0}),eG^{0}) \leq c^{-1}a.\] Suppose that the assertion holds for \(n\geq 1\). Equality (15) yields \[\bar{\varphi}(k+1,eG^{0},u)=\mathcal{L}_{\varphi(k,e,\Theta_{k}(u))}(\bar{f}_ {0}^{k}(\bar{f}_{u}(eG^{0}))),\] consequently, \[\varrho(\bar{\varphi}(k+1,eG^{0},u),eG^{0}) \leq \varrho(\mathcal{L}_{\varphi(k,e,\Theta_{k}(u))}(\bar{f}_{0}^{k} (\bar{f}_{u}(eG^{0}))),\mathcal{L}_{\varphi(k,e,\Theta_{k}(u))}(eG^{0}))\] \[+ \varrho(\mathcal{L}_{\varphi(k,e,\Theta_{k}(u))}(eG^{0}),eG^{0})\] \[= \varrho(\bar{f}_{0}^{k}(\bar{f}_{u}(eG^{0})),eG^{0})+\varrho( \bar{\varphi}(k,eG^{0},u),eG^{0}),\] because \(\mathcal{L}_{g}\) is an isometry and \(\mathcal{L}_{g}(eG^{0})=\pi(g)\) for all \(g\in G\). By (14) and the inductive hypothesis we have \[\varrho(\bar{\varphi}(k+1,eG^{0},u),eG^{0}) \leq \varrho(\bar{f}_{0}^{k}(\bar{f}_{u}(eG^{0})),eG^{0})+\varrho( \bar{\varphi}(k,eG^{0},u),eG^{0})\] \[\leq c^{-1}\sigma^{k}\varrho(\bar{f}_{u}(eG^{0}),eG^{0})+c^{-1}a\sum_ {i=0}^{k-1}\sigma^{i}\] \[\leq c^{-1}a\sigma^{k}+c^{-1}a\sum_{i=0}^{k-1}\sigma^{i}=c^{-1}a\sum_ {i=0}^{k}\sigma^{i}.\] Since \(\sigma\in(0,1)\), one has \[\pi(\mathcal{R})=\pi\left(\bigcup\limits_{\tau\in\mathbb{N}}\mathcal{R}_{ \tau}\right)=\bigcup\limits_{\tau\in\mathbb{N}}\pi(\mathcal{R}_{\tau})\subset \overline{B_{\delta}(eG^{0})},\] where \(\delta=c^{-1}a\sum_{i=0}^{\infty}\sigma^{i}<\infty\). Hence \(\pi(\mathcal{R})\) is relatively compact in \(M\). 
Note also that \(\pi|_{G^{-}}:G^{-}\to M\) is a homeomorphism because \(G^{-,0}=G^{-}G^{0}\) and \(G^{-}\cap G^{0}=\{e\}\) (cf. [1, Lemma 3.6]). Therefore, \(\mathcal{R}_{G^{-}}\) is relatively compact in \(G^{-}\). The result follows since \(G^{-}\) is closed in \(G\). \(\Box\) The next proposition establishes a necessary and sufficient condition for controllability from the identity of \(G\). **Proposition 28**: _Assume that \(\mathcal{R}\) is open. Then \(\mathcal{R}=G\) if, and only if, \(G=G^{+,0}\)._ **Proof.** If \(G=G^{+,0}\), then Theorem 19 implies \(\mathcal{R}=G\). Conversely, assume \(\mathcal{R}=G\). The proof that \(G=G^{+,0}\) proceeds by induction on \(\dim G\). If \(\dim G=1\), then \(G\) is abelian and we can conjugate the system to its linearization by the exponential map; consequently, the result follows from [8, Corollary 16 (i)]. Now, suppose that the assertion holds for any nilpotent, connected and simply connected Lie group of dimension smaller than \(d\), and consider a nilpotent, connected and simply connected Lie group \(G\) with \(\dim G=d\). Since the center of a nilpotent Lie group is non-trivial, the Lie group \(H:=G/Z(G)\) is nilpotent, connected and simply connected (see [17, Proposition 9.11]) with \(\dim H<d\). The \(f_{0}\)-invariance of \(Z(G)\) implies that the induced linear system on \(H\) is well-defined. By hypothesis, \(\mathcal{R}=G\) yields \(\pi(\mathcal{R})=H\), where \(\pi:G\to H\) is the standard projection, and by induction one has \(H=H^{+,0}\). Lemma 2 implies that \(G^{-}\subset Z(G)\), and by Proposition 3 we get that \(G^{+}\) is a normal subgroup of \(G\). If \(\mathcal{R}^{-,0}\) denotes the reachable set of the induced linear system on \(G^{-,0}=G/G^{+}\), it holds that \(\mathcal{R}^{-,0}=G^{-,0}\). Moreover, \(G^{-,0}/G^{0}\) admits a \(G\)-invariant Riemannian metric because \(G^{-}\subset Z(G)\). Then \(\mathrm{Ad}(G^{0})|_{\mathfrak{g}^{-}}=\{\mathrm{id}_{\mathfrak{g}^{-}}\}\) is compact in \(\mathrm{Gl}(\mathfrak{g}^{-})\) (see [7, Proposition 3.16]). By Lemma 27, \(\mathcal{R}^{-,0}\cap G^{-}=G^{-}\) is relatively compact in \(G\). As \(G^{-}\) is closed, it is compact. Hence, by Proposition 4, \(G^{-}=\{e\}\), and therefore \(G=G^{+,0}\). \(\square\) **Remark 29**: _In the proof of Proposition 28, the hypothesis that \(G\) is nilpotent is important to get \(Z(G)\) non-trivial._ Finally, the main result of this section can be stated. **Theorem 30**: _Consider the linear system (5) on a nilpotent, connected and simply connected Lie group \(G\). Then the system is controllable if, and only if, \(\mathcal{R}\) and \(\mathcal{C}\) are open and \(G=G^{0}\)._ **Proof.** If we assume that \(\mathcal{R}\) and \(\mathcal{C}\) are open and \(G=G^{0}\), the result follows from Theorem 20. On the other hand, if the system is controllable, then \(\mathcal{R}\) and \(\mathcal{C}\) are clearly open. Moreover, since \(G=\mathcal{R}\cap\mathcal{C}\), we can apply Proposition 28 to \(\mathcal{R}\) and \(\mathcal{R}^{*}=\mathcal{C}\) in order to conclude that \(G=G^{0}\). \(\square\) **Remark 31**: _Theorem 30 generalizes [8, Corollary 16 (iii)] on the controllability of linear systems on Euclidean spaces._ We finish this work with an application of the previous result in the case of the Heisenberg group.
**Example 32**: _Consider the Heisenberg group_ \[\mathbb{H}=\left\{\begin{bmatrix}1&x_{2}&x_{1}\\ 0&1&x_{3}\\ 0&0&1\end{bmatrix};\ x_{1},x_{2},x_{3}\in\mathbb{R}\right\},\] _which is diffeomorphic to \(\mathbb{R}^{3}\) with the product_ \[(x_{1},x_{2},x_{3})\cdot(y_{1},y_{2},y_{3})=(x_{1}+y_{1}+x_{2}y_{3},x_{2}+y_{2},x_{3}+y_{3}).\] _Let \(U\) be a compact and connected neighborhood of \(0\in\mathbb{R}\) and \(f:\mathbb{H}\times U\to\mathbb{H}\) be given by_ \[f_{u}(x_{1},x_{2},x_{3})=\left(x_{1}+x_{2}+\frac{x_{2}^{2}}{2}+ux_{2}+ux_{3}-\frac{u}{2}-\frac{u^{2}}{3},x_{2}+u,x_{2}+x_{3}-\frac{u}{2}\right).\] _It is not difficult to see that_ \[g_{k+1}=f_{u_{k}}(g_{k}),\ u_{k}\in U \tag{16}\] _is a linear system on \(\mathbb{H}\) (see [9, Example 6]). Note that_ \[f_{u}^{-1}(x_{1},x_{2},x_{3})=\left(x_{1}-x_{2}-\frac{x_{2}^{2}}{2}+ux_{2}-ux_{3}+\frac{3u}{2}-\frac{2u^{2}}{2},x_{2}-u,-x_{2}+x_{3}-\frac{3u}{2}\right).\] _Since each coordinate of both \(f_{u}(0,0,0)\) and \(f_{u}^{-1}(0,0,0)\) is a polynomial in \(u\) with root \(u=0\) and non-zero derivative at \(u=0\), there is \(\varepsilon>0\) such that \(B_{\varepsilon}(0,0,0)\subset f_{\mathrm{int}U}(0,0,0)\cap f_{\mathrm{int}U}^{-1}(0,0,0)\), so that \((0,0,0)\) has a neighborhood contained in both \(\mathcal{R}\) and \(\mathcal{R}^{*}\). By item (6) of Proposition 12, \(\mathcal{R}\) and \(\mathcal{R}^{*}=\mathcal{C}\) are open. Moreover, the unique eigenvalue of \(df_{0}\) is \(1\) (see [9, Example 20]), hence \(G=G^{0}\). By Theorem 30, the system (16) is controllable._
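The eigenvalue claim in Example 32 can be checked symbolically. The snippet below (added here for illustration; it is not part of the original article) computes the Jacobian of \(f_{0}\) at the identity \((0,0,0)\) and confirms that \(1\) is its unique eigenvalue.

```python
import sympy as sp

x1, x2, x3, u = sp.symbols('x1 x2 x3 u')

# f_u from Example 32; setting u = 0 gives the uncontrolled map f_0.
f_u = sp.Matrix([
    x1 + x2 + x2**2 / 2 + u*x2 + u*x3 - u/2 - u**2/3,
    x2 + u,
    x2 + x3 - u/2,
])
f_0 = f_u.subs(u, 0)

# Jacobian of f_0 evaluated at the identity element (0, 0, 0).
df0 = f_0.jacobian(sp.Matrix([x1, x2, x3])).subs({x1: 0, x2: 0, x3: 0})
print(df0)              # Matrix([[1, 1, 0], [0, 1, 0], [0, 1, 1]])
print(df0.eigenvals())  # {1: 3}: the unique eigenvalue is 1, so G = G^0
```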
2310.00354
AI-Dentify: Deep learning for proximal caries detection on bitewing x-ray -- HUNT4 Oral Health Study
Background: Dental caries diagnosis requires the manual inspection of diagnostic bitewing images of the patient, followed by a visual inspection and probing of the identified dental pieces with potential lesions. Yet the use of artificial intelligence, and in particular deep learning, has the potential to aid in the diagnosis by providing a quick and informative analysis of the bitewing images. Methods: A dataset of 13,887 bitewings from the HUNT4 Oral Health Study was annotated individually by six different experts and used to train three different object detection deep-learning architectures: RetinaNet (ResNet50), YOLOv5 (M size), and EfficientDet (D0 and D1 sizes). A consensus dataset of 197 images, annotated jointly by the same six dentists, was used for evaluation. A five-fold cross-validation scheme was used to evaluate the performance of the AI models. Results: The trained models show an increase in average precision and F1-score, and a decrease in false negative rate, with respect to the dental clinicians. When compared against the dental clinicians, the YOLOv5 model shows the largest improvement, reporting 0.647 mean average precision, 0.548 mean F1-score, and 0.149 mean false negative rate, whereas the best annotators on each of these metrics reported 0.299, 0.495, and 0.164, respectively. Conclusion: Deep-learning models have shown the potential to assist dental professionals in the diagnosis of caries. Yet, the task remains challenging due to the artifacts natural to the bitewing images.
Javier Pérez de Frutos, Ragnhild Holden Helland, Shreya Desai, Line Cathrine Nymoen, Thomas Langø, Theodor Remman, Abhijit Sen
2023-09-30T12:17:36Z
http://arxiv.org/abs/2310.00354v3
# AI-Dentify: Deep learning for proximal caries detection on bitewing x-ray - HUNT4 Oral Health Study ###### Abstract **Background:** Dental caries diagnosis requires the manual inspection of diagnostic bitewing images of the patient, followed by a visual inspection and probing of the identified dental pieces with potential lesions. Yet the use of artificial intelligence, and in particular deep-learning, has the potential to aid in the diagnosis by providing a quick and informative analysis of the bitewing images. **Methods:** A dataset of 13,887 bitewings from the HUNT4 Oral Health Study were annotated individually by six different experts, and used to train three different object detection deep-learning architectures: RetinaNet (ResNet50), YOLOv5 (M size), and EfficientDet (D0 and D1 sizes). A consensus dataset of 197 images, annotated jointly by the same six dentist, was used for evaluation. A five-fold cross validation scheme was used to evaluate the performance of the AI models. **Results:** the trained models show an increase in average precision and F1-score, and decrease of false negative rate, with respect to the dental clinicians. Out of the three architectures studied, YOLOv5 shows the largest improvement, reporting 0.647 mean average precision, 0.548 mean F1-score, and 0.149 mean false negative rate. Whereas the best annotators on each of these metrics reported 0.299, 0.495, and 0.164 respectively. **Conclusion:** Deep-learning models have shown the potential to assist dental professionals in the diagnosis of caries. Yet, the task remains challenging due to the artifacts natural to the bitewings. **Keywords:** caries detection, Bitewing, Digital dentistry, Deep learning, Object detection ## 1 Introduction As reported in the WHO Global Oral Health Status Report in 2022 [1], globally 3.5 billion people are afflicted by some form of oral disease, and 2 billion suffer from caries in permanent teeth. Furthermore, untreated dental caries in permanent teeth is the most common dental health condition. Diagnosis of such lesions requires both the inspection of clinical images e.g., X-ray (bi-dimensional images) or cone beam computed tomography (tri-dimensional images), as well as the visual examination and probing of the affected tooth or teeth. This procedure is time consuming, and requires a high level experience when analysing the clinical images. The two main image modalities used to assist and support the examination of caries are bitewing (BW) and panoramic radiography (OPG) [2, 3]. Caries, particularly proximal caries, a type of carious lesion located on the surfaces between adjacent teeth, are difficult to detect manually or visually (i.e. using radiographic X-ray images) due to artifacts. Also, poor angulation can hinder the correct identification of the lesions or even occlude lesser grade caries. Since 2008, the research on the application of artificial intelligence (AI) and, more specifically, deep learning (DL) convolutional neural networks (CNN) models for the analysis of dental has noticeably increased [4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. However, research on this field is still limited compared to other clinical areas. Data availability and reliable annotations [8, 13] are the main bottlenecks in the development of machine learning (ML) methods in dentistry. A large portion of the published work uses a dataset of fewer than \(300\) images, only few studies have access to large datasets [8] with more than \(1,000\) images like [11, 15, 16]. 
Of these publications, the work presented in [4, 5, 6, 11, 15] focuses on object detection, which is the scope of the present study. Object detection or object recognition refers to the task of localising and classifying objects in a picture [14]. The localisation is usually marked using axis-aligned bounding boxes, surrounding the outermost boundary of the item of interest. In Devito _et al._ [4], a multi-layer perceptron with \(51\) artificial neurons (\(25\) in the input layer, \(25\) in the hidden layer, and one in the output layer) is used to detect proximal caries on BW images, using a dataset of \(160\) images annotated by \(25\) experts. In Srivastava _et al._ [11], a caries detector built using a tailor-designed fully connected neural network was trained with \(3,000\) annotated BW images. In Singh _et al._ [6], hand-crafted features for X-ray images are built using Radon and discrete cosine transformations, and further classified using an ensemble of ML techniques such as random forest. Park _et al._ [15] proposed an ensemble of U-Net and Fast R-CNN for caries detection in colour images, trained with \(2,348\) RGB intraoral photographic images. Even though the work done by Cantu _et al._ [16] focuses on image segmentation, it is worth mentioning because of the dataset used: \(3,686\) BW images, with caries segmentation annotations, to train a U-Net model for segmentation. ### Study goals In this study we compare three state-of-the-art deep learning architectures for object detection on the task of proximal caries detection, namely RetinaNet, YOLOv5, and EfficientDet. By using an extensive annotated dataset, we hypothesised that AI object detection models can perform on par with or better than dental clinicians. Hence, in this study we trained the aforementioned architectures to detect and classify enamel caries, dentine caries, and secondary lesions in BW images. Then, the models were compared to human annotators in order to test our hypothesis. In addition, a novel processing pipeline for merging multi-observer object detection annotations, based on Gaussian Mixture Models, is proposed. ## 2 Methods ### Dataset The bitewing images used in this study were collected as part of the HUNT4 Oral Health Study, a sub-study of the fourth phase of the HUNT study [17]. The HUNT4 Oral Health Study is a collaborative study between several Norwegian institutes, including the HUNT research centre, the Tannhelsetjenestens Kompetansesenter Midt (TkMidt), the Norwegian University of Science and Technology (NTNU), the University of Oslo (UiO), the Tannhelsetjenestens Kompetansesenter Øst (TkØ), and the Norwegian National Centre for Ageing and Health. The data collected consisted of clinical and radiographic oral examinations, which took place between 2017 and 2019. A total of \(7,347\) participants were invited to participate in the study, out of a population of \(137,233\) people (2017) [18]. Only \(493\) participants were included in the Oral Health survey study [18]. A total of \(19,210\) BW and \(5,039\) OPG images were collected from the participants. For this study, only the BW images were considered. The following subsections further describe the steps of the workflow followed in the present study, which is depicted in Figure 1. ### Data annotation The data were annotated by six dental clinicians with extensive experience in the diagnosis of proximal caries, using the open-source annotation tool AnnotationWeb [19].
The caries were classified into five severity grades, plus secondary lesions and lesions of unknown grade, as shown in Table 1. Further details of the annotation procedure can be found in Section 1 of the Additional Materials 1. To clean the annotations so as to obtain a ground truth for training the AI models, a novel strategy for combining object detection annotations from multiple observers was devised for this project. First, the annotated bounding boxes were grouped based on the intersection over union (IoU) score, a metric which describes how well the boxes overlap. Then, a Gaussian distribution was fitted to each bounding box in the group, along the vertical and horizontal axes. A mixture density function (MDF) of a Gaussian Mixture Model in which all distributions have the same weight was obtained by combining the probability density functions of the fitted Gaussian distributions. The common bounding box was then obtained from the MDF given a probability threshold (\(p\)), as detailed in Algorithm 2 in the Additional Materials 1. \begin{table} \begin{tabular}{r l} \hline \hline Label name & Description \\ \hline Grade 1 & Radiolucent in the outer half of the enamel [20, 21] \\ Grade 2 & Radiolucent in the inner half of the enamel, but not in the dentine [20, 21] \\ Grade 3 & Radiolucent in the outer third of the dentine [20, 21] \\ Grade 4 & Radiolucent in 2/3 of the dentine [20, 21] \\ Grade 5 & Radiolucent in the inner third of the dentine [20, 21] \\ Secondary lesion & Caries related to implants or insertions \\ Unknown grade & Caries whose grade cannot be clearly identified \\ \hline \hline \end{tabular} \end{table} Table 1: Definition of the classes used to annotate the dataset Figure 1: Workflow Alternatively, the non-maximum suppression (NMS) algorithm can be used to find the best fitting bounding box. However, since all the annotations had the same level of confidence, unlike the predictions made by an AI model, NMS would be biased towards the first bounding box selected as a reference. Lastly, the label of the common bounding box was determined based on the most voted class among the bounding boxes in the group. In case of a tie, the most severe class was chosen, e.g., dentine caries over enamel caries. A total of \(13,887\) images were annotated by one to six of the dental clinicians (see Figure 2), with a total of \(13,585\) images annotated by more than one dentist. The distribution of labels in Figure 3 shows a higher volume of secondary lesions than of all the other grades. After discussion with the dental clinicians, it was agreed to merge grades one and two under the label of "enamel caries", and grades three to five under the group of "dentine caries". Secondary caries and unknown grade groups were kept as separate label groups. In addition, 197 images were annotated by consensus agreement among all the expert annotators, so as to build a test set for evaluation purposes. To create this dataset, hereafter the consensus test set, all annotators (dental clinicians) were brought together in the same room to agree by consensus on the annotation of the images. The images in the consensus test set had previously been annotated by all annotators individually, with a considerable time gap between the individual annotations and the creation of the consensus agreement annotations, so that the annotations could be considered independent of each other. Figure 2: Distribution of annotated images in the annotated dataset. In the legend, the number of annotated images for each interval is shown within brackets.
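To make the merging strategy above concrete, the following sketch illustrates one possible implementation. The exact fitting and thresholding rules live in Algorithm 2 of the Additional Materials, which is not reproduced here; the choices below (Gaussians centred at the box centres with the half-width as standard deviation, a threshold expressed as a fraction \(p\) of the mixture's peak, and the severity ordering used to break ties) are assumptions made purely for illustration.

```python
import numpy as np
from scipy.stats import norm

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def merge_axis(lo, hi, p=0.5):
    """Merge 1-D intervals via an equal-weight Gaussian mixture.

    Assumption: each interval contributes N(centre, half-width); the merged
    interval is where the mixture density exceeds a fraction p of its peak.
    """
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    mu, sigma = (lo + hi) / 2.0, np.maximum((hi - lo) / 2.0, 1e-3)
    grid = np.linspace(lo.min() - 3 * sigma.max(), hi.max() + 3 * sigma.max(), 2000)
    mdf = np.mean([norm.pdf(grid, m, s) for m, s in zip(mu, sigma)], axis=0)
    keep = grid[mdf >= p * mdf.max()]
    return keep.min(), keep.max()

def merge_group(boxes, labels, severity, p=0.5):
    """Combine one IoU-matched group of annotator boxes into a single box."""
    boxes = np.asarray(boxes, float)
    x1, x2 = merge_axis(boxes[:, 0], boxes[:, 2], p)
    y1, y2 = merge_axis(boxes[:, 1], boxes[:, 3], p)
    # Majority vote on the class; ties are resolved towards the most severe label.
    votes = {label: labels.count(label) for label in set(labels)}
    top = max(votes.values())
    winner = max((l for l, v in votes.items() if v == top), key=severity.index)
    return (x1, y1, x2, y2), winner

# Example: three annotators drew overlapping boxes around the same lesion.
box, label = merge_group(
    boxes=[(10, 12, 40, 45), (12, 14, 42, 44), (9, 11, 38, 46)],
    labels=["enamel", "enamel", "dentine"],
    severity=["enamel", "dentine", "secondary"],  # illustrative ordering only
)
```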
### Object detection models Three state of the art object detection architectures were evaluated for caries detection: RetinaNet (Keras implementation) [22] (ResNet50 backbone), YOLOv5 [23] (size M), and EfficientDet [24] (pretrained D0 and D1). All the models used transfer learning, which is a common strategy when adapting object detection models to a particular dataset, by loading the weights of pre-trained models on a larger dataset set e.g., ImageNet or COCO datasets. RetinaNet was initialised with the weights of ResNet50 trained on ImageNet dataset, YOLOv5 loaded the weights pre-trained on COCO dataset (provided in the original repository [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5)), and EfficientDet pretrained weights were obtained from https: Figure 3: Distribution of annotations in the dataset annotated by the six dental clinicians. Enamel proximal caries (Grades 1 and 2, total \(19,995\) annotations) are pictured in light green, dentine lesions (Grade 3 to 5, total \(17,903\) annotations) are in orange, secondary lesions are depicted in pink, and caries of uncertain grade have been highlighted in grey. Image free of lesions (No caries) are shown in dark blue, here the number of annotations matches the number of images. //github.com/rwightman/efficientdet-pytorch. For better comparison of the architectures, Table 2 shows the number of parameters for each architecture. Due to time restrictions, not all the versions of YOLOv5 and EfficientDet are included in the current results. The pre-processing, standardisation to the range \([0,1]\), and post-processing were the same for all models and experiments. Preliminary experiments were conducted with the contrast limited adaptive histogram equalization (CLAHE) method, inspired by Georgieva et al. [25], but these experiments were eliminated before the final round of cross-validation because they did not lead to any improvement in the scores. Only horizontal and vertical flipping were used to augment the training dataset, both being applied with a probability of \(0.5\). The training was done on a dedicated server running Ubuntu 20.04. The machine featured a NVidia Quadro RTX 5000 GPU with 16 GB VRAM, a Intel Core i7-9700 CPU, 32 GB RAM, 1 TB SSD, and 8 TB HDD. ### Validation protocol After removing the images rejected by the annotators (980), images with unknown grade annotations \((4,565)\), and those in the consensus test dataset (197), the remaining \(8,342\) images were split into five folds to perform a cross-validation (CV) study. Random sampling without replacement was used to build the folds. The cross-validation training and evaluation was performed with a three-way-split, i.e. for each iteration, three folds were used for training, one fold was used for validation during training, to \begin{table} \begin{tabular}{c c} \hline \hline Architecture & Number of parameters (millions) \\ \hline YOLOv5 M & 21.2M \\ \hline RetinaNet (ResNet50) & 36.4M \\ \hline EfficientDet D0 & 3.9M \\ EfficientDet D1 M & 6.6M \\ \hline \hline \end{tabular} \end{table} Table 2: Number of parameters of each architecture. avoid overfitting; and the final fold was kept aside as a test set for the final performance evaluation. To test our hypothesis on the performance of AI models, both the trained models in each fold and the annotators were evaluated against the consensus test set. 
However, due to time constraints, one of the annotators (marked with an *) did not complete the individual annotation task, missing one image, and thus the resulting metrics of this annotators are not strictly comparable to those of the models and other annotators. ### Performance evaluation As aforementioned, the models described in Section 2.3 and the annotators were evaluated on the consensus test set. The metrics used in the evaluation were the standard metrics for evaluating object detection model performance: average precision (AP) for each of the classes, the mean average precision (mAP) across classes, the F1-score (F1) for each of the classes, the mean F1-score (mF1), as a surrogate for the recall and precision, the false negative rate (FNR) for each class, and the average across classes (mFNR). These three metrics are in the range \([0,1]\). Bootstrap confidence intervals (95%) were computed for the test results of both the models and the annotators, to compare the performance of these. The intervals were computed using the bias-corrected and accelerated bootstrap algorithm [26], with \(1,000\) iterations for confidence interval. Significance in score differences between annotators and models were determined based on overlap of the confidence intervals. ## 3 Results The evaluation results on the consensus test set for the five-fold cross-validation can be found in Tables 3 to 5. The metrics were computed using the PASCAL VOC metrics implemented in [27], with an IoU thresholds of \(0.3\). The threshold was deemed an adequate trade-off between precision and recall for the current application, where detection of potential caries is preferred. The YOLOv5 model reached the highest AP scores for all classes, as well as the highest F1-scores for two out of three classes, and the lowest FNR for all classes. The confidence intervals for the means of the distributions of the performance metrics, calculated for each model and each annotator, can be found in Table 6. As described in Section 2.5, these intervals were used to assess statistical significance between the different architectures, as well as between the models and the human \begin{table} \begin{tabular}{c c c c c} \hline \hline Model & Enamel caries & Dentine caries & Secondary lesion & mF1 \\ \hline YOLOv5 M & 0.513 \(\pm\) 0.011 & **0.588 \(\pm\) 0.019** & **0.563 \(\pm\) 0.029** & **0.555 \(\pm\) 0.011** \\ \hline RetinaNet & 0.234 \(\pm\) 0.032 & 0.312 \(\pm\) 0.015 & 0.228 \(\pm\) 0.027 & 0.258 \(\pm\) 0.013 \\ \hline EfficientDet D0 & 0.465 \(\pm\) 0.055 & 0.459 \(\pm\) 0.021 & 0.444 \(\pm\) 0.006 & 0.456 \(\pm\) 0.017 \\ EfficientDet D1 & **0.533 \(\pm\) 0.021** & 0.561 \(\pm\) 0.020 & 0.507 \(\pm\) 0.028 & 0.534 \(\pm\) 0.019 \\ \hline \hline \end{tabular} \end{table} Table 4: F1-score results (mean and standard deviation) of the five-fold cross-validation evaluated on the consensus test set. The best metrics are highlighted in bold. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline Model & Enamel caries & Dentine caries & Secondary lesion & mFNR \\ \hline YOLOv5 M & **0.153 \(\pm\) 0.021** & **0.215 \(\pm\) 0.032** & **0.160 \(\pm\) 0.035** & **0.176 \(\pm\) 0.025** \\ \hline RetinaNet & 0.185 \(\pm\) 0.060 & 0.364 \(\pm\) 0.044 & 0.200 \(\pm\) 0.045 & 0.250 \(\pm\) 0.031 \\ \hline EfficientDet D0 & 0.606 \(\pm\) 0.060 & 0.636 \(\pm\) 0.020 & 0.533 \(\pm\) 0.020 & 0.592 \(\pm\) 0.025 \\ EfficientDet D1 & 0.479 \(\pm\) 0.018 & 0.524 \(\pm\) 0.020 & 0.459 \(\pm\) 0.021 & 0.487 \(\pm\) 0.011 \\ \hline \hline \end{tabular} \end{table} Table 5: False negative rate (FNR) results (mean and standard deviation) of the five-fold cross-validation evaluated on the consensus test set. The best metrics are highlighted in bold. expert rater performance. A graphical representation of these is show in Figure 4, for ease of interpretation. Overall, the scores of all of the object detection models were similar to or better than that of the human expert annotators. In terms of AP, the YOLOv5 model achieved significantly higher scores than all of the annotators, as well as the RetinaNet and the EfficientDet D0. The RetinaNet and EfficientDet models also achieved mAP-scores that were similar to or significantly better than the annotators. Regarding F1, The YOLOv5 model achieved significantly higher scores than the RetinaNet model and 4 out of 6 annotators, but the difference with the EfficientDet models were not significant. The EfficientDet models achieved mF1-scores similar to or better than the annotators, whereas the mF1-score of the RetinaNet model was significantly lower than most of the annotators. In terms of FNR, the YOLOv5 model was significantly better (lower scores) than 4 annotators, and similarly, the mFNR of the RetinaNet was significantly better than 3 of the annotators. The EfficientDet models achieved mFNR scores that were similar to or significantly higher than the annotators, meaning that the performance was similar to worse than that of the annotators. A table with the results per class can be found in Section 4 of the Additional Materials 1. \begin{table} \begin{tabular}{r|c c|c c} \hline Model / Annotator & \multicolumn{2}{c}{mAP} & \multicolumn{2}{c}{mF1} & \multicolumn{1}{c}{mFNR} \\ \hline YOLOv5, fold 4 & 0.647 [0.566, 0.707] & 0.548 [0.506, 0.598] & 0.149 [0.110, 0.203] \\ \hline RetinaNet, fold 4 & 0.407 [0.355, 0.458] & 0.177 [0.154, 0.202] & 0.210 [0.167, 0.262] \\ \hline EfficientDet D0, fold 4 & 0.360 [0.290, 0.431] & 0.522 [0.461, 0.588] & 0.484 [0.422, 0.552] \\ EfficientDet D1, fold 4 & 0.503 [0.421, 0.569] & 0.503 [0.421, 0.569] & 0.359 [0.306, 0.431] \\ \hline Annotator 1* & 0.284 [0.231, 0.347] & 0.495 [0.447, 0.552] & 0.480 [0.413, 0.552] \\ Annotator 2 & 0.250 [0.247, 0.285] & 0.385 [0.346, 0.420] & 0.309 [0.251, 0.374] \\ Annotator 3 & 0.242 [0.199, 0.320] & 0.403 [0.343, 0.470] & 0.631 [0.564, 0.686] \\ Annotator 4 & 0.299 [0.270, 0.353] & 0.450 [0.411, 0.492] & 0.237 [0.180, 0.292] \\ Annotator 5 & 0.288 [0.244, 0.356] & 0.479 [0.423, 0.528] & 0.444 [0.376, 0.515] \\ Annotator 6 & 0.261 [0.248, 0.301] & 0.376 [0.346, 0.410] & 0.164 [0.124, 0.217] \\ \hline \end{tabular} \end{table} Table 6: Mean average precision (mAP), mean F1-score (mF1), and mean false negative rate (mFNR) evaluation of the models and individual annotators on the consensus test set, with an IoU-threshold of 0.3. All metrics are reported as score over the whole test set, and a 95% confidence interval. 
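As an illustration of the interval construction described in Section 2.5 (this is not the authors' code), per-image scores can be resampled with SciPy's bias-corrected and accelerated (BCa) bootstrap; the per-image F1 values below are placeholders, and the choice of the image as the resampling unit is an assumption.

```python
import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(0)
# Placeholder: one score per consensus-test-set image for a given model/class.
per_image_f1 = rng.uniform(0.3, 0.8, size=197)

res = bootstrap(
    (per_image_f1,),        # data must be passed as a sequence of samples
    np.mean,                # statistic: mean score over the test set
    n_resamples=1000,       # 1,000 iterations, as in the paper
    confidence_level=0.95,
    method="BCa",
    random_state=rng,
)
print(res.confidence_interval)  # ConfidenceInterval(low=..., high=...)
```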
## 4 Discussion In the presented study, three different object detection DL architectures were trained and evaluated on the task of detection of proximal caries in BW X-ray images. The caries were annotated by dental clinicians and classified into three groups: enamel, dentine, and secondary lesions. The predictive performance of the models was assessed in terms of the object detection metrics AP, F1-score, and FNR, and compared against the performance of human expert annotators on a consensus test set. The main finding is that all model performances were on par with or better than the human annotators, Figure 4: Bootstrap 95% confidence intervals for the metrics mAP, mF1 and mFNR, for the models and the annotators with the best model achieving significantly higher scores than the human annotators for all metrics. The dataset presented in this study features \(13,882\) BW images, with various lesions annotated by six dental clinicians. To the best of our knowledge, this is the largest dataset presented so far for the task of training object detection models for caries detection, exceeding the size of the dataset described in [11] with \(3,000\) images, and in [16] with \(3,686\) BW images. A novel strategy for combining the annotations from multiple annotators on the same image was presented, creating robust ground truth annotations for training by combining the expert knowledge of all the annotators. In addition, a test set consisting of \(197\) images was jointly annotated by all the annotators by consensus agreement. The consensus test set was used to compare the model performances against the performance of the individual annotators, allowing for an assessment of the models usefulness by comparison against a baseline of human expert knowledge. As detailed in Section 2.4, the performance of each of the architectures was assessed using five-fold cross validation. In addition, all of the models were evaluated on the consensus test set, presented in Tables 3 to 5. The selected metrics, AP, F1-score, and FNR, were deemed appropriate for this experiment, as they summarise the goodness of the models to correctly identify the caries (AP), the trade off between precision and recall (F1-score), and the rate at which the object detectors disregard the caries which are in the BW images (FNR). By using the PASCAL VOC implementation of the metrics, the AP precision is regressed using a larger amount of points, compared to the 11-point interpolation of the AP curve, used in the COCO implementation of AP [27]. This resulted in a better estimate of this metric, and was therefore considered adequate for this study. Lastly, to assess the statistical difference in performance of the models and the expert annotators, confidence intervals were estimated using the BCa algorithm [26]. The YOLOv5 model achieved the best performance in terms of the metrics used in the study. Both the EfficientDet D1 and YOLOv5 achieved significantly better performance than the RetinaNet in terms of mAP and mF1-score, even though the number of parameters for these models are lower than that of the of RetinaNet. Indeed, EfficientDet D1 is one fifth the size of RetinaNet, and yet it performed better in terms of mAP and F1. On the other hand, both the YOLOv5 and the RetinaNet achieved significantly lower FNR-scores than the EfficientDet models. 
In sum, all of the presented architectures exhibited different strengths and weaknesses, and an ensemble strategy of the models should be thus be considered, to improve the robustness of the predictions. Compared to equivalent previously published studies, comparable in scope with the presented work, the performances of the models are lower than the values reported in [15, 15, 13, 4, 5, 8, 9, 10, 11, 12, 4], although the values are not directly comparable as they are reported on different datasets. Unlike in these studies, the focus of this work was not to optimise and build a tailored object detection model, but to assess if the dataset was sufficient to obtain equivalent or better performance than dental clinicians, using state-of-the-art architectures. Indeed, as shown in Section 3, the trained models achieved significantly higher performances in sum on all of the metrics. A combination of the models strengths and weaknesses could thus be a solid foundation for an assistive tool for various lesion detection in clinical practice. As introduced in Section 1, the exclusive use of BW images to identify various lesions is under-par, as it requires a follow-up direct inspection and probing of the infected area. However, the presented deep learning models have the potential to improve the efficiency of the analysis of the bitewing images and aid in the detection of these lesions, helping to speed up and improve the detection and diagnosis of caries. The architectures included in this study were not modified nor tailored for the used dataset or applications, unlike previously published works [15, 15, 13, 4, 5, 6, 7, 8, 9, 10, 11, 12, 4]. Arranging the trained models in an ensemble fashion is expected to increase the overall performance, and the robustness of the predictions. Also, a patch-wise inference could further boost the performance by exposing the network to a closer view of the dental pieces, instead of working on the whole picture. Other augmentation techniques should be considered, such as gamma and brightness augmentations. Finally, future work should provide information regarding the inference runtime, so as to assess if it the detection models are suitable to be used in practice. ## 5 Conclusions Detection and identification of caries on BW images entails several difficulties, including the monocular view of the dental structures, and hence, presence of artifacts due to the overlap of the dental pieces. Therefore, it is common practice to perform a visual inspection of the lesions found in the medical images. In this study, it has been shown how AI-powered object detectors can ease the task of finding these lesions in the images, with better performance than dental clinicians. To support this statement, three state-of-the-art object detection architectures were trained on the HUNT4 Oral Health Study BW image dataset, and evaluated against expert dental clinicians. Out of the three architectures, YOLOv5 (medium size) yielded the best results, achieving significantly higher scores than the expert annotators. A combination of the presented models can be used as an assistive tool in the clinic, to speed up and improve the detection rate of carious lesions. The usefulness of such a tool will be assessed in a future clinical validation study. 
## Abbreviations \begin{tabular}{l l} _AI:_ & Artificial intelligence \\ _ML:_ & Machine learning \\ _DL:_ & Deep learning \\ _BW:_ & Bitewing image \\ _OPG:_ & Panoramic X-ray image \\ _IoU:_ & Intersection over union \\ _NSM:_ & Non-maximum suppression algorithm \\ _MDF:_ & Mixture density function \\ _CLAHE:_ & Contrast limited adaptive histogram equalization \\ _AP:_ & Average precision \\ _F1:_ & F1-score \\ _FNR:_ & False negative rate \\ _mAP:_ & Mean average precision across classes \\ _mF1:_ & Mean F1-score across classes \\ _mFNR:_ & Mean false negative rate across classes \\ \end{tabular} ## Declarations ### Ethics approval and consent to participate Ethical approval has already been granted by the Regional Ethical Committee (REK) based in central Norway (project number 64645), and also had approval from Norsk Senter for Forskningsdata (reference number 718269). ### Consent for publication Not applicable. ### Availability of data and materials The HUNT data reported in this study cannot be deposited in a public repository because it is governed by Norwegian law. To request access, researchers associated with Norwegian research institutes can apply for the use of HUNT data and samples with approval by the Regional Committee for Medical and Health Research Ethics. Information for data access can be found at [https://www.ntnu.edu/hunt/data](https://www.ntnu.edu/hunt/data). All the data in the this manuscript are available from TkMidt (contact: Abhijit Sen, [email protected]) on reasonable request. The code and trained models can be provided upon reasonable request to Boneprox A.B. (contact: Shreya Desai, [email protected]). ### Competing interests The authors declare the following financial interest/personal relationships that may be considered as potential competing interests: SD is employee at Boneprox A.B., and TR is CEO of Boneprox A.B., and is co-founder and a major shareholder of Boneprox A.B. ### Funding The AI-Dentify project (project number 321408-IPN\(\not\)ERINGSLIV20) is funded by the Research Council of Norway, under the scope of the Innovation Project for the Industrial Sector. ### Authors' contributions **Conceptualization**: JPdF, RHH, SD, AS; **Methodology**: JPdF, RHH, SD, AS; **Data acquisition**: JPdF, LCN; **Formal analysis and investigation**: JPdF, RHH, SD, LCN; **Writing - original draft preparation**: JPdF, RHH, SD; **Writing - review and editing**: JPdF, RHH, SD, LCN, AS, TL; **Funding acquisition**: TL, TR, AS; **Resources**: TL, TR, AS; **Supervision**: TL, TR, AS; All authors read and approved the final manuscript. ### Acknowledgements The authors would like to express their gratitude to the dental clinicians that helped with the annotations of the BW images: Trine Matheson Bye, Gunnar Lyngstad, Odd-Arne Orland, Harald Solem, and Mats Sall. Also to Theodor Remman, CEO of Boneprox A.B. and project manager of the AI-Dentify project. Furthermore, we would like to thank Hedda Hovik, Astrid J Feuerherm, and Patrik Cetrelli working at TkMidt for helping in data processing, logistics and making resources available.
2309.13539
MediViSTA-SAM: Zero-shot Medical Video Analysis with Spatio-temporal SAM Adaptation for Echocardiography
The Segmentation Anything Model (SAM) has gained significant attention for its robust generalization capabilities across diverse downstream tasks. However, the performance of SAM is noticeably diminished in medical images due to the substantial disparity between natural and medical image domain. In this paper, we present a zero-shot generalization model specifically designed for echocardiography analysis, called MediViSTA-SAM. Our key components include (i) the introduction of frame-level self-attention, which leverages cross-frame attention across each frame and its neighboring frames to guarantee consistent segmentation outcomes, and (ii) we utilize CNN backbone for feature embedding for the subsequent Transformer for efficient fine-tuning while keeping most of the SAM's parameter reusable. Experiments were conducted using zero-shot segmentation on multi-vendor in-house echocardiography datasets, indicating evaluation without prior exposure to the in-house dataset during training. MediViSTA-SAM effectively overcomes SAM's limitations and can be deployed across various hospital settings without the necessity of re-training models on their respective datasets. Our code is open sourced at: \url{https://github.com/kimsekeun/MediViSTA-SAM}
Sekeun Kim, Kyungsang Kim, Jiang Hu, Cheng Chen, Zhiliang Lyu, Ren Hui, Sunghwan Kim, Zhengliang Liu, Aoxiao Zhong, Xiang Li, Tianming Liu, Quanzheng Li
2023-09-24T03:49:27Z
http://arxiv.org/abs/2309.13539v3
# MediViSTA-SAM: Zero-shot Medical Video Analysis with Spatio-temporal SAM Adaptation ###### Abstract The Segmentation Anything Model (SAM) has attracted considerable attention as a foundational model well-known for its robust generalization capabilities across various downstream tasks. However, SAM does not exhibit satisfactory performance in the realm of medical image analysis. In this study, we introduce the first study on adapting SAM on video segmentation, called MediViSTA-SAM, a novel approach designed for medical video segmentation. Given video data, MediViSTA, spatio-temporal adapter captures long and short range temporal attention with cross-frame attention mechanism effectively constraining it to consider the immediately preceding video frame as a reference, while also considering spatial information effectively. Additionally, it incorporates multi-scale fusion by employing a U-shaped encoder and a modified mask decoder to handle objects of varying sizes. To evaluate our approach, extensive experiments were conducted using state-of-the-art (SOTA) methods, assessing its generalization abilities on multi-vendor in-house echocardiography datasets. The results highlight the accuracy and effectiveness of our network in medical video segmentation. Our code will be open sourced at: [https://github.com/kimsekeun/MediViSTA-SAM](https://github.com/kimsekeun/MediViSTA-SAM) frames. This similarity could potentially aid in addressing the challenges associated with the time-related frames in medical video data, utilizing the beneficial attention mechanism. Expanding on the previously mentioned strategies, it is essential to further explore the complexities of analyzing medical video data, where both spatial and temporal aspects hold significant importance. Particularly, addressing the spatial characteristics of multi-scale objects in medical images is vital. Many anatomical structures or lesions in medical images are quite small, and achieving a higher resolution is often necessary to ensure improved discrimination in the context of medical imaging. The U-Net architecture (Ronneberger et al., 2015), a cornerstone in the domain of Fully Convolutional Networks (FCN), addresses these issues with its hallmark encoder-decoder, or contraction-expansion structure. Motivated by these observations, we propose an innovative video segmentation structure that utilizes the U-shaped convolution framework and the transformer block of SAM for medical video segmentation, adept at capturing the spatio-temporal information. To transition from 2D images to video data, we employ the TimeSformer(Bertasius et al., 2021) as a foundational element. We have customized the TimeSformer model to incorporate both long and short-range temporal attention, fostering temporally consistent segmentation through the development of a new cross-frame attention mechanism. This approach aims to enhance the accuracy and consistency of video segmentation tasks, demonstrating precise and reliable medical video analysis in extensive experiments. Our contributions are summarized as three-folds: \(\bullet\) We propose a new spatio-temporal adapter to modify the SAM for better compatibility with medical video segmentation. Especially, we have designed it to incorporate both long-range and short-range information with cross-frame attention mechanism to dependencies between frames. This approach successfully integrates the 2D SAM model into video segmentation tasks, enhancing its functionality and efficiency. 
\(\bullet\) We modify the multi-scale fusion framework and SAM so as to retain SAM's generalization ability in our model, demonstrating superior performance in echocardiography segmentation tasks compared to state-of-the-art (SOTA) methods. \(\bullet\) We conduct comprehensive validation of our methods using in-house data from multiple centers and vendors in the field of echocardiography. The results underscore the remarkable generalization abilities of our models, surpassing even the performance of currently available state-of-the-art methods. ## 2 Related Work ### Foundation models in Medical Large-scale foundation models, with key contributors such as SegGPT (Wang et al., 2023c), SEEM (Zou et al., 2023), and CLIP (Radford et al., 2021), have made significant contributions to the field. Their remarkable zero-shot and few-shot generalization capabilities enable rapid adaptation and extension to target tasks or domains, predominantly through pre-training and subsequent fine-tuning paradigms. SegGPT (Wang et al., 2023c) unifies segmentation and in-context learning by transforming diverse segmentation data into images of a standardized format, which allows for a more comprehensive understanding of the image content. SEEM (Zou et al., 2023) proposes an interface that utilizes various prompt sources to segment and categorize content in images or videos concurrently. CLIP (Radford et al., 2021) proposes a unified vision-and-language model that can be utilized for various downstream tasks, such as classification and visual question answering. SAM (Kirillov et al., 2023), distinguished for its zero-shot and few-shot capabilities, has been acknowledged for its effectiveness in specific tasks or domains, attributed to its adaptability and broad applicability. Recent research highlights SAM's zero-shot capability in medical image segmentation. Oliveira et al. (de Oliveira et al., 2023) explored its applicability on four distinct medical imaging modalities, namely X-ray, ultrasound, dermatoscopic, and colonoscopy images, and observed a notable improvement in SAM's performance when utilizing prompts. Wang et al. (Wang et al., 2023a) assessed SAM on surgical instrument data; given the similarity of this data to natural image datasets, SAM exhibited exceptional zero-shot performance in their analyses.
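For context on the prompt-based, zero-shot evaluations cited above, the snippet below shows how SAM is typically queried with a point prompt through the official segment-anything API. The checkpoint path, input image, and prompt coordinates are placeholders, and this sketch is not taken from any of the cited works.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pre-trained SAM backbone (checkpoint path is a placeholder).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

# image: H x W x 3 uint8 array, e.g. an echocardiography frame converted to RGB.
image = np.zeros((512, 512, 3), dtype=np.uint8)
predictor.set_image(image)

# One positive point prompt (x, y); a box prompt could be passed via `box=`.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)  # (3, 512, 512) candidate masks with quality scores
```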
SAMed (Zhang and Liu, 2023)incorporates the Low-Rank Adaptation (LoRA) which indicates SOTA performance in various PETL tasks on image encoder of SAM. ### Time Sequence Modeling Understanding the temporal dimension of data is crucial for various computer vision tasks such as object tracking, segmentation, and detection. Time sequence modeling can be broadly categorized into two approaches. The first approach focuses on leveraging extracted features from individual frames or directly processing videos seamlessly to model temporal data. Typically, this involves extracting motion features between frames, such as optical flow, to capture temporal information. The second approach involves extracting features while considering the temporal aspect of data as depth information using 3D convolutions (Qiu et al., 2017; Song et al., 2019) This approach offers a more adaptable prompting system for open-set segmentation tasks. Once features are extracted from video frames, standard sequence models like LSTMs (Pfeuffer et al., 2019) or Transformers can be applied to handle various tasks. These models aim to capture dependencies within sequences, either through recurrent neural networks or self-attention mechanisms. For modeling long sequences and time series, is gaining attention in sequence modeling. For end-to-end video modeling, Transformer adaptations such as Video Swin Transformer (Hatamizadeh et al., 2021), and TimeSformer (Bertasius et al., 2021) are popular choices. ## 3 Methodology In this section, we will begin with an overview of SAM models, with a particular emphasis on the model perspective. Subsequently, we will introduce the design of adaptations for SAM. ### Segment anything model We provide an initial overview of the Segment Anything Model (SAM). SAM is structured around three core components: an image encoder, a prompt encoder, and a mask decoder. The image encoder employs a conventional Vision Transformer (ViT) that has been pre-trained using MAE. The author has officially released three pre-trained models, which are vit_b, vit_l, and vit_h, corresponding to various network model sizes. The image encoder's output embedding is downsampled by a factor of 16x from the input image. The prompt encoder can be categorized as either using sparse point points and boxes or dense mask prompts. The mask decoder consists of two convolution layers that up-sample the feature maps by 4 times. Through these two blocks, SAM enhances the image embedding's resolution, with a subsequent transformation performed by an MLP on the resulting token. ### Spatio-Temporal ViT architecture #### 3.2.1 Spatial and long-range temporal attention SAM designed within a 2D framework, however, medical data frequently encompasses 2DT, 3D or even higher-dimensional details (3DT). It becomes imperative to modify the existing SAM to accommodate higher-dimensional data. Therefore, we present a refined Vision Transformer (ViT), proficient in effectively integrating spatio-temporal information. The spatio-temporal ViT (ST-ViT) block consists of a normalization layer, long/short range temporal attention, and spatial attention layer followed by an adapter as in Figure 1 (d). In order to transfer the pretrained SAM (vit_h), we freeze the pre-trained SAM parameters within its transformer block, which constitutes the core component of SAM for downstream tasks. The overall model structures are depicted in Figure 2. 
Given a batch of video medical data as input \(X\in\mathbb{R}^{B\times C\times T\times H\times W}\), where B indicates batch Figure 2: The overview of MedVISTA-SAM, which consists of long and short range cross-frame attention and spatial transformer with U-shaped framework for medical video segmentation. We need reshape input [B, T, C, H, W] to [BT, C, H, W ] for our framework. During training, blue part are frozen while all the other red part are tuned. size, H and W represents the dimension of embedding, T indicates total number of frames, and C denotes the input channels. We first need to reshape them to \(X\in\mathbb{R}^{(B\times T)\times C\times W}\), after which it is passed through the encoder backbone. Within the transformer block, we designed the attention process into two separate attention sequences: spatial attention and temporal attention. The input feature matrix of dimensions \((B\times H\times W)\times T\times C\) into the multi-head attention mechanism within the temporal dimension which we refer to as long-range temporal attention since it spans the temporal axis. Following long-range temporal attention, we apply short-range cross-frame attention to establish inter-frame dependencies between adjacent frames, which will be discussed in the next section. Finally, by transposing the dimensions to \((B\times T)\times H\times W\times C\), now we can utilize the pretrained weights of SAM model. We refer to as spatial attention since it operates within the spatial domain. Following spatial attention, we scale the embeddings using a scaling factor \(s\), as described in (Chen et al., 2022). #### 3.2.2 Short-range cross-frame attention (optional) In the context of medical video segmentation, where there is limited motion between frames, we have devised a strategy for exploring short-range temporal information. Previous studies in video generation tasks have shown the effectiveness of cross-frame attention techniques in learning temporal dependencies and incorporating contextual information across video frames. et al. introduced the concept of cross-frame attention to anchor the object's appearance and shape in the initial frame of video, which in effectively generate subsequent video frames. However, relying solely on anchoring the first frame with cross-frame attention is inadequate when dealing with intermittent noise and image obscuration, especially in medical images such as echocardiography (Mitchell et al., 2019). To address this issue, we redesigned cross-frame attention mechanism that capitalizes on the approach of using the immediately preceding video frame as a constraint. More specifically, self-attention layer receives the latent feature \(v_{i}\in\mathbb{R}^{(B\times T)\times H\times W\times C}\) of \(I_{i}\), and linearly projects \(v_{i}\) into query, key, and value (\(Q\), \(K\), \(V\)) to produce the output of self-attention as follows: \[Q=W^{Q}v_{i},\quad K=W^{K}v_{i},\quad V=W^{W}v_{i} \tag{1}\] The self-attention output is calculated using the following equation: \[\text{Self\_Attn}(Q,K,V)=\text{Softmax}\left(\frac{Q^{\prime}\cdot(K^{t- \delta})^{T}}{\sqrt{d}}\right)V^{t-\delta} \tag{2}\] where t indicates frame, \(\delta\) is step size. On the contrary, the cross-frame attention employs the key and value from the first frame, along with the query from the current frame. We integrate information from both the current frame and the frame occurring at a time step \(\delta\) preceding the current frame. 
The specific value of \(\delta\) can be determined according to the frame rate of the scanned image. ### Multi-scale fusion To leverage the advantages of both CNN and the pretrained SAM model, we have devised an image encoder and made modifications to SAM's lightweight mask decoder. In the image encoder, we employ a CNN-based encoder to extract features while progressively downsampling input features. Within the modified SAM's mask decoder, consisting of two layers of transpose convolution, we integrate multi-head cross attention (MHCA) (Petit et al., 2021) to interact with multi-scale encoder features. The fundamental concept behind the MHCA module is to reduce the influence of irrelevant or noisy regions within the skip connection features while emphasizing areas of significant relevance for the specific application. This approach effectively combines the strengths of both CNN and self-attention mechanisms, thereby enhancing the model's performance in feature extraction and segmentation tasks. Our experimental evaluations have demonstrated the improved performance of our model, as illustrated in Table 4, confirming the effectiveness of our multi-scale fusion strategy. ## 4 Experiment settings ### Datasets and implementation details In this study, we employed the publicly available CAMUS dataset to train all the models presented. On the other hand, the multi-center in-house dataset was exclusively used as the testing set in our study. **CAMUS Dataset** CAMUS contains 1,000 patients's two-dimensional (2D) echocardiography, comprising both apical two-chamber (2CH) and four-chamber (4CH) views of 500 patients. It provides sparse annotation along the cardiac cycle only at end-diaside (ED) and end-systole (ES). The ground truth of three structures the left ventricle \(LV_{endo}\), the epicardium \(LV_{epi}\), and the left atrium (LA) for 2CH and 4CH are provided. Half of the patients have an ejection fraction(EF) lower than 45%, 19% of the images have poor quality. Concerning the aspect of variability, the CAMUS dataset reveals a substantial spectrum of dice similarity coefficients (Dice) in relation to both inter and intra variability, as assessed by experienced experts. In the case of the CAMUS dataset, we utilized 402 patients for training the model, while the remaining 98 patients were reserved for testing. Specifically, the testing dataset comprised full-cycle apical 4-chamber (A4C) sequences, and it included dense annotation data for \(LV_{endo}\) and \(LV_{epi}\), as made available by (Leclerc et al., 2019). **In-house Dataset** We collected an multi-center dataset consists of B-mode echocardiography images from 100 patients consists of apical two and four-chamber view (A2/4C). It was collected from two hospitals with two different imaging vendors, including GE and Philips, utilizing their respective flagship models, the Vivid E95 and the Philips EPIQ 7C. Each manufacturer contributed equally, providing samples from 50 patients each, thereby ensuring a balanced representation in the study. The images were collected at the Massachusetts General Hospital and Brigham and Women's Hospital between 2017 and 2022 who needs a clinical care. The annotation process was undertaken with utmost precision by two skilled clinicians. The annotation include the boundaries of the Left Ventricle endo (\(LV_{endo}\)), Left ventricle epicardium (\(LV_{epi}\)), and Left Atrium (LA) during the end-diastole (ED) and end-systole (ES) phases. 
Given the intermittent noise and image obscuration, the clinicians carefully examined adjacent frames in the video sequences to pinpoint and define accurate boundaries, following the recommendations of the American Society of Echocardiography (Lang et al., 2015). This annotation process was carried out using the 3D Slicer software (Fedorov et al., 2012), a tool well-regarded for its precision in medical imaging analysis.

**Implementation details** Our model was trained using the Dice loss function with an ignore index. In this context, the ignore index marks frames in the dataset that do not have any annotations: annotated frames are assigned labels, while unannotated frames are assigned the ignore index. This approach helps manage label imbalance during training by effectively ignoring frames that lack annotations. A MADGRAD optimizer (Defazio and Jelassi, 2022) with a learning rate of \(10^{-4}\) is used for training. We employed gradient norm clipping, with a maximum norm of 1.0, to avoid exploding gradients and as a soft constraint against the vanishing-gradient problem, ensuring effective convergence. In the pre-processing, we selected one cardiac cycle comprising 32 frames to better accommodate SAM. The image intensities, initially in the range [0, 255], were normalized to [0, 1] using min-max normalization. To maintain a consistent input size containing one complete cardiac cycle, we sampled frames in the period between the end-diastole (ED) and end-systole (ES) phases. In cases where the desired number of frames was not obtained this way, we sampled additional frames after the ES frame to reach the required count. We implemented the proposed model in PyTorch and trained it on 8 NVIDIA A100 GPUs.

#### 4.1.1 Evaluation Metric

**Region-based metrics** We use three region-based metrics to assess segmentation precision: the Dice coefficient, the Hausdorff distance, and the Average Symmetric Surface Distance (ASSD). The Dice coefficient quantifies the overlap between the ground-truth and predicted segmentation areas. The Hausdorff distance measures the greatest distance from a point in one set to the closest point in the other set. The ASSD measures the mean distance between the surfaces of the binary objects in two segmented images, providing a complementary view of segmentation accuracy and consistency.

**Temporal metrics** To evaluate temporal coherence in the segmentation results, we first normalize each temporal sequence \(s_{a}\). To assess the temporal smoothness (Painchaud et al., 2022) of a temporal sequence, we analyze its second-order derivative: a high derivative value indicates periods of substantial variation, while a low value indicates local smoothness. Given the discrete nature of cardiac time frames, we approximate the second-order derivative numerically as

\[\frac{d^{2}s_{a}(t)}{dt^{2}}\approx s_{a,t+1}+s_{a,t-1}-2s_{a,t} \tag{3}\]

This approximation acts as a Laplacian filter along time, evaluating the alignment of three consecutive values across the cardiac cycle. We then measure the difference between each data point and the average of its neighboring data points, providing a quantitative indicator of temporal consistency, \(L\).
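As a concrete illustration of this temporal metric, a short NumPy sketch is given below; the per-sequence min-max normalization and the averaging of absolute values are our reading of the description above rather than the exact computation used in the paper.

```python
import numpy as np

def temporal_smoothness(seq):
    """Second-order finite difference of a 1-D temporal sequence (e.g., LV area per frame).

    Acts like a discrete Laplacian along time: s[t+1] + s[t-1] - 2*s[t].
    Lower values indicate temporally smoother segmentations.
    """
    s = np.asarray(seq, dtype=float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)   # normalize each sequence to [0, 1]
    lap = s[2:] + s[:-2] - 2.0 * s[1:-1]             # length T - 2
    return np.mean(np.abs(lap))

# Toy example: a smooth cardiac-like curve vs. the same curve with frame-to-frame jitter.
t = np.linspace(0, 2 * np.pi, 32)
smooth = 0.5 + 0.4 * np.sin(t)
jittery = smooth + 0.05 * np.random.default_rng(0).standard_normal(32)
print(temporal_smoothness(smooth), temporal_smoothness(jittery))  # jittery score is larger
```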
**Clinical metrics** The left ventricular ejection fraction (LVEF) is a critical measure indicating the percentage of blood ejected from the heart's main pumping chamber, and it is commonly used to evaluate cardiac function and performance. Following the clinical guidelines for echocardiography (Lang et al., 2015), we evaluate our method using the biplane Simpson's method, a common approach for estimating the volume of the left ventricle (LV). After segmenting the LV from both the 2-chamber and 4-chamber views, the ventricle is divided into multiple disks using the contour information, and the volume of each disk is calculated individually:

\[V=\frac{\pi}{4}\sum_{i=1}^{n}D_{i}^{2}\times\frac{L}{n} \tag{4}\]

where \(V\) is the volume of the left ventricle, \(L\) is the length of the long axis, \(D_{i}\) is the diameter of the \(i\)-th disk, and \(n\) is the number of disks, typically 20. The total volume of the ventricle is obtained by summing the volumes of all disks. The ejection fraction (EF) can then be calculated as follows:

\[EF=\left(\frac{EDV-ESV}{EDV}\right)\times 100\% \tag{5}\]

where \(EF\) is the ejection fraction, \(EDV\) the end-diastolic volume, and \(ESV\) the end-systolic volume.

Figure 4: Qualitative comparison of results with different prompts: (a) zero prompt, (b) one point, (c) two points, and (d) box prompts, respectively. The green circles indicate the locations of the point prompts, and the white box indicates the bounding-box prompt. The segmentation results (blue) are overlaid on the input image.

Figure 3: Visual comparison of segmentation results on the CAMUS dataset (rows 1-2) and our in-house dataset (rows 3-4). The blue, red, and green regions denote the left-ventricular endocardium, the epicardium, and the left atrium, respectively. Please note that the CAMUS sequential dataset only includes labels for \(LV_{endo}\) and \(LV_{epi}\).

## 5 Results and Analysis

### Comparison with State-of-the-Art Methods

We conducted an extensive evaluation of our method on two datasets: the CAMUS dataset and the in-house multi-center dataset. We compare against state-of-the-art (SOTA) methods, including six CNN-based methods: LUNet (Leclerc et al., 2020), DeepLabv3 (Chen et al., 2017), UNet++ (Zhou et al., 2019), ENet (Paszke et al., 2016), ICNet (Zhao et al., 2018), and BiSeNetV2 (Yu et al., 2021), and three transformer-based methods: SegFormer (Xie et al., 2021), U-Transformer (Petit et al., 2021), and SwinUNETR (Hatamizadeh et al., 2021). Table 1 presents the results for both the CAMUS and in-house datasets. While the CAMUS dataset comprises two labels, \(LV_{endo}\) and \(LV_{epi}\), the in-house dataset extends to three by incorporating the LA. On the CAMUS dataset, all methods demonstrated comparable segmentation performance for \(LV_{endo}\) and \(LV_{epi}\). As detailed in Table 1, our model outperformed the next-best method, improving the Dice score by 1.9% and 0.2% and the temporal smoothness by 0.03. It is crucial to recognize the inherent challenges of the CAMUS dataset, which exhibits high inter- and intra-observer variability, reaching up to 6% in the ground-truth data (Leclerc et al., 2019). This variability can also be observed in Figure 3 and Table 1. Furthermore, our analysis includes both SAM-based and non-SAM-based methods.
However, we find that SAM fails to generate plausible results without a prompt or with a single point prompt; for example, without prompts it segments regions entirely different from the target structures, as shown in Figure 4. Furthermore, when segmenting the \(LV_{epi}\), which surrounds the \(LV_{endo}\), none of the prompting schemes achieves successful segmentation. We therefore only report results using two-point prompts and box prompts in Table 1. All results are reported without any post-processing, enabling a direct comparison of segmentation performance.

### Generalization Evaluation

Evaluating the generalization capability of a model is of paramount significance. This is especially important in the medical field, where data may come from diverse hospitals and vendors, each with distinct scanning protocols. In this experiment, we evaluate the zero-shot capability of the proposed method, trained on the CAMUS dataset, by testing it on our in-house dataset. MediViSTA-SAM demonstrated a performance improvement over SwinUNETR, with 3.18%, 1.6%, and 1.1% enhancements in the Dice coefficient and 2.62, 0.06, and 0.16 improvements in temporal smoothness for \(LV_{endo}\), \(LV_{epi}\), and LA, respectively. Furthermore, we conducted an additional evaluation of the consistently high-performing MediViSTA-SAM method using critical clinical metrics. The correlations were strong, with Pearson correlation coefficients of 0.96 for LVEDV, 0.98 for LVESV, and 0.89 for EF. These high correlation scores not only validate the accuracy of our model but also highlight its trustworthiness when applied to essential clinical indicators pivotal for cardiac health evaluations. The correlation and Bland-Altman plots for these indices are presented in Figure 5.
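For reference, the following NumPy sketch illustrates the disk-summation volume of Eq. (4) and the ejection fraction of Eq. (5) on synthetic contours; the diameters, lengths, and the single-plane simplification (the biplane rule additionally combines diameters from the 2CH and 4CH views) are illustrative assumptions only.

```python
import numpy as np

def lv_volume_disks(diameters, length):
    """Method-of-disks volume, Eq. (4): V = (pi/4) * sum(D_i^2) * (L / n)."""
    d = np.asarray(diameters, dtype=float)
    n = len(d)
    return np.pi / 4.0 * np.sum(d ** 2) * (length / n)

def ejection_fraction(edv, esv):
    """Eq. (5): EF = (EDV - ESV) / EDV * 100."""
    return (edv - esv) / edv * 100.0

# Toy example with 20 disks and synthetic (illustrative) diameters in cm.
n = 20
ed_diams = 4.5 * np.sin(np.linspace(0.1, np.pi - 0.1, n))   # end-diastolic contour widths
es_diams = 0.7 * ed_diams                                   # narrower at end-systole
edv = lv_volume_disks(ed_diams, length=8.0)                 # long-axis length ~8 cm
esv = lv_volume_disks(es_diams, length=7.2)
print(round(edv, 1), round(esv, 1), round(ejection_fraction(edv, esv), 1))
```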
\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{CAMUS data} & \multicolumn{5}{c}{\(LV_{endo}\)} & \multicolumn{5}{c}{\(LV_{epi}\)} \\ \cline{2-10} & Dice\(\uparrow\) & dH(mm)\(\downarrow\) & dA(mm)\(\downarrow\) & \(L\downarrow\) & Dice\(\uparrow\) & dH(mm)\(\downarrow\) & dA(mm)\(\downarrow\) & \(L\downarrow\) \\ \hline LUNet (Leclerc et al., 2020) & 91.3 & 7.66 & 1.73 & 0.08 & 80.0 & 11.10 & 1.84 & 0.22 \\ Deeplabv3 (Chen et al., 2017) & 92.6 & 6.23 & 1.19 & 0.07 & 84.1 & 8.42 & 1.21 & 0.14 \\ Unet++ (Zhou et al., 2019) & 93.1 & 6.42 & 1.20 & 0.08 & 83.4 & 9.44 & 1.42 & 0.12 \\ Enet (Paszke et al., 2016) & 89.4 & 7.54 & 1.53 & 0.10 & 79.1 & 13.26 & 1.75 & 0.16 \\ ICNet (Zhao et al., 2018) & 90.8 & 9.48 & 1.84 & 0.05 & 79.4 & 10.81 & 1.43 & 0.13 \\ BiSeNetV2 (Yu et al., 2021) & 92.2 & 6.74 & 1.27 & 0.07 & 84.2 & 8.35 & 1.13 & 0.14 \\ SegFormer (Xie et al., 2021) & 90.4 & 10.51 & 1.93 & 0.04 & 80.3 & 11.16 & 1.52 & 0.11 \\ U-Transformer (Petit et al., 2021) & 94.1 & 6.81 & 0.84 & 0.06 & 88.4 & **8.31** & 1.19 & 0.12 \\ SwinUNETR (Hatamizadeh et al., 2021) & 94.0 & 5.02 & 1.32 & 0.05 & 88.9 & 10.10 & 1.23 & 0.10 \\ \hline SAM (2pts/slice) & 68.4 & 16.22 & 3.92 & 0.44 & - & - & - & - \\ SAM (1box/slice) & 85.1 & 8.43 & 1.87 & 0.21 & - & - & - & - \\ MediViSTA-SAM & **96.0** & **4.25** & **0.74** & **0.02** & **89.1** & 8.93 & **1.02** & **0.08** \\ \hline \hline \end{tabular} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{In-house data} & \multicolumn{5}{c}{\(LV_{endo}\)} & \multicolumn{5}{c}{\(LV_{epi}\)} & \multicolumn{5}{c}{\(LA\)} \\ \cline{2-10} & Dice\(\uparrow\) & dH\(\downarrow\) & dA\(\downarrow\) & \(L\downarrow\) & Dice\(\uparrow\) & dH\(\downarrow\) & dA\(\downarrow\) & \(L\downarrow\) & Dice\(\uparrow\) & dH\(\downarrow\) & dA\(\downarrow\) & \(L\downarrow\) \\ \hline LUNet (Leclerc et al., 2020) & 87.9 & 14.21 & 6.10 & 0.16 & 75.1 & 15.63 & 7.10 & 0.18 & 84.5 & 18.48 & 4.91 & 0.09 \\ Deeplabv3 (Chen et al., 2017) & 88.1 & 13.30 & 5.44 & 0.10 & 74.2 & 13.95 & 6.90 & 0.13 & 85.7 & 16.22 & 4.88 & 0.06 \\ Unet++ (Zhou et al., 2019) & 85.1 & 16.34 & 6.42 & 0.19 & 72.2 & 19.43 & 7.49 & 0.21 & 83.5 & 23.04 & 5.91 & 0.07 \\ Enet (Paszke et al., 2016) & 80.4 & 20.12 & 7.02 & 0.16 & 70.6 & 23.23 & 9.92 & 0.24 & 81.2 & 26.21 & 7.11 & 0.12 \\ ICNet (Zhao et al., 2018) & 85.2 & 18.22 & 6.10 & 0.15 & 71.2 & 21.26 & 7.44 & 0.21 & 82.3 & 21.53 & 5.31 & 0.09 \\ BiSeNetV2 (Yu et al., 2021) & 86.1 & 16.04 & 4.98 & 0.09 & 73.1 & 20.71 & 5.93 & 0.12 & 84.9 & 19.04 & 5.92 & 0.12 \\ SegFormer (Xie et al., 2021) & 83.5 & 19.87 & 6.22 & 0.14 & 70.9 & 25.14 & 9.34 & 0.16 & 82.1 & 18.94 & 4.11 & 0.10 \\ U-Transformer Petit et al. (2021) & 86.2 & 14.52 & 5.11 & 0.18 & 74.2 & 12.34 & 6.85 & 0.18 & 87.1 & 13.22 & 4.02 & 0.08 \\ SwinUNETR (Hatamizadeh et al., 2021) & 87.8 & 13.98 & 5.88 & 0.18 & 78.4 & 13.29 & 6.73 & 0.16 & 88.9 & 13.11 & 5.22 & 0.19 \\ \hline SAM (2pts/slice) & 65.2 & 28.46 & 24.18 & 0.74 & - & - & - & - & 66.2 & 28.11 & 12.24 & 0.45 \\ SAM (1box/slice) & 83.2 & 18.23 & 5.47 & 0.20 & - & - & - & - & 80.4 & 16.11 & 5.97 & 0.21 \\ MediViSTA-SAM & **91.0** & **11.03** & **3.26** & **0.05** & **80.0** & **11.94** & **4.56** & **0.10** & **90.0** & **10.62** & **3.22** & **0.05** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of MediViSTA-SAM with SOTA segmentation methods on CAMUS data.Best results are denoted as **bold**. The grey background denotes the methods are proposed. 
Four metrics are assessed: the Dice coefficient, the Hausdorff distance, the Average Symmetric Surface Distance, and the Laplacian temporal alignment.

## 6 Ablation Studies

In this section, we conduct a comprehensive examination of the various elements that make up our proposed model. We begin by evaluating the effectiveness of the spatio-temporal design and then assess the impact of integrating multi-scale fusion. Furthermore, we evaluate the impact of the pretrained SAM weights on model performance.

**Spatio-Temporal Adapter Design** We investigate the impact of the spatio-temporal adapter design on the performance of our method. We conducted ablation experiments to examine the effect of cross-frame attention in our model. Our experiments demonstrate that the inclusion of cross-frame attention leads to a 0.2% improvement in Dice and a 0.03 improvement in temporal smoothness, showing its positive impact on the method's performance. Figure 6 provides a visual comparison of MediViSTA across frames, emphasizing the presence of ambiguous boundaries on the lateral side. Nevertheless, our model successfully captures temporal information along the time axis, resulting in segmented images with enhanced temporal smoothness. This observation highlights the interdependence between adjacent frames, where each frame significantly influences its neighbors. To further investigate the impact of the order in which the spatio-temporal adapter is built, we conducted a comparison by reversing the order of spatial and temporal attention. The quantitative results, presented in Table 3, reveal that starting with spatial attention followed by temporal attention leads to a 0.5% decrease in Dice and a 0.02 increase in the temporal metric. In comparison, applying spatial and temporal attention concurrently, as in (Wu et al., 2023), results in a 0.4% reduction in the Dice coefficient and a 0.02 increase in the temporal metric. Based on these findings, we structure the temporal and spatial attention in the specific sequence used in our approach.

**Multi-scale Fusion Design** To evaluate the efficacy of the multi-scale fusion approach, we conduct an ablation study on the multi-scale fusion encoder and the mask decoder components. As illustrated in Table 4 (rows 4 and 5), removing multi-scale fusion leads to a 1.1% decline in segmentation performance and a 0.03 increase in the temporal metric. Finally, we obtain the best results when both the spatio-temporal adapter and multi-scale fusion are activated.

**Effectiveness of Pretrained SAM** Table 5 compares the performance obtained with different pretrained SAM weights, including vit_h, vit_l, and vit_b. The largest pretrained model, vit_h, exhibits the best accuracy across all metrics, achieving the highest Dice score and a low temporal metric. Performance declines with decreasing model size, with vit_b recording the lowest Dice score. Experiments without pretrained SAM weights were conducted using the smallest model architecture, comprising 12 blocks, due to GPU memory constraints; these tests showed a 4.42% decrease in Dice compared with using pretrained SAM weights. Consistent with many studies, we also find that larger network models perform better on downstream tasks (Zhou et al., 2023).

Figure 5: Comparison between the automated method and manual measurements of LV volumes and ejection fraction (EF): Bland-Altman analysis and Pearson correlation plots on the in-house dataset. For LVEDV, bias = 7.74; limits of agreement = -16.77 to 32.25. For LVESV, bias = 9.46; limits of agreement = -19.24 to 38.17. For EF, bias = -1.13; limits of agreement = -13.54 to 11.28. The Pearson correlation coefficients are r = 0.96, r = 0.98, and r = 0.89, respectively.
Figure 6: Impact of cross-frame attention on a patient exhibiting rapid radial motion and a high heart rate. (a) B-mode image, (b) radial image cut spanning the entire time sequence, (c) and (d) segmentations without and with cross-frame attention, respectively.

## 7 Discussion

SAM, a large-parameter vision foundation model, offers a route to domain generalization, allowing neural networks to transfer knowledge effectively to other domains and improving performance on downstream tasks. However, SAM's performance when applied directly to medical images is unsatisfactory. This is mainly because its supervised training distribution consists primarily of natural images. Additionally, 2D SAM is limited in medical imaging tasks, as most modalities in clinical practice produce 3D or 2D+T data. Therefore, we propose MediViSTA-SAM, a method specifically designed for medical video segmentation. By building on pretrained SAM weights, our method adapts better to generalization tasks, resulting in improved zero-shot segmentation performance on echocardiography data. We also found that introducing spatial attention together with long- and short-range temporal attention improves model performance, as it captures spatio-temporal information. In addition, we adopted a multi-scale fusion framework to capture objects of various sizes. By combining these components, we achieve more robust results and enhanced model generalization. Compared with other state-of-the-art methods and domain-specific approaches, our proposed method demonstrates superior performance across various metrics, including region-based, temporal, and clinical metrics. A limitation of our study is that MediViSTA-SAM was tested only on hospitalized patients. In future research, we plan to apply our method to diverse patient groups, including healthy individuals and those with various medical conditions, to evaluate how well our approach works across a broader range of patient populations.

## 8 Conclusion

To the best of our knowledge, this is the first study exploring the potential of SAM for the medical video segmentation task, focusing on echocardiography. We propose a new framework, MediViSTA-SAM, incorporating a spatio-temporal adapter and multi-scale fusion. Taking advantage of both multi-scale spatial features and long/short-range temporal information, our framework achieves the highest segmentation accuracy among the compared methods. Extensive experiments show that our approach yields promising results compared with state-of-the-art methods in zero-shot echocardiography analysis.
2309.12627
A Quantum Computing-based System for Portfolio Optimization using Future Asset Values and Automatic Reduction of the Investment Universe
One of the problems in quantitative finance that has received the most attention is the portfolio optimization problem. Regarding its solving, this problem has been approached using different techniques, with those related to quantum computing being especially prolific in recent years. In this study, we present a system called Quantum Computing-based System for Portfolio Optimization with Future Asset Values and Automatic Universe Reduction (Q4FuturePOP), which deals with the Portfolio Optimization Problem considering the following innovations: i) the developed tool is modeled for working with future prediction of assets, instead of historical values; and ii) Q4FuturePOP includes an automatic universe reduction module, which is conceived to intelligently reduce the complexity of the problem. We also introduce a brief discussion about the preliminary performance of the different modules that compose the prototypical version of Q4FuturePOP.
Eneko Osaba, Guillaume Gelabert, Esther Villar-Rodriguez, Antón Asla, Izaskun Oregi
2023-09-22T05:27:23Z
http://arxiv.org/abs/2309.12627v3
A Quantum Computing-based System for Portfolio Optimization using Future Asset Values and Automatic Reduction of the Investment Universe ###### Abstract One of the problems in quantitative finance that has received the most attention is the portfolio optimization problem. Regarding its solving, this problem has been approached using different techniques, with those related to quantum computing being especially prolific in recent years. In this study, we present a system called _Quantum Computing-based System for Portfolio Optimization with Future Asset Values and Automatic Universe Reduction_ (Q4FuturePOP), which deals with the Portfolio Optimization Problem considering the following innovations: _i_) the developed tool is modeled for working with future prediction of assets, instead of historical values; and _ii_) Q4FuturePOP includes an _automatic universe reduction_ module, which is conceived to intelligently reduce the complexity of the problem. We also introduce a brief discussion about the preliminary performance of the different modules that compose the prototypical version of Q4FuturePOP. Keywords:Quantum Computing, Portfolio Optimization Problem, Quantum Annealer, D-Wave, Optimization ## 1 Introduction The present work aims to describe a quantum computing (\(QC\), [1]) based system for solving the portfolio optimization problem (\(POP\), [2]). Briefly explained, the \(POP\) intends to find the optimum asset allocation with the objective of _i_) maximizing the expected return and _ii_) minimizing the financial risk. More specifically, and following the Markowitz \(POP\) formulation, the financial risk is calculated based on the diversification of the built portfolio [3]. Following this philosophy, the system tends to distribute the whole budget into different and uncorrelated assets rather than investing large amounts of money into the highest expected, albeit correlated, returns. Formally described, the problem to be solved counts with _i_) a group of \(N\) assets \(\mathcal{A}=\{a_{0},\ldots,a_{i},\ldots,a_{N-1}\}\); _ii_) a dataset \(AD=\{ad_{0},\ldots,ad_{i},\ldots,ad_{N-1}\}\), in which \(ad_{i}\) is a list of historical values \(ad_{i,k}\) of an asset \(a_{i}\), representing \(k\) a specific day within the complete period of \(K\) days, and _iii_) a total \(bd\) budget. Thus, the objective is to find the most promising assets in which to invest this budget, considering that i) all \(bd\) must be invested and ii) the proportion of \(bd\) that can be allocated to each asset is conditioned by the variable \(p\). More concretely, and considering \(w_{i}\) as the proportion of \(bd\) invested on the asset \(a_{i}\), this \(w_{i}\) can be represented as the summation of any proportions \(p_{i}\) in \(P=\{p_{0}=bd,p_{1}=bd/2,...p_{p-1}=bd/2^{p-1},0\}\), Furthermore, it should be deemed that \(w_{i}<=bd\). With this notation in mind, the goal is to find the \(W=\{w_{0},w_{1},...,w_{N-1}\}\) that maximizes the total expected return while minimizing the financial risk. The system described in this paper, coined _Quantum Computing-based System for Portfolio Optimization with Future Asset values and automatic universe reduction_ (Q4FuturePOP), is a QC-based scheme for dealing with the \(POP\) considering the following innovations: * _Future projected values_: most of the QC based techniques proposed in the literature solve the \(POP\) using as input historical dataset values for \(\mathcal{A}\)[4]. 
In other words, developed solvers select the appropriate \(W\) considering the past values of the group of available assets. On the contrary, for Q4FuturePOP, the input dataset is composed of future predictions to try to account for realistic environments in the \(POP\) formulation. Using this input, Q4FuturePOP builds the complete dataset used for formulating the \(POP\) problem considering future projected values of \(\mathcal{A}\). Thus, the \(W\) chosen by the system is based on future predictions instead of historical values. * _Automatic universe reduction_: in order to calculate \(W\) in a more efficient way, a search space reduction mechanism has been implemented in Q4FuturePOP. This mechanism works as follows: first, Q4FuturePOP takes as input the complete universe of \(\mathcal{A}\). With this set of assets, the system conducts a number \(E\) of preliminary executions which are used by Q4FuturePOP for detecting a subgroup of promising assets. After these preliminary executions, Q4FuturePOP builds an alternative \(\mathcal{A}^{\prime}\), which is a subgroup of \(\mathcal{A}\) (\(\mathcal{A}^{\prime}\subseteq\mathcal{A}\)). Using this newly generated \(\mathcal{A}^{\prime}\), the problem is finally executed, and the obtained outcomes are returned to the user. Thanks to this procedure, the complexity of the problem to solve is automatically decreased, allowing the system to reach a higher level of accuracy. The rest of the paper is structured as follows. Section 2 presents a brief overview of the background related to QC and POP. In Section 3, the inputs and outputs of Q4FuturePOP are described for the sake of understandability. After that, in Section 4, the whole system is described. Then, in Section 5, we discuss the preliminary performance of Q4FuturePOP. Section 6 finishes this work by outlining some of the planned future work. ## 2 Related Work The first POP-focused paper including real quantum experiments was published in 2015, in which the authors solved the problem using a prototype of the D-Wave's quantum annealer [5]. With only 512 qubits at their disposal, the authors worked with a pool of 15 assets in which to invest. A second study focused on this topic was presented in 2017, exploring a specific investment case related to the Abu Dhabi Securities Exchange [6]. In the following years, some interesting theoretical papers appeared, exploring different formulations and their possible resolutions using quantum approaches. Examples of this scientific trend can be found in [7] and [8]. These papers, published in 2018 and 2019, respectively, theorize the implementation of two algorithms without actually testing them on real quantum devices. Also in 2019, some advanced approaches were presented. In [9], for example, the authors employ a hybrid algorithm in which the quantum module is executed by the D-Wave 2000Q computer in order to improve the solutions found by a classical greedy algorithm. In the same year, the first paper focused on gate-based quantum computers was published, in which the authors present a _Quantum Approximate Optimization Algorithm_[10]. Particularly interesting are the papers [11] and [12], published by _Chicago Quantum_ in 2020. In the first of these papers, the authors demonstrated how a hybrid solver can promisingly solve portfolios of up to 33 assets. In [12] a similar approach is proposed, in which the number of assets considered rises to 60 using another hybrid technique. 
Finally, in [13], the same authors managed to solve problems with a size of 134 assets, making use of the D-Wave Advantage System, composed of 5436 qubits. Since 2021, the study of the POP through the quantum paradigm has experienced a significant increase. In [14], for example, a variant of the problem known as _minimum holding time_ is solved by the D-Wave quantum annealer, considering a universe of 50 assets in which to invest. In [15], the authors present a problem in which different investment bands are deemed, allowing the fixing of a maximum permissible risk. Furthermore, a quantum-gate-based method was presented in [16], where a hybrid algorithm called _NISQ-HHL_ is proposed. The study presented in [17] is especially interesting for this paper. That work consists of a detailed analysis of the parameterization of the D-Wave's annealer. To do so, the authors use POP as a benchmarking problem, presenting a formulation of the problem that allows an investment granularity adapted to the user's needs. This study has proved to be interesting from a mathematical point of view since the formulation employed in this study is the one embraced in our work. Finally, it is worth highlighting the study presented in [18], where different quantum solvers are proposed based on both the quantum gate paradigm and the annealer. The authors of that work carried out different tests in a dynamic environment, solving problems with up to 52 assets. As can be seen, the research conducted in recent years has been prolific. This section has attempted to briefly outline this vibrant activity. Being aware that the full state of the art is much broader, we refer interested readers to works as [19]. ## 3 Inputs and Outputs of Q4FuturePop Now, let's define some notation and terms in order to properly understand how Q4FuturePOP operates. From now on, we use the superscripts \({}^{h}\) for the historical data and \({}^{f}\) for the predicted future data. Furthermore, the daily return of an asset \(a_{i}\) at day \(k\) is \(er^{h}_{k,i}=(ad^{h}_{k,i}-ad^{h}_{k-1,i})/ad^{h}_{k-1,i}\), resulting in \(er^{h}\in\mathbb{R}^{K\times N}\) representing the daily returns of \(\mathcal{A}\) in \(AD^{h}\). Thus, Q4FuturePOP receives as inputs _i_) \(AD^{h}\in\mathbb{R}^{K\times N}\), which contains all the historical values of \(\mathcal{A}\) assets during \(K\) days, _ii_) a list \(V=\{v_{0},v_{1},...,v_{N-1}\}\), in which \(v_{i}\) is the initial value of \(a_{i}\) at the time Q4FuturePOP is executed (in most cases, \(v_{i}=ad^{h}_{K-1,i}\)); and _iii_) \(Er^{f}=\{Er^{f}_{0},Er^{f}_{1},...,Er^{f}_{N-1}\}\in\mathbb{R}^{N}\), which is the list of predicted expected returns. It should be highlighted that these \(Er^{f}\) values are given by an expert from the Spanish company Welzia Management4. Footnote 4: [https://wz.welzia.com/](https://wz.welzia.com/) Once the input is received, the module named _Predicted Dataset Generation_ (PDG) builds the complete dataset \(AD^{f}\in\mathbb{R}^{K\times N}\), which is composed of all the projected future daily values of the whole \(\mathcal{A}\) in the period that goes from the moment in which the system is executed and the following \(K\) days. All values in the generated \(AD^{f}\) must meet two requirements: _i_) \(Cov(er^{f})=Cov(er^{h})\), meaning that the covariance of the input daily returns is the same as that of the daily returns generated by PDG; and _ii_) the expected return of the prices build must be the same as \(Er^{f}\). 
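As an illustration of requirement i), the short NumPy sketch below shows one way to draw synthetic daily returns whose covariance matches a prescribed target through a Cholesky factor; the function name is ours, and the bias adjustment that PDG applies to also match \(Er^{f}\) is omitted here for brevity.

```python
import numpy as np

def synthetic_returns(cov, n_days, seed=0):
    """Draw n_days x n_assets daily returns whose covariance approximates `cov`.

    Standard-normal draws X are mapped through the Cholesky factor L of `cov`,
    so that the population covariance of X @ L.T equals `cov`.  The drift/bias
    step used by PDG to also match the predicted expected returns is omitted.
    """
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)
    X = rng.standard_normal((n_days, cov.shape[0]))
    return X @ L.T

# Toy example: 3 assets, target covariance of daily returns, 250 trading days.
target_cov = np.array([[4.0, 1.2, 0.8],
                       [1.2, 3.0, 0.5],
                       [0.8, 0.5, 2.0]]) * 1e-4
er_f = synthetic_returns(target_cov, n_days=250)
print(np.round(np.cov(er_f, rowvar=False) / target_cov, 2))  # elementwise ratios roughly 1
```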
How \(AD^{f}\) is generated is described in Section 4.1. As output, Q4FuturePOP returns _i_) the above-mentioned \(W\), containing the list of investment weights \(w_{i}\) given to each \(a_{i}\); _ii_) the expected return of the chosen portfolio \(er_{portfolio}\); _iii_) the risk associated with this portfolio \(\sigma_{portfolio}\); and _iv_) the \(SHARPE\) value. It should be considered that for the computation of outcomes _ii_, _iii_ and _iv_, the Markowitz formulation of the \(POP\) has been used as a base. Q4FuturePOP: Quantum Computing based System for Portfolio Optimization with Future Asset values and automatic universe reduction Q4FuturePOP consists of three interconnected modules: PDG, Assets Universe Reduction Module (AUR) and the Quantum Computing Solver Module (QCS). A schematic description of Q4FuturePOP is depicted in Figure 1, in which the relationship of the three modules is represented. In a nutshell, PDG is devoted to generating the complete dataset of future predicted values of \(\mathcal{A}\). The second module, AUR, is in charge of intelligently reducing the complete \(\mathcal{A}\) into \(\mathcal{A}^{\prime}\), while QCS is the module that solves the \(POP\) and provides \(W\) both for AUR or for the final user (as seen in Figure 1). In the following subsections, we describe PDG, AUR and QCS in detail. ### Predicted Dataset Generation Module - PDG To simulate future prices and eventually build \(AD^{f}\), the PDG starts by generating a matrix \(X\in\mathbb{R}^{L-1\times N}\) composed of random values drawn from a standard normal distribution, finding a matrix \(A\in\mathbb{R}^{L-1\times L-1}\) and a bias \(b\in\mathbb{R}^{L-1\times N}\) such that \(er^{f}=AX+b\). For finding both \(A\) and \(b\), PDG follows two different procedures: * _Finding_\(A\). The Cholesky decomposition of the covariance matrix \(Cov(X)\) and \(Cov(er^{f})\) stands that there exist, respectively, two unique lower triangular matrix \(L_{x}\) and \(L_{h}\) such that \(Cov(X)=L_{x}L_{x}^{T}\) and \(Cov(er^{f})=L_{h}L_{h}^{T}\). Considering that \(X\) is an invertible matrix, which is highly probable since the random vectors that make up \(X\) are independent by construction, we set \(A^{T}=L_{h}L_{x}^{-1}\). Thanks to this procedure, we meet our first constraint, as \(Cov(er^{f})=Cov(er^{h})\). * _Finding_\(b\). Let us consider \(Y=AX\) and express the expected return as a function of the daily return. For an asset \(a_{i}\) we have \(ln(1+Er_{i}^{f})=\sum\limits_{k=1}^{L-1}ln(1+y_{k,i}+b_{i})\) if and only if \(\forall k,i\subseteq\mathbb{R}^{L-1\times N}y_{k,i}+b_{i}>-1\). Then, we use the Taylor-Young expansion of \(ln\) to find an approximation of \(ln(1+Er_{i}^{f})\) as a polynomial \(P_{i}^{n}(x)\). If \(\forall k\subseteq\mathbb{R}^{L-1},|y_{k,i}+x|<1\) then \(\lim\limits_{n\rightarrow+\infty}P_{i}^{n}(x)=ln(1+Er_{i}^{f})\). Now, we take \(b_{i}\) as a real root of the polynomial \(P_{i}^{n}(x)-ln(1+Er_{i}^{f})\) that respects the preceding constraint. If this root does not exist, the PDG cannot find a good solution. So, the PDG repeats this for all the assets in order to obtain \(b\). After these values are calculated following this procedure, \(AD^{f}\) is reconstructed from the initial values \(V\) to the predicted \(er^{f}\). ### Quantum Computing Solver Module - Qcs Despite being the last module called along the workflow of Q4FuturePOP, it is appropriate to describe QCS here since it is employed also as part of the calculation made within AUR. 
In a nutshell, the QCS is the module in charge of taking a complete dataset of assets containing their daily values and solving the \(POP\). We represent in Figure 2 a schematic description of QCS. As can be observed in that figure, this module is composed of three different components: _QUBO Builder_: the \(POP\) dealt by Q4FuturePOP is tackled by a QC device. More specifically, the system is built to solve the problem by means of a quantum annealer. This specific type of device natively solves QUBO problems. For this reason, the first component of QCS has the objective of gathering the input dataset and modeling the \(POP\) problem correctly. More specifically, for building the corresponding QUBO, we define the following Hamiltonian based on the formulation described in [17]: \[\mathbf{H}=\alpha\mathbf{H_{A}}+\beta\mathbf{H_{B}}+\gamma\mathbf{H_{C}}, \tag{1}\] Figure 1: A schematic description of the proposed Q4FuturePOP. where \[\mathbf{H_{A}}=\sum_{i}^{N-1}w_{i}er_{i} \tag{2}\] \[\mathbf{H_{B}}=-\sum_{i,j}^{N-1}Cov_{i,j}x_{i}x_{j}. \tag{3}\] \[\mathbf{H_{C}}=-(\sum_{i}^{n}w_{i}x_{i}-bd)^{2}. \tag{4}\] considering that \(er_{i}\) represents the expected return for an asset \(a_{i}\), and \(x_{i}\) is a binary variable that is 1 if the asset \(a_{i}\) has a \(w_{i}>0\). Furthermore, the values \(\alpha\), \(\beta\) and \(\gamma\) are float multipliers employed to weight each term. _Quantum annealing solver_: this is the central component of QCS, and where the call to the quantum device is made. As depicted in Figure 2, QCS receives as input the problem modeled as a QUBO. This problem is tackled by a quantum annealer device, such as the ones provided by D-Wave: Advantadge_System6.1 or Advantadge2_prototype1.1. Quantum-inspired alternatives, such as the Fujitsu Digital Annealer, are also eligible for being part of QCS as QUBO solver. Also, hybrid approaches such as Leap's hybrid Binary Quadratic Model Solver of D-Wave can be embraced in this module. This component returns a list \(S\) of binary values, which represents the solution to the QUBO ingested by the QC device. _Results Interpreter_: the solution \(S\) provided by the quantum device is not directly interpretable for the final user. For this reason, this last component oversees the obtaining of the string of binary values provided by the quantum annealer and calculates the variables that will be returned as outcomes. Thus, the QCS module is called in two different phases in the complete workflow of Q4FuturePOP. On the one hand, the QCS is called by AUR module in the process of asset universe reduction. In this iterative procedure, the QCS is executed \(E\) different times, using as input the complete dataset \(AD^{f}\). Through these repetitive runs, AUR aims to detect the most interesting assets for conducting a search space reduction of the \(POP\) (more details in Section 4.3). On the other hand, the QCS is called in the last stage of the complete Q4FuturePOP execution, using as input the reduced \(AD^{f^{\prime}}\), and with Figure 2: A schematic description of QCS. the goal of obtaining the final \(W\), \(er_{portfolio}\), \(\sigma_{portfolio}\) and \(SHARPE\) (calculated as \(er_{portfolio}\)/\(\sigma_{portfolio}\)). ### Assets Universe Reduction Module - AUR The motivation behind the implementation of the AUR module is to face the limitations of current quantum annealers. 
Despite all the research and developments made in the field, current quantum computers suffer from limitations such as a finite number of qubits or noisy processes that impact the performance of quantum solvers [20]. For this reason, the actual stage of QC field is known as NISQ era [21]. Under this rationale, the main objective of AUR module is to decrease the complexity of the problem at hand by building a reduced sub-instance of the \(POP\) problem. Thus, with a smaller solution space, the system is able to reach higher-quality and more robust solutions. The AUR works as follows: 1. The AUR receives as input the complete set of data \(AD^{f}\) generated by the previously described PDG module. 2. AUR solves the \(POP\) problem using the QCS module (detailed in Section 4.2) and \(AD^{f}\) as input. Despite QCS provides more information, it just stores the identifiers of the assets \(a_{i}\) to which any proportion of \(bd\) has been allocated. Thus, AUR generates a list \(\mathcal{A}^{\prime}\) which is a subgroup of \(\mathcal{A}\) (\(\mathcal{A}^{\prime}\subseteq\mathcal{A}\)). 3. Until the number of execution conducted by AUR is less than \(E\), step 2 is repeated. 4. Once the process is finished, AUR builds a reduction of \(AD\) considering the information of all the assets in \(\mathcal{A}^{\prime}\), and discarding all the data related to assets that have not been deemed in any of the \(E\) executions conducted by step 2. Lastly, for helping the understanding of AUR module, we depict a schematic description of AUR in Figure 3, representing also the relation with QCS module. Figure 3: A schematic description of AUR and its connection with QCS. = asset with no budget allocated; = asset with a budget \(w_{i}>0\) allocated; = asset eligible for allocation in the reduced universe; = asset discarded in the universe reduction process ## 5 Discussion on the preliminary performance At this moment, Q4FuturePOP is in a prototypical stage of development, waiting to be completely validated as a whole system. Anyway, each module has been checked separately. On the one hand, the PDG has been successfully tested using a pool of predicted values provided by Welzia Management. In any case, due to extension constraints and the fact that this paper is more focused on QC, we will deepen the validation of the PDG module in a future research paper. On the other hand, AUR and QCS modules have been jointly checked using a dataset also provided by Welzia Management. This dataset contains the daily values of 53 different assets over 12 years (from 01/01/2010 to 13/12/2022). Welzia Management also provided us with a set of historical portfolios chosen by the company's experts. Thus, using the dataset as input and the historical portfolios as baseline, six different use cases have been built for validation. Each of these instances consists of an excerpt of the complete dataset, with a depth ranging from 12 to 28 months. Therefore, for each use case, AUR+QCS modules of Q4FuturePOP have been run 6 independent times, and the results provided have been compared with the portfolios built by the experts. Also, it should be noted here that the quantum solver used for the conducted tests is the Advantadge_System6.2 of D-Wave, comprised of 5610 qubits and 40134 couplers spread over a Pegasus topology. 
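To make the QUBO of Section 4.2 concrete, the following NumPy sketch assembles a small instance of the Hamiltonian of Eqs. (1)-(4) with a two-bit weight encoding per asset and solves it by exhaustive enumeration; the multiplier values, the weight-based (rather than indicator-based) risk term, and the brute-force solver are illustrative simplifications, and an actual run would submit the same matrix to a D-Wave sampler instead of enumerating.

```python
import numpy as np
from itertools import product

# Toy instance: 3 assets, 2 bits per asset, so w_i takes values in {0, bd/4, bd/2, 3*bd/4}.
er = np.array([0.08, 0.05, 0.06])                 # predicted expected returns
cov = np.array([[0.040, 0.010, 0.000],
                [0.010, 0.030, 0.010],
                [0.000, 0.010, 0.020]])           # covariance of daily returns
bd, n_bits = 1.0, 2
alpha, beta, gamma = 1.0, 1.0, 10.0               # term multipliers (illustrative values)

n_assets, n_vars = len(er), len(er) * n_bits
w = np.array([bd / 2 ** (k + 1) for _ in range(n_assets) for k in range(n_bits)])
asset_of = np.repeat(np.arange(n_assets), n_bits)

# Upper-triangular QUBO matrix: minimizing x^T Q x rewards expected return,
# penalizes covariance risk, and enforces that the allocated weights sum to bd.
Q = np.zeros((n_vars, n_vars))
for a in range(n_vars):
    i = asset_of[a]
    Q[a, a] += -alpha * w[a] * er[i]                        # return term
    Q[a, a] += beta * cov[i, i] * w[a] ** 2                 # risk term (diagonal)
    Q[a, a] += gamma * (w[a] ** 2 - 2.0 * bd * w[a])        # budget penalty (diagonal)
    for b in range(a + 1, n_vars):
        j = asset_of[b]
        Q[a, b] += 2.0 * beta * cov[i, j] * w[a] * w[b]     # risk term (off-diagonal)
        Q[a, b] += 2.0 * gamma * w[a] * w[b]                # budget penalty (off-diagonal)

# Exhaustive search over the 2**6 bit strings (a quantum annealer would sample Q instead).
best = min(product((0, 1), repeat=n_vars),
           key=lambda bits: np.asarray(bits) @ Q @ np.asarray(bits))
weights = [sum(w[a] for a in range(n_vars) if asset_of[a] == i and best[a])
           for i in range(n_assets)]
print(weights, sum(weights))                      # chosen w_i, total approximately bd
```

The same matrix could be handed to a quantum, quantum-inspired, or hybrid sampler without modification; only the solver component of QCS would change.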
The results of this preliminary experimentation are depicted in Table 1, in which we represent the time frame that compose each dataset, the number of available assets, and the outcomes provided by both the experts and AUR+QCS. Each instance is coined as UCX_Y_Z, where \(X\) is the ID of the dataset, \(Y\) the time frame measured in months, and \(Z\) the amount of assets deemed. Finally, it should be noted that both the employed datasets as well as the complete set of results obtained by AUR+QCS are available upon reasonable request. Analyzing the results obtained by AUR+QCS modules, it should be noted that they have proved to be promising. These results have been approved by the experts from Welzia Management after several technical meetings. These meetings, and the fact of having the help of these experts, have been a really enriching point in the development of Q4FuturePOP. This is so since quantum-based solutions are usually analyzed from a purely academic point of view. That is, the solutions provided by the systems are usually analyzed based solely on their \(SHARPE\) ratio. Although this ratio is a good indication of the quality of a solution, it does not represent the reality of the industry. High \(SHARPE\) ratios can lead to triumphalist conclusions, which eventually clash with the reality of an industry with volatility that is difficult to perceive by a computer (whether quantum or classical). This is why an expert's judgment when generating a portfolio is an absolutely necessary factor. All in all, the solutions achieved by the proposed system have proven to be promising, offering better results than the experts in some cases. In any case, as has been described, the fact that they present \(SHARPE\) ratios higher than the portfolios proposed by Welzia Management does not imply that in practice they are better than the latter. Even so, Welzia Management has valued very positively the results obtained by AUR+QCS, being aware of the value that a platform of these characteristics can have in its day to day operation, acting as an assistant in their decision making processes. ## 6 Conclusions and Future Work In this paper, a quantum-based approach for solving the well-known Portfolio Optimization Problem has been presented, coined as Q4FuturePOP. Two are the main innovations inherent to the system proposed: _i_) Q4FuturePOP is modeled for working with future prediction of assets instead of historical values; and _ii_) Q4FuturePOP includes an _automatic universe reduction_ module, which is conceived to intelligently reduce the complexity of the problem. Along with the description of the system, we have also introduced a brief discussion about the preliminary performance of the different modules that compose the prototypical version of the tool. Several research lines stem directly from the findings reported in this work. The first, and most obvious, is the validation of the complete system. Other future work includes the fine-tuning of the parameters that involve the generation of the POP's QUBO. Also, other quantum-based solvers apart from the ones provided by D-Wave are planned to be tested. ## Acknowledgments This work was supported by the Spanish CDTI through Proyectos I+D Cervera 2021 Program (QOOptimiza project, 095359). This work was also supported by the Basque Government through ELKARTEK program (BRTA-QUANTUM project, KK-2022/00041). The authors thank Miguel Uceda, Welzia's Investment Director, for his assistance and for providing the data employed for the tests conducted.
2307.16853
Spin-frame field theory of a three-sublattice antiferromagnet
We present a nonlinear field theory of a three-sublattice hexagonal antiferromagnet. The order parameter is the spin frame, an orthogonal triplet of vectors related to sublattice magnetizations and spin chirality. The exchange energy, quadratic in spin-frame gradients, has three coupling constants, only two of which manifest themselves in the bulk. As a result, the three spin-wave velocities satisfy a universal relation. Vortices generally have an elliptical shape with the eccentricity determined by the Lam\'e parameters.
Bastian Pradenas, Oleg Tchernyshyov
2023-07-31T17:13:37Z
http://arxiv.org/abs/2307.16853v2
# Spin-Frame Field Theory of a Three-Sublattice Antiferromagnet ###### Abstract We present a nonlinear field theory of a three-sublattice hexagonal antiferromagnet. The order parameter is the spin frame, an orthogonal triplet of vectors related to sublattice magnetizations and spin chirality. The exchange energy, quadratic in spin-frame gradients, has three coupling constants, only two of which manifest themselves in the bulk. As a result, the three spin-wave velocities satisfy a universal relation. Vortices generally have an elliptical shape with the eccentricity determined by the Lame parameters. Theory of magnetic solids is often described as a lattice problem, exemplified by the Heisenberg model of atomic spins. However, discrete models are notoriously difficult to solve outside of the simplest tasks, such as finding the spectrum of linear spin waves. An analytic theory of nonlinear solitons--such as domain walls or vortices--in the framework of a lattice model is often not feasible or cumbersome [1]. Continuum theories have the clear advantage of being more amenable to analytic treatment. By design, they focus on the physics of long distances and times, capturing the universal aspects of low-energy physics at the expense of microscopic details. In magnetism, a well-known example is micromagnetics, the continuum theory of a ferromagnet going back to Landau and Lifshits [2]. It is usually formulated through the equation of motion for the magnetization field \(\mathbf{m}\) of unit length parallel to the local direction of spins, \[\mathcal{S}\,\frac{\partial\mathbf{m}}{\partial t}=-\mathbf{m}\times\frac{ \delta U}{\delta\mathbf{m}}. \tag{1}\] Here \(\mathcal{S}\) is the spin density, \(U[\mathbf{m}]=\int d^{d}r\,\mathcal{U}\) is a potential-energy functional, whose functional derivative \(-\delta U/\delta\mathbf{m}\) acts as an effective magnetic field. The energy density is usually dominated by the Heisenberg exchange interaction of strength \(A\), \[\mathcal{U}=\frac{A}{2}\partial_{i}\mathbf{m}\cdot\partial_{i}\mathbf{m}+\ldots \tag{2}\] Doubly repeated indices imply summation. The omitted terms represent weaker anisotropic interactions of dipolar and relativistic origin. We use the calligraphic font to indicate intensive quantities (densities). The Landau-Lifshitz equation (1) provides a starting point for understanding the dynamics of ferromagnetic solitons. A further coarse-graining eliminates fast internal modes of a soliton and focuses on its slow collective motion whose seminal achievements; primary examples are Thiele's equation of rigid motion [3] and Walker's dynamical model of a domain wall [4]. The continuum approach has also been applied to simple antiferromagnets, in which adjacent spins are (nearly) antiparallel and can be split into two magnetic sublattices 1 and 2, each with its own magnetization field \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\) of unit length. 
Because at low energies, the sublattice magnetizations are (nearly) antiparallel, both can be approximated by a single field of staggered magnetization \(\mathbf{n}\approx\mathbf{m}_{A}\approx-\mathbf{m}_{B}\), whose dynamics is described by an \(O(3)\)\(\sigma\)-model with the Lagrangian density [5; 6; 7] \[\mathcal{L}=\mathcal{K}-\mathcal{U}=\frac{\rho}{2}\partial_{i}\mathbf{n}\cdot \partial_{i}\mathbf{n}-\frac{A}{2}\partial_{i}\mathbf{n}\cdot\partial_{i} \mathbf{n}-\ldots \tag{3}\] The first term \(\mathcal{K}=\frac{\rho}{2}\partial_{i}\mathbf{n}\cdot\partial_{i}\mathbf{n}\) is the kinetic energy of staggered magnetization and \(\rho\) is a measure of inertia. The second, potential term comes from the Heisenberg exchange energy and has the same functional form as in a ferromagnet (2). The omitted terms represent various weak anisotropic interactions. Minimization of the action with the constraint \(\mathbf{n}^{2}=1\) yields the equation of motion \[\rho\,\partial_{t}(\mathbf{n}\times\partial_{t}\mathbf{n})=-\mathbf{n}\times \frac{\delta U}{\delta\mathbf{n}}. \tag{4}\] As with ferromagnets, the antiferromagnetic Landau-Lifshitz equation (4) can be translated into equations of motion for solitons [8; 9] and extended to include the effects of spin transfer and dissipation [10; 11]. The primary goal of this paper is to introduce a universal field theory for an antiferromagnet with three magnetic sublattices whose magnetization fields satisfy the relation \(\mathbf{m}_{1}+\mathbf{m}_{2}+\mathbf{m}_{3}\approx 0\). Such magnetic states are typically realized in antiferromagnetic solids of hexagonal symmetry with triangular motifs. Although such magnets have been studied for decades [12; 13], recent experimental studies of metallic antiferromagnets Mn\({}_{3}\)Sn and Mn\({}_{3}\)Ge [14; 15] have rekindled theoretical interest in these frustrated magnets [16; 17; 18]. The existing field theory for 3-sublattice antiferromagnets by Dombre and Read [12] has a couple of drawbacks. First, it is formulated specifically for the triangular lattice, which has a higher spatial symmetry than other hexagonal lattices (such as kagome) and therefore misses some of the universal features of 3-sublattice antiferromagnets. Second, their mathematical formalism represents the magnetic order parameter as a \(3\times 3\) rotation matrix, an abstract, and not very intuitive mathematical object. We derive a nonlinear field theory of a 3-sublattice antiferromagnet with the order parameter represented by a _spin frame_, i.e., a triad of orthonormal vectors \(\hat{\mathbf{n}}\equiv\{\mathbf{n}_{x},\mathbf{n}_{y},\mathbf{n}_{z}\}\), directly related to sublattice magnetizations \(\mathbf{m}_{1}\), \(\mathbf{m}_{2}\), and \(\mathbf{m}_{3}\). At low energies, the magnetization dynamics reduce to rigid rotations of the spin frame, \(\partial_{t}\mathbf{n}_{i}=\mathbf{\Omega}\times\mathbf{n}_{i}\), at a local angular frequency \(\mathbf{\Omega}\). One of our main results is the Landau-Lifshitz equation for a 3-sublattice antiferromagnet, \[\rho\,\partial_{t}\mathbf{\Omega}=-\mathbf{n}_{i}\times\frac{\delta U}{\delta \mathbf{n}_{i}}. \tag{5}\] A sum over doubly repeated Roman indices, \(i=x,y,z\), is implied hereafter. Like its analogs (1) and (4), it equates the rate of change of the local density of angular momentum with the torque density from conservative forces expressed by a potential energy functional \(U(\hat{\mathbf{n}})\). 
The transparent physical meaning of the Landau-Lifshitz equation makes it easy to add other relevant perturbations. To define the spin frame \(\hat{\mathbf{n}}\), we first switch from the three unit-vector fields of sublattice magnetizations \(\mathbf{m}_{1}\), \(\mathbf{m}_{2}\), and \(\mathbf{m}_{3}\) to uniform magnetization \(\mathbf{m}\) and two staggered magnetizations \(\mathbf{n}_{x}\) and \(\mathbf{n}_{y}\) (Fig. 1): \[\mathbf{m} = \mathbf{m}_{1}+\mathbf{m}_{2}+\mathbf{m}_{3},\] \[\mathbf{n}_{x} = (\mathbf{m}_{2}-\mathbf{m}_{1})/\sqrt{3}, \tag{6}\] \[\mathbf{n}_{y} = (2\mathbf{m}_{3}-\mathbf{m}_{2}-\mathbf{m}_{1})/3.\] To them, we add the vector spin chirality [19] \[\mathbf{n}_{z}=\frac{2}{3\sqrt{3}}(\mathbf{m}_{1}\times\mathbf{m}_{2}+ \mathbf{m}_{2}\times\mathbf{m}_{3}+\mathbf{m}_{3}\times\mathbf{m}_{1}). \tag{7}\] As long as \(\mathbf{m}=0\), sublattice fields \(\mathbf{m}_{1}\), \(\mathbf{m}_{2}\), and \(\mathbf{m}_{3}\) are coplanar and thus define the _spin plane_. Staggered magnetizations \(\mathbf{n}_{x}\) and \(\mathbf{n}_{y}\) lie in the spin plane, whereas spin chirality \(\mathbf{n}_{z}\) is orthogonal to it. The three unit vectors \(\mathbf{n}_{i}\) form a right-oriented orthonormal spin frame: \[\mathbf{n}_{i}\cdot\mathbf{n}_{j}=\delta_{ij},\quad\mathbf{n}_{i}\times \mathbf{n}_{j}=\epsilon_{ijk}\mathbf{n}_{k}. \tag{8}\] To derive the dynamics of the spin frame, we follow the standard Lagrangian approach [6; 12; 20] and integrate out the hard field of uniform magnetization \(\mathbf{m}\) to obtain the dynamics of the spin frame. Our starting point is the Landau-Lifshitz equations for sublattice magnetizations, \[\mathcal{S}\,\partial_{t}\mathbf{m}_{1}=-\mathbf{m}_{1}\times\frac{\delta V}{ \delta\mathbf{m}_{1}}, \tag{9}\] and similarly for sublattices 2 and 3. Like in 2-sublattice antiferromagnets [21], the potential energy functional \(V\) is dominated by the antiferromagnetic exchange interaction imposing a penalty for \(\mathbf{m}\neq 0\)[22], \[\mathcal{V}(\mathbf{m},\hat{\mathbf{n}})=\frac{\mathbf{m}^{2}}{2\chi}+ \mathcal{U}(\hat{\mathbf{n}}), \tag{10}\] where \(\chi\) is the paramagnetic susceptibility. The subdominant term \(U[\hat{\mathbf{n}}]\), expressing the energy of the antiferromagnetic order parameter, will be discussed below. With the aid of Eqs. (6), (9), and (10) we find that, like in 2-sublattice antiferromagnets [5], staggered magnetizations \(\mathbf{n}_{x}\) and \(\mathbf{n}_{y}\) precess about the direction of uniform magnetization \(\mathbf{m}\) at the angular velocity [20] \[\mathbf{\Omega}\approx\frac{\mathbf{m}}{\chi\mathcal{S}}. \tag{11}\] The linear proportionality between the precession frequency and uniform magnetization (11) can be derived as the equation of motion for uniform magnetization \(\mathbf{m}\) from the following Lagrangian for fields \(\mathbf{m}\) and \(\hat{\mathbf{n}}\)[20]: \[\mathcal{L}(\mathbf{m},\hat{\mathbf{n}})=\mathcal{S}\mathbf{m}\cdot\mathbf{ \Omega}-\frac{\mathbf{m}^{2}}{2\chi}-\mathcal{U}(\hat{\mathbf{n}}). \tag{12}\] The angular velocity can be expressed explicitly in terms of \(\hat{\mathbf{n}}\) via the kinematic identity \[\mathbf{\Omega}=\frac{1}{2}\mathbf{n}_{i}\times\partial_{t}\mathbf{n}_{i}, \tag{13}\] The first term in the Lagrangian (12) is linear in the velocities \(\partial_{t}\mathbf{n}_{i}\), so its action represnts the spin Berry phase. 
It yields the expected density of angular momentum \(\mathcal{S}\mathbf{m}\) Figure 2: Hexagonal lattices and their magnetic sublattices: (a) kagome, (b) triangular lattice. (c) Spatial coordinate axes \(x\), \(y\), and \(z\). Red, green, and blue colors indicate magnetic sublattices 1, 2, and 3. Filled triangles and dotted lines denote \(C_{3}\) and \(C_{2}\) rotation axes, respectively. Lagrangian (12) is quadratic in uniform magnetization \(\mathbf{m}\). Integrating out this field with the aid of its equation of motion (11) yields an effective Lagrangian for the remaining fields \(\hat{\mathbf{n}}\) endowed with kinetic energy of rotation with the inertia density \(\rho=\chi\mathcal{S}^{2}\): \[\mathcal{L}(\hat{\mathbf{n}})=\frac{\rho\Omega^{2}}{2}-\mathcal{U}(\hat{ \mathbf{n}})=\frac{\rho}{4}\partial_{t}\mathbf{n}_{i}\cdot\partial_{t}\mathbf{ n}_{i}-\mathcal{U}(\hat{\mathbf{n}}). \tag{14}\] Minimizing the action in the presence of holonomic constraints (8) yields equations of motion with undetermined Lagrange multipliers [23]\(\Lambda_{ij}=\Lambda_{ji}\): \[\frac{I}{2}\partial_{t}^{2}\mathbf{n}_{i}=-\frac{\delta U}{\delta\mathbf{n}_{ i}}-\Lambda_{ij}\mathbf{n}_{j}. \tag{15}\] Finally, we take a cross product with \(\mathbf{n}_{i}\), sum over \(i\), and use Eq. (13) to obtain the Landau-Lifshitz equation (5). The energy functional \(U[\hat{\mathbf{n}}]\) is usually dominated by the Heisenberg exchange interaction. The latter respects the SO(3) symmetry of global spin rotations and therefore depends not on the orientation of the spin frame \(\hat{\mathbf{n}}\) but rather on its spatial gradients. Like in ferromagnets and 2-sublattice antiferromagnets, the exchange energy is quadratic in \(\partial_{\alpha}\mathbf{n}_{\beta}\), where Greek indices take on values \(\alpha=x\) and \(y\) only. The form of these quadratic terms is restricted by the \(D_{3}\) point-group rotational symmetry of a hexagonal lattice (Fig. 2), including \(\pm 2\pi/3\) spatial rotations about a \(C_{3}\) axis normal to the \(xy\) plane and \(\pi\) spatial rotations about \(C_{2}\) axes lying in the \(xy\) plane. Under these transformations, the staggered magnetizations (\(\mathbf{n}_{x}\),\(\mathbf{n}_{y}\)) transform in terms of each other in the same way as the in-plane components (\(k_{x},k_{y}\)) of a spatial vector \(\mathbf{k}\) do; chirality \(\mathbf{n}_{z}\) transforms as \(k_{z}\). This observation helps to construct energy terms quadratic in the gradients of staggered magnetizations and invariant under both spin rotations and lattice symmetries. To that end, we may start with a rank-4 spatial tensor and spin scalar \(\partial_{\alpha}\mathbf{n}_{\beta}\cdot\partial_{\gamma}\mathbf{n}_{\delta}\) and contract its spatial indices pairwise to form a spatial scalar. This procedure yields our second main result, the three possible gradient terms for the exchange energy density, \[\mathcal{U}=\frac{\lambda}{2}\,\partial_{\alpha}\mathbf{n}_{\alpha}\cdot \partial_{\beta}\mathbf{n}_{\beta}+\frac{\mu}{2}\,\partial_{\alpha}\mathbf{n} _{\beta}\cdot\partial_{\alpha}\mathbf{n}_{\beta}+\frac{\nu}{2}\,\partial_{ \alpha}\mathbf{n}_{\beta}\cdot\partial_{\beta}\mathbf{n}_{\alpha}. \tag{16}\] This expression resembles the elastic energy density of an isotropic solid [24], albeit with 3 Lame constants. A triangular lattice has an extra spatial symmetry. Under primitive lattice translations, sublattice indices undergo cyclic permutations, see Fig. 2(b). 
A triangular lattice has an extra spatial symmetry. Under primitive lattice translations, sublattice indices undergo cyclic permutations, see Fig. 2(b). The staggered magnetizations \(\mathbf{n}_{\alpha}\) effectively undergo \(\pm 2\pi/3\) spatial rotations, whereas the gradients \(\partial_{\alpha}\) do not. Thus translational symmetry forbids the \(\lambda\) and \(\nu\) terms for a triangular lattice. The Landau-Lifshitz equation (5) for a 3-sublattice Heisenberg antiferromagnet reads \[\rho\,\partial_{t}\mathbf{\Omega}=(\lambda+\nu)\,\mathbf{n}_{\alpha}\times\partial_{\alpha}\partial_{\beta}\mathbf{n}_{\beta}+\mu\,\mathbf{n}_{\alpha}\times\partial_{\beta}\partial_{\beta}\mathbf{n}_{\alpha}. \tag{17}\] Note that the exchange coupling constants \(\lambda\) and \(\nu\) enter the equation of motion only through the combination \(\lambda+\nu\), rather than individually; more on that below. An antiferromagnet with nearest-neighbor exchange interaction \(J\) has \(\lambda+\nu=0\) and \(\mu=\sqrt{3}JS^{2}/4\) on a triangular lattice, and \(\lambda+\nu=\sqrt{3}JS^{2}/4\) and \(\mu=0\) on a kagome lattice. See Supplemental Material [25] for the contributions of further-neighbor interactions. In what follows, we use the spin-frame formulation of the field theory to obtain the properties of excitations: spin waves and vortices.

_Spin waves._ Linear spin waves on top of a uniform ground state \(\mathbf{n}_{i}=\text{const}\) can be parametrized in terms of infinitesimal rotations of the spin frame, \(\delta\mathbf{n}_{i}=\delta\boldsymbol{\phi}\times\mathbf{n}_{i}\). Here \(\delta\boldsymbol{\phi}=\mathbf{n}_{i}\delta\phi_{i}\) is a triplet of infinitesimal rotation angles \(\delta\phi_{i}\) about the corresponding spin axes; \(\partial_{t}\delta\boldsymbol{\phi}=\mathbf{\Omega}\). Assuming a plane wave with wavenumber \(k\) travelling along the \(x\) direction, \(\delta\boldsymbol{\phi}(t,x)=\mathbf{e}\,\delta\phi\,e^{i(kx-\omega t)}\), we obtain three spin waves with \(\omega=ck\) and the following polarizations \(\mathbf{e}\) and velocities \(c\): \[\begin{split}\mathbf{e}_{\text{I}}=\mathbf{n}_{x},& c_{\text{I}}=\sqrt{\mu/\rho},\\ \mathbf{e}_{\text{II}}=\mathbf{n}_{y},& c_{\text{II}}=\sqrt{(\lambda+\mu+\nu)/\rho},\\ \mathbf{e}_{\text{III}}=\mathbf{n}_{z},& c_{\text{III}}=\sqrt{(\lambda+2\mu+\nu)/\rho}.\end{split} \tag{18}\] The velocities satisfy the identity \[c_{\text{I}}^{2}+c_{\text{II}}^{2}=c_{\text{III}}^{2}. \tag{19}\] Modes I and II are analogs of transverse and longitudinal sound in a two-dimensional hexagonal solid.

_Vortices._ The existence of topologically stable point defects (vortices) in a 3-sublattice Heisenberg antiferromagnet was first pointed out by Kawamura and Miyashita [19]. A \(2\pi\) rotation of the spin frame corresponds to a loop in the order-parameter space that cannot be continuously deformed to a point. It is convenient to parametrize the orientation of the spin frame by starting with a reference uniform configuration \(\mathbf{n}_{x}=(1,0,0)\), \(\mathbf{n}_{y}=(0,1,0)\), \(\mathbf{n}_{z}=(0,0,1)\), and applying consecutive Euler rotations through angles \(\phi\) about \(\mathbf{n}_{z}\), \(\theta\) about \(\mathbf{n}_{y}\), and \(\psi\) about \(\mathbf{n}_{z}\). On a triangular lattice (\(\lambda+\nu=0\)), a vortex configuration with the lowest energy is described by the Euler angles \(\phi\), \(\theta\), and \(\psi\) given by \[e^{i\phi}=\frac{x+iy}{|x+iy|},\quad\theta=\frac{\pi}{2},\quad\psi=\text{const}. \tag{20}\] This expression agrees well with a numerically obtained vortex configuration for a triangular lattice, Fig. 3(a).
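To illustrate the configuration (20) concretely, the sketch below (an illustrative addition, not part of the original text) builds the vortex frame numerically and checks that the isotropic (\(\lambda+\nu=0\)) exchange energy density falls off as \(1/r^{2}\) away from the core, as expected for a vortex. It assumes intrinsic \(z\)-\(y\)-\(z\) Euler rotations, which is one reading of the convention stated above; the \(1/r^{2}\) scaling does not depend on that choice, only the prefactor does.

```python
import numpy as np

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

def Ry(a):
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

def vortex_frame(x, y, psi=0.5):
    """Frame of Eq. (20): phi = arg(x + i y), theta = pi/2, psi = const.
    Intrinsic z-y-z Euler rotations are assumed here."""
    phi = np.arctan2(y, x)
    R = Rz(phi) @ Ry(0.5 * np.pi) @ Rz(psi)
    return [R[:, i] for i in range(3)]   # n_x, n_y, n_z are the columns of R

def energy_density(x, y, mu=1.0, h=1.0e-4):
    """Isotropic (lambda + nu = 0) exchange energy density at (x, y), Eq. (16), by central differences."""
    U = 0.0
    for b in range(2):                        # staggered magnetizations n_x and n_y only
        for dx, dy in [(h, 0.0), (0.0, h)]:   # derivatives along x and y
            dn = (vortex_frame(x + dx, y + dy)[b] - vortex_frame(x - dx, y - dy)[b]) / (2.0 * h)
            U += 0.5 * mu * dn @ dn
    return U

# Away from the core, the energy density of the vortex falls off as 1/r^2,
# so r^2 * U approaches a constant.
for r in [1.0, 2.0, 4.0, 8.0]:
    x, y = r / np.sqrt(2.0), r / np.sqrt(2.0)
    print(r, r**2 * energy_density(x, y))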
Although we have not been able to find an exact vortex solution for a generic hexagonal antiferromagnet, we can understand the effect of the \(\lambda+\nu\) term on the vortex shape perturbatively. Starting with the isotropic solution (20) for \(\lambda+\nu=0\), we keep \(\theta=\pi/2\) and choose for simplicity \(\psi=\pi/2\) to obtain the following energy density: \[\mathcal{U}=\frac{\lambda+\mu+\nu}{2}(\partial_{x}\phi)^{2}+\frac{\mu}{2}(\partial_{y}\phi)^{2}. \tag{21}\] The vortex acquires an elliptical shape with the semiaxis ratio \(b=\sqrt{(\lambda+\mu+\nu)/\mu}\): \[e^{i\phi}=\frac{bx+iy}{|bx+iy|},\quad\theta=\frac{\pi}{2},\quad\psi=\frac{\pi}{2}. \tag{22}\] Although this result is obtained in the limit \(\lambda+\nu\ll\mu\), our numerical calculations on a kagome lattice demonstrate its accuracy even in the opposite limit. Figure 3(b) shows a vortex in a kagome antiferromagnet with the ratio of first- and third-neighbor exchange interactions \(J_{1}/J_{3}=-20\), or \((\lambda+\nu)/\mu=5/3\). Its shape agrees with the extrapolated semiaxis ratio \(b=\sqrt{8/3}\). See Fig. S2 [25] for other orientations of the major axes of a vortex.

_Discussion._ In this paper, we have presented a universal field theory of a hexagonal antiferromagnet with 3 magnetic sublattices. The order parameter is a spin frame constructed from the sublattice magnetizations and the vector chirality. Its mechanics is fully specified by the inertia of the spin frame \(\rho\) and three Lamé constants \(\lambda\), \(\mu\), and \(\nu\). This simple and versatile field theory enabled us to establish a Pythagorean identity for the three spin-wave velocities (19) and to predict a generally elliptical shape for vortices (22). It is worth noting that the three Lamé constants enter the equations of motion (17) in the form of two linear combinations, \(\lambda+\nu\) and \(\mu\). As a result, the three spin-wave velocities are constrained by the Pythagorean identity (19). The origin of this behavior can be understood by examining the exchange energy density (16). The \(\lambda\) and \(\nu\) terms in it can be obtained from one another through integration by parts. Thus their infinitesimal variations are the same, up to boundary terms, and so they make identical contributions to the classical dynamics. The difference \(\lambda-\nu\) is a "silent" coupling constant that does not manifest itself in the classical dynamics of magnetization. That it nonetheless has an intriguing topological nature can be seen with the aid of the identity \[\frac{1}{2}(\partial_{\alpha}\mathbf{n}_{\alpha}\cdot\partial_{\beta}\mathbf{n}_{\beta}-\partial_{\alpha}\mathbf{n}_{\beta}\cdot\partial_{\beta}\mathbf{n}_{\alpha})=\mathbf{n}_{z}\cdot(\partial_{x}\mathbf{n}_{z}\times\partial_{y}\mathbf{n}_{z}). \tag{23}\] We now see that the energy term associated with \(\lambda-\nu\) is a topological quantity proportional to the skyrmion density of the spin chirality (7). Its topological nature ensures that it gives a quantized contribution to the exchange energy that does not change under continuous variations of the order parameter and therefore does not affect the classical dynamics. However, it may lead to interesting boundary effects such as the presence of edge modes, as discussed recently in a different context by Dong _et al._ [26]. We will address the topological aspects of this field theory in a separate publication [27].
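As a numerical cross-check of the identity (23) (again an illustrative sketch, not part of the original text), one can evaluate both sides by finite differences for an arbitrary smooth orthonormal frame field and confirm that they agree pointwise. The test field below is an arbitrary choice made here for the check.

```python
import numpy as np

def frame(x, y):
    """Arbitrary smooth orthonormal frame field, used only to test the identity."""
    a = 0.4 * np.sin(x) * np.cos(2.0 * y)   # rotation angle about the lab z axis
    b = 0.3 * np.cos(x - y)                 # rotation angle about the lab x axis
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(b), -np.sin(b)],
                   [0.0, np.sin(b),  np.cos(b)]])
    R = Rx @ Rz
    return [R[:, i] for i in range(3)]      # n_x, n_y, n_z

def grad(i, x, y, h=1.0e-5):
    """Central-difference gradients (d/dx n_i, d/dy n_i) at the point (x, y)."""
    dndx = (frame(x + h, y)[i] - frame(x - h, y)[i]) / (2.0 * h)
    dndy = (frame(x, y + h)[i] - frame(x, y - h)[i]) / (2.0 * h)
    return dndx, dndy

x0, y0 = 0.8, -1.3
dnx, dny, dnz = grad(0, x0, y0), grad(1, x0, y0), grad(2, x0, y0)
n_z = frame(x0, y0)[2]

# Left-hand side of Eq. (23), with alpha, beta summed over {x, y}.
div = dnx[0] + dny[1]
lhs = 0.5 * (div @ div - (dnx[0] @ dnx[0] + dny[0] @ dnx[1]
                          + dnx[1] @ dny[0] + dny[1] @ dny[1]))
# Right-hand side of Eq. (23): skyrmion density of the chirality field n_z.
rhs = n_z @ np.cross(dnz[0], dnz[1])

print(lhs, rhs)   # the two sides agree to finite-difference accuracy
```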
Figure 3: Vortices in 3-sublattice antiferromagnets. (a) Triangular lattice with nearest-neighbor interactions only. (b) Kagome lattice with first- and third-neighbor interactions, \(J_{3}=J_{3}^{\prime}=-J_{1}/20\). See Supplemental Material [25] for the definition of further-neighbor interactions. Red, green, and blue arrows are spins of the three magnetic sublattices. Spins on sublattice 3 (blue) point away from the viewer; spins on sublattices 1 (red) and 2 (green) have components pointing toward the reader. The circle and ellipse reflect the expected shape of the vortex with the major axis ratio \(b=\sqrt{(\lambda+\mu+\nu)/\mu}\).

_Acknowledgments._ We thank Boris Ivanov and Se Kwon Kim for useful discussions. The research has been supported by the U.S. Department of Energy under Award No. DE-SC0019331 and by the U.S. National Science Foundation under Grants No. NSF PHY-1748958 and PHY-2309135.