Dataset columns: id, title, abstract, authors, published_date, link, markdown.
2309.07544
VerilogEval: Evaluating Large Language Models for Verilog Code Generation
The increasing popularity of large language models (LLMs) has paved the way for their application in diverse domains. This paper proposes a benchmarking framework tailored specifically for evaluating LLM performance in the context of Verilog code generation for hardware design and verification. We present a comprehensive evaluation dataset consisting of 156 problems from the Verilog instructional website HDLBits. The evaluation set consists of a diverse set of Verilog code generation tasks, ranging from simple combinational circuits to complex finite state machines. The Verilog code completions can be automatically tested for functional correctness by comparing the transient simulation outputs of the generated design with a golden solution. We also demonstrate that the Verilog code generation capability of pretrained language models could be improved with supervised fine-tuning by bootstrapping with LLM generated synthetic problem-code pairs.
Mingjie Liu, Nathaniel Pinckney, Brucek Khailany, Haoxing Ren
2023-09-14T09:15:34Z
http://arxiv.org/abs/2309.07544v2
# VerilogEval: Evaluating Large Language Models for Verilog Code Generation ###### Abstract The increasing popularity of large language models (LLMs) has paved the way for their application in diverse domains. This paper proposes a benchmarking framework tailored specifically for evaluating LLM performance in the context of Verilog code generation for hardware design and verification. We present a comprehensive evaluation dataset consisting of 156 problems from the Verilog instructional website HDLBits. The evaluation set consists of a diverse set of Verilog code generation tasks, ranging from simple combinational circuits to complex finite state machines. The Verilog code completions can be automatically tested for functional correctness by comparing the transient simulation outputs of the generated design with a golden solution. We also demonstrate that the Verilog code generation capability of pretrained language models could be improved with supervised fine-tuning by bootstrapping with LLM generated synthetic problem-code pairs. ## I Introduction The escalating popularity of Large Language Models (LLMs), characterized by their remarkable capacity to comprehend and generate human-like text, has opened up a realm of possibilities across diverse domains [1, 2, 3]. LLMs tailored for specific domains have garnered significant attention owing to their impressive performance on both general-purpose benchmarks and specialized tasks within domains like financial engineering [4], biomedical studies [5, 6], and general scientific research [7]. When it comes to coding, LLMs can assist developers by suggesting code snippets, offering solutions to common programming challenges, and even explaining complex concepts in a more accessible manner [8, 9]. In the realm of Electronic Design Automation, LLMs provide the potential to aid engineers in designing and verifying digital systems, providing insights into Verilog coding, optimizing circuits, and automating time-consuming tasks [10, 11]. A number of studies have initiated the evaluation of LLMs' potential in generating Verilog code. Thakur et al. [12] fine-tuned CodeGen [9] models which was evaluated on 17 designs. A later follow up work [10] further demonstrate the ability to design chip-level circuits with ChatGPT. RTLLM [13] propose a benchmark framework with 30 designs, which focus on increasing the benchmark design scalability. The authors further improved the solution quality with simple and effective prompt engineering techniques. While LLMs have proven to be powerful tools, their pretraining phase, characterized by unsupervised training loss, often lacks alignment with specific tasks. To enhance performance, supervised fine-tuning (SFT) is used [14], involving task-specific data to adapt to requirements. Ensuring model alignment is imperative for achieving improved performance, driving the investigation of increasingly computationally demanding techniques, such as reinforcement learning with human feedback [15, 16]. The cost associated with acquiring labeled data also remains a barrier, prompting a growing interest in alternative annotation-free alignment techniques. Self-Instruct [17] starts with a seed task, using LLMs to create more instructions and instances. WizardLM's EvolveInstruct [18] evolves instructions for a diverse dataset, which is further applied to code generation [19]. 
Additionally, a recent study utilized GPT-4 to generate a high-quality synthetic textbook dataset, achieving superior coding capabilities at 1/100th of the cost of other models [20]. Within the realm of Verilog coding, there remains a notable gap in the exploration of supervised fine-tuning for model enhancement. Moreover, notwithstanding commendable endeavors, recent research in Verilog code benchmarking has revealed limitations concerning its comprehensiveness, quantity, and the diversity of problems studied. Effective benchmarks should exhibit diversity, encompassing a wide array of topics, to mitigate testing variance. Furthermore, they should offer unambiguous problem descriptions, ensuring that solutions can be assessed with clear distinctions regarding correctness. In addition, reliability and automation are key factors, enabling the straightforward evaluation of generated code through robust testing procedures.

Fig. 1: **VerilogEval** uses a sandbox environment for simple and reproducible evaluation of LLM Verilog code generation.

Our research addresses these gaps through the introduction of **VerilogEval**1, an open-source benchmark that encompasses a diverse array of questions, offers clear and unambiguous problem descriptions, and incorporates automated, easily reproducible testing procedures. This contribution significantly enhances the robustness and effectiveness of the evaluation framework for Verilog code generation and assessment. Our specific contributions are as follows:

Footnote 1: [https://github.com/NVlabs/verilog-eval](https://github.com/NVlabs/verilog-eval)

* We present a comprehensive evaluation dataset comprising 156 problems sourced from HDLBits. These problems have undergone meticulous curation, ensuring both clarity and diversity.
* We developed a benchmarking framework wherein Verilog code completions are subjected to automatic functional correctness testing.
* We constructed a synthetic supervised fine-tuning dataset by leveraging LLMs to generate problem descriptions paired with Verilog code. This dataset is employed in extensive experiments on SFT, further enhancing the model's proficiency in Verilog coding tasks.

## II Evaluation Framework

In this section we discuss the details of our evaluation framework and evaluation dataset collection. Our work closely follows the widely adopted Python coding benchmark HumanEval [21] for best practices. **VerilogEval** is presented in Fig. 1, where we develop a sandbox environment for simple and reproducible evaluation of LLM Verilog code generation.

### VerilogEval Evaluation Set

We evaluate functional correctness on a selected set of problems from the Verilog instructional website HDLBits2. HDLBits is a collection of digital circuit design exercises and an online judge for learning digital logic using the Verilog hardware description language. The evaluation set consists of diverse Verilog coding tasks, ranging from module implementation of simple combinational circuits to complex finite state machines, code debugging, and testbench construction.

Footnote 2: [https://hdlbits.01xz.net/wiki/Problem_sets](https://hdlbits.01xz.net/wiki/Problem_sets)

We focus on generating _self-contained_3 Verilog modules from natural language text descriptions. We define a Verilog module as _self-contained_ if the module implementation does not require instantiation of any other modules.
We emphasize the significance of module instantiation as a crucial capability in Verilog, playing an essential role in constructing extensive system-level designs. It's important to note that our evaluation does not delve into this topic. However, while most problems in **VerilogEval** are intentionally concise, they require the LLM to possess an understanding of hardware design along with adept problem-solving skills in areas encompassing circuits, Boolean logic, state transitions, and more.

Footnote 3: Example of a removed question that is not _self-contained_: [https://hdlbits.01xz.net/wiki/Module_cselad](https://hdlbits.01xz.net/wiki/Module_cselad).

Fig. 2 shows an example of the problem vectorr. **Problem Description** includes both the natural language description and the module header and IO definition. Including the module header removes ambiguity such as the bit width of signals. The **Question Prompt** is concatenated with the **Problem Description** and sent to the LLM for inference. **Canonical Solution** is provided as the golden solution for testing.

### _Problem Descriptions_

Although HDLBits serves as a valuable resource for Verilog coding challenges, a significant portion of the website's problem descriptions are not readily compatible with text-only language models. These problem descriptions rely on various modalities, frequently incorporating circuit schematic images, state transition diagram graphs, Boolean logic tables, and Karnaugh maps. We explore two methods for generating text-only problem descriptions for these problem sets.

#### II-B1 **VerilogEval-machine**

We completely disregard the descriptions on the website and opt to utilize LLMs for the automated creation of problem descriptions. We employ the prompt template depicted in Fig. 3, employing gpt-3.5-turbo. Initially, we create all problem descriptions using zero-shot methods. We validate these descriptions by using the LLM to produce code solutions. Problem descriptions are considered invalid if none of the generated completions succeed across 100 samples, and such descriptions are then discarded. Surprisingly, among the pool of 156 candidate problems, 108 of them yield successful solutions upon initial sampling. Subsequent to this, we consider the valid generated descriptions as few-shot examples (4-shot) and proceed to further sample unsolved problems. In this phase, we iteratively sample descriptions along with their corresponding code completions (8 completions per description). Descriptions are labeled as valid as soon as any of the corresponding code completions pass testing. Sampling for each problem is halted upon reaching an allocated sampling budget, resulting in an increase of 35 additional solutions. In total we generated 143 valid problem descriptions.

Fig. 2: Example of vectorr in **VerilogEval-human**. The **Problem Description** includes both natural language description and module header, input, and output definition.

#### II-B2 **VerilogEval-human**

We engaged in manual review and conversion of problem descriptions from the website into a text-only structure. We dedicated particular attention to addressing ambiguity within the problem descriptions, particularly when precisely determining attributes such as the clock's posedge or negedge triggering, whether reset and enable signals are active high or active low, and whether they operate synchronously or asynchronously. Boolean logic tables and Karnaugh maps were transformed into textual tabular formats.
Circuit schematic diagrams were translated into natural language explanations of the connections between logical gates. For sequential waveforms, we meticulously detailed all signal values at each transition edge of the clock, presented in a tabular layout with an added column for time steps. One particular challenge we confronted revolved around the task of converting state transition graphs into a text-based representation. To tackle this, we turned to ChatGPT for guidance, as depicted in Fig. 4. We ultimately adopted the edge list-based format to depict these state transition graphs. Examples of manually converted descriptions are shown in Fig. 5. Initial explorations were conducted regarding Verilog code completion by employing the converted formats. Notably, ChatGPT exhibited the capability to generate meaningful code using these formats for simple problems. We manually converted 156 problem descriptions in total.

Comparing the descriptions between **machine** and **human**, we find that **machine** descriptions are often more verbose (vectorr in Figs. 2 and 3). Although the model is directed to generate high-level explanations, produced **machine** descriptions frequently delve into low-level details. These descriptions tend to mirror the code's implementation line by line, rather than focusing on the overarching functionality of the circuit (2012_q2b in Figs. 3 and 5). Furthermore, although we have taken steps to ensure that all **machine** descriptions are capable of producing passing solutions through LLMs, we cannot guarantee the absence of ambiguity and errors. Nevertheless, **VerilogEval-machine** remains a valuable benchmark, particularly for assessing an LLM's competence in comprehending low-level instructions and generating syntactically and functionally accurate Verilog code.

We test generated code completions for functional correctness through simulations. To enable automated testing, we compare simulation results of generated code completions against golden reference solutions. We assert output signal correctness at clock (posedge and/or negedge) transition edges for sequential circuits, while for combinational circuits, we validate outputs whenever any input signal changes. Our testbench incorporates two categories of input signals for each problem: manually crafted test patterns of significance, and randomly generated test patterns. Randomly generated test patterns may span from a few hundred clock cycles for simple problems to several thousand cycles for more complex ones. We adapted the sandbox environment for safely running untrusted programs from HumanEval [21]. We built and installed the open-source ICARUS Verilog [24] simulator in a Docker container. We note that our evaluation of Verilog syntax is limited by the simulator, which might not include all features of the Verilog HDL IEEE-1364 standard. Simulation and testing are handled under the hood and results can be produced with a single command.

### _Evaluation Metric_

Early works on code evaluation report match-based metrics such as the BLEU score [25]. However, recent works [21, 26] have argued that such metrics do not correlate well with functional correctness. In Fig. 6 we show that Verilog coding exhibits similar issues, where the distributions of correct versus wrong solutions are not clearly separable based on BLEU score probability densities. We follow recent work in directly measuring code functional correctness through the pass@\(k\) metric [21, 22, 27], where a problem is considered solved if any of the \(k\) samples passes the unit tests. We also suggest using the unbiased estimator from [21]: \[pass@k:=\operatorname*{\mathbb{E}}_{Problems}\left[1-\frac{\binom{n-c}{k}}{\binom{n}{k}}\right], \tag{1}\] where we generate \(n\geq k\) samples per task in which \(c\leq n\) samples pass testing. In Fig. 7 we show that the number of samples \(n\) needs to be sufficiently large to produce low-variance estimates for pass@\(k\).
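For reference, Equation (1) above can be evaluated per problem with the numerically stable product form popularised by the HumanEval harness; the short sketch below is illustrative (the function name and example numbers are ours, not taken from the VerilogEval release).

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k for one problem, given n generated
    samples of which c passed the testbench."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    # 1 - C(n-c, k) / C(n, k), evaluated as a stable running product
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 20 samples for a task, 7 of which pass -> pass@1, pass@5, pass@10
print([round(pass_at_k(20, 7, k), 3) for k in (1, 5, 10)])
```

Averaging this quantity over all problems yields the benchmark-level pass@\(k\).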
## III Supervised Fine-Tuning

This section provides our findings concerning the supervised fine-tuning (SFT) of Large Language Models (LLMs) for Verilog coding. We elucidate our approach to the generation of synthetic SFT data, achieved by utilizing LLMs to create problem descriptions, detailed in Section III-A. Subsequently, Section III-B comprises a comprehensive suite of supervised fine-tuning (SFT) experiments, showcasing its potential for improving model performance.

Fig. 5: Examples of **VerilogEval-human** descriptions. We show original website descriptions alongside manually converted text format.

Fig. 6: BLEU score probability densities for correct and wrong solutions from codegen-16B-verilog [12] for 2 tasks from **VerilogEval-human**.

### _Synthetic SFT Data Generation_

In this work, we investigate the creation of synthetic SFT data through a bootstrapping process involving code descriptions generated by LLMs. To be precise, we undertake the task of identifying and refining _self-contained_ Verilog modules sourced from Github data [12]. Subsequently, we employ the prompt template depicted in Fig. 3 to generate corresponding descriptive texts for each of these Verilog modules, effectively creating **machine** description and code pairs. It's worth noting that our approach to synthetic data generation is straightforward in its implementation, and we acknowledge the potential for more advanced techniques as a promising future direction.

We leverage Pyverilog [28] to extract the abstract syntax tree from Verilog code and employ the following filtering process to identify _self-contained_ Verilog modules from open-sourced Github Verilog code [12]:

* We verify that the filtered code contains the module and endmodule keywords, positioned at the beginning and end of the code, respectively.
* We remove Verilog modules with more than 200 lines of code or exceeding 1024 tokens.
* We ascertain that the code includes at least one of the essential keywords: always, assign, always_ff, always_comb, always_latch.
* We ensure extracted modules are _self-contained_ without any module instantiation.

We further perform approximate deduplication based on the MinHash algorithm [29] using a _Jaccard_ similarity threshold of 0.8 as in [30]. We used gpt-3.5-turbo to generate code descriptions based on the prompt template in Fig. 3, using **VerilogEval-human** descriptions of shift18, rule10, lemmings1, fsm3onehot as _few-shot_ examples. We selected these examples with the aim of encompassing a wide range of design instances and the utilization of natural language descriptions, including those presented in tabular formats. In total we generated 8,502 problem description and code pairs.
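To make the filtering rules listed above concrete, the sketch below approximates them with plain string heuristics. It is only an illustration: the authors use Pyverilog's abstract syntax tree, whereas here "tokens" are whitespace-separated words and module instantiation is detected with a crude regular expression; MinHash deduplication is omitted.

```python
import re

BEHAVIORAL_KEYWORDS = ("always", "assign", "always_ff", "always_comb", "always_latch")

def is_self_contained_candidate(code: str, max_lines: int = 200, max_tokens: int = 1024) -> bool:
    """Heuristic approximation of the paper's filter for self-contained
    Verilog modules mined from GitHub."""
    stripped = code.strip()
    # (1) a single module ... endmodule block framing the file
    if not stripped.startswith("module") or not stripped.endswith("endmodule"):
        return False
    # (2) size limits on lines and (whitespace-separated) tokens
    if len(stripped.splitlines()) > max_lines or len(stripped.split()) > max_tokens:
        return False
    # (3) at least one behavioural or continuous-assignment keyword
    if not any(re.search(rf"\b{kw}\b", stripped) for kw in BEHAVIORAL_KEYWORDS):
        return False
    # (4) crude instantiation check: "<module_name> <instance_name> (" at the
    #     start of a line in the body suggests the module is not self-contained
    body = stripped.split(";", 1)[-1]  # drop the module header
    if re.search(r"^\s*\w+\s+\w+\s*\(", body, flags=re.MULTILINE):
        return False
    return True
```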
### _Results on Supervised Fine-tuning_

We conduct extensive experiments on fine-tuning with the generated synthetic SFT data. Including both description and code, our SFT data is 11MB in file size, compared with \(\sim\)700MB of Github Verilog data used in [12]. For the fine-tuning process, we employed the Adam optimizer with hyperparameters \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and \(\epsilon=10^{-8}\). We set the learning rate to \(lr=2\times 10^{-5}\), the effective batch size to 1M tokens, and opted not to apply weight decay. For all our experiments, we sample \(n=20\) code completions for measuring pass@\(k\) with \(k=\{1,5,10\}\) using Equation (1). We use nucleus sampling [31] with top \(p=0.95\), temperature \(temp=0.8\), and a context length of 2048. We used a single NVIDIA DGX node with 8 A100s and 2TB RAM.

Our experimentation primarily focuses on the CodeGen model series [9] and its Verilog-trained counterparts in [12]. These experiments encompass model sizes of 350M, 2B, 6B, and 16B. We use -sft to indicate models fine-tuned with our synthetic SFT data. We clarify our notation for base models as follows:

* codegen-nl [9]: Natural language model. Trained on ThePile [32], an 825.18GB English text corpus.
* codegen-multi [9]: Code model. Initialized from codegen-nl and further trained on the BigQuery multilingual code dataset consisting of C, C++, Go, Java, JavaScript, and Python.
* codegen-verilog [12]: Verilog code model. Initialized from codegen-multi and further trained on \(\sim\)300MB of Github Verilog and 400MB of textbook data.

Furthermore, we conducted comparisons with the gpt-3.5-turbo and gpt-4 models through OpenAI APIs [33]. Our analysis specifically involved default 4k context length models from 0613 checkpoints.

#### III-B1 Training Epochs

Fig. 8 depicts the pass rate on **VerilogEval** with different SFT training epochs. Dashed lines indicate gpt-3.5 results. Results show that **machine** descriptions correlate well with **human**, demonstrating that synthetically generated benchmarks could be a good indicator of downstream task performance. In most cases, we observe that the performance metric \(pass@1\) continues to exhibit improvement as the supervised fine-tuning (SFT) training epochs progress, whereas the metrics \(pass@5\) and \(pass@10\) begin to deteriorate. This trend suggests that with an increase in training epochs, the model tends to overfit to the SFT data, limiting its ability to generate diverse solutions for tackling complex challenges. Interestingly, this overfitting also leads to an increase in the model's confidence and success rate when dealing with simpler problems, highlighting a discernible trade-off between the \(pass@1\) and \(pass@10\) metrics. Consequently, we encourage future research to report on both of these metrics, particularly for models post-alignment, to provide a more comprehensive assessment of their performance. Throughout the remainder of this study, we conduct supervised fine-tuning (SFT) using 10 epochs for multi and 5 epochs for verilog models.

Fig. 7: Variance in estimating pass@\(k\) with \(n\). Samples from codegen-16B-verilog [12] for **VerilogEval-human**.

#### III-B2 Model Size and Base Model

Fig. 9 illustrates the pass rates for the **VerilogEval** task using various model sizes and base models. The base model denotes the initial model checkpoint prior to SFT. It is worth noting that we have omitted the results for models with a size of 350M, either due to their unavailability or because their pass rates are insufficient to demonstrate statistical significance. Our results suggest that more capable and larger models generally result in better Verilog coding capabilities. In most instances, SFT using synthetically generated data yields notable enhancements in downstream model performance.
These improvements are particularly pronounced in the case of multi models, where the original model was not explicitly trained on a substantial corpus of Verilog code. In the case of verilog models, **VerilogEval-machine** exhibited significant performance gains, whereas **VerilogEval-human** displayed comparatively less improvement and, at times, even slight deteriorations. Our SFT data is sourced from the GitHub Verilog corpus, and thus does not introduce additional Verilog code that the model did not encounter during its training for verilog models. However, by providing problem-code pairs, this data facilitates better alignment of the model, resulting in improved outcomes for **VerilogEval-machine**. Despite incorporating _few-shot_ prompting during the generation of SFT data (as discussed in Section III-A), the generated descriptions tend to be primarily low-level, lacking the textual diversity found in **human** examples, such as state transition graphs, waveforms, Karnaugh maps, and similar elements. This "mis-alignment" between SFT data and **VerilogEval-human** might have caused verilog-sft models to degrade slightly in performance. We envision that increasing SFT (and Verilog pretraining) data diversity and quality would further lead to increased performance.

In TABLE II we present the results obtained from both the gpt-3.5 and gpt-4 models for the **VerilogEval** task. Additionally, we demonstrate that our top-performing model, codegen-16B-verilog-sft, exhibits performance that is on par with gpt models. In TABLE III we present a comparison of results between sft models utilizing two distinct base models: codegen-nl and codegen-multi. The tokenizer of the codegen-nl model is inefficient in handling whitespace, consequently preventing some of the **VerilogEval-human** problems from fitting within the limited context window of 2048 tokens. Thus we only display results for **VerilogEval-machine**.

TABLE II: Results on gpt models, comparing with codegen-16B-verilog-sft.

| Model | machine pass@1 | machine pass@5 | machine pass@10 | human pass@1 | human pass@5 | human pass@10 |
| --- | --- | --- | --- | --- | --- | --- |
| gpt-3.5 | 46.7 | 69.1 | 74.1 | 26.7 | 45.8 | 51.7 |
| gpt-4 | 47.9 | 67.8 | 72.9 | 27.0 | 45.8 | 52.0 |
| verilog-sft | 46.2 | 67.3 | 73.7 | 28.8 | 45.9 | 52.3 |

TABLE III: Comparing nl and multi as SFT base models.

Fig. 8: SFT training epochs and pass rate on **VerilogEval**. Dashed lines are gpt-3.5 results.

Fig. 9: **VerilogEval** results on different model sizes. Solid lines are sft models, dotted lines are corresponding base models without SFT, dashed lines are gpt-3.5 results.

Despite the fact that multi models undergo pretraining on an extensive corpus of multi-lingual code data, they exhibit only marginal enhancements of approximately 3% when applied to the Verilog coding task. This observation
potentially suggests that there is limited positive knowledge transfer between software programming languages like C++ and hardware descriptive languages such as Verilog. This highlights the significance of pretraining on substantial Verilog corpora, as it can significantly enhance model performance in Verilog-related tasks. #### Iii-B3 SFT Data Quality We conducted a comparative experiment aimed at assessing the significance of data quality in SFT. In this experiment, we introduced a manipulation by shuffling problem descriptions with incongruous Verilog code solutions, resulting in the creation of erroneous problem-code pairs denoted as sft-error. The outcomes, as presented in TABLE IV, provide a comparison of the performance results obtained through fine-tuning on the codegen-2B-verilog models concerning the **VerilogEval-machine** task. The results clearly demonstrate that the inclusion of incorrect problem-code pairs detrimentally impacts model performance, underscoring the critical importance of maintaining high-quality SFT data. ## IV Limitations and Future Directions In **VerilogEval**, our primary focus centers on harnessing Large Language Models (LLMs) to generate _self-contained_ Verilog modules directly from natural language text descriptions. While we incorporate a wide array of hardware design topics through human-generated descriptions, it's important to note that our current evaluations are confined to boilerplate code generation for relatively small-scale designs. We emphasize the significance of module instantiation as a crucial capability in Verilog, as it plays a pivotal role in constructing complex system-level designs, albeit currently absent from our benchmark. Recent advancements in LLM-based coding benchmarking, as seen in [34], are starting to explore pragmatic code generation beyond standalone functions. It's worth mentioning that our testing environment solely assesses functional correctness and does not ensure that the generated Verilog code adheres to synthesizable formatting standards. We do not evaluate the performance of downstream circuit implementations, a gap that is addressed by the work presented in [13]. Additionally, it's crucial to recognize that boilerplate Hardware Description Language (HDL) code generation, as currently addressed in our **VerilogEval** and similar endeavors, inherently operates within an exceedingly limited scope within the broader landscape of hardware design. Hardware design, in its entirety, necessitates a multidisciplinary approach that draws on the expertise of domain professionals ranging from transistor device, circuit design, and to hardware system architecture. This holistic understanding is indispensable, as it allows design teams to navigate the intricacies of hardware design effectively. Furthermore, it's important to highlight that a significant portion of the hardware design process revolves around optimizing the Power, Performance, and Area (PPA) metrics. These three factors, power consumption, computational performance, and physical chip area, are paramount considerations in modern hardware design. Achieving the right balance among them is a formidable challenge that requires meticulous planning, advanced simulation, and iterative refinement. Equally critical is the extensive effort invested in design verification, aimed at ensuring the reliability and yield of the hardware. 
Verifying that a design functions as intended under diverse conditions and corner cases is vital to mitigate the risk of costly errors and to guarantee the final product meets its specifications. In essence, the successful realization of hardware designs hinges on the convergence of domain expertise, PPA optimization, and robust verification practices. Nonetheless, Large Language Models (LLMs) present an exciting opportunity for future research to revolutionize the hardware design process. This transformative potential lies in their ability to collaborate with domain experts in formulating novel problems and devising innovative solutions. By leveraging the vast knowledge and natural language understanding of LLMs, domain experts can work in tandem with these models to explore uncharted territories in hardware design, potentially leading to breakthroughs that enhance the efficiency, reliability, and agility of the design process. The fusion of human expertise with machine intelligence using LLMs in this collaborative endeavor promises an exhilarating avenue for future research, one that holds the potential to reshape the very fabric of the hardware design research landscape.

## V Conclusion

The growing prominence of Large Language Models (LLMs) has ushered in a new era of their application across various domains. In this paper we introduce a specialized benchmarking framework meticulously designed to assess LLM performance within the realm of Verilog code generation for hardware design. The cornerstone of this contribution lies in the creation of a robust evaluation dataset, comprising 156 distinct problems sourced from HDLBits. Furthermore, we have demonstrated that the Verilog code generation capabilities of pretrained language models can be enhanced through supervised fine-tuning, facilitated by the generation of synthetic problem-code pairs using LLMs. These findings not only advance the state of the art in Verilog code generation but also underscore the vast potential of LLMs in shaping the future of hardware design and verification.

## Acknowledgment

The authors would like to thank Henry Wong ([email protected]), the creator of HDLBits, for his invaluable assistance in providing reference solutions and testbenches for the problems used in this paper.

TABLE IV: Comparative experiment on SFT data quality. Incorrect low-quality SFT data degrades model performance. Results are on **VerilogEval-machine**.

| Model | pass@1 | pass@5 | pass@10 |
| --- | --- | --- | --- |
| codegen-2B-verilog | 20.1 | 46.0 | 55.9 |
| codegen-2B-verilog-sft | 35.9 | 59.0 | 65.7 |
| codegen-2B-verilog-sft-error | 21.4 | 38.8 | 46.1 |
2309.10038
A JWST investigation into the bar fraction at redshifts 1 < z < 3
The presence of a stellar bar in a disc galaxy indicates that the galaxy hosts in its main part a dynamically settled disc and that bar-driven processes are taking place in shaping its evolution. Studying the cosmic evolution of the bar fraction in disc galaxies is therefore essential to understand galaxy evolution in general. Previous studies have found, using the Hubble Space Telescope (HST), that the bar fraction significantly declines from the local Universe to redshifts near one. Using the first four pointings from the James Webb Space Telescope (JWST) Cosmic Evolution Early Release Science Survey (CEERS) and the initial public observations for the Public Release Imaging for Extragalactic Research (PRIMER), we extend the studies of the bar fraction in disc galaxies to redshifts $1 \leq z \leq 3$, i.e., for the first time beyond redshift two. We only use galaxies that are also present in the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) on the Extended Groth Strip (EGS) and Ultra Deep Survey (UDS) HST observations. An optimised sample of 368 close-to-face-on galaxies is visually classified to find the fraction of bars in disc galaxies in two redshift bins: $1 \leq z \leq 2$ and $2 < z \leq 3$. The bar fraction decreases from $\approx 17.8^{+ 5.1}_{- 4.8}$ per cent to $\approx 13.8^{+ 6.5}_{- 5.8}$ per cent (from the lower to the higher redshift bin), but is about twice the bar fraction found using bluer HST filters. Our results show that bar-driven evolution might commence at early cosmic times and that dynamically settled discs are already present at a lookback time of $\sim 11$ Gyrs.
Zoe A. Le Conte, Dimitri A. Gadotti, Leonardo Ferreira, Christopher J. Conselice, Camila de Sá-Freitas, Taehyun Kim, Justus Neumann, Francesca Fragkoudi, E. Athanassoula, Nathan J. Adams
2023-09-18T18:00:04Z
http://arxiv.org/abs/2309.10038v3
# A JWST investigation into the bar fraction at redshifts \(1\leq z\leq 3\) ###### Abstract The presence of a stellar bar in a disc galaxy indicates that the galaxy hosts a dynamically settled disc and that bar-driven processes are taking place in shaping the evolution of the galaxy. Studying the cosmic evolution of the bar fraction in disc galaxies is therefore essential to understand galaxy evolution in general. Previous studies have found, using the Hubble Space Telescope (HST), that the bar fraction significantly declines from the local Universe to redshifts near one. Using the first four pointings from the James Webb Space Telescope (JWST) Cosmic Evolution Early Release Science Survey (CEERS) and the initial public observations for the Public Release Imaging for Extragalactic Research (PRIMER), we extend the studies on the bar fraction in disc galaxies to redshifts \(1\leq z\leq 3\), i.e., for the first time beyond redshift two. We only use galaxies that are also present in the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) on the Extended Groth Strip (EGS) and Ultra Deep Survey (UDS) HST observations. An optimised sample of 768 close-to-face-on galaxies is visually classified to find the fraction of bars in disc galaxies in two redshift bins: \(1\leq z\leq 2\) and \(2<z\leq 3\). The bar fraction decreases from \(\sim 18.9^{+9.7}_{-9.4}\) per cent to \(\sim 6.6^{+7.1}_{-5.9}\) per cent (from the lower to the higher redshift bin), but is \(\sim 3-4\) times greater than the bar fraction found in previous studies using bluer HST filters. Our results show that bar-driven evolution commences at early cosmic times and that dynamically settled discs are already present at a lookback time of \(\sim 11\) Gyrs. keywords: galaxies: bar - galaxies: evolution - galaxies: disc - galaxies: general - galaxies: high-redshift - galaxies: distances and redshifts ## 1 Introduction Stellar bars are one of the most abundant features in local disc galaxies (e.g., Eskridge et al., 2000; Marinova and Jogee, 2007; Aguerri et al., 2009; Buta et al., 2015), providing insight into the internal evolutionary processes taking place in these galaxies. Several investigations using optical surveys in the immediate Universe find strong stellar bars in about a third of disc galaxies (e.g., Barazza et al., 2009; Nair and Abraham, 2010; Masters et al., 2011). This fraction increases to \(60-80\) per cent with the inclusion of weaker bars (e.g., de Vaucouleurs et al., 1991; Menendez-Delmestre et al., 2007; Sheth et al., 2008; Erwin, 2018). Barred stellar structures in disc galaxies are thought to form relatively quickly, over times of the order of a hundred million years, in massive disc galaxies which are dynamically cold and rotationally supported (e.g., Hohl, 1971; Kalnajs, 1972; Ostriker and Peebles, 1973; Sellwood and Wilkinson, 1993). Hence, the formation of stellar bars is an indicator of the evolutionary stage of a galaxy. The bar is a dense central region of evolved stellar populations on highly eccentric orbits (e.g., Weinberg, 1985; Contopoulos and Grosbol, 1989; Athanassoula, 1992; Kormendy and Kennicutt, 2004). The non-axisymmetric nature of stellar bars is due to the very elongated form of the orbits that constitute the bar, which would be the \(x_{1}\) orbital family or one of the higher multiplicity families, both parallel to the semi-major axis of the bar (Contopoulos and Papayannopoulos, 1980; Wang et al., 2022), which have such properties. 
The orbital composition of the bar, coupled with the fact that the bar can be viewed from all possible angles, introduces a range of observed ellipticities. Therefore, the shape of the stellar bar in disc galaxies can appear shorter and more oval or longer and rectangular, thus influencing the bar strength measurement. The torque of the stellar bar redistributes the angular momentum within the galaxy (e.g., Lynden-Bell and Kalnajs, 1972; Athanassoula, 2003; Athanassoula, 2005). This makes bars a primary and efficient driver of internal evolution through the redistribution of baryonic and dark matter (e.g., Menendez-Delmestre et al., 2007; Regan et al., 2006; Di Matteo et al., 2013; Fragkoudi et al., 2018). Bar-driven gas inflow considerably impacts central galactic star formation, most notably in the formation of stellar structures, such as the nuclear disc (e.g., Sanders & Tubbs, 1980; Knapen et al., 1995; Allard et al., 2006; Coelho & Gadotti, 2011; de Lorenzo-Caceres et al., 2012; Bittner et al., 2020; Gadotti et al., 2020). The bar also undergoes buckling processes forming box/peanuts (e.g., Combes & Sanders, 1981; Combes et al., 1990; Ishizuki et al., 1990; Kormendy, 1982; Kormendy & Kennicutt, 2004; Carles et al., 2016). It is currently adopted as to whether the presence of a bar could influence the fueling mechanisms of the active galactic nucleus (AGN) although a consensus is emerging in that bars help building a fuel reservoir near the galactic centre (e.g., Knapen et al., 1995; Alonso et al., 2013; Cisternas et al., 2015; Alonso et al., 2018; Silva-Lima et al., 2022; Garland et al., 2023). Sheth et al. (2005) confirm the result of Sakamoto et al. (1999), namely that the central kiloparsec of barred galaxies contains a higher degree of molecular gas concentrations, however in simulations Fragkoudi et al. (2016) observe a consequential reduction in the gas inflow to the central kiloparsec due to the boxy/peanut bulge associated with the bar. Multiple observational investigations into the abundance of stellar bars in disc galaxies up to \(z\simeq 1\) find a linear decrease in their frequency with increasing redshift. A constant bar fraction out to \(z\sim 1\) in the Galaxy Evolution from Morphologies and SEDs (GEMS) survey was found in Jogee et al. (2004) where three independent techniques were used to identify spiral galaxies and ellipse fits were used to characterise barred galaxies. Abraham et al. (1999) found a decline in the bar fraction from quantitatively estimated bar strengths of galaxies in the Hubble Deep Field-North and -South. Sheth et al. (2003) identified barred galaxies by ellipse fitting techniques for galaxies \(z>0.7\) in the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) Hubble Deep Field-North. Using the 2 deg\({}^{2}\) Cosmic Evolution Survey (COSMOS), Sheth et al. (2008) found a decrease in the bar fraction using cross-checked visual and ellipse fitting bar identification techniques. A decrease by a factor of two at \(z\sim 1\) in the COSMOS bar fraction was found in Melvin et al. (2014) using visual classifications. It has then been inferred from these studies that bar features cease to exist at greater lookback times, implying that bar-driven evolutionary processes do not commence until \(\sim 6\) Gyr after the Big Bang. These studies require high-resolution and sensitive imaging across a large sky area, which the Hubble Space Telescope (HST) has achieved. At \(z\simeq 1.5\), Simmons et al. 
(2014) discover prominent bars in massive disc galaxies and suggests that at \(\sim z>1\), the bar fraction is sustained at \(\sim 10\) per cent. Two observational studies of the evolution of the bar fraction with redshift find no sign of a sharp decline at \(z>0.7\): Elmegreen et al. (2004) find a near constant bar fraction of \(0.23\pm 0.03\) at redshifts up to \(z=1.1\) on a sample of 186 disc galaxies; Jogee et al. (2004) find the optical bar fraction of \(\sim 0.3\pm 0.06\) to remain at redshifts up to \(z\sim 1\). In cosmological simulations, Kraljic et al. (2012) found a depletion in the number of bars in present-day spiral progenitors beyond \(z\simeq 1\), implying a violent phase of galaxy evolution where discs are dynamically hot, and there are excessive merger events. However, Athanassoula et al. (2016) follow the merging of two disc galaxies and found the merger remnant starts forming a bar before the disc is fully developed. Rosas-Guevara et al. (2022) use TNG50 simulations (Nelson et al., 2019) to trace the bar fraction evolution with redshift and show the bar fraction to increase to \(\sim 50\) per cent at \(z\simeq 1\) and only significantly decrease at \(\sim z>2\). Even at \(z\simeq 6\), the simulated bar fraction, at a minimum, reaches \(\sim 25\) per cent. The bar fraction found in the Auriga cosmological zoom-in simulations from Fragkoudi et al. (2020) are in good agreement with observational studies, where for redshifts \(0\leq z\leq 1.5\) the bar fraction decreases from \(\sim 70\) per cent to \(\sim 20\) per cent. Various bar identification techniques can be applied to images, including classifications by eye (e.g., Athanassoula et al., 1990; Cheung et al., 2013; Simmons et al., 2014; Buta et al., 2015). Stellar bar characterisation and analysis identified structural features by eye from the colour composite images of galaxies, where participants vote a galaxy as _barred, candidate bar_ or _unbarred_ (e.g., de Vaucouleurs et al., 1991; Eskridge et al., 2000; Nair & Abraham, 2010; Buta et al., 2015). Characteristic signatures in the radial profiles of barred galaxies can be seen, which can be used to aid or replace visual classification methods. Position angle (PA) and ellipticity (\(e\)) measurements are obtained from isophotal ellipse fits (see SS 3.1 for an explanation), in which the parameter radial profiles are used to identify a bar feature. The criteria for bar identification differs between studies but generally agrees that within the bar-dominated region, the PA remains constant, and \(e\) gradually rises. Studies can choose to determine the end of the bar as one of three metrics or by taking the average of these positions: the peak ellipticity, the minimum ellipticity succeeding the peak ellipticity, and a significant change in the PA (e.g., Wozniak et al., 1995; Buta et al., 1998; Elmegreen et al., 2004; Jogee et al., 2004; Marinova & Jogee, 2007; Guo et al., 2023). In a volume-limited \(z\leq 0.01\) SDSS/DR7 sample with galaxies \(M_{r}\leq -15.2\), Lee et al. (2019) found visual classification methods to detect a higher number of weaker bars than ellipse fitting techniques and Fourier analysis. Additionally, in their study, they concluded that ellipse fitting techniques could miss \(\sim 15\%\) of visually classified bars due to large bulges in early-type spirals. Using a deep convolutional neural network, Abraham et al. (2018) identified bars in SDSS with good accuracy. 
Surveys are now on remarkably large scales, so automated techniques such as machine learning (e.g., Cheng et al., 2021) will become vital for morphological classifications. The James Webb Space Telescope (JWST) has provided the opportunity to expand the investigation of the bar fraction to higher redshifts. Imaging from the Near Infrared Camera (NIRCam) probes the rest-frame near-infrared (NIR) emission of galaxies at redshifts up to 3 and probes the rest-frame optical at redshifts up to 7; NIR emission traces the older stellar populations which dominate bar features and are also less affected by dust extinction and new star formation. (e.g., Frogel et al., 1996; Schneider, 2006). In fact, the NIR bar fraction at \(z\simeq 0\) is higher than the optical bar fraction (e.g., Marinova & Jogee, 2007), and Buta et al. (2015) argues that this is due to stellar structural features being more perceptible. Thus, weaker bars in the optical become stronger in the NIR, so a higher bar fraction is observed. In addition, the primary mirror on JWST is over 2.5 times the diameter size of the HST primary mirror, meaning that the sensitivity of JWST is significantly improved. The improved sensitivity, along with the longer rest-frame wavelengths probed by JWST, means elongated bar structures become more discernible than in their counterpart HST images (e.g., Huertas-Company et al., 2023). For this reason, we can now propel our understanding of bar-driven evolution with JWST by searching for the epoch when stellar barred structures form in disc galaxies. A previous study of stellar bars at \(\sim z>1\) using the Cosmic Evolution Early Release Science Survey (CEERS) was conducted by Guo et al. (2023) who quantitatively identified six strongly barred galaxies at \(z\sim 1-3\), with the highest redshift galaxy at \(z\sim 2.3\). In this study, we use the initial four NIRCam JWST observations from CEERS to find the evolution of the bar fraction at redshifts between \(z=1-3\). To this aim, we visually classify a mass-complete sample of these high-resolution rest-frame NIR images for barred features in disc galaxies. This paper is outlined as follows: in SS 2, we explain the NIRCam image reduction pipeline and our sample selection. Stellar bar identification techniques and our methodology for visual classifications are discussed in SS 3. In SS 4, we present the bar fraction for two redshift bins, \(z=1-2\) and \(z=2-3\), and SS 5 discuss the implications of our findings on when bar-driven evolution commences and, finally, SS 6 summarises this study. Throughout this study, we assume the latest Planck flat \(\Lambda\)CDM cosmology with H\({}_{0}\) = 67.36, \(\Omega_{m}\) = 0.3153, and \(\Omega_{\Lambda}\) = 0.6847 (Planck Collaboration et al., 2020). ## 2 The Parent Sample To define our sample, we use the initial four public NIRCam JWST observations from the Cosmic Evolution Early Release Science Survey (CEERS; PI: Filkelstein, ID=1345, Finkelstein et al., 2023, CEERS1, CEERS2, CEERS3 and CEERS6) taken in June 2022 that overlap with the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS; Grogin et al., 2011; Koekemoer et al., 2011) on the Extended Groth Strip field (EGS), as well as the initial public observations for the Public Release Imaging for Extragalactic Research (PRIMER; PI: Dunlop, ID=1837, Dunlop et al., 2021), that overlap with the CANDELS Ultra Deep Survey (UDS) Field observations. Together, the data covers \(\sim~{}30\) arcmin2 of an area with CANDELS HST overlap. 
The raw data was reduced independently using a custom set-up of the JWST pipeline as described in SS 2.1. Our sample selection based on HST CANDELS catalogues is provided in SS 2.2. Footnote 1: [https://github.com/chriswillott/jwst](https://github.com/chriswillott/jwst) Footnote 2: [https://github.com/spacetelescope/drizzlepac](https://github.com/spacetelescope/drizzlepac) ### Data Reduction Pipeline We reprocess all of the uncalibrated lower-level JWST data products following a modified version of the JWST official pipeline. This is similar to the process used in Adams et al. (2023) and exactly the same reductions as used in Ferreira et al. (2022), which can be summarised as follows: (1) We use version 1.6.2 of the pipeline with the Calibration Reference Data System (CRDS) version 0942 which was the most up-to-date version at the time these data products were generated. Use of CRDS 0942 is essential for zero point issues as discussed in Adams et al. (2023). (2) We apply the 1/f noise correction derived by Chris Willott on the resulting level 2 data of the JWST pipeline.1 (3) We extract the sky subtraction step from stage 3 of the pipeline and run it independently on each NIRCam frame, allowing for quicker assessment of the background subtraction performance and fine-tuning. (4) We align calibrated imaging for each individual exposure to GAIA using tweakreg, part of the DrizzlePac python package.2 (5) We pixel-match the final mosaics with the use of astropy reproject. The final resolution of the drizzled images is 0.03 arcseconds/pixel. Footnote 1: [https://github.com/chriswillott/jwst](https://github.com/chriswillott/jwst) Footnote 2: [https://github.com/spacetelescope/drizzlepac](https://github.com/spacetelescope/drizzlepac) Furthermore, an additional step was added for the PRIMER reductions in step (2) above due to the presence of a significant stripping pattern artefact at a 45-degree angle in the NIRCam footprint, resembling the diffraction pattern of a bright star outside the field of view of the camera. This issue was removed with an adaptation of the 1/f noise algorithm, first rotating the observations to 45 degrees to align the pattern with one of the axes, followed by a background subtraction for each row based on the background mean of that row. Finally, the adjusted file is rotated back to its original orientation. This drastically reduces the artefact in the final products, although some are still visible in colour composites due to the non-uniform nature of the artefact across different NIRCam filters. Galaxy stamps that present these residual artefacts are flagged during subsequent classification as described in SS 3.3. Each one of the four June CEERS observations was processed into individual mosaics, while the PRIMER UDS observations were stacked in a single mosaic due to the large overlapping area. A description of the sample selection is given in SS 2.2. ### Sample Selection As a way to produce a selection with robust photometric redshifts and stellar masses, we use the CANDELS-based catalogues produced by Duncan et al. (2019) that include observations from HST, Spitzer, and other ground-based facilities. These redshifts are robustly calibrated from spectroscopic redshifts, with an average outlier fraction of \(\frac{|\Delta z|}{1+z_{\mathrm{spec}}}\sim 5\%\) (see Duncan et al., 2019 for details). From these catalogues, we select all sources that lie within the footprint of the CEERS and PRIMER observations outlined previously. 
All sources with photometric redshifts and stellar masses that are present in both CANDELS and the new JWST observations are selected. Additionally, no magnitude or signal-to-noise cut is done to mitigate any selection bias due to different sensitivities between HST and JWST, which prevents JWST bright galaxies from being excluded if they are faint in HST bands. Then, all overlapping sources between \(1\leq z\leq 6\) are selected, resulting in a total sample of 7111 galaxies present within the combined area of CEERS+PRIMER, including 3956 galaxies with visual Hubble type classifications from Ferreira et al. (2022) at \(z>1.5\). For each of the 7111 galaxies in the sample, we produce 30 mas 128x128 pixel cutouts from the HST ACS CANDELS filters available, namely F606W and F814W, as well as for the JWST filters, namely F115W, F150W, F200W, F277W, F356W and F444W. For the HST Wide Field Camera 3 filters, namely F125W and F160W, we produce 60 mas 64x64 pixel cutouts covering a consistent angular field of view, enabling us to probe the same galaxies in a similar wavelength regime between the two instruments. However, note that the resolution varies between the JWST F150W filter and the HST WFC3 F160W filter, for example. In this study, we select the F444W JWST filter and compare these galaxies to their HST WFC3 filter F160W. To observe the evolution of the bar fraction, the sample is cut to sources between the redshifts \(1\leq z\leq 3\), resulting in a parent sample of 5218 galaxies in the JWST F444W filter and 5445 galaxies in the HST F160W filter. We note that some galaxies fall in the gaps between the NIRCam detectors, and therefore, while they are included in the HST sample, they cannot be analysed with JWST data.

## 3 Bar Identification

The random orientation of galaxies challenges observational attempts at bar measurement. Stellar bars are distinguishable in near-face-on galaxies and become less defined in high-inclination galaxies. This study aims to determine the fraction of disc galaxies that harbour a bar in an optimised sample of F444W NIRCam and F160W WFC3 images. In SS 3.1, the optimisation of the parent sample is explained, while the search for disc galaxies and the derivation of the fraction of disc galaxies in the sample is given in SS 3.2. Finally, in SS 3.3, the bar classification method used on the optimised sample is discussed. For our bar identification process, we use visual inspection of galaxy images as well as of PA and \(e\) radial profiles.

### Sample Optimisation

Considering the challenges involved in the identification of bars, as noted above, we choose to remove highly inclined and overly faint or poorly resolved galaxies from the sample through an automated process. To do so, we fit ellipses to the isophotal contours of all galaxies in the parent sample to extract radial profiles of \(e\) and PA (see, e.g., Gadotti et al., 2007; Barazza et al., 2008, 2009; Aguerri et al., 2009). Figure 1 shows in the left panel the ellipse fits superposed on the F444W NIRCam image of the galaxy EGS_23205, and in the right panel, radial profiles of the \(e\) and PA of the fitted ellipses. EGS_23205 is an example of a barred galaxy in this study and is observed relatively face-on.
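As an illustration of this ellipse-fitting step, the sketch below uses photutils.isophote on a single cutout to extract the ellipticity and position-angle radial profiles shown in Fig. 1, and estimates the inclination from the outermost isophote (the file name and initial-guess geometry are placeholders, not the values used by the authors).

```python
import numpy as np
from astropy.io import fits
from photutils.isophote import Ellipse, EllipseGeometry

# Galaxy cutout to fit (illustrative file name)
data = fits.getdata("example_F444W_cutout.fits")

# Initial guess: centre, semi-major axis (pixels), ellipticity, PA (radians)
ny, nx = data.shape
geometry = EllipseGeometry(x0=nx / 2, y0=ny / 2, sma=10.0, eps=0.2, pa=0.0)

# Fit elliptical isophotes of increasing semi-major axis
isolist = Ellipse(data, geometry).fit_image()

# Radial profiles of ellipticity and position angle, as in Fig. 1
sma = isolist.sma                         # semi-major axis of each isophote
ellipticity = isolist.eps                 # e = 1 - b/a
position_angle = np.degrees(isolist.pa)   # PA in degrees

# Inclination from the outermost fitted isophote, cos i = b/a
inclination = np.degrees(np.arccos(1.0 - ellipticity[-1]))
print(f"outermost e = {ellipticity[-1]:.2f}, inclination = {inclination:.1f} deg")
```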
Before visually classifying galaxies as barred or unbarred, we apply a three-step procedure to obtain our final, optimised galaxy sample containing galaxies in which a bar can be identified robustly: **(1)** ellipse-fitting to NIRCam images without fixing the centre; **(2)** second ellipse-fitting with fixed centres; **(3)** removal of highly inclined galaxies. In the following, we give a detailed explanation of these three steps: **Phase 1** Elliptical isophotes are fitted to F444W NIRCam images of the JWST galaxy sample and analysed using photutils.isophote from Python's astropy package (Bradley et al., 2022). This package uses an iterative method to measure the isophotes (Jedrzejewski, 1987). Approximately 30% of the parent sample had successful ellipse fits in the F444W filter. The remaining \(\sim\) 70% of galaxies that failed ellipse fittings are poorly resolved and/or low surface brightness systems and are removed from the sample. **Phase 2** The ellipses fitted in the previous step do not have a specified centre, which may prevent the correct identification of highly inclined galaxies. We thus take the inner 10 to 40 per cent of the isophotes fitted to the galaxy in the first step and take the average position of the centre of these isophotes as the galaxy centre. The choice for this range of radii ensures that one has enough pixels to compute a statistically robust position of the galaxy centre and simultaneously avoids strongly asymmetric structures, which are often at larger radii. We then re-run photutils.isophote on F444W NIRCam galaxy images with fixed specified central positions. With fixed centres, the ellipse fits failed for approximately 8% of the parent sample. We verified that the failed ellipse fits correspond to galaxies with overly irregular morphology or images of point sources conspicuously showing the JWST point spread function (PSF). These systems are also removed from the sample. **Phase 3** An inclination limit of \(i\leq 60^{\circ}\) is applied to remove highly inclined galaxies as it is difficult to identify if a bar is present in these cases. We define the inclination of a galaxy by measuring the ellipticity of the outermost fitted ellipse, \[e=1-\frac{b}{a}, \tag{1}\] where \(b\) is the minor axis length and \(a\) is the major axis length. The inclination is defined as \[\cos i=\frac{b}{a}. \tag{2}\] Approximately 8% of the galaxies in the F444W filter of the parent sample were seen to be too highly inclined and were removed from the sample. We applied this three-step optimisation procedure to our initial large CEERS F444W galaxy sample, ensuring elliptical isophotes could be fitted to the galaxy image with an identified galactic centre and that the galaxy was not highly inclined. The resultant optimised sample of galaxies suitable to our analysis is 768 CEERS images in the NIRCam F444W filter (hereafter referred to as the optimised JWST sample). Of the optimised galaxy sample, 393 galaxies are between the redshifts \(1\leq z\leq 2\), and 376 galaxies are between the redshifts \(2<z\leq 3\). Before visual classifications, a co-author (DG) visually verified that all removed objects were indeed poorly resolved, overly faint/irregular, or too inclined. Table 1 gives the number of galaxies removed at each phase and the resultant galaxy sample size. To measure the difference in the bar fraction between JWST and HST, we also applied the three-step optimisation procedure in our HST CANDELS F160W galaxy sample. 
This reduced our HST CANDELS sample to an optimised sample of 115 galaxies (hereafter referred to as the optimised HST sample). The poorer sensitivity and wavelength range of HST means many of the galaxies are very pixelated, and features are difficult to discern. Therefore, the ellipse fitting technique failed on many of these galaxies, greatly reducing the optimised HST sample size. ### Disc identification The fraction of disc galaxies in the optimised JWST sample (\(f_{D}\)) must be calculated to derive the fraction of bars in disc galaxies. Disc galaxies have a distinct exponential radial profile, assisting morphological classifications. Two co-authors (ZLC and DG) visually classified the optimised JWST sample. The participants voted the galaxy to be a disc or non-disc based on the F444W NIRCam images and a log intensity radial profile. In principle, artefacts (discussed in SS 2.1) could mislead visual classifications, but these PSF effects are clearly distinguishable. The diffraction spikes mostly appeared as a large hexagon over the galaxy image, so the galaxy is not elongated in one direction preferentially. Therefore, we typically class these as non-discs/unidentifiable. To ensure we were not affected by less prominent artefacts, we checked for effects in the intensity radial profile of each galaxy. A higher disc fraction is thought to be found in JWST compared to HST due to the improved sensitivity and wavelength range. Figure 2 shows three examples of disc and non-disc galaxies in rest-frame JWST NIRCam F444W and HST WFC3 F160W filters. Non-disc galaxies can include strong PSF-affected sources, as shown by the central source in the figure. The average disc fraction of the optimised JWST sample is \(f_{D}=0.40\pm 0.14\) for the redshift range \(1\leq z\leq 3\). The disc fraction derived in this study agrees with the disc fraction found by LF and collaborators (Ferreira et al., 2022), where six independent participants visually classified 3956 CEERS sources in their rest-frame NIR images, using the fil \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Phase} & \multicolumn{2}{c}{HST} & \multicolumn{2}{c}{JWST} \\ \cline{2-5} & \(N_{\rm gal}\) removed & \(N_{\rm gal}\) remain & \(N_{\rm gal}\) removed & \(N_{\rm gal}\) remain \\ \hline 1 & 4980 & 465 & 3635 & 1583 \\ 2 & 230 & 235 & 416 & 1167 \\ 3 & 102 & 133 & 399 & 768 \\ \hline \hline \end{tabular} \end{table} Table 1: Number of galaxies removed from the sample and the resultant sample size after each optimisation phase. Col. 1: the optimisation phase. Col. 2 and Col. 3 are in the context of HST CANDELS F160W images. Col. 2: the number of galaxies which failed to meet the phase criteria. Col. 3: the sample size after the criteria are applied, with phase 1 being applied to the parent sample. Col. 4 and Col. 5 are the same as Col. 2 and Col. 3 but in the context of JWST CEERS F444W images. ters F277W, F356W and F444W for the redshifts \(z=1.5-6.5\), which contained 1672 discs, i.e., \(f_{D}\sim 0.42\). Using JWST, Nelson et al. (2023) found that massive, dusty edge-on discs could have been missed as HST-dark galaxies. The systematic error on our \(f_{D}\), \(\pm 0.14\), is the range of \(f_{D}\) found by the different participants. In SS 4, the disc fraction is determined for the redshift bins selected, and the fraction of bars in disc galaxies is thus calculated. For the two redshift bins between \(1\leq z\leq 2\) and \(2<z\leq 3\), 181 and 176 discs were identified, respectively. 
### Visual classifications

The optimised JWST sample was then visually classified by five co-authors (ZLC, DG, CdSF, TK and JN). The participants were asked to vote _barred, maybe-barred_ or _unbarred_ on the F444W NIRCam images. The votes were tallied, and a galaxy was classified as follows: a galaxy is classified as strongly barred if it obtained at least three out of five votes for barred; a galaxy is classified as weakly barred if it obtained two out of five votes for barred or at least three out of five votes for maybe-barred; a galaxy is classified as unbarred if it did not reach either of these vote thresholds. Figure 3 is a histogram of the number of barred and maybe-barred votes the co-authors gave to each galaxy in the optimised JWST sample. The figure does not show the galaxies for which the co-authors agreed that the galaxy is unbarred. The figure illustrates the difficulty of identifying bars, as 75% of the galaxies shown here are below the vote threshold and hence classified as unbarred. The visual classification method was repeated for the optimised JWST sample in the NIRCam F356W filter. The resolution marginally improves at this shorter wavelength, so structural features are better defined. However, this wavelength can be more subject to dust extinction and star formation effects, so the evolved stellar populations that dominate the bar are only moderately well traced. The overall bar fraction did not change between the two NIRCam filters, but a few weaker bars became stronger. Finally, the visual classification method was repeated again on the optimised HST sample. The galaxy EGS_31125 is shown in Figure 4 in the three different filters employed: HST WFC3 F160W and JWST NIRCam F356W and F444W. EGS_31125 is classified as strongly barred in the F444W filter and as unbarred in the F160W filter. This figure clearly shows the impact of improved sensitivity and longer wavelength with JWST on the galaxy at redshift \(z\simeq 2.06\) and how distinctive the disc structures become (see also Guo et al. 2023). On the other hand, it is interesting to note that 15 galaxies have been classified as barred in the HST sample but did not receive any vote when classified using the JWST data. These images were inspected again, and while these galaxies could indeed be barred, we note that in some cases, effects from the JWST PSF impact the classification. In other instances, details in the structure of the galaxy are better discerned in the JWST images, rendering the impression of a bar rather uncertain. We show examples of these galaxies in Figure 11 (Appendix B).

Figure 1: Elliptical isophotal fits using the module photutils.isophote from Python’s astropy package (Bradley et al. 2022) to logarithmic F444W NIRCam images of the galaxy EGS_23205 at redshift \(z\sim 2.12\). The left-hand side shows the F444W image annotated with the pixel coordinates (top) and superposed elliptical isophotal fits (bottom). The right-hand side shows radial profiles of the ellipticity (\(e\)) (top) and position angle (PA) in degrees (bottom) as derived from the ellipse fits.

## 4 The bar fraction

We aim to determine the fraction of the disc galaxy population at redshifts \(z=1-3\) hosting a bar. We visually classified the optimised JWST sample, which met the criteria in § 3.1. The process is repeated for the optimised HST sample to explore whether an increase in the bar fraction is found using JWST. Galaxies were classified as described in § 3.3.
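The vote thresholds of § 3.3 can be summarised by a small helper function; this is only an illustration of the rule stated above, with hypothetical function and variable names.

```python
# A small helper encoding the vote thresholds described above (five classifiers
# voting barred / maybe-barred / unbarred per galaxy); purely illustrative.
def classify_bar(n_barred: int, n_maybe: int, n_voters: int = 5) -> str:
    """Return 'strongly barred', 'weakly barred' or 'unbarred'."""
    assert n_barred + n_maybe <= n_voters
    if n_barred >= 3:
        return "strongly barred"
    if n_barred == 2 or n_maybe >= 3:
        return "weakly barred"
    return "unbarred"

print(classify_bar(3, 1))  # strongly barred
print(classify_bar(1, 3))  # weakly barred
print(classify_bar(1, 1))  # unbarred
```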
Figure 5 shows three examples of strongly barred, weakly barred and unbarred galaxies in the JWST NIRCam F444W and HST WFC3 F160W filters. The strongly barred galaxies have distinct stellar structures, while some weakly barred galaxies have less prominent outer discs. The bar fraction is computed for two redshift bins, \(1\leq z\leq 2\) and \(2<z\leq 3\), to observe its evolution. The redshift range was divided into only two bins, as the number of barred galaxies is relatively small. In the optimised JWST sample, 29 galaxies were identified as barred in the lower redshift bin, where eight are strongly barred, and 21 are weakly barred, which decreased to ten barred galaxies in the higher redshift bin, where five are strongly barred, and five are weakly barred. All galaxies classified as barred are shown in Appendix A: Figure 11 shows the strongly barred galaxies, while Figure 12 shows the weakly barred galaxies. In this study, the fraction of barred structures in disc galaxies in the redshift ranges \(1\leq z\leq 2\) and \(2<z\leq 3\) is \(\approx 18.9^{+3.6}_{-2.6}\) per cent and \(\approx 6.6^{+0.5}_{-0.3}\) per cent, respectively. In the optimised HST sample, nine galaxies were identified as weakly or strongly barred in the lower redshift bin, giving a bar fraction of \(\approx 5.9^{+0.5}_{-0.2}\) per cent, and only one weakly barred galaxy was identified in the higher redshift bin, giving a bar fraction of \(\approx 0.7^{+0.001}_{-2.7\times 10^{-5}}\) per cent. The uncertainties quoted are the \(1\sigma\) Bayesian binomial confidence intervals (Cameron, 2011), and their derivation is explained below.

Figure 3: Distribution of the total number of barred in yellow (purple) and maybe-barred in pink (blue) votes cast by the five participating co-authors to candidate galaxies in the optimised JWST (HST) sample. A grey shaded area covers the galaxies which are classified as weakly or strongly barred. We exclude from this figure galaxies that received zero barred or maybe-barred votes.

Figure 2: Rest-frame NIR logarithmic images of three disc (top row) and non-disc (bottom row) galaxies. The three exemplars for each classification, with IDs in the lower right of the NIRCam F444W image, are shown in the JWST NIRCam F444W (left) and HST WFC3 F160W (right).

Table 2 shows the progression of the JWST and HST galaxy samples after the different selection and classification criteria are applied. Figure 6 shows the visually classified bar fraction versus redshift and lookback time in the context of other observational work assessing strong bar fractions using HST. The figure shows that previous results based on HST data indicate a decline in the bar fraction from lower to higher redshifts. While the JWST bar fraction also decreases from the redshift bin \(1\leq z\leq 2\) to the redshift bin \(2<z\leq 3\), the JWST bar fraction in the lower redshift bin is more than three times the HST bar fraction in the same redshift bin. Therefore, our results clearly show that the bar fraction is significantly higher than what could be found with HST data. A dashed line indicates the redshift range of our visually identified barred galaxies, and a thick solid line indicates the distribution quartiles, i.e. 25%-75%. We identify the highest redshift strongly barred galaxy as EGS_24268 at \(z\simeq 2.32\) (also found in Guo et al., 2023) and the highest redshift weakly barred galaxy as EGS_22729 at \(z\simeq 2.82\).
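The \(1\sigma\) Bayesian binomial confidence intervals of Cameron (2011) quoted above are obtained from quantiles of a Beta posterior with a uniform prior; a minimal sketch is given below, where the counts passed in are purely illustrative and not the numbers entering Table 2.

```python
# A minimal sketch of the 1-sigma Bayesian binomial confidence interval of
# Cameron (2011): with a uniform Beta(1,1) prior, the posterior for the bar
# fraction given k barred galaxies out of n discs is Beta(k+1, n-k+1), and
# the interval is read off from its quantiles.
from scipy.stats import beta

def bar_fraction_interval(k, n, c=0.683):
    """Return (point estimate, lower bound, upper bound) of the bar fraction."""
    p_hat = k / n
    lower = beta.ppf((1.0 - c) / 2.0, k + 1, n - k + 1)
    upper = beta.ppf(1.0 - (1.0 - c) / 2.0, k + 1, n - k + 1)
    return p_hat, lower, upper

# Illustrative counts only (not the values used in this study):
p, lo, hi = bar_fraction_interval(9, 120)
print(f"f_bar = {100*p:.1f} +{100*(hi-p):.1f} / -{100*(p-lo):.1f} per cent")
```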
The higher redshift bin HST bar fraction is close to zero, and thus, a Bayesian approach is used to determine the statistical uncertainty in the computed bar fractions. Considering the fraction of a large population with a given attribute (i.e., bars) and neither close to 0 nor 1, the Normal approximation can be assumed to derive uncertainties, but for small sample sizes and extreme population proportion values (e.g., the HST bar fraction at \(2<z\leq 3\)), Cameron (2011) convincingly argues for a Bayesian approach to binomial confidence intervals. We adopt this method to estimate the full 68 per cent confidence intervals described above of the bar fraction in the two redshift bins. The sample used in this study is approximately mass complete (see below), meaning we do not account for incomplete sampling in the uncertainty estimates. On the other hand, the more important systematic errors in our analysis stem from the difficulty of defining a galaxy as a disc or barred galaxy. The fraction of disc galaxies was estimated by taking the average of the fractions determined above. The range of these two independent fractions is interpreted as the main systematic error in the bar fractions. For \(1\leq z\leq 2\) and \(2<z\leq 3\), the difference between the participants' disc fractions is 9.0 and 5.1 per cent, respectively. Hence, the sum in quadrature of the systematic \begin{table} \begin{tabular}{c c c c c} \hline Sample & Redshift & \(N_{\rm{gal,HST}}\) & \(N_{\rm{gal,JWST}}\) & Criteria applied \\ \hline Total sample & \(1\leq z\leq 6\) & 7111 & 7111 & N/A \\ Parent sample & \(1\leq z\leq 3\) & 5445 & 5218 & Redshift \\ \hline \multirow{2}{*}{Optimised sample} & \(1\leq z\leq 3\) & 133 & 768 & \\ & \(1\leq z\leq 2\) & 108 & 393 & ellipse fitting, \(i\leq 60^{\circ}\) \\ & \(2<z\leq 3\) & 25 & 376 & \\ \hline \multirow{2}{*}{Disc sample} & \(1\leq z\leq 3\) & 357 & \multirow{2}{*}{Visually classified discs} \\ & \(1\leq z\leq 2\) & N/A & 181 & \\ & \(2<z\leq 3\) & 176 & \\ \hline \multirow{2}{*}{Weakly Barred} & \(1\leq z\leq 3\) & 9 & 26 & \\ & \(1\leq z\leq 2\) & 8 & 21 & Visually classified bars \\ & \(2<z\leq 3\) & 1 & 5 & \\ \hline \multirow{2}{*}{Strongly Barred} & \(1\leq z\leq 3\) & 1 & 13 & \\ & \(1\leq z\leq 2\) & 1 & 8 & Visually classified bars \\ & \(2<z\leq 3\) & 0 & 5 & \\ \hline \end{tabular} \end{table} Table 2: Progression of the galaxy samples after the different selection and classification criteria are applied. Col. (1): the sample label. Col. (2): the redshift range. Col. (3): the number of galaxies after applying the criteria to HST CANDELS F160W images. Col. (4): the number of galaxies after applying the criteria to JWST CEERS F444W images. Col. (5): the criteria applied. Figure 4: The logarithmic image of galaxy EGS_31125 at redshift \(z\simeq 2.06\), visually classified as strongly barred from the JWST NIRCam F444W image, shown in an HST filter and two JWST filters. From left to right: HST WFC3 F160W and JWST NIRCam F356W and F444W. This filter comparison demonstrates the effects of PSF, sensitivity and wavelength range on a galaxy image, particularly in the context of bars. The image shows EGS_31125 in rest frame 0.52, 1.16, and 1.45 \(\mu\)m, respectively. and statistical errors of the JWST bar fractions are \(\approx 18.9^{+9.7}_{-9.4}\) per cent and \(\approx 6.6^{+7.1}_{-5.9}\) per cent. In Figure 7, we show the distribution of stellar mass as a function of redshift for all disc galaxies in the optimised sample. 
The disc galaxies are taken from the classification of one of the participants in the disc classification procedure (ZLC). Still, we verified that qualitatively similar results are found regardless of the classifier. The 95% empirical completeness limit of the sample, as estimated in Duncan et al. (2019), is indicated, showing that most of our sample is above or close to the completeness limit. Interestingly, this figure shows that barred galaxies tend to avoid the least massive galaxies at each redshift, in line with the results from Sheth et al. (2008).

## 5 Discussion

Using NIRCam F444W images, corresponding to the NIR rest-frame at \(1\leq z\leq 3\), the visually identified fraction of disc galaxies hosting a bar at redshifts \(z=1-2\) is \(\sim 20\) per cent, which decreases to \(\sim 10\) per cent at redshifts \(z=2-3\). We found the bar fraction obtained from the F444W JWST images to be about three to four times greater than the bar fraction obtained using F160W HST images, as shown in Figure 6. In fact, the value of the bar fraction at \(z=1-2\) that we derive using HST images matches perfectly the estimate from Simmons et al. (2014, see their Fig. 6), who also use HST data for their estimates. This begs the question: why do we find more bars in JWST than in HST images? Considering that the parent sample was chosen to contain sources present in both HST CANDELS and JWST CEERS and that the same bar-detection method was applied to both the JWST F444W and HST F160W images, the considerable difference between the JWST and HST bar fractions at each redshift bin implies that the identification of bars in disc galaxies is dependent on the sensitivity and wavelength range of the instrument; the bar fraction increases at longer wavelengths and with improved sensitivity. Defining the resolution of an instrument as the full-width half maximum (FWHM) of the empirical point spread function (PSF), the resolution of HST at 1.6 \(\mu\)m is \(0.151^{\prime\prime}\) (footnote 3) and the resolution of JWST at 4.44 \(\mu\)m is \(0.145^{\prime\prime}\) (footnote 4), and therefore, our results do not give insight into the bar fraction dependence on resolution.

Footnote 3: PSF FWHM taken from the HST user documentation: https://hst-docs.stsci.edu/wfc3ihb/chapter-7-ir-imaging-with-wfc3/7-6-i-optical-performance

Our results build upon the previous studies, which find that bar-driven internal evolutionary processes for settled disc populations begin at \(z\simeq 1\), whereas our new results suggest this to be \(z\simeq 2\). Additionally, our study finds that a sizable population of barred galaxies exists at \(z\leq 3\), implying that massive disc galaxies can become dynamically settled with prominent bars at a lookback time of \(\sim 11\) Gyrs. The idea that bar-driven galaxy evolution happens at \(z>2\) is generally consistent with the early bar formation epochs estimated for local galaxies in the Time Inference with MUSE in Extragalactic Rings (TIMER) project (Gadotti et al., 2019). For NGC 4371, it has been estimated that the bar formation happened at \(z\approx 2\) (Gadotti et al., 2015), while for NGC 1433, this happened at \(z\approx 1\) (de Sa-Freitas et al., 2023). Nonetheless, it is important to point out that not necessarily all barred galaxies observed at \(2<z\leq 3\) will remain a barred disc galaxy down to \(z\approx 0\), as the galaxies in the TIMER sample do: late violent mergers may destroy the bar, as well as the disc altogether.
Figure 5: Rest-frame NIR logarithmic images of three strongly barred (top row), weakly barred (middle row) and unbarred (bottom row) galaxies. The three exemplars for each classification, with IDs in the lower right of the NIRCam F444W image, are shown in the JWST NIRCam F444W (left) and HST WFC3 F160W (right). In a recent study conducted by Guo et al. (2023), six strongly barred galaxies were identified at \(z>1\) using rest-frame NIR images from the first four pointings of CEERS. The six observed galaxies have a range in redshift from \(z\approx 1.1\) to \(z\approx 2.3\), using photometric redshifts (see Guo et al., 2023; Stefanon et al., 2017). In a cross-check, we find that all barred galaxies identified by Guo et al. were also classified by us as barred. Several previous studies have found a decline in the fraction of bars in disc and spiral galaxies with redshift, however mass- and volume-limits vary between the studies, along with the bar classification method. Sheth et al. (2008) observe the evolution of the bar fraction at redshifts \(0.2<z<0.84\) from luminous (brighter than \(L_{\nu}^{\star}\)) face-on spiral galaxies in the COSMOS 2 deg\({}^{2}\) field. The classification methods used in Sheth et al. are ellipse-fitting and visual, which are cross-checked, and an agreement of 85% is found. Masters et al. (2011) found the bar fraction of a volume-limited visually selected SDSS sample using Galaxy Zoo at redshifts \(0.01<z<0.06\) and \(M_{r}<-19.38\). Melvin et al. (2014) use visually selected galaxies via Galaxy Zoo from COSMOS HST images at redshifts \(0.4\leq z\leq 1.0\) with an applied stellar mass limit of log(M\({}_{\star}\)/M\({}_{\odot}\)) \(\geq 10\). The bar fraction was extended to redshifts \(0.5\leq z\leq 2.0\) in Simmons et al. (2014) through the visually selected CANDELS galaxies via Galaxy Zoo with an absolute \(H-\)band magnitude limit of \(H<25.5\). With the work of Simmons et al. overlapping with the lower redshift bin of our study and using visually identified CANDELS galaxies, we found that our results are in full agreement. Although many studies have found a decrease in the bar fraction at \(z=0-1\), some find little or no evolution of the bar fraction. Jogee et al. (2004) identified bars in spiral galaxies using three independent techniques and found the fraction of bars to be \(\sim 30\pm 6\) per cent in COSMOS-ACS galaxies at redshifts \(z\sim 0.2-0.7\) and \(z\sim 0.7-1.0\), with completeness cuts of \(M_{V}\leq-19.3\) and -20.6, respectively. Elmegreen et al. (2004) also found a constant bar fraction of \(\sim 23\pm 3\) per cent at redshifts \(z\sim 0.0-1.0\) in COSMOS-ACS galaxies. A direct comparison between the results from these various studies is difficult to accomplish given the different techniques employed to identify bars and the different sample selection criteria. In particular, Erwin (2018) shows that in the local Universe the bar fraction depends strongly on galaxy mass, with a peak at M\({}_{\star}\sim 10^{9.7}\)M\({}_{\odot}\), declining towards both higher and lower masses. At redshifts \(0.2\leq z\leq 0.6\) for a mass complete sample of M \(>10^{10.5}\)M\({}_{\odot}\) galaxies in the COSMOS field, Cameron et al. (2010) found the bar fraction of early-type discs with intermediate stellar masses to be twice that of late-type discs, and is reversed for high stellar masses. 
In this context, it is important to highlight that our sample probes the galaxy population with masses Figure 6: Evolution of the fraction of stellar bars in disc galaxies with redshift in the context of other bar assessment work using HST. The fractions of barred disc galaxies found in JWST NIRCam images are shown as green squares, and the fractions of barred disc galaxies found in this study in HST WFC3 images are shown as purple squares. The bar fraction was found for two redshift bins, \(1\leq z\leq 2\) and \(2<z\leq 3\), where the marker indicates the median redshift of the barred galaxies. All bar fraction errors indicate the sum in quadrature of the systematic and \(1\sigma\) Bayesian binomial confidence interval (Cameron, 2011, statistical error only in dark colours). A dashed line indicates the redshift range of barred galaxies. A thick solid line indicates the redshift range of the quartiles 25%-75% of the distribution of barred galaxies. At low redshifts, de Vaucouleurs et al. (1991, down-pointing triangle) and Masters et al. (2011, circle) found strong bars in a third of disc galaxies, while Eskridge et al. (2000, cross) found strong and weak bars in over two-thirds of disc galaxies. Simmons et al. (2014, left-pointing triangles), Sheth et al. (2008, diamonds) and Melvin et al. (2014, up-pointing triangles) found a decreasing trend of the bar fraction for higher redshifts. Jogee et al. (2004, right-pointing triangles) found a minimal decline in the bar fraction at higher redshifts. Finally, the bar fractions, as found in the Auriga cosmological simulations in Fragkoudi et al. (2020, ees) are shown in black. above \(\approx 10^{9}\)M\({}_{\odot}\), which at redshift zero may reflect the peak in the bar fraction distribution. Considering all barred galaxies we find in our study, their mean stellar mass is M\({}_{\bullet}\sim 1.2\times 10^{10}\)M\({}_{\odot}\), with a standard deviation of \(\sim 5.8\times 10^{10}\)M\({}_{\odot}\) Using the magnetic-hydrodynamical cosmological simulation TNG50 (Nelson et al., 2019), Rossa-Guevara et al. (2022) found that M\({}_{\bullet}\geq 10^{10}\)M\({}_{\odot}\) spiral galaxies with bar formation are present as early as \(z=4\). When an angular resolution limit of twice the HST \(I\)-band angular PSF FWHM was applied, the fraction of bars dropped to a tenth of its original value at \(z=2\), reconciling theoretical predictions and observations. Some of the previous observational studies discussed above suggest that the decrease in the bar fraction in massive disc galaxies out to \(z\sim 1\) could be due to minor merger events that keep the disc dynamically hot. However, depending on the details of the merger/flyby interaction, this could, in fact, tidally induce bar formation (e.g., Berentzen et al., 2003; Peschken and Lokas, 2019). The decline in the bar fraction in disc galaxies could be explained as a result of the decreasing physical spatial resolution with redshift. The ellipticity of bars at poorer resolution decreases, leading to a rounder, less elongated and compact bar, making the stellar bar less distinguishable. The perceptibility of a bar could be considerably affected by a clumpy outer disc, a bright central bulge and/or the angular size of the bar (e.g., Lee et al., 2019). In the context of our results using JWST, the PSF FWHM for the JWST F444W filter is \(0.145^{\prime\prime}\). The median redshift for barred galaxies between \(1\leq z\leq 2\) is \(z=1.48\), corresponding to a mean linear resolution of \(\approx 1.26\) kpc. 
As for the redshift bin \(2<z\leq 3\), the median redshift of barred galaxies is \(z=2.28\), corresponding to a mean linear resolution of \(\approx 1.22\) kpc. Bars smaller in angular size could have been preferentially missed at the high redshifts explored in this study. In a volume-limited SDSS galaxy sample where bars were identified through ellipse fits and Fourier analysis, Aguerri et al. (2009) established that only bars with lengths above 2.5 times the FWHM can be identified. The proposal that the high-redshift bar fraction is systematically underestimated was thoroughly discussed in the context of a mass- and volume-limited S\({}^{4}\)G galaxy sample in Erwin (2018). Erwin successfully reproduced SDSS bar fraction trends using SDSS observational parameters in simulations on the S\({}^{4}\)G galaxy sample and suggested a bar length detection limit of \(\sim 2\) times the FWHM. Applying these detection limits on NIRCam F444W images implies that bars shorter than \(\sim 2.5-3\) kpc in radius (semi-major axis) are missed in our study. Our resolution limit thus indicates that all bars we detect in this study are longer than \(\approx 3\) kpc and then presumably relatively strong bars. In this context, it is striking that the bar fraction we estimate at the redshift bin \(z=1-2\) is not too dissimilar to the local strong bar fraction of about 30% (e.g., de Vaucouleurs et al., 1991). In fact, Erwin (2005) found that the mean bar semi-major axis is 3.3 kpc for early-type disc galaxies and 1.5 kpc for late-type disc galaxies (see also Gadotti, 2011). Therefore, unless the bar size distribution at high redshifts differs from the local distribution, even with JWST, we are likely missing a sizeable fraction of barred galaxies. In the sample of massive galaxies (M\({}_{\bullet}\geq 10^{10}\)M\({}_{\odot}\), \(0.02\leq z\leq 0.07\)) studied in Gadotti (2011), there are not many bars that are shorter than 3 kpc (see his Fig. 1) although the author points out that due to resolution limits, he may also miss bars with semi-major axis below \(2-3\) kpc. However, in Erwin (2005), mass is not presented, so a direct comparison is not straightforward. Erwin (2019), on the other hand, shows that bar length increases with mass for galaxies more massive than log (M\({}_{\bullet}/\)M\({}_{\odot}\)) \(\leq 10.1\) for local galaxies, and a substantial fraction of the galaxies in his study has bars shorter than 3 kpc. Not only absolute bar length but the ratio of bar length to the galaxy size (e.g., disk scale length \(h\), or parameters such as \(R_{50}\) or \(R_{90}\)) may be more useful to compare at different redshifts, since it has been shown that the galaxy size also evolves (Trujillo et al., 2007; Buitrago et al., 2008; van der Wel et al., 2014; Buitrago et al., 2014; Whitney et al., 2019, mostly for massive early-type galaxies but also in the case of disk galaxies). Kim et al. (2021) measured bar length for galaxies at \(0.2\leq z\leq 0.84\) and found that the mean length of the bar is \(\sim 5\) kpc for galaxies with log(M\({}_{\bullet}/\)M\({}_{\odot}\)) \(\geq 10\) (see their Fig. 2). However, the normalised bar length \(R_{bar}\)/\(h\) of galaxies at \(0.2\leq z\leq 0.84\) in the study of Kim et al. (2021) is similar to that of local bars in Gadotti (2011). We postpone a thorough discussion on these aspects to a future paper, in which we will also present measurements of the bar length and its evolution at higher redshifts. 
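The conversion from the F444W PSF FWHM to the physical scales quoted above can be reproduced with astropy's cosmology module; the sketch below assumes the built-in Planck18 cosmology (the cosmology adopted in this work is not restated here), so the kpc values are only approximate.

```python
# A minimal sketch converting the F444W PSF FWHM (0.145") into a physical
# scale at the median redshifts quoted above; the adopted cosmology is an
# assumption, so the resulting kpc values are approximate.
import astropy.units as u
from astropy.cosmology import Planck18

fwhm = 0.145 * u.arcsec

for z in (1.48, 2.28):
    kpc_per_arcsec = Planck18.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    fwhm_kpc = (fwhm * kpc_per_arcsec).to(u.kpc)
    # A bar needs a semi-major axis of roughly 2-2.5x the FWHM to be detected,
    # following the limits discussed above (Erwin 2018; Aguerri et al. 2009).
    print(f"z = {z}: FWHM = {fwhm_kpc:.2f}, "
          f"detection limit ~ {2.5 * fwhm_kpc:.1f}")
```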
Figure 7: Distribution of stellar masses for the sample of disc galaxies as classified by ZLC in JWST CEERS between the redshifts \(1\leq z\leq 3\). Disc galaxies are shown in blue, while the weakly and strongly barred galaxies are in pink. A dashed green line shows the 95% empirical completeness of the sample (see Figure 8 of Duncan et al., 2019). The parameter space below this line in this plot corresponds to a completeness fraction of \(\approx 85-90\%\).

## 6 Summary and conclusions

To derive the fraction of stellar bars in disc galaxies at high redshifts is an essential step towards understanding the onset of bar-driven galaxy evolution, which was found in previous studies using rest-frame optical HST images to occur at \(z\sim 1\). However, stellar bars are populated by evolved stars emitting strongly at longer wavelengths, and thus, bars can be more effectively identified in rest-frame NIR images. In this study, we observe the evolution of the bar fraction at redshifts \(z=1-3\) in a sample of galaxies present in both HST CANDELS and JWST CEERS and PRIMER and compare the results obtained using rest-frame optical HST images and rest-frame NIR JWST images for galaxies in the same parent sample. We use the longest-wavelength JWST NIRCam F444W filter to trace the underlying stellar mass distribution as well as possible. The initial parent sample of 5241 galaxies is optimised to produce a sample in which bars can be more robustly identified, in particular by removing galaxies in a close to edge-on projection, with an inclination limit of \(i\leq 60^{\circ}\). After optimisation, the parent sample is reduced to 768 galaxies in the JWST F444W filter and 115 galaxies in the HST F160W filter. Five co-authors visually classified all galaxies in the two optimised samples, searching for bars supported by radial profiles of isophotal ellipticity and position angle. Two co-authors visually classified all galaxies in the optimised JWST sample as disc or non-disc galaxies and found a disc fraction \(f_{D}\sim 0.40\pm 0.14\). The fraction of bars in disc galaxies was thus derived for two redshift bins, \(1\leq z\leq 2\) and \(2<z\leq 3\), with robust photometric redshifts. The bar fractions we found in JWST F444W are, respectively, \(\approx 19\)% and \(\approx 7\)% for the lower and higher redshift bins. In HST F160W, we found the bar fractions to be \(\approx 6\)% and \(\approx 1\)% for the lower and higher redshift bins, respectively. We thus found the bar fraction to be approximately three to four times greater in JWST F444W than in HST F160W in the lower redshift bin, showing that the detectability of stellar bars depends greatly on the wavelength range and the sensitivity of the instrument. A decrease in the bar fraction is observed at higher redshifts, but the trend could be due to shorter bars being preferentially missed in this study. We detect a substantial number of barred galaxies at redshifts \(z\leq 3\), implying that bar-driven galaxy evolution commences at a lookback time beyond \(\sim 11\) Gyrs. In fact, Guo et al. (2023) have recently reported the finding of a barred galaxy at \(z\approx 2.3\), and other teams have reported candidate barred galaxies beyond redshift three (Amvrosiadis et al., 2023, subm.) and even beyond redshift four (Tsukui et al., 2023; Smail et al., 2023). In this study, the highest redshift strongly and weakly barred galaxies found are at \(z\approx 2.3\) and \(z\simeq 2.8\), respectively.
This study does not extend beyond \(z=3\) to remain in the rest-frame NIR and better detect the evolved stellar populations within the bar. Interesting investigations can be done on the bar fraction dependence on galaxy stellar mass and the evolution of the bar length, which are beyond the scope of this paper but will be explored in future papers. This study used the first four pointing of CEERS, and a future paper will present an enlarged census of the bar fraction at redshifts \(1\leq z\leq 3\) using the additional six CEERS pointings. ## Acknowledgements ZLC acknowledges funding from the Science and Technology Facilities Council ST/X508354/1. This work was supported by STFC grants ST/T000244/1 and ST/X001075/1. TK acknowledges support from the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (RS-2023-00240212 and No. 2019R11A3A02062242) and the grant funded by the Korean government (MSIT) (No. 2022R1A4A3031306 and WISET 2022-804). JN acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 694343). EA thanks the CNES for financial support. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) to any Author Accepted Manuscript version arising. ## Data Availability This work used Astropy (Astropy Collaboration et al., 2013) and PHOTUTILS (Bradley et al., 2022). The specific observations analyzed can be accessed via [https://doi.org/10.17909/xm8m-t59](https://doi.org/10.17909/xm8m-t59), and the visual classifications from Ferreira et al. (2022) are publicly available at [https://github.com/astroferreira/CEE](https://github.com/astroferreira/CEE) RS_EPOCHS_MORPBO/
2310.00360
The multiplicity of the zero Laplacian eigenvalue of uniform hypertrees
In this paper, the Laplacian characteristic polynomial of uniform hypergraphs with cut vertices or pendant edges and the Laplacian matching polynomial of uniform hypergraphs are characterized. The multiplicity of the zero Laplacian eigenvalue of uniform hypertrees is given, which proves the conjecture in \cite{zheng2023zero} (The zero eigenvalue of the Laplacian tensor of a uniform hypergraph, Linear and Multilinear Algebra, (2023) Doi:10.1080/03081087.2023.2172541).
Ge Lin, Changjiang Bu
2023-09-30T12:31:54Z
http://arxiv.org/abs/2310.00360v1
# The multiplicity of the zero Laplacian eigenvalue of uniform hypertrees

###### Abstract

In this paper, the Laplacian characteristic polynomial of uniform hypergraphs with cut vertices or pendant edges and the Laplacian matching polynomial of uniform hypergraphs are characterized. The multiplicity of the zero Laplacian eigenvalue of uniform hypertrees is given, which proves the conjecture in [18] (The zero eigenvalue of the Laplacian tensor of a uniform hypergraph, Linear and Multilinear Algebra, (2023) Doi:10.1080/03081087.2023.2172541). keywords: hypertree, Laplacian tensor, multiplicity, characteristic polynomial, matching polynomial _AMS classification (2020):_ 05C65, 05C50.

## 1 Introduction

A hypergraph is called \(k\)-uniform if each of its edges contains exactly \(k\) vertices. For a \(k\)-uniform hypergraph \(H=(V(H),E(H))\), its adjacency tensor \(\mathcal{A}_{H}=(a_{i_{1}i_{2}\cdots i_{k}})\) is a \(k\)-order \(|V(H)|\)-dimensional tensor [5], where \[a_{i_{1}i_{2}\cdots i_{k}}=\begin{cases}\frac{1}{(k-1)!}&\text{if }\{i_{1},i_{2}, \ldots,i_{k}\}\in E(H),\\ 0&\text{otherwise}.\end{cases}\] The Laplacian tensor of \(H\) is \(\mathcal{L}_{H}=\mathcal{D}_{H}-\mathcal{A}_{H}\)[13], where \(\mathcal{D}_{H}\) is the diagonal tensor of vertex degrees of \(H\). The eigenvalues of \(\mathcal{A}_{H}\) and \(\mathcal{L}_{H}\) are called the eigenvalues and Laplacian eigenvalues of \(H\), respectively. The characteristic polynomials of \(\mathcal{A}_{H}\) and \(\mathcal{L}_{H}\) are called the characteristic polynomial and the Laplacian characteristic polynomial of \(H\), respectively. The characteristic polynomials of uniform hypergraphs are a research area that has attracted much attention in spectral hypergraph theory. In 2012, Cooper and Dutle [5] characterized some properties of the characteristic polynomials of uniform hypergraphs and gave the characteristic polynomial of the one-edge hypergraph. In 2015, Cooper and Dutle [6] gave the characteristic polynomial of the 3-uniform hyperstar. In 2020, Bao et al. [1] provided a combinatorial method for computing the characteristic polynomial of uniform hypergraphs with cut vertices, and gave the characteristic polynomial of the \(k\)-uniform hyperstar. In 2021, Chen and Bu [3] gave a reduction formula for the characteristic polynomial of uniform hypergraphs with pendant edges. Besides, they used the reduction formula to derive the characteristic polynomial of the uniform hyperpath. However, there are few results on the Laplacian characteristic polynomials of uniform hypergraphs. In 2023, Zheng [18] gave the Laplacian characteristic polynomial of the uniform hyperstar, and obtained the multiplicity of the zero Laplacian eigenvalue of the uniform hyperstar and hyperpath. Moreover, the following conjecture was proposed in [18]. **Conjecture 1.1**.: _[_18_]_ _Let \(T=(V(T),E(T))\) be a \(k\)-uniform hypertree for \(k\geq 3\). Then the multiplicity of the zero Laplacian eigenvalue of \(T\) is \(k^{|E(T)|(k-2)}\)._ The eigenvalues of uniform hypertrees can be studied by the matching polynomial. In 2017, Zhang et al. [17] showed that the roots of the matching polynomial of a uniform hypertree are its eigenvalues. For a \(k\)-uniform hypertree \(T\) with \(k\geq 3\), Clark and Cooper [4] determined all eigenvalues of \(T\) by roots of the matching polynomials of all sub-hypertrees of \(T\). In 2022, Wan et al.
[15] defined the Laplacian matching polynomial of uniform hypergraphs, and used the roots of the Laplacian matching polynomials of all sub-hypertrees of \(T\) to obtain all Laplacian eigenvalues of \(T\) (without multiplicity). In this paper, we give a expression for the Laplacian characteristic polynomial of uniform hypergraphs with cut vertices or pendant edges (Section 2). And we characterize some properties on the Laplacian matching polynomial of uniform hypergraphs (Section 3). Further, we use these results to give the multiplicity of the zero Laplacian eigenvalue of uniform hypertrees, which shows that Conjecture 1.1 is true (Section 4). ## 2 The Laplacian characteristic polynomial of uniform hypergraphs ### Preliminaries In this subsection, we present some notation and lemmas about the eigenvalue of tensors and the formula of resultants. A \(k\)-order \(n\)-dimensional tensor \(\mathcal{A}=(a_{i_{1}i_{2}\cdots i_{k}})\) refers to a multi-dimensional array with entries \(a_{i_{1}i_{2}\cdots i_{k}}\) for all \(i_{j}\in[n]:=\{1,\ldots,n\}\) and \(j\in[k]\). If there exists \(\lambda\in\mathbb{C}\) and a non-zero vector \(\mathbf{x}=(x_{1},\ldots,x_{n})^{\mathrm{T}}\in\mathbb{C}^{n}\) such that \[\mathcal{A}\mathbf{x}^{k-1}=\lambda\mathbf{x}^{[k-1]},\] where \(\mathcal{A}\mathbf{x}^{k-1}\) is an \(n\)-dimensional vector with \(\sum_{i_{2},\ldots,i_{k}=1}^{n}a_{ii_{2}\ldots i_{k}}x_{i_{2}}\cdots x_{i_{k}}\) as its \(i\)-th component and \(\mathbf{x}^{[k-1]}=(x_{1}^{k-1},\ldots,x_{n}^{k-1})^{\mathrm{T}}\), then \(\lambda\) is called an eigenvalue of \(\mathcal{A}\) and \(\mathbf{x}\) is an eigenvector of \(\mathcal{A}\) corresponding to \(\lambda\) (see [10, 12]). The resultant of the polynomials system \((\lambda\mathbf{x}^{[k-1]}-\mathcal{A}\mathbf{x}^{k-1})\) is called the characteristic polynomial of \(\mathcal{A}\), denoted by \(\phi(\mathcal{A})\). In the following, we introduce some formulas of resultants required for proofs in this section. **Lemma 2.1**.: _[_7_, Poisson Formula for resultants]_ _Let \(F_{1},F_{2},\ldots,F_{n}\in\mathbb{C}[x_{1},\ldots,x_{n}]\) be homogeneous polynomials of respective degrees \(d_{1},d_{2},\ldots,d_{n}\). For each \(i\in[n]\), let \(\overline{F}_{i}=F_{i}|_{x_{1}=0}\) and \(f_{i}=F_{i}|_{x_{1}=1}\). Let \(\mathcal{V}\) be the affine variety defined by the polynomials \(f_{2},\ldots,f_{n}\). If \(\mathrm{Res}(\overline{F}_{2},\ldots,\overline{F}_{n})\neq 0\), then_ \[\mathrm{Res}(F_{1},F_{2},\ldots,F_{n})=\mathrm{Res}(\overline{F}_{2},\ldots, \overline{F}_{n})^{d_{1}}\prod_{\mathbf{p}\in\mathcal{V}}f_{1}(\mathbf{p})^{m (\mathbf{p})},\] _where \(m(\mathbf{p})\) is the multiplicity of a point \(\mathbf{p}\) in \(\mathcal{V}\)._ **Lemma 2.2**.: _[_5_, lemma 3.2]_ _Let \(F_{1},\ldots,F_{n}\in\mathbb{C}[x_{1},\ldots,x_{n}]\) be homogeneous polynomials of respective degrees \(d_{1},\ldots,d_{n}\), and let \(G_{1},\ldots,G_{m}\in\mathbb{C}[y_{1},\ldots,y_{m}]\) be homogeneous polynomials of respective degrees \(\delta_{1},\ldots,\delta_{m}\). Then_ \[\mathrm{Res}(F_{1},\ldots,F_{n},G_{1},\ldots,G_{m})=\mathrm{Res}(F_{1},\ldots, F_{n})^{\prod_{j=1}^{m}\delta_{j}}\mathrm{Res}(G_{1},\ldots,G_{m})^{\prod_{i=1}^{ n}d_{i}}.\] Let \(H=(V(H),E(H))\) be a \(k\)-uniform hypergraph with \(V(H)=[n]\). For a vertex \(v\in V(H)\), let \(E_{H}(v)\) denote the set of edges of \(H\) containing \(v\) and \(d_{H}(v)\) denote the degree of \(v\) in \(H\). 
Given an edge \(e\in E(H)\) and a vector \(\mathbf{x}=(x_{1},\ldots,x_{n})^{\mathrm{T}}\in\mathbb{C}^{n}\) let \(\mathbf{x}_{e}=\prod_{v\in e}x_{v}\). Then the eigenvalue equation \(\mathcal{L}_{H}\mathbf{x}^{k-1}=\lambda\mathbf{x}^{[k-1]}\) corresponding to the Laplacian tensor of \(H\) can be written as \[d_{H}(v)x_{v}^{k-1}-\sum_{e\in E_{H}(v)}\mathbf{x}_{e\setminus\{v\}}=\lambda x _{v}^{k-1},v=1,\ldots,n.\] For each \(v\in V(H)\), define \[F_{v}=(\lambda-d_{H}(v))x_{v}^{k-1}+\sum_{e\in E_{H}(v)}\mathbf{x}_{e\setminus \{v\}}.\] For a fixed vertex \(w\in V(H)\), let \[\overline{F}_{v}=F_{v}|_{x_{w}=0},f_{v}=F_{v}|_{x_{w}=1}.\] Let \(\mathcal{V}^{H}\) be the affine variety defined by the polynomials \(f_{v}\) for all \(v\in V(H)\setminus\{w\}\). We use \(\mathcal{L}_{H}(w)=(l_{i_{1}\cdots i_{k}})\) to denote a \(k\)-order \(n-1\)-dimensional principal sub-tensor of \(\mathcal{L}_{H}\), where \(i_{1},\ldots,i_{k}\in V(H)\setminus\{w\}\). By the Poisson Formula for resultants, we obtain the following lemma about the Laplacian characteristic polynomial of \(H\). **Lemma 2.3**.: _Let \(H\) be a \(k\)-uniform hypergraph and \(w\) be a vertex on \(H\). Then the Laplacian characteristic polynomial_ \[\phi(\mathcal{L}_{H})=\phi(\mathcal{L}_{H}(w))^{k-1}\prod_{\mathbf{p}\in \mathcal{V}^{H}}(\lambda-d_{H}(w)+\sum_{e\in E_{H}(w)}\mathbf{p}_{e\setminus \{w\}})^{m(\mathbf{p})}, \tag{2.1}\] _where \(m(\mathbf{p})\) is the multiplicity of \(\mathbf{p}\) in \(\mathcal{V}^{H}\)._ Proof.: By the definition of the Laplacian characteristic polynomial, we know that \(\phi(\mathcal{L}_{H})=\operatorname{Res}(F_{v}:v\in V(H))\), where \(F_{v}=(\lambda-d_{H}(v))x_{v}^{k-1}+\sum_{e\in E_{H}(v)}\mathbf{x}_{e\setminus \{v\}}\). For the vertex \(w\in V(H)\), by Lemma 2.1, we have \[\phi(\mathcal{L}_{H})=\operatorname{Res}(\overline{F}_{v}:v\in V(H)\setminus \{w\})^{k-1}\prod_{\mathbf{p}\in\mathcal{V}^{H}}f_{w}(\mathbf{p})^{m(\mathbf{ p})}.\] For all \(v\in V(H)\setminus\{w\}\), \(\overline{F}_{v}=F_{v}|_{x_{w}=0}=(\lambda-d_{H}(v))x_{v}^{k-1}+\sum_{e\in E_{ H-w}(v)}\mathbf{x}_{e\setminus\{v\}}=0\) are the eigenvalue equations of \(\mathcal{L}_{H}(w)\), where \(H-w\) denote the hypergraph obtained from \(H\) by removing the vertex \(w\) and all edges incident to it, so we have \[\operatorname{Res}(\overline{F}_{v}:v\in V(H)\setminus\{w\})=\phi(\mathcal{L }_{H}(w)). \tag{2.2}\] Note that \(f_{w}=F_{w}|_{x_{w}=1}=\lambda-d_{H}(w)+\sum_{e\in E_{H}(w)}\mathbf{x}_{e\setminus \{w\}}\). Then we obtain \[\phi(\mathcal{L}_{H})=\phi(\mathcal{L}_{H}(w))^{k-1}\prod_{\mathbf{p}\in \mathcal{V}^{H}}(\lambda-d_{H}(w)+\sum_{e\in E_{H}(w)}\mathbf{p}_{e\setminus \{w\}})^{m(\mathbf{p})}.\] When \(H\) is a uniform hypergraph with cut vertices, we can give a description of the affine variety \(\mathcal{V}^{H}\) for this case and obtain a more explicit expression for the Laplacian characteristic polynomial of \(H\) than (2.1). ### Main results Let \(H=(V(H),E(H))\) be a \(k\)-uniform connected hypergraph and \(w\in V(H)\). Denote \(\widehat{E}_{H}(w)=\{e\setminus\{w\}:e\in E_{H}(w)\}\). Deleting the vertex \(w\), it can get a non-uniform hypergraph \(\widehat{H}\) with vertex set \(V(\widehat{H})=V(H)\setminus\{w\}\) and edge set \(E(\widehat{H})=(E(H)\setminus E_{H}(w))\cup\widehat{E}_{H}(w)\). The vertex \(w\) is called a cut vertex if \(\widehat{H}\) is not connected [1]. Suppose that \(w\) is a cut vertex on \(H\) and \(\widehat{H}_{1},\ldots,\widehat{H}_{s}\) are connected components of \(\widehat{H}\). 
For each \(i\in[s]\), denote the induced sub-hypergraph of \(H\) on \(V(\widehat{H}_{i})\cup\{w\}\) by \(\widetilde{H}_{i}\), and we call \(\widetilde{H}_{i}\) a branch of \(H\) associated with \(w\). Clearly, \(H\) can be obtained by coalescing the branches \(\widetilde{H}_{1},\ldots,\widetilde{H}_{s}\) to the vertex \(w\). Recall that the affine variety \(\mathcal{V}^{H}\) is defined by the polynomials \(f_{v}=(\lambda-d_{H}(v))x_{v}^{k-1}+\sum_{e\in E_{H}(v)}\mathbf{x}_{e\setminus \{v\}}|_{x_{w}=1}\) for all \(v\in V(H)\setminus\{w\}\). Then, for each \(v_{i}\in V(\widetilde{H}_{i})\setminus\{w\}\) and \(i\in[s]\), we have \[f_{v_{i}} =(\lambda-d_{H}(v_{i}))x_{v_{i}}^{k-1}+\sum_{e\in E_{H}(v_{i})} \mathbf{x}_{e\setminus\{v_{i},w\}}\] \[=(\lambda-d_{\widetilde{H}_{i}}(v_{i}))x_{v_{i}}^{k-1}+\sum_{e\in E _{\widetilde{H}_{i}}(v_{i})}\mathbf{x}_{e\setminus\{v_{i},w\}}.\] It is known that \(\mathcal{V}^{\widetilde{H}_{i}}\) is the affine variety defined by the polynomials \(f_{v_{i}}\) for all \(v_{i}\in V(\widetilde{H}_{i})\setminus\{w\}\) and each \(i\in[s]\). So \[\mathcal{V}^{H}=\mathcal{V}^{\widetilde{H}_{1}}\times\cdots\times\mathcal{V}^ {\widetilde{H}_{s}}. \tag{2.3}\] Combining Lemma 2.1 with (2.3), an expression for the Laplacian characteristic polynomial of uniform hypergraphs with cut vertices is derived as follows. **Theorem 2.4**.: _Let \(H\) be a \(k\)-uniform hypergraph and \(w\) be a cut vertex on \(H\). Let \(\widetilde{H}_{1},\ldots,\widetilde{H}_{s}\) are the branches of \(H\) associated with \(w\). Denote \(\mathcal{V}^{(i)}=\mathcal{V}^{\widetilde{H}_{i}}\) and \(E_{i}(w)=E_{\widetilde{H}_{i}}(w)\). Then_ \[\phi(\mathcal{L}_{H})=\prod_{i=1}^{s}\phi\left(\mathcal{L}_{\widetilde{H}_{i}}(w) \right)^{(k-1)^{2-s+\sum_{j\neq i}|V(\widetilde{H}_{j})|}}\prod_{\begin{subarray} {c}\mathbf{p}^{(i)}\in\mathcal{V}^{(i)}\\ i\in[s]\end{subarray}}(\lambda-\sum_{i=1}^{s}d_{\widetilde{H}_{i}}(w)+\sum_{ \begin{subarray}{c}e\in E_{i}(w)\\ i\in[s]\end{subarray}}\mathbf{p}^{(i)}_{e\setminus\{w\}})^{\prod_{i=1}^{s}m( \mathbf{p}^{(i)})},\] _where \(m(\mathbf{p}^{(i)})\) is the multiplicity of \(\mathbf{p}^{(i)}\) in \(\mathcal{V}^{(i)}\) for each \(i\in[s]\)._ Proof.: By Lemma 2.3, the Laplacian characteristic polynomial \[\phi(\mathcal{L}_{H})=\phi(\mathcal{L}_{H}(w))^{k-1}\prod_{\mathbf{p}\in \mathcal{V}^{H}}(\lambda-d_{H}(w)+\sum_{e\in E_{H}(w)}\mathbf{p}_{e\setminus \{w\}})^{m(\mathbf{p})}. \tag{2.4}\] From (2.2), we know that \(\phi(\mathcal{L}_{H}(w))=\operatorname{Res}(\overline{F}_{v}:v\in V(H) \setminus\{w\})\). Recall that \(\overline{F}_{v}=(\lambda-d_{H}(v))x_{v}^{k-1}+\sum_{e\in E_{H}(v)}\mathbf{x}_ {e\setminus\{v\}}|_{x_{w}=0}\) for each \(v\in V(H)\setminus\{w\}\), and note that \(H\) can be obtained by coalescing the branches \(\widetilde{H}_{1},\ldots,\widetilde{H}_{s}\) to the vertex \(w\). For all \(v_{i}\in V(\widetilde{H}_{i})\setminus\{w\}\) and each \(i\in[s]\), we have \[\overline{F}_{v_{i}} =(\lambda-d_{H}(v_{i}))x_{v_{i}}^{k-1}+\sum_{e\in E_{H}(v_{i})} \mathbf{x}_{e\setminus\{v_{i}\}}|_{x_{w}=0}\] \[=(\lambda-d_{\widetilde{H}_{i}}(v_{i}))x_{v_{i}}^{k-1}+\sum_{e\in E _{\widetilde{H}_{i}}(v_{i})}\mathbf{x}_{e\setminus\{v_{i}\}}|_{x_{w}=0}\] \[=(\lambda-d_{\widetilde{H}_{i}}(v_{i}))x_{v_{i}}^{k-1}+\sum_{e\in E _{\widetilde{H}_{i}-w}(v_{i})}\mathbf{x}_{e\setminus\{v_{i}\}},\] where \(\widetilde{H}_{i}-w\) denote the hypergraph obtained from \(\widetilde{H}_{i}\) by removing the vertex \(w\) and all edges incident to it. 
So \(\phi(\mathcal{L}_{H}(w))=\operatorname{Res}(\overline{F}_{v}:v\in V(H) \setminus\{w\})=\operatorname{Res}(\overline{F}_{v_{i}}:v_{i}\in V(\widetilde {H}_{i})\setminus\{w\},i\in[s])\). By Lemma 2.2, we get \[\phi(\mathcal{L}_{H}(w))=\prod_{i=1}^{s}\operatorname{Res}(\overline{F}_{v_{i} }:v_{i}\in V(\widetilde{H}_{i})\setminus\{w\})^{(k-1)^{1-s+\sum_{j\neq i}|V( \widetilde{H}_{j})|}}.\] For all \(v_{i}\in V(\widetilde{H}_{i})\setminus\{w\}\) and each \(i\in[s]\), \(\overline{F}_{v_{i}}=0\) are the eigenvalue equations of \(\mathcal{L}_{\widetilde{H}_{i}}(w)\). Then we have \(\operatorname{Res}(\overline{F}_{v_{i}}:v_{i}\in V(\widetilde{H}_{i})\setminus \{w\})=\phi(\mathcal{L}_{\widetilde{H}_{i}}(w))\), which implies that \[\phi(\mathcal{L}_{H}(w))=\prod_{i=1}^{s}\phi(\mathcal{L}_{\widetilde{H}_{i}}(w ))^{(k-1)^{1-s+\sum_{j\neq i}|V(\widetilde{H}_{j})|}}. \tag{2.5}\] For any \(\mathbf{p}\in\mathcal{V}^{H}\), by (2.3), we have \(\mathbf{p}=\begin{pmatrix}\mathbf{p}^{(1)}\\ \vdots\\ \mathbf{p}^{(s)}\end{pmatrix}\), where \(\mathbf{p}^{(i)}\in\mathcal{V}^{(i)}\) for all \(i\in[s]\). Then we obtain \[\prod_{\mathbf{p}\in\mathcal{V}^{H}}(\lambda-d_{H}(w)+\sum_{e\in E _{H}(w)}\mathbf{p}_{e\setminus\{w\}})^{m(\mathbf{p})} =\prod_{\mathbf{p}\in\mathcal{V}^{H}}(\lambda-d_{H}(w)+\sum_{ \begin{subarray}{c}e\in E_{i}(w)\\ i\in[s]\end{subarray}}\mathbf{p}_{e\setminus\{w\}})^{m(\mathbf{p})}\] \[=\prod_{\begin{subarray}{c}\mathbf{p}^{(i)}\in\mathcal{V}^{(i)} \\ i\in[s]\end{subarray}}(\lambda-\sum_{i=1}^{s}d_{\widetilde{H}_{i}}(w)+\sum_{ \begin{subarray}{c}e\in E_{i}(w)\\ i\in[s]\end{subarray}}\mathbf{p}^{(i)}_{e\setminus\{w\}})^{\prod_{i=1}^{s}m( \mathbf{p}^{(i)})}. \tag{2.6}\] Substituting (2.5) and (2.6) into (2.4), the proof is completed. An edge on \(k\)-uniform hypergraph is called a pendant edge if it contains exactly \(k-1\) vertices with degree one. When \(k\)-uniform hypergraph \(H\) has a pendant edge incident to \(w\), it implies that \(w\) is a cut vertex on \(H\) and one of the branches is the one-edge hypergraph. We use Theorem 2.4 to give a more explicit expression for the Laplacian characteristic polynomial of uniform hypergraphs with pendant edges. **Corollary 2.5**.: _Let \(H\) be a \(k\)-uniform hypergraph with a pendant edge incident to the non-pendent vertex \(w\), and we define \(\widetilde{H}\) as the \(k\)-uniform hypergraph obtained by removing the pendant edge and pendent vertices on it from \(H\). Then_ \[\phi(\mathcal{L}_{H})= (\lambda-1)^{(k-1)^{|V(\widetilde{H})|+k-1}}\phi(\mathcal{L}_{ \widetilde{H}}(w))^{(k-1)^{k}}\prod_{\mathbf{p}\in\mathcal{V}^{\widetilde{H}} }(\lambda-d_{\widetilde{H}}(w)-1+\sum_{e\in E_{\widetilde{H}}(w)}\mathbf{p}_ {e\setminus\{w\}})^{m(\mathbf{p})K_{1}}\] \[\times\prod_{\mathbf{p}\in\mathcal{V}^{\widetilde{H}}}(\lambda- d_{\widetilde{H}}(w)-1+(\frac{-1}{\lambda-1})^{k-1}+\sum_{e\in E_{ \widetilde{H}}(w)}\mathbf{p}_{e\setminus\{w\}})^{m(\mathbf{p})K_{2}},\] _where \(K_{1}=(k-1)^{k-1}-k^{k-2}\) and \(K_{2}=k^{k-2}\)._ Proof.: Clearly, \(w\) is a cut vertex on \(H\). Suppose that the branches of \(H\) associated with \(w\) are \(\widetilde{H}\) and the one-edge hypergraph with \(k\) vertices, denoted by \(H^{\prime}\). 
By Theorem 2.4, we have \[\phi(\mathcal{L}_{H})= \phi\left(\mathcal{L}_{\tilde{H}}(w)\right)^{(k-1)^{k}}\phi\left( \mathcal{L}_{H^{\prime}}(w)\right)^{(k-1)^{|V(\tilde{H})|}}\] \[\times\prod_{\begin{subarray}{c}\mathbf{p}\in\mathcal{V}^{ \tilde{H}}\\ \mathbf{q}\in\mathcal{V}^{H^{\prime}}\end{subarray}}(\lambda-d_{\tilde{H}}(w)- 1+\mathbf{q}_{e^{\prime}\setminus\{w\}}+\sum_{e\in E_{\tilde{H}}(w)}\mathbf{p} _{e\setminus\{w\}})^{m(\mathbf{p})m(\mathbf{q})}, \tag{2.7}\] where \(e^{\prime}\) is the edge of \(H^{\prime}\). Since \(\mathcal{L}_{H^{\prime}}(w)\) is a \(k\)-order \(k-1\)-dimensional identity tensor for the one-edge hypergraph \(H^{\prime}\), we get \[\phi(\mathcal{L}_{H^{\prime}}(w))=(\lambda-1)^{(k-1)^{k-1}}. \tag{2.8}\] It is shown that the Laplacian characteristic polynomial of \(H^{\prime}\) is \(\phi(\mathcal{L}_{H^{\prime}})=(\lambda-1)^{k(k-1)^{k-1}-k^{k-1}}((\lambda-1) ^{k}+(-1)^{k-1})^{k^{k-2}}\) in the [18, Theorem 4.2]. It follows from (2.1) that \[\prod_{\mathbf{q}\in\mathcal{V}^{H^{\prime}}}(\lambda-1+\mathbf{ q}_{e^{\prime}\setminus\{w\}})^{m(\mathbf{q})} =\frac{\phi(\mathcal{L}_{H^{\prime}})}{\phi\left(\mathcal{L}_{H^{ \prime}}(w)\right)^{k-1}}\] \[=(\lambda-1)^{(k-1)^{k-1}-k^{k-2}}(\lambda-1+(\frac{-1}{\lambda- 1})^{k-1})^{k^{k-2}}.\] Then we have \[\mathbf{q}_{e^{\prime}\setminus\{w\}}=\begin{cases}0,&\text{if }\mathbf{q}= \mathbf{0},\\ (\frac{-1}{\lambda-1})^{k-1},&\text{if }\mathbf{q}\neq\mathbf{0},\end{cases} \tag{2.9}\] for \(\mathbf{q}\in\mathcal{V}^{H^{\prime}}\), and we have \(m(\mathbf{0})=(k-1)^{k-1}-k^{k-2}\), \(\sum_{\mathbf{0}\neq\mathbf{q}\in\mathcal{V}^{H^{\prime}}}m(\mathbf{q})=k^{k-2}\) for \(\mathbf{0}\in\mathcal{V}^{H^{\prime}}\). By (2.9), the equation in (2.7) is derived as follows: \[\prod_{\begin{subarray}{c}\mathbf{p}\in\mathcal{V}^{\widetilde{H}} \\ \mathbf{q}\in\mathcal{V}^{H^{\prime}}\end{subarray}}(\lambda-d_{\widetilde{H}}(w) -1+\mathbf{q}_{e^{\prime}\setminus\{w\}}+\sum_{e\in E_{\widetilde{H}}(w)} \mathbf{p}_{e\setminus\{w\}})^{m(\mathbf{p})m(\mathbf{q})}\] \[=\prod_{\begin{subarray}{c}\mathbf{p}\in\mathcal{V}^{\widetilde{H} }\\ \mathbf{0}=\mathbf{q}\in\mathcal{V}^{H^{\prime}}\end{subarray}}(\lambda-d_{ \widetilde{H}}(w)-1+\mathbf{q}_{e^{\prime}\setminus\{w\}}+\sum_{e\in E_{ \widetilde{H}}(w)}\mathbf{p}_{e\setminus\{w\}})^{m(\mathbf{p})m(\mathbf{q})}\] \[\quad\times\prod_{\begin{subarray}{c}\mathbf{p}\in\mathcal{V}^{ \widetilde{H}}\\ \mathbf{0}\neq\mathbf{q}\in\mathcal{V}^{H^{\prime}}\end{subarray}}(\lambda-d_ {\widetilde{H}}(w)-1+\mathbf{q}_{e^{\prime}\setminus\{w\}}+\sum_{e\in E_{ \widetilde{H}}(w)}\mathbf{p}_{e\setminus\{w\}})^{m(\mathbf{p})m(\mathbf{q})}\] \[=\prod_{\mathbf{p}\in\mathcal{V}^{\widetilde{H}}}(\lambda-d_{ \widetilde{H}}(w)-1+\sum_{e\in E_{\widetilde{H}}(w)}\mathbf{p}_{e\setminus\{ w\}})^{m(\mathbf{p})((k-1)^{k-1}-k^{k-2})}\] \[\quad\times\prod_{\mathbf{p}\in\mathcal{V}^{\widetilde{H}}}( \lambda-d_{\widetilde{H}}(w)-1+(\frac{-1}{\lambda-1})^{k-1}+\sum_{e\in E_{ \widetilde{H}}(w)}\mathbf{p}_{e\setminus\{w\}})^{m(\mathbf{p})k^{k-2}}. \tag{2.10}\] Substituting (2.8) and (2.10) into (2.7), the proof is completed. ## 3 The Laplacian matching polynomial of uniform hypergraphs Let \(H=(V(H),E(H))\) be a \(k\)-uniform hypergraph. Let \(M\) be a sub-set of \(E(H)\). Denote by \(V(M)\) the set of vertices of \(H\) each of which is an endpoint of one of the edges in \(M\). If no two distinct edges in \(M\) share a common vertex, then \(M\) is called a matching of \(H\). 
The set of matchings (including the empty set) of \(H\) is denoted by \(\mathcal{M}(H)\). Let \(\mathbf{w}:V(H)\cup E(H)\rightarrow\mathbb{C}\) be a weighting function on \(H\). In 2022, Wan et al. [15] defined the weighted matching polynomial of \(H\) as \[\sum_{M\in\mathcal{M}(H)}(-1)^{|M|}\prod_{e\in M}\mathbf{w}(e)^{k}\prod_{v\in V (H)\setminus V(M)}(\lambda-\mathbf{w}(v)).\] For any sub-hypergraph \(\widetilde{H}\) of \(H\), if we choose the weighting function on \(\widetilde{H}\) such that \(\mathbf{w}(v)=d_{H}(v)\) for all \(v\in V(\widetilde{H})\) and \(\mathbf{w}(e)=-1\) for all \(e\in E(\widetilde{H})\), then the weighted matching polynomial of \(\widetilde{H}\) can be derived as \[\sum_{M\in\mathcal{M}(\widetilde{H})}(-1)^{(k-1)|M|}\prod_{v\in V(\widetilde{ H})\setminus V(M)}(\lambda-d_{H}(v))=:\varphi_{H}(\widetilde{H}). \tag{3.1}\] In [15], the polynomial (3.1) is called the Laplacian matching polynomial of \(\widetilde{H}\) with respect to \(H\). The goal of this section is to characterize some properties on the Laplacian matching polynomial of uniform hypergraphs, which will be used to prove the main results in Section 4. Firstly, we introduce some related notation. For a sub-set \(S\subseteq V(H)\), we use \(H-S\) to denote the hypergraph obtained from \(H\) by deleting the vertices in \(S\) and the edges incident to them. For a sub-set \(I\subseteq E(H)\), let \(H\setminus I\) denote the hypergraph obtained from \(H\) by deleting the edges in \(I\) (no deletion of resultant isolated vertices). When \(S=\{v\}\) and \(I=\{e\}\), \(H-S\) and \(H\setminus I\) are simply written as \(H-v\) and \(H\setminus e\), respectively. **Theorem 3.1**.: _Let \(H\) be a \(k\)-uniform hypergraph, and \(\widetilde{H}\) be a sub-hypergraph of \(H\). Then the following statements hold. (1) If \(\widetilde{H}\) is not connected and its connected components is \(\widetilde{H}_{1}\) and \(\widetilde{H}_{2}\), then \(\varphi_{H}(\widetilde{H})=\varphi_{H}(\widetilde{H}_{1})\varphi_{H}( \widetilde{H}_{2})\); (2) For \(e\in E(\widetilde{H})\), we have \(\varphi_{H}(\widetilde{H})=\varphi_{H}(\widetilde{H}\setminus e)+(-1)^{k-1} \varphi_{H}(\widetilde{H}-V(e))\); (3) For \(v\in V(\widetilde{H})\) and \(I\subseteq E_{\widetilde{H}}(v)\), we have_ \[\varphi_{H}(\widetilde{H})=\varphi_{H}(\widetilde{H}\setminus I)+(-1)^{k-1} \sum_{e\in I}\varphi_{H}(\widetilde{H}-V(e)),\] _and_ \[\varphi_{H}(\widetilde{H})=(\lambda-d_{H}(v))\varphi_{H}(\widetilde{H}-v)+(-1 )^{k-1}\sum_{e\in E_{\widetilde{H}}(v)}\varphi_{H}(\widetilde{H}-V(e));\] _(4) \(\frac{\mathrm{d}}{\mathrm{d}\lambda}\varphi_{H}(\widetilde{H})=\sum_{v\in V( \widetilde{H})}\varphi_{H}(\widetilde{H}-v)\)._ Proof.: (1) For any \(M\in\mathcal{M}(\widetilde{H})\), there exists \(M_{1}\in\mathcal{M}(\widetilde{H}_{1})\) and \(M_{2}\in\mathcal{M}(\widetilde{H}_{2})\) such that \(M=M_{1}\cup M_{2}\). It is easy to check that \(\varphi_{H}(\widetilde{H})=\varphi_{H}(\widetilde{H}_{1})\varphi_{H}( \widetilde{H}_{2})\). (2) For any \(M\in\mathcal{M}(\widetilde{H})\), if \(M\) does not contain edge \(e\), then \(M\) is a matching of \(\widetilde{H}\setminus e\); if \(M\) contain edge \(e\), then \(M\setminus\{e\}\) is a matching of \(\widetilde{H}-V(e)\). 
Thus, we have \[\varphi_{H}(\widetilde{H})= \sum_{e\notin M\in\mathcal{M}(\widetilde{H})}(-1)^{(k-1)|M|}\prod_{ v\in V(\widetilde{H})\setminus V(M)}(\lambda-d_{H}(v))\] \[+\sum_{e\in M\in\mathcal{M}(\widetilde{H})}(-1)^{(k-1)|M|}\prod_{ v\in V(\widetilde{H})\setminus V(M)}(\lambda-d_{H}(v))\] \[= \sum_{M\in\mathcal{M}(\widetilde{H}\setminus e)}(-1)^{(k-1)|M|} \prod_{v\in V(\widetilde{H}\setminus e)\setminus V(M)}(\lambda-d_{H}(v))\] \[+\sum_{M\setminus\{e\}\in\mathcal{M}(\widetilde{H}-V(e))}(-1)^{( k-1)(|M\setminus\{e\}|+1)}\prod_{v\in V(\widetilde{H}-V(e))\setminus V(M \setminus\{e\})}(\lambda-d_{H}(v))\] \[= \varphi_{H}(\widetilde{H}\setminus e)+(-1)^{k-1}\varphi_{H}( \widetilde{H}-V(e)).\] (3) Suppose that \(I=\{e_{1},\ldots,e_{s}\}\). It follows from Theorem 3.1 (2) that \[\varphi_{H}(\widetilde{H}) =\varphi_{H}(\widetilde{H}\setminus e_{1})+(-1)^{k-1}\varphi_{H} (\widetilde{H}-V(e_{1}))\] \[=\varphi_{H}(\widetilde{H}\setminus\{e_{1},e_{2}\})+(-1)^{k-1} \varphi_{H}(\widetilde{H}\setminus e_{1}-V(e_{2}))+(-1)^{k-1}\varphi_{H}( \widetilde{H}-V(e_{1})).\] Since \(\widetilde{H}\setminus e_{1}-V(e_{2})=\widetilde{H}-V(e_{2})\), we have \[\varphi_{H}(\widetilde{H})=\varphi_{H}(\widetilde{H}\setminus\{e_{1},e_{2}\} )+(-1)^{k-1}\varphi_{H}(\widetilde{H}-V(e_{2}))+(-1)^{k-1}\varphi_{H}( \widetilde{H}-V(e_{1})).\] Repeatedly using Theorem 3.1 (2), we get \[\varphi_{H}(\widetilde{H})=\varphi_{H}(\widetilde{H}\setminus I)+(-1)^{k-1} \sum_{e\in I}\varphi_{H}(\widetilde{H}-V(e)). \tag{3.2}\] When \(I=E_{\widetilde{H}}(v)\), the vertex \(v\) is an isolated vertex on \(H\setminus I\). By (3.2) and Theorem 3.1 (1), we thus have that \[\varphi_{H}(\widetilde{H})=(\lambda-d_{H}(v))\varphi_{H}(\widetilde{H}-v)+(-1 )^{k-1}\sum_{e\in E_{\widetilde{H}}(v)}\varphi_{H}(\widetilde{H}-V(e)).\] (4) By (3.1), we have \[\frac{\mathrm{d}}{\mathrm{d}\lambda}\varphi_{H}(\widetilde{H}) =\sum_{M\in\mathcal{M}(\widetilde{H})}\sum_{v\in V(\widetilde{H}) \setminus V(M)}(-1)^{(k-1)|M|}\prod_{v\neq u\in V(\widetilde{H})\setminus V(M) }(\lambda-d_{H}(u))\] \[=\sum_{M\in\mathcal{M}(\widetilde{H})}\sum_{v\in V(\widetilde{H} )\setminus V(M)}(-1)^{(k-1)|M|}\prod_{u\in V(\widetilde{H}-v)\setminus V(M)}( \lambda-d_{H}(u)). \tag{3.3}\] For any \(v\in V(\widetilde{H})\), a matching of \(\widetilde{H}\) without \(v\) is a matching of \(\widetilde{H}-v\). So \(\mathcal{M}(\widetilde{H}-v)\) can be seen as the set of all matchings without \(v\) in \(\widetilde{H}\). From (3.3), we obtain \[\frac{\mathrm{d}}{\mathrm{d}\lambda}\varphi_{H}(\widetilde{H}) =\sum_{v\in V(\widetilde{H})}\sum_{M\in\mathcal{M}(\widetilde{H}- v)}(-1)^{(k-1)|M|}\prod_{u\in V(\widetilde{H}-v)\setminus V(M)}(\lambda-d_{H}(u))\] \[=\sum_{v\in V(\widetilde{H})}\varphi_{H}(\widetilde{H}-v).\] Next, we will give a result about the zero roots of the Laplacian matching polynomial of uniform hypertrees. For this we need a result about the eigenvalues of principal sub-tensor of Laplacian tensor and the relationship between the eigenvalue of weighted adjacency tensor and the weighted matching polynomial. For a non-empty \(S\subseteq V(H)\), let \(\mathcal{L}_{H}[S]=(l_{i_{1}\cdots i_{k}})\) denote the \(k\)-order \(|S|\)-dimensional principal sub-tensor of \(\mathcal{L}_{H}\), where \(i_{1},\ldots,i_{k}\in S\). When \(S=V(H)\setminus\{v\}\), \(\mathcal{L}_{H}[S]\) is simply written as \(\mathcal{L}_{H}(v)\). A tensor is called a \(\mathcal{Z}\)-tensor if all of its off-diagonal entries are non-positive. 
Clearly, \(\mathcal{L}_{H}[S]\) is a \(\mathcal{Z}\)-tensor for any non-empty \(S\subseteq V(H)\). Applying some properties of \(\mathcal{Z}\)-tensor, we obtain the following result. **Lemma 3.2**.: _Let \(H\) be a uniform connected hypergraph. For any non-empty proper sub-set \(S\subset V(H)\), the real eigenvalues of \(\mathcal{L}_{H}[S]\) are all greater than zero._ Proof.: For any non-empty proper sub-set \(S\subset V(H)\), let \(\tau(\mathcal{L}_{H}[S])\) denote the minimum real part of all eigenvalues of \(\mathcal{L}_{H}[S]\). For a non-empty proper sub-set \(U\subset V(H)\) satisfying \(U\supseteq S\), it is known that \(\tau(\mathcal{L}_{H}[U])\leq\tau(\mathcal{L}_{H}[S])\)[14, Theorem 3.1]. Thus, we have \[\min_{v\in V(H)}\tau(\mathcal{L}_{H}(v))\leq\tau(\mathcal{L}_{H}[S]).\] By [8, Proposition 2.4], \(\tau(\mathcal{L}_{H}(v))\) is the minimum H-eigenvalue of \(\mathcal{L}_{H}(v)\) for any \(v\in V(H)\). It is shown that the minimum H-eigenvalue of \(\mathcal{L}_{H}(v)\) is greater than zero for uniform connected hypergraph \(H\) in [2, Lemma 2.1 and Theorem 3.1]. Then we have \(\tau(\mathcal{L}_{H}(v))>0\). Thus \(0<\min_{v\in V(H)}\tau(\mathcal{L}_{H}(v))\leq\tau(\mathcal{L}_{H}[S])\), which implies that the real eigenvalues of \(\mathcal{L}_{H}[S]\) are all greater than zero. For a \(k\)-uniform hypergraph \(H\) and the weighting function \(\mathbf{w}:V(H)\cup E(H)\to\mathbb{C}\), Wan et al. [15] defined the weighted adjacency tensor \(\mathcal{A}_{H,\mathbf{w}}=(a_{i_{1}\dots i_{k}})\), where \[a_{i_{1}\dots i_{k}}=\begin{cases}\mathbf{w}(v)&\text{if }i_{1}=\dots=i_{k}=v \in V(H),\\ \frac{\mathbf{w}(e)}{(k-1)!}&\text{if }\{i_{1},\dots,i_{k}\}=e\in E(H),\\ 0&\text{otherwise}.\end{cases}\] They determined all eigenvalues of the weighted adjacency tensor of uniform hypertrees by means of the weighted matching polynomial. **Lemma 3.3**.: _[_15_, Theorem2]_ _Let \(T=(V(T),E(T))\) be a \(k\)-uniform hypertree for \(k\geq 3\). Let \(\mathbf{w}:V(T)\cup E(T)\to\mathbb{C}\) be a weighting function on \(T\). Then \(\lambda\) is an eigenvalue of \(\mathcal{A}_{T,\mathbf{w}}\) if and only if there exists a sub-hypertree \(\widetilde{T}\) of \(T\) (including isolated vertices) such that \(\lambda\) is a root of the weighted matching polynomial_ \[\sum_{M\in\mathcal{M}(\widetilde{T})}(-1)^{|M|}\prod_{e\in M}\mathbf{w}(e)^{k }\prod_{v\in V(\widetilde{T})\setminus V(M)}(\lambda-\mathbf{w}(v)).\] We are now ready to derive the result as follows. **Theorem 3.4**.: _Let \(T\) be a \(k\)-uniform hypertree. Then zero is a simple root of the polynomial \(\varphi_{T}(T)\). Moreover, zero is not a root of the polynomial \(\varphi_{T}(\widetilde{T})\) for any non-trivial sub-hypertree \(\widetilde{T}\) of \(T\)._ Proof.: When \(k=2\), \(\varphi_{T}(T)\) is the Laplacian matching polynomial of tree \(T\). It is shown that \(\varphi_{T}(T)\) is equal to the Laplacian characteristic polynomial of \(T\) in the [11, Theorem3.3]. Since zero is a simple root of the Laplacian characteristic polynomial of \(T\), zero is a simple root of \(\varphi_{T}(T)\). By [16, Theorem 2.7], for any non-trivial sub-tree \(\widetilde{T}\) of \(T\), it is easy to check that \(\varphi_{T}(\widetilde{T})\) is equal to the characteristic polynomial of the Laplacian principal sub-matrix \(L_{T}(w)\) of \(T\). Since zero is not a root of the characteristic polynomial of \(L_{T}(w)\), zero is not a root of \(\varphi_{T}(\widetilde{T})\). In the following, we consider the case \(k\geq 3\). 
Clearly, for any sub-hypertree \(\widetilde{T}\) of \(T\), if we choose the weighting function \(\mathbf{w}\) on \(\widetilde{T}\) such that \(\mathbf{w}(v)=d_{T}(v)\) for all \(v\in V(\widetilde{T})\) and \(\mathbf{w}(e)=-1\) for all \(e\in E(\widetilde{T})\), then \(\mathcal{A}_{\widetilde{T},\mathbf{w}}\) is exactly the principal sub-tensor \(\mathcal{L}_{T}[V(\widetilde{T})]\) of \(\mathcal{L}_{T}\), and the weighted matching polynomial of \(\widetilde{T}\) is exactly \(\varphi_{T}(\widetilde{T})\). It follows from Lemma 3.3 that the roots of \(\varphi_{T}(\widetilde{T})\) is the eigenvalues of \(\mathcal{L}_{T}[V(\widetilde{T})]\). When \(\widetilde{T}\) is a non-trivial sub-hypertree of \(T\), by Lemma 3.2, we know that zero is not the eigenvalue of \(\mathcal{L}_{T}[V(\widetilde{T})]\), which implies that zero is not a root of the polynomial \(\varphi_{T}(\widetilde{T})\). Since zero is a Laplacian eigenvalue of \(T\), by [15, Corollary4], there exists a sub-hypertree of \(T\) such that zero is the root of the Laplacian matching polynomial of it with respect to \(T\). It is known that zero is not a root of \(\varphi_{T}(\widetilde{T})\) for any non-trivial sub-hypertree \(\widetilde{T}\) of \(T\), which implies that zero is a root of \(\varphi_{T}(T)\). Next, we prove that zero is a simple root of \(\varphi_{T}(T)\). By Theorem 3.1 (4), we have \[\frac{\mathrm{d}}{\mathrm{d}\lambda}\varphi_{T}(T)=\sum_{v\in V(T)}\varphi_{T }(T-v). \tag{3.4}\] Given a vertex \(v\in V(T)\), we know that \(T-v\) is not connected and each connected component is sub-hypertree of \(T\). By Theorem 3.1 (1), the roots of \(\varphi_{T}(T-v)\) are the eigenvalues of \(\mathcal{L}_{T}[V(T-v)]\). By Lemma 3.2, the real eigenvalues of \(\mathcal{L}_{T}[V(T-v)]\) are all greater than zero, which implies that all real roots of \(\varphi_{T}(T-v)\) are greater than zero. Note that \(\varphi_{T}(T-v)\) is a real coefficient polynomial, whose all of imaginary part non-zero complex roots occur in pairs. So the product of all roots of \(\varphi_{T}(T-v)\) is greater than zero. Let \(\lambda_{1}^{(v)},\ldots,\lambda_{|V(T)|-1}^{(v)}\) denote the roots of \(\varphi_{T}(T-v)\) for each \(v\in V(T)\) and we have \(\lambda_{1}^{(v)}\cdots\lambda_{|V(T)|-1}^{(v)}>0\). Then the constant term of the polynomial \(\sum_{v\in V(T)}\varphi_{T}(T-v)\) is \((-1)^{|V(T)|-1}\sum_{v\in V(T)}\lambda_{1}^{(v)}\cdots\lambda_{|V(T)|-1}^{(v)}\neq 0\), which implies that zero is not a root of \(\sum_{v\in V(T)}\varphi_{T}(T-v)\). By (3.4), zero is not a root of \(\frac{\mathrm{d}}{\mathrm{d}\lambda}\varphi_{T}(T)\). Thus, zero is a simple root of \(\varphi_{T}(T)\). ## 4 The multiplicity of the zero Laplacian eigenvalue of uniform hypertrees In this section, we apply the Laplacian characteristic polynomial and the Laplacian matching polynomial to give the multiplicity of the zero Laplacian eigenvalue of uniform hypertrees, which shows that Conjecture 1.1 is true. For a \(k\)-uniform hypertree \(T=(V(T),E(T))\) and a vertex \(w\in V(T)\), recall that \(F_{v}=F_{v}(x_{i}:i\in V(T))=(\lambda-d_{T}(v))x_{v}^{k-1}+\sum_{e\in E_{T}(v)} \mathbf{x}_{e\setminus\{v\}}\) and \(f_{v}=F_{v}|_{x_{w}=1}\) for all \(v\in V(T)\). Let \(\mathcal{V}^{T}\) be the affine variety defined by the polynomials \(f_{v}\) for all \(v\in V(T)\setminus\{w\}\). 
By Lemma 2.1, the Laplacian characteristic polynomial of \(T\) is \[\phi(\mathcal{L}_{T}) =\phi(\mathcal{L}_{T}(w))^{k-1}\prod_{\mathbf{p}\in\mathcal{V}^{ T}}(\lambda-d_{T}(w)+\sum_{e\in E_{T}(w)}\mathbf{p}_{e\setminus\{w\}})^{m( \mathbf{p})}\] \[=\phi(\mathcal{L}_{T}(w))^{k-1}\prod_{\mathbf{p}\in\mathcal{V}^{ T}}f_{w}(\mathbf{p})^{m(\mathbf{p})}. \tag{4.1}\] From Lemma 3.2, we know that zero is not the eigenvalue of \(\mathcal{L}_{T}(w)\). Hence, in order to determine the multiplicity of the zero Laplacian eigenvalue of \(T\), we only need to consider \(\prod_{\mathbf{p}\in\mathcal{V}^{T}}f_{w}(\mathbf{p})^{m(\mathbf{p})}\) in (4.1). Let \(\mathbf{p}=(p_{i})\) be a point in affine variety \(\mathcal{V}^{T}\), and let \(\mathbf{q}=(q_{i})\) be a \(|V(T)|\)-dimensional vector with components \(q_{w}=1\) and \(q_{i}=p_{i}\) for all \(i\in V(T)\setminus\{w\}\). Then we have \[f_{w}(\mathbf{p})=F_{w}(q_{i}:i\in V(T))=F_{w}(\mathbf{q}),\] and \(f_{v}(\mathbf{p})=F_{v}(q_{i}:i\in V(T))=F_{v}(\mathbf{q})=0\) for all \(v\in V(T)\setminus\{w\}\). When \(\lambda=0\). If \(F_{w}(\mathbf{q})=0\), then \(\mathbf{q}\) is an eigenvector corresponding to the zero Laplacian eigenvalue of \(T\). It is shown that all components of the eigenvector corresponding to the zero Laplacian eigenvalue of a connected uniform hypergraph are non-zero in the [9, Theorem 4.1 (i)]. Therefore, the all components of \(\mathbf{p}\in\mathcal{V}^{T}\) satisfying \(f_{w}(\mathbf{p})=0\) are non-zero when \(\lambda=0\). It implies that the multiplicity of the zero Laplacian eigenvalue of \(T\) is only related to the points having all components non-zero in \(\mathcal{V}^{T}\). **Lemma 4.1**.: _Let \(T\) be a \(k\)-uniform hypertree and \(w\) be a vertex on \(T\). If \(\mathbf{p}\in\mathcal{V}^{T}\) have all components non-zero, then_ \[\mathbf{p}_{e\setminus\{w\}}=\frac{(-1)^{k-1}\varphi_{T}(T-V(e))}{\varphi_{T} (T-w)}\] _for each \(e\in E_{T}(w)\)._ Proof.: We prove the result by the induction on the number of edges of \(T\). When \(|E(T)|=1\), we have \(\varphi_{T}(T-w)=(\lambda-1)^{k-1}\) and \(\varphi_{T}(T-V(e))=1\) for the edge \(e\in E_{T}(w)\). From (2.9), we know that \(\mathbf{p}_{e\setminus\{w\}}=(\frac{-1}{\lambda-1})^{k-1}\), which implies that \[\mathbf{p}_{e\setminus\{w\}}=\frac{(-1)^{k-1}\varphi_{T}(T-V(e))}{ \varphi_{T}(T-w)}.\] So the assertion holds. Assuming that the result holds for any \(|E(T)|\leq r\), we consider the case \(|E(T)|=r+1\). When \(w\) is a cut vertex of \(T\), \(T\) has \(d_{T}(w)(>1)\) branches associated with \(w\) and each \(e\in E_{T}(w)\) belongs to a distinct branch. Let \(\widetilde{T}_{i}\) be the branch of \(T\) with edge \(e_{i}\in E_{T}(w)\) for each \(i\in[d_{T}(w)]\) and we know that \(|E(\widetilde{T}_{i})|\leq r\). By the induction hypothesis, for \(\mathbf{p}^{(i)}\in\mathcal{V}^{\widetilde{T}_{i}}\) having all components non-zero, we have \[\mathbf{p}^{(i)}_{e_{i}\setminus\{w\}}=\frac{(-1)^{k-1}\varphi_{ \widetilde{T}_{i}}(\widetilde{T}_{i}-V(e_{i}))}{\varphi_{\widetilde{T}_{i}}( \widetilde{T}_{i}-w)}.\] By the definition of the Laplacian matching polynomial, we have \(\varphi_{\widetilde{T}_{i}}(\widetilde{T}_{i}-V(e_{i}))=\varphi_{T}( \widetilde{T}_{i}-V(e_{i}))\) and \(\varphi_{\widetilde{T}_{i}}(\widetilde{T}_{i}-w)=\varphi_{T}(\widetilde{T}_{i }-w)\). 
Then \[\mathbf{p}^{(i)}_{e_{i}\setminus\{w\}} =\frac{(-1)^{k-1}\varphi_{T}(\widetilde{T}_{i}-V(e_{i}))}{\varphi _{T}(\widetilde{T}_{i}-w)}\] \[=\frac{(-1)^{k-1}\varphi_{T}(\widetilde{T}_{i}-V(e_{i}))\prod_{ \begin{subarray}{c}j\in[d_{T}(w)]\\ j\neq i\end{subarray}}\varphi_{T}(\widetilde{T}_{j}-w)}{\prod_{j\in[d_{T}(w)]} \varphi_{T}(\widetilde{T}_{j}-w)}. \tag{4.2}\] Note that \(T-w\) is the disjoint union of \(\widetilde{T}_{i}-w\) for all \(i\in[d_{T}(w)]\), and \(T-V(e_{j})\) is the disjoint union of \(\widetilde{T}_{j}-V(e_{j})\) and \(\widetilde{T}_{i}-w\) for all \(i\neq j\in[d_{T}(w)]\). It follows from Theorem 3.1 (1) that \[\prod_{j\in[d_{T}(w)]}\varphi_{T}(\widetilde{T}_{j}-w)=\varphi_{T}(T-w),\] and \[\varphi_{T}(\widetilde{T}_{i}-V(e_{i}))\prod_{\begin{subarray}{c}j\in[d_{T}( w)]\\ j\neq i\end{subarray}}\varphi_{T}(\widetilde{T}_{j}-w)=\varphi_{T}(T-V(e_{i})).\] By Theorem 2.4 and (4.2), for \(\mathbf{p}\in\mathcal{V}^{T}\) having all components non-zero, we get \[\mathbf{p}_{e_{i}\setminus\{w\}}=\mathbf{p}_{e_{i}\setminus\{w\}}^{(i)}=\frac{( -1)^{k-1}\varphi_{T}(T-V(e_{i}))}{\varphi_{T}(T-w)}.\] When \(w\) is not a cut vertex of \(T\), the degree of \(w\) is clearly one. Let the edge \(\widehat{e}=\{v_{1},\ldots,v_{k-1},w\}\). Then \(T\setminus\widehat{e}\) has \(k\) connected components and we use \(\widehat{T}_{t}\) to denote the connected component containing \(v_{t}\) for each \(t\in[k]\). For all \(v\in V(T)\), recall that \(F_{v}=F_{v}(x_{i}:i\in V(T))=(\lambda-d_{T}(v))x_{v}^{k-1}+\sum_{e\in E_{T}(v)} \mathbf{x}_{e\setminus\{v\}}\) and \(f_{v}=F_{v}|_{x_{w}=1}\). For all \(t\in[k-1]\) and any \(v\in V(\widehat{T}_{t})\setminus\{v_{t}\}\), note that \(f_{v}=f_{v}(x_{i}:i\in V(\widehat{T}_{t}))\) is a homogeneous polynomial. Since \(\mathbf{p}=(p_{i})\in\mathcal{V}^{T}\) have all components non-zero, we get \[f_{v}(\mathbf{p})=f_{v}(p_{i}:i\in V(\widehat{T}_{t}))=f_{v}\left(\frac{p_{i}} {p_{v_{t}}}:i\in V(\widehat{T}_{t})\right)=0. \tag{4.3}\] Fix \(t\in[k-1]\), we consider the sub-hypertree \(\widehat{T}_{t}\). For all \(v\in V(\widehat{T}_{t})\setminus\{v_{t}\}\), let \(\widehat{F}_{v}=\widehat{F}_{v}(x_{i}:i\in V(\widehat{T}_{t}))=(\lambda-d_{ \widehat{T}_{t}}(v))x_{v}^{k-1}+\sum_{e\in E_{\widehat{T}_{t}}(v)}\mathbf{x}_ {e\setminus\{v\}}\) and \(\widehat{f}_{v}=\widehat{F}_{v}|_{x_{v_{t}}=1}\). It is easy to check that \(\widehat{F}_{v}=f_{v}\). Let \(q_{i}=\frac{p_{i}}{p_{v_{t}}}\) for all \(i\in V(\widehat{T}_{t})\) and note that \(q_{v_{t}}=1\). By (4.3), we have \[\widehat{f}_{v}(q_{i}:i\in V(\widehat{T}_{t})\setminus\{v_{t}\})=\widehat{F}_{ v}(q_{i}:i\in V(\widehat{T}_{t}))=f_{v}(q_{i}:i\in V(\widehat{T}_{t}))=0 \tag{4.4}\] for all \(v\in V(\widehat{T}_{t})\setminus\{v_{t}\}\). Let the vector \(\mathbf{q}=(q_{i})\) for \(i\in V(\widehat{T}_{t})\setminus\{v_{t}\}\). Then \(\mathbf{q}\) is a point in the affine variety \(\mathcal{V}^{\widehat{T}_{t}}\) defined by the polynomials \(\widehat{f}_{v}\) for all \(v\in V(\widehat{T}_{t})\setminus\{v_{t}\}\), and the all components of \(\mathbf{q}\) are non-zero. 
By the induction hypothesis, for each \(e\in E_{\widehat{T}_{t}}(v_{t})\), we have \[\mathbf{q}_{e\setminus\{v_{t}\}}=\frac{(-1)^{k-1}\varphi_{\widehat{T}_{t}}( \widehat{T}_{t}-V(e))}{\varphi_{\widehat{T}_{t}}(\widehat{T}_{t}-v_{t})}.\] By the definition of the Laplacian matching polynomial, we have \(\varphi_{\widehat{T}_{t}}(\widehat{T}_{t}-V(e))=\varphi_{T}(\widehat{T}_{t}- V(e))\) and \(\varphi_{\widehat{T}_{t}}(\widehat{T}_{t}-v_{t})=\varphi_{T}(\widehat{T}_{t}-v_{t})\). Then \[\mathbf{q}_{e\setminus\{v_{t}\}}=\frac{(-1)^{k-1}\varphi_{T}(\widehat{T}_{t}- V(e))}{\varphi_{T}(\widehat{T}_{t}-v_{t})}=\frac{\mathbf{p}_{e\setminus\{v_{t}\}}}{p _{v_{t}}^{k-1}}.\] Thus, for \(\mathbf{p}\in\mathcal{V}^{T}\) having all components non-zero and each \(e\in E_{\widehat{T}_{t}}(v_{t})\), we get \[\mathbf{p}_{e\setminus\{v_{t}\}}=\frac{(-1)^{k-1}\varphi_{T}(\widehat{T}_{t}-V( e))}{\varphi_{T}(\widehat{T}_{t}-v_{t})}p_{v_{t}}^{k-1}. \tag{4.5}\] For each \(t\in[k-1]\), recall that \[f_{v_{t}}(\mathbf{p})=(\lambda-d_{T}(v_{t}))p_{v_{t}}^{k-1}+\mathbf{p}_{\widehat {e}\setminus\{v_{t},w\}}+\sum_{e\in E_{\widehat{T}_{t}}(v_{t})}\mathbf{p}_{e \setminus\{v_{t}\}}=0.\] By (4.5) and Theorem 3.1 (3), we have \[\mathbf{p}_{\widehat{e}\setminus\{v_{t},w\}} =-\left(\lambda-d_{T}(v_{t})+\sum_{e\in E_{\widehat{T}_{t}}(v_{t} )}\frac{(-1)^{k-1}\varphi_{T}(\widehat{T}_{t}-V(e))}{\varphi_{T}(\widehat{T}_{ t}-v_{t})}\right)p_{v_{t}}^{k-1}\] \[=-\frac{\varphi_{T}(\widehat{T}_{t})}{\varphi_{T}(\widehat{T}_{t }-v_{t})}p_{v_{t}}^{k-1}.\] Combining these equations for all \(t\in[k-1]\), we get \[\prod_{t=1}^{k-1}\mathbf{p}_{\widehat{e}\setminus\{v_{t},w\}}=(-1)^{k-1}\prod _{t=1}^{k-1}\frac{\varphi_{T}(\widehat{T}_{t})}{\varphi_{T}(\widehat{T}_{t}-v _{t})}p_{v_{t}}^{k-1}.\] Since \(\prod_{t=1}^{k-1}\mathbf{p}_{\widehat{e}\setminus\{v_{t},w\}}=\prod_{t=1}^{k- 1}p_{v_{t}}^{k-2}\), we have \[\mathbf{p}_{\widehat{e}\setminus\{w\}}=\frac{\prod_{t=1}^{k-1}p_{v_{t}}^{k-1} }{\prod_{t=1}^{k-1}\mathbf{p}_{\widehat{e}\setminus\{v_{t},w\}}}=(-1)^{k-1} \prod_{t=1}^{k-1}\frac{\varphi_{T}(\widehat{T}_{t}-v_{t})}{\varphi_{T}(\widehat {T}_{t})}.\] Note that for all \(t\in[k-1]\), the disjoint union of \(\widehat{T}_{t}-v_{t}\) is \(T-V(\widehat{e})\) and the disjoint union of \(\widehat{T}_{t}\) is \(T-w\). It follows from Theorem 3.1 (1) that \[\mathbf{p}_{\widehat{e}\setminus\{w\}}=\frac{(-1)^{k-1}\varphi_{T}(T-V( \widehat{e}))}{\varphi_{T}(T-w)}.\] For the point \(\mathbf{p}\in\mathcal{V}^{T}\), we have \(f_{w}(\mathbf{p})=\lambda-d_{T}(w)+\sum_{e\in E_{T}(w)}\mathbf{p}_{e\setminus \{w\}}\). If \(\mathbf{p}\) have all components non-zero, by Lemma 4.1 and Theorem 3.1 (3), we get \[f_{w}(\mathbf{p}) =\lambda-d_{T}(w)+\sum_{e\in E_{T}(w)}\frac{(-1)^{k-1}\varphi_{T}(T -V(e))}{\varphi_{T}(T-w)}\] \[=\frac{\varphi_{T}(T)}{\varphi_{T}(T-w)}. \tag{4.6}\] Note that \(T-w\) is not connected and each connected component is a non-trivial sub-hypertree of \(T\). From Theorem 3.1 (1) and Theorem 3.4, we know that zero is not the root of \(\varphi_{T}(T-w)\) and is a simple root of \(\varphi_{T}(T)\). Let \(n_{0}(T)\) denote the multiplicity of the zero Laplacian eigenvalue of \(T\). Since \(n_{0}(T)\) is only related to \(\mathbf{p}\) having all components non-zero in \(\mathcal{V}^{T}\), combining (4.1) with (4.6), we have \[n_{0}(T)=\sum_{\begin{subarray}{c}\mathbf{p}\in\mathcal{V}^{T}\\ \forall p_{i}\neq 0\end{subarray}}m(\mathbf{p}), \tag{4.7}\] where \(m(\mathbf{p})\) is the multiplicity of \(\mathbf{p}=(p_{i})\) in \(\mathcal{V}^{T}\). 
We are now ready to determine the multiplicity of the zero Laplacian eigenvalue of \(T\). **Theorem 4.2**.: _Let \(T=(V(T),E(T))\) be a \(k\)-uniform hypertree. Then the multiplicity of the zero Laplacian eigenvalue of \(T\) is \(k^{|E(T)|(k-2)}\)._ Proof.: We prove the result by the induction on the number of edges of \(T\). When \(|E(T)|=1\). It is shown that the multiplicity of the zero Laplacian eigenvalue of \(T\) is \(k^{k-2}\) in the [18, Theorem 4.9]. So the assertion holds. Assuming that the result holds when \(|E(T)|=r\), we consider the case \(|E(T)|=r+1\). Let \(w\) be a non-pendent vertex on a pendant edge of \(T\), and \(\widetilde{T}\) denote the \(k\)-uniform hypertree obtained by removing this pendant edge and pendent vertices on it from \(T\). By Corollary 2.5, the Laplacian characteristic polynomial of \(T\) is \[\phi(\mathcal{L}_{T})= (\lambda-1)^{(k-1)^{(r+1)(k-1)+1}}\phi(\mathcal{L}_{\widetilde{T }}(w))^{(k-1)^{k}}\prod_{\mathbf{p}\in\mathcal{V}^{\widetilde{T}}}(\lambda-d_ {T}(w)+\sum_{e\in E_{\widetilde{T}}(w)}\mathbf{p}_{e\setminus\{w\}})^{m( \mathbf{p})K_{1}}\] \[\times\prod_{\mathbf{p}\in\mathcal{V}^{\widetilde{T}}}(\lambda- d_{T}(w)+(\frac{-1}{\lambda-1})^{k-1}+\sum_{e\in E_{\widetilde{T}}(w)}\mathbf{p}_{e \setminus\{w\}})^{m(\mathbf{p})K_{2}}, \tag{4.8}\] where \(K_{1}=(k-1)^{k-1}-k^{k-2}\) and \(K_{2}=k^{k-2}\). Clearly, \(w\) is a cut vertex on \(T\). Suppose that the branches of \(T\) associated with \(w\) are \(\widetilde{T}\) and a one-edge hypergraph, denoted by \(T^{\prime}\). By (2.3), we know that \(\mathcal{V}^{T}=\mathcal{V}^{\widetilde{T}}\times\mathcal{V}^{T^{\prime}}\). Then we have \(\mathbf{r}=\begin{pmatrix}\mathbf{p}\\ \mathbf{q}\end{pmatrix}\) for any \(\mathbf{r}\in\mathcal{V}^{T}\), where \(\mathbf{p}\in\mathcal{V}^{\widetilde{T}}\), \(\mathbf{q}\in\mathcal{V}^{T^{\prime}}\). It is known from (4.7) that the multiplicity of the zero Laplacian eigenvalue of \(T\) is only related to \(\mathbf{r}\in\mathcal{V}^{T}\) having all components non-zero. By (2.10), it implies that we only need to consider \[\prod_{\mathbf{p}\in\mathcal{V}^{\widetilde{T}}}(\lambda-d_{T}(w)+(\frac{-1}{ \lambda-1})^{k-1}+\sum_{e\in E_{\widetilde{T}}(w)}\mathbf{p}_{e\setminus\{w \}})^{m(\mathbf{p})K_{2}} \tag{4.9}\] in (4.8) and \(\mathbf{p}\) have all components non-zero in \(\mathcal{V}^{\widetilde{T}}\). By Lemma 4.1, for \(\mathbf{p}\in\mathcal{V}^{\widetilde{T}}\) having all components non-zero, we have \[\lambda-d_{T}(w)+(\frac{-1}{\lambda-1})^{k-1}+\sum_{e\in E_{ \widetilde{T}}(w)}\mathbf{p}_{e\setminus\{w\}}\] \[= \lambda-d_{T}(w)+(\frac{-1}{\lambda-1})^{k-1}+\sum_{e\in E_{ \widetilde{T}}(w)}\frac{(-1)^{k-1}\varphi_{\widetilde{T}}(\widetilde{T}-V(e) )}{\varphi_{\widetilde{T}}(\widetilde{T}-w)}.\] By the definition of the Laplacian matching polynomial, we know that\(\varphi_{\widetilde{T}}(\widetilde{T}-w)=\varphi_{T}(\widetilde{T}-w)\) and \(\varphi_{\widetilde{T}}(\widetilde{T}-V(e))=\varphi_{T}(\widetilde{T}-V(e))\) for each \(e\in E_{\widetilde{T}}(w)\). It follows from Theorem 3.1 (3) that \[\lambda-d_{T}(w)+(\frac{-1}{\lambda-1})^{k-1}+\sum_{e\in E_{ \widetilde{T}}(w)}\mathbf{p}_{e\setminus\{w\}}\] \[= \frac{(\lambda-1)^{k-1}\varphi_{T}(\widetilde{T})+(-1)^{k-1} \varphi_{T}(\widetilde{T}-w)}{(\lambda-1)^{k-1}\varphi_{T}(\widetilde{T}-w)}. \tag{4.10}\] Let pendant edge \(\widetilde{e}=\{v_{1},\ldots,v_{k-1},w\}\), where \(v_{1},\ldots,v_{k-1}\) are the pendent vertices. 
Note that the Laplacian matching polynomial of \(v_{i}\) with respect to \(T\) is \(\lambda-1\) for each \(i\in[k-1]\). Since the disjoint union of \(\widetilde{T}-w\) and \(v_{i}\) for all \(i\in[k-1]\) is \(T-w\), by Theorem 3.1 (1), we have \[(\lambda-1)^{k-1}\varphi_{T}(\widetilde{T}-w)=\varphi_{T}(T-w).\] Since \((\lambda-1)^{k-1}\varphi_{T}(\widetilde{T})+(-1)^{k-1}\varphi_{T}(\widetilde{T}-w)=(\lambda-d_{T}(v_{i}))\varphi_{T}(T-v_{i})+(-1)^{k-1}\varphi_{T}(T-V(\widetilde{e}))\) for any \(i\in[k-1]\), by Theorem 3.1 (3), we have \[(\lambda-1)^{k-1}\varphi_{T}(\widetilde{T})+(-1)^{k-1}\varphi_{T}(\widetilde{T}-w)=\varphi_{T}(T).\] From (4.10), for \(\mathbf{p}\in\mathcal{V}^{\widetilde{T}}\) having all components non-zero, we obtain \[\lambda-d_{T}(w)+(\frac{-1}{\lambda-1})^{k-1}+\sum_{e\in E_{\widetilde{T}}(w)}\mathbf{p}_{e\setminus\{w\}}=\frac{\varphi_{T}(T)}{\varphi_{T}(T-w)}.\] Note that \(T-w\) is not connected and each connected component is a non-trivial sub-hypertree of \(T\). It is known from Theorem 3.1 (1) and Theorem 3.4 that zero is not a root of \(\varphi_{T}(T-w)\) and is a simple root of \(\varphi_{T}(T)\). By (4.9), we get \[n_{0}(T)=k^{k-2}\sum_{\begin{subarray}{c}\mathbf{p}\in\mathcal{V}^{\widetilde{T}}\\ \forall p_{i}\neq 0\end{subarray}}m(\mathbf{p}).\] It follows from (4.7) that \(\sum_{\begin{subarray}{c}\mathbf{p}\in\mathcal{V}^{\widetilde{T}}\\ \forall p_{i}\neq 0\end{subarray}}m(\mathbf{p})=n_{0}(\widetilde{T})\). By the induction hypothesis, we have \(n_{0}(\widetilde{T})=k^{r(k-2)}\). Thus, \(n_{0}(T)=k^{k-2}n_{0}(\widetilde{T})=k^{(r+1)(k-2)}\).
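To make the objects above concrete, the following minimal sketch (not part of the original argument) enumerates the matchings of a small, assumed 3-uniform hypertree with two edges, evaluates the Laplacian matching polynomial (3.1) with SymPy, and checks that zero is a simple root of \(\varphi_{T}(T)\), as Theorem 3.4 asserts; the example hypertree and all identifiers are illustrative only.

```python
from itertools import combinations
import sympy as sp

lam = sp.symbols('lambda')

# Assumed toy example: 3-uniform hypertree T with two edges sharing vertex 3.
vertices = [1, 2, 3, 4, 5]
edges = [frozenset({1, 2, 3}), frozenset({3, 4, 5})]
k = 3
deg = {v: sum(v in e for e in edges) for v in vertices}  # degrees d_T(v)

def is_matching(M):
    used = set()
    for e in M:
        if used & e:          # two edges share a vertex -> not a matching
            return False
        used |= e
    return True

def laplacian_matching_poly(sub_vertices, sub_edges):
    # phi_T(.) as in (3.1): sum over matchings M of
    # (-1)^((k-1)|M|) * prod over uncovered vertices v of (lambda - d_T(v)).
    poly = sp.Integer(0)
    for r in range(len(sub_edges) + 1):
        for M in combinations(sub_edges, r):
            if not is_matching(M):
                continue
            covered = set().union(*M) if M else set()
            term = sp.Integer(-1) ** ((k - 1) * r)
            for v in sub_vertices:
                if v not in covered:
                    term *= (lam - deg[v])
            poly += term
    return sp.expand(poly)

phi_T = laplacian_matching_poly(vertices, edges)
print(sp.factor(phi_T))                  # lambda*(lambda - 1)**2*(lambda**2 - 4*lambda + 5)
print(phi_T.subs(lam, 0))                # 0: zero is a root of phi_T(T)
print(sp.diff(phi_T, lam).subs(lam, 0))  # 5: nonzero, so the root is simple (Theorem 3.4)
```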
2305.19487
SPGNN-API: A Transferable Graph Neural Network for Attack Paths Identification and Autonomous Mitigation
Attack paths are the potential chain of malicious activities an attacker performs to compromise network assets and acquire privileges through exploiting network vulnerabilities. Attack path analysis helps organizations to identify new/unknown chains of attack vectors that reach critical assets within the network, as opposed to individual attack vectors in signature-based attack analysis. Timely identification of attack paths enables proactive mitigation of threats. Nevertheless, manual analysis of complex network configurations, vulnerabilities, and security events to identify attack paths is rarely feasible. This work proposes a novel transferable graph neural network-based model for shortest path identification. The proposed shortest path detection approach, integrated with a novel holistic and comprehensive model for identifying potential network vulnerabilities interactions, is then utilized to detect network attack paths. Our framework automates the risk assessment of attack paths indicating the propensity of the paths to enable the compromise of highly-critical assets (e.g., databases) given the network configuration, assets' criticality, and the severity of the vulnerabilities in-path to the asset. The proposed framework, named SPGNN-API, incorporates automated threat mitigation through a proactive timely tuning of the network firewall rules and zero-trust policies to break critical attack paths and bolster cyber defenses. Our evaluation process is twofold; evaluating the performance of the shortest path identification and assessing the attack path detection accuracy. Our results show that SPGNN-API largely outperforms the baseline model for shortest path identification with an average accuracy >= 95% and successfully detects 100% of the potentially compromised assets, outperforming the attack graph baseline by 47%.
Houssem Jmal, Firas Ben Hmida, Nardine Basta, Muhammad Ikram, Mohamed Ali Kaafar, Andy Walker
2023-05-31T01:48:12Z
http://arxiv.org/abs/2305.19487v2
SPGNN-API: A Transferable Graph Neural Network for Attack Paths Identification and Autonomous Mitigation ###### Abstract Attack paths are the potential chain of malicious activities an attacker performs to compromise network assets and acquire privileges through exploiting network vulnerabilities. Attack path analysis helps organizations to identify new/unknown chains of attack vectors that reach critical assets within the network, as opposed to individual attack vectors in signature-based attack analysis. Timely identification of attack paths enables proactive mitigation of threats. Nevertheless, manual analysis of complex network configurations, vulnerabilities, and security events to identify attack paths is rarely feasible. This work proposes a novel transferable graph neural network-based model for shortest path identification. The proposed shortest path detection approach, integrated with a novel holistic and comprehensive model for identifying potential network vulnerabilities interactions, is then utilized to detect network attack paths. Our framework automates the risk assessment of attack paths indicating the propensity of the paths to enable the compromise of highly-critical assets (e.g., databases) given the network configuration, assets' criticality, and the severity of the vulnerabilities in-path to the asset. The proposed framework, named SPGNN-API, incorporates automated threat mitigation through a proactive timely tuning of the network firewall rules and zero-trust policies to break critical attack paths and bolster cyber defenses. Our evaluation process is twofold; evaluating the performance of the shortest path identification and assessing the attack path detection accuracy. Our results show that SPGNN-API largely outperforms the baseline model for shortest path identification with an average accuracy \(\geq\) 95% and successfully detects 100% of the potentially compromised assets, outperforming the attack graph baseline by 47%. Graph Neural Network, Automated risk identification, zero trust, autonomous mitigation, risk assessment. ## I Introduction Cyber attacks have become not only more numerous and diverse but also more damaging and disruptive. New attack vectors and increasingly sophisticated threats are emerging every day. Attack paths, in general, are the potential chain of malicious activities an attacker performs to compromise assets and acquire network privileges through exploiting network vulnerabilities. Attack path analysis helps organizations identify previously unknown or unfamiliar chains of attack vectors that could potentially compromise critical network assets. This approach contrasts with signature-based attack analysis approaches such as vulnerability scanning, which typically focus on detecting individual attack vectors. Timely identification of attack paths enables proactive mitigation of threats before damage takes place. Nevertheless, manual processes cannot always provide the proactivity, fast response, or real-time mitigation required to deal with modern threats and threat actors, and constantly growing and dynamic network structure. An automated and efficient threat identification, characterization, and mitigation process is critical to every organization's cybersecurity infrastructure. The existing literature proposes various approaches based on attack graphs and attack trees that assess the interdependencies between vulnerabilities and the potential impact of exploitation [1, 2, 3, 4]. 
While these techniques provide a systematic perspective on potential threat scenarios in networks, their effectiveness is constrained by their inability to dynamically adapt to changes in the network structure, thus requiring the re-evaluation of the entire process. Several approaches based on deep learning (DL) have been proposed in the literature [5, 6, 7] to address this issue. For such models, network structure information is not learned, unlike Graph Neural Networks (GNN), but rather provided as input to the DL models. Consequently, the structure-based input must be re-generated every time there is a change in the network structure. This can potentially necessitate the entire DL models to be retrained, causing additional computational overhead. Another limitation of existing approaches is either being restricted to a set of predefined attacks [6] or using a set of predefined rules to define the potential interplay between vulnerabilities [8]. Given the rising complexity of cyber-attacks, a comprehensive approach is vital to ensure the security of network assets and sensitive data. **Challenges.** There are three major challenges for attack path detection: (1) **Adaptiveness**: How to develop an automated and adaptive identification of attack paths given the dynamic nature of the network structure driven by trends such as remote users, bring-your-own devices, and cloud assets? (2) **Agility**: With attackers constantly finding new ways to exploit vulnerabilities, how to comprehensively identify the potential interplay between vulnerabilities without being bound to a pre-defined set of rules or attack scenarios? (3) **Efficiency**: How to efficiently characterize and rank the risks of attack paths, and autonomously triage the ones requiring prompt response without disrupting the network functionalities? **Our Work.** Considering these challenges, we devise "Shortest Path Graph Neural Network-API" (SPGNN-API), a framework offering an autonomous identification of potential attack paths and associated risks of compromising critical assets. It further incorporates proactive mitigation of high-risk paths. (1) To address the adaptiveness challenge, we develop a novel GNN approach for attack path identification. The inductive property of GNNs enables them to leverage feature information of graph elements to efficiently generate node embeddings for previously unseen data. Additionally, GNNs incorporate network structural information as learnable features. This renders GNN-based approaches self-adaptive to dynamic changes in the network structure. (2) To tackle the agility challenge, we assume that an attacker who has compromised an asset can exploit all the underlying vulnerabilities. We rely on the GNN efficiency of graph representation learning to learn all potential vulnerability interactions that could compromise critical assets based on the CVSS base score metrics [9]. (3) To address the efficiency challenge, we automate the risk analysis of attack paths to determine their likelihood of compromising critical assets, based on factors such as network configuration, assets' criticality, and the severity of the vulnerabilities [10] in-path to the asset. We then develop autonomous mitigation of high-risk attack paths by automatically tuning the network zero-trust policies (See Section III-A) to disrupt the paths without impacting the network functionalities. 
In this work, we address a key limitation of existing GNNs that fail to capture the positional information of the nodes within the broader context of the graph structure [11, 12]. For instance, when two nodes share the same local neighborhood patterns but exist in different regions of the graph, their GNN representations will be identical. To address this, we introduce the SPGNN-API, which extends the Positional Graph Neural Network model [13] to achieve a transferable model for computing shortest paths to a predefined set of nodes representing highly-critical network assets. **Evaluation.** We conduct a three-fold evaluation process: Firstly, we evaluate the performance of the SPGNN shortest path calculation in a semi-supervised setting. Secondly, we assess the performance in a transfer-learning setting. Thirdly, we evaluate the accuracy of identifying critical attack paths. To carry out our evaluation, we use two synthetic network datasets, two real-world datasets obtained from middle-sized networks, and two widely used citation network datasets: Cora [14] and Citeseer [15]. We compare the GNN path identification performance with the state-of-the-art GNN path identification model "SPAGAN" [11]. Additionally, we compare the performance of the SPGNN-API with a state-of-the-art approach for attack path generation, "MulVAL" [8]. **Contributions.** In summary, our research contributions are: * We develop a novel transferable GNN for shortest path calculation that relies exclusively on nodes' positional embedding, regardless of other features. The presented approach is able to transfer previous learning to new tasks, hence alleviating the lack of labeled data problems. * We propose a novel GNN-based approach for network vulnerability assessment and potential attack path identification that leverages the inductive ability of the GNNs to accommodate the dynamic nature of enterprise networks without requiring continuous retraining. * We demonstrate that, unlike traditional GNN, the performance of positional GNN models is enhanced by removing self-loops achieving an average improvement \(\approx\) 5% on our six datasets with a maximum of 9%. * We develop a novel comprehensive model for learning the propensity of vulnerabilities to contribute to attacks compromising critical assets based on the CVSS base metrics without being bound to specific attack signatures or pre-defined set of rules for vulnerabilities interactions. * We formulate an autonomous risk characterization of the detected attack paths based on the network connectivity structure, asset configurations, criticality, and underlying vulnerabilities CVSS base score metrics. * We automate the mitigation of high-risk attack paths that could potentially compromise critical assets by tuning the network's zero-trust policies to break the path without disrupting the network functionalities. * We evaluate our proposed approach, the SPGNN-API, against two baseline models: the SPAGAN [11] for GNN-based shortest paths detection and MulVAL [8] for attack paths identification. Our results show that SPGNN-API outperforms the baseline models, achieving an average accuracy of over 95% for GNN shortest path identification. Moreover, our approach successfully identifies 47% more potentially compromised assets that were not detected by the baseline model, MulVAL. The rest of the paper is organized as follows: In Section II, we survey the literature. 
In Section III, we overview the zero-trust network architecture on which we base the attack paths risk assessment and mitigation. We further review different GNN architectures and limitations. Section IV details the design of our SPGNN-API framework. We evaluate our model and present our results in Section V. Finally, Section VI concludes our paper. ## II Related Work This work has two major contributions; a novel GNN approach for shortest path identification and an autonomous framework for detecting and mitigating attack paths in dynamic and complex networks. To highlight the novelty of our work, in this section, we survey the literature and differentiate our contributions from previous studies related to network vulnerability assessment and attack graphs generation (Sec. II-A) and GNN-based distance encoding and shortest path identification (Sec. II-B). ### _Network Attack Graph and Vulnerability Assessment_ We classify the existing approaches for vulnerability assessment into three main categories: traditional attack graphs/trees, ML/DL-based frameworks, and GNN-based approaches. **Traditional attack graphs/trees vulnerabilities assessment frameworks.** This class of models examines the interplay between the network vulnerabilities and the extent to which attackers can exploit them, offering a structured representation of the sequence of events that can potentially lead to the compromise of network assets [1, 2, 3, 4]. However, a major limitation of these models is their inability to adapt to dynamic changes in the network structure. Any modification to the network structure requires the regeneration of the attack graph. **Deep learning vulnerabilities assessment frameworks.** Previous studies have explored the use of deep learning-based (DL) approaches for vulnerability assessment and attack path detection [5, 6, 7]. To identify potential attack paths in a network, information about the network structure and configurations is essential. However, in DL-based approaches, the network structure information is not learned, unlike GNN, and instead, provided as input to the DL model. Therefore, the structure-based input needs to be re-generated every time there is a change in the network structure, which may also require retraining the entire DL model. **Graph neural network vulnerabilities assessment frameworks.** Recently, several approaches based on GNN have been proposed for cyber security tasks such as vulnerabilities detection [16, 17], anomaly detection [18], malware detection [19] and intrusion detection [20]. However, these approaches, in particular the vulnerability detection models, do not include any risk evaluation process that can help prioritize the detected threats for proactive mitigation. ### _GNN Shortest Path Identification_ The goal of graph representation learning is to create representation vectors of graphs that can precisely capture their structure and features. This is particularly important because the expressive power and accuracy of the learned embedding vectors impact the performance of downstream tasks such as node classification and link prediction. However, the existing GNN architectures have limited capability for capturing the position/location of a given node relative to other nodes in the graph [21] (See Sec. III-E). GNN iteratively updates the representation of each node by aggregating representations of its neighbors. 
Many nodes may share a similar neighborhood structure, and thus, the GNN produces the same representation for them although the nodes may be located at different locations in the graph. Several recent works have addressed this limitation of GNNs. Although some of these approaches have been successful, we present the first GNN-based method that is transferable and can accurately calculate shortest paths using only distance information, without relying on other node or edge features. For instance, in [12], the authors propose a general class of structure-related features called distance encoding, which captures the distance between the node set whose representation is to be learned and each node in the graph. These features are either used as extra node attributes or as controllers of message aggregation in GNNs. The Positional Graph Neural Network (PGNN) [13] approach randomly samples sets of anchor nodes. It then learns a non-linear vector of distance-weighted aggregation scheme over the anchor sets that represents the distance between a given node and each of the anchor sets. Another approach, SPAGAN [11], conducts paths-based attention in node-level aggregation to compute the shortest path between a center node and its higher-order neighbors. SPAGAN, therefore, allows more effective aggregation of information from distant neighbors into the center node. ## III Background In this section, we overview the Zero-Trust architecture and related policies' governance and compliance on which we base the risk assessment, triage, and mitigation of the detected attack paths (Sec. III-A, III-B). As the proposed framework relies on shortest paths calculation to identify attack paths, we briefly explain the shortest path identification problem (Sec. III-C) and discuss the processing of graph data with GNNs (Sec. III-D). We highlight the limitations of existing GNN architectures (Sec. III-E) that have motivated our novel GNN-based model for shortest path identification. ### _Zero-Trust Architecture_ Zero-trust (ZT) is a comprehensive approach to secure corporate or enterprise resources and data, including identity, credentials, access management, hosting environments, and interconnecting infrastructure. ZT architecture (ZTA) can be enacted in various ways for workflows. For instance, micro-segmentation [22] enforces ZTA by creating secure zones in cloud and data-center environments, isolating and securing different application segments independently. It further generates dynamic access network-layer control policies that limit network and application flows between micro-segments based on the characteristics and risk appetite of the underlying network's assets. Micro-segmentation is implemented via a distributed virtual firewall that regulates access based on network-layer security policies for each micro-segment. By limiting access to only what is necessary, micro-segmentation helps to prevent the spread of attacks within a network. The ZT micro-segmentation policies are defined as: **Definition 1**.: _ZT policies refer to network layer policies that the micro-segmentation distributed firewalls enforce to control the internal communications of the network. These policies follow the format: < Source Micro-Segment IP Range > < Destination Micro-Segment IP Range > < Protocol > Port Range >._ ### _Governance and Compliance_ The visibility of the network micro-segments underlying assets' characteristics and criticality is crucial for the optimal management of network communication policies. 
To achieve this purpose, a semantic-aware tier, called "governance", is used with the ZT policies to ensure their compliance with the best practices for communication between the network assets [23]. The governance tier uses semantic tags (e.g. Database, Web Server, etc.) to perform a risk-aware classification of the micro-segments and underlying assets based on the criticality of the data stored transported, or processed by the micro-segment assets and their accessibility [24]. In this work, we consider eight criticality levels for classifying the network micro-segments as detailed in Table I. This table is generated following the study in [24] in conjunction with guidance from the security team administrators of the two enterprises contributing to this study. It is worth mentioning that the governance rules are generated following the best network communication practices. They are tuned per organization based on the network structure and business processes. A governance rule is defined as follows: **Definition 2**.: _A governance rule represents the best practice of who/what communicates to the different network assets. It relies on the micro-segments assigned tags to assess the communications enabled through the network ZT policies. A governance rule has the following format: \(<\) Source Tag \(>\)\(<\) Destination Tag \(>\)\(<\) Service Tag \(>\)._ The Governance module assesses the compliance of each ZT policy with the respective governance rule. Consider \(P\) to be the set of governance rules. Governance-compliant connections, denotes by \(CC\), are defined as follows: **Definition 3**.: _Compliant connections are communications allowed by the ZT policies that comply with the defined governance rules. Let \(CC\) denote the set of compliant edges (connections enabled by the ZT policies) where \(CC\subseteq\left\{E\mid\left(\mathit{tag}(x),\mathit{tag}(y),s\right)\in P\right\}\) and \(\mathit{tag}(v)\) be a function to identify the governance tag assigned to vertex \(v\in V\)._ For instance, the ZT policy \(<\) Human-Resources Web Server IP Address \(>\)\(<\) Human-Resources Application Server IP Address \(>\)\(<\) TCP \(>\)\(<\) 443 \(>\) is compliant with the governance rule \(<\) Web Server \(>\)\(<\) Application Server \(>\)\(<\) Secure Web \(>\). Hence, all communications enabled through the above ZT policy are marked safe. Similarly, we denote by \(NC\) the set of non-compliant edges. In a network setting, _compliant_ connections are usually considered trusted as per the governance policies. The criticality of the non-compliant connections amongst the assets is a function of the trust rating of its incident vertices i.e., assets. In this work, we are mostly concerned with attack paths potentially compromising highly-critical assets. In particular, the ones incorporating non-compliant connections which imply a relatively higher risk of being exploited. In this context, we define highly-critical assets as follows: **Definition 4**.: _Highly-critical assets are network resources that are considered valuable due to the sensitivity of the data they host (e.g. databases). Let \(V_{critical}\) denote a set of nodes with maximum criticality. Formally, \(V_{critical}=\left\{v\mid v\in V\ \wedge\ c_{v}=\ 7\right\}\) where \(c_{v}\) is the criticality rating of node \(v\) implied by the assigned governance tag._ ### _Shortest Path Identification_ Shortest path (SP) algorithms (e.g. 
Bellman-Ford, Dijkstra's) are designed to find a path between two given vertices in a graph such that the total sum of the weights of the edges is minimum. Our proposed framework relies on shortest paths calculation to identify the eminent worst-case scenario for potential cyber-attacks compromising highly-critical assets. In this context, we define a critical attack path as follows [25]: **Definition 5**.: _An attack path is a succinct representation of the sequence of connections (enabled by ZT policies) through vulnerable assets that an attacker needs to exploit to eventually compromise a highly-critical asset._ The time complexity of shortest path (SP) algorithms on a directed graph can be bounded as a function of the number of edges and vertices by \(\mathit{O}\left(VE\right)\)[26]. However, the complexity of SP algorithms can be improved by using GNNs to approximate the distance between nodes in a graph. After training a neural network, the time complexity of finding the distance between nodes during the inference phase is constant, denoted by \(\left(O\left(1\right)\right)\). ### _Processing Graph Data with GNNs_ The goal of graph representation learning is to generate graph representation vectors that capture the structure and features of graphs accurately. Classical approaches to learning low dimensional graph representations [27, 28] are inherently transductive. They make predictions on nodes in a single, fixed graph (e.g. using matrix-factorization-based objectives) and do not naturally generalize to unseen graph elements. Graph Neural Networks (GNNs) [29, 30] are categories of artificial neural networks for processing data represented as graphs. Instead of training individual embeddings for each node, GNNs _learn_ a function that generates embeddings by sampling and aggregating features from a node's local neighborhood to efficiently generate node embeddings for previously unseen data. This inductive approach to generating node embeddings is essential for evolving graphs and networks constantly encountering unseen nodes. GNNs broadly follow a recursive neighborhood aggregation (or message passing) where each round of neighborhood aggregation is a hidden layer \(l\) in the GNN. Let \(\mathit{G=\left(V,E\right)}\) denote a directed graph with nodes \(V\) and edges \(E\). Let \(\mathit{N}(v)\) be the neighborhood of a node \(v\) where \(\mathit{N}(v)=\left\{u\in V\mid(v,u)\in E\right\}\). For each layer, or each message passing iteration, a node \(v\) aggregates information from its sampled neighbors \(\mathcal{N}\left(v\right)\) as described in Equation 1. \[h_{v}^{l}=\sigma\left(M^{l}\cdot\Lambda\left(\{h_{v}^{l-1}\}\cup\left\{w_{e} h_{u}^{l-1},\forall u\in\mathcal{N}(v)\right\}\right)\right) \tag{1}\] The aggregated information is computed using a differentiable function \(\Lambda\) and a non-linear activation function \(\sigma\). \(w_{e}\) is the edge feature vector from node \(v\) to node \(u\). The set of weight matrices \(M^{l},\forall l\in\left\{1,\ldots,L\right\}\) are used to propagate information between layers. After undergoing \(k\) rounds of aggregation, a node is represented by its transformed feature vector, which encapsulates the structural information of the node's k-hop neighborhood as described in [31]. 
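As a concrete illustration of the aggregation in Equation 1, the following minimal NumPy sketch performs one message-passing round on a toy directed graph, assuming a mean aggregator for \(\Lambda\) and ReLU for \(\sigma\); the graph, features, and weights are placeholders and not taken from the SPGNN-API implementation.

```python
import numpy as np

# Toy directed graph: adjacency list N(v) and scalar edge weights w_e.
neighbors = {0: [1, 2], 1: [2], 2: [0]}
edge_w = {(0, 1): 1.0, (0, 2): 0.5, (1, 2): 1.0, (2, 0): 1.0}

d_in, d_out = 4, 8
rng = np.random.default_rng(0)
h = {v: rng.normal(size=d_in) for v in neighbors}   # h_v^{l-1}
M = rng.normal(size=(d_out, d_in))                  # layer weight matrix M^l

def relu(x):
    return np.maximum(x, 0.0)

def aggregate(v):
    # Equation 1 with a mean aggregator: combine h_v^{l-1} with the
    # edge-weighted messages w_e * h_u^{l-1} from the neighbors of v.
    msgs = [h[v]] + [edge_w[(v, u)] * h[u] for u in neighbors[v]]
    return relu(M @ np.mean(msgs, axis=0))

h_next = {v: aggregate(v) for v in neighbors}       # h_v^l for every node
print(h_next[0].shape)                               # (8,)
```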
\begin{table} \begin{tabular}{|l|l|} \hline **Level** & **Description** \\ \hline 0 & UnTagged/unknown \\ 1 & Untrusted and external/public c.g internet 0.0.0.0/0 \\ 2 & Trusted external e.g vendor \\ 3 & Internet facing \\ 4 & Untrusted and internal e.g users \\ 5 & Internal and connecting to untrusted internal e.g web servers \\ 6 & Internal and connecting to data or non-critical data \\ 7 & Critical data \\ \hline \end{tabular} \end{table} TABLE I: Assets criticality levels and associated description. ### _GNNs Expressive Power_ The success of neural networks is based on their strong expressive power that allows them to approximate complex non-linear mappings from features to predictions. GNNs learn to represent nodes' structure-aware embeddings in a graph by aggregating information from their \(k\)-hop neighboring nodes. However, GNNs have limitations in representing a node's location or position within the broader graph structure [12]. For instance, two nodes that have topologically identical or isomorphic local neighborhood structures and share attributes, but are in different parts of the graph, will have identical embeddings. The bounds of the expressive power of GNNs are defined by the 1-Weisfeiler-Lehman (WL) isomorphism test [21] In other words, GNNs have limited expressive power as they yield identical vector representations for subgraph structures that the 1-WL test cannot distinguish, which may be very different [12, 13]. ## IV Proposed Framework SPGNN-API In this section, we present our proposed framework that aims to achieve end-to-end autonomous identification, risk assessment, and proactive mitigation of potential network attack paths. As depicted in Figure 1, the SPGNN-API consists of five modules: (a) Network micro-segmentation, (b) governance and compliance, (c) network data pre-processing, (d) GNN-based calculation of shortest paths to critical assets, and (e) risk triage and proactive mitigation. We elaborate on these modules in the following subsections. ### _Micro-Segmentation_ First, we represent a given network as a directed connectivity graph. Let \(C(V,E,S)\) be a labeled, directed graph that represents the network's connectivity, where \(V\) is the set of graph vertices representing the network assets (servers and cloud resources). The set of graph-directed edges \(E\) indicates the connected vertices' communication using the service identified through the edge label \(s\in S\). Here \(S\) denotes the set of network services that are defined by a protocol and port range and \(E\subseteq\{(v,u,s)\mid(v,u)\in V^{2}\wedge x\neq y\wedge s\in S\}\). We derive the set of feature vectors characterizing the graph vertices (network assets) and edges (incident assets communication) from layers 3 and 4 network flow packet headers. This includes features such as frequently used ports and protocols, frequent destinations, and flow volume. Our approach assumes that assets within the same micro-segment exhibit similar communication patterns. To automatically identify the network micro-segments, we use attentional embedded graph clustering [32], a deep embedded clustering based on graph attentional auto-encoder. The clustering algorithm aims at partitioning the connectivity graph \(C=(V,E,S)\) into \(k\) sub-graphs representing the network micro-segments. It learns the hidden representations of each network asset, by attending to its neighbors, to combine the features' values with the graph structure in the latent representation. 
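The sketch below illustrates the micro-segmentation step just described on toy data; it substitutes simple degree-normalized feature smoothing followed by k-means for the attentional embedded graph clustering of [32], so it should be read as a simplified stand-in rather than the paper's actual pipeline, and all names, sizes, and thresholds are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy stand-in: nodes = assets, X = flow-derived features (ports, protocols,
# volume), A = connectivity adjacency matrix of the network graph C(V, E, S).
rng = np.random.default_rng(1)
n_assets, n_feats, k_segments = 12, 6, 3
X = rng.normal(size=(n_assets, n_feats))
A = (rng.random((n_assets, n_assets)) < 0.2).astype(float)
np.fill_diagonal(A, 1.0)

# Two rounds of degree-normalized smoothing as a crude structure-aware
# embedding (a placeholder for the attentional graph auto-encoder of [32]).
D_inv = np.diag(1.0 / A.sum(axis=1))
Z = D_inv @ A @ (D_inv @ A @ X)

labels = KMeans(n_clusters=k_segments, n_init=10, random_state=0).fit_predict(Z)
print(labels)  # micro-segment id per asset; ZT policies are then defined per segment
```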
We stack two graph attention layers to encode both the structure and the node attributes into a hidden representation. ### _Governance and Compliance_ Each micro-segment is assigned a "governance" tag implying its underlying assets' criticality and risk appetite. For instance, a _web server_ asset criticality is lower than a _database_. To automate the assignment of tags, we further assess the network flows in terms of communication patterns and frequently used ports and protocols to identify the dominating service(s) used by each micro-segment's underlying assets. For instance, a micro-segment mostly using TCP 80 for communication is most likely a web server while a micro-segment substantially using TCP 3306 is presumably a database. The detailed process of application profile assignment and the handling of dynamic ports is beyond the scope of this paper. We then automate the generation of the ZT policies to govern the communication between the micro-segments at the network layer. We first identify all attempted communications in the network and automatically generate ZT policies to enable all these communications. We compare the generated Fig. 1: SPGNN-API framework architecture where Sub-figure a) illustrates the micro-segmentation process through attentional embedded graph clustering of the network based on layer 2 and 3 flow packets header analysis and the network connectivity graph. This process is followed by a GNN-based model for generating the ZT policies governing the communication between the micro-segments as detailed in Sub-figure b). Sub-figure c) describes the network data pre-processing stage to illuminate the edges that cannot be part of an attack path. The updated graph is then used to identify the shortest paths to highly-critical assets as illustrated in sub-figure d). Finally, edges are classified as either safe, compliant critical, or non-compliant critical. The ZT policies are then tuned to block the latter class of edges. policies with the governance rules and highlight the non-compliant policies. We further assess the risk imposed by the non-compliant connections based on the criticality of the incident edges and the network topology. We then formulate a GNN model for tuning the ZT policies to reduce the risks without disrupting the network functionalities. The details of this process are beyond the scope of this paper. ### _Network Data Pre-processing_ SPGNN-API relies on shortest paths calculation to predict imminent attack paths. We aim to pre-process the network connectivity graph by identifying edges that can potentially contribute to attack paths and filter out the edges that cannot be exploited by an attacker. This pre-processing stage ensures that all calculated shortest paths do represent attack paths. An essential step toward the identification of an attack path is locating network vulnerabilities and assessing their severity which directly impacts the risk imposed by potential attacks exploiting these vulnerabilities. To locate the network vulnerabilities, we utilize a port scanner (e.g. Nessus). We then rely on the NIST Common Vulnerability Scoring System (CVSS) base metrics [10] to identify the features and severity of the detected vulnerabilities. We identify edges potentially contributing to critical attack paths following an exclusion methodology. We filter out edges that cannot be exploited by attackers based on a pre-defined set of criteria. 
This set of criteria does not define specific vulnerability interactions and ways of exploiting these vulnerabilities. They rather highlight the propensity of exploiting the vulnerabilities to eventually compromise critical assets. **Edges exclusion criteria:** Graph edges are excluded if they don't meet the following criteria: (1) The edge source node needs to have a vulnerability with CVSS base metric "scope" set to "changed". This implies that the vulnerability can impact resources in components beyond its security scope. Hence, being exploited, it enables the attacker to move further in the network and potentially reach a highly-critical asset. (2) The edge source node needs to have a vulnerability with CVSS overall base score metric "High" or "Critical". This implies the potential criticality of the attack step. (3) All edges with highly-critical asset destinations are considered. A major strength of our proposed approach is that it does not restrict the detection of potential attacks to a predefined set of vulnerability interactions. Instead, we assume that once an attacker gains access to an asset, they can exploit any underlying vulnerability without any specific prerequisites such as access rights or user privilege. This assumption is based on the constantly evolving nature of attacks and the ability of attackers to discover new ways of exploiting vulnerabilities. Consequently, we do not track an end-to-end sequence of attack steps as there might be infinite alternatives. Instead, we identify the propensity of an edge being involved in an attack by determining if there exists a (shortest) path from that edge to a highly-critical asset going through vulnerable nodes. This comprehensive approach to representing vulnerability interactions is not feasible for traditional attack path detection models due to the time complexity of generating attack trees, where the size of the graph is a function of the potential vulnerabilities' interactions [8]. However, our presented approach, which is based on the P-GNN, overcomes this issue with a time complexity of \(O(\mathit{nlog}^{2}n)\), where \(n\) is the number of assets in the network. Accordingly, the size of the graph is irrelevant to the number of vulnerabilities and their potential interactions. ### _GNN Model for Shortest Paths Identification_ We formulate and develop a transferable GNN model for shortest path identification. Our approach involves identifying the shortest paths to a predefined set of nodes representing highly-critical assets in a network. By identifying the shortest path representing the minimum set of exploits an attacker would need to compromise such highly-critical assets, we account for the worst-case scenario for potential attacks. We base our framework on the Position Graph Neural Network (P-GNN) model. The P-GNN approach randomly samples sets of anchor nodes. It then learns a non-linear vector of distance-weighted aggregation scheme over the anchor sets that represents the distance between a given node and each of the anchor sets [13]. To enhance the P-GNN architecture; firstly, we recover the actual shortest path distance from the node embeddings through a transferable GNN model. Secondly, we identify the shortest path length to a predefined set of nodes representing high-criticality assets rather than a randomly distributed set of anchors. Thirdly, we update the message function to only consider the position information for calculating the absolute distances, independent of node features. 
Lastly, since we aim to identify high-risk network connections, we embed the shortest path distance as an edge feature. **Anchor Sets.** We formulate a strategy for selecting anchors and assigning critical assets to anchor sets. Let \(n\) be the number of highly-critical assets in the network. We first mark anchors around the nodes representing highly-critical assets where each anchor set holds only one critical asset. As per the original P-GNN model, to guarantee low distortion embedding at least \(k\) anchors are sampled where \(k=c\log^{2}|V|\) and \(c\) is a constant. If the number of critical assets \(|V_{critical}|<k\), the remaining anchors are sampled randomly where each node in \(V\sim V_{critical}\) is sampled independently. The anchors' size is distributed exponentially and is calculated as follows: \[|Anchor\_i|=\lfloor\frac{|V|}{2^{i+1}}\rfloor,i\in\{0..k\} \tag{2}\] **Objective Function.** The goal of the SPGNN is to learn a mapping \(V\times V_{critical}^{k}\mapsto R^{+}\) to predict the actual minimum shortest path distances from each \(u\in V\) to \(V_{critical}\) where \(k=|\,V_{critical}|\). Hence, unlike the original P-GNN objective function, defined for the downstream learning tasks using the learned positional embeddings (e.g. membership to the same community), our objective is formulated for learning the actual shortest path length as follows: \[\min_{\phi}\sum_{\forall u\in V}\mathcal{L}\left(\min_{i\in\{1..k\}}\hat{d}_{\phi}\left(u,v_{i}\right)-\min_{i\in\{1..k\}}d_{y}\left(u,v_{ i}\right)\right)\] \[\min_{\phi}\sum_{\forall u\in V}\mathcal{L}\left(\min\left(\hat{d }_{\phi}\left(u,V_{critical}\right)\right)-\min\left(d_{y}(u,V_{critical}) \right)\right) \tag{3}\] where \(\mathcal{L}\) is the mean squared error (MSE) loss function to be minimized. \(\hat{d}_{\phi}\left(u,\,V_{critical}\right)\) is the vector of learned approximation of the shortest path distance from a node \(u\) to every critical asset \(v\in V_{critical}\). \(d_{y}\) is the observed shortest path distance. As the model aims to identify the risk imposed by critical paths, we account for the worst-case scenario by considering the minimum shortest path length from the (vulnerable) node to a highly-critical asset. Therefore, the loss is computed only on the minimum of the distance vector. **Message Passing.** The message-passing function, in our approach, exclusively relies on the position information to calculate the absolute distances to the anchor sets and disregards node features. To calculate position-based embeddings, we follow the original P-GNN \(q\)-hop approach where the 1-hop \(d_{sp}^{t}\) distance can be directly inferred from the adjacency matrix. During the training process, the shortest path distances \(d_{sp}^{g}(u,v)\) between a node \(u\) and an anchor node \(v\) are calculated as follows [13]: \[d_{sp}^{q}(u,v)\mapsto\begin{cases}d_{sp}(u,v),&if\ d_{sp}(u,v)<q\\ \infty&otherwise.\end{cases} \tag{4}\] Where \(d_{sp}(u,v)\) is the shortest path distance between a pair of nodes. Since the P-GNN aims to map nodes that are close (in position) in the network to similar embedding, the distance is further mapped to a range in \((\,0\,,1\,)\) as follows [13]: \[s(u,v)=\frac{1}{d_{sp}^{g}(u,v)+1} \tag{5}\] Accordingly, the message-passing process is defined as: \[h_{u}=\phi(x_{u}\oplus_{(v\in\mathbb{R}_{v})}\psi(u,v)) \tag{6}\] where \(h_{u}\) represents the node embedding of the vertex \(u\), \(x_{u}\) is the input feature vector of the node \(u\) inferred based on the adjacency matrix. 
\(\oplus\) is the aggregation function. In our approach, we found that the mean aggregation function provides the best performance. \(\psi\) is the message function and is computed as described in Equation 5. Finally, \(\phi\) is the update function to obtain the final representation of node \(u\). **Recovery of true paths length.** We aim to learn the true shortest path length by pulling the value of the node embedding closer to the labels during the learning process. To this end, we rely on the MSE loss function to minimize the deviation between the predicted and observed shortest path distances. To recover the true path length from the learned positional embedding, we introduce four steps to the P-GNN learning process after learning the node embeddings through message passing: Firstly, for every node \(u\in V\), we calculate the absolute distance (AD) of the learned node embeddings between \(u\) and each critical asset \(v\in V_{critical}\). Secondly, we assign the minimum value of the calculated AD to the node \(u\). Thirdly, as the calculated AD is not necessarily an integer value, we approximate the assigned AD to an integer value to represent the predicted shortest path distance. Lastly, we attribute the approximated shortest path value to the incident edge features. _(1) Absolute Distance (AD) of node embedding._ We particularly use the AD function since it is less impacted by outliers, hence, more robust. This is particularly significant since complex network structures are characterized by a high variance in the criticality of the assets and the path-length distributions. For every node \(u\in V\), we calculate a vector of absolute distances \(T_{u}\) between the learned embedding of \(u\) denoted as \(h_{u}\) and the embedding of every critical asset \(v_{i}\in V_{critical}\), denoted as \(h_{v_{i}}\). \(h_{u}\) and \(h_{v_{i}}\) are calculated as described in Equation 6. The AD vector is calculated as follows, where \(k\) is the embedding space dimension: \[AD(u,v)=\sum_{n=1}^{k}|h_{u}^{n}-h_{v}^{n}| \tag{7}\] \[T_{u}=\forall_{v_{i}\in V_{critical}}AD(u,v_{i})\] \(T_{u}\) is then used in Equation 3 to calculate the loss where \(\hat{d}\left(u,\,V_{critical}\right)=T_{u}\). _(2) Minimum absolute distance to a critical asset._ The downstream task is concerned with identifying the risk imposed by potential attack paths. If a node \(u\in V\) has (shortest) paths to multiple critical assets, we account for the worst-case scenario by identifying the minimum length of the shortest paths \(z_{u}\) and assigning its value as a feature for node \(u\). It is calculated as follows: \[z_{u}=\min_{i\in\{1...k\}}T_{u}^{i} \tag{8}\] where \(k\) is the embedding space dimension. _(3) Approximation of path length._ We identify two approaches for approximating the learned minimum shortest path length \(z_{u}\) of a certain node \(u\). The first approach, denoted as \(SPGNN_{R}\), relies on simple rounding of the shortest path length. This naive approach is rather intuitive and is fully transferable as discussed in Section V. The predicted distance \(SP_{R}(u)\) is then calculated as follows: \[SP_{R}:V\mapsto N \tag{9}\] \[SP_{R}(u)\mapsto Round(z_{u})\] The second approach, \(SPGNN_{DNN}\), relies on a deep neural network (DNN) to learn a mapping between the learned shortest path length and its integer representation. To overcome the inaccuracies induced by rounding the AD, we aim to increase the separation between the labels representing the observed paths-length. 
Since the downstream task is concerned with assessing the risks imposed by the attack paths, we restrict the detection of paths to a certain range of length values that are anticipated to induce high risks. Accordingly, we transform the path identification into a classification task where the learned embeddings are mapped to a class representing a path length within the range of interest. The goal of the DNN is to learn a mapping to predict the integer representation of the minimum shortest path distance \(z_{u}\) described in Equation 8 from each \(u\in V\) to \(V_{critical}\) where \(k=|\,V_{critical}|\). Accordingly, the objective function is: \[\min_{\theta}\sum_{\forall u\in V}\mathcal{L}_{c}(g_{\theta}(\lambda_{u}),l) \tag{10}\] where \(g_{\theta}:R^{a}\mapsto R^{b}\) is a function that maps the node features \(\lambda_{u}\) (that include \(z_{u}\)) where \(|\lambda_{u}|=a\) to a label \(l\) in the set of the encoded labels \(L=I,...,b\) where \(b\) is the threshold of paths length considered. \(\theta\) denotes the parameters of \(g_{\theta}\) and \(\mathcal{L}_{c}\) is the categorical cross entropy loss function. In addition to the minimum shortest path distance \(z_{u}\), we enrich the classifier input with additional heuristics of the original P-GNN positional embeddings \(h_{u}\) described in Equation 6. We rely on the intuition that the learned P-GNN embeddings of nodes that share the same shortest path distance are most likely to have similar statistical features. We define the DNN classifier input feature vector \(\lambda_{u}|\ \forall u\in\ V\) as follows: \[\begin{split}\lambda_{u}=&(\max_{v\in V_{critical}}| cos_{sim}(u,v)|,\max_{v\in V_{critical}}cross_{entropy}(u,v),\\ & min(h_{u}),max(h_{u}),mean(h_{u}),var(h_{u}),norm_{2}(h_{u}), \\ & std(h_{u}),median(h_{u}),z_{u}).\end{split} \tag{11}\] The output of the DNN model is the classes representing the different shortest path lengths. We rely on the one-hot encoding mapping to represent the output. The predicted distance denoted as \(SP_{DNN}(u)\) is then calculated as follows: \[\begin{split} SP_{DNN}:V\mapsto N\\ SP_{DNN}(u)\mapsto g_{\theta}(z_{u})\end{split} \tag{12}\] The stacking of a DNN classifier significantly enhances the accuracy of the SPGNN when trained and tested on the same network data. However, it does not perform equally well in a transfer learning setting as discussed later in Section V. This can be attributed to the fact that the input to the DNN classifier depends on the learned positional embeddings \(h_{u}\) and is highly impacted by the size and distribution of the anchors set. _(4) Shortest path as edge feature._ When it comes to graph representation learning, relying on node features is often more efficient than edge features due to the amount of information contained within the nodes, and the relatively smaller number of nodes as compared to edges. As a result, we begin by predicting the shortest paths as additional node features. Then, We attribute the calculated distance to all _incident edges of the node_, as shown in Figure 2. Let \(v\) be a node in the network, \(SP(v)\) be the learned integer representation of the minimum shortest path for node \(v\), and \(y_{e}\) be the feature vector for edge \(e\). Accordingly, the node features are assigned to their incident edges as follows: \[\{\forall u\in V\ \wedge\ \exists\ e_{u,v}\in E\,\ y_{e_{u,v}}=SP(v)\} \tag{13}\] **Labels.** Manually generated labels are expensive and hard to acquire. 
Therefore, we rely on a regular shortest path algorithm (e.g. Dijkstra), denoted by \(d_{sp}(u,v)\), to generate the labels for training the SPGNN. We calculate the observed shortest path \(d_{y}\) from a node \(u\) to critical asset \(v\) as per Equation 14. The calculated shortest path represents the label of the node \(u\). \[\begin{split} d_{y}(u,v)\mapsto\begin{cases}0&if\ v\notin V _{critical}\ \lor\ d_{sp}(u,v)=\emptyset\\ d_{sp}(u,v)&otherwise\end{cases}\end{split} \tag{14}\] ### _Risk Triage and Mitigation_ We develop a module to automate the assessment of risks imposed by potential exploitation of the detected attack paths in terms of the propensity and impact of compromising highly-critical assets. We first identify critical attack paths that require immediate intervention based on a pre-defined set of criteria. We then autonomously locate connections (edges) playing a key role in enabling the critical attack paths. Accordingly, we proactively implement the proper mitigation actions. To assess attack path criticality, we introduce a new metric namely _Application Criticality (AC)_. The assets criticality metric assesses the risk based on the assets workload (e.g. database, application server, etc.) and data processed. However, the AC metric assesses the risk based on the application the asset belongs to. For instance, a human-resources application database with human-identifiable information is assigned a higher AC rating than an inventory application database. **Application criticality:** Applications can be classified based on the scope of expected damages, if the application fails, as either, mission-critical, business-critical, or non-critical (operational and administrative) [33]. Organizations rely on mission-critical systems and devices for immediate operations. Even brief downtime of a mission-critical application can cause disruption and lead to negative immediate and long-term impacts. A business-critical application is needed for long-term operations and does not always cause an immediate disaster. Finally, organizations can continue normal operations for long periods without the non-critical application. Two different companies might use the same application but it might only be critical to one. Hence, we rely on the security team of enterprises contributing to this study to assign the AC. Attack paths are considered critical if they meet the following criteria: (1) The start of the path is an asset with criticality level \(\leq\) 4 implying the ease of accessibility of the asset. (2) Destination highly-critical assets belong to a mission-critical application (3) The shortest path is of length at most five. After filtering out non-critical paths, we aim to locate and characterize connections playing a key role in enabling critical attack paths. Accordingly, we model a DNN edge classifier to assess the edges of attack paths. Three output classes are defined, based on which mitigation actions are planned: (1) Non-compliant critical, (2) compliant critical, and (3) safe. Non-compliant edges are inherently un-trusted as they do not comply with the organization's communication best practices. Accordingly, non-compliant critical edges are immediately blocked by automatically tuning the ZT policies enabling the connection. Compliant connections represent legitimate organizational communication, hence blocking them might disrupt the network functionalities. Therefore, these Fig. 2: The shortest path length is assigned to the path source node and all its incident edges. 
connections are highlighted and the associated ZT policies are located. A system warning is generated requesting the network administrators to further assess the highlighted ZT policies. Finally, no actions are required for safe connections. We assess the criticality of attack paths' edges based on the following criteria, representing the input of the DNN classifier: * \(\text{Feature}_{1}\): The trust level of the edge destination asset. * \(\text{Feature}_{2}\): The AC rating of the edge destination asset. * \(\text{Feature}_{3}\): Exploited vulnerability base score of the source asset. * \(\text{Feature}_{4}\): The shortest path distance from the edge to the highly-critical asset. * \(\text{Feature}_{5}\): The compliance of the edge. Let \(f_{\psi}:E\mapsto Y\) be a function that maps the set of edges \(E\) to the set of labels \(Y\) representing the three edge classes, where \(\psi\) denotes the parameters of \(f_{\psi}\). Let \(feat_{e}\) be the input feature vector of the edge \(e\) to be assessed. To optimize the edge's classification task, we express the objective function as the minimization of the cross-entropy loss function \(\mathcal{L}_{d}\). We represent this objective function as follows: \[\min_{\psi}\sum_{\forall e\in E}\mathcal{L}_{d}\left(f_{\psi}(feat_{e}),y_{e}\right) \tag{15}\] ## V Results and Evaluation The evaluation process is threefold: (1) evaluating the performance of the \(SPGNN\) shortest path calculation in a semi-supervised setting (Sec. V-D), (2) assessing the performance in a transfer-learning setting (Sec. V-E), and (3) evaluating the accuracy of identifying critical attack paths and locating key path edges (Sec. V-F). ### _Experimental Settings_ We test the performance of SPGNN in three settings: **Experiment 1 - evaluating the performance of shortest paths identification.** The focus of this experiment is to evaluate the ability of \(SPGNN_{R}\) and \(SPGNN_{DNN}\) to identify the shortest paths in a semi-supervised setting. We use the same dataset for training and testing. We compare the prediction accuracy with the baseline model \(SPAGAN\). To identify the minimum ratio of labeled data required to achieve satisfactory performance, we use the train and test split masks with distribution shifts for all datasets described in Section V-B. **Experiment 2 - assessing and validating the learning transferability.** This experiment setting is particularly concerned with assessing the learning transferability of the proposed \(SPGNN_{R}\) shortest path identification. We test the transferability by training the model using a dataset and testing it using a different unlabeled dataset. **Experiment 3 - assessing and validating the attack paths identification.** This experiment aims to assess the end-to-end performance of the SPGNN-API in identifying critical attack paths and highlighting key connections enabling the paths. We test the performance of this task by comparing the model accuracy to labeled synthetic network datasets and real-world datasets of enterprises contributing to this research. ### _Dataset_ Two classes of datasets are used for the proposed model evaluation: (1) Enterprise network datasets (two synthetic datasets, \(STD_{1}\) and \(STD_{2}\), and two real-world datasets, \(RTD_{1}\) and \(RTD_{2}\)). (2) Two widely used citation network datasets, Cora [14] and Citeseer [15]. We generate two synthetic datasets (\(STD_{1}\) and \(STD_{2}\)) to imitate a mid-sized enterprise network setting. 
We defined the node configurations and network connections to cover all possible combinations of values for the five features used for assessing the criticality of the attack path's edges. We collect the real-world datasets, denoted by \(RTD_{1}\) and \(RTD_{2}\), from two-mid sized enterprises; a law firm and a university, respectively. We rely on the Nessus scan output to identify the configurations and properties of the network assets as well as the underlying vulnerabilities. We use enterprise-provided firewall rules, ZT policies, and governance rules to define and characterize the assets' communications. Table II lists the details of the datasets used in assessing the performance of our proposed model. In the proposed approach, we identify the path length to a set of anchor nodes to represent highly-critical assets. For the citation datasets, we randomly sample nodes to represent highly-critical assets. Since the citation datasets do not represent a real network setting, we will limit the evaluation of the attack path identification to the (real-world and synthetic) enterprise network datasets. ### _Baseline Models_ We compare the performance of our proposed model architectures \(SPGNN_{R}\) and \(SPGNN_{DNN}\) with the state-of-the-art baseline \(SPAGAN\)[11] w.r.t. to the shortest path identification. The SPAGAN conducts path-based attention that explicitly accounts for the influence of a sequence of nodes yielding the minimum cost, or shortest path, between the center node and its higher-order neighbors. To validate the performance of the SPGNN-API for attack paths identification, we generate the network attack graph using the MulVAL tool [8] by combining the output of the vulnerability scanner Nessus [34] and the enterprise network perimeter and zero-trust firewall policies. ### _Evaluation of Shortest path Detection_ In this section, we assess the performance of the two proposed architectures the \(SPGNN_{R}\) and \(SPGNN_{DNN}\) using all six datasets. We report the mean accuracy of 100 runs with 80%-20% train-test masks and 20 epochs. \begin{table} \begin{tabular}{l||r|r|r|r|r} \hline \hline **Dataset** & **Nodes** & **Edges** & **Critical** & **Compliant** & **Non-compliant** \\ \hline \(SDT_{1}\) & 864 & 5,018 & 284 & 2,002 & 3,016 \\ \(SDT_{2}\) & 865 & 5,023 & 284 & 2,002 & 3,021 \\ \(RTD_{1}\) & 221 & 1,914 & 21 & 882 & 1,032 \\ \(RTD_{2}\) & 370 & 21,802 & 70 & 10901 & 10901 \\ \(CORA\) & 2,708 & 10,556 & 180 & N/A & N/A \\ \(CITESEER\) & 3,327 & 9,464 & 523 & N/A & N/A \\ \hline \hline \end{tabular} \end{table} TABLE II: Dataset features and statistics. **Accuracy evaluation:** Table III summarizes the performance of \(\mathit{SPGNN}_{R}\) and \(\mathit{SPGNN}_{DNN}\). While both models can discriminate symmetric nodes by their different distances to anchor sets, we observe that \(\mathit{SPGNN}_{DNN}\) significantly outperforms \(\mathit{SPGNN}_{R}\) across all datasets. This can be attributed to the power of the DNN in capturing the skewed relationships between the generated positional embedding and the defined set of path-length classes. Furthermore, transforming the prediction of path lengths to a classification task, where one-hot encoding is used to represent the output, enables the model to capture the ordinal relationships between the different lengths and hence the gain in the performance. Both architectures exhibit performance degradation when tested with the real-world dataset \(RTD_{1}\). Due to the relatively small size of the dataset. 
The model could not capture the complex relationships between the network entities during training. **Self-loops:** In general, adding self-loops allows the GNN to aggregate the source node's features along with those of its neighbors [35]. Nevertheless, since our model relies only on positional embeddings, irrespective of the node features, removing the self-loops enhances the accuracy of SPGNN as detailed in Table III, as the iterative accumulation of the node positional embedding confuses the learned relative distance to the anchor sets. Accordingly, we introduce a data pre-processing stage to remove self-loops in the real-world network datasets and the citation datasets. **SPGNN convergence:** We illustrate in Figure 3 the progression of the Mean Squared Error (MSE) loss during the training process of \(\mathit{SPGNN}_{R}\). We particularly assess the \(\mathit{SPGNN}_{R}\) since, unlike the \(\mathit{SPGNN}_{DNN}\), its output directly reflects the GNN performance without further learning tasks. We observe that the gradient is sufficiently large and proceeds in the direction of steepest descent, which indeed minimizes the objective. The stability and efficacy of the learning process consistently enhance the accuracy of the model irrespective of the dataset characteristics. The objective function is sufficiently smooth, indicating that the model is not under-fitting. **Analysis of the \(\mathit{SPGNN}_{R}\) generated shortest path distance embedding.** We conducted an in-depth analysis of 20 random samples from the test sets of the six datasets. For each sample, we plot the predicted \(\hat{d}\) vs rounded \(\mathit{SP}_{pred}\) vs observed \(d_{y}\) shortest paths distances in blue, yellow, and red, respectively, as illustrated in Figure 4. We observe the proximity of the predicted and observed distances, where the predicted values are mostly in the range of +/- 1 hop from the observed values. Hence, we prove the strength of the proposed GNN approach in approximating the shortest path distance. We further notice that the rounded values largely overlap with the observed values, which further proves the robustness of the simple, yet intuitive, rounding approach. **Baseline comparison:** We compare the performance of the proposed model with the baseline \(\mathit{SPAGAN}\). We observe that the proposed architectures, in particular the \(\mathit{SPGNN}_{DNN}\), strictly outperform \(\mathit{SPAGAN}\) and can capture the skewed relationships in the datasets as shown in Table III. This can be attributed to the fact that \(\mathit{SPAGAN}\) uses a spatial attention mechanism that only considers the neighboring nodes within a predefined radius around each target node during the learning phase and does not incorporate features of nodes beyond the predefined distance, which impacts the model performance. Furthermore, \(\mathit{SPAGAN}\) (and most state-of-the-art approaches) relies on the graph elements' features to calculate the shortest paths distance information. This justifies the performance degradation of \(\mathit{SPAGAN}\), in this case, since only graph structure and positional embedding are considered. This further proves the strength of the proposed approach that can identify, with high accuracy, the shortest paths distance irrespective of the graph elements' features. ### _Evaluation of Transfer-Learning_ In this setting, the pre-training and testing processes are executed through distinct datasets. 
The goal of the pre-training is to transfer knowledge learned from labeled datasets to facilitate the downstream tasks with the unlabeled datasets. We only consider the \(\mathit{SPGNN}_{R}\) for testing in this setting. The stacked DNN of the \(\mathit{SPGNN}_{DNN}\) approach is characterized by a fixed input size and hence is not expandable to accommodate different network structures. To assess the robustness of the model transferability, we pre-train the model using different synthetic and real-world datasets. We observe that, in general, the size and sophistication of the dataset used for pre-training highly impact the performance of the model transferability. In general, training with real data yields better performance. We believe that the significant improvements can be attributed to the ability of \(\mathit{SPGNN}\) to utilize the perturbation in real-world data to consider more complicated interactions between the data samples which optimizes the model's ability to extend label information to unlabeled datasets. In contrast, pre-training the model with synthetic data and testing on a real dataset slightly hurts the accuracy. The non-perturbed structure of the synthetic data gives limited performance gain and yields negative transfer on the downstream classification task. In general, the results show convincing evidence that the inductive capabilities of the proposed \(\mathit{SPGNN}\) generalize to unseen datasets as detailed in Table IV. ### _Evaluation of Attack Paths and Critical Edges Detection_ The SPGNN-API does not record the end-to-end sequence of attack steps as there might be an infinite number of alternatives as discussed in Section V-F. It rather identifies Fig. 3: \(\mathit{SPGNN}_{R}\) MSE loss convergence for the six datasets. the propensity of an edge being part of an attack, i.e. there exists a (shortest) path from that edge to a highly-critical asset going through vulnerable nodes. Accordingly, to evaluate the performance of the attack path detection, we do not rely on an end-to-end assessment of attack paths. We rather assess the propensity of single edges being part of an attack. We evaluate the accuracy of the edge classification (Sec. IV-E) in two different settings semi-supervised and transfer-learning. We compare the model performance against a baseline (MulVAL). We base our assessment on the four enterprise network datasets as the citation datasets do not incorporate vulnerability information. We rely on the security team of the enterprises contributing to this study to manually label connections they would potentially block or patch given the network structure, reported vulnerabilities, and network visualization tools. **Accuracy assessment:** We assess the performance of the edge classifier in categorizing the attack path edges as either critical compliant, critical non-compliant, or safe. Comparing the output of the classifier to the manually labeled data we note the performance results in Table V. Since the set of safe edges comprises the attack path edges classified as safe as well as the connectivity graph edges that were not part of any attack path, the recorded model accuracy proves the efficacy of the presented approach in detecting attack paths in general and identifying key critical edges in particular. 
In addition to the raw accuracy rates, we report the receiver operating characteristic curve (ROC) and area under the curve \begin{table} \begin{tabular}{l c c c c c|c c c c c c} \hline \hline & \multicolumn{4}{c}{**Dataset Before Deleting self loops**} & \multicolumn{4}{c}{**Dataset After Deleting self loops**} \\ \cline{2-13} **Metrics** & \(STD_{1}\) & \(STD_{2}\) & \(RTD_{1}\) & \(RTD_{2}\) & \(CORA\) & \(CITESER\) & \(STD_{1}\) & \(STD_{2}\) & \(RTD_{1}\) & \(RTD_{2}\) & \(CORA\) & \(CITESER\) \\ \hline \(SPGNN_{R}\)\(\mathcal{L}\) & 0.07 & 0.14 & 0.36 & 0.02 & 0.33 & 0.53 & 0.02 & 0.14 & 0.22 & 0.01 & 0.29 & 0.38 \\ \hline \hline Accuracy \(SP_{pred}\) & 90.00\% & 84.04\% & 71.00\% & 94.02\% & 65.70\% & 68.53\% & 98.88\% & 84.47\% & 72.41\% & 97.05\% & 65.34\% & 72.53\% \\ \hline \(Accuracy\pm_{1hop}\) & 100\% & 98.42\% & 91.61\% & 100\% & 96.41\% & 92.65\% & 100\% & 98.50\% & 93.85\% & 100\% & 97.11\% & 94.54\% \\ \hline \hline \(SPGNN_{DNN}\)\(\mathcal{L}_{c}\) & 0.03 & 0.08 & 0.24 & 0.01 & 0.26 & 0.41 & 0.01 & 0.10 & 0.19 & 0.01 & 0.23 & 0.26 \\ \hline \(AccuracyPSIGN_{DNN}\) & 95.63\% & 80.14\% & 53.05\% & 96.10\% & 81.36\% & 79.36\% & 98.45\% & 84.47\% & 78.65\% & 98.25\% & 75.82\% & 81.20\% \\ \hline \(Accuracy\pm_{1hop}\) & 86.45\% & 85.29\% & 86.15\% & 98.65\% & 92.70\% & 84\% & 93.10\% & 91.93\% & 89.23\% & 100\% & 92.94\% & 87.32\% \\ \hline \hline MSE(SPAGAN) & 0.54 & 0.62 & 0.91 & 0.48 & 0.85 & 0.95 & 0.52 & 0.59 & 0.72 & 0.35 & 0.69 & 0.82 \\ \hline \(AccuracySP_{pred}\) & 52.36\% & 50.14\% & 57.50\% & 82.35\% & 62.12\% & 53.36\% & 54.23\% & 52.36\% & 56.23\% & 85.65\% & 63.26\% & 55.68\% \\ \hline \(Accuracy\pm_{1hop}\) & 86.45\% & 85.29\% & 86.15\% & 98.65\% & 92.70\% & 84\% & 88.20\% & 85.60\% & 84.42\% & 96.75\% & 93.98\% & 83.62\% \\ \hline \hline \end{tabular} \end{table} TABLE III: Overview of shortest paths identification accuracy of \(SPGNN_{R}\) and \(SPGNN_{DNN}\) as compared to the \(SPAGAN\) across the six datasets before and after deleting self-loops. Fig. 4: Shortest path distance distribution of 20 random samples from each of the six datasets. The blue and yellow points are the \(SPGNN\) predicted distances _before_ and _after_ the application of the rounding process, respectively. The red points are the observed distances. The Figures illustrate the accuracy of the predicted distances being within the range [-1,1] of the observed values. We further observe that the majority of the _rounded distances_ are either overlapping with or closer to the _observed distances_. This shows the efficiency of the rounding approach to enhance the shortest path distance prediction accuracy. (AUC). We assess the ability of the classifier to discriminate critical and safe edges, in general. Accordingly, we combine the critical compliant and critical non-compliant classes. The true positive samples are (compliant/non-compliant) critical samples that have been classified as critical. The false positive samples are critical samples that have been classified as safe. The ROC curve in Figure 5 illustrates outstanding discrimination of the two classes with an AUC score of 0.998. **Transfer-learning:** To assess the end-to-end transferability of the presented approach, we train the edge classifier using a dataset and test it using different datasets. The recorded classification accuracy in Table VI proves the inductive capabilities of \(SPGNN\) and its ability to efficiently characterize previously unseen data. 
To our expectations, training the model using a real dataset performs better on all datasets. The model's capacity to extend the label information to previously unseen datasets is enhanced by the perturbations in real-world datasets that enable the classifier to consider more complex interactions between the data samples. To plot the ROC curve, we combine the critical compliant and critical non-compliant classes and assess the model's ability to discriminate the critical and safe edges. The ROC curve in Figure 6 illustrates outstanding discrimination of the two classes with an AUC score between 0.93 and 0.98. **Baseline comparison** : We compare the SPGNN-API with the MulVAL attack graph generator. The MulVAL-generated attack graph nodes can be of three types; configuration nodes, privilege nodes (exploits), and attack step nodes (conditions). The privilege nodes represent compromised assets. The root nodes of the attack graph represent network configurations/vulnerabilities contributing to attack possibilities. The privilege nodes denote the compromised assets. The set of paths of the attack graph comprises all directed attack paths starting at the root configuration nodes and ending at the privilege nodes (attack goals). We configure the attack path generation to have all highly-critical assets as attack goals. We assess the attack step nodes and note the ZT policies that have been a step to achieve the attack privilege. We then compare the noted rules to the set of rules that have been flagged as critical by the SPGNN-API. We perform the analysis relying on \(RTD_{2}\) since no Nessus scan is available for the synthetic datasets and we had limited access to the \(RTD_{1}\) Nessus output for privacy reasons. The dataset has 370 nodes, of which 70 are highly-critical assets. The Nessus identified 44 vulnerable assets, of which six are highly critical. All six assets have been identified as potentially compromised by the MulVAL as well as \(SPGNN-API\). The \(SPGNN\), however, outperformed the MulVAL by detecting more potentially compromised non-critical assets as detailed in Table VII. This proves the significance of the presented holistic approach to vulnerability interaction. Further assessing the generated attack paths, we observe that SPGNN-API labeled 713 edges (and underlying ZT policies) as critical while only 171 appeared as a MulVAL attack step. This can be attributed to the fact that MulVAL restricts the detection of potential attacks to a predefined set of vulnerability interactions while the SPGNN-API assumes that any vulnerability can potentially be exploited by the attacker irrelevant of any pre-requisites. Of the 171 edges detected by MulVAL, our approach was able to detect 166. The five edges we missed are connecting level 7 highly-critical assets to level 1 assets. Since we aim to protect highly-critical assets these edges are not considered critical as per our features. ## VI Conclusion This work presents the first attempt at GNN-based identification of attack paths in dynamic and complex network structures. 
Our work fills the gaps and extends the current \begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{3}{c}{**Dataset**} \\ \cline{2-5} **Metrics** & \(STD_{1}\) & \(STD_{2}\) & \(RTD_{1}\) & \(RTD_{2}\) \\ \hline Cross\_Entropy Loss & \(0.095\) & \(0.0061\) & \(0.01\) & \(0.007\) \\ \hline Accuracy & \(99.5\%\) & \(100\%\) & \(99.11\%\) & \(100\%\) \\ \hline \hline \end{tabular} \end{table} TABLE V: Performance overview of the \(SPGNN\) edge criticality classification in semi-supervised setting. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Model trained by**} & \(RTD_{1}\) & **Model trained by** & \(STD_{1}\) \\ \cline{2-7} **Metrics** & \(STD_{1}\) & \(STD_{2}\) & \(RTD_{2}\) & \(STD_{2}\) & \(RTD_{1}\) & \(RTD_{2}\) \\ \hline Cross\_Entropy Loss & \(0.009\) & \(0.0037\) & \(1.20\) & \(0.002\) & \(0.79\) & \(0.18\) \\ \hline Accuracy & \(100.00\%\) & \(98.17\%\) & \(92.75\%\) & \(99.87\%\) & \(92.42\%\) & \(97.44\%\) \\ \hline \hline \end{tabular} \end{table} TABLE VI: Performance overview of the \(SPGNN\) edge criticality classification in transfer-learning. Fig. 5: ROC curves of the \(SPGNN\) edge classification in the semi-supervised setting. Fig. 6: ROC curves of the \(SPGNN\) edge classification in the transfer-learning setting. literature with a novel GNN-based approach to automated vulnerability analysis, attack path identification, and risk assessment of underlying network connections that enable critical attack paths. We further present a framework for automated mitigation through a proactive non-person-based timely tuning of the network firewall rules and ZT policies to bolster cyber defenses before potential damage takes place. We model a novel GNN architecture for calculating shortest path lengths exclusively relying on nodes' positional information irrelevant to graph elements' features. We prove that removing self-loops enhances the accuracy of shortest path distance identification as self-loops render the nodes' positional embedding misleading. Furthermore, our in-depth analysis of attack path identification proves the efficiency of the presented approach in locating key connections potentially contributing to attacks compromising network highly-critical assets, with high accuracy. A key strength of the presented approach is not limiting the attacks' detection to a predefined set of possible vulnerabilities interaction. Hence, it is capable of effectively and proactively mitigating cyber risks in complex and dynamic networks where new attack vectors and increasingly sophisticated threats are emerging every day.
2309.09541
Quantum probabilities for the causal ordering of events
We develop a new formalism for constructing probabilities associated to the causal ordering of events in quantum theory, where by an event we mean the emergence of a measurement record on a detector. We start with constructing probabilities for the causal ordering of events in classical physics, where events are defined in terms of worldline coincidences. Then, we show how these notions generalize to quantum systems, where there exists no fundamental notion of trajectory. The probabilities constructed here are experimentally accessible, at least in principle. Our analysis here clarifies that the existence of quantum orderings of events does not require quantum gravity effects: it is a consequence of the quantum dynamics of matter, and it appears in the presence of a fixed background spacetime.
Charis Anastopoulos, Maria-Electra Plakitsi
2023-09-18T07:36:48Z
http://arxiv.org/abs/2309.09541v1
# Quantum probabilities for the causal ordering of events ###### Abstract We develop a new formalism for constructing probabilities associated to the causal ordering of events in quantum theory, where by an event we mean the emergence of a measurement record on a detector. We start with constructing probabilities for the causal ordering events in classical physics, where events are defined in terms of worldline coincidences. Then, we show how these notions generalize to quantum systems, where there exists no fundamental notion of trajectory. The probabilities constructed here are experimentally accessible, at least in principle. Our analysis here clarifies that the existence of quantum orderings of events does not require quantum gravity effects: it is a consequence of the quantum dynamics of matter, and it appears in presence of a fixed background spacetime. ## 1 Introduction A bet on any type of race (with humans, horses, chariots, or cars) is equivalent to the assignment of probabilities to a causal ordering of events. The relevant events are the crossings of the finish line by the runners, and the causal ordering of such events is the results of the race. In this sense, assigning probabilities to causal orderings is both one of the oldest applications of probabilistic thinking, dating at least to the ancient Olympics, and one of the most common uses of probability theory today. In this paper, we describe causal ordering of events (COoE) for quantum systems, where by an event we mean the emergence of a macroscopic measurement record that is localized in space and in time [1, 2]. We construct the probabilities for such causal orderings, and we suggest physical set-ups where such probabilities can be measured, at least in principle. This work is partially motivated by the recent studies of indefinite causal ordering of events in quantum computing [3, 4]. In this context, the word "event" is used to denote an operation on a quantum system, for example, a step in an algorithm. An indefinite sequence of operations can arguably lead to significant advances in quantum computation and other technologies [5, 6, 7, 8]. The most common set-up to witness such phenomena involves the quantum switch, that is, a quantum operation in which two or more quantum channels act on a quantum system with the order of application determined by the state of another quantum system. Systems that manifest indefinite causal order in this sense have been realized in the laboratory [9, 10, 11]. This quantum informational notion of causal ordering differs and may even conflict with relativistic causality--for a detailed analysis of this issue, see [12, 13]. The meaning of the term "event" is crucial in this context. In this paper, we employ a notion of event that is similar to the crossing of the finishing line by a racer. This notion closely reflects the notion of an event in relativity, where physical events are invariantly defined in terms of world-line coincidences1. For example, a particle-detection event is defined as the intersection of the particle's and the detector's worldlines. With this definition, we use the lightcone structure of spacetime, in order to define causal relations between events. Since events are defined in terms of trajectories, the dynamical behavior of trajectories directly influences the properties of causal ordering. If the trajectories are stochastic, then the ordering of events is also a stochastic variable. 
Footnote 1: Einstein emphasized this perspective in his first review of General Relativity [14], see also [15] and [16]. Quantum theory does not admit trajectories as physical observables, so in quantum systems, we have to define events and their causal ordering differently. We define an event as the occurrence of a measurement outcome, that is, the emergence of a measurement record on a detector. This is the most conservative definition of a quantum event. It is also the most natural one in the Copenhagen interpretation [17], and the standard use of the term in particle physics. If the quantum events can be embedded in spacetime, i.e., if we can associate a spacetime point or region to the emergence of a measurement record, then we can define quantum probabilities for the COoE that are natural analogues of the classical ones. Such a definition requires the treatment of the time associated to an event as a quantum observable. It is an old result by Pauli [18] that time cannot be treated as a self-adjoint operator. The only way to have time as an observable is to represent it by a Positive-Operator-Valued measure (POVM). To this end, we use the Quantum Temporal Probabilities (QTP) approach that has been developed for constructing probabilities for temporal observables [1, 2, 19, 20, 21]. The key idea in QTP is to distinguish between the time parameter of Schrodinger's equation and the time variable associated to particle detection [23, 24]. The latter is then treated as a macroscopic quasi-classical variable associated to the detector degrees of freedom. A quasi-classical variable is a coarse-grained quantum variable that satisfies appropriate decoherence conditions, so that its time evolution can be well approximated by classical equations [25, 26, 27]. Hence, the detector admits a dual description: in microscopic scales it is described by quantum theory, but its macroscopic records are expressed in terms of classical spacetime coordinates. The key point here is that the spacetime coordinates of a quantum event are random variables, and they can be used to define quantum probabilities for the causal order of events. It is important to emphasize that there is no relation between the COoE considered here and a quantum causal structure of spacetime, as commonly postulated in quantum gravity research. The quantum behavior of the COoE, considered here, is due to the quantum nature of matter, and it coexists peacefully with a fixed background spacetime. In fact, the background spacetime structure is essential for defining quantum probabilities for the COoE. We also show that probabilities for quantum COoEs can also be defined even if there is no macroscopic record about the time at which the events occur. To this end, we construct a simple detection model, in which different orderings of events correspond to different measurement records. Hence, the probabilities of such records coincide with probabilities for different causal orders. The structure of this paper is the following. In Sec. 2, we provide a general definition of the notion of an event, and explain how we can construct probabilities for the causal order of events. In Sec. 3, we apply these definitions to classical physics, including Hamiltonian mechanics and stochastic processes. In Sec. 4, we define probabilities for the causal ordering of quantum events, using temporal observables. In Sec. 5, we present a simple model for the quantum order of events in absence of records about temporal observables. In Sec. 
6, we summarize our results. ## 2 Main concepts In this section, we present a general characterization of events, and we identify the mathematical properties that are satisfied by a causal order of events. By "event" we mean a uniquely identifiable occurrence with definite characteristics. In classical mechanics, events are typically defined as the intersection of two world-lines. For example, one worldline may correspond to a particle and the other to an observer with a particle detector; their coincidence is a particle-detection event. We can improve on this description, by defining an event as the first intersection of a worldline with a specific time-like surface. We can consider, for example, the worldline of a runner crossing the world tube of the finish line in a marathon. However, in quantum theory, definite characteristics are attributed only to measurement outcomes; trajectories are not observables. For this reason, we will define events in terms of measurement records. For example, a detection event is the "click" of a particle detector. Hence, we identify event with changes in a macroscopic apparatus that denote the occurrence of a measurement. Let \(E\) be a set of events in the physical system under study. Events are discrete occurrences, so \(E\) is a discrete set. We will denote events by Greek letters, \(\alpha,\beta\), and so on. The events in the set \(E\) may be ordered causally. We say that \(\alpha<\beta\), if event \(\alpha\) occurs prior to an event \(\beta\). A _causal order_ on \(E\) is the consistent assignment of the order relation \(<\) to pairs of elements of \(E\). A causal order satisfies the properties of a partial-order relation, namely, 1. Irreflexivity: It is never true that \(\alpha<\alpha\). 2. Asymmetry: If \(\alpha<\beta\), then \(\beta<\alpha\) is false. 3. Transitivity: If \(\alpha<\beta\) and \(\beta<\gamma\), then \(\alpha<\gamma\). In a partial order, it is not necessary that all pairs of elements are related with the order relation. Physically, we can distinguish two cases. Some elements may be _simultaneous_, in which case we write \(\alpha\sim\beta\). Or they may be _uncomparable_, in which case we write \(\alpha|\beta\). We therefore define a causal order as a partial order that also include the distinction between simultaneous and incomparable pairs of elements. We will denote the set of all possible causal orders on \(E\) by \(CO(E)\). We will denote elements of \(CO(E)\) by capital Greek letters. For example, in a set \(E\) that consists of two distinct elements \(\alpha\) and \(\beta\), there are four possible causal orders * \(M_{1}=\{\alpha\prec\beta\}\) * \(M_{2}=\{\beta\prec\alpha,\}\) * \(M_{3}=\{\alpha|\beta\}\) * \(M_{4}=\{\alpha\sim\beta\}\) We say that a causal order defines a _time order_ on \(E\), if there exist no pair \(\alpha,\beta\in E\) such that \(\alpha|\beta\). We will denote the set of all time orders on \(E\) by \(TO(E)\). Clearly \(TO(E)\subset CO(E)\). Ever since Newton, we define the causal ordering of physical events in terms of the spacetime causal structure. That is, we consider a four-dimensional manifold \(M\) with points \((x^{0},x^{1},x^{2},x^{3})\) that is equipped with a partial ordering relation \(<\) that defines the causal structure of spacetime. * In non-relativistic physics, \(x<y\) if \(x^{0}<y^{0}\), and \(x\sim y\) if \(x^{0}=y^{0}\). There are no incomparable elements. 
* In relativistic physics, \(x<y\), if \(y\) lies in the future lightcone of \(x\), and \(x|y\) if \(x\) is spacelike separated from \(y\). Spacelike separated events are incomparable; there are no simultaneous events. Since all physical events occur in spacetime, we consider embeddings \(X\) of the set of events \(E\) into spacetime, that is, onto maps \(X:E\to M\). Then, the pullback of the spacetime causal structure with respect to \(X\) defines a causal order on \(E\), that is, \[\alpha\prec\beta,\ \ \text{if}\ \ X(\alpha)<X(\beta) \tag{1}\] Hence, the physical COoE reflects the causal structure on spacetime2. Footnote 2: Here, we associated events with spacetime points. A more general analysis should associate events with spacetime _regions_, but it will not be needed in this paper. It is imperative to distinguish COoEs from the causal structure of spacetime. As long as we ignore gravitational interactions, the latter is fixed an unchanging. It is defined by the lightcone structure of Minkowski spacetime, or of any other background spacetime. However, COoEs are not fixed: they can be stochastic or quantum variables. This is because they depend on the embeddings \(X\), which are themselves stochastic or quantum variables. The quantum behavior of the COoEs does not require a quantum behavior of spacetime, as postulated in quantum gravity theories. As a matter of fact, in quantum gravity proper, we expect to have no external spacetime causal structure, hence, the definitions of the COoEs given here do not work. The difficulties that arise from this fact are known as the _problem of time_ in quantum gravity [28, 29, 30]. ## 3 COoE in classical physics In this section, we construct probabilities for the COoEs for classical systems, namely, for Hamiltonian systems and for systems described by stochastic processes. ### Classical mechanics Let \(\Gamma\) be the state space of a classical system; we will denote the elements of \(\Gamma\) by \(\xi\). By Hamilton's equation, a system found at \(\xi\) at time \(t=0\), will evolve to a point \(\sigma_{t}(\xi)\) at time \(t\). The map \(\sigma_{t}\) is a diffeomorphism. An event in classical mechanics corresponds to the first intersection of a state space trajectory with a surface on \(\Gamma\). Surfaces of codimension \(s\) are locally determined by the vanishing of \(s\) functions on \(\Gamma\), hence, we can represent an event \(\alpha\) by a set of \(s\) functions \(F^{i}_{\alpha}\), where \(i=1,2,\ldots s\). A set \(E\) of \(n\) events consists \(n\) such families. For simplicity, we will consider only surfaces of codimension one in this paper, so that one event corresponds to a single function on \(\Gamma\). The causal ordering in the set of events is defined through the parameter of time evolution \(t\), which is assumed to coincide with the Newtonian absolute time. For any event \(\alpha\), we define the null set \(N_{\alpha}\) of \(\alpha\), as the set of all \(\xi\in\Gamma\), such that the equation \(F_{\alpha}[\sigma_{t}(\xi)]=0\) has no solution for all \(t\geq 0\). Then, we define the time \(T_{\alpha}\) of the event \(\alpha\), as a function \(T_{\alpha}:\Gamma-N_{\alpha}\to\mathbb{R}^{+}\), such that \(T_{\alpha}(\xi)\) is the smallest positive value of \(t\) that solves the equation \(F_{\alpha}[\sigma_{t}(\xi)]=0\). This means that \(T_{\alpha}(\xi)\) is the time it takes a trajectory that starts at \(\xi\) to cross the surface \(F_{\alpha}=0\) for the first time. 
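As a minimal numerical illustration of these definitions, the following Python sketch estimates \(T_{\alpha}(\xi)\) by integrating a trajectory step by step and returning the first time at which \(F_{\alpha}\) changes sign; initial conditions for which no crossing is found within a finite horizon are treated as (approximately) belonging to the null set \(N_{\alpha}\). The integrator, the step sizes, and the single free-particle example used below are illustrative assumptions, not part of the formalism.

```python
import numpy as np

def event_time(xi0, flow_step, F_alpha, dt=1e-3, t_max=100.0):
    """Estimate T_alpha(xi0), the smallest t >= 0 with F_alpha(sigma_t(xi0)) = 0.

    Returns None if no crossing is found up to t_max, in which case xi0 is
    treated as (approximately) belonging to the null set N_alpha.
    """
    xi = np.array(xi0, dtype=float)
    t, f_prev = 0.0, F_alpha(xi)
    while t < t_max:
        xi = flow_step(xi, dt)        # one integration step of the flow sigma_t
        t += dt
        f_new = F_alpha(xi)
        if f_prev * f_new <= 0.0:     # sign change: the surface F_alpha = 0 was crossed
            return t
        f_prev = f_new
    return None

# Illustration: a free particle of mass m in one dimension, state xi = (x, p),
# with the event defined by the first crossing of the surface x = 0.
m = 1.0
free_step = lambda xi, dt: np.array([xi[0] + xi[1] / m * dt, xi[1]])
T = event_time([-1.0, 0.5], free_step, lambda xi: xi[0])  # close to -m*x/p = 2.0
```

Comparing the values of such time functions for different events, and counting the initial conditions for which no crossing occurs, is all that is needed to estimate the probabilities of the causal orders constructed below.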
Suppose that the initial state of the system corresponds to a probability distribution \(\rho(\xi)\). Then, we can construct joint probability distributions for the times of events \[p(t_{1},t_{2},\ldots,t_{n})=\int d\xi\rho(\xi)\delta[T_{1}(\xi)-t_{1}]\delta[ T_{2}(\xi)-t_{2}]\ldots\delta[T_{n}(\xi)-t_{n}]. \tag{2}\] These probability densities are not normalized to unity. For proper normalization, we have to include the probability densities for no events, which corresponds to the null sets \(N_{\alpha}\). For example, for \(n=2\), we have the probability densities \(p(t_{1},t_{2})\) as above, together with the probability densities \[p(N_{1},t_{2})=\int d\xi\chi_{N_{1}}(\xi)\delta[T_{2}(\xi)-t_{2}] \tag{3}\] \[p(t_{1},N_{2})=\int d\xi\chi_{N_{2}}(\xi)\delta[T_{1}(\xi)-t_{1}]\] (4) \[p(N_{1},N_{2})=\int d\xi\chi_{N_{1}}(\xi)\chi_{N_{2}}(\xi). \tag{5}\] Here, \(\chi_{C}\) is the characteristic function of a set \(C\). We can define the following four causal orders for the two events, * \(M_{1}=\{1\prec 2\}\) corresponds to \(t_{1}<t_{2}\), or \(N_{2}\) together with finite \(t_{1}\). * \(M_{2}=\{2\prec 1\}\) corresponds to \(t_{2}<t_{1}\), or \(N_{1}\) together with finite \(t_{2}\). * \(M_{3}=\{1|2\}\) corresponds to \(N_{1}\) and \(N_{2}\). * \(M_{4}=\{1\sim 2\}\) corresponds to \(t_{1}=t_{2}\). Then, we obtain the associated probabilities \[p(M_{1}) = \int_{0}^{\infty}dt_{1}\int_{0}^{t_{1}}dt_{2}p(t_{1},t_{2})+\int_{0 }^{\infty}dt_{1}p(t_{1},N_{2})\] \[p(M_{2}) = \int_{0}^{\infty}dt_{2}\int_{0}^{t_{2}}dt_{1}p(t_{1},t_{2})+\int_{0 }^{\infty}dt_{2}p(N_{1},t_{2})\] \[p(M_{3}) = p(N_{1},N_{2})\] \[p(M_{4}) = \int_{0}^{\infty}dtp(t,t). \tag{6}\] This procedure is straightforwardly generalized to \(n\) events. As an illustration, consider a system of two free particles of mass \(m\) in one dimension, with state space \(\Gamma=\{x_{1},x_{2},p_{1},p_{2}\}\). We restrict to \(x_{1}\leq 0\) and \(x_{2}\leq 0\), and we consider the pair of events that correspond to either of the two particles crossing the line \(x=0\). Hence, the two functions that define events are \(F_{1}=x_{1}\) and \(F_{2}=x_{2}\). The equations of motion are \(x_{1}(t)=x_{1}+p_{1}t/m\) and \(x_{2}(t)=x_{2}+p_{2}t/m\). We straightforwardly find that \(N_{1}=\{(x_{1},x_{2},p_{1},p_{2})|p_{1}\leq 0\}\), and \(N_{2}=\{(x_{1},x_{2},p_{1},p_{2})|p_{2}\leq 0\}\), that is, the particles never cross the line \(x=0\) if they have non positive momentum. Similarly, we compute the time functions \(T_{1}=-mx_{1}/p_{1}\) and \(T_{2}=-mx_{2}/p_{2}\). It is simple to identify the subsets of \(\Gamma\) that correspond to the different causal orders, \[M_{1} = \{(x_{1},x_{2},p_{1},p_{2})|p_{1}>0,p_{2}>0,x_{1}p_{2}>x_{2}p_{1} \}\cup\{(x_{1},x_{2},p_{1},p_{2})|p_{1}>0,p_{2}\leq 0\},\] \[M_{2} = \{(x_{1},x_{2},p_{1},p_{2})|p_{1}>0,p_{2}>0,x_{2}p_{1}>x_{1}p_{2} \}\cup\{(x_{1},x_{2},p_{1},p_{2})|p_{1}\leq 0,p_{1}>0\},\] \[M_{3} = \{(x_{1},x_{2},p_{1},p_{2})|p_{1}\leq 0,p_{2}\leq 0\},\] \[M_{4} = \{(x_{1},x_{2},p_{1},p_{2})|p_{1}>0,p_{2}>0,x_{2}p_{1}=x_{1}p_{2}\}. \tag{7}\] The associated probabilities are simply \(p(M_{i})=\int d\xi\chi_{M_{i}}(\xi)\rho(\xi)\). Note that \(M_{4}\) is a set of measure zero, so, in general, the associated probability vanishes. Suppose, for example, that both particles start from \(x_{0}<0\), and that they have the same momentum distribution \(f(p)\), so that \[\rho(x_{1},x_{2},p_{1},p_{2})=\delta(x_{1}-x_{0})\delta(x_{2}-x_{0})f(p_{1})f( p_{2}). 
\tag{8}\] Then, we compute, \(p(M_{1})=p(M_{2})=w_{+}-\frac{1}{2}w_{+}^{2}\), and \(p(M_{3})=(1-w_{+})^{2}\), where \(w_{+}=\int_{0}^{\infty}dpf(p)\) is the fraction of particles with positive momentum. ### Stochastic processes The analysis of Sec. 3.1 passes with little change to classical stochastic systems. Consider a system characterized by a state space \(\Gamma\) with elements \(\xi\). Let us denote by \(P(\Gamma)\) the space of paths on \(\Gamma\), that is, of continuous maps from the time interval \([0,T]\) to \(\Gamma\). Here, we are restricting to paths between an initial time \(t=0\), and a final time \(t=T\). A stochastic system is described by a probability measure \(\mu\) on \(P(\Gamma)\), such that the expectation of any function \(A\) of \(P(\Gamma)\) is given by \[\langle A\rangle=\int d\mu[\xi(\cdot)]A[\xi(\cdot)]] \tag{9}\] Again, an event \(\alpha\) is defined by the first intersection of a path with a surface, and it can be represented by a function \(F_{\alpha}\) on \(\Gamma\). We can still define a null space \(N_{\alpha}\), and a time function \(T_{\alpha}\), however, in absence of a deterministic evolution law, these objects are defined on the space of paths \(P(\Gamma)\), and not on \(\Gamma\). In particular, we define by \(N_{\alpha}\) the subset of \(P(\Gamma)\) that consists of paths \(\xi(\cdot)\) for which the equation \(F_{\alpha}(\xi(t))=0\) has no solution for any \(t\in[0,T]\); we will denote the complement of \(N_{\alpha}\) by \(\bar{N}_{\alpha}\). We also define the time function \(T_{\alpha}\) for any path \(\xi(\cdot)\in\bar{N}_{\alpha}\) by setting the value \(T_{\alpha}[\xi(\cdot)]\) on a path \(\xi(\cdot)\) equal to the smallest value of \(t\) such that \(F_{\alpha}(\xi(t))=0\). The definition of joint probabilities for the times of events proceeds in a similar way to Sec. 2. For example, the joint probability distribution for \(n\) events is \[p(t_{1},t_{2},\ldots,t_{n})=\int d\mu[\xi(\cdot)]\delta(t_{1}-T_{1}(\xi(\cdot) )\delta(t_{2}-T_{2}(\xi(\cdot))\ldots\delta(t_{n}-T_{n}(\xi(\cdot)). \tag{10}\] The space of paths \(P(\Gamma)\) is split into mutually exclusive and exhaustive subsets, each corresponding to an element of \(CO(E)\). For two events, we have four elements of \(CO(E)\), which correspond to the following subsets. \[M_{1} = \{\gamma\in P(\Gamma)|T_{1}[\gamma]<T_{2}[\gamma]\}\cup\left(N_{ 2}\cap\bar{N}_{1}\right),\] \[M_{2} = \{\gamma\in P(\Gamma)|T_{2}[\gamma]<T_{1}[\gamma]\}\cup\left(N_{ 1}\cap\bar{N}_{2}\right),\] \[M_{3} = N_{1}\cap N_{2},\] \[M_{4} = \{\gamma\in P(\Gamma)|T_{1}[\gamma]=T_{2}[\gamma]\}. \tag{11}\] As an example, consider the case of a Wiener process. We have two particles undergoing Brownian motion, that is, each particle is described by the evolution of a single-time probability density \(\rho\) on \(\mathbb{R}\), by \[\frac{\partial\rho}{\partial t}=\frac{D}{2}\frac{\partial^{2}\rho}{\partial x ^{2}}, \tag{12}\] where \(D\) is the diffusion constant. For a particle that starts at \(x_{0}=-L\), the probability density of crossing the line \(x=0\) is \[f(t)=\frac{1}{\sqrt{2\pi Dt}}\frac{L}{2t}e^{-\frac{L^{2}}{2Dt}}, \tag{13}\] with the probability of not crossing \(x=0\) for any \(t\in[0,\infty)\) equal to \(\frac{1}{2}\). We assume that the two particles move independently, and that the associated diffusion constants are different, \(D_{1}\) and \(D_{2}\). (This is possible, for example, if the particle masses are different.) 
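As a quick numerical cross-check (ours; the diffusion constants, the starting point \(L\), and the use of adaptive quadrature are illustrative assumptions), one can integrate the crossing-time density of Eq. (13) over the regions \(t_{1}<t_{2}\) and \(t_{2}<t_{1}\) and add the no-crossing contributions, following the decomposition of Eq. (11). The result can then be compared with the closed-form probabilities derived next.

```python
import numpy as np
from scipy import integrate

L = 1.0

def f(t, D):
    """Crossing-time density of Eq. (13) for a particle started at x = -L."""
    if t < 1e-12:
        return 0.0
    return np.exp(-L**2 / (2 * D * t)) * L / (2 * t * np.sqrt(2 * np.pi * D * t))

D1, D2 = 1.0, 4.0
w1, _ = integrate.quad(lambda t: f(t, D1), 0, np.inf)   # total crossing prob. (= 1/2)
w2, _ = integrate.quad(lambda t: f(t, D2), 0, np.inf)

# p(M1): both particles cross with t1 < t2, or particle 1 crosses and particle 2 never does.
both_1_first, _ = integrate.dblquad(lambda t2, t1: f(t1, D1) * f(t2, D2),
                                    0, np.inf, lambda t1: t1, lambda t1: np.inf)
both_2_first, _ = integrate.dblquad(lambda t1, t2: f(t1, D1) * f(t2, D2),
                                    0, np.inf, lambda t2: t2, lambda t2: np.inf)
pM1 = both_1_first + w1 * (1 - w2)
pM2 = both_2_first + w2 * (1 - w1)
pM3 = (1 - w1) * (1 - w2)

print(pM1, np.arctan(np.sqrt(D1 / D2)) / (2 * np.pi) + 0.25)  # should agree
print(pM2, np.arctan(np.sqrt(D2 / D1)) / (2 * np.pi) + 0.25)
print(pM3, 0.25)
```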
The joint probability density that the first crosses \(x=0\) at time \(t_{1}\) and the second at time \(t_{2}\) is simply \(f_{1}(t_{1})f_{2}(t_{2})\), where \(f_{i}\) is the probability density (13), with diffusion constant \(D_{i}\). Then, we evaluate \[p(M_{1}) = \frac{1}{2\pi}\arctan\left(\sqrt{D_{1}/D_{2}}\right)+\frac{1}{4},\] \[p(M_{2}) = \frac{1}{2\pi}\arctan\left(\sqrt{D_{2}/D_{1}}\right)+\frac{1}{4},\] \[p(M_{3}) = \frac{1}{4}, \tag{14}\] where we ignored \(M_{4}\), as it is of measure zero. ## 4 COoE in quantum systems In this section, we define probabilities for the COoE in quantum systems. ### Probability assignment For quantum systems, the definition of events in terms of paths crossing a surface does not work, because paths are not physical observables in quantum theory. The only meaningful observables are measurement outcomes. In the most common measurement scheme, namely, von Neumann measurements, the timing of the measurement events is fixed _a priori_. Hence, the causal order of events is also fixed. We need a measurement scheme that treats the time of an event as a random variable, if we are to treat the causal order of events as a random variable quantum mechanically. This is achieved by the QTP approach that was described in the introduction. Suppose that we have a particle detector located at a fixed region in space. Then, via QTP, we can construct a set of positive operators \(\hat{\Pi}(t)\), such that the probability density of detection at time \(t>0\) is \(p(t)=Tr(\hat{\rho}_{0}\hat{\Pi}(t))\), where \(\hat{\rho}_{0}\) is the initial state of the particle. Together with the positive operator \(\hat{\Pi}(N)\) of no detection, the operators \(\hat{\Pi}(t)\) define a POVM. For example, we can construct a POVM for the time of arrival of a particle of mass \(m\). We assume that the particle moves on a line and that the particle detector is located at \(x=L\). In the momentum basis, \[\langle k|\hat{\Pi}(t)|k^{\prime}\rangle=\frac{1}{2\pi}S(k,k^{\prime})\sqrt{v_{k}v_{k^{\prime}}}e^{i(k-k^{\prime})L-i(\epsilon_{k}-\epsilon_{k^{\prime}})t}, \tag{15}\] where \(\epsilon_{k}=\sqrt{m^{2}+k^{2}}\) is the particle's energy, \(v_{k}\) is the particle's velocity, and \(S(k,k^{\prime})\) is the _localization operator_, that is, an operator that determines the irreducible spread of the detection record. The sharpest localization is achieved for \(S(k,k^{\prime})=1\). The operators \(\hat{\Pi}(t)\) are not normalized to unity for \(t\in[0,\infty)\). However, if we restrict to quantum states with strictly positive momentum content, the contribution to the total probability from negative values of \(t\) is negligible, and we can consider the normalization of \(\hat{\Pi}(t)\) over the full real line. In this case, \[\int_{-\infty}^{\infty}dt\hat{\Pi}(t)=\hat{I}. \tag{16}\] For \(n\) independent detectors, each detecting a different particle, we can identify a POVM \(\hat{\Pi}(t_{1},t_{2},\ldots,t_{n})=\hat{\Pi}_{1}(t_{1})\otimes\hat{\Pi}_{2}(t_{2})\otimes\ldots\otimes\hat{\Pi}_{n}(t_{n})\), where \(\hat{\Pi}_{i}(t_{i})\) corresponds to the POVM for the \(i\)-th detector. Thus, we can define probability densities \(p(t_{1},t_{2},\ldots,t_{n})\) for the \(n\) measurement events, and we can follow the same procedure as in Sec. 3, in order to obtain probabilities for different causal orders of \(n\) measurement events. For example, for two events, 1 and 2, we have the three COoEs \(M_{1}=\{1<2\}\), \(M_{2}=\{2<1\}\), and \(M_{3}=\{1|2\}\).
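To make the arrival-time POVM concrete before combining two detectors, the detection density \(p(t)=Tr(\hat{\rho}_{0}\hat{\Pi}(t))\) can be evaluated on a momentum grid. The sketch below is ours; the Gaussian state, the particle mass, and the grid parameters are illustrative assumptions. It uses the kernel of Eq. (15) with \(S(k,k^{\prime})=1\), for which \(p(t)\) of a pure state reduces to the modulus squared of a single \(k\)-integral; the total probability comes out close to one, as expected from Eq. (16), and the density peaks near the classical arrival time \(L\epsilon_{k_{0}}/k_{0}\).

```python
import numpy as np

m, L, k0, sig = 1.0, 50.0, 5.0, 0.5                  # mass, detector position, packet parameters
k = np.linspace(k0 - 8 * sig, k0 + 8 * sig, 4001)    # strictly positive momenta
dk = k[1] - k[0]
eps = np.sqrt(m**2 + k**2)                           # energy epsilon_k
v = k / eps                                          # velocity v_k
psi = (2 * np.pi * sig**2) ** (-0.25) * np.exp(-(k - k0) ** 2 / (4 * sig**2))

def p_arrival(t):
    """p(t) = <psi| Pi(t) |psi> with the kernel of Eq. (15) and S = 1."""
    amp = np.sum(np.sqrt(v) * psi * np.exp(1j * (k * L - eps * t))) * dk
    return np.abs(amp) ** 2 / (2 * np.pi)

t_grid = np.linspace(0.0, 2 * L * np.sqrt(m**2 + k0**2) / k0, 2000)
p = np.array([p_arrival(t) for t in t_grid])
dt = t_grid[1] - t_grid[0]
print("total probability ~", np.sum(p) * dt)          # ~ 1
print("peak at t ~", t_grid[np.argmax(p)],
      " classical arrival time ~", L * np.sqrt(m**2 + k0**2) / k0)
```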
The positive operators \[\hat{E}(M_{1}) = \int_{0}^{\infty}dt_{2}\int_{0}^{t_{2}}dt_{1}\hat{\Pi}_{1}(t_{1}) \otimes\hat{\Pi}_{2}(t_{2})+[\hat{I}-\hat{\Pi}_{1}(N)]\otimes\hat{\Pi}_{2}(N),\] \[\hat{E}(M_{2}) = \int_{0}^{\infty}dt_{1}\int_{0}^{t_{1}}dt_{2}\hat{\Pi}_{1}(t_{1}) \otimes\hat{\Pi}_{2}(t_{2})+\hat{\Pi}_{1}(N)[\hat{I}-\hat{\Pi}_{2}(N)],\] \[\hat{E}(M_{3}) = \hat{\Pi}_{1}(N)\otimes\hat{\Pi}_{2}(N), \tag{17}\] define a POVM for the causal orders. Assume that the two events correspond to the detection of identical particles with the two detectors located at \(x_{1}=L_{1}\) and \(x_{2}=L_{2}\) from the source--see Fig. 1. We use the POVM (15) for both \(\hat{\Pi}_{1}(t)\) and \(\hat{\Pi}_{2}(t)\). Taking \(-\infty\) for the lower bound in the time integral, we find \[\hat{E}(M_{1}) = \frac{1}{2}\hat{I}+\hat{B} \tag{18}\] \[\hat{E}(M_{2}) = \frac{1}{2}\hat{I}-\hat{B}\] (19) \[\hat{E}(M_{3}) = 0 \tag{20}\] where the operator \(\hat{B}\) is defined by matrix elements \[\langle k_{1},k_{2}|\hat{B}|k_{1}^{\prime},k_{2}^{\prime}\rangle = iS_{1}(k_{1},k_{1}^{\prime})S_{2}(k_{2},k_{2}^{\prime})\sqrt{v_ {k_{1}}v_{k_{2}}v_{k_{1}^{\prime}}v_{k_{2}^{\prime}}} \tag{21}\] \[\times e^{i(k_{1}-k_{1}^{\prime})L_{1}-i(k_{2}-k_{2}^{\prime})L_{2}} \delta(\epsilon_{k_{1}}+\epsilon_{k_{2}}-\epsilon_{k_{1}^{\prime}}-\epsilon_{ k_{2}^{\prime}})\;\mbox{PV}\frac{1}{\epsilon_{k_{2}}-\epsilon_{k_{2}^{\prime}}}.\] Here, PV stands for Cauchy principal value. Consider a general initial state of the form \(|\psi\rangle=\sum_{i}c_{i}|\psi_{1i}\rangle\otimes|\psi_{2i}\rangle\). The probability densities associated to the three orders are \[p(M_{1})=\frac{1}{2}+w,\;\;p(M_{2})=\frac{1}{2}-w,\;\;p(M_{3})=0, \tag{22}\] where the asymmetry \[w=\sum_{ij}c_{i}c_{j}^{*}\int_{-\infty}^{\infty}ds\;[\alpha_{ij}^{(1)}(s) \dot{\alpha}_{ij}^{(2)}(s)-\dot{\alpha}_{ij}^{(1)}(s)\alpha_{ij}^{(2)}(s)], \tag{23}\] is expressed in terms of the quantities \[\alpha_{ij}^{(a)}(s)=\int\frac{dkdk^{\prime}}{2\pi}\psi_{ai}(k)\psi_{aj}^{*}( k^{\prime})S_{a}(k,k^{\prime})\sqrt{v_{k}v_{k^{\prime}}}e^{i(k-k^{\prime})L_{a} -i(\epsilon_{k}-\epsilon_{k^{\prime}})s}\;\mbox{PV}\frac{1}{\epsilon_{k}- \epsilon_{k^{\prime}}}, \tag{24}\] where \(a=1,2\). Figure 1: A set-up by which to measure the causal order of two events that correspond to detections at detectors 1 and 2. ### Examples We analyze the case of massless particles, \(m=0\), and ideal detector, \(S(k,k^{\prime})=1\). For two particles prepared in the same state \(\psi_{0}(k)\), that is centered around \(x=0\). However, the distances traveled by the two particles may be different, \(L_{1}\neq L_{2}\). We take for \(\psi_{0}\) a Gaussian centered around \(k_{0}\), \[\psi_{0}(k)=(2\pi\sigma)^{-1/4}\exp\left[(k-k_{0})^{2}/(4\sigma^{2})\right]. \tag{25}\] Then, we find that \(w\) in Eq. (23) equals \(Q_{1}[\sigma(L_{1}-L_{2})]\), where \[Q_{1}(\delta)=\int_{-\infty}^{\infty}dxe^{-2(x-\delta)^{2}}{\rm erf}\left( \sqrt{2}x\right). \tag{26}\] The dependence of the function \(Q\) on \(\delta\) is plotted in Fig. 2. As expected \(w\) vanishes for \(L_{1}=L_{2}\) and tends to \(\pm\frac{1}{2}\) for large differences between \(L_{1}\) and \(L_{2}\), in which case the ordering of the events is almost certain. The probabilities in the example above could have been derived from a classical theory. 
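The asymmetry for this Gaussian example is easy to evaluate numerically. The following sketch is ours; note that, as printed, Eq. (26) tends to \(\pm\sqrt{\pi/2}\) at large \(|\delta|\), so a prefactor \(1/\sqrt{2\pi}\) is assumed below in order to reproduce the limits \(w\to\pm\frac{1}{2}\) quoted in the text; the packet width and detector distances are also illustrative assumptions.

```python
import numpy as np
from scipy import integrate, special

def Q1(delta):
    """Eq. (26); a 1/sqrt(2*pi) prefactor is assumed so that Q1 -> +/- 1/2
    for large |delta|, matching the limits stated in the text."""
    val, _ = integrate.quad(
        lambda x: np.exp(-2.0 * (x - delta) ** 2) * special.erf(np.sqrt(2.0) * x),
        -np.inf, np.inf)
    return val / np.sqrt(2.0 * np.pi)

sigma, L1, L2 = 0.5, 12.0, 10.0
w = Q1(sigma * (L1 - L2))
print("w =", w, "  p(M1) =", 0.5 + w, "  p(M2) =", 0.5 - w)   # Eq. (22)
print([round(Q1(d), 3) for d in (0.0, 0.5, 1.0, 2.0, 5.0)])    # 0 -> 1/2
```

As in Fig. 2, the asymmetry vanishes for \(L_{1}=L_{2}\) and saturates once the difference in path lengths greatly exceeds the packet width \(1/\sigma\).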
To see quantum behavior, we consider an superposition state for the first particle \[\psi(k_{1},k_{2})=\frac{1}{\sqrt{2(1+\nu)}}\psi_{0}(k_{1})\left[1+e^{ik_{1} \ell}\right]\psi_{0}(k_{2}), \tag{27}\] Here \(\ell\) is a path difference for the first particle in one component of the superposition and \(\nu=\int dk_{1}dk_{2}|\psi_{0}(k)|^{2}\cos(k\ell)\). For the Gaussian (25), \(\nu=e^{-\sigma^{2}\ell^{2}/2}\cos(k_{0}\ell)\). For this initial state \[w=\frac{Q_{1}(\delta)+2Q_{2}(\delta)\cos\left(\frac{k_{0}}{\sigma}\delta \right)}{2\left[1+e^{-\delta^{2}/2}\cos\left(\frac{k_{0}}{\sigma}\delta\right) \right]}, \tag{28}\] where \(\delta=\sigma\ell\), and the function \[Q_{2}(\delta)=\int_{-\infty}^{\infty}dxe^{-x^{2}-(x-\delta)^{2}}{\rm erf} \left(\sqrt{2}x\right), \tag{29}\] is plotted in Fig. 2. In Fig. (3), we plot \(w\) as a function of \(\delta\) and of \(k_{0}/\sigma\). The quantum nature of the system is manifested in the oscillatory behavior of the probabilities. Figure 2: The function \(Q_{1}\) of Eq. (26) (solid) and the function \(Q_{2}\) of Eq. (29) (dashed) as a function of \(\delta\). ## 5 Probabilities for the COoE via a detection model In the examples of Sec. 4, the measurements of the causal ordering of events are coarse-grained. This means that the measuring apparatuses record the time of detection events, and the probabilities for the causal order of events are obtained by integrating over probabilities with respect to detection times. In this sense, the construction is formally similar to the one of classical physics, even if there are no paths at the fundamental level. However, in quantum theory, it may be possible to define probabilities for causal ordering of events as fine-grained observables, even if we cannot distinguish between the times of the individual events. In this section, we will present a simple model that provides such probabilities for the case of two potential events, and which can be straightforwardly generalized for \(n\) events. ### The model The key idea is to direct a pair of particles towards a detector that can record only one of them, but not both. As an example, we consider a three-level system (3LS), with states \(|0\rangle,|1\rangle\), and \(|2\rangle\). Suppose that particle 1 can excite only the transition \(0\to 1\), and particle 2 only the transition \(0\to 2\). If after the interaction of the particles with the three-level system, we find the system in state \(|1\rangle\), we can surmise that particle 1 was detected first and particle 2 was not detected, and vice versa. To conform with our definition of an event with a measurement record, we must place an identical 3LS after the first, in which the particle not absorbed by the first can be detected. However, this is superfluous for identifying the COoE in this system, so we will consider the case of a single 3LS. This set-up is straightforwardly generalized for determining the probabilities for \(n\) events. We require \(n\) particles that can be sharply distinguished by their energies and \(n-1\) systems with \(n+1\) energy levels, so that the detection of each particle can be associated to a single transition. To implement our model, we assume that the incoming particles are described by a free scalar field \(\hat{\phi}(x)\) with mass \(m\). The two particles are distinguished by their initial energies; we can assume that they are prepared from different sources. The particles interact with one 3LS, which we take to be located at \({\bf x}=0\). 
The total Hamiltonian is a sum of three terms \(\hat{H}_{\phi}+\hat{H}_{3LS}+\hat{H}_{int}\), where \(\hat{H}_{\phi}\) is the field Figure 3: The asymmetry \(w\) of Eq. (28) as a function of \(\delta\) for constant \(k_{0}/\sigma=10\) (left) and as a function of \(k_{0}/\sigma\) for constant \(\delta=1\) (right). Hamiltonian, expressed in terms of field creation and annihilation operators, \[\hat{H}_{\phi}=\int d{\bf k}\epsilon_{\bf k}\hat{a}_{\bf k}^{\dagger}\hat{a}_{bfk}, \tag{30}\] where \(d{\bf k}\) stands for \(d^{3}k/(2\pi)^{3}\); \[\hat{H}_{3LS}=\Omega_{1}|1\rangle\!\langle 1|+\Omega_{2}|2\rangle\!\langle 2| \tag{31}\] is the 3LS Hamiltonian, and the interaction Hamiltonian \[\hat{H}_{int}=\sum_{a=1}^{2}\lambda_{a}\int\frac{dk}{\sqrt{2\omega_{\bf k}}}( \hat{a}_{\bf k}\hat{u}_{a+}+\hat{a}_{\bf k}^{\dagger}\hat{u}_{a-}), \tag{32}\] describes a dipole coupling between the field and the 3LS. Here, \(\lambda_{a}\) is the coupling constants associated to the transition \(0\to a\), \(\hat{u}_{a+}=|a\rangle\!\langle 0|\), and \(\hat{u}_{a-}=|0\rangle\!\langle a|\); \(a=1,2\). This Hamiltonian is a variation of Lee's Hamiltonian that is commonly employed in the study of spontaneous decay [27]. ### Time evolution To derive the time evolution law for this model, we work in the interaction picture. Then, the quantum state satisfies the equation \[i\frac{\partial}{\partial t}|\psi(t)\rangle=\sum_{a=1}^{2}\lambda_{a}\int \frac{d{\bf k}}{\sqrt{2\omega_{\bf k}}}(\hat{a}_{\bf k}\hat{u}_{a+}e^{-i( \epsilon_{\bf k}-\Omega_{a})t}+\hat{a}_{\bf k}^{\dagger}\hat{u}_{a-}e^{i( \epsilon_{\bf k}-\Omega_{a})t})|\psi(t)\rangle. \tag{33}\] We assume an initial two-particle state for the field and the ground state for the 3LS. The Hamiltonian employed here causes transitions only to one-particles states with an excited state for the 3LS. Hence, the state is of the form \[|\psi(t)\rangle=\int d{\bf k}d{\bf k}^{\prime}c({\bf k},{\bf k}^{\prime};t)|{ \bf k},{\bf k}^{\prime},0\rangle+\sum_{a}\int d{\bf k}d_{a}({\bf k};t)|k,a\rangle, \tag{34}\] Substituting into Eq. (33), we obtain \[i\dot{c}({\bf k},{\bf k}^{\prime};t) = \sum_{a}\lambda_{a}\left[\frac{d_{a}({\bf k};t)}{\sqrt{2\epsilon _{\bf k}^{\prime}}}e^{i(\epsilon_{\bf k}-\Omega_{a})t}+\frac{d_{a}({\bf k}^{ \prime};t)}{\sqrt{2\epsilon_{\bf k}}}e^{i(\epsilon_{\bf k}^{\prime}-\Omega_{a })t}\right] \tag{35}\] \[i\dot{d}_{a}({\bf k};t) = 2\lambda_{a}\int\frac{d{\bf k}^{\prime}}{\sqrt{2\epsilon_{\bf k }^{\prime}}}c({\bf k},{\bf k}^{\prime};t)e^{-i(\epsilon_{\bf k}^{\prime}- \Omega_{a})t} \tag{36}\] These equations are to be solved subject to the initial conditions \(d_{a}({\bf k};0)=0\) and \(c({\bf k},{\bf k}^{\prime};0)=c_{0}({\bf k},{\bf k}^{\prime})\), where \(c_{0}({\bf k},{\bf k}^{\prime})\) is the initial state of the two particles. We integrate both sides of Eq. (35) and substitute \(c({\bf k},{\bf k}^{\prime};t)\) to Eq. (36). 
We obtain \[\dot{d}_{a}({\bf k};t) = 2\lambda_{a}\int\frac{d{\bf k}^{\prime}}{\sqrt{2\epsilon_{\bf k }^{\prime}}}c_{0}({\bf k},{\bf k}^{\prime})e^{-i(\epsilon_{\bf k}^{\prime}- \Omega_{a})t} \tag{37}\] \[- 2\lambda_{a}\sum_{b}\lambda_{b}\int\frac{d{\bf k}^{\prime}}{2 \epsilon_{\bf k}^{\prime}}e^{-i(\epsilon_{\bf k}^{\prime}-\Omega_{a})t}\int_{0 }^{t}dsd_{a}({\bf k};s)e^{i(\epsilon_{\bf k}-\Omega_{b})s}\] \[- 2\frac{\lambda_{a}}{\sqrt{\epsilon_{\bf k}}}\sum_{b}\lambda_{b} \int\frac{d{\bf k}^{\prime}}{2\sqrt{\epsilon_{\bf k}^{\prime}}}e^{-i(\epsilon _{\bf k}^{\prime}-\Omega_{a})t}\int_{0}^{t}dsd_{a}({\bf k}^{\prime};s)e^{i( \epsilon_{\bf k}^{\prime}-\Omega_{b})s}.\] Eq. (37) is exact. The term in the second line is proportional to the vacuum Wightman function \(W(t)=\int\frac{d{\bf k}^{\prime}}{2\epsilon_{{\bf k}^{\prime}}}e^{-i\epsilon_{{ \bf k}^{\prime}}t}\), which drops at least with \(e^{-mt}\) for \(m\neq 0\) and as \(t^{-2}\) for \(m=0\). Assuming that the particle starts sufficiently far from the detector, \(d_{a}({\bf k};t)\) becomes appreciable different from zero at times such that the term proportional to \(W(t)\) is strongly suppressed. The third-line term in Eq. (37) is of a structure that commonly appears in elementary treatments of spontaneous decay [27, 31]. It can be calculated by invoking a version of the Wigner-Weisskopf approximation. For \(\Omega_{a}t>>1\), this expression is strongly dominated by the term with \(b=a\). By carrying out the integration over \({\bf k}^{\prime}\), we obtain \[-\frac{\lambda_{a}^{2}}{2\pi^{2}\sqrt{\epsilon_{\bf k}}}\int_{m}^{\infty}d \epsilon\frac{(\epsilon^{2}-m^{2})^{3/2}}{\sqrt{\epsilon}}\int_{0}^{t}dse^{-i (\epsilon-\Omega_{a})(t-s)}d_{a}({\bf k},s). \tag{38}\] The time integral is negligible except for values of \(\epsilon\) around \(\Omega_{a}\). Hence, we are justified in substituting \((\epsilon^{2}-m^{2})^{3/2}/\sqrt{\epsilon}\) with \((\Omega_{a}^{2}-m^{2})^{3/2}/\sqrt{\Omega_{a}}\), and then, to extend integration over \(\epsilon\) to \((-\infty,\infty)\). Then, the term (38) simplifies to \(-\frac{1}{2}\eta_{a}\epsilon_{\bf k}^{-1/2}d_{a}({\bf k},t)\), where \[\eta_{a}=-\frac{\lambda_{a}^{2}(\Omega_{a}^{2}-m^{2})^{3/2}}{\pi\sqrt{\Omega_ {a}}}. \tag{39}\] Eq. (37) becomes \[\dot{d}_{a}({\bf k};t)+\frac{1}{2}\eta_{a}\epsilon_{\bf k}^{-1/2}d_{a}({\bf k };t)=2\lambda_{a}\int\frac{d{\bf k}^{\prime}}{\sqrt{2\epsilon_{\bf k}^{\prime }}}c_{0}({\bf k},{\bf k}^{\prime})e^{-i(\epsilon_{{\bf k}^{\prime}}-\Omega_{a} )t}. \tag{40}\] This is a linear inhomogenous equation of first order. The Green function for the corresponding homogeneous equation is simply \(\theta(t-t^{\prime})e^{-\frac{1}{2}\eta_{a}\epsilon_{\bf k}^{-1/2}(t-t^{\prime})}\). Hence, we obtain \[d_{a}({\bf k};t)=2\lambda_{a}\int\frac{d{\bf k}^{\prime}}{\sqrt{2\epsilon_{{ \bf k}^{\prime}}}}c_{0}({\bf k},{\bf k}^{\prime})\int_{0}^{t}dse^{-\frac{1}{2} \eta_{a}\epsilon_{\bf k}^{-1/2}(t-s)}e^{-i(\epsilon_{{\bf k}^{\prime}}-\Omega_ {a})s}=2\lambda_{a}\int d{\bf k}^{\prime}c_{0}({\bf k},{\bf k}^{\prime})h_{a}( \epsilon_{\bf k},\epsilon_{{\bf k}^{\prime}};t),\] where \[h_{a}(\epsilon,\epsilon^{\prime};t)=\frac{e^{-\frac{1}{2}\eta_{a}\epsilon^{-1 /2}t}-e^{-i(\epsilon^{\prime}-\Omega_{a})t}}{\sqrt{2\epsilon^{\prime}}\left[ \frac{1}{2}\eta_{a}\epsilon^{-1/2}-i(\epsilon^{\prime}-\Omega_{a})\right]}. \tag{41}\] The detection probability is non-negligible only if \({\bf k}\) is along the axis that connects the source to the detector. Hence, the problem is effectively one-dimensional. 
Therefore, we can substitute the initial state with \(c_{0}(k,k^{\prime})\), where \(k,k^{\prime}>0\), and write \(d_{a}(k;t)=2\lambda_{a}\int\frac{d{k^{\prime}}}{2\pi}c_{0}(k,k^{\prime})h_{a} (\epsilon_{k},\epsilon_{k^{\prime}};t)\). ### An example Consider an initial state \[c_{0}(k,k^{\prime})=\frac{1}{\sqrt{2}}\left[\psi_{1}(k)\psi_{2}(k^{\prime})+ \psi_{1}(k^{\prime})\psi_{2}(k)\right], \tag{42}\] where \(\psi_{i}\), for \(i=1,2\), is centered around momentum \(k_{i}\), or, equivalently, on energy \(\epsilon_{i}=\sqrt{k_{i}^{2}+m^{2}}\). We assume that there is no overlap between \(\psi_{1}\) and \(\psi_{2}\). Then, we can approximate \[d_{a}(k;t)=\lambda_{a}\left[\psi_{1}(k)F_{2a}(t)+\psi_{2}(k)F_{1a}(t)\right], \tag{43}\] where \[F_{ia}(t)=\int\frac{dk}{2\pi}\psi_{i}(k)\frac{e^{-\Gamma_{ia}t}-e^{-i(\epsilon_{k} -\Omega_{a})t}}{\sqrt{2\epsilon_{k}}\left[\Gamma_{ia}-i(\epsilon_{k}-\Omega_{a} )\right]}, \tag{44}\] where \(\Gamma_{1a}=\frac{1}{2}\eta_{a}\epsilon_{2}^{-1/2}\) and \(\Gamma_{2a}=\frac{1}{2}\eta_{a}\epsilon_{1}^{-1/2}\). Then, the probability \(p_{a}(t)\) that the 3LS is found in an excited state is given by \[p_{a}(t)=\int dk|d_{a}(k;t)|^{2}=\lambda_{a}^{2}\left(|F_{1a}(t)|^{2}+|F_{2a}(t )|^{2}\right). \tag{46}\] Let the states \(\psi_{i}(k)\) be well localized around \(x=-L\), so that they can be written as \(\chi(k-k_{i})e^{ikL}\), where \(\chi\) is a positive function peaked around \(k=0\), for example, a Gaussian. Then, the typical behavior of \(|F_{ia}(t)|^{2}\) is given in Fig. 4. The function is negligible prior to the arrival time \(t_{a}=mL/k_{i}\) of the particle to the locus of 3LS. Then, it jumps to a finite value, which then decays with a rate given by \(\Gamma_{ia}\). The peak value of \(|F_{ia}(t)|^{2}\) is approximately proportional to the Breit-Wigner term [\((\Gamma_{ia}^{2}+(\epsilon_{i}-\Omega_{a})^{2}]^{-1}\). Supposing that we choose \(\epsilon_{a}\simeq\Omega_{a}\), and that \(\Gamma_{ia}<<|\Omega_{1}-\Omega_{2}|\), for all \(i,a=1,2\), then, the terms \(|F_{11}|^{2}\) and \(|F_{22}|^{2}\) dominate in the probability assignment, and \[p_{a}(t)=\lambda_{a}^{2}|F_{aa}(t)|^{2}. \tag{47}\] The behavior of the probabilities is characteristic of resonant fluorescence. The 3LS absorbs one of the two particles, and after a time of order \(\Gamma_{aa}^{-1}\) it re-emits the particle, albeit in a different direction. Hence, the energy of the fluorescent particle determines whether the ordering \(M_{1}\) or \(M_{2}\) was realized. ## 6 Conclusions We provided a general definition of events in quantum theory, and showed how to construct probabilities associated to the causal ordering such events. Our notion of events is very different from that of Refs. [3, 4], and it is naturally related to the relativistic notion of events. Our analysis Figure 4: Typical plot of the functions \(|F_{ia}(t)|^{2}\) as a function of time \(t\), for a Gaussian function \(\chi(k)\). The function jumps to a finite value when the particle arrives at the 3LS, and then it decays with a rate of \(\Gamma_{ia}\). clarifies that the existence of an indefinite quantum causal order of events has no relation to quantum gravity, as this causal order is a dynamical consequence of the quantum nature of the _matter_ degrees of freedom. The COoE should not be conflated with the causal structure of spacetime, which we take to be fixed and unchanged in absence of gravity. 
Further work is needed in order to explore how the quantum probabilities for causal order defined here differ from the corresponding classical ones, for example, whether they violate Bell-type inequalities. The model systems considered in this paper are experimentally accessible. The set-ups considered in Sec. 4 are essentially quantum races, that is, the causal order of events coincides with the order in which a number of distinguishable particles arrive at a specific finish line. The set-up of Sec. 5, when applied to photons, involves a variation of resonant fluorescence with specially engineered multi-level atoms that play the role of detectors for the causal ordering that is being realized.
2309.08887
GRaCE: Balancing Multiple Criteria to Achieve Stable, Collision-Free, and Functional Grasps
This paper addresses the multi-faceted problem of robot grasping, where multiple criteria may conflict and differ in importance. We introduce a probabilistic framework, Grasp Ranking and Criteria Evaluation (GRaCE), which employs hierarchical rule-based logic and a rank-preserving utility function for grasps based on various criteria such as stability, kinematic constraints, and goal-oriented functionalities. GRaCE's probabilistic nature means the framework handles uncertainty in a principled manner, i.e., the method is able to leverage the probability that a given criteria is satisfied. Additionally, we propose GRaCE-OPT, a hybrid optimization strategy that combines gradient-based and gradient-free methods to effectively navigate the complex, non-convex utility function. Experimental results in both simulated and real-world scenarios show that GRaCE requires fewer samples to achieve comparable or superior performance relative to existing methods. The modular architecture of GRaCE allows for easy customization and adaptation to specific application needs.
Tasbolat Taunyazov, Kelvin Lin, Harold Soh
2023-09-16T05:56:22Z
http://arxiv.org/abs/2309.08887v4
# GRaCE: Optimizing Grasps to Satisfy Ranked Criteria ###### Abstract This paper addresses the multi-faceted problem of robot grasping, where multiple criteria may conflict and differ in importance. We introduce Grasp Ranking and Criteria Evaluation (GRaCE), a novel approach that employs hierarchical rule-based logic and a rank-preserving utility function to optimize grasps based on various criteria such as stability, kinematic constraints, and goal-oriented functionalities. Additionally, we propose GRaCE-OPT, a hybrid optimization strategy that combines gradient-based and gradient-free methods to effectively navigate the complex, non-convex utility function. Experimental results in both simulated and real-world scenarios show that GRaCE requires fewer samples to achieve comparable or superior performance relative to existing methods. The modular architecture of GRaCE allows for easy customization and adaptation to specific application needs. ## I Introduction We typically grasp an object with a particular goal in mind and this goal affects the grasp. For example, it is natural to grasp a scissors by its handle to cut something, but grasp it by the blade to safely pass it to another individual. However, this goal is _not_ the only criteria the grasp has to satisfy, _nor the most important_; if the blade was obstructed or inaccessible (e.g., in Fig. 1), we would grasp the scissors by its handle even if our intention was to hand it over. This simple example illustrates that there are (i) _multiple criteria_ that affects the determination of a grasp and (ii) there is an underlying _priority_ over these criteria. Here, the grasp should be stable, reachable, and not result in a collision with other objects (the mug): these criteria are more important than the aforementioned goal (functional) objective. In this work, we are motivated by the problem of generating grasps that satisfy multiple criteria of differing importance. We apply hierarchical rule-based logic to robot grasping [1] and introduce a grasp utility function that is _rank-preserving_[2], i.e., it assigns larger utility values to grasps that satisfy higher ranked constraints. For example, robots are bound by their kinematic and dynamic constraints, which limits whether a proposed grasp can be performed. A stable grasp that satisfies these constraints should have larger utility than one that sacrifices these criteria for a functionally appropriate (but non-executable) grasp. Here, we take a probabilistic approach and optimize the _utility_ of a grasp, where the probability of a grasp satisfying a specific criteria is given by a classifier. Additional classifiers can be added (or removed) depending on the precise requirements of the application. This modular approach -- which we call Grasp Ranking and Criteria Evaluation (GRaCE) -- enables a robot to trade-off multiple conflicting criteria in complex contexts where not all desired objectives can be satisfied. Although the utility function enables scoring of grasps, it is a complicated non-convex function to optimize, especially when the classifiers are themselves complex (e.g., a deep neural network). Inspired by progress in gradient-free methods (e.g., [3]), we propose GRaCE-OPT -- a _hybrid_ optimization method that combines gradient-based and gradient-free optimization. 
Specifically, GRaCE-OPT applies gradient-free optimization, an Evolutionary Strategy (ES) [4], to conduct a more "diverse" exploration over the landscape and prevent the optimization process from getting stuck at local optima. However, on its own, this gradient-free method can be slow to converge. As such, we use gradient-based optimization on a _lower-bound_ of the utility to improve convergence speed. Experiments in complex environments show that GRaCE requires significantly fewer samples and less time to achieve comparable (or better) performance to a filtering method used in prior works [5, 6]. Our evaluations involved two simulated grasping scenarios -- shelf and diner (Fig. 2) -- in IsaacGym [7] and two real-world scenarios; these test scenarios are designed to be challenging (cluttered with small optimal grasping regions) and where the probability of satisfying multiple criteria may be traded-off. To summarize, this paper contributes GRaCE which comprises a utility function that assigns higher values following user-specified hierarchical rules and an optimization method that uses both gradient-free and gradient-based optimization of the expected utility. Code and implementation details can be found online at [https://github.com/clear-nus/GRaCE](https://github.com/clear-nus/GRaCE). Fig. 1: Illustration of the expected grasp utility, \(U\), used in GRaCE. The blue region indicates higher utility values that are collision free and stable. ## II Background and Related Work GRACE is a modular framework for optimizing 6-DoF grasps. It builds upon related work on 6-DoF grasp candidate generation and prior work on the optimization of multiple criteria specified via rule/constraint hierarchies. We briefly review these two topics in this section. **6-DoF Grasp Filtering and Refinement.** Generating appropriate 6-DoF grasping remains an active area of research. One common approach is to first _sample_ a set of grasp, either through data-driven methods [8, 9], heuristics [5] or a combination of both [6], then _filter_ the grasps using evaluators to select the most promising candidates for execution. This sample-and-filter approach is common and can be very effective in practice [5]. However, it can be time-consuming in complex environments even with state-of-the-art samplers, especially the optimal grasp regions are small. An alternative approach to optimize grasps directly. Early work on multi-fingered end-effector grasping [10] demonstrated that a scoring function for grasp quality (a pre-trained classifier) can be used to optimize grasps toward high quality regions. More recent work have applied optimization together with sample-and-filter methods, e.g., GraspNet [8] optimizes/refines grasp samples using the quality evaluator. These methods focus on a single quality criterion, where else our work addresses the problem of trading-off multiple conflicting criteria. GRACE can also be seen as a contrasting approach to "end-to-end" data-driven 6-DoF grasping [11, 12] where the sampler is trained to generate grasps that satisfy multiple criteria. However, these methods require retraining when a new criterion is added/removed, which is computationally expensive. GRACE enables the inclusion and removal of grasp criterion "on-the-fly", which we believe is more practical for many real-world applications. This aspect is similar to very recent work [9] that refines grasps using gradient flows, but GRACE enables the ranking of multiple criteria and proposes a hybrid optimization technique. 
**Hierarchical Optimization of Multiple Criteria.** A key component of our framework is a utility function, which leverages a rule hierarchy. Rule hierarchies have a long history in optimization, with early works dating back to 1967 [13]. More recent methods encode rule hierarchies using temporal logic [14, 2]. Unlike these methods, our framework is differentiable and we do not have to rely on external SAT solvers for optimization. Our work is closely related to very recent research on planning with a rank-preserving reward function for autonomous vehicles [15]. Our grasp utility function has a similar structure to their proposed reward function, but our approach is probabilistic; we optimize the expected rank of the grasp via a hybrid optimization method. ## III Ranking Grasps via Utility Functions In this section, we present our approach for trading-off criteria for grasp generation. A grasp \(\mathbf{g}\) is typically defined as a set of contact points with an object which restricts movement when external forces are applied [16]. For simplicity, we will refer to end-effector poses as grasps (or grasp candidates) and denote them as \(\mathbf{g}\), even if they do not satisfy the definition above (e.g., the pose does not make contact with the object). We first discuss how grasp criteria can be ranked, followed by our a utility function, and, finally, formulate an optimization based grasp generation method. **Criteria, Priority, and Rules.** We define a grasp criterion as a predicate \(c_{j}^{(i)}(\mathbf{g})\) where \(i\in\{1,...,N\}\) is the criterion's priority (with descending importance) for a grasp \(\mathbf{g}\) and \(j\) is an index of criterion, \(j=1,\ldots,M_{i}\). \(M_{i}\) is a number of criteria with the same priority \(i\). A rule \(\phi^{(i)}(\mathbf{g})\) is defined as a conjunction of criteria \(\phi_{i}(\mathbf{g})=\bigwedge_{j}^{M_{i}}c_{j=1}^{(i)}(\mathbf{g})\). Let \(p_{j}^{(i)}:=P(c_{j}^{(i)}(\mathbf{g})|\mathbf{o})\) be the probability that criterion \(c_{j}^{(i)}(\mathbf{g})\) is satisfied under observed context \(\mathbf{o}\). For notational simplicity, we will drop the explicit dependence of \(p_{j}^{(i)}\), \(c_{j}^{(i)}\) and \(\phi^{(i)}\) on \(\mathbf{g}\) and \(\mathbf{o}\). We assume that criteria are conditionally independent given the grasp and context. As such, the probability that a rule \(\phi^{(i)}\) is satisfied is given by \(\prod_{j=1}^{M_{i}}p_{j}^{(i)}\). Table I shows a list of priorities, rules, and their associated probabilities. **Rule Hierarchy and Rank of a Grasp.** A rule hierarchy \(\psi\) is defined as a sequence of rules \(\psi:=\{\phi^{(i)}\}_{i=1}^{N}\). The rule hierarchy induces a total order on the grasps, enabling us to rank grasps. A grasp that satisfies all the rules has the highest rank, i.e., rank 1. A grasp that satisfies all the rules except the lowest priority rule has rank 2. This continues on, with grasps satisfying none of the rules having the lowest rank. Formally, we define a rank of a grasp as: **Definition 1**: _Let \(\psi\) to be rule hierarchy with \(N\) rules. Let eval \(:\phi^{(i)}\mapsto\{0,1\}\) be a function that evaluates rule \(\phi^{(i)}\) to be 1 if the rule is satisfied, 0 otherwise. 
Then the rank of the grasp \(r:\mathcal{G}\mapsto\mathbb{R}\) is defined as:_ \[r(\mathbf{g}):=2^{N}-\sum_{i=1}^{N}2^{N-i}\textit{eval}(\phi^{(i)})\] Table II summarizes grasp ranks for the rule hierarchy and \begin{table} \begin{tabular}{c c c} \hline \hline \(r(\mathbf{g})\) & **Satisfed Rules** & **Probability** \\ \hline \(1\) & \(\bigwedge_{i=1}\phi^{(i)}\) & \(\prod_{j=1}^{M_{1}}p_{j}^{(1)}\cdots\prod_{j=1}^{M_{N}}p_{j}^{(N)}\) \\ \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \(2^{N}\) & \(\bigwedge_{i=1}\neg\phi^{(i)}\) & \((1-\prod_{j=1}^{M_{1}}p_{j}^{(1)})\cdots(1-\prod_{j=1}^{M_{N}}p_{j}^{(N)})\) \\ \hline \hline \end{tabular} \end{table} TABLE II: Rank-Preserving Grasp Utility \begin{table} \begin{tabular}{c c c} \hline \hline **Priority** & **Rule** & **Probability** \\ \hline \(1\) & \(\phi^{(1)}=\bigwedge_{j=1}^{M_{1}}c_{j}^{(1)}\) & \(\prod_{j=1}^{M_{1}}p_{j}^{(1)}\) \\ \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \(N\) & \(\phi^{(N)}=\bigwedge_{j=1}^{M_{N}}c_{j}^{(N)}\) & \(\prod_{j=1}^{M_{N}}p_{j}^{(N)}\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Formulas and Grasp Criteria with Associated Probability. our utility is defined as the negative expected rank, \[U(\mathbf{g})=-\mathbb{E}_{\psi}[r(\mathbf{g})]=\sum_{i=1}^{N}2^{N-i}\prod_{j=1}^ {M_{i}}p_{j}^{(i)}-2^{N} \tag{1}\] This simplified form can be obtained by observing that \(\text{eval}(\phi^{(i)})\) is a Bernoulli variable with probability \(\prod_{j=1}^{M_{i}}p_{j}^{(i)}\), \[U(\mathbf{g}) =-\mathbb{E}_{\psi}[r(\mathbf{g})]\] \[=-\mathbb{E}_{\psi}[2^{N}-\sum_{i=1}^{N}2^{N-i}\text{eval}(\phi^{ (i)})]\] (by definition) \[=-2^{N}+\sum_{i=1}^{N}2^{N-i}\mathbb{E}_{\psi}[\text{eval}(\phi^ {(i)})]\] (by linearity of \[\mathbb{E}\] ) \[=-2^{N}+\sum_{i=1}^{N}2^{N-i}\prod_{j=1}^{M_{i}}p_{j}^{(i)}\] **Problem Statement.** We seek to find a grasp that maximizes the utility function: \[\mathbf{g}^{*}=\arg\max_{\mathbf{g}}U(\mathbf{g}) \tag{2}\] The key challenge is that Eq. (2) is a non-convex function of the grasps with local optima that can trap standard gradient-based methods. Moreover, the multiplication of probabilities leads to numerical instabilities with vanishing gradients when used with neural classifiers [17, 18]. In the next section, we describe how to optimize this function using GRaCE. ## IV Hybrid Optimization of Grasps In this section, we introduce GRaCE-OPT, a hybrid optimization technique that leverages both gradient-free and gradient-based methods to optimize Equation (2). As an initial step, we considered optimizing a lower-bound of Eq. 2 using Jensen's inequality: \[\log U(\mathbf{g}) =\log\left(\sum_{i=1}^{N}2^{N-i}\prod_{j=1}^{M_{i}}p_{j}^{(i)}\right)\] \[\geqslant\sum_{i=1}^{N}\left(\log\left(2^{N-i}\prod_{j=1}^{M_{i}} p_{j}^{(i)}\right)\right)\] \[=\sum_{i=1}^{N}\sum_{j=1}^{M_{i}}\log p_{j}^{(i)}\triangleq L( \mathbf{g}) \tag{3}\] Empirically, we find \(L(\mathbf{g})\) to be easier to optimize and numerically stable, but inspection of its form clearly shows that it is no longer rank preserving since the utilities are factored out. As such, we only use this gradient-based optimization as an inner-loop within a gradient-free ES setup, shown in Algorithm 1 below. We assume that we have access to a grasp sampler \(q_{0}\) from which we can sample initial grasps \(\mathbf{G}_{0}\) from (line 1). In practice, \(q_{0}\) can be any grasp candidate sampler, e.g., GraspNet-VAE [8] or a heuristic sampler such as GPD [5]. We then optimize these grasps over \(T\) outer gradient-free iterations (lines 2-10). 
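To make the quantities being optimized concrete, the following sketch (ours, not the authors' implementation; the probability values and the rank grouping are illustrative) computes the rank-preserving utility of Eq. (1) and the surrogate lower bound \(L(\mathbf{g})\) of Eq. (3) from classifier outputs. In GRaCE-OPT, gradients of \(L(\mathbf{g})\) with respect to the grasp parameters would flow through the classifiers; here the probabilities are constants for brevity.

```python
import torch

def utility(probs_by_rank):
    """Negative expected rank, Eq. (1).  probs_by_rank[i] holds the probabilities
    p_j^(i) of the criteria sharing priority i+1 (index 0 is the most important
    rule).  Values lie in [-2^N, -1]; higher is better."""
    N = len(probs_by_rank)
    U = torch.tensor(-2.0 ** N)
    for i, p in enumerate(probs_by_rank):            # priority i+1
        U = U + 2.0 ** (N - (i + 1)) * torch.prod(p)
    return U

def lower_bound(probs_by_rank):
    """Sum of log-probabilities, Eq. (3): numerically stable for gradient steps,
    but no longer rank-preserving."""
    return sum(torch.log(p).sum() for p in probs_by_rank)

# Illustrative probabilities: stability (priority 1), execution and
# collision-free (priority 2), intention (priority 3).
probs = [torch.tensor([0.9]), torch.tensor([0.8, 0.7]), torch.tensor([0.3])]
print(utility(probs).item(), lower_bound(probs).item())
```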
In detail: new batches of grasps are sampled using a multivariate Gaussian distribution with mean \(\mathbf{G}_{t}\) and covariance matrix \(\Sigma\). The covariance \(\Sigma\) manually selected in our work but can also be adaptive [19]. Lines 4 to 6 optimizes the lower bound \(L(\mathbf{g})\). Line 8 and 9 assesses grasps using \(U(\mathbf{g})\) and selects the top \(Q\) grasps. In preliminary experiments, we found GRaCE-OPT to be superior to using either gradient-based or gradient-free methods alone. ``` 0: grasp sampler \(q_{0}\), utility \(U(\mathbf{g})\), lower bound for utility \(L(\mathbf{g})\), number of update steps (\(T\)), covariance matrix (\(\Sigma\)), step size (\(\eta\)), number of update steps for lower bound (\(K\)), selection size (\(Q\)). 1:\(\mathbf{G}_{0}\sim q_{0}\) 2:for\(t\gets 1\) to \(T\)do 3:\(\mathbf{\tilde{G}}_{0}\sim\mathcal{N}(\mathbf{G}_{t},\Sigma)\) 4:for\(k\gets 1\) to \(K\)do// Optimize new samples 5:\(\mathbf{\tilde{G}}_{t}=\mathbf{\tilde{G}}_{t-1}+\eta\nabla_{\mathbf{\tilde{G}} _{t-1}}L(\mathbf{\tilde{G}}_{t-1})\) 6:endfor 7:\(\mathbf{G}^{\text{combined}}\) = concat[\(\mathbf{G}_{t}\), \(\mathbf{\tilde{G}}_{L}\)] 8:\(\mathbf{G}^{\text{sorted}}=\arg\operatorname{sort}_{\mathbf{G}}U(\mathbf{G}^ {\text{combined}})\) 9:\(\mathbf{G}_{t}=(\mathbf{G}^{\text{sorted}})_{1:Q}\)// Select top \(Q\) grasps 10:endfor 11:return\(\mathbf{G}_{T}\)// Optimized grasp ``` **Algorithm 1** GRACE-OPT ## V Criteria for successful 6-DoF grasps In this section, we describe different grasp criteria used in our experiments. We assume a setup where a human user is asking the robot to perform a task, e.g., to "handover the scissors". The robot has access to the natural language utterance from the human as well as observations of the environment (a point cloud). As previously mentioned, we assume a probabilistic setup where the probability of criteria satisfaction is given by a classifier \(P(c_{j}^{(i)}(\mathbf{g})|\mathbf{o})\). We used four different classifiers that capture different quality aspects of a grasp: stability, executability, collision-free, and functional. We discuss these classifiers at a high-level and leave implementation details to the online supplementary material1 Footnote 1: [https://github.com/clear-nus/GRaCE](https://github.com/clear-nus/GRaCE) **Stability Classifier (S).** We use the stability evaluator in [9]. The classifier takes as inputs a grasp pose and a point cloud of the object, and outputs a prediction of grasp stability. **Execution Classifier (E).** Our execution classifier captures two important aspects of robot poses: reachability map [20] and kinematic singularity [21]: We calculate the manipulability score for a given grasp: \[\omega(\boldsymbol{\theta})=\sqrt{\det\mathbf{J}(\boldsymbol{\theta})\mathbf{J }(\boldsymbol{\theta})^{\mathsf{T}}}\geqslant 0 \tag{4}\] where \(\mathbf{J}(\boldsymbol{\theta})\) is the Jacobian matrix and \(\boldsymbol{\theta}\) is the Inverse Kinematics (IK) solution. 
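As a small illustration of Eq. (4) (ours; the planar two-link arm and link lengths are stand-ins for the actual robot model), the manipulability score can be computed directly from the Jacobian and is seen to approach zero as the arm straightens towards a singular configuration. Its combination with the pose-error term appears in Eq. (5) below.

```python
import numpy as np

def manipulability(J):
    """Eq. (4): omega = sqrt(det(J J^T)) >= 0; it vanishes at singularities."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

def jacobian_2link(q1, q2, l1=0.4, l2=0.3):
    """Position Jacobian of a planar two-link arm (illustrative stand-in)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

for q2 in (np.pi / 2, np.pi / 4, 0.05):     # elbow angle approaching a singularity
    print(round(q2, 3), manipulability(jacobian_2link(0.3, q2)))
```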
Then, we define the predicted grasp pose, \(\tilde{\mathbf{g}}\), using the IK solution: \[\tilde{\mathbf{g}}=\text{FK}(\boldsymbol{\theta})\] Finally, we combine this two quantities to yield, \[p(\text{eval}(c_{e}(\mathbf{g}))=1)=\begin{cases}\sigma(-C_{m}d(\mathbf{g},\tilde{ \mathbf{g}})),&\text{if }\mathbf{g}\neq\tilde{\mathbf{g}}\\ \sigma(C_{w}(\omega(\boldsymbol{\theta})-\omega_{\text{th}})),&\text{otherwise} \end{cases} \tag{5}\] where \(C_{m}\) and \(C_{w}\) are scaling coefficients, \(\omega_{\text{th}}\) is a lowest manipulability threshold that allows safe grasp execution, and \(d(\cdot,\cdot)\) is a distance function between predicted grasp pose from the IK solution and current pose calculated in SE(3) [22]. **Collision Detection Classifier (C).** The backbone of our Collision Detection Classifier is the 3-D Signed Distance Function (SDF) [23]. For simplicity, we use the original version of SDF that is designed for convex objects. Let \(\mathcal{X}\in\mathbb{R}^{3}\) represent a point cloud represented with respect to the world frame and \(\mathbf{x}_{k}\in\mathcal{X}\) be a point within \(\mathcal{X}\). The SDF for the box \(R_{i}\) is defined as \[d_{R_{i}}=\frac{1}{|\mathcal{X}|}\sum_{k=1}^{k=|\mathcal{X}|}\|\max(|\mathbf{ x}|-\mathbf{H},0)\|_{2} \tag{6}\] where \(\mathbf{H}\in\mathbb{R}^{3}\) is the half-extent of the box in Cartesian coordinates. We decompose the gripper into three boxes \(R_{1}\), \(R_{2}\) and \(R_{3}\).The SDF is differentiable and we use it to create our collision detection classifier: \[P(\text{eval}(c_{c}(\mathbf{g}))=1|\mathbf{o})=\sigma\left(C_{c}(d_{\text{th}} -\frac{1}{3}\sum_{i=1}^{i=3}d_{R_{i}})\right) \tag{7}\] where \(d_{\text{th}}\) is a user-defined threshold and \(C_{c}\) is a scale coefficient. **Intention Classifier (N).** Our intention classifier outputs the probability that the grasp location coincides with their intent. We first extract the user's intent (e.g., "Handover") from their utterance (e.g., "Hand over the knife") using Joint-BERT [24]. Our JointBERT model is trained on a curated dataset of programmatically generated queries and evaluated on sentences surveyed from test users. To evaluate if the grasp matches the intention, we use TaskGrasp [25] as it can identify affordance-rich and affordance-poor regions of objects. TaskGrasp evaluates grasps with respect to the point cloud and task, and outputs \(P(\text{eval}(c_{n}(\mathbf{g}))=1|\mathbf{o})\). As TaskGrasp inherently assumes that all grasps are stable before inference, we lower the score to zero if the grasp is more than 3cm away from the nearest point in the point cloud; we find that this modification helps to reduce false positives. **Summary and Ranking.** The above classifiers are all differentiable and gradients can be obtained using modern auto-differentiation libraries such as PyTorch [26]. In our experiments, we rank the criteria as follows: the S-classifier has rank 1, the E-classifier and C-classifier have rank 2, and the N-classifier has rank 3. ## VI Simulation Experiment The goal of our experiments is to establish if GRACE is able to (i) effectively generate suitable grasps in a complex environment, and (ii) trade-off multiple criteria. In particular, GRACE is more computationally expensive compared to a simple sample-and-filter approach (principally due to gradient computation). Is this added cost justified? Moreover, are the multiple criteria necessary for finding successful grasps and if so, can they be traded-off effectively? 
To answer these questions, we conducted the following experiments. ### _Simulated Environments_ GRACE was evaluated in simulation using IsaacGym from NVIDIA [7]. IsaacGym is a state-of-the-art simulator, capable of simulating robot movement and object grasping. We engineered two simulation environments, namely a Shelf module and a Diner module: * **Shelf** consists of a two-layered shelf with common everyday items placed on both layers. The Shelf module is designed to be cluttered and complex. Hence, the optimal grasping region for each object is confined to a small area, reducing the effectiveness of sampling-based methods. * **Diner** consists of items that may be present in a typical dining setup, such as bowls, forks, a pan, and a spatula. The graspable objects in these environments are from the ShapeNet dataset, and the shelves and tables were created using Blender. We packaged the Shelf and Diner modules as a set of OBJ files that can be loaded into any simulator capable of importing OBJ meshes. ### _Experiment Process_ **Perception.** To carry out GRACE, we first record point cloud data through IsaacGym's simulated depth and segmentation cameras from multiple views, and segment out the target object from the environment. **Grasp Sampling and Optimization.** GraspNet is then used to sample grasps and optimized with GRACE. The resulting output is a list of grasp poses, along with their utility scores. **Grasp Planning, Execution, and Evaluation.** The optimized grasps are passed to Moveit! [27] to generate trajectory plan. To minimize collision, instead of planning to the grasp pose, we plan the trajectory to a pre-grasp configuration 5cm linearly behind the actual grasp pose. The robot performs the grasping by moving the end-effector towards the object and closing its grippers. To execute the plan, we use a configuration-space controller to closely mimic and execute the planned trajectories. As IsaacGym is deterministic across Fig. 2: Shelf and Diner benchmark environments with sample grasps with high utility. sessions, only one execution attempt of each trajectory was performed. A grasp was termed as _successful_ if the target object is held by the gripper fingers after the trajectory was executed. Note that this measure of success excludes the intention criteria (which is subjective and handled separately). **Baseline Methods.** We compared the following methods: * GRACE as described with all four classifiers. * GRACE with only the S-classifier. This ablation uses a single criteria (stability) and is similar to the refinement used in GraspNet [8] and its variants. * Ablations of GRACE by removing criteria, e.g., SE denotes that only the stability and execution classifier were used. These ablations enable us to see if excluding important criteria leads to more failures. * Sample-and-Filter, termed as "Filter", is a popular approach due to its simplicity and ease of implementation. ### _Results_ In this section, we summarize our main findings. In general, we find GRACE to be superior to filtering on both the Shelf and Diner scenarios. Moreover, it is able to prioritize important criteria to find higher utility grasps. **Is optimization really necessary? Does GRACE outperform Sample-and-Filter?** We evaluated GRACE (SEC) against sample-and-filter with different sample sizes (10, 50, 100). Fig. 3 shows the average number of successes per object for the top-10 grasps across the different objects in the Shelf and Diner environments (seven and five objects, respectively). 
Note that the intention criterion was excluded as compliance with user intent involves subjective evaluation. Focusing on the subplots in Fig. 3(a) and 3(f), we observe that GRACE outperforms Filter across the same sampling sizes (10, 50, 100). We further ran Filter with larger sample sizes (1000 and 5000), which enabled it to achieve attain similar performance to GRACE. At 5000 samples, Filter performs similarly to GRACE using 50 samples. However, this also resulted in it requiring longer almost 2x longer compute times as shown in Figs. 3(b) and 3(g). In short, although GRACE is more expensive _per sample_, it is able to achieve better grasps with fewer samples. **Can GRACE optimize multiple criteria to find successful grasps?** The results of our GRACE ablations are shown in Figs. 3(c) and 3(h). We observe that the using all three classifiers (SEC) resulted in the best performance. The marked increase in performance from SE to SC may be attributed to the cluttered nature of Shelf and Diner, where many candidate grasp poses can collide with other objects. **Does GRACE with the intention classifier generate successful functional grasps?** More precisely, we sought to evaluate if (i) GRACE would generate grasps in regions matching the user intent if the higher-ranked criteria can be satisfied, and (ii) prioritize the higher-ranked criteria, even if the resulting grasp has violates the the functional criteria. To that end, we selected four objects (shown in Fig. 4) and paired with pan and spatula with the "Use" intention, and the scissors and fork with the "handover" intention. Fig. 5 shows the grasps generated with and without the intention criteria. To elaborate, the spatula can be separated into two regions: handle, which is ideal to grasp for "use", and the head, which should not be grasped for this purpose. Fig. 4: Selected objects for intention evaluation. Fig. 3: Results on Experiments on the Shelf (top) and Diner (bottom) Environments. The bar graphs show averages with standard deviation as error-bars. Using 50 samples, GRACE outperforms Filter (5000 samples) and takes less computational time. Notably, both of these regions satisfy the stability, executable, and collision-free criteria. We see that GRACE using only SEC generated grasps in both regions, while GRACE with SECN produced grasps only at the handle. Similar grasps can be observed for the pan. Turning our attention to the "handover" intent, the scissors and fork are in placements that limit access to regions that have coincide with the "handover" intention. In this case, we observe GRACE (with SECN) to forgoes these regions and instead produces grasps that satisfy the other, more highly ranked, criteria (examples in Fig. 6). ## VII Real-world Experiments Thus far, we have discussed GRACE in simulation settings, but does GRACE's performance carry over to the real world? We conducted real-world tests comparing GRACE against the filter baseline, similar to the simulation setup. We tasked a real-world Franka Emika Panda robot (with an RGB-D camera) to grasp objects in two different scenarios (Fig. 7): * **Box**, where the robot was tasked to generate grasps for 10 different items in a clutter. Here, there is no intention criteria and the goal was to execute a stable grasp and lift the object. * **Bowl**, where the robot attempted to grasp one of three different items (a wooden spoon, a knife, or a screwdriver) with the intent to handover the object. 
A grasp was successful if the robot managed to lift the object out of the bowl and hand it over to the experimenter. Both these settings are challenging due to (i) noisy perception and (ii) the feasible grasp region for each object was generally small due to the clutter. In each experiment, we conducted 10 trials for each object to be grasped and recorded the number of grasp successes; in total, our experiment involved 260 real-world grasps. We set GRACE to use 50 samples, while Filter used 1000 samples. Both methods have comparable timings; GRACE took an average of \(14\) seconds to obtain a grasp, while Filter took \(12\) seconds. Note that these timings are for un-optimized Python implementations and future work can look into reducing this computation time. **Does GRACE find successful multi-criteria grasps in real-world scenarios?** Our results, summarized in Table III, show that GRACE outperforms Filter in both domains by a significant margin. In both cases, GRACE achieves approximate double the success rate of Filter. Qualitatively, we found GRACE to more reliably return a feasible grasp; in contrast, Filter failed to return _any_ suitable grasp in 26 out of the 130 trials (20%). Other failures in both cases were commonly due to perception errors and robot trajectories executed near singular configurations, leading to grasp offsets, collisions, and robot errors. Overall, our findings affirm that GRACE sustains its performance in real-world conditions. ## VIII Conclusions and Future Work In this study, we introduced GRACE, a modular framework designed for optimizing robotic grasps based on multiple, often conflicting, criteria. Our experimental evaluations show GRACE's efficacy in generating high-quality grasps in complex, cluttered environments. Several avenues for improvement emerge. First, extending GRACE to incorporate additional criteria, such as tactile data for improved grasping of soft or deformable objects, holds promise. Second, the computational efficiency of GRACE can be further enhanced; specifically, the computational burden of gradient calculations may be mitigated through the use of optimized classifiers. ## IX Acknowledgements This research is supported by the National Research Foundation Singapore and DSO National Laboratories under Fig. 5: Incorporating the intention classifier (SECN) shifts grasps towards functional regions. \begin{table} \begin{tabular}{l c c} \hline \hline Method & Box & Bowl \\ \hline GRACE & 65\% (0.10) & 57\% (0.06) \\ Filter & 31\% (0.12) & 33\% (0.06) \\ \hline \hline \end{tabular} \end{table} TABLE III: Real-World Grasp Experiments: Average Success Rates with Standard Deviation in Brackets. Fig. 6: GRACE produces grasps that prioritize the higher ranked criteria, sacrificing functional regions for stable, executable, and collision free grasps. Fig. 7: Real world items, along with the box and bowl scenarios. the AI Singapore Programme (AISG Award No: AISG2-RP-2020-017).
2308.16748
A Novel Perception and Semantic Mapping Method for Robot Autonomy in Orchards
Agricultural robots must navigate challenging dynamic and semi-structured environments. Recently, environmental modeling using LiDAR-based SLAM has shown promise in providing highly accurate geometry. However, how this chaotic environmental information can be used to achieve effective robot automation in the agricultural sector remains unexplored. In this study, we propose a novel semantic mapping and navigation framework for achieving robotic autonomy in orchards. It consists of two main components: a semantic processing module and a navigation module. First, we present a novel 3D detection network architecture, 3D-ODN, which can accurately process object instance information from point clouds. Second, we develop a framework to construct the visibility map by incorporating semantic information and terrain analysis. By combining these two critical components, our framework is evaluated in a number of key horticultural production scenarios, including a robotic system for in-situ phenotyping and daily monitoring, and a selective harvesting system in apple orchards. The experimental results show that our method can ensure high accuracy in understanding the environment and enable reliable robot autonomy in agricultural environments.
Yaoqiang Pan, Hao Cao, Kewei Hu, Hanwen Kang, Xing Wang
2023-08-31T14:10:00Z
http://arxiv.org/abs/2308.16748v3
# A Novel Perception and Semantic Mapping Method for Robot Autonomy in Orchards ###### Abstract In this work, we propose a novel framework for achieving robotic autonomy in orchards. It consists of two key steps: perception and semantic mapping. In the perception step, we introduce a 3D detection method that accurately identifies objects directly on point cloud maps. In the semantic mapping step, we develop a mapping module that constructs a visibility graph map by incorporating object-level information and terrain analysis. By combining these two steps, our framework improves the autonomy of agricultural robots in orchard environments. The accurate detection of objects and the construction of a semantic map enable the robot to navigate autonomously, perform tasks such as fruit harvesting, and acquire actionable information for efficient agricultural production. ## I Introduction Robot autonomy has become a key aspect of precision agriculture, which leverages emerging robotics, sensor, and AI technologies to automate the process of scientific data collection [1, 2], yield estimation [3, 4], fruit growth monitoring [5], and fruit harvesting [6, 7]. Among the various subsystems of an intelligent agricultural robot, navigation is an essential component that enables the robot to operate autonomously in unstructured agricultural environments, like orchards and plantations [8]. It comprises two essential steps: mapping and motion planning. The first step aims to model the orchard, enabling robots to comprehensively understand the environment where they are operating. The second step plans the motions on the map for the robot to accomplish its tasks. In summary, an accurate and effective representation of environments is key to achieving robot autonomy. Simultaneous localisation and mapping (SLAM) technology is a basic technique that has been widely utilised in self-positioning and map construction for robotic autonomy in the field. It applies sensors such as cameras, Light Detection and Ranging (LiDAR), radar, and IMUs to acquire visual and kinetic information from the robot's surroundings. At the front end of the SLAM, the information is used to compute the odometry of the robot's motion and can be used to construct the map of the surrounding environments. At the back end of the SLAM, the map of the environments is fine-tuned by closed-loop detection and overall map optimisation. Then, an accurate landmark or point cloud map can be obtained. Traditional SLAM focuses on extracting robust low-level geometry features to establish the right correspondences and compute the correct transient pose, while lacking the ability to extract information from and understand the environments it is currently mapping. Therefore, traditional SLAM can only provide raw geometries that do not include any semantic information, which limits its utilisation in autonomous operations. Having additional semantic information can significantly increase the capability of robotic applications. For example, in an orchard, if the robot knows where the target fruit trees are, it can automatically find its path to the given position and finish the work, rather than having a human click on the screen each time to tell the robot specifically where to go. The increasing demand for higher levels of autonomy in agricultural production places high demands on the robot's ability to understand its environment [9]. To achieve this goal, robots need to recognise information about objects in the scene and find out their locations on the map.
That is, based on the original map from the SLAM, a semantic map is created that represents the environment using a set of semantically meaningful objects. This representation facilitates large-scale autonomy and the acquisition of actionable information in highly unstructured orchard environments because it is memory efficient, less ambiguous, and more informative. Deep learning is an emerging and powerful tool for processing and extracting semantic information from input sensor readings [10]. Although significant progress has been made in the construction of semantic maps using 2D image data or pseudo-3D point cloud data, semantic mapping methods that operate directly on 3D point cloud data have not yet been widely explored. Compared with semantic processing of 2D image data or pseudo-3D point cloud data, semantic processing of 3D point cloud data can directly use the output of SLAM and does not require any additional calibration or data format conversion, which avoids numerical errors and heavy computational overhead. However, due to the unstructured and sparse nature of 3D point clouds, efficiently processing semantic information within point clouds remains challenging. In this work, by taking advantage of LiDAR-based mapping methods, a novel two-step framework comprising perception and semantic mapping is proposed to achieve robotic autonomy in orchards. First, we present a novel 3D detection method to accurately identify and localise objects on point cloud maps directly. Second, we develop a mapping module to construct a visibility graph map of orchards based on the extracted
2306.17844
The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks
Do neural networks, trained on well-understood algorithmic tasks, reliably rediscover known algorithms for solving those tasks? Several recent studies, on tasks ranging from group arithmetic to in-context linear regression, have suggested that the answer is yes. Using modular addition as a prototypical problem, we show that algorithm discovery in neural networks is sometimes more complex. Small changes to model hyperparameters and initializations can induce the discovery of qualitatively different algorithms from a fixed training set, and even parallel implementations of multiple such algorithms. Some networks trained to perform modular addition implement a familiar Clock algorithm; others implement a previously undescribed, less intuitive, but comprehensible procedure which we term the Pizza algorithm, or a variety of even more complex procedures. Our results show that even simple learning problems can admit a surprising diversity of solutions, motivating the development of new tools for characterizing the behavior of neural networks across their algorithmic phase space.
Ziqian Zhong, Ziming Liu, Max Tegmark, Jacob Andreas
2023-06-30T17:59:13Z
http://arxiv.org/abs/2306.17844v2
# The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks ###### Abstract Do neural networks, trained on well-understood algorithmic tasks, reliably re-discover known algorithms for solving those tasks? Several recent studies, on tasks ranging from group arithmetic to in-context linear regression, have suggested that the answer is yes. Using modular addition as a prototypical problem, we show that algorithm discovery in neural networks is sometimes more complex. Small changes to model hyperparameters and initializations can induce discovery of qualitatively different algorithms from a fixed training set, and even parallel implementations of multiple such algorithms. Some networks trained to perform modular addition implement a familiar _Clock_ algorithm [1]; others implement a previously undescribed, less intuitive, but comprehensible procedure we term the _Pizza_ algorithm, or a variety of even more complex procedures. Our results show that even simple learning problems can admit a surprising diversity of solutions, motivating the development of new tools for characterizing the behavior of neural networks across their algorithmic phase space. ## 1 Introduction Mechanistically understanding deep network models--reverse-engineering their learned algorithms and representation schemes--remains a major challenge across problem domains. Several recent studies [2; 3; 4; 5; 1] have exhibited specific examples of models apparently re-discovering interpretable (and in some cases familiar) solutions to tasks like curve detection, sequence copying and modular arithmetic. Are these models the exception or the rule? Under what conditions do neural network models discover familiar algorithmic solutions to algorithmic tasks? In this paper, we focus specifically on the problem of learning modular addition, training networks to compute sums like \(8+6=2\ (\mathrm{mod}\ 12)\). Modular arithmetic can be implemented with a simple geometric solution, familiar to anyone who has learned to read a clock: every integer is represented as an angle, input angles are added together, and the resulting angle is evaluated to obtain a modular sum (Figure 1, left). Nanda et al. [1] show that specific neural network architectures, when trained to perform modular addition, implement this _Clock_ algorithm. In this work, we show that the _Clock_ algorithm is only one part of a more complicated picture of algorithm learning in deep networks. In particular, networks very similar to the ones trained by Nanda et al. preferentially implement a qualitatively different approach to modular arithmetic, which we term the _Pizza_ algorithm (Figure 1, right), and sometimes even more complex solutions. Models exhibit sharp _algorithmic phase transitions_ [6] between the _Clock_ and _Pizza_ algorithms as their width and attention strength vary, and often implement multiple, imperfect copies of the _Pizza_ algorithm in parallel. Our results highlight the complexity of mechanistic description even in models trained to perform simple tasks. They point to characterization of algorithmic phase spaces, not just single algorithmic solutions, as an important goal in algorithm-level interpretability. **Organization** In Section 2, we review the _Clock_ algorithm [1] and show empirical evidence of deviation from it in models trained to perform modular addition. In Section 3, we show that these deviations can be explained by an alternative _Pizza_ algorithm.
In Section 4, we define additional metrics to distinguish between these algorithms, and detect phase transitions between these algorithms (and other _Non-circular_ algorithms) when architectures and hyperparameters are varied. We discuss the relationship between these findings and other work on model interpretation in Section 5, and conclude in Section 6. ## 2 Modular Arithmetic and the _Clock_ Algorithm **Setup** We train neural networks to perform modular addition \(a+b=c\pmod{p}\), where \(a,b,c=0,1,\cdots,p-1\). We use \(p=59\) throughout the paper. In these networks, every integer \(t\) has an associated embedding vector \(\mathbf{E}_{t}\in\mathbb{R}^{d}\). Networks take as input embeddings \([\mathbf{E}_{a},\mathbf{E}_{b}]\in\mathbb{R}^{2d}\) and predict a categorical output \(c\). Both embeddings and network parameters are learned. In preliminary experiments, we train two different network architectures on the modular arithmetic task, which we refer to as Model A and Model B. **Model A** is a one-layer ReLU transformer [7] with constant attention, while **Model B** is a standard one-layer ReLU transformer (see Appendix F.1 for details). As attention is not involved in Model A, it can also be understood as a ReLU MLP (Appendix G). ### Review of the _Clock_ Algorithm As in past work, we find that after training both Model A and Model B, embeddings (\(\mathbf{E}_{a},\mathbf{E}_{b}\) in Figure 1) usually describe a circle [8] in the plane spanned by the first two principal components of the embedding matrix. Formally, \(\mathbf{E}_{a}\approx[\cos(w_{k}a),\sin(w_{k}a)]\), where \(w_{k}=2\pi k/p\) and \(k\) is an integer in \([1,p-1]\). Nanda et al. [1] discovered a circuit that uses these circular embeddings to implement an interpretable algorithm for modular arithmetic, which we call the _Clock_ algorithm. \begin{table} \begin{tabular}{c c c c} \hline Algorithm & Learned Embeddings & Gradient Symmetry & Required Non-linearity \\ \hline Clock & Circle & No & Multiplication \\ Pizza & Circle & Yes & Absolute value \\ Non-circular & Line, Lissajous-like curves, etc. & Yes & N/A \\ \hline \end{tabular} \end{table} Table 1: Different neural algorithms for modular addition Figure 1: Illustration of the _Clock_ and the _Pizza_ Algorithm. "If a meeting starts at 10, and lasts for 3 hours, then it will end at 1." This familiar fact is a description of a modular sum, \(10+3=1\,(\mathrm{mod}\ 12)\), and the movement of a clock describes a simple algorithm for modular arithmetic: the numbers 1 through 12 are arranged on a circle in \(360^{\circ}/12=30^{\circ}\) increments, angles of \(10\times 30^{\circ}\) and \(3\times 30^{\circ}\) are added together, then this angle is evaluated to determine that it corresponds to \(1\times 30^{\circ}\). Remarkably, Nanda et al. [1] find that neural networks like our Model B implement this _Clock_ algorithm, visualized in Figure 1 (left): they represent tokens \(a\) and \(b\) as 2D vectors, and add their polar angles using trigonometric identities. Concretely, the _Clock_ algorithm consists of three steps: In step 1, tokens \(a\) and \(b\) are embedded as \(\mathbf{E}_{a}=[\cos(w_{k}a),\sin(w_{k}a)]\) and \(\mathbf{E}_{b}=[\cos(w_{k}b),\sin(w_{k}b)]\), respectively, where \(w_{k}=2\pi k/p\) (a real clock in everyday life has \(p=12\) and \(k=1\)). Then the polar angles of \(\mathbf{E}_{a}\) and \(\mathbf{E}_{b}\) are added (in step 2) and extracted (in step 3) via trigonometric identities.
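To make the three steps concrete, the following minimal NumPy sketch evaluates the idealized _Clock_ computation described above (not the trained transformer itself); the choice \(k=1\) and the use of exact trigonometric embeddings are illustrative assumptions, and the final argmax read-out is the one described next.

```python
import numpy as np

p, k = 59, 1                       # modulus and frequency; the paper uses p = 59
w = 2 * np.pi * k / p

def embed(t):
    """Step 1: place token t on the unit circle at angle w * t."""
    return np.array([np.cos(w * t), np.sin(w * t)])

def clock_logits(a, b):
    """Steps 2-3: add the polar angles of E_a and E_b via trigonometric identities,
    then score every candidate output c against the summed angle."""
    Ea, Eb = embed(a), embed(b)
    cos_ab = Ea[0] * Eb[0] - Ea[1] * Eb[1]    # cos(w * (a + b))
    sin_ab = Ea[0] * Eb[1] + Ea[1] * Eb[0]    # sin(w * (a + b))
    cs = np.arange(p)
    return cos_ab * np.cos(w * cs) + sin_ab * np.sin(w * cs)   # = cos(w * (a + b - c))

# The argmax over candidates recovers (a + b) mod p for every input pair.
assert all(int(np.argmax(clock_logits(a, b))) == (a + b) % p
           for a in range(p) for b in range(p))
```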
For each candidate output \(c\), we denote the logit \(Q_{abc}\), and finally the predicted output is \(c^{*}=\mathrm{argmax}_{c}\,Q_{abc}\). Crucial to this algorithm is the fact that the attention mechanism can be leveraged to perform multiplication. What happens in model variants when the attention mechanism is absent, as in Model A? We find two pieces of evidence of deviation from the _Clock_ algorithm in Model A. ### First Evidence for _Clock_ Violation: Gradient Symmetricity Since the _Clock_ algorithm has logits: \[Q_{abc}^{\mathrm{Clock}}=(\mathbf{E}_{a,x}\mathbf{E}_{b,x}-\mathbf{E}_{a,y} \mathbf{E}_{b,y})\mathbf{E}_{c,x}+(\mathbf{E}_{a,x}\mathbf{E}_{b,y}+\mathbf{ E}_{a,y}\mathbf{E}_{b,x})\mathbf{E}_{c,y}, \tag{1}\] (see Figure 1) the gradients of \(Q_{abc}\) generically lack permutation symmetry in argument order: \(\nabla_{\mathbf{E}_{a}}Q_{abc}\neq\nabla_{\mathbf{E}_{b}}Q_{abc}\). Thus, if learned models exhibit permutation symmetry (\(\nabla_{\mathbf{E}_{a}}Q_{abc}=\nabla_{\mathbf{E}_{b}}Q_{abc}\)), they must be implementing some other algorithm. We compute the 6 largest principal components of the input embedding vectors, then compute the gradients of the output logits (unnormalized log-probabilities from the model) with respect to the input embeddings and project them onto these principal components (since the angles relevant to the _Clock_ and _Pizza_ algorithms are encoded in the first few principal components). These projections are shown in Figure 2. While Model B demonstrates asymmetry in general, Model A exhibits gradient symmetry. ### Second Evidence for _Clock_ Violation: Logit Patterns Inspecting models' outputs, in addition to inputs, reveals further differences. For each input pair \((a,b)\), we compute the output logit (un-normalized log probability) assigned to the correct label \(a+b\). We visualize these _correct logits_ from Models A and B in Figure 3. Notice that the rows are indexed by \(a-b\) and the columns by \(a+b\). From Figure 3, we can see that the correct logits of Model A have a clear dependency on \(a-b\) in that within each row, the correct logits are roughly the same, Figure 2: Gradients on first six principal components of input embeddings. \((a,b,c)\) in the title stands for taking gradients on the output logit \(c\) for input \((a,b)\). x and y axes represent the gradients for embeddings of the first and the second token. The dashed line \(y=x\) signals a symmetric gradient. while this pattern is not observed in Model B. This suggests that Models A and B are implementing different algorithms. ## 3 An Alternative Solution: the _Pizza_ Algorithm How does Model A perform modular arithmetic? Whatever solution it implements must exhibit gradient symmetricity in Figure 2 and the output patterns in Figure 3. In this section, we describe a new algorithm for modular arithmetic, which we call the _Pizza_ algorithm, and then provide evidence that this is the procedure implemented by Model A. ### The _Pizza_ Algorithm Unlike the _Clock_ algorithm, the _Pizza_ algorithm operates _inside_ the circle formed by embeddings (just like pepperoni are spread all over a pizza), instead of operating on the circumference of the circle.
The basic idea is illustrated in Figure 1: given a fixed label \(c\), for _all_ \((a,b)\) with \(a+b=c\pmod{p}\), the points \(\mathbf{E}_{ab}=(\mathbf{E}_{a}+\mathbf{E}_{b})/2\) lie on a line through the origin of a 2D plane, and the points closer to this line than to the lines corresponding to any other \(c\) form two out of \(2p\) mirrored "pizza slices", as shown at the right of the figure. Thus, to perform modular arithmetic, a network can determine which slice pair the average of the two embedding vectors lies in. Concretely, the _Pizza_ algorithm also consists of three steps. Step 1 is the same as in the _Clock_ algorithm: the tokens \(a\) and \(b\) are embedded at \(\mathbf{E}_{a}=(\cos(w_{k}a),\sin(w_{k}a))\) and \(\mathbf{E}_{b}=(\cos(w_{k}b),\sin(w_{k}b))\), respectively. Step 2 and Step 3 are different from the _Clock_ algorithm. In Step 2.1, \(\mathbf{E}_{a}\) and \(\mathbf{E}_{b}\) are averaged to get \(\mathbf{E}_{ab}\). In Step 2.2 and Step 3, the polar angle of \(\mathbf{E}_{ab}\) is (implicitly) computed by computing the logit \(Q_{abc}\) for every possible output \(c\). While one possibility of doing so is to take the absolute value of the dot product of \(\mathbf{E}_{ab}\) with \((\cos(w_{k}c/2),\sin(w_{k}c/2))\), it is not commonly observed in neural networks (and will result in a different logit pattern). Instead, Step 2.2 transforms the sum into \(\mathbf{H}_{ab}\), which is then taken in a dot product with the output embedding \(U_{c}=(\cos(w_{k}c),\sin(w_{k}c))\). Finally, the prediction is \(c^{*}=\operatorname*{argmax}_{c}Q_{abc}\). See Appendix A and Appendix J for an example of such transforms (Step 2.2), mathematical derivation and the analysis of a circuit found in the wild. The key difference between the two algorithms lies in what non-linear operations are required: _Clock_ requires multiplication of inputs in Step 2, while _Pizza_ requires only absolute value computation, which is easily implemented by the ReLU layers. If neural networks lack inductive biases toward implementing multiplication, they may be more likely to implement _Pizza_ rather than _Clock_, as we will verify in Section 4. ### First Evidence for _Pizza_: Logit Patterns Both the _Clock_ and _Pizza_ algorithms compute logits \(Q_{abc}\) in Step 3, but they have different forms, shown in Figure 1. Specifically, \(Q_{abc}(Pizza)\) has an extra multiplicative factor \(|\cos(w_{k}(a-b)/2)|\) compared to \(Q_{abc}(\mathit{Clock})\). As a result, given \(c=a+b\), \(Q_{abc}(Pizza)\) is dependent on \(a-b\), but \(Q_{abc}(\mathit{Clock})\) is not. The intuition for the dependence is that a sample is more likely to be classified correctly if \(\mathbf{E}_{ab}\) is longer. The norm of this vector depends on \(a-b\). As we observe in Figure 3, the logits in Model A indeed exhibit a strong dependence on \(a-b\). Figure 3: Correct Logits of Model A & Model B. The correct logits of Model A (left) have a clear dependence on \(a-b\), while those of Model B (right) do not. ### Second Evidence for _Pizza_: Clearer Logit Patterns via Circle Isolation To better understand the behavior of this algorithm, we replace the embedding matrix \(\mathbf{E}\) with a series of rank-2 approximations: using only the first and second principal components, or only the third and fourth, etc. For each such matrix, embeddings lie in a two-dimensional subspace. For both Model A and Model B, we find that embeddings form a circle in this subspace (Figure 4 and Figure 5, bottom). We call this procedure _circle isolation_.
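In linear-algebra terms, circle isolation is a rank-2 truncation of the embedding matrix onto one pair of principal components. The sketch below illustrates the procedure; since the trained embedding matrices are not available here, a synthetic matrix containing two circles of different frequencies plus noise stands in for \(\mathbf{E}\) (an assumption made only for this illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
p, d = 59, 128
ts = np.arange(p)

def circle(freq):
    """One planar circle of frequency `freq`, spread over random directions in R^d."""
    dirs = rng.standard_normal((2, d))
    w = 2 * np.pi * freq / p
    return np.cos(w * ts)[:, None] * dirs[0] + np.sin(w * ts)[:, None] * dirs[1]

# Synthetic stand-in for a trained embedding matrix (the real E comes from the model):
# a dominant circle, a weaker one at another frequency, and a little noise.
E = circle(17) + 0.3 * circle(3) + 0.01 * rng.standard_normal((p, d))

def isolate_circle(E, pair=0):
    """Rank-2 approximation keeping only one pair of principal components."""
    Ec = E - E.mean(axis=0)
    U, S, Vt = np.linalg.svd(Ec, full_matrices=False)
    keep = slice(2 * pair, 2 * pair + 2)
    return U[:, keep] * S[keep] @ Vt[keep]     # mapped back to the original d dimensions

E_iso = isolate_circle(E, pair=0)              # keep only the dominant circle
print(np.linalg.matrix_rank(E_iso))            # 2: the embeddings now live in a plane
print(np.round(np.linalg.svd(E_iso, compute_uv=False)[:3], 2))  # two nonzero modes
```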
Even after this drastic modification to the trained models' parameters, both Model A and Model B continue to behave in interpretable ways: a subset of predictions remains highly accurate, with this subset determined by the periodicity of the \(k\) of the isolated circle. As predicted by the _Pizza_ and _Clock_ algorithms described in Figure 1, Model A's accuracy drops to zero at specific values of \(a-b\), while Model B's accuracy is invariant in \(a-b\). Applying circle isolation to Model A on the first two principal components (one circle) yields a model with \(32.8\%\) overall accuracy, while retaining the first six principal components (three circles) yields an overall accuracy of \(91.4\%\). See Appendix D for more discussion. By contrast, Model B achieves \(100\%\) when embeddings are truncated to the first six principal components. Circle isolation thus reveals an _error correction_ mechanism achieved via ensembling: when an algorithm (clock or pizza) exhibits systematic errors on a subset of inputs, models can implement multiple algorithm variants in parallel to obtain more robust predictions. Using these isolated embeddings, we may additionally calculate the isolated logits directly with formulas in Figure 1 and compare with the actual logits from Model A. Results are displayed in Table 2. We find that \(Q_{abc}(Pizza)\) explains substantially more variance than \(Q_{abc}(\textit{Clock})\). **Why do we only analyze correct logits?** The logits from the _Pizza_ algorithm are given by \(Q_{abc}(Pizza)=|\mathrm{cos}(w_{k}(a-b)/2)|\cos(w_{k}(a+b-c))\). By contrast, the _Clock_ algorithm has logits \(Q_{abc}(\textit{Clock})=\mathrm{cos}(w_{k}(a+b-c))\). In a word, \(Q_{abc}(Pizza)\) has an extra multiplicative factor \(|\mathrm{cos}(w_{k}(a-b)/2)|\) compared to \(Q_{abc}(\textit{Clock})\). By constraining \(c=a+b\) (thus \(\mathrm{cos}(w_{k}(a+b-c))=1\)), the factor \(|\mathrm{cos}(w_{k}(a-b)/2)|\) can be identified. **(Unexpected) dependence of logits \(Q_{abc}(\textit{Clock})\) on \(a+b\)**: Although our analysis above expects logits \(Q_{abc}(\textit{Clock})\) not to depend on \(a-b\), it does not predict their dependence on \(a+b\). In Figure 5, we surprisingly find that \(Q_{abc}(\textit{Clock})\) is sensitive to this sum. Our conjecture is that Step 1 and Step 2 of the _Clock_ are implemented (almost) noiselessly, such that same-label samples collapse to the same point after Step 2. However, Step 3 (classification) is imperfect after circle isolation, resulting in fluctuations of logits. Inputs with common sums \(a+b\) produce the same logits. Figure 4: Correct logits of Model A (_Pizza_) after circle isolation. The rightmost pizza is accompanying the third pizza (discussed in Section 3.4 and Appendix D). _Top:_ The logit pattern depends on \(a-b\). _Bottom:_ Embeddings for each circle. ### Third Evidence for _Pizza_: Accompanied & Accompanying Pizza The Achilles' heel of the _Pizza_ algorithm is antipodal pairs. If two inputs \((a,b)\) happen to lie antipodally, then their middle point will lie at the origin, where the correct "pizza slice" is difficult to identify. For example in Figure 1 right, antipodal pairs are (1,7), (2,8), (3,9) etc., whose middle points all collapse to the origin, but their class labels are different. Therefore models cannot distinguish between, and thus correctly classify, these pairs. Even for prime \(p\) where there are no strict antipodal pairs, approximately antipodal pairs are also more likely to be classified incorrectly than non-antipodal pairs.
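The antipodal weakness can be seen directly from the closed-form \(Q_{abc}(Pizza)\) quoted above. The sketch below evaluates that idealized formula (not the trained network): for odd \(p\) the argmax remains correct, but the classification margin collapses for near-antipodal pairs, which is what makes a real, noisy network fail on them.

```python
import numpy as np

p, k = 59, 1
w = 2 * np.pi * k / p
cs = np.arange(p)

def pizza_logits(a, b):
    """Q_abc(Pizza) = |cos(w(a-b)/2)| * cos(w(a+b-c)); the first factor equals the
    length of the midpoint E_ab and shrinks towards zero for antipodal inputs."""
    return np.abs(np.cos(w * (a - b) / 2)) * np.cos(w * (a + b - cs))

def margin(a, b):
    """Gap between the correct-class logit and the best wrong-class logit."""
    q = pizza_logits(a, b)
    c_star = (a + b) % p
    return q[c_star] - np.max(np.delete(q, c_star))

print(margin(3, 5))             # nearby inputs: comfortable margin
print(margin(0, (p - 1) // 2))  # near-antipodal inputs: margin shrinks by over an order
                                # of magnitude, leaving little room for noise
```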
Intriguingly, neural networks find a clever way to compensate for this failure mode: we find that pizzas usually come with "accompanying pizzas". An accompanied pizza and its accompanying pizza complement each other in the sense that near-antipodal pairs in the accompanied pizza become adjacent or close (i.e., very non-antipodal) in the accompanying pizza. If we denote the difference between adjacent numbers on the circle as \(\delta\), with \(\delta_{1}\) and \(\delta_{2}\) for the accompanied and accompanying pizzas, respectively, then \(\delta_{1}=2\delta_{2}\pmod{p}\). In the experiment, we found that pizzas #1/#2/#3 in Figure 4 all have accompanying pizzas, which we call pizzas #4/#5/#6 (see Appendix D for details). However, these accompanying pizzas do not play a significant role in final model predictions. We conjecture that training dynamics are as follows: (1) At initialization, pizzas #1/#2/#3 correspond to three different "lottery tickets" [9]. (2) In early stages of training, to compensate the weaknesses (antipodal pairs) of pizzas #1/#2/#3, pizzas #4/#5/#6 are formed. (3) As training goes on (in the presence of weight decay), the neural network gets pruned. As a result, pizzas #4/#5/#6 are not used much for prediction, although they continue to be visible in the embedding space. \begin{table} \begin{tabular}{l|l|l|l} Circle & \(w_{k}\) & \(Q_{abc}(\text{clock})\) FVE & \(Q_{abc}(\text{pizza})\) FVE \\ \hline \#1 & \(2\pi/59\cdot 17\) & 75.41\% & 99.18\% \\ \hline \#2 & \(2\pi/59\cdot 3\) & 75.62\% & 99.18\% \\ \hline \#3 & \(2\pi/59\cdot 44\) & 75.38\% & 99.28\% \\ \end{tabular} \end{table} Table 2: After isolating circles in the input embedding, fraction of variance explained (FVE) of **all** Model A's output logits (\(59\times 59\times 59\) of them) by various formulas. Both model output logits and formula results are normalized to mean \(0\) and variance \(1\) before taking the FVE. \(w_{k}\)'s are calculated according to the visualization. For example, the distance between \(0\) and \(1\) in Circle #1 is \(17\), so \(w_{k}=2\pi/59\cdot 17\). Figure 5: Correct logits of Model B (_Clock_) after circle isolation. _Top:_ The logit pattern depends on \(a+b\). _Bottom:_ Embeddings for each circle. ## 4 The Algorithmic Phase Space In Section 3, we have demonstrated a typical _Pizza_ (Model A) and a typical _Clock_ (Model B). In this section, we study how architectures and hyperparameters govern the selection of these two algorithmic "phases". In Section 4.1, we propose quantitative metrics that can distinguish between _Pizza_ and _Clock_. In Section 4.2, we observe how these metrics behave with different architectures and hyperparameters, demonstrating sharp phase transitions. The results in this section focus on _Clock_ and _Pizza_ models, but other algorithmic solutions to modular addition are also discovered, and explored in more detail in Appendix B. ### 4.1 Metrics We want to study the distribution of _Pizza_ and _Clock_ algorithms statistically, which will require us to distinguish between the two algorithms automatically. In order to do so, we formalize our observations in Sections 2.2 and 2.3, arriving at two metrics: **gradient symmetricity** and **distance irrelevance**. #### 4.1.1 Gradient Symmetricity To measure the symmetricity of the gradients, we select some input-output groups \((a,b,c)\), compute the gradient vectors for the output logit at position \(c\) with respect to the input embeddings, and then compute the cosine-similarity. Taking the average over many pairs yields the gradient symmetricity.
**Definition 4.1** (Gradient Symmetricity).: _For a fixed set \(S\subseteq\mathbb{Z}_{p}^{3}\) of input-output pairs2, define **gradient-symmetricity** of a network \(M\) with embedding layer \(E\) as_ Footnote 2: To speed-up the calculations, in our experiments \(S\) is taken as a random subset of \(\mathbb{Z}_{p}^{3}\) of size \(100\). \[s_{g}\equiv\frac{1}{|S|}\sum_{(a,b,c)\in S}\text{sim}\left(\frac{\partial Q_{ abc}}{\partial\mathbf{E}_{a}},\frac{\partial Q_{abc}}{\partial\mathbf{E}_{b}} \right),\] _where \(\text{sim}(a,b)=\frac{a\cdot b}{|a||b|}\) is the cosine-similarity, \(Q_{abc}\) is the logit for class \(c\) given input \(a\) and \(b\). It is clear that \(s_{g}\in[-1,1]\)._ As we discussed in Section 2.2, the _Pizza_ algorithm has symmetric gradients while the _Clock_ algorithm has asymmetric ones. Model A and Model B in Section 3 have gradient symmetricity \(99.37\%\) and \(33.36\%\), respectively (Figure 2). #### 4.1.2 Distance Irrelevance To measure the dependence of correct logits on differences between two inputs, which reflect the distances of the inputs on the circles, we measure how much of the variance in the correct logit matrix depends on it. We do so by comparing the average standard deviation of correct logits from inputs with the same differences and the standard deviation from all inputs. **Definition 4.2** (Distance Irrelevance).: _For some network \(M\) with correct logit matrix \(L\) (\(L_{i,j}=Q_{ij,i+j}\)), define its **distance irrelevance** as_ \[q\equiv\frac{\frac{1}{p}\sum_{d\in\mathbb{Z}_{p}}\operatorname{std}\left(L_{ i,i+d}\mid i\in\mathbb{Z}_{p}^{2}\right)}{\operatorname{std}\left(L_{i,j}\mid i,j \in\mathbb{Z}_{p}^{2}\right)},\] _where \(\operatorname{std}\) computes the standard deviation of a set. It is clear that \(q\in[0,1]\)._ Model A and Model B in Section 3 give distance irrelevance 0.17 and 0.85, respectively (Figure 3). A typical distance irrelevance from the _Pizza_ algorithm ranges from 0 to 0.5 while a typical distance irrelevance from _Clock_ algorithm ranges from 0.5 to 1. ### Phase Transition Results We want to study how models "choose" whether to implement the _Clock_ or _Pizza_ algorithm. We do so by interpolating between Model A (transformer without attention) and Model B (transformer with attention). To do so, we introduce a new hyperparameter \(\alpha\) we call the **attention rate**. For a model with attention rate \(\alpha\), we modify the attention matrix \(M\) for each attention head to be \(M^{\prime}=M\alpha+I(1-\alpha)\). In other words, we modify this matrix to consist of a linear interpolation between the identity matrix and the original attention (post-softmax), with the rate \(\alpha\) controlling how much of the attention is kept. The transformer with and without attention corresponds to the case where \(\alpha=1\) (attention kept) and \(\alpha=0\) (constant attention matrix). With this parameter, we can control the balance of attention versus linear layers in transformers. We performed the following set of experiments on transformers (see Appendix F.1 for architecture and training details). (1) One-layer transformers with width \(128\) and attention rate uniformly sampled in \([0,1]\) (Figure 6). (2) One-layer transformers with width log-uniformly sampled in \([32,512]\) and attention rate uniformly sampled in \([0,1]\) (Figure 6). (3) Transformers with \(2\) to \(4\) layers, width \(128\) and attention rate uniformly sampled in \([0,1]\) (Figure 10). 
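Before turning to the results, the two metrics can be prototyped without a trained network by applying Definitions 4.1 and 4.2 to the analytic _Clock_ and _Pizza_ logit formulas. In the sketch below, gradients are taken by finite differences rather than backpropagation, and a small amount of embedding noise is added (our assumption) so that the distance-irrelevance ratio is well defined for the _Clock_ case; the printed values are therefore only qualitative, not the values reported for Models A and B.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 59, 1
w = 2 * np.pi * k / p
# Idealized circular token embeddings plus a little noise standing in for training error.
E = np.stack([np.cos(w * np.arange(p)), np.sin(w * np.arange(p))], axis=1)
E = E + 0.02 * rng.standard_normal(E.shape)

def q_clock(ea, eb, c):                         # Eq. (1): not symmetric in (ea, eb)
    return ((ea[0] * eb[0] - ea[1] * eb[1]) * np.cos(w * c)
            + (ea[0] * eb[1] + ea[1] * eb[0]) * np.sin(w * c))

def q_pizza(ea, eb, c):                         # depends on the inputs only via (ea+eb)/2
    m = (ea + eb) / 2
    r = max(float(np.hypot(m[0], m[1])), 1e-9)
    return ((m[0]**2 - m[1]**2) * np.cos(w * c) + 2 * m[0] * m[1] * np.sin(w * c)) / r

def grad(f, x, eps=1e-5):                       # central finite differences
    g = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x); step[i] = eps
        g[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return g

def gradient_symmetricity(q, n=100):            # Definition 4.1
    sims = []
    for a, b, c in rng.integers(0, p, (n, 3)):
        ga = grad(lambda x: q(x, E[b], c), E[a])
        gb = grad(lambda x: q(E[a], x, c), E[b])
        sims.append(ga @ gb / (np.linalg.norm(ga) * np.linalg.norm(gb)))
    return float(np.mean(sims))

def distance_irrelevance(q):                    # Definition 4.2
    L = np.array([[q(E[a], E[b], (a + b) % p) for b in range(p)] for a in range(p)])
    per_diff = np.mean([np.std([L[i, (i + d) % p] for i in range(p)]) for d in range(p)])
    return float(per_diff / np.std(L))

print(gradient_symmetricity(q_pizza), distance_irrelevance(q_pizza))  # ~1 and small
print(gradient_symmetricity(q_clock), distance_irrelevance(q_clock))  # well below 1, ~1
```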
**The _Pizza_ and the _Clock_ algorithms are the dominating algorithms with circular embeddings.** For circular models, most observed models either have low gradient symmetricity (corresponding to the _Clock_ algorithm) or low distance irrelevance (corresponding to the _Pizza_ algorithm). **Two-dimensional phase change observed for attention rate and layer width.** For the fixed-width experiment, we observed a clear phase transition from the _Pizza_ algorithm to the _Clock_ algorithm (characterized by gradient symmetricity and distance irrelevance). We also observe an almost linear phase boundary with respect to both attention rate and layer width. In other words, the attention rate transition point increases as the model gets wider. Figure 6: Training results from 1-layer transformers. Each point in the plots represents a training run reaching circular embeddings and 100% validation accuracy. See Appendix C for additional plots. _Top:_ Model width fixed to be 128. _Bottom:_ Model width varies. The phase transition lines are calculated by logistic regression (classify the runs by whether gradient symmetricity \(>98\%\) and whether distance irrelevance \(<0.6\)). **Dominance of linear layers determines whether the _Pizza_ or the _Clock_ algorithm is preferred.** For one-layer transformers, we study the transition point against the attention rate and the width: * The _Clock_ algorithm dominates when the attention rate is higher than the phase change point, and the _Pizza_ algorithm dominates when the attention rate is lower than the point. Our explanation is: At a high attention rate, the attention mechanism is more prominent in the network, giving rise to the clock algorithm. At a low attention rate, the linear layers are more prominent, giving rise to the pizza algorithm. * The phase change point gets higher when the model width increases. Our explanation is: When the model gets wider, the linear layers become more capable while the attention mechanism receives less benefit (attention weights remain scalars while outputs from linear layers become wider vectors). The linear layer therefore gains more prominence with a wider model. **Existence of non-circular algorithms.** Although our presentation focuses on circular algorithms (i.e., whose embeddings are circular), we find non-circular algorithms (i.e., whose embeddings do not form a circle when projected onto any plane) to be present in neural networks. See Appendix B for preliminary findings. We also find that deeper networks are more likely to form non-circular algorithms. We also observe the appearance of non-circular networks at low attention rates. Nevertheless, the _Pizza_ algorithm continues to be observed (low distance irrelevance, high gradient symmetricity). ## 5 Related Work **Mechanistic interpretability** aims to mechanically understand neural networks by reverse engineering them [2; 3; 5; 4; 10; 11; 12; 13; 14]. One can either look for patterns in weights and activations by studying single-neuron behavior (superposition [11], monosemantic neurons [15]), or study meaningful modules or circuits grouped by neurons [4; 14]. Mechanistic interpretability is closely related to training dynamics [8; 13; 1]. **Learning mathematical tasks**: Mathematical tasks provide useful benchmarks for neural network interpretability, since the tasks themselves are well understood. The setup could be learning from images [16; 17], with trainable embeddings [18], or with numbers as inputs [19; 5].
Beyond arithmetic relations, machine learning has been applied to learn other mathematical structures, including geometry [20], knot theory [21] and group theory [22]. **Algorithmic phase transitions**: Phase transitions are present in classical algorithms [23] and in deep learning [6; 24; 25]. Usually the phase transition means that the algorithmic performance sharply changes when a parameter is varied (e.g., amount of data, network capacity, etc.). However, the phase transition studied in this paper is _representational_: both clock and pizza give perfect accuracy, but arrive at answers via different internal computations. These model-internal phase transitions are harder to study, but closer to corresponding phenomena in physical systems [24]. **Algorithm learning in neural networks**: Emergent abilities in deep neural networks, especially large language models, have recently attracted significant attention [26]. An ability is "emergent" if the performance on a subtask suddenly increases with growing model sizes, though such claims depend on the choice of metric [27]. It has been hypothesized that the emergence of a specific capability in a model corresponds to the emergence of a modular circuit responsible for that capability, and that emergence of some model behaviors thus results from a sequence of quantized circuit discovery steps [5]. ## 6 Conclusions We have offered a closer look at recent findings that familiar algorithms arise in neural networks trained on specific algorithmic tasks. In modular arithmetic, we have shown that such algorithmic discoveries are not inevitable: in addition to the _Clock_ algorithm reverse-engineered by [1], we find other algorithms (including a _Pizza_ algorithm, and more complicated procedures) to be prevalent in trained models. These different algorithmic phases can be distinguished using a variety of new and existing interpretability techniques, including logit visualization, isolation of principal components in embedding space, and gradient-based measures of model symmetry. These techniques make it possible to _automatically_ classify trained networks according to the algorithms they implement, and reveal algorithmic phase transitions in the space of model hyperparameters. Here we found specifically that the emergence of a _Pizza_ or _Clock_ algorithm depends on the relative strength of linear layers and attention outputs. We additionally showed that these algorithms are not implemented in isolation; instead, networks sometimes ensemble multiple copies of an algorithm in parallel. These results offer exciting new challenges for mechanistic interpretability: (1) How can we find, classify, and interpret unfamiliar algorithms in a systematic way? (2) How can we disentangle multiple, parallel algorithm implementations in the presence of ensembling? **Limitations** We have focused on a single learning problem: modular addition. Even in this restricted domain, qualitatively different model behaviors emerge across architectures and seeds. Significant additional work is needed to scale these techniques to the even more complex models used in real-world tasks. **Broader Impact** We believe interpretability techniques can play a crucial role in creating and improving safe AI systems. However, they may also be used to build more accurate systems, with the attendant risks inherent in all dual-use technologies. It is therefore necessary to exercise caution and responsible decision-making when deploying such techniques.
## Acknowledgement We would like to thank Mingyang Deng for valuable discussions and MIT SuperCloud for providing computation resources. ZL and MT are supported by the Foundational Questions Institute, the Rothberg Family Fund for Cognitive Science and IAIFI through NSF grant PHY-2019786. JA is supported by a gift from the OpenPhilanthropy Foundation.
2309.06114
Split gluon masses in $SU(N)\times SU(M)$ theories
We extend a known mass-gap equation for pure gluodynamics in global colour models (formulated in equal time quantization in Coulomb gauge) to one in which gluons split into two sets which may have different masses. If the theory is $SU(N)\times SU(M)$ with gluons in both groups having identical couplings (as suggested by Grand Unification arguments at large scales) it is immediate to see that different masses are generated for each subgroup. This global symmetry is not broken, but the split masses erase accidental symmetries that might be present due to the two couplings being the same at the large scale, such as $SU(N\times M)$ or similar. We also numerically explore a couple of low-dimensional examples of simple Lie groups, but in spite of the system having a form that would seem to allow spontaneous symmetry breaking, it is not triggered for these groups whose algebra has no ideal, and the dispersion relations for the various gluons converge to the same form.
Julia Gómez Concejo, Felipe J. Llanes-Estrada, Diego María-Almazán, Alexandre Salas-Bernárdez
2023-09-12T10:28:50Z
http://arxiv.org/abs/2309.06114v1
# Split gluon masses in \(SU(N)\times SU(M)\) theories ###### Abstract We extend a known mass-gap equation for pure gluodynamics in global colour models (formulated in equal time quantization in Coulomb gauge) to one in which gluons split into two sets which may have different masses. If the theory is \(SU(N)\times SU(M)\) with gluons in both groups having identical couplings (as suggested by Grand Unification arguments at large scales) it is immediate to see that different masses are generated for each subgroup. This global symmetry is not broken, but the split masses erase accidental symmetries that might be present due to the two couplings being the same at the large scale, such as \(SU(N\times M)\) or similar. We also numerically explore a couple of low-dimensional examples of simple Lie groups, but in spite of the system having a form that would seem to allow spontaneous symmetry breaking, it is not triggered for these groups whose algebra has no ideal, and the dispersion relations for the various gluons converge to the same form. pacs: 11.15.-qGauge field theories and 11.15.ExSpontaneous breaking of gauge symmetries ## 1 Introduction Spontaneous gauge-boson mass generation is at the core of the Standard Model. Additionally to the Higgs mechanism, the Schwinger mechanism and similar ideas allow the gauge bosons (henceforth, "gluons") to acquire a mass without the assistance of an explicit additional field [1; 2; 3; 4; 5]. Gluon masses are a welcome gauge-fixed feature of Chromodynamics as they raise glueballs from the low-lying hadron spectrum [6; 7; 8; 9] where the existing hadrons are well understood. This is perhaps worth exploring in the context of Grand Unification because complicated symmetry breaking patterns [10; 11] appear and the scalar Higgs-type mechanisms to break the symmetry can be convoluted (in fact, in many Grand Unification situations, the Higgs needs to be described by a composite field from the start [12; 13]). We do not have a particular agenda nor unification model in mind, but want to generically explore a system of coupled gap equations that may allow splitting the gluon masses into two or more different values. Having this theoretical mechanism (which we partially achieve as will be explained in detail) would allow to have additional theoretical tools to explore unification dynamics. Because of the first theorem of Vafa and Witten [14], we know that spontaneous global colour-symmetry breaking is impossible in the quark sector, so our exploration concentrates on the Yang-Mills sector alone. Then there is also the question of why the Standard Model is built out of low-dimensional Lie groups [15; 16] that may well have to do with the spontaneous acquisition of large masses (triggered by very different evolutions of the coupling constants) by particles charged under the (absent) large dimensional groups, which would remove such particles from the spectrum. These gap equations are formulated in Coulomb gauge, but our considerations should be easy to extend to other gauges such as Landau gauge [17; 18]. 
Modeling the Coulomb gauge dynamics with simple global-colour model does not capture all the interesting phenomena, such as for example the Gribov divergent gluon mass at low momentum (a very strongly infrared enhanced propagator) [19] but they are strong enough to trigger the generation of gluon masses: and we are not sure that we want to explore confinement (including Coulomb confinement is a necessary condition to describe confinement in arbitrary gauges, as the Coulomb potential is an upper bound for the QCD potential [20]) in this work, that does not necessarily restrict itself to Quantum Chromodynamics (QCD) with the group \(SU(3)\) but the production of a gap. This modified dispersion relation with a finite gluon mass is a feature of a more general class of theories. In this article we review, in section 2, the obtention of the known pure Yang-Mills gap equation in the North Carolina State [21] model; we solve it for various groups, all of which have the same coupling constant at a low scale in section 3; we then, in section 4, extend the mechanism to allow for the possibility of different variational wavefunctions for each of the gauge bosons, which could possibly trigger spontaneous breaking of a global symmetry. We succeed in doing this for product Lie groups or any other situation in which the underlying Lie algebra contains an ideal. Afterwards, we conduct a first numerical exploration for a few simple Lie groups of low-dimension, reported in section 5 and do not currently find a situation in which the symmetry breaks. After a brief outlook, we complement the discussion with an appendix detailing the numerical solution method, the necessary colour algebra, and an exhaustive list of the structure constant combinations (in the particular case of \(SU(3)\) only). ## 2 Coulomb-gauge gap equation for a singlet condensate In this section we present the relatively well-known theory of the mass gap equation leading to a gluon mass in Coulomb gauge with a color-singlet condensate (note that any gauge boson mass and condensates are necessarily features of a gauge-fixed picture of the theory) that therefore respects all global symmetries. We start from a global-color symmetry preserving Hamiltonian [21]: \[H =\int d^{3}\mathbf{x}\,(\mathbf{\Pi^{a}}\cdot\mathbf{\Pi^{a}}+ \mathbf{B^{a}}\cdot\mathbf{B^{a}})-\] \[-\frac{1}{2}\int d^{3}\mathbf{x}\int d^{3}\mathbf{y}\rho^{a}_{ \mathrm{glue}}(\mathbf{x})V(|\mathbf{x}-\mathbf{y}|)\rho^{a}_{\mathrm{glue}}( \mathbf{y})\, \tag{1}\] where \(\mathbf{\Pi^{a}}\) represents the colour electric field, \(\mathbf{B^{a}}\) the chromomagnetic field, and \(\rho^{a}_{\mathrm{glue}}=f^{abc}\mathbf{A}^{b}\cdot\mathbf{\Pi^{c}}\) the colour charge density. The Hamiltonian differs from exact QCD in that the potential \(V(|\mathbf{x}-\mathbf{y}|)\) is a c-function (like in electrodynamics) given below in Eq. (16) simplifying the kernel that appears in the full non-Abelian theory [22; 23]. 
In the basis of well-defined momentum particles, with creation, \(a^{a\dagger}\), and destruction, \(a^{a}\), boson operators the fields take the form \[A^{a}_{i}(\mathbf{x}) =\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\frac{1}{\sqrt{2|\mathbf{k }|}}\left[a^{a}_{i}(\mathbf{k})+a^{a\dagger}_{i}(-\mathbf{k})\right]e^{i \mathbf{k}\cdot\mathbf{x}} \tag{2}\] \[\Pi^{a}_{i}(\mathbf{x}) =-i\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\sqrt{\frac{|\mathbf{k}|} {2}}\left[a^{a}_{i}(\mathbf{k})-a^{a\dagger}_{i}(-\mathbf{k})\right]e^{i \mathbf{k}\cdot\mathbf{x}}\] (3) \[B^{a}_{i} =\epsilon_{ijk}\left(\bigtriangledown_{j}A^{a}_{k}+\frac{g}{2}f^ {abc}A^{b}_{j}A^{c}_{k}\right). \tag{4}\] Because the Coulomb gauge is spatially transverse, adequate commutation rules that project out the longitudinal gluons are \[[a^{a}_{i}(\mathbf{k}),a^{b}_{j}(\mathbf{q})^{\dagger}]=(2\pi)^{3}\delta^{ab} \delta^{3}(\mathbf{k}-\mathbf{q})\left(\delta_{ij}-\hat{k}_{i}\hat{k}_{j} \right)\, \tag{5}\] where \(\hat{\mathbf{k}}\equiv\mathbf{k}/|\mathbf{k}|\). A gluon condensed vacuum \(|\Omega\rangle\) is variationally chosen by minimizing the expectation value of the Hamiltonian \(\langle H\rangle\). The quasiparticles that will annihilate it will have a dispersion relation \(E(k)\) that serves as the actual variational function, controlling the canonical Bogoliubov rotation [24] \[\alpha^{a}_{i} =\cosh\theta^{a}_{k}a^{a}_{i}(\mathbf{k})+\sinh\theta^{a}_{k}a^{a \dagger}_{i}(-\mathbf{k}) \tag{6}\] \[\alpha^{a\dagger}_{i} =\sinh\theta^{a}_{k}a^{a}_{i}(\mathbf{k})+\cosh\theta^{a}_{k}a^{a \dagger}_{i}(-\mathbf{k})\, \tag{7}\] so that the field expansions become \[A^{a}_{i}(\mathbf{x}) =\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\frac{1}{\sqrt{2E^{a}_{ \mathbf{k}}}}\left(\alpha^{a}_{i}(\mathbf{k})+\alpha^{a\dagger}_{i}(-\mathbf{ k})\right)e^{i\mathbf{k}\cdot\mathbf{x}} \tag{8}\] \[\Pi^{a}_{i}(\mathbf{x}) =-i\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\sqrt{\frac{E^{a}_{ \mathbf{k}}}{2}}\left(\alpha^{a}_{i}(\mathbf{k})-\alpha^{a\dagger}_{i}(- \mathbf{k})\right)e^{i\mathbf{k}\cdot\mathbf{x}}. \tag{9}\] The relation between the hyperbolic Bogoliubov angle \(\theta^{a}_{k}\) and the dispersion relation is then \[\tanh\theta^{a}_{\mathbf{k}}=\frac{|\mathbf{k}|-E^{a}_{\mathbf{k}}}{|\mathbf{ k}|+E^{a}_{\mathbf{k}}}. \tag{10}\] Although it is not directly used in practice, the vacuum state of the interacting theory satisfying \(\alpha^{a}_{i}\,|\Omega\rangle=0\) is obtained from the free vacuum via \[|\Omega\rangle=e^{\left(-\int\frac{d^{3}\mathbf{k}}{2(2\pi)^{3}}\tanh\theta^{a }_{\mathbf{k}}(\delta_{ij}-\hat{k}_{i}\hat{k}_{j})a^{a\dagger}_{i}(\mathbf{k} )a^{a\dagger}_{j}(-\mathbf{k})\right)}\,|0\rangle\,. \tag{11}\] To apply the Rayleigh-Ritz variational principle we require the expectation value of the Hamiltonian in the family of rotated vacuum states, \[\langle H_{\Pi}\rangle_{\Omega} =(2\pi)^{3}\delta^{3}(0)\sum_{a}\int\frac{d^{3}\mathbf{k}}{(2\pi )^{3}}\frac{E^{a}_{\mathbf{k}}}{2}\] \[\langle H_{B}\rangle_{\Omega} =(2\pi)^{3}\delta^{3}(0)\sum_{a}\int\frac{d^{3}\mathbf{k}}{(2\pi)^ {3}}\frac{|\mathbf{k}|^{2}}{2E^{a}_{\mathbf{k}}}\] \[\langle H_{V}\rangle_{\Omega} =(2\pi)^{3}\delta^{3}(0)\frac{1}{8}\sum_{abc}\int\frac{d^{3} \mathbf{k}}{(2\pi)^{3}}\frac{d^{3}\mathbf{k}^{\prime}}{(2\pi)^{3}}\times\] \[\times\left[(1+(\hat{\mathbf{k}}\cdot\hat{\mathbf{k}}^{\prime})) \hat{V}(|\mathbf{k}-\mathbf{k}^{\prime}|)f^{abc}f^{abc}\left(\frac{E^{c}_{ \mathbf{k}}}{E^{b}_{\mathbf{k}^{\prime}}}-C_{G}\right)\right]. 
\tag{12}\] The factor \(\delta^{3}(0)\) simply represents the quantization volume: it can be ignored in minimizing the energy density. The variational principle then yields \[\frac{\delta\langle H\rangle_{\Omega}}{\delta E^{d}_{\mathbf{q}}}=\frac{ \delta(\langle H_{\Pi}\rangle_{\Omega}+\langle H_{B}\rangle_{\Omega}+ \langle H_{V}\rangle_{\Omega})}{\delta E^{d}_{\mathbf{q}}}=0\] and this entails the following mass-gap equation for the gauge bosons, \[(E^{d}_{\mathbf{k}})^{2} =|\mathbf{q}|^{2}-\frac{1}{4}\sum_{a,b}f^{abd}f^{abd}\int\frac{d^{ 3}\mathbf{k}}{(2\pi)^{3}}\times\] \[\times\hat{V}(|\mathbf{k}-\mathbf{q}|)(1+(\hat{\mathbf{k}}\cdot \hat{\mathbf{q}})^{2})\left(\frac{(E^{b}_{\mathbf{k}})^{2}-(E^{d}_{\mathbf{q}})^ {2}}{E^{b}_{\mathbf{k}}}\right). \tag{13}\] This is a nonlinear integral equation for \(E_{\mathbf{q}}\) that appears also on the right, so the solution needs to be iterative until a fixed point is found. We can simplify a bit by noting that \(E_{\mathbf{k}}\) does not depend on the angular variables, so that defining an effective potential that absorbs the polar integral \[\hat{V}_{\rm eff}({\bf k},{\bf q})=\frac{1}{2\pi}\int d\Omega\,\hat{V}(|{\bf k}-{ \bf q}|)(1+(\hat{\bf k}\cdot\hat{\bf q})^{2}), \tag{14}\] the radial equation becomes \[(E_{\bf q}^{d})^{2} = |{\bf q}|^{2}-\frac{1}{4}\sum_{a,b}f^{abd}f^{abd}\int_{0}^{\infty }\frac{d|{\bf k}|}{(2\pi)^{2}}|{\bf k}|^{2}\times \tag{15}\] \[\times \hat{V}_{\rm eff}({\bf k},{\bf q})\left(\frac{(E_{\bf k}^{b})^{2} -(E_{\bf q}^{d})^{2}}{E_{\bf k}^{b}}\right)\.\] An alternative way to derive this equation is via the functional approach [25]. The potential \(\hat{V}(|{\bf k}-{\bf q}|)\) is the Fourier transform of the potential \(V(|{\bf x}-{\bf y}|)\) in Eq. (1). In a confining theory, the potential can be approximated by the Cornell linear+Coulomb \(1/r\) potential, resulting in \[V(|{\bf x}-{\bf y}|)=-\frac{\alpha_{s}}{|{\bf x}-{\bf y}|}+b\,|{\bf x}-{\bf y }|\,e^{-A_{\rm phen}|{\bf x}-{\bf y}|}\, \tag{16}\] where \(\alpha_{s}\) is the strong interaction constant and \(b\) the string tension. The term \(e^{-A_{\rm phen}|{\bf x}-{\bf y}|}\) is a regulator that tames the strong infrared growth of the linear potential, but we will not use it and actually employ the computer grid to regulate the integration. Since the gluon pairs in the condensate are in a singlet state, all quasiparticles have the same dispersion relation and no global symmetry is broken, \(E_{\bf q}^{d}=E_{\bf q}\) for all \(d\). The sum over the structure constants is the Casimir of the adjoint representation, \(C_{G}=\sum_{a,b}f^{abd}f^{abd}\), that for \(SU(N)\) is simply \(C_{G}=N\). Eq. (15) then reduces to \[(E_{\bf q})^{2}=|{\bf q}|^{2}-\frac{C_{G}}{4}\int_{0}^{\infty}\frac{d|{\bf k} |}{(2\pi)^{2}}|{\bf k}|^{2}\hat{V}_{\rm eff}({\bf k},{\bf q})\left(\frac{E_{ \bf k}^{2}-E_{\bf q}^{2}}{E_{\bf k}}\right). \tag{17}\] ## 3 Colour-symmetric gap equation for various groups Let us now separately study the effect of the terms of the potential in Eq. 
(16), starting by the linear potential (first used in a gap equation, to our knowledge, in [26; 27]), \[V_{L}(|{\bf x}-{\bf y}|)=b\,|{\bf x}-{\bf y}|\,\] that, after Fourier transform, becomes \[\hat{V}_{L}={\cal F}^{3}(V_{L})=-\frac{8\pi b}{|{\bf k}-{\bf q}|^{4}}\, \tag{18}\] and handling the angular integrals yields the effective potential for the radial equation, \[\hat{V}_{\rm eff,\,\,L}= \int_{-1}^{1}d\theta\frac{-8\pi b}{(|{\bf k}|^{2}+|{\bf q}|^{2}-2 |{\bf k}||{\bf q}|\cos\theta)^{2}}(1+\cos^{2}\theta)=\] \[= -8\pi b\Big{[}\left(\frac{|{\bf k}|^{2}+|{\bf q}|^{2}}{|{\bf k}|^ {2}-|{\bf q}|^{2}}\right)^{2}\frac{1}{|{\bf k}|^{2}|{\bf q}|^{2}}+\] \[+\frac{|{\bf k}|^{2}+|{\bf q}|^{2}}{4|{\bf k}|^{3}|{\bf q}|^{3}} \log\left(\frac{|{\bf k}|-|{\bf q}|}{|{\bf k}|+|{\bf q}|}\right)^{2}\Big{]}. \tag{19}\] The \(k\) integral in Eq. (15) has a log infrared divergence upon employing the \(1/(k-q)^{4}\). The regulated equation is numerically solved (as detailed in the appendix) and the solutions are plot in figure 1 for different symmetry groups. In all cases we see the emergence of a mass, \(m=E(0)\), larger with increasing dimension of the symmetry group due to the \(C_{G}\) colour factor in Eq. (17). We now turn to the Coulomb potential that is a good description of the actual potential when interactions are small, that is, at high momentum transfers in non-Abelian theories. It is \[V_{C}(|{\bf x}-{\bf y}|)=-\frac{\alpha_{s}}{|{\bf x}-{\bf y}|}\,\] with Fourier transform \[\hat{V}_{C}(|{\bf k}-{\bf q}|)={\cal F}^{3}(V_{C})=-\frac{4\pi\alpha_{s}}{|{ \bf k}-{\bf q}|^{2}}\,, \tag{20}\] and, because of the absence of a \(\phi\)-dependence, just as for the linear potential, both being central, the effective Figure 1: Computed numerical dispersion relations \(E(p)\) with the linear potential from Eq. (19). From bottom to top they correspond to the groups \(SU(3)\) through \(SU(10)\), all with the same string tension \(b=0.18\) GeV\({}^{2}\). potential for the radial equation is \[\hat{V}_{\rm eff,\,\,C}=\int_{-1}^{1}d\theta\frac{-4\pi\alpha_{s}}{| \mathbf{k}|^{2}+|\mathbf{q}|^{2}-2|\mathbf{k}||\mathbf{q}|\cos\theta}(1+\cos^{2} \theta)=\] \[=4\pi\alpha_{s}\Big{[}\frac{1}{2|\mathbf{q}|^{2}}+\frac{1}{2| \mathbf{k}|^{2}}+\] \[\qquad\qquad+\frac{|\mathbf{k}|^{4}+6|\mathbf{k}|^{2}|\mathbf{q}| ^{2}+|\mathbf{q}|^{4}}{8|\mathbf{k}|^{3}|\mathbf{q}|^{3}}\log\left(\frac{| \mathbf{k}|-|\mathbf{q}|}{|\mathbf{k}|+|\mathbf{q}|}\right)^{2}\Big{]}. \tag{21}\] This potential causes no problem in the infrared \(k\to q\) limit, but the improper integral in the ultraviolet \(k\to\infty\) does not converge. Since, unlike the linear potential, the Coulombic one is scale-free, the solutions scale with the regulating cutoff). Since it is not particularly appealing that the computer grid determines the mass gap (although common practice in many computer fields), we will eliminate that dependence by a fixed momentum subtraction (MOM scheme). We therefore detract from Eq. (17) the same equation but with a fixed value of the momentum scale, \(\mu\), that is now dictating the solution's mass. 
The resulting equation, \[(E_{\mathbf{q}})^{2}=(E_{\mu}^{d})^{2}+|\mathbf{q}|^{2}-\mu^{2}- \frac{C_{G}}{4}\int_{0}^{\infty}\frac{d|\mathbf{k}|}{(2\pi)^{2}}\frac{| \mathbf{k}|^{2}}{E_{\mathbf{k}}}\times\] \[\times(\hat{V}_{\rm eff}(\mathbf{k},\mathbf{q})\big{(}(E_{ \mathbf{k}})^{2}-(E_{\mathbf{q}})^{2}\big{)}-\hat{V}_{\rm eff}(\mathbf{k},\mu )\big{(}(E_{\mathbf{k}})^{2}-(E_{\mu})^{2}\big{)}\, \tag{22}\] has a much better behaviour, and any \(k\to\infty\) integration divergence is suppressed by the new fixed-point subtraction, with the same potential but opposite sign. \(\mu\) has to be chosen high enough so that the energy be practically equal to the momentum, that is, \(E(\mu)=\mu\), and one is in a quasifree regime. Beyond that, \(\mu\) is arbitrary just as the choice of cutoff was. Still it allows control of the problem's scale without regards to the integration grid. Upon applying this method to various symmetry groups, the masses generically diminish, as can be observed in figure 2. The qualitative features of mass generation are similar to the linear potential, and in both cases the larger the dimension of the symmetry group, the larger the mass which is generated, all other things being equal, due to the \(C_{G}\) colour factor. Of course, the scales of both plots have been set so that the resulting gluon masses make sense at the QCD scale with \(SU(3)\), yielding glueballs of reasonable mass [28], but the reader can easily scale them as needed to any other energy regime. The resulting dispersion relations are standard, as would appear in a plasma with a cutoff or upon solving the Helmholtz equation in a waveguide, and show that a minimum threshold energy is required to propagate gluons of any momentum. The gluon masses generated for large-dimensional groups are not exponentially far from the QCD one, but this is because we start with the same coupling constant at a low scale. If instead we started with the same coupling constant at a very large (Grand Unification) scale, the much larger antiscreening of the Yang-Mills coupling constant for larger groups would yield exponentially larger masses at a low-scale, effectively removing such theories from the spectrum, as we have shown elsewhere [15; 16]. ## 4 Splitting the masses in \(Su(n)\!\!\times\!\!Su(m)\) The mass generation that we have so far fixed forced us to fix the local gauge (we have adopted the Coulomb one, but similar results have been obtained in others), but the solutions fully respect the global color symmetry. In this section we turn to the possibility that the solutions may spontaneously break some global symmetry and different gluons come with different masses even if the scale \(\mu\) and the coupling \(\alpha_{s}\) are the same for all of them. Figure 2: Dispersion relations solving the gauge–boson gap equation (17) for different \(SU(N)\), all with the same strong coupling constant \(\alpha_{s}=1\) in Eq. (21), with cutoffs \(k_{1}=10\) GeV and \(k_{2}=20\) GeV. The top plot (regularized but unrenormalized equation) shows two bunches of three functions \(E(k)\), with the lower bunch employing the regulator \(k_{1}\) and the upper one using \(k_{2}\); in both cases the symmetry groups correspond to \(SU(3)\), \(SU(4)\) and \(SU(5)\). The bottom plot shows the same solutions but now with a MOM subtraction and renormalization scale of \(\mu=9\) GeV. 
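To make the fixed-point nature of this construction concrete, the following NumPy sketch iterates the MOM-subtracted, colour-symmetric gap equation with the Coulomb kernel of Eq. (21) for \(SU(3)\) (\(C_{G}=3\), \(\alpha_{s}=1\), \(\mu=9\) GeV, as in Fig. 2). The momentum grid, the damped update, the starting guess and the dropping of the single grid point at the integrable \(\mathbf{k}=\mathbf{q}\) logarithm are our own numerical choices rather than the scheme detailed in the paper's appendix, so the printed mass should be read as indicative only.

```python
import numpy as np

alpha_s, C_G, mu = 1.0, 3.0, 9.0              # SU(3): C_G = N = 3; all scales in GeV
k = np.linspace(0.05, 20.0, 400)               # momentum grid (our choice, not the paper's)
dk = k[1] - k[0]
i_mu = int(np.argmin(np.abs(k - mu)))          # grid point used as the subtraction scale

def v_eff(q, kk):
    """Angle-integrated Coulomb kernel of Eq. (21)."""
    with np.errstate(divide="ignore"):
        log = np.where(kk == q, 0.0, np.log(((kk - q) / (kk + q)) ** 2))
    return 4 * np.pi * alpha_s * (0.5 / q**2 + 0.5 / kk**2
                                  + (kk**4 + 6 * kk**2 * q**2 + q**4)
                                  / (8 * kk**3 * q**3) * log)

V = np.array([v_eff(q, k) for q in k])         # V[i] = V_eff(k, q_i) over the whole grid

E = np.sqrt(k**2 + 0.5**2)                     # starting guess: a 0.5 GeV mass
for _ in range(300):
    E[i_mu] = mu                               # MOM condition E(mu) = mu
    E2_new = np.empty_like(E)
    for i, q in enumerate(k):
        bracket = V[i] * (E**2 - E[i]**2) - V[i_mu] * (E**2 - E[i_mu]**2)
        bracket[i] = 0.0                       # the (E_k^2 - E_q^2) factor makes the
                                               # log singularity at k = q integrable
        E2_new[i] = (E[i_mu]**2 + q**2 - k[i_mu]**2
                     - C_G / 4.0 * np.sum(k**2 / E * bracket) * dk / (2 * np.pi)**2)
    E_new = np.sqrt(np.clip(E2_new, 1e-6, None))
    if np.max(np.abs(E_new - E)) < 1e-6:
        E = E_new
        break
    E = 0.5 * E + 0.5 * E_new                  # damped fixed-point update

print("dynamically generated mass  m = E(k -> 0) ~", round(float(E[0]), 3), "GeV")
```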
We will, for the sake of simplicity, explore the partition of the \(N^{2}-1\) gluons of \(SU(N)\) into two subsets, one with \(n\) lighter gluons and another, containing the rest of them, heavier. We need a bit more notation to distinguish the sets, and have opted for lowercase letters \(a,b,c,d...=1,...,n\) to denote the colours of the lighter (\(L\)) gluons, whose dispersion relation shall be written as \(\omega_{\bf q}\), and uppercase letters \(A,B,C,D...=n+1,...,N^{2}-1\) for the heavier (\(H\)) ones, with dispersion relation naturally chosen as the capital letter \(\Omega_{\bf q}\). To refer to all the colours simultaneously we adopt the Greek indices \(\alpha,\beta,\gamma,\delta...=1,...,N^{2}-1\). The gap equation (15) now formally separates into a coupled system for the two types of gluons, \[(\omega_{\bf q}^{d})^{2}=|{\bf q}|^{2}-\frac{1}{4}\int_{0}^{\infty}\frac{d|{\bf k}|}{(2\pi)^{2}}|{\bf k}|^{2}\hat{V}_{\rm eff}({\bf k},{\bf q})\sum_{\alpha}\] \[\left[\sum_{b}(f^{\alpha bd})^{2}\frac{(\omega_{\bf k}^{b})^{2}-(\omega_{\bf q}^{d})^{2}}{\omega_{\bf k}^{b}}+\sum_{B}(f^{\alpha Bd})^{2}\frac{(\Omega_{\bf k}^{B})^{2}-(\omega_{\bf q}^{d})^{2}}{\Omega_{\bf k}^{B}}\right] \tag{23}\] \[(\Omega_{\bf q}^{D})^{2}=|{\bf q}|^{2}-\frac{1}{4}\int_{0}^{\infty}\frac{d|{\bf k}|}{(2\pi)^{2}}|{\bf k}|^{2}\hat{V}_{\rm eff}({\bf k},{\bf q})\sum_{\alpha}\] \[\left[\sum_{B}(f^{\alpha BD})^{2}\frac{(\Omega_{\bf k}^{B})^{2}\!-\!(\Omega_{\bf q}^{D})^{2}}{\Omega_{\bf k}^{B}}\!+\!\sum_{b}(f^{\alpha bD})^{2}\frac{(\omega_{\bf k}^{b})^{2}\!-\!(\Omega_{\bf q}^{D})^{2}}{\omega_{\bf k}^{b}}\right]\;. \tag{24}\] Each integral contains two terms between brackets. The first depends only on the dispersion relation being solved for on the left hand side of the equation (diagonal terms), be it \(\omega\) for the light bosons or \(\Omega\) for the heavy ones. The second term depends on both dispersion relations and couples the equations, pushing the solution towards the symmetric point \(\omega^{a}=\Omega^{A}\) (this can be seen, with a little patience, from the combination of signs). More compactly, and focusing on the colour structure of these equations, this system reads \[(\omega_{\bf q}^{d})^{2} =|{\bf q}|^{2}-\frac{1}{4}\int\frac{d|{\bf k}|}{(2\pi)^{2}}|{\bf k}|^{2}\hat{V}_{\rm eff}({\bf k},{\bf q})\times\] \[\times\left({\rm LL}\sum_{\alpha,b}\left(f^{\alpha bd}\right)^{2}+{\rm LH}\sum_{\alpha,B}\left(f^{\alpha Bd}\right)^{2}\right)\;,\] \[(\Omega_{\bf q}^{D})^{2} =|{\bf q}|^{2}-\frac{1}{4}\int\frac{d|{\bf k}|}{(2\pi)^{2}}|{\bf k}|^{2}\hat{V}_{\rm eff}({\bf k},{\bf q})\times\] \[\times\left({\rm HL}\sum_{\alpha,b}\left(f^{\alpha bD}\right)^{2}+{\rm HH}\sum_{\alpha,B}\left(f^{\alpha BD}\right)^{2}\right), \tag{25}\] where the symbols LL, LH, HL and HH stand for the corresponding bracketed ratios of Eqs. (23)-(24). The observation that gives this paper its title is that, if the combinations of structure constants in these coupling terms vanished, which requires all constants of the forms \(f^{\alpha Bd}\) and \(f^{\alpha bD}\) to vanish, the two equations would completely decouple. We would then have two copies of Eq. (15), one for the light dispersion relation \(\omega(q)\) and one for the heavier \(\Omega(q)\). The mass in each case is proportional to the size of the factors \(\sum_{\alpha,b}(f^{\alpha bd})^{2}\) or \(\sum_{\alpha,B}(f^{\alpha BD})^{2}\) that appear in the diagonal terms (not all four terms can simultaneously vanish in a non-Abelian gauge theory, since they are sums of squares and some structure constants must be nonzero).
The vanishing of the two coupling terms is precisely what happens if the split of the gauge bosons is done along the lines of two mutually commuting subalgebras, so that the algebra corresponding to the total group is not simple but the direct sum of two ideals, in group-theory parlance (\(\mathfrak{su}(N)\oplus\mathfrak{su}(M)\oplus...\)). We have chosen to split the system into two sets, but the reader can easily note that a larger number of dispersion relations \(\omega_{1},\omega_{2},\ldots\omega_{j}\) is possible, in which case the system of equations would further split into several. For each ideal in which we can decompose the algebra, we will obtain one decoupled equation that will provide a different gauge-boson mass, as long as the dimensions are different, \(N\neq M\), which yields different colour factors. This splitting happens even in the presence of the same effective potential, coupling constant and renormalization scale, and is entirely driven by the colour factors. In the case of simple Lie algebras such as \(\mathfrak{su}(N)\), without proper ideals, this decoupling cannot take place (because no subset of generators of the algebra can commute with all of those outside the subset, so some of the mixed-index \(f\) structure constants must be different from zero), and the various gap equations are necessarily coupled to one another. To this we turn in the next section.

## 5 Global colour breaking not obvious for a simple Lie algebra

In this section we report an initial exploration of the new system of coupled equations for a couple of low-dimensional Lie algebras. Because there is no ideal, at least three of the four terms contribute, independently of how the partition of the \(N^{2}-1\) gluons is taken. Thus, the system remains coupled. In a first exercise, we attempt to break the symmetry by hand. In a totally artificial manner, we include a multiplicative factor in the off-diagonal terms of Eq. (24) that reduces their intensity. The outcome is shown in figure 3, both for a strong artificial suppression by a factor of \(1/10\) and for a more modest reduction factor of \(8/10\). In both cases, the global colour symmetry is broken and the system converges to two dispersion relations with different mass. The plots correspond to a global \(SU(3)\) colour group in which the gluons of an \(SU(2)\) subgroup remain light (dispersion relation \(\omega\)) and the rest acquire a heavier dispersion relation \(\Omega\). However, if we reset the equation to its original form, without artificial factors explicitly breaking the symmetry, it is not so easy to find a solution with spontaneous breaking of a simple group. We have not yet deployed this project to a supercomputer; on a tabletop machine we have been able to quickly examine the following symmetry breaking chains: \(SU(2)\to U(1)\), \(SU(3)\to SU(2)\), \(SU(4)\to SU(3)\), \(SU(4)\to SU(2)\) and \(SU(5)\to SU(4)\). For example, in this last case, of the \(5^{2}-1=24\) bosons, \(4^{2}-1=15\) were candidates to remain light and the remaining 9 were candidates to become heavy. As an example analysis, we provide detail for a partition of the 8 gluons of \(SU(3)\) into a group of 3 and a group of 5. Depending on how the first group is chosen, its three gluons may correspond to a subgroup \(SU(2)\). Table 1 in the appendix lists the sums of squared structure constants of the nondiagonal, coupling terms, those multiplying LH and HL in Eq. (25), for the possible combinations of three gluons chosen among the eight of \(SU(3)\).
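For orientation, the colour sums entering Eq. (25) are easy to evaluate once the structure constants are available as an array. A minimal sketch (our own illustration, assuming a NumPy array `f[a,b,c]` of the \(f^{abc}\), such as the one built in the appendix, and zero-based colour indices):

```python
import numpy as np

def colour_sums(f, light):
    """For each gluon d, return (diagonal sum, coupling sum) as they enter Eq. (25).

    f     : array of shape (dim, dim, dim) holding the structure constants f^{abc}
    light : list of zero-based colour indices chosen as the light set
    """
    dim = f.shape[0]
    heavy = [c for c in range(dim) if c not in light]
    f2 = f ** 2
    sums = {}
    for d in range(dim):
        same, other = (light, heavy) if d in light else (heavy, light)
        diag = f2[:, same, d].sum()   # multiplies the diagonal (LL or HH) term
        coup = f2[:, other, d].sum()  # multiplies the coupling (LH or HL) term
        sums[d] = (diag, coup)
    return sums
```

Whenever all the coupling sums of one of the two sets vanish, the corresponding equations decouple; for the \(SU(3)\) partitions listed in Table 1 this never happens, as expected for a simple algebra.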
As an example, reading the first row of the table, we observe that the coupling of gluon number 8 is null. This means that, initially, this gluon's dispersion relation does not converge towards the others. However, the rest of the system is coupled to it in a nonvanishing way (because the corresponding generator \(T^{8}\) does not commute with all the others, nor does it belong to an ideal of the underlying algebra), so that the system evolves towards the symmetric solution. This can be observed in figure 4. To start the iteration we have chosen an initial mass of 42.4 MeV for the first three gluons and 848 MeV for the rest (the precise choice of these numbers is immaterial; they have to do with the units employed in the program, 424 MeV being the scale of the string tension when the linear potential is active, which is not the case in this purely Coulombic computation). After initial large jumps, which even change the second derivative of the dispersion relations in intermediate steps, the gluons converge towards the same mass, in this case in the 1 GeV range. One could then conjecture that the system of Eq. (24) has as its only fixed point \(\omega^{a}=\Omega^{A}=\omega\) for all gluons, due to some symmetry upon reorganizing the \(f^{abc}\) structure constants among different choices of the boson partition. Whatever this might be, we have not yet been able to identify it.

Figure 3: We artificially attenuate the nondiagonal coupling among dispersion relations in Eq. (24) to show the reaction of the system, which breaks global colour. Upper plot: the multiplicative factor is 0.1, greatly damping the strength of the coupling and thus showing enhanced symmetry breaking. Lower plot: the factor is only 0.8, and the explicit symmetry breaking is still visible. The symmetry breaking pattern is \(SU(3)\to SU(2)\).

## 6 Outlook

We have worked out the known gap equation for gluodynamics in the North Carolina State family of global colour models inspired by Coulomb gauge QCD, then extended it to allow for the possibility of different bosons acquiring different mass in a spontaneous way. This leads to a coupled system of equations and we have performed a first investigation of its colour structure. For simplicity, we have limited ourselves here to splitting the gluons into two groups (light and heavy), but we have also made a few exploratory runs in which each of the gluons might acquire its own different mass. Computations here are more numerically costly, as several independent functions have to be simultaneously determined (and note that their number grows as \(N^{2}-1\) with the dimension of the group). We have found nothing different to report, so we omit the discussion for the time being. What would be extremely interesting is to find an avenue for spontaneous symmetry breaking among simple groups, such as \(SU(N)\to SU(M)\), but we have not yet identified an example where the system converges to two sets of gluons with different mass in this form. If this were possible, one could do away with the complicated Higgs boson representations in Grand Unified Theories that seem rather ad hoc. Our exploratory study has been limited in scope and we have only examined a few group breaking patterns. Because we do not have a clear proof that the system must remain symmetric for a simple group either, we must limit ourselves to leaving the question open for future investigation. With more computing power we hope to be able to systematize the choice of the gluons that remain light vs.
those that become heavy, and to extend the system beyond binary (with three, four or more types of dispersion relations carrying different gluon masses). The current findings do show how, due to the colour factors alone (with equal couplings for the two groups), the gauge bosons in an \(SU(N)\times SU(M)\) theory with \(N\neq M\) acquire different fixed-gauge masses. This entails a breaking of possible accidental symmetries: for example, fermions in the fundamental representation carrying indices for both groups, \(\psi_{nm}\), could be seen, stretching the index notation (such as 1=red-up, 2=red-down, 3=blue-up, etc.), as belonging to the fundamental representation of \(SU(N\times M)\). This global symmetry is not gauged in the definition of the Lagrangian; rather, it would be "accidental". It ceases to make sense when the masses of the gauge bosons of the two subgroups are different, so it is broken without resort to a Higgs boson multiplet. As argued by Dobson and collaborators [12; 13], the Higgs in theories more general than the SM should best be defined in terms of composite, gauge-invariant fields. It is not inconceivable that in strongly coupled theories the equivalent field could be made from modes in the gauge-boson spectrum itself, due to nonlinearities.

Figure 4: Intermediate steps towards convergence for the dispersion relations of \(SU(3)\) gluons with initially disparate masses. Shown are calculations at 1, at 100 and at 200 iterations. The gluon numbered as 8 is seen to quickly decouple from the rest even in the upper diagram, the first iteration. With 100 iterations, it is clear that the system is already converging towards the symmetric solution.

## Appendix

### Numerical solution of the gap equation

In this first appendix we comment on the numerical solution of Eq. (15). The method is easily extended to the system (25). We proceed by iteration from an initial guess \(\tilde{E}^{d}(q)\) for the dispersion relation that differs from the real function by \(E^{d}(q)=\tilde{E}^{d}(q)+\epsilon^{d}(q)\) (where we define \(q\equiv|\mathbf{q}|\)). This we substitute in Eq. (22) to isolate \(\epsilon^{d}(q)\) to linear order in the Taylor expansion (we here omit the renormalization subtraction for conciseness, but it has been programmed), \[\tilde{E}^{d}(q)^{2}-q^{2}+\frac{1}{4}\sum_{a,b}f^{abd}f^{abd}\int_{0}^{\infty}\frac{dk}{(2\pi)^{2}}k^{2}\hat{V}_{\text{eff}}(k,q)\times\\ \times\left(\frac{\tilde{E}^{b}(k)^{2}-\tilde{E}^{d}(q)^{2}}{\tilde{E}^{b}(k)}\right)\approx\\ -2\tilde{E}^{d}(q)\epsilon^{d}(q)-\frac{1}{4}\sum_{a,b}f^{abd}f^{abd}\int_{0}^{\infty}\frac{dk}{(2\pi)^{2}}k^{2}\hat{V}_{\text{eff}}(k,q)\times\\ \times\left[\left(\frac{\tilde{E}^{b}(k)^{2}+\tilde{E}^{d}(q)^{2}}{\tilde{E}^{b}(k)^{2}}\epsilon^{b}(k)\right)-2\frac{\tilde{E}^{d}(q)\epsilon^{d}(q)}{\tilde{E}^{b}(k)}\right]. \tag{26}\] It is convenient to introduce auxiliary functions, \[b^{d}(q) :=\tilde{E}^{d}(q)^{2}-q^{2}+\frac{1}{4}\sum_{a,b}f^{abd}f^{abd}\times\] \[\times\int_{0}^{\infty}\frac{dk}{(2\pi)^{2}}k^{2}\hat{V}_{\text{eff}}(k,q)\left(\frac{\tilde{E}^{b}(k)^{2}-\tilde{E}^{d}(q)^{2}}{\tilde{E}^{b}(k)}\right)\] \[A^{db}(q,k) :=-2\tilde{E}^{b}(k)\delta^{bd}\delta(k-q)-\] \[-\frac{1}{4}\sum_{a}f^{abd}f^{abd}\frac{k^{2}}{(2\pi)^{2}}\hat{V}_{\text{eff}}(k,q)\times\] \[\times\left[\left(\frac{\tilde{E}^{b}(k)^{2}+\tilde{E}^{d}(q)^{2}}{\tilde{E}^{b}(k)^{2}}\right)-2\frac{\tilde{E}^{d}(q)\delta^{bd}\delta(k-q)}{\tilde{E}^{b}(k)}\right]\, \tag{27}\] to shorten notation.
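Before casting this into a linear system, it may help to see the overall iteration loop. The following deliberately simplified sketch iterates the colour-summed, MOM-subtracted equation (22) directly with damping, rather than performing the Newton-type update described next; the grid, damping factor, initial guess and the renormalization condition \(E(\mu)=\mu\) imposed at the subtraction point are our own choices, and `v_eff_coulomb` is the helper sketched after Eq. (21):

```python
import numpy as np

C_G = 3.0                               # adjoint Casimir: N for SU(N), here SU(3)
k = np.linspace(0.05, 20.0, 200)        # momentum grid in GeV
dk = np.gradient(k)
i_mu = 180                              # grid index of the subtraction scale mu
mu = k[i_mu]
E = np.sqrt(k**2 + 0.5**2)              # initial guess with a 0.5 GeV mass

def gap_rhs(E):
    """One evaluation of the right-hand side of Eq. (22), colour-summed, imposing E(mu) = mu."""
    E_new_sq = np.empty_like(E)
    for i, q in enumerate(k):
        integral = 0.0
        for j, kk in enumerate(k):
            if j == i or j == i_mu:     # skip the singular kernel points on the grid
                continue
            integral += dk[j] * kk**2 / ((2 * np.pi)**2 * E[j]) * (
                v_eff_coulomb(kk, q) * (E[j]**2 - E[i]**2)
                - v_eff_coulomb(kk, mu) * (E[j]**2 - mu**2))
        E_new_sq[i] = q**2 - C_G / 4.0 * integral
    return np.sqrt(np.maximum(E_new_sq, 1e-6))

for _ in range(100):                    # damped fixed-point iteration (slow but transparent)
    E = 0.8 * E + 0.2 * gap_rhs(E)

print(E[0])                             # E(q -> 0): the dynamically generated mass
```

The linearized update that follows is what is actually used to solve the equation; the sketch above only exposes the structure of the subtracted kernel.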
Eq. (26) is then recognisable as a linear system \[b^{d}(q)=\sum_{b}\int_{0}^{\infty}dk\,A^{db}(q,k)\epsilon^{b}(k). \tag{28}\] We then discretise momenta to make the expression amenable to automation, \[b_{i}^{d}=\sum_{b}\sum_{j}\Delta k_{j}A_{ij}^{db}\epsilon_{j}^{b}. \tag{29}\] As \(E(k)\) approaches its linear asymptote for large \(k\) and its nontrivial structure is at low \(k\), we skew the discrete grid to have more points towards low \(k\) with the help of a change of variable that introduces a Jacobian. Here the algorithms for Eqs. (15) and (25) differ, as the global colour symmetry (all gluons have equal mass) allows the first to take a simpler form. Because all \(\omega^{a}\) are equal, they can be factored out of the colour factors, so that the colour indices can be summed with the closure relation. The auxiliary quantities of Eq. (27) then read \[b(q) :=b^{d}(q)=\tilde{E}(q)^{2}-q^{2}+\frac{C_{G}}{4}\int_{0}^{\infty}\frac{dk}{(2\pi)^{2}}k^{2}\times\] \[\times\hat{V}_{\text{eff}}(k,q)\left(\frac{\tilde{E}(k)^{2}-\tilde{E}(q)^{2}}{\tilde{E}(k)}\right)\] \[A(q,k) :=\sum_{b}A^{db}(q,k)=-2\tilde{E}(k)\delta(k-q)-\frac{C_{G}}{4}\frac{k^{2}}{(2\pi)^{2}}\times\] \[\times\hat{V}_{\text{eff}}(k,q)\left[\left(\frac{\tilde{E}(k)^{2}+\tilde{E}(q)^{2}}{\tilde{E}(k)^{2}}\right)-2\frac{\tilde{E}(q)\delta(k-q)}{\tilde{E}(k)}\right]. \tag{30}\] The linear system from which each Newton step of the algorithm is extracted then takes the form \[b(q)=\int_{0}^{\infty}dk\,A(q,k)\epsilon(k)\, \tag{31}\] or, discretized, \[b_{i}=\sum_{j}\Delta k_{j}A_{ij}\epsilon_{j}\,, \tag{32}\] which is solved for \(\epsilon\), allowing for the update of \(E(k)\). In the case of nondiagonal colour couplings, Eq. (29) has to be addressed instead.

### Structure constants for the generic Lie algebra \(\mathfrak{su}(N)\)

The group \(SU(N)\) has \(N^{2}-1\) generators that commute according to the rules of the \(\mathfrak{su}(N)\) algebra, \([T^{a},T^{b}]=if^{abc}T^{c}\), with structure constants \(f^{abc}\). The following formulae give these structure constants in a direct way which makes them apt for computer programming, as necessary to solve, for example, Eq. (13). The \(N^{2}-1\) generators of \(SU(N)\) can be split into three subsets. First, there are the \(N-1\) diagonal matrices of the Cartan subalgebra, which commute among themselves and thus provide simultaneous good quantum numbers (such as hypercharge and the third component of isospin in the case of \(SU(3)\)). Then, there are \(N(N-1)/2\) antisymmetric matrices and \(N(N-1)/2\) symmetric but nondiagonal matrices [29]. We then split the colour index of the adjoint representation into three distinct ones, one for each of these types of matrices: \(D\) for diagonal, \(S\) for symmetric and \(A\) for antisymmetric. When acting on the fundamental representation of the fermions they take the explicit form [29]: \[T_{S_{nm}} =\frac{1}{2}\big{(}\left|m\right\rangle\left\langle n\right|+\left|n\right\rangle\left\langle m\right|\big{)}\,, \tag{33}\] \[T_{A_{nm}} =\frac{1}{2i}\big{(}\left|m\right\rangle\left\langle n\right|-\left|n\right\rangle\left\langle m\right|\big{)}\,, \tag{34}\] \[T_{D_{n}} =\frac{1}{\sqrt{2n(n-1)}}\left(\sum_{k=1}^{n-1}\left|k\right\rangle\left\langle k\right|+\left(1-n\right)\left|n\right\rangle\left\langle n\right|\right)\;. \tag{35}\]
The \(n\) and \(m\) subindices take \(N\) different values, the size of the fundamental representation of \(\mathfrak{su}(N)\). We can retrieve the values of the various subindices from the closed formulae \[S_{nm} =n^{2}+2(m-n)-1\] \[A_{nm} =n^{2}+2(m-n)\] \[D_{n} =n^{2}-1\;,\] which guarantee that no value is repeated and that all values from \(1\) to \(N^{2}-1\) are covered when additionally imposing \(1\leq m<n\leq N\). Thus, any adjoint index \(a\) corresponds to one of \(\{S_{nm},A_{nm},D_{n}\}\), and the correspondence between these and the usual indices is bijective. For example, in \(SU(2)\) one has three generators, and with this convention the symmetric one is \(T_{S_{21}}=T_{1}\) (Pauli's \(\sigma_{x}/2\) matrix, essentially), the antisymmetric one is \(T_{A_{21}}=T_{2}\) (that is, \(\sigma_{y}/2\)) and the diagonal one is \(T_{D_{2}}=T_{3}\) (or \(\sigma_{z}/2\)). If, in turn, we now apply these indexing rules to \(SU(3)\), we reproduce Gell-Mann's matrices in the usual order, with diagonal \(T_{3}\) and \(T_{8}\). This indexing system and the explicit expression in terms of commutators of the generators (normalized as \(Tr(T^{a}T^{b})=\delta^{ab}/2\)), \[f^{abc}=-2i\,\text{Tr}\big{(}[T^{a},T^{b}]\,T^{c}\big{)}\;, \tag{36}\] which makes their total antisymmetry explicit, allow one to find directly programmable expressions, reported by Bossion and Huo [30], \[\begin{array}{c}f_{S_{nm}S_{kn}A_{km}}=f_{S_{nm}S_{nk}A_{km}}=f_{S_{nm}S_{km}A_{km}}=f_{A_{nm}A_{kn}A_{km}}=\frac{1}{2}\\ \\ f_{S_{nm}A_{nm}D_{m}}=-\sqrt{\frac{m-1}{2m}}\\ \\ f_{S_{nm}A_{nm}D_{n}}=\sqrt{\frac{n}{2(n-1)}}\\ \\ f_{S_{nm}A_{nm}D_{k}}=\sqrt{\frac{1}{2k(k-1)}}\quad m<k<n\end{array} \tag{37}\] (other combinations of the different symmetries are null unless obtained by permutation and antisymmetry; for example, if \(f_{123}=1/2\) then \(f_{231}=f_{312}=-f_{321}=-f_{132}=-f_{213}=1/2\)).

### Sums of squared structure constants for SU(3), partitioning it into 3-gluon and 5-gluon subsets

Here we give, as an example of the combinations of \(\sum f^{2}\) with various indices that appear in Eq. (25), an exhaustive list for the case in which the eight gluons of \(SU(3)\) are split into three light ones (which, in certain cases but not necessarily, can generate an \(SU(2)\) subgroup) and five heavier ones. The number of combinations of the eight gluons taken three at a time is \[\begin{pmatrix}8\\ 3\end{pmatrix}=56\;,\] and we list them explicitly in table 1 and its continuation.
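These formulae are straightforward to turn into code, which is also how sums of the kind listed in Table 1 can be generated. A minimal sketch (our own illustration in Python/NumPy, storing the adjoint index zero-based according to the conventions above):

```python
import numpy as np

def su_generators(N):
    """Generalized Gell-Mann generators T^a of Eqs. (33)-(35), with Tr(T^a T^b) = delta^{ab}/2."""
    T = np.zeros((N*N - 1, N, N), dtype=complex)
    for n in range(2, N + 1):
        for m in range(1, n):
            S = np.zeros((N, N), dtype=complex)
            S[m-1, n-1] = S[n-1, m-1] = 0.5
            A = np.zeros((N, N), dtype=complex)
            A[m-1, n-1] = 1 / (2j)
            A[n-1, m-1] = -1 / (2j)
            T[n**2 + 2*(m-n) - 2] = S      # slot S_nm = n^2 + 2(m-n) - 1, stored zero-based
            T[n**2 + 2*(m-n) - 1] = A      # slot A_nm = n^2 + 2(m-n), stored zero-based
        D = np.zeros((N, N), dtype=complex)
        D[:n-1, :n-1] = np.eye(n-1)
        D[n-1, n-1] = 1 - n
        T[n**2 - 2] = D / np.sqrt(2*n*(n-1))   # slot D_n = n^2 - 1, stored zero-based
    return T

def structure_constants(T):
    """f^{abc} = -2i Tr([T^a, T^b] T^c), Eq. (36)."""
    dim = T.shape[0]
    f = np.zeros((dim, dim, dim))
    for a in range(dim):
        for b in range(dim):
            comm = T[a] @ T[b] - T[b] @ T[a]
            for c in range(dim):
                f[a, b, c] = np.real(-2j * np.trace(comm @ T[c]))
    return f

T = su_generators(3)
f = structure_constants(T)
print(round(f[0, 1, 2], 3))   # f^{123} = 1.0 in this normalization (Gell-Mann for SU(3))
```

The resulting array `f` can be fed to the `colour_sums` sketch shown earlier to evaluate the diagonal and coupling sums for any chosen three-gluon subset.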
\begin{table} \begin{tabular}{||c|c|c||c|c|c|c|c||} \hline **1** & **2** & **3** & **4** & **5** & **6** & **7** & **8** \\ \hline 1.00 & 1.00 & 1.00 & 0.75 & 0.75 & 0.75 & 0.75 & 0.00 \\ \hline **1** & **2** & **4** & **3** & **5** & **6** & **7** & **8** \\ \hline 1.75 & 1.75 & 2.50 & 2.25 & 1.50 & 0.75 & 0.75 & 0.75 \\ \hline **1** & **2** & **5** & **3** & **4** & **6** & **7** & **8** \\ \hline 1.75 & 1.75 & 2.50 & 2.25 & 1.50 & 0.75 & 0.75 & 0.75 \\ \hline **1** & **2** & **6** & **3** & **4** & **5** & **7** & **8** \\ \hline 1.75 & 1.75 & 2.50 & 2.25 & 1.50 & 0.75 & 0.75 & 0.75 \\ \hline 1.75 & 1.75 & 2.50 & 2.25 & 1.50 & 0.75 & 0.75 & 0.75 \\ \hline 1.75 & 1.75 & 2.50 & 2.25 & 1.50 & 0.75 & 0.75 & 0.75 \\ \hline 1.75 & 1.75 & 2.50 & 2.25 & 1.50 & 0.75 & 0.75 & 0.75 \\ \hline **1** & **3** & **6** & **2** & **4** & **5** & **7** & **8** \\ \hline 1.75 & 1.75 & 2.50 & 2.25 & 0.75 & 0.75 & 1.50 & 0.75 \\ \hline **1** & **3** & **7** & **2** & **4** & **5** & **6** & **8** \\ \hline 1.75 & 1.75 & 2.50 & 2.25 & 0.75 & 0.75 & 1.50 & 0.75 \\ \hline **1** & **3** & **7** & **2** & **4** & **5** & **6** & **8** \\ \hline 1.75 & 1.75 & 2.50 & 2.25 & 0.75 & 0.75 & 1.50 & 0.75 \\ \hline **1** & **3** & **8** & **2** & **4** & **5** & **6** & **7** \\ \hline 2.00 & 2.00 & 3.00 & 2.00 & 1.25 & 1.25 & 1.25 & 1.25 \\ \hline **1** & **4** & **5** & **2** & **3** & **6** & **7** & **8** \\ \hline 2.50 & 1.75 & 1.75 & 1.50 & 1.50 & 0.75 & 0.75 & 1.50 \\ \hline **1** & **4** & **6** & **2** & **3** & **5** & **7** & **8** \\ \hline 2.50 & 2.50 & 2.50 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 \\ \hline **1** & **5** & **8** & **2** & **3** & **5** & **6** & **7** \\ \hline 2.50 & 2.50 & 1.75 & 1.75 & 1.50 & 1.50 & 0.75 & 1.50 \\ \hline **1** & **4** & **6** & **2** & **3** & **5** & **7** & **8** \\ \hline 2.50 & 2.50 & 2.50 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 \\ \hline 2.50 & 2.50 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 \\ \hline **1** & **5** & **7** & **2** & **3** & **4** & **6** & **8** \\ \hline 2.50 & 2.50 & 2.50 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 \\ \hline **1** & **5** & **8** & **2** & **3** & **5** & **6** & **7** \\ \hline 2.50 & 2.50 & 1.75 & 1.75 & 1.50 & 1.50 & 0.75 & 0.75 & 1.50 \\ \hline \end{tabular} \end{table} Table 1: Sums of squared structure constants necessary for Eq. (25). The rows alternate in shade. The grey shaded ones indicate the \(SU(3)\) gluon combination, with the first three gluons corresponding to the light ones with colour index \(d\) and the following five ones to the heavy ones with index \(D.\) The row with white background immediately below lists, in the first three columns, the corresponding \(\sum_{\alpha,B}\left(f^{ABd}\right)^{2}\) to each \(d\) entry from the row above. The remaining columns give \(\sum_{\alpha,b}\left(f^{abD}\right)^{2}\), also associated to the index \(D\) immediately above each entry. As can be seen, the off-diagonal combinations do not all vanish simultaneously, meaning that the Lie algebra has no ideals of either dimension 3 nor 5 (we of course know that the \(\mathfrak{su}(3)\) Lie algebra has no ideal of any dimension, but it is reassuring to see this appear in the tabulated data. One might entertain the hope that a clever way of splitting the structure constants could bring about a breaking of the global symmetry even for a simple Lie algebra, perhaps of large dimension, but we have not found an example yet, nor do we know of a theorem (such as the no-go theorem of Vafa and Witten in the fermion sector) that forbids it at this point. 
\begin{table} \begin{tabular}{|c|c|c||c|c|c|c|c|} \hline **1** & **6** & **8** & **2** & **3** & 4 & **5** & **7** \\ \hline 2.75 & 2.00 & 2.25 & 1.25 & 1.25 & 1.25 & 1.25 & 2.00 \\ \hline **1** & **7** & **8** & **2** & **3** & 4 & **5** & **6** \\ \hline 2.75 & 2.00 & 2.25 & 1.25 & 1.25 & 1.25 & 2.00 \\ \hline **2** & **3** & **4** & **1** & **5** & **6** & **7** & **8** \\ \hline 1.75 & 1.75 & 2.50 & 2.25 & 1.50 & 0.75 & 0.75 & 0.75 \\ \hline **2** & **3** & **5** & **1** & **4** & **6** & **7** & **8** \\ \hline 1.75 & 1.75 & 2.50 & 2.25 & 1.50 & 0.75 & 0.75 & 0.75 \\ \hline **2** & **4** & **5** & **1** & **3** & **6** & **7** & **8** \\ \hline 2.50 & 1.75 & 1.75 & 1.50 & 1.50 & 0.75 & 0.75 & 1.50 \\ \hline **2** & **4** & **6** & **1** & **3** & **5** & **7** & **8** \\ \hline 2.50 & 2.50 & 2.50 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 \\ \hline **2** & **5** & **8** & **1** & **3** & **6** & **8** \\ \hline **2** & **5** & **6** & **1** & **3** & **4** & **7** & **8** \\ \hline 2.50 & 2.50 & 2.50 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 \\ \hline **2** & **5** & **7** & **1** & **3** & **4** & **6** & **8** \\ \hline 2.50 & 2.50 & 2.50 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 \\ \hline **2** & **5** & **8** & **1** & **3** & **4** & **6** & **7** \\ \hline 2.75 & 2.00 & 2.25 & 1.25 & 1.25 & 2.00 & 1.25 & 1.25 \\ \hline **2** & **6** & **8** & **1** & **3** & **4** & **5** & **7** \\ \hline 2.75 & 2.00 & 2.25 & 1.25 & 1.25 & 1.25 & 1.25 & 2.00 \\ \hline **2** & **7** & **8** & **1** & **3** & **4** & **5** & **6** \\ \hline 2.75 & 2.00 & 2.25 & 1.25 & 1.25 & 1.25 & 1.25 & 2.00 \\ \hline **3** & **4** & **5** & **1** & **2** & **6** & **7** & **8** \\ \hline 2.50 & 1.75 & 1.75 & 1.50 & 1.50 & 0.75 & 0.75 & 1.50 \\ \hline 2.50 & 1.75 & 1.75 & 1.50 & 1.50 & 0.75 & 0.75 & 1.50 \\ \hline 2.75 & 2.00 & 2.25 & 1.25 & 1.25 & 1.25 & 1.25 & 2.00 \\ \hline **3** & **4** & **5** & **1** & **2** & **6** & **7** & **8** \\ \hline 2.50 & 1.75 & 1.75 & 1.50 & 1.50 & 0.75 & 0.75 & 1.50 \\ \hline 2.50 & 1.75 & 1.75 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 \\ \hline 2.75 & 2.00 & 2.25 & 1.25 & 1.25 & 1.25 & 1.25 & 2.00 \\ \hline 2.50 & 1.75 & 1.75 & 1.50 & 1.50 & 0.75 & 0.75 & 1.50 \\ \hline 2.50 & 2.50 & 2.50 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 \\ \hline **3** & **4** & **8** & **1** & **2** & **5** & **6** & **8** \\ \hline 2.50 & 2.50 & 2.50 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 \\ \hline **3** & **4** & **8** & **1** & **2** & **5** & **6** & **7** \\ \hline 2.50 & 2.50 & 2.50 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 \\ \hline **3** & **4** & **8** & **1** & **2** & **5** & **6** & **7** \\ \hline 2.50 & 2.50 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 \\ \hline **3** & **4** & **8** & **1** & **2** & **5** & **6** & **7** \\ \hline 2.50 & 2.50 & 1.50 & 1.50 & 1.50 & 1.50 & 1.50 \\ \hline **3** & **4** & **8** & **1** & **2** & **5** & **6** & **8** \\ \hline 2.75 & 2.00 & 2.25 & 1.25 & 1.25 & 1.25 & 1.25 \\ \hline 2.50 & 1.75 & 1.75 & 1.50 & 1.50 & 0.75 & 0.75 & 1.50 \\ \hline **5** & **6** & **8** & **1** & **2** & **3** & **4** & **5** \\ \hline 2.00 & 2.00 & 1.50 & 0.50 & 0.50 & 2.00 & 2.00 \\ \hline **4** & **7** & **8** & **1** & **2** & **3** & **5** & **6** \\ \hline 2.00 & 2.00 & 1.50 & 0.50 & 0.50 & 2.00 & 2.00 \\ \hline **5** & **6** & **7** & **1** & **2** & **3** & **4** & **8** \\ \hline 2.50 & 1.75 & 1.75 & 0.75 & 0.75 & 0.75 & 1.50 & 2.25 \\ \hline **5** & **6** & **8** & **1** & **2** & **3** & **4** & **7** \\ \hline ## Acknowledgments The authors thank early conversations with Lucas Barbero. 
Work partially supported by the EU under grant 824093 (STRONG2020); the Spanish MICINN under grants PID2019-108655GB-I00/AEI/10.13039/501100011033 and PID2019-106080GB-C21; and the Univ. Complutense de Madrid under research group 910309 and the IPARCOS institute. This preprint has been issued with number IPARCOS-UCM-23-070.
2309.10776
EU law and emotion data
This article sheds light on legal implications and challenges surrounding emotion data processing within the EU's legal framework. Despite the sensitive nature of emotion data, the GDPR does not categorize it as special data, resulting in a lack of comprehensive protection. The article also discusses the nuances of different approaches to affective computing and their relevance to the processing of special data under the GDPR. Moreover, it points to potential tensions with data protection principles, such as fairness and accuracy. Our article also highlights some of the consequences, including harm, that processing of emotion data may have for individuals concerned. Additionally, we discuss how the AI Act proposal intends to regulate affective computing. Finally, the article outlines the new obligations and transparency requirements introduced by the DSA for online platforms utilizing emotion data. Our article aims at raising awareness among the affective computing community about the applicable legal requirements when developing AC systems intended for the EU market, or when working with study participants located in the EU. We also stress the importance of protecting the fundamental rights of individuals even when the law struggles to keep up with technological developments that capture sensitive emotion data.
Andreas Hauselmann, Alan M. Sears, Lex Zard, Eduard Fosch-Villaronga
2023-09-19T17:25:02Z
http://arxiv.org/abs/2309.10776v1
# EU law and emotion data ###### Abstract This article sheds light on the intricate legal implications and challenges surrounding emotion data processing within the EU data protection framework. Despite the sensitive nature of emotion data, the GDPR does not explicitly categorize it as special data, resulting in a lack of comprehensive protection. This legal ambiguity poses significant obstacles for affective computing system developers as they struggle to comply with the GDPR's requirements and ensure ethical practices. The article also discusses the nuances of different approaches in affective computing and their relevance to the GDPR and the introduction of biometric-based data in the AI Act proposal. Moreover, it highlights potential conflicts with GDPR principles, such as fairness and accuracy, and the limitations of the AI Act in addressing specific harmful uses of emotion data. Additionally, the article outlines the new obligations and transparency requirements introduced by the DSA for online platforms utilizing emotion data, making it crucial for the affective computing community to be well-informed in order to adhere to the regulations, maintain ethical and legal standards, and protect users' fundamental rights while developing emotion-sensing technologies for the EU market. emotions, emotion data, affective computing, data protection, privacy, accuracy, law, fairness, manipulation, transparency, autonomy. ## I Introduction Emotions are inherent to humanity and play a significant role in human behavior, communication, and interaction [1]. Until recently, emotions could not be captured and processed automatically; they were reserved for each individual and the people with whom they wanted to share them. With advancements in computing that relate to, arise from, or influence emotions [2] ('affective computing'), something once unimaginable has become a reality. Affective computing (AC) systems can now capture emotional data. Emotion data is information about a person's inner emotional state, such as their subjective response to a thing, person, or situation. It can encompass quantitative and qualitative data, such as physiological measurements, facial expressions, speech, and self-reports of feelings captured through technical means [3-4]. Thanks to AC, emotions became machine-readable. Emotion data detected by AC systems increasingly supports and informs ulterior decision-making processes in several fields, including marketing, healthcare, border control, and education, among others; it is often claimed that such use can provide valuable insight into how people feel and respond to different situations [5]. Emotions are complex--a person may not be able to express in words how they feel, and others may or may not understand their meaning. As one can imagine, capturing inner, subjective states through 'objective' technical means may be a difficult task that can lead to errors, as in the case of lie detectors [6] or gender classifier systems [7]. If the contested relationship between several physiological states and several emotions is assumed to connect with potential user behaviors directly, such inferences could lead to disastrous consequences depending on the application context, such as border control [8]. Cnturies ago, studies began to explore the biological and evolutionary underpinnings of emotions, including their influence on decision-making, memory, and learning [9-11]. 
Despite this, there is little understanding of how the law regulates emotion data and how this impacts the field of _affective computing_. With this article, we want to shed some light on this from an EU law perspective. Therefore, this article explains what constitutes emotion data and the scope and limitations of EU law aiming to protect the persons to whom emotion data belongs. We focus on provisions in the General Data Protection Regulation (GDPR) and, to a lesser extent, on the Digital Services Act (DSA). We also discuss provisions of the AI Act proposal, which is currently undergoing the EU's trialogue process. The EU regulations we discuss have an impact on the global AC community because of their extraterritorial scope (Article 3(2) GDPR, Article 2 DSA, Article 2 AI Act proposal). Our article also highlights some of the consequences, including harm, that processing of highly sensitive data may have for the individuals concerned. By doing so, the article aims at raising awareness among the AC community about the importance of protecting the rights of individuals even when the law struggles to keep up with technological developments that look inside our most human side. Our article outlines that emotion data is not protected as 'special data' according to Art. 9 of the GDPR despite its sensitive nature and the related impacts processing such data may have on people. As such, it is also tricky for the affective computing community to consider the applicable legal requirements when developing AC systems that involve study participants in the EU or are intended for the EU market. For instance, processing special data is prohibited under the GDPR unless an exception applies. Whether processing of personal data used to detect or derive emotion data falls under the framework applicable to special personal data (Art. 9 GDPR) depends on the approach taken in AC. Approaches that process physiological information fall under the scope of Article 9 GDPR, whereas visual approaches relying on the processing of facial expressions do not. ## II Emotions and Emotion Data Since ancient times, emotions have been considered significant drivers of human action and essential aspects of human decision-making processes [12]. Nevertheless, although emotions are an essential part of the human experience, the essence of emotions becomes elusive when we try to define them [13]. Despite this lack of definition, research and advances in understanding emotions and their role in human life have been made in various disciplines, such as philosophy, music, sociology, and neuroscience [14]. In psychology, for instance, emotions are discussed as particular affective states that humans experience temporarily [15]. Researchers in affective sciences have proposed several taxonomies to categorize emotions in everyday experiences [17]. The most popular taxonomy is called 'basic emotions,' which are assumed to be universally present in all humans [18]. According to this taxonomy, there are six basic emotion categories: anger, disgust, fear, happiness, sadness, and surprise [19], to which additional categories, such as anxiety, guilt, shame, pride, compassion, relief, hope, and love, were added over time [20-21]. The experience of emotions and the mechanisms determining our behavior or action selection, including, e.g., facial expressions, are correlated in some way. Ekman [22] developed the theory that certain facial expressions reveal universal basic emotions.
Quite early, the basic emotion taxonomy was subject to substantial disagreement, especially over the extent to which the origins of facial expressions are innate or sociocultural and whether emotions could accurately be inferred from human facial expressions [18, 20-21, 23], which is still a matter of debate today [24]. Instead of elaborating on existing definitions, we will use the notion of emotion data. Emotion data is information relating to the emotions of an individual. For the sake of simplicity, emotions refer to the six basic emotion categories: anger, disgust, fear, happiness, sadness, and surprise. These emotion categories have received the lion's share of attention in scientific research [24]. This also makes sense from an affective computing perspective, as most approaches in the field rely on basic emotion categories. Since the beginning, basic emotion theories and their emotion categories have been the models of choice in computer science and engineering [16]. According to a review performed ten years ago, most systems were concerned with detecting the six basic emotions [17]. While we are unaware of a more recent comprehensive review, most modern AC systems also seem to focus on these emotion categories or variations thereof [25]. Real-world applications are Amazon's wearable 'Halo' [26], Spotify's patented voice assistant [27], and Amazon's patent enabling Alexa to recognize the user's emotional state [28]. ## III Emotion Data Harms The processing of emotion data can have significant adverse effects on people that make it legally relevant. Emotions are a private, sometimes intimate part of people's lives [29]. In particular, emotions are central to social connections - people become vulnerable, establish trust, and build relationships by revealing them to each other [30]. Emotional trust assumes the possibility of being hurt by another person [31]. Therefore, people often keep emotions private and decide with whom, when, and what to share [29]. By processing emotion data, machines provide access to information about people's emotional lives that is private and intimate [32]. Therefore, the most often discussed concern regarding the processing of emotion data is the risk of undermining "informational privacy" or people's interest in being in charge of the information about themselves, including their emotions [33]. However, the adverse effects of emotion data processing go far beyond privacy issues. Depending on how emotion data is _used_, its processing can result in manipulation and discrimination (or oppression) of individuals, which can lead to economic, relational, psychological, and physical harm to individuals, and also to societal threats. Manipulation can be defined as hidden influence achieved by targeting and exploiting people's decision-making vulnerabilities [34]. There is a scientific consensus that humans are not entirely rational in their decision-making. Behavioral science reveals that humans act on _heuristics_ to decrease the complexity of real-life situations and shortcut everyday decisions [35]. This often results in cognitive _biases_ or systematic errors in judgment that the manipulator can exploit. However, emotions are another source of vulnerability to manipulation [34]. For example, in William Shakespeare's tragedy Othello, written around 1600, Iago manipulates Othello because he knows his insecurity, love, and jealousy. The potential for manipulative use is a central concern in processing emotion data.
One paradigm example of such a service can be manipulative advertising. Once a widespread science-fiction practice (e.g., Spielberg's Minority Report), using emotion data for targeting advertisements has become particularly relevant in the online advertising industry through techniques such as "dynamic emotional targeting" or "emotion analytics" [36]. Such targeting is made possible due to the increased ability of online platforms, such as Google and Meta, to identify emotional data by analyzing keyboard typing patterns [37], video data [38], and metadata [39]. Targeting advertisements based on emotion data can result in manipulation when, for example, the internet user is not aware that data about their sadness is used to target them with a video-game advertisement - the user does not know precisely how they are being influenced, making a decision that they cannot regard as their own. From a deontological stance, manipulation is harmful because it undermines personal _autonomy_ or disables a person's capability for an authentic choice [33]. However, it can also lead to various adverse consequences that are also legally relevant. For example, it can result in economic loss: subscribing to a video game after manipulative advertising would mean the user pays needlessly. This may lead to direct financial loss to a consumer, structural inefficiencies, and market failure. Moreover, manipulation can lead to time loss (e.g., a user playing a video game instead of spending time more authentically). Manipulating users based on their emotional data can also harm their psychological and physical health and integrity. This is particularly true when emotional data relates to a person's mental health. In one extreme example, an online personalization algorithm that identified 14-year-old Molly Russel as depressed targeted her with content about self-harm and suicide [40]. Eventually, Russel developed a severe depressive disorder and later ended her life. While linking such cases directly to the processing of emotion data may seem far-fetched, a coroner directly linked the personalization algorithm as the cause of her death. On the other hand, using emotion data can exacerbate inequality or otherwise discriminate. For example, a person's temporary anxiety when considering airplane tickets can be used to extract surplus profit by charging a higher price. This can exacerbate inequality as low-income people may be more likely to experience anxiety about ticket prices. Moreover, when emotion data reveals mental health conditions, such as, for example, showing a mood disorder, this can be used in employment decisions or other decisions that may significantly affect the person's life. Finally, when emotion data is processed for public security purposes, for example, at border control, it may disadvantage historically marginalized groups and ethnic minorities who may find immigration or security lines relatively more "stressful" [41]. Furthermore, if emotion data is processed in a way that undermines information privacy, such processing may have significant risks to other interests. For example, it may have chilling effects on free speech - the feeling of being watched and classified can act as an intimidation mechanism and limit their self-expression. On the other hand, if emotion data is processed in the context of an interrogation, it may undermine an individual's interest in avoiding self-incrimination [41]. Lastly, emotions are felt as personal and express what a person cares about [42]. 
As people often rely on emotions to relate to their authentic core selves, processing emotion data can interfere with constructing one's selfhood. The construction of one's selfhood can be considered one of the most sensitive areas, and interfering with such a process can be regarded as treating persons as less than human, undermining their dignity, which is the foundational value in the European Union [43]. ## IV EU Law and Emotion Data In law, expressions and attributions of emotions have historically played a critical role in legal decision-making, particularly in criminal law [44]. An accused individual's physical movements were considered to indicate inner emotions and ultimately used to determine guilt or innocence. The lie detector developed by Hugo Münsterberg is a prime example [43]. A recent example is the automated border control system called IBORDERCTRL [8]. This research project, which the EU funded, analyses travelers' micro-gestures to determine if the interviewee is lying. There are existing and emerging areas of EU law in which emotion data plays an important role. We discuss the GDPR and the DSA. The former has been in application for five years, while the latter shall apply as of 24 February 2024. We also consider the AI Act proposal, which is subject to the EU's legislative procedure at the time of writing. ### The EU General Data Protection Regulation The GDPR only applies to the processing of personal data. Personal data is defined as a concept with four elements: i) any information, ii) relating to, iii) an identified or identifiable, iv) natural person (Article 4 GDPR). Although emotions are felt as personal because they are related to a person's values [29] and express what a person cares about [42], such information is not per se considered personal data from a legal perspective [43], as confirmed by the examples of two European Supervisory Authorities. Processing emotion data does not fall under the material scope of the GDPR in case the individual concerned is neither identified nor identifiable. An example of this may be found in a billboard installed at Piccadilly Circus in London using AC to broadcast ads based on people's age, gender, and mood [46]. The same holds when retailers capture data about the age, gender and observed emotions of retail customers without identifying them. Providers usually argue that the system only stores anonymous data like age and gender. Therefore, the captured emotion data "belong to no one" because they cannot be linked to individuals [47]. Some EU Data Protection Supervisory Authorities seem to agree [48, 49]. Usually, however, the use of emotion data amounts to the processing of personal data because individuals are identified or identifiable, e.g., if AC systems are used in a hiring context or by call center agents. ### Emotion data as special data In most cases, emotion data constitutes personal data. The question arises whether such data is specifically protected due to its sensitive nature. The answer to this question is complicated. First and foremost, emotion data is not specifically protected under Article 9 GDPR. This provision regulates the processing of special categories of personal data ('special data').
It prohibits "_the processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation._" Processing such data is prohibited and only allowed if an exception in Article 9(2) GDPR applies. Because emotion data is not listed in this exhaustive enumeration of special data, it is never specifically protected as such [50, 51, 52]. Nonetheless, the data used to detect emotion data may constitute special data. Ultimately, the _approach taken in AC_ determines whether processing _personal data used to detect or derive_ emotion data falls under the scope of Article 9 GDPR. A current survey distinguishes between AC's single-modal and multi-modal affect recognition approaches [52]. Single-modal approaches are divided into text sentiment analysis, audio emotion recognition, visual emotion recognition focusing on facial expression and body gestures, and physiological-based emotion recognition systems [52]. The latter include AC systems that detect emotional states from electroencephalograms (EEGs) and electrocardiograms (ECGs). ECG-based emotion recognition systems record the physiological changes of the human heart in order to detect the corresponding waveform transformation, which provides information for emotion recognition [52]. For instance, Hsu et al. presented an ECG-based emotion recognition system for music listening [53]. EEG is a non-invasive method that detects and registers electrical activity in the brain [54]. EEG-based emotion recognition systems directly measure changes in brain activities, which provide internal features of an emotional state [52, 55]. Merely physiological-based emotion recognition systems in AC involve processing special data as defined in the GDPR. Information processed by these systems falls under the definition of health data, which not only covers physical or mental health but also "any information (..) on the _physiological_ or biomedical state of the data subject independent of its source" (Recital 35). Consider, for instance, AC systems that infer emotion data from physiological data such as heart rate, blood pressure, and skin conductance. Such information falls under the definition of health data and is protected as a special category of personal data according to Article 9 of the GDPR. Conversely, most of the single-modal affect recognition systems pursued in AC do _not_ amount to the processing of special data. AC systems deploying approaches such as text sentiment analysis, audio emotion recognition, and visual recognition of emotion focusing on facial expressions and body gestures do _not_ directly involve processing special categories of personal data (recitals 51-53). Information processed within these approaches and derived emotion data are thus not protected as special personal data under the GDPR, despite their sensitive and intimate nature. This also holds when biometric data is used for AC to detect the emotional state of the individual concerned. Think about automated face analysis (AFA) approaches relying on the facial action coding system (FACS) [56, 57]. Within these approaches, biometric data in facial expressions are _not_ processed to identify an individual. The same applies to AC systems aiming to detect emotion data from an individual's voice and speech. 
According to the wording of Article 9(1) GDPR, biometric data is only protected as special personal data if it is used _for the purpose of uniquely identifying_ an individual. This means "processed through a specific technical means allowing the unique identification or authentication of a natural person" (recital 51 GDPR). According to regulatory guidance, biometric identification typically involves "the process of comparing biometric data of an individual (acquired at the time of the identification) to several other biometric templates stored in a database (i.e., a one-to-many matching process)" (WP 193, 27 April 2012). For example, HumeAI, or formerly HireVue, provides AC-powered tools to help recruiters assess personality traits and detect emotional states of job candidates disclosed during automated video assessments based on facial expressions. These systems do not process biometric data in the form of facial expressions for the purpose of uniquely identifying the job candidate as required by Article 9(1) GDPR. Instead, they detect a candidate's emotional state during the automated video assessment. Identification is achieved through other means, namely when the candidate reveals their name or other identifiable information. Likewise, an AC system that, based on automated voice analysis, advises the call agent to speak with more empathy because the customer seems to be angry does not process biometric data for identification purposes. In conclusion, highly sensitive emotion data never constitutes special personal data protected explicitly in the GDPR. In some cases, AC systems process special data to _derive or detect_ emotion data. This applies to physiological-based emotion recognition systems that process information like heart rate, blood pressure, and skin conductance. Such information constitutes health data and is protected as a special category of personal data in the GDPR. Nonetheless, the highly sensitive detected emotion data never constitutes special data under the GDPR, irrespective of which affect recognition (single-modal or multi-modal) approach in AC is deployed. Inherently sensitive personal data is not specifically protected in EU data protection law. This leads to a significant gap in legal protection. It could be argued that emotions should be regulated like human speech or text because both somehow define humanity. In our view, the inherently highly sensitive nature of emotion data and the close link with one's personhood merit specific protection. ### Data protection principles Article 5 of the GDPR stipulates the principles governing any personal data processing. These principles provide the basis for protecting personal data in EU data protection law [58]. In the context of AC, three EU data protection law principles are particularly relevant. These are the fairness and transparency principles enshrined in Article 5(1)(a) GDPR as well as the accuracy principle according to Article 5(1)(d) GDPR. Moreover, according to the principle of data protection by design and default (Article 25 GDPR), these principles must be considered when developing and designing AI systems that process personal data. ### Fairness Even though the fairness principle is a crucial tenet of EU data protection law, its role has thus far remained elusive [59] due to the lack of judicial guidance.
Nonetheless, regulatory guidance (Guidelines 2/2019) and regulatory enforcement at the EU level (in the form of binding decisions adopted by the EDPB according to Article 65 of the GDPR) identify five key elements of the fairness principle. These elements are the autonomy of data subjects concerning data processing, their reasonable expectations, ensuring power balance between controllers and data subjects, avoidance of deception and of possible adverse consequences of processing, and ensuring ethical and truthful processing (EDPB binding decisions 3/2022, 4/2022, 5/2022). Fairness is an _overarching_ principle beyond transparency [60]. Processing personal data in the context of AC raises the question of whether such processing complies with the fairness principle. Processing emotion data enabled by AC could be misleading, mainly because the accuracy of AC has been severely questioned [24, 61-62]. Processing emotion data through AC could be detrimental and unexpected for the individuals concerned. Imagine an employer that uses automated video assessments provided by HumeAI and formerly HireVue to detect the emotional states of candidates shown during these assessments. Particularly in these circumstances, processing emotion data employing AC may have adverse consequences for the data subject. Perhaps for this reason, HireVue halted the operation of its services' component analyzing the facial expressions of applicants [63]. Considering the sometimes questionable accuracy of AC (see the accuracy principle), the ubiquitous manner of processing (see the transparency principle), the sensitive nature of the personal data processed, and possible adverse effects for the candidate, it seems reasonable to conclude that such processing does not comply with the fairness principle. The power asymmetry between the employer and the applicants also plays a role. Fairness aims to balance precisely these kinds of power asymmetries and to prevent adverse effects in concrete circumstances [64]. Here, the adverse effects are apparent. Rather sensitive personal data is processed to determine whether the applicant will receive a job offer. Undoubtedly, this decision has a considerable effect on the candidate. Even so, relying on a human observer instead of an emotion recognition system may lead to similar problems. Moreover, processing emotion data can also improve fairness, for instance, by highlighting human biases. Of course, there are also examples outside the HR domain. Think about the automated border control system called IBORDERCTRL, which tries to determine if the interviewee is lying. It may be severely questionable whether such processing is fair. The use of AC might also be unfair within other domains, for instance, when implemented in cars, classroom teaching aids, smart toys, virtual assistants, and targeted advertisements. Although AC systems are predominantly developed in the West, they are sold to global marketplaces. Algorithms are hardly modified for racial, cultural, ethnic, or gender differences [65, 66, 67]. Importantly, AC systems may violate the fairness principle when emotion data is used to manipulate data subjects and adversely affect their personal autonomy. #### Transparency The transparency principle inherent in Article 5(1)(a) GDPR requires that it must be transparent to natural persons "that personal data concerning them are collected, used, consulted or otherwise processed" (recital 39).
It is further specified in Articles 12 to 14 of the GDPR in the form of data controller obligations to provide certain information to the data subject, for instance, about the purposes of processing. As propagated by Picard [32], transparent processing performed by AC systems presupposes that individuals know what emotions the machine has recognized. However, this does not seem to be the case when the transparency obligations contained in the GDPR are applied in practice. Usually, emotion data are not directly provided by the data subject concerned. Instead, emotion data is _inferred_ by AC systems based on personal data collected from data subjects. This is important because Article 13 of the GDPR only applies when personal data are obtained or observed from the data subject. Because complex computing is needed to detect emotion data, it cannot be considered as simply observed data. Thus, the transparency requirements enshrined in Article 13 of the GDPR are not triggered [67]. Emotion data must be considered as being _inferred_ from personal data provided by the data subject. Regarding inferred personal data, regulatory guidance states that the controller must provide information about the intended _purpose_ and the _categories_ of the inferred data (WP 260 rev.01, 11 April 2018). When this guidance is applied to the AC software provided by HumeAI and formerly HireVue, the employer merely needs to inform the candidate about the intended purpose of creating inferred data and the category of inferred data at the commencement phase of the processing cycle. Hence, the prospective employer is not obliged to inform the candidate about the specific emotions detected by the system. The same holds if the prospective employer receives emotion data from an independent assessment provider (i.e., not obtained from the data subject). Article 14 GDPR, which is applicable in this scenario, simply requires the employer to inform the candidate about the category of personal data (e.g., 'emotion data'). The candidate will not know what specific emotions the system detected. The same conclusion can be drawn for Speech Emotion Recognition (SER) applications used by call centers to detect emotion data of customers by means of automated speech analysis. This leads to a significant loophole [50]. Candidates and callers do not know what specific emotions are being detected about them. We use the term 'loophole' because we share Picard's view that transparent processing presupposes that individuals can see what emotion the machine recognized [32]. However, as outlined in this section, EU data protection law does not oblige controllers to disclose the specific emotion data detected by the AC system. #### Accuracy Article 5(1)(d) GDPR states that personal data undergoing processing must be accurate. The accuracy principle protects the individual concerned from being irrationally or unfairly treated based on wrong and inaccurate representations [68]. According to regulatory guidance, accurate means "accurate as to a matter of fact" (WP 225, 26 November 2014). Case law of the CJEU (Case C-434/16) indicates that the precision required for processing personal data is determined by the purpose of the processing [68]. Thus, the assessment as to whether personal data is accurate and complete depends on the _purpose_ for which it was collected. Personal data generated using AC are subject to this accuracy principle.
However, different studies have challenged the idea that a person's emotional state can accurately be inferred from his or her facial movements. An extensive study suggests that facial movements are not diagnostic displays that reliably and precisely signal particular emotional states regardless of context, person, and culture [24]. Another study revealed that the accuracy levels of eight commercial automatic classifiers used for facial affect recognition were consistently lower when applied to spontaneous affective behaviors than "posed" affective behavior. Recognition accuracy rates of the tested classifiers varied from 48% to 62% [69]. Additionally, the accuracy of emotion data inferred by other means, such as physiological data or speech, has been questioned [70]. Research that explores emotional aspects of speech has been restricted to laboratory conditions with noise-free, uncompressed, and full-bandwidth audio recordings. Recent studies, however, indicate that speech compression, filtering, band reduction, and the addition of noise significantly reduce accuracy [71]. Nevertheless, speech emotion recognition is already being applied 'in the wild', and emotions are inferred from speech recorded or streamed in natural and daily-life environments, likely with significantly lower accuracy rates. Also, Recital 26c of the AI Act proposal discussed in Section B below mentions the following 'shortcomings' of current AC systems: limited reliability, lack of specificity, and limited generalizability. Thus, the processing of personal data by means of AC creates severe tension with the accuracy principle. If companies act upon such arguably inaccurate data and treat individuals in a particular manner, it could lead to severe problems for the individuals concerned. ### _AI Act proposal_ In 2021, the EU Commission proposed the AI Act [72]. After multiple amendments, the European Parliament has adopted its negotiation position for the AI Act [73] ('AI Act proposal'), which now undergoes the EU's trilogue process. The AI Act proposal covers aspects ranging from product safety law to fundamental rights. It contains an exemption for scientific research. Research, testing, and development activities of AI systems occurring before putting such systems on the market do not fall under the AI Act, except for testing in real-world conditions (Article 5d). Article 3 (34) of the AI Act proposal directly relates to AC systems. It defines 'emotion recognition systems' as systems used "for the purpose of identifying or inferring emotions, thoughts, states of mind or intentions of individuals or groups on the basis of their biometric and biometric-based data." Biometric-based data are defined in Article 3 (33a) as "data resulting from specific technical processing relating to physical, physiological or behavioral signals of a natural person." Recital 26c of the AI Act proposal names facial expressions, movements, pulse frequency, and voice as examples. Thus, the definition of emotion recognition systems ('ERS') is very broad, arguably covering both single-modal and multi-modal affect recognition approaches in AC as introduced in the previously mentioned survey [52]. The AI Act proposal takes a risk-based approach toward emotion recognition systems ('ERS'). Firstly, Article 5(1)(dc) prohibits ERS from being used in law enforcement, border management, the workplace, and education. Secondly, ERS are generally considered high-risk systems as outlined in Article 6(2) and Annex III. 
This means that ERS must meet specific compliance requirements, such as those pertaining to risk management systems, accuracy, data governance, transparency, human oversight, specific technical documentation, and record-keeping (Articles 9-15). Deployers of AI systems must also perform a fundamental rights impact assessment (Article 29a). We focus on transparency and accuracy requirements relating to ERS. #### Transparency Article 52(2a) AI Act proposal obliges entities under whose authority the ERS is used ('deployer') to inform individuals concerned about _the operation of the system_. Accompanying recital 70 explains that "natural persons should be notified" when exposed to ERS. None of the recitals further clarify what information about the system's operation precisely entails. Arguably, it simply means to make natural persons aware that they are exposed to an ERS. Hence, deployers of ERS are not obliged to inform individuals about what specific emotion the system detected. Similar to the situation with the GDPR, this contradicts what Picard propagated: individuals should be able to know what emotion the machine recognizes [32]. In conclusion, the AI Act does not fill the current loophole in EU data protection law. Under both the GDPR and the AI Act proposal, individuals do not know what specific emotions are being detected about them. #### Accuracy Under the AI Act proposal, ERS are high-risk AI systems according to Article 6(2) and Annex III. Article 15(1) of the AI Act proposal requires that high-risk AI systems are designed and developed in such a way that they achieve, in light of their intended purpose, an "appropriate level of accuracy". Levels of accuracy and relevant accuracy metrics must be declared in the accompanying documentation. This contains, inter alia, detailed information about the AI system's degree of accuracy for specific _persons_ or _groups of persons_ on which the system is intended to be used. The documentation must also disclose the overall expected level of accuracy concerning its intended purpose. The latter resembles the accuracy principle in data protection law as discussed under A) above. Nonetheless, accuracy under the AI Act appears to be much more specific, as the degree of accuracy must be disclosed regarding specific persons or groups of persons on which the ERS will be used. ### _Digital Services Act_ The newly enacted Digital Services Act (DSA) updates the EU's legal framework for intermediary services, including information society services and online platforms. Certain provisions are only applicable to very large online platforms (VLOP) and very large online search engines (VLOSE), as defined by the legislation (Art. 33 DSA). Several provisions in the DSA may impact AC systems, many of which relate to advertisements or recommender systems. Notably, the DSA introduces a prohibition of advertising to minors based on their profiling (Art. 28(2) DSA), which inherently includes profiling that uses emotion data. Advertising based on profiling that uses special data is prohibited (Art. 26(3) DSA). However, as seen above, emotion data does not necessarily constitute special data, and thus may fall outside the scope of this provision. For entities that are designated as VLOP or VLOSE, the DSA includes requirements for risk assessments and the related mitigation of risks (Art. 34, 35 DSA), yearly independent audits (Art. 37 DSA), and the provision of a recommender system that is not based on profiling (Art. 38 DSA). 
The risk assessments are to include systemic risks to "any actual or foreseeable negative effects for the exercise of fundamental rights," particularly those to human dignity, respect for private and family life, the protection of personal data, non-discrimination, and a high level of consumer protection, among others (Art. 34(1)(b) DSA). They are also to include systemic risks to "any actual or foreseeable negative effects in relation to... serious negative consequences to the person's physical and mental well-being" (Art. 34(1)(d) DSA). Given the implications of the use of emotion data on individuals' autonomy, privacy, and mental well-being (_see_ Section III), using such data in services provided by VLOP or VLOSE should form part of their risk assessment. They are then obligated to "put in place reasonable, proportionate and effective mitigation measures, tailored to the specific systemic risks identified" (Art. 35(1) DSA). This can include "adapting the design, features or functioning of their services," adapting their algorithmic systems (including recommender systems), or adapting their advertising systems (Art. 35(1)(a), (d), and (e) DSA). Finally, VLOP or VLOSE entities must "provide at least one option for each of their recommender systems which is not based on profiling" (Art. 38 DSA). This is important for individuals as they can prevent recommender systems from using their emotion data. Interestingly, the DSA does not require such an option for advertisements. The DSA also introduces new requirements regarding transparency not covered by the GDPR and AI Act proposal. Online platforms that present advertisements must provide individuals with information "about the main parameters used to determine the recipient to whom the advertisement is presented and, where applicable, about how to change those parameters" (Art. 26(1)(d) DSA). Online platforms must also disclose "the main parameters used in their recommender systems, as well as any options for the recipients of the service to modify or influence" them (Art. 27(1) DSA). This must include the most significant criteria used in determining the information suggested to individuals and the reasons for their importance (Art. 27(2)(a) and (b) DSA). Where AC systems are used, both Articles require online platforms to mention emotion data when it is used as a primary parameter for advertisements or in recommender systems. However, this leaves a gap when emotion data may be used as a secondary parameter, as its use may not be disclosed. Moreover, even when emotion data may be a main parameter, online platforms are unlikely to detail which emotion the AC system recognized [32]. Given the novelty of these provisions, it remains to be seen how online platforms will attempt to comply with them or how supervisory authorities will enforce them. ## V Conclusions In this article, we have outlined that emotion data does not constitute special data in EU data protection law despite its sensitive nature. The GDPR fails to keep up with technological developments, which leads to a lacuna of protection. As such, it is also tricky for the affective computing community to consider the GDPR's legal requirements when developing AC systems because of its lack of clarity and the sector-specific knowledge it requires. Whether the processing of personal data used to detect or derive emotion data falls under the scope of special personal data, according to Art. 9 of the GDPR, depends on the _approach_ taken in affective computing. 
Approaches that process physiological information do, whereas visual approaches that rely on biometric data (e.g., facial expressions) do not--at least not directly. The AI Act proposal introduces yet another term relevant to AC: biometric-based data. This is data resulting from specific technical processing relating to a natural person's physical, physiological, or behavioral signals. These legal nuances make it difficult for the AC community to sufficiently consider the applicable legal requirements when developing AC systems intended for the EU market or when working with study participants in the EU. Processing emotion data by means of affective computing systems may be detrimental to critical elements of the fairness principle contained in the GDPR. Also, the processing of emotion data creates severe tensions with the accuracy principle enshrined in the GDPR. Several studies have questioned the accuracy of emotion data inferred by means of AC [69, 70, 24, 71]. Moreover, EU data protection law does not oblige deployers of AC systems to inform individuals about the specific emotions detected by the system, contrary to what Picard propagates [32]. It seems that the AI Act, for now, will not fill this loophole. Moreover, the AI Act may limit specific harmful uses of emotion data. This can mainly be the case regarding manipulation that leads to psychological or physical harm. However, arguing that the risk of such harm exists will be a difficult exercise. Furthermore, the AI Act leaves unaddressed other harms (e.g., time loss, economic loss) of manipulative uses of emotion data processing. The DSA introduces several new obligations for online platforms relevant to AC systems and emotion data. These include risk assessments and mitigation measures, independent audits, and the implementation of audit recommendations, all accompanied by reports that must be made public. In addition, transparency requirements regarding recommender systems and advertisements may shed light on how AC systems and emotion data are used in practice, though there are some limitations. Further, the DSA introduces several provisions that limit how emotion data may be used. For instance, both the use of special data for profiling in advertisements and the profiling of minors for advertisements are prohibited. ## Ethical Impact Statement Compliance with legislation cited in our article does not mean that all ethical concerns are satisfied. This holds particularly true when considering the gaps of legal protection we have identified. These loopholes shall not be exploited. Furthermore, ethical concerns outlined in Section III should be taken into account. ## Acknowledgment The authors would like to give a special thank you to Joost Batenburg, the coordinator of the SAILS Program, a Leiden University-wide initiative aiming to facilitate collaboration across disciplines on the use of AI.
2301.04121
Lagrangian reduction and wave mean flow interaction
How does one derive models of dynamical feedback effects in multiscale, multiphysics systems such as wave mean flow interaction (WMFI)? We shall address this question for hybrid dynamical systems, whose motion can be expressed as the composition of two or more Lie-group actions. Hybrid systems abound in fluid dynamics. Examples include: the dynamics of complex fluids such as liquid crystals; wind-driven waves propagating with the currents moving on the sea surface; turbulence modelling in fluids and plasmas; and classical-quantum hydrodynamic models in molecular chemistry. From among these examples, the motivating question in this paper is: How do wind-driven waves produce ocean surface currents? The paper first summarises the geometric mechanics approach for deriving hybrid models of multiscale, multiphysics motions in ideal fluid dynamics. It then illustrates this approach for WMFI in the examples of 3D WKB waves and 2D wave amplitudes governed by the nonlinear Schr\"odinger (NLS) equation propagating in the frame of motion of an ideal incompressible inhomogeneous Euler fluid flow. The results for these examples tell us that the fluid flow in WMFI does not create waves. However, feedback in the opposite direction is possible, since 3D WKB and 2D NLS wave dynamics can indeed create circulatory fluid flow.
Darryl D. Holm, Ruiao Hu, Oliver D. Street
2022-12-12T16:36:05Z
http://arxiv.org/abs/2301.04121v2
# Lagrangian reduction and wave mean flow interaction ###### Abstract How does one derive models of dynamical feedback effects in multiscale, multiphysics systems such as wave mean flow interaction (WMFI)? We shall address this question for hybrid dynamical systems, whose motion can be expressed as the composition of two or more Lie-group actions. Hybrid systems abound in fluid dynamics. Examples include: the dynamics of complex fluids such as liquid crystals; wind-driven waves propagating with the currents moving on the sea surface; turbulence modelling in fluids and plasmas; and classical-quantum hydrodynamic models in molecular chemistry. From among these examples, the motivating question in this paper is: How do wind-driven waves produce ocean surface currents? The paper first summarises the geometric mechanics approach for deriving hybrid models of multiscale, multiphysics motions in ideal fluid dynamics. It then illustrates this approach for WMFI in the examples of 3D WKB waves and 2D wave amplitudes governed by the nonlinear Schrodinger (NLS) equation propagating in the frame of motion of an ideal incompressible inhomogeneous Euler fluid flow. The results for these examples tell us that the mean flow in WMFI does not create waves. However, feedback in the opposite direction is possible, since 3D WKB and 2D NLS wave dynamics can indeed create circulatory mean flow. ###### Contents * 1 Introduction * 1.1 Examples of hybrid models * 2 Lagrangian reduction * 2.1 The Hamiltonian formulation. * 2.2 Additional symmetry * 3 Examples: Eulerian wave elevation field equations * 3.1 WKB internal waves in the Euler-Boussinesq (EB) approximation * 3.2 Coupling to the nonlinear Schrodinger (NLS) equation * 4 Numerical simulations * 5 Conclusion and outlook * 5.1 Acknowledgements * A Stochastic Hamiltonian wave-current dynamics * B Coupling of Harmonic Oscillations ## 1 Introduction **Interaction of wind waves and ocean currents.** In the Iliad, one of Homer's verses describing air-sea interaction seems to hint that wind-driven waves convey an impulse of momentum into the sea [48]: like blasts of storming winds striking the earth under Father Zeus's thunder, then with a roar slicing into the sea, whipping up a crowd of surging waves across a booming ocean, with lines of arching foam, one following another. Modern geophysical fluid dynamics (GFD) would not disagree with Homer's simile for air-sea interaction. In particular, the well-known Craik-Leibovich (CL) theory of the role of Stokes drift in the creation of Langmuir circulations [11] and the Andrews-McIntyre theory of generalised Lagrangian mean (GLM) dynamics [3] each introduce a shift in the definition of total momentum by introducing an additional fluid velocity field and a corresponding non-inertial force on the mean fluid motion due to a change of frame. In this paper, we use standard methods of geometric mechanics to formulate models of wave mean flow interaction (WMFI) of fluctuations on the Earth's mean sea surface that is based on boosting the fluctuation dynamics into the frame of the mean flow. We hope that such a model may become useful, for example, in the interpretation of satellite image data from the Surface Water and Ocean Topography (SWOT) satellite mission, which is the first satellite with the specific aim to measure fluctuations on the Earth's sea surface [63]. 
Our objective here is to construct WMFI dynamics as a _hybrid_ fluid theory based on symmetry reduction in an Euler-Poincare variational principle for the nonlinear dynamics of a system of two fluid degrees of freedom [42]. The mathematical theory formulated here is illustrated in a hybrid fluid theory reminiscent of Landau's two-fluid model of superfluid \(He\)-II as discussed, e.g., in [49]. Just as with superfluids, the formulation of the theory in this paper involves transforming between the frames of motion of the two fluidic degrees of freedom. The role of the superfluid component of Landau's two-fluid \(He\)-II model in the WMFI model proposed here is played by the slowly varying complex amplitude of WKB wave equations, e.g., of the nonlinear Schrodinger (NLS) equation. In the absence of additional assumptions, the inverse problem of determining a three-dimensional fluid flow under gravity solely from observations of its two-dimensional surface flow and its propagating wave elevation field has no unique solution. Without attempting to discover the three-dimensional flow beneath the surface, though, one may still derive a mathematical model of some of the phenomena on the free surface via the implications of the kinematic boundary condition. Specifically, the kinematic boundary condition implies a composition of horizontal flow and vertically oscillating wave elevation dynamics of the Lagrangian material parcels remaining on the surface. In this paper, we formulate the initial value problem for wave dynamics on the free surface of a three-dimensional fluid flow. This is done entirely in terms of surface phenomena, as the semi-direct composition of a two-dimensional area-preserving horizontal fluid flow map acting on the vertical wave elevation dynamics. The surface wave dynamics formulated here is derived via Hamilton's variational principle by using a Lagrangian comprising the difference of the fluid kinetic and potential energies, constrained by the kinematic boundary condition that the flow of material parcels remains on the surface of the fluid. ### Examples of hybrid models Hybrid systems often involve sequences of relative motions in which one degree of freedom evolves in the frame of motion of the previous one. Lewis Fry Richardson's "whorls within whorls" verse about the turbulence cascade describes the familiar situation in which big whorls, little whorls and lesser whorls interact sequentially, one within the frame of motion of the one before, each feeling an additional reaction force from the change of frame. Plasma dynamics exemplifies another type of hybrid system, one in which Lagrangian charged particles interact with Eulerian electromagnetic fields. In this case, the Lorentz force on the charged fluid particles arises in the plasma fluid motion equation when the electromagnetic fields are Lorentz-transformed into the frame of the moving fluid. This type of reaction force due to a frame change can usually be attributed to a momentum shift associated with the change of frame. **Complex fluids.** In a sequence of papers [19, 20, 21, 29] the geometric mechanics of perfect complex fluids was developed, culminating in a full description of the geometry underlying the classic Ericksen-Leslie and Eringen theories of complex fluids [20]. The hybrid model approach we shall discuss in the present paper is consistent with these previous approaches. 
The next three hybrid models have the additional feature that the hybrid components of the degrees of freedom live in nested sets of physical spaces or phase spaces. **Multiscale fluid turbulence models.** The geometric hybrid approach also applies in the kinetic sweeping of microstructure in turbulence models [44]. The basic idea in these turbulence models is that the coarse-grained space contains the fine-grained space as a subgrid-scale degree of freedom. The fine-grained fluid dynamics are transported along the Lagrangian paths of the coarse-grained fluid dynamics by a composition of maps. Spatial averages over the evolution of the fine-grained fluid dynamics act back on the motion in the coarse-grained space and modify it. The back-reaction is calculated via the coarse-grained divergence of the Reynolds stress tensor for the coarse-grained fluid dynamics. The latter is defined by spatial averaging over the terms in the coarse-grained dynamics that feed back from the fluid dynamics in the fine-grained space, which is again parameterised by the coarse-grained coordinates by the composition of smooth invertible maps. **Hybrid models of electromagnetic wave / fluid plasma interaction.** A natural candidate for hybrid models would be the electromagnetic wave / fluid plasma interaction. Examples of hybrid models of the geometric type considered here in plasma physics include: (i) ponderomotive coupling of microwaves and plasma in magnetically controlled fusion [57, 58]; (ii) electro- and magneto-fluids [25]; (iii) relativistic fluid plasma dynamics [27]; and (iv) Vlasov-fluid hybrid plasma models, Holm and Kaufman [36], Holm and Tronci 2010 [45]. **Classical-quantum mechanics.** The coupling between classical and quantum degrees of freedom has raised an outstanding question ever since the rise of quantum mechanics as a physical theory. How does one separate classical and quantum? How do they influence one another? Is there a back reaction? For example, is there something like Newton's Law of action and reaction when a classical measurement of a quantum property occurs? A general model of classical-quantum back-reaction must be able to give consistent answers to the various quantum paradoxes. For example, the exact factorisation (EF) model of quantum molecular chemistry is discussed from the viewpoint of the geometric mechanics approach in [16, 22, 43, 56]. The EF model shares some similarities with the multiscale turbulence models in that two spatial scales are involved: one spatial scale for the slowly moving classical dynamics of the ions; and another spatial scale for the rapid quantum motion. The term "exact factorisation" indicates that the total wave function is factorised into a classical wave function for the ions depending on one set of coordinates and a quantum wave function depending on a second set of coordinates whose motion relative to the first set of coordinates is determined by a composition of maps. **Image registration by LDM using the metamorphosis approach.** Large deformation diffeomorphic matching methods (LDM) for image registration are based on optimal control theory, i.e., minimizing the sum of a kinetic energy metric plus a penalty term. The former ensures that the template deformation by the diffeomorphism follows an optimal path, while the latter ensures an acceptable tolerance in image mismatch. 
The _metamorphosis approach_ is a variant of LDM that parallels the present considerations, in allowing dynamical templates, so that the evolution of the image template deviates from pure deformation [64, 46]. **Wave mean flow interaction.** The hybrid description of WMFI in terms of two fluid fields is already standard in geophysical fluid dynamics (GFD) models. For example, the Craik-Leibovich (CL) approach [11] and the Generalised Lagrangian Mean (GLM) approach [3, 23] both introduce two types of fluid velocities, one for the mean flow and another for the fluctuations. See [62] for a recent summary of the state of the art in Craik-Leibovich models. **The present work.** In all of the hybrid models mentioned so far, a simple and universal property of transformation theory called the cotangent-lift momentum map plays a key role in describing the interactions among the various degrees of freedom in the hybrid dynamical system. The same property plays a key role in the theory developed here for the interaction of free-surface waves and the fluid currents which transport them. Thus, the present work extends the ongoing series of applications of geometric mechanics in multiscale, multiphysics continuum dynamics to the case of the interaction of fluid waves and currents. As mentioned earlier, we hope that restricting this approach to two spatial dimensions will contribute a useful method for data calibration and analysis of satellite observations of the ocean surface in the SWOT mission. In preparation for the data calibration, analysis, and assimilation aspects of the potential applications of this approach, we also include Appendix A which formulates the stochastic versions of the deterministic WMFI equations treated in the main body of the paper that could be useful as a basis for SWOT data analysis. **Plan of the paper.** Section 2 shows the Lie group reduced variational path via Hamilton's principle for deriving hybrid fluid models. In Section 3, we introduce and discuss two examples of hybrid models. These hybrid fluid models are Eulerian wave elevation field equations governing the coupling of an Euler fluid to: (i) harmonic scalar wave field elevation oscillations; and (ii) complex scalar elevation field dynamics governed by the nonlinear Schrodinger (NLS) equation. The latter are called _hybrid Euler-NLS equations_. Section 4 shows simulations of the hybrid Euler-NLS equations that verify the predictions of momentum exchange derived in the previous section. Section 5 contains concluding remarks, as well as an outlook toward future work. Appendix A proposes stochastic modifications of the present deterministic variational theory and Appendix B discusses an instructive elementary example in which the waves comprise a field of vertical simple harmonic oscillators. ## 2 Lagrangian reduction We are dealing with physical problems that involve a subset of variables evolving dynamically in the frame of reference moving with an underlying dynamical system. An example was given earlier of waves propagating in the frame of reference given by ocean currents [35]. In general, the dynamics of some order parameter breaks the symmetry that the system would have had without the presence of said parameter. This problem may be described geometrically in the following way. Motivated by wave mean flow interactions (WMFI), within this section we will perform the calculations for the case of continuum dynamics, where the Lie group acting on the order parameters is taken to be the group of diffeomorphisms. 
We will therefore choose Lagrangians, group actions, and representations that are _right_ invariant. It should be noted that the theory presented in this section is general enough to apply for other dynamical systems whose behaviour can be described by the action of a Lie group on a configuration space. The configuration space of fluid motion within a spatial domain1, \(\mathcal{D}\in\mathbb{R}^{n}\), is given by the diffeomorphism group, \(G=\text{Diff}(\mathcal{D})\). That is, each element, \(g\in G\), is a map from \(\mathcal{D}\) to itself which takes a fluid particle at a position, \(X\in\mathcal{D}\), at initial time \(t=0\), to a position, \(x=g_{t}(X)\), at the current time, \(t\), with \(g_{0}(X)=X\), so that \(g_{0}=Id\). The time-parametrised curve of diffeomorphisms, \(g_{t}\in G\), therefore governs the history of each fluid particle path within the domain. Thus, the fluid motion is described by the transitive action of \(G\) on \(\mathcal{D}\). In what follows, we will denote the corresponding Lie algebra by \(\mathfrak{g}\), which for fluid motion is the space of vector fields, i.e., \(\mathfrak{g}=\mathfrak{X}(\mathcal{D})\). Footnote 1: For our examples of WMFI dynamics, we will take dimension \(n=3\) and \(n=2\) for the examples in section 3 For a \(G\)-invariant Lagrangian defined on the tangent bundle, \(TG\), the equations of motion are given by the standard Euler-Poincare theorem, which can be expressed on \(G\), or in their reduced form on the dual of the Lie algebra, \(\mathfrak{g}^{*}=\Lambda\otimes\text{Den}(\mathcal{D})\), the 1-form densities on domain \(\mathcal{D}\) in the case of fluids with \(L^{2}\) pairing. The symmetry of this description can be broken by the presence of a _parameter_, \(a_{0}\in V^{*}\), in a vector space \(V^{*}\) where there is some representation of \(G\) on \(V\). The advection relation, \[a_{t}(x)=a_{0}(X)g_{t}^{-1}=:g_{t\,*}a_{0}(X)\,, \tag{2.1}\] is the solution of the advection equation, denoted as \[\partial_{t}a+\mathcal{L}_{u}a=0\,,\quad\text{with}\quad u:=\dot{g}_{t}g_{t}^{ -1}\,, \tag{2.2}\] where \(\mathcal{L}_{u}\) denotes the Lie derivative with respect to the Eulerian velocity vector field, \(u:=\dot{g}_{t}g_{t}^{-1}\). The advection equation follows from the Lie chain rule for the push-forward \(g_{t\,*}\) of the initial condition \(a_{0}(X)\) by the time-dependent smooth invertible map \(g_{t}\). Namely, \[\partial_{t}a_{t}(x)=\partial_{t}\big{(}g_{t\,*}a_{0}(X)\big{)}=-\,g_{t\,*} \big{(}\mathcal{L}_{\dot{g}_{t}g_{t}^{-1}}a_{0}(X)\big{)}=-\,\mathcal{L}_{u}a _{t}(x)=-\,\mathcal{L}_{u}a(x,t)\,. \tag{2.3}\] Imposing the advection relation in (2.1) in Hamilton's principle when the Lagrangian is invariant under \(g_{t}\) yields the standard Euler-Poincare theory for semidirect product Lie algebras, [42]. Suppose further that we have an additional configuration space, \(Q\), which represents (order) parameters with their own dynamics, and that we have a representation of the (free, transitive) group action of \(G\) on \(Q\). Within this space we will find dynamics (e.g. waves) occurring within the frame of reference corresponding to the (fluid) motion on \(\mathfrak{g}^{*}\). The distinction between parameters in \(V^{*}\) and \(TQ\) becomes apparent in the variational formulation. Indeed, let us consider the general case in which the Lagrangian \(L\) takes the form \[L:T\left(G\times Q\right)\times V^{*}\rightarrow\mathbb{R}\,, \tag{2.4}\] where \(G\), \(Q\), and \(V^{*}\) are as defined above. 
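As a concrete illustration of the advection relation (2.1)-(2.3), one may specialise the advected quantity \(a\) to the two tensor types that reappear in the fluid examples of Section 3. For a scalar function and for a density, the Lie derivative reduces to the familiar transport operators, so that (a standard computation, recorded here only for orientation)

\[
a=b\ (\text{scalar}):\quad \partial_{t}b+\mathbf{u}\cdot\nabla b=0\,,\qquad\qquad
a=D\,d^{n}x\ (\text{density}):\quad \partial_{t}D+\operatorname{div}(D\mathbf{u})=0\,.
\]

These are, respectively, the advection law for a scalar buoyancy and the continuity equation for the mass density, both of which appear again in the Euler-Boussinesq example of Section 3.1.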
We assume that \(G\) acts on \(T\left(G\times Q\right)\times V^{*}\) in the natural way on the right. We denote this right action using concatenation and tangent fibre notation \(u_{g}\) at footpoint \(g\) on the manifold \(G\) as \[(g,u_{g},q,u_{q},a)h=(gh,u_{g}h,qh,u_{q}h,ah)\,. \tag{2.5}\] Invariance of the Lagrangian \(L\) in Hamilton's principle under the right action of \(G\) is written as \[L(g,\dot{g},q,\dot{q},a_{0})=L(gh,\dot{g}h,qh,\dot{q}h,a_{0}h)\,, \tag{2.6}\] for all \(h\in G\). Choosing \(h=g^{-1}\), one defines the reduced Lagrangian as \[L(e,\dot{g}g^{-1},qg^{-1},\dot{q}g^{-1},a_{0}g^{-1})=:\ell(u,n,\nu,a)\,, \tag{2.7}\] with further notation \(u:=\dot{g}g^{-1}\), \(n=qg^{-1}\) and \(\nu=\dot{q}g^{-1}\). The reduced Lagrangian \(\ell\) is then associated to the quotient map, \[T(G\times Q)\times V^{*}\to\mathfrak{g}\times TQ\times V^{*}\,. \tag{2.8}\] We have thus formulated the reduced Euler-Poincare variational principle, \[0=\delta S=\delta\int_{t_{0}}^{t_{1}}\ell(u,n,\nu,a)\,dt\,, \tag{2.9}\] defined subject to the following constrained variations of \(u,n,\nu\) and \(a\), derived from their definitions, \[\begin{split}\delta u&=\partial_{t}\eta-\text{ad}_{u}\,\eta\,,\\ \delta n&=w-\mathcal{L}_{\eta}n\,,\\ \delta\nu&=\partial_{t}w+\mathcal{L}_{u}w-\mathcal{L}_{\eta}\nu\,,\\ \delta a&=-\mathcal{L}_{\eta}a\,,\end{split} \tag{2.10}\] where \(\text{ad}_{u}\eta=-[u,\eta]\), \(\eta=\delta gg^{-1}\) and \(w=\delta qg^{-1}\) are arbitrary and vanish at the endpoints in time, \(t=t_{0}\) and \(t=t_{1}\). Here, the Lie derivative w.r.t. the vector field \(\eta\) is denoted as \(\mathcal{L}_{\eta}\). The dual Lie derivative operator, \(\diamond\), is defined via pairings \(\left\langle\cdot\,,\,\cdot\right\rangle\) over \(\mathfrak{g}\) and \(T^{*}Q\) as \[\left\langle p\,,\,\mathcal{L}_{\eta}q\right\rangle_{Q\times Q^{*}}=\left\langle-p\diamond q\,,\,\eta\right\rangle_{\mathfrak{g}}\,, \tag{2.11}\] for all \((p,q)\in Q\times Q^{*}\) and \(\eta\in\mathfrak{g}\). Here we have used subscripts to distinguish between the pairings over the cotangent bundle \(T^{*}Q\) and the Lie algebra \(\mathfrak{g}\). One can similarly define the \(\diamond\) operator for the cotangent bundle \(T^{*}V\). We will drop the subscripts in subsequent derivations when the space corresponding to the pairing is evident from the context. 
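To make the diamond operation in (2.11) concrete, consider the simplest case in which \(Q\) is a space of scalar functions on \(\mathcal{D}\) acted on by push-forward, so that \(\mathcal{L}_{\eta}q=\eta\cdot\nabla q\) for a vector field \(\eta\in\mathfrak{g}\). This particular choice of \(Q\) is made here only to illustrate the sign convention in (2.11); for a conjugate momentum density \(p\), the defining pairing gives

\[
\left\langle p\,,\,\mathcal{L}_{\eta}q\right\rangle_{Q\times Q^{*}}
=\int_{\mathcal{D}}p\,(\eta\cdot\nabla q)\,d^{n}x
=\left\langle p\,\nabla q\cdot d\mathbf{x}\otimes d^{n}x\,,\,\eta\right\rangle_{\mathfrak{g}}\,,
\qquad\text{so that}\qquad
p\diamond q=-\,p\,\nabla q\cdot d\mathbf{x}\otimes d^{n}x\,.
\]

In particular, the diamond operation returns a 1-form density, that is, an element of \(\mathfrak{g}^{*}\), which is what allows it to appear as a force term on the right-hand side of the momentum equation below.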
Upon applying the constrained variations in (2.10), the variational principle in (2.9) takes its Euler-Poincare form, \[\begin{split} 0=\delta S&=\int\left\langle\frac{\delta\ell}{\delta u}\,,\,\delta u\right\rangle+\left\langle\frac{\delta\ell}{\delta n}\,,\,\delta n\right\rangle+\left\langle\frac{\delta\ell}{\delta\nu}\,,\,\delta\nu\right\rangle+\left\langle\frac{\delta\ell}{\delta a}\,,\,\delta a\right\rangle\,dt\\ &=\int\left\langle\frac{\delta\ell}{\delta u}\,,\,\partial_{t}\eta-\text{ad}_{u}\,\eta\right\rangle+\left\langle\frac{\delta\ell}{\delta n}\,,\,w-\mathcal{L}_{\eta}n\right\rangle+\left\langle\frac{\delta\ell}{\delta\nu}\,,\,\partial_{t}w+\mathcal{L}_{u}w-\mathcal{L}_{\eta}\nu\right\rangle+\left\langle\frac{\delta\ell}{\delta a}\,,\,-\mathcal{L}_{\eta}a\right\rangle\,dt\\ &=\int\left\langle-\partial_{t}\frac{\delta\ell}{\delta u}-\text{ad}_{u}^{*}\,\frac{\delta\ell}{\delta u}+\frac{\delta\ell}{\delta n}\diamond n+\frac{\delta\ell}{\delta\nu}\diamond\nu+\frac{\delta\ell}{\delta a}\diamond a\,,\,\eta\right\rangle+\left\langle-\partial_{t}\frac{\delta\ell}{\delta\nu}+\mathcal{L}_{u}^{T}\frac{\delta\ell}{\delta\nu}+\frac{\delta\ell}{\delta n}\,,\,w\right\rangle\,dt\,,\end{split} \tag{2.12}\] where the coadjoint operation \(\text{ad}^{*}:\mathfrak{g}\times\mathfrak{g}^{*}\to\mathfrak{g}^{*}\) for right action is defined by the \(L^{2}\) pairing \[\left\langle\text{ad}_{u}^{*}\,\mu\,,\,v\right\rangle:=\left\langle\mu\,,\,\text{ad}_{u}\,v\right\rangle=\left\langle\mu\,,\,-\mathcal{L}_{u}v\right\rangle\,,\quad\text{and}\quad\text{ad}_{u}^{*}\,\mu=\mathcal{L}_{u}\mu\quad\text{for}\quad\mu\in\mathfrak{g}^{*},\quad u,v\in\mathfrak{g}\,. \tag{2.13}\] The stationary conditions resulting from the variations, together with the definitions of \(w\) and \(a\), provide the evolution equations for the dynamics of the whole system,2 Footnote 2: As discussed further below, the equation set in (2.14) for WMFI dynamics taking place in the frame of the fluid motion closely tracks the equations for the dynamics of complex fluids reviewed authoritatively in [19]. \[\begin{split}\left(\partial_{t}+\text{ad}_{u}^{*}\right)\frac{\delta\ell}{\delta u}&=\frac{\delta\ell}{\delta n}\diamond n+\frac{\delta\ell}{\delta\nu}\diamond\nu+\frac{\delta\ell}{\delta a}\diamond a\,,\\ \left(\partial_{t}+\mathcal{L}_{u}\right)\frac{\delta\ell}{\delta\nu}&=\frac{\delta\ell}{\delta n}\,,\\ \left(\partial_{t}+\mathcal{L}_{u}\right)n&=\nu\,,\\ \left(\partial_{t}+\mathcal{L}_{u}\right)a&=0\,,\end{split} \tag{2.14}\] where we have used the fact that \(-\mathcal{L}_{u}^{T}=\mathcal{L}_{u}\) under integration by parts in the \(L^{2}\) pairing. We shall refer to the equations (2.14) as Euler-Poincare equations with cocycles, versions of which have also been derived in a variety of places elsewhere for hybrid dynamics, as well as when using _metamorphosis reduction_ in [18]. Note that the second and third equations in (2.14) are the Euler-Lagrange equations in the frame of reference moving with the dynamics on \(\mathfrak{g}\). Hence, the usual time derivative found in the Euler-Lagrange equations has been replaced by the advective derivative \(\partial_{t}+\mathcal{L}_{u}\). It should also be noted that the third equation in (2.14) takes the same form as the kinematic boundary condition, commonly found in free boundary fluid dynamics models. 
Thus, the kinematic boundary constraint may be interpreted as a relationship between position and velocity in a moving frame of reference, in agreement with the statement that a particle initially on the surface remains so. See, e.g., [35]. **Remark 2.1** (Hamilton-Pontryagin principle and semidirect product reduction).: _The Hamilton-Pontryagin principle equivalent to the constrained variational principle (2.9) is the following,_ \[0=\delta\int\ell(u,qg^{-1},vg^{-1},a)+\left\langle m\,,\,\dot{g}g^{-1}-u\right\rangle+\left\langle b\,,\,a_{0}g^{-1}-a\right\rangle+\left\langle pg^{-1}\,,\,\dot{q}g^{-1}-vg^{-1}\right\rangle\,dt\,, \tag{2.15}\] _where all variations are arbitrary, modulo vanishing at the end points in time. Note that the Euler-Poincare constraint \(\left\langle p\,,\,\dot{q}-v\right\rangle\) has been acted on from the right by \(g^{-1}\) and it takes the form of the kinematic boundary condition for a free boundary. Together with the constraint \(\left\langle m\,,\,\dot{g}g^{-1}-u\right\rangle\), one can view the tuple \((g,q)\) as elements of the semi-direct product group \(\tilde{S}=G\,\textcircled{S}\,Q\) since the relation_ \[\partial_{t}(g,q)(g,q)^{-1}=(\dot{g}g^{-1},\dot{q}g^{-1})\,, \tag{2.16}\] _is isomorphic to the Lie algebra \(\tilde{\mathfrak{s}}\) of \(\tilde{S}\). See, e.g., [10] for another application of this relation. The metamorphosis Hamilton-Pontryagin variational principle in (2.15) becomes_ \[0=\delta\int\ell(u,n,\nu,a)+\left\langle(m,\pi)\,,\,\partial_{t}(g,q)(g,q)^{-1}-(u,\nu)\right\rangle_{\tilde{\mathfrak{s}}}+\left\langle b\,,\,a_{0}g^{-1}-a\right\rangle\,dt\,, \tag{2.17}\] _when the reduced definitions \(u:=\dot{g}g^{-1}\), \(n=qg^{-1}\), \(\nu=\dot{q}g^{-1}\) are used, and one defines \(\pi:=pg^{-1}\). The subscript \(\tilde{\mathfrak{s}}\) included in the pairing indicates that the pairing is to be taken with respect to \(\tilde{\mathfrak{s}}\)._ **Remark 2.2** (Symmetry breaking).: _The explicit dependence of the Lagrangian, \(\ell\), on \(n=qg^{-1}\) means that the dynamics is not reduced by the entire symmetry group \(\tilde{S}=G\,\textcircled{S}\,Q\) from the cotangent bundle \(T^{*}\tilde{S}\). Instead, the reduction is only by \(G\) and thus the dynamics takes place on the Lie-algebra \(\tilde{\mathfrak{s}}:=\mathfrak{g}\,\textcircled{S}\,(T^{*}Q)\). Thus, the canonical two-cocycle arising from metamorphosis reduction of this type is inherited from the canonical Hamiltonian motion on \(T^{*}Q\)._ **Remark 2.3** (A composition of maps).: _As shown in [59], the Euler-Poincare equations (2.14) can similarly be obtained from a Lagrangian depending on \(TQ\times V^{*}\) in which an element of \(TQ\) is defined as a composition. This feature builds on the 'composition of maps' approach discussed in [35]. The resulting Lagrangian is defined to be right invariant under the action of \(G\) as_ \[L(g,\dot{g},ng,(ng)\dot{},a_{0})=\ell(\dot{g}g^{-1},n,(ng)\dot{}g^{-1},a_{t})\,, \tag{2.18}\] _where we have again denoted the composition of two maps by concatenation from the right. By writing the composition as a pullback, the Lie chain rule allows us to define \(\nu\) as follows_ \[(g^{*}n)\dot{}\,g^{-1}=g^{*}\big{[}(\partial_{t}+\mathcal{L}_{u})n\big{]}g^{-1}=(\partial_{t}+\mathcal{L}_{u})n=:\nu\,, \tag{2.19}\] _since the pull-back by \(g\) is the inverse of the push-forward by \(g\). Indeed, we see that this agrees with the definition made in the reduction by stages process above; namely, \(\nu=\dot{q}g^{-1}\)._ ### The Hamiltonian formulation. 
One may also consider the reduced variational principle from the perspective of Hamiltonian mechanics. Indeed, the corresponding reduced Hamiltonian \[h:\mathfrak{g}^{*}\times T^{*}Q\times V^{*}\to\mathbb{R}\,, \tag{2.20}\] can be derived equivalently by reduction by symmetry on the Hamiltonian side. Please note that the Hamiltonian \(H:T^{*}(G\times Q)\times V^{*}\to\mathbb{R}\) is invariant under the right action of \(G\), where the group action is denoted by concatenation. The reduced Hamiltonian \(h\) can be found by the quotient map \[T^{*}(G\times Q)\times V^{*}\to\mathfrak{g}^{*}\times T^{*}Q\times V^{*}\,,\quad(g,\alpha,q,p,a_{0})\to(m,n,\pi,a)\,, \tag{2.21}\] where \(m:=\alpha g^{-1}\) and \(\pi:=pg^{-1}\). One can equivalently use the reduced Legendre transform \[h(m,n,\pi,a)=\langle m\,,\,u\rangle+\langle\pi\,,\,\nu\rangle-\ell(u,n,\nu,a)\,, \tag{2.22}\] to obtain the reduced Hamiltonian \(h\) from the corresponding reduced Lagrangian \(\ell\). Noting that \(\frac{\delta\ell}{\delta\nu}=\pi\) and \(\frac{\delta h}{\delta\pi}=\nu\), one can write (2.14) in Hamiltonian form as \[\begin{split}(\partial_{t}+\mathrm{ad}_{u}^{*})\,m&=-\frac{\delta h}{\delta n}\diamond n-\frac{\delta h}{\delta\pi}\diamond\pi-\frac{\delta h}{\delta a}\diamond a\,,\\ (\partial_{t}+\mathcal{L}_{u})\,\pi&=-\frac{\delta h}{\delta n}\,,\\ (\partial_{t}+\mathcal{L}_{u})\,n&=\frac{\delta h}{\delta\pi}\,,\\ (\partial_{t}+\mathcal{L}_{u})\,a&=0\,,\quad\text{where}\quad u:=\frac{\delta h}{\delta m}\,,\end{split} \tag{2.23}\] which are the Lie-Poisson equations with cocycles. In particular, the second and third equations in (2.23) are _Hamilton's canonical equations_, boosted into a moving frame of reference. At the level of the equations, this is equivalent to replacing the time derivative with \(\partial_{t}+\mathcal{L}_{u}\), as we saw with the Euler-Lagrange equations in (2.14). Hence, one can arrange (2.23) into Poisson bracket form as \[\partial_{t}\begin{pmatrix}m\\ a\\ \pi\\ n\end{pmatrix}=-\begin{pmatrix}\mathrm{ad}^{*}_{\square}\,m&\square\diamond a&\square\diamond\pi&\square\diamond n\\ \mathcal{L}_{\square}\,a&0&0&0\\ \mathcal{L}_{\square}\,\pi&0&0&1\\ \mathcal{L}_{\square}\,n&0&-1&0\end{pmatrix}\begin{pmatrix}\frac{\delta h}{\delta m}=u\\ \frac{\delta h}{\delta a}=-\frac{\delta\ell}{\delta a}\\ \frac{\delta h}{\delta\pi}=\nu\\ \frac{\delta h}{\delta n}=-\frac{\delta\ell}{\delta n}\end{pmatrix}\,. \tag{2.24}\] The Hamiltonian structure of the Poisson bracket (2.24) is _tangled_ in the sense that the Lie-Poisson bracket on \(\mathfrak{g}^{*}\,\textcircled{S}\,V^{*}\) is coupled to the canonical Poisson bracket on \(T^{*}Q\) via the semidirect product structure. The Poisson structure is then \(\mathfrak{g}^{*}\,\textcircled{S}\,V^{*}\,\textcircled{S}\,T^{*}Q\). One can _untangle_ the Hamiltonian structure of the Poisson bracket (2.24) into the direct sum of the Lie-Poisson bracket on \(\mathfrak{g}^{*}\,\textcircled{S}\,V^{*}\) and the canonical Poisson bracket on \(T^{*}Q\). This is done via the map \[(m,n,\pi,a)\in\mathfrak{g}^{*}\times T^{*}Q\times V^{*}\to(m+\pi\diamond n,n,\pi,a)=:(\kappa,n,\pi,a)\in\mathfrak{g}^{*}\times T^{*}Q\times V^{*}\,. \tag{2.25}\]
The untangled Poisson structure can be directly calculated and written in terms of the transformed Hamiltonian \(h_{HP}(\kappa,n,\pi,a)\) as \[\partial_{t}\begin{pmatrix}\kappa\\ a\\ \pi\\ n\end{pmatrix}=-\begin{pmatrix}\mathrm{ad}^{*}_{\square}\,\kappa&\square\diamond a&0&0\\ \mathcal{L}_{\square}\,a&0&0&0\\ 0&0&0&1\\ 0&0&-1&0\end{pmatrix}\begin{pmatrix}\frac{\delta h_{HP}}{\delta\kappa}=u\\ \frac{\delta h_{HP}}{\delta a}=-\frac{\delta\ell_{HP}}{\delta a}\\ \frac{\delta h_{HP}}{\delta\pi}=\nu-\mathcal{L}_{u}n\\ \frac{\delta h_{HP}}{\delta n}=-\frac{\delta\ell}{\delta n}+\mathcal{L}_{u}\pi\end{pmatrix}\,. \tag{2.26}\] As pointed out in [18], the untangled Poisson structure can be derived via the _Hamilton Poincare_ reduction principle when the Hamiltonian collectivises into the momentum map \(\kappa=m+\pi\diamond n\). The dual map of (2.25) is \[(u,n,\nu,a)\in\mathfrak{g}\times TQ\times V^{*}\to(u,n,\nu-\mathcal{L}_{u}n,a)=:(u,n,\dot{n},a)\in\mathfrak{g}\times TQ\times V^{*}\,, \tag{2.27}\] which are the variables in _Lagrange Poincare reduction_ of \(L\) to the reduced Lagrangian \(\ell_{LP}\) and we have the equivalence \[\ell(u,n,\nu,a)=\ell_{LP}(u,n,\nu-\mathcal{L}_{u}n,a)\,. \tag{2.28}\] **Remark 2.4** (Untangling from constrained variations).: _Recall the constrained variations (2.10). The choice of whether to define the variations in terms of \((\delta q)g^{-1}\) or \(\delta(qg^{-1})\) will lead respectively to the tangled and untangled Euler-Poincare equations corresponding to the Poisson operators (2.24) and (2.26). This is due to the correspondence between the variations and definitions of \(\nu\) and \(\dot{n}\) as the tangled and untangled velocities in \(TQ\)._ By assuming further that the fluid density \(D\) is also advected by the flow, i.e. \(\partial_{t}D+\mathcal{L}_{u}D=0\), we find the following Kelvin-circulation theorem for the momentum map \(\kappa=m+\pi\diamond n\), \[\frac{d}{dt}\oint_{c(u)}\underbrace{\frac{m}{D}+\frac{\pi\diamond n}{D}}_{\text{`Momentum shift'}}=-\oint_{c(u)}\frac{1}{D}\left(\frac{\delta h}{\delta a}\diamond a+\frac{\delta h}{\delta D}\diamond D\right)\,. \tag{2.29}\] The additional term \((\pi\diamond n)/D\) in the integrand of the Kelvin-Noether theorem in (2.29) is a shift in momentum 1-form, as observed earlier in the GLM and CL cases. The canonically conjugate pair \((\pi,n)\) here are Hamiltonian variables whose dynamics takes place in the frame of the fluid motion, appearing in the result of Hamilton's principle in equation (2.26). Using the tangled form of the Poisson matrix (2.24) and the untangled Kelvin-Noether theorem (2.29) yields the separated Kelvin-Noether equations, \[\begin{split}\frac{d}{dt}\oint_{c(u)}\frac{m}{D}&=-\oint_{c(u)}\frac{1}{D}\left(\frac{\delta h}{\delta a}\diamond a+\frac{\delta h}{\delta D}\diamond D\right)-\oint_{c(u)}\underbrace{\frac{1}{D}\left(\frac{\delta h}{\delta n}\diamond n-\pi\diamond\frac{\delta h}{\delta\pi}\right)}_{\text{Non-inertial force}}\,,\\ \frac{d}{dt}\oint_{c(u)}\frac{\pi\diamond n}{D}&=\oint_{c(u)}\frac{1}{D}\left(\frac{\delta h}{\delta n}\diamond n-\pi\diamond\frac{\delta h}{\delta\pi}\right)\,.\end{split} \tag{2.30}\] Thus, the wave degree of freedom introduces a non-inertial force reminiscent of the Coriolis force, except that it has become dynamical. Equations (2.30) are interpreted as the result of shifting the Hamiltonian \((\pi,n)\) dynamics into the frame of the moving fluid. 
In the inertial Eulerian frame, the result of the Galilean shift of the Hamiltonian \((\pi,n)\) dynamics is represented by the shift in the momentum 1-form in the Kelvin circulation integrand in (2.29). In the non-inertial Lagrangian frame, the result of the Galilean shift of the Hamiltonian \((\pi,n)\) dynamics is represented as the additional non-inertial force on the current in (2.30). **Remark 2.5** (Partial Legendre transform (Routhian)).: _One can show the Hamilton-Pontryagin principle in (2.15) takes a form similar to that introduced in [32] through a partial Legendre transform of a particular form of the reduced Lagrangian \(\ell\). Namely, one assumes that \(\ell\) is separable between the variables in \(TQ\) and variables in \(\mathfrak{g}\times V^{*}\),_ \[\ell(u,n,\nu,a)=\ell_{\mathfrak{g}\times V^{*}}(u,a)+\ell_{TQ}(n,\nu)\,. \tag{2.31}\] _After using the partial Legendre transform to obtain the Hamiltonian_ \[h_{T^{*}Q}(\pi,n):=\langle\pi\,,\,\nu\rangle-\ell_{TQ}(n,\nu)\,, \tag{2.32}\] _one inserts \(h_{T^{*}Q}\) into the Hamilton-Pontryagin form (2.15) to find the equivalent action principle_ \[0=\delta\int\ell_{\mathfrak{g}\times V^{*}}(u,a)+\left\langle m\,,\,\dot{g}g^{-1}-u\right\rangle+\left\langle b\,,\,a_{0}g^{-1}-a\right\rangle+\left\langle\pi\,,\,\dot{q}g^{-1}\right\rangle-h_{T^{*}Q}(\pi,qg^{-1})\,dt\,. \tag{2.33}\] _In terms of the \((\pi,n)\) variables, one can cast (2.33) into a familiar form for wave dynamics seen, e.g. in [32]. Namely, the Hamilton-Pontryagin form (2.33) can be cast as a phase-space Lagrangian,_ \[0=\delta\int\ell_{\mathfrak{g}\times V^{*}}(u,a)+\left\langle\pi\,,\,\partial_{t}n+\mathcal{L}_{u}n\right\rangle-h_{T^{*}Q}(\pi,n)\,dt\,, \tag{2.34}\] _where we have introduced the constrained variations \(\delta u=\partial_{t}\eta-\mathrm{ad}_{u}\,\eta\) and \(\delta a=-\mathcal{L}_{\eta}a\) in place of the Hamilton-Pontryagin constraints and the canonical Hamiltonian variables \((\pi,n)\) can be varied arbitrarily._ _Equivalently, the metamorphosis phase-space form in (2.34) can be seen from the perspective of the 'composition of maps' form of the Lagrangian discussed in Remark 2.3. Indeed, beginning from the Lagrangian (2.18), notice that the form of the right hand side of the inner product term of equation (2.34) is a direct consequence of equation (2.19)._ ### Additional symmetry So far, we have only considered the case where the symmetries of the system exist solely in the Lie group \(G\). It is natural to extend the reduction principle to consider cases where the configuration manifold \(Q\) is also a Lie group with corresponding Lie algebra \(\mathfrak{q}\). Additionally, we introduce explicit dependence of the Lagrangian \(L\) on an order parameter \(\chi_{0}\in V_{Q}^{*}\) for \(Q\), such that \[L(g,\dot{g},q,\dot{q},\chi_{0},a_{0}):TG\times TQ\times V_{Q}^{*}\times V^{*}\rightarrow\mathbb{R}\,, \tag{2.35}\] and assume that the Lagrangian is invariant under the action of both \(Q\) and \(G\). For simplicity of exposition, let us consider only the right action of \(q\in Q\) on \(TQ\) and \(\chi_{0}\); so the \(Q\)-reduced Lagrangian, \(\tilde{L}\), takes the following form \[L(g,\dot{g},q,\dot{q},\chi_{0},a_{0})=:\tilde{L}(g,\dot{g},\dot{q}q^{-1},\chi_{0}q^{-1},a_{0}):TG\times\mathfrak{q}\times V_{Q}^{*}\times V^{*}\rightarrow\mathbb{R}\,. \tag{2.36}\] After the reduction by \(Q\), the equations of motion are the Lagrange-Poincare equations [9]. 
The further reduction by \(G\) then defines the fully reduced Lagrangian \(\tilde{\ell}\) by \[\tilde{L}(g,\dot{g},\dot{q}q^{-1},\chi_{0}q^{-1},a_{0})=\tilde{L}(e,\dot{g}g^{-1},(\dot{q}q^{-1})g^{-1},(\chi_{0}q^{-1})g^{-1},a_{0}g^{-1}) \tag{2.37}\] \[=:\tilde{\ell}(u,\omega,\chi,a):\mathfrak{g}\times\mathfrak{q}\times V_{Q}^{*}\times V^{*}\rightarrow\mathbb{R}\,, \tag{2.38}\] where one defines the following abbreviated notation, \[u:=\dot{g}g^{-1},\quad\omega:=(\dot{q}q^{-1})g^{-1},\quad\chi:=(\chi_{0}q^{-1})g^{-1},\quad\text{and}\quad a:=a_{0}g^{-1}\,. \tag{2.39}\] The reduced Euler-Poincare variational principle becomes \[0=\delta S=\delta\int_{t_{0}}^{t_{1}}\tilde{\ell}(u,\omega,\chi,a)\,dt\,, \tag{2.40}\] subject to the constrained variations obtained from the definitions of \(u\), \(\omega\) and \(a\) in equation (2.39), \[\begin{split}\delta u&=\partial_{t}\eta-\mathrm{ad}_{u}\,\eta\,,\\ \delta\omega&=\partial_{t}\gamma-\mathcal{L}_{\eta}\omega+\mathcal{L}_{u}\gamma-\mathrm{ad}_{\omega}\,\gamma\,,\\ \delta\chi&=-\mathcal{L}_{\eta}\chi-\widehat{\mathcal{L}}_{\gamma}\chi\,,\\ \delta a&=-\mathcal{L}_{\eta}a\,.\end{split} \tag{2.41}\] Here, we denote \(\gamma:=(\delta qq^{-1})g^{-1}\), and both \(\gamma\) and \(\eta\) are arbitrary and vanish at the endpoints \(t=t_{0},t_{1}\). We also introduce the notation \(\widehat{\mathcal{L}}_{\gamma}\) for the action of an arbitrary Lie algebra element \(\gamma\in\mathfrak{q}\). As in the definition of the diamond operator (\(\diamond\)) in (2.11) for the Lie-derivative of vector fields \(\eta\in\mathfrak{g}\), we define the diamond operator (\(\widehat{\diamond}\)) with respect to the action by \(\mathfrak{q}\) through \[\left\langle-\xi\,\widehat{\diamond}\,\chi\,,\,\gamma\right\rangle_{\mathfrak{q}}=\left\langle\xi\,,\,\widehat{\mathcal{L}}_{\gamma}\chi\right\rangle_{T^{*}V_{Q}}\,, \tag{2.42}\] for all \((\xi,\chi)\in T^{*}V_{Q}\) and \(\gamma\in\mathfrak{q}\). After taking variations one finds the Euler-Poincare equations from the reduced Euler-Poincare principle (2.40) as \[\begin{split}\partial_{t}\frac{\delta\tilde{\ell}}{\delta u}+\mathrm{ad}_{u}^{*}\frac{\delta\tilde{\ell}}{\delta u}&=\frac{\delta\tilde{\ell}}{\delta\omega}\diamond\omega+\frac{\delta\tilde{\ell}}{\delta a}\diamond a+\frac{\delta\tilde{\ell}}{\delta\chi}\diamond\chi\,,\\ \partial_{t}\frac{\delta\tilde{\ell}}{\delta\omega}+\mathrm{ad}_{\omega}^{*}\frac{\delta\tilde{\ell}}{\delta\omega}+\mathcal{L}_{u}\frac{\delta\tilde{\ell}}{\delta\omega}&=\frac{\delta\tilde{\ell}}{\delta\chi}\,\widehat{\diamond}\,\chi\,,\\ \partial_{t}\chi+\mathcal{L}_{u}\chi+\widehat{\mathcal{L}}_{\omega}\chi&=0\,,\\ \partial_{t}a+\mathcal{L}_{u}a&=0\,.\end{split} \tag{2.43}\] Under similar considerations on the Hamiltonian side, we can construct the reduced Hamiltonian \(\tilde{h}(m,\lambda,\chi,a):\mathfrak{g}^{*}\times\mathfrak{q}^{*}\times V_{Q}^{*}\times V^{*}\to\mathbb{R}\) via the Legendre transform such that \(\lambda:=\frac{\delta\tilde{\ell}}{\delta\omega}\) and \(m:=\frac{\delta\tilde{\ell}}{\delta u}\). 
The equations (2.43) can then be written in a Poisson matrix form \[\partial_{t}\begin{pmatrix}m\\ a\\ \lambda\\ \chi\end{pmatrix}=-\begin{pmatrix}\mathrm{ad}^{*}_{\square}\,m&\square\diamond a&\square\diamond\lambda&\square\diamond\chi\\ \mathcal{L}_{\square}\,a&0&0&0\\ \mathcal{L}_{\square}\,\lambda&0&\mathrm{ad}^{*}_{\square}\,\lambda&\square\,\widehat{\diamond}\,\chi\\ \mathcal{L}_{\square}\,\chi&0&\widehat{\mathcal{L}}_{\square}\,\chi&0\end{pmatrix}\begin{pmatrix}\frac{\delta\tilde{h}}{\delta m}=u\\ \frac{\delta\tilde{h}}{\delta a}=-\frac{\delta\tilde{\ell}}{\delta a}\\ \frac{\delta\tilde{h}}{\delta\lambda}=\omega\\ \frac{\delta\tilde{h}}{\delta\chi}=-\frac{\delta\tilde{\ell}}{\delta\chi}\end{pmatrix}\,. \tag{2.44}\] The Lie-Poisson matrix in equation (2.44) defines a Lie-Poisson bracket on \(\mathfrak{g}^{*}\times\mathfrak{q}^{*}\times V_{Q}^{*}\times V^{*}\), which is the same as the bracket on the dual of the semidirect product Lie algebra \(\mathfrak{s}^{*}=\mathfrak{g}^{*}\check{\otimes}((\mathfrak{q}^{*}\check{\otimes}V_{Q}^{*})\oplus V^{*})\). Thus, equations (2.44) are the canonical Lie-Poisson equations on \(\mathfrak{s}\), the Lie-algebra of the semi-direct product group \(S=G\check{\otimes}((Q\check{\otimes}V_{Q})\oplus V)\), under the reduction by symmetry of \(S\) itself. Reduction by left action follows an analogous procedure, and a combination of left and right reduction may also be applied. An extensive literature exists for reduction by symmetry in the theory and applications of geometric mechanics, whose foundations are reviewed in Abraham and Marsden [2]. A geophysical fluid system with similar Poisson structure to (2.44) arises in the vertical slice models [10]. In this model, one has \(q\in\mathrm{Diff}(\mathbb{R}^{2})\) and the symmetry group is the full diffeomorphism group \(\mathrm{Diff}(\mathbb{R}^{2})\). 
Then, the reduction process gives \(\omega\in\mathfrak{X}\) and \(\pi\in\mathfrak{X}^{*}\) and the Lie-Poisson matrix becomes, \[\partial_{t}\begin{pmatrix}m\\ \pi\\ a\end{pmatrix}=-\begin{pmatrix}\mathrm{ad}_{\Box}^{*}\,m&\mathrm{ad}_{\Box}^{*}\,\pi&\Box\diamond a\\ \mathrm{ad}_{\Box}^{*}\,\pi&\mathrm{ad}_{\Box}^{*}\,\pi&0\\ \mathcal{L}_{\Box}\,a&0&0\end{pmatrix}\begin{pmatrix}\frac{\delta h}{\delta m}=u\\ \frac{\delta h}{\delta\pi}=\omega\\ \frac{\delta h}{\delta a}=-\frac{\delta l}{\delta a}\end{pmatrix}\,. \tag{2.45}\] Starting from a Lagrangian \(L\) defined on \(T(G\times Q)\times V^{*}\), the two reduction pathways discussed here can be represented diagrammatically as in Figure 1.

Figure 1: Reduction pathways.

Both branches of this diagram reflect the reduction process relating to the specific WMFI models discussed in Section 3. ## 3 Examples: Eulerian wave elevation field equations In this section, we feature worked examples of wave mean flow interaction (WMFI) models. To better understand the structure of the forthcoming models, see also Appendix B, where one can find an elementary example demonstrating the coupling of a field of simple harmonic oscillators to an Euler fluid. ### WKB internal waves in the Euler-Boussinesq (EB) approximation Gjaja and Holm [23] closed the Generalised Lagrangian Mean (GLM) theory of Andrews and McIntyre [3] for the case that the displacement fluctuation \(\xi(\mathbf{x},t)\in\mathbb{R}^{3}\) away from the Lagrangian mean trajectory in [3] is given by a single-frequency travelling wave with slowly varying complex vector amplitude, \[\xi(\mathbf{x},t)=\frac{1}{2}\Big{(}\mathbf{a}(\mathbf{x},t)e^{i\phi(\mathbf{x},t)/\epsilon}+\mathbf{a}^{*}(\mathbf{x},t)e^{-i\phi(\mathbf{x},t)/\epsilon}\Big{)}\quad\text{with}\quad\epsilon\ll 1\,.\] Holm [34] simplified the wave mean flow interaction (WMFI) closure in [23] by neglecting pressure coupling and Coriolis force in the dispersion relation, thereby placing the WMFI theory into the present hybrid formulation, by coupling Lagrangian mean EB fluid equations to leading order Hamiltonian wave dynamics in the following variational principle \[0=\delta\int_{t_{0}}^{t_{1}}\int_{\mathcal{D}}\frac{D}{2}\big{|}\mathbf{u}^{L}\big{|}^{2}+D\mathbf{u}^{L}\cdot\mathbf{R}(\mathbf{x})-gDbz-p(D-1) \tag{3.1}\] \[\qquad\qquad\qquad\qquad\qquad-N(\partial_{t}\phi+\mathbf{u}^{L}\cdot\nabla\phi)\,d^{3}x\,dt-H_{W}(N,\mathbf{k})\,dt\quad\text{with}\quad\mathbf{k}:=\nabla\phi\,,\] where the constrained variations are \(\delta\mathbf{u}^{L}=\partial_{t}\mathbf{w}+[\mathbf{u}^{L},\mathbf{w}]\) and \(\delta D=-\text{div}(D\mathbf{w})\), and the arbitrary variations are \(\mathbf{w}\), \(\delta N\), \(\delta p\) and \(\delta\phi\), which vanish at the endpoints. The first summand of the variational principle in (3.1) governs the Lagrangian mean EB fluid dynamics, and the second summand in that variational principle governs the dynamics of the leading order fluctuations away from the mean. Among the fluid variables, \(\mathbf{u}^{L}(\mathbf{x},t)\) is the Lagrangian mean velocity, \(\operatorname{curl}\mathbf{R}(\mathbf{x})=2\mathbf{\Omega}(\mathbf{x})\) is the Coriolis parameter, \(Dd^{3}x\) is the volume element and \(b\) is the scalar buoyancy. As for the wave variables, \(Nd^{3}x\) is the wave action density and \(\phi\) is the canonically conjugate scalar wave phase.
From the variational principle (3.1), the \(\delta N\)- and \(\delta\phi\)-variations give the modified canonical Hamilton's equations for the wave dynamics, \[(\partial_{t}+\mathcal{L}_{u^{L}})\phi+\frac{\delta H_{W}}{\delta N}=0\quad\text{and}\quad(\partial_{t}+\mathcal{L}_{u^{L}})(N\,d^{3}x)+d\left(\frac{\delta H_{W}}{\delta\mathbf{k}}\right)=0\,, \tag{3.2}\] where we see that the fluid velocity \(u^{L}\) transports the wave dynamics in the reference frame of the fluid flow. The equations (3.2) can be assembled to give the evolution equation of the wave momentum density \(N\nabla\phi\cdot d\mathbf{x}\otimes d^{3}x\) as the following, \[(\partial_{t}+\mathcal{L}_{u^{L}})\Big{(}N\nabla\phi\cdot d\mathbf{x}\otimes d^{3}x\Big{)}=-\left(\operatorname{div}\!\left(\frac{\delta H_{W}}{\delta\mathbf{k}}\right)\!d\phi-Nd\Big{(}\frac{\delta H_{W}}{\delta N}\Big{)}\right)\otimes d^{3}x\,. \tag{3.3}\] The evolution equations of the fluid advected quantities and the evolution of the total momentum can also be derived from the variational principle to be \[\begin{split}&(\partial_{t}+\mathcal{L}_{u^{L}})\big{(}\mathbf{M}\cdot d\mathbf{x}\otimes d^{3}x\big{)}=(Dd\pi+Dgzdb)\otimes d^{3}x\,,\\ &(\partial_{t}+\mathcal{L}_{u^{L}})(D\,d^{3}x)=0\,,\qquad D=1\,,\qquad(\partial_{t}+\mathcal{L}_{u^{L}})b=0\,,\end{split} \tag{3.4}\] where the Eulerian total momentum density \(\mathbf{M}\) and pressure \(\pi\) in equation (3.4) are given by, \[\mathbf{M}:=D(\mathbf{u}^{L}+\mathbf{R}(\mathbf{x}))-N\nabla\phi\,,\qquad\pi:=\frac{1}{2}|\mathbf{u}^{L}|^{2}+\mathbf{R}(\mathbf{x})\cdot\mathbf{u}^{L}-gbz-p\,. \tag{3.5}\] Note that the dynamics of \(\mathbf{M}\cdot d\mathbf{x}\) is independent of the form of the wave Hamiltonian \(H_{W}\), thus one finds the Kelvin circulation dynamics of \(\mathbf{M}\cdot d\mathbf{x}\), \[\frac{d}{dt}\oint_{c(\mathbf{u}^{L})}\Big{(}\mathbf{u}^{L}+\mathbf{R}(\mathbf{x})-\frac{N\nabla\phi}{D}\Big{)}\cdot d\mathbf{x}=\oint_{c(\mathbf{u}^{L})}(\nabla\pi+gz\nabla b)\cdot d\mathbf{x}\,, \tag{3.6}\] where \(c(\mathbf{u}^{L})\) is a material loop moving with the flow at velocity \(\mathbf{u}^{L}(\mathbf{x},t)\). The total momentum density \(\mathbf{M}=D(\mathbf{u}^{L}+\mathbf{R}(\mathbf{x}))-N\nabla\phi\) decomposes into the _sum_ of the momentum densities for the two degrees of freedom, namely, the wave and fluid degrees of freedom.
Defining the fluid momentum \(\mathbf{m}\cdot d\mathbf{x}:=\big{(}\mathbf{u}^{L}+\mathbf{R}(\mathbf{x})\big{)}\cdot d\mathbf{x}\), one finds its evolution by combining (3.3) and (3.4), \[(\partial_{t}+\mathcal{L}_{u^{L}})\big{(}\mathbf{m}\cdot d\mathbf{x}\otimes d^{3}x\big{)}=(Dd\pi+Dgzdb)\otimes d^{3}x-\left(\operatorname{div}\!\left(\frac{\delta H_{W}}{\delta\mathbf{k}}\right)\!d\phi-Nd\Big{(}\frac{\delta H_{W}}{\delta N}\Big{)}\right)\otimes d^{3}x \tag{3.7}\] **WKB wave Hamiltonian in 3D.** Suppose for \(H_{W}\) one takes the WKB wave Hamiltonian in 3D, whose variational derivatives are given by familiar wave quantities, \[H_{W}=\int_{M}N\omega(\mathbf{k})\,d^{3}x\,,\quad\text{with}\quad\frac{\delta H_{W}}{\delta N}\Big{|}_{\mathbf{k}}=\omega(\mathbf{k})\,,\quad\text{and}\quad\frac{\delta H_{W}}{\delta\mathbf{k}}\Big{|}_{N}=N\frac{\partial\omega(\mathbf{k})}{\partial\mathbf{k}}=:\,N\mathbf{v}_{G}(\mathbf{k})\,, \tag{3.8}\] in which \(\mathbf{v}_{G}(\mathbf{k}):=\partial\omega(\mathbf{k})/\partial\mathbf{k}\) is the group velocity for the dispersion relation \(\omega=\omega(\mathbf{k})\) between wave frequency, \(\omega\), and wave number, \(\mathbf{k}\). Then, the explicit form of the dynamics of the WKB wave momentum \(\frac{N}{D}\nabla\phi\cdot d\mathbf{x}\) from (3.3) appears as \[(\partial_{t}+\mathcal{L}_{u^{L}})\left(\frac{N}{D}\nabla\phi\cdot d\mathbf{x}\right)=-\frac{1}{D}\bigg{(}\mathbf{k}\operatorname{div}\!\left(N\mathbf{v}_{G}(\mathbf{k})\right)-N\nabla\omega(\mathbf{k})\bigg{)}\cdot d\mathbf{x}\,. \tag{3.9}\] Likewise, one has the explicit form of the Kelvin-circulation dynamics for the Eulerian fluid momentum \(m=\big{(}\mathbf{u}^{L}+\mathbf{R}(\mathbf{x})\big{)}\cdot d\mathbf{x}\) and wave momentum \(\frac{N}{D}\nabla\phi\cdot d\mathbf{x}\), cf. (3.4), \[\begin{split}\frac{d}{dt}\oint_{c(\mathbf{u}^{L})}\big{(}\mathbf{u}^{L}+\mathbf{R}(\mathbf{x})\big{)}\cdot d\mathbf{x}&=\oint_{c(\mathbf{u}^{L})}\big{(}\nabla\pi+gz\nabla b\big{)}\cdot d\mathbf{x}-\underbrace{\frac{1}{D}\bigg{(}\mathbf{k}\operatorname{div}\!\left(N\mathbf{v}_{G}(\mathbf{k})\right)-N\nabla\omega(\mathbf{k})\bigg{)}}_{\text{WKB Wave Forcing}}\cdot d\mathbf{x}\,,\\ \frac{d}{dt}\oint_{c(\mathbf{u}^{L})}\frac{N}{D}\nabla\phi\cdot d\mathbf{x}&=-\oint_{c(\mathbf{u}^{L})}\frac{1}{D}\bigg{(}\mathbf{k}\operatorname{div}\!\left(N\mathbf{v}_{G}(\mathbf{k})\right)-N\nabla\omega(\mathbf{k})\bigg{)}\cdot d\mathbf{x}\end{split} \tag{3.10}\] where \(c(\mathbf{u}^{L})\) is a material loop moving with the flow at velocity \(\mathbf{u}^{L}(\mathbf{x},t)\). **Remark 3.1** (Summary of WKB internal wave dynamics in the Euler-Boussinesq (EB) approximation).: \(\bullet\) _Equations (3.10) and (3.6) provide an additive decomposition of the Kelvin circulation theorem representation of WCI in the example of EB flow. This result from the variational principle for WCI dynamics in (3.1) fits well with the vast literature on wave mean flow interaction. See, e.g., [54, 65, 7]._ \(\bullet\) _The total potential vorticity (PV) is conserved on Lagrangian mean particle paths.
That is,_ \[\partial_{t}Q+\mathbf{u}^{L}\cdot\nabla Q=0\,, \tag{3.11}\] _where PV is defined as \(Q:=D^{-1}\nabla b\cdot\mathrm{curl}\big{(}\mathbf{u}^{L}+\mathbf{R}(\mathbf{x})-D^{-1}N\nabla\phi\big{)}\) with \(D=1\)._ \(\bullet\) _For the WKB wave Hamiltonian in (_3.8_), the phase-space Lagrangian in (_3.1_) has produced a model of wave interactions with the mean EB fluid current in which the total circulation separates into a sum of wave and current components._ \(\bullet\) _In particular, the total momentum density in the model \(\mathbf{M}=D(\mathbf{u}^{L}+\mathbf{R}(\mathbf{x}))-N\nabla\phi\) represents the sum of the momentum densities for the current and wave degrees of freedom, respectively._ \(\bullet\) _The result from the first formula in (_3.10_) implies that the WKB wave contribution can feed back to create circulation of the fluid current. However, if waves are initially absent, the fluid current cannot subsequently create waves._ \(\bullet\) _The latter conclusion supports the interpretation of the model that the fluid variables describe mean flow properties._ The next example will consider a two-dimensional case when the wave Hamiltonian \(H(N,\mathbf{k})\) corresponds to the nonlinear Schrodinger (NLS) equation. ### Coupling to the nonlinear Schrodinger (NLS) equation As explained in Stuart and DiPrima [61], 2D surface wave dynamics near the onset of instability may be approximated by the solutions of the NLS equation. The NLS equation is written in terms of a complex wave amplitude, \(\psi\), defined in a certain Hilbert space, \(\mathcal{H}\), as \[i\hbar\partial_{t}\psi=-\frac{1}{2}\Delta\psi+\kappa|\psi|^{2}\psi\,. \tag{3.12}\] The sign of the real parameter \(\kappa\) in (3.12) controls the behaviour of NLS solutions. In what follows, we shall use the Dirac-Frenkel (DF) variational principle pioneered in [17] to derive the NLS equation from Hamilton's principle and then couple its solutions to a fluid flow. The DF variational principle for the _linear_ Schrodinger equation \(i\hbar\partial_{t}\psi=\widehat{H}\psi\) with Hamiltonian operator \(\widehat{H}\) can be written in the form of a _phase space Lagrangian_, as \[0=\delta S=\delta\int_{a}^{b}\left\langle\psi\,,\,i\hbar\partial_{t}\psi-\widehat{H}\psi\right\rangle dt\,. \tag{3.13}\] The pairing \(\left\langle\cdot\,,\,\cdot\right\rangle\) in (3.13) is defined by \[\left\langle\psi_{1}\,,\,\psi_{2}\right\rangle=\Re\left\langle\psi_{1}\,|\,\psi_{2}\right\rangle\,, \tag{3.14}\] in which the bracket \(\left\langle\psi_{1}\,|\,\psi_{2}\right\rangle\) is the natural inner product in the Hilbert space \(\mathcal{H}\). In the case \(\mathcal{H}=L^{2}(\mathbb{R}^{2})\), the inner product is given by \[\left\langle\psi_{1}\,|\,\psi_{2}\right\rangle=\int\psi_{1}^{*}(x)\psi_{2}(x)\,d^{2}x\,, \tag{3.15}\] where the extension to higher dimensional Euclidean spaces can be treated similarly. Following [16], in the standard geometric treatment complex wave functions are regarded as half-densities, i.e. \(\psi,\psi^{*}\in\mathrm{Den}^{\frac{1}{2}}(\mathbb{R}^{2})\) such that the modulus \(|\psi|^{2}\in\mathrm{Den}(\mathbb{R}^{2})\). In basis notation, we have \(\psi=\tilde{\psi}\sqrt{d^{2}x}\) where \(\tilde{\psi}\) is the coefficient of the half-density basis \(\sqrt{d^{2}x}\). For ease of notation, we shall suppress the basis and work with the notation \(\psi\) to denote the product of the coefficient and the basis.
The linear Schrodinger equation in terms of the Hamiltonian operator \(\widehat{H}\) is the Euler-Lagrange equation of (3.13), \[i\hbar\partial_{t}\psi=\widehat{H}\psi\,. \tag{3.16}\] By considering the Hamiltonian functional \(H(\psi,\psi^{*}):=\left\langle\psi\,,\,\widehat{H}\psi\right\rangle=:H[\psi]\), Schrodinger's equation can be cast into canonical Hamiltonian form as \[i\hbar\partial_{t}\psi=\frac{\delta H}{\delta\psi^{*}}\,, \tag{3.17}\] where the normalisation for the canonical Poisson brackets is taken as \(\{\psi(x),\psi^{*}(x^{\prime})\}=-\frac{i}{\hbar}\delta(x-x^{\prime})\)3. Similarly, the NLS equation (3.12) may be derived from the Hamiltonian functional Footnote 3: A factor of \(\frac{1}{2}\) has been introduced to the canonical Poisson structure of \((\psi,\psi^{*})\) relative to reference [16]. \[H[\psi,\psi^{*}]=\frac{1}{2}\int_{\mathcal{D}}|\nabla\psi|^{2}+\kappa|\psi|^{4}\,d^{2}x\,. \tag{3.18}\] In 1D, the NLS equation is a completely integrable Hamiltonian system, with an infinity of conserved quantities that all Poisson commute amongst themselves [1]. However, in higher dimensions, the NLS equation conserves only the energy \(H[\psi,\psi^{*}]\) and the two cotangent-lift momentum maps which arise from the invariances of the deterministic Hamiltonian \(H[\psi,\psi^{*}]\) in (3.18) under constant shifts of phase and translations in space. Let \(g_{t}\in\mathrm{Diff}(\mathbb{R}^{2})\) be a time-dependent diffeomorphism which acts on \(\psi\) by pull-back. The Lie derivative \(\mathcal{L}_{u}\psi\) of \(\psi\) by \(u\in\mathfrak{X}(\mathbb{R}^{2})\) can be calculated in terms of basis functions as \[\mathcal{L}_{u}\psi:=\left.\frac{d}{dt}\right|_{t=0}(g_{t}^{*}\psi)=\left(\frac{1}{2}(\partial_{j}u_{j}+u_{j}\partial_{j})\psi\right)\,, \tag{3.19}\] where \(g_{t}\) is the flow of \(u\). The diamond operation \(\psi_{2}\diamond\psi_{1}\in\mathfrak{X}(\mathbb{R}^{2})^{*}\) for \(\psi_{1},\psi_{2}\in\mathrm{Den}^{\frac{1}{2}}(\mathbb{R}^{2})\) can be calculated using the pairing (3.14) to be \[\left\langle\psi_{2}\,,\,\mathcal{L}_{u}\psi_{1}\right\rangle=\Re\int\psi_{2}^{*}\left(\frac{1}{2}(\partial_{j}u_{j}+u_{j}\partial_{j})\psi_{1}\right)d^{2}x=\Re\int-\left(\frac{1}{2}\psi_{1}\nabla\psi_{2}^{*}-\frac{1}{2}\psi_{2}^{*}\nabla\psi_{1}\right)u\,d^{2}x=:\left\langle-\psi_{2}\diamond\psi_{1}\,,\,u\right\rangle. \tag{3.20}\] The cotangent lift momentum map associated with the action of diffeomorphisms is then easily derived from the application of Noether's theorem [50] \[\mathbf{J}(\psi,\psi^{*})=\hbar\Im(\psi^{*}\nabla\psi)=\hbar N\nabla\phi\,, \tag{3.21}\] where the last equality comes from writing the complex wave amplitude as \(\psi:=\sqrt{N}\exp(i\phi)\) in polar form in terms of its modulus, \(N\,d^{2}x:=|\psi|^{2}\), and phase, \(\phi\). Here \(N\,d^{2}x\in\mathrm{Den}(\mathbb{R}^{2})\) and \(\phi\in\mathcal{F}(\mathbb{R}^{2})\), which together form the cotangent bundle \(T^{*}\mathcal{F}(\mathbb{R}^{2})\); this implies that \(\mathbf{J}\) is also the cotangent-lift momentum map from \(T^{*}\mathcal{F}(\mathbb{R}^{2})\). Similarly to the invariance under translations in space, the invariance of the Hamiltonian under constant phase shifts, i.e. the \(S^{1}\) action \(\psi\to e^{i\varphi}\psi\) with \(\varphi\in S^{1}\), gives the momentum map \(N=|\psi|^{2}\).
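As a short check of the polar-form step (included here for convenience, working with the coefficients and suppressing the half-density basis as above), writing \(\psi=\sqrt{N}e^{i\phi}\) gives \[\nabla\psi=\big{(}\nabla\sqrt{N}+i\sqrt{N}\nabla\phi\big{)}e^{i\phi}\,,\qquad\psi^{*}\nabla\psi=\sqrt{N}\nabla\sqrt{N}+iN\nabla\phi\,,\] so that \(\hbar\Im(\psi^{*}\nabla\psi)=\hbar N\nabla\phi\), as in (3.21). The same substitution yields \(|\nabla\psi|^{2}=|\nabla\sqrt{N}|^{2}+N|\nabla\phi|^{2}\) and \(|\psi|^{4}=N^{2}\), which are the identities used to pass to the \((\phi,N)\) form of the Hamiltonian below.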
The Hamiltonian functional in (3.18) can be transformed into \[H[\phi,N]=\frac{1}{2}\int_{\mathcal{D}}N|\nabla\phi|^{2}+|\nabla\sqrt{N}|^{2}+\kappa N^{2}\,d^{2}x\,, \tag{3.22}\] where the Poisson bracket is \(\{N,\phi\}=\frac{1}{\hbar}\). The NLS dynamics can be written in \((N,\phi)\) variables as \[\begin{split}\hbar\partial_{t}\phi&=\big{\{}\phi,H[\phi,N]\big{\}}=-\frac{\delta H}{\delta N}=-\left(\frac{1}{2}|\nabla\phi|^{2}+\frac{1}{8}\frac{|\nabla N|^{2}}{N^{2}}-\frac{1}{4}\frac{\Delta N}{N}+\kappa N\right)=:-\varpi\,,\\ \hbar\partial_{t}N&=\big{\{}N,H[\phi,N]\big{\}}=\frac{\delta H}{\delta\phi}=-\operatorname{div}\!\big{(}N\nabla\phi\big{)}=:-\operatorname{div}\!\mathbf{J}\,,\end{split} \tag{3.23}\] where \(\varpi\) in equation (3.23) is the Bernoulli function. According to (3.23), the NLS probability density \(N\) is advected by the velocity \(\mathbf{J}/N=\nabla\phi\) and the equation for the phase gradient \(\nabla\phi\) reduces to the NLS version of Bernoulli's law. The Hamiltonian in (3.18) collectivises through the momentum maps \(N\) and \(\mathbf{J}\) into \[H[\mathbf{J},N]=\frac{1}{2}\int_{\mathcal{D}}\frac{|\mathbf{J}|^{2}}{\hbar^{2}N}+|\nabla\sqrt{N}|^{2}+\kappa N^{2}\,d^{2}x\,, \tag{3.24}\] such that it is a Hamiltonian functional defined on the dual of the semidirect product Lie algebra \(\mathfrak{X}(\mathbb{R}^{2})\,\text{\textcircled{S}}\,\mathcal{F}(\mathbb{R}^{2})\). The Lie-Poisson structure of \((\mathbf{J},N)\in\mathfrak{X}^{*}(\mathbb{R}^{2})\times\mathrm{Den}(\mathbb{R}^{2})\) implies the NLS equation can be expressed in matrix operator Lie-Poisson bracket form as \[\frac{\partial}{\partial t}\begin{bmatrix}J_{i}\\ N\end{bmatrix}=-\begin{bmatrix}(\partial_{k}J_{i}+J_{k}\partial_{i})&N\partial_{i}\\ \partial_{k}N&0\end{bmatrix}\begin{bmatrix}\frac{\delta H[\mathbf{J},N]}{\delta J_{k}}=J_{k}/(\hbar^{2}N)=\phi_{,k}/\hbar\\ \frac{\delta H[\mathbf{J},N]}{\delta N}=-\frac{|\mathbf{J}|^{2}}{2\hbar^{2}N^{2}}+\frac{1}{8}\frac{|\nabla N|^{2}}{N^{2}}-\frac{1}{4}\frac{\Delta N}{N}+\kappa N\end{bmatrix}. \tag{3.25}\] Noting the canonical and the Lie-Poisson Hamiltonian structures of the NLS equation in (3.23) and (3.25), respectively, we can apply both sides of the reduction pathway shown in Figure 1 to couple the NLS equation to a fluid flow. In the following considerations, we shall set \(\hbar=1\) for ease of notation. Let us first consider the coupling of the NLS equation in canonical Hamiltonian form (3.23) to an inhomogeneous Euler's fluid through the following Hamilton's principle in the form of (2.34), \[0=\delta S=\delta\int_{a}^{b}\int_{\mathcal{D}}\frac{D\rho}{2}|\mathbf{u}|^{2}-p(D-1)-\mathbf{u}\cdot N\nabla\phi-N\partial_{t}\phi-\frac{1}{2}\left(N|\nabla\phi|^{2}+|\nabla\sqrt{N}|^{2}+\kappa N^{2}\right)\,d^{2}x\,dt\,, \tag{3.26}\] where the constrained variations are \(\delta\mathbf{u}=\partial_{t}\mathbf{w}+[\mathbf{u},\mathbf{w}]\), \(\delta D=-\operatorname{div}(D\mathbf{w})\) and \(\delta\rho=-\mathbf{w}\cdot\nabla\rho\); the arbitrary variations are \(\mathbf{w}\), \(\delta N\), \(\delta p\) and \(\delta\phi\), which vanish at the endpoints. In the variational principle (3.26), the fluid variables are the horizontal velocity \(\mathbf{u}\), pressure \(p\), density \(D\) and spatially inhomogeneous buoyancy \(\rho\).
The modified canonical Hamiltonian equations for \((N,\phi)\) arising from Hamilton's principle (3.26) are \[\begin{split}\partial_{t}N+\operatorname{div}\!\big{(}N(\mathbf{u}+\nabla\phi)\big{)}&=0\,,\\ \partial_{t}\phi+\mathbf{u}\cdot\nabla\phi&=-\varpi\,.\end{split} \tag{3.27}\] Thus, the evolution equations for the Eulerian wave variables \((N,\phi)\) in (3.27) keep their canonical Hamiltonian form, with the added effect of a 'Doppler shift' by the fluid velocity \(\mathbf{u}\). The modified Euler-Poincare equations that arise from Hamilton's principle in (3.26) are \[\big{(}\partial_{t}+\mathcal{L}_{u}\big{)}\Big{(}\Big{(}D\rho\mathbf{u}-N\nabla\phi\Big{)}\cdot d\mathbf{x}\otimes d^{2}x\Big{)}=\bigg{(}D\nabla\Big{(}\frac{\rho}{2}|\mathbf{u}|^{2}-p\Big{)}-\Big{(}\frac{D}{2}|\mathbf{u}|^{2}\Big{)}\nabla\rho\bigg{)}\cdot d\mathbf{x}\otimes d^{2}x\,, \tag{3.28}\] along with the NLS equations in (3.27) and the advection equations \[\begin{split}\big{(}\partial_{t}+\mathcal{L}_{u}\big{)}\rho&=\partial_{t}\rho+\mathbf{u}\cdot\nabla\rho=0\,,\\ \big{(}\partial_{t}+\mathcal{L}_{u}\big{)}\big{(}D\,d^{2}x\big{)}&=\big{(}\partial_{t}D+\operatorname{div}(D\mathbf{u})\big{)}\,d^{2}x=0\,,\quad D=1\Longrightarrow\operatorname{div}\!\mathbf{u}=0\,,\end{split} \tag{3.29}\] in which preservation of the constraint \(D=1\) requires divergence-free flow velocity, \(\mathrm{div}\mathbf{u}=0\). Then equations (3.27) with (3.28) imply \[\big{(}\partial_{t}+\mathcal{L}_{u}\big{)}\,\big{(}D\rho\mathbf{u}\cdot d\mathbf{x}\otimes d^{2}x\big{)}=\bigg{(}D\nabla\Big{(}\frac{\rho}{2}|\mathbf{u}|^{2}-p\Big{)}-\Big{(}\frac{D}{2}|\mathbf{u}|^{2}\Big{)}\nabla\rho-\mathrm{div}(N\nabla\phi)\nabla\phi-N\nabla\varpi\bigg{)}\cdot d\mathbf{x}\otimes d^{2}x\,. \tag{3.30}\] The equations (3.30), (3.29) and (3.27) are exactly in the general form (2.23). The general result in equation (2.29) yields the following Kelvin-Noether theorem for the total Hamilton's principle for NLS waves on a free fluid surface in equation (3.26), \[\frac{d}{dt}\oint_{c(\mathbf{u})}\underbrace{\Big{(}\mathbf{u}-\frac{N\nabla\phi}{D\rho}\Big{)}\cdot\,d\mathbf{x}}_{\text{`Momentum shift'}}=\oint_{c(\mathbf{u})}\bigg{(}\nabla\left(\frac{|u|^{2}}{2}\right)-\frac{1}{\rho}\nabla p\bigg{)}\cdot d\mathbf{x}\,. \tag{3.31}\] Equation (3.30) yields the separated Kelvin-Noether equations as in (2.30), \[\frac{d}{dt}\oint_{c(\mathbf{u})}\mathbf{u}\cdot d\mathbf{x}=\oint_{c(\mathbf{u})}\bigg{(}\nabla\left(\frac{|u|^{2}}{2}\right)-\frac{1}{\rho}\nabla p\bigg{)}\cdot d\mathbf{x}-\oint_{c(\mathbf{u})}\underbrace{\frac{1}{D\rho}\left(\mathrm{div}(N\nabla\phi)\nabla\phi+\nabla\varpi\right)\cdot d\mathbf{x}}_{\text{Non-inertial force}}\,, \tag{3.32}\] \[\frac{d}{dt}\oint_{c(\mathbf{u})}\frac{1}{D\rho}\left(N\nabla\phi\right)\cdot d\mathbf{x}=-\oint_{c(\mathbf{u})}\,\frac{1}{D\rho}\left(\mathrm{div}(N\nabla\phi)\nabla\phi+\nabla\varpi\right)\cdot d\mathbf{x}\,,\] \[=-\oint_{c(\mathbf{u})}\,\frac{1}{D\rho}\Bigg{(}\partial_{j}\big{(}N\phi^{,j}\phi_{,k}\big{)}dx^{k}-\frac{N}{4}\nabla\left(\frac{|\nabla N|^{2}}{2N^{2}}-\frac{\Delta N}{N}+4\kappa N\right)\cdot d\mathbf{x}\Bigg{)}\,,\] where \(\varpi\) is again the Bernoulli function in equation (3.23). The stress tensor \(T^{j}_{\ k}:=N\phi^{,j}\phi_{,k}\) in the last equation mimics the corresponding stress tensor in the evolution of the Berry curvature in quantum hydrodynamics; see equation (106) in [16].
**Remark 3.2**.: _Upon comparing the unified and separated Kelvin circulation equations in (3.31) and (3.32), respectively, one sees that: (1) In (3.31) the standard Kelvin circulation theorem for an inhomogeneous planar Euler flow holds in the absence of waves. Thus, the fluid flow does not create waves. (2) In (3.32) the first equation of the separated Kelvin theorem shows that the Kelvin circulation theorem for an inhomogeneous planar Euler flow has an additional source in the presence of waves. Thus, one sees that the waves can create circulatory fluid flow._ In terms of the fluid momentum density \(\mathbf{m}:=D\rho\mathbf{u}\) with fluid transport velocity \(\mathbf{u}\), the Hamiltonian for NLS wave-current system dynamics is written as \[H_{m}[\mathbf{m},D,\rho,\phi,N]=\int_{\mathcal{D}}\frac{|\mathbf{m}|^{2}}{2D\rho}+p(D-1)+\frac{1}{2}\Big{(}N|\nabla\phi|^{2}+|\nabla\sqrt{N}|^{2}+\kappa N^{2}\Big{)}\,d^{2}x\,. \tag{3.33}\] The dynamics of the current-coupled NLS system may then be written in Lie-Poisson bracket form as \[\frac{\partial}{\partial t}\begin{bmatrix}m_{i}\\ D\\ \rho\\ \phi\\ N\end{bmatrix}=-\begin{bmatrix}(\partial_{k}m_{i}+m_{k}\partial_{i})&D\partial_{i}&-\rho_{,i}&-\phi_{,i}&N\partial_{i}\\ \partial_{k}D&0&0&0&0\\ \rho_{,k}&0&0&0&0\\ \phi_{,k}&0&0&0&1\\ \partial_{k}N&0&0&-1&0\end{bmatrix}\begin{bmatrix}\frac{\delta H_{m}}{\delta m_{k}}=u_{k}\\ \frac{\delta H_{m}}{\delta D}=-\frac{|\mathbf{m}|^{2}}{2D^{2}\rho}\\ \frac{\delta H_{m}}{\delta\rho}=-\frac{|\mathbf{m}|^{2}}{2D\rho^{2}}\\ \frac{\delta H_{m}}{\delta\phi}=-\mathrm{div}(N\nabla\phi)\\ \frac{\delta H_{m}}{\delta N}=\varpi\end{bmatrix}, \tag{3.34}\] where the Bernoulli function \(\varpi\) is given in equation (3.23). By taking the untangling map and writing the Hamiltonian (3.33) in terms of the total momentum \(\mathbf{M}:=\mathbf{m}-N\nabla\phi\), we have the following Hamiltonian \[H_{HP}[\mathbf{M},D,\rho,\phi,N]=\int_{\mathcal{D}}\frac{|\mathbf{M}+N\nabla\phi|^{2}}{2D\rho}+p(D-1)+\frac{1}{2}\Big{(}N|\nabla\phi|^{2}+|\nabla\sqrt{N}|^{2}+\kappa N^{2}\Big{)}\,d^{2}x\,, \tag{3.35}\] and the untangled Poisson structure \[\frac{\partial}{\partial t}\begin{bmatrix}M_{i}\\ D\\ \rho\\ \phi\\ N\end{bmatrix}=-\begin{bmatrix}(\partial_{k}M_{i}+M_{k}\partial_{i})&D\partial_{i}&-\rho_{,i}&0&0\\ \partial_{k}D&0&0&0&0\\ \rho_{,k}&0&0&0&0\\ 0&0&0&0&1\\ 0&0&0&-1&0\end{bmatrix}\begin{bmatrix}\frac{\delta H_{HP}}{\delta M_{k}}=\frac{\delta H_{m}}{\delta m_{k}}=u_{k}\\ \frac{\delta H_{HP}}{\delta D}=-\frac{|\mathbf{M}+N\nabla\phi|^{2}}{2D^{2}\rho}=\frac{\delta H_{m}}{\delta D}\\ \frac{\delta H_{HP}}{\delta\rho}=-\frac{|\mathbf{M}+N\nabla\phi|^{2}}{2D\rho^{2}}=\frac{\delta H_{m}}{\delta\rho}\\ \frac{\delta H_{HP}}{\delta\phi}=-\mathrm{div}(N(\nabla\phi+\mathbf{u}))=-\mathrm{div}(N\mathbf{u})+\frac{\delta H_{m}}{\delta\phi}\\ \frac{\delta H_{HP}}{\delta N}=\varpi+\mathbf{u}\cdot\nabla\phi=\mathbf{u}\cdot\nabla\phi+\frac{\delta H_{m}}{\delta N}\end{bmatrix}\,.
\tag{3.36}\] Under the transformation to the Lie-Poisson wave variables \((\mathbf{J},N)\), the canonical Hamiltonian (3.33) transforms to \[H_{J}[\mathbf{m},D,\rho,\mathbf{J},N]=\int_{\mathcal{D}}\frac{|\mathbf{m}|^{2}}{2D\rho}+p(D-1)+\frac{|\mathbf{J}|^{2}}{2N}+\frac{1}{2}\Big{(}|\nabla\sqrt{N}|^{2}+\kappa N^{2}\Big{)}\,d^{2}x\,, \tag{3.37}\] and the corresponding equations in Lie-Poisson bracket form are given by \[\frac{\partial}{\partial t}\begin{bmatrix}m_{i}\\ D\\ \rho\\ J_{i}\\ N\end{bmatrix}=-\begin{bmatrix}(\partial_{k}m_{i}+m_{k}\partial_{i})&D\partial_{i}&-\rho_{,i}&(\partial_{k}J_{i}+J_{k}\partial_{i})&N\partial_{i}\\ \partial_{k}D&0&0&0&0\\ \rho_{,k}&0&0&0&0\\ (\partial_{k}J_{i}+J_{k}\partial_{i})&0&0&(\partial_{k}J_{i}+J_{k}\partial_{i})&N\partial_{i}\\ \partial_{k}N&0&0&\partial_{k}N&0\end{bmatrix}\begin{bmatrix}\frac{\delta H_{J}}{\delta m_{k}}=u_{k}\\ \frac{\delta H_{J}}{\delta D}=-\frac{|\mathbf{m}|^{2}}{2D^{2}\rho}\\ \frac{\delta H_{J}}{\delta\rho}=-\frac{|\mathbf{m}|^{2}}{2D\rho^{2}}\\ \frac{\delta H_{J}}{\delta J_{k}}=J_{k}/N\\ \frac{\delta H_{J}}{\delta N}=-\frac{|\mathbf{J}|^{2}}{2N^{2}}+\frac{1}{8}\frac{|\nabla N|^{2}}{N^{2}}-\frac{1}{4}\frac{\Delta N}{N}+\kappa N\end{bmatrix}\,. \tag{3.38}\] In transforming the wave variables from \((\phi,N)\) to \((\mathbf{J},N)\), the canonical two-cocycle between \((\phi,N)\) has been transformed into a generalised cocycle in \((\mathbf{J},N)\). The Poisson bracket (3.38) is a standard Lie-Poisson bracket on the dual of the Lie algebra \[\mathfrak{X}_{1}\text{\textcircled{S}}\big{(}(\mathfrak{X}_{2}\text{\textcircled{S}}\mathcal{F})\oplus\mathcal{F}\oplus\mathrm{Den}\big{)}\,, \tag{3.39}\] where the corresponding semidirect-product Lie group is \[\mathrm{Diff}_{1}\text{\textcircled{S}}\big{(}(\mathrm{Diff}_{2}\text{\textcircled{S}}\mathcal{F})\oplus\mathcal{F}\oplus\mathrm{Den}\big{)}\,. \tag{3.40}\] Equation (3.38) yields a modified version of the separated Kelvin-Noether theorem, namely, \[\frac{d}{dt}\oint_{c(\mathbf{u})}\mathbf{u}\cdot d\mathbf{x}=\oint_{c(\mathbf{u})}\left(\nabla\left(\frac{|u|^{2}}{2}\right)-\frac{1}{\rho}\nabla p\right)\cdot d\mathbf{x}-\oint_{c(\mathbf{u})}\underbrace{\frac{1}{D\rho}\left(\frac{\mathbf{J}}{N}\cdot\nabla\mathbf{J}+J_{k}\nabla\left(\frac{J_{k}}{N}\right)+\mathbf{J}\mathrm{div}(\mathbf{J}/N)+\nabla\widetilde{\varpi}\right)\cdot d\mathbf{x}}_{\text{Non-inertial force}}\,, \tag{3.41}\] \[\frac{d}{dt}\oint_{c(\mathbf{u})}\frac{1}{D\rho}\,\mathbf{J}\cdot d\mathbf{x}=-\oint_{c(\mathbf{u})}\frac{1}{D\rho}\left(\frac{\mathbf{J}}{N}\cdot\nabla\mathbf{J}+J_{k}\nabla\left(\frac{J_{k}}{N}\right)+\mathbf{J}\mathrm{div}(\mathbf{J}/N)+\nabla\widetilde{\varpi}\right)\cdot d\mathbf{x}\,,\] where \(\widetilde{\varpi}:=-\frac{|\mathbf{J}|^{2}}{2N^{2}}+\frac{1}{8}\frac{|\nabla N|^{2}}{N^{2}}-\frac{1}{4}\frac{\Delta N}{N}+\kappa N\). **Remark 3.3** (Coupling to complex half densities).: _For completeness, let us consider Hamilton's principle for coupling the inhomogeneous Euler equation to the NLS equation in the complex wave function variables \((\psi,\psi^{*})\) of (3.12); it reads_ \[0=\delta S=\delta\int_{a}^{b}\int_{\mathcal{D}}\left(\frac{D\rho}{2}|\mathbf{u}|^{2}-p(D-1)-\mathbf{u}\cdot\Im(\psi^{*}\nabla\psi)\right)\,d^{2}x+\langle\psi\,,\,i\partial_{t}\psi\rangle-H[\psi,\psi^{*}]\,dt\,, \tag{3.42}\] _where \(H[\psi,\psi^{*}]\) is the NLS Hamiltonian in terms of \((\psi,\psi^{*})\), defined in (3.18)._
The canonical equations for the complex wave function \(\psi\) can then be calculated to be_ \[i\hbar\left(\partial_{t}+\mathcal{L}_{u}\right)\psi:=i\hbar\left(\partial_{t}+\frac{1}{2}(\partial_{j}u^{j}+u^{j}\partial_{j})\right)\psi=-\frac{1}{2}\triangle\psi+\kappa|\psi|^{2}\psi\,. \tag{3.43}\] _Just as the current boosts the scalar phase \(\phi\) and density \(Nd^{2}x\) by the Lie derivative in equation (3.27), the half density \(\psi\sqrt{d^{2}x}\) is also boosted by the Lie derivative with respect to the current velocity vector field \(u\) in equation (3.43)._ **Remark 3.4** (Coupling NLS to mesoscale QG motion).: _Coupling of NLS to homogeneous \((\rho=1)\) mesoscale QG motion can be accomplished by modifying the reduced Lagrangian in (3.42) to include rotation and quasigeostrophic balance, as follows [47, 66]_ \[0=\delta S=\delta\int_{a}^{b}\int_{\mathcal{D}}\frac{D}{2}\Big{(}\mathbf{u}\cdot\left(1-\mathcal{F}\Delta^{-1}\right)\mathbf{u}+\mathbf{u}\cdot\mathbf{R}(\mathbf{x})\Big{)}-p(D-1) \tag{3.44}\] \[-\mathbf{u}\cdot\Im(\psi^{*}\nabla\psi)\,d^{2}x+\langle\psi\,,\,i\partial_{t}\psi\rangle-H[\psi,\psi^{*}]\,dt\,. \tag{3.45}\] _Here, \(\mathcal{F}\) is the rotational Froude number and \(\mathbf{R}(\mathbf{x})\) is the prescribed vector potential for the Coriolis parameter. The derivation of the equations of motion and Hamiltonian formulation can be accomplished by combining the calculations above with those in [47, 66] to accommodate rotation and quasigeostrophy._ ## 4 Numerical simulations In preparation for the numerical simulations of the coupled inhomogeneous Euler-NLS equations (3.38) in 2D, as discussed in Section 3.2, let us consider the equations in terms of the real and imaginary parts of \(\psi\), namely \(a\) and \(b\) such that \(\psi:=a+ib\). This particular change of variables is done for ease of implementation of the numerical solver. Inserting these relations into the action (3.42) gives \[\begin{split} 0=\delta S=\delta\int_{t_{0}}^{t_{1}}\int_{\mathcal{D}}\frac{D\rho}{2}|\mathbf{u}|^{2}-p(D-1)+\hbar\left(b\left(\partial_{t}+\mathbf{u}\cdot\nabla\right)a-a\left(\partial_{t}+\mathbf{u}\cdot\nabla\right)b\right)\\ -\frac{1}{2}\left(|\nabla a|^{2}+|\nabla b|^{2}+\kappa\left(a^{2}+b^{2}\right)^{2}\right)\,d^{2}x\,dt\,.\end{split} \tag{4.1}\] The NLS momentum map in terms of \(a,b\) can be computed as \(\mathbf{J}(a,b):=\hbar(a\nabla b-b\nabla a)\) and the equations to solve are \[\begin{split}&\left(\partial_{t}+\mathcal{L}_{\mathbf{u}}\right)\left((D\rho\mathbf{u}-\mathbf{J}(a,b))\cdot d\mathbf{x}\right)=Dd\left(\frac{\rho}{2}|\mathbf{u}|^{2}-p\right)-\frac{D}{2}|\mathbf{u}|^{2}d\rho\,,\\ &\partial_{t}\rho+\mathbf{u}\cdot\nabla\rho=0\,,\quad\partial_{t}D+\mathrm{div}(D\mathbf{u})=0\,,\quad D=1\Rightarrow\mathrm{div}(\mathbf{u})=0\,,\\ &\partial_{t}a+\mathcal{L}_{\mathbf{u}}a=-\frac{1}{2}\Delta b+\kappa\left(a^{2}+b^{2}\right)b\,,\\ &\partial_{t}b+\mathcal{L}_{\mathbf{u}}b=\frac{1}{2}\Delta a-\kappa\left(a^{2}+b^{2}\right)a\,,\end{split} \tag{4.2}\] where we have again set \(\hbar=1\) for convenience.
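As a quick consistency check (added here for convenience), the last two equations in (4.2) follow from (3.43) with \(\hbar=1\) by splitting into real and imaginary parts: substituting \(\psi=a+ib\) gives \[i(\partial_{t}+\mathcal{L}_{\mathbf{u}})(a+ib)=-\tfrac{1}{2}\Delta(a+ib)+\kappa(a^{2}+b^{2})(a+ib)\,,\] whose imaginary part is \((\partial_{t}+\mathcal{L}_{\mathbf{u}})a=-\tfrac{1}{2}\Delta b+\kappa(a^{2}+b^{2})b\) and whose real part is \((\partial_{t}+\mathcal{L}_{\mathbf{u}})b=\tfrac{1}{2}\Delta a-\kappa(a^{2}+b^{2})a\).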
In 2D, one can cast the equations into stream function and vorticity form by defining fluid and wave potential vorticities as follows \[Q_{F}\,d^{2}x:=d\left(\rho\mathbf{u}\cdot d\mathbf{x}\right)=\mathrm{div}(\rho\nabla\Psi)\,,\quad Q_{W}\,d^{2}x:=d\left(\mathbf{J}(a,b)\cdot d\mathbf{x}\right)=2\hbar J(a,b)d^{2}x\,, \tag{4.3}\] where \(\Psi\) is the stream function, \(\mathbf{u}=\nabla^{\perp}\Psi\) and the Jacobian operator \(J\) is defined by \(J(f,h)=\partial_{x}f\partial_{y}h-\partial_{y}f\partial_{x}h\) for arbitrary smooth functions \(f,h\). In these variables, the Euler-NLS equations take the following form, \[\partial_{t}(Q_{F}-Q_{W})+J(\Psi,Q_{F}-Q_{W})=\frac{1}{2}J(\rho,|\mathbf{u}|^{2})\,,\] \[\partial_{t}Q_{W}+J(\Psi,Q_{W})=2J\left(-\frac{1}{2}\Delta b+\kappa(a^{2}+b^{2})b,b\right)+2J\left(a,\frac{1}{2}\Delta a-\kappa(a^{2}+b^{2})a\right)\,,\] \[\partial_{t}\rho+J(\Psi,\rho)=0\,, \tag{4.4}\] \[\partial_{t}a+J(\Psi,a)=-\frac{1}{2}\Delta b+\kappa\left(a^{2}+b^{2}\right)b\,,\] \[\partial_{t}b+J(\Psi,b)=\frac{1}{2}\Delta a-\kappa\left(a^{2}+b^{2}\right)a\,.\] Our implementation of the inhomogeneous Euler coupled NLS equations (4.4) used the finite element method (FEM) for the spatial variables. The FEM algorithm we used is implemented using the Firedrake 4 software. In particular, for (4.4) we approximated the fluid potential vorticity \(Q_{F}\) and the buoyancy \(\rho\) using a first order discontinuous Galerkin finite element space. The real and imaginary parts of the complex wave function, \(a\) and \(b\), and the stream function \(\Psi\) are approximated using a first order continuous Galerkin finite element space. For the time integration, we used the third order strong stability preserving Runge-Kutta method [24]. In the numerical examples, we demonstrate numerically the effects of currents on waves and the effects of waves on currents by considering two runs of the 2D inhomogeneous Euler coupled NLS equations (4.4) with the following parameters. The domain is \([0,50]^{2}\) at a resolution of \(512^{2}\). The boundary conditions are periodic in the \(x\) direction, homogeneous Dirichlet for \(\Psi\), homogeneous Neumann for \(a\) and \(b\) in the \(y\) direction. To see the effects of the currents on the waves, the procedure was divided into two stages. The first stage was performed on the inhomogeneous Euler equations for \(T_{spin}=100\) time units starting from the following initial conditions Footnote 4: [https://firedrakeproject.org/index.html](https://firedrakeproject.org/index.html) \[Q_{F}(x,y,0)=\sin(0.16\pi x)\sin(0.16\pi y)+0.4\cos(0.12\pi x)\cos(0.12\pi y)+0.3\cos(0.2\pi x)\cos(0.08\pi y)+\] \[\qquad\qquad 0.02\sin(0.04\pi y)+0.02\sin(0.04\pi x)\,, \tag{4.5}\] \[\rho(x,y,0)=1+0.2\sin(0.04\pi x)\sin(0.04\pi y)\,.\] The purpose of the first stage was to allow the fluid system to spin up to a statistically steady state without influences from the wave dynamics. The PV and buoyancy variables at the end of the initial spin-up period are denoted as \(Q_{spin}(x,y)=Q_{F}(x,y,T_{spin})\) and \(\rho_{spin}(x,y)=\rho(x,y,T_{spin})\). In the second stage, the full simulations including the wave variables were run with the initial conditions for the fluid variables being the state achieved at the end of the first stage.
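To make the discretisation described above concrete, here is a minimal illustrative sketch (ours, not the authors' code, with simplified boundary handling) of one building block of such a solver in Firedrake: recovering the stream function \(\Psi\) from the fluid PV via \(\mathrm{div}(\rho\nabla\Psi)=Q_{F}\) in (4.3), using the DG1/CG1 element choices mentioned above. The periodic-in-\(x\) mesh, the SSPRK3 time stepping and the transport terms of (4.4) are omitted; variable names are our own.

```python
# Minimal Firedrake sketch (illustrative only): solve div(rho * grad(Psi)) = Q_F
# for the stream function Psi, with homogeneous Dirichlet conditions on Psi.
# Element choices follow Section 4: DG1 for Q_F and rho, CG1 for Psi (and a, b).
from firedrake import (RectangleMesh, FunctionSpace, Function, TrialFunction,
                       TestFunction, DirichletBC, inner, grad, dx, solve)

Lx = Ly = 50.0
n = 512                                   # 512^2 resolution, as in the paper
mesh = RectangleMesh(n, n, Lx, Ly)        # x-periodicity of the full model is omitted here

V_dg = FunctionSpace(mesh, "DG", 1)       # discontinuous Galerkin, degree 1
V_cg = FunctionSpace(mesh, "CG", 1)       # continuous Galerkin, degree 1

Q_F = Function(V_dg, name="fluid_PV")     # assumed already set, e.g. from (4.5)
rho = Function(V_dg, name="buoyancy")
rho.assign(1.0)

Psi_trial = TrialFunction(V_cg)
v = TestFunction(V_cg)
Psi = Function(V_cg, name="stream_function")

# Weak form: -int rho grad(Psi).grad(v) dx = int Q_F v dx  (boundary term removed by the BC)
a_form = -inner(rho * grad(Psi_trial), grad(v)) * dx
L_form = inner(Q_F, v) * dx
bc = DirichletBC(V_cg, 0.0, "on_boundary")
solve(a_form == L_form, Psi, bcs=[bc])    # u = grad-perp of Psi then drives the transport terms
```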
To start the second stage for (4.4), wave variables were introduced with the following initial conditions \[a(x,y,0)=\exp(-((x-25)^{2}+(y-25)^{2}))\,,\quad b(x,y,0)=0\,,\quad\kappa=\frac{1}{2}\,, \tag{4.6}\] \[Q_{F}(x,y,0)=Q_{spin}(x,y)\,,\quad\rho(x,y,0)=\rho_{spin}(x,y)\,.\] For comparison, we also consider the numerical simulations of the 2D NLS equation without coupling to the inhomogeneous Euler equation. The uncoupled NLS equations in the \(a\) and \(b\) variables are simply the last two equations of (4.4) with \(\Psi=0\). From the same initial condition (4.6), the snapshots at \(t=30\) of the coupled and uncoupled equations are shown in Figure 2.

Figure 2: These are the \(512^{2}\) snapshots of the wave amplitude \(N:=a^{2}+b^{2}\) from the numerical simulation of the coupled Euler-NLS equations (4.4) (left) and from the numerical simulation of the uncoupled NLS equations (right) at time \(t=30\). The initial conditions for \(Q_{F}\) and \(\rho\) are obtained following a spin-up period of the inhomogeneous Euler equations without waves. As seen in the right hand panel, the uncoupled NLS equation produced a ‘Gingham’ pattern due to the boundary conditions and the spatial symmetry of the initial conditions. However, when coupled to the ‘mixing’ flow of the inhomogeneous Euler’s equation, the spatial coherence of \(N\) is distorted, as seen in the left hand panel. Nevertheless, it still retains the localisation of the patterns, as local regions of high density usually have filaments of zero density as boundaries.

To show the effects of waves on currents, we consider the numerical simulations started with the following initial conditions, \[a(x,y,0)=\exp(-((x-25)^{2}+(y-25)^{2}))\,,\quad b(x,y,0)=0\,,\quad\kappa=\frac{1}{2}\,, \tag{4.7}\] \[Q_{F}(x,y,0)=0\,,\quad\rho(x,y,0)=\rho_{spin}(x,y)\,.\] In (4.7), we have used the same initial conditions as in (4.6) except for the PV \(Q_{F}\), which has been set to zero. With this configuration, any PV excitation generated by the waves can interact with a "well mixed" buoyancy field to generate further circulation. Snapshots of the \(Q_{F}\) and \(Q_{W}\) fields are shown in Figure 3 for the numerical simulations started from the initial conditions (4.7). From Figure 3, we see that the spatial features of \(Q_{W}\) are localised and periodic in both directions with varying densities. \(Q_{F}\) possesses similar spatial features to \(Q_{W}\); however, these features are deformed. The deformations are precisely caused by the transport of the generated fluid flow and the interaction with the buoyancy field.

## 5 Conclusion and outlook

**Summary.** After reviewing the framework in geometric mechanics for deriving hybrid fluid models in the introduction, section 2 showed a path for their derivation, section 3 discussed examples of the wave mean flow hybrid equations and section 4 showed simulations of the hybrid Euler-NLS equations. The hybrid Euler-NLS equations describe boosted dynamics of small-scale NLS subsystems into the moving frame of the large-scale 2D Euler fluid dynamics. The Kelvin-Noether theorem in section 2 showed that the small-scale dynamics can feed back to create circulation in the large-scale dynamics. Over a short time, this creation of large-scale circulation may be only a small effect, as shown in the numerical simulations displayed in Figures 2 and 3 of section 4.
Over a long enough time period, though, the small-scale effects may produce a more pronounced effect on the larger scales, especially if the small-scale momentum is continuously driven externally. **Waves versus patterns.** NLS is a pattern-forming equation that is associated with several different applications in several different fields, including the nonlinear fibre-optics dynamics of telecommunication as well as studies of deep water waves. When linear driving and dissipation are introduced, NLS becomes the Complex Ginzburg Landau (CGL) equation, which is another well-known pattern-forming equation [4, 52, 53]. This class of equations is extremely useful for its universal quality as normal form equations for bifurcations, the onset of instability due to symmetry breaking, and the saturation of instability due to nonlinear effects [53]. The utility of CGL universality suggests, in particular, that a dissipative and driven version of the hybrid Euler-NLS equations - that is, the hybrid Euler Complex Ginzburg-Landau (ECGL) equations - could be proposed as an elementary model to describe some aspects of air-sea coupling that can be encompassed with only a few parameters. Computational simulations of this proposition are to be discussed elsewhere in future work.

Figure 3: These are the \(512^{2}\) snapshots of the fluid PV \(Q_{F}\) (left) and wave PV \(Q_{W}\) (right) at time \(t=30\) from the numerical simulation of (4.4) with the zero fluid PV initial conditions (4.7). From the right hand panel, one sees the \(Q_{W}\) field form a coherent spatial pattern similar to the wave amplitude \(N\) of the uncoupled NLS simulation in the right panel of Figure 2. The left hand panel is the \(Q_{F}\) generated by \(Q_{W}\). The overall pattern of \(Q_{F}\) is reminiscent of \(Q_{W}\); however, \(Q_{F}\) also shows signs of ‘mixing’ by the fluid, since the generated fluid PV will interact with buoyancy to generate circulation. Note that the magnitude of \(Q_{F}\) is much smaller than that of \(Q_{W}\), thus the isolated NLS dynamics is dominant over the advection dynamics, which explains the minimal ‘mixing’ in the \(Q_{W}\) field.

### Acknowledgements This paper was written in appreciation of the late Hermann Flaschka's elegant, thoughtful and sometimes humorous contributions to nonlinear mathematics during his marvellous career. We hope that the paper has presented "do-able examples that reveal something new." (Namely, that waves are not always carried passively by the current. Waves can feed back in the Kelvin theorem to produce circulation of the mean fluid velocity that carries them.) We are grateful to our friends, colleagues and collaborators for their advice and encouragement in the matters treated in this paper. DH especially thanks C. Cotter and C. Tronci, F. Gay-Balmaz and T. S. Ratiu for many insightful discussions of corresponding results similar to the ones derived here for WMFI, and in earlier work together in deriving hybrid models of complex fluids, turbulence, plasma dynamics, vertical slice models and the quantum-classical hydrodynamic description of molecules. DH and OS were partially supported during the present work by European Research Council (ERC) Synergy grant STUOD - DLV-856408. RH was partially supported during the present work by EPSRC scholarship (Grant No. EP/R513052/1).
2309.04703
Task Freshness-aware Incentive Mechanism for Vehicle Twin Migration in Vehicular Metaverses
Vehicular metaverse, which is treated as the future continuum between automotive industry and metaverse, is envisioned as a blended immersive domain as the digital twins of intelligent transportation systems. Vehicles access the vehicular metaverses by their own Vehicle Twins (VTs) (e.g., avatars) that resource-limited vehicles offload the tasks of building VTs to their nearby RoadSide Units (RSUs). However, due to the limited coverage of RSUs and the mobility of vehicles, VTs have to be migrated from one RSU to other RSUs to ensure uninterrupted metaverse services for users within vehicles. This process requires the next RSUs to contribute sufficient bandwidth resources for VT migrations under asymmetric information. To this end, in this paper, we design an efficient incentive mechanism framework for VT migrations. We first propose a novel metric named Age of Migration Task (AoMT) to quantify the task freshness of the VT migration. AoMT measures the time elapsed from the first collected sensing data of the freshest avatar migration task to the last successfully processed data at the next RSU. To incentivize the contribution of bandwidth resources among the next RSUs, we propose an AoMT-based contract model, where the optimal contract is derived to maximize the expected utility of the RSU that provides metaverse services. Numerical results demonstrate the efficiency of the proposed incentive mechanism for VT migrations.
Jinbo Wen, Jiawen Kang, Zehui Xiong, Yang Zhang, Hongyang Du, Yutao Jiao, Dusit Niyato
2023-09-09T07:08:17Z
http://arxiv.org/abs/2309.04703v2
# Task Freshness-aware Incentive Mechanism for Vehicle Twin Migration in Vehicular Metaverses ###### Abstract Vehicular metaverse, which is treated as the future continuum between automotive industry and metaverse, is envisioned as a blended immersive domain as the digital twins of intelligent transportation systems. Vehicles access the vehicular metaverses by their own Vehicle Twins (VTs) (e.g., avatars) that resource-limited vehicles offload the tasks of building VTs to their nearby RoadSide Units (RSUs). However, due to the limited coverage of RSUs and the mobility of vehicles, VTs have to be migrated from one RSU to other RSUs to ensure uninterrupted metaverse services for users within vehicles. This process requires the next RSUs to contribute sufficient bandwidth resources for VT migrations under asymmetric information. To this end, in this paper, we design an efficient incentive mechanism framework for VT migrations. We first propose a novel metric named Age of Migration Task (AoMT) to quantify the task freshness of the VT migration. AoMT measures the time elapsed from the first collected sensing data of the freshest avatar migration task to the last successfully processed data at the next RSU. To incentivize the contribution of bandwidth resources among the next RSUs, we propose an AoMT-based contract model, where the optimal contract is derived to maximize the expected utility of the RSU that provides metaverse services. Numerical results demonstrate the efficiency of the proposed incentive mechanism for VT migrations. Metaverse, vehicle twin, contract theory, age of information, migration. ## I Introduction With the gradual maturation of metaverse technologies, implementing metaverse-like immersive experiences within vehicles appears to be a potential future direction for vehicular interactions [1]. Vehicular metaverse is expected to lead an evolution of the automotive industry [2], which integrates extended reality technologies and real-time vehicular data seamlessly to blend physical and virtual spaces for drivers and passengers within vehicles [3]. In [4], smart driving of the digital twin in the metaverse was introduced. As the digital component of vehicular metaverses, Vehicle Twins (VTs) are large-scale and highly accurate digital replicas that cover the life cycle of vehicles and manage vehicular applications [4]. With the help of intra-twin communications, which refer to interactions between VTs and vehicles [5], vehicles can access vehicular metaverses through VTs, for example, in an avatar manner. The VTs can be updated in virtual spaces continuously by sensing data from surrounding environments [6], including bio-data of passengers, real-time vehicle status, and traffic data in the physical space [2], which is advantageous in the development of vehicular metaverses that can interact and coexist with the physical space, functioning as autonomous and durable virtual spaces [4]. Due to the resource limitation of vehicles, it is impractical for vehicles to build high-fidelity virtual models, which may lead to intensive computation for resource-limited vehicles [3]. Under such conditions, vehicles offload the large-scale rendering tasks of building VTs to the nearby edge servers in RoadSide Units (RSUs) for ultra-reliable and lower-latency metaverse services. Here the RSU providing metaverse services is called Metaverse Service Provider (MSP). 
Owing to the limited coverage of RSUs, VTs with a mobile nature have to be migrated from the current RSU (i.e., the MSP) to others for continuous metaverse services. Hence, the task freshness of the VT migration (i.e., the time elapsed in completing the current VT migration task) is essential to the provision of continuous metaverse services. To ensure the task freshness of the VT migration, VT migrations require enough available resources, especially bandwidth resources, thus the destination RSUs are required to provide bandwidth resources for VT migrations, where the destination RSUs are called Metaverse Resource Providers (MRPs). Because of information asymmetry, MRPs' private information (e.g., channel conditions and bandwidth costs) might not be known to the MSP [7]. As a result, a malicious MRP may not contribute bandwidth resources honestly to obtain more benefits without a reasonable incentive mechanism [8], which affects the task freshness of the VT migration. Some efforts have been devoted to optimizing resource allocation and efficiently processing the computing-intensive tasks of real-time rendering in vehicular metaverses [3, 9, 10, 11]. For example, the authors in [3] proposed a hierarchical game-theoretic approach to investigate the sustainable and reliable coded distributed computing scheme, which supports immersive user experiences in vehicular metaverses. In [9], the authors formulated a learning-based incentive mechanism to evaluate and enhance VR experiences in the metaverse. In [10], the authors proposed a quantum collective learning and many-to-many matching game-based scheme in the metaverse for connected and autonomous vehicles. However, the above works ignore the VT migration problem due to the mobility of vehicles. Thus, it remains challenging to optimize resource allocation for VT migrations in vehicular metaverses [4]. To address the above challenges, in this paper, since the existing metrics like Age of Task [12] cannot measure the VT migration delay, we first propose a novel metric named Age of Migration Task (AoMT) based on the concept of Age of Information (AoI). To improve VT migration efficiency, we formulate an AoMT-based incentive mechanism with asymmetric information. The main contributions of this paper are summarized as follows: * To measure precisely the task freshness of the VT migration, we propose a novel metric named AoMT for vehicular metaverses, which can be applied to evaluate the satisfaction of the MSP. * To incentivize the contribution of bandwidth resources among MRPs, we propose an AoMT-based contract model. _To the best of our knowledge, this is the first work studying the incentive mechanism for VT migrations in vehicular metaverses._ * We design the optimal contract, which is feasible and maximizes the utility of the MSP under information asymmetry. Numerical results demonstrate that the proposed incentive mechanism is practical and efficient. ## II AoMT-based Incentive Mechanism Framework for Vehicle Twin Migration As shown in Fig. 1, edge-assisted remote rendering is an important technology applied in the metaverse [13]. To build VTs (e.g., avatars) for accessing metaverse services like Augmented Reality (AR) navigation, occupants (i.e., users) send service requirements to the nearby RSU (i.e., the MSP) that can provide necessary resources (i.e., storage, caching, and computing) for the VT construction [4]. For the convenience of explanation, we take vehicle avatars as an example of VTs.
Then, the MSP offloads computation-intensive rendering tasks to its proximal edge server and builds avatars to provide lower-latency and ultra-reliable metaverse services for users [13]. To efficiently manage avatars on RSUs, Vehicle Twin Managers of the RSU are introduced. However, when the users travel on the road, the current MSP cannot provide continuous services for users outside its coverage. Thus, the avatars have to be migrated to other RSUs. In addition, to ensure immersive experiences for users, the MSP requires sufficient bandwidth resources to enhance avatar migration efficiency and meet the delay requirement of metaverse services during migration [13].

Fig. 1: An AoMT-based incentive mechanism framework for VT migrations.

We provide more details of the framework as follows: * **MSP:** The MSP collects sensing data of users and builds avatars to provide ultra-reliable and real-time metaverse services for users. To ensure high-quality metaverse services, the MSP focuses on the task freshness of the avatar migration and requires bandwidth resources from the destination RSUs (i.e., MRPs). After completing avatar migrations, the MSP pays for the MRPs according to their contributions. * **MRPs:** Each MRP contributes bandwidth resources for the MSP to achieve the avatar migration. The required amount of bandwidth depends on the service level agreements. All MRPs with private information (e.g., channel conditions and bandwidth costs) are selfish and have the potential to obtain more benefits because of information asymmetry. Note that each MRP becomes the new MSP after completing the current avatar migration. * **Users:** Occupants request and obtain metaverse services from the MSP, such as AR navigation and VR vehicular videos. After completing the avatar migration, each vehicle establishes a connection with the MRP where its avatar is hosted to provide metaverse services for users, and the MRP becomes the new MSP. Note that we consider that each vehicle has a corresponding avatar to manage vehicular applications during migration. * **Vehicle Twin Manager:** The main responsibility of the Vehicle Twin Manager is to manage avatars on its RSU (i.e., the MSP), including updating avatars. For instance, when avatars experience technical issues, such as being unable to maintain stability, the Vehicle Twin Manager immediately informs the MSP to reconstruct avatars, which ensures the high quality of immersive experiences for users. ## III Problem Formulation To incentivize MRPs for the contribution of bandwidth resources, we first propose a novel metric named AoMT to quantify the task freshness of avatar migration, which can be applied to evaluate the satisfaction of the MSP. Second, we formulate the utility functions of both MRPs and the MSP (i.e., the avatar migration task publisher). We consider one MSP and a set \(\mathcal{M}\) of \(M\) MRPs in avatar migrations, where \(\mathcal{M}=\{1,\ldots,m,\ldots,M\}\). The MSP, which publishes \(M\) avatar migration tasks, motivates \(M\) MRPs to contribute bandwidth resources in avatar migrations. ### _Age of Migration Task for Avatar Migrations_ AoI has been commonly used as an effective metric to quantify information freshness at the destination. It is defined as the time elapsed since the generation of the last successfully received message containing updated information about its source system, and its minimization depends on the status update frequency [14]. However, it does not consider the data processing procedure [15].
Recent studies like Age of Task and Age of Processing [16] improve the AoI by taking the data processing time into account, but they only consider the scenarios with single-type sensing data and cannot measure the avatar migration delay. Therefore, to quantify the task freshness of the avatar migration, we propose a new metric named AoMT based on the concept of AoI. Similar to [15], AoMT is defined as the time elapsed from the first collected sensing data of the newest avatar migration task to the last successfully processed data at the MRP. The time of completing an avatar migration comprises three parts: 1) The time of collecting sensing data (e.g., traffic conditions and vehicle locations) by the MSP (denoted as \(t_{c}\)). 2) The time of sending the avatar data from the MSP to the MRP (denoted as \(t_{s}\)). 3) The time of processing received data by the MRP (denoted as \(t_{p}\)). For simplicity, MRPs have the same ability to communicate with users and process data [13]. Therefore, we consider that \(t_{c}\) and \(t_{p}\) are the same for all avatar migrations. We set \(t_{c}+t_{p}=T\in\mathbb{R}^{+}\) as a constant. It is considered that the Orthogonal Frequency Division Multiple Access (OFDMA) technology is applied in the system, which ensures that all communication channels occupied by different MRPs and the MSP are orthogonal [13, 17]. For MRP \(m\in\mathcal{M}\), given the bandwidth \(b_{m}\) allocated to the MSP, the achievable information transmission rate between the MSP and the MRP \(m\) is \[\gamma_{m}=b_{m}\log_{2}\bigg{(}1+\frac{\rho_{s}h_{m}^{0}d_{s,m}^{-\alpha}}{N_{0}b_{m}}\bigg{)}, \tag{1}\] where \(\rho_{s}\), \(h_{m}^{0}\), \(d_{s,m}\), \(\alpha\), and \(N_{0}\) represent the transmit power of the MSP, the unit channel power gain, the distance between the MSP and the MRP \(m\), the path-loss coefficient, and the noise power density, respectively [17, 18]. We define the channel power gain between the MSP and the MRP \(m\) as \(G_{s,m}=h_{m}^{0}d_{s,m}^{-\alpha}\). Therefore, for the MRP \(m\), the AoMT of the avatar migration is \[A_{m}(b_{m})=\frac{D_{m}}{\gamma_{m}}+T, \tag{2}\] where \(D_{m}\) is defined as the avatar data transmitted to the MRP \(m\), including the information of the system configuration, historical running data, and real-time avatar states [19]. Note that \(A_{m}(b_{m})\) is not a convex function with respect to \(b_{m}\). ### _MRP Utility_ The utility of MRP \(m\) is the difference between the received monetary reward \(R_{m}\) and its cost \(C_{m}\) of participating in the avatar migration, which is presented as \[U_{m}=R_{m}-C_{m}. \tag{3}\] Since the cost of bandwidth is from the energy consumption of the transmitted information1, referring to [8, 20], \(C_{m}\) is defined as Footnote 1: Note that the transmit power is the average power of the transmit signal, and the bandwidth reflects the spectrum of significant frequency components allocated for the transmission of the input signal. \[C_{m}=\mathcal{C}_{m}(b_{m}/G_{s,m}), \tag{4}\] where \(\mathcal{C}_{m}(\cdot)\) is used to model the bandwidth cost of MRP \(m\), given by \[\mathcal{C}_{m}(x)=a_{m}x^{2}, \tag{5}\] where \(a_{m}>0\) is the bandwidth cost coefficient. Thus, the utility of MRP \(m\) becomes \[U_{m}=R_{m}-\frac{a_{m}}{G_{s,m}^{2}}b_{m}^{2}.
\tag{6}\] Due to information asymmetry, the MSP is not aware of each MRP's exact bandwidth cost coefficient and channel gain, but it can sort the MRPs into discrete types and use the statistical distributions of the MRPs' types from historical data to optimize the expected utility of the MSP [21]. Specifically, we divide the MRPs into different types and define the type of the \(n\)-th MRP as \[\theta_{n}\triangleq\frac{G_{s,n}^{2}}{a_{n}}. \tag{7}\] Since \(a_{n}>0\) and \(G_{s,n}>0\), we have \(\theta_{n}>0\). (7) indicates that the larger the channel gain \(G_{s,n}\) between the MSP and the \(n\)-th type MRP, or the lower the unit bandwidth cost coefficient \(a_{n}\), the higher the type of the MRP. Without loss of generality, the MRPs can be classified into a set \(\mathcal{N}=\left\{\theta_{n}:1\leq n\leq N\right\}\) of \(N\) types. In ascending order, the MRPs' types are sorted as \(\theta_{1}\leq\theta_{2}\leq\cdots\leq\theta_{N}\). In this definition, a higher-type MRP has a better channel quality or a lower bandwidth cost coefficient. To facilitate explanation, the MRP with type \(n\) is called the type-\(n\) MRP. Therefore, based on (7), the utility of the type-\(n\) MRP is rewritten as \[U_{n}^{C}(b_{n},R_{n})=R_{n}-\frac{b_{n}^{2}}{\theta_{n}}. \tag{8}\]

### _MSP Utility_

Since a large AoMT not only leads to a poor immersive experience for users but also degrades the MSP's satisfaction with the avatar migration, the MSP's satisfaction function obtained from the type-\(n\) MRP is defined as [21] \[S_{n}=\beta\ln(g(b_{n})+1), \tag{9}\] where \(\beta>0\) is the unit profit for the satisfaction of the MSP and \(g(\cdot)\) is the performance obtained from the type-\(n\) MRP, which is defined as \[g(b_{n})=K-A_{n}, \tag{10}\] where \(K\) is the maximum tolerant AoMT. In this paper, we consider that \(K\) is not less than \(A_{n}\). Because of information asymmetry, the MSP only knows the number of MRPs and the distribution of each type but does not know each MRP's private type, namely the exact number of MRPs belonging to each type [8]. Thus, considering that the probability of an MRP belonging to a certain type \(n\) is \(Q_{n}\), subject to \(\sum_{n\in\mathcal{N}}Q_{n}=1\), the utility of the MSP is \[U_{s}(\boldsymbol{b},\boldsymbol{R})=\sum_{n\in\mathcal{N}}MQ_{n}(S_{n}-R_{n}), \tag{11}\] where \(\boldsymbol{b}=[b_{n}]_{1\times N}\) and \(\boldsymbol{R}=[R_{n}]_{1\times N}\) denote the bandwidth and reward vectors for all \(N\) types of MRPs, respectively.

## IV Optimal Contract Design

In this section, we formulate the optimal contract, characterize its feasibility conditions, and provide an optimal solution for the formulated contract. Since the types of MRPs are private information that is not visible to the MSP, a rational MRP may provide false information maliciously and pretend to be an MRP with a better channel condition and/or a smaller bandwidth cost to cheat for more rewards [8]. To improve the performance of avatar migrations under asymmetric information, the MSP uses contract theory to effectively motivate the MRPs to contribute bandwidth resources.

### _Contract Formulation_

A contract consists of a group of bandwidth-reward pairs (i.e., contract items) provided to the MRPs, which are designed by the MSP to maximize the expectation of the MSP's utility. Each MRP selects the best contract item based on its type to maximize its benefit.
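For concreteness, the following minimal Python sketch encodes the quantities defined above, namely the transmission rate (1), the AoMT (2), the type-\(n\) MRP utility (8), and the MSP satisfaction and expected utility (9)-(11). The numerical defaults (unit channel gain, a \(150\,\mathrm{MB}\) avatar, a \(10\,\mathrm{MHz}\) allocation) are illustrative placeholder assumptions loosely based on the simulation settings listed later in Table I, not a reproduction of the paper's experiments.

```
import math

def rate(b, rho_s=0.2, G=4e-6, N0=3.98e-21):
    """Achievable rate (1): b*log2(1 + rho_s*G/(N0*b)), with b in Hz.
    Assumed defaults: 23 dBm transmit power, G = h0*d^-alpha with h0=1,
    d=500 m, alpha=2, and N0 = -174 dBm/Hz converted to W/Hz."""
    return b * math.log2(1 + rho_s * G / (N0 * b))

def aomt(b, D=1.2e9, T=5.0):
    """AoMT (2): transmission time of D bits (150 MB assumed) plus the
    constant collection/processing time T."""
    return D / rate(b) + T

def mrp_utility(b, R, theta):
    """Type-n MRP utility (8): reward minus bandwidth cost b^2/theta."""
    return R - b ** 2 / theta

def msp_satisfaction(b, beta=200.0, K=50.0):
    """MSP satisfaction (9)-(10): beta * ln(K - AoMT + 1)."""
    return beta * math.log(K - aomt(b) + 1)

def msp_expected_utility(bs, Rs, Q, M=10):
    """MSP expected utility (11) summed over the N contract items."""
    return sum(M * q * (msp_satisfaction(b) - R) for b, R, q in zip(bs, Rs, Q))

print(aomt(1e7))              # ~10 s for a 10 MHz allocation
print(msp_satisfaction(1e7))  # ~743
```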
The contract item can be denoted as \(\Phi=\left\{(b_{n},R_{n}),n\in\mathcal{N}\right\}\), where \(b_{n}\) is the bandwidth provided by the type-\(n\) MRP and \(R_{n}\) is the reward paid to the type-\(n\) MRP as the incentive for the corresponding contribution. To ensure that each MRP optimally chooses the contract item designed for its type, the following Individual Rationality (IR) and Incentive Compatibility (IC) constraints should be satisfied [8].

**Definition 1**.: _(Individual Rationality) The contract item selected by an MRP should ensure it a non-negative utility, i.e.,_ \[U_{n}^{C}(b_{n},R_{n})=R_{n}-\frac{b_{n}^{2}}{\theta_{n}}\geq 0,\,\forall n\in\left\{1,\ldots,N\right\}. \tag{12}\]

**Definition 2**.: _(Incentive Compatibility) An MRP of any type \(n\) prefers to select the contract item \((b_{n},R_{n})\) designed for its type rather than any other contract item \((b_{j},R_{j}),\forall j\in\left\{1,\ldots,N\right\}\), and \(j\neq n\), i.e.,_ \[R_{n}-\frac{b_{n}^{2}}{\theta_{n}}\geq R_{j}-\frac{b_{j}^{2}}{\theta_{n}},\,\forall n,j\in\left\{1,\ldots,N\right\},n\neq j. \tag{13}\]

The IR constraints ensure the participation of MRPs, and the IC constraints ensure that each MRP chooses the contract item designed for its specific type to obtain the highest benefits. With the IR and IC constraints, the MSP aims to maximize its expected utility. Therefore, the problem of maximizing the expected utility of the MSP is formulated as \[\begin{split}\textbf{Problem 1:}\ \max_{\boldsymbol{b},\boldsymbol{R}}\ &U_{s}(\boldsymbol{b},\boldsymbol{R})\\ \text{s.t.}\ &R_{n}-\frac{b_{n}^{2}}{\theta_{n}}\geq 0,\ \forall n\in\left\{1,\ldots,N\right\},\\ &R_{n}-\frac{b_{n}^{2}}{\theta_{n}}\geq R_{j}-\frac{b_{j}^{2}}{\theta_{n}},\ \forall n,j\in\left\{1,\ldots,N\right\},n\neq j,\\ &b_{n}\geq 0,\ R_{n}\geq 0,\ \theta_{n}>0,\ \forall n\in\left\{1,\ldots,N\right\}.\end{split} \tag{14}\]

### _Optimal Contract Solution_

Since there are \(N\) IR constraints and \(N(N-1)\) IC constraints in **Problem 1**, it is difficult to solve **Problem 1** directly. Therefore, we reformulate **Problem 1** by the following necessary conditions.

**Lemma 1**.: _With information asymmetry, a feasible contract must satisfy the following conditions:_ \[R_{1}-\frac{b_{1}^{2}}{\theta_{1}}\geq 0, \tag{15a}\] \[R_{n}-\frac{b_{n}^{2}}{\theta_{n}}\geq R_{n-1}-\frac{b_{n-1}^{2}}{\theta_{n}},\,\forall n\in\left\{2,\ldots,N\right\},\] (15b) \[R_{n}-\frac{b_{n}^{2}}{\theta_{n}}\geq R_{n+1}-\frac{b_{n+1}^{2}}{\theta_{n}},\,\forall n\in\left\{1,\ldots,N-1\right\},\] (15c) \[R_{N}\geq R_{N-1}\geq\cdots\geq R_{1},\,b_{N}>b_{N-1}\geq\cdots\geq b_{1}. \tag{15d}\]

Proof.: Please refer to [8].

Constraint (15a) is related to the IR constraints. Constraints (15b), (15c), and (15d) are related to the IC constraints. Constraints (15b) and (15c) show that the IC constraints can be transformed into the Local Downward Incentive Compatibility (LDIC) and the Local Upward Incentive Compatibility (LUIC) with monotonicity, respectively [8]. Based on **Lemma 1**, the optimal rewards for any allocated bandwidth can be obtained by the following **Lemma 2**.

**Lemma 2**.: _For a feasible set of bandwidth \(\mathbf{b}\) satisfying \(b_{1}\leq\cdots\leq b_{n}\leq\cdots\leq b_{N}\), we can obtain the optimal reward as_ \[R_{n}^{\star}=\begin{cases}\dfrac{b_{1}^{2}}{\theta_{1}},\,n=1,\\ R_{n-1}+\dfrac{b_{n}^{2}}{\theta_{n}}-\dfrac{b_{n-1}^{2}}{\theta_{n}},\,n=2,\ldots,N.\end{cases} \tag{16}\]

Proof.: Please refer to [8].
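Lemma 2 translates directly into code; the short sketch below (illustrative only, with made-up numbers) computes the optimal rewards for a given non-decreasing bandwidth vector and the corresponding types.

```
def optimal_rewards(b, theta):
    """Optimal rewards from Lemma 2, eq. (16): R_1 = b_1^2/theta_1 and
    R_n = R_{n-1} + (b_n^2 - b_{n-1}^2)/theta_n for n >= 2.
    `b` and `theta` are lists sorted by MRP type in ascending order."""
    R = [b[0] ** 2 / theta[0]]
    for n in range(1, len(b)):
        R.append(R[-1] + (b[n] ** 2 - b[n - 1] ** 2) / theta[n])
    return R

# Example with three types: higher types contribute more bandwidth and,
# as required by (15d), receive larger rewards.
print(optimal_rewards(b=[1.0, 2.0, 3.0], theta=[0.5, 1.0, 2.0]))
# -> [2.0, 5.0, 7.5]
```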
Based on the iterative method, the optimal reward in (16) can be rewritten as \[R_{n}^{\star}=\dfrac{b_{1}^{2}}{\theta_{1}}+\sum_{i=1}^{n}\Delta_{i},\,n\in\mathcal{N}, \tag{17}\] where \(\Delta_{1}=0\) and \(\Delta_{i}=\frac{b_{i}^{2}-b_{i-1}^{2}}{\theta_{i}},\forall i\in\{2,\ldots,N\}\). By substituting the optimal reward (17) into the MSP's utility (11), we can get the MSP's utility with respect to \(\mathbf{b}\). Therefore, **Problem 1** is reformulated as \[\begin{split}\textbf{Problem 2:}\max_{\mathbf{b}}&\,U_{s}(\mathbf{b})\\ \text{s.t.}&\,b_{1}\leq\cdots\leq b_{N},\end{split} \tag{18}\] where \(U_{s}(\mathbf{b})=\sum_{n\in\mathcal{N}}U_{s,n}=\sum_{n\in\mathcal{N}}M(Q_{n}S_{n}-e_{n}b_{n}^{2})\), and \(e_{n}\) is given by \[e_{n}=\begin{cases}\dfrac{Q_{n}}{\theta_{n}}+\left(\dfrac{1}{\theta_{n}}-\dfrac{1}{\theta_{n+1}}\right)\sum_{j=n+1}^{N}Q_{j},\,1\leq n<N,\\ \dfrac{Q_{N}}{\theta_{N}},\,n=N.\end{cases} \tag{19}\] Since \(U_{s}\) is not a concave function, **Problem 2** cannot be solved by standard convex optimization tools, so we propose a greedy algorithm to design the optimal contract, referring to [21]. Motivated by the above analysis, the detailed contract design is shown in **Algorithm 1**. Firstly, we can obtain the optimal bandwidth \(b_{n}^{\star}\) by using the iterative method. If \(\mathbf{b}^{\star^{\prime}}\) cannot satisfy the monotonicity constraint, an iterative algorithm, i.e., the _Bunching and Ironing_ algorithm [22], is adopted to obtain the optimal solution \(\mathbf{b}^{\star}\), which ensures that the monotonicity constraint is satisfied. Finally, the optimal reward \(R_{n}^{\star}\) can be calculated by (16). Note that the computational complexity of **Algorithm 1** is \(\mathcal{O}(N\log\left(\frac{b_{max}-b_{min}}{\varphi}\right))\), which indicates that **Algorithm 1** is actually efficient.

```
Input: Basic channel parameters {rho_s, h_m^0, d_{s,m}, alpha, N_0} and MRPs' types {theta_n, 1 <= n <= N}.
Output: The optimal bandwidth b* and the optimal reward R*.
 1: for n = 1, ..., N do
 2:     Initialize the iteration index z = 0, the step size phi, the empty vector v_{s,n}, and the feasible range of bandwidth [b_min, b_max], where b_min = b_n^z = 10^5.
 3:     while b_n^z < b_max do
 4:         Calculate U_{s,n}(b_n^z). Set v_{s,n}(z) = U_{s,n}(b_n^z). b_n^z = b_n^z + phi. z = z + 1.
 5:     Obtain the optimal bandwidth b_n* for the type-n MRP by using the maximum value index in v_{s,n}.
 6: Obtain the optimal bandwidth vector b*' = {b_1*, ..., b_n*, ..., b_N*}.
 7: if b*' does not satisfy the monotonicity condition then
 8:     Apply the Bunching and Ironing algorithm [22] to adjust b*' and output b*.
 9: else
10:     b* = b*'.
11: for n = 1, ..., N do
12:     Calculate the optimal reward R_n* based on (16).
13: Obtain the optimal reward vector R* = {R_1*, ..., R_n*, ..., R_N*}.
14: return {b*, R*}.
```
**Algorithm 1** Optimal Contract Design
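As a rough illustration of the search in **Algorithm 1**, the Python sketch below performs the per-type grid search over the bandwidth and then prices the resulting allocation with (16). All quantities are normalized, unit-free toy values rather than the settings of Table I, and the Bunching and Ironing adjustment is replaced by a simple assertion marking where it would be applied.

```
import math

def grid_search_contract(thetas, Q, M=10, b_min=0.1, b_max=50.0, phi=0.01,
                         beta=200.0, K=50.0, T=5.0, D=300.0, snr=1000.0):
    """Illustrative sketch of Algorithm 1 with normalized toy parameters.
    Returns the bandwidth vector b* and the reward vector R*."""
    N = len(thetas)
    # Coefficients e_n from (19).
    e = [Q[n] / thetas[n] + (1 / thetas[n] - 1 / thetas[n + 1]) * sum(Q[n + 1:])
         if n < N - 1 else Q[N - 1] / thetas[N - 1] for n in range(N)]

    def satisfaction(b):
        # beta * ln(g(b) + 1) with g(b) = K - AoMT(b), eqs. (1)-(2) and (9)-(10).
        aomt = D / (b * math.log2(1 + snr / b)) + T
        return beta * math.log(max(K - aomt, 0.0) + 1.0)

    b_star = []
    for n in range(N):  # one-dimensional search for each type (lines 1-5)
        grid = [b_min + k * phi for k in range(int((b_max - b_min) / phi))]
        b_star.append(max(grid, key=lambda b: M * (Q[n] * satisfaction(b)
                                                   - e[n] * b ** 2)))
    assert all(b_star[i] <= b_star[i + 1] for i in range(N - 1)), \
        "monotonicity violated: apply Bunching and Ironing here"

    R_star = [b_star[0] ** 2 / thetas[0]]  # rewards from (16)
    for n in range(1, N):
        R_star.append(R_star[-1] + (b_star[n] ** 2 - b_star[n - 1] ** 2) / thetas[n])
    return b_star, R_star

# Toy run with four equally likely types.
print(grid_search_contract(thetas=[1.0, 2.0, 3.0, 4.0], Q=[0.25] * 4))
```

Because the reformulated objective in (18) is a sum of per-type terms \(U_{s,n}\), each depending only on its own bandwidth \(b_{n}\), the \(N\) one-dimensional searches can be carried out independently, which is exactly what the per-type loop in Algorithm 1 exploits.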
## V Numerical Results

In this section, we consider \(M=10\) MRPs, and the MRPs' types follow the uniform distribution [21]. Referring to [8, 21, 23, 24, 25], the main parameters are listed in Table I. Firstly, we validate the IC and IR constraints. Then, we compare the proposed incentive mechanism with two other incentive mechanisms:

1. _Contract theory with complete information_, in which the channel conditions and the bandwidth costs of the MRPs are known by the MSP [8].
2. _Contract theory with social maximization_ [26], in which the MSP aims to maximize social welfare under information asymmetry [27].

\begin{table} \begin{tabular}{l|c} \hline **Parameters** & **Values** \\ \hline Transmit power of the MSP (\(\rho_{s}\)) & \(23\,\mathrm{dBm}\) \\ \hline Noise power density \((N_{0})\) & \(-174\,\mathrm{dBm/Hz}\) \\ \hline Path-loss coefficient \((\alpha)\) & \(2\) \\ \hline Size of avatar data transmitted to the MRP \(m\) \((D_{m})\) & \([100\,\mathrm{MB},200\,\mathrm{MB}]\) \\ \hline Unit profit for the satisfaction \((\beta)\) & \(200\) \\ \hline Unit bandwidth cost \((a_{m})\) & \([0.0001,0.001]\) \\ \hline Maximum tolerant AoMT \((K)\) & \(50\,\mathrm{s}\) \\ \hline Distance between the MSP and the MRP \(m\) \((d_{s,m})\) & \(500\,\mathrm{m}\) \\ \hline Sum of the time of collecting data and processing data \((T)\) & \(5\,\mathrm{s}\) \\ \hline \end{tabular} \end{table} TABLE I: Key Parameters in the Simulation.

Figure 2 shows the feasibility (i.e., the IR and IC constraints) of the proposed scheme under information asymmetry. The utilities of four types of MRPs are shown when they sign different contract items. We can find that the utilities of the MRPs increase with their types, and the utility of an MRP choosing the contract item corresponding to its type is no less than \(0\), which demonstrates that our designed contract guarantees the IR conditions. Besides, each MRP selects the contract item corresponding to its own type, which achieves the maximum utility. For example, a type-\(1\) MRP obtains the maximum utility only when it chooses the contract item \((b_{1},R_{1})\), which is exactly designed for its type. If the type-\(1\) MRP selects any other contract item \((b_{n},R_{n}),n\in\{2,\ldots,N\}\), its utility will be reduced. Note that a similar phenomenon can be observed for all other types of MRPs when they choose the contract items designed for their corresponding types. Therefore, the above observations validate that our designed contract satisfies the IR and IC conditions. Based on the above analysis, we conclude that MRPs will automatically reveal their types to the MSP after choosing the contract items, which means that by utilizing the proposed scheme, the MSP can capture the MRPs' private information and thus effectively alleviate the impact of information asymmetry.

Figure 3 shows the utility of the MSP corresponding to different avatar data sizes \(D\) under the three incentive mechanisms. From Fig. 3, we can observe that regardless of the incentive mechanism, the utility of the MSP decreases as the avatar data size \(D\) increases. The reason is that, to meet the delay requirement of the avatar migration, a bigger avatar data size \(D\) means that the MSP requires more bandwidth resources from the MRPs and pays more rewards to them, thus decreasing the utility of the MSP. Besides, the utility of the MSP under the contract theory with complete information is always greater than that under the contract theory with asymmetric information, which indicates that the MSP obtains fewer benefits because of information asymmetry.
The reason is that although the proposed scheme can effectively mitigate the effects of information asymmetry by leveraging contract theory [8], a rational MRP still has a chance to provide false information maliciously and cheat for more rewards, which decreases the utility of the MSP. Figure 4 shows the sum utilities of MRPs corresponding to different avatar data sizes \(D\) under the three incentive mechanisms. From Fig. 4, we can observe that as the avatar data size \(D\) increases, the sum utilities of MRPs under the contract theory with complete information are always \(0\), which indicates that under complete information each MRP receives a reward exactly equal to its bandwidth cost. We can also find that the sum utilities of MRPs increase as the avatar data size \(D\) increases under the contract theory with asymmetric information or the contract theory with social maximization. The reason is that, as the amount of migrated avatar data increases, the MRPs can obtain more rewards based on the designed contract when they contribute more bandwidth resources for avatar migrations. Therefore, the sum utilities of MRPs increase as the avatar data size \(D\) increases. Besides, the MRPs obtain the optimal utilities under the contract theory with social maximization, and the sum utilities of the MRPs under the contract theory with asymmetric information are greater than those under the contract theory with complete information.

## VI Conclusion

In this paper, we have studied VT migrations in vehicular metaverses and formulated the incentive mechanism under asymmetric information for avatar migrations (as an example of VT migrations). We have proposed a novel metric named AoMT based on the concept of AoI for vehicular metaverses to quantify the task freshness of the avatar migration, which can evaluate the MSP's satisfaction. Furthermore, to improve the efficiency of avatar migrations, we have designed an AoMT-based contract model under information asymmetry for incentivizing MRPs to contribute bandwidth resources. Finally, numerical results have demonstrated the efficiency of the proposed incentive mechanism for avatar migrations in vehicular metaverses. In the future, we will improve the mathematical model to better adapt to VT migration. Besides, we may design a prototype system to evaluate our scheme and use artificial intelligence tools like deep reinforcement learning to enhance the solution methodology.
2309.12161
Code Soliloquies for Accurate Calculations in Large Language Models
High-quality conversational datasets are crucial for the successful development of Intelligent Tutoring Systems (ITS) that utilize a Large Language Model (LLM) backend. Synthetic student-teacher dialogues, generated using advanced GPT-4 models, are a common strategy for creating these datasets. However, subjects like physics that entail complex calculations pose a challenge. While GPT-4 presents impressive language processing capabilities, its limitations in fundamental mathematical reasoning curtail its efficacy for such subjects. To tackle this limitation, we introduce in this paper an innovative stateful prompt design. Our design orchestrates a mock conversation where both student and tutorbot roles are simulated by GPT-4. Each student response triggers an internal monologue, or `code soliloquy' in the GPT-tutorbot, which assesses whether its subsequent response would necessitate calculations. If a calculation is deemed necessary, it scripts the relevant Python code and uses the Python output to construct a response to the student. Our approach notably enhances the quality of synthetic conversation datasets, especially for subjects that are calculation-intensive. Our preliminary Subject Matter Expert evaluations reveal that our Higgs model, a fine-tuned LLaMA model, effectively uses Python for computations, which significantly enhances the accuracy and computational reliability of Higgs' responses. Code, models, and datasets are available at https://github.com/luffycodes/Tutorbot-Spock-Phys.
Shashank Sonkar, MyCo Le, Xinghe Chen, Naiming Liu, Debshila Basu Mallick, Richard G. Baraniuk
2023-09-21T15:16:58Z
http://arxiv.org/abs/2309.12161v2
# Code Soliloquies for Accurate Calculations in Large Language Models

###### Abstract

High-quality conversational datasets are crucial for the successful development of Intelligent Tutoring Systems (ITS) that utilize a Large Language Model (LLM) backend. Synthetic student-teacher dialogues, generated using advanced GPT-4 models, are a common strategy for creating these datasets. However, subjects like physics that entail complex calculations pose a challenge. While GPT-4 presents impressive language processing capabilities, its limitations in fundamental mathematical reasoning curtail its efficacy for such subjects. To tackle this limitation, we introduce in this paper an innovative stateful prompt design. Our design orchestrates a mock conversation where both student and tutorbot roles are simulated by GPT-4. Each student response triggers an internal monologue, or 'code soliloquy' in the GPT-tutorbot, which assesses whether its subsequent response would necessitate calculations. If a calculation is deemed necessary, it scripts the relevant Python code and uses the Python output to construct a response to the student. Our approach notably enhances the quality of synthetic conversation datasets, especially for subjects that are calculation-intensive. Our preliminary Subject Matter Expert evaluations reveal that our Higgs model, a fine-tuned LLaMA model, effectively uses Python for computations, which significantly enhances the accuracy and computational reliability of Higgs' responses. Code, models, and datasets are available at [https://github.com/luffycodes/Tutorbot-Spock-Phys](https://github.com/luffycodes/Tutorbot-Spock-Phys).

## 1 Introduction

In the rapidly evolving domain of Natural Language Processing (NLP), the creation of high-quality chatbots using pre-trained Large Language Models (LLMs) is heavily reliant on conversational datasets, as shown by the Vicuna model (Chiang et al., 2023). With advanced models like Generative Pretrained Transformer-4 (GPT-4) (Bubeck et al., 2023), it is possible to generate such synthetic yet engaging conversations by designing creative prompts (Sonkar et al., 2023). The CLASS framework (Sonkar et al., 2023) demonstrates the capacity of GPT-4 to synthesize meaningful interactions between a student and a tutorbot to train effective Intelligent Tutoring Systems (ITS). However, this framework largely caters to subjects that circumvent calculation-intensive problems, such as biology. Generating synthetic conversations for subjects like physics, which require complex calculations, presents a significant challenge. This is primarily due to the limited mathematical capabilities of models like GPT-4. For instance, ChatGPT Kasneci et al. (2023) and GPT-4 achieve only \(55\%\) and \(59\%\) accuracy respectively on three-digit by three-digit multiplication tasks, as reported by (Dziri et al., 2023). This limitation makes the one-shot prompt design introduced in CLASS inadequate for generating holistic conversations in calculation-intensive subjects. Recognizing these limitations in GPT-4's mathematical capabilities, we have developed an innovative approach to generate synthetic student-tutor dialogues (example shown in figure 1) that incorporate accurate calculations.

Figure 1: An example of a synthetic student-tutorbot conversation generated using our proposed multi-turn, stateful prompt design. Both the student and the tutorbot roles are simulated by the GPT model. The goal of our prompt design is to ensure the mathematical accuracy of responses from the GPT-tutorbot, such as in scenarios that require calculation verification or responses to calculation-based queries. To achieve this, our design engages the tutorbot in an 'internal monologue', what we term a 'code soliloquy'. This soliloquy, illustrated by the dotted bubbles in the figure, is hidden from the student. The soliloquy is initiated each time the tutorbot receives a student input and is guided by a sequence of four state prompts. The first state in this soliloquy prompts the tutorbot to assess whether the next response necessitates any calculations. If the answer is affirmative, the tutorbot is prompted to generate the corresponding Python code, and the output of this code is then utilized to formulate the tutorbot's response. In contrast, if the tutorbot determines that Python is not required, it proceeds to respond without invoking any Python. The prompts in the figure are greatly simplified for illustrative purposes. Detailed versions of these prompts can be found in the appendix.

Our solution is a multi-turn, stateful prompt design that leverages GPT-4 to simulate both student and tutorbot roles. Central to this design is the unique incorporation of 'code soliloquy', a novel concept that significantly enhances the dialogue's computational accuracy. For every input from the GPT-4 emulated student, we initiate a soliloquy within the GPT-4 tutorbot - an internal dialogue hidden from the student. During this soliloquy, the GPT-tutorbot prompts itself to determine whether its next response necessitates any calculations. If a calculation is required, it proceeds to script the necessary code and then utilizes the output from this code to generate an informed response to the student. Given that GPT-4 demonstrates a remarkable proficiency in writing code, we ingeniously utilize this strength in our design through the process of code soliloquy. This allows us to overcome GPT-4's calculation limitations, thereby significantly enhancing the quality of the synthetic dialogues, particularly for subjects that are calculation-intensive. To demonstrate the efficacy of our stateful prompt design and the quality of the generated synthetic conversations, we introduce Higgs, a variant of the LLaMA-2-70b-chat base model (Touvron et al., 2023b), fine-tuned on the generated conversational dataset. The starting question posed by the GPT-student to the GPT-tutorbot to initiate these conversations is drawn from our newly curated physics question dataset, PHY300, adapted from high school physics textbooks. These questions are carefully chosen to cover a broad spectrum of topics, ranging from mechanics to thermodynamics to electromagnetism. In order to test Higgs, we develop a comprehensive evaluation protocol. The evaluation measures the accuracy and computational reliability of Higgs' responses, particularly its proficiency in using Python for computations whenever necessary. The results from preliminary SME evaluations are highly encouraging. Higgs exhibited an impressive ability to determine when Python computations were necessary in the conversation, and it consistently generated valid code. Most notably, Higgs accurately verified the student's calculations by leveraging Python code, underlining the utility of our approach in improving the computational reliability of LLMs.
These results demonstrate the potential of our stateful prompt design and generated mock conversations with code soliloquies in significantly enhancing the capabilities of LLMs, particularly in the context of calculation-intensive subjects. By fostering accuracy and computational reliability, our approach can transform LLMs into more effective and reliable educational tools. Code, models, and datasets can be accessed here 1. Footnote 1: [https://github.com/luffycodes/Tutorbot-Spock-Phys](https://github.com/luffycodes/Tutorbot-Spock-Phys) ## 2 Related Work In this section, we explore two critical areas that form the foundation of our study. First, we discuss the principles of ITS and how they seamlessly integrate with the concepts of Learning Science in the context of our stateful prompt design. Following this, we delve into the realm of LLMs, focusing on their mathematical abilities. ### Intelligent Tutoring Systems and Learning Science Principles ITS have carved a significant niche in the sphere of personalized education by providing students an interactive and individualized learning experience Winkler and Sollner (2018). ITS can be broadly classified into four categories Feng et al. (2021). Firstly, Dialogue-Based ITS such as AutoTutor Graesser et al. (2004) leverage natural language processing to pinpoint and rectify student misconceptions. Secondly, constraint-based modeling systems like KERMIT Suraweera and Mitrovic (2002) utilize predefined constraints to steer student learning. Thirdly, Knowledge Tracing models Liu et al. (2022); Sonkar et al. (2020) track student knowledge states to capture their problem-solving skills. Lastly, Bayesian modeling Waters et al. (2012) extends the model tracing approach by introducing Bayesian networks. Our proposed framework synergizes the principles of the first two types of ITS, utilizing a scaffolding strategy to deconstruct complex physics problems into smaller, manageable steps, and guiding students through these steps using conversational dialogues. This scaffolding approach is deeply ingrained in specific learning science principles Wing (2006); Shute et al. (2017), which emphasize the efficacy of problem decomposition in fostering student learning. Our methodology aligns with the socio-constructivist model of learning Vygotsky and Cole (1978), a model that champions scaffolding in education. This model advocates the breakdown of complex concepts into smaller subtasks, which are easier for learners to grasp--a strategy that is at the heart of our conversation design. Further, research indicates that optimal learning outcomes are achieved when the complexity of the task is synchronized with the learner's current abilities Stone (1998). Thus, our approach merges the principles of ITS and learning science to provide an effective and engaging learning experience. ### Large Language Models and their Math abilities Recent advancements in NLP have led to the development of LLMs that show remarkable capabilities in generating human-like text and understanding complex language patterns. These capabilities make LLMs ideally suited for applications in ITS, which aim to engage students in a natural, interactive learning experience. Large-scale models such as GPT-4 Bubeck et al. (2023) and PaLM Chowdhery et al. (2022), have garnered significant attention for their advanced capabilities. However, smaller models like LLaMA Touvron et al. (2023, 2023) have also demonstrated promising results. 
These smaller models offer additional advantages such as increased customizability, safer deployment, and reduced costs. Despite these advancements, one of the key challenges these models face is their limited accuracy in handling mathematical calculations. For instance, models like ChatGPT and GPT-4 have shown only \(55\%\) and \(59\%\) accuracy, respectively, on elementary tasks like three-digit multiplication Dziri et al. (2023). This limitation is a significant concern for their application in ITS, particularly for subjects like physics that often involve complex calculations. Various strategies have been explored to improve the mathematical capabilities of LLMs. Some of these approaches include the evol-instruct framework of WizardMath Luo et al. (2023), combining LLMs with symbolic solvers He-Yueya et al. (2023), or integrating them with external calculators Wei et al. (2022). Another innovative approach, which we adopt in our work, involves leveraging code Gao et al. (2023); Chen et al. (2022) for solving simple math word problems Cobbe et al. (2021). Our unique contribution in this area is the introduction of 'code soliloquies', which enable the precise invocation of Python computations whenever a student's response necessitates it, thus significantly enhancing the computational reliability and interaction quality of the tutoring model.

## 3 Methodology: Generating Conversations with Code Soliloquies

In this section, we outline our innovative stateful prompt design, a methodology specifically developed to ensure that generated synthetic student-tutorbot conversations incorporate accurate calculations. This design introduces 'code soliloquies' - a novel feature that ensures the precise execution of computations during dialogues. Our methodology employs two primary role prompts for the GPT-4 model, enabling it to simulate both the student and the tutorbot roles in generating these conversations. While a straightforward approach might be to instruct the model to 'act as a student/tutorbot and respond to the tutorbot/student', we instead adopt a more sophisticated strategy inspired by the CLASS framework. The student-specific prompt, detailed in appendix A.1, instructs the GPT-student to generate inquiries and responses mimicking a real student's behavior. The tutorbot-specific prompt instructs the GPT-tutorbot to simplify complex problems into subproblems, helping the student solve these incrementally. The prompt also directs the bot to offer regular hints and not reveal answers prematurely. The heart of our methodology lies in the intricate design of the tutorbot prompt, because it is through this prompt design that we introduce the concept of 'code soliloquies' - a critical feature ensuring the accurate execution of computations during dialogues. The tutorbot prompt is composed of four sub-prompts representing four distinct states the tutorbot can be in. These states form the backbone of the 'code soliloquy'. They represent the internal thought process of the tutorbot as it determines whether a calculation is necessary for its response. Now, let us delve into the specifics of each of these states to understand the tutorbot's internal monologue better. 1. **Deciding State**: The 'Deciding State' is the initial state, in which the GPT-tutorbot determines whether a calculation is needed for its response to the student. In this state, the tutorbot is prompted with a specific prompt designed for this purpose (refer to appendix A.2).
The prompt instructs the model to make a binary decision - 'yes' or 'no' - to the question 'Use Python?', signaling whether Python computations are necessary. The model should output 'yes' if the student's response contains a numerical answer that needs verification using Python, or if the model anticipates its upcoming response to be reliant on mathematical calculations. Conversely, the model should output 'no' when the scenario doesn't demand calculations. If the output is 'yes', the tutorbot transitions to the 'Use Python State'. If the output is 'no', the conversation flow moves to the 'No Python State'. Also, throughout this process, the Python functionalities remain hidden from the student. 2. **Use Python State**: If the GPT-tutorbot model decides to use Python, it is then prompted using the prompt specific to the 'Use Python State' (refer to appendix A.3). The prompt instructs the model to first output a natural language description of the desired calculation, and then generate the corresponding Python code, enclosed within backticks and the 'Python' keyword for easy parsing. 3. **Received Python State**: After the execution of the Python code from the previous step, the final state of the code soliloquy is reached. The GPT-tutorbot model is prompted using the specific prompt for this state (refer to appendix A.4). The model is instructed to use the Python output to assess the student's answer and provide suitable feedback. If the student's answer is approximately close to the Python output (using rounding for comparison), the model is instructed to approve the answer. Once this state is concluded, the GPT-tutorbot's response is relayed to the GPT-student model, and the GPT-tutorbot model resets to the 'Deciding State'. 4. **No Python State**: If the GPT-tutorbot concludes during the 'Deciding State' that there is no need for Python, it transitions to the 'No Python State'. The prompt specific to this state (refer to appendix A.5) instructs the model to continue the conversation and respond to the student. Once the GPT-tutorbot's response is relayed to the GPT-student model, the GPT-tutorbot model reverts to the 'Deciding State'. Thus, our unique stateful prompt design facilitates the creation of synthetic conversations where the tutorbot, though limited in its ability to perform mathematical calculations, can skillfully use Python for computations to guide the conversation accurately. This methodology significantly enhances the quality of synthetic dialogues, paving the way for the next crucial phase: training our model with these enriched dialogues. In the following section, we discuss how these synthetic conversations are used to fine-tune our Higgs model.

## 4 Dataset Curation

Our methodology is underpinned by the careful curation of a high-quality dataset, termed PHY300, which comprises a diverse range of problems extracted from NCERT physics textbooks, covering topics from Newton's Laws to Thermodynamics to Electromagnetism. A physics Subject Matter Expert (SME) was enlisted to provide solutions for these problems, incorporating the necessary mathematical computations within each solution. This process yielded a diverse dataset of 300 unique question-solution pairs. These question-solution pairs become the seed for generating 450 mock student-tutorbot conversations, an instance of which is depicted in figure 1. To generate high-quality conversations with rich pedagogical value, we further enrich the SME-provided solutions with the GPT-4 model.
The aim was to transform these succinct solutions into more comprehensive, step-by-step guides that explained the problem-solving process in a detailed, intuitive manner. To achieve this, we designed a prompt that guided the GPT-4 model to not only elaborate on each step but also to articulate the underlying logic and principles guiding these steps, essentially providing a 'teaching narrative'. The output from GPT-4 was a comprehensive, easy-to-follow, and pedagogically sound step-by-step guide, based on the original SME-provided solution. The exact wording of the prompt is provided in appendix A.6 for reference. This process of enhancing the solutions leads us to a crucial assumption underpinning our methodology. It is well-established that an LLM, on its own, struggles to solve physics problems independently. This limitation is reflected in the LLM's performance on datasets like MMLU (Hendrycks et al., 2020) and GSM8K (Cobbe et al., 2021). However, by providing the LLM with detailed, step-by-step solutions, we essentially equip it with a 'script' that it can then translate into interactive, pedagogical dialogue. This assumption is not unfounded but is based on the inherent capabilities of an LLM. While the LLM may not independently generate solutions, it excels in natural language understanding and generation, making it well-suited to explain pre-determined solutions in an engaging and informative manner. Moreover, by leveraging Python for mathematical computations, we address the LLM's limitations in handling calculations. During inference, we continue to hold this assumption, particularly for complex problems. For simpler problems, an LLM's ability to use Python computations might suffice without needing detailed solutions. Thus, our conversational dataset construction methodology enables the LLM to teach effectively by scaffolding the learning process and making informed use of the GPT-enhanced dataset. This sets the stage for the next critical phase: fine-tuning our Higgs model on this enriched dataset to develop a robust and effective tutoring system.

## 5 Model Training

In this section, we delve into the specifics of training our Higgs model, a fine-tuned version of the Llama-2-70b-chat base model (Touvron et al., 2023b). We used LoRA Hu et al. (2021) to freeze the base model weights and train only rank decomposition matrices for each layer of the Llama model. In addition to applying LoRA to the query, key, and value matrices, we also fine-tuned the projection and MLP matrices, a strategy known to boost performance (Dettmers et al., 2023). The Higgs model was trained with an initial learning rate of \(1e-4\) for \(25\) epochs. We used a cosine learning rate scheduler and a batch size of \(16\). The cost of training Higgs can be broken down into two primary components. First, the creation of the conversational dataset involves prompting GPT-4, which costs approximately $300. Second, we fine-tune the model using the CLM loss on the conversational dataset for \(25\) epochs. This process is executed on \(8\) NVIDIA RTX 48-GB A6000 GPUs and runs for three days. In summary, the implementation of the Higgs model involves conversational dataset generation using GPT-4, model selection, and domain-specific fine-tuning.

## 6 Model Evaluation

To accurately gauge the capabilities of our Higgs model, we designed an extensive evaluation protocol. This protocol, equipped with crucial metrics, measures Higgs' ability to effectively utilize Python computations within an educational dialogue.
Paired with a Subject Matter Expert (SME) to provide measurements for each metric on a set of test questions, this protocol forms the backbone of our evaluation process. ### Evaluation Protocol Our evaluation protocol is designed to assess Higgs' performance in a comprehensive manner, focusing on its ability to appropriately use Python for calculations and its overall reliability in an educational dialogue. The protocol is centered around four key performance metrics: 1. Python Usage Accuracy: This first metric evaluates Higgs' ability to accurately determine when Python computations are needed within the dialogue. Specifically, it assesses whether Higgs correctly invokes Python to generate a suitable response or provide feedback to the student, for example in instances where the student's response includes a numerical answer requiring confirmation. This metric, therefore, serves as an indicator of Higgs' precision in recognizing and responding to calculation-dependent scenarios within the educational dialogue. 2. Non-Usage of Python: The second metric evaluates Higgs' ability to correctly identify instances where the use of Python is unnecessary. This ensures that the model judiciously invokes Python and can effectively distinguish between calculation-dependent and independent scenarios. Thus, this metric complements the first one by evaluating Higgs' ability to correctly avoid the use of Python when it is not needed. 3. Code Compilation: The third metric gauges the reliability of the Python code generated by Higgs. This involves checking if the code is syntactically correct and if it compiles without errors. A successful compilation validates the model's capability to generate executable Python code. 4. Calculation Verification: The final and most critical metric measures Higgs' ability to verify calculations using Python. This assesses the model's competence in cross-verifying a student's calculation-based answer and providing accurate feedback. Each of these metrics is binary, indicating either a success (1) or a failure (0) for the given task. This comprehensive evaluation protocol allows us to thoroughly assess Higgs' performance, ensuring that it accurately and reliably utilizes Python computations in the context of an educational dialogue. ### Preliminary SME Evaluation Our evaluation protocol was executed in collaboration with a Subject Matter Expert (SME), who devoted six hours to this task. The SME tested our model on a set of 25 questions, covering a wide range of topics. Each question was introduced to the model twice, once with the correct answer and once with an incorrect answer. The purpose of this methodology was two-fold. Firstly, when the correct answer was provided, we assessed if Higgs could accurately fact-check the answer using Python computations. Secondly, when the incorrect answer was provided, we evaluated whether Higgs could identify the error and provide the correct feedback to the student. The results of this evaluation process are summarized in the evaluation table 1. The Higgs model demonstrated an impressive performance across all metrics, showcasing its ability to accurately and reliably utilize Python computations in an educational dialogue. The perfect score on 'Python Usage Accuracy' and 'Non-Usage of Python' demonstrates the model's exceptional ability to discern when Python computations are necessary or superfluous during the conversation. 
\begin{table} \begin{tabular}{c|c|c|c} \hline \hline **Python Usage Accuracy** & **Non-Usage of Python** & **Code Compilation** & **Calculation Verification** \\ \hline 1.0 & 1.0 & 0.97 & 0.88 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance metrics for the Higgs model on a dataset of 50 test cases. Each case is a physics question posed with either a correct or an incorrect answer. The model demonstrated flawless aptitude in identifying when to employ or bypass Python computations and generated syntactically correct Python code with near-perfect consistency. Despite a marginally lower score in verifying calculations, primarily due to challenges with equation rearrangement, the overall performance strongly asserts the Higgs model's proficient and accurate use of Python in educational dialogues involving calculations.

The high score in 'Code Compilation' indicates that the Python code generated by the model is almost always syntactically correct and executable. The 'Calculation Verification' score, while slightly lower than the others, is still notably high. This metric shows the model's ability to correctly verify students' answers using Python computations. Our SME observed that the model struggled with questions that required equation rearrangement, a known limitation of the model's mathematical capabilities. This observation provides context for the slightly lower score in this metric. Overall, these scores affirm the effectiveness of our methodology and the resulting proficiency of the Higgs model. Higgs' successful usage of Python computations significantly enhances the quality and accuracy of its educational dialogues, making it a powerful tool for AI-assisted education.

## 7 Conclusion

Our research presents a novel stateful prompt design that significantly bolsters the quality of synthetic conversation datasets, particularly for calculation-intensive subjects. Using an inner monologue or code soliloquy in a GPT-4 simulated tutorbot, we enable it to decide when a response requires calculations, script the necessary Python code, and leverage the output to generate accurate responses and feedback as a tutorbot. This innovative use of code soliloquy effectively mitigates GPT-4's known limitation in handling calculations, thereby improving its utility in generating mathematically accurate conversations. Our model, named Higgs, fine-tuned on these mock conversations, demonstrates the effectiveness of our approach in training large language models to accurately perform computations within an educational dialogue. Demonstrating an impressive ability to accurately and consistently deploy Python for computations, Higgs underscores the significance of integrating code soliloquies in the creation of synthetic dialogue datasets. Thus, our research underscores the importance of incorporating code soliloquies in the generation of synthetic conversation datasets, paving the way for more accurate and computationally reliable Intelligent Tutoring Systems.

## Acknowledgements

This work was supported by NSF grants 1842378, ONR grant N0014-20-1-2534, AFOSR grant FA9550-22-1-0060, and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047.
2303.13230
Volumes of Solid Objects in Elamite Mathematics
This article studies three-dimensional objects and their volumes in Elamite mathematics, particularly those found in the Susa Mathematical Tablet No.\,14 (\textbf{SMT No.\,14}). In our discussion, we identify some basic solids whose volumes have been correctly computed in Babylonian and Elamite mathematics. We also show that the Elamite scribes knew the right formula for calculating the volume of a certain pyramid which is a rare phenomenon occurring in the Babylonian mathematical tablets.
Nasser Heydari, Kazuo Muroi
2022-12-29T18:07:44Z
http://arxiv.org/abs/2303.13230v1
# Volumes of Solid Objects in Elamite Mathematics ###### Abstract This article studies three-dimensional objects and their volumes in Elamite mathematics, particularly those found in the Susa Mathematical Tablet No. 14 (**SMT No. 14**). In our discussion, we identify some basic solids whose volumes have been correctly computed in Babylonian and Elamite mathematics. We also show that the Elamite scribes knew the right formula for calculating the volume of a certain pyramid which is a rare phenomenon occurring in the Babylonian mathematical tablets. ## 1 Introduction **SMT No. 14** is one of 26 clay tablets excavated from Susa in southwest Iran by French archaeologists in 1933. The texts of all the Susa mathematical tablets (**SMT**) along with their interpretations were first published in 1961 (see [1]). This tablet1 consists of two problems both of which concern the volume of an imaginary large grain-heap, whose length, width, and height are known. From a mathematical point of view, the first problem is very important because the volume of a pyramid is correctly calculated. Unfortunately, the second problem is nearly unintelligible to us because most parts of the text are lost. Footnote 1: The reader can see the new photos of this tablet on the website of the Louvre’s collection. Please see [https://collections.louvre.fr/en/ark:/53355/cl010186539](https://collections.louvre.fr/en/ark:/53355/cl010186539) for obverse and reverse. ## 2 Volumes of Solid Objects Usually, a _solid_ is any limited portion of three-dimensional space bounded by surfaces. If all boundary surfaces forming a solid are planes, it is called a _polyhedron_.2 The intersections of planes in a polyhedron are called _edges_ and the polygons formed by the edges are _faces_. The intersections of the edges are the _vertices_ of the polyhedron. The angle between two faces of a polyhedron is a _dihedral angle_. The space near a vertex of a polyhedron is called a _solid angle_ or _polyhedral angle_, which is formed by the intersection of at least three faces. A polyhedron is convex if the line segment connecting any pair of its points is contained completely within the polyhedron. Here, we mostly consider convex polyhedra. Figure 1 shows four polyhedra three of which are convex but the one on the right is not. There is an interesting relation between the numbers of faces \(f\), edges \(e\) and vertices \(v\) of a polyhedron known as the _polyhedron formula_, observed by the Swiss mathematician Leonhard Euler (1707-1783), saying that \[v-e+f=2. \tag{1}\] Note that for the second polyhedron in Figure 1, we have \(12-20+10=22-20=2\) satisfying the Euler's polyhedron formula (1). Similar to regular polygons, we can define _regular polyhedra_ as those polyhedra all of whose faces are congruent regular polygons and all of whose polyhedral angles are equal. There is a classic theorem in geometry stating that there are only five regular convex polyhedra: tetrahedron with 4 faces, hexahedron (cube) with 6 faces, octahedron with 8 faces, dodecahedron with 12 faces and icosahedron with 20 faces (see Figure 2). These special polyhedra are usually called _Platonic solids_. These five regular polyhedra are named after the Greek philosopher Plato (circa 428-348 BC) who was so captivated by their perfect forms that in his dialogue _Timaeus_ he associates them with Figure 1: Examples of polyhedra Figure 2: Platonic solids what, at that time, were believed to be the basic elements: earth, fire, air, water and ether. 
Recently, there has been an increasing interest in a class of polyhedra (including the Platonic solids) because they arise naturally in a number of diverse problems in physics, chemistry, and biology, as well as a variety of other disciplines (see [1]). Although these polyhedra bear the name of Plato, the Greek mathematician Theaetetus of Athens (circa 417-369 BC) was the first to give a mathematical description of all five regular polyhedra and may also have been responsible for the first known proof that no other regular polyhedra exist (see [1, 2]). For a proof of this theorem, see [1]. The _volume_ of a solid is a quantity measuring the amount of space occupied by it, usually expressed as the number of times it contains a chosen solid as the unit of volume. Nowadays, volumes are mostly expressed in cubic units such as \(cm^{3}\), \(m^{3}\) and so on, depending on the length unit one considers for the three dimensions of the unit cube. A _cuboid_ is a polyhedron with six faces all of which are rectangles and all of whose dihedral angles are right angles. A cuboid has three main dimensions, i.e., the length, the width and the height. The volume of a cuboid \(\Omega\) with dimensions \(a,b,c\) is simply given by \[V_{\Omega}=abc.\] This is due to the simple observation that a cuboid with integer dimensions \(m,n,p\) contains \(m\times n\times p\) copies of the unit cube. Although the volume of a cuboid follows directly from the definition, to find the volumes of other solids, one needs to apply _Cavalieri's principle_, attributed to the Italian mathematician Bonaventura Cavalieri (1598-1647). This states that if two solids are bounded between two parallel planes and if any plane parallel to the boundary planes intersects the two solids in two cross-sections with equal areas, then the two solids have the same volume (see Figure 4). Figure 3: Volume of a cuboid A _prism_ \(\Gamma\) is a polyhedron two of whose faces are parallel equal polygons, say \(\Gamma_{n}\). In fact, if we translate \(\Gamma_{n}\) along a fixed direction, the obtained solid is a prism. In a general prism, all faces other than the two parallel ones are parallelograms. If these parallelogram faces of a prism are rectangles, it is called a _right prism_. The distance between the two parallel faces is called the _height_ of the prism, denoted by \(h\). The volume of a general prism is obtained by the general rule "area of base times height": \[V_{\Gamma}=h\times S_{\Gamma_{n}}. \tag{2}\] This easily gives the volumes of special prisms such as triangular prisms, parallelepipeds and cuboids. A _truncated prism_ is obtained from a prism by cutting it with two non-parallel planes as shown in Figure 5. The volume formula for a prism can be proved by using the fact that any cuboid \(\Lambda\) with sides \(a,b,c\) can be cut diagonally into two equal triangular right prisms, say \(\Lambda_{1}\) and \(\Lambda_{2}\), whose bases are right triangles with sides \(a\) and \(b\) (see Figure 6). So, the common volume of these two triangular prisms is \[V_{\Lambda_{1}}=V_{\Lambda_{2}}=\frac{1}{2}abc=\left(\frac{1}{2}ab\right)c,\] which is area of base times height. Any polygon can be partitioned into triangles, so this, together with Cavalieri's principle, can be used to prove the general formula (2). Figure 4: Cavalieri's principle Figure 5: Different kinds of prisms A _pyramid_ \(\Lambda\) is a polyhedron obtained by connecting a point \(p\) outside a polygon \(\Lambda_{n}\) by straight lines to all of its points.
This point \(p\) is the _apex_ and the polygon \(\Lambda_{n}\) is the base of the pyramid. The distance between the apex and the base is the height of the pyramid. Note that all faces of a pyramid containing the apex are triangles. In a right pyramid, the line connecting the apex and the centroid of the base is perpendicular to the base. A _pyramidal frustum_ is obtained from a pyramid by cutting it with a plane parallel to its base (see Figure 7). One can make the observation that any right triangular prism \(\Lambda\) is the union of three right triangular pyramids \(\Lambda_{1},\Lambda_{2},\Lambda_{3}\) with equal volumes as shown in Figure 8. Figure 8: Dividing a prism into three pyramids with equal volumes Figure 6: Cutting a cuboid into two equal right triangular prisms Figure 7: Different kinds of pyramids In fact, on the one hand, \(\Lambda_{1}\) and \(\Lambda_{3}\) have the same height \(h\) and equal triangular base with sides \(a,b,c\), and on the other hand, \(\Lambda_{2}\) and \(\Lambda_{3}\) have the same height \(a\) and the same triangular base with height \(b\) and base \(h\). So by Cavalieri's principle \(V_{\Lambda_{1}}=V_{\Lambda_{2}}=V_{\Lambda_{3}}\). Now, since any pyramid can be divided into a finite number of triangular pyramids by triangulation of its polygonal base, one can use the previous fact to show that the volume of a general pyramid \(\Lambda\) with polygonal base \(\Lambda_{n}\) and height \(h\) is \[V_{\Lambda}=\frac{1}{3}\times h\times S_{\Lambda_{n}}.\] For a pyramidal frustum whose base and top are regular \(n\)-gons with sides \(a\) and \(b\) respectively and whose height is \(h\), one can easily find a volume formula with respect to \(n\), \(a\), \(b\) and \(h\). In fact, consider Figure **9** in which the frustum with \(\Gamma_{n}(a)\) and \(\Gamma_{n}(b)\) as the base and top is a part of pyramid with height \(h+h^{\prime}\) and base \(\Gamma_{n}(a)\). Since the two triangles \(\triangle AH^{\prime}B^{\prime}\) and \(\triangle AHB\) in the figure are similar, we have \[\frac{\overline{AH^{\prime}}}{\overline{H^{\prime}B^{\prime}}}=\frac{ \overline{AH}}{\overline{HB}}.\] Also, \(\overline{AH^{\prime}}=h^{\prime}\), \(\overline{AH}=h+h^{\prime}\), \(\overline{B^{\prime}H^{\prime}}=\frac{b}{2\sin(\frac{\pi}{n})}\), and \(\overline{BH}=\frac{a}{2\sin(\frac{\pi}{n})}\), so it follows from the last equality that \[h^{\prime}=\frac{bh}{a-b}.\] On the other hand, since \(S_{\Gamma_{n}(a)}=\frac{na^{2}}{4}\cot(\frac{\pi}{n})\) and \(S_{\Gamma_{n}(b)}=\frac{nb^{2}}{4}\cot(\frac{\pi}{n})\), and the volume of the frustum is the volume of a pyramid with height \(h\) and base \(\Gamma_{n}(a)\) minus that of the small pyramid with height \(h^{\prime}\) and base \(\Gamma_{n}(b)\), we can write \[V=\frac{h+h^{\prime}}{3}\times\frac{na^{2}}{4}\cot\left(\frac{\pi}{n}\right)- \frac{h^{\prime}}{3}\times\frac{nb^{2}}{4}\cot\left(\frac{\pi}{n}\right)\] which can be simplified as \[V=\frac{nh}{12}\cot\left(\frac{\pi}{n}\right)\left(a^{2}+ab+b^{2}\right). \tag{3}\] Figure 9: Volume of a pyramidal frustum whose base and top are regular polygons Besides polyhedra, other solids can be obtained by rotating closed surfaces around a fixed direction (the _axis of rotation_) in 3-space, generally known as _solids of rotation_. Examples of such solids of rotation are spheres, ellipsoids, cylinders and cones, obtained by rotating a semicircle, a semi-ellipse, a rectangle and a right triangle respectively around a fixed direction as shown in Figure 10. 
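As a quick numerical check of (3), the short sketch below (with arbitrarily chosen illustrative values) compares the formula with the direct computation of the frustum volume as "large pyramid minus small pyramid" used in the derivation above.

```
import math

def frustum_volume(n, a, b, h):
    """Volume of a frustum whose base and top are regular n-gons with
    sides a and b (a > b) and whose height is h, using formula (3)."""
    return n * h / 12 * (1 / math.tan(math.pi / n)) * (a ** 2 + a * b + b ** 2)

def frustum_volume_direct(n, a, b, h):
    """Same volume computed as 'large pyramid minus small pyramid':
    h' = b*h/(a - b) and S_n(s) = n*s^2/4 * cot(pi/n)."""
    cot = 1 / math.tan(math.pi / n)
    area = lambda s: n * s ** 2 / 4 * cot
    h_prime = b * h / (a - b)
    return (h + h_prime) / 3 * area(a) - h_prime / 3 * area(b)

# Illustrative check with n = 6, a = 4, b = 2, h = 3:
print(frustum_volume(6, 4, 2, 3), frustum_volume_direct(6, 4, 2, 3))
# both print approximately 72.75
```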
If the radius of the semicircle is \(r\), the volume of generated sphere is \(\frac{4}{3}\pi r^{3}\). It was Archimedes who showed that the volume of a sphere is equal to twice the volume between the sphere and its circumscribed cylinder. The volume of the cylinder obtained by rotating a rectangle with sides \(r\) and \(h\) around the latter is \(\pi r^{2}h\) which follows from Cavalieri's principle. If the base and the height of a right triangle are \(r\) and \(h\) and we rotate it around its height, the volume of the generated cone is \(\frac{1}{3}\pi r^{2}h\) which is similar to the volume formula of a pyramid. One way to intuitively see why this is true is to place two containers in the shapes of a cone and a cylinder with the same height and circular base side by side. By filling the cone with water and then pouring it into the cylinder, it is apparent that the level of water reaches one-third of the height of the cylinder. This suggests that the volume of a cone with height \(h\) and radius \(r\) is that of a cylinder with height \(\frac{h}{3}\) and the same radius \(r\). ## 3 Volumes in ancient mathematics Although the above-mentioned volume formulas for basic solids can be obtained by using elementary techniques from solid geometry3 or double integration from calculus, their appearance in mathematics can be traced back to Greek mathematicians such as Euclid (in Book XII of the _Elements_) and Eudoxus of Cnidus (circa 395-337 BC) who developed solid geometry by establishing the usual proportions and volume relations for solid figures. For example, Eudoxus proved that the volume of a sphere is proportional to the cube of its radius and the volume of a pyramid is the one-third of that of a prism with the same base and height. Footnote 3: The interested reader can consult [10] for a detailed discussion on this topic. Whereas the Greek mathematicians conducted a detailed study of solids and their volume, there is evidence that the Babylonians and Egyptians used certain formulas concerning the volume of cubes, prisms and cones long before the Greeks (see [11, 12, 13]). Besides the purely mathematical point of view, some of these formulas seemed to be Figure 10: Different kinds of solids of rotation used in calculations regarding construction projects such as digging a hole or a canal or building a brick wall. Such volume formulas would have been of great advantage to builders. For example, by using the dimensions of a cubical brick and an intended wall, they could estimate the total number of bricks needed to finish the wall. In addition, such data could be used to determine the number of workers required for any project. Unsurprisingly, calculating the volume of pyramids was a challenge for ancient scribes. Apparently, the Egyptians and the Babylonians were aware of the volume formula for a pyramid but they did not use it explicitly. One can see this implicit usage in the Babylonian and the Egyptian formulas for a pyramidal frustum, say \(\Delta\), with square base of side \(a\) and square top of side \(b\) and height \(h\) (see Figure **11**). As is known from mathematical tablet **BM 851944**, the Babylonians used the volume formula Footnote 4: For an interpretation of a part of this text, see the appendix section. \[V_{\Delta}^{B}=\left[\left(\frac{a+b}{2}\right)^{2}+\frac{1}{3}\left(\frac{a- b}{2}\right)^{2}\right]h \tag{4}\] for the volume of the pyramidal frustum \(\Delta\). 
One way to obtain this formula is the "cut and paste" method given in Figure **11**.5 A pyramidal frustum is first cut vertically at its four corners to get four pyramids with height \(\frac{h}{2}\) and square bases with side \(\frac{a-b}{2}\). Next, we slice off four extra parts on four faces of the lower half frustum to get four truncated prisms. Then these four polyhedra are rotated \(180^{\circ}\) around their top edges and attached to the faces of the upper half Figure 11: Babylonian formula for the volume of a pyramidal frustum frustum as shown in the figure. We can also attach two small triangular pyramids on their proper faces to get a pair of pyramids with square bases. At the end, we obtain a cube with dimensions \(h\), \(\frac{a+b}{2}\) and \(\frac{a+b}{2}\) plus two pyramids of square bases with sides \(\frac{a-b}{2}\) and height \(\frac{h}{2}\). The total volume of these solids is exactly the value given in (4). The second part of the formula clearly proves that the Babylonians knew the formula of a pyramid with a square base. On the other hand, according to the _Moscow papyrus_6, the Egyptians applied the formula Footnote 6: The Moscow Mathematical Papyrus is an ancient Egyptian mathematical papyrus containing several problems in arithmetic, geometry, and algebra. It is held in the collection of the Pushkin State Museum of Fine Arts in Moscow. \[V_{\Delta}^{E}=\frac{h}{3}\left(a^{2}+ab+b^{2}\right) \tag{5}\] for the same pyramidal frustum (note that this is obtained from (3) by setting \(n=4\)). Although the formula is different than that of the Babylonians, the two formulas are equivalent. We can use the "cut and paste" method shown in Figure 11 to obtain the formula (see [14, pp. 35-39]). The only thing in this method is that we replace four pyramids with heights \(h\) in the corners with four cubes with the same base but height \(\frac{h}{3}\). The other steps are just cutting and pasting processes as shown in the figure. Figure 12: Egyptian formula for the volume of a pyramidal frustum Although no explicit formula or direct calculation for the volume of a pyramid is given, there are clay tablets that address problems regarding the volumes of solids whose calculations involve the volume of a pyramid. Two such tablets are **BM 96954** and **SMT No. 14** dealing with the volume of granaries. In both mathematical and non-mathematical texts, different shapes are suggested for ancient granaries. The common shape of a cylinder with a domed top is one of the oldest and can be found on seal imprints. For example, in figure 222 of **MDP XVI** (see [12, Plate 15]), a worker is climbing up a ladder to put grain into a pair of cylindrical granaries (see Figure 13). This ancient seal from Susa caused some scholars to suggest that the cylinder was the likely shape of granaries in ancient Elam. Unfortunately, this led them to misunderstand some mathematical texts and so to mistakes in their interpretations (see [1, TEXTE XIV], for example). In this process of computing the volume of a truncated triangular prism (see Figure 14), the Babylonian and Elamite scribes usually needed to calculate the volume of a special rectangular pyramid. This truncated triangular prism was a solid shape familiar to the Elamites and the Babylonians and might have served as a pattern for building storage facilities such as granaries due to the rigid structure provided by this shape. 
Figure 14: A truncated triangular prism and its dimensions

Figure 13: A pair of granaries on a Susa seal dated to circa 3500-3000 BC

Let us compute the volume of the truncated triangular prism in Figure 14 by assuming that its height is \(h\), its width is \(y\), its ridge is \(x\), and its length is \(z\). We also denote the other two parts of the length by \(x_{1}\) and \(x_{2}\) (that is, the lengths of two right rectangular pyramids attached to the middle triangular prism). Clearly, \(z=x+x_{1}+x_{2}\). In general, \(x_{1}\) and \(x_{2}\) may not be equal, although Babylonian and Elamite scribes considered the special case \(x_{1}=x_{2}\). First, note that our solid consists of three parts: the middle part is a triangular prism and the left and right parts are two rectangular pyramids. The volume of the triangular prism is \(V_{0}=\frac{xyh}{2}\), while those of the two rectangular pyramids are \(V_{1}=\frac{1}{3}\times x_{1}\times y\times h=\frac{x_{1}yh}{3}\) and similarly \(V_{2}=\frac{x_{2}yh}{3}\). Therefore,

\[V=V_{0}+V_{1}+V_{2}=\frac{xyh}{2}+\frac{x_{1}yh}{3}+\frac{x_{2}yh}{3}=\frac{yh}{6}\times(3x+2x_{1}+2x_{2})\]

implying that

\[V=\frac{2zyh+xyh}{6}. \tag{6}\]

As we see in the next section, the Susa scribes correctly computed the volume of such a truncated triangular prism in which the two pyramids are equal (that is, \(x_{1}=x_{2}\)). Further, mathematical analysis of the text shows that the Susa scribes knew the formula for the volume of a specific rectangular pyramid whose height and sides of the base are all the same number \(h\). They used the formula \(\frac{h^{3}}{3}\) for the volume, which is obtained from the usual formula "one-third of height times the area of base".

## 4 SMT No. 14

As mentioned, this text consists of two problems, both of which concern the volume of an imaginary grain-heap whose length, width, and height are 10 \(\mathsf{nindan}\) (\(\approx 60m\)), 6 \(\mathsf{nindan}\) (\(\approx 36m\)), and 3 \(\mathsf{nindan}\) (\(\approx 18m\)) respectively. From the mathematical point of view, the first problem is very important because the volume of a pyramid is correctly calculated and the mathematical technical term a-ra _kayyamanum_ occurs in the calculation.
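Before turning to the text itself, a quick numerical check of (6) against the piecewise sum \(V_{0}+V_{1}+V_{2}\), for illustrative dimensions:

```python
from fractions import Fraction as F

def piecewise_volume(x, x1, x2, y, h):
    """Volume of the truncated triangular prism as the sum of its three parts."""
    V0 = F(1, 2) * x * y * h    # middle triangular prism
    V1 = F(1, 3) * x1 * y * h   # left rectangular pyramid
    V2 = F(1, 3) * x2 * y * h   # right rectangular pyramid
    return V0 + V1 + V2

def formula_6(x, x1, x2, y, h):
    """Closed form (6): V = (2zyh + xyh)/6 with z = x + x1 + x2."""
    z = x + x1 + x2
    return F(2 * z * y * h + x * y * h, 6)

# one case with unequal wings, and the scribes' symmetric case x1 = x2
for dims in [(4, 3, 5, 6, 3), (4, 3, 3, 6, 3)]:
    assert piecewise_volume(*dims) == formula_6(*dims)
print("formula (6) matches the piecewise volume")
```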
(L9) 40 _ta-mar_ 40 _as-sum_ 2 sig gur\({}_{7}\)_a-na_ 2 tab-ba (L10) 2(sic),20 _ta-mar_ 1,20 _a-na_ 27 _i-si-ma_ 36 _ta-mar_ (L11) 36 _i-n[a]_ 1,12 zi 36 [_ta-mar tu_]-_ir-ma_ (L12) 3 _me-la-a-am_ nignign 9 _ta-[mar_ igi-9 _pu-t_]_u-ir_ (L13) 6,40 _ta-mar_ 6,40 _a-na_ 36 _i-s[i-ma]_ (L14) 4 _ta-mar_ 4 _ka-aq-qa-du_ 3 _me-l[a-a-am_ gar] (L15) [_as-su_]_m_ _i-na am-ma-at am-ma-a[t i-ku-lu_] (L16) [_a-na_ 2] tab-ba 6 _ta-mar_ 6 sag 6 [_a-na_ 4] (L17) [_ka-aq-qa_]-_di_ dah 10 _ta-mar_ 10 [uS] (L18) [_\(\cdots\)_\(\cdots\)_\(\cdots\)_\(\cdots\)_]_ Reverse: Lines 1-7 (L1) [\(\cdots\)_\(\cdots\)_\(\cdots\)_\(\cdots\)_]_ (L2) [\(\cdots\)_\(\cdots\)_\(]\)_3 _me-la-a-am_ [_a-na_ 12] (L3) [_i-si-ma_ 36 _ta-mar_ 3[6 _a-na_ 24 _i-si-ma_] (L4) [14],24 _ta-mar_ sahar 14,2[4 \(\cdots\)_\(\cdots\)_\(\cdots\)_]_ (L5) _a-na_ 8 _na-as-pa-ak_ gur\({}_{7}\)_[_i-si-ma_] (L6) 1,55,12 gub-_ma_ 20,30(sic) [gur\({}_{7}\)] (L7) \(u\) 2-su_ 24 gur _se-u[m ki-a]-am ne-[pe-sum_] ### Translation Obverse: Lines 1-18 (L1) The grain-heap. For the volume 14,24, I put down 3 (nindan, that is, 6) gi as the height. (L2-3) For the volume 14,24, what did I put down as the length, width, and the top? You, make the reciprocal of the (constant) 12 of depth, (and) (L4) you see 0;5. Multiply 0;5 by the volume 14,24, and (L5) you see 1,12. Square the height, 3 (nindan, that is, 6) gi, (and) you see 9. (L6) Multiply 9 by the height 3 again, and you see 27. (L7-8) From the regular number 1, subtract 0;20 of the volume, one third of the regular (number 1) of the wing that you add (to the middle part), (and) (L9) you see 0;40. Since there are two dilapidated parts of the grain-heap, double 0;40, (and) (L10) you see 2;20 (error for 1;20). Multiply 1;20 by 27, and you see 36. (L11) Subtract 36 from 1,12, (and) you see 36. Return and (L12) square the height 3, (and) you see 9. Make the reciprocal of 9, (and) (L13) you see 0;6,40. Multiply 36 by 0;6,40, (and) (L14) you see 4. 4 is the top (length). Put down the height 3. (L15-16) Since the inclination of the sides of the grain-heap is 1 kus (\(\approx 50cm\)) per 1 kus, double the height 3, (and) you see 6. 6 is the width. (L16-17) Add 6 to the top (length) 4, (and) you see 10. 10 is the length. (L18) \(\cdots\)_\(\cdots\)_\(\cdots\)_._ Reverse: Lines 1-7 * \(\cdots\)\(\cdots\)\(\cdots\)\(\cdots\). * Multiply the height 3 by 12 \(\cdots\), and you see 36. * Multiply 36 by 24, and you see 14,24. * Multiply the volume 14,24, \(\cdots\), by 8,0,0, the storage (constant) of the grain-heap, and 1,55,12,0,0 (**sila**) is confirmed, and 20,30 (error for 23) \(\mathsf{gur}_{7}\) and 2,24 \(\mathsf{gur}\) is the barley. Such is the procedure. ### Technical Terms in SMT No. 14 Before discussing the mathematical meaning of the problems, we explain a few technical terms that occur in the text. #### The length unit gi Since 1 \(\mathsf{gi}\) is equal to half a \(\mathsf{nindan}\) (\(\approx\) 3m), the value 3 \(\mathsf{gi}\) in lines 1 and 5 seems to be a mistake if it means "3 (\(\mathsf{nindan}\), that is, 6) \(\mathsf{gi}\)". However, we know that the same expressions were occasionally used in mathematical texts, for example: (i) 30 \(\mathsf{gi}\) "0;30 (\(\mathsf{nindan}\), that is, 1) \(\mathsf{gi}\)", (ii) 30 \(\mathsf{kus}\) "0;30 (\(\mathsf{nindan}\), that is, 6) \(\mathsf{kus}\)". Note that the basic length unit \(\mathsf{nindan}\) is always omitted. Due to the absence of the "sexagesimal point" in Babylonian mathematics, the scribe used the clumsy expression 3 \(\mathsf{gi}\) in our problem. 
The practice of writing a number in this way may also be found in a Sumerian inscription written around 2400 BC (see [14, 15]). #### Rectangular pyramids The grain-heap (\(\mathsf{gur}_{7}\)) consists of a right triangular prism and two rectangular pyramids attached to the top and base of the triangular prism (see Figure 15). In line 8, the pyramid is called \(\mathfrak{A}(\mathit{album})\) "a wing (of the grain-heap)" and also \(\mathsf{sig}\)\(\mathsf{gur}_{7}\) "a dilapidated part of the grain-heap" in line 9. As mentioned, the shape in this text was misunderstood by Bruins and he mistakenly considered a solid shape as shown in Figure 16 represented this grain-heap. Friberg Figure 15: An imaginary grain-heap believes that the imprint of a Susa seal (see Figure **13**) was the cause of this error ([10, 2]). #### Adjective _kayyamanum_ In several Susa mathematical texts the Akkadian adjective _kayyamanum_ "normal, regular, usual" occurs with the number 1 or 2 or 3. It determines the integer part of a number, namely, it is used to refer to, for example, the number 2 in sexagesimal numbers like \(2\times 60^{\pm n}\): \[2\times 60^{2}=2,0,0\ \ \text{and}\ \ 2\times 60^{-4}=0;0,0,0,2.\] Additionally, there is a possibility that 5 a-ra in **SMT No. 7** is an abbreviation for 5 a-ra _kayyamanum_, because the numbers 5 and 7 occur together as the prime factors of 35, and the former is called "the factor 5" and the latter is not modified at all (see [11, 12, 13]). Therefore, it is highly probable that the term a-ra _kayyamanum_, which is a translation of the Sumerian term a-ra-gub-ba, is specifically used for the numbers 1, 2, 3, and 5. In other words, these numbers are "normal, regular" in the sense that their reciprocals can be expressed by finite sexagesimal fractions. The Sumerians must have known the fact that the numbers 2, 3, and 5 are "regular" with respect to the base 60, and this was also known to the Susa and the Babylonian scribes. #### Inclination of a plane In line 15 occurs a technical expression of Babylonian mathematics that defines the inclination of a plane. A typical example is as follows: _i-na_ 1 kus \(x\) kus (k\(\ddot{\text{a}}\)) \(\mathfrak{h}\)-k\(\ddot{\text{a}}\) "In 1 kus (in height) it ate \(x\) kus (of fodder)". The underlying idea of this expression, which is also thought to be of Sumerian origin, is "the water ate away the bank of a river". It refers to the angle of erosion formed between the river surface and the incline of the eroded bank which is denoted by \(\alpha\) in Figure **17** depicting a cross section of both a river and bank. Figure 16: The grain-heap suggested by Bruins By considering the right triangle in Figure 17 whose bases are \(1\)kus and \(x\)kus, we can compute \(\tan(\alpha)=\frac{1}{x}\). In our problem since \(x=1\), the angle between the ground and the side of the grain-heap is computed as \(\alpha=\arctan(\frac{1}{1})=\arctan(1)=45^{\circ}\) (see Figure 15 and Figure 18). ### Mathematical Calculations We now analyze the first problem of **SMT No. 14**. In the statement of this problem only two pieces of data are provided: the volume and the height of a grain-heap, which are obviously insufficient to determine the shape of the solid figure being considered. However, careful consideration of the calculation assists in understanding the geometrical intention of the scribe. The solid figure is a truncated triangular prism as shown in Figure 18. 
Note that, as we said in the previous section, both acute angles of the right triangles in the figure are \(45^{\circ}\), so they are isosceles triangles too. As in Figure 18, we denote the top (length) by \(x\), the width (of the base) by \(y\), the length (of the base) by \(z\), and the height by \(h\). Note that the value of height here is \(h=3\)**nindan**. Since the height \(h\) bisects the base \(BC\) in the isosceles right triangle Figure 17: Erosion of a river bank Figure 18: Dimensions of an imaginary Elamite grain-heap \(\triangle ABC\), the scribe has rightly assumed that \[\begin{cases}y=2h\\ z=x+2h.\end{cases} \tag{7}\] It follows from the translation of the text that the scribe has used the following formula for the volume \(V\) of the three-dimensional solid in Figure 18: \[V=xh^{2}+2(1-0;20)h^{3}. \tag{8}\] Note that this formula is consistent with (6), because by setting \(y=2h\) and \(z=x+2h\) in (6), we get \[V =\frac{yh}{6}(2z+x)\] \[=\frac{(2h\times h)}{6}(3x+4h)\] \[=xh^{2}+\frac{4}{3}h^{3}\] \[=xh^{2}+2\left(1-\frac{1}{3}\right)h^{3}\] \[=xh^{2}+2(1-0;20)h^{3}.\] Let us break down the formula (8). First, note that our solid consists of two equal rectangular pyramids, say \(P_{1}\) and \(P_{2}\), as well as a triangular prism, say \(\Lambda\) (see Figure 19). Clearly, the volume of \(\Lambda\) is obtained by "area of base times height": \[V_{\Lambda}=S_{\triangle ABC}\times x=\left[\frac{1}{2}(h)(2h)\right]\times x\] so \[V_{\Lambda}=h^{2}x. \tag{9}\] Figure 19: Splitting an imaginary grain-heap The remaining part \(2\left(1-\frac{1}{3}\right)h^{3}\) is the total volume of two equal rectangular pyramids \(P_{1}\) and \(P_{2}\). So, it follows from (8) that \[V_{P_{1}}=V_{P_{2}}=(1-0;20)h^{3}\] or equivalently \[V_{P_{1}}=V_{P_{2}}=\left(1-\frac{1}{3}\right)h^{3} \tag{10}\] which confirms that Susa scribes have computed the correct values of these volumes. #### Formula for the Volume of a Rectangular Pyramid The statement of formula (10) suggests two facts about the volumes of pyramids. Firstly, the Susa scribes have assumed that the volume of a rectangular pyramid with height \(h\), width \(h\) and length \(h\) is \(\frac{1}{3}h^{3}\). Secondly, if they subtract this value from the volume of a cube of the same dimensions, they get the volume of a rectangular pyramid with height \(h\), length \(h\) and width \(2h\). In the following, we give a geometric explanation for the first fact. Consider three copies of a rectangular pyramid of height \(h\) whose base is a square of side \(h\) too. If we rotate each copy in a certain way and then put them together, we obtain a cube of dimensions \(h,h,h\) as shown in Figure 20. Clearly, the volume of the resulting cube is equal to the sum of volumes of the three equal pyramids. Since the volume of the cube i.e., \(h^{3}\), was known to Elamite scribes, they could conclude that the volume of a rectangular pyramid with dimensions \(h,h,h\) must be \(\frac{1}{3}h^{3}\). The second fact now becomes clear. To see this, take a cube \(\Gamma\) with length \(h\), width \(h\) and height \(h\). As shown in Figure 20, this cube is the union of three copies of a rectangular pyramid \(\Gamma_{1}\) with equal length, width and height \(h\). If we attach two of these copies properly, we get a rectangular pyramid \(\Gamma_{2}\) with length \(h\), width \(2h\) and height \(h\) (see Figure 21). 
Figure 20: Splitting a cube into three equal pyramids

Figure 21: Volume of a rectangular pyramid

This means \(\Gamma\) is the union of \(\Gamma_{1}\) and \(\Gamma_{2}\) which implies that

\[V_{\Gamma}=V_{\Gamma_{1}}+V_{\Gamma_{2}} \tag{11}\]

and also

\[V_{\Gamma}=3V_{\Gamma_{1}}. \tag{12}\]

Since \(V_{\Gamma}=h^{3}\), both (11) and (12) imply that \(V_{\Gamma_{2}}=h^{3}-\frac{1}{3}h^{3}\) which proves (10). Now, we return to the first problem of **SMT No. 14**. Lines 1-3 tell us that the volume of the grain-heap, \(V=V_{\Lambda}+V_{P_{1}}+V_{P_{2}}\), is equal to \(14,24\) volume-sar, so it follows from (8) that

\[xh^{2}+2(1-0;20)h^{3}=14,24. \tag{13}\]

Since

\[1\text{ kus}=0;5\text{ nindan}\]

and

\[1\text{ volume-sar}=\left(1\text{ nindan}^{2}\right)\times(1\text{ kus})=0;5\text{ nindan}^{3},\]

according to lines 3-4,

\[14,24\text{ volume-sar}=(14,24)\times(0;5)\text{ nindan}^{3}=1,12\text{ nindan}^{3}.\]

Footnote 7: Note that 1 nindan is equal to 12 kus.

According to lines 5-14, in order to compute the value of \(x\), we can substitute \(h=3\) nindan in (13) and simplify. Note that we have converted all the units involved into nindan. We have

\[xh^{2}+2(1-0;20)h^{3}=14,24\]
\[\implies 3^{2}x+2(1-0;20)3^{3}=1,12\]
\[\implies 9x+(1;20)\times 27=1,12\]
\[\implies 9x+36=1,12\]
\[\implies 9x=1,12-36\]
\[\implies 9x=36\]
\[\implies x=\frac{1}{9}\times 36\]
\[\implies x=(0;6,40)\times 36\]
\[\implies x=4.\]

Finally, by lines 16-17, we can use (7) to write

\[y=2\times 3=6\]

and

\[z=4+(2\times 3)=4+6=10.\]

In the second problem the volume-sar 14,24 is converted to the **sila** unit by the storage constant 8,0,0:

Footnote 8: In fact, 1 volume-sar is equal to 5,0,0 **sila** (or 18,800 liters). The constant here is a bit larger than this for unknown reasons!

\[14,24\ \text{\bf volume-sar}\ =\ (8,0,0)\times(14,24)\ \text{\bf sila}=1,55,12,0,0\ \text{\bf sila}\]

and further to

\[23\ \mathbf{gur}_{7}\ \text{and}\ 2,24\ \mathbf{gur}\]

where \(\mathbf{gur}_{7}\) is the largest capacity unit such that 1 \(\mathbf{gur}_{7}\) = 5,0,0,0 **sila** in the Old Babylonian period and, as we saw before, 1 \(\mathbf{gur}\) = 5,0 **sila**.

## 5 Conclusion

Clearly, the Elamite scribes, like their Babylonian counterparts, were familiar with the basics of solid geometry and knew how to compute the volume of three-dimensional figures such as cubes, prisms and truncated pyramids. Our mathematical interpretation of **SMT No. 14** reveals that the scribes of Susa used a formula for the volume of a rectangular pyramid which enabled them to calculate the right value. It confirms that they knew the volume formula for a pyramid even if they may not have expressed it explicitly, and this ability on their part is of considerable interest to those researching the history of mathematics.

## Appendix: Mathematical Tablet BM 85194

In this appendix, we give the transliteration, the translation and the mathematical interpretation of lines 41-49 on the reverse of **BM 85194**.
### Transliteration Reverse II: Lines 41-49 (L41) _hi-ri-tum_ 10-ta-am _mu-hu_ 18 sukud _i-na_ 1 kus 1 sa-gal (L42) _sa-simu_ a sahar-hi-a za-e 5 \(u\) 5 ul-gar 10 _ta-mar_ (L43) [10] _a-na_ 18 sukud _i-si_ 3 _ta-mar_ 3 _i-na_ 10 ba-zi 7 (L44) _[ta-mar] sa-sim_ nigin-na _sa-simu mu-hu_ 10 ul-gal 17 _ta-mar_ (L45) _[\(\frac{1}{2}\) 17 he-pe]_ 8,30 _ta-mar_ nigin 1,12,15 _ta-mar_ (L46) 1,12,[15 gar]-ra igi-2-gal 3 dirig _sa mu-hu_ ugu (L47) _sa-sim_ nig[in susana] 45 _a-na_ 1,12,15 dah-ha-_ma_ (L48) 1,13 _ta-mar_ 1[8] _a-na_ 1,13 _i-si_ 22,30 (sic) _ta-mar_ (L49) 2 (ese) 1 (iku) 1 (ubu) gan sahar-hi-a _ki-\(<\)a-am\(>\) ne-pe-sum_ ### Translation Reverse II: Lines 41-49 (L41) A hole dug in the ground. Each of the sides of the upper surface is 10 (**nindan**\(\approx 60m\)) in length. The depth is 18 (**kus**\(\approx 9m\)). The inclination of the slope is 1 (**kus**) per 1 **kus** (\(\approx 50cm\)). (L42) (What are the sides of) the base surface and the volume? (When) you (perform the operation), add 0;5 and 0;5 together, and you see 0;10. (L43) Multiply 0;10 by 18 of the depth, and you see 3. Subtract 3 from 10, and you see 7, (L44) the side of the base surface. On the other hand, add the side of the base and the side of the upper together, and you see 17. (L45) Halve 17, and you see 8;30. Square (it), and you see 1,12;15. (L46) Put down 1,12;15. \(\frac{1}{2}\) of 3 that is the difference between the upper side and the base side. (L47) square (it) and (take) \(\frac{1}{3}\) (of the result). Add 0;45 to 1,12;15, and (L48) you see 1,13. Multiply 18 by 1,13, (and) you see 22,30 (error for 21,54). (L49) 2 (ese) 1 (**iku**) 1 (**ubu**) (\(=22,30\)**volume-sar**) is the volume (of the hole). Such is the procedure. ### Mathematical Interpretation Here, the scribe is considering a pyramidal frustum whose base and top are squares with sides \(a\) and \(b\) and the height (depth) is \(h\) (see Figure 22). According to the date in text, \(a=10\)**nindan** and \(h=18\)**kus**. Since the slope is 1, the angle \(\alpha\) in the figure must be \(45^{\circ}\), so \[1=\tan(45^{\circ})=\tan(\alpha)=\frac{h}{\frac{a-b}{2}}\] and thus \[h=\frac{a-b}{2}.\] Since \(a=10\) nindan, \(h=18\) kus, and \(1\) kus \(=0;5\) nindan, we get \[a-b=2\times(0;5)\times 18=3\text{ nindan}.\] This implies that \[b=a-(a-b)=10-3=7\text{ nindan}\] and \[a+b=7+10=17\text{ nindan}.\] Therefore \[\left(\frac{a+b}{2}\right)^{2}=\left(\frac{17}{2}\right)^{2}=(8;30)^{2}=1,12 ;15\text{ nindan}^{2}\] and \[\frac{1}{3}\left(\frac{a-b}{2}\right)^{2}=\frac{1}{3}\left(\frac{3}{2}\right)^ {2}=\frac{1}{3}\times(1;30)^{2}=0;45\text{ nindan}^{2}.\] The volume is \[V =\left[\left(\frac{a+b}{2}\right)^{2}+\frac{1}{3}\left(\frac{a-b} {2}\right)^{2}\right]h\] \[=18\times(1,12;15+0;45)\] \[=18\times(1,13)\] \[=21,54\text{ volume-sar}.\]
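The computation is easy to reproduce in decimal arithmetic; the short script below (an illustrative sketch) recovers the base side, the two areas and the volume \(21,54\) obtained above, in contrast with the scribe's \(22,30\) in line 48:

```python
from fractions import Fraction as F

# Data from BM 85194, rev. II 41-49: sides in nindan, depth in kus.
a = F(10)            # upper side: 10 nindan
depth_kus = F(18)    # depth: 18 kus
kus = F(5, 60)       # 1 kus = 0;5 nindan

# Slope of 1 kus per 1 kus implies (a - b)/2 equals the depth (in nindan).
b = a - 2 * depth_kus * kus               # base side: 7 nindan (line 43)

mid_sq = ((a + b) / 2) ** 2               # 8;30 squared = 1,12;15 (line 45)
corr = F(1, 3) * ((a - b) / 2) ** 2       # one third of 1;30 squared = 0;45

# Babylonian frustum rule (4): depth in kus times areas in nindan^2
# gives the result directly in volume-sar (1 sar = 1 nindan^2 x 1 kus).
V = depth_kus * (mid_sq + corr)

def to_sex(n):
    """Integer to sexagesimal digits, most significant first (illustrative helper)."""
    n, digits = int(n), []
    while n:
        digits.append(n % 60)
        n //= 60
    return list(reversed(digits)) or [0]

print(b, mid_sq, corr)   # 7, 289/4 (= 1,12;15), 3/4 (= 0;45)
print(V, to_sex(V))      # 1314 volume-sar = [21, 54]; line 48 has 22,30 instead
```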
2309.06100
Pseudo-variance quasi-maximum likelihood estimation of semi-parametric time series models
We propose a novel estimation approach for a general class of semi-parametric time series models where the conditional expectation is modeled through a parametric function. The proposed class of estimators is based on a Gaussian quasi-likelihood function and it relies on the specification of a parametric pseudo-variance that can contain parametric restrictions with respect to the conditional expectation. The specification of the pseudo-variance and the parametric restrictions follow naturally in observation-driven models with bounds in the support of the observable process, such as count processes and double-bounded time series. We derive the asymptotic properties of the estimators and a validity test for the parameter restrictions. We show that the results remain valid irrespective of the correct specification of the pseudo-variance. The key advantage of the restricted estimators is that they can achieve higher efficiency compared to alternative quasi-likelihood methods that are available in the literature. Furthermore, the testing approach can be used to build specification tests for parametric time series models. We illustrate the practical use of the methodology in a simulation study and two empirical applications featuring integer-valued autoregressive processes, where assumptions on the dispersion of the thinning operator are formally tested, and autoregressions for double-bounded data with application to a realized correlation time series.
Mirko Armillotta, Paolo Gorgi
2023-09-12T10:07:31Z
http://arxiv.org/abs/2309.06100v1
# Pseudo-variance quasi-maximum likelihood estimation of semi-parametric time series models+ ###### Abstract We propose a novel estimation approach for a general class of semi-parametric time series models where the conditional expectation is modeled through a parametric function. The proposed class of estimators is based on a Gaussian quasi-likelihood function and it relies on the specification of a parametric pseudo-variance that can contain parametric restrictions with respect to the conditional expectation. The specification of the pseudo-variance and the parametric restrictions follow naturally in observation-driven models with bounds in the support of the observable process, such as count processes and double-bounded time series. We derive the asymptotic properties of the estimators and a validity test for the parameter restrictions. We show that the results remain valid irrespective of the correct specification of the pseudo-variance. The key advantage of the restricted estimators is that they can achieve higher efficiency compared to alternative quasi-likelihood methods that are available in the literature. Furthermore, the testing approach can be used to build specification tests for parametric time series models. We illustrate the practical use of the methodology in a simulation study and two empirical applications featuring integer-valued autoregressive processes, where assumptions on the dispersion of the thinning operator are formally tested, and autoregressions for double-bounded data with application to a realized correlation time series. _Keywords:_ Double-bounded time series, integer-valued autoregressions, quasi-maximum likelihood. _JEL codes:_ C32, C52, C58. Introduction A wide range of time series models have been proposed in the literature to model the conditional mean of time series data. Their specification often depends on the nature of the time series variable of interest. For example, AutoRegressive Moving Average (ARMA) models (Box et al., 1970) are typically employed for time series variables that are continuous and take values on the real line. INteger-valued AutoRegressive (INAR) models (Al-Osh and Alzaid, 1987; McKenzie, 1988) and INteger-valued GARCH models (INGARCH) (Heinen, 2003; Ferland et al., 2006) are designed to account for the discrete and non-negative nature of count processes. Autoregressive Conditional Duration (ACD) models (Engle and Russell, 1998) are used for modeling non-negative continuous processes. Beta autoregressive models (Rocha and Cribari-Neto, 2009) are employed for modeling double-bounded time series data lying in a specified interval domain. The estimation of such models can be carried out by the Maximum Likelihood Estimator (MLE), which constitutes the gold standard approach for the estimation of unknown parameters in parametric models. However, the MLE requires parametric assumptions on the entire distribution of the time series process. This feature is not appealing when the interest of the study is only on modeling the conditional mean instead of the entire distribution. Furthermore, the likelihood function can sometimes present a complex form and the implementation of the MLE can become unfeasible. For instance, exact likelihood inference of INAR models is well-known to be cumbersome and numerically difficult, especially when the order of the model is larger than one (Bu et al., 2008; Drost et al., 2009; Pedeli et al., 2015). In such situations, the use of quasi-likelihood methods becomes attractive. 
The Quasi-MLE (QMLE), introduced by Wedderburn (1974), is a likelihood-based estimator where there is a quasi-likelihood that is not necessarily the true distribution of the data. Quasi-likelihoods are typically a member of the one-parameter exponential family. Gourieroux et al. (1984) show that the QMLE is consistent for the true unknown parameters of the model. Nevertheless, QMLEs can be inefficient because, given a parametric definition for the conditional mean of the process, the conditional variance is implicitly constrained to be a function of the conditional mean as determined by the exponential family of distributions that is considered. In order to improve the estimation efficiency for the parameters of the conditional mean in time series models, Aknouche and Francq (2021) propose a two-stage Weighted Least Squares Estimator (WLSE) where in the first step the conditional variance of the process is estimated and it is then used in the second step as weighting sequence for the solution of the weighted least squares problem. It is shown that this WLSE leads to improved efficiency with respect to QMLE if the variance function is correctly specified. A similar estimator has been more recently proposed in the context of estimating functions approach leading to the same type of efficiency improvement (Francq and Zakoian, 2023). In this paper, we propose a novel class of QMLEs for the estimation of the conditional expectation of semi-parametric time series models. The estimators are based on a Gaussian quasi-likelihood and a pseudo-variance specification, which can contain restrictions with the parameters of the conditional expectation. The Pseudo-Variance QMLEs (PVQMLEs) only require parametric assumptions on the conditional expectation as the pseudo-variance function does not need to be correctly specified. We establish strong consistency and asymptotic normality of the PVQMLEs under very general conditions. The case in which the pseudo-variance formulation corresponds to the true conditional variance of the process is obtained as a special case. We show that when no restrictions are imposed between the mean and pseudo-variance, the resulting unrestricted PVQMLE has the same asymptotic efficiency of a particular WLSE. Furthermore, if the pseudo-variance is correctly specified it achieves the same asymptotic efficiency as the efficient WLSE. On the other hand, when parameter restrictions are considered, the resulting restricted PVQMLEs can achieve higher efficiency compared to the efficient WLSE and alternative QMLEs. This result is theoretically shown in some special cases and empirically verified for INAR models through an extensive numerical exercise. We discuss how the specification of the pseudo-variance and the parameter restrictions naturally arise for time series processes with bounded support. We obtain that the restricted PVQMLEs retain the desired asymptotic properties when the imposed restrictions are valid with respect to the true parameter of the mean and a pseudo-true parameter of the conditional variance. The validity of such restrictions can be tested without requiring correct specification of the conditional variance. We derive a test for this purpose that can be used as a consistency test for restricted PVQMLEs. When the evidence-based parameter constraints are identified and validated, they constitute a restriction set where an higher-efficiency restricted PVQMLE can be obtained. 
Furthermore, under correct specification of the pseudo-variance, the test can be used as a specification test on the underlying process generating the data. Finally, the practical usefulness of PVQMLE approach is illustrated by means of two real data applications. One is concerned with INAR models and one with a Beta autoregression for double-bounded data. INAR processes depend on the distribution assumed for the innovation and the thinning specification (Lu, 2021). Our test allows us to test for the degree of dispersion in the thinning operator as well as the error term. There exists a vast literature of INAR models in testing innovations and marginal distributions dispersion (Schweer and Weiss, 2014; Aleksandrov and Weiss, 2020), testing for serial dependence (Sun and McCabe, 2013), and general goodness of fit tests (Weiss, 2018). However, to the best of our knowledge, specification tests are not available for the thinning dispersion. The thinning operator is typically assumed to be binomial, which implies underdispersion in the thinning. Once appropriate thinning and innovation restrictions are identified through the specification test, the corresponding PVQMLE is used to estimate the parameters of the INAR model. The second application concerns the analysis of daily realized correlations between a pair of stock returns, which forms a double-bounded time series as the realized correlation takes values between minus one and one. We consider a pseudo-variance specification based on the implied variance from Beta-distributed variables for the definition of PVQMLEs. We then test the validity of parametric restrictions between the mean and pseudo-variance to validate the use of restricted PVQMLEs. The remainder of the paper is organized as follows. Section 2 introduces the general mean and pseudo-variance framework and the PVQMLEs, together with some examples. Section 3 presents the main theoretical results of the paper and a comparison between the PVQMLE and alternative quasi-likelihood methods. Section 4 introduces the specification test for the validity of the constraints with an extensive simulation study in the case of INAR models. Section 5 presents empirical applications. ## 2 Specification and estimation ### PVQML estimators Consider a stationary and ergodic time series process \(\{Y_{t}\}_{t\in\mathbb{Z}}\) with elements taking values in the sample space \(\mathcal{Y}\subseteq\mathbb{R}\) and with conditional mean given by \[\mathrm{E}(Y_{t}|\mathcal{F}_{t-1})=\lambda(Y_{t-1},Y_{t-2},\ldots;\psi_{0})= \lambda_{t}(\psi_{0})\,,\quad t\in\mathbb{Z}, \tag{1}\] where \(\mathcal{F}_{t}\) denotes the \(\sigma\)-field generated by \(\{Y_{s}\,,\ s\leq t\}\), \(\lambda:\mathbb{R}^{\infty}\times\Psi\to\mathbb{R}\) is a known measurable function, and \(\psi_{0}\in\Psi\subset\mathbb{R}^{p}\) is the true unknown \(p\)-dimensional parameter vector. We denote with \(\nu_{t}\) the conditional variance of the process, i.e. \(\mathrm{V}(Y_{t}|\mathcal{F}_{t-1})=\nu_{t}\), which is considered to have an unknown specification. The model is a semi-parametric model as the quantity of interest is the parameter vector of the conditional mean \(\psi_{0}\) and other distributional properties are left unspecified and treated as an infinite dimensional nuisance parameter. The general specification of the model in (1) includes a wide range of time series models as special case. 
For instance, it includes linear and non-linear ARMA models when \(\mathcal{Y}=\mathbb{R}\), INGARCH and INAR models when \(\mathcal{Y}=\mathbb{N}\), ACD models when \(\mathcal{Y}=(0,\infty)\), and Beta autoregressive models for bounded data when \(\mathcal{Y}=(0,1)\). The main objective is to estimate the parameter vector \(\psi_{0}\) of the conditional expectation. For this purpose, we consider the specification of a pseudo-variance \[\nu_{t}^{*}(\gamma)=\nu^{*}(Y_{t-1},Y_{t-2},\ldots;\gamma),\quad t\in\mathbb{ Z}, \tag{2}\] where \(\nu^{*}:\mathbb{R}^{\infty}\times\Gamma\to[0,+\infty)\) is a known function that is indexed by the \(k\)-dimensional parameter \(\gamma\in\Gamma\subset\mathbb{R}^{k}\). We refer to this as a pseudo-variance as it is not necessarily correctly specified, i.e. there may be no value \(\gamma\in\Gamma\) such that \(\nu_{t}^{*}(\gamma)=\nu_{t}\). The idea is to use the pseudo-variance \(\nu_{t}^{*}(\gamma)\) to enhance the efficiency of the estimation of \(\psi_{0}\) by means of a Gaussian QMLE. We denote the whole parameter vector that contains both the parameter of the mean and pseudo-variance with \(\theta=(\psi^{\prime},\gamma^{\prime})^{\prime}\) and \(\theta\in\Theta=\Psi\times\Gamma\subset\mathbb{R}^{m}\), \(m=p+k\). We introduce the class of PVQMLEs that relies on a Gaussian quasi-likelihood for the mean equation with the pseudo-variance as scale of the Gaussian density. We consider estimators based on both unrestricted and restricted quasi-likelihood functions. Assume that we have an observed sample of size \(T\) from the process defined in (1), given by \(\{Y_{t}\}_{t=1}^{T}\). Since \(\lambda_{t}(\psi)\) and \(\nu_{t}^{*}(\gamma)\) can depend on the infinite past of \(Y_{t}\), we define their approximations of \(\tilde{\lambda}_{t}(\psi)\) and \(\tilde{\nu}_{t}^{*}(\gamma)\) based on the available finite sample \(\{Y_{t}\}_{t=1}^{T}\), \[\tilde{\lambda}_{t}(\psi)=\lambda(Y_{t-1},\dots,Y_{1},\tilde{Y}_{0},\tilde{Y} _{-1},\dots;\psi)\,,\quad\tilde{\nu}_{t}^{*}(\gamma)=\nu^{*}(Y_{t-1},\dots,Y_ {1},\tilde{Y}_{0},\tilde{Y}_{-1},\dots;\gamma), \tag{3}\] where \(\tilde{Y}_{0},\tilde{Y}_{-1},\dots\) are given initial values. The Gaussian quasi-likelihood for \(\psi\) with the pseudo-variance scaling is defined as \[\tilde{L}_{T}(\theta)=\frac{1}{T}\sum_{t=1}^{T}\tilde{l}_{t}(\theta)\,,\quad \tilde{l}_{t}(\theta)=-\frac{1}{2}\log\tilde{\nu}_{t}^{*}(\gamma)-\frac{[Y_{t }-\tilde{\lambda}_{t}(\psi)]^{2}}{2\tilde{\nu}_{t}^{*}(\gamma)}. \tag{4}\] Based on the quasi-likelihood function in (4), we define the unrestricted and restricted PVQMLE. The unrestricted PVQMLE is based on the unconstrained maximization of the pseudo-likelihood without imposing any constrains between \(\psi\) and \(\gamma\). The unrestricted PVQMLE \(\hat{\theta}\) is defined as \[\hat{\theta}=\operatorname*{arg\,max}_{\theta\in\Theta}\tilde{L}_{T}(\theta), \tag{5}\] where \(\hat{\theta}=(\hat{\psi}^{\prime},\hat{\gamma}^{\prime})^{\prime}\) and \(\hat{\psi}\) is the unrestricted PVQMLE of \(\psi_{0}\). In Section 3, we shall see that the unrestricted PVQMLE \(\hat{\psi}\) is a consistent estimator of \(\psi_{0}\) and, in fact, it is asymptotically equivalent to a specific WLSE. If the pseudo-variance is correctly specified, i.e. there is \(\gamma_{0}\in\Gamma\) such that \(\nu^{*}(\gamma_{0})=\nu_{t}\), then \(\hat{\psi}\) is asymptotically equivalent to the efficient WLSE. 
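To fix ideas, here is a minimal numerical sketch of the unrestricted estimator in (5) for a simple count specification; the data-generating process (a Poisson INAR(1), anticipating Example 1 below) and all function names are illustrative choices, not part of the general framework.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_inar1(T, a=0.5, lam_eps=2.0):
    """Simulate a Poisson INAR(1): Y_t = a o Y_{t-1} + eps_t with binomial thinning."""
    y = np.zeros(T, dtype=int)
    for t in range(1, T):
        y[t] = rng.binomial(y[t - 1], a) + rng.poisson(lam_eps)
    return y

y = simulate_inar1(1000)

def neg_quasi_loglik(theta, y):
    """Negative Gaussian quasi-log-likelihood from (4), up to the 1/T factor,
    with mean a*Y_{t-1} + w1 and pseudo-variance b*Y_{t-1} + w2 (no restrictions)."""
    a, w1, b, w2 = theta
    lam = a * y[:-1] + w1          # lambda_t(psi), t = 2, ..., T
    nu = b * y[:-1] + w2           # nu*_t(gamma)
    return 0.5 * np.sum(np.log(nu) + (y[1:] - lam) ** 2 / nu)

theta0 = np.array([0.3, 1.0, 0.3, 1.0])
bounds = [(0.01, 0.99), (0.01, None), (0.01, None), (0.01, None)]
fit = minimize(neg_quasi_loglik, theta0, args=(y,), bounds=bounds, method="L-BFGS-B")
a_hat, w1_hat, b_hat, w2_hat = fit.x
print("psi_hat   =", (a_hat, w1_hat))
print("gamma_hat =", (b_hat, w2_hat))
```

Joint optimization over \((\psi,\gamma)\) is what distinguishes the estimator from a two-step weighted least squares scheme; the restricted version in (6) would simply replace the free pseudo-variance parameters with functions of \(\psi\).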
In models where the sample space \(\mathcal{Y}\) is bounded, such as count-time series models, there can be a natural relationship between the conditional mean and variance of the process. For example, in a count time series process we have that if the mean goes to zero, then also the variance goes to zero as, in fact, the limit case is the mean being exactly zero. Such relationship between mean and variance, as given by parametric models, provide a natural way to introduce restrictions between the mean and pseudo-variance parameters \(\psi\) and \(\gamma\). Several examples are presented at the end of this section. To specify the restricted PVQMLE, we consider the constrained parameter set \(\Theta_{R}\) that imposes \(r\) restrictions on the pseudo-variance parameters \[\Theta_{R}=\{\theta\in\Theta:S\gamma=g(\psi)\},\] where \(S\) is a \(r\times k\) selection matrix and \(g:\Psi\rightarrow\mathbb{R}^{r}\). The estimator derived from the maximization of (4) over the set \(\Theta_{R}\) is the restricted PVQMLE, \[\hat{\theta}_{R}=\operatorname*{arg\,max}_{\theta\in\Theta_{R}}\tilde{L}_{T}( \theta) \tag{6}\] where \(\hat{\theta}_{R}=(\hat{\psi}_{R}^{\prime},\hat{\gamma}_{R}^{\prime})^{\prime}\) and \(\hat{\psi}_{R}\) is the restricted PVQMLE of \(\psi_{0}\). In Section 3, we shall see that the restricted PVQMLE \(\hat{\psi}_{R}\) is a consistent estimator of \(\psi_{0}\) if the constrains in \(\Theta_{R}\) hold with respect to a pseudo-true parameter \(\gamma^{*}\). The advantage of the restricted PVQMLE \(\hat{\psi}_{R}\) is that it can achieve higher efficiency than the unrestricted one. Furthermore, as it shall be presented in Section 4, the validity of the restrictions can be tested under both misspecification and correct specification of the pseudo-variance. The test can be interpreted as a consistency test for the restricted estimator when the pseudo-variance is misspecified. Instead, it can be employed as a specification test if we assume correct specification of the pseudo-variance. For instance, it shall be employed to test for underdispersion, equidispersion or overdispersion in the thinning operator of INAR models. ### Examples The model specification in (1) is very general and it covers a wide range of semi-parametric observation-driven time series model. The unrestricted and restricted QMLE based on the pseudo-variance in (2) can be employed for such general class of models. However, PVQMLEs are particularly suited for time series processes where the support of the conditional mean is bounded and a natural relationship with the conditional variance can be assumed. In Section 3.2 it will be shown that in models where conditional mean and pseudo-variance share some parameter restrictions, a more efficient estimator may be obtained with respect to alternative estimation approaches available in the literature. The specification of the pseudo-variance and the parameter restrictions with the conditional mean can be based on well known model specifications. The validity of such restrictions is testable and the asymptotic properties do not require correct specification of the pseudo-variance. This means that no assumptions on the true conditional variance are needed and the consistency of the restricted PVQMLE can also be tested without relying on such assumptions. 
Below we present some examples of models that are encompassed in the framework defined in equations (1) and (2), and provide a general way to specify the pseudo-variance and the parameter restrictions with the conditional mean. **Example 1** (INAR models).: INAR models are widely used in the literature to model count time series. The INAR(1) model is given by \[Y_{t}=a\circ Y_{t-1}+\varepsilon_{t}\,,\quad t\in\mathbb{Z}, \tag{7}\] where \(\{\varepsilon_{t}\}_{t\in\mathbb{Z}}\) is an iid sequence of non-negative integer-valued random variables with mean \(\omega_{1}>0\) and variance \(\omega_{2}>0\), and '\(\circ\)' is the thinning operator of Steutel and Van Harn (1979). For a given \(N\in\mathbb{N}\) and \(a\in(0,1)\), the most general formulation of the thinning operator \(a\circ N\) is defined to be a count random variable with mean \(aN\). The most common formulation (Steutel and Van Harn, 1979) is the Bernoulli thinning where \(a\circ N\) is a binomial random variable with \(N\) trials and success probability \(a\). The conditional mean of the INAR(1) is \[\lambda_{t}=aY_{t-1}+\omega_{1},\] and the pseudo-variance can be specified as \[\nu_{t}^{*}=bY_{t-1}+\omega_{2}.\] As discussed in Section 5, several restrictions can be considered for the PVQMLE. For instance, the restriction \(b=a(1-a)\) is implied by a binomial thinning and \(\omega_{1}=\omega_{2}\) is implied by a Poisson error. **Example 2** (INGARCH models).: Another popular model for time series of counts is the INGARCH model. The conditional mean of the INGARCH(1,1) model takes the form \[\lambda_{t}=\omega_{1}+\alpha_{1}Y_{t-1}+\beta_{1}\lambda_{t-1}\,, \tag{8}\] where \(\omega_{1},\alpha_{1},\beta_{1}\geq 0\). The pseudo-variance can be specified as \[\nu_{t}^{*}=\omega_{2}+\alpha_{2}Y_{t-1}+\beta_{2}\lambda_{t-1}\,.\] Also in this case, several restrictions can be considered for the PVQMLE. For instance, the restrictions \(\omega_{2}=\omega_{1}\), \(\alpha_{2}=\alpha_{1}\) and \(\beta_{2}=\beta_{1}\) are implied by an equidispersion assumption \(\nu_{t}^{*}=\lambda_{t}\), which follows assuming a conditional Poisson distribution for example. Alternatively, the restrictions \(\omega_{2}=c\omega_{1}\), \(\alpha_{2}=c\alpha_{1}\) and \(\beta_{2}=c\beta_{1}\) with \(c>0\) are implied by a proportional variance assumption \(\nu_{t}^{*}=c\lambda_{t}\). **Example 3** (ACD models).: ACD models are typically used to model non-negative continuous time series variables, like durations or volumes. These models take the form \(Y_{t}=\lambda_{t}\varepsilon_{t}\) where \(\varepsilon_{t}\) is a sequence of positive variables with mean equal to 1. The conditional expectation \(\lambda_{t}\) may take the form as in equation (8). The pseudo-variance can be specified in several ways and restrictions can be imposed. For instance, the restriction \(\nu_{t}^{*}=\lambda_{t}^{2}\) follows by assuming an exponential error distribution. An alternative restriction is given by \(\nu_{t}^{*}=c\lambda_{t}^{2}\), \(c>0\). **Example 4** (double-bounded autoregressions).: For double-bounded time series data the conditional mean \(\lambda_{t}\) can be specified as in equation (8), see Gorgi and Koopman (2021) for instance. Several specifications and restrictions for the pseudo-variance can be considered. For instance, the restriction \(\nu_{t}^{*}=\lambda_{t}(1-\lambda_{t})/(1+\phi)\) is implied by a beta conditional distribution with dispersion parameter \(\phi\). 
Intermediate restrictions on the pseudo-variance are discussed in the corresponding application in Section 5. We note that the example presented in this section are focused on a linear mean equation for simplicity of exposition. Several other non-linear model specifications are encompassed in the general framework in (1) and (2), see for example Creal et al. (2013) and Christou and Fokianos (2015). ## 3 Asymptotic theory In this section, the asymptotic properties of the PVQMLEs in (5) and (6) are formally derived. Although asymptotic results related to quasi-maximum likelihood estimators of observation-driven models are well-established in the literature, the associated theory for PVQMLEs differs as it relies on simultaneous estimation of mean and pseudo-variance parameters, where the latter can be misspecified and present parameter restrictions with the mean. Since the pseudo-variance can be misspecified, the estimator of the pseudo-variance parameter \(\hat{\gamma}\) will be consistent with respect to a pseudo-true value \(\gamma^{*}\), which is given by \[\gamma^{*}=\operatorname*{arg\,max}_{\gamma\in\Gamma}-\frac{1}{2}\mathrm{E} \left(\log\nu_{t}^{*}(\gamma)+\frac{[Y_{t}-\lambda_{t}(\psi_{0})]^{2}}{\nu_{t} ^{*}(\gamma)}\right). \tag{9}\] We define the vector \(\theta_{0}=(\psi_{0}^{\prime},\gamma^{*\prime})^{\prime}\) that contains both true and pseudo-true parameters. The estimator of the mean parameters preserves the consistency and asymptotic normality results to the true parameter vector \(\psi_{0}\). Moreover, we will show that such result holds for both unrestricted (5) and restricted (6) estimators. We start by showing consistency and asymptotic normality of the unrestricted PVQMLE in (5). We first obtain the score function related to (4) \[\tilde{S}_{T}(\theta)=\frac{1}{T}\sum_{t=1}^{T}\tilde{s}_{t}(\theta),\ \ \ \tilde{s}_{t}(\theta)=\frac{Y_{t}-\tilde{\lambda}_{t}(\psi)}{\tilde{\nu}_{t}^{*} (\gamma)}\frac{\partial\tilde{\lambda}_{t}(\psi)}{\partial\theta}+\frac{[Y_{t }-\tilde{\lambda}_{t}(\psi)]^{2}-\tilde{\nu}_{t}^{*}(\gamma)}{2\tilde{\nu}_{t} ^{*2}(\gamma)}\frac{\partial\tilde{\nu}_{t}^{*}(\gamma)}{\partial\theta}. \tag{10}\] Furthermore, we define \(L_{T}(\theta)\), \(l_{t}(\theta)\), \(S_{T}(\theta)\) and \(s_{t}(\theta)\) as the random functions obtained from \(\tilde{L}_{T}(\theta)\), \(\tilde{l}_{t}(\theta)\), \(\tilde{S}_{T}(\theta)\) and \(\tilde{s}_{t}(\theta)\) by substituting \(\tilde{\lambda}_{t}(\psi)\) and \(\tilde{\nu}_{t}^{*}(\gamma)\) with \(\lambda_{t}(\psi)\) and \(\nu_{t}^{*}(\gamma)\), respectively. We consider the following assumptions. **A1**: The process \(\{Y_{t},t\in Z\}\) is strictly stationary and ergodic. **A2**: \(\lambda_{t}(\cdot)\) is continuous in \(\Psi\), \(\nu_{t}^{*}(\cdot)\) is continuous in \(\Gamma\) and the set \(\Theta\) is compact. Moreover, \[\mathrm{E}\ \sup_{\gamma\in\Gamma}|\!\log\nu_{t}^{*}(\gamma)|<\infty\,,\quad \mathrm{E}\ \sup_{\theta\in\Theta}\frac{[Y_{t}-\lambda_{t}(\psi)]^{2}}{\nu_{t}^{*}(\gamma) }<\infty\,.\] **A3**: \(\lambda_{t}(\psi)=\lambda_{t}(\psi_{0})\) a.s. if and only if \(\psi=\psi_{0}\). **A4**: There is a constant \(\underline{\nu}^{*}>0\) such that \(\nu_{t}^{*}(\gamma),\tilde{\nu}_{t}^{*}(\gamma)\geq\underline{\nu}^{*}\) for any \(t\geq 1\) and any \(\gamma\in\Gamma\). 
**A5**: Define \(a_{t}=\sup_{\psi\in\Psi}|\tilde{\lambda}_{t}(\psi)-\lambda_{t}(\psi)|\) and \(b_{t}=\sup_{\gamma\in\Gamma}|\tilde{\nu}_{t}^{*}(\gamma)-\nu_{t}^{*}(\gamma)|\), it holds that \[\lim_{t\to\infty}\Big{(}1+|Y_{t}|+\sup_{\psi\in\Psi}|\lambda_{t}(\psi)|\Big{)} a_{t}=0\,,\quad\lim_{t\to\infty}\Big{(}1+Y_{t}^{2}+\sup_{\psi\in\Psi}\lambda_{t} ^{2}(\psi)\Big{)}b_{t}=0\quad a.s.\] **A6**: The pseudo-true parameter \(\gamma^{*}\in\Gamma\) defined in (9) is unique. **A7**: Define \(c_{t}=\sup_{\theta\in\Theta}\|\partial\tilde{\lambda}_{t}(\psi)/\partial \theta-\partial\lambda_{t}(\psi)/\partial\theta\|\), \(d_{t}=\sup_{\theta\in\Theta}\|\partial\tilde{\nu}_{t}^{*}(\gamma)/\partial \theta-\partial\nu_{t}^{*}(\gamma)/\partial\theta\|\). The following quantities are of order \(\mathcal{O}(t^{-\delta})\) a.s. for some \(\delta>1/2\) \[\sup_{\theta\in\Theta}\Big{\|}\frac{\partial\lambda_{t}(\psi)}{\partial\theta }\Big{\|}a_{t}\,,\quad\sup_{\theta\in\Theta}\Big{\|}\frac{\partial\nu_{t}^{* }(\gamma)}{\partial\theta}\Big{\|}\Big{(}1+|Y_{t}|+\sup_{\psi\in\Psi}|\lambda_ {t}(\psi)|\,\Big{)}a_{t}\,,\] \[\sup_{\theta\in\Theta}\Big{\|}\frac{\partial\lambda_{t}(\psi)}{\partial\theta }\Big{\|}\Big{(}\,|Y_{t}|+\sup_{\psi\in\Psi}|\lambda_{t}(\psi)|\,\Big{)}b_{t} \,,\quad\sup_{\theta\in\Theta}\Big{\|}\frac{\partial\nu_{t}^{*}(\gamma)}{ \partial\theta}\Big{\|}\Big{(}1+Y_{t}^{2}+\sup_{\psi\in\Psi}\lambda_{t}^{2}( \psi)\Big{)}b_{t}\,,\] \[\big{(}1+|Y_{t}|+\sup_{\psi\in\Psi}|\lambda_{t}(\psi)|\,\big{)}c_{t}\,,\quad \big{(}1+Y_{t}^{2}+\sup_{\psi\in\Psi}\lambda_{t}^{2}(\psi)\big{)}d_{t}.\] **A8**: \(\lambda_{t}(\cdot)\) and \(\nu_{t}^{*}(\cdot)\) have continuous second-order derivatives in their spaces. Moreover, \[\mathrm{E}\ \sup_{\theta\in\Theta}\frac{[Y_{t}-\lambda_{t}(\psi)]^{4}}{\nu_{t}^{ *2}(\gamma)}<\infty\,,\quad\mathrm{E}\ \sup_{\theta\in\Theta}\Big{\|}\frac{1}{\sqrt{\nu_{t}^{*}(\gamma)}}\frac{ \partial^{2}\lambda_{t}(\psi)}{\partial\theta\partial\theta^{\prime}}\Big{\|} ^{2}<\infty\,,\] \[\mathrm{E}\ \sup_{\theta\in\Theta}\bigg{\|}\frac{1}{\nu_{t}^{*2}(\gamma)}\frac{ \partial\lambda_{t}(\psi)}{\partial\theta}\frac{\partial\lambda_{t}(\psi)}{ \partial\theta^{\prime}}\bigg{\|}^{2}<\infty\,,\quad\mathrm{E}\ \sup_{\theta\in\Theta}\bigg{\|}\frac{1}{\nu_{t}^{*}(\gamma)}\frac{\partial \lambda_{t}(\psi)}{\partial\theta}\frac{\partial\nu_{t}^{*}(\gamma)}{\partial \theta^{\prime}}\bigg{\|}^{2}<\infty\,,\] **A9**: The matrices \(H(\theta_{0})=\mathrm{E}[-\partial^{2}l_{t}(\theta_{0})/\partial\theta\partial \theta^{\prime}]\), \(I(\theta_{0})=\mathrm{E}[s_{t}(\theta_{0})s_{t}^{\prime}(\theta_{0})]\) exist with \(H(\theta_{0})\) invertible. **A10**: \(\theta_{0}\in\dot{\Theta}\), where \(\dot{\Theta}\) is the interior of \(\Theta\). **A11**: The sequence \(\sqrt{T}S_{T}(\theta_{0})\) obeys the central limit theorem. The strict stationarity and ergodicity in assumption **A1** depends upon the model formulation in (1) and (2) and it can be established by means of different probabilistic approaches, see for instance Straumann and Mikosch (2006) and Debaly and Truquet (2021). Assumption **A2** is a standard moment condition. Assumption **A3** is required for the identification of the true parameter \(\psi_{0}\). Assumptions **A5** and **A7** are needed to guarantee that the initialization of filters in (3) is asymptotically irrelevant. Assumption **A6** imposes the uniqueness of the pseudo-true parameter for the variance equation. 
In Corollary 3 below, we show that this assumption can be dropped if the researcher is not interested in the asymptotic normality of the estimator but only in the consistency. Assumptions **A8** and **A9** impose moments on the second derivatives of the log-quasi-likelihood that are required for asymptotic normality to apply. Assumption **A10** is the standard condition for asymptotic normality that the true parameter value is in the interior of the parameter set. Finally, assumption **A11** is an high-level condition that a central limit theorem applies to the score. This is condition is left high-level for generality purposes since the score function \(s_{t}(\theta_{0})\) is not a martingale difference sequence, see equation (10). There are several alternative CLTs for non-martingale sequences and the choice of the most appropriate one is strongly dependent on the specific mean-variance model formulation. For example, CLTs appealing the concept of mixing processes or mixingales are widely available, see the surveys in Doukhan (1994), Bradley (2005) and White (1994). In case of correct conditional variance specification then assumption **A11** can be dropped, see Corollary 2. Theorem 1 delivers the consistency and asymptotic normality of the unrestricted PVQMLE of the true parameter \(\psi_{0}\). **Theorem 1**.: _Consider the unrestricted PVQMLE in (5). Under conditions **A1**-**A6**_ \[\hat{\psi}\rightarrow\psi_{0}\,,\quad a.s.\quad T\rightarrow\infty\,. \tag{11}\] _Moreover, if also **A7**-**A11** hold, as \(T\rightarrow\infty\)_ \[\sqrt{T}\left(\hat{\psi}-\psi_{0}\right)\xrightarrow{d}N(0,\Sigma_{\psi})\,, \qquad\Sigma_{\psi}=H_{\psi}^{-1}(\theta_{0})I_{\psi}(\theta_{0})H_{\psi}^{-1 }(\theta_{0})\,, \tag{12}\] _where_ \[H_{\psi}(\theta_{0})=\mathrm{E}\left[\frac{1}{\nu_{t}^{*}(\gamma^{*})}\frac{ \partial\lambda_{t}(\psi_{0})}{\partial\psi}\frac{\partial\lambda_{t}(\psi_{ 0})}{\partial\psi^{\prime}}\right],\ I_{\psi}(\theta_{0})=\mathrm{E}\left[ \frac{\nu_{t}}{\nu_{t}^{*2}(\gamma^{*})}\frac{\partial\lambda_{t}(\psi_{0})} {\partial\psi}\frac{\partial\lambda_{t}(\psi_{0})}{\partial\psi^{\prime}} \right]. \tag{13}\] The asymptotic properties of the estimator of the variance parameter \(\gamma\) are obtained in Corollary 1 below. Let \(s_{t}(\theta_{0})^{\prime}=[s_{t}^{(\psi)}(\theta_{0})^{\prime},s_{t}^{( \gamma)}(\theta_{0})^{\prime}]^{\prime}\) be the partition of the score with respect to the mean and (pseudo-)variance parameters. Let \(H_{\gamma}(\theta_{0})=\mathrm{E}[-\partial^{2}l_{t}(\theta_{0})/\partial \gamma\partial\gamma^{\prime}]\) and \(I_{\gamma}(\theta_{0})=\mathrm{E}[s_{t}^{(\gamma)}(\theta_{0})s_{t}^{(\gamma) }(\theta_{0})^{\prime}]\). **Corollary 1**.: _Under the assumptions of Theorem 1 we have that as \(T\rightarrow\infty\), a.s. \(\hat{\gamma}\xrightarrow{}\gamma^{*}\) and \(\sqrt{T}\left(\hat{\gamma}-\gamma^{*}\right)\xrightarrow{d}N(0,\Sigma_{\gamma})\), where \(\Sigma_{\gamma}=H_{\gamma}^{-1}(\theta_{0})I_{\gamma}(\theta_{0})H_{\gamma}^{ -1}(\theta_{0})\)._ Theorem 1 determines the asymptotic distribution of the unrestricted PVQMLE of \(\psi_{0}\) without requiring correct specification of the pseudo-variance. The following result shows that in the special case in which the variance is well-specified then the estimator \(\hat{\psi}\) gains in efficiency. **Corollary 2**.: _Consider the assumptions of Theorem 1. If, in addition, the variance (2) is correctly specified, i.e. 
\(\nu_{t}^{*}(\gamma^{*})=\nu_{t}\), then **A1**-**A10** entail (11) and_

\[\sqrt{T}\left(\hat{\psi}-\psi_{0}\right)\xrightarrow{d}N(0,I_{\psi}^{-1})\,,\qquad I_{\psi}=\mathrm{E}\left[\frac{1}{\nu_{t}}\frac{\partial\lambda_{t}(\psi_{0})}{\partial\psi}\frac{\partial\lambda_{t}(\psi_{0})}{\partial\psi^{\prime}}\right]\,, \tag{14}\]

_where \(\Sigma_{\psi}-I_{\psi}^{-1}\) is positive semi-definite._

We also note that in Corollary 2 the uniqueness of the variance parameter in assumption **A6** is implied by the condition \(\nu_{t}^{*}(\gamma)=\nu_{t}^{*}(\gamma^{*})\) a.s. if and only if \(\gamma=\gamma^{*}\). This follows immediately from the correct specification of the pseudo-variance. Corollary 3 below shows that even if the pseudo-true parameter \(\gamma^{*}\) is not unique, i.e. assumption **A6** does not hold, the consistency of the unrestricted estimator \(\hat{\psi}\) is retained without any additional assumption. The overall estimator \(\hat{\theta}\) will instead be set consistent over the set of values that maximize the limit of the quasi-likelihood, \(\Theta_{0}\), since the pseudo-true parameter \(\gamma^{*}\) is not uniquely identified.

**Corollary 3**.: _Consider the unrestricted PVQMLE (5) and assume conditions **A1**-**A5** hold. Then, as \(T\to\infty\), \(\inf_{\theta_{0}\in\Theta_{0}}\|\hat{\theta}-\theta_{0}\|\to 0\) a.s. and \(\hat{\psi}\to\psi_{0}\) a.s._

We now treat the case in which the conditional mean and pseudo-variance parameters are constrained. We study the asymptotic properties of the restricted PVQMLE \(\hat{\psi}_{R}\) defined in (6).

**A12**: The equality \(S\gamma^{*}=g(\psi_{0})\) holds and \(g(\cdot)\) is continuous.

Assumption **A12** is required to ensure that \(\theta_{0}\in\Theta_{R}\), i.e. the imposed restrictions are valid with respect to the true parameter \(\psi_{0}\) and the pseudo-true parameter \(\gamma^{*}\). The continuity of \(g(\cdot)\) guarantees that \(\Theta_{R}\) remains compact. Define \(\gamma=(\gamma_{1}^{\prime},\gamma_{2}^{\prime})^{\prime}\) where \(\gamma_{1}=S\gamma=g(\psi)\) is the sub-vector of pseudo-variance parameters that are restricted to the mean parameters and \(\gamma_{2}\) constitutes the sub-vector of remaining free parameters. For \(\theta\in\Theta_{R}\), with some abuse of notation, we have \(\theta=(\psi^{\prime},\gamma_{1}^{\prime},\gamma_{2}^{\prime})^{\prime}=(\psi^{\prime},g(\psi)^{\prime},\gamma_{2}^{\prime})^{\prime}=(\psi^{\prime},\gamma_{2}^{\prime})^{\prime}\). Recall that \(H_{x}(\theta_{0})=\mathrm{E}\left[-\partial^{2}l_{t}(\theta_{0})/\partial x\partial x^{\prime}\right]\) and \(I_{x}(\theta_{0})=\mathrm{E}[s_{t}^{(x)}(\theta_{0})s_{t}^{(x)}(\theta_{0})^{\prime}]\). Moreover, define \(H_{x,z}(\theta_{0})=\mathrm{E}\left[-\partial^{2}l_{t}(\theta_{0})/\partial x\partial z^{\prime}\right]\), \(I_{x,z}(\theta_{0})=\mathrm{E}[s_{t}^{(x)}(\theta_{0})s_{t}^{(z)}(\theta_{0})^{\prime}]\) and \(I_{z,x}(\theta_{0})=I_{x,z}^{\prime}(\theta_{0})\). Analogously, set \(D(\theta_{0})=H^{-1}(\theta_{0})\) and let \(D_{x,y}(\theta_{0})\) be the corresponding partition related to rows \(x\) and columns \(y\) of \(D(\theta_{0})\). Theorem 2 delivers the asymptotic distribution of the restricted PVQMLE.

**Theorem 2**.: _Consider the restricted PVQMLE in (6). Under conditions **A1**-**A6** and **A12**_

\[\hat{\psi}_{R}\to\psi_{0}\,,\quad a.s.\quad T\to\infty\,.
\tag{15}\] _Moreover, if also **A7**-**A11** hold, as \(T\to\infty\)_ \[\sqrt{T}\left(\hat{\psi}_{R}-\psi_{0}\right)\xrightarrow{d}N(0,\Sigma_{R})\,, \tag{16}\] _where_ \[\begin{split}\Sigma_{R}=&\;D_{\psi}(\theta_{0})I_{\psi}(\theta_{0})D_{\psi}(\theta_{0})+D_{\psi,\gamma_{2}}(\theta_{0})I_{\gamma_{2},\psi}(\theta_{0})D_{\psi}(\theta_{0})\\ &+D_{\psi}(\theta_{0})I_{\psi,\gamma_{2}}(\theta_{0})D_{\gamma_{2},\psi}(\theta_{0})+D_{\psi,\gamma_{2}}(\theta_{0})I_{\gamma_{2}}(\theta_{0})D_{\gamma_{2},\psi}(\theta_{0})\,.\end{split}\tag{17}\] We note that Corollaries 1-3 can easily be adapted to hold also for \(\hat{\theta}_{R}\). In Section 3.2 below, we shall see that the restricted PVQMLE can lead to substantial gains in efficiency with respect to the unrestricted PVQMLE. The consistency of the restricted PVQMLE requires the additional assumption **A12**. However, as discussed in Section 4, this assumption can be tested and the correct specification of the pseudo-variance is not required. Clearly, when \(\psi\) and \(\gamma\) do not share parameter restrictions, i.e. \(\hat{\psi}_{R}=\hat{\psi}\), Theorem 2 reduces to Theorem 1: since \(H_{\psi,\gamma_{2}}(\theta_{0})=0\), \(H(\theta_{0})\) becomes block diagonal, its inverse has block elements \(D_{x}(\theta_{0})=H_{x}^{-1}(\theta_{0})\) and \(D_{x,y}(\theta_{0})=D_{y,x}(\theta_{0})=0\), implying that \(\Sigma_{R}=\Sigma_{\psi}\). ### Comparison to alternative estimators In this section, we show that the unrestricted PVQMLE achieves the same asymptotic variance as existing estimators. Consider the unrestricted PVQMLE depicted in Theorem 1. The partition of the score related to the mean parameter \(\psi\) is \[\tilde{s}_{t}^{(\psi)}(\theta)=\frac{Y_{t}-\tilde{\lambda}_{t}(\psi)}{\tilde{\nu}_{t}^{*}(\gamma)}\frac{\partial\tilde{\lambda}_{t}(\psi)}{\partial\psi}. \tag{18}\] We compare (18) with some alternative semi-parametric estimators presented in the literature. Consider the two-stage Weighted Least Squares (WLSE) of Aknouche and Francq (2021) defined as \[\hat{\psi}_{W}=\operatorname*{arg\,max}_{\psi\in\Psi}\frac{1}{T}\sum_{t=1}^{T}\tilde{l}s_{t}(\psi,\hat{w}_{t})\,,\quad\tilde{l}s_{t}(\psi,\hat{w}_{t})=-\frac{[Y_{t}-\tilde{\lambda}_{t}(\psi)]^{2}}{\hat{w}_{t}},\] where \(\hat{w}_{t}\) is a first-step estimator of the set of weights \(w_{t}\). The resulting score of the WLSE is \[\tilde{s}_{t}(\psi,\hat{w}_{t})=\frac{Y_{t}-\tilde{\lambda}_{t}(\psi)}{\hat{w}_{t}}\frac{\partial\tilde{\lambda}_{t}(\psi)}{\partial\psi}. \tag{19}\] Since it is well-known that the conditional variance is the optimal weight for the WLSE, the same authors set \(w_{t}=\nu_{t}^{*}(\xi)=\nu^{*}(Y_{t-1},Y_{t-2},\ldots;\xi)\) by defining a functional form for a pseudo-variance, where the pseudo-true parameter \(\xi^{*}\) may also contain \(\psi_{0}\) or parts of it. The corresponding first-step estimated weights are \(\hat{w}_{t}=\tilde{\nu}_{t}^{*}(\hat{\xi})\), where \(\hat{\xi}\) represents the first-step estimate of the parameter \(\xi\). Consider the general QMLE of Wedderburn (1974) and Gourieroux et al. (1984) based on the exponential family of quasi-likelihoods defined as \[\hat{\psi}_{Q}=\operatorname*{arg\,max}_{\psi\in\Psi}\tilde{l}_{T}(\psi)\,,\] where the log-quasi-likelihood \(\tilde{l}_{T}(\psi)\) is a member of the one-parameter exponential family with respect to \(\tilde{\lambda}_{t}(\psi)\). 
The corresponding score is given by \[\tilde{s}_{t}(\psi)=\frac{Y_{t}-\tilde{\lambda}_{t}(\psi)}{\tilde{\nu}_{t}(\psi)}\frac{\partial\tilde{\lambda}_{t}(\psi)}{\partial\psi}\,, \tag{20}\] where the conditional variance \(\tilde{\nu}_{t}(\psi)\) is typically a function of the mean, i.e. \(\tilde{\nu}_{t}(\psi)=h(\tilde{\lambda}_{t}(\psi))\) for some function \(h(\cdot)\). For example, selecting the Poisson quasi-likelihood yields \(\tilde{\nu}_{t}(\psi)=\tilde{\lambda}_{t}(\psi)\) (Ahmad and Francq, 2016), see Aknouche and Francq (2021, Sec. 2.2) for other examples. The expressions of the scores in (18)-(20) highlight how the unrestricted PVQMLE is closely related to the WLSE and the QMLE based on the exponential family. The main difference between the unrestricted PVQMLE and the QMLE with score in (20) is that the QMLE only considers the specification of the conditional mean and the conditional variance is a function of the conditional mean that is implied by the selected distribution in the exponential family. On the other hand, the unrestricted PVQMLE differs from the WLSE as the parameters are estimated jointly instead of through a multi-step procedure. The unrestricted PVQMLE, the QMLE and the WLSE enjoy the same consistency property for the mean parameters \(\psi_{0}\) irrespective of the correct specification of the conditional variance. Furthermore, when they have the same specification of the conditional variance, these estimators are asymptotically equivalent. **Corollary 4**.: _Assume Theorem 1 holds. Moreover, suppose the WLSE (19) with \(w_{t}=\nu_{t}^{*}(\gamma^{*})\) is consistent and asymptotically normal with limiting variance \(\Sigma_{W}\). Then the unrestricted PVQMLE in (5) is asymptotically as efficient as the WLSE, meaning that \(\Sigma_{\psi}=\Sigma_{W}\). In addition, if \(\nu_{t}^{*}(\cdot)=\nu_{t}(\cdot)\), then \(\Sigma_{W}=\Sigma_{\psi}=I_{\psi}^{-1}\)._ The result in Corollary 4 follows immediately from Theorem 1 and Corollary 2. We also note that if Corollary 4 holds then also Corollaries 2.1-2.3 in Aknouche and Francq (2021) hold for the unrestricted PVQMLE. This has two direct consequences: (i) if the variance is well-specified, the unrestricted PVQMLE is asymptotically more efficient than the QMLE of \(\psi_{0}\) whenever the variance implied by the exponential family is not the true one, and (ii) if the conditional distribution of \(Y_{t}\) comes from the exponential family, then the well-specified PVQMLE is asymptotically as efficient as the MLE of \(\psi_{0}\). We note that the comparison discussed so far only concerns the unrestricted PVQMLE. This equivalence of the PVQMLE with respect to the WLSE and the QMLE does not hold for the restricted PVQMLE. This can be noted from the form of the score function given in equation (10) and the fact that the partial derivative of \(\tilde{\nu}_{t}^{*}(\gamma)\) with respect to \(\psi\) is no longer equal to zero. Below we discuss how the restricted PVQMLE can achieve higher efficiency compared to the unrestricted PVQMLE. ### Efficiency of PVQMLE Given that the PVQMLE with distinct parameters on mean and pseudo-variance is asymptotically equivalent to the WLSE for the mean parameters \(\psi_{0}\) (Corollary 4), it may be expected that if the mean and pseudo-variance equations share common parameters in \(\theta\), i.e. \(\psi_{0}\) and \(\gamma^{*}\) are not completely distinct so that \(\theta_{0}\in\Theta_{R}\), then the restricted PVQMLE in (6) could show improved efficiency over the unrestricted PVQMLE and the WLSE. 
This result cannot be proved in general but for the following special cases it is verified. **A13**: One of the following conditions holds: **A13.a**: \(Y_{t}|\mathcal{F}_{t-1}\sim q(\lambda_{t},\nu_{t})\) where \(q(\cdot)\) is Gaussian. **A13.b**: \(m=1\), \(q(\cdot)\) is symmetric and is meso- or platy-kurtic. **A13.c**: \(m=1\), the first derivatives of the functions \(\lambda_{t}(\psi_{0})\) and \(\nu_{t}(\gamma_{0})\) have the same (respectively, opposite) sign, and \(q(\cdot)\) is negatively (respectively, positively) skewed and meso- or platy-kurtic. **Proposition 1**.: _Assume that Assumptions **A1**-**A13** hold with \(\nu_{t}^{*}(\gamma^{*})=\nu_{t}\). Moreover, suppose that the WLSE in (19) with \(w_{t}=\nu_{t}\) is consistent and asymptotically normal with asymptotic variance \(I_{\psi}^{-1}\). Then, the restricted PVQMLE in (6) is asymptotically more efficient than the unrestricted PVQMLE and the WLSE, i.e. \(I_{\psi}^{-1}-\Sigma_{R}\) is positive semi-definite._ The conditions stated in Assumption **A13** can be somewhat restrictive; however, we note that they are only sufficient conditions. In general, it is not straightforward to derive sharper theoretical conditions under which the restricted PVQMLE is more efficient than the unrestricted PVQMLE. However, for specific models, we can appeal to numerical methods to obtain the asymptotic covariance matrices of the two estimators and evaluate their relative efficiency. We consider the INAR(1) model in (7) with binomial thinning and Poisson error distribution as an example. The unrestricted PVQMLE \(\hat{\psi}\) is based on the following conditional mean and pseudo-variance equations \[\lambda_{t}(\psi)=aY_{t-1}+\omega_{1}\,,\qquad\nu_{t}^{*}(\gamma)=bY_{t-1}+\omega_{2}\,, \tag{21}\] where \(\psi^{\prime}=(a,\omega_{1})\) and \(\gamma^{\prime}=(b,\omega_{2})\). Instead, the restricted PVQMLE \(\hat{\psi}_{R}\) imposes the restrictions \(b=a(1-a)\) and \(\omega_{2}=\omega_{1}=\omega\). We focus on the analysis of the asymptotic variances of these estimators. To this aim, we simulate a long time series (\(T=10,000\)) from the INAR(1) process (binomial thinning and Poisson errors) for different values of the parameters \(a\) and \(\omega_{1}\) over a grid. The asymptotic covariance matrices of the two estimators are computed by approximating their expectations with the corresponding sample means. Figure 1 reports a heatmap plot of the ratio (on a \(\log_{10}\) scale) between the asymptotic variances of the unrestricted and the restricted PVQMLEs for the parameter estimates of \(a\) and \(\omega_{1}\). The regions of the parameter set where the \(\log_{10}\)-variance ratio is greater than zero, i.e. the variance ratio is greater than one, indicate the parameter values for which the restricted estimator is more efficient than the unrestricted one, and vice versa. The plots suggest that the restricted estimator \(\hat{\psi}_{R}\) is more efficient than the unrestricted estimator \(\hat{\psi}\) in most cases, except when \(a\) and \(\omega_{1}\) are close to zero. Furthermore, the loss of efficiency of the restricted PVQMLE in the green areas is shown to be minimal. For example, a \(\log_{10}\)-variance ratio around \(-0.05\) indicates a variance ratio around \(0.9\). Therefore, for small values of \(a\) and \(\omega_{1}\) the two estimators are essentially equivalent. Instead, for larger values of \(a\) and \(\omega_{1}\), the variance ratio becomes substantially larger, with the unrestricted PVQMLE having up to 30 times the variance of the restricted one. 
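To make this numerical approach concrete, the following is a minimal Python sketch (ours, purely illustrative) of the sample-mean approximation for the unrestricted PVQMLE: it simulates a Poisson INAR(1) path under the specification (21) and evaluates the sandwich covariance \(\Sigma_{\psi}=H_{\psi}^{-1}I_{\psi}H_{\psi}^{-1}\) of (12)-(13) at the true parameter values. The helper `simulate_inar1` and the chosen parameter values are assumptions of the sketch, not part of the paper; the restricted covariance \(\Sigma_{R}\) of Theorem 2 would be approximated analogously from the full Hessian and information blocks.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_inar1(T, a, omega):
    """Poisson INAR(1): Y_t = a o Y_{t-1} + eps_t, with binomial thinning 'o'
    and Poisson(omega) innovations (illustrative helper)."""
    y = np.zeros(T, dtype=np.int64)
    y[0] = rng.poisson(omega / (1.0 - a))      # start near the stationary mean
    for t in range(1, T):
        y[t] = rng.binomial(y[t - 1], a) + rng.poisson(omega)
    return y

a, omega, T = 0.85, 3.0, 10_000
y = simulate_inar1(T, a, omega)
ylag = y[:-1].astype(float)

# lambda_t(psi) = a*Y_{t-1} + omega_1, so d lambda_t / d psi = (Y_{t-1}, 1)'.
dlam = np.column_stack([ylag, np.ones_like(ylag)])
nu_true = a * (1.0 - a) * ylag + omega         # true conditional variance nu_t
nu_star = nu_true                              # pseudo-variance at the pseudo-true value

# Sample-mean approximations of H_psi and I_psi in (13), then the sandwich (12).
H = (dlam.T * (1.0 / nu_star)) @ dlam / ylag.size
I = (dlam.T * (nu_true / nu_star ** 2)) @ dlam / ylag.size
Sigma_psi = np.linalg.inv(H) @ I @ np.linalg.inv(H)
print(Sigma_psi)   # approximate asymptotic covariance of sqrt(T) * (psi_hat - psi_0)
```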
The efficiency comparison is further illustrated in Figure 2, which displays cross-sections of the \(\log_{10}\)-variance ratio for some fixed values of \(a\) and \(\omega_{1}\). Another way to grasp the intuition behind the improved efficiency of the restricted PVQMLE comes from the literature on saddlepoint approximations (Daniels, 1954). Saddlepoint approximations are used to approximate a density function with a function that is based on the cumulant generating function of the data, typically called the saddlepoint density. Pedeli et al. (2015) show that the conditional saddlepoint density can approximate the conditional density of the INAR(\(p\)) model in (7) to a certain degree of accuracy. It is not hard to see that the conditional saddlepoint density is approximately equal to the pseudo-variance quasi-likelihood in (4) with correctly specified variance (Pedeli et al., 2015, Sec. 3.4). Therefore, when the variance is correctly specified, the restricted PVQMLE of the INAR(\(p\)) model is close to the maximizer of the log-likelihood obtained by the saddlepoint density, which in turn is expected to get closer to the MLE as \(\lambda_{t}\to\infty\). This is confirmed empirically by the results in Figures 1 and 2, where the efficiency gain of the restricted PVQMLE over the unrestricted PVQMLE grows as \(a\) and \(\omega\) increase, i.e. where the restricted PVQMLE approximates the MLE more accurately. We conjecture that similar results may apply also to other models. For the case of independent observations, Goodman (2022) has recently shown that the approximation error in using the saddlepoint approximation is negligible compared to the inferential uncertainty inherent in the MLE. Although the literature is still under development, these arguments provide reliable evidence of the higher asymptotic efficiency of restricted PVQMLEs compared to the unrestricted one and the other quasi-likelihood methods presented in Section 3.1. Figure 1: Contour plots of \(\log_{10}\)-variance ratios for the INAR coefficients. Left: ratio \(\log_{10}[Var(\hat{a})/Var(\hat{a}_{R})]\) plotted for several values of \(a\) and \(\omega\). Right: ratio \(\log_{10}[Var(\hat{\omega})/Var(\hat{\omega}_{R})]\) plotted for several values of \(a\) and \(\omega\). The green area indicates a variance ratio smaller than one. Finally, we consider a simulation study to assess the small-sample properties of PVQMLEs in comparison with several alternative estimators. The study consists of 1,000 Monte Carlo replications where we generate data from the Poisson INAR(1) process and estimate the mean parameter vector \(\psi\). We consider several PVQMLEs based on different restrictions of the variance parameter vector \(\gamma\). The unrestricted PVQMLE \(\hat{\psi}\) is based on the mean and pseudo-variance equations in (21). The first restricted PVQMLE \(\hat{\psi}_{R_{1}}\) imposes the restriction \(R_{1}:b=a(1-a)\), the second restricted PVQMLE \(\hat{\psi}_{R_{2}}\) imposes the restriction \(R_{2}:\omega_{2}=\omega_{1}\), and the third restricted PVQMLE \(\hat{\psi}_{R_{3}}\) imposes the restriction \(R_{3}:b=a(1-a),\ \omega_{2}=\omega_{1}\). 
Furthermore, we consider the QMLE based on the Poisson quasi-likelihood \(\hat{\psi}_{Q}\), the conditional least squares estimator (CLSE) \(\hat{\psi}_{LS}\), the WLSE in (19) with weights \(\hat{w}_{t}=\hat{a}_{LS}(1-\hat{a}_{LS})Y_{t-1}+\hat{\omega}_{LS}\), where \((\hat{\omega}_{LS},\hat{a}_{LS})^{\prime}=\hat{\psi}_{LS}\) are first-step estimates obtained from the CLSE, and the unfeasible WLSE \(\hat{\psi}_{WUN}\) with weights given by the true conditional variance. The results of the simulation study are reported in Table 1. Since the PVQMLE without constraints on the first two moments is asymptotically equivalent to the WLSE, it can be expected that the restricted PVQMLE, where suitable constraints corresponding to the true model are imposed, should show improved performance over the other estimators. Indeed, from Table 1 it can be seen that the QMLE, CLSE, WLSE and the unrestricted PVQMLE of model (21) share similar performance in terms of both bias and RMSE. Instead, a partial specification of the true constraints underlying the model in \(\hat{\psi}_{R_{1}}\) and \(\hat{\psi}_{R_{2}}\) already leads to an improvement with respect to the other estimation techniques; such improvement becomes substantial in \(\hat{\psi}_{R_{3}}\) where all the correct constraints are considered. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{\(T=500\)} & \multicolumn{4}{c}{\(T=2000\)} \\ \cline{2-5} \cline{6-9} & \multicolumn{2}{c}{\(\omega_{1}\)} & \multicolumn{2}{c}{\(a\)} & \multicolumn{2}{c}{\(\omega_{1}\)} & \multicolumn{2}{c}{\(a\)} \\ \cline{2-3} \cline{4-5} \cline{6-7} \cline{8-9} Est. & Bias & RMSE & Bias & RMSE & Bias & RMSE & Bias & RMSE \\ \hline \(\hat{\psi}_{Q}\) & 0.1190 & 0.5080 & -0.0063 & 0.0255 & 0.0323 & 0.2425 & -0.0016 & 0.0121 \\ \(\hat{\psi}_{LS}\) & 0.1206 & 0.5003 & -0.0064 & 0.0251 & 0.0290 & 0.2401 & -0.0014 & 0.0119 \\ \(\hat{\psi}_{W}\) & 0.1175 & 0.5001 & -0.0062 & 0.0251 & 0.0301 & 0.2392 & -0.0015 & 0.0119 \\ \(\hat{\psi}_{WUN}\) & 0.1174 & 0.5001 & -0.0062 & 0.0251 & 0.0300 & 0.2393 & -0.0015 & 0.0119 \\ \(\hat{\psi}\) & 0.1159 & 0.5021 & -0.0061 & 0.0252 & 0.0295 & 0.2388 & -0.0015 & 0.0119 \\ \(\hat{\psi}_{R_{1}}\) & 0.1103 & 0.4819 & -0.0058 & 0.0241 & 0.0305 & 0.2314 & -0.0015 & 0.0115 \\ \(\hat{\psi}_{R_{2}}\) & 0.1109 & 0.4911 & -0.0059 & 0.0246 & 0.0246 & 0.2332 & -0.0012 & 0.0115 \\ \(\hat{\psi}_{R_{3}}\) & 0.0052 & 0.2028 & -0.0006 & 0.0098 & 0.0027 & 0.1011 & -0.0001 & 0.0049 \\ \hline \hline \end{tabular} \end{table} Table 1: Bias and RMSE of estimators of the mean parameters when the data generating process is an INAR(1) with \(a=0.85\) and \(\omega=3\), and sample sizes \(T=500\) and \(T=2000\). ## 4 Testing restrictions In the previous section, we have seen that correctly identified constraints on mean and pseudo-variance equations can deliver a restricted PVQMLE with improved efficiency. In this section, we develop a test based on the unrestricted estimator in (5) which allows us to test the validity of the restriction \(S\gamma=g(\psi)\). We define \(r(\theta)=S\gamma-g(\psi)\) and we denote by \(\Sigma(\theta_{0})=H^{-1}(\theta_{0})I(\theta_{0})H^{-1}(\theta_{0})\) the asymptotic covariance matrix of the entire unrestricted estimator vector \(\hat{\theta}\). Moreover, consider the following plug-in estimators of \(H(\theta_{0})\) and \(I(\theta_{0})\) given by \(\tilde{H}_{T}(\hat{\theta})=T^{-1}\sum_{t=1}^{T}-\partial^{2}\tilde{l}_{t}(\hat{\theta})/\partial\theta\partial\theta^{\prime}\) and \(\tilde{I}_{T}(\hat{\theta})=T^{-1}\sum_{t=1}^{T}\tilde{s}_{t}(\hat{\theta})\tilde{s}_{t}^{\prime}(\hat{\theta})\), respectively. 
The following result holds. **Proposition 2**.: _Assume that the assumptions of Theorem 1 hold. Consider the test \(H_{0}:r(\theta_{0})=0\) versus \(H_{1}:r(\theta_{0})\neq 0\) where the function \(r(\cdot)\) is continuously differentiable. Let \(R(\theta)=\partial r(\theta)/\partial\theta^{\prime}\). Then, under \(H_{0}\), as \(T\to\infty\)_ \[W_{T}=T\,r^{\prime}(\hat{\theta})\big{[}R(\hat{\theta})\tilde{\Sigma}_{T}(\hat{\theta})R^{\prime}(\hat{\theta})\big{]}^{-1}r(\hat{\theta})\xrightarrow{d}\chi_{r}^{2},\] _where \(\tilde{\Sigma}_{T}(\hat{\theta})=\tilde{H}_{T}^{-1}(\hat{\theta})\tilde{I}_{T}(\hat{\theta})\tilde{H}_{T}^{-1}(\hat{\theta})\)._ The result follows immediately by the multivariate delta method, the continuous mapping theorem and standard asymptotic convergence arguments. Proposition 2 provides us with a testing procedure for \(H_{0}:\theta_{0}\in\Theta_{R}\) versus \(H_{1}:\theta_{0}\notin\Theta_{R}\). It is worth noting that the hypothesis test depicted in Proposition 2 does not require the variance of the model to be correctly specified. In the special case in which the pseudo-variance is correctly specified, the test can be interpreted as a test of correct specification. For example, consider the INAR(1) model in (7) with conditional mean and pseudo-variance equations as defined in equation (21). We may consider the following test \[H_{0}:b=a(1-a)\quad\text{vs}\quad H_{1}:b\neq a(1-a)\,, \tag{22}\] which is a test for the assumption of a binomial thinning operator '\(\circ\)'. This follows from the definition of the INAR model in (7) as the autoregressive coefficient of the variance takes the form \(b=a(1-a)\) under the assumption of binomial thinning. Alternative thinning specifications can be tested, leading to a different form of the autoregressive variance parameter \(b\), see Latour (1998) for the properties of INAR models with a general thinning specification. For instance, if we have a Poisson distribution for the thinning operator we have the restriction \(b=a\). The corresponding test is \[H_{0}:b=a\quad\text{vs}\quad H_{1}:b\neq a, \tag{23}\] which assesses the validity of the assumption of equidispersion in the thinning operator versus either overdispersion or underdispersion. We carry out a simulation study with 5000 Monte Carlo replications to assess the empirical size and power of the test of the parameter restrictions for the INAR(1) model. We consider the hypothesis in (23). To assess the size of the test we simulate under \(H_{0}\) from a model with Poisson thinning operator and a Poisson distribution of the error term. Table 2 reports the results on the empirical size of the test. We can see that the test is slightly oversized for the smallest sample size, though still close to the nominal level, and it quickly becomes correctly sized as the sample size increases. Next, we evaluate the power of the test by simulating under the alternative. We consider a negative binomial thinning specification such that \(a\circ X\) has a negative binomial distribution with mean \(aX\) and variance \(bX\), \(b=a+a^{2}/v\), where \(v\) is the dispersion parameter of the negative binomial. 
We note that this generates overdispersion in the thinning as \(b=a+a^{2}/v>a\), and the smaller the parameter \(v\), the larger the overdispersion. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Nominal size} & \multicolumn{4}{c}{\(T\)} \\ \cline{2-5} & 250 & 500 & 1000 & 2000 \\ \hline 0.1000 & 0.1224 & 0.1170 & 0.1058 & 0.1094 \\ 0.0500 & 0.0690 & 0.0662 & 0.0542 & 0.0582 \\ 0.0100 & 0.0218 & 0.0182 & 0.0110 & 0.0138 \\ \hline \hline \end{tabular} \end{table} Table 2: Empirical size for test in (23). The model considered under \(H_{0}\) is an INAR(1) model with Poisson thinning as well as Poisson error with parameter values \(a=0.5\) and \(\omega=2\). Figure 3: Empirical power for test in (23). The true parameter values of the INAR(1) model with negative binomial thinning and Poisson error are \(a=0.5\) and \(\omega=2\). The value of the dispersion parameter \(v\) changes as indicated in the horizontal axis through the \(\%\) of overdispersion: \(1-a/(a+a^{2}/v)\). Figure 3 shows the power of the test in (23) to reject the null hypothesis. As expected, we see that the power increases as the relative overdispersion \(1-a/b\) increases (\(v\) decreases) and as the sample size increases. Overall, the results show that the test has appropriate size and power against alternative hypotheses. ## 5 Real data applications In this section, we present two empirical applications where we employ PVQMLEs. We consider the test described in Section 4 to select appropriate parameter restrictions and compare different PVQMLEs. The first application concerns a dataset of crime counts, where the INAR model is considered for the specification of the conditional mean and the pseudo-variance. The second application concerns the realized correlation between two financial assets, which forms a double-bounded time series, where we consider a beta autoregression for the specification of the conditional mean and the pseudo-variance. ### INAR model for crime counts We consider an empirical application to the monthly number of offensive conduct reports in the city of Blacktown, Australia, from January 1995 to December 2014. This dataset has been employed in several articles featuring the INAR(1) model (Gorgi, 2018; Leisen et al., 2019). The time series is displayed in Figure 4. In the literature, the distributional structure of the INAR innovation term \(\varepsilon_{t}\) is typically allowed to be flexible or left unspecified but the thinning operator is typically considered to be binomial. We consider the test proposed in the previous section to formally test the validity of the binomial thinning assumption as well as the dispersion of the error term. We obtain the unrestricted PVQMLE for the INAR conditional mean and pseudo-variance equations in (21) and test several restrictions based on the test in Proposition 2. We test for equidispersion in the error \(H_{0}:\omega_{1}=\omega_{2}\), binomial thinning \(H_{0}:b=a(1-a)\), Poisson thinning \(H_{0}:a=b\) and geometric thinning \(H_{0}:b=a+a^{2}\). \begin{table} \begin{tabular}{c c c c} \hline \(\omega_{1}=\omega_{2}\) & \multicolumn{3}{c}{Thinning} \\ \cline{2-4} & binomial & Poisson & geometric \\ \hline 0.372 & 0.005 & 0.043 & 0.229 \\ \hline \end{tabular} \end{table} Table 3: \(p\)-values of the restriction tests for the INAR(1) model. The results of the tests are summarized in Table 3. We can see that the test does not reject the hypothesis of equidispersion in the error \(\omega_{1}=\omega_{2}\). 
As concerns the tests on the thinning, the binomial and Poisson thinning are rejected at the 5% significance level, while the geometric thinning is not rejected. This indicates that there is overdispersion in the thinning and that the geometric thinning may be appropriate to describe the degree of overdispersion. Table 4 reports the estimation results for several PVQMLEs that are based on the different restrictions on the thinning operator. We can see that restricting to a binomial thinning leads to substantially biased estimates with respect to the unrestricted PVQMLE. Instead, under the geometric thinning there is no such bias and the estimator can be expected to have a higher efficiency. \begin{table} \begin{tabular}{l c c c c} \cline{2-5} & \multicolumn{1}{c}{\(\hat{\omega}_{1}\)} & \(\hat{\omega}_{2}\) & \(\hat{a}\) & \(\hat{b}\) \\ \hline Unrestricted & 4.559 & 6.644 & 0.509 & 1.170 \\ & (0.520) & (2.374) & (0.058) & (0.330) \\ binomial thinning & 6.280 & - & 0.371 & - \\ & (0.434) & & (0.040) & \\ Poisson thinning & 4.820 & - & 0.524 & - \\ & (0.523) & - & (0.058) & - \\ geometric thinning & 4.129 & - & 0.592 & - \\ & (0.500) & - & (0.059) & - \\ \hline \end{tabular} \end{table} Table 4: PVQMLEs of the INAR(1) model for the crime time series dataset. Standard errors in brackets. Figure 4: Monthly number of offensive conduct reports in Blacktown, Australia, from January 1995 to December 2014. ### Double-bounded autoregression for realized correlation The second application we present concerns the modelling of daily realized correlations between Boeing and Honeywell stocks as considered in Gorgi and Koopman (2021). Figure 5 displays the time series. The sample size is \(T=2515\). Realized correlation measures take values in the interval \([-1,1]\) and the transformation \(Y_{t}/2+1/2\) is applied to rescale the realized correlation to the unit interval \([0,1]\). We refer to Gorgi and Koopman (2021) for a discussion on how models on the unit interval can be extended to a general interval with known bounds. We consider the following specification for the conditional mean and pseudo-variance \[\lambda_{t} =\delta_{1}+\alpha_{1}Y_{t-1}+\beta_{1}\lambda_{t-1},\] \[\nu_{t}^{*} =\frac{\mu_{t}(1-\mu_{t})}{1+\phi},\qquad\mu_{t}=\delta_{2}+\alpha_{2}Y_{t-1}+\beta_{2}\mu_{t-1},\] where the double-bounded nature of the data requires \(\delta_{i}+\alpha_{i}+\beta_{i}<1\) for \(i=1,2\) and \(\phi>0\). Besides the unrestricted PVQMLE, we consider a restricted PVQMLE with \(\delta_{2}=\delta_{1}\), \(\alpha_{2}=\alpha_{1}\), \(\beta_{1}=\beta_{2}\), which implies \(\mu_{t}=\lambda_{t}\). These restrictions impose that the pseudo-variance is equal to the conditional variance implied by a beta distribution with mean parameter \(\lambda_{t}\) and precision parameter \(\phi\) (see Example 4). In this way, we can also test the adequacy of the beta autoregression for modelling the analyzed data through the specification test on the restriction. Table 5 reports the estimation results together with the restriction tests. We can see that the specification test rejects the null hypothesis of equality for the estimated \(\alpha\) coefficients. For the same reason, the null hypothesis of the combined joint test is also rejected. The null hypothesis is instead not rejected for the \(\delta\) and \(\beta\) coefficients at the \(1\%\) level. Figure 5: Daily time series of realized correlations between Boeing (BA) and Honeywell (HON) asset returns, from January 2001 to December 2010. 
This leans in favour of the restricted PVQMLE. We also notice that the estimated coefficients and the corresponding standard errors of the restricted PVQMLE are fairly close to the ones obtained from the beta autoregression reported in Table 1 of Gorgi and Koopman (2021). \begin{table} \begin{tabular}{c c c c c c c c} & \multicolumn{1}{c}{\(\hat{\delta}_{1}\)} & \(\hat{\alpha}_{1}\) & \(\hat{\beta}_{1}\) & \(\hat{\phi}\) & \(\hat{\delta}_{2}\) & \(\hat{\alpha}_{2}\) & \(\hat{\beta}_{2}\) \\ \hline Unrestricted & 0.01 & 0.163 & 0.822 & 22.226 & 0.055 & 0.045 & 0.898 \\ & (0.003) & (0.013) & (0.015) & (2.745) & (0.019) & (0.007) & (0.022) \\ Restricted & 0.01 & 0.161 & 0.826 & 36.963 & - & - & - \\ & (0.003) & (0.013) & (0.015) & (1.073) & & & \\ \hline \(H_{0}\) & \(\delta_{1}=\delta_{2}\) & \(\alpha_{1}=\alpha_{2}\) & \(\beta_{1}=\beta_{2}\) & joint test & & & \\ \(p\)-value & 0.02 & \(<\)0.001 & 0.01 & \(<\)0.001 & & & \\ \hline \end{tabular} \end{table} Table 5: Estimation results for the realized correlation series. Standard errors in brackets. The bottom of the table reports the \(p\)-values of the tests on the restrictions. ## Appendix A: Proofs of Results ### Proofs **Proof of Theorem 1:** Let \(L(\theta)=\mathrm{E}[l_{t}(\theta)]\) be the limit log-quasi-likelihood. In what follows we show the following intermediate results. (i) Uniform convergence: \(\sup_{\theta\in\Theta}|\tilde{L}_{T}(\theta)-L(\theta)|\to 0\) almost surely, as \(T\to\infty\). (ii) Identifiability: the true parameter value \(\theta_{0}\) is the unique maximizer of \(L(\theta)\), i.e. \(\mathrm{E}\left[l_{t}(\theta)\right]<\mathrm{E}\left[l_{t}(\theta_{0})\right]\) for all \(\theta\in\Theta,\theta\neq\theta_{0}\). In order to prove (i), the uniform convergence of the two summands on the right-hand side of (A.1) should be shown. \[|\tilde{L}_{T}(\theta)-L(\theta)|\leq|\tilde{L}_{T}(\theta)-L_{T}(\theta)|+|L_{T}(\theta)-L(\theta)|\,.\] (A.1) The first term converges uniformly by Lemma 1, under **A4**-**A5**, implying that the starting value of the process is asymptotically unimportant for the quasi-likelihood contribution. By assumption **A1** the log-quasi-likelihood contribution \(l_{t}(\theta)\) is stationary and ergodic. Moreover, it is uniformly bounded, \[\mathrm{E}\,\sup_{\theta\in\Theta}|l_{t}(\theta)|\leq\frac{1}{2}\mathrm{E}\,\sup_{\gamma\in\Gamma}|\log\nu_{t}^{*}(\gamma)|+\frac{1}{2}\mathrm{E}\,\sup_{\theta\in\Theta}\left(\frac{[Y_{t}-\lambda_{t}(\psi)]^{2}}{\nu_{t}^{*}(\gamma)}\right)<\infty\] by assumption **A2**. For the continuity of the quasi-likelihood and the compactness of \(\Theta\), Straumann and Mikosch (2006, Thm. 2.7) applies, providing the uniform convergence of the second term in (A.1); in symbols, \(\sup_{\theta\in\Theta}|L_{T}(\theta)-L(\theta)|\to 0\) almost surely, as \(T\to\infty\). This concludes the proof of (i). We now prove (ii). First note that by the uniform limit theorem \(L(\theta)=\operatorname{E}[l_{t}(\theta)]\) is a continuous function and it attains at least a maximum in \(\Theta\) since \(\Theta\) is compact. We now prove that such a maximum is unique so that it can be univocally identified. Recall that \(\theta=(\psi^{\prime},\gamma^{\prime})^{\prime}\); assumption **A2** provides \(\operatorname{E}\;\sup_{\psi\in\Psi}|l_{t}(\psi,\gamma)|<\infty\) and \(\operatorname{E}\;\sup_{\gamma\in\Gamma}|l_{t}(\psi_{0},\gamma)|<\infty\), so also the function \(l_{t}(\psi,\gamma)\) has at least a maximum for \(\psi\in\Psi\), and \(l_{t}(\psi_{0},\gamma)\) has at least a maximum for \(\gamma\in\Gamma\). 
Consider now \(\operatorname{E}\left\{l_{t}(\theta)-l_{t}(\theta_{0})\right\}=\operatorname{ E}\left\{l_{t}(\theta)-l_{t}(\psi_{0},\gamma)\right\}+\operatorname{E}\left\{l_{t}( \psi_{0},\gamma)-l_{t}(\theta_{0})\right\}\). \[\operatorname{E}\left\{l_{t}(\theta)-l_{t}(\psi_{0},\gamma)\right\} =\operatorname{E}\left\{-\frac{\operatorname{E}[\left(Y_{t}- \lambda_{t}(\psi)\right)^{2}|\mathcal{F}_{t-1}]}{2\nu_{t}^{*}(\gamma)}+\frac{ \nu_{t}}{2\nu_{t}^{*}(\gamma)}\right\}\] \[\leq\operatorname{E}\left\{-\frac{\nu_{t}}{2\nu_{t}^{*}(\gamma)} +\frac{\nu_{t}}{2\nu_{t}^{*}(\gamma)}\right\}=0\] with equality if and only if \(\psi=\psi_{0}\) by assumption **A3**. Moreover, \(\operatorname{E}\left\{l_{t}(\psi_{0},\gamma)-l_{t}(\theta_{0})\right\}= \operatorname{E}\left[l_{t}(\psi_{0},\gamma)\right]-\operatorname{E}\left[l_{ t}(\psi_{0},\gamma^{*})\right]\leq 0\) by assumption **A6**. This concludes the proof of (ii). The consistency of the whole estimator \(\hat{\theta}\) follows from (i), (ii) and the compactness of \(\Theta\) by Potscher and Prucha (1997, Lemma 3.1). This implies (11). To prove the asymptotic normality of the estimator we establish additional intermediate results. 1. \(\sqrt{T}\sup_{\theta\in\Theta}\|S_{T}(\theta)-\tilde{S}_{T}(\theta)\|\to 0\) almost surely, as \(T\to\infty\). 2. Define \(V_{T}(\theta)=T^{-1}\sum_{t=1}^{T}-\partial^{2}l_{t}(\theta)/\partial\theta \partial\theta^{\prime}\). \(V_{T}(\theta)\to H(\theta_{0})\) almost surely uniformly over \(\theta\in\Theta\), as \(T\to\infty\). 3. \(\operatorname{E}[s_{t}(\theta_{0})]=0\). The condition (a) is satisfied by Lemma 2, under **A4**-**A5** and **A7** implying that initial values of the process do not affect the asymptotic distribution of the PVQMLE. Consider the second derivative of the log-quasi-likelihood contribution. \[\frac{\partial^{2}l_{t}(\theta)}{\partial\theta\partial\theta^{ \prime}} =\left(\frac{1}{2\nu_{t}^{*2}(\gamma)}-\frac{[Y_{t}-\lambda_{t}( \psi)]^{2}}{\nu_{t}^{*3}(\gamma)}\right)\frac{\partial\nu_{t}^{*}(\gamma)}{ \partial\theta}\frac{\partial\nu_{t}^{*}(\gamma)}{\partial\theta^{\prime}}\] \[\qquad-\frac{Y_{t}-\lambda_{t}(\psi)}{\nu_{t}^{*2}(\gamma)}\left( \frac{\partial\lambda_{t}(\psi)}{\partial\theta}\frac{\partial\nu_{t}^{*}( \gamma)}{\partial\theta^{\prime}}-\frac{\partial\nu_{t}^{*}(\gamma)}{\partial \theta}\frac{\partial\lambda_{t}(\psi)}{\partial\theta^{\prime}}\right)\] \[\qquad-\frac{1}{\nu_{t}^{*}(\gamma)}\frac{\partial\lambda_{t}( \psi)}{\partial\theta}\frac{\partial\lambda_{t}(\psi)}{\partial\theta^{\prime }}+\frac{Y_{t}-\lambda_{t}(\psi)}{\nu_{t}^{*}(\gamma)}\frac{\partial^{2} \lambda_{t}(\psi)}{\partial\theta\partial\theta^{\prime}}\] \[\qquad+\left(\frac{[Y_{t}-\lambda_{t}(\psi)]^{2}}{2\nu_{t}^{*2}( \gamma)}-\frac{1}{2\nu_{t}^{*}(\gamma)}\right)\frac{\partial^{2}\nu_{t}^{*}( \gamma)}{\partial\theta\partial\theta^{\prime}}\,.\] Assumption A8 and the Cauchy-Schwarz inequality yield \(\mathrm{E}\sup_{\theta\in\Theta}\left|\partial^{2}l_{t}(\theta)/\partial\theta_{i} \partial\theta_{j}\right|<\infty\) for all \(i,j=1,\ldots,m\). Furthermore, the second derivative is a continuous, stationary and ergodic sequence. Then, an application of Straumann and Mikosch (2006, Thm. 2.7) provides the condition (b). 
Note that since in this case \(\partial\lambda_{t}(\psi)/\partial\gamma=\partial\nu_{t}^{*}(\gamma)/\partial\psi=0\) the matrix \(H(\theta_{0})\) is block diagonal with diagonal block matrices \(H_{\psi}(\theta_{0})=\mathrm{E}\left[-\partial^{2}l_{t}(\theta_{0})/\partial \psi\partial\psi^{\prime}\right]\) and \(H_{\gamma}(\theta_{0})=\mathrm{E}\left[-\partial^{2}l_{t}(\theta_{0})/ \partial\gamma\partial\gamma^{\prime}\right]\). The former is defined in (13). For establishing the asymptotic normality of the estimator \(\hat{\theta}\) the proof of (c) is needed. Let \(s_{t}(\theta_{0})=[s_{t}^{(\psi)}(\theta_{0})^{\prime},s_{t}^{(\gamma)}( \theta_{0})^{\prime}]^{\prime}\) be the partition of the score between mean and pseudo-variance parameters. Observe that \(\mathrm{E}(s_{t}^{(\psi)}(\theta_{0})|\mathcal{F}_{t-1})=0\) but \(\mathrm{E}\left(s_{t}(\theta_{0})|\mathcal{F}_{t-1}\right)\neq 0\). Note that \(\sup_{\theta\in\Theta}\left|\partial l_{t}(\theta)/\partial\theta_{i}\right| \leq 2\left[\sup_{\theta\in\Theta}\left|l_{t}(\theta)\right|\right]^{1/2} \left[\sup_{\theta\in\Theta}\left|\partial^{2}l_{t}(\theta)/\partial\theta_{i }\partial\theta_{i}\right|\right]^{1/2}\), by Rudin (1976, p. 115). Moreover, \(\mathrm{E}\sup_{\theta\in\Theta}\left|l_{t}(\theta)\right|<\infty\), and \(\mathrm{E}\sup_{\theta\in\Theta}\left|\partial^{2}l_{t}(\theta)/\partial \theta_{i}\partial\theta_{j}\right|<\infty\). Then an application of Cauchy-Schwarz inequality entails \(\mathrm{E}\sup_{\theta\in\Theta}\left|\partial l_{t}(\theta)/\partial\theta_{ i}\right|<\infty\). Finally, \(\left\|\partial l_{t}(\theta)/\partial\theta\right\|\leq\sup_{\theta\in\Theta} \left\|\partial l_{t}(\theta)/\partial\theta\right\|\) and an application of the dominated convergence theorem leads to \(\mathrm{E}\left[\partial l_{t}(\theta)/\partial\theta\right]=\partial\mathrm{ E}\left[l_{t}(\theta)\right]/\partial\theta\). By noting that \(\theta_{0}\) is the unique maximizer of \(\mathrm{E}\left[l_{t}(\theta)\right]\) the result (c) follows. For \(T\) large enough \(\hat{\theta}\in\dot{\Theta}\) by A10, so the following derivatives exist almost surely \[0=\sqrt{T}\tilde{S}_{T}(\hat{\theta})=\sqrt{T}S_{T}(\hat{\theta})+o(1)=\sqrt{ T}S_{T}(\theta_{0})-V_{T}(\bar{\theta})\sqrt{T}(\hat{\theta}-\theta_{0})+o(1),\] where the first equality comes from the definition (4), the second equality holds by (a), and the third equality is obtained by Taylor expansion at \(\theta_{0}\) with \(\bar{\theta}\) lying between \(\hat{\theta}\) and \(\theta_{0}\). By assumption A11 and (c) we have \(\sqrt{T}S_{T}(\theta_{0})\xrightarrow{d}N(0,I(\theta_{0}))\) with \(I(\theta_{0})=\mathrm{E}\left[s_{t}(\theta_{0})s_{t}(\theta_{0})^{\prime}\right]\). This fact and (b) establish the asymptotic normality of the estimator \(\hat{\theta}\) with covariance matrix \(\Sigma(\theta_{0})=H^{-1}(\theta_{0})I(\theta_{0})H^{-1}(\theta_{0})\) by assumption A9, where \[H(\theta_{0})=\begin{pmatrix}H_{\psi}(\theta_{0})&0\\ 0&H_{\gamma}(\theta_{0})\end{pmatrix}\,,\quad I(\theta_{0})=\begin{pmatrix}I_{ \psi}(\theta_{0})&I_{\psi,\gamma}(\theta_{0})\\ I_{\psi,\gamma}(\theta_{0})^{\prime}&I_{\gamma}(\theta_{0})\end{pmatrix}\,,\] (A.2) with \(H_{x}(\theta_{0})=\mathrm{E}\left[-\partial^{2}l_{t}(\theta_{0})/\partial x \partial x^{\prime}\right]\), \(I_{x}(\theta_{0})=\mathrm{E}[s_{t}^{(x)}(\theta_{0})s_{t}^{(x)}(\theta_{0})^{ \prime}]\) and \(I_{x,z}(\theta_{0})=\mathrm{E}[s_{t}^{(x)}(\theta_{0})s_{t}^{(z)}(\theta_{0})^{ \prime}]\). 
In particular, standard algebra shows that \(I_{\psi}(\theta_{0})\) equals (13). See also equation (18). A suitable block matrix multiplication of (A.2) provides \[\Sigma(\theta_{0})=\begin{pmatrix}\Sigma_{\psi}(\theta_{0})&\Sigma_{\psi,\gamma}(\theta_{0})\\ \Sigma_{\psi,\gamma}(\theta_{0})^{\prime}&\Sigma_{\gamma}(\theta_{0})\end{pmatrix}\,,\] where \(\Sigma_{\psi}(\theta_{0})\) takes the form defined in (12). Finally, note that by the marginal property of the multivariate Gaussian distribution, result (12) holds with covariance matrix \(\Sigma_{\psi}\) being the partition of \(\Sigma(\theta_{0})\) for the mean parameters \(\psi\). **Proof of Corollary 2:** Condition **A11** is not required since in this case it is easily shown from (10) that \(\mathrm{E}\left(s_{t}(\theta_{0})|\mathcal{F}_{t-1}\right)=0\). Recall that \(\sqrt{T}S_{T}(\theta_{0})=T^{-1/2}\sum_{t=1}^{T}U_{t}\) where \(U_{t}=s_{t}(\theta_{0})\). Note that \(\{U_{t},\mathcal{F}_{t}\}\) is a stationary martingale difference, and due to **A9** it has finite second moments. Then **A11** follows by the central limit theorem for martingales (Billingsley, 1961) and the Cramer-Wold device. The consistency and asymptotic normality of \(\hat{\theta}\) follow as above. Finally, in view of (18) and \(\mathrm{E}(s_{t}^{(\psi)}(\theta_{0})|\mathcal{F}_{t-1})=0\), \[\mathrm{Var}\big{[}H_{\psi}^{-1}(\theta_{0})s_{t}^{(\psi)}(\theta_{0})-I_{\psi}^{-1}(\theta_{0})s_{t}^{(\psi)}(\theta_{0})\big{]}=\Sigma_{\psi}-I_{\psi}^{-1}\] which is necessarily positive semi-definite. \(\square\) **Proof of Corollary 3:** Analogously to the proof of Theorem 1, **A1**-**A5** guarantee that \(\tilde{L}_{T}(\theta)\) is continuous and a.s. uniformly convergent to \(\mathrm{E}[l_{t}(\theta)]\). By recalling that \(\Theta\) is compact the result follows by Potscher and Prucha (1997, Lemma 4.2). \(\square\) **Proof of Theorem 2:** The consistency of \(\hat{\theta}_{R}\) follows from the fact that by the proof of Theorem 1 we have that \(\mathrm{E}[l_{t}(\psi,\gamma)]\leq\mathrm{E}[l_{t}(\psi_{0},\gamma^{*})]\) for any \(\theta\in\Theta\) with equality holding only if \(\theta=(\psi_{0}^{\prime},\gamma^{*\prime})^{\prime}\), and assumption **A12** ensures that \((\psi_{0}^{\prime},\gamma^{*\prime})^{\prime}\in\Theta_{R}\) with \(\Theta_{R}\subseteq\Theta\). The consistency in (15) follows. The asymptotic normality of the estimator \(\hat{\theta}_{R}\) follows as in the proof of Theorem 1 with covariance matrix \(\Sigma(\theta_{0})=H^{-1}(\theta_{0})I(\theta_{0})H^{-1}(\theta_{0})\). In this case the Hessian and Fisher information matrices can be written in the following block matrix form \[H(\theta_{0})=\begin{pmatrix}H_{\psi}(\theta_{0})&H_{\psi,\gamma_{2}}(\theta_{0})\\ H_{\psi,\gamma_{2}}(\theta_{0})^{\prime}&H_{\gamma_{2}}(\theta_{0})\end{pmatrix}\,,\quad I(\theta_{0})=\begin{pmatrix}I_{\psi}(\theta_{0})&I_{\psi,\gamma_{2}}(\theta_{0})\\ I_{\psi,\gamma_{2}}(\theta_{0})^{\prime}&I_{\gamma_{2}}(\theta_{0})\end{pmatrix}\,.\] (A.3) Moreover, recall that \[H^{-1}(\theta_{0})=D(\theta_{0})=\begin{pmatrix}D_{\psi}(\theta_{0})&D_{\psi,\gamma_{2}}(\theta_{0})\\ D_{\psi,\gamma_{2}}(\theta_{0})^{\prime}&D_{\gamma_{2}}(\theta_{0})\end{pmatrix}\,.\] (A.4) By computing \(\Sigma(\theta_{0})\) using the block matrix multiplication as defined in (A.3) and (A.4) the partition of \(\Sigma(\theta_{0})\) for the mean parameters \(\psi\) equals \(\Sigma_{R}\). This entails (16). 
\(\square\) **Proof of Proposition 1:** It is not hard to show that under the conditions of Proposition 1 the Hessian and information matrices of (4) take the form defined in (A.5) and (A.6), respectively. \[H(\theta_{0})=\mathrm{E}\left[\frac{1}{\nu_{t}(\gamma_{0})}\frac{\partial \lambda_{t}(\psi_{0})}{\partial\theta}\frac{\partial\lambda_{t}(\psi_{0})}{ \partial\theta^{\prime}}+\frac{1}{2\nu_{t}^{2}(\gamma_{0})}\frac{\partial\nu_ {t}(\gamma_{0})}{\partial\theta}\frac{\partial\nu_{t}(\gamma_{0})}{\partial \theta^{\prime}}\right],\] (A.5) \[I(\theta_{0})= \mathrm{E}\left[\frac{1}{\nu_{t}(\gamma_{0})}\frac{\partial\lambda_ {t}(\psi_{0})}{\partial\theta}\frac{\partial\lambda_{t}(\psi_{0})}{\partial \theta^{\prime}}+\frac{h_{t}}{2\nu_{t}^{3}(\gamma_{0})}\left(\frac{\partial \lambda_{t}(\psi_{0})}{\partial\theta}\frac{\partial\nu_{t}(\gamma_{0})}{ \partial\theta^{\prime}}+\frac{\partial\nu_{t}(\gamma_{0})}{\partial\theta} \frac{\partial\lambda_{t}(\psi_{0})}{\partial\theta^{\prime}}\right)\right]\] \[+\mathrm{E}\left[\left(\frac{k_{t}}{\nu_{t}^{2}(\gamma_{0})}-1 \right)\frac{1}{4\nu_{t}^{2}(\gamma_{0})}\frac{\partial\nu_{t}(\gamma_{0})}{ \partial\theta}\frac{\partial\nu_{t}(\gamma_{0})}{\partial\theta^{\prime}} \right],\] (A.6) where \(h_{t}=\mathrm{E}[\left(Y_{t}-\lambda_{t}(\psi_{0})\right)^{3}\left|\mathcal{F }_{t-1}\right]\) and \(k_{t}=\mathrm{E}[\left(Y_{t}-\lambda_{t}(\psi_{0})\right)^{4}\left|\mathcal{ F}_{t-1}\right]\). From (A.5) and (A.6), we can see that when the variance is correctly specified the PVQMLE can be considered as a Gaussian QMLE. In case the data are normally distributed the PVQMLE eventually becomes MLE since \(h_{t}=0\) and \(k_{t}=3\nu_{t}^{2}\) implying \(I(\theta_{0})=H(\theta_{0})\). So the Gaussian case in assumption **A13** is trivial. For the other cases we have \(m=1\), so \(\psi=\theta\) and \(I(\theta_{0})\leq H(\theta_{0})\), then \(\Sigma_{R}=I(\theta_{0})/H(\theta_{0})^{2}\leq 1/I(\theta_{0})\). \(\Box\) ### Technical lemmas **Lemma 1**.: _Consider the PVQMLE in (5) with log-quasi-likelihood (4). Under conditions **A4**-**A5**, almost surely as \(T\to\infty\), \(\sup_{\theta\in\Theta}|\tilde{L}_{T}(\theta)-L_{T}(\theta)|\to 0\)._ **Proof of Lemma 1:** From assumption **A4**, we have that \[\sup_{\theta\in\Theta}|l_{t}(\theta)-\tilde{l}_{t}(\theta)|\] \[\leq\sup_{\theta\in\Theta}\left|\frac{[\tilde{\lambda}_{t}(\psi) -\lambda_{t}(\psi)][\tilde{\lambda}_{t}(\psi)+\lambda_{t}(\psi)-2Y_{t}]}{2 \tilde{\nu}_{t}^{*}(\gamma)}+\frac{[\nu_{t}^{*}(\gamma)-\tilde{\nu}_{t}^{*}( \gamma)][Y_{t}-\lambda_{t}(\psi)]^{2}}{2\nu_{t}^{*}(\gamma)\tilde{\nu}_{t}^{*} (\gamma)}\right|\] \[\qquad+\frac{1}{2}\sup_{\gamma\in\Gamma}\left|\log\frac{\tilde{ \nu}_{t}^{*}(\gamma)}{\nu_{t}^{*}(\gamma)}\right|\] \[\leq\frac{1}{\underline{\nu}^{*}}a_{t}\Big{(}a_{t}+|Y_{t}|+\sup_{ \psi\in\Psi}|\lambda_{t}(\psi)|\,\Big{)}+\frac{1}{\underline{\nu}^{*2}}b_{t} \Big{(}Y_{t}^{2}+\sup_{\psi\in\Psi}\lambda_{t}^{2}(\psi)\Big{)}\] \[\qquad+\frac{1}{2}\sup_{\gamma\in\Gamma}\left|\log\left(1+\frac {\tilde{\nu}_{t}^{*}(\gamma)-\nu_{t}^{*}(\gamma)}{\nu_{t}^{*}(\gamma)}\right)\right|\] \[\leq\frac{1}{\underline{\nu}^{*}}a_{t}\Big{(}1+|Y_{t}|+\sup_{ \psi\in\Psi}|\lambda_{t}(\psi)|\,\Big{)}+\frac{1}{\underline{\nu}^{*2}}b_{t} \Big{(}Y_{t}^{2}+\sup_{\psi\in\Psi}\lambda_{t}^{2}(\psi)\Big{)}+\frac{1}{2 \underline{\nu}^{*}}b_{t},\] for \(t\) large enough since, by assumption **A5**, a.s. \(a_{t}\to 0\) as \(t\to\infty\) and by noticing that \(\log(1+x)\leq x\) for \(x>-1\). 
Assumption **A5** and an application of Cesaro's lemma lead to \[\sup_{\theta\in\Theta}|\tilde{L}_{T}(\theta)-L_{T}(\theta)|\leq T^{-1}\sum_{t=1 }^{T}\sup_{\theta\in\Theta}|\tilde{l}_{t}(\theta)-l_{t}(\theta)|\to 0\,,\quad a.s.\] as \(T\to\infty\). \(\Box\) **Lemma 2**.: _Consider the PVQMLE in (5) with score (10). Under conditions **A4-A5** and **A7**, almost surely as \(T\to\infty\), \(\sqrt{T}\sup_{\theta\in\Theta}\|\tilde{S}_{T}(\theta)-S_{T}(\theta)\|\to 0\)._ **Proof of Lemma 2:** We obtain that \[\sup_{\theta\in\Theta}\|s_{t}(\theta)-\tilde{s}_{t}(\theta)\|\leq \sup_{\theta\in\Theta}\left\|\frac{1}{2\tilde{\nu}_{t}^{*}(\gamma )}\frac{\partial\tilde{\nu}_{t}^{*}(\gamma)}{\partial\theta}-\frac{1}{2\nu_{t }^{*}(\gamma)}\frac{\partial\nu_{t}^{*}(\gamma)}{\partial\theta}\right\|\] \[+\sup_{\theta\in\Theta}\left\|\frac{Y_{t}-\tilde{\lambda}_{t}( \psi)}{\tilde{\nu}_{t}^{*}(\gamma)}\frac{\partial\tilde{\lambda}_{t}(\psi)}{ \partial\theta}-\frac{Y_{t}-\lambda_{t}(\psi)}{\nu_{t}^{*}(\gamma)}\frac{ \partial\lambda_{t}(\psi)}{\partial\theta}\right\|\] \[+\sup_{\theta\in\Theta}\left\|\frac{[Y_{t}-\tilde{\lambda}_{t}( \psi)]^{2}}{2\tilde{\nu}_{t}^{*2}(\gamma)}\frac{\partial\tilde{\nu}_{t}^{*}( \gamma)}{\partial\theta}-\frac{[Y_{t}-\lambda_{t}(\psi)]^{2}}{2{\nu_{t}^{*2}( \gamma)}}\frac{\partial\nu_{t}^{*}(\gamma)}{\partial\theta}\right\|=\delta_{t} ^{1}+\delta_{t}^{2}+\delta_{t}^{3},\] with obvious notation. We now bound the single terms individually. In what follows the notation \(o(1)\) almost surely, as \(t\to\infty\), will be abbreviated to \(o(1)\). \[\delta_{t}^{1} \leq\sup_{\theta\in\Theta}\left\|\frac{1}{2\tilde{\nu}_{t}^{*}( \gamma)}\left(\frac{\partial\tilde{\nu}_{t}^{*}(\gamma)}{\partial\theta}-\frac {\partial\nu_{t}^{*}(\gamma)}{\partial\theta}\right)+\frac{[\nu_{t}^{*}( \gamma)-\tilde{\nu}_{t}^{*}(\gamma)]}{2\tilde{\nu}_{t}^{*}(\gamma)\nu_{t}^{*}( \gamma)}\frac{\partial\nu_{t}^{*}(\gamma)}{\partial\theta}\right\|\] \[\leq\frac{d_{t}}{2\underline{\nu}^{*}}+\frac{b_{t}}{2\underline{ \nu}^{*2}}\sup_{\theta\in\Theta}\left\|\frac{\partial\nu_{t}^{*}(\gamma)}{ \partial\theta}\right\|\,.\] Similarly, \[\delta_{t}^{2} \leq\sup_{\theta\in\Theta}\left\|\frac{Y_{t}-\tilde{\lambda}_{t}( \psi)}{\tilde{\nu}_{t}^{*}(\gamma)}\left(\frac{\partial\tilde{\lambda}_{t}( \psi)}{\partial\theta}-\frac{\partial\lambda_{t}(\psi)}{\partial\theta}\right)\right\|\] \[\qquad+\sup_{\theta\in\Theta}\left\|\frac{\partial\lambda_{t}( \psi)}{\partial\theta}\left(\frac{\lambda_{t}(\psi)-\tilde{\lambda}_{t}(\psi) }{\tilde{\nu}_{t}^{*}(\gamma)}+\frac{Y_{t}-\lambda_{t}(\psi)}{\tilde{\nu}_{t }^{*}(\gamma)}-\frac{Y_{t}-\lambda_{t}(\psi)}{\nu_{t}^{*}(\gamma)}\right)\right\|\] \[\leq\frac{c_{t}}{\underline{\nu}^{*}}\Big{(}\left|Y_{t}\right|+ \sup_{\psi\in\Psi}\left|\lambda_{t}(\psi)\right|+a_{t}\Big{)}\] \[\qquad+\sup_{\theta\in\Theta}\left\|\frac{\partial\lambda_{t}( \psi)}{\partial\theta}\right\|\left(\frac{a_{t}}{\underline{\nu}^{*}}+\sup_{ \theta\in\Theta}\left|\frac{[\nu_{t}^{*}(\gamma)-\tilde{\nu}_{t}^{*}(\gamma) ]\left[Y_{t}-\lambda_{t}(\psi)\right]}{\tilde{\nu}_{t}^{*}(\gamma)\nu_{t}^{*}( \gamma)}\right|\Big{)}\] \[\leq\frac{c_{t}}{\underline{\nu}^{*}}\Big{(}\left|Y_{t}\right|+ \sup_{\psi\in\Psi}\left|\lambda_{t}(\psi)\right|+o(1)\Big{)}+\sup_{\theta\in \Theta}\left\|\frac{\partial\lambda_{t}(\psi)}{\partial\theta}\right\|\Big{(} \frac{a_{t}}{\underline{\nu}^{*}}+\frac{b_{t}}{\underline{\nu}^{*2}}\Big{(} \left|Y_{t}\right|+\sup_{\psi\in\Psi}\left|\lambda_{t}(\psi)\right|\Big{)} \Big{)}.\] Using similar arguments for 
\(\delta_{t}^{3}\) and assumption A5 leads to \[\delta_{t}^{3} \leq\frac{d_{t}}{{\underline{\nu}}^{*}{}^{2}}\sup_{\theta\in\Theta} \left(Y_{t}^{2}+\lambda_{t}^{2}(\psi)+a_{t}^{2}+2a_{t}\lambda_{t}(\psi)\right)\] \[\qquad+\sup_{\theta\in\Theta}\left\|\frac{\partial\nu_{t}^{*}( \gamma)}{\partial\theta}\right\|\sup_{\theta\in\Theta}\left|\frac{[\tilde{ \lambda}_{t}(\psi)-\lambda_{t}(\psi)][\tilde{\lambda}_{t}(\psi)+\lambda_{t}( \psi)-2Y_{t}]}{2\tilde{\nu}_{t}^{*}{}^{2}(\gamma)}\right|\] \[\qquad+\sup_{\theta\in\Theta}\left\|\frac{\partial\nu_{t}^{*}( \gamma)}{\partial\theta}\right\|\sup_{\theta\in\Theta}\left|\frac{[\nu_{t}^{*} (\gamma)-\tilde{\nu}_{t}^{*}(\gamma)]\left[\nu_{t}^{*}(\gamma)+\tilde{\nu}_{t }^{*}(\gamma)\right]\left[Y_{t}-\lambda_{t}(\psi)\right]^{2}}{2\nu_{t}^{*}{}^{ 2}(\gamma)\tilde{\nu}_{t}^{*}{}^{2}(\gamma)}\right|\] \[\leq\frac{d_{t}}{{\underline{\nu}}^{*}{}^{2}}\Big{(}Y_{t}^{2}+ \sup_{\psi\in\Psi}\lambda_{t}^{2}(\psi)+o(1)\Big{)}+\sup_{\theta\in\Theta} \left\|\frac{\partial\nu_{t}^{*}(\gamma)}{\partial\theta}\right\|\frac{a_{t}}{ {\underline{\nu}}^{*}{}^{2}}\left(|Y_{t}|+\sup_{\psi\in\Psi}|\lambda_{t}(\psi) |+o(1)\right)\] \[\qquad+\sup_{\theta\in\Theta}\left\|\frac{\partial\nu_{t}^{*}( \gamma)}{\partial\theta}\right\|\frac{2b_{t}}{{\underline{\nu}}^{*}{}^{3}} \Big{(}Y_{t}^{2}+\sup_{\psi\in\Psi}\lambda_{t}^{2}(\psi)\Big{)}\,.\] By assumption A7, \(\delta_{t}^{j}=\mathcal{O}(t^{-\delta})\), for \(\delta>1/2\) and \(j=1,2,3\). Therefore \(\sqrt{T}\sup_{\theta\in\Theta}\|S_{T}(\theta)-\tilde{S}_{T}(\theta)\|\leq T^{-1 /2}\sum_{t=1}^{T}\mathcal{O}(t^{-\delta})\) converges to \(0\) almost surely as \(T\to\infty\). ## Funding Mirko Armillotta acknowledges financial support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No.101108797.
2309.10762
A Generalization of a Theorem of Mandel
A theorem of Mandel allows one to determine the covector set of an oriented matroid from its set of topes by using the composition condition. We provide a generalization of that result, stating that the covector set of a conditional oriented matroid can also be determined from its set of topes, but by using the face symmetry condition. This permits representing geometrical configurations in terms of conditional oriented matroids, which is more suitable for computer calculations. We treat apartments of hyperplane arrangements as an example.
Hery Randriamaro
2023-09-19T17:01:43Z
http://arxiv.org/abs/2309.10762v1
# A Generalization of a Theorem of Mandel **Hery Randriamaro** Institut fur Mathematik, Universitat Kassel, Heinrich-Plett-Strasse 40, 34132 Kassel, Germany [email protected] The author was supported by the Alexander von Humboldt Foundation Abstract A theorem of Mandel allows one to determine the covector set of an oriented matroid from its set of topes by using the composition condition. We provide a generalization of that result, stating that the covector set of a conditional oriented matroid can also be determined from its set of topes, but by using the face symmetry condition. This permits representing geometrical configurations in terms of conditional oriented matroids, which is more suitable for computer calculations. We treat apartments of hyperplane arrangements as an example. Keywords: Conditional Oriented Matroid, Tope, Hyperplane Arrangement MSC Number: 52C40, 68R05 Recently, Bandelt et al. (2018) introduced the notion of conditional oriented matroids, or complexes of oriented matroids, which are common generalizations of oriented matroids and lopsided sets. As observed by Richter-Gebert & Ziegler (2017), oriented matroids are abstractions for several mathematical objects including directed graphs and central hyperplane arrangements, while Bandelt et al. (2006) pointed out that lopsided sets can be regarded as common generalizations of antimatroids and median graphs. We provide a generalization of a theorem of Mandel in Section 1 by proving that a conditional oriented matroid can be completely determined from knowledge of its topes by means of the face symmetry condition. Knauer & Marc (2020) independently, and without the author's prior knowledge, gave another version of that generalization in their Theorem 4.9 by using tope graphs. In Section 2, we propose an algorithm to convert apartments of hyperplane arrangements to conditional oriented matroids. This makes it possible to perform computations, such as computing the \(f\)-polynomial, on these geometrical configurations. ## 1 Conditional Oriented Matroids This section describes conditional oriented matroids, recalls oriented matroids and the theorem of Mandel, then establishes our generalization of that theorem. A _sign system_ is a pair \((E,\,\mathcal{L})\) containing a finite set \(E\) and a subset \(\mathcal{L}\) of \(\{-1,\,0,\,1\}^{E}\). For \(X,Y\in\mathcal{L}\), the _composition_ of \(X\) and \(Y\) is the element \(X\circ Y\) of \(\{-1,\,0,\,1\}^{E}\) defined, for every \(e\in E\), by \[(X\circ Y)_{e}:=\begin{cases}X_{e}&\text{if $X_{e}\neq 0$},\\ Y_{e}&\text{otherwise},\end{cases}\] and the _separation set_ of \(X\) and \(Y\) is \(\mathrm{S}(X,\,Y):=\big{\{}e\in E\ \big{|}\ X_{e}=-Y_{e}\neq 0\big{\}}\). A _conditional oriented matroid_ is a sign system \((E,\,\mathcal{L})\) such that \(\mathcal{L}\) satisfies the following conditions: * (FS) if \(X,Y\in\mathcal{L}\), then \(X\circ-Y\in\mathcal{L}\), * (SE) for each pair \(X,Y\in\mathcal{L}\), and every \(e\in\mathrm{S}(X,\,Y)\), there exists \(Z\in\mathcal{L}\) such that \[Z_{e}=0\quad\text{and}\quad\forall f\in E\setminus\mathrm{S}(X,\,Y),\ Z_{f}=(X\circ Y)_{f}=(Y\circ X)_{f}.\] Here (FS) stands for the face symmetry condition, and (SE) for the strong elimination condition. The elements of \(\mathcal{L}\) are called _covectors_. A partial order \(\preceq\) is defined on \(\mathcal{L}\) by \[\forall X,Y\in\mathcal{L}:\ X\preceq Y\iff\forall e\in E,\,X_{e}\in\{0,\,Y_{e}\}.\] Write \(X\prec Y\) if \(X\preceq Y\) and \(X\neq Y\). 
One says that \(Y\) _covers_ \(X\), denoted \(X\lessdot Y\), if \(X\prec Y\) and no \(Z\in\mathcal{L}\) satisfies \(X\prec Z\prec Y\). One calls \(Y\) a _tope_ if no \(Z\in\mathcal{L}\) covers \(Y\). An _oriented matroid_ is a sign system \((E,\,\mathcal{L})\) such that \(\mathcal{L}\) satisfies the following conditions: * (C) if \(X,Y\in\mathcal{L}\), then \(X\circ Y\in\mathcal{L}\), * (Sym) if \(X\in\mathcal{L}\), then \(-X\in\mathcal{L}\), * (SE) for each pair \(X,Y\in\mathcal{L}\), and every \(e\in\mathrm{S}(X,\,Y)\), there exists \(Z\in\mathcal{L}\) such that \[Z_{e}=0\quad\text{and}\quad\forall f\in E\setminus\mathrm{S}(X,\,Y),\ Z_{f}=(X\circ Y)_{f}=(Y\circ X)_{f}.\] Here (C) stands for the composition condition, and (Sym) for the symmetry condition. For the sake of understanding, we provide a proof of the following known property. **Proposition 1.1**.: _An oriented matroid is a conditional oriented matroid \((E,\,\mathcal{L})\) that satisfies the zero vector condition_ * (Z) _the zero element_ \((0,\,\dots,\,0)\) _belongs to_ \(\mathcal{L}\)_._ Proof.: Every oriented matroid is a conditional oriented matroid since both satisfy (SE), and one gets (FS) by combining (C) with (Sym). Moreover, it is obvious that every oriented matroid satisfies (Z). Now, suppose that \((E,\,\mathcal{L})\) is a conditional oriented matroid satisfying (Z): * For every \(X\in\mathcal{L}\), \((0,\,\dots,\,0)\circ-X=-X\in\mathcal{L}\), so we get (Sym). * If \(X,Y\in\mathcal{L}\), then \(-Y\in\mathcal{L}\), hence \(X\circ-(-Y)=X\circ Y\in\mathcal{L}\), so we get (C). We recall the Theorem of Mandel as stated in Theorem 4.2.13 of Bjorner et al. (1999). One can look at Theorem 1.1 of Cordovil (1985) for a version using non-Radon partitions. **Theorem 1.2**.: _Let \((E,\,\mathcal{L})\) be an oriented matroid. Its set of topes \(\mathcal{T}\) determines \(\mathcal{L}\) via_ \[\mathcal{L}=\big{\{}X\in\{-1,\,0,\,1\}^{E}\ \big{|}\ \forall T\in\mathcal{T},\,X\circ T\in\mathcal{T}\big{\}}.\] Coming back to conditional oriented matroids, the _rank_ of a covector \(X\) is \(0\) if it covers no elements in \(\mathcal{L}\), otherwise it is \[\operatorname{rk}X:=\max\{l\in\mathbb{N}\ |\ \exists X^{1},X^{2},\ldots,X^{l}\in\mathcal{L},\,X^{1}\prec X^{2}\prec\ldots\prec X^{l}\prec X\}.\] The rank of \(\mathcal{L}\) is \[\operatorname{rk}\mathcal{L}:=\max\{\operatorname{rk}X\ |\ X\in\mathcal{L}\}.\] The _support_ of \(X\) is \(\underline{X}:=\{e\in E\ |\ X_{e}\neq 0\}\). And for \(A\subseteq E\), the _restriction_ of \(X\) to \(E\setminus A\) is the element \(X\setminus A\in\{-1,\,0,\,1\}^{E\setminus A}\) such that \((X\setminus A)_{e}=X_{e}\) for all \(e\in E\setminus A\). **Lemma 1.3**.: _(Bandelt et al. 2018, Lem. 1) Let \((E,\,\mathcal{L})\) be a conditional oriented matroid, and \(A\subseteq E\)._ * The _deletion_ \((E\setminus A,\,\mathcal{L}\setminus A)\) of \(A\), with \(\mathcal{L}\setminus A=\{X\setminus A\ |\ X\in\mathcal{L}\}\), * and the _contraction_ \((E\setminus A,\,\mathcal{L}/A)\) of \(A\), with \(\mathcal{L}/A=\{X\setminus A\ |\ X\in\mathcal{L},\,\underline{X}\cap A=\varnothing\}\), _are conditional oriented matroids._ **Lemma 1.4**.: _Let \((E,\,\mathcal{L})\) be a conditional oriented matroid, and take two topes \(T^{1},T^{2}\in\mathcal{L}\). Then, \(\underline{T^{1}}=\underline{T^{2}}\)._ Proof.: Suppose that \(\underline{T^{1}}\neq\underline{T^{2}}\). Then, \(T^{1}\circ-T^{2}\in\mathcal{L}\) and \(T^{1}\prec T^{1}\circ-T^{2}\). This implies that \(T^{1}\) is not a tope, which is absurd. 
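The definitions above can also be checked mechanically. The following minimal Python sketch (ours, purely illustrative) implements the composition, the separation set, and a direct verification of the (FS) and (SE) conditions for a finite sign system given as a collection of sign tuples:

```python
def compose(X, Y):
    """Composition of sign vectors: (X o Y)_e = X_e if X_e != 0, else Y_e."""
    return tuple(x if x != 0 else y for x, y in zip(X, Y))

def separation(X, Y):
    """Separation set S(X, Y) = {e : X_e = -Y_e != 0}."""
    return {e for e, (x, y) in enumerate(zip(X, Y)) if x != 0 and x == -y}

def is_com(L):
    """Check the face symmetry (FS) and strong elimination (SE) conditions."""
    L = set(map(tuple, L))
    # (FS): X o (-Y) belongs to L for every pair X, Y in L.
    fs = all(compose(X, tuple(-y for y in Y)) in L for X in L for Y in L)

    # (SE): for every e in S(X, Y) there is Z in L with Z_e = 0 that agrees
    # with X o Y on every coordinate outside S(X, Y).
    def se(X, Y):
        S, XY, n = separation(X, Y), compose(X, Y), len(X)
        return all(any(Z[e] == 0 and all(Z[f] == XY[f] for f in range(n) if f not in S)
                       for Z in L)
                   for e in S)

    return fs and all(se(X, Y) for X in L for Y in L)

# The covector set {-1, 0, 1} of a single hyperplane satisfies both conditions;
# removing the zero vector still satisfies (FS) but violates (SE).
print(is_com([(-1,), (0,), (1,)]), is_com([(-1,), (1,)]))   # True False
```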
We can now state our generalization.

**Theorem 1.5**.: _Let \((E,\,\mathcal{L})\) be a conditional oriented matroid. Its set of topes \(\mathcal{T}\) determines \(\mathcal{L}\) via_ \[\mathcal{L}=\big{\{}X\in\{-1,\,0,\,1\}^{E}\ \big{|}\ \forall T\in\mathcal{T},\,X\circ-T\in\mathcal{T}\big{\}}.\]

Proof.: It is clear that \(\mathcal{L}\subseteq\big{\{}X\in\{-1,\,0,\,1\}^{E}\ \big{|}\ \forall T\in\mathcal{T},\,X\circ-T\in\mathcal{T}\big{\}}\) since, for every \(X\in\mathcal{L}\) and all \(T\in\mathcal{T}\), the covector \(X\circ-T\) belongs to \(\mathcal{L}\) by (FS) and is maximal in \((\mathcal{L},\,\preceq)\), hence a tope. For the reverse inclusion, we argue by induction on \(\operatorname{rk}\mathcal{L}\) and \(\#E\).

If \(\operatorname{rk}\mathcal{L}=0\), then \(\mathcal{L}\) is a one-element set \(\{T\}\subseteq\{-1,\,0,\,1\}^{E}\). Therefore, for an element \(X\in\{-1,\,0,\,1\}^{E}\), the fact \(X\circ-T=T\) implies \(X=T\), hence \(X\in\mathcal{L}\).

If \(\operatorname{rk}\mathcal{L}=1\) and \(\#E=1\), then \(\mathcal{L}=\{-1,\,0,\,1\}\) and \(\mathcal{T}=\{-1,\,1\}\). So, we clearly have \(X\in\mathcal{L}\) for all \(X\in\{-1,\,0,\,1\}\).

Now, assume that \(\operatorname{rk}\mathcal{L}=1\) and \(\#E>1\). Take \(X\in\{-1,\,0,\,1\}^{E}\) such that \(X\circ-T\in\mathcal{T}\) for each tope \(T\in\mathcal{T}\). Denote by \(F\) the subset of \(E\) such that \(\underline{T}=F\) for every \(T\in\mathcal{T}\). The case \(\underline{X}=F\) is easily solved, since \(X=X\circ-T\in\mathcal{T}\subseteq\mathcal{L}\). The case \(\underline{X}\varsubsetneq F\) remains to be treated. Pick an element \(e\in X^{0}\cap F\), and consider a tope \(Y\setminus\{e\}\) of the deletion \((E\setminus\{e\},\,\mathcal{L}\setminus\{e\})\). We have \(Y^{0}\cap F=\{e\}\), and \(Y\) is covered in \(\mathcal{L}\) by two topes \(T^{1},T^{2}\in\mathcal{T}\) such that \(\operatorname{S}(T^{1},\,T^{2})=\{e\}\). There exists \(Z\in\mathcal{L}\) such that \[Z_{e}=0\quad\text{and}\quad\forall f\in E\setminus\{e\},\ Z_{f}=\big{(}(X\circ-T^{1})\circ(X\circ-T^{2})\big{)}_{f}=(X\circ-Y)_{f}.\] The only possibility is \(Z=X\circ-Y\), which means that \(X\circ-Y\in\mathcal{L}\). Hence, for all topes \(Y\setminus\{e\}\) in the deletion \((E\setminus\{e\},\,\mathcal{L}\setminus\{e\})\), we have \((X\setminus\{e\})\circ-(Y\setminus\{e\})\in\mathcal{L}\setminus\{e\}\). By induction, we get \(X\setminus\{e\}\in\mathcal{L}\setminus\{e\}\), and consequently \(X\in\mathcal{L}\).

Finally, assume that \(\operatorname{rk}\mathcal{L}>1\). Take \(X\in\{-1,\,0,\,1\}^{E}\) such that \(X\circ-T\in\mathcal{T}\) for each tope \(T\in\mathcal{T}\). The case \(\underline{X}=F\) is easily solved like before. The case \(\underline{X}\varsubsetneq F\) remains to be treated. Pick an element \(e\in X^{0}\cap F\), and consider a tope \(Y\setminus\{e\}\) of the contraction \((E\setminus\{e\},\,\mathcal{L}/\{e\})\). We have \(Y^{0}\cap F=\{e\}\), and \(Y\) is covered in \(\mathcal{L}\) by two topes \(T^{1},T^{2}\in\mathcal{T}\) such that \(\operatorname{S}(T^{1},\,T^{2})=\{e\}\). There exists \(Z\in\mathcal{L}\) such that \[Z_{e}=0\quad\text{and}\quad\forall f\in E\setminus\{e\},\ Z_{f}=\big{(}(X\circ-T^{1})\circ(X\circ-T^{2})\big{)}_{f}=(X\circ-Y)_{f}.\] The only possibility is \(Z=X\circ-Y\), which means that \(X\circ-Y\in\mathcal{L}\). Hence, for all topes \(Y\setminus\{e\}\) in the contraction \((E\setminus\{e\},\,\mathcal{L}/\{e\})\), we have \((X\setminus\{e\})\circ-(Y\setminus\{e\})\in\mathcal{L}/\{e\}\). 
Since \(\operatorname{rk}\mathcal{L}/\{e\}=\operatorname{rk}\mathcal{L}-1\), then \(X\setminus\{e\}\in\mathcal{L}/\{e\}\) by induction, and consequently \(X\in\mathcal{L}\).

## 2 Applications on Hyperplane Arrangements

This section describes the structure of apartments of hyperplane arrangements in terms of conditional oriented matroids. Then, it proposes an algorithm to convert the former to the latter. We give the computation of the \(f\)-polynomial as an example of an extension of this algorithm.

Let \(a_{1},\dots,a_{n},b\) be \(n+1\) real coefficients such that \((a_{1},\,\dots,\,a_{n})\neq(0,\,\dots,\,0)\). A _hyperplane_ of \(\mathbb{R}^{n}\) is an affine subspace \(H:=\big{\{}(x_{1},\,\dots,\,x_{n})\in\mathbb{R}^{n}\bigm{|}a_{1}x_{1}+\dots+a_{n}x_{n}=b\big{\}}\) denoted by \(\{a_{1}x_{1}+\dots+a_{n}x_{n}=b\}\). A _hyperplane arrangement_ \(\mathscr{A}\) is a finite set of hyperplanes. Denote by \(H^{-1}\) and \(H^{1}\) the two connected components \(\{a_{1}x_{1}+\dots+a_{n}x_{n}<b\}\) and \(\{a_{1}x_{1}+\dots+a_{n}x_{n}>b\}\) of \(\mathbb{R}^{n}\), respectively. Moreover, set \(H^{0}=H\). The _sign map_ of \(H\) is the function \[\sigma_{H}:\mathbb{R}^{n}\to\{-1,\,0,\,1\},\quad v\mapsto\begin{cases}-1&\text{if $v\in H^{-1}$},\\ 0&\text{if $v\in H^{0}$},\\ 1&\text{if $v\in H^{1}$}.\end{cases}\] The sign map of \(\mathscr{A}\) is the function \(\sigma_{\mathscr{A}}:\mathbb{R}^{n}\to\{-1,\,0,\,1\}^{\mathscr{A}},\ v\mapsto\big{(}\sigma_{H}(v)\big{)}_{H\in\mathscr{A}}\). And the _sign set_ of \(\mathscr{A}\) is the set \(\sigma_{\mathscr{A}}(\mathbb{R}^{n}):=\big{\{}\sigma_{\mathscr{A}}(v)\bigm{|}v\in\mathbb{R}^{n}\big{\}}\). A _face_ of \(\mathscr{A}\) is a subset \(F\) of \(\mathbb{R}^{n}\) such that \[\exists x\in\sigma_{\mathscr{A}}(\mathbb{R}^{n}),\ F=\big{\{}v\in\mathbb{R}^{n}\ \big{|}\ \sigma_{\mathscr{A}}(v)=x\big{\}}.\] A _chamber_ of \(\mathscr{A}\) is a face \(F\) such that \(\sigma_{\mathscr{A}}(F)\in\{-1,\,1\}^{\mathscr{A}}\). Denote by \(F(\mathscr{A})\) and \(C(\mathscr{A})\) the sets of faces and chambers of \(\mathscr{A}\), respectively. An _apartment_ of \(\mathscr{A}\) is a chamber of a hyperplane arrangement contained in \(\mathscr{A}\). Denote by \(K(\mathscr{A})\) the apartment set of \(\mathscr{A}\). The sets of faces and chambers in an apartment \(K\in K(\mathscr{A})\) are, respectively, \[F(\mathscr{A},\,K):=\big{\{}F\in F(\mathscr{A})\ |\ F\subseteq K\big{\}}\quad\text{and}\quad C(\mathscr{A},\,K):=C(\mathscr{A})\cap F(\mathscr{A},\,K).\] Let \(K\in K(\mathscr{A})\), and \(\mathscr{B}=\{H\in\mathscr{A}\ |\ H\cap K=\varnothing\}\). The sign system \[\Big{(}\mathscr{A}\setminus\mathscr{B},\,\sigma_{\mathscr{A}}\big{(}F(\mathscr{A},\,K)\big{)}\setminus\mathscr{B}\Big{)}\] is a conditional oriented matroid. Bandelt et al. (2018) called it a realizable COM, and presented it as a motivating example for conditional oriented matroids.

We now present algorithms to do computations on apartments of hyperplane arrangements. The use of a mathematics software system containing the following functions is assumed:

* **length** gives the length of a tuple,
* **RandomElement** returns a random element from a set,
* **poset** transforms a set, on which a partial order can be defined, into a poset,
* and **rank** computes the rank of a poset or that of its elements.

**Algorithm 1**.: (Generating Conditional Oriented Matroid from Topes):

* _Input:_ A tope set \(\mathcal{T}\).
* _Output:_ A covector set \(\mathcal{L}\). 
* _Remark:_ It is an algorithmic version of Theorem 1.5.

```
function GeneratingCOM(T)
    L ← {}
    l ← length(RandomElement(T))
    for X in {-1, 0, 1}^l
        a ← true
        for Y in T
            a ← a and (X ∘ -Y in T)
        if a = true
            L ← L ⊔ {X}
    return L
```

We generate the tope set by determining \(\sigma_{\mathscr{A}}(v)\) for a random point of each chamber. Afterwards, we apply the previous algorithm to obtain the desired conditional oriented matroid.

**Algorithm 2**.: (Transforming Apartment to Conditional Oriented Matroid):

* _Input:_ An affine function set \(\mathscr{A}\) and a point set \(P\).
* _Output:_ A covector set \(\mathcal{L}\).
* _Remark:_ Each function in \(\mathscr{A}\) corresponds to a hyperplane of the arrangement, and each point in \(P\) is included in a chamber of the arrangement.

```
function ApartmentToCOM(A, P)
    function ApartmentToTope(A, P)
        function covector(A, p)
            function sign(h, p)
                if h(p) < 0
                    return -1
                else
                    return 1
            return tuple(sign(h, p) for h in A)
        return set(covector(A, p) for p in P)
    return GeneratingCOM(ApartmentToTope(A, P))
```

Consider an apartment \(K\in K(\mathscr{A})\) in \(\mathbb{R}^{n}\). Let \(f_{i}(K)\) be the number of \(i\)-dimensional faces in \(F(\mathscr{A},\,K)\), and \(x\) a variable. The _\(f\)-polynomial_ of \(K\) is \[f_{K}(x):=\sum_{i=0}^{n}f_{i}(K)\,x^{n-i}.\]

**Algorithm 3**.: (\(f\)-Polynomial of Apartment):

* _Input:_ An affine function set \(\mathscr{A}\) and a point set \(P\).
* _Output:_ An \(f\)-polynomial \(f_{K}(x)\).
* _Remark:_ An apartment is still represented by a pair (\(\mathscr{A},\,P\)).

```
function fPolynomial(A, P)
    x ← variable
    f ← 0
    COM ← poset(ApartmentToCOM(A, P))
    for X in COM
        f ← f + x^(rank(COM) - rank(X))
    return f
```

Other computer calculations of functions associated to apartments of hyperplane arrangements, like their Varchenko determinant (Randriamaro 2020, Th. 1.3), can also be implemented by means of their conversion to conditional oriented matroids.

_Example._ Consider the apartment in Figure 1 with arrangement composed of the hyperplanes \(\{x_{2}=0\}\), \(\{x_{1}-x_{2}=0\}\), \(\{x_{1}+x_{2}=1\}\), \(\{x_{2}=3\}\), and \(\{x_{2}=-2\}\). To be able to compute the corresponding conditional oriented matroid, and its \(f\)-polynomial, we take the nine points \((0,\,4)\), \((0,\,1.5)\), \((0,\,0.5)\), \((0.5,\,0.2)\), \((1,\,0.2)\), \((-1,\,-0.2)\), \((0,\,-0.5)\), \((1.5,\,-0.2)\), and \((0,\,-3)\) into account. A computation with the mathematics software system SageMath gives us the following result.
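For readers who prefer a plain Python prototype to SageMath, Algorithms 1-3 can be reimplemented as below. This is our sketch under the same assumptions (hyperplanes are passed as affine functions and the apartment is described by one sample point per chamber); it is not the code used for the computation above, and no particular numerical output is asserted.

```python
from itertools import product

def compose(X, Y):
    """Composition of sign vectors: keep X_e where nonzero, else take Y_e."""
    return tuple(x if x != 0 else y for x, y in zip(X, Y))

def generating_com(topes):
    """Algorithm 1: recover the covector set from the tope set via Theorem 1.5."""
    l = len(next(iter(topes)))
    return {X for X in product((-1, 0, 1), repeat=l)
            if all(compose(X, tuple(-t for t in T)) in topes for T in topes)}

def apartment_to_com(A, P):
    """Algorithm 2: sign vectors of the sample points give the topes."""
    sign = lambda v: -1 if v < 0 else 1
    topes = {tuple(sign(h(p)) for h in A) for p in P}
    return generating_com(topes)

def below(X, Y):
    """Partial order: X <= Y iff X_e lies in {0, Y_e} for every e."""
    return all(x in (0, y) for x, y in zip(X, Y))

def f_polynomial(A, P):
    """Algorithm 3: coefficients of the f-polynomial, as {power of x: count}."""
    com = sorted(apartment_to_com(A, P), key=lambda X: sum(x != 0 for x in X))
    rank = {}
    for X in com:  # longest-chain rank, filled in order of increasing support
        rank[X] = max((rank[Y] + 1 for Y in rank if Y != X and below(Y, X)), default=0)
    top = max(rank.values())
    poly = {}
    for X in com:
        poly[top - rank[X]] = poly.get(top - rank[X], 0) + 1
    return poly

# The apartment of the example above: hyperplanes as affine functions h(p) = a.p - b,
# and the nine sample points, one per chamber.
A = [lambda p: p[1], lambda p: p[0] - p[1], lambda p: p[0] + p[1] - 1,
     lambda p: p[1] - 3, lambda p: p[1] + 2]
P = [(0, 4), (0, 1.5), (0, 0.5), (0.5, 0.2), (1, 0.2),
     (-1, -0.2), (0, -0.5), (1.5, -0.2), (0, -3)]
print(f_polynomial(A, P))
```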
2307.16867
Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy
Current state-of-the-art results in computer vision depend in part on fine-tuning large pre-trained vision models. However, with the exponential growth of model sizes, the conventional full fine-tuning, which needs to store an individual network copy for each task, leads to increasingly huge storage and transmission overhead. Adapter-based Parameter-Efficient Tuning (PET) methods address this challenge by tuning lightweight adapters inserted into the frozen pre-trained models. In this paper, we investigate how to make adapters even more efficient, reaching a new minimum size required to store a task-specific fine-tuned network. Inspired by the observation that the parameters of adapters converge at flat local minima, we find that adapters are resistant to noise in parameter space, which means they are also resistant to low numerical precision. To train low-precision adapters, we propose a computationally efficient quantization method which minimizes the quantization error. Through extensive experiments, we find that low-precision adapters exhibit minimal performance degradation, and even 1-bit precision is sufficient for adapters. The experimental results demonstrate that 1-bit adapters outperform all other PET methods on both the VTAB-1K benchmark and few-shot FGVC tasks, while requiring the smallest storage size. Our findings show, for the first time, the significant potential of quantization techniques in PET, providing a general solution to enhance the parameter efficiency of adapter-based PET methods. Code: https://github.com/JieShibo/PETL-ViT
Shibo Jie, Haoqing Wang, Zhi-Hong Deng
2023-07-31T17:22:17Z
http://arxiv.org/abs/2307.16867v1
# Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy

###### Abstract

Current state-of-the-art results in computer vision depend in part on fine-tuning large pre-trained vision models. However, with the exponential growth of model sizes, the conventional full fine-tuning, which needs to store an individual network copy for each task, leads to increasingly huge storage and transmission overhead. Adapter-based Parameter-Efficient Tuning (PET) methods address this challenge by tuning lightweight adapters inserted into the frozen pre-trained models. In this paper, we investigate how to make adapters even more efficient, reaching a new minimum size required to store a task-specific fine-tuned network. Inspired by the observation that the parameters of adapters converge at flat local minima, we find that adapters are resistant to noise in parameter space, which means they are also resistant to low numerical precision. To train low-precision adapters, we propose a computationally efficient quantization method which minimizes the quantization error. Through extensive experiments, we find that low-precision adapters exhibit minimal performance degradation, and even 1-bit precision is sufficient for adapters. The experimental results demonstrate that 1-bit adapters outperform all other PET methods on both the VTAB-1K benchmark and few-shot FGVC tasks, while requiring the smallest storage size. Our findings show, for the first time, the significant potential of quantization techniques in PET, providing a general solution to enhance the parameter efficiency of adapter-based PET methods. Code: [https://github.com/JieShibo/PETL-ViT](https://github.com/JieShibo/PETL-ViT)

## 1 Introduction

Large pre-trained vision models have demonstrated exceptional performance on various visual tasks via fine-tuning on task-specific data. In the traditional fine-tuning paradigm, the entire model is updated for each downstream task, resulting in the need to store a fine-tuned model separately for each task. However, with the remarkable scalability of modern vision models, the size of pre-trained vision models is increasing exponentially to achieve superior performance. As a result, the storage cost of the full fine-tuning paradigm becomes prohibitive in multi-task scenarios.

Figure 1: **Average accuracy** _vs._ **size of trainable parameters in backbones (log scale) on VTAB-1K benchmark.** Our low-precision adapter-based methods outperform other baselines.

_Parameter-Efficient Tuning_ (PET) has recently emerged as a promising approach for fine-tuning a limited number of parameters while attaining performance comparable to full fine-tuning on downstream tasks. Adapter-based methods [6, 25, 26, 31, 59, 63, 25, 51] are among the techniques proposed for PET and have gained considerable attention due to their effectiveness. Adapters are typically small subnetworks with a bottleneck architecture comprising two fully-connected (FC) layers, inserted into pre-trained models. Adapter-based methods freeze the pre-trained weights and update only the adapters, whose parameter efficiency is achieved through their small hidden dimension. Although bottleneck adapters are already lightweight (_e.g_., 0.5 MB/task for ViT-B [12]), the storage costs remain considerable when dealing with a huge number of tasks (_e.g_., a platform that provides customized models for millions of users). To address this issue, recent studies have shown that the parameter efficiency of adapters can be further improved. 
For example, [22, 32, 51] explore the low-rank structure in adapters, reparameterizing the weight of adapters into smaller subspace with Kronecker, TensorTrain, or Tucker factorization. Additionally, [21] leverages network pruning to train sparse adapters. We find that these methods actually have a common motivation - reducing the redundancy (_e.g_., rank redundancy, density redundancy) in adapters. Also motivated by this, we pose a question, _whether there is any other kind of redundancy that can be utilized to better improve the efficiency of adapters._ In this paper, we begin by exploring the loss landscape of adapters and observe that the local minima of adapters are much flatter than that of the fully fine-tuned models. The flatness of local minima indicates that the trained adapters possess greater resilience to noise in parameter space, such that adapters with low-precision parameters should perform equally well as their high-precision counterparts. Therefore, we infer that adapters are redundant in numerical precision. Since previous work on adapters all employs full-precision (FP32) data type, the impact of precision on adapters has not been investigated yet. To reduce the precision redundancy, we propose an approach that involves training and storing adapters in low-bit parameter space. Through empirical analysis, we observe that the parameters of each adapter weight approximately follow a Gaussian distribution. Under this assumption, we quantize the adapter parameters by minimizing the quantization loss. Inspired by previous work of neural network quantization [17], we adopt quantization-aware training and train the low-bit adapters with straight-through estimator (STE). Our experiments, conducted on extensive datasets, reveal several key findings: _1)_ Unlike quantizing the entire model, quantizing only the adapters results in negligible performance degradation, even in the 1-bit setting; _2)_ With a fixed storage budget, 1-bit quantized (_i.e_., binary) adapters achieve superior performance among all precision settings; _3)_ Our 1-bit adapter can outperform all previous PET methods, including low-rank factorization methods, while using the smallest storage size. Our contributions are summarized as follows: * From the investigation on the flat local minima of adapters, we infer the existence of precision redundancy in the parameters of adapters, which can be leveraged to improve their parameter efficiency. * Based on empirical observations of the distribution of adapter parameters, we propose an efficient quantization-aware training method for learning low-bit adapters while minimizing the quantization error. * Extensive experiments and comparisons verify that lowering the bit-width brings significant efficiency improvement to adapters. Our proposed method achieves new state-of-the-art results in terms of both performance and parameter efficiency. ## 2 Related Work **Parameter-Efficient Tuning.** Parameter-Efficient Tuning (PET) aims to adapt pre-trained vision backbone to downstream tasks by tuning only a small number of parameters. Most work about PET focuses on tuning transformer-based networks, _e.g_., Vision Transformers (ViTs) [12]. Prompt-based methods [30, 49, 65, 78, 79, 74] concatenate trainable tokens to the sequential inputs of transformers as prompts, adapting the models by tuning the prompts. 
However, since the computational cost of self-attention is proportional to the square of the length of inputs, prompt-based methods are not as computation-efficient as the original network [30, 6]. Adapter-based methods [31, 25, 6, 50, 51, 59, 63, 50] insert small adapters into the pre-trained model, adjusting the intermediate representations of the network to fit the downstream data. Some of them [26, 32] can be absorbed into the pre-trained weights during inference, which ensures the computational cost is not increased. Besides, there are also methods that tune bias parameters [73], modify the intermediate features via affine transformation [42], fit the change in the network outputs by a small side-network [76], or combine multiple methods automatically [77, 5]. Among them, adapter-based methods have attracted much attention for their competitive performance, generality to different backbones, and scalability. **Efficient Designs of Adapters.** As illustrated in Figure 2 (left), adapters are commonly subnetworks composed of two FC layers with nonlinear activation in between. Adapter-P [59] places the adapters after the Feed-Forward Network (FFN) blocks, and AdapTFormer[6] uses adapters parallel to the FFN blocks of ViT. LoRA [26] uses two low-rank matrices to fit the change in the query and value transformation of Multi-Head Self-Attention (MHSA). The formulation of LoRA is equivalent to two FC layers without bias parameters and activation, and can be regarded as special adapters in parallel with the query and value weights. Besides, some work focuses on more compact designs for adapters. Compacter[51] and KAdaptation[22] regard the weights of adapters as the Kronecker product of two smaller matrices, one of which is shared among adapters. FacT[32] tensorizes the network as a tensor, and reparameterizes its change as several factors according to Tensor-Train or Tucker format that are updated end-to-end. Similar to LoRA, FacT is not proposed as an adapter-based method, but it can also be viewed as reparameterized adapters with partially shared weights. Besides, SparseAdapter[21] prunes the dense weights of adapters before fine-tuning. These designs reduce the rank and density redundancy in adapters, but we focus on a neglected but more effective direction - precision redundancy. **Network Quantization.** Network quantization [17] compresses networks by reducing the bit-width of weight and activation. Current quantization methods include Post-Training Quantization [1, 27, 29, 47, 55, 69, 72], which performs quantization on trained model without re-training; and Quantization-Aware Training [3, 34, 41, 13], which introduce quantization during the training process by approximating the gradient of the non-differentiable quantization operator. The former paradigm does not require access to the entire training data during quantization and has shown almost lossless performance using FP16 and INT8 data type, while the latter yields quantized models with better performance and can work in extremely low-bit settings, _e.g_., binary quantization [43, 60, 46]. ## 3 Preliminaries In this paper, we mainly focus on ViTs as pre-trained backbone following previous work [30, 31, 77, 6]. We start with a concise formalization of the commonly used adapters. AdaptFormer[6] uses bottleneck FFN composed of two FC layers with in-between ReLU activation as adapters. The weights of an adapter are \(\mathbf{W}_{down}\in\mathbb{R}^{d\times h}\) and \(\mathbf{W}_{up}\in\mathbb{R}^{h\times d}\), where \(h<<d\). 
Adapters are inserted into networks as shortcuts bypassing the FFN blocks, _i.e_., given an input \(\mathbf{X}\in\mathbb{R}^{N\times d}\), the computation is formulated as \[\mathbf{X}^{\prime}=\underbrace{\mathbf{X}+\textit{FFN}(\mathbf{X})}_{\text{Frozen}}+\underbrace{s\cdot\textit{ReLU}(\mathbf{X}\mathbf{W}_{down})\mathbf{W}_{up}}_{\text{Adapter}} \tag{1}\] where \(s\) is a hyper-parameter and \(\mathbf{X}\) is the input of the FFN blocks. LoRA [26] learns a low-rank approximation of the change in \(\mathbf{W}_{q}\) and \(\mathbf{W}_{v}\). Formally, it reparameterizes \(\Delta\mathbf{W}_{q/v}\) into \(\mathbf{A}_{q/v}\mathbf{B}_{q/v}\), where \(\mathbf{A}_{q/v}\in\mathbb{R}^{d\times h},\mathbf{B}_{q/v}\in\mathbb{R}^{h\times d}\) and \(h<<d\). The query and value of MHSA are computed as \[\mathbf{Q}/\mathbf{V}=\underbrace{\mathbf{X}\mathbf{W}_{q/v}}_{\text{Frozen}}+\underbrace{s\cdot\mathbf{X}\mathbf{A}_{q/v}\mathbf{B}_{q/v}}_{\text{Adapter}} \tag{2}\] in which \(s\) is a scaling hyper-parameter and \(\mathbf{X}\) is the input of the MHSA blocks. LoRA is equivalent to using AdapFormer-style adapters with identity activation, whose weights are \(\mathbf{A}_{q},\mathbf{B}_{q},\mathbf{A}_{v},\mathbf{B}_{v}\).

## 4 Methodology

### 4.1 Precision Redundancy in Adapters

It has been extensively studied that the properties of a neural network are highly correlated with the flatness of its loss landscape, _e.g_., the flatter the local minima, the better the generalization [7, 15, 20, 36, 40, 64]. Inspired by these studies, we investigate the loss landscape of adapters in vision models to explore their properties. Following [40], we plot the loss landscape of full fine-tuning, AdapFormer, and LoRA when adapting pre-trained ViT-B [12]. As shown in Figure 2 (right), AdapFormer and LoRA obviously converge at much flatter regions than full fine-tuning. The flat local minima of visual adapters indicate that they generalize better, providing an explanation for their superior performance over full fine-tuning on small and medium-size datasets [30, 31].

Figure 2: **Left:** Illustration of adapters. “Pre-Trained OP” denotes operations in pre-trained models, such as the FFN blocks or QKV transformations in ViTs. **Right:** Loss landscape visualization of full fine-tuning and adapter-based tuning [6, 26] on ViT-B.

Moreover, if the parameters converge at flatter local minima, there are wide low-loss areas around these points. Therefore, when adding noise to the converged parameters, we can expect that the loss will not increase significantly. In other words, the model is resistant to perturbations in parameter space. As shown in Figure 3, we add Gaussian noise \(\mathcal{N}(0,\sigma_{noise}^{2})\) with different \(\sigma_{noise}\) to the fine-tuned weights, and find that adding noise to adapter-tuned models leads to much less accuracy degradation than to fully fine-tuned models. Adapters still retain most of the performance even if the noise has a variance equivalent to that of the weights (_i.e._, \(\sigma_{noise}=\sigma_{weight}\)). Since numerical error can also be viewed as a type of noise, we conjecture that the adapters would not suffer from lower numerical precision.

Figure 3: **Accuracy degradation under different intensities of Gaussian noise.** Adapters converge at flatter local minima and are more resistant to perturbation.

### 4.2 Trading Precision for Efficiency

In view of the existence of precision redundancy, a natural idea is to trade the redundant precision for the much-needed efficiency. 
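For reference, the adapter and LoRA branches formalized in Eqs. (1) and (2), which are the objects quantized in the remainder of this section, can be written compactly as the following sketch (PyTorch is assumed; the module names and default hyper-parameters are ours, not those of the released implementation):

```python
# A minimal PyTorch sketch of the adapter branches in Eqs. (1) and (2).
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """AdaptFormer-style branch: returns s * ReLU(X W_down) W_up, added to FFN output."""
    def __init__(self, d: int, h: int, s: float = 0.1):
        super().__init__()
        self.down = nn.Linear(d, h, bias=False)
        self.up = nn.Linear(h, d, bias=False)
        self.s = s

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.s * self.up(torch.relu(self.down(x)))

class LoRABranch(nn.Module):
    """LoRA branch: returns s * X A B, added to a frozen query/value projection."""
    def __init__(self, d: int, h: int, s: float = 1.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(d, h) * 0.02)
        self.B = nn.Parameter(torch.zeros(h, d))
        self.s = s

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.s * (x @ self.A) @ self.B

# Usage: the frozen FFN/QKV output is computed as usual, and only these small
# branches receive gradients during tuning.
x = torch.randn(4, 197, 768)  # (batch, tokens, d) for ViT-B
print(BottleneckAdapter(768, 8)(x).shape, LoRABranch(768, 8)(x).shape)
```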
Previous work on quantization [9, 18, 71] has demonstrated that clustering is a reliable direction for quantization of arbitrary bit-width, so we also adopt a clustering-based quantization strategy for adapters. As illustrated in Figure 3, the smaller the noise, the less the performance degradation. The object of adapter quantization is to minimize the noise involved, _i.e._, minimize the quantization error. The \(b\)-bit quantization process can be viewed as dividing \(\mathbb{R}\) into \(B=2^{b}\) non-overlapping sets \(\{\mathcal{U}_{1},...,\mathcal{U}_{B}\}\), which correspond to a codebook with \(B\) codes \(\{c_{1},...,c_{B}\}\). The quantization function quantizes all values in \(\mathcal{U}_{j}\) to \(c_{j}\), \[\mathcal{Q}(w)=c_{j}\text{ if }w\in\mathcal{U}_{j} \tag{3}\] Then we minimize the quantization error as follows, \[\underset{c_{1},...,c_{B},\mathcal{U}_{1},...,\mathcal{U}_{B}}{ \text{minimize}}\quad\sum_{i=1}^{m}|w_{i}-\mathcal{Q}(w_{i})|^{p} \tag{4}\] in which \(w_{i}\) is an element of a weight \(\mathbf{W}\) of the adapters, and \(m\) is the number of elements in \(\mathbf{W}\). This problem is equivalent to 1D clustering, which can be addressed via clustering algorithm such as \(k\)-means (\(p=2\)) and \(k\)-medians (\(p=1\)). Low-bit quantization, particularly 1-bit quantization suffers catastrophically poor performance in the absence of quantization-aware training (QAT). In QAT, the weights are ever-changing, so the clustering algorithm has to be rerun in each forward propagation during tuning. An appropriate clustering algorithm is supposed to have negligible computational cost, but an iterative algorithm like \(k\)-means and \(k\)-medians is not efficient enough. Moreover, since the cluster assignment in \(k\)-means and \(k\)-medians is not differentiable, this process cannot be end-to-end optimized in QAT. Therefore, although previous work [18] has applied \(k\)-means into post-training quantization, it is not a suitable choice for QAT on adapters. To find an efficient and differentiable clustering method, we visualize the frequency histogram of the parameters in the weights of adapters. As shown in Figure 5, we find that the parameters in full-precision adapters are subject to a bell-shaped distribution with tails. For simplicity, we suppose the parameters of each weight are always Gaussian, so that the clustering algorithm can be simplified considerably. Before tuning, we perform clustering on a standard Gaussian distribution to calculate \(\{c_{1},...,c_{B}\}\) and \(\{\mathcal{U}_{1},...,\mathcal{U}_{B}\}\). We suppose \(p=1\) in Eq. (4) and use \(k\)-medians for simplicity. As illustrated in Figure 4, in each training step, we first standardize the weights by the means and variances of their parameters, \[w_{i}^{\prime}=\frac{w_{i}-\mu}{\sigma} \tag{5}\] where \(\mu=\textit{MEAN}(\{w_{i}\}_{i=1}^{m}),\sigma=\textit{STD}(\{w_{i}\}_{i=1}^{ m})\). According to the Gaussian assumption, the parameters in each stan Figure 4: **Illustration of the proposed quantization method with \(b=2\).** Figure 5: **Parameter frequency histogram visualization** of the 24 weight matrices in all the 12 adapters of AdaptFormer fine-tuned on Caltech101. The parameters (blue histograms) are roughly subject to Gaussian distribution (red curves). dardized weight are subject to standard Gaussian distribution. 
Then we quantize each standardized weight with the pre-calculated \(\{c_{1},...,c_{B}\}\) and \(\{\mathcal{U}_{1},...,\mathcal{U}_{B}\}\), \[\hat{w_{i}}^{\prime}=\mathcal{Q}(w_{i}^{\prime})=c_{j}\text{ if }w_{i}^{ \prime}\in\mathcal{U}_{j} \tag{6}\] Finally, we de-standardize the weights to their original means and variances, \[\hat{w}_{i}=\hat{w_{i}}^{\prime}\cdot\sigma+\mu \tag{7}\] and then feed the inputs to perform the forward and backward propagation. In the whole quantization process, only the quantization operation \(\mathcal{Q}\) is not differentiable, so we use straight-through estimator (STE) to approximate the gradient,, \(\frac{\partial\mathcal{Q}(w_{i}^{\prime})}{\partial w_{i}^{\prime}}=1\). Then \(\forall w_{i},w_{k}\in\mathbf{W}\) the overall gradient is calculated as \[\frac{\partial\hat{w}_{i}}{\partial w_{k}}=\begin{cases}1+\frac{w_{i}^{\prime }(\hat{w}_{i}^{\prime}-w_{i}^{\prime})}{m}&\text{ if }i=k\\ \frac{w_{i}^{\prime}(\hat{w}_{i}^{\prime}-w_{i}^{\prime})}{m}&\text{ otherwise}\end{cases} \tag{8}\] During tuning, the pre-trained weights are always frozen, and only the adapters as well as the classification head are updated. The full-precision weights are maintained in training, and updated via end-to-end gradient descent. Since PET only focuses on boosting parameter efficiency, we still use full-precision activation for better performance. After tuning, we store necessary information for reproducing the quantized weights instead of the full-precision adapters,, the \(b\)-bit quantization indexes \(j\) of adapters' parameters (\(b\) bits per parameter) and the mean \(\mu\) and standard deviation \(\sigma\) of each full-precision weight matrix in adapters (128 bits per adapter). \(\{c_{1},...,c_{B}\}\) and \(\{\mathcal{U}_{1},...,\mathcal{U}_{B}\}\) can be recalculated before inference. At inference time, the weights are reconstructed as \[\hat{w}_{i}=c_{j}\cdot\sigma+\mu \tag{9}\] which are directly used for inference. ## 5 Experiments ### Datasets We use more than 20 image classification tasks to evaluate the performance of different PET methods. **VTAB-1K benchmark.** VTAB-1K [75] contains 19 image classification tasks from diverse fields, which can be categorized into three groups: Natural, Specialized, and Structured. These tasks cover a large range of possible domains where downstream tasks come, so the performance of different methods on this benchmark largely reflects their ability to transfer learning. Each dataset contains 800 samples for training and 200 for validation. Following previous work [30, 31, 32, 42, 77], we tune the pre-trained model with all the 1,000 training and validation samples and report results evaluated on test-set. Following [30, 42], we use _unnormalized inputs_ that are consistent with the VTAB paper [75]. Note that some previous methods [32, 77] normalize the images with ImageNet's mean and standard deviation, so we re-implement some of them for a fair comparison. **Few-shot fine-grained visual recognition (FGVC).** We use five FGVC datasets to evaluate the capability of PET methods in the low-data regime. The five datasets are FGVC-Aircraft [52], Oxford-Pets [58], Food-101 [4], Stanford Cars [37], and Oxford-Flowers102 [57]. Experiments are conducted in 1, 2, 4, 8, and 16-shot settings. are directly converted from fine-tuned FP32 adapters. Others are fine-tuned using the proposed QAT method. Table 1 presents the accuracy and adapter size on VTAB-1K. 
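Before discussing the results, the complete quantize-dequantize cycle of Section 4.2 can be summarized in a short sketch. This is our PyTorch paraphrase under simplifying assumptions, not the code released with the paper; in particular, the plain straight-through estimator below ignores the dependence of \(\mu\) and \(\sigma\) on the weights that appears in Eq. (8).

```python
import torch

def gaussian_kmedians_codebook(bits: int, n_samples: int = 100_000, iters: int = 50):
    """1D k-medians codebook fitted once on a standard Gaussian sample."""
    samples, _ = torch.sort(torch.randn(n_samples))
    # initialize the 2**bits codes at evenly spread quantiles of the sample
    codes = torch.quantile(samples, torch.linspace(0.05, 0.95, 2 ** bits))
    for _ in range(iters):  # Lloyd-style updates with medians
        assign = (samples[:, None] - codes[None, :]).abs().argmin(dim=1)
        codes = torch.stack([samples[assign == j].median() for j in range(2 ** bits)])
    return codes

def quantize_ste(w: torch.Tensor, codes: torch.Tensor) -> torch.Tensor:
    """Standardize (Eq. 5), snap to the nearest code, de-standardize (Eqs. 6-7)."""
    mu, sigma = w.mean(), w.std()
    w_std = (w - mu) / sigma
    idx = (w_std.unsqueeze(-1) - codes).abs().argmin(dim=-1)
    w_hat = codes[idx] * sigma + mu
    # plain straight-through estimator: forward uses w_hat, backward sees identity
    return w + (w_hat - w).detach()

codes = gaussian_kmedians_codebook(bits=1)         # roughly {-0.67, +0.67} for 1 bit
w_down = torch.randn(768, 32, requires_grad=True)  # a full-precision adapter weight
w_q = quantize_ste(w_down, codes)                  # use w_q in the forward pass
w_q.pow(2).sum().backward()                        # gradients reach the FP32 copy
```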
We notice that using \(b\)-bit adapters leads to about \(\frac{32}{b}\times\) more parameter efficiency than full-precision adapters. However, the performance degradation resulting from quantization is very slight and sometimes negligible, even in the 1-bit setting. Note that quantizing the entire model to a very low bit-width usually causes significant performance degradation, but our observation indicates that low-bit quantization only on adapters is reliable and much less damaging. Moreover, we explore the best bit-width given a certain storage budget. Since low-precision adapters are more lightweight, we can augment their performance by using higher hidden dimension to utilize the saved space. The size of a \(b\)-bit \(h\)-dimension adapter is about \(2dbh\) bits where \(d\) is the feature dimension, so we fix \(bh=32\) and compare different combinations of \(b\) and \(h\). As shown in Table 2, the lower \(b\) and higher \(h\) yield better performance on LoRA and AdapFormer. 1-bit adapters perform the best across different combinations. Overall, we find that the parameter efficiency gains of the low-bit adapters far outweigh their performance damage, demonstrating the feasibility and necessity to trade precision for efficiency. ### Comparison with the State-of-the-Art #### 5.3.1 Vtab-1k benchmark We compare our methods with full fine-tuning, linear probing (, only training the classification head), VPT [30], NOAH [77], SSF [42], Adapter-P [59], BitFit[73], AdapFormer[6], LoRA [26], Compacter[51], and FaCT [32] on VTAB-1K. All baselines use FP32 by default. The hidden dimension \(h\) is set to 8 for Adapter-P, AdapFormer, and LoRA. The number of Kronecker products and hidden dimensions are 4 and 32 for Compacter, respectively. For FaCT, we use Fact-TT with rank searched from {8, 16, 32} to adapt the MHSA blocks. The settings of other baselines follow their original papers. As for our low-precision adapters, we quantize the bit-width of AdapFormer and LoRA to 1, named Bi-AdaptFormer and Bi-LoRA, and report results with hidden dimensions \(h=1\) and \(32\). All these methods use a ViT-B/16 [12] pre-trained on supervised ImageNet-21K as backbone. We train the models for 100 epochs with AdamW optimizer. Table 3 shows the full results on VTAB-1K. 
Since 1-bit adapters are much more storage-efficient than their full-precision counterparts, Bi-AdaptFormer and Bi-LoRA can use a larger hidden dimension while maintaining a \begin{table} \begin{tabular}{l c c c c c c c|c c c c c|c c c c c c} \hline \hline & & & \multicolumn{4}{c|}{**Natural**} & \multicolumn{4}{c|}{**Specialized**} & \multicolumn{4}{c}{**Structured**} \\ \cline{2-13} & & & & & & & & & & & & & & & & & & & & & & & \\ \hline \multicolumn{13}{l}{_Conventional Fine-Tuning_} \\ \hline Full & 327 & 68.9 & 68.9 & 87.7 & 64.3 & 97.2 & 86.9 & 87.4 & 38.8 & 79.7 & 95.7 & 84.2 & 73.9 & 56.3 & 58.6 & 41.7 & 65.5 & 57.5 & 46.7 & 25.7 & 29.1 \\ Linear & 0 & 57.6 & 64.4 & 85.0 & 63.2 & 97.0 & 86.3 & 36.6 & 51.0 & 78.5 & 87.5 & 68.5 & 74.0 & 34.3 & 30.6 & 33.2 & 55.4 & 12.5 & 20.0 & 9.6 & 19.2 \\ \hline \multicolumn{13}{l}{_PET methods_} \\ \hline VPT-Deep [30] & 2.03 & 72.0 & **78.8** & 90.8 & 65.8 & 98.0 & 88.3 & 78.1 & 49.6 & 81.8 & **96.1** & 83.4 & 68.4 & 68.5 & 60.0 & 46.5 & 72.8 & 73.6 & 47.9 & 32.9 & 37.8 \\ NOAH\({}^{\dagger}\)[77] & 1.37 & 75.5 & 69.6 & 92.7 & 70.2 & 99.1 & 90.4 & 86.1 & 53.7 & 84.4 & 95.4 & 83.9 & 75.8 & 82.8 & 68.9 & 49.9 & 81.7 & 81.8 & 48.3 & 32.8 & 44.2 \\ LoRA [26] & 1.13 & 76.4 & 72.0 & 91.2 & 71.6 & 99.1 & 91.3 & 88.9 & 56.4 & 87.2 & 94.6 & 83.9 & 74.9 & **83.7** & 64.0 & 52.3 & 81.2 & 84.8 & 53.3 & **38.1** & 43.4 \\ SSF [42] & 0.78 & 75.7 & 69.0 & 92.6 & **75.1** & **99.4** & **91.8** & **90.2** & 52.9 & 87.4 & 95.9 & **87.4** & 75.5 & 75.9 & 62.3 & **53.3** & 80.6 & 77.3 & 54.9 & 29.5 & 37.9 \\ AdapAPT-P [59] & 0.56 & 75.5 & 73.2 & 90.1 & 69.6 & 99.2 & 91.1 & 84.9 & 56.0 & 86.6 & 94.8 & 82.5 & **75.8** & 82.9 & 63.9 & 49.7 & 79.8 & 71.1 & 55.5 & 31.6 & 42.2 \\ AdapFormer [6] & 0.56 & **76.7** & 73.8 & 92.3 & 72.7 & 99.3 & 91.6 & 89.1 & 56.5 & **87.8** & 95.5 & 54.9 & 75.2 & 83.3 & 62.5 & 52.4 & **81.7** & 86.2 & **55.9** & 34.4 & 40.2 \\ BitFit [73] & 0.39 & 65.2 & 72.8 & 87.0 & 59.2 & 97.5 & 85.3 & 59.9 & 51.4 & 78.7 & 91.6 & 72.9 & 69.8 & 61.5 & 55.6 & 32.4 & 55.9 & 66.6 & 40.0 & 15.7 & 25.1 \\ FaCT-TT [32] & 0.30 & **76.7** & 73.4 & 91.0 & 72.4 & 99.2 & 91.4 & 90.1 & **56.6** & 87.3 & 94.7 & 84.5 & **75.8** & 83.0 & 64.9 & 51.3 & 81.4 & **87.4** & 53.2 & 33.5 & **44.3** \\ VPT-Shallow [30] & 0.24 & 67.8 & 77.7 & 86.9 & 62.6 & 97.5 & 87.3 & 74.5 & 51.2 & 78.2 & 92.0 & 75.6 & 72.9 & 50.5 & 58.6 & 40.5 & 67.1 & 68.7 & 36.1 & 20.2 & 34.1 \\ Compacter [51] & **0.15** & 74.2 & 71.9 & 89.0 & 69.7 & 99.1 & 90.7 & 82.7 & 56.1 & 86.0 & 93.5 & 82.4 & 75.3 & 80.2 & 63.4 & 47.4 & 77.2 & 78.1 & 53.5 & 27.3 & 39.8 \\ \hline Bi-LoRA (Ours) & & & & & & & & & & & & & & & & & & & & & & & \\ \(h=32\) & 0.14 & 76.7 & 72.1 & 91.7 & 71.2 & 99.1 & 91.4 & **90.2** & 55.8 & 87.0 & **95.4** & 85.5 & 75.5 & 83.1 & 64.1 & 52.2 & 81.3 & **68.4** & 53.5 & **36.7** & **44.4** \\ \(h=1\) & 0.0048 & 75.4 & 72.6 & 90.4 & 71.8 & 99.0 & 91.3 & 87.0 & 56.0 & 86.1 & 94.1 & 82.1 & 75.4 & 81.0 & **64.2** & 50.5 & 79.7 & 83.0 & 53.7 & 29.7 & 42.9 \\ Bi-AdaptFormer (Ours) & & & & & & & & & & & & & & & & & & & & & & \\ \(h=32\) & 0.071 & **77.0** & **74.1** & **92.4** & **72.1** & **99.3** & **91.6** & 89.0 & **56.3** & **88.2** & 95.2 & **86.0** & **76.2** & **83.9** & 63.6 & **53.0** & **81.4** & 86.2 & **54.8** & 35.2 & 41.3 \\ \(h=1\) & **0.0024** & 75.0 & 73.3 & 91.0 & **72.1** & 99.1 & 91.4 smaller size. Our Bi-AdaptFormer with \(h=32\) beats all previous PET methods while using a smaller storage size. 
Notably, Bi-AdaptFormer and Bi-LoRA achieve better performance than Compacter and FacT-TT while being more parameter-efficient, indicating that precision redundancy is more significant than rank redundancy in adapters and thus quantization is a better solution than low-rank parameterization for designing efficient adapters. Moreover, Bi-AdaptFormer and Bi-LoRA with \(h=1\) only store less than 5 KB of backbone parameters for each task, while reaching performance better than VPT, BitFit, Compacter, and full fine-tuning. #### 5.3.2 Few-shot learning on FGVC On few-shot FGVC datasets, we compare Bi-AdaptFormer, the best-performing quantized adapter in the experiments above, with other competitive baselines: VPT-Deep, Adapter-P, LoRA, AdapFormer, NOAH, and FacT-TT. The hidden dimensions of Adapter-P, LoRA, and AdapFormer, as well as the \begin{table} \end{table} Table 4: **Supplementary results on VTAB-1K benchmark.** \begin{table} \end{table} Table 4: **Supplementary results on VTAB-1K benchmark.** Figure 6: **Accuracy of few-shot learning on FGVC datasets. The average size (MB) of trainable parameters in backbones is shown in parentheses. Bi-AdaptFormer outperforms other baselines on average accuracy using the fewest trainable parameters. Results are averaged over three trials with different seeds.** prompt length of VPT-Deep, are all set to 8. The rank of FacT-TT is set to 16, and NOAH follows the best recipes in [77]. As for Bi-AdaptFormer, we use a hidden dimension of 32. Other settings are the same as in the VTAB-1K experiments. Per-dataset results as well as the average results in the five settings are shown in Figure 6. Overall, our Bi-AdaptFormer outperforms all baselines on 5-task average accuracy with the smallest size of trainable parameters. On FGVC-Aircraft, Oxford-Pets, and Stanford Cars, Bi-AdaptFormer exhibits significant performance improvement over the previously state-of-the-art PET methods. Only on Food-101, Bi-AdaptFormer performs worse than FacT-TT and NOAH. Note that Bi-AdaptFormer is about 3\(\times\) and 19\(\times\) more storage-efficient than FacT and NOAH, respectively, and thus is more competitive under strict storage restrictions. ### Further Analysis #### 5.4.1 Quantizing classification head As the size of adapters is compressed, the classification heads take up most of the storage space, hindering further improvements in storage efficiency. For example, on VTAB-1K, the average size of the classification heads is 0.14 MB, much larger than that of Bi-AdaptFormer modules. As shown in Table 3(a), by quantizing the classification heads, Bi-AdaptFormer keeps state-of-the-art results (76.89 _vs._ AdapFormer's 76.70) with checkpoint size smaller than linear probing (76.7 KB _vs._ 140.8 KB). Note that linear probing is usually considered as the efficiency lower bound of adaptation. Furthermore, Bi-AdaptFormer and Bi-LoRA with \(h=1\) and binary head achieve better performance than full fine-tuning, linear probing, and VPT, but the average size of the total checkpoints is only 6.8 KB and 9.2 KB, respectively, which are dozens of times more storage-efficient than linear probing. #### 5.4.2 Computational efficiency One of the design principles behind our quantization method is to ensure the quantization operation has negligible computational cost during QAT. To evaluate the efficiency of our proposed method, we conducted experiments to study the training and inference time of different tuning methods, as summarized in Table 3(c). 
For all baselines, we use the same settings as in the VTAB-1K experiments. As for our Bi-AdaptFormer and Bi-LoRA, we set a larger hidden dimension \(h=32\). We find that the QAT and larger \(h\) slightly increase the training time of adapters. However, Bi-AdaptFormer and Bi-LoRA are still faster than VPT, FacT, and full fine-tuning. At inference time, since (Bi-)LoRA, and FacT can be re-parameterized and absorbed into the pre-trained backbone, they do not incur additional computation. #### 5.4.3 Performance on other backbones Note that our proposed quantization method is a plug-in strategy that can be applied in any backbones and any adapters. Besides ViTs [12], there are also other commonly used backbone networks in vision, such as hierarchical transformers like Swin [44] and convolutional networks like ConvNeXt [45]. In Table 3(b), we apply Bi-AdaptFormer to Swin-B and ConvNeXt-B, and compare it with other baselines that can also be extended to these backbones. We notice that Bi-AdaptFormer still achieves state-of-the-art results on VTAB-1K. Bi-AdaptFormer with \(h=32\) offers on-par or better performance than AdapFormer with \(h=8\) while only using about \(\frac{1}{8}\) of the storage size, which verifies the generalization ability of binary adapters. #### 5.4.4 Ablation studies We perform further ablation experiments on our low-bit adapters. The low-bit adapters are fine-tuned via QAT, which has been proven to work better in low-bit settings. To illustrate this, we compare our method with a PTQ method, _i.e_., directly quantizing fine-tuned full-precision adapters using \(k\)-means. We set \(h=8\) for AdapFormer and LoRA. As shown in Table 3(d), PTQ obviously underperforms QAT, especially in 1-bit setting. Moreover, since each weight matrix can be divided into several sub-matrices as blocks to perform block-wise quantization, _i.e_., standardizing the parameters and storing the \(\mu\) and \(\sigma\) of each block, we here compare the performance of 1-bit adapters across different numbers of blocks. We set \(h=32\) for all methods. As shown in Table 3(e), since block-wise quantization methods (# block \(>1\)) store more \(\mu\) and \(\sigma\) than our methods (# block \(=1\)), block-wise quantization uses a larger storage size. However, block-wise quantization does not demonstrate superiority over our methods. ## 6 Conclusion In this work, we systematically revisit the parameter efficiency of adapter-based PET through the lens of precision redundancy. Based on our observations, we propose a plug-in strategy to train low-precision counterparts for existing adapter-based methods. Through extensive experiments on more than 20 datasets, we empirically verify the superiority of 1-bit adapters in terms of both performance and parameter efficiency. Surprisingly, we find that 2.4 KB parameters in backbone is almost sufficient to describe the difference between the pre-trained ViT-B and a task-specific fine-tuned ViT-B, suggesting that the intrinsic dimension of visual datasets is much smaller than what we used to believe. Our work also brings quantization to PET, providing a general solution to largely enhance the parameter efficiency of adapter-based PET methods.
2309.08893
Magneto-Acoustic Waves in antiferromagnetic CuMnAs excited by Surface Acoustic Waves
Magnetoelastic effects in antiferromagnetic CuMnAs are investigated by applying dynamic strain in the 0.01% range through surface acoustic waves in the GaAs substrate. The magnetic state of the CuMnAs/GaAs is characterized by a multitude of submicron-sized domains, which we image by x-ray magnetic linear dichroism combined with photoemission electron microscopy. Within the explored strain range, CuMnAs shows magnetoelastic effects in the form of Néel vector waves with micrometer wavelength, which correspond to an averaged overall spin-axis rotation of up to 2.4 deg driven by the time-dependent strain from the surface acoustic wave. Measurements at different temperatures indicate a reduction of the wave amplitude when lowering the temperature. However, no domain wall motion has been detected on the nanosecond timescale.
M. Waqas Khaliq, Oliver Amin, Alberto Hernández-Mínguez, Marc Rovirola, Blai Casals, Khalid Omari, Sandra Ruiz-Gómez, Simone Finizio, Richard P. Campion, Kevin W. Edmonds, Vıt Novak, Anna Mandziak, Lucia Aballe, Miguel Angel Niño, Joan Manel Hernàndez, Peter Wadley, Ferran Macià, Michael Foerster
2023-09-16T06:12:47Z
http://arxiv.org/abs/2309.08893v1
# Neel vector waves in antiferromagnetic CuMnAs excited by Surface Acoustic Waves ###### Abstract Magnetoelastic effects in antiferromagnetic CuMnAs are investigated by applying dynamic strain in the 0.01% range through surface acoustic waves in the GaAs substrate. The magnetic state of the CuMnAs/GaAs is characterized by a multitude of submicron-sized domains which we image by x-ray magnetic linear dichroism combined with photoemission electron microscopy. Within the explored strain range, CuMnAs shows magnetoelastic effects in the form of Neel vector waves with micrometer wavelength, which corresponds to an averaged overall spin-axis rotation up to \(2.4^{\circ}\) driven by the time-dependent strain from the surface acoustic wave. Measurements at different temperatures indicate a reduction of the wave amplitude when lowering the temperature. However, no domain wall motion has been detected on the nanosecond timescale. Antiferromagnets (AFM) have become a focus of recent research in spintronics, mostly thanks to their potential advantages for future devices. Their low stray fields and robustness versus external magnetic fields are favorable for the further down-scaling of memory elements and their high-frequency internal resonances promise higher intrinsic speed limits for operation. However, together with these advantages also challenges arise, for example, related with the readout and mostly the writing process. Magnetic field control, although not completely impossible [1], is impractical due to the field magnitudes required (\(\gtrsim 2\) T) in order to overcome exchange energy and modify the magnetization of the two sublattices existing in the AFM. Domain modification by electrical currents has been demonstrated through Spin transfer/orbit torque [2] as well as through thermoelastic effects [3]. Specifically for CuMnAs, the manipulation of antiferromagnetic domains in thin films has been studied by means of injecting current pulses [2; 4; 5] and defects [6]. Other approaches use the transitions to a ferromagnetic (FM) phase, like in FeRh [7], or the coupling with a FM [8], which compromise many of the potential advantages (stray fields, speed) of AFM materials for usage in a real device. On the other hand the appearance of closure domain-like features in patterned AFM samples has been attributed to magnetoelastic effects caused by shape dependent strain [9], which suggests that much smaller energies than the exchange energy may be enough to rotate the Neel vector. Surface acoustic waves (SAW), are propagating elastic deformations in the upper micrometric layer of a crystal. SAW can be conveniently excited in piezoelectric materials by radiofrequency electrical signals applied to an antenna-like structure named interdigitated transducer (IDT). Typical strain amplitudes achieved under UHV conditions can reach the range of \(2\times 10^{-4}\) for LiNbO\({}_{3}\) in the hundreds of MHz to GHz frequency regime [10]. There is a sizable interaction of SAW with FM systems in heterostructures, which is driven by the transfer of the time dependent strain state of the underlayer/substrate into the FM overlayer. The interaction is mediated by the magnetoelastic effect and has been investigated by a growing number of groups [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23] (see review articles [24; 25; 26] and references therein). 
GaAs, apart from being a suitable substrate for epitaxial CuMnAs growth, has substantial applications in optoelectronic devices due to its outstanding photovoltaic properties [27] and robust piezoelectricity [28]. In this paper we generate Neel vector waves in collinear antiferromagnet CuMnAs induced by the time dependent strain from the supporting GaAs substrate. We obtain direct images using stroboscopic X-ray magnetic linear dichroism combined with photoemission electron microscopy (XMLD-PEEM) of both dynamic strain and Neel vector oscillations with a quantification of the amplitude of the spin axis rotation up to \(2.4^{\circ}\). The overall amplitude of the observed Neel vector oscillations decreased with lowering temperature. Experiments were carried out at the CIRCE beamline of the ALBA Synchrotron light source [29]. The CuMnAs epitaxial thin films with 45 nm thickness were grown on lattice-matched GaAs substrate by molecular beam epitaxy as described previously [30]. To generate SAWs with a frequency tuned to the synchrotron repetition rate (500 MHz), IDT with a finger periodicity of 5.73 \(\mu\)m (which determines the SAW wavelength) were patterned and deposited on the GaAs by electron beam lithography and metal evaporation. The sample was mounted on a printed circuit board (PCB) inside the sample holder and the IDT were contacted with wire bonds to apply electrical signals. The schematic illustration of the experiment is presented in Fig. 1. The radiofrequency signal applied to the IDT generates a SAW beam [31] traveling along the [110] crystalline direction of the GaAs substrate and confined to a depth in the order of the SAW wavelength [26]. The SAW causes a periodic in-plane, parallel to the SAW propagation direction, and out-of-plane change in the substrate lattice constant which is transferred as strain to the CuMnAs film. In order to assess the structure of the CuMnAs thin film, X-ray diffraction (XRD) measurements were carried out on the sample using a laboratory diffractometer before the synchrotron experiment. The XRD pattern in Fig. 2a shows the peaks of both the CuMnAs film (black) and the GaAs substrate (red) with respect to \(2\theta\). The tetragonal crystal system (P4/nmm) and the planes (001), (002), (003), and (004) of the antiferromagnetic thin film were identified with the JCDPS-ICDD 01-082-3986 [32; 33]. X-ray absorption spectroscopy (XAS) measurements of the CuMnAs were performed detecting low energy secondary electrons in the PEEM while scanning the photon energy. The beamline is equipped with an undulator that enables the control of the incoming X-ray polarization, for example between linear horizontal (electric field vector in the sample plane) and vertical (electric field vector under 16 degree to the sample normal) polarization directions. X-ray absorption spectra at the Mn \(L_{3,2}\) edges, with both polarizations, and their difference (linear dichroism, XMLD), are depicted in Fig. 2b at \(T\simeq 235\) K. The features of the XMLD spectrum (blue line), marked by black boxes, are similar to previously published data [34]. The imaging of antiferromagnetic domains in the CuMnAs thin film is performed by PEEM employing XMLD contrast. Such a contrast is obtained by subtracting different images taken with linear horizontal polarization at energies before and at the \(L_{3}\) absorption peak (See details of the methodology in Supp. Mater. I). Fig. 2c shows the domains arrangement in the film at 223 K nominal temperature without any SAW applied. 
Equivalent images taken with linear vertical polarization (electric field vector \(16^{\circ}\) to the sample normal) did not show visible contrast, confirming a dominant in-plane Neel vector. In order to quantify the dynamic magnetoelastic effects in the CuMnAs film, it is necessary to determine first the amplitude of the SAW-induced strain in the GaAs substrate. The inset of Fig. 2d depicts an XPEEM image of the sample surface, measured with 500 MHz pulsed synchrotron X-rays. An intensity contrast with a periodicity matching the SAW wavelength, i.e., 5.73 \(\mu\)m, is evident in the sample region not covered by the CuMnAs film. This contrast originates from the oscillating piezoelectric potential accompanying the Figure 1: Schematic illustration of the stroboscopic experiment. The CuMnAs thin film is analyzed in the PEEM microscope. SAWs are generated in the GaAs substrate by applying an electrical signal which is synchronized with the synchrotron frequency (repetition rate time of the X-ray pulses). The PEEM image is formed by photoelectrons emitted from the sample under the X-ray illumination. strain wave at the surface of the GaAs substrate. The photoelectron spectra displayed in Fig. 2d show the average intensity (number of detected electrons) at the areas marked by the red and blue rectangles in the inset image, recorded as a function of the bias voltage applied to the substrate for a fixed energy analyzer configuration. The voltage shift between these curves, obtained by selecting the positions of maximum slope in both spectra, amounts to 0.35 V and corresponds to the peak-to-peak amplitude of the oscillating piezoelectric potential. This value is used to calculate the amplitude of the strain field by numerically solving the coupled differential equations of the mechanical and electrical displacement, obtaining values in the range of 0.01% at the sample surface. Details on stroboscopic XPEEM measurements with synchronized SAW can be found in [10]. Fig. 3 shows XMLD images taken while the SAW is applied. The bright and dark areas in Fig. 3 correspond to domains with spin axis parallel and perpendicular to the x-ray polarization, with a typical domain size below one micrometer. The presence of domains with continuously differing gray scale contrast indicates the absence of significant in-plane anisotropy for the spin axis in the sample. Between Fig. 3a and 3b, the phase of the radiofrequency signal exciting the SAW was shifted by 180\({}^{\circ}\). Thus, the phase of the SAW in any given position is inverted for the stroboscopic measurement, i.e., when the X-rays hit the sample. A close inspection of the individual images shown in Figs. 3a and 3b as well the difference image (Fig. 3c), same gray scale, does not show any observable change of the domain boundaries. The domain wall motion in the CuMnAs in the present experiment is thus either negligible or below the detection limit. The difference images are used to eliminate the static domain contrast and enhance the dynamic changes induced by the SAW. Several difference images equivalent to the one shown in Fig. 3c, were recorded while changing the electronic phase by a small amount (typically 15\({}^{\circ}\)). Broad line Figure 4: Néel vector wave (spin axis rotation wave) in CuMnAs observed by XMLD-PEEM. **a)** example of an XMLD difference image from opposite SAW phases. Line profiles are taken along the blue rectangle box, in the propagation direction of the SAW (red arrow). 
**(b-d)** XMLD signal (in %) along the line profile after averaging at \(T\simeq 223\) K in **b)**, \(T\simeq 233\) K, in **c)** and \(T\simeq 296\) K in **d)**. The black line in each plot shows the data and the green line is the sinusoidal function curve fitted to the data points. (insets) fit results for single profiles before averaging: fitted phase shift as a function of the shift \(\psi\) in the electrical excitation. Figure 3: XMLD images of CuMnAs with opposite electronic phases \(\psi=0^{\circ}\) in **a)** and \(\psi=180^{\circ}\) in **b)** of the SAW excitation at \(T\simeq 223\) K. The direction of SAW propagation is perpendicular to the dashed lines which are separated by one wavelength of 5.73 \(\mu\)m. **c)** Image obtained by subtracting the images at opposite phases, at the same contrast scale. No evident variations in the domain boundaries can be observed. Figure 2: **(a)** XRD measurements of 45 nm thin film of CuMnAs on GaAs showing the film (black) and substrate (red) peaks **b)** XAS and XMLD spectra of CuMnAs/GaAs at the Mn \(L_{3,2}\) edges, obtained with horizontal (black) and vertical (red) linear polarization. **c)** An XMLD image of CuMnAs thin film at \(T\simeq 223\) K by employing linear horizontal polarization component without applying SAW signal. **d)** Quantification of the SAW: Average intensity (number of emitted electrons) from the red and blue rectangles in the inset as a function of sample bias voltage. The inset shows the XPEEM image of the SAW in GaAs at 0.7 V bias. profiles along the direction of the SAW (blue box in Fig. 4a) were extracted. For each line profile a background correction is performed by subtracting the signal averaged over exactly one wavelength in order to highlight the oscillatory component(s). Please refer to Supp. Mater. II for details on the data analysis. The line profiles obtained for all phases are then averaged, after shifting them to account for the corresponding phase difference of the SAW from the electronic signal. The results are plotted in Fig. 4b-d and correspond to different data sets at \(T=223\) K, \(T=233\) K, and \(T=296\) K, respectively. Green lines are best fits with sinusoidal functions, which is used to obtain the amplitude of the Neel vector (spin axis rotation) signal. In the insets of Fig. 4b-d, we show as a validation the result for each single profile, i.e., the fitted experimental phase shift in units of the wavelength \(\lambda\). These values are not used for further analysis, but their excellent agreement with the electronic phase \(\psi\) applied to the IDT clearly demonstrates that the Neel vector oscillations are driven by SAW. Now we turn to the quantitative analysis of the Neel vector rotation amplitudes for the three data sets at different temperatures, i.e., \(T=223\) K, \(T=233\) K, and \(T=296\) K. The results are summarized in Table 1. Due to experimental constraints, the low-temperature data was taken with an angle of \(65^{\circ}\) between SAW and probing X-rays, while room temperature data was taken with a \(90^{\circ}\) angle. The conversion of XMLD amplitude to rotation in degrees as well as the correction factor for the reduced sensitivity under \(65^{\circ}\) is calculated in Supp. Mater. III. As mentioned above, the applied strain from the SAW has been calculated from the voltage shift of the secondary electron spectra detected in the PEEM (Fig. 2d). The typical strain amplitudes achieved in our experiments on GaAs are \((0.75-1.5)\times 10^{-4}\). 
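For illustration, the background correction, phase-aligned averaging, and sinusoidal fitting described above can be prototyped as follows. This is a schematic sketch with synthetic data and parameter names of our own choosing; it is not the analysis code used to produce Fig. 4.

```python
import numpy as np
from scipy.optimize import curve_fit

WAVELENGTH_UM = 5.73  # SAW wavelength set by the IDT finger periodicity

def background_correct(profile, x, wavelength=WAVELENGTH_UM):
    """Subtract the signal averaged over exactly one wavelength."""
    mask = x < x.min() + wavelength
    return profile - profile[mask].mean()

def average_profiles(profiles, x, phases_deg, wavelength=WAVELENGTH_UM):
    """Shift each profile by its nominal electronic phase, then average."""
    shifted = []
    for p, phi in zip(profiles, phases_deg):
        shift_um = (phi / 360.0) * wavelength
        shifted.append(np.interp(x, x + shift_um, background_correct(p, x)))
    return np.mean(shifted, axis=0)

def fit_wave(x, profile, wavelength=WAVELENGTH_UM):
    """Fit A*sin(2*pi*x/lambda + phi0); returns (amplitude, phase offset)."""
    model = lambda x, A, phi0: A * np.sin(2 * np.pi * x / wavelength + phi0)
    popt, _ = curve_fit(model, x, profile, p0=[0.1, 0.0])
    return popt

# Synthetic usage with made-up numbers, only to show the call pattern.
x = np.linspace(0, 3 * WAVELENGTH_UM, 300)
fake = [0.2 * np.sin(2 * np.pi * x / WAVELENGTH_UM + np.deg2rad(phi))
        + 0.05 * np.random.randn(x.size) for phi in (0, 15, 30)]
avg = average_profiles(fake, x, phases_deg=[0, 15, 30])
print(fit_wave(x, avg))
```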
The XMLD wave amplitude was determined from fits to the averaged line profiles in Fig. 4b-d. In order to convert those numbers into the corresponding rotation of the spin axis, we take first into account the temperature dependence of the XMLD contrast in CuMnAs. The XMLD wave amplitude is thus normalized to the maximum XMLD contrast for each temperature in the static domain image (see Fig. 2c for an example), taken as the average over three different locations each. We then consider the film to be populated by an equal portion of domains in all directions, i.e., without net in-plane anisotropy, and calculate for each domain (i) the rotation angle as a fraction of a maximum angle \(\phi_{0}\), as explained in Supp. Mater. III, and (ii) the sensitivity of the XMLD signal to this rotation as a function of the domain spin axis. The numbers reported in Table 1 show that the spin axis rotation wave driven by SAW in CuMnAs can reach a sizable \(2.44^{\circ}\) at room temperature. We plotted in Fig. 5 the efficiency of the SAW-induced Neel vector wave, defined as the overall variation divided by the SAW strain, for the three temperatures. A similar quantity is added into the graph for magnetoacoustic waves in ferromagnetic samples measured through XMCD [21; 35]. Values for ferromagnetic Ni [21] and the Heusler alloy Fe\({}_{3}\)Si [35] are shown as broad bands, because they depend on the externally applied magnetic field, showing a resonance-like peak (Ni showed an efficiency of 2 to 4.5 and Fe\({}_{3}\)Si from 1.6 to 4.1). These results indicate a sizable dynamic magneto-elastic effect in CuMnAs induced by SAW and an efficiency comparable to that of FM materials. We notice here that detection of the spin axis rotation in CuMnAs is more challenging compared to magnetoacoustic waves in ferromagnets, mainly because the XMLD contrast is weaker and because the sample is in a multi-domain state. A reduction of the Neel vector wave excitation efficiency when lowering temperature, i.e., from \(3.3^{\circ}/10^{-4}\) strain at room temperature (296 K) to \(1.8^{\circ}/10^{-4}\) at 223 K, has been observed. While this apparent reduction needs further experiments to verify [36; 37], it may indicate an increase of the energy barrier for the spin axis rotation as temperature lowers (a type of "freezing").

Figure 5: Efficiency of the magnetoacoustic wave excitation in CuMnAs as a function of temperature, in comparison with ferromagnetic samples; all data at 500 MHz SAW. The area for ferromagnetic samples corresponds to Ni and Fe\({}_{3}\)Si at room temperature (RT) and covers the range between zero external field and resonance.

In summary, we have investigated high frequency (500 MHz) magnetoelastic effects in antiferromagnetic CuMnAs excited by SAW in the GaAs substrate using XMLD-PEEM. An averaged magneto-acoustic wave signal can be detected in XMLD, corresponding to a rotation of the spin axis in the individual domains by up to \(\pm 2.4^{\circ}\). The efficiency of the Neel vector excitation in CuMnAs is a proof that magnetoelastic effects are a viable way to manipulate antiferromagnetic systems, even on the subnanosecond time scale. Moreover, in static conditions, the CuMnAs thin film is characterized by a multidomain configuration of submicron size. No SAW-induced motion of domain walls has been detected, which could be related to intrinsic pinning due to the film microstructure. For the future, a combination of GaP substrates with ZnO-based IDTs can enable the study of SAW-driven effects in high-quality single-domain patterns of CuMnAs.
###### Acknowledgements. The authors are thankful to the Spanish Ministry of Science, Innovation and Universities, and to DOC-FAM, which has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 754397. MWK, MF, MAN, and LA acknowledge the funding from MICINN through grant numbers RTI2018-095303-B-C53 and PID2021-122980OB-C54. FM, MR, BC, and JMH are grateful to funding from MCIN/AEI/10.13039/501100011033 through grant number PID2020-113024GB-100. OA, KO, KWE, RPC, and PW acknowledge funding from EU FET Open RIA Grant No. 766566. VN is grateful to MEYS Grant No. LM2018110. This work has also been supported by the ALBA In-House Research Program.
2310.00374
Coordinated pausing: An evaluation-based coordination scheme for frontier AI developers
As artificial intelligence (AI) models are scaled up, new capabilities can emerge unintentionally and unpredictably, some of which might be dangerous. In response, dangerous capabilities evaluations have emerged as a new risk assessment tool. But what should frontier AI developers do if sufficiently dangerous capabilities are in fact discovered? This paper focuses on one possible response: coordinated pausing. It proposes an evaluation-based coordination scheme that consists of five main steps: (1) Frontier AI models are evaluated for dangerous capabilities. (2) Whenever, and each time, a model fails a set of evaluations, the developer pauses certain research and development activities. (3) Other developers are notified whenever a model with dangerous capabilities has been discovered. They also pause related research and development activities. (4) The discovered capabilities are analyzed and adequate safety precautions are put in place. (5) Developers only resume their paused activities if certain safety thresholds are reached. The paper also discusses four concrete versions of that scheme. In the first version, pausing is completely voluntary and relies on public pressure on developers. In the second version, participating developers collectively agree to pause under certain conditions. In the third version, a single auditor evaluates models of multiple developers who agree to pause if any model fails a set of evaluations. In the fourth version, developers are legally required to run evaluations and pause if dangerous capabilities are discovered. Finally, the paper discusses the desirability and feasibility of our proposed coordination scheme. It concludes that coordinated pausing is a promising mechanism for tackling emerging risks from frontier AI models. However, a number of practical and legal obstacles need to be overcome, especially how to avoid violations of antitrust law.
Jide Alaga, Jonas Schuett
2023-09-30T13:38:33Z
http://arxiv.org/abs/2310.00374v1
# Coordinated pausing: An evaluation-based coordination scheme for frontier AI developers ###### Abstract As artificial intelligence (AI) models are scaled up, new capabilities can emerge unintentionally and unpredictably, some of which might be dangerous. In response, dangerous capabilities evaluations have emerged as a new risk assessment tool. But what should frontier AI developers do if sufficiently dangerous capabilities are in fact discovered? This paper focuses on one possible response: coordinated pausing. It proposes an evaluation-based coordination scheme that consists of five main steps: (1) Frontier AI models are evaluated for dangerous capabilities. (2) Whenever, and each time, a model fails a set of evaluations, the developer pauses certain research and development activities. (3) Other developers are notified whenever a model with dangerous capabilities has been discovered. They also pause related research and development activities. (4) The discovered capabilities are analyzed and adequate safety precautions are put in place. (5) Developers only resume their paused activities if certain safety thresholds are reached. The paper also discusses four concrete versions of that scheme. In the first version, pausing is completely voluntary and relies on public pressure on developers. In the second version, participating developers collectively agree to pause under certain conditions. In the third version, a single auditor evaluates models of multiple developers who agree to pause if any model fails a set of evaluations. In the fourth version, developers are legally required to run evaluations and pause if dangerous capabilities are discovered. Finally, the paper discusses the desirability and feasibility of our proposed coordination scheme. It concludes that coordinated pausing is a promising mechanism for tackling emerging risks from frontier AI models. However, a number of practical and legal obstacles need to be overcome, especially how to avoid violations of antitrust law.

Figure 1: The main steps of our proposed evaluation-based coordination scheme

## 1 Introduction

The past few years have shown a remarkable trend: more compute, larger datasets, and more parameters have led to the development of more capable artificial intelligence (AI) models. This phenomenon is commonly referred to as "scaling laws" [43, 38, 84, 86, 18, 96] and the claim that this trend will continue as the "scaling hypothesis" [33].1 While these scaling laws have been the driver of recent progress in AI development, they also have concerning implications. As models are scaled up, new capabilities can emerge unintentionally and unpredictably [30, 97], some of which might be dangerous [79].2 For example, models might become able to persuade and manipulate people [66, 54], discover cyber vulnerabilities [42, 79], or develop novel biological weapons [69, 94, 35]. These capabilities could be misused by malicious actors or used inadvertently by AI systems themselves. Some people even argue that certain combinations of capabilities could potentially lead to catastrophic outcomes [19, 59]. Footnote 1: Note that it has been argued that the current rate of scaling may be unsustainable [52]. Footnote 2: Note that a recent paper expressed doubts about this phenomenon [71].
In response, a suite of model evaluations that focus specifically on dangerous capabilities has emerged as a new risk assessment tool.3 In addition to developing these evaluations internally, some leading developers are taking proactive steps by involving external experts in safety evaluations before public releases. For example, before releasing GPT-4, OpenAI gave the Alignment Research Center's evaluation team (ARC Evals) early access to the model to assess the extent to which it possessed dangerous capabilities [63]. ARC Evals did the same with Anthropic's Claude and Claude 2 [5, 6]. In both cases, ARC Evals concluded that the versions they tested did not have such dangerous capabilities [11]. Yet, it remains unclear what developers should do if future evaluations actually discover sufficiently dangerous capabilities. This paper focuses on one possible response: coordinated pausing. The basic idea is that all frontier AI developers should pause certain research and development activities whenever and each time one of them discovers sufficiently dangerous capabilities. Developers only resume their paused activities if the discovered capabilities have been analyzed and adequate safety precautions have been put in place. Footnote 3: For an overview of other risk assessment techniques, see [45]. While there has been some work on evaluations for language models [20, 68, 50, 31], there is only limited work on dangerous capabilities evaluations. ARC Evals recently published a report in which they describe their methodology for assessing the capacity of language model agents to acquire resources, create copies of themselves, and adapt to novel challenges they encounter in the wild [44]. They have also published an update on their efforts to evaluate GPT-4 and Claude [11]. Details of both efforts can be found in the GPT-4 system card [63] and the Claude 2 model card [6]. In addition to this work, there is only a single introductory paper on dangerous capabilities evaluations [79] and another paper that proposes a regulatory regime in which evaluations play a key role [4].4 Footnote 4: Besides that, there only seem to be a few informal forum posts on the topic [47, 40, 21]. Despite this shortage of literature, many experts take the topic very seriously. In a recent expert survey (\(N=51\)), 98% of respondents somewhat or strongly agreed with the statement "AGI labs should run evaluations to assess their models' dangerous capabilities", while 93% thought that "AGI labs should pause the development process if sufficiently dangerous capabilities are detected" [73]. There have also been calls for a temporary moratorium in frontier AI development [29, 99], but these calls were not linked to dangerous capabilities evaluations. Taken together, scholars and practitioners show considerable interest in evaluations, but the question of what should happen if sufficiently dangerous capabilities are in fact discovered remains underexplored.5 Against this background, the paper seeks to answer two research questions (RQs): Footnote 5: Notably, ARC Evals recently announced plans to research responsible scaling policies, outlining how AI labs should scale, deploy, and contain models in the face of dangerous capabilities [10]. Yet, this initiative remains an outlier, as there are few similar efforts in the broader AI community. * **RQ1:** How can frontier AI developers coordinate to pause if one of them discovers a model with sufficiently dangerous capabilities? 
* **RQ2:** How desirable and feasible would an evaluation-based coordination scheme be? The paper has three areas of focus. First, it focuses on _dangerous capabilities evaluations_, but many considerations also apply to other types of evaluations (e.g. alignment evaluations).6 Our vision for the proposed coordination scheme is that it should only be triggered if sufficiently dangerous capabilities are discovered. This is especially important given how intrusive the intervention is by nature. Second, the paper focuses on _frontier AI developers_. We assume that the most concerning capabilities will only emerge in frontier AI models [4], defined as "models that are both (a) close to, or exceeding, the average capabilities of the most capable existing models, and (b) different from other models, either in terms of scale, design (e.g. different architectures or alignment techniques), or their resulting mix of capabilities and behaviors" [79]. Third, the paper focuses on a _collective solution_. The emphasis is not on what individual developers should do if they discover sufficiently dangerous capabilities, but how multiple (ideally all) frontier AI developers should respond to such a situation. Footnote 6: In principle, this intervention can be implemented using any type of model evaluations related to catastrophic AI risk, such as alignment evaluations or evaluations for cooperative AI. However, comprehensively integrating other evaluations is beyond the scope of this paper. We leave this expansion for future work and focus here on existing methodologies. The paper proceeds as follows. Section 2 proposes an evaluation-based coordination scheme. Section 3 discusses four versions of that scheme. Sections 4 and 5 discuss the desirability and feasibility of coordinated pausing. Section 6 concludes with suggestions for further research. ## 2 An evaluation-based coordination scheme In this section, we propose an evaluation-based coordination scheme for frontier AI developers. The scheme consists of five main steps as illustrated in Figure 1: * **Step 1: Dangerous capabilities evaluations.** Frontier AI models are evaluated for dangerous capabilities (Section 2.1). * **Step 2: Individual pausing.** Whenever, and each time, a model fails a set of evaluations, the developer pauses any further training and fine-tuning of that model. They also pause the development and deployment of similar models and do not publish related research (Section 2.2). * **Step 3: Coordinated pausing.** Other developers are notified whenever a model with dangerous capabilities has been discovered. They also pause the development and deployment of similar models and do not publish related research (Section 2.3). * **Step 4: Investigation during pausing.** The discovered capabilities are analyzed and adequate safety precautions are put in place (Section 2.4). * **Step 5: Resuming paused activities.** Developers only resume their paused activities if certain safety thresholds are reached (Section 2.5). In the following, we describe the five steps in more detail. For each of them, we identify key variables and list options. It is worth noting that, although the steps are described sequentially, there will be some overlap between them. For example, the investigation (Step 4) should arguably start as soon as dangerous capabilities are discovered (Step 2). 
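Purely as an illustration of how the five steps interlock, the sketch below expresses the scheme's control flow as Python pseudocode. The Developer class, the function names, and the callback arguments are hypothetical; nothing here is part of the paper's proposal or an actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Developer:
    """A participating frontier AI developer (hypothetical stand-in)."""
    name: str
    frontier_models: list = field(default_factory=list)
    paused: bool = False

def run_scheme(developers, fails_evaluations, notify, investigate, safety_threshold_reached):
    """Illustrative control flow of the five-step scheme.

    fails_evaluations(model)            -- Step 1: dangerous capabilities evaluations.
    notify(other_developers, incident)  -- Step 3: other developers are informed.
    investigate(incident)               -- Step 4: analysis and safety precautions.
    safety_threshold_reached(incident)  -- Step 5: decide whether to resume.
    """
    for developer in developers:
        for model in developer.frontier_models:
            if not fails_evaluations(model):             # Step 1: model passes
                continue
            incident = {"developer": developer.name, "model": model}
            developer.paused = True                       # Step 2: individual pausing
            others = [d for d in developers if d is not developer]
            notify(others, incident)                      # Step 3: coordinated pausing
            for other in others:
                other.paused = True
            investigate(incident)                         # Step 4: investigation during pausing
            while not safety_threshold_reached(incident):
                investigate(incident)                     # keep analyzing until it is safe
            for d in developers:                          # Step 5: resuming paused activities
                d.paused = False
```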
### Dangerous capabilities evaluations **Step 1: Frontier AI models are evaluated for dangerous capabilities.** **Which models should be evaluated?** Our proposed coordination scheme should only apply to frontier AI models, as defined above. Frontier AI models are particularly risky because "(a) more capable models can excel at a wider range of tasks, which will unlock more opportunities to cause harm; and (b) novel models are less well-understood by the research community" [79]. Ideally, all such models should be evaluated for dangerous capabilities.7 **What kind of evaluations?** Developers of frontier AI models need to run _dangerous capabilities evaluations_. Shevlane et al. provide a good overview of different types of dangerous capabilities [79]. We have already mentioned the ability to persuade and manipulate people [66, 54], discover cyber vulnerabilities [42, 79], and develop novel biological weapons [69, 94, 35]. Other potentially dangerous capabilities might include situational awareness, i.e. a model's ability to refer to and make predictions about itself as distinct from the rest of the world [25, 59]; power-seeking behavior, i.e. active efforts by a model to gain and maintain power in ways that its developers did not intend [19, 92, 91, 46]; and long-horizon planning, i.e. a model's ability to make sequential plans that involve multiple steps, unfolding over long time horizons [79]. We are aware of evaluations for power-seeking behavior [11, 44] and efforts to develop evaluations for deception [8], situational awareness [27], and manipulation [1]. We are unaware of evaluations for other capabilities, such as the ability to exploit vulnerabilities in software systems or develop weapons.8 In any case, our proposed coordination scheme should only apply to evaluations that try to discover _sufficiently_ dangerous capabilities, which should be interpreted restrictively. It is beyond the scope of this paper to suggest what these danger thresholds should be. However, the intervention we are proposing is very intrusive and may only be appropriate if _ex post_ remedies would be insufficient. Footnote 8: But note that the red team OpenAI commissioned before releasing GPT-4 assessed the model’s ability to discover and exploit cybersecurity vulnerabilities, and to support social engineering. The red team also tested whether GPT-4 could provide the necessary information to develop, acquire, or disperse nuclear, radiological, biological, and chemical weapons [63]. **Who creates, maintains, and runs the evaluations?** Dangerous capabilities evaluations could be created and maintained by the developers themselves. A number of frontier AI developers already seem to have internal evaluation programs [79, 47], though it is difficult to comment on such efforts from the outside. Alternatively, third-party organizations like ARC Evals or Apollo Research or academic centers like Stanford's Center for Research on Foundation Models could take on these roles, either on their own or in collaboration with developers. The task of running evaluations could also be shared between developers and third parties. Another option is having one developer scrutinize another's work, although a recent survey found little support for inter-lab scrutiny [73]. It is worth noting that the actor who creates and maintains the evaluations does not necessarily need to be the one who runs them. For example, evaluations might be created by a third party, but run by the developers themselves.
**How is compliance monitored and enforced?** Developers are incentivized to deploy models quickly, but running evaluations takes time. We should therefore expect that some developers will not run dangerous capabilities evaluations. This raises the questions of how compliance should be monitored and enforced. By default, there are no monitoring and enforcement mechanisms. The scheme would rely on the goodwill of frontier developers. However, developers could support various monitoring mechanisms on a voluntary basis. One option would be to create and maintain a whistleblower program. Employees who find out that their company decided against running evaluations on frontier models could reveal such information to a trusted party, such as an ethics board [74], an internal audit team [72], or a regulator. Another option would be to give a third party certain investigative powers (e.g. the right to access documents, interview employees, or attend meetings). This could include an ethics board [74], an auditor (e.g. ARC Evals), an industry body (e.g. Frontier Model Forum), a multi-stakeholder organization (e.g. Partnership on AI), or a regulator. If developers make legally binding commitments to run dangerous capabilities evaluations, they might face contractual liability if they break them. Finally, if at some point frontier AI developers are required by law to run dangerous capabilities evaluations, such laws would likely entail provisions about the enforcement of such requirements (e.g. via fines and penalties). ### Individual pausing **Step 2:**_Whenever, and each time, a model fails a set of evaluations, the developer pauses further training and fine-tuning of that model. They also pause the development and deployment of similar models and do not publish related research._ **When do developers pause?** Developers should pause certain research and development activities whenever, and each time, a model fails a set of dangerous capabilities evaluations. This trigger is one of the main differences to a general moratorium on frontier AI development [29]. But when exactly does a model "fail" a set of evaluations? Defining this danger threshold is one of the most important parts of our proposed scheme. Unfortunately, it is also one of the most difficult parts. On the one hand, the threshold should be very high. Since pausing is a very intrusive intervention, it should only be triggered in rare cases. On the other hand, the pausing scheme is intended to prevent severe harm. It is therefore crucial to avoid false negatives, i.e. models that do have dangerous capabilities do not trigger a pause. How to balance these and other considerations is still an open question. It is beyond the scope of this paper to suggest danger thresholds or tiers for evaluation results. **What do they pause?** Developers should pause four types of activities: * **Development.** First, they should pause the development of the model that failed the evaluations. More precisely, they should pause any ongoing training runs and delay any scheduled training runs [79].9 This also applies to fine-tuning and reinforcement learning from human feedback (RLHF). To avoid situations in which the same or similar dangerous capabilities emerge in other models, developers should also pause the development of similar models. Models can be similar in terms of their architecture, size, training data, or compute, to name just a few criteria. It is beyond the scope of this paper to suggest measures and thresholds for similarity. 
This is an open question that needs more research. Footnote 9: Shevlane et al. note that frontier AI developers should factor in potential pauses in their research plans [79]. For example, they should plan in advance how they would backfill vacant computing resources with other projects. They should also avoid promising certain release dates. * **Deployment.** Second, developers should pause the deployment of the model that failed the evaluations. It would be inconsistent if developers were required to pause the development process, but still allowed to deploy the model. For the same reasons, they should also pause the deployment of similar models. * **Access.** Third, developers should restrict access to similar models that they have already deployed [60]. This is not possible if such models are open-sourced. Participating developers should therefore not open-source frontier models [75] and instead deploy them via an API [77, 81]. We wish to emphasize that open-sourcing non-frontier models--and the vast majority of models are non-frontier models--is often a valuable contribution to the AI research community and society more generally. * **Related research.** Fourth, developers should pause the publication of related research. This avoids the possibility of other actors quickly developing models with similar capabilities. The research necessary to develop such models should therefore not be publicly available (sometimes this is already the case). Relatedly, developers should arguably also pause doing related research itself, but we are less certain about that. **How is compliance monitored and enforced?** The monitoring and enforcement options are similar to the ones mentioned above (Section 2.1). By default, individual pausing cannot be enforced, but developers could take voluntary steps to support the monitoring of compliance (e.g. by maintaining a whistleblower program). They could also give a third party certain investigative powers (e.g. an ethics board or an auditor). If developers are legally required to pause, supervisory authorities would likely be able to monitor and enforce compliance. ### Coordinated pausing **Step 3:**_Other developers are notified whenever a model with dangerous capabilities has been discovered. They also pause the development and deployment of similar models and do not publish related research._ **How are other developers notified?** There are three ways in which other developers can be notified. First, the developer who has trained the model with dangerous capabilities could notify other developers directly (e.g. via email). Second, the developer could make the incident public. For example, they could tweet about the incident, publish a blog post (e.g. similar to OpenAI [64]), or make an entry in an incident database [53]. Third, the developer could notify a third party who could then notify other developers. The third party could be a mutual auditor: a single organization runs evaluations on frontier models of multiple developers and notifies all developers they work with if a model fails a set of evaluations. The auditor's right to notify other developers (and the developers' right to be notified) would have to be specified in the contract between the auditor and each of the participating developers (e.g. it could be a standard clause in the audit agreement). Other potential third parties include industry bodies (e.g. Frontier Model Forum), multi-stakeholder organizations (e.g. 
Partnership on AI), or regulators, especially if they were involved in previous steps. Involving a third party could be an elegant way to avoid some of the antitrust concerns mentioned in Section 5. **What do they pause?** Other developers should pause the development and deployment of models similar to the one with dangerous capabilities, restrict access to similar models that have already been deployed, and not publish related research (Section 2.2). Again, it is beyond the scope of this paper to define which models are "similar" and which research is "related". **How is compliance monitored and enforced?** The monitoring and enforcement options are similar to those mentioned above (Section 2.1 and 2.2). ### Investigation during pausing **Step 4:**_The discovered capabilities are analyzed and adequate safety precautions are put in place_. **What happens during pausing?** During pausing, four things should happen. First, the model should be contained (e.g. via boxing or air-gapping) as soon as dangerous capabilities are discovered [13; 11; 73]. Developers should also increase their efforts to prevent leakage and theft of the model. In the future, this might require military-grade information security sufficient to defend against nation states [73]. Second, the model should be analyzed to determine why it failed the evaluations. This might involve additional tests of the model's behavior or attempts to understand the inner workings of the model via interpretability research.10 Third, the developer should take measures to make the model safer, for example, via fine-tuning [83], reinforcement learning from human feedback (RLHF) [22; 101; 49], or reinforcement learning from AI feedback (RLAIF), more commonly known as "constitutional AI" [14]--though it is also possible that existing techniques will not be sufficient. Fourth, the developer should put in place adequate safety controls. For example, they might only deploy the model in stages [82; 81], via an API [82; 24; 77; 81], and with certain restrictions (e.g. who can use the model, how they can use the model, and whether the model can access the internet). Shevlane et al. list a number of variables that affect the risk level of deployment [79]. Footnote 10: Although there are promising developments [57], the field of mechanistic interpretability is still in its infancy [56; 61]. Conducting interpretability research is very time-consuming and does not yet seem practical in a pausing context. **Who does the investigation?** Most of the above-mentioned activities need to be performed by the developer of the model that has triggered the pause (e.g. model containment). However, the developer could also be supported by other actors. For example, if the evaluations are run by an independent auditor, this auditor will often be best equipped to analyze why the model has failed the evaluations. The developer might also bring in additional auditors (e.g. to replicate the evaluation results or run additional evaluations) or external researchers (e.g. to conduct interpretability research). In theory, it would also be conceivable that other developers support the investigations, but in practice it does not seem politically feasible (e.g. because of antitrust and confidentiality concerns).11 Other developers should also take corresponding measures where appropriate (e.g. taking additional measures to make their models safer and strengthening their safety controls). The entire investigation should probably be overseen by a third party (e.g. 
an auditor, ethics board, multi-stakeholder organization, or regulator). Footnote 11: In a recent expert survey (\(N=51\)), inter-lab scrutiny was one of the least supported items, though it still received more agreement than disagreement [73]. On a scale from -2.0 (strongly disagree) to 2.0 (strongly agree), the mean (\(M\)) rating for inter-lab scrutiny was 0.7. It is worth noting that, while not statistically significant, we saw higher support for this statement from respondents from AGI labs (\(M=1.2\)) in comparison to respondents from academia (\(M=0.3\)) and civil society (\(M=0.2\)). ### Resuming paused activities **Step 5:**_Developers only resume their paused activities if certain safety thresholds are reached._ **When can developers resume their paused activities?** The decision to resume the paused activities raises some of the same issues as the initial decision to pause. Defining danger thresholds (when should frontier AI developers pause?) and safety thresholds (when can they resume their paused activities?) are essentially two sides of the same coin. However, it might make sense to set the safety threshold higher than the danger threshold. To reach that threshold, the model may need to pass a more demanding and more diverse set of evaluations. As above (Section 2.2), more research is needed to determine when a developer's understanding of certain capabilities is sufficient and what kind of safety precautions are adequate. This will likely require a holistic evaluation of each case, taking into account technical, social, and organizational factors. The decision will inherently involve uncertainties. And the actor who makes this decision will likely need discretion. **Who decides when safety thresholds are reached?** Each developer could make that decision for themselves. Absent legal requirements or voluntary commitments, other developers will likely want to resume their paused activities before the developer of the model with dangerous capabilities. It would also be conceivable that all participating developers make a collective decision (e.g. they could vote), but this may raise antitrust concerns. Another option would be that a third party makes that decision on behalf of the developers. This could be the auditor who ran the evaluations that discovered the dangerous capabilities, an industry body (e.g. Frontier Model Forum), a multi-stakeholder organization (e.g. Partnership on AI), or a regulator. In this section, we have described the five steps of our proposed coordination scheme. We have identified key variables and listed options. However, we have not discussed how different options could be combined in a coherent way. We will turn to this next. ## 3 Concrete versions of the proposed coordination scheme In this section, we discuss four concrete versions of the coordination scheme proposed above (Section 2). In the first version, pausing is completely voluntary and relies on public pressure on developers (Section 3.1). In the second version, participating developers collectively agree to pause under certain conditions (Section 3.2). In the third version, a single auditor runs evaluations on models of multiple developers and they agree to pause if any model fails a set of evaluations (Section 3.3). In the fourth version, developers are legally required to run evaluations and pause if dangerous capabilities are discovered (Section 3.4).
For each of the four versions, we explain how they work, suggest variations, discuss their main benefits and limitations, and make recommendations. Table 1 contains an overview of the four versions.

Table 1: Overview of four concrete versions of the proposed coordination scheme

| | **Voluntary pausing** | **Pausing agreement** | **Mutual auditor** | **Required pausing** |
| --- | --- | --- | --- | --- |
| **Step 1: Dangerous capabilities evaluations** | | | | |
| Which models should be evaluated? | Frontier AI models | — | — | — |
| What kind of evaluations? | Dangerous capabilities | — | — | — |
| Who creates, maintains, and runs the evaluations? | Developers and/or third party | Auditor | Auditor | Auditor |
| How is compliance monitored and enforced? | No monitoring and enforcement, only public pressure and whistleblowing | Other developers | Auditor | Regulator |
| **Step 2: Individual pausing** | | | | |
| When do developers pause? | A frontier AI model fails a set of dangerous capabilities evaluations | — | — | — |
| What do they pause? | Development, deployment, related research | — | — | — |
| How is compliance monitored and enforced? | No monitoring and enforcement, only public pressure and whistleblowing | Other developers | Auditor | Regulator |
| **Step 3: Coordinated pausing** | | | | |
| How are other developers notified? | Results of evaluations and incidents are made public | Developer | Auditor | Regulator |
| What do they pause? | Development, deployment, related research | — | — | — |
| How is compliance monitored and enforced? | No monitoring and enforcement, only public pressure and whistleblowing | Other developers, contractual penalties | Auditor | Regulator |
| **Step 4: Investigation during pausing** | | | | |
| What happens during pausing? | Model is contained, incident is analyzed, model is made safer, safety controls are implemented | — | — | — |
| Who does the investigation? | Developer and/or third party | Developer and/or third party | Auditor, developers cooperate | Auditor, supervised by regulator, developers cooperate |
| **Step 5: Resuming paused activities** | | | | |
| When can developers resume their paused activities? | Safety threshold is reached | — | — | — |
| Who decides when this is the case? | Developers | Developers | Auditor | Regulator |

### Voluntary pausing In the first version, pausing is completely voluntary and relies on public pressure on developers. **How it works.** Frontier AI developers do not make any commitments to pause and there are no legal requirements. Running evaluations and pausing is completely voluntary, but developers face public pressure to do so. In particular, they are expected to publish the results of dangerous capabilities evaluations (e.g. in short summary reports) before deploying frontier models or publishing related research. Some developers have already made a high-level commitment along these lines [89]. These evaluations may be conducted by external auditors or developers themselves. Publishing evaluation results creates a "public commentary period" which allows the wider AI research community to scrutinize the evaluation results and raise concerns. Depending on how serious such concerns are, the developer of that model and other developers might be pressured to pause certain research and development activities. The length and nature of the pausing period would be at the discretion of the developers, but it would be affected by how much pressure is put on them. Monitoring relies on whistleblowing, though in individual cases, regulators may request additional information [100]. **Variations.** In the version described above, developers do not make any commitments. But it would be conceivable that they make at least a soft commitment. For example, they could publish a blog post in which they commit to evaluate frontier models and pause if dangerous capabilities are discovered in any frontier model. Google DeepMind's post on dangerous capabilities evaluations is a promising sign [78], but it remains vague. The post does not specify what kinds of evaluations Google DeepMind currently runs, it does not contain an explicit commitment to pause, and it does not define any thresholds. **Benefits.** Of the four versions, voluntary pausing is the most feasible one. It is close to the status quo. Frontier AI developers already run evaluations and there is already an expectation to pause if dangerous capabilities are discovered [79; 73], though this pressure might not yet be strong enough. A benefit of this version is that it is fairly light-touch. Developers do not have to negotiate a contract and policymakers do not have to pass new laws or regulations. The administrative burden on developers is also comparably small. Another benefit of this version is that it is fairly flexible. Since the appropriate response to a set of failed evaluations is not enshrined in any way, this version can quickly react to scenarios in which frontier models are less dangerous than expected (do not pause) or even more dangerous (take more extreme measures). Changing expectations often takes less time than amending a contract, regulation, or law. A single incident might be sufficient to cause a public outcry that puts significant pressure on developers [23]. **Limitations.** From a societal perspective, voluntary pausing is not particularly desirable. Since there are no monitoring and enforcement mechanisms, there is no reliable way to ensure compliance. While public pressure does incentivize compliance to some extent, other incentives might be even stronger (e.g. money, prestige, national interests), especially if the stakes are high. Developers might also be reluctant to publish the results of their evaluations. They would open themselves up to public scrutiny with unpredictable PR risks. And even if they do publish the results of their evaluations, the public commentary period might still lead to counterproductive outcomes. For example, one could imagine that the discourse becomes politicized and dominated by non-safety considerations. Another limitation is that there would be little consistency between developers in terms of evaluations and pausing. While this would still be better than the status quo, a single developer who does not participate might be enough to cause severe harm. Finally, since models are not tested by an independent third party, developers can--intentionally or not--run evaluations in a way that ensures their models remain below the danger threshold. This concern is related to Goodhart's law which states that "when a measure becomes a target, it ceases to be a good measure" [85] (Section 5). **Recommendation.** We would strictly prefer any of the other versions over this one.
But as long as there are no pausing agreements (Section 3.2), audit agreements (Section 3.3), or pausing requirements (Section 3.4), external stakeholders (e.g. independent researchers and civil society organizations) should put pressure on frontier AI developers to run evaluations and pause if sufficiently dangerous capabilities are discovered. In particular, they should voice their expectations that developers publish the results of evaluations to allow for a "public commentary period". They should also advocate for binding commitments and eventually legal requirements. Overall, this version should only be seen as an intermediate solution. Once public pressure is strong enough, other versions will likely become more feasible. ### Pausing agreement In the second version, participating developers collectively agree to pause under certain conditions. **How it works.** Participating developers negotiate a contract (Figure 2a). In that contract, they all commit to commission a third party to run dangerous capabilities evaluations, notify the other contracting parties if a model fails a set of evaluations, and pause certain research and development activities until certain safety thresholds are reached. Compliance is monitored by the developers themselves and enforced via contractual penalties. Conflicts resulting from the agreement are resolved by an independent arbitrator (e.g. a panel of experts). **Variation.** Instead of a collective pausing agreement, developers could make individual agreements with a third party (Figure 2b). An obvious candidate would be the Frontier Model Forum, an industry body founded by Anthropic, Google DeepMind, Microsoft, and OpenAI in July 2023 [62]. As a condition of membership, the Frontier Model Forum could legally require developers to commit to coordinated pausing in their terms and conditions. Specifically, it could mandate that members notify the Forum whenever one of their models fails a set of dangerous capabilities evaluations, and agree to pause development when notified that any member's model has failed evaluations. While the Forum would not conduct evaluations itself, by making coordinated pausing a binding membership requirement, it could serve as an intermediary to facilitate implementation of the intervention. This approach is somewhat similar to using a mutual auditor (Section 3.3). The main difference is that the Forum would leverage membership rules rather than direct auditing agreements to coordinate pausing, while preserving antitrust law compliance. **Benefits.** The main benefit of a pausing agreement over voluntary pausing (Section 3.1) is that participating developers make a legally binding commitment. Compliance with this commitment is monitored and enforced, which will likely lead to higher degrees of compliance. It also seems more realistic that developers enter into a pausing agreement than that policymakers create pausing requirements (Section 3.4),12 though the regulatory debate in the US and UK has picked up speed. Footnote 12: Collective or individual pausing agreements seem about as feasible as an agreement with a mutual auditor (Section 3.3). **Limitations.** Some scholars and practitioners have voiced the concern that this kind of cooperation between developers violates US and EU antitrust laws. We imagine that individual agreements with a third party (e.g. the Frontier Model Forum) would not run into this problem.
However, since we are not antitrust experts and a legal analysis is beyond the scope of this paper, we do not want to comment on the matter. Notably though, precedents exist in other industries for granting antitrust exemptions on matters of public importance. If coordinated pausing is deemed sufficiently important for managing AI risks, exploring similar limited exemptions may be warranted. Regardless of this, a pausing agreement would have other limitations. We are skeptical that frontier AI developers would be willing to enter into a legally binding pausing agreement. And even if they do, monitoring and enforcing the pause would still be left to the private sector with little or no public assurance. This would be problematic because we think that democratic institutions need to be involved if dangerous capabilities are in fact a serious threat to public safety and security [4, 79, 76]. It is also worth noting that in many jurisdictions it is not possible or at least very difficult to "force" a contracting party to comply. In principle, participating developers can still decide not to pause and pay the contractual fine (even though this will likely cause severe reputational damage). **Recommendation.** To clarify the antitrust concern, developers may want to consult a specialized law firm to write a somewhat authoritative legal opinion on the topic. We also encourage legal scholars to analyze the question in detail [39]. In general, we think that a pausing agreement would be better than voluntary pausing (Section 3.1), but it would still not be ideal. We would therefore prefer an audit agreement (Section 3.3) or pausing requirements (Section 3.4). In the meantime, we recommend that Anthropic, Google DeepMind, Microsoft, and OpenAI give the Frontier Model Forum the mandate to oversee their evaluation activities and, most importantly, membership in the Forum should require a pausing commitment. ### Mutual auditor In the third version, a single auditor evaluates models of multiple developers who agree to pause if any model fails a set of evaluations. **How it works.** All participating developers make an agreement with the same external auditor (Figure 3a). They authorize the auditor to run dangerous capabilities evaluations on all frontier models they develop. Evaluations are developed and updated by the auditor with input from the developers. The developers commit to pause certain research and development activities if the auditor informs them that one of their models has failed a set of evaluations. They also give the auditor permission to notify other developers about the incident. Conversely, they commit to pause certain research and development activities if the auditor notifies them that a model from another developer has failed a set of evaluations. Finally, they commit to only resume the paused activities if the auditor gives them permission to do so. At the moment, ARC Evals seems to be the only organization that would be able to serve the role of a mutual auditor [9]. However, we suspect that more organizations will be set up in the future. **Variation.** Instead of commissioning the same auditor, different developers could make agreements with different auditors (Figure 3b). Auditors may be highly specialized organizations who run their own evaluations (e.g. ARC Evals and Apollo Research), or large audit firms without deep evaluation expertise (e.g. KPMG and Deloitte) who subcontract researchers or specialized organizations.
However, any failed evaluation from any auditor would have to initiate a pause as described above. The auditor of the potentially dangerous model could either inform other auditors or other developers. If the auditors serve as licensed private regulators, this variation would be very close to the proposal of a "regulatory market" [34]. The value of multiple auditors is that it reduces the coordination effort required to agree on a mutual auditor. Developers have more choice, both with respect to evaluations and the auditors running them. It also allows developers to discover multiple failure modes, instead of just one standard set. On the flip side, it is harder to enforce a coordinated pause. For example, it will be difficult to agree on a danger threshold across different evaluations that different auditors run. In some cases, developers who are lagging behind might even want to trigger a pause to catch up.

Figure 2: Collective pausing agreement **(a)** and individual pausing agreements with a third party **(b)**

**Benefits.** This version has three main benefits. First, the quality of the evaluations would be more consistent. The same actor would run the same evaluations, following the same process, using the same danger and safety thresholds. If the auditor accepts input from the wider AI safety community, their evaluations might actually represent the current state of the art, especially if they are routinely updated. Second, third-party evaluations tend to be less biased than internal evaluations. As a result, it is more likely that a pause will actually be imposed if necessary. Third, since the auditor would have access to the models, they can monitor compliance, at least to some extent. **Limitations.** The following limitations seem most important to us. First, developers might be hesitant to give too much power to a single auditor, especially if the auditor has discretion and needs to make subjective judgments. Pausing certain research and development activities would have significant consequences for developers. They might lose millions or even billions of dollars in revenue, undermine their market position, and risk negative PR. They might only be willing to expose themselves to such risks if they trust the auditor, the evaluations are sufficiently objective, and their main competitors also participate. But even this might not be enough. Second, in some cases, pausing might not be enough. The model that has failed the evaluations might already be so dangerous that simply pausing the training run might be an insufficient countermeasure. This might include cases where some sort of paradigm shift would be needed to avoid similar safety incidents. Third, some evaluations are similar to gain-of-function research. To see if a model has certain dangerous capabilities, the evaluator tries to elicit such behavior. Depending on the behavior, this type of evaluation might be extremely dangerous (e.g. power-seeking behavior). If such evaluations are conducted by an irresponsible actor, the measure might ultimately increase the risk. **Recommendation.** This version seems particularly promising to us. It seems to be close to the sweet spot between desirability (i.e. it would be good from a societal perspective) and feasibility (i.e. there is a realistic chance that it would be implemented). Different stakeholders within and outside frontier AI developers should advocate for this option and policymakers should encourage it (e.g.
in meetings with senior executives of frontier AI developers [88]). This option should also be on the agenda of the upcoming global summit on AI safety [36]. ### Pausing requirements In the fourth version, developers are legally required to run evaluations and pause if dangerous capabilities are discovered.

Figure 3: All participating developers commission the same auditor **(a)** or different developers commission different auditors **(b)**

**How it works.** New laws or regulations require frontier AI developers to commission an external auditor to run dangerous capabilities evaluations on all frontier models. These laws also require developers to pause certain research and development activities and immediately notify a regulatory body whenever and each time one of their models fails a set of evaluations. This body, in turn, alerts other developers, asking them to also suspend similar activities. An independent investigation into the incident is then initiated by the auditor, in collaboration with the developer and under the regulator's supervision. Compliance is overseen by the regulator, who possesses investigative authority and can levy administrative fines. The decision to resume paused activities is made by the regulator, based on recommendations from the auditor. **Variation.** Instead of creating new laws or regulations, regulatory bodies could try to use existing powers to enforce a pause. For example, the US Federal Trade Commission (FTC) has recently opened an investigation into OpenAI [100]. While the investigation focuses on potential violations of consumer protection laws, it seems plausible that the FTC or other regulators would also intervene in situations where it becomes publicly known that a model has failed a set of dangerous capabilities evaluations, creating incentives via legal liability. A detailed analysis of different powers of different regulatory bodies is beyond the scope of this paper. Another way of implementing this model could be through regulatory markets [34]. This means that the government would set overall policy aims but rely on private regulators to determine the specific methods used for the intervention. In this case, developers would be legally required to purchase regulatory services from approved auditors. Auditors would be empowered to run evaluations, mandate pausing if triggers are met, oversee investigations, and approve resuming activities. The government provides ongoing oversight and can influence auditors through policy and incentives, but is under no pressure to acquire state-of-the-art technical expertise itself. **Benefits.** The main benefit of this version is that it can ensure the highest levels of compliance. Depending on their precise powers, regulators can use various monitoring and enforcement measures to ensure that frontier AI developers actually run evaluations and pause if a model fails a set of evaluations [4]. It is also the only version where a democratically legitimated actor is involved in the pausing decision. If a model does in fact pose a serious threat to public safety and security, the government needs to be involved. **Limitations.** Pausing requirements would also have a number of limitations. First, creating new laws and regulations takes time. However, many experts worry that frontier models might very soon be able to cause very severe harm (e.g. by enabling malicious actors to develop biological weapons). Second, frontier AI developers might lobby for weaker requirements.
Although many developers actively support such requirements [3; 2], one should still be concerned about regulatory capture [4]. Third, regulators might be incentivized not to enforce the pausing requirements. The government might expect them to interpret their mandate in a laissez-faire, industry-friendly way (see e.g. [93]). Fourth, the introduction of pausing requirements would raise a number of further challenges. For example, it will be very difficult to define terms like "frontier AI model" and "dangerous capabilities" in a precise and future-proof way [4; 72]. **Recommendation.** We think that frontier AI developers should eventually be required by law to run evaluations and pause if dangerous capabilities are discovered [4; 79]. We highly recommend that policymakers, above all the US and UK governments, seriously consider policy options along these lines. The recent announcement by the White House [89], which explicitly mentions "capability evaluations" and "dangerous capabilities", is a promising step in this direction. ## 4 Desirability In this section, we discuss some of the benefits (Section 4.1) and potential harms (Section 4.2) of our proposed coordination scheme. ### Benefits Coordinated pausing would have a number of benefits. But since it is a novel intervention, there is not yet any empirical evidence in support of these benefits. They are mainly based on abstract plausibility considerations. **Preventing further scaling of dangerous models.** While running evaluations increases the chance that dangerous capabilities are discovered right after they emerge, our pausing scheme reduces the chance that developers further scale up such models. This is important because scaling up models with dangerous capabilities would likely make them even more dangerous. Without a pausing scheme, it seems plausible that at least some developers would continue scaling up their own models, even though a model with dangerous capabilities has been discovered. **Preventing the deployment of dangerous models.** Pausing also reduces the risk that models with dangerous capabilities are deployed. Although models might already pose some risks before they are deployed (e.g. because they are used internally, leaked, or stolen), most risk scenarios require models to be deployed (i.e. made available to the public). While most developers would probably not deploy a model that has failed a set of dangerous capabilities evaluations, it seems plausible that other developers would continue deploying similar models. This would be bad because one might expect that similar capabilities will emerge in similar models. Put simply, pausing turns would-be catastrophes into warning shots. **Buying more time for safety research.** Pausing creates more time for safety research. During the pause, safety researchers can study why a model has failed its evaluations, how to make the model safer, and what safety controls would be adequate (Section 2.4). They might also discover other safety issues. We think that buying more time for safety research may be one of the main benefits of our proposed pausing scheme. It seems plausible that safety research in this period is particularly valuable, mainly because it is possible to conduct empirical research on real models that pose real dangers. In the past, a lot of safety research was either theoretical or relied on toy models.
The underlying principle--promoting risk-reducing technologies while delaying risk-increasing ones--has been referred to as "differential technological development" [16, 65, 70].

**Slowing down a race to the bottom.** Pausing might slow down a race to the bottom on safety. Commercial pressure might incentivize developers to cut corners on safety to get ahead of their competitors [12, 58]. For example, after OpenAI released ChatGPT, Google famously announced it would "recalibrate" the level of risk it is willing to take [32]. If a developer gets an advantage by neglecting safety, others are incentivized to do the same. Otherwise, they might be left behind. However, during a pausing period, developers who have neglected their safety efforts would be able to catch up. This would at least temporarily stop a downward spiral.

**Shifting the Overton window.** Coordinated pausing might contribute to shifting the Overton window for other safety interventions, such as introducing strict domestic regulations on frontier models [4] or setting up new international institutions [37]. Every time a model fails a set of evaluations and participating developers pause, the incident would raise awareness of the dangers of frontier models. These "warning shots" would make other safety interventions increasingly politically feasible. We think that, although coordinated pausing may contribute to an Overton window shift, the effects of the scheme should not be overstated. In a world where some frontier models in fact fail dangerous capabilities evaluations, it seems likely that policymakers and the public would already be aware of the dangers and consider other interventions.

**Creating good incentives.** Coordinated pausing creates good incentives. Since pausing has a number of negative consequences, developers would likely want to avoid pauses. The most straightforward way a developer can avoid pauses is by ensuring that their own models and models of other developers pass evaluations. This provides an incentive to invest more in safety research and share insights with other developers.

### Potential harms

Below, we discuss ways in which our proposed coordination scheme might be harmful.

**Providing China with more time to catch up.** At the moment, Chinese AI companies seem to be behind their US competitors [26, 90]. However, one might worry that pausing frontier AI development in the US would give Chinese AI companies time to catch up.13 Our best guess is that this concern is overblown. There are at least three reasons for this. First, we do not expect frontier AI developers in the US to pause for enough time for China to catch up in a meaningful way. Second, the US export controls on advanced computing and semiconductor manufacturing items [95, 80] make it harder for Chinese AI companies to get access to cutting-edge chips, which are necessary to train frontier models. This seems to be a meaningful constraint, even though Chinese firms have found some ways to evade the restrictions [28]. Third, Chinese AI companies have weaker incentives to develop frontier models, especially language models, mainly because they fear repercussions from the Chinese Communist Party (CCP).

**Providing a false sense of security.** One might worry that frontier AI developers and other stakeholders (e.g. regulators) rely too much on evaluations and coordinated pausing as their main intervention to reduce catastrophic risks from AI.
This would be problematic if our proposed scheme alone is insufficient--which it probably is--and additional measures are not pursued. Pausing might buy negligible time for extra safety research, while providing a false sense of security to capabilities researchers. Again, our best guess is that the concern is overblown. We think it is unlikely that people would overly rely on the scheme. There seems to be an overall consensus among AI governance scholars and practitioners that there are no silver bullets and we need many different interventions ("defense in depth"). People would rather see it as yet another mechanism in a portfolio of mechanisms. For example, in a recent proposal for frontier AI regulation, dangerous capabilities evaluations would only inform a broader risk assessment [4].

**Maintaining market position.** It is possible that, if this intervention is implemented, developers ahead in capabilities would have incentives to dishonestly trigger pausing periods to delay the progress of their competitors. This risks distorting the entire scheme by transforming pauses into a mechanism for suppressing competition rather than promoting safety. To mitigate this, it is crucial that evaluations are conducted transparently (even if they are developed in-house and on a voluntary basis), so that other researchers can ensure the evaluations serve their intended safety purpose. Whistleblowing schemes could add such a layer of transparency, making it riskier for lab leadership to manipulate the intervention. Given that such manipulation would likely require coordination across multiple teams, the presence of a whistleblowing mechanism would reduce the likelihood of internal trust sufficient for such a scheme.

**Discontinuous scaling.** Rapid AI capability advancements could occur post-pause. If developers continue to make (even restricted) algorithmic improvements while paused, they could create a sudden leap in capabilities once the pause is lifted. For example, if developers are restricted to working on smaller models during a pause, they might focus on fine-tuning and developing new techniques. When they are allowed to use more computing power again, these improvements could combine to create a big jump in capabilities, catching regulators off guard. This could even lead to a "hard takeoff", where AI capabilities advance very quickly [98]. To prevent this, it is crucial that the pause restrictions are designed to actually slow down capabilities research, not just limit deployment.

**"Wolf cries".** While the open letter "Pause Giant AI Experiments" [29] has received some support [15], it has also been criticized [41, 67]. In general, it has likely contributed to pushback against the concern that future AI systems might cause catastrophic or even existential risks. Skeptics see a discrepancy between current capabilities and warnings of imminent threats. One might worry that if capabilities become more dangerous, people will take justified warnings less seriously. This situation is similar to the fable of the boy who cried wolf.14 Our proposed coordination scheme might make this scenario more likely. We wish to emphasize that current warnings might very well be justified, not because existing models already pose catastrophic risks, but because we need to be prepared for scenarios in which the next generations of models do.

Footnote 14: A shepherd boy repeatedly fools villagers into thinking a wolf is attacking his town's flock.
When an actual wolf appears and the boy calls for help, the villagers believe that it is another false alarm, and the sheep are eaten by the wolf.

In this section, we have discussed the main benefits and potential harms of coordinated pausing. We conclude that coordinated pausing is a promising mechanism for tackling emerging risks from frontier AI models. But could it actually be implemented?

## 5 Feasibility

This section discusses the feasibility of our proposed coordination scheme. The following factors seem most important; that is, we expect the intervention to fail if they prove to be insurmountable.

**Violation of US and EU antitrust law.** One concern is that coordination between AI developers could violate antitrust laws in the EU and US. In the European Union, Article 101(1) of the Treaty on the Functioning of the European Union prohibits and nullifies agreements between companies that have the effect of restricting competition. A coordinated commitment to pause and any communication between developers about pausing plans could potentially violate this law. However, there are ways developers may be able to avoid this issue. For example, they can make independent commitments to pause without discussing them with each other. This avoids any explicit agreement or _concurrence of wills_ between competitors. Similarly, when communicating about dangerous capabilities discoveries, developers can avoid explicitly mentioning plans to pause or encouraging others to do so as well. Using third parties like independent auditors or regulators as intermediaries for sharing information may also help mitigate these concerns. For example, a mutual auditor can notify developers of failed evaluations without the developers communicating directly. Finally, developers can avoid sharing commercially sensitive information about models with one another, to the extent it is possible. In the United States, Section 1 of the Sherman Antitrust Act lays out the relevant law on this matter, and similarly prohibits conduct that unreasonably restrains trade in a way that is harmful for competition. In addition to the measures mentioned above, it is also crucial that any coordination scheme between AI developers in America does not include explicit restrictions on price or output that position developers to profit. Retaliatory actions against non-participating developers should also be avoided. Additionally, it is worth noting that US courts may not accept a defense of coordination schemes based on public policy merits if they are found to be anticompetitive. Consulting with regulatory bodies like the US Federal Trade Commission, the US Department of Justice, and the European Commission may be an important strategy to ensure compliance with antitrust law.

**Enforcement concerns.** Enforcing compliance with pausing commitments could also be challenging. This is especially the case for voluntary schemes that lack formal oversight. While public reputation provides some incentive to comply, it has limitations as an enforcement tool. For instance, the strength of public pressure could fade over time if pausing is rarely triggered, as media attention shifts to other issues. Developers might also try to strategically influence public discourse to portray the intervention negatively and reduce compliance pressure. Similarly, if highly profitable models emerge during pause periods, commercial incentives could override reputational concerns about violating commitments.
Thus, while voluntary compliance is worth pursuing, robust legal authorities and enforcement tools would likely be needed to ensure developers adhere to pausing in impactful cases. Despite this, there may be some lower-cost enforcement options to consider. For example, cloud computing providers could be pressured into updating their terms of service to contractually enforce pauses by restricting access to resources during pause periods. Yet, overall, relying solely on public pressure will provide limited assurance.

Enforcing compliance through legal agreements with auditors also poses challenges. As previously mentioned, it is often challenging to force contracting parties to actually fulfill their obligations, rather than just pay damages for breaking the contract. Developers could choose to breach a pausing agreement and accept the financial consequences instead. However, structuring agreements so penalties are sufficiently large and tied to revenue could make violations prohibitively expensive, helping disincentivize non-compliance. But ultimately, a multipronged approach combining reputational incentives, contractual leverage, ongoing scrutiny, and collaborative partnerships between stakeholders will likely be needed to reliably enforce adherence.

**Model verification concerns.** Another potential obstacle is ensuring that the systems evaluated during training are the same systems that are eventually deployed. For instance, OpenAI's GPT-4 system card revealed the audited version of GPT-4 differed from the deployed model [63]. Ensuring the integrity of this matching is critical, as deploying systems with capabilities differing from those assessed during development could enable developers to bypass the intervention and deploy unsafe models. Several potential solutions exist: Auditors could check whether the distribution of outputs on a secret benchmark dataset matches between the audited and deployed versions. Hashing the trained model weights and having compute providers verify the hashes match is another option. Watermarking models in a way that is sensitive to fine-tuning, then checking the watermark pre- and post-deployment could also work. In the future, requiring "signed" models approved by auditors may be possible if hardware only runs approved models. However, even if audited and deployed versions can be matched initially, models are often continuously updated after deployment, potentially developing new dangers. Requiring re-evaluations before modifications or limiting live tuning may help, but could also hamper capabilities. Ongoing monitoring of deployed systems for emerging issues will likely be needed.
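To make the weight-hashing and output-matching ideas above more concrete, the following sketch shows one way an auditor-side check could look. It is a minimal illustration under stated assumptions, not a description of any developer's actual pipeline; the file layout, agreement threshold, and helper names are hypothetical.

```python
import hashlib
import json
from pathlib import Path


def hash_weight_files(weight_dir: str) -> str:
    """Hash all weight shards in a directory in a deterministic (sorted) order."""
    digest = hashlib.sha256()
    for shard in sorted(Path(weight_dir).glob("*.bin")):
        digest.update(shard.read_bytes())
    return digest.hexdigest()


def outputs_match(audited: list[str], deployed: list[str], min_agreement: float = 0.99) -> bool:
    """Compare responses on a secret benchmark; exact-match rate is a crude stand-in
    for 'same distribution of outputs' (a real check would be statistical)."""
    agreements = sum(a == d for a, d in zip(audited, deployed))
    return agreements / max(len(audited), 1) >= min_agreement


def verify_deployment(audited_dir: str, deployed_dir: str,
                      audited_outputs: list[str], deployed_outputs: list[str]) -> dict:
    """Combine both checks into a simple verification report."""
    return {
        "weights_identical": hash_weight_files(audited_dir) == hash_weight_files(deployed_dir),
        "outputs_consistent": outputs_match(audited_outputs, deployed_outputs),
    }


if __name__ == "__main__":
    # Illustrative paths and cached benchmark responses (hypothetical files).
    report = verify_deployment(
        "audited_model/", "deployed_model/",
        json.load(open("audited_responses.json")),
        json.load(open("deployed_responses.json")),
    )
    print(report)
```

In practice, the weight-hash check would be run by a compute provider or auditor with direct access to the artifacts, while the output check only needs query access to the deployed system.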
**Goodharting.** Developers may try to manipulate evaluations in order to avoid triggering pauses, i.e. training their systems in ways that ensure they will reliably pass model evaluations, even if they are dangerous. However, certain evaluation design choices could help mitigate this risk. For instance, using private benchmark datasets that are not included in the training data for future models may make it harder for developers to unfairly optimize performance. Additionally, keeping some aspects of evaluation methodologies opaque could increase the difficulty of gaming them. Furthermore, lengthy dynamic evaluations involving humans in the loop at multiple stages would limit the ability of developers to rapidly iterate and overfit to the assessment. If evaluations are designed to be robust and multifaceted, requiring many iterations to reverse engineer and bypass, the risks of Goodharting may be reduced.

**No difference between safety and capabilities research.** A key premise of our proposed intervention is that it provides time for safety research to progress separately from capabilities development. Yet some safety techniques like Reinforcement Learning from Human Feedback also improve capabilities. This means that developers conducting safety research during pauses may inadvertently continue to advance capabilities, even if direct capabilities work is paused. However, this risk can be acknowledged and mitigated. Developers can conduct safety research under intentionally constrained conditions during pauses, limiting training data, compute resources, etc. to slow capability gains. Additionally, thresholds can be set for acceptable capability gains from safety work during pauses [35]. Therefore, even if safety and capabilities research are intertwined, it may be possible to maintain meaningful differentiated progress with deliberate effort and constraint.

The following factors seem less critical. While we do not expect our intervention to fail if progress is not made on these factors, we expect the feasibility of the intervention to drop considerably.

**No consensus on model evaluations.** A lack of consensus on the appropriate model evaluations to implement could also become an obstacle to the intervention's feasibility. On the one hand, allowing for a diversity of assessments might increase the chances of detecting varied failure modes that a single standardized set of evaluations would miss. On the other, too much inconsistency could also reduce the likelihood that any given issue is caught across all developers. Furthermore, if participating developers implement wildly varying or self-serving evaluations, dangerous systems may slip through undetected, limiting the value of coordination. However, some flexibility could still improve the status quo, provided leading developers conduct rigorous evaluations and do so responsibly. Moreover, reaching perfect agreement creates additional complexity when trying to convince developers to commit to pausing. Allowing some room for disagreement on precise metrics may increase willingness to align on the foundational intervention, if not every detail.

**Dissuasion from investors.** An additional worry is that investors could threaten to withdraw funding, dissuading developers from making pausing commitments that might slow research progress. Our sense is that this concern is surmountable. If leading developers coordinate around pausing, investors may have little choice but to continue funding them or accept lower returns from less capable labs. This dynamic persists as long as developers have multiple competing funding options, providing leverage. Additionally, both investors and AI developers are likely interested in appearing socially responsible. As long as pausing does not completely preclude promising research, the same incentives that persuade AI developers to participate should also keep investors onboard.

**IP concerns.** Another potential obstacle is that developers may be reluctant to allow the level of external auditing this intervention requires due to concerns over intellectual property and confidentiality. Underpinning these concerns could be the idea that providing auditors structured access to models, such as through an API, may not be sufficient [17].
Yet, it is possible that existing measures from other industries can alleviate this problem. For instance, developers can implement access controls like air-gapped evaluation rooms for auditors to view model weights in, a common information disclosure strategy within governments. IP concerns may also extend beyond simple information disclosure issues. For instance, AI developers might be concerned that auditors will be unable to adequately prevent sensitive information from being stolen by third parties. Securing lab information is already an extraordinary challenge because of the sheer size of the attack surface and the incredible influence of potential adversaries [48]. Auditors are unlikely to have the same level of defensive resources as top labs and, as a result, may represent an additional layer of vulnerability for labs. However, it may be worth noting that upcoming regulations like the EU AI Act, as well as recent commitments by frontier developers visiting the White House, may necessitate external audits regardless, suggesting developers already have strong incentives to find solutions here.

In this section, we have discussed whether our proposed coordination scheme is actually feasible. Overall, we are cautiously optimistic that the practical obstacles hindering the intervention are surmountable. But successfully implementing such coordination will require care, foresight, and continued research.

## 6 Conclusion

This paper has proposed an evaluation-based coordination scheme for situations in which frontier AI developers discover that their models have certain dangerous capabilities (Section 2). Such a scheme could rely on public pressure, a pausing agreement, a mutual auditor, or legal requirements (Section 3). The paper has also discussed the desirability and feasibility of the proposed scheme (Sections 4 and 5). We concluded that coordinated pausing is a promising mechanism for tackling emerging risks from frontier AI models. However, a number of practical and legal obstacles need to be overcome.

**Questions for further research.** This paper has left many questions unanswered and more research is urgently needed. The following six areas seem particularly important:

* **Dangerous capability evaluations.** The most obvious bottleneck of the proposed coordination scheme is a lack of reliable evaluations for dangerous capabilities. We are only aware of ready-to-use evaluations for power-seeking behavior [11; 44], though we expect that some developers have internal evaluations that they do not share publicly. Evaluations for other dangerous capabilities discussed in the literature [79] do not yet exist, even though there are efforts to create them. We strongly encourage researchers and practitioners to create new evaluations and scrutinize existing ones.
* **Safety thresholds.** Defining danger thresholds (when should frontier AI developers pause?) and safety thresholds (when can they resume their paused activities?) are still open questions which require more research.
* **Model similarity.** We have skipped the questions of which models should be considered "similar" to those which have failed evaluations, and which research should be considered "related" to that which led to the failed evaluations. This raises a number of thorny questions.
* **Developer buy-in.** We encourage more work that investigates ways in which developers can be incentivized to run evaluations and to participate in the proposed coordination scheme.
For example, this might involve frontier AI regulation [4] or advocacy aimed at increasing public pressure on developers.
* **Legal considerations.** There is some uncertainty over whether some versions of this intervention might violate antitrust law. It would be valuable to know to what extent these concerns are justified. If antitrust law is in fact a meaningful constraint, one could investigate options for a narrow safe harbor for coordinated pausing [55]. Section 708 of the US Defense Production Act (DPA), which can shield companies cooperating under the DPA from antitrust liability, might be a promising tool. The tool has already been used during the COVID-19 pandemic [87; 51].
* **Internal response policies.** This paper has focused on a collective solution, i.e. what multiple developers should do if one of them discovers a model with sufficiently dangerous capabilities. A related question that warrants further attention is what exactly a single developer who discovers these capabilities should do. Recently, Anthropic laid out its Responsible Scaling Policies: internal safety measures they plan to implement before scaling up models [7]. Similar work has been published by ARC Evals [10]. These policies are based on evaluations and include commitments to pause training and deployment for models that fail Anthropic's assessments. Developing similar evaluation-triggered internal response protocols is an important area for future work. These policies should also specify the conditions under which developers can resume any paused activities.

In their latest update, ARC Evals concluded that the versions of Claude and GPT-4 they tested did not have sufficiently dangerous capabilities, but their outlook was concerning: "for systems more capable than Claude and GPT-4, we are now at the point where we need to check carefully that new models do not have sufficient capabilities to replicate autonomously or cause catastrophic harm--it's no longer obvious that they won't be able to" [11]. We urge policymakers, researchers, and practitioners to take this warning seriously. We need to be prepared for a world in which Claude 3 or GPT-5 fail their evaluations. We believe that coordinated pausing needs to be part of any solution.

## Acknowledgements

We are grateful for valuable feedback and suggestions from Akash Wasil, Alan Chan, Andrea Miotti, Ben Garfinkel, Daniel Kokotajlo, Hjalmar Wijk, Holden Karnofsky, Jack Clark, Lennart Heim, Lukas Gloor, Markus Anderljung, Nate Soares, Nathan Calvin, Noam Kolt, Noemi Dreksler, Francis Rhys Ward, Vivian Dong, and Zvi Mowshowitz (in alphabetical order). All errors are our own.
2309.12613
User Migration across Multiple Social Media Platforms
After Twitter's ownership change and policy shifts, many users reconsidered their go-to social media outlets and platforms like Mastodon, Bluesky, and Threads became attractive alternatives in the battle for users. Based on the data from over 14,000 users who migrated to these platforms within the first eight weeks after the launch of Threads, our study examines: (1) distinguishing attributes of Twitter users who migrated, compared to non-migrants; (2) temporal migration patterns and associated challenges for sustainable migration faced by each platform; and (3) how these new platforms are perceived in relation to Twitter. Our research proceeds in three stages. First, we examine migration from a broad perspective, not just one-to-one migration. Second, we leverage behavioral analysis to pinpoint the distinct migration pattern of each platform. Last, we employ a Large Language Model (LLM) to discern stances towards each platform and correlate them with the platform usage. This in-depth analysis illuminates migration patterns amid competition across social media platforms.
Ujun Jeong, Ayushi Nirmal, Kritshekhar Jha, Susan Xu Tang, H. Russell Bernard, Huan Liu
2023-09-22T04:15:39Z
http://arxiv.org/abs/2309.12613v2
# User Migration across Multiple Social Media Platforms

###### Abstract

After Twitter's ownership change and policy shifts, many users reconsidered their go-to social media outlets and platforms like Mastodon, Bluesky, and Threads became attractive alternatives in the battle for users. Based on the data from over 16,000 users who migrated to these platforms within the first eight weeks after the launch of Threads, our study examines: (1) distinguishing attributes of Twitter users who migrated, compared to non-migrants; (2) temporal migration patterns and associated challenges for sustainable migration faced by each platform; and (3) how these new platforms are perceived in relation to Twitter. Our research proceeds in three stages. First, we examine migration from a broad perspective, not just one-to-one migration. Second, we leverage behavioral analysis to pinpoint the distinct migration pattern of each platform. Last, we employ a large language model (LLM) to discern stances towards each platform and correlate them with the platform usage. This in-depth analysis illuminates migration patterns amid competition across social media platforms.

**Keywords:** Platform Migration, User Behavior Study, Twitter, Bluesky, Threads, Mastodon

## 1 Introduction

In the years since the 1997 launch of Bolt and Six Degrees, social media have become online hubs, offering many avenues for communication, entertainment, and information [2]. Users are increasingly mobile, migrating between platforms as their needs, preferences, and interests evolve, driving intense competition among social media platforms for user attention. One example is the substantial migration from Twitter to Mastodon following Twitter's ownership change [10, 24]. With the emergence of other platforms, like Threads and Bluesky, users are questioning whether their current "cyber hometown" is the best choice [21]. Prior research has examined the motivations behind platform migration and typical behaviors of migrants--the pushes and pulls, as they are known in the social science literature [6, 9, 10]. Here, we extend this research to examine: (1) the varying engagement levels of migrating users based on their new platform choice; (2) the competitive dynamics between platforms seeking user attention and what influences their success; and (3) the perspectives of migrants towards each platform and how these perspectives associate with user behaviors.

To collect data on platform migrants, we identified the account handles of 16,000 users who initiated migration from Twitter to Bluesky, Threads, and Mastodon, focusing on user profiles and their activities within the first eight weeks after the official launch of Threads on July 5, 2023. For those who did not migrate from Twitter, our sampling techniques leveraged network traffic analysis between Twitter and its counterparts to improve the chances of precisely selecting non-migrants. Our study is motivated by three questions:

* **RQ1:** What characteristics distinguish migrant groups and non-migrants on Twitter?
* **RQ2:** What patterns of migration reveal the relationships between Twitter and other platforms?
* **RQ3:** After attempting to leave Twitter, did users sustain their engagement with their new platforms?

With respect to **RQ1**, we analyzed the behavioral traits of migrants to Bluesky, Threads, and Mastodon.
We quantified their influence scores and compared them to those of non-migrants to determine the level of presence of these migrant groups on Twitter. This revealed that many users migrated despite their high level of presence on the prior platform.

Figure 1: The migration flow between Twitter and its alternatives: Mastodon, Bluesky, and Threads. The dashed lines represent the shift of user attention across these platforms.

Regarding **RQ2**, we examined the evolving migration patterns with users' active status between Twitter and various pairs of platforms. Specifically, we measured the power of attraction and inertia to understand behavior from source platform to destination platform. Concerning **RQ3**, we assessed the perspectives of migrants on brand loyalty by analyzing the texts in their posts. Paradoxically, although the use of Twitter has increased over time, there was a distinct lack of loyalty expressed towards Twitter. We also examined the correlation between their loyalty and platform usage patterns to understand the power of inertia in choosing their primary social media platform.

Our main contributions are as follows:

* We curated a dataset of 16,000+ users migrating from Twitter to Bluesky, Threads, and Mastodon, following those platforms' terms of service.
* To our knowledge, this is the first study to examine the differences among multiple migrant groups (based on their chosen platforms) and contrast them with non-migrants.
* Our comparative analysis of migration shows that, despite the rhetoric to the contrary, migrants have a strong inertia for Twitter over other platforms.

## 2 Related Work

### Human Migration and Platform Migration.

Across the social sciences, the push-pull theory is widely used to explain human migration. The theory assumes that for every migration event, there are factors pushing people away from their home territory and factors pulling them towards a new home [15, 14]. This can be applied to the migration of users between social media platforms [23]. Unlike physical movement, where one is constrained to a single location at a time, the digital world allows users to engage with multiple platforms simultaneously. Such a dynamic calls for a refined classification of online migrants [12]. In economics, the concept of "service switching" parallels this phenomenon, portraying online users as shoppers exploring various platforms to find their preferred choice [7, 8]. The study of platform migration necessitates an understanding of usage patterns and the dynamics of competing options [3, 6].

### Large-scale Online Migrations.

Historically, substantial migrations between social media platforms have been observed, such as the shift from MySpace to Facebook [20] and from Facebook to Instagram [9]. Often, these shifts stem from perceived deficiencies in one platform and the emergence of superior features in another that better cater to user needs. The main drivers for this migration include push factors, such as low quality of service and bad experiences in social interactions, and pull factors, such as the presence of attractive new features and highly influential users on another platform [9]. On Reddit, user migrations occurred as a response to moderation, such as deplatforming [17, 19] or policy changes [16]. Recently, Twitter's ownership change spurred a mass migration of users to Mastodon [13]. However, doubts arose about whether Mastodon could retain these users [10], prompting users to explore alternatives such as Bluesky and Threads.
Our research differs from previous studies on platform migration by examining the dynamics of user migration across multiple platforms, focusing on users' perspectives on inter-platform relationships. This study underlines potential factors for users reverting to their prior platform and challenges in platform migration.

## 3 Preliminaries

In social media and migration studies, two types of migration are defined [9, 15]: (1) _Permanent migration_, where users transition to a new platform, deactivate their original account, and exclusively engage on the new platform; and (2) _Temporary migration_, where users maintain a presence on both platforms but switch their focus between them.

**Permanent Migration.** If user \(u\) was a member of platform \(p_{1}\) at time \(t\) and is no longer on \(p_{1}\) at time \(t^{\prime}\), but has joined \(p_{2}\), then user \(u\) is considered to have permanently migrated from platform \(p_{1}\) to \(p_{2}\).

**Temporary Migration.** If user \(u\) is a member of \(p_{1}\) before time \(t\) and is found on both platforms \(p_{1}\) and \(p_{2}\) at a later time \(t^{\prime}\), then user \(u\) is considered to have temporarily migrated from platform \(p_{1}\) to \(p_{2}\).

Figure 2: The trend of deleted and suspended migrants' accounts on Twitter over time. The red dashed line marks the date Twitter announced its rebranding to "X".

Every day at 6 AM, we checked the status of 16,279 user profiles for signs of _permanent migration_ through actions like profile deletion or suspension on Twitter. As shown in Figure 2, only 1.6% (270 out of 16,279) had deleted their accounts as of August 7th, 2023. Most of these migrations occurred after Twitter's rebranding to "X", implying that rebranding either precipitated or intensified this move [11]. In accordance with the GDPR guidelines1, we refrained from gathering information on individuals who deactivated their Twitter accounts. We only verified the existence of their accounts using the Twitter API. As a result, our remaining study emphasizes _temporary migration_--users who migrated to new platforms but might return to Twitter later on.

Footnote 1: [https://gdpr.twitter.com/en.html](https://gdpr.twitter.com/en.html)

## 4 Data Collection

From July 1 to August 27, 2023, we identified a total of 16,279 migrants. After removing 270 migrants with deleted or suspended accounts, we were left with 16,009 migrants with unique handles. We further verified that none of these users had accounts on the destination platform before establishing their Twitter accounts.

### Collecting Migrants from Twitter to Destination Platforms.

To map users to their alternate social media accounts, we employed a platform-specific approach: (1) For Bluesky, by targeting keywords "bsky.social" and "bsky.app", we extracted relevant Bluesky handles from Twitter profiles; (2) For Threads, we used "threads.net" as our primary keyword filter, from which we derived associated Threads handles; and (3) For Mastodon, we began by gathering a complete list of 18,605 Mastodon server domains via the API from _instances.social_. By using these domains as keywords, we identified Twitter profiles linked to Mastodon handles. Noticing that Twitter users often include other account handles in their profiles, we examined their display names to pinpoint handles from different platforms. This approach avoids confusion, as handles mentioned in tweets may refer to other users [24, 10]. Table 1 displays the count of detected migrants for each platform.
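As a rough illustration of this profile-matching step, the sketch below scans profile and display-name text for the platform-specific patterns described above. The regular expressions and example fields are simplified assumptions and do not reproduce the authors' exact pipeline (in particular, the full list of 18,605 Mastodon domains is abbreviated to two examples).

```python
import re

# Simplified patterns; the study matched "bsky.social"/"bsky.app", "threads.net",
# and 18,605 Mastodon server domains obtained from instances.social.
PATTERNS = {
    "bluesky": re.compile(r"[\w.-]+\.(?:bsky\.social|bsky\.app)"),
    "threads": re.compile(r"threads\.net/@([\w.]+)"),
    "mastodon": re.compile(r"@([\w.]+)@(mastodon\.social|fosstodon\.org)"),  # abbreviated domain list
}


def extract_handles(profile_text: str) -> dict:
    """Return destination-platform handles mentioned in a Twitter profile or display name."""
    found = {}
    for platform, pattern in PATTERNS.items():
        match = pattern.search(profile_text)
        if match:
            found[platform] = match.group(0)
    return found


if __name__ == "__main__":
    bio = "Moving! Find me at threads.net/@example and @example@mastodon.social"
    print(extract_handles(bio))
    # {'threads': 'threads.net/@example', 'mastodon': '@example@mastodon.social'}
```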
The fewest migrations were noted from Twitter to Threads, likely the result of Twitter's recent action of hiding tweets containing URLs that link to Threads2.

Footnote 2: [https://www.washingtonpost.com/technology/2023/08/15/twitter-x-links-delayed/](https://www.washingtonpost.com/technology/2023/08/15/twitter-x-links-delayed/)

### Collecting Non-migrants from Twitter.

We utilized Semrush3, a tool designed for network traffic and competitor analysis, to estimate non-migrants. The results are shown in Figure 3, which indicates users active across Twitter, along with Bluesky, Threads, and Mastodon. The maximum contribution from the targeted platforms to Twitter's total traffic is 2.48%, assuming there are no overlaps among them. Based on this, we randomly sampled 20,000 unique active users who tweeted at least once on Twitter between July and August 2023, minimizing the inclusion of users who had migrated to Bluesky, Threads, and Mastodon.

Footnote 3: [https://www.semrush.com/](https://www.semrush.com/)

### Collecting Profiles and Posts of Migrants from Multiple Platforms.

We gathered profiles of detected migrants from both Twitter and the respective destination platforms and compiled posts made within our study's timeframe. For Bluesky migrants, we amassed 98,352 posts from Twitter and 285,174 posts from Bluesky. Threads migrants contributed 67,740 posts on Twitter and 9,955 posts on Threads. As for Mastodon migrants, we recorded 190,328 posts from Twitter and 229,083 posts from Mastodon. All data we collect is encrypted at the field level in our database to ensure anonymity.

We used the platforms' APIs. Twitter's official API provides comprehensive access to user profiles, tweets, retweets, and various metadata elements, ensuring a detailed view of user activities.

\begin{table} \begin{tabular}{c c c} \hline \hline **Destination Platform** & **\# Migrant** & **\# Server (Domain)** \\ \hline Bluesky & 1,062 & 1 (bsky.social) \\ \hline Threads & 679 & 1 (threads.net) \\ \hline Mastodon & 14,268 & 18,605 (mastodon.social, etc.) \\ \hline \hline \end{tabular} \end{table}

Table 1: Statistics on user migration from Twitter to other platforms. Bluesky and Threads operate on a single server or a cluster of servers, given their early stage of deployment.

Figure 3: Network traffic analysis for July-August 2023 between Twitter and the targeted domains. Overlaps show users accessing both domains from the same IP address.

Bluesky, aiming to develop a decentralized standard for social media, also offers its official API4 based on the AT protocol, which was instrumental in accessing public posts and profile details. Mastodon, being an open-source and federated platform, offers its official API5 based on the ActivityPub protocol. Since Threads does not provide an official API, we manually collected the text contents of public user profiles and posts through the platform's web interface6 (released August 24, 2023).

Footnote 4: [https://atproto.com/guides/overview/](https://atproto.com/guides/overview/)
Footnote 5: [https://docs.joinmastodon.org/api/](https://docs.joinmastodon.org/api/)
Footnote 6: [https://www.threads.net/](https://www.threads.net/)

## 5 Distinguishing between Migrant Groups and Non-migrants on Twitter (RQ1)

For this question, we analyzed a range of user characteristics, from basic metrics such as the number of followers to intricate measures of user influence. We then compared migrant groups and non-migrants on Twitter.

### Comparing Profile Metrics.
Twitter offers several metrics for user profiles7. We investigated five of these, namely _followers_count_, _friends_count_, _listed_count_, _favorites_count_, and _statuses_count_. We examined the variations in these metrics among the migrant groups (Bluesky, Threads, and Mastodon) in comparison to non-migrants on Twitter with a multiple-comparison ANOVA test and Tukey's HSD post-hoc test. Table 2 shows the rankings across migrant groups and non-migrants based on the targeted platform. Migrant groups of Threads and Bluesky consistently demonstrate stronger network engagement compared to other groups. Their notably higher values in both _followers_count_ and _friends_count_ underscore a wider and more interactive network presence.

Footnote 7: [https://developer.twitter.com/en/docs/twitter-api/data-dictionary/object-model/user](https://developer.twitter.com/en/docs/twitter-api/data-dictionary/object-model/user)

### Comparing Influence Metrics.

Building on prior research that assessed user influence on Twitter [1, 4], we utilized both a user-level score, represented as \(S_{u}\), and a score for the user's posts, denoted by \(S_{\mathcal{P}_{u}}\). Based on this, we calculate the influence score as follows:

\[S_{u}=\frac{u_{\#followers}+u_{\#lists}}{2}, \tag{5.1}\]
\[S_{\mathcal{P}_{u}}=\frac{1}{|\mathcal{P}_{u}|}\sum_{p\in\mathcal{P}_{u}}\frac{p_{\#retweets}+p_{\#favorites}}{2},\]
\[IS_{u}=w_{profile}\times S_{u}+w_{posts}\times S_{\mathcal{P}_{u}}\]

where \(IS_{u}\) denotes the influence score for user \(u\). The weights are set as \(w_{profile}=0.5\) and \(w_{posts}=0.5\) to ensure a balanced combination. Figure 4 depicts the distribution of influence scores between migrants and non-migrants. The median and mean scores for migrant groups (Bluesky, Threads, and Mastodon) were greater than those of non-migrants on Twitter, suggesting that highly engaged Twitter users might consider migration. Furthermore, Bluesky migrants displayed higher median and mean scores compared to other migrant groups, suggesting that users with high influence scores may prefer to explore Bluesky rather than the other platforms.

### Summary (RQ1)

Migrant groups' varied characteristics show each platform attracted its distinct audience. Though migrants had a stronger Twitter presence than non-migrants, they also explored new platforms.

\begin{table} \begin{tabular}{c c} \hline \hline **User Metric** & **Statistical Ranking** \\ \hline _followers\_count_ & Threads \(>\) Bluesky \(>\) (Mastodon = Non-migrant) \\ \hline _friends\_count_ & Threads \(>\) Bluesky \(>\) Mastodon \(>\) Non-migrant \\ \hline _listed\_count_ & (Bluesky = Threads) \(>\) Non-migrants \(>\) Mastodon \\ \hline _favorites\_count_ & Mastodon \(>\) (Non-migrant = Threads) \(>\) Bluesky \\ \hline _statuses\_count_ & Mastodon \(>\) (Bluesky = Threads) \(>\) Non-migrant \\ \hline \hline \end{tabular} \end{table}

Table 2: Ranking of migrant groups and non-migrants across five user metrics. The inequality symbol denotes a significant disparity, while the equality symbol indicates no significant disparity as assessed by ANOVA test at \(p<0.05\).

Figure 4: Box plots show influence scores for migrants (Bluesky, Threads, Mastodon) and non-migrants (Twitter) with interquartile ranges. Red dots indicate group means.
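For concreteness, the following sketch implements the influence score in Eq. (5.1). The dictionary fields are illustrative stand-ins for the Twitter API fields named above (follower, list, retweet, and favorite counts); the equal weights match the paper's choice of \(w_{profile}=w_{posts}=0.5\).

```python
def influence_score(user: dict, posts: list[dict],
                    w_profile: float = 0.5, w_posts: float = 0.5) -> float:
    """Influence score IS_u from Eq. (5.1): a weighted sum of a profile score
    and the average engagement of the user's posts."""
    s_user = (user["followers_count"] + user["listed_count"]) / 2
    if posts:
        s_posts = sum((p["retweet_count"] + p["favorite_count"]) / 2 for p in posts) / len(posts)
    else:
        s_posts = 0.0
    return w_profile * s_user + w_posts * s_posts


# Illustrative example with hypothetical numbers:
user = {"followers_count": 1200, "listed_count": 40}
posts = [{"retweet_count": 3, "favorite_count": 11},
         {"retweet_count": 0, "favorite_count": 4}]
print(influence_score(user, posts))  # 0.5 * 620 + 0.5 * 4.5 = 312.25
```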
## 6 Understanding Relationships between Twitter and Alternative Platforms (RQ2)

We conducted an analysis comparing individuals active on Twitter to those on alternative platforms. Our goal is to discern the dynamics and relationships between Twitter and its competitors as they vie for users' attention in the competitive landscape of social media.

### Comparing Active Users between Twitter and Alternatives.

To understand how competition evolved over time between Twitter and the three other platforms, we counted the number of active users based on the following definition:

#### 6.1.1 Active User

For a platform \(p\), let a user be \(u\in U_{p}\). Given a time interval \(\delta=t^{\prime}-t\) where \(t^{\prime}>t\), the user \(u\) is active on platform \(p\) at time \(t^{\prime}\) if the user engaged in posting or resharing since time \(t\).

Figure 5 depicts the active users between Twitter and alternative platforms among the studied migrants. Initially, migrants from Twitter to Bluesky favored exclusive Bluesky usage, and this trend held strong until July 18, 2023, with a marked decrease afterward. Conversely, the population of users either staying dedicated to Twitter or leveraging both platforms saw a consistent increase. This suggests that relying solely on Bluesky did not fully cater to users' needs. Second, from the launch of Threads on July 5, 2023, until July 13, 2023, there is a consistent rise in the number of active users either adopting Threads exclusively or using it alongside Twitter. After just a week, the number of active users on Threads began to decrease, illustrating the "shiny object effect", where individuals are drawn to the allure of the latest novelty, experiencing a brief surge of joy from its acquisition, only to soon shift their attention elsewhere [18]. Last, the active users between Twitter and Mastodon do not show any dramatic changes for either platform. This stasis may be because users already experienced mass migration from Twitter to Mastodon, especially after Elon Musk's takeover of Twitter on October 27, 2022 [10]. The limited user overlap indicates that Mastodon operates independently of Twitter, and migrants to Mastodon typically divide into groups focused either on Twitter or Mastodon.

### Evaluating the Association between Twitter and Alternatives.

To understand the associations between Twitter and other platforms through the number of active migrants, we utilized Yule's Q, a statistical measure for assessing the association between two or more binary or nominal variables [22]. We used this measure to analyze the presence or absence of users on Twitter compared to its competing platforms. Yule's Q offers valuable insights into whether Twitter usage reflects or influences behavior on the other platforms. On a scale ranging from -1 to 1, values close to 1 indicate a complementary relationship and values nearing -1 suggest a substitute relationship. To define Yule's Q in terms of a specific time period, we segmented the cumulative count of active users within interval \(\delta\) starting from date \(t\). We gauged active users on platforms \(A\) and \(B\) based on the subsequent metrics: \(U_{A,t,\delta}\) is the number of users who only used Platform \(A\), not appearing on Platform \(B\). Conversely, \(U_{B,t,\delta}\) is the number of users using only Platform \(B\), absent on Platform \(A\). Meanwhile, \(U_{AB,t,\delta}\) is the number of users using both platforms and \(U_{\neg A\neg B,t,\delta}\) is the number of users using neither platform. Yule's Q is then computed as

\[Q_{t,\delta}=\frac{(U_{AB,t,\delta}\times U_{\neg A\neg B,t,\delta})-(U_{A,t,\delta}\times U_{B,t,\delta})}{(U_{AB,t,\delta}\times U_{\neg A\neg B,t,\delta})+(U_{A,t,\delta}\times U_{B,t,\delta})} \tag{6.2}\]
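A minimal sketch of this computation, implementing Eq. (6.2) directly from the four user counts (the example counts below are hypothetical, not taken from the study's data):

```python
def yules_q(both: int, neither: int, only_a: int, only_b: int) -> float:
    """Yule's Q from Eq. (6.2): values near +1 indicate a complementary relationship,
    values near -1 a substitute relationship between the two platforms."""
    numerator = both * neither - only_a * only_b
    denominator = both * neither + only_a * only_b
    return numerator / denominator if denominator else 0.0


# Hypothetical weekly counts for Twitter (A) and an alternative platform (B):
print(yules_q(both=300, neither=150, only_a=400, only_b=60))  # ~ 0.30
```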
Figure 5: Active user trends comparing Twitter with (1) Bluesky, (2) Threads, and (3) Mastodon. The blue line signifies Twitter-only users, the red line for other platform-only, and the green line for those using both. The red dashed line marks Threads' launch date. The \(y\)-axis indicates the portion of active users relative to the total number of migrants on each platform.

Figure 6 depicts the evolution of Yule's Q over daily, weekly, and monthly intervals. The Yule's Q for Twitter and Bluesky consistently rises across intervals, transitioning from approximately 0.1 to around 0.3, highlighting Bluesky's complementary relationship with Twitter and its benefit from this association. Twitter and Threads display fluctuating Yule's Q values at both daily and weekly intervals. However, the monthly trend reveals a noticeable drop from approximately 0.3 to 0.2, suggesting a shift in user preference back towards Twitter. The relationship between Twitter and Mastodon remains steady, centering around a Yule's Q value of 0.3 across all intervals, suggesting a complementary role and a stable migration pattern for Mastodon users.

## 7 Did Migrants Really Leave Twitter? (RQ3)

While frequent use of a platform does not necessarily signify user satisfaction, as many users continue to use the platform due to a lack of alternatives, we investigated the link between platform usage and migrants' loyalty.

### Brand Loyalty of Users Towards Platforms.

Considering this uncertainty, we assessed the brand loyalty of users towards each platform through textual analysis of their posts. Since there are no stance detection algorithms or datasets for assessing the brand loyalty of users, we employed ChatGPT (GPT-4) to categorize stances as loyal, neutral, or disloyal, given GPT-4's proven proficiency in stance detection [25]. We used the advanced features of GPT-4 to evaluate and predict the brand loyalty of migrants towards each platform through our customized prompt for target-based stance detection, detailed in the Appendix. We first extracted posts that mentioned specific platforms and grouped them by users. To evaluate our method, two coders classified the stances of 400 random users, achieving a significant Cohen's Kappa of 76.27% and an F1 score of 79.42% in our stance detection task.

Figure 7 presents the distribution of brand loyalty among users across various platforms. Mastodon leads with 37% loyalty among its migrants, surpassing figures from Twitter (11%), Bluesky (13%), and Threads (5%). Notably, Twitter has the highest level of expressed disloyalty (52%) among migrants. However, the majority of migrants on Bluesky, Threads, and Mastodon remain neutral, in contrast to a wider spectrum of stances on Twitter. This suggests that users are currently uncertain about their opinions on these newer platforms.

Figure 6: Yule's Q trend between Twitter and (1) Bluesky, (2) Threads, and (3) Mastodon with three different intervals. Blue represents daily trends (\(\delta=1\) day), orange for weekly trends (\(\delta=1\) week), and green for monthly trends (\(\delta=4\) weeks).

Figure 7: The brand loyalty among migrants on various platforms, determined by stance from their aggregated posts.
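To illustrate the target-based stance detection step described above (the authors' exact prompt is given in their Appendix and is not reproduced here), a generic sketch might look as follows. The prompt wording is an assumption, and `call_llm` is a hypothetical wrapper around a GPT-4 API call rather than a specific library function.

```python
LABELS = {"loyal", "neutral", "disloyal"}


def build_prompt(platform: str, posts: list[str]) -> str:
    """Assemble a target-based stance detection prompt for one user's posts."""
    joined = "\n".join(f"- {p}" for p in posts)
    return (
        f"Classify the author's stance toward the platform '{platform}' "
        f"as exactly one of: loyal, neutral, disloyal.\n"
        f"Posts:\n{joined}\n"
        f"Answer with a single word."
    )


def classify_stance(platform: str, posts: list[str], call_llm) -> str:
    """Query the model and fall back to 'neutral' on unexpected output."""
    answer = call_llm(build_prompt(platform, posts)).strip().lower()
    return answer if answer in LABELS else "neutral"


# Usage, given some call_llm(prompt) -> str wrapper around a GPT-4 client:
# stance = classify_stance("Twitter", ["Honestly #RIPTwitter, I'm done."], call_llm)
```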
### Rhetoric in Loyal and Disloyal Migrants.

Figure 8 depicts the distribution of hashtags among different platform migrant groups. Unsurprisingly, a universal trend is evident across all disloyal migrants: hashtags such as #RIPTwitter and #TwitterIsDead dominate the conversation. Bluesky migrants showcase their group affinities with #LibraryTwitter and #FilmTwitter among loyal and disloyal migrants, respectively. Loyal Threads migrants, in contrast, emphasize their artistic and sports-related affinities through hashtags such as #ArtistsInTwitter and #MetsTwitter, and also comment on Twitter's new brand name and its logo with #TwitterX and #TwitterLogo. Loyal Mastodon migrants showcase a broader academic spectrum among loyal migrants compared to Bluesky and Threads. However, their disloyal counterparts predominantly focus on migration-specific hashtags including #TwitterMigration and #Fediverse, and IT-related hashtags such as #Opensource and #ActivityPub.

### State Dependencies for Disloyal Migrants.

In economics, state dependence reflects the propensity of a consumer to remain consistent with a prior choice [5]. We begin by defining the total number of activities on platform \(A\) as follows:

\[T^{\prime}_{A,t}=T_{A,t}+\lambda \tag{7.3}\]

where \(T_{A,t}\) is the cumulative count of a user's activities on platform \(A\) at date \(t\). These activities encompass both posting and resharing behaviors (e.g., tweet and retweet). We introduce a smoothing factor, denoted by \(\lambda=1\), to ensure \(T^{\prime}_{A,t}\) is never zero. Subsequently, we calculate the prior, likelihood, and marginal values for state dependency as outlined below:

\[P(A_{t})=\frac{T^{\prime}_{A,t}}{T^{\prime}_{A,t-1}+T^{\prime}_{B,t-1}} \tag{7.4}\]
\[P(D|A_{t})=\frac{T^{\prime}_{A,t-1}}{T^{\prime}_{A,t}+T^{\prime}_{B,t}} \tag{7.5}\]
\[P(D)=P(D\mid A_{t})\times P(A_{t-1})+P(D\mid B_{t})\times P(B_{t-1}) \tag{7.6}\]

Finally, we employ the Bayesian approach to determine the state dependency for platform \(A\) at time \(t\) by updating the posterior in the following manner:

\[P(A_{t}\mid D)=\frac{P(D\mid A_{t})\times P(A_{t-1})}{P(D)} \tag{7.7}\]

Figure 9 depicts state dependencies of disloyal migrants over time, highlighting different trends among platforms. We observed that there is a convergence for Twitter and Bluesky, a divergence for Twitter and Threads, and a parallelism between Twitter and Mastodon. These patterns for Bluesky and Threads intensified after July 22, 2023, coinciding with Elon Musk's announcement of Twitter's rebranding to "X". These differences highlight the platform's impact on the migrants' activities, although we focused on migrants expressing a similar stance of disloyalty to Twitter. Examining platform-specific dynamics, Bluesky witnessed two notable convergences in state dependency on July 22 (0.5) and August 24 (0.5), which suggests identical state dependencies between Twitter and Bluesky. Following the rebranding to "X", Twitter experienced a temporary decline in state dependency, but Bluesky migrants gradually reverted to Twitter. This shift is indicated by fluctuations in state dependency from July 1 (0.62) to a brief dip on July 8 (0.5), followed by a resurgence peaking on July 24 (0.58). Conversely, migrants to Mastodon maintained a consistently lower dependency on Twitter, hinting at reduced sensitivity to external events related to Twitter's changes.

### Summary (RQ3)

Migrants displayed a broader spectrum of brand loyalty towards Twitter than other platforms, with many showing wavering commitment.
Nevertheless, their activities on Twitter consistently overshadowed their activity on platforms like Bluesky and Threads.

Figure 8: Word clouds of migrants' hashtags based on two stances towards Twitter: Loyal (Blue) and Disloyal (Red).

## 8 Limitations

First, our dataset is composed of migrants who voluntarily disclosed their identities on other platforms, suggesting a possible inclination towards more open communication and active engagement with peers. This potential selection bias could represent migrants with a more heightened online presence. Second, we centered on migration patterns from Twitter to other platforms, without considering migration between alternative platforms. This specific direction was chosen due to technical constraints, including the limited search features of Bluesky's API and the lack of a publicly accessible API for Threads, which hindered our ability to track migrations from Bluesky or Threads to other platforms. Last, we examined the first eight weeks after Threads' launch, which experienced a significant user influx8. This period might not capture earlier shifts from Twitter to Mastodon [10]. Long-term analysis of Twitter data from these times was constrained by API pricing9, imposing substantial fees for retrieving users' tweets and retweets.

Footnote 8: [https://www.reuters.com/technology/metas-twitter-rival-threads-hits-100-mln-users-record-five-days-2023-07-10/](https://www.reuters.com/technology/metas-twitter-rival-threads-hits-100-mln-users-record-five-days-2023-07-10/)
Footnote 9: [https://developer.twitter.com/en/docs/twitter-api/getting-started/about-twitter-api](https://developer.twitter.com/en/docs/twitter-api/getting-started/about-twitter-api)

## 9 Conclusion & Future Work

Our analysis shows that Bluesky cleverly capitalized on the conflicts between Twitter and two of its competitors, Threads and Mastodon. With substantial overlap and association in usage with Twitter's user base, Bluesky secured its spot in the competitive landscape. New platforms, such as Threads, initially benefit from the "shiny object effect," attracting users with their novelty. However, retaining these early enthusiasts proves challenging, especially when many still remain active on Twitter. While some initial migration barriers might be short-lived, the enduring pull of established platforms like Twitter is evident. Even if users voice intentions to switch, deep-seated inertia often keeps them anchored, either out of habit or due to perceived shortcomings in newer platforms.

In future work, we will explore attitudes towards Twitter and its proprietor. While a portion of its user base longs for the older version of Twitter, different segments of the user base already show highly varied levels of dissatisfaction with Twitter's owner. We will also probe the structural determinants of brand loyalty on social platforms. Factors such as user interface design, social media fatigue, and distinct social interactions unique to each platform can greatly sway user choices. To validate these findings, we will conduct online surveys using opt-in panels. Finally, we will delve deeper into the facets of brand loyalty, examining satisfaction, emotional connections, and perceived platform value, to understand the forces that keep users loyal or drive them away.

## 10 Data Collection Policy

We collected data from the designated social media platforms using their public interfaces. We are aware of the forthcoming alteration to Twitter's terms of service10, effective September 29, 2023.
In line with this, our data collection strictly employed Twitter's official API, adhering to their guideline of using only the currently published interfaces. Furthermore, we manually sourced text data from Threads due to the absence of its public API. We ensured user privacy by anonymizing personal data during our analysis. Upon acceptance of our article, we will share user IDs and codes for mapping migrants, but will not share other data without the platform's consent.

Footnote 10: [https://twitter.com/en/tos](https://twitter.com/en/tos)

Figure 9: State dependencies of users disloyal towards Twitter. We compared Twitter and its counterparts: (1) Bluesky, (2) Threads, (3) Mastodon. The blue line represents the average state dependency of individuals preferring Twitter over its counterpart, while the orange line represents the opposite. The red dashed line marks when Twitter rebranded to "\(\mathbb{X}\)".

## Acknowledgments

This work received support from the Office of Naval Research, under Award No. N00014-21-1-4002. Opinions, interpretations, conclusions, and recommendations within this article are solely those of the authors.
2309.14262
Chiral Meissner effect in time-reversal invariant Weyl superconductors
Weyl semimetals have nodes in their electronic structure at which electrons attain a definite chirality. Due to the chiral anomaly, the non-conservation of charges with given chirality, the axion term appears in their effective electromagnetic action. We determine how this affects the properties of time-reversal invariant Weyl {\it superconductors} (SCs) in the London regime. For type II SCs the axion coupling generates magnetic $B$-fields transverse to vortices, which become unstable at a critical coupling so that a transition into type I SC ensues. In this regime an applied $B$-field not only decays inside the SC within the London penetration depth, but the axion coupling generates an additional perpendicular field. Consequently, when penetrating into the bulk the $B$-field starts to steadily rotate away from the applied field. At a critical coupling the screening of the magnetic field breaks down. The novel chiral superconducting state that emerges has a periodically divergent susceptibility that separates onsets of chiral Meissner regimes. The chiral anomaly thus leaves very crisp experimental signatures in structurally chiral Weyl SCs with an axion response.
Vira Shyta, Jeroen van den Brink, Flavio S. Nogueira
2023-09-25T16:20:27Z
http://arxiv.org/abs/2309.14262v2
# Chiral Meissner effect in time-reversal invariant Weyl superconductors ###### Abstract Weyl semimetals are characterised by pairs of topologically protected nodes at which electronic bands cross and electrons attain a definite chirality. Due to the chiral anomaly, the non-conservation of charges with given chirality, the axion term appears in their effective electromagnetic action. We determine how this affects the properties of time-reversal invariant Weyl _superconductors_ (SCs) in the London regime. For type II SCs we show that axion coupling generates magnetic \(B\)-fields transverse to a vortex. Above a critical axion coupling vortices become unstable and a transition into a type I SC follows. In this regime an applied \(B\)-field not only decays inside the SC within the London penetration depth, but the axion coupling generates an additional perpendicular field. Consequently the \(B\)-field inside the superconductor progressively rotates away from the applied one when going into the bulk. At a critical coupling the Meissner state breaks down. The novel chiral SC state that then emerges has a periodically divergent susceptibility, at which the winding of \(B\) inside the superconductor jumps. Thus the axion coupling leaves crisp experimentally observable signatures in Weyl SCs. _Introduction_ -- Experimentally superconductivity has been reported in a number of Weyl semimetals, both at ambient [1; 2; 3; 4; 5; 6] and high pressures [7; 8]. The topological nature of Weyl semimetals [9; 10; 11; 12; 13; 14; 15] gives hope that Majorana zero modes bound to vortices [16; 17] may be detected in the future. Another recent experimental development in the field is the superconductivity found in the time-reversal invariant (TRI) Weyl semimetal PtBi\({}_{2}\) [5; 6; 18], where the superconducting state seems to occur only on the surface of the material. Accordingly, a Berezinskii-Kosterlitz-Thouless phase transition [19; 20] was reported to occur [5]. As the low-energy electromagnetic response of superconductors (SCs) is governed by the London equations, the question arises of how the presence of Weyl nodes modifies the electromagnetic properties of these Weyl superconductors, in particular as to their Meissner effect for a type I and magnetic vortices for a type II SC. Here we consider the London electrodynamics of TRI Weyl superconductors [9; 10; 11; 12; 13; 14; 15], which in the case of Weyl semimetals originates from the axion action, \[S_{a}=\frac{\alpha}{4\pi^{2}}\int dt\int d^{3}r\,\vartheta(t,\mathbf{r})\mathbf{E}\cdot\mathbf{B}, \tag{1}\] where \(\alpha\) is the fine-structure constant and the axion field is assumed to have the explicit form \(\vartheta(t,\mathbf{r})=\mathbf{b}\cdot\mathbf{r}-b_{0}t\). Here \(\mathbf{b}\) and \(b_{0}\) represent the separation between Weyl nodes in momentum and energy, respectively [14; 15]. Specifically, we will be interested here in the case where TRI holds, which leads to a net \(\mathbf{b}=0\) due to the presence of time-reversed Weyl node pairs. A typical physical consequence of the axion response is the chiral magnetic effect (CME) [21; 22; 23], which implies that the current density contains a contribution \(\mathbf{j}_{\rm CME}=-a\mathbf{B}/(4\pi)\), where \(a\) is the axion coupling constant, related to \(b_{0}\). Here we uncover a number of novel electrodynamic features that follow from the interplay between the axion-induced CME and superconductivity in Weyl systems.
Within the London theory the superconducting current is given by \(\mathbf{j}_{\rm SC}=q\rho_{s}(\mathbf{\nabla}\theta-q\mathbf{A})\), where \(\rho_{s}\) is the superfluid stiffness, \(q=2e\) is the charge, \(\theta\) is the phase of the order parameter and \(\mathbf{A}\) the vector potential. Therefore, in a Weyl superconductor with TRI the total current density is given by \(\mathbf{j}=\mathbf{j}_{\rm SC}+\mathbf{j}_{\rm CME}\). As we will see, while the magnetic field expulsion from a superconductor is ensured by its current being proportional to \(\mathbf{A}\), the contribution from CME, which is linear in \(\mathbf{B}\), leads to a rotation in the magnetic field screening. The chiral behavior of the Meissner effect may be understood by first considering the non-superconducting phase. In this case \(\mathbf{\nabla}\times\mathbf{B}=-a\mathbf{B}\) and we see that \(\nabla^{2}\mathbf{B}+a^{2}\mathbf{B}=0\), which yields spatially rotating magnetic field profiles. When the system becomes superconducting, the Meissner screening twists as a response to the rotation induced by the CME. We also find that at a critical axion coupling \(a_{c}\) the Meissner state breaks down and the magnetic field starts to rotate periodically inside the entire SC. The number of windings of the field inside the SC is quantized, and transitions between plateaus are associated with a divergence in the magnetic susceptibility. Apart from this we establish how the axion coupling manifests itself in the vortex properties of type II Weyl SCs. Since the vortex appears as a response to the external magnetic field, the magnetic field inside the vortex is expected to be directed along the vortex line. However, in a Weyl superconductor another, transverse component of the magnetic field is induced. Due to the competition between the axion term and superconductivity a transition occurs from this chiral vortex state to a conventional one without vortices at a critical coupling \(a_{c}\). _Axion London electrodynamics_ -- Accounting for the chiral anomaly in Weyl systems, the following Lagrangian governs the electromagnetic properties of TRI Weyl superconductors in the London regime \[\mathcal{L}=\frac{\epsilon}{8\pi}\mathbf{E}^{2}-\frac{1}{8\pi}\mathbf{B}^{2}+\frac{\rho_{s}}{2}\left[\left(\partial_{t}\theta-q\phi\right)^{2}-\left(\mathbf{\nabla}\theta-q\mathbf{A}\right)^{2}\right]-\frac{q^{2}}{8\pi^{2}}b_{0}\varepsilon_{ijk}A_{i}\partial_{j}A_{k}, \tag{2}\] with units such that \(\hbar=c=1\). The most important equation for us following from the above Lagrangian is \[\mathbf{\nabla}\times\mathbf{B}=4\pi\mathbf{j}_{\mathrm{SC}}+\epsilon\partial_{t}\mathbf{E}-a\mathbf{B}, \tag{3}\] where \(a=q^{2}b_{0}/\pi\). In the static regime that we consider here Eq. (3) becomes a generalized London equation having the current density of the form mentioned above, namely, one where the total current includes the CME contribution, \(\mathbf{j}_{\mathrm{CME}}=-a\mathbf{B}/(4\pi)\). _Vortex in type II Weyl SC --_ We first consider the fate of a magnetic vortex in the presence of an axion coupling. Due to the CME current, the analysis here differs significantly from previous discussions on the subject based on the Witten effect [24], where the field of the vortex induces a fractional charge at the interface between an SC and a topological insulator [25], as well as a fractional angular momentum [26; 27].
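Before analyzing the vortex it is worth a quick symbolic sanity check of the normal-state heuristic \(\mathbf{\nabla}\times\mathbf{B}=-a\mathbf{B}\) invoked above. The following sketch is not part of the original derivation; it merely assumes a helical ansatz \(\mathbf{B}=B_{0}(0,\cos ax,\sin ax)\) depending on a single coordinate and verifies that it solves both the chiral equation and \(\nabla^{2}\mathbf{B}+a^{2}\mathbf{B}=0\):

```python
import sympy as sp

x, a, B0 = sp.symbols("x a B0", real=True)

# Helical field depending on the coordinate x only: B = B0 (0, cos(a x), sin(a x))
B = sp.Matrix([0, B0 * sp.cos(a * x), B0 * sp.sin(a * x)])

# Curl of a field depending on x only:
# (dBz/dy - dBy/dz, dBx/dz - dBz/dx, dBy/dx - dBx/dy) = (0, -Bz', By')
curl_B = sp.Matrix([0, -sp.diff(B[2], x), sp.diff(B[1], x)])

print(sp.simplify(curl_B + a * B))               # zero vector: curl B = -a B
print(sp.simplify(sp.diff(B, x, 2) + a**2 * B))  # zero vector: laplacian B + a^2 B = 0
```

Any such solution rotates in the plane transverse to its direction of variation with pitch \(2\pi/a\); this is the rotation that the Meissner screening inherits once superconductivity sets in.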
The vortex axion physics discussed below does not involve the electric field and is intrinsic to TRI Weyl superconductors, so proximity to a topological material need not be assumed. Taking the curl of Eq. (3) we obtain in the static regime, \[-\nabla^{2}\mathbf{B}+a\mathbf{\nabla}\times\mathbf{B}+M^{2}\mathbf{B}=\frac{M^{2}\Phi_{0}}{2\pi}\mathbf{\Omega}, \tag{4}\] where \(M^{2}=4\pi q^{2}\rho_{s}\) represents the inverse square of the London penetration depth \(\lambda\) (in London theory without axion coupling \(\lambda=1/M\)), \(\Phi_{0}=2\pi/q\) is the elementary flux quantum, and \(\mathbf{\Omega}=\mathbf{\nabla}\times\mathbf{\nabla}\theta\) is the vorticity (recall that the curl of a gradient vanishes everywhere, except where topological defects like vortices exist [28]). For an infinite system the exact solution is obtained by performing a Fourier transform, which leads to \[B_{i}(p)=\frac{2\pi M^{2}\Phi_{0}\delta(p_{z})\bar{p}^{2}}{\bar{p}^{4}-a^{2}p^{2}}\left(\delta_{iz}+\frac{ia}{\bar{p}^{2}}\varepsilon_{izk}p_{k}\right), \tag{5}\] where \(\bar{p}^{2}=p^{2}+M^{2}\) and yields in real space \(\mathbf{B}(\mathbf{r})=B_{\varphi}(r)\hat{\mathbf{\varphi}}+B_{z}(r)\hat{\mathbf{z}}\), with \[B_{\varphi}(r)=\frac{M^{2}\Phi_{0}}{2\pi\sqrt{a^{2}-a_{c}^{2}}}\sum_{\sigma=\pm}\sigma M_{\sigma}K_{1}(M_{\sigma}r), \tag{6}\] \[B_{z}(r)=\frac{M^{2}\Phi_{0}}{2\pi\sqrt{a_{c}^{2}-a^{2}}}\sum_{\sigma=\pm}M_{\sigma}K_{0}(M_{\sigma}r), \tag{7}\] where \(K_{\alpha}(x)\) are modified Bessel functions of the second kind, and \(2M_{\pm}=\sqrt{a_{c}^{2}-a^{2}}\pm ia\), where \(a_{c}=2M\). Equations (6) and (7) reduce to the well-known London solution when \(a=0\), yielding a magnetic field parallel to the \(z\)-axis. The axion contribution generates a \(\varphi\)-component of the magnetic field and, as a consequence, a component of the current parallel to the vortex is generated. The total current screening the vortex is thus encircling it in a helical manner, with a handedness determined by the sign of \(a\). This solution is well defined for \(a<a_{c}\). In Fig. 1 the magnetic induction components corresponding to the vortex solution of Eqs. (6) and (7) are displayed for different values of \(a\). We note that the fields start to develop more spatial structure with increasing \(a\), so the Meissner effect around the vortex is not complete, with a spatially damped oscillatory behavior emerging. For values of \(a\) close to the critical value \(a_{c}\) the oscillations become much stronger, as shown in panels (c) and (d) of Fig. 1. For \(a\geq a_{c}\) the arguments of the Bessel functions appearing in Eqs. (6) and (7) become purely imaginary and the vortex solution breaks down. As a result, the system must transition into a type I regime. Since the penetration depth \(\lambda\) diverges at the SC phase transition and thus \(M\) tends to zero, the regime \(a\geq a_{c}\) is always realized close to the SC phase transition for any finite (and possibly small) \(a\). The vortex solution can also be obtained exactly in a finite slab of thickness \(L\), with the vortex line perpendicular to the surface, as discussed in the Supplemental Material. In this case it is interesting to consider explicitly the regime \(a\geq a_{c}\), where the vortex solution does not exist. Here the perfect diamagnetic character of the phase can be seen most clearly.
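Returning briefly to the infinite-system vortex, the profiles of Eqs. (6) and (7) are straightforward to evaluate numerically. The sketch below is an illustration only; the parameter values \(M=1\), \(\Phi_{0}=1\) and the chosen \(a\) are arbitrary. It checks that the expressions are real for \(a<a_{c}\) despite the complex \(M_{\pm}\), and that they reduce to the familiar London result \(B_{z}=\frac{M^{2}\Phi_{0}}{2\pi}K_{0}(Mr)\), \(B_{\varphi}=0\) as \(a\to 0\):

```python
import numpy as np
from scipy.special import kv     # modified Bessel function K_nu; accepts complex arguments

M, Phi0 = 1.0, 1.0               # illustrative units
a_c = 2.0 * M

def vortex_profiles(r, a):
    """Evaluate Eqs. (6) and (7) for a < a_c on an array of radii r."""
    root = np.sqrt(complex(a_c**2 - a**2))
    M_plus, M_minus = (root + 1j * a) / 2, (root - 1j * a) / 2
    pref = M**2 * Phi0 / (2 * np.pi)
    B_phi = pref / np.sqrt(complex(a**2 - a_c**2)) * (
        M_plus * kv(1, M_plus * r) - M_minus * kv(1, M_minus * r))
    B_z = pref / root * (M_plus * kv(0, M_plus * r) + M_minus * kv(0, M_minus * r))
    return B_phi, B_z

r = np.linspace(0.1, 10.0, 200)
B_phi, B_z = vortex_profiles(r, a=1.5)
print(np.abs(B_phi.imag).max(), np.abs(B_z.imag).max())    # ~0: the profiles are real

# a -> 0 recovers the ordinary London vortex: B_z = M^2 Phi0/(2 pi) K_0(M r), B_phi = 0.
B_phi0, B_z0 = vortex_profiles(r, a=1e-6)
print(np.abs(B_z0.real - M**2 * Phi0 / (2 * np.pi) * kv(0, M * r)).max(), np.abs(B_phi0).max())
```

As \(a\) approaches \(a_{c}\) the imaginary part of \(M_{\pm}\) dominates and the profiles develop the damped spatial oscillations visible in Fig. 1.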
For the finite slab geometry just described, since the magnetic field is perpendicular to the surface, continuity of the normal component implies that the CME current vanishes, in which case the usual London equation follows. The only possible solution is therefore one of vanishing magnetic induction. _Meissner effect in type I Weyl SC --_ Compared to the situation where an external magnetic field is applied along the surface normal of a type I London SC, where perfect diamagnetism is unaltered by the CME, the more interesting case is the one with the field applied parallel to the surface, rendering a finite London penetration depth. Here, due to the CME, we find a crucial difference relative to the usual London electrodynamics: the Meissner screening works differently, since application of an external magnetic field generates additional components for the magnetic induction, as the differential equations for the field components are coupled via the axion term. As a first example, let us consider a semi-infinite superconductor located in the region \(x>0\) in the presence of an applied magnetic field \(\mathbf{B}_{\mathrm{ap}}=B_{\mathrm{ap}}\hat{\mathbf{y}}\). For this simple geometry one obtains the coupled equations \[-\partial_{x}^{2}B_{y}+M^{2}B_{y}-a\partial_{x}B_{z}=0, \tag{8}\] \[-\partial_{x}^{2}B_{z}+M^{2}B_{z}+a\partial_{x}B_{y}=0 \tag{9}\] with the boundary conditions \(B_{y}(x=0)=B_{ap}\), \(B_{y}(x\rightarrow\infty)=0\), \(B_{z}(x=0)=0\), \(B_{z}(x\rightarrow\infty)=0\). As with the vortex solution, there are two distinct regimes to consider: \(a<a_{c}\) and \(a>a_{c}\). The former yields the solution \(\mathbf{B}(x)=B_{ap}e^{-(x/2)\sqrt{a_{c}^{2}-a^{2}}}\hat{\mathbf{u}}(x)\) in terms of the unit vector, \[\hat{\mathbf{u}}(x)=\cos\left(ax/2\right)\hat{\mathbf{y}}+\sin\left(ax/2\right)\hat{\mathbf{z}}, \tag{10}\] and one observes that the field inside the SC rotates with respect to the applied one, a chiral oscillatory feature as was found for the vortex. Thus, applying a magnetic field in the \(y\)-direction does not only lead to a Meissner effect with an exponentially decaying \(y\)-component of the field, but also generates a similarly decaying field along the \(z\)-direction as a consequence of the axion coupling. The corresponding field profiles are shown in panels (a) and (b) of Fig. 2 for exemplary values of \(a\) below and above \(a_{c}\). Note that the effective penetration depth renormalized by \(a\) is given by \(\lambda=1/\sqrt{M^{2}-a^{2}/4}\) and is larger than the unrenormalized London penetration depth \(1/M\). Clearly this effective penetration depth also diverges for \(a=a_{c}\), showing that indeed this value of \(a\) corresponds to a critical point. For \(a\geq a_{c}\) there is no solution that fulfils the boundary condition \(B_{z}(x\to\infty)=0\). Thus, instead of demanding \(B_{y}(x\to\infty)=0\) and \(B_{z}(x\to\infty)=0\), we only enforce the boundary conditions at the surface \(B_{y}(x=0)=B_{ap}\), \(B_{z}(x=0)=0\), and require the solutions to be real, which yields \[\mathbf{B}(x)=B_{ap}\cos(\sqrt{a^{2}-a_{c}^{2}}\,x/2)\hat{\mathbf{u}}(x). \tag{11}\] Remarkably, the magnetic field inside the sample exhibits a purely oscillatory behavior and, as illustrated in Fig. 2-(b) for \(a=5a_{c}/2\), there is no Meissner effect in the sense of the magnetic field being eventually screened inside the bulk of the SC.
Figure 2: Panel (a): Magnetic field components of a semi-infinite superconductor located at \(x>0\) and in the presence of an applied magnetic field, \(\mathbf{B}_{\rm ap}=B_{\rm ap}\hat{\mathbf{y}}\). Panel (b): Absence of Meissner effect for \(a=5a_{c}/2\) in a semi-infinite TRI Weyl superconductor. In panels (c-e) \(a<a_{c}\), corresponding to \(a=0.4a_{c}\), \(a=3a_{c}/4\), and \(a=0.95a_{c}\), respectively. We can see once more the onset of spatial oscillations in the magnetic induction as \(a\) increases. In panel (f) \(a=1.1a_{c}\), slightly above the critical value. In this case the axion coupling completely dominates over the Meissner screening. Figure 1: Magnetic induction profiles for different values of the axion coupling \(a\) [panels (a) and (b) for the field components \(B_{\varphi}\) and \(B_{z}\), respectively]. Fields are plotted in units of \(M^{2}\Phi_{0}/(2\pi)\) against the radial coordinate \(r\) in units of \(M\). Panels (c) and (d) show the field components for \(a=1.99a_{c}/2\), corresponding to a situation where \(a\) is very close to the critical value \(a_{c}=2M\) for which the vortex solution ceases to exist. We note that as \(a\) approaches \(a_{c}\) the field profiles start to become more oscillatory. The onset of these spatial oscillations is illustrated by the three-dimensional plots for \(B_{z}\) in panels (e) and (f) for \(a=3a_{c}/2\) and \(a=1.99a_{c}/2\), respectively. Instead of being screened, the field rotates around the surface normal with a modulated magnitude as it penetrates all the way into the bulk. This brings to the fore once more the important role of the critical value \(a_{c}=2M\) of the axion coupling \(a\) in modifying the nature of the Meissner effect. To investigate how precisely it signals a phase transition, we compute the magnetic susceptibility as a response to the applied field. This requires us to determine the average of the magnetic induction over the system, which cannot easily be done in a convergent manner for a semi-infinite system; it is therefore more convenient to consider a finite slab geometry with two surfaces such that \(|x|<L/2=\bar{L}\). We obtain \[\mathbf{B}(x)=\frac{B_{ap}}{\sin(\bar{L}\sqrt{a^{2}-a_{c}^{2}})}\sum_{\sigma=\pm}\sigma\sin\left[\frac{\sqrt{a^{2}-a_{c}^{2}}}{2}\left(x+\sigma\bar{L}\right)\right]\hat{\mathbf{u}}(x-\sigma\bar{L}), \tag{12}\] which has the advantage of holding for any value of \(a\). Figure 2 shows the magnetic induction profiles corresponding to Eq. (12) for increasing values of \(a\) up to slightly above \(a_{c}\). The axion coupling causes a spatial rotation in the Meissner screening, which disappears for \(a>a_{c}\), where the oscillations are stronger. Remarkably, for \(a\) significantly larger than \(a_{c}\) the oscillation amplitude can become larger than the applied field. In order to elucidate the behavior for \(a>a_{c}\), we calculate the diamagnetic susceptibility \(\chi\) from the spatial average of the magnetic induction. From the expressions it is clear that the axion-induced field component \(B_{z}\) averages to zero for any \(a\). For the component parallel to the applied field we obtain the diamagnetic susceptibility \[\chi=\frac{\sqrt{a^{2}-a_{c}^{2}}}{LM^{2}\sin(\bar{L}\sqrt{a^{2}-a_{c}^{2}})}\left[\cos(\bar{L}\sqrt{a^{2}-a_{c}^{2}})-\cos(a\bar{L})\right],\] which for \(a\to a_{c}\) goes to \(\frac{2}{M^{2}L^{2}}[1-\cos(ML)]\). For \(ML\gg 1\), corresponding to a large slab thickness compared to the London penetration depth, \(\chi\) vanishes for all \(a\leq a_{c}\).
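As a quick numerical consistency check (a sketch only, with arbitrarily chosen \(M=1\), \(L=20\) and \(B_{ap}=1\)), one can average the \(y\)-component of Eq. (12) over the slab and compare it with the closed-form expression for \(\chi\) above; scanning \(a\) in the same way also makes the zeros of \(\sin(\bar{L}\sqrt{a^{2}-a_{c}^{2}})\), at which \(\chi\) blows up, directly visible:

```python
import numpy as np

M, L, B_ap = 1.0, 20.0, 1.0      # illustrative parameter choices
a_c, Lbar = 2.0 * M, L / 2.0

def B_slab(x, a):
    """Evaluate Eq. (12) on a grid x; using a complex square root makes it valid for any a."""
    k = np.sqrt(complex(a**2 - a_c**2))
    B = np.zeros((x.size, 2), dtype=complex)          # columns: (B_y, B_z)
    for sigma in (+1, -1):
        u = np.stack([np.cos(a * (x - sigma * Lbar) / 2),
                      np.sin(a * (x - sigma * Lbar) / 2)], axis=1)
        B += sigma * np.sin(k * (x + sigma * Lbar) / 2)[:, None] * u
    return (B_ap / np.sin(k * Lbar) * B).real

def chi_closed_form(a):
    k = np.sqrt(complex(a**2 - a_c**2))
    return (k / (L * M**2 * np.sin(k * Lbar)) * (np.cos(k * Lbar) - np.cos(a * Lbar))).real

x = np.linspace(-Lbar, Lbar, 4001)
dx = x[1] - x[0]
for a in (0.5 * a_c, 1.5 * a_c, 2.5 * a_c):
    By = B_slab(x, a)[:, 0]
    chi_numeric = (By.sum() - 0.5 * (By[0] + By[-1])) * dx / (L * B_ap)   # trapezoidal average
    print(a / a_c, chi_numeric, chi_closed_form(a))    # the two columns agree
```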
For \(a>a_{c}\), on the other hand, the susceptibility diverges for \[a^{2}=a_{c}^{2}+\left(\frac{2\pi n}{L}\right)^{2}\quad\text{with}\quad n\in\mathbb{N}. \tag{13}\] Thus, at quantized values of the axion coupling the system becomes unstable. At these values the winding of the field inside the SC changes by unity. It is interesting to note that this leads to a situation reminiscent of the Little-Parks effect [29] in superconducting cylinders subjected to a parallel magnetic field, where the persistent current suppresses the cylinder's superconductivity. Although in our case no such geometry is involved, the currents generated by the CME affect the SC state of the slab, and particularly so when the length scale associated with the axion coupling is comparable to the London penetration depth. _Conclusions and outlook_ -- As the axion term affects the properties of Weyl superconductors in a rather non-trivial manner, several distinct experimentally testable predictions follow from our results. This is clear for the currents parallel to magnetic vortices and the magnetic fields perpendicular to them induced by the axion coupling. In future work it will be interesting to establish how this affects the vortex lattice and its stability. The vortex becoming unstable at the critical axion coupling implies a transition from a type II to a type I superconducting state. Such a transition between type I and II SC in the same material is known as type 1.5 superconductivity for multiband systems [30; 31]. Since the London penetration depth diverges close to the SC phase transition, the critical axion coupling vanishes there. Thus for any given axion coupling intrinsic to the Weyl SC material, close enough to the SC transition the system automatically enters the strong coupling regime where the vortex state becomes unstable and a type 1.5 regime may ensue. In type I superconductors the axion-induced magnetic field component perpendicular to the applied field and parallel to the surface may be explored by surface-sensitive probes, e.g. the magneto-optic Kerr effect. It is interesting to note that the rotating \(B\)-field inside the SC also causes vortices close to the surface to cant with respect to the applied field, which results in magnetic stray fields outside the SC [32]. This is a rather intricate consequence of the emergent field components transverse to the flux line, as simple addition of a mirror vortex cannot fulfil the boundary conditions on the surface. For strong coupling the axion renormalization of the London penetration depth may be probed experimentally, whereas the breakdown of the Meissner state is associated with a periodically divergent susceptibility, at which the winding of \(\mathbf{B}\) inside the superconductor jumps, allowing in principle for rather direct observation. We thank Volodymyr Kravchuk for stimulating discussions. We acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), through SFB 1143 project A5 and the Würzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter-ct.qmat (EXC 2147, Project Id No. 390858490). While preparing this manuscript we became aware of the work of M. Stålhammar et al. [33], who report semi-infinite slab and cylinder solutions similar to our results.
2303.17903
Crossed products as compact quantum metric spaces
By employing the external Kasparov product, Hawkins, Skalski, White and Zacharias constructed spectral triples on crossed product C$^\ast$-algebras by equicontinuous actions of discrete groups. They further raised the question of whether their construction turns the respective crossed product into a compact quantum metric space in the sense of Rieffel. By introducing the concept of groups separated with respect to a given length function, we give an affirmative answer in the case of virtually Abelian groups equipped with certain orbit metric length functions. We further complement our results with a discussion of natural examples such as generalized Bunce-Deddens algebras and higher-dimensional non-commutative tori.
Mario Klisse
2023-03-31T09:01:05Z
http://arxiv.org/abs/2303.17903v1
# Crossed Products as Compact Quantum Metric Spaces ###### Abstract. By employing the external Kasparov product, in [18] Hawkins, Skalski, White and Zacharias constructed spectral triples on crossed product C\({}^{*}\)-algebras by equicontinuous actions of discrete groups. They further raised the question of whether their construction turns the respective crossed product into a compact quantum metric space in the sense of Rieffel. By introducing the concept of groups separated with respect to a given length function, we give an affirmative answer in the case of virtually Abelian groups equipped with certain orbit metric length functions. We further complement our results with a discussion of natural examples such as generalized Bunce-Deddens algebras and higher-dimensional non-commutative tori. ## Introduction The standard Dirac operator of a compact spin manifold encodes large parts of its geometrical structure. Motivated by this, Connes introduced the notion of spectral triples: a _spectral triple_ \((\mathcal{A},\mathcal{H},D)\) on a separable unital C\({}^{*}\)-algebra \(A\) consists of a norm dense unital \(*\)-subalgebra \(\mathcal{A}\) of \(A\) that is boundedly represented on a Hilbert space \(\mathcal{H}\), and a densely defined self-adjoint operator \(D\) on \(\mathcal{H}\) that has compact resolvent and for which all commutators of elements in \(\mathcal{A}\) with \(D\) extend to bounded operators. It is _even_ if the triple carries the additional structure of a \(\mathbb{Z}_{2}\)-grading. The concept (also referred to as _unbounded Fredholm modules_) is one of the fundamental building blocks in the theory of non-commutative geometry. Following Connes (see [10]), with a given spectral triple one can associate a pseudo-metric on the state space \(\mathcal{S}(A)\) of \(A\) via \[(\psi,\psi^{\prime})\mapsto\sup\{|\psi(a)-\psi^{\prime}(a)|\,:\,a\in\mathcal{A}\text{ with }\|[D,a]\|\leq 1\}. \tag{0.1}\] This generalizes the Monge-Kantorovich metric on the space of probability measures of a given compact metric space \(X\) (see [20]), and in this case, the induced topology coincides with the weak-\(*\) topology. In the non-commutative setting, the latter statement does not necessarily hold anymore. This observation inspired Rieffel to introduce the notion of compact quantum metric spaces. Even though the definition in [31, Definition 2.2] is given in the general setting of order unit spaces, in the present article, we will exclusively be concerned with Lip-norms induced by spectral triples: for a spectral triple \((\mathcal{A},\mathcal{H},D)\) on a C\({}^{*}\)-algebra \(A\) the pair \((A,L_{D})\) with the _Lipschitz semi-norm_ \(L_{D}:a\mapsto\|[D,a]\|\) is called a compact quantum metric space if the pseudo-metric in (0.1) induces the weak-\(*\) topology on the state space of \(A\); in this case \(L_{D}\) is called a _Lip-norm_. In [12] and [1] spectral triples on certain crossed products by the integers were constructed from suitable triples on the corresponding coefficient algebra. This approach was extended in [18], where Hawkins, Skalski, White, and Zacharias make use of the external Kasparov product to construct odd and even spectral triples on crossed products by equicontinuous actions of discrete groups that are equipped with a proper translation-bounded function. Their construction translates verbatim into the setting of groups equipped with proper length functions as in [10].
Under the additional assumption that the Lipschitz semi-norm induced by the original spectral triple on the coefficient algebra provides a Lip-norm, the authors formulate the following natural question, which they answer affirmatively for \(G=\mathbb{Z}\) equipped with the word length function associated with the standard generating set \(\{-1,1\}\). Similar questions were addressed in [1] and [19], where the latter reference also provides a set of assumptions ensuring that a continuous family of \(*\)-automorphisms of a compact quantum metric space yields a field of crossed product algebras which varies continuously in Rieffel's quantum Gromov-Hausdorff distance. **Question**.: _Let \((\mathcal{A},\mathcal{H}_{A},D_{A})\) be a spectral triple on a separable unital C\({}^{*}\)-algebra \(A\) and assume that the induced Lipschitz semi-norm \(L_{D_{A}}(a):=\left\|[D_{A},a]\right\|,a\in\mathcal{A}\) defines a compact quantum metric space \((A,L_{D_{A}})\). Let further \(\alpha:G\to\text{Aut}(A)\) be a metrically equicontinuous action of a discrete group \(G\), equipped with a proper length function \(\ell:G\to\mathbb{R}_{+}\). Under what conditions does the spectral triple defined in [18] define a compact quantum metric space?_ In [29] Rieffel examined quantum metric space structures of (twisted) group C\({}^{*}\)-algebras of free Abelian groups induced by spectral triples coming from word length functions and restrictions of norms on Euclidean spaces. His results were later extended to word hyperbolic groups (see [24]) and groups of polynomial growth (see [8]). The proof in [29] strongly relies on the study of Gromov's horofunction compactification (or metric compactification) of free Abelian groups and fixed points under the corresponding group action; this study was extended to finitely generated nilpotent groups in [32]. For a given discrete group \(G\) endowed with a proper length function \(\ell\), the continuous functions on the corresponding horofunction compactification can be viewed as a C\({}^{*}\)-subalgebra of \(\ell^{\infty}(G)\). The objective of the present article is to approach the question above, mostly in the setting of virtually Abelian groups. Our approach is inspired by those in [29], [24], and employs metric geometry results on the approximation of length functions by their stable semi-norms (see [5], [22]). However, compared to the group C\({}^{*}\)-algebraic setting, the more complicated crossed product setup causes increased technical difficulties. As our main tool we introduce the notion of groups that are separated with respect to length functions: we say that the pair \((G,\ell)\) is _separated_, if the space of restrictions of the invariant means on \(\ell^{\infty}(G)\) to the continuous functions on the horofunction compactification is in a certain sense sufficiently rich; for the precise definition see Definition 2.10. With this notion at hand, we prove (among other things) the following theorem. **Theorem** (see Theorem 2.13 and Theorem 2.15).: _Let \((\mathcal{A},\mathcal{H}_{A},D_{A})\) be a non-degenerate odd (resp. even) spectral triple on a separable unital C\({}^{*}\)-algebra \(A\) and assume that the induced Lipschitz semi-norm \(L_{D_{A}}(a):=\left\|[D_{A},a]\right\|,a\in\mathcal{A}\) defines a compact quantum metric space \((A,L_{D_{A}})\). 
Let further \(\alpha:G\to\text{Aut}(A)\) be a metrically equicontinuous action of a finitely generated discrete group \(G\), equipped with a proper length function \(\ell:G\to\mathbb{R}_{+}\), and assume that there exists a finite index subgroup \(H\) of \(G\) that is separated with respect to the restriction \(\ell|_{H}\) and whose commutator subgroup \([H,H]\) is finite. Then the even (resp. odd) spectral triple defined in [18] satisfies the Lipschitz condition._ As a consequence, we deduce the following statement on virtually Abelian groups. It can be formulated in a more general way by replacing word length functions with suitable orbit distance length functions, see Corollary 3.4. **Corollary** (see Corollary 3.5).: _Under the conditions of the theorem above, additionally assume that \(G\) is virtually Abelian and finitely generated by a set \(S\) with \(S=S^{-1}\). Let further \(\ell:G\to\mathbb{R}_{+}\) be the corresponding word length function. Then the even (resp. odd) spectral triple defined in [18] satisfies the Lipschitz condition._ The statements above can be applied to several natural examples, some of which already occur in [18]. By using a result by Christensen and Ivan on the construction of spectral triples on AF-algebras (see [9]), we can equip generalized Bunce-Deddens algebras (as introduced in [23] and [7]) associated with virtually Abelian groups with compact quantum metric space structures. More generally, this procedure works for all crossed products associated with suitable actions of virtually Abelian groups on AF-algebras. Another family of examples arises from higher-dimensional non-commutative tori, see [25] and [26]. Any such C\({}^{*}\)-algebra identifies with an iterated crossed product by actions of the integers \(\mathbb{Z}\). In particular, a repeated application of the corollary above leads to spectral triples, that satisfy the Lipschitz condition. _Structure._ The paper is organized as follows. In Section 1 we recall the basic notions of spectral triples, compact quantum metric spaces, and horofunction compactifications. In the second one, we explain the construction of odd and even spectral triples on crossed product C\({}^{*}\)-algebras by Hawkins, Skalski, White, and Zacharias, introduce the notion of groups that are separated with respect to length functions, and prove the main result of this article. Section 3 is concerned with the study of length functions on free Abelian groups with respect to which these groups are separated. We further discuss some implications of Walsh's results in [32]. In the last section, we consider natural examples of C\({}^{*}\)-algebras to which the statements of the earlier sections can be applied. This selection includes generalized Bunce-Deddens algebras and higher-dimensional non-commutative tori. **Acknowledgements.** I am grateful to Adam Skalski and Piotr Nowak for bringing the questions studied in this article to my attention. They further contributed by providing fruitful discussions and by giving feedback on an earlier draft of this paper. I also wish to thank IMPAN where part of this work was carried out during a research visit. ## 1. Preliminaries ### General notation We will write \(\mathbb{N}:=\{0,1,2,...\}\) and \(\mathbb{N}_{\geq 1}:=\{1,2,...\}\) for the natural numbers. The neutral element of a group is always denoted by \(e\) and for a set \(S\) we write \(\#S\) for the number of elements in \(S\). 
Scalar products of Hilbert spaces are linear in the first variable and we denote the bounded operators on a Hilbert space \(\mathcal{H}\) by \(\mathcal{B}(\mathcal{H})\). Further, all Hilbert spaces and C\({}^{*}\)-algebras in this article are assumed to be separable. We write \(\otimes\) for the spatial tensor product of C\({}^{*}\)-algebras as well as for tensor products of Hilbert spaces. For a discrete group \(G\) we denote by \(\ell^{2}(G)\) the Hilbert space of all square summable functions \(G\to\mathbb{C}\) and by \((\delta_{g})_{g\in G}\) the canonical orthonormal basis of \(\ell^{2}(G)\). ### Spectral triples and compact quantum metric spaces One of the key concepts in the theory of non-commutative geometry is that of spectral triples introduced by Connes. **Definition 1.1**.: Let \(A\) be a separable unital C\({}^{*}\)-algebra. 1. An _odd spectral triple_ \((\mathcal{A},\mathcal{H},D)\) on \(A\) consists of a \(*\)-representation \(\pi:A\to\mathcal{B}(\mathcal{H})\), a norm dense unital \(*\)-subalgebra \(\mathcal{A}\) of \(A\) and a densely defined self-adjoint operator \(D\) on \(\mathcal{H}\) such that \((1+D^{2})^{-\frac{1}{2}}\) is compact and such that for every \(a\in\mathcal{A}\) the domain of \(D\) is invariant under \(\pi(a)\) and the commutator \([D,\pi(a)]\) is bounded. 2. An _even spectral triple_ on \(A\) consists of a triple \((\mathcal{A},\mathcal{H},D)\) as before and a \(\mathbb{Z}_{2}\)-grading, i.e. a Hilbert space decomposition \(\mathcal{H}=\mathcal{H}_{1}\oplus\mathcal{H}_{2}\) for which \(\pi\) and \(D\) decompose via \(\pi=\pi_{1}\oplus\pi_{2}\) and \[D=\left(\begin{array}{cc}0&D_{1}\\ D_{1}^{*}&0\end{array}\right)\] for suitable \(D_{1}\). The operator \(D\) from above is often called the triple's _Dirac operator_. Following Connes [10], given a spectral triple \((\mathcal{A},\mathcal{H},D)\), one can define a _Lipschitz semi-norm_ \(L_{D}\) on \(\mathcal{A}\) via \[L_{D}(a):=\left\|[D,\pi(a)]\right\|,\] meaning that \(L_{D}:\mathcal{A}\to\mathbb{R}_{+}\) is a semi-norm whose domain is a dense subspace of \(A\) that contains \(1\) and for which \(L_{D}(1)=0\). (Note that there are various versions of this concept; here we follow the conventions in [18].) By [28, Proposition 3.7] the semi-norm \(L_{D}\) is _lower semi-continuous_, that is, for every \(r>0\) the set \(\{a\in\mathcal{A}\mid L_{D}(a)\leq r\}\) is closed in \(\mathcal{A}\) with respect to the subspace topology. \(L_{D}\) further induces a pseudo-metric \(d_{L_{D}}:\mathcal{S}(A)\times\mathcal{S}(A)\to[0,\infty]\) on the state space \(\mathcal{S}(A)\) of \(A\) via \[d_{L_{D}}(\psi,\psi^{\prime}):=\sup_{a\in\mathcal{A}:L_{D}(a)\leq 1}|\psi(a)-\psi^{\prime}(a)|.\] Note that \(d_{L_{D}}\) may take the value \(+\infty\). It is a natural question to ask when the topology on \(\mathcal{S}(A)\) coming from \(d_{L_{D}}\) coincides with the weak-\(*\) topology (see [27], [28]). This is the defining property of a compact quantum metric space. One necessary condition for this to happen is that the triple \((\mathcal{A},\mathcal{H},D)\) is _non-degenerate_ in the sense that the representation of \(\mathcal{A}\) on \(\mathcal{H}\) is faithful and \([D,\pi(a)]=0\) if and only if \(a\in\mathbb{C}1\). If the representation is faithful we usually suppress it in the notation and view \(\mathcal{A}\) and \(A\) as \(*\)-subalgebras of \(\mathcal{B}(\mathcal{H})\).
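In a finite-dimensional toy situation (not one of the triples studied in this article) the quantities above can be computed directly: take \(A=M_{n}(\mathbb{C})\) acting on \(\mathbb{C}^{n}\) and any self-adjoint \(D\); then \(L_{D}(a)=\|[D,a]\|\) is just the operator norm of a commutator. A minimal numerical sketch, with an arbitrarily chosen diagonal \(D\):

```python
import numpy as np

# Toy illustration: A = M_n(C) faithfully represented on C^n, with a diagonal
# self-adjoint "Dirac operator" D (its spectrum below is chosen arbitrarily).
n = 5
D = np.diag([0.0, 1.0, 2.0, 2.0, 1.0])
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

L_D = np.linalg.norm(D @ a - a @ D, 2)     # operator norm of [D, a], i.e. L_D(a)
print(L_D)
print(np.linalg.norm(D @ np.eye(n) - np.eye(n) @ D, 2))   # L_D(1) = 0

# Note: every diagonal matrix commutes with this D, so [D, a] = 0 also for a outside C1;
# such a triple is therefore degenerate in the sense discussed above.
```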
**Definition 1.2** ([28, Definition 5.1] and [31, Definition 2.2]).: Let \((\mathcal{A},\mathcal{H},D)\) be a non-degenerate spectral triple and define \(L_{D}\) and \(d_{L_{D}}\) as before. If the topology on \(\mathcal{S}(A)\) induced by the metric \(d_{L_{D}}\) coincides with the weak-\(*\) topology, \(L_{D}\) is called a _Lip-norm_. In this case we also say that the pair \((A,L_{D})\) is a _compact quantum metric space_ and that \((\mathcal{A},\mathcal{H},D)\) satisfies the _Lipschitz condition_. Rieffel proved the following characterizations. **Theorem 1.3** ([27, Theorem 1.8]).: _Let \((\mathcal{A},\mathcal{H},D)\) be a non-degenerate spectral triple on a C\({}^{*}\)-algebra \(A\) and define \(L_{D}\) and \(d_{L_{D}}\) as before. Then the following statements are equivalent:_ 1. _The pair_ \((A,L_{D})\) _defines a compact quantum metric space;_ 2. _The image of_ \(\{a\in\mathcal{A}\mid L_{D}(a)\leq 1\}\) _is totally bounded in the quotient space_ \(A/\mathbb{C}1\)_;_ 3. \(d_{L_{D}}\) _is bounded and the set_ \(\{a\in\mathcal{A}\mid L_{D}(a)\leq 1\text{ and }\|a\|\leq 1\}\) _is totally bounded in_ \(A\) ### **Horofunction compactifications** In [29] Rieffel demonstrated that (twisted) group \(\mathrm{C}^{*}\)-algebras of Abelian free groups \(\mathbb{Z}^{m}\), \(m\in\mathbb{N}\) equipped with the natural Dirac operators coming from word length functions and restrictions of norms on Euclidean spaces, induce compact quantum metric spaces. His proof relies on the study of Gromov's horofunction compactification (or metric compactification) of these groups and fixed points under the corresponding group action. Unfortunately, the approach does not cover other natural examples such as reduced group \(\mathrm{C}^{*}\)-algebras of word hyperbolic groups. Only later this class of \(\mathrm{C}^{*}\)-algebras (and more generally a class of certain filtered \(\mathrm{C}^{*}\)-algebras) was treated by Ozawa and Rieffel in [24] by employing their notion of Haagerup-type condition. The results in [29] were extended to general nilpotent-by-finite groups by Christ and Rieffel in [8]. Going back to Gromov [17] (see also [29]), the horofunction compactification of a metric space \((Y,d)\) is defined as follows. Consider the space \(C(Y)\) of continuous functions on \(Y\) equipped with the topology of uniform convergence on bounded sets. For \(y_{0}\in Y\) define \(C(Y,y_{0}):=\{f\in C(Y)\mid f(y_{0})=0\}\). Then \(C(Y,y_{0})\) is homeomorphic to \(C_{*}(Y):=C(Y)/\mathbb{C}1\) equipped with the quotient topology, so in particular \(C(Y,y_{0})\) is independent of \(y_{0}\in Y\). One can define a continuous embedding of the space \(Y\) into \(C(Y,y_{0})\) via \(y\mapsto f_{y}(\,\cdot\,):=d(y,\,\cdot\,)-d(y,y_{0})\). The corresponding closure of \(Y\) in \(C(Y,y_{0})\) is denoted by \(\widehat{Y}\). If \((Y,d)\) is _proper_ in the sense that every closed ball in \(Y\) is compact, \(\widehat{Y}\) is a compact Hausdorff space which is called the _horofunction compactification_ of \(Y\). The action of the isometry group of \(Y\) extends to a continuous action on \(\widehat{Y}\) by homeomorphism. The space \(\partial Y:=\widehat{Y}\setminus Y\) equipped with the subspace topology is called the _horofunction boundary_ of \(Y\). 
In [29, Section 4] it was shown that if \((Y,d)\) is a complete locally compact metric space, \(C(\widehat{Y})\) can be described as the (commutative) unital \(\mathrm{C}^{*}\)-subalgebra \(\mathcal{G}(Y,d)\) of \(C_{b}(Y)\) generated by \(C_{0}(Y)\) and the functions \(Y\to\mathbb{C},y\mapsto f_{y}(x)\) where \(x\in Y\), i.e. \(\widehat{Y}\) is homeomorphic to the character spectrum of \(\mathcal{G}(Y,d)\). An important notion in Rieffel's work is that of weakly geodesic rays. **Definition 1.4** ([29, Definition 4.3]).: Let \((Y,d)\) be a complete locally compact metric space and let \(T\subseteq\mathbb{R}_{+}\) be an unbounded subset that contains \(0\). Consider a function \(\gamma:T\to Y\). * \(\gamma\) is called a _geodesic ray_ if \(d(\gamma(s),\gamma(t))=|s-t|\) for all \(s,t\in T\); * \(\gamma\) is called an _almost geodesic ray_ if for every \(\varepsilon>0\) there exists an integer \(N\) such that for all \(t\geq s\geq N\), \[|d(\gamma(t),\gamma(s))+d(\gamma(s),\gamma(0))-t|<\varepsilon;\] * \(\gamma\) is called a _weakly geodesic ray_ if for every \(y\in Y\) and \(\varepsilon>0\) there exists an integer \(N\) such that if \(s,t\geq N\), then \[|d(\gamma(t),\gamma(0))-t|<\varepsilon\quad\text{and}\quad|d(\gamma(t),y)-d( \gamma(s),y)-(t-s)|<\varepsilon.\] It can be shown that every almost geodesic ray is weakly geodesic. Further, the following theorem holds. **Theorem 1.5** ([29, Theorem 4.7]).: _Let \((Y,d)\) be a complete locally compact metric space and let \(\gamma:T\to Y\subseteq\widehat{Y}\) be a weakly geodesic ray. Then for every \(f\in\mathcal{G}(Y,d)\) the limit \(\lim_{t\to\infty}f(\gamma(t))\) exists and gives a (unique) element in \(\partial Y\) in the sense that_ \[\chi_{\gamma}:\mathcal{G}(Y,d)\to\mathbb{C},\chi_{\gamma}(f):=\lim_{t\to\infty }f(\gamma(t))\] _defines a character on \(\mathcal{G}(Y,d)\) whose restriction to \(C_{0}(Y)\) vanishes. If \(Y\) is proper and if the topology of \((Y,d)\) has a countable base, then every point in \(\partial Y\) is determined as above by a weakly geodesic ray._ **Definition 1.6** ([29, Definition 4.8]).: Let \((Y,d)\) be a complete locally compact metric space. A point in \(\partial Y\) induced by a weakly geodesic ray \(\gamma\) as in Theorem 1.5 is called a _Busemann point_. In this article, we will mostly be concerned with the following setup that occurs in [10]. Let \(G\) be a discrete group equipped with a _length function_\(\ell:G\to\mathbb{R}_{+}\); that is \(\ell(gh)\leq\ell(g)+\ell(h)\) and \(\ell(g^{-1})=\ell(g)\) for all \(g,h\in G\), and \(\ell(g)=0\) exactly if \(g=e\). Note that every such length function induces a natural metric \(d_{\ell}\) on \(G\) via \(d_{\ell}(g,h):=\ell(g^{-1}h)\). The space \((G,d_{\ell})\) is proper if \(\ell\) is _proper_ in the sense that the set \(\{g\in G\mid\ell(g)\leq r\}\) is finite for all \(r>0\). We will write \(\overline{G}^{\ell}\) for the horofunction compactification of \((G,d_{\ell})\) and \(\partial_{\ell}G\) for the corresponding boundary. The canonical action of \(G\) on itself via left multiplication extends to a continuous action \(G\curvearrowright\overline{G}^{\ell}\) which again restricts to an action \(G\curvearrowright\partial_{\ell}G\) on the boundary. Prototypes of length functions on finitely generated groups are _word length functions_: for every discrete group \(G\) finitely generated by a set \(S\) with \(S=S^{-1}\) the expression \(\ell_{S}(g):=\min\{n\mid g=s_{1}...s_{n}\) where \(s_{1},...,s_{n}\in S\}\), \(g\in G\) defines a length function on \(G\). 
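To make the horofunction compactification concrete, consider, as a standalone illustration not taken from the later sections, \(G=\mathbb{Z}^{2}\) with the word length of the standard generators, i.e. the \(\ell^{1}\)-metric. Along the geodesic ray \(\gamma(t)=(t,0)\) the embedded functions \(f_{\gamma(t)}\) converge pointwise to the Busemann point \(h(x)=-x_{1}+|x_{2}|\), which the following sketch checks numerically:

```python
# Word-length metric on Z^2 for the generating set {(+-1, 0), (0, +-1)}: the l^1 metric.
def d(g, h):
    return abs(g[0] - h[0]) + abs(g[1] - h[1])

def f(y, x, basepoint=(0, 0)):
    # Embedding of y into C(G, e):  f_y(x) = d(y, x) - d(y, basepoint)
    return d(y, x) - d(y, basepoint)

test_points = [(1, 0), (-2, 3), (5, -1), (0, 4)]
for t in (1, 10, 100, 1000):          # move out along the geodesic ray gamma(t) = (t, 0)
    print(t, [f((t, 0), x) for x in test_points])
print("Busemann limit", [-x1 + abs(x2) for (x1, x2) in test_points])
```

Rays in other directions give different Busemann points; for instance \(\gamma(t)=(0,t)\) yields \(h(x)=|x_{1}|-x_{2}\).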
## 2. Spectral triples on crossed product C\({}^{*}\)-algebras ### Crossed product C\({}^{*}\)-algebras Let \(\alpha:G\to\operatorname{Aut}(A)\) be an action of a discrete group \(G\) on a separable unital C\({}^{*}\)-algebra \(A\) and let \(\ell:G\to\mathbb{R}_{+}\) be a proper length function on \(G\). We will often write \(g.a:=\alpha_{g}(a)\) where \(g\in G\), \(a\in A\). Assume that \((\mathcal{A},\mathcal{H}_{A},D_{A})\) is an odd spectral triple on \(A\) via a faithful representation \(\pi\) of \(A\) and consider the canonical odd spectral triple \((\mathbb{C}[G],\ell^{2}(G),M_{\ell})\) on \(C_{r}^{*}(G)\). Here \(M_{\ell}\) denotes the multiplication operator given by \(M_{\ell}\delta_{g}:=\ell(g)\delta_{g}\) for \(g\in G\) and \(\mathbb{C}[G]\subseteq C_{r}^{*}(G)\) is the span of all left regular representation operators. Recall that the reduced crossed product C\({}^{*}\)-algebra \(A\rtimes_{\alpha,r}G\) is defined as the C\({}^{*}\)-subalgebra of \(\mathcal{B}(\mathcal{H}_{A}\otimes\ell^{2}(G))\) generated by the operators \(\widetilde{\pi}(a)\), \(a\in A\) and \(\lambda_{g}\), \(g\in G\) with \[\widetilde{\pi}(a)(\xi\otimes\delta_{h}):=\pi(h^{-1}.a)\xi\otimes\delta_{h} \qquad\text{and}\qquad\lambda_{g}(\xi\otimes\delta_{h}):=\xi\otimes\delta_{gh}\] for \(\xi\in\mathcal{H}_{A}\), \(h\in G\). This definition does (up to isomorphism) not depend on the choice of the faithful representation \(\pi\). The C\({}^{*}\)-algebra \(A\) naturally embeds into \(A\rtimes_{\alpha,r}G\) via \(a\mapsto\widetilde{\pi}(a)\). We will therefore often view \(A\) as a C\({}^{*}\)-subalgebra of \(A\rtimes_{\alpha,r}G\) and suppress \(\pi\) and \(\widetilde{\pi}\) in the notation. Further, we can canonically view the reduced group C\({}^{*}\)-algebra \(C_{r}^{*}(G)\) as a C\({}^{*}\)-subalgebra of \(A\rtimes_{\alpha,r}G\). **Lemma 2.1**.: _The C\({}^{*}\)-subalgebra of \(\mathcal{B}(\mathcal{H}_{A}\otimes\ell^{2}(G))\) generated by \(A\rtimes_{\alpha,r}G\) and \(\mathbb{C}1\otimes\ell^{\infty}(G)\) does (up to isomorphism) not depend on the choice of the faithful representation \(\pi:A\hookrightarrow\mathcal{B}(\mathcal{H}_{A})\)._ Proof.: The argument is standard, compare for instance with the proof of [3, Proposition 4.1.5]. Let \(\pi:A\hookrightarrow\mathcal{B}(\mathcal{H}_{A})\) and \(\pi^{\prime}:A\hookrightarrow\mathcal{B}(\mathcal{H}^{\prime}_{A})\) be two faithful representations of \(A\), define \(\widetilde{\pi}\) and \(\widetilde{\pi}^{\prime}\) as above and consider \[B_{1} := C^{*}(\widetilde{\pi}(A)\cup\{\lambda_{g}\mid g\in G\}\cup \mathbb{C}1\otimes\ell^{\infty}(G))\subseteq\mathcal{B}(\mathcal{H}_{A}\otimes \ell^{2}(G)),\] \[B_{2} := C^{*}(\widetilde{\pi}^{\prime}(A)\cup\{\lambda_{g}\mid g\in G\} \cup\mathbb{C}1\otimes\ell^{\infty}(G))\subseteq\mathcal{B}(\mathcal{H}^{\prime}_ {A}\otimes\ell^{2}(G)).\] We have to show that \(B_{1}\cong B_{2}\) via \(\widetilde{\pi}(a)\mapsto\widetilde{\pi}^{\prime}(a)\), \(\lambda_{g}\mapsto\lambda_{g}\) and \(1\otimes f\mapsto 1\otimes f\) for \(a\in A\), \(g\in G\), \(f\in\ell^{\infty}(G)\). For every finite subset \(F\subseteq G\) define \(P_{F}\in\ell^{\infty}(G)\) to be the orthogonal projection onto the closure of \(\operatorname{Span}\{\delta_{g}\mid g\in F\}\subseteq\ell^{2}(G)\). 
It is then easy to see that for all finite sequences \((a_{g})_{g\in G}\subseteq A\), \((f_{g})_{g\in G}\subseteq\ell^{\infty}(G)\) with \(a_{g}=0\) and \(f_{g}=0\) for almost all \(g\in G\), \[\|\sum_{g\in G}\widetilde{\pi}(a_{g})(1\otimes f_{g})\lambda_{g}\|=\sup_{F \subseteq G\text{ finite}}\|(1\otimes P_{F})(\sum_{g\in G}\widetilde{\pi}(a_{g})(1 \otimes f_{g})\lambda_{g})(1\otimes P_{F})\|\] and \[\|\sum_{g\in G}\widetilde{\pi}^{\prime}(a_{g})(1\otimes f_{g})\lambda_{g}\|= \sup_{F\subseteq G\text{ finite}}\|(1\otimes P_{F})(\sum_{g\in G}\widetilde{\pi}^{ \prime}(a_{g})(1\otimes f_{g})\lambda_{g})(1\otimes P_{F})\|.\] Now, for every finite subset \(F\subseteq G\) and \(a\in A\), \[(1\otimes P_{F})\widetilde{\pi}(a)=(1\otimes P_{F})\widetilde{\pi}(a)(1 \otimes P_{F})=\sum_{h\in F}\widetilde{\pi}(h^{-1}.a)\otimes e_{h,h},\] where \(e_{g,h}\), \(g,h\in F\) denote the canonical matrix units of \(P_{F}\mathcal{B}(\ell^{2}(G))P_{F}\cong M_{\#F}(\mathbb{C})\). This implies that \[(1\otimes P_{F})(\sum_{g\in G}\widetilde{\pi}(a_{g})(1\otimes f_ {g})\lambda_{g})(1\otimes P_{F}) = \sum_{g\in G}\sum_{h\in F\cap gF}\widetilde{\pi}(h^{-1}.a_{g}) \otimes(f_{g}e_{h,g^{-1}h})\] \[\in \widetilde{\pi}(A)\otimes M_{\#F}(\mathbb{C})\] and similarly for \(\widetilde{\pi}^{\prime}\). But \(\widetilde{\pi}(A)\otimes M_{\#F}(\mathbb{C})\cong\widetilde{\pi}^{\prime}(A) \otimes M_{\#F}(\mathbb{C})\) canonically and hence the norms above coincide. As in [18, Section 2] define a Dirac operator \(D\) on \(\mathcal{H}\oplus\mathcal{H}\) with \(\mathcal{H}:=\mathcal{H}_{A}\otimes\ell^{2}(G)\) via \[D:=\left(\begin{array}{cc}0&D_{A}\otimes 1-i\otimes M_{\ell}\\ D_{A}\otimes 1+i\otimes M_{\ell}&0\end{array}\right).\] and write \[C_{c}(G,\mathcal{A}):=\left\{\sum_{g\in G}a_{g}\lambda_{g}\mid(a_{g})_{g\in G }\subseteq\mathcal{A}\text{ with }a_{g}=0\text{ for almost all }g\in G\right\}.\] Then \(C_{c}(G,\mathcal{A})\) is a dense \(*\)-subalgebra of \(A\rtimes_{\alpha,r}G\subseteq\mathcal{B}(\mathcal{H})\). For \(x=\sum_{g\in G}a_{g}\lambda_{g}\in C_{c}(G,\mathcal{A})\) with \((a_{g})_{g\in G}\subseteq\mathcal{A}\) we call \(\operatorname{supp}(x):=\{g\in G\mid a_{g}\neq 0\}\) the _support of \(x\)_. It was argued in [18, Theorem 2.7] that, under the assumption that \(\mathcal{A}\) is invariant under the action of \(G\) and that \(\sup_{g\in G}\|[D_{A},g.a]\|<\infty\) for every \(a\in\mathcal{A}\), the triple \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) defines an even spectral triple on \(A\rtimes_{\alpha,r}G\). Further, if \((\mathcal{A},\mathcal{H}_{A},D_{A})\) is non-degenerate, so is \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\). (Note that in [18] the slightly different setup of proper translation bounded integer-valued functions on \(G\) is considered; however the results translate into our setting verbatim.) Motivated by this, let us introduce the notion of metrically equicontinuous actions. **Definition 2.2** ([18, Definition 2.5]).: Let \((\mathcal{A},\mathcal{H}_{A},D_{A})\) be a non-degenerate odd spectral triple on a unital separable C\({}^{*}\)-algebra \(A\). Assume that \(L_{D_{A}}:\mathcal{A}\to[0,\infty)\), \(L_{D_{A}}(a)=\|[D,a]\|\) is a Lipschitz seminorm such that the pair \((A,L_{D_{A}})\) is a compact quantum metric space. An action \(\alpha:G\to\operatorname{Aut}(A)\) is called _smooth_ if \(\alpha_{g}(\mathcal{A})\subseteq\mathcal{A}\) for every \(g\in G\). 
If further \(\sup_{g\in G}L_{D_{A}}(g.a)<\infty\) for every \(a\in\mathcal{A}\), \(\alpha\) is called _metrically equicontinuous_. Recall that the horofunction compactification \(\overline{G}^{\ell}\) of a discrete group \(G\) equipped with a proper length function \(\ell:G\to\mathbb{R}_{+}\) is the (compact) closure of the image of \(G\) in \(C(G,e)\) under the embedding \(g\mapsto f_{g}(\,\cdot\,):=d_{\ell}(g,\,\cdot\,)-d_{\ell}(g,e)\) and that the canonical action of \(G\) on itself induces actions \(\beta:G\curvearrowright C(\overline{G}^{\ell})\) and \(G\curvearrowright C(\partial_{\ell}G)\). By the very construction, for every \(g\in G\) there exists a unique continuous bounded map \(\varphi_{g}^{\ell}:\overline{G}^{\ell}\to\mathbb{C}\) defined by \(\varphi_{g}^{\ell}(h):=\ell(h)-\ell(g^{-1}h)\) for \(h\in G\). These maps very naturally occur in our crossed product setting, as the following lemma illustrates. **Lemma 2.3**.: _Let \(\alpha:G\to\text{Aut}(A)\) be an action of a discrete group \(G\) on a separable unital \(C^{*}\)-algebra \(A\subseteq\mathcal{B}(\mathcal{H}_{A})\) and let \(\ell:G\to\mathbb{R}_{+}\) be a proper length function on \(G\). For every \(x=\sum_{g\in G}a_{g}\lambda_{g}\in C_{c}(G,A)\subseteq\mathcal{B}(\mathcal{H})\) with \((a_{g})_{g\in G}\subseteq A\), \(\mathcal{H}:=\mathcal{H}_{A}\otimes\ell^{2}(G)\),_ \[[1\otimes M_{\ell},x]=\sum_{g\in G}(1\otimes\varphi_{g}^{\ell})a_{g}\lambda_{g}\] _where \(\varphi_{g}^{\ell}\) is viewed as a multiplication operator \(\delta_{h}\mapsto\varphi_{g}^{\ell}(h)\) in \(\ell^{\infty}(G)\subseteq\mathcal{B}(\ell^{2}(G))\)._ Proof.: One has that for every finite sum \(x=\sum_{g\in G}a_{g}\lambda_{g}\in C_{c}(G,\mathcal{A})\) with \((a_{g})_{g\in G}\subseteq\mathcal{A}\) and \(\xi\in\mathcal{H}\), \(h\in G\), \[[1\otimes M_{\ell},x](\xi\otimes\delta_{h})=(1\otimes M_{\ell})x (\xi\otimes\delta_{h})-x(1\otimes M_{\ell})(\xi\otimes\delta_{h})\] \[= \sum_{g\in G}\left(\ell(gh)-\ell(h)\right)(\alpha_{(gh)^{-1}}(a_ {g})\xi\otimes\delta_{gh})=\sum_{g\in G}\varphi_{g}^{\ell}(gh)(\alpha_{(gh)^{ -1}}(a_{g})\xi\otimes\delta_{gh})\] and hence \[[1\otimes M_{\ell},x]=\sum_{g\in G}(1\otimes\varphi_{g}^{\ell})a_{g}\lambda_{g }\in\mathcal{B}(\mathcal{H}),\] which implies the claim. The maps \(\varphi_{g}^{\ell}\), \(g\in G\) further satisfy the following 1-cocycle condition which will become important in the later sections. **Lemma 2.4**.: _Let \(G\) be a discrete group and let \(\ell:G\to\mathbb{R}_{+}\) be a proper length function on \(G\). Then \(\varphi_{gh}^{\ell}=g.\varphi_{h}^{\ell}+\varphi_{g}^{\ell}\) for all \(g,h\in G\)._ Proof.: For all \(g,h,x\in G\), \[\varphi_{gh}^{\ell}(x) = \ell(x)-\ell(h^{-1}g^{-1}x)\] \[= \ell(x)-\ell(g^{-1}x)+\ell(g^{-1}x)-\ell(h^{-1}g^{-1}x)\] \[= \varphi_{g}^{\ell}(x)+\varphi_{h}^{\ell}(g^{-1}x).\] The claim then follows from the fact that the functions \(\varphi_{gh}^{\ell}\), \(\varphi_{g}^{\ell}\) and \(\varphi_{h}^{\ell}\) are continuous and that \(G\) is dense in \(\overline{G}^{\ell}\). By the discussion in Subsection 1.3, the commutative C\({}^{*}\)-algebra \(C(\overline{G}^{\ell})\) is isomorphic to the unital C\({}^{*}\)-subalgebra \(\mathcal{G}(G,\ell)\) of \(\ell^{\infty}(G)\) generated by \(C_{0}(G)\) and the set \(\{\varphi_{g}^{\ell}\mid g\in G\}\), where again the \(\varphi_{g}^{\ell}\) are viewed as multiplication operators. 
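The cocycle identity of Lemma 2.4 holds for any length function and is easy to test numerically on group elements; the sketch below is only a toy check, using \(G=\mathbb{Z}^{2}\) with its standard word length (a choice made here for illustration and not tied to the examples treated later), on random triples \((g,h,x)\):

```python
import random

def length(g):                    # word length on Z^2 for the standard generators
    return abs(g[0]) + abs(g[1])

def phi(g, x):                    # phi_g^l(x) = l(x) - l(g^{-1} x)
    return length(x) - length((x[0] - g[0], x[1] - g[1]))

random.seed(0)
for _ in range(10_000):
    g, h, x = (tuple(random.randint(-50, 50) for _ in range(2)) for _ in range(3))
    gh = (g[0] + h[0], g[1] + h[1])
    g_inv_x = (x[0] - g[0], x[1] - g[1])
    # Lemma 2.4:  phi_{gh}(x) = phi_g(x) + phi_h(g^{-1} x)
    assert phi(gh, x) == phi(g, x) + phi(h, g_inv_x)
print("cocycle identity verified on all random samples")
```

On group elements this is just a telescoping of length differences; the content of Lemma 2.4 is that the identity persists on all of \(\overline{G}^{\ell}\) by continuity and density of \(G\).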
In the setting from before, define \(\mathcal{C}(A,G,\ell)\) as the C\({}^{*}\)-subalgebra of \(\mathcal{B}(\mathcal{H})\) generated by \(A\), \(\mathbb{C}1\otimes\mathcal{G}(G,\ell)\) and \(C_{r}^{*}(G)\). Then, \([1\otimes M_{l},x]\in\mathcal{C}(A,G,\ell)\) for every \(x\in C_{c}(G,\mathcal{A})\). For the sake of transparency, let us in the following denote the canonical embeddings of \(A\) and \(\mathcal{G}(G,\ell)\cong C(\overline{G}^{\ell})\) by \[\widetilde{\pi} : A\hookrightarrow\mathcal{C}(A,G,\ell)\subseteq\mathcal{B}(\mathcal{ H}),\] \[\pi_{\rtimes} : A\hookrightarrow A\rtimes_{\alpha,r}G\subseteq\mathcal{B}( \mathcal{H}),\] \[\nu : \mathcal{G}(G,\ell)\hookrightarrow\mathcal{B}(\ell^{2}(G)),\] \[\widetilde{\nu} : \mathcal{G}(G,\ell)\hookrightarrow\mathcal{C}(A,G,\ell)\subseteq \mathcal{B}(\mathcal{H}),\] \[\nu_{\rtimes} : \mathcal{G}(G,\ell)\hookrightarrow C(\overline{G}^{\ell})\rtimes_{ \beta,r}G.\] Similarly, denote the left regular representation operators in \(\mathcal{C}(A,G,\ell)\subseteq\mathcal{B}(\mathcal{H})\) by \(\widetilde{\lambda}_{g}\), \(g\in G\) and the ones in \(\mathcal{B}(\ell^{2}(G))\), \(A\rtimes_{\alpha,r}G\subseteq\mathcal{B}(\mathcal{H})\) and \(C(\overline{G}^{\ell})\rtimes_{\beta,r}G\) by \(\lambda_{g}\), \(g\in G\). Note that \(\widetilde{\pi}(a)=\pi_{\rtimes}(a)\) and \(\widetilde{\lambda}_{g}=\lambda_{g}\) in \(\mathcal{B}(\mathcal{H})\) for all \(a\in A\), \(g\in G\). **Proposition 2.5**.: _The map \(\mathcal{C}(A,G,\ell)\to(A\rtimes_{\alpha,r}G)\otimes(C(\overline{G}^{\ell}) \rtimes_{\beta,r}G)\subseteq\mathcal{B}(\mathcal{H})\otimes\mathcal{B}(\ell^{ 2}(G)\otimes\ell^{2}(G))\) given by \(\widetilde{\pi}(a)\mapsto\pi_{\rtimes}(a)\otimes 1\), \(\widetilde{\nu}(f)\mapsto 1\otimes\nu_{\rtimes}(f)\) and \(\widetilde{\lambda}_{g}\mapsto\lambda_{g}\otimes\lambda_{g}\) for \(a\in A\), \(f\in\mathcal{G}(G,\ell)\cong C(\overline{G}^{\ell})\) and \(g\in G\) is a well-defined \(*\)-isomorphism onto its image._ Proof.: One can view \(A\) as being covariantly and faithfully represented on \(\mathcal{H}=\mathcal{H}_{A}\otimes\ell^{2}(G)\) (via \(\widetilde{\pi}\) from before). In turn, by applying Lemma 2.1, we can both interpret \(\mathcal{C}(A,G,\ell)\) as a C\({}^{*}\)-subalgebra of \(\mathcal{B}(\mathcal{H}\otimes\ell^{2}(G))\) and as a C\({}^{*}\)-subalgebra of \(\mathcal{B}(\mathcal{H})\). Write \(\iota\) for the corresponding embedding \(\mathcal{C}(A,G,\ell)\hookrightarrow\mathcal{B}(\mathcal{H}\otimes\ell^{2}(G))\) and define a unitary \(U:\mathcal{H}\otimes\ell^{2}(G)\to\mathcal{H}\otimes\ell^{2}(G)\) via \(U(\xi\otimes\delta_{g}):=\widetilde{\lambda}_{g}\xi\otimes\delta_{g}\). For \(a\in A\), \(\xi\in\mathcal{H}\), \(g\in G\) one has \[U(\iota\circ\widetilde{\pi})(a)U^{*}(\xi\otimes\delta_{g}) = U(\iota\circ\widetilde{\pi})(a)(\widetilde{\lambda}_{g^{-1}}\xi \otimes\delta_{g})\] \[= U(\alpha_{g^{-1}}(\widetilde{\pi}(a))\widetilde{\lambda}_{g^{-1} }\xi\otimes\delta_{g})\] \[= U(\widetilde{\lambda}_{g^{-1}}\widetilde{\pi}(a)\xi\otimes \delta_{g})\] \[= \widetilde{\pi}(a)\xi\otimes\delta_{g}\] \[= \pi_{\rtimes}(a)\xi\otimes\delta_{g},\] so \(U(\iota\circ\widetilde{\pi})(a)U^{*}=\pi_{\rtimes}(a)\otimes 1\). 
For \(f\in C(\overline{G}^{\ell})\cong\mathcal{G}(G,\ell)\), \(\xi\in\mathcal{H}\), \(g\in G\), \[U(\iota\circ\widetilde{\nu}(f))U^{*}(\xi\otimes\delta_{g}) = U(\iota\circ\widetilde{\nu})(f)(\widetilde{\lambda}_{g^{-1}}\xi \otimes\delta_{g})\] \[= f(g)U(\widetilde{\lambda}_{g^{-1}}\xi\otimes\delta_{g})\] \[= f(g)(\xi\otimes\delta_{g}),\] so \(U(\iota\circ\widetilde{\nu})(f)U^{*}=1\otimes\nu(f)\). Lastly, for \(\xi\in\mathcal{H}\), \(g,h\in G\), \[U\iota(\widetilde{\lambda}_{g})U^{*}(\xi\otimes\delta_{h}) = U\iota(\widetilde{\lambda}_{g})(\widetilde{\lambda}_{h^{-1}}\xi \otimes\delta_{h})\] \[= U(\widetilde{\lambda}_{h^{-1}}\xi\otimes\delta_{gh})\] \[= \widetilde{\lambda}_{g}\xi\otimes\delta_{gh}\] \[= \lambda_{g}\xi\otimes\delta_{gh},\] so \(U\iota(\widetilde{\lambda}_{g})U^{*}=\lambda_{g}\otimes\lambda_{g}\). This implies that conjugation by \(U\) implements a \(*\)-embedding of \(\mathcal{C}(A,G,\ell)\) into \((A\rtimes_{\alpha,r}G)\otimes C_{u}^{*}(G)\) via \(\widetilde{\pi}(a)\mapsto\pi_{\rtimes}(a)\otimes 1\), \(\widetilde{\nu}(f)\mapsto 1\otimes\nu(f)\) and \(\widetilde{\lambda}_{g}\mapsto\lambda_{g}\otimes\lambda_{g}\) for \(a\in A\), \(f\in C(\overline{G}^{\ell})\cong\mathcal{G}(G,\ell)\) and \(g\in G\). Here \(C_{u}^{*}(G)\subseteq\mathcal{B}(\ell^{2}(G))\) denotes the uniform Roe algebra which is generated by \(\ell^{\infty}(G)\) and \(C_{r}^{*}(G)\) in \(\mathcal{B}(\ell^{2}(G))\). By [3, Proposition 5.1.3], \(C_{u}^{*}(G)\cong\ell^{\infty}(G)\rtimes_{r}G\) canonically where the crossed product is taken with respect to the left translation action. In particular, the C\({}^{*}\)-subalgebra of \(\mathcal{B}(\ell^{2}(G))\) generated by \(\mathcal{G}(G,\ell)\) and \(C_{r}^{*}(G)\) identifies with \(C(\overline{G}^{\ell})\rtimes_{\beta,r}G\). We deduce the claim. Note that the proof of Proposition 2.5 does not require the action \(\beta\) to be amenable. For notational convenience, if \(S\) is a subset of \(G\), we define \[C_{c}(S,\mathcal{A}):=\left\{\sum_{g\in S}a_{g}\lambda_{g}\mid(a_{g})_{g\in G }\subseteq\mathcal{A}\text{ with }a_{g}=0\text{ for almost all }g\in S\right\}\subseteq A\rtimes_{\alpha,r}G\] and \(C_{c}(S,A)\subseteq A\rtimes_{\alpha,r}G\) analogously. If \(S\) is a subgroup of \(G\), then these spaces will be \(*\)-subalgebras of \(A\rtimes_{\alpha,r}G\). **Lemma 2.6**.: _Let \(H\subseteq G\) be a subgroup. Then there exists a contractive linear map \(\mathbb{E}_{H}\) on \(\mathcal{B}(\mathcal{H})\) such that for every \(x=\sum_{g\in G}a_{g}\lambda_{g}\in C_{c}(G,\mathcal{A})\subseteq\mathcal{B}( \mathcal{H})\) with \((a_{g})_{g\in G}\subseteq\mathcal{A}\) and \(g\in G\) the identities \(\mathbb{E}_{H}(x)=\sum_{h\in H}a_{h}\lambda_{h}\in C_{c}(H,\mathcal{A})\), \(\mathbb{E}_{H}([D_{A}\otimes 1,x]\lambda_{g^{-1}})\lambda_{g}=[D_{A}\otimes 1, \mathbb{E}_{H}(x\lambda_{g^{-1}})\lambda_{g}]\) and \(\mathbb{E}_{H}([1\otimes M_{\ell},x]\lambda_{g^{-1}})\lambda_{g}=[1\otimes M _{\ell},\mathbb{E}_{H}(x\lambda_{g^{-1}})\lambda_{g}]\) hold._ Proof.: Let \((g_{i})_{i\in I}\subseteq G\) be a family of elements with \(G=\bigcup_{i\in I}Hg_{i}\) and \(Hg_{i}\neq Hg_{j}\) for \(i\neq j\). For \(i\in I\) write \(P_{Hg_{i}}\) for the orthogonal projection onto the closed subspace of \(\ell^{2}(G)\) spanned by all orthonormal basis vectors \(\delta_{hg_{i}}\), \(h\in H\). 
We claim that the linear map \(\mathbb{E}_{H}\) given by \(x\mapsto\sum_{i\in I}(1\otimes P_{Hg_{i}})x(1\otimes P_{Hg_{i}})\) satisfies the required conditions, where the sum converges in the strong operator topology. Indeed, for every \(x\in\mathcal{B}(\mathcal{H})\), \[\|\sum_{i\in I}(1\otimes P_{Hg_{i}})x(1\otimes P_{Hg_{i}})\|\leq\sup_{i\in I }\|(1\otimes P_{Hg_{i}})x(1\otimes P_{Hg_{i}})\|\leq\|x\|\,,\] as the operators \((1\otimes P_{Hg_{i}})x(1\otimes P_{Hg_{i}})\), \(i\in I\) have pairwise orthogonal support and ranges. For \(x=\sum_{g\in G}a_{g}\lambda_{g}\in C_{c}(G,\mathcal{A})\subseteq\mathcal{B}( \mathcal{H})\) with \((a_{g})_{g\in G}\subseteq\mathcal{A}\) and \(\xi\in\mathcal{H}_{A}\), \(h^{\prime}\in H\), \(i\in I\) we further find \[(\mathbb{E}_{H}(x))\left(\xi\otimes\delta_{h^{\prime}g_{i}}\right) = ((1\otimes P_{Hg_{i}})\sum_{g\in G}a_{g}\lambda_{g})(\xi\otimes \delta_{h^{\prime}g_{i}})\] \[= \sum_{g\in G}(1\otimes P_{Hg_{i}})\left(((gh^{\prime}g_{i})^{-1}.a_{g})\xi\otimes\delta_{gh^{\prime}g_{i}}\right)\] \[= \sum_{h\in H}\left(((hh^{\prime}g_{i})^{-1}.a_{h})\xi\otimes \delta_{hh^{\prime}g_{i}}\right)\] \[= (\sum_{h\in H}a_{h}\lambda_{h})(\xi\otimes\delta_{h^{\prime}g_{i} }),\] so that \(\mathbb{E}_{H}(x)=\sum_{h\in H}a_{h}\lambda_{h}\). For \(g,g^{\prime}\in G\) there exist \(i\in I\) and \(h^{\prime}\in H\) such that \(gg^{\prime}=h^{\prime}g_{i}\). It follows that \[\left(\mathbb{E}_{H}([D_{A}\otimes 1,x]\lambda_{g^{-1}})\lambda_{g} \right)(\xi\otimes\delta_{g^{\prime}}) = ((1\otimes P_{Hg_{i}})\sum_{g^{\prime\prime}\in G}[D_{A}\otimes 1,a_{g^{ \prime\prime}}]\lambda_{g^{\prime\prime}g^{-1}})(\xi\otimes\delta_{h^{\prime}g_{ i}})\] \[= (\sum_{g^{\prime\prime}\in Hg}[D_{A}\otimes 1,a_{g^{\prime\prime}}] \lambda_{g^{\prime\prime}g^{-1}})(\xi\otimes\delta_{h^{\prime}g_{i}})\] \[= (\sum_{g^{\prime\prime}\in Hg}[D_{A}\otimes 1,a_{g^{\prime\prime}}] \lambda_{g^{\prime\prime}})(\xi\otimes\delta_{g^{\prime}})\] \[= \left([D_{A}\otimes 1,\mathbb{E}_{H}(x\lambda_{g^{-1}})\lambda_{g}] \right)(\xi\otimes\delta_{g^{\prime}})\] and with Lemma 2.3, \[\left(\mathbb{E}_{H}([1\otimes M_{\ell},x]\lambda_{g^{-1}})\lambda_{ g}\right)(\xi\otimes\delta_{g^{\prime}}) = ((1\otimes P_{Hg_{i}})\sum_{g\in G}(1\otimes\varphi_{g^{\prime \prime}}^{\ell})a_{g^{\prime\prime}}\lambda_{g^{\prime\prime}g^{-1}})(\xi\otimes \delta_{h^{\prime}g_{i}})\] \[= (\sum_{g^{\prime\prime}\in Hg}(1\otimes\varphi_{g^{\prime\prime} }^{\ell})a_{g^{\prime\prime}}\lambda_{g^{\prime\prime}g^{-1}})(\xi\otimes \delta_{h^{\prime}g_{i}})\] \[= (\sum_{g^{\prime\prime}\in Hg}(1\otimes\varphi_{g^{\prime\prime} }^{\ell})a_{g^{\prime\prime}}\lambda_{g^{\prime\prime}})(\xi\otimes\delta_{g^ {\prime}})\] \[= ([1\otimes M_{\ell},\mathbb{E}_{H}(x\lambda_{g^{-1}})\lambda_{g} ])(\xi\otimes\delta_{g^{\prime}}).\] We deduce that \(\mathbb{E}_{H}([D_{A}\otimes 1,x]\lambda_{g^{-1}})\lambda_{g}=[D_{A}\otimes 1, \mathbb{E}_{H}(x\lambda_{g^{-1}})\lambda_{g}]\) and \(\mathbb{E}_{H}([1\otimes M_{\ell},x]\lambda_{g^{-1}})\lambda_{g}=[1\otimes M _{\ell},\mathbb{E}_{H}(x\lambda_{g^{-1}})\lambda_{g}]\), as claimed. ### Crossed product \(\mathbf{C}^{*}\)-algebras as compact quantum metric spaces Consider the setting of Subsection 2.1, i.e. let \((\mathcal{A},\mathcal{H}_{A},D_{A})\) be a non-degenerate odd spectral triple on a separable unital \(\mathrm{C}^{*}\)-algebra \(A\) and assume that the induced Lipschitz semi-norm \(L_{D_{A}}(a):=\left\|[D_{A},a]\right\|,a\in\mathcal{A}\) defines a compact quantum metric space \((A,L_{D_{A}})\). 
Let further \(\alpha:G\to\mathrm{Aut}(A)\) be a metrically equicontinuous action of a discrete group and \(\ell:G\to\mathbb{R}_{+}\) a proper length function on \(G\). It is natural to ask whether the even spectral triple \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) defined in Subsection 2.1 induces a Lip-metric on the state space of \(A\rtimes_{\alpha,r}G\). This question was formulated in [18] in the case of \(G=\mathbb{Z}\) and length functions induced by finite symmetric generating sets. The discussion in [18, Subsection 2.3] implies the following convenient criterion. We include its proof for the convenience of the reader. **Proposition 2.7** ([18, Subsection 2.3]).: _The even spectral triple \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) defined in Subsection 2.1 satisfies the Lipschitz condition if and only if the set_ \[\{x\in C_{c}(G,\mathcal{A})\mid\|[D_{A}\otimes 1,x]\|\leq 1\text{ and }\|[1 \otimes M_{\ell},x]\|\leq 1\} \tag{2.1}\] _has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\)._ Proof.: By Theorem 1.3 the even spectral triple \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) satisfies the Lipschitz condition if and only if the set \(\mathcal{Q}:=\{x\in C_{c}(G,\mathcal{A})\mid\|[D,x\oplus x]\|\leq 1\}\) has totally bounded image in the quotient space \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\). Denote the set in (2.1) by \(\mathcal{Q}^{\prime}\). "\(\Rightarrow\)" Assume that the even spectral triple \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) satisfies the Lipschitz condition. For \(x=\sum_{g\in G}a_{g}\lambda_{g}\in\mathcal{Q}^{\prime}\) with \((a_{g})_{g\in G}\subseteq\mathcal{A}\) we have by \[[D,x\oplus x]=\left(\begin{array}{cc}0&[D_{A}\otimes 1,x]-i[1\otimes M_{ \ell},x]\\ [D_{A}\otimes 1,x]+i[1\otimes M_{\ell},x]&0\end{array}\right) \tag{2.2}\] that \[\|[D,x\oplus x]\|\leq 2\|[D_{A}\otimes 1,x]\|+2\|[1\otimes M_{\ell},x]\|\leq 4.\] This means that \(\mathcal{Q}^{\prime}\subseteq 4\mathcal{Q}\) and therefore the image of \(\mathcal{Q}^{\prime}\) must be totally bounded in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\). "\(\Leftarrow\)" Assume that the image of \(\mathcal{Q}^{\prime}\) is totally bounded in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\). From (2.2) it follows that \(\|[D_{A}\otimes 1,x]+i[1\otimes M_{\ell},x]\|\leq 1\) and \(\|[D_{A}\otimes 1,x]-i[1\otimes M_{\ell},x]\|\leq 1\) for every \(x\in\mathcal{Q}\). But then, \(\|[D_{A}\otimes 1,x]\|\leq 2\) and \(\|[1\otimes M_{\ell},x]\|\leq 2\), and therefore \(\mathcal{Q}\subseteq 2\mathcal{Q}^{\prime}\). We conclude that the image of \(\mathcal{Q}\) in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\) must be totally bounded and therefore the triple \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) satisfies the Lipschitz condition. Proposition 2.7 implies that in the treatment of the question above it suffices to restrict to cosets of finite index subgroups. **Lemma 2.8**.: _Let \((\mathcal{A},\mathcal{H}_{A},D_{A})\) be a non-degenerate odd spectral triple on a separable unital C\({}^{*}\)-algebra \(A\) and assume that the induced Lipschitz semi-norm \(L_{D_{A}}(a):=\left\|[D_{A},a]\right\|,a\in\mathcal{A}\) defines a compact quantum metric space \((A,L_{D_{A}})\). Let further \(\alpha:G\to\text{Aut}(A)\) be a metrically equicontinuous action of a finitely generated discrete group \(G\) equipped with a proper length function \(\ell:G\to\mathbb{R}_{+}\) and let \(H\leq G\) be a finite index subgroup. 
Then the even spectral triple \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) defined in Subsection 2.1 satisfies the Lipschitz condition if and only if for every \(g\in G\) the set of all elements \(x=\sum_{h\in H}a_{h}\lambda_{hg}\in C_{c}(Hg,\mathcal{A})\) with \((a_{h})_{h\in H}\subseteq\mathcal{A}\) satisfying \(\left\|[D_{A}\otimes 1,x]\right\|\leq 1\) and \(\left\|[1\otimes M_{\ell},x]\right\|\leq 1\) has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\)._

Proof.: Set \(\mathcal{Q}:=\{x\in C_{c}(G,\mathcal{A})\mid\left\|[D_{A}\otimes 1,x]\right\|\leq 1\text{ and }\left\|[1\otimes M_{\ell},x]\right\|\leq 1\}\) and for \(g\in G\) write \(\mathcal{Q}_{g}\) for the set of all elements \(x=\sum_{h\in H}a_{h}\lambda_{hg}\in C_{c}(Hg,\mathcal{A})\) with \((a_{h})_{h\in H}\subseteq\mathcal{A}\) satisfying \(\left\|[D_{A}\otimes 1,x]\right\|\leq 1\) and \(\left\|[1\otimes M_{\ell},x]\right\|\leq 1\).

"\(\Rightarrow\)" Assume that the triple \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) satisfies the Lipschitz condition. By Proposition 2.7, the set \(\mathcal{Q}\) has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\). But \(\mathcal{Q}_{g}\) is contained in \(\mathcal{Q}\). It follows that \(\mathcal{Q}_{g}\) must also have totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\).

"\(\Leftarrow\)" Assume that \(\mathcal{Q}_{g}\) has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\) for every \(g\in G\) and let \(g_{1},...,g_{m}\in G\) be elements with \(G=\bigcup_{i=1}^{m}Hg_{i}\) and \(Hg_{i}\neq Hg_{j}\) for \(i\neq j\). For \(x=\sum_{g\in G}a_{g}\lambda_{g}\in\mathcal{Q}\) with \((a_{g})_{g\in G}\subseteq\mathcal{A}\) and \(i=1,...,m\) set \(x_{i}:=\sum_{h\in H}a_{hg_{i}}\lambda_{hg_{i}}\). Then \(x=x_{1}+...+x_{m}\), \[\left\|[1\otimes M_{\ell},x_{i}]\right\| = \left\|\sum_{h\in H}(1\otimes\varphi_{hg_{i}}^{\ell})a_{hg_{i}}\lambda_{hg_{i}}\right\|\] \[= \left\|[1\otimes M_{\ell},\mathbb{E}_{H}(x\lambda_{g_{i}^{-1}})\lambda_{g_{i}}]\right\|\] \[= \left\|\mathbb{E}_{H}([1\otimes M_{\ell},x]\lambda_{g_{i}^{-1}})\lambda_{g_{i}}\right\|\] \[\leq \left\|[1\otimes M_{\ell},x]\right\|\] \[\leq 1,\] where \(\mathbb{E}_{H}\) is the contractive linear map appearing in Lemma 2.6, and similarly \[\left\|[D_{A}\otimes 1,x_{i}]\right\|=\left\|\mathbb{E}_{H}([D_{A}\otimes 1,x]\lambda_{g_{i}^{-1}})\lambda_{g_{i}}\right\|\leq\left\|[D_{A}\otimes 1,x]\right\|\leq 1.\] It follows that \(\mathcal{Q}\subseteq\mathcal{Q}_{g_{1}}+...+\mathcal{Q}_{g_{m}}\) and hence, since the \(\mathcal{Q}_{g_{i}}\) are assumed to have totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\), the set \(\mathcal{Q}\) also has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\). With Proposition 2.7 we deduce that the triple \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) satisfies the Lipschitz condition.

For a group \(G\) we denote by \([G,G]\) its _commutator subgroup_ (or _derived subgroup_), i.e. the normal subgroup generated by all commutators \([g,h]:=g^{-1}h^{-1}gh\), \(g,h\in G\). Its _Abelianization_ is the commutative group \(G/[G,G]\). If \(G\) is finitely generated, then so is its Abelianization, which can therefore be written as a direct product \(T\times\mathbb{Z}^{m}\) where \(m\geq 0\) is the rank of \(G/[G,G]\) and where \(T\) is its torsion subgroup. Recall that an _invariant mean_ of a discrete group \(G\) is a state on \(\ell^{\infty}(G)\subseteq\mathcal{B}(\ell^{2}(G))\) that is invariant under the canonical action of \(G\). A group \(G\) is _amenable_ if it admits an invariant mean.
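Before moving on, let us make the compression map \(\mathbb{E}_{H}\) from Lemma 2.6 concrete in a toy finite-dimensional model. The following sketch is purely illustrative and is not used anywhere in the arguments; it assumes \(A=\mathbb{C}\) and replaces \(G\) by the finite cyclic group \(\mathbb{Z}/6\mathbb{Z}\) with subgroup \(H=\{0,2,4\}\) (both chosen only for the sake of the example), and it checks numerically that compressing by the coset projections returns exactly the coefficients indexed by \(H\).

```python
import numpy as np

# Toy model of Lemma 2.6 (illustration only): G = Z/6Z, H = {0, 2, 4}, A = C.
# The left regular representation operators lambda_g are permutation matrices and
# E_H(x) = sum_i P_i x P_i, where P_i projects onto span{delta_h : h in the coset H + i}.
n = 6
H = (0, 2, 4)

def lam(g):
    """Permutation matrix of translation by g on l^2(Z/6Z): lambda_g delta_j = delta_{j+g}."""
    m = np.zeros((n, n))
    for j in range(n):
        m[(j + g) % n, j] = 1.0
    return m

rng = np.random.default_rng(0)
a = {g: rng.standard_normal() for g in range(n)}      # coefficients of x = sum_g a_g lambda_g
x = sum(a[g] * lam(g) for g in range(n))

cosets = [tuple((h + r) % n for h in H) for r in (0, 1)]
projections = []
for coset in cosets:
    p = np.zeros((n, n))
    for j in coset:
        p[j, j] = 1.0
    projections.append(p)

E_H_x = sum(p @ x @ p for p in projections)            # the compression E_H(x)
expected = sum(a[h] * lam(h) for h in H)               # the coefficients indexed by H survive
assert np.allclose(E_H_x, expected)
```

The block-diagonal picture behind this computation is exactly what yields the contractivity \(\|\mathbb{E}_{H}(x)\|\leq\|x\|\) used in the proofs of Lemma 2.8 and Proposition 2.11.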
As it will be convenient in Section 3, we formulate the following lemma as well as Definition 2.10 for pseudo-length functions instead of just length functions. A _pseudo-length function_ on a discrete group \(G\) is a map \(\ell:G\to\mathbb{R}_{+}\) satisfying \(\ell(gh)\leq\ell(g)+\ell(h)\), \(\ell(g^{-1})=\ell(g)\) for all \(g,h\in G\) and \(\ell(e)=0\). As before, associate bounded operators \(\varphi_{g}^{\ell}\in\ell^{\infty}(G)\subseteq\mathcal{B}(\ell^{2}(G))\), \(g\in G\) with \(\ell\) by defining \(\varphi_{g}^{\ell}\delta_{h}:=(\ell(h)-\ell(g^{-1}h))\delta_{h}\) for \(h\in G\). It is easy to check that the \(1\)-cocycle identity in Lemma 2.4 holds for (not necessarily proper) pseudo-length functions as well. **Lemma 2.9**.: _Let \(G\) be a finitely generated discrete group equipped with a pseudo-length function \(\ell\). Denote the projection onto the torsion-free component of the Abelianization \(G/[G,G]\) of \(G\) by \(p_{G}\). Then every invariant mean \(\mu:\ell^{\infty}(G)\to\mathbb{C}\) induces a well-defined group homomorphism \(\widehat{\mu}_{\ell}:\text{im}(p_{G})\to\mathbb{R}\) via \(p_{G}(g)\mapsto\mu(\varphi_{g}^{\ell})\)._ Proof.: Note that \(\varphi_{g}^{\ell}\in\ell^{\infty}(G)\) is self-adjoint for every \(g\in G\). In combination with the \(1\)-cocycle identity this implies that the map \(G\to\mathbb{R}\), \(g\mapsto\mu(\varphi_{g}^{\ell})\) is a well-defined group homomorphism. Every such homomorphism vanishes on the commutator subgroup. The induced map on the Abelianization must vanish on the torsion subgroup. This proves the statement. The fundamental idea of our approach consists of showing that for suitable groups and (pseudo-)length functions on them the space of all invariant means is sufficiently rich in the sense that it induces many non-trivial group homomorphisms as in Lemma 2.9. Let us therefore introduce the following notion. **Definition 2.10**.: Let \(G\) be a finitely generated discrete group equipped with a pseudo-length function \(\ell\) and let \(p_{G}\) be the projection onto the torsion-free component of the Abelianization of \(G\). We call \(G\)_separated with respect to \(\ell\)_ if \[\operatorname{Hom}(\operatorname{im}(p_{G}),\mathbb{R})=\operatorname{Span} \left\{\widehat{\mu}_{\ell}\mid\mu\text{ invariant mean}\right\}.\] In this case, we also say that the pair \((G,\ell)\) is separated. It is clear that every group that is separated with respect to a certain length function has to be amenable. For notational convenience, for subsets \(S,T\) of a C\({}^{*}\)-algebra \(A\) and \(\varepsilon>0\) we write \(S\subseteq_{\varepsilon}T\) if for every \(a\in S\) there exists \(b\in T\) with \(\|a-b\|<\varepsilon\). For \(\lambda>0\) we further denote the set of all elements \(\lambda a\) with \(a\in S\) by \(\lambda S\). **Proposition 2.11**.: _Let \(G\) be a finitely generated discrete group equipped with a proper length function \(\ell:G\to\mathbb{R}_{+}\). Assume that \(G\) admits a finite index subgroup \(H\) that is separated with respect to the restriction of \(\ell\) to \(H\) and let \(A\subseteq\mathcal{B}(\mathcal{H}_{A})\) be a unital separable C\({}^{*}\)-algebra on which \(G\) acts. 
For every \(g\in G\) define_ \[\mathcal{Q}_{1}^{g}:=\left\{x\in C_{c}(Hg,A)\mid\|[1\otimes M_{\ell},x]\|\leq 1 \right\},\] _then for every \(\varepsilon>0\) there exists \(\delta>0\) and finitely many elements \(g_{1},...,g_{n}\in G\) such that \(\mathcal{Q}_{1}^{g}\subseteq_{\varepsilon}\delta\mathcal{Q}_{1}^{g}\cap C_{c }(K,A)\) where \(K:=\bigcup_{i=1}^{n}[H,H]g_{i}\)._ _Similarly, if \((\mathcal{A},\mathcal{H}_{A},D_{A})\) is a non-degenerate spectral triple on \(A\), \(g\in G\) and_ \[\mathcal{Q}_{2}^{g}:=\left\{x\in C_{c}(Hg,\mathcal{A})\mid\|[1\otimes M_{\ell },x]\|\leq 1,\|[D_{A}\otimes 1,x]\|\leq 1\right\},\] _then for every \(\varepsilon>0\) there exists \(\delta>0\) and finitely many elements \(g_{1},...,g_{n}\in G\) such that \(\mathcal{Q}_{2}^{g}\subseteq_{\varepsilon}\delta\mathcal{Q}_{2}^{g}\cap C_{c }(K,\mathcal{A})\) where \(K:=\bigcup_{i=1}^{n}[H,H]g_{i}\)._ Roughly speaking, Proposition 2.11 states that all elements \(x\in C_{c}(Hg,A)\), \(g\in G\) with \(\|[1\otimes M_{\ell},x]\|\leq 1\) (and \(\|[D_{A}\otimes 1,x]\|\leq 1\)) can suitably be approximated by ones that are in some sense almost supported on the commutator subgroup of \(H\). This has important implications. The proof of the proposition relies on the following variation of the result in [24, Section 2]. **Lemma 2.12**.: _Let \(G\) be a finitely generated discrete group equipped with a proper length function \(\ell:G\to\mathbb{R}_{+}\), let \(\alpha:G\to\text{Aut}(A)\) an action of \(G\) on a unital separable C\({}^{*}\)-algebra \(A\subseteq\mathcal{B}(\mathcal{H}_{A})\), and let \(L\in\mathbb{R}\). For a non-trivial group homomorphism \(\phi:G\to\mathbb{Z}\) define an unbounded operator \(M_{\phi}\) on \(\ell^{2}(G)\) via \(M_{\phi}\delta_{g}:=\phi(g)\delta_{g}\) for \(g\in G\). Then for every \(x=\sum_{g\in G}a_{g}\lambda_{g}\in C_{c}(G,A)\) with \((a_{g})_{g\in G}\subseteq A\) the operator \([1\otimes M_{\phi},x]\) has dense domain and is bounded. Further,_ \[\|\sum_{g\in G:|\phi(g)|>N}a_{g}\lambda_{g}\|\leq\left(\sum_{k\in\mathbb{Z}:| k|>N}\frac{1}{(k+L)^{2}}\right)^{1/2}\|[1\otimes M_{\phi},x]+Lx\|\] _for every \(N\in\mathbb{N}\) with \(N\geq|L|\)._ Proof.: It is clear that \([1\otimes M_{\phi},x]\) has dense domain and by the same computation as in the proof of Lemma 2.3, \[[1\otimes M_{\phi},x]=\sum_{g\in G}\phi(g)a_{g}\lambda_{g},\] so \([1\otimes M_{\phi},x]\) is bounded. To prove the inequality, define a strong operator-continuous \(1\)-parameter family \(\mathbb{R}\to\mathcal{B}(\mathcal{H}_{A}\otimes\ell^{2}(G))\), \(t\mapsto U_{t}\) via \(U_{t}(\xi\otimes\delta_{g}):=e^{it\phi(g)}(\xi\otimes\delta_{g})\) for \(\xi\in\mathcal{H}_{A}\), \(g\in G\). For fixed \(N\in\mathbb{N}\) with \(N\geq|L|\) we obtain a bounded linear map on \(\mathcal{B}(\mathcal{H}_{A}\otimes\ell^{2}(G))\) via \(\kappa(x)\eta:=(2\pi)^{-1}\int_{0}^{2\pi}f_{N}(t)U_{t}xU_{t}^{*}\eta dt\) for \(\eta\in\mathcal{H}_{A}\otimes\ell^{2}(G)\) with the \(L^{2}\)-function \(f_{N}(t):=\sum_{k\in\mathbb{Z}:|k|>N}(k+L)^{-1}e^{-ikt}\) with prescribed Fourier coefficients \((k+L)^{-1}\) for \(|k|>N\). 
Then, for \(x=\sum_{g\in G}a_{g}\lambda_{g}\in C_{c}(G,A)\) with \((a_{g})_{g\in G}\subseteq A\) and \(\xi\in\mathcal{H}_{A}\), \(h\in G\), \[\left[\kappa([1\otimes M_{\phi},x]+Lx)\right](\xi\otimes\delta_{h}) = \sum_{g\in G}\frac{\phi(g)+L}{2\pi}\left\{\int_{0}^{2\pi}e^{it \phi(g)}f_{N}(t)dt\right\}\left(((gh)^{-1}.a_{g})\xi\otimes\delta_{gh}\right)\] \[= \sum_{g\in G:|\phi(g)|>N}\left(((gh)_{\cdot}^{-1}a_{g})\xi\otimes \delta_{gh}\right).\] We get that \(\kappa([1\otimes M_{\phi},x]+Lx)=\sum_{g\in G:|\phi(g)|>N}a_{g}\lambda_{g}\) and hence \[\|\sum_{g\in G:|\phi(g)|>N}a_{g}\lambda_{g}\|=\|\kappa([1\otimes M _{\phi},x]+Lx)\|\leq\frac{\|[1\otimes M_{\phi},x]+Lx\|}{2\pi}\int_{0}^{2\pi} \left|f_{N}(t)\right|dt\] \[\leq \frac{\|[1\otimes M_{\phi},x]+Lx\|}{\sqrt{2\pi}}\left(\int_{0}^{2 \pi}\left|f_{N}(t)\right|^{2}dt\right)^{1/2}=\left(\sum_{k\in\mathbb{Z}:|k|>N} \frac{1}{(k+L)^{2}}\right)^{1/2}\|[1\otimes M_{\phi},x]+Lx\|\,.\] We are now ready to prove Proposition 2.11. As mentioned earlier, Rieffel's approach in [29] relies on the construction of sufficiently many fixed points in the horofunction boundaries of \(\mathbb{Z}^{m}\), \(m\in\mathbb{N}\). These fixed points induce conditional expectations from the crossed product C\({}^{*}\)-algebra associated with the horofunction compactification onto the group C\({}^{*}\)-algebra \(C_{r}^{*}(\mathbb{Z}^{m})\). Similarly, in the proof of Proposition 2.11 we will make use of the assumption that \((G,\ell)\) is separated, to construct suitable maps onto the restricted crossed product C\({}^{*}\)-algebra \(A\rtimes_{\alpha|_{H},r}H\). Proof of Proposition 2.11.: We only prove the second statement of Proposition 2.11 since the first one follows similarly. So assume that \((\mathcal{A},\mathcal{H}_{A},D_{A})\) is a non-degenerate odd spectral triple on \(A\), \(g\in G\), pick \(x=\sum_{h\in H}a_{h}\lambda_{hg}\in C_{c}(Hg,A)\) with \((a_{h})_{h\in H}\subseteq\mathcal{A}\), \(\|[1\otimes M_{\ell},x]\|\leq 1\), \(\|[D_{A}\otimes 1,x]\|\leq 1\), and fix \(\varepsilon>0\). As before, let \(p_{H}:H\twoheadrightarrow\mathbb{Z}^{m}\) be the projection onto the torsion-free component of the Abelianization of \(H\), i.e. \(m\) is the rank of the finitely generated Abelian group \(H/[H,H]\). By our assumption, \(H\) is separated with respect to the restriction \(\ell|_{H}\). We can therefore find linear combinations \(\phi_{1},...,\phi_{m}\) of invariant means on \(\ell^{\infty}(H)\) such that \(\phi_{i}(\varphi_{h}^{\ell|_{H}})=p_{i}\circ p_{H}(h)\) for every \(h\in H\), \(1\leq i\leq m\) where \(p_{i}:\mathbb{Z}^{m}\rightarrow\mathbb{Z}\) is the projection onto the \(i\)-th component of \(\mathbb{Z}^{m}\). These functionals induce maps \(\ell^{\infty}(G)\rtimes_{\beta|_{H},r}\!H\to C_{r}^{*}(H)\) via \(f\lambda_{h}\mapsto\phi_{i}(f|_{H})\lambda_{h}\) for \(f\in\ell^{\infty}(G)\), \(h\in H\) and composition with the isomorphism from Proposition 2.5 and an application of Fell's absorption principle (see [3, Proposition 4.1.7]) leads to bounded maps \(P_{i}:\mathcal{C}(A,H,\ell)\to A\rtimes_{\alpha|_{H},r}H\) via \(P_{i}(a(1\otimes f)\lambda_{h}):=\phi_{i}(f|_{H})a\lambda_{h}\) for \(a\in A\), \(f\in\mathcal{G}(G,\ell)\), \(h\in H\). Here \(\mathcal{C}(A,H,\ell)\) is the C\({}^{*}\)-subalgebra of \(\mathcal{B}(\mathcal{H})\) with \(\mathcal{H}:=\mathcal{H}_{A}\otimes\ell^{2}(G)\) generated by \(A\), \(\mathbb{C}1\otimes\mathcal{G}(G,\ell)\) and \(C_{r}^{*}(H)\). 
For every \(i\) write \(\mathbb{E}_{i}\) for the contractive linear map on \(\mathcal{B}(\mathcal{H})\) associated with the subgroup \(\ker(p_{i}\circ p_{H})\leq G\) as in Lemma 2.6. We proceed inductively. Define \(L:=\max\{\left\|P_{1}\right\|,...,\left\|P_{m}\right\|\}\) and note that \(\varphi_{h}^{\ell|_{H}}=\varphi_{h}^{\ell}|_{H}\) for every \(h\in H\). By applying \(P_{1}\) to \([1\otimes M_{\ell},x]\lambda_{g^{-1}}\in\mathcal{C}(A,H,\ell)\) and by using the identity in Lemma 2.3, we find \[\|\phi_{1}(\varphi_{g}^{\ell}|_{H})x\lambda_{g^{-1}}+[1\otimes M_{p_{1}\circ p_{H}},x\lambda_{g^{-1}}]\| = \|\sum_{h\in H}\left\{\phi_{1}(\varphi_{g}^{\ell}|_{H})+p_{1}\circ p_{H}(h)\right\}a_{h}\lambda_{h}\|\] \[= \|\sum_{h\in H}\phi_{1}(\varphi_{hg}^{\ell}|_{H})a_{h}\lambda_{h}\|\] \[= \|P_{1}([1\otimes M_{\ell},x]\lambda_{g^{-1}})\lambda_{g}\|\] \[\leq L.\] In combination with Lemma 2.12 (where the constant is taken to be \(\phi_{1}(\varphi_{g}^{\ell}|_{H})\)) this implies that there exists \(N_{1}\in\mathbb{N}\) (which is independent of \(x\in\mathcal{Q}_{2}^{g}\)) with \[\|\sum_{h\in H:|p_{1}\circ p_{H}(h)|>N_{1}}a_{h}\lambda_{hg}\|\leq\|\sum_{h\in H:|p_{1}\circ p_{H}(h)|>N_{1}}a_{h}\lambda_{h}\|\leq m^{-1}\varepsilon.\] For every \(-N_{1}\leq i\leq N_{1}\) choose \(h_{i}\in H\) with \(p_{1}\circ p_{H}(h_{i})=i\) and define an element in the crossed product via \(x_{1}:=\sum_{h\in H:|p_{1}\circ p_{H}(h)|\leq N_{1}}a_{h}\lambda_{hg}\in A\rtimes_{\alpha,r}G\). Then \(\|x-x_{1}\|\leq m^{-1}\varepsilon\), \[\|[1\otimes M_{\ell},x_{1}]\| = \|\sum_{h\in H:|p_{1}\circ p_{H}(h)|\leq N_{1}}(1\otimes\varphi_{hg}^{\ell})a_{h}\lambda_{hg}\|\] \[= \|\sum_{-N_{1}\leq i\leq N_{1}}\sum_{h\in\ker(p_{1}\circ p_{H})}(1\otimes\varphi_{hh_{i}g}^{\ell})a_{hh_{i}}\lambda_{hh_{i}g}\|\] \[= \|\sum_{-N_{1}\leq i\leq N_{1}}[1\otimes M_{\ell},\mathbb{E}_{1}(x\lambda_{(h_{i}g)^{-1}})\lambda_{h_{i}g}]\|\] \[= \|\sum_{-N_{1}\leq i\leq N_{1}}\mathbb{E}_{1}([1\otimes M_{\ell},x]\lambda_{(h_{i}g)^{-1}})\lambda_{h_{i}g}\|\] \[\leq 2N_{1}+1,\] and similarly \[\|[D_{A}\otimes 1,x_{1}]\|=\|\sum_{-N_{1}\leq i\leq N_{1}}\mathbb{E}_{1}([D_{A}\otimes 1,x]\lambda_{(h_{i}g)^{-1}})\lambda_{h_{i}g}\|\leq 2N_{1}+1.\] In the same way we can now apply \(P_{2}\) to \([1\otimes M_{\ell},x_{1}]\lambda_{g^{-1}}\) and invoke Lemma 2.12 again to find \(N_{2}\in\mathbb{N}\) with \(\|x_{1}-x_{2}\|\leq m^{-1}\varepsilon\), \(\|[1\otimes M_{\ell},x_{2}]\|\leq(2N_{1}+1)(2N_{2}+1)\) and \(\|[D_{A}\otimes 1,x_{2}]\|\leq(2N_{1}+1)(2N_{2}+1)\), where \(x_{2}:=\sum_{h\in H:|p_{1}\circ p_{H}(h)|\leq N_{1},|p_{2}\circ p_{H}(h)|\leq N_{2}}a_{h}\lambda_{hg}\in A\rtimes_{\alpha,r}G\). Performing these steps repeatedly leads to a sequence of natural numbers \(N_{1},...,N_{m}\in\mathbb{N}\) and elements \(x_{1},...,x_{m}\in A\rtimes_{\alpha,r}G\) given by \[x_{i}:=\sum_{h\in H:|p_{1}\circ p_{H}(h)|\leq N_{1},...,|p_{i}\circ p_{H}(h)|\leq N_{i}}a_{h}\lambda_{hg}\] for which \(\left\|x_{i}-x_{i+1}\right\|<m^{-1}\varepsilon\), \(\left\|[1\otimes M_{\ell},x_{i}]\right\|\leq(2N_{1}+1)...(2N_{i}+1)\) and \(\left\|[D_{A}\otimes 1,x_{i}]\right\|\leq(2N_{1}+1)...(2N_{i}+1)\).
For \(i=m\) we in particular have \[\left\|x-x_{m}\right\|\leq\left\|x-x_{1}\right\|+\left\|x_{1}-x_{2}\right\|+...+\left\|x_{m-1}-x_{m}\right\|<\varepsilon.\] Set \(\widetilde{N}:=\max\{N_{1},...,N_{m}\}\) and note that \(p_{H}(\text{supp}(x_{m}\lambda_{g^{-1}}))\) is contained in the \(\widetilde{N}\)-ball of \(\mathbb{Z}^{m}\) with respect to the restriction of the supremum norm on \(\mathbb{R}^{m}\) to \(\mathbb{Z}^{m}\). It follows that there exist elements \(g_{1},...,g_{n}\in G\) with \(\text{supp}(x_{m})\subseteq K:=\bigcup_{i=1}^{n}[H,H]g_{i}.\) We therefore get that \(\mathcal{Q}_{2}^{g}\subseteq_{\varepsilon}(2N_{1}+1)...(2N_{m}+1)\mathcal{Q}_{2}^{g}\cap C_{c}(K,\mathcal{A})\), which finishes the proof.

**Theorem 2.13**.: _Let \((\mathcal{A},\mathcal{H}_{A},D_{A})\) be a non-degenerate odd spectral triple on a separable unital \(C^{*}\)-algebra \(A\) and assume that the induced Lipschitz semi-norm \(L_{D_{A}}(a):=\left\|[D_{A},a]\right\|,a\in\mathcal{A}\) defines a compact quantum metric space \((A,L_{D_{A}})\). Let further \(\alpha:G\rightarrow\text{Aut}(A)\) be a metrically equicontinuous action of a finitely generated discrete group \(G\) equipped with a proper length function \(\ell:G\rightarrow\mathbb{R}_{+}\) and assume that there exists a finite index subgroup \(H\) of \(G\) that is separated with respect to the restricted length function \(\ell|_{H}\). As before, define \(\mathcal{H}:=\mathcal{H}_{A}\otimes\ell^{2}(G)\) and \(D\) as in Subsection 2.1. Then the even spectral triple \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) satisfies the Lipschitz condition if and only if for every \(g\in G\) the set of all elements \(x=\sum_{h\in[H,H]}a_{h}\lambda_{hg}\in C_{c}(G,\mathcal{A})\) with \((a_{h})_{h\in[H,H]}\subseteq\mathcal{A}\) satisfying \(\left\|[D_{A}\otimes 1,x]\right\|\leq 1\) and \(\left\|[1\otimes M_{\ell},x]\right\|\leq 1\) has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\)._

_In particular, if \([H,H]\) is finite, then \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) satisfies the Lipschitz condition._

Proof.: For \(g\in G\) set \(\mathcal{Q}_{g}:=\{x\in C_{c}(Hg,\mathcal{A})\mid\left\|[D_{A}\otimes 1,x]\right\|\leq 1\text{ and }\left\|[1\otimes M_{\ell},x]\right\|\leq 1\}\) and write \(\mathcal{Q}_{g}^{\prime}\) for the set of all elements \(x=\sum_{h\in[H,H]}a_{h}\lambda_{hg}\in C_{c}(Hg,\mathcal{A})\) with \((a_{h})_{h\in[H,H]}\subseteq\mathcal{A}\) satisfying \(\left\|[D_{A}\otimes 1,x]\right\|\leq 1\) and \(\left\|[1\otimes M_{\ell},x]\right\|\leq 1\). The "only if" direction follows in the same way as in the proof of Lemma 2.8. For the "if" direction assume that \(\mathcal{Q}_{g}^{\prime}\) has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\) for every \(g\in G\) and let \(\varepsilon>0\). By Proposition 2.11 for fixed \(g\in G\) we find \(\delta>0\) and finitely many elements \(g_{1},...,g_{n}\in G\) such that \(\mathcal{Q}_{g}\subseteq_{\varepsilon/4}\delta\mathcal{Q}_{g}\cap C_{c}(K,\mathcal{A})\) where \(K:=\bigcup_{i=1}^{n}[H,H]g_{i}\). In other words, for every \(x\in\mathcal{Q}_{g}\) there exists \(y\in\delta\mathcal{Q}_{g}\) of the form \(y=\sum_{i=1}^{n}\sum_{h\in[H,H]}b_{hg_{i}}\lambda_{hg_{i}}\) with \(b_{hg_{i}}\in\mathcal{A}\) for \(h\in[H,H]\), \(i=1,...,n\) such that \(\left\|x-y\right\|<\frac{\varepsilon}{4}\). For every \(i\) set \(y_{i}:=\sum_{h\in[H,H]}b_{hg_{i}}\lambda_{hg_{i}}\).
By the same argument as in the proof of Lemma 2.8 and Proposition 2.11, \[\left\|[1\otimes M_{\ell},y_{i}]\right\| = \left\|\sum_{h\in[H,H]}(1\otimes\varphi_{hg_{i}}^{\ell})b_{hg_{i}}\lambda_{hg_{i}}\right\|\] \[= \left\|[1\otimes M_{\ell},\mathbb{E}_{[H,H]}(y\lambda_{g_{i}^{-1}})\lambda_{g_{i}}]\right\|\] \[= \left\|\mathbb{E}_{[H,H]}([1\otimes M_{\ell},y]\lambda_{g_{i}^{-1}})\lambda_{g_{i}}\right\|\] \[\leq \left\|[1\otimes M_{\ell},y]\right\|\] \[\leq \delta\] and similarly \[\left\|[D_{A}\otimes 1,y_{i}]\right\|=\left\|\mathbb{E}_{[H,H]}([D_{A}\otimes 1,y]\lambda_{g_{i}^{-1}})\lambda_{g_{i}}\right\|\leq\left\|[D_{A}\otimes 1,y]\right\|\leq\delta,\] where \(\mathbb{E}_{[H,H]}\) is the contractive linear map from Lemma 2.6. We conclude that \(\mathcal{Q}_{g}\subseteq_{\varepsilon/4}\mathcal{R}\) where \(\mathcal{R}:=\delta\mathcal{Q}_{g_{1}}^{\prime}+...+\delta\mathcal{Q}_{g_{n}}^{\prime}\). From our assumption it can easily be derived that \(\mathcal{R}\) has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\). We hence find finitely many elements \(x_{1},...,x_{m}\in\mathcal{R}\) such that for every \(y\in\mathcal{R}\) there exists \(1\leq i\leq m\) with \(\|(y-x_{i})+\mathbb{C}1\|<\frac{\varepsilon}{4}\). For every \(i\) choose \(\widetilde{x}_{i}\in\mathcal{Q}_{g}\) with \(\|(x_{i}-\widetilde{x}_{i})+\mathbb{C}1\|<\frac{\varepsilon}{2}\), if possible. We claim that the \(\varepsilon\)-balls around the \(\widetilde{x}_{i}+\mathbb{C}1\) cover the image of \(\mathcal{Q}_{g}\) in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\). Indeed, for \(x\in\mathcal{Q}_{g}\) there exists \(y\in\mathcal{R}\) with \(\|x-y\|<\frac{\varepsilon}{4}\) and we find \(i\) with \(\|(y-x_{i})+\mathbb{C}1\|<\frac{\varepsilon}{4}\). By \(\|(x-x_{i})+\mathbb{C}1\|\leq\|(x-y)+\mathbb{C}1\|+\|(y-x_{i})+\mathbb{C}1\|<\frac{\varepsilon}{2}\) the element \(\widetilde{x}_{i}\in\mathcal{Q}_{g}\) exists and \[\|(x-\widetilde{x}_{i})+\mathbb{C}1\|\leq\|(x-y)+\mathbb{C}1\|+\|(y-x_{i})+\mathbb{C}1\|+\|(x_{i}-\widetilde{x}_{i})+\mathbb{C}1\|<\varepsilon.\] The claim follows. Hence the image of \(\mathcal{Q}_{g}\) in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\) is totally bounded and thus by Lemma 2.8 the even spectral triple \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) satisfies the Lipschitz condition.

For the proof of the second statement assume that the derived subgroup \([H,H]\) is finite and fix \(g\in G\). We proceed by arguing along the lines of the proof of [18, Theorem 2.11]. For \(x=\sum_{h\in[H,H]}a_{h}\lambda_{hg}\in\mathcal{Q}^{\prime}_{g}\) with \((a_{h})_{h\in[H,H]}\subseteq\mathcal{A}\) one has that for every \(\xi,\eta\) in the domain of \(D_{A}\) and \(h\in[H,H]\), \[\left\langle[D_{A},a_{h}]\xi,\eta\right\rangle = \left\langle(D_{A}\otimes 1)x\lambda_{(hg)^{-1}}(\xi\otimes\delta_{e}),\eta\otimes\delta_{e}\right\rangle-\left\langle x\lambda_{(hg)^{-1}}(D_{A}\otimes 1)(\xi\otimes\delta_{e}),\eta\otimes\delta_{e}\right\rangle\] \[= \left\langle[D_{A}\otimes 1,x](\xi\otimes\delta_{(hg)^{-1}}),\eta\otimes\delta_{e}\right\rangle\] and therefore \(\|[D_{A},a_{h}]\|\leq\|[D_{A}\otimes 1,x]\|\leq 1\).
Similarly, for \(h\in[H,H]\) and \(\xi,\eta\in\mathcal{H}_{A}\), \[\left\langle[1\otimes M_{\ell},x](\xi\otimes\delta_{(hg)^{-1}}),\eta\otimes\delta_{e}\right\rangle=-\left\langle x(1\otimes M_{\ell})(\xi\otimes\delta_{(hg)^{-1}}),\eta\otimes\delta_{e}\right\rangle=-\ell(hg)\left\langle a_{h}\xi,\eta\right\rangle\] so that \(\|a_{h}\|\leq(\ell(hg))^{-1}\,\|[1\otimes M_{\ell},x]\|\leq L\) for every \(h\in[H,H]\setminus\{g^{-1}\}\), where \[L:=\max\{(\ell(hg))^{-1}\mid h\in[H,H]\setminus\{g^{-1}\}\}.\] It follows that \(\mathcal{Q}^{\prime}_{g}\) is contained in the set of all \(x=\sum_{h\in[H,H]}a_{h}\lambda_{hg}\in C_{c}(Hg,\mathcal{A})\) with \((a_{h})_{h\in[H,H]}\subseteq\mathcal{A}\) satisfying \(\|[D_{A},a_{h}]\|\leq L^{\prime}\) for all \(h\in[H,H]\) and \(\|a_{h}\|\leq L^{\prime}\) for all \(h\in[H,H]\setminus\{g^{-1}\}\) with \(L^{\prime}:=\max\{1,L\}\). Denote this set by \(\mathcal{S}_{g}\). We claim that \(\mathcal{S}_{g}\) has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\) which then implies that \(\mathcal{Q}^{\prime}_{g}\) has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\) and hence, by the previous part, that the triple \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) satisfies the Lipschitz condition. Indeed, by Theorem 1.3 the set \(F:=\{a\in\mathcal{A}\mid\|a\|\leq L^{\prime}\text{ and }\|[D_{A},a]\|\leq L^{\prime}\}\) is totally bounded in \(A\). For every \(\varepsilon>0\) we can hence pick a finite subset \(F_{1}\) of \(F\) such that the \(\frac{\varepsilon}{\#[H,H]}\)-balls around its elements cover \(F\). Similarly, we can choose a finite subset \(F_{2}\) of \(\{a\in\mathcal{A}\mid\|[D_{A},a]\|\leq L^{\prime}\}\) such that the \(\frac{\varepsilon}{\#[H,H]}\)-balls around the images of the elements of \(F_{2}\) in \(A/\mathbb{C}1\) cover the image of \(\{a\in\mathcal{A}\mid\|[D_{A},a]\|\leq L^{\prime}\}\). From this we can deduce that if \(g\in[H,H]\), the image of \[\left\{\sum_{h\in[H,H]}f_{h}\lambda_{hg}\mid f_{g^{-1}}\in F_{2}\text{ and }f_{h}\in F_{1}\text{ for }h\neq g^{-1}\right\}\] is an \(\varepsilon\)-net for the image of \(\mathcal{S}_{g}\) in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\) and similarly the image of \[\left\{\sum_{h\in[H,H]}f_{h}\lambda_{hg}\mid f_{h}\in F_{1}\text{ for all }h\in[H,H]\right\}\] is an \(\varepsilon\)-net for the image of \(\mathcal{S}_{g}\) in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\) if \(g\notin[H,H]\). This finishes the proof.

### The construction of odd spectral triples

In [18, Subsection 2.4] it was noted that, analogously to the construction of even spectral triples on crossed product C\({}^{*}\)-algebras from odd spectral triples, a similar procedure can be used to obtain odd spectral triples from even ones. As in Subsection 2.1, let \(\alpha:G\to\text{Aut}(A)\) be an action of a discrete group \(G\) on a unital separable C\({}^{*}\)-algebra \(A\) and let \(\ell:G\to\mathbb{R}_{+}\) be a proper length function on \(G\). Assume that \[\left(\mathcal{A},\mathcal{H}_{A,1}\oplus\mathcal{H}_{A,2},\left(\begin{array}{cc}0&D_{A}\\ D_{A}^{*}&0\end{array}\right)\right)\] is a spectral triple on \(A\) with \(\mathbb{Z}_{2}\)-grading \(\mathcal{H}_{A}:=\mathcal{H}_{A,1}\oplus\mathcal{H}_{A,2}\) and corresponding faithful representation \(\pi:=\pi_{1}\oplus\pi_{2}\). As before, consider the canonical odd spectral triple \((\mathbb{C}[G],\ell^{2}(G),M_{\ell})\) on \(C_{r}^{*}(G)\), where \(M_{\ell}\) denotes the multiplication operator \(\delta_{g}\mapsto\ell(g)\delta_{g}\) for \(g\in G\).
The reduced crossed product C\({}^{*}\)-algebra \(A\rtimes_{\alpha,r}G\) can be (faithfully) represented on \(\mathcal{H}:=(\mathcal{H}_{A,1}\otimes\ell^{2}(G))\oplus(\mathcal{H}_{A,2}\otimes\ell^{2}(G))\) in a natural way. By assuming _metric equicontinuity_ in the sense that \(\alpha_{g}(\mathcal{A})\subseteq\mathcal{A}\) for all \(g\in G\) and \(\sup_{g\in G}\|\pi_{1}(g.a)D_{A}-D_{A}\pi_{2}(g.a)\|<\infty\) for all \(a\in\mathcal{A}\), one can define an odd spectral triple \((C_{c}(G,\mathcal{A}),\mathcal{H},D)\) on \(A\rtimes_{\alpha,r}G\), where \[D:=\left(\begin{array}{cc}1\otimes M_{\ell}&D_{A}\otimes 1\\ D_{A}^{*}\otimes 1&-1\otimes M_{\ell}\end{array}\right). \tag{2.3}\] This triple is non-degenerate if the one on \(A\) is. We claim that an analog of Theorem 2.13 holds in this setting as well. This follows from a variation of the characterization in Proposition 2.7.

**Proposition 2.14**.: _The odd spectral triple \((C_{c}(G,\mathcal{A}),\mathcal{H},D)\) defined above satisfies the Lipschitz condition if and only if the set of all elements \(x=\sum_{g\in G}a_{g}\lambda_{g}\in C_{c}(G,\mathcal{A})\subseteq\mathcal{B}(\mathcal{H})\) with \((a_{g})_{g\in G}\subseteq\mathcal{A}\) for which the operator norms of the commutators_ \[\left[x,\left(\begin{array}{cc}1\otimes M_{\ell}&0\\ 0&-1\otimes M_{\ell}\end{array}\right)\right],\left[x,\left(\begin{array}{cc}0&D_{A}\otimes 1\\ D_{A}^{*}\otimes 1&0\end{array}\right)\right]\in\mathcal{B}(\mathcal{H})\] _are bounded by \(1\), has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\)._

**Theorem 2.15**.: _Let_ \[\left(\mathcal{A},\mathcal{H}_{A,1}\oplus\mathcal{H}_{A,2},\left(\begin{array}{cc}0&D_{A}\\ D_{A}^{*}&0\end{array}\right)\right)\] _be a non-degenerate even spectral triple on a separable unital C\({}^{*}\)-algebra \(A\) with \(\mathbb{Z}_{2}\)-grading \(\mathcal{H}_{A}:=\mathcal{H}_{A,1}\oplus\mathcal{H}_{A,2}\) and corresponding representation \(\pi:=\pi_{1}\oplus\pi_{2}\), and assume that the induced Lipschitz semi-norm \(L_{D_{A}}(a):=\left\|[D_{A},a]\right\|,a\in\mathcal{A}\) defines a compact quantum metric space \((A,L_{D_{A}})\). Let further \(\alpha:G\to\text{Aut}(A)\) be a metrically equicontinuous action of a finitely generated discrete group \(G\) equipped with a proper length function \(\ell:G\to\mathbb{R}_{+}\) and assume that there exists a finite index subgroup \(H\) of \(G\) that is separated with respect to the restricted length function \(\ell|_{H}\). As before, define \(\mathcal{H}:=(\mathcal{H}_{A,1}\otimes\ell^{2}(G))\oplus(\mathcal{H}_{A,2}\otimes\ell^{2}(G))\) and \(D\) as in (2.3). Then the odd spectral triple \((C_{c}(G,\mathcal{A}),\mathcal{H},D)\) satisfies the Lipschitz condition if and only if for every \(g\in G\) the set of all elements \(x=\sum_{h\in[H,H]}a_{h}\lambda_{hg}\in C_{c}(G,\mathcal{A})\) with \((a_{h})_{h\in[H,H]}\subseteq\mathcal{A}\) for which the operator norms of the commutators_ \[\left[x,\left(\begin{array}{cc}1\otimes M_{\ell}&0\\ 0&-1\otimes M_{\ell}\end{array}\right)\right],\left[x,\left(\begin{array}{cc}0&D_{A}\otimes 1\\ D_{A}^{*}\otimes 1&0\end{array}\right)\right]\] _are bounded by \(1\), has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\)._

_In particular, if \([H,H]\) is finite, then \((C_{c}(G,\mathcal{A}),\mathcal{H},D)\) satisfies the Lipschitz condition._

Despite being lengthy, the arguments for proving Theorem 2.15 are essentially variations of those in Subsection 2.1 and Subsection 2.2. We therefore omit the details here.
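For the reader's orientation we include a minimal sketch of the elementary observation behind Proposition 2.14; this is only a sketch under the conventions above (the names \(D_{\mathrm{diag}}\) and \(D_{\mathrm{off}}\) are ad hoc notation introduced here) and runs completely parallel to the proof of Proposition 2.7. Every \(x\in C_{c}(G,\mathcal{A})\) acts diagonally with respect to the decomposition \(\mathcal{H}=(\mathcal{H}_{A,1}\otimes\ell^{2}(G))\oplus(\mathcal{H}_{A,2}\otimes\ell^{2}(G))\), so writing \[D=\left(\begin{array}{cc}1\otimes M_{\ell}&0\\ 0&-1\otimes M_{\ell}\end{array}\right)+\left(\begin{array}{cc}0&D_{A}\otimes 1\\ D_{A}^{*}\otimes 1&0\end{array}\right)=:D_{\mathrm{diag}}+D_{\mathrm{off}},\] the commutator \([D,x]=[D_{\mathrm{diag}},x]+[D_{\mathrm{off}},x]\) splits into its diagonal and off-diagonal blocks. Since the diagonal and off-diagonal parts of an operator \(T\) are \(\frac{1}{2}(T\pm\gamma T\gamma)\) for the grading unitary \(\gamma:=1\oplus(-1)\), and hence dominated in norm by \(\|T\|\), one obtains \[\max\{\|[D_{\mathrm{diag}},x]\|,\|[D_{\mathrm{off}},x]\|\}\leq\|[D,x]\|\leq\|[D_{\mathrm{diag}},x]\|+\|[D_{\mathrm{off}},x]\|.\] Combined with Theorem 1.3 (applied to the triple \((C_{c}(G,\mathcal{A}),\mathcal{H},D)\)), this wedges the set in Proposition 2.14 between rescalings of \(\{x\in C_{c}(G,\mathcal{A})\mid\|[D,x]\|\leq 1\}\), which yields the stated characterization.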
_Remark 2.16_.: By applying [18, Proposition 2.8] and its counterpart for even spectral triples (see the discussion in [18, Subsection 2.4]), an iteration of Theorem 2.13 and Theorem 2.15 allows one to construct spectral triples on suitable crossed products of the form \(A\rtimes_{\alpha,r}G^{m}\cong(...((A\rtimes_{\alpha_{1},r}G)\rtimes_{\alpha_{2},r}G)...)\rtimes_{\alpha_{m},r}G\), \(m\in\mathbb{N}\) that give rise to quantum metric spaces; compare with [18, Theorem 2.14]. Here, as before, \(G\) is a finitely generated discrete group equipped with a proper length function \(\ell:G\to\mathbb{R}_{+}\) that admits a finite index subgroup \(H\) that is separated with respect to \(\ell|_{H}\) and whose commutator subgroup \([H,H]\) is finite. The \(\alpha_{i}\), \(1\leq i\leq m\) denote the (metrically equicontinuous) coordinate \(G\)-actions of \(\alpha\).

## 3. Groups separated with respect to length functions

Recall that we call a finitely generated discrete group \(G\) separated with respect to a pseudo-length function \(\ell\) if \(\operatorname{Hom}(\operatorname{im}(p_{G}),\mathbb{R})=\operatorname{Span}\left\{\widehat{\mu}_{\ell}\mid\mu\text{ invariant mean}\right\}\), where \(p_{G}\) is the projection onto the torsion-free component of the Abelianization of \(G\) and \(\widehat{\mu}_{\ell}\) is given by \((\widehat{\mu}_{\ell}\circ p_{G})(g):=\mu(\varphi_{g}^{\ell})\) for \(g\in G\). In the present section, we study the link between this notion and the asymptotic semi-norm construction in the Abelian setting and provide groups and length functions that satisfy (a variation of) this property. We further give a counterexample demonstrating that even the integers, equipped with a very natural length function, fail to be separated.

### Integer lattices

Recall that a _quasi-isometric embedding_ between metric spaces \((X,d_{X})\) and \((Y,d_{Y})\) is a map \(f:X\to Y\) for which there exist \(C\geq 1\) and \(r>0\) with \[C^{-1}d_{X}(x,y)-r\leq d_{Y}(f(x),f(y))\leq Cd_{X}(x,y)+r\] for all \(x,y\in X\). It is well-known that, given finite generating sets \(S=S^{-1}\) and \(S^{\prime}=(S^{\prime})^{-1}\) of a group \(G\), the identity map on \(G\) equipped with the respective induced word metrics defines a quasi-isometric embedding. Similarly, if \(G\) is a finitely generated group and \(H\leq G\) is a finite index subgroup, then \(H\) is also finitely generated and the embedding of \(H\) into \(G\) is quasi-isometric with respect to the induced word metrics; this follows for instance from the Milnor-Svarc Lemma. We call two length functions \(\ell,\ell^{\prime}:G\to\mathbb{R}_{+}\) on a group \(G\) _bi-Lipschitz equivalent_ if the identity on \(G\) equipped with the metrics \(d_{\ell}\) and \(d_{\ell^{\prime}}\) is a quasi-isometric embedding; that is, if there exist \(C\geq 1\) and \(r\geq 0\) with \(C^{-1}\ell(g)-r\leq\ell^{\prime}(g)\leq C\ell(g)+r\) for all \(g\in G\). From now on we restrict to the case of integer lattices \(G=\mathbb{Z}^{m}\), \(m\in\mathbb{N}\) equipped with length functions \(\ell:G\to\mathbb{R}_{+}\). By applying Fekete's Subadditivity Lemma one obtains that for every \(g\in G\) the limit \(\lim_{i\to\infty}i^{-1}\ell(ig)\) exists and that it coincides with \(\inf_{i\in\mathbb{N}}i^{-1}\ell(ig)\); so in particular \(\lim_{i\to\infty}i^{-1}\ell(ig)\leq\ell(g)\).
The function \(g\mapsto\lim_{i\to\infty}i^{-1}\ell(ig)\) uniquely extends to a semi-norm \(\left\|\cdot\right\|_{\ell}\) on \(\mathbb{R}^{m}\), which is called the _asymptotic semi-norm_ (or _stable semi-norm_) associated with \(\ell\), see e.g. [6, Proposition 8.5.3]. In many interesting cases, the asymptotic semi-norm is positive definite, i.e. a genuine norm. This is for instance the case if \(\ell\) is bi-Lipschitz equivalent to a word length function (e.g. if \(\mathbb{Z}^{m}\) embeds as a finite index subgroup into a larger group and \(\ell\) is a restricted word length function). Indeed, in that case there exists a constant \(C\geq 1\) such that \[C^{-1}\left\|x\right\|_{1}=C^{-1}\left\|x\right\|_{\ell_{1}}\leq\left\|x \right\|_{\ell}\leq C\left\|x\right\|_{\ell_{1}}=C\left\|x\right\|_{1}\] for every \(x\in\mathbb{R}^{m}\). Here \(\left\|\cdot\right\|_{1}\) denotes the \(1\)-norm on \(\mathbb{R}^{m}\) and \(\ell_{1}\) is the word length function associated with the canonical generating set of \(\mathbb{Z}^{m}\). The restriction \(\ell^{\text{as}}:\mathbb{Z}^{m}\to\mathbb{R}_{+}\) of the asymptotic semi-norm to \(\mathbb{Z}^{m}\) is a homogeneous pseudo-length function. The proof of the following lemma is an easy exercise. **Lemma 3.1**.: _For every \(g\in G\) the sequence \((i^{-1}(\varphi_{ig}^{\ell}-\varphi_{ig}^{\ell^{\text{as}}}))_{i\in\mathbb{N}} \subseteq\mathcal{B}(\ell^{2}(G))\) strongly converges to \(0\)._ In general there is no reason to expect that the sequence in Lemma 3.1 converges with respect to the operator norm (i.e. uniformly). Still, for many natural examples that is the case. As it turns out, \(\ell^{\text{as}}\) very naturally occurs in the context of the question for separateness of the pair \((G,\ell)\). **Proposition 3.2**.: _Let \(\ell:G\to\mathbb{R}_{+}\) be a length function on \(G=\mathbb{Z}^{m}\), \(m\in\mathbb{N}\). Assume that \(i^{-1}(\varphi_{ig}^{\ell}-\varphi_{ig}^{\ell^{\text{as}}})\to 0\) uniformly. Then \((G,\ell)\) is separated if and only if \((G,\ell^{\text{as}})\) is separated._ Proof.: Let \(\mu:\ell^{\infty}(G)\to\mathbb{C}\) be an invariant mean. Then, \[\widehat{\mu}_{\ell}\circ p_{G}(g)-\widehat{\mu}_{\ell^{\text{as}}}\circ p_{ G}(g)=\mu(i^{-1}(\varphi_{ig}^{\ell}-\varphi_{ig}^{\ell^{\text{as}}}))\to 0\] for every \(g\in G\) and hence \(\widehat{\mu}_{\ell}=\widehat{\mu}_{\ell^{\text{as}}}\). This implies the claim. By adding the assumption that the asymptotic semi-norm is positive definite, we obtain the following much stronger result. Before giving a proof, we pick up some of its implications. **Theorem 3.3**.: _Let \(\ell:G\to\mathbb{R}_{+}\) be a length function on \(G=\mathbb{Z}^{m}\), \(m\in\mathbb{N}\). Assume that \(i^{-1}(\varphi_{ig}^{\ell}-\varphi_{ig}^{\ell^{\text{as}}})\to 0\) uniformly and that the asymptotic semi-norm associated with \(\ell\) is positive definite. Then \(G\) is separated with respect to \(\ell\)._ Theorem 3.3 applies to many natural situations. If for instance \(\ell\) is a word length function, it is easy to show that the map \(G\to\mathbb{R}_{+}\), \(g\mapsto|\ell(g)-\|g\|_{\ell}|\) is bounded (see e.g. [15, Lemma 3.5]) and hence \(i^{-1}(\varphi_{ig}^{\ell}-\varphi_{ig}^{\ell^{\text{as}}})\to 0\) uniformly. The proof of the following more general statement relies on the results in [22] which are again in the spirit of Burago's approach in [5]. Recall that an action of a group \(G\) on a metric space \((X,d)\) is called _cocompact_ if the quotient space \(X/G\) is compact. 
It is called _properly discontinuous_ if each point admits a neighborhood such that every non-trivial element of \(G\) moves the neighborhood off itself. The metric space \((X,d)\) is _geodesic_ if, given two points, there exists a path between them whose length equals the distance between the points. Here the length of a path \(c:[0,1]\to X\) is defined as the supremum over all sums \(\sum_{i=1}^{k}d(c(t_{i-1}),c(t_{i}))\) where \(0\leq t_{0}\leq t_{1}\leq\ldots\leq t_{k}\leq 1\).

**Corollary 3.4**.: _Let \((X,d)\) be a proper, geodesic metric space on which \(\mathbb{Z}^{m}\), \(m\in\mathbb{N}\) acts freely, cocompactly and properly discontinuously by isometries. Assume that there exists a continuous map \(F:X\to\mathbb{R}^{m}\) that is equivariant with respect to the canonical shift action \(\mathbb{Z}^{m}\curvearrowright\mathbb{R}^{m}\) and let \(x_{0}\in X\). Define \(\ell:\mathbb{Z}^{m}\to\mathbb{R}_{+}\) by \(\ell(g):=d(x_{0},g.x_{0})\) for \(g\in\mathbb{Z}^{m}\). Then \(\ell\) is a proper length function and \(\mathbb{Z}^{m}\) is separated with respect to \(\ell\)._

_In particular, if \(G\) is a discrete group finitely generated by a set \(S\) with \(S=S^{-1}\) that contains \(\mathbb{Z}^{m}\), \(m\in\mathbb{N}\) as a finite index normal subgroup, then \(\mathbb{Z}^{m}\) is separated with respect to the restricted word length function \(\ell_{S}|_{\mathbb{Z}^{m}}\)._

Proof.: For the first statement recall that Fekete's Subadditivity Lemma implies \(\ell^{\text{as}}(h)\leq\ell(h)\) for all \(h\in\mathbb{Z}^{m}\). By [22, Lemma 20] there further exists a constant \(C\geq 0\) such that \(2\ell(h)\leq\ell(2h)+C\) for every \(h\in\mathbb{Z}^{m}\). Inductively we obtain that \[\ell(h)\leq\frac{\ell(2h)}{2}+C\leq\frac{\ell(4h)}{4}+\frac{3C}{2}\leq...\leq\frac{\ell(2^{i}h)}{2^{i}}+\left(2-\frac{1}{2^{i-1}}\right)C\] for all \(h\in\mathbb{Z}^{m}\), \(i\in\mathbb{N}\) and therefore \(\ell(h)\leq\ell^{\text{as}}(h)+2C\). But then \[\left\|\frac{\varphi_{ig}^{\ell}-\varphi_{ig}^{\ell^{\text{as}}}}{i}\right\|\leq\sup_{h\in\mathbb{Z}^{m}}\left\{\left|\frac{\ell(h)-\ell^{\text{as}}(h)}{i}\right|+\left|\frac{\ell(h-ig)-\ell^{\text{as}}(h-ig)}{i}\right|\right\}\leq\frac{4C}{i}\to 0\] for every \(g\in\mathbb{Z}^{m}\). The asymptotic semi-norm associated with \(\ell\) is further positive definite. Indeed, by [6, Theorem 8.3.19] the metric \((g,h)\mapsto d(g.x_{0},h.x_{0})\) on \(\mathbb{Z}^{m}\) is bi-Lipschitz equivalent to a word metric and thus \(\ell\) is bi-Lipschitz equivalent to a word length function. Therefore, by our discussion above, \(\left\|\cdot\right\|_{\ell}\) is positive definite and \(\ell\) is proper. We deduce the statement of the first part of the corollary by invoking Theorem 3.3.

For the second statement we argue as in [22, Corollary 23]. Consider \(G\) equipped with the word length metric \(d_{\ell_{S}}\). Then \((G,d_{\ell_{S}})\) is a proper geodesic metric space and the action of \(\mathbb{Z}^{m}\) on \(G\) via left translation is free, cocompact and properly discontinuous. Choose elements \(g_{1},...,g_{k}\in G\) with \(G=\bigcup_{i=1}^{k}\mathbb{Z}^{m}g_{i}\) and \(\mathbb{Z}^{m}g_{i}\neq\mathbb{Z}^{m}g_{j}\) for \(i\neq j\) and define \(F:G\to\mathbb{R}^{m}\) via \(F(hg_{i}):=F(g_{i})+h\) for \(h\in\mathbb{Z}^{m}\), \(1\leq i\leq k\) where \(F(g_{1}),...,F(g_{k})\in\mathbb{R}^{m}\) are chosen arbitrarily. Then \(F\) satisfies the conditions of the first part of the corollary and hence \(\mathbb{Z}^{m}\) is separated with respect to the restricted word length function \(\ell_{S}|_{\mathbb{Z}^{m}}\).
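The quantities appearing in Theorem 3.3 and Corollary 3.4 are easy to experiment with numerically. The following small sketch is purely illustrative (the generating set below is an ad hoc choice and not taken from the text): it computes word lengths on \(\mathbb{Z}^{2}\) by breadth-first search in the Cayley graph and prints the ratios \(\ell(ng)/n\), which by Fekete's Subadditivity Lemma converge to the asymptotic semi-norm \(\|g\|_{\ell}\).

```python
from collections import deque

# Illustration only: word lengths on Z^2 for the ad hoc symmetric generating set S,
# and the ratios l(n*g)/n, which converge to the asymptotic semi-norm ||g||_l.
S = [(1, 0), (-1, 0), (0, 1), (0, -1), (3, 3), (-3, -3)]  # S = S^{-1}, generates Z^2

def word_lengths(max_len):
    """Breadth-first search: word length of every element with l <= max_len."""
    lengths = {(0, 0): 0}
    queue = deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        d = lengths[(x, y)]
        if d == max_len:
            continue
        for sx, sy in S:
            nxt = (x + sx, y + sy)
            if nxt not in lengths:
                lengths[nxt] = d + 1
                queue.append(nxt)
    return lengths

lengths = word_lengths(30)
g = (2, 2)
for n in (1, 2, 3, 6, 9, 12):
    print(n, lengths[(n * g[0], n * g[1])] / n)
# prints 3.0, 1.5, 0.666..., 0.666..., 0.666..., 0.666...
```

In this example \(\ell((2,2))=3\) while the printed ratios stabilize at \(2/3=\|(2,2)\|_{\ell}\), realized along the multiples of \(3\) (since \((6,6)=(3,3)+(3,3)\)); in particular the inequality \(\lim_{i\to\infty}i^{-1}\ell(ig)\leq\ell(g)\) from Subsection 3.1 can be strict.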
With Theorem 2.13 and Theorem 2.15 at hand the Corollary 3.4 implies the following important fact. Of course, the statement holds for all orbit metrics as in Corollary 3.4. **Corollary 3.5**.: _The following two statements hold:_ 1. _Let_ \((\mathcal{A},\mathcal{H}_{A},D_{A})\) _be a non-degenerate odd spectral triple on a separable unital C_\({}^{*}\)_-algebra_ \(A\) _and assume that the induced Lipschitz semi-norm_ \(L_{D_{A}}(a):=\left\|[D_{A},a]\right\|,a\in\mathcal{A}\) _defines a compact quantum metric space_ \((A,L_{D_{A}})\)_. Let further_ \(\alpha:G\to\text{Aut}(A)\) _be a metrically equicontinuous action of a virtually Abelian discrete group_ \(G\) _that is finitely generated by a set_ \(S\) _with_ \(S=S^{-1}\) _and let_ \(\ell:G\to\mathbb{R}_{+}\) _be the corresponding word length function. Define_ \(\mathcal{H}:=\mathcal{H}_{A}\otimes\ell^{2}(G)\) _and_ \(D\) _as in Subsection_ 2.1_. Then the even spectral triple_ \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) _satisfies the Lipschitz condition._ 2. _Let_ \[\left(\mathcal{A},\mathcal{H}_{A,1}\oplus\mathcal{H}_{A,2},\left(\begin{array} []{cc}0&D_{A}\\ D_{A}^{*}&0\end{array}\right)\right)\] _be a non-degenerate even spectral triple on a separable unital C_\({}^{*}\)_-algebra_ \(A\) _with_ \(\mathbb{Z}_{2}\)_-grading_ \(\mathcal{H}_{A}:=\mathcal{H}_{A,1}\oplus\mathcal{H}_{A,2}\) _and corresponding representation_ \(\pi:=\pi_{1}\oplus\pi_{2}\)_, and assume that the induced Lipschitz semi-norm_ \(L_{D_{A}}(a):=\left\|[D_{A},a]\right\|,a\in\mathcal{A}\) _defines a compact quantum metric space_ \((A,L_{D_{A}})\)_. Let further_ \(\alpha:G\to\text{Aut}(A)\) _be a metrically equicontinuous action of a virtually Abelian discrete group_ \(G\) _that is finitely generated by a set_ \(S\) _with_ \(S=S^{-1}\) _and let_ \(\ell:G\to\mathbb{R}_{+}\) _be the corresponding word length function. Define_ \(\mathcal{H}:=(\mathcal{H}_{A,1}\otimes\ell^{2}(G))\oplus(\mathcal{H}_{A,2} \otimes\ell^{2}(G))\) _and_ \(D\) _as in Subsection_ 2.3_. Then the odd spectral triple_ \((C_{c}(G,\mathcal{A}),\mathcal{H},D)\) _satisfies the Lipschitz condition._ Let us now turn to the proof of Theorem 3.3. Our argument requires Rieffel's construction in [29, Section 7]. For a given norm \(\left\|\cdot\right\|\) on \(\mathbb{R}^{m}\) write \(\ell_{\left\|\cdot\right\|}\) for its restriction to \(\mathbb{Z}^{m}\); this defines a length function that is bi-Lipschitz equivalent to the word length function \(\ell_{1}\) from before. Let \(v\in\mathbb{R}^{m}\) with \(\left\|v\right\|=1\) be a _smooth point_ in the sense that there exists exactly one functional \(\sigma_{v}\) on \(\mathbb{R}^{m}\) with \(\left\|\sigma_{v}\right\|=1=\sigma_{v}(v)\); for the background on tangent functionals see [16, Section V.9]. Then the geodesic ray \(\mathbb{R}_{+}\ni t\mapsto tv\) determines a Busemann point \(\mathfrak{b}_{v}\) in the horofunction boundary and by [29, Proposition 6.2] and [29, Proposition 6.3] this point is fixed under the action of \(\mathbb{R}^{m}\) with \(\|tv\|-\|tv-x\|\to\sigma_{v}(x)\) for every \(x\in\mathbb{R}^{m}\). By invoking a variation of Kronecker's theorem (see [29, Lemma 7.2]) one finds an unbounded strictly increasing sequence \((t_{i})_{i\in\mathbb{N}_{\geq 1}}\subseteq\mathbb{R}_{+}\) such that for every \(i\in\mathbb{N}_{\geq 1}\) there exists \(x_{i}\in\mathbb{Z}^{m}\) with \(\left\|x_{i}-t_{i}v\right\|<i^{-1}\). Set \(t_{0}:=0\), \(x_{0}:=0\) and define \(\gamma:\{t_{i}\mid i\in\mathbb{N}\}\to\mathbb{Z}^{m}\) by \(\gamma(t_{i}):=x_{i}\). 
Then \(\gamma\) is an almost geodesic ray that determines a Busemann point \(\mathfrak{b}^{\prime}_{v}\in\partial_{\ell_{\|\cdot\|}}\mathbb{Z}^{m}\). By [29, Proposition 7.4] this point is fixed under the action of \(\mathbb{Z}^{m}\) and satisfies \(\varphi_{g}^{\ell_{\|\cdot\|}}(\mathfrak{b}^{\prime}_{v})=\sigma_{v}(g)\) for every \(g\in\mathbb{Z}^{m}\). Proof of Theorem 3.3.: Following Proposition 3.2, it suffices to show that \(G\) is separated with respect to the length function \(\ell^{\text{as}}\). As \(\ell^{\text{as}}\) is the restriction of the asymptotic (semi-)norm \(\left\|\cdot\right\|_{\ell}\), we may apply [29, Proposition 7.4] to find for every smooth point \(v\in\mathbb{R}^{m}\) (with respect to the asymptotic semi-norm) with \(\left\|v\right\|_{\ell}=1\) a point \(\mathfrak{b}^{\prime}_{v}\in\partial_{\ell^{\text{as}}}G\) that is fixed under the action of \(G\) with \(\varphi_{g}^{\ell^{\text{as}}}(\mathfrak{b}^{\prime}_{v})=\sigma_{v}(g)\) for every \(g\in G\). Evaluation in \(\mathfrak{b}^{\prime}_{v}\) leads to a (multiplicative) \(G\)-invariant state \(\nu_{v}\) on \(C(\overline{G}^{\ell^{\text{as}}})\). Further, by the amenability of \(G\), the linear map \(\chi:C^{*}_{r}(G)\to\mathbb{C}\) defined by \(\chi(\lambda_{g}):=1\) for \(g\in G\) is bounded and multiplicative (see [3, Theorem 2.6.8]). Recall that Proposition 2.5 (in combination with Fell's absorption principle, see [3, Proposition 4.1.7]) provides a canonical identification of \(C(\overline{G}^{\ell^{\text{as}}})\rtimes_{\beta,r}G\) with the C\({}^{*}\)-subalgebra of \(\mathcal{B}(\ell^{2}(G))\) generated by \(C^{*}_{r}(G)\), \(C_{0}(G)\) and the multiplication operators \(\{\varphi_{g}^{\ell^{\text{as}}}\mid g\in G\}\). Via composing \(\chi\) with the conditional expectation \(C(\overline{G}^{\ell^{\text{as}}})\rtimes_{\beta,r}G\to C^{*}_{r}(G)\), \(f\lambda_{g}\mapsto\nu_{v}(f)\lambda_{g}\) for \(f\in C(\overline{G}^{\ell^{\text{as}}})\), \(g\in G\) and extending to \(\mathcal{B}(\ell^{2}(G))\), we hence obtain a state that contains \(C^{*}_{r}(G)\) in its multiplicative domain (see [3, Proposition 1.5.7]). It is thus invariant under the canonical action of \(G\) and restricts to an invariant mean \(\mu_{v}:\ell^{\infty}(G)\to\mathbb{C}\) with \(\widehat{(\mu_{v})_{\ell^{\text{as}}}}=\sigma_{v}|_{G}\). To conclude the statement from the theorem it hence suffices to prove that the span of all \(\sigma_{v}|_{G}\) where \(v\in\mathbb{R}^{m}\) is a smooth point of the unit sphere (with respect to \(\|\cdot\|_{\ell}\)), coincides with \(\operatorname{Hom}(G,\mathbb{R})\). For this purpose assume that the complement of the span is non-empty and denote the canonical orthonormal basis of \(\mathbb{R}^{m}\) by \((e_{i})_{i=1,\ldots,m}\). Then there exists a non-trivial vector \(\xi\) in the orthogonal complement (with respect to the canonical inner product on \(\mathbb{R}^{m}\)) of \[\operatorname{Span}\{(\sigma_{v}(e_{i}))_{i=1,\ldots,m}\in\mathbb{R}^{m}\mid v \in\mathbb{R}^{m}\text{ with }\left\|v\right\|_{\ell}=1\text{ smooth point}\}\subseteq\mathbb{R}^{m}.\] But this means that \(\sigma_{v}(\xi)=0\) for all smooth points \(v\in\mathbb{R}^{m}\) of the unit sphere. With [29, Proposition 6.7] we conclude that \(\xi=0\) in contradiction to our assumption that \(\xi\) is non-trivial. 
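Before turning to nilpotent groups, let us spell out the simplest instance of the mechanism just used; the following is only an illustration. Take \(G=\mathbb{Z}\) and \(\ell(n):=|n|\), so that \(\ell=\ell^{\text{as}}\). For every \(n\in\mathbb{Z}\) one has \[\varphi_{n}^{\ell}(k)=|k|-|k-n|=n\qquad\text{for all }k\geq|n|,\] so any invariant mean \(\mu_{+}\) on \(\ell^{\infty}(\mathbb{Z})\) arising as a weak\({}^{*}\)-cluster point of the averaging states \(f\mapsto N^{-1}\sum_{k=N+1}^{2N}f(k)\) satisfies \(\mu_{+}(\varphi_{n}^{\ell})=n\). Hence \(\widehat{(\mu_{+})}_{\ell}\) is the identity on \(\mathbb{Z}\), its span is all of \(\operatorname{Hom}(\mathbb{Z},\mathbb{R})\), and \((\mathbb{Z},|\cdot|)\) is separated. In the language of the proof above this corresponds to the smooth points \(v=\pm 1\) of the unit sphere with tangent functionals \(\sigma_{v}=\pm\operatorname{id}\).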
### Nilpotent groups

Besides constructing fixed points in the horofunction boundary of \(\mathbb{Z}^{m}\), \(m\in\mathbb{N}\) associated with length functions that are restrictions of norms on \(\mathbb{R}^{m}\), Rieffel also constructed in [29, Section 8] finite orbits in horofunction boundaries of \(\mathbb{Z}^{m}\) associated with word length functions. The investigation of such points was later extended by Walsh in [32] to nilpotent groups. He proved that for a given nilpotent group \(G\) finitely generated by a set \(S\) with \(S=S^{-1}\) there is one finite orbit associated with each facet of the polytope obtained by projecting \(S\) onto the torsion-free component of the Abelianization of \(G\). The aim of this subsection is to discuss the implications of Walsh's results in our context.

Let us review the construction in [32] in more detail. The map \(p_{G}\) from before gives a group homomorphism \(G\to\mathbb{Z}^{m}\) where \(m\) is the rank of \(G/[G,G]\). Again, view \(\mathbb{Z}^{m}\) as embedded into \(\mathbb{R}^{m}\) and consider the convex hull \(K_{S}:=\operatorname{conv}(p_{G}(S))\). The set \(K_{S}\) defines a polytope in \(\mathbb{R}^{m}\). Its proper faces of co-dimension \(1\) are called _facets_. For such a facet \(F\) consider the subset \(V_{F}:=\{s\in S\mid p_{G}(s)\in F\}\) of \(S\) and write \(\langle V_{F}\rangle\) for the (nilpotent) subgroup of \(G\) generated by \(V_{F}\). This subgroup has finite index in \(G\). Further, by [32, Section 4] one finds a word \(w_{F}\) with letters in \(V_{F}\) such that the infinite reduced word \(w_{F}w_{F}...\) defines a geodesic path in the Cayley graph of \(G\) with respect to \(S\) in the sense that each of the word's prefixes is geodesic with respect to the word metric \(d_{\ell_{S}}\). By Theorem 1.5 this geodesic path gives a Busemann point \(\xi_{F}\in\partial_{\ell_{S}}G\) in the horofunction boundary of \(G\). The stabilizer of \(\xi_{F}\) is given by \(\langle V_{F}\rangle\). But even more is true.

**Theorem 3.6** ([32, Theorem 1.1]).: _Let \(G\) be a nilpotent group with finite generating set \(S=S^{-1}\) and consider the action of \(G\) on its horofunction boundary with respect to the corresponding word length metric. Then there exists a natural one-to-one correspondence between the finite orbits of Busemann points and the facets of \(K_{S}\)._

Let \(\mathcal{F}\) be the (finite) set of facets of \(K_{S}\). Similarly to [29, Section 8] (and to Subsection 3.1), every facet \(F\in\mathcal{F}\) of \(K_{S}\) is characterized by the fact that there exists a (unique) linear functional \(\sigma_{F}\) on \(\mathbb{R}^{m}\) with \(\sigma_{F}\circ p_{G}(s)\leq 1\) for all \(s\in S\) and \(F=\operatorname{conv}(\{p_{G}(s)\mid s\in S\text{ with }\sigma_{F}\circ p_{G}(s)=1\})\). Rieffel calls this the _support functional_ of \(F\).

**Lemma 3.7**.: _For every \(F\in\mathcal{F}\) and \(h\in\langle V_{F}\rangle\) the equality \(\varphi_{h}^{\ell_{S}}(\xi_{F})=\sigma_{F}\circ p_{G}(h)\) holds._

Proof.: By [32, Lemma 4.3] there exists \(i_{0}\in\mathbb{N}\) such that for all \(i\geq i_{0}\) the element \(h^{-1}w_{F}^{i}\) can be written as a product of elements of \(V_{F}\).
As in the proof of [32, Lemma 4.1] one deduces that \(\left|h^{-1}w_{F}^{i}\right|=\sigma_{F}\circ p_{G}(h^{-1}w_{F}^{i})\) and \(\left|w_{F}^{i}\right|=\sigma_{F}\circ p_{G}(w_{F}^{i})\) for \(i\geq i_{0}\) so that \[\varphi_{h}^{\ell_{S}}(\xi_{F})=\lim_{i\to\infty}(\sigma_{F}\circ p_{G}(w_{F}^{i})-\sigma_{F}\circ p_{G}(h^{-1}w_{F}^{i}))=\sigma_{F}\circ p_{G}(h).\] Lemma 3.7 implies that nilpotent groups equipped with word length functions contain finite index subgroups that satisfy a property that is close to being separated. **Proposition 3.8**.: _Let \(G\) be a finitely generated discrete nilpotent group with finite generating set \(S=S^{-1}\). Then there exists a finite index subgroup \(H\) of \(G\) such that every group homomorphism \(H\to\mathbb{R}\) that vanishes on \(H\cap[G,G]\) can be written as a linear combination of maps of the form \(h\mapsto\mu(\varphi_{h}^{\ell_{S}})\) where \(\mu:\ell^{\infty}(G)\to\mathbb{C}\) is an \(H\)-invariant state._ Proof.: For every \(F\in\mathcal{F}\) the group \(\langle V_{F}\rangle\) has finite index in \(G\), so \(H:=\bigcap_{F\in\mathcal{F}}\left\langle V_{F}\right\rangle\) is a subgroup of finite index as well. The evaluation maps \(\mathcal{G}(G,\ell_{S})\cong C(\overline{G}^{\ell_{S}})\to\mathbb{C}\), \(f\mapsto f(\xi_{F})\) with \(F\in\mathcal{F}\) extend to \(H\)-invariant states on \(\ell^{\infty}(G)\) that we denote by \(\mu_{F}\). As in Lemma 2.9 one obtains that \((\widehat{\mu_{F}})_{H}:\operatorname{im}(p_{H})\to\mathbb{R}\) given by \(p_{H}(h)\mapsto\mu_{F}(\varphi_{h}^{\ell_{S}})\) is a well-defined group homomorphism. By Lemma 3.7, \[\widehat{(\mu_{F})}_{H}\circ p_{H}(h)=\mu_{F}(\varphi_{h}^{\ell_{S}})=\sigma_{F}\circ p_{G}(h)\] for every \(h\in H\) and therefore \(\widehat{(\mu_{F})}_{H}\circ p_{H}=(\sigma_{F}\circ p_{G})|_{H}\). Now, every group homomorphism \(\mathbb{Z}^{m}\to\mathbb{R}\) canonically extends to \(\mathbb{R}^{m}\). Further, every group homomorphism in \(\operatorname{Hom}(\mathbb{R}^{m},\mathbb{R})\) can be written as a linear combination of support functionals \(\sigma_{F}\), \(F\in\mathcal{F}\). Indeed, assume that \(\operatorname{Hom}(\mathbb{R}^{m},\mathbb{R})\setminus\operatorname{Span}\{\sigma_{F}\mid F\in\mathcal{F}\}\) is non-empty and denote the canonical orthonormal basis of \(\mathbb{R}^{m}\) by \((e_{i})_{i=1,\ldots,m}\). Then there exists a non-trivial vector \(v\) in the orthogonal complement (with respect to the canonical inner product) of \(\operatorname{Span}\{(\sigma_{F}(e_{i}))_{i=1,\ldots,m}\in\mathbb{R}^{m}\mid F\in\mathcal{F}\}\subseteq\mathbb{R}^{m}\). Without loss of generality we can assume that \(v\) is contained in some facet \(F\in\mathcal{F}\). But then \[1=\sigma_{F}(v)=\langle v,(\sigma_{F}(e_{i}))_{i=1,\ldots,m}\rangle=0,\] which is a contradiction. Hence, \(\operatorname{Span}\{\sigma_{F}\mid F\in\mathcal{F}\}=\operatorname{Hom}(\mathbb{R}^{m},\mathbb{R})\) as claimed. By using this we obtain that every group homomorphism \(H\to\mathbb{R}\) that vanishes on \(H\cap[G,G]\) can be written as a linear combination of the maps \(\widehat{(\mu_{F})}_{H}\circ p_{H}\), \(F\in\mathcal{F}\). It can be checked that the property in Proposition 3.8 allows one to prove a variant of Proposition 2.11 and Theorem 2.13 for nilpotent groups. Since the following theorem does not lead to interesting new examples of quantum metric spaces and since the proof is similar to the one in Subsection 2.2, we leave the details to the reader.
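For a concrete generating set the polytope \(K_{S}\), its facets and the support functionals \(\sigma_{F}\) can be computed directly. The following sketch is not taken from [32]; the use of scipy and the particular point set are illustrative choices. It treats \(p_{G}(S)=\{\pm e_{1},\pm e_{2}\}\subseteq\mathbb{R}^{2}\), which is the image of the generating set used for the discrete Heisenberg group in Example 3.10 below.

```python
import numpy as np
from scipy.spatial import ConvexHull

# image p_G(S) of a finite symmetric generating set, here {+-e1, +-e2} in R^2
pts = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)

hull = ConvexHull(pts)                      # K_S = conv(p_G(S))
for eq in hull.equations:                   # each row: (normal, offset) with normal . x + offset <= 0 on K_S
    normal, offset = eq[:-1], eq[-1]
    sigma_F = normal / (-offset)            # support functional: sigma_F <= 1 on K_S, = 1 exactly on the facet F
    on_facet = [tuple(p) for p in pts if np.isclose(sigma_F @ p, 1.0)]
    print("sigma_F =", np.round(sigma_F, 3), " facet vertices:", on_facet)
```

For this point set the four facets of the diamond \(K_{S}\) are recovered together with the functionals \(\sigma_{F}=(\pm 1,\pm 1)\).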
**Theorem 3.9**.: _Let \((\mathcal{A},\mathcal{H}_{A},D_{A})\) be a non-degenerate odd spectral triple on a separable unital C\({}^{*}\)-algebra \(A\) and assume that the induced Lipschitz semi-norm \(L_{D_{A}}(a):=\left\|[D_{A},a]\right\|,a\in\mathcal{A}\) defines a compact quantum metric space \((A,L_{D_{A}})\). Let further \(\alpha:G\to\text{Aut}(A)\) be a metrically equicontinuous action of a finitely generated discrete nilpotent group \(G\) equipped with a word length function. As before, define \(\mathcal{H}:=\mathcal{H}_{A}\otimes\ell^{2}(G)\) and \(D\) as in Subsection 2.1. Then the even spectral triple \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) satisfies the Lipschitz condition if and only if for every \(h\in G\) the set of all elements \(x=\sum_{g\in[G,G]}a_{g}\lambda_{gh}\in C_{c}(G,\mathcal{A})\) with \((a_{g})_{g\in[G,G]}\subseteq\mathcal{A}\) satisfying \(\left\|[D_{A}\otimes 1,x]\right\|\leq 1\) and \(\left\|[1\otimes M_{\ell},x]\right\|\leq 1\) has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\)._ _Example 3.10_.: Consider the discrete Heisenberg group \[H_{3}:=\left\{\left(\begin{array}{ccc}1&x&z\\ 0&1&y\\ 0&0&1\end{array}\right)\mid x,y,z\in\mathbb{Z}\right\}\] and define elements \[a:=\left(\begin{array}{ccc}1&1&0\\ 0&1&0\\ 0&0&1\end{array}\right),\quad b:=\left(\begin{array}{ccc}1&0&0\\ 0&1&1\\ 0&0&1\end{array}\right),\quad c:=\left(\begin{array}{ccc}1&0&1\\ 0&1&0\\ 0&0&1\end{array}\right).\] The set \(S:=\{a,a^{-1},b,b^{-1}\}\) generates \(H_{3}\). Let \(\ell:=\ell_{S}\) be the corresponding word length function. The commutator subgroup of \(H_{3}\) coincides with its center which is given by the cyclic group \(\langle c\rangle\cong\mathbb{Z}\). Let \((\mathcal{A},\mathcal{H}_{A},D_{A})\), \(L_{D_{A}}\), \(\alpha\), \(\mathcal{H}\) and \(D\) be as in Theorem 3.9, then the theorem implies that \((C_{c}(G,\mathcal{A}),\mathcal{H}\oplus\mathcal{H},D)\) satisfies the Lipschitz condition if and only if for every \(h\in H_{3}\) the set of all elements \(x=\sum_{i=0}^{\infty}a_{i}\lambda_{c^{i}h}\in C_{c}(H_{3},\mathcal{A})\) with \((a_{i})_{i\in\mathbb{N}}\subseteq\mathcal{A}\) satisfying \(\left\|[D_{A}\otimes 1,x]\right\|\leq 1\) and \(\left\|[1\otimes M_{\ell},x]\right\|\leq 1\) has totally bounded image in \((A\rtimes_{\alpha,r}G)/\mathbb{C}1\). It would be interesting to see if our methods can be extended to give an analog to Corollary 3.5 in this setting (or even for general nilpotent groups). However, \(\ell(c^{i})=2\lceil 2\sqrt{|i|}\rceil\) for every \(i\in\mathbb{N}\) by [2]. From this it can be deduced that every invariant mean \(\ell^{\infty}(\langle c\rangle)\to\mathbb{C}\) must vanish on the elements \(\varphi_{g}^{\ell|_{\langle c\rangle}}\), \(g\in\langle c\rangle\). The restriction of \(\ell\) to \(\langle c\rangle\cong\mathbb{Z}\) hence provides a natural example of a group and a length function on it that is not separated. ## 4. Examples In this section we discuss natural examples of crossed products, that are covered by the results of the previous sections. Our selection extends the one in [18]. ### Actions on AF-algebras We begin by reminding the reader of a general construction by Christensen and Ivan [9] that allows associating spectral triples with AF-algebras, which satisfy the Lipschitz condition. This construction was also employed in [18, Subsection 3.3]. Recall that an _AF-algebra_ is an inductive limit of a sequence of finite-dimensional C\({}^{*}\)-algebras. 
Given a unital AF-algebra \(A\), let \((\mathcal{A}_{i})_{i\in\mathbb{N}}\subseteq A\) with \(\mathcal{A}_{0}:=\mathbb{C}1\) and \(A=\overline{\bigcup_{i\in\mathbb{N}}\mathcal{A}_{i}}\) be an increasing sequence of finite-dimensional C\({}^{*}\)-algebras and let \(\phi\) be a faithful state on \(A\). We call \((\mathcal{A}_{i})_{i\in\mathbb{N}}\) an _AF-filtration_. Write \(\pi_{\phi}\) for the (faithful) GNS-representation of \(A\) associated with \(\phi\) and denote the corresponding GNS-Hilbert space by \(L^{2}(A,\phi)\). We will further write \(\Omega_{\phi}\in L^{2}(A,\phi)\) for the canonical cyclic vector. Using this data one can define a sequence \((H_{i})_{i\in\mathbb{N}}\) of pairwise orthogonal finite-dimensional subspaces of \(L^{2}(A,\phi)\) via \(H_{0}:=\pi_{\phi}(\mathcal{A}_{0})\Omega_{\phi}=\mathbb{C}\Omega_{\phi}\) and \(H_{i}:=\pi_{\phi}(\mathcal{A}_{i})\Omega_{\phi}\cap(\pi_{\phi}(\mathcal{A}_{i-1})\Omega_{\phi})^{\perp}\) for \(i\in\mathbb{N}_{\geq 1}\). Write \(Q_{i}\), \(i\in\mathbb{N}\) for the orthogonal projection onto \(H_{i}\). As was argued in [9, Theorem 2.1], there exists a sequence \((\lambda_{i})_{i\in\mathbb{N}}\) of real numbers with \(\lambda_{0}=0\) and \(|\lambda_{i}|\to\infty\) such that the odd spectral triple \((\mathcal{A},L^{2}(A,\phi),D)\) with \(\mathcal{A}:=\bigcup_{i\in\mathbb{N}}\mathcal{A}_{i}\) and \(D:=\sum_{i\in\mathbb{N}}\lambda_{i}Q_{i}\) satisfies the Lipschitz condition. Now assume that \(\alpha:G\to\operatorname{Aut}(A)\) is an action of a discrete group \(G\) on \(A\), satisfying \(\alpha_{g}(\mathcal{A}_{i})\subseteq\mathcal{A}_{i}\) for every \(g\in G\), \(i\in\mathbb{N}\). Since the elements of \(\mathcal{A}_{i}\) commute with the projections \(Q_{j}\) for \(j>i\), we obtain that for \(x\in\mathcal{A}_{i}\), \[\sup_{g\in G}\|[D,\alpha_{g}(x)]\|=\sup_{g\in G}\|\sum_{j\leq i}\lambda_{j}[Q_{j},\alpha_{g}(x)]\|\leq\sum_{j\leq i}2\,|\lambda_{j}|\,\|x\|<\infty,\] i.e. the action \(\alpha\) is metrically equicontinuous. In particular, if \(G\) is a virtually Abelian group that is finitely generated by a set \(S\) with \(S=S^{-1}\) and if \(\ell:G\to\mathbb{R}_{+}\) is the corresponding word length function (or more generally a length function as in Corollary 3.4), we obtain from Corollary 3.5 a non-degenerate spectral triple on the crossed product \(A\rtimes_{r,\alpha}G\) that satisfies the Lipschitz condition. _Example 4.1_.: Let \(G\) be a countable residually finite discrete group and let \((G_{i})_{i\in\mathbb{N}}\) be a strictly decreasing sequence of finite index subgroups of \(G\) with \(\bigcap_{i\in\mathbb{N}}G_{i}=\{e\}\). For every \(i\in\mathbb{N}\) the group \(G\) acts on \(G/G_{i}\) via left multiplication. Let \(p_{i}:G/G_{i+1}\to G/G_{i}\) be the (surjective and \(G\)-equivariant) map \(gG_{i+1}\mapsto gG_{i}\) and consider the corresponding inverse limit \(X\) given by \[X:=\{(g_{i})_{i\in\mathbb{N}}\mid p_{i}(g_{i+1})=g_{i}\text{ for all }i\geq 0\}\subseteq\prod_{i\in\mathbb{N}}G/G_{i}.\] We equip \(X\) with the subspace topology of the product \(\prod_{i\in\mathbb{N}}G/G_{i}\), where each \(G/G_{i}\), \(i\in\mathbb{N}\) carries the discrete topology. In this way, \(X\) becomes a Cantor set, and the action of \(G\) on its left cosets extends to a continuous action on \(X\) that (following [13, Definition 2], see also [21]) we call a \(G\)_-subodometer action_.
The commutative C\({}^{*}\)-algebra \(C(X)\) identifies with the inductive limit \(\lim_{\to}(C(G/G_{i}),\iota_{i})\), where \(\iota_{i}:C(G/G_{i})\to C(G/G_{i+1})\) is given by \(f\mapsto f\circ p\); so in particular \(C(X)\) is an AF-algebra. By fixing a faithful state \(\phi\) on \(A:=C(X)\) and by setting \(\mathcal{A}_{i}:=C(G/G_{i})\) for \(i\in\mathbb{N}\), we can apply the construction from above to obtain an odd spectral triple \((\mathcal{A},L^{2}(C(X),\phi),D)\) on \(C(X)\) that satisfies the Lipschitz condition. In particular, if \(G\) is a virtually Abelian group that is finitely generated by a set \(S\) with \(S=S^{-1}\) and if \(\ell:G\to\mathbb{R}_{+}\) is the corresponding word length function (or more generally a length function as in Corollary 3.4), we obtain a non-degenerate spectral triple on \(C(X)\rtimes_{r,\alpha}G\), that satisfies the Lipschitz condition. The crossed product \(C(X)\rtimes_{r,\alpha}G\) is called a _generalized Bunce-Deddens algebra_ (see [23] and [7]); note that for \(G=\mathbb{Z}\) and \(G_{i}:=(m_{1}...m_{i})\mathbb{Z}\subseteq\mathbb{Z}\) where \((m_{i})_{i\in\mathbb{N}}\) is a sequence of natural numbers with \(m_{i}\geq 2\) for all \(i\in\mathbb{N}\) we recover the classical _Bunce-Deddens algebras_ (see [4] and also [14, Chapter V.3]). ### Higher-dimensional non-commutative tori The _rotation algebra_ (or _non-commutative 2-torus_) \(\mathcal{A}_{\theta}\), \(\theta\in\mathbb{R}\), introduced in [25], can be defined as the universal C\({}^{*}\)-algebra generated by two unitaries \(u\) and \(v\) subject to the relation \(uv=e^{2\pi i\theta}vu\). In the case where \(\theta\in\mathbb{Z}\), \(\mathcal{A}_{\theta}\cong C(\mathbb{T}^{2})\) and for irrational values of \(\theta\) the C\({}^{*}\)-algebra \(\mathcal{A}_{\theta}\) is simple. The construction admits a natural generalization to higher dimensions: let \(\Theta:=(\theta_{i,j})_{i,j=1,...,m}\) be a skew symmetric real \((m\times m)\)-matrix (i.e. \(\theta_{i,j}=-\theta_{j,i}\) for all \(1\leq i,j\leq m\)) and define \(\mathcal{A}_{\Theta}\) to be the universal C\({}^{*}\)-algebra generated by unitaries \(u_{1},...,u_{m}\) subject to relations \(u_{i}u_{j}=e^{2\pi i\theta_{i,j}}u_{j}u_{i}\) for \(1\leq i,j\leq m\). These C\({}^{*}\)-algebras, which were defined in [26], are called _non-commutative \(m\)-tori_. Note that for \(m=1\) the C\({}^{*}\)-algebra \(\mathcal{A}_{\Theta}\) is isomorphic to \(C(\mathbb{T})\) and for \(m=2\) we have \(\mathcal{A}_{\Theta}=\mathcal{A}_{\theta_{1,2}}\). Any non-commutative torus can be constructed as an iteration of crossed products by actions of the integers \(\mathbb{Z}\). To make this precise, set \(\Theta_{d}:=(\theta_{i,j})_{1\leq i,j\leq d}\) for \(d=1,...,m\) and define an action \[\alpha_{d}:\mathbb{Z}\curvearrowright\mathcal{A}_{\Theta_{d}}\text{ via }\alpha_{d}^{n}(u_{i}):=e^{-2\pi in\theta_{i,d+1}}u_{i},\] where the \(u_{1},...,u_{d}\) are the standard generators of \(\mathcal{A}_{\Theta_{d}}\). Write \(\widetilde{u}_{1},...,\widetilde{u}_{d-1}\) for the standard generators of \(\mathcal{A}_{\Theta_{d-1}}\). Then there exists an isomorphism \(\mathcal{A}_{\Theta_{d}}\cong\mathcal{A}_{\Theta_{d-1}}\rtimes_{\alpha_{d-1},r}\mathbb{Z}\) defined by \(u_{i}\mapsto\widetilde{u}_{i}\) for \(i=1,...,d-1\) and \(u_{d}\mapsto\lambda_{1}\). 
We obtain that \[\mathcal{A}_{\Theta}\cong(...((\mathcal{A}_{\Theta_{1}}\rtimes_{\alpha_{1}}\mathbb{Z})\rtimes_{\alpha_{2}}\mathbb{Z})...)\rtimes_{\alpha_{m-1}}\mathbb{Z}\cong(...((C(\mathbb{T})\rtimes\mathbb{Z})\rtimes\mathbb{Z})...)\rtimes\mathbb{Z},\] where the induced action of \(\mathbb{Z}\) on \(\mathbb{T}\) is given by rotation by the angle \(\theta_{1,2}\). Endow \(C(\mathbb{T})\) with the canonical non-degenerate odd spectral triple \((C^{\infty}(\mathbb{T}),L^{2}(\mathbb{T}),D)\), where \(D\) is the differentiation operator. We claim that, if we equip the integers with word length functions (or more generally length functions as in Corollary 3.4), a repeated application of Corollary 3.5 leads to non-degenerate spectral triples on the non-commutative \(m\)-tori that satisfy the Lipschitz condition. Since it is well-known that the spectral triple \((C^{\infty}(\mathbb{T}),L^{2}(\mathbb{T}),D)\) on \(\mathcal{A}_{\Theta_{1}}\cong C(\mathbb{T})\) satisfies the Lipschitz condition, for this it suffices to prove that for every \(d=1,...,m-1\) the action \(\alpha_{d}:\mathbb{Z}\curvearrowright\mathcal{A}_{\Theta_{d}}\) is metrically equicontinuous. This can be proved via induction over \(d\): For \(d=1\) the action \(\alpha_{d}:\mathbb{Z}\curvearrowright\mathcal{A}_{\Theta_{1}}\cong C(\mathbb{T})\) is obviously metrically equicontinuous. For the induction step fix \(1\leq d\leq m-2\) and assume that the action \(\alpha_{d}:\mathbb{Z}\curvearrowright\mathcal{A}_{\Theta_{d}}\) is metrically equicontinuous. We proceed by distinguishing two cases: * _Case_ 1: If \(d\) is odd, the corresponding spectral triple on \(\mathcal{A}_{\Theta_{d}}\) is of the form \((\mathcal{A},\mathcal{H},D)\) with dense \(*\)-subalgebra \(\mathcal{A}\subseteq\mathcal{A}_{\Theta_{d}}\) and corresponding faithful representation \(\pi\). One easily checks that \(\alpha_{d+1}\) leaves both \(\mathcal{A}\) and \(C_{c}(\mathbb{Z},\mathcal{A})\subseteq\mathcal{A}_{\Theta_{d}}\rtimes_{\alpha_{d},r}\mathbb{Z}\cong\mathcal{A}_{\Theta_{d+1}}\) invariant. Further, if we denote the length function on \(\mathbb{Z}\) by \(\ell\), \[\sup_{n\in\mathbb{Z}}\left\|\left[\left(\begin{array}{cc}0&D\otimes 1-i\otimes M_{\ell}\\ D\otimes 1+i\otimes M_{\ell}&0\end{array}\right),\left(\begin{array}{cc}\alpha_{d+1}^{n}(x)&0\\ 0&\alpha_{d+1}^{n}(x)\end{array}\right)\right]\right\|\] (4.1) \[\leq \sup_{n\in\mathbb{Z}}\left\{2\|[D\otimes 1,\alpha_{d+1}^{n}(x)]\|+2\|[1\otimes M_{\ell},\alpha_{d+1}^{n}(x)]\|\right\}\] for every \(x\in C_{c}(\mathbb{Z},\mathcal{A})\), \(n\in\mathbb{Z}\).
For \(x=\sum_{g\in\mathbb{Z}}a_{g}\lambda_{g}\in C_{c}(\mathbb{Z},\mathcal{A})\) with \((a_{g})_{g\in\mathbb{Z}}\subseteq\mathcal{A}\) we have \(\alpha_{d+1}^{n}(x)=\sum_{g\in\mathbb{Z}}e^{-2\pi ing\theta_{d+1,d+2}}\alpha_{d+1}^{n}(a_{g})\lambda_{g}\) for all \(n\in\mathbb{Z}\) and hence \[\|[1\otimes M_{\ell},\alpha_{d+1}^{n}(x)]\| = \|\sum_{g\in\mathbb{Z}}e^{-2\pi ing\theta_{d+1,d+2}}\alpha_{d+1}^{n}(a_{g})[1\otimes M_{\ell},\lambda_{g}]\|\] \[\leq \sum_{g\in\mathrm{supp}(x)}\|a_{g}\|\,\|[1\otimes M_{\ell},\lambda_{g}]\|\] and \[\|[D\otimes 1,\alpha_{d+1}^{n}(x)]\| = \|\sum_{g\in\mathbb{Z}}e^{-2\pi ing\theta_{d+1,d+2}}[D\otimes 1,\alpha_{d+1}^{n}(a_{g})]\lambda_{g}\|\] \[\leq \sum_{g\in\mathrm{supp}(x)}\|[D\otimes 1,\alpha_{d+1}^{n}(a_{g})]\|.\] Since \(\Theta\) was arbitrary, it follows from the induction assumption that the restriction of the action \(\alpha_{d+1}\) to \(\mathcal{A}_{\Theta_{d}}\) is metrically equicontinuous and hence that the supremum in (4.1) is finite. We deduce the metric equicontinuity of the action \(\alpha_{d+1}:\mathbb{Z}\curvearrowright\mathcal{A}_{\Theta_{d+1}}\). * _Case_ 2: If \(d\) is even, the corresponding spectral triple on \(\mathcal{A}_{\Theta_{d}}\) is (since it is obtained by repeated application of Corollary 3.5) of the form \[\left(\mathcal{A},\mathcal{H}\oplus\mathcal{H},\left(\begin{array}{cc}0&D\\ D^{*}&0\end{array}\right)\right)\] with dense \(*\)-subalgebra \(\mathcal{A}\subseteq\mathcal{A}_{\Theta_{d}}\) and corresponding faithful representation \(\pi\oplus\pi\). Again, \(\alpha_{d+1}\) leaves \(\mathcal{A}\) and \(C_{c}(\mathbb{Z},\mathcal{A})\subseteq\mathcal{A}_{\Theta_{d}}\rtimes_{\alpha_{d},r}\mathbb{Z}\cong\mathcal{A}_{\Theta_{d+1}}\) invariant. Further, \[\sup_{n\in\mathbb{Z}}\left\|\left[\left(\begin{array}{cc}1\otimes M_{\ell}&D\otimes 1\\ D^{*}\otimes 1&-1\otimes M_{\ell}\end{array}\right),\left(\begin{array}{cc}\alpha_{d+1}^{n}(x)&0\\ 0&\alpha_{d+1}^{n}(x)\end{array}\right)\right]\right\|\] \[\leq \sup_{n\in\mathbb{Z}}\left\{2\|[1\otimes M_{\ell},\alpha_{d+1}^{n}(x)]\|+\|[D\otimes 1,\alpha_{d+1}^{n}(x)]\|+\|[D^{*}\otimes 1,\alpha_{d+1}^{n}(x)]\|\right\}\] for every \(x\in C_{c}(\mathbb{Z},\mathcal{A})\), \(n\in\mathbb{Z}\). In the same way as before one deduces that the action \(\alpha_{d+1}:\mathbb{Z}\curvearrowright\mathcal{A}_{\Theta_{d+1}}\) is metrically equicontinuous.
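The defining relation \(u_{i}u_{j}=e^{2\pi i\theta_{i,j}}u_{j}u_{i}\) underlying this subsection can be checked concretely in finite dimensions: for rational \(\theta=p/q\) the \(q\times q\) clock and shift matrices satisfy exactly the relation of the rotation algebra \(\mathcal{A}_{\theta}\) (they generate a finite-dimensional quotient, not \(\mathcal{A}_{\theta}\) itself). The sketch below only illustrates the relation and is not part of the crossed product construction above.

```python
import numpy as np

p, q = 1, 5
theta = p / q                          # a rational angle
omega = np.exp(2j * np.pi * theta)

u = np.diag(omega ** np.arange(q))     # clock matrix
v = np.roll(np.eye(q), 1, axis=0)      # cyclic shift matrix, v e_j = e_{j+1 mod q}

# the defining relation of the rotation algebra: u v = e^{2 pi i theta} v u
print(np.allclose(u @ v, omega * (v @ u)))   # True
```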
2309.12820
Almost-Optimal Computational Basis State Transpositions
We give an explicit construction to perform any $n$-qubit computational basis state transposition using $\Theta(n)$ gates. This nearly coincides with the lower bound $\Omega(n/\log(nd))$ on worst-case and average-case gate complexity to perform transpositions using a $d$-element gate-set, which we also prove.
Steven Herbert, Julien Sorci, Yao Tang
2023-09-22T12:19:59Z
http://arxiv.org/abs/2309.12820v2
# Almost-Optimal Computational Basis State Transpositions ###### Abstract We give an explicit construction to perform any \(n\)-qubit computational basis state transposition using \(\Theta(n)\) gates. This nearly coincides with the lower bound \(\Omega(n/\log(nd))\) on worst-case and average-case gate complexity to perform transpositions using a \(d\)-element gate-set, which we also prove. Footnote †: Equal contributions; author order is alphabetical. Contact: [email protected], [email protected], [email protected] ## 1 Introduction Quantum circuits that permute computational basis states are widely found in quantum computing: the \(X\), CNOT and Toffoli gates do exactly this, and blocks of \(\{X,\text{CNOT},\text{Toffoli}\}\) are found, for example, every time an oracle is invoked to compute a classical function. Indeed, owing to the quantum _computational_ universality of the gate-set \(\{H,\text{Toffoli}\}\) [1], every quantum circuit can be replaced by a functionally equivalent version represented as alternating blocks of permutations and Hadamard gates. Furthermore, it has been observed that many of the most powerful quantum circuits amount to no more than a computational basis state permutation conjugated by a transform, such as the Fourier or Schur transform [2]. Owing to the general importance of permutations in quantum circuits, we explore bounds on performing arbitrary computational basis state _transpositions_. Specifically, we consider an \(n\)-qubit circuit with computational basis states \(\{|x\rangle:x\in\{0,1\}^{n}\}\), and are interested in the gate complexity of the operation: \[|x\rangle\mapsto\begin{cases}|x\rangle\,,&\text{if }x\notin\{a,b\}\\ |b\rangle\,,&\text{if }x=a\\ |a\rangle\,,&\text{if }x=b\end{cases} \tag{1}\] for fixed but arbitrary \(a,b\in\{0,1\}^{n}\). Previous work on the compilation of permutation circuits has largely focused on the complexity of compiling an arbitrary computational basis state permutation. The worst-case gate complexity was shown to be \(\Omega(n2^{n}/\log(n))\) in Ref. [3] and constructions which nearly meet this lower-bound have been proposed in Ref. [4] and Ref. [5]. On the other hand, there appears to be little in the literature on the compilation of a computational basis state transposition. Noting that the set of transpositions generates the full group of permutations, transpositions constitute an important building block for quantum circuits in general. The organisation of this note is as follows. In Section 2 we prove a lower bound on the worst-case gate complexity to compile a unitary from a given family of unitary matrices, and show that the same asymptotic lower bound holds for the average gate complexity, independent of the number of ancilla qubits present. We specialise these results to the case of computational basis state transpositions. In Section 3 we give a construction for a circuit that performs any transposition with \(\Theta(n)\) gates and either two or \(n-1\) clean ancillas, which nearly achieves the lower-bound of \(\Omega(n/\log(nd))\) for a \(d\)-element gate-set proved in the preceding section. In Section 4 we present numerical results demonstrating the performance of our proposed method of performing a computational basis state transposition in terms of CNOT and T gate counts. Lastly, in Section 5 we conclude the paper with some final remarks.
## 2 A lower-bound on the gate-complexity of computational basis state transpositions In this section we prove a lower-bound on the worst-case and average-case gate complexity of a computational basis state transposition for any finite gate-set. We begin by proving a worst-case lower bound for an arbitrary set of operators, and then specialise to transpositions. For the remainder of the section we will let \(\mathcal{G}\) denote a finite gate-set consisting of \(d\) gates with each gate acting on at most \(c\) qubits for some constant \(c\). **Theorem 2.1**.: _Let \(\mathcal{G}\) be a finite gate-set consisting of \(d\) gates. Then for any set of unitary matrices, \(\mathcal{U}\), there is an element of \(\mathcal{U}\) with gate complexity_ \[\Omega\Big{(}\log(|\mathcal{U}|)/\log(nd)\Big{)}. \tag{2}\] _Moreover, if \(|\mathcal{U}|\in\mathcal{O}(n^{n})\) then this holds even if we permit an arbitrary number of ancilla qubits._ Footnote 1: We note that a similar version of Theorem 2.1 has appeared in [3, Lemma 8]. However, our more general statement will be important for the results which follow it. Proof.: We first show that the claimed gate complexity holds if no ancillas are present. Consider an \(n\)-qubit circuit that is compiled by \(k\) gates of \(\mathcal{G}\). Since each gate in \(\mathcal{G}\) acts on at most \(c\) qubits then there are at most \(\binom{n}{c}d\) ways of applying a gate from \(\mathcal{G}\) to the circuit, and therefore there are at most \(\big{(}\binom{n}{c}d\big{)}^{k}\) possible operations that can be achieved by a circuit with \(k\) gates. If every element of \(\mathcal{U}\) can be compiled by such a circuit then \(k\) must be large enough so that: \[|\mathcal{U}|\leq\Big{(}\binom{n}{c}d\Big{)}^{k}\] or equivalently: \[k\geq\log\big{(}|\mathcal{U}|\big{)}/\log\Big{(}\binom{n}{c}d\Big{)}. \tag{3}\] Therefore there is some element in \(\mathcal{U}\) that requires at least \(\log\Big{(}|\mathcal{U}|\Big{)}/\log\Big{(}\binom{n}{c}d\Big{)}\) gates of \(\mathcal{G}\) to be compiled. For any positive integers \(n,c\) with \(c\leq n\) the binomial coefficient satisfies the well-known bound \(n^{c}/c^{c}\leq\binom{n}{c}\leq(ne/c)^{c}\), from which it directly follows that \(\log(\binom{n}{c})\in\Theta(\log(n))\). Thus the resulting element of \(\mathcal{U}\) has gate-complexity \(\Omega\big{(}\log(|\mathcal{U}|)/\log(nd)\big{)}\), as claimed. Next, we consider the case where an additional \(m\) ancillas are available. In particular we ask whether the lower-bound on the gate complexity can be reduced from that in (2). First, by the premise, we are only concerned with the case where the lower bound has been reduced from \(\log\big{(}|\mathcal{U}|\big{)}/\log\Big{(}\binom{n}{c}d\Big{)}\), and so as each gate operates on at most \(c\) qubits, this means that at most some \[n^{\prime}\leq c\cdot\frac{\log(|\mathcal{U}|)}{\log\Big{(}\binom{n}{c}d\Big{)}}\] qubits can be involved in the circuit. The assumption that \(|\mathcal{U}|\in\mathcal{O}(n^{n})\) implies that \(\log(|\mathcal{U}|)\in\mathcal{O}(n\log(n))\), and thus \(n^{\prime}\in\mathcal{O}(n)\). Therefore, even if an arbitrary number of ancillas are available, we can effectively upper-bound the total number of qubits by \(n^{\prime}\) (as the ancillas are identical). It follows that we can substitute \(n^{\prime}\) into the denominator of the expression in (2); however, as \(n^{\prime}\in\mathcal{O}(n)\) the asymptotic expression does not change. We now show that the same gate complexity in Theorem 2.1 holds on average.
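Before turning to the average case, inequality (3) is easy to evaluate for concrete parameters. The sketch below plugs in the family of transpositions with a fixed \(|a\rangle\) (so \(|\mathcal{U}|=2^{n}-1\), as used in Corollary 2.3 below) and an illustrative gate-set with \(d=3\) gates acting on at most \(c=3\) qubits; these particular numbers are examples, not values fixed by the paper.

```python
import math

def lower_bound_k(n, d, c, family_size):
    # inequality (3): k >= log(|U|) / log(C(n, c) * d)
    return math.log(family_size) / math.log(math.comb(n, c) * d)

# transpositions with a fixed |a>: |U| = 2^n - 1; an illustrative gate-set with d = 3, c = 3
for n in (8, 16, 32, 64):
    print(n, math.ceil(lower_bound_k(n, d=3, c=3, family_size=2 ** n - 1)))
```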
**Theorem 2.2**.: _Let \(\mathcal{G}\) be a finite gate-set consisting of \(d\) gates. Then for any set of unitary matrices, \(\mathcal{U}\), the average gate complexity of the elements of \(\mathcal{U}\) is_ \[\Omega\Big{(}\log(|\mathcal{U}|)/\log(nd)\Big{)}.\] _Moreover, if \(|\mathcal{U}|\in\mathcal{O}(n^{n})\) then this holds even if we permit an arbitrary number of ancilla qubits._ Proof.: If we now adapt Theorem 2.1 to consider \(\tilde{k}\) large enough such that _half_ of the elements of \(\mathcal{U}\) can be compiled, then we obtain: \[\tilde{k}\geq\log\big{(}|\mathcal{U}|/2\big{)}/\log\Big{(}\binom{n}{c}d\Big{)}\] To lower-bound the average gate complexity of compiling the elements of \(\mathcal{U}\) we now lower-bound: * At most half of the elements of \(\mathcal{U}\) have been compiled within \(\log\big{(}|\mathcal{U}|/2\big{)}/\log\Big{(}\binom{n}{c}d\Big{)}\) operations, and these have consumed at least \(0\) operations in their compilation. * At least half of the elements of \(\mathcal{U}\) have not been compiled within \(\log\big{(}|\mathcal{U}|/2\big{)}/\log\Big{(}\binom{n}{c}d\Big{)}\) operations, and each of these has consumed at least \(\log\big{(}|\mathcal{U}|/2\big{)}/\log\Big{(}\binom{n}{c}d\Big{)}\) operations to compile. From this we can easily obtain a lower-bound on the average gate complexity: \[k_{\text{ave}}\geq 0.5\times 0+0.5\times\log\big{(}|\mathcal{U}|/2\big{)}/\log\Big{(}\binom{n}{c}d\Big{)}.\] The claim that the average complexity holds even with an arbitrary number of ancilla qubits follows by the same reasoning presented in the proof of Theorem 2.1. We now specialise Theorems 2.1 and 2.2 to deduce the worst-case and average gate complexity of a computational basis state transposition. **Corollary 2.3**.: _Let \(\mathcal{G}\) be a finite gate-set consisting of \(d\) gates. Then, for any \(n\)-bit computational basis state, \(|a\rangle\) there exists another \(n\)-bit computational basis state, \(|b\rangle\), such that the gate complexity required to compile a transposition of \(|a\rangle\) and \(|b\rangle\) using the gate-set \(\mathcal{G}\) is \(\Omega\Big{(}n/\log(nd)\Big{)}\). In addition, the average complexity of such a transposition is \(\Omega\Big{(}n/\log(nd)\Big{)}\). Both of these lower bounds hold even if we permit an arbitrary number of ancilla qubits._ Proof.: This follows directly from Theorems 2.1 and 2.2 by taking \(\mathcal{U}\) to be the set of transpositions with \(|a\rangle\). This set has \(2^{n}-1\) elements since there are \(2^{n}-1\) distinct transpositions of \(|a\rangle\) with another computational basis state. ## 3 Achieving nearly-optimal gate complexity for a computational basis state transposition In this section we present a quantum circuit construction to compile an arbitrary transposition. Our construction makes use of the \(C^{n}X\) gate, so we first provide several statements on its decomposition into elementary gates. The main ideas behind these \(C^{n}X\) decompositions can be traced back to [6]. In the following, we will refer to an ancilla qubit as a _borrowed ancilla_ if it can be in any initial state and its output state is unchanged. Similarly, we will refer to an ancilla qubit as a _clean ancilla_ if its initial and final state are both \(|0\rangle\). **Lemma 3.1**.: _For all \(n\geq 3\), a \(C^{n}X\) gate can be compiled using \(n-2\) borrowed ancilla qubits and at most \(4n-8\) Toffoli gates._ The compilation and its proof are deferred to the Appendix, but we give the general construction now.
We write \(\mathrm{Tof}(i,j,k)\) to denote a Toffoli controlled on qubits \(i,j\) and targeted on qubit \(k\), and assume that a \(C^{n}X\) is controlled on qubits \(x_{1},...,x_{n}\), targeting qubit \(x_{n+1}\), and \(a_{1},...,a_{n-2}\) are borrowed ancillas. The sequence of Toffoli gates which implements the desired \(C^{n}X\) operation is: \[\mathrm{Tof}(a_{n-2},x_{n},x_{n+1})\times\Big{[}\mathrm{Tof}(a_{n-3},x_{n-1},a _{n-2})\mathrm{Tof}(a_{n-4},x_{n-2},a_{n-3})\dots\mathrm{Tof}(a_{1},x_{3},a_{2} )\Big{]}\times \tag{4}\] \[\Big{[}\mathrm{Tof}(x_{1},x_{2},a_{1})\mathrm{Tof}(a_{1},x_{3},a_{2})\dots \mathrm{Tof}(a_{n-4},x_{n-2},a_{n-3})\Big{]}\times\mathrm{Tof}(a_{n-3},x_{n-1},a_{n-2})\] which is all repeated once more. The reader is directed to the Appendix for an explicitly worked out example of the above decomposition. The compilation of Lemma 3.1 uses a large number of ancilla qubits; However, this construction can be used for an alternative compilation with the same asymptotic gate complexity but which uses only a single clean ancilla. **Lemma 3.2**.: _For all \(n\geq 3\), a \(C^{n}X\) gate can be compiled using one clean ancilla qubit and at most:_ 1. \(3\) _Toffoli gates when_ \(n=3\)_;_ 2. \(6n-18\) _Toffoli gates for all_ \(n\geq 4\)_._ Proof.: Let \(n_{0}=\lceil n/2\rceil\) and \(n_{1}=\lfloor n/2\rfloor\) (thus \(n_{0}+n_{1}=n\)). We show that for all \(n\geq 3\) the circuit: acts as a \(C^{n}X\) gate controlled on the first two registers and targeting the third, with the fourth register being a clean ancilla (where a control on a bundle of qubits represents a control on each qubit in the bundle). We prove this by showing that it implements the mapping: \[|x,y,z,0\rangle\mapsto|x,y,z\oplus(x_{1}\wedge\cdots\wedge x_{n_{0}})\wedge(y _{1}\wedge\cdots\wedge y_{n_{1}}),0\rangle\] for all \(x=(x_{1},...,x_{n_{0}})\in\{0,1\}^{n_{0}}\), \(y=(y_{1},...,y_{n_{1}})\in\{0,1\}^{n_{1}}\), and \(z\in\{0,1\}\), where \(\oplus\) denotes bit-wise addition and \(\wedge\) denotes the logical "and". Considering the action of each operator in the circuit on an arbitrary initial state \(|x,y,z,0\rangle\), the basis state is mapped as: \[|x,y,z,0\rangle \mapsto|x,y,z,x_{1}\wedge x_{2}\wedge\cdots\wedge x_{n_{0}}\rangle\] \[\mapsto|x,y,z\oplus(x_{1}\wedge x_{2}\wedge\cdots\wedge x_{n_{0}}) \wedge(y_{1}\wedge\cdots\wedge y_{n_{1}}),x_{1}\wedge x_{2}\wedge\cdots\wedge x _{n_{0}}\rangle\] \[\mapsto|x,y,z\oplus(x_{1}\wedge x_{2}\wedge\cdots\wedge x_{n_{0}}) \wedge(y_{1}\wedge\cdots\wedge y_{n_{1}}),0\rangle\,,\] which shows the circuit implements the claimed operation. Lastly, we count the number of Toffoli gates used. The circuit is composed of two \(C^{n_{0}}X\) gates and one \(C^{n_{1}+1}X\) gate. We compute the resulting Toffoli gate count by cases. When \(n=3\), then \(n_{0}=2\) and \(n_{1}=1\), so in this case we have used \(3\) Toffoli gates and only one clean ancilla shown. This completes the proof of (a). For (b) we first consider the case of \(n=4\), where \(n_{0}=2\) and \(n_{1}=2\). In this case we may apply Lemma 3.1 to compile the \(C^{n_{1}+1}X\) gate using one borrowed ancilla and \(4\) Toffoli gates. There are \(2\) qubits that are neither the target nor the control of the \(C^{n_{1}+1}X\) gate and either may be used as a borrowed ancilla for its compilation. Therefore in this case we have used a total of \(6\) Toffoli gates and only one clean ancilla, as claimed in (b), i.e., noting \(6\times 4-18=6\) Toffoli gates for \(n=4\). 
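As a sanity check of the ladder in (4), the construction can also be simulated classically, since a Toffoli acts on bit strings by \(t\mapsto t\oplus(c_{1}\wedge c_{2})\). The sketch below is not from the paper: it assumes the product in (4) is read in circuit order (left to right), uses an illustrative qubit encoding (controls \(x_{1},\ldots,x_{n}\) as qubits \(0,\ldots,n-1\), the target as qubit \(n\), the borrowed ancillas after it), and brute-forces all inputs for small \(n\), checking both that the target is flipped exactly when all controls are \(1\) and that the borrowed ancillas are restored.

```python
from itertools import product

def ladder(n):
    # Toffoli list for the C^nX construction of Lemma 3.1 / Eq. (4), n >= 3
    x, t = list(range(n)), n
    a = list(range(n + 1, 2 * n - 1))                                 # borrowed ancillas a_1..a_{n-2}
    down = [(a[k - 1], x[k + 1], a[k]) for k in range(n - 3, 0, -1)]  # Tof(a_k, x_{k+2}, a_{k+1})
    up = [(x[0], x[1], a[0])] + [(a[k - 1], x[k + 1], a[k]) for k in range(1, n - 2)]
    one_pass = [(a[-1], x[-1], t)] + down + up
    return one_pass * 2                                               # "repeated once more"

def check(n):
    gates = ladder(n)
    assert len(gates) == 4 * n - 8                                    # Toffoli count of Lemma 3.1
    for bits in product((0, 1), repeat=2 * n - 1):                    # controls, target, ancillas
        state = list(bits)
        for c1, c2, tgt in gates:
            state[tgt] ^= state[c1] & state[c2]
        expected = list(bits)
        expected[n] ^= int(all(bits[:n]))                             # target flips iff all controls are 1
        assert state == expected                                      # and the borrowed ancillas are restored
    return True

print(all(check(n) for n in (3, 4, 5, 6)))   # True
```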
Finally, when \(n\geq 5\) then both \(n_{0}\) and \(n_{1}+1\) are at least \(3\) so we may compile the \(C^{n_{0}}X\) and \(C^{n_{1}+1}X\) gates using Lemma 3.1. By Lemma 3.1, a \(C^{n_{0}}X\) gate can be compiled using \(n_{0}-2\) borrowed ancilla qubits. Since \(n_{0}-2\leq n_{1}+1\), the \(n_{1}+1\) qubits that are neither the target nor control of the \(C^{n_{0}}X\) gates may be used as borrowed ancillas for their compilation. Similarly, a \(C^{n_{1}+1}X\) gate can be compiled using \(n_{1}-1\) borrowed ancilla qubits, and since \(n_{1}-1\leq n_{0}\), the \(n_{0}\) qubits that are neither the target nor control of the \(C^{n_{1}+1}X\) gate may be used as borrowed ancillas to compile it. Therefore no additional ancilla qubits are required. Counting Toffoli gates, we obtain a total of: \[2(4n_{0}-8)+4(n_{1}+1)-8=4n+4n_{0}-20\leq 4n+2n+2-20=6n-18\] gates, where the first equality follows since \(n_{0}+n_{1}=n\), and the inequality follows since \(4n_{0}\leq 2n+2\) (which holds because \(n\) is an integer). Thus we have used the claimed number of Toffoli gates in (b). This completes the proof in all cases. The final \(C^{n}X\) compilation that we present uses a larger number of ancilla qubits, but reduces the number of Toffoli gates by a multiplicative constant. **Lemma 3.3**.: _For all \(n\geq 3\), a \(C^{n}X\) gate can be compiled using \(n-2\) clean ancilla qubits and \(2n-3\) Toffoli gates._ Proof.: We provide a proof for the case \(n=4\) for concreteness. The general case follows by an analogous argument on a circuit with the same pyramid-like shape that we present now. Consider the circuit: We will show that this circuit acts as a \(C^{4}X\) gate which is controlled on the first, second, fourth, and sixth qubit, targets the final qubit, and the remaining qubits are clean ancillas. Applying the gates one at a time to an arbitrary initial computational basis state \(\ket{x_{1},x_{2},0,x_{3},0,x_{4},x_{5}}\) we obtain: \[\ket{x_{1},x_{2},0,x_{3},0,x_{4},x_{5}} \mapsto\ket{x_{1},x_{2},x_{1}\wedge x_{2},x_{3},0,x_{4},x_{5}}\] \[\mapsto\ket{x_{1},x_{2},x_{1}\wedge x_{2},x_{3},x_{1}\wedge x_{2} \wedge x_{3},x_{4},x_{5}}\] \[\mapsto\ket{x_{1},x_{2},x_{1}\wedge x_{2},x_{3},x_{1}\wedge x_{2} \wedge x_{3},x_{4},x_{5}\oplus(x_{1}\wedge x_{2}\wedge x_{3}\wedge x_{4})}\] \[\mapsto\ket{x_{1},x_{2},x_{1}\wedge x_{2},x_{3},0,x_{4},x_{5} \oplus(x_{1}\wedge x_{2}\wedge x_{3}\wedge x_{4})}\] \[\mapsto\ket{x_{1},x_{2},0,x_{3},0,x_{4},x_{5}\oplus(x_{1}\wedge x _{2}\wedge x_{3}\wedge x_{4})}\] which shows that the circuit implements the claimed operation. The number of Toffoli gates and clean ancillas follows by directly counting. We can now give the main result of this section. To this end, let \(\ket{a}\) and \(\ket{b}\) be an arbitrary pair of \(n\)-qubit computational basis states that are to be transposed; further let \(\Pi_{a}\) and \(\Pi_{b}\) be projectors onto these basis states. Our construction will make use of the \((n+1)\)-qubit operators: \[\Pi_{a}\otimes X+(I-\Pi_{a})\otimes I, \tag{5}\] \[\Pi_{b}\otimes X+(I-\Pi_{b})\otimes I. \tag{6}\] In circuit diagrams, these are represented as a block denoted \(\Pi_{a}\) or \(\Pi_{b}\) controlling a "\(\oplus\)" on the target qubit. As the projectors in question are onto computational basis states, these gates may be realised by a \(C^{n}X\) gate where the control is "sandwiched" between a pair of \(X\) gates when the conditioned on \(0\) for the relevant qubit. 
In this way, the gate "picks out" a single computational basis state which controls a bit flip on the target qubit. We also define the \(n\)-qubit operator: \[U_{a,b}:=U_{1}\otimes U_{2}\otimes\cdots\otimes U_{n}\] where \(U_{i}=X\) if \(a\) and \(b\) differ in the \(i^{th}\) bit, and \(U_{i}=I\) otherwise. Note that \(U_{a,b}\) acts on \(\ket{a}\) and \(\ket{b}\) as \(U_{a,b}\ket{a}=\ket{b}\) and \(U_{a,b}\ket{b}=\ket{a}\). **Theorem 3.4**.: _The circuit:_ _acts as a transposition of the computational basis states \(\left|a\right\rangle\) and \(\left|b\right\rangle\) for all \(n\). For \(n=1\), \(n=2\) and \(n=3\) the circuit requires at most:_ 1. \(2\) _Hadamard gates;_ \(4\) _X gates;_ \(4\) _CNOT gates; and one clean ancilla;_ 2. \(2\) _Hadamard gates;_ \(8\) _X gates;_ \(4\) _CNOT gates;_ \(2\) _Toffoli gates; and one clean ancilla;_ 3. \(2\) _Hadamard gates;_ \(12\) _X gates;_ \(6\) _CNOT gates;_ \(6\) _Toffoli gates; and 2 clean ancillas;_ _respectively, and for all \(n\geq 4\) requires at most either:_ 1. \(2\) _Hadamard gates;_ \(4n\) _X gates;_ \(2n\) _CNOT gates;_ \(12n-36\) _Toffoli gates; and_ \(2\) _clean ancillas; or_ 2. \(2\) _Hadamard gates;_ \(4n\) _X gates;_ \(2n\) _CNOT gates;_ \(4n-6\) _Toffoli gates; and_ \(n-1\) _clean ancillas._ _Thus in all cases the overall gate complexity is \(\Theta(n)\), nearly achieving the lower-bound of Corollary 2.3._ Proof.: First, we show that the circuit acts as the mapping defined in (1) for an arbitrary input \(\left|x\right\rangle\left|0\right\rangle\): * For \(\left|x\right\rangle\left|0\right\rangle\) with \(x\notin\left\{a,b\right\}\), the first Hadamard gate maps \(\left|x\right\rangle\left|0\right\rangle\) to \(\left|x\right\rangle\left|+\right\rangle\). As \(U_{a,b}\) is a permutation which sends \(\left|a\right\rangle\) to \(\left|b\right\rangle\) and \(\left|b\right\rangle\) to \(\left|a\right\rangle\), it follows that \(U_{a,b}\) must send \(\left|x\right\rangle\) to some computational basis state \(\left|y\right\rangle\), where \(y\) is not equal to \(a\) or \(b\). Therefore the controlled-\(U_{a,b}\) sends \(\left|x\right\rangle\left|+\right\rangle\) to \(\frac{1}{\sqrt{2}}\left|x\right\rangle\left|0\right\rangle+\frac{1}{\sqrt{2}} \left|y\right\rangle\left|1\right\rangle\). Following this, none of the conditions of the next two controlled operations are met, so the state remains unchanged. The state is then mapped by the last controlled-\(U_{a,b}\) to \(\frac{1}{\sqrt{2}}\left|x\right\rangle\left|0\right\rangle+\frac{1}{\sqrt{2}} \left|x\right\rangle\left|1\right\rangle\), and the final Hadamard gate maps this to \(\left|x\right\rangle\left|0\right\rangle\). Therefore the overall operation in this case is to map \(\left|x\right\rangle\left|0\right\rangle\) to \(\left|x\right\rangle\left|0\right\rangle\). 
* Turning to the case where \(x=a\), the first Hadamard gate sends \(\left|a\right\rangle\left|0\right\rangle\mapsto\frac{1}{\sqrt{2}}\left|a \right\rangle\left|0\right\rangle+\frac{1}{\sqrt{2}}\left|a\right\rangle \left|1\right\rangle\); the controlled-\(U_{a,b}\) operation then sends this to \(\frac{1}{\sqrt{2}}\left|a\right\rangle\left|0\right\rangle+\frac{1}{\sqrt{2}} \left|b\right\rangle\left|1\right\rangle\); the third and fourth circuit block together send this to \(\frac{1}{\sqrt{2}}\left|a\right\rangle\left|1\right\rangle+\frac{1}{\sqrt{2} }\left|b\right\rangle\left|0\right\rangle\); the second controlled-\(U_{a,b}\) operation then sends this to \(\frac{1}{\sqrt{2}}\left|b\right\rangle\left|1\right\rangle+\frac{1}{\sqrt{2}} \left|b\right\rangle\left|0\right\rangle\) and the remaining Hadamard gate maps this to \(\left|b\right\rangle\left|0\right\rangle\). Thus the overall operation is to send \(\left|a\right\rangle\left|0\right\rangle\mapsto\left|b\right\rangle\left|0\right\rangle\). The case where \(x=b\) follows by a completely analogous argument. By the above case analysis, we have shown that the circuit performs the claimed transposition. Having shown that the circuit has the required operation, it remains to count gates and qubits. There are two uses of the controlled-\(U_{a,b}\) operator. Each requires at most \(n\) CNOT gates, giving at most \(2n\) CNOT gates. Continuing, there are two uses of the operators defined in (5) and (6). Each of these operators consists of a \(C^{n}X\) gate and at most \(2n\) additional \(X\) gates. Thus for any \(n\geq 1\) the total number of operations required is: \(2\) Hadamard gates; \(4n\) X gates; \(2n\) CNOT gates; and \(2\)\(C^{n}X\) gates. For the cases \(n=1\) and \(n=2\) this results in the bounds claimed in (i) and (ii). For \(n\geq 3\), we may apply either Lemma 3.2 or 3.3 to compile each \(C^{n}X\) (and it turns out that for the case of \(n=3\) the resources are identical). Using the compilation provided by Lemma 3.2, each \(C^{n}X\) can be compiled with a clean ancilla qubit and \(3\) Toffoli gates when \(n=3\), or \(6n-18\) Toffoli gates when \(n\geq 4\). Since the required ancilla qubit is a clean ancilla, we may reuse the same one for each of these operations. Therefore in this case we have used a total of: two clean ancillas - one explicitly shown and one required by Lemma 3.2; \(6\) Toffoli gates for \(n=3\) and \(12n-36\) Toffoli gates for \(n\geq 4\); \(2n\) CNOT gates; \(2\) Hadamard gates; and at most \(4n\)\(X\) gates. This completes the proof of (a). For (b), suppose we instead apply the \(C^{n}X\) compilation of Lemma 3.3. In this case each \(C^{n}X\) can be compiled using \(n-2\) clean ancillas and \(2n-3\) Toffoli gates. Since the ancillas are clean then they may be reused for each of these operations. Therefore the total gate complexity in this case is: \(n-1\) clean ancillas - one explicitly shown and \(n-2\) required by Lemma 3.3; \(4n-6\) Toffoli gates; \(2n\) CNOT gates; \(2\) Hadamard gates; and at most \(4n\)\(X\) gates - completing the proof of (b). **Remark**.: _The number of \(X\) gates can be reduced to \(3n\) by noticing that for any qubit controlled on the 0 state for both the \(\Pi_{a}\) and \(\Pi_{b}\) controlled \(X\) gates, a pair of \(X\) gates will cancel. 
That is, owing to the fact that these gates occur consecutively, there will be three rather than four (partially filled) banks of \(X\) gates, when the trivial simplification \(XX=I\) is applied to the compilation._ ## 4 Numerical Results We now present some numerical results to demonstrate the performance of our proposed method of transposing computational basis states. The numerical results fall into two categories. First, we compile a range of transpositions using the approach described in Theorem 3.4 (a) and (b) and compare the CNOT and Toffoli gate counts of the resulting circuits to the theoretical bounds described therein. Second, we compare the CNOT and T gate counts of our method against several state-of-the-art approaches for compiling permutational circuits, namely the Tweedledum-based construction presented in [7] and the ToffoliBox of pytket. In each case, the Toffoli gates are compiled according to Fig. 1, such that the final circuits consist only of CNOT and single-qubit gates. The choice to compare CNOT and T gate counts is motivated by the fact that typically CNOT gates are the most expensive gates when running circuits with physical qubits, and T gates are the most expensive gates to perform fault-tolerantly. All of the resulting circuits were compiled using pytket 1.18.0, and only mild optimisation passes were used to simplify gate redundancies.2 Footnote 2: See [https://cqcl.github.io/tket/pytket/api/](https://cqcl.github.io/tket/pytket/api/) for pytket documentation. ### Comparison with theoretical bounds For our first set of results, we compare the average CNOT and Toffoli gate counts across a range of random transpositions to the theoretical bounds described in Theorem 3.4. For each \(2\leq n\leq 20\), we generate 200 random transpositions of two \(n\)-qubit computational basis states and use the constructions proposed in Theorem 3.4 (a) and (b) to compile their corresponding circuits, resulting in circuits over the gate-set \(\{H,X,\text{CNOT},\text{Toffoli}\}\). The RemoveRedundancies pass in pytket was applied to each of these circuits and then the average CNOT and Toffoli gate counts were tabulated and presented in Table 1. The results show that the average CNOT count is typically only approximately half of the bound that we prove in this paper; whereas the Toffoli counts saturate or nearly saturate the bounds in all cases. ### Comparison with other approaches For our second set of results we compare the average CNOT and T gate counts of the compilation method proposed in Theorem 3.4 to the Tweedledum-based compilation method of Ref. [7] and the ToffoliBox of pytket. In each case either 100 transpositions of the same Hamming distance were randomly generated, or in the case where there are fewer than 100 distinct transpositions of the same Hamming distance then the entire set of them was considered3. For each Hamming distance the resulting circuits were compiled and the average CNOT count and T gate counts were computed and presented in Figures 2 and 3. Footnote 3: In particular, in the following cases there were fewer than 100 possible transpositions (format is (number of qubits, Hamming distance) total transpositions): (4, 1) 32; (4, 2) 48; (4, 3) 32; (4, 4) 8; (5, 1) 80; (5, 4) 80; (5, 5) 16; (6, 6) 32; (7, 7) 64. _Tweedledum:_ The first compilation method that we compare our method to is that of Ref. [7].
Figure 1: The standard decomposition of the Toffoli gate into single-qubit and CNOT gates. [8, Fig. 4.9] For each of the random transpositions the circuits were compiled and simplified using the CliffordSimp, SynthesiseTket and RemoveRedundancies passes in pytket, which resulted in circuits using CNOT, TK1, and global phase gates. Pytket 1.18.0 does not contain functionality for T gate synthesis, and so only the CNOT gate counts were recorded and presented in Fig. 2. _ToffoliBox:_ The second compilation method is the ToffoliBox of pytket. The compilation can use one of two strategies, referred to as "Matching" and "Cycle". For the matching strategy, the resulting circuits were compiled and simplified using the CliffordSimp, SynthesiseTket and RemoveRedundancies passes in pytket, resulting in circuits that use CNOT, TK1, and global phase gates. As noted in the Tweedledum case, pytket 1.18.0 does not contain functionality for T gate synthesis, and so only the CNOT gate counts were recorded and presented in Fig. 2 as Pytket-Match. It is worth mentioning that compilation from a gate-set including the continuously-parameterised gate TK1 to a finite gate-set, such as that containing the Clifford gates and T gates (or Cliffords, Toffolis and T gates), can only be done approximately, and if very high accuracy is required, the T gate count becomes large. For this reason, fault-tolerant compilation is likely to favor techniques that only require gates from a suitable finite set in the first place. For the cycle strategy, the ToffoliBox returns a circuit consisting of \(X\) and \(C^{n}X\) gates. To more readily compare these circuits to those of our proposed construction, the \(C^{n}X\) gates were decomposed into \(X\), CNOT and Toffoli gates using the same \(C^{n}X\) decompositions used in Theorem 3.4 (a) and (b). The Toffoli gates were then decomposed into single-qubit gates and CNOT gates using the standard decomposition of Fig. 1. The RemoveRedundancies pass of pytket was then applied and the CNOT and T gate counts were recorded. The counts are denoted by Pytket-Cycle (a) and (b) in Figures 2 and 3 corresponding to the \(C^{n}X\) decomposition used. _Theorem 3.4:_ For the compilation method of Theorem 3.4 the circuits were compiled into \(X\), CNOT and Toffoli gates using the constructions described in the theorem. Following this, the Toffoli gates in the circuits were then decomposed into single-qubit gates and CNOT gates using the decomposition of Fig. 1. The RemoveRedundancies pass of pytket was then applied and the CNOT and T gate counts were recorded. The corresponding counts are denoted by Thm 3.4 (a) and Thm 3.4 (b) in Figures 2 and 3. \begin{table} \begin{tabular}{|l|c c c|c c c c|} \hline & \multicolumn{3}{c|}{CNOT} & \multicolumn{4}{c|}{Toffoli} \\ \(n\) & Avg (a) & Avg (b) & Bound & Avg (a) & Bound (a) & Avg (b) & Bound (b) \\ \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 1: The average CNOT and Toffoli gate counts across 200 random transpositions. The number \(n\) is the number of qubits required for the computational basis states that are transposed. ‘(a)’ and ‘(b)’ refer to the two settings considered in Theorem 3.4; note that the bound on the number of CNOTs is the same for both (a) and (b). We can see that the methods we propose in this paper are relatively most advantageous for large numbers of qubits and for large Hamming distances.
This is to be expected, as our methods are nearly optimal in the number of qubits, and have approximately the same performance for any transposition - whereas other methods, such as those that use a Gray code, suffer when the transposition is such that the Hamming distance between transposed computational basis states (written as binary strings) is large. Figure 2: The average CNOT counts across 100 randomly selected transpositions (or over all transpositions, when the total is fewer than 100) between computational basis states with a fixed Hamming distance. The number of qubits is the number of qubits required for the computational basis states that are transposed. Figure 3: The average T gate counts across 100 randomly selected transpositions (or over all transpositions, when the total is fewer than 100) between computational basis states with a fixed Hamming distance. The number of qubits is the number of qubits required for the computational basis states that are transposed. ## 5 Discussion In this paper we have shown that on average \(n\)-qubit computational basis state transpositions have a gate complexity \(\Omega(n/\log(nd))\) when using any \(d\)-element gate-set, and even if ancillas are available. Since a general permutation can be expressed as a product of at most \(2^{n-1}\) transpositions then this lower bound is consistent with the \(\Omega(n2^{n}/\log(n))\) worst-case lower bound of Ref. [3] for an arbitrary permutation. We subsequently give an explicit construction to perform any computational basis state transposition with \(\Theta(n)\) gates and two ancillas. To our knowledge, this is the first time that this construction has been proposed, and conventional wisdom is to use the Gray code construction popularised in Nielsen and Chuang [8, Section 4.5.2] to perform any 2-level unitary4, which requires \(\Theta(n^{2})\) gates in the worst case. This therefore represents a potentially practically useful result for any compiler that constructs arbitrary permutations from transpositions. This claim of potential for practical utility is backed up by the numerical results presented, which show that for transpositions with large numbers of qubits and / or large Hamming distance, our methods outperform the standard alternatives. Footnote 4: In the case of a transposition the unitary is the Pauli-\(X\) matrix. It is also worth noting that the transposition construction presented in Theorem 3.4 is amenable to several further circuit optimisations during compilation. In particular, if we consider the compilation of a Toffoli gate into single-qubit gates and CNOTs, then the standard circuit is given by Fig. 1. However, as the Toffoli gate is equal to its inverse, we also have that the circuit reversed and with every gate replaced by its inverse also implements the Toffoli: Therefore it follows that, each time a pair of Toffoli gates appear as: (where the dotted line implies that other operations occur here) then we can use the second Toffoli decomposition for the second Toffoli, such that the decomposed circuit is: where we can readily see that the gates inside of the region enclosed by the dashed line cancel to the identity. So it follows that we have implemented the two Toffolis using a total of 8 CNOTs and 12 single-qubit gates - fewer than the 12 CNOTs and 20 single-qubit gates that are needed in general to compile two Toffolis.
We can further see, for example in (7), that such structures are commonplace in our construction, and hence there is the potential for significant CNOT and T gate count reductions during compilation. These savings can be readily observed in the numerical data presented in Figures 2 and 3, which demonstrate lower CNOT and T gate counts for transpositions between computational basis states of large Hamming distance when compared to Tweedledum and Pytket. ## Acknowledgements The authors would like to thank Silas Dilkes, Alexandre Krajenbrink, and Tuomas Laakonen for carefully reviewing and providing useful feedback on an earlier draft of this article. Special thanks to Tuomas for suggesting improvements to the circuit construction given in Theorem 3.4, and to Silas for providing various suggestions for Section 4.
2309.08878
Surface Extraction from Neural Unsigned Distance Fields
We propose a method, named DualMesh-UDF, to extract a surface from unsigned distance functions (UDFs), encoded by neural networks, or neural UDFs. Neural UDFs are becoming increasingly popular for surface representation because of their versatility in presenting surfaces with arbitrary topologies, as opposed to the signed distance function that is limited to representing a closed surface. However, the applications of neural UDFs are hindered by the notorious difficulty in extracting the target surfaces they represent. Recent methods for surface extraction from a neural UDF suffer from significant geometric errors or topological artifacts due to two main difficulties: (1) A UDF does not exhibit sign changes; and (2) A neural UDF typically has substantial approximation errors. DualMesh-UDF addresses these two difficulties. Specifically, given a neural UDF encoding a target surface $\bar{S}$ to be recovered, we first estimate the tangent planes of $\bar{S}$ at a set of sample points close to $\bar{S}$. Next, we organize these sample points into local clusters, and for each local cluster, solve a linear least squares problem to determine a final surface point. These surface points are then connected to create the output mesh surface, which approximates the target surface. The robust estimation of the tangent planes of the target surface and the subsequent minimization problem constitute our core strategy, which contributes to the favorable performance of DualMesh-UDF over other competing methods. To efficiently implement this strategy, we employ an adaptive Octree. Within this framework, we estimate the location of a surface point in each of the octree cells identified as containing part of the target surface. Extensive experiments show that our method outperforms existing methods in terms of surface reconstruction quality while maintaining comparable computational efficiency.
Congyi Zhang, Guying Lin, Lei Yang, Xin Li, Taku Komura, Scott Schaefer, John Keyser, Wenping Wang
2023-09-16T05:00:46Z
http://arxiv.org/abs/2309.08878v1
# Surface Extraction from Neural Unsigned Distance Fields ###### Abstract We propose a method, named DualMesh-UDF, to extract a surface from unsigned distance functions (UDFs), encoded by neural networks, or neural UDFs. Neural UDFs are becoming increasingly popular for surface representation because of their versatility in presenting surfaces with arbitrary topologies, as opposed to the signed distance function that is limited to representing a closed surface. However, the applications of neural UDFs are hindered by the notorious difficulty in extracting the target surfaces they represent. Recent methods for surface extraction from a neural UDF suffer from significant geometric errors or topological artifacts due to two main difficulties: (1) A UDF does not exhibit sign changes; and (2) A neural UDF typically has substantial approximation errors. DualMesh-UDF addresses these two difficulties. Specifically, given a neural UDF encoding a target surface \(\tilde{S}\) to be recovered, we first estimate the tangent planes of \(\tilde{S}\) at a set of sample points close to \(\tilde{S}\). Next, we organize these sample points into local clusters, and for each local cluster, solve a linear least squares problem to determine a final surface point. These surface points are then connected to create the output mesh surface, which approximates the target surface. The robust estimation of the tangent planes of the target surface and the subsequent minimization problem constitute our core strategy, which contributes to the favorable performance of DualMesh-UDF over other competing methods. To efficiently implement this strategy, we employ an adaptive Octree. Within this framework, we estimate the location of a surface point in each of the octree cells identified as containing part of the target surface. Extensive experiments show that our method outperforms existing methods in terms of surface reconstruction quality while maintaining comparable computational efficiency. ## 1 Introduction Implicit surfaces are widely used for surface representation in computer vision and computer graphics. An implicit surface is usually defined as a level set of a function, such as the zero-level set of a signed distance function (SDF). Extracting a mesh representation of an implicit surface from its defining equation is therefore a critical task for surface visualization and processing. Recent advances in machine learning have given rise to a new kind of implicit surface, called a _neural implicit surface_. A neural implicit surface is a level-set of a function encoded by an MLP (multilayer perceptron) and has the advantage of compactness and inherent smoothness thanks to its MLP representation. SDFs or occupancy fields are widely used in these implicit representations [17, 15, 16, 4, 18, 20, 6, 1, 8, 14, 19]. However, neural implicit surfaces based on the SDF or occupancy fields require inside/outside labeling and thus can only represent orientable and closed surfaces. Hence, as an extension, unsigned distance functions (UDFs) have been used to represent surfaces of arbitrary topologies, including open surfaces with boundaries or non-orientable surfaces (_e.g_. the Mobius strip). Despite its versatility, applications of a UDF-based surface representation are severely hindered by the difficulty in extracting the target surface it represents, as shown in [5] and [7]. 
**Problem formulation:** Suppose that a surface \(\bar{\mathcal{S}}\), called the _target surface_, is defined as the zero-level set of its unsigned distance function (UDF) \(\bar{F}(\mathbf{p})\). Then suppose that this UDF \(\bar{F}(\mathbf{p})\) is approximated by a neural network with the resulting network-encoded UDF being referred to as the _neural UDF_, denoted by \(F(\mathbf{p})\). Given a neural UDF \(F(\mathbf{p})\), the surface extraction problem is to robustly extract a surface \(\mathcal{S}\) from \(F(\mathbf{p})\) such that \(\mathcal{S}\) well approximates the target surface \(\bar{\mathcal{S}}\). **Challenges:** The difficulty in surface extraction from a neural UDF arises from two aspects: (1) A UDF does not have zero-crossings (or sign changes) across the surface it represents. As a result, traditional mesh extraction methods that rely on zero-crossings (_e.g_. Marching Cubes [13, 11], Dual Contouring [10], and their variants) are not applicable to UDFs. (2) The MLP representation of a neural UDF tends to have significant approximation errors around the target surface (see detailed error characteristics of neural UDFs in Sec. 3). This makes it even more challenging to extract a high-quality approximation of the target surface. Several methods, MeshUDF [7], CAP-UDF [21] and Neural Dual Contouring (NDC) [3], have recently been proposed for extracting a mesh surface from a UDF. MeshUDF and CAP-UDF attempt to infer the gradients of a UDF on the grids and determine the sign changes of the estimated gradients, invoking the Marching Cubes method for mesh extraction. When applied to a neural UDF, the sign-change inference step of these methods suffers from instability due to the non-negligible error introduced by the approximate MLP representation near the surface, where the gradients of the ideal UDF are undefined. As a result, the extracted meshes are less accurate and often have topological errors (_e.g_. holes). The NDC method proposes a data-driven Dual Contouring approach to predict the position of mesh vertices and dual faces directly from the UDF data. When applied to a neural UDF, this method often produces meshes with considerable artifacts such as holes, zig-zags, etc. We develop a new strategy, consisting of novel sampling and efficient optimization techniques, to address the difficulties in surface extraction from the neural UDF. Suppose the input is a neural UDF \(F(\mathbf{p})\) encoding the target surface \(\bar{\mathcal{S}}\) to be recovered. Our strategy has two key steps: (1) _computing approximate tangent planes of the target surface_; and (2) _local minimization for generating final surface points_. In Step (1), we first generate sample points \(\mathbf{p}_{i}\) around, but not too close to, the target surface \(\bar{\mathcal{S}}\), because the UDF values and gradients at locations too close to \(\bar{\mathcal{S}}\) are relatively unreliable. Thus, \(\mathbf{p}_{i}\) are called _off-surface sample points_. For each \(\mathbf{p}_{i}\), we use the UDF value \(F(\mathbf{p}_{i})\) and its gradient \(\nabla F(\mathbf{p}_{i})\) to project \(\mathbf{p}_{i}\) towards the target surface to obtain the point \(\mathbf{q}_{i}=\mathbf{p}_{i}-F(\mathbf{p}_{i})\,\mathbf{n}(\mathbf{p}_{i})\), where \(\mathbf{n}(\mathbf{p}_{i})=\nabla F(\mathbf{p}_{i})/\|\nabla F(\mathbf{p}_{i})\|\) [5]. These points \(\mathbf{q}_{i}\) are called _projection points_.
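To make Step (1) concrete, the sketch below (our illustration, not part of the paper's released code) shows how off-surface sample points can be projected toward the target surface when the neural UDF is a PyTorch module; the module name `udf`, the helper names, and the tensor shapes are illustrative assumptions.

```python
import torch

def udf_value_and_normal(udf, p):
    """Query a neural UDF for distances d = F(p) and unit gradient directions n(p) at points p of shape (N, 3)."""
    p = p.detach().clone().requires_grad_(True)
    d = udf(p).reshape(-1)                                  # assumed: one unsigned distance per point
    g = torch.autograd.grad(d.sum(), p)[0]                  # dF/dp via autograd
    n = g / g.norm(dim=-1, keepdim=True).clamp_min(1e-12)   # ||grad F|| is ~1 for a UDF, but normalize anyway
    return d.detach(), n.detach()

def project_to_surface(udf, p):
    """Step (1): q_i = p_i - F(p_i) * n(p_i).  The tangent-plane normal is taken at the
    off-surface point p_i, which the text argues is more reliable than the gradient at q_i."""
    d, n = udf_value_and_normal(udf, p)
    q = p - d[:, None] * n
    return q, n, d
```

Each pair \((\mathbf{q}_{i},\mathbf{n}(\mathbf{p}_{i}))\) then defines one estimated tangent plane used in Step (2); in DualMesh-UDF the points \(\mathbf{p}_{i}\) come from a fixed stencil inside each non-empty octree cell (Sec. 4.2).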
Although the points \(\mathbf{q}_{i}\) are very close to the target surface \(\bar{\mathcal{S}}\), as we will show later, the noisy error in the neural UDF makes these points a poor approximation to the target surface. To further improve surface accuracy, for each projection point \(\mathbf{q}_{i}\) we generate an estimated tangent plane \(T_{i}\) of \(\bar{\mathcal{S}}\) such that \(T_{i}\) passes through \(\mathbf{q}_{i}\) and has the unit normal vector \(\mathbf{n}_{i}\). Note that the normal vector \(\mathbf{n}_{i}\) of \(T_{i}\) is set to be \(\mathbf{n}(\mathbf{p}_{i})\) rather than \(\mathbf{n}(\mathbf{q}_{i})\) since the former is a more reliable estimation. This is because the initial sample point \(\mathbf{p}_{i}\) is not too close to \(\bar{\mathcal{S}}\), so the gradient \(\nabla F(\mathbf{p}_{i})\) is less contaminated by the pronounced errors of the neural UDF close to \(\bar{\mathcal{S}}\). In Step (2), the estimated tangent planes are organized into clusters, which may overlap. For each cluster of tangent planes \(T_{i}\), we solve a linear least squares problem to produce a final surface point \(\mathbf{s}_{i}\) that minimizes the sum of its squared distances to the tangent planes \(T_{i}\). This minimization step based on tangent planes not only provides an accurate surface point but also allows us to faithfully reconstruct the sharp edges of the target surface. Finally, all the surface points \(\mathbf{s}_{i}\) from all the clusters are connected to form the output mesh surface to approximate the target surface \(\bar{\mathcal{S}}\). To efficiently implement the above strategy, our _DualMesh-UDF_ method adopts an adaptive Octree structure to partition the space containing the target surface to regular cells. We developed efficient procedures to determine those cells that contain part of the target surface and perform the sampling and minimization procedures in each occupied cell. To connect the surface points to create the output mesh, we follow the Dual Contouring approach, connecting surface points residing in adjacent grid cells to create polygons dual to octree edges. Extensive experiments demonstrate that our DualMesh-UDF significantly outperforms existing methods in terms of surface reconstruction accuracy and sharp feature preservation. The main contribution of this work is a new algorithm to robustly and accurately extract a surface from a neural UDF. To overcome the inevitable approximation errors near the target surface and cut locus, we obtain robust estimation of surface tangent planes by leveraging off-surface sample points, use least square minimization to better predict the surface points, and achieve high-quality surface extraction results with sharp features better preserved compared to the state of the art. The code is available at [https://github.com/cong-yi/DualMesh-UDF](https://github.com/cong-yi/DualMesh-UDF). ## 2 Related Work The Marching Cubes method [13] and its variants have been established as the _de facto_ standard of converting distance fields to boundary mesh representations. Using the gradient information of the distance function, Extended Marching Cubes [11] and Dual Contouring [10] are both capable of producing meshes with faithfully preserved sharp features (_e.g_., corners and edges). 
However, all of these methods require inside/outside labeling on the sampling grid, which is either a regular grid of cubes or an adaptive octree grid of cells, to determine whether any zero-level set surface of the signed distance field crosses a particular grid cell. This requirement for sign changes limits the application of these methods to only SDFs or its variants that have sign changes across the underlying surface. Hence, they cannot be directly used to extract a mesh surface from a UDF, which is now often used to represent arbitrary surfaces such as open surfaces or non-orientable surfaces [5]. Our method is similar to [10] only in the sense that we follow [10] to solve a quadratic error function (QEF) to estimate a surface point per grid cell. But unlike [10], where sign changes are available and the gradients at the zero-crossings are reliable, unsigned distance fields do not have sign changes and are non-differentiable at the zero level set. Additionally, we will show that the information around the zero-level set of a neural UDF is unreliable. Due to these two reasons, we alternatively make use of the spatial sample points that are _off_ the target surface in the neural UDF to find reliably estimated tangent planes and formulate a QEF for estimating the final surface points. Three relevant methods, MeshUDF [7], CAP-UDF [21] and Neural Dual Contouring (NDC) [3], have recently been proposed to extract meshes from UDFs. MeshUDF [7] and CAP-UDF [21] use the gradient information of the UDF to assign different signs (\(+\) or \(-\)) to grid points on different sides of the underlying surface, thus invoking Marching Cubes to extract the surface according to the sign labels. However, the quality of the estimated sign labels can be significantly affected by the accuracy of the neural UDF. As we will show later, the neural UDF becomes less accurate around the target surface it represents, contaminating the inferred sign labels and explaining the poor performance of these two methods. Their use of Marching Cubes is incapable of preserving sharp corners or edges, while the use of a regular grid incurs high memory overhead when a high grid resolution is needed to resolve shape details. In contrast, our DualMesh-UDF method preserves sharp features and uses an adaptive octree grid to reduce computational expense even with a high grid resolution. Furthermore, estimating the surface point location is done by solving a least square problem (QEF) which is more robust than the gradient-based sign-labeling strategy used in MeshUDF in terms of the topology of the resulting meshes. The NDC method [3] uses a data-driven approach to train a neural network that predicts vertex position per regular grid cell and the overall dual faces directly from a UDF. However, as a data-driven method, NDC's performance is largely influenced by how accurately the network is trained. We show in the extensive experiments that our explicit geometry-based method consistently outperforms NDC in terms of reconstruction accuracy, preserving sharp features, and preserving smooth boundaries in the original shapes. ## 3 Error Characteristics of Neural UDF We first analyze the characteristics of the errors introduced by the MLP representation used to approximate a UDF, which explains why extracting a mesh surface from a neural UDF is difficult. This analysis also provides a foundation for justifying design choices in our method to overcome these challenging characteristics and achieve robust surface extraction. 
We begin with an ideal UDF \(\bar{F}(\mathbf{x})\) that represents a target surface \(\bar{\mathcal{S}}\) defined by the zero-level set of \(\bar{F}(\mathbf{x})\), that is, \(\bar{\mathcal{S}}=\{\mathbf{x}|\bar{F}(\mathbf{x})=0\}\subset\mathbb{R}^{3}\). Note that the UDF \(\bar{F}(\mathbf{x})\) is non-negative, so it does not have sign changes across the surface \(\bar{\mathcal{S}}\). Furthermore, \(\bar{F}(\mathbf{x})\) is not differentiable at the surface \(\bar{\mathcal{S}}\) or the surface's cut locus\({}^{1}\) [9]. Footnote 1: The cut locus of a surface is a set of points such that each point of the set has two or more distinct closest points on the surface. Now suppose that the ideal UDF \(\bar{F}(\mathbf{x})\) is approximated by an MLP, denoted by \(F(\mathbf{x})\). The approximation errors of \(F(\mathbf{x})\) to \(\bar{F}(\mathbf{x})\) are significant in the narrow region around the surface and a narrow region around the cut locus of \(\bar{\mathcal{S}}\), because the neural UDF \(F(\mathbf{x})\) is inherently smooth and thus poorly approximates the ideal \(\bar{F}(\mathbf{x})\), which is non-smooth at the surface \(\bar{\mathcal{S}}\) and its cut locus. Due to these errors, in general, we have \(F(\mathbf{x})>0\) for any \(\mathbf{x}\in\bar{\mathcal{S}}\) when the MLP is differentiable. For ease of visualization, we will use a 2D example (see Fig. 1) to illustrate the characteristic behaviors of the approximation errors of \(F(\mathbf{x})\) for an ideal UDF whose zero-level set defines an open curve (red in Fig. 1(a)). The characteristics of the approximation errors of \(F(\mathbf{x})\) in 3D space are similar. Detailed visualization and analysis of the error behaviors in 3D space are provided in the supplementary materials. Figure 1: A 2D illustrative example. We observe that the neural UDF tends to have larger errors near the zero-level set \(\bar{\mathcal{S}}\) and its cut locus. (a) The target shape (the red solid curve), its cut locus (green dashed), and the induced GT UDF; (b) Approximation errors of the neural UDF to the GT UDF; (c) Gradient direction errors between the neural UDF and the GT UDF. For the local region outlined with the white box around the corner, we show the close-up views of the distance error and the gradient direction error of the region in (e) and (f), respectively. (d) The GT UDF and its approximation by a neural UDF showing that the neural UDF is positive and smooth at the zero-level set of the ideal UDF, but the ideal UDF is non-differentiable. Note that the cut locus (Fig. 1(a)) of \(\bar{\mathcal{S}}\) touches the sharp features of \(\bar{\mathcal{S}}\). Therefore the neural UDF \(F(\mathbf{x})\) has even more significant errors in terms of both the distance value and the gradient direction around the sharp features (see the error maps of the distance values and the gradient directions in Fig. 1(e,f), respectively). Consequently, in 3D space, the resulting inaccurate distance values and unreliable gradient vectors of the neural UDF \(F(\mathbf{x})\) make it hard to faithfully reconstruct the target surface, especially to preserve the sharp edges in the extracted surface.
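These error characteristics can be reproduced in a simple 1D analogue (our illustration, not from the paper): replacing the ideal UDF \(|x|\) of a "surface" at \(x=0\) by a smooth surrogate exhibits a strictly positive minimum at the surface and gradient magnitudes that collapse near it, which is exactly why values and gradients queried too close to the surface are treated as unreliable. The surrogate \(\sqrt{x^{2}+\epsilon^{2}}\) below is only an assumed stand-in for MLP smoothing, not the paper's network.

```python
import numpy as np

eps = 1e-2
x = np.linspace(-0.2, 0.2, 2001)
udf_ideal = np.abs(x)                         # ideal 1D UDF of a "surface" at x = 0
udf_smooth = np.sqrt(x**2 + eps**2)           # smooth surrogate (assumption) mimicking a neural UDF
grad_smooth = x / np.sqrt(x**2 + eps**2)      # its gradient; |grad| -> 0 as x -> 0

print("min of smooth UDF at the surface:", udf_smooth.min())   # > 0, unlike the ideal UDF
print("|grad| at x = 0.5*eps:", abs(grad_smooth[np.argmin(np.abs(x - 0.5 * eps))]))
print("|grad| at x = 10*eps:", abs(grad_smooth[np.argmin(np.abs(x - 10 * eps))]))
```

Farther from the surface the surrogate's gradient magnitude returns to \(\approx 1\), mirroring the motivation for relying on off-surface sample points.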
To recap, given a neural UDF \(F(\mathbf{x})\), there are mainly two reasons for the difficulty in extracting a surface \(\bar{\mathcal{S}}_{M}\) to approximate the target surface \(\bar{\mathcal{S}}\) defined by the ideal UDF \(\bar{F}(\mathbf{x})\): (1) the given neural UDF \(F(\mathbf{x})\) is usually a poor approximation of the ideal UDF \(\bar{F}(\mathbf{x})\) around the target surface \(\bar{\mathcal{S}}\) and its cut locus, where \(\bar{F}(\mathbf{x})\) is non-differentiable; and (2) the neural UDF \(F(\mathbf{x})\) does not, in general, have a zero-crossing around the target surface \(\bar{\mathcal{S}}\), thus no well-defined surface is associated with it. These issues, faced by current deep neural networks, make it hard to estimate the location of the surface in a numerically stable manner and thus motivate us to develop the two filtering criteria described in Sec. 4.3. ## 4 Method Our method consists of three major designs: (1) a quadratic error function (QEF) that solves for a surface point within a cell for mesh extraction (Sec. 4.1); (2) an adaptive octree data structure that produces high-resolution grid cells while reducing the number of QEF solves required (Sec. 4.2); and (3) a point filtering strategy (Sec. 4.3) for a neural UDF, whose distance values and gradient directions are considerably more noisy compared to the GT as discussed in Sec. 3. The proposed DualMesh-UDF pipeline has three main steps. Firstly, it employs an adaptive tree to partition the domain of the UDF containing the target surface \(\bar{\mathcal{S}}\). Then, it detects non-empty cells (defined in Sec. 4.2) based on a two-step cell-shape intersection detection method, and solves one surface point per non-empty leaf cell in a least squares manner. For a neural network, two distance bounds are introduced to filter out sample points that may introduce noisy information and contaminate the localization of the target surface. Lastly, since there are no sign changes in the UDFs, we construct the mesh faces by checking each edge shared by four incident non-empty cells in the octree. ### QEF to locate surface points Given a cell that contains part of the target surface, we present a procedure to estimate a surface point in this non-empty cell, as illustrated in Fig. 2 for 2D demonstrations. For a neural UDF, we aim to compute a reconstructed point per non-empty cell. For a set of sample points \(\{\mathbf{p}_{i}\}_{i=1}^{m}\) in the given cell, we query the neural UDF \(F(\mathbf{x})\) to get the approximate distance value and the normalized gradient vector1 at \(\mathbf{p}_{i}\), _i.e._, \(d_{i}=F(\mathbf{p}_{i})\) and \(\mathbf{n}_{i}=\nabla F(\mathbf{p}_{i})/\|\nabla F(\mathbf{p}_{i})\|\), respectively (see Fig. 1(a)). Footnote 1: While \(\|\nabla F\|\) should be 1 for a distance function, note that since \(\nabla F\) is drawn from a neural representation, it may differ slightly, and thus need normalization. Each of the points contributes to an estimated tangent plane of the target surface \(\bar{\mathcal{S}}\): \(\mathbf{n}_{i}\cdot(\mathbf{p}_{i}-\mathbf{x})-d_{i}=0\). 
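In matrix form, each tangent-plane constraint contributes a row \(\mathbf{n}_{i}^{\top}\) to a matrix \(A\) and an entry \(\mathbf{n}_{i}\cdot\mathbf{p}_{i}-d_{i}\) to a vector \(b\). A minimal sketch of assembling and solving this system (the least-squares problem formalized in Eqn. 1 below) is given here; NumPy and the helper name `solve_cell_qef` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def solve_cell_qef(p, n, d):
    """Solve min_x sum_i (n_i . (p_i - x) - d_i)^2 for one cell.
    p: (m, 3) sample points, n: (m, 3) unit gradients, d: (m,) UDF values."""
    A = n                                   # row i is n_i
    b = (n * p).sum(axis=1) - d             # b_i = n_i . p_i - d_i
    x, _, _, svals = np.linalg.lstsq(A, b, rcond=None)
    return x, svals                         # singular values are reused for the degenerate-case analysis

# Tiny check: three orthogonal tangent planes through the point (0.1, 0.2, 0.3) with zero distances.
n = np.eye(3)
p = np.array([[0.1, 0.0, 0.0], [0.0, 0.2, 0.0], [0.0, 0.0, 0.3]])
d = np.zeros(3)
print(solve_cell_qef(p, n, d)[0])           # -> approximately [0.1, 0.2, 0.3]
```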
Considering the inaccurate nature of the neural UDF, we define the estimated surface point to be the point that is the closest to all these tangent planes and compute it by minimizing a _quadratic error function_ (a least squares problem): \[\mathbf{v}=\operatorname*{arg\,min}_{\mathbf{x}}\sum_{i=1}^{m}\left(\mathbf{n}_{i}\cdot(\mathbf{p}_{i}-\mathbf{x})-d_{i}\right)^{2}, \tag{1}\] where \(m\) is the number of points \(\{\mathbf{p}_{i}\}\) and \(\mathbf{x}\) is a surface point to be solved. In our implementation, we solve the linear least squares problem in Eqn. 1 of the form \(\min\|Ax-b\|^{2}\). For a non-empty cell, this procedure can yield a reconstructed point \(\mathbf{v}\), which the target surface \(\bar{\mathcal{S}}\) approximately crosses. However, when \(A\) is nearly singular, the solution to Eqn. 1 can be located near the boundary of the cell or even outside of it, which would compromise the quality of extracted mesh surfaces. Note that while [10] proposed an approach with an additional regularization point to stabilize such cases, their approach relies on the intersection points at the zero-level set of an SDF with the cell edges. However, in our setting, the zero-level set of the UDF is elusive or unavailable, therefore we do not have such intersections. Figure 2: The QEF formulation for the unsigned distance field. The orange lines represent the target surface to be reconstructed. The gray lines in (a) stand for the estimated tangent planes contributed by the corresponding sample points. The red point is the solution to the QEF problem. We propose an alternative approach based on singular value analysis of the matrix \(A\). We denote the three singular values of the matrix \(A\) as \(\sigma_{0}\geq\sigma_{1}\geq\sigma_{2}\geq 0\) and consider three cases: 1) In the non-singular case, where all three singular values are much larger than \(0\), \(\mathbf{v}\) corresponds to a _sharp feature point_ that can be solved directly from Eqn. 1. 2) If only \(\sigma_{2}\approx 0\), the solution space of \(\mathbf{v}\) corresponds to a linear edge within the cell. This line has the same direction as the singular vector corresponding to \(\sigma_{2}\) and passes through the solution of Eqn. 1. We then compute the intersections between this line and all 6 faces of the cell, yielding two intersecting points. We set \(\mathbf{v}\) as the midpoint of these two points. 3) In the case where both \(\sigma_{1,2}\approx 0\), the part of the target surface enclosed in this cell is approximately planar. Similar to the edge case, we can formulate a plane function from the corresponding singular vectors. By computing the intersecting points between all 12 edges of the cell and this plane, we obtain multiple intersecting points. We use the centroid of these points as \(\mathbf{v}\). This way, in the degenerate cases (2 and 3), the point \(\mathbf{v}\) is still a good approximate solution to Eqn. 1 and well positioned inside the cell. In a neural UDF (_e.g._, Fig. 2a), the UDF values \(\{d_{i}\}\) and the gradient directions \(\{\mathbf{n}_{i}\}\) are approximated and often unreliable. Consequently, the tangent plane computed from each sample point \(\mathbf{p}_{i}\) may not exactly align with the target surface. Solving Eqn. 1 yields a least squares solution that is numerically robust to the approximation errors in the neural UDF and can lead to an accurate estimation of the surface point in this non-empty cell. For an ideal UDF case, our method is also applicable to yield an accurate result as shown in Fig.
2b. Furthermore, to enhance the robustness of our method, we also design a point filtering strategy to remove unreliable sample points, especially those near the target surface and its cut locus, where larger approximation errors exist in the MLP-encoded UDF. The details of this filtering strategy are explained in Sec. 4.3. **Differences to DC [10].** Note that the dual contouring (DC) method [10] designs a QEF using intersection points between the surface and cell edges, along with the gradient directions at these intersection points. This approach is not applicable to our objective of extracting surface meshes from a neural UDF. This is because (1) it is difficult to find a reliable intersection point, as an exact zero-level set that crosses an edge may not exist, and (2) the gradient directions in the region with lower UDF values (_i.e._, closer to the target surface) are highly unstable and thus unreliable for estimating the tangent plane. Hence, our method leverages _off-surface_ points that are sufficiently far from the target surface, as indicated by their UDF values (see Sec. 4.3). Our approach thus has less stringent requirements than the DC method (as presented in [10]), yet produces satisfactory results even on noisy neural UDFs as demonstrated by our results. ### Octree Design and Subdivision To efficiently process high-resolution data and speed up the mesh extraction process, we employ an octree data structure. As shown in Fig. 3, we recursively subdivide the cells unless they are categorized as _empty_ according to the following criterion or they reach the maximal depth. After the subdivision, all leaf nodes that are non-empty will invoke the QEF procedure to solve for the dual points. **Checking obviously empty cells.** Firstly, we design a checking condition to help quickly prune obviously empty cells. Given an octree cell \(C\), let \(\mathbf{c}_{0}\) denote its center and \(d_{0}\) denote the UDF value at \(\mathbf{c}_{0}\), that is, \(d_{0}=F(\mathbf{c}_{0})\). Then we have the following sufficient condition for cell \(C\) to be empty: \[d_{0}>\mathrm{Diag}(C)/2, \tag{2}\] where \(\mathrm{Diag}(C)\) is the diagonal length of cell \(C\). Since the UDF distance indicates the distance between a spatial point and the target surface, this criterion determines if the target surface lies outside the sphere centered at \(\mathbf{c}_{0}\) with radius \(\mathrm{Diag}(C)/2\). Given that this sphere contains the entire octree cell, the cell will not contain any portion of the surface if Eqn. 2 is satisfied. Otherwise, the cell will first be categorized as an unsolved cell. Since neural UDF is a reasonable approximation of the ideal UDF, we tailor Eqn. 2 to \(d_{0}>\mathrm{Diag}(C)/2+\epsilon\), where \(\epsilon>0\) is a tolerance for the approximation error and set to \(\epsilon=2\times 10^{-3}\) throughout all experiments. Empirically, we observed this pruning strategy did not affect the quality of the results. **Maximum octree depth.** Having a predefined maximal depth of the octree is critical, not only in terms of limiting the amount of computation but also because of the inaccuracy of the neural UDF near the target surface. Specifically, if the octree keeps subdividing the space that contains the target surface, after a certain depth the cells will become too small and the sample points \(\mathbf{p}\) within these cells will lie in the unreliable region of the neural UDF, which will eventually lead to unstable estimation. Figure 3: The pipeline of our mesh extraction method. 
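Putting Secs. 4.1 and 4.2 together, the following sketch (an illustrative simplification, not the authors' implementation) processes one candidate octree cell: it applies the emptiness test of Eqn. 2 with the tolerance \(\epsilon\), queries the UDF on a small sample stencil, filters unreliable samples, and solves the QEF. Instead of the paper's explicit edge/face intersection construction for the degenerate cases, it falls back to a truncated-SVD solution anchored at the cell center; the `query_udf` callable (returning distances and unit gradients) and the threshold `sigma_tol` are assumptions, while the \(\delta_{1}=\delta_{2}=2\times 10^{-3}\) filtering bounds follow Sec. 5.1.

```python
import numpy as np

def process_cell(query_udf, center, half_size, eps=2e-3, delta=2e-3, sigma_tol=1e-1):
    """Return an estimated surface point for one octree cell, or None if the cell is judged empty.
    query_udf(points) -> (d, n): unsigned distances (m,) and unit gradient directions (m, 3)."""
    diag = 2.0 * half_size * np.sqrt(3.0)
    d0, _ = query_udf(center[None, :])
    if d0[0] > diag / 2.0 + eps:                       # Eqn. 2 with tolerance: cell cannot contain the surface
        return None

    # 27-point stencil: corners, edge/face midpoints and the centroid (the layout used in Sec. 5.1).
    offs = np.array([(i, j, k) for i in (-1, 0, 1) for j in (-1, 0, 1) for k in (-1, 0, 1)], float)
    p = center + half_size * offs
    d, n = query_udf(p)

    q = p - d[:, None] * n                             # projection points
    dq, _ = query_udf(q)
    keep = (d >= delta) & (dq <= delta)                # the two filtering criteria of Sec. 4.3
    if keep.sum() < 3:                                 # too few reliable constraints (simplification)
        return None
    p, n, d = p[keep], n[keep], d[keep]

    A = n
    b = (n * p).sum(axis=1) - d
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Truncated pseudo-inverse relative to the cell center: near-null directions stay at the center,
    # keeping the dual vertex inside the cell (a simplified stand-in for cases 2) and 3) of Sec. 4.1).
    s_inv = np.where(s > sigma_tol * s[0], 1.0 / s, 0.0)
    v = center + Vt.T @ (s_inv * (U.T @ (b - A @ center)))
    return v
```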
### Point filtering for neural UDFs Since our method makes use of the gradient directions and the distance values to estimate a surface point in each non-empty grid cell (see Eqn. 1), the quality of the extracted mesh heavily depends on the approximation accuracy of the neural UDF \(F(\mathbf{x})\). Consider a neighborhood of a sharp feature of the target surface, as shown in the zoom-in views of Fig. 1(e,f). The distance errors and gradient direction errors are more significant around the target surface and its cut locus. Hence, motivated by our observation and analysis in Sec. 3, to avoid building the QEF using points from these regions, we use the following criteria to filter out unreliable points to enhance the robustness of the QEF solution to the noisy characteristics of the MLP-encoded UDFs. **Criterion 1: Removing sample points potentially near the surface**. A candidate point \(\mathbf{p}_{i}\) is considered too close to the target surface if \(F(\mathbf{p}_{i})<\delta_{1}\), where \(\delta_{1}\) is a preset filtering threshold; such points are discarded. This criterion ensures the sample points are from a region where the distance errors are expected to be relatively small. **Criterion 2: Removing sample points whose projections have large UDF values.** While the previous criterion rejects the sample points that are too close to the target surface and thus avoids the erroneous approximation at those regions, some sample points may still have larger UDF errors even far from the target surface (_e.g_. near the cut locus). We observe that these sample points will not be projected to regions near the target surface. Hence, if the UDF of the projected point \(\{\mathbf{q}_{i}\}\) is large, we consider it to be an incorrect estimate and thus reject the corresponding sample point \(\{\mathbf{p}_{i}\}\) as unreliable. To this end, we introduce the second filtering criterion as follows, \(F(\mathbf{q}_{i})>\delta_{2}\), where \(\delta_{2}\) is another preset filtering bound; again, the corresponding points \(\mathbf{p}_{i}\) are discarded. ### Creating Output Mesh Surfaces We consider the regular grid of cubic cells at the maximum depth of the octree. Then each edge of the grid is shared by four cubic cells surrounding the edge. To build the initial mesh connectivity, for each edge of the grid, we examine each of its four incident cells. Similar to [10], the mesh connection rule is designed as follows: if all four incident cells are non-empty, a quad-face candidate that connects the four dual points in these cells will be constructed. To ensure the correct connectivity, we validate and triangulate the quad faces. We further examine if the normals of the face candidates can consistently reflect their geometric property, being a sharp corner, part of a sharp edge, or part of a plane as classified by the SVD shape analysis (Sec. 4.1). For example, all correct triangulated faces incident to a surface point classified as part of a plane would have all their normals parallel to the singular vector corresponding to the largest singular value from SVD shape analysis, while the normals of triangulated faces incident to a surface point being part of a sharp edge should be orthogonal to its singular vector direction. This way, we reject all inconsistent face candidates. 
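As a rough sketch of the connectivity rule just described (the normal-consistency test and the octree bookkeeping are omitted), dual faces can be generated by walking over the edges of the finest grid and connecting the dual vertices of the four incident cells; the dictionary-based cell indexing used here is an illustrative assumption.

```python
import numpy as np

def build_dual_faces(dual_vertex):
    """dual_vertex: dict mapping an integer cell index (i, j, k) at the maximum octree depth
    to the index of its reconstructed surface point.  Returns quad faces as 4-tuples of point indices."""
    faces = []
    for axis in range(3):
        b, c = [ax for ax in range(3) if ax != axis]   # an edge along `axis` is shared by cells offset in b, c
        for (i, j, k) in dual_vertex:
            cell = np.array((i, j, k))
            quad = []
            for db, dc in ((0, 0), (-1, 0), (-1, -1), (0, -1)):   # cyclic order around the shared edge
                nb = cell.copy()
                nb[b] += db
                nb[c] += dc
                v = dual_vertex.get(tuple(nb))
                if v is None:                          # one of the four incident cells is empty: no face
                    break
                quad.append(v)
            else:
                faces.append(tuple(quad))
    return faces
```

The quads produced this way are only candidates; as described above, DualMesh-UDF subsequently triangulates them and rejects faces whose normals contradict the SVD-based classification of their incident surface points.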
We also provide a practical approach to make sure the output mesh is manifold when the desired target surface is manifold: With the help of the octree cells and the definition of reconstructed dual points, it is feasible to generate an auxiliary _blocky_ model by moving all mesh vertices to the centroids of the corresponding cells. By extracting the outer envelope of this auxiliary model, we obtain a manifold structure. Finally, we tessellate the surface points using the connectivity from that manifold structure to produce a manifold mesh. This approach is crucial for applications that require manifold meshes (_e.g_. parameterization, remeshing, or shape analysis). ## 5 Experiments ### Experimental details and metrics **Experimental setting.** We rescaled all the shapes to a bounding cube with a side length of 2, centered at the origin. To improve efficiency, we share sample points between cells, _i.e_. \(m=27\) points in each non-empty leaf cell, including 8 corner points, 12 edge midpoints, 6 face midpoints, and 1 centroid point. We set the filtering parameters \(\delta_{1}=2\times 10^{-3}\) and \(\delta_{2}=2\times 10^{-3}\) to filter the sampled points as described in Sec. 4.3. In our ablation study, we notice that more sample points may bring marginal performance improvement but incur computational overhead. Given that grid resolutions of \(128^{3}\) and \(256^{3}\) are commonly used in related works, we compare our method with these prior techniques using a maximal octree resolution of \(128^{3}\) and \(256^{3}\). We also discuss how the resolution will affect our results by varying the maximum depth of the octree to have a resolution from \(64^{3}\) to \(512^{3}\) in our supplementary material. **MLP architecture.** To overfit single shapes using individual MLPs, we employed the MLP implementation provided by [18]. The activation functions are Sine activations, except for the last one, which is a _SoftPlus_ (\(\beta=100\)) activation to ensure the output value is non-negative. All neural UDFs in the experiments were trained with the described MLP implementation. All MLP networks were trained for \(3k\) iterations with the ADAM optimizer to minimize the difference between the predicted and the GT unsigned distance fields. For more general tasks, we also test an MLP with latent codes that represent a shape space, following the network and training settings in [5]. We report the timing performance on a Linux desktop with an Intel Core™ i7-10870H CPU and an NVIDIA GeForce RTX 3090 graphics card. We elaborate on the loss function as well as other details regarding training the neural UDFs in the supplementary materials. Figure 4: Meshes extracted from neural UDFs. We show 6 examples and compare our results to the mesh extraction results in _MeshUDF_[7], _CAP-UDF_[21] and _NDC_[3]. Our method preserves sharp geometric features better, yields results without undesirable holes, and reconstructs the original smooth boundaries of open surfaces. **Metrics.** To evaluate the performance of our method and the other methods, we adopt the following metrics: the double-sided Chamfer distance (CD), the F-score based on CD, and the Hausdorff distance (HD). The Chamfer distance reflects the overall quality of the extracted surface mesh as compared to the GT shape. The F-score indicates the percentage of points that are reconstructed correctly under a threshold (set to \(0.001\) for shape-overfitting neural UDFs, and \(0.01\) for the shared UDF network with latent codes).
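The following sketch (our illustration; the paper does not publish its exact evaluation code, and conventions such as squared versus unsquared Chamfer terms vary between papers) computes the three metrics for point sets sampled from the extracted and ground-truth surfaces, using SciPy k-d trees.

```python
import numpy as np
from scipy.spatial import cKDTree

def mesh_metrics(pred_pts, gt_pts, f_tau=1e-3):
    """pred_pts, gt_pts: (N, 3) / (M, 3) points sampled from the extracted and GT surfaces.
    f_tau: F-score threshold, e.g. 0.001 for the shape-overfitting setting above."""
    d_pred_to_gt = cKDTree(gt_pts).query(pred_pts)[0]            # nearest-neighbor distances
    d_gt_to_pred = cKDTree(pred_pts).query(gt_pts)[0]

    chamfer = d_pred_to_gt.mean() + d_gt_to_pred.mean()          # double-sided CD (one common convention)
    hausdorff = max(d_pred_to_gt.max(), d_gt_to_pred.max())      # symmetric Hausdorff distance

    precision = (d_pred_to_gt < f_tau).mean()                    # fraction of predicted points near the GT
    recall = (d_gt_to_pred < f_tau).mean()                       # fraction of GT points near the prediction
    f_score = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return chamfer, f_score, hausdorff
```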
The Hausdorff distance can reveal if there is anything missing or redundant in the reconstructed geometry (a hole or a floating piece). ### Comparison with SOTA methods We compared our results with three existing methods: 1) **MeshUDF**[7], 2) the _standalone mesh extraction module_ presented in **CAP-UDF**[21], and 3) the UNDC presented in **NDC**[3]. The first two methods use gradients of the UDF to estimate sign changes in the field to invoke the Marching Cubes method to extract a surface mesh from the UDF. NDC proposes a data-driven approach to extract a surface mesh from the grid-based representation of an implicit field. **Shape-overfitting neural UDFs** We first compare our results to those obtained by these three methods on a shape collection consisting of shapes from four public sources: 1) 100 from the Thingi10K Dataset [22] containing 3D printing models; 2) 134 from the MGN Dataset [2] containing garments with open boundaries; 3) 100 from the ABC Dataset [12] containing CAD models; and 4) 20 commonly used shapes in geometric processing research. In addition, we also compare different methods on a Mobius strip, which is a non-orientable surface. We fit each shape with an independent neural UDF and apply mesh extraction to each of these neural UDFs. Table 1 reports the performance of different methods on this shape collection. Our method outperforms the other three competing methods in terms of all three quantitative metrics. While all of these methods use uniform grids, our octree structure results in increased computing efficiency. MeshUDF and CAP-UDF would likely be accelerated by adopting an octree structure, however, it would be non-trivial to adopt an octree structure for NDC. For a fair comparison, we also test our method without using the octree acceleration approach for the resolution of \(128^{3}\); by doing so, the T1 time increases to 2.70s, and the T2 time increases to 5.54s. To qualitatively compare results obtained by different methods, we show the mesh surfaces extracted from neural UDFs for several shapes in Fig. 4. We can see that our results are higher quality than those produced by MeshUDF and CAP-UDF. These two methods cannot preserve sharp geometric features due to the use of the Marching Cubes method. Compared to NDC - a data-driven method - our method also produces consistently better results both quantitatively and qualitatively. Specifically, some mesh surfaces extracted by NDC are less smooth than ours. One example (the Pants) is shown in the fourth row in Fig. 4. The staircase artifact observed may be attributed to NDC having been trained on the ABC dataset [12] containing only mechanical components. 
Our method is the only method that recovers open boundaries faithfully, while the other compared methods \begin{table} \begin{tabular}{l|l||c c c||c c||c c||c c||c c} \hline \hline & & \multicolumn{3}{c||}{MGN} & \multicolumn{3}{c||}{Thingi10K} & \multicolumn{3}{c||}{ABC} & Running time \\ \cline{3-13} & & CD\(\downarrow\) & F-score\({}^{\dagger}\) & HD \(\downarrow\) & CD\(\downarrow\) & F-score\({}^{\dagger}\) & HD \(\downarrow\) & CD\(\downarrow\) & F-score\({}^{\dagger}\) & HD \(\downarrow\) & & T1 & T2 \\ \hline \hline \multirow{3}{*}{\(128^{3}\)} & Ours & **2.38** & **98.09** & **11.91** & **1.97** & **97.51** & **9.21** & **3.69** & **93.41** & **11.47** & **0.297** & **1.184** \\ & MeshUDF [7] & 4.76 & 90.06 & 22.39 & 4.57 & 90.78 & 15.33 & 8.49 & 87.74 & 23.11 & 0.313 & 1.316 \\ & CAP-UDF [21] & 13.53 & 87.10 & 60.04 & 19.55 & 84.85 & 57.49 & 27.75 & 77.43 & 53.46 & 6.466 & 4.764 \\ & NDC [3] & 3.32 & 95.61 & 12.04 & 2.74 & 96.44 & 9.69 & 3.95 & 92.65 & 14.04 & 2.358 & 1.324 \\ \hline \hline \multirow{3}{*}{\(256^{3}\)} & Ours & **2.03** & **98.96** & **7.35** & **1.60** & **98.29** & **6.78** & **2.22** & **96.57** & **8.20** & **1.492** & **5.508** \\ & MeshUDF [7] & 3.45 & 95.10 & 12.8 & 3.33 & 94.13 & 8.91 & 3.25 & 93.85 & 11.49 & 1.600 & 10.27 \\ & CAP-UDF [21] & 8.45 & 92.88 & 49.40 & 12.17 & 90.53 & 47.28 & 21.67 & 83.88 & 43.33 & 49.416 & 37.336 \\ & NDC [3] & 2.54 & 98.04 & 7.84 & 2.42 & 97.16 & 7.83 & 2.36 & 96.10 & 11.38 & 9.363 & 10.256 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison between the results obtained by our method and those by the competing methods, MeshUDF, CAP-UDF, and NDC. The average performance on the dataset containing 354 shapes is reported. The Chamfer distance (CD) and the Hausdorff Distance (HD) are scaled by \(10^{-4}\) and \(10^{-3}\), respectively. T1 and T2 stand for the time spent (seconds) on the mesh extraction and that on the UDF query, respectively. \begin{table} \begin{tabular}{l|c c c} \hline \hline & CD\(\downarrow\) & F-score\(\uparrow\) & HD \(\downarrow\) \\ \hline Ours & **3.41** & **82.30** & **3.92** \\ MeshUDF [7] & 4.10 & 79.35 & 4.67 \\ CAP-UDF [21] & 7.70 & 70.30 & 7.21 \\ NDC [3] & 7.16 & 78.38 & 12.38 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative comparison between our method and three competing methods, MeshUDF, CAP-UDF, and NDC on a pre-trained UDF network with latent codes, on the ShapeNet dataset. In this table, CD and HD are scaled by \(10^{-3}\) and \(10^{-2}\) respectively. The F-score is calculated with a threshold of \(0.006\). show staircase artifacts or even redundant pieces near the open boundaries; see the first two rows of Fig. 4. Also, unexpected holes can be observed on the mesh surface extracted by the competing methods from the neural UDFs. Empirically, our method produces quality results without redundant pieces or unexpected holes, generating clear-cut boundaries of the open surfaces as shown in the top four rows of Fig. 4, and reproducing the sharp features as shown in the bottom two rows of the same figure. More results and comparisons (also on the GT UDF) are presented in our Supplementary Materials, which further validate the superior performance of our method. **Shared, _pre-trained_ UDF network with latent codes.** We consider a single neural network with latent codes trained to represent the entire shape space, where each shape is associated with a unique latent code. 
Specifically, we test our method and the competing methods on the pre-trained UDF network provided by [5] representing 300 ShapeNet car models. Although this UDF network is less accurate than the other overfitting-based network settings considered earlier, our method still outperforms the other methods by a significant margin as reported in Tab. 2, showing the versatility and robustness of our method. ### Ablation study We conducted an ablation study on our method using sampling numbers (the number \(m\) in Eqn. 1) of 27 (\(3^{3}\)) and 125 (\(5^{3}\)) to justify our design choice to use 27 as our sampling number. In Tab. 3, we demonstrate that using 125 sample points per cell results in a marginal improvement but incurs a significantly higher computing cost. ### Limitations While DualMesh-UDF demonstrates the ability to accurately extract surfaces from neural UDFs, some limitations remain. First, the extracted surface cannot be adaptively subdivided with respect to sharp features or fine geometry details. To faithfully reconstruct the fine geometry details, we need to pre-define a sufficient depth of the octree and solve the QEF problem in each of its non-empty leaf nodes. Second, to share the UDF and gradient values at sampled points between adjacent octree cells, we adopt a regular grid sampling strategy. A more flexible and adaptive sampling strategy may bring further improvement to our method. ## 6 Conclusion We have presented a method for extracting high-quality surface meshes from unsigned distance fields. In order to attain robust performance on MLP-encoded neural UDFs, we discuss the characteristics of the approximation errors of the neural UDF and develop an adaptive octree-based method to effectively localize the target surface embedded in the given UDF. Extensive experiments show that our method outperforms the SOTA methods and produces high-quality results with sharp geometry features, with clear open boundaries, and free of undesirable holes. ## 7 Acknowledgement This research is partly supported by the Innovation and Technology Commission of the HKSAR Government through the InnoHK initiative. The authors would like to thank Yanhong Lin and Ruixing Jia for their generous help on the project.
2309.06855
Discovery of a molecular cloud possibly associated with the youngest Galactic SNR G1.9+0.3
The youngest known Galactic supernova remnant (SNR) G1.9+0.3 has high-velocity supernova shock beyond 10000 km s-1, and it is considered to be one of the major candidates of a PeVatron. Despite these outstanding properties, the surrounding interstellar matter of this object is poorly understood. We investigated the interstellar gas toward G1.9+0.3 using the 12CO(J=3-2) data with the angular resolution of 15" obtained by the CHIMPS2 survey by the James Clerk Maxwell Telescope, and discovered three individual clouds at -1, 7, and 45 km s-1. From its morphological and velocity structures, the -1 km s-1 cloud, having the largest velocity width >20 km s-1 and located at the distance of the Galactic Center, is possibly associated with the SNR. The associated cloud shows a cavity structure both in space and velocity and coincides well with the SNR. We found that the associated cloud has higher column densities toward three bright, radio synchrotron-emitted rims where the radial expansion velocity of the supernova shock is decelerated, and the cloud is faint in the other parts of the SNR. This is the first direct evidence indicating that the highly anisotropic expansion of G1.9+0.3 observed by previous studies results from the deceleration by the interaction between the supernova shock and surrounding dense interstellar medium.
Rei Enokiya, Hidetoshi Sano, Miroslav D. Filipovic, Rami Z. E. Alsaberi, Tsuyoshi Inoue And Tomoharu Oka
2023-09-13T10:03:08Z
http://arxiv.org/abs/2309.06855v1
# Discovery of a molecular cloud possibly associated with the youngest Galactic SNR G1.9\(+\)0.3 ###### Abstract The youngest known Galactic supernova remnant (SNR) G1.9\(+\)0.3 has high-velocity supernova shock beyond 10000 km s\({}^{-1}\), and it is considered to be one of the major candidates of a PeVatron. Despite these outstanding properties, the surrounding interstellar matter of this object is poorly understood. We investigated the interstellar gas toward G1.9\(+\)0.3 using the \({}^{12}\)CO(\(J\)=3-2) data with the angular resolution of 15\({}^{\prime\prime}\) obtained by the CHIMPS2 survey by the James Clerk Maxwell Telescope, and discovered three individual clouds at \(-\)1, 7, and 45 km s\({}^{-1}\). From its morphological and velocity structures, the \(-\)1 km s\({}^{-1}\) cloud, having the largest velocity width \(>\)20 km s\({}^{-1}\) and located at the distance of the Galactic Center, is possibly associated with the SNR. The associated cloud shows a cavity structure both in space and velocity and coincides well with the SNR. We found that the associated cloud has higher column densities toward three bright, radio synchrotron-emitted rims where the radial expansion velocity of the supernova shock is decelerated, and the cloud is faint in the other parts of the SNR. This is the first direct evidence indicating that the highly anisotropic expansion of G1.9\(+\)0.3 observed by previous studies results from the deceleration by the interaction between the supernova shock and surrounding dense interstellar medium. ISM: clouds -- ISM: kinematics and dynamics -- ISM: supernova remnants 2018 ## 1 Introduction Supernova remnants (SNRs) are believed to be cosmic-ray accelerators in the Universe (e.g., Filipovic & Tothill 2021; Rowell 2021). Since cosmic rays are accelerated beyond PeV only at the early stage of an SNR evolution with the shock velocity \(>\)10,000 km s\({}^{-1}\) (H. E. S. S. Collaboration et al. 2014), investigations of young SNRs are very important for astrophysics. G1.9\(+\)0.3 (hereafter G1.9; Green & Gull 1984) is the youngest known Galactic SNR toward the Galactic Center (GC) with an age of \(\sim\)100 year (Reynolds et al. 2008; Borkowski et al. 2017; De Horta et al. 2014; Luken et al. 2020). This SNR is believed to be located in the Galactic Center region (e.g., Carlton et al. 2011). Although TeV and GeV gamma-ray emissions have not been detected toward G1.9 so far by the current generation of telescopes, possibly owing to its large distance and young age, it is expected to provide effective particle acceleration due to its highest expanding velocity among the known Galactic SNRs. Thus, G1.9 attracts keen interest as the prime candidate for the Galactic cosmic-ray accelerator to PeV ener gies (i.e., PeVatron; Aharonian et al. 2017). Borkowski et al. (2017) observed a lot of X-ray filaments, which constitute X-ray shell of the SNR and likely trace supernova (SN) shock fronts, from 2011 to 2015 and measured the distribution of the radial expansion velocity (see Figure 3 in Borkowski et al. 2017). Despite the young age, there is a factor of almost 5 velocity difference depending on the directions--the south-east and north-west parts of the SNR where the X-ray filaments are very bright (hereafter X-ray-bright rims) show higher expanding velocity, while the south-west, north, and north-east parts where radio emission is very bright (hereafter radio-bright rims) show slower expanding velocity--and Borkowski et al. 
(2017) pointed out that the radial expansion is highly anisotropic. A similar trend has been confirmed by radio continuum observations compiled over 30 years (De Horta et al. 2014; Luken et al. 2020). Two possible origins for the anisotropic expansion of the SN shock have been proposed so far. One is an interaction model: the anisotropic expansion has been caused by the interaction with surrounding material, and thereby is an acquired cause (Borkowski et al. 2017). The other is a non-spherical SN explosion model: the anisotropic expansion originated from the non-spherical SN explosion, and thereby is a congenital cause (Griffeth Stone et al. 2021). Another mystery of G1.9 is the anti-correlation between the radio and X-ray emissions. It may be relevant to the origin of the marked anisotropic expansion. The SNR G1.9 in radio continuum emission is bright, bilaterally asymmetric and peaked at the south-west, north, and north-east. In X-rays, it bilaterally peaks at the north-west and south-east (Figure 1) sides that are parallel to the Galactic plane. Considering that all SNR emission comes from synchrotron radiation (e.g., Reynolds et al. 2008), this anti-correlation is puzzling. Based on the distribution of the anisotropic expanding shock velocities, which shows that slow shocks are concentrated toward the radio-bright rims and rapid shocks are concentrated toward the X-ray-bright rims, Borkowski et al. (2017) proposed a deceleration scenario such that the anisotropic expansion was achieved by the encounter of the dense surrounding material. In the dense material region, given the electron fraction in all particles is \(\sim\)10\({}^{-4}\) (e.g., Ellison & Cassam-Chenai 2005), the electron density is expected to become larger, while the maximum energy becomes lower since \(E_{\rm max}\propto B_{\nu}v_{\rm shock}^{2}\) for the age-limited acceleration (e.g., Reynolds et al. 2008; Griffeth Stone et al. 2021). Therefore, the resulting synchrotron radiation is bright in radio compared to the rapid shock region, which is on the contrary bright in X-ray (Borkowski et al. 2017). However, there is still no direct evidence of the dense gas interacting with the shocks at the slower regions, meaning that we cannot rule out the possibility of the anisotropic explosion scenario. To resolve the above questions, investigations of gas distribution in Figure 1: Two-color composite image of G1.9+0.3, where red is 9 GHz radio continuum emission and blue is 2–7 keV emission. The white arrow indicates the direction of the north. The white and magenta circles indicate the SN-shell radius (-38\({}^{\circ}\)5, see Figure 5 in Luken et al. 2020) and the outer(inner boundary of the SN shell, respectively, from the dynamical center (\(l\), \(b\)) = (1:8710, 0:3237) (Borkowski et al. 2017). The width from the outer (inner) boundary to the shell center (=11\({}^{\circ}\)3) was defined as the half-width at half-maximum of the radio shell fitted by a Gaussian function (Luken et al. 2020). the vicinity of the SNR G1.9 is essential. We study interstellar gas toward G1.9 through molecular line emissions in this paper. The paper is organized as follows. Section 2 describes observations and data reductions, while section 3 presents the results of our investigation. We discuss the possible origin of the molecular gas, radio continuum, and X-ray in section 4. Section 5 summarizes our findings. 
## 2 Data and analyses ### Molecular lines As the main molecular gas tracer, we use \({}^{12}\)CO(\(J\)=3-2) observations obtained with the CHIMPS2 survey by the James Clerk Maxwell Telescope (JCMT) (Eden et al., 2020). The angular resolution, velocity resolution, and typical r.m.s. noise fluctuation are 15\({}^{\prime\prime}\), 1.0 km s\({}^{-1}\), and 0.8 K, respectively. The details of the observations and the data reduction are fully described in Eden et al. (2020). As the dense gas tracer, we use \({}^{13}\)CO(\(J\)=2-1) data with the angular resolution of \(\sim\)20\({}^{\prime\prime}\) obtained with the SEDIGISM survey by the Atacama Pathfinder EXperiment (APEX) telescope. The full description of the observations is seen in Schuller et al. (2021). We also use 1667 MHz OH data obtained with the Green Bank telescope (proposal ID: GBT/09B-007, PI: Natsuko Kudo). The angular resolution, velocity resolution, grid size, and typical r.m.s. noise are 8\({}^{\prime}\), 1 km s\({}^{-1}\), 4\({}^{\prime}\), and \(\sim\)0.2 K, respectively. The \({}^{12}\)CO(\(J\)=1-0) and \({}^{13}\)CO(\(J\)=1-0) data sets with the angular resolutions of \(\sim\)21\({}^{\prime\prime}\) obtained with the Nobeyama 45m telescope were used to study the \({}^{13}\)CO/\({}^{12}\)CO ratio (R\({}_{13/12}\)) in molecular clouds toward the central molecular zone. The coverage of these datasets is (\(l\), \(b\)) = (\(-\)0\(\fdg\)8 to 1\(\fdg\)4, \(-\)0\(\fdg\)35 to 0\(\fdg\)35), and thus they do not cover the G1.9 region. The full description of the observations is in Tokuyama et al. (2019). ### Radio continuum observations We analysed Australia Telescope Compact Array (ATCA; project code C1952) observations that are summarized in Table 1. All observations were carried out in "snap-shot" mode, with one hour of integration over a 12-hour observing session. The Compact Array Broadband Backend (CABB) was used with 2048 MHz bandwidth and 2049 channels at wavelengths of 3 cm (\(\nu\) = 8000–10000 MHz; centered at 9000 MHz) totaling 613.8 min of integration. We used the miriad\({}^{1}\) (Sault et al., 1995) and karma\({}^{2}\) (Gooch, 1995) software packages for reduction and analysis. All observations were calibrated using the phase and flux calibrators listed in Table 1 with two rounds of self-calibration using the selfcal task. Imaging was completed using the multi-frequency synthesis invert task with natural Briggs weighting (robust = 0). The Clean and Restor algorithms were used to deconvolve the images, with primary beam correction applied using the linmos task. The rms for the 9000 MHz image is \(\sim\)26 \(\mu\)Jy beam\({}^{-1}\) with a synthesized beam of 3\(\farcs\)1 \(\times\) 1\(\farcs\)0 and PA of \(-\)11\(\fdg\)7. Footnote 1: [http://www.a4.cnr.oau/computing/software/mirad/](http://www.a4.cnr.oau/computing/software/mirad/) Footnote 2: [http://www.a4.cnr.oau/computing/software.karms/](http://www.a4.cnr.oau/computing/software.karms/) ### X-ray We used archival X-ray data obtained by Chandra with the Advanced CCD Imaging Spectrometer S-array. We combined 26 individual observations from 2007 February (ObsID: 6708) to 2015 September (ObsID: 18354) using Chandra Interactive Analysis of Observations (CIAO; Fruscione et al., 2006) software version 4.12 with CALDB 4.9.1 (Graessle et al., 2007).
The archival data have been published in previous papers (Reynolds et al., 2008; Carlton et al., 2011; Borkowski et al., 2013; Borkowski et al., 2014; Zoglauer et al., 2015; Aharonian et al., 2017; Tsuji et al., 2021). All the data were reprocessed using the "chandra_repro" procedure. We created an energy-filtered, exposure-corrected image using the "merge_obs" procedure in the energy band from 0.5 to 7.0 keV. The resulting effective exposure is \(\sim\)1.66 Ms in total. ## 3 Results ### Molecular clouds toward the SNR Figure 2a shows averaged line spectra of \({}^{12}\)CO(\(J\)=3-2), \({}^{13}\)CO(\(J\)=2-1), and OH 1667 MHz toward G1.9. The averaged area, indicated by the black square at the first panel in figure 3 corresponds with the grid size of the coarsest data (OH 1667 MHz). The line profile of \({}^{12}\)CO(\(J\)=3-2) emission peaks at \(\sim\)\(-\)1, 7, and 45 km s\({}^{-1}\). Although the \(\sim\)\(-\)1 km s\({}^{-1}\) component appears to be a combination of two narrower sub-components at \(-\)1 and 3 km s\({}^{-1}\), the temperature decreases at 0 km s\({}^{-1}\) is less than three \(\sigma\) and these two sub-components are not visible in \({}^{13}\)CO(\(J\)=2-1). Thus, we consider the \(-\)1 km s\({}^{-1}\) component to be a single component. The \(-\)1 and 7 km s\({}^{-1}\) components are inter-mingled in both, \({}^{12}\)CO(\(J\)=3-2) and \({}^{13}\)CO(\(J\)=2-1). In \({}^{13}\)CO(\(J\)=2-1), the 7 km s\({}^{-1}\) component is strongest and clearly seen, while the 45 km s\({}^{-1}\) component is buried in noise and invisible. Although the \(-\)1 and 7 km s\({}^{-1}\) components are mingled, it is more distinguishable in the \(l\)-\(v\) diagram in figure 2b. A cloud at 45 km s\({}^{-1}\) is isolated from the other clouds. We hereafter call these three components \(-\)1, 7, and 45 km s\({}^{-1}\) clouds. In figure 3 we show the velocity channel distribution on the large-scale view (0\(\aas@@fstack{\circ}\)15 \(\times\) 0\(\aas@@fstack{\circ}\)12) toward the SNR. The three molecular clouds exhibit individual and different morpholog ical structures. The \(-1\) km s\({}^{-1}\) cloud is visible from \(V_{\rm LSR}\) = \(-16\) km s\({}^{-1}\) and ends at \(V_{\rm LSR}\) = 9 km s\({}^{-1}\) with the filamentary shape extending \(\sim\)0\(\fdg\)10 from the north-east to the south-west. This filament has an intensity depression in the direction of the SNR (see panels at \(V_{\rm LSR}\) from \(-11\) to \(-1\) km s\({}^{-1}\)). This cloud has the highest column density among the three molecular clouds. The 7 km s\({}^{-1}\) cloud consists of diffuse weak emissions spreading over the entire field with a narrow velocity span (see blue-colored emissions at \(V_{\rm LSR}\) from 4.0 to 14.0 km s\({}^{-1}\)). While the integrated intensity of the cloud is the lowest among the three, the emitting area is the largest, and hence the averaged brightness temperature in the spectrum is very strong (see figure 2). The 45 km s\({}^{-1}\) cloud is elongated and its filaments extend 0\(\fdg\)05 from east to west. This cloud has a large velocity \begin{table} \begin{tabular}{c c c c c c} \hline Observing & Array & Frequency v & Flux & Phase & Integrated time \\ date & config. 
& (MHz) & calibrator & calibrator & (minutes) \\ \hline 2018 October 19 & 6A & 9000 & PKS B 1934–638 & PKS B 1710–269 & 206.4 \\ 2018 December 28 & 1.5D & 9000 & PKS B 1934–638 & PKS B 1710–269 & 147.6 \\ 2019 May 11 & 1.5B & 9000 & PKS B 0823–500 & PKS B 1710–269 & 259.8 \\ \hline \end{tabular} \end{table} Table 1: Summary of ATCA observations for SNR G1.9+0.3. Figure 2: (a) Averaged spectra of \({}^{12}\)CO(\(J\)-3–2) (black), \({}^{11}\)CO(\(J\)-2–1) (blue), and OH 1667 MHz (light blue) within (\(J\), \(b\)) = (1:833 to 1:900, 0:300 to 0:367). The averaged area is indicated by the black rectangle at the top left-hand panel in figure 3. (b) Longitude-velocity diagram toward G1.9. The integrated latitude range is indicated at the top left. The three molecular clouds are labelled with three different colors. span comparable to the \(-1\) km s\({}^{-1}\) cloud and these two clouds are morphologically alike. However, the following differences suggest that the two clouds are independent features and not to be a unified cloud: (1) the slope angles of these two filaments are slightly different, (2) the integrated intensity of the \(-1\) km s\({}^{-1}\) cloud is twice as high as that of the 45 km s\({}^{-1}\) cloud, and (3) these clouds are not connected in the velocity space (see the panel 29.0 to 34.0 km s\({}^{-1}\) in figure 3), lending support to a chance overlapping of the two individual clouds. In order to highlight the characteristics of these three clouds, we defined a velocity range, which represents the clouds' major morphological feature (hereafter the representative velocity range; \(V_{\rm rep}\)). Enokiya et al. (2021) developed the derivation of the representative velocity range of a cloud and showed that \(V_{\rm center}\pm dV\), where \(V_{\rm center}\) and \(dV\) are the means of moment 1 and moment 2 values, respectively, can be a good expression of the \(V_{\rm rep}\) even for the case where two clouds are mingled with each other (the moments method). By using the moments method, the representative velocity ranges for the \(-1\), 7, and 45 km s\({}^{-1}\) clouds were derived to be \(-5.32\) to 3.68, 5.7 to 7.68, and 39.68 to 48.68 km s\({}^{-1}\), respectively. Figure 4 shows \({}^{12}\)CO(\(J\)=3-2) distributions of the three clouds obtained by integrating each \(V_{\rm rep}\). Note that the maximum values of the color bars in the figure are at most four times different. The figure clearly exhibits the morphological characteristics of the three clouds. Contrary to the line spectra, the 7 km s\({}^{-1}\) cloud has the weakest integrated intensity (i.e., column density) due to its very narrow velocity span. ### Distances to the molecular clouds The foreground clouds and clouds in the GC region (hereafter GC clouds) have different characteristics in space and velocity in \({}^{12}\)CO. Thus, it is possible to estimate the line-of-sight (LOS) Figure 3: Velocity channel distribution of the 1:5 by 1:2 area toward G1.9 in \({}^{12}\)CO(\(J\)=3–2). The black circle indicates the position of the SNR. The half-power-beam-width (HPBW), 5 pc scale bar, and the north direction are indicated at the top of Figure. The black rectangle at the top-left panel corresponds with the area used for averaged spectra in Figure 2. The velocity ranges for the three clouds are indicated by cyan, magenta and red arrows at the bottom of the panels. 
Figure 3: Velocity channel distribution of the 0\(\fdg\)15 \(\times\) 0\(\fdg\)12 area toward G1.9 in \({}^{12}\)CO(\(J\)=3–2). The black circle indicates the position of the SNR. The half-power-beam-width (HPBW), 5 pc scale bar, and the north direction are indicated at the top of the figure. The black rectangle in the top-left panel corresponds to the area used for the averaged spectra in figure 2. The velocity ranges for the three clouds are indicated by cyan, magenta and red arrows at the bottom of the panels. Note that the velocity channels were rebinned in advance for figure 3 to show the three molecular clouds, and the arrows are guides only and do not reflect the exact velocity ranges of the clouds. ### Distances to the molecular clouds The foreground clouds and clouds in the GC region (hereafter GC clouds) have different characteristics in space and velocity in \({}^{12}\)CO. Thus, it is possible to estimate the line-of-sight (LOS) location of a cloud toward the GC region (see, e.g., Enokiya et al. 2014) based on the following criteria: (i) Following the Galactic rotation, the foreground clouds can be observed only at \(-60\leq V_{\rm LSR}\leq 30\) km s\({}^{-1}\), while the velocities of the GC clouds have no such limitation due to non-circular motions (e.g., Reid et al. 2016). (ii) The foreground clouds have typical line-widths of \(\lesssim 5\) km s\({}^{-1}\) in FWHM, while those of the GC clouds are much larger, originating from the highly turbulent gas motions in this region and typically exceeding 10 km s\({}^{-1}\) (e.g., Reid et al. 2016; Morris & Serabyn 1996). (iii) \({}^{12}\)CO is optically thick and cannot trace small, detailed structures within a molecular clump or filament. Given that the sizes of clumps and filaments are a few pc (Goldsmith 1987), a foreground cloud appears diffusely extended on a 0\(\fdg\)1 scale because of its close distance, while a GC cloud consists of clumps and filaments on the same angular scale (e.g., Enokiya et al. 2014). Figure 4: Panels (a–c): Integrated intensity distributions of the three clouds in \({}^{12}\)CO(\(J\)=3–2) toward G1.9. The black circle indicates the position of the SNR. The integrated velocity ranges are indicated at the top-left of each panel. The scale bar, HPBW, and the north direction are indicated in the top left corner. Figure 3: _Continued_ Figure 5: (a) Longitude–velocity diagram of \({}^{12}\)CO(\(J\)=1–0) toward the GC region, integrating latitude over \(-2\fdg\)5 to \(2\fdg\)5. The CMZ and the known Galactic arms, based on Reid et al. (2016), are drawn by the yellow and the other colors, respectively. (b) Schematic top-down view of the Galaxy based on Reid et al. (2016). The black cross and filled, light gray circle indicate the GC and the GC region, respectively. The CMZ and known Galactic arms are colored in the same manner as in panel (a). The possible location ranges of the \(-1\), 7, and 45 km s\({}^{-1}\) clouds are indicated by cyan, magenta and red colors, respectively. Although (i)–(iii) are not absolute indicators but trends, (i) and (ii) in particular are visible in the large-scale \(l\)-\(v\) diagram shown in figure 5a, which includes clouds at all LOS distances toward the GC region. The narrow velocity-width clouds indicated by the transparent orange, light-green, blue, and pink belts are the foreground clouds, whereas the others are the GC clouds (i.e., clouds within the Galactocentric radius of \(\sim\)1 kpc). The cloud complex marked by the transparent yellow parallelogram is a well-known dense gas complex within the Galactocentric radius of \(\sim\)200 pc, which is often referred to as the central molecular zone (CMZ; Morris & Serabyn 1996). Based on this, G1.9 is located toward the eastern edge of the CMZ.
In accordance with the assessment using (i)–(iii), it is most likely that the 7 km s\({}^{-1}\) cloud is located in the foreground. From its velocity, this cloud is likely located in the Scutum–Centaurus (Sct-Cen) Arm and thus its distance is \(\sim\)3 kpc (Reid et al. 2016; Velusamy et al. 2012; see figures 5 and 2b). The velocity of the \(-1\) km s\({}^{-1}\) cloud is between those of the Norma and Scutum arms, and it has quite a large velocity width, exceeding 20 km s\({}^{-1}\). Although the large line width supports a location in the GC region, its velocity is close to that of the foreground clouds. Apart from the GC clouds, molecular clouds with such a large velocity width are only observed in Galactic superbubbles or Hii regions (e.g., Fukui et al. 1999). Thus, we searched catalogues and looked for an excess of infrared emission toward the G1.9 region, but no superbubbles or Hii regions have been detected within the 0\(\fdg\)15 \(\times\) 0\(\fdg\)12 field shown in figure 3. Furthermore, there are no known foreground clouds at a velocity of \(-1\) km s\({}^{-1}\) (see figure 5a), and the morphology of the \(-1\) km s\({}^{-1}\) cloud is filamentary. Therefore, we suggest that it is located in the GC region. The 45 km s\({}^{-1}\) cloud is most likely located at the distance of the GC region because it has a large velocity width (\(\sim\)15 km s\({}^{-1}\) in FWHM), its velocity is far from that of the foreground clouds, and its morphology is filamentary. Since G1.9 is located toward the edge of the CMZ, and the boundary of the CMZ is ambiguous, the possible locations of the two GC clouds (i.e., the \(-1\) and 45 km s\({}^{-1}\) clouds) are on the near side of the GC region, in the CMZ, or on the far side of the GC region. Sawada et al. (2004) suggested a method to estimate the LOS distance of a GC cloud based on CO and OH emission/absorption. According to Sawada et al. (2004), the CMZ (\(|l|\leq 2\fdg\)0) is a bright 18-cm continuum source, and hence the 1667 MHz OH line shows absorption if a cloud is located on the near side of the GC region. Since the angular resolution of our 18 cm data is poor, and we cannot detect significant emission from G1.9, the background source for the absorption in our data is likely to be the CMZ. We found deep absorption from \(-\)60 to \(+\)40 km s\({}^{-1}\) in our OH data (see figure 2a). The spectrum is noisy, but its smoothed version shows the deepest absorption at \(\sim\)6 km s\({}^{-1}\) and a tail toward negative velocities; thus this absorption is likely caused by the \(-1\) and 7 km s\({}^{-1}\) clouds. Thus, the \(-1\) km s\({}^{-1}\) cloud is possibly located on the near side of the GC region (see figure 5b). Since the \(-1\) and 45 km s\({}^{-1}\) clouds are _not_ a unified feature, the 45 km s\({}^{-1}\) cloud, which does not show significant absorption in OH, is probably located in the bright background OH source (i.e., the CMZ) or on the far side of the GC region (see figure 5b). Assuming the distance to the GC is 8.3 kpc (e.g., Gillessen et al. 2009), the distances \(d\) to the \(-1\) and 45 km s\({}^{-1}\) clouds are \(7\lesssim d\leq 8.1\) kpc and \(8.1\leq d\lesssim 9\) kpc, respectively (see figure 5b). Given that the \(-1\) km s\({}^{-1}\) cloud lies on the near side of the GC at a projected distance of \(\sim\)270 pc from it, we hereafter use 8.0 kpc as its distance.
The possible positions and LOS-distance ranges of the three clouds in the top-down view of the Galaxy are illustrated in figure 5b. ### Physical parameters of the clouds In order to estimate the column densities and masses, we summarize \(V_{\rm LSR}\) and \(V_{\rm rep}\) of the clouds in table 2. Note that \(V_{\rm rep}\) is a limited velocity range, symbolizing the representative shape of a cloud, and does not include all the emission that makes up the cloud. To derive the column density and mass, ideally we need to use the end-to-end, full velocity range (\(V_{\rm LSR}\)), which includes all the emission, and the emission in that velocity range needs to be dominated by only one cloud. From the detailed velocity channel distribution and the \(l\)-\(v\) diagram in figure 2b, we estimated these full velocity ranges for the \(-1\), 7, and 45 km s\({}^{-1}\) clouds to be \(-18.3\) to 11.7, 5.7 to 9.7, and 34.7 to 55.7 km s\({}^{-1}\), respectively. In 5.7 to 9.7 km s\({}^{-1}\), the \(-1\) and 7 km s\({}^{-1}\) clouds are mingled with each other. Therefore, we set the end of the velocity range used to derive the column density and mass of the \(-1\) km s\({}^{-1}\) cloud to 5.7 km s\({}^{-1}\), where the 7 km s\({}^{-1}\) cloud begins to dominate. Thus, the derived column density and mass of the \(-1\) km s\({}^{-1}\) cloud are lower limits. For the \(-1\) and 45 km s\({}^{-1}\) clouds, we use the typical \({}^{12}\)CO(\(J\)=3-2)/\({}^{12}\)CO(\(J\)=1-0) ratio of 0.7 (Oka et al. 2012) and a conversion factor (\(X_{\rm CO}\)) of 0.7 \(\times 10^{20}\) cm\({}^{-2}\) (K km s\({}^{-1}\))\({}^{-1}\) (Torii et al. 2010), while for the 7 km s\({}^{-1}\) cloud we use a \({}^{12}\)CO(\(J\)=3-2)/\({}^{12}\)CO(\(J\)=1-0) ratio of 0.4 (Oka et al., 2012) and \(X_{\rm CO}\) of \(1.0\times 10^{20}\) cm\({}^{-2}\) (K km s\({}^{-1}\))\({}^{-1}\) (Okamoto et al., 2017). The column density of molecular hydrogen is given by equation (1), \[N_{\rm H_{2}}=X_{\rm CO}\times W({\rm CO}), \tag{1}\] where \(W\)(CO) is the integrated intensity of \({}^{12}\)CO(\(J\)=1-0). To estimate the hydrogen mass, we use the following equation: \[M=\mu m_{p}\sum_{i}\left[d^{2}\Omega N_{{\rm H_{2}},i}\right], \tag{2}\] where \(\mu\), \(m_{p}\), \(d\), \(\Omega\), and \(N_{{\rm H_{2}},i}\) are the mean molecular weight, the proton mass, the distance, the solid angle subtended by a pixel, and the column density of molecular hydrogen for the \(i\)-th pixel, respectively. We assume a helium abundance of 20%, which corresponds to \(\mu\)=2.8. The deduced peak and typical column densities and masses of the clouds are summarized in table 2. The areas used for the mass estimation of the \(-1\), 7, and 45 km s\({}^{-1}\) clouds are the 0\(\fdg\)1 filament, the whole field, and the 0\(\fdg\)05 filament in figure 4, respectively. Thus, given that the 0\(\fdg\)1 filament extends further toward lower Galactic latitudes (i.e., \(b<0\fdg\)29), the mass of the \(-1\) km s\({}^{-1}\) cloud is a lower limit. The physical parameters of the \(-1\) km s\({}^{-1}\) cloud within the shell in table 2 were estimated in the area enclosed by the inner and outer boundaries of the shell (see magenta circles in figure 1).
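Equations (1) and (2) translate directly into a few lines of code. The sketch below applies them to a hypothetical \({}^{12}\)CO(\(J\)=3-2) integrated-intensity map, using the line ratio and \(X_{\rm CO}\) adopted above for the \(-1\) km s\({}^{-1}\) cloud; the map itself and the pixel size are placeholders, not values taken from the paper.

```python
import numpy as np

# Hypothetical 12CO(J=3-2) integrated-intensity map [K km/s] for one cloud
rng = np.random.default_rng(0)
w_co32 = rng.uniform(0.0, 50.0, size=(60, 60))

ratio_32_10 = 0.7       # 12CO(3-2)/12CO(1-0) ratio (Oka et al. 2012)
x_co = 0.7e20           # X_CO [cm^-2 (K km/s)^-1] (Torii et al. 2010)
distance_pc = 8.0e3     # adopted distance to the -1 km/s cloud [pc]
pixel_arcsec = 15.0     # assumed pixel size of the map [arcsec]

# Equation (1): N(H2) = X_CO * W(CO 1-0), with W(1-0) ~ W(3-2) / ratio
n_h2 = x_co * w_co32 / ratio_32_10                   # [cm^-2]

# Equation (2): M = mu * m_p * sum_i d^2 * Omega * N(H2)_i
PC_CM = 3.0857e18
M_P = 1.6726e-24                                      # proton mass [g]
mu = 2.8                                              # mean weight per H2 (20% He)
omega = (pixel_arcsec / 206265.0) ** 2                # pixel solid angle [sr]
d_cm = distance_pc * PC_CM
mass_g = mu * M_P * np.sum(d_cm ** 2 * omega * n_h2)
mass_msun = mass_g / 1.989e33
print(f"peak N(H2) = {n_h2.max():.2e} cm^-2, mass = {mass_msun:.2e} Msun")
```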
## 4 Discussion ### Distance to G1.9 and its associated cloud(s) We first focus on the morphological coincidences between the three clouds and the SNR. Among the three clouds, the 7 km s\({}^{-1}\) cloud does not show any morphological correlation with the SNR, whereas the \(-1\) km s\({}^{-1}\) cloud shows an obvious gas depression toward the SNR G1.9 (figures 3–4). This might be evidence of an interaction between the SNR and the \(-1\) km s\({}^{-1}\) cloud. The 45 km s\({}^{-1}\) cloud has a filamentary shape and overlaps the SNR, so this cloud is also possibly associated with the SNR. Next, we discuss the distances as an indicator of the association. The estimated distances to the \(-1\), 7 and 45 km s\({}^{-1}\) clouds are \(7\lesssim d\leq 8.1\) kpc, \(\sim\)3 kpc, and \(8.1\leq d\lesssim 9\) kpc, respectively (see also Figure 5b). On the other hand, the distance to G1.9 is a bit uncertain. Green & Gull (1984) discovered G1.9 and first suggested its distance to be \(<\)20 kpc. Nord et al. (2004) carried out wide-field imaging of the GC at 330 MHz, and by comparison with the 74 MHz flux they found that G1.9 does not have significant 74 MHz absorption. Thus, to be consistent with the absence of absorption, they concluded that the SNR may be located on the near side of the GC region, i.e., at \(<\)7.8 kpc (Nord et al., 2004). Reynolds et al. (2008) measured an extremely high absorbing column density of 5.5 \(\times\) 10\({}^{22}\) cm\({}^{-2}\) toward G1.9, and thus suggested that the location is at least _not_ in the foreground (hence in the GC region or farther). Based on a stellar synthesis model, half of the extinction is accounted for by material located in front of the GC region (Reynolds et al., 2008). On the other hand, if the distance is far beyond the GC, the derived expansion velocity becomes unnaturally high. Thus, Reynolds et al. (2008) finally concluded that a location in the GC region is the most plausible. Note that this distance requires the associated cloud to provide the high column density accounting for the remaining half of the extinction, even though they could not find such a cloud. Borkowski et al. (2010) measured line widths and confirmed that the values are consistent with the expansion velocity when assuming a distance of 8.5 kpc (14,000 km s\({}^{-1}\)). Furthermore, Carlton et al. (2011) monitored the X-ray filaments and confirmed that the shock velocity assuming 8.5 kpc is comparable to the spectroscopically deduced velocity. There is another estimated distance by Roy & Pal (2014), who presented HI absorption measurements and suggested that G1.9 is 2 kpc further away than the GC. However, we did not find the absorption in the archival HI combined data obtained with ATCA and Parkes (McClure-Griffiths et al., 2012). In addition, the work by Roy & Pal (2014) does not include information regarding the data or the estimation method. Therefore, we do not consider the distance suggested by Roy & Pal (2014) further. Although the distance estimated by Nord et al. (2004) is inconsistent with the estimated distance to the 45 km s\({}^{-1}\) cloud, almost all of these arguments suggest that the \(-1\) and 45 km s\({}^{-1}\) clouds are possible counterparts to the SNR. However, taking into account that the \(-1\) and 45 km s\({}^{-1}\) clouds are not associated with each other (see subsection 3.1), only one of these two clouds is possibly associated with the SNR.
\begin{table} \begin{tabular}{l c c c c c} \hline Name & \(V_{\rm rep}\) & \(V_{\rm LSR}\) & distance & \(N_{\rm H_{2}}\) (typical/peak) & Mass \\ & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (kpc) & (\(\times 10^{21}\) cm\({}^{-2}\)) & (\(\times 10^{3}\) M\({}_{\odot}\)) \\ \hline \(-1\) km s\({}^{-1}\) cloud & \(-5.32\) to 3.68 & \(-18.3\) to 5.7 & \(7\lesssim d\leq 8.1\) & 3.3\({}^{*}\) / 9.4\({}^{*}\) & 4.6\({}^{*}\) \\ 7 km s\({}^{-1}\) cloud & 5.7 to 7.68 & 5.7 to 9.7 & \(\sim\)3 & 2.4 / 5.5 & 3.8 \\ 45 km s\({}^{-1}\) cloud & 39.68 to 48.68 & 34.7 to 55.7 & \(8.1\leq d\lesssim 9\) & 0.5 / 4.2 & 1.0 \\ \(-1\) km s\({}^{-1}\) cloud within the shell & – & \(-18.3\) to 5.7 & \(7\lesssim d\leq 8.1\) & \(\sim\)3 / \(\sim\)6 & 0.6 \\ \hline \end{tabular} * lower limits. Note. – Col.1, names of clouds; Col.2, representative velocity ranges for the clouds; Col.3, velocity ranges used for the column density and mass derivations; Col.4, distances to the clouds; Col.5, typical and peak molecular column densities; Col.6, molecular masses. \end{table} Table 2: Physical parameters of the molecular clouds toward G1.9 Since the estimated distance to the \(-\)1 km s\({}^{-1}\) cloud agrees with all the estimated distances to G1.9 in the literature, and the cloud shows a clearer depression of gas toward the SNR than the 45 km s\({}^{-1}\) cloud does (see figure 3), the \(-\)1 km s\({}^{-1}\) cloud is the more favorable counterpart. Therefore, hereafter we call the \(-\)1 km s\({}^{-1}\) cloud the _associated cloud_ with respect to the SNR and use 8.0 kpc as the distance to G1.9. However, we still cannot completely rule out the possibility of a physical association between the 45 km s\({}^{-1}\) cloud and G1.9. According to Seta et al. (2004), a broad velocity wing is the strongest evidence for the association, while we found no significant differences between the spectra in the SNR region and those in the other regions. This may be due to beam dilution by the small emitting areas of the wing cloud for this small, young SNR. Further investigations, such as a detection of the broad velocity wing and a comparison of the distributions of the SNR and of the gas temperature or excitation state derived from future multi-line observations, are required to obtain robust evidence. ### Spatial-velocity distribution of the associated cloud In figure 6 we show the velocity channel distribution of the associated cloud in \({}^{12}\)CO(\(J\)=3-2) overlaid with black contours of the 9 GHz emission. The filamentary structure of the associated cloud is clearly seen from \(V_{\rm LSR}=-\)14.32 to \(-\)8.32 km s\({}^{-1}\). At velocities from \(-\)8.32 to 0.68 km s\({}^{-1}\), the emission inside the SNR is depressed and the filament is discontinued there. The south-western (SW), northern (N), and north-eastern (NE) radio-bright rims coincide well with local enhancements of the molecular gas emission at \(-\)5.32 to \(-\)2.32 km s\({}^{-1}\), \(-\)2.32 to 0.68 km s\({}^{-1}\), and \(-\)8.32 to \(-\)5.32 km s\({}^{-1}\), respectively (see cyan circles in figure 6), whereas molecular gas is barely detected toward the south-eastern direction of the SNR, where the radio continuum emission is faint. As the maximum energy scales as \(E_{\rm max}\propto B\,v_{\rm shock}^{2}\), if dense molecular gas surrounding the SNR interrupts the expansion of the SN shock, such a region would be brighter in the radio than in X-rays (Borkowski et al. 2017).
While the current angular resolution of CO is quite coarse compared to the radio and X-ray data, the coincidence of local peaks between CO and radio may suggest an interaction between the gas and the SNR. Through point-by-point comparisons with higher angular resolution CO data and the other wavelengths, the validity of the interaction can be examined. In the velocity range from 3.68 to 9.86 km s\({}^{-1}\), it is difficult to identify the associated cloud because of the overlap with the foreground diffuse cloud in the Sct-Cen Arm (i.e., the 7 km s\({}^{-1}\) cloud). We next examine the denser gas distribution toward the same region as in figure 6 by using \({}^{13}\)CO(\(J\)=2-1) data obtained with the SEDIGISM survey (Schuller et al. 2021). As shown by the light green contours in figure 7, which correspond to a 3\(\sigma\) significance level, \({}^{13}\)CO(\(J\)=2-1) emission is very weak in the associated cloud compared to \({}^{12}\)CO(\(J\)=3-2), whereas that from the 7 km s\({}^{-1}\) cloud is very strong. According to Tokuyama et al. (2019), the intensity ratio R\({}_{13/12}\) is significantly higher in the foreground clouds than in the GC clouds (see also figure 10 in the Appendix), which further supports the associated cloud's location in the GC region. In the associated cloud, the bright \({}^{13}\)CO(\(J\)=2-1) areas, which correspond to denser gas, tend to be located outside the SN shell, whereas the \({}^{12}\)CO(\(J\)=3-2) emission, corresponding to high-temperature and/or dense gas, is distributed both at the shell and outside. This may indicate that the molecular gas in the shell is heated by the shock, while the gas outside the shell has not yet experienced an interaction with the SNR and hence maintains its high density. In figure 8 we show the integrated intensity distribution and position-velocity diagrams of the associated cloud in \({}^{12}\)CO(\(J\)=3-2) in the Offset X–Offset Y coordinates, which are defined by rotating the Galactic coordinates 45 degrees clockwise, centered at (\(l\), \(b\)) = (1\(\fdg\)870, 0\(\fdg\)325). Most of the emission from the associated cloud coincident with the SN shell is seen in the N, NE, and SW directions, where the radio-bright rims are found [see panels (a)-(c)]. We also found a cavity-like structure in each position-velocity diagram of CO whose velocity range is from \(-\)10 to 6 km s\({}^{-1}\) [see black ellipses in panels (b) and (c)]. Note that the spatial extent of the CO cavity is roughly consistent with that of the radio continuum shell. This is further possible evidence for the interaction between the cloud and the SNR, because such a cavity-like structure, corresponding to an expanding gas motion, is thought to be formed by supernova shocks and/or strong winds from the progenitor system of the SNR (e.g., Koo et al. 1990; Koo & Heiles 1991). We argue that the gas acceleration due to supernova shocks is negligible in G1.9. As described in subsection 3.3, the mass of the \(-\)1 km s\({}^{-1}\) cloud within the shell is \(\sim\)600 M\({}_{\odot}\). If ambient gas of this mass uniformly filled the current volume of the SNR before being blown out, the initial ambient density is estimated to be \(\sim\)660 cm\({}^{-3}\). By contrast, previous X-ray spectroscopic studies indicated a low pre-shock density of \(\sim\)0.04 cm\({}^{-3}\) (Reynolds et al. 2008), based on the high velocity of the SNR forward shock (Brose et al. 2019) and the low ionization state of the post-shock gas.
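As a rough check of the \(\sim\)660 cm\({}^{-3}\) figure, the arithmetic can be redone as below; the shell radius (\(\approx\)1.8 pc, i.e. \(\sim\)0\(\fdg\)013 at 8.0 kpc) and the convention of quoting the density as hydrogen nuclei are assumptions made here for illustration, so the sketch only reproduces the order of magnitude of the quoted value.

```python
import numpy as np

M_SUN_G = 1.989e33
PC_CM = 3.0857e18
M_P = 1.6726e-24                  # proton mass [g]

mass_msun = 600.0                 # CO-derived mass within the shell (subsection 3.3)
radius_pc = 1.8                   # assumed SNR radius (~0.013 deg at 8.0 kpc)

volume_cm3 = 4.0 / 3.0 * np.pi * (radius_pc * PC_CM) ** 3
# Number density of hydrogen nuclei, assuming 1.4 m_p of mass per H nucleus (He included);
# the corresponding H2 density would be half of this value.
n_h = mass_msun * M_SUN_G / (1.4 * M_P * volume_cm3)
print(f"initial ambient density ~ {n_h:.0f} cm^-3")   # ~7e2, same order as the quoted ~660
```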
This discrepancy between the CO-based and X-ray-based densities implies that the expanding CO gas was first formed by strong pre-explosion winds, and that the progenitor of G1.9 subsequently exploded inside the low-density cavity. The smaller wind bubble compared to those seen in the disk is expected to be due to the high gas density in the GC. Since G1.9 is widely thought to be a Type Ia supernova remnant (e.g., Reynolds et al. 2008; Borkowski et al. 2010; Borkowski et al. 2013; Chakraborti et al. 2016; Borkowski et al. 2017; Luken et al. 2020; Griffeth Stone et al. 2021), such strong pre-explosion winds can only be seen in a progenitor system comprising a white dwarf and a non-degenerate companion star (also known as a single-degenerate system). An alternative idea is that the expanding gas was formed by the strong stellar wind from a high-mass progenitor of G1.9. According to Luken et al. (2020), the observed rotation measure can be explained as a combination of the red supergiant wind and toroidal magnetic field hypotheses. In the case that G1.9 is a core-collapse remnant, the expansion velocity of the CO gas, \(\sim\)8 km s\({}^{-1}\), is roughly consistent with that of other core-collapse SNRs with stellar wind bubbles (e.g., Fukui et al., 2012; Kuriki et al., 2018; Sano et al., 2021). In any case, this further supports the association between the \(-1\) km s\({}^{-1}\) cloud and the SNR. ### Comparison with the X-ray and radio continuum In figures 9a and 9b we show the distributions of the 9 GHz continuum and X-ray emission, respectively, with black contours showing \({}^{12}\)CO(\(J\)=3-2). We found that the three intensity peaks of the radio-bright rims (NE, N, and SW) appear to overlap with molecular clouds, as shown by the black contours, while the radio-dim regions have no strong CO emission. However, synchrotron X-rays are bright in the CO-dim regions except for the SW rim. Figure 6: Velocity channel distribution in \({}^{12}\)CO(\(J\)=3–2) toward the associated cloud obtained with JCMT, overlaid with black contours outlining the 9 GHz radio-continuum emission. The cyan circles indicate the positions of emission enhancements toward the shell of the SNR. The half-power beam-width (HPBW) and the direction of north are indicated at the top of the figure. The velocity ranges for the clouds are indicated by cyan and magenta arrows at the bottom of the panels. Figure 9c shows that the spatial relations among the radio continuum, X-ray and CO emission are clearer than in each overlay map. We argue that this tendency could be explained by shock interactions with inhomogeneous gas distributions, which was first proposed by Borkowski et al. (2017). The ambient gas density in the northern, radio-brightest rim is at least 10 times larger than that of the eastern, X-ray-brightest rim (see figure 9c). Since the shock velocity \(v_{\rm shock}\) is inversely proportional to the square root of the gas density, the shock velocity of the eastern rim is expected to be at least three times higher than that of the northern rim. Indeed, previous measurements of the X-ray proper motion indicated that the X-ray-bright major axis (east-west) is \(\sim\)3-4 times faster than the northern radio-bright shell (Borkowski et al., 2017). Since the maximum energy of accelerated cosmic-ray electrons scales as \(E_{\rm max}\propto B\,v_{\rm shock}^{2}\) under age-limited acceleration (e.g., Reynolds et al., 2008), the shock-velocity difference due to the inhomogeneous gas distribution is naturally expected.
The shock interactions with the clumpy medium can explain the origin of the SW X-ray-bright rim despite the large amount of gas there. According to magnetohydrodynamic (MHD) numerical simulations, shock-cloud interactions can enhance the turbulent magnetic field up to \(\sim\)1 mG on the surface of the shocked clouds (e.g., Inoue et al. 2012; Celli et al. 2019; Pavlovic et al. 2018). This magnetic-field amplification induces synchrotron X-ray limb-brightening around the shocked clumps (e.g., Sano et al. 2010; Sano et al. 2013; Tanaka et al. 2020). In the case of G1.9, the SW CO clouds are expected to be highly clumpy, with a clump size of \(\sim\)0.1 pc or less (e.g., Sano et al. 2020). Further ALMA observations with high angular resolution and unprecedented sensitivity are needed to better understand the X-ray and radio continuum emission as well as the cosmic-ray electron acceleration to higher energies in the SNR G1.9 (Sano et al. 2015). Figure 7: Velocity channel distribution of \({}^{13}\)CO(\(J\)=2–1) toward the associated cloud obtained with APEX, overlaid with black and light green contours outlining the 9 GHz radio-continuum emission and the three-sigma level of the \({}^{13}\)CO(\(J\)=2–1) emission (= 2.87 K km s\({}^{-1}\)), respectively. The cyan circles indicate the positions of the enhancements in \({}^{12}\)CO(\(J\)=3–2) toward the shell of the SNR (see figure 6). The HPBW and the direction of north are indicated at the top of the figure. The velocity ranges for the clouds are indicated by cyan and magenta arrows at the bottom of the panels. ### Implications for future observations Our analyses have revealed molecular gas possibly interacting with the SN shock toward the radio-bright rims (N, NE, SW). According to Aharonian et al. (2017), an increasing number of detections of the so-called \(\pi^{0}\)-decay bump in the GeV/TeV spectra of SNRs has been reported, and such SNRs show correlations between the distributions of the TeV and gas emissions (e.g., Fukui et al. 2012). This has been considered to be substantial evidence for the acceleration of cosmic-ray protons in SNRs. The molecular gas discovered in the present work may provide the targets for hadronic interactions in G1.9 if hadronic-origin \(\gamma\)-rays are detected in the future, and hence investigation of the gas is essential. Therefore, the comparison of the candidate interacting gas with future gamma-ray data of high sensitivity and high angular resolution is an important prospect. Figure 8: (a) Integrated intensity distribution of the associated cloud in \({}^{12}\)CO(\(J\)=3–2) overlaid with the contours of 9 GHz continuum emission in the Offset X–Offset Y coordinates. The direction of north, the HPBW, and the scale bar are indicated at the bottom corners of the panel. The integration ranges for panels (b) and (c) are indicated by the black lines. (b) Velocity–Offset Y diagram of the associated cloud. The integrated velocity ranges for panel (a) and the Sct-Cen arm are indicated by the white and red lines, respectively. (c) Offset X–Velocity diagram of the associated cloud. The integrated velocity ranges for panel (a) and the Sct-Cen arm are indicated by the white and red lines, respectively. High-resolution, multi-line observations of the candidate clouds will allow us to investigate very accurately the interaction of the gas with the high-velocity X-ray filaments expanding at over 10,000 km s\({}^{-1}\). This will reveal the effect of the SN-shock deceleration quantitatively.
Thanks to its very high SN shock velocity, G1.9 is the best and only laboratory for testing the physics of SN shock propagation. Therefore, monitoring observations of the SNR in radio, X-rays and CO over a few to tens of years could provide new insights into gas dynamics and shock propagation. Figure 9: (a) Distribution of the 9 GHz continuum emission with overlaid contours of the integrated intensity distribution of the associated cloud in \({}^{12}\)CO(\(J\)=3–2). The integration range corresponds to \(V_{\rm LSR}\) in table 2 (\(-\)18.3 to 5.7 km s\({}^{-1}\)). The interval and the lowest level of the contours are 4 and 30 K km s\({}^{-1}\), respectively. The black ellipse indicates the area used to calculate the azimuthal profile shown in panel (c). (b) Distribution of X-ray emission with the same CO contours as in panel (a). (c) Azimuthal profiles of the flux compiled inside the black ellipse indicated in panel (a), centered at the dynamical center of the SNR derived by Borkowski et al. (2017), with a radius and eccentricity of 0\(\fdg\)013 and 0.55, respectively. The filled grey area and the areas marked by blue and red lines indicate the \({}^{12}\)CO(\(J\)=3–2), 9 GHz continuum, and X-ray emissions normalized to their peaks, respectively. Pixels (voxels) below 5\(\sigma\) were removed from each data set in advance. ## 5 Conclusions We investigated the interstellar gas toward the youngest known Galactic SNR G1.9+0.3, mainly by using archival \({}^{12}\)CO(\(J\)=3-2) data obtained with the CHIMPS2 survey. Based on the very large velocity width, exceeding 20 km s\({}^{-1}\) in \({}^{12}\)CO(\(J\)=3-2), we suggest that the \(-1\) km s\({}^{-1}\) cloud, whose estimated distance is 8.0 kpc, is possibly associated with the SNR. The cloud has three peaks at the N, NE, and SW areas of the SNR, which coincide well with the radio-bright rims, whereas the N and NE peaks are anti-correlated with the X-ray-bright rims. The SW CO peak corresponding to the X-ray-bright rim can be interpreted as the result of a shock-cloud interaction. The CO distribution is direct evidence that the anisotropic expansion of the observed SN shocks originates from the deceleration by interaction with the surrounding dense, anisotropic cloud. ## Funding This work was financially supported by Grants-in-Aid for Scientific Research (KAKENHI) from the Japan Society for the Promotion of Science (JSPS; grant number 20K14520). MDF acknowledges Australian Research Council funding through grant DP200100784. ## Conflict of Interest The authors declare that they have no conflict of interest. ## Acknowledgments We thank the anonymous referee(s) for helpful comments that improved the manuscript. RE is grateful to Dr. K. Torii for providing the OH data. The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan; Academia Sinica Institute of Astronomy and Astrophysics; the Korea Astronomy and Space Science Institute; Center for Astronomical Mega-Science (as well as the National Key R&D Program of China with No. 2017YFA0402700). Additional funding support is provided by the Science and Technology Facilities Council of the United Kingdom and participating universities and organizations in the United Kingdom and Canada.
The scientific results reported in this article are based on data obtained from the Chandra Data Archive (Obs IDs: 6708, 8521, 10111, 10112, 10928, 10930, 12689, 12690, 12691, 12692, 12693, 12694, 12695, 13407, 13509, 16947, 16948, 16949, 17651, 17652, 17663, 17699, 17700, 17702, 17705, and 18354). This research has made use of the software provided by the Chandra X-ray Center (CXC) in the application package CIAO (v 4.12). ## Appendix A \(\mathrm{R}_{13/12}\) variations in the GC clouds and the foreground Galactic arms Here, we present \(\mathrm{R}_{13/12}\) in the GC clouds and the other clouds in the foreground Galactic arms. We used \({}^{12}\)CO(\(J\)=1-0) and \({}^{13}\)CO(\(J\)=1-0) data obtained simultaneously with the Nobeyama 45-m telescope by Tokuyama et al. (2019). These data cubes have the same HPBWs, grid sizes, and coverages, and thus they can be directly compared to each other. We first flagged voxels having emission at or below 5\(\sigma\) in both data cubes to avoid noise fluctuations. Next, we examined the spectra in the two data cubes and defined two groups: one dominated by emission from the GC clouds, (\(l\), \(b\), \(v\)) = (\(-1\fdg\)0 to \(1\fdg\)0, \(-0\fdg\)5 to \(0\fdg\)5, \(-200\) to \(-70\) and \(20\) to \(200\) km s\({}^{-1}\)), and the other dominated by emission from the foreground clouds, (\(l\), \(b\), \(v\)) = (\(-1\fdg\)0 to \(1\fdg\)0, \(0\fdg\)2 to \(0\fdg\)5, \(-60\) to \(20\) km s\({}^{-1}\)). The emission from the GC region is brighter than that from the foreground region, and thus the GC emission appears in absorption when the position and velocity of the two emissions match (i.e., \(b\) = \(-0\fdg\)2 to \(0\fdg\)2 and \(v\) = \(-60\) to \(20\) km s\({}^{-1}\)). This absorption is clearly visible in the \(l\)-\(v\) diagram of \({}^{12}\)CO(\(J\)=1-0). Therefore, such voxels were excluded from the above \(lbv\) definition. Comparing these data cubes, we obtained normalized histograms of \(\mathrm{R}_{13/12}\) for the foreground clouds (blue) and GC clouds (salmon pink), shown in figure 10. As mentioned by Tokuyama et al. (2019), the foreground clouds have \(\mathrm{R}_{13/12}\) about twice as high as that of the GC clouds, and hence clouds at these two locations are distinguishable by using \(\mathrm{R}_{13/12}\).
2309.07593
Statistically Valid Variable Importance Assessment through Conditional Permutations
Variable importance assessment has become a crucial step in machine-learning applications when using complex learners, such as deep neural networks, on large-scale data. Removal-based importance assessment is currently the reference approach, particularly when statistical guarantees are sought to justify variable inclusion. It is often implemented with variable permutation schemes. On the flip side, these approaches risk misidentifying unimportant variables as important in the presence of correlations among covariates. Here we develop a systematic approach for studying Conditional Permutation Importance (CPI) that is model agnostic and computationally lean, as well as reusable benchmarks of state-of-the-art variable importance estimators. We show theoretically and empirically that $\textit{CPI}$ overcomes the limitations of standard permutation importance by providing accurate type-I error control. When used with a deep neural network, $\textit{CPI}$ consistently showed top accuracy across benchmarks. An experiment on real-world data analysis in a large-scale medical dataset showed that $\textit{CPI}$ provides a more parsimonious selection of statistically significant variables. Our results suggest that $\textit{CPI}$ can be readily used as drop-in replacement for permutation-based methods.
Ahmad Chamma, Denis A. Engemann, Bertrand Thirion
2023-09-14T10:53:36Z
http://arxiv.org/abs/2309.07593v2
# Statistically Valid Variable Importance Assessment through Conditional Permutations ###### Abstract Variable importance assessment has become a crucial step in machine-learning applications when using complex learners, such as deep neural networks, on large-scale data. Removal-based importance assessment is currently the reference approach, particularly when statistical guarantees are sought to justify variable inclusion. It is often implemented with variable permutation schemes. On the flip side, these approaches risk misidentifying unimportant variables as important in the presence of correlations among covariates. Here we develop a systematic approach for studying Conditional Permutation Importance (CPI) that is model agnostic and computationally lean, as well as reusable benchmarks of state-of-the-art variable importance estimators. We show theoretically and empirically that _CPI_ overcomes the limitations of standard permutation importance by providing accurate type-I error control. When used with a deep neural network, _CPI_ consistently showed top accuracy across benchmarks. An experiment on real-world data analysis in a large-scale medical dataset showed that _CPI_ provides a more parsimonious selection of statistically significant variables. Our results suggest that _CPI_ can be readily used as drop-in replacement for permutation-based methods. ## 1 Introduction Machine learning is an area of growing interest for biomedical research (Iniesta et al., 2016; Taylor and Tibshirani, 2015; Malley et al., 2011) for predicting biomedical outcomes from heterogeneous inputs (Hung et al., 2020; Zheng and Agresti, 2000; Giorgio et al., 2022; Sechidis et al., 2021). Biomarker development is increasingly focusing on multimodal data including brain images, genetics, biological specimens and behavioral data (Coravos et al., 2019; Siebert, 2011; Ye et al., 2008; Castillo-Barnes et al., 2018; Yang et al., 2022). Such high-dimensional settings with correlated inputs put strong pressure on model identification. With complex, often nonlinear models, it becomes harder to assess the role of features in the prediction, aka _variable importance_(Casalicchio et al., 2019; Altmann et al., 2010). In epidemiological and clinical studies, one is interested in _population-level_ feature importance, as opposed to instance-level feature importance. In that context, variable importance is understood as _conditional_ importance, meaning that it measures the information carried by one variable on the outcome _given_ the others, as opposed to the easily accessible marginal importance of the variables. Conditional importance is necessary e.g. to assess whether a given measurement is worth acquiring, on top of others, for a diagnostic or prognostic task. As the identification of relevant variables is model-dependent and potentially unstable, point estimates of variable importance are misleading. One needs confidence intervals of importance estimates or statistical guarantees, such as type-I error control, i.e. the percentage of non-relevant variables detected as relevant (false positives). This control depends on the accuracy of the p-values on variable importance being non-zero (Cribbie, 2000). Within the family of removal-based importance assessment methods (Covert et al., 2022), a popular model-agnostic approach is _permutation_ variable importance, that measures the impact of shuffling a given variable on the prediction (Janitza et al., 2018). 
By repeating the _permutation_ importance analysis on permuted replicas of the variable of interest, importance values can be tested against the null hypothesis of being zero, yielding p-values that are valid under general distribution assumptions. Yet, statistical guarantees for permutation importance assessment do not hold in the presence of correlated variables, leading to selection of unimportant variables (Molnar et al., 2021; Hooker et al., 2021; Nicodemus et al., 2010; Stigler, 2005). For instance, the method proposed in (Mi et al., 2021) is a powerful variable importance evaluation scheme, but it does not control the rate of type-I error. In this work, we propose a general methodology for studying the properties of Conditional Permutation Importance in biomedical applications alongside tools for benchmarking variable importance estimators: * Building on the previous literature on CPI, we develop theoretical results for the limitations regarding Permutation Importance (PI) and advantages of conditional Permutation Importance (CPI) given correlated inputs (section 3). * We propose a novel implementation for CPI allowing us to combine the potential advantages of highly expressive base learners for prediction (a deep neural network) and a comparably lean Random Forest model as a conditional probability learner (section 4). * We conduct extensive benchmarks on synthetic and heterogeneous multimodal real-world biomedical data tapping into different correlation levels and data-generating scenarios for both classification and regression (section 5). * We propose a reusable library for simulation experiments and real-world applications of our method on a public GitHub repo [https://github.com/achamma723/Variable_Importance](https://github.com/achamma723/Variable_Importance). ## 2 Related work A popular approach to interpret black-box predictive models is based on _locally interpretable_, i.e. _instance-based_, models. _LIME_(Ribeiro et al., 2016) provides local interpretable model-agnostic explanations by locally approximating a given complex model with a linear model around the instance of interest. _SHAP_(Burzykowski, 2020) is a popular package that measures _local_ feature effects using the Shapley values from coalitional game theory. However, global, i.e. population-level, explanations are better suited than instance-level explanations for epidemiological studies and scientific discovery in general. Many methods can be subsumed under the general category of removal-based approaches (Covert et al., 2022). _Permutation_ importance is defined as the decrease in a model score when the values of a single feature are randomly shuffled (Breiman, 2001). This procedure breaks the relationship between the feature and the outcome, thus the drop in model performance expresses the relevance of the feature. Janitza et al. (2018) use an ensemble of Random Forests with the sample space equally partitioned. They approximate the null distribution based on the observed importance scores to provide p-values. Yet, this coarse estimate of the null distribution can give unstable results. Recently, a generic approach has been proposed in (Williamson et al., 2021) that measures the loss difference between models that include or exclude a given variable, also applied with LOCO (Leave One Covariate Out) in the work by Lei et al. (2018). They show the asymptotic consistency of the model. However, their approach is intractable, given that it requires refitting the model for each variable. 
A simplified version has been proposed by Gao et al. (2022). However, relying on linear approximations, some statistical guarantees from (Williamson et al., 2021) are potentially lost. Another recent paper by Mi et al. (2021) has introduced model-agnostic explanation for black-box models based on the _permutation_ approach. _Permutation_ importance (Breiman, 2001) can work with any learner. Moreover, it relies on a single model fit, hence it is an efficient procedure. Strobl et al. (2008) pointed out limitations with the _permutation_ approach in the face of correlated variables. As an alternative, they propose a _conditional permutation_ importance by shuffling the variable of interest conditionally on the other variables. However, the solution was specific to Random Forests, as it is based on bisecting the space with the cutpoints extracted during the building process of the forest. With the _Conditional Randomization Test_ proposed by Candes et al. (2017), the association between the outcome \(y\) and the variable of interest \(x^{j}\) conditioned on \(\mathbf{x}^{-\mathbf{j}}\) is estimated. The variable of interest is sampled conditionally on the other covariates multiple times to compute a test statistic and p-values. However, this solution is limited to generalized linear models and is computationally expensive. Finally, a recent paper by Watson and Wright (2021) showed the necessity of conditional schemes and introduced a knockoff sampling scheme, whereby the variable of interest is replaced by its knockoff to monitor any drop in performance of the learner used, without refitting. This method is computationally inexpensive and enjoys statistical guarantees from (Lei et al., 2018). However, it depends on the quality of the knockoff sampling, where even a relatively small distribution shift in knockoff generation can lead to large errors at inference time. Other work has presented comparisons of selected models within distinct communities (Liu et al., 2021; Chipman et al., 2010; Janitza et al., 2018; Mi et al., 2021; Altenmuller et al., 2021), however lacking conceptualization from a unified perspective. In summary, previous work has established potential advantages of conditional permutation schemes for inference of variable importance. Yet, the lack of computationally scalable approaches has hampered systematic investigations of different permutation schemes and their comparison with alternative techniques across a broader range of predictive modeling settings. ## 3 Permutation importance and its limitations ### Preliminaries Notations. We will use the following system of notations. We denote matrices, vectors, scalar variables and sets by bold uppercase letters, bold lowercase letters, script lowercase letters, and calligraphic letters, respectively (e.g. \(\mathbf{X}\), \(\mathbf{x}\), \(x\), \(\mathcal{X}\)). We call \(\mu\) the function that maps the sample space \(\mathcal{X}\subset\mathbb{R}^{p}\) to the sample space \(\mathcal{Y}\subset\mathbb{R}\) and \(\hat{\mu}\) is an estimate of \(\mu\). Permutation procedures will be represented by (_perm_). We denote by \([\![n]\!]\) the set \(\{1,\ldots,n\}\). Let \(\mathbf{X}\in\mathbb{R}^{n\times p}\) be a design matrix where the i-th row and the j-th column are denoted \(\mathbf{x_{i}}\) and \(\mathbf{x^{j}}\) respectively.
Let \(\mathbf{X^{-j}}=(\mathbf{x^{1}},\ldots,\mathbf{x^{j-1}},\mathbf{x^{j+1}}, \ldots,\mathbf{x^{p}})\) be the design matrix, where the \(j^{th}\) column is removed, and \(\mathbf{X^{(j)}}=(\mathbf{x^{1}},\ldots,\mathbf{x^{j-1}},\{\mathbf{x^{j}}\}^{ perm},\mathbf{x^{j+1}},\ldots,\mathbf{x^{p}})\) the design matrix with the \(j^{th}\) column shuffled. The rows of \(\mathbf{X^{-j}}\) and \(\mathbf{X^{(j)}}\) are denoted \(\mathbf{x^{-j}_{i}}\) and \(\mathbf{x^{(j)}_{i}}\) respectively, for i \(\in[\![n]\!]\). Problem settingMachine learning inputs are a design matrix \(\mathbf{X}\) and a target \(\mathbf{y}\in\mathbb{R}^{n}\) or \(\in\{0,1\}^{n}\) depending on whether it is a regression or a classification problem. Throughout the paper, we rely on an i.i.d. sampling train / test partition scheme where the \(n\) samples are divided into \(n_{train}\) training and \(n_{test}\) test samples and consider that \(\mathbf{X}\) and \(\mathbf{y}\) are restricted to the test samples - the training samples were used to obtain \(\hat{\mu}\). ### The _permutation_ approach leads to false detections in the presence of correlations A known problem with _permutation_ variable importance is that if features are correlated, their importance is typically over-estimated (Strobl et al., 2008), leading to a loss of type-I error control. However, this loss has not been precisely characterized yet, which we will work through for the linear case. We use the setting of (Mi et al., 2021), where the estimator \(\hat{\mu}\), computed with empirical risk minimization under the training set, is used to assess variable importance on a new set of data (test set). We consider a regression model with a least-square loss function for simplicity. The importance of variable \(\mathbf{x^{j}}\) is computed as follows: \[\hat{m}^{j}=\frac{1}{n_{test}}\sum_{i=1}^{n_{test}}\left((y_{i}-\hat{\mu}( \mathbf{x^{(j)}_{i}}))^{2}-(y_{i}-\hat{\mu}(\mathbf{x_{i}}))^{2}\right). \tag{1}\] Let \(\varepsilon_{i}=y_{i}-\mu(\mathbf{x_{i}})\) for \(i\in[n_{test}]\). Re-arranging terms yields \[\hat{m}^{j}= \frac{1}{n_{test}}\sum_{i=1}^{n_{test}}(\hat{\mu}(\mathbf{x_{i}})- \hat{\mu}(\mathbf{x_{i}^{(j)}}))(2\mu(\mathbf{x_{i}})-\hat{\mu}(\mathbf{x_{i}}) -\hat{\mu}(\mathbf{x_{i}^{(j)}})+2\varepsilon_{i}). \tag{2}\] Mi et al. (2021) argued that these terms vanish when \(n_{test}\rightarrow\infty\). But it is not the case as long as the training set is fixed. In order to get tractable computation, we assume that \(\mu\) and \(\hat{\mu}\) are linear functions: \(\mu(\mathbf{x})=\mathbf{x}\mathbf{w}\) and \(\hat{\mu}(\mathbf{x})=\mathbf{x}\hat{\mathbf{w}}\). Let us further consider that \(\mathbf{x^{j}}\) is a null feature, i.e. \(w^{j}=0\). This yields \(\mathbf{x}\mathbf{w}=x^{j}w^{j}+\mathbf{x^{-j}}\mathbf{w^{-j}}=\mathbf{x^{-j} }\mathbf{w^{-j}}\). Denoting the standard dot product by \(\langle.,.\rangle\), this leads to (Detailed proof of getting from Eq. 2 to Eq. 3 can be found in supplement section A) \[\hat{m}^{j}=\frac{2\hat{w}^{j}}{n_{test}}\left<\mathbf{x^{j}}-\{\mathbf{x^{j} }\}^{perm},\mathbf{X^{-j}}(\mathbf{w^{-j}}-\mathbf{\hat{w}^{-j}})+\varepsilon\right> \tag{3}\] as \((\|\mathbf{x^{j}}\|^{2}-\|\{\mathbf{x^{j}}\}^{perm}\|^{2})=0\). 
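Equation (1) is straightforward to compute once \(\hat{\mu}\) has been fit on a separate training set. Before continuing the derivation, the following minimal sketch (regression loss; the `model`, data arrays and number of shuffles are placeholders, not objects defined in the paper) makes the procedure concrete.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def permutation_importance(model, X_test, y_test, j, n_repeats=50, rng=None):
    """Standard (marginal) permutation importance of variable j, as in eq. (1).

    Returns the average increase in test MSE when column j is shuffled,
    for a model already fit on a separate training set.
    """
    rng = np.random.default_rng(rng)
    base_loss = np.mean((y_test - model.predict(X_test)) ** 2)
    scores = []
    for _ in range(n_repeats):
        X_perm = X_test.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])     # break the x^j <-> y link
        perm_loss = np.mean((y_test - model.predict(X_perm)) ** 2)
        scores.append(perm_loss - base_loss)
    return np.mean(scores)

# Toy usage with a linear model fit on a separate training half
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5)); y = X[:, 0] + rng.normal(size=400)
lr = LinearRegression().fit(X[:200], y[:200])
print(permutation_importance(lr, X[200:], y[200:], j=3))   # null variable -> near 0
```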
Next, returning to the derivation, \(\frac{1}{n_{test}}\langle\{\mathbf{x^{j}}\}^{perm},\mathbf{X^{-j}}(\mathbf{w^{-j}}-\mathbf{\hat{w}^{-j}})\rangle\to 0\) and \(\frac{1}{n_{test}}\langle\mathbf{x^{j}}-\{\mathbf{x^{j}}\}^{perm},\varepsilon\rangle\to 0\) when \(n_{test}\rightarrow\infty\), with speed \(\frac{1}{\sqrt{n_{test}}}\) from the Berry–Esseen theorem, assuming that the first three moments of these quantities are bounded and that the test samples are i.i.d. Let us assume that the correlation within \(\mathbf{X}\) takes the following form: \(\mathbf{x^{j}}=\mathbf{X^{-j}}\mathbf{u}+\boldsymbol{\delta}\), where \(\mathbf{u}\in\mathbb{R}^{p-1}\) and \(\boldsymbol{\delta}\) is a random vector independent of \(\mathbf{X^{-j}}\). By contrast, \(\frac{2\hat{w}^{j}}{n_{test}}\langle\mathbf{x^{j}},\mathbf{X^{-j}}(\mathbf{w^{-j}}-\mathbf{\hat{w}^{-j}})\rangle\) has a non-zero limit \(2\hat{w}^{j}\mathbf{u}^{T}Cov(\mathbf{X^{-j}})(\mathbf{w^{-j}}-\mathbf{\hat{w}^{-j}})\), where \(Cov(\mathbf{X^{-j}})=\lim_{n_{test}\rightarrow\infty}\frac{{\mathbf{X^{-j}}}^{T}\mathbf{X^{-j}}}{n_{test}}\) (remember that both \(\mathbf{w^{-j}}\) and \(\mathbf{\hat{w}^{-j}}\) are fixed, because the training set is fixed). Thus, the permutation importance of a null but correlated variable does not vanish when \(n_{test}\rightarrow\infty\), implying that this inference scheme will lead to false positives. ## 4 _Conditional sampling_-based feature importance ### Main result We define the permutation of variable \(x^{j}\) conditional to \(\mathbf{x^{-j}}\) as a variable \(\tilde{x}^{j}\) that retains the dependency of \(x^{j}\) with respect to the other variables in \(\mathbf{x^{-j}}\), but where the independent part is shuffled; \(\mathbf{\tilde{x}^{(j)}}\) is the vector \(\mathbf{x}\) where \(x^{j}\) is replaced by \(\tilde{x}^{j}\). We propose two constructions below (see Fig. E1). In the case of regression, this leads to the following importance estimator: \[\hat{m}^{j}_{CPI}=\frac{1}{n_{test}}\sum_{i=1}^{n_{test}}\left((y_{i}-\hat{\mu}(\mathbf{\tilde{x}_{i}^{(j)}}))^{2}-(y_{i}-\hat{\mu}(\mathbf{x_{i}}))^{2}\right). \tag{4}\] As noted by Watson and Wright (2021), this inference is correct, as in traditional permutation tests, as long as one wishes to perform inference conditional on \(\hat{\mu}\). However, the following proposition states that this inference has much wider validity in the asymptotic regime. **Proposition**.: _Assuming that the estimator \(\hat{\mu}\) is obtained from a class of functions \(\mathcal{F}\) with sufficient regularity, i.e. that it meets conditions (A1, A2, A3, A4, B1 and B2) defined in supplementary material, the importance score \(\hat{m}^{j}_{CPI}\) defined in (4) cancels when \(n_{train}\rightarrow\infty\) and \(n_{test}\rightarrow\infty\) under the null hypothesis, i.e. the \(j\)-th variable is not significant for the prediction. Moreover, the Wald statistic \(z^{j}=\frac{mean(\hat{m}^{j}_{CPI})}{std(\hat{m}^{j}_{CPI})}\) obtained by dividing the mean of the importance score by its standard deviation asymptotically follows a standard normal distribution._ This implies that in the large sample limit, the p-value associated with \(z^{j}\) controls the type-I error rate for all optimal estimators in \(\mathcal{F}\). The proof of the proposition is given in the supplement (section C).
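As a small illustration of the proposition, the snippet below turns a vector of per-sample CPI scores into the Wald statistic \(z^{j}\) and a one-sided p-value. The per-sample scores and the plain standard-deviation denominator are simplifying assumptions; the exact plug-in variance estimator used in Algorithm 1 is described in the next subsection.

```python
import numpy as np
from scipy.stats import norm

def wald_pvalue(per_sample_scores):
    """Wald-type test of the null 'importance = 0' (simplified reading of the Proposition)."""
    scores = np.asarray(per_sample_scores, dtype=float)
    z = scores.mean() / scores.std()
    return z, 1.0 - norm.cdf(z)      # one-sided p-value: importance > 0

# Scores fluctuating around zero (a null variable) should give a large p-value
rng = np.random.default_rng(0)
print(wald_pvalue(rng.normal(0.0, 1.0, size=500)))
```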
The proof consists in observing that the importance score defined in (4) is \(0\) for the class of learners discussed in (Williamson et al., 2021), namely those that meet a certain set of convergence guarantees and are invariant to arbitrary changes of their \(j^{th}\) argument, conditional on the others. In the supplement, we also restate the precise technical conditions under which the importance score \(\hat{m}^{j}_{CPI}\) used is (asymptotically) valid, i.e. leads to a Wald-type statistic that behaves as a standard normal under the null hypothesis. It is easy to see that for the setting in Sec. 3.2, all terms in Eq. 4 vanish with speed \(\frac{1}{\sqrt{n_{test}}}\). ### Practical estimation Next, we present algorithms for computing conditional permutation importance. We propose two constructions for \(\tilde{x}^{j}\), the conditionally permuted counterpart of \(x^{j}\). The first one is additive: on test samples, \(x^{j}\) is divided into a predictable and a random part, \(\tilde{x}^{j}=\mathbb{E}(x^{j}|\mathbf{x}^{-\mathbf{j}})+\left(x^{j}-\mathbb{E}(x^{j}|\mathbf{x}^{-\mathbf{j}})\right)^{perm}\), where the residuals of the regression of \(x^{j}\) on \(\mathbf{x}^{-\mathbf{j}}\) are shuffled to obtain \(\tilde{x}^{j}\). In practice, the expectation is obtained by a universal but efficient estimator, such as a random forest trained on the test set. The other possibility consists in using a random forest (RF) model to fit \(x^{j}\) from \(\mathbf{x}^{-\mathbf{j}}\) and then sample the prediction within leaves of the RF. Random shuffling is applied \(B\) times. For instance, using the additive construction, a shuffling of the residuals \(\tilde{\mathbf{c}}^{\mathbf{j},\mathbf{b}}\) for a given \(b\in\llbracket B\rrbracket\) allows us to reconstruct the variable of interest as the sum of the predicted version and the shuffled residuals, that is \[\mathbf{\tilde{x}}^{\mathbf{j},\mathbf{b}}=\mathbf{\hat{x}}^{\mathbf{j}}+\tilde{\mathbf{c}}^{\mathbf{j},\mathbf{b}}. \tag{5}\] Let \(\mathbf{\tilde{X}}^{\mathbf{j},\mathbf{b}}=(\mathbf{x}^{\mathbf{1}},\ldots,\mathbf{x}^{\mathbf{j}-\mathbf{1}},\mathbf{\tilde{x}}^{\mathbf{j},\mathbf{b}},\mathbf{x}^{\mathbf{j}+\mathbf{1}},\ldots,\mathbf{x}^{\mathbf{p}})\in\mathbb{R}^{n_{test}\times p}\) be the new design matrix including the reconstructed version of the variable of interest \(\mathbf{x}^{\mathbf{j}}\). Both \(\mathbf{\tilde{X}}^{\mathbf{j},\mathbf{b}}\) and the target vector \(\mathbf{y}\) are fed to the loss function in order to compute a loss score \(l_{i}^{j,b}\in\mathbb{R}\) defined by \[l_{i}^{j,b}=\left\{\begin{array}{l}y_{i}\log\left(\frac{S(\hat{y}_{i})}{S(\tilde{y}_{i}^{b})}\right)+(1-y_{i})\log\left(\frac{1-S(\hat{y}_{i})}{1-S(\tilde{y}_{i}^{b})}\right)\\ (y_{i}-\tilde{y}_{i}^{b})^{2}-(y_{i}-\hat{y}_{i})^{2}\end{array}\right. \tag{6}\] for the binary and regression cases, respectively, where \(i\in\llbracket n_{test}\rrbracket\) indexes a test sample of the dataset, \(j\in\llbracket p\rrbracket\), \(b\in\llbracket B\rrbracket\), \(\hat{y}_{i}=\hat{\mu}(\mathbf{x}_{\mathbf{i}})\), \(\tilde{y}_{i}^{b}=\hat{\mu}(\mathbf{\tilde{x}}_{\mathbf{i}}^{\mathbf{j},\mathbf{b}})\) is the new fitted value following the reconstruction of the variable of interest with the \(b^{th}\) shuffling of the residuals, and \(S(x)=\frac{1}{1+e^{-x}}\). The variable importance scores are computed as the double average over the number of permutations \(B\) and the number of test samples \(n_{test}\) (line 15 of Alg. 1), while their standard deviations are computed as the square root of the average over the test samples of the quadratic deviation over the number of permutations (line 17). Note that, unlike Williamson et al. (2021), the variance estimator is non-vanishing, and thus can be used as a plugin. A \(z_{CPI}^{j}\) statistic is then computed by dividing the mean of the corresponding importance scores by the corresponding standard deviation (line 18). P-values are computed using the cumulative distribution function of the standard normal distribution (line 19). The conditional sampling and inference steps are summarized in Algorithm 1. This leads to the _CPI-DNN_ method when \(\hat{\mu}\) is a deep neural network, or _CPI-RF_ when \(\hat{\mu}\) is a random forest. A supplementary analysis reporting the computational advantage of _CPI-DNN_ over a remove-and-relearn alternative, a.k.a. _LOCO-DNN_, can be found in the supplement (section D); it justifies the _computational leanness_ of _CPI-DNN_.
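The sketch below is a simplified re-implementation of Algorithm 1 for the regression case with the additive construction; it is not the authors' released code (see the GitHub repository referenced above), and the base learner, the random-forest settings, the number of permutations and the variance estimator are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor

def cpi_importance(model, X_test, y_test, j, n_perm=50, seed=0):
    """Sketch of CPI (Algorithm 1, additive construction) for one variable j."""
    rng = np.random.default_rng(seed)
    X_minus_j = np.delete(X_test, j, axis=1)

    # Conditional expectation E(x^j | x^-j) learned on the test set (section 4.2)
    rf = RandomForestRegressor(n_estimators=100, random_state=seed)
    rf.fit(X_minus_j, X_test[:, j])
    x_j_hat = rf.predict(X_minus_j)
    residuals = X_test[:, j] - x_j_hat

    base_loss = (y_test - model.predict(X_test)) ** 2             # per-sample losses
    loss_diff = np.empty((n_perm, len(y_test)))
    for b in range(n_perm):
        X_tilde = X_test.copy()
        X_tilde[:, j] = x_j_hat + rng.permutation(residuals)      # equation (5)
        loss_diff[b] = (y_test - model.predict(X_tilde)) ** 2 - base_loss  # eq. (6)

    per_sample = loss_diff.mean(axis=0)         # average over permutations
    importance = per_sample.mean()              # line 15: double average
    z = importance / per_sample.std()           # simplified Wald statistic (Proposition)
    return importance, 1.0 - norm.cdf(z)        # one-sided p-value (line 19)

# Toy usage with correlated Gaussian inputs, where x^j (j=2) is a null variable
rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(3), [[1, .8, .8], [.8, 1, .8], [.8, .8, 1]], 600)
y = X[:, 0] + rng.normal(size=600)
learner = RandomForestRegressor(n_estimators=100, random_state=1).fit(X[:300], y[:300])
print(cpi_importance(learner, X[300:], y[300:], j=2))             # expect a large p-value
```

In the paper's experiments, the learner `model` is a deep neural network (_CPI-DNN_) or a random forest (_CPI-RF_); any estimator exposing a `predict` method could be plugged into this sketch.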
## 5 Experiments & Results In all experiments, we refer to the original implementation of the different methods in order to maintain a fair comparison. Regarding the _Permfit-DNN_, _CPI-DNN_ and _CPI-RF_ models specifically, our implementation involves a 2-fold internal validation (the training set is further split to obtain a validation set for hyperparameter tuning). The scores from the different splits are then concatenated to compute the final variable importance. We focus on the _Permfit-DNN_ and _CPI-DNN_ importance estimators that use a deep neural network as learner \(\hat{\mu}\), using standard permutation and Algorithm 1, respectively. All experiments are performed with \(100\) runs. The evaluation metrics are detailed in the supplement (section E). ### Experiment 1: Type-I error control and accuracy when increasing variable correlation We compare the performance of _CPI-DNN_ with that of _Permfit-DNN_ by applying both methods across different correlation scenarios. The data \(\{\mathbf{x}_{i}\}_{i=1}^{n}\) follow a Gaussian distribution with a prescribed covariance structure \(\mathbf{\Sigma}\), i.e. \(\mathbf{x}_{i}\sim\mathcal{N}(0,\mathbf{\Sigma})\ \forall i\in\llbracket n\rrbracket\). We consider a block-designed covariance matrix \(\mathbf{\Sigma}\) of 10 blocks with an equal correlation coefficient \(\rho\in\{0,0.2,0.5,0.8\}\) among the variables of each block. In this experiment, \(p=100\) and \(n=300\). The first variable of each of the first 5 blocks is chosen to predict the target \(y\) with the following model, where \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\): \[y_{i}=x_{i}^{1}+2\log(1+2(x_{i}^{11})^{2}+(x_{i}^{21}+1)^{2})+x_{i}^{31}x_{i}^{41}+\epsilon_{i},\ \forall i\in\llbracket n\rrbracket\] The AUC score and type-I error are presented in Fig. 1. Power and computation time are reported in the supplement Fig. 1 - S1. Based on the AUC scores, _Permfit-DNN_ and _CPI-DNN_ showed virtually identical performance. However, _Permfit-DNN_ lost type-I error control when the correlation in \(\mathbf{X}\) was increased, while _CPI-DNN_ always controlled the type-I error at the targeted rate. ### Experiment 2: Performance across different settings In the second setup, we check whether _CPI-DNN_ and _Permfit-DNN_ control the type-I error with an increasing total number of samples \(n\). The data are generated as previously, with a correlation \(\rho=0.8\). We fix the number of variables \(p\) to \(50\) while the number of samples \(n\) increases from \(100\) to \(1000\) with a step size of \(100\).
We use 5 different models to generate the outcome \(\mathbf{y}\) from \(\mathbf{X}\): _classification_, _Plain linear_, _Regression with ReLu_, _Interactions only_ and _Main effects with interactions_. Further details regarding each data-generating scenario can be found in supplement (section G). Figure 1: **CPI-DNN vs Permfit-DNN: Performance at detecting important variables on simulated data with \(n=300\) and \(p=100\). (A): The type-I error quantifies to which extent the rate of low p-values (\(p<0.05\)) exceeds the nominal false positive rate. (B): The AUC score measures to which extent variables are ranked consistently with the ground truth. Dashed line: targeted type-I error rate. Solid line: chance level.** The AUC score and type-I error of _Permfit-DNN_ and _CPI-DNN_ are shown as a function of sample size in Fig. 2. The accuracy of the two methods was similar across data-generating scenarios, with a slight reduction in the AUC scores of _Permfit-DNN_ as compared to _CPI-DNN_. Only _CPI-DNN_ controlled the rate of type-I error in the different scenarios at the specified level of \(0.05\). Thus, _CPI-DNN_ provided an accurate ranking of the variables according to their importance score while, at the same time, controlling for the type-I error in all scenarios. ### Experiment 3: Performance benchmark across methods In the third setup, we include _Permfit-DNN_ and _CPI-DNN_ in a benchmark with other state-of-the-art methods for variable importance using the same setting as in Experiment 2, while fixing the total number of samples \(n\) to \(1000\). We consider the following methods: * Marginal Effects: A univariate linear model is fit to explain the response from each of the variables separately. The importance scores are then obtained from the ensuing p-values. * Conditional-RF (Strobl et al., 2008): A conditional variable importance approach based on a Random Forest model. This method provides p-values. * d\({}_{0}\)CRT (Liu et al., 2021; Nguyen et al., 2022): The Conditional Randomization Test with distillation, using a sparse linear or logistic learner. * Lazy VI (Gao et al., 2022). * Permfit-DNN (Mi et al., 2021). * LOCO (Lei et al., 2018): This method applies the remove-and-retrain approach. * cpi-knockoff (Watson and Wright, 2021): Similar to CPI-RF, but permutation steps are replaced by a sampling step with a knockoff sampler. * CPI-RF: This corresponds to the method in Alg. 1, where \(\hat{\mu}\) is a Random Forest. * CPI-DNN: This corresponds to the method in Alg. 1, where \(\hat{\mu}\) is a DNN. The extensive benchmarks on baselines and competing methods that provide p-values are presented in Fig. 3. For type-I error, \(d_{0}\)_CRT, CPI-RF_, _CPI-DNN_, _LOCO_ and _cpi-knockoff_ provided reliable control, whereas Marginal effects, _Permfit-DNN_, _Conditional-RF_ and _Lazy VI_ showed less consistent results across scenarios. For AUC, we observed that marginal effects performed poorly, as they do not use a proper predictive model. _LOCO_ and _cpi-knockoff_ behave similarly. \(d_{0}\)_CRT_ performed well when the data-generating model was linear and did not include interaction effects. _Conditional-RF_ and _CPI-RF_ showed reasonable performance across scenarios. Finally, _Permfit-DNN_ and _CPI-DNN_ outperformed all the other methods, closely followed by _Lazy VI_. Additional benchmarks on popular methods that do not provide p-values, e.g. 
BART (Chipman et al., 2010) or local and instance-based methods such as Shapley values (Kumar et al., 2020), are reported in the supplement (section H). The performance of these methods in terms of power and computation time is reported in the supplement Figs. 3 - S2 & 3 - S3, respectively. Figure 2: **Model comparisons across data-generating scenarios**: The **(A)** type-I error and **(B)** AUC scores of _Permfit-DNN_ and _CPI-DNN_ are plotted as a function of sample size for five different settings. The number \(n\) of samples increased from \(100\) to \(1000\) with a step size of \(100\). The number of variables \(p\) was set to 50. Dashed line: targeted type-I error rate. Solid line: chance level. Additional inspection of power showed that across data-generating scenarios, _CPI-DNN_, _Permfit-DNN_ and _Conditional-RF_ showed strong results. _Marginal_ effects and _d0CRT_ performed well only in scenarios without interaction effects. _CPI-RF_, _cpi-knockoff_, _LOCO_ and _Lazy VI_ performed poorly. Finally, to put estimated variable importance in perspective with model capacity, we benchmarked the prediction performance of the underlying learning algorithms in the supplement Fig. 3 - S4. ### Experiment 4: _Permfit-DNN_ vs _CPI-DNN_ on the real UKBB dataset Large-scale simulations comparing the performance of _CPI-DNN_ and _Permfit-DNN_ are reported in the supplement (section L). We conducted an empirical study of variable importance in a biomedical application using the non-conditional permutation approach Permfit-DNN (no statistical guarantees for correlated inputs) and the safer CPI-DNN approach. A recent real-world data analysis of the UK Biobank dataset reported successful machine learning analysis of individual characteristics. The UK Biobank project (UKBB) curates phenotypic and imaging data from a prospective cohort of volunteers drawn from the general population of the UK [Constantinescu et al., 2022]. The data is provided by the UKBB operating within the terms of an Ethics and Governance Framework. The work focused on age, cognitive function and mood from brain images and social variables and put the ensuing models in relation to individual life-style choices regarding sleep, exercise, alcohol and tobacco [Dadi et al., 2021]. A coarse analysis of variable importance was presented, in which entire blocks of features were removed. It suggested that variables measuring brain structure or brain activity were less important for explaining the predictions of cognitive or mood outcomes than socio-demographic characteristics. On the other hand, brain imaging phenotypes were highly predictive of the age of a person, in line with the brain-age literature [Cole and Franke, 2017]. In this benchmark, we explored variable-level importance rankings provided by the _CPI-DNN_ and _Permfit-DNN_ methods. The real-world empirical benchmarks on predicting personal characteristics and life-style are summarized in Fig. 4. Results in panel **(A)** suggest that the highest agreement between the rankings of _CPI-DNN_ and _Permfit-DNN_ was achieved for social variables (bottom left, orange squares). At the same time, _CPI-DNN_ flagged more brain-related variables as relevant (bottom right, circles). We next computed counts and percentages and broke down the results by variable domain (Fig. 4, **B**). Naturally, the total relevance for brain versus social variables varied by outcome.
However, as a tendency, _CPI-DNN_ seemed more selective, as it flagged fewer variables as important (blue) beyond those flagged as important by both methods (light blue). This was more pronounced for social variables, where _CPI-DNN_ sometimes added no further variables. Figure 3: **Extended model comparisons**: _CPI-DNN_ and _Permfit-DNN_ were compared to baseline models (outer columns) and competing approaches across data-generating scenarios (inner columns). Prediction tasks were simulated with \(n\) = 1000 and \(p\) = 50. **(A)**: Type-I error. **(B)**: AUC scores. Dashed line: targeted type-I error rate. Solid line: chance level. As expected given the impact of aging on brain structure and function, brain data was most important for age prediction compared to other outcomes. Interestingly, most disagreements between the methods occurred in this setting, as _CPI_ rejected 16 out of 66 brain inputs that were found important by _Permfit_. This highlights the importance of correlations between brain variables, which lead to spurious importance findings with _Permfit_. We further explored the utility of our approach for age prediction from neuromagnetic recordings (Engemann et al., 2020) and observed that _CPI-DNN_ readily selected relevant frequency bands without fine-tuning the approach (section M in the supplement). ## 6 Discussion In this work, we have developed a framework for studying the behavior of marginal and conditional permutation methods and proposed the _CPI-DNN_ method, which was inspired by the limitations of the _Permfit-DNN_ approach. Both methods build on top of an expressive DNN learner, and both methods turned out to be superior to competing methods at detecting relevant variables, leading to high AUC scores across various simulated scenarios. However, our theoretical results predicted that _Permfit-DNN_ would not control the type-I error with correlated data, which was precisely what our simulation-based analyses confirmed for different data-generating scenarios (Fig. 1 - 2). Other popular methods (Fig. 3) showed similar failures of type-I error control across scenarios or only worked well in a subset of tasks. Instead, _CPI-DNN_ achieved control of type-I errors by upgrading the _permutation_ to a _conditional permutation_. The consequences were pronounced for correlated predictive features arising from generative models with product terms, which was visible even with a small fraction of data points for model training. Among alternatives, the _Lazy VI_ approach (Gao et al., 2022) obtained an accuracy almost as good as _Permfit-DNN_ and _CPI-DNN_, but with unreliable type-I error control. Taken together, our results suggest that _CPI-DNN_ may be a practical default choice for variable importance estimation in predictive modeling. A practical validation of the standard normal distribution assumption for the non-important variables can be found in the supplement (section N). The _CPI_ approach is generic and can be implemented with any combination of learning algorithms as base learner and conditional-means estimator. _CPI-DNN_ has a linear and quadratic complexity in the number of samples and variables, respectively. This is of concern when modeling the conditional distribution of the variable of interest, which lends itself to high computational complexity. In our work, Random Forests prove to be useful default estimators, as they are computationally lean and their model complexity, given reasonable default choices implemented in standard software, can be well controlled by tuning the tree depth. In fact, our supplementary analyses (section O) suggest that proper hyperparameter tuning was sufficient to obtain good calibration of p-values. As a potential limitation, it is noteworthy that the current configuration of our approach uses a deep neural network as the base learner. Therefore, in general, more samples might be needed for good model performance and, hence, improved model interpretation. Figure 4: **Real-world empirical benchmark**: Prediction of personal characteristics (age, cognition, mood) and life-style habits (alcohol consumption, sleep, exercise & smoking) from various sociodemographic and brain-imaging derived phenotypes in a sample of \(n=8357\) volunteers from the UK Biobank. **(A)** plots variable rankings for _Permfit-DNN_ (x axis) versus _CPI-DNN_ (y axis) across all outcomes. Color: variable domain (brain versus social). Shape: variables classified by both methods as important (squares), unimportant (crosses) or by only one of the methods, _i.e._, _CPI-DNN_ (circles) or _Permfit-DNN_ (triangles). **(B)** presents a detailed breakdown of the percentage and counts of variable classifications split by variable domain. Our real-world data analysis demonstrated that _CPI-DNN_ is readily applicable, providing similar variable rankings as _Permfit-DNN_. The differences observed are hard to judge as the ground truth is not known in this setting. Moreover, accurate variable selection is important to obtain unbiased interpretations, which is relevant for data-rich domains like econometrics, epidemiology, medicine, genetics or neuroscience. In that context, it is interesting that recent work raised doubts about the signal complexity in the UK Biobank dataset (Schulz et al., 2020), which could mean that the underlying predictive patterns are spread out over correlated variables. In the subset of the UK Biobank that we analysed, most variables actually had low correlation values (Fig. E4), which would explain why _CPI-DNN_ and _Permfit-DNN_ showed similar results. Nevertheless, our empirical results seem compatible with our theoretical results, as _CPI-DNN_ flagged fewer variables as important, pointing at stricter control of type-I errors, which is a welcome property for biomarker discovery. When considering two highly correlated variables \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), the corresponding conditional importance of both variables is 0. This problem is linked to the very definition of conditional importance, and not to the _CPI_ procedure itself. The only workaround is to eliminate, prior to the importance analysis, degenerate cases where conditional importance cannot be defined. Therefore, possible future directions include inference on groups of variables, e.g., gene pathways or brain regions, while preserving the statistical control offered by _CPI-DNN_. Acknowledgement This work has been supported by Bertrand Thirion and is supported by the KARAIB AI chair (ANR-20-CHIA-0025-01), and the H2020 Research Infrastructures Grant EBRAIN-Health 101058516. D.E. is a full-time employee of F. Hoffmann-La Roche Ltd.
2302.14845
Long-term modulation of solar cycles
Solar activity has a cyclic nature, with the ~11-year Schwabe cycle dominating its variability on the interannual timescale. However, solar cycles are significantly modulated in length, shape and magnitude, from near-spotless grand minima to very active grand maxima. The ~400-year-long direct sunspot-number series is inhomogeneous in quality and too short to study robust parameters of long-term solar variability. The cosmogenic-isotope proxy extends the timescale to twelve millennia and provides crucial observational constraints on the long-term solar dynamo modulation. Here, we present a brief up-to-date overview of the long-term variability of solar activity at centennial--millennial timescales. The occurrence of grand minima and maxima is discussed, as well as the existing quasi-periodicities such as the centennial Gleissberg, 210-year Suess/de Vries and 2400-year Hallstatt cycles. It is shown that the solar cycles contain an important random component and have no clock-like phase locking, implying a lack of long-term memory. A brief yet comprehensive review of the theoretical perspectives to explain the observed features in the framework of dynamo models is presented, including the nonlinearity and stochastic fluctuations in the dynamo. We keep gaining knowledge of the processes driving solar variability as new data are acquired and new models are developed.
Akash Biswas, Bidya Karak, Ilya Usoskin, Eckhard Weisshaar
2023-02-28T18:45:01Z
http://arxiv.org/abs/2302.14845v1
# Long-term modulation of solar cycles ###### Abstract Solar activity has a cyclic nature, with the \(\approx\)11-year Schwabe cycle dominating its variability on the interannual timescale. However, solar cycles are significantly modulated in length, shape and magnitude, from near-spotless grand minima to very active grand maxima. The \(\approx\)400-year-long direct sunspot-number series is inhomogeneous in quality and too short to study robust parameters of long-term solar variability. The cosmogenic-isotope proxy extends the timescale to twelve millennia and provides crucial observational constraints on the long-term solar dynamo modulation. Here, we present a brief up-to-date overview of the long-term variability of solar activity at centennial - millennial timescales. The occurrence of grand minima and maxima is discussed, as well as the existing quasi-periodicities such as the centennial Gleissberg, 210-year Suess/de Vries and 2400-year Hallstatt cycles. It is shown that the solar cycles contain an important random component and have no clock-like phase locking, implying a lack of long-term memory. A brief yet comprehensive review of the theoretical perspectives to explain the observed features in the framework of dynamo models is presented, including the nonlinearity and stochastic fluctuations in the dynamo. We keep gaining knowledge of the processes driving solar variability as new data are acquired and new models are developed. Solar activity, Solar cycle, Cosmogenic isotopes ## 1 Introduction The Sun is a magnetically active star whose activity is a result of the magnetic dynamo process operating in the Sun's convection zone (see, e.g., Karak et al, 2014; Charbonneau, 2020). Solar surface magnetic activity varies cyclically with a main period of about 11 years (called the Schwabe cycle) or, considering the inversion of the sign of its magnetic polarity, the 22-year Hale cycle. More details can be found in an extensive review by Hathaway (2015). The physics of the dynamo mechanism is currently believed to be reasonably well understood. However, solar cyclicity is far from being a regularly ticking clock and experiences substantial long-term variability at timescales longer than the Schwabe cycle. The solar cycles are not perfectly regular and vary in length, shape, and strength/intensity, or can even enter periods of an almost inactive state, called grand minima of solar activity (e.g., Usoskin, 2017). The standard index quantifying solar activity is related to sunspot numbers, which are available from 1610 AD onward with the quality degrading backwards in time, as discussed in Section 2. On one hand, this 410-year-long series exhibits a great deal of variability, covering the range from an almost spotless period of the Maunder minimum between 1645 - 1715 AD (Eddy, 1976) to an epoch of a very active Sun between 1940 - 2009 called the Modern grand maximum (Solanki et al, 2004; Usoskin et al, 2007). This great variability raises important questions, answers to which can put crucial observational constraints on solar/stellar dynamo theory: * Do the changes between the Maunder minimum and the Modern grand maximum cover the full possible range of solar variability? * Do the grand minima and maxima represent special states of the solar dynamo or simply represent the tails of the distribution? * How typical are these changes? * Do the grand minima episodes appear periodically or randomly? * What physical processes drive such changes?
The four-century-long sunspot number series is not sufficiently long to answer these questions, and a much longer dataset is needed to form a basis for the answers. Fortunately, solar activity can be reliably reconstructed from indirect natural proxy data (cosmogenic radioisotopes) on the timescale of 10 - 12 millennia, during the period of the Holocene with a stable warm climate on Earth, as discussed in Section 3. This reconstruction extends the solar-activity dataset by a factor of about 25 making it possible to perform a thorough statistical analysis of solar variability as discussed in Section 4, while statistical properties of the solar-cycle modulation are summarized in Section 5. In Section 6, we discuss the implications of the long-term solar variability for the solar dynamo theory and our present level of understanding of the related physics. ## 2 Direct Sunspot number series since 1610 Sunspots have been more or less systematically studied since 1610, soon after the invention of the telescope. Thousands of observational records and drawings exist in archives as being continuously recovered and analyzed (e.g., Vaquero and Vazquez, 2009; Arlt and Vaquero, 2020). The most recent and continuously updated database of raw sunspot-group observation is collected at the HASO (Historical Archive of Sunspot Observations, [http://haso.unex.es/haso](http://haso.unex.es/haso) - Vaquero et al, 2016). Despite numerous observational records, it was noticed only in the middle of the 18th century by the Danish astronomer Christian Horrebow and finally confirmed in the early 19th century by the German observer Heinrich Schwabe, that the number of sunspots varies cyclicly with about 10-year period. This cycle was later shown to be of about 11 years mean length and appears to be a fundamental feature of solar activity and is now called the _Schwabe_ cycle. More details of the sunspot number measurements and reconstructions can be found elsewhere in this volume or in comprehensive reviews by Hathaway (2015) and Usoskin (2017). Figure 1: Annual sunspot activity for the last centuries based on direct sunspot observations: a) International sunspot number series version 2 from SILSO ([http://sidc.be/silso/datafiles](http://sidc.be/silso/datafiles)). b) Number of sunspot groups according to Hoyt and Schatten (1998, – HS98) and Usoskin et al (2016b, – U16). Approximate dates of the Maunder minimum (MM) and Dalton minimum (DM) are shown in the lower panel. Standard (Zürich) cycle numbering is shown between the panels. Cycles during the MM are only indicative as provided by Usoskin et al (2000). ### Wolf sunspot series \(R_{\rm W}\) and International sunspot number \(R_{\rm I}\) Following the discovery of the solar cycle, Rudolf Wolf from Zurich Observatory founded a synthetic index called the sunspot number presently known as Wolf or Zurich sunspot number \(R_{\rm W}\) (WSN) defined as \[R_{\rm W}=k\cdot(10\cdot G+S), \tag{1}\] where \(G\) and \(S\) are the numbers of sunspot groups and all sunspots, including those in groups, respectively, visible on the solar disc during a given day by the primary observer whose quality scaling factor \(k\) is set to reduce his/her counts to the reference observer with \(k\equiv\)1. Obviously, the sunspot number is not the same as the number of spots, and for a single sunspot, \(R_{\rm W}\)=11 assuming \(k\)=1. This series, constructed by R. 
Wolf in 1861 using his own and recovered earlier observations, formally covered the period since 1749 (solar cycle SC #1 in Wolf's numbering), but in fact, it was more or less reliable only since the 1820's when H. Schwabe started his observations. Later it was extended back to 1700 with unreliable data. The compilation of the \(R_{\rm W}\) was continued at Zurich by Wolf's successors Wolfer, Brunner, Waldmeier and Koeckelenbergh until 1981 when the formation of the sunspot series was transferred to the Royal Observatory of Belgium (Clette et al, 2007). Until 1981, the \(R_{\rm W}\) was constructed considering the observation of only one primary observer for each day, all other observations were discarded. This series could not, till now, be revisited or redone because of the lack of original raw data. Accordingly, when several apparent inhomogeneities were found in the standard Wolf sunspot series (Leussu et al, 2013; Clette et al, 2014; Lockwood et al, 2014), only step-wise corrections to the old series could be done (Clette et al, 2014; Clette and Lefevre, 2016). This 'corrected' sunspot series is known as the International sunspot number series version 2.0, \(R_{\rm I}\)(2.0), and is available at the SILSO (Sunspot Index and Long-term Solar Observations, https://www. sidc.be/silso/datafiles) formally since 1700. The \(R_{\rm I}\)(2.0) is shown in Figure 1 along with the standard Zurich sunspot cycle numbering. Although the update of the series was through several adjustments of scaling jumps, an important effort is currently done by the community to restore and digitize old raw data (Clette et al, 2021) so that it will be possible to redo the sunspot number series from scratch increasing its reliability and assessing realistic uncertainties. ### Group sunspot number series \(G_{\rm N}\) Since the sunspot number (Equation 1) includes both numbers of sunspot groups (weighted by a factor of 10) and individual sunspots, it is sensitive to the quality of observations. This was addressed by Hoyt and Schatten (1998) who noticed that sunspot groups are defined more reliably than individual spots and created the group sunspot number series \(G_{\rm N}\) which is simply the number of sunspot groups \(G\) on the solar disc corrected for the observer's quality. This series is shown in Figure 1b. Sometimes it is scaled up to match the values typical for \(R_{\mathrm{W}}\). However, contrary to \(R_{\mathrm{W}}\), \(G_{\mathrm{N}}\) is based on the average of all available observations for each day, not only the primary ones. Another principal difference between \(R_{\mathrm{W}}\) and \(G_{\mathrm{N}}\) is that Hoyt and Schatten (1998) created and published a full database of raw data they used to construct the \(G_{\mathrm{N}}\) series. Accordingly, this series can be completely redone as a whole, without limitation to the 'correction factors'. It was recognized that the original \(G_{\mathrm{N}}\) underestimated solar activity during the 19-th century (Clette et al, 2014), and several efforts have been made to revisit it using different methodologies and inter-calibrations (e.g., Svalgaard and Schatten, 2016; Usoskin et al, 2016; Chatzistergos et al, 2017; Willamo et al, 2017). One of the reconstructions is also shown in Figure 1b. However, these new series often moderately disagree with each other illustrating the problem of compiling a homogeneous series from individual raw datasets (Munoz-Jaramillo and Vaquero, 2019). 
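As a small numerical illustration of Eq. (1) and of the group-number definition above, the snippet below evaluates both indices for a hypothetical daily observation; the function names and example counts are ours and purely illustrative.

```python
def wolf_number(groups: int, spots: int, k: float = 1.0) -> float:
    """Wolf/Zurich sunspot number R_W = k * (10*G + S), Eq. (1).

    groups : number of sunspot groups G visible on the disc,
    spots  : total number of individual spots S (including those inside groups),
    k      : observer quality factor (k = 1 for the reference observer).
    """
    return k * (10 * groups + spots)

def group_number(groups: int, k: float = 1.0) -> float:
    """Group sunspot number G_N: simply the (quality-corrected) group count."""
    return k * groups

# A single isolated spot gives R_W = 11 (one group containing one spot), as noted in the text.
assert wolf_number(groups=1, spots=1) == 11
# A hypothetical day with 3 groups and 17 spots, counted by an observer with k = 0.8:
print(wolf_number(groups=3, spots=17, k=0.8))   # 37.6
print(group_number(groups=3, k=0.8))            # 2.4
```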
It is presently impossible to decide between different reconstructions of the group sunspot series, but the zoo of those gives a clue of what the related uncertainties are, and presently they are bounded by the series of Svalgaard and Schatten (2016) from the top and from below by Hoyt and Schatten (1998). ## 3 Cosmogenic-isotope-based reconstructions of long-term solar variability The sunspot number series covers ca. 410 years in the past with the quality degrading back in time (Munoz-Jaramillo and Vaquero, 2019) and principally cannot be extended before the 17-th century because of the lack of instrumental data. Unaided (naked-eye) observations of sunspots do not provide systematic quantitative information on solar activity (Usoskin, 2017). There are some other proxy-based indices of solar activity, such as geomagnetic or heliospheric activity, and radio-emission of the Sun, but they all are based on scientific measurements and typically do not go beyond the middle of the 19-th century. Fortunately, there is one solar-activity proxy which can help in reconstructing solar variability on the multi-millennial timescale. This is related to cosmogenic radioisotopes which are produced and preserved in dateable archives in a natural way. ### Method of cosmogenic isotopes Solar surface magnetic activity and hot corona create the solar wind which is a supersonic outflow of solar coronal plasma permanently emitted from the Sun (see, e.g., Vidotto, 2021). Because of its high conductivity, solar wind drags away the solar magnetic field which appears 'frozen' in the solar-wind plasma. This wind radially expands forming the heliosphere, a region of about 200 astronomical units across which is totally controlled (in the magnetohydrodynamical sense) by the solar wind and magnetic field (e.g., Owens and Forsyth, 2013). The heliosphere makes an obstacle for charged highly energetic particles of galactic cosmic rays (GCRs) which permanently bombard it isotropically with nearly constant flux. Inside the heliosphere, cosmic rays are affected by four major processes, viz. scattering and diffusion on magnetic irregularities, convection by expanding solar wind, adiabatic cooling, and large-scale drifts. All these processes are ultimately driven by solar activity leading to the solar modulation of cosmic-ray flux near Earth so that the cosmic-ray flux is stronger when solar activity is weak and vice-versa (e.g., Potgieter, 2013). Thus, knowing the modulated flux of GCRs at a moment in time, one can assess the level of solar activity slightly before that (within one year - Koldobskiy et al, 2022). Of course, there were no scientific cosmic-ray detectors in the distant past, but there is a natural cosmic-ray monitor - cosmogenic radioisotopes. Cosmogenic radioisotopes are unstable nuclides, which cannot survive from the time of the solar-system formation, and whose main source is related to nuclear reactions caused by cosmic rays in the Earth's atmosphere (Beer et al, 2012). After production in the atmosphere by GCR, nuclides can be stored in natural independently dateable archives, such as tree trunks, polar ice cores, lake/marine sediments, etc. Accordingly, the flux of GCR can be estimated in the past by measuring the abundance of such isotopes in the archives, forming the only quantitative proxy of solar activity over long timescales (see more details in Beer, 2000; Usoskin, 2017). 
The most important cosmogenic isotopes are \({}^{14}\)C 'radiocarbon' (half-life 5730 years) measured in dendrochronologically dated tree rings and \({}^{10}\)Be (\(\approx 1.4\cdot 10^{6}\) years) measured in glaciologically dated ice cores. Conversion between the measured isotope concentration and production by cosmic rays requires a knowledge of the isotope's transport and deposition processes which are currently well modelled (e.g., Roth and Joos, 2013; Heikkila et al, 2013; Golubenko et al, 2021). Additionally, it needs to be corrected for the changing geomagnetic field (e.g., Pavon-Carrasco et al, 2018), and the resulting variability can be attributed to solar activity. The conversion from the cosmic-ray modulation to the heliospheric properties (open solar flux) and then to the pseudo-sunspot numbers is done via a chain of physics-based models making it possible to reconstruct solar activity and the related uncertainties (see, e.g., Usoskin, 2017; Wu et al, 2018). ### Holocene (\(\approx\)12 kyr) decadal reconstruction While the idea of the use of cosmogenic-isotope data as a proxy to solar activity has been discussed since long (Stuiver, 1961; Lal and Peters, 1962), first approaches were empirical as based on timescale separation of the cosmogenic data: timescales longer than 500 years were thought to be caused by changes in the large-scale geomagnetic field, while shorter time scales - by solar activity (Damon and Sonett, 1991). That approach made it possible to identify grand solar minima (Eddy, 1976; Stuiver and Braziunas, 1989) but was unable to provide a quantitative reconstruction of solar activity because both factors are important at the centennial timescales. A full reconstruction of solar activity from cosmogenic-isotope data became possible only after the development of models of cosmic-ray-induced atmospheric cascades (Masarik and Beer, 1999). The first quantitative reconstruction of solar activity using a physics-based approach was made by Usoskin et al (2003) on the millennial time scale (see also Solanki et al, 2004). Later the reconstructions were extended to the Holocene (the present period of stable warm climate lasting for about 12 millennia) using different cosmogenic isotopes (e.g., Vonmoos et al, 2006; Steinhilber et al, 2012; Usoskin et al, 2016). The most recent and accurate multi-millennial solar-activity reconstruction by Wu et al (2018) is based on a multi-proxy Bayesian approach providing also realistic uncertainties. It is shown in Figure 2. One can see that solar activity varies essentially between the grand minima, visible at sharp dips down to 10 - 20 (in sunspot number, SN), and grand maxima when SN exceeds 60, while most of the time the solar-activity level remains moderate at SN\(=40\pm 10\) (see more detail in Usoskin et al, 2014). The results of an analysis of the solar-activity variability are reviewed in Section 4. Because of the low time resolution of the cosmogenic-isotope throughout the Holocene (typically decadal - see, e.g., Reimer et al, 2020), reconstructions of solar activity are also usually limited to the 10-year resolution being thus unable to resolve individual solar cycles. Long-term reconstructions of solar activity are limited to the Holocene timescale because of the stable climate so that the standard models of the isotope atmospheric transport and deposition can apply. 
However, for the ice-age-type of climate, the properties of the atmospheric transport are quite uncertain including the large-scale atmospheric and ocean circulation, which prevents quantitative assessment of solar activity. At present, there is no model which is able to handle this in a satisfactory manner, but progress is expected in the future. Figure 2: Multi-proxy reconstruction of the decadal sunspot numbers (in the classical Wolf’s definition) over the last nine millennia, along with the 1\(\sigma\) uncertainties (Wu et al, 2018). The blue and red dashed lines approximately denote the low (Grand minimum) and high states of solar activity. ### \(\approx\)100 solar cycles reconstructed Thanks to the recent technological progress, high-precision measurements of annual \({}^{14}\)C concentrations have been performed with the annual resolution for the last millennium (Brehm et al, 2021). It allowed us to make, by applying the physics-based model, the first reliable reconstruction of individual solar cycles beyond the epoch of telescopic observations (Usoskin et al, 2021) as shown in Figure 3. Four known grand minima are seen - Oort, Wolf, Sporer and Maunder minima, and between the minima, there are clear solar cycles of variable amplitude. In this way, 85 individual solar cycles have been reconstructed from \({}^{14}\)C of which 35 cycles are reasonably and well resolved, 21 are poorly and 29 are not reliably resolved, mostly during the grand minima of activity. Overall, including both direct solar observations and proxy-based reconstructions, we now have information on 96 solar cycles of which 50 are well resolved, thus nearly tripling the extent of the solar-cycle knowledge and doubling the number of well-defined cycles. The extended statistic made it possible to perform a primary analysis of the solar-cycle parameters. The length of the well-defined cycles was \(10.8\pm 1.4\) years which is in good agreement with \(11.0\pm 1.1\) years known for the ISN dataset. The statistical significance of the Waldmeier rule (solar-cycle height is inversely correlated with the length of the ascending phase - high cycles rise fast) has been confirmed with the extended dataset, implying its robust nature (Usoskin et al, 2021). However, the Gnevyshev-Ohl rule of even-odd cycle pairing (Gnevyshev and Ohl, 1948; Usoskin et al, 2001) has not been confirmed, nor rejected with the extended data. A more detailed analysis of this new dataset is still pending. ## 4 Long-term solar activity With the reconstructed long series, one can investigate properties of solar variability which pose observational constraints crucially important for solar Figure 3: Annual reconstruction, based on high-precision \({}^{14}\)C data, of the sunspot numbers over the last millennium (970 – 1900), along with the 1\(\sigma\) uncertainties (Usoskin et al, 2021). The red curve presents the ISN (v.2) since 1900. Approximate periods of the Oort (OM), Wolf (WM), Spörer (SM) and Maunder (MM) grand minima are indicated in blue letters. physics but cannot be set by the too short-ranging conventional direct telescopic observations of the Sun. While the 11-year solar cycle forms the main feature of solar activity, the cycles are far from being perfect clock ticks - they vary by both duration and amplitude including periods of greatly suppressed activity, grand minima (see Figure 3). Here we review the most important features of long-term solar variability. 
### Long quasi-periodic variations (Gleissberg, Suess/de Vries, Hallstatt cycles) It is hardly possible to distinguish whether solar variability on a long-term scale (Figure 2) is stochastic/chaotic or (quasi)periodic. Power-spectrum analyses are controversial but generally agree that there are three period ranges with apparent and barely significant variability. An example of the global wavelet power spectrum is shown in Figure 4. One is the centennial variability, called the _Gleissberg_ cycle, which is not a strict periodicity but a characteristic period range between 60 - 140 years (e.g., Peristykh and Damon, 2003; Ogurtsov, 2004). The Gleissberg cycle is clearly seen in the direct sunspot data but is less pronounced throughout the Holocene. Another important periodicity is the _Suess_ cycle (called also _de Vries_ cycle in the literature), which has a narrow period range between 200 - 210 years and an intermittent occurrence. It is typically seen as a recurrence of grand minima within clusters of reduced solar activity (Usoskin et al, 2014) as seen, e.g., in Figure 3, but is not readily observed during the epochs of moderate solar activity. Figure 4: Global wavelet (Morlet basis) power spectrum (black curve) of the long-term sunspot-number series shown in Figure 2. Blue-dashed line denotes the 90% confidence level estimated using the AR1 auto-regressive noise, following the methodology of Grinsted et al (2004). Approximate locations of the discussed quasi-periodic variations (Section 4.1) are indicated by vertical arrows. Sometimes, the so-called _Eddy_ millennial cycle is claimed to exist (Steinhilber et al, 2012), but it is unstable and cannot be identified in a significant way (see Figure 4). Additionally, there exists a very-long cycle with a timescale of 2000 - 2400 years called the _Hallstatt_ cycle (Damon and Sonett, 1991; Vasiliev and Dergachev, 2002; Usoskin et al, 2016). Because of its length, it cannot be robustly defined in the \(\approx\)10-kyr time series (see Figure 4). The nature of the Hallstatt cycle is still unclear: it is likely to be ascribed to the Sun (Usoskin et al, 2016) but geomagnetic or climatic origin cannot be excluded. Longer-scale variability cannot be reliably assessed from the cosmogenic-isotope data, in particular, because of the unresolved discrepancy between \({}^{14}\)C and \({}^{10}\)Be datasets on the multi-millennial timescale as probably related to the effect of deglaciation (e.g., Vonmoos et al, 2006; Usoskin et al, 2016; Wu et al, 2018). ### Grand minima and maxima As seen, e.g., in Figures 2 and 3, solar activity sometimes drops fast, within one-two solar cycles, to the very quiet level with almost no sunspots on the solar surface. These drops are called grand minima of activity. Until the 1970s, the existence of such minima was debated, but Eddy (1976) had convincingly proved that the sunspot activity indeed dropped to almost no sunspots between 1645 - 1715 as confirmed also by other proxies such as auroral displays at mid-latitudes. That grand minimum was called the _Maunder_ minimum. More grand minima have been found later using the cosmogenic-isotope data (e.g., Usoskin et al, 2007; Inceoglu et al, 2015). At present, about 30 grand minima of duration ranging between 40 - 70 (Maunder-type minima) and 100 - 140 years (Sporer-type) each, have been identified during the Holocene occupying about 1/6 of the time. It has been shown that the grand minima correspond to a special state of the solar dynamo (e.g., Usoskin et al, 2014). 
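The quasi-periodicities discussed in Section 4.1 are usually identified with spectral tools such as the wavelet analysis of Figure 4. As a rough, hedged illustration of that kind of analysis, the snippet below computes a Lomb-Scargle periodogram of a decadal solar-activity series. It uses a synthetic series as a stand-in for the actual reconstruction of Wu et al (2018), a plain periodogram rather than the Morlet wavelet of Figure 4, and no AR1 significance testing as in Grinsted et al (2004); all numerical choices are illustrative.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic stand-in for a decadal sunspot-number reconstruction over several millennia:
# a mean level of ~40 with weak "Gleissberg-like" (~90 yr) and "Suess/de Vries-like" (~210 yr)
# modulations plus noise.  A real analysis would load the published series instead.
rng = np.random.default_rng(0)
t = np.arange(-6750, 1900, 10, dtype=float)           # decadal sampling, years
sn = (40
      + 6 * np.sin(2 * np.pi * t / 90.0)              # centennial component
      + 8 * np.sin(2 * np.pi * t / 210.0)             # Suess/de Vries-like component
      + 5 * rng.standard_normal(t.size))

periods = np.linspace(40, 1000, 2000)                 # periods to scan, years
omega = 2 * np.pi / periods                           # angular frequencies for lombscargle
power = lombscargle(t, sn - sn.mean(), omega)

for p0 in (90, 210):
    near = np.abs(periods - p0) < 0.1 * p0
    print(f"peak power near {p0:>3d} yr: {power[near].max():.1f}")
# Longer quasi-periodicities (e.g. the ~2400-yr Hallstatt cycle) require the full-length
# series and a proper significance test against red noise, as done in the paper.
```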
Solar activity was abnormally high in the second half of the 20th century compared to the 19th or 21st centuries (Lockwood et al, 1999) but it was unknown whether this high level is unique or typical. Using the cosmogenic-isotope data, it was discovered that the period from the 1940s to 2010 was not unique and there are other similarly high but very rare episodes, that forms the concept of a _grand solar maximum_(Usoskin et al, 2003; Solanki et al, 2004). Grand maxima represent periods of enhanced solar activity covering at least a few solar cycles. There were about 20 grand maxima over the Holocene which cover \(\approx\)10 % of the time (Usoskin et al, 2007; Inceoglu et al, 2015), but they are defined not as robustly as grand minima. No apparent clustering in the grand-maxima occurrence or duration has been found, nor do they form a special distribution of solar cycles (Usoskin et al, 2014, 2016). It is still unknown whether grand maxima make a special mode of the dynamo, similar to grand minima, or just represent a rare tail of the solar-cycle-strength distribution. ## 5 Statistical properties of the long-term modulation of solar cycles As historical records show, solar cycles are highly variable in amplitude and length. The validity of theoretical models that attempt to predict this variability depends heavily on whether the cycle exhibits long-term phase stability or whether the phase is subject to a random walk, or a mixture of these. In the first of the two extreme cases, the system has infinite phase memory and in the second case no phase memory at all. Phase stability could be achieved through synchronization processes, such as high-quality torsional oscillations in the solar interior (Dicke, 1970) or the weak tidal forces of planets (e.g., Stefani et al, 2021). Dynamo models generally predict phase progression without memory. An insightful summary of the use of historical observations to explain solar phenomena was given by Vaquero and Vazquez (2009). The question of the regularities and randomness of solar activity variability has been studied for a long time. For example, statistical methods including those based on the Lyapunov and Hurst exponents or Kolmogorov entropy (e.g., Ostriakov and Usoskin, 1990; Mundt et al, 1991; Carbonell et al, 1994; Ruzmaikin et al, 1994; Lepreti et al, 2021) were inconclusive, implying that a mixture of different components is likely (see more details in Usoskin, 2017; Petrovay, 2020). Various publications (e.g., Lomb, 2013; Russell et al, 2019; Stefani et al, 2020) claim that the solar cycle is phase stable. However, to answer the question of whether the phase is stable or not, one needs a clear definition of phase stability, an appropriate statistical analysis as well as reliable data on which to apply the analysis. Dicke (1978) and Gough (1978) were among the first to perform a systematic statistical analysis based on telescopic sunspot records. Independently, but using similar concepts, they concluded that the time span of the available data was too small for a clear distinction between the two cases. Later, Gough (1981, 1983, 1988) corrected and modified his earlier analysis without altering the conclusion. Interestingly, Eddington and Plakidis (1929) analyzed the light-curve variations of long-period variable stars, a problem close to the variability of the solar cycle. 
By deriving a statistical function to which the processed observational data were fitted, they were able to determine two indicators for the composition of clock-synchronised phase perturbations and random phase perturbations of the light signal. Weisshaar et al (2023) have revisited Gough's analysis based on newly available data. For clarity, a brief outline of Gough's test is given here: From the arithmetic mean of the individual cycle lengths (Gough, 1981), the regular minima or maxima of the hypothetical dynamo or clock cycles and thus the corresponding phase deviations can be determined as the difference to the observed minima or maxima. The basic statistics are the expectation values of the variances of cycle period, \(E(\sigma_{P}^{2})\), and phase, \(E(\sigma_{\phi}^{2})\). The final statistics is defined as the ratio of the two variances to cancel out the unknown fluctuation amplitude: \[S=\frac{E({\sigma_{\phi}}^{2})}{E({\sigma_{P}}^{2})} \tag{2}\] Later, Gough (1983) modified the method by replacing the arithmetic mean of the cycle period with a value that minimizes the variance of the phase deviations, resulting in a more sensitive distinction between the clock regime and the random phase regime. Calculating the expectation values of the variances for the two cases, one obtains the following expressions for \(S_{c}\) (clock) and \(S_{r}\) (random phase) using the modified method: \[S_{c}=\frac{E({\sigma_{\phi}}^{2})}{E({\sigma_{P}}^{2})}=\frac{N^{2}}{2(N+1)^{ 2}} \tag{3}\] which asymptotically reaches \(N\rightarrow\infty\), \(S_{c}\rightarrow\frac{1}{2}\); \[S_{r}=\frac{E({\sigma_{\phi}}^{2})}{E({\sigma_{P}}^{2})}=\frac{N(N+3)}{15(N+1)} \tag{4}\] which asymptotically reaches \(N\rightarrow\infty\), \(S_{r}\rightarrow\frac{N}{15}\). The procedure to apply Gough's test to an observed data set is as follows: The data set is divided into contiguous segments of \(N\) cycles each. Then the ratio of the averages of the empirical variances is calculated and compared with the ratio of the expectation values, plotted as functions of \(N\) in Figure 6. Figure 5: Modified Gough test \(S\) applied to the epochs of sunspot minima and maxima of 28 activity cycles between 1712 and 2019. Symbols correspond to the solar cycle maxima and minima, as denoted in the legend. The black line with the shaded 68% confidence interval depicts the random phase hypothesis (Eq. 4). The red curve with the shaded 95% c.i. depicts the clock phase hypothesis (Eq. 3). Weisshaar et al (2023) augmented the method by determining suitable confidence intervals through Monte Carlo simulations for the clock and the random phase cases, assuming normally distributed variations in cycle length. They applied the test to the extended sunspot record of now 28 cycles, four more than available to Gough. The main improvement is narrower confidence intervals, rejecting the synchronization hypothesis on a \(2\sigma\) level (Figure 5). Recently, a reconstruction of yearly sunspot numbers from the record of cosmogenic \({}^{14}\)C in tree rings for the years 976 until 1888 (Brehm et al, 2021; Usoskin et al, 2021) has extended the number of contiguous cycles available for the analysis to 84. The Gough test confirms the previous result based on the direct sunspot record, in fact strengthening it significantly, since now the synchronization hypothesis can be rejected even on a \(>3\sigma\) level (Figure 6). 
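To make the two regimes concrete, the following hedged sketch simulates cycle-minimum epochs under the "clock" hypothesis (independent jitter around a strictly periodic grid) and under the random-phase hypothesis (a random walk of cycle lengths), then computes the variance ratio \(S\) of Eq. (2) on segments of \(N\) cycles. It uses the simple arithmetic-mean variant of the test rather than the modified estimator of Gough (1983) or the Monte-Carlo confidence intervals of Weisshaar et al (2023), and all numerical choices are illustrative.

```python
import numpy as np

def gough_S(minima):
    """Ratio of phase variance to cycle-period variance (Eq. 2) for one segment of cycles,
    using the arithmetic-mean cycle length of the segment as the reference 'clock' period."""
    N = len(minima) - 1
    periods = np.diff(minima)
    P_mean = (minima[-1] - minima[0]) / N
    phases = minima - (minima[0] + np.arange(N + 1) * P_mean)   # deviations from the regular grid
    return phases.var() / periods.var()

def simulate_minima(mode, n_cycles, P=11.0, sigma=1.2, rng=None):
    """Cycle-minimum epochs: 'clock' = jitter about a strictly periodic grid,
    'random' = random walk of cycle lengths (no phase memory)."""
    if rng is None:
        rng = np.random.default_rng()
    if mode == "clock":
        return P * np.arange(n_cycles + 1) + sigma * rng.standard_normal(n_cycles + 1)
    return np.concatenate(([0.0], np.cumsum(P + sigma * rng.standard_normal(n_cycles))))

rng = np.random.default_rng(2)
N, n_segments = 28, 2000          # 28 cycles, roughly the length of the telescopic record
for mode in ("clock", "random"):
    S = np.mean([gough_S(simulate_minima(mode, N, rng=rng)) for _ in range(n_segments)])
    print(f"{mode:>6s}: <S> ~ {S:.2f}")
# The two regimes separate clearly: S stays of order 1/2 for the clock case and grows
# roughly linearly with N for the random-phase case, consistent with the asymptotic
# behaviour of Eqs. (3)-(4) for the modified test.
```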
Weisshaar et al (2023) also applied the method of Eddington and Plakidis (1929) mentioned above to these new data and found, consistent with the analysis discussed here, that the fraction of clock-synchronised perturbations is negligible. The question may arise as to how misidentifications of the observed solar cycles can affect the results. If this does not happen too often, the nature of the fluctuations (phase stability or migration) is not expected to be changed by this bias. As a test, a lost cycle between more distant minima was "restored" by placing a minimum in between. This did not cause the \(S\)-values to leave the phase-migration confidence interval. Figure 6: Modified Gough test (notations are similar to those in Fig. 5) applied to the series of 84 cycles covering the period between 976 and 1999 as reconstructed from \({}^{14}\)C data by (Usoskin et al, 2021). The data agree with a random phase shift, while synchronization with the "clock" is rejected at a confidence level much higher than 99% due to the longer data set. Thus, the above-mentioned analysis of the phase evolution of the empirical cycle data is consistent with a random walk (such as that provided by a memory-less dynamo process). External synchronization by a 'clock' is clearly excluded at a high significance level. ## 6 Implications for the dynamo theory The solar magnetic cycle is maintained by a dynamo process operating in the solar convection zone (SCZ). Thus, it is natural to expect that the variations in the solar cycle are caused by some mechanisms in the solar dynamo. Here we identify the causes of the variations in the solar cycle and demonstrate them by presenting results from some illustrative models. Let us first summarise the mechanism of the solar dynamo. ### Introduction to the solar dynamo There is ample evidence that the solar dynamo is a mechanism in which toroidal and poloidal fields sustain each other through a cyclic loop (e.g., Parker, 1955; Cameron and Schussler, 2015). In this loop, the toroidal field is generated due to the shearing of the poloidal field by the differential rotation in the deeper CZ. The toroidal field rises to the surface due to magnetic buoyancy to give rise to sunspots or, more generally, bipolar magnetic regions (BMRs). These BMRs are systematically tilted with respect to their East-West orientations. Due to these tilts, after their decay, BMRs produce a poloidal field. This so-called Babcock-Leighton process is clearly identified in the observed magnetic field data on the solar surface (e.g., Mordvinov et al, 2022). The observed correlation between the polar field (or its proxy) at the solar minima and the amplitude of the next cycle (Wang and Sheeley, 2009; Kitchatinov and Olemskoy, 2011; Munoz-Jaramillo et al, 2013; Priyal et al, 2014) and the flux budgets of the observed and the generated poloidal and toroidal fields (Cameron and Schussler, 2015) suggest that the Babcock-Leighton process is possibly the main source of the poloidal field in the Sun. There is, however, another mechanism through which the poloidal field in the Sun can be produced, namely the classical \(\alpha\) effect as originally proposed by Parker (1955) and mathematically formulated by Steenbeck et al (1966). In this mechanism, the toroidal field is twisted by the helically rising blobs of plasma in the SCZ.
However, this process of lifting and twisting of the field by the convective flow experiences catastrophic quenching due to helicity conservation and thus this process operates when the energy density of the toroidal field is less than the energy density of the convective motion (Sec. 8.7 of Brandenburg and Subramanian, 2005). Therefore, this \(\alpha\) effect is unfavourable in the solar convection zone and the obvious option is to consider the observationally supported Babcock-Leighton process for the generation of the poloidal field in the sun. To study the dynamo action, we need to begin with at least following two fundamental equations of magnetohydrodynamics (MHD). \[\frac{\partial\mathbf{B}}{\partial t}=\mathbf{\nabla}\times(\mathbf{v}\times\mathbf{B}-\eta\mathbf{ \nabla}\times\mathbf{B}), \tag{5}\] \[\rho\left[\frac{\partial\mathbf{v}}{\partial t}+(\mathbf{v}\cdot\mathbf{\nabla})\mathbf{v} \right]=-\mathbf{\nabla}P+\mathbf{J}\times\mathbf{B}+\mathbf{\nabla}\cdot(2\nu\rho S)+\mathbf{F}, \tag{6}\] where \(\mathbf{B}\) and \(\mathbf{v}\) are the magnetic and velocity fields, respectively, \(\eta\) is the magnetic diffusivity, \(\rho\) is the density, \(P\) is the pressure, \(\mathbf{J}=\mathbf{\nabla}\times\mathbf{B}/\mu_{0}\), the current density, \(\nu\) is the kinetic viscosity, \(S_{ij}=\frac{1}{2}(\nabla_{i}v_{j}+\nabla_{j}v_{i})-\frac{1}{3}\delta_{ij}\mathbf{ \nabla}\cdot\mathbf{v}\) is the rate-of-strain tensor, and the term \(\mathbf{F}\) includes gravitational, Coriolis and any other body forces acting on the fluid. These equations along with the mass continuity and energy equations and equation of state are numerically solved with appropriate boundary conditions in the solar CZ to study the dynamo problem. Broadly there are two approaches for doing this, namely, the global MHD simulations and mean-field modellings. In global MHD simulations, we solve the above MHD equations numerically to resolve the full spectrum of turbulent convection. In mean-field models, we study the evolution of the mean/large-scale quantities by parameterizing the small-scale/fluctuating quantities using suitable approximations. Global MHD simulations for the Sun are challenging due to extreme parameter regimes, such as high fluid and magnetic Reynolds numbers and large stratification. Despite these, simulations have begun to produce some solar-like features; see Section 6 of Charbonneau (2020). However, due to their computationally expensive nature, these simulations were rarely run for many cycles so that the cycle variabilities can be studied. Passos and Charbonneau (2014) have produced simulations for several cycles and shown long-term modulations (also see Karak et al, 2015, for a simulation at solar rotation rate although ran for not many cycles). Augustson et al (2015) and Kapyla et al (2016) performed MHD convection simulations for the cases of three and five times the solar rotation rate, respectively. They both found an episode of suppressed surface activity, somewhat resembling the solar grand minimum. Although these results of cycle modulations are encouraging, simulations face serious issues when matching with observations, for example, concerning solar observations, simulations (i) produce higher power at the largest length scale, (ii) do not produce BMRs, and (iii) do not produce correct large-scale flows, particularly, they produce a large variation in the differential rotation. On the other hand, mean-field models are computationally less expensive and easy to analyse their results. 
Probably due to these reasons, long-term modulations are studied using mean-field dynamo models. Due to the observational facts that the magnetic field at the solar minima and the large-scale velocity field are largely axisymmetric, historically the mean-field models are constructed under axisymmetric approximation. With this approximation, the equations for the poloidal and toroidal fields are written as \[\frac{\partial A}{\partial t}+\frac{1}{s}(\mathbf{v_{m}}\cdot\mathbf{\nabla})(sA)=\eta_{t }\left(\nabla^{2}-\frac{1}{s^{2}}\right)A+\alpha B, \tag{7}\] \[\frac{\partial B}{\partial t}+\frac{1}{r}\left[\frac{\partial(rv_{r}B)}{ \partial r}+\frac{\partial(v_{\theta}B)}{\partial\theta}\right]=\eta_{t} \left(\nabla^{2}-\frac{1}{s^{2}}\right)B+s(\mathbf{B_{p}}\cdot\mathbf{\nabla})\Omega+ \frac{1}{r}\frac{d\eta_{t}}{dr}\frac{\partial(rB)}{\partial r}, \tag{8}\] where \(A\) is the potential for the poloidal field (\(\mathbf{B_{p}}=\mathbf{\nabla}\times(A\mathbf{\hat{\phi}})\), \(B\) is the toroidal field, \(s=r\sin\theta\), \(\mathbf{v_{m}}(=v_{r}\mathbf{\hat{r}}+v_{\theta}\mathbf{\hat{\theta}})\) represents the meridional circulation, \(\eta_{t}\) is the turbulent diffusivity which is assumed to depend only on \(r\), \(\alpha\) is the \(\alpha\) effect, and \(\Omega\) is the angular frequency. The term \(\alpha B\) in Equation (7) is the source for the poloidal field through the \(\alpha\) effect. The generation of the poloidal field through the Babcock-Leighton process is also parameterised in the 2D (axisymmetric models) through the same \(\alpha B\) term. However, this \(\alpha\) operates near the surface of the sun and it has a completely different origin than the \(\alpha\) effect which operates in the whole convection zone due to helical convection. In comprehensive 3D dynamo models (Yeates and Munoz-Jaramillo, 2013; Miesch and Dikpati, 2014; Miesch and Teweldebirhan, 2016; Kumar et al, 2019; Bekki and Cameron, 2022), this \(\alpha B\) term is not added in Equation (7), instead, explicit BMRs are deposited whose decay produces a poloidal field. The source for the toroidal field in Equation (8) is due to the \(\Omega\)-effect which is represented by the term: \(s(\mathbf{B_{p}}\cdot\mathbf{\nabla})\Omega\). The above equations technically represent the equations for the \(\alpha\Omega\) dynamo model, in which the generation of the toroidal field through the \(\alpha\) effect is assumed to be much less than the generation due to \(\Omega\) effect, which is true in the sun; see e.g., Cameron and Schussler (2015). ### Causes for long-term variations in the solar activity With the above discussion of the solar dynamo, we now identify the causes of the cycle modulation. As the solar dynamo is nonlinear, it is natural to expect that the modulation in the solar cycle is caused by the back reaction of the flow on the magnetic field. Therefore, we first identify the nonlinearities in the dynamo models and check if they can lead to cycle modulations. #### 6.2.1 Nonlinearities in the dynamo As we can see from Equation (6), the magnetic field can alter the flow directly through the Lorentz force. The Lorentz force can come from the mean magnetic field and the mean current (which is popularly known as the Malkus-Proctor effect (Malkus and Proctor, 1975) in the mean-field context) and from the fluctuating magnetic field and the current. 
The mean magnetic field can also alter the anisotropic convection which is responsible for transporting angular momentum and maintaining differential rotation and meridional flow in the Sun (Kitchatinov et al, 1994b). This effect is also called micro-feedback. When these Lorentz feedbacks of the magnetic fields are included in the flow, we expect a long-term modulation in the flow and the magnetic cycle. In mean-field models, the magnetic feedback is captured by considering a direct Lorentz force of the mean magnetic field in the zonal flow (e.g., Bushby, 2006) and/or by a quenching term in the \(\Lambda\) effect (e.g., Kuker et al, 1999). Cycle modulations in these systems can generally happen in two ways. In the first one, the magnetic energy of the primary mode (the equatorial symmetry or antisymmetric) can oscillate due to the energy exchange between the flow and the magnetic field via the nonlinear Lorentz feedback. In this case, a considerable amount of modulation in the differential rotation is observed. In the second case, a small magnetic perturbation on the differential rotation can slowly change one dominant dynamo mode into another. In this case, the magnetic field parity can change (between equatorially symmetric (quadrupole) and antisymmetric (dipole)) without producing a large change in the differential rotation. These two mechanisms are respectively coined as Type II and I modulations. Mean-field models have demonstrated that nonlinear back reaction of magnetic field on large-scale flow through these types of modulations can induce a variety of modulation patterns in the cycle amplitude, including grand minima and parity modulations which do not leave a strong imprint in differential rotation (e.g., Beer et al, 1998; Knobloch et al, 1998; Bushby, 2006; Weiss and Tobias, 2016). Both types of modulation can arise in a model, however, as the observed differential rotation shows a tiny variation over the solar cycle, we expect the Type II modulation is less likely to occur in the Sun. Even for Type I modulation, a detailed comparison of the magnetic field and the flows in these models with the observations is missing (also see Section 7 of Charbonneau, 2020, for a discussion on this topic). Next is the meridional flow, which is the second important large-scale flow in the Sun. As it arises due to a slight imbalance between the non-conservative centrifugal and buoyancy forces, we expect its large variation. In fact, the global simulations find a large variation in the meridional flow despite a small variation in the differential rotation (Karak et al, 2015). In Babcock-Leighton type dynamo models, meridional circulation plays a crucial role in transporting the field on the surface from low to high latitudes and down to the deeper CZ where the shear produces a toroidal field. The toroidal field is transported to the low latitudes via the equatorward return flow and possibly causes the equatorward migration of the sunspot belt. Thus, in these models, meridional circulation largely regulates the cycle period (Dikpati and Charbonneau, 1999; Karak and Choudhuri, 2011). It also affects the strength of the field as a weak meridional circulation allows the field to advect slowly and gives more time for diffusion (Yeates et al, 2008). 
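As a purely illustrative complement to the mechanisms above, the sketch below integrates a deliberately simple low-order "toy dynamo": a weakly nonlinear oscillator for the cycle amplitude with a cubic saturation standing in for quenching-type nonlinearities and a stochastically fluctuating driving term. It is not the axisymmetric model of Eqs. (7)-(8), nor any of the published models discussed here; all parameter values are arbitrary. Its only purpose is to show how nonlinear feedback combined with random perturbations yields amplitude-modulated cycles and occasional extended low-activity, grand-minimum-like intervals.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy low-order "dynamo": complex cycle amplitude B(t) obeying
#   dB/dt = (lam(t) + i*omega) * B - kappa * |B|^2 * B,
# with a slowly fluctuating growth rate lam(t) (stochastic forcing) and a cubic
# saturation term playing the role of a quenching nonlinearity.
omega = 2 * np.pi / 11.0            # ~11-yr linear cycle period
kappa = 1.0                         # nonlinear saturation strength
lam0, tau, sig = 0.15, 30.0, 0.10   # mean growth rate, memory and amplitude of fluctuations

dt, n_steps = 0.05, 100000          # ~5000 years of toy evolution (explicit Euler)
B = 0.3 + 0.0j
lam = lam0
activity = np.empty(n_steps)

for k in range(n_steps):
    # Ornstein-Uhlenbeck fluctuations of the effective driving (e.g. a fluctuating alpha effect)
    lam += (-(lam - lam0) / tau) * dt + sig * np.sqrt(dt) * rng.standard_normal()
    B += ((lam + 1j * omega) * B - kappa * abs(B) ** 2 * B) * dt
    activity[k] = abs(B) ** 2       # the squared amplitude tracks the cycle envelope

# Fraction of time spent in extended low-activity ("grand-minimum-like") states
quiet = activity < 0.05 * activity.mean()
print(f"mean activity level: {activity.mean():.3f}, "
      f"time fraction in quiet states: {quiet.mean():.2%}")
```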
Karak (2010) showed that when a variable meridional flow is used in a high diffusivity dynamo model to match the observed solar cycle periods, the amplitudes of the cycles are also reproduced to some extent (also see Karak and Choudhuri, 2011; Hazra et al, 2015, for modelling various aspects of the solar cycle using a variable meridional flow). In an extreme case, a largely reduced meridional circulation can trigger a Maunder-like grand minimum. In reality, how large the variations of the meridional flow were in the past remains uncertain. However, it is obvious that any changes in the flow can lead to modulation in the solar cycle. Turbulent transport, as parameterized by, for example, the turbulent diffusivity, the \(\Lambda\) effect, and heat diffusion, is also nonlinear because the Lorentz force of the small-scale as well as the large-scale dynamo-generated fields acts on the small-scale turbulent flows. However, due to our limited knowledge of turbulence theory in the solar parameter regime, we do not have a satisfactory model for the magnetic field-dependent form of the turbulent transport parameters; however, see Ruediger and Kichatinov (1993) and Kitchatinov et al (1994a), respectively, for the magnetic field-dependent forms of \(\alpha\) and \(\eta\) based on the quasi-linear approximation. Finally, the toroidal to poloidal part of the dynamo loop involves some nonlinearities. When the generation of the poloidal field is due to the classical \(\alpha\) effect, there is a well-known \(\alpha\) quenching of the form \(1/\left(1+(B/B_{\rm eq})^{2}\right)\) with \(B_{\rm eq}\) being the equipartition field strength. However, this type of \(\alpha\) quenching tends to stabilize the cycle rather than producing irregularity in it. In the Babcock-Leighton dynamo, the generation of the poloidal field from the toroidal one also involves several nonlinearities. Here we discuss three potential candidates. * Flux loss due to magnetic buoyancy: The magnetic buoyancy as proposed by Parker (1955) plays a critical role in the emergence of BMRs on the solar surface. As the shearing of the poloidal field due to differential rotation intensifies the strength of the toroidal field, there comes a point where the magnetic energy density of the toroidal flux tubes becomes greater than the kinetic energy of the local convective plasma inside the CZ; as a result, the flux tubes become buoyant and start rising through the CZ, eventually giving birth to the sunspots. Following this process, the strength of the magnetic field gets locally reduced as a part of it rises due to buoyancy, and the flux tube becomes inefficient at producing further sunspots for some time (however, see a counter-argument by Rempel and Schussler, 2001). The sharp rise in the flux loss once the toroidal field strength exceeds a certain value clearly indicates a nonlinear mechanism in the solar dynamo. Figure 7: The trajectories of (a) annual sunspot number and (b) FWHM vs the central latitude of the annual spot distribution obtained from a dynamo simulation with buoyancy-induced flux loss (Biswas et al, 2022). Curves clearly show that the beginning phases of the cycles differ widely depending on their strengths but they decline in the same way irrespective of their strengths. This property closely matches with the observations of Cameron and Schüssler (2016). 
Incorporating this mechanism of toroidal flux loss due to buoyancy in a simple manner, Biswas et al (2022) showed that this nonlinear process plays a critical role in limiting the growth of the solar dynamo, which is a potential mechanism to explain why different solar cycles rise differently depending on their strength but all the solar cycles decay with similar statistical properties (see Figure 7). They found that introducing the flux loss in the dynamo simulations was critical for reproducing the long-term features of the latitudinal distribution of the sunspots (Waldmeier, 1955; Cameron and Schussler, 2016); also see Cameron and Schussler (2016) and Talafha et al (2022) for an alternative explanation of the universal decay of the solar cycle using cross-equatorial diffusion. * Latitude quenching: It has been found that when BMRs appear at low latitudes, the leading polarities from both hemispheres get efficiently cancelled at the equator. As a result, the following polarities of the BMRs are efficiently carried to the poles and contribute to the polar field; see Figure 8. On the other hand, BMRs appearing at high latitudes do not exhibit efficient cross-hemisphere cancellation and thus do not contribute significantly to the polar field (Jiang et al, 2014; Karak and Miesch, 2018). It is seen that strong cycles produce more BMRs at high latitudes. In other words, the average latitude of the BMRs is high for the strong cycles (Solanki et al, 2008; Mandal et al, 2017). Hence, for a strong cycle, most of its BMRs emerging at high latitudes would be less efficient in polar field production, and vice versa for the weak cycles. This mechanism, the so-called _latitude quenching_ (Petrovay, 2020), may help to stabilize the growth of the magnetic field in the Sun (Jiang, 2020). Introducing a latitude-dependent threshold on the BMR emergence condition into a 3D Babcock-Leighton dynamo simulation, Karak (2020) showed that latitude quenching can regulate the growth of a magnetic field when the dynamo is not too supercritical. Figure 8: Demonstration of latitude quenching: Temporal evolution of the net polar flux generated from two BMRs deposited symmetrically in two hemispheres at different latitudes. * Tilt quenching: The tilt angle of the BMR plays a crucial role in generating the poloidal field in the Sun. For a given latitude, the amount of generated poloidal field increases with increasing tilt. The thin flux tube model of sunspot formation suggests that the tilt of the BMR is produced by a torque acting on the diverging flows from the apex of the rising flux tube which forms the BMR (D'Silva and Choudhuri, 1993; Fan et al, 1994). Thus, if the magnetic field of the sunspot-forming flux tube is strong, then it will rise quickly and the Coriolis force will get less time to induce tilt. In a strong cycle, the toroidal magnetic field is strong and the number of BMRs with strong magnetic fields tends to be high (Jha et al, 2020). Thus, we expect the mean tilt in that cycle to be smaller. A smaller tilt will produce less poloidal field, and the next cycle will be weak. Hence, this may be a potential mechanism for stabilizing the growth of the magnetic cycle through the reduction of tilt, which is known as _tilt quenching_. The observational evidence of tilt quenching is limited. Dasi-Espuig et al (2010) and Jiao et al (2021) showed that there is a statistical anti-correlation between the cycle-averaged tilt of the sunspots and the cycle strength (Figure 9a). 
On the other hand, Jha et al (2020) examined the variation of BMR tilt with the strength of its magnetic field within a cycle. They found a non-monotonic dependence of the tilt on the BMR field strength, as seen in Figure 9(b). For weak field strengths, the tilt first increases; however, at sufficiently strong field strengths, the BMR tilt starts to decrease. Figure 9: Demonstration of tilt quenching: (a) Tilt coefficient (mean tilt normalized by the mean latitude) vs the cycle strength (Jiao et al, 2021); also see Dasi-Espuig et al (2010). (b) The slope of Joy’s law vs the maximum field strength in the BMR (Jha et al, 2020). #### 6.2.2 Stochastic effects in the dynamo The solar convection zone is turbulent and thus the turbulent quantities (such as the \(\alpha\) effect) are subject to fluctuations around their means. Hoyng (1993) showed that, as there is only a finite number of convection eddies along the longitudes in the sun, the fluctuations of the turbulent transport coefficients can be larger than their means. There is a long history of including stochastic noise in the \(\alpha\) effect in mean-field dynamo models. Most of these studies find long-term modulations in the cycle and grand minima in a certain parameter range of the dynamo number (Choudhuri, 1992; Ossendrijver and Hoyng, 1996; Ossendrijver et al, 1996; Gomez and Mininni, 2006; Brandenburg and Spiegel, 2008; Moss et al, 2008). In the Babcock-Leighton dynamo, too, stochastic fluctuations are unavoidable. The toroidal to poloidal part of this model primarily involves stochastic fluctuations due to the following effects. * Scatter around Joy's law: Observations find that the tilt "statistically" increases with latitude, which is known as Joy's law. However, a large number of BMRs do not follow this relation (so-called non-Joy), as seen by a huge scatter around the mean trend in Figure 10. In fact, there are many BMRs which are of anti-Hale type. These anti-Hale and non-Joy BMRs, having opposite tilts (negative in the northern hemisphere), are responsible for generating an opposite-polarity field (with respect to the expected polarity) and lead to large fluctuations in the polar field (Jiang et al, 2014; Hazra et al, 2017; Nagy et al, 2017; Mordvinov et al, 2022). * Variations in the BMR eruption rates: There are spatial and temporal variations in the BMR eruptions. BMRs near the equator are much more efficient in generating poloidal field in the Sun because for them the leading polarity can easily connect with the opposite polarity flux from the opposite hemisphere (Cameron et al, 2013; Jiang et al, 2014; Karak and Miesch, 2018; Karak, 2020; Mordvinov et al, 2022). Thus, variation in the latitudinal position can produce variation in the generated poloidal field. Next, the rate of BMR eruption is not the same--there is a distribution. Thus, the rate of generation of the poloidal field is not the same (Karak and Miesch, 2017). Furthermore, the flux content of the BMRs also has a distribution, and thus a wrongly tilted BMR with _high flux_ can disturb the polar field in the sun considerably (Nagy et al, 2017). In summary, the randomness involved in the BMR properties (originating from the turbulent nature of the convection) produces variation in the poloidal field. Figure 10: (a) Scatter of BMR tilt around Joy’s law (solid line). (b) The tilt distribution with fitted Gaussian (solid line). Here the tilt angles of BMRs are computed by tracking the MDI line-of-sight magnetograms for September 1996 – December 2008. 
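As a rough illustration of the scatter effects summarized above, the following Python sketch draws BMR tilts from Joy's law with Gaussian scatter and accumulates a crude latitude-weighted proxy of each region's contribution to the polar field over many synthetic cycles. The scatter \(\sigma_{\delta}=18^{\circ}\) is close to the observed value quoted later for the 3D model of Karak and Miesch (2017), but the Joy's law amplitude, the number and latitudes of BMRs per cycle, and the functional form of the polar-field proxy are illustrative assumptions, not the prescriptions of the models cited above.

```python
import numpy as np

rng = np.random.default_rng(1)

N_CYCLES  = 200      # synthetic cycles
N_BMR     = 1000     # BMRs per cycle (placeholder)
SIGMA_DEG = 18.0     # tilt scatter around Joy's law (close to the observed value)
JOY_AMP   = 35.0     # Joy's law amplitude in degrees (illustrative)
LAMBDA_R  = 10.0     # latitude scale of the polar-field proxy (illustrative)

def polar_field_proxy(lat_deg, tilt_deg):
    """Crude proxy: low-latitude, strongly tilted BMRs contribute most."""
    return np.sin(np.radians(tilt_deg)) * np.exp(-lat_deg**2 / (2 * LAMBDA_R**2))

polar_field = np.empty(N_CYCLES)
for i in range(N_CYCLES):
    lat  = rng.uniform(5.0, 35.0, N_BMR)               # emergence latitudes
    joy  = JOY_AMP * np.sin(np.radians(lat))           # mean (Joy's law) tilt
    tilt = joy + rng.normal(0.0, SIGMA_DEG, N_BMR)     # add Gaussian scatter
    polar_field[i] = polar_field_proxy(lat, tilt).sum()

rel_spread = polar_field.std() / polar_field.mean()
print(f"cycle-to-cycle relative spread of the polar-field proxy: {rel_spread:.2%}")
```

Even with identical cycle parameters, the tilt scatter alone produces a noticeable cycle-to-cycle spread of the accumulated proxy, which is the qualitative point made in the text.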
Although the sun produces thousands of spots in a cycle, only a few spots are produced (on average) per day. This leads to variations in the polar field comparable to its mean value. In the next section, we shall demonstrate some illustrative results from stochastically driven Babcock-Leighton dynamo models. ### 6.3 Babcock-Leighton dynamo models for the long-term variation As discussed above, the generation of the poloidal field in the Babcock-Leighton dynamo models involves some randomness. Thus, in axisymmetric dynamo models, this randomness was captured by adding a noise term in the poloidal source (e.g., Charbonneau and Dikpati, 2000). Long-term modulations, including the Gnevyshev-Ohl/odd-even rule (Charbonneau, 2001; Charbonneau et al, 2007) and grand minima (Charbonneau et al, 2004; Choudhuri and Karak, 2009; Passos et al, 2012, 2014), are naturally produced in these models. Variations within the cycle, like the amplitude-period anti-correlation (Charbonneau and Dikpati, 2000; Karak, 2010) and the Waldmeier effect (Karak and Choudhuri, 2011; Biswas et al, 2022), are also reproduced. Karak et al (2018) showed that a large variation in the Babcock-Leighton process can change the polar field abruptly and this can lead to double peaks in the following cycle. While in most of the studies the level of fluctuations was tuned to produce the observed variation of the solar cycle, including a reasonable number of grand minima, Choudhuri and Karak (2012) and Olemskoy and Kitchatinov (2013) made estimates of the fluctuations in the Babcock-Leighton process from observations. Choudhuri and Karak (2012) found the correct frequency of grand minima as observed in the cosmogenic data for the last 11,000 years. Olemskoy and Kitchatinov (2013) showed that the statistics of grand minima are consistent with the Poisson random process, indicating the initiation of grand minima to be independent of the history of the past minima. In recent years, cycle modulations were, in particular, produced by including the variations in the BMR properties in two comprehensive models, namely, 2\(\times\)2D (Lemerle and Charbonneau, 2017) and 3D dynamo models (Karak and Miesch, 2017). In Figure 11, we show cycles from the 3D dynamo model presented by Karak and Miesch (2017). As seen in Figure 11(a), the variation in the BMR emergence rate and the flux distribution produce little variation in the solar cycle. When the variation around Joy's law tilt is included, it produces a large variation, including suppressed magnetic activity like the one seen during the Dalton minimum and the Maunder minimum, as shown in Figure 11b (the regions shaded in green). Here, the grand minima are identified in the same manner as done in the observed data (Usoskin et al, 2007), i.e., the modelled-sunspot data are first binned in 10-year windows and smoothed, and then a grand minimum is considered when the smoothed data fall below 50% of the average for at least two cycles. In Figure 12, we present a detailed view of a grand minimum. We find that some of the observed features of the Maunder minimum (hemispheric asymmetry, gradual recovery, slightly longer cycle) are reproduced in this figure. We note that during this grand minimum, some BMRs are still produced, the number of which is a bit larger than what was observed during the Maunder minimum (Usoskin et al, 2015; Vaquero et al, 2015; Zolotova and Ponyavin, 2016; Carrasco et al, 2021). 
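The identification criterion quoted above (decadal binning and smoothing of the sunspot series, a grand minimum when the smoothed data stay below 50% of the long-term average for at least two cycles, and, as used later, a grand maximum above 150%) can be written down compactly. The Python sketch below applies it to a synthetic sunspot series; the synthetic data, the interpolation used as smoothing, and the run-length threshold of roughly two 11-year cycles are illustrative choices, and the exact smoothing adopted by Usoskin et al (2007) may differ.

```python
import numpy as np

def flag_episodes(sn_annual, years, low=0.5, high=1.5, min_len=22):
    """Flag grand minima/maxima: bin annual sunspot numbers in 10-yr windows,
    smooth, and mark runs below `low` (above `high`) times the long-term mean
    lasting at least `min_len` years (roughly two 11-yr cycles)."""
    edges = np.arange(years[0], years[-1] + 10, 10)
    idx = np.digitize(years, edges) - 1
    decadal = np.array([sn_annual[idx == i].mean() for i in range(idx.max() + 1)])
    centers = edges[:len(decadal)] + 5.0
    smooth = np.interp(years, centers, decadal)   # crude smoothing of decadal means

    mean = smooth.mean()
    minima = _runs(smooth < low * mean, years, min_len)
    maxima = _runs(smooth > high * mean, years, min_len)
    return minima, maxima

def _runs(mask, years, min_len):
    """Return (start, end) years of contiguous True runs longer than min_len."""
    episodes, start = [], None
    for y, m in zip(years, mask):
        if m and start is None:
            start = y
        elif not m and start is not None:
            if y - start >= min_len:
                episodes.append((start, y))
            start = None
    if start is not None and years[-1] - start >= min_len:
        episodes.append((start, years[-1]))
    return episodes

# Synthetic demo: modulated 11-yr cycles with an artificially suppressed interval.
years = np.arange(1000, 2000)
amp = 100 * (1 + 0.5 * np.sin(2 * np.pi * (years - 1000) / 210))
amp[(years > 1640) & (years < 1710)] *= 0.1        # "Maunder-like" dip
sn = amp * np.sin(np.pi * (years - 1000) / 11) ** 2
print(flag_episodes(sn, years))
```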
However, we should keep in mind that the observations during the Maunder minimum were too limited (due to the poor resolving power of 17th-century telescopes) to detect the small BMRs (e.g., Vaquero and Vazquez, 2009); only big sunspots were detected. In our Babcock-Leighton dynamo model, a few BMRs still erupt, which produce a poloidal field at a slow rate through the Babcock-Leighton process, and the model emerges from the grand minimum episode. It is the downward magnetic pumping included in our model which helps to reduce the magnetic flux loss through the surface and recovers the model from grand minima (Cameron et al, 2013; Karak and Cameron, 2016). There have been suggestions that during Maunder-like extended grand minima, the Babcock-Leighton process may not operate because few sunspots are observed, and the \(\alpha\) effect (Parker, 1955) is the best candidate for this role as it operates efficiently at sub-equipartition field strengths (Karak and Choudhuri, 2013; Passos et al, 2014; Olcek et al, 2019). We observe that our model also fails to recover when it enters a deep grand minimum and stops producing BMRs due to the fall of the toroidal field below the threshold for BMR formation. However, this happens very rarely. While it is a critical question to answer what mechanism dominates in recovering the Sun from an extended grand minimum, it is expected that the Babcock-Leighton process becomes less efficient during this phase and the \(\alpha\) effect certainly helps in recovering the Sun from grand minima. Dynamo models with stochastic fluctuations also produce grand maxima. Our model presented in Figure 11b also produces a few grand maxima, shown by the regions shaded in red. Similar to the grand minima, grand maxima are also computed based on the smoothed sunspot number, but here the threshold is taken as 150% of the long-term mean. Systematic studies of grand maxima using dynamo models are limited (however, see Karak and Choudhuri, 2013; Olemskoy and Kitchatinov, 2013; Inceoglu et al, 2017). Kitchatinov and Olemskoy (2016) showed that at the beginning of the cycle, if the generation of the poloidal field is reversed (say due to the emergence of some wrongly tilted BMRs), then it will amplify the existing polar field instead of reversing it. This increase in the magnetic field can lead to a grand maximum. Another mechanism of grand maxima was given by Olcek et al (2019), who showed that when the deep-seated \(\alpha\) effect is coupled with the surface Babcock-Leighton source, these two sources contribute more or less equally to generate a strong poloidal field through a sort of constructive interference. Finally, for the secular and supersecular modulations (modulations beyond the 11-year periodicity, e.g., the Gleissberg cycle, Suess/de Vries cycle, Eddy cycle, and 2400-year Hallstatt cycle; Beer et al, 2018), there are limited studies available in the literature. In a simplified \(\alpha\Omega\) dynamo model coupled with the angular momentum equation, Pipin (1999) found the Gleissberg cycle as a result of the re-establishment of differential rotation after the magnetic feedback on the angular momentum transport. 
Cameron and Schussler (2017) modelled the overall power spectrum of solar activity using a generic normal form model for a noisy and weakly nonlinear limit cycle, and Cameron and Schussler (2019) showed that the long-term modulations beyond the 11-year cycle are consistent with the realization noise, thus casting doubt on whether secular and supersecular modulations are connected to the intrinsic periodicities of the solar dynamo. Figure 11: Time series of the monthly BMR number from a 3D dynamo model of Karak and Miesch (2017) (a) without tilt scatter around Joy’s law and (b) with scatter of \(\sigma_{\delta}=18^{\circ}\) (close to the observed value). The black/red curves indicate the north/south hemispheres. The blue curve in panel (b) is the smoothed curve of the cycle trajectories, and the green and red dashed horizontal lines indicate the thresholds for the grand minima and grand maxima, respectively. The green and red shaded regions indicate the grand minima and grand maxima episodes, respectively. Figure 12: Zoomed-in view of a grand minimum presented in Figure 11. Evolution of (a) the surface radial field, (b) BMR eruptions and hemispheric asymmetry of the toroidal field (black/red curve), and (c) the toroidal field at the bottom of the convection zone. ## 7 Summary Herewith, a brief overview is presented of the long-term variability of solar activity at centennial to millennial timescales. The main feature of solar variability is the quasi-periodic 11-year Schwabe cycle, which is however variable _per se_ in magnitude, duration and phase. While the direct telescopic observations of the Sun cover roughly four centuries since 1610 and span a full range of solar-activity levels from the Maunder minimum in the 17th century to the Modern grand maximum in the late 20th century, the quality of the sunspot-number dataset is inhomogeneous and greatly degrades back in time, being quite imprecise before the \(\approx\)1820s. Moreover, it is too short to study the statistical properties of the solar-cycle modulation on a long timescale. The cosmogenic-isotope method provides quantitative reconstructions of solar activity on the multi-millennial timescale with stable quality throughout the ages, making it possible to study long-term solar-cycle modulation. Using the decadal data for the Holocene (the last twelve millennia), it is possible to identify specific observed properties of solar variability beyond the Schwabe cycle: * The Sun spends about \(\sfrac{1}{6}\) of its time in the grand minimum state; grand minima tend to cluster with a \(\approx\)210-year recurrence time; * The Sun spends about \(\sfrac{1}{10}\) of its time in the grand maximum state; grand maxima appear without any regular pattern; * During the major fraction of time, the Sun is in the cyclic moderate activity state; * Several long-term quasi-periodicities are observed: the centennial _Gleissberg_ cycle with a timescale of up to \(\approx\)140 years; the 210-year _Suess/de Vries_ cycle manifesting itself through intermittent recurrence of grand minima; and the \(\approx\)2400-year _Hallstatt_ cycle whose nature is still unclear. Other long-term cycles, including the millennial Eddy cycle, are insignificant. A recent reconstruction of the annual sunspot numbers from high-precision radiocarbon data for the last millennium has greatly extended, nearly tripling, the statistics of solar cycles, to 96 individually resolved cycles. In particular, the Waldmeier rule (high cycles rise faster) is confirmed on this larger statistical basis, while the Gnevyshev-Ohl rule of even-odd cycle pairing is not confirmed. 
The extended statistics of solar cycles have made it possible, for the first time, to answer a question of principle for solar dynamo theory: is the solar cycle phase-locked, implying a synchronisation by some proposed external clocking mechanism, or is it random and incoherent? The new analysis excludes the phase-locking hypothesis at a high significance level, implying that solar cycles vary randomly. A brief review of the theoretical perspectives to explain the observed features in the framework of the dynamo models is presented. It is discussed that the nonlinearities in the dynamo, including the effects of the flux loss due to magnetic buoyancy as well as latitude and tilt quenching, help to stabilize the solar dynamo, rather than producing variability in the solar cycle. The primary causes of the solar cycle variability are the stochastic fluctuations in the dynamo, which are inherent in different processes such as the large scatter of the BMR tilts around Joy's law and the variability in the BMR eruption rates and locations. On one hand, while modern dynamo models are able to reproduce, with a reasonable ad-hoc tuning of the parameters, the observed features of solar variability, the exact role of those factors is not clear, and some discrepancies between the model results and the data still remain. On the other hand, the progress in the accuracy of models is significant, and we keep gaining knowledge of the processes driving solar variability with new data acquired and new models developed. IU acknowledges the Academy of Finland (project ESPERA No. 321882). AB and BBK gratefully acknowledge the financial support provided by ISRO/RESPOND (project No. ISRO/RES/2/430/19-20), the Department of Science and Technology (SERB/DST), India through the Ramanujan Fellowship (project No. SB/S2/RJN-017/2018), the International Space Science Institute (ISSI, Team 474), and the computational resources of the PARAM Shivay Facility under the National Supercomputing Mission, the Government of India, at the Indian Institute of Technology Varanasi. This work was performed in the framework of the ISSI workshop "Solar and Stellar Dynamos: A New Era". The authors declare they have no conflicts of interest.
2307.00036
Machine learning for potion development at Hogwarts
Objective: To determine whether machine learning methods can generate useful potion recipes for research and teaching at Hogwarts School of Witchcraft and Wizardry. Design: Using deep neural networks to classify generated recipes into a standard drug classification system. Setting: Hogwarts School of Witchcraft and Wizardry. Data sources: 72 potion recipes from the Hogwarts curriculum, extracted from the Harry Potter Wiki. Results: Most generated recipes fall into the categories of psychoanaleptics and dermatologicals. The number of recipes predicted for each category reflected the number of training recipes. Predicted probabilities were often above 90% but some recipes were classified into 2 or more categories with similar probabilities which complicates anticipating the predicted effects. Conclusions: Machine learning powered methods are able to generate potentially useful potion recipes for teaching and research at Hogwarts. This corresponds to similar efforts in the non-magical world where such methods have been applied to identify potentially effective drug combinations.
Christoph F. Kurz, Adriana N. König
2023-06-30T08:47:27Z
http://arxiv.org/abs/2307.00036v1
# Machine learning for potion development at Hogwarts ###### Abstract **Objective**: To determine whether machine learning methods can generate useful potion recipes for research and teaching at Hogwarts School of Witchcraft and Wizardry. **Design**: Using deep neural networks to classify generated recipes into a standard drug classification system. **Setting**: Hogwarts School of Witchcraft and Wizardry. **Data sources**: 72 potion recipes from the Hogwarts curriculum, extracted from the Harry Potter Wiki. **Results**: Most generated recipes fall into the categories of psychoanaleptics and dermatologicals. The number of recipes predicted for each category reflected the number of training recipes. Predicted probabilities were often above 90% but some recipes were classified into 2 or more categories with similar probabilities which complicates anticipating the predicted effects. **Conclusions**: Machine learning powered methods are able to generate potentially useful potion recipes for teaching and research at Hogwarts. This corresponds to similar efforts in the non-magical world where such methods have been applied to identify potentially effective drug combinations. ## Introduction Potions are a required subject for students at Hogwarts School of Witchcraft and Wizardry from the first to the fifth year [1]. They are optional for students in their sixth and seventh years if they achieved a high score on their Ordinary Wizarding Level exam [2]. Potions classes are considered to be among the most difficult lessons at Hogwarts because the nuances of timing, ageing, bottling, and stirring techniques are difficult to acquire even with the guidance of experienced teachers such as Professor Severus Snape. Brewing potions requires glass vials, weighing scales and a cauldron. Ingredients range from plants such as belladonna and shrivelfig to magical components such as unicorn hair or fairy wings. The brewing process often requires some degree of wand work to complete [1]. Potions can be used as medicines, antidotes, poisons, or to provide the drinker with any magical effect ranging from increased strength to flame immunity. They are not always consumed by drinking them; some, like the Regeneration potion, might be applied by physical touch or have an effect merely by being created [3]. Certain magical effects can only be achieved through the use of potions. Some potions mimic the effects of spells and charms, but a few (such as the Polyjuice Potion, a potion that allows the drinker to take the form of someone else, and Felix Felicis, a luck potion) have effects that can not be achieved in any other way [2, 4]. Because brewing is so difficult and the smallest deviations from the recipe can have serious consequences, there are countless reports of accidents and undesirable side effects happening during class at Hogwarts [1, 4, 5]. For example, in 1992, there was a well-documented case of the student Neville Longbottom who, while improperly brewing the Cure for Boils potion, infected himself with red boils all over his body [1]. Nevertheless, some deviations from instructions have proven successful for Harry Potter in his fifth year at Hogwarts [2]. Accurately following the brewing instructions is already difficult, but the discovery and development of new potions is an even more complex and dangerous process. Recent advances in the field of artificial intelligence (AI) have led to increased interest in the use of machine learning approaches within the pharmaceutical industry. 
Advances in new algorithms, such as deep neural networks, demonstrated their utility in addressing diverse problems in drug discovery such as bioactivity prediction or novel molecular design [6, 7]. In this work, we explore the usefulness of machine learning for generating recipes for magic potions. For this, we randomly generated new magic potion recipes with various ingredients and predicted their most likely effect using an artificial neural network. ## Methods We collected the recipes for all known potions from the Harry Potter Wiki [8] and classified them according to the Anatomical Therapeutic Chemical (ATC) classification system [9] into one of the following categories: anesthetics; antiinfectives for systemic use; antiparasitic products, insecticides and repellents; dermatologicals; musculo-skeletal system; psychoanaleptics; psycholeptics; respiratory system; sensory organs; and various. These categories represent the first and second level of the ATC classification which describes pharmacological or therapeutic subgroups [9]. Recipes in the musculo-skeletal system category include, for example, the pompion potion that temporarily turns the drinker's head into a pumpkin, or the skelegro potion that regrows bones. Dermatologicals are, among others, potions that make your skin immune to fire, grow hair or cure boils. Recipes in the psychoanaleptics category include, for example, the forgetfulness potion which causes memory loss in the drinker. Others are the wit-sharpening potion which improves clear thinking or the befuddlement draught that provokes belligerence and recklessness in the drinker. The various category contains several antidotes as well as potions that boost spell-casting, such as the exstimulo potion. We additionally added a category for poisons because many recipes fall into it. However, poisons are not associated with an ATC code for obvious reasons. Usually, administered drugs aim at improving and not deteriorating an individual's health. See Table 1 for an overview of the number of recipes in each of the 11 categories. In total, the training set contained 72 recipes. Each recipe includes instructions for adding ingredients and brewing. We then generated 10,000 new potion recipes by randomly picking between 3 and 8 single ingredients (e.g., "Add 4 horned slugs to your cauldron") and mixing instructions (e.g., "Stir 5 times, clockwise."). We used a custom BioBERT neural network [10] for predicting the class of potion for each newly generated recipe. This language model has been pre-trained on large-scale biomedical corpora comprising over 18 billion words from PubMed abstracts and full-text articles. We fine-tuned the model on all known Hogwarts potion recipes so that it takes a recipe as input and outputs the probabilities of belonging to each of the 11 classes. The top probability is the most likely effect. This method is often referred to as feature extraction transfer learning [11]. All computations were done in Mathematica 13 [12]. Code and data are available on our GitHub page [13]. ## Results Figure 1 shows the number of predicted potion recipes in each category. Most of the 10,000 generated recipes fall into the psychoanaleptics category (\(n=5549\)), followed by the dermatologicals category (\(n=1539\)) and the various category (\(n=1487\)). 225 recipes fall into the newly added poison category. In contrast, only 3 psycholeptic, 3 respiratory system, and 1 sensory organ recipes were generated. This corresponds to the number of available training recipes. 
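To make the generate-and-classify pipeline described in the Methods concrete, the following minimal Python sketch assembles random recipes from ingredient and stirring phrases and indicates the call pattern for scoring them with a fine-tuned text classifier. The ingredient and instruction lists and the model path are illustrative placeholders rather than the resources used in the study, which was implemented in Mathematica 13 with code and data on the authors' GitHub page [13].

```python
import random

# Illustrative placeholders -- not the actual ingredient/instruction lists of the study.
INGREDIENTS = [
    "Add 4 horned slugs to your cauldron.",
    "Add 2 bundles of knotgrass to the cauldron.",
    "Add a sprig of peppermint.",
    "Add a dash of flobberworm mucus.",
    "Add syrup of hellebore until the potion turns turquoise.",
]
MIXING = [
    "Stir 5 times, clockwise.",
    "Stir four times anti-clockwise.",
    "Leave to brew and return in 8 hours.",
    "Shake vigorously.",
]

def generate_recipe(rng: random.Random) -> str:
    """Assemble a recipe from 3-8 ingredient lines plus mixing instructions."""
    n_ingredients = rng.randint(3, 8)
    lines = rng.sample(INGREDIENTS, k=min(n_ingredients, len(INGREDIENTS)))
    lines += rng.sample(MIXING, k=rng.randint(1, 2))
    return " ".join(lines)

rng = random.Random(42)
recipes = [generate_recipe(rng) for _ in range(10_000)]
print(recipes[0])

# Classification step (sketch): a fine-tuned text classifier returns per-category
# probabilities; only the call pattern is shown, with a placeholder model path.
# from transformers import pipeline
# clf = pipeline("text-classification", model="path/to/finetuned-biobert", top_k=None)
# scores = clf(recipes[0])   # list of {"label": ..., "score": ...} dicts
```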
All generated recipes differed from the training set of recipes. Our BioBERT model was generally confident in its predictions. Predicted probabilities of belonging to a certain ATC category were often above 90%; see Figure 2. For example, Table 2 shows a generated recipe whose predicted effect is in the psychoanaleptics category with a probability of 99.9%. In contrast, the effects of some recipes are difficult to predict for our model. Table 3 shows a generated recipe that could be a dermatological with 58.4% probability, a psychoanaleptic with 10% probability, or an antiinfective for systemic use with 24.1% probability. Figure 2: Count histogram of the predicted probabilities of belonging to one of the 11 ATC categories (including poisons). Only the top probability for each generated recipe is shown. Table 2: Generated recipe that is predicted to work as a psychoanaleptic with 99.9% probability: Add Baneberry. Add 2 bundles of knotgrass to the cauldron. Add Dogbane. Add syrup of hellebore until the potion turns turquoise. Add a sprig of Peppermint to counteract side-effects. Add Honey water until it turns back to a turquoise colour. Stir four times anti-clockwise. Add the Infusion of Wormwood. Table 3: Generated recipe with an ambiguous predicted effect (dermatological: 58.4%, antiinfective for systemic use: 24.1%, psychoanaleptic: 10%): Add Stewed Mandrake. Add Wormwood. Add a dash of Flobberworm Mucus and stir vigorously. Leave to brew and return in 8 hours (Copper), 14 hours (Brass), or 23 hours (Pewter). Shake and add wormwood until the potion turns green. Slice bursting mushrooms with knife, add to cauldron and stir clockwise until potion turns blue. ## Discussion Our findings suggest that AI-powered methods are able to generate potentially useful potion recipes for teaching and research at Hogwarts School of Witchcraft and Wizardry. We were able to produce many previously unknown combinations of ingredients and stirring instructions that were predicted to belong to a specific bioactivity class with high probability. Previously, AI methods have also been used to identify potentially effective drug combinations [14, 15]. In the magical world, our research could be extended to not only detect new combinations of ingredients but also new combinations of potions. Apart from new effective combinations, AI methods could also be applied to identify potentially harmful drug combinations [14]. This complements our predictions in the category for poisons and could be extended to harmful potion combinations. Still, our results are not without limitations. In general, AI models need very large training sets. We only had a set of 72 recipes available for training. For this reason, we used a model that has been pre-trained on large medical corpora. Still, potions belonging to the same ATC category often have very different effects. For example, both the Babbling Potion, a potion that causes the drinker to babble nonsense, and Baruffio's Brain Elixir, a potion that increases the drinker's brain power, are part of the nervous system category. This makes it extremely difficult to predict specific potion effects, other than the organ or system on which they act. Furthermore, our AI approach for drug discovery and potion generation could potentially be misused. 
For example, Urbina et al. [16] trained an AI model that generated new molecules that were predicted to be more toxic than publicly known chemical warfare agents. In this sense, machine learning could be used to support the Dark Arts. The Dark Arts refer to spells and actions that could harm others, such as powerful curses, as well as brewing dark potions and breeding dark creatures. Our approach could lead to the discovery of new spells and potions that would enable Dark Wizards or Witches to become even more powerful than Lord Voldemort, considered to have been the most capable and dangerous practitioner of the Dark Arts of all time [2]. Lastly, two muggles with (presumably) no magical abilities performed the study. Thus, it is difficult to assess the validity and classification quality of the generated recipes. ### Competing interests statement All authors have completed the Unified Competing Interest form (available on request from the corresponding author) and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work. ### Details of contributors CFK and ANK collected the data and wrote the manuscript. CFK analysed the data. ANK is the guarantor. ### Transparency declaration The lead author (the manuscript's guarantor) affirms that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained. ### Ethical approval The data were obtained from publicly available data sources. ### Details of funding No funding was received. ### Patient and public involvement statement Not applicable.
2309.10789
First-principles characterization of thermal conductivity in LaPO4-based alloys
Alloys based on lanthanum phosphate (LaPO$_{4}$) are often employed as thermal barrier coatings, due to their low thermal conductivity and structural stability over a wide temperature range. To enhance the thermal-insulation performance of these alloys, it is essential to comprehensively understand the fundamental physics governing their heat conduction. Here, we employ the Wigner formulation of thermal transport in conjunction with first-principles calculations to elucidate how the interplay between anharmonicity and compositional disorder determines the thermal properties of La$_{1{-}x}$Gd$_{x}$PO$_{4}$ alloys, and discuss the fundamental physics underlying the emergence and coexistence of particle-like and wave-like heat-transport mechanisms. We also show how the Wigner transport equation describes correctly the thermodynamic limit of a compositionally disordered crystal, while the Boltzmann transport equation does not. Our predictions for microscopic vibrational properties (temperature-dependent Raman spectrum) and for macroscopic thermal conductivity are validated against experiments. Finally, we leverage these findings to devise strategies to optimize the performance of thermal barrier coatings.
Anees Pazhedath, Lorenzo Bastonero, Nicola Marzari, Michele Simoncelli
2023-09-19T17:38:28Z
http://arxiv.org/abs/2309.10789v2
# First-principles characterization of thermal conductivity in LaPO\({}_{4}\)-based alloys ###### Abstract Alloys based on lanthanum phosphate (LaPO\({}_{4}\)) are often employed as thermal barrier coatings, due to their low thermal conductivity and structural stability over a wide temperature range. To enhance the thermal-insulation performance of these alloys, it is essential to comprehensively understand the fundamental physics governing their heat conduction. Here, we employ the Wigner formulation of thermal transport in conjunction with first-principles calculations to elucidate how the interplay between anharmonicity and compositional disorder determines the thermal properties of La\({}_{x}\)Gd\({}_{1-x}\)PO\({}_{4}\) alloys, and discuss the fundamental physics underlying the emergence and coexistence of particle-like and wave-like heat-transport mechanisms. Our predictions for microscopic vibrational properties (temperature-dependent Raman spectrum) and for macroscopic thermal conductivity are validated against experiments. Finally, we leverage these findings to devise strategies to optimize the performance of thermal barrier coatings. ## I Introduction Improving the thrust and efficiency of airbreathing jet engines requires increasing the temperature of the gas at the entry of their turbine cascade (inlet temperature) [1]. Since the 1970s, significant research efforts have been made to develop materials capable of operating at increasingly higher temperatures, and current turbines employ superalloy blades covered by thermal barrier coatings (TBCs) [2; 3; 4; 5]. TBCs are critical in determining the performance and lifespan of turbines: they protect the blades from thermal stresses, allowing operational temperatures higher than the melting point of the superalloy. Thus, one of the key objectives [6; 7; 8] in current research on TBCs is to find materials with the lowest possible thermal conductivity [1; 2]. Hitherto, most of the progress on improving the thermal-insulation performance of TBC materials has relied on experiments and trial-and-error efforts which hinted that the presence of compositional disorder in highly anharmonic materials can be beneficial for their thermal-insulation performance [9; 10]. However, a first-principles understanding of the microscopic physics governing heat conduction in these materials is missing, preventing their systematic optimization. The absence of such an understanding of how the interplay between anharmonicity and compositional disorder affects the transport properties of TBCs can be traced back to limitations of established first-principles approaches for thermal transport in solids, namely first-principles molecular dynamics (FPMD) [11; 12; 13; 14; 15; 16] and the linearized Boltzmann transport equation for phonons (LBTE) [17; 18; 19; 20; 21; 22; 23; 24; 25]. Specifically, despite recent advances [26; 27], FPMD approaches still have a high computational cost, which practically limits their application to materials having a nanometric disorder length scale (simulation cells containing a few hundred atoms). On the other hand, the LBTE accounts exclusively for particle-like (intraband) heat transport mechanisms and misses wave-like interband tunneling; thus, it is in principle accurate only in weakly anharmonic crystals characterized by well separated phonon bands [28]. 
The LBTE with a perturbative description of compositional (mass) disorder [29] has been shown to successfully describe the thermal properties of SiGe alloys [30], _i.e._ weakly anharmonic materials in which low-frequency vibrational modes dominate transport [31]. However, such a perturbative scheme fails in strongly anharmonic systems where transport is not dominated by low-frequency modes [32]. Thermal insulators and alloys for TBCs [33; 34] belong to this class; thus, the effect of compositional disorder on the thermal conductivity of these materials cannot be described using the perturbative treatment within the LBTE. These limitations can be overcome by relying on the recently introduced Wigner formulation [28; 33], which generalizes the LBTE and naturally adds a wave-like tunnelling term to the drift and scattering of particle-like phonon wavepackets. Such a tunneling term allows one to unify within the same formalism the LBTE for weakly anharmonic crystals and the Allen-Feldman equation for disordered solids [35], also covering all the intermediate cases--such as alloys for TBC materials--in which anharmonicity [36; 37; 38; 39; 40] and disorder [41; 42; 43; 44; 45] are both relevant. Here, we combine first-principles calculations with the Wigner formulation to unveil the microscopic heat conduction mechanisms in LaPO\({}_{4}\)[7; 46; 47; 48; 49], in its solid solutions with GdPO\({}_{4}\), and its composites with La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\). These materials are employed in TBCs [7; 46; 47; 48; 49] due to their low thermal conductivity [46; 47; 48; 49], high melting point [50], chemical durability [51], structural stability [52] and large thermal expansion coefficient [53] over a wide temperature range. After characterizing the properties of the heat carriers in LaPO\({}_{4}\), we show that microscopic particle-like and wave-like transport mechanisms emerge and coexist in this material, and we discuss how they contribute to the macroscopic thermal conductivity. Next, we analyze how atomic-scale compositional disorder affects thermal transport in La\({}_{1-x}\)Gd\({}_{x}\)PO\({}_{4}\) alloys, describing La-Gd mass-substitutional disorder explicitly using models containing up to 5184 atoms. Finally, we use the continuum Maxwell-Garnett model [54] to discuss how the thermal conductivity is affected by the compositional disorder at the micrometer scale, investigating composites containing micrometer-sized grains of La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) and LaPO\({}_{4}\) at different concentrations. ## II Wigner formulation for thermal transport The Wigner transport equation (WTE) [28; 33] describes thermal transport in solids accounting for the interplay between structural disorder, anharmonicity, and Bose-Einstein statistics of vibrations. Such an equation provides a comprehensive approach to thermal transport in solids, allowing one to describe structurally ordered "simple crystals" with interband spacings much larger than the linewidths [55; 56], glasses [43; 44; 45], as well as the intermediate regime of "complex crystals" with interband spacings smaller than the linewidths [33; 36; 37; 38]. In the following, we summarize the salient features of the WTE. For complex crystals having ultralow thermal conductivity, the WTE can be accurately solved using the single-mode relaxation-time approximation (SMA) [30; 44; 57], since in these cases the SMA yields results practically indistinguishable from the exact solution [28; 33; 37; 38; 44]. 
Within the SMA, the Wigner conductivity expression assumes the following compact form: \[\kappa^{\alpha\beta}\!=\!\frac{1}{\mathcal{V}N_{c}}\sum_{\mathbf{q},s,s^{\prime}}\frac{\omega(\mathbf{q})_{s}\!+\!\omega(\mathbf{q})_{s^{\prime}}}{4}\left(\frac{C(\mathbf{q})_{s}}{\omega(\mathbf{q})_{s}}\!+\!\frac{C(\mathbf{q})_{s^{\prime}}}{\omega(\mathbf{q})_{s^{\prime}}}\right)\nu^{\alpha}(\mathbf{q})_{s,s^{\prime}}\] \[\times\nu^{\beta}(\mathbf{q})_{s^{\prime},s}\frac{\frac{1}{2}(\Gamma(\mathbf{q})_{s}\!+\!\Gamma(\mathbf{q})_{s^{\prime}})}{\left(\omega(\mathbf{q})_{s}\!-\!\omega(\mathbf{q})_{s^{\prime}}\right)^{2}+\frac{1}{4}\left(\Gamma(\mathbf{q})_{s}\!+\!\Gamma(\mathbf{q})_{s^{\prime}}\right)^{2}}\;, \tag{1}\] where \(\hbar\omega(\mathbf{q})_{s}\) is the energy of the phonon having wavevector \(\mathbf{q}\) and mode index \(s\), which carries the specific heat \[C(\mathbf{q})_{s}\!=\!C[\omega(\mathbf{q})_{s}]\!=\!\frac{\hbar^{2}\omega^{2}(\mathbf{q})_{s}}{k_{\rm B}T^{2}}\bar{\mathbf{N}}(\mathbf{q})_{s}\big{[}\bar{\mathbf{N}}(\mathbf{q})_{s}\!+\!1\big{]}, \tag{2}\] where \(\bar{\mathbf{N}}(\mathbf{q})_{s}\!=\![\exp(\hbar\omega(\mathbf{q})_{s}/k_{\rm B}T)\!-\!1]^{-1}\) is the Bose-Einstein distribution at temperature \(T\), \(\nu^{\alpha}(\mathbf{q})_{s,s^{\prime}}\) is the velocity operator coupling eigenstates \(s\) and \(s^{\prime}\) at the same wavevector \(\mathbf{q}\) (\(\alpha\) denotes a Cartesian direction) [33], \(N_{c}\) is the number of \(\mathbf{q}\)-points entering in the summation and \(\mathcal{V}\) is the primitive-cell volume; finally, \(\hbar\Gamma(\mathbf{q})_{s}=\hbar\Gamma(\mathbf{q})_{s}^{\rm annh}\!+\!\hbar\Gamma(\mathbf{q})_{s}^{\rm iso}\!+\!\hbar\Gamma(\mathbf{q})_{s}^{\rm bnd}\) is the total linewidth, determined by anharmonicity [18; 58] (\(\hbar\Gamma(\mathbf{q})_{s}^{\rm annh}\)), isotopic-mass disorder [29] (\(\hbar\Gamma(\mathbf{q})_{s}^{\rm iso}\)) and grain-boundary scattering [59; 60] (\(\hbar\Gamma(\mathbf{q})_{s}^{\rm bnd}\)), see Appendix A for details. In crystals, it is useful to resolve the WTE conductivity (1) as the sum of two terms, \(\kappa\!=\!\kappa_{P}\!+\!\kappa_{C}\)[33]. The term \(\kappa_{P}\) is referred to as "populations conductivity" [28; 33] and is determined by the diagonal (\(s\!=\!s^{\prime}\)) or perfectly degenerate (\(s\!\neq\!s^{\prime}\) with \(\omega(\mathbf{q})_{s}\!=\!\omega(\mathbf{q})_{s^{\prime}}\)) terms in the summation in expression (1). Specifically, \(\kappa_{P}\) can be written as \(\kappa_{P}^{\alpha\alpha}\!=\!\frac{1}{\mathcal{V}N_{c}}\sum_{\mathbf{q}s}C[\omega(\mathbf{q})_{s}]\nu^{\alpha}(\mathbf{q})_{s,s}\Lambda^{\alpha}(\mathbf{q})_{s}\); this expression shows that \(\kappa_{P}\) describes a heat-transport mechanism in which heat carriers transport the energy \(\hbar\omega(\mathbf{q})_{s}\) and propagate particle-like with velocity \(\nu^{\alpha}(\mathbf{q})_{s,s}\) over the mean-free path \(\Lambda^{\alpha}(\mathbf{q})_{s}\!=\!\nu^{\alpha}(\mathbf{q})_{s,s}[\Gamma(\mathbf{q})_{s}]^{-1}\), in analogy with particles in a classical gas. In contrast, the non-degenerate off-diagonal elements (referred to as "coherences" [28; 33]) do not have an absolute energy but are characterized by an energy difference \(\hbar\omega(\mathbf{q})_{s}\!-\!\hbar\omega(\mathbf{q})_{s^{\prime}}\); they describe conduction through a wave-like tunneling mechanism between pairs of phonon bands (a mechanism bearing analogies to the electronic Zener interband tunneling [61]). In Eq. 
(1), non-degenerate off-diagonal elements determine the coherences conductivity \(\kappa_{C}\)[28; 33]. It has been shown in Refs. [28; 33] that in simple crystals particle-like mechanisms dominate over the wave-like tunnelling and thus \(\kappa_{\rm p}\!\gg\!\!\kappa_{\rm C}\), while in complex crystals both these mechanisms co-exist and may have comparable strength, implying \(\kappa_{\rm p}\!\sim\!\kappa_{\rm C}\). ## III Results and discussion ### LaPO\({}_{4}\): vibrational properties & Raman spectra In order to elucidate the microscopic physics that governs thermal transport in LaPO\({}_{4}\)-based alloys, we start by computing from first principles the microscopic vibrational properties appearing in the thermal conductivity expression (1) for the fundamental component LaPO\({}_{4}\). Specifically, we employ density-functional theory to compute the vibrational harmonic frequencies (phonon spectrum) [62] and anharmonic third-order linewidths [18; 19; 21] (see Appendix D for details). These quantities are calculated employing the standard perturbative approach, where frequencies and velocity operators are determined at the harmonic level and considered temperature-independent, and the anharmonic linewidths depend on temperature through the Bose-Einstein distribution (see Eq. (11) in Appendix A). In order to assess the accuracy of this perturbative approach in describing the actual frequencies and linewidths, we employ the theoretical frequencies and linewidths to predict the temperature dependence of the experimentally measurable non-resonant Raman spectra [65], \[I(\omega,T)\propto\sum_{s}I_{s}(T)\frac{(\Gamma_{s}+\Gamma_{\rm ins})/2}{(\omega -\omega_{s})^{2}+(\Gamma_{s}+\Gamma_{\rm ins})^{2}/4}, \tag{3}\] where \(I_{s}(T)\) is the Raman intensity of the phonon mode \(s\) computed within the Placzek approximation and using the powder average formula (Eq. (5) in Ref. [66]) including the laser-frequency factor at experimental conditions; \(\omega_{s}\) and \(\Gamma_{s}\) are the bare phonon frequencies and total linewidths at \(\mathbf{q}=\mathbf{0}\). The linewidths are the full width at half maximum, related to the lifetime as \(\tau_{s}=[\Gamma_{s}]^{-1}\), and are determined by both anharmonicity (\(\Gamma_{s}^{\rm ann}\)) and isotopic-mass disorder (\(\Gamma_{s}^{\rm iso}\)), _i.e._\(\Gamma_{s}\)=\(\Gamma_{s}^{\rm iso}\)+\(\Gamma_{s}^{\rm ann}\), respectively. Finally, \(\Gamma_{\rm ins}\)=2 cm\({}^{-1}\) accounts for the instrumental broadening [67] affecting the experiments [63; 64] with which we compare our calculations. In this approach the temperature dependence of the Raman spectra originates from the Bose-Einstein occupation numbers appearing in the intensity \(I_{s}(T)\), and from the linewidths appearing in the Lorentzian broadening, as in previous work [33]. We show in Fig. 1 a comparison between the theoretical and experimental Raman intensities in powder samples at 300 K (experiments by Hirsch _et al._[64]) and at 1000 K (experiments by Lucas _et al._[63]). Then, we note that the theoretical and experimental spectra are systematically shifted by approximately 26 cm\({}^{-1}\). Systematic shifts between the theoretical and experimental spectra comparable to those observed here are common in the literature [68; 69; 70]. Here, the shift mostly affects the symmetric or asymmetric PO\({}_{4}\) bending and stretching modes [64]; it may be related to the presence of water in the experimental samples [63; 64; 70], which is not accounted for in our calculations. 
The description of how water impurities affect the Raman spectrum of LaPO\({}_{4}\) is an open challenging problem that goes beyond the scope of the present study. Importantly, the positions of the experimental peaks are essentially unaffected by temperature; this indicates that in LaPO\({}_{4}\) the temperature renormalization of the vibrational frequencies, not accounted for in the standard perturbative treatment of anharmonicity employed here, is unimportant. In the insets of Fig. 1 we highlight how the broadening of the experimental Raman peaks -- which is determined mainly by anharmonicity and is negligibly affected by compositional disorder [64] -- is in agreement with our theoretical predictions at all temperatures for the most intense, high-frequency part of the Raman spectrum. We recall that the anharmonic linewidths generally increase with frequency and temperature [33; 36; 44; 45] (see Fig. 9 in Appendix B). Thus, from the good agreement between the theoretical and experimental broadening at high frequencies -- where anharmonic effects are largest -- we infer that the standard perturbative approach employed here provides a description of the anharmonic microscopic vibrational properties of LaPO\({}_{4}\) that is sufficiently accurate for our purposes. Finally, we note that the low-frequency part of the experimental Raman spectrum at 1000 K is sharper than the theoretical predictions. Such behavior may be due to the occurrence of grain coalescence in the sample of Lucas et al. [63], which could cause partial crystallization and departure from our calculations that assume a powder sample. Additional details on the Raman simulations are reported in Appendix D.2. Figure 1: **Temperature-dependent Raman spectra.** Solid lines are theory at 1000 K (top) and 300 K (bottom); dashed lines are experimental data, from Lucas et al. [63] at 1000 K (top) and from Hirsh et al. [64] at 300 K (bottom). In the insets the theoretical spectra have been rigidly shifted by 26 cm\({}^{-1}\) to higher frequencies (right) to ease the comparison between the broadening of the theoretical and experimental spectra. ### Thermal conductivity of LaPO\({}_{4}\) In this section, we use the first-principles microscopic vibrational properties of LaPO\({}_{4}\) to evaluate the thermal conductivity as a function of temperature (Eq. (1)). Fig. 2 shows our theoretical predictions for a bulk sample (_i.e._, without considering grain-boundary scattering, more on this later); these are compared with experiments by Hongying et al. [47], Aibing et al. [46], Chenglong et al. [48], and Shijina et al. [49]. Theory and experiments are in overall good agreement (more details on the spread of the experimental data will be discussed later), both approaching a \(T^{-1}\) trend at low temperatures and a milder-than-\(T^{-1}\) decay at high temperature. These different trends in the low and high temperature limits emerge from the coexistence of particle-like and wave-like transport mechanism, whose relative strength depends on temperature. Specifically, Fig. 2 shows that the particle-like mechanisms--which determine the populations conductivity \(\kappa_{P}\) discussed in Sec. II-- contribute mainly at low temperature, since \(\kappa_{P}\) decays proportionally to \(T^{-1}\). 
To understand the origin of such a \(T^{-1}\) trend for \(\kappa_{P}\), we recall that the particle-like conductivity emerging from the Wigner formulation is totally equivalent to the LBTE conductivity [33], and the LBTE predicts a \(T^{-1}\) scaling for the conductivity to appear universally at high temperature in all crystals characterized by dominant third-order anharmonicity [71; 72; 73; 74]. In contrast, wave-like mechanisms yield a coherences conductivity \(\kappa_{C}\) that in LaPO\({}_{4}\) increases with temperature and determines the milder-than-\(T^{-1}\) decay at high temperature. Now we focus on the spread displayed by different, independent experiments [46; 47; 48; 49], which is deemed to originate from differences in the samples. Specifically, the conductivity is sensitive to the samples' grain properties and size [76; 77; 47], which are determined by the synthesis process. The experiments that are closest to our bulk (perfect-crystal) calculations are those by Hongying et al. [47], who used heat treatment to obtain samples with a high degree of crystallinity (average grain size \(\gtrsim 25\ \mu m\)). We also note that the highest-temperature experiment performed by Hongying et al. [47] (\(T\sim 1500K\)) departs from the decreasing trend displayed by all the other experiments discussed by the same reference. This change in trend might originate from the onset of radiative conduction; the description of this effect goes beyond the scope of the present study. The other experiments reported in Fig. 2 show a lower conductivity, which originates from the smaller average grain size and consequent stronger grain-boundary scattering in these samples. Specifically, the samples used by Shijina et al. [49] had grains with sizes of 1-4 \(\mu m\), Aibing et al. [46] used samples with grains in the 1-3 \(\mu m\) range, and Chenglong et al. [48] used samples with grains in the 2-5 \(\mu m\) range. We show in Fig. 3 that accounting for grain-boundary scattering at the micrometer length scale (see Eq. (A4) in Appendix A) produces variations of the total thermal conductivity that are broadly compatible with the spread observed in different experiments. Finally, we note that Fig. 3 also reports predictions for samples with nanometer-sized grains (blue and red curves); this is to provide information on how much the conductivity would change if the experimental nanostructuring techniques for TBCs (see e.g. Refs. [78; 79]) were used to prepare the samples. Figure 3: **Effect of grain-boundary scattering on the conductivity.** We considered grain sizes equal to \(10^{-2}\) (red), \(10^{-1}\) (blue) and \(1\,\mu m\) (green) to compute the conductivity, and we compared results with the bulk value (black). The solid, dashed and dotted lines are \(\kappa_{T},\ \kappa_{P}\) and \(\kappa_{C}\), respectively. Each line is the average trace of the respective conductivity tensor, an estimator for the conductivity of the polycrystalline samples employed in experiments [75]. Figure 2: **Thermal conductivity of LaPO\({}_{4}\).** Green, populations conductivity \(\kappa_{P}\), which follows the \(T^{-1}\) decay typical of Peierls’ particle-like transport in crystals with dominant third-order anharmonicity. Blue, coherences conductivity \(\kappa_{C}\), significant at high temperature. Black, total conductivity, \(\kappa_{T}\)=\(\kappa_{P}\)+\(\kappa_{C}\). Scatter points are experiments from Aibing et al. [46], Hongying et al. [47], Chenglong et al. [48], and Shijina et al. [49]. The theoretical conductivities are the average trace of the respective tensors; these are estimators for the conductivity of the polycrystalline samples employed in experiments [75]. 
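As an illustration of how Eq. (1) is evaluated in practice, the following minimal Python sketch computes the populations and coherences contributions to the conductivity from the phonon frequencies, linewidths, and velocity-operator elements of a single \(\mathbf{q}\)-point. The input arrays are random placeholders standing in for first-principles data, degenerate pairs are resolved with a simple frequency threshold, and only one Cartesian component is considered; this is a schematic sketch of the SMA expression, not the workflow used in this work.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB   = 1.380649e-23      # J / K

def bose(omega, T):
    """Bose-Einstein occupation for angular frequency omega [rad/s]."""
    return 1.0 / np.expm1(HBAR * omega / (KB * T))

def mode_heat_capacity(omega, T):
    """Eq. (2): specific heat carried by a phonon mode [J/K]."""
    n = bose(omega, T)
    return (HBAR * omega) ** 2 / (KB * T**2) * n * (n + 1.0)

def wigner_kappa_single_q(omega, gamma, vel, T, volume, n_q, deg_tol=1e-3):
    """Populations (kappa_P) and coherences (kappa_C) contributions of one q-point
    to kappa^{xx}, following the SMA expression of Eq. (1).
    omega : (n_modes,) angular frequencies [rad/s]
    gamma : (n_modes,) linewidths (FWHM) [rad/s]
    vel   : (n_modes, n_modes) x-component of the velocity operator [m/s]
    """
    c = mode_heat_capacity(omega, T)
    kP = kC = 0.0
    for s in range(len(omega)):
        for sp in range(len(omega)):
            pref = 0.25 * (omega[s] + omega[sp]) * (c[s] / omega[s] + c[sp] / omega[sp])
            gavg = 0.5 * (gamma[s] + gamma[sp])
            lorentz = gavg / ((omega[s] - omega[sp]) ** 2 + gavg**2)
            term = pref * vel[s, sp] * vel[sp, s] * lorentz / (volume * n_q)
            # Diagonal and (nearly) degenerate pairs -> particle-like; rest -> wave-like.
            if s == sp or abs(omega[s] - omega[sp]) < deg_tol * omega[s]:
                kP += term
            else:
                kC += term
    return kP, kC

# Toy usage with random placeholder data for a 12-mode q-point.
rng = np.random.default_rng(0)
om  = np.sort(rng.uniform(1e12, 2e13, 12))              # rad/s
gam = rng.uniform(1e10, 5e11, 12)                       # rad/s
v   = rng.normal(0.0, 2e3, (12, 12)); v = 0.5 * (v + v.T)  # symmetric placeholder [m/s]
print(wigner_kappa_single_q(om, gam, v, T=1000.0, volume=1.5e-28, n_q=1))
```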
The theoretical conductivities are the average trace of the respective tensors; these are estimators for the conductivity of the polycrystalline samples employed in experiments [75]. if the experimental nanostructuring techniques for TBC (see e.g. Refs. [78; 79]) were used to prepare the samples. ### Particle-like & wave-like transport The calculations in the previous section highlighted how the macroscopic thermal conductivity is determined by both particle-like and wave-like microscopic transport mechanisms. In this section, we investigate these microscopic transport mechanisms; specifically, how their relative strengths and contributions to the total macroscopic conductivity vary as a function of temperature. As discussed in Ref. [33], it is possible to resolve how much each phonon \((\mathbf{q})_{s}\) contributes to the particle-like (\(\bar{\mathcal{K}}_{P}(\mathbf{q})_{s}\)) and wave-like conductivities (\(\bar{\mathcal{K}}_{C}(\mathbf{q})_{s}\)) (the bar denotes the average trace of the mode-resolved contributions to the particle-like and wave-like conductivity tensors; full expressions are reported in Appendix C). Ref. [33] demonstrated that the relative strength of these contributions scales as the ratio between the average interband spacing (\(\Delta\omega_{\text{avg}}=\frac{\omega_{\text{max}}}{3N_{at}}\), where \(\omega_{\text{max}}\) is the maximum phonon frequency, \(N_{at}\) is the number of atoms in the primitive cell, and \(3N_{at}\) the number of phonon bands) and the linewidth \(\Gamma(\mathbf{q})_{s}\)=\([\tau(\mathbf{q})_{s}]^{-1}\) (here \(\tau(\mathbf{q})_{s}\) is the phonon lifetime). In formulas, \[\frac{\bar{\mathcal{K}}_{C}(\mathbf{q})_{s}}{\bar{\mathcal{K}}_{P}(\mathbf{q})_{s}} \simeq\frac{\Gamma(\mathbf{q})_{s}}{\Delta\omega_{\text{avg}}}=\frac{[\Delta\omega _{\text{avg}}]^{-1}}{\tau(\mathbf{q})_{s}}. \tag{4}\] Eq. (4) predicts that phonons having a lifetime \(\tau(\mathbf{q})_{s}\) equal to the inverse interband spacing \([\Delta\omega_{\text{avg}}]^{-1}\) (also referred to as "Wigner limit in time" [33]) contribute simultaneously and with equal strength to both particle-like and wave-like conduction mechanisms. In contrast, phonons with a lifetime much longer (shorter) than the Wigner limit in time contribute predominantly to the particle-like (wave-like) conductivity. Finally, Eq. (4) predicts the transition between these two limits to be non-sharp and centered at the Wigner limit in time. These analytical expectations are verified numerically in Fig. 4. Specifically, in the upper panel of such a figure, we show the distribution of phonon lifetimes as a function of phonon energies at different temperatures. For each phonon mode (individual scatter point in Fig. 4), we use the particle-like and wave-like conductivity contributions (\(\bar{\mathcal{K}}_{P}(\mathbf{q})_{s}\) and \(\bar{\mathcal{K}}_{C}(\mathbf{q})_{s}\) appearing in Eq. (4), respectively) to resolve the conduction mechanisms through which the phonon participates in heat transport, as well as how much the microscopic phonon mode contributes to the macroscopic conductivity. The first piece of information, on the type of conduction mechanism, is encoded in the color of the scatter point, determined according to the value of the parameter \[c=\frac{\bar{\mathcal{K}}_{P}(\mathbf{q})_{s}-\bar{\mathcal{K}}_{C}(\mathbf{q})_{s}}{ \bar{\mathcal{K}}_{P}(\mathbf{q})_{s}+\bar{\mathcal{K}}_{C}(\mathbf{q})_{s}}. \tag{5}\] 
Eq. (5) implies that \(c\)=+1 when the phonon \((\mathbf{q})_{s}\) predominantly contributes to particle-like conduction (corresponding to green), \(c\)=\(-1\) when instead the phonon contributes mainly to wave-like conduction (blue), and finally \(c\)=0 when the phonon contributes equally to particle-like and wave-like conduction (red). In the following, we employ a linear color scale interpolating blue, red, and green to resolve possible intermediate cases. The second piece of information, on the magnitude of the transport mechanisms, is represented by the area of each scatter point, which is proportional to the contribution of such phonon to the total thermal conductivity. Fig. 4 shows that the relative strength of particle-like and wave-like mechanisms strongly depends on the energy of the phonon, and varies significantly with temperature. Specifically, at room temperature (panel a), most of the phonons are above the Wigner limit in time (\(\tau(\mathbf{q})_{s}=[\Delta\omega_{\text{avg}}]^{-1}\), represented by the horizontal dashed-black line) and green, indicating that they mainly contribute to particle-like thermal transport. Increasing temperature (panel b is 1000 K, and panel c is 1500 K) yields a reduction of the lifetimes; numerical results confirm the analytical expectations that phonons with a lifetime comparable to the Wigner limit in time (red) contribute simultaneously to particle-like and wave-like conduction mechanisms, and phonons with even shorter lifetime mainly contribute to wave-like transport. These findings can be intuitively understood by recalling that phonons with an extremely short lifetime are suppressed very quickly and thus cannot propagate as particles long enough to yield a sizable contribution to the populations conductivity. However, these short-lived phonons can still interfere and tunnel wave-like -- we recall that interference and tunneling occur also between damped waves -- resulting in a significant contribution to the wave-like conductivity. Finally, we highlight that all the phonons in LaPO\({}_{4}\) have a lifetime longer than their reciprocal frequency (\(\tau(\mathbf{q})_{s}>[\omega(\mathbf{q})_{s}]^{-1}\), _i.e._ they are all above the dashed-purple line in panels a,b,c); thus Landau's quasiparticle picture [80] for phonons holds in LaPO\({}_{4}\) and consequently the Wigner formulation can be applied [33; 40]. The lifetime-energy analysis reported in the upper panel of Fig. 4 sheds light on the microscopic timescales underlying heat transport, and how they affect the macroscopic conductivity. In order to gain further insights, particularly on the dependence of the conductivity on the grain lengthscales discussed in the previous section, it is useful to recast such an analysis in the space-energy domain. To achieve this, we multiply the phonon lifetimes by the corresponding group velocities, obtaining the microscopic propagation lengthscales of phonons (mean free paths, MFP) \(\Lambda(\mathbf{q})_{s}\)=\(\frac{1}{\sqrt{3}}|\mathbf{v}(\mathbf{q})_{ss}|\tau(\mathbf{q})_{s}\) (here, \(\frac{1}{\sqrt{3}}|\mathbf{v}(\mathbf{q})_{ss}|\) is the spatially averaged modulus of the group velocity). The bottom panels of Fig. 4 show the MFP vs phonon energy at 300 K (a), 1000 K (b), and 1500 K (c). Similarly to the phonon lifetime-energy plots, a crossover from particle-like to wave-like transport is clearly evident here as well, and such a non-sharp transition is centered around the average bond length (see Ref. 
[33] for details on the relation between the particle-wave crossover in space and the average bond length). We highlight how phonons with MFP larger (smaller) than the average bond length contribute to particle-like (wave-like) conduction, and phonons with MFP equal to the average bond length contribute simultaneously to both particle-like and wave-like mechanisms. Finally, we note that from the lower panels of Fig. 4 it is apparent that most of the phonons in LaPO\({}_{4}\) have an MFP equal to or shorter than one micrometer, rationalizing the small difference between the bulk thermal conductivity and the thermal conductivity computed accounting for grain-boundary scattering at the lengthscale of \(1\,\mu m\) discussed in Fig. 3. ### Engineering the thermal conductivity through compositional disorder #### iii.4.1 Atomistic compositional disorder: La\({}_{1-x}\)Gd\({}_{x}\)PO\({}_{4}\) alloys La\({}_{1-x}\)Gd\({}_{x}\)PO\({}_{4}\) alloys are promising materials for future thermal barrier applications, due to their excellent thermal stability and chemical durability [81]. In general, the presence of compositional disorder (alloying) causes a reduction of the thermal conductivity [1; 2; 31; 82; 83; 84; 85; 86; 87; 32; 88], and is thus expected to be beneficial for TBC applications. To the best of our knowledge, there are no theoretical or experimental works on the thermal conductivity of La\({}_{1-x}\)Gd\({}_{x}\)PO\({}_{4}\) alloys. As anticipated in the introduction, recent work [32] has highlighted how the standard LBTE-based perturbative treatment of compositional disorder [29; 30; 82] -- which is accurate in weakly anharmonic systems in which low-frequency vibrational modes dominate transport [30; 31; 82] -- fails in systems where transport is not dominated by low-frequency modes. We have seen in Fig. 4 that LaPO\({}_{4}\) belongs to this class, hence in this section, we develop a computational Figure 4: **Phonon lifetime (top) and mean free path (bottom) as a function of energy at different temperatures.** A color code has been used to show the origin of the conduction mechanism; green, blue, and red represent the particle-like, wave-like, and mixed conductivity (50 % each), respectively. The area of each circle corresponds to its contribution to the total conductivity. In the upper panels, the horizontal black-dashed line is the “Wigner limit in time”, \(\tau(\mathbf{q})_{s}=[\Delta\omega_{\text{avg}}]^{-1}\) (see text). The purple-dashed hyperbola \(\tau=\frac{1}{\omega}\) indicates the regime of validity of the Wigner formulation, which requires phonons to be above such a line to be well-defined (non-overdamped) quasiparticles [80]. The horizontal dashed line in the mean free path panel is the average bond length of LaPO\({}_{4}\). The pie charts have an area proportional to the total conductivity; the green and blue slices represent the particle-like and wave-like contributions, respectively. protocol that exploits the Wigner formulation to describe compositional-mass disorder explicitly, overcoming the limitations of the standard LBTE-based perturbative treatment of disorder. To describe how compositional-mass disorder affects the conductivity in La\({}_{1-x}\)Gd\({}_{x}\)PO\({}_{4}\) alloys, we need to compute the quantities appearing in Eq. (1) -- _i.e._, the harmonic vibrational frequencies and velocity operators, and the anharmonic linewidths -- in disordered models. The harmonic frequencies and velocity operators are computed explicitly in the mass-substitution approximation [62]. 
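In practice, the mass-substitution step amounts to drawing the mass of every La site at random and rescaling the pristine force constants accordingly; the formal expressions are given in Eqs. (6)-(9) below. A minimal sketch of this step is shown here, with hypothetical array layouts (a flat list of atomic masses and a force-constant array of shape (natoms, 3, natoms, 3)); it is an illustration of the procedure, not the production code used in this work.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def substituted_masses(masses, species, x, m_gd=157.25):
    """Replace the mass of each La site by that of Gd with probability x (cf. Eq. (7))."""
    out = masses.copy()
    la_sites = np.where(species == "La")[0]
    swap = rng.random(la_sites.size) < x
    out[la_sites[swap]] = m_gd
    return out

def mass_weighted_force_constants(fc, masses):
    """Divide the bare second derivatives by sqrt(M_i M_j) for every atom pair (cf. Eq. (6)).
    fc is assumed to have shape (natoms, 3, natoms, 3)."""
    inv_sqrt_m = 1.0 / np.sqrt(masses)
    return fc * inv_sqrt_m[:, None, None, None] * inv_sqrt_m[None, None, :, None]
```

The resulting mass-weighted force constants are then Fourier transformed into the dynamical matrix and diagonalized, exactly as in Eqs. (8)-(9); only the masses, not the interatomic force constants, carry the disorder.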
We start from the force constants of a large supercell (more on this later) of pure LaPO\({}_{4}\), and we replace the mass of La with that of Gd with probability \(x\in(0,1]\). In formulas, \[\mathcal{G}_{\mathbf{R}b\alpha,\mathbf{R^{\prime}}b^{\prime}\alpha^{\prime}}=\frac{1}{\sqrt{f_{x}(M_{b})f_{x}(M_{b^{\prime}})}}\frac{\partial^{2}V}{\partial u(\mathbf{R})_{b\alpha}\partial u(\mathbf{R^{\prime}})_{b^{\prime}\alpha^{\prime}}}\Big{|}_{\text{eq}}, \tag{6}\] where \(\mathbf{R}\) is a Bravais vector, \(b\) denotes an atom having mass \(M_{b}\) and position in the primitive cell \(\mathbf{\tau}_{b}\), and \(\alpha\) is a Cartesian direction; \(\frac{\partial^{2}V}{\partial u(\mathbf{R})_{b\alpha}\partial u(\mathbf{R^{\prime}})_{b^{\prime}\alpha^{\prime}}}\Big{|}_{\text{eq}}\) is the second derivative of the Born-Oppenheimer potential evaluated at equilibrium atomic positions. Compositional-mass disorder on the La sites is accounted for by the function \(f_{x}\), which leaves unaffected masses of O and P (\(f_{x}(M_{\text{O}})=M_{\text{O}}\) and \(f_{x}(M_{\text{P}})=M_{\text{P}}\)\(\forall\)\(x\)) and replaces the mass of La with that of Gd with probability \(x\in(0,1]\): \[f_{x}(M_{\text{La}})=\left\{\begin{array}{l}M_{\text{La}}\text{ with probability }1-x;\\ M_{\text{Gd}}\text{ with probability }x.\end{array}\right. \tag{7}\] Then, Eq. (6) is used to compute the dynamical matrix at wavevector \(\mathbf{q}\), \[\mathcal{D}(\mathbf{q})_{b\alpha,b^{\prime}\alpha^{\prime}}{=}\sum_{\mathbf{R}} \mathcal{G}_{\mathbf{R}b\alpha,\mathbf{0}b^{\prime}\alpha^{\prime}}e^{-i\mathbf{q}\cdot( \mathbf{R}+\mathbf{\tau}_{b}-\mathbf{\tau}_{b^{\prime}})}, \tag{8}\] which is then diagonalized \[\sum_{b^{\prime}\alpha^{\prime}}\mathcal{D}(\mathbf{q})_{b\alpha,b^{\prime}\alpha^{\prime}}\mathcal{E}(\mathbf{q})_{s,b^{\prime}\alpha^{\prime}}=\omega^{2}(\mathbf{q})_{s}\mathcal{E}(\mathbf{q})_{s,b\alpha}. \tag{9}\] The eigenvalues appearing in Eq. (9) are related to the vibrational frequencies \(\omega(\mathbf{q})_{s}\), and the eigenvectors \(\mathcal{E}(\mathbf{q})_{s,b\alpha}\) describe how atom \(b\) moves along the Cartesian direction \(\alpha\) when the phonon with wavevector \(\mathbf{q}\) and mode \(s\) is excited. As discussed in Ref. [33], the velocity operator is obtained from these quantities as \[\nu^{\beta}(\mathbf{q})_{s,s^{\prime}}{=}\!\!\sum_{b,\alpha,b^{\prime},\alpha^{\prime}}\!\!\!\mathcal{E}^{*}(\mathbf{q})_{s,b\alpha}\nabla^{\beta}_{\mathbf{q}}\sqrt{\mathcal{D}(\mathbf{q})}_{b\alpha,b^{\prime}\alpha^{\prime}}\mathcal{E}(\mathbf{q})_{s^{\prime},b^{\prime}\alpha^{\prime}}. \tag{10}\] The effect of compositional disorder on the anharmonic linewidths is accounted for in an analogous way, using the aforementioned mass-replacement function \(f_{x}(M_{b})\) in the third-order force constants used to determine the anharmonic linewidths (see Eqs. (13,14) in Appendix). As shown in Fig. 5, the mass-substitution approximation captures the differences between the experimental thermal conductivities of pure LaPO\({}_{4}\) and pure GdPO\({}_{4}\), and is therefore accurate enough for the scope of the present analysis. Due to the high computational cost of the anharmonic linewidth calculations, the mass-substitution approximation is used to obtain the linewidths of pure GdPO\({}_{4}\); then, linewidths for the intermediate compositions are determined by linear interpolation of the linewidth functions of pure LaPO\({}_{4}\) and pure GdPO\({}_{4}\). Specifically, to perform this interpolation we first employ the approach discussed in Refs. 
[82, 32, 44] to coarse-grain the frequency-linewidths distributions of pure LaPO\({}_{4}\) and pure GdPO\({}_{4}\) into a single-valued function of \(\omega\) (see Appendix B for details). Then, as in previous work [90], we determine the linewidth of the alloy by interpolating linearly the linewidth functions of LaPO\({}_{4}\) and GdPO\({}_{4}\): \[\Gamma_{\text{La}_{1-x}\text{Gd}_{x}\text{PO}_{4}}(\omega)=(1-x)\Gamma_{\text{ LaPO}_{4}}(\omega)+x\;\Gamma_{\text{GdPO}_{4}}(\omega). \tag{11}\] The computational convergence of such an approach is verified by enlarging the size of the supercell (accuracy with which the disorder is described) and ensuring that results remain practically unchanged. In particular, in Fig. 6 we test the computational convergence for the case \(x=0.5\), where we show that employing supercells ranging from 4\(\times\)4\(\times\)4 (1536 atoms) periodically repeated 5\(\times\)5\(\times\)5 times, to 6\(\times\)6\(\times\)6 (5184 atoms) periodically repeated 3\(\times\)3\(\times\)3 times, yields practically indistinguishable results for the total conductivity. Fig. 6 shows that atomistic models containing thousands of atoms are sufficiently large to achieve computational convergence in the explicit description of compositional disorder. Fig. 6 also highlights that it is crucial to Figure 5: **Conductivities of LaPO\({}_{4}\) and GdPO\({}_{4}\).** For LaPO\({}_{4}\), the theory is solid red and experiments [46, 47, 48, 49] are red symbols. For GdPO\({}_{4}\), the theory is solid green, and green symbols are the experiments [89, 46]. The dotted lines are the average of the experimental data. employ the Wigner formulation to obtain size-consistent results for the thermal conductivity when compositional disorder is described explicitly. In fact, the populations conductivities (dashed lines in Fig. 6) decrease as the size of the supercell used to explicitly describe disorder increases. Thus, the LBTE, which accounts exclusively for particle-like transport mechanisms, would predict a model-dependent conductivity. In contrast, within the Wigner framework, the decrease of the particle-like conductivity is compensated by an increase of the coherences conductivity, yielding compatible results for compositionally disordered models having different sizes. We note that this size-consistent behavior for the Wigner conductivity in compositionally disordered materials is analogous to the size-consistent behavior for the Wigner conductivity recently discussed for structurally disordered materials (see Fig. 12 in Ref. [44]). Finally, we note that the analysis in Fig. 6 was done neglecting the effect of non-analytical term corrections [91], an approximation performed to reduce the computational cost for the calculation of the velocity operator in the large supercells and validated in the Appendix (Fig. 13). After having validated the computational protocol to describe disorder in the mass-substitution approximation within the Wigner framework, we analyze how the variable compositional disorder affects thermal conductivity. Fig. 7 shows the conductivity of La\({}_{1-x}\)Gd\({}_{x}\)PO\({}_{4}\) alloys at different compositions and various temperatures. We see that the conductivity decreases as compositional disorder increases. Such a decrease is more pronounced at low temperatures (300 K), where anharmonicity is weak and does not dominate over disorder in limiting heat transfer. 
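Returning briefly to the linewidth model of Eq. (11): in code, this amounts to coarse-graining the per-mode linewidths of each end member into a function of frequency and then mixing the two functions linearly. The sketch below uses a simple Gaussian-weighted average as a stand-in for the coarse-graining procedure of Appendix B (it is not the exact expressions given there) and hypothetical arrays of mode frequencies and linewidths; it is meant only to show the bookkeeping.

```python
import numpy as np

def coarse_grain(omega_modes, gamma_modes, omega_grid, sigma=15.0):
    """Smooth a cloud of (frequency, linewidth) points into Gamma(omega) on a grid.
    Simple Gaussian-weighted average, used here as a stand-in for Appendix B."""
    w = np.exp(-((omega_grid[:, None] - omega_modes[None, :]) ** 2) / (2.0 * sigma ** 2))
    return (w @ gamma_modes) / np.clip(w.sum(axis=1), 1e-12, None)

def alloy_linewidth(x, gamma_lapo4, gamma_gdpo4):
    """Eq. (11): Gamma_alloy(omega) = (1 - x) Gamma_LaPO4(omega) + x Gamma_GdPO4(omega)."""
    return (1.0 - x) * gamma_lapo4 + x * gamma_gdpo4

# Illustrative usage with made-up data (frequencies and linewidths in cm^-1)
grid = np.linspace(50.0, 1100.0, 200)
g_la = coarse_grain(np.array([120.0, 450.0, 1050.0]), np.array([1.0, 2.5, 4.0]), grid)
g_gd = coarse_grain(np.array([110.0, 440.0, 1040.0]), np.array([1.2, 2.8, 4.5]), grid)
g_alloy = alloy_linewidth(0.5, g_la, g_gd)
```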
At 300 K, the dependence of the conductivity on composition bears analogies with that observed in other alloys [92, 93, 9, 94, 95, 96, 97], displaying a U-shaped trend that reaches a minimum at \(x\sim 0.7\). We note that at the minimum the conductivity is about 12 % lower than that of pristine LaPO\({}_{4}\). Such a decrease is much smaller than the order-of-magnitude decrease observed in archetypal alloys such as Si\({}_{x}\)Ge\({}_{1-x}\) [30], for two reasons. First, anharmonicity in LaPO\({}_{4}\) and GdPO\({}_{4}\) is much stronger than that in Si and Ge, yielding a much lower thermal conductivity already in the pristine components; second, there is less mass contrast between La and Gd (mass ratio Gd/La = 1.13) than between Si and Ge (mass ratio Ge/Si = 2.59). When increasing the temperature, anharmonic effects become stronger and thus progressively dominate over compositional disorder in determining the conductivity; this is apparent from the almost negligible dependence of the conductivity on Figure 6: **Thermal conductivity of models of La\({}_{0.5}\)Gd\({}_{0.5}\)PO\({}_{4}\) with different disorder lengthscales.** The solid lines with the markers are total conductivities \(\kappa_{T}\), dashed lines are the particle-like conductivity \(\kappa_{P}\). Orange represents calculations in a 4\(\times\)4\(\times\)4 supercell with _q_-mesh 5\(\times\)5\(\times\)5, blue represents a 5\(\times\)5\(\times\)5 supercell with _q_-mesh 3\(\times\)3\(\times\)3, and green is a calculation done in a 6\(\times\)6\(\times\)6 supercell with _q_-mesh 3\(\times\)3\(\times\)3. We highlight how the populations conductivity (dashed) decreases as the size of the model increases; such a decrease is compensated by an increase of the coherences conductivity; this implies that these different models have practically indistinguishable total conductivities. Figure 7: **Thermal conductivity of La\({}_{1-x}\)Gd\({}_{x}\)PO\({}_{4}\) alloys** as a function of composition, at 300, 700 and 1000 K. At 300 K, the conductivity drops as compositional disorder increases, reaching a minimum at \(x\sim 0.7\). The effect of compositional disorder becomes less relevant as temperature increases. compositional disorder observed at 1000 K. In summary, we have developed a computational protocol that allows us to evaluate the effect of compositional disorder within the Wigner framework. This protocol allows us to shed light on how the interplay between compositional disorder and anharmonicity determines the conductivity, and will be potentially very useful to study materials for next-generation TBCs. #### iii.4.2 Micrometer-scale disorder: LaPO\({}_{4}\)/La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) composites La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) has been identified as another potential TBC material with very low thermal conductivity [7]. However, its low thermal expansion coefficient causes the formation of cracks and delamination at high temperatures [98]. Composite structures of La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) and LaPO\({}_{4}\) have been suggested to offer better thermomechanical properties compared to the base materials [77]. Computing the thermal conductivity of composites characterized by compositional disorder at the micrometer lengthscale has a prohibitively high computational cost for first-principles methods. 
Therefore, in order to have a qualitative understanding of thermal transport in LaPO\({}_{4}\)/La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) composites, we employ the continuum Maxwell-Garnett (MG) model [54], which determines the total thermal conductivity of the composite (\(\kappa\)) from the total conductivities of the matrix (\(\kappa_{1}\)) and of the filler (\(\kappa_{2}\)): \[\frac{\kappa-\kappa_{1}}{\kappa+2\kappa_{1}}=V\frac{\kappa_{2}-\kappa_{1}}{ \kappa_{2}+2\kappa_{1}}, \tag{12}\] where \(V\) is the volume fraction of the filler. Here we consider La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) as the matrix (its conductivity is taken from Ref. [33]), and LaPO\({}_{4}\) as the filler. As in the experimental work by Zhang et al. [77], we consider composites containing LaPO\({}_{4}\) weight percentages equal to 10, 20, 30, and 40 %, and we estimate the conductivity of the composite using Eq. (12). Fig. 8 shows the thermal conductivity of the composite as a function of temperature at different filler fractions. The conductivity increases with an increase in the fraction of LaPO\({}_{4}\); in all cases, it is higher than that of pristine La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) and smaller than that of LaPO\({}_{4}\). This trend is in broad agreement with the experiments performed by Zhang et al. [77]. We also note that experimental samples have porosities that vary with the filler fraction, as well as interfaces between La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) and LaPO\({}_{4}\) phases. These effects are not accounted for by the MG model employed here, and their description might improve the agreement between theory and experiments in Fig. 8, but it is an open and challenging problem that is beyond the scope of the present paper. ## IV Conclusions We have shown that the Wigner formulation of thermal transport [33] can be combined with first-principles techniques to quantitatively predict the macroscopic thermal-insulation performance of compositionally disordered materials for thermal barrier coatings [2, 3, 4, 5, 81]. First, we have investigated heat transport in pristine LaPO\({}_{4}\), showing that the recently developed Wigner transport equation [28, 33] rationalizes the milder-than-\(T^{-1}\) conductivity decay observed in experiments [46, 47, 48, 49]. More precisely, we have shown that the macroscopic trend of the conductivity is determined by the coexistence of microscopic particle-like and wave-like conduction mechanisms: the former strongly decays with temperature, while the latter increases with temperature, and the sum of the two yields the mildly decreasing trend observed in experiments. We have discussed how grain-boundary scattering affects the conductivity of LaPO\({}_{4}\), showing that such a mechanism has a weak effect in samples having micrometric grains, but would become very important in nanostructured samples [78, 79]. In particular, we have analyzed how the relative strength of the particle-like and wave-like transport mechanisms depends on the temperature, energy, and mean free path of the microscopic heat carriers. We have developed and tested a computational protocol that allows us to describe explicitly, within the Wigner formulation, how compositional disorder affects the conductivity, and we employed it to investigate the thermal properties of La\({}_{1-x}\)Gd\({}_{x}\)PO\({}_{4}\) alloys. 
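Before continuing, we note that the Maxwell-Garnett relation of Eq. (12) used above can be solved for \(\kappa\) in closed form, which makes estimates of this kind easy to reproduce. A minimal sketch follows, with purely illustrative conductivity values (not the computed ones of this work) and with the weight-to-volume-fraction conversion left out, since it requires the phase densities.

```python
def maxwell_garnett(kappa_matrix, kappa_filler, vol_fraction):
    """Composite conductivity from Eq. (12).
    Rearranging (k - k1)/(k + 2 k1) = V (k2 - k1)/(k2 + 2 k1) gives k = k1 (1 + 2 r)/(1 - r)."""
    r = vol_fraction * (kappa_filler - kappa_matrix) / (kappa_filler + 2.0 * kappa_matrix)
    return kappa_matrix * (1.0 + 2.0 * r) / (1.0 - r)

# Illustrative numbers only: a low-conductivity matrix with a more conductive filler (W/m/K)
for v in (0.1, 0.2, 0.3, 0.4):
    print(v, round(maxwell_garnett(kappa_matrix=1.5, kappa_filler=3.0, vol_fraction=v), 3))
```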
We described explicitly compositionally disordered samples simulating systems containing up to 5184 atoms, a size that is one order of magnitude larger than that tractable by state-of-the-art first-principles molecular dynamics techniques. We discussed how the interplay between anharmonicity and disorder affects thermal transport in La\({}_{1-x}\)Gd\({}_{x}\)PO\({}_{4}\) alloys at different temperatures, Figure 8: **Conductivity of LaPO\({}_{4}\)/La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) composites.** LaPO\({}_{4}\) (filler) is added to La\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) (matrix) at different weight percentages, and the thermal conductivity is computed using Maxwell-Garnett (MG) model [54]. Experimental data are taken from Zhang et al. [77]. showing that disorder has strong effects around room temperatures and almost negligible effects at high temperatures (\(\gtrsim 700\) K). We have also shown that the LBTE yields a size-dependent conductivity in the presence of disorder, while the Wigner formulation yields compatible results for compositionally disordered models having different sizes; the latter behavior shares analogies to the size-consistent results obtained using the Wigner formulation in structurally disordered materials [44]. This novel computational scheme sets the stage to rationalize thermal transport with quantum accuracy in solids with compositional-mass disorder, and will be potentially very useful to develop novel design strategies for thermal barrier coatings. ## V Acknowledgements M. S. acknowledges support from Gonville and Caius College, and from the SNSF project P500PT_203178. A. P., L.B. and N. M. acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy (EXC 2077, No. 390741603, University Allowance, University of Bremen) and Lucio Colombi Ciacchi, the host of the "U Bremen Excellence Chair Program". We also thank the HLRN resource allocation board for granting the computational resources on the supercomputer Lise and Emmy at NHR@ZIB and NHR@Gottingen as part of the NHR infrastructure (projects ID:hbp00075 and ID:hbi00059). ## Appendix A Phonon linewidths The linewidths appearing in Eq. (1) are determined by third-order anharmonicity [58; 18], isotopic-mass disorder [29; 18], and grain-boundary scattering [59; 60]. 
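For a single phonon mode, the sketch below combines these three contributions as additive scattering rates (a Matthiessen-like combination, which is our assumption here for illustration), with the grain-boundary term given by the Casimir expression reported at the end of this appendix.

```python
def total_linewidth(gamma_anharmonic, gamma_isotope, group_velocity_norm, grain_size):
    """Total linewidth of one phonon mode, combining the three contributions additively
    (an assumption made for illustration). The boundary term is |v| / L (Casimir model)."""
    gamma_boundary = group_velocity_norm / grain_size
    return gamma_anharmonic + gamma_isotope + gamma_boundary
```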
Specifically, the anharmonic linewidth is: \[\hbar\Gamma^{\mathrm{anh}}(\mathbf{q})_{s}=\frac{\pi}{\hbar N_{c}} \sum_{\mathbf{q^{\prime}},s^{\prime},s^{\prime\prime}}\Big{|}V^{(3)}_{\mathbf{q}s,\bm {q^{\prime}}s^{\prime},\mathbf{q^{\prime\prime}}s^{\prime\prime}}\Big{|}^{2} \tag{10}\] \[\times\Big{\{}\big{[}1{+}\bar{\mathbf{N}}(\mathbf{q^{\prime}})_{s^{\prime }}+\bar{\mathbf{N}}(\mathbf{q^{\prime\prime}})_{s^{\prime\prime}}\big{]}\delta\big{[} \omega(\mathbf{q})_{s}{-}\omega(\mathbf{q^{\prime}})_{s^{\prime}}{-}\omega(\mathbf{q^{ \prime\prime}})_{s^{\prime\prime}}\big{]}\] \[+2\big{[}\bar{\mathbf{N}}(\mathbf{q^{\prime}})_{s^{\prime}}{-}\bar{\mathbf{ N}}(\mathbf{q^{\prime\prime}})_{s^{\prime\prime}}\big{]}\delta\big{[}\omega(\mathbf{q})_{s} +\omega(\mathbf{q^{\prime}})_{s^{\prime}}{-}\omega(\mathbf{q^{\prime\prime}})_{s^{ \prime\prime}}\big{]}\Big{\}},\] where \[V^{(3)}_{\mathbf{q}s,\mathbf{q^{\prime}}s^{\prime},\mathbf{q^{\prime\prime}}s^{\prime \prime}}=\sum_{\begin{subarray}{c}\alpha,\alpha^{\prime},\alpha^{\prime\prime }\\ b,b^{\prime},b^{\prime\prime}\end{subarray}}\mathcal{E}(\mathbf{q})_{s,ba} \mathcal{E}(\mathbf{q^{\prime}})_{s^{\prime},b^{\prime}\alpha^{\prime}}\mathcal{E }(\mathbf{q^{\prime\prime}})_{s^{\prime\prime},b^{\prime}\alpha^{\prime\prime}}\] \[\sqrt{\frac{1}{f_{x}(M_{b})f_{x}(M_{b^{\prime}})f_{x}(M_{b^{ \prime\prime}})}}\] \[\sqrt{\frac{\hbar^{3}}{8}}\sqrt{\frac{1}{\omega(\mathbf{q})_{s} \omega(\mathbf{q^{\prime}})_{s^{\prime}}\omega(\mathbf{q^{\prime\prime}})_{s^{\prime \prime}}}}\] \[\frac{1}{N_{c}}\frac{\partial^{3}E^{tot}}{\partial u(\mathbf{q})_{b \alpha}\partial u(\mathbf{q^{\prime}})_{b^{\prime}\alpha^{\prime}}\partial u(\mathbf{ q^{\prime\prime}})_{b^{\prime}\alpha^{\prime\prime}}}\] are the three-phonon coupling matrix elements [58] and \(f_{x}(M_{b})\) is the mass-replacement function used to account for compositional disorder in the mass-substitution approximation discussed in Sec. III.4.1. The linewidth due to isotopic-mass disorder (used only in the pure cases) is [29; 18] \[\hbar\Gamma^{\mathrm{iso}}(\mathbf{q})_{s}= \frac{\hbar\pi}{2N_{c}}[\omega(\mathbf{q})_{s}]^{2}{\sum_{\mathbf{q^{ \prime}},s^{\prime}}}\delta\big{[}\omega(\mathbf{q})_{s}{-}\omega(\mathbf{q^{\prime}}) _{s^{\prime}}\big{]} \tag{11}\] \[\times\sum_{b}g_{2}^{b}\Big{|}\sum_{\alpha}\mathcal{E}(\mathbf{q})_{s, b\alpha}^{\star}\mathcal{E}(\mathbf{q^{\prime}})_{s^{\prime},ba}^{\star}\Big{|}^{2}.\] Finally, the linewidth due to grain-boundary scattering evaluated according to the Casimir model [60] in the presence of perfectly absorbing boundaries is [59], \[\hbar\Gamma(\mathbf{q})_{s}^{\mathrm{bnd}}=\frac{\|\mathbf{\nu}(\mathbf{q})_{ss}\|}{L}. \tag{12}\] ## Appendix B Representing anharmonic linewidths as a function of frequency In this section, we discuss the details of the computation of the analytical function \(\Gamma_{\mathrm{a}}(\omega)\), used in Sec. III.4.1 to approximatively determine the linewidths as a function of frequency. 
Similarly to previous work [32; 44; 82], the description of anharmonic linewidths as a single-valued function of frequency, \(\Gamma_{\mathrm{a}}(\omega)\), is determined as \[\Gamma_{\mathrm{a}}(\omega)=\frac{1}{\sqrt{\frac{1}{(\Gamma_{1}(\omega))^{2}}+\frac{1}{(\Gamma_{2}(\omega))^{2}}}}, \tag{13}\] where \(\Gamma_{1}(\omega)\) and \(\Gamma_{2}(\omega)\) are defined as \[\Gamma_{1}(\omega){=}\frac{\sum\limits_{\mathbf{q}=\mathbf{0},s}\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\Big{[}-\frac{\hbar^{2}(\omega(\mathbf{q})_{s}-\omega)^{2}}{2\sigma^{2}}\Big{]}}{\sum\limits_{\mathbf{q}=\mathbf{0},s}\tau(\mathbf{q})_{s}\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\Big{[}-\frac{\hbar^{2}(\omega(\mathbf{q})_{s}-\omega)^{2}}{2\sigma^{2}}\Big{]}}, \tag{14}\] \(\Gamma_{2}(\omega)\)=\(p\cdot\omega^{2}\), \[p=\frac{\sum\limits_{\mathbf{q}=\mathbf{0},s}\int\limits_{\omega_{\mathrm{o}}}^{2\omega_{\mathrm{o}}}d\omega_{c}\frac{\Gamma(\mathbf{q})_{s}}{\omega^{*}(\mathbf{q})_{s}}\frac{2.35}{\sqrt{2\pi\sigma^{2}}}\exp\!\left[-\frac{\hbar^{2}(\omega(\mathbf{q})_{s}-\omega_{\mathrm{o}})^{2}}{2\sigma^{2}}\right]}{\sum\limits_{\mathbf{q}=\mathbf{0},s}\int\limits_{\omega_{\mathrm{o}}}^{2\omega_{\mathrm{o}}}d\omega_{c}\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\!\left[-\frac{\hbar^{2}(\omega(\mathbf{q})_{s}-\omega_{c})^{2}}{2\sigma^{2}}\right]}. \tag{23}\] \(\omega_{\mathrm{o}}\) is the smallest non-zero frequency and \(\sigma\)=15 cm\({}^{-1}\) is a broadening chosen sufficiently large to ensure that the linewidths are averaged in a smooth way. The functional form of the approximated function \(\Gamma_{\mathrm{a}}(\omega)\) is inspired by past work [32, 44, 82] and the specific expressions (B1,B2,B3) used to determine it have been devised and validated relying on exact calculations performed in pure LaPO\({}_{4}\). Specifically, we show in Fig. 9 the function \(\Gamma_{\mathrm{LaPO_{4}}}(\omega)\) for pure LaPO\({}_{4}\). In the inset of such a figure we demonstrate that the exact and approximated treatments of anharmonicity yield practically indistinguishable conductivities over the entire temperature range analyzed. ## Appendix C Expression for the average trace of the mode-resolved contributions to the particle-like and wave-like conductivity tensors Here, we report the full expression for the average trace of the mode-resolved contributions to the particle-like and wave-like conductivity tensors appearing in Eq. (4). The expression for \(\bar{\mathcal{K}}_{P}(\mathbf{q})_{s}\) is deduced from the integrand of the particle-like conductivity in the SMA [30], \[\kappa_{\mathrm{P,SMA}}^{\alpha\beta}\!\!=\!\!\frac{1}{\mathcal{V}N_{c}}\!\sum\limits_{\mathbf{q}s}C(\mathbf{q})_{s}\nu^{\alpha}\!(\mathbf{q})_{s,s}\nu^{\beta}\!(\mathbf{q})_{\!s,s}\frac{1}{\Gamma(\mathbf{q})_{s}} \tag{24}\] where \(\mathcal{V}\) is the primitive-cell volume, \(N_{c}\) is the number of \(\mathbf{q}\) points appearing in the sum, and \(C(\mathbf{q})_{s}\), \(\nu^{\beta}(\mathbf{q})_{\!s,s}\) and \(\Gamma(\mathbf{q})_{s}\) are the specific heat, group velocity, and linewidth of the phonon \((\mathbf{q})_{s}\) already discussed in Sec. II. The mode-resolved contributions to the particle-like conductivity \(\bar{\mathcal{K}}_{P}(\mathbf{q})_{s}\) are obtained from the terms appearing in the sum of Eq. 
(24) as \[\bar{\mathcal{K}}_{P}(\mathbf{q})_{s}=C(\mathbf{q})_{s}\bar{V}^{\mathrm{av}}(\mathbf{q})_{s,s}\bar{V}^{\mathrm{av}}(\mathbf{q})_{s,s}\left[\Gamma(\mathbf{q})_{s}\right]^{-1}, \tag{25}\] where \(\bar{V}^{av}(\mathbf{q})_{s,s}=\sqrt{\frac{1}{3}\sum_{\alpha=1}^{3}\left|\nu^{\alpha}\!(\mathbf{q})_{s,s}\right|^{2}}\) is the spatially averaged group velocity. The average trace of the coherences conductivity can be written as, \[\kappa_{C}^{\alpha\beta}\!\!=\!\!\frac{1}{\mathcal{V}N_{c}}\sum\limits_{\mathbf{q},s\neq s^{\prime}}\frac{\omega(\mathbf{q})_{s}\!+\!\omega(\mathbf{q})_{s^{\prime}}}{4}\!\left[\!\frac{C(\mathbf{q})_{s}}{\omega(\mathbf{q})_{s}}\!+\!\frac{C(\mathbf{q})_{s^{\prime}}}{\omega(\mathbf{q})_{s^{\prime}}}\right]\] \[\nu^{\alpha}\!(\mathbf{q})_{s,s^{\prime}}\nu^{\beta}(\mathbf{q})_{s^{\prime},s}\frac{\frac{1}{2}\!\left[\Gamma(\mathbf{q})_{s}\!+\!\Gamma(\mathbf{q})_{s^{\prime}}\right]}{[\omega(\mathbf{q})_{s^{\prime}}\!-\!\omega(\mathbf{q})_{s}]^{2}+\frac{1}{4}[\Gamma(\mathbf{q})_{s}\!+\!\Gamma(\mathbf{q})_{s^{\prime}}]^{2}}; \tag{26}\] and from the integrand of such an equation, one obtains the mode-resolved contributions to the wave-like conductivity \[\bar{\mathcal{K}}_{C}(\mathbf{q})_{s}\!\!=\!\!\sum\limits_{s^{\prime}\neq s}\!\!\frac{C(\mathbf{q})_{s}}{\!C(\mathbf{q})_{s}\!+\!C(\mathbf{q})_{s^{\prime}}}\frac{\omega(\mathbf{q})_{s}\!+\!\omega(\mathbf{q})_{s^{\prime}}}{2}\!\!\left[\!\frac{C(\mathbf{q})_{s}}{\omega(\mathbf{q})_{s}}\!+\!\frac{C(\mathbf{q})_{s^{\prime}}}{\omega(\mathbf{q})_{s^{\prime}}}\!\right]\] \[\times\!\!\left[\frac{1}{3}\!\sum\limits_{\alpha}\left|\nu^{\alpha}\!(\mathbf{q})_{s,s^{\prime}}\right|^{2}\!\right]\!\!\frac{\frac{1}{2}\!\left[\Gamma(\mathbf{q})_{s}\!+\!\Gamma(\mathbf{q})_{s^{\prime}}\right]}{\!\left[\omega(\mathbf{q})_{s^{\prime}}\!-\!\omega(\mathbf{q})_{s}\right]^{2}\!+\!\frac{1}{4}[\Gamma(\mathbf{q})_{s}\!+\!\Gamma(\mathbf{q})_{s^{\prime}}]^{2}}. \tag{27}\] Figure 9: **Frequency-linewidth distribution of LaPO\({}_{4}\) at different temperatures.** The scattered clouds are the linewidths as a function of frequency computed using the primitive cell and a 17\(\times\)17\(\times\)17 \(\mathbf{q}\) mesh. The solid lines are their coarse-graining into single-valued functions \(\Gamma_{a}(\omega)\). The purple area is the overdamped regime, where the Wigner formulation cannot be applied. The inset shows that the conductivity computed on a 17\(\times\)17\(\times\)17 \(\mathbf{q}\) mesh using the exact linewidths (black line) is practically indistinguishable from that computed using the linewidths determined from \(\Gamma_{a}(\omega)\) in a 6\(\times\)6\(\times\)6 supercell with a 3\(\times\)3\(\times\)3 \(\mathbf{q}\) mesh (diamonds). ## Appendix D Computational methods ### Structural and vibrational properties of LaPO\({}_{4}\) The crystal structure of monazite LaPO\({}_{4}\) is taken from the experimental work of Ni et al. [99]; it is monoclinic (space group \(P2_{1}/n\)) and contains 24 atoms (4 formula units) in the primitive cell (Fig. 10). DFT calculations are performed using the Quantum ESPRESSO (QE) distribution [100]. We employed the revised Perdew-Burke-Ernzerhof functional (PBEsol) [101]; pseudopotentials were taken from the SSSP precision library (version 1.1.2) [102; 103]. We used a kinetic energy cut-off of 80 Ry, and the Brillouin zone was sampled using a 3\(\times\)3\(\times\)3 Monkhorst-Pack [104] k-point mesh with a (1 1 1) shift. 
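The mode-resolved expressions above (Eqs. (25) and (27)) translate directly into array operations once the frequencies, linewidths, mode specific heats, and velocity-operator matrix elements at a given \(\mathbf{q}\) point are available. The sketch below is an illustration of that bookkeeping (the \(1/(\mathcal{V}N_{c})\) prefactor is omitted and the input arrays are hypothetical), and the last line evaluates the classification parameter \(c\) of Eq. (5).

```python
import numpy as np

def mode_resolved_conductivities(omega, gamma, cmode, vel):
    """Per-mode particle-like and wave-like contributions at one q point.

    omega : (n,) phonon frequencies
    gamma : (n,) phonon linewidths (same units as omega)
    cmode : (n,) mode specific heats
    vel   : (3, n, n) velocity-operator matrix elements, vel[alpha, s, sp]
    The 1/(V * Nc) prefactor of Eqs. (24)-(27) is left out here.
    """
    n = omega.size
    v2_diag = np.mean(np.abs(vel[:, np.arange(n), np.arange(n)]) ** 2, axis=0)
    k_p = cmode * v2_diag / gamma                          # Eq. (25)

    k_c = np.zeros(n)
    for s in range(n):
        for sp in range(n):
            if sp == s:
                continue
            v2 = np.mean(np.abs(vel[:, s, sp]) ** 2)       # (1/3) sum_alpha |v^alpha_{s,sp}|^2
            half_sum = 0.5 * (gamma[s] + gamma[sp])
            lorentz = half_sum / ((omega[sp] - omega[s]) ** 2 + half_sum ** 2)
            k_c[s] += (cmode[s] / (cmode[s] + cmode[sp])
                       * 0.5 * (omega[s] + omega[sp])
                       * (cmode[s] / omega[s] + cmode[sp] / omega[sp])
                       * v2 * lorentz)                     # Eq. (27)
    return k_p, k_c

# Hypothetical inputs for a toy 4-band case, for illustration only
rng = np.random.default_rng(1)
w = np.sort(rng.uniform(10.0, 100.0, 4))
g = rng.uniform(0.1, 5.0, 4)
cv = rng.uniform(0.5, 1.0, 4)
v = rng.normal(size=(3, 4, 4))
v = 0.5 * (v + np.transpose(v, (0, 2, 1)))                 # make the toy velocity operator Hermitian
kP, kC = mode_resolved_conductivities(w, g, cv, v)
c_param = (kP - kC) / (kP + kC)                            # Eq. (5): +1 particle-like, -1 wave-like
```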
The structure is relaxed using the variable cell relax (vc-relax) scheme, with a force convergence threshold of 10\({}^{-5}\) Ry/Bohr. The resulting equilibrium lattice parameters are in good agreement with experiments, see Table 1. The second-order force constants (fc2) are computed using density functional perturbation theory (DFPT) [62] over a 4\(\times\)4\(\times\)4 \(\mathbf{q}\) mesh in reciprocal space. The LO-TO splitting at the \(\Gamma\) point is incorporated using the non-analytic term correction computed with dielectric tensor and Born effective charges [91]. The absence of the imaginary phonon modes in the phonon dispersion (Fig. 11) confirms the dynamical stability. The third-order force constants (fc3) are computed with a \(2\times 2\times 2\) supercell using a synergistic combination of QE and ShengBTE package [105]. Here, the finite differences method is used and the nearest neighbor (nn) interaction up to 8 nn is incorporated. The QE fc2 and fc3 are exported to hdf5 formats using Phonopy [106] and HIPHIVE [107] packages, respectively. The linewidths are then computed using the Phono3py package [21; 108]. ### Raman spectrum The Raman intensities \(I_{s}\) appearing in Eq. (3) were computed from the Raman tensor[109; 110], \[\frac{\partial\chi_{ij}}{\partial u_{k,I}}=\frac{1}{\Omega}\frac{\partial^{2 }F_{k,I}}{\partial\mathbf{\varepsilon}_{i}\partial\mathbf{\varepsilon}_{j}} \tag{16}\] where \(F_{k,I}\) is the force acting on atom \(I\) and \(\mathbf{\mathcal{E}}\) is the macroscopic electric field. The Raman tensor was computed using the finite-electric-field approach [110] as implemented in the aiida-vibroscopy package [109] within the AiiDA infrastructure [111; 112]. The second-order derivative appearing in Eq. (16) was evaluated through the application of a small electric field, described by the electric-enthalpy functional [113; 114], an extension of the Kohn-Sham functional that allows us to find meta-stable solutions in the presence of a homogeneous electric field. In particular, we used a fourth-order central difference formula with an electric field step of about \(0.8\times 10^{-3}\) (Ry) a.u. (1 (Ry) a.u. \(\approx 36.3609\) V/A) to remove the finite size dependence of the numerical derivative, see Ref. [109] for details, and a Monkhorst-Pack grid of \(10\times 9\times 10\) to ensure a well-converged spectra. Finally, the tensor was symmetrized according to the LaPO\({}_{4}\) space group. We conclude this section discussing in more detail the temperature dependence of the Raman spectra in Fig. 1. Specifically, we note that relative intensity of the theoretical Raman spectra does not decrease Figure 11: **Phonon dispersion of LaPO\({}_{4}\).** Red (blue) is with (without) non-analytic (NAC) term correction. We highlight how NAC affects the splitting of LO-TO mode at \(\mathbf{q}=\mathbf{0}\). Figure 10: **DFT-optimized primitive cell of LaPO\({}_{4}\).** Green, purple, and red represent La, P, and O atoms, respectively. \begin{table} \begin{tabular}{c c c c c c c c} \hline Functional & \(a\) (\(\AA\)) & \(b\) (\(\AA\)) & \(c\) (\(\AA\)) & \(\alpha\)(°) & \(\beta\)(°) & \(\gamma\)(°) & \(V\) (\(\AA^{3}\)) \\ \hline PBEsol & 6.843 & 7.074 & 6.478 & 90.000 & 103.478 & 90.000 & 304.938 \\ Exp. from Ni et al. [99] & 6.831 & 7.071 & 6.503 & 90.000 & 103.270 & 90.000 & 305.732 \\ \hline \end{tabular} \end{table} Table 1: Optimized lattice parameters and volume of LaPO\({}_{4}\). Experimental data are taken from Ni et al. [99]. 
The theoretical primitive-cell volume is 0.26 % smaller than the experimental measure at 300 K. monotonically with temperature for all the peaks -- the peak around 200 cm\({}^{-1}\) is more intense at 1000 K than at 300 K. Such a behavior originates from the presence of the temperature-independent instrumental linewidth in the Lorentzian of Eq. (3), and from the Bose-Einstein distribution appearing in the Raman cross section \(I_{s}\). To quantitatively understand this behavior, we consider the maximum Raman intensity at \(\omega\approx\) 200 cm\({}^{-1}\), \[I(\omega=\omega_{s},T)\sim(\bar{\textit{N}}(\omega_{s},T)+1)/(\Gamma_{s}+ \Gamma_{\text{ins}}), \tag{4}\] where \((\bar{\textit{N}}(\omega_{s},T)+1)=(1-\exp\{-\hbar\omega_{s}/k_{B}T\})^{-1}\). We see from Fig. 9 that for \(\omega_{s}\approx\) 200 cm\({}^{-1}\) we have 2 cm\({}^{-1}\) = \(\Gamma_{\text{ins}}\gg\Gamma_{s}\). It follows that the temperature dependence of the Raman intensity of these modes arises entirely from the Bose-Einstein occupations. In particular, comparing the intensities at \(T_{1}\) = 300 K and \(T_{2}\) = 1000 K, we have \[I(\omega_{s},T_{2})/I(\omega_{s},T_{1})\approx(\bar{\textit{N}}(\omega_{s},T_ {2})+1)/(\bar{\textit{N}}(\omega_{s},T_{1})+1)\approx 2.5\;, \tag{5}\] and this explains why the low-frequency Raman peak becomes sharper upon increasing temperature. In contrast, for high-frequency modes, 2 cm\({}^{-1}\) = \(\Gamma_{\text{ins}}\ll\Gamma_{s}\sim T\), and from Eq. (4) it follows that \((\bar{\textit{N}}(\omega_{s},T)+1)/\Gamma_{s}\) decreases upon increasing temperature. ### Thermal conductivity The thermal conductivity is calculated by solving the linearized form of the Wigner transport equation (LWTE) as implemented in the Phono3py code [108]. The scattering operator is computed on a mesh of size \(17\times 17\times 17\) by accounting for the isotopic scattering effects [29, 30] and third-order anharmonicity. To simulate the compositional disorder in the Wigner framework, the following computational protocol is adopted. For the \(6\times 6\times 6\) supercell, the second-order force constants (fc2) are constructed by mapping the primitive-cell fc2 onto the above supercell [115]. The procedure is repeated for different compositions (\(x\)=0.05, 0.15, 0.30, 0.50, 0.70, 0.85, 0.95, 1.00), in which La atoms are randomly replaced with Gd (see some representative configurations in Fig. 12). To verify the dynamical stability, the phonon density of states is computed at all configurations (Fig. 12). All phonon frequencies are real and positive at all compositions considered here, confirming the dynamical stability of the alloy structures. Further, we compute the velocity operator for each composition on a \(3\times 3\times 3\) mesh. The end-component (GdPO\({}_{4}\)) linewidths and conductivity are computed in the mass-substitution approximation (La is replaced by Gd in the LaPO\({}_{4}\) primitive cell). The linewidths of the end components Figure 12: **Explicit simulation of harmonic vibrational properties of La\({}_{1-x}\)Gd\({}_{x}\)PO\({}_{4}\) alloys.** Panel a), front view of the La\({}_{1-x}\)Gd\({}_{x}\)PO\({}_{4}\) alloys with different degrees of compositional disorder (\(6\times 6\times 6\) supercell, containing 5184 atoms). The green, violet, red, and purple colors represent La, P, O, and Gd atoms, respectively. Panel b), phonon density of states as a function of composition; all frequencies are real and positive, indicating the dynamic stability of the alloys at all compositions. 
Panel c), zoom of phonon DOS in the range 50 to 300 cm\({}^{-1}\), where compositional disorder has the largest effects. are coarse-grained and fitted into a single-valued function (see main text and Fig. 9). For alloys, the linewidths at each composition are approximated using a linear combination of the linewidths of the end components (see the main text, Sec. III.4.1). Finally, the thermal conductivity is calculated using the approximated linewidths and the velocity operator at 300, 700, and 1000 K. To fix the size of the supercell, convergence tests have been done using the La\({}_{0.5}\)Gd\({}_{0.5}\)PO\({}_{4}\) system with different supercells. Fig. 6 shows that our calculations are converged with respect to the size of the supercell. The calculations discussed in Fig. 7 were performed using a \(6\times 6\times 6\) supercell. Finally, we note that the analyses reported in Fig. 6 and Fig. 7 were done neglecting the effect of non-analytical term corrections [91]. This approximation was performed to reduce the computational cost for the calculation of the velocity operator in the large supercells; we show in Fig. 13 that such an approximation has negligible effects on the conductivity. ### Data availability The raw data needed to reproduce the findings of this study are available on the Materials Cloud Archive [116].
2307.16654
Homonuclear ultracold elastic $s$-wave collisions of alkali atoms via multichannel quantum defect theory
Multichannel quantum defect theory (MQDT) provides a powerful toolkit for describing and understanding collisions of cold alkali atoms. Various MQDT approximations differ primarily in how they characterize the so-called short-ranged $K$-matrix, ${\mathbf K}_{\text{sr}}$, which encapsulates the short-ranged, high-energy physics into a handful of low-energy parameters that exhibit simple and smooth dependence on energy and field. Here, we compare three different methods for computing ${\mathbf K}_{\text{sr}}$ for homonuclear collisions of alkali atoms, from lithium to cesium. The MQDT calculations are benchmarked against numerically converged coupled-channels calculations that use a log-derivative propagator out to the asymptotic region. We study how well these approximations reproduce positions of $s$-wave magnetic Feshbach resonances, comparing to experiment where possible, and identify the limitations of various approximations.
Alyson Laskowski, Nirav Mehta
2023-07-31T13:38:16Z
http://arxiv.org/abs/2307.16654v1
Homonuclear ultracold elastic \(s\)-wave collisions of alkali atoms via multichannel quantum defect theory ###### Abstract Multichannel quantum defect theory (MQDT) provides a powerful toolkit for describing and understanding collisions of cold alkali atoms. Various MQDT approximations differ primarily in how they characterize the so-called short-ranged \(K\)-matrix, \({\bf K}_{\rm sr}\), which encapsulates the short-ranged, high-energy physics into a handful of low-energy parameters that exhibit simple and smooth dependence on energy and field. Here, we compare three different methods for computing \({\bf K}_{\rm sr}\) for homonuclear collisions of alkali atoms, from lithium to cesium. The MQDT calculations are benchmarked against numerically converged coupled-channels calculations that use a log-derivative propagator out to the asymptotic region. We study how well these approximations reproduce positions of \(s\)-wave magnetic Feshbach resonances, comparing to experiment where possible, and identify the limitations of various approximations. ## I Introduction The ability to control the scattering length by tuning an applied magnetic field in the vicinity of a magnetic Feshbach resonance has now become a standard tool in experimental ultracold physics [1]. For example, manipulation of the scattering length in this manner plays a key role in the realization of strongly interacting many-body systems [2]. It enables the creation of loosely bound molecules via Feshbach association, which is the first step in the formation of deeply bound molecules by subsequent stimulated Raman adiabatic passage [3]. Control of the two-body scattering length in this manner has played a key role in the study of Efimov physics [4; 5; 6; 7; 8; 9]. Theoretical developments have kept pace with experiment in predicting and understanding the properties of magnetic Feshbach resonances [1], and one of the most powerful theoretical tools that has been brought to bear upon the problem is multichannel quantum defect theory (MQDT). MQDT provides a powerful formalism for computing and understanding the field and energy dependence of collisional cross sections in ultracold systems. It has a long history, with seminal contributions made by many authors [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. Various formulations differ significantly in notation and scope, but not in spirit. MQDT at its heart leverages a separation of energy and length scales in order to simplify the calculation of low-energy observables. It is in this sense an "effective theory" similar in spirit to modern renormalization techniques and effective field theories. In its application to ultracold atomic collisions, it can be made to agree with coupled channels calculations to a numerical accuracy approaching exactness. The strength and nature of ultracold collisions depend on the separation distance \(R\) between atoms. At small \(R\), the wells of the ground state spin-singlet and spin-triplet Born-Oppenheimer potentials are many orders of magnitude deeper than any other relevant energy scale, including those of the long-range van der Waals tail and one-atom hyperfine-Zeeman interactions. This robust separation of energy and length scales enables one to treat the collision in stages. First, one solves the short-range problem to determine a short-range \(K\)-matrix, \({\bf K}_{\rm sr}\), which is defined with respect to energy-analytic reference functions \(\{\hat{f},\hat{g}\}\) that are solutions to the long-range (e.g. 
van der Waals) potential common to all collision channels. Then one treats the long-range physics using the methods of MQDT, which involve accounting for (1) the phase accumulated in the long-range potential by \(\{\hat{f},\hat{g}\}\)--both with respect to each other and with respect to a pair of energy-normalized solutions \(\{f,g\}\), (2) the energy-normalization of \(\{f,g\}\), particularly when expressed in terms of the energy-analytic pair \(\{\hat{f},\hat{g}\}\), and (3) the reflected amplitude from closed channels. The short-range \(K\)-matrix is viewed as "input" into the machinery of MQDT, encoding information about the short-range physics relevant to low-energy (near threshold) observables. Moreover, \({\bf K}_{\rm sr}\) exhibits a smooth and simple dependence on both energy and magnetic field. Therefore, it only needs to be calculated on a coarse grid of energy and field values to provide a complete description of the short-range physics. The frame transformation (FT) [22] provides a powerful tool for approximating \({\bf K}_{\rm sr}\) by writing it in terms of the singlet and triplet quantum defects \(\mu_{S}\) and a sum over the spin singlet (\(S=0\)) and spin triplet (\(S=1\)) projection operators. A recoupling then rotates \({\bf K}_{\rm sr}\) into the field-dressed hyperfine basis that diagonalizes the long-range Hamiltonian. In the limit that the hyperfine and Zeeman splittings vanish, the frame transformation becomes essentially exact, limited only by the quality of the energy-analytic reference functions. We consider two variations of the frame transformation: (1) the energy independent frame transformation (EIFT), which requires only the zero-energy quantum defects to compute \({\bf K}_{\rm sr}\), and (2) the energy-dependent frame transformation (EDFT), which requires the quantum defects on a coarse grid of energy spanning the separation of two-body collision thresholds determined by the hyperfine-Zeeman energies. A number of studies [23; 24; 25] have utilized an energy independent frame transformation to build essentially a three-parameter MQDT that requires only the singlet and triplet scattering lengths \(a_{S}\) and \(a_{T}\), and the leading dispersion coefficient \(C_{6}\). In such a scheme, \(a_{S}\), \(a_{T}\) and \(C_{6}\) may be considered tunable parameters that can be adjusted to reproduce low-energy observables such as the positions of certain Feshbach resonances. The simplicity of this approach gives it enormous predictive power, as demonstrated by a recent study that identified a very large number of "broad" Feshbach resonances [25]. The present study places such frame transformation calculations in context by providing direct comparisons to more accurate implementations of MQDT, and also to numerically converged coupled channels calculations, which we take here to be "exact". This paper is structured as follows. In Section II, we discuss our model of alkali collisions, including the interaction Hamiltonian and field-dressed hyperfine-Zeeman basis. We describe the various Born-Oppenheimer potentials adapted for this work, discussing their properties and any necessary modifications made for the present calculations. Section III provides a brief overview of MQDT for ultracold collisions along with explanations of EIFT and EDFT. Our results, including the positions of \(s\)-wave resonances and zero crossings for particular collision channels of each species, are presented in Section IV. 
We show that when one obtains \({\bf K}_{\rm sr}\) from a rigorous boundary condition on a multichannel short-ranged solution -- what we shall refer to as the "MQDT" calculation -- the low-energy scattering observables agree, nearly exactly, with converged coupled-channels (CC) calculations using Johnson's log-derivative propagator [26]. The agreement between MQDT and CC calculations, however, is only possible if the model potential energy functions for the singlet and triplet configurations reliably converge to the long-range dispersion form Eq. (5) at separation distances where all collision channels are locally open. We also find that frame transformation approximations for \({\bf K}_{\rm sr}\) provide an excellent description of lighter alkali species, especially lithium, but become progressively unreliable for heavier species in which the hyperfine-Zeeman splitting is much larger, and the energy-dependence of the quantum defects over the necessary range of energy is appreciable. Finally, it is worth mentioning that while analytical solutions to the Schrödinger equation for potentials that vary as \(R^{-6}\) have been formulated by Gao [17], we opt instead to use the numerical approach proposed in [27], namely the Milne phase amplitude method [28], to compute the energy-analytic reference functions that play a key role in MQDT. This approach is, for our purpose, simpler and more versatile since it is applicable to the case of a more general long-range potential that includes higher order dispersion terms -- including these long-range dispersion terms reduces the energy dependence of the quantum defects and generally improves the MQDT. It is also, in our modest view, simpler to implement than the rather complicated analytical solution of [17]. To a new student of MQDT, the literature can be daunting. In the course of this work, we have relied heavily on Refs. [29; 22] to gain an understanding of MQDT methods, particularly as they relate to ultracold atomic collisions. The appendix of Ref. [30] provides useful expressions for matrix elements relevant to the hyperfine-Zeeman Hamiltonian, and Ref. [27] provides a good starting point for computing the energy-analytic reference functions. ## II Model of alkali collisions Our model for ultracold collisions of alkali atoms follows closely that of Ref. [31]. For two-body atomic scattering, one generally writes the wavefunction as \(\Psi(R,\Omega)=R^{-1}\sum_{i}\psi_{i}(R)\Phi_{i}(\Omega)\), where \(R\) is the nuclear separation of the atoms and \(\Omega\) is a collective coordinate describing all angular and internal degrees of freedom. The problem is reduced to a coupled channels equation of the form \[\sum_{j}\left[\frac{\hbar^{2}}{2\mu}\left(-\frac{d^{2}}{dR^{2}}+\frac{\ell_{j }(\ell_{j}+1)}{R^{2}}\right)\delta_{ij}+V_{ij}\right]\psi_{j}=E\psi_{i}. \tag{1}\] Here, \(\mu\) is the reduced mass of the homonuclear dimer. The interaction matrix \({\bf V}(R)\) for two ultracold alkali atoms in a magnetic field is of the form \[{\bf V}(R)={\bf P}_{0}V_{0}(R)+{\bf P}_{1}V_{1}(R)+\sum_{n=1}^{2}{\bf H}_{n}^{ \rm HZ}, \tag{2}\] where \({\bf P}_{0}\) and \({\bf P}_{1}\) are the singlet and triplet projection operators, and \(V_{S}(R)\) are the Born-Oppenheimer potentials corresponding to the singlet (\(S=0\)) and triplet (\(S=1\)) molecular ground states \(X^{1}\Sigma_{g}^{+}\) and \(a^{3}\Sigma_{u}^{+}\), respectively. 
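Since Eq. (2) is central to everything that follows, it may help to see the projection operators written out explicitly. The sketch below builds \({\bf P}_{0}\) and \({\bf P}_{1}\) from \(\vec{s}_{1}\cdot\vec{s}_{2}\) in the four-dimensional two-electron spin space (with \(\hbar=1\)) and assembles the electronic part of \({\bf V}(R)\) at a single separation; the potential values at the end are made-up placeholders, not the Born-Oppenheimer curves used in this work.

```python
import numpy as np

# Electron spin operators s = sigma/2 (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

# s1.s2 in the two-electron product space
s1_dot_s2 = sum(np.kron(s, s) for s in (sx, sy, sz))

P1 = 0.75 * np.eye(4) + s1_dot_s2      # triplet projector (trace 3)
P0 = 0.25 * np.eye(4) - s1_dot_s2      # singlet projector (trace 1)

assert np.allclose(P0 + P1, np.eye(4))
assert np.allclose(P0 @ P0, P0) and np.allclose(P1 @ P1, P1)
assert np.allclose(P0 @ P1, np.zeros((4, 4)))

# Electronic part of Eq. (2) at one separation R, with placeholder potential values (hartree)
V0_R, V1_R = -0.010, -0.002
V_elec = P0 * V0_R + P1 * V1_R
```

The identities \({\bf P}_{0}+{\bf P}_{1}=\mathbb{1}\), \({\bf P}_{S}^{2}={\bf P}_{S}\), and \({\bf P}_{0}{\bf P}_{1}=0\), checked by the assertions, are the properties used when \({\bf K}_{\rm sr}\) is written as a sum over singlet and triplet contributions in the frame-transformation approximations discussed above.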
The matrix operator \({\bf H}_{n}^{\rm HZ}\) is the combined hyperfine and Zeeman interaction for each atom, \[{\bf H}_{n}^{\rm HZ}=\left[\frac{A_{n}}{\hbar^{2}}\vec{\bf i}_{n}\cdot\vec{\bf s}_{n}+\frac{\mu_{B}}{\hbar}\left(g_{s}\vec{\bf s}_{n}+g_{ni}\vec{\bf i}_{n}\right)\cdot\vec{B}\right], \tag{3}\] where \(\vec{\bf i}_{n}\) and \(\vec{\bf s}_{n}\) are the nuclear and electronic spins of atom \(n\), \(A_{n}\) is the hyperfine coupling in the electronic ground state, and \(g_{s}\) and \(g_{ni}\) are electron and nuclear \(g\)-factors in units of the bohr magneton \(\mu_{B}\). We adhere to the convention of Ref. [32] and define the \(g\)-factors to be of the opposite sign as their corresponding magnetic dipole moments. For convenience and clarity, a collection of the relevant parameters from Ref. [32] is given in Table 1. The two-atom collision thresholds in a magnetic field are determined by the eigenstates of \(\mathbf{V}(R)\) in the limit \(R\to\infty\), where \(V_{S}(R)\) vanishes. These states are constructed by appropriately symmetrizing the eigenstates of Eq. (3). The hyperfine interaction couples the nuclear spin \(\vec{\mathbf{i}}\) and electronic spin \(\vec{\mathbf{s}}\) of each atom, and is diagonal in the total atomic spin \(\vec{\mathbf{f}}=\vec{\mathbf{i}}+\vec{\mathbf{s}}\). However, the Zeeman interaction couples states of different \(f\), so that only the projection \(m_{f}\) remains a good quantum number at finite field. While the states of \(\mathbf{H}_{n}^{\text{HZ}}\) can be found analytically by the Breit-Rabi formula [33]--for a detailed derivation, see [34]--in practice, we compute the matrix elements of the Hamiltonian in Eq. (3) in the hyperfine basis \(|f,m_{f}\rangle\) and diagonalize the resulting matrix numerically. Figure 1 shows the energy levels of a single atom in a magnetic field for all of the alkali species considered here except for \({}^{40}\)K, which has a negative hyperfine coupling constant that results in the inverted diagram shown in Fig. 2. As we discuss below, these energies will determine the two-atom collision thresholds. The short-ranged physics (\(R\lesssim 30a_{0}\)) of Eq. (2) is dominated by the very deep singlet and triplet potentials, while the long-range physics (\(R\gtrsim 30a_{0}\)) is controlled by the comparatively shallow van der Waals tail and weak hyperfine-Zeeman structure of the atoms. For \(R\gtrsim 30a_{0}\), the off-diagonal elements of \(V_{ij}(R)\) in Eq. (1) vanish, and the diagonal elements are determined by the dispersion coefficients \(C_{6},C_{8}\) and \(C_{10}\) \[V_{ii}(R)-E_{i}^{\text{th}}\to V_{\text{LR}}(R) \tag{4}\] where \(E_{i}^{\text{th}}\) are the collision thresholds and the long-range potential common to all channels is of the form \[V_{\text{LR}}(R)=-\frac{C_{6}}{R^{6}}-\frac{C_{8}}{R^{8}}-\frac{C_{10}}{R^{10}} \ \text{ for }R\gtrsim 30a_{0}. \tag{5}\] The natural unit of length \(\beta\) associated with \(V_{\text{LR}}\), and the corresponding natural unit of energy \(E_{\beta}\) are fixed by the depth of \(V_{\text{LR}}\) at a separation distance \(\beta\): \[E_{\beta}=\frac{\hbar^{2}}{2\mu\beta^{2}}=|V_{\text{LR}}(\beta)| \tag{6}\] This definition reduces to twice the usual van der Waals length when \(C_{8}=C_{10}=0\), \(\beta=\beta_{6}=(2\mu C_{6}/\hbar^{2})^{1/4}=2R_{\text{vdW}}\), and renders the dispersion coefficients unitless when expressed in these units. ### Field-dressed hyperfine basis For the two-atom system, we follow Ref. 
[31] and represent the symmetry requirements for identical bosons or fermions by defining the basis kets as \[|\{\alpha\beta\}\rangle=\frac{|\alpha,\beta\rangle\pm(-1)^{\ell}\,|\beta, \alpha\rangle}{\sqrt{2(1+\delta_{\alpha,\beta})}}. \tag{7}\] where the Greek letters refer to the internal states of the individual atoms. For example, \(|\alpha,\beta\rangle=|f_{1},m_{1},f_{2},m_{2}\rangle\) represents atom 1 in hyperfine state \(|\alpha\rangle=|f_{1},m_{1}\rangle\) and atom 2 in state \(|\beta\rangle=|f_{2},m_{2}\rangle\), while the \(+(-)\) sign is taken for bosons (fermions). We neglect in this work the magnetic dipole-dipole interaction, so the \(s\)-wave remains decoupled from higher partial waves. Furthermore, the total \(M_{F}=m_{f_{1}}+m_{f_{2}}\) remains a good quantum number at finite field. Each calculation presented here is specified by a particular \(M_{F}\), within which the lowest one-atom states can be read by the Breit-Rabi graphs. The properly symmetrized eigenstates of the two-atom hyperfine-Zeeman Hamiltonian comprise the "field-dressed" basis, constructed as a linear combination of symmetrized atomic hyperfine states \[|i\rangle=\sum_{\{\alpha\beta\}}C_{\{\alpha\beta\}}^{i}\,|\{\alpha\beta\}\rangle. \tag{8}\] The scattering thresholds correspond to the elements of the diagonal matrix \[\mathbf{E}_{\text{th}}=\mathbf{C}^{\text{T}}(B)\mathbf{H}^{\text{ HZ}}\mathbf{C}(B), \tag{9}\] where \(\mathbf{C}(B)\) is the field-dependent rotation comprised of the eigenvector elements \(C_{\{\alpha\beta\}}^{i}\). We express and solve Eq. (1) in the field-dressed spin basis given by Eq. (8). The scattering cross section is determined by matching the solutions to asymptotic Bessel functions in the limit \(R\to\infty\). In our calculations, because we neglect the weak, long-ranged magnetic dipole-dipole interaction, we match at a radius \(R\) much larger than the natural length \(\beta\), where both the singlet \(V_{0}(R)\) and triplet \(V_{1}(R)\) potentials become negligible, and the two-atom interaction is reduced to a sum of one-atom terms: \(\lim_{R\to\infty}\mathbf{V}(\mathbf{R})=\mathbf{H}^{\text{HZ}}=\sum_{n=1}^{2} \mathbf{H}_{n}^{\text{HZ}}\). In practice, \(R\approx 20\beta\) is sufficiently large to ensure that the van der Waals tail is negligible. We consider only \(s\)-wave collisions in this work, but a larger matching radius may be necessary for higher partial waves, particularly at threshold energies. ### Singlet/Triplet Potentials A great deal of effort has been expended by many authors [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46] in the development of state-of-the-art Born \begin{table} \begin{tabular}{c c c c} Atom & \(i\) & \(g_{i}\) & \(A_{\text{hf}}/h\)[MHz] \\ \hline \({}^{6}\)Li & 1 & \(-0.0004476540(3)\) & 152.1368407(20) \\ \({}^{7}\)Li & 3/2 & \(-0.001182213(6)\) & 401.7520433(5) \\ \({}^{23}\)Na & 3/2 & \(-0.0008046108(8)\) & 885.813064(5) \\ \({}^{39}\)K & 3/2 & \(-0.00014193489(12)\) & 230.5898601(3) \\ \({}^{40}\)K & 4 & \(+0.000176490(34)\) & \(-285.7308(24)\) \\ \({}^{85}\)Rb & 5/2 & \(-0.0002936400(6)\) & 1011.910813(2) \\ \({}^{87}\)Rb & 3/2 & \(-0.0009951414(10)\) & 3417.34130642(15) \\ \({}^{133}\)Cs & 7/2 & \(-0.00039885395(52)\) & 2298.1579425 \\ \end{tabular} \end{table} Table 1: Hyperfine couplings and nuclear \(g\)-factors used in this work. Nuclear g-factors \(g_{i}\) should be multiplied by the bohr magneton. 
Oppenheimer potential curves for alkali dimers in the spin singlet (\(X^{1}\Sigma_{g}^{+}\)) and spin triplet (\(a^{3}\Sigma_{u}^{+}\)) configurations. The models we adopt here were chosen because they are given in closed analytic form with conveniently tabulated parameters. The models broadly fall into two categories: (1) the Hannover polynomial expansion (or X-representation) [35; 36], and (2) the Morse/Long-Range (MLR) potential [37; 38]. Details regarding these models are contained in Refs. [39; 40; 41; 42; 43; 44]. The Hannover X-Rep potentials are used for \({}^{23}\)Na [41], \({}^{39}\)K [42], \({}^{40}\)K [42], \({}^{85}\)Rb [43], and \({}^{87}\)Rb [43]. These potentials require essentially no modification for our purpose; they allow for immediate and direct comparisons with experimentally observed Feshbach resonance positions. Moreover, these potentials exhibit rapid exponential convergence to the asymptotic form \(V_{\rm LR}\) of Eq. (5) for \(R\gtrsim 30a_{0}\). The long-range form of the potentials in the X-representation is of the form: \[V_{S}^{\rm(X\text{-}Rep)}(R)\to V_{\rm LR}(R)\pm A_{\rm ex}R^{\gamma}e^{- \beta_{\rm ex}R}. \tag{10}\] When including the exponential "exchange" term, we take the "+" sign for the triplet and the "\(-\)" sign for the singlet. In Fig. 3, we show the relative error of the singlet (panel (a)) and triplet (panel (b)) potentials with respect to the long-range potential \(V_{\rm LR}\) for a selection of alkali dimers. The X-Rep potential is shown only for the Figure 1: The one-atom Breit-Rabi energy spectrum for atomic species considered in this work. The \(f\) quantum number is only good in the zero-field limit. At any field, \(m_{f}\) remains a good quantum number, but different \(f\) levels are coupled. The curves are labeled by their \(m_{f}\) quantum number, or in cases where \(m_{f}\) is fractional, by \(2m_{f}\). Figure 2: The one-atom Breit-Rabi energy spectrum for \({}^{40}\)K, for which the hyperfine coupling is negative, leading to an inverted Breit-Rabi diagram. case of \({}^{85}\)Rb, but other potentials of this type exhibit similar convergence. The MLR potentials used for \({}^{6}\)Li, \({}^{7}\)Li and \({}^{133}\)Cs, on the other hand, do not behave asymptotically as Eq. (10) [39; 40; 44]. While they do indeed converge to the form of Eq. (5), that convergence is significantly slower than the exchange term, as seen by the red \({}^{7}\)Li curve in Fig. 3. The MLR potentials for \({}^{6}\)Li also include so-called "Born-Oppenheimer breakdown" (BOB) corrections, which are not included in the potentials for the "reference" isotopologue \({}^{7}\)Li\({}_{2}\). These corrections are configuration-dependent. They behave as \(R^{-6}\) to leading order and alter the long-range potential, leading to an "effective" \(C_{6}\) coefficient that is different for the singlet and triplet configurations. Therefore, neither of the potentials for \({}^{6}\)Li converge to \(V_{\rm LR}\), as demonstrated by the dotted-black curves in Fig. 3. Meanwhile, the MLR potentials for \({}^{133}\)Cs exhibit even slower convergence to \(V_{\rm LR}\), particularly for the triplet, as shown by the dotted-green curve in Fig. 3. 
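As a simple check of the kind used to produce Fig. 3, one can tabulate the relative deviation of a model potential from \(V_{\rm LR}\). The short sketch below does this for a toy X-Rep-style tail of the form of Eq. (10); the dispersion and exchange parameters are invented for illustration and do not correspond to any of the fitted potentials.

```python
import numpy as np

# illustrative parameters only (atomic units); not the fitted values of Table 2
C6, C8, C10 = 4.7e3, 5.8e5, 7.6e7          # dispersion coefficients
A_ex, gamma, beta_ex = 1.0e-2, 5.0, 1.1     # hypothetical exchange-term parameters

def V_LR(R):
    return -C6 / R**6 - C8 / R**8 - C10 / R**10   # Eq. (5)

def V_triplet_tail(R):
    # Eq. (10) with the "+" sign chosen for the triplet configuration
    return V_LR(R) + A_ex * R**gamma * np.exp(-beta_ex * R)

R = np.linspace(25.0, 60.0, 8)               # bohr
rel_err = np.abs(V_triplet_tail(R) - V_LR(R)) / np.abs(V_LR(R))
for r, e in zip(R, rel_err):
    print(f"R = {r:5.1f} a0   |V - V_LR|/|V_LR| = {e:.2e}")
```

The exchange contribution dies off exponentially, which is the rapid convergence exhibited by the X-Rep potentials; the MLR curves, by contrast, approach \(V_{\rm LR}\) only algebraically.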
We shall soon discuss how these potentials are modified in this work in order to accommodate an MQDT treatment, which requires that the boundary condition for determining \({\bf K}_{\rm sr}\) be applied at a separation distance where (1) the off-diagonal elements of \(V_{ij}\) vanish, and (2) the diagonal elements \(V_{ii}\) are reliably converged to \(V_{\rm LR}\) while all collision channels are locally open. For potentials of the Hannover X-Rep type, the matching radius \(R_{m}\) at which \({\bf K}_{\rm sr}\) is determined may be chosen to be as small as \(30a_{0}\), and all quantum defects are independent of this matching radius up to about \(R_{m}\approx 40a_{0}\) where some collision channels begin to become energetically closed. For potentials of the MLR type, however, the convergence to the asymptotic form is prohibitively slow, and two options are available for improving the performance of both MQDT and FT methods. First, one may extend the matching radius out beyond the distance at which all channels are strictly open at the risk of incurring greater energy dependence in the quantum defects. Second, one may _force_ the singlet and triplet potentials to the long-range form by using a "switching" function like Eq. (46). We take the former strategy with lithium where the hyperfine-Zeeman splitting is relatively weak, and even at separation distances of \(55a_{0}\), the quantum defects vary smoothly, nearly linearly with energy. For cesium, however, the higher collision channels are strongly closed beyond about \(40a_{0}\) and the energy dependence in \({\bf K}_{\rm sr}\) and \(\mu_{S}\) quickly becomes unmanageable, so we take the latter strategy and force \(V_{S}^{\rm MLR}\to V_{\rm LR}\) for \(R\gtrsim 40a_{0}\). As discussed above, the Born-Oppenheimer breakdown corrections [39; 40] included in the MLR potentials for \({}^{6}\)Li produce different "effective" \(C_{6}\) coefficients for the singlet and triplet configurations. This is undesirable for an MQDT calculation, and so we have chosen to exclude these corrections from the potential. We have replaced the dispersion coefficients quoted in Refs. [39; 40] with those of Ref. [47], which include nonadiabatic corrections as well. For a comprehensive list of dispersion coefficients used in this work, see Table 2. This replacement significantly changes the singlet and triplet scattering lengths, and further changes to the potential are necessary in order to restore \(a_{S}\) and \(a_{T}\) to more realistic values. A common strategy [48; 57] for reproducing experimental data is to adjust the volume of the potentials by adding a quadratic term inside the equilibrium distance (i.e., for \(R<R_{e}\) where \(R_{e}\) is the potential energy minimum) of the form \[V_{\rm shift}(R)=V_{c}^{(S)}(R-R_{e})^{2}\ \ \mbox{for}\ R<R_{e}. \tag{11}\] Here, \(V_{c}^{(S)}\) are constant parameters that may be adjusted to reproduce the desired scattering lengths (or particular resonance positions) and \(S\) is the total spin quantum number. Figure 3: (color online) Convergence of the singlet and triplet potentials to their asymptotic form is shown for a few illustrative cases. Panel (a) shows the singlet potentials while Panel (b) shows the triplet potentials. The solid lines correspond to potentials as they are used in this work, while the dotted curves represent unaltered potentials for \({}^{6}\)Li (dotted black) and for \({}^{133}\)Cs (dotted green).
Table 3 shows our scattering length calculations for all of the alkali species considered in this work. Despite the fact that the computation of single-channel scattering lengths is a relatively simple, numerically stable procedure--at least compared to solutions to large coupled channels problems--our calculations yield scattering lengths different from other published values for the same potentials. The differences are slight, yet significant since the precise positions of magnetic Feshbach resonances are sensitive to small changes in the singlet and triplet phase shifts. These differences are discussed case-by-case in Section IV. ## III Multichannel quantum defect theory for ultracold collisions As discussed in Sec. II, the two-atom Hamiltonian exhibits a natural separation of energy and length scales. At short range (\(R\lesssim 30a_{0}\)), the interaction is dominated by the deep singlet and triplet potentials, while at longer range \(R\gtrsim 30a_{0}\), the potentials approach their comparatively weak long-range dispersion form Eq. (5), offset by thresholds determined by the two-atom hyperfine-Zeeman interaction. At asymptotically large distances \(R\gg\beta\), the solution may be matched to Bessel functions to determine the physical \(K\)-matrix. The basic MQDT procedure is as follows: (1) Solve the Schrodinger equation in each of these three regions, the short-range region, the van der Waals region, and the asymptotic region. (2) Match the short-ranged numerical solution to the solution in the van der Waals region in order to determine the short-range \(K\)-matrix \(\mathbf{K}_{sr}\), whose eigenvalues exhibit smooth, simple dependence on energy and field. (3) Match the solution in the van der Waals region where all collision channels are locally open to the appropriate asymptotic solution in order to compute a physical \(K\)-matrix. Here, we shall focus on steps (2) and (3) of this procedure. For step (1), we use Johnson's log-derivative propagator [26]. ### Overview of MQDT Central to the implementation of MQDT, we seek a linearly independent pair of solutions, \(\hat{f}_{i}(R)\) and \(\hat{g}_{i}(R)\), to the single-channel Schrodinger equation in the presence of \(V_{\text{LR}}(R)\) that are analytic in energy across the collision threshold. These reference functions satisfy \[\left(\frac{\hbar^{2}}{2\mu}\left[-\frac{d^{2}}{dR^{2}}+\frac{l_{i}(l_{i}+1)}{ R^{2}}\right]+V_{\text{LR}}(R)-E_{i}\right)\begin{pmatrix}\hat{f}_{i}(R)\\ \hat{g}_{i}(R)\end{pmatrix}=0 \tag{12}\] with \(E_{i}=E-E_{i}^{\text{th}}\). The desired reference functions are constructed using the Milne phase amplitude method [27; 28]: \[\hat{f}_{i}(R) =\alpha_{i}(R)\sin\left(\int_{R_{x}}^{R}\alpha_{i}^{-2}(R^{\prime })dR^{\prime}+\phi_{i}\right) \tag{13}\] \[\hat{g}_{i}(R) =-\alpha_{i}(R)\cos\left(\int_{R_{x}}^{R}\alpha_{i}^{-2}(R^{\prime })dR^{\prime}+\phi_{i}\right) \tag{14}\] where \(\phi_{i}\) is a channel-dependent (but energy-_independent_) phase and \(\alpha_{i}(R)\) satisfies the nonlinear differential equation \[\alpha_{i}(R)^{\prime\prime}+k_{i}^{2}(R)\alpha_{i}(R)=\alpha_{i}^{-3}(R). \tag{15}\] Here, \(k_{i}(R)=\sqrt{2\mu[E_{i}-V_{\text{LR}}(R)]/\hbar^{2}-\ell_{i}(\ell_{i}+1)/R^{2}}\) is the local wavenumber in the \(i^{\text{th}}\) channel.
It is convenient to impose WKB-like boundary conditions [27] deep in the well (we choose \(R_{x}=0.07\beta\)) of the long-range reference potential: \[\alpha_{i}(R_{x})=\frac{1}{\sqrt{k_{i}(R_{x})}}, \tag{16}\] and \[\alpha_{i}^{\prime}(R_{x})=\frac{d}{dR}\left(\frac{1}{\sqrt{k_{i}(R)}}\right) _{R=R_{x}}. \tag{17}\] The selection of the point \(R_{x}\) in Eqs. 14-17 is somewhat arbitrary. All that is required is that \(V_{\text{LR}}\) is deep enough that our semi-classical boundary conditions are reasonable. Fixing the energy-independent phase \(\phi_{i}\) in Eq. 14 amounts to a "standardization" of the MQDT reference functions. Note that as \(R\rightarrow\infty\), \(V_{\text{LR}}(R)\) in Eq. 5 reduces to a potential of the form \(-C_{6}/R^{6}\). The strategy is to focus on the zero-energy solutions to such a potential [29; 58], \[\chi_{0}^{+}(R) =\sqrt{\frac{R}{\beta}}J_{-\frac{1}{2}(2\ell+1)}\left(\frac{\beta ^{2}}{2R^{2}}\right) \tag{18}\] \[\chi_{0}^{-}(R) =\sqrt{\frac{R}{\beta}}J_{\frac{1}{2}(2\ell+1)}\left(\frac{\beta ^{2}}{2R^{2}}\right) \tag{19}\] \begin{table} \begin{tabular}{c c c c} dimer & Ref. & \(C_{6}[E_{h}a_{0}^{6}]\) & \(C_{8}[E_{h}a_{0}^{6}]\) & \(C_{10}[E_{h}a_{0}^{10}]\) \\ \hline \({}^{6}\)Li\({}_{2}\) & [47] & 1394.1608 & 8.346 030 6\(\times 10^{4}\) & 7.374 489 5\(\times 10^{6}\) \\ \({}^{7}\)Li\({}_{2}\) & [47] & 1394.0508 & 8.345 586 0\(\times 10^{4}\) & 7.374 198 4\(\times 10^{6}\) \\ \({}^{23}\)Na\({}_{2}\) & [41] & 1560.0791 & 1.249 611 3\(\times 10^{5}\) & 8.155 141 1\(\times 10^{6}\) \\ \({}^{39}\)K\({}_{2}\) & [42] & 3925.9127 & 4.223 789 7\(\times 10^{5}\) & 4.937 959 1\(\times 10^{7}\) \\ \({}^{40}\)K\({}_{2}\) & [42] & 3925.9127 & 4.223 789 7\(\times 10^{5}\) & 4.937 959 1\(\times 10^{7}\) \\ \({}^{85}\)Rb\({}_{2}\) & [43] & 4710.2163 & 5.766 964 5\(\times 10^{5}\) & 7.591 280 9\(\times 10^{7}\) \\ \({}^{87}\)Rb\({}_{2}\) & [43] & 4710.2163 & 5.766 964 5\(\times 10^{5}\) & 7.591 280 9\(\times 10^{7}\) \\ \({}^{133}\)Cs\({}_{2}\) & [44] & 6881.3838 & 1.022 55\(\times 10^{6}\) & 1.5903\(\times 10^{8}\) \\ \end{tabular} \end{table} Table 2: Dispersion coefficients used in this work. where \(J_{\nu}(x)\) is the Bessel function of the first kind. As \(R\to\infty\), \(\chi_{0}^{+}\propto R^{\ell+1}\) and \(\chi_{0}^{-}\propto R^{-\ell}\). One possible standardization is to choose the standardization phase \(\phi_{i}\) such that \(\hat{f}_{i}(R)\) coincides with \(\chi_{0}^{+}(R)\) as \(R\to\infty\)[58]. In order to make our formulation easily adaptable to higher partial waves, we adhere to the standardization proposed in Ref. [29], demanding instead that \(\hat{g}(R)\) coincide with \(\chi_{0}^{-}\) at zero energy. There is a unique value of \(\tan\phi_{i}\) that satisfies this condition [29], namely, \[\tan\phi_{i}=-\left(\frac{W\left(\chi_{0}^{-},\hat{g}_{i\phi_{i}=0}\right)}{W \left(\chi_{0}^{+},\hat{f}_{i\phi_{i}=0}\right)}\right)_{R=R_{f}} \tag{20}\] where \(W(x,y)\) is the Wronskian and is given by \(W(x,y)=x(R)y^{\prime}(R)-x^{\prime}(R)y(R)\) and \(R_{f}=20\beta\) is sufficiently large for the present study. The linearly independent reference functions \(\hat{f}_{i}(R)\) and \(\hat{g}_{i}(R)\) are used to define the short-ranged \(K\)-matrix, \(\mathbf{K}_{\mathrm{sr}}\), via a boundary condition on the general solution to Eq. (1) \(\mathbf{\psi}(R)\) at \(R_{m}\), somewhere in the van der Waals region. 
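The construction of \(\hat{f}_{i}\) and \(\hat{g}_{i}\) through Eqs. (13)-(17) reduces to integrating the Milne equation (15) together with the accumulated phase integral. The sketch below does this for a pure \(-C_{6}/R^{6}\) reference potential at zero energy and \(\ell=0\), in van der Waals units where \(\hbar^{2}/2\mu=1\) and \(\beta=1\); the standardization phase \(\phi_{i}\) of Eq. (20) is set to zero here and would be fixed afterwards from the Wronskians against \(\chi_{0}^{\pm}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

ell = 0
E = 0.0   # zero energy; units with hbar^2/(2*mu) = 1 and C6 = beta = 1

def k2(R):
    # local wavenumber squared for V_LR = -1/R^6
    return E + 1.0 / R**6 - ell * (ell + 1) / R**2

def rhs(R, y):
    alpha, dalpha, theta = y              # theta = accumulated phase integral
    return [dalpha, -k2(R) * alpha + alpha**-3, alpha**-2]

# WKB-like initial conditions of Eqs. (16)-(17), imposed deep in the well
Rx = 0.07
k = np.sqrt(k2(Rx))
dkdR = (np.sqrt(k2(Rx + 1e-8)) - np.sqrt(k2(Rx - 1e-8))) / 2e-8
y0 = [1.0 / np.sqrt(k), -0.5 * k**-1.5 * dkdR, 0.0]

sol = solve_ivp(rhs, (Rx, 20.0), y0, rtol=1e-10, atol=1e-12, dense_output=True)

def f_hat(R, phi=0.0):
    alpha, _, theta = sol.sol(R)
    return alpha * np.sin(theta + phi)    # Eq. (13)

def g_hat(R, phi=0.0):
    alpha, _, theta = sol.sol(R)
    return -alpha * np.cos(theta + phi)   # Eq. (14)

for R in (0.5, 1.0, 5.0, 20.0):
    print(f"R = {R:5.1f} beta   f_hat = {f_hat(R): .6f}   g_hat = {g_hat(R): .6f}")
```

Because the initial conditions follow the WKB envelope, the amplitude \(\alpha_{i}(R)\) remains slowly varying even where the reference functions themselves oscillate rapidly, which is what makes the phase-amplitude representation numerically convenient.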
We let \(\hat{\mathbf{f}}(R)\) and \(\hat{\mathbf{g}}(R)\) be diagonal matrices in the field-dressed channel space with functions \(\hat{f}_{i}(R)\) and \(\hat{g}_{i}(R)\), respectively, along the diagonal. Then \[\mathbf{\psi}(R)=\hat{\mathbf{f}}(R)-\hat{\mathbf{g}}(R)\mathbf{K}_{\mathrm{sr}}. \tag{21}\] Here, \(\mathbf{\psi}(R)\) is a matrix of solutions with elements \(\psi_{i\beta}\), where \(\beta\) denotes the state index, and \(i\) denotes the channel component. At very large separation distance (\(R\gtrsim 20\beta\)), \(V_{\mathrm{LR}}\to 0\) and the atomic system is described by a set of uncoupled equations, \[\left(\frac{\hbar^{2}}{2\mu}\left[-\frac{d^{2}}{dR^{2}}+\frac{\ell_{i}(\ell_{ i}+1)}{R^{2}}\right]-E_{i}\right)\psi_{i}(R)=0 \tag{22}\] where \(E_{i}=E-E_{i}^{\mathrm{th}}\). For open channels with \(E_{i}>0\), the solution \(\psi_{i}(R)\) is given by a linear combination of phase-shifted Riccati functions which asymptotically behave as \[f_{i}(R) \to\sqrt{\frac{k_{i}}{\pi}}\sin\left(k_{i}R-l_{i}\frac{\pi}{2}+ \eta_{i}\right) \tag{23}\] \[g_{i}(R) \to-\sqrt{\frac{k_{i}}{\pi}}\cos\left(k_{i}R-l_{i}\frac{\pi}{2}+ \eta_{i}\right) \tag{24}\] as \(R\to\infty\). The parameter \(\eta_{i}\) represents the phase that is accumulated in the van der Waals region, and it is given by \[\tan\eta_{i}=\left(\frac{W(\hat{f}_{i}(R),f_{i}^{s}(R))}{W(\hat{f}_{i}(R),g_{i} ^{s}(R))}\right)_{R=R_{f}}. \tag{25}\] Here \(f_{i}^{s}(R)\) and \(g_{i}^{s}(R)\) are the Riccati functions which approach \[f_{i}^{s}(R)=\sqrt{\frac{k_{i}}{\pi}}k_{i}Rj_{\ell}(k_{i}R)\to\sqrt{\frac{k_{i}}{ \pi}}\sin\left(k_{i}R-l_{i}\frac{\pi}{2}\right) \tag{26}\] \[g_{i}^{s}(R)=\sqrt{\frac{k_{i}}{\pi}}k_{i}Rn_{\ell}(k_{i}R)\to-\sqrt{\frac{k_{i}}{ \pi}}\cos\left(k_{i}R-l_{i}\frac{\pi}{2}\right) \tag{27}\] The "energy-normalized reference functions" given in Eqs. (23)-(24) \(\{f_{i},g_{i}\}\) are related to \(\{\hat{f}_{i},\hat{g}_{i}\}\) by the following transformation, \[\begin{pmatrix}f_{i}\\ g_{i}\end{pmatrix}=\begin{pmatrix}\mathcal{A}_{i}^{1/2}&0\\ \mathcal{A}_{i}^{-1/2}\mathcal{G}_{i}&\mathcal{A}_{i}^{-1/2}\end{pmatrix}\begin{pmatrix} \hat{f}_{i}\\ \hat{g}_{i}\end{pmatrix}, \tag{28}\] where the parameter \(\mathcal{A}_{i}\) is related to the energy-normalization of \(\{f_{i},g_{i}\}\) and \(\mathcal{G}_{i}\) accounts for the phase difference accumulated by \(\{\hat{f}_{i},\hat{g}_{i}\}\) in \(V_{\mathrm{LR}}\)[29]. These parameters are computed using the following formulas: \[\mathcal{A}_{i}=-\left(\frac{W(\hat{g}_{i},f_{i}^{s})-\tan\eta_{i}W(\hat{g}_{i },g_{i}^{s})}{W(\hat{f}_{i},g_{i}^{s})+\tan\eta_{i}W(\hat{f}_{i},f_{i}^{s})} \right)_{R=R_{f}} \tag{29}\] \[\mathcal{G}_{i}=-\left(\frac{W(\hat{g}_{i},g_{i}^{s})+\tan\eta_{i}W(\hat{g}_{i },f_{i}^{s})}{W(\hat{f}_{i},g_{i}^{s})+\tan\eta_{i}W(\hat{f}_{i},f_{i}^{s})} \right)_{R=R_{f}} \tag{30}\] For closed channels (\(E_{i}<0\)), the solution is a superposition of \(\hat{f}_{i}(R)\) and \(\hat{g}_{i}(R)\) that vanishes as \(R\to\infty\): \[\hat{f}_{i}(R)+\cot\gamma_{i}\,\hat{g}_{i}(R)\to e^{-\kappa_{i}R}\quad\text{as }R\to\infty. \tag{31}\] \begin{table} \begin{tabular}{c c c c c c c c} & \multicolumn{4}{c}{Present Calculation} & \multicolumn{4}{c}{Literature} \\ \cline{2-7} \(X^{1}\Sigma_{g}^{+}/a^{3}\Sigma_{u}^{+}\) model & \(V_{c}^{0}[E_{h}/a_{0}^{2}]\) & \(V_{c}^{1}[E_{h}/a_{0}^{2}]\) & \(a_{S}/a_{0}\) & \(a_{T}/a_{0}\) & \(a_{S}/a_{0}\) & \(a_{T}/a_{0}\) & Other Refs.
\\ \hline \({}^{6}\)Li\({}_{2}\) (MLR) [39; 40] & 2.65\(\times 10^{-7}\) & 1.254 65\(\times 10^{-6}\) & 45.166 & \(-\)2121.11 & 45.154(2)[48] & \(-\)2113(2)[48] & [49; 50] \\ \({}^{7}\)Li\({}_{2}\) (MLR) [39; 40] & 1.88\(\times 10^{-6}\) & 1.85\(\times 10^{-6}\) & 34.339 & \(-\)26.923 & 34.331(2)[48] & \(-\)26.92(7)[48] & [50] \\ \({}^{23}\)Na\({}_{2}\) (X-rep) [41] & 0 & 0 & 18.820 & 64.302 & 18.81(80)[41] & 64.30(40)[41] & [51; 52] \\ \({}^{39}\)K\({}_{2}\) (X-rep) [42] & 0 & 0 & 138.808 & \(-\)33.391 & 138.80[42] & \(-\)33.41[42] & [53] \\ \({}^{40}\)K\({}_{2}\) (X-rep) [42] & 0 & 0 & 104.425 & 169.185 & 104.42[42] & 169.18[42] & [53] \\ \({}^{85}\)Rb\({}_{2}\) (X-rep) [43] & 0 & 0 & 2572.37 & \(-\)392.496 & 2720[43] & \(-\)386.9[43] & [54; 55] \\ \({}^{87}\)Rb\({}_{2}\) (X-rep) [43] & 0 & 0 & 90.161 & 98.867 & 90.35[43] & 99.04[43] & [55] \\ \({}^{133}\)Cs\({}_{2}\) (MLR) [44] & \(-\)2.53\(\times 10^{-7}\) & 5.9705\(\times 10^{-7}\) & 280.253 & 2405.21 & 280.25[44] & 2405.6[44] & [56; 57] \\ \end{ Here, \(\kappa_{i}=\sqrt{2\mu|E_{i}|/\hbar^{2}}\) and \(\gamma_{i}\) is a parameter that determines what combination of \(\{\hat{f}_{i},\hat{g}_{i}\}\) vanishes as \(R\rightarrow\infty\). It is computed by \[\tan\gamma_{i}=\left(\frac{W(e^{-\kappa_{i}r},\hat{g}_{i}(R))}{W(e^{-\kappa_{i} r},\hat{f}_{i}(R))}\right)_{R=R_{f}} \tag{32}\] With the energy-dependent MQDT parameters \(\mathcal{A}\), \(\mathcal{G}\), and \(\cot\gamma\) in hand, one may determine the \(K\)-matrix defining the asymptotic boundary condition with respect to functions \(f_{i}(R)\) and \(g_{i}(R)\), namely \(\mathbf{\psi}(R)\rightarrow\mathbf{f}-\mathbf{Kg}\). First, \(\mathbf{K}_{\mathrm{sr}}\) is partitioned into blocks depending on which channels are asymptotically opened (\(P\)) or closed (\(Q\)): \[\mathbf{K}_{\mathrm{sr}}=\begin{pmatrix}K_{P}^{\mathrm{sr}}&K_{PQ}^{\mathrm{ sr}}\\ K_{QP}^{\mathrm{sr}}&K_{QQ}^{\mathrm{sr}}\end{pmatrix}. \tag{33}\] Then, we use the closed-channel parameter \(\gamma\) to transform the \(N\times N\) short-range reaction matrix into an \(N_{P}\times N_{P}\) matrix using the channel-closing formula \[\tilde{\mathbf{K}}=\mathbf{K}_{PP}^{\mathrm{sr}}-\mathbf{K}_{PQ}^{\mathrm{ sr}}(\mathbf{K}_{QQ}^{\mathrm{sr}}+\cot\mathbf{\gamma})^{-1}\mathbf{K}_{QP}^{ \mathrm{sr}}. \tag{34}\] This transformation accounts for the reflected amplitude arising from closed channels, and captures the physics of closed-channel resonances. Next, \(\tilde{\mathbf{K}}\) must be properly normalized with respect to energy. This is accomplished with the expression, \[\mathbf{K}=\mathcal{A}^{1/2}\tilde{\mathbf{K}}(\mathbf{1}+\mathcal{G}\tilde{ \mathbf{K}})^{-1}\mathcal{A}^{1/2}. \tag{35}\] This \(\mathbf{K}\), however, is not yet the full physical \(K\)-matrix because it does not include effects from the additional phase \(\eta\) which \(\{f,g\}\) acquire with respect to \(\{f^{s},g^{s}\}\). We obtain a physical \(S\)-matrix by: \[\mathbf{S}^{\mathrm{phys}}=e^{i\eta}\frac{\mathbf{1}+i\mathbf{K}}{\mathbf{1}- i\mathbf{K}}e^{i\eta} \tag{36}\] from which \(\mathbf{K}^{\mathrm{phys}}\) is obtained by \[\mathbf{K}^{\mathrm{phys}}=i\frac{\mathbf{1}-\mathbf{S}^{\mathrm{phys}}}{ \mathbf{1}+\mathbf{S}^{\mathrm{phys}}}. \tag{37}\] Note that in the above expressions, \(\mathbf{\gamma}\), \(\mathcal{A}\), \(\mathcal{G}\), and \(\mathbf{\eta}\) are diagonal matrices of the corresponding MQDT functions evaluated at the appropriate channel energy \(E-E_{\hat{2}}^{\mathrm{th}}\). 
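The algebra of Eqs. (33)-(37) is pure linear algebra once \({\bf K}_{\rm sr}\) and the MQDT functions are in hand. A minimal sketch follows, with an invented three-channel \({\bf K}_{\rm sr}\) and invented values of \(\cot\gamma\), \(\mathcal{A}\), \(\mathcal{G}\), and \(\eta\) standing in for the quantities computed from Eqs. (25), (29), (30), and (32):

```python
import numpy as np

def physical_K(K_sr, open_idx, closed_idx, cot_gamma_Q, A_P, G_P, eta_P):
    """Eqs. (33)-(37): eliminate closed channels, renormalize, include eta."""
    P, Q = np.ix_(open_idx, open_idx), np.ix_(closed_idx, closed_idx)
    PQ, QP = np.ix_(open_idx, closed_idx), np.ix_(closed_idx, open_idx)

    # Eq. (34): channel-closing formula
    K_tilde = K_sr[P] - K_sr[PQ] @ np.linalg.solve(
        K_sr[Q] + np.diag(cot_gamma_Q), K_sr[QP])

    # Eq. (35): energy normalization with A and G
    A_half = np.diag(np.sqrt(A_P))
    K = A_half @ K_tilde @ np.linalg.inv(
        np.eye(len(A_P)) + np.diag(G_P) @ K_tilde) @ A_half

    # Eqs. (36)-(37): include the long-range phase eta
    one = np.eye(len(A_P))
    e_ieta = np.diag(np.exp(1j * np.array(eta_P)))
    S = e_ieta @ (one + 1j * K) @ np.linalg.inv(one - 1j * K) @ e_ieta
    K_phys = 1j * (one - S) @ np.linalg.inv(one + S)
    return K_phys.real

# invented 3-channel example: channel 0 open, channels 1-2 closed
K_sr = np.array([[0.2, 0.5, 0.1],
                 [0.5, -0.3, 0.4],
                 [0.1, 0.4, 0.8]])
K_phys = physical_K(K_sr, open_idx=[0], closed_idx=[1, 2],
                    cot_gamma_Q=[2.0, -1.5], A_P=[1.3], G_P=[0.2], eta_P=[0.4])
print("tan(delta) =", K_phys[0, 0])   # single open channel, cf. Eq. (39)
```

For one open channel this reproduces Eq. (38): the closed channels enter only through the resolvent \((\mathbf{K}_{QQ}^{\mathrm{sr}}+\cot\boldsymbol{\gamma})^{-1}\), which is where closed-channel Feshbach resonances appear.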
For ultracold collisions in the lowest channel, \(\tilde{\mathbf{K}}\), \(\mathbf{K}\), and \(\mathbf{K}^{\mathrm{phys}}\) are each reduced to a single matrix element, and one can write the physical \(K\)-matrix element as \[K^{\mathrm{phys}}=\frac{\tan\eta+K}{1-K\tan\eta}, \tag{38}\] or in terms of the \(s\)-wave phase shift as: \[K^{\mathrm{phys}}=\tan\delta \tag{39}\] We are primarily interested in the scattering length \(a\), which is related to the \(s\)-wave phase shift \(\delta\) by \[a=-\lim_{k\to 0}\frac{\tan\delta}{k} \tag{40}\] In all calculations presented here, we compute the MQDT functions using Eqs. 25, 29, 30, and 32. ### Frame Transformation At short separation distances, i.e., \(R\lesssim 30\)\(a_{0}\), the physics is dominated by the deep Born-Oppenheimer potentials \(V_{S}(R)\). Therefore, to a good approximation, any hyperfine or Zeeman interactions can be neglected at short range, and the atomic system can be described by a set of uncoupled equations in the singlet and triplet channels written here only for the \(s\)-wave, \[\left(-\frac{\hbar^{2}}{2\mu}\frac{d^{2}}{dR^{2}}+V_{S}(R)-E\right)\psi_{S}(R)=0 \tag{41}\] In Figure 4 we show the quantum defects \(\mu_{S}(E)\) in the singlet and triplet eigenchannels as a function of energy. The range of energy is different for each dimer, and is determined by the maximum energy difference between collision thresholds \(\Delta E^{\mathrm{th}}=\max E_{i}^{\mathrm{th}}-\min E_{i}^{\mathrm{th}}\) at the largest fields considered, 1200G. For collisions at energies near the lowest threshold (\(\min E_{i}^{\mathrm{th}}\)), there may be a resonance due to a bound state attached to the highest threshold (\(\max E_{i}^{\mathrm{th}}\)), so the lowest energy one may imagine evaluating the quantum defect at is \(-\Delta E^{\mathrm{th}}\). Meanwhile, one may imagine collisions of atoms prepared in excited magnetic levels undergoing collisions that occur at relatively high energy (\(\Delta E^{\mathrm{th}}\)) with respect to the ground-state channel. These collisions are not explicitly considered in this work, but nevertheless suggest that one requires the quantum defects, in principle, over a range \(-|\Delta E^{\mathrm{th}}|<E<|\Delta E^{\mathrm{th}}|\). More technically, the scale \(\Delta E^{\mathrm{th}}\) is set by the EDFT calculation, which requires that we compute the channel-weighted average energy Eq. (44). As one varies atomic mass from \({}^{6}\)Li to \({}^{133}\)Cs, one finds that \(\Delta E^{\mathrm{th}}\) varies from \(\sim 0.25\)K \(\approx 5\)GHz to about \(\sim 1\)K \(\approx 20\)GHz. Over the scale of relevant energies, the defects themselves vary roughly linearly, with slopes that tend to increase with mass. To find the solution \(\psi_{S}(R)\), we numerically integrate Eq. (41) from \(R\approx a_{0}\), sufficiently small so that \(\psi(R)\to 0\) due to the hard repulsive core of the potential, out to \(R_{m}\sim 40\)\(a_{0}\) (or \(55a_{0}\) for lithium as described in Section II.2). Then we match each solution to a linear combination of \(\hat{f}(R)\) and \(\hat{g}(R)\) at \(R_{m}\) to determine the singlet and triplet quantum defects \(\mu_{S}\) at zero energy. These _single-channel_ quantum defects are determined by imposing a single channel boundary condition (analogous to Eq. (21)) on the numerical solution \(\psi_{S}(R)\) to Eq. (41), \[\psi_{S}(R)\rightarrow\hat{f}(R)-\hat{g}(R)\tan\left(\pi\mu_{S}\right).
\tag{42}\] At large \(R\), however, the total electronic spin \(S\) is no longer a good quantum number and the interaction between the particles is no longer diagonal in the molecular basis \(|\lambda\rangle=|SM_{S}IM_{I}\rangle\). The frame-transformation provides a powerful approximation to the short-range reaction matrix \(\mathbf{K}^{\mathrm{sr}}\) in the basis \(|i\rangle\) that defines the collision channels in which the system is diagonal at large \(R\)[58]: \[K_{i,i^{\prime}}^{\mathrm{sr}}=\sum_{\lambda}\left\langle i|\lambda\right\rangle \tan\left(\pi\mu_{S}(E)\right)\left\langle\lambda|i^{\prime}\right\rangle. \tag{43}\] In the absence of an external magnetic field, the asymptotic channels are simply the properly symmetrized hyperfine states Eq. (7). However, if there is an applied field, the asymptotic dissociation channels are now eigenstates of the full \(\mathbf{H}^{\mathrm{HZ}}\), as in Eq. (8). The short-ranged reaction matrix obtained from Eq. (43) depends only on the single-channel quantum defects \(\mu_{S}(E)\) and the field-dependent transformation that accomplishes the dressing. It is not entirely clear, however, at what energy \(E\) one should evaluate the quantum defects when computing \(K^{\mathrm{sr}}\) from Eq. (43), for the defects themselves are functions of energy measured with respect to the common singlet and triplet thresholds (zero), while the collision energy is measured with respect to the asymptotic threshold energies \(E_{i}^{\mathrm{th}}\) computed in Eq. (9). As a first approximation, one may assume that the energy dependence of the quantum defects is negligible, and simply evaluate \(\mu_{S}\) at zero energy. This results in what we call the _energy independent frame transformation_ (EIFT). A better approximation, which results in the _energy dependent frame transformation_ (EDFT), is to evaluate \(\mu_{S}(E)\) at the channel-weighted average energy [58] \[\bar{E}_{\lambda}=\sum_{i}\left(E-E_{i}^{\mathrm{th}}\right)|\langle\lambda|i \rangle|^{2}. \tag{44}\] Both of the EIFT and EDFT approximations circumvent the need to solve a set of coupled equations, allowing all scattering observables to be computed with _single_ channel calculations only. The more rigorous boundary condition Eq. (21) needed for MQDT, on the other hand, requires a CC calculation in the region \(R\leq R_{m}\). We refer to calculations that stem from Eq. (21) as "full" MQDT calculations, labeled as "MQDT" in figures that follow. ## IV Results Here, we present results for the scattering length versus magnetic field for homonuclear collisions of alkali atoms ranging from lithium to cesium. We focus on \(s\)-wave collisions only, and identify the positions of magnetic Feshbach resonances and zero crossings in the lowest collision channel for a given \(M_{F}\) block. These are compiled in Table 4 along with available experimental data. Empty cells in the last column indicate that we were unable to find an experimental measurement in the literature. The locations of many zeros associated with narrow resonances are difficult to observe experimentally, and several high-field resonances have yet to appear in the literature. All calculations are performed at a collision energy of \(1\mu\)K. ### Lithium For both isotopes of lithium, we use the MLR potentials of Refs. [39; 40], but with two significant alterations. First, we use the dispersion coefficients \(C_{n}\) tabulated in Ref. [47], which include effects arising from the finite mass of the atomic nuclei.
Second, we modify the short-ranged behavior of the singlet/triplet potentials by adding a term of the form Eq. (11). The first alteration is particularly necessary for our purpose of developing and testing the accuracy of MQDT methods because the dispersion coefficients reported in Refs. [39; 40] differ for the singlet and triplet channels. It is desirable to have the _same_ long-range behavior in each collision channel for MQDT, so that the MQDT parameters \(\mathcal{G}(\mathcal{E}),\mathcal{A}(\mathcal{E}),\eta(E),\gamma(E)\) can be computed uniquely. Having modified the dispersion coefficients, it is essential to include the short-ranged potential in Eq. (11) in order to restore the singlet and triplet scattering lengths to Figure 4: (color online) Quantum defects for both the singlet (left) and triplet (right) potential energy functions are shown for each homonuclear dimer. The energy (divided by the Boltzmann constant, \(k_{B}\)) scale is given in kelvin. The range of energies for each case is fixed by the maximum separation of collision thresholds, ranging from \(-|\Delta E^{\mathrm{th}}|\) to \(+|\Delta E^{\mathrm{th}}|\), where \(\Delta E^{\mathrm{th}}=\max E_{i}^{\mathrm{th}}-\min E_{i}^{\mathrm{th}}\) at the largest fields considered (1200 G). physically realistic values. References [39; 40] report \({}^{6}\)Li scattering lengths \(a_{S}=45.05(9)a_{0}\) and \(a_{T}=-3602(95)a_{0}\), where the reported uncertainties arise from statistical errors in the direct potential fit. Our scattering length calculations using their subroutines [73] for generating the potentials "out of the box" yield \(a_{S}/a_{0}=45.046\) and \(a_{T}/a_{0}=-3430.2\), showing excellent agreement for the singlet \(a_{S}\), but \(\approx 5\%\) discrepancy in the triplet \(a_{T}\), for which we cannot account. We have conducted rigorous tests of our calculations, as detailed in Appendix A. The _same_ code used to solve the coupled-channels problem was used to compute the singlet and triplet scattering lengths. We adjust the parameters \(V_{c}^{(S)}\) primarily to reproduce the scattering lengths reported in Ref. [48]. 
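The adjustment of \(V_{c}^{(S)}\) can be automated as a one-dimensional search on the zero-energy scattering length. The sketch below illustrates the idea on a toy single-channel potential with a \(-C_{6}/R^{6}\) tail, using a simple Numerov integration in place of the log-derivative propagator; the potential, the reduced units, and the magnitudes of \(V_{c}\) are invented purely for illustration.

```python
import numpy as np

# toy single-channel potential in reduced units (hbar^2/2mu = 1): V = C12/R^12 - C6/R^6
C6, C12 = 1.0, 1.0e-4
Re = (2.0 * C12 / C6) ** (1.0 / 6.0)             # position of the potential minimum

def V(R, Vc):
    base = C12 / R**12 - C6 / R**6
    return base + Vc * (R - Re) ** 2 * (R < Re)  # Eq. (11): inner-wall correction

def scattering_length(Vc, R0=0.15, Rmax=25.0, h=2.0e-4):
    """Zero-energy scattering length from Numerov integration of u'' = V(R) u."""
    R = np.arange(R0, Rmax, h)
    Q = V(R, Vc)                                 # u'' = Q u at E = 0
    w = 1.0 - (h**2 / 12.0) * Q
    u = np.zeros_like(R)
    u[1] = 1.0e-10                               # start deep inside the repulsive wall
    for n in range(1, len(R) - 1):
        u[n + 1] = (2.0 * (1.0 + 5.0 * h**2 / 12.0 * Q[n]) * u[n]
                    - w[n - 1] * u[n - 1]) / w[n + 1]
    # outside the potential u(R) -> c*(R - a): extract a from two well-separated points
    i = np.searchsorted(R, 20.0)
    return (u[-1] * R[i] - u[i] * R[-1]) / (u[-1] - u[i])

for Vc in (-2.0e6, 0.0, 2.0e6):                  # purely illustrative magnitudes
    print(f"V_c = {Vc: .1e}   a = {scattering_length(Vc): .4f}")
```

In practice one would wrap `scattering_length` in a root search on \(a(V_{c})-a_{\rm target}\) (for example with `scipy.optimize.brentq`), taking care that the search bracket does not straddle a pole of \(a(V_{c})\) where a bound state crosses threshold.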
Ultimately, \begin{table} \begin{tabular}{c c c c c c c} Atom & Feature & CC & MQDT & EDFT & EIFT & EXPERIMENT \\ \hline & zero & 527.407 & 527.220 & 527.200 & 526.851 & 527.5(2) [59], 528(4) [60], 530(3) [61] \\ \({}^{6}\)Li, \(M_{F}=0\) & pole & 543.286 & 543.284 & 543.282 & 542.934 & 543.28(1) [62], 543.286(3) [63] \\ & zero & 543.387 & 543.384 & 543.382 & 543.034 & & \\ & pole & 832.180 & 832.186 & 831.779 & 831.527 & 834.1(1.5) [49], 822(3) [64] \\ & zero & 140.909 & 140.917 & 139.854 & 139.614 & & \\ \({}^{7}\)Li, \(M_{F}=+2\) & zero & 543.438 & 543.435 & 544.420 & 543.573 & 543.6(1) [65] \\ & pole & 737.716 & 737.717 & 737.949 & 736.334 & 737.69(12) [9] \\ & pole & 851.074 & 851.073 & 852.215 & 865.066 & 851.0(2) [41] \\ \({}^{23}\)Na, \(M_{F}=+2\) & zero & 851.083 & 851.083 & 852.225 & 865.076 & & \\ & pole & 905.149 & 905.147 & 905.159 & 917.777 & 905.1(4) [41] \\ & zero & 906.193 & 906.191 & 906.203 & 918.780 & & \\ & zero & 25.427 & 25.424 & 25.343 & 25.764 & & \\ & pole & 25.886 & 25.889 & 25.836 & 26.236 & 25.85(10) [66] \\ & zero & 350.374 & 350.364 & 350.492 & 350.720 & 350 [67], 350.4 [66] \\ \({}^{39}\)K, \(M_{F}=+2\) & pole & 402.461 & 402.462 & 402.338 & 402.558 & 403.4(7) [66] \\ & zero & 741.931 & 744.930 & 745.000 & 750.397 & & \\ & pole & 744.936 & 744.935 & 745.005 & 750.402 & & \\ & zero & 751.886 & 751.882 & 751.935 & 757.268 & & \\ & pole & 752.277 & 752.280 & 752.334 & 757.65 & 752.3(1) [66] \\ & pole & 12.661 & 12.661 & 13.009 & 14.557 & & \\ \({}^{40}\)K, \(M_{F}=-7\) & pole & 12.663 & 12.663 & 13.0132 & 14.558 & & \\ & pole & 224.222 & 224.222 & 223.758 & 223.909 & 224.2(1) [68] \\ & zero & 231.432 & 231.432 & 231.151 & 231.287 & 233.9(1) [68] \\ & zero & 850.572 & 850.571 & 847.973 & 868.110 & & \\ \({}^{85}\)Rb, \(M_{F}=+4\) & pole & 851.755 & 851.755 & 850.911 & 870.204 & 852.3(3) [54] \\ & zero & 1068.352 & 1068.352 & 1070.585 & 1087.537 & & \\ & pole & 1070.787 & 1070.787 & 1073.679 & 1092.366 & & \\ & pole & 406.883 & 406.883 & 400.758 & 446.639 & 406.2(3) [69] \\ & zero & 406.884 & 406.884 & 401.249 & 446.897 & & \\ & pole & 686.396 & 686.396 & 692.704 & 753.618 & 685.4(3) [69] \\ \({}^{87}\)Rb, \(M_{F}=+2\) & zero & 686.403 & 686.402 & 692.706 & 753.941 & \\ & pole & 911.651 & 911.651 & 933.705 & 1008.810 & 911.7(4) [69] \\ & zero & 911.652 & 911.652 & 933.707 & 1008.840 & & \\ & pole & 1007.71 & 1007.710 & 986.280 & 1046.440 & 1007.40(4) [70], 1007.3(4) [69] \\ & zero & 1007.91 & 1007.910 & 986.835 & 1046.780 & 1007.60(3) [70] \\ & pole & \(-8.654\) & \(-9.693\) & \(-56.018\) & 59.594 & \(-11.7\)[56] \\ & zero & 10.155 & 10.139 & 4.308 & 86.218 & 17.26(20)[71], 17.119(2)[72] \\ \({}^{133}\)Cs, \(M_{F}=+6\) & pole & 545.846 & 545.866 & 468.879 & 599.953 & 549 [57] \\ & zero & 551.406 & 551.410 & 538.332 & 616.613 & 553.73(2) [57] \\ & pole & 822.933 & 823.140 & 743.691 & 803.191 & 787[57] \\ & zero & 901.203 & 901.202 & 896.486 & 933.785 & & \\ \end{tabular} \end{table} Table 4: Features in the field-dependent \(s\)-wave scattering length for all alkali species considered here. Calculations are performed at a collision energy of \(1\mu\)K. Results are in gauss (G). our reported scattering lengths for \({}^{6}\)Li in Table 3 differ slightly from Ref. [48] because we have made additional adjustments to match the position for the narrow \(s\)-wave resonance near 543G to the experimental observation of Hazlett et al. [63]. For \({}^{6}\)Li we consider elastic collisions in the lowest channel with \(M_{F}=0\). 
Table 4 lists the zeroes and poles of the \(s\)-wave scattering length as determined by the coupled-channels (CC) calculation, MQDT, EDFT, and EIFT. The left graph in Fig. 5 plots \(a(B)\) for field values ranging from 0G to 1200G. There is a broad resonance near 832G and a narrow feature around 543G, which is shown more clearly in the inset. Both MQDT and EDFT come within 1mG of the CC calculation for this narrow resonance, while EIFT is off by 0.3G. However, both EDFT and EIFT slightly underestimate the location of the broad resonance at 832.18G, whereas MQDT almost exactly agrees with the CC calculation. Several groups have experimentally determined the resonance features of this collision [49; 59; 60; 61; 62; 63; 64]. Measurements made by Jochim et al. [61] and O'Hara et al. [60] in 2002 place the location of the first zero-crossing at \(530\pm 3\)G and \(528\pm 4\)G, respectively. In 2008, Du et al. more accurately determined the position to be \(527.5\pm 0.2\)G [59]. Our CC calculation agrees with the latter value, while MQDT and EDFT fall just outside the experimental uncertainty. The location of the narrow resonance has been measured by Refs. [74; 62]. However, to date, Hazlett et al. [63] have made the most precise measurement at 543.286(3) G. We pin our model for the potential curves so that this resonance position is reproduced by our CC calculations to better than 1mG. The results of MQDT and EDFT are nearly within the error bars of this observation. Using RF spectroscopy on weakly bound molecules, Bartenstein et al. [49] measured the position of the wide resonance to be 834.1(1.5)G, which our CC and MQDT calculations fall just short of. Scattering length calculations using the potentials of Refs. [39; 40] for the case of \({}^{7}\)Li show reasonable, but not perfect agreement with values reported in those papers. Refs. [39; 40] find \(a_{S}/a_{0}=34.22(9)\) and \(a_{T}/a_{0}=-27.80(2)\), while we find \(a_{S}/a_{0}=34.222\) and \(a_{T}/a_{0}=-27.891\). As with \({}^{6}\)Li, we replace the dispersion coefficients of Refs. [39; 40] with those of Ref. [47]. Having done so, the parameters \(V_{c}^{(S)}\) of Eq. (11) are adjusted to give the best agreement possible with experimental measurements of the scattering length node near 544G and the wide resonance near 738G. This yields scattering lengths comparable to those reported in Ref. [48]. The right plot in Fig. 5 shows the field-dependent scattering length for \({}^{7}\)Li elastic collisions in the ground state with \(M_{F}=2\). Just as in the \({}^{6}\)Li case, we find that MQDT is nearly in perfect agreement with the CC calculation, only underestimating the zero crossings in \(a(B)\) by a few mG. Both EDFT and EIFT do slightly worse, coming within 1G of the full coupled-channels calculation. Pollack et al. [65] observed that the scattering length passes through a zero crossing at \(B_{0}=543.6(1)\)G with a slope of \(\Delta a/\Delta B=0.08a_{0}/\)G. They fit the resonance peak to 736.8(2)G. Our best fit (performed manually) yields the positions shown in Table 4. Our CC and MQDT calculations are nearly within the error bars of this observation, and our calculation of the slope of \(a(B)\) is \(\Delta a/\Delta B=0.079a_{0}/\)G, in agreement with their observations. ### Sodium Early experiments [75; 76] reported two s-wave resonances for sodium atoms prepared in the \(|f,m_{f}\rangle=|1,1\rangle\) hyperfine state (i.e. ground state of the \(M_{F}=+2\) block), one near 853G and another narrower one near 907G.
The accuracy of those measurements was limited by magnetic field stability to about 20G. Later experiments [41] greatly improved the accuracy of these resonance positions, made additional measurements of higher partial wave resonances, and developed improved singlet and triplet potential energy functions of the Hannover form. Figure 5: (color online) The field-dependent scattering length for \({}^{6}\)Li collisions with \(M_{F}=0\) and \({}^{7}\)Li collisions in the ground state with \(M_{F}=2\) at collision energy \(1\mu\)K is shown. The inset on the \({}^{6}\)Li plot shows the narrow resonance at 543.286G. We adopt the sodium potential energy function developed in Ref. [41] without modification. Field values for zeroes and pole positions in the scattering length are tabulated in Table 4. In Fig. 6, we plot the scattering length from 820G to 930G, showing the two \(s\)-wave resonances in this range. We found no other \(s\)-wave resonances with width greater than approximately 1mG for fields less than 1200G, but there is another narrow resonance at very high field (not shown) near 2055G. Comparing the MQDT, EDFT, and EIFT with converged CC calculations reveals that while the EIFT is able to reproduce the qualitative features of \(a(B)\), the positions of resonances are consistently overestimated by about 14G. Improvements afforded by the EDFT are significant. The position of the narrow resonance near 851G is only overestimated by about 1G, while the position of the wide resonance near 905G is overestimated by only 10mG. Moreover, our CC and MQDT calculations are in agreement with Ref. [41], which reports values of 851.0(2)G and 905.1(4)G for the two resonances. ### Potassium We use the potassium potential energy functions of Ref. [42] without modification. For \({}^{39}\)K, we consider elastic collisions in the lowest channel with \(M_{F}=2\). The locations of poles and zero crossings in the \(s\)-wave scattering length are provided in Table 4, and Fig. 7(d) plots \(a(B)\) for fields ranging from 0G to 1200G. Figs. 7(a)-(c) show the three narrow resonances at 25.886G, 744.936G, and 752.277G in more detail. Comparing MQDT, EDFT, and EIFT with the full coupled channels calculation, we find that all three methods are able to reproduce the broad resonance near 402.461G and the narrow resonance at 25.886G (as determined by our CC calculations) to within 1G. However, EIFT fares worse for the two resonances at higher fields, overestimating the locations of poles and zeroes by about 5G while both MQDT and EDFT are within 0.1G of the CC calculation. Note that the sharp resonance in the EIFT calculation in Panel (c) of Fig. 7 is the same feature shown in Panel (b) for the other three calculations. D'Errico et al. [66] found resonances in a number of channels. In the \(M_{F}=2\) block, they measured resonances at 25.85(10)G, 403.4(7)G, and 752.3(1)G, which are nearly in agreement with our CC and MQDT calculations. However, they missed a predicted narrow resonance near 745G. Chapurin et al. [77] have recently made a precise measurement of a low-field resonance in \({}^{39}\)K in the \(M_{F}=-2\) block, finding a resonance position of 33.5820(14)G, which represents a significant improvement over an earlier measurement [8]. Our CC calculations using the unmodified potential functions of Ref. [42] yield a resonance position of 33.5780G. For \({}^{40}\)K, we consider elastic collisions in the lowest channel with \(M_{F}=-7\). The scattering length for magnetic fields between 0G and 800G is shown in Fig. 7(e).
There is a very narrow resonance near 12.66G (which is shown in greater detail in the top inset) and a broader feature near 224G (shown in the bottom inset). Analyzing our results, we find that MQDT and CC calculations exactly agree on the location of every pole and zero crossing in \(a(B)\). EDFT and EIFT are slightly less accurate, differing from the coupled-channels calculation by \(\sim 1\)G. Looking at available experimental data, Ref. [68] determined the position of the broad resonance to be \(224.21\pm 0.05\)G, with a width \(\Delta=9.7\pm 0.6\)G, which nearly agrees with our CC calculation. ### Rubidium We use the potential energy functions for rubidium developed by Strauss et al. [43] without modification. For both isotopes there are small, yet significant differences between the scattering lengths reported in Ref. [43] and those that we calculate using the same potential model. We have not been able to determine the source of this discrepancy, but the disagreement motivated us to perform further rigorous tests of our log-derivative propagator. The results of these tests are carried out in Appendix A. For \({}^{85}\)Rb, we consider elastic collisions in the lowest channel with total \(M_{F}=4\). Field values for the zeroes and poles in the scattering length are given in Table 4. Our results for MQDT, EIFT, and EDFT compared to the full coupled channels calculation (CC) are shown in Fig. 8 (a) where we plot the scattering length for fields ranging from 0G to 1200G. Coupled channels calculations reveal two broad resonances at 851.755G and 1070.9G which are shown more clearly in the insets. Figure 6: (color online) The scattering length (in units of \(a_{0}\)) for \({}^{23}\)Na collisions in the lowest channel with total \(M_{F}=2\) at threshold is shown as a function of magnetic field. We find that all methods are able to replicate the general properties of the scattering length, but MQDT is superior for predicting the positions of resonances and zero crossings, matching the CC results almost exactly. EDFT does slightly worse, coming within a few gauss of the CC results, while EIFT is the least accurate, routinely overestimating the locations of poles and zeroes by about 20G. In 2013, Blackley et al. [54] experimentally confirmed 17 Feshbach resonances in optically trapped \({}^{85}\)Rb. For the ground state channel, they report one \(s\)-wave Feshbach resonance at 852.3(3)G with a width \(\Delta>1\)G. Our CC and MQDT calculations fall just outside the uncertainty of this measurement. Figure 7: (color online) (d) The scattering length (in units of \(a_{0}\)) for \({}^{39}\)K in the lowest channel with total \(M_{F}=2\), with (a)-(c) showing the narrow resonances at 25.886G, 744.936G, and 752.277G, respectively. (e) The scattering length (in units of \(a_{0}\)) for \({}^{40}\)K collisions in the lowest channel with total \(M_{F}=-7\) at threshold is shown as a function of magnetic field. Insets in the \({}^{40}\)K plot show the resonances near 12G and 224G in greater detail. Figure 8: (color online) (a) The scattering length (in units of \(a_{0}\)) for \({}^{85}\)Rb collisions in the lowest channel with total \(M_{F}=4\) at threshold is shown as a function of magnetic field. (f) The scattering length (in units of \(a_{0}\)) for \({}^{87}\)Rb collisions in the lowest channel with total \(M_{F}=2\) at threshold is shown as a function of magnetic field. (b)-(e) zoom in on the four narrow resonance features of the \({}^{87}\)Rb collisions near 406G, 686G, 911G, and 1007G, respectively.
To the best of our knowledge, no experimental measurement of the high-field resonance near 1071G has appeared in the literature. For \({}^{87}\)Rb, we consider elastic collisions in the lowest channel with total \(M_{F}=2\). Fig. 8 (f) plots the scattering length for fields between 200G and 1200G and Figs. 8 (b)-(e) zoom in on each of the four resonance features. Similar to the \({}^{85}\)Rb case, we find that MQDT almost exactly reproduces the results of the coupled channels calculation, while EIFT overestimates the positions of poles and EDFT comes within 10G of the CC results. Turning to experimental data, in 2002, Marte et al. [69] observed more than 40 resonances in rubidium 87 for magnetic fields between 0.5G and 1260G for various spin mixtures in the lower hyperfine ground state to an accuracy of 30mG. For the ground state entrance channel, they report \(s\)-wave Feshbach resonances at 406.2(3)G, 685.4(3)G, 911.7(4)G, and 1007.3(4)G. A more recent study conducted by Ref. [70] places the high field resonance at 1007.40(4)G and measures a zero crossing in the scattering length at 1007.60(3)G. We find that the values predicted by our CC and MQDT calculations are nearly within the experimental uncertainty of both Refs. [69] and [70]. ### Cesium For cesium, we use the MLR potentials of Ref. [44], but with modifications as discussed in Section II.2. We modify the long-range behavior of the potential to more rapidly converge to the functional form of \(V_{\rm LR}(R)\) in Eq. 5 by using a switching function \(f(R)\) that vanishes for \(R\lesssim R_{\rm LR}-\delta R\) and goes to unity for \(R\gtrsim R_{\rm LR}+\delta R\), \[V^{(S)}(R)=V^{(\rm MLR)}_{S}(R)(1-f(R))+f(R)V_{\rm LR}(R), \tag{45}\] where the switching function is \[f(R)=\frac{1}{2}\left(\tanh\left(\frac{R-R_{\rm LR}}{\delta R}\right)+1\right). \tag{46}\] We choose \(\delta R=0.5a_{0}\) and \(R_{\rm LR}=38a_{0}\) to ensure that when the boundary condition Eq. (21) determining \(\mathbf{K}_{\rm sr}\) is applied at \(R_{f}=40a_{0}\), the reference functions \(\hat{f}(R)\) and \(\hat{g}(R)\) are valid solutions to the Schrodinger equation in each channel. Without the switching function, there is little hope of finding agreement between the CC and MQDT calculations for this particular MLR potential. Choosing a smaller \(R_{\rm LR}\) gives better agreement between the CC and MQDT calculations, but also dramatically changes the values of \(a_{S}\), \(a_{T}\), the background scattering length, and the resonance positions--so much so that tuning the parameters \(V^{(S)}_{c}\) in order to bring \(a_{S}\) and \(a_{T}\) back in line with accepted values becomes difficult. If better agreement with CC calculations is desired, either a more detailed re-parameterization of the potential model is required, or a different model should be used. The switching function turns out to be unnecessary for the case of lithium, despite the fact that the lithium MLR potentials exhibit similar slow convergence to the form of \(V_{\rm LR}\) (see Fig. 3). We speculate that this is likely because the lower reduced mass of the lithium dimer leads to a correspondingly slower phase accumulation in the asymptotic region. As with lithium, we adjust the short-range behavior of the MLR potentials by adding a quadratic term given by Eq. (11). We first adjust the parameters \(V^{(S)}_{c}\) to reproduce the scattering lengths reported in Ref. [56], then make further adjustments to best reproduce the positions of the three \(s\)-wave resonances reported in Ref. [57].
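The switching construction of Eqs. (45)-(46) is straightforward to implement. A minimal sketch follows, with a toy long-range curve standing in for the actual fitted MLR potential; only the dispersion coefficients echo (rounded) cesium values from Table 2.

```python
import numpy as np

R_LR, dR = 38.0, 0.5                  # switching radius and width of Eq. (46), in bohr
C6, C8, C10 = 6.9e3, 1.0e6, 1.6e8     # cesium-like dispersion coefficients (Table 2, rounded)

def V_LR(R):
    return -C6 / R**6 - C8 / R**8 - C10 / R**10             # Eq. (5)

def V_MLR_toy(R):
    # stand-in for the fitted MLR curve: correct leading dispersion,
    # but converging only slowly to V_LR (illustrative only)
    return V_LR(R) * (1.0 + 5.0 / R)

def f_switch(R):
    return 0.5 * (np.tanh((R - R_LR) / dR) + 1.0)            # Eq. (46)

def V_switched(R):
    f = f_switch(R)
    return V_MLR_toy(R) * (1.0 - f) + f * V_LR(R)            # Eq. (45)

for R in (34.0, 36.0, 38.0, 40.0, 42.0):
    rel = abs(V_switched(R) - V_LR(R)) / abs(V_LR(R))
    print(f"R = {R:4.1f} a0   f = {f_switch(R):.3f}   |V - V_LR|/|V_LR| = {rel:.2e}")
```

Beyond \(R\approx R_{\rm LR}+2\delta R\) the switched potential is indistinguishable from \(V_{\rm LR}\), so the reference functions \(\hat{f}\) and \(\hat{g}\) are exact solutions there, which is the property the MQDT boundary condition requires.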
It is not possible to reproduce all three resonance positions by tuning only \(V^{(S)}_{c}\), and a full re-parameterization of the potential is beyond the scope of this work. Field values for zeroes and pole positions in the scattering length are listed in Table 4. In Fig. 9, we plot the scattering length, showing three \(s\)-wave resonances for magnetic fields ranging from \(-50\)G to 1100G. Comparing MQDT, EDFT, and EIFT to the converged CC calculations shows that while all three methods are able to reproduce the qualitative features of \(a(B)\), MQDT by far is the most successful at replicating the locations of resonances and zero crossings, agreeing to within 1G. Conversely, EIFT overshoots the resonances near \(-10\)G and 545G by about 50G, and underestimates the resonance near 820G by 20G. EDFT does even worse, undershooting the three resonances by about 80G. However, EDFT does slightly better at predicting the locations of zero crossings, matching the CC calculations to within 10G. The low-field (i.e., \(B\lesssim 250\)G) resonances of cesium atoms have been studied by several groups [56; 57; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80]. In 1999, Vuletic et al. [78] observed a low-field resonance in the total \(M_{F}=6\) block. They found a zero and a pole at the following positions: 17.0(2)G and 30(3)G. Subsequently, Refs. [79; 72] reported values of \((17.064\pm 0.056)\)G and 17.119(2)G, respectively, for the position of the zero-crossing in the scattering length. Figure 9: (color online) The scattering length (in units of \(a_{0}\)) for \({}^{133}\)Cs collisions in the lowest channel with total \(M_{F}=6\) at threshold is shown as a function of magnetic field. More recently, Ref. [71] finds the zero-crossing to be at \(17.26(20)\)G. Our CC and MQDT calculations differ from this latest experimental value by \(\sim 7\)G. The discrepancy may be improved by employing interaction potentials such as the M2012 model of Ref. [57]. The zero at \(17\)G has been used [72; 83] to prepare a Bose-Einstein condensate (BEC) of cesium atoms in the ground state. This feature is associated with a broad Feshbach resonance near \(-11\)G. Physically, a resonance at \(-|B|\) corresponds to one at \(|B|\) with the spin projections of each atom reversed in sign [1]. In this case, the negative resonance at \(-11.7\)G in the \(M_{F}=+6\) block corresponds to a positive resonance at \(11.7\)G in the \(M_{F}=-6\) block, which has been measured by Ref. [56]. Other theoretical models predict a location of \(-11.1(6)\)G [84] or \(-12\)G [57] for this low-field \(s\)-wave resonance. Comparing our results to these values, we find that both the coupled-channels calculation and MQDT overestimate this resonance position by \(\sim 3\)G. Berninger et al. [57] have explored the high-field physics of ultracold cesium collisions. Using trap-loss spectroscopy, they observed two broad loss features around \(549\)G and \(787\)G, which correspond to \(s\)-wave resonances, and a zero crossing in the scattering length at \(553.73(2)\)G. Again, we see a discrepancy between the available experimental data and our calculations. The coupled-channels calculation and MQDT underestimate the first resonance position and the zero-crossing by a few gauss, and overestimate the latter resonance position by almost \(50\)G. This is a shortcoming of the MLR potential developed in Ref.
[44] for cesium, and we expect significantly better agreement in future calculations using improved potential models such as the M2012 potential of Ref. [57], which was specifically developed to describe experimental data at _both_ low and high fields. ## V Concluding discussion The accuracy of the EIFT, EDFT and MQDT calculations depends on a number of factors that we will now attempt to untangle. In Panel (a) of Fig. 10, we show the mean absolute error (in gauss) of magnetic Feshbach resonance positions for each atomic species. The error is defined for each of the resonance (pole) positions in Table 4 simply as \[\delta B=\left|B_{\rm pole}^{\rm type}-B_{\rm pole}^{\rm CC}\right|, \tag{47}\] where "type" stands for any of the MQDT, EDFT or EIFT calculations, and CC stands for the coupled channels calculation, which we have ensured is fully converged. We have taken care to compute a higher density of points in the vicinity of resonance poles and zeros of the scattering length. An interpolating function is used to identify the zeroes of \(1/a(B)\) as pole locations, accelerating the convergence of the CC calculations in particular when searching for these features. Let us first consider the elements of the MQDT calculation that may limit its accuracy. First, and likely the most significant contributor to error, is the fact that the MLR potentials themselves converge rather slowly to their asymptotic form, as illustrated in Fig. 3. Therefore, the reference functions \(\hat{f}(R)\) and \(\hat{g}(R)\), which are solutions to the Schrodinger equation in a potential \(V_{\rm LR}\), are not perfect solutions to the Schrodinger equation in \(V^{\rm MLR}\). Even slight differences in the long-range potentials can lead to a substantial difference in the resonance position. Secondly, the reference functions themselves are computed numerically and any error in their computation is inherited by \({\bf K}_{\rm sr}\). The MQDT calculations (red circles) are typically several orders of magnitude more accurate than either of the frame transformation calculations, but MQDT performs most poorly for \({}^{6}\)Li and \({}^{133}\)Cs. We believe that this is primarily caused by the slow convergence of the MLR potentials to the asymptotic form \(V_{\rm LR}\) of Eq. 5, as shown in Fig. 3. Without the switching function Eq. 46, the MQDT calculation for \({}^{133}\)Cs is significantly poorer. Likewise, without extending the matching radius \(R_{m}\) out to about \(55a_{0}\) for lithium, as discussed in subsection II.2, the performance of MQDT is significantly worse than what is shown. For further improvements, we recommend using a different potential energy model with faster convergence to \(V_{\rm LR}\). The frame transformation calculations rely upon the singlet and triplet quantum defects \(\mu_{S}\) which are plotted in Fig. 4. Figure 10: (color online) Panel (a) shows the average absolute error (in gauss) of resonance positions for each atomic species. Data for MQDT (red circles), EDFT (green triangles) and EIFT (blue diamonds) are all shown on a log scale. Panel (b) shows the variation of the singlet and triplet quantum defects over the necessary range of energy required for the energy-dependent frame transformation calculation. Panel (b) of Fig. 10 shows the overall variation of the quantum defects over the total energy range required for the EDFT calculation, as prescribed by Eq. (44).
The first feature to note is that as a rule, the energy dependence of the triplet quantum defects is greater than that of the singlet defects. This is sensible since the separation of energy and length scales is more robust for the comparatively deep singlet channel, leading to weaker energy dependence in \(\mu_{0}(E)\) compared to \(\mu_{1}(E)\). The second feature to note is that the performance of the frame transformation calculations is strongly correlated to the energy dependence of the quantum defects themselves. In general, the heavier the species, the greater the sensitivity to energy displayed by the quantum defects. This is because the hyperfine-Zeeman splitting increases with atomic mass. The one exception to this trend is \({}^{40}\)K, in which there are only three collision thresholds with total \(M_{F}=-7\), and the range of energies over which one must evaluate the quantum defects is considerably smaller. Only the MQDT calculation is able to reliably reproduce the position of every resonance pole to less than the width of the resonance. See, for example, Panels (a)-(d) of Fig. 8 showing the individual \(s\)-wave resonances in \({}^{87}\)Rb. The EDFT provides a significant improvement over EIFT in all cases, except for cesium. For example, see the wide resonance in \({}^{23}\)Na near 915G shown in Fig. 6, the resonances near 26G and 752G in \({}^{39}\)K shown in Fig. 7, or even the two resonances shown in the insets of Fig. 8(a) for \({}^{85}\)Rb. To conclude, we have conducted a comprehensive study of ultracold homonuclear collisions for eight alkali species, applying three variations of multichannel quantum defect theory that differ in how they characterize the short-ranged K-matrix, \(\mathbf{K}_{\mathrm{sr}}\). We have attempted to untangle various sources of error, both among the calculations themselves, and with experiment. We have quantitatively demonstrated how the frame transformation calculations become rather unreliable for the heavier species with large hyperfine-Zeeman splittings, while MQDT remains robust provided that the singlet and triplet potentials converge sufficiently quickly to the long-range form of \(V_{\mathrm{LR}}\). We hope to perform calculations in the future that extend this work to higher partial waves and include the weak magnetic dipole-dipole coupling. A still more comprehensive study of inelastic processes is also within reach. ###### Acknowledgements. We thank Chris H. Greene for guidance in the early stages of this work. ## Appendix A Numerical testing of the log-derivative propagator Numerical discrepancies between (some of) our calculated singlet and triplet scattering lengths and those reported in the literature, particularly for rubidium, spurred us to conduct further testing of our computer code. Here, we present calculations using the two-channel test model of Ref. [85]. The authors of Ref. [85] compare three robust methods commonly used for solving coupled channels problems: (1) the integral equation method (IEM) [86], (2) the finite element [87; 58] eigenchannel R-matrix [88] propagator, and (3) the Gordon algorithm [89]. Of these three methods, the greatest stability is achieved by the IEM, which--when used with a perturbative long-range correction--gives the scattering length to 11 significant figures \(a(\infty)=851.98171574\). We directly compare our calculation of the scattering length, \(a\), to \(a(\infty)\). and plot \((\delta a)/a=(a-a(\infty))/a(\infty)\) as a function of step size \(h/a_{0}\). 
Our implementation of Johnson's log-derivative propagator [26] uses Richardson extrapolation with step doubling [90], which greatly improves the convergence scaling with step size from \(h^{4}\) to \(h^{6}\). We therefore present two sets of calculations in Fig. 11: one with Richardson extrapolation (solid black curves) and one without (red dashed curves). The thick red dashed line indicates \(h^{4}\) scaling, while the thick black solid line indicates \(h^{6}\) scaling. Calculations for various values of \(R_{f}\) (where the matching to Bessel functions is made) are shown. These results clearly demonstrate the improved scaling, the dependence on \(R_{f}\), and the dependence on step size \(h/a_{0}\). We typically use \(N=10^{7}\) integration steps and integrate out to \(R_{f}=20\beta\lesssim 4000a_{0}\) in all our log-derivative calculations. Calculations including higher partial waves will require a larger matching radius. Based on the results shown in Fig. 11, we expect about 6 significant figures in the scattering length. Figure 11: (color online) The relative error \((\delta a)/a\) in the scattering length for the two-channel test model of Ref. [85] is shown as a function of step size \(h/a_{0}\), with (solid black curves) and without (red dashed curves) Richardson extrapolation, for several values of the matching radius \(R_{f}\).
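The effect of the Richardson step-doubling combination quoted above can be illustrated with a small, self-contained sketch. It does not reproduce the actual log-derivative propagator; instead, a toy observable with a purely even error expansion in the step size \(h\) (an assumption made only for this demonstration) stands in for the propagated scattering length, showing how the \(h\) and \(h/2\) results combine to cancel the leading \(h^{4}\) term and leave \(h^{6}\) scaling.

```python
# Illustrative sketch of Richardson extrapolation with step doubling.
import numpy as np

A_EXACT = 851.98171574          # stands in for a(infinity) of the test model

def observable(h, c4=3.0e-2, c6=-5.0e-3):
    """Toy propagator result with error expansion A + c4*h**4 + c6*h**6."""
    return A_EXACT + c4 * h**4 + c6 * h**6

def richardson(h, p=4):
    """Step-doubling Richardson combination of two order-p results."""
    return (2**p * observable(h / 2) - observable(h)) / (2**p - 1)

for h in [0.4, 0.2, 0.1, 0.05]:
    err_plain = abs(observable(h) - A_EXACT) / A_EXACT
    err_rich = abs(richardson(h) - A_EXACT) / A_EXACT
    print(f"h = {h:5.2f}   plain: {err_plain:.3e}   extrapolated: {err_rich:.3e}")
# Halving h reduces the plain error by ~2**4 and the extrapolated error by ~2**6,
# mirroring the h^4 -> h^6 improvement described in the text.
```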
2306.17677
Dunkl symplectic algebra in generalized Calogero models
We study the properties of the symplectic sp(2N) algebra deformed by means of the Dunkl operators, which describe the dynamical symmetry of the generalized N-particle Calogero model. It contains a symmetry subalgebra formed by the deformed unitary generators as well as the (nondeformed) sl(2,R) conformal subalgebra. An explicit relation among the deformed symplectic generators is derived. Based on matching between the Casimir elements of the conformal spin and Dunkl angular momentum algebras, the wavefunctions of both the standard and generalized Calogero models, being expressed in terms of the deformed spherical harmonics, are classified according to infinite-dimensional lowest-state sl(2,R) multiplets. Meanwhile, any polynomial integral of motion of the (generalized) Calogero-Moser model generates a finite-dimensional highest-state conformal multiplet with descendants expressed via the Weyl-ordered product in quantum field theory.
Tigran Hakobyan
2023-06-30T14:07:45Z
http://arxiv.org/abs/2306.17677v1
# Dunkl symplectic algebra in generalized Calogero models ###### Abstract We study the properties of the symplectic \(sp(2N)\) algebra deformed by means of the Dunkl operators, which describe the dynamical symmetry of the generalized \(N\)-particle Calogero model. It contains a symmetry subalgebra formed by the deformed unitary generators as well as the (nondeformed) \(sl(2,R)\) conformal subalgebra. An explicit relation among the deformed symplectic generators is derived. Based on matching between the Casimir elements of the conformal spin and Dunkl angular momentum algebras, the wavefunctions of both the standard and generalized Calogero models, being expressed in terms of the deformed spherical harmonics, are classified according to infinite-dimensional lowest-state \(sl(2,R)\) multiplets. Meanwhile, any polynomial integral of motion of the (generalized) Calogero-Moser model generates a finite-dimensional highest-state conformal multiplet with descendants expressed via the Weyl-ordered product in quantum field theory. ## I Introduction Most many-body systems with particle interactions do not possess an exact solution in either classical or quantum mechanics and are amenable only to numerical simulations and analytical approximations. However, in one dimension there is a remarkable family of interacting \(N\)-particle systems based on the Calogero model, which are integrable in the Liouville sense. The latter describes one-dimensional identical particles with pairwise inverse-square interaction [1; 2]: \[H=\frac{1}{2}\sum_{i=1}^{N}(p_{i}^{2}+x_{i}^{2})+\sum_{i<j}\frac{g(g\mp 1)}{(x_{i}-x_{j})^{2}}. \tag{1}\] The presence or absence of the confining harmonic potential (as well as any other potential depending on the radial coordinate only) does not affect the integrability. The upper and lower signs in the potential coupling constant correspond to bosons and fermions, respectively. The particle mass, frequency, and Planck's constant are set to unity in the above Hamiltonian. More general inverse-square potentials based on finite reflection groups, trigonometric functions [3], spin exchanges, etc., are also integrable. (See the reviews [4; 5; 6; 7] and references therein.) The family of conservation laws in such systems is much more diverse and rich. In particular, apart from the Liouville integrals, the Calogero systems with rational potentials possess \(N-1\) additional constants of motion, i.e., they are maximally superintegrable [8; 9; 10]. This property results in a huge degeneracy of the energy levels and drastically simplifies the solution. The (super)integrability is robust under certain external potentials. It persists for the related angular models, as well as for systems defined on surfaces with constant curvature [11; 12; 13; 14]. Another closely related and intriguing property of the described systems is the elimination of the inverse-square potential by a nonlocal unitary transformation, which makes them equivalent to the related noninteracting counterparts [15; 16]. In the quantum case, the equivalence becomes even more transparent when the particle interaction term is packed into a nonlocal covariant derivative with exchange operators introduced by Dunkl [17]. A generalized interaction, obtained in this way, contains the particle exchanges and reduces to the original Calogero potential for identical particles [18; 19]. 
Moreover, the quantum systems with even more common particle exchange interactions can be solved in the thermodynamic limit as has been discovered recently [20]. The described correspondence between the noninteracting and interacting models spreads further to their symmetries. In particular, the Dunkl deformations of the momentum and angular momentum are the invariants of the unbound Calogero Hamiltonian with particle exchanges, which argues the superintegrability at the level of the Dunkl operator. The Dunkl angular momentum algebra describes also the complete symmetries of the generalized angular Calogero model [14]. The same deformation applied to the symmetry generators of the \(N\)-dimensional isotropic oscillator results in the Dunkl-deformed version of the related unitary Lie algebra \(u(N)\) which describes the symmetry of the generalized Calogero model [14]. In case of the confining Coulomb potential, the Dunkl deformation of the \(so(N+1)\) algebra is obtained [21]. The latter gathers all symmetries of the generalized Calogero-Coulomb model and consists of the Dunkl analogs of the Runge-Lenz vector and angular momentum tensor [12]. The Dunkl deformation procedure retains or barely changes many important properties of the original symmetries. Particularly, the commutation and algebraic relations among the generators, the Casimir elements and the action on the wavefunctions look quite close to their nondeformed analogs. Meanwhile, the deformed generators do not form a Lie algebra based on the commutations, but constitute a quadratic algebra together with the exchange operators. The described mapping is continued further to the spectrum generating algebras, or dynamical symmetries. For the \(N\)-dimensional isotropic oscillator, the dynamical symmetry may be described by the semidirect product Weyl and symplectic groups, \(W_{N}\rtimes SP(2N)\), which are generated, respectively, by the linear and bilinear combinations of the creation-annihilation operators [22]. In the current article, we study the Dunkl deformation of the symplectic Lie algebra paying special attention to the derivation of explicit commutation relations among its generators. It consists of the deformed unitary subalgebra which combines the symmetries of the generalized Calogero Hamiltonian as well as the spectrum generating part, which maps in-between different energy levels. The Dunkl \(sp(2N)\) algebra involves also the standard \(sl(2,R)\) conformal subalgebra. Due to the actual coincidence of the quadratic invariants of both the conformal spin and the Dunkl angular momentum, the eigenfunctions of the generalized Calogero Hamiltonian in spherical coordinates (expressed via the deformed spherical harmonics) unite in the lowest-weight \(sl(2,R)\) representations. Then the second quantum number in the wavefunction varies within an individual multiplet and describes the conformal spin's projection while the remaining ones parameterize a representation. Meanwhile, for the (generalized) Calogero-Moser model, any integral of motion generates a (finite-dimensional) highest weight \(sl(2,R)\) multiplet with the descendants expressed in terms of the Weyl-ordered product in quantum field theory. The highest states in a product of two or more such multiplets produce additional integrals of motion which are responsible for the superintegrability. The article is organized as follows. In Sect. II, the extended version of the Calogero model based on the Dunkl operators is we briefly reviewed. Sect. 
III is devoted to the Dunkl analog of the symplectic group generators. Together with the Dunkl creation-annihilation operators, they generate a Dunkl deformation of the \(w_{N}\rtimes sp(2N)\) algebra. In Sect. IV, the behaviour of the eigenfunctions of the generalized Calogero system under the conformal group is studied using the Dunkl spherical harmonics. Sect. V reveals the conformal algebra structure of the Liouville integrals of motion of the Calogero-Moser system. Using the momentum sum rules and Weyl ordering, the additional integrals are constructed. The results are summarized in the Conclusion. For completeness, we provide the definitions of the operator ordering and deformed spherical harmonics and related details in the Appendices. ## II Generalized Calogero model ### Hamiltonian and Dunkl operators The solution and integrals of motion of the quantum Calogero system (1) may be formulated in an elegant way after a slight but nontrivial modification. The expanded version involves the particle exchanges in the potential [18; 19], \[H=\frac{1}{2}\sum_{i=1}^{N}(p_{i}^{2}+x_{i}^{2})+\sum_{i<j}\frac{g(g-s_{ij})}{(x_{i}-x_{j})^{2}}. \tag{2}\] Here \(g>0\) is an attractive coupling constant, and the operator \(s_{ij}\) permutes the \(i\)-th and \(j\)-th particles while leaving the others untouched. For identical bosons or fermions, the eigenfunctions are symmetric or antisymmetric, respectively. Therefore, \(s_{ij}=\pm 1\), and the above Hamiltonian reduces to the original Calogero model (1). Moreover, the inverse-square potential may be absorbed by a covariant momentum, so that the above Hamiltonian can be rewritten as follows: \[H=\frac{1}{2}\sum_{i=1}^{N}(\pi_{i}^{2}+x_{i}^{2}),\qquad\pi_{i}=-\imath\nabla_{i}. \tag{3}\] The covariant momentum \(\pi_{i}\) is not local and depends on the particle permutations. It is defined by means of the Dunkl operator [17] \[\nabla_{i}=\partial_{i}-\sum_{j\neq i}\frac{g}{x_{i}-x_{j}}s_{ij}. \tag{4}\] The system (2), (3) is known as the generalized Calogero model, often also called a Dunkl oscillator. The components of the Dunkl momentum are mutually commutative, so that the related connection is flat. Meanwhile, the commutation relations with the coordinates are more involved in comparison with the standard canonical commutation relation: \[[\nabla_{i},\nabla_{j}]=0,\qquad[\nabla_{i},x_{j}]=S_{ij}. \tag{5}\] The last equation contains a matrix composed of the particle-exchange operators, \[S_{ij}=\begin{cases}-gs_{ij}&i\neq j,\\ 1+g\sum_{k\neq i}s_{ik}&i=j.\end{cases} \tag{6}\] In the free-particle limit (\(g\to 0\)), it reduces to the Kronecker delta: \(S_{ij}\rightarrow\delta_{ij}\), so that the algebra (5), (6) is often referred to in the literature as an \(S_{N}\)-extended Heisenberg algebra. ### Creation-annihilation operators The generalized Calogero Hamiltonian (3) is reminiscent of the isotropic oscillator and, hence, is amenable to simple but powerful tools for the study of the spectrum, integrals of motion, etc. In particular, the Hamiltonian may be expressed in terms of the Dunkl analogs of the energy-level lowering and rising operators [18; 19], \[H=\sum_{i=1}^{N}a_{i}^{+}a_{i}+\frac{1}{2}N-S, \tag{7}\] \[a_{i}^{\pm}=\frac{1}{\sqrt{2}}(x_{i}\mp\nabla_{i}). \tag{8}\] They play the same role as the particle creation-annihilation operators in the field-theoretical treatment. The last element in the Hamiltonian (7) is the permutation-invariant sum over all pairwise exchanges [14]: \[S=\sum_{i<j}S_{ij},\qquad[S,S_{ij}]=0. 
\tag{9}\] It disappears in the free-particle limit. The staircase operators obey relations similar to those of the coordinate and Dunkl derivative: \[[a_{i},a_{j}^{+}]=S_{ij},\qquad[a_{i},a_{j}]=[a_{i}^{+},a_{j}^{+}]=0. \tag{10}\] They form a Dunkl version of the Weyl algebra \(W_{N}\). The standard spectrum generating commutation rules hold for the Calogero Hamiltonian with particle exchanges: \[[H,a_{i}^{\pm}]=\pm a_{i}^{\pm}. \tag{11}\] The total Dunkl momentum does not contain any particle exchange and, hence, reduces to the usual momentum. So, the center-of-mass creation-annihilation operators obey the standard one-particle Weyl algebra commutation relation, \[A^{\pm}=\frac{1}{\sqrt{N}}\sum_{i}a_{i}^{\pm},\qquad[A,A^{+}]=1. \tag{12}\] ### Energy spectrum and eigenstates As for the isotropic oscillator, the energy levels and eigenstates of the generalized Calogero model are constructed in an algebraic way. The lowering operators annihilate the ground state: \(a_{i}|0\rangle=0\). The solution is provided by the following wavefunction (with the normalization constant omitted) [19]: \[|0\rangle=e^{-\frac{1}{2}r^{2}}\prod_{i<j}|x_{i}-x_{j}|^{g} \tag{13}\] with the radial coordinate defined by \(r^{2}=\sum_{i}x_{i}^{2}\). The lowest energy is given by the following expression: \[E_{0}=\frac{1}{2}gN(N-1)+\frac{1}{2}N. \tag{14}\] Apart from the standard oscillator energy, which contributes one-half per degree of freedom, it also contains the interaction part. The latter assigns the value \(g\) to each particle pair. According to Eq. (11), the excitations above the lowest state are generated by monomials in the rising operators, so that the spectrum is equidistant and highly degenerate, as in the oscillator case: \[|\mathbf{n}\rangle=a_{1}^{+n_{1}}\dots a_{N}^{+n_{N}}|0\rangle,\qquad E_{\mathbf{n}}=E_{0}+\sum_{l=1}^{N}n_{l} \tag{15}\] with the normalization constant omitted. Note that the particle interaction just shifts all levels by the same value, given by the \(g\)-term in the ground-state energy (14). Generally, the above states describe distinguishable particles since they vary under particle exchanges. In order to obtain identical bosons (fermions) from them, an additional (anti)symmetrization procedure is required. Then the generalized Calogero model (2) reduces to the original Hamiltonian (1). Alternatively, one can apply (anti)symmetric polynomials in the rising operators to the ground state. For instance, by means of the power sums, \[A_{l}^{\pm}=\sum_{i=1}^{N}a_{i}^{\pm l}, \tag{16}\] the related (unnormalized) bosonic eigenstates are labeled by \(N\) non-negative integers, \(\mathbf{k}=\{k_{1},\dots,k_{N}\}\), and given by the formula [13; 1] \[|\mathbf{k}\rangle_{\text{sym}}=A_{1}^{+k_{1}}A_{2}^{+k_{2}}\dots A_{N}^{+k_{N}}|0\rangle. \tag{17}\] The corresponding energy levels are highly degenerate and given by the following expression: \[E_{\mathbf{k}}=E_{0}+m_{\mathbf{k}}\qquad\text{with}\qquad m_{\mathbf{k}}=\sum_{l=1}^{N}lk_{l} \tag{18}\] being the order of the symmetric polynomial in the \(a_{i}^{+}\) operators. Note that the above eigenfunctions may also be obtained using the quantum Lax operator algebra [23]. The first quantum number \(k_{1}\) describes the center-of-mass energy of the system. The related symmetrized operator (16) is proportional to the Weyl algebra generator (12): \(A_{1}^{\pm}=\sqrt{N}A^{\pm}\). 
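The exchange-operator relations quoted above are easy to verify symbolically for a small number of particles. The sketch below is a minimal illustration using sympy, with the exchange \(s_{12}\) realized as the swap \(x_{1}\leftrightarrow x_{2}\) and the ground state restricted to the region \(x_{1}>x_{2}\), where \(|x_{1}-x_{2}|^{g}=(x_{1}-x_{2})^{g}\) and \(s_{12}\) leaves it invariant; it checks the deformed Heisenberg algebra (5), (6) for \(N=2\) and the annihilation of the ground state (13) by the lowering operator (8).

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
g = sp.symbols('g', positive=True)

def swap(f):                              # particle exchange s_12
    return f.subs([(x1, x2), (x2, x1)], simultaneous=True)

def nabla1(f):                            # Dunkl operator (4) for N = 2, i = 1
    return sp.diff(f, x1) - g/(x1 - x2)*swap(f)

# --- S_N-extended Heisenberg algebra (5), (6) on an arbitrary test polynomial
f = x1**3*x2 + 2*x1*x2**2
comm11 = nabla1(x1*f) - x1*nabla1(f)      # [nabla_1, x_1] f
comm12 = nabla1(x2*f) - x2*nabla1(f)      # [nabla_1, x_2] f
print(sp.simplify(comm11 - (f + g*swap(f))))   # 0, i.e. S_11 = 1 + g s_12
print(sp.simplify(comm12 + g*swap(f)))         # 0, i.e. S_12 = -g s_12

# --- a_1 = (x_1 + nabla_1)/sqrt(2) annihilates the ground state (13).
# On x1 > x2 the ground state is psi0 = exp(-r^2/2)*(x1 - x2)**g and
# s_12 psi0 = psi0, so the exchange term of nabla_1 is just -g/(x1-x2)*psi0.
psi0 = sp.exp(-(x1**2 + x2**2)/2) * (x1 - x2)**g
nabla1_psi0 = sp.diff(psi0, x1) - g/(x1 - x2)*psi0
print(sp.simplify(x1*psi0 + nabla1_psi0))      # 0
```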
The annihilation operator reduces the energy level by one, and its action on the energy eigenstates can be easily calculated: \[A_{1}^{-}|\mathbf{k}\rangle_{\text{sym}}=\sum_{l=1}^{N}lk_{l}|\mathbf{k}-\mathbf{e}_{l}+\mathbf{e}_{l-1}\rangle_{\text{sym}}. \tag{19}\] Here \(\mathbf{e}_{l}\) is the \(l\)-th standard unit vector in \(N\)-dimensional space with the components \(\delta_{l}^{i}\) for \(1\leq l\leq N\), and \(\mathbf{e}_{0}=0\). The states with a negative coefficient \(k_{i}\) must be eliminated from the above sum. ## III Dunkl deformed symplectic algebra ### Dunkl deformation of \(sp(2N)\) generators Let us supplement the staircase operators (8) with their bilinear combinations, \[E_{ij}=a_{i}^{+}a_{j},\qquad F_{ij}=a_{i}a_{j},\qquad F_{ij}^{+}=a_{i}^{+}a_{j}^{+}. \tag{20}\] Clearly, among them only \(N(2N+1)\) are different. These elements form a Dunkl deformation of the symplectic Lie algebra \(sp(2N)\). Together with the operators \(a_{i}^{\pm}\), they form a Dunkl deformation of the dynamical symmetry algebra of the generalized Calogero model (2). The latter generalizes the dynamical symmetry group \(W_{N}\rtimes SP(2N)\) of the isotropic harmonic oscillator [22]. When considering the original Calogero model (1) instead of the generalized one (2), the symmetric polynomials in the elements \(E_{ij}\) and \(F_{ij}^{\pm}\) are taken, since the first system usually deals with identical particles (bosons or fermions). Looking ahead, the spectrum generating algebra for the first system is formed by the independent polynomials from the set \(\sum_{i=1}^{N}a_{i}^{+k}a_{i}^{l}\) (like the Hamiltonian itself and the conformal generators considered below) with rather complicated commutation relations among them [24]. ### Deformed unitary symmetry subalgebra The first set of generators in (20) is formed by the \(N^{2}\) elements \(E_{ij}\) and constitutes the symmetry algebra of the generalized Calogero Hamiltonian [25; 14]. Each of them transposes its indices under Hermitian conjugation, \[[H,E_{ij}]=0,\qquad E_{ij}^{+}=E_{ji}.\] They form the Dunkl-operator deformation of the standard \(gl(N)\) basis and obey the following commutation rules [14], \[[E_{ij},E_{kl}]=E_{il}S_{jk}-S_{il}E_{kj}+[S_{kl},E_{ij}]. \tag{21}\] The symmetry operators are closed under commutation with the Dunkl staircase operators. The explicit relation is given by \[[E_{ij},a_{k}^{+}]=a_{i}^{+}S_{jk} \tag{22}\] and its Hermitian conjugate. They split into antisymmetric and symmetric parts, yielding the Dunkl angular momentum and Fradkin tensors, respectively [26; 14; 27; 9]. Both of them are Hermitian and represent the Dunkl analog of the \(U(N)\) generators, \[L_{ij}=\imath(E_{ij}-E_{ji}),\qquad I_{ij}=E_{ij}+E_{ji}. \tag{23}\] In the absence of the interaction part (\(g=0\)), they generate the unitary symmetry of the isotropic oscillator. The Dunkl angular momentum components are closed under the commutation with coefficients depending on the particle exchanges: \[[L_{ij},L_{kl}]=\imath(L_{il}S_{jk}+L_{jk}S_{il}-L_{ik}S_{jl}+L_{jl}S_{ik}).\] The Casimir element, which commutes with the above generators and modifies the usual angular momentum square, yields the angular part of the Calogero model (see also Eq. 
(45) below) [14], \[H_{\Omega}=L^{2}+S(S-N+2),\qquad L^{2}=\sum_{i<j}L_{ij}^{2}, \tag{24}\] \[[H_{\Omega},L_{ij}]=0.\] Using the commutation relations among the coordinates and Dunkl operators, the Dunkl angular momentum square may be expressed as follows [14]: \[H_{\Omega}=-r^{2}\nabla^{2}+r\partial_{r}(r\partial_{r}+N-2) \tag{25}\] where \(r=\sqrt{x^{2}}\) is the radial coordinate. Notice that recently the Dunkl angular momentum appears also as the symmetry of the Dunkl-operator extended version of the free relativistic particle [28]. The Dunkl \(u(N)\) algebra also possesses a Casimir element. Alike for the isotropic oscillator, it is not independent but expressed via the Hamiltonian in the following way: \[C=\sum_{i,j}E_{ij}(E_{ji}+S_{ji})=H^{2}-(S-\tfrac{1}{2}N)^{2}.\] The above formula follows from a quadratic relation (crossing relation) among the generators of the Dunkl unitary algebra [14], \[E_{ij}(E_{kl}+S_{kl})=E_{il}(E_{kj}+S_{kj}), \tag{26}\] which, in fact, implies the commutation relation (21). ### Spectrum generating part Finally, the remaining \(N(N+1)\) operators from (20) form Dunkl analogs of the staircase generators in the symplectic group \(SP(2N)\). The corresponding matrices are symmetric, \(F_{ij}^{\pm}=F_{ji}^{\pm}\), and map between the next-to-nearest neighboring energy levels of the generalized Calogero Hamiltonian (2): \[[H,F_{ij}^{\pm}]=\pm 2F_{ij}^{\pm},\qquad F_{ij}^{\pm}=F_{ji}^{\pm}. \tag{27}\] The commutation relations with the Dunkl creation-annihilation operators are similar to those for the symmetry generators (22). The nontrivial commutators among them are easy to obtain using the algebra (10), \[\begin{split}&[F_{ij},a_{k}^{+}]=S_{ik}a_{j}+a_{i}S_{jk},\\ &[F_{ij}^{+},a_{k}]=-S_{ik}a_{j}^{+}-a_{i}^{+}S_{jk}.\end{split} \tag{28}\] The remaining commutators among the \(F\)-s as well as between the \(F\) and \(E\) operators are a bit trickier. For the distinct values of the four indexes, the Wick's theorem holds for the string of creation and annihilation operators with the contractor among the pairs \(a_{i}\) and \(a_{k}^{+}\) provided by the exchange \(S_{ik}\). The latter behaves alike a number since commutes with the other operators in the monomial. Therefore, the only nontrivial commutators are provided by the following relations, \[\begin{split}&[F_{ij},F_{kl}^{+}]=E_{ki}S_{jl}+E_{kj}S_{li}+E_{li}S_{kj}+ E_{lj}S_{ik},\\ &[F_{ij},E_{kl}]=F_{jl}S_{ik}+F_{kl}S_{jk},\end{split}\] and their conjugates (with distinct \(i,j,k,l\)). For general values of indexes, the commutation relations among the generators may be presented in the following form: \[\begin{split}&[F_{ij},F_{kl}^{+}]=S_{ik}(E_{lj}+S_{lj})+E_{ki}S_{lj}+ a_{i}S_{jk}a_{l}^{+}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad+a_{k}^{+}S_{il}a_{j},\end{split} \tag{29}\] \[\begin{split}&[F_{ij},E_{kl}]=S_{ik}F_{jl}+a_{i}S_{jk}a_{l} \end{split} \tag{30}\] with the last equation supplemented by its Hermitian conjugate. The relation with the Dunkl angular momentum (23) is inherited from Eq. (30) and its conjugate, \[[F_{ij}^{\pm},L_{kl}]=\imath(S_{ik}F_{jl}^{\pm}+F_{il}^{\pm}S_{jk}-S_{il}F_{ jk}^{\pm}-F_{ik}^{\pm}S_{jl}).\] In the formulas (29), (30), the terms which are bilinear in the Dunkl creation-annihilation operators can be always re-expressed in terms of the \(E\) or \(F\) generators. This can be achieved by moving the permutation operators \(S_{ij}\) to the left or right. 
In particular, when the four indexes are split into two equal pairs with values \(i\neq j\), the relation (29) reduces as follows: \[\begin{split}[F_{ii},F_{kk}^{+}]=&(E_{ki}+E_{ik}+E_ {kk}+E_{ii}+S_{ii})S_{ik}+g^{2},\end{split} \tag{31}\] \[\begin{split}[F_{ik},F_{ik}^{+}]=&(2E_{ii}+S_{ ii})S_{ik}+S_{ii}(E_{kk}+S_{kk})\\ &\qquad\qquad\qquad\qquad\qquad+E_{ii}S_{kk}.\end{split} \tag{32}\] Then the next expression (30) turns into the following couple: \[\begin{split}&[F_{ii},E_{kk}]=(F_{ik}+F_{ii})S_{ik},\\ &[F_{ik},E_{ik}]=F_{ii}S_{ik}+S_{ii}F_{kk}.\end{split} \tag{33}\] In case when the four indexes coincide, the commutator contains many other generators. Using the equation \(S_{ii}=1-\sum_{k\neq i}S_{ik}\) implied by the definition of the exchange matrix (6), one can obtain the following expressions from the most general commutators (29), (30) [compare with Eqs. (31) and (33)]: \[\begin{split}[F_{ii},F_{ii}^{+}]=&\,4E_{ii}-\sum_{k \neq i}(E_{ii}+E_{kk}+E_{ik}+E_{ki})S_{ik}\\ &\qquad\qquad\qquad\qquad+S_{ii}^{2}-(N-1)g^{2},\end{split} \tag{35}\] \[\begin{split}[F_{ii},E_{ii}]=&\,2F_{ii}-\sum_{k \neq i}(F_{kk}+F_{ik})S_{ik}.\end{split} \tag{36}\] Apart from the commutation relations, there are, of course, additional algebraic relations among the considered operators. For example, the index order is not essential in the products \(F_{ij}F_{kl}\), \(F_{ij}a_{k}\) as well as the order of the last three indexes in the \(E_{ij}F_{kl}\). Likewise, the expression \(F_{ij}(E_{kl}+S_{kl})\) remains invariant under the cyclic permutation of the same indexes as the \(E_{ij}(E_{kl}+S_{kl})\) does (26). ### Conformal subalgebra The Dunkl deformed symplectic algebra contains a conventional (nondeformed) conformal subalgebra \(sl(2,R)\) formed by the following elements, \[\begin{split}& K_{\pm}=\frac{1}{2}\sum_{i}F_{ii}^{\pm}=\frac{1}{2} \sum_{i}a_{i}^{\pm 2},\\ & K_{3}=\frac{1}{2}H=\frac{1}{2}\sum_{i}E_{ii}^{\pm}-\frac{1}{2}S +\frac{1}{4}N.\end{split} \tag{37}\] They obey the standard commutation relation \[[K_{-},K_{+}]=2K_{3},\qquad[K_{3},K_{\pm}]=\pm K_{\pm}, \tag{38}\] which follows from the algebra (10). Alternatively, it may be obtained from Eqs. (31), (33), (35), (36), and the relation \[\sum_{i}S_{ii}^{2}=N-2S.\] The above generators may be regarded as analogs of ones constructed long time ago for the original Calogero model without the exchange operators [29]. It differs also from the conformal subalgebra of the Virasoro algebra's Dunkl deformation [30]. The operators \(K_{\pm}\) with excluded center of mass have been considered in Ref. [31]. Instead of the lowering-rising generators, one can use the elements \(K_{1}\) and \(K_{2}\) defined in a standard way: \[K_{\pm}=K_{1}\pm K_{2}. \tag{39}\] Then the defining relations (38) can be written in a covariant form, which reveals the equivalence with the three dimensional Lorentz algebra \(so(1,2)\): \[[K_{\alpha},K_{\beta}]=-\epsilon_{\alpha\beta\gamma}K^{\gamma}. \tag{40}\] Here by the \(\epsilon\), the Levi-Civita tensor is denoted, and the index is risen by the Minkowski metrics \(\gamma^{\alpha\beta}=\text{diag}(1,-1,-1)\). The second generator is anti-Hermitian while the remaining two are Hermitian: \[K_{2}^{+}=-K_{2},\qquad K_{1,3}^{+}=K_{1,3}.\] The first two generators are expressed via the Hamiltonian and radial coordinate: \[K_{1}=\frac{1}{2}(r^{2}-H),\qquad K_{2}=-\frac{1}{2}r\partial_{r}-\frac{N}{4}. 
\tag{41}\] Therefore, the conformal algebra commutes with the Dunkl angular momentum: \[[K_{\alpha},L_{ij}]=0.\] Recall that the formal replacement of \(K_{1}\) by \(\imath K_{1}\) in the equations (40) reproduces the familiar relations of the \(su(2)\) angular momentum algebra in quantum mechanics. The Casimir element of the conformal algebra (40) is the square in Minkowski space: \[K^{2}=K^{\alpha}K_{\alpha}=K_{1}^{2}-K_{2}^{2}-K_{3}^{2}. \tag{42}\] Up to a constant, it coincides with the angular Calogero Hamiltonian (24), as can be easily verified: \[H_{\Omega}=-4K^{2}-\left(\tfrac{1}{4}N-1\right)N. \tag{43}\] Finally note that the conformal (37) and Weyl (12) generators form together a closed algebra. The nontrivial mixed commutation relations are given by \[[A^{\mp},K^{\pm}]=\pm A^{\pm},\qquad[K_{3},A^{\pm}]=\pm\frac{1}{2}A^{\pm}. \tag{44}\] ## IV Conformal group structure of Calogero wavefunctions ### Deformed harmonic polynomials The unbound generalized Calogero model (Calogero-Moser model with particle exchanges) may be expressed via the angular Hamiltonian and radial coordinates by inverting Eq. (25): \[H_{0}=-\frac{1}{2}\nabla^{2}=-\frac{1}{2}\partial_{r}^{2}-\frac{N-1}{2r} \partial_{r}+\frac{H_{\Omega}}{2r^{2}}. \tag{45}\] Let us use an equivalent Hamiltonian which is more suitable for our purpose in the current section [32]. It is obtained from the primary one by applying the similarity map with respect to the \(g\)-th power of the Vandermonde polynomial's absolute value, \[\phi=\phi(x)=\prod_{i<j}|x_{i}-x_{j}|^{g}, \tag{46}\] which participates in the ground state's expression (13): \[H_{0}^{\prime}=\phi^{-1}H_{0}\phi=-\frac{1}{2}\nabla^{\prime 2 }=-\frac{1}{2}\partial^{2}\\ -g\sum_{i<j}\left[\frac{1}{x_{i}-x_{j}}(\partial_{i}-\partial_{j })-\frac{1-s_{ij}}{(x_{i}-x_{j})^{2}}\right]. \tag{47}\] Under such transform, the Dunkl operator, in its turn, undergoes the following shift: \[\nabla_{i}^{\prime}=\phi^{-1}\nabla_{i}\phi=\partial_{i}+\sum_{j\neq i}\frac{ g}{x_{i}-x_{j}}(1-s_{ij}). \tag{48}\] Evidently, the applied similarity map is not unitary. As a consequence, both \(H_{0}^{\prime}\) and \(\pi_{i}^{\prime}=-\imath\nabla_{i}^{\prime}\) are not Hermitian. An advantage of the shifted Dunkl operator in front of the initial one (4) is the reduction to the conventional derivative when applying to a symmetric function. Moreover, the deformed Laplace equation \[\nabla^{\prime 2}h(x)=0 \tag{49}\] has polynomial solutions, which are Dunkl deformations of the usual harmonic polynomials [17; 32]. The wavefunctions of the angular momentum square are expressed via such polynomials. In fact, it is easy to see that the relation (25) implies that any deformed homogeneous harmonic polynomial is an eigenfunction of the shifted angular Calogero Hamiltonian. A suitable set among the deformed harmonic polynomials is given by the following compact expression [33]: \[h_{\mathbf{n}}(x)=r^{2(E_{0}+m_{\mathbf{n}}-1)}\nabla_{1}^{\prime n_{1}}\nabla _{2}^{\prime n_{2}}\dots\nabla_{N}^{\prime n_{N}}r^{2(1-E_{0})} \tag{50}\] where the polynomial's order takes the value \[m_{\mathbf{n}}=\sum_{i=1}^{N}n_{i},\qquad r\partial_{r}h_{\mathbf{n}}(x)=m_{ \mathbf{n}}h_{\mathbf{n}}(x). \tag{51}\] For the completeness, the proof that the functions (50) fulfill the deformed Laplace equation is given in Appendix B. 
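As a quick illustration of the construction (50), the following minimal sympy sketch (two particles only, not a substitute for the general proof of Appendix B) generates the deformed harmonic polynomial with \(\mathbf{n}=(2,0)\) and verifies that it satisfies the deformed Laplace equation (49). The realization of the exchange operator as the swap \(x_{1}\leftrightarrow x_{2}\) and the value \(E_{0}=g+1\) for \(N=2\) follow Eqs. (14) and (48).

```python
import sympy as sp

x1, x2, g = sp.symbols('x1 x2 g', positive=True)
r2 = x1**2 + x2**2
E0 = g + 1                       # ground-state energy (14) for N = 2

def swap(f):                     # particle exchange s_12
    return f.subs([(x1, x2), (x2, x1)], simultaneous=True)

def dunkl_shifted(f, xi, xj):    # shifted Dunkl operator (48) for N = 2
    return sp.diff(f, xi) + g/(xi - xj)*(f - swap(f))

def laplacian(f):                # deformed Laplacian nabla'^2
    return (dunkl_shifted(dunkl_shifted(f, x1, x2), x1, x2)
            + dunkl_shifted(dunkl_shifted(f, x2, x1), x2, x1))

# h_{(2,0)} = r^{2(E0+1)} * nabla'_1^2 r^{2(1-E0)}, cf. Eq. (50)
seed = r2**(1 - E0)
h20 = sp.simplify(r2**(E0 + 1)
                  * dunkl_shifted(dunkl_shifted(seed, x1, x2), x1, x2))
print(sp.expand(h20))                    # a multiple of x1**2 - x2**2
print(sp.simplify(laplacian(h20)))       # 0: the deformed Laplace equation (49)
```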
The symmetrization procedure over all coordinates reduces the deformed harmonic polynomials (50) to the expression [13] \[h_{\mathbf{k}}^{\text{sym}}(x)=r^{2(E_{0}+m_{\mathbf{k}}-1)}\mathcal{D}_{1}^{ k_{1}}\mathcal{D}_{3}^{k_{3}}\dots\mathcal{D}_{N}^{k_{N}}r^{2(1-E_{0})}. \tag{52}\] Here we have used a shorten notations for the Newton powers in shifted Dunkl operators, \[\mathcal{D}_{l}=\sum_{i=1}^{N}\nabla_{i}^{\prime l}.\] The integer \(m_{\mathbf{k}}\) also is equal to the polynomial's order and defined by Eq. (18) above, \[r\partial_{r}h_{\mathbf{k}}^{\mathrm{sym}}(x)=m_{\mathbf{k}}h_{\mathbf{k}}^{ \mathrm{sym}}(x)\quad\text{with}\quad k_{2}=0. \tag{53}\] The last condition is due to the absence of the Dunkl Laplacian \(\mathcal{D}_{2}=\nabla^{\prime 2}\) in the expression (52), since it acts on a deformed harmonic function: \[\nabla^{\prime 2}r^{-2(E_{0}-1)}=0. \tag{54}\] Note that due to above condition, the deformed harmonic polynomials (50)depend linearly and do not constitute a basis [33]: \[\sum_{i=1}^{N}h_{\mathbf{n}+2\mathbf{e}_{i}}(x)=0 \tag{55}\] with the standard notation for the unit vectors (19). ### Wavefunctions in deformed harmonic polynomials First, let us consider the Dunkl angular momentum square which coincides with the angular part of the generalized Calogero model (24). The related eigenfunctions are the product of the deformed harmonic polynomial (50) and shift function (46) as was explained above. The energy levels are obtained immediately using the expression for the angular Hamiltonian (25): \[H_{\Omega}\,\phi(x)h_{\mathbf{n}}(x)=\mathcal{E}_{\beta}\phi(x) h_{\mathbf{n}}(x), \tag{56}\] \[\mathcal{E}_{\beta}=\beta^{2}-(\tfrac{1}{2}N-1)^{2}. \tag{57}\] They depend on a single quantum number expressed via the order of the polynomial (51): \[\beta=E_{0}+m-1\quad\text{with}\quad m=\sum_{l=1}^{N}n_{l} \tag{58}\] where \(E_{0}\) is the lowest energy of the Calogero model (18). This property leads to the highest possible level of degeneracy and maximal superintegrability of the angular system [11; 13]. Note that the eigenstates are not independent but subjected to a constraint inherited from the condition (55). Since we deal with the angular system, the wavefunctions have to depend only on the angular coordinates \(u_{i}=\cos\theta_{i}=x_{i}/r\) located on unit \(d=N-1\) sphere: \(u^{2}=1\). Indeed, the radial coordinate may be eliminated after canceling out the common factor \(r^{\beta-\frac{1}{2}N+1}\) from both sides of Eq. (56). Using the wavefunctions of the angular Hamiltonian, the energy eigenfunctions of the (generalized) Calogero model may be constructed in spherical coordinates. This fact reflects a common feature of superintegrable Hamiltonians to allow separation of variables in few coordinate systems. The simplest and most conventional solution of the generalized Calogero model is given above in terms of the Cartesian coordinated (15). First note that the standard similarity transformation with respect to the function \(r^{\frac{1}{2}(N-1)}\), applied to the generalized Calogero (Calogero-Moser) system (45), absorbs the term which is linear in the radial derivative. 
As a result, Hamiltonian acquires the following simple form: \[\begin{split} r^{\frac{1}{2}(N-1)}Hr^{-\frac{1}{2}(N-1)}& =-\frac{1}{2}\partial_{r}^{2}+\frac{1}{2}r^{2}+\frac{H_{\Omega}+ \varepsilon^{\prime}}{2r^{2}},\\ \varepsilon^{\prime}&=\tfrac{1}{4}(N-1)(N-3).\end{split} \tag{59}\] The deformed spherical harmonics (56), (57) are eigenfunctions of the shifted Dunkl angular momentum square operator, \(H_{\Omega}+\varepsilon^{\prime}\), with the eigenvalue \[\varepsilon_{\beta}+\varepsilon^{\prime}=\beta^{2}-\tfrac{1}{4}.\] So, the angular and radial coordinates of the transformed Hamiltonian (59) are separated, and the radial part describes the one-dimensional singular oscillator with the spectrum and eigenfunctions given, respectively, by \[E_{k}=\beta+2k+1,\qquad e^{-\frac{r^{2}}{2}}r^{\beta+\frac{1}{2}}L_{k}^{\beta }(r^{2}) \tag{60}\] with \(k=0,1,2,\dots\) and \[L_{k}^{\beta}(z)=\frac{1}{k!}z^{-\beta}e^{z}\frac{d^{k}}{dz^{k}}\left(e^{-z}z^ {k+\beta}\right)\] being the \(k\)-th order associated Laguerre polynomial [4]. Hence, the eigenstates of the generalized Calogero model in terms of the spherical harmonics are provided by the following functions: \[\Psi_{\mathbf{n},k}(x) =e^{-\frac{r^{2}}{2}}r^{\beta-\frac{1}{2}N+1}L_{k}^{\beta}(r^{2}) \phi(u)h_{\mathbf{n}}(u) \tag{61}\] \[=e^{-\frac{r^{2}}{2}}L_{k}^{\beta}(r^{2})\phi(x)h_{\mathbf{n}}(x), \tag{62}\] where \(u_{i}\) are above defined cosine functions in angular coordinates. The corresponding energies are obtained from Eqs. (58) and (60): \[E_{\mathbf{n},k}=E_{0}+m+2k=E_{0}+\sum_{\begin{subarray}{c}l=1\\ l\neq 2\end{subarray}}^{N}lk_{l}+2(k_{2}+k). \tag{63}\] We see that the second quantum number participating in the energy expression comes both from the radial and spherical parts in the wavefunction. Choosing the symmetric harmonic polynomials (52) instead of the common ones, one obtains the states for indistinguishable bosons, which also are eigenfunctions of the conventional Calogero model, \[\Psi_{k_{1}k_{2}k_{3}\dots k_{N}}^{\mathrm{sym}}(x)=e^{-\frac{r^{2}}{2}}L_{k_ {2}}^{\alpha}(r^{2})\phi(x)h_{k_{1}0}^{\mathrm{sym}}{}_{k_{3}\dots k_{N}}^{ \mathrm{sym}}(x). \tag{64}\] Now the lower index of the Laguerre polynomial defines the second quantum number describing the state. The parameter \(\alpha\) is similar to \(\beta\) but with the degree \(m\) of the symmetric homogeneous polynomial (52), (53): \[\alpha=E_{0}+m-1,\qquad m=k_{1}+\sum_{l=3}^{N}lk_{l}. \tag{65}\] The energy spectrum is provided by the same formula as in case of the Cartesian coordinates (18), \[E_{\mathbf{k}}=E_{0}+\sum_{l=1}^{N}lk_{l}=\alpha+2k_{2}+1. \tag{66}\] However, generally, the quantum numbers \(k_{l}\), characterizing the energy eigenstates in the Cartesian (17) and spherical (64) coordinate systems, differ mutually. The cause of this discrepancy is a degeneracy of energy levels which reflects the superintegrability of the Calogero model. In comparison with the spectrum of the generalized Calogero mode (63), the second quantum number in the energy here (66) is spawned exclusively by the radial part in the wavefunction given by the associated Laguerre polynomial (64). Below it will be shown to be controlled by the conformal group. Finally we mention that the detailed study of the wavefunctions expressed in terms of the deformed spherical harmonics has been carried out recently for the generalized Calogero based the reflection groups \(B_{2}\) and \(H_{3}\) which describe the symmetries of the square and icosahedron correspondingly [34]. 
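The associated Laguerre polynomials entering the radial factor (60) can be checked against the Rodrigues-type formula quoted above with a few lines of sympy. The sketch below is purely illustrative: it compares that formula with sympy's built-in `assoc_laguerre` for the first few orders and a symbolic \(\beta\).

```python
import sympy as sp

z, beta = sp.symbols('z beta', positive=True)

def laguerre_rodrigues(k, b, z):
    """Rodrigues-type formula for L_k^b(z) as quoted in the text."""
    return sp.simplify(z**(-b) * sp.exp(z) / sp.factorial(k)
                       * sp.diff(sp.exp(-z) * z**(k + b), z, k))

for k in range(4):
    diff = sp.simplify(laguerre_rodrigues(k, beta, z)
                       - sp.assoc_laguerre(k, beta, z))
    print(k, diff)        # prints 0 for every k
```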
### Conformal group action on wavefunctions in spherical coordinates The symmetric wavefunctions of the Calogero model in spherical coordinates (64) with fixed values of all quantum numbers \(k_{l}\) but the second one (with \(l=2\)) are grouped into the infinite-dimensional lowest-weight representation of the conformal algebra. Applying the \(SL(2,R)\) generators to them, one obtains the following expressions: \[K_{-}\Psi^{\text{sym}}_{\ldots k_{2}\ldots} =-(\alpha+k_{2})\Psi^{\text{sym}}_{\ldots k_{2}-1\ldots}, \tag{67}\] \[K_{+}\Psi^{\text{sym}}_{\ldots k_{2}-1\ldots} =-k_{2}\Psi^{\text{sym}}_{\ldots k_{2}\ldots},\] (68) \[K_{3}\Psi^{\text{sym}}_{\ldots k_{2}\ldots} =\tfrac{1}{2}E_{\mathbf{k}}\Psi^{\text{sym}}_{\ldots k_{2}\ldots}, \tag{69}\] where the condition \(\Psi=0\) is imposed for negative values of \(k_{2}\). Here the dots mark the remaining quantum numbers (\(k_{l}\) with \(l\neq 2\)), which are not affected by the conformal group. According to the definition of the conformal algebra (37), the eigenvalue of the diagonal element equals half the state's energy (66). Since the normalization constants in front of the wavefunctions are neglected, the matrices of the operators \(K_{\pm}\) and \(K_{3}\) do not take the conventional form used in quantum mechanics. The relations (68) and (67) follow from the radial representation of the conformal generators (39), (41), Eq. (66), and the following recurrence relations among the associated Laguerre polynomials: \[(z\partial_{z}-k)L^{\alpha}_{k}(z) =-(\alpha+k)L^{\alpha}_{k-1}(z),\] \[(z\partial_{z}-z+\alpha+k)L^{\alpha}_{k-1}(z) =kL^{\alpha}_{k}(z)\] holding for any integer \(k\geq 1\). The infinite-dimensional \(SL(2,R)\) multiplet (67)-(69) is generated by the wavefunction with \(k_{2}=0\), \[\Psi^{\text{sym}}_{k_{1}0k_{3}\ldots}(x)=e^{-\frac{x^{2}}{2}}\phi(x)h^{\text{sym}}_{k_{1}0k_{3}\ldots k_{N}}(x),\] obeying the lowest-state condition \[K_{-}\Psi^{\text{sym}}_{k_{1}0k_{3}\ldots}=0,\qquad K_{3}\Psi^{\text{sym}}_{k_{1}0k_{3}\ldots}=s\Psi_{k_{1}0k_{3}\ldots}\] with the conformal spin \(s\) depending on all quantum numbers except for the second one, \(k_{2}\), \[s=\tfrac{1}{2}(\alpha+1). \tag{70}\] The Casimir element of the conformal algebra (42) takes the constant value on the current representation: \[K^{2}=-\tfrac{1}{4}(\alpha^{2}-1)=-s(s-1). \tag{71}\] The relation (43) between the Casimir elements of the conformal algebra and the Dunkl angular momentum, evaluated on the discussed wavefunctions, leads to an explicit relation between the corresponding constants given by Eq. (57) with \(\beta\) replaced by \(\alpha\). Summarizing the above consideration, the radial quantum number and the second quantum number \(k_{2}\) of the Dunkl spherical harmonics are governed by the conformal group in the Calogero model. In contrast, in Cartesian coordinates, the conformal transformation can alter other quantum numbers too. More explicitly, the Cartesian basis (17) behaves like the spherical one (64) under the rising and diagonal generators: \[K_{+}|\mathbf{k}\rangle_{\text{sym}} =\tfrac{1}{2}|\mathbf{k}+\mathbf{e}_{2}\rangle_{\text{sym}},\] \[K_{3}|\mathbf{k}\rangle_{\text{sym}} =\tfrac{1}{2}E_{\mathbf{k}}|\mathbf{k}\rangle_{\text{sym}}\] while the lowering generator changes all quantum numbers. 
For example, if \(k_{l}=0\) for all \(l\geq 3\), one gets: \[K_{-}|k_{1},k_{2}\rangle_{\text{sym}} =2k_{2}(\alpha+k_{2})|k_{1},k_{2}-1\rangle_{\text{sym}}\] \[\qquad+(k_{1}-1)|k_{1}-2,k_{2}\rangle_{\text{sym}},\] where the vanishing quantum numbers are omitted within the ket states. In the absence of the last term, the above representation becomes equivalent to the previous one (67)-(69) with a similarity map: \[|k_{1},k_{2}\rangle_{\text{sym}}\rightarrow(-1)^{k_{2}}k_{2}!\Psi^{\text{ sym}}_{\ldots k_{2}\ldots}.\] Of course, using the Jacobi coordinates, one can separate the center-of-mass and its quantum number \(k_{1}\) but in the higher-level quantum numbers starting from \(k_{3}\) still will remain coupled for the Cartesian wavefunctions. Similar type of \(SL(2,R)\) representations appears for the Calogero model with particle exchanges (2), which has more common, nonsymmetric eigenstates \(\Psi_{\mathbf{n},k}\) (62). The corresponding multiplets are generated from the ground state \(\Psi_{\mathbf{n},0}(x)=e^{-\frac{x^{2}}{2}}\phi(x)h_{\mathbf{n}}(x)\) and characterized by the conformal spin given by the formula (70) with the parameter \(\alpha\) substituted by \(\beta\) (58): \[s=\tfrac{1}{2}(\beta+1). \tag{72}\] An eigenvalue associated with the diagonal generator equals half the state's energy given now by Eq. (63): \[\begin{split} K_{-}\Psi_{\mathbf{n},k}&=-(\beta+k) \Psi_{\mathbf{n},k-1},\\ K_{+}\Psi_{\mathbf{n},k-1}&=-k\Psi_{\mathbf{n},k}, \\ K_{3}\Psi_{\mathbf{n},k}&=\tfrac{1}{2}E_{\mathbf{n},k} \Psi_{\mathbf{n},k}.\end{split} \tag{73}\] It depends on the radial quantum number \(k\) which also distinguishes the individual states within a single multiplet. The quantum numbers \(n_{i}\), characterizing the spherical harmonics, remain untouched under the conformal transformation. They define the value of the conformal spin (79), see Eq. (58). ## V Conformal group structure of Calogero-Moser integrals ### Rotated \(sl(2,R)\) basis In the previous section, the behavior of the spectrum and wavefunctions of the Calogero model with and without particle exchanges under the action of the conformal group has been revealed. The latter is a part of more general dynamical symmetry formed by the Dunkl symplectic group. In the current section, we study the conformal multiplets spawned by the Liouville integrals model by adapting the method used in Ref. [35] to the exchange-operator approach. The additional constants of motion then are recovered from the tensor-product decomposition which is equivalent to the usual angular momentum sum rule in quantum mechanics. Since both the second and third generators of the conformal algebra have a space-like signature (42), a rotation in this plane results in an equivalent algebra (40). In particular, the \(\pi/2\) rotation along the first conformal component maps the initial generators to their modified counterparts: \[K^{\prime}_{\alpha}=e^{\frac{\pi}{2}K_{1}}K_{\alpha}e^{-\frac{\pi}{2}K_{1}}. \tag{74}\] Clearly, the above similarity transformation does not touch the generator \(K_{1}\) while exchanges the two other components as follows: \[K^{\prime}_{2}=-K_{3},\qquad K^{\prime}_{3}=K_{2}. \tag{75}\] As a result, the rotated lowering and rising \(SL(2,R)\) generators (39) acquire the following explicit form: \[\begin{split} K^{\prime}_{+}&=K_{1}-K_{3}=\tfrac {1}{2}\nabla^{2}=-H_{0},\\ K^{\prime}_{-}&=K_{1}+K_{3}=\tfrac{1}{2}r^{2}. 
\end{split} \tag{76}\] Virtually, the conformal algebra representation via the Dunkl operators appeared in above form at first in Ref. [36]. Recall that the observable \(H_{0}=-\frac{1}{2}I_{2}\) is the generalized Calogero-Moser Hamiltonian (45). Note that the transformation (74) is not unitary so that both \(K^{\prime}_{\pm}\) are Hermitian while \(K_{\pm}\) are conjugate of each other. In fact, the same rotation relates the generators of the deformed Weyl and Dunkl algebras, \[a_{i}\rightarrow\nabla_{i},\qquad a_{i}^{+}\to x_{i}.\] As a result, the generalized Calogero Hamiltonian (7) transforms as \[H\rightarrow-2K^{\prime}_{3}=r\partial_{r}+\frac{1}{2}N\] while the ground state maps to \(\phi\) (46) which disappears under the action of the Dunkl operator: \(\nabla_{i}\phi(x)=0\). ### Conformal multiplets generated by Liouville integrals The Liouville integrals of motion of the unbound Calogero (Calogero-Moser) model are given by symmetric polynomials in Dunkl operators, usually, the power sums [18]: \[I_{n}=\sum_{i=1}^{N}\nabla_{i}^{n}. \tag{77}\] In fact, the valid quantities must be Hermitian and expressed via the Dunkl momentum, \(\sum_{i}\pi_{i}^{n}=(-\imath)^{n}I_{n}\). Here we ignore that requirement in order to avoid imaginary unit in formulas below. The adjoint action of the conformal algebra on the above integral has quite simple form in the newly defined basis. Indeed, the third generator is diagonal while the rising one just kills it: \[\hat{K}^{\prime}_{+}I_{n}=0,\qquad\hat{K}^{\prime}_{3}I_{n}=\tfrac{1}{2}nI_{n}, \tag{78}\] where the hat here means the commutator, \[\hat{X}f=[X,f].\] It is easy to see that the integral (77) generates the highest weight irreducible representation of the \(SL(2,R)\) group with the conformal spin \[s^{\prime}=\tfrac{1}{2}n \tag{79}\] composed from the states \(I_{n,l}\) with \(l=0,1,\ldots,n\), and the \(I_{n,0}=I_{n}\) is the highest one: \[I_{n,l}=(\hat{K}^{\prime}_{-})^{l}I_{n},\qquad\hat{K}^{\prime}_{3}I_{n,l}=(s^{ \prime}-l)I_{n,l}. \tag{80}\] The rising generator acts on these states in a standard way: \[\hat{K}^{\prime}_{+}I_{n,l}=-\hat{H}_{0}I_{n,l}=-l(n-l+1)I_{n,l-1}. \tag{81}\] In particular, the first Liouville integral \(I_{1}=\sum_{i}\partial_{i}\), which is proportional to the total momentum, generates a doublet (\(s^{\prime}=1/2\)) with the lowest function proportional to the center-of-mass coordinate \(I_{1,1}=-\sum_{i}x_{i}\). The second integral generates a triplet (\(s^{\prime}=1\)) consisting of the following members: \[I_{2}=2K^{\prime}_{+},\qquad I_{2,1}=4K^{\prime}_{3}=-2r\partial _{r}-N,\] \[I_{2,2}=4K^{\prime}_{-}=2r^{2}. \tag{82}\] The square of the conformal spin (42) takes (80) a constant value on the common multiplet: \[\hat{K}^{2}=-\tfrac{1}{4}n(n+1)=-s^{\prime}(s^{\prime}+1). \tag{83}\] Note that the conformal spin's square in the infinite dimensional lowest (71) and highest weight (83) representations have different shapes but map to each other under the substitution \(s^{\prime}\to-s\). ### Descendants of Liouville integral in Weyl-ordered form. The descendants of the Liouville integral of motion are expressed via the Weyl-ordered operator product of the coordinate and Dunkl operator in the following way: \[I_{n,l}=(-1)^{l}\frac{n!}{(n-l)!}\sum_{i=1}^{N}\left(x_{i}^{l}\nabla_{i}^{n-l }\right)_{W}. \tag{84}\] Appendix A provides the definition and brief explanation of the Weyl order. 
The above relation follows immediately from the following commutation relations: \[\hat{K}^{\prime}_{-}\nabla_{i}=-x_{i},\qquad\hat{K}^{\prime}_{-}x_{i}=0.\] Note that the factorials in Eq. (84) contract with the binomial factor in front of the Weyl product (111) which results in the overall \(l!\) factor. In particular, the first descendant from the entire family has the following explicit form: \[I_{n,1}=-\sum_{i=1}^{N}\sum_{k=0}^{n-1}\nabla_{i}^{k}x_{i}\nabla_{i}^{n-k-1}. \tag{85}\] Meanwhile, the last one reduces to the center of mass: \[I_{n,n}=(-1)^{n}n!\sum_{i}x_{i}.\] Note that a generating polynomial gathering all the descendants (84) may be built as a Newton's power sum in an appropriate superposition of the Dunkl operator with the related coordinate (111): \[I_{n}(v):=\sum_{l=0}^{n}(-v)^{l}I_{n,l}=\sum_{i=1}^{N}(\nabla_{i}-vx_{i})^{n}.\] ### Additional integrals from product multiplets As was mentioned in Introduction, the Calogero-Moser model is superintegrable. Apart from the Liouville integrals \(I_{n}\) with \(n=1,\ldots,N\), it possesses additional constants of motion, which constitute together a set of \(2N-1\) independent quantities commuting with the Hamiltonian [8; 9; 10]. In fact, the superintegrability is closely related with the dynamical \(SL(2,R)\) symmetry which can be used in order to retrieve the additional integrals from the Liuoville ones [37; 10; 35]. Alternatively, they can be recovered as the highest-weight functions in the product representation [35]. Indeed, a symmetric bilinear combination of two Liouville integrals of motion, \[\{I_{n_{1}},I_{n_{2}}\}=I_{n_{1}}I_{n_{2}}+I_{n_{2}}I_{n_{1}}, \tag{86}\] possesses a structure of the tensor product representation with respect to the \(sl(2,R)\) algebra's adjoint action. The symmetrized invariant (86) decomposes according to the usual momentum sum rule in quantum mechanics: \((s_{1})\otimes(s_{2})=(s_{1}+s_{2})\oplus(s_{1}+s_{2}-1)\oplus\cdots\oplus(|s_ {1}-s_{2}|)\). Apparently, the highest-weight states are integrals of motion: \[I_{n_{1}+n_{2}-2k}^{n_{1},n_{2}}= \sum_{l=0}^{k}\frac{(-1)^{l}}{2}\binom{n_{1}-k+l}{l}\binom{n_{2}-l }{k-l}\] \[\times\{I_{n_{1},k-l},I_{n_{2},l}\},\] \[\hat{K}^{\prime}_{+}I_{n_{1}+n_{2}-2k}^{n_{1},n_{2}}= \big{[}H_{0},I_{n_{1}+n_{2}-2k}^{n_{1},n_{2}}\big{]}=0\] where \(0\leq k\leq|n_{1}-n_{2}|\). Such integral is parameterized by twice the spin value of the conformal multiplet it generates: \[\hat{K}^{\prime}_{3}I_{2s}^{2s_{1},2s_{2}}=sI_{2s}^{2s_{1},2s_{2}}\quad\text{ with}\quad s=s_{1}+s_{2}-k.\] Here for simplicity, we omit the prime symbol in the spin's notation (79). The first nontrivial integral within this family corresponds to the \(k=1\) case: \[I_{n_{1}+n_{2}-2}^{n_{1},n_{2}}=s_{2}\{I_{n_{1},1},I_{n_{2}}\}-s_{1}\{I_{n_{1}},I _{n_{2},1}\}. \tag{87}\] In case when the second multiplet is the triplet defined in Eq. (82), the above integral of motion may be expressed also as a commutator with the conformal spin's square, or generalized angular Calogero Hamiltonian: \[I_{n}^{n,2}= \,\{I_{n,1},K_{+}^{\prime}\}-2n\{I_{n},K_{3}^{\prime}\}\] \[= \,2[K^{2},I_{n}]=-\frac{1}{2}[H_{\Omega},I_{n}].\] These equations are derived easily from the expression \(K^{2}=\frac{1}{2}\{K_{+}^{\prime},K_{-}^{\prime}\}-K_{3}^{\prime 2}\) for the conformal spin square and the relation (43). 
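The \(sl(2,R)\) relations used throughout this section can also be verified directly in the rotated realization (76). The following sympy sketch is illustrative only: for \(N=2\), with the exchange operator realized as the swap \(x_{1}\leftrightarrow x_{2}\), it checks the commutator \([K^{\prime}_{-},K^{\prime}_{+}]=2K^{\prime}_{3}\) acting on an arbitrary nonsymmetric test polynomial, using \(K^{\prime}_{+}=\tfrac{1}{2}\nabla^{2}\), \(K^{\prime}_{-}=\tfrac{1}{2}r^{2}\), and \(2K^{\prime}_{3}=-(r\partial_{r}+N/2)\), as follows from Eqs. (41) and (75).

```python
import sympy as sp

x1, x2, g = sp.symbols('x1 x2 g')

def swap(f):                               # particle exchange s_12
    return f.subs([(x1, x2), (x2, x1)], simultaneous=True)

def dunkl(f, xi, xj):                      # Dunkl operator (4) for N = 2
    return sp.diff(f, xi) - g/(xi - xj)*swap(f)

def Kp(f):                                 # K'_+ = (1/2) nabla^2
    return (dunkl(dunkl(f, x1, x2), x1, x2)
            + dunkl(dunkl(f, x2, x1), x2, x1)) / 2

def Km(f):                                 # K'_- = r^2 / 2
    return (x1**2 + x2**2) * f / 2

def K3_twice(f):                           # 2 K'_3 = -(r d/dr + N/2), N = 2
    return -(x1*sp.diff(f, x1) + x2*sp.diff(f, x2) + f)

f = x1**3 + 2*x1*x2**2 + x2                # arbitrary nonsymmetric test polynomial
lhs = Km(Kp(f)) - Kp(Km(f))                # [K'_-, K'_+] applied to f
print(sp.simplify(lhs - K3_twice(f)))      # 0
```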
### Generalized Calogero-Moser's integrals The described formation of additional integrals of motion based on the dynamical \(SL(2,R)\) symmetry is even more transparent for the Calogero-Moser model with particle exchanges. In this case, the Liouville integrals form just by the Dunkl momentum components \(\pi_{i}=-\imath\nabla_{i}\) without any symmetrization procedure. Each component of the Dunkl operator generates a conformal doublet (\(s^{\prime}=1/2\)) with \[I_{1}=\nabla_{i}\qquad\text{and}\qquad I_{1,1}=-x_{i}.\] Then according to the momentum sum rule, a pair of such multiplets generated by two different components, \(I_{1}=\nabla_{i}\) and \(I_{1}^{\prime}=\nabla_{j}\) produce a singlet (\(s^{\prime}=0\)). The latter corresponds to a conserved quantity, which is nothing but the Dunkl angular momentum component (23) as is easy to see: \[I_{0}^{1,1}=\frac{1}{2}\{I_{1,1},I_{1}^{\prime}\}-\frac{1}{2}\{I_{1},I_{1,1}^{ \prime}\}=-\imath L_{ij}.\] So, one can resume that the Dunkl angular momentum symmetry is generated from the Dunkl momentum and the dynamical conformal symmetries. ## VI Conclusion It is well known that the Weyl algebra \(w(N)\) generated by the creation-annihilation operators and the symplectic algebra \(sp(2N)\) formed by their bilinear combinations generate a largest possible finite-dimensional group which generates the spectrum of the \(N\)-dimensional isotropic oscillator. In the current article, the spectrum generating algebra was deformed in case of the presence of additional inverse square (Calogero) potential. The construction was based on the exchange operator formalism, in which the derivatives in observables are replaced by the Dunkl operators. In particular, the commutation relations among the deformed generators are derived explicitly and presented in compact form. The Dunkl analog of the symplectic algebra contains of the deformed unitary subalgebra which combines the symmetries of the generalized Calogero Hamiltonian [14]. The remaining, spectrum generating part maps between the different energy levels and involves the standard \(sl(2,R)\) conformal subalgebra. A simple relation is obtained between the Casimir elements of both the conformal spin and Dunkl angular momentum, given by (modified) squares in the corresponding generators. This correspondence has suggested to analyze of the conformal structure of the Calogero wavefunctions in the spherical coordinates based on the Dunkl analog of the spherical harmonics. It turned out that, in fact, the second quantum number is driven by the conformal group, and each eigenstate, in which it vanishes, generates the infinite dimensional lowest-weight \(sl(2,R)\) multiplet. Next, the \(\pi/2\) rotation has been applied to the conformal generators in order to demonstrate that the \(n\)-th Liouville integral of motion of the (generalized) Calogero-Moser model generates the finite-dimensional (non-unitary) highest weight spin-\(\frac{n}{2}\)\(sl(2,R)\) multiplet. The descendants of this representation have been expressed in terms of the Weyl-ordered product in quantum field theory. The highest states in a product of two or more such multiplets produce additional integrals of motion. ###### Acknowledgements. The author is grateful to M. Feigin for preliminary discussions. The work was supported by the Armenian Science Committee Grants No. 20TTWS-1C035, No. 20TTAT-QTa009, and No. 21AG-1C047. 
## Appendix A Weyl ordering Usually in quantum field theory, the normal and chronological orderings are used in order to arrange the operators in a way suitable for calculations. However, sometimes other ordering types appear to be more suitable. In particular, the Weyl ordering just symmetrizes over all possible products of two operators with given powers. Such operators may be represented by particle creation/annihilation operators, coordinate/momentum, or any other pair of noncommuting variables \(a,b\). For example, some simple Weyl-ordered monomials are defined as follows: \[(ab)_{W}=\tfrac{1}{2}(ab+ba),\quad\left(a^{2}b\right)_{W}=\tfrac{1}{3}(a^{2}b+aba+ba^{2}),\] \[\left(a^{2}b^{2}\right)_{W}=\tfrac{1}{6}(a^{2}b^{2}+abab+baba+ab^{2}a+ba^{2}b+b^{2}a^{2}),\] with numerical factors reflecting the number of terms in the sum. It is easy to see that in the general case, this factor is given by an inverse binomial coefficient: \[(a^{l}b^{k})_{W}=\binom{k+l}{l}^{-1}(a^{l}b^{k}+a^{l-1}bab^{k-1}+\cdots+b^{k}a^{l}). \tag{10}\] The Weyl-ordered polynomials appear in Newton's binomial formula for noncommutative variables \(a,b\) and a numerical parameter \(v\): \[(a+vb)^{n}=\sum_{l=0}^{n}\binom{n}{l}v^{l}\left(a^{n-l}b^{l}\right)_{W}. \tag{11}\] The above formula may also be considered as a definition of the Weyl order via a generating function in \(v\). Notice that the symmetrized product (10) has already appeared in the Lax and Dunkl staircase operators [38; 23]. ## Appendix B Dunkl deformed harmonic functions In the current Appendix, the derivation of the expression (52) for the Dunkl analog of the harmonic polynomials is briefly outlined. First note that the radial function \[1/r^{a}\quad\text{with}\quad a=2E_{0}-2, \tag{12}\] where \(E_{0}\) is the ground state energy of the Calogero model (14), obeys the Dunkl harmonic equation (49), (48). In three dimensions and in the absence of particle interaction, it reduces to the usual Coulomb potential \(1/r\). Next, consider the last equation in the sequence (47) and apply both of its sides to the function (12). Using the identities \[\frac{\partial}{\partial x_{i}}\frac{1}{r^{a}}=-\frac{ax_{i}}{r^{a+2}},\qquad\partial^{2}\frac{1}{r^{a}}=\frac{a(a-N+2)}{r^{a+2}},\] it is easy to calculate the deformed Laplacian's action: \[\nabla^{\prime 2}\frac{1}{r^{a}}=\frac{a\left(a-gN(N-1)-N+2\right)}{r^{a+2}}.\] Finally note that the only nontrivial value of \(a\) for which the right hand side of the above equation vanishes is given by Eq. (12). Now fix that value of the parameter and consider the functions \[h_{\mathbf{n}}(x)=r^{a+2m}\nabla_{1}^{\prime n_{1}}\nabla_{2}^{\prime n_{2}}\ldots\nabla_{N}^{\prime n_{N}}r^{-a} \tag{13}\] with \(m=\sum_{l}n_{l}\) as was already defined (58). In general, the Dunkl derivative does not obey the standard Leibniz rule. Instead, a deformed analog of the Leibniz rule holds, as is easy to verify. Namely, any two wavefunctions \(\phi(x)\) and \(\psi(x)\) satisfy the following identity: \[\nabla_{i}^{\prime}(\phi\psi)=\phi\nabla_{i}^{\prime}\psi+\psi\nabla_{i}^{\prime}\phi\\ -g\sum_{k\neq i}\frac{(\phi-s_{ik}\phi)(\psi-s_{ik}\psi)}{x_{i}-x_{k}}, \tag{14}\] where the permutation operator, as usual, exchanges the corresponding coordinates in the function argument: \(s_{ij}\phi(x)=\phi(s_{ij}x)\). Notice that the extra third term vanishes if just one of the two functions is symmetric, say \(\phi\): \(s_{ij}\phi=\phi\). In that case we arrive at the standard Leibniz rule. 
Let us choose now both functions as \(\phi=r^{a+2m}\) and \(\psi=\nabla_{\mathbf{n}}^{\prime}r^{-a}\). The first one is symmetric. In order to simplify the notations, the multiple Dunkl derivatives are abbreviated in the second function, namely: \[\nabla_{\mathbf{n}}^{\prime}=\nabla_{1}^{\prime n_{1}}\ldots\nabla_{N}^{\prime n _{N}}.\] Then the standard Leibniz rule remains valid for the product \(\phi\psi\), which implies the following identity: \[\nabla_{i}^{\prime}h_{\mathbf{n}}=(a+2m)r^{a+2m-2}x_{i}\nabla_{\mathbf{n}}^{ \prime}r^{-a}+r^{a+2m}\nabla_{i}^{\prime}\nabla_{\mathbf{n}}^{\prime}r^{-a}.\] Apply now the operator \(\nabla_{i}^{\prime}\) again and sum over the index \(i\). Then employ the Leibniz rule again for the Dunkl operator, taking the power of the radial coordinate on the left as the first function, and take into account the identity (54). Combining together similar terms, we arrive at the following result: \[\nabla^{\prime 2}h_{\mathbf{n}}(x)=(a+2m)r^{a+2m-2}\big{(}a+2m-2\\ +\mathbf{x}\cdot\nabla^{\prime}+\nabla^{\prime}\cdot\mathbf{x} \big{)}\nabla_{\mathbf{n}}^{\prime}r^{-a}\\ =(2m+a)(2E_{0}-2-a)r^{a+2m-2}\nabla_{\mathbf{n}}^{\prime}r^{-a}, \tag{15}\] where the operator order is essential. In the last step, an explicit value for the anticommutator has been applied: \[\frac{1}{2}\sum_{i}\{x_{i},\nabla_{i}^{\prime}\}=r\partial_{r}+E_{0}.\] The above equation is a consequence of the Dunkl-operator definitions (4), (48) and the canonical commutation relations (5), (6). For the proper choice of the parameter (12), the last expression in Eqs. (15) vanishes for any set of integers \(n_{i}\).
2302.14689
Robust one-shot estimation over shared networks in the presence of denial-of-service attacks
Multi-agent systems often communicate over low-power shared wireless networks in unlicensed spectrum, prone to denial-of-service attacks. We consider the following scenario: multiple pairs of agents communicating strategically over shared communication networks in the presence of a jammer who may launch a denial-of-service. We cast this problem as a game between a coordinator who optimizes the transmission and estimation policies jointly and a jammer who optimizes its probability of performing an attack. We consider two cases: point-to-point channels and large-scale networks with a countably infinite number of sensor-receiver pairs. When the jammer proactively attacks the channel, the game is nonconvex from the coordinator's perspective. However, despite the lack of convexity, we construct a saddle point equilibrium solution for any multi-variate Gaussian distribution for the observations. When the jammer is reactive, we obtain an algorithm based on sequential convex optimization, which converges swiftly to first-order Nash-equilibria. Interestingly, blocking the channel is often optimal when the jammer is reactive, even when it is idle, to create ambiguity at the receiver.
Xu Zhang, Marcos M. Vasconcelos
2023-02-28T16:00:03Z
http://arxiv.org/abs/2302.14689v1
# Robust one-shot estimation over shared networks in the presence of denial-of-service attacks

###### Abstract

Networked multi-agent systems often communicate information over low-power shared wireless networks in unlicensed spectrum, which are prone to denial-of-service attacks. An instance of this scenario is considered: multiple pairs of agents, each pair consisting of a transmitting sensor and a receiver acting as an estimator, communicate strategically over shared communication networks in the presence of a jammer who may launch a denial-of-service attack in the form of packet collisions. Using the so-called _coordinator approach_, we cast this problem as a zero-sum Bayesian game between the coordinator, who jointly optimizes the transmission and estimation policies, and a jammer who optimizes its probability of performing an attack. We consider two cases: point-to-point channels and large-scale networks with a countably infinite number of sensor-receiver pairs. When the jammer proactively attacks the channel, we find that this game is nonconvex from the coordinator's perspective. However, we construct a saddle point equilibrium solution for any multi-variate Gaussian input distribution for the observations despite the lack of convexity. In the case where the jammer is reactive, we obtain a customized algorithm based on sequential convex optimization, which converges swiftly to first-order Nash-equilibria. Interestingly, we discovered that when the jammer is reactive, it is often optimal to block the channel even when it knows that the channel is idle, to create ambiguity at the receiver.

## I Introduction

Networked multi-agent systems often consist of multiple decision-making agents collaborating to perform a task. Popular examples of network systems include ground and/or aerial robotic networks, sensor networks, and the internet of things. In order to achieve a synergistic behavior, the agents often communicate messages over a wireless network among themselves. Typically, a network system architecture will also involve one or multiple nodes communicating with a gateway or base-station. Over these links, the transmitting agent sends messages containing one or more state variables that need to be estimated at the base-station. In particular, remote sensing, where one (or multiple) sensor(s) communicates its measurements over a shared wireless channel to one or more non-collocated access points or base-stations, is a fundamental building block of many cyber-physical systems [1, 2, 3], and references therein. There are many communication protocols that enable such local communication, such as Bluetooth, Wi-Fi and cellular, among others. The choice of a given protocol requires meeting some specifications, but there is no single protocol that achieves all desirable characteristics and is uniformly better than the others. For example, Low Power Wide Area Networks (LP-WANs) provide power efficiency and large coverage, leading to very cost-efficient deployments [4]. However, such protocols operate in frequency bands in the so-called _unlicensed spectrum_, and are therefore vulnerable to malicious agents interested in disrupting the communication link between the anchor-node(s) and the base station using denial-of-service attacks. Denial-of-Service (DoS) is a class of cyber-attacks where a malicious agent, often referred to as the _jammer_, may disrupt the communication link between the legitimate transmitter-receiver pair. 
DoS attacks are widely studied at different levels of modeling detail of the communication channel. For example, if the channel is assumed to be a physical layer model, the jammer may introduce additional Gaussian noise to the transmitted signal. If the channel is modeled at the network layer by a packet-drop channel, the jammer may increase the probability of dropping a packet. We consider a medium access control (MAC) layer model in which the jammer may decide to block the channel by transmitting an interference signal that overwhelms the receiver, causing a packet collision.

Fig. 1: Block diagram for a remote estimation game between a coordinator and a jammer. The jammer may have access to side information on the channel’s occupancy. The coordinator designs the policies for the sensor and the estimator.

We consider the remote estimation system depicted in Fig. 1, which is comprised of multiple sensor and estimator pairs communicating over a shared wireless network modeled by a collision channel in the presence of a jammer. Each sensor makes a stochastic measurement \(X_{i}\) of a physical quantity according to a given distribution, and decides whether to transmit it or not to the corresponding estimator. Communication is costly; therefore, the sensors must transmit wisely. We consider two cases: 1. the _proactive jammer_ that cannot sense if the channel is being used by the sensors; 2. the _reactive jammer_ that can sense the channel, i.e., has access to the number of transmitting sensors \(P\). Jamming is assumed to be costly; therefore, the jammer must act strategically. Finally, each estimator observes the channel output and declares a corresponding estimate \(\hat{X}_{i}\) for the sensor's observation so as to minimize the expected quadratic distortion between \(X_{i}\) and \(\hat{X}_{i}\). We study this problem as a zero-sum game between a coordinator (system designer) and the jammer. Our goal is to characterize equilibrium solutions and obtain efficient algorithms to compute them. The main difference between our model and existing work in this area is the presence of a virtual binary signaling channel that can be exploited by the coordinator to guarantee a minimum level of performance of the system in the presence of DoS attacks. There exists an extensive literature on strategic communication in the presence of jammers. This class of problems seems to have started with the seminal work of Basar [5], which obtained a complete characterization of the saddle point equilibria when the sensor measurements and the channel are Gaussian. Recently, an extension to the two-way additive Gaussian noise channel was studied by McDonald et al. in [6]. A jamming problem where the transmitter and estimator have different objectives was solved by Akyol et al. in [7] using a hierarchical game approach. A jamming problem with and without common randomness between the transmitter and estimator is studied by Akyol in [8], and a Stackelberg game formulation was considered by Gao et al. [9]. Another interesting problem formulation is due to Shafiee and Ulukus in [10], where the pay-off function is the mutual information between the channel input and output. Jamming over fading channels was considered by Ray et al. in [11] and subsequently by Altman et al. in [12]. An LTE network model was considered by Aziz et al. in [13]. Another class of remote estimation problems focuses on the state estimation of a linear time-invariant system driven by Gaussian noise under DoS attacks. Li et al. 
[14] studied a jamming game where the transmitter and jammer have binary actions. A SINR-based model was considered by Li et al. in [15], where the transmitter and jammer decide among multiple discrete power levels. The case of continuum of power levels was studied by Ding et al. in [16]. A jamming model over a channel with two modes (i.e., free mode and safe mode) was analyzed by Wu et al. in [17]. A jamming problem with asymmetric feedback information and multi-channel transmissions was considered by Ding et al. in [18] and [19], respectively. A Stackelberg equilibrium approach to this problem was considered by Feng et al. in [20]. The problem of optimizing the attack scheduling policy from the jammer's perspective was considered by Peng et al. in [21]. The model described herein is closely related to the work of Gupta et al. [22], [23] and Vasconcelos and Martins [24], [25], where there is a clear distinction between the channel being blocked vs. idle. As in [22], we assume that the transmission decision \(U\) may be available as side information to the jammer, but not the full input signal \(X\). This assumption is realistic in the sense that the bits used to encode \(X\) may be encrypted. In the game considered in [22], it is assumed that the receiver is fixed, and the game is played between the sensor and the jammer. Instead, we follow Akyol [8] in which the sensor and estimator are distinct agents implementing policies optimized by a _coordinator_[26]1. Footnote 1: A subset of the results for the point-to-point channel reported herein have appeared in [27], which uses a different optimization technique based on so-called _rearrangement inequalities_. In the present paper, we used different optimization techniques that allow a complete generalization for the multivariate Gaussian observation model. Such generalization would not be possible using the same techniques in [27]. Moreover, this paper introduces the analysis for the large-scale case, which appears here for the first time and does not follow from the analysis in [27]. The main contributions of the paper are summarized as follows: 1. For the proactive jammer over a point-to-point channel, we provide the optimal strategies for the coordinator and the jammer that constitute a saddle point equilibrium, which appears as two scenarios depending on the transmission and jamming costs. This result holds even though the objective function is non-convex from the coordinator's perspective. 2. For the reactive jammer over a point-to-point channel, we propose alternating between Projected Gradient Ascent (PGA) and Convex-Concave Procedure (CCP) to achieve an approximate first-order Nash-equilibrium. Our numerical results demonstrate that the proposed PGA-CCP algorithm exhibits superior convergence rates compared to the traditional Gradient Descent Ascent (GDA) algorithm. A significant contribution here is that the optimal estimator employs representation symbols with distinct values for the no-transmission and collisions, as opposed to when the jammer is proactive, which uses the mean to estimate the observations in both situations. 3. For large-scale networks we assume a problem with a countably infinite number of agents [28, and references therein] under the possibility of a proactive jamming attack. We compute the limiting objective function when the normalized channel capacity converges to a constant \(\bar{\kappa}\). 
In this regime, the zero-sum game between the coordinator and the jammer over large-scale networks is equivalent to a constrained minimax problem. We establish the saddle point equilibrium of the optimal strategies for the coordinator and the jammer, which consists of six scenarios based on transmission cost, jamming cost, and the normalized capacity. ## II System Model Consider a remote sensing system consisting of \(n\) sensors. Let \([n]\) denote the set \(\{1,\cdots,n\}\). Each sensor makes a random measurement, which is represented by a random vector. Let \(X_{i}\in\mathbb{R}^{m}\) denote the measurement of the \(i\)-th sensor. We assume for tractability that the measurements are independent and identically distributed Gaussian random vectors across sensors, that is, \(X_{i}\sim\mathcal{N}(\mu,\Sigma),\,i\in[n]\). We denote the probability density function (pdf) of a multivariate Gaussian random vector by: \[f(x)\mathop{=}^{\mathrm{def}}\frac{1}{\sqrt{(2\pi)^{m}|\Sigma|}}\exp\Big{(}- \frac{(x-\mu)^{\mathsf{T}}\Sigma^{-1}(x-\mu)}{2}\Big{)}. \tag{1}\] The goal of the sensors is to communicate their measurements to one or multiple receiver over a shared wireless network of limited capacity in the presence of a jammer. ### _Transmitters_ We define the following collection of policies: \[\gamma=(\gamma_{1},\cdots,\gamma_{n}). \tag{2}\] **Definition 1** (Transmission policy): _A transmission policy for the \(i\)-th sensor is a measurable function \(\gamma_{i}:\mathbb{R}^{m}\rightarrow[0,1]\) such that_ \[\mathbf{P}(U_{i}=1\mid X_{i}=x_{i})=\gamma_{i}(x_{i}),\ \ i\in[n]. \tag{3}\] When the \(i\)-th sensor makes a transmission, it sends a packet containing its identification number and its observed measurement as follows. Given \(X_{i}=x_{i}\) and \(U_{i}=1\), the signal transmitted to the receiver is: \[S_{i}=(i,x_{i}). \tag{4}\] The reason this is done is to remove the ambiguity regarding the origin of each measurement, since they could correspond to physical quantities captured at different locations, or, potentially, completely different physical quantities. When a sensor does not transmit, we assume that the signal transmitted corresponds to an _empty_ packet, which is mathematically represented by \[S_{i}=\varnothing. \tag{5}\] When a sensor transmits, it encodes the data using a cryptographic protocol. Typically, the LoRaWAN IoT standard uses the 128bit-AES lightweight encryption. The nature of the protocol is not important here, but it implies that when the attacker _senses_ the channel, it cannot decode the content in each transmitted packet. However, it is capable of detecting whether a given channel is used based on a threshold detector on the power level in the channel's frequency band. We assume that the communication occurs via a wireless medium of capacity \(\kappa(n)\in(0,n)\). Notice that the capacity of the channel corresponds to the number of packets that the channel can support simultaneously, and is not related to the information theoretic notion of capacity. Provided that the channel is not blocked by the attacker, when the total number of transmitting sensors is below or equal to the channel capacity, the receiver observes the packets perfectly. Conversely, when the number of simultaneous transmissions exceeds the channel capacity, the receiver observes a _collision symbol_. 
We represent this as follows: Let \[P\mathop{\stackrel{{\mathrm{def}}}{{=}}}\sum_{i=1}^{n}U_{i} \tag{6}\] and \[Y=\begin{cases}\{S_{i}\}_{i=1}^{n}&\text{if}\ \ P\leq\kappa(n)\\ \mathfrak{C}&\text{otherwise}.\end{cases} \tag{7}\] One feature of the wireless medium is that it is prone to malicious denial of service attacks known as jamming. There are many types of jamming attacks, but here we focus on two kinds: the _proactive_ and the _reactive_ jammer. ### _Proactive jamming_ We define the _proactive_ jammer, as one that decides whether to block the channel or not without sensing the channel. Therefore, at each time instant, the decision to attack is made according to a mixed strategy, such that with a certain probability, it spends a fixed amount of energy to block the network. At the receiver, the jamming attack is perceived as if a collision among many packets has happened. For the proactive jammer, the decision to block the channel or not is denoted by the variable \(J\), which is independent of \(P\), i.e., \[\mathbf{P}(J=1)=\varphi\in[0,1]. \tag{8}\] ### _Reactive jamming_ Reactive jamming is a more sophisticated attack model in which the jammer first senses whether the channel is occupied or not. Then the jammer adjusts its probability of blocking the channel based on the channel state. The reactive jamming strategy is characterized by a vector \(\varphi\mathop{\stackrel{{\mathrm{def}}}{{=}}}(\alpha,\beta) \in[0,1]^{2}\): \[\mathbf{P}\big{(}J=1\mid U=0\big{)}=\alpha,\ \ \mathbf{P}\big{(}J=1\mid U=1 \big{)}=\beta, \tag{9}\] where \(\alpha\) is the jamming probability when the channel is not occupied and \(\beta\) is the jamming probability when the channel is occupied. ### _Channel output_ Given \(\{X_{i}=x_{i}\}_{i=1}^{n}\), let us define the channel output alphabet as the \(\mathcal{Y}(x_{1},\cdots,x_{n})\) as follows. Let \[\mathcal{Y}_{i}(x_{i})\mathop{\stackrel{{\mathrm{def}}}{{=}}} \big{\{}\varnothing,(i,x_{i})\big{\}}. \tag{10}\] Then, \[\mathcal{Y}(x_{1},\cdots,x_{n})\mathop{\stackrel{{\mathrm{def}}}{{=} }}\big{(}\mathcal{Y}_{1}(x_{1})\times\cdots\times\mathcal{Y}_{n}(x_{n})\big{)} \cup\mathfrak{C}. \tag{11}\] The channel output is given by: \[Y=\begin{cases}\{S_{i}\}_{i=1}^{n}&\text{if}\ \ P\leq\kappa(n),\ J=0\\ \mathfrak{C}&\text{otherwise}.\end{cases} \tag{12}\] ### _Receiver_ Finally, we define the receiver's policy. Let the receiver policy \(\eta\) be a collection of functions \[\eta=(\eta_{1},\cdots,\eta_{n}), \tag{13}\] where \(\eta_{i}:\mathcal{Y}_{i}(x_{i})\cup\mathfrak{C}\rightarrow\mathbb{R}^{m}\) is a measurable map, \(i\in[n]\). Assume that from \(Y\), the receiver forms \(n\) signals \(\{Y_{i}\}_{i=1}^{n}\) such that \[Y_{i}=\begin{cases}\mathfrak{C},&\text{if}\ \ Y=\mathfrak{C}\\ S_{i},&\text{otherwise}.\end{cases} \tag{14}\] Given \(X_{i}=x_{i}\), an estimation policy is a measurable map \(\eta_{i}:\mathcal{Y}(x_{i})\rightarrow\mathbb{R}^{m}\) such that \[\hat{X}_{i}=\eta_{i}(Y_{i}),\ \ i\in[n]. \tag{15}\] We assume that a coordinator plays the role of a system designer, and jointly adjusts the transmission and estimation policies at the sensors and the receiver in the presence of a jammer. The approach is akin to a _robust design_ problem in which, the coordinator seeks to optimize the performance of the distributed sensing system, when the operation may be affected by a DoS attack. 
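To make the moving pieces of the model concrete, the following small simulation sketch generates one round of measurements, transmission decisions, proactive jamming, and the resulting channel output per Eqs. (3)-(12). It is our own illustration: the threshold transmission rule, the mean-value fallback estimate, and all parameter values are placeholder choices, not quantities prescribed by the paper at this point.

```python
# Illustrative one-round simulation of the sensing/collision-channel model
# (Eqs. (3)-(12)) with a proactive jammer. The threshold rule and the
# mean-value fallback estimate are placeholder choices for this sketch.
import numpy as np

rng = np.random.default_rng(0)

n, m = 20, 2                 # number of sensors, dimension of each measurement
kappa = 8                    # channel capacity (max simultaneous packets)
mu = np.zeros(m)             # mean of the Gaussian measurements
tau = 1.5                    # illustrative transmission threshold
phi = 0.2                    # proactive jamming probability, P(J = 1)

# Measurements X_i ~ N(mu, I) and threshold transmission decisions U_i.
X = rng.normal(mu, 1.0, size=(n, m))
U = (np.sum((X - mu) ** 2, axis=1) > tau).astype(int)   # gamma_i(x_i)
P = U.sum()                                              # Eq. (6)

J = int(rng.random() < phi)                              # proactive jammer, Eq. (8)

# Channel output, Eq. (12): all packets go through iff P <= kappa and J = 0.
collision = (P > kappa) or (J == 1)

# Receiver-side estimates: perfect copy for delivered packets, fallback to the
# mean when nothing is received or a collision is observed.
X_hat = np.where((U[:, None] == 1) & (not collision), X, mu)

mse = np.mean(np.sum((X - X_hat) ** 2, axis=1))
print(f"P = {P}, J = {J}, collision = {collision}, empirical distortion = {mse:.3f}")
```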
We assume that the coordinator _plays a zero-sum game_ with the attacker, where the objective function is given by: \[\mathcal{J}_{n}\big{(}(\gamma,\eta),\varphi\big{)}\mathop{\stackrel{{ \mathrm{def}}}{{=}}}\frac{1}{n}\mathbf{E}\Bigg{[}\sum_{i=1}^{n}\big{[}\|X_{i}- \hat{X}_{i}\|^{2}+c\mathbf{1}(U_{i}=1)\big{]}\Bigg{]}\] \[-d\mathbf{P}(J=1). \tag{16}\] Note that even when there are multiple sensors, there are only two players, namely, the coordinator and the jammer. We are interested in obtaining policies that constitute saddle point equilibria. **Definition 2** (Saddle point equilibrium): _A policy tuple \((\gamma^{\star},\eta^{\star},\varphi^{\star})\) is a saddle point equilibrium if_ \[\mathcal{J}\big{(}(\gamma^{\star},\eta^{\star}),\varphi\big{)}\leq\mathcal{J} \big{(}(\gamma^{\star},\eta^{\star}),\varphi^{\star}\big{)}\leq\mathcal{J} \big{(}(\gamma,\eta),\varphi^{\star}\big{)}, \tag{17}\] _for all \(\gamma,\eta\), and \(\varphi\) in their respective admissible policy spaces._ ## III Point-to-point channels We start our analysis by considering a point-to-point channel that can support at most one packet per time-slot, i.e., \(n=1\) and \(\kappa(n)=1\). In this case, \(P=U_{1}\), the objective function becomes \[\mathcal{J}_{1}\big{(}(\gamma,\eta),\varphi\big{)}=\mathbf{E}\big{[}\|X_{1}- \hat{X}_{1}\|^{2}\big{]}+c\mathbf{P}(U_{1}=1)-d\mathbf{P}(J=1). \tag{18}\] From here on, we will ignore the subscripts to simplify the notation. The first step is to assume that, without loss of generality, the estimator at the receiver implements the following map2 Footnote 2: Here we omit that the received signal includes the identification number. This information is redundant in the point-to-point setting. \[\eta(y)=\begin{cases}x&\text{if}\ \ y=x\\ \hat{x}_{0}&\text{if}\ \ y=\varnothing\\ \hat{x}_{1}&\text{if}\ \ y=\mathfrak{C},\end{cases} \tag{19}\] where the variables \(\hat{x}_{0}\) and \(\hat{x}_{1}\) serve as representation symbols for the no-transmission and collision events, and will be optimized by the coordinator. Let \(\hat{x}\stackrel{{\mathrm{def}}}{{=}}(\hat{x}_{0},\hat{x}_{1})\). The estimation policy in Eq. (19) is parametrized by \(\hat{x}\in\mathbb{R}^{2m}\). ### _Proactive jamming of point-to-point collision channels_ We obtain the following structural result for the set of optimal transmission policies at the sensor. **Proposition 1** (Optimality of threshold policies): _For a point-to-point system with a proactive attacker with a fixed jamming probability \(\varphi\in[0,1]\), and an arbitrary estimation policy \(\eta\) indexed by representation symbols \(\hat{x}\in\mathbb{R}^{2m}\), the optimal transmission strategy is3:_ Footnote 3: The function \(\mathbf{1}(\mathfrak{S})\) denotes the indicator function of the Boolean statement \(\mathfrak{S}\), i.e., \(\mathbf{1}(\mathfrak{S})=1\) if \(\mathfrak{S}\) is true, and \(\mathbf{1}(\mathfrak{S})=0\) if \(\mathfrak{S}\) is false. \[\gamma^{\star}_{\eta,\varphi}(x)=\mathbf{1}\big{(}(1-\varphi)\|x-\hat{x}_{0} \|^{2}>c\big{)}. \tag{20}\] Proof:: Using the law of total expectation, the definition of the estimation policy in Eq. (19), and the fact that \((U,X)\perp\!\!\!\perp J\), we rewrite Eq. (18) as follows: \[\mathcal{J}\big{(}(\gamma,\eta),\varphi\big{)}=\mathbf{E}\big{[} \|X-\hat{x}_{0}\|^{2}\ |\ U=0\big{]}\mathbf{P}(U=0)(1-\varphi)\\ +\mathbf{E}\big{[}\|X-\hat{x}_{1}\|^{2}\big{]}\varphi+c\mathbf{P}(U =1)-d\varphi, \tag{21}\] where \(\varphi=\mathbf{P}(J=1)\). 
Equation (21) is equivalent to \[\mathcal{J}\big{(}(\gamma,\eta),\varphi\big{)}=\int_{\mathbb{R}^ {m}}(1-\varphi)\|x-\hat{x}_{0}\|^{2}\big{(}1-\gamma(x)\big{)}f(x)\mathrm{d}x \\ +\int_{\mathbb{R}^{m}}c\gamma(x)f(x)\mathrm{d}x+\varphi\mathbf{E} \big{[}\|X-\hat{x}_{1}\|^{2}\big{]}-d\varphi. \tag{22}\] Finally, when optimizing over \(\gamma\) for fixed \(\eta\) and \(\varphi\), we have an infinite dimensional linear program with the following constraint: \[0\leq\gamma(x)\leq 1,\ \ x\in\mathbb{R}^{m}. \tag{23}\] The solution to this problem is obtained by comparing the arguments of the two integrals that involve \(\gamma\), i.e., \(x\in\{\xi\mid\gamma^{\star}_{\eta,\varphi}(\xi)=1\}\) if and only if \((1-\varphi)\|x-\hat{x}_{0}\|^{2}>c\). **Remark 1**: _Proposition 1 implies that the optimal transmission policy is always of the threshold type. Moreover, this threshold policy is symmetric if and only if \(\hat{x}_{0}=0\). The optimal policy is non-degenerate if \(\varphi\in[0,1)\), or degenerate when \(\varphi=1\). The latter corresponds to a never-transmit policy._ With a slight abuse of notation, the structure of the optimal transmission policy in Proposition 1 implies that the objective function Eq. (18) assumes the following form: \[\mathcal{J}\big{(}(\gamma^{\star}_{\eta,\varphi},\varphi),\varphi\big{)}=\mathbf{E}\bigg{[}\min\Big{\{}(1-\varphi)\|X-\hat{x}_{0} \|^{2},c\Big{\}}\bigg{]}\\ +\varphi\Big{(}\mathbf{E}\big{[}\|X-\hat{x}_{1}\|^{2}\big{]}-d \Big{)}\stackrel{{\mathrm{def}}}{{=}}\tilde{\mathcal{J}}(\hat{x}, \varphi). \tag{24}\] **Proposition 2**: _Let \(X\in\mathbb{R}^{m}\) be a Gaussian random vector with mean \(\mu\) and covariance \(\Sigma\). The function \(\tilde{\mathcal{J}}(\hat{x},\varphi)\) is non-convex in \(\hat{x}\in\mathbb{R}^{2m}\) and concave in \(\varphi\in[0,1]\)._ Proof:: **Non-convexity in \(\hat{x}\)** - We set \(\varphi=0.5\), \(c=d=1\) and \(X\sim\mathcal{N}(0,1)\). We can numerically verify that: \[\frac{1}{2}\tilde{\mathcal{J}}\big{(}(0,0),\varphi\big{)}+\frac{1}{2}\tilde{ \mathcal{J}}\big{(}(1,0),\varphi\big{)}<\tilde{\mathcal{J}}\big{(}(0.5,0), \varphi\big{)}. \tag{25}\] **Concavity in \(\varphi\)** - Define \(p:\mathbb{R}^{m}\times\mathbb{R}^{m}\times\mathbb{R}\rightarrow\mathbb{R}\) such that \[p(x,\hat{x}_{0},\varphi)\stackrel{{\mathrm{def}}}{{=}}\min\Big{\{}(1 -\varphi)\|x-\hat{x}_{0}\|^{2},c\Big{\}}. \tag{26}\] For fixed \(x,\hat{x}_{0}\in\mathbb{R}^{m}\), \(p(x,\hat{x}_{0},\varphi)\) is the pointwise minimum of affine functions in \(\varphi\). Therefore, it is concave for all \(x\in\mathbb{R}^{m}\). Taking the expectation of \(p(X,\hat{x}_{0},\varphi)\) with respect to \(X\) preserves the concavity in \(\varphi\). We proceed by minimizing Eq. (24) with respect to the estimation policy, which is a non-convex finite dimensional optimization problem over \(\hat{x}\in\mathbb{R}^{2m}\). A classic result in probability theory implies that \(\hat{x}_{1}^{\star}=\mu\). However, due to the lack of convexity it is non-trivial to find the minimizer \(\hat{x}_{0}^{\star}\) for an arbitrary Gaussian distribution. ### _The scalar case_ We begin with the result for the scalar case. The general vector case is discussed in Appendix A. **Theorem 1** (Optimal estimator for scalar Gaussian sources): _Let \(X\) be a Gaussian random variable with mean \(\mu\) and variance \(\sigma^{2}\). 
The optimal estimator is_ \[\eta^{\star}(y)=\begin{cases}\mu,&\text{if}\ \ y\in\{\varnothing,\mathfrak{C}\}\\ x,&\text{if}\ \ y=x.\end{cases} \tag{27}\] Proof:: Since \(\hat{x}_{1}^{\star}=\mu\), after ignoring the constants, the objective function becomes \[\tilde{\mathcal{J}}(\hat{x},\varphi)=\int_{-\infty}^{+\infty}\min\Big{\{}(1- \varphi)(x-\hat{x}_{0})^{2},c\Big{\}}\frac{e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}}{ \sqrt{2\pi\sigma^{2}}}\mathrm{d}x. \tag{28}\] After a change of variables, the objective function may be expressed as \[\tilde{\mathcal{J}}(\hat{x},\varphi)=\int_{-\infty}^{+\infty}\min\Big{\{}(1- \varphi)z^{2},c\Big{\}}\frac{e^{-\frac{(z+\hat{x}_{0}-\mu)^{2}}{2\sigma^{2}}}}{ \sqrt{2\pi\sigma^{2}}}\mathrm{d}z. \tag{29}\] Taking the partial gradient of \(\tilde{\mathcal{J}}(\hat{x},\varphi)\) with respect to \(\hat{x}_{0}\) we obtain \[\nabla_{\hat{x}_{0}}\tilde{\mathcal{J}}(\hat{x},\varphi) = -\int_{-\infty}^{+\infty}\min\Big{\{}(1-\varphi)z^{2},c\Big{\}} \frac{e^{-\frac{(z+\hat{x}_{0}-\mu)^{2}}{2\sigma^{2}}}}{\sqrt{2\pi\sigma^{2}}} \tag{30}\] \[\quad\quad\quad\quad\cdot\left(\frac{z+\hat{x}_{0}-\mu}{\sigma^{ 2}}\right)\mathrm{d}z\] \[\stackrel{{(a)}}{{=}}-\int_{-\infty}^{+\infty}\min \Big{\{}(1-\varphi)[v-(\hat{x}_{0}-\mu)]^{2},c\Big{\}}\] (31) \[\quad\quad\quad\quad\quad\cdot\frac{e^{-\frac{x^{2}}{2\sigma^{2} }}}{\sqrt{2\pi\sigma^{2}}}\left(\frac{v}{\sigma^{2}}\right)\mathrm{d}v,\] where \((a)\) follows by exchanging \(z+\hat{x}_{0}-\mu\) with \(v\). Let \[h(v) \stackrel{{\mathrm{def}}}{{=}}\min\Big{\{}(1-\varphi )[v-(\hat{x}_{0}-\mu)]^{2},c\Big{\}}, \tag{32}\] \[g(v) \stackrel{{\mathrm{def}}}{{=}}\frac{1}{\sqrt{2\pi \sigma^{2}}}e^{-\frac{x^{2}}{2\sigma^{2}}}\cdot\Big{(}-\frac{v}{\sigma^{2}} \Big{)}. \tag{33}\] Note that \(g(v)\) is an odd function with \(g(v)<0\) for \(v>0\) and \(h(v)\) is nonnegative for all \(v\). We analyze the sign of \(\nabla_{\hat{x}_{0}}\tilde{\mathcal{J}}(\hat{x},\varphi)\) in three cases: 1. For \(\hat{x}_{0}=\mu\), \(h(v)\) is an even function, which implies that \(h(v)g(v)\) is an odd function. Therefore, \[\nabla_{\hat{x}_{0}}\tilde{\mathcal{J}}(\hat{x},\varphi)=\int_{-\infty}^{+ \infty}h(v)g(v)\mathrm{d}v=0.\] (34) 2. For \(\hat{x}_{0}>\mu\), we have \(0\leq h(v)<h(-v)\) when \(v\geq 0\). Since \(g(v)\) is an odd function and \(g(v)<0\) for \(v>0\), we have \[\nabla_{\hat{x}_{0}}\tilde{\mathcal{J}}(\hat{x},\varphi)=\int_{0}^{+\infty} \big{(}h(v)-h(-v)\big{)}g(v)\mathrm{d}v<0.\] (35) 3. For \(\hat{x}_{0}<\mu\), we have \(0\leq h(-v)<h(v)\) when \(v\geq 0\). Since \(g(v)\) is an odd function and \(g(v)<0\) for \(v>0\), we have \[\nabla_{\hat{x}_{0}}\tilde{\mathcal{J}}(\hat{x},\varphi)=\int_{0}^{+\infty} \big{(}h(v)-h(-v)\big{)}g(v)\mathrm{d}v<0.\] (36) Therefore, we conclude that \(\hat{x}_{0}=\mu\) is the unique minimizer of \(\tilde{\mathcal{J}}(\hat{x},\varphi)\). Without loss of generality, for the remainder of this section we assume that \(\mu=0\). The optimal transmitter and estimator's strategies for a symmetric Gaussian distribution imply that the objective function for the jammer is given by \[\mathcal{J}\big{(}(\gamma_{\eta^{\star},\varphi}^{\star},\eta^{\star}), \varphi\big{)}=\mathbf{E}\bigg{[}\min\Big{\{}(1-\varphi)X^{2},c\Big{\}}\bigg{]} +\varphi\Big{(}\mathbf{E}\big{[}X^{2}\big{]}-d\Big{)}. \tag{37}\] From Proposition 2, the objective function in Eq. (37) is concave with respect to \(\varphi\). Therefore, we can compute the optimal jamming probability \(\varphi^{\star}\). 
Let \(\tilde{\varphi}\) be defined as \[\tilde{\varphi}\stackrel{{\mathrm{def}}}{{=}}\inf\bigg{\{} \varphi\in[0,1)\ \Big{|}\ \int_{\sqrt{e/(1-\tilde{\varphi})}}^{+\infty}x^{2}f(x)\mathrm{d}x=\frac{d}{2} \bigg{\}}. \tag{38}\] **Theorem 2** (Optimal jamming probability for scalar Gaussian sources): _Let \(X\) be a Gaussian random variable with mean \(0\) and variance \(\sigma^{2}\). The optimal jamming probability for the optimal transmission policy in Proposition 1 and the optimal estimation policy in Theorem 1 is_ \[\varphi^{\star}=\left\{\begin{array}{ll}\tilde{\varphi}&\text{if }\int_{ \sqrt{e}}^{+\infty}x^{2}f(x)\mathrm{d}x\geq d/2\\ 0&\text{otherwise.}\end{array}\right. \tag{39}\] Proof:: First, we represent Eq. (37) in integral form as \[\mathcal{J}\big{(}(\gamma_{\eta^{\star},\varphi}^{\star},\eta^{ \star}),\varphi\big{)}=\int_{-\sqrt{e/(1-\varphi)}}^{\sqrt{e/(1-\varphi)}}(1- \varphi)x^{2}f(x)\mathrm{d}x\\ +2\int_{\sqrt{e/(1-\varphi)}}^{+\infty}cf(x)\mathrm{d}x+\varphi \Big{(}\mathbf{E}\big{[}X^{2}\big{]}-d\Big{)}. \tag{40}\] Taking the derivative of the objective function with respect to \(\varphi\), we have \[\mathcal{G}(\varphi) \stackrel{{\mathrm{def}}}{{=}}\nabla_{\varphi}\mathcal{ J}\big{(}(\gamma_{\eta^{\star},\varphi}^{\star},\eta^{\star}),\varphi\big{)} \tag{41}\] \[=2\int_{\sqrt{e/(1-\varphi)}}^{+\infty}x^{2}f(x)\mathrm{d}x-d. \tag{42}\] Notice that \(\mathcal{G}(\varphi)\) is a monotone decreasing function with respect to \(\varphi\) and the following identities hold \[\mathcal{G}(0)=2\int_{\sqrt{e}}^{+\infty}x^{2}f(x)\mathrm{d}x-d \tag{43}\] \[\lim_{\varphi\uparrow 1}\mathcal{G}(\varphi)=-d. \tag{44}\] If \(\mathcal{G}(0)\geq 0\), then the optimal \(\varphi^{\star}=\tilde{\varphi}\) due to the fact that \(\mathcal{G}(\tilde{\varphi})=0\). If \(\mathcal{G}(0)<0\), the objective function is decreasing in \(\varphi\). Therefore, \(\varphi^{\star}=0\). **Lemma 1**: _Let \(X\) be a Gaussian random variable with mean \(0\) and variance \(\sigma^{2}\). The optimal jamming probability \(\varphi^{\star}\) satisfies_ \[\mathcal{J}\big{(}(\gamma_{\eta^{\star},\varphi^{\star}}^{\star},\eta^{\star}, \varphi\big{)}\leq\mathcal{J}\big{(}(\gamma_{\eta^{\star},\varphi^{\star}}^{ \star},\eta^{\star}),\varphi^{\star}\big{)}. \tag{45}\] Proof:: Consider the objective function in integral form as \[\mathcal{J}\big{(}(\gamma_{\eta^{\star},\varphi^{\star}}^{\star},\eta^{\star}), \varphi\big{)}=\int_{-\sqrt{e/(1-\varphi^{\star})}}^{\sqrt{e/(1-\varphi^{\star})}}x ^{2}f(x)\mathrm{d}x\\ +2\int_{\sqrt{e/(1-\varphi^{\star})}}^{+\infty}cf(x)\mathrm{d}x+ \varphi\Big{(}2\int_{\sqrt{e/(1-\varphi^{\star})}}^{+\infty}x^{2}f(x)\mathrm{d}x- d\Big{)}. \tag{46}\] When \(\int_{\sqrt{e}}^{+\infty}x^{2}f(x)\mathrm{d}x\geq d/2\), Theorem 2 implies that the optimal jamming probability is \(\varphi^{\star}=\tilde{\varphi}\) and consequently \(\int_{\sqrt{e/(1-\varphi^{\star})}}^{+\infty}x^{2}f(x)\mathrm{d}x-d/2=0\). Therefore, \(\mathcal{J}\big{(}(\gamma_{\eta^{\star},\varphi^{\star}}^{\star},\eta^{\star}), \varphi\big{)}\) is constant for \(\varphi\in[0,1)\). Conversely, if \(\int_{\sqrt{e}}^{+\infty}x^{2}f(x)\mathrm{d}x<d/2\), Theorem 2 implies that \(\varphi^{\star}=0\). Therefore, \(\int_{\sqrt{e/(1-\varphi^{\star})}}^{+\infty}x^{2}f(x)\mathrm{d}x-d/2<0\). In this case, \(\varphi=0\) maximizes \(\mathcal{J}\big{(}(\gamma_{\eta^{\star},\varphi^{\star}}^{\star},\eta^{\star}), \varphi\big{)}\). 
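In practice, \(\tilde{\varphi}\) is easy to compute numerically. The sketch below (our own illustration, with hypothetical cost and variance values) combines the standard Gaussian tail-moment identity \(\int_{t}^{\infty}x^{2}f(x)\mathrm{d}x=\sigma^{2}\big{(}(t/\sigma)\phi(t/\sigma)+Q(t/\sigma)\big{)}\) with a root-finder applied to \(\mathcal{G}(\varphi)\) in Eq. (42), and then reads off the equilibrium of Theorem 2.

```python
# Numerical sketch: computing phi_tilde of Eq. (38) for a zero-mean scalar
# Gaussian source, via the monotone function G(phi) of Eq. (42).
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def tail_second_moment(t, sigma):
    """Integral of x^2 f(x) dx over (t, +inf) for X ~ N(0, sigma^2)."""
    u = t / sigma
    return sigma**2 * (u * norm.pdf(u) + norm.sf(u))

def G(phi, c, d, sigma):
    """Derivative of the jammer's objective with respect to phi, Eq. (42)."""
    return 2.0 * tail_second_moment(np.sqrt(c / (1.0 - phi)), sigma) - d

def optimal_jamming(c, d, sigma):
    """Optimal phi* per Theorem 2: root of G if G(0) >= 0, otherwise 0."""
    if G(0.0, c, d, sigma) < 0:
        return 0.0
    return brentq(G, 0.0, 1.0 - 1e-9, args=(c, d, sigma))

c, d, sigma = 1.0, 1.0, 2.0                 # illustrative costs and source spread
phi_star = optimal_jamming(c, d, sigma)
threshold = np.sqrt(c / (1.0 - phi_star))   # sensor transmits iff |x| > threshold
print(f"phi* = {phi_star:.4f}, transmission threshold = {threshold:.4f}")
```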
Theorem 3 summarizes the saddle point strategy for the game between a coordinator jointly designing the transmission and estimation strategy against a proactive jammer. **Theorem 3** (saddle point equilibrium for scalar Gaussian sources): _Given a Gaussian source \(X\sim\mathcal{N}(0,\sigma^{2})\), communication and jamming costs \(c,d\geq 0\), the saddle point strategy \((\gamma^{\star},\eta^{\star},\varphi^{\star})\) for the remote estimation game with a proactive jammer is given by:_ 1. _If_ \(\int_{\sqrt{\varepsilon}}^{+\infty}x^{2}f(x)\mathrm{d}x<d/2\)_, the optimal policies are_ \[\gamma^{\star}(x) =\mathbf{1}(x^{2}>c)\] (47) \[\varphi^{\star} =0.\] (48) 2. _If_ \(\int_{\sqrt{\varepsilon}}^{+\infty}x^{2}f(x)\mathrm{d}x\geq d/2\)_, the optimal policies are_ \[\gamma^{\star}(x) =\mathbf{1}\big{(}(1-\tilde{\varphi})x^{2}>c\big{)}\] (49) \[\varphi^{\star} =\tilde{\varphi},\] (50) _where_ \(\tilde{\varphi}\) _is the unique solution of Eq. (_38_)._ In both cases, the optimal estimator is: \[\eta^{\star}(y)=\begin{cases}0,&\text{if}\ \ y\in\{\varnothing,\mathfrak{C}\}\\ x,&\text{if}\ \ y=x.\end{cases} \tag{51}\] Proof:: We need to consider two cases. **Case 1** - Assume that \(2\int_{\sqrt{\varepsilon}}^{+\infty}x^{2}f(x)\mathrm{d}x<d\). If the jammer chooses not to block the channel, i.e., \(\varphi^{\star}=0\), Proposition 1 implies that the corresponding optimal transmission strategy is \(\gamma^{\star}(x)=\mathbf{1}(x^{2}>c)\). Under this pair of jamming and transmission policies, Theorem 1 yields that \(\hat{x}_{0}^{\star}=0\) and \(\hat{x}_{1}^{\star}=0\). Therefore, \[\mathcal{J}\big{(}(\gamma^{\star},\eta^{\star}),\varphi^{\star}\big{)}\leq \mathcal{J}\big{(}(\gamma,\eta),\varphi^{\star}\big{)}. \tag{52}\] If the optimal transmission strategy is \(\gamma^{\star}\) and the optimal estimator is \(\eta^{\star}\), Lemma 1 implies that \[\mathcal{J}\big{(}(\gamma^{\star},\eta^{\star}),\varphi\big{)}\leq\mathcal{J} \big{(}(\gamma^{\star},\eta^{\star}),\varphi^{\star}\big{)}. \tag{53}\] **Case 2** - Assume that \(2\int_{\sqrt{\varepsilon}}^{+\infty}x^{2}f(x)\mathrm{d}x\geq d\). If the jammer blocks the channel with probability \(\tilde{\varphi}\), Proposition 1 implies that the corresponding optimal transmission strategy is \(\gamma^{\star}(x)=\mathbf{1}\big{(}(1-\tilde{\varphi})x^{2}>c\big{)}\). Under this pair of jamming and transmission policies, Theorem 1 yields that \(\hat{x}_{0}^{\star}=0\) and \(\hat{x}_{1}^{\star}=0\). Therefore, \[\mathcal{J}\big{(}(\gamma^{\star},\eta^{\star}),\varphi^{\star}\big{)}\leq \mathcal{J}\big{(}(\gamma,\eta),\varphi^{\star}\big{)}. \tag{54}\] If the optimal transmission strategy is \(\gamma^{\star}\) and the optimal estimator is \(\eta^{\star}\), Lemma 1 implies that \[\mathcal{J}\big{(}(\gamma^{\star},\eta^{\star}),\varphi\big{)}\leq\mathcal{J} \big{(}(\gamma^{\star},\eta^{\star}),\varphi^{\star}\big{)}. \tag{55}\] **Remark 2**: _Notice that in case 2 of Theorem 3, the ratio \(c/(1-\tilde{\varphi})\) is constant for any given value of \(d>0\), which is determined by solving Eq. (38). Therefore, the optimal transmission policy is also uniquely determined by \(d\)._ ### _Reactive jamming of point-to-point collision channels_ In this section, we consider the case in which the attacker can sense whether the channel is occupied or not. Notice that we allow the reactive jammer to block the channel even when the sensor is not transmitting. To the best of our knowledge, the existing literature on reactive jamming attacks precludes that possibility. 
However, there is a reason why the jammer may engage in such counter-intuitive behavior: when the jammer only blocks a transmitted signal, it creates a noiseless binary signaling channel between the transmitter and the receiver, which may be exploited by the coordinator. If the jammer is allowed to "block" the channel when the user is not transmitting, such a binary signaling channel will no longer be noiseless, because there will be uncertainty as to whether the decision variable at the transmitter is zero or one. This scenario is illustrated in Fig. 2.

Fig. 2: Signaling channel between the sensor and the receiver. The jammer controls the transition probabilities \(\alpha\) and \(\beta\). When \(\alpha=\beta=0\), the channel is noiseless, i.e., the receiver can unequivocally decode whether \(U=1\) or \(U=0\) from the output signal \(Y\).

**Proposition 3**: _For a fixed jamming policy parametrized by \(\varphi\in[0,1]^{2}\), and a fixed estimation policy \(\eta\) parametrized by \(\hat{x}\in\mathbb{R}^{2m}\), the optimal transmission policy is:_ \[\gamma^{\star}_{\eta,\varphi}(x) =\mathbf{1}\big{(}\beta\|x-\hat{x}_{1}\|^{2}+c-d\beta<\] \[\alpha\|x-\hat{x}_{1}\|^{2}+(1-\alpha)\|x-\hat{x}_{0}\|^{2}-d \alpha\big{)}. \tag{56}\] Proof:: For a reactive jammer, the random variables \(X\) and \(J\) are conditionally independent given \(U\). Using the law of total expectation, and employing the estimation policy in Eq. (19), the cost function can be expressed as \[\mathcal{J}\big{(}(\gamma,\eta),\varphi\big{)}=\int_{\mathbb{R}^{ m}}\big{[}\beta\|x-\hat{x}_{1}\|^{2}+c-d\beta\big{]}\gamma(x)f(x)\mathrm{d}x+\] \[\int_{\mathbb{R}^{m}}\big{[}\alpha\|x-\hat{x}_{1}\|^{2}+(1- \alpha)\|x-\hat{x}_{0}\|^{2}-d\alpha\big{]}\big{(}1-\gamma(x)\big{)}f(x) \mathrm{d}x. \tag{57}\] For fixed \(\hat{x}\in\mathbb{R}^{2m}\) and \(\varphi\in[0,1]^{2}\), the transmission policy \(\gamma\) that minimizes Eq. (57) is obtained by comparing the arguments of the two integrals as follows: \(x\in\{\xi\ |\ \gamma^{\star}_{\eta,\varphi}(\xi)=1\}\) if and only if \[\beta\|x-\hat{x}_{1}\|^{2}+c-d\beta<\alpha\|x-\hat{x}_{1}\|^{2}+(1-\alpha)\|x- \hat{x}_{0}\|^{2}-d\alpha. \tag{58}\] Given the optimal transmitter's strategy in Proposition 3, the objective function becomes \[\mathcal{J}\big{(}(\gamma^{\star}_{\eta,\varphi},\eta),\varphi \big{)}=\mathbf{E}\Big{[}\min\big{\{}\beta\|X-\hat{x}_{1}\|^{2}+c-d\beta,\] \[\alpha\|X-\hat{x}_{1}\|^{2}+(1-\alpha)\|X-\hat{x}_{0}\|^{2}-d \alpha\big{\}}\Big{]}\overset{\text{def}}{=}\tilde{\mathcal{J}}(\hat{x}, \varphi). \tag{59}\] Therefore, the coordinator wants to minimize \(\tilde{\mathcal{J}}(\hat{x},\varphi)\) over \(\hat{x}\in\mathbb{R}^{2m}\) and the jammer wants to maximize it over \(\varphi\in[0,1]^{2}\). As in Section III-A, for fixed \(\hat{x}\in\mathbb{R}^{2m}\), \(\tilde{\mathcal{J}}\) is a concave function of \(\varphi\) for any pdf \(f\). However, for fixed \(\varphi\in[0,1]^{2}\), \(\tilde{\mathcal{J}}\) is non-convex in \(\hat{x}\). Unfortunately, the structure of Eq. (59) does not allow us to use the same techniques employed for the proactive jammer to find a saddle point equilibrium. It is also not clear if saddle point solutions even exist. For the remainder of this section, we assume that the coordinator and the jammer are solving the following minimax optimization problem4: Footnote 4: A solution for the minimax problem corresponds to finding a _security_ (or robust) policy for the coordinator [29]. 
\[\min_{\hat{x}\in\mathbb{R}^{2\alpha}}\max_{\varphi\in[0,1]^{2}}\tilde{ \mathcal{J}}(\hat{x},\varphi), \tag{60}\] where \(\tilde{\mathcal{J}}(\hat{x},\varphi)\) is given by Eq. (59). A useful alternative to the saddle point equilibrium are the solutions that satisfy the first-order stationarity conditions of the minimization and the maximization problems, yielding in a larger class of policies, called _first order Nash equilibria_ (FNE) [30, 31, 32, 33]. **Definition 3** (Approximate First order Nash equilibrium): _Let \(\varepsilon>0\). A pair of policies \((\hat{x}^{\star},\varphi^{\star})\in\mathbb{R}^{2m}\times[0,1]^{2}\) is an approximate First-order Nash-equilibrium (\(\varepsilon\)-FNE) of the game if_ \[\|\nabla_{\hat{x}}\tilde{\mathcal{J}}(\hat{x}^{\star},\varphi^{\star})\|_{2} \leq\varepsilon \tag{61}\] _and_ \[\max_{\varphi\in[0,1]^{2}}\langle\nabla_{\varphi}\tilde{\mathcal{J}}(\hat{x}^ {\star},\varphi^{\star}),\varphi-\varphi^{\star}\rangle\leq\varepsilon. \tag{62}\] **Proposition 4**: _The function \(\tilde{\mathcal{J}}(\hat{x},\varphi)\) admits the following subgradients with respect to \(\hat{x}\) and \(\varphi\):_ \[\nabla_{\hat{x}}\tilde{\mathcal{J}}(\hat{x},\varphi)=\mathbf{E} \Bigg{[}\begin{bmatrix}0_{m\times 1}\\ -2\beta(X-\hat{x}_{1})\end{bmatrix}\mathbf{1}(\gamma^{\star}_{\eta,\varphi}(X )=1)\\ +\begin{bmatrix}-2(1-\alpha)(X-\hat{x}_{0})\\ -2\alpha(X-\hat{x}_{1})\end{bmatrix}\mathbf{1}(\gamma^{\star}_{\eta,\varphi}(X )=0)\Bigg{]} \tag{63}\] _and_ \[\nabla_{\varphi}\tilde{\mathcal{J}}(\hat{x},\varphi)=\mathbf{E} \Bigg{[}\begin{bmatrix}0\\ \|X-\hat{x}_{1}\|^{2}-d\end{bmatrix}\cdot\mathbf{1}(\gamma^{\star}_{\eta, \varphi}(X)=1)\\ +\begin{bmatrix}\|X-\hat{x}_{1}\|^{2}-\|X-\hat{x}_{0}\|^{2}-d \end{bmatrix}\mathbf{1}(\gamma^{\star}_{\eta,\varphi}(X)=0)\Bigg{]}. \tag{64}\] This result follows from the Leibniz rule. Problems in the form of Eq. (60) where the inner optimization problem is concave and the outer optimization problem is non-convex have been studied under assumptions on the gradients being Lipschitz continuous [31, 33]. Under such conditions an algorithm known as the (Projected) Gradient Ascent-Descent (GAD) converges to an \(\varepsilon\)-FNE. However, the gradients in Eqs. (63) and (64) are not Lipschitz continuous. We will resort to an alternative algorithm that leverages the structure of a difference of convex decomposition present in our problem. #### Iv-B1 Optimization algorithm for a reactive jammer To obtain a pair of \(\varepsilon\)-FNE to the problem in Eq. (60), we alternate between a _projected gradient ascent_ (PGA) step for the inner optimization problem; and a _convex-concave procedure_ (CCP) step for the outer optimization problem. ``` 0: PDF \(f\), transmission cost \(c\), jamming cost \(d\) 0: Estimated result \(\hat{x}^{\star}\) and \(\varphi^{\star}\) 1: Initialize \(k\gets 0\), \(\varepsilon\), \(\hat{x}^{(0)}\) and \(\varphi^{(0)}\) 2:repeat 3:\(\varphi^{(k+1)}=\mathcal{P}_{[0,1]^{2}}\big{(}\varphi^{(k)}+\lambda_{k}\, \nabla_{\varphi}\tilde{\mathcal{J}}(\hat{x}^{(k)},\varphi^{(k)})\big{)}\) 4:\(\hat{x}^{(k+1)}=\mathcal{A}^{\dagger}(\varphi^{(k+1)})\,g(\hat{x}^{(k)},\hat{ \varphi}^{(k+1)})+\mu\) 5:\(k\gets k+1\) 6:until \(\varepsilon\)-FNE conditions (Eqs. 
(61) and (62)) are satisfied ``` **Algorithm 1** PGA-CCP algorithm We start with the description of the PGA step at a point \((\hat{x}^{(k)},\varphi^{(k)})\): \[\varphi^{(k+1)}=\mathcal{P}_{[0,1]^{2}}\big{(}\varphi^{(k)}+\lambda_{k}\, \nabla_{\varphi}\tilde{\mathcal{J}}(\hat{x}^{(k)},\varphi^{(k)})\big{)}, \tag{65}\] where \(\{\lambda_{k}\}\) is a step-size sequence (e.g. \(\lambda_{k}=0.1/\sqrt{k}\)) and the projection operator is defined as \[\mathcal{P}_{[0,1]^{2}}(\varphi)\mathop{=}\limits^{\mathrm{def}}\min_{\tilde{ \varphi}\in[0,1]^{2}}\|\tilde{\varphi}-\varphi\|_{2}, \tag{66}\] which is equal to \[\mathcal{P}_{[0,1]^{2}}\bigg{(}\begin{bmatrix}\alpha\\ \beta\end{bmatrix}\bigg{)}=\bigg{[}\begin{array}{c}\max\big{\{}0,\min\{1, \alpha\}\big{\}}\\ \max\big{\{}0,\min\{1,\beta\}\big{\}}\end{array}\bigg{]}. \tag{67}\] To update \(\hat{x}^{(k)}\) for a fixed \(\varphi^{(k+1)}\), we use the property that Eq. (59) can be decomposed as a difference of convex functions (DC decomposition). Using the DC decomposition we obtain a specialized descent algorithm, which is guaranteed to converge to stationary points of Eq. (59) for a fixed \(\varphi^{(k+1)}\). Because the CCP uses more information about the structure of the objective function than standard Gradient Descent methods, it often leads to faster convergence [34, 35]. Notice that: \[\tilde{\mathcal{J}}(\hat{x},\varphi)=\mathcal{F}(\hat{x},\varphi)-\mathcal{G}( \hat{x},\varphi), \tag{68}\] where \[\mathcal{F}(\hat{x},\varphi)\mathop{=}\limits^{\mathrm{def}} (1-\alpha)\|\hat{x}_{0}\|^{2}+(\alpha+\beta)\|\hat{x}_{1}\|^{2}\\ +(1+\beta)\big{(}\operatorname{trace}(\Sigma)+\|\mu\|^{2}\big{)}+c-d( \alpha+\beta)\\ -2\big{[}(\beta+\alpha)\hat{x}_{1}+(1-\alpha)\hat{x}_{0}\big{]}^{\mathsf{T}}\mu, \tag{69}\] and \[\mathcal{G}(\hat{x},\varphi)\mathop{=}\limits^{\mathrm{def}} \mathbf{E}\Big{[}\max\big{\{}\beta\|X-\hat{x}_{1}\|^{2}+c-d\beta,\] \[\alpha\|X-\hat{x}_{1}\|^{2}+(1-\alpha)\|X-\hat{x}_{0}\|^{2}-d\alpha \Big{\}}\Big{]}. \tag{70}\] The CCP for computing a local minima for the outer optimization problem is given by \[\hat{x}^{(k+1)}=\arg\min_{\hat{x}}\Big{\{}\mathcal{F}(\hat{x},\varphi^{(k+1)})- \mathcal{G}_{\mathrm{affine}}(\hat{x},\varphi^{(k+1)};\hat{x}^{(k)})\Big{\}}\,, \tag{71}\] where \(\mathcal{G}_{\mathrm{affine}}(\hat{x},\varphi^{(k+1)};\hat{x}^{(k)})\) is the affine approximation of \(\mathcal{G}(\hat{x},\varphi^{(k+1)})\) with respect to \(\hat{x}\) at \(\hat{x}^{(k)}\), while keeping \(\varphi^{(k+1)}\) fixed, i.e., \[\mathcal{G}_{\mathrm{affine}}(\hat{x},\varphi^{(k+1)};\hat{x}^{(k)}) =\mathcal{G}(\hat{x}^{(k)},\varphi^{(k+1)})\] \[+g(\hat{x}^{(k)},\varphi^{(k+1)})^{\mathsf{T}}(\hat{x}-\hat{x}^{(k)}) \tag{72}\] and \(g(\hat{x},\varphi)\) is the gradient of \(\mathcal{G}(\hat{x},\varphi)\) with respect to \(\hat{x}\). Because \(\mathcal{F}\) is a quadratic function of \(\hat{x}\) for a fixed \(\varphi\), we may use the first-order necessary optimality condition of problem Eq. (71) to find the recursion for \(\hat{x}^{(k+1)}\) in closed form: \[\nabla_{\hat{x}}\mathcal{F}(\hat{x}^{(k+1)},\varphi^{(k+1)})=g(\hat{x}^{(k)}, \varphi^{(k+1)}). \tag{73}\] The partial gradient of \(\mathcal{F}(\hat{x},\varphi)\) with respect to \(\hat{x}\) is \[\nabla_{\hat{x}}\mathcal{F}(\hat{x},\varphi)=\left[\begin{array}{c}2(1- \alpha)(\hat{x}_{0}-\mu)\\ 2(\alpha+\beta)(\hat{x}_{1}-\mu)\end{array}\right]. 
\tag{74}\] The partial gradient of \(\mathcal{G}(\hat{x},\varphi)\) with respect to \(\hat{x}\) is \[g(\hat{x},\varphi)=\mathbf{E}\Bigg{[}\begin{bmatrix}0_{m\times 1} \\ -2\beta(X-\hat{x}_{1})\end{bmatrix}\mathbf{1}(\gamma_{\eta,\varphi}^{\star}(X) =0)\\ +\begin{bmatrix}-2(1-\alpha)(X-\hat{x}_{0})\\ -2\alpha(X-\hat{x}_{1})\end{bmatrix}\mathbf{1}(\gamma_{\eta,\varphi}^{\star}(X )=1)\Bigg{]}. \tag{75}\] Finally, define \(\mathcal{A}:[0,1]^{2}\to\mathbb{R}^{2m\times 2m}\) as \[\mathcal{A}(\varphi)\mathop{=}\limits^{\mathrm{def}}\left[\begin{array}{cc}2 (1-\alpha)I_{m\times m}&0_{m\times m}\\ 0_{m\times m}&2(\alpha+\beta)I_{m\times m}\end{array}\right], \tag{76}\] and \(\mathcal{A}^{\dagger}\) denotes its Moore-Penrose pseudo-inverse. Then, the update of CCP can be compactly represented as \[\hat{x}^{(k+1)}=\mathcal{A}^{\dagger}\big{(}\varphi^{(k+1)}\big{)}\,g\big{(} \hat{x}^{(k)},\varphi^{(k+1)}\big{)}+\mu. \tag{77}\] ### _Numerical results_ In this subsection, we provide policies that satisfy \(\varepsilon\)-FNE. Convergence is studied using the "performance index" defined below \[\text{FNE}(\hat{x}^{(k)},\varphi^{(k)})\mathop{=}\limits^{ \mathrm{def}}\max\big{\{}\|\nabla_{\hat{x}}\tilde{\mathcal{J}}(\hat{x}^{(k)}, \varphi^{(k)})\|_{2},\] \[\max_{\varphi\in[0,1]^{2}}\langle\nabla_{\varphi}\tilde{ \mathcal{J}}(\hat{x}^{(k)},\varphi^{(k)}),\varphi-\varphi^{(k)}\rangle\big{\}}. \tag{78}\] In this section, "optimality" is in the \(\varepsilon\)-FNE sense. We begin by presenting the optimal estimation policies for one-dimensional observations. Fig. 3 shows the optimal representation symbols \(\hat{x}_{0}^{\star}\) and \(\hat{x}_{1}^{\star}\) as a function of \(\sigma^{2}\) for different jamming cost \(d\), where \(X\sim\mathcal{N}(0,\sigma^{2})\), \(c=1\) and \(\varepsilon=10^{-5}\). Notice that the representation symbols obtained for the collision and no-transmission in the presence of the reactive jammer are always distinct and neither is equal to the mean \(0\). This is in contrast with the the proactive jammer case, in which \(\hat{x}_{0}^{\star}=\hat{x}_{1}^{\star}=0\). Therefore, the assumption of a fixed receiver with \(\hat{x}_{0}^{\star}=\hat{x}_{1}^{\star}=0\), as in [22, 23], leads to a loss of optimality. We then present the optimal jamming policies for one-dimensional observations. Figure 4 shows the optimal jamming probabilities \(\alpha^{\star}\) and \(\beta^{\star}\) as a function of \(\sigma^{2}\) for different jamming cost \(d\) with \(X\sim\mathcal{N}(0,\sigma^{2})\), \(c=1\) and \(\varepsilon=10^{-5}\). Notice that the optimal jamming probabilities decrease as \(d\) increases. Besides, the optimal jamming probability when the sensor does not transmit can be nonzero when \(d=1\) and \(d=1.2\), where the jammer aims to deceive the estimator into thinking there has been a transmission that has been blocked. However, when \(d=1.5\) the optimal jamming probability when the sensor does not transmit is zero for all \(\sigma^{2}\) since the jamming cost is high and it is not worth it to deceive the estimator. We also compare the performance of our proposed PGA-CCP and the traditional GDA algorithms. Figure 5 presents the convergence curves of PGA-CCP vs. GDA for different values of the variance \(\sigma^{2}\), where \(c=1,d=1\), and \(X\sim\mathcal{N}(0,\sigma^{2})\). In this study, we set the step size of PGA-CCP as \(\lambda=0.1\) and the step sizes for GA and GD in GDA as \(\lambda_{\mathrm{GA}}=0.1\) and \(\lambda_{\mathrm{GD}}=0.01\), respectively. 
These are values consistent with the ones suggested by the analysis in [33]. We performed 100 Monte Carlo simulations for each algorithm with random initial conditions. The results indicate that PGA-CCP converges more than six times faster than GDA. Furthermore, GDA oscillates more with the increase of \(\sigma^{2}\) while PGA-CCP decreases steadily with a small standard deviation from the mean of the sample paths. We proceed by presenting the simulation results for multi-dimensional observations. Figure 6 shows the convergence to \(\varepsilon\)-FNE for multidimensional observations, with \(c=1\) and \(d=1\). We make 100 Monte Carlo simulations for each algorithm. For each Monte Carlo simulation, the expectations used in Algorithm 1 are approximated by the average of \(10^{4}\) samples drawn from \(X\sim\mathcal{N}(0_{m},I_{m\times m})\). Notice that PGA-CCP converges quickly to zero while GDA does not even converge to a \(0.1\)-FNE when \(m=10\) and \(m=50\). When the dimension of measurements is \(m=100\), PGA-CCP can achieve \(0.02\)-FNE while GDA does not even converge to \(0.3\)-FNE. Therefore, the numerical examples herein show that our heuristic algorithm is promising relative to GDA especially for high-dimensional remote estimation problems5. Footnote 5: The code used to obtain all the examples in this paper is available at GitHub [https://github.com/mulervasconeclos/IEEE-TAC2023.git](https://github.com/mulervasconeclos/IEEE-TAC2023.git). ## IV Large-scale networks In this section, we consider the remote estimation problem over large-scale networks in the presence of a proactive jammer, where the network consists of countably infinitely many legitimate transmitters that can support a fraction \(\bar{\kappa}\in(0,1)\) of packets per time-slot transmitted simultaneously, i.e., \(\lim_{n\rightarrow\infty}\kappa(n)/n=\bar{\kappa}\). To keep the notation simple, we will only consider the scalar observation case. However, it is straightforward to extend our results to the vector case using the techniques developed in Appendix A. Our goal is to obtain an expression for the limiting objective function when \(n\) approaches infinity and characterize its saddle point equilibrium. Before that, we will provide the objective function of the remote estimation problem over medium-scale networks, where \(n\) is finite. ### _Objective function in medium-scale networks_ In this section, we consider the remote estimation problem over the medium-scale networks, which consists of \(n<\infty\) transmitters and support \(\kappa(n)<n\) simultaneous packets. Let \(\{U_{i}\}_{i=1}^{n}\) be the collection of transmission decisions at the sensors. For a given realization of \(\{U_{i}\}_{i=1}^{n}\in\{0,1\}^{n}\), define \(\mathbb{T}=\{i\mid U_{i}=1\}\) as the index set of all transmitting sensors. Given the channel input \(\{S_{i}\}_{i=1}^{n}\) and the jammer's decision \(J\), the output of the collision channel of capacity \(\kappa(n)<n\) is given by \[Y=\begin{cases}\varnothing,&\text{if }P=0\text{ and }J=0\\ \{(i,X_{i})\mid i\in\mathbb{T}\},&\text{if }1\leq P\leq\kappa(n)\text{ and }J=0\\ \mathfrak{C},&\text{if }P>\kappa(n)\text{ or }J=1,\end{cases} \tag{79}\] where \(\varnothing\) means that the channel is idle and \(\mathfrak{C}\) means that the receiver observes a collision. There are two kinds of collision events. Collisions of the first type are called _intrinsic_ and are caused when the number of transmissions is above the network capacity, i.e., \(P>\kappa(n)\). 
Collisions of the second type are called _extrinsic_ and are caused when the jammer decides to block the channel, i.e., \(J=1\). Finally, for the measurement at the \(i\)-th sensor, the estimator uses a policy \(\eta_{i}\) determined by \[\eta_{i}(y)=\begin{cases}x_{i},&\text{if }\ P\leq\kappa(n),\ i\in\mathbb{T} \text{ and }J=0\\ \hat{x}_{i0},&\text{if }\ P\leq\kappa(n),\ i\notin\mathbb{T}\text{ and }J=0\\ \hat{x}_{i1},&\text{if }\ P>\kappa(n)\text{ or }J=1,\end{cases} \tag{80}\] where \(\hat{x}_{i0},\hat{x}_{i1}\in\mathbb{R}^{m}\) are the representation symbols used by the estimator when the \(i\)-th sensor's observation is not transmitted and when a collision occurs, respectively. Since the observations at all sensors are i.i.d., it is natural to assume that the sensors use the same transmission strategy \(\gamma\). Similarly, the estimators use the same estimation strategy \(\eta\), i.e., \(\hat{x}_{i0}=\hat{x}_{0}\) and \(\hat{x}_{i1}=\hat{x}_{1}\), \(i\in\{1,\cdots,n\}\). Let \(\hat{x}\stackrel{{\mathrm{def}}}{{=}}(\hat{x}_{0},\hat{x}_{1})\).

Fig. 5: Convergence curves of PGA-CCP vs. GDA for different variance \(\sigma^{2}\), where \(m=1\), \(c=1,d=1\), and \(X\sim\mathcal{N}(0,\sigma^{2})\). **Top:**\(\sigma^{2}=1\); **Middle:**\(\sigma^{2}=3\); **Bottom:**\(\sigma^{2}=5\). The results are obtained by taking the average of 100 Monte Carlo simulations.

Fig. 6: Convergence curves of PGA-CCP vs. GDA for multidimensional state, where \(c=1,d=1\), and \(X\sim\mathcal{N}(0_{m},I_{m\times m})\). **Top:**\(m=10\); **Middle:**\(m=50\); **Bottom:**\(m=100\). The sample size for the estimation of gradients is \(10^{4}\).

Using the law of total expectation and the mutual independence of \(J\), \(U_{i}\) and \(\{U_{\ell}\}_{\ell\neq i}\), the objective function in Eq. (16) can be expressed as \[\mathcal{J}_{n}\big{(}(\gamma,\eta),\varphi\big{)}=\frac{1}{n}\sum_{i= 1}^{n}\bigg{\{}\varphi\Big{(}\mathbf{E}\big{[}(X_{i}-\hat{x}_{1})^{2}\big{]}-d \Big{)}+c\mathbf{P}(U_{i}=1)\] \[+(1-\varphi)\bigg{[}\mathbf{E}\big{[}(X_{i}-\hat{x}_{0})^{2}\mid U _{i}=0\big{]}\mathbf{P}(U_{i}=0)\mathbf{P}\bigg{(}\sum_{\ell\neq i}U_{\ell} \leq\kappa(n)\bigg{)}\] \[+\mathbf{E}\big{[}(X_{i}-\hat{x}_{1})^{2}\mid U_{i}=0\big{]} \mathbf{P}(U_{i}=0)\mathbf{P}\bigg{(}\sum_{\ell\neq i}U_{\ell}>\kappa(n)\bigg{)}\] \[+\mathbf{E}\big{[}(X_{i}-\hat{x}_{1})^{2}\mid U_{i}=1\big{]} \mathbf{P}(U_{i}=1)\mathbf{P}\bigg{(}\sum_{\ell\neq i}U_{\ell}>\kappa(n)-1 \bigg{)}\bigg{]}\bigg{\}}. \tag{81}\] Since \(\{X_{i}\}_{i=1}^{n}\) is i.i.d., we may simplify the objective function in Eq. (81) as follows: \[\mathcal{J}_{n}\big{(}(\gamma,\eta),\varphi\big{)}=\varphi\Big{(} \mathbf{E}\big{[}(X-\hat{x}_{1})^{2}\big{]}-d\Big{)}+c\mathbf{P}(U=1)\] \[+(1-\varphi)\bigg{[}\mathbf{E}\Big{[}(X-\hat{x}_{0})^{2}\big{(}1 -\gamma(X)\big{)}\Big{]}\mathcal{F}_{n-1,\kappa(n)}(\gamma)\] \[+\mathbf{E}\Big{[}(X-\hat{x}_{1})^{2}\big{(}1-\gamma(X)\big{)} \Big{]}\big{(}1-\mathcal{F}_{n-1,\kappa(n)}(\gamma)\big{)}\] \[+\mathbf{E}\Big{[}(X-\hat{x}_{1})^{2}\gamma(X)\Big{]}\big{(}1- \mathcal{F}_{n-1,\kappa(n)-1}(\gamma)\big{)}\bigg{]}, \tag{82}\] where \[\mathcal{F}_{n,\kappa}(\gamma)\mathop{=}\limits^{\mathrm{def}}\sum_{m=0}^{ \kappa}\binom{n}{m}\mathbf{P}(U=1)^{m}\big{(}1-\mathbf{P}(U=1)\big{)}^{n-m}. \tag{83}\]

### _Objective function in large-scale networks_

Taking the limit of \(\mathcal{J}_{n}\) in Eq. (82), we can compute the objective function for large-scale networks. 
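The mechanism behind this limit can be illustrated numerically before it is derived formally: the binomial factor \(\mathcal{F}_{n-1,\kappa(n)}(\gamma)\) in Eq. (83) concentrates on an indicator of \(\mathbf{P}(U=1)\) relative to \(\bar{\kappa}\) as \(n\) grows. The sketch below is our own illustration; the normalized capacity \(\bar{\kappa}=0.3\) and the transmission probabilities are made-up values.

```python
# Sketch: the binomial factor F_{n-1, kappa(n)}(gamma) of Eq. (83) concentrates
# on an indicator as n grows, which is what makes the large-scale limit tractable.
import numpy as np
from scipy.stats import binom

kappa_bar = 0.3                      # normalized capacity, kappa(n)/n -> kappa_bar

def F(n, kappa_n, p):
    """F_{n, kappa}(gamma) of Eq. (83): P(Binomial(n, p) <= kappa)."""
    return binom.cdf(kappa_n, n, p)

for p in (0.2, 0.4):                 # P(U = 1) below and above kappa_bar
    vals = [F(n - 1, int(np.ceil(kappa_bar * n)), p) for n in (10, 100, 1000, 10000)]
    print(f"P(U=1) = {p}: " + ", ".join(f"{v:.4f}" for v in vals))
# Expected trend: -> 1 when P(U=1) < kappa_bar, -> 0 when P(U=1) > kappa_bar.
```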
It is important to notice that we are constraining \(\mathbf{\gamma}\) and \(\mathbf{\eta}\) to be homogeneous strategy profiles. This can be justified by symmetry of the underlying probabilistic model. Additionally, the different sensing elements in large-scale systems are mass produced, and exhibit nearly identical characteristics. Therefore, it makes sense that the decision rules implemented by them are identical. We will make use of the following variants of the standard Chernoff's inequality [36, Theorems 4.4 and 4.5]. **Lemma 2** (Chernoff's inequality): _Consider a collection \(\{U_{\ell}\}_{\ell=1}^{n}\) of independent Bernoulli variables with probability \(p\). Let \(S_{n}=\sum_{\ell=1}^{n}U_{\ell}\) and \(\mu=\mathbf{E}[S_{n}]=np\), then_ 1. \(\mathbf{P}\big{(}S_{n}\geq(1+\delta)\mu\big{)}\leq\exp(-\frac{\mu\delta^{2}}{2 +\delta})\)_, for any_ \(\delta>0\)_;_ 2. \(\mathbf{P}\big{(}S_{n}\leq(1-\delta)\mu\big{)}\leq\exp(-\frac{\mu\delta^{2}}{2 })\)_, for any_ \(0<\delta<1\)_._ **Lemma 3**: _Suppose that \(\lim_{n\to\infty}\kappa(n)/n=\bar{\kappa}\). Let \(\{U_{\ell}\}_{\ell=1}^{n}\) be i.i.d. Bernoulli variables with \(\mathbf{P}(U_{\ell}=1)=p\) and \(\mathbf{P}(U_{\ell}=0)=1-p\). The following holds_ \[\lim_{n\to\infty}\mathbf{P}\bigg{(}\sum_{\ell=1}^{n}U_{\ell}\leq\kappa(n)\bigg{)} =\begin{cases}1&\text{if}\ \ p<\bar{\kappa}\\ 0&\text{if}\ \ p>\bar{\kappa}.\end{cases} \tag{84}\] Proof:: Since \(\lim_{n\to\infty}\kappa(n)/n=\bar{\kappa}\), then for any \(\varepsilon>0\), there exists a natural number \(N\) such that \[\bigg{|}\frac{\kappa(n)}{n}-\bar{\kappa}\bigg{|}<\varepsilon,\ n\geq N. \tag{85}\] 1. For \(p<\bar{\kappa}\), fix \(\varepsilon\) such that \(\varepsilon\leq(\bar{\kappa}-p)/2\). Then, there exists an \(N\) such that \[\delta\mathop{=}\limits^{\mathrm{def}}\frac{\kappa(n)}{np}-1\in\Big{(}\frac{ \bar{\kappa}-p}{2p},\frac{3(\bar{\kappa}-p)}{2p}\Big{)}.\] (86) From Lemma 2, we have \[\mathbf{P}\bigg{(}\sum_{\ell=1}^{n}U_{\ell}>\kappa(n)\bigg{)}= \mathbf{P}\big{(}S_{n}>(1+\delta)\mu\big{)}\] \[\leq\exp\Big{(}-\frac{\mu\delta^{2}}{2+\delta}\Big{)}\leq\exp \bigg{(}-\frac{np(\frac{\bar{\kappa}-p}{2p})^{2}}{2+\frac{3(\bar{\kappa}-p)}{2 p}}\bigg{)}\] \[=\exp\bigg{(}-\frac{n(\bar{\kappa}-p)^{2}}{6\bar{\kappa}+2p} \bigg{)}.\] (87) Let \(N^{\prime}>\frac{6\bar{\kappa}+2p}{(\bar{\kappa}-p)^{2}}\ln(\frac{1}{ \varepsilon})\), then for all \(n\geq\max\{N,N^{\prime}\}\), we have \[\mathbf{P}\bigg{(}\sum_{\ell=1}^{n}U_{\ell}>\kappa(n)\bigg{)}<\varepsilon.\] (88) Therefore, \[\lim_{n\to\infty}\mathbf{P}\bigg{(}\sum_{\ell=1}^{n}U_{\ell}>\kappa(n)\bigg{)}=0\] \[\Rightarrow\lim_{n\to\infty}\mathbf{P}\bigg{(}\sum_{\ell=1}^{n}U_{ \ell}\leq\kappa(n)\bigg{)}=1.\] (89) 2. For \(p>\bar{\kappa}\), fix \(\varepsilon\) such that \(\varepsilon\leq\min\{p-\bar{\kappa},\bar{\kappa}\}/2\). Then, there exists an \(N\) such that \[\delta\mathop{=}\limits^{\mathrm{def}}1-\frac{\kappa(n)}{np}\in\Big{(}\frac{p- \bar{\kappa}}{2p},\min\Big{\{}\frac{3(p-\bar{\kappa})}{2p},\frac{2p-\bar{\kappa}}{2 p}\Big{\}}\Big{)},\] (90) for all \(n\geq N\). 
From Lemma 2, we have \[\mathbf{P}\bigg{(}\sum_{\ell=1}^{n}U_{\ell}\leq\kappa(n)\bigg{)}= \mathbf{P}\big{(}S_{n}\leq(1-\delta)\mu\big{)}\] \[\leq\exp\Big{(}-\frac{\mu\delta^{2}}{2}\Big{)}\leq\exp\bigg{(}- \frac{np(\frac{p-\bar{\kappa}}{2p})^{2}}{2}\bigg{)}\] \[=\exp\bigg{(}-\frac{n(p-\bar{\kappa})^{2}}{8p}\bigg{)}.\] (91) Let \(N^{\prime}>\frac{8p}{(p-\bar{\kappa})^{2}}\ln(\frac{1}{\varepsilon})\), then for all \(n\geq\max\{N,N^{\prime}\}\), we have \[\mathbf{P}\bigg{(}\sum_{\ell=1}^{n}U_{\ell}\leq\kappa(n)\bigg{)}<\varepsilon.\] (92) Therefore, \[\lim_{n\to\infty}\mathbf{P}\bigg{(}\sum_{\ell=1}^{n}U_{\ell}\leq\kappa(n)\bigg{)}=0.\] (93) **Proposition 5**: _If \(\lim_{n\to\infty}\kappa(n)/n=\bar{\kappa}\), then the following holds:_ \[\lim_{n\to\infty}\mathcal{F}_{n-1,\kappa(n)}(\gamma)=\mathbf{1}\big{(}\mathbf{P}(U =1)\leq\bar{\kappa}\big{)}\ \ \mathrm{a.e.} \tag{94}\] _and_ \[\lim_{n\to\infty}\mathcal{F}_{n-1,\kappa(n)-1}(\gamma)=\mathbf{1}\big{(}\mathbf{P}( U=1)\leq\bar{\kappa}\big{)}\ \ \mathrm{a.e.} \tag{95}\] Therefore, in the asymptotic regime, we have \[\mathcal{J}_{\infty}\big{(}(\gamma,\eta),\varphi\big{)}=\varphi \big{(}\mathbf{E}\big{[}(X-\hat{x}_{1})^{2}\big{]}-d\big{)}+c\mathbf{P}(U=1)\\ +(1-\varphi)\Big{[}\mathbf{E}\big{[}(X-\hat{x}_{0})^{2}\mathbf{1} (U=0)\big{]}\mathbf{1}\big{(}\mathbf{P}(U=1)\leq\bar{\kappa}\big{)}\\ +\mathbf{E}\big{[}(X-\hat{x}_{1})^{2}\big{]}\mathbf{1}\big{(} \mathbf{P}(U=1)>\bar{\kappa}\big{)}\Big{]}, \tag{96}\] which is equivalent to \[\mathcal{J}_{\infty}\big{(}(\gamma,\eta),\varphi\big{)}=\varphi \big{(}\mathbf{E}\big{[}(X-\hat{x}_{1})^{2}\big{]}-d\big{)}+c\mathbf{P}(U=1)\\ +\begin{cases}(1-\varphi)\mathbf{E}\big{[}(X-\hat{x}_{0})^{2} \mathbf{1}(U=0)\big{]},&\text{if }\mathbf{P}(U=1)\leq\bar{\kappa}\\ (1-\varphi)\mathbf{E}\big{[}(X-\hat{x}_{1})^{2}\big{]},&\text{if }\mathbf{P}(U=1)> \bar{\kappa}.\end{cases} \tag{97}\] Since the coordinator can choose the value of \(\mathbf{P}(U=1)\) by adjusting the transmission policy, the problem is equivalent to solving the following two problems and choosing the one with the smaller optimal value: \[\min_{\gamma,\eta}\max_{\varphi} \varphi\,\,\big{(}\mathbf{E}\big{[}(X-\hat{x}_{1})^{2}\big{]}-d \big{)}+c\mathbf{P}(U=1)\] \[+(1-\varphi)\mathbf{E}\big{[}(X-\hat{x}_{0})^{2}\mathbf{1}(U=0) \big{]}\] \[\text{subject to} \mathbf{P}(U=1)\leq\bar{\kappa}, \tag{98}\] and \[\min_{\gamma,\eta}\max_{\varphi} \varphi\,\,\big{(}\mathbf{E}\big{[}(X-\hat{x}_{1})^{2}\big{]}-d \big{)}+c\mathbf{P}(U=1)\] \[+(1-\varphi)\mathbf{E}\big{[}(X-\hat{x}_{1})^{2}\big{]}\] \[\text{subject to} \mathbf{P}(U=1)>\bar{\kappa}. \tag{99}\] ### _Characterization of saddle points solutions_ Let \[\mathcal{L}\big{(}(\gamma,\eta),\varphi\big{)}\stackrel{{ \mathrm{def}}}{{=}}\varphi\,\big{(}\mathbf{E}[(X-\hat{x}_{1})^{2}]-d\big{)}+c \mathbf{P}(U=1)\\ +(1-\varphi)\mathbf{E}\big{[}(X-\hat{x}_{0})^{2}\mathbf{1}(U=0) \big{]}, \tag{100}\] with \(\mathbf{P}(U=1)\leq\bar{\kappa}\), and \[\mathcal{U}\big{(}(\gamma,\eta),\varphi\big{)}=\varphi\,\big{(} \mathbf{E}\big{[}(X-\hat{x}_{1})^{2}\big{]}-d\big{)}+c\mathbf{P}(U=1)\\ +(1-\varphi)\mathbf{E}\big{[}(X-\hat{x}_{1})^{2}\big{]}, \tag{101}\] with \(\mathbf{P}(U=1)>\bar{\kappa}\). The following result shows that the optimal objective function value is always obtained by solving Eq. (98). **Proposition 6**: _Let \(\gamma^{\star},\eta^{\star}\) and \(\varphi^{\star}\) be a saddle point of Eq. (98). 
Then, we have_ \[\mathcal{L}\big{(}(\gamma^{\star},\eta^{\star}),\varphi^{\star}\big{)}\leq \mathcal{U}\big{(}(\gamma,\eta),\varphi\big{)}, \tag{102}\] _for all \(\gamma\) such that \(\mathbf{P}(U=1)>\bar{\kappa}\), and for all \(\varphi\in[0,1]\)._ For the two problems in Eqs. (98) and (99), we have \[\hat{x}_{1}^{\star}=\mu. \tag{103}\] The solution of \(\mathcal{U}((\gamma,\eta),\varphi)\) is lower-bounded by setting \(\mathbf{P}(U=1)=\bar{\kappa}\) and \(\varphi=0\), i.e., \[\mathcal{U}((\gamma,\eta),\varphi)>\mathbf{E}\big{[}(X-\mu)^{2}\big{]}+c\bar{ \kappa}. \tag{104}\] If \(\gamma^{\star},\hat{x}_{0}^{\star}\) and \(\varphi^{\star}\) is a saddle point of Eq. (98), then the objective function in Eq. (98) satisfies \[\mathcal{L}\big{(}(\gamma^{\star},\eta^{\star}),\varphi^{\star} \big{)}= \varphi^{\star}\,\big{(}\mathbf{E}\big{[}(X-\mu)^{2}\big{]}-d\big{)}+c \mathbf{P}(U=1)\] \[+(1-\varphi^{\star})\mathbf{E}\big{[}(X-\mu)^{2}\big{]}\] \[=\mathbf{E}\big{[}(X-\mu)^{2}\big{]}-d\varphi^{\star}+c\mathbf{P} (U=1)\] \[\stackrel{{(c)}}{{=}}\mathbf{E}\big{[}(X-\mu)^{2} \big{]}+c\bar{\kappa}, \tag{105}\] where \((a)\) follows from the definition of saddle point equilibrium, \((b)\) follows from the inequality \(\mathbf{E}\big{[}(X-\mu)^{2}\mathbf{1}(U=0)\big{]}\leq\mathbf{E}\big{[}(X-\mu)^ {2}\big{]}\) and \((c)\) holds due to \(d\varphi^{\star}\geq 0\) and \(\mathbf{P}(U=1)\leq\bar{\kappa}\) for \(\mathcal{L}\big{(}(\gamma,\eta),\varphi\big{)}\). Combining Eq. (104) and Eq. (105), we obtain Eq. (102). Therefore, it suffices to consider the constrained optimization problem in Eq. (98). Define the Lagrangian function \[L(\gamma,\eta,\varphi,\lambda)\stackrel{{\mathrm{def}}}{{=}} \varphi\big{[}\mathbf{E}\big{[}(X-\hat{x}_{1})^{2}\big{]}-d\big{]}+c\mathbf{P} (U=1)\\ +(1-\varphi)\mathbf{E}\big{[}(X-\hat{x}_{0})^{2}\mathbf{1}(U=0) \big{]}+\lambda\big{(}\mathbf{P}(U=1)-\bar{\kappa}\big{)}, \tag{106}\] where \(\lambda\geq 0\) is the dual variable associated with the inequality constraint \(\mathbf{P}(U=1)-\bar{\kappa}\leq 0\). **Proposition 7** (Optimality of threshold policies): _For a large-scale system under a proactive attack with a fixed jamming probability \(\varphi\in[0,1]\), dual variable \(\lambda\), and an arbitrary estimation policy \(\eta\) indexed by representation symbols \(\hat{x}=(\hat{x}_{0},\hat{x}_{1})\in\mathbb{R}^{2}\), the optimal transmission strategy is:_ \[\gamma^{\star}_{\eta,\lambda,\varphi}(x)=\mathbf{1}\big{(}(1-\varphi)(x-\hat{x} _{0})^{2}\geq c+\lambda\big{)}. \tag{107}\] For fixed \(\eta\), \(\lambda\), and \(\varphi\), Proposition 7 implies that \[\tilde{L}\big{(}\eta,(\varphi,\lambda)\big{)}\stackrel{{ \mathrm{def}}}{{=}}L(\gamma^{\star}_{\eta,\lambda,\varphi},\eta,\varphi, \lambda)=\varphi\big{(}\mathbf{E}\big{[}(X-\hat{x}_{1})^{2}\big{]}-d\big{)}\\ +\mathbf{E}\Big{[}\min\big{\{}(1-\varphi)(X-\hat{x}_{0})^{2},c+ \lambda\big{\}}\Big{]}-\lambda\bar{\kappa}. \tag{108}\] **Proposition 8** (Optimal estimator): _Let \(X\) be a Gaussian random variable with mean \(\mu\) and variance \(\sigma^{2}\). The optimal estimator is_ \[\eta^{\star}(y)=\begin{cases}\mu,&\text{if }\ y\in\{\varnothing,\mathfrak{C}\}\\ x,&\text{if }\ y=x.\end{cases} \tag{109}\] _Without loss of generality, set \(\mu=0\). 
Then the Lagrangian function becomes_ \[\tilde{\tilde{L}}(\varphi,\lambda)\stackrel{{ \mathrm{def}}}{{=}}\tilde{L}\big{(}\eta^{\star},(\varphi,\lambda)\big{)}= \varphi\big{(}\mathbf{E}[X^{2}]-d\big{)}\\ +\mathbf{E}\Big{[}\min\big{\{}(1-\varphi)X^{2},c+\lambda\big{\}} \Big{]}-\lambda\bar{\kappa}. \tag{110}\] _The optimal values \(\varphi^{\star}\) and \(\lambda^{\star}\) are coupled. Therefore, we must jointly maximize \(\tilde{L}\) over \(\varphi\) and \(\lambda\). Let \(l_{\lambda}(\bar{\kappa})\) denote the unique solution of_ \[2\int_{\sqrt{l_{\lambda}}}^{+\infty}f(x)\mathrm{d}x=\bar{\kappa} \tag{111}\] _and let \(l_{\varphi}(d)\) denote the unique solution of_ \[2\int_{\sqrt{l_{\varphi}}}^{+\infty}x^{2}f(x)\mathrm{d}x=d. \tag{112}\] **Theorem 4** (Optimal jamming policy): _For a given input pdf \(f\), transmission cost \(c\), jamming cost \(d\), and asymptotic channel capacity \(\bar{\kappa}\), the optimal jamming probability and its associated optimal Lagrange dual variable are: 1. If \(\int_{\sqrt{c}}^{+\infty}f(x)\mathrm{d}x<\bar{\kappa}/2\) and \(\int_{\sqrt{c}}^{+\infty}x^{2}f(x)\mathrm{d}x<d/2\), then \[(\varphi^{\star},\lambda^{\star})=(0,0);\] (113) 2. If \(\int_{\sqrt{c}}^{+\infty}f(x)\mathrm{d}x\geq\bar{\kappa}/2\) and \(\int_{\sqrt{c}}^{+\infty}x^{2}f(x)\mathrm{d}x<d/2\), then \[(\varphi^{\star},\lambda^{\star})=(0,l_{\lambda}(\bar{\kappa})-c);\] (114) 3. If \(\int_{\sqrt{c}}^{+\infty}f(x)\mathrm{d}x<\bar{\kappa}/2\) and \(\int_{\sqrt{c}}^{+\infty}x^{2}f(x)\mathrm{d}x\geq d/2\), then \[(\varphi^{\star},\lambda^{\star})=\bigg{(}1-\frac{c}{l_{\varphi}(d)},0\bigg{)};\] (115) 4. If \(\int_{\sqrt{c}}^{+\infty}f(x)\mathrm{d}x\geq\bar{\kappa}/2\) and \(\int_{\sqrt{c}}^{+\infty}x^{2}f(x)\mathrm{d}x\geq d/2\), 1. if \(l_{\lambda}(\bar{\kappa})=l_{\varphi}(d)\), then \[(\varphi^{\star},\lambda^{\star})\in\bigg{\{}(\varphi,\lambda)\in[0,1]\times \mathbb{R}_{+}\ \bigg{|}\ \frac{c+\lambda}{1-\varphi}=l_{\lambda}(\kappa)\bigg{\}};\] (116) 5. if \(l_{\lambda}(\bar{\kappa})>l_{\varphi}(d)\), then \[(\varphi^{\star},\lambda^{\star})=\big{(}0,l_{\lambda}(\bar{\kappa})-c\big{)};\] (117) 6. if \(l_{\varphi}(d)>l_{\lambda}(\bar{\kappa})\), then \[(\varphi^{\star},\lambda^{\star})=\bigg{(}1-\frac{c}{l_{\varphi}(d)},0\bigg{)}.\] (118) Proof:: The proof is in Appendix B. Next, we establish the existence of a saddle point equilibrium for Eq. (98), i.e., \[\mathcal{L}\big{(}(\gamma_{\eta^{\star},\varphi^{\star}}^{\star},\eta^{\star} ),\varphi\big{)}\leq\mathcal{L}\big{(}(\gamma_{\eta^{\star},\varphi^{\star}}^{ \star},\eta^{\star}),\varphi^{\star}\big{)}\leq\mathcal{L}\big{(}(\gamma,\eta ),\varphi^{\star}\big{)} \tag{119}\] for \(\varphi\in[0,1]\) and \(\gamma\in\{\gamma:\mathbb{R}\to[0,1]\mid\mathbf{P}\big{(}\gamma(X)=1\big{)} \leq\bar{\kappa}\}\). We will show that it suffices to show that the saddle point equilibrium of its Lagrangian function \[L\big{(}(\gamma^{\star},\eta^{\star}),(\varphi,\lambda^{\star}) \big{)}\leq L\big{(}(\gamma^{\star},\eta^{\star}),(\varphi^{\star},\lambda^{ \star})\big{)}\\ \leq L\big{(}(\gamma,\eta),(\varphi^{\star},\lambda^{\star})\big{)}. 
\tag{120}\] **Proposition 9**: _Let \(\big{(}(\gamma^{\star},\eta^{\star}),(\varphi^{\star},\lambda^{\star})\big{)}\) be a saddle point of \(L\big{(}(\gamma,\eta),(\varphi)\big{)}\), then \(\big{(}(\gamma^{\star},\eta^{\star}),\varphi^{\star}\big{)}\) is the saddle point of \(\mathcal{L}\big{(}(\gamma,\eta),\varphi\big{)}\)._ Proof:: Since \(\big{(}(\gamma^{\star},\eta^{\star}),(\varphi^{\star},\lambda^{\star})\big{)}\) is a saddle point of \(L((\gamma,\eta),(\varphi,\lambda))\) it must satisfy complementary slackness, i.e., \[\lambda^{\star}\Big{(}\mathbf{E}\big{[}\gamma^{\star}(X)\big{]}-\bar{\kappa} \Big{)}=0. \tag{121}\] Therefore, we always have \[L\big{(}(\gamma^{\star},\eta^{\star}),(\varphi^{\star},\lambda^{\star})\big{)} =\mathcal{L}\big{(}(\gamma^{\star},\eta^{\star}),\varphi^{\star}\big{)}. \tag{122}\] Since \(\lambda\Big{(}\mathbf{E}\big{[}\gamma^{\star}(X)\big{]}-\bar{\kappa}\Big{)}\leq 0\), we have \[L\big{(}(\gamma,\eta),(\varphi^{\star},\lambda^{\star})\big{)}\leq\mathcal{L} \big{(}(\gamma,\eta),\varphi^{\star}\big{)}. \tag{123}\] When \(\gamma=\gamma^{\star}\)6, the complementary slackness property is satisfied. Then Theorem 4 implies that Footnote 6: Here, \(\gamma^{\star}\) denotes the optimal transmission policy for given \(\lambda^{\star},\eta^{\star},\varphi^{\star}\) as established in Proposition 7. \[L\big{(}(\gamma^{\star},\eta^{\star}),\varphi\big{)}\leq\mathcal{L}\big{(}( \gamma^{\star},\eta^{\star}),\varphi^{\star}\big{)}\leq\mathcal{L}\big{(}( \gamma,\eta),\varphi^{\star}\big{)}. \tag{124}\] Therefore, \[\mathcal{L}\big{(}(\gamma^{\star},\eta^{\star}),\varphi\big{)}\leq\mathcal{L} \big{(}(\gamma^{\star},\eta^{\star}),\varphi^{\star}\big{)}\leq\mathcal{L} \big{(}(\gamma,\eta),\varphi^{\star}\big{)}. \tag{125}\] Following the proof of Theorem 3 and using Proposition 9, we establish a saddle point equilibrium for large-scale networks. **Theorem 5** (Saddle point equilibrium): _Given a Gaussian source \(X\sim\mathcal{N}(0,\sigma^{2})\), communication and jamming costs \(c,d\geq 0\), a saddle point strategy \((\gamma^{\star},\eta^{\star},\varphi^{\star})\) for the remote estimation game with a proactive jammer over a large-scale network of capacity \(\bar{\kappa}\) is given by:_ 1. _If_ \(\int_{\sqrt{c}}^{+\infty}f(x)\mathrm{d}x<\bar{\kappa}/2\) _and_ \(\int_{\sqrt{c}}^{+\infty}x^{2}f(x)\mathrm{d}x<d/2\)_, then_ \[\gamma^{\star}(x)=\mathbf{1}(x^{2}>c)\ \ \text{and}\ \ \varphi^{\star}=0.\] (126) 2. _If_ \(\int_{\sqrt{c}}^{+\infty}f(x)\mathrm{d}x\geq\bar{\kappa}/2\) _and_ \(\int_{\sqrt{c}}^{+\infty}x^{2}f(x)\mathrm{d}x<d/2\)_, then_ \[\gamma^{\star}(x)=\mathbf{1}\big{(}x^{2}>l_{\lambda}(\bar{\kappa})\big{)}\ \ \text{and}\ \ \varphi^{ \star}=0.\] (127) 3. _If_ \(\int_{\sqrt{c}}^{+\infty}f(x)\mathrm{d}x<\bar{\kappa}/2\) _and_ \(\int_{\sqrt{c}}^{+\infty}x^{2}f(x)\mathrm{d}x\geq d/2\)_, then_ \[\gamma^{\star}(x)=\mathbf{1}\big{(}x^{2}>l_{\varphi}(d)\big{)}\ \ \text{and}\ \ \varphi^{ \star}=1-\frac{c}{l_{\varphi}(d)}.\] (128) 4. _If_ \(\int_{\sqrt{c}}^{+\infty}f(x)\mathrm{d}x\geq\bar{\kappa}/2\) _and_ \(\int_{\sqrt{c}}^{+\infty}x^{2}f(x)\mathrm{d}x\geq d/2\)_,_ \[\text{if}\ l_{\lambda}(\bar{\kappa})=l_{\varphi}(d),\] (129) 5. _if_ \(l_{\lambda}(\bar{\kappa})>l_{\varphi}(d)\)_, then_ \[\gamma^{\star}(x)=\mathbf{1}\big{(}x^{2}>l_{\lambda}(\bar{\kappa})\big{)}\ \ \text{and}\ \ \varphi^{ \star}\in\bigg{[}0,1-\frac{c}{l_{\varphi}(d)}\bigg{]};\] (130) 6. 
_if_ \(l_{\varphi}(d)>l_{\lambda}(\bar{\kappa})\)_, then_ \[\gamma^{\star}(x)=\mathbf{1}\big{(}x^{2}>l_{\varphi}(d)\big{)}\ \ \text{and}\ \ \varphi^{ \star}=1-\frac{c}{l_{\varphi}(d)}.\] (131) _In all cases, the estimation policy is:_ \[\eta^{\star}(y)=\begin{cases}0,&\text{if}\ \ y\in\{\varnothing,\mathfrak{E}\}\\ x,&\text{if}\ \ y=x.\end{cases} \tag{132}\] ### _Numerical results_ Based on Theorem 5, the following numerical results provide some insights on the optimal transmission strategy and optimal jamming strategy. Table I shows the saddle point equilibrium under different parameters, where \(X\sim\mathcal{N}(0,1)\) and \(c=1\). For example, let \(d=1\) and \(\bar{\kappa}=0.25\). Since \[\int_{\sqrt{\varepsilon}}^{+\infty}f(x)\mathrm{d}x=0.16>\bar{\kappa}/2,\ \ \text{and}\\ \int_{\sqrt{\varepsilon}}^{+\infty}x^{2}f(x)\mathrm{d}x=0.40<d/2, \tag{133}\] we have \(\gamma^{\star}=\mathbf{1}(x^{2}>1.32)\) and \(\varphi^{\star}=0\). Considering \(d=0.25\) and \(\bar{\kappa}=0.25\), we have \[\int_{\sqrt{\varepsilon}}^{+\infty}f(x)\mathrm{d}x=0.16>\bar{\kappa}/2,\ \ \text{and}\\ \int_{\sqrt{\varepsilon}}^{+\infty}x^{2}f(x)\mathrm{d}x=0.40>d/2, \tag{134}\] and \(l_{\lambda}(\bar{\kappa})=1.32<l_{\varphi}(d)=4.11\). So the optimal strategies are \(\gamma^{\star}=\mathbf{1}(x^{2}>4.11)\) and \(\varphi^{\star}=0.76\). Notice that the complementary slackness property is always satisfied, i.e., \(\lambda^{\star}(\mathbf{P}(\gamma^{\star}(X)=1)-\bar{\kappa})=0\). Moreover, in the saddle point equilibrium of Theorem 5, there is a sharp transition in the optimal jamming probability from zero to nonzero, which directly depends on the jamming cost. However, the structure of the transmission and estimation policies remain unchanged. In particular, the optimal transmission threshold policy is always symmetric. ## V Concluding remarks and future work Building upon the pioneering model introduced by Gupta et al. in [22, 23], we have considered a remote estimation game with asymmetric information involving transmitters, receivers and a jammer. While most of the literature focuses on jamming in the network and the physical layer of the communication protocol stack, our work focuses on the medium access control layer. To address the complicated problem originated by the fact that the problem has a non-classical information structure, we adopt a coordinator approach, which leads to a tractable framework based on a zero-sum game between coordinator and the jammer. We have obtained several results on the saddle point equilibria for many cases of interest, and extended the result for large scale networks, which provide insights in the design of massive IoT deployments for many modern applications such as smart farming, Industry 4.0, and robotic swarms. There are many interesting directions for future work. The most prominent ones are related to learning. In this work, we have assumed that the probability density function of the observations are common knowledge. However, this assumption is never realistic in practice. The design of real systems is data-driven which leads to issues related to the stability, robustness and performance bounds when the probabilistic model is not known a priori and is learned from data samples. For example, the sample complexity of our system is a largely unexplored issue with only a few related results reported in [37]. Additionally, all of our results assume that the jamming and communication costs are available to the coordinator and the jammer, which is also a contrived assumption. 
If the costs are private information, the game may no longer be zero-sum. Moreover, these parameters may need to be learned from repeated play. In such a case, it would be interesting to develop a theory that characterizes the rate of regret of online learning in this more realistic scenario.
2307.16640
A multilevel Monte Carlo algorithm for SDEs driven by countably dimensional Wiener process and Poisson random measure
In this paper, we investigate the properties of standard and multilevel Monte Carlo methods for weak approximation of solutions of stochastic differential equations (SDEs) driven by the infinite-dimensional Wiener process and Poisson random measure with a Lipschitz payoff function. The error of the truncated dimension randomized numerical scheme, which is determined by two parameters, i.e., grid density $n \in \mathbb{N}_{+}$ and truncation dimension parameter $M \in \mathbb{N}_{+},$ is of the order $n^{-1/2}+\delta(M)$ such that $\delta(\cdot)$ is positive and decreasing to $0$. We derive a complexity model and provide a proof for the upper complexity bound of the multilevel Monte Carlo method, which depends on two increasing sequences of parameters for both $n$ and $M.$ The complexity is measured in terms of an upper bound for the mean-squared error and compared with the complexity of the standard Monte Carlo algorithm. The results from numerical experiments as well as Python and CUDA C implementations are also reported.
Michał Sobieraj
2023-07-31T13:22:45Z
http://arxiv.org/abs/2307.16640v2
A Multilevel Monte Carlo algorithm for SDEs Driven by countably Dimensional Wiener Process and Poisson Random Measure ###### Abstract. In this paper, we investigate the properties of standard and multilevel Monte Carlo methods for weak approximation of solutions of stochastic differential equations (SDEs) driven by the infinite-dimensional Wiener process and Poisson random measure with the Lipschitz payoff function. The error of the truncated dimension randomized numerical scheme, which is determined by two parameters, i.e grid density \(n\in\mathbb{N}_{+}\) and truncation dimension parameter \(M\in\mathbb{N}_{+}\), is of the order \(n^{-1/2}+\delta(M)\) such that \(\delta(\cdot)\) is positive and decreasing to \(0\). The paper introduces the complexity model and provides proof for the upper complexity bound of the multilevel Monte Carlo method which depends on two increasing sequences of parameters for both \(n\) and \(M\). The complexity is measured in terms of upper bound for mean-squared error and compared with the complexity of the standard Monte Carlo algorithm. The results from numerical experiments as well as Python and CUDA C implementation are also reported. **Key words:** countably dimensional Wiener process, Poisson random measure, stochastic differential equations with jumps, randomized Euler algorithm, multilevel Monte Carlo method, information-based complexity **MSC 2010:** 65C05, 65C30, 68Q25 ## 1. Introduction We investigate the problem of efficient approximation of the value of \[\mathbb{E}(f(X(T)))\] for \(d\in\mathbb{N}_{+}\) and Lipschitz payoff function \(f:\mathbb{R}^{d}\mapsto\mathbb{R}\), where \(\{X(t)\}_{t\in[0,T]}\) is a unique strong solution of the following stochastic differential equation \[\left\{\begin{array}{l}\mathrm{d}X(t)=a(t,X(t))\,\mathrm{d}t+b(t,X(t))\, \mathrm{d}W(t)+\int\limits_{\mathcal{E}}c(t,X(t-),y)N(\,\mathrm{d}y,\,\mathrm{d }t),\ t\in[0,T],\\ X(0)=\eta.\end{array}\right. \tag{1}\] In the SDE above, \(T>0,\mathcal{E}:=\mathbb{R}^{d^{\prime}}\setminus\{0\}\), \(d^{\prime}\in\mathbb{N}_{+}\), and \(W=[W_{1},W_{2},\ldots]^{T}\) is a countably dimensional Wiener process on a complete probability space \((\Omega,\Sigma,\mathbb{P})\), i.e., an infinite sequence of independent scalar Wiener processes defined on the same probability space. We also assume that \(N(\,\mathrm{d}y,\,\mathrm{d}t)\) is a Poisson random measure with an intensity measure \(\nu(\,\mathrm{d}y)\,\mathrm{d}t\), where \(\nu(\,\mathrm{d}y)\) is a finite Levy measure on \((\mathcal{E},\mathcal{B}(\mathcal{E}))\). We assume that \(N\) and \(W\) are independent. We also impose suitable regularity conditions on the coefficients \(a,b,c\) and \(\eta\). Analytical properties and applications of such SDEs are widely investigated in [1] and [2]. The infinite-dimensional Wiener process is the natural extension of standard finite-dimensional Brownian motion which allows us to model more complex structures of the underlying noise. If \(W\) is countably dimensional the stochastic Ito integral can be understood as a stochastic integral wrt cylindrical Wiener process in the Hilbert space \(\ell^{2}\), see pages 289-290 in [1]. For more correspondence between the theory of Stochastic Partial Differential Equations (SPDEs) and SDEs driven by countably dimensional Wiener, see [3] and [4]. In many cases, the existence and uniqueness of the solutions of SDEs are guaranteed but the analytical formulas are not known. 
It leads to the usage of numerical schemes for the approximation of trajectories of the solutions of SDEs. In [5] authors introduce the truncated dimension Euler algorithm for strong approximation of the solutions of (1) and show its upper error bounds. The results are further used in this paper in the context of cost and error analysis for both Monte Carlo and multilevel Monte Carlo methods. Back in 2001, the multilevel Monte Carlo (MLMC) approach was very first introduced by Stefan Heinrich (see [6]) in the context of parametric integration. Next, in 2008 Mike Giles applied the multilevel method to weak approximation problem in the context of SDEs (see [7]). For now, there is a vast literature addressing the application of the multilevel Monte Carlo method to various classes of SDEs. Nonetheless, so far there was no preceding works that directly address the investigation of MLMC for SDEs driven by a countably dimensional Wiener process. On the other hand, the investigation of the multilevel Monte Carlo method for SPDEs is very popular and, as mentioned, related to the concept of SDEs driven by a countably dimensional Wiener process. For papers regarding SPDEs and MLMC method, the reader is especially referred to [8], [9], [10], [11], [12], [13], [14] and [15]. From the practical point of view, one of the multilevel Monte Carlo method applications is its extensive usage in Finance (see [16], [17], [18], [19]). The main contribution in this paper is a derivation of the cost model for weak approximation with a random number of evaluations and the analysis of the cost bounds for the standard and multilevel Monte Carlo methods for SDEs driven by countably dimensional Wiener process (which induces an additional set of parameters in MLMC method) and Poisson random measure (which imposes the expected complexity model). Extension of the multilevel approach to SDEs driven by countably dimensional Wiener process and Poisson random measure is motivated by and analogous to the approach presented in [7]. The structure of the paper is as follows. In Section 2 we describe the considered class of SDEs (1) with admissible coefficients defining the equation. We further recall recent results on the numerical scheme for a strong approximation of the solutions. In Section 3 we define the complexity model for the investigation of standard and multilevel Monte Carlo costs. In Section 4 we provide the reader with the standard Monte Carlo algorithm in a defined setting together with its corresponding error-dependent parameters and the cost. In Section 5 we derive the multilevel Monte Carlo algorithm and provide a theorem and proof which address its upper complexity bounds. Finally, our theoretical results are supported by numerical experiments described in Section 6. Therefore, we also provide the key elements of our current algorithm implementation in Python and CUDA C. In section 7 we summarize the main results and list the resulting open questions. ## 2. Preliminaries We first introduce basic notations and recall certain class of SDEs that was already considered in [5]. For such a class there exists a numerical scheme that exhibits a convergence rate of order \(n^{-1/2}+\delta(M)\) for \(M,n\in\mathbb{N}_{+}\) and \(\delta(M)\to 0_{+}\) as \(M\to+\infty\). Let \(x\wedge y:=\min\{x,y\}\) and \(x\lor y:=\max\{x,y\}\) for any \(x,y\in\mathbb{R}\). We use the following notation of asymptotic equalities. 
For functions \(f,g:[0,+\infty)\to[0,+\infty)\) we write \(f(x)=\mathcal{O}(g(x))\) iff there exist \(C>0,x_{0}>0\) such that for all \(x\geq x_{0}\) it holds \(f(x)\leq Cg(x)\). Furthermore, we write \(f(x)=\Theta(g(x))\) iff \(f(x)=\mathcal{O}(g(x))\) and \(g(x)=\mathcal{O}(f(x))\). The preceding definitions can naturally be extended to arbitrary accumulation points in \([0,+\infty)\). Depending on the context, by \(\|\cdot\|\) we denote the Euclidean norm for vectors and Hilbert-Schmidt norm for matrices. The difference should be clear from the context. We also set \[\ell^{2}(\mathbb{R}^{d})=\{x=(x^{(1)},x^{(2)},\ldots)\ |\ x^{(j)}\in\mathbb{R}^{d} \text{ for all }j\in\mathbb{N},\|x\|<+\infty\},\] where \(x^{(j)}=\begin{bmatrix}x_{1}^{(j)}\\ \vdots\\ x_{d}^{(j)}\end{bmatrix}\), \(\|x\|=\Bigl{(}\sum_{j=1}^{+\infty}\|x^{(j)}\|^{2}\Bigr{)}^{1/2}=\Bigl{(}\sum _{j=1}^{+\infty}\sum_{k=1}^{d}|x_{k}^{(j)}|^{2}\Bigr{)}^{1/2}\). Let \(\nu\) be a Levy measure on \((\mathcal{E},\mathcal{B}(\mathcal{E}))\), i.e., \(\nu\) is a measure on \((\mathcal{E},\mathcal{B}(\mathcal{E}))\) that satisfies condition \(\int\limits_{\mathcal{E}}(\|z\|^{2}\wedge 1)\nu(\,\mathrm{d}z)<+\infty\). We further assume that \(\lambda:=\nu(\mathcal{E})<+\infty\). Let \((\Omega,\Sigma,\mathbb{P})\) be a complete probability space with sufficiently rich filtration \((\Sigma_{t})_{t\geq 0}\) which also satisfies the usual conditions (see [20]) and for which \(W\) is countably dimensional \((\Sigma_{t})_{t\geq 0}-\)Wiener process and and \(N(\,\mathrm{d}z,\,\mathrm{d}t)\) is an \((\Sigma_{t})_{t\geq 0}\)-Poisson random measure with the intensity measure \(\nu(\,\mathrm{d}z)\,\mathrm{d}t\). We assume that both \(W\) and \(N\) are independent of each other. Furthermore, let \(\Sigma_{\infty}:=\sigma\Bigl{(}\bigcup_{t\geq 0}\Sigma_{t}\Bigr{)}\). For any random vector \(X:\Omega\mapsto\mathbb{R}^{d}\) we define its \(L^{2}(\Omega)\) norm as \(\|X\|_{L^{2}(\Omega)}:=(\mathbb{E}\|X\|^{2})^{1/2}\) and by \(X^{(i)}\) we mean the \(i^{\prime}th\) independent sample of the vector. For \(D,D_{L}>0\) we consider \(\mathcal{A}(D,D_{L})\) a class of all functions \(a:[0,T]\times\mathbb{R}^{d}\mapsto\mathbb{R}^{d}\) satisfying the following conditions: 1. \(a\) is Borel measurable, 2. \(\|a(t,0)\|\leq D\) for all \(t\in[0,T]\), 3. \(\|a(t,x)-a(t,y)\|\leq D_{L}\|x-y\|\) for all \(x,y\in\mathbb{R}^{d}\), \(t\in[0,T]\). Let \(\Delta=(\delta(k))_{k=1}^{+\infty}\subset\mathbb{R}_{+}\) be a positive, strictly decreasing sequence, converging to zero, and let \(C>0\), \(\varrho_{1}\in(0,1]\). We consider the following class \(\mathcal{B}(C,D,D_{L},\Delta,\varrho_{1})\) of functions \(b=(b^{(1)},b^{(2)},\ldots):[0,T]\times\mathbb{R}^{d}\mapsto\ell^{2}(\mathbb{R }^{d})\), where \(b^{(j)}:[0,T]\times\mathbb{R}^{d}\mapsto\mathbb{R}^{d}\), \(j\in\mathbb{N}\). Namely, \(b\in\mathcal{B}(C,D,D_{L},\Delta,\varrho_{1})\) iff it satisfies the following conditions: 1. \(\|b(0,0)\|\leq D\), 2. \(\|b(t,x)-b(s,x)\|\leq D_{L}(1+\|x\|)|t-s|^{\varrho_{1}}\) for all \(x\in\mathbb{R}^{d}\) and \(t,s\in[0,T]\), 3. \(\|b(t,x)-b(t,y)\|\leq D_{L}\|x-y\|\) for all \(x,y\in\mathbb{R}^{d}\) and \(t\in[0,T]\), 4. \(\sup_{0\leq t\leq T}\|\sum_{i=k+1}^{+\infty}b^{(i)}(t,x)\|\leq C(1+\|x\|)\delta (k)\) for all \(k\in\mathbb{N}\) and \(x\in\mathbb{R}\). By \(\delta\) we also denote a function on \([1,+\infty)\) which is defined either by linear interpolation of \(\Delta\) sequence or simple substitution of index \(k\) with continuous variable \(x\) in the definition. 
Such function is invertible and the difference between each delta is clear from the context. Let \(\varrho_{2}\in(0,1]\) and let \(\nu\) be the Levy measure as above. We say that a function \(c:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{d^{\prime}}\mapsto\mathbb{R}^{d}\) belongs to the class \(\mathcal{C}(D,D_{L},\varrho_{2},\nu)\) if and only if 1. \(c\) is Borel measurable, 2. \(\bigg{(}\int\limits_{\mathcal{E}}\|c(0,0,y)\|^{p}\nu(\,\mathrm{d}y) \bigg{)}^{1/2}\leq D,\) 3. \(\bigg{(}\int\limits_{\mathcal{E}}\|c(t,x_{1},y)-c(t,x_{2},y)\|^{ 2}\ \nu(\,\mathrm{d}y)\bigg{)}^{1/2}\leq D_{L}\|x_{1}-x_{2}\|\) for all \(x_{1},x_{2}\in\mathbb{R}^{d}\), \(t\in[0,T]\), 4. \(\bigg{(}\int\limits_{\mathcal{E}}\|c(t_{1},x,y)-c(t_{2},x,y)\|^{ 2}\ \nu(\,\mathrm{d}y)\bigg{)}^{1/2}\leq D_{L}(1+\|x\|)|t_{1}-t_{2}|^{\varrho_{2}}\) for all \(x\in\mathbb{R}^{d}\), \(t_{1},t_{2}\in[0,T]\). Finally, we define the following class \[\mathcal{J}(D)=\{\eta\in L^{2}(\Omega)\ |\ \sigma(\eta)\subset\Sigma_{0},\|\eta \|_{L^{2}(\Omega)}\leq D\}.\] As a set of admissible input data, we consider the following class \[\mathcal{F}(C,D,D_{L},\Delta,\varrho_{1},\varrho_{2},\nu)=\mathcal{A}(D,D_{L}) \times\mathcal{B}(C,D,D_{L},\Delta,\varrho_{1})\times\mathcal{C}(D,D_{L}, \varrho_{2},\nu)\times\mathcal{J}(D).\] We recall the truncated dimension randomized Euler algorithm, defined in [5], that approximates the value of \(X(T)\). Let \(M,n\in\mathbb{N}_{+}\), \(t_{j}=jT/n\), \(j=0,1,\ldots,n\). We also use the notation \[\Delta W_{j}=[\Delta W_{j,1},\Delta W_{j,2},\ldots]^{T},\] where \[\Delta W_{j,k}=W_{k}(t_{j+1})-W_{k}(t_{j})\] for \(k\in\mathbb{N}\). Let \(\left(\theta_{j}\right)_{j=0}^{n-1}\) be a sequence of independent random variables, where each \(\theta_{j}\) is uniformly distributed on \([t_{j},t_{j+1}]\), \(j=0,1,\ldots,n-1\). We also assume that \(\sigma(\theta_{0},\theta_{1},\ldots,\theta_{n-1})\) is independent of \(\Sigma_{\infty}\). For \((a,b,c,\eta)\in\mathcal{F}(C,D,D_{L},\Delta,\varrho_{1},\varrho_{2},\nu)\) we set \[\begin{cases}X_{M,n}^{RE}(0)=\eta\\ X_{M,n}^{RE}(t_{j+1})=X_{M,n}^{RE}(t_{j})+a(\theta_{j},X_{M,n}^{RE}(t_{j})) \frac{T}{n}+b^{M}(t_{j},X_{M,n}^{RE}(t_{j}))\Delta W_{j}\\ \qquad\qquad+\sum\limits_{k=N(t_{j})+1}^{N(t_{j+1})}c(t_{j},X_{M,n}^{RE}(t_{j} ),\xi_{k}),\quad j=0,1,\ldots,n-1.\end{cases}. \tag{2}\] If the argument is omitted by \(X_{M,n}^{RE}\) we usually mean the random vector \(X_{M,n}^{RE}(T)\). By \(X_{M,n}^{RE,i}\), we denote the \(i^{\prime}th\) independent sample of random vector \(X_{M,n}^{RE}\) that implicitly depends on \((a,b,c,\eta)\in\mathcal{F}(C,D,D_{L},\Delta,\varrho_{1},\varrho_{2},\nu)\). For the sake of brevity, we also recall the following theorem which corresponds to the rate of convergence of the presented algorithm. The more general case where the error is measured in \(L^{p}(\Omega)\) norm can be found in [5]. **Theorem 1** ([5]).: _There exists a constant \(\kappa>1\), depending only on the parameters of the class \(\mathcal{F}(C,D,D_{L},\Delta,\varrho_{1},\varrho_{2},\nu)\), such that for every \((a,b,c,\eta)\in\mathcal{F}(C,D,D_{L},\Delta,\varrho_{1},\varrho_{2},\nu)\) and \(M,n\in\mathbb{N}\) it holds_ \[\|X(T)-X_{M,n}^{RE}\|_{L^{2}(\Omega)}\leq\kappa\Big{(}n^{-\alpha}+\delta(M) \Big{)}\] _where \(\alpha:=\min\{\varrho_{1},\varrho_{2},1/2\}\)._ In the remaining part of the paper, by \(\{X(t)\}_{t\in[0,T]}\) we mean the unique strong solution of the equation (1) that implicitly depends on \((a,b,c,\eta)\in\mathcal{F}(C,D,D_{L},\Delta,\varrho_{1},\varrho_{2},\nu)\). 
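To make the scheme (2) concrete, the following is a minimal Python sketch of one trajectory of the truncated dimension randomized Euler algorithm, under the simplifying assumption of a scalar state (\(d=1\)) and a finite Lévy measure with total mass \(\lambda=\nu(\mathcal{E})\). The callables `a`, `b_M`, `c` and `sample_jump` are placeholders standing in for admissible input data from the class \(\mathcal{F}(C,D,D_{L},\Delta,\varrho_{1},\varrho_{2},\nu)\); this is an illustrative sketch under those assumptions, not the Python/CUDA C implementation reported in Section 6.

```python
import numpy as np

def randomized_euler(a, b_M, c, eta, T, n, M, lam, sample_jump, rng):
    # One sample of X^{RE}_{M,n}(T) from scheme (2), assuming d = 1.
    # a(t, x): drift; b_M(t, x): array with the first M diffusion coordinates
    # b^{(1)}(t, x), ..., b^{(M)}(t, x); c(t, x, y): jump coefficient;
    # sample_jump(rng): one mark drawn from nu(dy)/lambda; lam = nu(E) < infinity;
    # eta: a draw of the initial value.
    h = T / n
    x = eta
    for j in range(n):
        t_j = j * h
        theta_j = rng.uniform(t_j, t_j + h)          # theta_j ~ U[t_j, t_{j+1}]
        dW = rng.normal(0.0, np.sqrt(h), size=M)     # Delta W_{j,1}, ..., Delta W_{j,M}
        n_jumps = rng.poisson(lam * h)               # N(t_{j+1}) - N(t_j)
        jumps = sum(c(t_j, x, sample_jump(rng)) for _ in range(n_jumps))
        x = x + a(theta_j, x) * h + np.dot(b_M(t_j, x), dW) + jumps
    return x
```

For a vector-valued state, `b_M` would instead return a \(d\times M\) array and the dot product becomes a matrix-vector product; the structure of the loop is unchanged.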
In the following chapter, we define the complexity model inspired by the information-based complexity (IBC) framework. See [21] for more details. ## 3. Complexity model In [22], authors deal with Levy driven SDEs where random discretization of the time grid imposes complexity measured in terms of the expectation. Regarding the truncated dimension randomized Euler algorithm, we utilize the same approach, since the number of evaluations of \(c\) is non-deterministic. Let \(\operatorname{cost}(X_{M,n}^{RE,i})\) denote the complexity of the evaluation of the single random sample \(X_{M,n}^{RE,i}\), i.e \[\operatorname{cost}(X_{M,n}^{RE,i}) :=\#\{\text{evaluations of }a,b,c,\eta,W,N\}\] \[=d(n+Mn+N(T)+1)+Mn+n\] Note that \(\operatorname{cost}(X_{M,n}^{RE,i})\) is a random variable (see [5] for more details). On the other hand, let the (expected) cost of the algorithm be defined as \[\operatorname{cost}\big{(}X_{M,n}^{RE}\big{)} :=\mathbb{E}\Big{[}\text{\# of scalar evaluations of }a,b,c,\eta,W,N\Big{]}\] \[=d(n+Mn+\lambda T+1)+Mn+n\] From the law of large numbers for the sequence of i.i.d random variables \(\{\operatorname{cost}(X_{M,n}^{RE,i})\}_{i=1}^{K}\), we obtain that \[\operatorname{cost}(\{X_{M,n}^{RE,i}\}_{i=1}^{K}) =\sum_{i=1}^{K}\operatorname{cost}(X_{M,n}^{RE,i})=K(\frac{1}{K} \sum_{i=1}^{K}\operatorname{cost}(X_{M,n}^{RE,i}))\] \[\approx K\operatorname{cost}(X_{M,n}^{RE}).\] In this paper, we are meant to measure the overall cost (sum of costs) of the evaluation of many independent trajectories rather than the single trajectory itself. Thus, replacing the actual cost with a multiple of the expected cost seems reasonable and intuitive. Next, we see that \[dMn\leq\operatorname{cost}\big{(}X_{M,n}^{RE}\big{)}\leq d(\lambda T+5)Mn,\] since \(Mn\geq 1\). Assuming that number of samples \(K\) is sufficiently large, it means that the complexity of \(X_{M,n}^{RE}\) is wrt \(M\) an \(n\), up to constant proportional to \(Mn\). For confidence level \(\alpha\in(0,1)\), in order to keep asymptotic confidence interval width for \(\frac{1}{K}\sum_{i=1}^{K}\cos(X_{M,n}^{RE,i})\) less than \(d\), it is sufficient to have \(K\geq 4(\lambda T)^{2}(\Phi(\frac{1+\alpha}{2}))^{2}\) where \(\Phi\) denotes an inverse CDF of a Normal distribution. We keep that condition in mind regarding the numerical experiments where the number of samples in the Monte Carlo estimator has to be sufficiently large depending on \(\lambda\) and \(\alpha\). Henceforth, in numerical experiments, we usually assume that \(K\geq 1000\). Finally, we further drop the constants and assume that the cost of the evaluation of i.i.d. samples \(\{X_{M,n}^{RE,i}\}_{i=1}^{K}\) is equal to \(KMn\). ## 4. Monte Carlo method In weak approximation, our main interest is the evaluation of the expectation \[\mathbb{E}(f(X(T))),\] such that \(f\) is the Lipschitz payoff function with Lipschitz constant \(D_{L}\). It is well-known that it can be done via the standard Monte Carlo method (replacing expectation with a mean value of \(K\) independent samples from \(f(X_{M,n}^{RE})\)). 
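A minimal sketch of this estimator, reusing the `randomized_euler` helper from the previous section (the `simulate` argument is a placeholder wrapping the scheme with fixed coefficients), is given below.

```python
import numpy as np

def mc_estimator(f, K, M, n, simulate, rng):
    # Standard Monte Carlo: sample mean of K i.i.d. copies of f(X^{RE}_{M,n});
    # in the cost model of Section 3 its cost is taken to be K * M * n.
    return np.mean([f(simulate(M, n, rng)) for _ in range(K)])
```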
Rewriting the mean squared error of the standard Monte Carlo estimator as the sum of its variance and squared bias indicates that the method has the following upper error bound \[\mathbb{E}\big{|}\mathbb{E}(f(X(T)))-\frac{1}{K}\sum_{i=1}^{K}f( X_{M,n}^{RE,i})\big{|}^{2} =\frac{\operatorname{Var}(f(X_{M,n}^{RE}))}{K}+\Big{(}\mathbb{E} \big{[}f(X(T))-f(X_{M,n}^{RE})\big{]}\Big{)}^{2}\] \[\leq\kappa_{2}K^{-1}+D_{L}^{2}\mathbb{E}\|X(T)-X_{M,n}^{RE}\|^{2}\] \[\leq\kappa_{2}K^{-1}+2(\kappa D_{L})^{2}(n^{-2\alpha}+\delta^{2}( M))\] which results from theorem 1 for bias and the fact that \(f\) is Lipschitz payoff with Lipschitz constant \(D_{L}\) and variance of \(f(X_{M,n}^{RE})\) is bounded. Therefore, one obtains that \[\|\mathbb{E}(f(X(T)))-\frac{1}{K}\sum_{i=1}^{K}f(X_{M,n}^{RE,i})\|_{L^{2}( \Omega)}\leq\kappa_{3}(K^{-1/2}+n^{-\alpha}+\delta(M))\] where \(\kappa_{3}:=\sqrt{\kappa_{2}}\vee\sqrt{2}\kappa D_{L}\). **Remark 1**.: Since the Monte Carlo estimator evaluates the payoff function only once per trajectory, the cost of the evaluation of \(f\) can be neglected. Therefore, the total complexity of the estimator is defined as the sum of costs for every single trajectory evaluation, namely \[\cos\Big{(}\frac{1}{K}\sum_{i=1}^{K}f(X_{M,n}^{RE,i})\Big{)}:=KMn.\] To achieve the upper error bound up to constant proportional to \(\varepsilon>0\), one for instance may set \[K:=\lceil\varepsilon^{-2}\rceil,n:=\lceil\varepsilon^{-1/\alpha}\rceil,M:= \lceil\delta^{-1}(\varepsilon)\rceil. \tag{3}\] Note that \(\lceil x\rceil\leq(1+\frac{1}{x_{0}})x\) for \(0<x_{0}\leq x\), thus \[\varepsilon^{-2} \leq K=\lceil\varepsilon^{-2}\rceil\leq(1+(1\wedge\delta(1))^{2} )\varepsilon^{-2}\leq 2\varepsilon^{-2},\] \[\varepsilon^{-1/\alpha} \leq n=\lceil\varepsilon^{-1/\alpha}\rceil\leq(1+(1\wedge\delta( 1))^{1/\alpha})\varepsilon^{-2}\leq 2\varepsilon^{-1/\alpha},\] \[\delta^{-1}(\varepsilon)\leq M=\lceil\delta^{-1}(\varepsilon)\rceil\leq 2 \delta^{-1}(\varepsilon)\] for sufficiently small \(\varepsilon\), i.e less than or equal to \(1\wedge\delta(1)\). Therefore, the tight complexity bounds are \[\varepsilon^{-(2+\frac{1}{\alpha})}\delta^{-1}(\varepsilon)\leq\text{cost} \left(\frac{1}{K}\sum_{i=1}^{K}f(X_{M,n}^{RE,i})\right)\leq 8\varepsilon^{-(2+ \frac{1}{\alpha})}\delta^{-1}(\varepsilon),\] indicating that the cost of the algorithm is in fact up to constant proportional to \(\varepsilon^{-(2+\frac{1}{\alpha})}\delta^{-1}(\varepsilon)\). By \(\mathcal{MC}(\varepsilon)\) we mean the value of standard Monte Carlo algorithm under provided parameters (3). ## 5. Multilevel Monte Carlo method Suppose \(\{n_{l}\}_{l=0}^{+\infty}\subset\mathbb{N}_{+}\) and \(\{M_{l}\}_{l=0}^{+\infty}\subset\mathbb{N}_{+}\) are two non-decreasing sequences of parameters. For fixed level \(L\in\mathbb{N}_{+}\) and \(\{K_{l}\}_{l=0}^{L}\subset\mathbb{N}_{+}\) which are parameters to be found, the multilevel Monte Carlo estimator is defined as \[\mathcal{ML}:=\frac{1}{K_{0}}\sum_{i_{0}=1}^{K_{0}}f(X_{M_{0},n_{0}}^{RE,i_{0 }})+\sum_{l=1}^{L}\frac{1}{K_{l}}\sum_{i_{l}=1}^{K_{l}}\big{(}f(X_{M_{l},n_{l} }^{RE,i_{l}})-f(X_{M_{l-1},n_{l-1}}^{RE,i_{l}})\big{)}. \tag{4}\] In addition to level \(L\), the multilevel estimator \(\mathcal{ML}\) depends on a set of parameters for both grid densities and **truncation parameters**. For the sake of brevity, we further omit those parameters in the notation of \(\mathcal{ML}\). 
Furthermore, for each level \(l=1,...,L\), and \(i_{l}\in\{1,...,K_{l}\}\) the random variables \(X_{M_{l},n_{l}}^{RE,i_{l}}\) and \(X_{M_{l-1},n_{l-1}}^{RE,i_{l}}\) remain coupled only via the use of the same realization of Wiener process, Poisson process, and jump-heights sequence. The realizations of random time discretizations on different levels remain independent of each other. The reasoning behind such a coupling idea is explained in the following part. First of all, one may define the timestep of the algorithm as \(h_{l}:=T/n_{l}\) as in (2). Nevertheless, it is convenient to have timestep \(h_{l}\) in the form of \[h_{l}:=T\beta^{-l},\] for some \(\beta>1\). Therefore, we usually assume that the grid density parameters are defined as \[n_{l}:=\lceil\beta^{l}\rceil\] for \(\beta>1\) and every \(l=0,\ldots,L\), so that \[\frac{T}{nl}=\frac{T}{\lceil\beta^{l}\rceil}\leq T\beta^{-l}=h_{l}, \tag{5}\] meaning that our timesteps do not exceed the desired \(h_{l}\). Furthermore, note that \[n_{l}^{-\alpha}\leq T^{-\alpha}h_{l}^{\alpha}\] for \(\alpha>0\), which means that we can rewrite thesis of the theorem 1 in terms of \(h_{l}\) instead of \(n_{l}\) with new constant \(\tilde{\kappa}:=(T^{-\alpha}\lor 1)\kappa\). Since \(\beta>1\), we have that \(\lceil\beta^{l}\rceil\leq 2\beta^{l}\). Thus, \[n_{l}=\lceil\beta^{l}\rceil\leq 2\beta^{l}=2T\frac{\beta^{l}}{T}=\frac{2T}{h_{ l}}. \tag{6}\] And as was already mentioned, the mean squared error can be rewritten as the sum of variance and the squared bias which can also be applied to the multilevel estimator. This leads to the following equality \[\|\mathbb{E}(f(X(T)))-\mathcal{ML}\|_{L^{2}(\Omega)}^{2}=\mathrm{Var}(\mathcal{ ML})+(\mathbb{E}(f(X(T)))-\mathbb{E}(\mathcal{ML}))^{2}.\] Note that \(\mathbb{E}(\mathcal{ML})=\mathbb{E}(f(X_{M_{L},n_{L}}^{RE}))\), thus \[(\mathbb{E}(f(X(T)))-\mathbb{E}(\mathcal{ML}))^{2}\leq\mathbb{E}\big{|}f(X(T))- f(X_{M_{L},n_{L}}^{RE})\big{|}^{2}\leq\kappa_{bias}(h_{L}^{2\alpha}+\delta^{2}(M_{L})), \tag{7}\] where \(\kappa_{bias}:=2(D_{L}\tilde{\kappa})^{2}\). Therefore, the upper bound for the second term above (squared bias) depends only on the convergence rate of the numerical scheme which is determined by \(\alpha\) and \(\delta(\cdot)\). The multilevel Monte Carlo method aims at variance reduction of the expectation estimator to reduce the computational cost. As mentioned, our main interest in this paper is to investigate the properties of the introduced multilevel procedure (4) in the infinite-dimensional setting (1). First, let us provide an intuitive derivation of parameters' values. To investigate the cost of the multilevel Monte Carlo method for SDEs driven by the countably dimensional Wiener process, we replicate steps presented in paper [7]. Let \[v_{l}:=\mathrm{Var}\left[f(X_{M_{l},n_{l}}^{RE})-f(X_{M_{l-1},n_{l-1}}^{RE}) \right].\] and \[v_{0}:=\mathrm{Var}\left[f(X_{M_{0},n_{0}}^{RE})\right].\] One can notice that the variance of the multilevel estimator is equal to \[\mathrm{Var}[\mathcal{ML}]=\sum_{l=0}^{L}\frac{v_{l}}{K_{l}}\] and, following the remark 1, the cost can be defined as \[\mathrm{cost}(\mathcal{ML}):=\sum_{l=0}^{L}K_{l}M_{l}n_{l}\] or alternatively (by inequalities (5) and (6)) as \(\sum_{l=0}^{L}\frac{K_{l}M_{l}}{h_{l}}\). 
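The coupling described above can be sketched as follows, assuming \(d=1\) and that \(n_{l}\) is an integer multiple of \(n_{l-1}\) (e.g. \(\beta=2\)); the callables `a`, `b`, `c`, `sample_jump` are placeholders for the coefficients, and the sketch is ours rather than the CUDA C implementation of Section 6. Both levels share the Wiener increments (each coarse increment is the sum of the fine ones, restricted to the first \(M_{l-1}\) coordinates) and the jump times and heights, while the randomized drift evaluation points are drawn independently on each level.

```python
import numpy as np

def coupled_level_sample(a, b, c, eta, f, T, l, M, n, lam, sample_jump, rng):
    # One sample of the level-l correction f(X^{RE}_{M_l,n_l}) - f(X^{RE}_{M_{l-1},n_{l-1}}),
    # assuming d = 1 and n[l] an integer multiple of n[l-1].
    # b(t, x, M) returns the first M diffusion coordinates as an array.
    n_f, n_c = n[l], n[l - 1]            # fine / coarse grid densities
    M_f, M_c = M[l], M[l - 1]            # fine / coarse truncation dimensions
    h_f, h_c = T / n_f, T / n_c
    r = n_f // n_c                       # fine steps per coarse step
    x_f = x_c = eta                      # both levels start from the same initial value
    for k in range(n_c):
        s = k * h_c
        dW_c = np.zeros(M_c)
        marks = []                       # jump heights shared by both levels on [s, s + h_c]
        for i in range(r):               # fine sub-steps share W and N with the coarse step
            t = s + i * h_f
            dW = rng.normal(0.0, np.sqrt(h_f), size=M_f)
            xi = [sample_jump(rng) for _ in range(rng.poisson(lam * h_f))]
            theta_f = rng.uniform(t, t + h_f)            # fine randomized point
            x_f = (x_f + a(theta_f, x_f) * h_f
                   + np.dot(b(t, x_f, M_f), dW)
                   + sum(c(t, x_f, y) for y in xi))
            dW_c += dW[:M_c]             # coarse increment: sum of fine increments
            marks += xi
        theta_c = rng.uniform(s, s + h_c)                # independent coarse randomized point
        x_c = (x_c + a(theta_c, x_c) * h_c
               + np.dot(b(s, x_c, M_c), dW_c)
               + sum(c(s, x_c, y) for y in marks))
    return f(x_f) - f(x_c)
```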
By minimizing the variance with respect to the fixed cost one obtains that the optimal value for \(K_{l}\) is \[K_{l}=\left\lceil 2\varepsilon^{-2}\sqrt{\frac{v_{l}h_{l}}{M_{l}}}\sum_{k=0}^{L }\sqrt{\frac{v_{k}M_{k}}{h_{k}}}\right\rceil\] where \(\varepsilon\) is the expected \(L^{2}(\Omega)\) error of the algorithm. Recall that for any two square-integrable random variables \(X,Y\) one has that \[\mathrm{Var}(X-Y)\leq(\mathrm{Var}^{\frac{1}{2}}(X)+\mathrm{Var}^{\frac{1}{2}} (Y))^{2}.\] On the other hand, since \[h_{l-1}=\beta h_{l}\text{ and }\delta(M_{l})<\delta(M_{l-1})\] for \(l=1,\ldots,L\), we have that \[v_{l} =\operatorname{Var}\left[\left(f(X(T))-f(X^{RE}_{M_{l-1},n_{l-1}}) \right)-\left(f(X(T))-f(X^{RE}_{M_{l},n_{l}})\right)\right]\] \[\leq\left(\operatorname{Var}^{\frac{1}{2}}(f(X(T))-f(X^{RE}_{M_{l },n_{l}}))+\operatorname{Var}^{\frac{1}{2}}(f(X(T))-f(X^{RE}_{M_{l-1},n_{l-1}} ))\right)^{2}\] \[\leq 2D_{L}^{2}\|X(T)-X^{RE}_{M_{l},n_{l}}\|_{L^{2}(\Omega)}^{2}+2D _{L}^{2}\|X(T)-X^{RE}_{M_{l-1},n_{l-1}}\|_{L^{2}(\Omega)}^{2}\] \[\leq 8(1\vee\beta^{2\alpha})(D_{L}\tilde{\kappa})^{2}(h_{l}^{2 \alpha}+\delta^{2}(M_{l-1}))\] Similarly, from the fact that the variance of \(f(X^{RE}_{M_{0},n_{0}})\) is bounded and the observation that \(\delta\) takes only positive values, we obtain that \[v_{0}\leq\mathbb{E}\big{|}f(X^{RE}_{M_{0},n_{0}})\big{|}^{2} \leq\kappa_{2} =\frac{\kappa_{2}}{T^{2\alpha}+\delta^{2}(M_{0})}(T^{2\alpha}+ \delta^{2}(M_{0}))\] \[\leq\frac{\kappa_{2}}{T^{2\alpha}}(h_{0}^{2\alpha}+\delta^{2}(M_ {0}))\] Hence, \[v_{l}\leq\kappa_{var}(h_{l}^{2\alpha}+\delta^{2}(M_{l-1})),\ l\in\{0,\ldots,L\} \tag{8}\] where \(\kappa_{var}:=4\beta^{2\alpha}\kappa_{bias}\vee\kappa_{2}T^{-2\alpha}\) and \(M_{-1}:=M_{0}\). It results in the observation that \(v_{l}\to 0\) as \(l\to+\infty\) which guarantees that one needs fewer and fewer samples on the next levels. Similarly to SDEs driven by the finite-dimensional Wiener process, it means that coarse levels contribute to the cost via a slightly greater number of independent samples. Inequality (8) also indicates that despite the fact that for \(l=1,...,L\), sequences of independent random variables \(\theta_{j}^{(l)}\sim\mathcal{U}(j\frac{T}{n_{l}},(j+1)\frac{T}{n_{l}})\) for \(j=0,...,n_{l}-1\) and \(\theta_{j}^{(l-1)}\sim\mathcal{U}(j\frac{T}{n_{l-1}},(j+1)\frac{T}{n_{l-1}})\) for \(j=0,...,n_{l-1}-1\) are not coupled, both \(f(X^{RE}_{M_{l-1},n_{l-1}})\) and \(f(X^{RE}_{M_{l},n_{l}})\) approximate the same realization of a random variable \(f(X(T))\) which is sufficient for the algorithm to work. The general idea for the proof of complexity bounds for the multilevel Monte Carlo method consists of a few repeatable steps. To establish the parameters of an algorithm, first, we try to find values for which the upper error bound \(\varepsilon>0\) is attained. The parameters depend on \(\varepsilon\). It usually starts with parameter \(L\) since it determines the upper bound for the bias of an estimator which is entirely determined by the convergence rate of the numerical scheme and not by the method itself (see inequality (7)). Next, we proceed with the number of summands \(K_{l}\) which affects the estimator's variance. Finally, having obtained concrete parameters' values for which the upper error bound is attained, we check the corresponding complexity of an algorithm in terms of \(\varepsilon\) (upper bound for the \(L^{2}(\Omega)\) error). 
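For concreteness, the allocation rule derived above can be written as a small helper; here `v` holds (estimated) level variances \(v_{0},\ldots,v_{L}\), for instance obtained from pilot samples as in Section 6, and `h`, `M` the corresponding timesteps and truncation dimensions. The names are ours and serve only to illustrate the formula for \(K_{l}\).

```python
import math

def optimal_num_samples(eps, v, h, M):
    # K_l = ceil( 2 * eps^{-2} * sqrt(v_l * h_l / M_l) * sum_k sqrt(v_k * M_k / h_k) )
    total = sum(math.sqrt(vk * Mk / hk) for vk, hk, Mk in zip(v, h, M))
    return [math.ceil(2.0 * eps ** (-2) * math.sqrt(vl * hl / Ml) * total)
            for vl, hl, Ml in zip(v, h, M)]
```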
In the next part of this paper, we provide a theorem that addresses the complexity upper bound for the multilevel algorithm in setting (1). As expected, the cost turns out to highly depend on the \(\delta\) function. Nevertheless, in the general case, the dependence corresponds to the exponent of a possibly unknown constant. To investigate input data for which the impact of the exponent can be mitigated, we introduce the following family of classes for the inverse of \(\delta\). **Definition 1**.: _We define the following family of classes_ \[\mathcal{G}_{x_{0}} :=\Big{\{}g:(0,x_{0})\mapsto\mathbb{R}_{+}:g-\text{strictly decreasing, and}\] \[\exists_{\tilde{C}:\mathbb{R}_{+}\mapsto\mathbb{R}_{+}}\ \forall_{0<x,y<x_{0}}:\frac{g(y)}{g(x)}\leq\tilde{C}\Big{(}\frac{\log(x_{0}/y )}{\log(x_{0}/x)}\Big{)}\Big{\}}\] _which is parametrized by \(x_{0}>0\). The class \(\mathcal{G}_{x_{0}}\) is further called the class of positive log-decreasing functions. For the sake of brevity, if function \(g\) is defined on a domain broader than \((0,x_{0}),\) by condition \(g\in\mathcal{G}_{x_{0}}\) we usually mean \(g|_{(0,x_{0})}\in\mathcal{G}_{x_{0}}.\)_ **Fact 1**.: _For any \(x_{0}>0,\) class \(\mathcal{G}_{x_{0}}\) satisfies the following properties:_ 1. \(\mathcal{G}_{x_{0}}\) _is non-empty since_ \[g:(0,x_{0})\ni x\mapsto\log(x_{0}/x)\in\mathbb{R}_{+}\] _belongs to the class with_ \(\tilde{C}(x)=x.\)__ 2. _For every_ \(g_{1},g_{2}\in\mathcal{G}_{x_{0}},\) _one has that_ \[g_{1}+g_{2}:(0,x_{0})\ni x\mapsto g_{1}(x)+g_{2}(x)\in\mathbb{R}_{+}\] _and_ \[g_{1}g_{2}:(0,x_{0})\ni x\mapsto g_{1}(x)g_{2}(x)\in\mathbb{R}_{+}\] _belong to_ \(\mathcal{G}_{x_{0}}\)_._ 3. _For every_ \(g\in\mathcal{G}_{x_{0}}\) _and_ \(a>0,\) _one has that_ \[a+g:(0,x_{0})\ni x\mapsto a+g(x)\in\mathbb{R}_{+}\] _and_ \[a\cdot g:(0,x_{0})\ni x\mapsto a\cdot g(x)\in\mathbb{R}_{+}\] _belong to_ \(\mathcal{G}_{x_{0}}.\)__ 4. _For every_ \(g\in\mathcal{G}_{x_{0}}\) _and_ \(\alpha>0,\) _one has that_ \[g^{\alpha}:(0,x_{0})\ni x\mapsto(g(x))^{\alpha}\in\mathbb{R}_{+}\] _belongs to_ \(\mathcal{G}_{x_{0}}.\)__ 5. _For any_ \(g\in\mathcal{G}_{x_{0}}\)_,_ \(a>0\) _and_ \(x\in(0,x_{0}^{\frac{1}{a}})\) _one obtains that_ \[g(x^{a})\leq\tilde{C}(a)g(x_{0}^{\frac{a-1}{a}}x),\] _which results from the direct substitution_ \(y:=x_{0}^{1-a}x^{a}.\)__ Properties (P1)-(P4) guarantee that for any \(x_{0}>0\) class \(\mathcal{G}_{x_{0}}\) is rich in various functions. In fact, for any \(x_{0}>0\) class \(\mathcal{G}_{x_{0}}\) is a convex cone. On the other hand, property (P5) assures that the exponent can be reduced to the multiplicative constant. The following theorem provides the upper bound for the cost of the multilevel algorithm for the considered class of SDEs. We also provide an additional lower cost bound of the obtained estimator to stress that the cost is greater than one measured in a finite-dimensional setting. The cost in finite-dimensional setting is proportional to \(\varepsilon^{-2}(\log(\varepsilon))^{2}\). **Theorem 2**.: _Let \((a,b,c,\eta)\in\mathcal{F}(C,D,D_{L},\Delta,\varrho_{1},\varrho_{2},\nu)\) be the tuple of functions defining equation (1) with \(\varrho_{1},\varrho_{2}\in[1/2,1],\) so that \(\alpha=1/2.\) For any sufficiently small \(\varepsilon\geq 0\) there exists a multilevel algorithm \(\mathcal{ML}\) such that_ 1. \[\|\mathcal{ML}-\mathbb{E}(f(X(T)))\|_{L^{2}(\Omega)}\leq\varepsilon,\] 2. 
\[\mathrm{cost}(\mathcal{ML})\leq\begin{cases}c_{7}\varepsilon^{-2}(\log( \varepsilon^{-1}))^{2}\delta^{-1}(\varepsilon),\text{ for }\delta^{-1}\in \mathcal{G}_{\delta(1)}\\ c_{6}\varepsilon^{-2}(\log(\varepsilon^{-1}))^{2}\delta^{-1}(\varepsilon^{ \kappa_{cost}}),\text{ otherwise}\end{cases},\] 3. \[\mathrm{cost}(\mathcal{ML})\geq c_{9}\varepsilon^{-2}\Big{(}(\log( \varepsilon^{-1}))^{2}+\delta^{-1}(\varepsilon)\Big{)}.\] _for some positive constants \(c_{6},c_{7},c_{9}\) and \(\kappa_{cost}>1,\) depending only on the parameters of the class \(\mathcal{F}(C,D,D_{L},\Delta,\varrho_{1},\varrho_{2},\nu)\) and some \(\beta>1.\)_ Proof.: Without loss of generality, let \(D_{L}>1.\) Otherwise, note that any \(D_{L}\)-Lipschitz function satisfies Lipschitz condition with \(D_{L}\lor 1\) constant. Similarly w.l.o.g we assume that \(\delta(1)>1.\) Next, let \[n_{l}:=\lceil\beta^{l}\rceil\] for some \(\beta>1,\) so that \[h_{l}:=T\beta^{-l}\geq\frac{T}{n_{l}}.\] Knowing the exact rate of convergence which depends on \(\alpha\) and \(\delta,\) let \[M_{l}:=\lceil\delta^{-1}(\beta^{-\alpha(l+1)})\rceil,\] thus \[M_{l}\geq\delta^{-1}(\beta^{-\alpha(l+1)})\] \[\delta(M_{l})\leq\beta^{-\alpha(l+1)}=\Big{(}\frac{1}{T\beta} \Big{)}^{\alpha}h_{l}^{\alpha}.\] Note that \(\beta^{-\alpha(l+1)}\) falls into the domain of \(\delta^{-1}\) which is \((0,\delta(1)],\) since \(\beta>1\geq\delta(1)^{-1/\alpha}.\) From inequality (7) one obtains the following upper bound \[(\mathbb{E}(f(X(T)))-\mathbb{E}(\mathcal{ML}))^{2}\leq\kappa_{bias}(h_{L}^{2 \alpha}+\delta^{2}(M_{L}))\leq 2(1\vee(T\beta)^{-2\alpha})\kappa_{bias}h_{L}^{2 \alpha}=c_{1}h_{L}^{2\alpha}\] where \(c_{1}:=2(1\vee(T\beta)^{-2\alpha})\kappa_{bias}.\) Having set \[L:=\Big{\lceil}\frac{\log\left(\sqrt{2c_{1}}T^{\alpha}\varepsilon^{-1}\right) }{\alpha\log\beta}\Big{\rceil}, \tag{9}\] we get the desired upper bound for squared bias \[(\mathbb{E}(f(X(T)))-\mathbb{E}(\mathcal{ML}))^{2}\leq\frac{\varepsilon^{2}} {2}. \tag{10}\] Similarly, for such parameters \(n_{l}\) and \(M_{l}\) and from inequality (8) one obtains that \[v_{l}\leq\kappa_{var}(h_{l}^{2\alpha}+\delta^{2}(M_{l-1}))\leq 2(1\lor T^{-2 \alpha})\kappa_{var}h_{l}^{2\alpha}=c_{2}h_{l}^{2\alpha}\] where \[c_{2}:=2(1\lor T^{-2\alpha})\kappa_{var}.\] Recalling that \(\alpha=1/2\), the above inequality simplifies to \[v_{l}\leq c_{2}h_{l}.\] Utilizing the above property, we get the following upper bound for the optimal \(K_{l}\). Following the idea presented in [7], the actual value for \(K_{l}\) in the proof is further chosen to be equal to the obtained upper bound, namely \[K_{l} :=\Big{\lceil}2c_{2}\varepsilon^{-2}\frac{h_{l}}{\sqrt{M_{l}}} \sum_{k=0}^{L}\sqrt{M_{k}}\Big{\rceil}=\Big{\lceil}2c_{2}\varepsilon^{-2}\frac {h_{l}^{\alpha+1/2}}{\sqrt{M_{l}}}\sum_{k=0}^{L}h_{k}^{\alpha-1/2}\sqrt{M_{k}} \Big{\rceil}\] \[\geq\Big{\lceil}2\varepsilon^{-2}\sqrt{\frac{v_{l}h_{l}}{M_{l}}} \sum_{k=0}^{L}\sqrt{\frac{v_{k}M_{k}}{h_{k}}}\Big{\rceil}.\] Note, that \[1\leq\frac{\sum_{k=0}^{L}\sqrt{M_{k}}}{\sqrt{M_{l}}} \tag{11}\] for any \(L\in\mathbb{N}\) and \(l=0,\ldots,L\). Thus, from the above inequality in (11), we have that \[\operatorname{Var}(\mathcal{ML})=\sum_{l=0}^{L}\frac{v_{l}}{K_{l}}\leq\sum_{l =0}^{L}\frac{c_{2}h_{l}}{2c_{2}\varepsilon^{-2}h_{l}}=\frac{\varepsilon^{2}} {2}. \tag{12}\] Henceforth, together from (10) and (12), we obtain that \[\|\mathbb{E}(f(X(T)))-\mathcal{ML}\|_{L^{2}(\Omega)}\leq\varepsilon,\] which means that our algorithm does not exceed the desired mean squared error. 
Next, we focus on the value of \(\operatorname{cost}(\mathcal{ML})\). From assumption that \(\beta>1\) we obtain \[M_{l}=\lceil\delta^{-1}(\beta^{-\alpha(l+1)})\rceil\leq\Big{(}1+\frac{1}{ \delta^{-1}(1)}\Big{)}\delta^{-1}(\beta^{-\alpha(l+1)})\] for every \(l=0,\ldots,L\). Assuming that \(\varepsilon\) is sufficiently small, i.e \(\varepsilon<1/e\), from (9) one obtains that \[L+1\leq c_{4}\log(\varepsilon^{-1})\] for \[c_{4}:=\Big{(}0\vee\frac{\log(\sqrt{2c_{1}}T^{\alpha})}{\alpha\log\beta} \Big{)}+\frac{1}{\alpha\log\beta}+2\] which was also stressed in [7]. From this property, we obtain that \[\delta^{-1}(\beta^{-\alpha(L+1)})\leq\delta^{-1}(\varepsilon^{c_{4}\alpha \log\beta})=\delta^{-1}(\varepsilon^{\kappa_{cost}}),\] where \(\kappa_{cost}:=c_{4}\alpha\log\beta\). Therefore, \[M_{l}\leq c_{5}\delta^{-1}(\varepsilon^{\kappa_{cost}})\] for \(l=0,\ldots,L\) where \(c_{5}:=\Big{(}1+\frac{1}{\delta^{-1}(1)}\Big{)}.\) Since \[K_{l}\leq 2c_{2}\varepsilon^{-2}\frac{h_{l}}{\sqrt{M_{l}}}\sum_{k=0}^{L}\sqrt{ M_{k}}+1,\] the following inequality holds true \[K_{l}M_{l}n_{l}\leq(2c_{2}\varepsilon^{-2}\frac{h_{l}}{\sqrt{M_{l}}}\sum_{k=0}^{L} \sqrt{M_{k}}+1)M_{l}\frac{2T}{h_{l}}.\] It results in \[\sum_{l=0}^{L}K_{l}M_{l}n_{l} \leq\sum_{l=0}^{L}(2c_{2}\varepsilon^{-2}\frac{h_{l}}{\sqrt{M_{l }}}\sum_{k=0}^{L}\sqrt{M_{k}}+1)M_{l}\frac{2T}{h_{l}}\] \[\leq 2T\Big{(}2c_{2}\varepsilon^{-2}+\sum_{l=0}^{L}h_{l}^{-1} \Big{)}(L+1)^{2}M_{L}.\] Similarly as in [7], note that \[h_{l}^{-1}=T^{-1}\beta^{l}=T^{-1}\beta^{L}\beta^{l-L}=h_{L}^{-1}\beta^{l-L}\] and from (9) \[L-1\leq\frac{\log(\sqrt{2c_{1}}T^{\alpha}\varepsilon^{-1})}{ \alpha\log\beta}\] \[h_{L}^{-\alpha}=(T^{-1}\beta^{L})^{\alpha}\leq\sqrt{2c_{1}}\beta ^{\alpha}\varepsilon^{-1}\] \[h_{L}^{-1}\leq(\sqrt{2c_{1}})^{\frac{1}{\alpha}}\beta\varepsilon ^{-1/\alpha}=2c_{1}\beta\varepsilon^{-2}.\] From these inequalities, one obtains that \[\sum_{l=0}^{L}h_{l}^{-1}=h_{L}^{-1}\sum_{l=0}^{L}\beta^{l-L}\leq\frac{(\sqrt{ 2c_{1}})^{\frac{1}{\alpha}}\beta^{2}}{\beta-1}\varepsilon^{-2}.\] From previously obtained upper bounds we get that \[\sum_{l=0}^{L}K_{l}M_{l}n_{l} \leq 2T\Big{(}2c_{2}+\frac{(\sqrt{2c_{1}})^{\frac{1}{\alpha}} \beta^{2}}{\beta-1}\Big{)}\varepsilon^{-2}(L+1)^{2}M_{L}\] \[\leq 2T\Big{(}2c_{2}+\frac{(\sqrt{2c_{1}})^{\frac{1}{\alpha}} \beta^{2}}{\beta-1}\Big{)}c_{4}^{2}c_{5}\varepsilon^{-2}(\log(\varepsilon^{-1 }))^{2}\delta^{-1}(\varepsilon^{\kappa})\] \[=c_{6}\varepsilon^{-2}(\log(\varepsilon^{-1}))^{2}\delta^{-1}( \varepsilon^{\kappa})\] where \(c_{6}:=2T\Big{(}2c_{2}+\frac{(\sqrt{2c_{1}})^{\frac{1}{\alpha}}\beta^{2}}{ \beta-1}\Big{)}c_{4}^{2}c_{5}.\) Note that \(\kappa_{cost}=c_{4}\alpha\log\beta>1\), thus \[\delta^{-1}(\varepsilon^{\kappa_{cost}})\geq\delta^{-1}(\varepsilon).\] Furthermore, if \(\varepsilon<(\delta(1))^{1/\kappa_{cost}}\) and \(\delta^{-1}\in\mathcal{G}_{\delta(1)}\), from property (P5) in fact 1, we obtain that \[\delta^{-1}(\varepsilon^{\kappa_{cost}})\leq\tilde{C}(\kappa_{cost})\delta^{- 1}(c_{8}\varepsilon)\leq\tilde{C}(\kappa_{cost})\delta^{-1}(\varepsilon)\] where \(c_{8}:=(\delta(1))^{\frac{\kappa_{\text{cost}}-1}{\kappa_{\text{cost}}}}>1\). It completes the proof for the upper complexity bounds of the algorithm with \(c_{7}:=c_{6}\tilde{C}(\kappa_{\text{cost}})\). We now proceed with additional lower complexity bound. 
From \[\kappa_{bias}\delta^{2}(M_{L})\leq\kappa_{bias}(h_{L}^{2\alpha}+\delta^{2}(M_{ L}))\leq\varepsilon^{2}\] and the observation that \(\sqrt{\kappa_{bias}}=\sqrt{2}D_{L}\tilde{\kappa}>1\), we obtain that \[M_{L}\geq\delta^{-1}\Big{(}\frac{\varepsilon}{\sqrt{\kappa_{bias}}}\Big{)}> \delta^{-1}(\varepsilon).\] Similarly, from the lower bounds on \(K_{l}\) and \(n_{l}\) we obtain \[\text{cost}(\mathcal{ML}) \geq\sum_{l=0}^{L}\Big{(}2c_{2}\varepsilon^{-2}\frac{h_{l}}{ \sqrt{M_{l}}}\sum_{k=0}^{L}\sqrt{M_{k}}\Big{)}\frac{2TM_{l}}{h_{l}}\] \[\geq 4Tc_{2}\varepsilon^{-2}\Big{(}L^{2}+M_{L}\Big{)}\] \[\geq 4Tc_{2}(1\wedge(\alpha\log\beta)^{-1})\varepsilon^{-2}\Big{(} \log(\sqrt{2c_{1}}T^{\alpha}\varepsilon^{-1})+\delta^{-1}(\varepsilon)\Big{)}\] \[\geq c_{9}\varepsilon^{-2}\Big{(}(\log(\varepsilon^{-1}))^{2}+ \delta^{-1}(\varepsilon)\Big{)}\] where \[c_{9}:=4Tc_{2}(1\wedge(\alpha\log\beta)^{-1})\] and \[\sqrt{2c_{1}}T^{\alpha}=2\sqrt{\kappa_{bias}}(T^{\alpha}\vee\beta ^{-\alpha})\] \[\geq 2\sqrt{\kappa_{bias}}T^{\alpha}=2\sqrt{2}D_{L}\tilde{\kappa} T^{\alpha}\] \[=2\sqrt{2}D_{L}(1\lor T^{-\alpha})T^{\alpha}\kappa\geq 2\sqrt{2}D_{L }\kappa>1,\] which completes the proof. ## 6. Numerical experiments In this section, we compare the results from numerical experiments carried out for both standard and multilevel Monte Carlo methods. The implementation utilizes both Python and CUDA C programming languages as well as the PyCuda package which allows calling CUDA kernels from the Python level. The pseudo-code for the algorithm that dynamically estimates the number of levels and the variance is similar to the one presented in [7]. For upper error bound \(\varepsilon>0\), the parameters of the standard Monte Carlo algorithm were set as defined in section 4, resulting in the expected cost asymptotically proportional to \(\Theta(\varepsilon^{-4}\delta^{-1}(\varepsilon))\). The main parameters of the multilevel method were set as it was defined in the proof of theorem 4 with \(\beta=2\), i.e \(n_{l}=2^{l}\) and \(M_{l}=\lceil\delta^{-1}(2^{-(l+1)/2})\rceil\) for \(l\in\mathbb{N}\). The following part of this chapter corresponds to the remaining parameters of an algorithm which were dynamically estimated via the procedure presented in [7]. Until the next subsection let the superscript of any estimator denote the iteration number of an algorithm. Thus, let \(\widehat{L}^{(i)}\) denote the number of levels (excluding zero-level) at \(i\)-th iteration of the procedure. The number of levels is updated concerning the following formula \[\widehat{L}^{(i+1)}=\begin{cases}\widehat{L}^{(i)},\ (i>2)\ \text{and}\ ( convergence\_error(i)<0)\\ \widehat{L}^{(i)}+1,\ \text{otherwise}\end{cases}\] for \(i\in\mathbb{N}_{+}\) and \(\widehat{L}^{(1)}=0\), meaning that we start our procedure with single level. On the other hand, the final iteration \(\mathrm{id}\) is denoted by \(fin:=\min\{i\in\mathbb{N}:\widehat{L}^{(i)}=\widehat{L}^{(i+1)}\}\). Since the optimal values for \(\{K_{l}\}\) depend on the variances of corresponding levels, at \(i\)-th iteration one estimates \(K_{l}\) with \(\widehat{K}^{(i+1)}_{l}\) utilizing proper variance estimate \(\widehat{v}^{(i)}_{l}\) of \(v_{l}\). The variance estimates are updated regarding the following formula \[\widehat{v}^{(i)}_{l}=\begin{cases}\text{estimate of $v_{l}$ with $10^{3}\ samples,\ l=\widehat{L}^{(i)}$}\\ \text{estimate of $v_{l}$ with $\widehat{K}^{(i)}_{l}$ samples,\ l=0,..., \widehat{L}^{(i)}-1$}\end{cases}\] for \(l=0,...,\widehat{L}^{(i)}\). 
This means that the variance of the current top level is always estimated with \(1000\) samples, and the number of samples \(\widehat{K}^{(i+1)}_{l}\) per level is updated according to the formula
\[\widehat{K}^{(i+1)}_{l}:=\left\lceil 2\varepsilon^{-2}\sqrt{\frac{\widehat{v}^{(i)}_{l}}{M_{l}n_{l}}}\sum_{k=0}^{\widehat{L}^{(i)}}\sqrt{\widehat{v}^{(i)}_{k}M_{k}n_{k}}\right\rceil\]
for \(l=0,\ldots,\widehat{L}^{(i)}\). Finally, let
\[\widehat{Y}^{(i)}_{l}:=\frac{1}{\widehat{K}^{(i)}_{l}}\sum_{i_{l}=1}^{\widehat{K}^{(i)}_{l}}(f(X^{RE,i_{l}}_{M_{l},n_{l}})-f(X^{RE,i_{l}}_{M_{l-1},n_{l-1}})),\]
so that the convergence error function is defined as
\[convergence\_error:\mathbb{N}_{+}\setminus\{1,2\}\ni i\mapsto\Big{(}\frac{1}{2}|\widehat{Y}^{(i)}_{\widehat{L}^{(i)}-1}|\vee|\widehat{Y}^{(i)}_{\widehat{L}^{(i)}}|\Big{)}-(\sqrt{2}-1)\frac{\varepsilon}{\sqrt{2}}.\]
From now on, by \(\widehat{\mathcal{ML}}(\varepsilon)\) we denote the value of the multilevel algorithm that uses the aforementioned estimates \(\widehat{L}^{(fin)},\{\widehat{K}^{(fin)}_{l}\}_{l=0}^{\widehat{L}^{(fin)}}\) and the parameters \(\{M_{l}\}_{l\in\mathbb{N}},\{n_{l}\}_{l\in\mathbb{N}}\) defined above.

### Example equation

In the numerical experiments, we used the following equation and payoff function.

**Example** (**Merton model with call option payoff**).: Let us consider the following equation
\[X(t)=\eta+\int\limits_{0}^{t}\mu X(s)\mathrm{d}s+\sum\limits_{j=1}^{+\infty}\int\limits_{0}^{t}\frac{\sigma_{j}}{j^{\alpha}}X(s)\mathrm{d}W_{j}(s)+\int\limits_{0}^{t}X(s-)\mathrm{d}L(s),\quad t\in[0,T], \tag{13}\]
where \(\eta,\mu\in\mathbb{R}\), \(\alpha\geq 1\), \((\sigma_{j})_{j=1}^{+\infty}\) is a bounded sequence of positive real numbers, and \(L=(L(t))_{t\in[0,T]}\) is a compound Poisson process with intensity \(\lambda>0\) and jump heights \((\xi_{i})_{i=1}^{+\infty}\). The solution of the equation (13) is given by the formula
\[X(t)=\eta\exp\biggl{[}\biggl{(}\mu-\frac{1}{2}\sum_{j=1}^{+\infty}\frac{\sigma_{j}^{2}}{j^{2\alpha}}\biggr{)}t+\sum_{j=1}^{+\infty}\frac{\sigma_{j}}{j^{\alpha}}W_{j}(t)\biggr{]}\prod_{i=1}^{N(t)}(1+\xi_{i}).\]
This solution can be simulated on a computer by truncating the infinite sums in the above formula; the resulting approximation is denoted by \(X_{M}(t)\) for \(M\in\mathbb{N}_{+}\). For simulation purposes, we set \(\mu=0.08\), \(\sigma_{j}=\sigma=0.4\) for \(j\in\mathbb{N}\), and \(\alpha=T=\eta=\lambda=1\), with the call option payoff \(f(x):=(x-1)\lor 0\). Let \((Y_{i})_{i=1}^{+\infty}\) be a sequence of independent random variables that are normally distributed with zero mean and unit variance. We assume that the sequence of jump heights is defined by \(\xi_{i}=-0.5\cdot\mathds{1}_{(-\infty,0]}(Y_{i})+(0.5+Y_{i})\cdot\mathds{1}_{(0,+\infty)}(Y_{i})\). Since the exact solution of the equation is known, the corresponding value of \(\mathbb{E}(f(X(T)))\) can be estimated with the standard Monte Carlo method, i.e.,
\[\mathbb{E}(f(X(T)))\approx\frac{1}{10^{6}}\sum_{k=1}^{10^{6}}f(X_{12\cdot 10^{3}}^{(k)}(T)).\]
Thus, for both the standard and the multilevel Monte Carlo algorithms, we estimate the corresponding errors in the \(L^{2}(\Omega)\) norm using the formula
\[\widehat{e}_{K}(Y):=\left(\frac{1}{K}\sum_{i=1}^{K}\left|Y^{(i)}(\varepsilon)-\frac{1}{10^{6}}\sum_{k=1}^{10^{6}}f(X_{12\cdot 10^{3}}^{(k)}(T))\right|^{2}\right)^{1/2},\]
where \(Y\in\{\mathcal{MC}(\varepsilon),\widehat{\mathcal{ML}}(\varepsilon)\}\).
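As a small illustration of this example (not the code used for the experiments), the following sketch simulates the truncated exact solution \(X_{M}(T)\) and a plain Monte Carlo estimate of \(\mathbb{E}(f(X(T)))\); the parameter values follow the example above, while the truncation dimension and the sample size are reduced so that the snippet runs quickly.

```
import numpy as np

rng = np.random.default_rng(0)

def truncated_exact_solution(M, T=1.0, eta=1.0, mu=0.08, sigma=0.4, alpha=1.0, lam=1.0):
    # one sample of X_M(T) obtained from the exact formula with the series truncated at M
    j = np.arange(1, M + 1)
    sig = sigma / j**alpha
    W_T = rng.normal(0.0, np.sqrt(T), size=M)           # W_j(T) for j = 1, ..., M
    drift = (mu - 0.5 * np.sum(sig**2)) * T
    diffusion = np.sum(sig * W_T)
    N_T = rng.poisson(lam * T)                          # number of jumps on [0, T]
    Y = rng.normal(size=N_T)
    xi = np.where(Y <= 0, -0.5, 0.5 + Y)                # jump heights as in the example
    return eta * np.exp(drift + diffusion) * np.prod(1.0 + xi)

def payoff(x):
    return max(x - 1.0, 0.0)                            # call option payoff (x - 1) v 0

# reduced-size reference estimate; the paper uses 10^6 samples and M = 12 * 10^3
samples = [payoff(truncated_exact_solution(M=200)) for _ in range(10_000)]
print(np.mean(samples))
```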
Since the cost of \(\widehat{\mathcal{ML}}(\varepsilon)\) is a random variable, we estimate it with the mean cost, i.e.,
\[\mathrm{cost}(\widehat{\mathcal{ML}}(\varepsilon)):=\frac{1}{K}\sum_{i=1}^{K}\mathrm{cost}(\widehat{\mathcal{ML}}^{(i)}(\varepsilon))=\frac{1}{K}\sum_{i=1}^{K}\sum_{l=0}^{\widehat{L}^{(fin),i}}\widehat{K}_{l}^{(fin),i}M_{l}n_{l}.\]
In the numerical experiments, both \(\widehat{e}_{10^{4}}(\mathcal{MC}(\varepsilon))\) and \(\widehat{e}_{10^{3}}(\widehat{\mathcal{ML}}(\varepsilon))\) were evaluated for various values of \(\varepsilon>0\). In Figure 1 the reader can find a log-log plot of \(\widehat{e}_{10^{4}}(\mathcal{MC}(\varepsilon))\) against \(\mathrm{cost}(\mathcal{MC}(\varepsilon))\), together with the expected theoretical slope. In Figure 2 one can find the analogous plot of \(\widehat{e}_{10^{3}}(\widehat{\mathcal{ML}}(\varepsilon))\) against \(\mathrm{cost}(\widehat{\mathcal{ML}}(\varepsilon))\), together with a comparison with the expected cost upper bound, whose unknown constants were obtained via nonlinear regression. Finally, Figure 3 compares the costs of the two algorithms with respect to their errors.

Figure 1. Monte Carlo error vs cost

Figure 2. Multilevel Monte Carlo error vs cost

Figure 3. Standard Monte Carlo cost vs Multilevel Monte Carlo cost

### Details on the implementation

For the convenience of the reader, we provide the following code listings that contain the implementation of the algorithm. A single step of the truncated dimension randomized Euler algorithm was implemented as a CUDA device function. See listing 3 in [5].

```
__device__ FP sample_from_Yl(int ML, int Ml, int nL, int nl, FP x0, FP T, curandState_t* state){
    //(1)
    FP* dWL = (FP*)malloc(sizeof(FP)*ML);
    memset(dWL, 0, sizeof(FP)*ML);
    FP* dWl = (FP*)malloc(sizeof(FP)*Ml);
    memset(dWl, 0, sizeof(FP)*Ml);
    //(2)
    FP tL = 0;
    FP tl = 0;
    FP XL = x0;
    FP Xl = x0;
    //(3)
    int grid_density = LCM(nL, nl);
    FP H = T/grid_density;
    //(4)
    Jump* jumps_head = (Jump*)malloc(sizeof(Jump));
    generate_jumps<FP>(state, INTENSITY, T, jumps_head);
    Jump* jump_L = jumps_head;
    Jump* jump_l = jumps_head;
    //(5)
    FP t, dW;
    for(int i = 0; i < grid_density; i++){
        t = (i+1)*H;
        //(6)
        for(int k = 0; k < ML; k++){
            dW = (FP)(curand_normal(state)*sqrt(H));
            dWL[k] += dW;
            if(k < Ml){
                dWl[k] += dW;
            }
        }
        // ... remainder of the listing (steps (7) and (8)) omitted in this excerpt
```

* 1. Initializing sparse and dense grid Wiener increments.
* 2. Initializing temporary variables for sparse and dense grid trajectories.
* 3. Getting the least common multiple of the grid densities.
* 4. Generating all jumps.
* 5. Traversing through the grid points.
* 6. Updating the Wiener increments.
* 7. Updating the dense grid trajectory value.
* 8. Updating the sparse grid trajectory value.

On the very top of the abstraction hierarchy, there is an implementation of the multilevel method in Python that makes direct calls to the CUDA kernels via the PyCuda API. The implementation is shown in Listing 2.

```
def run_adaptive(self, x0: float, T: float, M: Callable, n: Callable, eps: float = 1e-3) -> Tuple[float, float]:
    # (np, Callable and Tuple are imported in the surrounding module)
    #(1)
    L = 0
    Y, V, N = [], [], []
    Yl1, Yl2 = np.inf, np.inf
    beta = n(1) / n(0)
    convergence_err = np.inf
    #(2)
    while convergence_err > 0:
        N.append(10**3)
        #(3)
        YL, VL = self.__get_Yl(level=L, M=M, n=n, N=N[L], x0=x0, T=T)
        Y.append(YL)
        V.append(VL)
        _N = get_N(M, n, V, eps)
        # ... remainder of the listing (steps (4)-(6)) omitted in this excerpt
```

* (1) Initializing local variables.
* (2) Running the main loop of the procedure.
* (3) Estimating the expectation and the expectation of a squared payoff with a direct CUDA kernel call.
* (4) Updating the number of samples, expectations, and variance estimates per level if needed.
* (5) Calculating the convergence error.
* (6) Returning the resulting estimate and the corresponding informational cost of the algorithm.
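A possible invocation of the adaptive routine above is sketched below; the estimator object name is hypothetical, and the truncation rule uses, purely for illustration, the assumption \(\delta^{-1}(x)=1/x\), matching the level parameters \(n_{l}=2^{l}\) and \(M_{l}=\lceil\delta^{-1}(2^{-(l+1)/2})\rceil\) described earlier.

```
import math

# level parameters as in the experiments: n_l = 2^l and M_l = ceil(delta_inv(2^{-(l+1)/2}))
def n(l):
    return 2**l

def M(l, delta_inv=lambda x: 1.0 / x):   # delta_inv is an assumed, model-dependent rate
    return math.ceil(delta_inv(2.0**(-(l + 1) / 2)))

# hypothetical estimator object wrapping the CUDA kernels shown above
# estimator = MLMCEstimator(...)
# value, cost = estimator.run_adaptive(x0=1.0, T=1.0, M=M, n=n, eps=1e-3)
```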
## 7. Conclusions

In this paper, we analyzed the multilevel Monte Carlo method for SDEs driven by a countably dimensional Wiener process and a Poisson random measure, both theoretically and through numerical experiments. The main theorem shows that the multilevel Monte Carlo method can be applied to a class of SDEs whose coefficients satisfy certain regularity conditions, including a discontinuous drift and Hölder-continuous diffusion and jump coefficients with Hölder exponents greater than or equal to one-half. Under the provided complexity model, the resulting informational cost is reduced in a similar fashion as in the finite-dimensional case. The resulting statement coincides with the case of a finite-dimensional Wiener process (see [7]), meaning that the cost is reduced from \(\Theta(\varepsilon^{-4})\) to \(\Theta(\varepsilon^{-2}(\log(\varepsilon^{-1}))^{2})\). In the infinite-dimensional case we obtained a reduction from \(\Theta(\varepsilon^{-4}\delta^{-1}(\varepsilon))\) to \(\mathcal{O}(\varepsilon^{-2}(\log(\varepsilon^{-1}))^{2}\delta^{-1}(\varepsilon^{\kappa}))\) with a possibly unknown constant \(\kappa>1\). The impact of the unknown constant can be mitigated if the inverse of \(\delta\) belongs to a certain class of functions. The multilevel Monte Carlo method, which depends on an additional set of parameters (the truncation dimension parameters), is therefore a natural extension of the multilevel method that depends only on the grid density. On the other hand, the lower cost bound for the multilevel method shows that the cost is always greater than the one obtained for the multilevel method for SDEs driven by a finite-dimensional Wiener process. The lower cost bound is (up to a constant) proportional to \(\varepsilon^{-2}(\log(\varepsilon^{-1}))^{2}+\varepsilon^{-2}\delta^{-1}(\varepsilon)\), which is equal to the sum of the costs of the multilevel method in the finite-dimensional setting and of the truncated dimension Euler algorithm. This is a natural consequence of the combined usage of those two algorithms. To conclude, this paper paves the way for further research regarding the following open questions:

* Can the unknown constant in the exponent of the cost upper bound of the multilevel Monte Carlo method be mitigated in general?
* What is the cost upper bound if one of the Hölder exponents is less than one-half?
* What are the worst-case complexity lower bounds?

In future research, we plan to investigate the error of the multilevel Monte Carlo method under inexact information for the weak approximation of solutions of SDEs.

## 8. Acknowledgments

I would like to thank my supervisor Pawel Przybylowicz for guidance and inspiration to work on that topic.
2309.07587
The edge rings of compact graphs
We define a simple graph as compact if it lacks even cycles and satisfies the odd-cycle condition. Our focus is on classifying all compact graphs and examining the characteristics of their edge rings. Let $G$ be a compact graph and $\mathbb{K}[G]$ be its edge ring. Specifically, we demonstrate that the Cohen-Macaulay type and the projective dimension of $\mathbb{K}[G]$ are both equal to the number of induced cycles of $G$ minus one, and that the regularity of $\mathbb{K}[G]$ is equal to the matching number of $G_0$. Here, $G_0$ is obtained from $G$ by removing the vertices of degree one successively, resulting in a graph where every vertex has a degree greater than 1.
Zexin Wang, Dancheng Lu
2023-09-14T10:45:11Z
http://arxiv.org/abs/2309.07587v2
# The edge rings of compact graphs

###### Abstract.

We define a simple graph as compact if it lacks even cycles and satisfies the odd-cycle condition. Our focus is on classifying all compact graphs and examining the characteristics of their edge rings. Let \(G\) be a compact graph and \(\mathbb{K}[G]\) be its edge ring. Specifically, we demonstrate that the Cohen-Macaulay type and the projective dimension of \(\mathbb{K}[G]\) are both equal to the number of induced cycles of \(G\) minus one and that the regularity of \(\mathbb{K}[G]\) is equal to the matching number of \(G_{0}\). Here, \(G_{0}\) is obtained from \(G\) by removing the vertices of degree one successively, resulting in a graph where every vertex has a degree greater than \(1\).

Key words and phrases: Compact graph, Odd-cycle condition, Regularity, Projective dimension, Canonical module, Euler formula 2010 Mathematics Subject Classification: Primary 05E40, 13A02; Secondary 06D50

## Introduction

Recently, many authors have investigated the algebraic properties of edge rings of simple graphs. Consider a simple graph \(G=(V,E)\) with vertex set \(V=\{x_{1},\ldots,x_{n}\}\) and edge set \(E=\{e_{1},\ldots,e_{r}\}\). The _edge ring_ \(\mathbb{K}[G]\) is defined to be the toric ring \(\mathbb{K}[x_{e}\colon\ e\in E(G)]\subset\mathbb{K}[x_{1},\ldots,x_{n}]\), where \(x_{e}=\prod_{x_{i}\in e}x_{i}\) for all \(e\in E(G)\). Let \(\mathbb{K}[E(G)]\) (or \(\mathbb{K}[E]\) for short) denote the polynomial ring \(\mathbb{K}[e_{1},\ldots,e_{r}]\) in variables \(e_{1},\ldots,e_{r}\). Then, there is exactly one ring homomorphism \(\phi:\mathbb{K}[E(G)]\rightarrow\mathbb{K}[V]\) such that \(e_{i}\mapsto x_{e_{i}}\), \(i=1,\ldots,r\). The kernel of the homomorphism \(\phi\) is called the _toric ideal_ or the _defining ideal_ of \(\mathbb{K}[G]\) or \(G\), which is denoted by \(I_{G}\). It follows that \(\mathbb{K}[G]\cong\mathbb{K}[E(G)]/I_{G}\). The main focus of these studies is to establish connections between the combinatorial properties of simple graphs and the algebraic properties of their edge rings, see, e.g., [2, 3, 6, 7, 8, 9, 13, 14]. In 1999, Ohsugi and Hibi established in [14] that \(\mathbb{K}[G]\) is a normal domain if and only if \(G\) satisfies the odd-cycle condition. Recall that a simple graph is said to satisfy the _odd-cycle_ condition if, for every pair of cycles \(C_{1}\) and \(C_{2}\), either \(C_{1}\) and \(C_{2}\) have at least one vertex in common or there is an edge that connects a vertex of \(C_{1}\) to a vertex of \(C_{2}\). We call a simple graph _compact_ if it not only satisfies the odd-cycle condition but also contains no even cycles. In this paper, we investigate the properties of the edge rings of compact graphs. Let \(G\) be a compact graph. The main results of this paper can be summarized as follows. Firstly, we demonstrate that the projective dimension and Cohen-Macaulay type of \(\mathbb{K}[G]\) are both equal to the number of the induced cycles of \(G\) minus one. Additionally, we show that the regularity of \(\mathbb{K}[G]\) coincides with the matching number of \(G_{0}\). Here, \(G_{0}\) refers to the graph derived from \(G\) by successively removing all vertices of degree one. This finding serves as an interesting complement to the result presented in [11, Theorem 1 (a)], which states that if \(G\) is a non-bipartite graph satisfying the odd-cycle condition, then the regularity of \(\mathbb{K}[G]\) does not exceed the matching number of \(G\). 
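As a quick illustration of the matching number appearing in these statements (not part of the original text), the following sketch computes \(\operatorname{mat}(G)\) with the networkx package for the smallest non-trivial compact graph, two triangles glued at a vertex; since this graph has no vertices of degree one, it coincides with its \(G_{0}\).

```
import networkx as nx

# two odd cycles (triangles) glued at the vertex u
G = nx.Graph()
G.add_edges_from([
    ("u", "a1"), ("a1", "a2"), ("a2", "u"),   # first triangle
    ("u", "b1"), ("b1", "b2"), ("b2", "u"),   # second triangle
])

# with unit edge weights, a maximum-weight matching is a maximum-cardinality matching
matching = nx.max_weight_matching(G)
print(len(matching))   # matching number mat(G); here it equals 2
```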
Finally, we determine the top graded Betti numbers of \(\mathbb{K}[G]\). Here, for any simple graph \(G\), a _matching_ of \(G\) is a subset \(M\subset E(G)\) where \(e\cap e^{\prime}=\emptyset\) for any distinct edges \(e,e^{\prime}\in M\), and the _matching number_ of \(G\), denoted by \(\operatorname{mat}(G)\), is the maximal cardinality of matchings of \(G\). The paper is organized as follows. Section 1 provides a brief overview of toric ideals of graphs and canonical modules. Section 2 classifies the compact graphs up to the (essentially) same edge rings. In Section 3 we compute the universal Gröbner bases for the toric ideals of compact graphs and then obtain their initial ideals with respect to some suitable monomial order. Let \(G\) be a compact graph. In Section 4, we show that all initial ideals obtained in Section 3 possess a "good" E-K splitting, enabling us to use induction to present a simple formula for the total Betti numbers of such ideals. Consequently, the regularity, projective dimension, and an upper bound for the Cohen-Macaulay type of \(\mathbb{K}[G]\) are derived. Section 5 provides the top graded Betti numbers for \(\mathbb{K}[G]\) by computing the minimal generators of its canonical module. In Section 6, a question regarding the Betti numbers for \(\mathbb{K}[G]\) is posed.

## 1. Preliminaries

In this section, we provide a brief review of the notation and fundamental facts that will be utilized later on.

### Betti numbers and Canonical modules

Let \(R:=\mathbb{K}[x_{1},\ldots,x_{n}]\) be the polynomial ring in variables \(x_{1},\ldots,x_{n}\), which is standard graded. For a finitely generated graded \(R\)-module \(M\), the minimal graded free resolution of \(M\) has the form:
( \[\S\] ) \[0\rightarrow\underset{j\in\mathbb{Z}}{\bigoplus}R[-j]^{\beta_{p,j}(M)}\rightarrow\cdots\rightarrow\underset{j\in\mathbb{Z}}{\bigoplus}R[-j]^{\beta_{1,j}(M)}\rightarrow\underset{j\in\mathbb{Z}}{\bigoplus}R[-j]^{\beta_{0,j}(M)}\to M\to 0.\]
Here, \(R[-j]\) is the cyclic free \(R\)-module generated in degree \(j\). The number \(\beta_{i,j}(M):=\dim_{\mathbb{K}}\text{Tor}_{i}^{R}(M,\mathbb{K})_{j}\) is called the \((i,j)\)-th _graded Betti number_ of \(M\) and \(\beta_{i}(M):=\sum_{j\in\mathbb{Z}}\beta_{i,j}(M)\) is called the \(i\)-th _total Betti number_ of \(M\). Many homological invariants of \(M\) can be defined in terms of its minimal graded free resolution. The _Castelnuovo-Mumford regularity_ and _projective dimension_ of \(M\) are defined to be
\[\operatorname{reg}\left(M\right):=\max\left\{j-i\mid\beta_{i,\,j}(M)\neq 0\right\}\]
and
\[\operatorname{pdim}\left(M\right):=\max\left\{i\mid\beta_{i,\,j}(M)\neq 0\text{ for some }j\right\}\text{.}\]
Denote \(\operatorname{pdim}\left(M\right)\) by \(p\). Then, \(\beta_{p}(M)\) and \(\beta_{p,j}(M),j\in\mathbb{Z}\) are referred to as the _top total Betti number_ and the _top graded Betti numbers_ of \(M\), respectively. By applying the functor \(\operatorname{Hom}_{R}(-,R[-n])\) to the sequence (\(\S\)), we obtain the following complex:
\[0 \to\operatorname{Hom}_{R}(F_{0},R[-n])\to\operatorname{Hom}_{R}(F_{1},R[-n])\to\cdots\]
\[\to\operatorname{Hom}_{R}(F_{p},R[-n])\to\operatorname{Ext}_{R}^{p}(M,R[-n])\to 0.\]
Here, \(F_{i}\) denotes the free module \(\bigoplus\limits_{j\in\mathbb{Z}}R[-j]^{\beta_{i,j}(M)}\). Assume further that \(M\) is Cohen-Macaulay. 
Then, it follows from the local duality (see [1]) that the above complex is exact and so it is a minimal free resolution of \(\operatorname{Ext}_{R}^{p}(M,R[-n])\). The module \(\operatorname{Ext}_{R}^{p}(M,R[-n])\), also denoted by \(\omega_{M}\), is called the _canonical module_ of \(M\). Note that \[\operatorname{Hom}_{R}(F_{i},R[-n])=\bigoplus\limits_{j\in\mathbb{Z}} \operatorname{Hom}_{R}(R[-j]^{\beta_{i,j}(M)},R[-n])=\bigoplus\limits_{j\in \mathbb{Z}}R[-n+j]^{\beta_{i,j}(M)}.\] Hence, we have the following well-known result. **Lemma 1.1**.: _Let \(M\) be a Cohen-Macaulay graded \(R=\mathbb{K}[x_{1},\ldots,x_{n}]\)-module, and \(\omega_{M}\) its canonical module. Assume \(p=\operatorname{pdim}(M)\). Then \(\beta_{i,j}(\omega_{M})=\beta_{p-i,n-j}(M)\) for all \(i,j\)._ The _Cohen-Macaulay type_ of a finitely generated Cohen-Macaulay \(R\)-module \(M\) is defined to be the number \[\operatorname{type}(M):=\beta_{p}(M)=\beta_{0}(\omega_{M}),\] where \(p\) is the projective dimension of \(M\). In the following, we will consider the case when \(M=\mathbb{K}[G]\) as a \(\mathbb{K}[E(G)]\)-module. ### Toric ideals of graphs Let \(G\) be a simple graph, i.e., a finite graph without loops and multiple edges, with vertex set \(V(G)\) and edge set \(E(G)\). A _matching_ of \(G\) is a subset \(M\subset E(G)\) for which \(e\cap e^{\prime}=\emptyset\) for \(e\neq e^{\prime}\) belonging to \(M\). The _matching number_, denoted by \(\operatorname{mat}(G)\), is the maximal cardinality of matchings of \(G\). Recall that a walk of \(G\) of length \(q\) is a subgraph \(W\) of \(G\) such that \(E(W)=\{\{v_{0},v_{1}\},\{v_{1},v_{2}\},\ldots,\{v_{q-1},v_{q}\}\}\), where \(v_{0},v_{1},\ldots,v_{q}\) are vertices of \(G\). A walk \(W\) of \(G\) is even if \(q\) is even, and it is closed if \(v_{0}=v_{q}\). A cycle with edge set \(\{\{v_{0},v_{1}\},\{v_{1},v_{2}\},\{v_{q-1},v_{q}=v_{0}\}\}\) is a special closed walk where \(v_{1},\ldots,v_{q}\) are pairwise distinct and \(q\geq 3\). A cycle is called even (resp. odd) if \(q\) is even (resp. odd). For a subset \(W\) of \(V(G)\), the _induced subgraph_\(G_{W}\) is the graph with vertex set \(W\) and for every pair \(x,y\in W\), they are adjacent in \(G_{W}\) if and only if they are adjacent in \(G\). The generators of the toric ideal of \(I_{G}\) are binomials which are tightly related to even closed walks in \(G\). Given an even closed walk \(W\) of \(G\) with \[E(W)=\{\{v_{0},v_{1}\},\{v_{1},v_{2}\},\ldots,\{v_{2q-2},v_{2q-1}\},\{v_{2q-1 },v_{0}\}\},\] we associate \(W\) with the binomial defined by \[f_{W}:=\prod_{j=1}^{q}e_{2j-1}-\prod_{j=1}^{q}e_{2j},\] where \(e_{j}=\{v_{j-1},v_{j}\}\) for \(1\leq j\leq 2q-1\) and \(e_{2q}=\{v_{2q-1},v_{0}\}\). A binomial \(f=u-v\in I_{G}\) is called a _primitive binomial_ if there is no binomial \(g=u^{\prime}-v^{\prime}\in I_{G}\) such that \(u^{\prime}|u\) and \(v^{\prime}|v\). An even closed walk \(W\) of \(G\) is a _primitive even closed walk_ if its associated binomial \(f_{W}\) is a primitive binomial in \(I_{G}\). It is known that the set \[\{f_{W}\colon\ W\mbox{ is a primitive even closed walks of }G\}\] is the universal Grobner base of \(I_{G}\) by e.g. [16, Proposition 10.1.10] or [4, Proposition 5.19]. In particular, it is a Grobner base of \(I_{G}\) with respect to any monomial order. The set of primitive even walks of a graph \(G\) was described in [12] explicitly. **Lemma 1.2**.: [12, Lemma 5.11] _A primitive even closed walk \(\Gamma\) of \(G\) is one of the following:_ 1. 
\(\Gamma\) _is an even cycle of_ \(G\)_;_ 2. \(\Gamma=(C_{1},C_{2})\)_, where each of_ \(C_{1}\) _and_ \(C_{2}\) _is an odd cycle of_ \(G\) _having exactly one common vertex;_ 3. \(\Gamma=(C_{1},\Gamma_{1},C_{2},\Gamma_{2})\)_, where each of_ \(C_{1}\) _and_ \(C_{2}\) _is an odd cycle of_ \(G\) _with_ \(V(C_{1})\cap V(C_{2})=\emptyset\) _and where_ \(\Gamma_{1}\) _and_ \(\Gamma_{2}\) _are walks of_ \(G\) _of the forms_ \(\Gamma_{1}=(e_{i_{1}},\ldots,e_{i_{r}})\) _and_ \(\Gamma_{1}=(e_{i_{1}^{{}^{\prime}}},\ldots,e_{i_{r^{{}^{\prime}}}^{{}^{\prime}}})\) _such that_ \(\Gamma_{1}\) _combines_ \(j\in e_{i_{1}}\cap e_{i_{r^{{}^{\prime}}}^{{}^{\prime}}}\cap V(C_{1})\) _with_ \(j^{{}^{\prime}}\in e_{i_{r}}\cap e_{i_{1}^{{}^{\prime}}}\cap V(C_{2})\) _and_ \(\Gamma_{2}\) _combines_ \(j^{{}^{\prime}}\) _with_ \(j\)_. Furthermore, none of the vertices belonging to_ \(V(C_{1})\cup V(C_{2})\) _appears in each of_ \(e_{i_{1}}\backslash\{j\}\)_,_ \(e_{i_{2}}\)_,_\(\ldots\)_,_\(e_{i_{r-1}}\)_,_ \(e_{i_{r}}\backslash\{j^{{}^{\prime}}\}\)_,_ \(e_{i_{1}^{{}^{\prime}}}\backslash\{j\}\)_,_ \(e_{i_{2}^{{}^{\prime}}}\)_,_\(\ldots\)_,_\(e_{i_{r-1}^{{}^{\prime}}}\)_,_\(e_{i_{r^{{}^{\prime}}}^{{}^{\prime}}}\backslash\{j^{{}^{\prime}}\}\)_._ We would like to note that in \((iii)\) the sum of lengths of \(\Gamma_{1}\) and \(\Gamma_{2}\) must be even in order to ensure it is indeed an even closed walk. ### Edge Cones and Canonical modules Let \(G\) be a simple graph with vertex set \(V(G)=\{1,\ldots,n\}\) and edge set \(E(G)\). For any \(f=\{i,j\}\in E(G)\) denote \(v_{f}=\mathbf{e}_{i}+\mathbf{e}_{j}\), where \(\mathbf{e}_{i}\) is the \(i\)th unit vector of \(\mathbb{R}^{n}\). The edge cone of \(G\), denoted by \(\mathbb{R}_{+}(G)\), is defined to be the cone of \(\mathbb{R}^{n}\) generated by \(\{v_{f}\mid f\in E(G)\}\). In other words, \[\mathbb{R}_{+}(G)=\{\sum_{f\in E(G)}a_{f}v_{f}\mid a_{f}\in\mathbb{R}_{+} \mbox{ for all }f\in E(G)\}.\] If \(G\) satisfies the odd-cycle condition, then the edge ring \(\mathbb{K}[G]\) is normal, see [14]. Furthermore, according to Hochster's theorem \(\mathbb{K}[G]\) is Cohen-Macaulay, see [1, Theorem 6.3.5]. It follows that the ideal of \(\mathbb{K}[G]\) generated all the monomials \(x^{\alpha}\) with \(\alpha\in\mathbb{Z}^{n}\cap\operatorname{relint}(\mathbb{R}_{+}(G))\) is the canonical module of \(\mathbb{K}[G]\), see e.g. [1, section 6.3] for the details. Let us describe the cone \(\mathbb{R}_{+}(G)\) in terms of linear inequalities. For the description, we need to introduce some more notions on graphs. * For a subset \(W\subset V(G)\), let \(G\setminus W\) be the subgraph induced on \(V(G)\setminus W\). If \(W=\{k\}\), then we write \(G\setminus k\) instead of \(G\setminus\{k\}\). * For \(j\in V(G)\), let \(N_{G}(j)=\{i\in V(G)\mid\{i,j\}\in E(G)\}\), and for any subset \(W\subset V(G)\), let \(N_{G}(W)=\underset{k\in W}{\bigcup}N_{G}(k)\). * A non-empty subset \(T\subset V(G)\) is called an _independent set_ if \(\{j,k\}\not\in E(G)\) for any \(j,k\in T\). * We call a vertex \(j\) of \(G\)_regular_ if each connected component of \(G\setminus j\) contains an odd cycle. * We say that an independent set \(T\) of \(V(G)\) is a _fundamental_ set if * the bipartite graph on the vertex set \(T\cup N_{G}(T)\) with the edge set \(E(G)\cap\{\{j,k\}\mid j\in T,k\in N_{G}(T)\}\) is connected, and * either \(T\cup N_{G}(T)=V(G)\) or each of the connected components of the graph \(G\setminus(T\cup N_{G}(T))\) contains an odd cycle. 
It follows from [15, Theorem 3.2] or ([14, Theorem 1.7 (a)]) that \(\mathbb{R}_{+}(G)\) consists of the elements \((x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\) satisfying all the following inequalities: ( \[\Delta\] ) \[\sum_{v\in N_{G}(T)}x_{v} \geq\sum_{u\in T}x_{u}\ \ \ \text{for any fundamental set}\ T.\] ### E-K splitting Based on the approach in [5], Eliahou and Kervaire introduced the notion of splitting a monomial ideal. **Definition 1.3**.: Let \(I,J\) and \(K\) be monomial ideals such that \(G(I)\), the unique set of minimal generators of \(I\), is the disjoint union of \(G(J)\) and \(G(K)\). Then \(I=J+K\) is an **Eliahou-Kervaire splitting** (abbreviated as "E-K splitting") if there exists a splitting function \[G(J\cap K)\to G(J)\times G(K)\] sending \(w\mapsto(\phi(w),\psi(w))\) such that 1. \(w=\operatorname{lcm}(\phi(w),\psi(w))\) for all \(w\in G(J\cap K)\), and 2. for every subset \(\emptyset\neq S\subset G(J\cap K)\), \(\operatorname{lcm}(\phi(S))\) and \(\operatorname{lcm}(\psi(S))\) strictly divide \(\operatorname{lcm}(S)\). **Lemma 1.4**.: _[_5_, Proposition 3.1]_ _Let \(I=J+K\) be an E-K splitting. Then, for all \(i\geq 0\),_ ( \[\ast\] ) \[\beta_{i}(I)=\beta_{i}(J)+\beta_{i}(K)+\beta_{i-1}(J\cap K),\beta_{i,j}(I)= \beta_{i,j}(J)+\beta_{i,j}(K)+\beta_{i-1,j}(J\cap K),\] _where \(\beta_{-1,j}(J\cap K)=0\) for all \(j\) by convention._ ## 2. A classification of compact graphs In this section, we aim to classify all the compact graphs up to the essentially same edge rings. We start by presenting the following straightforward observation, which we will not provide a proof for. **Lemma 2.1**.: _Let \(x_{1}\) be a vertex of degree one in a simple graph \(G\) and let \(G^{\prime}\) be the graph obtained from \(G\) by removing \(x_{1}\). Then \(I_{G}\) and \(I_{G^{\prime}}\) have the same set of minimal binomial generators. More precisely, if \(G\) has edge set \(\{e_{1},\ldots,e_{r}\}\) with \(x_{1}\in e_{r}\), then \(I_{G}=I_{G^{\prime}}\cdot\mathbb{K}[e_{1},\ldots,e_{r}]\) and \(\mathbb{K}[G]\cong\mathbb{K}[G^{\prime}]\otimes_{\mathbb{K}}\mathbb{K}[e_{r}]\). Here, both \(\mathbb{K}[e_{1},\ldots,e_{r}]\) and \(\mathbb{K}[e_{r}]\) are polynomial rings by definitions._ This observation indicates that the removal of vertices with a degree of one does not essentially alter the edge ring. For a simple graph \(G\), by iteratively removing all vertices of degree one, we obtain a new graph denoted as \(G_{0}\), where every remaining vertex has a degree greater than one. It is evident that \(G\) and \(G_{0}\) essentially share the same edge ring by Lemma 2.1. From this point forward, we will solely focus on simple graphs in which every vertex has a degree greater than one. **Definition 2.2**.: Let \(G\) be a connected simple graph where every vertex has a degree greater than one. We call \(G\) to be a _compact_ graph if it does not contain any even cycles and satisfies the odd-cycle condition. **Proposition 2.3**.: _Let \(G\) be a compact graph. Then there exist at most three vertices of degree \(\geq 3\) in \(G\)._ Proof.: First, we observe the following easy but useful fact: Given distinct cycles \(C_{1},C_{2}\) of \(G\) with \(V(C_{1})\cap V(C_{2})\neq\emptyset\), one has \(V(C_{1})\cap V(C_{2})\) is a singleton. This is because if \(V(C_{1})\cap V(C_{2})\) contains more than one vertex then \(G\) must contain an even cycle. As a result, we see that every cycle of \(G\) is an induced cycle of \(G\) and every edge of \(G\) belongs to at most one cycle of \(G\). 
We label this as the first assertion. Next, we will prove the second assertion, which states that every vertex of \(G\) belongs to at least one cycle. Assume on the contrary that there is a vertex \(v\) which does not belong to any cycle. Then, since \(\deg(v)\geq 2\) for each \(v\in V(G)\), there is a path \(v_{1}--\cdots--v_{s}=v--\cdots--v_{t}\) such that \(v_{1}\) and \(v_{t}\) belong to the cycles \(C_{1}\) and \(C_{t}\) respectively. It is clear that \(C_{1}\) and \(C_{t}\) are disjoint, for otherwise \(v\) would belong to a cycle. By the odd-cycle condition, this implies that there is an edge connecting \(C_{1}\) and \(C_{t}\); together with the path through \(v\), this edge again places \(v\) on a cycle, which is a contradiction. This proves the second assertion.

For convenience, we say a cycle of \(G\) is _almost-isolated_ if it has exactly one vertex of degree \(\geq 3\). We can then prove the third assertion, which states that if \(v\) is a vertex with \(\deg(v)\geq 3\), then \(v\) belongs to at least one almost-isolated cycle \(C\). Let \(v_{1},v_{2},\ldots,v_{k}\) be all the vertices which are adjacent to \(v\). Note that \(k\geq 3\). In view of the second assertion we have proved, we may assume that
\[C:v_{1}--v--v_{2}--u_{1}--\cdots--u_{2\ell}--v_{1}\]
is an odd cycle. If \(C\) is almost-isolated, we are done. Suppose now that \(C\) is not almost-isolated. Let \(C_{k}\) be a cycle containing \(v_{k}\). We consider the following cases:

_Case 1:_ \(\deg(u_{i})\geq 3\) _for some_ \(i=1,2,\ldots,2\ell\). Say \(i=1\) and \(u\notin V(C)\) is a vertex adjacent to \(u_{1}\). Let \(C_{1}\) be a cycle containing \(u\). Then either there is an edge connecting \(C_{1}\) and \(C_{k}\), or \(C_{1}\) and \(C_{k}\) share a common vertex. In both cases, there is a path that connects \(v_{1}\) and \(v_{k}\) but does not pass through \(v\). Thus, the edge \(e=\{v,v_{1}\}\) not only belongs to \(C\), but also to a cycle containing the vertex \(v_{k}\). This is impossible by the first assertion.

_Case 2:_ \(\deg(v_{i})\geq 3\) _for some_ \(i=1,2\). Say \(i=1\) and \(u\notin V(C)\) is a vertex adjacent to \(v_{1}\). Let \(C_{1}\) be a cycle containing \(u\). If \(v\notin V(C_{k})\), then \(e=\{v,v_{1}\}\) also belongs to a cycle containing \(v_{k}\) by the odd-cycle condition, a contradiction. So we only need to consider the case that \(v\in V(C_{k})\). If \(C_{k}\) is almost-isolated, we are also done. If \(C_{k}\) is not almost-isolated, we let \(w\in V(C_{k})\) be a vertex other than \(v\) with degree \(\geq 3\) and let \(w_{1}\) be a vertex adjacent to \(w\) with \(w_{1}\notin V(C_{k})\). By the second assertion, there is a cycle, denoted by \(C_{2}\), which contains \(w_{1}\). Since \(C_{1}\) and \(C_{2}\) either share a common vertex or are connected by an edge, we see that \(e=\{v,v_{1}\}\) belongs to a cycle containing \(w\). This is impossible according to the first assertion. Thus, the third assertion has been proved.

From this, it follows that if \(v_{1},\ldots,v_{k}\) are the vertices of degree \(\geq 3\), then the induced graph on the set \(\{v_{1},\ldots,v_{k}\}\) is a complete graph. Hence \(k\leq 3\). \(\Box\)

For the sake of simplicity, a vertex of a compact graph is called a _big_ vertex if it has a degree greater than \(2\). According to Proposition 2.3, compact graphs can be categorized into four classes, each determined by the number of big vertices. More specifically, we say a compact graph falls into type \(i\) if it possesses \(i\) big vertices for \(i=0,1,2,3\). A compact graph of type \(0\) is simply an odd cycle. A compact graph of type \(1\) is a finite collection of odd cycles that share a vertex. 
A compact graph of type \(2\) consists of two disjoint compact graphs of type \(1\), where the two big vertices are connected either by an edge or by an edge as well as a path of even length. A compact graph of type \(3\) consists of three disjoint compact graphs of type \(1\), where every pair of big vertices is connected by an edge. Suppose \(\underline{p}=(p_{1},\ldots,p_{m})\), \(\underline{q}=(q_{1},\ldots,q_{n})\) and \(\underline{r}=(r_{1},\ldots,r_{k})\) are positive integral vectors with dimensions \(m,n\) and \(k\) respectively. We denote a compact graph of type \(1\), where the odd cycles have lengths \(2p_{1}+1,\ldots,2p_{m}+1\) respectively, as \(A_{\underline{p}}\) or \(A_{p_{1},\ldots,p_{m}}\).

Figure 1. The graph \(A_{(1,2,1)}\)

By \(B_{\underline{p}:\underline{q}}^{0}\) we mean a compact graph of type 2 where the two disjoint compact graphs of type 1 that compose it are \(A_{\underline{p}}\) and \(A_{\underline{q}}\) and where the two big vertices are connected by an edge. Furthermore, if \(s>0\) is an even number, then \(B_{\underline{p}:\underline{q}}^{s}\) represents the graph obtained by appending a path of length \(s\) connecting the two big vertices to \(B_{\underline{p}:\underline{q}}^{0}\).

Figure 2. The graph \(B_{(2,1):(2,1)}^{2}\)

A compact graph of type 3 is denoted by \(C_{\underline{p}:\underline{q}:\underline{r}}\) if the three disjoint compact graphs of type 1 that make it up are \(A_{\underline{p}}\), \(A_{\underline{q}}\) and \(A_{\underline{r}}\) respectively.

Figure 3. The graph \(C_{(2,1):(1,1):(2,1)}\)

## 3. Universal Gröbner bases and initial ideals

In this section, we will present the universal Gröbner bases and initial ideals of toric ideals of compact graphs, focusing on a specific monomial order. The key point of this section is to identify appropriate monomial orders such that the initial ideals with respect to these orders have a "good" E-K splitting, as illustrated in the subsequent section.

### Compact graphs of type 1

Given positive integers \(m\geq 2\) and \(p_{1},\ldots,p_{m}\), we use \(A\) to denote the graph \(A_{p_{1},\ldots,p_{m}}\) for short. Thus \(A\) has vertex set
\[V(A)=\{u\}\cup\{u_{i,j}\mid 1\leq i\leq m,1\leq j\leq 2p_{i}\}\]
and edge set
\[E(A)=\{\{u_{i,j},u_{i,j+1}\}\mid 1\leq i\leq m,1\leq j\leq 2p_{i}-1\}\cup\{\{u,u_{i,1}\},\{u,u_{i,2p_{i}}\}\mid 1\leq i\leq m\}.\]
We label the edges of \(A\) as follows. For \(i\in\{1,\ldots,m\}\), we let \(e_{i,1}=\{u,u_{i,1}\}\) and \(e_{i,2p_{i}+1}=\{u,u_{i,2p_{i}}\}\). For \(i\in\{1,\ldots,m\}\) and \(j\in\{1,\ldots,2p_{i}-1\}\) let \(e_{i,j+1}=\{u_{i,j},u_{i,j+1}\}\). For \(1\leq i,j\leq m\), we put
\[e_{i}^{\prime}=e_{i,1}e_{i,3}\cdots e_{i,2p_{i}+1}\text{ and }e_{j}^{\prime\prime}=e_{j,2}e_{j,4}\cdots e_{j,2p_{j}}.\]

**Lemma 3.1**.: _For any integers \(m\geq 2\) and positive integers \(p_{1},\ldots,p_{m}\), the universal Gröbner basis for the toric ideal \(I_{A}\) is given by_
\[\mathcal{G}=\{e_{i}^{\prime}e_{j}^{\prime\prime}-e_{i}^{\prime\prime}e_{j}^{\prime}\ |\ 1\leq i<j\leq m\}.\]

Proof.: It follows from [14, Lemma 3.2] together with [16, Proposition 10.1.10]. 
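Lemma 3.1 can be verified directly on small examples by computing the toric ideal via elimination; the sketch below (an illustration using sympy, with ad hoc variable names) treats the graph \(A_{1,1}\), i.e. two triangles sharing the vertex \(u\), and recovers the single binomial \(e_{1}^{\prime}e_{2}^{\prime\prime}-e_{1}^{\prime\prime}e_{2}^{\prime}=e_{1,1}e_{1,3}e_{2,2}-e_{1,2}e_{2,1}e_{2,3}\).

```
from sympy import symbols, groebner

# vertices of A_{1,1}: u, u11, u12, u21, u22; edges labelled as in Subsection 3.1
u, u11, u12, u21, u22 = symbols("u u11 u12 u21 u22")
e11, e12, e13, e21, e22, e23 = symbols("e11 e12 e13 e21 e22 e23")

# each edge variable equals the product of the endpoints of the corresponding edge
relations = [
    e11 - u * u11, e12 - u11 * u12, e13 - u * u12,   # first triangle
    e21 - u * u21, e22 - u21 * u22, e23 - u * u22,   # second triangle
]

# eliminate the vertex variables with a lex order (vertex variables listed first)
gb = groebner(relations, u, u11, u12, u21, u22, e11, e12, e13, e21, e22, e23, order="lex")
vertex_vars = {u, u11, u12, u21, u22}
toric = [g for g in gb.exprs if not (g.free_symbols & vertex_vars)]
print(toric)   # expected, up to sign: e11*e13*e22 - e12*e21*e23
```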
Going forward, we work in the standard graded polynomial ring \[\mathbb{K}[E(A)]=\mathbb{K}[e_{1,1},\ldots,e_{1,2p_{1}+1},\ldots\ldots,e_{m,1},\ldots,e_{m,2p_{m}+1}].\] Let \(<\) denote the lexicographic monomial order on \(\mathbb{K}[E(A)]\) satisfying \[e_{1,1}<\cdots<e_{1,2p_{1}+1}<\cdots\cdots<e_{m,1}<\cdots<e_{m,2p_{m}+1},\] and let \(J_{A}\) denote the initial ideal of \(I_{A}\) with respect to the monomial order \(<\). **Proposition 3.2**.: _The minimal set of monomial generators of \(J_{A}\) is given by_ \[\mathcal{M}=\left\{e_{i}^{\prime\prime}e_{j}^{\prime}\ |\ 1\leq i<j\leq m \right\}.\] Proof.: Note that \(e_{i}^{\prime}e_{j}^{\prime\prime}<e_{i}^{\prime\prime}e_{j}^{\prime}\) for \(1\leq i<j\leq m\), we can deduce from Lemma 3.1 that \(J_{A}\) is generated by \(\mathcal{M}\). The minimality of \(\mathcal{M}\) can be checked directly. ### Compact graphs of type 2 Assume that \(n,m\) and \(p_{1},\ldots,p_{m},q_{1},\ldots,q_{n}\) are given positive integers. Let \(s\geq 0\) be an even number. We use \(B\) denote the graph \(B_{p_{1},\ldots,p_{m}:q_{1},\ldots,q_{n}}^{s}\) for short. Then, we may assume that \(B\) has vertex set \[V(B)=\{u,v\}\cup\{w_{1},\ldots,w_{s-1}\}\] \[\cup\{u_{i,j}\ |\ 1\leq i\leq m,1\leq j\leq 2p_{i}\}\cup\{v_{i,j}\ |\ 1\leq i \leq n,1\leq j\leq 2q_{i}\}\] and edge set \[E(B)=\{ \{u_{i,j},u_{i,j+1}\}\ |\ 1\leq i\leq m,1\leq j\leq 2p_{i}-1\}\] \[\cup\{\{u,u_{i,1}\},\{u,u_{i,2p_{i}}\}\ |\ 1\leq i\leq m\}\] \[\cup\{\{u,v\},\{u,w_{1}\},\{v,w_{s-1}\}\}\cup\{\{w_{i},w_{i+1}\} \ |\ 1\leq i\leq s-2\}\] \[\cup\{\{v_{i,j},v_{i,j+1}\}\ |\ 1\leq i\leq n,1\leq j\leq 2q_{i}-1\}\] \[\cup\{\{v,v_{i,1}\},\{v,v_{i,2q_{i}}\}\ |\ 1\leq i\leq n\}.\] The edges of \(B\) are labeled as follows. For \(i\in\{1,\ldots,m\}\) let \(e_{i,1}=\{u,u_{i,1}\}\) and \(e_{i,2p_{i}+1}=\{u,u_{i,2p_{i}}\}\). For \(i\in\{1,\ldots,m\}\) and \(j\in\{1,\ldots,2p_{i}-1\}\) let \(e_{i,j+1}=\{u_{i,j},u_{i,j+1}\}\). Let \(x=\{u,v\}\), \(x_{1}=\{u,w_{1}\}\) and \(x_{s}=\{v,w_{s-1}\}\). For \(i\in\{1,\ldots,s-2\}\) let \(x_{i+1}=\{w_{i},w_{i+1}\}\). For \(i\in\{1,\ldots,n\}\) let \(f_{i,1}=\{v,v_{i,1}\}\) and \(f_{i,2q_{i}+1}=\{v,v_{i,2q_{i}+1}\}\). For \(i\in\{1,\ldots,n\}\) and \(j\in\{1,\ldots,2q_{i}-1\}\) let \(f_{i,j+1}=\{v_{i,j},v_{i,j+1}\}\). We put \[e_{i}^{\prime}=e_{i,1}e_{i,3}\cdots e_{i,2p_{i}+1}\ \text{and}\ e_{i}^{\prime \prime}=e_{i,2}e_{i,4}\cdots e_{i,2p_{i}},\] \[f_{i}^{\prime}=f_{i,1}f_{i,3}\cdots f_{i,2q_{i}+1}\ \text{and}\ f_{i}^{\prime \prime}=f_{i,2}f_{i,4}\cdots f_{i,2q_{i}},\] and put \[x^{\prime}=x_{1}x_{3}\cdots x_{s-1},\ \text{and}\ x^{\prime\prime}=x_{2}x_{4} \cdots x_{s}.\] Note that if \(s=0\) then both \(x^{\prime}\) and \(x^{\prime\prime}\) vanish. **Lemma 3.3**.: _For any positive integers \(m,n\) and \(p_{1},\ldots,p_{m},q_{1},\ldots,q_{n}\) and an integer \(s\geq 0\), the universal Grobner basis of \(I_{B}\) is given by \(\mathcal{G}=\mathcal{G}_{1}\cup\mathcal{G}_{2}\cup\mathcal{G}_{3}\cup \mathcal{G}_{4}\cup\mathcal{G}_{5}\cup\mathcal{G}_{6}\), where_ 1. \(\mathcal{G}_{1}=\{e^{\prime}_{i}e^{\prime\prime}_{j}-e^{\prime\prime}_{i}e^{ \prime}_{j}\ |\ 1\leq i<j\leq m\}\)_,_ 2. \(\mathcal{G}_{2}=\{f^{\prime}_{i}f^{\prime\prime}_{j}-f^{\prime\prime}_{i}f^{ \prime}_{j}\ |\ 1\leq i<j\leq n\}\)_,_ 3. \(\mathcal{G}_{3}=\{e^{\prime}_{i}f^{\prime}_{j}-e^{\prime\prime}_{i}x^{2}f^{ \prime\prime}_{j}\ |\ 1\leq i\leq m,1\leq j\leq n\}\)_,_ 4. 
\(\mathcal{G}_{4}=\{e^{\prime}_{i}x^{\prime\prime 2}f^{\prime\prime}_{j}-e^{\prime\prime}_{i}x^{\prime 2}f^{\prime}_{j}\ |\ 1\leq i\leq m,1\leq j\leq n\}\)_,_ 5. \(\mathcal{G}_{5}=\{e^{\prime}_{i}x^{\prime\prime}-e^{\prime\prime}_{i}x^{\prime}x\ |\ 1\leq i\leq m\}\)_, and_ 6. \(\mathcal{G}_{6}=\{f^{\prime}_{i}x^{\prime}-f^{\prime\prime}_{i}x^{\prime\prime}x\ |\ 1\leq i\leq n\}\)_._

_It should be noted that \(\mathcal{G}_{4},\mathcal{G}_{5}\) and \(\mathcal{G}_{6}\) vanish if \(s=0\)._

Proof.: By [12, Lemma 5.11], every primitive even closed walk of \(B\) is one of the following:

* \((e_{i,1},\ldots,e_{i,2p_{i}+1},e_{j,1},\ldots,e_{j,2p_{j}+1})\), where \(1\leq i<j\leq m\),
* \((f_{i,1},\ldots,f_{i,2q_{i}+1},f_{j,1},\ldots,f_{j,2q_{j}+1})\), where \(1\leq i<j\leq n\),
* \((e_{i,1},\ldots,e_{i,2p_{i}+1},x,f_{j,1},\ldots,f_{j,2q_{j}+1},x)\), where \(1\leq i\leq m,1\leq j\leq n\),
* \((e_{i,1},\ldots,e_{i,2p_{i}+1},x_{1},\ldots,x_{s},f_{j,1},\ldots,f_{j,2q_{j}+1},x_{s},\ldots,x_{1})\), where \(1\leq i\leq m,1\leq j\leq n\),
* \((e_{i,1},\ldots,e_{i,2p_{i}+1},x_{1},\ldots,x_{s},x)\), where \(1\leq i\leq m\), and
* \((f_{i,1},\ldots,f_{i,2q_{i}+1},x,x_{1},\ldots,x_{s})\), where \(1\leq i\leq n\).

The result now follows from [16, Proposition 10.1.10].

Let \(<\) denote the lexicographic monomial ordering on the polynomial ring \(\mathbb{K}[E(B)]\) satisfying
\[e_{1,1}<\cdots<e_{1,2p_{1}+1}<\cdots\cdots<e_{m,1}<\cdots<e_{m,2p_{m}+1}<x<x_{1}<\cdots<x_{s}<f_{1,1}<\cdots<f_{1,2q_{1}+1}<\cdots\cdots<f_{n,1}<\cdots<f_{n,2q_{n}+1},\]
and let \(J_{B}\) be the initial ideal of \(I_{B}\) with respect to this order.

**Proposition 3.4**.: _The minimal set of monomial generators of \(J_{B}\) is given by \(\mathcal{M}=\mathcal{M}_{1}\cup\mathcal{M}_{2}\cup\mathcal{M}_{3}\cup\mathcal{M}_{4}\cup\mathcal{M}_{5}\), where_

1. \(\mathcal{M}_{1}=\left\{e^{\prime\prime}_{i}e^{\prime}_{j}\ |\ 1\leq i<j\leq m\right\}\)_,_
2. \(\mathcal{M}_{2}=\left\{f^{\prime\prime}_{i}f^{\prime}_{j}\ |\ 1\leq i<j\leq n\right\}\)_,_
3. \(\mathcal{M}_{3}=\left\{e^{\prime}_{i}f^{\prime}_{j}\ |\ 1\leq i\leq m,1\leq j\leq n\right\}\)_,_
4. \(\mathcal{M}_{4}=\left\{e^{\prime}_{i}x^{\prime\prime}\ |\ 1\leq i\leq m\right\}\)_, and_
5. \(\mathcal{M}_{5}=\{f^{\prime}_{i}x^{\prime}\ |\ 1\leq i\leq n\}\)_._

_It should be noted that \(\mathcal{M}_{4}\) and \(\mathcal{M}_{5}\) vanish if \(s=0\)._

Proof.: That \(\mathcal{M}\) is a generating set of \(J_{B}\) with respect to the given order follows from Lemma 3.3. That it is minimal follows from the fact that none of its monomials is divisible by any of the others.

### Compact graphs of type 3

Given positive integers \(m,n,k\), as well as the tuples \(\underline{p}=(p_{1},\ldots,p_{m})\), \(\underline{q}=(q_{1},\ldots,q_{n})\) and \(\underline{r}=(r_{1},\ldots,r_{k})\), we denote the graph \(C_{\underline{p}:\underline{q}:\underline{r}}\) as \(C\) for brevity. Here, \(p_{i},q_{i},r_{i}\) are all positive integers. 
By definition, we may assume \(C\) has vertex set
\[V(C)=\{u,v,w\}\cup\{u_{i,j}\mid 1\leq i\leq m,1\leq j\leq 2p_{i}\}\cup\{v_{i,j}\mid 1\leq i\leq n,1\leq j\leq 2q_{i}\}\cup\{w_{i,j}\mid 1\leq i\leq k,1\leq j\leq 2r_{i}\},\]
and edge set
\[E(C)=\{\{u_{i,j},u_{i,j+1}\}\mid 1\leq i\leq m,1\leq j\leq 2p_{i}-1\}\cup\{\{u,u_{i,1}\},\{u,u_{i,2p_{i}}\}\mid 1\leq i\leq m\}\cup\{\{v_{i,j},v_{i,j+1}\}\mid 1\leq i\leq n,1\leq j\leq 2q_{i}-1\}\cup\{\{v,v_{i,1}\},\{v,v_{i,2q_{i}}\}\mid 1\leq i\leq n\}\cup\{\{w_{i,j},w_{i,j+1}\}\mid 1\leq i\leq k,1\leq j\leq 2r_{i}-1\}\cup\{\{w,w_{i,1}\},\{w,w_{i,2r_{i}}\}\mid 1\leq i\leq k\}\cup\{\{u,v\},\{v,w\},\{w,u\}\}.\]
We assign labels to the edges of \(C\) as follows: For \(i\in\{1,\ldots,m\}\), let \(e_{i,1}=\{u,u_{i,1}\}\) and \(e_{i,2p_{i}+1}=\{u,u_{i,2p_{i}}\}\). For \(i\in\{1,\ldots,m\}\) and \(j\in\{1,\ldots,2p_{i}-1\}\), let \(e_{i,j+1}=\{u_{i,j},u_{i,j+1}\}\). For \(i\in\{1,\ldots,n\}\), let \(f_{i,1}=\{v,v_{i,1}\}\) and \(f_{i,2q_{i}+1}=\{v,v_{i,2q_{i}}\}\). For \(i\in\{1,\ldots,n\}\) and \(j\in\{1,\ldots,2q_{i}-1\}\), let \(f_{i,j+1}=\{v_{i,j},v_{i,j+1}\}\). For \(i\in\{1,\ldots,k\}\), let \(g_{i,1}=\{w,w_{i,1}\}\) and \(g_{i,2r_{i}+1}=\{w,w_{i,2r_{i}}\}\). For \(i\in\{1,\ldots,k\}\) and \(j\in\{1,\ldots,2r_{i}-1\}\), let \(g_{i,j+1}=\{w_{i,j},w_{i,j+1}\}\). Furthermore, we define \(x=\{u,v\}\), \(y=\{v,w\}\), and \(z=\{w,u\}\).

**Lemma 3.5**.: _For any positive integers \(m,n,k\) and \(p_{1},\ldots,p_{m},q_{1},\ldots,q_{n},r_{1},\ldots,r_{k}\), the universal Gröbner basis of \(I_{C}\) is given by \(\mathcal{G}=\mathcal{G}_{1}\cup\mathcal{G}_{2}\cup\mathcal{G}_{3}\cup\mathcal{G}_{4}\cup\mathcal{G}_{5}\cup\mathcal{G}_{6}\cup\mathcal{G}_{7}\cup\mathcal{G}_{8}\cup\mathcal{G}_{9}\cup\mathcal{G}_{10}\cup\mathcal{G}_{11}\cup\mathcal{G}_{12}\), where_

1. \(\mathcal{G}_{1}=\{e^{\prime}_{i}e^{\prime\prime}_{j}-e^{\prime\prime}_{i}e^{\prime}_{j}\mid 1\leq i<j\leq m\}\)_,_
2. \(\mathcal{G}_{2}=\{f^{\prime}_{i}f^{\prime\prime}_{j}-f^{\prime\prime}_{i}f^{\prime}_{j}\mid 1\leq i<j\leq n\}\)_,_
3. \(\mathcal{G}_{3}=\{g^{\prime}_{i}g^{\prime\prime}_{j}-g^{\prime\prime}_{i}g^{\prime}_{j}\mid 1\leq i<j\leq k\}\)_,_
4. \(\mathcal{G}_{4}=\{e^{\prime}_{i}f^{\prime}_{j}-e^{\prime\prime}_{i}x^{2}f^{\prime\prime}_{j}\mid 1\leq i\leq m,1\leq j\leq n\}\)_,_
5. \(\mathcal{G}_{5}=\{f^{\prime}_{i}g^{\prime}_{j}-f^{\prime\prime}_{i}y^{2}g^{\prime\prime}_{j}\mid 1\leq i\leq n,1\leq j\leq k\}\)_,_
6. \(\mathcal{G}_{6}=\{g^{\prime}_{i}e^{\prime}_{j}-g^{\prime\prime}_{i}z^{2}e^{\prime\prime}_{j}\mid 1\leq i\leq k,1\leq j\leq m\}\)_,_
7. \(\mathcal{G}_{7}=\{e^{\prime}_{i}y^{2}f^{\prime\prime}_{j}-e^{\prime\prime}_{i}z^{2}f^{\prime}_{j}\mid 1\leq i\leq m,1\leq j\leq n\}\)_,_
8. \(\mathcal{G}_{8}=\{f^{\prime}_{i}z^{2}g^{\prime\prime}_{j}-f^{\prime\prime}_{i}x^{2}g^{\prime}_{j}\mid 1\leq i\leq n,1\leq j\leq k\}\)_,_
9. \(\mathcal{G}_{9}=\{g^{\prime}_{i}x^{2}e^{\prime\prime}_{j}-g^{\prime\prime}_{i}y^{2}e^{\prime}_{j}\mid 1\leq i\leq k,1\leq j\leq m\}\)_,_
10. \(\mathcal{G}_{10}=\{e^{\prime}_{i}y-e^{\prime\prime}_{i}zx\mid 1\leq i\leq m\}\)_,_
11. \(\mathcal{G}_{11}=\{f^{\prime}_{i}z-f^{\prime\prime}_{i}xy\mid 1\leq i\leq n\}\)_, and_ 12. 
\(\mathcal{G}_{12}=\{g^{\prime}_{i}x-g^{\prime\prime}_{i}yz\mid 1\leq i\leq k\}\)_._ Proof.: In view of [12, Lemma 5.11], every primitive even closed walk of \(C\) is one of the followings: * \((e_{i,1},\ldots,e_{i,2p_{i}+1},e_{j,1},\ldots,e_{j,2p_{j}+1})\), where \(1\leq i<j\leq m\), * \((f_{i,1},\ldots,f_{i,2q_{i}+1},f_{j,1},\ldots,f_{j,2q_{j}+1})\), where \(1\leq i<j\leq n\), * \((g_{i,1},\ldots,g_{i,2r_{i}+1},g_{j,1},\ldots,g_{j,2r_{j}+1})\), where \(1\leq i<j\leq k\), * \((e_{i,1},\ldots,e_{i,2p_{i}+1},x,f_{j,1},\ldots,f_{j,2q_{j}+1},x)\), where \(1\leq i\leq m,1\leq j\leq n\), * \((f_{i,1},\ldots,f_{i,2q_{i}+1},y,g_{j,1},\ldots,g_{j,2r_{j}+1},y)\), where \(1\leq i\leq n,1\leq j\leq k\), * \((g_{i,1},\ldots,g_{i,2r_{i}+1},z,e_{j,1},\ldots,e_{j,2p_{j}+1},z)\), where \(1\leq i\leq k,1\leq j\leq m\), * \((e_{i,1},\ldots,e_{i,2p_{i}+1},z,y,f_{j,1},\ldots,f_{j,2q_{j}+1},y,z)\), where \(1\leq i\leq m,1\leq j\leq n\), * \((f_{i,1},\ldots,f_{i,2q_{i}+1},x,z,g_{j,1},\ldots,g_{j,2r_{j}+1},z,x)\), where \(1\leq i\leq n,1\leq j\leq k\), * \((g_{i,1},\ldots,g_{i,2r_{i}+1},z,x,e_{j,1},\ldots,e_{j,2p_{j}+1},x,y)\), where \(1\leq i\leq k,1\leq j\leq m\), * \((e_{i,1},\ldots,e_{i,2p_{i}+1},z,y,x)\), where \(1\leq i\leq m\), * \((f_{i,1},\ldots,f_{i,2q_{i}+1},x,z,y)\), where \(1\leq i\leq n\), and * \((g_{i,1},\ldots,g_{i,2r_{i}+1},y,x,z)\), where \(1\leq i\leq k\). Now the result follows from [16, Proposition 10.1.10]. Going forward, we work in the standard graded polynomial ring \(\mathbb{K}[E(C)]\), where the variables (i.e., the edges of \(C\)) is ordered as follows: \[e_{1,1}<\cdots<e_{1,2p_{1}+1}<\cdots\cdots<e_{m,1}<\cdots e_{m,2 p_{m}+1}<x<z<y\] \[<f_{1,1}<\cdots<f_{1,2q_{1}+1}<\cdots\cdots<f_{n,1}<\cdots f_{n,2 q_{n}+1}<g_{1,1}<\cdots\] \[<g_{1,2r_{1}+1}<\cdots\cdots<g_{k,1}<\cdots<g_{k,2r_{k}+1}.\] Let \(J_{C}\) denote the initial ideal of \(I_{C}\) with respect to the lexicographic monomial ordering \(<\) on \(\mathbb{K}[E(C)]\) induced by the above order of variables. By putting: \[e^{\prime}_{i}=e_{i,1}e_{i,3}\cdots e_{i,2p_{i}+1}\text{ and }e^{\prime\prime}_{i} =e_{i,2}e_{i,4}\cdots e_{i,2p_{i}},\] \[f^{\prime}_{j}=f_{j,1}f_{j,3}\cdots f_{j,2q_{j}+1}\text{ and }f^{\prime\prime}_{j} =f_{j,2}f_{j,4}\cdots f_{j,2q_{j}},\] and \[g^{\prime}_{\ell}=g_{\ell,1}g_{\ell,3}\cdots g_{\ell,2r_{\ell}+1}\text{ and }g^{\prime\prime}_{\ell} =g_{\ell,2}g_{\ell,4}\cdots g_{\ell,2r_{\ell}},\] where \(1\leq i\leq m,1\leq j\leq n\) and \(1\leq\ell\leq k\), we obtain the following result. 
**Proposition 3.6**.: _The minimal set of monomial generators of \(J_{C}\) is given by \(\mathcal{M}=\mathcal{M}_{1}\cup\mathcal{M}_{2}\cup\mathcal{M}_{3}\cup\mathcal{M}_{4}\cup\mathcal{M}_{5}\cup\mathcal{M}_{6}\cup\mathcal{M}_{7}\cup\mathcal{M}_{8}\cup\mathcal{M}_{9}\), where_

* \(\mathcal{M}_{1}=\left\{e^{\prime\prime}_{i}e^{\prime}_{j}\ |\ 1\leq i<j\leq m\right\}\)_,_
* \(\mathcal{M}_{2}=\left\{f^{\prime\prime}_{i}f^{\prime}_{j}\ |\ 1\leq i<j\leq n\right\}\)_,_
* \(\mathcal{M}_{3}=\left\{g^{\prime\prime}_{i}g^{\prime}_{j}\ |\ 1\leq i<j\leq k\right\}\)_,_
* \(\mathcal{M}_{4}=\left\{e^{\prime}_{i}f^{\prime}_{j}\ |\ 1\leq i\leq m,1\leq j\leq n\right\}\)_,_
* \(\mathcal{M}_{5}=\left\{f^{\prime}_{i}g^{\prime}_{j}\ |\ 1\leq i\leq n,1\leq j\leq k\right\}\)_,_
* \(\mathcal{M}_{6}=\left\{g^{\prime}_{i}e^{\prime}_{j}\ |\ 1\leq i\leq k,1\leq j\leq m\right\}\)_,_
* \(\mathcal{M}_{7}=\left\{e^{\prime}_{i}y\ |\ 1\leq i\leq m\right\}\)_,_
* \(\mathcal{M}_{8}=\left\{f^{\prime}_{i}z\ |\ 1\leq i\leq n\right\}\)_, and_
* \(\mathcal{M}_{9}=\left\{g^{\prime}_{i}x\ |\ 1\leq i\leq k\right\}\)_._

Proof.: That \(\mathcal{M}\) is a generating set with respect to the given order follows from Lemma 3.5. That it is minimal follows from the fact that none of its monomials is divisible by any of the others.

## 4. Projective dimension and regularity

In this section, we aim to establish the following results. For convenience, we denote by \(t(G)\) the number of induced cycles of \(G\).

**Theorem 4.1**.: _Let \(G\) be a compact graph. Then there is a monomial order \(<\) such that_

1. \(\beta_{i}(\mathrm{in}_{<}(I_{G}))=(i+1)\binom{t(G)}{i+2}\) _for all_ \(i\geq 0\)_;_
2. \(\mathrm{pdim}(\mathbb{K}[E(G)]/\mathrm{in}_{<}(I_{G}))=t(G)-1\)_;_
3. \(\mathbb{K}[E(G)]/\mathrm{in}_{<}(I_{G})\) _is a Cohen-Macaulay ring;_
4. \(\mathrm{reg}(\mathbb{K}[E(G)]/\mathrm{in}_{<}(I_{G}))=\mathrm{mat}(G)\)_._

**Corollary 4.2**.: _Let \(G\) be a compact graph. Then_

1. \(\mathrm{pdim}(\mathbb{K}[G])=t(G)-1\)_;_
2. \(\mathrm{reg}(\mathbb{K}[G])=\mathrm{mat}(G)\)_._

It is known that if \(I\) is a graded ideal of a polynomial ring \(R\) such that \(R/\mathrm{in}_{<}(I)\) is Cohen-Macaulay for some monomial order \(<\), then \(R/I\) is also Cohen-Macaulay. Furthermore, we have \(\mathrm{reg}(R/I)=\mathrm{reg}(R/\mathrm{in}_{<}(I))\) and \(\mathrm{pdim}(R/I)=\mathrm{pdim}(R/\mathrm{in}_{<}(I))\). Based on these facts, we see that Corollary 4.2 follows immediately from Theorem 4.1. The proof of Theorem 4.1 will be given at the end of this section.

Assume throughout this section that \(m,n,k\) are positive integers and \(\underline{p}=(p_{1},\ldots,p_{m})\), \(\underline{q}=(q_{1},\ldots,q_{n})\) and \(\underline{r}=(r_{1},\ldots,r_{k})\) are integral tuples with positive entries. Also, we write \(\underline{p}^{\prime}=(p_{1},\ldots,p_{m-1})\), \(\underline{q}^{\prime}=(q_{1},\ldots,q_{n-1})\), and \(\underline{r}^{\prime}=(r_{1},\ldots,r_{k-1})\).

### Type one

In this subsection, we always use the monomial order given in Subsection 3.1, and denote the toric ideal and its initial ideal of \(A_{\underline{p}}\) as \(I_{m}\) and \(J_{m}\), respectively. Similarly, \(I_{m-1}\) and \(J_{m-1}\) represent the toric ideal and its initial ideal of \(A_{\underline{p}^{\prime}}\) respectively. Recall from Subsection 3.1 that \(G(J_{m})=\left\{e_{i}^{\prime\prime}e_{j}^{\prime}\ |\ 1\leq i<j\leq m\right\}\).

**Proposition 4.3**.: _Denote by \(H_{m}\) the monomial ideal \((e_{1}^{\prime\prime},\ldots,e_{m-1}^{\prime\prime})\). 
Then \(J_{m}=J_{m-1}+e_{m}^{\prime}H_{m}\) is an E-K splitting. Furthermore \(J_{m-1}\cap e_{m}^{\prime}H_{m}=e_{m}^{\prime}J_{m-1}\)._ Proof.: First of all, it is easy to see that \(G(J_{m})=G(J_{m-1})\sqcup G(e_{m}^{\prime}H_{m})\). Let us check \(J_{m-1}\cap e_{m}^{\prime}H_{m}=e_{m}^{\prime}J_{m-1}\). Take any \(e_{i}^{\prime\prime}e_{j}^{\prime}\in G(J_{m-1})\), since \(1\leq i<j\leq m-1\), we have \(e_{m}^{\prime}e_{i}^{\prime\prime}e_{j}^{\prime}\in e_{m}^{\prime}H_{m}\cap J _{m-1}\). For the converse, take \(e_{m}^{\prime}e_{i_{1}}^{\prime\prime}\in G(e_{m}^{\prime}H_{m})\) and \(e_{i_{2}}^{\prime\prime}e_{j}^{\prime}\in G(J_{m-1})\). Here, \(1\leq i_{1}\leq m-1\) and \(1\leq i_{2}<j\leq m-1\). Then, \[\mathrm{lcm}(e_{m}^{\prime}e_{i_{1}}^{\prime\prime},e_{i_{2}}^{\prime\prime}e_ {j}^{\prime})\in(e_{m}^{\prime}e_{i_{2}}^{\prime\prime}e_{j}^{\prime})\subseteq e _{m}^{\prime}J_{m-1}.\] This shows \(J_{m-1}\cap e_{m}^{\prime}H_{m}=e_{m}^{\prime}J_{m-1}\). Next, we define functions \(\phi\) and \(\psi\) as follows: \[\phi:G(e_{m}^{\prime}J_{m-1})\to G(J_{m-1}),\quad e_{m}^{\prime}e_{i}^{ \prime\prime}e_{j}^{\prime}\mapsto e_{i}^{\prime\prime}e_{j}^{\prime},\quad 1 \leq i<j\leq m-1,\] \[\psi:G(e^{\prime}_{m}J_{m-1})\to G(e^{\prime}_{m}H_{m}),\quad e^{\prime}_{m}e^{ \prime\prime}_{i}e^{\prime}_{j}\mapsto e^{\prime}_{m}e^{\prime\prime}_{i},\quad 1 \leq i<j\leq m-1.\] (1) Let \(u\) be a minimal generator of \(e^{\prime}_{m}J_{m-1}\). Then \(u=e^{\prime}_{m}e^{\prime\prime}_{i}e^{\prime}_{j}\) for some \(1\leq i<j\leq m-1\). It follows that \[\operatorname{lcm}(\phi(u),\psi(u))=\operatorname{lcm}(e^{\prime\prime}_{i}e^ {\prime}_{j},e^{\prime}_{m}e^{\prime\prime}_{i})=e^{\prime}_{m}e^{\prime\prime }_{i}e^{\prime}_{j}=u.\] (2) Let \(C=e^{\prime}_{m}(e^{\prime\prime}_{i_{1}}e^{\prime}_{j_{1}},\dots,e^{\prime \prime}_{i_{k}}e^{\prime}_{j_{k}})\) be a non-empty subset of \(G(e^{\prime}_{m}J_{m-1})\), where \(1\leq i_{q}<j_{q}\leq m-1\) for \(q=1,\dots,k\). Then \[\phi(C)=(e^{\prime\prime}_{i_{1}}e^{\prime}_{j_{1}},\dots,e^{\prime\prime}_{i_ {k}}e^{\prime}_{j_{k}})\text{ and }\psi(C)=e^{\prime}_{m}(e^{\prime\prime}_{i_{1}},\dots,e^{\prime\prime}_{i_ {k}}).\] Since \(e^{\prime\prime}_{i}\) and \(e^{\prime}_{j}\) are co-prime for all \(1\leq i,j\leq m\), we have \[\operatorname{lcm}(C)=e^{\prime}_{m}\operatorname{lcm}(\phi(C))\text{ and } \operatorname{lcm}(C)=\operatorname{lcm}(\psi(C))\operatorname{lcm}(e^{\prime}_{ j_{1}},\dots,e^{\prime}_{j_{k}}).\] This completes the proof. In the following, we will utilize the following formula without explicitly referencing it: For a finitely generated graded module \(M\) over a standard graded polynomial ring, one has \[\max\{j-i\mid\beta_{i-a,j-b}(M)\neq 0\}=\max\{\ell+b-(k+a)\mid\beta_{k,\ell}(M) \neq 0\}=\operatorname{reg}(M)+b-a.\] **Proposition 4.4**.: _Let \(m\geq 2\). Then \(\operatorname{reg}(J_{m})=\operatorname{mat}(A_{\underline{p}})+1\)._ Proof.: It is easy to check that \(\operatorname{mat}(A_{\underline{p}})=\sum\limits_{i=1}^{m}p_{i}\). We proceed with the induction on \(m\). If \(m=2\), since \(J_{2}\) is generated by a single monomial of degree \(p_{1}+p_{2}+1\), we obtain \(\operatorname{reg}(J_{2})=p_{1}+p_{2}+1\). Suppose that \(m>2\). 
Then, by Lemma 4.3, we have ( \[\clubsuit\] ) \[\beta_{i,j}(J_{m})=\beta_{i,j}(J_{m-1})+\beta_{i,j-p_{m}-1}(H_{m})+\beta_{i-1,j-p_{m}-1}(J_{m-1}).\] It follows that \[\operatorname{reg}(J_{m}) =\max\{j-i\mid\beta_{i,j}(J_{m-1})+\beta_{i-1,j-p_{m}-1}(J_{m-1}) +\beta_{i,j-p_{m}-1}(H_{m})\neq 0\}\] \[=\max\{\operatorname{reg}(J_{m-1}),\operatorname{reg}(J_{m-1})+p_ {m},\operatorname{reg}(H_{m})+p_{m}+1\}.\] Note that \(H_{m}\) is generated by a regular sequence of degrees \(p_{1},\dots,p_{m-1}\). By using the Koszul theory, we obtain \(\operatorname{reg}(H_{m})=\sum\limits_{i=1}^{m-1}p_{i}-m+2\). Hence, \[\operatorname{reg}(J_{m}) =\max\{\sum\limits_{i=1}^{m}p_{i}+1,\sum\limits_{i=1}^{m}p_{i}-m +3\}\] \[=\sum\limits_{i=1}^{m}p_{i}+1,\] as desired. **Proposition 4.5**.: _Let \(m\geq 2\). Then \(\beta_{i}(J_{m})=(i+1)\binom{m}{i+2}\) for all \(i\geq 0\). In particular, \(\operatorname{pdim}(J_{m})=m-2\)._ Proof.: We also employ the induction on \(m\). The case that \(m=2\) or \(i=0\) is straightforward. If \(m\geq 3\) and \(i\geq 1\), then, by noting that the formula \(\binom{m}{i}=\binom{m-1}{i-1}+\binom{m-1}{i}\) holds for all \(m\geq 1\) and \(i\geq 1\), we have \[\beta_{i}(J_{m}) =\beta_{i}(J_{m-1})+\beta_{i}(H_{m})+\beta_{i-1}(J_{m-1})\] \[=(i+1)\binom{m-1}{i+2}+\binom{m-1}{i+1}+i\binom{m-1}{i+1}\] \[=(i+1)\binom{m}{i+2},\] as desired. We may compute the graded Betti numbers of \(J_{m}\) in a special case. **Proposition 4.6**.: _If \(p_{1}=\dots=p_{m}=p\), then for all \(i\geq 0\), we have_ \[\beta_{i,j}(J_{m})=\left\{\begin{array}{ll}\binom{m}{i+2},&j=(i+2)p+\ell, \ell=1,\dots,i+1;\\ 0,&\text{otherwise.}\end{array}\right.\] Proof.: We use the induction on \(m\). The case that \(m=2\) or \(i=0\) are obvious. So we suppose \(m\geq 3\) and \(i\geq 1\). By the induction hypothesis we have \[\beta_{i,j}(J_{m-1})=\left\{\begin{array}{ll}\binom{m-1}{i+2},&j=(i+2)p+\ell,\ell=1,\dots,i+1;\\ 0,&\text{otherwise}\end{array}\right.\] and so \[\beta_{i-1,j-p-1}(J_{m-1})=\left\{\begin{array}{ll}\binom{m-1}{i+1},&j=(i+2)p +\ell+1,\ell=1,\dots,i;\\ 0,&\text{otherwise.}\end{array}\right.\] On the other hand, by the theory of Koszul complex, we have \[\beta_{i,j-p-1}(H_{m})=\left\{\begin{array}{ll}\binom{m-1}{i+1},&j=(i+2)p+1; \\ 0,&\text{otherwise.}\end{array}\right.\] Now, the result follows by applying the equality (\(\clubsuit\)). ### Type two In this subsection, we denote the toric ideal and its initial ideal of \(B^{s}_{\underline{p:q}}\) as \(I^{s}_{m:n}\) and \(J^{s}_{m:n}\) respectively. Similarly, \(I^{s}_{m:n-1}\) and \(J^{s}_{m:n-1}\) represents the toric ideal and its initial ideal of \(B^{s}_{\underline{p:q^{\prime}}}\) respectively. We always use the monomial order given in Section 3. The distinction between the case when \(s>0\) and the case when \(s=0\) is significant. Let us first consider the case when \(s>0\). Recall from Subsection 3.2 that \(G(J^{s}_{m:n})\) is the disjoint union \(\left\{e^{\prime\prime}_{i}e^{\prime}_{j}\ |\ 1\leq i<j\leq m\right\}\cup\left\{f^{ \prime\prime}_{i}f^{\prime}_{j}\ |\ 1\leq i<j\leq n\right\}\cup\left\{e^{\prime}_{i}f^{ \prime}_{j}\ |\ 1\leq i\leq m,1\leq j\leq n\right\}\cup\left\{e^{\prime}_{i}x^{\prime }\ |\ 1\leq i\leq m\right\}\cup\left\{f^{\prime}_{i}x^{\prime}\ |\ 1\leq i\leq n\right\}.\) **Proposition 4.7**.: _Denote by \(H^{s}_{m:n}\) the monomial ideal \((e^{\prime}_{1},\dots,e^{\prime}_{m},f^{\prime\prime}_{1},\dots,f^{\prime \prime}_{n-1},x^{\prime})\). Then \(J^{s}_{m:n}=J^{s}_{m:n-1}+f^{\prime}_{n}H^{s}_{m:n}\) is an E-K splitting. 
Furthermore, \(J^{s}_{m:n-1}\cap f^{\prime}_{n}H^{s}_{m:n}=f^{\prime}_{n}J^{s}_{m:n-1}\)._ Proof.: First of all, it is routine to see that \(G(J^{s}_{m:n})=G(J^{s}_{m:n-1})\sqcup G(f^{\prime}_{n}H^{s}_{m:n})\) and \(J^{s}_{m:n-1}\cap f^{\prime}_{n}H^{s}_{m:n}=f^{\prime}_{n}J^{s}_{m:n-1}\). Let us define a function \(\phi:G(f^{\prime}_{n}J^{s}_{m:n-1})\to G(J^{s}_{m:n-1})\) that sends \(f^{\prime}_{n}u\) to \(u\) for all \(u\in G(J^{s}_{m:n-1})\). Similarly, we define a function \(\psi:G(f^{\prime}_{n}J^{s}_{m:n-1})\to G(f^{\prime}_{n}H^{s}_{m:n})\) using the following rules: * \(f^{\prime}_{n}e^{\prime\prime}_{i}e^{\prime}_{j}\mapsto f^{\prime}_{n}e^{ \prime}_{j}\) for all \(1\leq i<j\leq m\) and \(f^{\prime}_{n}f^{\prime\prime}_{i}f^{\prime}_{j}\mapsto f^{\prime}_{n}f^{ \prime\prime}_{i}\) for all \(1\leq i<j\leq n-1\); * \(f^{\prime}_{n}e^{\prime}_{i}f^{\prime}_{j}\mapsto f^{\prime}_{n}e^{\prime}_{i}\) for all \(1\leq i\leq m\) and \(1\leq j\leq n-1\); * \(f^{\prime}_{n}e^{\prime}_{i}x^{\prime\prime}\mapsto f^{\prime}_{n}e^{\prime}_{i}\) for all \(1\leq i\leq m\) and \(f^{\prime}_{n}f^{\prime}_{i}x^{\prime}\mapsto f^{\prime}_{n}x^{\prime}\) for all \(1\leq i\leq n-1\). It is routine to check that conditions (1) and (2) of Definition 1.3 are satisfied, thus confirming that it is indeed an E-K splitting. If \(s=0\), then \(G(J^{0}_{m:n})\) is the disjoint union \(\left\{e^{\prime}_{i}f^{\prime}_{j}\mid 1\leq i\leq m,1\leq j\leq n\right\}\cup \left\{e^{\prime\prime}_{i}e^{\prime}_{j}\mid 1\leq i<j\leq m\right\}\cup\left\{f^{ \prime\prime}_{i}f^{\prime}_{j}\mid 1\leq i<j\leq n\right\}\). Similarly, we obtain the following. **Proposition 4.8**.: _Denote by \(H^{0}_{m:n}\) the monomial ideal \((e^{\prime}_{1},\ldots,e^{\prime}_{m},f^{\prime\prime}_{1},\ldots,f^{\prime \prime}_{n-1})\). \(J^{0}_{m:n}=J^{0}_{m:n-1}+f^{\prime}_{n}H^{0}_{m:n}\) is an E-K splitting._ **Proposition 4.9**.: _Let \(s\geq 0\) be an even number, \(m,n\geq 1\)._ _Then for all \(i\geq 0\), we have_ \[\beta_{i}(J^{s}_{m:n})=\left\{\begin{array}{ll}(i+1)\binom{m+n}{i+2},&s=0;\\ (i+1)\binom{m+n+1}{i+2},&s>0.\end{array}\right.\] Proof.: We consider the following two cases. _Case s=0:_ We also employ the induction on \(n\). The case that \(m=n=1\) or \(i=0\) is straightforward. If \(m+n\geq 3\) and \(i\geq 1\), then, we have \[\beta_{i}(J^{0}_{m:n}) =\beta_{i}(J^{0}_{m:n-1})+\beta_{i}(H^{0}_{m:n})+\beta_{i-1}(J^{ 0}_{m:n-1})\] \[=(i+1)\binom{m+n-1}{i+2}+i\binom{m+n-1}{i+1}+\binom{m+n-1}{i+1}\] \[=(i+1)\binom{m+n}{i+2},\] as desired. _Case s\(>0\)_: We also employ the induction on \(n\). The case that \(i=0\) is straightforward. If \(i\geq 1\), we have \[\beta_{i}(J^{s}_{m:n}) =\beta_{i}(J^{s}_{m:n-1})+\beta_{i}(H^{s}_{m:n})+\beta_{i-1}(J^{s}_ {m:n-1})\] \[=(i+1)\binom{m+n}{i+2}+i\binom{m+n}{i+1}+\binom{m+n}{i+1}\] \[=(i+1)\binom{m+n+1}{i+2}.\] This completes the proof. \(\square\) **Proposition 4.10**.: _Let \(s\geq 0\) be an even number and \(m,n\geq 1\). Then_ \[\operatorname{reg}(J^{s}_{m:n})=\operatorname{mat}(B^{s}_{\underline{p: \underline{q}}})+1.\] _Proof. Case \(s\)=0:_ In this case, \(\operatorname{mat}(B^{0}_{\underline{p:\underline{q}}})=\sum\limits_{i=1}^{m}p _{i}+\sum\limits_{i=1}^{n}q_{i}+1.\) We proceed with the induction on \(n\). 
By Proposition 4.8, we have \[\beta_{i,j}(J^{0}_{m:n})=\beta_{i,j}(J^{0}_{m:n-1})+\beta_{i,j-q_{n}-1}(H^{0}_ {m:n})+\beta_{i-1,j-q_{n}-1}(J^{0}_{m:n-1}).\] It follows that \[\operatorname{reg}(J^{0}_{m:n}) =\max\{j-i\mid\beta_{i,j}(J^{0}_{m:n-1})+\beta_{i-1,j-q_{n}-1}(J^ {0}_{m:n-1})+\beta_{i,j-q_{n}-1}(H^{0}_{m:n})\neq 0\}\] \[=\max\{\operatorname{reg}(J^{0}_{m:n-1}),\operatorname{reg}(J^{0 }_{m:n-1})+q_{n},\operatorname{reg}(H^{0}_{m:n})+q_{n}+1\}.\] Note that \(H^{0}_{m:n}\) is generated by a regular sequence of degrees \(p_{1}+1,\ldots,p_{m}+1,q_{1},\ldots,q_{n-1}\). By using the Koszul theory, we obtain \[\operatorname{reg}(H^{0}_{m:n})=\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{ n-1}q_{i}-n+2.\] Hence, if \(n=1\), since \(\operatorname{reg}(J^{0}_{m:0})=\operatorname{reg}(J_{m})=\sum\limits_{i=1}^{ m}p_{i}+1\), we have \[\operatorname{reg}(J^{0}_{m:1})=\max\{\sum\limits_{i=1}^{m}p_{i}+q_{1}+1,\sum \limits_{i=1}^{m}p_{i}+q_{1}+2\}=\sum\limits_{i=1}^{m}p_{i}+q_{1}+2.\] This proves the case when \(n=1\). If \(n>1\), then \[\operatorname{reg}(J^{0}_{m:n}) =\max\{\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}+2, \sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}-n+3\}\] \[=\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}+2.\] _Case \(s>0\):_ In this case, we have \(\operatorname{mat}(B^{s}_{\underline{p:\underline{q}}})=\sum\limits_{i=1}^{m}p _{i}+\sum\limits_{i=1}^{n}q_{i}+\frac{s}{2}.\) We proceed with the induction on \(n\) again. First, note that \[\operatorname{reg}(H^{s}_{m:n})=\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{ n-1}q_{i}+\frac{s}{2}-n+1.\] If \(n=1\), then \[\beta_{i,j}(J^{s}_{m:1})=\beta_{i,j}(J^{s}_{m:0})+\beta_{i,j-q_{1}-1}(H^{s}_{ m:1})+\beta_{i-1,j-q_{1}-1}(J^{s}_{m:0}).\] Note that \(G(J^{s}_{m:0})=\{e^{\prime\prime}_{i}e^{\prime}_{j}\mid 1\leq i<j\leq m\}\cup\{e^{ \prime}_{i}x^{\prime\prime}\mid 1\leq i\leq m\}\). By putting \(e^{\prime\prime}_{0}=x^{\prime\prime}\), we may write \(G(J^{s}_{m:0})=\{e^{\prime\prime}_{i}e^{\prime}_{j}\mid 0\leq i<j\leq m\}.\) This is exactly the ideal studied in Subsection 4.1, and so it follows from Proposition 4.4 that \[\operatorname{reg}(J^{s}_{m:0})=\sum\limits_{i=1}^{m}p_{i}+\frac{s}{2}+1.\] Hence, \(\operatorname{reg}(J^{s}_{m:1})=\max\{\operatorname{reg}(J^{s}_{m:0})+q_{1}, \operatorname{reg}(H^{s}_{m:1})+q_{1}+1\}=\sum\limits_{i=1}^{m}p_{i}+q_{1}+ \frac{s}{2}+1\). Suppose that \(n>1\). Then, since \[\beta_{i,j}(J^{s}_{m:n})=\beta_{i,j}(J^{s}_{m:n-1})+\beta_{i,j-q_{n}-1}(H^{s}_{ m:n})+\beta_{i-1,j-q_{n}-1}(J^{s}_{m:n-1}),\] we have \[\operatorname{reg}(J^{s}_{m:n}) =\max\{\operatorname{reg}(J^{s}_{m:n-1}),\operatorname{reg}(J^{s }_{m:n-1})+q_{n},\operatorname{reg}(H^{s}_{m:n})+q_{n}+1\}\] \[=\max\{\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}+ \frac{s}{2}+1,\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}-n+\frac{s} {2}+2\}\] \[=\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}+\frac{s}{2 }+1,\] as desired. ### Type three In this subsection, we denote the toric ideal and its initial ideal of \(C_{\underline{p}:\underline{q}:\underline{r}}\) as \(I_{m:n:k}\) and \(J_{m:n:k}\) respectively. Similarly, \(I_{m:n:k-1}\) and \(J_{m:n:k-1}\) represents the toric ideal and its initial ideal of \(C_{\underline{p}:\underline{q}:\underline{r}^{\prime}}\) respectively. 
Recall from Subsection 3.3 that \(G(J_{m:n:k})\) is the disjoint union \(\left\{e^{\prime\prime}_{i}e^{\prime}_{j}\mid 1\leq i<j\leq m\right\}\cup \left\{f^{\prime\prime}_{i}f^{\prime}_{j}\mid 1\leq i<j\leq n\right\}\cup\left\{g^{ \prime\prime}_{i}g^{\prime}_{j}\mid 1\leq i<j\leq k\right\}\cup\left\{e^{\prime}_{i}y \mid 1\leq i\leq m\right\}\cup\left\{f^{\prime}_{i}z\mid 1\leq i\leq k\right\}\cup\left\{g^{ \prime}_{i}x\mid 1\leq i\leq k\right\}\cup\left\{e^{\prime}_{i}f^{\prime}_{j}\mid 1\leq i\leq m,1 \leq j\leq n\right\}\cup\left\{f^{\prime}_{i}g^{\prime}_{j}\mid 1\leq i\leq n,1 \leq j\leq k\right\}\] \[\cup\left\{g^{\prime}_{i}e^{\prime}_{j}\mid 1\leq i\leq k,1\leq j \leq m\right\}.\] **Proposition 4.11**.: _Denote by \(H_{m:n:k}\) the monomial ideal_ \[(x,e^{\prime}_{1},\ldots,e^{\prime}_{m},f^{\prime}_{1},\ldots,f^{\prime}_{n},g^ {\prime\prime}_{1},\ldots,g^{\prime\prime}_{k-1}).\] _Then \(J_{m:n:k}=J_{m:n:k-1}+g^{\prime}_{k}H_{m:n:k}\) is an E-K splitting. Furthermore, we have_ \[J_{m:n:k-1}\cap g^{\prime}_{k}H_{m:n:k}=g^{\prime}_{k}J_{m:n:k-1}.\] Proof.: First of all, it is routine to check that \(G(J_{m:n:k})=G(J_{m:n:k-1})\bigsqcup G(g^{\prime}_{k}H_{m:n:k})\) and \(J_{m:n:k-1}\cap g^{\prime}_{k}H_{m:n:k}=g^{\prime}_{k}J_{m:n:k-1}\). Define a function \(\phi:G(g^{\prime}_{k}J_{m:n:k-1})\to G(J_{C_{m:n:k-1}})\) that sends \(g^{\prime}_{k}u\) to \(u\) for all \(u\in G(J_{m:n:k-1})\). Define a function \(\psi:G(g^{\prime}_{k}J_{m:n:k-1})\to G(g^{\prime}_{k}H_{m:n:k})\) by the following rules: * \(g^{\prime}_{k}e^{\prime\prime}_{i}e^{\prime}_{j}\mapsto g^{\prime}_{k}e^{\prime}_ {j}\) for all \(1\leq i<j\leq m\) and \(g^{\prime}_{k}e^{\prime}_{i}y\mapsto g^{\prime}_{k}e^{\prime}_{i}\) for all \(1\leq i\leq m\); * \(g^{\prime}_{k}e^{\prime}_{i}f^{\prime}_{j}\mapsto g^{\prime}_{k}e^{\prime}_{j}\) for all \(1\leq i\leq m\) and \(1\leq j\leq n\); * \(g^{\prime}_{k}f^{\prime\prime}_{i}f^{\prime}_{j}\mapsto g^{\prime}_{k}f^{ \prime}_{j}\) for all \(1\leq i<j\leq n\) and \(g^{\prime}_{k}f^{\prime}_{i}z\mapsto g^{\prime}_{k}f^{\prime}_{i}\) for all \(1\leq i\leq n\); * \(g^{\prime}_{k}f^{\prime}_{i}g^{\prime}_{j}\mapsto g^{\prime}_{k}f^{\prime}_{i}\) for all \(1\leq i\leq k-1\) and \(1\leq j\leq n\); * \(g^{\prime}_{k}g^{\prime}_{i}g^{\prime}_{j}\mapsto g^{\prime}_{k}g^{\prime\prime}_ {i}\) for all \(1\leq i<j\leq k-1\) and \(g^{\prime}_{k}g^{\prime}_{i}x\mapsto g^{\prime}_{k}x\) for all \(1\leq i<j\leq k-1\); * \(g^{\prime}_{k}g^{\prime}_{i}e^{\prime}_{j}\mapsto g^{\prime}_{k}e^{\prime}_{j}\) for all \(1\leq i\leq k-1\) and \(1\leq j\leq m\). It is routine to check conditions (1) and (2) of Definition 1.3 are satisfied. **Proposition 4.12**.: _Let \(m,n,k\geq 1\). Then \(\beta_{i}(J_{m:n:k})=(i+1)\binom{m+n+k+1}{i+2}\) for all \(i\geq 0\). In particular, \(\operatorname{pdim}(J_{m:n:k})=m+n+k-1\)._ Proof.: We also employ the induction on \(k\). The case that \(i=0\) is straightforward. If \(i\geq 1\), then, we have \[\beta_{i}(J_{m:n:k}) =\beta_{i}(J_{m:n:k-1})+\beta_{i}(H_{m:n:k})+\beta_{i-1}(J_{m:n:k-1})\] \[=(i+1)\binom{m+n+k}{i+2}+\binom{m+n+k}{i+1}+i\binom{m+n+k}{i+1}\] \[=(i+1)\binom{m+n+k+1}{i+2},\] as desired. **Proposition 4.13**.: _Let \(m,n,k\geq 1\). Then \(\operatorname{reg}(J_{m:n:k})=\operatorname{mat}(C_{\underline{p}: \underline{q}:\underline{r}})+1.\)_ Proof.: Note that \(\operatorname{mat}(C_{\underline{p}:\underline{q}:\underline{r}})=\sum \limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}+\sum\limits_{i=1}^{k}r_{i}+1.\) We proceed with the induction on \(k\). 
If \(k=1\), then \[\beta_{i,j}(J_{m:n:1})=\beta_{i,j}(J_{m:n:0})+\beta_{i,j-r_{1}-1}(H_{m:n:1})+ \beta_{i-1,j-r_{1}-1}(J_{m:n:0}).\] Since \(\operatorname{reg}(J_{m:n:0})=\operatorname{reg}(J_{m:n}^{2})\), we obtain \(\operatorname{reg}(J_{m:n:1})=\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n} q_{i}+r_{1}+2.\) Suppose that \(k>1\). Then, by Proposition 4.11, we have \[\beta_{i,j}(J_{m:n:k})=\beta_{i,j}(J_{m:n:k-1})+\beta_{i,j-r_{k}-1}(H_{m:n:k})+ \beta_{i-1,j-r_{k}-1}(J_{m:n:k-1}).\] It follows that \(\operatorname{reg}(J_{m:n:k})\) \[=\max\{j-i\mid\beta_{i,j}(J_{m:n:k-1})+\beta_{i-1,j-r_{k}-1}(J_{m:n:k-1})+ \beta_{i,j-r_{k}-1}(H_{m:n:k})\neq 0\}\] \[=\max\{\operatorname{reg}(J_{m:n:k-1}),\operatorname{reg}(H_{m:n:k })+r_{k}+1,\operatorname{reg}(J_{m:n:k-1})+r_{k}\}.\] Note that \(H_{m:n:k}\) is generated by a regular sequence of degrees \(p_{1}+1,\ldots,p_{m}+1,q_{1}+1,\ldots,q_{n}+1,r_{1},\ldots,r_{k-1},1\), we obtain \(\operatorname{reg}(H_{m:n:k})=\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n} q_{i}+\sum\limits_{i=1}^{k-1}r_{i}-k+2.\) Hence, \[\operatorname{reg}(J_{m:n:k}) =\max\{\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}+\sum \limits_{i=1}^{k}r_{i}+2,\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i} +\sum\limits_{i=1}^{k}r_{i}-k+3\}\] \[=\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}+\sum \limits_{i=1}^{k}r_{i}+2,\] as desired. To complete the proof of Theorem 4.1, we require some additional notation and facts. Recall a connected graph is _planar_ if it can be drawn on a 2D plane such that none of the edges intersect. If a planar graph \(G\) is drawn in this way, it divides the plane into regions called _faces_. The number of faces is denoted by \(f(G)\). The famous Euler formula states that for any planar graph \(G\), we have \[|E(G)|-|V(G)|=f(G)-2.\] If we assume that every edge of \(G\) belongs to at most one induced cycle, then there is a one-to-one correspondence between induced cycles and bounded faces of \(G\). Since there is exactly one unbounded face of \(G\), it follows that \(f(G)=t(G)+1\). However, it is worth noting that the formula \(f(G)=t(G)+1\) does not hold in general. For example, if \(G\) is the complete graph with \(4\) vertices, then \(G\) is planar, but \(f(G)=t(G)=4\). We are now ready to present the proof of Theorem 4.1. Proof.: (1) This is a combination of Propositions 4.5, 4.9 and Proposition 4.12. (2) It follows immediately from (1). (3) Since \(G\) is a compact graph, \(G\) is a planar graph and every edge of \(G\) belongs to at most one induced cycle. Hence, because of the discussion above, we have \[|E(G)|-|V(G)|=t(G)-1.\] This implies \[\mathrm{depth}(\mathbb{K}[E(G)]/\mathrm{in}_{<}(I_{G})) =|E(G)|-\mathrm{pdim}(\mathbb{K}[E(G)]/\mathrm{in}_{<}(I_{G}))\] \[=|V(G)|=\dim(\mathbb{K}[G])\] \[=\dim(\mathbb{K}[E(G)]/\mathrm{in}_{<}(I_{G})).\] Here, the second last equality follows from [15, Corollary 10.1.21]. Hence, by definition, \(\mathbb{K}[E(G)]/\mathrm{in}_{<}(I_{G})\) is Cohen-Macaulay. (4) This is a combination of Propositions 4.4, 4.10 and Proposition 4.13. ## 5. Cohen-Macaulay types and top graded Betti numbers Assume that \(G\) is a compact graph. In this section we will compute the top graded Betti numbers of \(\mathbb{K}[G]\). Since \(\mathbb{K}[G]\) is Cohen-Macaulay, the regularity of \(\mathbb{K}[G]\) is determined by its top graded Betti numbers. Therefore, the regularity formula given in Section 4 could also be deduced from the results of this section. 
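As a quick illustration (added here, and only assuming the description of \(A_{\underline{p}}\) given in Subsection 3.1), consider the smallest type one case \(m=3\), \(\underline{p}=(1,1,1)\): the graph \(A_{\underline{p}}\) consists of three triangles glued along the common vertex \(u\), so \(t(A_{\underline{p}})=3\) and \(\mathrm{mat}(A_{\underline{p}})=3\). Theorem 4.1 then gives \(\beta_{i}(\mathrm{in}_{<}(I_{A_{\underline{p}}}))=(i+1)\binom{3}{i+2}\), that is, \(\beta_{0}=3\) and \(\beta_{1}=2\), together with \(\mathrm{pdim}(\mathbb{K}[A_{\underline{p}}])=2\) and \(\mathrm{reg}(\mathbb{K}[A_{\underline{p}}])=3\) by Corollary 4.2. The Cohen-Macaulay type computed below is \(t(A_{\underline{p}})-1=2\), in agreement with Proposition 5.3, which places the two top graded Betti numbers in degrees \(\mathrm{mat}(A_{\underline{p}})+1=4\) and \(\mathrm{mat}(A_{\underline{p}})+2=5\).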
To present the top graded Betti numbers of \(\mathbb{K}[G]\), we need to consider three cases. The most complex case is when \(G\) is a compact graph of type \(3\), and we will provide detailed proof specifically for this case. The proofs for the cases when \(G\) is a compact graph of type one or type two are similar, with only minor differences, so we will only provide an outline of the proofs for those cases. The top graded Betti numbers of the edge rings of three types of compact graphs are presented in Propositions 5.2, 5.3 and Proposition 5.4, respectively. By combining the aforementioned results and their proofs, the following conclusion regarding the top total Betti numbers can be immediately derived. **Theorem 5.1**.: _Let \(G\) be a compact graph, and let \(I_{G}\) be the toric ideal of \(\mathbb{K}[G]\). Denote by \(J_{G}\) the initial ideal of \(I_{G}\) with respect to the order given in Section 3. Then \(I_{G}\) and \(J_{G}\) share the same top graded Betti numbers. In particular, we have \(\mathrm{type}(\mathbb{K}[G])=t(G)-1\)._ ### type three Let \(C\) denote the compact graph \(C_{\underline{p:q;r}}\), whose vertex set \(V(C)\) and edge set \(E(C)\) are given explicitly in Subsection 3.3. In this subsection, we compute the minimal generators of the canonical module \(\omega_{\mathbb{K}[C]}\) and then determine the top graded Betti numbers of the toric ring \(\mathbb{K}[C]\). It is easy to see that \(|V(C)|=2\sum\limits_{i=1}^{m}p_{i}+2\sum\limits_{i=1}^{n}q_{i}+2\sum\limits_{i= 1}^{k}r_{i}+3\). We use the following notions for all the entries of \(\mathbb{R}^{|V(C)|}\): \[\mathbb{R}^{|V(C)|}=\{\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{2p_ {i}}a_{i,j}\mathbf{u}_{i,j}+a\mathbf{u}+\sum\limits_{i=1}^{n}\sum\limits_{j=1}^ {2q_{i}}b_{i,j}\mathbf{v}_{i,j}+b\mathbf{v}+\sum\limits_{i=1}^{k}\sum\limits_{j= 1}^{2r_{i}}c_{i,j}\mathbf{w}_{i,j}+c\mathbf{w}\mid\] \[a_{i,j},a,b_{i,j},b,c_{i,j},c\in\mathbb{R}\text{ for all }i,j\},\] where \(\mathbf{u},\mathbf{u}_{i,j},\mathbf{v},\mathbf{v}_{i,j},\mathbf{w},\mathbf{w}_ {i,j}\) are the unit vectors of \(\mathbb{R}^{|V(C)|}\), each \(\mathbf{u}_{i,j}\) (resp. \(\mathbf{v}_{i,j},\mathbf{w}_{i,j}\)) corresponds to \(u_{i,j}\) (resp. \(v_{i,j},w_{i,j}\)) (where \(1\leq i\leq m\) and \(1\leq j\leq 2p_{i}\)) (resp. \(1\leq i\leq n\) and \(1\leq j\leq 2q_{i}\), \(1\leq i\leq k\) and \(1\leq j\leq 2r_{i}\)) and \(\mathbf{u}\) (resp. \(\mathbf{v},\mathbf{w}\) ) corresponds to \(u\) (resp. \(v,w\)). In what follows, we will construct \(m+n+k\) integral vectors in \(\mathbb{R}^{|V(C)|}\) and then show that they are minimal vectors of \(\operatorname{relint}(\mathbb{R}_{+}(C))\cap\mathbb{Z}^{|V(C)|}\). Here, an integral vector in \(\operatorname{relint}(\mathbb{R}_{+}(C))\) is called _minimal_ if it cannot written as the sum of a vector in \(\operatorname{relint}(\mathbb{R}_{+}(C))\cap\mathbb{Z}^{|V(C)|}\) and a nonzero vector of \(\mathbb{R}_{+}(C)\cap\mathbb{Z}^{|V(C)|}\). 
The construction is as follows: For \(\ell=1,\ldots,m\), let \[\alpha_{\ell}:=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{2p_{i}}\mathbf{u}_{i, j}+\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{2q_{i}}\mathbf{v}_{i,j}+\sum\limits_{i= 1}^{k}\sum\limits_{j=1}^{2r_{i}}\mathbf{w}_{i,j}+\mathbf{w}+2\ell\mathbf{v}.\] For \(\ell=1,\ldots,k\), let \[\gamma_{\ell}=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{2p_{i}}\mathbf{u}_{i,j }+\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{2q_{i}}\mathbf{v}_{i,j}+\sum\limits_ {i=1}^{k}\sum\limits_{j=1}^{2r_{i}}\mathbf{w}_{i,j}+\mathbf{u}+\mathbf{v}+2 \ell\mathbf{w}.\] We now verify that \(\alpha_{\ell},\beta_{\ell},\gamma_{\ell}\in\operatorname{relint}\mathbb{R}_{+ }(C)\) for all possible \(\ell\). For this, we put \(u_{i}^{(1)}=\{u_{i,j}\mid j=1,3,\ldots,2p_{i}-1\}\) for \(i=1,\ldots,m\), \(u_{i}^{(2)}=\{u_{i,j}\mid j=2,4,\ldots,2p_{i}\}\) for \(i=1,\ldots,m\) and \(v_{i}^{(1)},v_{i}^{(2)},w_{i}^{(1)},w_{i}^{(2)}\) are defined similarly. We see the following: * Each of \(u_{i,j}\)'s, \(v_{i,j}\)'s and \(w_{i,j}\)'s is a regular vertex of \(C\), while \(u\), \(v\) and \(w\) are not. * An independent subset \(T\) of \(V(C)\) is fundamental if and only if \(T\) is one of the following sets: \((i)\ \bigcup\limits_{i=1}^{m}u_{i}^{(f_{i})}\), where \((f_{1},\ldots,f_{m})\in\{1,2\}^{m}\); \((ii)\ \bigcup\limits_{i=1}^{n}v_{i}^{(g_{i})}\), where \((g_{1},\ldots,g_{n})\in\{1,2\}^{n}\); \((iii)\ \bigcup\limits_{i=1}^{k}w_{i}^{(h_{i})}\), where \((h_{1},\ldots,h_{k})\in\{1,2\}^{k}\); \[(iv) \{u\}\cup\bigcup\limits_{i=1}^{m}(u_{i}^{(f_{i})}\backslash\{u_{i,1}, u_{i,2p_{i}}\})\cup\bigcup\limits_{i=1}^{n}v_{i}^{(g_{i})}\cup\bigcup\limits_{i=1}^{k}w_{i }^{(h_{i})}\},\text{ where }(f_{1},\ldots,f_{m})\in\{1,2\}^{m},\,(g_{1},\ldots,g_{n})\in\{1,2\}^{n} \text{ and }(h_{1},\ldots,h_{k})\in\{1,2\}^{k};\] \[(v) \{v\}\cup\bigcup\limits_{i=1}^{n}(v_{i}^{(g_{i})}\setminus\{v_{ i,1},v_{i,2p_{i}}\})\cup\bigcup\limits_{i=1}^{m}u_{i}^{(f_{i})}\cup\bigcup \limits_{i=1}^{k}w_{i}^{(h_{i})}\,\text{ where }(f_{1},\ldots,f_{m})\in\{1,2\}^{m},\,(g_{1}, \ldots,g_{n})\in\{1,2\}^{n}\text{ and }(h_{1},\ldots,h_{k})\in\{1,2\}^{k};\] \[(vi) \{w\}\cup\bigcup\limits_{i=1}^{k}(w_{i}^{(h_{i})}\setminus\{w_{ i,1},w_{i,2p_{i}}\})\cup\bigcup\limits_{i=1}^{m}u_{i}^{(f_{i})}\cup\bigcup \limits_{i=1}^{n}v_{i}^{(g_{i})},\text{ where }(f_{1},\ldots,f_{m})\in\{1,2\}^{m},\,(g_{1}, \ldots,g_{n})\in\{1,2\}^{n}\text{ and }(h_{1},\ldots,h_{k})\in\{1,2\}^{k}.\] It should be noted that there are \(2^{m}\) fundamental sets in \((i)\) and \(2^{m+n+k}\) fundamental sets in \((iv)\), and so on. Hence, it follows from \((\Delta)\) (see this in Subsection 1.3) that a vector of \(\mathbb{R}^{|V(C)|}\) of the form: \[\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{2p_{i}}a_{i,j}\mathbf{u}_{i,j}+a \mathbf{u}+\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{2q_{i}}b_{i,j}\mathbf{v}_{ i,j}+b\mathbf{v}+\sum\limits_{i=1}^{k}\sum\limits_{j=1}^{2r_{i}}c_{i,j} \mathbf{w}_{i,j}+c\mathbf{w}\] belongs to \(\mathbb{R}_{+}(C)\) if and only if the following inequalities are satisfied: 1. \(a_{i,j}\geq 0\) for any \(1\leq i\leq m\) and \(1\leq j\leq 2p_{i}\); 2. \(b_{i,j}\geq 0\) for any \(1\leq i\leq n\) and \(1\leq j\leq 2q_{i}\); 3. \(c_{i,j}\geq 0\) for any \(1\leq i\leq k\) and \(1\leq j\leq 2k_{i}\); 4. \(\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{2p_{i}}a_{i,j}-\sum\limits_{u_{i,j}\in T }a_{i,j}+a\geq\sum\limits_{u_{i,j}\in T}a_{i,j}\) for any \(T\in(i)\); 5. 
\(\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{2q_{i}}b_{i,j}-\sum\limits_{v_{i,j}\in T }b_{i,j}+b\geq\sum\limits_{v_{i,j}\in T}b_{i,j}\) for any \(T\in(ii)\); 6. \(\sum\limits_{i=1}^{k}\sum\limits_{j=1}^{2r_{i}}c_{i,j}-\sum\limits_{w_{i,j} \in T}c_{i,j}+c\geq\sum\limits_{w_{i,j}\in T}c_{i,j}\) for any \(T\in(iii)\); 7. \(\sum+b+c\geq a+2(\sum\limits_{u_{i,j}\in T}a_{i,j}+\sum\limits_{v_{i,j}\in T }b_{i,j}+\sum\limits_{w_{i,j}\in T}c_{i,j})\) for any \(T\in(iv)\); 8. \(\sum+c+a\geq b+2(\sum\limits_{u_{i,j}\in T}a_{i,j}+\sum\limits_{v_{i,j}\in T }b_{i,j}+\sum\limits_{w_{i,j}\in T}c_{i,j})\) for any \(T\in(v)\); 9. \(\sum+a+b\geq c+2(\sum\limits_{u_{i,j}\in T}a_{i,j}+\sum\limits_{v_{i,j}\in T }b_{i,j}+\sum\limits_{w_{i,j}\in T}c_{i,j})\) for any \(T\in(vi)\). Here, \(\sum\) denotes \(\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{2p_{i}}a_{i,j}+\sum\limits_{i=1}^{n} \sum\limits_{j=1}^{2q_{i}}b_{i,j}+\sum\limits_{i=1}^{k}\sum\limits_{j=1}^{2r_{i} }c_{i,j}\). It is straightforward to check that \(\alpha_{\ell}\) satisfies these inequalities, with strict inequalities holding for each \(\alpha_{\ell}\). This implies that \(\alpha_{\ell}\in\operatorname{relint}(\mathbb{R}_{+}(C))\cap\mathbb{Z}^{|V(C)|}\). Next, we show that \(\alpha_{\ell}\) is a minimal vector in \(\operatorname{relint}(\mathbb{R}_{+}(C))\cap\mathbb{Z}^{|V(C)|}\), i.e., it cannot be written as a sum of an element in \(\operatorname{relint}(\mathbb{R}_{+}(C))\cap\mathbb{Z}^{|V(C)|}\) and an element in \(\mathbb{R}_{+}(C)\cap\mathbb{Z}^{|V(C)|}\setminus\{\mathbf{0}\}\) for all \(\ell=1,\ldots,m\). Suppose on the contrary that \(\alpha_{\ell}=\alpha^{\prime}+\alpha^{\prime\prime}\) for some \(\alpha^{\prime}\in\operatorname{relint}(\mathbb{R}_{+}(C))\cap\mathbb{Z}^{|V(C)|}\) and \(\alpha^{\prime\prime}\in\mathbb{R}_{+}(C)\cap\mathbb{Z}^{|V(C)|}\setminus\{ \mathbf{0}\}\). Write \[\alpha^{\prime}=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{2p_{i}}a^{\prime}_{i,j} \mathbf{u}_{i,j}+a^{\prime}\mathbf{u}+\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{2 q_{i}}b^{\prime}_{i,j}\mathbf{v}_{i,j}+b^{\prime}\mathbf{v}+\sum\limits_{i=1}^{k}\sum \limits_{j=1}^{2r_{i}}c^{\prime}_{i,j}\mathbf{w}_{i,j}+c^{\prime}\mathbf{w},\] \[\alpha^{\prime\prime}=\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{2p_{i}}a^{\prime \prime}_{i,j}\mathbf{u}_{i,j}+a^{\prime\prime}\mathbf{u}+\sum\limits_{i=1}^{n} \sum\limits_{j=1}^{2q_{i}}b^{\prime\prime}_{i,j}\mathbf{v}_{i,j}+b^{\prime\prime }\mathbf{v}+\sum\limits_{i=1}^{k}\sum\limits_{j=1}^{2r_{i}}c^{\prime\prime}_{i, j}\mathbf{w}_{i,j}+c^{\prime\prime}\mathbf{w}.\] In view of the inequalities (1) - (3), we see that \(a^{\prime}_{i,j},b^{\prime}_{i,j},c^{\prime}_{i,j}\geq 1\) for all \(i,j\). Because of the inequalities (4) - (6), we also see that \(a^{\prime},b^{\prime},c^{\prime}\geq 1\). Hence, \(a^{\prime\prime}_{i,j}=b^{\prime\prime}_{i,j}=c^{\prime\prime}_{i,j}=b^{\prime \prime}=c^{\prime\prime}=0\) for all possible \(i,j\). From this together with (7) it follows that \(a^{\prime\prime}\leq 0\). Hence, \(\alpha^{\prime\prime}=0.\) This is a contradiction, which shows that \(\alpha_{\ell}\) is a minimal vector in \(\operatorname{relint}(\mathbb{R}_{+}(C))\cap\mathbb{Z}^{|V(C)|}\) for \(\ell=1,\ldots,m\). Likewise, so are \(\beta_{\ell}\)'s and \(\gamma_{\ell}\)'s. **Proposition 5.2**.: _Let \(C\) be defined as before. Assume \(m\leq n\leq k\). 
Then \(\operatorname{type}(\mathbb{K}[C])=m+n+k\), and the top graded Betti numbers of \(\mathbb{K}[C]\) are given by_ \[\beta_{m+n+k,j}(\mathbb{K}[C])=\left\{\begin{array}{ll}1,&j=\operatorname{ mat}(C)+n+m+\ell,\quad\ell=1,\ldots,k-n\text{;}\\ 2,&j=\operatorname{mat}(C)+m+k+\ell,\quad\ell=1,\ldots,n-m\text{;}\\ 3,&j=\operatorname{mat}(C)+k+n+\ell,\quad\ell=1,\ldots,m\text{;}\\ 0,&\text{otherwise.}\end{array}\right.\] Proof.: Every minimal vector in \(\operatorname{relint}(\mathbb{R}_{+}(C))\cap\mathbb{Z}^{|V(C)|}\) corresponds to a minimal generator of \(\omega_{\mathbb{K}[C]}\). It follows from the above discussion that \(\operatorname{type}(\mathbb{K}[C])\geq m+n+k\). Since \(\operatorname{type}(\mathbb{K}[C])\) is equal to the top total Betti number of \(\mathbb{K}[C]\), we conclude that \(\operatorname{type}(\mathbb{K}[C])\leq m+n+k\) by Theorem 4.1. Thus the first conclusion follows. From this, we see that \(\alpha_{1},\ldots,\alpha_{m},\beta_{1},\ldots,\beta_{n},\gamma_{1},\ldots,\gamma _{k}\) are all the minimal vectors of \(\operatorname{relint}(\mathbb{R}_{+}(C))\cap\mathbb{Z}^{|V(C)|}\). Therefore, the set of monomials \[\{x^{\alpha_{\ell}},\ \ell=1,\ldots,m;\quad x^{\beta_{\ell}},\ \ell=1,\ldots,n; \quad x^{\gamma_{\ell}},\ \ell=1,\ldots,k\}\] is a minimal generating set of \(\omega_{\mathbb{K}[C]}\), which is an ideal of the edge ring \(\mathbb{K}[C]\subset\mathbb{K}[V(C)]\). Note that every monomial \(x^{\alpha}\) belonging to \(\mathbb{K}[C]\), which is regarded as a graded module over the standard graded ring \(\mathbb{K}[E(C)]\), has a degree of \(\frac{1}{2}|\alpha|\). Hence, \[\beta_{0,j}(\omega_{\mathbb{K}[C]})=\left\{\begin{array}{ll}3,&j=\sum \limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}+\sum\limits_{i=1}^{k}r_{i}+1 +\ell,\ell=1,\ldots,m;\\ 2,&j=\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}+\sum\limits_{i=1}^{ k}r_{i}+1+\ell,\ell=m+1,\ldots,n;\\ 1,&j=\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}+\sum\limits_{i=1}^{ k}r_{i}+1+\ell,\ell=n+1,\ldots,k;\\ 0,&\text{otherwise.}\end{array}\right.\] Since \(|E(C)|=2(\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}+\sum\limits_{i= 1}^{k}r_{i})+m+n+k+3\) and \(\operatorname{mat}(C)=\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i}+ \sum\limits_{i=1}^{k}r_{i}+1\), the second conclusion follows by Lemma 1.1. ### type one Let \(A\) denote the compact graph \(A_{\underline{p}}\), whose vertex set \(V(A)\) and edge set \(E(A)\) are given explicitly in Subsection 3.1. Then \(|V(A)|=2\sum\limits_{i=1}^{m}p_{i}+1\). We may write \[\mathbb{R}^{|V(A)|}=\{\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{2p_{i}}a_{i,j} \mathbf{u}_{i,j}+a\mathbf{u}\mid\text{ all }a_{i,j},a\in\mathbb{R}\}.\] Here, \(\mathbf{u}_{i,j},\mathbf{u}\) correspond the vertices of \(A\) in a natural way. Then, we could show the following vectors \[\alpha_{\ell}:=\sum_{i=1}^{m}\sum_{j=1}^{2p_{i}}\mathbf{u}_{i,j}+2\ell\mathbf{u},\ \ell=1,\ldots,m-1\] are all the minimal vectors of \(\mathrm{relint}(\mathbb{R}_{+}(A))\cap\mathbb{Z}^{|V(A)|}\). **Proposition 5.3**.: _Let \(A\) denote the compact graph \(A_{\underline{p}}\). Then \(\mathrm{type}(\mathbb{K}[A])=m-1\), and the top graded Betti numbers of \(\mathbb{K}[A]\) are given by_ \[\beta_{m-1,j}(\mathbb{K}[A])=\left\{\begin{array}{ll}1,&j=\mathrm{mat}(A)+ \ell,\ \ell=1,\ldots,m-1\mbox{;}\\ 0,&\mbox{otherwise.}\end{array}\right.\] ### type two Let \(B^{0}\) and \(B^{s}\) denote the compact graph \(B^{0}_{\underline{p}\cdot\underline{q}}\) and \(B^{s}_{\underline{p}\cdot\underline{q}}\) respectively. 
Their vertex sets \(V(B^{0})\) and \(V(B^{s})\) and edge sets \(E(B^{0})\) and \(E(B^{s})\) are given explicitly in Subsection 3.2. Then \(|V(B^{0})|=2(\sum\limits_{i=1}^{m}p_{i}+\sum\limits_{i=1}^{n}q_{i})+2\). We may write \[\mathbb{R}^{|V(B^{0})|}=\{\sum_{i=1}^{m}\sum_{j=1}^{2p_{i}}a_{i,j}\mathbf{u}_{i,j}+\sum_{i=1}^{n}\sum_{j=1}^{2q_{i}}b_{i,j}\mathbf{v}_{i,j}+a\mathbf{u}+b\mathbf{v}\ |\ \ \mathrm{all}\ a_{i,j},b_{i,j},a,b\in\mathbb{R}\}.\] Here, \(\mathbf{u}_{i,j},\mathbf{v}_{i,j},\mathbf{u},\mathbf{v}\) correspond to the vertices of \(B^{0}\) in the natural way. Then, one can show that the following vectors \[\alpha_{\ell}:=\sum_{i=1}^{m}\sum_{j=1}^{2p_{i}}\mathbf{u}_{i,j}+\sum_{i=1}^{n}\sum_{j=1}^{2q_{i}}\mathbf{v}_{i,j}+\mathbf{v}+(2\ell+1)\mathbf{u},\quad\ell=0,\ldots,m-1\] and \[\beta_{\ell}:=\sum_{i=1}^{m}\sum_{j=1}^{2p_{i}}\mathbf{u}_{i,j}+\sum_{i=1}^{n}\sum_{j=1}^{2q_{i}}\mathbf{v}_{i,j}+\mathbf{u}+(2\ell+1)\mathbf{v},\quad\ell=1,\ldots,n-1\] are all the minimal vectors of \(\mathrm{relint}(\mathbb{R}_{+}(B^{0}))\cap\mathbb{Z}^{|V(B^{0})|}\). On the other hand, we have \(|V(B^{s})|=|V(B^{0})|+s-1\) and we may write \[\mathbb{R}^{|V(B^{s})|}=\{\sum_{i=1}^{m}\sum_{j=1}^{2p_{i}}a_{i,j}\mathbf{u}_{i,j}+\sum_{i=1}^{n}\sum_{j=1}^{2q_{i}}b_{i,j}\mathbf{v}_{i,j}+\sum_{i=1}^{s-1}c_{i}\mathbf{w}_{i}+a\mathbf{u}+b\mathbf{v}\ |\ \ \mathrm{all}\ a_{i,j},b_{i,j},c_{i},a,b\in\mathbb{R}\}.\] Here, \(\mathbf{u}_{i,j},\mathbf{v}_{i,j},\mathbf{w}_{i},\mathbf{u},\mathbf{v}\) correspond to the vertices of \(B^{s}\) in the natural way. Similarly, one can show that the following vectors \[\alpha_{\ell}:=\sum_{i=1}^{m}\sum_{j=1}^{2p_{i}}\mathbf{u}_{i,j}+\sum_{i=1}^{n}\sum_{j=1}^{2q_{i}}\mathbf{v}_{i,j}+\sum_{i=1}^{s-1}\mathbf{w}_{i}+\mathbf{v}+2\ell\mathbf{u},\quad\ell=1,\ldots,m\] and \[\beta_{\ell}:=\sum_{i=1}^{m}\sum_{j=1}^{2p_{i}}\mathbf{u}_{i,j}+\sum_{i=1}^{n}\sum_{j=1}^{2q_{i}}\mathbf{v}_{i,j}+\sum_{i=1}^{s-1}\mathbf{w}_{i}+\mathbf{u}+2\ell\mathbf{v},\quad\ell=1,\ldots,n\] are all the minimal vectors of \(\mathrm{relint}(\mathbb{R}_{+}(B^{s}))\cap\mathbb{Z}^{|V(B^{s})|}\). **Proposition 5.4**.: _Let \(B^{0}\) and \(B^{s}\) be defined as before. Assume \(m\leq n\). Then the following statements hold:_ * \(\operatorname{type}(\mathbb{K}[B^{0}])=m+n-1\) _and_ \(\operatorname{type}(\mathbb{K}[B^{s}])=m+n\)_;_ * _the top graded Betti numbers of_ \(\mathbb{K}[B^{0}]\) _are given by_ \[\beta_{m+n-1,j}(\mathbb{K}[B^{0}])=\left\{\begin{array}{ll}1,&j=\operatorname{mat}(B^{0})+m-1+\ell,\quad\ell=1,\ldots,n-m\text{;}\\ 2,&j=\operatorname{mat}(B^{0})+n-1+\ell,\quad\ell=1,\ldots,m-1\text{;}\\ 1,&j=\operatorname{mat}(B^{0})+m+n-1\text{;}\\ 0,&\text{otherwise;}\end{array}\right.\] * _the top graded Betti numbers of_ \(\mathbb{K}[B^{s}]\) _are given by_ \[\beta_{m+n,j}(\mathbb{K}[B^{s}])=\left\{\begin{array}{ll}1,&j=\operatorname{mat}(B^{s})+\ell,\quad\ell=m+1,\ldots,n\text{;}\\ 2,&j=\operatorname{mat}(B^{s})+n+\ell,\quad\ell=1,\ldots,m\text{;}\\ 0,&\text{otherwise.}\end{array}\right.\] ## 6. A question Let \(G\) be a compact graph, and let \(I_{G}\) be the toric ideal of \(\mathbb{K}[G]\). Denote by \(J_{G}\) the initial ideal of \(I_{G}\) with respect to the order given in Section 3. As we have seen in the previous section, \(I_{G}\) and \(J_{G}\) share the same top graded Betti numbers.
This naturally leads to the following question: _Do \(I_{G}\) and \(J_{G}\) always share the same graded Betti numbers?_ Unfortunately, we are unable to provide a general answer to this question, except for a very specific case when \(G\) is a compact graph of type one. In what follows, we use \(A\) to denote the compact graph \(A_{\underline{p}}\), where \(\underline{p}=(\overbrace{p,\ldots,p}^{m})\) is a vector in \(\mathbb{Z}_{+}^{m}\). Let \(f(t)\) and \(g(t)\) denote the polynomials \(\sum\limits_{i,j}\beta_{i,j}(I_{A})(-1)^{i}t^{j}\) and \(\sum\limits_{i,j}\beta_{i,j}(J_{A})(-1)^{i}t^{j}\), respectively. It is known that \(f(t)=g(t)\) and \(\beta_{i,j}(I_{A})\leq\beta_{i,j}(J_{A})\) for all \(i,j\). **Proposition 6.1**.: _If \(2\leq m\leq p+3\), then_ \[\beta_{i,j}(I_{A})=\beta_{i,j}(J_{A})=\left\{\begin{array}{ll}\binom{m}{i+2},&j=(i+2)p+\ell,\ell=1,\ldots,i+1\text{;}\\ 0,&\text{otherwise.}\end{array}\right.\] Proof.: Put \(A_{i}=\{j\in\mathbb{Z}\mid\beta_{i,j}(J_{A})\neq 0\}\) for all \(i\geq 0\). Then, by Proposition 4.6, we have \(A_{i}=\{(i+2)p+\ell\mid\ell=1,\ldots,i+1\}\) for \(0\leq i\leq m-2\), and \(A_{i}=\emptyset\) otherwise. If \(j\notin A_{i}\), then \(\beta_{i,j}(J_{A})=\beta_{i,j}(I_{A})=0\). Therefore, we will next consider only the case when \(j\in A_{i}\). (1) If \(m\leq p+2\), then it follows that \(A_{i_{1}}\cap A_{i_{2}}=\emptyset\) for any distinct \(i_{1}\) and \(i_{2}\). Consequently, for any \(j\in A_{i}\), the coefficient of \(t^{j}\) in \(f(t)\) is \((-1)^{i}\beta_{i,j}(I_{A})\), while in \(g(t)\) it is \((-1)^{i}\beta_{i,j}(J_{A})\). Therefore, we can deduce that \(\beta_{i,j}(J_{A})=\beta_{i,j}(I_{A})\). (2) If \(m=p+3\), then for any pair \(i_{1}\neq i_{2}\), \(A_{i_{1}}\cap A_{i_{2}}\neq\emptyset\) if and only if \(\{i_{1},i_{2}\}=\{m-3,m-2\}\), and in that case \(A_{m-3}\cap A_{m-2}=\{mp+1\}=\{(m-1)p+m-2\}\). If \(j\neq mp+1\), then it follows that \(\beta_{i,j}(I_{A})=\beta_{i,j}(J_{A})\) for the same reason as in (1). If \(j=mp+1\), then, by comparing the coefficients of \(t^{mp+1}\) in \(f(t)\) and \(g(t)\), we conclude that \[\beta_{m-3,mp+1}(I_{A})-\beta_{m-2,mp+1}(I_{A})=\beta_{m-3,mp+1}(J_{A})-\beta_{m-2,mp+1}(J_{A}).\] On the other hand, we have \(\beta_{m-2,mp+1}(I_{A})=\beta_{m-2,mp+1}(J_{A})\) by Theorem 5.1. From this it follows that \(\beta_{m-3,mp+1}(I_{A})=\beta_{m-3,mp+1}(J_{A})\), as required. **Acknowledgment:** This project is supported by NSFC (No. 11971338)
2303.17917
Higher-order retraction maps and construction of numerical methods for optimal control of mechanical systems
Retraction maps are used to define a discretization of the tangent bundle of the configuration manifold as two copies of the configuration manifold where the dynamics take place. Such discretization maps can be conveniently lifted to a higher-order tangent bundle to construct geometric integrators for the higher-order Euler-Lagrange equations. Given a cost function, an optimal control problem for fully actuated mechanical systems can be understood as a higher-order variational problem. In this paper we introduce the notion of a higher-order discretization map associated with a retraction map to construct geometric integrators for the optimal control of mechanical systems. In particular, we study applications to path planning for obstacle avoidance of a planar rigid body.
Alexandre Anahory Simoes, Maria Barbero Liñán, Leonardo Colombo, David Martín de Diego
2023-03-31T09:24:26Z
http://arxiv.org/abs/2303.17917v1
Higher-order retraction maps and construction of numerical methods for optimal control of mechanical systems ###### Abstract Retractions maps are used to define a discretization of the tangent bundle of the configuration manifold as two copies of the configuration manifold where the dynamics take place. Such discretization maps can be conveniently lifted to a higher-order tangent bundle to construct geometric integrators for the higher-order Euler-Lagrange equations. Given a cost function, an optimal control problem for fully actuated mechanical systems can be understood as a higher-order variational problem. In this paper we introduce the notion of a higher-order discretization map associated with a retraction map to construct geometric integrators for the optimal control of mechanical systems. In particular, we study applications to path planning for obstacle avoidance of a planar rigid body. ## I Introduction In this paper, we consider fully-actuated optimal control problems as higher-order variational problems (see [6] and [9]). Such problems are defined on the \(k^{th}\)-order tangent bundle \(T^{(k)}Q\) of a differentiable manifold \(Q\) (see [17]). For a higher-order Lagrangian function \(L:T^{(k)}Q\to\mathbb{R}\) and local coordinates \((q,\dot{q},\ldots,q^{(k)})\) on \(T^{(k)}Q\) the higher-order variational problems are given by \[\min_{q(\cdot)}\int_{0}^{T}L(q(t),\dot{q}(t),\ldots,q^{(k)}(t))dt,\] subject to the boundary conditions \(q^{(j)}(0)=q^{j}_{0}\), \(q^{(j)}(T)=q^{j}_{T}\) for \(0\leq j\leq k-1\), where \(q^{(j)}(t)=\frac{d^{j}}{dt^{j}}q(t)\). The relationship between higher-order variational problems and optimal control problems of fully-actuated mechanical systems comes from the fact that Euler-Lagrange equations are represented by a second-order Newtonian system and fully-actuated mechanical control systems have the form \(F(q,\dot{q},\ddot{q})=u\), where \(u\) are the control inputs, as many as the dimension of the configuration manifold \(Q\). If \(C\) is a cost function of an optimal control problem given by \[\min_{(q(\cdot),u(\cdot))}\int_{0}^{T}C(q,\dot{q},u)dt,\] it can be rewritten as a second-order variational problem replacing \(u\) by the above expression. The notion of retraction map is an essential tool in different research areas like optimization theory, numerical analysis and interpolation (see [1] and references therein). A retraction map plays the role of generalizing the linear-search methods in Euclidean spaces to general manifolds. On a manifold with nonzero curvature to move along the tangent line does not guarantee that the motion stays on the manifold. The retraction map provides the tool to define the notion of moving in a direction of a tangent vector while staying on the manifold. That is why retraction maps have been widely used to construct numerical integrators of ordinary differential equations, since it allows us to move from a point and a velocity to one nearby point so that the differential equation can be discretized. In [4] the classical notion of retraction map used to approximate geodesics is extended to the new notion of discretization maps, that is rigorously defined to become a powerful tool to construct geometric integrators. Using the geometry of the tangent and cotangent bundles, the authors were able to tangently and cotangent lift the map so that these lifts inherit the same properties as the original one and they continue to be discretization maps. 
In particular, the cotangent lift of a discretization map is a natural symplectomorphism, which plays a key role for constructing symplectic integrators. It was further applied in [5] to the construction of numerical methods for optimal control problems from a Hamiltonian perspective. Geometric integrators for optimal control problems seen as second-order variational problems were studied in [13] (see also [14, 15]). The goal of this paper is to extend the notion of discretization map given in [4] to higher-order tangent bundles and construct symplectic integrators for optimal control problems of only fully-actuated mechanical systems. The paper is structured as follows. Section II introduces the necessary tools on differential geometry and the geometric formalism for the dynamics of mechanical systems. Section III describes optimal control problems as higher-order variational problems and the Lagrangian and Hamiltonian characterization of necessary conditions for optimality. In Section IV we introduce retraction maps and discretization maps as well as the cotangent lift of discretization maps which allows the construction of symplectic integrators. In Section V we define higher-order discretization maps and describe the construction of symplectic integrators for higher-order mechanical systems. We employ this construction in Section VI to construct geometric integrators for optimal control of mechanical systems. In particular, we study applications to path planning for obstacle avoidance of planar rigid bodies. ## II Background on Geometric Mechanics Let \(Q\) be a \(n\)-dimensional differentiable configuration manifold of a mechanical system with local coordinates \((q^{A})\), \(1\leq A\leq n\). Denote by \(TQ\) the tangent bundle. If \(T_{q}Q\) denotes the tangent space of \(Q\) at the point \(q\), then \(TQ:=\cup_{q\in Q}T_{q}Q\), with induced local coordinates \((q^{A},\dot{q}^{A})\). There is a canonical projection \(\tau_{Q}:TQ\to Q\), sending each vector \(v_{q}\) to the corresponding base point \(q\). Note that in coordinates \(\tau_{Q}(q^{A},\dot{q}^{A})=q^{A}\). The vector space structure of \(T_{q}Q\) makes possible to consider its dual space, \(T^{*}_{q}Q\), to define the cotangent bundle as \(T^{*}Q:=\cup_{q\in Q}T^{*}_{q}Q\), with local coordinates \((q^{A},p_{A})\). There is a canonical projection \(\pi_{Q}:T^{*}Q\to Q\), sending each momenta \(p_{q}\) to the corresponding base point \(q\). Note that in coordinates \(\pi_{Q}(q^{A},p_{A})=q^{A}\). Given a Lagrangian function \(L:TQ\to\mathbb{R}\), the corresponding Euler-Lagrange equations are \[\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{q}^{A}}\right)-\frac{\partial L }{\partial q^{A}}=0,\quad 1\leq A\leq n. \tag{1}\] Equations (1) determine a system of \(n\) second-order differential equations. If we assume that the Lagrangian is regular, i.e., the \((n\times n)\)-matrix \(\left(\frac{\partial^{2}L}{\partial\dot{q}^{A}\partial\dot{q}^{B}}\right)\), \(1\leq A,B\leq n\), is nonsingular, the local existence and uniqueness of solutions are guaranteed for any given initial condition by employing the Implicit Function Theorem. A Hamiltonian function \(H:T^{*}Q\to\mathbb{R}\) is described by the total energy of a mechanical system and leads to Hamilton's equations on \(T^{*}Q\), whose solutions are integral curves of the Hamiltonian vector field \(X_{H}\) taking values in \(T(T^{*}Q)\) associated with \(H\). 
Locally, \(X_{H}(q,p)=\left(\frac{\partial H}{\partial p},-\frac{\partial H}{\partial q}\right)\), that is, \[\dot{q}^{A}=\frac{\partial H}{\partial p_{A}},\quad\dot{p}_{A}=-\frac{\partial H}{\partial q^{A}},\quad 1\leq A\leq n. \tag{2}\] Equations (2) determine a set of \(2n\) first-order ordinary differential equations (see [6], for instance, for more details). A one-form \(\alpha\) on \(Q\) is a map assigning to each point \(q\) a cotangent vector at \(q\), that is, \(\alpha(q)\in T^{*}_{q}Q\). Cotangent vectors act linearly on vector fields according to \(\alpha(X)=\alpha_{i}X^{i}\in\mathbb{R}\) if \(\alpha=\alpha_{i}dq^{i}\) and \(X=X^{i}\frac{\partial}{\partial q^{i}}\). Analogously, a two-form or a \((0,2)\)-tensor field is a bilinear map that acts on a pair of vector fields to produce a number. A symplectic form \(\omega\) on a manifold \(Q\) is a \((0,2)\)-type tensor field that is skew-symmetric and non-degenerate, i.e., \(\omega(X,Y)=-\omega(Y,X)\) for all vector fields \(X\) and \(Y\), and if \(\omega(X,Y)=0\) for all vector fields \(X\), then \(Y\equiv 0\). The set of vector fields and the set of 1-forms on \(Q\) are denoted by \(\mathfrak{X}(Q)\) and \(\Omega^{1}(Q)\), respectively. The symplectic form induces a linear isomorphism \(\flat_{\omega}:\mathfrak{X}(Q)\to\Omega^{1}(Q)\), given by \(\langle\flat_{\omega}(X),Y\rangle=\omega(X,Y)\) for any vector fields \(X,Y\). The inverse of \(\flat_{\omega}\) will be denoted by \(\sharp_{\omega}\). As described in [24], the cotangent bundle \(T^{*}Q\) of a differentiable manifold \(Q\) is equipped with a canonical exact symplectic structure \(\omega_{Q}=-d\theta_{Q}\), where \(\theta_{Q}\) is the canonical 1-form on \(T^{*}Q\). In canonical bundle coordinates \((q^{A},p_{A})\) on \(T^{*}Q\), \(\theta_{Q}=p_{A}\,\mathrm{d}q^{A}\) and \(\omega_{Q}=\mathrm{d}q^{A}\wedge\mathrm{d}p_{A}\). Hamilton's equations can be intrinsically rewritten as \(\imath_{X_{H}}\omega_{Q}=\flat_{\omega_{Q}}(X_{H})=\mathrm{d}H\). Hamiltonian dynamics are characterized by the following two essential properties [20]: * Preservation of energy by the Hamiltonian function: \[0=\omega_{Q}(X_{H},X_{H})=dH(X_{H})=X_{H}(H)\,.\] * Preservation of the symplectic form: If \(\{\phi^{t}_{X_{H}}\}\) is the flow of \(X_{H}\), then the pull-back of the symplectic form by the flow is preserved, \((\phi^{t}_{X_{H}})^{*}\omega_{Q}=\omega_{Q}\). Recall that a pair \((Q,\omega_{Q})\) is called a symplectic manifold if \(Q\) is a differentiable manifold and \(\omega_{Q}\) is a symplectic 2-form. As a consequence, the restriction of \(\omega_{Q}\) to each \(q\in Q\) makes the tangent space \(T_{q}Q\) into a symplectic vector space. **Definition 1**: _Let \((Q_{1},\omega_{1})\) and \((Q_{2},\omega_{2})\) be two symplectic manifolds, and let \(\phi:Q_{1}\to Q_{2}\) be a smooth map. The map \(\phi\) is called symplectic if the symplectic forms are preserved: \(\phi^{*}\omega_{2}=\omega_{1}\). Moreover, it is a symplectomorphism if \(\phi\) is a diffeomorphism and \(\phi^{-1}\) is also symplectic._ Let \(Q_{1}\) and \(Q_{2}\) be \(n\)-dimensional manifolds and \(F:Q_{1}\to Q_{2}\) be a smooth map. The _tangent lift_ \(TF:TQ_{1}\to TQ_{2}\) of \(F\) is defined by \(TF(v_{q})=T_{q}F(v_{q})\in T_{F(q)}Q_{2}\) where \(v_{q}\in T_{q}Q_{1}\), and \(T_{q}F\) is the tangent map of \(F\) whose matrix is the Jacobian matrix of \(F\) at \(q\in Q_{1}\).
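To fix ideas, here is a coordinate example of the tangent lift (the map below is chosen purely for illustration): for \(F:\mathbb{R}^{2}\to\mathbb{R}^{2}\) given by \(F(q^{1},q^{2})=(q^{1}q^{2},(q^{2})^{2})\), the lift acts by the Jacobian on the velocity part, \[TF\big{(}(q^{1},q^{2}),(v^{1},v^{2})\big{)}=\big{(}q^{1}q^{2},\,(q^{2})^{2},\;q^{2}v^{1}+q^{1}v^{2},\;2q^{2}v^{2}\big{)},\] while the dual map introduced next acts on covectors by the transpose of the same Jacobian.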
As the tangent map \(T_{q}F\) is linear, the dual map \(T^{*}_{q}F\colon T^{*}_{F(q)}Q_{2}\to T^{*}_{q}Q_{1}\) is defined as follows: \[\langle(T^{*}_{q}F)(\alpha_{2}),v_{q}\rangle=\langle\alpha_{2},T_{q}F(v_{q}) \rangle\text{ for every }v_{q}\in T_{q}Q_{1}.\] Note that \((T^{*}_{q}F)(\alpha_{2})\in T^{*}_{q}Q_{1}\). **Definition 2**: _Let \(F:Q_{1}\to Q_{2}\) be a diffeomorphism. The vector bundle morphism \(\widetilde{F}:T^{*}Q_{1}\to T^{*}Q_{2}\) defined by \(\widetilde{F}=T^{*}F^{-1}\) is called the cotangent lift of \(F^{-1}\). In other words, \(\widetilde{F}(\alpha_{q})=T^{*}_{F(q)}F^{-1}(\alpha_{q})\) where \(\alpha_{q}\in T^{*}_{q}Q_{1}\). Obviously, \((T^{*}F^{-1})\circ(T^{*}F)=\mathrm{Id}_{T^{*}Q_{2}}\)._ ### _Higher-order tangent bundles_ The higher-order tangent bundle is essentially a generalization of the tangent space of the manifold \(Q\) to higher-order derivatives, when one interprets tangent vectors as the velocity vector of some curve in \(Q\). Analogously, an element of the \(k\)-th order tangent bundle can be defined as an equivalence relation identifying all curves that match up to \(k\)-th order derivative. Let \(c_{1},c_{2}:\mathbb{R}\to Q\) be two curves on \(Q\). Consider the equivalence relation \(\sim_{k}\) at \(0\in\mathbb{R}\) determined by the following two conditions: 1. \(c_{1}(0)=c_{2}(0)\); 2. \(c_{1}^{(i)}(0)=c_{2}^{(i)}(0)\) for all \(1\leqslant i\leqslant k\), where the notation \(c^{(i)}\) represents the \(i\)-th derivative of \(c\). In this case, we say that \(c_{1}\) and \(c_{2}\) are \(\sim_{k}\)-related at \(0\). Moreover, the equivalence class of \(c\) determined by \(\sim_{k}\) is called the \(k\)-jet of \(c\) and is represented by \(j_{0}^{(k)}c\). The set of all \(k\)-jets at \(0\) is denoted by \(J_{0}^{(k)}(\mathbb{R},Q)\) in some general contexts. But, from now on, it will be denoted by \(T^{(k)}Q\) the \(k\)-th order bundle of \(Q\). The \(k\)-th order bundle of \(Q\) is a smooth manifold (see [17]) and admits several fibrations: \(\pi_{r}^{k}:T^{(k)}Q\to T^{(r)}Q\) mapping \(j_{0}^{(k)}c\mapsto j_{0}^{(r)}c\) for \(0\leqslant r<k\). Observe that for \(r=1\), \(T^{(1)}Q=TQ\) and for \(r=0\), \(T^{(0)}Q=Q\). If \((q^{A})\) are local coordinates on the manifold \(Q\), then the \(k\)-jet \(j_{0}^{(k)}c\) is uniquely determined by the coordinates \((q^{A},q^{(0^{A}},\dots,q^{(k)^{A})})\), where \[q^{A}=c^{A}(0),\quad q^{(r)^{A}}=\frac{1}{r!}c^{(r)^{A}}(0),\quad 1\leqslant A \leqslant\dim Q.\] In a sense, the local coordinates for \(k\)-jets are provided by the Taylor polynomial of \(c\) at \(0\). Given a smooth map \(F:Q_{1}\to Q_{2}\), we define \(T^{(k)}F:T^{(k)}Q_{1}\to T^{(k)}Q_{2}\) by \(T^{(k)}F(j_{0}^{(k)}c)=j_{0}^{(k)}(F\circ c)\), for some curve \(c:\mathbb{R}\to Q_{1}\). ## III Variational Formulation of Optimal Control Problems for Mechanical Systems There are some problems in which the functional to be minimized depends on higher-order derivatives of a curve. This is the case in interpolating problems [21, 30]; in generation of trajectories for quadrotors [27, 26], or in a generalization of least square problem on Riemannian manifolds [25]. The goal of this paper is to use discretization maps obtained from the retraction maps to produce numerical algorithms for the solutions of optimal control problems for fully actuated mechanical systems. 
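The simplest instance, included here only as an illustration, is \(L=\frac{1}{2}\|\dot{q}\|^{2}\) on \(Q=\mathbb{R}^{n}\): the controlled Euler-Lagrange equations reduce to \(\ddot{q}=u\), the cost \(\int_{0}^{T}\|u\|^{2}\,dt\) becomes the second-order functional \(\int_{0}^{T}\|\ddot{q}\|^{2}\,dt\), and the associated second-order Euler-Lagrange equation is \(q^{(4)}=0\), whose solutions are the cubic polynomials matching the prescribed boundary positions and velocities.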
The prototype problem in this paper is the optimization of the cost functional \[\mathcal{J}=\int_{0}^{T}||u||^{2}\ dt\] subject to the controlled Euler-Lagrange equations describing the dynamics of standard mechanical systems, i.e., \[\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{q}}\right)-\frac{\partial L}{\partial q}=u.\] The cost functional may then be recast as the second-order functional \[\mathcal{J}=\int_{0}^{T}\left|\left|\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{q}}\right)-\frac{\partial L}{\partial q}\right|\right|^{2}\ dt=\int_{0}^{T}\mathcal{L}(q,\dot{q},\ddot{q})\ dt.\] Using the variational principle, necessary conditions for a trajectory to be optimal are given by the second-order Euler-Lagrange equations: \[\frac{d^{2}}{dt^{2}}\left(\frac{\partial\mathcal{L}}{\partial\ddot{q}}\right)-\frac{d}{dt}\left(\frac{\partial\mathcal{L}}{\partial\dot{q}}\right)+\frac{\partial\mathcal{L}}{\partial q}=0\,.\] We want to use the results in [4] (see also [5]) to produce geometric numerical methods for the optimal control problem under study. But first, we need to generalize these results to higher-order tangent bundles, as well as to study the Hamiltonian version of the optimality conditions. A second-order Lagrangian \(\mathcal{L}\) can be associated with a Lagrangian energy \(E_{\mathcal{L}}:T^{(3)}Q\to\mathbb{R}\) defined by \[E_{\mathcal{L}}(q,\dot{q},\ddot{q},q^{(3)})=\dot{q}\,p_{(0)}+\ddot{q}\,p_{(1)}-\mathcal{L}(q,\dot{q},\ddot{q}),\] where \(p_{(0)}\) and \(p_{(1)}\) are the generalized momenta given by \[p_{(0)}=\frac{\partial\mathcal{L}}{\partial\dot{q}}-\frac{d}{dt}\frac{\partial\mathcal{L}}{\partial\ddot{q}},\ \ p_{(1)}=\frac{\partial\mathcal{L}}{\partial\ddot{q}}\,.\] These momenta are conserved along solutions of the second-order Euler-Lagrange equations (see [17] for instance). As usual, the link between the Lagrangian and Hamiltonian formalisms is the corresponding Legendre transformation \(\text{Leg}_{\mathcal{L}}:T^{(3)}Q\to T^{*}(TQ)\) given by \[\text{Leg}_{\mathcal{L}}(q,\dot{q},\ddot{q},q^{(3)})=\left(q,\dot{q},p_{(0)},p_{(1)}\right).\] The associated Hamiltonian function \(H:T^{*}(TQ)\to\mathbb{R}\) is given by \[H(q,\dot{q},p_{(0)},p_{(1)})=E_{\mathcal{L}}\circ\text{Leg}_{\mathcal{L}}^{-1}(q,\dot{q},p_{(0)},p_{(1)}),\] and the second-order Hamilton equations are given by \[\dot{q}=\frac{\partial H}{\partial p_{(0)}},\quad\ddot{q}=\frac{\partial H}{\partial p_{(1)}},\quad\dot{p}_{(0)}=-\frac{\partial H}{\partial q},\quad\dot{p}_{(1)}=-\frac{\partial H}{\partial\dot{q}}.\] ## IV Discretization maps The first notion of retraction that appears in the literature can be found in [10] from a topological viewpoint. Later on, the notion of retraction map as defined below is used to obtain Newton's method on Riemannian manifolds [29, 3]. **Definition 3**: _A retraction map on a manifold \(Q\) is a smooth mapping \(R\) from the tangent bundle \(TQ\) onto \(Q\). Let \(R_{q}\) denote the restriction of \(R\) to \(T_{q}Q\); the following properties are satisfied:_ 1. \(R_{q}(0_{q})=q\)_, where_ \(0_{q}\) _denotes the zero element of the vector space_ \(T_{q}Q\)_._ 2.
_With the canonical identification_ \(T_{0_{q}}T_{q}Q\simeq T_{q}Q\)_,_ \(R_{q}\) _satisfies_ \[\mathrm{DR}_{q}(0_{q})=T_{0_{q}}R_{q}=\mathrm{Id}_{T_{q}Q},\] (3) _where \(\mathrm{Id}_{T_{q}Q}\) denotes the identity mapping on \(T_{q}Q\)._ The condition (3) is known as _local rigidity condition_ since, given \(\xi\in T_{q}Q\), the curve \(\gamma_{\xi}(t)=R_{q}(t\xi)\) has \(\xi\) as tangent vector at \(q\), i.e. \(\dot{\gamma}_{\xi}(t)=\langle DR_{q}(t\xi),\xi\rangle\) and, in consequence, \(\dot{\gamma}_{\xi}(0)=\mathrm{Id}_{T_{q}Q}(\xi)=\xi\). A typical example of a retraction map is the exponential map, \(\exp\), on Riemannian manifolds given in [18, Chapter 3.2]. Therefore, the image of \(\xi\) through the exponential map is a point on the Riemannian manifold \((Q,g)\) obtained by moving along a geodesic a length equal to the norm of \(\xi\) starting with the velocity \(\xi/\|\xi\|\), that is, \[\exp_{q}(\xi)=\sigma(\|\xi\|)\,\] where \(\sigma\) is the unit speed geodesic such that \(\sigma(0)=q\) and \(\dot{\sigma}(0)=\xi/\|\xi\|\). Next, we define a generalization of the retraction map in Definition 3 that allows a discretization of the tangent bundle of the configuration manifold leading to the construction of numerical integrators as described in [4]. Given a point and a velocity, we obtain two nearby points that are not necessarily equal to the initial base point. **Definition 4**: _A map \(R_{d}\colon U\subset TQ\to Q\times Q\) given by_ \[R_{d}(q,v)=(R^{1}(q,v),R^{2}(q,v)),\] where \(U\) is an open neighborhood of the zero section \(0_{q}\) of \(TQ\), defines a _discretization map on \(Q\)_ if it satisfies 1. \(R_{d}(q,0)=(q,q)\), 2. \(T_{0_{q}}R_{q}^{2}-T_{0_{q}}R_{q}^{1}\colon T_{0_{q}}T_{q}Q\simeq T_{q}Q\to T_{q}Q\) is equal to the identity map on \(T_{q}Q\) for any \(q\) in \(Q\), where \(R_{q}^{a}\) denotes the restrictions of \(R^{a}\), \(a=1,2\), to \(T_{q}Q\). Thus, the discretization map \(R_{d}\) is a local diffeomorphism from some neighborhood of the zero section of \(TQ\). If \(R^{1}(q,v)=q\), the two properties in Definition 4 guarantee that the both properties in Definition 3 are satisfied by \(R^{2}\). Thus, Definition 4 generalizes Definition 3. **Example 1**: _The mid-point rule on an Euclidean vector space can be recovered from the following discretization map: \(R_{d}(q,v)=\left(q-\frac{v}{2},q+\frac{v}{2}\right).\)_ ### _Cotangent lift of discretization maps_ As the Hamiltonian vector field takes value on \(TT^{*}Q\), the discretization map must be on \(T^{*}Q\), that is, \(R_{d}^{T^{*}}:TT^{*}Q\to T^{*}Q\times T^{*}Q\). Such a map is obtained by cotangently lifting a discretization map \(R_{d}\colon TQ\to Q\times Q\), so that the construction \(R_{d}^{T^{*}}\) is a symplectomorphism. In order to do that, we need the following three symplectomorphisms (see [4] and [5] for more details): * The cotangent lift of the diffeomorphism \(R_{d}\colon TQ\to Q\times Q\) as described in Definition 2. * The canonical symplectomorphism: \[\alpha_{Q}\colon T^{*}TQ\longrightarrow TT^{*}Q\] such that \(\alpha_{Q}(q,v,p_{q},p_{v})=(q,p_{v},v,p_{q})\). * The symplectomorphism between \((T^{*}(Q\times Q),\omega_{Q\times Q})\) and \((T^{*}Q\times T^{*}Q,\Omega_{12}:=pr_{2}^{*}\omega_{Q}-pr_{1}^{*}\omega_{Q})\): \[\Phi:T^{*}Q\times T^{*}Q\longrightarrow T^{*}(Q\times Q)\,\] given by \(\Phi(q_{0},p_{0};q_{1},p_{1})=(q_{0},q_{1},-p_{0},p_{1})\). Diagram in Fig. 
1 summarizes the construction process from \(R_{d}\) to \(R_{d}^{T^{*}}\): **Proposition 1**: _[_4_]_ _Let \(R_{d}\colon TQ\to Q\times Q\) be a discretization map on \(Q\). Then_ \[R_{d}^{T^{*}}=\Phi^{-1}\circ\widehat{R_{d}}\circ\alpha_{Q}\colon TT^{*}Q \to T^{*}Q\times T^{*}Q\] _is a discretization map on \(T^{*}Q\)._ **Corollary 1**: _[_4_]_ _The discretization map \(R_{d}^{T^{*}}=\Phi^{-1}\circ(TR_{d}^{-1})^{*}\circ\alpha_{Q}\colon T(T^{*}Q) \to T^{*}Q\times T^{*}Q\) is a symplectomorphism between \((T(T^{*}Q),\mathrm{d}_{T}\omega_{Q})\) and \((T^{*}Q\times T^{*}Q,\Omega_{12})\)._ **Example 2**: _On \(Q=\mathbb{R}^{n}\) the discretization map \(R_{d}(q,v)=\left(q-\frac{1}{2}v,q+\frac{1}{2}v\right)\) is cotangently lifted to_ \[R_{d}^{T^{*}}(q,p,\dot{q},\dot{p})=\left(q-\frac{1}{2}\,\dot{q},p-\frac{\dot{ p}}{2};\ q+\frac{1}{2}\,\dot{q},p+\frac{\dot{p}}{2}\right)\,.\] ## V Higher-order discretization maps In [4], the authors show how to lift a discretization map to the tangent and cotangent bundles. Next, we are going to see how to lift a discretization map to a one on a higher-order tangent bundle. Let \(R_{d}:TQ\to Q\times Q\) be a discretization map on \(Q\), then we can lift it to the map \[T^{(k)}R_{d}:T^{(k)}(TQ)\to T^{(k)}Q\times T^{(k)}Q,\] defined by \(T^{(k)}R_{d}(j_{0}^{(k)}\gamma)=j_{0}^{(k)}(R_{d}\circ\gamma)\) for \(\gamma:I\to TQ\). Consider the natural equivalence \(\Phi^{(k)}:T(T^{(k)}Q)\to T^{(k)}(TQ)\) defined using the following construction (see [23] or [12, Sec. V]): for each \(X\in T(T^{(k)}Q)\) there exists a curve \(c:\mathbb{R}\to Q\) such that \(X=j_{0}^{(1)}(j_{0}^{(k)}c)\). Then, we have that \[\Phi^{(k)}(X)=j_{0}^{(k)}(j_{0}^{(1)}c).\] The identification between the higher-order tangent bundles \(T^{(k)}(TQ)\cong T(T^{(k)}Q)\) allows to define the map \(R_{d}^{(k)}:T(T^{(k)}Q)\to T^{(k)}Q\times T^{(k)}Q\) given by \(R_{d}^{(k)}=T^{(k)}R_{d}\circ\Phi^{(k)}\). The following lemma will be useful in the proof of the Theorem below. **Lemma 1**: _Let \(F:M\to N\) be a smooth map and \(\gamma_{t}:\mathbb{R}\to M\) a smooth family of maps, i.e., \(\gamma:\mathbb{R}^{2}\to M\) defined by \(\gamma(t,s)=\gamma_{t}(s)\) is a smooth map. Then,_ \[\frac{d}{dt}\bigg{|}_{t=0}j_{0}^{(k)}(F\circ\gamma_{t})=(\Phi_{N}^{(k)})^{-1}j _{0}^{(k)}\left(\left.\frac{d}{dt}\right|_{t=0}(F\circ\gamma_{t})\right)\] _where \(\Phi_{N}^{(k)}:T(T^{(k)}N)\to T^{(k)}(TN)\) is the canonical identification._ As \[\frac{d}{dt}\bigg{|}_{t=0}j_{0}^{(k)}(F\circ\gamma_{t})=j_{0}^{(1)}(j_{0}^{(k) }(F\circ\gamma_{t}))\,,\] using the natural equivalence \(\Phi_{N}^{(k)}\) \[\frac{d}{dt}\bigg{|}_{t=0}j_{0}^{(k)}(F\circ\gamma_{t})=(\Phi_{N}^{(k)})^{-1} \left(j_{0}^{(k)}(j_{0}^{(1)}(F\circ\gamma_{t}))\right),\] the result follows. Now, we can prove that the map \(R_{d}^{(k)}\) is a discretization map on the higher-order bundle \(T^{(k)}Q\). **Theorem 1**: _Let \(R_{d}\) be a discretization map on \(Q\), the lift to the higher-order tangent bundle \(R_{d}^{(k)}:T(T^{(k)}Q)\to T^{(k)}Q\times T^{(k)}Q\) is a discretization map on \(T^{(k)}Q\)._ Let \(\Phi^{(k)}:T(T^{(k)}Q)\to T^{(k)}TQ\) be the diffeomorphism identifying both manifolds. First, we shall prove that given \(z\in T^{(k)}Q\), we have that \(R_{d}^{(k)}(0_{z})=(z,z)\), where \(0_{z}\) is the zero section of the bundle \(T(T^{(k)}Q)\to T^{(k)}Q\). Fig. 1: Definition of the cotangent lift of a discretization. 
The image of the zero section under \(\Phi^{(k)}\) is the \(k\)-th jet lift of the zero section on \(Q\), that is, \(\Phi^{(k)}(0_{z})=T^{(k)}\hat{0}(z)\), where \(\hat{0}:Q\to TQ\), as it is easily checked choosing natural coordinates on the higher-order tangent bundle. Thus, \(R_{d}^{(k)}(0_{z})=T^{(k)}R_{d}(j^{(k)}\hat{0}(z))\). Using the definition of the \(k\)-th jet lift \[T^{(k)}R_{d}(j^{(k)}\hat{0}(z))=j_{0}^{(k)}(R_{d}\circ\hat{0})(z).\] In addition, since \(R_{d}\) is a discretization map, we have that \(R_{d}\circ\hat{0}=\mathrm{Id}_{Q}\times\mathrm{Id}_{Q}\). Hence, \[R_{d}^{(k)}(0_{z}) =T^{(k)}(\mathrm{Id}_{Q}\times\mathrm{Id}_{Q})(z)\] \[=(\mathrm{Id}_{T^{(k)}Q}\times\mathrm{Id}_{T^{(k)}Q})(z)=(z,z).\] Next, let \(R_{d,z}^{(k)}\) be the restriction of \(R_{d}^{(k)}\) to the space \(T_{z}(T^{(k)}Q)\), where \(z\in T^{(k)}Q\). We can write \(R_{d,z}^{(k)}=T^{(k)}R_{d}\circ\Phi_{z}^{(k)}\), where \(\Phi_{z}^{(k)}\) is the restriction of \(\Phi^{(k)}\) to \(T_{z}(T^{(k)}Q)\). Moreover, if \(R_{d,z}^{(k),a}\) denotes the composition of \(R_{d,z}^{(k)}\) with the projection onto the ath-factor, \(a=1,2\), then we will prove that \(T_{0_{z}}R_{d,z}^{(k),2}(X_{z})-T_{0_{z}}R_{d,z}^{(k),1}(X_{z})=X_{z}\) for all \(X_{z}\in T_{z}(T^{(k)}Q)\) and \(z\in T^{(k)}Q\) under the identification \(T_{0_{z}}T_{z}(T^{(k)}Q)\equiv T_{z}(T^{(k)}Q)\). We have that \[\left(R_{d,z}^{(k),2}-R_{d,z}^{(k),1}\right)(X_{z})=\left(T^{(k)}R_{d}^{2}-T^{ (k)}R_{d}^{1}\right)\circ\Phi_{z}^{(k)}(X_{z})\,.\] Therefore, \[\frac{d}{dt}\bigg{|}_{t=0} \left(R_{d,z}^{(k),2}-R_{d,z}^{(k),1}\right)(tX_{z})\] \[=\left.\frac{d}{dt}\bigg{|}_{t=0}\left(T^{(k)}R_{d}^{2}-T^{(k)}R_ {d}^{1}\right)\circ\Phi_{z}^{(k)}(tX_{z})\right.\] \[=\left.\frac{d}{dt}\bigg{|}_{t=0}\,j_{0}^{(k)}\left((R_{d}^{2}-R_ {d}^{1})(tY(q))\right),\] where \(q=\pi_{0}^{k}(z)\), \(Y(q)\in TQ\) is a curve such that \(j_{0}^{(k)}Y=\Phi_{0}^{(k)}(X_{z})\). Moreover, if \(\pi_{0}^{k}\colon T^{(k)}TQ\to TQ\) and using \(\tau_{Q}\circ T\pi_{0}^{k}=\pi_{0}^{k}\circ\tau_{T^{(k)}Q}\), then \(Y(q)\in T_{q}Q\) and \(Y=\tilde{\pi}_{0}^{k}(\Phi^{(k)}(X_{z}))=T\pi_{0}^{k}(X_{z})\) because the diagram in Fig. 2 is commutative. Using Lemma 1 we have that \[\frac{d}{dt}\bigg{|}_{t=0} \left(R_{d,z}^{(k),2}-R_{d,z}^{(k),1}\right)(tX_{z})\] \[=(\Phi_{z}^{k})^{-1}j_{0}^{k}\left(\left.\frac{d}{dt}\bigg{|}_{t=0 }\,(R_{d,q}^{2}-R_{d,q}^{1})(tY(q))\right)\,.\] Using the second property from discretization maps, we obtain \[\frac{d}{dt}\bigg{|}_{t=0} \left(R_{d,z}^{(k),2}-R_{d,z}^{(k),1}\right)(tX_{z})\] \[=(\Phi_{z}^{k})^{-1}j_{0}^{k}(Y)=X_{z},\] where the last step follows from the definition of \(Y\). 
Fig. 2: Commutative diagram.
**Example 3**: _Consider the midpoint discretization map_ \[R_{d}(q,v)=\left(q-\frac{1}{2}v,q+\frac{1}{2}v\right).\] _The lift of the midpoint to \(T(TQ)\) is_ \[TR_{d}(q,v,\dot{q},\dot{v})=\left(q-\frac{1}{2}v,q+\frac{1}{2}v,\dot{q}-\frac{1}{2}\dot{v},\dot{q}+\frac{1}{2}\dot{v}\right)\] _and the second lift to \(T^{(2)}(TQ)\) is_ \[T^{(2)}R_{d}(q,v;\dot{q},\dot{v};\ddot{q},\ddot{v})=\] \[\left(q-\frac{1}{2}v,q+\frac{1}{2}v,\dot{q}-\frac{1}{2}\dot{v},\dot{q}+\frac{1}{2}\dot{v},\ddot{q}-\frac{1}{2}\ddot{v},\ddot{q}+\frac{1}{2}\ddot{v}\right)\] _Under the natural equivalence between higher-order tangent bundles, the map \(R_{d}^{(2)}:T(T^{(2)}Q)\to T^{(2)}Q\times T^{(2)}Q\) is given by_ \[R_{d}^{(2)}(q,\dot{q},\ddot{q};v,\dot{v},\ddot{v})=\] \[\left(q-\frac{1}{2}v,\dot{q}-\frac{1}{2}\dot{v},\ddot{q}-\frac{1}{2}\ddot{v};q+\frac{1}{2}v,\dot{q}+\frac{1}{2}\dot{v},\ddot{q}+\frac{1}{2}\ddot{v}\right)\,.\] _Then \(R_{d}^{(2)}(q,\dot{q},\ddot{q};0,0,0)=(q,\dot{q},\ddot{q};q,\dot{q},\ddot{q})\),_ \[T_{0_{(q,\dot{q},\ddot{q})}}R_{d,(q,\dot{q},\ddot{q})}^{(2),2}=\begin{bmatrix}1/2&0&0\\ 0&1/2&0\\ 0&0&1/2\end{bmatrix}\,,\] \[T_{0_{(q,\dot{q},\ddot{q})}}R_{d,(q,\dot{q},\ddot{q})}^{(2),1}=\begin{bmatrix}-1/2&0&0\\ 0&-1/2&0\\ 0&0&-1/2\end{bmatrix}\,.\] _Therefore, \(T_{0_{(q,\dot{q},\ddot{q})}}R_{d,(q,\dot{q},\ddot{q})}^{(2),2}-T_{0_{(q,\dot{q},\ddot{q})}}R_{d,(q,\dot{q},\ddot{q})}^{(2),1}=Id,\) and \(R_{d}^{(2)}\) is a discretization map under the suitable identifications._ **Example 4**: _Consider the initial point discretization map on the sphere \(R_{d}:T\mathbb{S}^{2}\to\mathbb{S}^{2}\times\mathbb{S}^{2}\)_ \[R_{d}(q,\xi)=\left(q,\frac{q+\xi}{\|q+\xi\|}\right).\] _The lift to \(T(T\mathbb{S}^{2})\) is the map \(TR_{d}\colon T(T\mathbb{S}^{2})\to T\mathbb{S}^{2}\times T\mathbb{S}^{2}\):_ \[TR_{d}(q,\xi,\dot{q},\dot{\xi})=\left(q,\dot{q},\frac{q+\xi}{\|q+\xi\|},\frac{\dot{q}+\dot{\xi}}{\|q+\xi\|}-\frac{\xi\cdot\dot{\xi}\,(q+\xi)}{\|q+\xi\|^{3}}\right)\] and the second lift to \(T^{(2)}(T\mathbb{S}^{2})\) is \[T^{(2)}R_{d}(q,\xi,\dot{q},\dot{\xi},\ddot{q},\ddot{\xi})=\left(TR_{d}(q,\xi,\dot{q},\dot{\xi}),\ddot{q},\frac{\ddot{q}+\ddot{\xi}}{\|q+\xi\|}\right.\] \[\qquad-\left.\frac{2\xi\cdot\dot{\xi}\,(\dot{q}+\dot{\xi})+(\dot{\xi}\cdot\dot{\xi}+\xi\cdot\ddot{\xi})(q+\xi)}{\|q+\xi\|^{3}}\right.\] \[\qquad\left.+\frac{3(\xi\cdot\dot{\xi})^{2}(q+\xi)}{\|q+\xi\|^{5}}\right).\] Composing with the natural identifications, we obtain a discretization map on \(T^{(2)}\mathbb{S}^{2}\). **Corollary 2**: _Let \(R_{d}^{(k)}:T(T^{(k)}Q)\to T^{(k)}Q\times T^{(k)}Q\) be a higher-order discretization map on \(T^{(k)}Q\). The cotangent lift \(\left(R_{d}^{(k)}\right)^{T^{*}}:T(T^{*}(T^{(k)}Q))\to T^{*}(T^{(k)}Q)\times T^{*}(T^{(k)}Q)\) is a discretization map on \(T^{*}(T^{(k)}Q)\)._ In Fig. 1 the discretization map at the bottom line can be replaced by the higher-order discretization \(R_{d}^{(k)}\), whose existence has been proved in Theorem 1. Such a map can be cotangently lifted as in Proposition 1 to obtain the following discretization map \(\left(R_{d}^{(k)}\right)^{T^{*}}:T\left(T^{*}(T^{(k)}Q)\right)\to T^{*}(T^{(k)}Q)\times T^{*}(T^{(k)}Q)\). 
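As a quick sanity check of Example 3, the following sketch (an illustration only, not part of the original development; it assumes \(Q=\mathbb{R}^{n}\) with points of \(T^{(2)}Q\) stored as flat coordinate arrays) evaluates the lifted midpoint map \(R_{d}^{(2)}\) and verifies numerically that \(R_{d}^{(2)}(z;0)=(z,z)\) and that the difference of the Jacobians of its two components at the zero section is the identity.

```python
import numpy as np

def Rd2(z, X):
    """Lifted midpoint discretization map R_d^(2) on T^(2)Q for Q = R^n.
    z = (q, dq, ddq) is a point of T^(2)Q, X = (v, dv, ddv) a tangent vector at z."""
    z, X = np.asarray(z, float), np.asarray(X, float)
    return z - 0.5 * X, z + 0.5 * X   # first and second components of R_d^(2)

n = 3                                  # dimension of Q (arbitrary choice)
rng = np.random.default_rng(0)
z = rng.normal(size=3 * n)             # a point (q, dq, ddq)

# Property 1: R_d^(2)(z, 0) = (z, z)
a, b = Rd2(z, np.zeros_like(z))
assert np.allclose(a, z) and np.allclose(b, z)

# Property 2: D R^(2),2(0_z) - D R^(2),1(0_z) = Id, checked by central differences
eps, dim = 1e-6, 3 * n
J1, J2 = np.zeros((dim, dim)), np.zeros((dim, dim))
for j in range(dim):
    e = np.zeros(dim); e[j] = eps
    a_p, b_p = Rd2(z, e)
    a_m, b_m = Rd2(z, -e)
    J1[:, j] = (a_p - a_m) / (2 * eps)
    J2[:, j] = (b_p - b_m) / (2 * eps)
print(np.allclose(J2 - J1, np.eye(dim)))   # True
```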
### _Geometric integrators on the higher-order tangent bundle_ The framework for the construction of geometric integrators is established by Proposition 5.1 in [4], which reads: **Proposition 2**: _If \(R_{d}\) is a discretization map on \(Q\) and \(H:T^{*}Q\to\mathbb{R}\) is a Hamiltonian function, then the equation_ \[(R_{d}^{T^{*}})^{-1}(q_{0},p_{0},q_{1},p_{1})=\sharp_{\omega}\left(h\,\mathrm{d}H\left[\tau_{T^{*}Q}\circ(R_{d}^{T^{*}})^{-1}(q_{0},p_{0},q_{1},p_{1})\right]\right)\] _written for the cotangent lift of \(R_{d}\) is a symplectic integrator._ Here \(\sharp_{\omega}\) denotes the inverse of the musical isomorphism defined by the canonical symplectic form, so that \(\sharp_{\omega}(\mathrm{d}H)=X_{H}\) is the Hamiltonian vector field. The previous proposition adapts perfectly to our case since a higher-order Lagrangian has a corresponding Hamiltonian function on \(T^{*}(T^{(k)}Q)\). The cotangent lift in Proposition 1 can be replaced by the higher-order cotangent lift in Corollary 2. As a result, we have constructed a symplectic integrator for the Hamiltonian version of the higher-order dynamics. ## VI Application to Optimal Control problems Suppose that on \(Q=\mathbb{R}^{n}\) we have the optimal control problem with cost functional \[\mathcal{J}=\int_{0}^{T}\frac{1}{2}||u||^{2}\ dt\] subject to the controlled Euler-Lagrange dynamics \(\ddot{q}=u\). This problem can be recast as the second-order variational problem \(\mathcal{J}=\int_{0}^{T}\frac{1}{2}||\ddot{q}||^{2}\ dt\) with the second-order Lagrangian \(\mathcal{L}=\frac{1}{2}||\ddot{q}||^{2}\) on \(T^{(2)}Q\). A necessary condition for a trajectory to be optimal is that it fulfills the second-order Euler-Lagrange equations, which in this case give the spline equations \(q^{(4)}=0\). However, as described in Section III, the Hamiltonian for this second-order Lagrangian system is defined on \(T^{*}TQ\): \[H(q,\dot{q},\hat{p}_{(0)},\hat{p}_{(1)})=\frac{1}{2}\hat{p}_{(1)}^{2}+\hat{p}_{(0)}\dot{q}.\] A discretization map on \(T^{*}TQ=T^{*}(T^{(1)}Q)\) is obtained by cotangently lifting a first-order discretization map on \(TQ\), which corresponds to the tangent lift of a discretization map on \(Q\) as defined in [4]. As in Example 3, the midpoint discretization map \(R_{d}(q,\dot{q})=\left(q-\frac{1}{2}\dot{q},q+\frac{1}{2}\dot{q}\right)\) is used to define \(R_{d}^{(1)}:T(TQ)\to TQ\times TQ\). 
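The explicit update generated by this choice is worked out in the next subsection. As an illustration only (a minimal sketch under the assumption \(Q=\mathbb{R}\), with NumPy and a fixed-point iteration; not code from the paper), the integrator produced by the cotangent-lifted midpoint map is the implicit midpoint rule applied to the Hamiltonian vector field of \(H=\frac{1}{2}\hat{p}_{(1)}^{2}+\hat{p}_{(0)}\dot{q}\):

```python
import numpy as np

def hamiltonian_vf(state):
    """X_H for H = 0.5*p1**2 + p0*qdot on T*TQ with coordinates (q, qdot, p0, p1)."""
    q, qdot, p0, p1 = state
    return np.array([qdot,    # dq/dt    =  dH/dp0
                     p1,      # dqdot/dt =  dH/dp1
                     0.0,     # dp0/dt   = -dH/dq
                     -p0])    # dp1/dt   = -dH/dqdot

def midpoint_step(state, h, iters=50):
    """Implicit midpoint step x1 = x0 + h * X_H((x0 + x1)/2), solved by fixed-point iteration."""
    x0 = np.asarray(state, float)
    x1 = x0.copy()
    for _ in range(iters):
        x1 = x0 + h * hamiltonian_vf(0.5 * (x0 + x1))
    return x1

# cubic-spline check: the exact flow gives q(t) with q'''' = 0
x = np.array([0.0, 1.0, 2.0, 0.5])   # (q0, qdot0, p0, p1), arbitrary initial data
h, N = 0.01, 400
for _ in range(N):
    x = midpoint_step(x, h)
print(x)   # p0 stays constant and p1 changes linearly, matching the explicit update below
```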
The first-order cotangent lift of the midpoint on \(T^{*}(TQ)\) is a discretization map as proved in Corollary 2: \[\left(R_{d}^{(1)}\right)^{T^{*}}(q,\dot{q},\hat{p}_{(0)},\hat{p}_{(1)};\dot{q},\ddot{q},\dot{\hat{p}}_{(0)},\dot{\hat{p}}_{(1)})=\] \[\left(q-\frac{1}{2}\,\dot{q},\dot{q}-\frac{1}{2}\,\ddot{q},\hat{p}_{(0)}-\frac{\dot{\hat{p}}_{(0)}}{2},\hat{p}_{(1)}-\frac{\dot{\hat{p}}_{(1)}}{2};\right.\] \[\left.q+\frac{1}{2}\,\dot{q},\dot{q}+\frac{1}{2}\,\ddot{q},\hat{p}_{(0)}+\frac{\dot{\hat{p}}_{(0)}}{2},\hat{p}_{(1)}+\frac{\dot{\hat{p}}_{(1)}}{2}\right)\,.\] As the Hamiltonian vector field takes values in \(T(T^{*}TQ)\), by Proposition 2, \(\left(R_{d}^{(1)}\right)^{T^{*}}\) generates the following symplectic numerical scheme on \(T^{*}TQ\): \[\frac{q_{1}-q_{0}}{h}=\frac{\dot{q}_{1}+\dot{q}_{0}}{2},\quad\frac{\dot{q}_{1}-\dot{q}_{0}}{h}=\frac{\hat{p}_{(1)1}+\hat{p}_{(1)0}}{2},\] \[\frac{\hat{p}_{(0)1}-\hat{p}_{(0)0}}{h}=0,\quad\frac{\hat{p}_{(1)1}-\hat{p}_{(1)0}}{h}=-\frac{\hat{p}_{(0)1}+\hat{p}_{(0)0}}{2}.\] Working out the expressions we obtain: \[q_{1}=q_{0}+h\dot{q}_{0}+\frac{h^{2}}{2}\hat{p}_{(1)0}-\frac{h^{3}}{4}\hat{p}_{(0)0},\] \[\dot{q}_{1}=\dot{q}_{0}+h\hat{p}_{(1)0}-\frac{h^{2}}{2}\hat{p}_{(0)0},\] \[\hat{p}_{(0)1}=\hat{p}_{(0)0},\quad\hat{p}_{(1)1}=\hat{p}_{(1)0}-h\hat{p}_{(0)0}.\] ### _Obstacle avoidance problem_ The following application is an optimal control problem with obstacle avoidance, which is usually cast as a second-order variational problem of the form \[\int_{0}^{T}\left(\frac{1}{2}||\ddot{q}||^{2}+V(q)\right)\ dt\] (see [7, 8, 19]). The second-order Lagrangian is in this case \(\mathcal{L}=\frac{1}{2}||\ddot{q}||^{2}+V(q)\), and a necessary condition for a trajectory to be optimal is the fulfilment of the Euler-Lagrange equations, which in this case form the fourth-order system \(q^{(4)}+\nabla V(q)=0\). The Hamiltonian for this second-order Lagrangian system is \[H(q,\dot{q},\hat{p}_{(0)},\hat{p}_{(1)})=\frac{1}{2}\hat{p}_{(1)}^{2}+\hat{p}_{(0)}\dot{q}-V(q).\] The associated symplectic method will be \[\frac{q_{1}-q_{0}}{h}=\frac{\dot{q}_{1}+\dot{q}_{0}}{2},\quad\frac{\dot{q}_{1}-\dot{q}_{0}}{h}=\frac{\hat{p}_{(1)1}+\hat{p}_{(1)0}}{2},\] \[\frac{\hat{p}_{(0)1}-\hat{p}_{(0)0}}{h}=\nabla V\left(\frac{q_{1}+q_{0}}{2}\right),\] \[\frac{\hat{p}_{(1)1}-\hat{p}_{(1)0}}{h}=-\frac{\hat{p}_{(0)1}+\hat{p}_{(0)0}}{2}.\] ### _Obstacle avoidance for a planar rigid body_ In this section suppose \(Q=SE(2)\), that all maps are considered in a local coordinate chart with coordinates \(q=(x,y,\theta)\), and that the artificial potential has the form \[V(x,y,\theta)=\frac{\tau}{x^{2}+y^{2}-r^{2}}.\] We simulate the optimal trajectory of the previous problem using \(\tau=1\times 10^{-20}\) and \(r=1\), taking \(N=400\) steps with step-size \(h=0.01\). The norm is measured with the Euclidean metric. ## VII Conclusions & Further applications In this paper we have shown how to obtain discretization maps on higher-order tangent bundles by lifting discretization maps on the base manifold. Furthermore, we have presented some simple examples of higher-order discretization maps and simple applications to the construction of numerical integrators for optimal control problems. However, as we describe below, the range of applications still has much to explore. 
### _Numerical methods for splines on the sphere_ Given a Riemannian manifold \((Q,g)\) and the associated exponential map \(\text{exp}_{q}:T_{q}Q\to Q\), the following map \[R_{d}(q,\xi)=\left(\text{exp}_{q}(-\xi/2),\text{exp}_{q}(\xi/2)\right)\] is a discretization map because it satisfies the properties in Definition 4. An example of discretization maps that can be associated with the exponential map is, for instance, on the sphere \(S^{2}\) with the Riemannian metric induced by the restriction of the standard metric on \(\mathbb{R}^{3}\). The exponential map is given by \[\text{exp}_{q}(\xi)=\cos(\|\xi\|)\,q+\sin(\|\xi\|)\,\frac{\xi}{\|\xi\|},\qquad \xi\in T_{q}S^{2}\,. \tag{4}\] Higher-order discretization maps can be used in the problem of finding higher-order Riemannian polynomials, defined in [28, 16, 25, 30] as the critical curves of the higher-order functional \[\int_{0}^{T}\frac{1}{2}\langle\frac{D^{k}\gamma}{dt^{k}},\frac{D^{k}\gamma}{ dt^{k}}\rangle\ dt,\] where \(\frac{D^{k}\gamma}{dt^{k}}\) denotes \(k\)-th covariant derivative. In future work, we will apply the previous construction to obtain higher-order geometric integrators to numerically obtain Riemannian polynomials. ### _Discretization maps for systems on Lie groups_ Optimal control problems in Lie groups are extremely important because of the applications in robotics. Using the left-trivialized tangent bundle to have the identification \(TG\approx G\times\mathfrak{g}\), the exponential map can be, for instance, used for defining a discretization map on the Lie group: \[R_{d}(g,\xi)=\left(g\cdot\exp(-\xi/2),g\cdot\exp(\xi/2)\right).\] In this scenario, higher-order lifts of \(R_{d,g}\) are associated with higher-order derivatives of the map \(R_{d,g}:\mathfrak{g}\to G\times G\). Then, we might generate geometric integrators for the problem (VII-A) on Lie Groups.
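Returning to the spline construction on the sphere above, the following sketch (an illustration only, not part of the paper; NumPy only) evaluates the geodesic-midpoint discretization map \(R_{d}(q,\xi)=\left(\text{exp}_{q}(-\xi/2),\text{exp}_{q}(\xi/2)\right)\) directly from the exponential map in (4):

```python
import numpy as np

def exp_sphere(q, xi):
    """Exponential map on the unit sphere S^2, eq. (4); xi must be tangent at q."""
    nrm = np.linalg.norm(xi)
    if nrm < 1e-15:
        return q
    return np.cos(nrm) * q + np.sin(nrm) * xi / nrm

def Rd_sphere(q, xi):
    """Geodesic-midpoint discretization map R_d(q, xi) = (exp_q(-xi/2), exp_q(xi/2))."""
    return exp_sphere(q, -0.5 * xi), exp_sphere(q, 0.5 * xi)

q = np.array([0.0, 0.0, 1.0])            # north pole
xi = np.array([0.3, -0.1, 0.0])          # tangent vector (orthogonal to q)
a, b = Rd_sphere(q, xi)
print(np.linalg.norm(a), np.linalg.norm(b))   # both image points stay on the sphere
print(Rd_sphere(q, 0 * xi))                   # property 1: (q, q) at the zero section
```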
2309.12834
A functional central limit theorem for the K-function with an estimated intensity function
The $K$-function is arguably the most important functional summary statistic for spatial point processes. It is used extensively for goodness-of-fit testing and in connection with minimum contrast estimation for parametric spatial point process models. It is thus pertinent to understand the asymptotic properties of estimates of the $K$-function. In this paper we derive the functional asymptotic distribution for the $K$-function estimator. Contrary to previous papers on functional convergence we consider the case of an inhomogeneous intensity function. We moreover handle the fact that practical $K$-function estimators rely on plugging in an estimate of the intensity function. This removes two serious limitations of the existing literature.
Anne Marie Svane, Christophe Biscio, Rasmus Waagepetersen
2023-09-22T12:46:09Z
http://arxiv.org/abs/2309.12834v1
# A functional central limit theorem for the \(K\)-function with an estimated intensity function ###### Abstract The \(K\)-function is arguably the most important functional summary statistic for spatial point processes. It is used extensively for goodness-of-fit testing and in connection with minimum contrast estimation for parametric spatial point process models. It is thus pertinent to understand the asymptotic properties of estimates of the \(K\)-function. In this paper we derive the functional asymptotic distribution for the \(K\)-function estimator. Contrary to previous papers on functional convergence we consider the case of an inhomogeneous intensity function. We moreover handle the fact that practical \(K\)-function estimators rely on plugging in an estimate of the intensity function. This removes two serious limitations of the existing literature. _Keywords:_ estimated intensity; functional central limit theorem; goodness-of-fit test; inhomogeneous \(K\)-function; point processes; Ripley's \(K\)-function _2020 Mathematics Subject Classification:_ 60F17; 60G55; 60F05 ## 1 Introduction Ripley's \(K\)-function [22] is the most popular summary of the second moment structure of a spatial point process. Asymptotic properties of the \(K\)-function are of interest when the \(K\)-function is used for goodness-of-fit test [14] of a proposed point process model or for deriving asymptotic properties of parameter estimates obtained from minimum contrast estimating functions based on the \(K\)-function [15, 12, 25]. A complete characterization of the functional asymptotic distribution of the \(K\)-function for wide classes of stationary Gibbs point processes and stationary so-called conditionally \(m\)-dependent point processes was obtained in [5]. The results were derived assuming known intensity and hence applicable for goodness-of-fit testing for a specific type of model with a given intensity. It is, however, in practice often desired to test the goodness-of-fit of a given type of model considering the intensity an unknown parameter to be estimated. Then the asymptotic distribution of the \(K\)-function needs to be adjusted for the effect of replacing the true intensity by an estimate. Functional convergence of statistics related to the \(K\)-function for both known and unknown intensity was considered in [14]. However, the approach in this paper relied heavily on the setting of a stationary Poisson process. Assuming a constant intensity is often restrictive and [2] extended the \(K\)-function to a wide class of point processes with inhomogeneous intensity functions. This allowed to study residual second order structure after filtering out large scale variations due to a non-constant intensity function. In practice an estimate of the intensity function is plugged in for the typically unknown intensity function. Parametric estimates can be obtained using for example composite likelihood or quasi-likelihood, e.g. [23, 24, 11, 25, 13]. Consistency and asymptotic normality of parameter estimates is studied in detail in these papers. For example, [25] used a framework of non-stationary \(\alpha\)-mixing spatial point processes and established joint asymptotic normality of composite likelihood estimates for the intensity and minimum-contrast estimates of clustering parameters where the contrast was based on the \(K\)-function with estimated intensity. However, [25] did not consider functional convergence. 
In this paper we address limitations of the existing literature by establishing a functional central limit theorem for the \(K\)-function in the case of an unknown possibly non-constant intensity function. Our main focus is establishing functional convergence when the intensity function is replaced by a parametric estimate. We assume as our starting point the availability of a multivariate central limit theorem for a random vector consisting of intensity function parameter estimates and estimates of the \(K\)-function with known intensity function for a range of spatial distances. This assumption is justified by the aforementioned references. ## 2 The \(K\)-function Let \(\mathcal{P}\subseteq\mathbb{R}^{d}\) be a simple point process with intensity function \(\rho(\cdot)>0\) and let \(A\subseteq\mathbb{R}^{d}\) be a set of positive and finite volume \(|A|\). Assuming further that \(\mathcal{P}\) is second-order intensity reweighted stationary [2], the \(K\)-function is defined for any \(r>0\) as \[K(r)=\frac{1}{|A|}\mathbb{E}\sum_{x\in\mathcal{P}\cap A}\sum_{y\in\mathcal{P}}\frac{\mathds{1}_{\{0<\|x-y\|\leq r\}}}{\rho(x)\rho(y)},\] where the right hand side does not depend on the particular choice of \(A\). The \(K\)-function can also be expressed as a Palm expectation \[K(r)=\mathbb{E}_{u}\sum_{y\in\mathcal{P}}\frac{\mathds{1}_{\{0<\|y-u\|\leq r\}}}{\rho(y)},\] for any \(u\in\mathbb{R}^{d}\), where \(\mathbb{E}_{u}\) denotes the Palm expectation given \(u\in\mathcal{P}\). These definitions agree with the definition of Ripley's \(K\)-function in the stationary case where \(\rho(\cdot)\) is constant. In practice \(\mathcal{P}\) is only observed inside a bounded observation window. Throughout this paper, we will consider a square observation window \(W_{n}=[-\frac{1}{2}n^{1/d},\frac{1}{2}n^{1/d}]^{d}\) of volume \(n\) and write \(\mathcal{P}_{n}=\mathcal{P}\cap W_{n}\). Assuming for a moment that the intensity function is known, an unbiased estimator of \(K(r)\) is given by \[\hat{K}_{n}(r)=\sum_{x\in\mathcal{P}_{n}}\sum_{y\in\mathcal{P}_{n}}\frac{\mathds{1}_{\{0<\|x-y\|\leq r\}}}{\rho(x)\rho(y)}e_{n}(x,y), \tag{1}\] where \(e_{n}(x,y)\) is an edge correction factor ensuring unbiasedness. A popular choice is \(e_{n}(x,y)=|W_{n}\cap W_{n,x-y}|^{-1}\) where \(W_{n,x-y}\) is \(W_{n}\) translated by \(x-y\). In practice \(\rho(\cdot)\) is unknown and must be estimated. We assume that \(\rho(\cdot)\) belongs to a parametric model \(\rho_{\beta}(\cdot)\), \(\beta\in\mathbb{R}^{p}\), so that \(\rho(\cdot)=\rho_{\beta^{*}}(\cdot)\) for some fixed parameter value \(\beta^{*}\in\mathbb{R}^{p}\). A common example is the log-linear model \(\rho_{\beta}(u)=\exp(z(u)^{\mathrm{T}}\beta)\) where for \(u\in\mathbb{R}^{d}\), \(z(u)\) is a \(p\)-dimensional covariate vector assumed to be observed within \(W_{n}\). Methods for obtaining consistent and asymptotically normal estimates of \(\beta\) include composite likelihood and quasi-likelihood, e.g. [23, 24, 11, 25, 13, 9]. We denote by \(\hat{\beta}_{n}\) an estimate of \(\beta\) obtained from an observation of \(\mathcal{P}\cap W_{n}\). Define \[\hat{K}_{n,\beta}(r)=\sum_{x\in\mathcal{P}_{n}}\sum_{y\in\mathcal{P}_{n}}\frac{\mathds{1}_{\{0<\|x-y\|\leq r\}}}{\rho_{\beta}(x)\rho_{\beta}(y)}e_{n}(x,y).\] Then \(\hat{K}_{n}(r)=\hat{K}_{n,\beta^{*}}(r)\). In practice we estimate \(K(r)\) by \(\hat{K}_{n,\hat{\beta}_{n}}(r)\). 
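To make the estimator (1) concrete, the following sketch (an illustration only, assuming \(d=2\), a square window with a corner at the origin, and the translation edge correction \(e_{n}(x,y)=|W_{n}\cap W_{n,x-y}|^{-1}\)) computes \(\hat{K}_{n,\beta}(r)\) from an observed point pattern and a fitted intensity function:

```python
import numpy as np

def K_hat(points, rho, r, side):
    """Estimator (1) on the square [0, side]^2 with translation edge correction.
    points: (m, 2) array; rho: callable returning the (fitted) intensity at a point."""
    m = len(points)
    rho_vals = np.array([rho(p) for p in points])
    total = 0.0
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            dx, dy = np.abs(points[i] - points[j])
            if dx**2 + dy**2 <= r**2:
                e = 1.0 / ((side - dx) * (side - dy))   # |W_n ∩ W_{n,x-y}|^{-1}
                total += e / (rho_vals[i] * rho_vals[j])
    return total

# toy example: homogeneous Poisson with intensity 100 on the unit square
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(rng.poisson(100), 2))
rho_hat = lambda p: len(pts) / 1.0          # constant intensity estimate
print(K_hat(pts, rho_hat, r=0.1, side=1.0), np.pi * 0.1**2)  # values should be comparable
```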
We will assume throughout that \[\hat{\beta}_{n}\text{ and }\hat{K}_{n}\text{ are consistent.} \tag{2}\] Moreover, we will often make the following assumption on \(\rho(\cdot)\): There is an \(\varepsilon>0\) and constants \(c_{1},\ldots,c_{4}>0\) such that for all \((\beta,x)\in B_{\varepsilon}(\beta^{*})\times\mathbb{R}^{d}\), \[c_{1}<\rho_{\beta}(x)<c_{2},\qquad\left\|\frac{\mathrm{d}}{\mathrm{d}\beta}\rho _{\beta}(x)\right\|<c_{3},\qquad\left\|\frac{\mathrm{d}^{2}}{\mathrm{d}\beta^ {\mathrm{T}}\mathrm{d}\beta}\rho_{\beta}(x)\right\|<c_{4}. \tag{3}\] Here \(B_{r}(x)\) denotes the Euclidean ball in \(\mathbb{R}^{d}\) of radius \(r\) centered at \(x\). We assume existence of joint intensity functions \(\rho^{(l)}\), \(l=2,3,4\) [e.g. 20] and define normalized joint intensities \(g^{(l)}(u_{1},\ldots,u_{l})=\rho^{(l)}(u_{1},\ldots,u_{l})/\prod_{i=1}^{l}\rho(u _{i})\), \(l=2,3,4\). In particular, \(g^{(2)}\) is the pair correlation function, which will simply be denoted by \(g\). Assuming that \(g\) is translation-invariant, \(g(u,v)=g(u-v)\), the \(K\)-function can be written as \(K(r)=\int_{B_{r}(0)}g(h)\mathrm{d}h\). We will also assume translation invariance of \(g^{(3)}\) and \(g^{(4)}\). We note that there are wide classes of point processes for which translation invariant normalized joint intensities exist such as log Gaussian Cox processes [21], inhomogeneous Neyman-Scott processes [24], and determinantal point processes [19]. We say that \(\mathcal{P}\) has fast decaying correlations if for any \(p,q\geq 1\) with \(p+q\leq 4\), there exists a function \(\phi_{p,q}:[0,\infty[\to[0,\infty[\) such that \(\int_{0}^{\infty}\phi_{p,q}(r)r^{d-1}\mathrm{d}r<\infty\) and \[|g^{(p+q)}(\mathbf{x},\mathbf{x}^{\prime})-g^{(p)}(\mathbf{x}))g^{(q)}( \mathbf{x}^{\prime})|\leq\phi_{p,q}(d(\mathbf{x},\mathbf{x}^{\prime})) \tag{4}\] where \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) are point configurations of cardinality \(p\) and \(q\), respectively, and \(d(\mathbf{x},\mathbf{x}^{\prime})=\min_{u\in\mathbf{x},v\in\mathbf{x}^{\prime }}\|u-v\|\), and we define \(g^{(1)}(u)=1\), \(u\in\mathbb{R}^{d}\). If \(\mathcal{P}\) has fast decaying correlations (4) it follows easily that \[\int_{\mathbb{R}^{d}}|g(h)-1|\mathrm{d}h<\infty, \tag{5}\] \[\sup_{x\in\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}|g^{(3)}(0,x,z)-g(x)|\mathrm{d}z<\infty \tag{6}\] and \[\sup_{u_{1},u_{2}\in\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}|g^{(4)}(0,u_{1},u_{4},u_{2}+u_{4})-g(u_{1})g(u_{2})|\mathrm{d}u_{4}<\infty. \tag{7}\] We finally need a condition of bounded normalized densities: \[g^{(k)}\text{ are bounded for }k=2,3,4. \tag{8}\] ## 3 Central limit theorem with estimated intensity function In this section we demonstrate how central limit theorems for \(K\)-functions with unknown intensity can be deduced from analoguous results with known intensity under suitable assumptions. We postpone a discussion of point process types satisfying the various assumptions to Section 4. ### Finite dimensional distributions Consider for any \(k\geq 1\) and \(0<r_{1}<r_{2}<\cdots<r_{k}<\infty\), the vector \[U_{n}=((\hat{\beta}_{n}-\beta^{*})^{\mathsf{T}},\hat{K}_{n}(r_{1})-K(r_{1}), \ldots,\hat{K}_{n}(r_{k})-K(r_{k}))^{\mathsf{T}}.\] Throughout this paper we assume asymptotic normality, \[|W_{n}|^{1/2}C_{n}U_{n}\to N(0,I_{p+k}), \tag{9}\] for a sequence of matrices \(C_{n}\) where \(C_{n}^{-1}(C_{n}^{-1})^{\mathsf{T}}/|W_{n}|\) approximates \(\mathsf{Var}\,U_{n}\). We discuss these assumptions in more detail in Section 4. 
The objective in this section is to obtain a central limit theorem for \[V_{n}=((\hat{\beta}_{n}-\beta^{*})^{\mathsf{T}},\hat{K}_{n,\hat{\beta}_{n}}(r_{1})-K(r_{1}),\ldots,\hat{K}_{n,\hat{\beta}_{n}}(r_{k})-K(r_{k}))^{\mathsf{T}}\] as well as consistency of \(\hat{K}_{n,\hat{\beta}_{n}}(r)\) for \(r\geq 0\). For this we employ a first order Taylor expansion to obtain \[\hat{K}_{n,\hat{\beta}_{n}}(r)-K(r)=H_{n,\tilde{\beta}_{n,r}}(r)(\hat{\beta}_{n}-\beta^{*})+\hat{K}_{n}(r)-K(r), \tag{10}\] where \(\|\tilde{\beta}_{n,r}-\beta^{*}\|\leq\|\hat{\beta}_{n}-\beta^{*}\|\) and \[H_{n,\beta}(r)=-\sum_{x\in\mathcal{P}_{n}}\sum_{y\in\mathcal{P}_{n}}\frac{\mathds{1}_{\{0<\|x-y\|\leq r\}}}{\rho_{\beta}(x)\rho_{\beta}(y)}\frac{\mathrm{d}}{\mathrm{d}\beta^{\mathsf{T}}}\log[\rho_{\beta}(x)\rho_{\beta}(y)]e_{n}(x,y). \tag{11}\] Let \(\tilde{B}_{n}\) denote the matrix with columns \(\tilde{\beta}_{n,r_{i}}\) and let \(H_{n}(\tilde{B}_{n})\) denote the \(k\times p\) matrix with rows \(H_{n,\tilde{\beta}_{n,r_{i}}}(r_{i})\), \(i=1,\ldots,k\). Further let \[A_{n}=\begin{bmatrix}I_{p}&0_{p\times k}\\ H_{n}(\tilde{B}_{n})&I_{k}\end{bmatrix}\] where \(0_{p\times k}\) is a \(p\times k\) matrix of zeros. Then, since \(A_{n}\) is invertible, \[|W_{n}|^{1/2}C_{n}A_{n}^{-1}V_{n}=|W_{n}|^{1/2}C_{n}U_{n}.\] Thus, by (9), \(|W_{n}|^{1/2}C_{n}A_{n}^{-1}V_{n}\) is asymptotically \(N(0,I_{p+k})\). This can be used to construct a joint confidence ellipsoid for \(((\beta^{*})^{\mathsf{T}},K(r_{1}),\ldots,K(r_{k}))^{\mathsf{T}}\). We can estimate \(H_{n,\tilde{\beta}_{n,r}}(r)\) by \(H_{n,\hat{\beta}_{n}}(r)\) according to the following proposition, which is also useful for establishing consistency of \(\hat{K}_{n,\hat{\beta}_{n}}\). **Proposition 3.1**.: _Assume that (2), (3) and (4) are satisfied. Then \(H_{n,\tilde{\beta}_{n,r}}(r)-H_{n,\beta^{*}}(r)\) and \(H_{n,\hat{\beta}_{n}}(r)-H_{n,\tilde{\beta}_{n,r}}(r)\) converge to zero in probability. Further, \(\bar{H}_{n}(r)=\mathbb{E}H_{n,\beta^{*}}(r)\) is bounded. Finally, \(\mathsf{Var}\,H_{n,\beta^{*}}(r)\) is \(O(n^{-1})\)._ Proof.: Define \(h_{x,y}(\beta)=\rho_{\beta}(x)\rho_{\beta}(y)\frac{\mathrm{d}}{\mathrm{d}\beta^{\mathsf{T}}}\log[\rho_{\beta}(x)\rho_{\beta}(y)]\) and let \(h_{i,x,y}(\beta)\) be the \(i\)th component of \(h_{x,y}\). Then \[h_{i,x,y}(\tilde{\beta}_{n,r})-h_{i,x,y}(\beta^{*})=h_{i,x,y}^{\prime}(\tilde{\beta}_{i,x,y,n})(\tilde{\beta}_{n,r}-\beta^{*})\] with \(h_{i,x,y}^{\prime}(\beta)=\frac{\mathrm{d}}{\mathrm{d}\beta^{\mathsf{T}}}h_{i,x,y}(\beta)\) and \(\|\tilde{\beta}_{i,x,y,n}-\beta^{*}\|\leq\|\tilde{\beta}_{n,r}-\beta^{*}\|\). Next, \(h_{x,y}(\tilde{\beta}_{n,r})-h_{x,y}(\beta^{*})=(\tilde{\beta}_{n,r}-\beta^{*})^{\mathsf{T}}h_{x,y}^{\prime}(\bar{B}_{x,y,n})^{\mathsf{T}}\) where \(h_{x,y}^{\prime}(\bar{B}_{x,y,n})\) is \(p\times p\) with rows \(h_{i,x,y}^{\prime}(\tilde{\beta}_{i,x,y,n})\). Further, \[\|H_{n,\tilde{\beta}_{n,r}}(r)-H_{n,\beta^{*}}(r)\|\leq\|\tilde{\beta}_{n,r}-\beta^{*}\|\sup_{x,y}\|h_{x,y}^{\prime}(\bar{B}_{x,y,n})\|\bigg{|}\sum_{x,y\in\mathcal{P}_{n}}\mathds{1}_{\{0<\|x-y\|\leq r\}}e_{n}(x,y)\bigg{|}. \tag{12}\] On the right hand side, the first factor is bounded by \(\|\hat{\beta}_{n}-\beta^{*}\|\) and hence converges to zero in probability. The second is bounded in probability since eventually \(\|\tilde{\beta}_{i,x,y,n}-\beta^{*}\|\leq\|\hat{\beta}_{n}-\beta^{*}\|\leq\varepsilon\) with high probability. 
The last factor is bounded by \(c_{2}^{2}\hat{K}_{n}(r)\), where \(c_{2}\) is the constant from (3), and this is bounded in probability by the consistency assumption on \(\hat{K}_{n}(r)\). The convergence of \(H_{n,\tilde{\beta}_{n,r}}(r)-H_{n,\tilde{\beta}_{n}}(r)\) follows from the previous result, the analogous result with \(\tilde{\beta}_{n,r}\) replaced by \(\hat{\beta}_{n}\), and the decomposition \[H_{n,\tilde{\beta}_{n,r}}(r)-H_{n,\tilde{\beta}_{n,r}}(r)=H_{n,\tilde{\beta}_{ n,r}}(r)-H_{n,\beta^{*}}(r)+H_{n,\beta^{*}}(r)-H_{n,\hat{\beta}_{n,r}}(r).\] By application of the Campbell formula, \[\|\bar{H}_{n}(r)\|\leq\int_{W_{n}^{2}}\frac{\mathds{1}_{\{0<\|x-y\|\leq r\}}} {\rho_{\beta^{*}}(x)\rho_{\beta^{*}}(y)|W_{n,x-y}|}g(x-y)\left\|\frac{\mathrm{d }}{\mathrm{d}\beta^{\mathsf{T}}}(\rho_{\beta}(x)\rho_{\beta}(y))\right\rvert_ {\beta=\beta^{*}}\bigg{\|}\mathrm{d}x\mathrm{d}y\leq cK(r)\] for some \(c>0\). That \(\mathsf{Var}\,H_{n,\beta^{*}}(r)\) is \(O(n^{-1})\) follows from Lemma 1 in [25]. **Corollary 3.2**.: _Assume that (2), (3) and (4) hold. Then \(\hat{K}_{n,\beta_{n}}(r)\) is consistent._ Proof.: Combining (10) with Proposition 3.1, we have \[\hat{K}_{n,\beta_{n}}-K(r)=H_{n,\beta_{n,r}}(r)(\hat{\beta}_{n}- \beta^{*})+\hat{K}_{n,\beta^{*}}(r)-K(r)+o_{P}(1)\] \[= H_{n,\beta^{*}}(r)(\hat{\beta}_{n}-\beta^{*})+\hat{K}_{n}(r)-K(r)+ o_{P}(1)\] \[= \bar{H}_{n}(r)(\hat{\beta}_{n}-\beta^{*})+(H_{n,\beta^{*}}(r)- \bar{H}_{n})(\hat{\beta}_{n}-\beta^{*})+\hat{K}_{n}(r)-K(r)+o_{P}(1)=o_{P}(1)\] by consistency of \((\hat{\beta}_{n}^{\mathsf{T}},\hat{K}_{n}(r))\), boundedness of \(\bar{H}_{n}(r)\), and since \(H_{n,\beta^{*}}(r)-\bar{H}_{n}\) is bounded in probability. ### Functional convergence Fix \(0\leq r_{0}<R<\infty\). We will prove functional convergence of the process \(\{\sqrt{n}(\hat{K}_{n,\hat{\beta}_{n}}(r)-K(r))\}_{r\in[r_{0},R]}\) with estimated intensity under the assumption that functional convergence holds for the process \(\{\sqrt{n}(\hat{K}_{n}(r)-K(r))\}_{r\in[r_{0},R]}\) with known intensity. Define \(\bar{H}_{n}(r)=\mathbb{E}H_{n,\beta^{*}}(r)\) as in Proposition 3.1. We make the following further assumptions: 1. The matrices \(C_{n}\) converge to a fixed invertible matrix \(C\). 2. The matrices \(\bar{H}_{n}(r)\) converge uniformly to a function \(H(r)\). 3. The process \(\{\sqrt{n}(\hat{K}_{n}(r)-K(r))\}_{r\in[r_{0},R]}\) converges in Skorokhod topology to a Gaussian process with a limiting covariance function \(c(s,t)\), \(s,t\geq 0\). Note that by Proposition 3.1, (ii) together with (2) and (3) implies that \(A_{n}\to A\) in probability where \(A\) is defined as \(A_{n}\), but with \(H_{n}(\tilde{B}_{n})\) replaced by the matrix with rows \(H(r_{i})\), \(i=1,\ldots,k\). Combining further with (9) we obtain \[\sqrt{n}V_{n}\to N(0,A\Sigma A^{T}) \tag{13}\] where \(\Sigma=C^{-1}(C^{-1})^{\mathsf{T}}\). Our main result is the following functional central limit theorem. The proof is postponed to Section 6. **Theorem 3.3**.: _Suppose that (2), (3), (4), (8), (9), and (i)-(iii) hold. 
Then \(\big{\{}\sqrt{n}(\hat{K}_{n,\hat{\beta}_{n}}(r)-K(r))\big{\}}_{r\in[r_{0},R]}\) converges in Skorokhod topology to a Gaussian process with limiting covariance function given by_ \[\tilde{c}(s,t)=H(s)\Sigma_{11}H(t)^{\mathsf{T}}+H(s)\Sigma_{2,t}+H(t)\Sigma_{ 2,s}+c(s,t), \tag{14}\] _where \(\Sigma_{11}\) is the limiting covariance matrix of \(\sqrt{n}(\hat{\beta}_{n}-\beta^{*})\) and \(\Sigma_{2,r}\) is the limiting covariance vector between \(\sqrt{n}(\hat{\beta}_{n}-\beta^{*})\) and \(\sqrt{n}\hat{K}_{n}(r)\), \(r\in[r_{0},R]\)._ ## 4 Point processes satisfying the assumptions The most complete characterization of asymptotic normality is obtained in the case of constant intensity, see Section 4.1. We consider further the case of the commonly used [e.g. 1] log-linear model for the intensity function in Section 4.2. ### Constant intensity In the case of constant intensity \(\rho_{\beta}(x)=\beta\), the standard estimator for \(\beta\) is \(\hat{\beta}_{n}=\#(\mathcal{P}_{n})/|W_{n}|\), where \(\#\) denotes cardinality, see e.g. [8]. Functional convergence with known intensity was established in [5] for the class of stationary conditionally \(m\)-dependent point processes having exponential decay of correlations and satisfying two extra conditions [5, Cond. **(M)**] and [5, Cond. **(R)**]. Exponential decay of correlations means that (4) holds for all \(p\) and \(q\) with \(\phi_{p,q}\) being an exponential function, see [7, Sec. 1.1] for the precise definition. Assumption (9) can be verified assuming only exponential decay of correlations and [5, Cond. **(M)**], and hence applies to all point processes discussed in [7, Sec. 2.2.2] (and also [7, Sec. 2.2.1]). Assuming additionally conditionally conditioned \(m\)-dependence, (i) can be verified when replacing [5, Cond. **(R)**] by the following: For any \(r_{0}\leq r_{1}<r_{2}\leq R\), define events \[F_{1} =\{\mathcal{P}_{(5\bar{R})^{d}}=\emptyset\}\] \[F_{2} =\ \Big{\{}\forall x,y\in\mathcal{P}_{(5\bar{R})^{d}}:x=y\vee\|x-y \|>r_{1}\Big{\}}\cap\Big{\{}\exists x,y\in\mathcal{P}_{(3\bar{R})^{d}}:0<\|x- y\|\leq r_{2}\Big{\}}\] \[F_{3} =\ \Big{\{}\mathcal{P}_{(5\bar{R})^{d}}\backslash\mathcal{P}_{(3 \bar{R})^{d}}=\emptyset\Big{\}}\cap\Big{\{}\#(\mathcal{P}_{(3\bar{R})^{d}})=1 \Big{\}}\] where \(\tilde{R}=\max\{m,R\}\). Then we require **(R1)**: \[\mathbb{E}\big{[}\min_{i\in\{1,2\}}P\big{(}\mathcal{P}_{(5\bar{R})^{d}}\in F_{i} \,|\,\sigma(\Lambda,\mathcal{P}\setminus W_{\bar{R}^{d}})\big{)}\big{]}>0,\] **(R2)**: \[\mathbb{E}\big{[}\min_{i\in\{1,3\}}P\big{(}\mathcal{P}_{(5\bar{R})^{d}}\in F_{i} \,|\,\sigma(\Lambda,\mathcal{P}\setminus W_{\bar{R}^{d}})\big{)}\big{]}>0,\] where \(\Lambda\) is the random measure from the definition of conditional \(m\)-dependence. Examples of conditionally \(m\)-dependent point processes satisfying all assumptions are log-Gaussian Cox processes with exponentially decaying covariance function and Matern cluster processes, see [6, Sec. 4]. Condition (9) can also be shown for the class of Gibbs processes considered in [5]. Recent developments [16, Thm. 3] (or [3, Thm. 4.11]) allow a generalization to Gibbs processes satisfying [16, Cond. **(A)**], which is more general than the assumption in [5], c.f. the discussion in [3, Rem. 3.5]. Positive definiteness of \(\Sigma\) can be obtained as in [5, Prop. 6.2] if the process satisfies **(R1)** and **(R2)** with \(\bar{R}\) larger than the interaction radius and \(\Lambda\) trivial. Verifying this is often straightforward for a given Papangelou intensity. 
The functional convergence (iii) established for the point processes considered in [5] generalizes to Gibbs processes satisfying [16, Cond. **(A)**] since fast decay of correlations can be derived from [3, Thm. 3.4], and condition **(M)** can be derived from [3, Lem. 2.6]. Condition (ii) is easily verified, noting that \[H_{n,\beta}(r)=-2\beta^{-1}\hat{K}_{n,\beta}(r),\qquad H(r)=-2(\beta^{*})^{-1 }K(r). \tag{15}\] The condition (i) follows from conditional \(m\)-dependence and **(R1)** and **(R2)**. We then obtain the following corollary. **Corollary 4.1**.: _For all point processes considered in [5] and Gibbs processes satisfying [16, Cond. **(A)**], if additionally_ **(R1)** _and_ **(R2)** _are satisfied, then \(\big{\{}\sqrt{n}\big{(}\hat{K}_{n,\beta_{n}}(r)-K(r)\big{)}\big{\}}_{r\in[r_{0 },R]}\) converges in Skorokhod topology to a centered Gaussian process with covariance structure given by (14)._ For any point process with fast decay of correlations, the limiting covariance with known intensity \(\Sigma\) can be computed from Campbell's formulae. The computations are omitted, but are very similar to those in Appendix A. This yields \[\lim_{n\to\infty}n\,\mathsf{Var}(\hat{\beta}_{n})= \beta^{2}\int_{\mathbb{R}^{d}}(g(x)-1)\mathrm{d}x+\beta,\] \[\lim_{n\to\infty}n\,\mathrm{Cov}(\hat{\beta}_{n},\hat{K}_{n}(r))= \beta\int_{\mathbb{R}^{2d}}(g^{(3)}(o,x,y)-g(x))\mathds{1}_{\{\|x \|\leq r\}}\mathrm{d}x\mathrm{d}y+2K(r),\] \[\lim_{n\to\infty}n\,\mathrm{Cov}(\hat{K}_{n}(r_{1}),\hat{K}_{n}(r_ {2}))= \int_{\mathbb{R}^{3d}}\Big{(}g^{(4)}(o,x,y,z)-g(x)g(y-z)\Big{)} \mathds{1}_{\{\|x\|\leq r_{1},\|y-z\|\leq r_{2}\}}\mathrm{d}x\mathrm{d}y \mathrm{d}z\] \[+\frac{4}{\beta}\int_{\mathbb{R}^{2d}}g^{(3)}(o,x,y)\mathds{1}_{ \{\|x\|\leq r_{1},\|y\|\leq r_{2}\}}\mathrm{d}x\mathrm{d}y\] \[+\frac{2}{\beta^{2}}\int_{\mathbb{R}^{d}}g(x)\mathds{1}_{\{\|x\| \leq r_{1}\wedge r_{2}\}}\mathrm{d}x. \tag{16}\] Note that all integrals converge due to the fast decay of correlations assumption. With unknown intensity, the limiting covariance structure is (14). Using this, we obtain from (16), \[\lim_{n\to\infty}n\,\mathrm{Cov}(\hat{K}_{n,\hat{\beta}_{n}}(r_{ 1}),\hat{K}_{n,\hat{\beta}_{n}}(r_{2}))=\lim_{n\to\infty}n^{-1}\,\mathrm{Cov}( \hat{K}_{n}(r_{1}),\hat{K}_{n}(r_{2}))\] \[-2\int_{\mathbb{R}^{2d}}\Big{(}g^{(3)}(o,x,y)-g(x)\Big{)}\left(K(r _{1})\mathds{1}_{\{\|x\|\leq r_{2}\}}+K(r_{2})\mathds{1}_{\{\|x\|\leq r_{1}\}} \right)\mathrm{d}x\mathrm{d}y\] \[+4K(r_{1})K(r_{2})\left(\int_{\mathbb{R}^{d}}(g(x)-1)\mathrm{d}x -\beta^{-1}\right). \tag{17}\] For Poisson processes in \(\mathbb{R}^{2}\), these formulas reduce to the covariance formulas given in [14]: \[\lim_{n\to\infty}n\,\mathrm{Cov}(\hat{K}_{n}(r_{1}),\hat{K}_{n}(r_ {2}))=2\pi(r_{1}\wedge r_{2})^{2}/\rho^{2}+4\pi^{2}r_{1}^{2}r_{2}^{2}/\rho,\] \[\lim_{n\to\infty}n\,\mathrm{Cov}(\hat{K}_{n,\hat{\beta}_{n}}(r_{ 1}),\hat{K}_{n,\hat{\beta}_{n}}(r_{2}))=2\pi(r_{1}\wedge r_{2})^{2}/\rho^{2}. \tag{18}\] For general point processes, the explicit covariance formulas are more complicated but may be evaluated numerically after plugging in estimates of the normalized joint intensities. For instance, in case of log-Gaussian Cox processes and Neyman-Scott processes, explicit parametric expresssions for the normalized joint intensities are available [21, 17] enabling parametric estimation of these. Note that by (18), in case of a Poisson process, it is actually beneficial to estimate the intensity since the asymptotic variance is smaller in the unknown intensity case. 
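The Poisson formulas in (18), and the variance reduction they imply, are easy to check numerically. The following sketch (an illustration only; it assumes \(d=2\), a homogeneous Poisson process on a square window, the standard intensity estimator \(\hat{\beta}_{n}=\#(\mathcal{P}_{n})/|W_{n}|\), and the translation edge correction) compares Monte Carlo variances of \(\hat{K}_{n}(r)\) and \(\hat{K}_{n,\hat{\beta}_{n}}(r)\) with the asymptotic values from (18):

```python
import numpy as np

rng = np.random.default_rng(2)
rho, side, r, nsim = 200.0, 2.0, 0.1, 200
area = side**2

def K_hat(pts, intensity):
    """K estimator (1) with translation edge correction on [0, side]^2, constant intensity."""
    d = pts[:, None, :] - pts[None, :, :]
    dx, dy = np.abs(d[..., 0]), np.abs(d[..., 1])
    close = (dx**2 + dy**2 <= r**2) & ~np.eye(len(pts), dtype=bool)
    e = 1.0 / ((side - dx) * (side - dy))
    return np.sum(close * e) / intensity**2

K_known, K_est = [], []
for _ in range(nsim):
    pts = rng.uniform(0, side, size=(rng.poisson(rho * area), 2))
    K_known.append(K_hat(pts, rho))              # true intensity plugged in
    K_est.append(K_hat(pts, len(pts) / area))    # estimated intensity plugged in

# asymptotic variances from (18), divided by n = |W_n| = area
v_known = (2 * np.pi * r**2 / rho**2 + 4 * np.pi**2 * r**4 / rho) / area
v_est = (2 * np.pi * r**2 / rho**2) / area
print(np.var(K_known), v_known)   # Monte Carlo and asymptotic values of similar magnitude
print(np.var(K_est), v_est)       # noticeably smaller when the intensity is estimated
```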
We expect this to hold for more general point processes. For example, for point processes satisfying \(g^{(3)}(0,x,y)\geq g(x)\) for all \(x,y\in\mathbb{R}^{d}\), the middle term in (17) gives a negative contribution to the variance. This holds for instance for Poisson processes, shot noise Cox processes and log-Gaussian Cox processes with non-negative covariance, see formulas for \(\rho^{(k)}\) in [10]. Moreover, if \(g\) is sufficiently close to \(1\), the last term in (17) is also negative, reducing further the point-wise variance when the intensity is estimated. ### Log-linear intensity The log-linear model \(\rho_{\beta}(u)=\exp(z(u)^{\mathsf{T}}\beta)\) is the default model when covariates \(z(u)\), \(u\in W_{n}\), are available for explaining variation in the intensity function. We consider here the case where \(\hat{\beta}_{n}\) is the first order composite likelihood estimate obtained by maximizing the likelihood function of a Poisson process with intensity function \(\rho_{\beta}\)[see 23, 24, 11, 25, 9, for theoretical and practical details]. Following for example [25], under certain conditions, \[|W_{n}|\bar{S}_{n}(\hat{\beta}_{n}-\beta^{*})=e_{n}(\beta^{*})+o_{P}(1)\] where \(\bar{S}_{n}\) is the normalized sensitivity \[\bar{S}_{n}=\frac{1}{|W_{n}|}\int_{W_{n}}z(u)z(u)^{\mathsf{T}}\exp(z(u)^{ \mathsf{T}}\beta^{*})\mathrm{d}u \tag{19}\] and \[e_{n}(\beta^{*})=\sum_{u\in\mathcal{P}_{n}\cap W_{n}}z(u)-\int_{W_{n}}z(u)\rho _{\beta^{*}}(u)\mathrm{d}u \tag{20}\] is the Poisson composite likelihood score. Let \(\Delta K=(\hat{K}_{n}(r_{i})-K(r_{i}))_{i=1}^{k}\) and \[\Sigma_{n}=\mathsf{Var}((|W_{n}|^{-1/2}e_{n}(\beta^{*})^{\mathsf{T}},|W_{n}|^{ 1/2}\Delta K^{\mathsf{T}})^{\mathsf{T}}).\] Then, e.g. assuming \(\alpha\)-mixing [25], \[\Sigma_{n}^{-1/2}(|W_{n}|^{-1/2}e_{n}(\beta^{*}),|W_{n}|^{1/2}\Delta K^{ \mathsf{T}})^{\mathsf{T}}\to N(0,I_{p+k}).\] Hence, letting \(B_{n}\) be block diagonal with blocks \(\bar{S}_{n}\) and \(I_{k}\), \(|W_{n}|^{1/2}(\Sigma_{n}/|W_{n}|)^{-1/2}B_{n}U_{n}\) converges to \(N(0,I_{p+k})\). We can thus take \(C_{n}=(\Sigma_{n}/|W_{n}|)^{-1/2}B_{n}\). The validity of assumption (i) depends on the behaviour of \(z\) on the infinite domain \(\mathbb{R}^{d}\). The normalized sensitivity \(\bar{S}_{n}\) is for example obviously a spatial average that has a limiting value \(\bar{S}\) when \(z\) is a realization of an ergodic process. In this situation, we can also show (Appendix A) under (5)-(7) that \(\bar{H}_{n}\) and \(\Sigma_{n}\) have limits \(H\) and \(\Sigma\). Hence \(C_{n}\) has a limit \(C\) too. Specifically, \(\bar{H}_{n,\beta^{*}}(r)\) converges to \(-2K(r)\bar{z}\) where \(\bar{z}=\lim_{n\to\infty}|W_{n}|^{-1}\int_{W_{n}}z(v)\mathrm{d}v\) and (ii) is satisfied. The invertibility of \(C\) remains an assumption. Having established (9) and (i), (iii) holds since the tightness proof from [5] goes through in the case of inhomogeneous point processes with exponential decay of correlations satisfying [5, Cond. **(M)**] and intensity function satisfying (3). Indeed, one must show Lemma 6.2, Lemma 7.2 and Lemma 7.3 of [5]. Lemmas 6.2 and Lemma 7.2 are proved in the exact same way using the lower bound on \(\rho_{\beta^{*}}\). Lemma 7.3 also follows in the same way by noting that the proof of [7, Thm. 1.11] carries over to the case of non-stationary processes. 
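As a concrete illustration of the first-order composite likelihood machinery (a sketch only; it assumes \(d=2\), a single covariate so that \(z(u)=(1,u_{1})^{\mathsf{T}}\) on the unit square, and a Riemann-sum approximation of the integrals in (19) and (20)), one can evaluate the score \(e_{n}(\beta)\) and the normalized sensitivity \(\bar{S}_{n}\), and solve \(e_{n}(\beta)=0\) by Newton's method to obtain \(\hat{\beta}_{n}\):

```python
import numpy as np

def z(u):
    """Covariate vector z(u) = (1, u_1)."""
    return np.array([1.0, u[0]])

def score_and_sensitivity(points, beta, grid=100, side=1.0):
    """Poisson composite likelihood score e_n(beta), eq. (20), and normalized
    sensitivity S_n, eq. (19), approximated by a Riemann sum over W_n."""
    xs = (np.arange(grid) + 0.5) * side / grid
    cell = (side / grid) ** 2
    e = sum(z(p) for p in points)
    S = np.zeros((2, 2))
    for x in xs:
        for y in xs:
            zu = z((x, y))
            w = np.exp(zu @ beta) * cell
            e = e - zu * w
            S = S + np.outer(zu, zu) * w
    return e, S / side**2

# toy data: inhomogeneous Poisson with intensity exp(4 + 1.5*x) on the unit square, by thinning
rng = np.random.default_rng(3)
lam_max = np.exp(4 + 1.5)
cand = rng.uniform(0, 1, size=(rng.poisson(lam_max), 2))
pts = cand[rng.uniform(size=len(cand)) < np.exp(4 + 1.5 * cand[:, 0]) / lam_max]

beta = np.array([np.log(len(pts)), 0.0])   # start from the homogeneous fit
for _ in range(20):                        # Newton: beta += (|W_n| S_n)^{-1} e_n; |W_n| = 1 here
    e, S = score_and_sensitivity(pts, beta)
    beta = beta + np.linalg.solve(S, e)
print(beta)                                # close to (4, 1.5) up to sampling error
```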
## 5 Simulation study To emphasize the importance of taking into account the effect of estimating the intensity, we conduct a small scale simulation study for a Poisson process and a Matern cluster process on a sequence of increasing square observation windows of sidelengths 1, 2, 4 and 8. We use an intensity of 200 for both types of point processes. As in [5] we consider a Kolmogorov-Smirnov goodness-of-fit test statistic \(\sup_{r\in[0,R]}|\hat{K}_{n,\hat{\beta}_{n}}(r)-\pi r^{2}|\) for the null hypothesis of a homogeneous Poisson process. We choose \(R=0.05\) to reflect a reasonable upper lag on the unit square window. For the Matern process we use a parent intensity of 25, on average 8 offspring for each parent, and a uniform dispersal density on a disc of radius 0.2. Table 1 reports rejection probabilities when the Kolmogorov-Smirnov test is rejected on the nominal 5% level. The rejection probabilities are computed over 10000 simulations where for each simulation, the critical value of the test is determined from the asymptotic distribution of \(\sqrt{n}(\hat{K}_{n,\hat{\beta}_{n}}-\pi r^{2})\) under the null distribution. Here we consider both results obtained with the asymptotic variance formula for estimated intensity and the asymptotic variance formula pretending the estimated intensity is the true intensity (see (18)). In both asymptotic variance formulas, the unknown intensity is replaced by the estimated intensity. In case of the Poisson process, the actual levels of the test are close to the nominal level for all window sizes when the correct asymptotic variance is used. However, assuming erroneously known intensity, the rejection probabilities are far too small, completely invalidating the goodness-of-fit test. For the Matern process, using the variance formula for known intensity leads to a loss of power of the goodness of fit test. ## 6 Proof of Theorem 3.3 We will need the following uniform version of Proposition 3.1. **Lemma 6.1**.: _Under the assumptions (2), (3), (4), (8), \(\|H_{n,\beta^{*}}-\bar{H}_{n}\|_{\infty}\) goes to zero in probability. Moreover, \(\bar{H}_{n}\) is continuous, and so is \(H\) if (ii) is satisfied._ Proof of Lemma 6.1.: Let \(\delta>0\) be given and let \(\eta>0\) be a parameter to be chosen later. Choose \(r_{0}<r_{1}<...<r_{k}=R\) such that \(|r_{i}-r_{i+1}|\leq\eta\) and \(k\eta\leq 2(R-r_{0})\). Then \[P\Big{(}\sup_{r\in[r_{0},R]}\|H_{n,\beta^{*}}(r)-\bar{H}_{n}(r) \|\geq\delta\Big{)}\leq P\Big{(}\sup_{i=0,...,k-1}\sup_{r\in[r_{i},r_{i+1}]}\|H_{n,\beta^{*}}( r)-H_{n,\beta^{*}}(r_{i})\|\geq\delta/3\Big{)} \tag{21}\] \[+P\Big{(}\sup_{i=0,...,k}\|H_{n,\beta^{*}}(r_{i})-\bar{H}_{n}(r_ {i})\|\geq\delta/3\Big{)}\] (22) \[+P\Big{(}\sup_{i=0,...,k-1}\sup_{r\in[r_{i},r_{i+1}]}\|\bar{H}_{n }(r_{i})-\bar{H}_{n}(r)\|\geq\delta/3\Big{)}. \tag{23}\] With \(r_{i}<r\), \[H_{n,\beta^{*}}(r)-H_{n,\beta^{*}}(r_{i})=-\sum_{x,y\in\mathcal{P}_{n}}\frac {\mathds{1}_{A(r,r_{i})}(x-y)}{|W_{n}\cap W_{n,x-y}|\rho_{\beta^{*}}(x)^{2} \rho_{\beta^{*}}(y)^{2}}\frac{\mathrm{d}}{\mathrm{d}\beta}(\rho_{\beta^{*}}(x )\rho_{\beta^{*}}(y)) \tag{24}\] where \(A(r,t)\) for \(r<t\) denotes the annulus \(B_{t}(0)\backslash B_{r}(0)\) with volume \(|A(r,t)|\leq C_{1}|t-r|\) where \begin{table} \begin{tabular}{l|c|c|c|c} Window side length & 1 & 2 & 4 & 8 \\ \hline Poisson (assuming estim. intensity) & 0.053 & 0.053 & 0.052 & 0.051 \\ Poisson (assuming known intensity) & 0.0015 & 0.0011 & 0.0008 & 0.0008 \\ \hline Matern (assuming estim. 
intensity) & 0.63 & 1.00 & 1.00 & 1.00 \\ Matern (assuming known intensity) & 0.31 & 0.96 & 1.00 & 1.00 \\ \end{tabular} \end{table} Table 1: Rejection probabilities for Kolmogorov-Smirnov test for Poisson process (upper two rows) and Matern process (lower two rows) on increasing observation windows. For each type of process, the first row is using asymptotic variance for estimated intensity and the second row is using asymptotic variance assuming known intensity. \(C_{1}\) is independent of \(t\) as long as \(t\leq R\). It follows that \[\sup_{r\in[r_{i},r_{i+1}]}\|H_{n,\beta^{*}}(r)-H_{n,\beta^{*}}(r_{i})\|\leq\sum_{x,y\in\mathcal{P}_{n}}\frac{\mathds{1}_{A(r_{i},r_{i+1})}(x-y)}{|W_{n}\cap W_{n,x- y}|\rho_{\beta^{*}}(x)^{2}\rho_{\beta^{*}}(y)^{2}}\left\|\frac{\mathrm{d}}{ \mathrm{d}\beta}\big{(}\rho_{\beta^{*}}(x)\rho_{\beta^{*}}(y)\big{)}\right\|. \tag{25}\] Let \(X_{i,n}\) denote the right hand side. Campbell's formula shows that \(\mathbb{E}X_{i,n}\leq C_{2}\eta\), where \(C_{2}\) is independent of \(n\) and \(r_{i}\). Moreover, the computation in Appendix B shows that \(\mathsf{Var}(X_{i,n})\leq C_{3}\eta n^{-1}\). Choose \(\eta<\delta/(6C_{3}2\). Then by Chebyshev's inequality, \[P\Big{(}\sup_{r\in[r_{i},r_{i+1}]}\|H_{n,\beta^{*}}(r)-H_{n,\beta^{*}}(r_{i}) \|\geq\delta/3\Big{)}\leq P(|X_{i,n}|\geq\delta/3)\leq P(|X_{i,n}-\mathbb{E}X_ {i,n}|\geq\delta/6)\leq\frac{36C_{3}\eta}{n\delta^{2}},\] so (21) is bounded by \(36kC_{3}\eta/(n\delta^{2})\), which tends to zero as \(n\to\infty\) since \(k\eta\) is bounded. Proposition 3.1 shows that \(\mathsf{Var}\,H_{n,\beta^{*}}(r)\) is \(O(n^{-1})\), and it is easily verified (by computations similar to Appendix B) that the upper bound is uniform in \(r\) for \(r\in[r_{0},R]\). By Chebyshev's inequality, there is a \(C_{4}>0\) such that \[P\Big{(}\sup_{i=0,\ldots,k}\|H_{n,\beta^{*}}(r_{i})-\bar{H}_{n}(r_{i})\|\geq \delta/3\Big{)}\leq\frac{9(k+1)C_{4}}{n\delta^{2}},\] so (22) goes to zero for \(n\to 0\) for any fixed \(\eta\). Taking expectations and applying Campbell's formula in (24), we get \[\sup_{r\in[r_{i},r_{i+1}]}\|\bar{H}_{n}(r_{i})-\bar{H}_{n}(r)\|\] \[\leq C_{5}\eta \tag{26}\] for some \(C_{5}\) independent of \(r_{i}\) and \(n\). Thus, choosing \(\eta<\delta/(3C_{5})\), (23) vanishes. Finally, (26) shows that \(\bar{H}_{n}\) is continuous and if uniform convergence in (ii) is satisfied, \(H\) must also be continuous. The proof of Theorem 3.3 uses some definitions of Skorokhod space, which we briefly recall here, see [4, Sec. 12] for details. The Skorokhod space \(D[r_{0},R]\) of cadlag functions on \([r_{0},R]\) is a separable metric space with metric \(\mu\) given by \[\mu(f_{1},f_{2})=\inf_{\lambda}\{|\lambda-I|_{\infty}\vee|f_{1}-f_{2}\circ \lambda|_{\infty}\},\] where the infimum runs over all strictly increasing, continuous bijections \(\lambda:[r_{0},R]\to[r_{0},R]\), \(I\) is the identity map, and \(|\cdot|_{\infty}\) is the sup norm. In the rest of this section, unless other mentioned, functions will be restricted to the domain \([r_{0},R]\), i.e. \(|f|_{\infty}=\sup_{r\in[r_{0},R]}|f(r)|\). The tightness condition we apply makes use of the cadlag modulus of continuity \(\omega_{f}^{\prime}(\delta)\) defined by \[\omega_{f}^{\prime}(\delta)=\inf_{\begin{subarray}{c}t_{1}<\cdots<t_{k}\\ |t_{i}-t_{i-1}|>\delta\end{subarray}}\max_{i=1,\ldots,k}\sup_{s,t\in[t_{i-1}, t_{i})}|f(s)-f(t)|. \tag{27}\] We will need the following property: If \(f_{2}\) is continuous on \([r_{0},R]\), then it is uniformly continuous. 
Hence there is a function \(g_{f_{2}}(\delta)\) with \(\lim_{\delta\to 0}g_{f_{2}}(\delta)=0\) such that \(|s-t|\leq\delta\) implies \(|f_{2}(s)-f_{2}(t)|\leq g_{f_{2}}(\delta)\). Since it is enough to take the infimum in (27) over all partitions with \(|t_{i}-t_{i-1}|\leq 2\delta\), we get for any cadlag \(f_{1}\) \[\omega_{f_{1}+f_{2}}^{\prime}(\delta)\leq\omega_{f_{1}}^{\prime}(\delta)+g_{f_ {2}}(2\delta). \tag{28}\] Proof of Theorem 3.3.: Recall the decomposition \[\hat{K}_{n,\hat{\beta}_{n}}-K(r)=H_{n,\hat{\beta}_{n,\cdot}}(r)(\hat{\beta}_{n }-\beta^{*})+\hat{K}_{n}(r)-K(r).\] The proof relies on the observation that \(H_{n,\tilde{\beta}_{n,r}}(r)\) converges to a continuous deterministic function \(H(r)\), \(\sqrt{n}(\hat{\beta}_{n}-\beta^{*})\) is constant in \(r\) and converges in distribution, and \(\sqrt{n}(\hat{K}_{n}(r)-K(r))\) converges in Skorokhod space. Thus, the proof below can be viewed as a stochastic process analogue to Slutsky's theorem. To simplify, we write \[\sqrt{n}(\hat{K}_{n,\beta_{n}}-K(r))=H_{n,\tilde{\beta}_{n,r}}(r)Y_{n}+\sqrt{n} (\hat{K}_{n}(r)-K(r))=(H_{n,\tilde{\beta}_{n,r}}(r)-H(r))Y_{n}+Z_{n}(r) \tag{29}\] where \(Y_{n}=\sqrt{n}(\hat{\beta}_{n}-\beta^{*})\) and converges in distribution to a Gaussian vector \(Y\) by assumption (13), and \[Z_{n}(r)=H(r)Y_{n}+\sqrt{n}(\hat{K}_{n}(r)-K(r)).\] By [4, Thm. 3.1], in order to show functional convergence, it is enough to show: 1. Convergence of \(\mu\big{(}Z_{n},\sqrt{n}\big{(}\hat{K}_{n,\beta_{n}}-K)\big{)}\to 0\) in probability. 2. Convergence of \(Z_{n}\) in distribution in Skorokhod topology to a Gaussian process with the covariance structure given by (14). To show a., note that \(\mu(f_{1},f_{2})\leq|f_{1}-f_{2}|_{\infty}\), so \[\mu\big{(}Z_{n},\sqrt{n}\big{(}\hat{K}_{n,\beta_{n}}-K\big{)}\big{)}\leq\big{|} \big{(}H_{n,\tilde{\beta}_{n,r}}-H\big{)}Y_{n}\big{|}_{\infty}\leq\big{\|}H_{ n,\tilde{\beta}_{n,r}}-H\big{\|}_{\infty}\|Y_{n}\|.\] Since \(Y_{n}\to Y\) in distribution, \(\limsup_{n}P(\|Y_{n}\|\geq M)\leq P(\|Y\|\geq M)\) and hence \(\|Y_{n}\|\) is bounded in probability. It remains to show that \(\|H_{n,\tilde{\beta}_{n,r}}-H\|_{\infty}\) goes to zero in probability. We write \[\|H_{n,\tilde{\beta}_{n,r}}-H\|_{\infty}\leq\|H_{n,\tilde{\beta}_{n,r}}-H_{n, \beta^{*}}\|_{\infty}+\|H_{n,\beta^{*}}-\bar{H}_{n}\|_{\infty}+\|\bar{H}_{n}-H \|_{\infty}. \tag{30}\] The first term goes to zero in probability because the bound in (12) is uniform in \(r\) since \(\hat{K}_{n}(r)\leq\tilde{K}_{n}(R)\), the middle term goes to zero in probability by Lemma 6.1, and the third term goes to zero by (ii). To show b., note that convergence of finite dimensional distributions follows from the observation (13). It remains to show tightness. According to [4, Thm. 13.2], tightness of a sequence \(Z_{n}\) is equivalent to the following two conditions: 1. \(\lim_{a\to\infty}\limsup_{n}P(|Z_{n}|_{\infty}\geq a)=0\). 2. For any \(\varepsilon>0\): \(\lim_{\delta\to 0}\limsup_{n}P(\omega^{\prime}_{Z_{n}}(\delta)\geq \varepsilon)=0\). To show 1., note \[P(|HY_{n}+\sqrt{n}(\hat{K}_{n}-K)|_{\infty}\geq a)\] \[\leq P(|HY_{n}|_{\infty}\geq a/2)+P(|\sqrt{n}(\hat{K}_{n}-K)|_{ \infty}\geq a/2).\] Taking \(\lim_{a\to\infty}\limsup_{n}\), the latter term vanishes by tightness of \(\sqrt{n}(\hat{K}_{n}(r)-K(r))\). 
The first term satisfies \[P\left(|HY_{n}|_{\infty}\geq a/2\right)\leq P\left(\|Y_{n}\|\geq\sqrt{a/2} \right)+P\left(\|H\|_{\infty}\geq\sqrt{a/2}\right).\] Clearly, \(\lim_{a\to\infty}P(\|H\|_{\infty}\geq a/2)=0\) since \(H\) is continuous and hence bounded on \([r_{0},R]\) by Lemma 6.1. Moreover, since \(Y_{n}\to Y\) in distribution, \[\lim_{a\to\infty}\limsup_{n}P(\|Y_{n}\|\geq\sqrt{a/2})\leq\lim_{a\to\infty}P( \|Y\|\geq\sqrt{a/2})=0.\] To show 2., we use (ii) and (28) to obtain a (\(p\)-dimensional) function \(g_{H}\) such that \[\omega^{\prime}_{\sqrt{n}(\hat{K}_{n}-K)+HY_{n}}(\delta)\leq\omega^{\prime}_{ \sqrt{n}(\hat{K}_{n}-K)}(\delta)+\|Y_{n}\|\|g_{H}(2\delta)\|.\] It follows that \[P\left(\omega^{\prime}_{Y_{n}K+\sqrt{n}(\hat{K}_{n}-K)}(\delta)\geq \varepsilon\right) \leq P\left(\omega^{\prime}_{\sqrt{n}(\hat{K}_{n}-K)}(\delta)\geq \varepsilon/2\right)\,+\,P\left(\|Y_{n}\|\geq(\varepsilon/2)\|g_{H}(2\delta)\| ^{-1}\right).\] Taking \(\lim_{\delta\to 0}\limsup_{n}\) yields \(0\) in both terms because of tightness of \(\sqrt{n}(\hat{K}_{n}-K)\) and convergence in distribution of \(Y_{n}\).
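To connect Theorem 3.3 with the goodness-of-fit procedure of Section 5, the following sketch (illustrative only; it assumes a homogeneous Poisson null in \(d=2\), uses the estimated-intensity limiting covariance \(2\pi(s\wedge t)^{2}/\rho^{2}\) from (18), and calibrates the critical value by simulating the limiting Gaussian process on a grid) approximates the 5% critical value for the Kolmogorov-Smirnov statistic \(\sup_{r\leq R}|\hat{K}_{n,\hat{\beta}_{n}}(r)-\pi r^{2}|\):

```python
import numpy as np

def ks_critical_value(rho_hat, area, R=0.05, grid=50, nsim=10000, level=0.95, seed=0):
    """Approximate critical value for sup_r |K_hat(r) - pi r^2| under a Poisson null.
    Uses the limiting covariance c(s,t) = 2*pi*min(s,t)**2 / rho**2 from (18)."""
    rng = np.random.default_rng(seed)
    r = np.linspace(R / grid, R, grid)
    C = 2 * np.pi * np.minimum.outer(r, r) ** 2 / rho_hat**2
    L = np.linalg.cholesky(C + 1e-12 * np.eye(grid))   # small jitter for numerical stability
    sups = np.max(np.abs(L @ rng.standard_normal((grid, nsim))), axis=0)
    # sqrt(n) * (K_hat - pi r^2) converges to the Gaussian process, with n = |W_n| = area
    return np.quantile(sups, level) / np.sqrt(area)

# usage: reject the Poisson hypothesis if the observed statistic exceeds the critical value
crit = ks_critical_value(rho_hat=200.0, area=1.0)
print(crit)
```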
2302.01284
A Self-Adaptive Algorithm of the Clean Numerical Simulation (CNS) for Chaos
The background numerical noise $\varepsilon_{0} $ is determined by the maximum of truncation error and round-off error. For a chaotic system, the numerical error $\varepsilon(t)$ grows exponentially, say, $\varepsilon(t) = \varepsilon_{0} \exp(\kappa\,t)$, where $\kappa>0$ is the so-called noise-growing exponent. This is the reason why one can not gain a convergent simulation of chaotic systems in a long enough interval of time by means of traditional algorithms in double precision, since the background numerical noise $\varepsilon_{0}$ might stop decreasing because of the use of double precision. This restriction can be overcome by means of the clean numerical simulation (CNS), which can decrease the background numerical noise $\varepsilon_{0}$ to any required tiny level. A lot of successful applications show the novelty and validity of the CNS. In this paper, we further propose some strategies to greatly increase the computational efficiency of the CNS algorithms for chaotic dynamical systems. It is highly suggested to keep a balance between truncation error and round-off error and besides to progressively enlarge the background numerical noise $\varepsilon_{0}$, since the exponentially increasing numerical noise $\varepsilon(t)$ is much larger than it. Some examples are given to illustrate the validity of our strategies for the CNS.
Shijie Qin, Shijun Liao
2023-01-31T13:30:13Z
http://arxiv.org/abs/2302.01284v1
# A Self-Adaptive Algorithm of the Clean Numerical Simulation (CNS) for Chaos ###### Abstract The background numerical noise \(\varepsilon_{0}\) is determined by the maximum of truncation error and round-off error. For a chaotic system, the numerical error \(\varepsilon(t)\) grows exponentially, say, \(\varepsilon(t)=\varepsilon_{0}\exp(\kappa t)\), where \(\kappa>0\) is the so-called noise-growing exponent. This is the reason why one can not gain a convergent simulation of chaotic systems in a long enough interval of time by means of traditional algorithms in double precision, since the background numerical noise \(\varepsilon_{0}\) might stop decreasing because of the use of double precision. This restriction can be overcome by means of the clean numerical simulation (CNS), which can decrease the background numerical noise \(\varepsilon_{0}\) to any required tiny level. A lot of successful applications show the novelty and validity of the CNS. In this paper, we further propose some strategies to greatly increase the computational efficiency of the CNS algorithms for chaotic dynamical systems. It is highly suggested to keep a balance between truncation error and round-off error and besides to progressively enlarge the background numerical noise \(\varepsilon_{0}\), since the exponentially increasing numerical noise \(\varepsilon(t)\) is much larger than it. Some examples are given to illustrate the validity of our strategies for the CNS. keywords: Chaos; Clean Numerical Simulation (CNS); self-adaptive algorithm; computational efficiency. ## 1 Introduction For a chaotic dynamical system, the sensitivity dependence on initial conditions (SDIC) was first discovered by Poincare [1], and then this phenomenon was discovered once again by Lorenz [2] with a more familiar name "butterfly-effect". Due to the SDIC, a very weak, small-scale disturbance of the initial condition will give rise to a huge deviation of numerical solution of the chaotic system after a long enough temporal interval [3; 4; 5]. Furthermore, it was found that a chaotic dynamical system not only has the sensitivity dependence on initial conditions (SDIC) but also possesses the sensitivity dependence on numerical algorithms (SDNA), as reported by Lorenz [6; 7]. All of these phenomena are due to the exponential increase of noise (or uncertainty) of chaotic systems, but unfortunately _artificial_ numerical noises (i.e. truncation errors and round-off errors) are always _inevitable_ for almost all of the numerical algorithms. Thus, for a chaotic dynamical system, calculated trajectories of computer-generated simulations obtained by means of different numerical algorithms (with single/double precision) and different time steps are mostly quite different. Naturally, such kind of non-replicability/unreliability of chaotic solution has brought plenty of heated debates on the credence of the numerical simulation of chaotic dynamical system [8], and someone even made an extremely pessimistic conclusion that "for chaotic systems, numerical convergence cannot be guaranteed _forever_" [9]. In addition, it has been recently reported that "shadowing solutions can be almost surely nonphysical", which "invalidates the argument that small perturbations in a chaotic system can only have a small impact on its statistical behavior" [10]. 
To gain a _reproducible/reliable_ numerical simulation of chaotic systems, Liao [11] proposed a brand-new numerical strategy, namely the "Clean Numerical Simulation" (CNS) [12; 13; 14], to control the background numerical noise, say, truncation error and round-off error, during a temporal interval \(t\in[0,T_{c}]\), where \(T_{c}\) is the so-called "critical predictable time" and this temporal interval should be long enough for calculating statistics. In the frame of the CNS [11; 12; 13; 14; 15; 16; 17; 18], the temporal truncation error and the spatial truncation error are able to be decreased to a _required_ small level via using the Taylor expansion method with a high _enough_ order in the temporal dimension and adopting a fine _enough_ discretization method in the spatial dimension (such as the high-order spatial Fourier expansion), respectively. Significantly, all of the physical and numerical variables/parameters should be represented by means of the multiple precision (MP) [19] with a large _enough_ number of significant digits, and thus the round-off error is also able to be decreased to a _required_ small level. Moreover, an additional numerical simulation of the identical chaotic system with the even smaller numerical noise is required and performed in order to determine such a "critical predictable time" \(T_{c}\), so that the numerical noise could be negligible and thus the computer-generated solution of a chaotic system is reproducible/reliable within the whole spatial computational domain and in the temporal interval \([0,T_{c}]\). In this way, different from some other general numerical algorithms, the CNS is able to give the reproducible/reliable numerical simulation of a chaotic dynamical system within a finite but long enough temporal interval. Here it should be emphasized that although our CNS strategy is based on the classical Taylor series method [20] as well as the multiple precision [19], the scientific significance of this strategy is mainly about the "critical predictable time" \(T_{c}\): the CNS can greatly reduce the background numerical noise, i.e. truncation error and round-off error, to any a required tiny level so that the numerical noise is negligible compared with the "true" physical solution, and thus the corresponding numerical result of a chaotic system is reproducible/reliable in an interval of time \([0,T_{c}]\) that is long enough for statistics, as described in the next section. In other words, the results of chaotic dynamical systems given by the CNS can be regarded as a "clean" benchmark solution [15; 16; 17; 18], which is the main purpose of proposing this CNS strategy. By contrast, solely adopting the Taylor series method [20; 21; 22; 23] to solve a chaotic system for high precision, one usually does not focus on the "critical predictable time" \(T_{c}\) and thus obtain a mixture of the "true" physical solution and the "false" numerical noise, which are mostly at the same order of magnitude, since the background numerical noise of a simulation of chaos should increase exponentially (and quickly) until to the same level of "true" physical solution, which is not considered by traditional numerical strategies. For the computer-generated simulation of a chaotic dynamical system given by a certain numerical algorithm, it is well-known that the averaged level of numerical noise should increase exponentially within a temporal interval \([0,T_{c}]\), i.e. 
\[\varepsilon(t)=\varepsilon_{0}\exp(\kappa\,t),\qquad\quad t\in[0,T_{c}], \tag{1}\] where the noise-growing exponent \(\kappa\) is a positive constant (usually corresponding to the maximum Lyapunov exponent for a chaotic dynamical system with a finite degree of freedom), \(T_{c}\) is the above-mentioned critical predictable time in the frame of the CNS strategy, \(\varepsilon_{0}\) represents the level of initial/background numerical noise (which is determined by the initial truncation error and the initial round-off error), and \(\varepsilon(t)\) denotes the averaged level of evolving noise for a computer-generated simulation, respectively. Considering that there might be another increasing pattern of numerical noise [24; 25] that is more meticulous, the exponential growing (1) is still suitable for a long-time simulation. Theoretically, the critical predictable time \(T_{c}\) is determined by a given value of the critical numerical noise \(\varepsilon_{c}\), i.e. \(\varepsilon_{c}=\varepsilon_{0}\exp(\kappa\,T_{c})\) that leads to \[T_{c}=\frac{1}{\kappa}\ln\left(\frac{\varepsilon_{c}}{\varepsilon_{0}}\right). \tag{2}\] Obviously, if the value of \(\varepsilon_{c}\) is unchanged, the smaller the level of the initial/background numerical noise \(\varepsilon_{0}\), the larger the critical predictable time \(T_{c}\). Unfortunately, it is impossible in practice to obtain the evolving noise \(\varepsilon(t)\) with high accuracy, because we do not know the true (physical) solution of a numerical simulation of chaos. Thus, a practical approach with satisfied numerical precision is required to calculate the \(\varepsilon(t)\). Let \(\mathbf{x}\in\Omega\) represent the dimensional vector in a chaotic dynamical system, \(\phi(\mathbf{x},t)\) denote the solution of numerical (computer-generated) simulation that is reproducible/convergent within \(t\in[0,T_{c}]\) possessing the initial/background numerical noise \(\varepsilon_{0}\), and \(\phi^{\prime}(\mathbf{x},t)\) denote another solution (using the identical initial/boundary conditions and physical parameters) that is reliable within \(t\in[0,T_{c}^{\prime}]\) possessing the initial/background numerical noise \(\varepsilon_{0}^{\prime}\) which is smaller than \(\varepsilon_{0}\). Due to the exponentially growing property (1) of numerical noise for a chaotic dynamical system, there is \(T_{c}^{\prime}>T_{c}\) and that \(\phi^{\prime}(\mathbf{x},t)\) within \(t\in[0,T_{c}]\) must be superior and much closer to the physical solution (true solution) compared with \(\phi(\mathbf{x},t)\). And thus, \(\phi^{\prime}(\mathbf{x},t)\) could be seen as a benchmark solution to help us determine the numerical noise of \(\phi(\mathbf{x},t)\) within \(\mathbf{x}\in\Omega\) approximately. Therefore, practically, the evolving noise \(\varepsilon(t)\) is obtained via comparing \(\phi(\mathbf{x},t)\) (possessing the background numerical noise \(\varepsilon_{0}\)) with a superior numerical solution \(\phi^{\prime}(\mathbf{x},t)\) (possessing the smaller background numerical noise \(\varepsilon_{0}^{\prime}\)). Up to now, the above-mentioned CNS strategy has been applied to many chaotic dynamical systems successfully with the corresponding computer-generated simulations being reproducible and of course reliable. For example, via using some general numerical algorithms with _double_ precision, one always obtains the reproducible numerical solutions of the well-known Lorenz system only in a short temporal interval, i.e. \(t\in[0,32]\) approximately [11]. 
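To make relation (2) concrete, the following minimal sketch (Python; the function name, the choice \(\varepsilon_{c}=10^{-2}\) and the Lorenz-like value \(\kappa\approx 0.91\) are illustrative assumptions rather than part of the CNS itself) evaluates the critical predictable time for several choices of the number \(N_{s}\) of significant digits behind \(\varepsilon_{0}=10^{-N_{s}}\):

```python
import math

def critical_predictable_time(ns: int, kappa: float, eps_c: float = 1e-2) -> float:
    """Eq. (2) with eps0 = 10**(-ns):  T_c = ln(eps_c / eps0) / kappa."""
    return (math.log(eps_c) + ns * math.log(10.0)) / kappa

kappa = 0.91                       # illustrative noise-growing exponent (Lorenz-like)
for ns in (16, 100, 1000, 4020):   # significant digits of the arithmetic
    print(f"N_s = {ns:5d}  ->  T_c ~ {critical_predictable_time(ns, kappa):9.1f}")
```

With about 16 significant digits (roughly double precision) this gives \(T_{c}\approx 35\), in line with the interval \(t\in[0,32]\) quoted above, while several thousand digits are needed before \(T_{c}\) reaches the order of \(10^{4}\).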
By contrast, via using the CNS strategy, a reproducible/convergent numerical simulation of the same chaotic Lorenz system within quite a long temporal interval, i.e. \(t\in[0,10000]\), was obtained _for the first time_ by Liao and Wang [14]. Besides, Liao and Li [26] studied the evolution of the microscopic physical uncertainty of initial condition for the famous three-body system (which is chaotic) by means of the CNS, and they found that the uncertainty finally becomes macroscopical, which leads to the random escape of the three-body system as well as the behavior of symmetry breaking. It indicates that the uncertainty of microscopic physics could be the origin of large-scale randomness for the well-known three-body system. Furthermore, the numerical noise of the CNS strategy can be controlled to be much smaller than the uncertainty of microscopic physics, and thus via using the CNS, Lin et al. [27] theoretically provided rigorous evidence to demonstrate that the microscopic thermal fluctuation should be the origin of large-scale randomness of the two-dimensional turbulent Rayleigh-Benard convection. Significantly, with the help of China's national supercomputer, the CNS strategy was applied to investigate the periodic orbits of the famous three-body problem, and more than 2000 brand-new families of periodic orbits were discovered successfully by Li et al. [28, 29, 30]. Those newly found periodic orbits were reported twice in the popular magazine _New Scientist_[31, 32], because, for the three-body problem, there are only three families of chaotic periodic orbits that had ever been found since Newton mentioned this famous problem three hundred years ago! It is also worth noting that, according to a known periodic orbit as well as three equal masses, and integrating the governing equations by means of the CNS, Li et al. [33] obtained 135445 brand-new periodic orbits of arbitrarily unequal masses of the three-body system, including 13315 stable ones. In addition, using the CNS in quite a long temporal interval, Xu et al. [17] obtained the reliable/reproducible trajectories of a free-fall disk that is chaotic under some certain physical parameters, and the CNS strategy is able to help him accurately forecast the position and posture of the chaotic free-fall disk near the bifurcation point. As for spatiotemporal chaos, Hu & Liao [15] and Qin & Liao [16] proposed an efficient CNS strategy utilized in physical space to numerically solve the 1D complex Ginzburg-Landau equation (CGLE) and the damped driven sine-Gordon equation (SGE), respectively, which further demonstrates the effectiveness of the CNS strategy that can exactly maintain both the statistical properties and symmetric features of the spatiotemporal chaotic systems in which general numerical algorithms with double precision always fail. Recently, taking the CNS strategy as a tool, it has been found that the statistical features (such as the probability density function) of some chaotic dynamical systems are extremely sensitive to the tiny noise/disturbance, and thus this kind of chaos is called ultra-chaos by Liao & Qin [18]. As a brand-new concept, the ultra-chaos might deepen and enrich our understandings about chaos and turbulence. Furthermore, with the help of CNS, Qin & Liao [34] provide rigorous evidence that numerical noises as a kind of tiny artificial stochastic disturbances have quantitatively and qualitatively large-scale influences on a sustained turbulence. 
In a word, the above-mentioned investigations demonstrate the effectiveness and potential of the CNS for complex chaotic dynamical systems. Although the CNS is able to be applied to obtain the reproducible/convergent numerical simulation of a chaotic dynamical system within a long enough temporal interval, it is more time-consuming compared with some other general numerical algorithms with double precision [14, 27]. In this paper, according to the exponentially growing property of noise \(\varepsilon(t)\) in (1), we propose a modified strategy of the CNS, called the "self-adaptive CNS", to significantly increase the computational _efficiency_ of the CNS algorithm. To illustrate its validity, we apply the CNS with the self-adaptive precision to some chaotic systems, such as the Lorenz equation, the hyper-chaotic Rossler system, the three-body problem, and the damped driven sine-Gordon equation. ## 2 Basic ideas of the self-adaptive CNS In this section, let us use the Lorenz equations [2] \[\left\{\begin{array}{l}\dot{x}(t)=\sigma\,[y(t)-x(t)],\\ \dot{y}(t)=R\,x(t)-y(t)-x(t)\,z(t),\\ \dot{z}(t)=x(t)\,y(t)+b\,z(t),\end{array}\right. \tag{3}\] in the case of \[\sigma=10,\ \ R=28,\ \ b=-8/3, \tag{4}\] under the initial condition \[x(0)=-15.8,\ \ y(0)=-17.48,\ \ z(0)=35.64, \tag{5}\] as one of the most famous chaotic systems (with one positive Lyapunov exponent) to briefly describe the basic ideas of the self-adaptive CNS. The CNS algorithm for the Lorenz system (3)-(5) is mainly based on the \(M\)th-order Taylor series in the temporal interval \([t,t+\Delta t]\): \[x(t+\Delta t)\approx x(t)+\sum_{m=1}^{M}x^{[m]}(t)\,(\Delta t)^{m}, \tag{6}\] \[y(t+\Delta t)\approx y(t)+\sum_{m=1}^{M}y^{[m]}(t)\,(\Delta t)^{m}, \tag{7}\] \[z(t+\Delta t)\approx z(t)+\sum_{m=1}^{M}z^{[m]}(t)\,(\Delta t)^{m}, \tag{8}\] where \(\Delta t\) is the time step and \[x^{[m]}(t)=\frac{1}{m!}\frac{d^{m}x(t)}{dt^{m}},\ \ y^{[m]}(t)=\frac{1}{m!} \frac{d^{m}y(t)}{dt^{m}},\ \ z^{[m]}(t)=\frac{1}{m!}\frac{d^{m}z(t)}{dt^{m}} \tag{9}\] are the high-order temporal derivatives. Differentiating both sides of Eqs. (3) (\(m-1\)) times with respect to \(t\) and then dividing them by \(m!\), we obtain the iterative formulae \[x^{[m]}(t)=\frac{\sigma}{m}\,\left[y^{[m-1]}(t)-x^{[m-1]}(t)\right], \tag{10}\] \[y^{[m]}(t)=\frac{1}{m}\,\left[R\,x^{[m-1]}(t)-y^{[m-1]}(t)-\sum_{i=0}^{m-1}x^ {[i]}(t)\,z^{[m-1-i]}(t)\right], \tag{11}\] \[z^{[m]}(t)=\frac{1}{m}\,\left[\sum_{i=0}^{m-1}x^{[i]}(t)\,y^{[m-1-i]}(t)+b\,z ^{[m-1]}(t)\right], \tag{12}\] for arbitrary \(m\geq 1\). Note that parallel technology can be applied to calculate the sum terms in (6)-(8), (11) and (12). According to (2), the background numerical noises \(\varepsilon_{0}\) must be small enough if one needs a reliable (reproducible) chaotic solution within a large temporal interval \(t\in[0,T_{c}]\). It is worth noting that the background numerical noises \(\varepsilon_{0}\) in (2) is a constant, which is determined by the maximum of the spatio-temporal truncation error (resulting from the truncation of an infinite number of series) and the round-off error (resulting from a limited number of significant digits of data). The basic idea of the above-mentioned CNS algorithm is to greatly decrease both the temporal truncation error and round-off error so that the background numerical noise \(\varepsilon_{0}\) is small enough for a numerical simulation to be reproducible/reliable within a given temporal interval \(t\in[0,T_{c}]\). 
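The recurrences (10)-(12) translate almost literally into code. Below is a minimal sketch of one Taylor step of the form (6)-(8), written in Python with the mpmath library for multiple-precision arithmetic; the working precision of 50 digits, the order \(M=20\) and the step \(\Delta t=0.01\) are illustrative stand-ins for the far larger values used in actual CNS runs, and the function names are ours:

```python
from mpmath import mp, mpf

mp.dps = 50                                   # illustrative working precision (digits)
sigma, R, b = mpf(10), mpf(28), mpf(-8) / 3   # parameters (4)

def lorenz_taylor_step(x0, y0, z0, dt, M):
    """One step of the Mth-order Taylor method using the recurrences (10)-(12)."""
    x, y, z = [x0], [y0], [z0]                # x[m] holds x^[m](t), etc.
    for m in range(1, M + 1):
        xz = sum(x[i] * z[m - 1 - i] for i in range(m))   # Cauchy product of x and z
        xy = sum(x[i] * y[m - 1 - i] for i in range(m))   # Cauchy product of x and y
        x.append(sigma * (y[m - 1] - x[m - 1]) / m)
        y.append((R * x[m - 1] - y[m - 1] - xz) / m)
        z.append((xy + b * z[m - 1]) / m)
    def horner(c):                            # evaluate the truncated series (6)-(8)
        acc = c[M]
        for m in range(M - 1, -1, -1):
            acc = acc * dt + c[m]
        return acc
    return horner(x), horner(y), horner(z)

x, y, z = mpf('-15.8'), mpf('-17.48'), mpf('35.64')   # initial condition (5)
for _ in range(100):                                  # advance to t = 1 with dt = 0.01
    x, y, z = lorenz_taylor_step(x, y, z, mpf('0.01'), 20)
print(x, y, z)
```

Reliability is then assessed exactly as in the CNS: the run is repeated with a higher order and more digits, and the two trajectories are compared.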
Obviously, if the temporal Taylor expansion (6)-(8) is given a large _enough_ order \(M\), the temporal truncation error is able to be decreased under a _required_ small level. More importantly, different from other traditional algorithms, we express all of the physical and numerical variables/parameters by means of the multiple precision (MP) via choosing the significant digits with a large _enough_ number \(N_{s}\), and thus the round-off error is also able to be decreased under a _required_ small level. In this way, both the temporal truncation error and round-off error are able to be decreased under a _required_ small level via the CNS. Note that computer-generated solutions of chaotic Lorenz system (3)-(5) given by some general numerical algorithms with _double_ precision are reproducible/convergent in quite a short temporal interval \(t\in[0,32]\). In 2014, Liao & Wang [14] obtained a reproducible/convergent numerical simulation \((x,y,z)\) of the above-mentioned Lorenz system in quite a long temporal interval \(t\in[0,10000]\) (Lorenz unit time) by means of a parallel algorithm of the CNS using the 3500th-order Taylor expansion (\(M=3500\)) with the constant time step \(\Delta t=0.01\) and 4180-digit multiple precision (\(N_{s}=4180\)) for all physical and numerical variables/parameters, whose reproducibility/reliability (from the mathematical viewpoint) was confirmed by means of another simulation \((x^{\prime},y^{\prime},z^{\prime})\) given by the CNS with the smaller background numerical noise using the 3600th-order Taylor expansion (\(M=3600\)) with the time step \(\Delta t=0.01\) and 4515-digit multiple precision (\(N_{s}=4515\)). For simplicity, define the relative error \[\varepsilon(t)=\frac{\left|x^{\prime}(t)-x(t)\right|+\left|y^{\prime}(t)-y(t) \right|+\left|z^{\prime}(t)-z(t)\right|}{\left|x^{\prime}(t)\right|+\left|y^{ \prime}(t)\right|+\left|z^{\prime}(t)\right|}, \tag{13}\] where \((x,y,z)\) and \((x^{\prime},y^{\prime},z^{\prime})\) are the two CNS results mentioned above. It was found that (1) indeed holds with the noise-growing exponent \(\kappa\approx 0.91\) (which corresponds to the maximum Lyapunov exponent of this Lorenz system), say, \(\kappa/\ln 10\approx 0.40\), indicating that the background numerical noise \(\varepsilon_{0}\) will be enlarged nearly \(10^{4000}\) times at \(t=10000\). This is the reason why Liao & Wang [14] had to use the 4180-digit multiple precision (\(N_{s}=4180\)) and the 3500th-order Taylor expansion (\(M=3500\)) in their CNS algorithm so as to greatly decrease the background numerical noise \(\varepsilon_{0}\) to a very tiny level! Frankly speaking, one hardly uses 3500th-order Taylor expansion and data in a multiple precision with 4180 significant digits in practice. However, from a theoretical viewpoint, the _reproducible/convergent_ chaotic simulation of the Lorenz equations in such a long interval of time is very important, since it gives, _for the first time_, direct evidence that one can indeed gain a _reproducible/convergent_ trajectory of chaotic systems in a long enough interval of time. It invalidates the argument that "for chaotic systems, numerical convergence cannot be guaranteed _forever_" [9], although a large number of calculations are required: it took 220.9 hours (i.e. about 9 days and 5 hours) using 1200 CPUs of the National Supercomputer TH-1A at Tianjian, China [14]. 
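In practice the evolving noise (13) and the exponent \(\kappa\) are extracted by comparing two runs of different precision. A minimal sketch follows (Python/NumPy; the array shapes, the function names and the synthetic data used to exercise the fit are our assumptions):

```python
import numpy as np

def relative_error(traj, traj_ref):
    """Relative error (13); both arguments have shape (n_samples, 3) holding (x, y, z)
    samples from the coarser run and from the finer-precision reference run."""
    return np.abs(traj_ref - traj).sum(axis=1) / np.abs(traj_ref).sum(axis=1)

def fit_noise_growth(t, eps):
    """Least-squares fit of log10(eps) = (kappa / ln 10) * t + log10(eps0), cf. (1)."""
    slope, intercept = np.polyfit(t, np.log10(eps), 1)
    return slope * np.log(10.0), 10.0 ** intercept    # (kappa, eps0)

# Synthetic check: exponential growth with kappa = 0.91 starting from eps0 = 1e-16.
t = np.linspace(0.0, 30.0, 200)
print(fit_noise_growth(t, 1e-16 * np.exp(0.91 * t)))  # ~ (0.91, 1e-16)
```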
This kind of convergent simulation in \(t\in[0,10000]\) can be used as a benchmark solution of the chaotic Lorenz system (3)-(5) to verify the modified CNS algorithms, as described below. How can the computational efficiency of the CNS be increased? ### Keeping a balance between truncation error and round-off error Note that the background numerical noise \(\varepsilon_{0}\) in (1) is determined by the maximum of the truncation error and round-off error. Thus it is optimal to keep the temporal truncation error at the same level as the round-off error, i.e. to keep a balance between the two. Unlike Liao and Wang [14], who used a constant time step, a _variable_ stepsize (VS) strategy [35] can be applied to the above-mentioned CNS algorithm by means of a prescribed allowed tolerance _tol_ of the governing equations. Referring to Barrio _et al._ [35], the optimal time stepsize is given by \[\Delta t=\min\left(\frac{tol^{\frac{1}{M}}}{\left\|x_{i}^{[M-1]}(t)\right\|_{\infty}^{\frac{1}{M-1}}},\;\frac{tol^{\frac{1}{M+1}}}{\left\|x_{i}^{[M]}(t)\right\|_{\infty}^{\frac{1}{M}}}\right), \tag{14}\] where \(M\) denotes the order of Taylor expansion, _tol_ denotes the allowed tolerance, \(\left\|\cdot\right\|_{\infty}\) is the infinity norm over the variables \(x_{i}\) (\(i=1,2,3\)), and \(x_{1}(t)\), \(x_{2}(t)\), \(x_{3}(t)\) correspond to \(x(t)\), \(y(t)\), \(z(t)\), respectively. Considering that parallel technology is applied to calculate the sum terms in (6)-(8), (11) and (12), here we use the empirical formula \[M=\left\lceil-1.5\log_{10}(tol)\right\rceil, \tag{15}\] to determine a proper order of Taylor expansion for high computational efficiency. Furthermore, considering that the round-off error of the data should be controlled at the same level as the temporal truncation error, we choose \[tol=10^{-N_{s}}, \tag{16}\] where \(N_{s}\) denotes the number of significant digits chosen for the multiple precision. In this way, we can control the background numerical noise by choosing the number \(N_{s}\) for multiple precision and keeping a balance between the temporal truncation error and round-off error via (15) and (16), with an optimal value of the time step via (14). The above-mentioned strategy can greatly increase the computational efficiency of the CNS algorithm. For the chaotic Lorenz system (3)-(5), according to (1), here \(\varepsilon_{0}=10^{-N_{s}}\) and \(\kappa\approx 0.91\) (i.e. \(\kappa/\ln 10\approx 0.40\)), we should choose \(N_{s}=4020\) so as to guarantee that the numerical noise \(\varepsilon(t)\) is nearly at the level of \(10^{-20}\) at \(t=10000\). In fact, using \(N_{s}=4020\) and the corresponding \(tol=10^{-N_{s}}=10^{-4020}\) and \(M=\lceil-1.5\log_{10}(tol)\rceil=\lceil 1.5N_{s}\rceil=6030\), we obtain a convergent simulation by means of a parallel CNS algorithm using 1200 CPUs of the National Supercomputer TH-2 at Guangzhou, China, which agrees in the accuracy of more than 20 significant digits in the whole interval of time \(t\in[0,10000]\) with the benchmark solution given by Liao & Wang [14]. Note that it took only 96.8 hours (i.e. about 4 days and 1 hour), just about 44% of the CPU time required by Liao & Wang [14] in a supercomputer. 
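The step-size and order selection of Section 2.1 fits in a few lines. The sketch below (Python/mpmath) takes, for each state component, the order-\((M-1)\) and order-\(M\) Taylor coefficients, e.g. those produced by a recurrence like the Lorenz one above; the working precision, the example numbers and all names are illustrative assumptions:

```python
import math
from mpmath import mp, mpf

mp.dps = 60   # illustrative working precision

def order_and_tolerance(ns: int):
    """Eqs. (15)-(16): tol = 10**(-N_s) and M = ceil(-1.5 * log10(tol)) = ceil(1.5 * N_s)."""
    return math.ceil(1.5 * ns), mpf(10) ** (-ns)

def optimal_step(coeffs_Mm1, coeffs_M, M, tol):
    """Optimal time step (14) from the order-(M-1) and order-M Taylor coefficients."""
    a = max(abs(c) for c in coeffs_Mm1)   # ||x_i^[M-1](t)||_inf over all components
    b = max(abs(c) for c in coeffs_M)     # ||x_i^[M](t)||_inf over all components
    return min(tol ** (mpf(1) / M) / a ** (mpf(1) / (M - 1)),
               tol ** (mpf(1) / (M + 1)) / b ** (mpf(1) / M))

M, tol = order_and_tolerance(30)
print(M, optimal_step([mpf(2), mpf(-1), mpf('0.5')], [mpf(1), mpf(3), mpf('-0.2')], M, tol))
```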
This illustrates that the computational efficiency of the CNS algorithm can be indeed greatly increased by using an optimal time step (14) and keeping a balance between the truncation error and round-off error, as mentioned above. ### Using self-adaptive multiple-precision The background numerical noise \(\varepsilon_{0}\) is determined by the maximum of the truncation error and round-off error. According to (1), one had to use very small background numerical noise \(\varepsilon_{0}\) so as to gain a convergent chaotic simulation in a long interval of time. This is indeed true. For example, to gain the convergent benchmark solution of the chaotic Lorenz system in \(t\in[0,10000]\), Liao & Wang [14] used the 3500th-order Taylor expansion (\(M=3500\)) with the time step \(\Delta t=0.01\) in the 4180-digit multiple precision (\(N_{s}=4180\)). The corresponding background numerical noise is indeed rather small. But, unfortunately, it is rather time-consuming. Note that a key point of the CNS is to determine the critical predictable time \(T_{c}\). Since there exists a balance between the truncation error and round-off error, for the chaotic Lorenz system (3)-(5) in the last section, it is reasonable to assume that the background numerical noise should be equal to the round-off error, say, \(\varepsilon_{0}=10^{-N_{s}}\), where \(N_{s}\) is the initial significant digit number of the multiple-precision (MP). Then, according to (1) we have \[\varepsilon(t)=\varepsilon_{0}\exp(\kappa\,t)=10^{-(N_{s}-\kappa\,t/\ln 10)}, \qquad\quad t\in[0,T_{c}], \tag{17}\] where the noise-growing exponent \(\kappa\approx 0.91\) is known and is generally equal to the leading Lyapunov exponent of a temporal chaos, and further \[\varepsilon_{c}=10^{-N_{s}+\kappa\,T_{c}/\ln 10}, \tag{18}\] which gives the relationship between the initial significant digit number \(N_{s}\) of the multiple-precision (MP) and the critical predictable time \(T_{c}\): \[N_{s}=\left\lceil\frac{\gamma\,\kappa\,T_{c}}{\ln 10}-\log_{10}\varepsilon_{c} \right\rceil, \tag{19}\] where \(\varepsilon_{c}\) denotes the critical numerical noise that is close to the order of magnitude of the true physical solution, \(\lceil\cdot\rceil\) stands for the ceiling function, \(\gamma\geq 1\) is a constant used here as a kind of safety factor, respectively. According to (17), the numerical noise \(\varepsilon(t)\) increases _exponentially_. Thus, after a short time such as at \(t=t^{\prime}\), \(\varepsilon(t^{\prime})\) becomes much larger than the background numerical noise \(\varepsilon_{0}\). So, it is _unnecessary_ to keep the background numerical noise \(\varepsilon_{0}\) being the _same_ in the whole interval \(t\in[0,T_{c}]\). In theory, according to (1), using a larger background numerical noise \(\varepsilon_{0}^{\prime}\) does not influence the numerical result in \(t\geq t^{\prime}\), as long as \(\varepsilon_{0}<\varepsilon_{0}^{\prime}\leq\varepsilon(t)\). Note that, for the above-mentioned CNS algorithm of the Lorenz system (3)-(5), the _larger_ background numerical noise corresponds to the multiple-precision with a _smaller_ number \(N_{s}\) of significant digits and a _larger_ allowed tolerance \(tol=10^{-N_{s}}\) that further leads to a _smaller_ order \(M=\lceil-1.5\log_{10}(tol)\rceil=\lceil 1.5N_{s}\rceil\) of Taylor expansion. 
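In code, the adaptive choice amounts to re-evaluating (19) with the remaining horizon \(T_{c}-t^{\prime}\) in place of \(T_{c}\). A minimal sketch (Python; the default values \(\gamma=1.1\), \(\kappa=0.91\), \(\varepsilon_{c}=10^{-2}\) and the function name are illustrative assumptions taken from the Lorenz discussion):

```python
import math

def digits_for_remaining_horizon(t_prime, Tc, gamma=1.1, kappa=0.91, eps_c=1e-2):
    """Number of significant digits N_s needed from time t_prime onwards, i.e. eq. (19)
    applied to the remaining horizon Tc - t_prime."""
    return math.ceil(gamma * kappa * (Tc - t_prime) / math.log(10.0) - math.log10(eps_c))

for t_prime in (0.0, 2500.0, 5000.0, 7500.0, 9500.0):
    print(t_prime, digits_for_remaining_horizon(t_prime, Tc=10000.0))
```

The precision requirement shrinks steadily as the simulation advances, which is exactly what the schedule described below exploits.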
Thus, according to (19), after integrating a temporal interval \(t\in[0,t^{\prime})\), where \(t^{\prime}<T_{c}\), it is sufficient to use a smaller number of significant digits \[N_{s}=\left\lceil\frac{\gamma\,\kappa\,(T_{c}-t^{\prime})}{\ln 10}-\log_{10} \varepsilon_{c}\right\rceil, \tag{20}\] and a _larger_ allowed tolerance \(tol=10^{-N_{s}}\) that further leads to a _smaller_ order \(M=\lceil-1.5\log_{10}(tol)\rceil=\lceil 1.5N_{s}\rceil\) of Taylor expansion, respectively, to gain the CNS result for \(t\geq t^{\prime}\). In practice, it is unnecessary to change \(N_{s}\), \(tol\) and the corresponding \(M\) at each time step but at some given times such as \(t^{\prime}=100\), 500, 1000 and so on. Substituting \(\kappa=0.91\) into (20) and choosing \(\varepsilon_{c}=10^{-2}\), we have the following relationship \[N_{s}=\left[\frac{\gamma\left(T_{c}-t^{*}\right)}{2.53}+2\right]\approx\left[ \frac{\gamma\left(T_{c}-t^{*}\right)}{2.53}\right], \tag{21}\] where \(t^{*}=n\,\Delta T\) with \(n=0,1,2,...\) and \(\Delta T\) being a constant such as \(\Delta T=25\), \(50\), \(100\), \(500\), \(1000\) and so on. In practice, there is \(T_{c}-t^{*}>500\) for the high enough remaining precision, say, the value of \(N_{s}\) is stopped decreasing when \(t>9500\) for the long time simulation with \(t\in[0,10000]\) in this section. Taking the safety factor \(\gamma=1.1\) and using different values of \(\Delta T\), the corresponding CPU times of this self-adaptive CNS algorithm with the adjustable multiple-precision (MP) and an optimal variable time step, for the chaotic Lorenz system (3)-(5) in \(t\in[0,10000]\), i.e. \(T_{c}=10000\), are listed in Table 1. It indicates that the required CPU time of the above-mentioned self-adaptive CNS algorithm is not very sensitive to the value of \(\Delta T\), and thus we can choose \(\Delta T=0.5\%\,T_{c}=50\) for the relatively higher computational efficiency. The CNS algorithm described in SS 2.1, together with the above-mentioned self-adaptive strategy, can greatly increase the computational efficiency of the CNS. For example, for the chaotic Lorenz system (3)-(5) in \(t\in[0,10000]\), using \(tol=10^{-N_{s}}\) and \(M=\left[-1.5\log_{10}(tol)\right]=\left[1.5\,N_{s}\right]\) with the optimal time step (14), where \(N_{s}\) is determined by (21) with taking \(\gamma=1.1\), \(T_{c}=10000\) and \(\Delta T=50\), we successfully obtain a reproducible/convergent numerical simulation by means of the parallel CNS algorithm using \(1200\) CPUs of the National Supercomputer TH-2 at Guangzhou, China, which agrees with the benchmark solution given by Liao & Wang [14] in the accuracy of at least \(20\) significant digits in the whole interval of time \(t\in[0,10000]\), as shown in Table 2. Note that it took only \(37.2\) hours (i.e. about \(1\) day and \(13\) hours), just about \(17\%\) of the CPU time of the previous CNS algorithm applied by Liao & Wang [14] who likewise used a supercomputer. Thus, the self-adaptive CNS algorithm mentioned above has indeed much higher computational efficiency than the previous CNS with the constant background numerical noise. In addition, the numerical precision of the above-mentioned self-adaptive CNS algorithm can be guaranteed: as shown in Fig. 
1, the results given by this strategy (marked by CNS-SA) and the CNS algorithm combined with the variable stepsize strategy described in SS 2.1 (marked by CNS-VS) are in the accuracy of at least \(20\) significant digits in the whole interval of time \(t\in[0,10000]\), which is obtained via the comparison with the previous CNS algorithm applied by Liao & Wang [14] that has the constant stepsize and the constant background numerical noise (marked by CNS-CS). \begin{table} \begin{tabular}{l c} \hline \hline \(\Delta T\) & CPU time (hours) \\ \hline 25 & 37.7 \\ 50 & 37.2 \\ 100 & 37.4 \\ 500 & 40.3 \\ 1000 & 42.6 \\ \hline \hline \end{tabular} \end{table} Table 1: CPU times of the self-adaptive CNS algorithm with the adjustable multiple-precision (MP) and an optimal variable time step for the chaotic Lorenz system (3)-(5) in \(t\in[0,10000]\), i.e. \(T_{c}=10000\), where the number \(N_{s}\) of significant digits is determined by (21) with the allowed tolerance \(tol=10^{-N_{s}}\) that further leads to the order \(M=\left[-1.5\log_{10}(tol)\right]=\left[1.5\,N_{s}\right]\) of Taylor expansion with the optimal time step (14), taking the safety factor \(\gamma=1.1\) and using different values of \(\Delta T\). \begin{table} \begin{tabular}{c c c c} \hline \hline \(t\) & \(x\) & \(y\) & \(z\) \\ \hline 1000 & 13.881997000862393623 & 19.918303160406394373 & 26.901943308376105536 \\ 2000 & -6.8738836932050180481 & -1.4848348276698421977 & 31.349521074674276721 \\ 3000 & 1.6932902170011335241 & 3.6003418650451083164 & 21.410875101298497293 \\ 4000 & -7.6926663606916323997 & -13.499590676622338604 & 14.199428882538458225 \\ 5000 & -6.0844510954990075032 & -10.813737089458431017 & 12.739116756422288312 \\ 6000 & 0.21673563458354078642 & 2.1042785739999677006 & 22.124608735478140521 \\ 7000 & -11.394859731998057561 & -16.575389386215504779 & 23.681268415272744261 \\ 8000 & -1.2658734776208739301 & -2.3362702560379947755 & 17.495968339114928401 \\ 9000 & 13.479653230046728502 & 17.282101858684218362 & 29.238196888213967777 \\ 10000 & -15.817277998368267071 & -17.366868329556944701 & 35.558386165882592794 \\ \hline \hline \end{tabular} \end{table} Table 2: Convergent result of the chaotic Lorenz system (3)-(5) in \(t\in[0,10000]\), i.e. \(T_{c}=10000\), given by the self-adaptive CNS algorithm using \(tol=0^{-N_{s}}\) and \(M=[-1.5\log_{10}(tol)]=[1.5N_{s}]\) with the optimal time step (14), where \(N_{s}\) is determined by (21) with taking \(\gamma=1.1\) and using \(\Delta T=100\). Figure 1: Evolving noises \(e(t)\) of the CNS results of the Lorenz system (3)-(5) in the whole interval of time \(t\in[0,10000]\), given by the CNS algorithm combined with the variable stepsize strategy (marked by CNS-VS, red solid line) using \(N_{s}=4020\), \(tol=10^{-N_{s}}=10^{-4020}\), \(M=[-1.5\log_{10}(tol)]=[1.5N_{s}]=6030\) with the optimal time step (14), and given by the self-adaptive CNS algorithm (marked by CNS-SA, blue solid line) using \(tol=10^{-N_{s}}\), \(M=[-1.5\log_{10}(tol)]=[1.5N_{s}]\) with the optimal time step (14), where \(N_{s}\) is determined by (21) with taking \(\gamma=1.1\), \(T_{c}=10000\) and \(\Delta T=100\). These evolving noises \(e(t)\) are obtained via the comparison with the previous CNS algorithm applied by Liao & Wang [14] that has the constant stepsize (marked by CNS-CS). Black dashed line: \(\log_{10}(e)=0.40t-4020\). 
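A crude back-of-the-envelope model makes the measured saving plausible. Purely as an illustration, assume that one time step costs roughly \(N_{s}^{2}\) (a naive model of multiple-precision multiplication; this cost model is our assumption, not part of the CNS). Then the cost of the decreasing-precision schedule (21), relative to a run that keeps the initial precision throughout, can be estimated as follows (Python; all names are ours):

```python
import math

def cost_ratio(Tc, dT, gamma=1.1, kappa=0.91, eps_c=1e-2, min_horizon=500.0):
    """Estimated cost of the schedule (21) relative to running with the initial N_s
    throughout, assuming (crudely) that one step costs ~ N_s**2."""
    def ns(t_star):
        horizon = max(Tc - t_star, min_horizon)   # N_s stops decreasing near the end
        return math.ceil(gamma * kappa * horizon / math.log(10.0) - math.log10(eps_c))
    digits = [ns(n * dT) for n in range(int(Tc // dT))]
    return sum(d * d for d in digits) / (digits[0] ** 2 * len(digits))

print(cost_ratio(10000.0, 50.0))   # roughly one third
```

The estimate of roughly one third is in the same ballpark as the observed ratio between the self-adaptive run (37.2 hours) and the variable-stepsize run with constant precision (96.8 hours); the real cost also depends on the Taylor order and on how the multiple-precision arithmetic actually scales.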
## 3 Some examples ### Self-adaptive CNS for hyper-chaotic Rossler system The chaotic four-dimensional Rossler system [36] \[\left\{\begin{array}{l}\dot{x}(t)=-\,y(t)-z(t),\\ \dot{y}(t)=x(t)+a\,y(t)+w(t),\\ \dot{z}(t)=b+x(t)\,z(t),\\ \dot{w}(t)=-\,c\,z(t)+d\,w(t),\end{array}\right.\] in the case of \[a=0.25,\;\;b=3,\;\;c=0.5,\;\;d=0.05, \tag{22}\] under the initial condition \[x(0)=-20,\;\;y(0)=z(0)=0,\;\;w(0)=15, \tag{23}\] has aroused wide concern as a typical hyper-chaotic system [36; 37; 38], since it has two positive Lyapunov exponents. How can we gain a convergent chaotic simulation of Rossler system (22)-(23) in the accuracy of 20 significant digits in \(t\in[0,10000]\)? The CNS algorithm for the hyper-chaotic Rossler system (22)-(23) is also based on the \(M\)th-order truncated Taylor series in the temporal interval \([t,t+\Delta t]\): \[x(t+\Delta t)\approx x(t)+\sum_{m=1}^{M}x^{[m]}(t)\,(\Delta t)^{m}, \tag{24}\] \[y(t+\Delta t)\approx y(t)+\sum_{m=1}^{M}y^{[m]}(t)\,(\Delta t)^{m}, \tag{25}\] \[z(t+\Delta t)\approx z(t)+\sum_{m=1}^{M}z^{[m]}(t)\,(\Delta t)^{m}, \tag{26}\] \[w(t+\Delta t)\approx w(t)+\sum_{m=1}^{M}w^{[m]}(t)\,(\Delta t)^{m}, \tag{27}\] where the high-order derivatives are governed by \[x^{[m]}(t)=\frac{1}{m}\,\left[-\,y^{[m-1]}(t)-z^{[m-1]}(t)\right], \tag{28}\] \[y^{[m]}(t)=\frac{1}{m}\,\left[x^{[m-1]}(t)+a\,y^{[m-1]}(t)+w^{[m-1]}(t)\right], \tag{29}\] \[z^{[m]}(t)=\frac{1}{m}\,\left[B_{m}+\sum_{i=0}^{m-1}x^{[i]}(t)\,z^{[m-1-i]}(t) \right], \tag{30}\] \[w^{[m]}(t)=\frac{1}{m}\,\left[-\,c\,z^{[m-1]}(t)+d\,w^{[m-1]}(t)\right], \tag{31}\] for arbitrary \(m\geq 1\) and \[B_{m}=\left\{\begin{array}{ll}b,&m=1,\\ 0,&m>1.\end{array}\right. \tag{32}\] Note that the parallel technology can be applied to calculate the sum terms in (24)-(27) and (30). It is easy for us to know that the maximum Lyapunov exponent of the hyper-chaotic Rossler system (22)-(23) is about 0.11, which gives us the corresponding noise-growing exponent \(\kappa\approx 0.11\) in (1). In this case, if \(\varepsilon_{0}=10^{-N_{s}}\), we have the numerical noise evolution \[\varepsilon(t)\approx\varepsilon_{0}\exp(0.11\,t)\approx 10^{-(N_{s}-0.048\,t)}.\] If our CNS simulation should be in the accuracy of at least 8 significant digits in the whole interval of \(t\in[0,10000]\), we have \(T_{c}=10000\) and \[-(N_{s}-0.048\,T_{c})\leq-8,\] which gives \(N_{s}\geq 488\), indicating that we should choose \(N_{s}=488\). Similarly, in the frame of the CNS, the background numerical noise (i.e. truncation error and round-off error) of this system can be decreased under a _required_ tiny level by means of choosing a large _enough_ order \(M\) of the Taylor expansion (24)-(27) and a large _enough_ number \(N_{s}\) of significant digits for multiple-precision. First, following Liao & Wang [14] who used a constant time step, we obtain a reproducible/convergent simulation of the hyperchaotic Rossler system (22)-(23) in \(t\in[0,10000]\) by means of a parallel CNS algorithm using the 415th-order Taylor expansion (\(M=415\)) with a fixed time step \(\Delta t=0.01\) in the multiple precision of 488 significant digits (\(N_{s}=488\)), as listed in Table 3. In fact, this convergent simulation result agrees in the accuracy of more than 8 significant digits in the whole interval of time \(t\in[0,10000]\) compared with the benchmark solution given by another CNS using the 500th-order Taylor expansion (\(M=500\)) with a fixed time step \(\Delta t=0.01\) in the multiple precision of 550 significant digits (\(N_{s}=550\)). 
It takes 5804 seconds (i.e. about 1 hours and 37 minutes) using 50 Intel's CPUs (Xeon Silver 4114) on our local cluster. Then, we apply the strategy of keeping a balance between truncation error and round-off error (mentioned in SS 2.1) to increase the computational efficiency of the CNS algorithm. The variable stepsize (VS) scheme is applied with an optimal time step determined by (14), where \(i=1,2,3,4\) is for this hyper-chaotic Rossler system and thus \(x_{1}(t)\), \(x_{2}(t)\), \(x_{3}(t)\), \(x_{4}(t)\) correspond to \(x(t)\), \(y(t)\), \(z(t)\), \(w(t)\), respectively. Considering that the parallel technology is applied to calculate the sum terms in (24)-(27) and (30), here we adopt the empirical formula \[M=\left\lceil-1.5\log_{10}(tol)\right\rceil, \tag{33}\] to choose a proper order of Taylor expansion for the high calculating efficiency. Besides, (16) is used to keep the balance between the round-off error and the truncation error. In this way, we can control the background numerical noise \(\varepsilon_{0}\) by choosing the number \(N_{s}\) of significant digits for multiple-precision, say, \(\varepsilon_{0}\) is at the level of \(10^{-N_{s}}\). By means of the CNS algorithm described in SS 2.1 using a _fixed_ value of \(N_{s}=488\) for multiple-precision, \(tol=10^{-488}\) for the allowed tolerance, \(M=\left\lceil-1.5\log_{10}(tol)\right\rceil=\left\lceil 1.5\,N_{s}\right\rceil=732\) for the order of Taylor expansion, and an optimal time step given by (14), we obtain a reproducible/convergent numerical simulation of the hyper-chaotic Rossler system (22)-(23) in \(t\in[0,10000]\), which gives exactly the same result as those listed in Table 3. And it takes 608 seconds (i.e. about 10 minutes) using 50 Intel's CPUs (Xeon Silver 4114) on our local cluster, say, only 10.5% \begin{table} \begin{tabular}{l c c c c} \hline \hline \(t\) & \(x\) & \(y\) & \(z\) & \(w\) \\ \hline 1000 & -33.992602 & -5.5093173 & 0.087878252 & 20.503330 \\ 2000 & -13.578396 & 9.9097524 & 0.23536319 & 13.972979 \\ 3000 & -76.551626 & 32.540905 & 0.039413405 & 40.392583 \\ 4000 & -27.968165 & -19.878204 & 0.10477112 & 24.712276 \\ 5000 & -21.983158 & 20.457146 & 0.14308917 & 23.787920 \\ 6000 & -11.968879 & 20.979201 & 0.32526220 & 26.481453 \\ 7000 & -5.9175355 & 11.379490 & 1.0511723 & 17.005655 \\ 8000 & -18.606119 & -8.2632834 & 0.15776751 & 17.709076 \\ 9000 & -16.668563 & 13.935457 & 0.18978718 & 30.452723 \\ 10000 & -56.166749 & 27.803911 & 0.053903377 & 29.797559 \\ \hline \hline \end{tabular} \end{table} Table 3: Convergent result of the hyper-chaotic Rössler system (22)-(23) in \(t\in[0,10000]\) given by the CNS parallel algorithm. CPU time of the previous CNS algorithm (i.e. 5804 seconds) with a _fixed_ time step. The convergence of this CNS result is confirmed by comparing it with another CNS result with the even smaller background numerical noise, given by a fixed value of \(N_{s}=550\) for multiple-precision, \(tol=10^{-550}\) for the allowed tolerance, \(M=\lceil-1.5\log_{10}(tol)\rceil=\lceil 1.5N_{s}\rceil=825\) for the order of Taylor expansion, and the optimal time step via (14). In addition, the computational efficiency can be further increased by means of the self-adaptive CNS algorithm described in SS 2.2. 
Substituting \(\kappa=0.11\) into (20) and choosing \(\varepsilon_{c}=10^{-2}\), we have the following relationship \[N_{s}=\left\lceil\frac{\gamma\left(T_{c}-t^{*}\right)}{20.9}+2\right\rceil \approx\left\lceil\frac{\gamma\left(T_{c}-t^{*}\right)}{20.9}\right\rceil, \tag{34}\] where \(t^{*}=n\,\Delta T\) with the non-negative integer \(n\). In practice, there is \(T_{c}-t^{*}>1000\) for the high enough remaining precision, say, the value of \(N_{s}\) is stopped decreasing when \(t>9000\) for the long time simulation with \(t\in[0,10000]\) in this section. Using different values of \(\Delta T\), the corresponding CPU times of this self-adaptive CNS algorithm with the adjustable multiple-precision (MP) and an optimal variable time step, for the hyper-chaotic Rossler system (22)-(23) in \(t\in[0,10000]\), i.e. \(T_{c}=10000\), are illustrated in Fig. 2. It indicates that there is an approximate linear relationship that the required CPU time (seconds) equals to \(0.04\,\Delta T+248\). Since the corresponding slope \(0.04\) is rather small, it reconfirms the conclusion that the required CPU time of the above-mentioned self-adaptive CNS algorithm is not very sensitive to the value of \(\Delta T\). Thus, in practice we can choose \(\Delta T=0.5\%\,T_{c}=50\) for the relatively higher computational efficiency. For the hyper-chaotic Rossler system (22)-(23) in \(t\in[0,10000]\), using \(tol=10^{-N_{s}}\) and \(M=\lceil-1.5\log_{10}(tol)\rceil=\lceil 1.5N_{s}\rceil\) (according to (16) and (33), respectively) with the optimal time step (14), where \(N_{s}\) is determined by (34) with taking \(\gamma=1.1\), \(T_{c}=10000\) and \(\Delta T=50\), we successfully obtain the _same_ reproducible/convergent numerical simulation by means of the parallel CNS algorithm together with the above-mentioned self-adaptive strategy using 50 Intel's CPUs (Xeon Silver 4114) on our local cluster, as listed in Table 3, which agrees with the benchmark solution given by another CNS, using the 500th-order Taylor expansion (\(M=500\)) with a fixed time step \(\Delta t=0.01\) in the multiple precision of 550 significant digits (\(N_{s}=550\)) mentioned above, in the accuracy of at least 8 significant digits in the whole interval of time \(t\in[0,10000]\). Especially, it takes only 250 seconds (i.e. about 4 minutes), say, only 4.3% of the CPU time (i.e. 5804 seconds) of the previous CNS algorithm with a _fixed_ time step and a _fixed_ value of \(N_{s}\) for multiple-precision. This further verifies the high computational efficiency of the self-adaptive CNS algorithm mentioned in SS 2. Figure 2: CPU times of the self-adaptive CNS algorithm for the hyper-chaotic Rössler system (22)-(23) in \(t\in[0,10000]\), given by different values of \(\Delta T\), using \(tol=10^{-N_{s}}\) and \(M=\lceil-1.5\log_{10}(tol)\rceil=\lceil 1.5N_{s}\rceil\) with the optimal time step (14), where \(N_{s}\) is determined by (34) with taking \(\gamma=1.1\) and \(T_{c}=10000\). Red circle: computed results; Black dashed line: CPU time (seconds) equals to \(0.04\,\Delta T+248\). ### Self-adaptive CNS for three-body problem Here let us consider the well-known three-body problem [1; 39; 40; 41], i.e. the motion of three celestial objects/bodies under their mutual gravitation. Let \(x_{1}\), \(x_{2}\), \(x_{3}\) denote three Cartesian coordinates and \(\mathbf{r}_{i}=(x_{1,i},x_{2,i},x_{2,i})\) denotes the corresponding position vector of the \(i\)th body. 
Considering Newton's law of gravitation, the motion of the three bodies is governed by the following non-dimensional equations \[\ddot{x}_{k,i}=\sum_{j=1,j\neq i}^{3}\rho_{j}\,\frac{x_{k,j}-x_{k,i}}{R_{i,j}^{3}},\qquad k=1,2,3, \tag{35}\] where \[R_{i,j}=\left[\sum_{k=1}^{3}(x_{k,j}-x_{k,i})^{2}\right]^{\frac{1}{2}} \tag{36}\] and \[\rho_{i}=\frac{m_{i}}{m_{1}},\qquad i=1,2,3 \tag{37}\] denotes the mass ratio, in which \(m_{i}\) denotes the mass of the \(i\)th body. Similarly, in the frame of the CNS, the background numerical noise (i.e. truncation error and round-off error) of solving the three-body problem (35) can be decreased under a _required_ tiny level by means of choosing a large _enough_ order \(M\) of the Taylor expansion and a large _enough_ number \(N_{s}\) of significant digits for multiple-precision. For more details, please refer to Liao [42]. Without loss of generality, in this paper we follow Liao [42] to consider the motion of three bodies with the initial positions \[\mathbf{r}_{1}=(0,0,-1)+d\mathbf{r}_{1},\qquad\mathbf{r}_{2}=(0,0,0),\qquad\mathbf{r}_{3}=-(\mathbf{r}_{1}+\mathbf{r}_{2}), \tag{38}\] as well as the initial velocities \[\dot{\mathbf{r}}_{1}=(0,-1,0),\qquad\dot{\mathbf{r}}_{2}=(1,1,0),\qquad\dot{\mathbf{r}}_{3}=-(\dot{\mathbf{r}}_{1}+\dot{\mathbf{r}}_{2}), \tag{39}\] where \(d\mathbf{r}_{1}=\delta\,(1,0,0)\) denotes the micro-level physical uncertainty with \(\delta=10^{-60}\). For simplicity, we consider the case of equal masses, say, \(\rho_{j}=1\) with \(j=1,2,3\). The maximum Lyapunov exponent of the above-mentioned three-body problem is about \(0.168\), which gives the corresponding noise-growing exponent \(\kappa\approx 0.168\) in (1). In this case, if \(\varepsilon_{0}=10^{-N_{s}}\), we have the numerical noise evolution \[\varepsilon(t)\approx\varepsilon_{0}\exp(0.168\,t)\approx 10^{-(N_{s}-0.073\,t)}.\] If our CNS simulation should be accurate to at least \(11\) significant digits in the whole interval \(t\in[0,1000]\), we have \(T_{c}=1000\) and \[-(N_{s}-0.073\,T_{c})\leq-11,\] which gives \(N_{s}\geq 84\), indicating that we should choose \(N_{s}=84\). First, following Liao [42] who used a constant time step, we obtain a reproducible/convergent simulation of the three-body problem (35)-(39) in \(t\in[0,1000]\) by means of a CNS algorithm using the \(45\)th-order Taylor expansion (\(M=45\)) with a fixed time step \(\Delta t=0.01\) in the multiple precision of \(84\) significant digits (\(N_{s}=84\)). In fact, this convergent simulation result agrees in the accuracy of at least \(11\) significant digits in the whole interval of time \(t\in[0,1000]\) with the benchmark solution (with the even smaller background numerical noise) given by another CNS using the \(60\)th-order Taylor expansion (\(M=60\)) with a fixed time step \(\Delta t=0.01\) in the multiple precision of \(100\) significant digits (\(N_{s}=100\)). It takes \(1327\) seconds (i.e. about \(22\) minutes) using Intel's CPU (Xeon Silver 4114) on our local cluster. Then, we apply the strategy of keeping a balance between truncation error and round-off error (mentioned in SS 2.1) to increase the computational efficiency of the CNS algorithm. The variable stepsize (VS) scheme is applied with an optimal time step determined by (14), where \(x_{i}\) is replaced by \(x_{k,i}\) for this three-body problem. 
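For orientation, the right-hand side (35)-(37) together with the initial configuration (38)-(39) can be sketched as follows (Python/NumPy in ordinary double precision; the disturbance \(\delta=10^{-60}\) is set to zero here because it is not representable in double precision, the function name is ours, and the actual CNS of course evaluates these terms in multiple precision):

```python
import numpy as np

def three_body_accelerations(r, rho):
    """Accelerations from eq. (35): r has shape (3, 3) with r[i] the position of body i,
    rho has shape (3,) with rho[i] = m_i / m_1 as in (37)."""
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if j != i:
                d = r[j] - r[i]                                # x_{k,j} - x_{k,i}
                a[i] += rho[j] * d / np.linalg.norm(d) ** 3    # R_{i,j} from (36)
    return a

# Initial positions (38) and velocities (39) for equal masses, with delta = 0.
r0 = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
v0 = np.array([[0.0, -1.0, 0.0], [1.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])  # records (39)
print(three_body_accelerations(r0, np.ones(3)))
```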
Considering that (16) is used to keep the balance between the round-off error and the truncation error, and there is no parallel technology applied in the CNS algorithm for solving the three-body problem (35)-(39), here we adopt the optimal order of Taylor expansion [43] \[M=\left\lceil 1.15\,N_{s}+1\right\rceil. \tag{40}\] In this way, we can control the background numerical noise \(\varepsilon_{0}\) by choosing the number \(N_{s}\) of significant digits for multiple-precision, say, \(\varepsilon_{0}\) is at the level of \(10^{-N_{s}}\). By means of the CNS algorithm described in SS 2.1 using a _fixed_ value of \(N_{s}=84\) for multiple-precision, \(tol=10^{-84}\) for the allowed tolerance, \(M=\left\lceil 1.15\,N_{s}+1\right\rceil=98\) for the order of Taylor expansion, and an optimal time step given by (14), we obtain a reproducible/convergent numerical simulation of the three-body problem (35)-(39) in \(t\in[0,1000]\), and it takes 370 seconds (i.e. about 6 minutes) using Intel's CPU (Xeon Silver 4114) on our local cluster, say, only 28% CPU time of the previous CNS algorithm (i.e. 1327 seconds) with a _fixed_ time step. Furthermore, this CNS result is in the accuracy of more than 11 significant digits in the whole interval of time \(t\in[0,1000]\), compared with the above-mentioned CNS benchmark solution. In addition, the computational efficiency can be further increased by means of the self-adaptive CNS algorithm described in SS 2.2. Substituting \(\kappa=0.168\) into (20) and choosing \(\gamma=1.1\), \(\varepsilon_{c}=10^{-2}\), we have the following relationship \[N_{s}=\left\lceil 0.08\left(T_{c}-t^{*}\right)+2\right\rceil, \tag{41}\] where \(t^{*}=n\,\Delta T\) with the non-negative integer \(n\). In practice, there is \(T_{c}-t^{*}>200\) for the high enough remaining precision, say, the value of \(N_{s}\) is stopped decreasing when \(t>800\) for the long time simulation with \(t\in[0,1000]\) in this section. Using different values of \(\Delta T\), the corresponding CPU times of this self-adaptive CNS algorithm with the adjustable multiple-precision (MP) and an optimal variable time step, for the three-body problem (35)-(39) in \(t\in[0,1000]\), i.e. \(T_{c}=1000\), are illustrated in Fig. 3. It indicates that there is an approximate linear relationship that the required CPU time (seconds) equals to \(0.16\,\Delta T+83\). Since the slope \(0.16\) of this linear relationship is rather small, it once again confirms the conclusion that the required CPU time of the above-mentioned self-adaptive CNS algorithm is not very sensitive to the value of \(\Delta T\). Thus, in practice we can choose \(\Delta T=0.5\%\,T_{c}=5\) for the relatively higher computational efficiency. For the three-body problem (35)-(39) in \(t\in[0,1000]\), using \(tol=10^{-N_{s}}\) and \(M=\left\lceil 1.15\,N_{s}+1\right\rceil\) (according to (16) and (40), respectively) with the optimal time step (14), where \(N_{s}\) is determined by (41) with taking \(T_{c}=1000\) Figure 3: CPU times of the self-adaptive CNS algorithm for the three-body problem (35)-(39) in \(t\in[0,1000]\), given by different values of \(\Delta T\), using \(tol=10^{-N_{s}}\) and \(M=\left\lceil 1.15\,N_{s}+1\right\rceil\) with the optimal time step (14), where \(N_{s}\) is determined by (41) with taking \(T_{c}=1000\). Red circle: computed results; Black dashed line: CPU time (seconds) equals to \(0.16\,\Delta T+83\). 
and \(\Delta T=5\), we successfully obtain the _same_ reproducible/convergent numerical simulation by means of the above-mentioned self-adaptive CNS algorithm using Intel's CPU (Xeon Silver 4114) on our local cluster, which agrees with the benchmark solution given by another CNS (with the even smaller background numerical noise), using the 60th-order Taylor expansion (\(M=60\)) with a fixed time step \(\Delta t=0.01\) in the multiple precision of 100 significant digits (\(N_{s}=100\)) mentioned above, in the accuracy of at least 11 significant digits in the whole interval of time \(t\in[0,1000]\). Especially, it takes only 84 seconds (i.e. less than 2 minutes), say, only 6.3% of the CPU time (i.e. 1327 seconds) of the previous CNS algorithm with a _fixed_ time step and a _fixed_ value of \(N_{s}\) for multiple-precision. This result also verifies the high computational efficiency of the self-adaptive CNS algorithm mentioned in SS 2. ### Self-adaptive CNS for spatiotemporal chaos Let us consider here a spatiotemporal chaos, i.e. a chain of pendulums coupled through the elastic restoring force, governed by the damped driven sine-Gordon equation [44; 45; 46]: \[u_{tt}(x,t)=u_{xx}(x,t)-sin[u(x,t)]-\alpha u_{t}(x,t)+\Gamma sin(\omega t- \lambda x), \tag{42}\] subject to a periodic boundary condition \[u(x+l,t)=u(x,t), \tag{43}\] where the subscript represents the spatial/temporal partial derivative, \(x\) and \(t\) denote the variables in the spatial and temporal dimensions, \(u(x,t)\) represents the angle of a pendulum, \(\alpha\) denotes a constant related to the damped friction, \(\Gamma\) denotes a constant related to the external force field, \(\omega\) is the temporal frequency and \(\lambda=2\pi/l\) is the spatial frequency, \(l\) denotes the total calculating length of the system, respectively. Without loss of generality, we follow Chacon et al. [44] to consider the following case \[\omega=\frac{3}{5},\ \ \alpha=\frac{1}{10},\ \ \Gamma=\frac{461}{500},\ \ l=500,\ \ \lambda=\frac{2\pi}{l}=\frac{\pi}{250}, \tag{44}\] with the initial condition \[u(x,0)=0,\ \ u_{t}(x,0)=0. \tag{45}\] As reported by Qin & Liao [16], the above-mentioned model corresponds to a spatiotemporal chaos, whose statistics are extremely sensitive to a small disturbance: such kind of chaos belongs to the so-called ultra-chaos, which is in a higher level of disorders than a normal-chaos, as reported by Liao & Qin [18]. Similarly, the CNS algorithm for the sine-Gordon equation (42)-(45) is also based on a high _enough_ order of Taylor expansion in the temporal dimension for decreasing the temporal truncation error under a _required_ tiny level, but combined with a high _enough_ order of the spatial Fourier expansion for a fine _enough_ spatial discretization for decreasing the spatial truncation error under a _required_ tiny level. First, the spatial interval \(x\in[0,l)\) is discretized uniformly by \(N\) equidistant points, say, \(x_{k}=l\,k/N\), where \(k=0,1,2,...,N-1\) and \(x_{k}\) denotes the \(k\)th discrete points in the physical space. Note that the parallel technology is applied in the spatial discretization. Then, in the temporal dimension, the \(M\)th-order Taylor expansion method is used in the temporal interval \([t,t+\Delta t]\), say, \[u(x_{k},t+\Delta t)\approx u(x_{k},t)+\sum_{m=1}^{M}u^{[m]}(x_{k},t)\,(\Delta t )^{m},\ \ \ \ \ \ \ \ 0\leq k\leq N, \tag{46}\] where \(\Delta t\) is the time step and \[u^{[m]}(x_{k},t)=\frac{1}{m!}\frac{\partial^{m}u(x_{k},t)}{\partial t^{m}}. 
\tag{47}\] From (46), we have the first-order derivative with respect to time: \[u_{t}(x_{k},t+\Delta t)=u^{[1]}(x_{k},t+\Delta t)\approx u^{[1]}(x_{k},t)+ \sum_{m=1}^{M}\,(m+1)\,u^{[m+1]}(x_{k},t)\,\left(\Delta t\right)^{m}. \tag{48}\] The high-order temporal derivatives in (46) and (48) can be obtained via differentiating both sides of Eq. (42) with respect to \(t\), while the corresponding spatial derivatives are approximated by means of the \(N\)th-order Fourier spectral expression, i.e. \[u^{[m]}(x,t)\approx\frac{1}{2}\,a_{m,0}(t)+\sum\limits_{n=1}^{\frac{N}{2}-1} \left[a_{m,n}(t)\,cos(\lambda nx)+b_{m,n}(t)\,sin(\lambda nx)\right]+a_{m, \frac{N}{2}}(t)\,cos\left(\frac{\lambda Nx}{2}\right), \tag{49}\] where the Fast Fourier Transform (FFT) algorithm as well as parallel technology can be adopted. For more details, please refer to Qin & Liao [16]. Besides, all physical and numerical variables/parameters are in the multiple precision with a large _enough_ number \(N_{s}\) of significant digits so as to decrease the round-off error under a _required_ tiny level. As reported by Hu & Liao [15] and Qin & Liao [16], this kind of parallel CNS algorithm in physical space for spatio-temporal chaos has much higher computational efficiency than the previous CNS algorithm in spectrum space [27]. To further increase the computational efficiency, Qin & Liao [16] applied the VS scheme in the temporal dimension with a given allowed tolerance _tol_ for solving the governing equation, using an optimal time step determined by: \[\Delta t=min\left(\frac{tol^{\frac{1}{M}}}{||u^{[M-1]}(x_{k},t)||_{\infty}^{ \frac{1}{M-1}}},\frac{tol^{\frac{1}{M+1}}}{||u^{[M]}(x_{k},t)||_{\infty}^{ \frac{1}{M}}}\right), \tag{50}\] where \(||\ ||_{\infty}\) is the infinite norm for the variable \(x_{k}\). We adopted the empirical formula \(M=\lceil-\log_{10}(tol)-10\rceil\) to determine a proper order of Taylor expansion for the high calculating efficiency [16]. In addition, we use (16) to balance the round-off error at the same level of the temporal truncation error. In this way, one can control the background numerical noise \(\varepsilon_{0}\) only by means of choosing the number \(N_{s}\) of significant digits for multiple precision. In this way, Liao & Qin [18] obtained a convergent chaotic simulation \(u(x,t)\) of the damped driven sine-Gordon equation (42)-(45) in \(t\in[0,3600]\) by means of a parallel algorithm of the CNS using \(N=2^{16}=65536\) and \(N_{s}=230\), corresponding to \(tol=10^{-230}\) and \(M=220\) according to (16) and (54), respectively, with the optimal time step via (50). It took \(202.6\) hours (about \(8\) days and \(11\) hours) using \(256\) Intel's CPUs (Xeon Silver \(4114\)) on our local cluster. To confirm its convergence in the whole interval \(t\in[0,3600]\), say, \(T_{c}=3600\), Liao & Qin [18] obtained another CNS result \(u^{\prime}(x,t)\) with the even smaller background numerical noise using the same \(N=65536\) but \(N_{s}=240\), corresponding to \(tol=10^{-240}\) and \(M=230\) according to (16) and (54), respectively, with the optimal time step via (50). The deviation of \(u(x,t)\) from \(u^{\prime}(x,t)\) is given by \[\varepsilon(t)=\frac{\sum\limits_{n=0}^{\frac{N}{2}}\left|(c_{n}^{\prime})^{2 }-(c_{n})^{2}\right|}{\sum\limits_{n=0}^{\frac{N}{2}}|c_{n}^{\prime}|^{2}}, \tag{51}\] where \(c_{n}\) and \(c_{n}^{\prime}\) are the the complex coefficients of the spatial Fourier expansion of \(u(x,t)\) and \(u^{\prime}(x,t)\) at a given time \(t\), respectively. 
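Two building blocks of this spatiotemporal CNS, the Fourier spectral evaluation of spatial derivatives behind (49) and the deviation measure (51), can be sketched as follows (Python/NumPy in double precision; the actual CNS performs these operations in multiple precision, and the function names are ours):

```python
import numpy as np

def spectral_uxx(u, l):
    """Second spatial derivative of a periodic sample u on [0, l), via its Fourier modes as in (49)."""
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=l / u.size)    # wavenumbers lambda * n
    return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

def deviation(u, u_ref):
    """Deviation (51) between two fields, from the coefficients of their spatial Fourier expansions."""
    c, c_ref = np.fft.rfft(u), np.fft.rfft(u_ref)
    return np.sum(np.abs(c_ref ** 2 - c ** 2)) / np.sum(np.abs(c_ref) ** 2)

# Consistency check on u = sin(lambda * x), whose exact second derivative is -lambda**2 * u.
l, N = 500.0, 4096
x = np.linspace(0.0, l, N, endpoint=False)
lam = 2.0 * np.pi / l
u = np.sin(lam * x)
print(np.max(np.abs(spectral_uxx(u, l) + lam ** 2 * u)), deviation(u, 1.000001 * u))
```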
It was found that the deviation grows exponentially, say, \[\varepsilon(t)\approx\varepsilon_{0}\exp(\kappa\,t), \tag{52}\] where \(\varepsilon_{0}=10^{-N_{s}}\) is the background numerical noise and \(\kappa\approx 0.14\) is the noise-growing exponent. For more details, please refer to Liao & Qin [18]. Similarly, to further increase the computational efficiency, here we adopt the self-adaptive CNS algorithm described in SS 2.2. Substituting \(\kappa=0.14\) into (20) and choosing \(\varepsilon_{c}=10^{-2}\), we have the following relationship \[N_{s}=\left\lceil\frac{\gamma(T_{c}-t^{*})}{16.4}+2\right\rceil, \tag{53}\] where \(t^{*}=n\,\Delta T\) with the non-negative integer \(n\). In practice, \(\Delta T=0.5\%\,T_{c}=18\) is chosen for the higher computational efficiency, and \(T_{c}-t^{*}>600\) is required for a high enough remaining precision, say, the value of \(N_{s}\) stops decreasing when \(t>3000\) for the long-time simulation with \(t\in[0,3600]\) in this section. Considering that (16) is used to keep the balance between the round-off error and the truncation error, and there is no parallel technology applied in the CNS algorithm for solving the sine-Gordon equation (42)-(45), here we also adopt the optimal order of Taylor expansion [43] \[M=\lceil 1.15\,N_{s}+1\rceil\,. \tag{54}\] We adopt the self-adaptive CNS algorithm mentioned above for solving the sine-Gordon equation (42)-(45), using \(N=65536\) for the spatial discretization, the allowed tolerance \(tol=10^{-N_{s}}\) of the governing equations via (16), and the order \(M=\lceil 1.15\,N_{s}+1\rceil\) of Taylor expansion via (54) with the optimal time step (50), respectively, where the self-adaptive number \(N_{s}\) of significant digits for multiple precision is determined by (53), taking \(\gamma=1.2\), \(T_{c}=3600\) and \(\Delta T=18\). Here it should be emphasized that, according to the definition (51), the exponential law (52) describes an averaged evolution of the deviation, and the real deviations at different discrete points in the physical space fluctuate around this average. Thus we choose a relatively large safety factor \(\gamma=1.2\) to guarantee enough precision. Note that the _same_ reproducible/convergent numerical result in the averaged accuracy of 5 significant digits in the whole interval of time \(t\in[0,3600]\) (compared with the benchmark solution given by another CNS using the same \(N=65536\) but a fixed value of \(N_{s}=240\), corresponding to \(tol=10^{-240}\) and \(M=230\) according to (16) and (54), respectively) is obtained by means of the self-adaptive CNS algorithm mentioned above, which takes 76.9 hours (i.e. about 3 days and 5 hours) using 256 Intel's CPUs (Xeon Silver 4114) on our local cluster, say, only about 38% of the CPU time (i.e. 202.6 hours) of the previous CNS algorithm. This illustrates that the self-adaptive CNS algorithm described in SS 2 can indeed greatly increase the computational efficiency for a spatiotemporal chaos.

## 4 Conclusion

The background numerical noise \(\varepsilon_{0}\) is determined by the maximum of the truncation error and the round-off error. For a chaotic dynamical system, the numerical error \(\varepsilon(t)\) grows exponentially, say, \(\varepsilon(t)=\varepsilon_{0}\exp(\kappa\,t)\), where \(\kappa>0\) is the so-called noise-growing exponent.
This is the reason why one cannot gain a convergent simulation of a chaotic system in a long enough interval of time by means of traditional algorithms in double precision: the background numerical noise \(\varepsilon_{0}\) cannot be decreased below the round-off level of double precision. This restriction can be overcome by means of the clean numerical simulation (CNS) [11; 12; 13; 14], which can decrease the background numerical noise \(\varepsilon_{0}\) to any required tiny level. Many successful applications show the novelty and validity of the CNS [15; 16; 17; 18; 26; 27; 28; 29; 30; 33; 34]. In this paper, we propose some strategies to greatly increase the computational efficiency of the CNS algorithms for chaotic dynamical systems. We suggest keeping a balance between the truncation error and the round-off error and, in addition, progressively enlarging the background numerical noise \(\varepsilon_{0}\), since the exponentially increasing numerical noise \(\varepsilon(t)\) quickly becomes much larger than \(\varepsilon_{0}\). To illustrate the validity of these strategies, we apply the CNS algorithm combined with the self-adaptive precision to several chaotic dynamical systems, such as the Lorenz system, the hyper-chaotic Rossler system, the three-body problem, and a spatiotemporal chaos governed by the damped driven sine-Gordon equation. All of our results indicate that the self-adaptive CNS algorithm can indeed greatly increase the computational efficiency for chaotic systems.

## Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## Acknowledgments

This work is partly supported by National Natural Science Foundation of China (No. 12272230) and Shanghai Pilot Program for Basic Research - Shanghai Jiao Tong University (No. 21TQ1400202). The parallel algorithms for the Lorenz system in this paper were performed on TH-2 at National Supercomputer Centre in Guangzhou, China.
2309.15183
The Shortest Route Is Not Always the Fastest: Probability-Modeled Stereoscopic Eye Movement Completion Time in VR
Speed and consistency of target-shifting play a crucial role in human ability to perform complex tasks. Shifting our gaze between objects of interest quickly and consistently requires changes both in depth and direction. Gaze changes in depth are driven by slow, inconsistent vergence movements which rotate the eyes in opposite directions, while changes in direction are driven by ballistic, consistent movements called saccades, which rotate the eyes in the same direction. In the natural world, most of our eye movements are a combination of both types. While scientific consensus on the nature of saccades exists, vergence and combined movements remain less understood and agreed upon. We eschew the lack of scientific consensus in favor of proposing an operationalized computational model which predicts the speed of any type of gaze movement during target-shifting in 3D. To this end, we conduct a psychophysical study in a stereo VR environment to collect more than 12,000 gaze movement trials, analyze the temporal distribution of the observed gaze movements, and fit a probabilistic model to the data. We perform a series of objective measurements and user studies to validate the model. The results demonstrate its predictive accuracy, generalization, as well as applications for optimizing visual performance by altering content placement. Lastly, we leverage the model to measure differences in human target-changing time relative to the natural world, as well as suggest scene-aware projection depth. By incorporating the complexities and randomness of human oculomotor control, we hope this research will support new behavior-aware metrics for VR/AR display design, interface layout, and gaze-contingent rendering.
Budmonde Duinkharjav, Benjamin Liang, Anjul Patney, Rachel Brown, Qi Sun
2023-09-26T18:40:17Z
http://arxiv.org/abs/2309.15183v2
# The Shortest Route Is Not Always the Fastest: Probability-Modeled Stereoscopic Eye Movement Completion Time in VR

###### Abstract

Speed and consistency of target-shifting play a crucial role in human ability to perform complex tasks. Shifting our gaze between objects of interest quickly and consistently requires changes both in depth and direction. Gaze changes in depth are driven by slow, inconsistent _vergence movements_ which rotate the eyes in opposite directions, while changes in direction are driven by ballistic, consistent movements called _saccades_, which rotate the eyes in the same direction. In the natural world, most of our eye movements are a combination of both types. While scientific consensus on the nature of saccades exists, vergence and combined movements remain less understood and agreed upon. We eschew the lack of scientific consensus in favor of proposing an operationalized computational model which predicts the completion time of any type of gaze movement during target-shifting in 3D. To this end, we conduct a psychophysical study in a stereo VR environment to collect more than 12,000 gaze movement trials, analyze the temporal distribution of the observed gaze movements, and fit a probabilistic model to the data. We perform a series of objective measurements and user studies to validate the model. The results demonstrate its predictive accuracy, generalization, as well as applications for optimizing visual performance by altering content placement. Lastly, we leverage the model to measure differences in human target-changing time relative to the natural world, as well as suggest scene-aware projection depth. By incorporating the complexities and randomness of human oculomotor control, we hope this research will support new behavior-aware metrics for VR/AR display design, interface layout, and gaze-contingent rendering.
## 1. Introduction

Gaze movement patterns are dictated by the strengths and limitations of the visual system. Visual acuity is much higher in the central region of the retina, encouraging observers to first shift their gaze to bring targets of interest into the fovea prior to analyzing any details. Furthermore, the binocular nature of human vision dictates that both left and right eyes must move in coordination to focus at the same location. Consequently, several distinct classes of eye movement patterns have evolved in humans to fulfill various roles and are used in different situations.
Due to the underlying neurological and mechanical limitations of eye movements, each one exhibits distinct performance characteristics; some are slow and steady, while others are ballistic and jerky. The combination of all classes of movements forms an efficient and comprehensive overall gaze behavior strategy in 3D visual environments. The speed of these movements is critical in complex tasks such as driving, where we rapidly move our eyes to acquire a plethora of information from the surroundings such as the presence of pedestrians, approaching vehicles, the speedometer reading, and even GPS navigation instructions. In those tasks, there is always a delay between the decision to acquire a visual target and our two eyes successfully landing on it. We ask "how long is this delay and how does it depend on the displacement of our gaze location?". With the emerging adoption of virtual/augmented reality (VR/AR), answering this question enables us to design 3D content that allows for efficient target changing.

Prior vision science studies suggest that gaze shifts move along two primary axes (Figure 2a): one in _direction_ and the other in _depth_ (Zee et al., 1992). Highly rapid and consistent eye motions that quickly shift to a peripheral location, called _saccades_, are crucial for fast reaction to targets in different directions. In contrast, eye movements that shift the gaze in depth by rotating each eye in opposing directions, called _vergence movements_, are relatively slower and more inconsistent. Often, both of these movements are executed concurrently, and the performance of such _combined_ movements exhibits a different time signature which is faster than pure vergence movements, but slower than pure saccades (Bucci et al., 2006; Lang et al., 2014; Yang and Kapoula, 2004; Zee et al., 1992). While the vision science literature has extensively studied saccadic movements and provided comprehensive models for their temporal characteristics (i.e., the main sequence (Bahill et al., 1975b; van Beers, 2008)), the nature of vergence and combined movements remains the subject of competing theories (Chen et al., 2010; Cullen and Van Horn, 2011; King, 2011).

As an alternative, we present the first operational model that predicts the eye movement completion time required for shifting the gaze to new 3D targets in stereoscopic virtual environments. We recognize the current lack of first-principle consensus on how vergence/combined eye movements are neurologically constructed. Additionally, we note that noise in both human behavior and eye-tracking adds difficulty to a comprehensive study of complex stereoscopic movements with downstream applications. Circumventing these obstacles, we take a holistic approach to (1) focus on _when_ both eyes land on a target after its onset, instead of the intermediate trajectory; and (2) form a computational model which accounts for the noise and variability to produce a _probabilistic_ prediction, instead of a deterministic one. We fit our model and validate its accuracy using our psychophysical study data, which includes more than \(12,000\) individual trials to measure the temporal offsets of gaze movements in a stereo VR environment. The results evidence the model's consistent prediction accuracy, generalizability to unseen participants and trials, as well as the capability of forecasting and optimizing task performance with various real-world VR scenarios.
Our model can be applied to measure the difficulty of video games in VR and how the scale of variability in depth can alter gaze movement behaviors for users. We also explore how completion time predictions can be used as a metric for evaluating the placement of 3D UI elements in VR/AR applications. Recalling the driving example, we can improve driver awareness by placing a virtual car dashboard overlay (with speedometer readings and navigation instructions etc.) in an adaptive manner to minimize completion times of objects that appear in the driver's periphery in changing surrounding environments. This research aims to propose an operational model for computer graphics applications for a behavioral phenomenon that is yet to be fully understood. We believe that providing a quantitative understanding of how emerging VR/AR technology influences statistical signatures of human target-changing performance during daily tasks is beneficial even without the neurological understanding of the underlying behaviors. We hope the research can serve as a novel benchmark to guide 3D interfaces and act as a metric for the "user performance" in various applications and mediums. To this aim, we will release the source code and de-identified study data at www.github.com/NYU-ICL/stereo-latency. In summary, our main contributions include: * a series of psychophysical studies and data which systematically characterize visual performance (measured by completion/offset time) across various vergence-saccade combined eye movements in VR; * an operational model that predicts the statistical distribution of completion times; * demonstration of the model's accuracy and effectiveness in predicting and optimizing VR users' target-changing performance in natural scenarios; * model application to measure users' visual performance discrepancies among various games, 2D and VR displays, as well as recommendations for depth designs for 3D user interfaces. ## 2. Related Work ### Eye Movement, Visual Behaviors, and Performance Human eyes are highly dynamic, consisting of various types of movements including smooth pursuit, vestibulo-ocular, saccade, and vergence movements. Saccade and vergence are the two most frequent movements to redirect gaze in 3D spaces (Lang et al., 2014). There has been extensive study of them in the context of computer graphics, displays, and interactions (Hadnett-Hunter et al., 2019; Tarbus, 2013). Unlike most traditional desktop displays, VR/AR platforms provide high field-of-view stereoscopic displays, which simultaneously unlock both saccade and vergence movements. Understanding the timing of these visual movements is essential in broad applications such as esports [14], driving [15], and healthcare [1]. Pure saccades are rapid and conjugate eye movements that change the direction of gaze along a circle of iso-vergence (or the geometric horopter) which is computed using the centers of the two eyes and the fixation point (Figure 2). In the scope of this work, we simplify the measurements by equalizing the optical and visual axes (cf. [13, 14]), leaving the study of this difference as future work. Saccades are high-speed, ballistic motions with short travel times and a probability distribution of spatial error skewing towards undershooting the target location [10]. The scan path, speed, and spatial accuracy of a saccade are all influenced by the characteristics of the visual content [1, 15, 16, 17, 18, 19], and have been extensively studied and modeled [13, 14, 15, 16]. 
Although those features can also be influenced by visual tasks [12, 13], studies on the _main sequence_[13] show the consistency in completion time after the ocular-motor-controlled movement starts, independent of cognitive factors. By comparison, pure vergences are both slower and disconjugate, directing the gaze to a new location in depth and thereby defining a new geometric horopter. In stereo displays that lack accommodative cues, the displacement of the images presented to the two eyes provides an essential depth cue that drives vergence eye movements. In the context of VR/AR, the conflict between the variable vergence cues provided by stereo displacement and the static accommodation cue corresponding to the display depth commonly causes discomfort, known as vergence-accommodation conflict [16]. The duration of pure vergence movements is influenced by travel distance, direction, and starting depth [14]. Measurement of vergence movements are also more challenging compared to saccades due to the relatively smaller amplitude of movements [15, 16], inconsistent performance [13], complex neural coding [17, 18, 19], and a higher sensitivity to external factors such as pupil dilation [10, 15, 16]. In the real 3D world, saccade and vergence movements are more commonly combined than isolated because of the 3D distribution of visual targets [13, 14]. Prior literature has demonstrated that, relative to pure vergence, these combined eye movements are accelerated by the addition of saccades [13, 14, 15, 16]. Competing theories attempt to untangle the neurological pathways that control vergence and combined movements, and fully explain their behaviors [15, 16, 17]. However, there is no definitive and agreed-upon theory within the literature [17, 18], as exists for saccadic movements [13]. Therefore, despite the critical importance of combined eye movements, we still lack an analytical understanding of how different vergence-saccade combinations quantitatively influence visual performance. For instance, although adding a small saccade offset to a 3D target location may accelerate a slower vergence movement, would an extra long saccade provide even more acceleration, or would the benefits of the saccade be outweighed by additional travel time? If so, what size saccade is optimal for producing the fastest vergence movement? Our work attempts to answer these questions by quantifying the scale of this acceleration effect across different amplitudes of 3D gaze movements into a continuous domain probabilistic model for predicting gaze offset times, and side-step the need to explicitly depict the vast complexity of vergence-saccade movement behaviors. ### Stereo Vision and Stereopsis-Aware Optimization Understanding stereo vision in order to optimize computer graphics systems and user experience, especially in VR/AR environments, Fig. 2. _Illustration of various eye movements._ (a) We illustrate how we define and measure the angles of eye vergence movements \(\alpha_{v}\) and saccadic movements \(\alpha_{s}\) throughout the paper. For further intuition, the physical distance of objects appearing at \(\alpha_{s}=0^{\circ}\) is illustrated in units of meters, and Diopters (i.e., reciprocal of meters). Here, interpupillary distance (IPD) is chosen to be equal to the human average of 63 mm [14]. The optical display depth of the headset is overlaid as a horizontal black bar at a depth of 0.85 m, or 1.2 D. 
(b) In vergence motion, the two eyes move symmetrically in opposing directions; away from each other in divergent movement and towards each other in convergent movement. (c) In saccadic motion, both eyes rotate by the same amount in the same direction. (d) In combined motion, each eye moves a different amount. The rotation of each eye can be derived as the sum and difference of the corresponding vergence and saccadic coordinate shift as defined in (a). remains a popular research frontier (Aizenman et al., 2022; Shi et al., 2022). Most of today's consumer VR/AR devices are incapable of supporting accommodation; therefore, stereopsis is still the primary means by which these devices _improve_ depth perception over conventional 2D displays. Numerous efforts have been made to optimize stereoscopic content with gaze tracking so as to enhance the perceived realism of depth in virtual environments. Examples include grain positioning (Templin et al., 2014), as well as optimizations considering depth (Kellnhofer et al., 2016; Templin et al., 2014), luminance (Wolski et al., 2022), shading material (Chapiro et al., 2015), and displays (Chapiro et al., 2014; Zhong et al., 2021). With the surge of low-cost and low-power gaze-tracking, another emerging research line incorporates dynamic cues such as motion parallax (Kellnhofer et al., 2016). Depth cues may be enhanced by incorporating these various rotation and projection centers (Konrad et al., 2020; Krajancich et al., 2020). Reduced depth acuity in peripheral vision has also been leveraged to accelerate neural rendering (Deng et al., 2022) and image reconstruction (Kaplanyan et al., 2019). ## 3. Measuring and Predicting Stereoscopic Eye Movement Completion Time To quantitatively understand combined stereoscopic eye movements, we first performed a psychophysical experiment with a wide field-of-view stereo VR display. The study measured how jointly varyingvergence and saccade amplitudes influence the time required for an observer's eyes to reach a 3D target relative to stimulus onset; this duration is often referred to as the eye movement _offset time_. The data then serve as the foundation of our model (detailed in Section 3.4) for predicting the offset timing of various eye movements. ### Experimental Design Participants and setupEight participants (ages 20-32, 6 male) with normal or corrected-to-normal vision were recruited. Due to the demanding requirements, established low-level psychophysical research commonly starts with pilot studies involving a small number of participants and leverages the collected data to develop computational models (e.g., the foveated rendering literature (Krajancich et al., 2021, 2023; Patney et al., 2016; Sun et al., 2020)). These models, constructed using data from a limited set of subjects, can be evaluated for their cross-subject generalizability using a larger group of users, as we performed in Section 4.3 with 12 additional unseen participants. Moreover, in the context of our work, psychophysical studies examining the temporal dynamics of human behaviors require remarkably large sample sizes for a comprehensive statistical pattern to account for neural and mechanical noise (Bucci et al., 2006; Collewijn et al., 1995; Erkelens et al., 1989; van Beers, 2007; Yang and Kapoula, 2004). 
Considering that variations among subjects do not exhibit a significant impact on the completion rate of low-level gaze movements like saccades (Bahill et al., 1975) and vergence movements (Collewijn et al., 1995; Erkelens et al., 1989) - as confirmed by our cross-validation analysis in Section 4.2 - and given that these are objective psychophysical behaviors not reliant on subjective reporting, we chose to enlist a small number of participants while acquiring an extensive sample size (1,500+ trials) per participant. To this aim, we split the study across multiple days for every participant (see _Conditions_ paragraph for details). The study was conducted with a Vario Aero head-mounted VR display (HMD) with the relevant specifications detailed in Supplement A. As shown in Figure 2(a), throughout the study, participants wearing the HMD remained seated and performed the visual-target-changing task as detailed in the _Task and Stimuli_ paragraph. Before the experiment, participants underwent a "preamble" checklist to ensure proper task completion and accuracy, including: 1. Measure and calibrate the HMD's inter-pupillary distance (IPD). 2. Complete a five-point calibration for accurate binocular gaze tracking (repeat whenever the HMD is re-mounted after breaks). 3. Adjust a fixation point between the nearest and furthest depths at which experimental stimuli appeared to ensure the success of fusing the stereoscopic visual stimuli (i.e., no double-vision). Task and stimuliParticipants' task was to shift their gaze to land on targets appearing in 3D space. At the beginning of each trial, they were instructed to observe the fixation stimulus at the center of the screen. As illustrated in Figure 2(a), this stimulus included a combination of a cross and four circular flankers to assist fixation (Thaler et al., 2013). Once successful fixation was detected, this stimulus disappeared and was immediately replaced by a target stimulus, to which participants were instructed to move their gaze to as naturally as possible with a single gaze motion. The target stimulus was a Gaussian blob with \(\sigma=0.25^{\circ}\) and peak luminance of \(150\) cd/m\({}^{2}\) - a similar design as in Lisi et al. (2019). To ensure stable tracking, a trial only began if the participant's eyes were within \(1.2^{\circ}\) to the center of the fixation point for a consecutive \(0.4\) s. If the participant failed to hold their gaze at the fixation point for sufficient duration more than three consecutive times, the eye-tracker was re-calibrated. Additionally, to ensure correct task completion, we rejected and repeated a trial if it was completed in less than \(0.1\) s or more than \(1.3\) s. To avoid fatigue, participants were shown a darkened screen between trials as a cue to blink or close their eyes, if they: (1) successfully completed a trial, (2) failed to hold their gaze on the starting fixation point, or (3) failed a trial. Definitions and annotationsOffset times are known to vary depending on the spatial location of the stimuli, mostly due to the varying contributions of either saccadic or vergence movements, often superimposed on each other (Zee et al., 1992). In order to study how the spatial placement of the stimuli influences what type of eye movements arise, we parameterize spatial locations using two parameters: the vergence angle, \(\alpha_{v}\), and the saccade angle, \(\alpha_{s}\), as illustrated in Figure 1(a). 
All stimulus locations in the transverse plane containing the participants' eyes can be encoded using the two degrees of freedom provided by \(\alpha_{v}\) and \(\alpha_{s}\). Specifically, following vision science practice, we define the vergence angle as the angle formed by the intersection of the gaze rays. That is, if we denote the signed angles of the left and right eyes, with respect to the forward "\(z\)" direction (i.e. the intersection between the transverse and median planes) as \(\alpha_{l}\) and \(\alpha_{r}\), the vergence angle is equal to \[\alpha_{v}=\alpha_{l}-\alpha_{r}. \tag{1}\] The set of gaze locations that have the same \(\alpha_{v}\) form an _isovergence circle_, visualized as the orange circles in Figure 2a. Pure vergence movements maintain the direction of gaze and move the gaze point from one isovergence circle to another. On the other hand, the saccade angle, \(\alpha_{s}\), is defined as the mean of the angles of the left and right eyes: \[\alpha_{s}=(\alpha_{l}+\alpha_{r})/2. \tag{2}\] The set of gaze locations that have the same \(\alpha_{s}\) form a ray representing the direction of gaze, visualized as the blue lines in Figure 2a. Pure saccade movements remain on the same isovergence circle while rotating the direction of gaze across the transverse plane. Therefore, a vergence and saccade angle pair, \(\mathbf{\alpha}=(\alpha_{v},\alpha_{s})\), uniquely defines a point on the transverse plane via the intersection of the isovergence circle which corresponds to \(\alpha_{v}\), and the direction of gaze which corresponds to \(\alpha_{s}\). An arbitrary gaze movement in this coordinate system can be represented as a displacement vector, \[\Delta\mathbf{\alpha}=\mathbf{\alpha}^{t}-\mathbf{\alpha}^{o}=(\alpha_{v}^{t}-\alpha_{v}^{o},\;\alpha_{s}^{t}-\alpha_{s}^{o})=(\Delta\alpha_{v},\Delta\alpha_{s}), \tag{3}\] for movement from \(\mathbf{\alpha}^{o\,(origin)}=(\alpha_{v}^{o},\alpha_{s}^{o})\) to \(\mathbf{\alpha}^{t\,(target)}=(\alpha_{v}^{t},\alpha_{s}^{t})\).

#### Conditions

We define a condition by a pair \(\{\mathbf{\alpha}^{o},\Delta\mathbf{\alpha}\}\). We sought to create a grid of experimental conditions which cover a wide set of possible gaze movements. Today's VR devices limit the breadth of applicable eye movements. Here we discuss these limitations as well as the solutions we implemented to ensure study accuracy. First, we observed that participants could not fuse a stereo stimulus when it was placed too close, causing double (yet in-focus) vision. This restricted the range of possible vergence movements we could study in VR. We believe this effect is due to the lack of support for variable accommodation in VR displays, and thus distorted depth cues due to the _vergence-accommodation conflict_ [18, 19, 20]. To establish a conservative _minimum_ depth with successful stereo stimulus fusion, we performed a pre-study test with 4 participants with various inter-pupillary distances (IPDs) (\(64-71\) mm). Through this experiment, we established that this depth is approximately \(d_{min}=0.4\) m in front of the observer. This corresponds to a _maximum_ vergence angle coordinate of \(\alpha_{v}^{max}=8.4^{o}\) for an observer with an IPD of \(w_{IP}^{min}=59\) mm -- the lowest IPD supported by the HMD (see Supplement A). Since a larger IPD only relaxes this maximum value, we limit the maximum vergence angle to \(\alpha_{v}^{max}\leq 8.4^{o}\). See Supplement B for a more in-depth analysis.
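As a concrete illustration of the coordinate definitions in Equations (1)-(3) and of the fusion-depth limit just discussed, the short sketch below (function names are our own) converts per-eye angles to the \((\alpha_{v},\alpha_{s})\) parameterization and reproduces the \(\approx 8.4^{\circ}\) maximum vergence angle from \(d_{min}=0.4\) m and a 59 mm IPD:

```python
import math

def to_vergence_saccade(alpha_l_deg, alpha_r_deg):
    """Eqs. (1)-(2): signed per-eye angles -> (vergence, saccade) coordinates."""
    alpha_v = alpha_l_deg - alpha_r_deg          # vergence angle
    alpha_s = 0.5 * (alpha_l_deg + alpha_r_deg)  # saccade (gaze-direction) angle
    return alpha_v, alpha_s

def displacement(origin, target):
    """Eq. (3): gaze-movement displacement (delta_alpha_v, delta_alpha_s)."""
    return (target[0] - origin[0], target[1] - origin[1])

def vergence_angle_deg(depth_m, ipd_m):
    """Vergence angle of a point straight ahead (alpha_s = 0) at a given depth."""
    return 2.0 * math.degrees(math.atan(0.5 * ipd_m / depth_m))

# Minimum fusible depth of 0.4 m with the smallest supported IPD of 59 mm
# corresponds to the ~8.4 deg maximum vergence angle used in the study.
print(round(vergence_angle_deg(0.4, 0.059), 1))  # -> 8.4
```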
Second, we found that the accuracy of the HMD eye tracker deteriorates significantly further in the periphery for \(\alpha_{s}\geq 15^{o}\). We recognize that the majority of saccades naturally performed by humans have amplitudes \(\alpha_{s}\leq 15^{o}\) [18], due to a preference to move the head otherwise. Therefore, we limit the maximum saccade angle to \(\alpha_{s}^{max}\leq 15^{o}\). Lastly, due to the inconsistent nature of temporal human behavior, our study requires many repeats for each condition in order to reveal statistical trends. It is therefore infeasible to include a large number of conditions in our study. We address this by only sampling gaze movement displacements, \(\Delta\mathbf{\alpha}\). That is, although the initial gaze position \(\mathbf{\alpha}^{o}\) has been shown to be a relevant factor influencing offset time [16], we chose not to consider it in our analysis and modeling for the current study. We leave characterizing the effects of "starting pose" as future work. To summarize, our study design is constrained to vergence angles \(\alpha_{v}\leq 8.4^{o}\), saccade angles \(\alpha_{s}<15^{o}\), as well as to only consider gaze movement displacements, \(\Delta\mathbf{\alpha}\), and to ignore initial gaze positions, \(\mathbf{\alpha}^{o}\). Within these constraints, we sample the following conditions for vergence, saccade, and combined motions respectively:

* 2 vergence conditions with amplitudes (\(|\Delta\alpha_{v}|\in\{4.2^{o},\,8.4^{o}\}\)) conducted for both divergent (\(-\)) and convergent (\(+\)) movements,
* 3 saccade conditions with amplitudes (\(\Delta\alpha_{s}\in\{4^{o},8^{o},12^{o}\}\)) conducted at near and far depths,
* \(2\times 3\) combined movements for every combination of the above conditions for both convergent and divergent movements,

totaling \((2+3+2\times 3)\times 2=22\) conditions, as in Figures 3b and 3c. We treated leftward and rightward saccades as symmetric; therefore, while we randomized stimulus location to appear on the left or right side, in data processing, we remove the distinction by taking the absolute value of the saccade amplitudes. Implementation of the conditions is detailed in Supplement B.

Fig. 3. _Study setup and results._ (a) visualizes the setup and temporal stimuli (zoomed-in for illustration) of an example condition. (b)/(c) shows the histogram of the collected offset times, with divergent/convergent movement. Each sub-figure block indicates an individual condition. Higher vertical/horizontal locations imply higher vergence (\(\Delta\alpha_{v}\))/saccade (\(\Delta\alpha_{s}\)) amplitudes. In each block, the X-axis denotes the observed offset time (\(0-1200\) ms range; \(250\) ms for each tick) and the Y-axis denotes the corresponding distribution density. The dashed lines indicate the mean offset time of each histogram. For each histogram an Exponentially modified Gaussian (_ExGauss_) distribution is fitted via Maximum Likelihood Estimation (MLE); refer to Section 3.4 for details on the fitting procedure.

To account for human sensory and behavioral noise (van Beers, 2007), we repeated each condition 6 times within one experimental block (totaling \(6\times 22=132\) trials per block), and instructed participants to complete a total of 12 blocks. Each block took \(10-15\) minutes to complete, with a \(2-3\) minute break between blocks. The experiment was split into sessions across 3 days to avoid fatigue, with each session scheduled at approximately the same time for consistent performance. Before each session, participants also performed a short warm-up session of 24 trials to familiarize themselves with the task and target positions and eliminate potential variance in reaction time. Overall, each experimental condition was repeated a total of 72 times, and the entire experiment took about 3 hours for each participant, including intermediate breaks. Running the experiment across 8 participants, we collected a total of \(8\times 72\times 22=12,672\) trials.

Data analysis. Each experimental trial yields a time-series of eye directions recorded during the trial, sampled at 200 Hz. Similar to (Templin et al., 2014; Yang et al., 2002, 2010), we performed post-hoc processing and analysis on the raw data to more precisely identify gaze movement offset times. To address tracker noise from the high sampling frequency (van Beers, 2007), we first applied a 25 Hz smoothing filter (Butterworth, 1930), similar to (Templin et al., 2014; Yang et al., 2010). We compute the angular velocity over time across each trial from the smoothed eye direction data and apply a constant velocity threshold to detect offset timestamps of gaze movement. Specifically, for a reliable offset time measurement, we require two conditions to be met: (1) individual speeds of the left and right eyes to be below a threshold of \(5^{\circ}/\sec\), as well as (2) each eye to be directed within \(1^{\circ}\) relative to the target. While some prior work suggests that vergence offset times can be detected by the angular velocity in the vergence dimension, i.e., \(\frac{d}{dt}\alpha_{v}=\frac{d}{dt}(\alpha_{l}-\alpha_{r})\) (Yang and Kapoula, 2004), we found that our strategy is more fitting in our use case due to the additional challenges in eye tracker precision, accuracy, and frequency posed by consumer VR devices. For consistency and fairness across all conditions, we applied this detection approach for all the conditions, including vergence-only, saccade-only, and combined movement trials. A small percentage of trials (6.4%) were rejected from analysis and training due to the gaze offset position falling outside the allowable range. Manual inspection of these trials indicates that the users' eye movements only satisfied the second condition (2) above, but not the first (1). These cases could not be identified during experiment run-time due to the inability to reliably perform post-processing filters on the raw data on the fly.

### Results

Figure 3 visualizes the raw data with the identified eye movement offset time. All time values in the statistical analysis below and throughout the paper are in _seconds_ for clarity. Additionally, Figure 4 statistically summarizes the mean of each condition. The offset times of saccades (\(\Delta\alpha_{v}=0^{\circ},.37\) (mean) \(\pm.12\) (std)) are lower than offset times of vergence movements (\(\Delta\alpha_{s}=0^{\circ},.59\pm.15\)). The effect applies for both divergent (\(\Delta\alpha_{v}<0^{\circ},.59\pm.17\)) and convergent (\(\Delta\alpha_{v}>0^{\circ},.59\pm.14\)) conditions. The average offset time of combined movements (\(.48\pm.16\)) lies in between. A repeated measures analysis of variance (ANOVA) indicated that the type of eye movement (saccade/vergence/combined) had a significant effect on the offset time (\(F_{2,14}=339.3,p<.001\)). Additionally, the range (max-min) of mean offset times across saccade conditions (.02) is significantly narrower than across vergence conditions (.14).
The effect can be visualized by comparing the span of values on the Y-axis of Figure 4. Saccade acceleration exhibits a "U-shape" for divergent combined movements (Figure 4b). The optimality (i.e., the amplitude of the saccade that accelerates vergence the most, thus the fastest combined movement) depends on the corresponding vergence amplitude. Lastly, human performance on changing 3D visual targets is inconsistent across trials, even within the same participant. Moreover, the scale of the inconsistency varies across different eye movements. These observations inspire us to develop a computational model that 1) depicts quantitatively how saccades accelerate vergence, and 2) predicts the probability distribution of target landing offset time with combined vergence-saccade movements.

### Generalization to Arbitrary Gaze Movements

Statistical model. The statistical analyses in Sections 3.2 and 3.3 motivate us to develop a model for predicting the target landing offset times for arbitrary gaze movements not present within our dataset. As reported in Section 3.2, the distributions observed in our dataset are positively skewed, and vary across different conditions; so an Exponentially modified Gaussian (_ExGauss_), which features fine control over skewness via its parameters, is a viable choice of statistical model for these distributions (Marmolejo-Ramos et al., 2023). Specifically, offset time, \(\mathcal{T}\), represented as an _ExGauss_ random variable has a probability density function (PDF), \[f_{\mathcal{T}}(t;\mu,\sigma^{2},\tau)=\frac{1}{2\tau}\exp\left(\frac{2\mu+\frac{\sigma^{2}}{\tau}-2t}{2\tau}\right)\,\text{erfc}\left(\frac{\mu+\frac{\sigma^{2}}{\tau}-t}{\sqrt{2}\sigma}\right), \tag{4}\] parameterized by \(\mu\), \(\sigma\), and \(\tau\), to depict the location, spread, and asymmetry of the resulting distribution, respectively. All parameters are in units of _seconds_. Here, erfc(\(\cdot\)) is the complementary error function. As shown in Figure 3, we estimate the _ExGauss_ parameters for each condition separately via Maximum Likelihood Estimation (MLE) to collect a total of \(N=19\) sets of parameters (not double counting the saccade conditions). In this work, offset times are modeled as _ExGauss_ random variables, but note that modeling with a different random variable may also be valid. We leave the analysis and comparisons among model choices as future work since the specific presentation is beyond our focus, and other parameterizations are adaptable to our framework.

Parameter interpolation. Our focus, instead, is on how the parameters of a given model should be interpolated to provide predictions of gaze offset times for arbitrary gaze movements. To this end, we leverage the _ExGauss_ parameter estimations of each condition and smoothly interpolate each parameter via Radial Basis Function (RBF) interpolation. Concretely, each RBF takes, as input, the amplitude of the gaze movement, \(\Delta\mathbf{\sigma}=(\Delta\alpha_{\text{v}},\Delta\alpha_{\text{s}})\), to output the predicted _ExGauss_ random variable, \(\mathcal{T}(\Delta\mathbf{\sigma})\), with estimated parameters \[\hat{\mu}(\Delta\mathbf{\sigma})\coloneqq\sum_{i}^{M}w_{i}^{\mu}\varphi(e^{\mu}||\Delta\mathbf{\sigma}-\mathbf{\epsilon}_{i}^{\mu}||),\] \[\hat{\sigma}(\Delta\mathbf{\sigma})\coloneqq\sum_{i}^{M}w_{i}^{\sigma}\varphi(e^{\sigma}||\Delta\mathbf{\sigma}-\mathbf{\epsilon}_{i}^{\sigma}||),\] \[\hat{\tau}(\Delta\mathbf{\sigma})\coloneqq\sum_{i}^{M}w_{i}^{\tau}\varphi(e^{\tau}||\Delta\mathbf{\sigma}-\mathbf{\epsilon}_{i}^{\tau}||).
\tag{5}\] \(\mathbf{\epsilon}_{i}^{\mu}\) and \(w_{i}^{\mu}\) represent the location and weight of each of the \(M=4\) radial bases, \(\varphi\) is the radial function, and \(e^{\mu}\) is a tuning shape parameter for the radial function. In our implementation, we used the Gaussian kernel, \(\varphi(r)=\exp(-r^{2})\). Overall, the learnable parameters in this regression are \(\mathbf{\epsilon}_{i}^{j}\), \(w_{i}^{j}\), and \(e^{j}\) for \(i\in[1\dots M]\), totalling in \(4+4+1=9\) variables for each _ExGauss_ parameter \(j\in\{\mu,\sigma,\tau\}\). RegressionWe optimize the adjustable variables via gradient descent to minimize the mean-squared error between the MLE-estimated _ExGauss_ parameters for each condition, and the RBF-interpolated parameters, with the loss \[L_{j}=\frac{1}{N}\sum^{N}\left(j-\hat{j}\right)^{2}\text{ for }j\in\{\mu, \sigma,\tau\}. \tag{6}\] The RBF parameters are regressed using batch gradient descent with the loss functions from Equation (6) and a learning rate of \(10^{-2}\) for \(200,000\) iterations. The mean-squared losses are minimized from \(137k/2.3k/17k\)\(s^{2}\) to \(230/200/120\)\(s^{2}\) over the course of each regression, respectively. We report model performance metrics as well as additional evaluations in Section 4. Discussion and applicationsWe compare the mean offset times predicted by our model to the means aggregated from our dataset in Figure 5. This visualization demonstrates how the offset times differ between convergent and divergent gaze movements. For convergent combined movement, we observe the same monotonic decrease in offset time as a function of saccade amplitude as reported in Figure 4c. Additionally, we see the U-shaped behavior for divergent combined movements, as discussed in Section 3.3 and Fig. 4b. The _ExGauss_ distribution and RBF interpolation methods are represented by parameterized differentiable functions. This allows us to compose these components to construct an end-to-end differentiable model for predicting the probability distribution of arbitrary gaze movements. This formulation can be leveraged in various ways for practical applications. For example, the "optimal" saccade amplitude, \(\Delta\alpha_{\text{s}}^{*}\), which minimizes the offset time at various vergence amplitudes, \(\Delta\alpha_{\text{v}}\) can be computed analytically: \[\Delta\alpha_{\text{s}}^{*} =\operatorname*{arg\,min}_{\Delta\alpha_{\text{s}}}\mathbb{E} \left[\mathcal{T}\left(\Delta\mathbf{\sigma}=(\Delta\alpha_{\text{v}},\Delta \alpha_{\text{s}})\right)\right]\] \[=\operatorname*{arg\,min}_{\Delta\alpha_{\text{s}}}\left(\hat{ \mu}\left(\Delta\alpha_{\text{v}},\Delta\alpha_{\text{s}}\right)+\hat{\tau} \left(\Delta\alpha_{\text{v}},\Delta\alpha_{\text{s}}\right)\right). \tag{7}\] These local minima indicate the location of the lowest point in the valley of the U-shaped behavior, as visualized in Figure 5. ## 4. Evaluation We first measure the statistical accuracy and necessity of the vergence-saccade combined modeling with an ablation study in Section 4.1. We further test the model's goodness-of-fit when generalizing to unseen users and trials in Section 4.2. Then, to evaluate its applicability in real-world scenarios and novel conditions, we perform an evaluation user study with various scenes in Section 4.3. 
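Before turning to the evaluations, below is a minimal sketch of the prediction pipeline of Section 3.4: the _ExGauss_ density of Equation (4), the mean offset time \(\hat{\mu}+\hat{\tau}\), and a grid-search stand-in for the minimization in Equation (7). The interpolators `mu_hat` and `tau_hat` are assumed to be already-fitted callables (e.g., the RBF models described above); all names are illustrative and not part of the released code.

```python
import numpy as np
from scipy.special import erfc

def exgauss_pdf(t, mu, sigma, tau):
    """Exponentially modified Gaussian density (Eq. (4)); parameters in seconds."""
    z = (mu + sigma ** 2 / tau - t) / (np.sqrt(2.0) * sigma)
    return np.exp((2.0 * mu + sigma ** 2 / tau - 2.0 * t) / (2.0 * tau)) * erfc(z) / (2.0 * tau)

def mean_offset_time(mu_hat, tau_hat, d_alpha_v, d_alpha_s):
    """Expected offset time E[T] = mu + tau for a given gaze displacement."""
    return mu_hat(d_alpha_v, d_alpha_s) + tau_hat(d_alpha_v, d_alpha_s)

def optimal_saccade(mu_hat, tau_hat, d_alpha_v, s_range=(0.0, 12.0), steps=481):
    """Grid-search version of Eq. (7): saccade amplitude minimizing E[T]."""
    candidates = np.linspace(*s_range, steps)
    costs = [mean_offset_time(mu_hat, tau_hat, d_alpha_v, s) for s in candidates]
    return candidates[int(np.argmin(costs))]
```

Note that the paper's formulation exploits the end-to-end differentiability of the RBF and _ExGauss_ parameterization; the grid search above is only a simple drop-in substitute for illustration.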
### Model Accuracy and Ablation Study MetricsWe utilize the Kullback-Leibler divergence (KLdiv) as a continuous domain metric for measuring the similarity between model-predicted probability densities and the histograms obtained from the psychophysical data. A model with _lower_ KLdiv relative to a ground truth histogram indicates a _better_ prediction. ConditionsWe conduct an ablation study and utilize the KLdiv to validate the necessity of modeling combined movements. Specifically, we consider the model's prediction accuracy if not supplying it with information on either saccade or vergence movement. For this purpose, we re-aggregate our psychophysical data into groups separated only by saccade amplitude (**SAC**), or only by vergence amplitude (**VER**) conditions. That is, we pool together the histograms in Figure 3 across the columns, or rows respectively. The re-aggregation is then utilized to regenerate an ablated model following the same steps as described in Section 3.4. See Supplement D for visualizations of the ablated model predictions. While the probability distribution predicted by our model is continuous, the psychophysical study dataset only provides a finite sample of the theoretical ground truth distribution of offset times. Therefore, we apply the discrete version of KLdiv onto histograms of the ground truth data for each condition with \(n=50\) bins (\(\Delta t=24\) ms). Results and discussionThe resulting average KLdivs for the two ablated models are compared to the full model (**FULL**) in Table 1. We observe that the FULL model exhibits significantly lower KLdiv than **VER** and **SAC**. While the number of bins does have an effect on the divergence values, we extensively tested and confirmed that the relative relationship across the three conditions was not influenced by this factor. These results demonstrate that combined eye movements exhibit remarkably distinct temporal patterns that depend both on saccade and vergence movement amplitudes, agreeing with our observations in Section 3.3. Quantitatively, the combined model predicts participants' behaviors significantly more accurately, and thus proves the necessity and effectiveness of considering amplitudes of both components of movement. ### Model Generalizability We further evaluate generalized goodness-of-fit with unseen data partitions. We create segments of the psychophysical data from Section 3 into training-testing groups along multiple axes. MetricsSimilar to prior art on stochastic visual behaviors (Dunikharjav et al., 2022; Le Meur et al., 2017), we utilize the Kolmogorov-Smirnov (K.S.) goodness-of-fit test (Massey Jr, 1951) between the test set and the corresponding model prediction, using ten quantiles for the offset time. Significance (\(p<.05\)) in the K.S. test indicates a rejection of the null hypothesis that two samples are drawn from the same distribution; failing to reject (\(p>.05\)) supports distributional matching. The \(D\) value in K.S. measures the maximum distance. ConditionsWe first assess the model's statistical goodness of fit for the full set of psychophysical data from Section 3. Then we analyze the model's generalizability based on its capability to successfully fit the statistical distribution with unseen trials or subjects. To this end, the collected dataset is split into two fully separated training and testing sets without overlap. The training set is leveraged to re-train a new model as in Section 3.4, which tests the fitness on the corresponding unseen test set. 
We experiment with two methods of partitions: (1) reserve each one of the eight participants' data as the test set (annotated as \(\mathbf{C}_{i}\), \(i\in\{1,2,\ldots,8\}\)); (2) uniformly randomly sample 1/8 of the entire data for each condition but across all users (annotated as \(\mathbf{C}_{r}\)). For both methods, the remaining data is used as the corresponding training set. \begin{table} \begin{tabular}{c|c c c} \hline \hline Condition & FULL & VER & SAC \\ \hline KL Divergence &.172 &.236 &.444 \\ \hline \hline \end{tabular} \end{table} Table 1. KL divergence of the model and ablation study. Figure 5. _Visualization of the interpolated model._ The sparsely sampled data visualized in Figure 4 is smoothly interpolated via RBF interpolation. The surface heatmap shows the mean offset times across all interpolated conditions, and the measured data is overlaid as a scatter plot for comparison. The “optimal” combined gaze movements at various vergence amplitude settings are computed using Equation (7) and visualized as a dashed white line on the surface of the model prediction. Figure 6. _Results of the model generalization evaluation with various partition conditions._ (a) shows the K.S. analysis. The color indicates the corresponding partition condition. (b) shows the Q-Q plot for all conditions, comparing the distributions between the model-prediction on test set vs. training set. Results and discussion.Figure 6a shows the results for the goodness-of-fit across all conditions. Additionally in Figure 6b, we provide a quantile-quantile (Q-Q) visualization between the training set and the model prediction on the test set: samples closer to the diagonal line indicate better distribution agreement. As a baseline reference, the K.S. test between the model and all collected data shows \(D=.1,p=1\). For all experimented partitioning conditions, the K.S. tests exhibit \(p>.99\), failing to reject the null hypothesis that the model prediction acquired from the training set and the unseen test data are drawn from the same distribution. The goodness-of-fit analyses above reveal that our probabilistic model can be generalized to unseen users and trials, implying that it can predict user behavior without observing it in advance. ### Study: Predicting and Optimizing Visual Performance Beyond measuring the performance of the model on data from the controlled experiment (Section 3), we further design and conduct a second study with more complex stimuli. We aim to gauge the model's capability to predict and optimize visual performance with realistic VR/AR scenarios, novel conditions, and unseen participants. Participants and setup.We recruited 12 participants (ages \(20-33\), \(3\) female). To validate the generalizability of the model, we ensured no overlap of participants with the study from Section 3. All participants reported having normal or correct-to-normal vision. We utilized the same hardware and "preamble" checklist as in Section 3.1. Scenes and stimuli.To validate how our model performs for varied scenarios and content, we designed 3 distinct environments: (1) a rendered archery range with a 2D bullseye stimulus (Figure 7a), (2) a rendered basketball court with a 3D ball stimulus (Figure 7b), and (3) a photographic natural outdoor scene with a virtual bird stimulus to simulate pass-through augmented reality (AR) scenarios (Figure 7c). Tasks.We instructed participants to complete a target-changing task similar to Section 3.1. 
During each trial, participants were first instructed to fixate on a cross at the center of the screen. After successfully fixating for \(0.4\) s, the cross was immediately replaced by one of the three scenes, containing the corresponding target at a new location. The participant then made an eye movement to direct their gaze at the target stimulus. To reduce the influence of progressive learning effects on reaction time, as well as to familiarize the participants with the environment and task, participants performed 36 warm-up trials for each of the scenes, followed by a short break. Conditions.We aim to validate our realistic scenarios with unseen conditions during the model training. Given the hardware limitations in Section 3.1, we experimented with a fixation at \(0.4\) m and targets placed \(\Delta\alpha_{\nu}=6.9^{\circ}\) away in depth. Using this novel vergence depth, we designed 3 conditions with various eye travel distances: * pure vergence motion with the **shortest** distance, \(\Delta\alpha_{s}=0^{\circ}\), \(\mathbf{C_{m}}\): combined motion with the **medium** distance \(\Delta\alpha_{s}=7^{\circ}\), \(\mathbf{C_{I}}\): combined motion with the **longest** distance \(\Delta\alpha_{s}=10.5^{\circ}\). We used the same conditions across all three tested scenes to statistically compare inter-scene generalizability, as detailed in the _results_ paragraph below. To acquire enough data for robust statistical distributions, we included 72 repeats per condition on each scene, with fully randomized order. Therefore, the experiment generated 12 participants \(\times 3\) scenes \(\times 3\) conditions \(\times 72\) repeats \(=7776\) trials in total. We avoided participant fatigue by partitioning the study into 6 blocks, with each block containing trials from only one scene. Additionally, the scene order was fully counterbalanced with a Latin square to avoid carry-on effects. Results.The second row of Figure 7 summarizes the results (see Supplement E for the full visualization). To measure the model's applicability and generalizability, we compare its predictions with the obtained human data along multiple axes, including unseen conditions (Figure 7d), participants (Figure 7e), and scenes. Specifically, 1. Across the 3 conditions, \(\mathbf{C_{m}}\) exhibits the fastest average offset time (\(.49\pm.16\)), compared to \(\mathbf{C_{s}}\) (\(.58\pm.13\)) and \(\mathbf{C_{I}}\) (\(.52\pm.13\)) conditions. The trend agrees with the model's prediction for \(\mathbf{C_{m}}/\mathbf{C_{s}}/\mathbf{C_{I}}\), as \(.44\pm.13/.60\pm.15/.54\pm.16\). The predictions for \(\mathbf{C_{s}}\) in Figure 7d appear to be slightly higher than measured data, however, K.S. tests failed to reject the null hypothesis that the model prediction and the user-exhibited data are drawn from the same distribution (\(p>.99\) for each condition). A repeated measures ANOVA indicated that the condition had a significant effect on the offset time (\(F_{2,22}=21.75,p<.001\)). 2. Across the 12 participants, K.S. tests failed to reject the null hypothesis that the model prediction and the user-exhibited data are drawn from the same distribution (\(p>.79\) for each). 3. Across the 3 scenes, K.S. tests failed to reject the null hypothesis that the model prediction and the user-exhibited data are drawn from the same distribution (\(p>.99\) for each scene). A repeated Figure 7. _Evaluation user study scenes and results._ The first row shows the 3 scenes leveraged for the study. The target stimuli are zoomed-in with insets. 
The second row visualizes the comparisons across various dimensions. (d) compares the model vs. data for the 3 conditions, aggregating all users and scenes. The X-axis/Y-axis indicates offset time/cumulative probability. Note the discrepancy between eye travel distance (\(\mathbf{C_{s}}<\mathbf{C_{m}}<\mathbf{C_{I}}\)) and landing time (\(\mathbf{C_{m}}<\mathbf{C_{I}}<\mathbf{C_{s}}\)). Predictions for \(\mathbf{C_{s}}\) appear higher than measured data, but are statistically similar (Section 4.3). (e) visualizes the model vs. data for each of the participants with a Q-Q plot, aggregating all conditions and scenes. Samples closer to the diagonal line indicate better fitting. measures ANOVA did not observe that the scene had a significant effect on the offset time (\(F_{2,22}=1.93,p=.17\)). We further calculated the KLdivs between observed data and model predictions for each scene to investigate whether the choice of scene affects model alignment. The KLdiv for archery/basketball/natural is \(.52\pm.27/.56\pm.29/.54\pm.23\), respectively. A repeated measures ANOVA did not observe that scene had a significant effect on the KLdiv (\(F_{2,22}=.51,p=.61\)). DiscussionThe statistical analysis demonstrates the model's consistent capability of predicting and thus optimizing users' task performance during 3D visual target changes. In addition to averaged offset times, the model also accurately predicts probability distributions with statistical accuracy, considering individual differences and sensory/behavioral randomness. Our predictions are consistent with unseen conditions and participants, without being affected by novel and realistic scenes. We also re-observe the remarkable fact that offset time performance is not positively correlated to the travel distance, again evidenced by a significant "U-shape" effect. ## 5. Application Case Studies We apply our model to two applications considering 3D gaze movements. First, we explore how gaze movement variability between VR games can influence video game difficulty experienced by players. Second, we make recommendations for scene-aware design and placement of 3D UI elements to minimize the cost of users' target changing in scenarios such as automotive head-up displays (HUD). ### Gaze Movement Performance in Games for VR vs. 2D The relationship between human performance in video games and target placement has been studied in traditional 2D displays (Duinkharjav et al., 2022; Kim et al., 2022). In this case study, we consider whether the game-dependent content depth has an effect on this performance. Since gaming in 2D does not involve vergence movements, our evidence in Section 3 suggests that gaze movements would be faster than in 3D environments. To measure the scale of this difference across display environments as well as individual games, we conduct a numerical simulation using our model. SetupWe experiment with a large-scale VR player behavior dataset established by Aizenman et al. (2022). The dataset investigates how often users fixate at various depths during gameplay. It contains games which mimic four top-rated games on Steam1: _Job Simulator2_, _Arizona Sunshine2_, _Beat Saber2_, and _Pistol Whip2_. With this data, we can simulate various gaze shifts between fixations \(h_{f}\)(_fivation_) that occur during real gameplay and use our model to predict the corresponding average offset time. Concretely, the distribution of gaze fixation depth is described via a probability density function, \(h_{f}(\alpha_{v}\mid G)\). 
The PDF value at some vergence angle, \(\alpha_{v}\), represents the proportion of total time spent fixating at that depth when a user plays a given game \(G\). Footnote 1: [https://store.steampowered.com/vr/ip=0&tub=TopSellers](https://store.steampowered.com/vr/ip=0&tub=TopSellers) Footnote 2: [https://media.mbusa.com/releases/release-9e110a7b636c4518148b9c1a4e19be23-meet-the-s-class-digital-my-mbusa-mercedes-benz-user-experience](https://media.mbusa.com/releases/release-9e110a7b636c4518148b9c1a4e19be23-meet-the-s-class-digital-my-mbusa-mercedes-benz-user-experience) We model each gaze movement during play as originating and targeting two fixation points sampled from the same distribution \(h_{f}\). Given an origin and target vergence angles, \(\alpha_{v}^{o}\) and \(a_{v}^{t}\), the joint probability density, \(h_{m(overnt)}(\Delta\alpha_{v})\), is equal to \[h_{m}(\Delta\alpha_{v}=a_{v}^{t}-\alpha_{v}^{o}\mid G)=h_{f}(a_{v}^{t}\mid G )\times h_{f}(\alpha_{v}^{o}\mid G). \tag{8}\] Using this distribution of vergence movement amplitudes, \(h_{m}\), as a weight factor, we compute the mean gaze movement offset times at all saccade amplitudes our model supports (i.e., \(\Delta\alpha_{s}\in[4^{o},12^{o}]\)). Results and discussionWe visualize our main results in Figure 8. Across all gaze depths reported by Aizenman et al. (2022), 98.7% of the duration was fixated at vergence angles \(\alpha_{v}\leq 8.4^{o}\) -- the maximum supported by our model. In analysis, we excluded the remaining 1.3% data. The baseline 2D condition without vergence movements between fixations (i.e., \(\Delta\alpha_{v}=0\)) exhibits the fastest offset times of 354 ms. The mean offset times for the four games are, on average, 10 ms slower compared to the baseline 2D condition. _Job Simulator2_ and _Arizona Sunshine2_ present a mean gaze offset time of around 20 ms more than baseline, while _Beat Saber2_, and _Pistol Whip2_ present a mean gaze offset time of around 5 ms. Footnote 2: [https://media.mbusa.com/releases/release-9e110a7b636c4518148b9c1a4e19be23-meet-the-s-class-digital-my-mbusa-mercedes-benz-user-experience](https://media.mbusa.com/releases/release-9e110a7b636c4518148b9c1a4e19be23-meet-the-s-class-digital-my-mbusa-mercedes-benz-user-experience) The additional time and effort resulting from stereoscopic eye movements in different games will likely translate to increased difficulty. Notably, the performance regression varies across games and depends on the scale of players' gaze depth variance. These results suggest that gaming in VR comes with a "performance overhead" when compared to playing in 2D. Games that feature more salient objects at shallow depths such as _Job Simulator2_ and _Arizona Sunshine2_ result in up to 20 ms longer gaze offset times compared to the other two games where very little performance is lost. Further investigations to characterize the relationship between gaze offset times and player-experienced difficulties are interesting future work but beyond the scope of this research. Footnote 2: [https://media.mbusa.com/releases/release-9e110a7b636c4518148b9c1a4e19be23-meet-the-s-class-digital-my-mbusa-mercedes-benz-user-experience](https://media.mbusa.com/releases/release-9e110a7b636c4518148b9c1a4e19be23-meet-the-s-class-digital-my-mbusa-mercedes-benz-user-experience) ### Scene-Aware Optimization for 3D User Interface The surging automotive head-up displays (HUD) and wearable AR devices raise new demands in user-centric 3D interface design. 
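Before turning to interface placement, the game analysis of Section 5.1 can be condensed into a small Monte-Carlo sketch of Eq. (8): origin and target fixation depths are drawn independently from a game's fixation-depth histogram, depths beyond the supported \(8.4^{\circ}\) are excluded, and the model's predicted offset time is averaged over saccade amplitudes in \([4^{\circ},12^{\circ}]\). The histogram arguments and the `predict_mean_offset` callable are hypothetical placeholders for the Aizenman et al. (2022) data and for the paper's fitted model.

```python
import numpy as np

def game_mean_offset(hist_alpha_v, hist_weights, predict_mean_offset,
                     n_samples=100_000, max_alpha_v=8.4, seed=0):
    """Monte-Carlo version of Eq. (8): sample origin/target fixation depths
    (vergence angles, in degrees) independently from a game's fixation-depth
    histogram h_f, form the vergence amplitudes, and average the model's
    predicted offset time over saccade amplitudes in [4 deg, 12 deg].
    hist_alpha_v / hist_weights are numpy arrays of bin centers and weights."""
    rng = np.random.default_rng(seed)
    # keep only depths the model supports (alpha_v <= 8.4 deg)
    keep = hist_alpha_v <= max_alpha_v
    alpha_v, w = hist_alpha_v[keep], hist_weights[keep]
    p = w / w.sum()
    origin = rng.choice(alpha_v, size=n_samples, p=p)
    target = rng.choice(alpha_v, size=n_samples, p=p)
    delta_vergence = target - origin                         # Eq. (8) amplitudes
    delta_saccade = rng.uniform(4.0, 12.0, size=n_samples)   # supported range
    return predict_mean_offset(delta_saccade, delta_vergence).mean()
```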
Sub-optimal designs may slow users' reactions and cause dangers (Sabelman and Lam, 2015). When it comes to HUD interfaces, a desirable design target is the "optimal" virtual projection distance that preserves or even accelerates drivers' reaction to road conditions (see Figure 9a), in addition to factors such as focal depths. However, the optimization still remains debated and thus confounds designs. For example, while some literature suggests the distance to be \(2.5-4\) m (Betancur, 2011), some manufacturers instead designed it to be 10 m. Our model provides a quantitative metric for drivers' target-reaching time as a consequence of varying HUD projection distances. Specifically, as annotated in Figure 9b: if the driver were to initiate a gaze movement from looking at the HUD image, depending on the depth of the UI element as well as the target location, the gaze offset times would vary anywhere between \(330-450\) ms (Figure 9c). Therefore, driving assistant applications could leverage the predictions in gaze offset to adjust the placement of UI elements, or to provide timely intervention/alerts in case of emergencies. While the specific optimization goal for object placement will vary depending on the application, we conducted an example optimization using our model without loss of generality. Specifically, we leverage large-scale datasets to collect the depth distribution of various scenes and suggest the ideal placement of a "HUD overlay image" which would minimize the average gaze offset time from the display element to arbitrary points of focus within the scene. Figure 10 shows our experimental results with two datasets containing depth maps of natural outdoor environments: DIODE [20] (18,206 frames) and KITTI (12,919 frames). The average distances of objects are visualized in the top row of the histograms. Assuming a starting gaze centered on a HUD overlay image, positioned at some depth, \(d_{HUD}\), we measure the average gaze offset time, \(\mathbb{E}[\mathcal{T}]\), for saccade amplitudes uniformly sampled from \(\Delta\alpha_{\text{s}}\in[4^{\circ},12^{\circ}]\), and depth targets sampled from the dataset depth histograms. The resulting relationship between \(d_{HUD}\) and \(\mathbb{E}[\mathcal{T}]\) is visualized in Figure 10. Due to the differentiable nature of our model, we can optimize \(d_{HUD}\) to minimize \(\mathbb{E}[\mathcal{T}]\) via gradient descent. As a result, the optimal image placements, \(d_{HUD}^{*}\), are 1.8 m and 2.5 m for the outdoor DIODE and KITTI datasets. Beyond HUD in outdoor environments, we may also leverage the model for AR devices in indoor scenarios. Therefore, we further leveraged the indoor portion of DIODE (9,652 frames) and NYUv2 [12] (407,024 indoor frames). Intuitively, the depths that minimize \(\mathbb{E}[\mathcal{T}]\) are smaller for indoor datasets because more objects are closer in the distance. Indeed, we found 1.3 m to be the optimal projection depth for both the indoor-DIODE and NYUv2 datasets. Our model helps design HUD displays in various applications, as the optimized image placements vary significantly with scenes, e.g., indoor or outdoor ones. They can also be further optimized by using distributions of saccade amplitudes that are more representative of each application. ## 6.
Limitations and Future Work _Initial depth and eccentricity._ Our combined vergence-saccade model measures the angular displacement in 3D without considering the initial fixation depth and eccentricity, even though both of these factors do influence eye movement offset time. Specifically, prior literature suggests that convergence/divergence-only movements show a linear correlation for offset times [17], while off-axis movements that maintain focal depth are much more complex, and require consideration of both vertical/horizontal eccentricity and ocular-motor anatomics [21]. In order to develop a model that predicts gaze offset times between arbitrary points in 3D space, we would need to individually measure and account for all these factors as a high-dimensional grid of conditions. Our main focus of this research is to demonstrate the importance and possibility of modeling gaze offset times for computer graphics applications; therefore, we plan to investigate all the factors above in future work. _Influence of accommodation and peripheral stereoacuity._ Vergence accommodation conflict may, in addition to discomfort, also cause incorrect visual fidelity [14] and depth acuity [22], thus potentially degrading target localization accuracy. Similarly, the inherent mismatch between the geometric and empirical horopters may result in poor stereoacuity (and therefore localization) for targets at farther eccentricities along the iso-vergence circle [13]. Additionally, accommodation speeds have been shown to be slower than vergence speeds [1]; hence, while our methods have comprehensive predictive capability in VR and pass-through AR devices (such as the Oculus Quest, and Apple Vision Pro), future investigations are necessary to fully model the latency of accommodation in _see-through_ AR devices. Our stimuli cover a conservative range of vergence depths and eccentricities, with targets placed close to where the geometric and empirical horopters meet, and having little to no VAC. While this range is appropriate for the contemporary (vergence-only) VR/AR displays [1], however, future work on understanding and optimizing for the influence of accommodation on 3D temporal visual behaviors may shed light on new performance-aware metrics to guide 3D display optics design. _Reaction time and image-space features._ Throughout this paper, we eliminated, as much as possible, any image-dependent variance in reaction time. Therefore, our measured offset time is primarily influenced by biomechanical responses to the spatial distribution of the stimuli, and not influenced by task difficulties or image characteristics such as contrast and spatial frequency [15, 16]. Exploring the combined effect of cognitive load or image characteristics on reaction time may add new building blocks for comprehensive measurements of visual performance. _Eye-head coordination._ During free-viewing, head movements often accompany eye movements and we tend to rotate our heads toward visual targets, especially for large eccentricities beyond \(15^{\circ}\)[1]. Our model does not predict the duration or impact of this concurrent head movement. However, even though moving the head to center the target is a slower movement that typically completes after initial eye movement [11], Figure 8. _Measuring target-shifting offset times in VR games._ Variability in the depth of salient regions in VR games induces longer gaze movement offset times due to combined vergence-saccade gaze movements. 
Representative depth-buffer frames are shown as insets for each game. Games with higher variation in depth (_Job Simulator_® and _Arizona Sunshine_®) exhibit longer offset times as predicted by our model. Traditional 2D video games do not involve depth changes during gaze movements, and therefore have a faster average offset time of 354 ms, shown here as a "baseline" for comparison. our retinal image during the re-centering phase is stabilized, similar to the Vestibular Ocular Reflex. Hence, our model's predictions are likely to continue to be useful as they identify the earliest point after initial eye movement at which the target is clearly visible. We hope that future work in eye-head movement validates this expectation. ## 7. Conclusion We statistically measure and model the correlation between visual target displacement in 3D and eye movement offset time. Our data and model reveal a remarkable fact about eye movements in the 3D world: although combining a saccadic movement with a vergence movement accelerates motion towards a target in depth, the acceleration follows a surprisingly non-monotonic U-shape. Moreover, the model accurately predicts absolute temporal performance on this task without individual normalization. This is primarily because offset time for eye movements is mainly a biophysical phenomenon and not a cognitive one. We hope the research presented here inspires a new frontier exploring exciting questions about eye movements in 3D. For example, what contributes to variation in our target acquisition speeds? How do the surging virtual layers added to the physical world influence our visual attention shifts, and thus safety? And finally, how can we build future virtual environments that boost human performance in taking actions, even to outperform ourselves in the physical world? ###### Acknowledgements. We would like to thank Avigael Aizenman and Agostino Gibaldi for insightful advice on processing stereo gaze data, and support in leveraging the video game gaze behavior data in their work (Aizenman et al., 2022). This project is partially supported by the National Science Foundation grants #2225861 and #2232817, and a DARPA PTG program.
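As a closing illustration of the Section 5.2 procedure, the depth optimization can be sketched as a simple descent on the HUD image depth. The `expected_offset` callable is a placeholder for the paper's differentiable model averaged over a dataset's depth histogram and over saccade amplitudes; the step size, iteration count, and finite-difference gradient are illustrative choices, not the authors' implementation.

```python
def optimize_hud_depth(expected_offset, d_init=2.0, lr=0.05,
                       n_steps=200, eps=1e-3):
    """Descend on the HUD image depth d_HUD (meters) to minimize the
    scene-averaged gaze offset time E[T].  expected_offset(d) stands in
    for the paper's differentiable model averaged over the dataset's
    depth histogram; a central finite difference replaces the analytic
    gradient used in the paper."""
    d = float(d_init)
    for _ in range(n_steps):
        grad = (expected_offset(d + eps) - expected_offset(d - eps)) / (2 * eps)
        d -= lr * grad
        d = max(d, 0.1)  # keep the projection distance physically sensible
    return d
```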
2309.03939
Axion-Gauge Coupling Quantization with a Twist
The possible couplings of an axion to gauge fields depend on the global structure of the gauge group. If the Standard Model gauge group is minimal, or equivalently if fractionally charged color-singlet particles are forbidden, then the QCD axion's Chern-Simons couplings to photons and gluons obey correlated quantization conditions. Specifically, the photon coupling can have a fractional part which is a multiple of 1/3, but which is determined by the gluon coupling. A consequence of this result is that, among all theories with a minimal gauge group and minimal axion coupling to gluons, the smallest possible axion-photon amplitude $|g_{a\gamma\gamma}|$ arises for $E/N = 8/3$. This provides a new motivation for experiments targeting this axion-photon coupling.
Matthew Reece
2023-09-07T18:00:00Z
http://arxiv.org/abs/2309.03939v1
# Axion-Gauge Coupling Quantization with a Twist ###### Abstract The possible couplings of an axion to gauge fields depend on the global structure of the gauge group. If the Standard Model gauge group is minimal, or equivalently if fractionally charged color-singlet particles are forbidden, then the QCD axion's Chern-Simons couplings to photons and gluons obey correlated quantization conditions. Specifically, the photon coupling can have a fractional part which is a multiple of \(1/3\), but which is determined by the gluon coupling. A consequence of this result is that, among all theories with a minimal gauge group and minimal axion coupling to gluons, the smallest possible axion-photon amplitude \(|g_{a\gamma\gamma}|\) arises for \(E/N=8/3\). This provides a new motivation for experiments targeting this axion-photon coupling. ###### Contents * 1 Introduction * 2 Argument from twisted field configurations * 3 Argument from representation theory * 4 Comments * 4.1 Examples * 4.2 Axion-fermion couplings * 4.3 Axion strings and anomaly inflow * 4.4 Cosmology * 4.5 The experimental target * 4.6 Applicability * A Details for the full Standard Model * A.1 Case 1: \(G_{\rm SM}=\widetilde{G}_{\rm SM}/\mathbb{Z}_{6}\). * A.2 Case 2: \(\widetilde{G}_{\rm SM}/\mathbb{Z}_{3}\). * A.3 Case 3: \(\widetilde{G}_{\rm SM}/\mathbb{Z}_{2}\). * A.4 Case 4: \(\widetilde{G}_{\rm SM}\). Introduction The QCD axion is not only the most compelling solution to the Strong CP problem [1, 2, 3, 4], but also an appealing dark matter candidate via the misalignment mechanism [5, 6, 7]. For both of these, only the axion's coupling to gluons plays a direct role. On the other hand, very few of the experimental and observational efforts to detect the axion rely on its gluon coupling (but see [8, 9, 10, 11]); most instead rely on its coupling to photons (e.g., [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]). This makes it crucial to understand how these couplings are related. It is well-known that the coupling of a QCD axion to photons is a sum of two contributions, a quantized piece that depends on UV physics and an additional, non-quantized piece that arises at the QCD scale from mixing with neutral mesons, especially the pion [23, 24, 25, 26, 27, 28]. Here we point out an overlooked constraint: the quantized, UV contribution to the axion-photon coupling is nontrivially correlated with the axion-gluon coupling, if the Standard Model gauge group is minimal (or, relatedly, if the UV completion does not involve fractionally charged color-singlet particles). By minimal, we mean the Standard Model gauge group \[G_{\rm SM}\cong[{\rm SU}(3)_{\rm C}\times{\rm SU}(2)_{\rm L}\times{\rm U}(1)_{ \rm Y}]/\mathbb{Z}_{6}. \tag{1}\] It is possible that this is not the true gauge group: there could be a quotient by \(\mathbb{Z}_{2}\) or \(\mathbb{Z}_{3}\), or no quotient at all, or there could even exist particles with hypercharge \(1/12\) or another fraction smaller than \(1/6\). However, this is the minimal possibility, in the sense that it allows the fewest possible representations for electrically charged matter. For this minimal choice of \(G_{\rm SM}\), the axion coupling to photons is not fully independent of the coupling to gluons. For most of this paper, we focus on just the axion, photon, and gluons. The full Standard Model including the electroweak gauge bosons (and possible non-minimal choices of the gauge group) is discussed in Appendix A; we only briefly comment on axion-fermion couplings in SS4. 
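To make the correlated condition (4) concrete, a few lines of exact rational arithmetic list the photon couplings compatible with a given gluon coupling; this simply restates (4) rather than deriving it, and the ranges chosen below are arbitrary.

```python
from fractions import Fraction

def allowed_k_F(k_G, m_values=range(-2, 3)):
    """Photon couplings compatible with Eq. (4): k_F = m - (2/3) k_G, m integer."""
    return [Fraction(m) - Fraction(2, 3) * k_G for m in m_values]

for k_G in range(4):
    print(k_G, [str(k) for k in allowed_k_F(k_G)])
# k_G = 1 or 2 forces a fractional part of 1/3 or 2/3 in k_F;
# k_G = 0 or 3 allows only integer photon couplings.
```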
To fix our conventions: we consider the kinetic terms \[I_{\rm kin}=\int{\rm d}^{4}x\sqrt{|g|}\left[\frac{1}{2}f^{2}\partial_{\mu} \theta\partial^{\mu}\theta-\frac{1}{4e^{2}}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2g_{ s}^{2}}{\rm tr}(G_{\mu\nu}G^{\mu\nu})\right], \tag{2}\] where \(\theta\cong\theta+2\pi\) is a dimensionless periodic axion field, \(F={\rm d}A\) is the electromagnetic field strength in the normalization where the electron has charge \(-1\), and \(G={\rm d}C-{\rm i}C\wedge C\) is the gluon field strength (and, for convenience of equations written in terms of differential forms, we make the somewhat uncommon choice to refer to the gluon field as \(C_{\mu}\) so that the gauge field and its field strength are referred to with different letters). We normalize the \({\rm SU}(3)\) generators with the standard particle physics convention \({\rm tr}(T^{a}T^{b})=\frac{1}{2}\delta^{ab}\). Notice that \(\theta\), \(A\), and \(C\) are _not_ canonically normalized, but instead are normalized in a manner that makes periodicity and charge quantization properties manifest. Factors of \(f\), \(e\), and \(g_{s}\) are needed to restore canonical normalization. The quantities of interest to us are the axion-gauge couplings \(k_{G}\) and \(k_{F}\) in \[I_{\rm ax}=\int\left[\frac{k_{G}}{8\pi^{2}}\,\theta\,{\rm tr}(G\wedge G)+ \frac{k_{F}}{8\pi^{2}}\,\theta\,F\wedge F\right]. \tag{3}\] This action is not invariant under the gauge transformation \(\theta\mapsto\theta+2\pi\). We require that the path integral be well-defined, so \(\exp[{\rm i}I]\) should be gauge invariant. This implies quantization conditions on \(k_{G}\) and \(k_{F}\). In fact, these numbers can be thought of as (generalized) Chern-Simons levels, which are well-known to be quantized. Readers unfamiliar with this physics can find a detailed pedagogical review in my TASI lectures [29]. The main result of this paper is that, if the full Standard Model gauge group is the minimal one (1), which forbids fractionally charged color-singlet particles, then in fact \(k_{G}\) and \(k_{F}\) are not independently quantized. Instead, we have \[k_{G}\in\mathbb{Z},\qquad\frac{2}{3}k_{G}+k_{F}\in\mathbb{Z}. \tag{4}\] The integer \(k_{G}\) is also known as the domain wall number, because QCD dynamics will generate an axion potential with \(|k_{G}|\) distinct vacua that can be separated by domain walls at the QCD phase transition. Eq. (4) implies that the photon coupling \(k_{F}\) can be fractional (a multiple of \(1/3\)), but its fractional part is determined by the domain wall number. This is a nonperturbative fact about the theory, but I will explain how it arises in perturbative theories (e.g., KSVZ-like models) where \(k_{G}\) and \(k_{F}\) are computed from triangle diagrams. In this case, even if the global structure of the Standard Model is non-minimal (e.g., lacking the \(\mathbb{Z}_{6}\) quotient), the result still holds provided that the fermions appearing in the UV completion and giving rise to the axion couplings appear in representations of \([\mathrm{SU}(3)_{\mathrm{C}}\times\mathrm{SU}(2)_{\mathrm{L}}\times\mathrm{U}( 1)_{\mathrm{Y}}]/\mathbb{Z}_{6}\), i.e., do not include fractionally charged color-singlet particles. Experimental searches for axions generally express the result in terms of a quantity \(g_{a\gamma\gamma}\), proportional to the axion-photon-photon amplitude at low energies, which is defined to be \[g_{a\gamma\gamma}=\frac{\alpha}{2\pi f/k_{G}}\left(\frac{E}{N}-1.92(4)\right). 
\tag{5}\] Here \(E\equiv k_{F}\), \(N\equiv\frac{1}{2}k_{G}\), and the \(-1.92\) contribution arises from mixing with mesons (we use a recent estimate from [28]). It is very common in the axion literature to refer to a model based on its value of \(E/N\), i.e., of \(2k_{F}/k_{G}\). The factor of \(2\) here is a historical artifact, since \(k_{G}\) is an integer but \(N\) can be a half-integer. In the literature, what I denote as \(f/k_{G}\) is often denoted \(f_{a}\), and it is this combination that determines the QCD axion's mass, a recent estimate of which is [28]: \[m_{a}\approx 5.70(6)(4)\,\mu\mathrm{eV}\left(\frac{10^{12}\,\mathrm{GeV}}{f/k_{G }}\right). \tag{6}\] It is common for experimental plots to show lines in the \((m_{a},g_{a\gamma\gamma})\) plane labeled by values of \(E/N\), such as \(E/N=0\) (often labeled KSVZ [30, 31]) or \(E/N=8/3\) (often labeled DFSZ [32, 33]). We emphasize that the conventional KSVZ and DFSZ labels are misleading, as the structure of these models can accommodate many different \(E/N\) values, and conversely the same \(E/N\) values can arise in models with completely different structure. The value \(E/N=8/3\) is also of interest because it arises in GUT completions of the Standard Model [24, 26, 34]. Because \(E/N=8/3\) mostly cancels against \(-1.92\), and because it arises in simple models, it is often viewed as a major target for axion experiments. Of course, one could consider values like \(E/N=2\) that cancel even more completely, so it is not clear whether \(E/N=8/3\) is a well-motivated stopping point. Our result provides a different motivation for the line \(E/N=8/3\) as an experimental target. Models with \(k_{G}=\pm 1\) are particularly appealing. A phenomenological corollary of our result (4) is: If the axion coupling to gluons is minimal (\(k_{G}=1\)), and the Standard Model gauge group is minimal (no fractionally charged color-singlet particles exist), then the smallest value of \(|g_{a\gamma\gamma}|\) consistent with eq. (4) arises for \(k_{F}=4/3\), or in conventional notation, \(E/N=8/3\). Notice that the case \(k_{G}=-1\) also has domain wall number one, but then the smallest \(|g_{a\gamma\gamma}|\) arises for \(k_{F}=-4/3\), so the conclusion about \(E/N\) and \(g_{a\gamma\gamma}\) is unchanged. (In fact, one could always redefine \(\theta\mapsto-\theta\) to ensure \(k_{G}>0\) without affecting \(|g_{a\gamma\gamma}|\).) We give two different arguments for (4). First, in SS2, we give a nonperturbative argument based on field configurations with nontrivial topology. Second, in SS3, we give an argument based on \(\mathrm{SU}(3)\) representation theory, which applies to any perturbative model lacking fractionally charged color-singlet particles. We offer further commentary on the result and its implications in SS4. In an appendix SSA, we explain how the result changes for different global structures of the Standard Model gauge group. ## 2 Argument from twisted field configurations Given the minimal Standard Model gauge group (1), there is a correlation between the \(\mathrm{SU}(3)_{\mathrm{C}}\) representations of particles and their electric charges. This correlation depends only on the representation's _triality_, or transformation under the \(\mathbb{Z}_{3}\) center of \(\mathrm{SU}(3)_{\mathrm{C}}\). 
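A quick numeric check of the corollary stated above: with \(k_{G}=1\), condition (4) restricts \(E/N=2k_{F}\) to the set \(\{\ldots,-4/3,2/3,8/3,14/3,\ldots\}\), and Eq. (5) then singles out \(E/N=8/3\) as the member of smallest \(|g_{a\gamma\gamma}|\). The reference value of \(f/k_{G}\) below is illustrative.

```python
import math
from fractions import Fraction

ALPHA = 1 / 137.036        # fine-structure constant
F_OVER_KG = 1e12           # illustrative f / k_G in GeV (cf. Eq. (6))
MESON_MIXING = 1.92        # low-energy contribution quoted in Eq. (5)

k_G = 1
# photon couplings allowed by Eq. (4) for k_G = 1: k_F = m - 2/3, m integer
for m in range(0, 5):
    k_F = Fraction(m) - Fraction(2, 3)
    E_over_N = 2 * k_F / k_G
    g = ALPHA / (2 * math.pi * F_OVER_KG) * abs(float(E_over_N) - MESON_MIXING)
    print(f"E/N = {E_over_N}: |g_a_gamma_gamma| = {g:.2e} GeV^-1")
# the minimum over this family falls at E/N = 8/3 (k_F = 4/3), as claimed
```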
Representations of zero triality have integer electric charge, those with triality 1 (e.g., the **3**) have electric charge \(1/3\) less than an integer, and those with triality 2 (e.g., the \(\bar{\textbf{3}}\)) have electric charge \(1/3\) more than an integer. Below the electroweak scale, we can summarize this result by saying that the gauge group is \(G=\mathrm{U}(3)\cong[\mathrm{SU}(3)\times\mathrm{U}(1)]/\mathbb{Z}_{3}\). More explicitly, we have charged fields in the fundamental of \(\mathrm{SU}(3)\) transforming as \[\psi\mapsto U\mathrm{e}^{\mathrm{i}\hat{q}\alpha}\psi, \tag{7}\] where \(U\in\mathrm{SU}(3)\) and \(\mathrm{e}^{\mathrm{i}\alpha}\in\mathrm{U}(1)\), with \(\hat{q}\) an integer charge that is \(3\) times our conventional normalization \(q\) of electric charge. For instance, the up quark has \(q=2/3\) and thus \(\hat{q}=2\). The center of \(\mathrm{SU}(3)\) is generated by the matrix \[z=\begin{pmatrix}\mathrm{e}^{2\pi\mathrm{i}/3}&0&0\\ 0&\mathrm{e}^{2\pi\mathrm{i}/3}&0\\ 0&0&\mathrm{e}^{2\pi\mathrm{i}/3}\end{pmatrix}=\mathrm{e}^{2\pi\mathrm{i}/3} \textbf{1}. \tag{8}\] and the \(\mathbb{Z}_{3}\) quotient corresponds to the fact that the combined action of \(z\in\mathrm{SU}(3)\) and the \(\mathrm{U}(1)\) element with \(\alpha=2\pi/3\) acts trivially on fields. In order for this to be true for a field in the fundamental, we need \[\mathrm{e}^{2\pi\mathrm{i}/3(1+\hat{q})}=1, \tag{9}\] i.e., we need \[\hat{q}\equiv 2\pmod{3} \tag{10}\] for fields in the fundamental of \(\mathrm{SU}(3)\). Correspondingly, we need \(\hat{q}\equiv 1\pmod{3}\) for fields in the antifundamental of \(\mathrm{SU}(3)\). We will derive our quantization condition using topologically nontrivial field configurations with twisted boundary conditions, of the sort introduced by 't Hooft [35, 36]. The homotopy group \(\pi_{1}(G)\cong\mathbb{Z}\) is generated by the projection of a path in the covering space \(\widetilde{G}=\mathrm{SU}(3)\times\mathrm{U}(1)\) that goes from the origin \((\textbf{1},1)\) to the point \((z,\mathrm{e}^{2\pi\mathrm{i}/3})\), which is identified with the origin when we take the \(\mathbb{Z}_{3}\) quotient. A simple such path \(f:[0,1]\to\widetilde{G}\) is given by \[(U(t),\xi(t))=\left(\exp(2\pi\mathrm{i}tT/3),\exp(2\pi\mathrm{i}t/3)\right), \tag{11}\] where \[T=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&-2\end{pmatrix} \tag{12}\] has the property that \(\exp(2\pi\mathrm{i}T/3)=z\). We denote the \(\mathrm{SU}(3)\) gauge field by \(C\) with field strength \(G=\mathrm{d}C-\mathrm{i}C\wedge C\) and the \(\mathrm{U}(1)\) gauge field by \(\widehat{A}\) with field strength \(\widehat{F}=\mathrm{d}\widehat{A}\), with gauge transformations \[C \mapsto UCU^{-1}-\mathrm{i}(\mathrm{d}U)U^{-1} \tag{13}\] \[\widehat{A} \mapsto\widehat{A}-\mathrm{i}(\mathrm{d}\mathrm{e}^{\mathrm{i} \alpha})\mathrm{e}^{-\mathrm{i}\alpha}=\widehat{A}+\mathrm{d}\alpha. \tag{14}\] We are interested in gauge field configurations that return to themselves with a twist under the \(\mathbb{Z}_{3}\) center symmetry. We study the theory on a 4-torus parametrized by \(x_{i}\cong x_{i}+2\pi r_{i}\). Let's first consider flux on a 2-torus. Consider a gauge field configuration of the form \[C=G_{12}x_{1}\mathrm{d}x_{2},\quad\widehat{A}=\widehat{F}_{12}x_{1}\mathrm{d}x _{2}. 
\tag{15}\] This is manifestly not invariant under \(x_{1}\mapsto x_{1}+2\pi r_{1}\), but we ask that it map to a gauge equivalent configuration under a gauge transformation that winds around the \(x_{2}\) direction as the generator of \(\pi_{1}(G)\), i.e., \[2\pi r_{1}G_{12}\mathrm{d}x_{2} =-\mathrm{i}(\mathrm{d}U(x_{2}/2\pi r_{2}))U^{-1}(x_{2}/2\pi r_{2}), \tag{16}\] \[2\pi r_{1}\widehat{F}_{12}\mathrm{d}x_{2} =-\mathrm{i}\xi^{-1}(x_{2}/2\pi r_{2})\mathrm{d}\xi(x_{2}/2\pi r _{2}). \tag{17}\] From the explicit expression (11) we read off that \[G_{12} =\frac{1}{3}\frac{1}{2\pi r_{1}r_{2}}T, \tag{18}\] \[\widehat{F}_{12} =\frac{1}{3}\frac{1}{2\pi r_{1}r_{2}}. \tag{19}\] Next, consider a field configuration with the same type of flux on both the \((x_{1},x_{2})\) torus and the \((x_{3},x_{4})\) torus: \[G =\frac{1}{3}T\left(\frac{1}{2\pi r_{1}r_{2}}\,\mathrm{d}x_{1} \wedge\mathrm{d}x_{2}+\frac{1}{2\pi r_{3}r_{4}}\,\mathrm{d}x_{3}\wedge \mathrm{d}x_{4}\right), \tag{20}\] \[\widehat{F} =\frac{1}{3}\left(\frac{1}{2\pi r_{1}r_{2}}\,\mathrm{d}x_{1} \wedge\mathrm{d}x_{2}+\frac{1}{2\pi r_{3}r_{4}}\,\mathrm{d}x_{3}\wedge \mathrm{d}x_{4}\right). \tag{21}\] Then we calculate that: \[\int\mathrm{tr}(G\wedge G) =\frac{1}{9}8\pi^{2}\mathrm{tr}(TT)=\frac{2}{3}8\pi^{2}, \tag{22}\] \[\int\widehat{F}\wedge\widehat{F} =\frac{1}{9}8\pi^{2}. \tag{23}\] Finally, we note that the conventional normalization of the electromagnetic gauge field is determined by \(\hat{q}\widehat{A}=qA=\hat{q}A/3\), hence \(A=3\widehat{A}\), and \[\int F\wedge F=9\int\widehat{F}\wedge\widehat{F}=8\pi^{2}. \tag{24}\] Now we are ready to put the pieces together. Given the action (3), we consider the change in \(\exp(\mathrm{i}I)\) under \(\theta\mapsto\theta+2\pi\), in the presence of the twisted field configuration we have just considered: \[\exp(\mathrm{i}I)\mapsto\exp(\mathrm{i}I)\exp\left[2\pi\mathrm{i}\left(\frac{ 2}{3}k_{G}+k_{F}\right)\right]. \tag{25}\] From this we conclude that we require a correlated quantization condition of the form \[\frac{2}{3}k_{G}+k_{F}\in\mathbb{Z}, \tag{26}\] as promised in (4). The other part of (4), \(k_{G}\in\mathbb{Z}\), follows from the standard argument about SU(3) instantons. So far we have shown that (4) holds in any valid model with the low-energy gauge group \([\text{SU}(3)\times\text{U}(1)]/\mathbb{Z}_{3}\), but one could ask if there might be a stronger condition that holds. In other words, given integers \((n,m)\), is there always a model that produces \(k_{G}=n\) and \(\frac{2}{3}k_{G}+k_{F}=m\)? In a KSVZ-like model, a color triplet \(Q\) with electric charge \(-\frac{1}{3}\) and PQ charge \(\pm 1\) produces \((k_{G},k_{F})=\pm(1,\frac{1}{3})\), whereas a color singlet \(L\) with electric charge \(1\) and PQ charge \(\pm 1\) produces \((k_{G},k_{F})=(0,\pm 1)\). A model with sufficiently many copies of such fields can achieve any \((n,m)\); for instance, if \(n>0\), one can choose \(n\) copies of \(Q\) with PQ charge \(+1\) and \(|m-n|\) copies of \(L\) with PQ charge \(\text{sign}(m-n)\). This shows that (4) is in fact the most general quantization condition that can be proven. ## 3 Argument from representation theory In SS2, we gave a nonperturbative argument for (4) exploiting the topology of the gauge group by placing the theory on a topologically nontrivial manifold. We will now show that the same conclusion can be reached within a perturbative model (e.g., a KSVZ-like or DFSZ-like model) by integrating out heavy fermions. 
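Before developing that perturbative argument, the claim at the end of §2 — that triplets \(Q\) (charge \(-1/3\)) and singlets \(L\) (charge \(1\)) realize any pair \((k_{G},\frac{2}{3}k_{G}+k_{F})=(n,m)\) — is easy to verify numerically using the triangle-diagram expressions quoted just below (Eqs. (27)–(28)). The construction with \(n\) copies of \(Q\) and \(|m-n|\) copies of \(L\) follows the text; the code organization is mine.

```python
from fractions import Fraction

def couplings(fields):
    """k_G and k_F from KSVZ-like heavy fermions.
    Each field is (dynkin_index, dimension, electric_charge, pq_charge),
    summed as k_G = sum 2*I(R)*p and k_F = sum dim(R)*q^2*p."""
    kG = sum(2 * I * p for I, d, q, p in fields)
    kF = sum(d * q * q * p for I, d, q, p in fields)
    return kG, kF

Q = (Fraction(1, 2), 3, Fraction(-1, 3))   # color triplet, charge -1/3
L = (Fraction(0), 1, Fraction(1))          # color singlet, charge +1

def model(n, m):
    """n copies of Q with PQ charge +1 plus |m - n| copies of L with
    PQ charge sign(m - n), as described in the text (assumes n > 0)."""
    sgn = 1 if m >= n else -1
    fields = [(*Q, 1)] * n + [(*L, sgn)] * abs(m - n)
    return couplings(fields)

for n in range(1, 4):
    for m in range(-2, 4):
        kG, kF = model(n, m)
        assert kG == n and Fraction(2, 3) * kG + kF == m
print("all (n, m) pairs realized; Eq. (4) holds in each case")
```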
This argument has a less general starting point, as it makes assumptions about the UV completion; for example, the axion could arise from a higher-dimensional gauge field, in which case \(k_{G}\) and \(k_{F}\) descend directly from higher-dimensional Chern-Simons levels, which do not arise from integrating out 4d fermions. Nonetheless, it is instructive to see how perturbative models respect the nonperturbative constraint (4), and we will see that in fact we derive a stronger statement: every individual contribution to \(k_{G}\) and \(k_{F}\) independently obeys (4). (This also follows from the topological argument by specializing to a KSVZ model with a single representation of fermions giving rise to the coupling.) In a perturbative 4d model, the coefficients \(k_{G}\) and \(k_{F}\) arise from triangle diagrams, integrating out particles in SU(3) representation \(R_{i}\) with electric charge \(q_{i}\in\frac{1}{3}\mathbb{Z}\) and Peccei-Quinn charge \(p_{i}\in\mathbb{Z}\), \[k_{G} =\sum_{i}2I(R_{i})p_{i}, \tag{27}\] \[k_{F} =\sum_{i}\dim(R_{i})q_{i}^{2}p_{i}. \tag{28}\] Here \(I(R_{i})\) is the Dynkin index of the representation, normalized to \(1/2\) for the fundamental. We will show that each individual contribution to the sum obeys (4), for the minimal choice \(|p_{i}|=1\) (and thus a fortiori for other choices of \(p_{i}\)). Because this holds for every contribution, we will drop the \(i\) subscript and focus on a representation \(R\); our goal is to argue that \[\frac{4}{3}I(R)+\dim(R)q^{2}\in\mathbb{Z}, \tag{29}\] for any SU(3) representation \(R\) and U(1) charge \(q\) consistent with the gauge group \([\text{SU}(3)\times\text{U}(1)]/\mathbb{Z}_{3}\). Rather than trying to give a general argument applicable to other gauge groups, we will give a very direct (but not elegant) analysis of the explicit formulas for the case of interest. Let us recall some facts about \(\mathrm{SU}(3)\) representation theory. An irreducible representation of \(\mathrm{SU}(3)\) is labeled by two nonnegative integers (also sometimes referred to as Dynkin indices) \(r\), \(s\), which correspond to the number of fundamentals and anti-fundamentals that should be tensored together to obtain the representation. These representations have triality \(n_{3}(R)\equiv r-s\pmod{3}\). As reviewed in SS2, the gauge group \([\mathrm{SU}(3)\times\mathrm{U}(1)]/\mathbb{Z}_{3}\) has the constraint that the fractional part of the electric charge \(q\) is \(-n_{3}(R)/3\). Thus, we can write \(q=n-\frac{r-s}{3}\) for some \(n\in\mathbb{Z}\). The dimension of an \(\mathrm{SU}(3)\) representation is \[\dim(R)=\frac{1}{2}(r+1)(s+1)(r+s+2). \tag{30}\] If \(n_{3}(R)\neq 0\), then \(\dim(R)\) is divisible by \(3\). For instance, if \(n_{3}(R)=1\), then \(r\equiv s+1\pmod{3}\) and so \(2\dim(R)=(s+2)(s+1)(2s+3)\equiv 2s(s+1)(s+2)\pmod{3}\), so \(\dim(R)\equiv s(s+1)(s+2)\pmod{3}\) is a product of \(3\) consecutive numbers mod \(3\) and hence is \(0\pmod{3}\). By \(r\leftrightarrow s\) the same is true if \(n_{3}(R)=2\). On the other hand, \(\dim(R)\) need not be divisible by \(3\) when \(n_{3}(R)=0\), as exemplified by the adjoint representation of dimension \(8\). The Dynkin index of an \(\mathrm{SU}(3)\) representation is \[I(R)=\frac{1}{24}\dim(R)(r^{2}+rs+s^{2}+3r+3s). \tag{31}\] It is instructive to look at some examples first: * The fundamental representation has \(r=1,s=0\), \(\dim(R)=3\), and \(I(R)=\frac{1}{2}\). Hence \(\frac{4}{3}I(R)=\frac{2}{3}\). 
In this case the allowed electric charges are \(q=n-\frac{1}{3}\), so \(\dim(R)q^{2}=3\left(n-\frac{1}{3}\right)^{2}=3n^{2}-2n+\frac{1}{3}\). The contribution in (29) is then \(3n^{2}-2n+1\in\mathbb{Z}\). * The adjoint representation has \(r=1,s=1\), \(\dim(R)=8\), and \(I(R)=3\). In this case \(\frac{4}{3}I(R)=4\) is an integer, and only integer electric charges are allowed, so (29) obviously holds. * The symmetric tensor representation has \(r=2,s=0\), \(\dim(R)=6\), and \(I(R)=\frac{5}{2}\). In this case \(\frac{4}{3}I(R)=\frac{10}{3}\). The allowed electric charges are \(q=n+\frac{1}{3}\), so \(\dim(R)q^{2}=6\left(n+\frac{1}{3}\right)^{2}=6n^{2}+4n+\frac{2}{3}\). The two terms in (29) sum to \(6n^{2}+4n+4\in\mathbb{Z}\). In general, \(2I(R)\in\mathbb{Z}\). This is physically clear, since we can build a KSVZ model where this is the value of \(k_{G}\), which must be an integer for the usual topological reason related to \(\mathrm{SU}(3)\) instantons. Let's now break down the argument into cases. First, suppose that \(n_{3}(R)=0\). Then \(q\in\mathbb{Z}\), so the \(k_{F}\) contribution is an integer, so in order for (29) to hold we need \(2I(R)\) to be divisible by \(3\). There is a simple physical argument for this: the representation \(R\) could arise in the context of an \(\mathrm{SU}(3)/\mathbb{Z}_{3}\) gauge theory, in which case instanton number can have fractional part a multiple of \(1/3\). Thus, the coupling \(k_{F}\) in such a theory is quantized in integer multiples of \(3\). A KSVZ-like model with a single field in representation \(R\) must obey this constraint, hence \(3\mid(2I(R))\). One can also check that this follows from (31), along similar lines to the case we discuss below. Next, consider the case \(n_{3}(R)=1\). Then the second term in (29) has the form \[\dim(R)\left(n-\frac{1}{3}\right)^{2}=\dim(R)n^{2}-\frac{2}{3}\dim(R)n+\frac{1}{9}\dim(R). \tag{32}\] Now, we know that \(3\mid\dim(R)\) in this case, so the fractional part comes only from the last term, \(\frac{1}{9}\dim(R)\). In other words, when \(n_{3}(R)=1\), we wish to show that \[\frac{4}{3}I(R)+\frac{1}{9}\dim(R)\in\mathbb{Z}. \tag{33}\] After a little algebra, one finds that this is equivalent to claiming that \[36\mid(r+1)(s+1)(r+s+2)(r^{2}+s^{2}+rs+3r+3s+2), \tag{34}\] when \(r,s\) are nonnegative integers with \(r\equiv s+1\pmod{3}\). Thus, we need to find two factors of \(3\) and two factors of \(2\) on the right-hand side. We have already argued that \(3\mid(r+1)(s+1)(r+s+2)\). One can also check that, mod \(3\), \[r^{2}+s^{2}+rs+3r+3s+2\equiv(s+1)^{2}+s^{2}+(s+1)s+2\equiv 3s^{2}+3s+3\equiv 0\pmod{3}. \tag{35}\] This takes care of our two factors of \(3\). Next, we want to show that there are two factors of \(2\). We do this somewhat tediously by checking cases. * \(r\) odd, \(s\) odd: In this case, \((r+1)\), \((s+1)\), and \((r+s+2)\) are all even, so we have three factors of \(2\) from the \(\dim(R)\) factor alone. * \(r\) even, \(s\) even: In this case, \(r+s+2\) and also \(r^{2}+s^{2}+rs+3r+3s+2\) are both even. * \(r\) odd, \(s\) even: In this case, \(r+1\) is even, and \(r^{2}+s^{2}+rs+3r+3s+2\equiv r^{2}+3r\equiv 0\pmod{2}\) is even as well. * \(s\) odd, \(r\) even: Just like the last case with \(r\leftrightarrow s\). Finally, one can consider the case \(n_{3}(R)=-1\). In this case \(q=n+\frac{1}{3}\), but this does not affect the form of (33), and the argument proceeds exactly as above with \(r\leftrightarrow s\).
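The case analysis above can also be checked by brute force: enumerate SU(3) irreps by their labels \((r,s)\), compute \(\dim(R)\) and \(I(R)\) from Eqs. (30)–(31), restrict to charges \(q=n-(r-s)/3\) allowed by the \(\mathbb{Z}_{3}\) quotient, and confirm Eq. (29). The bounds on \(r\), \(s\), and \(n\) below are arbitrary.

```python
from fractions import Fraction

def dim_su3(r, s):
    """Dimension of the SU(3) irrep with Dynkin labels (r, s), Eq. (30)."""
    return Fraction((r + 1) * (s + 1) * (r + s + 2), 2)

def index_su3(r, s):
    """Dynkin index normalized to 1/2 for the fundamental, Eq. (31)."""
    return dim_su3(r, s) * (r * r + r * s + s * s + 3 * r + 3 * s) / 24

def check_eq29(max_label=6, n_range=range(-3, 4)):
    for r in range(max_label + 1):
        for s in range(max_label + 1):
            for n in n_range:
                # charge allowed by the Z3 quotient: q = n - (r - s)/3
                q = Fraction(n) - Fraction(r - s, 3)
                total = Fraction(4, 3) * index_su3(r, s) + dim_su3(r, s) * q * q
                assert total.denominator == 1, (r, s, q)
    return True

print(check_eq29())   # every (R, q) consistent with the quotient obeys Eq. (29)
```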
## 4 Comments In this section we collect a series of brief remarks on different aspects of the result (4) and its interpretation. ### Examples In the SU(5) GUT model, the higgsing pattern is \(\mathrm{SU}(5)\to[\mathrm{SU}(3)_{\mathrm{C}}\times\mathrm{SU}(2)_{\mathrm{L}} \times\mathrm{U}(1)_{\mathrm{Y}}]/\mathbb{Z}_{6}\) through an adjoint vev, so our constraint should apply. The GUT model predicts a relationship [24, 26] \[k_{F}=\frac{4}{3}k_{G}, \tag{36}\] and hence \(\frac{2}{3}k_{G}+k_{F}=2k_{G}\) is an integer (in fact, an even integer). In this context, the factor of \(4/3\) arises as the trace of the square of the matrix \[\begin{pmatrix}1/3&0&0&0&0\\ 0&1/3&0&0&0\\ 0&0&1/3&0&0\\ 0&0&0&0&0\\ 0&0&0&0&-1\end{pmatrix}, \tag{37}\] which embeds the generator of \(\mathrm{U}(1)\) electromagnetism in SU(5). This result is most often quoted as \(E/N=8/3\), using the notation explained in the introduction. For more discussion see [34]. A wide variety of perturbative axion models with matter in different representations of the gauge group \(G_{\mathrm{SM}}\) has been tabulated in [37], which lists these models in terms of \(E/N\) (in our notation, \(2k_{F}/k_{G}\)) and \(N_{\rm DW}\) (in our notation, \(k_{G}\)). Thus, our constraint (26) can be written in terms of their data as \[N_{\rm DW}\left(\frac{2}{3}+\frac{1}{2}\frac{E}{N}\right)\in\mathbb{Z}. \tag{38}\] We have checked that all of their tabulated models obey this constraint. This follows from the general argument in SS3, but it is a useful sanity check on our claim. ### Axion-fermion couplings A point that may, at first, bother some readers is that the couplings \(k_{G}\) and \(k_{F}\) are not, by themselves, physical in theories with fermions charged under the gauge group. This is because the amplitude for an axion to interact with gauge fields also depends on the axion's couplings to fermions that can run in loops. The amplitude is a physical invariant, but the individual couplings are not. Indeed, it is often stated that we can freely move the axion coupling between the phase of a fermion mass term, of the form \[m_{\psi}{\rm e}^{{\rm i}k\theta}\psi\widetilde{\psi}+{\rm h.c.}, \tag{39}\] and the terms in (3) using the chiral anomaly. This is true (provided we also keep track of changes in axion derivative couplings under the field redefinition). However, altering the values of \(k_{G}\) and \(k_{F}\) in this way does not change the conclusion (4). In order for (39) to be well-defined, we require that \(k\in\mathbb{Z}\). Similarly, when we perform an axion-dependent field redefinition to rephase the fermion, e.g., \[\psi\mapsto{\rm e}^{{\rm i}n\theta}\psi, \tag{40}\] it makes sense only for \(n\in\mathbb{Z}\). In that case, the chiral anomaly tells us that the field redefinition shifts \(k_{G}\) by \(2I(R)n\) and \(k_{F}\) by \(\dim(R)q^{2}n\). But this leaves (4) untouched, as the argument of SS3 shows. ### Axion strings and anomaly inflow In SS3, we presented a representation theoretic argument as a constraint on perturbative theories with \(k_{G}\) and \(k_{F}\) arising from triangle diagrams with 4d fermions in loops. There is another way to interpret the calculation that applies to other classes of axion theories, such as those where the axion arises from a higher dimensional gauge field. Any theory of an axion is expected to have axion strings, i.e., dynamical vortices around which \(\theta\) winds from \(0\) to \(2\pi\). 
(One argument for this is the absence of global symmetries in quantum gravity, because the axion winding number current \(\frac{1}{2\pi}{\rm d}\theta\) would generate a 2-form global symmetry if axion strings do not exist.) The lack of gauge invariance of Chern-Simons terms has dynamical implications in the presence of various boundaries or defects. For an axion string, in particular, it implies that the string admits chiral zero modes carrying charge under the bulk gauge symmetries. This is the phenomenon of anomaly inflow [38]. The charged zero modes in the 2d worldsheet theory must be anomalous under the gauge symmetries, with anomaly coefficients that cancel the bulk inflow terms arising from \(k_{G}\) and \(k_{F}\). The 2d anomaly arises from vacuum polarization diagrams, proportional to \(2I(R)\) for SU(3) and \(\dim(R)q^{2}\) for U(1). Thus, we can interpret the argument of SS3 as a constraint on the anomaly coefficients of fermions on the axion string worldsheet in any axion theory, rather than bulk fermions in 4d. ### Cosmology As emphasized by [37, 39], axion models with fractionally charged color-singlet particles are severely constrained by experimental results that found the abundance of such particles in our universe must be twenty orders of magnitude below the abundance of ordinary baryons [40]. In any model of a post-inflation axion, i.e., one where the universe undergoes a thermal phase transition after inflation that spontaneously breaks a Peccei-Quinn symmetry and produces the QCD axion as a pseudo-Goldstone boson, we expect that all of the particles responsible for generating the axion-gluon and axion-photon couplings are in thermal equilibrium in the early universe. They will then inevitably have a nonzero relic abundance. The tricky part of the relic abundance calculation is understanding whether hadronic effects enhance the annihilation rate after the QCD phase transition enough to suppress the abundance sufficiently; see, e.g., [41, 42] for discussions. The result is that annihilation of such particles is not effective enough to reduce their abundance below the experimental bound. Hence, any post-inflation axion model must be free of fractionally charged color-singlet particles, and the constraint (4) applies to all such models. The case \(|k_{G}|=1\) is of special interest for a post-inflation QCD axion. The post-inflation axion field value is randomized in different parts of the universe. As a result, at the time of the QCD phase transition, domain walls will form interpolating between different minima of the periodic axion potential [43]. These lead to a cosmological history inconsistent with the universe we observe unless they are somehow dynamically eliminated. The simplest means of eliminating axion domain walls is if axion strings were formed in the early universe, because domain walls can end on cosmic strings [44, 45, 46]. The number of domain walls ending on a minimal axion string is \(|k_{G}|\). If this is equal to 1, a single domain wall ends on a string and the entire string-wall network can efficiently annihilate away into radiation [47]. If it is larger than one, multiple walls end on a string and the network is frustrated, leading to an inconsistent cosmology. Thus, there is a strong preference in post-inflation axion models for the minimal axion-gluon coupling \(|k_{G}|=1\). 
Summarizing, cosmological considerations for post-inflation axion models point to both the absence of fractionally charged color-singlet particles and a minimal domain wall number \(|k_{G}|=1\). Given (4), this then implies that the smallest value of \(|g_{a\gamma\gamma}|\) is attained for \(k_{F}=4/3\) and hence \(E/N=8/3\). This provides a model-independent argument for the importance of \(E/N=8/3\) as an experimental target. ### The experimental target A number of experiments are already targeting \(E/N=8/3\) to provide a benchmark small value of \(g_{a\gamma\gamma}\). Such experiments are often advertised as having "DFSZ sensitivity," a phrase that I find misleading, though it is widely used and understood. ADMX and CAPP have already achieved this level of sensitivity [16, 18, 20, 22]. To do so, they assume that axions constitute all of the dark matter in our neighborhood, with an abundance \(\rho=0.45\,\mathrm{GeV/cm^{3}}\). However, there is still a substantial, \(O(1)\) uncertainty in the local dark matter density [48, 49, 50]. I would advocate for taking a more conservative approach, using a deliberately low estimate of \(\rho\) or targeting a value somewhat below the \(|g_{a\gamma\gamma}|\) predicted by \(E/N=8/3\), to be sure that the axion signal is not missed! ### Applicability The post-inflation QCD axion scenario is just one possibility. It is equally plausible that the axion was already an independent field during inflation. In the compelling class of models where axions arise from extra-dimensional gauge fields (e.g., [51, 26, 52, 53]), there is no Peccei-Quinn phase transition and the axion is intrinsically of the pre-inflation type. In such models, we do not have a clean cosmological argument for the absence of fractionally charged color-singlet particles (they could simply be inflated away, if they were ever produced at all) or for \(|k_{G}|=1\), as domain walls do not form since inflation leads to a uniform value of the axion field across the universe. Thus, it is somewhat less clear that \(E/N=8/3\) is the appropriate target for a pre-inflation axion. At the same time, there are still reasons why our underlying assumptions are plausible in a broader class of models. There are many theories in which the spectrum of light matter is a good guide to the global form of the gauge group. This is an active area of investigation in quantum gravity, under the name "massless charge sufficiency" [54, 55, 56]. If only very heavy particles carry a particular charge, then the low-energy effective theory has a very good approximate 1-form global symmetry. Similarly, if \(|k_{G}|\neq 1\), then the low-energy effective theory has a good approximate 0-form global symmetry that shifts the axion by multiples of \(2\pi/k_{G}\). Quantum gravity forbids exact global symmetries and places restrictions on the quality of approximate symmetries, which provides a reason to think that models with the minimal gauge group and minimal value of \(|k_{G}|\) are on a firmer footing, or at least raise fewer questions about the UV completion. Beyond these considerations, minimality is often considered an aesthetically appealing principle. In short, we cannot claim that any theoretically consistent axion model will have \(|g_{a\gamma\gamma}|\) at or below the value it takes for \(E/N=8/3\). However, a class of minimal and hence appealing examples have this property, including post-inflation axion models with the simplest solution to the domain wall problem. 
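One way to quantify the conservatism advocated in §4.5: under the standard assumption that a haloscope's signal power scales as \(g_{a\gamma\gamma}^{2}\rho\), a search that quotes sensitivity assuming \(\rho=0.45\,\mathrm{GeV/cm^{3}}\) has in fact only excluded couplings down to a value larger by \(\sqrt{0.45/\rho_{\rm true}}\) if the true local density is lower. The sketch below makes this explicit; the decay constant and the candidate densities are illustrative, and the power-scaling assumption is mine rather than something stated in the text.

```python
import math

ALPHA = 1 / 137.036
RHO_ASSUMED = 0.45          # GeV/cm^3, density assumed in the quoted searches
F_OVER_KG = 1e12            # GeV, illustrative decay constant (m_a ~ 5.7 micro-eV)

# coupling on the E/N = 8/3 line at this decay constant, from Eq. (5)
g_target = ALPHA / (2 * math.pi * F_OVER_KG) * abs(8 / 3 - 1.92)

# Assuming haloscope signal power ~ g^2 * rho, a limit derived for RHO_ASSUMED
# only reaches g_target * sqrt(RHO_ASSUMED / rho_true) when the true local
# density rho_true is lower (illustrative values below).
for rho_true in (0.45, 0.3, 0.2):
    g_probed = g_target * math.sqrt(RHO_ASSUMED / rho_true)
    print(f"rho_true = {rho_true:.2f} GeV/cm^3 -> coupling actually probed: "
          f"{g_probed:.2e} GeV^-1")
```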
## Acknowledgments I thank the theorists of the 1980s for inexplicably leaving basic aspects of axion physics untouched for me to work on in the 2020s. (By writing this I am obviously in danger of receiving messages telling me that I have overlooked a paper that derived exactly (4) decades ago; please let me know!) I thank Shu-Heng Shao and Prateek Agrawal for informing me of their related papers appearing simultaneously [57, 58]. My work is supported in part by the DOE Grant DE-SC0013607. ## Appendix A Details for the full Standard Model A compact cover of the Standard Model gauge group is \[\widetilde{G}_{\rm SM}\cong{\rm SU(3)_{C}}\times{\rm SU(2)_{L}}\times{\rm U(1) _{Y}}. \tag{41}\] (The universal cover would have \(\mathbb{R}\) for the last factor, but we don't expect non-compact gauge groups to appear in real-world physics [59].) There are four possible global structures for the gauge group (see, e.g., [60]): \(\widetilde{G}_{\rm SM}\) itself, \(\widetilde{G}_{\rm SM}/\mathbb{Z}_{2}\), \(\widetilde{G}_{\rm SM}/\mathbb{Z}_{3}\), and \(G_{\rm SM}=\widetilde{G}_{\rm SM}/\mathbb{Z}_{6}\). In the main text we have focused on only the last case, and also have ignored the electroweak structure and discussed only electromagnetism. In this appendix, we will work out the general case. In the cases with a nontrivial quotient, it must be the case that all hypercharges are (in the usual convention) integer multiples of \(1/6\). In the case of \(\widetilde{G}_{\rm SM}\), it could be that there are heavy fields with hypercharge a multiple of \(1/(6k)\) for \(k\neq 1\in\mathbb{Z}\). As in the main text, we denote the \({\rm SU(3)_{C}}\) gauge fields by \(C\) with field strength \(G={\rm d}C-{\rm i}C\wedge C\). We denote the \({\rm SU(2)_{L}}\) gauge fields as \(L\) with field strength \(W={\rm d}L-{\rm i}L\wedge L\), and the \({\rm U(1)_{Y}}\) gauge field by \(\widehat{Y}\) with field strength \(\widehat{B}={\rm d}\widehat{Y}\) in the normalization that the minimal hypercharge is 1, i.e., the hypercharge of the quark doublet field of the Standard Model is 1 (rather than the conventional \(1/6\)). The conventional hypercharge gauge field strength is then \(B=6\widehat{B}\). The kinetic terms are \[I_{\rm kin}=\int{\rm d}^{4}x\sqrt{|g|}\left[\frac{1}{2}f^{2}\partial_{\mu} \theta\partial^{\mu}\theta-\frac{1}{2g_{s}^{2}}{\rm tr}(G_{\mu\nu}G^{\mu\nu})- \frac{1}{2g^{2}}{\rm tr}(W_{\mu\nu}W^{\mu\nu})-\frac{1}{4g^{\prime 2}}B_{\mu\nu}B^{ \mu\nu}\right], \tag{42}\] and the axion couplings of interest are \[I_{\rm ax}=\int\left[\frac{k_{G}}{8\pi^{2}}\,\theta\,{\rm tr}(G\wedge G)+\frac {k_{W}}{8\pi^{2}}\,\theta\,{\rm tr}(W\wedge W)+\frac{k_{B}}{8\pi^{2}}\,\theta \,B\wedge B\right]. \tag{43}\] In our derivation below it will also be useful to normalize the last term as \[\int\frac{k_{\widehat{B}}}{8\pi^{2}}\,\theta\,\widehat{B}\wedge\widehat{B}= \int\frac{k_{B}}{8\pi^{2}}\,\theta\,B\wedge B\quad\Rightarrow\quad k_{ \widehat{B}}=36k_{B}. \tag{44}\] Below the electroweak symmetry breaking scale, we can integrate out the \(Z\) boson by setting \(L^{3}=Y=A\), where \(A\) is the conventionally normalized photon field (i.e., normalized so that the electron charge is \(-1\), as in the main text). In this case the axion couplings match onto (3) with the axion-photon coupling \[k_{F}=\frac{1}{2}k_{W}+k_{B}. \tag{45}\] Independent of the global structure of the gauge group, consideration of SU(3)\({}_{\rm C}\) instantons and SU(2)\({}_{\rm L}\) instantons shows that \[k_{G}\in\mathbb{Z},\qquad k_{W}\in\mathbb{Z}. 
\tag{46}\] The nontrivial calculation is to understand the quantization condition involving \(k_{\widehat{B}}\) for each of the possible global structures. Here we present the results from the most complex case to the simplest one. ### Case 1: \(G_{\rm SM}=\widetilde{G}_{\rm SM}/\mathbb{Z}_{6}\). The center of SU(2) is generated by the matrix \(w=-{\bf 1}\). The \(\mathbb{Z}_{6}\) quotient corresponds to the fact that the combined action of \(z\in{\rm SU(3)_{C}}\) (see (8)), \(w\in{\rm SU(2)_{L}}\), and the \({\rm U(1)_{Y}}\) element with \(\alpha=2\pi/6\) acts trivially on all of the fields. The fundamental group \(\pi_{1}(G_{\rm SM})\cong\mathbb{Z}\) is generated by the projection of a path in \(\widetilde{G}_{\rm SM}\) from the origin \(({\bf 1},{\bf 1},1)\) to the point \((z,w,{\rm e}^{2\pi{\rm i}/6})\), which we can take to be given by the following map \(f:[0,1]\to\widetilde{G}_{\rm SM}\): \[(U(t),V(t),\xi(t))=(\exp(2\pi{\rm i}tT/3),\exp(\pi{\rm i}t\sigma),\exp(2\pi{ \rm i}t/6)), \tag{47}\] with \(T\) as in (12) and \[\sigma=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}. \tag{48}\] The calculation proceeds along the same lines as in SS2, only slightly more complicated. Our gauge transformations are \[C \mapsto UCU^{-1}-{\rm i}({\rm d}U)U^{-1}, \tag{49}\] \[L \mapsto VCV^{-1}-{\rm i}({\rm d}V)V^{-1},\] (50) \[\widehat{Y} \mapsto\widehat{Y}+{\rm d}\xi. \tag{51}\] On a 2-torus, we consider a gauge field configuration of the form \[C=G_{12}x_{1}{\rm d}x_{2},\quad L=W_{12}x_{1}{\rm d}x_{2},\quad\widehat{Y}= \widehat{B}_{12}x_{1}{\rm d}x_{2}. \tag{52}\] For this to be well-defined we ask that under \(x_{1}\mapsto x_{1}+2\pi r_{1}\) it map to a configuration that is equivalent under a gauge transformation winding around the \(x_{2}\) direction as the generator of \(\pi_{1}(G_{\rm SM})\), i.e., \[2\pi r_{1}G_{12}{\rm d}x_{2} =-{\rm i}({\rm d}U(x_{2}/2\pi r_{2}))U^{-1}(x_{2}/2\pi r_{2}), \tag{53}\] \[2\pi r_{1}W_{12}{\rm d}x_{2} =-{\rm i}({\rm d}V(x_{2}/2\pi r_{2}))V^{-1}(x_{2}/2\pi r_{2}),\] (54) \[2\pi r_{1}\widehat{B}_{12}{\rm d}x_{2} =-{\rm i}\xi^{-1}(x_{2}/2\pi r_{2}){\rm d}\xi(x_{2}/2\pi r_{2}). \tag{55}\] Using (47) we read off that \[G_{12} =\frac{1}{3}\frac{1}{2\pi r_{1}r_{2}}T, \tag{56}\] \[W_{12} =\frac{1}{2}\frac{1}{2\pi r_{1}r_{2}}\sigma,\] (57) \[\widehat{B}_{12} =\frac{1}{6}\frac{1}{2\pi r_{1}r_{2}}. \tag{58}\] Next, consider a field configuration with the same type of flux on both the \((x_{1},x_{2})\) torus and the \((x_{3},x_{4})\) torus: \[G =\frac{1}{3}T\left(\frac{1}{2\pi r_{1}r_{2}}\,{\rm d}x_{1}\wedge {\rm d}x_{2}+\frac{1}{2\pi r_{3}r_{4}}\,{\rm d}x_{3}\wedge{\rm d}x_{4}\right), \tag{59}\] \[W =\frac{1}{2}\sigma\left(\frac{1}{2\pi r_{1}r_{2}}\,{\rm d}x_{1} \wedge{\rm d}x_{2}+\frac{1}{2\pi r_{3}r_{4}}\,{\rm d}x_{3}\wedge{\rm d}x_{4} \right),\] (60) \[\widehat{B} =\frac{1}{6}\left(\frac{1}{2\pi r_{1}r_{2}}\,{\rm d}x_{1}\wedge{ \rm d}x_{2}+\frac{1}{2\pi r_{3}r_{4}}\,{\rm d}x_{3}\wedge{\rm d}x_{4}\right). \tag{61}\] Then we calculate that: \[\int{\rm tr}(G\wedge G) =\frac{1}{9}8\pi^{2}{\rm tr}(TT)=\frac{2}{3}8\pi^{2}, \tag{62}\] \[\int{\rm tr}(W\wedge W) =\frac{1}{4}8\pi^{2}{\rm tr}(\sigma\sigma)=\frac{1}{2}8\pi^{2},\] (63) \[\int\widehat{B}\wedge\widehat{B} =\frac{1}{36}8\pi^{2}. \tag{64}\] Invariance of \(\exp(iI)\) then translates into the correlated quantization condition \[\widetilde{G}_{\rm SM}/\mathbb{Z}_{6}:\qquad\frac{2}{3}k_{G}+\frac{1}{2}k_{W} +\frac{1}{36}k_{\widehat{B}}=\frac{2}{3}k_{G}+\frac{1}{2}k_{W}+k_{B}\in\mathbb{ Z}. 
\tag{65}\] Now, because of (45), this is identical to the condition (26) that we derived in the main text from the U(3) symmetry below the electroweak scale. ### Case 2: \(\widetilde{G}_{\rm SM}/\mathbb{Z}_{3}\). In this case, the fundamental group is generated by the projection of a path in \(\widetilde{G}_{\rm SM}\) from \(({\bf 1},{\bf 1},1)\) to \((z,{\bf 1},{\bf e}^{2{\rm i}/3})\). We can take this path to be stationary in the SU(2)\({}_{\rm L}\) factor, and then the calculation is precisely like what we discussed in SS2 except that the U(1) factor we are focusing on now is hypercharge rather than electromagnetism. As a result, one derives \[\widetilde{G}_{\rm SM}/\mathbb{Z}_{3}:\qquad\frac{2}{3}k_{G}+\frac{1}{9}k_{ \widehat{B}}=\frac{2}{3}k_{G}+4k_{B}\in\mathbb{Z}. \tag{66}\] For example, one could consider a configuration with \(k_{G}=1,k_{W}=0\), and then an allowed choices is \(k_{\widehat{B}}=-6\). Below the electroweak scale, this leads to an axion-photon coupling of \(k_{F}=-\frac{1}{6}\). Such a value, which is not an integer multiple of \(1/3\), can never arise for the minimal Standard Model gauge group. ### Case 3: \(\widetilde{G}_{\text{SM}}/\mathbb{Z}_{2}\). In this case, the fundamental group is generated by the projection of a path in \(\widetilde{G}_{\text{SM}}\) from \((\mathbf{1},\mathbf{1},1)\) to \((\mathbf{1},w,\mathbf{e}^{\text{\tiny{\tt{gi}}}})\). Taking the path to be stationary in \(\text{SU}(3)_{\text{C}}\) and following the familiar logic, we consider the field configurations \[W =\frac{1}{2}\sigma\left(\frac{1}{2\pi r_{1}r_{2}}\,\mathrm{d}x_{1 }\wedge\mathrm{d}x_{2}+\frac{1}{2\pi r_{3}r_{4}}\,\mathrm{d}x_{3}\wedge \mathrm{d}x_{4}\right), \tag{67}\] \[\widehat{B} =\frac{1}{2}\left(\frac{1}{2\pi r_{1}r_{2}}\,\mathrm{d}x_{1} \wedge\mathrm{d}x_{2}+\frac{1}{2\pi r_{3}r_{4}}\,\mathrm{d}x_{3}\wedge \mathrm{d}x_{4}\right). \tag{68}\] Then we find that \[\int\mathrm{tr}(W\wedge W) =\frac{1}{4}8\pi^{2}\mathrm{tr}(\sigma\sigma)=\frac{1}{2}8\pi^{2}, \tag{69}\] \[\int\widehat{B}\wedge\widehat{B} =\frac{1}{4}8\pi^{2}, \tag{70}\] and finally a correlated quantization condition \[\widetilde{G}_{\text{SM}}/\mathbb{Z}_{2}:\qquad\frac{1}{2}k_{W}+\frac{1}{4}k_ {\widehat{B}}=\frac{1}{2}k_{W}+9k_{B}\in\mathbb{Z}. \tag{71}\] For example, we could take \(k_{W}=1\), \(k_{\widehat{B}}=-2\), and then \(k_{F}=\frac{1}{2}-\frac{1}{18}=\frac{4}{9}\). Again, we see that this larger gauge group allows for a wider range of axion-photon couplings than in the minimal theory. ### Case 4: \(\widetilde{G}_{\text{SM}}\). In the case where there is no quotient, we have only the usual configurations of pure \(\text{U}(1)_{\text{Y}}\) flux to think about. In this case, because we have normalized \(\widehat{B}\) so that the minimum hypercharge is \(1\), it is a standard result that \[k_{\widehat{B}}=36k_{B}\in\mathbb{Z}. \tag{72}\] This allows the axion-photon coupling \(k_{F}\) to be as small as \(\frac{1}{36}\), much smaller than it can be for the minimal Standard Model gauge group. More generally, we can consider the case that the gauge group is \(\widetilde{G}_{\text{SM}}\) but with an even smaller minimal hypercharge \(q_{\text{min}}\) (in the usual normalization of hypercharge). In this case \[\widetilde{G}_{\text{SM}}:\qquad\frac{1}{q_{\text{min}}^{2}}k_{B}\in\mathbb{Z}, \tag{73}\] and the axion-photon coupling can be a multiple of \(q_{\text{min}}^{2}\).
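The correlated quantization conditions derived above are easy to verify numerically. The following minimal sketch (ours, not part of the original analysis) uses exact rational arithmetic to test whether a triple \((k_{G},k_{W},k_{\widehat{B}})\) satisfies the condition for each global structure, and reproduces the example axion-photon couplings quoted above; all function names are purely illustrative.

```python
from fractions import Fraction

# Conventions of this appendix: k_Bhat = 36 k_B, and below the electroweak
# scale the axion-photon coupling is k_F = k_W/2 + k_B, cf. (44)-(45).
def axion_photon_coupling(k_W, k_Bhat):
    return Fraction(k_W, 2) + Fraction(k_Bhat, 36)

def is_allowed(k_G, k_W, k_Bhat, quotient):
    """Correlated quantization condition for the gauge group G~_SM / Z_quotient
    (quotient = 1 denotes the compact cover G~_SM itself)."""
    k_B = Fraction(k_Bhat, 36)
    if quotient == 6:      # eq. (65)
        cond = Fraction(2, 3) * k_G + Fraction(k_W, 2) + k_B
    elif quotient == 3:    # eq. (66)
        cond = Fraction(2, 3) * k_G + 4 * k_B
    elif quotient == 2:    # eq. (71)
        cond = Fraction(k_W, 2) + 9 * k_B
    else:                  # eq. (72): k_Bhat itself must be an integer
        cond = Fraction(k_Bhat)
    return cond.denominator == 1

# Examples quoted in the text:
print(is_allowed(1, 0, -6, 3), axion_photon_coupling(0, -6))  # True, k_F = -1/6
print(is_allowed(0, 1, -2, 2), axion_photon_coupling(1, -2))  # True, k_F = 4/9
```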
2309.05390
Sibyll$^\bigstar$: ad-hoc modifications for an improved description of muon data in extensive air showers
Current simulations of air showers produced by ultra-high energy cosmic rays (UHECRs) do not satisfactorily describe recent experimental data, particularly when looking at the muonic shower component relative to the electromagnetic one. Discrepancies can be seen in both average values and on an individual shower-by-shower basis. It is thought that the muonic part of the air showers isn't accurately represented in simulations, despite various attempts to boost the number of muons within standard hadronic interaction physics. In this study, we investigate whether modifying the final state of events created with Sibyll~2.3d in air shower simulations can achieve a more consistent description of the muon content observed in experimental data. We create several scenarios where we separately increase the production of baryons, $\rho^0$, and strange particles to examine their impact on realistic air shower simulations. Our results suggest that these ad-hoc modifications can improve the simulations, providing a closer match to the observed muon content in air showers. One side-effect of the increased muon production in the considered model versions is a smaller difference in the predicted total muon numbers for proton and iron showers. However, more research is needed to find out whether any of these adjustments offers a realistic solution to the mismatches seen in data, and to identify the precise physical process causing these changes in the model. We hope that these modified model versions will also help to develop improved machine-learning analyses of air shower data and to estimate systematic uncertainties related to shortcomings of hadronic interaction models.
Felix Riehn, Ralph Engel, Anatoli Fedynitch
2023-09-11T11:41:08Z
http://arxiv.org/abs/2309.05390v1
# Sibyll: ad-hoc modifications for an improved description of muon data in extensive air showers ###### Abstract: Current simulations of air showers produced by ultra-high energy cosmic rays (UHECRs) do not satisfactorily describe recent experimental data, particularly when looking at the muonic shower component relative to the electromagnetic one. Discrepancies can be seen in both average values and on an individual shower-by-shower basis. It is thought that the muonic part of the air showers isn't accurately represented in simulations, despite various attempts to boost the number of muons within standard hadronic interaction physics. In this study, we investigate whether modifying the final state of events created with Sibyll 2.3d in air shower simulations can achieve a more consistent description of the muon content observed in experimental data. We create several scenarios where we separately increase the production of baryons, \(\rho^{0}\), and strange particles to examine their impact on realistic air shower simulations. Our results suggest that these ad-hoc modifications can improve the simulations, providing a closer match to the observed muon content in air showers. One side-effect of the increased muon production in the considered model versions is a smaller difference in the predicted total muon numbers for proton and iron showers. However, more research is needed to find out whether any of these adjustments offers a realistic solution to the mismatches seen in data, and to identify the precise physical process causing these changes in the model. We hope that these modified model versions will also help to develop improved machine-learning analyses of air shower data and to estimate sys. uncertainties related to shortcomings of hadronic interaction models. Introduction This study addresses the 'Muon Puzzle' [1, 2], a key challenge in the interpretation of extensive air shower (EAS) data. Specifically, we focus on the discrepancy between observed and predicted muon numbers in EAS, along with the associated inconsistency between the depth of shower maximum (\(\langle X_{\rm max}\rangle\)) and the ground-level signal [3, 4]. In pursuit of gaining better understanding of this problem, we explore ad-hoc modifications of the hadronic interaction model Sibyll 2.3d [5, 6, 7]. While numerous standard [8, 9, 10, 11, 12] and exotic [13, 14, 15, 16, 17] mechanisms have been proposed to address this problem, our study specifically focuses on quantitatively evaluating several conventional mechanisms. For this purpose, we employ a customized version of Sibyll, designed for direct use in realistic EAS simulations. All modifications of the model satisfy the constraint that, on an event-by-event level, all fundamental constraints such as energy-momentum and quantum numbers of relevance are conserved. Furthermore, existing experimental data from particle physics experiments are considered as guiding input, limiting some of the changes to energy and phase space regions for which no such data exist. ## 2 Sibyll\({}^{\star}\) We construct the custom models by modifying events generated with Sibyll 2.3d. Once the initial event generation is complete, we start by letting all hadronic resonances with a shorter lifetime than that of K\({}_{\rm s}^{0}\) decay, except for \(\pi^{0}\). We then go through the event's particle list to identify appropriate pairs or triples of pions, only considering the five nearest neighbors for every pion. 
If the combined invariant mass of the selected pions is sufficient and the sampling criterion is fulfilled, we exchange them with a pair of new particles that together have the same total momentum, invariant mass, and charge. We calculate the final momenta from the invariant mass, the new particles' masses, and a minor transverse momentum sourced from an exponential distribution in transverse mass. In these altered events, we maintain conservation of energy, momentum, and charge, although (iso)spin conservation is not upheld. The acceptance rate of these particle exchanges hinges on the total center-of-mass (CM) energy (\(\sqrt{s}\)) and the fraction of longitudinal momentum \(x_{\rm F}\) (defined as \(p_{z}/p_{z,\rm max}\), with momenta \(p\) in the CM frame) of the initial particle. The probability of exchanging particles is parameterized by \[P_{i}\ =\ P_{i,0}\cdot|x_{\rm F}|^{\epsilon_{i}}\cdot f(\sqrt{s},E_{\rm thr}). \tag{1}\] The emphasis given to forward or central particles depends on the chosen value for the exponent \(\epsilon_{i}\) in the \(x_{\rm F}\)-dependence. If \(\epsilon_{i}=0\), all particles receive equal weight, preserving the original distribution's shape in longitudinal phase space. As \(\epsilon_{i}\) approaches 1, the forward part of the \(x_{\rm F}\)-spectrum is enhanced. The energy dependence of \(f(\sqrt{s},E_{\rm thr})\) is logarithmic. It is set such that the rate is precisely zero below a threshold energy \(E_{\rm thr}\) and reaches the nominal \(P_{0,i}\) at lab energies of \(10^{19}\) eV (\(1.37\times 10^{5}\) GeV in CM frame). The threshold energy \(E_{\rm thr}\) is set at 5 GeV. This parameterization of energy dependence represents a very gradual change in particle production, from no change at low energies, where fixed target experiments effectively limit the entire phase space, to LHC energies, where only the central region is well constrained, up to the UHECR energy scale (with no lab experiment constraints). Overall, this spans five orders of magnitude in energy, and the modification scales from zero to one. As an alternative, we allow a more drastic increase of the exchange rate towards high energies (above \(13\,\mathrm{TeV}\) in CM frame) represented as \(P_{i}\to P_{i,0}+P_{i,\,\mathrm{HE}}\cdot f_{\mathrm{HE}}(\sqrt{s},13\,\mathrm{ TeV})\). This mode depicts a swift transition to new physics beyond the LHC scale. Lastly, we apply this algorithm for all projectiles or only for mesons. Using the previously discussed algorithm, we construct different Sibyll 2.3d variants with an aim to enhance muon production in EAS. We select three distinct modifications known for their efficacy in muon production: \(\rho^{0}\) production, baryon anti-baryon pair-production, and kaon production enhancement [8, 9, 10, 11, 17]. These variants are denoted as \(\mathrm{S}^{\star}(\rho^{0})\), \(\mathrm{S}^{\star}(\bar{p})\), and \(\mathrm{S}^{\star}(\mathrm{K}^{\pm,0})\). In the \(\rho^{0}\) variant, \(\pi^{0}\) are directly substituted with \(\rho^{0}\). For the baryon pair and kaon pair variant, charge-neutral combinations of two or three pions are replaced with \(p\bar{p}\) or \(n\bar{n}\) pairs, and \(\mathrm{K}^{+}\mathrm{K}^{-}\) or \(\mathrm{K}^{0}\)\(\mathrm{\bar{K}}^{0}\) pairs respectively. We adjust the parameters for each variant to align the energy in the corresponding component with NA61 measurements [18, 19] (refer to Fig.1 and Fig. 2). 
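To make the parameterization in Eq. (1) concrete, the sketch below evaluates the exchange probability in Python. The precise functional form of the logarithmic ramp \(f(\sqrt{s},E_{\rm thr})\) (taken here to be log-linear between the threshold and the CM energy corresponding to \(10^{19}\) eV in the lab frame) and the interpretation of the threshold as a CM energy are our assumptions; the variant parameters are those of Tab. 1, and the rapid high-energy rise \(P_{i,\rm HE}\) is not modeled.

```python
import math

SQRT_S_MAX = 1.37e5  # GeV: CM energy corresponding to E_lab = 10^19 eV
SQRT_S_THR = 5.0     # GeV: threshold energy E_thr (assumed to refer to sqrt(s))

def f_energy(sqrt_s):
    """Logarithmic energy dependence of Eq. (1): zero below threshold and
    rising log-linearly to 1 at SQRT_S_MAX (assumed functional form)."""
    if sqrt_s <= SQRT_S_THR:
        return 0.0
    return min(math.log(sqrt_s / SQRT_S_THR) / math.log(SQRT_S_MAX / SQRT_S_THR), 1.0)

def exchange_probability(x_F, sqrt_s, P_i0, eps_i):
    """P_i = P_{i,0} * |x_F|**eps_i * f(sqrt(s), E_thr), cf. Eq. (1)."""
    return P_i0 * abs(x_F) ** eps_i * f_energy(sqrt_s)

# rho^0 variant of Tab. 1: P_0 = 0.8, forward weight eps = 0.3
for sqrt_s in (10.0, 1.0e3, 1.0e5):
    print(sqrt_s, exchange_probability(x_F=0.5, sqrt_s=sqrt_s, P_i0=0.8, eps_i=0.3))
```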
Producing \(\rho^{0}\) significantly impacts muon production in EAS because \(\rho\) mesons can form directly from the pion projectile. This mechanism doesn't hold for proton projectiles, hence for the \(\rho^{0}\) variant, only meson projectile interactions are modified. However, the modifications for baryon Figure 1: Fraction of projectile energy carried by \(\rho^{0}\), anti protons and charged kaons in \(\pi^{-}\mathrm{C}\) collisions [18, 19]. Lines are Sibyll 2.3d, Sibyll 2.1 and different variants of Sibyll\({}^{\star}\). Figure 2: Multiplicities of \(\pi^{+}\), anti protons and charged kaons in pp collisions [20, 21, 22]. Lines are Sibyll 2.3d, Sibyll 2.1 and different variants of Sibyll\({}^{\star}\). pair and kaon production apply to any projectile and include a rapid increase at energies beyond the LHC. The parameters are adjusted so that the energy fraction carried by all hadrons except neutral pions at 10 EeV is approximately the same across all variants (\(\approx\) 0.82). Furthermore, we create a fourth variant using both \(\rho^{0}\) and baryon pair production (S\({}^{\star}\)(mix)). In this scenario, we choose a more moderate increase of \(\rho\) production and dismiss the rapid increase of baryon production at high energies. The parameters of the different variants are detailed in Tab. 1. Note that the mechanism driving the effectiveness of both strangeness enhancement and increased baryon pair production in raising the muon number is fundamentally the same: quantum number conservation (strangeness and baryon number, respectively). The key difference is that baryon production works effectively at all energies (no nucleon decay), while strangeness is only conserved in EAS at high energies, where kaon decay is negligible. Note also that since hyperons do not decay into kaons, enhanced hyperon production is part of the variant with enhanced baryon production. ## 3 EAS predictions In our EAS predictions, we carried out full air shower simulations using CORSIKA1 for each S\({}^{\star}\) variant. We compared the resulting average depth of shower maximum (\(\langle X_{\rm max}\rangle\)) and the average number of muons at ground level (\(\langle N_{\mu}\rangle\)) with simulations using the unmodified Sibyll 2.3d. These simulations were set to mirror conditions at the Pierre Auger Observatory site. The results for a primary proton with energy 10 EeV and an incident zenith angle of 67\({}^{\circ}\) are presented in Fig. 3. These results show a notable increase in the number of muons, while \(\langle X_{\rm max}\rangle\) remains largely unaffected. Footnote 1: CORSIKA v7.7420 [23]; The magnetic field and observation level are set to the values of the site of the Pierre Auger Observatory. Low-energy hadronic interactions were modeled using FLUKA 2021.2.9 [24]. In the analysis shown in Fig. 4, we compare the predicted values of \(\langle X_{\rm max}\rangle\) and \(\langle\ln N_{\mu}\rangle\) from proton, helium, nitrogen, and iron primaries with the measurement from the Pierre Auger Observatory [25]. Our findings indicate that only the \(\rho^{0}\) and the mixed \(\rho^{0}\) baryon-pair variant yield a sufficient number of muons to match the levels shown in the data. The model predictions in Fig. 4 for p, He, N and Fe all fall on a line. 
The reason is that per the superposition model, \(\langle X_{\rm max}\rangle\) and \(\langle\ln N_{\mu}\rangle\), have the same dependence on the primary mass (linear \begin{table} \begin{tabular}{c c c c c} \hline Label & \(P_{i,0}\) & forward weight \(\epsilon_{i}\) & projectiles & \(P_{i,\rm HE}\) \\ \hline \(\rho^{0}\) & 0.8 & 0.3 & mesons & - \\ \(\bar{p}\) & 0.5 & 0.7 & all & 0.25 \\ \(K^{\pm,0}\) & 0.5 & 0.8 & all & 0.3 \\ \(\rho\)-mix & 0.8 & 0.4 & mesons & - \\ \(\bar{p}\)-mix & 0.5 & 0.7 & all & - \\ \hline \end{tabular} \end{table} Table 1: Parameters in different variants of Sibyll\({}^{\star}\). in \(\ln A\)), e.g. \(\langle\ln N_{\mu}\rangle(A,E)~{}=~{}(1-\beta)\ln A~{}+~{}\langle\ln N_{\mu} \rangle(1,E)\), where \(\beta\) is the exponent in the energy dependence of the number of muons for proton primaries, that is \(\langle\ln N_{\mu}\rangle(1,E)~{}=~{}\beta\ln E\). In simplified models a la Heitler-Matthews we have \[N_{\mu}=A\cdot\left(\frac{E}{A\cdot E_{\rm dec}}\right)^{\beta} \tag{2}\] with \(\beta\) being computed as \(\ln N_{\rm ch}/\ln N_{\rm tot}\), where \(N_{\rm ch}\) and \(N_{\rm tot}\) denote the multiplicities of charged and all pions, respectively [26]. More broadly, \(\beta\) is associated with the fraction of hadrons that carry enough kinetic energy to re-interact rather than decay. The slopes in Fig. 4 decreases from Sibyll 2.1 towards the variants with the highest levels of muon production. This behavior is common to all variants S\({}^{\bigstar}\). The greater a fraction of energy is kept in hadrons, the larger the increase in the number of muons and the larger \(\beta\), and consequently, the less pronounced is the separation of the primary masses in \(\langle N_{\mu}\rangle\). Figure 4: Comparison of the predicted values for \(\langle X_{\rm max}\rangle\) and \(\langle N_{\mu}\rangle\) from various Sibyll\({}^{\bigstar}\) variants to the measurements obtained from the Pierre Auger Observatory. [25] Figure 3: \(N_{\mu}\) and \(X_{\rm max}\) for proton showers at 67\({}^{\circ}\) across Sibyll variants. The left panel shows a substantial increase, up to 35%, in the number of muons for the mixed and \(\rho\) variant. However, the variation on the shower maximum between Sibyll 2.3d and its variants is less than 7 g/cm\({}^{2}\). The grey line represents the required increase in muon count to align with the data from the Pierre Auger Observatory. The data clearly favor a larger muon content at high energy leading to a reduction in the mass resolution. However Fig. 5 shows that for specific experiments the situation may be not as dramatic, as the separation between primary masses (respectively \(\beta\)) varies with the distance from the shower axis. The reason is that muons at different lateral distances are dominated by different phases of the shower development [27, 28, 29, 30, 6]. ## 4 Inclusive fluxes We employed the MCEq code [31] for a comparative analysis of the atmospheric muon and neutrino fluxes as predicted by the \(\mathrm{S}^{\star}\) versions against the original Sibyll 2.3d. Considering the modification (\(\mathrm{S}^{\star}(\rho^{0})\)) impacts secondary pion interactions and (\(\mathrm{S}^{\star}(\bar{p})\)) affects the production of secondary baryons, both of which influence the shower development in deeper atmospheric layers, we didn't foresee changes to inclusive fluxes, which primarily depend on the air shower's early stages. Our findings confirm no significant effects. 
However, since \(K^{\pm}\) decays into muons and neutrinos, for the \(\mathrm{S}^{\star}(\mathrm{K}^{\pm},0)\) model, we identified a minor increase in muon fluxes of around 5% at tens of TeV and PeV energies, and up to a 20% increase in atmospheric neutrino fluxes. ## 5 Discussion In this study, we explored custom variants of the Sibyll 2.3d model, aiming to boost muon production in extensive air showers. Our focus centered on three modifications: \(\rho^{0}\) production, baryon anti-baryon pair-production, and kaon production enhancement. Significantly, the \(\rho^{0}\) and mixed \(\rho^{0}\) baryon-pair enhancements effectively aligned with the observed muon production data from the Pierre Auger Observatory at 10 EeV. In contrast, elevating strangeness or baryon production proved insufficient, even when amplified to extreme levels. However, our results don't definitively eliminate these scenarios, especially considering that our implementation does not permit the production of leading strangeness (e.g. \(p\,\rightarrow\Lambda^{0}\,\rightarrow\mathrm{K}^{+}+\mathrm{n}\)). Figure 5: Ratio between the average number of muons in iron and proton showers (left panel) and the slope \(\beta\) (right panel). Both are shown as a function of the distance from the shower axis. The inset numbers in the left panel show the ratio of the total numbers of muons between iron and proton showers and the ratio of the muon densities at 1000 m in parentheses. Our simulations show that these modifications increase the number of muons, while the average depth of shower maximum remains largely unaffected. Another observation is that the degree of primary mass separation through muon measurement is predicted to be smaller in the modified models than in the original Sibyll 2.3d and other interaction models as along as the total number of muons is concerned. This is also expected within the Heitler-Matthews model [26] as more energy is kept in the hadronic channel in each interaction, increasing the exponent \(\beta\). On the other hand, the muon density at sufficient distance from the shower core (for example, 1000 m) is a very well suited observable for composition studies. Moreover, despite certain modifications leading to increased muon fluxes, inclusive fluxes - primarily dependent on early air shower stages - remained largely unchanged. Only the \(\mathrm{S}^{\star}(\mathrm{K}^{\pm},0)\) model showed a minor increase in muon fluxes and a noticeable increase in atmospheric neutrino fluxes, due to the decay of \(K^{\pm}\) into muons and neutrinos. The provided variants of Sibyll 2.3d can be used to train machine-learning models like deep neural networks to have a better description of the measurements and to estimate systematic uncertainties stemming from shortcomings of modeling hadronic multiparticle production. The authors acknowledge many fruitful discussions with colleagues of the Pierre Auger and IceCube Collaborations. They have used computing resources provided by the Academia Sinica Grid Computing Center (ASGC), supported by the Institute of Physics at Academia Sinica. FR and RE are supported in part by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 101065027 and by BMBF grant No. 05A2023VK4, respectively.
2309.04918
Global Message Ordering using Distributed Kafka Clusters
In contemporary distributed systems, logs are produced at an astounding rate, generating terabytes of data within mere seconds. These logs, containing pivotal details like system metrics, user actions, and diverse events, are foundational to the system's consistent and accurate operations. Precise log ordering becomes indispensable to avert potential ambiguities and discordances in system functionalities. Apache Kafka, a prevalent distributed message queue, offers significant solutions to various distributed log processing challenges. However, it presents an inherent limitation: while Kafka ensures the in-order delivery of messages within a single partition to the consumer, it falls short in guaranteeing a global order for messages spanning multiple partitions. This research delves into innovative methodologies to achieve global ordering of messages within a Kafka topic, aiming to bolster the integrity and consistency of log processing in distributed systems. Our code is available on GitHub.
Shashank Kumar, Aryan Jadon, Sachin Sharma
2023-09-10T02:34:29Z
http://arxiv.org/abs/2309.04918v2
# Global Message Ordering using Distributed Kafka Clusters ###### Abstract In contemporary distributed systems, logs are produced at an astounding rate, generating terabytes of data within mere seconds. These logs, containing pivotal details like system metrics, user actions, and diverse events, are foundational to the system's consistent and accurate operations. Precise log ordering becomes indispensable to avert potential ambiguities and discordances in system functionalities. Apache Kafka, a prevalent distributed message queue, offers significant solutions to various distributed log processing challenges. However, it presents an inherent limitation: while Kafka ensures the in-order delivery of messages within a single partition to the consumer, it falls short in guaranteeing a global order for messages spanning multiple partitions. This research delves into innovative methodologies to achieve global ordering of messages within a Kafka topic, aiming to bolster the integrity and consistency of log processing in distributed systems. Our code is available on GitHub - [https://github.com/aryan-jadon/Distributed-Kafka-Clusters](https://github.com/aryan-jadon/Distributed-Kafka-Clusters). Apache Kafka, Distributed message queues, Distributed systems, Global ordering, Log inconsistencies, Log processing, Message ordering, Partitioning ## I Introduction Since the inception of Web 2.0 and the ongoing evolution to Web 3.0, there has been a significant proliferation in the decentralization of computer systems [1]. Concurrently, the landscape of data generation has undergone transformative shifts [2]. Spurred by the surge of IoT(Internet of Things) devices, the prevalence of social media platforms, the rise of online services, and the myriad of digital infrastructures and architectures, there has been a meteoric surge in the magnitude, pace, and diversity of data generated [3]. Within a nodal cluster, data emanates from multifarious sources: 1. Events denoting user activities such as logins, content access, user engagements, and transactions. 2. System-centric metrics encompassing service engagements, network metrics [4], and node-specific resource utilizations like heap memory, CPU, and disk performance. In traditional systems, data analysis was predominantly conducted offline, extracting logs from operational servers [5]. In contrast, contemporary systems place significant emphasis on real-time data analysis, leveraging immediate feedback to inform subsequent operational decisions [6]. Apache Kafka [7] has evolved as a potent tool to confront the intricacies introduced by the surge in data volume. It facilitates the dependable, scalable, and proficient acquisition, preservation, and analysis of streaming data. The distributed nature of Kafka supports horizontal expansion, distributing data over numerous brokers, thus ensuring high-capacity and fault-resilient data handling [8]. Data can be introduced into Kafka topics via diverse methods including producers, connectors, or alternative data integration techniques. Once within Kafka, this data can undergo processing, and transformation, and be accessed by an array of applications, infrastructures, or analytical workflows. Kafka's capacity to manage substantial data loads, ensure fault resilience, and facilitate real-time operations positions it as a preferred option for an extensive range of applications such as data pipeline construction, event-centric architectures, log consolidation, and stream analytics, among others [9]. 
A notable challenge in utilizing Kafka is its provision of ordering guarantees limited to an individual partition and not across multiple partitions. Each Kafka partition is designated to a particular broker and functions autonomously, facilitating parallel computations and enhancing scalability [10]. Yet, due to this segmented structure, it's not feasible to ensure a universal order for data spanning all partitions within a topic. This absence of comprehensive ordering across partitions poses constraints in situations necessitating rigorous sequential processing or specific event sequencing. In this research paper, we aim to tackle the issue of attaining a universal data order across partitions within Apache Kafka using **Aggregator and Sorter**, **Single Consumer within a consumer group**, and **Batch Commit and Broadcast Protocol Algorithms**. By overcoming this limitation, our research strives to make a significant contribution to both the Kafka community and practitioners dealing with situations where maintaining a global data order is paramount for their data processing workflows. We possess assurance that our findings will not only enhance the capabilities of Kafka but also unveil novel prospects for applications that require exact event sequencing and effective dependency management across multiple partitions. The structure of this paper is organized as follows: Section II delves into the related work, while Section III elaborates on the Proposed Architecture and Design implementations. Experimental findings are presented in Section IV, and Section V concludes the paper and offers insights into future work. ## II Related Work The challenge lies in the realm of distributed systems, spanning thousands of components scattered across the globe. This complex landscape necessitates a dedicated Middleware infrastructure. These distributed systems, by their very nature, are rigid and static, requiring a transformation from point-to-point synchronous applications to large-scale asynchronous systems. This transition is pivotal due to the glaring problem: traditional setups, such as Meta Scribe, employ log aggregators that funnel data from frontend machines over sockets, eventually storing it in HDFS for offline processing [11]. However, this approach leaves the potential of real-time data utilization untapped, creating a substantial gap. While other messaging queue systems, like IBM Web-sphere, offer global message ordering [12], they falter in high throughput scenarios due to the stringent delivery guarantees mandating message exchange acknowledgments. Such guarantees, while valuable in certain contexts, prove excessive for noncritical log data. Similarly, messaging services like RabbitMQ [13] and ActiveMQ [14] maintain global ordering but stumble when faced with the scale of data, as they lack the ability to send multiple messages within a single request, resulting in costly TCP/IP round trips. Kafka protocol is a game-changer in the real-time data processing realm. It empowers consumers to access messages as soon as Brokers publish them [15]. Kafka's pull mechanism for data access ensures consumers remain unfared by high network traffic, thus delivering unparalleled throughput. The magic behind Kafka's success lies in its elegant architecture, leveraging Zookeeper for essential distributed tasks like data partition replication, leader consensus, and maintaining consumer and broker registries, including tracking consumer data offsets for each partition. 
Figure 1 vividly illustrates the Zookeeper's role in this architecture, organizing registries in a file directory structure. Brokers, consumers, and ownership registries are ephemeral, ensuring seamless load balancing when servers are added or removed. In addition, Kafka maintains a persistent offset registry for data recovery in case of consumer failures. Kafka's innovative solution hinges on parallel data streaming using partitions, where messages within a partition maintain their order, enhancing throughput. However, this approach poses a challenge for applications requiring global message ordering when dealing with messages from the same topic distributed across different partitions. LinkedIn, for instance, has deployed a Kafka library cluster within a data center to facilitate offline analysis, leveraging HDFS for delivering analytical insights [16]. Although this setup caters to applications relying on message sequencing, it primarily operates offline. In certain scenarios, data undergoes preprocessing before reaching the producer application, introducing an additional layer of complexity in the development process. Our mission is clear: harness Kafka's distributed parallel data processing and high throughput, in tandem with its streaming queue capabilities, to bridge the gap and ensure coherent message sequencing across partitions, unlocking the true potential of real-time data utilization. ## III Proposed Architecture and Design Implementations To address the mentioned use cases, the creation of partitions may necessitate the use of non-intuitive keys. Furthermore, partitions limit consumption to a single consumer node. Our objective is to maintain a universal order, irrespective of the number of consumers involved. Numerous messaging technologies, including AMQP (Advanced Message Queuing Protocol) [17] and JMS [18], offer support for Message Prioritization. These technologies enable messages to be consumed or processed in varying orders, depending on their significance. For instance, consider a scenario in applications where customer queries need attention, and a business may need to handle the most critical cases first. Kafka, originally designed as an event streaming platform, lacks essential features such as message prioritization. To bridge this gap, we intend to introduce an intermediary layer between consumers and brokers that can facilitate message prioritization. ### **Architecture and Design** A concise overview of Kafka's architecture includes brokers with topics and partitions, engaging with producers and consumers. In this section, we will delve into crucial aspects of these components, laying the foundation for our upcoming architectural design. _Producer:_ In Kafka, the producer's role involves disseminating messages among various partitions. The quantity of partitions within a topic is established during its creation. By default, the **Partitioner** utilizes a hash function of the message key to determine the appropriate partition for the message. Fig. 1: A Standard file system for Zookeeper Namespace _Consumer_: A consumer group subscribes to the topics it intends to receive messages from. Within each consumer group, partitions are allocated to different consumers to ensure that each partition is processed by a single consumer. The logic responsible for assigning partitions to consumers is implemented by the **Assignors**. _Broker_: In Kafka, each broker is referred to as a **bootstrap server**, and a Kafka cluster comprises multiple such brokers (servers). 
Every broker is uniquely identified by an integer ID and houses specific topic partitions. What makes Kafka intriguing is that, at any moment, a client needs to establish a connection with just one broker, and that connection provides access to the entire cluster. Each broker possesses knowledge of all other brokers, topics, and partitions through the maintenance of metadata across the server ensemble. The orchestration of all brokers is a key function carried out by Zookeepers. They maintain a comprehensive list of all brokers and are also responsible for orchestrating leader elections for partitions. We are presenting three distinct designs to attain message ordering and subsequently assessing their impact on performance in comparison to the existing Kafka implementation. ### _Aggregator and Sorter Algorithm_ In this approach, we propose a method to ensure message ordering by buffering and sorting the messages. We can use the message key field found in the ProducerHeader.java file, which assigns a unique identifier to each message, for this purpose. When multiple partitions are in use, consumers must maintain a buffer that contains messages from all partitions. If only one consumer were used, a local cache could suffice. However, in high-load scenarios where multiple consumers are needed, each reading from a single partition, this buffer must be positioned outside the consumer layer. The middleware layer will then sequentially poll messages from the consumers and arrange them in the correct order. For instance, if messages arrive out of order, the middleware will only deliver those that are in sequence, while retaining out-of-order messages in the buffer until the missing sequences arrive. Another approach to maintaining sequential delivery is to poll messages in sequence. Although this eliminates the need for buffering messages, it can be extremely slow. While the Aggregator and Sorter approach effectively addresses the global message ordering issue, it compromises parallelism in a distributed system. Additionally, there is no predefined limit on buffer size. If a consumer processes messages slowly, it must either continue buffering messages or wait until the missing messages arrive. Figure 3 explains the Proposed Design using the Aggregator and Sorter Mechanism. The Aggregator and Sorter Algorithm III-B plays a crucial role in managing the flow of messages, optimizing the order of processing, and improving the overall system's performance. ### _Single Consumer within a Consumer Group Algorithm_ One straightforward approach to preserve message order is to streamline message delivery. This can be accomplished either by creating a single partition for each topic or by assigning a single consumer within a consumer group to all partitions of a topic. However, opting for a single partition per topic lacks scalability because the broker handling leader partitions can become easily overwhelmed with increased network traffic. Consequently, we propose the adoption of a single consumer as a more viable alternative. In the case of a single consumer, we employ a round-robin polling strategy across partitions, ensuring the delivery of messages in the order they arrive. The message key field plays a crucial role in determining message sequence. Out-of-order messages are temporarily stored within the consumer and subsequently delivered in the correct order upon receiving the missing sequentially numbered messages. 
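The hold-and-release step that both the Aggregator-and-Sorter and the single-consumer designs rely on can be sketched compactly. The snippet below is a minimal illustration (ours), assuming every message carries a consecutive integer sequence key assigned at the producer side; it delivers messages strictly in key order and retains out-of-order messages until the missing sequence numbers arrive.

```python
class ReorderBuffer:
    """Hold-and-release buffer: deliver messages strictly by consecutive
    integer sequence key, buffering out-of-order arrivals until the
    missing keys show up (no bound on buffer size, as in the design above)."""

    def __init__(self, first_seq=0):
        self.next_seq = first_seq
        self.pending = {}  # seq -> message

    def offer(self, seq, message):
        """Accept one (possibly out-of-order) message and return the list
        of messages that can now be delivered in global order."""
        self.pending[seq] = message
        deliverable = []
        while self.next_seq in self.pending:
            deliverable.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return deliverable

# Messages from several partitions arriving interleaved and out of order:
buf = ReorderBuffer()
for seq, msg in [(1, "b"), (0, "a"), (3, "d"), (2, "c")]:
    print(seq, "->", buf.offer(seq, msg))
# 1 -> []          (waiting for 0)
# 0 -> ['a', 'b']
# 3 -> []          (waiting for 2)
# 2 -> ['c', 'd']
```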
Single Consumer within a Consumer Group Algorithm III-C addresses the challenges associated with message handling in distributed systems. ### _Batch Commit and Broadcast Protocol Algorithm_ This approach suggests preserving order by employing a consensus algorithm independently among producers and consumer groups. To accomplish this, we introduce a global batch size at the producer level for ordered messages. During a single poll operation, the consumer receives messages in multiples of this batch size. Batch Commit and Broadcast Protocol Algorithm III-D gives highly efficient Kafka streams that can provide global Fig. 2: Proposed Design using Aggregator and Sorter Mechanism ``` Initialize message order preservation strategy ifUsing a single partition per topicthen Create a single partition for each topic elseifAssigning a single consumer within a consumer group to all partitionsthen Assign a single consumer to handle all partitions of the topic else Choose an alternative approach endifif Offing for a single consumer then Initialize a round-robin polling strategy forEach message in partitionsdo Poll messages in a round-robin sequence Use the message key field to determine the message sequence ifMessage is out-of-order then Buffer the out-of-order message endif if Received missing sequentially numbered messages then Deliver out-of-order messages in the correct order endif endfor else Choose an alternative approach endif ``` **Algorithm 2** Single Consumer within a consumer group ``` Producers' Role: Producers employ Raft consensus algorithms to assign a batch number to a group of messages and then write them into the broker sequentially, following a round-robin approach. This batch number ensures a uniform sequence identifier across all system components. Partitioning Strategy: Instead of employing key-based partition allocation, we will utilize the Round Robin Partitioner (available in Kafka) to distribute messages across partitions. Consumers' Role: Consumers will adopt the atomic broadcast protocol to ensure the sequential delivery of messages, guided by batch numbers. While consumers can continue to poll multiple batches of messages, during delivery to the application, they will prioritize delivering them in accordance with the batch number sequence. Subsequently, they will broadcast the information about the next batch to be delivered or the last batch number that was dispatched. When a consumer possesses messages from the next batch, it will deliver them to the client and inform all other consumers accordingly. ``` **Algorithm 3** Batch Commit and Broadcast Protocol Algorithm ## IV Experiments And Results In this research, we undertook a systematic experimental comparison of the framework, which was constructed following the designs delineated in earlier sections. We aimed to contrast the latency and throughput of our developed system with the inherent attributes of the native Kafka framework. Given that our approach functions as an overlay atop Kafka, it is anticipated that message delivery might exhibit augmented latency. Efforts were made to maintain consistent parameters across different design configurations wherever feasible. We utilized a Macintosh system equipped with an 8-core CPU, segmented into 4 performance cores and 4 efficiency cores, complemented by an 8-core GPU and a 16-core Neural Engine for our experiments. To simulate a multi-producer and multi-consumer setup, we initiated several threads sharing a common group ID. 
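The multi-producer, multi-consumer setup described above can be approximated with a few threads and the kafka-python client. The sketch below is only illustrative and is not the code from our repository: the broker address, topic name, and the in-process counter standing in for the distributed lock service are all placeholders.

```python
import threading
from kafka import KafkaConsumer, KafkaProducer  # kafka-python client (assumed)

BOOTSTRAP = "localhost:9092"  # placeholder broker address
TOPIC = "ordered-logs"        # placeholder topic name

counter_lock, counter = threading.Lock(), [0]
def next_seq():
    """Thread-safe counter standing in for the distributed lock service
    that issues sequential tokens in the Aggregator and Sorter design."""
    with counter_lock:
        counter[0] += 1
        return counter[0]

def producer_worker(worker_id, n_messages):
    producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
    for _ in range(n_messages):
        seq = next_seq()  # sequence token used as the message key
        producer.send(TOPIC, key=str(seq).encode(), value=f"p{worker_id}-{seq}".encode())
    producer.flush()

def consumer_worker(worker_id):
    # All consumers share one group id, so each partition is read by exactly
    # one consumer within the group.
    consumer = KafkaConsumer(TOPIC, bootstrap_servers=BOOTSTRAP,
                             group_id="ordering-experiment",
                             auto_offset_reset="earliest",
                             consumer_timeout_ms=10000)
    for record in consumer:
        print(worker_id, record.partition, record.key, record.value)

threads = [threading.Thread(target=producer_worker, args=(i, 100)) for i in range(3)]
threads += [threading.Thread(target=consumer_worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```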
Within the context of the Aggregator and Sorter design paradigm, multi-threading was employed to simulate the simultaneous operations of multiple producers and consumers. When a client request is received, the producer engages a lock via a distributed lock service, subsequently generating a sequential token. This locking mechanism is critical, guaranteeing the uniqueness and orderly sequence of tokens, thereby preventing any duplication or misordering. Following this, the producers relay their respective messages to the broker, where these messages are stored with their keys designated by the sequence token ID. Upon retrieval from the broker, the consumer places the message in a distributed queue structured to uphold the message sequence and facilitate the delivery of organized messages. It is imperative to note that message delivery is initiated only after reaching a predefined buffer size, ensuring a globally sequenced batch dispatch. In order to assess the efficacy of the Multi Consumer Aggregator and Sorter Design implementation, we conducted an experimental study involving the transmission of a burst of 700 messages. Our analysis revealed that the average latency per request when employing the native Kafka system was Fig. 3: Architecture Design using Batch Commit and Broadcast Protocol approximately 6.9 milliseconds, whereas the utilization of the modified Kafka wrapper resulted in an average latency of 60 milliseconds. Fig. 4 displays a graphical representation of the relationship between Request ID and their corresponding latency, measured in hundredths of a second \(\frac{1}{100}\)\({}^{th}\) of a second). Notably, as depicted in Fig. 4, an observation can be made regarding the latency disparity between the native Kafka system and the modified Kafka implementation, amounting to approximately 20 milliseconds. For a **single consumer design paradigm**, a single thread was used to read from all the partitions. A local buffer is maintained that is responsible for sorting the messages received based on the message key and delivering them to the downstream process in a globally sorted order. The average latency per request for a **single consumer design** with 3 partitions was observed around 16ms. The difference in the latency between native Kafka and modified Kafka is around 9 ms. The average request latency for a **single consumer design** utilizing three partitions measured approximately 16ms. Notably, there is a discernible latency disparity of roughly 9ms between the unaltered Kafka setup and the customized Kafka setup. For the implementation of the **batch commit and broadcast protocol**, we've incorporated the Raft algorithm on the producer side to generate sequential token IDs. Instead of assigning a token ID to each individual message, we allocate it to a batch, which can be configured to a specific size in the native Kafka environment. Correspondingly, the consumer reads from a broker with an identical batch size to that of the producer. To facilitate message delivery to downstream applications, we've employed an atomic broadcast protocol with a built-in timeout mechanism. Upon reading messages from the broker, the consumer patiently awaits a broadcast message that conveys the sequence ID of the next batch to be committed. In the event of a timer expiration, the consumer broadcasts the lowest batch sequence number available in its buffer. Upon receiving this broadcast message, other consumers also respond with their lowest batch sequence numbers. 
Each consumer independently computes the lowest batch sequence number, which determines the order of commitment. The consumer bearing the lowest batch sequence number proceeds to deliver the message and initiates the broadcast of the subsequent batch sequence number scheduled for commitment. We conducted a thorough examination of the average latency per request within the **batch commit and broadcast protocol** design. Our observations revealed that the average latency per request consistently registered at approximately 9.0 ms. In the context of performance comparison between native Kafka and the modified Kafka version, a discernible difference of approximately 2 ms became apparent. While the **batch commit and broadcast protocol** design within the modified Kafka version offers certain advantages, it does introduce an incremental latency of 2 ms when contrasted with native Kafka. This information holds significant relevance for system architects and developers who prioritize real-time data processing and are actively assessing the trade-offs between system performance and necessary modifications. Refer to Table I for Performance Analysis of all three designs. ## V Conclusion And Future Work From our tests, it became evident that a single consumer outperforms the Aggregator and Sorter for the given message burst size. Hence, for applications with a lower frequency of messages and partitions, the single consumer emerges as the superior choice. Fig. 4: Multi Consumer Aggregator and Sorter Design Performance Fig. 5: Single Consumer Design Performance Fig. 6: Batch commit and Broadcast Protocol Performance However, as the message frequency and number of partitions rise, the performance of the Aggregator and Sorter improves. Notably, the batch commit and broadcast protocol demonstrated reduced latency in generating sequence IDs compared to distributed locks. The inclusion of a buffer in the initial two designs creates a consistent latency, as we have to wait for the sorter and aggregator layers to attain a specific buffer size. Interestingly, we noted enhanced performance using the atomic broadcast protocol with re-transmission, possibly due to real-time message delivery as opposed to buffering with a batch size greater than one. While the atomic broadcast protocol offers speed, it introduces the challenge of overseeing group membership, a task managed by Kafka's zookeeper. We evaluated our current models within a multi-threaded environment. To conduct a more comprehensive performance analysis, we are contemplating replicating these designs within a distributed framework spanning diverse geographical locations. While our present testing centers around the latency per request for a single batch size, our future efforts are geared towards exploring additional metrics, including the throughput measured in requests processed per second, and investigating the impact of varying batch sizes. Our designs function as wrappers around Kafka, but our aspiration is to transform them into libraries suitable for applications that require global message ordering. Furthermore, our designs possess the capability to prioritize messages, contingent on the availability of a token-generating algorithm that adheres to specific priority guidelines. ## Acknowledgment The authors would like to express their gratitude to those who have provided feedback and support throughout the research process. 
Furthermore, we acknowledge that a significant portion of the content presented in this paper has been derived from our previously published arXiv preprint titled "Distributed Kafka Clusters: A Novel Approach to Global Message Ordering" [19].
2309.08850
Vertex Operators of the KP hierarchy and Singular Algebraic Curves
Quasi-periodic solutions of the KP hierarchy acted on by vertex operators are studied. We show, with the aid of the Sato Grassmannian, that solutions thus constructed correspond to torsion free rank one sheaves on some singular algebraic curves whose normalizations are the non-singular curves corresponding to the seed quasi-periodic solutions. It means that the action of the vertex operator has an effect of creating singular points on an algebraic curve. We further check, by examples, that solutions obtained here can be considered as solitons on quasi-periodic backgrounds, where the soliton matrices are determined by parameters in the vertex operators.
Atsushi Nakayashiki
2023-09-16T03:06:15Z
http://arxiv.org/abs/2309.08850v1
# Vertex Operators of the KP hierarchy and Singular Algebraic Curves ###### Abstract Quasi-periodic solutions of the KP hierarchy acted by vertex operators are studied. We show, with the aid of the Sato Grassmannian, that solutions thus constructed correspond to torsion free rank one sheaves on some singular algebraic curves whose normalizations are the non-singular curves corresponding to the seed quasi-periodic solutions. It means that the action of the vertex operator has an effect of creating singular points on an algebraic curve. We further check, by examples, that solutions obtained here can be considered as solitons on quasi-periodic backgrounds, where the soliton matrices are determined by parameters in the vertex operators. ## 1 Introduction We revisit the vertex operators of the KP-hierarchy [5]. We apply them to quasi-periodic solutions, that is, solutions which are expressed by Riemann's theta functions of non-singular algebraic curves [22]. The problem we consider here is what kind of solutions we get in this way. To study this problem we use the Sato Grassmannian [36]. We show that solutions obtained here correspond to certain singular algebraic curves whose normalizations are the non-singular curves of the seed quasi-periodic solutions. It implies two things. First is that the action of the vertex operator of the KP-hierarchy has the effect of creating certain singularities on a curve. Second is that the solutions created by vertex operators describe certain limits of quasi-periodic solutions, since singular curves may be considered as limits of non-singular curves. We check, by computer simulations, that the solutions here represent solitons on the quasi-periodic backgrounds, where the soliton matrices can be extracted from the parameters of the vertex operators. It implies that wave patterns of quasi-periodic solutions of the KP equation contain various shapes of soliton solutions [16, 17] as a part. Recently interactions of solitons and quasi-periodic solutions attracted much attention in relation with soliton gases [10, 18, 7]. It is interesting to study whether the results in this paper can have some application to this subject. Now let us explain the results in more detail along the history of researches. During the last two decades it is revealed that the shapes of soliton solutions of the KP equation form various wave patterns like web diagrams [4, 16, 17]. Those wave patterns are related with combinatorics of non-negative Grassmannians and cluster algebras [20, 21]. Mathematically soliton solutions, in terms of tau function, are those described by linear combinations of exponential functions and are known to be constructed from singular algebraic curves of genus 0 [23, 38]. Quasi-periodic solutions of the KP equation are those written by Riemann's theta function of non-singular algebraic curves of positive genus [22, 37]. It is expected that quasi-periodic solutions tend to soliton solutions in certain genus zero limits [23, 38, 1]. Therefore it is quite interesting to study the wave patterns of quasi-periodic solutions incorporating the recent development on soliton solutions. One strategy to study this problem is to take limits of quasi-periodic solutions and make correspondence between quasi-periodic solutions and soliton solutions. However, to carry out this program was not very easy because it is difficult to compute limits of period matrices. 
We have avoided this difficulty by using the Sato Grassmannian and have calculated limits of quasi-periodic solutions for several examples [31, 32, 33, 34]. Recently there are important progress in computing the limits of quasi-periodic solutions [2, 3, 12, 11]. In [12], in particular, some kind of limits have been computed for any Riemann surface. However it seems difficult to understand how solitonic structure is incorporated in the wave patterns of quasi-periodic solutions from those results. In this paper we change the direction of study. Instead of studying the limits of quasi-periodic solutions we construct solutions corresponding to degenerate algebraic curves which are more covenient to see the relation with soliton solutions. In course of studying the degeneration of quasi-periodic solutions we found the following formula (Theorem 4.5 of [33] ): \[\lim\tau_{g,0}(t)=C\mathrm{e}^{-2\sum_{i=1}^{\infty}\alpha^{l}t_{ 2l}}\] \[\times\Big{(}\mathrm{e}^{\eta(t,\alpha^{1/2})}\tau_{g-1,0}(t-[ \alpha^{-1/2}])+(-1)^{n}\mathrm{e}^{\eta(t,-\alpha^{1/2})}\tau_{g-1,0}(t-[- \alpha^{-1/2}])\Big{)},\] where \(C\) is a certain constant, \(t=(t_{1},t_{2},t_{3},...)\), \([\kappa]=(\kappa,\kappa^{2}/2,\kappa^{3}/3,...)\), \(\eta(t,p)=\sum_{j=1}^{\infty}t_{j}p^{j}\), \(\tau_{n,0}\) is a quasi-periodic solution of the KdV hierarchy, expressed in some standard form, corresponding to a hyperelliptic curve of genus \(n\), lim means pinching a pair of branching points and \(\alpha\) is the parameter correponding to the pinched point. The term inside the bracket of the right hand side has a very special form. It is rewritten using the vertex operator introduced in [5]: \[X(p,q)=e^{\eta(t,p)-\eta(t,q)}e^{-\eta(\tilde{\partial},p^{-1}) +\eta(\tilde{\partial},q^{-1})},\] \[\tilde{\partial}=(\partial_{1},\partial_{2}/2,\partial_{3}/3,...).\] In fact, for any function \(\tau(t)\), we have \[e^{\eta(t,q)}e^{aX(p,q)}\tau(t-[q^{-1}])=e^{\eta(t,q)}\tau(t-[q^{-1}])+ae^{\eta(t,p)}\tau(t-[p^{-1}]).\] If we take \((p,q)=(-\alpha^{1/2},\alpha^{1/2})\), \(a=(-1)^{n}\) and \(\tau(t)=\tau_{g-1,0}\), we recover the above formula. The vertex operator transforms a solution of the KP-hierarchy to another one, that is, if \(\tau(t)\) is a solution then \(e^{aX(p,q)}\tau(t)\) is again a solution for any constant \(a\)[5]. The above fact suggests that the limits of quasi-periodic solutions may be described by the action of vertex operators on quasi-periodic solutions. The aim of this paper is, in a sense, to show that this is actually the case. Let \(\tau_{0}(t)\) be the quasi-periodic solution of the KP-hierarchy constructed from the data \((C,L_{\Delta,e},p_{\infty},z)\), where \(C\) is a compact Riemann surface of genus \(g>0\), \(L_{\Delta,e}\) is a certain holomorphic line bundle on \(C\) of degree \(g-1\), \(p_{\infty}\) a point of \(C\) and \(z\) is a local coordinate around \(p_{\infty}\)[22, 37, 15]. We apply vertex operators with various parameters successively to \(\tau_{0}(t)\). More precisely, let \(M,N\geq 1\), \(p_{i},q_{j}\), \(1\leq i\leq N\), \(1\leq j\leq M\) distinct complex parameters, \(A=(a_{i,j})\) an \(M\times N\) matrix. We consider the vertex operator of the form \[G=e^{\sum_{i=1}^{M}\sum_{j=1}^{N}a_{i,j}X(q_{i}^{-1},p_{j}^{-1})}.\] We make a certain shift on \(\tau_{0}(t)\), apply \(G\) and multiply it by a constant times exponential function and a get new solution \(\tau(t)\) (cf. (3.3)). 
We show that \(\tau(t)\) is a solution constructed from the data \((C^{\prime},{\cal W}_{e},p^{\prime}_{\infty},z)\) where \(C^{\prime}\) is a certain singular algebraic curve whose normalization is \(C\), \({\cal W}_{e}\) is a certain torsion free sheaf of rank one on \(C^{\prime}\), \(p^{\prime}_{\infty}\) is a point of \(C^{\prime}\) and \(z\) a local coordinate around \(p^{\prime}_{\infty}\). To prove these properties we use the Sato Grassmannian, which we denote by UGM. It is the set of certain subspaces of the vector space \(V={\mathbb{C}}((z))\) and parametrizes all formal power solutions of the KP-hierarchy. We recall here that if \(\tau(t)\) is a solution of the KP-hierarchy so is \(\tau(-t)\). W remark that the descriptions of the points of UGM corresponding to \(\tau(t)\) and \(\tau(-t)\) are not very symmetric. The point corresponding to \(\tau(-t)\) is more suitable to describe the geometry. By this reason we determine the point \(W_{e}\) of UGM corresponding to \(\tau(-t)\). This is done by examining the properties of the wave function associated with \(\tau(t)\) (cf. (2.4)). The subspace of \(V\) corresponding to \(\tau_{0}(-t)\) is a module over the affine coordinate ring \(R\) of \(C\backslash\{p_{\infty}\}\). Since \(W_{e}\) is a subspace of this space, we consider the stabilizer \(R_{e}\) of \(W_{e}\) in \(R\). Following Mumford [27] and Mulase[25, 26] we define \(C^{\prime}\)and \({\cal W}_{e}\) as a scheme and a sheaf on it respectively using \(R_{e}\) and \(W_{e}\). Finally it should be mentioned that the relation of the vertex operator and the degeneration of Riemann's theta function has also been observed by Yuji Kodama from different point of view [18, 19]. It is interesting to investigate the relation of the results of this paper and those of Kodama. The paper is organized as follows. In section 2 the relation of the KP-hierarchy and the Sato Grassmannian is reviewed. The main results are formulated and stated in section 3. The formula for a quasi-periodic solution, the definition and the properties of the vertex operator and the result of the action of the vertex operator on a quasi-periodic solution are given here. In section 4 the singular algebraic curve and the sheaf on it associated with the solution considered in the main theorem are defined and their properties are studied. The example of genus one is studied in detail in section 5. Figures of computer simulation by Mathematica are presented here. The proof of the main theorem is given in section 4 and 5. Regarding the readability of the paper proofs of assertions in section 4 are given in appendix A to E. ## 2 KP-hierarchy and Sato Grassmannian The KP-hierarchy is the equation for a function \(\tau(t)\), \(t=(t_{1},t_{2},t_{3},...)\) of the form \[\oint\tau(t-s-[\lambda^{-1}])(t+s+[\lambda^{-1}])e^{-2\eta(s,\lambda)}\frac{d \lambda}{2\pi i}=0, \tag{2.1}\] where \(s=(s_{1},s_{2},s_{3},...)\), \[[\mu]=(\mu,\frac{\mu^{2}}{2},\frac{\mu^{3}}{3},...),\ \ \ \ \eta(t,\lambda)=\sum_{n=1}^{\infty}t_{n} \lambda^{n},\] and \(\oint\cdot\frac{d\lambda}{2\pi i}\) means taking the residue at \(\lambda=\infty\). 
By expanding in \(\{s_{j}\}\) the KP-hierarchy is equivalent to the infinite set of differential equations which include the KP equation in the bilinear form:

\[(D_{1}^{4}+3D_{2}^{2}-4D_{1}D_{3})\tau\cdot\tau=0, \tag{2.2}\]

where the Hirota derivatives \(D_{i}\) are defined, in general, for a function \(f(t)\), as the Taylor coefficients of the expansion of \(f(t+s)f(t-s)\) in \(s\):

\[f(t+s)f(t-s)=\sum\frac{D^{\alpha}f\cdot f}{\alpha!}s^{\alpha},\]

where \(\alpha=(\alpha_{1},\alpha_{2},...)\), \(D^{\alpha}=D_{1}^{\alpha_{1}}D_{2}^{\alpha_{2}}\cdots\), \(\alpha!=\alpha_{1}!\alpha_{2}!\cdots\), \(s^{\alpha}=s_{1}^{\alpha_{1}}s_{2}^{\alpha_{2}}\cdots\). If \(x=t_{1}\), \(y=t_{2}\), \(t=t_{3}\) and \(u=2\partial_{x}^{2}\log\tau(t)\), then (2.2) implies the KP equation

\[3u_{yy}+(-4u_{t}+6uu_{x}+u_{xxx})_{x}=0. \tag{2.3}\]

The Sato Grassmannian, which we denote by UGM after Sato, parametrizes all formal power series solutions of the KP-hierarchy. It is defined as follows. Let \(V=\mathbb{C}((z))\) be the space of formal Laurent series in one variable \(z\), \(V_{\phi}=\mathbb{C}[z^{-1}]\) the subspace of polynomials in \(z^{-1}\) and \(V_{0}=z\mathbb{C}[[z]]\) the subspace of formal power series vanishing at \(z=0\). Then \(V=V_{\phi}\oplus V_{0}\). The Sato Grassmannian is the set of subspaces of \(V\) of the same size as \(V_{\phi}\). More precisely, let \(\pi:V\to V_{\phi}\) be the projection. Then UGM is the set of subspaces \(U\) of \(V\) such that \(\mathrm{Ker}\,\pi|_{U}\) and \(\mathrm{Coker}\,\pi|_{U}\) are both finite dimensional and their dimensions are the same.

Here we shall give a criterion for a subspace of \(V\) to be a point of UGM. To this end, for \(f(z)\in V\), define the order of \(f\) by

\[\operatorname{ord}f=-N\text{ if }f(z)=cz^{N}+O(z^{N+1})\text{ with }c\neq 0.\]

It describes the order of a pole at \(z=0\) if \(N\) is negative. We set

\[V(n)=\{f\in V\,|\,\operatorname{ord}f\leq n\},\]

and, for a subspace \(U\) of \(V\), set \(U(n)=U\cap V(n)\). Then

**Proposition 2.1**.: A subspace \(U\) of \(V\) belongs to UGM if and only if \(\dim U(n)=n+1\) for all sufficiently large \(n\).

Proof.: Suppose that \(\dim U(n)=n+1\) for \(n\geq n_{0}\) with \(n_{0}\geq 0\). Then \(\dim U(n+1)/U(n)=1\) for \(n\geq n_{0}\). It means that there exist \(f_{n}\in U\), \(n\geq n_{0}+1\), such that

\[f_{n}=z^{-n}+O(z^{-n+1}),\quad n\geq n_{0}+1.\]

We can take \(f_{n}\), \(n\geq n_{0}+1\), as a part of a basis of \(U\). For the remaining part, since \(\dim U(n_{0})=n_{0}+1\), there exist \(f_{m_{i}}\in U\), \(0\leq i\leq n_{0}\), such that \(\operatorname{ord}f_{m_{i}}=m_{i}\) with \(m_{0}<\cdots<m_{n_{0}}\). By the definition

\[r:=\dim\operatorname{Ker}\pi|_{U}=\sharp\{j\,|\,m_{j}<0\}.\]

Then

\[\dim\operatorname{Coker}\pi|_{U}=n_{0}+1-(n_{0}+1-r)=r.\]

Thus \(U\in\) UGM.

**Corollary 2.2**.: Let \(W\) be a subspace of \(V\). If there exists an integer \(N\) such that \(\dim W(n)=n-N\) for all sufficiently large \(n\), then \(U=z^{N+1}W\) is a point of UGM.

Proof.: Since \(U(n)=z^{N+1}W(n+N+1)\), we have \(\dim U(n)=\dim W(n+N+1)=n+1\) for sufficiently large \(n\). Thus the assertion follows from Proposition 2.1.

**Example 2.3**.: \(W=\mathbb{C}z^{3}+\mathbb{C}z^{2}+\sum_{i=4}^{\infty}\mathbb{C}z^{-i}\). For \(n\geq 4\), \(\dim W(n)=n-1\).
Then \[U=z^{2}W =\mathbb{C}z^{5}+\mathbb{C}z^{4}+\sum_{i=2}^{\infty}\mathbb{C}z^ {-i}\text{ is a point of UGM since }\operatorname{Ker}\pi|_{U}=\mathbb{C}z^{5}+\mathbb{C}z^{4}\text{ and }\operatorname{Coker}\pi|_{U} =\mathbb{C}1+\mathbb{C}z^{-1}.\] Next we explain the correspondence between solutions of the KP-hierarchy and points of UGM. For a point of UGM the corresponding solution of the KP-hierarchy is constructed as a series using Schur functions and Plucker coordinates. For this see [32]. We need, in this paper, the converse construction of the point of UGM from a solution of the KP-hierarchy. Let \(\tau(t)\) be a solution of the KP-hierarchy. Define the wave function \(\Psi(t;z)\) and the adjoint wave function \(\Psi^{*}(t;z)\) by \[\Psi(t;z)=\frac{\tau(t-[z])}{\tau(t)}e^{\eta(t,z^{-1})},\hskip 28.452756pt\Psi^{ *}(t;z)=\frac{\tau(t+[z])}{\tau(t)}e^{-\eta(t,z^{-1})}. \tag{2.4}\] **Theorem 2.4**.: [36, 15] Let \(U\) be the vector space spanned by the expansion coefficients of \(\tau(t)\Psi^{*}(t;z)\) in \(t\). Then \(U\) is the point of UGM corresponding to \(\tau(t)\). It is easy to verify that if \(\tau(t)\) is a solution of the KP-hierarchy so is \(\tau(-t)\). **Corollary 2.5**.: Let \(U^{\prime}\) be the vector space spanned by the expansion coefficients of \(\tau(t)\Psi(t;z)\) in \(t\) Then \(U^{\prime}\) is the point of UGM corresponding to \(\tau(-t)\). Proof.: Let \(\Psi^{*}_{-}(t;z)\) be the adjoint wave function of \(\tau(-t)\). Then \[\tau(-t)\Psi^{*}_{-}(t;z)=\tau(-t-[z])e^{-\eta(t,z^{-1})}. \tag{2.5}\] If we set \(s=-t\), it is equal to \[\tau(s-[z])e^{\eta(s,z^{-1})}=\tau(s)\Psi(s;z). \tag{2.6}\] Since the vector space generated by the expansion coefficients of (2.6) in \(s\) is the same as that generated by expansion coefficients of (2.5) in \(t\), the assertion of the lemma follows from Theorem 2.4. ## 3 Main results Let \(C\) be a compact Riemann surface of genus \(g>0\), \(\{\alpha_{i},\beta_{i}\}_{i=1}^{g}\) a canonical basis of \(H^{1}(C,\mathbb{Z})\), \(\{dv_{i}\}_{i=1}^{g}\) the normalized basis of holomorphic one forms, \(\Omega=(\Omega_{i,j})_{1\leq i,j\leq g}\) with \(\Omega_{i,j}=\int_{\beta_{j}}dv_{i}\) the period matrix, \(J(C)=\mathbb{C}^{g}/L_{\Omega}\) with \(L_{\Omega}=\mathbb{Z}^{g}+\Omega\mathbb{Z}^{g}\) the Jacobian variety of \(C\), \(p_{\infty}\) a point of \(C\), \(I(p)=\int_{p_{\infty}}^{p}dv\) with \(dv={}^{t}(dv_{1},...,dv_{g})\) the Abel map, \(K\) Riemann's constant, \(\theta(z|\Omega)\) Riemann's theta function \[\theta(z|\Omega)=\sum_{n\in\mathbb{Z}^{g}}\exp(\pi i^{t}n\Omega n+2\pi i^{t}nz),\quad z={}^{t}(z_{1},...,z_{g}),\] where \(n\in\mathbb{Z}^{g}\) is considered as a column vector. We extend the the definition of the Abel map to divisors of any degree by, for \(D=\sum_{j=1}^{m}p_{j}-\sum_{j=1}^{n}q_{j}\), \[I(D)=\sum_{j=1}^{m}I(p_{j})-\sum_{j=1}^{n}I(q_{j}).\] We denote by \(\Delta\) the Riemann divisor. It is the dvisor of degree \(g-1\) which satisfies \(2\Delta\equiv\Omega_{C}^{1}\) and \(I(\Delta)=K\), where \(\equiv\) signifies the linear equivalence of divisors and \(\Omega_{C}^{1}\) is the linear equivalence class of divisors of holomorphic one forms. The Riemann divisor is uniquely determined from the canonical homology basis by the condition [28] \[\{I(p_{1}+\cdots+p_{g-1}-\Delta)\,|\,p_{1},...,p_{g-1}\in C\}=\{z\in J(C)\,|\, \theta(z|\Omega)=0\}.\] Notice that the left hand side does not depend on the choice of the base point \(p_{\infty}\) of the Abel map. 
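Before introducing the prime form, here is a small numerical sanity check, not part of the original text, of the theta series just defined, specialized to \(g=1\) and implemented with the Python library numpy. It verifies the standard quasi-periodicity \(\theta(z+1|\Omega)=\theta(z|\Omega)\) and \(\theta(z+\Omega|\Omega)=e^{-\pi i\Omega-2\pi iz}\theta(z|\Omega)\), which is the transformation behaviour underlying the section transformation rules and the genus one computations appearing in later sections.

```python
import numpy as np

def theta(z, Omega, N=40):
    # genus-one case of the series above: sum over n in Z, truncated to |n| <= N
    n = np.arange(-N, N + 1)
    return np.sum(np.exp(1j * np.pi * n**2 * Omega + 2j * np.pi * n * z))

Omega = 1.0j                  # purely imaginary period ratio with Im(Omega) > 0
z = 0.31 + 0.17j              # an arbitrary test point

err1 = abs(theta(z + 1, Omega) - theta(z, Omega))
err2 = abs(theta(z + Omega, Omega)
           - np.exp(-1j * np.pi * Omega - 2j * np.pi * z) * theta(z, Omega))
print(err1, err2)             # both should be at round-off level (~1e-16)
```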
Let \(E(p_{1},p_{2})\) be the prime form [8, 29] (see also [30]): \[E(p_{1},p_{2})=\frac{\theta[\delta](\int_{p_{1}}^{p_{2}}dv)}{h_{\delta}(p_{1} )h_{\delta}(p_{2})},\] where \(\delta=\binom{\delta^{\prime}}{\delta^{\prime\prime}}\), \(\delta^{\prime},\delta^{\prime\prime}\in\frac{1}{2}\mathbb{Z}^{g}\) is a non-singular odd half characteristic and \(h_{\delta}(p)\) is the half differential satisfying \[h_{\delta}^{2}(p)=\sum_{j=1}^{g}\frac{\partial\theta[\delta]}{\partial z_{j}} (0)dv_{j}(p).\] Take a local coordinate \(z\) around \(p_{\infty}\) and write \[\begin{split}& E(P_{1},P_{2})=\frac{E(z_{1},z_{2})}{\sqrt{dz_{1}} \sqrt{dz_{2}}},\quad\ P_{i}\in C,\quad z_{i}=z(P_{i}),\\ & d_{z_{1}}d_{z_{2}}\log E(z_{1},z_{2})=\left(\frac{1}{(z_{1}-z_ {2})^{2}}+\sum_{i,j=1}^{\infty}q_{i,j}z_{1}^{i-1}z_{2}^{j-1}\right)dz_{1}dz_{ 2},\\ & dv_{i}=\sum_{j=1}^{\infty}v_{i,j}z^{j-1}dz.\end{split} \tag{3.1}\] We set \[\mathcal{V}=(v_{i,j})_{1\leq i\leq g,1\leq j},\hskip 28.452756ptq(t)=\sum_{i,j =1}^{\infty}q_{i,j}t_{i}t_{j}.\] Then \[\tau_{0}(t)=e^{\frac{1}{2}q(t)}\theta(\mathcal{V}t+e|\Omega)\] is a solution of the KP-hierarchy for arbitrary \(e\in\mathbb{C}^{g}\)[37](see also [35, 15, 30]). Let \[X(p,q)=e^{\eta(t,p)-\eta(t,q)}e^{-\eta(\tilde{\partial},p^{-1})+ \eta(\tilde{\partial},q^{-1})},\] \[\eta(t,p)=\sum_{j=1}^{\infty}t_{j}p^{j},\hskip 28.452756pt\tilde{ \partial}=(\partial_{1},\partial_{2}/2,\partial_{3}/3,...),\] be the vertex operator. The following theorem is known. **Theorem 3.1**.: [5] If \(\tau(t)\) is a solution of the KP-hierarchy, so is \(e^{aX(p,q)}\tau(t)\) for any \(a\in\mathbb{C}\). Vertex operators satisfy \[X(p_{1},q_{1})X(p_{2},q_{2})=\frac{(p_{1}-p_{2})(q_{1}-q_{2})}{(p_{1}-q_{2})(q _{1}-p_{2})}:X(p_{1},q_{1})X(p_{2},q_{2}): \tag{3.2}\] where \(:\)\(:\) denotes the normal ordering taking all differential operators to the right of all multiplication operators, that is, \[:X(p_{1},q_{1})X(p_{2},q_{2}):=e^{\sum_{j=1}^{2}(\eta(t,p_{j})-\eta(t,q_{j}))} e^{\sum_{j=1}^{2}(-\eta(\tilde{\partial},p_{j}^{-1})+\eta(\tilde{\partial},q_{j} ^{-1}))}.\] The following properties follow from this. \[X(p_{1},q_{1})X(p_{2},q_{2}) = X(p_{2},q_{2})X(p_{1},q_{1})\quad\text{if $p_{1}\neq q_{2}$ and $q_{1}\neq p_{2}$},\] \[X(p_{1},q_{1})X(p_{2},q_{2}) = 0\quad\text{if $p_{1}=p_{2}$ or $q_{1}=q_{2}$}.\] Let \(M,N\) be positive integers, \(q_{i}\), \(1\leq i\leq M\), \(p_{j}\), \(1\leq j\leq N\) non-zero complex numbers and \((a_{i,j})\) an \(M\times N\) complex matrix. We set \(p_{N+j}=q_{j}\) and use both notation \(p_{N+j}\) and \(q_{j}\). Set \[G=e^{\sum_{i=1}^{M}\sum_{j=1}^{N}a_{i,j}X(q_{i}^{-1},p_{j}^{-1})}.\] Define \[\tau(t)=\Delta(p_{1}^{-1},...,p_{N}^{-1})e^{\sum_{j=1}^{N}\eta(t,p_{j}^{-1})} G\,\tau_{0}(t-\sum_{j=1}^{N}[p_{j}]), \tag{3.3}\] where \(\Delta(p_{1},...,p_{n})=\prod_{1\leq i<j\leq n}(p_{j}-p_{i})\). It can be computed explicitly as follows. Set \(L=M+N\) and define the \(L\times N\) matrix \(B=(b_{i,j})\) by \[b_{i,j} = \delta_{i,j}\quad\text{for $1\leq i,j\leq N$},\] \[b_{N+i,j} = a_{i,j}\prod_{m\neq j}^{N}\frac{p_{j}^{-1}-p_{m}^{-1}}{q_{i}^{-1} -p_{m}^{-1}}\quad\text{for $1\leq i\leq M$, $1\leq j\leq N$}, \tag{3.4}\] that is, \[B=\left(\begin{array}{cccc}1&&&\\ &&\ddots&\\ &&&1\\ b_{N+1,1}&\cdots&b_{N+1,N}\\ \vdots&&\vdots\\ b_{N+M,1}&\cdots&b_{N+M,N}\end{array}\right).\] We set \([L]=\{1,...,L\}\) and denote by \({[L]\choose N}\) the set of \((i_{1},...,i_{N})\), \(1\leq i_{1}<\cdots<i_{N}\leq L\)[16, 17]. 
For \(I=(i_{1},...,i_{N})\in{[L]\choose N}\) set

\[\Delta_{I}^{-}=\Delta(p_{i_{1}}^{-1},...,p_{i_{N}}^{-1}),\quad\eta_{I}=\sum_{i\in I}\eta(t,p_{i}^{-1}),\quad[p_{I}]=\sum_{i\in I}[p_{i}],\]
\[B_{I}=\det(b_{i_{r},s})_{1\leq r,s\leq N}.\]

By a direct calculation using the commutation relation (3.2) we have

**Proposition 3.2**.: [34] The function \(\tau(t)\) of (3.3) has the following expression

\[\tau(t)=\sum_{I\in{[L]\choose N}}B_{I}\Delta_{I}^{-}e^{\eta_{I}}\tau_{0}(t-[p_{I}]). \tag{3.5}\]

**Remark 3.3**.: The matrices \((a_{i,j})\) and \(B\) correspond to each other. When we study the positivity of \(\tau(t)\), it is convenient to begin with the matrix \(B\) and define the matrix \((a_{i,j})\) from it. For example, if \(\tau_{0}(t)>0\) for any real \(t\), the \(\{p_{i}\}\) are real and \(p_{1}^{-1}<\cdots<p_{L}^{-1}\), then \(\tau(t)>0\) if \(B_{I}\geq 0\) for any \(I\) (and at least one \(B_{I}\) is positive). Later, in section 5, this viewpoint is used.

The part \(\tau_{0}(t-[p_{I}])\) can further be expressed in terms of the theta function. To write it, let \(d\tilde{r}_{k}\), \(k\geq 1\), be the normalized differential of the second kind with a pole only at \(p_{\infty}\) of order \(k+1\), that is, it satisfies

\[\int_{\alpha_{j}}d\tilde{r}_{k}=0\text{ for any }j,\quad d\tilde{r}_{k}=d\left(z^{-k}-O(z)\right)\text{ near }p_{\infty}.\]

The expansion of \(d\tilde{r}_{k}\) near \(p_{\infty}\) can be written more explicitly using \(\{q_{i,j}\}\). In integral form it is given by

\[\int^{p}d\tilde{r}_{k}=z^{-k}-\sum_{j=1}^{\infty}q_{k,j}\frac{z^{j}}{j},\quad p\in C,\ z=z(p). \tag{3.6}\]

Then

**Proposition 3.4**.: In terms of the theta function, \(\tau(t)\) defined by (3.3) is written as

\[\tau(t)=e^{\frac{1}{2}q(t)}\sum_{J\in{[L]\choose N}}B_{J}C_{J}e^{\sum_{j\in J}\sum_{k=1}^{\infty}t_{k}\int^{P_{j}}d\tilde{r}_{k}}\ \theta\left(\mathcal{V}t-\sum_{j\in J}I(P_{j})+e\right), \tag{3.7}\]

where

\[C_{J}=\prod_{i<j,i,j\in J}E(p_{j},p_{i})\prod_{j\in J}\frac{p_{j}}{E(0,p_{j})^{N}},\]

and \(P_{j},Q_{j}\in C\) are the points such that \(z(P_{j})=p_{j}\), \(z(Q_{j})=q_{j}\).

This proposition is proved by a direct calculation using the following lemma, which can be derived from (3.1).

**Lemma 3.5**.: Let \(Q(t|s)=\sum_{i,j=1}^{\infty}q_{i,j}t_{i}s_{j}\). Then

\[e^{Q([z]|[w])}=\frac{E(z,w)}{w-z}\frac{zw}{E(0,z)E(0,w)},\quad e^{\frac{1}{2}Q([z]|[z])}=\frac{z}{E(0,z)}.\]

For \(e={}^{t}(e_{1},...,e_{g})\in\mathbb{C}^{g}\), \(L_{e}\) denotes the holomorphic line bundle of degree \(0\) on \(C\) whose characteristic homomorphism is specified by

\[\chi(\alpha_{j})=1,\ \ \ \ \ \ \ \chi(\beta_{j})=e^{2\pi ie_{j}}.\]

If \(c\in\mathbb{C}^{g}\) is taken such that \(\theta(c)\theta(e+c)\neq 0\), then \(\theta(I(p)+e+c)/\theta(I(p)+c)\) is a meromorphic section of \(L_{-e}\). We denote by \(L_{\Delta}\) the holomorphic line bundle of degree \(g-1\) corresponding to \(\Delta\). Then we consider \(L_{\Delta,-e}:=L_{\Delta}\otimes L_{-e}\), which has \(\theta(I(p)+e)/E(p,p_{\infty})\) as a meromorphic section if \(\theta(e)\neq 0\). Let \(H^{0}(C,L_{\Delta,-e}(*p_{\infty}))\) be the vector space of meromorphic sections of \(L_{\Delta,-e}\) which are holomorphic on \(C\backslash\{p_{\infty}\}\). Using the local coordinate \(z\) we embed this space into \(V=\mathbb{C}((z))\) as follows.
We consider a section of \(L_{\Delta,-e}\) as \(E(p,p_{\infty})^{-1}\) times a multi-valued meromorphic function on \(C\) whose transformation rule is the same as that of \(\theta(I(p)+e)\), that is, \[f(p+\alpha_{j})=f(p),\ \ \ f(p+\beta_{j})=e^{-\pi i\Omega_{j,j}-2\pi i(\int_{p_{ \infty}}^{p}dv_{j}+e_{j})}f(p).\] We realize a section of \(L_{\Delta,-e}\) using a function on \(C\) in this way. Then we expand elements of \(H^{0}(C,L_{\Delta,-e}(*p_{\infty}))\) around \(p_{\infty}\) in \(z\) as \(\sum a_{n}z^{n}\sqrt{dz}\) and get elements \(\sum a_{n}z^{n}\) of \(V\). In the following we always consider \(H^{0}(C,L_{\Delta,-e}(*p_{\infty}))\) as a subspace of \(V\) in this way. **Theorem 3.6**.: [15] The point of UGM corresponding to \(\tau_{0}(t)\) is \(zH^{0}(C,L_{\Delta,-e}(*p_{\infty}))\) and that corresponding to \(\tau_{0}(-t)\) is \(zH^{0}(C,L_{\Delta,e}(*p_{\infty}))\). For the solution \(\tau(t)\) of (3.3) the descriptions of the points of UGM correponding to \(\tau(t)\) and \(\tau(-t)\) are not very symmetric as opposed to \(\tau_{0}(t)\). We consider \(\tau(-t)\) here, since it is more conveniently related with the geometry of \(C\) as in the case of soliton solutions [38, 23]. Set \[b^{\prime}_{N+j,i}=b_{N+j,i}(p_{i}^{-1}q_{j}),\] \[W_{e}=\{f\in H^{0}(C,L_{\Delta,e}(*p_{\infty}))\,|\,f(p_{i})=-\sum_{j=1}^{M}b^{ \prime}_{N+j,i}f(q_{j}),\quad 1\leq i\leq N\},\] \[U_{e}=z^{N+1}W_{e}. \tag{3.8}\] Our main theorem is **Theorem 3.7**.: (i) The subspace \(U_{e}\) is a point of UGM. (ii) The point of UGM corresponding to \(\tau(-t)\) is \(U_{e}\). ## 4 Singular curve created by vertex operators In this section we study the geometry of \(U_{e}\). By Theorem 3.6 the point of UGM corresponding to the solution \(\tau_{0}(-t)\) is associated with \((C,L_{\Delta,e},p_{\infty},z)\), where, as in the previous section, \(C\) is a non-singular algebraic curve of genus \(g>0\), \(L_{\Delta,e}\) is the holomorphic line bundle on \(C\), \(p_{\infty}\) a point of \(C\) and \(z\) a local coordinate around \(p_{\infty}\)[37, 15]. We show that the point \(U_{e}\) of UGM corresponding to the solution \(\tau(-t)\) is associated with \((C^{\prime},{\cal W}_{e},p^{\prime}_{\infty},z)\), where \(C^{\prime}\) is a singular algebraic curve whose normalization is \(C\), \({\cal W}_{e}\) is a rank one torsion free sheaf on \(C^{\prime}\), \(p^{\prime}_{\infty}\) is a point of \(C^{\prime}\) and \(z\) a local coordinate at \(p^{\prime}_{\infty}\). It suggests that, geometrically, the action of the vertex operator has an effect of creating some kind of singularities on a curve. Moreover singular curves may be considered as degenerate limits of non-singular curves. Therefore \(\tau(-t)\) and consequently \(\tau(t)\) can be considered as a certain limit of a quasi-periodic solution. In order to define the curve \(C^{\prime}\) from \(U_{e}\) the most appropriate way in the present case is the abstract algebraic method of Mumford and Mulase [27, 25, 26]. Namely we define \(C^{\prime}\) as a complete integral scheme and \({\cal W}_{e}\) as a sheaf on it. We referred to [9, 13, 14, 24] as references on algebraic geometry and commutative algebras. Let \(R:=H^{0}(C,{\cal O}(*p_{\infty}))\) be the vector space of meromorphic functions on \(C\) which have a pole only at \(p_{\infty}\). By expanding functions in the local coordinate \(z\) around \(p_{\infty}\) we consider \(R\) as a subspace of \(V={\mathbb{C}}((z))\). It is the affine coordinate ring of \(C\backslash\{p_{\infty}\}\). 
The vector space \(H^{0}(C,L_{\Delta,e}(*p_{\infty}))\) is an \(R\)-module and \(W_{e}\) is a vector subspace of it. Let \[R_{e}=\{f\in R\,|\,fW_{e}\subset W_{e}\}\] be the stabilizer of \(W_{e}\) in \(R\). Then **Proposition 4.1**.: We have \[R_{e}=\{f\in R\,|\,f(p_{i})=f(q_{j})\text{ if }b_{N+j,i}\neq 0\}. \tag{4.1}\] The proof of this proposition is given in Appendix A. To study the structure of \(R_{e}\) we introduce a directed graph \(G_{B}\) associated with the matrix \(B\). The vertices of \(G_{B}\) consists of \(\{p_{1},...,p_{L}\}\). The vertices \(p_{i}\) and \(p_{N+j}\) are connected by an edge with the weight \(b_{N+j,i}\). The direction of the edge is from \(p_{N+j}\) to \(p_{i}\). We understand the edge with the weight \(0\) is the same as that there is no edge. Other edges are not connected. Notice that one can recover the matrix \(B\) from \(G_{B}\). Let \(s\) be the number of connected components of \(G_{B}\). We divide the set of vertices \(\{p_{i}|1\leq i\leq L\}\) according as connected components and rename them as \(\{p_{i,j}|1\leq j\leq n_{i}\}\),\(1\leq i\leq s\). We denote \(P_{i,j}\) the point on \(C\) such that \(z(P_{i,j})=p_{i,j}\). **Example 4.2**.: Consider \[B=\left(\begin{array}{cc}1&0\\ 0&1\\ 0&a\\ -b&0\end{array}\right),\qquad a,b\neq 0.\] In this case \(G_{B}\) is \[\begin{array}{ccc}p_{1}&\circ\cfrac{-b}{\circ}&p_{4}\\ p_{2}&\circ\cfrac{a}{\circ}&p_{3}\end{array}\] and \(s=2\). Then \((p_{1,1},p_{1,2})=(p_{1},p_{4})\), \((p_{2,1},p_{2,2})=(p_{2},p_{3})\) for example. **Example 4.3**.: Let \[B=\left(\begin{array}{cc}1&0\\ 0&1\\ -c&a\\ -d&b\end{array}\right),\qquad a,b,c,d\neq 0.\] Then \(G_{B}\) is In this case \(s=1\) and \((p_{1,1},p_{1,2},p_{1,3},p_{1,4})=(p_{1},p_{2},p_{3},p_{4})\) for example. In the notation introduced above \(R_{e}\) is described as \[R_{e}=\{f\in R|f(p_{i,j})=f(p_{i,j^{\prime}})\mbox{ for any }j,j^{\prime},\,1\leq i \leq s\}. \tag{4.2}\] Let \(H^{0}\left(C,{\cal O}(np_{\infty})\right)\) be the space of meromorphic functions on \(C\) with a pole only at \(p_{\infty}\) of order at most \(n\) and \[R(n)=H^{0}\left(C,{\cal O}(np_{\infty})\right),\] \[R_{e}(n)=R(n)\cap R_{e}. \tag{4.3}\] Notice that \(R(n)=R_{e}(n)=\{0\}\) for \(n<0\) and \(R(0)=R_{e}(0)=\mathbb{C}\). The set of subspaces \(\{R_{e}(n)\}\) satisfies \(R_{e}(n)\subset R_{e}(n+1)\) for any \(n\) and \(R_{e}=\cup_{n=0}^{\infty}R_{e}(n)\). Set \[A^{\prime}=\oplus_{n=0}^{\infty}R_{e}(n),\] \[C^{\prime}=\mbox{Proj}\,A^{\prime},\] We call \(R_{e}(n)\) the homogeneous component of \(A^{\prime}\) with degree \(n\). There is a natural injective morphism \(\varphi:\mbox{Spec}R_{e}\to C^{\prime}\) given by \[\varphi({\cal P})=\oplus_{n=0}^{\infty}{\cal P}^{(n)},\hskip 28.452756pt{ \cal P}^{(n)}={\cal P}\cap R_{e}(n). \tag{4.4}\] Next define \[p^{\prime}_{\infty}=\oplus_{n=0}^{\infty}R_{e}(n-1),\] where \(R_{e}(n-1)\) is located at the homogeneous component of \(A^{\prime}\) with degree \(n\). It can be easily checked that \(p^{\prime}_{\infty}\in C^{\prime}\). By the Riemann-Roch theorem there exists \(N_{0}\) such that \[\dim R_{e}(n)/R_{e}(n-1)=1\quad n\geq N_{0}.\] Take an arbitrary \(m\geq N_{0}\) and \(a\in R_{e}(m)\) such that \[a=z^{-m}+O(z^{-m+1}). \tag{4.5}\] We consider \(a\) as a homogeneous element of \(A^{\prime}\) with degree \(m\). 
Set \[D^{\prime}_{+}(a) = \{{\cal P}\in\mbox{Proj}\,A^{\prime}\,|\,a\notin{\cal P}\},\] \[A^{\prime}_{(a)} = \{ua^{-n}\,|\,u\in R_{e}(mn),\ n\geq 0\}\] \[= \text{the set of elements of degree zero in }A^{\prime}[a^{-1}].\] Then \(D^{\prime}_{+}(a)\) is an affine open subscheme of \(C^{\prime}\) isomorphic to \(\operatorname{Spec}A^{\prime}_{(a)}\)(c.f. [9]). This isomorphism is given by \[\mathcal{P}\mapsto\oplus_{n=0}^{\infty}a^{-n}\mathcal{P}^{(nm)}.\] Then **Theorem 4.4**.: (i)_\(p^{\prime}_{\infty}\notin\varphi(\operatorname{Spec}R_{e})\)._ (ii)_\(C^{\prime}=\varphi(\operatorname{Spec}R_{e})\cup\{p^{\prime}_{\infty}\}\)._ (iii)_\(H^{0}(C^{\prime}\backslash\{p^{\prime}_{\infty}\},\mathcal{O}_{C^{\prime}})=R _{e}\)._ (iv)_\(p^{\prime}_{\infty}\in D^{\prime}_{+}(a)\) and it corresponds to a maximal ideal of \(A^{\prime}_{(a)}\)._ The proof of this theorem is given in Appendix B. By this theorem \[C^{\prime}=\varphi(\operatorname{Spec}R_{e})\cup D^{\prime}_{+}(a)\] is an affine open cover of \(C^{\prime}\). The rings \(R_{e}\) and \(A^{\prime}_{(a)}\) are integral domains, since they are subrings of \(\mathbb{C}((z))\). Moreover \(\dim C^{\prime}=1\) because, as we shall show in Lemma C.1, the quotient field of \(R_{e}\) is isomorphic to the quotient field of \(R\) which is the field of meromorphic functions on \(C\). Using Proposition D.3 and the Riemann-Roch theorem we can easily prove that \(A^{\prime}\) is generated over \(\mathbb{C}\) by a finite number of homogeneous elements. Then \(A^{\prime}\) can be written as a quotient of polynomial ring by a homogeneous ideal. Therefore \(C^{\prime}\) becomes a closed subscheme of a weighted projective space and consequently of a projective space. Thus \(C^{\prime}\) is a projective integral scheme of dimension one, that is, \(C^{\prime}\) is a projective integral curve. Next we define a sheaf on \(C^{\prime}\). Let \(H^{0}(C,L_{\Delta,e}(np_{\infty}))\) be the space of meromorphic sections of \(L_{\Delta,e}\) on \(C\) with a pole only at \(p_{\infty}\) of order at most \(n\) and \[W_{e}(n)=W_{e}\cap H^{0}(C,L_{\Delta,e}(np_{\infty})).\] Define \[W_{e}^{gr}=\oplus_{n=0}^{\infty}W_{e}(n).\] Using Lemma 6.2 and the Riemann-Roch theorem it can easily be proved that \(W_{e}^{gr}\) is a finitely generated \(A^{\prime}\)-module. Therefore \(W_{e}^{gr}\) defines a coherent \(\mathcal{O}_{C^{\prime}}\) module \(\mathcal{W}_{e}\) on \(C^{\prime}\) such that \[H^{0}(C^{\prime}\backslash\{p^{\prime}_{\infty}\},\mathcal{W}_{e})=W_{e}.\] Moreover we can prove **Proposition 4.5**.: The \({\cal O}_{C^{\prime}}\)-module \({\cal W}_{e}\) is torsion free and of rank one. The proof of this proposition is given in Appendix C. Next we study the relation between \(C\) and \(C^{\prime}\). A compact Riemann surface can be embedded into a projective space. Therefore there is a projective scheme corresponding to \(C\) which we denote by the same symbol \(C\). In terms of \(R=\cup_{n=0}^{\infty}R(n)\)\(C\) is described as \[C={\rm Proj}A,\hskip 28.452756ptA=\oplus_{n=0}^{\infty}R(n).\] There is an injective morphism, \(\varphi\): \({\rm Spec}R\to C\) given by a similar formula to (4.4) which we denote by the same symbol. The affine scheme \({\rm Spec}A\) corresponds to \(C\backslash\{p_{\infty}\}\). Similarly to \(p^{\prime}_{\infty}\) define \[\tilde{p}_{\infty}=\oplus_{n=0}^{\infty}R(n-1),\] where \(R(n-1)\) is situated at the degree \(n\) component of \(A\) as in the previous case. 
Set

\[D_{+}(a) = \{{\cal P}\in{\rm Proj}\,A\,|\,a\notin{\cal P}\},\]
\[A_{(a)} = \{ua^{-n}\,|\,u\in R(mn),\ n\geq 0\}.\]

As in the case of \(C^{\prime}\) the following proposition holds.

**Proposition 4.6**.: (i) \(\tilde{p}_{\infty}\notin\varphi({\rm Spec}\,R)\). (ii) \(C=\varphi({\rm Spec}\,R)\cup\{\tilde{p}_{\infty}\}\). (iii) \(H^{0}(C\backslash\{\tilde{p}_{\infty}\},{\cal O}_{C})=R\). (iv) \(\tilde{p}_{\infty}\in D_{+}(a)\) and it corresponds to a maximal ideal of \(A_{(a)}\).

All elements of the maximal ideal \(\tilde{p}_{\infty}\) of \(A_{(a)}\) vanish at \(p_{\infty}\). So \(\tilde{p}_{\infty}\) can be identified with \(p_{\infty}\). The inclusion map \(A^{\prime}\subset A\) induces a morphism \(\psi:C\to C^{\prime}\). Then

**Proposition 4.7**.: The morphism \(\psi:C\to C^{\prime}\) gives the normalization of \(C^{\prime}\).

The proof of this proposition is given in Appendix D.

Finally we study the singularities of \(C^{\prime}\). Let \(P\) be a point of the compact Riemann surface \(C\) such that \(P\neq p_{\infty}\), \(z(P)=p\), and let \(m_{P}\in{\rm Spec}\,R\) be the maximal ideal corresponding to \(P\), that is,

\[m_{P}=\{f\in R\,|\,f(p)=0\}.\]

Then \(m^{\prime}=\psi(m_{P})=m_{P}\cap R_{e}\) is a maximal ideal of \(R_{e}\) since \(R\) is integral over \(R_{e}\) by Corollary D.4. We denote by \(R_{m_{P}}\) the localization of \(R\) at \(m_{P}\) etc. Then

**Proposition 4.8**.: (i) If \(P\neq P_{i}\) for any \(i\), then \((R_{e})_{m^{\prime}}\simeq R_{m_{P}}\). In particular \((R_{e})_{m^{\prime}}\) is a normal ring and the closed point \(m^{\prime}\in{\rm Spec}\,R_{e}\) is a non-singular point.

(ii) If \(P=P_{i,j}\) and \(n_{i}\geq 2\), then \((R_{e})_{m^{\prime}}\) is not a normal ring. In particular \(m^{\prime}\in{\rm Spec}\,R_{e}\) is a singular point. Moreover \(\psi^{-1}(m^{\prime})=\{P_{i,1},...,P_{i,n_{i}}\}\) in this case.

The proof of this proposition is given in Appendix E. This proposition shows that \(C^{\prime}\) is obtained from \(C\) by identifying the points \(P_{i,1},...,P_{i,n_{i}}\) for each \(i\) such that \(n_{i}>1\).

## 5 Solitons on elliptic backgrounds

As remarked in Remark 3.3, if all \(t_{i}\), \(p_{j}\) are real, \(\tau_{0}(t)>0\) for any \(t\) [6], \(B_{I}\geq 0\) for any \(I\) and \(p_{1}^{-1}<\cdots<p_{L}^{-1}\), then \(\tau(t)\) given by (3.5) is positive. Notice that \(\tau(t)\) is a linear combination of \(\{e^{\eta_{I}}\tau_{0}(t-[p_{I}])\}\). Then, in the region of the \(xy\)-plane where \(e^{\eta_{I}}\tau_{0}(t-[p_{I}])\) is dominant, \(u(t)=2\partial_{x}^{2}\log\tau(t)\approx 2\partial_{x}^{2}\log\tau_{0}(t-[p_{I}])\) is the quasi-periodic wave corresponding to the shift of \(\tau_{0}(t)\). On the boundary of two such domains, soliton-like waves will appear as in the case of soliton solutions [16, 17]. Thus it is expected that \(u(t)\) represents a soliton on a quasi-periodic background. In this section we verify this by computer simulation in the case of genus one.

Let \(a,b\) be positive real numbers. Set \(2\omega_{1}=-ib\), \(2\omega_{2}=ab\), \(\Omega=\omega_{2}/\omega_{1}=ia\) and \(\mathbb{L}=2\omega_{1}\mathbb{Z}+2\omega_{2}\mathbb{Z}\). Define

\[g_{2}=60\sum\nolimits_{\omega\in\mathbb{L},\omega\neq 0}\frac{1}{\omega^{4}},\qquad g_{3}=140\sum\nolimits_{\omega\in\mathbb{L},\omega\neq 0}\frac{1}{\omega^{6}}.\]

Then \(g_{2}\), \(g_{3}\) are real. Consider the algebraic curve \(C\) defined by the corresponding Weierstrass cubic

\[y^{2}=4x^{3}-g_{2}x-g_{3}.\]

Let \(\wp(u)\) be the Weierstrass elliptic function.
Then \(u\mapsto(\wp(u),\wp^{\prime}(u))\) gives an isomorphism between the complex torus \(\mathbb{C}/\mathbb{L}\) and \(C\), where \(u=0\) corresponds to \(\infty\in C\). A basis of holomorphic one forms is \(du=dx/y\). A canonical homology basis can be taken such that \[\int_{\alpha}du=2\omega_{1},\quad\int_{\beta}du=2\omega_{2}.\] Therefore the normalized holomorphic one form is given by \[dv=(2\omega_{1})^{-1}du.\] We take \(u\) as a local coordinate around \(\infty\). Then the corresponding solution of the KP hierarchy is given by \[\tau_{0}(t)=e^{\frac{1}{2}q(t)}\theta\left(\frac{x}{2\omega_{1}}+e|\Omega \right), \tag{5.1}\] where \(e\) is an arbitrary complex constant. If we take \(e\in i\mathbb{R}\) and \(x\), \(t_{j}\), \(j\geq 2\) to be real then \(\tau_{0}(t)\) is real and positive. In the present case \(q_{i,j}\) defining \(q(t)\) is described in the following way. Let \[\theta_{11}(z|\Omega)=\sum_{n\in\mathbb{Z}}e^{\pi i\Omega(n+\frac{1}{2})^{2}+2 \pi i(n+\frac{1}{2})(z+\frac{1}{2})}.\] Sometimes \(\theta_{11}(z|\Omega)\) is simply denoted by \(\theta_{11}(z)\). It satisfies \[\theta_{11}(-z)=-\theta_{11}(z),\quad\theta_{11}(z)=\theta_{11}^{\prime}(0)z+O (z^{3}),\quad\theta_{11}^{\prime}(0)\neq 0.\] The prime form is written as \[E(z_{1},z_{2})=\frac{2\omega_{1}}{\theta_{11}^{\prime}(0)}\theta_{11}\left( \frac{z_{2}-z_{1}}{2\omega_{1}}\right).\] Therefore \[\frac{\partial^{2}}{\partial z_{1}\partial z_{2}}\log\frac{\theta_{11}\left( \frac{z_{2}-z_{1}}{2\omega_{1}}\right)}{z_{2}-z_{1}}=\sum_{i,j=1}^{\infty}q_{ i,j}z_{1}^{i-1}z_{2}^{j-1}. \tag{5.2}\] Let us take the elliptic solution (5.1) as \(\tau_{0}(t)\) in the general formula (3.5). To neatly write \(\tau(t)\) let us introduce \[F(z)=\log\theta_{11}\left(\frac{z}{2\omega_{1}}\right),\hskip 28.452756ptF_{j}(z )=\frac{(-1)^{j-1}}{(j-1)!}F^{(j)}(z).\] By Proposition 3.4 we have **Proposition 5.1**.: For \(\tau_{0}(t)\) given by (5.1) the \(\tau(t)\) defined by (3.3) is written as \[\tau(t) = ce^{\sum_{j=1}^{\infty}c_{j}t_{j}+\frac{1}{2}q(t)} \tag{5.3}\] \[\times\sum_{I}B_{I}C_{I}e^{\sum_{j\in I}\sum_{k=1}^{\infty}t_{k}F_ {k}(p_{j})}\theta\left(\frac{x-\sum_{j\in I}p_{j}}{2\omega_{1}}+e|\Omega\right).\] Here \(c\), \(c_{j}\) are certain constants and \[C_{I}=\prod_{j\in I}\frac{p_{j}}{\theta_{11}\left(\frac{p_{j}}{2\omega_{1}} \right)^{N}}\prod_{j,k\in I,j<k}\theta_{11}\left(\frac{p_{k}-p_{j}}{2\omega_{1 }}\right).\] We take \(\{t_{j}\}\), \(\{p_{j}\}\) are real and \(e\in i\mathbb{R}\). If \(B_{I}C_{I}\geq 0\) for any \(I\) and some of them is positive then the \(\sum_{I}\) part in the right hand side of (5.3) is positive and \(u=2\partial_{x}^{2}\log\tau(t)\) is non-singular. For the positivity of \(C_{I}\) we have **Proposition 5.2**.: If \(p_{1}<\cdots<p_{L}\), \(p_{L}-p_{1}<ab\) and \(|p_{j}|<ab\) for \(1\leq j\leq L\) then \(C_{I}>0\) for any \(I\). This proposition follows from the following lemma which can easily be proved. **Lemma 5.3**.: If \(a>0\), then \(i\theta_{11}(ix|ia)>0\) for \(0<x<a\). **Example 5.4**.: Take \(M=N=2\), \(a=1\), \(b=6\), \((p_{1},p_{2},p_{3},p_{4})=(0.31,0.46,0.81,4.89)\), \(t_{j}=0\) for \(j\geq 4\) and \[B=\left(\begin{array}{cc}1&0\\ 0&1\\ -2&1\\ -3&0\end{array}\right). \tag{5.4}\] This \(B\) corresponds to (3) of SS4.6.4 [17]. Notice that the matrix in [17] is the transpose of our matrix \(B\). In this case \(B_{12}=1\), \(B_{13}=1\), \(B_{14}=0\), \(B_{23}=2\), \(B_{24}=3\), \(B_{34}=3\) and \[(F^{(1)}(p_{1}),F^{(1)}(p_{2}),F^{(1)}(p_{3}),F^{(1)}(p_{4}))=(3.25,2.21,1.30, 0.05). 
\tag{5.5}\]

The results of a computer simulation of \(u(t)\) are given in Figures 1, 2, 3. We can see the soliton corresponding to the matrix \(B\) on the periodic waves.

**Example 5.5**.: Take \(M,N,a,b,p_{j},t_{j}\) the same as those in Example 5.4. Consider \(B\) of the form

\[B=\left(\begin{array}{cc}1&0\\ 0&1\\ -1&2\\ -1&1\end{array}\right). \tag{5.6}\]

For this matrix \(B_{12}=1\), \(B_{13}=2\), \(B_{14}=1\), \(B_{23}=1\), \(B_{24}=1\), \(B_{34}=1\). This \(B\) corresponds to (1) of §4.6.4 of [17]. See Figures 4, 5, 6.

Figure 1: Example 5.4, \(t=-3\), \(-20\leq x\leq 20\), \(0\leq y\leq 15\).
Figure 2: Example 5.4, \(t=0\), \(-20\leq x\leq 20\), \(-10\leq y\leq 10\).
Figure 4: Example 5.5, \(t=-2\), \(-35\leq x\leq 15\), \(-5\leq y\leq 15\).
Figure 5: Example 5.5, \(t=0\), \(-20\leq x\leq 20\), \(-10\leq y\leq 10\).
Figure 6: Example 5.5, \(t=2\), \(-20\leq x\leq 32\), \(-25\leq y\leq 0\).

## 6 Proof of Theorem 3.7 (i)

In this section we freely use notation on sheaf cohomologies. Namely, for a holomorphic line bundle \({\cal L}\), distinct points \(S_{i}\), \(1\leq i\leq m\), \(S_{j}^{\prime}\), \(1\leq j\leq m^{\prime}\), on \(C\), a point \(P\) and positive integers \(\{m_{i}\}\), \(\{m_{j}^{\prime}\}\), we denote by

\[H^{0}(C,{\cal L}(-\sum m_{i}S_{i}+\sum m_{j}^{\prime}S_{j}^{\prime}+*P))\]

the space of meromorphic sections of \({\cal L}\) which have a zero at \(S_{i}\) of order at least \(m_{i}\), a pole at \(S_{j}^{\prime}\) of order at most \(m_{j}^{\prime}\) and a pole at \(P\) of any order.

To prove Theorem 3.7 (i) it is sufficient to show

\[\dim W_{e}(n)=n-N\mbox{ for }n>>0, \tag{6.1}\]

by Corollary 2.2.

**Lemma 6.1**.: There exists \(F_{i}\) in \(H^{0}(C,L_{\Delta,e}(*p_{\infty}))\), \(1\leq i\leq L\), such that

\[F_{i}(p_{j})=\delta_{i,j}.\]

The proof of this lemma is given at the end of this section. For \(1\leq m\leq M\) set

\[\varphi_{m}=F_{N+m}-\sum_{i=1}^{N}b_{N+m,i}^{\prime}F_{i}. \tag{6.2}\]

Then

\[\varphi_{m}\in W_{e},\]

since

\[\varphi_{m}(q_{l})=\delta_{m,l},\hskip 28.452756pt\varphi_{m}(p_{i})=-b_{N+m,i}^{\prime}, \tag{6.3}\]

and

\[-\sum_{j=1}^{M}b_{N+j,i}^{\prime}\,\varphi_{m}(q_{j})=-b_{N+m,i}^{\prime}=\varphi_{m}(p_{i}).\]

Notice that the vector space

\[H^{0}(C,L_{\Delta,e}(-\sum_{j=1}^{L}P_{j}+*p_{\infty}))\]

is a subspace of \(W_{e}\), since elements of it vanish at all \(p_{j}\) and the linear equations imposed in \(W_{e}\) are trivially satisfied.

**Lemma 6.2**.: The following equation holds,

\[W_{e}=H^{0}(C,L_{\Delta,e}(-\sum_{j=1}^{L}P_{j}+*p_{\infty}))\oplus\oplus_{m=1}^{M}\mathbb{C}\varphi_{m}. \tag{6.4}\]

Proof.: It is obvious that the right hand side is included in the left hand side. Let us prove the converse inclusion. Take any \(f\in W_{e}\) and set \(f(q_{i})=c_{i}\). Set

\[F=f-\sum_{m=1}^{M}c_{m}\varphi_{m}.\]

Then

\[F(q_{i})=f(q_{i})-\sum_{m=1}^{M}c_{m}\varphi_{m}(q_{i})=c_{i}-c_{i}=0.\]

Since \(f\) and the \(\varphi_{m}\) are all in \(W_{e}\), \(F\in W_{e}\) and \(F(p_{i})=0\) for \(1\leq i\leq N\). Therefore

\[F\in H^{0}(C,L_{\Delta,e}(-\sum_{j=1}^{L}P_{j}+*p_{\infty}))\]

and hence \(f\) is in the right hand side of (6.4).

Take \(n\) larger than the order of the pole of \(\varphi_{j}\) at \(p_{\infty}\) for every \(j\).
Then \[W_{e}(n)=H^{0}(C,L_{\Delta,e}(-\sum_{j=1}^{L}P_{j}+np_{\infty}))\oplus\oplus_{ m=1}^{M}\mathbb{C}\varphi_{m},\] and \[\dim W_{e}(n)=\dim H^{0}(C,L_{\Delta,e}(-\sum_{j=1}^{L}P_{j}+np_{\infty}))+M.\] If we further take \(n\) larger than \(g-1+L\), \[\deg L_{\Delta,e}(-\sum_{j=1}^{L}P_{j}+np_{\infty})=g-1-L+n>2g-2=\deg\Omega^{ 1},\] then \[H^{1}(C,L_{\Delta,e}(-\sum_{j=1}^{L}P_{j}+np_{\infty}))=0,\] and, by the Riemann-Roch theorem, \[\dim H^{0}(C,L_{\Delta,e}(-\sum_{j=1}^{L}P_{j}+np_{\infty}))=n-L.\] Therefore, for \(n>>0\), \[\dim W_{e}(n)=n-L+M=n-N.\] **Proof of Lemma 6.1.** We first show that, for each \(1\leq i\leq L\), there exists an element \(G_{i}\) of \(H^{0}(C,L_{\Delta,e}(*p_{\infty}))\) such that it has only a simple pole at \(P_{i}\) on \(C\backslash\{p_{\infty}\}\). By the Riemann-Roch theorem, for a sufficiently large \(n\), we have \[\dim H^{0}(C,L_{\Delta,e}(P_{i}+np_{\infty}))=n+1,\] \[\dim H^{0}(C,L_{\Delta,e}(np_{\infty}))=n.\] It means that there exists a meromorphic section of \(L_{\Delta,e}\) which has a simple pole at \(P_{i}\), a pole at \(p_{\infty}\) of order at most \(n\) and has no other poles. Thus \(G_{i}\) exists. Next we prove that there exists a meromorphic function on \(C\) such that it has a simple zero at every \(P_{i}\), \(1\leq i\leq L\), and it is holomorphic on \(C\backslash\{p_{\infty}\}\). Again, if we take \(n\) sufficiently large, we have, by the Riemann-Roch theorem, \[\dim H^{0}(C,{\cal O}(-\sum_{j=1}^{L}P_{j}+np_{\infty}))=1-g-L+n,\] \[\dim H^{0}(C,{\cal O}(-P_{i}-\sum_{j=1}^{L}P_{j}+np_{\infty}))=-g -L+n.\] It follows that, for each \(i\leq L\), there exists a meromorphic function \(h_{i}\) on \(C\) which has a simple zero at \(P_{i}\), a zero at \(P_{j}\), \(j\neq i\), and is holomorphic on \(C\backslash\{p_{\infty}\}\). We shall show that a desired \(h\) can be constructed as a linear combination of \(\{h_{i}\}\). Set \[h=\sum_{i=1}^{L}\lambda_{i}h_{i}.\] At each \(P_{i}\) take a local coordinate \(w\) and write \[h_{i}=c_{i}w+O(w^{2}),\hskip 28.452756pth_{j}=c_{j}w^{m_{j}}+O(w^{m_{j}+1}), \hskip 14.226378ptj\neq i,\] where \(c_{j}\neq 0\) for any \(j\). Let \(\{j\,|\,m_{j}=1\}=\{j_{1},...,j_{r}\}\), where we set \(m_{i}=1\). Then \[h=(c_{j_{1}}\lambda_{j_{1}}+\cdots+c_{j_{r}}\lambda_{j_{r}})w+O(w^{2}),\] and the condition that \(h\) has a simple zero at \(P_{i}\) is \[c_{j_{1}}\lambda_{j_{1}}+\cdots+c_{j_{r}}\lambda_{j_{r}}\neq 0. \tag{6.5}\] This is an open condition for \(\{\lambda_{j}\}\). Thus there exists \(\{\lambda_{j}\}\) such that \(h\) has a simple zero at every \(P_{i}\). Choosing one set of \(\{h,G_{i}|1\leq i\leq L\}\) define the element of \(H^{0}(C,L_{\Delta,e}(*p_{\infty}))\) by \[F_{i}=a_{i}hG_{i},\quad a_{i}=(hG_{i})(p_{i})^{-1}.\] Obviously they satisfy \(F_{i}(p_{j})=\delta_{i,j}\). ## 7 Proof of Theorem 3.7 (ii) Let \(\Psi(t,z)\) be the wave function of \(\tau(t)\), \[\Psi(t,z)=\frac{\tau(t-[z])}{\tau(t)}e^{\eta(t,z^{-1})}. \tag{7.1}\] We first show **Lemma 7.1**.: For \(1\leq i\leq N\) \[\Psi(t,p_{i})=-\sum_{j=1}^{M}\tilde{b}_{N+j,i}\Psi(t,p_{N+j}),\quad\tilde{b}_ {N+j,i}=b_{N+j,i}(p_{i}p_{N+j}^{-1})^{N} \tag{7.2}\] Proof.: Substitute (7.1) into (7.2) we have the equation for \(\tau(t)\), \[\tau(t-[p_{i}])e^{\eta_{i}}=-\sum_{j=1}^{M}\tilde{b}_{N+j,i}\tau(t-[p_{N+j}])e ^{\eta_{N+j}}. \tag{7.3}\] Let us prove this equation. By (3.5) we have, using \(e^{-\eta([z],p^{-1})}=1-p^{-1}z\), \[\tau(t-[z])=\sum_{I}B_{I}\Delta_{I}^{-}\prod_{j\in I}(1-p_{j}^{-1}z)\,e^{\eta_ {I}}\tau_{0}(t-[z]-[p_{I}]). 
\tag{7.4}\] Substituting \(z=p_{i}\), \(1\leq i\leq N\) and multiplying by \(e^{\eta_{i}}\) we get \[\tau(t-[p_{i}])e^{\eta_{i}} = p_{i}^{N}\sum_{I}B_{I}\Delta_{I}^{-}\prod_{j\in I}(p_{i}^{-1}-p_ {j}^{-1})e^{\eta_{I}+\eta_{i}}\tau_{0}(t-[p_{i}]-[p_{I}]) \tag{7.5}\] \[= p_{i}^{N}\sum_{i\notin I}B_{I}\Delta_{(I,i)}^{-}e^{\eta_{(I,i)}} \tau_{0}(t-[p_{(I,i)}]).\] We write \(I=(I^{\prime},N+j_{1},...,N+j_{r})\) with \[I^{\prime}\in{[N]\choose N-r},\quad 1\leq j_{1}<\cdots<j_{r}\leq M.\] Since \(i\notin I\), we have \(r\geq 1\) and \(i\notin I^{\prime}\). For simplicity we set \[T(I,i)=e^{\eta_{(I,i)}}\tau_{0}(t-[p_{(I,i)}]). \tag{7.6}\] Then RHS of (7.5) \[= p_{i}^{N}\sum_{r=1}^{N}\sum_{I^{\prime}\in{[N]\choose N-r},i\notin I ^{\prime},1\leq j_{1}<\cdots<j_{r}\leq M}B_{(I^{\prime},N+j_{1},...,N+j_{r})} \Delta_{(I^{\prime},N+j_{1},...,N+j_{r},i)}^{-}\] (7.7) \[\times T(I^{\prime},N+j_{1},...,N+j_{r},i).\] To proceed we extend the index \(I\) of \(B_{I}\) and \(\Delta_{I}^{-}\) to arbitrary sequence from \([L]\) in a skew symmetric way. In particular, for \(I=(i_{1},...,i_{N})\), \(B_{I}\Delta_{I}^{-}\) is symmetric in \(i_{1},...,i_{N}\) and \(B_{I}\), \(\Delta_{I}^{-}\) become \(0\) if some of indices in \(I\) coincide. Then \[\sum_{1\leq j_{1}<\cdots<j_{r}\leq M}B_{(I^{\prime},N+j_{1},...,N +j_{r})}\Delta_{(I^{\prime},N+j_{1},...,N+j_{r},i)}^{-}T(I^{\prime},N+j_{1},...,N+j_{r},i) \tag{7.8}\] \[= \frac{1}{r!}\sum_{j_{1},...,j_{r}=1}^{M}B_{(I^{\prime},N+j_{1},...,N+j_{r})}\Delta_{(I^{\prime},N+j_{1},...,N+j_{r},i)}^{-}T(I^{\prime},N+j_{1},...,N+j_{r},i).\] Recall that \(B\) has the form \[B=\left(\begin{array}{cccc}1&&&\\ &&\ddots&&\\ &&&1\\ b_{N+1,1}&\cdots&b_{N+1,N}\\ \vdots&&\vdots\\ b_{N+M,1}&\cdots&b_{N+M,N}\end{array}\right).\] and \(i\notin I^{\prime}\). Then the expansion of the determinant \(B_{(I^{\prime},N+j_{1},...,N+j_{r})}\) in the \(i\)-th column takes the form \[B_{(I^{\prime},N+j_{1},...,N+j_{r})}=\sum_{k=1}^{r}(-1)^{N-r+k+i}b_{N+j_{k},i} B_{(I^{\prime},N+j_{1},...,\widehat{N+j_{k}},...,N+j_{r})}^{(i)}. \tag{7.9}\] Here \(B_{i_{1},...,i_{N-1}}^{(i)}\) denotes the determinant \(\det(b_{i_{m},j})_{1\leq m\leq N-1,1\leq j\leq N,j\neq i}\). Substitute (7.9) into (7.8) and change the order of sum: RHS of (7.8) \[= \frac{1}{r!}\sum_{k=1}^{r}\sum_{j_{1},...,j_{r}=1}^{M}(-1)^{N-r+k+i }b_{N+j_{k},i}B^{(i)}_{(I^{\prime},N+j_{1},...,\widehat{N+j_{k}},...,N+j_{r})} \Delta^{-}_{(I^{\prime},N+j_{1},...,N+j_{r},i)} \tag{7.10}\] \[\times T(I^{\prime},N+j_{1},...,N+j_{r},i).\] Use \[\Delta^{-}_{(I^{\prime},N+j_{1},...,N+j_{r},i)}=(-1)^{r-k}\Delta^{ -}_{(I^{\prime},N+j_{1},...,\widehat{N+j_{k}},...,N+j_{r},N+j_{k},i)}\] and change the names of indices as \(j_{k}\to j\), \(j_{1},...,\widehat{j_{k}},...,j_{r}\to j_{1},...,j_{r-1}\). We get RHS of (7.10) \[= \frac{1}{r!}\sum_{k=1}^{r}(-1)^{N+i}\sum_{j=1}^{M}b_{N+j,i}\sum_{ j_{1},...,j_{r-1}=1}^{M}B^{(i)}_{(I^{\prime},N+j_{1},...,N+j_{r-1})}\Delta^{-}_{(I ^{\prime},N+j_{1},...,N+j_{r-1},N+j,i)}\] \[\times T_{(I^{\prime},N+j_{1},...,N+j_{r-1},N+j,i)}.\] Since the summand does not depend on \(k\), the sum in \(k\) gives \(r\) times of the summand. Rewrite the summation in \(1\leq j_{1},...,j_{r-1}\leq M\) to \((r-1)!\) times the summation in \((j_{1},...,j_{r-1})\) with \(1<j_{1}<\cdots<j_{r-1}\leq M\). 
Substitute it into (7.7) and get RHS of (7.7) \[= (-1)^{N+i}p_{i}^{N}\sum_{j=1}^{N}b_{N+j,i}\sum_{r=1}^{M}\sum_{I^{ \prime}\in{[N]\choose N-r},i\notin I^{\prime}}\sum_{1\leq j_{1}<\cdots<j_{r-1} \leq M}B^{(i)}_{(I^{\prime},N+j_{1},...,N+j_{r-1})}\] \[\times\Delta^{-}_{(I^{\prime},N+j_{1},...,N+j_{r-1},N+j,i)}T_{(I^ {\prime},N+j_{1},...,N+j_{r-1},N+j,i)}.\] Notice that taking the summation over \(r\), \(I^{\prime}\), \(\{j_{k}\}\) is equivalent to taking the summation over \(I^{\prime}\in{[L]\choose N-1}\) with \(i,N+j\notin I^{\prime}\). Thus \[\tau(t-[p_{i}])e^{\eta_{i}} \tag{7.11}\] \[= \mbox{RHS of (\ref{eq:1})}\] \[= (-1)^{N+i}p_{i}^{N}\sum_{j=1}^{N}b_{N+j,i}\sum_{I^{\prime}\in{[ L]\choose N-1},i,N+j\notin I^{\prime}}B^{(i)}_{I^{\prime}}\Delta^{-}_{(I^{ \prime},N+j,i)}T(I^{\prime},N+j,i).\] Next, by replacing \(i\) by \(N+j\) in (7.5) and recalling the definition (7.6) of \(T(I,i)\), we have \[\tau(t-[p_{N+j}])e^{\eta_{N+j}}=p_{N+j}^{N}\sum_{N+j\notin I}B_{I }\Delta^{-}_{(I,N+j)}T(I,N+j). \tag{7.12}\] It follows that \[\sum_{j=1}^{M}\tilde{b}_{N+j,i}\tau(t-[p_{N+j}])e^{\eta_{N+j}}\] \[= p_{i}^{N}\sum_{j=1}^{M}b_{N+j,i}\sum_{N+j\notin I}B_{I}\Delta_{(I,N+ j)}^{-}T(I,N+j)\] \[= I_{+}+I_{-}, \tag{7.14}\] where \(I_{+}\) is the part of the RHS of (7.13) such that \(I\) includes \(i\) in the summation over \(I\) and \(I_{-}\) the part where \(I\) does not include \(i\). We show that \(I_{+}\) is equal to \(-\tau(t-[p_{i}])e^{\eta_{i}}\) and \(I_{-}=0\). Let us first consider \(I_{+}\). Separating \(i\) from \(I\) we have \[I_{+}=p_{i}^{N}\sum_{j=1}^{M}b_{N+j,i}\sum_{I^{\prime}\in\binom{[L]}{N-1},i,N+ j\notin I}B_{(I^{\prime},i)}\Delta_{(I^{\prime},i,N+j)}^{-}T(I^{\prime},i,N+j).\] Since the \(N\)-th row vector of \(B_{(I^{\prime},i)}\) is the \(i\)-th unit vector, we have \[B_{(I^{\prime},i)}=(-1)^{N+i}B_{I^{\prime}}^{(i)}.\] Therefore \[I_{+} = (-1)^{N+i}p_{i}^{N}\sum_{j=1}^{M}b_{N+j,i}\sum_{I^{\prime}\in \binom{[L]}{N-1},i,N+j\notin I}B_{I^{\prime}}^{(i)}\Delta_{(I^{\prime},i,N+j)} ^{-}T(I^{\prime},i,N+j) \tag{7.15}\] \[= -\mbox{RHS of (\ref{eq:11})}\] \[= -\tau(t-[p_{i}])e^{\eta_{i}},\] where \(\Delta_{(I^{\prime},i,N+j)}^{-}=-\Delta_{(I^{\prime},N+j,i)}^{-}\) is used. Next let us consider \(I_{-}\). In a similar computation to deriving (7.11) we have \[I_{-} = (-1)^{N+i}\sum_{j=1}^{M}b_{N+j,i}\sum_{j^{\prime}=1}^{M}b_{N+j^{ \prime},i}\sum_{I^{\prime}\in\binom{[L]}{N-1},i,N+j^{\prime},N+j\notin I^{ \prime}}B_{I^{\prime}}^{(i)}\Delta_{(I^{\prime},N+j^{\prime},N+j)}^{-}\] \[\times T(I^{\prime},N+j^{\prime},N+j)\] \[= (-1)^{N+i}\sum_{j,j^{\prime}=1}^{M}b_{N+j,i}b_{N+j^{\prime},i} \sum_{I^{\prime}\in\binom{[L]}{N-1},i,N+j^{\prime},N+j\notin I^{\prime}}B_{I^ {\prime}}^{(i)}\Delta_{(I^{\prime},N+j^{\prime},N+j)}^{-}\] \[\times T(I^{\prime},N+j^{\prime},N+j).\] Since \(b_{N+j,i}b_{N+j^{\prime},i}\) is symmetric in \(j,j^{\prime}\) and the remaining part is skew symmetric in \(j,j^{\prime}\), the last summation in \(j,j^{\prime}\) becomes zero. Therefore \(I_{-}=0\). We, then, have (7.3) by (7.14), (7.15). By Lemma 3.5 we have **Lemma 7.2**.: Let \(N\geq 1\), \(Q_{j}\in C\), \(1\leq j\leq N\) and \(z_{j}=z(Q_{j})\). 
Then \[\tau_{0}(t-[z]-\sum_{j=1}^{N}[z_{j}])e^{\eta(t,z^{-1})} \tag{7.16}\] \[= \left(\frac{z}{E(0,z)}\right)^{N+1}\prod_{j=1}^{N}\frac{E(z,z_{j} )}{z_{j}-z}\prod_{j=1}^{N}\frac{z_{j}}{E(0,z_{j})}e^{q(\sum_{j=1}^{N}[z_{j}])- \sum_{j=1}^{N}Q(t|[z_{j}])+\frac{1}{2}q(t)}\] \[\times\theta(\mathcal{V}t-I(p)-\sum_{j=1}^{N}I(Q_{j})+e)e^{\sum_{ j=1}^{\infty}t_{j}\int^{p}d\tilde{r}_{j}}.\] We use (7.16) to compute (7.4) and get \[\tau(t-[z])e^{\eta(t,z^{-1})} \tag{7.17}\] \[= z^{N+1}\sum_{J\in\binom{[L]}{N}}B_{J}\Delta_{J}^{-}e^{\eta_{J}} \prod_{j\in J}\frac{E(z,p_{j})}{E(0,z)E(0,p_{j})}e^{q(\sum_{j\in J}[p_{j}])- \sum_{j\in J}Q(t|[p_{j}])+\frac{1}{2}q(t)}\] \[\times\frac{1}{E(0,z)}\theta(\mathcal{V}t-I(p)-\sum_{j\in J}I(Q_{ j})+e)e^{\sum_{j=1}^{\infty}t_{j}\int^{p}d\tilde{r}_{j}}.\] Set \[\Psi^{\prime}(t,z)=z^{-N-1}\Psi(t,z).\] Then (7.17) shows that the expansion coefficients of \(\tau(t)\Psi^{\prime}(t,z)=z^{-N-1}\tau(t-[z])e^{\eta(t,z^{-1})}\) belong to \(H^{0}(C,L_{\Delta,e}(*p_{\infty}))\). Rewriting the equation (7.2) in terms of \(\Psi^{\prime}(t,z)\) we have \[\Psi^{\prime}(t,p_{i})=-\sum_{j=1}^{M}b^{\prime}_{N+j,i}\Psi^{\prime}(t,q_{j}).\] It means that the expansion coefficients of \(\tau(t)\Psi^{\prime}(t,z)\) are in \(W_{e}\). Thus the expansion coefficients of \(\tau(t)\Psi(t,z)\) are in \(z^{N+1}W_{e}=U_{e}\). Since \(U_{e}\in UGM\) by (1) of Theorem 3.7 and the strict inclusion relation is impossible for points of UGM, \(U_{e}\) is the point of UGM corresponding to \(\tau(-t)\). ## Appendix A Proof of Proposition 4.1 We first show that \(R_{e}\) is contained in the RHS of (4.1). Let \(f\in R_{e}\) and \(\varphi_{m}\in W_{e}\) be defined in (6.2). Then \(f\varphi_{m}\in W_{e}\). By (6.3) we see that \[(f\varphi_{m})(p_{i})=-\sum_{j=1}^{M}b^{\prime}_{N+j,i}(f\varphi_{m})(q_{j})\] is equivalent to \[f(p_{i})b^{\prime}_{N+m,i}=b^{\prime}_{N+m,i}f(q_{m}).\] (A.1) Therefore \(f\) is contained in the RHS of (4.1). Let us prove the converse inclusion. Let \(f\) be an element of the RHS of (4.1) and \(F\in W_{e}\). Notice that the equation (A.1) holds for any \(i,m\). Then \[-\sum_{j=1}^{M}b^{\prime}_{N+j,i}(fF)(q_{j}) = -\sum_{j=1}^{M}b^{\prime}_{N+j,i}f(q_{j})F(q_{j})\] \[= -\sum_{j=1}^{M}b^{\prime}_{N+j,i}f(p_{i})F(q_{j})\] \[= f(p_{i})\left(-\sum_{j=1}^{M}b^{\prime}_{N+j,i}F(q_{j})\right)\] \[= f(p_{i})F(p_{i})=(fF)(p_{i})\] which means \(fF\in W_{e}\). Thus \(f\in R_{e}\). ## Appendix B Proof of Theorem 4.4 (iii) follows from (ii). Let us prove (i), (ii), (iv). (i) It can be easily proved that if \(\varphi({\cal P})=p^{\prime}_{\infty}\) for some \({\cal P}\in{\rm Spec}R_{e}\) then \({\cal P}=R_{e}\). It is absurd. (ii) Let \({\cal P}=\oplus_{n=0}^{\infty}{\cal P}^{(n)}\in C^{\prime}\) such that \({\cal P}^{\prime}\neq p^{\prime}_{\infty}\). Set \[q=\cup_{n=0}^{\infty}{\cal P}^{(n)}\subset Re,\] which obviously becomes an ideal of \(R_{e}\). It is sufficient to prove the following lemma. **Lemma B.1**.: (i) If \(x,y\in R_{e}\) satisfies \(xy\in q\), either \(x\in q\) or \(y\in q\). (ii) \(q\neq R_{e}\). (iii) \(\varphi(q)={\cal P}\). Proof.: (i) We can assume that \[x=z^{-m}+O(z^{-m+1}),\hskip 28.452756pty=z^{-n}+O(z^{-n+1})\] with \(m,n\geq 1\). Then \[xy\in R_{e}(m+n)\backslash R_{e}(m+n-1).\] (B.1) Since \(xy\in q\), there exists \(N\geq 0\) such that \[xy\in{\cal P}^{(N)}\subset R_{e}(N).\] By (B.1) we have \(N\geq m+n\). 
Set \[N-(m+n)=k\geq 0.\] Then \[x\in R_{e}(m)\subset R_{e}(m+k),\hskip 28.452756pty\in R_{e}(n).\] So we consider \(x\) and \(y\) as homogeneous elements of \(A^{\prime}\) with degree \(m+k\) and \(n\) respectively. Then \[xy\in{\cal P}^{(N)}\subset A^{\prime}.\] Since \({\cal P}\) is a prime ideal of \(A^{\prime}\), \(x\in{\cal P}\) or \(y\in{\cal P}\). Therefore \(x\in{\cal P}\cap R_{e}(m+k)={\cal P}^{(m+k)}\) or \(y\in{\cal P}\cap R_{e}(m)={\cal P}^{(m)}\). It means that \(x\in q\) or \(y\in q\). (ii) Notice that \(1\in R_{e}(n)\) for any \(n\geq 0\). If we consider \(1\) as a homogeneous element of \(A^{\prime}\) with degree \(n\) we denote it by \(1^{(n)}\). We shall show \[1^{(n)}\notin{\cal P}^{(n)},\hskip 28.452756ptn\geq 0.\] (B.2) Since \({\cal P}\neq A^{\prime}\), \(1^{(0)}\notin{\cal P}^{(0)}\). Let us consider the case \(n=1\). Suppose that \(1^{(1)}\in{\cal P}^{(1)}\). Notice that \(A^{\prime}1^{(1)}=p^{\prime}_{\infty}\). Therefore \[p^{\prime}_{\infty}\subsetneq{\cal P},\] since \({\cal P}\neq p^{\prime}_{\infty}\). Then there exists \(k\geq 1\) and \(f_{k}\in{\cal P}^{(k)}\) such that \(f_{k}\notin R_{e}(k-1)\), that is, \[f_{k}=z^{-k}+O(z^{-k+1}).\] By the Riemann-Roch theorem we have \[\dim R_{e}(n)/R_{e}(n-1)=1.\hskip 28.452756ptn>>0,\] (B.3) It follows that, for all sufficiently large \(n\), there exists \(h_{n}\in R_{e}(n)\) such that \[h_{n}=z^{-n}+O(z^{-n+1}).\] Then \(h_{n}f_{k}\in{\cal P}^{(n+k)}\) and \(h_{n}f_{k}\in R_{e}(n+k)\backslash R_{e}(n+k-1)\). Taking into account that \({\cal P}^{(n+k)}\supset R_{e}(n+k-1)\) we see that \[{\cal P}^{(n)}=R_{e}(n),\hskip 28.452756ptn>>0.\] (B.4) On the other hand \[{\cal P}\not\supset\oplus_{n=1}^{\infty}R_{e}(n)\] since \(\mathcal{P}\in C^{\prime}\). It follows that there exists \(N\geq 1\) and \(F_{N}\in R_{e}(N)\) such that \(F_{N}\notin\mathcal{P}^{(N)}\). Since \(\mathcal{P}^{(N)}\supset R_{e}(N-1)\), \(F_{N}\in R_{e}(N)\backslash R_{e}(N-1)\), that is, \[F_{N}=z^{-N}+O(z^{-N+1}).\] Since \(\mathcal{P}\) is a prime ideal, \[F_{N}^{m}\notin\mathcal{P}^{(Nm)},\] for any \(m\geq 1\) which contradicts (B.4). Thus \(1^{(1)}\notin\mathcal{P}^{(1)}\). Notice that \((1^{(1)})^{n}=1^{(n)}\) for \(n\geq 2\). Thus \(1^{(n)}\notin\mathcal{P}^{(n)}\) and (B.2) has been proved. Then \(1\notin q\) and \(q\neq R_{e}\). (iii) We have to prove that \(q\cap R_{e}(n)=\mathcal{P}^{(n)}\) for any \(n\geq 0\). Suppose that this does not hold. Then there exists \(N\geq 1\) such that \[q\cap R_{e}(N)\neq\mathcal{P}^{(N)}.\] Since the right hand side is contained in the left hand side, it means \[q\cap R_{e}(N)\supsetneq\mathcal{P}^{(N)}.\] Therefore there exists \(x\in q\cap R_{e}(N)\) such that \(x\notin\mathcal{P}^{(N)}\). Since \(x\in q\) there exists \(M\) such that \(x\in\mathcal{P}^{(M)}\). Then \(N<M\). In fact if \(N\geq M\), \(x\in\mathcal{P}^{(M)}\subset\mathcal{P}^{(N)}\). Since \(x\in q\cap R_{e}(N)\), we can consider \(x\) as a homogeneous element of \(A^{\prime}\) with degree \(N\). Then \(x\notin\mathcal{P}^{(N)}\) and \(1^{(M-N)}\notin\mathcal{P}^{(M-N)}\) but \[1^{(M-N)}x=x\in\mathcal{P}^{(M)},\] which is absurd. Thus \(q\cap R_{e}(n)=\mathcal{P}^{(n)}\) for any \(n\) and (iii) of the lemma is proved. Proof of Theorem 4.4 (iv). It is obvious that \(a\notin p^{\prime}_{\infty}\) and \(p^{\prime}_{\infty}\in D_{+}(a)\). Let \[(p^{\prime}_{\infty})_{(a)}=\sum_{n=0}^{\infty}\frac{R_{e}(nm-1)}{a^{n}}\] be the image of \(p^{\prime}_{\infty}\) in \(A^{\prime}_{(a)}\). 
Any element \(f\in R_{e}(nm)\) satisfies \(f-ca^{n}\in R_{e}(nm-1)\) for some constant \(c\in\mathbb{C}\). It means that \(f/a^{n}=c\) modulo \((p^{\prime}_{\infty})_{(a)}\). Since \((p^{\prime}_{\infty})_{(a)}\neq A^{\prime}_{(a)}\), this means that \(A^{\prime}_{(a)}/(p^{\prime}_{\infty})_{(a)}\simeq\mathbb{C}\). Thus \((p^{\prime}_{\infty})_{(a)}\) is a maximal ideal of \(A^{\prime}_{(a)}\) and (iv) is proved.

## Appendix C Proof of Proposition 4.5

Both \(W_{e}\) and \(R_{e}\) are subspaces of \(\mathbb{C}((z))\) and the \(R_{e}\)-module structure of \(W_{e}\) is given by the ring structure of \(\mathbb{C}((z))\). Therefore \(W_{e}\) is a torsion free \(R_{e}\)-module. That \(\mathcal{W}_{e}\) is a torsion free \(\mathcal{O}_{C^{\prime}}\)-module follows from this. Let \(K^{\prime}\) be the quotient field of \(R_{e}\). In order to prove that the rank of \(W_{e}\) is one, it is sufficient to show, by the definition, that \(\dim_{K^{\prime}}K^{\prime}\otimes_{R_{e}}W_{e}=1\).

**Lemma C.1**.: Let \(K\) be the quotient field of \(R\). Then \(K^{\prime}=K\).

_Proof._ Since \(R_{e}\subset R\), \(K^{\prime}\subset K\). Let us prove the converse inclusion. Take any \(f\in K\). Let the pole divisor of \(f\) be

\[m_{1}R_{1}+\cdots+m_{s}R_{s}+m_{\infty}p_{\infty},\quad m_{i}>0,\,R_{i}\in C,\,R_{i}\neq p_{\infty}.\]

Take \(n\) sufficiently large such that there exists a non-zero \(F\in H^{0}(C,\mathcal{O}(-\sum_{i=1}^{s}m_{i}R_{i}-\sum_{i,j}P_{i,j}+np_{\infty}))\). Then \(F\in R_{e}\) since \(F(p_{i,j})=0\) for all \(i,j\). Set \(h=fF\). Then

\[h\in H^{0}(C,\mathcal{O}(-\sum_{i,j}P_{i,j}+*p_{\infty}))\subset R_{e}.\]

Thus \(f=h/F\in K^{\prime}\) and therefore \(K\subset K^{\prime}\).

Let us continue the proof of Proposition 4.5. Take any nonzero \(f_{1}\in W_{e}\). Notice that \(K\) is the field of meromorphic functions on \(C\). Therefore, for any \(f_{2}\in W_{e}\), \(f_{2}/f_{1}\in K\) and \(f_{2}\in Kf_{1}\). By Lemma C.1 we have \(W_{e}\subset Kf_{1}=K^{\prime}f_{1}\). Thus \(K^{\prime}\otimes_{R_{e}}W_{e}=K^{\prime}f_{1}\) and \(\dim_{K^{\prime}}K^{\prime}\otimes_{R_{e}}W_{e}=1\).

## Appendix D Proof of Proposition 4.7

We begin by determining the structure of \(R\) and \(R_{e}\).

**Lemma D.1**.: For any \((i,j)\), \(1\leq i\leq s\), \(1\leq j\leq n_{i}\), there exists \(H_{i,j}\in R\) such that

\[H_{i,j}(p_{k,l})=\delta_{i,k}\delta_{j,l}.\]

_Proof._ By the Riemann-Roch formula, for all sufficiently large \(n\), we have

\[\dim H^{0}\left(C,\mathcal{O}(-\sum P_{k,l}+np_{\infty})\right)=n-L+1-g,\]
\[\dim H^{0}\bigg(C,\mathcal{O}(-\sum_{(k,l)\neq(i,j)}P_{k,l}+np_{\infty})\bigg)=n-L+2-g.\]

A non-zero element of the latter space which does not belong to the former space satisfies the required property if it is adjusted by a constant multiple.

Let

\[H_{i}=H_{i,1}+\cdots+H_{i,n_{i}}.\]

Then

**Lemma D.2**.: (i) \(H_{i}(p_{i,j})=1\) for \(1\leq j\leq n_{i}\) and \(H_{i}(p_{k,l})=0\) if \(k\neq i\). (ii) \(H_{i}\in R_{e}\). (iii) \(\{H_{i}\,|\,1\leq i\leq s\}\) is linearly independent.

The lemma can be easily proved from the definition of \(H_{i}\), so we leave the proof to the reader.

**Proposition D.3**.: (i) \(R=H^{0}\left(C,\mathcal{O}(-\sum P_{i,j}+*p_{\infty})\right)\oplus\oplus_{i,j}\mathbb{C}H_{i,j}\). (ii) \(R_{e}=H^{0}\left(C,\mathcal{O}(-\sum P_{i,j}+*p_{\infty})\right)\oplus\oplus_{i=1}^{s}\mathbb{C}H_{i}\).

Proof.: (i) It is sufficient to prove that the left hand side is contained in the right hand side. Take any \(f\in R\). Set \(f(p_{i,j})=c_{i,j}\) and \(f^{\prime}=f-\sum c_{i,j}H_{i,j}\).
Then \(f^{\prime}(p_{i,j})=0\) for any \(i,j\). Thus

\[f^{\prime}\in H^{0}\left(C,\mathcal{O}(-\sum P_{i,j}+*p_{\infty})\right),\]

which shows that \(f\) is contained in the RHS of (i). (ii) is similarly proved.

Since \(R_{e}\) is a subring of \(R\), \(R\) is considered as an \(R_{e}\)-module. Then

**Corollary D.4**.: The ring \(R\) is a finitely generated \(R_{e}\)-module.

Proof.: By Proposition D.3 we have

\[R=R_{e}\cdot 1+\sum R_{e}H_{i,j},\]

which shows the assertion of the corollary.

Since \(C\) is non-singular, \(R\) is integrally closed in \(K\). Then we have

**Corollary D.5**.: The ring \(R\) is the integral closure of \(R_{e}\) in \(K\).

Proof.: By Corollary D.4, \(R\) is integral over \(R_{e}\). An integral element of \(K\) over \(R_{e}\) is integral over \(R\) and therefore it belongs to \(R\) since \(R\) is integrally closed. Thus \(R\) is the integral closure of \(R_{e}\) in \(K\).

Take \(a\in R_{e}(m)\subset R(m)\) as in (4.5). Consider \(a\) as an element of \(A^{\prime}\) and \(A\) with degree \(m\). Similarly to the above corollary we can prove the following.

**Proposition D.6**.: The ring \(A_{(a)}\) is a finitely generated \(A^{\prime}_{(a)}\)-module and it is the integral closure of \(A^{\prime}_{(a)}\) in \(K\).

By Corollary D.5 and Proposition D.6 we have Proposition 4.7.

## Appendix E Proof of Proposition 4.8

(i) Let

\[S=R-m_{P}=\{f\in R\,|\,f(p)\neq 0\},\ \ \ \ S^{\prime}=R_{e}-m^{\prime}=\{f\in R_{e}\,|\,f(p)\neq 0\}.\]

Then \(S^{\prime}\subset S\) and, by definition, \(R_{m_{P}}=S^{-1}R\), \((R_{e})_{m^{\prime}}={S^{\prime}}^{-1}R_{e}\). Therefore \((R_{e})_{m^{\prime}}\subset R_{m_{P}}\). Let us prove the converse inclusion. Take any \(f\in R_{m_{P}}\) and write it as

\[f=\frac{F}{G},\ \ \ \ F\in R,\ G\in S.\]

Notice that, by the Riemann-Roch theorem, there exists \(H\in R\) such that

\[H\in H^{0}\left(C,\mathcal{O}(-\sum P_{i,j}+np_{\infty})\right),\ \ \ \ H(p)\neq 0,\] (E.1)

if \(n\) is sufficiently large. Then \(FH,GH\in R_{e}\), \((GH)(p)\neq 0\) and \(f=FH/GH\in{S^{\prime}}^{-1}R_{e}\). Thus \(R_{m_{P}}\subset(R_{e})_{m^{\prime}}\).

(ii) Since \(R\) is the integral closure of \(R_{e}\), the integral closure of \((R_{e})_{m^{\prime}}={S^{\prime}}^{-1}R_{e}\) is \({S^{\prime}}^{-1}R\) (c.f. Proposition 2.1 of [13]). Consider \(H_{i,j}\in R\subset{S^{\prime}}^{-1}R\) of Lemma D.1. It is not in \({S^{\prime}}^{-1}R_{e}\). In fact if \(H_{i,j}\in{S^{\prime}}^{-1}R_{e}\) then \(fH_{i,j}\in R_{e}\) for some \(f\in S^{\prime}\). Then \((fH_{i,j})(p_{i,j})=f(p_{i,j})\neq 0\) and \((fH_{i,j})(p_{i,j^{\prime}})=f(p_{i,j^{\prime}})H_{i,j}(p_{i,j^{\prime}})=0\) for \(j^{\prime}\neq j\), which contradicts \(fH_{i,j}\in R_{e}\). Thus \({S^{\prime}}^{-1}R_{e}\neq{S^{\prime}}^{-1}R\) and \({S^{\prime}}^{-1}R_{e}\) is not a normal ring.

Let us prove the last statement of (ii). Obviously \(m_{P_{i,j^{\prime}}}\in\psi^{-1}(m^{\prime})\) for any \(j^{\prime}\). Since \(R\) is integral over \(R_{e}\), each element of \(\psi^{-1}(m^{\prime})\) is a maximal ideal. Let \(Q\) be a point of the Riemann surface \(C\) such that \(Q\neq P_{i,j^{\prime}}\) for any \(j^{\prime}\) and \(z(Q)=q\). Suppose that \(\psi(m_{Q})=m^{\prime}\). Similarly to (E.1) there exists \(H\in R_{e}\) such that \(H(p_{i^{\prime},j^{\prime}})=0\) for any \(i^{\prime},j^{\prime}\) and \(H(q)\neq 0\). Then \(H\in m^{\prime}\) but \(H\notin\psi(m_{Q})\). It contradicts the assumption. Thus the assertion is proved.
### Acknowledgements Parts of the results of this paper were presented in a series of lectures at the University of Tokyo in July, 2022 and at Nagoya University in July, 2023. I would like to thank Junichi Shiraishi and Masashi Hamanaka for their invitations and hospitality. I would also like to thank Hiroaki Kanno, Yasuhiko Yamada, and Shintaro Yanagida for their interest. I would especially like to thank Yuji Kodama for explaining the contents of the paper [19] as well as related results [18], and for his valuable comments when I gave lectures at Nagoya University. I am also grateful to Yuji Kodama for introducing me to his theory of soliton solutions of the KP equation some years ago. That was the starting point of this study. This work was supported by JSPS KAKENHI Grant Number JP19K03528.
2309.08688
Probabilistic Constellation Shaping With Denoising Diffusion Probabilistic Models: A Novel Approach
With the incredible results achieved from generative pre-trained transformers (GPT) and diffusion models, generative AI (GenAI) is envisioned to yield remarkable breakthroughs in various industrial and academic domains. In this paper, we utilize denoising diffusion probabilistic models (DDPM), as one of the state-of-the-art generative models, for probabilistic constellation shaping in wireless communications. While the geometry of constellations is predetermined by the networking standards, probabilistic constellation shaping can help enhance the information rate and communication performance by designing the probability of occurrence (generation) of constellation symbols. Unlike conventional methods that deal with an optimization problem over the discrete distribution of constellations, we take a radically different approach. Exploiting the ``denoise-and-generate'' characteristic of DDPMs, the key idea is to learn how to generate constellation symbols out of noise, ``mimicking'' the way the receiver performs symbol reconstruction. By doing so, we make the constellation symbols sent by the transmitter, and what is inferred (reconstructed) at the receiver become as similar as possible. Our simulations show that the proposed scheme outperforms deep neural network (DNN)-based benchmark and uniform shaping, while providing network resilience as well as robust out-of-distribution performance under low-SNR regimes and non-Gaussian noise. Notably, a threefold improvement in terms of mutual information is achieved compared to DNN-based approach for 64-QAM geometry.
Mehdi Letafati, Samad Ali, Matti Latva-aho
2023-09-15T18:27:44Z
http://arxiv.org/abs/2309.08688v1
# Probabilistic Constellation Shaping With Denoising Diffusion Probabilistic Models: A Novel Approach ###### Abstract With the incredible results achieved from generative pre-trained transformers (GPT) and diffusion models, generative AI (GenAI) is envisioned to yield remarkable breakthroughs in various industrial and academic domains. In this paper, we utilize denoising diffusion probabilistic models (DDPM), as one of the state-of-the-art generative models, for probabilistic constellation shaping in wireless communications. While the geometry of constellations is predetermined by the networking standards, probabilistic constellation shaping can help enhance the information rate and communication performance by designing the probability of occurrence (generation) of constellation symbols. Unlike conventional methods that deal with an optimization problem over the discrete distribution of constellations, we take a radically different approach. Exploiting the "denoise-and-generate" characteristic of DDPMs, the key idea is to learn how to generate constellation symbols out of noise, "mimicking" the way the receiver performs symbol reconstruction. By doing so, we make the constellation symbols sent by the transmitter, and what is inferred (reconstructed) at the receiver become as similar as possible. Our simulations show that the proposed scheme outperforms deep neural network (DNN)-based benchmark and uniform shaping, while providing _network resilience_ as well as _robust out-of-distribution performance_ under low-SNR regimes and non-Gaussian noise. Notably, a threefold improvement in terms of mutual information is achieved compared to DNN-based approach for 64-QAM geometry. AI-native wireless, diffusion models, generative AI, network resilience, wireless AI. ## I Introduction The emergence of generative models has made a paradigm shift in the realm of artificial intelligence (AI) towards generative AI (GenAI)-based systems [1]. Innovative approaches in GenAI have attracted significant attention from both academia and industry, garnering extensive research and development efforts. In this regard, the evolution of diffusion models [2], as the state-of-the-art family of generative models, is considered as one of the key factors in the recent breakthroughs of GenAI. It has showcased remarkable results with famous solutions such as ImageGen by Google Brain and DALLE 2 by OpenAI, to name a few. Through the lens of data communication and networking, "connected intelligence" is envisioned as the most significant driving force in the sixth generation (6G) of wireless communications [3, 4, 5, 6]. It is envisioned that machine learning (ML) and AI algorithms are widely incorporated into wireless systems to realize "AI-native" systems. This highlights the need for novel AI solutions to be tailored for the emerging 6G scenarios. ### _Literature Review_ Although diffusion models have shown remarkable results in various applications within the computer science community, such as natural language processing (NLP), computer vision, and medical imaging [7], there are _only a few papers in communication literature that have started looking into the applications of diffusion models for wireless systems_[8, 9, 10, 11]. Notably, the incorporation of diffusion models into wireless communication problems is still in its infancy, and we hope that our paper sheds light on some of the possible directions. 
The authors in [8] propose a workflow for wireless network management via utilizing diffusion models, highlighting their exploration capability for wireless network management. A preprint [9] employs diffusion models to improve the performance of receiver in terms of noise and channel estimation error removal. The authors employ an autoencoder (AE) in addition to their diffusion model. However, the output signals of the encoder does not necessarily follow the standard shape of constellation symbols, making the scheme inapplicable to real-world wireless systems. Moreover, implementing two different ML models, each with a distinct objective function can impose computational overhead to the network. Denoising diffusion probabilistic model (DDPM) is utilized in [10] to generate synthetic channel realizations for an AE-based end-to-end wireless system. The authors highlight the promising performance of diffusion models as an alternative to generative adversarial network (GAN)-based models. They show that GANs have unstable training and less diversity in generation performance, because of their "adversarial" training nature, while DDPMs maintain a more stable training process and a better generalization during inference. In [11], noise-conditioned score networks are employed for channel estimation in multi-input-multi-output (MIMO) wireless communications. RefineNet neural architecture is implemented to estimate the gradient of the log-prior of wireless channels. The results imply a competitive performance for in- and out-of-distribution (OOD) scenarios compared to GANs. ### _Our Work_ With the aid of GenAI, our general goal is to take a step towards an _AI-native_ system [3, 4, 5], in which we can continuously design radio signals, adapt to changes, and realize "mutual understanding" between communication parties, instead of blindly transmitting information symbols. In this paper, we study the application of diffusion models, the state-of-the-art generative model in GenAI literature, for probabilistic constellation shaping in wireless communications. _To the best of our knowledge, this is the first paper that proposes diffusion models for constellation shaping in wireless communications._ _Setting the Stage:_ The choice of constellations can significantly affect the performance of communication systems. Recently, DNNs are proposed for geometric shaping [12, 13, 14]. They typically employ AEs and let the neural model learn constellation symbols for transmission. This results in arbitrary forms of constellation points that might not be compliant with wireless standards such as the 3rd Generation Partnership Project (3GPP) [15]. In such scenarios, probabilistic constellation shaping can help enhance the information rate and decoding performance of communication systems [12]. It designs the probability of occurrence (generation) of constellation symbols within the corresponding geometry. _Our Contributions:_ Unlike previous works that try to deal with the optimization of constellations over discrete distributions via iterative optimization methods or deep neural networks (DNN) [12], we offer a radically different approach--we exploit the "denoise-and-generate" characteristic of DDPMs for probabilistic shaping. First, a DDPM is trained with the aim of learning the diffusion process for generating constellation symbols out of noise. 
Within each transmission slot (TS), the transmitter runs the model to probabilistically shape (generate) the constellation symbols according to the signal-to-noise ratio (SNR) level. Intuitively, the goal is to do shaping in a way that the information-bearing constellation symbols generated at the transmitter, and what is inferred (reconstructed) at the receiver become as similar as possible, resulting in as few mismatches between the communication parties as possible. To fulfill this requirement, the transmitter exploits the "denoise-and-generate" characteristic of DDPMs, and "mimics" the way the receiver performs symbol reconstruction. (More details are provided in Section III.) We show that our proposed approach outperforms DNN-based scheme with trainable constellation layer and neural demapper [12]. Notably, we show a threefold improvement in terms of mutual information metric compared to DNN-based solution for \(64\)-QAM geometry. Our results also highlight that the proposed DDPM-based scheme is _resilient_ against low-SNR regimes. We also demonstrate a robust OOD performance under non-Gaussian noise, compared to other benchmarks. In what follows, we first introduce DDPM framework in Section II. System model and the proposed scheme are introduced in Section III. Furthermore, the neural architecture and the proposed algorithms for probabilistic constellation shaping are addressed in this section. Numerical results are studied in Section IV, and Section V concludes the paper.1 Footnote 1: Vectors and matrices are represented, respectively, by bold lower-case and upper-case symbols. \(|\cdot|\) and \(||\cdot||\) respectively denote the absolute value of a scalar variable and the \(\ell_{2}\) norm of a vector. Notation \(\mathcal{N}(\mathbf{x};\boldsymbol{\mu},\boldsymbol{\Sigma})\) stands for the multivariate normal distribution with mean vector \(\boldsymbol{\mu}\) and covariance matrix \(\boldsymbol{\Sigma}\) for a random vector \(\mathbf{x}\). Similarly, complex normal distribution with the corresponding mean vector and covariance matrix is denoted by \(\mathcal{CN}(\boldsymbol{\mu},\boldsymbol{\Sigma})\). Moreover, the expected value of a random variable (RV) is denoted by \(\mathbb{E}\left[\cdot\right]\) sets are denoted by caligraphic symbols. \(\mathbf{0}\) and \(\mathbf{I}\) respectively show all-zero vector and identity matrix of the corresponding size. Moreover, \([N]\), (with \(N\) as integer) denotes the set of all integer values from \(1\) to \(N\), and \(\text{Unif}[N]\) (for \(N>1\)) denotes discrete uniform distribution with samples between \(1\) to \(N\). Also, \(\delta(\cdot)\) denotes the Dirac function. ## II Preliminaries on DDPMs Diffusion models are a new class of state-of-the-art probabilistic generative models inspired by non-equilibrium thermodynamics [2]. Let \(\mathbf{x}_{0}\) be a data sample from some distribution \(q(\mathbf{x}_{0})\). For a finite number \(T\) of time-steps, the forward diffusion process \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\) is defined by adding Gaussian noise according to a "variance schedule" \(0<\beta_{1}<\beta_{2}<\cdots<\beta_{T}<1\) at each time-step \(t\in[T]\). This is, \[q(\mathbf{x}_{t}|\mathbf{x}_{t-1}) \sim\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\mathbf{x}_{t-1 },\beta_{t}\mathbf{I}), \tag{1}\] \[q(\mathbf{x}_{1:T}|\mathbf{x}_{0}) =\prod_{t=1}^{T}q(\mathbf{x}_{t}|\mathbf{x}_{t-1}). 
\tag{2}\] Invoking (2), the data sample gradually loses its distinguishable features as the time-step goes on, where with \(T\rightarrow\infty\), \(\mathbf{x}_{T}\) approaches an isotropic Gaussian distribution with covariance matrix \(\boldsymbol{\Sigma}=\sigma^{2}\mathbf{I}\) for some \(\sigma>0\) [2]. According to (1), each new sample at time-step \(t\) can be drawn from a conditional Gaussian distribution with mean vector \(\mu_{t}=\sqrt{1-\beta_{t}}\mathbf{x}_{t-1}\) and covariance matrix \(\boldsymbol{\Sigma}_{t}=\beta_{t}\mathbf{I}\). Hence, the forward process is realized by sampling a Gaussian noise \(\boldsymbol{\epsilon}_{t-1}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and setting \[\mathbf{x}_{t}=\sqrt{1-\beta_{t}}\mathbf{x}_{t-1}+\sqrt{\beta_{t}}\boldsymbol{\epsilon}_{t-1}. \tag{3}\] A useful property of the forward process in (3) is that we can sample \(\mathbf{x}_{t}\) at any arbitrary time step \(t\), via recursively applying the reparameterization trick from ML literature [16]. This results in the following formulation. \[\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\boldsymbol{\epsilon}_{0}, \tag{4}\] \[q(\mathbf{x}_{t}|\mathbf{x}_{0})\sim\mathcal{N}\left(\mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I}\right), \tag{5}\] where \(\bar{\alpha}_{t}\!=\!\prod_{i=1}^{t}\alpha_{i}\) and \(\alpha_{t}=1-\beta_{t}\). Now the problem is to reverse the process in (4) and sample from \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\), so that we regenerate the true samples from some Gaussian noise \(\mathbf{x}_{T}\). According to [2], for \(\beta_{t}\) small enough, \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t}),\forall t\in[T]\) also follows a Gaussian distribution. However, we cannot easily estimate the distribution, since it requires knowing the distribution of all possible data samples. Hence, to approximate the conditional probabilities and run the reverse diffusion process, we need to learn a probabilistic model \(p_{\boldsymbol{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\), parameterized by \(\boldsymbol{\theta}\). Accordingly, we can write \[p_{\boldsymbol{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\sim\mathcal{N}(\mathbf{x}_{t-1};\boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t),\boldsymbol{\Sigma}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)), \tag{6}\] \[p_{\boldsymbol{\theta}}(\mathbf{x}_{0:T})=p(\mathbf{x}_{T})\prod_{t=1}^{T}p_{\boldsymbol{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t}). \tag{7}\] Now the problem simplifies to learning the mean vector \(\boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\) and the covariance matrix \(\boldsymbol{\Sigma}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\) for the probabilistic model \(p_{\boldsymbol{\theta}}(\cdot)\), where a neural network (NN), with parameter \(\boldsymbol{\theta}\), can be trained to approximate (learn) the reverse process. We note that if we condition the reverse process on \(\mathbf{x}_{0}\), this conditional probability becomes tractable [2]. Hence, when we have \(\mathbf{x}_{0}\) as a reference, we can take a small step backwards to generate data samples, and the reverse step is formulated as \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\).
Mathematically, we have \[q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\sim\mathcal{N}(\mathbf{x}_{t-1};\ \tilde{\boldsymbol{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t),\tilde{\beta}_{t}\mathbf{I}), \tag{8}\] which is obtained by utilizing Bayes' rule, where \[\tilde{\boldsymbol{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t)=\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}\mathbf{x}_{t}+\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}\mathbf{x}_{0}, \tag{9}\] \[\tilde{\beta}_{t}=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_{t}. \tag{10}\] Invoking (10), one can infer that the covariance matrix in (8) has no learnable parameter. Hence, we simply need to learn the mean vector \(\tilde{\boldsymbol{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t)\). To further simplify (9), we note that thanks to the reparameterization trick and with a similar approach to (4), we can express \(\mathbf{x}_{0}\) as follows. \[\mathbf{x}_{0}=\frac{1}{\sqrt{\bar{\alpha}_{t}}}(\mathbf{x}_{t}-\sqrt{1-\bar{\alpha}_{t}}\boldsymbol{\epsilon}_{t}). \tag{11}\] Substituting \(\mathbf{x}_{0}\) in (9) by (11) results in \[\tilde{\boldsymbol{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t)=\frac{1}{\sqrt{\alpha_{t}}}\Big{(}\mathbf{x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\boldsymbol{\epsilon}_{t}\Big{)}. \tag{12}\] Now we can learn the conditional probability distribution \(p_{\boldsymbol{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) by training a NN that approximates \(\tilde{\boldsymbol{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t)\). Therefore, we simply need to set the approximated mean vector \(\boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\) to have the same form as the target mean vector \(\tilde{\boldsymbol{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t)\). Since \(\mathbf{x}_{t}\) is available at time-step \(t\), we can reparameterize the NN to make it approximate \(\boldsymbol{\epsilon}_{t}\) from the input \(\mathbf{x}_{t}\). Compiling these facts results in \[\boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)=\frac{1}{\sqrt{\alpha_{t}}}\Big{(}\mathbf{x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\Big{)}, \tag{13}\] where \(\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\) denotes our NN. We can now define the loss function \(\mathcal{L}_{t}\) for time-step \(t\in[T]\), aiming to minimize the difference between \(\boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\) and \(\tilde{\boldsymbol{\mu}}(\mathbf{x}_{t},\mathbf{x}_{0},t)\). \[\mathcal{L}_{t}=\mathbb{E}_{\begin{subarray}{c}t\sim\mathsf{Unif}[T]\\ \mathbf{x}_{0}\sim q(\mathbf{x}_{0})\\ \boldsymbol{\epsilon}_{0}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\end{subarray}}\ \Big{[}\|\boldsymbol{\epsilon}_{t}-\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\|^{2}\Big{]}\] \[=\mathbb{E}_{\begin{subarray}{c}t\sim\mathsf{Unif}[T]\\ \mathbf{x}_{0}\sim q(\mathbf{x}_{0})\\ \boldsymbol{\epsilon}_{0}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\end{subarray}}\ \Big{[}\|\boldsymbol{\epsilon}_{t}-\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\boldsymbol{\epsilon}_{t},t)\|^{2}\Big{]}. \tag{14}\] Invoking (14), at each time-step \(t\), the DDPM model takes \(\mathbf{x}_{t}\) as input and returns the distortion components \(\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\mathbf{x}_{t},t)\).
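To make the training objective concrete, the following PyTorch-style sketch implements the closed-form forward noising of (4) and one stochastic estimate of the simplified loss (14). The variance-schedule values, the number of steps, and the `eps_model` callable are illustrative assumptions rather than the implementation used in the paper.

```python
import torch

# Illustrative variance schedule; the paper does not fix these values here.
T = 100                                     # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)       # 0 < beta_1 < ... < beta_T < 1 (assumed)
alphas = 1.0 - betas                        # alpha_t = 1 - beta_t
alpha_bars = torch.cumprod(alphas, dim=0)   # \bar{alpha}_t = prod_i alpha_i

def q_sample(x0, t, eps):
    """Closed-form forward noising of (4): x_t = sqrt(a_bar_t) x_0 + sqrt(1 - a_bar_t) eps."""
    a_bar = alpha_bars[t].unsqueeze(-1)     # broadcast over the feature dimension
    return torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * eps

def ddpm_loss(eps_model, x0):
    """One stochastic estimate of the simplified loss (14)."""
    t = torch.randint(0, T, (x0.shape[0],))  # t ~ Unif[T]
    eps = torch.randn_like(x0)               # eps ~ N(0, I)
    x_t = q_sample(x0, t, eps)
    eps_hat = eps_model(x_t, t)              # eps_theta(x_t, t), a user-supplied NN
    return ((eps - eps_hat) ** 2).sum(dim=-1).mean()
```

In this picture, training amounts to repeatedly drawing data samples, noising them to a random step, and regressing the injected noise, as in Algorithm 1 below.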
Also, \(\boldsymbol{\epsilon}_{t}\) denotes the diffused noise term at time step \(t\). ## III System Model and Proposed Scheme Fig. 1 demonstrates the communication system under consideration. The system takes the information bitstream and maps it onto hypersymbols \(s\in\mathcal{S}\), \(\mathcal{S}=\{1,\dots,M\}\), according to the learnable distribution \(p_{\boldsymbol{\theta}}(s)\) (parameterized by \(\boldsymbol{\theta}\)), where \(M\) denotes the modulation order. In this paper, \(\boldsymbol{\theta}\) is realized by a DDPM, which is trained and shared with the transmitter and receiver. The sequence of hypersymbols is then fed into a symbol modulator which maps each symbol \(s\) into a constellation point \(x\in\mathcal{X}_{c}\), with \(\mathcal{X}_{c}\) showing the set of constellation points. Each symbol is generated according to the distribution \(p_{\boldsymbol{\theta}}(s)\). In other words, the frequency of sending a bitstream over the constellation point \(x=g(s)\) corresponds to the parametric distribution \(p_{\boldsymbol{\theta}}(s)\), where \(g\) denotes the modulation functionality. Accordingly, we have \[p_{\boldsymbol{\theta}}(x)=\sum_{s\in\mathcal{S}}\delta\left(x-g(s)\right)p_{\boldsymbol{\theta}}(s),\quad\forall x\in\mathcal{X}_{c}. \tag{15}\] As motivated in Section I, our focus in this work is on probabilistic shaping, and hence, constellation geometry is determined in advance according to network configurations. Thus, the design of modulator function \(g(\cdot)\) is not of interest in this work, and we adhere to standard constellation schemes, such as QAM, in order to propose a system which is compliant with the real-world communication systems. In addition, similar to [12], we also assume that the bits-to-symbols mapper is known. Hence, we have a one-to-one mapping between the constellation point \(x\) and the information symbol \(s\), and the transmitter's output is directly sampled from \(p_{\boldsymbol{\theta}}(\cdot)\). Information-bearing signal \(x\) is then sent over the communication channel, and the channel output \(y\) is observed at the receiver. Then the receiver needs to reconstruct the transmitted symbols by approximating the posterior distribution \(p(s|y)\) given the channel output. To do so, the receiver leverages the trained DDPM and maps each received sample \(y\) to a probability distribution over the set of symbols \(\mathcal{S}\). Having this approximation, symbols can be obtained at the receiver's de-modulator, and the information bits can be reconstructed using the prevalent symbol-to-bit mappers. ### _Proposed Approach_ #### Setting the Stage Intuitively speaking, the goal is to probabilistically shape the constellation symbols by finding a proper \(p_{\boldsymbol{\theta}}(\cdot)\), such that the information-bearing symbols sent by the transmitter, and what is inferred at the receiver become as similar as possible,2 resulting in as few mismatches between the communication parties as possible. This fact, together with the characteristic of diffusion models to "denoise-and-generate", motivates us to propose the following DDPM-based approach for probabilistic constellation shaping. The key idea to fulfill the desired similarity is that the transmitter "mimics" the way the receiver would perform the reconstruction of symbols. Hence, the transmitter probabilistically generates the constellations in a way that would be similar to the process of denoising and reconstruction (regeneration) at the receiver. Fig. 1: System model overview.
This also helps facilitate having "mutual understanding" of how to map and de-map the information symbols over time, realizing kind of _native intelligence_ among communication parties. Motivated by these facts, our step-by-step solution can be elaborated on as follows. #### Iii-B1 DDPM Training A DDPM is trained based on the loss function given in (14). This corresponds to training the parameter \(\mathbf{\theta}\) for our probabilistic shaping scheme in (15). The goal is to train a diffusion process to generate constellation symbols (with the pre-determined geometry) out of noise. The process is summarized in Algorithm 1, which is inspired by the seminal paper of DDPM by Ho _et. al_, 2020 [2]. Training can be carried out in a central cloud, or an edge server and then downloaded by the communication entities. The trained model is deployed at the transmitter and the receiver. #### Iii-B2 Link quality estimation using channel SNR Within each TS, the transmitter first estimates the quality of communication link. This can be carried out using the pilot signals sent by the destination node, at the beginning of each TS, and the SNR level of communication channel can be calculated [18]. #### Iii-B3 Probabilistic shaping The trained DDPM is run at the transmitter to probabilistically shape (generate) the constellation symbols according to the channel SNR. To do so, the transmitter first takes \(N_{s}\) samples from the set of constellation symbols \(\mathcal{X}_{c}\) uniformly at random.3 The goal is not to uniformly map information symbols to constellation points. Rather, we aim to generate the constellation symbols in a way that the information-bearing constellation symbols sent by the transmitter, and what is inferred (reconstructed) at the receiver become as similar as possible. For instance, when the communication channel is experiencing high levels of noise, i.e., in low-SNR regime, we intuitively expect that most often, the receiver would be able to decode the symbols corresponding to the points that are relatively far from each other in the constellation geometry, while the other points are prone to being decoded incorrectly. Thus, the transmitter wishes to probabilistically reshape the constellation symbols in a way that would be straightforward to denoise and reconstruct (regenerate) at the receiver. To do so, the transmitter samples \(N_{s}\) random noise with average power \(\delta^{2}\), and injects them to the uniformly-sampled symbols. The power of synthetic noise, \(\delta^{2}\), is calculated according to the channel SNR, \(\Gamma\), which is obtained at Step 2, and can be formulated as \(\delta^{2}=P10^{\Gamma/10}\), where \(P\) denotes the average transmit power. The noisy version of samples is then fed into the trained DDPM, and the reverse diffusion process is run to denoise and generate symbols out of the synthetically-noisy samples. In other words, the transmitter tries to mimic the way the receiver performs the reconstruction (regeneration) of symbols, when it receives noisy symbols. The distribution of the generated samples at the output of the DDPM block is considered as the output probabilistic constellations onto which the information symbols are mapped to be sent. Footnote 3: The sample size \(N_{s}\) can be regarded as the number of observations to form (generate) the empirical distribution of our probabilistic shaping. 
``` Hyper-parameters: Number of time-steps \(T\), neural architecture \(\mathbf{\epsilon_{\theta}}(\cdot,t)\), variance schedule \(\beta_{t}\), and \(\bar{\alpha}_{t},\forall t\in[T]\). Input: Training samples from the constellation geometry \(\mathcal{X}_{c}\). Output: Trained neural model for DDPM. 1:while the stopping criteria are not met do 2: Randomly sample \(\mathbf{x}_{0}\) from \(\mathcal{X}_{c}\) 3: Randomly sample \(t\) from \(\text{Unif}[T]\) 4: Randomly sample \(\mathbf{\epsilon}\) from \(\mathcal{N}(\mathbf{0},\mathbf{I})\) 5: Take gradient descent step on 6:\(\nabla_{\mathbf{\theta}}\left\|\mathbf{\epsilon}-\mathbf{\epsilon_{\theta}}(\sqrt{\bar{ \alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon},t)\right\|^ {2}\) 7:endwhile ``` **Algorithm 1** Training algorithm of DDPM The overall algorithm is proposed in Algorithm 2, where the main loop corresponds to the reverse diffusion process from time-step \(T\) to \(1\), according to 13. Also, \(\texttt{proj}_{\mathcal{S}}(\mathbf{x})\) stands for the projection operator, which maps the elements of vector \(\mathbf{x}\) onto the nearest elements in the set \(\mathcal{S}\). Moreover, \(\texttt{count}(\mathbf{x},\mathcal{S})\) outputs a vector with size \(|\mathcal{S}|\), with elements representing the number of occurrences of the elements of set \(\mathcal{S}\) in vector \(\mathbf{x}\). Notably, \(\mathbf{\psi}\) in Algorithm 2 denotes the probabilistically-shaped constellation points at the output of the transmitter's DDPM block, and \(p_{\mathbf{\theta}}\) stands for the corresponding distribution inferred by the diffusion model. #### Iii-B4 Symbol reconstruction at the receiver After generating constellation symbols, information signals are sent according to the probabilistic constellations. The symbols are received by the receiver. Then the receiver runs the diffusion model to reconstruct (regenerate) the symbols from the received noisy signals. The corresponding algorithm for this step is proposed in Algorithm 3. Starting from the received batch of noisy symbols, denoted by \(\mathbf{y}_{r}\), for each time step \(t\in\{T,T-1,\dots,1\}\), the NN outputs \(\mathbf{\epsilon_{\theta}}(\hat{\mathbf{x}}_{t},t)\) to approximate the residual noise within the batch of symbols, and the sampling algorithm is run according to Line \(4\) of the algorithm, in order to sample \(\hat{\mathbf{x}}_{t-1}\). The process is executed for \(T\) steps.4 ## IV Evaluations In this section, we carry out numerical evaluations, in order to highlight the performance of the proposed scheme compared to other benchmarks. Specifically, we show that our DDPM-based approach achieves a threefold improvement in terms of mutual information compared to DNN-based solution for \(64\)-QAM geometry. We also show that the proposed DDPM can provide _native resilience_ for the communication system under low-SNR regimes and non-Gaussian noise. We employ a NN comprised of \(3\) hidden linear layers each of which has \(128\) neurons with softplus activation functions. The output layer is a simple linear layer with the same shape as input. Inspired by the Transformer paper [17], we share the parameters of the NN across time-steps via multiplying the embeddings of time-step and incorporating them into the model. For training the diffusion model, we use adaptive moment estimation (Adam) optimizer with learning rate \(\lambda=10^{-3}\). We consider QAM geometry as a widely-adopted constellation format in wireless networks [4, 6, 12]. 
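For concreteness, the sketch below gives a PyTorch-style rendering of the noise-prediction network described above (three hidden linear layers of 128 units with softplus and a multiplicative time-step embedding), of the reverse-diffusion loop used by Algorithms 2 and 3 based on (13), and of the proj/count step that yields the empirical shaping distribution. The class and helper names, the exact embedding mechanism, and the tensor shapes are assumptions made for illustration and are not the authors' released code.

```python
import torch
import torch.nn as nn

class EpsNet(nn.Module):
    """Sketch of eps_theta(x_t, t): three hidden linear layers of 128 units with softplus,
    with parameters shared across time-steps via a multiplicative time embedding (assumed)."""
    def __init__(self, dim=2, hidden=128, T=100):
        super().__init__()
        self.t_embed = nn.Embedding(T, hidden)
        self.inp = nn.Linear(dim, hidden)
        self.h1 = nn.Linear(hidden, hidden)
        self.h2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, dim)
        self.act = nn.Softplus()

    def forward(self, x, t):
        h = self.act(self.inp(x)) * self.t_embed(t)  # inject the time-step embedding
        h = self.act(self.h1(h))
        h = self.act(self.h2(h))
        return self.out(h)

@torch.no_grad()
def reverse_diffusion(eps_model, x_start, betas, alphas, alpha_bars):
    """Reverse process used by Algorithms 2 and 3: denoise for T steps via the mean in (13)."""
    x = x_start
    for t in range(len(betas) - 1, -1, -1):
        t_idx = torch.full((x.shape[0],), t, dtype=torch.long)
        eps_hat = eps_model(x, t_idx)
        mean = (x - (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t]) * eps_hat) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

def shaping_distribution(samples, constellation):
    """proj/count steps of Algorithm 2: snap generated samples to the nearest constellation
    point and return the empirical shaping distribution over the symbol set."""
    idx = torch.cdist(samples, constellation).argmin(dim=1)          # proj onto nearest symbol
    counts = torch.bincount(idx, minlength=constellation.shape[0])   # count occurrences
    return counts.float() / counts.sum()
```

At the transmitter, `reverse_diffusion` would be run on the synthetically-noised uniform samples and `shaping_distribution` would provide \(p_{\boldsymbol{\theta}}\); at the receiver, the same loop is run on the received noisy symbols.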
Moreover, we set \(T=100\), and the stopping criterion in Algorithm 1 is met when reaching the maximum number of epochs [9, 10, 11], which is set to \(1000\) epochs. Fig. 2 demonstrates data visualization for the sampling phase, corresponding to Algorithms 2 and 3. The first row corresponds to the constellation generation steps that are performed at the transmitter (Algorithm 2), and the second row corresponds to the reconstruction at the receiver (Algorithm 3). This is repeated for different SNRs to visualize the constellation shaping performance of our DDPM under different levels of noise. Comparing the output of our probabilistic constellation generation algorithm (the first row) and the reconstructed symbols at the receiver (the second row), we can observe that the idea of synthetically mimicking the functionality of the receiver for shaping the constellation symbols (addressed in Section III-A) has helped the transmitter generate symbols that are quite similar to the ones that are actually reconstructed by the receiver. This can improve the communication performance by decreasing the mismatch between the way the transmitter decides to convey the information, and the way the receiver decodes the symbols. This "similarity" is quantitatively measured in terms of mutual information in the next figure. According to the figure, when the communication system is experiencing low-SNR regimes, the probabilistic model demonstrates a non-uniform distribution over constellation points, with higher probabilities assigned to the points that are at the furthest distance from each other in the constellation geometry. This is aligned with what we intuitively expect from a communication system under low-SNR regimes, namely to frequently map information bits to constellation symbols that are far apart from each other. Increasing the SNR, the probabilistic shaping tends to a uniform distribution, which is also aligned with one's intuition about communication systems. Fig. 3 demonstrates the mutual information between the probabilistically-generated symbols at the output of the transmitter (i.e., the channel input), and the reconstructed ones at the receiver. The mutual information metric can be interpreted as the quantitative measure to study the "mutual similarity" among communication parties as discussed in Section III-A. For this experiment, we consider both cases of additive white Gaussian noise (AWGN) channel and also non-Gaussian noise to show the OOD performance of our scheme. For the benchmark, we consider a DNN model with trainable constellation layer and neural demapper as proposed in [12]. The DNN benchmark has three linear layers with \(64\) neurons at hidden layers and rectified linear unit (ReLU) activation functions, and we considered \(5000\) training iterations with Adam optimizer. Fig. 2: Generation process at the transmitter, and the reconstruction at the receiver for SNR values set to \(-25\), \(-10\), and \(10\) dB, respectively. Fig. 3 clearly highlights the performance of our DDPM-based model in low-SNR regimes. Notably, although the DNN benchmark does not show any noticeable performance in SNR ranges below \(-5\) dB, even for the more straightforward scenario of \(16\)-QAM (which is supposed to be less prone to errors and mismatches than the 64-QAM case), our scheme achieves mutual information of around \(1.25\) bits for 64-QAM geometry, and \(1\) bit for 16-QAM geometry, respectively. Moreover, our scheme shows a threefold improvement compared to the DNN-based benchmark for 64-QAM geometry and \(0\) dB SNR.
These results clearly highlight that our main goal in realizing the "mutual understanding" among communication parties has been successfully achieved, and thanks to this understanding, the system is resilient under low-SNRs. To show the robustness of our scheme in OOD performance, we study the scenario of communication channels with non-Gaussian noise. We consider additive Laplacian noise with the same variance as that of AWGN scenario as our benchmark [6]. Remarkably, although we do not re-train our diffusion model with Laplacian noise, the performance of our DDPM-based approach does not change (even becomes better) under this non-Gaussian assumption (which is not seen during training), and the resultant mutual information curves follow the case of in-distribution scenario. However, the DNN benchmark experiences performance degradation under non-Gaussian assumption, although we also re-trained it with Laplacian noise. In Fig. 3, we also study another benchmark, which is uniform shaping. We examine this benchmark by disabling the DDPM block at the transmitter. We can see from the figure that still, the employed DDPM at receiver can outperform conventional DNN benchmark. Notably, the figure implies that our probabilistic shaping model with DDPMs employed at both the transmitter and the receiver outperforms the naive scheme of uniform shaping. This gap also increases when considering higher modulation orders, as it becomes more important to realize somewhat mutual understanding and smartly shaping the higher order constellation symbols. ## V Conclusions In this paper, we studied the application of DDPMs in probabilistic constellation shaping for wireless communications. We exploited the "denoise-and-generate" characteristic of DDPMs. The transmitter runs the model to probabilistically shape (generate) the constellation symbols and the receiver regenerates (reconstructs) the symbols from the received noisy signals. The key idea was that the transmitter mimics the way the receiver would do to reconstruct (regenerate) the symbols out of noisy signals, realizing "mutual understanding" to reduce the mismatch among communication parties. Our results highlighted the performance of our scheme compared to DNN-based demapper, while providing _network resilience_ under low-SNR regimes and non-Gaussian noise.
2309.17227
MORPH: Design Co-optimization with Reinforcement Learning via a Differentiable Hardware Model Proxy
We introduce MORPH, a method for co-optimization of hardware design parameters and control policies in simulation using reinforcement learning. Like most co-optimization methods, MORPH relies on a model of the hardware being optimized, usually simulated based on the laws of physics. However, such a model is often difficult to integrate into an effective optimization routine. To address this, we introduce a proxy hardware model, which is always differentiable and enables efficient co-optimization alongside a long-horizon control policy using RL. MORPH is designed to ensure that the optimized hardware proxy remains as close as possible to its realistic counterpart, while still enabling task completion. We demonstrate our approach on simulated 2D reaching and 3D multi-fingered manipulation tasks.
Zhanpeng He, Matei Ciocarlie
2023-09-29T13:25:45Z
http://arxiv.org/abs/2309.17227v1
# MORPH: Design Co-optimization with Reinforcement Learning ###### Abstract We introduce MORPH, a method for co-optimization of hardware design parameters and control policies in simulation using reinforcement learning. Like most co-optimization methods, MORPH relies on a model of the hardware being optimized, usually simulated based on the laws of physics. However, such a model is often difficult to integrate into an effective optimization routine. To address this, we introduce a proxy hardware model, which is always differentiable and enables efficient co-optimization alongside a long-horizon control policy using RL. MORPH is designed to ensure that the optimized hardware proxy remains as close as possible to its realistic counterpart, while still enabling task completion. We demonstrate our approach on simulated 2D reaching and 3D multi-fingered manipulation tasks. ## I Introduction Design optimization and automation techniques generally aim to alleviate some of the time-consuming process of hardware design, usually by iterating over a large parameter space in simulation in order to find optimal (or good enough) design parameters before the hardware is ever constructed. Within this broad category, co-design or co-optimization methods use simulation in order to simultaneously optimize both hardware parameters and aspects of the software that will run on the hardware (e.g. a controller or a policy) to ensure suitability for a specific task. Creating a simulated model for the hardware being optimized is a crucial component of design automation. In the case of co-design, the importance of this step only grows, since the simulated hardware model must not only be faithful to its real counterpart, but also lend itself to optimization techniques capable of simultaneously handling the controller or policy component of the co-optimization. Given the recent success of reinforcement learning (RL) methods in optimizing effective control policies for complex behaviors, it is only natural for the field to apply RL techniques to the co-design problem as well. The core idea of this approach is to compute policy gradients for both the design parameters and control policy parameters. For example, one way to achieve this is by treating a design, or a change of the design, as actions, and co-learn design actions and control actions [1]. However, this results in extending the action space in ways that can increase the difficulty of exploration. Another approach is to integrate the design parameters and their effect with the control policy via differentiable physics [2]. However, this integration relies on the existence of a differentiable modeling of a task, which may not be available. In this paper, we propose to address these challenges by considering the cumulative effect of hardware design parameters on the behavior of the robot itself, rather than approaching them just as values to be optimized. With this in mind, we differentiate between two components: * A physics-based hardware model, dubbed Hw-Phy. This is a traditional model, designed to mimic the behavior of real hardware as accurately as possible, and typically implemented by simulating some aspects of the laws of physics. Its parameters include the design parameters that are the goal of the optimization. Depending on the underlying method used, Hw-Phy may or may not be differentiable. * A neural network-based hardware model proxy, dubbed Hw-NN. The job of Hw-NN is to help with the co-optimization problem. 
Specifically, our method ensures that, during the optimization, Hw-NN remains as close as possible to Hw-Phy. However, owing to its implementation as a neural network, Hw-NN is always differentiable, which allows its integration into an efficient co-optimization routine, one that also optimizes a software control policy. In the proposed framework, we use RL to co-optimize a control policy alongside the Hw-NN model, under the constraints that Hw-NN needs to mimic a real robot's behaviors as encapsulated by Hw-Phy. The advantage is that we no longer require a differentiable physics simulation in Fig. 1: MORPH co-learns hardware design parameters and control policies, exemplified here on a 2D reaching task (top row) and a 3D manipulation task (bottom row). (A) shows each task. (B) shows the initial design of the robots. (C) visualizes the optimized designs resulting from MORPH. (D) shows the robot executing the control policy that has been co-optimized alongside the design parameters. our co-optimization pipeline, but our policy still receives information about how the design parameters affect robot behavior during training. The result is an iterative training procedure: 1. Using Hw-Phy as a constraint, we optimize the control policy along with Hw-NN to maximize the task performance; 2. With the improved policy, we search for design parameters that match Hw-Phy with the current version of Hw-NN. Conceptually, the first training phase aims to find a combination of hardware and control policy that can complete the tasks. The second training phase aims to ensure that the optimized hardware is still realistic, given real-world physical constraints. We dub our method **M**odel **O**ptimization via **R**einforcement and a **P**roxy for **H**ardware, or **MORPH**. By separating task learning and design derivation into two phases, MORPH enables the use of a non-differentiable Hw-Phy model, and can combine the ability of RL to reason about long-horizon behaviors with the use of gradient-free algorithms for parameter search. We summarize our overall contributions as follows: * We propose a novel method that co-optimizes both the design and policy of a robot directly in parameter space with RL without the assumption of the differentiability of the hardware model. * We propose a technique that mitigates the optimization difficulty of improving the robot's task performance while imposing realistic constraints on the hardware model. ## II Related Work Considerable research has investigated co-optimizing the design and control of a robot [3, 4, 5, 6, 7, 8, 9]. One approach to this is using gradient-free optimization methods and treating the evaluation of a design as a black box. For instance, Nygaard et al. [10] apply evolutionary algorithms on a quadrupedal robot to optimize its leg lengths and control parameters in the real world. Deimal et al. [11] apply particle filter optimization method to optimize the shape of a soft robotic hand and grasping poses in simulation. Liao et al. [12] propose a Bayesian optimization method to tune the design of a microrobot efficiently to a walking task. Xu et al. apply graph heuristic search [6] with RoboGrammar [13], which is a set of graph grammar for robot design, for terrestrial robot design. However, gradient-free optimization methods suffer from long-horizon reasoning and can not be used for evaluating a robot with complex tasks. Our work aims to optimize the control and hardware of robots for long-horizon tasks. Therefore, we use RL for joint optimization. 
Recently, RL has been considered for design and control co-optimization [14, 15, 16]. Chen et al. [2] propose to model the robot as a computational graph differentiably and co-optimize its parameters and a control policy using RL. Wang et al. suggest Neural Graph Evolution (NGE) [17], which models the structure of an agent as a graph neural network (GNN), optimizes the morphology of an agent by changing its graph structure, and learns to control by adapting the parameters of the GNN. Luck et al. [18] propose to learn a latent space that represents the design space and train a design-conditioned policy. Transform2Act [1] considers a transform stage when actions can modify a robot's kinematic structure and morphology, then a control stage when the design is frozen and the policy only computes control actions. MORPH directly optimizes design in the parameter space of a policy without assuming the differentiability of the robot by separating task optimization and design parameter search into two steps. In this work, we observe the existence of optimization difficulty caused by gradient interference introduced from the mismatch between RL improvement and learning realistic hardware. This is similar to the observed optimization difficulty in multi-task learning literature [19, 20] if we treat task improvement and being realistic as two tasks. Previous works have explored mitigating the conflicts for multi-task learning. For example, Senor and Koltun [21] Fig. 2: **Approach overview**. MORPH is an iterative training framework for design-control co-optimization: (A) We first co-optimize both a control policy and a neural-network proxy of the hardware model (Hw-NN) with an RL loss and a constraint loss computed using the more realistic, physics-based hardware model (Hw-Phy); (B) Using the updated Hw-NN, we construct a dataset \(\mathcal{D}\) of tuples of states, actions, and task actions; (C) Using \(\mathcal{D}\), we search for the design parameters that match Hw-Phy to the Hw-NN proxy. scale the gradients introduced by different tasks to reduce the scale difference among gradients. GradNorm [22] uses gradient normalization to facilitate multi-task learning. Our work is inspired by PCGrad [23], which uses cosine similarity between gradients to measure the conflicts and project a gradient to the normal plane of another when conflicting. In this work, we treat the task learning and hardware constraints as a dual-task learning problem and project the task learning gradients to the normal of hardware constraint gradients. ## III Method ### _Preliminaries_ We formulate our co-optimization problem as a Markov Decision Process (MDP). An MDP can be represented by a tuple \((\mathcal{S},\mathcal{A},\mathcal{F},\mathcal{R})\), where \(\mathcal{S}\) is state space, \(\mathcal{A}\) is the action space, \(\mathcal{R}(\mathbf{s},\mathbf{a})\) is the reward function, and \(\mathcal{F}_{\phi}(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{a})\) is the state transition model, where \(\mathbf{s},\mathbf{s}^{\prime}\in\mathcal{S}\), and \(\mathbf{a}\in\mathcal{A}\). The transition function is parameterized by some design parameters \(\phi\), which determine the behaviors of the robot. The goal of solving this MDP is finding both a control policy \(\pi_{\mathbf{\theta}}(\mathbf{a}|\mathbf{s})\) and design parameters \(\phi\) that optimize the expected returns: \(\mathbb{E}[\sum_{t=0}^{T}\mathcal{R}(\mathbf{s}_{t},\mathbf{a}_{t})]\) where \(T\) is the length of an episode. 
The key idea of MORPH is modeling the cumulative effect of the hardware design parameters \(\phi\) on the real robot and task performance. Consider the case where \(\phi\) comprises the kinematic parameters of the robot (e.g. link lengths, mounting locations, geometry, etc.). Hardware essentially converts policy actions (joint angles) into task-related actions (end-effector movements). In this example, a physics-based hardware model simply consists of the forward kinematics function. In general, we refer to such a physics-based hardware model as Hw-Phy. We model its effects using a function \(h()\) that, given a state, converts from policy actions \(a\) to task-related actions \(z\): \(h=h_{\phi}(\mathbf{z}|\mathbf{s},\mathbf{a})\). However, instead of directly integrating Hw-Phy and its parameters into our optimization routine, we use a neural network-based proxy that we refer to as Hw-NN. As a proxy for Hw-Phy, HW-NN takes similar inputs and produces similar output, thus we model its behavior as the function \(h^{nn}=h^{nn}_{\psi}(\mathbf{z}|\mathbf{s},\mathbf{a})\). The key difference is that Hw-NN is always differentiable (as it is modeled as a neural network) and provides additional flexibility compared to Hw-Phy. Both \(h_{\phi}()\) and \(h^{nn}_{\psi}()\) are parameterized, but the nature of these parameters is vastly different. The parameters \(\phi\) of \(h()\) have physical meaning and correspond directly to the design parameters we wish to determine. In contrast, the parameters \(\psi\) of \(h^{nn}()\) are just the weights and biases of a neural network, with no physical correspondent. By using a Hw-NN, we now can co-optimize the parameters of \(h^{nn}\) with the control policy parameters. The goal of the optimization is to improve task performance, which is evaluated by expected returns, and approximate \(h\) using \(h^{nn}\). However, only optimizing \(h^{nn}\) and a policy \(\pi\) does not satisfy our goal of extracting a design that we can build in the real world since the parameters of \(h^{nn}\) are not interpretable by humans. Hence, we propose to search explicit parameters \(\phi\) that mimic the Hw-NN with good task performance. Therefore, the resulting training pipeline is an iterative process (see Fig. 2): We first co-optimize both the control policy and the Hw-NN to task performance, under the constraints that the Hw-NN remains close to the current version of Hw-Phy; Then, we search for the hardware design parameters that allow Hw-Phy to match the current version of the Hw-NN. ### _Hardware as constraints_ The first step of our framework is to co-optimize both the control policy and the Hw-NN to improve task performance under hardware constraints. By using a Hw-NN \(h^{nn}\), we now can extend our policy to consider the effect of the design parameters: \(\pi^{comb}=\pi_{\theta}(\mathbf{a}|\mathbf{s})h^{nn}_{\psi}(\mathbf{z}|\mathbf{a},\mathbf{s})\). The optimization goal of \(\pi^{comb}\) is a constrained optimization problem: \[\max_{\theta,\psi}\mathbb{E}_{\pi,h^{nn}}[\sum_{t=0}^{T}\mathcal{ R}(\mathbf{s}_{t},\mathbf{z}_{t})]\] subject to \[D[h(\mathbf{z}|\mathbf{s},\mathbf{a}),h^{nn}(\mathbf{z}|\mathbf{s},\mathbf{a})]\leq\epsilon\] Here, \(D\) is a divergence function, which measures the divergence between the Hw-NN \(h^{nn}\) and the Hw-Phy \(h\). \(\epsilon\) is some chosen small constants. 
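To make the role of the hardware proxy concrete, the following PyTorch-style sketch shows a minimal Hw-NN module \(h^{nn}_{\psi}(\mathbf{z}|\mathbf{s},\mathbf{a})\) and a sampling-based estimate of the divergence \(D\) between Hw-Phy and Hw-NN. The architecture, the squared \(\ell_{2}\) distance, and all names are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class HwNN(nn.Module):
    """Differentiable hardware proxy h^nn_psi(z | s, a): maps a state-action pair to the
    task-related action z. Layer sizes are illustrative and not taken from the paper."""
    def __init__(self, state_dim, action_dim, z_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, z_dim),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def sampled_divergence(hw_phy, hw_nn, states, actions):
    """Sampling-based estimate of D[h, h^nn]: feed the same (s, a) pairs to both models and
    average a distance between their outputs. hw_phy may be a non-differentiable black box
    (e.g. forward kinematics or a simulator step); the squared L2 distance is an assumption."""
    with torch.no_grad():
        z_phy = hw_phy(states, actions)   # physics-based hardware model Hw-Phy
    z_nn = hw_nn(states, actions)         # differentiable proxy Hw-NN
    return ((z_phy - z_nn) ** 2).sum(dim=-1).mean()
```

In the first training phase, the value returned by `sampled_divergence` would enter the objective as the constraint (or, scaled by \(\alpha\), as the regularizer discussed next).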
In practice, instead of directly performing constrained policy optimization, our work optimizes both \(\pi^{comb}\) via an unconstrained objective that uses the divergence between Hw-NN and Hw-Phy as a regularization term: \[\max_{\theta,\psi}\mathbb{E}_{\pi,h^{nn}}[\sum_{t=0}^{T}\mathcal{ R}(\mathbf{s}_{t},\mathbf{z}_{t})-\alpha D[h(\mathbf{z}|\mathbf{s},\mathbf{a}),h^{nn}(\mathbf{z}|\mathbf{s}, \mathbf{a})]] \tag{1}\] where \(\alpha\) is a constant. Since the divergence between \(h\) and \(h^{nn}\) is intractable, we measure it via a sampling-based method. Essentially, we collect state-action pairs \((s,a)\) and feed them to both models. Then, we use the distances between outputs from both models as an estimate of the divergence between the two models. Note that in this process, only the parameters of \(\pi\) and \(h^{nn}\) are optimized. We use actor-critic RL in this step. ### _Deriving design parameters_ Our ultimate goal is to derive design parameters from our optimization process to build a robot. However, in the policy learning step, \(h^{nn}\) is a neural network whose parameters are uninterpretable by humans. Although this learning step indeed produces some policies that can achieve high task performance, its product cannot be used to build a real robot. Hence, the second step of our co-optimization framework is to search for design parameters that match the performance of the updated Hw-NN: \[\min_{\phi}D[h_{\phi}(\mathbf{z}|\mathbf{s},\mathbf{a}),h^{nn}_{\psi}(\mathbf{z}| \mathbf{s},\mathbf{a})] \tag{2}\] Similar to the policy optimization step, we use a sampling-based method to estimate the divergence between \(h\) and \(h^{nn}\) Here, we do not have any restriction on the differentiability of the Hw-Phy \(h\). If \(h\) is differentiable, we can apply gradient-based optimization methods, e.g. stochastic gradient descent, for design parameter searching. If \(h\) is non-differentiable, we can use a non-differentiable optimization method, e.g. evolutionary algorithms. Without the need to reason long-horizon behaviors, evolutionary algorithms can search in the design space to find parameters that match the optimized Hw-NN well. In this work, we use Covariance Matrix Adaptation - Evolution Strategy (CMA-ES) [24] for design parameter derivations. Overall, MORPH is an iterative process that first improves both control and Hw-NN to task performance. Then, based on the adapted version of the Hw-NN, MORPH extracts design parameters that mimic the behaviors of Hw-Phy. ``` 1:while returns have not converged do 2: Sample \(H\) trajectories using the current control policy \(\pi_{\theta}\) and Hw-NN \(h_{\psi}^{nn}\) with an MDP 3: Optimize parameters of the control policy \(\pi_{\theta}\) and Hw-NN \(h_{\psi}^{nn}\) with objective E.q. 1 4:for every K steps do 5: With updated Hw-NN \(g\), compute updated outputs \(z\) using the current policy and construct a dataset \(\mathcal{D}=[(s_{0},a_{0},z_{0}),(s_{1},a_{1},z_{1}),...,(s_{m},a_{m},z_{m})]\) 6: Optimize Hw-Phy \(h\) using data \(\mathcal{D}\) with E.q. 2 7:endfor 8:endwhile ``` **Algorithm 1** MORPH ### _Objective mismatch between task learning and hardware constraints_ While MORPH by itself provides a method to co-optimize the design and control, its final objectives can be seen as a multi-task learning objective: the first part improves its task performance and the second part constrains the Hw-NN to be close to Hw-Phy. These two objectives do not necessarily agree on how to adapt the parameters. 
In practice, the mismatch between these two objectives can result in detrimental gradient interference that makes optimization challenging (detailed discussion in Section V-B). Hence, in this work, we adopt PCGrad [23] and project the RL gradients to the normal plane of the design constraint gradient direction if they conflict. Similar to PCGrad, we measure the conflict between two gradient directions by their cosine similarity \(S_{c}\) - if two gradients conflict with each other, the cosine similarity of the two directions is negative: \[S_{c}(g_{task},g_{hw})=\frac{g_{task}\cdot g_{hw}}{||g_{task}||\cdot||g_{hw}||}\] Here, \(g_{task}\) and \(g_{hw}\) represent gradients introduced by the RL loss and the divergence between \(h\) and \(h^{nn}\) accordingly. If the cosine similarity is negative, we project task gradients \(g_{task}\) to the normal plane of hardware gradients \(g_{hw}\): \[g_{task}=g_{task}-\frac{g_{task}\cdot g_{hw}}{||g_{hw}||}\cdot g_{hw}\] This projection prioritizes the learning of the hardware proxy model. This is crucial for hardware-policy co-optimization since policy gradients can be misleading if \(h^{nn}\) does not model the effect of actions and design parameters well. ## IV Experiments MORPH aims to find robust hardware design parameters and control policies that can solve complex robotic problems. Therefore, we test our algorithm on problems with the following characteristics: 1. Only a small part of the design space allows the robot to complete the task; 2. The final goal of our tasks is hard to explore in the task environment. We evaluate MORPH with task performances using Hw-Phy and compare it against the performance of unoptimized hardware. One of the key characteristics of our method is that MORPH optimizes both the design and policy in parameter space. To evaluate its performance, we compare our method with three baselines: * **Transform2Act**[1]: This approach treats design as actions and uses Proximal Policy Optimization (PPO) to find good design and control parameters. * **CMA-ES with RL inner-loop**: This approach uses CMA-ES for searching hardware parameters and trains a policy for control. The negative best return achieved by the RL policy is used as a cost for CMA-ES. * **RL-NoHOOpt**: This approach trains an RL agent using the initial design parameters and do not optimize the hardware parameters of the robot. ### _Optimizing a reaching Robot_ We first test our method on a 2D reaching task. Here, we require a 5-link reaching robot to navigate a zig-zag-shaped tunnel and touch a goal location with the end-effector. Fig. 4: **Button pressing tasks.** (A) pen-pressing task where the button is located at one end of a long object. (B) pen-pressing as performed by a person. (C) mouse-clicking task where the button is located at a corner of the object. (D) mouse-clicking task as performed by a person. Fig. 3: **Optimizing a reaching robot.** (A) and (C) show the optimized reader by MORPH and Transform2Act respectively. (B) and (D) visualize the co-optimized control policies for MORPH and Transform2Act respectively. Both methods are able to optimize the link lengths to complete the reaching task. The design optimization goal is finding the appropriate link length for each length so the robot can reach the goal with minimal collisions. Overly long links can make moving in the constrained space without collision difficult, and overly short links may decrease the possibility of exploring the goal. 
The optimization range of the link lengths is \([0.05,5]\) and the initial design of the robot has a link length of \(3\). To optimize link lengths, it is sufficient to choose forward kinematics as the Hw-Phy, which converts joint-space actions and current joint states to the end-effector space: \(h=h(a_{e.e.}|a_{joint},s)\), where \(a_{joint}\) is the output from \(\pi\) and represents the change of joint positions. The state space contains joint positions and end-effector positions. This problem space is difficult to explore since the desired behavior requires navigating the tunnel. Hence, we use a reward function that encourages exploring inside the tunnel: \[R=||p_{e.e.}-p_{goal}||+\beta_{0}||y_{e.e.}-y_{tunnel}||+\beta_{1}r_{collide}+\beta_{2}r_{goal}\] Here, \(p_{e.e.}\) and \(p_{goal}\) are the Cartesian coordinates of the end-effector and the goal, respectively. \(y_{e.e.}\) and \(y_{tunnel}\) represent the \(y\) coordinate of the end-effector and the center of the tunnel. \(r_{collide}\) is a collision penalty. \(r_{goal}\) rewards the agent for touching the goal location with its end-effector. \(\beta_{0},\beta_{1},\beta_{2}\) are constants. ### _Optimizing robotic hands with manipulation tasks_ In this experiment, we use MORPH to optimize the kinematics of robotic hands to complete object manipulation tasks. As shown in Fig. 4, the robot needs to grasp the object and press a button on the object in hand. This task is challenging for several reasons: 1. The agent needs to find a design that produces a stable grasp so it can press the button without dropping the object; 2. The goal of pressing a button in hand can only be discovered after a stable grasp of the object. We optimize the angle of the finger placement and the link length of the finger links. Each finger has two movable links and palm movement is constrained to the z-axis. In this case, the robot cannot complete the button-pressing task by adjusting its grasp pose. Hence, the optimization algorithm needs to find a suitable finger placement to complete the task. In this task, the optimization range of finger placement angles is \([-\pi,\pi]\) and the range of link lengths is \([0.02,0.4]\). In this experiment, we assume that the two links of a finger share the same link length. Unlike the reaching task, it is unclear how to define a task-related action space for a contact-rich task. Hence, for the button-pressing tasks, we use the full transition function \(h=\mathcal{F}(s^{\prime}|s,a)\) as the Hw-Phy. In practice, we use the physics simulator MuJoCo [25] as the transition function. The state space of this task contains the joint positions of the hand joints, the location of the robotic hand, and the pose of the object. The reward function for this task is: \[R=||p_{obj}-p_{goal}||+\beta_{0}r_{contact}+\beta_{1}z_{obj}+\beta_{2}r_{goal}\] Here, \(p_{obj}\) and \(p_{goal}\) are the locations of the object and the goal, respectively. \(r_{contact}\) is \(1\) if the distal link of any finger makes contact with the button. \(z_{obj}\) represents the height of the object. \(r_{goal}\) is a reward bonus supplied only when the robot presses the button with one of its distal links while the object is in-hand. \(\beta_{0},\beta_{1},\beta_{2}\) are constants. We test our algorithm by learning hardware parameters to manipulate two buttoned objects: 1. Pen-pressing: In this task, the button is located on one end of a long box object. This is similar to pressing a push button on a pen. 2. Mouse-clicking: In this task, the button is located on the corner of a computer mouse.
The mouse geometry is more complex and requires more fingers in contact to form a stable grasp. As shown in Fig. 4, both objects are designed to be used by human hands with different grasp poses. ## V Results ### _Task performance_ Our results show that MORPH can learn control policies and designs that achieve good task performance for the reaching task. As shown in Fig. 3, the robot shortens the lengths of the third and fourth links to decrease collisions when navigating in the tunnel. The final optimized link lengths are \(\{2.51,2.10,0.66,1.01,2.3\}\).
Fig. 5: **Optimized hand with pen-pressing**. (A) shows the optimized robotic hand using MORPH. (B) shows the optimized hand completing the pen-pressing task. (C) shows an optimized hand using Transform2Act. (D) shows the Transform2Act agent executing its control policy and failing to press the button.
Fig. 6: **Average returns vs. environment steps for the reaching task**. Here, GP represents gradient projection.
Fig. 7: **Average returns vs. environment steps for the reaching task**. Here, GP represents gradient projection.
On the other hand, RL-NoHWOpt cannot discover the goal since it collides with the tunnel walls. This indicates that the original design is not suitable for learning this task. As shown in Fig. 7, Transform2Act also achieves task performance similar to MORPH. It successfully finds hardware parameters that allow it to discover the goal and finally converges to a hardware design that can complete the task. Finally, CMA-ES with RL inner-loop fails to find good design parameters to achieve the goal in a similar timescale. For the pen-pressing task, MORPH discovers behaviors that use only two fingers to grasp the object and elongate one finger to press the button (Fig. 5). It also shortens the unused finger to avoid any contact that results in unstable grasps. The final optimized link lengths are \(\{0.11,0.24,0.33,0.30\}\) and the final finger placement differences from the initial design are \(\{0.16,-0.06,0.32,0.07\}\) radians. As shown in Fig. 8, MORPH outperforms all the baselines in the pen-pressing task. Both RL-NoHWOpt and CMA-ES with RL inner-loop fail to learn to grasp the object stably. Transform2Act learns a design that can grasp the object. However, it fails to discover the final goal of pressing the button. Finally, for the mouse-clicking task, MORPH is able to elongate the robot's links to establish a stable grasp of the computer mouse and click the button. The optimized link lengths are \(\{0.17,0.24,0.27,0.3,0.28\}\) and the optimized finger placement angle differences are \(\{0.78,0.32,-0.7,0.54,-0.01\}\) radians. For the baselines, all of them fail to achieve a stable grasp. RL-NoHWOpt has difficulty grasping the object using short fingers. Both CMA-ES with RL inner-loop and Transform2Act fail to find appropriate link lengths and finger positions that lift the object stably. Our results for co-optimization demonstrate that MORPH is able to optimize robots that learn different tasks. Crucially, MORPH efficiently explores the design space and is able to discover hardware design parameters that allow for rich exploration in the task environment. Compared to the baselines, which can only learn part of the manipulation task, MORPH can discover the final goal from a delayed task-completion reward signal. ### _Discussion_ As shown in Figs.
7 and 8, MORPH fails to learn the task without using gradient projection, which implies that gradient projection is crucial to learning a hardware design and a control policy with high task performance. To further investigate this, we visualize the cosine similarity between the RL gradients and the hardware proxy learning gradients in Fig. 9. The cosine similarity is negative for approx. \(64\%\) of the training steps, meaning that the learning progress for both task improvement and hardware approximation can often be hindered by gradient interference. Hence, gradient projection is a critical component. Our training framework relies on the Hw-NN to learn a policy from the cumulative effect of design parameters. If the Hw-NN is inaccurate, the control policy may never be learned since the policy gradients can be misleading. Hence, it is important that the Hw-NN is close to the Hw-Phy (i.e., minimizing \(D[h_{\phi}(\mathbf{z}|\mathbf{s},\mathbf{a}),h_{\psi}^{nn}(\mathbf{z}|\mathbf{s},\mathbf{a})]\)). Since our framework is iterative, this minimization is achieved from two directions: the Hw-NN mimics the behavior of the Hw-Phy, and vice versa. As shown in Fig. 9, both the Hw-NN's loss and the Hw-Phy's search costs are high at the beginning of training. While a high initial loss for the Hw-NN is expected, the search cost is also high because, in this stage, the hardware proxy model is close to random and it is hard to find realistic parameters that allow the Hw-Phy to match it. However, both the loss and the cost decrease during training, and finally both models converge to each other. ## VI Conclusion and Future Directions We introduced MORPH, a method to co-optimize hardware design parameters and control policies. MORPH uses a hardware proxy model, Hw-NN, that learns the cumulative effect of hardware design parameters on the robot itself. With Hw-NN, our method does not require a differentiable physics model to compute gradients of the design parameters. Our results show that MORPH can learn hardware design parameters and control policies that enable hard-exploration manipulation tasks. A key limitation of our approach is that it currently does not learn the morphology of the robot, which is assumed to be given. However, we envision that MORPH can achieve this by using a graph neural network (GNN) to model the robot, and optimizing the graph structure in order to optimize the robot's morphology. Another direction will be applying MORPH to optimize a robot design for multiple tasks, such as a robotic hand capable of diverse manipulation skills.
Fig. 8: **Average returns (in log scale) vs. environment steps for the button pressing tasks.** (A) pen-pressing; (B) mouse-clicking.
Fig. 9: **Training details for mouse-clicking**. (A) shows cosine similarity between hardware constraint gradients and RL gradients. (B) plots constraint loss of Hw-NN and search costs for Hw-Phy.
2309.08307
A comparative ab-initio investigation of the physical properties of cubic Laves phase compounds XBi$_2$ (X = K, Rb)
In this study, we looked into a number of physical properties of alkali-bismuth compounds KBi$_2$ and RbBi$_2$ in the cubic Laves phase using the density functional theory (DFT). The structural, elastic, anisotropy indices, hardness, thermo-physical parameters, electronic band structure, and optoelectronic properties have been explored. Most of the results presented in this work are novel in nature.
Jahid Hassan, M. A. Masum, S. H. Naqib
2023-09-15T10:55:12Z
http://arxiv.org/abs/2309.08307v1
A comparative _ab-initio_ investigation of the physical properties of cubic Laves phase compounds \(X\)Bi\({}_{2}\) (\(X=\) K, Rb) ###### Abstract In this study, we looked into a number of physical properties of the alkali-bismuth compounds \(X\)Bi\({}_{2}\) (\(X=\) K, Rb) in the cubic Laves phase (symmetry Fd\(\overline{3}\)m) using density functional theory (DFT). The structural and elastic behavior, along with Pugh's ratio, Poisson's ratio, Cauchy pressure, anisotropy indices, micro- and macro-hardness, thermo-physical properties such as the Debye temperature, sound velocities, Gruneisen parameter, and melting temperature, the electronic band structure, and optoelectronic properties have been explored. The computed ground-state lattice parameters and unit cell volume are in close accordance with known theoretical and experimental findings. The elastic, thermo-physical, and optoelectronic properties of \(X\)Bi\({}_{2}\) (\(X=\) K, Rb) are investigated for the first time in this study. The computed elastic constants satisfy the mechanical stability criteria. The estimated Pugh's ratio, Poisson's ratio, and Cauchy pressure signify the ductility of the compounds. In order to understand the electronic properties, band structures and electronic energy densities of states have been explored. These compounds exhibit metallic characteristics in their electronic band structures. We have carried out a complete investigation of the reflectivity, absorption coefficient, refractive index, dielectric function, optical conductivity, and loss function of these metals. These compounds possess a low Debye temperature, thermal conductivity, and melting point. The optical absorption and reflectivity spectra and the refractive index of \(X\)Bi\({}_{2}\) (\(X=\) K, Rb) show that they can be used as solar reflectors and ultraviolet absorbers. The majority of the findings in this study are novel. Laves phase compound; Density functional theory; Elastic properties; Band structure; Optoelectronic properties; Thermo-physical properties ## 1 Introduction Laves phases are ideal for studying the fundamentals of intermetallic phases due to their large representative number, polytypism, extended homogeneity ranges, and simple crystal structure [1, 2, 3]. Laves phase compounds play a significant role in various functional and structural applications, including hydrogen storage materials (nickel-metal hydride batteries), magneto-mechanical sensors, and wear- and corrosion-resistant coatings [1, 4]. They are also used in high-temperature environments such as aerospace, power generation, high-temperature steel creep-strengthening, new alloy design, and other structural materials [5, 6]. Moreover, they are also applicable for use in solar cells and optoelectronic devices, due to their high efficiency, low toxicity, and good environmental stability [5, 7]. Chiral phonon modes emerge in the cubic Laves phase \(X\)Bi\({}_{2}\) (\(X=\) K, Rb) compounds due to the geometry of their crystal structure; these modes are associated with the circulation of atoms around their equilibrium positions [8, 9]. It possesses both time reversal symmetry and inversion symmetry and results
2309.07034
Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting
Annotators' sociodemographic backgrounds (i.e., the individual compositions of their gender, age, educational background, etc.) have a strong impact on their decisions when working on subjective NLP tasks, such as toxic language detection. Often, heterogeneous backgrounds result in high disagreements. To model this variation, recent work has explored sociodemographic prompting, a technique, which steers the output of prompt-based models towards answers that humans with specific sociodemographic profiles would give. However, the available NLP literature disagrees on the efficacy of this technique - it remains unclear for which tasks and scenarios it can help, and the role of the individual factors in sociodemographic prompting is still unexplored. We address this research gap by presenting the largest and most comprehensive study of sociodemographic prompting today. We analyze its influence on model sensitivity, performance and robustness across seven datasets and six instruction-tuned model families. We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks. However, its outcomes largely vary for different model types, sizes, and datasets, and are subject to large variance with regards to prompt formulations. Most importantly, our results show that sociodemographic prompting should be used with care for sensitive applications, such as toxicity annotation or when studying LLM alignment. Code and data: https://github.com/UKPLab/arxiv2023-sociodemographic-prompting
Tilman Beck, Hendrik Schuff, Anne Lauscher, Iryna Gurevych
2023-09-13T15:42:06Z
http://arxiv.org/abs/2309.07034v2
# How (Not) to Use Sociodemographic Information ###### Abstract Annotators' sociodemographic backgrounds (i.e., the individual compositions of their _gender_, _age_, _educational background_, etc.) have a strong impact on their decisions when working on subjective NLP tasks, such as hate speech detection. Often, heterogeneous backgrounds result in high disagreements. To model this variation, recent work has explored sociodemographic prompting, a technique which steers the output of prompt-based models towards answers that humans with specific sociodemographic profiles would give. However, the available NLP literature disagrees on the efficacy of this technique -- it remains unclear for which tasks and scenarios it can help, and evaluations are limited to specific tasks only. We address this research gap by presenting the largest and most comprehensive study of sociodemographic prompting today. Concretely, we evaluate several prompt formulations across seven datasets and six instruction-tuned model families. We find that (1) while sociodemographic prompting can be beneficial for improving zero-shot learning in subjective NLP tasks, (2) its outcomes largely vary for different model types, sizes, and datasets, and (3) are subject to large variance with regards to prompt formulations. Thus, sociodemographic prompting is not a reliable proxy for traditional data annotation with a sociodemographically heterogeneous group of annotators. Instead, we propose (4) to use it for identifying ambiguous instances, resulting in more informed annotation efforts.1 Footnote 1: Code and data at [https://github.com/UKPLab/arxiv2023-sociodemographic-prompting](https://github.com/UKPLab/arxiv2023-sociodemographic-prompting) ## 1 Introduction How messages are perceived is often not only dependent on their factual content, but also on the receiver's _subjective interpretation_: for instance, during dataset creation, two annotators might have different, equally valid opinions about what the "correct" offensiveness label for a particular tweet should be (e.g., Waseem, 2016; Davani et al., 2023, _inter alia_). As previously shown, this variation is, at least to some extent, tied to sociodemographic characteristics of the receivers, like their gender identity, age, and educational background (e.g., Biester et al., 2022; Pei and Jurgens, 2023). Accordingly, modeling the effect of sociodemographic factors on subjective tasks has emerged as an interesting research direction for NLP. As such, researchers have proposed new data collection paradigms - cf. _perspectivism_ (Rottger et al., 2022) - and trained models for reflecting the decisions of particular sociodemographic groups Fleisig et al. (2023). Most recently, researchers Deshpande et al. (2023); Santurkar et al. (2023); Hwang et al. (2023); Cheng et al. (2023) have explored _sociodemographic prompting_ of large language models (LLMs): the idea is to enrich a particular input prompt with additional sociodemographic information (cf. Figure 1). The models' output should then be aligned with the pop
Figure 1: We instruct LLMs to make predictions from different perspectives using sociodemographic profiles. This technique can be used to identify ambiguous instances before annotation.
our knowledge on the effect of including sociodemographic profiles is still scarce and the existing literature seems to disagree on its usefulness: for instance, Durmus et al.
(2023) showed that sociodemographic prompting of LLMs can be used to simulate human populations - a promise for more efficient sociological surveys. Other work, in turn, points to the danger of stereotypical bias reflected when prompting models with sociodemographic profiles (Cheng et al., 2023; Deshpande et al., 2023), its influence on humans when used as writing assistant (Jakesch et al., 2023) or to the so-called ecological fallacy, demonstrating that no significant performance improvements are to be expected (Orlikowski et al., 2023). However, most studies evaluated including sociodemographic information in setups involving a few tasks and models only, leaving important questions open. For instance, we do not know anything about the sensitivity of current models to sociodemographic information, nor the robustness of this technique. And thinking ahead, if not robust, should we still use sociodemographic prompting? **Contributions.** We present the largest and most comprehensive study on sociodemographic prompting to-date. Concretely, we test the effect of instructing 17 LLMs (covering various model types, e.g., InstructGPT, Flan-T5, etc.) with sociodemographic profiles across seven datasets reflecting four different subjective NLP classification tasks (sentiment analysis, hatespeech detection, toxicity detection, and stance detection). Our results allow us to answer the following four research questions (**RQ1-RQ4**): _**RQ1:** How sensitive are our models to sociodemographic prompting?_ In SS4.1, we demonstrate that sociodemographic prompting leads to surprisingly large amounts of prediction changes (up to 80%). The exact amount varies heavily across model types and sizes. Some trends emerge: for instance, T5-based models are "easier to influence". _**RQ2:** What is the effect of sociodemographic prompting on the measurable performance?_ Our findings (SS4.2) indicate that predicting annotators' original votes is challenging, despite providing their profiles to the model. However, we also observe substantial performance improvements (up to +8pp in accuracy) in zero-shot learning. These improvements are most pronounced for datasets exhibiting low levels of inter-annotator agreement. _**RQ3:** How robust is sociodemographic prompting?_ We show (SS4.3) that sociodemographic prompting is not robust: merely changing the prompt formulation can lead to labels flipping for 95% of instances. _**RQ4:** What is the relationship between sociodemographic prompting and instance ambiguity?_ Our results (SS4.4) show that sociodemographic prompting is effective at identifying instances which annotators disagree upon, thereby showcasing this technique as a viable tool during annotation projects. ## 2 Sociodemographic Prompting Throughout this work, we _prompt_ a language model with (or without) _sociodemographic information_ for _obtaining predictions_ for the classification tasks we study. In the following, we discuss the main concepts our methodology relies on. Prompting.Prompting refers to the act of providing an initial input or cue to a language model, guiding its subsequent output generation. LLMs rely on these prompts to produce contextually relevant and coherent responses. Sociodemographic Information.Sociodemographic information encompasses data related to the social and demographic characteristics of individuals or groups. 
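As a concrete illustration of the multiple-choice scoring described under "Obtaining Predictions" above, the following sketch scores each candidate label by its token log-likelihood with an open-source causal LM via Hugging Face Transformers. The function name, the averaging over answer tokens, and the example checkpoint are assumptions for illustration; the seq2seq models used in this work (e.g., Flan-T5) would require AutoModelForSeq2SeqLM instead.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def pick_label(prompt: str, options: list[str], model, tokenizer) -> str:
    """Return the candidate answer with the highest average token
    log-likelihood when appended to the (sociodemographic) prompt.
    Assumes the prompt tokens form a prefix of the prompt+option tokenization."""
    scores = []
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    for option in options:
        ids = tokenizer(prompt + " " + option, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        # position i of the logits predicts token i+1 of the input sequence
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        rows = torch.arange(prompt_len - 1, ids.shape[1] - 1)
        answer_log_probs = log_probs[rows, ids[0, 1:][prompt_len - 1:]]
        scores.append(answer_log_probs.mean().item())
    return options[scores.index(max(scores))]

# Illustrative usage (the checkpoint is an example, not the paper's exact model):
# tok = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
# lm = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
# pick_label(prompt, ["Not Toxic", "Slightly Toxic", "Very Toxic"], lm, tok)
```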
This includes, but is not limited to, attributes such as gender (e.g., _male_, _female_, _non-binary_, etc.), education level (e.g., _high-school degree_, etc.), political affiliation (e.g., _liberal_, _right_, etc.), and age (_young_, _old_, etc.). Providing sociodemographic cues via prompting has been shown to influence LLM's responses (Durmus et al., 2023; Hwang et al., 2023). The promise is that the output will be tailored to a specific demographic or social group. The present analysis encompasses five distinct sociodemographic attributes, based on the datasets we employ: gender, race, age range, education level, and political affiliation, as detailed in Table 1. In Figure 2, we provide an example of a sociodemographic prompt. Obtaining Predictions.Our strategy for answer generation is contingent upon the origin of the model, specifically differentiating between closed-source LLMs and open-source alternatives. For open-source models, we adapt all classification tasks into a multiple-choice format, following Brown et al. (2020); Ye et al. (2023). Concretely, we present the context alongside a potential answer. Then, we evaluate the likelihood associated with each option, selecting the one with the highest likelihood. In scenarios requiring binary classification, we assign semantically coherent descriptors to each label (e.g., _"Yes"_ or _"No"_ in lieu of 0 or 1 for binary hate speech detection) and then process the task akin to a multiple-choice question. Conversely, for closed-source models, we post-process the model output and map it to the pre-defined label space. In the few cases where this approach fails, we assign manually. ## 3 Overall Experimental Setup ### Tasks and Datasets We select seven datasets for four subjective tasks (toxicity detection, stance detection, hatespeech detection, and sentiment classification) to study sociodemographic prompting across a large and diverse benchmark (cf. Table 2, Appendix A.1). We have access to the original, un-aggregated annotations for each dataset. To analyze the effect of sociodemographic prompting we additionally require sociodemographic profiles. For two of the datasets (_DP_, _Diaz_) we have access to this information. For those, we adhere to the original sociodemographic details for prompting. For sociodemographic prompting the remaining five datasets, we adopt the sociodemographic profiles derived from the toxicity dataset, following the approach by Wan et al. (2023). For all datasets, we removed instances with incomplete or unknown information (details in Appendix A.1). Due to the large number of experiments and data sizes, we randomly sample 1,000 instances from each dataset. In the following, we describe the individual datasets. Toxicity.The task is to decide whether or to what degree (e.g., _slightly toxic_) a text is toxic. Here, we utilize _Diverse Perspectives_ (DP) by Kumar et al. (2021), and _Jigsaw_Goyal et al. (2022). _DP_ comprises comments from various online forums, including Twitter, 4chan, and Reddit. These comments underwent annotation via Amazon Mechanical Turk, receiving five annotations per instance. For each annotator, the sociodemographic data was gathered. The dataset did not come equipped with a definitive gold label. Therefore, we use majority voting to determine the gold label. _Jigsaw_ encapsulates comments from news articles, originally collated by the _Civil Comments_ platform and subsequently annotated for toxicity indicators. 
The binary gold label for this dataset was derived by classifying comments as toxic if a majority of annotators identified them as such. Stance.Stance detection pertains to discerning an author's viewpoint towards a specific topic. Also here, annotators' decisions are influenced by their sociodemographic background (Balahur et al., 2010; Luo et al., 2020). We employ the SemEval \begin{table} \begin{tabular}{l|l} \hline \hline **Attribute** & **Values (Percentage share)** \\ \hline Gender & male (52\%), female (47\%), nonbinary (\textless{1}\%) \\ Race & White (77\%), Black or African American (13\%), Asian (6\%), Hispanic (3\%), Native \\ & Hawaiian or Pacific Islander (1\%), American Indian or Alaska Native (\textless{1}\%) \\ Age & Under 18 (\textless{1}\%), 18 - 24 (11\%), 25 - 34 (40\%), 35 - 44 (25\%), 45 - 54 (13\%), 55 - 64 (8\%), 65 or older (3\%) \\ Education & Less than high school degree (1\%), High school graduate (9\%), Some college but \\ & no degree (19\%), Associate degree in college (2-year) (11\%), Bachelors degree in \\ & college (4-year) (42\%), Masters degree (16\%), Professional degree (JD, MD) (2\%), \\ & Doctoral degree (1\%) \\ Political Affiliation & Liberal (43\%), Conservative (29\%), Independent (28\%) \\ \hline \hline \end{tabular} \end{table} Table 1: The sociodemographic attributes and their corresponding values we use in this study, based on the dataset by Kumar et al. (2021). Ordered ordinally or by percentage share. Figure 2: Sociodemographically enriched prompt to predict the level of toxicity in a text. The different parts of prompt are highlighted, i.e. instruction, sociodemographic properties and dataset input. Example drawn from the dataset by Kumar et al. (2021). 2016 Task 6 dataset (SE2016) by Mohammad et al. (2016) and the _Global Warming Stance Detection_ (GWSD) dataset by Luo et al. (2020). _SE2016_ encompasses 3,591 annotated Twitter posts that address a range of contentious subjects. The gold labels were ascertained using majority voting. Instances exhibiting less than 60% consensus among annotators were excluded by the authors. _GWSD_ was curated to analyze the framing of opinions within the discourse on global warming. It consists of 2,050 annotated U.S. news articles on global warming. In order to determine the gold label for each article, the authors employed a model tailored to the distribution of annotations, which also factored in potential biases of the annotators. Hatespeech.Hatespeech detection is a task designed to tackle the increasing amount of hateful online communication. We use the _Gabe Hate Corpus_ (GHC) by Kennedy et al. (2022) and the _Twitter_Hatespeech Corpus_ (H-Twitter; Waseem, 2016). _GHC_ was sourced from the social network service gab.com and annotated in a multi-label fashion for _Human Degradation_, _Calls For Violence_ and _Vulgar/Offensive_. The authors obtained gold labels using majority voting. As we are comparing multi-class tasks, we binarized the annotations into hatespeech indicators (i.e., _Yes_ and _No_). _H-Twitter_ was annotated by CrowdFlower workers for _sexism_, _racism_, _neither_, or _both_. Expert annotators, e.g., as feminist and anti-racism activists, contributed the gold labels to the collection. Sentiment.Given a text, the task is to decide upon the sentiment conveyed. We use the dataset by Diaz et al. (2018), which we call _Diaz_, created for studying age-related bias in sentiment analysis. 
The training data is a re-annotated subset of the Sentiment1402 dataset and the test data is scraped from blog posts and corresponding comments from a prominent _elderblogger_ community Lazar et al. (2017) authored by older adults. Footnote 2: [http://help.sentiment140.com/for-students](http://help.sentiment140.com/for-students) ### Models We seek to _instruct_ models to mimick an annotator with a specific sociodemographic profile, and thus, we resort to the most natural choice, instruction-tuned models. Concretely, we focus on GPT-3 Brown et al. (2020), T5 Raffel et al. (2020), OPT Zhang et al. (2022), and Pythia Biderman et al. (2023) model variants. We present a comprehensive overview of all models in Appendix A.2. Gpt-3.We use InstructGPT Ouyang et al. (2022) which was fine-tuned using reinforcement learning from human feedback (RLHF). T5.We further use Flan-T5 Chung et al. (2022), Flan-UL2 Tay et al. (2023) and Tk-Instruct Wang et al. (2022). Flan-T5 was trained over a collection of 1,836 finetuning tasks. Flan-UL2 uses the same instruction-tuning procedure but is built on top of a language model which was trained following the Unifying Language Learning Paradigm (UL2) pretraining framework. Tk-Instruct was trained using a large benchmark of 1,616 NLP tasks and their natural language instructions. Opt.Further, we employ OPT-IML Iyer et al. (2022) which was fine-tuned using an aggregation of eight instruction-tuning datasets. Pythia.Finally, we use Dolly-V2 Conover et al. (2023) fine-tuned on a 15K record instruction corpus generated by Conover et al. (2023). ### Evaluation For subjective NLP tasks Ovesdotter Alm (2011), comparing aggregated annotations with model predictions provides only a limited view on the perfor \begin{table} \begin{tabular}{l|l|l|l} \hline **Task** & **Dataset** & **Labels** & **IAA** \\ \hline Toxicity & DP & not toxic (52\%), slightly toxic (19\%), moderately toxic (14\%), very toxic (9\%), extremely toxic (6\%) & 0.13 \\ & Jigsaw & yes (67\%), no (33\%) & 0.46 \\ Hatespeech & GHC & yes (87\%), no (13\%) & 0.25 \\ & H-Twitter & neither (79\%), sexism (17\%), racism (3\%), both(1\%) & 0.59 \\ Stance & SE2016 & against (55\%), none (23\%), favor (22\%) & 0.58 \\ & GWSD & agree (38\%), neutral (44\%), disagree (18\%) & 0.33 \\ Sentiment & Diaz & very positive (9\%), somewhat positive (24\%), neutral (41\%), somewhat negative (21\%), very negative (5\%) & 0.11 \\ \hline \end{tabular} \end{table} Table 2: The tasks and datasets (_Diverse Perspectives (DP)_, _Jigsaw_, _SE2016_, _Global Warming Stance Detection (GWSD)_, _Gabe Hate Corpus (GHC)_, _Twitter Hatespeech Corpus (H-Twitter)_, and _Diaz_) we use along with their labels and inter-annotator agreement we obtain (IAA, Krippendorff’s \(\alpha\)). mance as label aggregation obscures any disagreement in the data (Prabhakaran et al., 2021; Basile et al., 2021). Thus, we follow Uma et al. (2021) and evaluate our results using both hard-label evaluation (accuracy, macro-averaged F1) and soft-label evaluation (cross-entropy, Jensen-Shannon divergence). In case of hard-label evaluation, we aggregate all predictions obtained via sociodemographic prompting using majority voting. Further experimental details are provided in the Appendix A.3. ## 4 Results ### Model sensitivity Detailed Setup.We investigate to what extent LLMs' predictions change when instructed to answer from viewpoints driven by particular sociodemographic backgrounds. 
In particular, we aggregate all predictions from prompting with different profiles using majority vote. Then, we compare how often the aggregated label is different from the one predicted without any sociodemographic information. Additionally, we conduct a statistical analysis using a generalized linear mixed model (GLMM) to account for potential confounders and statistical dependencies in our data by jointly modeling numerous main effects (e.g., the impact of model family) and interaction effects (e.g., the joint impact of model family and prompting method). We report details on model specification and statistical results in Appendix A.7. Prompting using sociodemographic profiles leads to prediction changes.In Figure 3, we depict the percentages of label prediction changes when including sociodemographic information, for three datasets (the remaining datasets are visualized in the Appendix 5). Several trends can be observed; first, the degree of prediction changes is both dependent on the dataset and model. In general, models are more affected by data from _DP_ and _Diaz_, with extreme cases where more than 80% of the predictions change when using Dolly-V2-2.8B. The sentiment dataset affects all models to a large degree while the hate-speech datasets lead to less pronounced label shifts. Notably, instruction-tuned models based on T5 (Flan-T5, Tk-Instruct ) are on average more affected by sociodemographic prompting than InstructGPT or variants of OPT-IML. We find that prediction changes are statistically significantly affected by the choice of model family (\(\chi^{2}(5)\)=3937.76, \(p\)<0.001) and the length of the input text (\(\beta\)=-1.69e-04, 95% CI [-2.93e-04, -4.51e-05], \(p\)=0.007). In particular, shorter input texts are associated with an increased number of prediction changes. Further, a post hoc test confirmed our observation that T5 models are more strongly affected by sociodemographic prompting than InstructGPT and OPT-IML. annotators in Table 3. While the models are better than random and majority prediction, more than half of the instances are classified incorrectly (Acc). Further, the models are struggling with label imbalance and the number of labels (5) which can be observed from the relatively low F1 scores. While we do not find a statistically significant main effect of the prompting method, we observe significant interactions of prompting method with model size (\(\beta\)=-0.12, 95% CI [-0.15, -0.09], \(p\)<0.001), input text length (\(\beta\)=6.95e-04, 95% CI [4.91e-04, 8.99e-04], \(p\)<0.001), and model family (\(\chi^{2}(5)\)=561.33, \(p\)<0.001). Concretely, sociodemographic prompting is less effective for larger models, more effective for longer input texts, most effective for Flan-UL2 and least effective for Dolly-V2. **Sociodemographic prompting can improve zero-shot performance.** In Table 4, we present the hard-label and soft-label scores for the two best performing models, InstructGPT and OPT-IML. Our statistical analysis confirms a significant interaction effect of model family and prompting method (\(\chi^{2}(5)\)=101.29, \(p\)<0.001) and identifies InstructGPT as the model family that benefits most from sociodemographic prompting. Flan-T5 is the model family that benefits least. Interestingly, for toxicity detection and sentiment classification, the models benefit from sociodemographic prompting, whereas for stance detection they perform better without such information. 
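A minimal sketch of the aggregation behind these numbers: predictions from all sociodemographic profiles are combined by majority vote, then compared against the profile-free prediction (sensitivity, i.e. prediction-change rate) and against the aggregated gold label (hard-label accuracy). Variable and function names are illustrative, not taken from the released code.

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate per-profile predictions for one instance by majority vote."""
    return Counter(labels).most_common(1)[0][0]

def sensitivity_and_accuracy(profile_preds, no_profile_preds, gold):
    """profile_preds: per-instance lists of predictions, one per profile.
    no_profile_preds: predictions from the same prompt without any profile.
    gold: aggregated gold labels."""
    aggregated = [majority_vote(p) for p in profile_preds]
    # share of instances whose aggregated label flips when profiles are added
    change_rate = sum(a != b for a, b in zip(aggregated, no_profile_preds)) / len(gold)
    accuracy = sum(a == g for a, g in zip(aggregated, gold)) / len(gold)
    return change_rate, accuracy
```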
Our statistical model confirms a significant interaction effect of task and prompting method (\(\chi^{2}(3)\)=309.46, \(p\)<0.001) and identified sentiment classification as the task that benefits most from sociodemographic prompting. Most notably, using the sociodemographic profiles from the _DP_ dataset can also improve performance for other datasets such as _Jigsaw_ (both models) or _GHC_ and _GWSD_ (OPT-IML ). We also observe a slight trend that datasets where improvements are observed share low IAA across the original annotations (see Krippendorff's \(\alpha\) in Table 2). When comparing both evaluation setups, the effect of sociodemographic prompting is more pronounced for soft-label evaluation. This indicates that, overall, the predictions are more aligned to the original annotations. The results for the other model sizes are provided in Appendix A.5. We observe that multiple model configurations exhibit weak performance for both setups in general, often without any increasing trend for larger models from the same model family, which is supported by a significant negative interaction effect of model size and SD prompting (\(\beta\)=-0.08, \begin{table} \begin{tabular}{l|c c|c c} \hline \hline & \multicolumn{2}{c}{Toxicity - DP} & \multicolumn{2}{c}{Sentiment - Diaz} \\ Model & Acc & F1 & Acc & F1 \\ \hline Random &.19 &.17 &.20 &.17 \\ Majority &.06 &.02 &.09 &.03 \\ \hline InstructGPT(175B) &.43 &.26 &.34 &.26 \\ InstructGPT(175B)-SD & **.44** &.26 & **.37** & **.31** \\ \hline OPT-IML(30B) &.42 &.18 &.28 &.26 \\ OPT-IML(30B)-SD & **.45** &.18 & **.32** & **.27** \\ \hline \hline \end{tabular} \end{table} Table 3: Zero-shot performance when predicting annotations using the original sociodemographic profile of an annotator. We compare prompting with (SD) and without sociodemographic information and report macro-averaged F1 (F1) and Accuracy (Acc). Figure 3: Percentage of prediction changes when comparing outputs of zero-shot prompting with and without sociodemographic information. The x-axis shows the sizes of models of a particular type (same color). 95% CI [-0.10, -0.05], \(p\)<0.001). We also test whether larger model sensitivity (percentage of prediction changes as detailed above) directly translates to better performance but do not measure any significant correlation (-0.16 Spearman \(\rho\), p=0.08). We conclude that model sensitivity is not a decisive factor for zero-shot performance using sociodemographic information. ### Robustness Detailed Setup.Previous work demonstrated that prompting for text classification is influenced by the prompt format (Min et al., 2022), and that the model's "understanding" of a prompt differs from its semantic meaning (Webson and Pavlick, 2022; Khashabi et al., 2022). Here, we investigate the extent to which predictions change when reformulating the instruction. We compare the format used in previous experiments to two other instruction formats; one simple paraphrase (format 1) and another formulation where do not provide any explicit instruction but merely present the sociodemographic profile and the input text (format 2). We provide the exact formulations in Appendix A.6. Importantly, the sociodemographic profile remains the same between different formats. Predictions are sensitive to prompt formulation.Table 5 presents the percentage of prediction differences between the different formats across datasets and models. Even for semantically equivalent formats (0,1) prediction differences can rise up to 35% (OPT-IML on _Diaz_). 
Using a minimal format leads to the most drastic changes across all datasets, especially pronounced for _DP_ and _H-Twitter_. Similar effects can be observed for prompting without sociodemographic information. Thus, prediction differences are only partially induced by the sociodemographic profile and confirm previous observations that prompt formulation largely influences prediction outcomes. In combination with our previous results (SS4.1), we argue that (s sociodemographic) prompting is not robust enough to be used as proxy for human annotation. ### Ambiguous Instances Detailed Setup.Wan et al. (2023) trained a model for disagreement prediction in subjective NLP tasks, which may guide data annotation efforts. However, their approach relies on the existence of annotated data alongside the sociodemographic information. \begin{table} \begin{tabular}{l r r r r r r r r r r r r r} \hline \hline & \multicolumn{4}{c}{Toxicity} & \multicolumn{4}{c}{Hatespeech} & \multicolumn{4}{c}{Stance Detection} & Sentiment \\ & \multicolumn{2}{c}{DP} & \multicolumn{2}{c}{Jigsaw} & \multicolumn{2}{c}{GHC} & \multicolumn{2}{c}{H-Twitter} & \multicolumn{2}{c}{SE2016} & \multicolumn{2}{c}{GWSD} & \multicolumn{2}{c}{Diaz} \\ Model & Acc & F1 & Acc & F1 & Acc & F1 & Acc & F1 & Acc & F1 & Acc & F1 & Acc & F1 \\ \hline \multicolumn{12}{l}{InstructGPT(175B)} &.48 &.27 &.76 &.59 &.89 & **.75** & **.87** & **.57** & **.53** &.51 & **.72** & **.69** &.36 &.33 \\ \multicolumn{12}{l}{InstructGPT(175B) +SD} & **.51** & **.28** & **.79** & **.60** &.89 &.74 &.86 &.53 &.52 & **.52** &.69 &.68 & **.39** &.33 \\ \hline \multicolumn{12}{l}{OPT-IML(30B)} &.50 &.19 &.58 &.49 &.80 &.69 &.84 &.54 & **.67** & **.53** & **.57** & **.50** &.27 &.24 \\ \multicolumn{12}{l}{OPT-IML(30B) SD} & **.55** & **.20** & **.64** & **.53** & **.85** & **.74** & **.86** & **.57** &.65 &.51 &.52 &.39 & **.35** & **.29** \\ \hline \multicolumn{12}{l}{} & \multicolumn{1}{c}{CE} & \multicolumn{1}{c}{JSD} & \multicolumn{1}{c}{CE} & \multicolumn{1}{c}{JSD} & \multicolumn{1}{c}{CE} & \multicolumn{1}{c}{JSD} & \multicolumn{1}{c}{CE} & \multicolumn{1}{c}{JSD} & \multicolumn{1}{c}{CE} & \multicolumn{1}{c}{JSD} & \multicolumn{1}{c}{CE} & \multicolumn{1}{c}{JSD} \\ \hline \multicolumn{12}{l}{InstructGPT(175B)} & 1.42 &.32 &.55 &.15 &.43 &.07 & **.88** &.08 & **.98** &.27 & **.86** &.18 & 1.50 &.37 \\ \multicolumn{12}{l}{InstructGPT(175B) SD} & **1.40** & **.29** & **.51** & **.12** & **.42** & **.06** &.90 &.08 &.99 & **.25** &.89 & **.15** & **1.48** & **.33** \\ \hline \multicolumn{12}{l}{OPT-IML(30B)} & 1.43 &.33 &.71 &.26 &.52 &.13 &.90 &.10 & **.90** &.22 & **.95** &.23 & 1.57 &.42 \\ \multicolumn{12}{l}{OPT-IML(30B) SD} & **1.40** & **.29** & **.66** & **.20** & **.48** & **.09** & **.89** & **.07** &.91 & **.21** &.99 &.23 & **1.52** & **.32** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of zero-shot prompting performance using hard-label evaluation (Acc, F1) and soft-label evaluation, with (SD) and without sociodemographic information. CE is cross-entropy and JSD is for Jensen-Shannon divergence. Better scores are highlighted in bold. 
\begin{table} \begin{tabular}{l r r r r} \hline \hline & \multicolumn{2}{c}{InstructGPT (175B)} & \multicolumn{2}{c}{OPT-IML (30B)} \\ Model & Diff (1,2) & F1 & Diff (1,2) & F1 \\ \hline DP & (19\%, 33\%) &.27 \(\pm\).02 & (13\%, 82\%) &.15 \(\pm\).03 \\ DP+SD & (10\%, 40\%) &.27 \(\pm\).02 & (15\%, 56\%) & **.20** \(\pm\).01 \\ Jigsaw & (5\%, 16\%) &.61 \(\pm\).01 & (11\%, 30\%) &.50 \(\pm\).02 \\ Jigsaw+SD & (4\%, 13\%) &.61 \(\pm\).01 & (14\%, 23\%) & **.53** \(\pm\).00 \\ GHC & (2\%, 10\%) &.71 \(\pm\).01 & (6\%, 22\%) &.67 \(\pm\).05 \\ GHC+SD & (2\%, 9\%) & **.73** \(\pm\).01 & (8\%, 16\%) & **.72** \(\pm\).04 \\ H-Twitter & (3\%, 12\%) &.54 \(\pm\).06 & (11\%, 91\%) & **.38** \(\pm\).2 \\ H-Twitter+SD & (4\%, 8\%) &.54 \(\pm\).03 & (12\%, 95\%) &.36 \(\pm\).25 \\ SE2016 & (10\%, 41\%) &.38 \(\pm\).19 & (13\%, 28\%) & **.50** \(\pm\).08 \\ SE2016+SD & (8\%, 17\%) &.52 \(\pm\).01 & (12\%, 20\%) &.43 \(\pm\).12 \\ GWSD & (7\%, 24\%) &.66 \(\pm\).04 & (13\%, 20\%) & **.41** \(\pm\).10 \\ GWSD+SD & (12\%, 19\%) & **.67** \(\pm\).01 & (10\%, 10\%) &.34 \(\pm\).09 \\ Diaz & (16\%, 45\%) & **.33** \(\pm\).01 & (24\%, 45\%) &.21 \(\pm\).04 \\ Diaz+SD & (14\%, 36\%) &.31 \(\pm\).04 & (35\%, 42\%) & **.26** \(\pm\).02 \\ \hline \hline \end{tabular} \end{table} Table 5: Differences between different prompt formulations. Diff refers to prediction changes when comparing results of using format 0 to other formats (1,2). F1 refers to the averaged F1 scores across all three formats. graphic information of the annotators. Thus, we investigate whether we can use sociodemographic prompting as proxy to identify instances which will likely result in disagreement during annotation. To this end, we compare the original annotations with the result of sociodemographic prompting and calculate a binary F1 score. True positives are instances which received disagreement in both setups. Conversely, true negatives are instances which received no disagreeing votes in both setups. Sociodemographic prompting is effective at modeling disagreement.We present the results in Figure 4. Surprisingly, the best-performing zero-shot models (see SS4.2) are not the best at modeling the disagreement. With a mean performance of 0.62, Flan-T5 (11B) produces the best and most consistent results across all datasets. This observation is confirmed by a significant positive effect of model size (\(\beta\)=0.22, 95% CI [0.21, 0.24], \(p\)<0.001) along with a significant effect of model family (\(\chi^{2}(5)\)=579.84, \(p\)<0.001) for which Dolly-V2 and Flan-T5 are the two highest-performing model families and InstructGPT is the lowest. For the two datasets with original sociodemographic information and lowest IAA overall, we observe the best performances across different model sizes. As both datasets also exhibit increased prediction changes (see SS4.1), we hypothesize that the disagreement induced by sociodemographic prompting increases if there is larger disagreement in the original annotation. Thus, we can use sociodemographic prompting to estimate the disagreement level in data. This is useful during annotation to identify ambiguous instances which require a larger set of diverse annotators. ## 5 Related Work The sociodemographic background of annotators has been identified as an influential factor in text annotation (Luo et al., 2020; Sap et al., 2022; Biester et al., 2022; Pei and Jurgens, 2023; Santy et al., 2023). 
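For the ambiguity analysis described above (comparing disagreement in the original annotations with disagreement across profile-conditioned predictions), the binary F1 can be computed by marking each instance as "disagreement" or "no disagreement" in both sources, as in the following sketch; the helper names are illustrative.

```python
from sklearn.metrics import f1_score

def has_disagreement(labels):
    """True if not all annotations (or profile predictions) agree."""
    return len(set(labels)) > 1

def disagreement_f1(human_annotations, profile_preds):
    """human_annotations / profile_preds: per-instance lists of labels.
    Positive class: the instance receives disagreeing votes in both setups."""
    y_true = [has_disagreement(a) for a in human_annotations]
    y_pred = [has_disagreement(p) for p in profile_preds]
    return f1_score(y_true, y_pred)
```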
These results question the notion of a _single_ ground truth (Aroyo and Welty, 2015) and emphasize the need (Plank, 2022) to consider _all_ interpretations during data annotation (Cabitza et al., 2023), modeling (Uma et al., 2021; Davani et al., 2022) and evaluation (Basile et al., 2021; Gordon et al., 2021) of NLP systems. Modeling sociodemographic information in NLP classifiers.Several works have shown that modeling uncertainty in annotation is beneficial for performance in subjective NLP tasks (Fornaciari et al., 2021; Davani et al., 2022; Zhou et al., 2022; Gordon et al., 2022; Gupta et al., 2023; Wan et al., 2023, inter alia). Logically, researchers have moved on to directly model sociodemographic information in the classifier. Gordon et al. (2022) introduced the concept of _Jury Learning_, a supervised machine learning methodology that addresses discrepancies by establishing the composition and relative influence of individuals or groups responsible for making predictions through the classifier. Gupta et al. (2023) utilized multi-task learning techniques and modified the loss function in order to account for distinct sociodemographic groups. Similarly, Fleisig et al. (2023) use RoBERTa and GPT-2 to learn annotator distributions from sociodemographic profiles and conduct ablation studies to investigate which input is most influential. Orlikowski et al. (2023) introduce group-specific layers to model groups of annotators with shared attributes in multi-annotator models for toxic content detection. However, they find that explicitly accounting for sociodemographic attributes does not significantly improve performance. Notably, the authors highlight the risk of an ecological fallacy, i.e. the risk of explaining individual behaviour via aggregate group behaviour. Figure 4: Performance to model disagreement in various datasets of subjective NLP tasks (binary F1). Prompting large language models with sociodemographic informationWith the increasing performance of LLMs, researchers in and outside of NLP investigated to what extent they are influenced when prompted with sociodemographic information. Lee et al. (2023) investigate whether instruction-tuned LLMs are aligned to human disagreement but limit their experiments to a single NLI dataset. They conclude that models deviate from human annotators in terms of accuracy and disagreement level. By analyzing disagreement for Q&A, Hwang et al. (2023) find that users' opinions and their sociodemographic background are not mutual predictors. For predicting users' individual opinions, they show that a combination of sociodemographic information and relevant past opinions performs best. Several works Durmus et al. (2023); Santurkar et al. (2023); Santurkar et al. (2023) analyze LLM's alignment with specific sociodemographic groups and show that model responses are biased towards responses by participants from Western countries. Notably, Santurkar et al. (2023) observe that misalignment persists even after explicitly steering the LMs towards particular demographic groups. Argyle et al. (2023) suggest using GPT-3 as testbed before conducting large scale population surveys. They propose _algorithmic fidelity_ to evaluate alignment with different human subpopulations and present it as a cost-efficient proxy for specific human sub-populations in social science research. When directly analyzing for biases, it has been shown Cheng et al. (2023); Deshpande et al. 
(2023) that prompting LLMs with sociodemographic information carries the potential to amplify existing stereotypical biases. In contrast to studying alignment of LLMs with opinion surveys, we use sociodemographic prompting to investigate its influence on predictions for subjective NLP tasks. Previous work has successfully integrated sociodemographic information in the modeling of NLP classifiers to improve performance Fleisig et al. (2023) or identification of ambiguous instances Wan et al. (2023). In addition to the annotated data itself, these approaches rely on the existence of sociodemographic information about the annotators. It is unclear how to transfer this knowledge across different tasks. In contrast, we show that sociodemographic prompting improves zero-shot performance for subjective NLP tasks and demonstrate its capabilities to identify ambiguous instances. ## 6 Conclusion We evaluate prompting LLMs for simulating data annotation using humans with various sociodemographic backgrounds. We employ a comprehensive study across seven datasets and seven instruction-tuned model families. Our results show that sociodemographic prompting improves zero-shot performance for subjective NLP tasks but does not significantly outperform standard prompting when directly modeling the original annotator sociodemographic. Further, we observe sensitivity to the prompt formulation We argue that it should not be used as a proxy for data annotation using humans but show that it can be used to identify ambiguous instances to aid annotation efforts. In future we plan to extend our analysis to LLM's self-explanations about diverging predictions from different sociodemographic profiles. Ethical Considerations and Limitations All our experiments have been approved by the Institutional Review Board of one of our universities. In the following, we provide an examination of the ethical dimensions and inherent limitations associated with this research study. Annotations go beyond sociodemographicographies.While annotators' sociodemographic backgrounds have been shown to be influential in their decision-making process (Al Kuwatly et al., 2020; Excell and Al Moubayed, 2021; Shen and Rose, 2021; Larimore et al., 2021; Sap et al., 2022, _inter alia_), it is not a definitive predictive factor as individual lived experiences (Waseem, 2016) or situated domain expertise (Patton et al., 2019) can influence annotation decisions, too. In short, collective group behavior may not always provide an explanation for individual behavior (Diaz et al., 2022; Orlikowski et al., 2023). While our general approach can be extended to a wider range of sociodemographic attributes or even descriptions of individuals, we refrained from testing more to contain the complexity of our study and due to the limited availability of such resources. We welcome efforts to increase the availability of such information alongside the datasets, e.g., Crowdworksheets by Diaz et al. (2022), and hope to see more work in future exploring prompting large language models with more dimensions of sociodemographic and personal information. Sociodemographic profiles are not representative.It is important to acknowledge certain limitations with regard to the representation of sociodemographic profiles. First, all the datasets employed in our research are exclusively in English language, mostly due to the lack of resources in other languages. This linguistic restriction inherently limits our ability to make comprehensive cross-linguistic assessments. 
Second, the sociodemographic information provided by the annotators of the datasets used in this study adheres to a classification system specific to the United States. Consequently, our findings cannot be generalized to sociodemographic data originating from other nations, linguistic communities, or cultural contexts. These limitations underscore the need for caution when extrapolating our results to broader sociodemographic contexts beyond the scope of our study. We cannot model all factors influencing prompting outcomes.While we demonstrate that model predictions can effectively be changed when incorporating sociodemographic information within the prompt (SS4.1), we acknowledge that this is one among many of the factors influencing model predictions in a zero-shot prompting setup. We account for the influence of prompt formulation by investigating its effect in SS4.3 and are aware of the growing body of work investigating various other factors which influence prompting results, such as correct label assignment (Min et al., 2022), domain-specific vocabulary (Fei et al., 2023) or example order (Kumar and Talukdar, 2021; Zhao et al., 2021; Lu et al., 2022). Furthermore, it has been shown that human interpretation of the prompt semantics often is not aligned with the output of the model's continuous representation (Webson and Pavlick, 2022; Khashabi et al., 2022). The majority of these works deals with incontext learning or few-shot learning in general which we do not investigate in this study. However, we see these phenomena as support for our overall argument (SS4.3) that estimating the degree of alignment of any LLM should not be merely based on the outcome of prompting with varying sociodemographic profiles. This is due to their lack of robustness when changing the surface form of the prompt while keeping its semantic meaning similar. Simulating annotations using prompting mechanisms is limited.Employing humans for annotation projects in NLP is a multi-step process. It involves the formulation of annotation guidelines and their iterative refinement through discussions between the annotators and coordinators. In most cases, annotators are undergoing a qualification process or test to evaluate their eligibility for contributing to the annotations. These factors influence the decision-making process of the annotators and ultimately the annotation agreement. In our experimental setup, we do not provide any additional instructions to the model than the prompt instructions which we present in SS2 and SSA.6, thus possibly underspecifying the task instruction to the model. Our experimental setup is designed driven by the following observations; for most datasets, the original annotation guidelines are non-retrievable and could only be guessed from the description in the corresponding research publication. Further, LLMs are limited with regards to the context input size (see Table 6 for details) and using longer prompts would have limited our experiments to a few models with appropriate input sizes. ## Acknowledgements We thank Max Glockner, Hovhannes Tamoyan and Anmoel Goel for their feedback on an early draft of this work and the authors of the datasets we used for providing them publicly. We gratefully acknowledge the support of Microsoft with a grant for access to OpenAI GPT models via the Azure cloud (Accelerate Foundation Model Academic Research). 
This work has been funded by the German Federal Ministry of Education and Research (BMBF) under the promotional reference 01UP2229B (KoPoCoV) and by the German Research Foundation (DFG) as part of the Research Training Group KRITIS No. GRK 2222. Anne Lauscher's work is funded under the Excellence Strategy of the Federal Government and the Lander.
2309.16139
Two-Step Active Learning for Instance Segmentation with Uncertainty and Diversity Sampling
Training high-quality instance segmentation models requires an abundance of labeled images with instance masks and classifications, which is often expensive to procure. Active learning addresses this challenge by striving for optimum performance with minimal labeling cost by selecting the most informative and representative images for labeling. Despite its potential, active learning has been less explored in instance segmentation compared to other tasks like image classification, which require less labeling. In this study, we propose a post-hoc active learning algorithm that integrates uncertainty-based sampling with diversity-based sampling. Our proposed algorithm is not only simple and easy to implement, but it also delivers superior performance on various datasets. Its practical application is demonstrated on a real-world overhead imagery dataset, where it increases the labeling efficiency fivefold.
Ke Yu, Stephen Albro, Giulia DeSalvo, Suraj Kothawade, Abdullah Rashwan, Sasan Tavakkol, Kayhan Batmanghelich, Xiaoqi Yin
2023-09-28T03:40:30Z
http://arxiv.org/abs/2309.16139v1
# Two-Step Active Learning for Instance Segmentation with Uncertainty and Diversity Sampling ###### Abstract Training high-quality instance segmentation models requires an abundance of labeled images with instance masks and classifications, which is often expensive to procure. Active learning addresses this challenge by striving for optimum performance with minimal labeling cost by selecting the most informative and representative images for labeling. Despite its potential, active learning has been less explored in instance segmentation compared to other tasks like image classification, which require less labeling. In this study, we propose a post-hoc active learning algorithm that integrates uncertainty-based sampling with diversity-based sampling. Our proposed algorithm is not only simple and easy to implement, but it also delivers superior performance on various datasets. Its practical application is demonstrated on a real-world overhead imagery dataset, where it increases the labeling efficiency fivefold. ## 1 Introduction Instance segmentation is the task of identifying and segmenting individual objects in an image, and it has a wide range of applications in real-world domains such as autonomous driving [1, 2], medical imaging [3, 4], and aerial imagery analysis [5, 6], among others. However, obtaining annotations for instance segmentation is considerably more expensive than for other computer vision tasks due to the need for unique labeling of each instance and precise pixel-level segmentation of objects. This creates a significant bottleneck in the development and implementation of state-of-the-art instance segmentation models. Active learning is a technique designed to reduce labeling cost by iteratively selecting the most _informative_ samples from unlabeled data. There are two major ways to select the next batch to be labeled: uncertainty-based [7, 8, 9, 10, 11], which selects samples for which the model has low prediction confidence, and diversity-based [12, 13, 14, 15, 16], which selects samples that are representative of the dataset. Both of these active learning approaches have been successfully applied in various computer vision tasks, including image classification [17, 18, 19], object detection [20, 21, 22], and semantic segmentation [23, 24, 25]. However, very few existing works have addressed active learning for instance segmentation, and those that do focus on uncertainty-based approaches. Developing an effective active learning method for instance segmentation entails addressing several crucial factors. Instance segmentation models produce diverse types of output per instance, including a class distribution, a bounding box location, and a dense segmentation mask. This variety complicates the task of determining the most suitable uncertainty metric for active learning. Moreover, solely depending on the uncertainty metric can lead to redundancy, as the most uncertain instances may share the same challenging semantic type (_e.g._, small pedestrians in the background). Hence, incorporating diversity sampling to account for the semantic diversity within the unlabeled pool is crucial. In addition, information from individual instances needs to be aggregated into an image-level score for ranking and labeling, further complicating the active learning method's design. We propose TAUDIS, a **T**wo-step **A**ctive learning algorithm that combines **U**ncertainty and **D**iversity sampling strategies for **I**nstance **S**egmentation.
In the first step, TAUDIS uses uncertainty sampling to identify an initial set of the most informative object instances. In the second step, we extract region feature maps from an intermediate convolutional layer in the network to represent each instance and apply the _maximum set cover_ algorithm [26] to select a diverse subset of instances from the most uncertain instances identified earlier. Finally, to select images for labeling, we use a majority vote approach that prioritizes images containing the most informative instances selected by the two-step algorithm until the labeling budget is met. Fig. 1 provides an overview of the proposed method. We evaluated TAUDIS on two instance segmentation datasets: MS-COCO [27], and a proprietary overhead imagery dataset. The results show that TAUDIS consistently outperforms the compared active learning strategies. ## 2 Related Work Uncertainty-based methods select informative samples based on their ambiguities. In image classification, uncertainty is assessed via softmax probabilities using metrics such as least confidence [28], margin [17], and entropy [29]. For semantic segmentation, pixel-wise class probability maps are used to construct image-level [24; 25] or region-level [30; 31] uncertainty scores. In object detection, uncertainty scores are first calculated for each bounding box and then aggregated to the image level, with previous research exploring the aggregation of scores across multiple locations [32; 20] or multiple losses [33; 34]. However, finding the optimal uncertainty metric for instance segmentation is challenging due to the choices of representing uncertainties as either classification or segmentation uncertainties at the instance level, along with multiple options for aggregating them to the image level. In this paper, we explore numerous combinations of uncertainty metrics and aggregation methods in the context of instance segmentation. Diversity-based active learning methods aim to identify samples that are representative of the data distribution, with methods like Core-set [35], VAAL [36], CDAL [37], and DBAL [38] being developed for deep neural networks. The fusion of uncertainty and diversity sampling, demonstrated in approaches by Elhamifar _et al._[39], Yin _et al._[40], Yang _et al._[23], and Wu _et al._[41], has become increasingly popular in active learning due to the combined benefits. To our knowledge, our method is the first to integrate both sampling strategies in active learning for instance segmentation. Despite the higher annotation cost, research on active learning for instance segmentation has been limited. Wang _et al._[42] proposed a triplet uncertainty metric combining model predictions from classes, bounding boxes, and segmentation masks, but this approach fails to consider diversity and requires modifications to the Mask R-CNN [43] architecture. Our method, on the other hand, accounts for diversity to eliminate redundancy and can be integrated with any existing architecture, including more recent transformer-based models like Mask2Former [44]. Recently, Tang _et al._[45] proposed an active learning method trained with point supervision, yet it assumes pre-existing class labels and bounding boxes, which is not feasible for raw real-world images. In contrast, our method, which makes image-level sampling decisions without prior annotations, is more broadly applicable. Figure 1: Schematic of TAUDIS. First, the uncertainty of each instance is assessed to identify the most informative instances from unlabeled data.
Second, a graph-based maximum set cover algorithm is used to select the most representative instances among the uncertain ones. Finally, a majority vote approach selects images containing the most instances filtered from the previous steps for labeling. Symbols \(\mathcal{D_{L}},\mathcal{D_{U}},\mathcal{D_{A}}\) represent the labeled, unlabeled, and annotated sets, respectively. ## 3 Method ### Model Training We initiate the process of active learning by training an instance segmentation model, denoted as \(\mathcal{M}_{\theta}\), on a labeled set of data \(\mathcal{D}_{\mathcal{L}}\). Our method is compatible with any instance segmentation model utilizing region-level features for mask prediction. In this study, we employ Mask R-CNN [43] as our model, given its extensive use in instance segmentation applications. During model inference, \(\mathcal{M}_{\theta}\) generates \(N\) detected instances \(\{t_{n}\}_{n=1}^{N}\) for a given input image \(x\), where \(n\) denotes the instance index. For each instance \(t_{n}\), we obtain its associated class-probability distribution \(p_{n}\), sigmoid masks \(m_{n}\), and instance embedding \(r_{n}\). The instance embedding \(r_{n}\) is computed as the average pooling of the regional feature map, which can be extracted from intermediate layers in either the object classification or object segmentation branch. For our experiments, we choose the feature map of the last convolutional layer in the object segmentation branch. ### Instance-level Uncertainty Measures The first step of our method involves identifying uncertain instances from unlabeled images \(\mathcal{D}_{\mathcal{U}}\) based on the current state of \(\mathcal{M}_{\theta}\). Unlike active learning methods for image classification that rely on image-level uncertainty scores, our approach uses instance-level uncertainty measures to select the top candidates. We explore three metrics to measure uncertainty at the instance level: classification margin (\(\text{CM}_{n}\)), classification entropy (\(\text{CE}_{n}\)) and segmentation entropy (\(\text{SE}_{n}\)). Definitions of these metrics can be found in Appendix A. ### Instance-level Diversity Sampling Uncertainty-based sampling alone may not yield satisfactory results when the sampled batch contains a considerable amount of redundancy [46; 47]. This challenge is particularly relevant in instance segmentation, where the instance-level uncertainty scores are influenced not only by the semantic properties of objects but also by factors such as their size and spatial arrangement (_e.g._, small pedestrians in the background). To avoid redundancy, our method oversamples uncertain instances beyond the designated budget in the first step, and subsequently selects a diverse subset in the second step. We formulate the diversity sampling as a _maximum k-set cover problem_. Formally, let \(\mathcal{T}_{\mathcal{F}}\) denote all detected instances in \(\mathcal{D}_{\mathcal{U}}\) and \(\mathcal{T}_{\mathcal{C}}\) denote the subset of most uncertain instances in \(\mathcal{T}_{\mathcal{F}}\). Our objective is to select a subset of instances \(\mathcal{T}_{\mathcal{D}}\), \(\mathcal{T}_{\mathcal{D}}\subset\mathcal{T}_{\mathcal{C}}\), that is highly representative of \(\mathcal{T}_{\mathcal{F}}\). To achieve this, we build an undirected similarity graph, \(G(V,E)\), such that vertices, \(V\), are the instances in \(\mathcal{T}_{\mathcal{F}}\) and edges, \(E\), represent the similarity between instances.
To quantify the similarity, \(s_{i,j}\), between instances \(t_{i},t_{j}\in\mathcal{T}_{\mathcal{F}}\), we use the cosine similarity between their respective embeddings \(r_{i}\) and \(r_{j}\). The edges are kept only if \(s_{i,j}\) is larger than a similarity threshold, \(\sigma\). To limit our samples to \(\mathcal{T}_{\mathcal{C}}\), we select a subset of vertices, \(V_{C}=\{v_{i}\mid v_{i}\in T_{C}\}\), and all associated edges, \(E_{C}=\{e_{i,j}\mid v_{i}\in V_{C}\text{ or }v_{j}\in V_{C}\}\). Note that vertices in a graph defined as \(G(V_{C},E_{C})\), may have dangling edges where one end of the edge is in \(\mathcal{T}_{\mathcal{C}}\) but the other end is in \(\mathcal{T}_{\mathcal{F}}-\mathcal{T}_{\mathcal{C}}\). We define our maximum k-set cover problem by the bipartite graph \(G(V_{C},U_{C},E_{C})\), where \(U_{C}=\{u_{i}|e_{i,j}\in E_{C}\text{ or }e_{j,i}\in E_{C}\}\). \(V_{C}\) is the subset to sample from and \(U_{C}\) is the universe we want to cover. We determine the optimal subset of instances \(\mathcal{T}_{\mathcal{D}}\) by maximizing coverage of \(G(V_{C},U_{C},E_{C})\) constrained to \(|\mathcal{T}_{\mathcal{D}}|=k\), using the distributed submodular optimization algorithm introduced in [26]. We use two hyperparameters, \(\alpha\) and \(\beta\) (\(\alpha>\beta>1\)), as multipliers to the image-level annotation budget \(\mathcal{B}\) to adjust the upsampling and downsampling of instances in the first and second steps, respectively. In particular, we first select \(|\mathcal{T}_{\mathcal{C}}|=\alpha\mathcal{B}\) most uncertain instances and then refine this set to the \(|\mathcal{T}_{\mathcal{D}}|=\beta\mathcal{B}\) most representative instances. ### Majority Vote Aggregation To acquire annotations for the selected instances, we prioritize images with the highest number of instances in \(\mathcal{T}_{\mathcal{D}}\). This is based on the intuition that images containing a large number of uncertain and diverse instances likely encompass important visual concepts that the model has yet to learn. We compute the number of instances \(n_{D}\) in \(\mathcal{T}_{\mathcal{D}}\) for each image in the unlabeled pool \(\mathcal{D}_{\mathcal{U}}\) and rank the images by \(n_{D}\) in descending order. The top \(\mathcal{B}\) images are then chosen for annotation. Subsequently, the \(\mathcal{B}\) newly annotated images \(\mathcal{D}_{\mathcal{A}}\) are removed from the unlabeled dataset and added to the labeled dataset, which is utilized to retrain the instance segmentation model. The complete algorithm for our proposed active learning method is outlined in Algorithm 1. ``` 0: Labeled data: \(\mathcal{D}_{\mathcal{L}}\), Unlabeled data: \(\mathcal{D}_{\mathcal{U}}\), Model: \(\mathcal{M}_{\theta}\), Budget: \(\mathcal{B}\), Number of rounds: \(\mathcal{I}\), Hyperparameters: \(\alpha\), \(\beta\) (\(\alpha>\beta>1\)), \(0<\sigma<1\) 1:for\(i=1:\mathcal{I}\)do 2: Train \(\mathcal{M}_{\theta}\) on \(\mathcal{D}_{\mathcal{L}}\) 3: Obtain detected instances \(\mathcal{T}_{\mathcal{F}}\leftarrow\mathcal{M}_{\theta}(\mathcal{D}_{\mathcal{ U}})\) 4: Compute uncertainty scores for each instance in \(\mathcal{T}_{\mathcal{F}}\) 5: Select top uncertain instances \(\mathcal{T}_{\mathcal{C}}\subset\mathcal{T}_{\mathcal{F}}\). 
\(|\mathcal{T}_{\mathcal{C}}|=\alpha\mathcal{B}\) 6: Construct \(\mathcal{S}\in\mathbb{R}^{|\mathcal{T}_{\mathcal{C}}|\times|\mathcal{T}_{\mathcal{F}}|}\), the pairwise similarity matrix between \(\mathcal{T}_{\mathcal{C}}\) and \(\mathcal{T}_{\mathcal{F}}\) instance embeddings. Matrix elements smaller than \(\sigma\) are set to zero. 7: Determine top representative instances \(\mathcal{T}_{\mathcal{D}}\subset\mathcal{T}_{\mathcal{C}}\) via maximum k-set cover on a bipartite graph defined by \(\mathcal{S}\) and \(k=|\mathcal{T}_{\mathcal{D}}|=\beta\mathcal{B}\). Rows of \(\mathcal{S}\) define the subsets, and columns define the universe of the maximum k-set cover problem. 8: Select \(\mathcal{B}\) images with the most instances in \(\mathcal{T}_{\mathcal{D}}\) 9: Annotate selected images \(\mathcal{D}_{\mathcal{A}}\). \(|\mathcal{D}_{\mathcal{A}}|=\mathcal{B}\) 10:\(\mathcal{D}_{\mathcal{U}}:=\mathcal{D}_{\mathcal{U}}-\mathcal{D}_{\mathcal{A}}\), \(\mathcal{D}_{\mathcal{L}}:=\mathcal{D}_{\mathcal{L}}+\mathcal{D}_{\mathcal{A}}\) 11:endfor 12:return\(\mathcal{M}_{\theta}\) ``` **Algorithm 1** TAUDIS (Illustration in Fig. 1) ## 4 Experiments ### Experimental Setup **Datasets.** Our method is evaluated on two datasets: COCO [27] and OVERHEAD, a proprietary building segmentation dataset. COCO comprises around 118k training images and 5k validation images. The OVERHEAD dataset consists of overhead images with bounding boxes and per-instance segmentation masks for buildings. It contains around 50k training images and 6k validation images. For each dataset, we use the training set as the unlabeled data and evaluate the trained models on the validation set. **Active Learning Setting.** For our experiments, we begin by randomly selecting around 25% of the images from the unlabeled set to form the initial labeled data. In each subsequent active learning cycle, we add a small, fixed batch size of images to the labeled set. The active learning iterations continue until at least 90% of the unlabeled samples have been selected for labeling. Based on the results of the ablation study (Appendix B), we choose segmentation entropy \(\text{SE}_{n}\) as the instance-level uncertainty metric in our TAUDIS method. For further implementation details, please refer to Appendix C. **Baselines.** We compare our method against several baselines, including random sampling, uncertainty-based sampling using metrics such as classification margin, classification entropy, and segmentation entropy, as well as diversity-based sampling methods such as Core-set [35] and Round-robin [48]. We also compare our method to a variation of TAUDIS that combines uncertainty and diversity sampling at the image level. Additional details about the baselines can be found in Appendix D. For evaluation, we use the mean average precision at 50% intersection over union (mAP50) as the performance metric for assessing the instance segmentation models. ### Results **COCO.** Fig. 2(a) compares TAUDIS with various baseline methods: random, uncertainty sampling with segmentation entropy, and two diversity-based methods (_i.e.,_ Core-set and Round-robin). TAUDIS consistently outperforms the other baselines, particularly during the early active learning cycles. The results also highlight the effectiveness of segmentation entropy as a metric for uncertainty in instance segmentation, as it is the second-best performer.
Furthermore, the superiority of TAUDIS over TAUDIS-IMG and Core-set suggests that instance-level features are a better proxy for diversity than image-level features for the sub-image task of instance segmentation. **OVERHEAD.** Fig. 2(b) shows the results on our building segmentation dataset. Since there is only one class (buildings), we omitted the round-robin baseline. Notably, TAUDIS achieves superior performance using only 10k images compared to the random strategy's performance with 45k images, resulting in a nearly _fivefold_ improvement in labeling efficiency. This finding highlights the effectiveness of the diversity-based sampling strategy, even in a single-class setting. ## 5 Conclusion In conclusion, our study presents a novel active learning algorithm for instance segmentation, addressing the lack of post-hoc methods in this field. By combining uncertainty and diversity sampling at the instance level, our algorithm outperforms various baselines across multiple datasets. Moreover, we demonstrate its practical value on a real-world satellite imagery dataset, achieving a _fivefold_ improvement in labeling efficiency. As future work, we plan to explore the use of transformer-based segmentation architectures, such as MaskFormer [49] and Mask2Former [44], which take a unified view of instance and semantic segmentation. We expect that using these richer embeddings may synergize effectively with our diversity-based active learning strategy, potentially unlocking an efficient batch active learning approach for panoptic segmentation.
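For readers who want to prototype the selection procedure, the following is a minimal, self-contained Python sketch of one TAUDIS acquisition round as described in Section 3. It is an illustration under simplifying assumptions rather than the authors' released code: instance uncertainty is taken to be the segmentation entropy of the sigmoid mask, the similarity graph is a thresholded cosine-similarity matrix, and the maximum k-set cover is solved with the standard greedy heuristic instead of the distributed submodular optimizer of [26]; all function and variable names are hypothetical.

```python
import numpy as np

def segmentation_entropy(mask_prob: np.ndarray) -> float:
    """Mean per-pixel binary entropy of a sigmoid mask (instance-level uncertainty)."""
    p = np.clip(mask_prob, 1e-6, 1 - 1e-6)
    return float(np.mean(-p * np.log(p) - (1 - p) * np.log(1 - p)))

def greedy_max_k_cover(incidence: np.ndarray, k: int) -> list:
    """Greedy maximum k-set cover on a bipartite graph given as a boolean incidence matrix.

    Rows are candidate (uncertain) instances, columns are all detected instances;
    incidence[i, j] == True means candidate i 'covers' instance j.
    """
    covered = np.zeros(incidence.shape[1], dtype=bool)
    chosen = []
    for _ in range(min(k, incidence.shape[0])):
        gains = (incidence & ~covered).sum(axis=1)  # marginal coverage of each candidate
        gains[chosen] = -1                          # never re-pick a selected candidate
        best = int(np.argmax(gains))
        if gains[best] <= 0:
            break
        chosen.append(best)
        covered |= incidence[best]
    return chosen

def taudis_select_images(image_ids, uncertainties, embeddings, budget, alpha=8, beta=4, sigma=0.7):
    """One TAUDIS acquisition round.

    image_ids[i]     : image containing detected instance i
    uncertainties[i] : instance-level uncertainty (e.g. segmentation entropy)
    embeddings[i]    : pooled region feature of instance i
    Returns up to `budget` image ids ranked by majority vote.
    """
    uncertainties = np.asarray(uncertainties)
    emb = np.asarray(embeddings, dtype=float)
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)

    # Step 1: oversample the alpha*B most uncertain instances.
    cand = np.argsort(-uncertainties)[: alpha * budget]

    # Step 2: thresholded cosine-similarity graph + greedy max k-set cover (k = beta*B).
    incidence = (emb[cand] @ emb.T) > sigma
    keep = [cand[i] for i in greedy_max_k_cover(incidence, beta * budget)]

    # Step 3: majority vote -- rank images by how many selected instances they contain.
    votes = {}
    for idx in keep:
        votes[image_ids[idx]] = votes.get(image_ids[idx], 0) + 1
    ranked = sorted(votes, key=votes.get, reverse=True)
    return ranked[:budget]
```

The greedy heuristic keeps the sketch dependency-free while retaining the usual (1 - 1/e) approximation guarantee for maximum coverage; in practice the multipliers alpha and beta and the similarity threshold sigma would need to be tuned per dataset.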
2309.15430
Evaluation of Constrained Reinforcement Learning Algorithms for Legged Locomotion
Shifting from traditional control strategies to Deep Reinforcement Learning (RL) for legged robots poses inherent challenges, especially when addressing real-world physical constraints during training. While high-fidelity simulations provide significant benefits, they often bypass these essential physical limitations. In this paper, we experiment with the Constrained Markov Decision Process (CMDP) framework instead of the conventional unconstrained RL for robotic applications. We perform a comparative study of different constrained policy optimization algorithms to identify suitable methods for practical implementation. Our robot experiments demonstrate the critical role of incorporating physical constraints, yielding successful sim-to-real transfers, and reducing operational errors on physical systems. The CMDP formulation streamlines the training process by separately handling constraints from rewards. Our findings underscore the potential of constrained RL for the effective development and deployment of learned controllers in robotics.
Joonho Lee, Lukas Schroth, Victor Klemm, Marko Bjelonic, Alexander Reske, Marco Hutter
2023-09-27T06:49:20Z
http://arxiv.org/abs/2309.15430v1
# Evaluation of Constrained Reinforcement Learning Algorithms ###### Abstract Shifting from traditional control strategies to Deep Reinforcement Learning (RL) for legged robots poses inherent challenges, especially when addressing real-world physical constraints during training. While high-fidelity simulations provide significant benefits, they often bypass these essential physical limitations. In this paper, we experiment with the Constrained Markov Decision Process (CMDP) framework instead of the conventional unconstrained RL for robotic applications. We perform a comparative study of different constrained policy optimization algorithms to identify suitable methods for practical implementation. Our robot experiments demonstrate the critical role of incorporating physical constraints, yielding successful sim-to-real transfers, and reducing operational errors on physical systems. The CMDP formulation streamlines the training process by separately handling constraints from rewards. Our findings underscore the potential of constrained RL for the effective development and deployment of learned controllers in robotics. ## I Introduction The use of Deep Reinforcement Learning (RL) for robotic control is on the rise, revolutionizing the way control policies are created for legged robots and other complex dynamic systems. Particularly, model-free approaches have gained prominence, replacing traditional optimization-based methods. This paradigm shift can be attributed to the high-capacity neural network models, effective model-free algorithms that can solve complex problems, and efficient tools for data-generation (i.e. simulations). As a result, the synthesis of locomotion policies for legged robots has become more straightforward and accessible, as evidenced by the growing number of RL-based controllers in recent literature. The so-called sim-to-real approach is commonly employed, where policy training solely relies on simulated data. This is due to the inherent requirements of widely-used algorithms such as Proximal Policy Optimization (PPO) [1] and Soft Actor Critic (SAC) [2], which demand random exploration and a significant number of samples. As a result, training policies directly on hardware is both impractical and hazardous. In recent years, diverse approaches have emerged to enhance simulation fidelity (e.g., actuator modeling [3], hybrid simulator [4, 5]), and to robustify policies against domain shifts (e.g., dynamics randomization [6, 7], privileged training [8]). Notably, while most existing research emphasizes enhancing simulation accuracy and regularizing policies for sim-to-real transfer, a gap persists in the literature -- a lack of attention to physical constraints. Despite the studies done in understanding and simulating the physical properties of hardware, the incorporation of essential physical constraints during training remains under-explored. These constraints can be physical, such as limits on joint velocities, torque limits, or safety regulations. Considering such constraints is a common practice in model-based approaches [9, 10]. Existing literature provides compelling evidence of its significance. For instance, Gangapurwala et al. [11] first utilized a Constrained Proximal Policy Optimization (CPPO) algorithm to train a locomotion controller for a quadrupedal robot, achieving both constraint-consistency and high performance. Kim et al. 
[12] also experimented with a modified version of the Interior-point Policy Optimization (IPO) [13] algorithm and showed rough-terrain locomotion with a generalizable Constrained Markov Decision Process (CMDP) formulation. In this paper, we evaluate various first-order constrained policy optimization methods, focusing on the application to legged locomotion. We formulate velocity-tracking locomotion as a CMDP [14], effectively isolating the physical constraints from the reward function. Additionally, we introduce a modification to existing algorithms to enhance both stability and final performance. Our main results can be summarized as follows: 1. We conduct a comprehensive comparison of first-order constrained RL algorithms and select the most suitable one for practical applications based on constraint violations and final performance. 2. We demonstrate the effectiveness of the constrained RL approach in handling physical constraints with the wheeled-legged robot shown in Fig. 1. From our experiments, we found that the constrained RL formulation yields fewer constraint violations compared to the commonly used unconstrained approach. Additionally, this reduces the reward-shaping effort for physical limitations, a common practice in the existing research. Fig. 1: Wheeled-legged locomotion trained via constrained policy optimization. Additional components to conventional PPO are highlighted. * This is a preprint. We will publish our implementations of the algorithm in [https://github.com/junja94/cmdp_ppos](https://github.com/junja94/cmdp_ppos) with the final version of the paper. ## II Background ### _Constrained Policy Optimization_ In RL, a control problem is typically modeled as a Markov Decision Process (MDP), which is described by a tuple \((S,A,r,p,\mu)\). Here, \(S\) is the set of states, \(A\) is the set of actions, \(r:S\times A\times S\rightarrow\mathbb{R}\) is the reward function, \(p:S\times A\times S\rightarrow[0,1]\) is the state transition probability and \(\mu\) is the initial state distribution. To solve an MDP, we aim to find a policy \(\pi:S\mapsto\mathcal{P}(A)\) that maximizes \[J_{R}(\pi)=\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t},s_{t+1})\right], \tag{1}\] where \(\gamma\in[0,1)\) is the discount factor. Here, the expectation \(\mathbb{E}[\ldots]\) represents the empirical average over a finite batch of samples, where \(s_{0}\) is sampled from the initial state distribution \(\mu\) and trajectories are sampled using \(\pi\). To address constrained problems, this framework is extended into a CMDP. The MDP is augmented with a set \(C\) of cost functions \(\{c_{1},c_{2},\ldots,c_{n}\}\) that capture constraint violations, with corresponding limits \(E=\{\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{n}\}\) [14, 15]. Each \(c_{i}:S\times A\times S\rightarrow\mathbb{R}\) maps state-action-state triplets to the cost of the state transition. In the constrained setting, an optimal policy maximizes the expected discounted return in Eq. 1, while keeping the discounted sum of future costs \(c_{i}\) below their respective threshold \(\epsilon_{i}\), yielding the constrained optimization problem: \[\max_{\pi} J_{R}(\pi)\] (2) s.t. \[\forall i\in\{1,\ldots,n\},\ J_{C_{i}}(\pi)\leq\epsilon_{i},\] where \[J_{C_{i}}(\pi)=\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}c_{i}(s_{t},a_{t},s_{t+1})\right]. \tag{3}\] While many constrained RL problems in the literature consider a single constraint (e.g., [16, 15, 17]), the CMDP framework is not limited to the single-constraint setup [12].
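To make Eqs. 1-3 concrete, the short sketch below estimates the discounted reward return \(J_R\) and the per-constraint cost returns \(J_{C_i}\) from a batch of sampled rollouts with a Monte-Carlo average. This is a generic illustration of the CMDP quantities, not code from the paper; the array shapes and names are assumed for the example.

```python
import numpy as np

def discounted_return(x: np.ndarray, gamma: float) -> np.ndarray:
    """Discounted sum over the time axis: sum_t gamma^t * x[:, t] for each rollout."""
    T = x.shape[1]
    discounts = gamma ** np.arange(T)
    return (x * discounts).sum(axis=1)

def estimate_cmdp_objectives(rewards, costs, gamma=0.99):
    """Monte-Carlo estimates of J_R (Eq. 1) and J_{C_i} (Eq. 3).

    rewards : (num_rollouts, T) array of r(s_t, a_t, s_{t+1})
    costs   : sequence of (num_rollouts, T) arrays, one per constraint c_i
    """
    J_R = float(discounted_return(np.asarray(rewards), gamma).mean())
    J_C = [float(discounted_return(np.asarray(c), gamma).mean()) for c in costs]
    return J_R, J_C  # a policy is feasible if J_C[i] <= epsilon_i for all i
```

In practice the infinite-horizon sums are truncated at the episode length, and a feasibility check simply compares each \(J_{C_i}\) against its threshold \(\epsilon_i\).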
Using the performance difference lemma, Shen et al. [16] showed that the constrained optimization problem in Eq. 2 can be reformulated as follows: \[\max_{\pi^{\prime}} \mathbb{E}\bigg{[}A_{R,t}^{\pi}(s,a)\bigg{]}\] (4a) s.t. \[\underbrace{J_{C_{i}}(\pi)+\frac{1}{1-\gamma}\mathbb{E}_{\pi^{\prime}}\bigg{[}A_{C_{i},t}^{\pi}(s,a)\bigg{]}}_{J_{C_{i}}(\pi^{\prime})}\leq\epsilon_{i}\quad\forall i. \tag{4b}\] where \(A_{R,t}^{\pi}(s,a)\) and \(A_{C_{i},t}^{\pi}(s,a)\) are estimators of the reward advantage function and cost advantage function for the \(i\)-th constraint at timestep \(t\), respectively. ### _First-order Optimization Methods for CMDPs_ We compare five first-order policy optimization algorithms in order to identify a method that is performant and stable. As higher-order algorithms typically require resource-intensive computation of the inverse Hessian or inverse Hessian-vector products (see, e.g., CPO [15], PCPO [18], TRPO-Lagrangian [17]), we restrict our scope to first-order algorithms. We considered practical aspects such as the number of hyperparameters, the availability of an implementation, and the presented empirical results. #### Ii-B1 P3O Shen et al. [16] proposed to augment the PPO objective with penalties on the constraint violations. The objective function for Penalized Proximal Policy Optimization (P3O) is defined as: \[L_{R}^{\mathrm{CLIP}}(\theta^{\prime})-\sum_{i}\kappa_{i}\cdot\max\left\{0,J_{C_{i}}(\pi^{\prime})-\epsilon_{i}\right\}, \tag{5}\] where \(\kappa_{i}\) controls the weight of each constraint. The first term \(L_{R}^{\mathrm{CLIP}}(\theta^{\prime})\) is the clipped surrogate objective by Schulman et al. [1], defined as: \[L_{R}^{\mathrm{CLIP}}(\theta^{\prime})=\mathbb{E}\left[\min\left(r_{t}(\theta^{\prime})\tilde{A}_{R,t}^{\pi_{\theta}},\mathrm{clip}(r_{t}(\theta^{\prime}))\tilde{A}_{R,t}^{\pi_{\theta}}\right)\right], \tag{6}\] where \(r_{t}(\theta^{\prime})\) denotes the probability ratio \(\frac{\pi^{\prime}(a_{t}|s_{t})}{\pi(a_{t}|s_{t})}\), and the operation \(\mathrm{clip}(\cdot)\) clips the value between \(1-\delta\) and \(1+\delta\), with \(\delta\) controlling the magnitude of policy updates. \(\tilde{A}_{R,t}\) denotes the normalized reward advantage. Similarly, the final objective of P3O is obtained using importance sampling and clipping of the importance ratios of the cost advantages: \[L^{P3O}(\theta^{\prime})=L_{R}^{\mathrm{CLIP}}(\theta^{\prime})-\sum_{i}\kappa_{i}\cdot\max\left\{0,L_{C_{i}}^{\mathrm{VIOL}}(\theta^{\prime})\right\}, \tag{7}\] with \[L_{C_{i}}^{\mathrm{VIOL}}(\theta^{\prime})=L_{C_{i}}^{\mathrm{CLIP}}(\theta^{\prime})+(1-\gamma)(J_{C_{i}}(\pi_{\theta})-\epsilon_{i})\] \[L_{C_{i}}^{\mathrm{CLIP}}(\theta^{\prime})=\mathbb{E}\left[\max(r_{t}(\theta^{\prime})A_{C_{i},t}^{\pi_{\theta}},\mathrm{clip}(r_{t}(\theta^{\prime}))A_{C_{i},t}^{\pi_{\theta}})\right].\] #### Ii-B2 PPO-Lagrangian Chow et al. [19] proposed to utilize the Lagrangian relaxation. The Lagrangian method approaches constrained problems with objective \(f(\theta)\) and constraint \(g(\theta)\) by minimizing the Lagrange dual with dual variable \(\lambda\), resulting in the unconstrained objective: \[\min_{\lambda\geq 0}\max_{\theta}\mathcal{L}(\theta,\lambda)\doteq f(\theta)-\lambda g(\theta). \tag{9}\] Approximate solutions of this minimax objective can be obtained via the iterative primal-dual method, which alternates between updates on the primal variable \(\theta\) and the dual variable \(\lambda\) [20].
In practice, the updates are typically realized with gradient ascent and descent steps on \(\theta\) and \(\lambda\), where the other variable is kept fixed. Intuitively, \(\lambda\) behaves like a penalty parameter that increases when the constraint is violated and decreases when it is satisfied. OpenAI researchers [17] suggested utilizing the iterative primal-dual method with the PPO objective to derive the following update: \[\theta^{\prime} =\theta+\alpha_{\theta}\nabla_{\theta}\left(L_{R}^{\mathrm{CLIP}}(\theta)-\sum_{i}\lambda_{i}L_{C_{i}}^{\mathrm{CLIP}}(\theta)\right), \tag{10}\] \[\lambda_{i}^{\prime} =\lambda_{i}+\alpha_{\lambda_{i}}(J_{C_{i}}(\theta)-\epsilon_{i}). \tag{11}\] Here, \(\alpha_{\theta}\) and \(\alpha_{\lambda_{i}}\) are the learning rates of the gradient ascent and descent steps, respectively. \(\lambda_{i}^{\prime}\) is typically cut off at zero to ensure non-negativity of the penalty parameter. #### Ii-B3 IPO Inspired by the interior-point method for constrained optimization problems, IPO uses logarithm barrier functions \(\phi(x)=\log(-x)/k\) with hyperparameter \(k\) to achieve an infinitely large penalty as the estimated cost returns approach the constraint threshold \(\epsilon_{i}\). This results in the objective: \[L_{R}^{\mathrm{CLIP}}(\theta^{\prime})+\sum_{i}\phi(J_{C_{i}}(\theta^{\prime})-\epsilon_{i}), \tag{12}\] where \(J_{C_{i}}(\theta^{\prime})\) can be estimated based on the advantages using Eq. 4b. #### Ii-B4 CRPO Constraint-Rectified Policy Optimization (CRPO) [21] alternates between maximizing the objective and minimizing the constraint violations whenever the constraints are violated: \[L^{CRPO}(\theta^{\prime})=\mathds{1}_{J_{C}(\theta)\leq\epsilon_{i}}\cdot L_{R}^{\mathrm{CLIP}}(\theta^{\prime})-\mathds{1}_{J_{C}(\theta)>\epsilon_{i}}\cdot L_{C}^{\mathrm{CLIP}}(\theta^{\prime}). \tag{13}\] #### Ii-B5 FOCOPS First-Order Constrained Optimization in Policy Space (FOCOPS) solves the constrained optimization problem in policy space and then projects the solution back into parameter space, effectively also leading to an objective function with a constraint penalty [22]. For a detailed derivation, we refer to the original paper of Zhang et al. [22]. The algorithms P3O [16], PPO-Lagrangian [17], and IPO [13] relax the constrained optimization problem in Eq. 2 into an unconstrained one by adding penalties to the PPO objective. CRPO takes a simpler approach and alternates between PPO updates with reward and cost advantages [21]. FOCOPS [22] solves the constrained optimization problem in policy space. ## III Method We define a CMDP to train policies for velocity-tracking perceptive locomotion. The training environment and MDP inherit from the quadruped environment by Rudin et al. [23]. ### _CMDP for Perceptive Locomotion_ #### Iii-A1 Reward Functions Our reward function is a sum of different reward terms provided in Table I. We define three categories: * Task Reward: This defines the main task objective. In our experiment, the main task is to track the commanded linear velocity in the horizontal direction (\(v_{xy}\)) and the yaw rate (\(\omega_{z}\)). * Style Reward: There can be many solutions for velocity tracking, e.g., different gaits, base heights, or orientations. We use extra rewards to guide natural-looking motions. Kim et al. [12] similarly achieved this by applying constraints to gait and other physical quantities. * Constraint Reward: A high penalty is given when the physical limits are violated.
The constraint rewards are replaced by the constraints in CMDP. #### Iii-A2 Constraints For all constraints, we set \(\epsilon_{i}=0\) and defined cost functions such that each cost encapsulates a specific physical quantity: * Command Smoothness: For the sim-to-real transfer, it is crucial to consider the tracking bandwidth of the physical actuators [9]. Existing works regularize the output with negative rewards on the first or second order derivative of the commands [8, 23, 12]. This prevents infeasible commands, reduces sim-to-real discrepancy in the joint space, and vibration on the hardware. We define two constraint functions as: \[c_{c1,i} =\max(0,|(q_{t,i}^{des}-q_{t-1,i}^{des})/dt|-\dot{q}^{des,*})\] \[c_{c2,i} =\max(0,|(q_{t,i}^{des}-2q_{t-1,i}^{des}+q_{t-2,i}^{des})/dt^{2}|- \ddot{q}^{des,*})\] for each joints (\(i\in\) joints). \(dt\) is the timestep and \(\dot{q}^{des,*}\) and \(\ddot{q}^{des,*}\) are thresholds. The discounted sum of both costs are restricted to be below the desired thresholds by setting \(\epsilon=0.0\). \(\dot{q}^{des,*}\) and \(\ddot{q}^{des,*}\) are hyperparameters, with \(\dot{q}^{des,*}\) set as half of the joint speed limit, and \(\ddot{q}^{des,*}=\dot{q}^{des,*}/dt\). * Joint Speed: The constraint function is defined as an indicator function: \[c_{qv}=\mathds{1}(\sum_{i\in\text{joints}}\mathds{1}(|\dot{q}_{t,i}|>\dot{q}_{ i}^{*})>0.0).\] In other words, \(c_{qv}=1\) if any of the joints violates the speed limitation. \(\dot{q}^{*}\) is the physical limit of the actuator. * Joint Torque: Joint torque constraint is defined similarly to the joint speed constraint. \[c_{\tau}=\mathds{1}(\sum_{i\in\text{joints}}\mathds{1}(|\tau_{t,i}|>\tau_{i}^{* })>0.0).\] \begin{table} \begin{tabular}{l|l} \hline \hline \multicolumn{2}{c}{Task Rewards} \\ \hline Linear Velocity & \(\exp(-2.0\cdot\|v_{xy}^{long}-v_{xy}\|^{2})\) \\ Yaw Rate & \(\exp(-2.0\cdot\|v_{xy}^{long}-\omega_{z}\|^{2})\) \\ \hline \multicolumn{2}{c}{Style Rewards} \\ \hline Base Stability & \(\exp(-v_{x}^{2})+\exp(-\|\omega_{x,y}\|^{2})\) \\ Height & \(-0.5\) \(|h^{large}-h_{robot}|\), \(\dot{h}^{large}=0.5\) \\ Joint Torque Minimization & \(1\text{e-}e\cdot\|\dot{\tau}\|^{2}\) \\ Joint Motion & \(1\text{e-}5\) \(\|\dot{\tau}\|^{2}+1\text{e-}6\) \(\|\dot{\sigma}\|^{2}\) \\ \hline \multicolumn{2}{c}{Constraint Rewards (**Removed for CMDP**)} \\ \hline Command Smoothness 1 & \(-0.01\)\(\|q_{t}^{des}-q_{t-1}^{des}\|^{2}\) \\ Command Smoothness 2 & \(-0.01\)\(\|q_{t}^{des}-2q_{t-1}^{des}\|^{2}\) \\ Joint Torque Limits & \(-0.01\)\(\sum\max(|\tau_{i,t}|-\tau_{i,1}^{lim},0)^{2}\) \\ Joint Speed Limits & \(-0.1\)\(\sum\max(|\dot{q}_{t,i}|-\dot{q}_{lim},0)^{2}\) \\ Joint Position Upper Limits & \(-10.0\)\(\sum\max(q_{t,i}-q_{t,0}^{lim},0)^{2}\) \\ Joint Position Lower Limits & \(-10.0\)\(\sum\max(q_{t}^{lb}-q_{t,t}^{lb},0)^{2}\) \\ Body Contact & \(-\)(Number of non-wheel contacts) \\ \hline \hline \end{tabular} \end{table} TABLE I: Reward Functions. \(q\) and \(\tau\) are joint position and torque vectors. \(g^{b}\) denotes the gravity vector in base frame. * Joint Position: Each joint has different upper bound (\(q^{ub}\)) and lower bound (\(q^{lb}\)) positions. We only set the limit angle for the hip joints to avoid self-collision. \[c_{q}=\mathds{1}(\sum_{i\text{\emph{clip joints}}}(\mathds{1}(q_{t,i}>q^{ub}_{i}) +\mathds{1}(q_{t,i}<q^{lb}_{i}))>0.0).\] * Undesirable Body Contact: The cost is \(1.0\) when there is any contact at the body parts except for the wheel or foot, including self-collision. 
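The constraint costs defined above are simple elementwise functions of the commanded and measured joint quantities, so they are cheap to evaluate inside the simulation loop. The snippet below is an illustrative, vectorized reconstruction of these definitions in Python; the array shapes, names, and example limit values are assumptions for the sketch and are not taken from the authors' implementation.

```python
import numpy as np

def smoothness_costs(q_des_t, q_des_tm1, q_des_tm2, dt, qd_max, qdd_max):
    """Per-joint command smoothness costs c_{c1,i} and c_{c2,i} (ReLU of the excess)."""
    qd = np.abs(q_des_t - q_des_tm1) / dt
    qdd = np.abs(q_des_t - 2.0 * q_des_tm1 + q_des_tm2) / dt**2
    c1 = np.maximum(0.0, qd - qd_max)    # first-difference (command-velocity) excess
    c2 = np.maximum(0.0, qdd - qdd_max)  # second-difference (command-acceleration) excess
    return c1, c2

def limit_indicator_cost(values, limits):
    """Indicator cost: 1.0 if any joint magnitude exceeds its limit, else 0.0.

    Matches the form of the joint-speed (c_qv) and joint-torque (c_tau) constraints.
    """
    return float(np.any(np.abs(values) > limits))

def joint_position_cost(q, q_lb, q_ub):
    """c_q: 1.0 if any restricted joint leaves its [q_lb, q_ub] range, else 0.0."""
    return float(np.any((q > q_ub) | (q < q_lb)))

# Hypothetical usage with made-up joint arrays and the limits used in the experiments:
# c1, c2 = smoothness_costs(q_des, q_des_prev, q_des_prev2, dt=0.02, qd_max=4.0, qdd_max=200.0)
# c_qv  = limit_indicator_cost(joint_vel, limits=6.0)     # rad/s
# c_tau = limit_indicator_cost(joint_torque, limits=75.0) # Nm
```

Each returned cost feeds the corresponding \(J_{C_i}\) estimate with threshold \(\epsilon_i=0\), so any positive discounted cost return counts as a violation to be driven down.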
### _Normalizing Cost Advantages_ Advantage normalization is a widely used heuristic to improve the stability of policy gradient algorithms [24]. This technique is also applicable to constrained RL algorithms. Consider the simplified objective for P3O: \[L(\theta^{\prime})=\mathbb{E}\left[r(\theta^{\prime})\left(A^{\theta}_{R}-\kappa\cdot A^{\theta}_{C}\right)\right]\] The un-normalized advantages \(A^{\theta}_{R}\) and \(A^{\theta}_{C}\) can have different magnitudes, depending on the reward, the constraints, and the current policy's behavior. With normalized advantages, \[L(\theta^{\prime})=\mathbb{E}\left[r(\theta^{\prime})\left(\tilde{A}^{\theta}_{R}-\kappa\cdot\tilde{A}^{\theta}_{C}\right)\right] \tag{14}\] the weighting of the constraints (\(\kappa\)) remains unchanged regardless of the reward and cost functions. E.g., \(\kappa=1\) always corresponds to equal weighting of the reward and cost advantages. This makes the algorithm more stable and improves generalization across tasks, as also evidenced by Kim et al. [12]. Furthermore, this prevents the cost advantages from vanishing when the cost violation is low. For P3O and IPO, we need to reformulate the objectives in Eq. 7 and Eq. 12. We start by expressing the constraint in Eq. 4b in terms of normalized advantages: \[\frac{(1-\gamma)(J_{C_{i}}(\pi)-\epsilon_{i})+\mu_{C_{i}}}{\sigma_{C_{i}}}+\mathbb{E}\Big{[}\underbrace{\frac{A^{\pi}_{C_{i},t}-\mu_{C_{i}}}{\sigma_{C_{i}}}}_{\tilde{A}^{\pi}_{C_{i},t}}\Big{]}\leq 0\quad\forall i. \tag{15}\] Here, \(\mu_{C_{i}}\) and \(\sigma_{C_{i}}\) are the mean and standard deviation of the cost advantages, and \(\tilde{A}^{\pi}_{C_{i},t}\) denotes the normalized advantages. Using importance sampling with clipping, one obtains \[L^{\text{VIOL,N}}_{C_{i}}(\theta^{\prime})=L^{\text{CLIP,N}}_{C_{i}}(\theta^{\prime})+\frac{(1-\gamma)(J_{C_{i}}(\pi_{\theta})-\epsilon_{i})+\mu_{C_{i}}}{\sigma_{C_{i}}}\leq 0. \tag{16}\] The superscript \(N\) indicates the usage of normalized advantage estimates. Penalizing violations of Eq. 16 leads to the objectives \[L^{\text{N-P3O}}(\theta^{\prime}) =L^{\text{CLIP,N}}_{R}(\theta^{\prime})-\sum_{i}\kappa_{i}\cdot\max\left\{0,L^{\text{VIOL,N}}_{C_{i}}(\theta^{\prime})\right\},\] \[L^{\text{N-IPO}}(\theta^{\prime}) =L^{\text{CLIP,N}}_{R}(\theta^{\prime})+\sum_{i}\phi(L^{\text{VIOL,N}}_{C_{i}}(\theta^{\prime})).\] We will refer to these modified versions of P3O and IPO as N-P3O and N-IPO throughout the rest of the paper. ## IV Experimental Results We present two experiments: 1. **Comparison of first-order CMDP algorithms**: We select the most suitable algorithm for our purposes (N-P3O) based on a comparative study of different first-order CMDP algorithms. 2. **Sim-to-real transfer with tight constraints**: We validate the CMDP framework by training a perceptive locomotion policy for the robot depicted in Fig. 1 while enforcing tight physical constraints. We compare it to a standard PPO-trained policy to assess whether constrained RL offers improved constraint consistency with qualitatively similar performance. ### _Comparing different CMDP Algorithms_ #### Iv-A1 Experimental Setup We consider an example problem of legged locomotion on flat terrain with constrained joint velocities. We use the ANYmal C robot and constrain the joint velocities to be below 6.0 rad/s. We implement all algorithms with normalized advantages, but include P3O in our comparison to depict the benefits of normalization.
As we aim to obtain zero constraint violations, we used P3O, N-P3O, PPO-Lagrangian and FOCOPS with a threshold (\(\epsilon\)) of zero. Hereby, the cost return cannot drop below zero since the cost function is non-negative. For N-IPO and CRPO, we treat the threshold as a hyperparameter.1 It should be noted that a zero threshold leads to a continuous increase in the penalty parameters of PPO-Lagrangian and FOCOPS with positive learning rate. Footnote 1: CRPO only applies reward improvement steps if the cost returns are below \(\epsilon\), and the logarithm barrier penalty term in N-IPO also needs constraint satisfaction to be well-defined. #### Iv-A2 Results Fig. 2 and Table II show the cost and reward over the learning iterations and the final performance of the best runs. We include PPO without considering the constraint as a baseline. \begin{table} \begin{tabular}{c|c c} \hline \hline & Reward & Violations per episode \\ \hline PPO (unconstrained) & 24.96 (\(\pm\) 0.67) & 533.44 (\(\pm\) 108.94) \\ \hline P3O & 24.13 (\(\pm\) 1.55) & 0.96 (\(\pm\) 1.35) \\ N-P3O & 24.13 (\(\pm\) 1.14) & **0.49** (\(\pm\) 0.88) \\ PPO-Lagrangian & 23.68 (\(\pm\) 1.87) & 0.99 (\(\pm\) 1.31) \\ N-IPO & **24.67** (\(\pm\) 0.84) & 1.33 (\(\pm\) 1.69) \\ CRPO & 22.28 (\(\pm\) 1.70) & 0.96 (\(\pm\) 1.22) \\ FOCOPS & 22.65 (\(\pm\) 3.02) & 15.82 (\(\pm\) 11.67) \\ \hline \hline \end{tabular} \end{table} TABLE II: Final performances of the CMDP algorithms. Fig. 2: Learning curves of the selected CMDP algorithms. \begin{table} \begin{tabular}{c|c c c c} \hline \hline & Tuning Iteration & Parameters & Episode reward & \(\#_{\text{violations}}\) / episode \\ \hline \hline PPO (no constraint) & - & - & 24.96 (\(\pm\) 0.67) & 533.44 (\(\pm\) 108.94) \\ \hline \multirow{4}{*}{PSO} & 1 & \(\kappa=1\) & 25.23 (\(\pm\) 0.93) & 61.84 (\(\pm\) 25.84) \\ & 2 & \(\kappa=10\) & 25.19 (\(\pm\) 1.10) & 5.16 (\(\pm\) 3.58) \\ & 3 & \(\kappa=30\) & 24.88 (\(\pm\) 1.62) & 2.95 (\(\pm\) 2.64) \\ & 4 & \(\kappa=60\) & 24.71 (\(\pm\) 1.08) & 1.28 (\(\pm\) 1.49) \\ & 5 & \(\kappa=120\) & 24.13 (\(\pm\) 1.55) & 0.96 (\(\pm\) 1.35) \\ \hline \multirow{4}{*}{PPO-Lagrangian} & 1 & \(\kappa=1\) & 24.13 (\(\pm\) 1.14) & 0.49 (\(\pm\) 0.88) \\ \cline{2-5} & 1 & \(\lambda_{init}=0,\alpha_{\lambda}=0.001\) & 1.69 (\(\pm\) 2.35) & 0.02 (\(\pm\) 0.17) \\ & 2 & \(\lambda_{init}=-0.5,\alpha_{\lambda}=0.0\) & 1.81 (\(\pm\) 2.69) & 0.06 (\(\pm\) 0.56) \\ \cline{2-5} & 3 & \(\lambda_{init}=-1.5,\alpha_{\lambda}=0.0\) & 25.05 (\(\pm\) 0.90) & 4.42 (\(\pm\) 2.91) \\ & 4 & \(\lambda_{init}=-1.4,\alpha_{\lambda}=0.001\) & 23.70 (\(\pm\) 1.45) & 1.08 (\(\pm\) 1.40) \\ & 5 & \(\lambda_{init}=-1.3,\alpha_{\lambda}=0.001\) & 23.68 (\(\pm\) 1.87) & 0.99 (\(\pm\) 1.31) \\ \hline \multirow{4}{*}{N-IPO} & 1 & \(\epsilon=0.3,k=20,\lambda_{rec}=1\) & 24.97 (\(\pm\) 1.35) & 2.64 (\(\pm\) 2.43) \\ & 2 & \(\epsilon=0.2,k=20,\lambda_{rec}=1\) & 24.64 (\(\pm\) 1.66) & 2.95 (\(\pm\) 2.62) \\ \cline{2-5} & 3 & \(\epsilon=0.1,k=20,\lambda_{rec}=1\) & 24.67 (\(\pm\) 0.84) & 1.33 (\(\pm\) 1.69) \\ \cline{2-5} & 4 & \(\epsilon=0.05,k=20,\lambda_{rec}=1\) & 22.19 (\(\pm\) 2.25) & 1.69 (\(\pm\) 1.77) \\ \cline{2-5} & 5 & \(\epsilon=0.025,k=40,\lambda_{rec}=1\) & 22.52 (\(\pm\) 1.65) & 1.24 (\(\pm\) 1.50) \\ \hline \multirow{4}{*}{CRPO} & 1 & \(\epsilon=0.2\) & 24.97 (\(\pm\) 1.19) & 7.36 (\(\pm\) 4.88) \\ & 2 & \(\epsilon=0.1\) & 24.75 (\(\pm\) 1.26) & 5.13 (\(\pm\) 3.28) \\ \cline{2-5} & 3 & \(\epsilon=0.05\) & 24.25 (\(\pm\) 1.72) & 2.65 (\(\pm\) 2.18) \\ \cline{2-5} & 4 & 
\(\epsilon=0.025\) & 23.28 (\(\pm\) 1.58) & 1.62 (\(\pm\) 1.68) \\ \cline{2-5} & 5 & \(\epsilon=0.01\) & 22.28 (\(\pm\) 1.70) & 0.96 (\(\pm\) 1.22) \\ \hline \multirow{4}{*}{FOCOPS} & 1 & \(\nu=1,\alpha_{\nu}=0,\lambda=0.5\) & 4.59 (\(\pm\) 3.76) & 0.10 (\(\pm\) 0.83) \\ & 2 & \(\nu=0.5,\alpha_{\nu}=0,\lambda=0.5\) & 3.02 (\(\pm\) 3.18) & 0.02 (\(\pm\) 0.17) \\ \cline{1-1} & 3 & \(\nu=0.25,\alpha_{\nu}=0,\lambda=0.5\) & 2.63 (\(\pm\) 3.10) & 0.03 (\(\pm\) 0.21) \\ \cline{1-1} & 4 & \(\nu=0.1,\alpha_{\nu}=0,\lambda=0.5\) & 22.65 (\(\pm\) 3.02) & 15.82 (\(\pm\) 11.67) \\ \cline{1-1} & 5 & \(\nu=0.1,\nu_{max}=0.2,\alpha_{\nu}=0.005,\lambda=0.5\) & 2.54 (\(\pm\) 3.13) & 0.11 (\(\pm\) 1.45) \\ \hline \hline \end{tabular} \end{table} Table 3: Mean performance metrics and parameter values of CMDP algorithms with different parameters. Figure 3: Robot experiments with constraints. (A) Traversing a 20 cm high block with 1.0 m/s command to the front. (B) Walking in \(y\)-direction at the maximum speed. Three algorithms could achieve high reward and less than a single constraint violation on average: P3O, N-P3O and PPO-Lagrangian. The N-P3O achieved the lowest constraint violation. Its superiority over P3O can be attributed to the balance between the reward and cost advantages due to normalization. With similar modification, N-IPO demonstrated the highest reward, albeit with a higher violation rate compared to P3O. The constraint violation is unavoidable due to the non-negative \(\epsilon\) by design, but potential improvements could be explored by using different cost functions and advanced scheduling techniques, as proposed by Kim et al. [12]. #### Iv-A3 **Our choice** For our real-world experiment, we decided to use N-P3O. Among the compared algorithms, N-P3O required the fewest parameters to adjust in our setup (with \(\epsilon\) fixed at zero) and achieved low constraint violation. Although N-IPO resulted in the highest reward and comparable constraint violation, its sensitivity to the threshold parameter made it less suitable. For further details on our parameter adjustments and results, please refer to Table III and implementation details in appendix. ### _Robot Experiments_ We evaluate a perceptive locomotion policy trained using N-P3O for our wheeled-legged robot. We compare it with the PPO baseline trained with the constraint reward (see Table I). #### Iv-B1 Experimental Setup The policies are trained to follow velocity commands over rough terrain. The policy observes the terrain scan around the robot as shown by Fig. 1 and outputs joint position and wheel speed commands. We used the rough terrain environment by Rudin et al. [23]. The velocity commands are sampled uniformly within the ranges of [-2.0, 2.0] m/s in the \(x\)-direction, [-1.0, 1.0] m/s in the \(y\)-direction, and a yaw rate from [-1.5, 1.5] rad/s. To evaluate the effectiveness of the constrained RL approach, we enforce tight constraints for the leg actuators. We use joint speed limit of 6.0 rad/s, which is significantly lower than the robot's actual physical limit of \(\sim\)8.0 rad/s. Joint torque is limited to 75 Nm for leg joints. The physical limit is \(\sim\)100 Nm. We also applied other constraints mentioned in section III-A. We used two cost critic networks - one for command smoothness constraint and the other one for sum of other costs. #### Iv-B2 Results In Fig. 3 we show the results from different policies in two scenarios. 
Both policies violated the joint velocity and torque constraints at varying rates in our experiments, while the other constraints remained satisfied. First, we evaluate the policies' behavior when encountering discrete obstacles (Fig. 3A). A notable qualitative difference in behavior is observed: the N-P3O policy slows down before stepping down to reduce impact, while the standard PPO policy gains speed (see Fig. 3A-1(a) and A-2(a)). This significantly impacts the rate of constraint violation. The N-P3O policy shows two short peaks in the joint velocity that violate the constraint, but the joint torque remains within the constrained range (Fig. 3A(b)). On the other hand, the PPO policy exhibits a significantly higher violation rate when stepping up (the front wheel collision) and when stepping down (front legs drop). The N-P3O policy actively modulates its leg motions and speed in response to discrete events. Second, we evaluate the constraint violation when the robot is stepping at its maximum speed in the \(y\)-direction (Fig. 3B). We commanded 1.0 m/s, which is the maximum speed the policy is trained for. Note that for the ANYmal C robot, this is higher than the nominal operating range (\(\sim\)0.75 m/s by [8, 25]). As shown in Fig. 3B-1, the N-P3O policy shows longer strides and a slower gait frequency, resulting in fewer joint velocity constraint violations (Fig. 3B-2). Additionally, N-P3O exhibited a lower tracking error. The tracking errors are \(0.276\) (\(\pm 0.077\)) \(\mathrm{m/s}\) and \(0.296\) (\(\pm 0.091\)) \(\mathrm{m/s}\) for N-P3O and PPO, respectively. Neither policy could achieve 1.0 m/s due to the hardware limitation. ## V Conclusion & Discussion Our study presents a CMDP formulation for the perceptive locomotion of quadrupedal robots. Through a comparative study of five first-order CMDP algorithms, we identified N-P3O, a normalized version of P3O, as the most effective for our task. The additional advantage normalization step further enhanced both the stability and performance of the algorithm. Real-world experiments on a wheeled-legged quadrupedal robot provide strong evidence for the effectiveness of the constrained RL approach. Utilizing the N-P3O algorithm, our policies were able to achieve performance metrics on par with the conventional PPO algorithm used by state-of-the-art methods, but with fewer constraint violations. A distinct advantage we observed was the decoupling of reward and constraint functions, which simplified the tuning process and led to better performance in terms of constraint violation. In conclusion, constrained RL emerges as a promising tool for robotic applications, particularly in sim-to-real transfer scenarios. While our focus was on legged locomotion, the methodology is broadly applicable. ### _Practical Benefits_ From a hands-on perspective, the constrained RL algorithms showed clear advantages. The PPO approach necessitated complex adjustments to the scaling coefficients of the penalty terms (see Table I). The impact of each coefficient is non-intuitive, often demanding numerous trial-and-error iterations. With separate cost critics, on the other hand, this effort is removed by design: we can control the influence of the cost objective using a single parameter \(\kappa\). Such a streamlined approach accelerates the overall development of learned controllers. While the additional cost critics add a computational overhead in comparison to PPO (0.07 s more), this is negligible compared to the simulation time (\(\sim 0.74\) s).
### _Future work_ Future work will include different applications such as autonomous navigation or manipulation. Additionally, we only experimented with simple and constant constraints. More complex systems, such as joints with variable gear ratios, may introduce state-dependent constraints. Identifying complex constraints of unknown or under-modeled systems remains an open question. Current approaches also face limitations in enforcing hard constraints. Constraint violation is inevitable due to the exploration during training. This issue is particularly relevant for safety-sensitive applications, necessitating the development of methods for stricter constraint satisfaction [26, 27]. ## Appendix Here we provide additional experiments and technical details. ### _Effect of Different Cost Functions_ The cost function is an important design choice when formulating a CMDP. We evaluate the effect of the cost functions in Table IV, again on the example problem of quadrupedal locomotion on flat terrain with constrained joint velocities. The policies were obtained with N-P3O and \(\kappa=1\). There are notable differences in the constraint violations. The indicator function leads to the fewest violations, closely followed by the number-of-joints cost function. The squared-ReLU cost function violates more often, but leads to smaller deviations from the limit. ### _Non-negative Cost Critics_ In cases of near-perfect constraint satisfaction, a plain cost critic has trouble learning the cost value function, often outputting negative values. To address this, we appended a Softplus output layer to the cost critic. Fig. 4 displays the mean of the sampled cost returns and the estimated cost returns. The use of the non-negative output function leads to a lower variance in cost returns. These improvements are shown in Table V. ### _Training Details_ The definitions of the observations and domain randomization are the same as in Rudin et al. [23]. #### V-D1 Architecture The models are depicted in Table VII. The proprioceptive observation includes the target velocity, base velocity, joint positions, joint velocities, and gravity vector. #### V-D2 Scheduling Constraint Minimization When constraints are enforced, we noticed premature convergence of the policy training. To promote exploration, we set \(\kappa\) to a low value at the beginning of the training and exponentially increased the value: \(\kappa_{i}=\min(0.2,0.1\cdot(1.0004)^{i})\) for the \(i\)-th iteration. #### V-D3 Decaying Entropy Coefficient We introduced a decaying entropy regularization loss in the objective function, as suggested in previous work [28, 29]. This improved the smoothness of the policy at convergence. ### _Algorithm Implementation Details_ #### V-E1 PPO-Lagrangian In our implementation, we use the ADAM optimizer for the update in Eq. 11 and apply the Softplus function to ensure non-negativity of \(\lambda\) after updates. #### Vi-C2 N-IPO The logarithm barrier penalty cannot be applied if the constraint is already violated. We added a recovery strategy to achieve constraint satisfaction again: \[L^{\text{N-IPO}}(\theta^{\prime}) =L_{R}^{\text{CLIP},\text{N}}(\theta^{\prime})+\sum_{i:J_{C_{i}}(\theta^{\prime})\leq\varepsilon_{i}}\phi(L_{C_{i}}^{\text{VIOL},\text{N}}(\theta^{\prime}))\] \[+\lambda_{\text{rec}}\cdot\sum_{i:J_{C_{i}}(\theta^{\prime})>\varepsilon_{i}}L_{C_{i}}^{\text{CLIP},\text{N}}(\theta^{\prime})\] with the additional recovery term.
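One possible reading of the PPO-Lagrangian implementation note above (Adam on the dual variable, with a Softplus keeping \(\lambda\) non-negative) is sketched below in PyTorch. The parameterization \(\lambda=\mathrm{softplus}(\xi)\) and the variable names are assumptions made for illustration; the authors' exact implementation may differ.

```python
import torch

class LagrangeMultiplier(torch.nn.Module):
    """Non-negative dual variable lambda = softplus(xi), updated with Adam (cf. Eq. 11)."""

    def __init__(self, init_xi: float = 0.0, lr: float = 1e-3):
        super().__init__()
        self.xi = torch.nn.Parameter(torch.tensor(init_xi))
        self.opt = torch.optim.Adam([self.xi], lr=lr)

    def value(self) -> torch.Tensor:
        return torch.nn.functional.softplus(self.xi)

    def update(self, cost_return: float, epsilon: float) -> None:
        # Gradient ascent on lambda when the constraint is violated: minimizing
        # -lambda * (J_C - epsilon) increases lambda whenever J_C > epsilon.
        self.opt.zero_grad()
        loss = -self.value() * (cost_return - epsilon)
        loss.backward()
        self.opt.step()
```

Calling `update(J_C, epsilon)` once per learning iteration performs one dual step: the multiplier grows while the cost return exceeds its threshold and shrinks toward zero once the constraint is satisfied, mirroring Eq. 11.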
#### Vi-C3 CRPO We utilize sampled data within multiple epochs and iterate over minibatches, leading to several updates of the policy within each learning iteration. In our implementation of CRPO, we utilize the constraint reformulation (Eq. 4b) to estimate the constraint violation after every policy update, instead of switching between policy improvement and constraint minimization after each complete iteration.
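To summarize how the pieces above fit together, the following PyTorch-style sketch assembles the N-P3O loss from the clipped reward surrogate and the normalized cost-violation penalties of Eq. 16. It is a schematic re-implementation from the equations in this paper, not the released code; tensor shapes, names, and the exact handling of the normalization statistics are assumptions.

```python
import torch

def clipped_surrogate(ratio, adv, clip=0.2, maximize=True):
    """PPO-style clipped surrogate; pessimistic min for rewards, max for costs."""
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - clip, 1.0 + clip) * adv
    agg = torch.minimum if maximize else torch.maximum
    return agg(unclipped, clipped).mean()

def n_p3o_loss(ratio, adv_r, adv_c, J_c, eps, mu_c, sigma_c, kappa, gamma=0.99, clip=0.2):
    """Negative N-P3O objective (to be minimized by the policy optimizer).

    ratio  : pi'(a|s) / pi(a|s) for the sampled batch
    adv_r  : normalized reward advantages
    adv_c  : list of normalized cost advantages, one tensor per constraint
    J_c    : list of estimated cost returns J_{C_i}(pi) of the current policy
    eps    : list of constraint thresholds epsilon_i
    mu_c, sigma_c : mean / std used to normalize each cost advantage
    kappa  : penalty weight on constraint violations (a single scalar here)
    """
    loss = -clipped_surrogate(ratio, adv_r, clip, maximize=True)
    for a_c, j, e, mu, sd in zip(adv_c, J_c, eps, mu_c, sigma_c):
        l_clip_c = clipped_surrogate(ratio, a_c, clip, maximize=False)
        violation = l_clip_c + ((1.0 - gamma) * (j - e) + mu) / sd  # Eq. 16
        loss = loss + kappa * torch.clamp(violation, min=0.0)       # max{0, .}
    return loss
```

In a training loop this scalar would simply replace the usual PPO policy loss, with the cost advantages supplied by the separate cost critics and \(\kappa\) scheduled as described in the appendix (Scheduling Constraint Minimization).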
2309.03252
Prevalence of Compact Nuclear Radio Emission in Post-Merger Galaxies and its Origin
Post-merger galaxies are unique laboratories to study the triggering and interplay of star-formation and AGN activity. Combining new, high resolution, 10 GHz Jansky Very Large Array (VLA) observations with archival radio surveys, we have examined the radio properties of 28 spheroidal post-merger galaxies. We find a general lack of extended emission at (sub-)kiloparsec scales, indicating the prevalence of compact, nuclear radio emission in these post-merger galaxies, with the majority (16/18; 89\%) being radio-quiet at 10 GHz. Using multi-wavelength data, we determine the origin of the radio emission, discovering 14 new radio AGN and 4 post-mergers dominated by emission from a population of supernova remnants. Among the radio AGN, almost all are radio-quiet (12/14; 86\%). We discover a new dual AGN (DAGN) candidate, J1511+0417, and investigate the radio properties of the DAGN candidate J0843+3549. 4 of these radio AGN are hosted by SF emission-line galaxies, suggesting that radio AGN activity may be present during periods of SF activity in post-mergers. The low jet powers and compact morphologies of these radio AGN also point to a scenario in which AGN feedback may be efficient in this sample of post-mergers. Lastly, we present simulated, multi-frequency observations of the 14 radio AGN with the Very Long Baseline Array (VLBA) and the VLBI capabilities of the Next Generation Very Large Array (ngVLA) to assess the feasibility of these instruments in searches for supermassive black hole binaries (SMBHBs).
Gregory Walsh, Sarah Burke-Spolaor
2023-09-06T18:00:00Z
http://arxiv.org/abs/2309.03252v1
# Prevalence of Compact Nuclear Radio Emission in Post-Merger Galaxies and its Origin ###### Abstract Post-merger galaxies are unique laboratories to study the triggering and interplay of star-formation and AGN activity. Combining new, high resolution, 10 GHz Jansky Very Large Array (VLA) observations with archival radio surveys, we have examined the radio properties of 28 spheroidal post-merger galaxies. We find a general lack of extended emission at (sub-)kiloparsec scales, indicating the prevalence of compact, nuclear radio emission in these post-merger galaxies, with the majority (16/18; 89%) being radio-quiet at 10 GHz. Using multi-wavelength data, we determine the origin of the radio emission, discovering 14 new radio AGN and 4 post-mergers dominated by emission from a population of supernova remnants. Among the radio AGN, almost all are radio-quiet (12/14; 86%). We discover a new dual AGN (DAGN) candidate, J1511+0417, and investigate the radio properties of the DAGN candidate J0843+3549. 4 of these radio AGN are hosted by SF emission-line galaxies, suggesting that radio AGN activity may be present during periods of SF activity in post-mergers. The low jet powers and compact morphologies of these radio AGN also point to a scenario in which AGN feedback may be efficient in this sample of post-mergers. Lastly, we present simulated, multi-frequency observations of the 14 radio AGN with the Very Long Baseline Array (VLBA) and the VLBI capabilities of the Next Generation Very Large Array (ngVLA) to assess the feasibility of these instruments in searches for supermassive black hole binaries (SMBHBs). Galaxy mergers (608); Active galactic nuclei (16); Radio active galactic nuclei (2134); Radio jets (1347); Supernova remnants (1667); Evolution of galaxies (594) + Footnote †: journal: 0000-0002-8071-8885]Gregory Walsh 0000-0002-4882-7885]Sarah Burke-Spolaor ## 1 Introduction Theoretical studies predict that galaxy mergers are the main contributor to the buildup of stellar mass in galaxies and to the formation of bulges and massive elliptical galaxies (Springel, 2000; Cox et al., 2008; Di Matteo et al., 2008; Torrey et al., 2012). Integral to this evolution is what role galaxy mergers have in the triggering of an active galactic nucleus (AGN) and/or intense starburst activity. Early studies of ultra-luminous infrared galaxies (ULIRGs), which are at least partially powered by a heavily obscured AGN (Lonsdale et al., 2006), found a nearly ubiquitous fraction hosted by interacting galaxy systems (Murphy et al., 1996; Veilleux et al., 2002). These early works suggested that ULIRGs are triggered by galaxy merger-induced processes. Likewise, intense starburst activity has been observed in merging systems (Tacconi et al., 2008). For both cases, the triggering of these phenomenon are caused by nuclear inflows of gas produced by gravitational torques during the merger process (Hopkins et al., 2006), linking the growth of supermassive black holes (SMBHs) and their host galaxies. Indeed, observed correlations between the SMBH and host galaxy properties confirm their co-evolution (Kormendy & Richstone, 1995; Magorrian et al., 1998; Ferrarese & Merritt, 2000; Gebhardt et al., 2000; Tremaine et al., 2002; Gultekin et al., 2009; McConnell & Ma, 2013; Sahu et al., 2019). Thus, detailed studies of galaxy mergers at various stages of evolution are needed to fully realize the astrophysical processes governing these phenomena. 
Observational studies are in conflict with one another over the role mergers play in triggering an AGN. While many have found either an increased AGN fraction in merging systems (Ellison et al., 2011; Satyapal et al., 2014; Donley et al., 2018; Goulding et al., 2018) or an increased merger fraction in AGN hosts (Chiaberge et al., 2015; Fan et al., 2016; Gao et al., 2020; Marian et al., 2019; Breiding et al., 2023), others have found no such connection between AGN and mergers (Grogin et al., 2005; Cisternas et al., 2011; Bohm et al., 2013; Villforth et al., 2017; Lambrides et al., 2021). Selection biases almost certainly contribute to this dissonance. Different AGN selection criterion, e. g., mid-infrared (e. g., Satyapal et al., 2014; Donley et al., 2018; Goulding et al., 2018), X-ray (e. g., Grogin et al., 2005; Villforth et al., 2017), optical (e. g., Bohm et al., 2013), and radio (e. g., Chiaberge et al., 2015; Breiding et al., 2023), and selection of mergers at various stages of their evolution also necessarily select different astrophysical scenarios (e. g., Sanders et al., 1988). Among the evolutionary stages of merger systems, post-merger galaxies, those in which the stellar nuclei have coalesced, perhaps present the most unique laboratories to study these triggering mechanisms and the effects of AGN feedback on star formation as a result of the advanced stage of the merger. Small samples of post-merger systems have found hints at enhancement in the star-formation rate (Ellison et al., 2013) and AGN incidence over galaxies in close pairs (Carpineti et al., 2012; Bickley et al., 2023). The effectiveness of AGN feedback, however, is questioned when examining post-merger galaxies. Post-mergers appear to host a significantly higher fraction of post-starburst galaxies (Ellison et al., 2022; Li et al., 2023), characterized as having recently experienced intense star-formation activity that was rapidly truncated (Couch & Sharples, 1987), although this itself does not imply that an AGN is the main driver of this quenching. Indeed, this low efficiency scenario is corroborated by the works of Kaviraj et al. (2015) and Shabala et al. (2017). Both of these studies determined that the onset of merger-triggered AGN activity is delayed with respect to the peak of starburst activity, significantly limiting its ability to impact the star-formation rate of the host galaxy. Expanding the overall population of post-merger galaxies for which we can study these evolutionary effects is important towards understanding the general trends in their behavior. Post-merger galaxies are also ideal targets in which to search for supermassive black hole binaries (SMB-HBs). As all massive galaxies are believed to harbor a SMBH (Kormendy & Ho, 2013), a major galaxy merger should lead to the formation of a SMBHB. At the initial stages of SMBHB evolution, dynamical friction is the dominant mechanism through which the SMBHs lose energy and momentum (Begelman et al., 1980), eventually settling into the gravitational center of the merger remnant. Simulations of galaxy mergers have found that this phase of the evolution may be as short as 1 Gyr (Dosopoulou & Antonini, 2017; Kelley et al., 2017), shorter than the timescale over which the bulges will centralize. Thus, by the time the stellar nuclei have merged, the resident SMBHs are likely to already reside in the gravitational center of the merger remnant (Cf. Dvorkin & Barausse, 2017; Kelley et al., 2017). 
At parsec-scale separations, the SMBHs will form a gravitationally-bound SMBHB. Here, several different processes, of varying efficiency, are hypothesized to contribute to the shrinking of the binary's orbit; the so-called 'last parsec' problem. If the SMBHB is able to overcome the last parsec, it will reach sub-pc orbital separation, where the emission of low-frequency gravitational waves will efficiently bring the binary to merger. Thus, establishing a population of observed SMBHBs at various orbital separations is key towards our understanding of the nHz gravitational wave population, which will soon be probed by pulsar timing arrays (Agazie et al., 2023, 2020). Critical to this aspect, however, is the poorly understood evolution of SMBHBs themselves. Low efficiency at pc scales can create a scenario in which binaries at these orbital separations are still present in the post-merger host galaxy. Observations of galaxy mergers at the post-merger phase are needed to better understand SMBHB evolution. In this paper we present a multi-wavelength analysis of 30 galaxies identified as post-mergers in Galaxy Zoo to study their emission mechanisms. The paper is organized as follows. Section 2 presents the post-merger sample. Section 3 describes new 10 GHz observations taken with the Karl G. Jansky Very Large Array (VLA) of this post-merger sample, while in Section 4, we describe the multi-frequency data obtained via archival radio surveys. The optical emission-line classifications of each post-merger galaxy, and the radio luminosities, morphologies and spectra of these sources are presented in Section 5. In Section 6, we present analyses to determine the origin of the radio emission in our 10 GHz-detected post-merger galaxies. We then discuss the prevalence of radio-quiet emission in these post-mergers, the impact of AGN feedback in radio AGN hosts, and the properties of the SF-dominated radio sources in Section 7. Lastly, in Section 8, we present simulated, multi-frequency observations of the radio AGN with the Very Long Baseline Array (VLBA) and Next Generation VLA (ngVLA) to assess the feasibility of SMBHB searches for these post-merger galaxies. Our results are summarized in Section 9. Throughout this paper, we have adopted a \(\Lambda\)CDM cosmology with \(H_{0}=67.4\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{m}=0.315\) (Planck Collaboration et al., 2020). We use the radio spectral index convention \(S_{\nu}\propto\nu^{\alpha}\).
## 2 Sample Our sample consists of the 30 spheroidal post-merger (SPM) galaxies presented by Carpineti et al. (2012, hereafter C12). The C12 sample was selected from the larger sample of Darg et al. (2010). These authors constructed a catalog of 3003 local (\(0.005<z<0.1\)) galaxy merger systems identified through visual inspection via the Galaxy Zoo Project (Lintott et al., 2008). Of these 3003, 370 merging systems were considered strongly perturbed, e. g., showing strong tidal tails, but could not be clearly divided into a pair of interacting galaxies. From the parent sample of 370 late-stage merger systems, C12 selected their sample of 30 SPM galaxies via the distinct visual characteristics of SPMs: SPM galaxies are defined as a single galaxy that displays morphological disturbances associated with a recent merger event, e. g., tidal tails, and contains only a single dominant bulge, making them the likely progenitors of early-type galaxies. As a consistency check, after visual inspection of each SPM candidate, C12 utilized the SDSS parameter fracdev in the optical \(r\)-band for a quantitative representation of the bulge-dominant nature of each system. fracdev represents the likelihood of the surface brightness distribution to be fit by a de Vaucouleurs profile: pure bulge systems have a value of 1; pure exponential or disc-like distributions have a value of 0. All of the 30 SPM galaxies have fracdev \(>0.5\), signifying they are bulge dominated, with most having fracdev \(\geq 0.8\). Further, the 30 SPM systems are all of high stellar mass (\(10.3\leq\log(M_{*})\leq 11.76\)), typical of early-type galaxies. Additionally, C12 found that these 30 SPM systems are diverse in their large-scale environments. Using the environment parameter \(\rho_{g}\) defined by Schawinski et al. (2007), C12 found two SPM systems which inhabit a cluster environment, 19 in a group environment, and 9 in a field environment. ## 3 Observations and Data Calibration High resolution, Karl G. Jansky Very Large Array (VLA) observations of the 30 SPM galaxies were taken from 2016 to 2022. J1018+3613 was observed on May 26, 2016 (PI: S.
Burke-Spolaor) using the S- (2-4 GHz) and X-band (8-12 GHz) receivers of the VLA while in B configuration. 3C 186 was used for bandpass calibration and J1018+3542 was used to perform phase referencing. J0206\(-\)0017, J1445+5134, J1511+0417, J1511+2309 and J1655+2639 were observed in two observing programs on November 15, 2020 and December 12, 2020 (PI: P. Breiding) using the S- and X-band receivers of the VLA while in BnA and A configuration. 3C 286 and 3C 138 were used for bandpass calibration and a nearby phase reference calibrator for phase calibration of each target. We obtained observations for the remaining 24 SPM galaxies on May 19, and May 28, 2022 using the X-band receiver of the VLA with 4 GHz bandwidth while in A configuration. We observed 3C 147 and 3C 286 for bandpass calibration and a nearby phase reference calibrator for phase calibration of each target. Our observations were designed to reach a nominal sensitivity of 15 \(\mu\)Jy for each target, with a 3\(\sigma\) detection threshold of 45 \(\mu\)Jy. Two of the SPM galaxies, J0908+1407 and J0933+1048, were removed from the analysis due to technical issues during the observing session. For that reason, only the remaining 28 SPM galaxies are included in this analysis. The data sets were calibrated either using the VLA calibration pipeline1 or following standard data calibration techniques. The VLA calibration pipeline could not be used for the observing session that contained technical issues. To check for calibration consistency, we calibrated the data set without technical errors using both the VLA calibration pipeline and our manual calibration routine. We achieved the same results using both calibration methods for this data set. Footnote 1: [https://science.nrao.edu/facilities/vla/data-processing/pipeline](https://science.nrao.edu/facilities/vla/data-processing/pipeline) The data were inspected, flagged, and imaged in the Common Astronomy Software Applications (CASA; THE CASA TEAM et al., 2022) package. To account for the large fractional bandwidths at each band, \(\sim 60\%\) and \(\sim 40\%\) respectively for S- and X-band, we used multi-Taylor, multi-frequency synthesis deconvolution (MTMFS; Rau and Cornwell, 2011) when cleaning our images. Because of the limited \(uv\)-coverage from our observations, we utilized a Briggs weighting scheme (Briggs, 1995) with a robust parameter of 0.7 to suppress the sidelobes present in images of moderately strong sources (\(S_{\nu}>1\) mJy). Otherwise, we used a natural weighting scheme when imaging. Where applicable, we performed standard self-calibration techniques on the target data to improve the image quality. ## 4 Radio Surveys We wish to construct a broadband radio SED for each of the 28 SPM galaxies in our survey. In this section, we describe the surveys used to construct each SED. It is important to note that each survey, observed at a different frequency and with different angular resolutions, is, by nature, sensitive to different forms of radio emission and may or may not suffer from source confusion. Low frequency (\(<1\) GHz) surveys are more sensitive to diffuse emission, likely associated with star formation, and are more likely to suffer from source confusion due to their larger resolution elements. High frequency surveys, in contrast, generally resolve out this same extended, diffuse emission if it is of low surface brightness, making them good identifiers of compact radio jets and cores, features associated with an AGN. 
This can create an artificial steepening of the radio spectrum due to the high frequency surveys missing flux density information recovered at lower frequency for diffuse emission, or confusion from background source blending. We attempted to mitigate these issues by visually inspecting all of the survey images used in our analysis. In particular, we are interested in the spectral index measurement of the nuclear radio emission, which will be obtained through the 3 and 10 GHz flux density measurements, or their limits (see Section 5.4). Background source blending is not an issue because of the high angular resolution at these frequencies and the low redshift (\(z<0.1\)) of these SPMs. For lower frequency observations, visual inspection showed no complex structure in the vast majority of the radio maps. For those select few that do have complex structure, we describe our procedures for their flux density measurements in Section 5.3. This is likewise true for any high frequency image that showed complex structure. For each radio survey, we considered a source to be detected if it was found in an available source catalog, e. g., LoTSS, RACS, and FIRST, or the signal-to-noise (S/N) ratio in the respective survey image is \(>5\sigma\), where \(\sigma\) is the local image RMS noise. In some cases, a source was identified at only a \(3\sigma\) significance. We considered the \(3\sigma\) source a true detection if it was spatially coincident with a source of \(\geq 5\sigma\) detection in any of the other radio surveys. Each image was inspected visually to assure that no sources were missed. It is important to distinguish this to properly account for the difference in sensitivities between the various radio surveys used. If we only classified sources at the \(5\sigma\) level and greater, our multi-frequency analyses would be incomplete and not truly representative of the SPM sample radio population within the limits of each survey. The 10 GHz map of J1015+3914, presented in Figure A6, illustrates this point. The diffuse, \(3\sigma\) emission at 10 GHz would not by itself be substantial for a detection. However, J1015+3514 is detected at high signal-to-noise ratio in all other surveys that observed it, and this diffuse 10 GHz emission is spatially coincident with those detections. For this reason, we consider the 10 GHz source a true detection and include it as part of our analyses. For \(3\sigma\) sources, we determined the flux density by performing a 2D Gaussian fit to the observed radio emission using the task IMFIT in CASA. ### LoTss The LOw Frequency ARray (LOFAR; van Haarlem et al., 2013) Two Meter Sky Survey (LoTSS; Shimwell et al., 2022) is an on-going survey covering the northern sky above \(+34^{\circ}\) conducted at a central observing frequency of 144 MHz. For our analysis, we use the second data release2, which covers 27% of the northern sky with a resolution of \(6^{\prime\prime}\) and median RMS sensitivity of 83 \(\mu\)Jy beam\({}^{-1}\). 16 of the 28 SPM galaxies in our survey fall within the LoTSS DR2 sky coverage, of which 12 were detected. Footnote 2: [https://lofar-surveys.org/dr2_release.html](https://lofar-surveys.org/dr2_release.html) ### Racs The Rapid ASKAP Continuum Survey (RACS; McConnell et al., 2020; Hale et al., 2021) is the first large-area survey completed using the Australian Square Kilometer Array Pathfinder (ASKAP; Hotan et al., 2021). 
RACS covered the entire southern sky up to a declination \(+41^{\circ}\) with a median field RMS sensitivity of 250 \(\mu\)Jy beam\({}^{-1}\). RACS-low, as part of the RACS DR13, was observed at a central frequency of 887.5 MHz with a resolution of \(15^{\prime\prime}\). 14 of the 28 SPM galaxies in our survey fall within the RACS sky coverage, of which 7 were detected. Footnote 3: [https://research.csiro.au/caseda/the-rapid-askap-continuum-survey-stokes-1](https://research.csiro.au/caseda/the-rapid-askap-continuum-survey-stokes-1) ### First The Faint Images of the Radio Sky at Twenty Centimeters (FIRST; Helfand et al., 2015) survey was a VLA survey conducted at 1.4 GHz and observed the entire sky north of \(+10^{\circ}\) and south of \(+65^{\circ}\), covering 10,575 deg\({}^{2}\). The survey resolution is given at \(5^{\prime\prime}\) with a typical RMS sensitivity of 150 \(\mu\)Jy beam\({}^{-1}\). 27 of the 28 SPM galaxies in our survey fall within the FIRST sky coverage, of which 14 were detected. We used the flux density and RMS values listed for each source from the catalog of Helfand et al. (2015), except for \(3\sigma\) sources, which were not included in this catalog. ### Nvss The NRAO VLA Sky Survey (NVSS; Condon et al., 1998) was a VLA survey conducted at 1.4 GHz and observed the entire sky north of \(-40^{\circ}\). The nominal resolution of the survey is \(45^{\prime\prime}\) with a typical RMS sensitivity of 450 \(\mu\)Jy beam\({}^{-1}\). Because of the large angular resolution, we used the NVSS catalog data for only one source, J1304+6520, which was not part of the FIRST sky coverage, but was observed and detected by NVSS. ### Vlass The VLA Sky Survey (VLASS; Lacy et al., 2020) is an on-going VLA survey conducted at S-band, covering the frequency range 2-4 GHz, which will cover the whole sky observable by the VLA (\(\delta>-40^{\circ}\)) over three observing epochs. Each observing epoch is designed to reach a nominal RMS sensitivity of 120 \(\mu\)Jy beam\({}^{-1}\) with a resolution of 2.5\({}^{\prime\prime}\). VLASS has currently completed two observing epochs, with raw and calibrated data sets and Quick Look images available for both epochs 1 and 2. The flux density accuracy of Quick Look sources in the first campaign of the first epoch of VLASS (VLASS1.1) were affected by antenna pointing errors, giving systematically lower flux density measurements of 10% with a scatter of \(\pm 8\)% for flux densities below \(\approx 1\) Jy (see VLASS Memo 13\({}^{4}\) for more detail). For this reason, we used only the campaigns from the second epoch of VLASS (VLASS2.1 and VLASS2.2) for the S-band flux density of sources of interest. As mentioned in Section 3, we observed 6 of the SPM galaxies in separate VLA observing campaigns at S-band. We used these 3 GHz VLA observations for these sources to derive source parameters instead of any corresponding VLASS detections for them. The remaining 22 SPM galaxies have all been observed in the second VLASS campaign, of which 7 were detected. To extract the flux density of the detected sources, we used the CASA task IMFIT to fit a two-dimensional Gaussian to the source in each Quick Look image. 
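For reference, a minimal sketch of the detection rule described at the start of Section 4 is given below: a source counts as detected at \(\geq 5\sigma\) (or if catalogued), and a \(3\sigma\) source counts only when it is spatially coincident with a \(\geq 5\sigma\) detection in another survey. The function and argument names are illustrative assumptions, not part of any survey pipeline.

```python
# Hedged sketch of the detection criterion described in Section 4.
# peak_snr: signal-to-noise of the candidate in a given survey image;
# other_survey_snrs: S/N of spatially coincident counterparts in other surveys;
# in_catalog: True if the source appears in the survey's published catalog.
def is_detected(peak_snr: float, other_survey_snrs=(), in_catalog: bool = False) -> bool:
    if in_catalog or peak_snr >= 5.0:
        return True
    # 3-sigma emission counts only if confirmed by a >=5-sigma detection
    # at the same position in another survey.
    if peak_snr >= 3.0 and any(snr >= 5.0 for snr in other_survey_snrs):
        return True
    return False

# Example resembling the J1015+3914 case: diffuse 3-sigma emission at 10 GHz,
# but strong counterparts in the lower-frequency surveys.
print(is_detected(3.2, other_survey_snrs=[28.0, 7.1]))  # True
```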
## 5 Source Properties Following the detection criterion of Section 4, 75% (12/16) of the sources with available LoTSS data were detected; 50% (7/14) with available RACS data were detected; 54% (15/28) with available 1.4 GHz data, from either FIRST or NVSS, were detected; 32% (7/22) with available VLASS data were detected, with a 100% detection rate for the remaining 6 with separate 3 GHz VLA observations; and 67% (18/28) were detected by our 10 GHz VLA observations. ### Emission-Line Activity Emission-line diagnostics are a powerful tool to probe the dominant ionization mechanism in a galaxy. To examine the emission-line behavior of the 30 SPM galaxies, we have used the OSSY catalog (Oh et al., 2011) to obtain the intrinsic fluxes of the H\(\beta\), [OIII]\(\lambda\)5007, H\(\alpha\), [NII]\(\lambda\)6583, [SII]\(\lambda\)6717, and [OI]\(\lambda\)6300 emission lines. Oh et al. (2011) determined these values by performing a spectral fitting routine to the SDSS DR7 spectrum of each source. If the signal-to-noise (S/N) ratio of the H\(\beta\), [OIII]\(\lambda\)5007, H\(\alpha\) or [NII]\(\lambda\)6583 lines was \(<3\), we classified the galaxy as quiescent. For the remaining galaxies, we followed the standard BPT diagram analysis (Baldwin et al., 1981). For the [NII]/H\(\alpha\) diagnostic, we used the demarcation of Kauffmann et al. (2003) to distinguish between pure star-forming (SF) and SF-AGN composite galaxies. Composite galaxies and AGN are divided using the theoretical maximum starburst model from Kewley et al. (2001). AGN are then subdivided between Seyferts and LINERs (Low-Ionization Nuclear Emission-line Regions) by the division of Schawinski et al. (2007a). The best indication of Seyfert or LINER behavior is achieved by using the [OI]\(\lambda\)6300 emission line (Schawinski et al., 2007a). However, the [OI]\(\lambda\)6300 line is typically weaker than any of the other lines used and we only employed this diagnostic if the [OI]\(\lambda\)6300 line was detected with a S/N ratio \(\geq 3\). Otherwise, we employed the [SII]\(\lambda\)6717 diagnostic to distinguish between Seyfert AGN and LINERs. For these two diagnostics, we used the Seyfert-LINER demarcation lines of Kewley et al. (2001). If neither line was detected, we used the [NII] diagnostic to distinguish between Seyferts and LINERs. The results of our BPT analysis are presented in Figure 1, where each data point colored by its \(u-r\) color. The emission-line classification of each SPM galaxy is listed in Table 1. It should be noted that for even the bluest of the SPM galaxies in the C12 sample, their overall \(u-r\) color is still predominantly red. This is expected, as C12 found that the \(u-r\) colors of this SPM sample is indicative of a recent star-formation episode, e, g,. bluer than an early-type control sample, but one that peaked prior to the merger coalescence, e. g., redder than a sample of ongoing mergers (see Figure 5 of C12). The BPT diagnostic for J0206\(-\)0017 deserves special attention. The middle panel of Figure 1 shows only 16 of the 17 identified active galaxies. This is because the data point for J0206\(-\)0017 has log([SII]\(\lambda\)6717/H\(\alpha\))=-1.27. In comparison to the much larger sample of active SDSS galaxies used by Kewley et al. (2006), there are no galaxies which approach this value of J0206\(-\)0017. This is most likely attributable to the fact that J0206\(-\)0017 is a known changing-look AGN with asymmetric broad-line emission (Cohen et al., 1986; McElroy et al., 2016). 
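For readers wishing to reproduce the [NII]-based part of this classification, a hedged sketch is given below. The demarcation coefficients are the commonly quoted forms of Kauffmann et al. (2003), Kewley et al. (2001), and the Schawinski et al. (2007a) Seyfert/LINER line, quoted from memory here rather than taken from this paper, and should be checked against the original references before reuse.

```python
# Hedged sketch of the [NII]/Halpha BPT classification described above.
# Coefficients are assumptions (commonly quoted forms), not values from this paper.
def bpt_nii_class(log_nii_ha: float, log_oiii_hb: float) -> str:
    if log_nii_ha < 0.05 and log_oiii_hb < 0.61 / (log_nii_ha - 0.05) + 1.30:
        return "star-forming"   # below the Kauffmann et al. (2003) curve
    if log_nii_ha < 0.47 and log_oiii_hb < 0.61 / (log_nii_ha - 0.47) + 1.19:
        return "composite"      # between Kauffmann (2003) and Kewley (2001)
    # Above the Kewley (2001) maximum-starburst curve: split Seyfert vs. LINER.
    return "Seyfert" if log_oiii_hb > 1.05 * log_nii_ha + 0.45 else "LINER"

print(bpt_nii_class(-0.5, 0.8))    # AGN-like line ratios  -> Seyfert
print(bpt_nii_class(-0.6, -0.3))   # SF-like line ratios   -> star-forming
```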
The prescription used by OSSY to determine the line fluxes would not have accounted for the extremely broad nature of the H\(\alpha\) and H\(\beta\) lines for this source, and we would most likely need to perform our own spectral fitting routine to extract a reliable flux value for the narrow emission-line components to these broad lines. Because of this, we have classified J0206\(-\)0017 as a Seyfert AGN instead of as a star-forming galaxy as would be determined by its BPT diagnostics. We also note that the spectra of J0908+1407, J1511+2309, and J1655+2639 all contain H\(\beta\) absorption. In all of these cases, the H\(\beta\) absorption appears to be of stellar origin. Through visual inspection, there does not appear to be a significant blueshift in the H\(\beta\) absorption, which would be representative of an AGN-related outflow (e. g., Williams et al., 2017). For any H\(\beta\) emission present in these sources, the S/N ratio of the emission line was \(<3\). Although the emission lines of [OIII]\(\lambda\)5007, H\(\alpha\), and [NII]\(\lambda\)6583 are all detected with S/N ratio \(\geq 3\) in these spectra, for consistency, we classified them as quiescent because of the weak H\(\beta\) emission. In total, we found that 43% (13/30) of the C12 SPM galaxies were classified as quiescent. The remaining \(\approx\)57% (17/30) were classified as either purely SF (10%; 3/30), SF-AGN composite (\(\approx\)13%; 4/30), Seyfert AGN (\(\approx\)13%; 4/30), or LINER (20%; 6/30) from their BPT diagnostics. In comparison to the emission-line diagnostics performed by C12, our analysis finds a higher percentage of quiescent galaxies (\(16\%\pm 6\%\) to 43%), a lower percentage of Seyfert AGN (\(42\%\pm 6\%\) to 13%), and a similar percentage of LINERs (\(26\%\pm 6\%\) to 20%) and star-forming galaxies (\(16\%\pm 6\%\) to 10%). Direct comparison is somewhat ambiguous though, since C12 did not use the SF-AGN composite classification for their BPT analysis. It is unclear where the composite systems we identified would fall in the analysis of C12. It is interesting, however, that we arrive at different conclusions for the number of quiescent galaxies considering both the OSSY catalog and C12 used the gandalf code (Sarzi et al., 2006) to perform emission-line fitting of the spectra. We would expect, then, that the S/N ratio of the requisite emission lines would not change between the two analyses. Even if the 3 H\(\beta\) absorption spectra are considered as active galaxies by C12, this only marginally reduces the percentage of quiescent galaxies we have identified from 43% to 30%, which is still a factor of 2 greater than what was found by C12. ### Radio Flux Densities and Luminosities Flux density measurements were obtained either from survey catalog entries or from the CASA task IMFIT when reported values were not available. The integrated flux density measurements and their associated errors for each source are summarized in Table 2. For sources identified in the LoTSS, RACS, and FIRST/NVSS catalogs, the measurement error in Table 2 is the error quoted by each catalog summed in quadrature with a 5% uncertainty in the absolute flux scale. For VLASS and 3\(\sigma\) detections in any of the archival radio surveys, the error is the RMS image noise and 5% uncertainty in the absolute flux scale. For sources detected by the 3 and 10 GHz VLA observations, the errors are the image Figure 1: The BPT diagnostic diagrams for the emission-line galaxies in the C12 SPM sample. 
Each point is colored by its \(u-r\) color, with the colorbar indicating the range of values in the scale. Even for the most blue SPMs, the SPMs are still predominantly red in color. The dashed line in the [NII]\(\lambda\)6583/H\(\alpha\) diagram (left panel) is the empirical SF line of Kauffmann et al. (2003) and the straight line that divides Seyferts and LINERs is from Schawinski et al. (2007a). Seyferts and LINERs are divided in the [SII]\(\lambda\)6717/H\(\alpha\) (middle panel) and [OI]\(\lambda\)6300/H\(\alpha\) (right panel) diagrams from the line of Kewley et al. (2006). In all diagrams, the solid line is the theoretical maximum from the starburst models of Kewley et al. (2001). Galaxies that fall between the lines of Kauffmann et al. (2003) and Kewley et al. (2001) are SF-AGN composite, while those below Kauffmann et al. (2003) are purely SF. RMS and a \(3\%\) uncertainty in the absolute flux scale (Perley and Butler, 2017). The observed radio flux densities span \(0.90-30\) mJy, with a median of \(12\) mJy, for the \(12\) LoTSS detections; \(1.1-12\) mJy, with a median of \(4.2\) mJy, for the \(7\) RACS detections; \(0.78-16\) mJy, with a median of \(2.7\) mJy, for the \(15\) FIRST/NVSS detections; \(0.50-8.8\) mJy, with a median of \(2.0\) mJy, for the \(13\) VLASS/\(3\) GHz VLA detections; and \(0.06-2.7\) mJy, with a median of \(0.50\) mJy, for the \(18\) with a \(10\) GHz VLA detection. Each SPM galaxy has an associated redshift measurement from SDSS. We used this and the flux density measurement to calculate a luminosity, in units of Watts (W), at each observing frequency for all of the detected radio sources. We show the luminosity distributions for the \(\nu<1\) GHz (LoTSS, RACS) and \(\nu>1\) GHz (FIRST/NVSS, VLASS, VLA) surveys in Figure 2. 
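As an illustration of the luminosity calculation just described, the sketch below converts an observed flux density and SDSS redshift into \(\nu L_{\nu}\) in watts, assuming the Planck cosmology adopted in Section 1 and neglecting any k-correction (which the text does not specify); the example numbers are placeholders, not entries from Table 2.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# Adopted cosmology from Section 1 (H0 = 67.4 km/s/Mpc, Omega_m = 0.315).
cosmo = FlatLambdaCDM(H0=67.4, Om0=0.315)

def nu_l_nu(flux_mjy: float, freq_ghz: float, z: float) -> float:
    """Spectral luminosity nu*L_nu in watts from an observed flux density.

    Sketch only: assumes an isotropic source and applies no k-correction.
    """
    d_l = cosmo.luminosity_distance(z).to(u.m)
    s_nu = flux_mjy * 1e-29 * u.W / (u.m ** 2 * u.Hz)  # 1 mJy = 1e-29 W m^-2 Hz^-1
    l_nu = 4.0 * np.pi * d_l ** 2 * s_nu               # W / Hz
    return (l_nu * freq_ghz * 1e9 * u.Hz).to(u.W).value

# Placeholder example: a 2 mJy source at 1.4 GHz and z = 0.05; compare against
# the radio-loud demarcation nu*L_nu ~ 1e32 W discussed in Section 5.2.
print(f"{nu_l_nu(2.0, 1.4, 0.05):.2e} W")
```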
After calculating the luminosities for each source, we compared these to the standard radio-loud demarcation \begin{table} \begin{tabular}{c c c c c} \hline \hline \multicolumn{1}{c}{ Source} & RA & Dec & \(z\) & BPT \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) \\ \hline J0206\(-\)0017 & 31.567 & -0.291 & 0.043 & AGN \\ J0759+2750 & 119.952 & 27.839 & 0.067 & Composite \\ J0833+1523 & 128.289 & 15.398 & 0.076 & Quiescent \\ J0843+3549 & 130.937 & 35.828 & 0.054 & AGN \\ J0851+4050 & 132.978 & 40.836 & 0.029 & LINER \\ J0908+1407 & 137.156 & 14.122 & 0.088 & Quiescent \\ J0916+4542 & 139.212 & 45.700 & 0.026 & Composite \\ J0933+1048 & 143.447 & 10.811 & 0.085 & Quiescent \\ J1015+3914 & 153.992 & 39.243 & 0.063 & Starforming \\ J1018+3613 & 154.640 & 36.224 & 0.054 & AGN \\ J1041+1105 & 160.266 & 11.096 & 0.053 & LINER \\ J1056+1245 & 164.196 & 12.762 & 0.092 & Quiescent \\ J1113+2714 & 168.419 & 27.241 & 0.037 & Starforming \\ J1117+3757 & 169.385 & 37.963 & 0.096 & LINER \\ J1124+3005 & 171.142 & 30.095 & 0.055 & LINER \\ J1135+2913 & 173.781 & 29.891 & 0.046 & Starforming \\ J1144+2309 & 176.183 & 23.162 & 0.048 & Quiescent \\ J1230+1146 & 187.554 & 11.770 & 0.089 & Quiescent \\ J1253+3944 & 193.458 & 39.738 & 0.092 & Quiescent \\ J1304+6520 & 196.060 & 65.346 & 0.083 & AGN \\ J1314+2607 & 198.656 & 26.123 & 0.074 & Quiescent \\ J1326+5653 & 201.726 & 56.889 & 0.090 & Quiescent \\ J1405+4001 & 211.414 & 40.032 & 0.084 & Quiescent \\ J1433+3444 & 218.327 & 34.735 & 0.034 & LINER \\ J1445+5134 & 221.438 & 51.581 & 0.030 & Composite \\ J1511+0417 & 227.771 & 4.294 & 0.042 & LINER \\ J1511+2309 & 227.964 & 23.151 & 0.052 & Quiescent \\ J1517+0409 & 229.454 & 4.162 & 0.037 & Quiescent \\ J1617+2512 & 244.426 & 25.206 & 0.031 & Composite \\ J1655+2639 & 253.790 & 26.663 & 0.035 & Quiescent \\ \hline \end{tabular} Note. – Column 1: Source name. Column 2: Right Ascension. Column 3: Declination. Column 4: Spectroscopic redshift from SDSS. Column 5: BPT classification. \end{table} Table 1: Spheroidal Post-Merger Sample & BPT Classification spectral luminosity \(\nu L_{\nu}\approx 10^{32}\) W. A luminosity value above this demarcation is considered radio-loud; radio AGN dominate the radio luminosity function above this demarcation (Condon et al., 2002; Kimball et al., 2011). We find that J1018+3613 is radio-loud at GHz frequencies, and J1304+6520 is radio-loud at 3 and 10 GHz. The remaining radio sources are all radio-quiet objects. ### Radio Morphology We describe the bulk radio morphology properties of the detected sources in each of the radio surveys. Each individual source and its intensity maps are discussed and presented in Appendix A. 
For each source, we categorize the morphology into one of the following classifications: \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{ Source} & \(S_{\rm LofTSS}\) (mJy) & \(S_{\rm RACS}\) (mJy) & \(S_{\rm 1.4GHz}\) (mJy) & \(S_{\rm 3GHz}\) (mJy) & \(S_{\rm 10GHz}\) (mJy) \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) \\ \hline J0206\(-\)0017 & – & \(5.38\pm 0.81\) & \(3.36\pm 0.18\) & \(1.91\pm 0.07\) & \(1.64\pm 0.06\) \\ J0759+2750 & \(11.3\pm 0.6\) & \(4.23\pm 0.52\) & \(3.45\pm 0.21\) & \(1.93\pm 0.15\) & \(0.786\pm 0.016\) \\ J0833+1523 & – & \(<1.65\) & \(<0.41\) & \(<0.48\) & \(<0.06\) \\ J0843+3549* & \(29.7\pm 0.8\) & – & \(3.77\pm 0.17\) & \(2.79\pm 0.13\) & \(0.539\pm 0.017\) \\ J0851+4050 & \(1.0\pm 0.2\) & – & \(<0.43\) & \(<0.37\) & \(0.226\pm 0.015\) \\ J0916+4542 & \(2.6\pm 0.2\) & – & \(0.782\pm 0.13\) & \(<0.33\) & \(0.11\pm 0.013\) \\ J1015+3914** & \(11.5\pm 0.4\) & – & \(1.36\pm 0.19\) & \(1.13\pm 0.13\) & \(0.526\pm 0.024\) \\ J1018+3613 & \(23.4\pm 0.9\) & – & \(16.2\pm 0.8\) & \(8.75\pm 0.25\) & \(2.72\pm 0.08\) \\ J1041+1105 & – & \(3.3\pm 0.5\) & \(<0.39\) & \(<0.45\) & \(0.08\pm 0.012\) \\ J1056+1245 & – & \(<1.2\) & \(<0.41\) & \(<0.47\) & \(<0.04\) \\ J1113+2714 & – & \(1.08\pm 0.30\) & \(1.32\pm 0.14\) & \(<0.35\) & \(0.063\pm 0.012\) \\ J117+3757 & \(2.0\pm 0.2\) & – & \(<0.43\) & \(<0.33\) & \(<0.04\) \\ J1124+3005 & \(0.9\pm 0.2\) & – & \(<0.44\) & \(<0.33\) & \(<0.03\) \\ J1135+2913 & \(15.2\pm 0.6\) & – & \(3.38\pm 0.22\) & \(2.89\pm 0.14\) & \(0.355\pm 0.013\) \\ J1144+2309 & – & \(<1.38\) & \(<0.45\) & \(<0.34\) & \(<0.04\) \\ J1230+1146 & – & \(<16\) & \(<0.65\) & \(<0.65\) & \(<0.05\) \\ J1253+3944 & \(<0.30\) & – & \(<0.39\) & \(<0.41\) & \(<0.04\) \\ J1304+6520 & \(12.3\pm 0.5\) & – & \(2.41\pm 0.24\) & \(2.95\pm 0.22\) & \(0.99\pm 0.29\) \\ J1314+2607 & \(<0.30\) & \(<0.78\) & \(<0.41\) & \(<0.40\) & \(<0.04\) \\ J1326+5653 & \(<0.30\) & – & \(<0.46\) & \(<0.56\) & \(<0.04\) \\ J1405+4001 & \(<0.30\) & – & \(<0.41\) & \(<0.50\) & \(<0.04\) \\ J1433+3444 & \(15.6\pm 0.7\) & – & \(2.69\pm 0.19\) & \(2.02\pm 0.16\) & \(1.15\pm 0.036\) \\ J1445+5134 & \(28.2\pm 1.0\) & – & \(11.9\pm 0.58\) & \(6.06\pm 0.14\) & \(1.05\pm 0.049\) \\ J1511+0417* & – & \(<2.3\) & \(1.55\pm 0.17\) & \(1.068\pm 0.032\) & \(1.342\pm 0.039\) \\ J1511+2309 & – & \(11.8\pm 1.7\) & \(1.32\pm 0.19\) & \(0.495\pm 0.031\) & \(0.249\pm 0.017\) \\ J1517+0409 & – & \(<1.8\) & \(<0.45\) & \(<0.41\) & \(0.12\pm 0.013\) \\ J1617+2512 & – & \(1.47\pm 0.46\) & \(1.41\pm 0.15\) & \(1.06\pm 0.13\) & \(0.22\pm 0.012\) \\ J1655+2639 & – & \(7.8\pm 0.59\) & \(4.70\pm 0.23\) & \(2.50\pm 0.07\) & \(0.77\pm 0.02\) \\ \hline \end{tabular} Note. – Column 1: Source name. Column 2: LoTSS (144 MHz) flux density and error (Shimwell et al., 2022). Upper limits indicate a 3\(\sigma\) non-detection, whereas no entry means the source was not included in the survey field. Column 3: RACS (888 MHz Hz) flux density and error (McConnell et al., 2020; Hale et al., 2021). Column 4: 1.4 GHz flux density and error, reported from either FIRST (27/28; Helfand et al., 2015) or NVSS (1/28; Condon et al., 1998). Column 5: 3 GHz flux density and error, reported from either VLASS (22/28; Lacy et al., 2020) or for the first time from VLA observations (6/28). Column 6: 10 GHz flux density and error, reported from our VLA observations. *: These flux density measurements are reported for the dominant component when the source is resolved into a multi-component morphology. 
**: The flux density at 10 GHz was found after applying a _uv_-taper to the image plane. \end{table} Table 2: Integrated Flux Density Values 1. **Unresolved**: The peak-to-integrated flux density ratio is unity within 3\(\sigma\) uncertainty and the source is a single Gaussian component that does not exhibit any flux beyond the synthesized beam. 0/12 sources are unresolved by LoTSS; 3/7 sources by RACS; 12/15 sources by FIRST/NVSS; 5/13 sources by VLASS or 3 GHz VLA; and 6/18 sources by our 10 GHz VLA observations. 2. **Marginally Resolved**: The peak-to-integrated flux density ratio is unity within 3\(\sigma\) uncertainty and the source is marginally extended along one axis of the synthesized beam. 1/12 sources are marginally resolved by LoTSS; 2/7 sources by RACS; 2/15 sources by FIRST/NVSS; 2/13 sources by VLASS or 3 GHz VLA; and 1/18 sources by our 10 GHz VLA observations. 3. **Resolved**: The peak-to-integrated flux density ratio of the source is significantly less than unity, and the deconvolved major and minor axes have non-zero size. 11/12 sources are resolved by LoTSS; 2/7 sources by RACS; 1/15 sources by FIRST/NVSS; 6/13 sources by VLASS or 3 GHz VLA; and 9/18 sources by our 10 GHz VLA observations. 4. **Multi-component**: The intensity map of the radio source shows two or more distinct radio components common to one central engine. 0/12 sources are multi-component in LoTSS; 0/7 in RACS; 0/15 in FIRST/NVSS; 1/13 in VLASS or 3 GHz VLA; and 2/18 in our 10 GHz VLA observations. Visual inspection of the FIRST intensity map of J1511+2309 source (Figure 15) shows two distinct radio components separated by \(\sim 10^{\prime\prime}\), or 10 kpc at the redshift of the source. The central component is compact and spatially coincident with the optical nucleus of the host galaxy. The second component is extended in morphology and has no optical counterpart. However, there is no clearly definable jet axis which extends from the primary component in any of the 1.4, 3, or 10 GHz maps that may link the two radio components to a common central engine. The VLASS map, which is closer in resolution to FIRST than the 3 GHz VLA observations of this source, further resolves this second component into two more distinct components, neither of which lie along a preferential axis and appear to be unassociated with one another or the primary component. This is similarly true upon examination of the 3 GHz, higher resolution map produced from VLA observations of J1511+2309. For these reasons, we concluded that the Figure 2: _Left_: Luminosity (\(\nu L_{\nu}\)) distribution of the radio sources associated with each spheroidal post-merger (SPM) galaxy from the radio surveys observed in the MHz regime: LoTSS (solid blue) and RACS (hatched green). Some of the SPM galaxies were not observed in each survey. The dashed vertical line represents the demarcation between radio-loud and radio-quiet objects at 1.4 GHz, or \(\nu L_{\nu}=1.4\times 10^{32}\) W. _Right_: Same as left but for radio surveys observed in the GHz regime: FIRST or NVSS (dotted red), VLASS or 3 GHz VLA (solid purple), and 10 GHz VLA (hatched). second component detected by FIRST is unassociated with this SPM and did not classify the radio emission associated with this galaxy as multi-component. Similarly, the 144 MHz LoTSS map of J0843+3549 shows two distinct radio components separated by 27.3\({}^{\prime\prime}\). 
The second radio source, observed to the southwest of the primary source, is associated with the galaxy cluster GMBCG J130.93151+35.82210 (Hao et al., 2010) at a redshift of \(z=0.475\)(Rozo et al., 2015). We did not classify the 144 MHz morphology of J0843+3549 as multi-component because of this. Figure 3 shows the logarithm of the peak-to-integrated flux density ratios for each of the detected sources. It is clear that at the lowest frequency, 144 MHz, each of the radio sources has an extended emission component. For most of these sources, this extended emission is diffuse and non-axisymmetric, meaning it is unlikely from an AGN. Each of the LoTSS sources, however, still displays an unresolved component that is spatially coincident with the optical center of the host galaxy. J1433+3444 is the only source with collimated emission. We discuss this in Section A.12. For the peak-to-integrated flux density ratio of this source, we determined the total integrated flux density of the unresolved, nuclear emission plus the diffuse component by applying a mask to the region of interest in the intensity map within the CASA task VIEWER. However, the 144 MHz flux density reported in Table 2 and used in Section 5.4 is only that of the unresolved emission. We did this to mitigate the effects of artificial steepening of the radio spectrum of the nuclear emission, which is the main emission region of interest for our study. For the 7 sources with a detection by RACS, we find a mix of unresolved, resolved, and moderately resolved sources. At 1.4 GHz, the majority of sources become unresolved, with the unresolved emission being spatially coincident with the optical nucleus of the host galaxy. This shows that compact, nuclear emission is prevalent in the radio-detected SPM galaxies. However, higher resolution observations at 3 GHz reveal that extended emission is indeed prevalent in the majority of sources (8/13), and the unresolved emission of J1511+0417 resolves into two components. Likewise, our 10 GHz observations further reveal extended emission of these nuclear radio sources. With the high resolution of our observations, many of the formerly unresolved sources at lower frequencies now show a diffuse, non-axisymmetric component to the nuclear radio source (J0759+2750, J1015+3914, J1135+2953, J1304+6520 and J1617+2512). Like J1511+0417 at 3 GHz, J0843+3549 resolves into two components at 10 GHz. ### Radio Spectra The radio spectrum is a useful tool for interpreting the underlying physical characteristics of the radio source, including the dominant production mechanism for the observed emission. For AGN, the radio emission is dominated by synchrotron emission from a distribution of relativistic electrons, creating a distinctive non-thermal power law spectrum. For older jetted and lobe structures, the highest energy electrons in the distribution will radiate away the fastest, causing a break in the power law at higher frequencies and creating a steep radio spectrum with a power law slope \(\alpha<-0.5\). Radio cores, which are associated with the region of emission closest to the active SMBH itself, are actively injected with fresh high energy electrons, creating a flat spectrum with a power law slope \(\alpha>-0.5\). For HII regions, thermal emission is dominant at rest-frame frequencies of \(\nu>10\) GHz, and is characterized by a power law slope \(\alpha\approx-0.1\). At \(\nu<10\) GHz, the non-thermal emission from supernova remnants (SNR) dominates, with varying power law slopes. 
Both mechanisms of emission can be self-absorbed at low frequency, causing a characteristic spectral turnover and inverted slope \(\alpha>0\). Identifying and quantifying the power law slope \(\alpha\), as well as the curvature and peak frequency, if present, can greatly aid in the interpretation of the radio source. To explore this parameter space, we constructed a radio SED using the multi-frequency flux density measurements we have tabulated for each of the 18 radio sources we detected with our 10 GHz observations. We have chosen not to include J1117+3757 and J1124+3005, since these two SPMs were only detected by LoTSS. 12 of these 18 radio sources were observed and detected in 4 or more of the radio surveys we have used. We considered these 12 to be well-sampled radio spectra for SED analysis. To constrain the overall shape of the radio SED for these 12 well-sampled radio sources, we have performed a two-fold fitting procedure. First, the spectrum is fit by a simple power law of the form \[S_{\nu}=A\nu^{\alpha}, \tag{1}\] where \(S_{\nu}\) is the flux density in mJy, \(\nu\) is the observing frequency in GHz, and \(\alpha\) is the spectral index value. The second fit describes a parabola in log space and accounts for curvature in the overall shape of the radio spectrum: \[S_{\nu}=A\nu^{\alpha}\mathrm{e}^{q(\ln\nu)^{2}}\,, \tag{2}\] where \(S_{\nu}\), \(\nu\), and \(\alpha\) are identical to Eqn. 1, and \(q\) gives the spectral curvature. For cases of significant curvature, e. g., \(|q|\geq 0.2\)(Duffy & Blundell, 2012), \(\alpha\) and lead to a peak frequency \(\nu_{peak}\) of \[\nu_{peak}=\mathrm{e}^{-\alpha/2q}\,. \tag{3}\] Here, \(q\) is strictly phenomenological. Physically-motivated synchrotron self-absorption or free-free absorption models, or models with multiple electron populations, would require the use of more free parameters, e. g., more flux density measurements at distinct frequencies, than were available for this analysis (Tingay et al., 2015). However, \(q\) is still an important constraint to describe the overall shape of the radio spectrum and can hint at the underlying physical mechanism of the radio emission (Callingham et al., 2017; Nyland et al., 2020; Patil et al., 2022). For the remaining 6 sources without well-sampled spectra, the maximum number of detections for a single source across all the surveys used is 3. Then, we could not perform the curved power law fit to these spectra given the paucity of data. In addition, the spectral index values determined by our two-fold fitting procedure are often difficult to interpret for such a wide frequency range, spanning approximately 2 decades in frequency for some sources. To obtain a representative spectral index value, we performed a linear fit to the 3 and 10 GHz flux density values for each of the eighteen 10 GHz-detected radio sources. For those sources without a 3 GHz detection, this estimate provides a lower limit to the actual spectral index value. We chose to use the 3 and 10 GHz flux density values because our ultimate goal is to characterize the nuclear radio emission detected by our high resolution VLA observations. These observing frequencies have the highest angular resolution among the surveys used for our analysis, giving us the best approximation to the true spectral index value of the nuclear radio emission. The two-point spectral index \(\alpha_{3}^{10}\) is given by \[\alpha_{3}^{10}=\frac{\log(S_{3}/S_{10})}{\log(3/10)}\,. 
\tag{4}\] with an associated error of \[\sigma_{\alpha}=\frac{1}{\ln(10/3)}\sqrt{\left(\frac{\sigma_{S_{3}}}{S_{3}} \right)^{2}+\left(\frac{\sigma_{S_{10}}}{S_{10}}\right)^{2}}\,, \tag{5}\] given by standard propagation of errors. Figure 4 and Figure 5 show each radio SED and the results of our fitting analyses for the eighteen 10 GHz-detected radio sources. Table 3 lists the reduced \(\chi^{2}\) values of the power law and curved power law fits for the 12 well-sampled radio spectra. For each of these, the spectral curvature parameter \(q\) is provided for those spectra that are better fit by a curved power law than Figure 3: _Left_: Distribution of peak-to-integrated flux density ratios of the 10 GHz-detected SPM galaxies in the LoTSS (solid blue) and RACS (hatched green). Some of the SPM galaxies were not observed in each survey. More negative values indicate resolved structure of the radio source. _Right_: Same as left but for FIRST or NVSS (dotted red), VLASS or 3 GHZ VLA (solid purple), and 10 GHz VLA (hatched). a simple power law (\(\chi^{2}_{\rm red,PL}>\chi^{2}_{\rm red,CPL}\)). We also list the peak frequency \(\nu_{\rm peak}\) for the 3 sources that have \(q\leq-0.2\). Table 3 also lists the two-point spectal index value \(\alpha_{3}^{10}\), or its lower limit, for all 18 sources. We find that 5 of the 12 well-sampled radio SEDs show evidence of significant curvature: J0206\(-\)0017, J1018+3613, J1445+5134, J1511+2309 and J1617+2512. Of these, J1018+3613, J1445+5134 and J1617+2512 are all peaked spectrum objects, while J0206\(-\)0017 and J1511+2309 have \(q>0.2\), indicative of an inverted spectrum. Visual inspection of the radio SEDs for these two sources show that their actual nature is most likely not inverted. Instead, it's probable that larger-scale emission components with steeper spectra are being resolved out by the higher frequency, higher resolution observations, leaving only the most compact Figure 4: Broadband radio SED for each of the 12 well-sampled radio sources detected by our 10 GHz observations. Radio surveys used are labeled by different markers and colors, with the key in the upper right of each SED. 1\(\sigma\) errors are plotted for each flux density measurement. The best-fit power law and curved power law are plotted as the dashed and solid black lines, respectively. features as the sole contributor to the recovered flux density and causing the spectrum to flatten. Indeed, the flat-spectrum nature of J0206\(-\)0017 was confirmed by Walsh et al. (2023). We expect that higher frequency observations of J1511+2309 would confirm the presence of a flat-spectrum object in this source. We did not find evidence of significant curvature in the remaining 7 well-sampled radio SEDs. The two-point spectral index values \(\alpha_{3}^{10}\) span a range \(0.19\geq\alpha_{3}^{10}\geq-1.74\). The majority of sources (10/13) have a steep spectral index, e. g., \(\alpha<-0.5\); J0206\(-\)0017 and J1433+3444 have a flat spectral index, e. g., \(-0.5\leq\alpha\leq 0\); and J1511+0417 has an inverted spectral index value of \(\alpha_{3}^{10}>0\). The spectrum of J1511+0417 is likely truly inverted, unlike the spectra of J0206\(-\)0017 and J1511+2309, though more flux density measurements are required to confirm this. Of those sources with lower limits on \(\alpha_{3}^{10}\), J0851+4050 likely has a flat spectral index given that its \(\alpha_{3}^{10}>-0.41\). The limits for the remaining 4 radio sources leave their radio spectral class ambiguous. 
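The fitting procedure of Section 5.4 can be reproduced with a few lines of scipy; the sketch below fits the simple and curved power laws of Eqs. 1 and 2, derives the peak frequency of Eq. 3, and evaluates the two-point index and its error from Eqs. 4 and 5. The flux densities in the example call are placeholders, not measurements from Table 2, and the unit of the peak frequency follows the assumption that \(\nu\) is in GHz, as stated for Eq. 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(nu, a, alpha):            # Eq. 1
    return a * nu ** alpha

def curved_power_law(nu, a, alpha, q):  # Eq. 2
    return a * nu ** alpha * np.exp(q * np.log(nu) ** 2)

def peak_frequency_ghz(alpha, q):       # Eq. 3 (meaningful for q < 0)
    return np.exp(-alpha / (2.0 * q))

def two_point_index(s3, s10, sig3, sig10):   # Eqs. 4 and 5
    alpha = np.log(s3 / s10) / np.log(3.0 / 10.0)
    sigma = np.sqrt((sig3 / s3) ** 2 + (sig10 / s10) ** 2) / np.log(10.0 / 3.0)
    return alpha, sigma

# Placeholder SED: frequencies in GHz, flux densities and 1-sigma errors in mJy.
nu = np.array([0.144, 0.888, 1.4, 3.0, 10.0])
s = np.array([20.0, 12.0, 9.0, 5.0, 2.0])
err = np.array([1.0, 0.6, 0.5, 0.25, 0.08])

(a_pl, alpha_pl), _ = curve_fit(power_law, nu, s, p0=[10.0, -0.7], sigma=err)
(a_c, alpha_c, q_c), _ = curve_fit(curved_power_law, nu, s, p0=[10.0, -0.7, 0.0], sigma=err)
print(f"simple power law:  alpha = {alpha_pl:.2f}")
print(f"curved power law:  alpha = {alpha_c:.2f}, q = {q_c:.2f}")
if q_c < 0:
    print(f"peak frequency ~ {1e3 * peak_frequency_ghz(alpha_c, q_c):.0f} MHz")
print("alpha_3^10 = %.2f +/- %.2f" % two_point_index(5.0, 2.0, 0.25, 0.08))
```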
The radio SED of J0843+3549, J1015+3914, J1135+2953, and J1304+6520 each show a significant deviation from their best-fit simple and curved power laws at 1.4 GHz. This is reflected in the abnormally high reduced \(\chi^{2}\) values for the fits of these sources. This can be interpreted in two ways. First, the significant deviation at 1.4 GHz may be explained by intrinsic variability of the radio source. The observations at 144 MHz (LoTSS), 888 MHz (RACS), 3 GHz (VLASS/VLA), and 10 GHz (VLA) are quasi-contemporaneous; at most, the observations were taken within 5 years of one another. Yet, the 1.4 GHz observations, by either FIRST or NVSS, were conducted well over a decade ago, at Figure 5: Broadband radio SED for each of the 6 radio sources detected by our high resolution, 10 GHz observations that were a non-detection by 1 or more of the radio surveys used. Radio surveys are labeled by different markers and colors, with the key in the upper right of each SED. Unfilled markers indicate a 3\(\sigma\) non-detection in that survey. The two-point spectral index \(\alpha_{3}^{10}\) determined from a linear fit to the 3 and 10 GHz fluxes, or their limits, is plotted as the dot-dashed black line. least, at the time of this analysis. If the source underwent significant variability over a years-long timescale prior to its most recent observation at 1.4 GHz, all of the flux density measurements at frequencies besides 1.4 GHz would reflect this. It is possible, then, that these sources have naturally varied over this intervening time span and are no longer well fit to the quasi-contemporaneous data points of the other surveys. The corresponding variability amplitudes at 1.4 GHz, assuming a power law fit to the spectrum without the 1.4 GHz flux density properly describes the spectral shape, range from 12%-61%. These amplitudes are certainly plausible, given that some sources have been found to reach variability amplitudes higher than 2400% at this observing frequency (Nyland et al., 2020). Alternatively, the flux densities at lower frequencies are representative of a separate electron population. For example, LoTSS sources in the local universe will be dominated by diffuse emission associated with star-formation processes. Using higher frequency and higher spatial resolution observations, this diffuse emission will be resolved out. If there is a second, distinct population of electrons producing a radio source that is more compact and hosted by the same SPM, this will become apparent by a break in the broadband radio spectrum. Essentially, the two electron populations, both of which are located within the same host galaxy, are confused with each other at low frequency, and only the high frequency observations we have used are truly representative of the second, more compact, nuclear source. 
Follow-up observations at high angular resolution with better frequency sampling, including 1.4 GHz, are required to distinguish between the two methods we have outlined that may produce \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{ Source} & \(\chi^{2}_{\rm PL}\) & \(\chi^{2}_{\rm CPL}\) & \(\alpha^{10}_{3}\) & \(q\) & \(\nu_{\rm peak}\) (MHz) \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) \\ \hline J0206\(-\)0017 & 76.0 & 0.23 & -0.13 \(\pm\) 0.04 & 0.29 \(\pm\) 0.01 & – \\ J0759+2750 & 113 & 1.52 & -0.75 \(\pm\) 0.07 & -0.04 \(\pm\) 0.02 & – \\ J0843+3549* & 95.0 & 531 & -1.37 \(\pm\) 0.05 & – & – \\ J0851+4050 & – & – & \(>-0.41\) & – & – \\ J0916+4542 & – & – & \(>-0.91\) & – & – \\ J1015+3914 & 7164 & 5673 & -0.64 \(\pm\) 0.10 & 0.10 \(\pm\) 0.07 & – \\ J1018+3613 & 1048 & 21.8 & -0.97 \(\pm\) 0.03 & -0.19 \(\pm\) 0.02 & 290 \\ J1041+1105 & – & – & \(>-1.43\) & – & – \\ J1113+2714 & – & – & \(>-1.42\) & – & – \\ J1135+2913 & 1348 & 1538 & -1.74 \(\pm\) 0.05 & – & – \\ J1304+6520 & 13.9 & 186 & -0.91 \(\pm\) 0.07 & – & – \\ J1433+3444 & 64.7 & 4.27 & -0.47 \(\pm\) 0.07 & 0.08 \(\pm\) 0.02 & – \\ J1445+5134 & 1458 & 19.3 & -1.46 \(\pm\) 0.04 & -0.18 \(\pm\) 0.02 & 155 \\ J1511+0417* & – & – & 0.19 \(\pm\) 0.03 & – & – \\ J1511+2309 & 217 & 114 & -0.57 \(\pm\) 0.08 & 1.61 \(\pm\) 0.36 & – \\ J1517+0409 & – & – & \(>-1.02\) & – & – \\ J1617+2512 & 216 & 4.92 & -1.31 \(\pm\) 0.11 & -0.37 \(\pm\) 0.08 & 1114 \\ J1655+2639 & 5.80 & 19.5 & -0.98 \(\pm\) 0.03 & – & – \\ \hline \end{tabular} Note. – Column 1: Source name. Column 2: Reduced \(\chi^{2}\) value for a power law fit to the radio SED. Only reported for sources with 4 or more survey detections. Column 3: Reduced \(\chi^{2}\) value for a curved power law fit to the radio SED. Only reported for souces with 4 or more survey detections. Column 4: Spectral index value, and its error, for the optically-thin emission, or its lower limit. This was estimated by using the 3 and 10 GHz flux density values, or the upper limit on the 3 GHz flux for non-detections at this frequency. Column 5: Spectral curvature parameter determined by the curved power law fit. Only reported if \(\chi^{2}_{\rm red,\,PL}>\chi^{2}_{\rm red,\,CPL}\). Column 6: Peak frequency of the radio SED. Only reported if \(\chi^{2}_{\rm red,\,PL}>\chi^{2}_{\rm red,\,CPL}\) and \(q<0\). Note: J0843+3549 and J1511+0417 are resolved into two distinct components at 10 GHz. For both, we used the dominant, southern component to calculate \(\alpha^{10}_{3}\). See Figure A3 and Figure A14 for the 10 GHz contour maps of J0843+3549 and J1511+0417, respectively. \end{table} Table 3: Radio Spectral Fitting Parameters this observed break in the radio spectrum. The nature of these sources is further discussed in Section 6. ## 6 Origin of Radio Emission In this section, we assess the origin of the radio emission in our SPM sample. Our analysis will make use of mid-IR fluxes available from the ALLWISE source catalog to perform a dust correction to the far-UV (FUV) flux for each SPM, for which we wish to calculate the star-formation rate (SFR). In this paradigm, active star formation heats dust grains in the surrounding interstellar medium that re-radiate this energy as thermal emission in the mid-IR. However, it is possible for an AGN to also assume this heat engine role and this will introduce systematic effects into our calculation of the SFR using the FUV fluxes. As such, we first identify if any of our sources are AGN by mid-IR selection criterion. 
To do this, we utilize a _WISE_ color-color diagram to search for mid-IR AGN using the selection criterion of Jarrett et al. (2011) for each of the eighteen 10 GHz-detected radio sources. These results are shown in Figure 6.

Figure 6: _WISE_ color-color diagram for the eighteen 10 GHz-detected SPM galaxies in our sample. The dashed black lines define the region of mid-IR AGN taken from the selection criterion of Jarrett et al. (2011). Two of our sources, J0206\(-\)0017 and J0843+3549, are selected as mid-IR AGN, while a third, J1018+3613, falls in this region within \(1\sigma\) error. Each of these has multi-wavelength evidence for an AGN. We conclude that their radio emission is associated with the AGN and remove these sources from further analyses to avoid possible introduction of systematics by the AGN to the SFR calculation of each host SPM.

We find that J0843+3549 and J0206\(-\)0017 are within the mid-IR AGN box of Jarrett et al. (2011), and J1018+3613 is consistent with a mid-IR AGN within errors. The remaining 15 sources are well outside of the AGN region and occupy the region of star-forming galaxies, e. g., spiral galaxies, luminous infrared galaxies (LIRGS) and starburst/ultraluminous infrared galaxies (ULIRGS) (Wright et al., 2010). J0206\(-\)0017 (Osterbrock, 1981; Cohen et al., 1986; McElroy et al., 2016; Walsh et al., 2023), J0843+3549 (Veron-Cetty & Veron, 2001; Stern & Laor, 2012; Koss et al., 2018), and J1018+3613 (Stern & Laor, 2012, Walsh et al. in prep) are all known AGN, in addition to being identified as Seyfert AGN via their emission-line ratios as discussed in Section 5.1. To mitigate the impact of potential systematics introduced by the mid-IR AGN to our SFR calculation, we conclude that the radio emission in each of these sources is associated with the AGN and remove them from the analyses described in this section. For each of the remaining fifteen 10 GHz-detected radio sources, we consider the following origin scenarios for their radio emission: thermal emission from star-forming regions, synchrotron emission from an individual radio supernova (RSN) or a population of supernova remnants (SNRs), or an AGN.

### Radio Excess

We begin by searching for excess radio emission in each of the remaining 15 SPM radio sources. To do this, we predict what the SFR for each of the SPMs would be from their 1.4 GHz luminosity and compare this radio-predicted SFR to that calculated from the FUV emission of the host galaxy. Sources that have an over-prediction of the SFR from their radio luminosity are called radio excess. These radio excess sources cannot be explained from SF processes alone, while those that do not show radio excess can be, though are not necessarily. We first predict the radio-based SFR using the 1.4 GHz luminosity. To do this, we use Equation 17 of Murphy et al. (2011): \[\left(\frac{\mathrm{SFR_{1.4GHz}}}{M_{\odot}\,\mathrm{yr^{-1}}}\right)=6.35 \times 10^{-29}\left(\frac{L_{\mathrm{1.4GHz}}}{\mathrm{erg\,s^{-1}\,Hz^{-1}}} \right)\,. \tag{6}\] This SFR is based on the FIR-radio correlation, which relates the galactic FIR properties to the galactic radio continuum properties. Murphy et al. (2011) note that the expected contribution to the total radio emission from non-thermal processes is negligible for some cases in which the emission is co-spatial with an active HII region. Condon & Yin (1990) argue that this is the case only for small HII regions, for which stars with \(M>8M_{\odot}\) can escape before exceeding their lifetime of \(<3\times 10^{7}\) yr. For our sources that are identified as SF or SF-AGN composite galaxies via their optical emission-line ratios, the SDSS spectroscopic fiber has a diameter of \(3^{\prime\prime}\). We only know that within the galactic region covered by the spectroscopic fiber, an active HII region is, at least partially for composite galaxies, contributing to the ionization. The area covered by the SDSS fiber
is much larger than the synthesized beamwidth of our 10 GHz VLA observations (\(\sim 0.2\arcsec\)). Because none of our sources show features comparable in angular size to the SDSS fiber, for the consistency of the analysis, we continued under the assumption that the HII region is large enough such that there could be significant non-thermal radio emission spatially coincident with it. To estimate the radio-based SFR, we use the 1.4 GHz luminosity values derived from the FIRST catalog entry for each source (or the NVSS entry, for J1304+6520). We do this instead of extrapolating to the 1.4 GHz luminosity using \(\alpha_{3}^{10}\) because, as mentioned in Section 5.4, some sources exhibit clear breaks from their best-fit power law at 1.4 GHz (J0843+3549, J1015+3914, J1135+2953, and J1304+6520), and the synthesized beams of FIRST and NVSS, \(5\arcsec\) and \(45\arcsec\), respectively, are better matched to galaxy-scale properties than the synthesized beam of our 10 GHz observations. It is important to most accurately trace the galaxy-scale radio emission because the radio-FIR correlation has been shown to deviate from a linear correlation for regions of radio emission with low thermal fractions (Hughes et al., 2006). At 1.4 GHz, the expected thermal contribution to the total radio emission for any radio source is approximately 5 to 10% (Condon, 1992; Murphy, 2013). This is true even for starburst systems, for which Murphy (2013) found a thermal fraction of 5% in a sample of 31 local starburst galaxies. We assume, then, that the radio sources we have detected are not extraordinary in this regard, and have low thermal fractions at 1.4 GHz. However, the spatial scale range for which Hughes et al. (2006) found the radio-FIR correlation to deviate from a linear correlation is from 50-250 pc for their low thermal fraction radio sources. At \(5\arcsec\) resolution, the smallest spatial scale probed for our 18 source sample is approximately 3 kpc, or an order of magnitude larger than what was found by Hughes et al. (2006). Because of this, we do not expect any deviations from the standard radio-FIR correlation using the 1.4 GHz luminosity for our sources. We now calculate the host galaxy SFRs for 13 SPMs that had FUV measurements available from _GALEX_. The remaining 2 did not have _GALEX_ measurements available and we describe the calculation of their SFRs later on in this section. We first correct the FUV luminosity for dust absorption by using the 25\(\mu\)m _WISE_ luminosity for each source, following Hao et al. (2011): \[L(\rm FUV)_{corr}=L(\rm FUV)_{obs}+3.89L(25\mu m)\,, \tag{7}\] where all luminosity values are in units of erg s\({}^{-1}\). Here, we have used the available _WISE_ 22\(\mu\)m flux density as a proxy for the 25\(\mu\)m luminosity, since the flux density ratio between these two values is expected to be unity for early-type galaxies (Jarrett et al., 2013).
After calculating the _WISE_-corrected FUV luminosity, we find the host galaxy SFR following Table 1 of Kennicutt & Evans (2012) for the FUV band: \[\left(\frac{\rm SFR}{M_{\odot}\,\rm yr^{-1}}\right)=4.5\times 10^{-44}L(\rm FUV )_{corr}\,. \tag{8}\] Using this method, the 1\(\sigma\) uncertainty on the SFR is 0.13 dex (Hao et al., 2011). For the two SPM detections without available _GALEX_ FUV measurements, J1304+6520 and J1511+2309, we use the H\(\alpha\) luminosity to calculate the host galaxy SFR. Kennicutt et al. (2009) provide a dust-attenuation correction to the H\(\alpha\) luminosity using the 25 \(\mu\)m luminosity: \[L(\rm H\alpha)_{corr}=L(\rm H\alpha)_{obs}+0.020L(25\mu m)\,. \tag{9}\] As before, all luminosity values are in erg s\({}^{-1}\). For \(L(\rm H\alpha)_{obs}\), we use the values provided by the OSSY catalog (Oh et al., 2011) for each of the two optical spectra. We again follow Table 1 of Kennicutt & Evans (2012) to calculate the host galaxy SFR using the dust-corrected H\(\alpha\) luminosity: \[\rm SFR=5.37\times 10^{-42}\left(\frac{L(\rm H\alpha)_{corr}}{\rm erg\,s^{-1}} \right)\,. \tag{10}\] The 1\(\sigma\) uncertainty for the H\(\alpha\) method is 0.4 dex (Kennicutt et al., 2009). The radio-based SFR is plotted against the galaxy-based SFR for each of our 10 GHz-detected SPM sources in Figure 7. The host galaxy SFRs are in the range \(0.2\,\rm M_{\odot}\,\rm yr^{-1}\leq SFR\leq 17\,\rm M_{\odot}\,\rm yr^{-1}\). We find 4 radio sources that do not have excess radio emission, indicating that their radio emission could be explained by star-formation processes alone. These are: J1015+3914, J1041+1105, J1445+5134, and J1617+2512. We note that J0851+4050, J1041+1105 and J1517+0409 are non-detections at 1.4 GHz, and thus their radio-based SFRs are upper limits. Nine of our sources fall on either side of the radio-excess line within their 1\(\sigma\) errors, including all of those sources that do not show excess radio emission. Only 4 sources have excess radio emission above a factor of 3\(\sigma\) from what is expected by star-formation processes. These are: J0759+2750, J1304+6520, J1433+3444, and J1655+2639. Now that we have identified which radio sources do or do not show excess radio emission, we seek to answer what physical process is the dominant means of radio emission production for each. The first consideration for the origin of the radio emission is that of thermal bremsstrahlung (free-free) emission produced by ionized hydrogen in active star-forming regions. In the optically-thin regime, radio emission that is dominated by a free-free component is characterized by a flat spectral index of \(\alpha=-0.1\) (Condon, 1992; Murphy et al., 2011; Klein et al., 2018). Each of the 4 non-radio-excess sources are in the optically-thin regime at GHz frequencies, as indicated by their broadband radio SEDs (see Figure 4 for J1015+3914, J1445+5134, and J1617+2512, and Figure 5 for J1041+1105). However, their optically-thin spectral index values, \(\alpha_{3}^{10}\), range from -1.46 to -0.64, with J1041+1105 having a lower limit of -1.43. Although there may be a contribution from free-free emission in each of these radio sources, it is clear from their spectral index values that free-free emission is not the dominant radio emission mechanism for any. This is not unexpected, as free-free emission from HII regions does not usually dominate the radio SED for \(\nu<10\) GHz (Condon, 1992; Murphy, 2013).

Figure 7: Comparison of the 1.4 GHz, radio-based SFRs to the host galaxy SFRs for 15 of the 10 GHz-detected radio sources in our sample of SPMs. J0206\(-\)0017, J0843+3549, and J1018+3613 were removed from this analysis due to the presence of an IR AGN. Host galaxy SFRs were determined using either the IR-corrected FUV (purple circles) or H\(\alpha\) luminosity (black squares). Unfilled data points are non-detections at 1.4 GHz, and their radio-based SFRs are upper limits determined using a 3\(\sigma\) detection threshold from FIRST. Points above the dashed line exhibit radio excess, while those below do not.
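The SFR comparison above reduces to a few unit conversions (Equations 6-10). A minimal sketch of that bookkeeping is given below; the input luminosities are placeholders for illustration, not measurements of any source in our sample.

```python
import numpy as np

# Placeholder luminosities (erg/s, and erg/s/Hz for the radio term), purely illustrative.
L_14GHz   = 2.0e29    # 1.4 GHz spectral luminosity [erg s^-1 Hz^-1]
L_fuv_obs = 3.0e43    # observed GALEX FUV luminosity [erg s^-1]
L_25um    = 1.0e43    # WISE 22 micron luminosity used as a 25 micron proxy [erg s^-1]
L_ha_obs  = 4.0e40    # observed H-alpha luminosity [erg s^-1]

# Equation 6 (Murphy et al. 2011): radio-predicted SFR from the 1.4 GHz luminosity.
sfr_radio = 6.35e-29 * L_14GHz                  # [Msun/yr]

# Equations 7-8 (Hao et al. 2011; Kennicutt & Evans 2012): dust-corrected FUV SFR.
L_fuv_corr = L_fuv_obs + 3.89 * L_25um
sfr_fuv = 4.5e-44 * L_fuv_corr                  # [Msun/yr]

# Equations 9-10 (Kennicutt et al. 2009; Kennicutt & Evans 2012): dust-corrected H-alpha SFR.
L_ha_corr = L_ha_obs + 0.020 * L_25um
sfr_ha = 5.37e-42 * L_ha_corr                   # [Msun/yr]

# A source is flagged as radio-excess when the radio-predicted SFR overshoots the host-galaxy
# SFR; the significance depends on the 0.13 dex (FUV) or 0.4 dex (H-alpha) uncertainties.
print(f"SFR(1.4 GHz) = {sfr_radio:.2f}, SFR(FUV) = {sfr_fuv:.2f}, ratio = {sfr_radio / sfr_fuv:.2f}")
print(f"SFR(H-alpha) = {sfr_ha:.2f} Msun/yr")
```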
### Radio Supernova and Supernova Remnants

Our second consideration for the origin of the nuclear radio emission in our SPM sources is from non-thermal emission produced by either an individual radio supernova (RSN) or a population of supernova remnants (SNRs). We first consider an individual RSN as the progenitor for the radio emission. RSN are morphologically compact radio sources that span a range of radio luminosity (Weiler et al., 2002) and spectral index values (Bendo et al., 2016; Klein et al., 2018; Galvin et al., 2018; Emig et al., 2020). The radio emission associated with a RSN is powered by synchrotron processes. Generally, for star-forming galaxies, this synchrotron emission is diffuse, tracing the host galaxy's morphology. In the optically thin regime, RSN associated with a Type Ib/c event have \(\alpha<-1\) (\(S_{\nu}\propto\nu^{\alpha}\)), while those associated with a Type II event have a shallower spectral index \(\alpha>-1\) (Weiler et al., 2002). The radio luminosity of an individual RSN will peak a few 100s of days after the initial SN explosion, reaching a maximum 5 GHz luminosity of \(L_{\rm 6cm\ peak}\approx 1.3\times 10^{27}\) erg s\({}^{-1}\) Hz\({}^{-1}\) (Weiler et al., 2002). However, two of the most luminous RSN, SN1998bw (Kulkarni et al., 1998) and PTF11qcj (Corsi et al., 2014; Palliyaguru et al., 2019), have a peak luminosity value as high as \(10^{29}\) erg s\({}^{-1}\) Hz\({}^{-1}\) at 5 GHz. We take this luminosity to be the upper limit to what a RSN can achieve and compare this to the extrapolated 5 GHz luminosity for each of our SPM detections. To extrapolate to the 5 GHz luminosity, we use \(\alpha_{3}^{10}\) for each source (Table 3, column 4) determined by performing a linear fit to the log of the 3 and 10 GHz flux density values. After extrapolation, 2 of the 15 detections have a 5 GHz luminosity greater than \(10^{29}\) erg s\({}^{-1}\) Hz\({}^{-1}\): J0759+2750 and J1304+6520. All of the sources with a lower limit to \(\alpha_{3}^{10}\) are below this luminosity. The remaining 13 have a median luminosity value of \(4.3\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\), which is only a factor of 2.3 lower than this RSN luminosity limit. While the spectral index and luminosity values do not rule out the individual RSN origin for these 15 sources, it is extremely unlikely that each radio source is associated with an individual, extremely luminous, nuclear RSN. Nonetheless, we pursue a more robust argument to rule out this scenario. Chomiuk & Wilcots (2009) determined an expression that relates the maximum 1.4 GHz luminosity of a RSN to the SFR of its host galaxy: \[L_{\rm 1.4\ GHz}^{\rm max}=\left(95^{+32}_{-23}\right){\rm SFR}^{0.98\pm 0.12 }\,, \tag{11}\] where the 1.4 GHz luminosity is in units of \(10^{24}\) erg s\({}^{-1}\) Hz\({}^{-1}\) and the SFR is measured in M\({}_{\odot}\) yr\({}^{-1}\). For a given SFR, we first use this relation to determine the maximum 1.4 GHz luminosity of a RSN, then extrapolate this to a 10 GHz luminosity to compare to our VLA sources. There is some freedom here in which value to choose for the spectral index.
Chomiuk & Wilcots (2009) use \(\alpha=-0.5\) when deriving the synchrotron emission from a RSN. This comes from the assumption that the cosmic ray (CR) energy spectrum is a power law of the form E\({}^{-2}\), which gives a synchrotron spectral index of \(\alpha_{\rm syn}=-0.5\). However, focusing on the most luminous RSNs, SN1998bw has a steeper spectral index of \(\alpha_{\rm syn}=-0.75\) (Chevalier and Fransson, 2006), and PTF11qcj has a varying late-time spectral index \(\alpha_{\rm syn}\gtrsim-1\) (Corsi et al., 2014). Bjornsson (2013) note that the spectral index should approach a value of \(\alpha_{\rm syn}=-1\) in the optically-thin regime, and, indeed, this is in agreement with those values listed in Table 1 of Chevalier and Fransson (2006). For our analysis, we have used a spectral index value of \(\alpha_{\rm syn}=-0.5\), as is done in Chomiuk and Wilcots (2009). We chose this spectral index value because it is the shallowest among those discussed. Thus, if any of our sources lie above the extrapolated RSN luminosity using a spectral index value of \(\alpha_{\rm syn}=-0.5\), they will certainly do the same for a steeper spectral index value. Using this method, we find that the observed 10 GHz luminosity of each radio source is greater than the expected luminosity of an individual RSN by at least a factor of 80. It is evident that an individual, luminous RSN is not responsible for the radio emission in these SPM galaxies. For the 4 radio sources that do not show excess radio emission, it is likely, then, that their radio emission is produced by a population of SNRs. Only one of these radio sources (J1015+3914) is hosted by a star-forming galaxy as determined by its optical emission-line ratios. Two are identified as SF-AGN composite galaxies (J1445+5134, J1617+2512), and the remaining one is a LINER (J1041+1105). The median spectral luminosity at 10 GHz of these 4 radio sources is \(1.4\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\). For comparison, the nuclear starbursts identified by Song et al. (2022) in a sample of 63 local (U)LIRGS with SFRs in the range of \(0.14-13\) M\({}_{\odot}\) yr\({}^{-1}\) have a median spectral luminosity of \(5.8\times 10^{27}\) erg s\({}^{-1}\) Hz\({}^{-1}\), or about a factor of 2 lower than what we have found for our radio SF sources. However, higher luminosity radio SF sources do exist: NGC 4945, a powerful, local starburst with a SFR of 1.5 M\({}_{\odot}\) yr\({}^{-1}\), has a higher spectral luminosity at 10 GHz of \(2.4\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\) (Lenc and Tingay, 2009). To emphasize, only J1015+3914 may be powered by SF processes alone, as determined by its emission-line ratios; the other three may have contributions from another ionization process, e. g., an AGN. Interestingly, J1015+3914 has the highest 10 GHz luminosity (\(4.3\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\)) of any of the non-excess sources. Among these sources with a detection in the FIRST catalog, none are resolved at the spatial scales probed by the \(5\arcsec\) FIRST beam. That is, they do not display the diffuse emission morphology that is characteristic of synchrotron emission from SNRs.
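The RSN test above is again a short piece of arithmetic: a two-point spectral index from the 3 and 10 GHz flux densities, a power-law extrapolation, and the Chomiuk & Wilcots (2009) ceiling of Equation 11. The sketch below uses placeholder inputs, not the values for any individual source in our sample.

```python
import numpy as np

# Placeholder inputs, purely illustrative.
S3, S10 = 1.8e-3, 0.9e-3        # 3 and 10 GHz flux densities [Jy]
D_cm = 3.1e26                   # luminosity distance [cm] (~100 Mpc)
sfr = 2.0                       # host-galaxy SFR [Msun/yr]

# Two-point spectral index between 3 and 10 GHz (S_nu ~ nu^alpha).
alpha = np.log10(S10 / S3) / np.log10(10.0 / 3.0)

# Observed 10 GHz spectral luminosity (neglecting the small K-correction at z < 0.1).
L10 = 4.0 * np.pi * D_cm**2 * S10 * 1e-23       # [erg s^-1 Hz^-1]

# Equation 11 (Chomiuk & Wilcots 2009): maximum 1.4 GHz luminosity of a single RSN for
# this SFR, extrapolated to 10 GHz with the shallow alpha_syn = -0.5 adopted in the text.
L14_max_rsn = 95.0 * sfr**0.98 * 1e24           # [erg s^-1 Hz^-1]
L10_max_rsn = L14_max_rsn * (10.0 / 1.4)**(-0.5)

print(f"alpha(3-10 GHz) = {alpha:.2f}")
print(f"L(10 GHz) observed / RSN maximum = {L10 / L10_max_rsn:.1f}")
```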
This diffuse morphology does become more apparent at low frequency (LoTSS or RACS), and at higher spatial resolution for a few sources, e. g., J1015+3914 (Figure A6), though remains elusive at GHz frequency in others, e. g., J1041+1105 (Figure A8) and J1617+2512 (Figure A17). This is discussed in more detail for each individual source in Appendix A. However, when examining the ratio of peak flux to integrated flux density, all of these sources become resolved at 10 GHz (see Figure 3). Deeper observations at 1.4 and 3 GHz may reveal lower surface brightness emission indicative of this characteristically diffuse nature.

### AGN

For the 11 radio-excess sources, it is likely that their radio emission is dominated by an AGN component. We did not find a one-to-one match between the radio AGN classification and that derived from our BPT analysis (Section 5.1). Instead, the radio AGN are found to occupy all categories of the BPT classifications: J1113+2714 and J1135+2913 are SF; J0759+2750, J0916+4542 are SF-AGN composites; J1304+6520, J1433+3444 are Seyfert AGN; J0851+4050, J1511+0417 are LINERs; and J1511+2309, J1517+0409, J1655+2639 are hosted by quiescent systems. In addition to these 11, we have already concluded that the radio emission in J0206\(-\)0017, J0843+3549, and J1018+3613 is each associated with an AGN. From the emission-line perspective alone, this indicates that radio AGN activity may be present during an ongoing or recent stage of star formation activity in the host SPM. In total, we find that the nuclear radio emission for 14 of the 18 sources at 10 GHz, or 78%, is dominated by a radio AGN. We begin by discussing the three radio sources hosted in quiescent emission-line galaxies: J1511+2309, J1517+0409, and J1655+2639. As noted before in Section 5.1, the spectra of J1511+2309 and J1655+2639 contain H\(\beta\) absorption that is most likely from a stellar origin due to its absorption trough being centered on the rest-frame H\(\beta\) wavelength. Because there is not a significant detection of any H\(\beta\) emission, as reported by OSSY (Oh et al., 2011), in these two spectra, we have classified them as quiescent, even though the [OIII]\(\lambda\)5007, H\(\alpha\), and [NII]\(\lambda\)6583 lines are detected with a S/N ratio \(>3\). The radio emission properties, however, indicate the clear presence of radio AGN activity, as we discuss further in this section. J1517+0409 is unique among these quiescent emission-line SPMs and radio AGN. The optical spectrum for this SPM is largely featureless: only the [NII]\(\lambda\)6583 line is detected with a S/N ratio \(>3\). Additionally, it was a non-detection by each of the archival radio surveys that observed it (RACS, FIRST, VLASS), making it the only 10 GHz-detected source with no additional radio detection(s). The radio-based SFR of J1517+0409 is within the radio-excess area shown in Figure 7, though this was determined by using a \(3\sigma\) detection limit to the FIRST image of this source. With more sensitive 1.4 GHz observations, it is likely that J1517+0409 would not show radio-excess, and the origin of its radio emission would need to be re-examined. However, considering the data available during this analysis, we count J1517+0409 among the likely radio AGN discovered by our observations.

#### 6.4.1 Radio AGN Morphologies

For these radio AGN, almost all of them display only a single-component morphology except for J0843+3549, J1433+3444, and J1511+0417.
The LoTSS map of J1433+3444 alone displays the extended morphology characteristic of AGN (Figure A12.) The majority of these radio AGN do not have collimated jets and/or extended lobe emission that is easily and clearly identifiable in any of their intensity maps (see Appendix A for all intensity maps). This is perhaps unsurprising given that almost all of our 10 GHz sources are radio-quiet, whereas radio-loud AGN are nearly ubiquitously associated with highly relativistic emission arising from radio jets. Yet, the majority of our sources are also not dominated by flat-spectrum radio cores. These objects are identified via their unresolved radio emission, flat spectral index (\(\alpha\geq-0.5\)), and high brightness temperature, indicative of non-thermal emission, and are almost always associated with an AGN. J0206\(-\)0017, J1433+3444 and J1511+0417 are the most likely sources to contain a dominant radio core when considering their unresolved morphology and flat spectral index values (Table 3). Indeed, this is the case for J0206\(-\)0017, as VLBI observations by Walsh et al. (2023) revealed that the radio emission remains compact down to pc scale, retains its flat spectral index, and shows chromatic variation in its position, confirming its radio-core nature. J0851+4050, J0916+4542, J1113+2714 and J1517+0409 only have lower limits to their spectral index value so the true nature of their radio AGN emission is ambiguous. It is evident from the ratio of peak-to-integrated flux density at 10 GHz that some of our single-component sources are moderately resolved (Figure 3). Since there is little evidence for kpc-scale radio jets or lobes at GHz frequencies, aside from J0843+3549 and J1511+0417, this moderately resolved nature of the nuclear emission could be an indication of young or frustrated radio jets, which are confined to only the central, sub-kpc region of their host galaxy. J1655+2639 is the most identifiable example of this scenario. The radio emission is moderately resolved along a linear feature (VLA X panel of Figure A18) that has a steep spectral index of \(\alpha_{3}^{10}=-0.97\pm 0.03\). Alternatively, the moderately resolved nature of some of these radio AGN sources can be explained by the presence of non-thermal emission associated with star formation. As noted, 2 of our radio AGN sources are hosted by SF emission-line galaxies, and 2 more are classified as SF-AGN composite. J0759+2750, hosted by a SF-AGN composite galaxy, is perhaps an archetypal source for SF and AGN activity because its 1.4 GHz radio luminosity is \(>3\sigma\) higher than what is expected from star formation alone, but the 10 GHz morphology shows both a diffuse, non-linear component and an unresolved component (VLA X panel of Figure A2). J0916+4542, also hosted by a SF-AGN composite galaxy, is similarly resolved by its peak-to-integrated flux density ratio at 10 GHz, although the point-like morphology makes the nature of the diffuse emission unclear (Figure A5). Like J0759+2750, J1135+2913, hosted by a SF galaxy, shows low surface brightness emission in its 10 GHz map (VLA X panel of Figure A10), though its radio emission is only a factor of 2.2\(\sigma\) higher than expected from star formation. However, the diffuse emission in J1135+2913 forms a linear feature. This is particularly evident once this source is imaged with a _uv_-taper to create a lower resolution map at 10 GHz. 
It is likely that this feature is a radio jet associated with the radio AGN, like J1655+2639, though we cannot rule out that there is no contribution to the diffuse radio emission from star-formation processes. The diffuse emission of J1304+6520 is more difficult to interpret (Figure A11). Like J0759+2750, J1304+6520's 1.4 GHz radio luminosity is \(>3\sigma\) higher than what is expected from star formation alone. However, unlike J0759+2750, J1304+6520 is hosted by a Seyfert AGN emission-line galaxy. So, if this diffuse emission is from SF processes, it is not evident that such would be the case from its optical emission-lines. To test if this diffuse emission would resolve into a jet-like feature, we created 10 GHz maps using different weighting schemes. However, these maps did not provide clear evidence of a radio jet. Further observations are required to ascertain the nature of this radio emission. For this analysis, we cannot conclusively determine that the emission is from a jet and, as a result, do not count it as such. Lastly, we discover a potential precessing radio jet in J0843+3549. The emission of the compact feature is moderately resolved in all of its intensity maps, and the position angle (PA) of this moderately resolved feature changes from a range of \(131\arcdeg-151\arcdeg\) at kpc scales to \(-18\arcdeg\) in our 10 GHz intensity map, nearly aligning with the second radio component (Figure A3). The different observing frequencies probe different periods in the AGN's evolution because of their differing resolution elements. Thus, the change in PA from the largest spatial scales (\(2.5\arcsec-6\arcsec\) resolution) to the smallest (\(\sim 0.25\arcsec\)) is indicative of time evolution in the PA of the radio jet. The near-alignment of the second radio component with the moderately-resolved structure of the primary points towards a common origin for both emission features. To confirm this precessing jet, further observations are needed over a wide frequency and spatial resolution coverage. Then, 4 of the 14 radio AGN in our sample (29%) show evidence for a compact radio jet from their morphology.

#### 6.4.2 Dual AGN Candidates

J0843+3549 and J1511+0417 are both radio doubles; that is, they show two morphologically distinct radio components (see Figure A3 and Figure A14). For both, the radio AGN is likely hosted by the southern, dominant component in each source. The two-point spectral index value \(\alpha_{3}^{10}\) for these dominant components is \(-1.37\pm 0.05\) and \(0.19\pm 0.03\) for J0843+3549 and J1511+0417, respectively. J0843+3549 is thus a candidate compact steep spectrum (CSS) object; CSS objects are compact radio sources less than 20 kpc in linear size with \(\alpha\leq-0.5\) (O'Dea & Saikia, 2021). J1511+0417 is host to a radio core, as is evident by the inverted spectral index and unresolved morphology of the southern component. The two components are separated by 1.6 kpc for J0843+3549, and 2.1 kpc for J1511+0417. For both systems, there is no clear radio emission which connects the dominant source to the weaker one. J1511+0417 is particularly noteworthy because of the co-location of the northern radio component with a second optical nucleus. Although this galaxy merger is classified as a post-merger system, a class defined as containing only a single optical nucleus, it is clear through the visual identification of distinct optical nuclei that this system is in an earlier stage of galaxy merger evolution, prior to the merging of the stellar nuclei.
Indeed, this is corroborated by the _GAIA_ photometric catalog containing an additional source identification at the location of the northern optical nucleus. The northern radio component has a two-point spectral index of \(-0.60\pm 0.04\), also making it a candidate CSS object. These characteristics make J1511+0417 a candidate dual AGN. J0843+3549 has also been identified as a dual AGN candidate by previous work. Using deep NIR imaging, Koss et al. (2018) revealed a population of hidden nuclear mergers in a sample of heavily obscured, hard X-ray selected AGN. They identified a second IR source in the central kpcs of this optically selected post-merger galaxy that was blended into the optical nucleus in its low-resolution SDSS image. We identified two radio components in J0843+3549, the dominant of which is co-spatial with the central IR/optical component, and the second component is found to the north of this. However, Koss et al. (2018) identified the second IR nucleus located 2.9 kpc to the east of the dominant component. If indeed the second IR nucleus of Koss et al. (2018) is associated with this galaxy merger and not a chance projection, we find no evidence of radio emission associated with it in our 10 GHz map, down to a \(3\sigma\) luminosity limit of \(2.7\times 10^{20}\) W Hz\({}^{-1}\). We require further observations of J0843+3549 to confirm the nature of the northern radio source, since the two components are blended in the VLASS Quick Look image and we do not have lower frequency observations at higher resolution such as exist for J1511+0417.

## 7 Discussion

### Prevalence of RQ Emission

Much attention has been given in recent years to studying the connection between merging galaxy systems and the triggering of a radio-loud or radio-quiet AGN (Ramos Almeida et al., 2011; Bessiere et al., 2012; Ramos Almeida et al., 2012; Chiaberge et al., 2015; Koziel-Wierzbowska et al., 2017; Pierce et al., 2022). The general finding is that radio-loud AGN are associated with merging galaxy systems, while the fraction of radio-quiet AGN hosted by merging galaxies is often indistinguishable from non-merging systems (Chiaberge et al., 2015), suggesting that the radio-quiet phase is ubiquitous in the formation of all early-type galaxies (Ramos Almeida et al., 2013). This is particularly striking given that 89% (16/18) of the radio emission we detected is radio-quiet, whether its origin is an AGN, SF, or a combination of both; and 86% (12/14) of the radio AGN are radio-quiet. From the overall sample, this translates into 57% (16/28) of our post-mergers hosting radio-quiet emission, 43% (12/28) hosting a radio-quiet AGN, and 7% hosting a radio-loud AGN. We emphasize important distinctions between our study and those focusing on the incidence of galaxy mergers in radio-loud/radio-quiet systems. These studies (e.g., Chiaberge et al., 2015; Breiding et al., 2023) placed constraints on the merger fraction of AGN host galaxies; they began by selecting for a sample of luminous quasars and examined the host galaxy morphology for signs of ongoing or recent gravitational interaction. Our sample, however, is of known merging galaxies that were selected only because of their host galaxy morphology and the presence of a single optical nucleus (Carpineti et al., 2012). Then, the merger fraction for our radio-quiet and radio-loud AGN is unity by definition; that is, 100% of the radio AGN are hosted by mergers.
We cannot make direct comparisons to similar, though distinct, AGN-merger studies because of the contrasting selection criteria used for the different samples. Additionally, this means that the C12 sample is unbiased towards AGN activity, whereas previous studies either favored, or outright required, the identification of a radio AGN. The nature of our study is to holistically examine the radio properties of these post-merger galaxies and constrain the progenitor of their radio emission, including SF-related activity. This is not to say that previous works have found that all merging galaxies will produce a radio-loud AGN; such a case is clearly ruled out by the existence of inactive merging galaxy systems, as is the case for a number of the SPMs in both their radio emission and optical emission-line activity presented in this work. The deep nature of our observations may also play a key role in the high fraction of radio-quiet AGN in our sample. The sensitivity limits of ongoing and past GHz-frequency radio surveys are close to, if not more than, an order of magnitude shallower than what we achieved with our 10 GHz observations. Without the high-significance 10 GHz detection revealed by our observations, a number of these radio sources would be classified as non-detections by standard survey selection criteria, which are often \(\geq 5\sigma\), and thus would be excluded from such a study examining the link between merging galaxies and the incidence of radio-loud/radio-quiet emission. However, because of the 10 GHz detection that is co-spatial with the low-significance radio emission revealed by these surveys, we are confident that this emission is of astrophysical origin and should be examined as such. Again, the nature of our study makes it impractical to compare our detection rates to the merger fraction of AGN hosts from previous studies. Nonetheless, it is expected that with deeper, more sensitive observations, the number of radio-quiet AGN hosted by galaxy mergers will increase. Such observations are required to attain a full representation of the radio-quiet AGN population, as is evidenced by the high fraction of radio-quiet AGN in our post-merger sample.

#### 7.1.1 The Role of Mergers in RQ AGN

The predominance of radio-quiet AGN among our radio AGN sample points to a scenario in which the majority of the SMBHs powering these AGN have low black hole spin values. This comes from the framework first proposed by Blandford & Znajek (1977), in which the energy extracted from a spinning BH via a highly magnetized accretion disk results in the launching of a jet. The radio-quiet/radio-loud dichotomy can then be explained by the SMBH possessing either a low or high spin (Wilson & Colbert, 1995). Because these radio AGN are hosted exclusively by post-merger galaxies, we can explore this SMBH spin paradigm from an interesting perspective: namely, the coalescence of a supermassive black hole binary (SMBHB). Wilson & Colbert (1995) proposed that the mass ratio of the SMBHB imposes significant evolutionary effects on the radio-loud nature of the coalesced SMBH. Only for SMBHBs with a mass ratio of order unity, with each SMBH of high mass (\(\geq 10^{8}M_{\odot}\)), will the resultant coalesced SMBH be highly spinning and thus able to form a radio-loud AGN. This event is intrinsically rare, since the mass function of SMBHs declines for high mass values (McLure & Dunlop, 2004; Hopkins et al., 2006; Gultekin et al., 2009), making the formation of such a SMBHB rare as well.
Leaving out J1511+0417, which is not a post-merger system and likely would not yet have formed a SMBHB, the radio-loud AGN are outnumbered by the radio-quiet AGN in our sample by a factor of 5.5, which is consistent with the broad expectations of the Wilson & Colbert (1995) framework. There is observational evidence to support the merger-spin framework to explain the radio-loud/radio-quiet dichotomy. de Ruiter et al. (2005) and Capetti & Balmaverde (2006) have found that radio-loud AGN are ubiquitously hosted by cored galaxies, i. e., those that show a flattening of their brightness distribution towards the optical nucleus. Cored galaxies themselves are products of SMBHB evolution. Once the SMBHB reaches pc separation, it will scatter stars whose orbits form close encounters with itself, creating the cored brightness distribution profile of the post-merger galaxy (Merritt & Milosavljevic, 2005). Then, at least a subset of cored post-merger galaxies will host a coalesced SMBH, some of which may form a radio-quiet AGN and a rare few a radio-loud AGN. This gives a self-consistent model for the triggering and high relative fraction of radio-quiet AGN emission we have discovered in our sample of post-merger galaxies. Follow-up radio and optical observations would be needed to test this hypothesis for our sample of post-mergers. Very Long Baseline Interferometry (VLBI) is needed to probe the state of any SMBHB in these radio AGN. In Section 8, we present the results of simulated observations using the current and next-generation VLBI instruments to ascertain the feasibility of these studies. Space-based or adaptive optics optical observations would be needed to model the brightness distribution profiles of these post-mergers to assess if they are truly cored galaxies. Alternatively, accretion can lead to a spinning up of the SMBH. As discussed in Volonteri et al. (2013) and Chiaberge et al. (2015), coherent accretion, i. e., the accreted material has a constant angular momentum axis, will lead to a spinning up of the single SMBH. However, this naturally requires the accretion to be constant over a significant time evolution, which, in turn, requires a large gaseous reservoir for the SMBH to reside in. For gas-rich galaxy mergers, it is well established that such a reservoir can be created via tidal torquing of the gas, which drives it towards the nucleus where it forms a circumbinary disk (Di Matteo et al., 2005). By examining the colors of the SPM sample, C12 concluded that at least 55% of these SPMs are the product of a galaxy merger consisting of one or more late-type, e. g., gas-rich, galaxies. This accretion-driven spin up of the SMBH may be a plausible origin for the radio emission in a subsample of these SPMs. However, it should be noted that this framework can only produce radio-loud AGN; the radio-quiet AGN require an alternative explanation. ### Impact of AGN Feedback Important to the overall discussion of post-merger evolution is an AGN's ability to either trigger or cut-off star formation in post-mergers through AGN feedback. The centralization of gas that occurs during a gas-rich galaxy merger can both fuel accretion to power an AGN and also act as a catalyst for triggering starburst activity. The chronology of these two processes is crucial: if the AGN activity is not prompt during the starburst period, AGN feedback will have little to no effect on the SFR of the host galaxy. Kaviraj et al. 
(2015) found that the host galaxies for a sample of VLBI-detected, merger-triggered radio AGN were a factor of 3 more likely than stellar mass- and redshift-matched inactive early-type galaxies to lie on the UV-optical red sequence. Using this and timescale arguments, the authors argue that these merger-triggered radio AGN are inefficient at regulating the SF in the host galaxy. Shabala et al. (2017) found a similar result for VLBI-detected radio AGN hosted by gas-rich minor mergers. By reconstructing the SF history of the host galaxies, they found that none of the radio AGN were triggered within 400 Myr of the onset of the starburst activity in the host, limiting their ability to impact the overall SFR. Carpineti et al. (2012) also found that the fraction of star-forming galaxies peaks in ongoing mergers, but the fraction of optical AGN peaks in post-merger systems, indicating different triggering times for these two processes during the merger evolution. Our radio-detected sources span a broad expanse of radio luminosity and dominant emission mechanisms. As shown in Section 6.3, at least 4 of these sources are likely dominated by emission from SNRs associated with past or ongoing SF. Three of these 4 radio sources are hosted by LINER or SF-AGN composite emission-line galaxies. These 3 may represent the very earliest stages of AGN feedback occurring in the post-mergers, as there is evidence for both an AGN and recent SF activity, or at least ambiguously for the LINER. On the other hand, 4 of the likely radio AGN are hosted by either SF or SF-AGN emission-line galaxies. These may represent the next stage of AGN feedback, as the AGN is now the dominant radio emission mechanism, outshining in the radio band the total emission from the supernova remnants associated with the SF. There is multi-wavelength evidence for potential feedback processes occurring in at least these 8 post-mergers based on their radio and emission-line activity, though, as shown in Section 6.1, almost all of the post-merger galaxies appear to be forming stars at a rate \(\geq 1\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\). Although for the SF-dominated radio sources we, naturally, cannot estimate the effect of an AGN-powered jet on feedback, we can discuss how our population of radio AGN fits within the realm of AGN feedback. Establishing the total AGN jet power from its radio luminosity is not straightforward (see Godfrey & Shabala, 2016). However, the capability of the jet to substantially impact the surrounding interstellar medium (ISM), creating feedback, can be broadly interpreted even from an order of magnitude estimation. We used the relation of Cavagnolo et al. (2010) to estimate the jet power for each of the radio AGN in our sample. We found a range of jet powers spanning \(10^{41}-10^{43}\) erg s\({}^{-1}\), making these low power radio jets. These jet powers are actually favorable for feedback processes: low power jets, confined to the central kpcs of the host, are more effective at large, sustained disruption of the surrounding ISM compared to high power jets that easily puncture through the dense circumnuclear ISM and remain collimated at large distances from the launching region (Mukherjee et al., 2016). Indeed, the compact morphology of the radio sources favors this scenario, as none of the radio AGN show collimated jets beyond the nuclear region of their host galaxy.
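The jet-power estimate above follows a single scaling between cavity power and radio luminosity. A minimal sketch of that conversion is given below; the slope and normalization are illustrative placeholders standing in for the published Cavagnolo et al. (2010) coefficients, which should be taken directly from that work for real estimates.

```python
import numpy as np

# Placeholder coefficients for a cavity-power vs. radio-luminosity scaling of the form
#   log10(P_jet / 1e42 erg/s) = A + B * log10(L_radio / 1e40 erg/s).
# Substitute the fitted values from Cavagnolo et al. (2010) before quoting numbers.
A, B = 1.9, 0.75

def jet_power(L_radio_erg_s, A=A, B=B):
    """Order-of-magnitude jet (cavity) power [erg/s] from a radio luminosity [erg/s]."""
    return 1e42 * 10.0**(A + B * np.log10(L_radio_erg_s / 1e40))

# Example: a radio-quiet AGN with a radio luminosity of ~1e38 erg/s.
print(f"P_jet ~ {jet_power(1e38):.1e} erg/s")
```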
In conjunction with the substantial SFRs of the host post-merger galaxies, these low jet powers and compact morphologies make these sources good candidates to study the impact of AGN feedback in post-merger galaxies by searching for AGN-triggered bursts of SF activity (positive feedback) and/or multiphase outflows (negative feedback).

### Spectral Index of SF-Related Emission

In the GHz regime, radio emission related to the shocks propagated by SNRs is optically thin, with a canonical spectral index value \(\alpha\sim-0.8\) (Condon, 1992). By comparing the radio-based SFR to that derived from host galaxy properties, we have identified 4 compact, nuclear, 10 GHz radio sources that are likely dominated by SF-related processes. Among these, we have placed strict constraints on the optically-thin spectral index for 3 of these sources using their 3 and 10 GHz flux densities. These 3 radio sources have a median optically-thin spectral index \(\alpha_{3}^{10}=-1.14\pm 0.08\), significantly steeper than the canonical value of \(\alpha\sim-0.8\). Such a difference may be cause for concern. However, recent work by Klein et al. (2018) has provided a more in-depth analysis of the broadband radio spectra of integrated SNR populations. By using nearly 2 decades of frequency coverage, these authors found that for 14 SF galaxies, the synchrotron spectral index at low frequency, i. e., \(\nu<\) 1 GHz, was similar to the canonical value. However, in the 1-12 GHz frequency range, the spectra required either a break or an exponential decline, indicating a steepening of the radio spectra. These features would be caused either by significant synchrotron or inverse-Compton losses to the high-energy electrons produced by the SN shocks, although Klein et al. (2018) explain that a cutoff in the synchrotron spectrum is difficult to explain without invoking a single-injection scenario, which is unlikely when considering the integrated properties of a SNR population. This situation is alleviated somewhat by the emission-line activity for each of the SF-dominated radio sources. Only J1015+3914 is identified as a purely SF galaxy from its emission-line ratios, and its radio source has the flattest optically-thin spectral index of these sources of interest with \(\alpha_{3}^{10}=-0.64\pm 0.10\). This is not dissimilar to the spectral index values found for other SF galaxies (Chyzy et al., 2018; An et al., 2021), and while slightly flatter than the canonical \(\alpha\sim-0.8\), this can be explained by the ratio of CR electrons favoring a younger, more energetic population, e. g., due to recent supernova activity. J1445+5134 and J1617+2512 have been identified as SF-AGN composites by their emission-line ratios, and have much steeper optically-thin spectral index values, \(-1.46\pm 0.04\) and \(-1.31\pm 0.11\), respectively. These steep spectral index values for SF galaxies have been observed before: using a sample of 41 6 GHz-detected submillimeter galaxies (SMGs), Thomson et al. (2019) found the median 1.4-6 GHz spectral index for a sub-sample of bright SMGs to be \(-1.35\pm 0.24\). Additionally, the radio spectra of the bright SMGs showed a steepening at these GHz frequencies when compared to the same spectra at MHz frequencies. Thomson et al. (2019) discussed that such a phenomenon may be caused by the mixing of distinct electron populations accelerated by decoupled processes that dominate at different frequencies and at different spatial scales.
We have presented this hypothesis to explain the spectral breaking (Section 5.4) and radio morphology (Section 6.4.1) for a number of different radio sources in our sample. We further extend this idea to the two SF-dominated radio sources hosted by SF-AGN composite emission-line galaxies. If there is a contribution from both a steep-spectrum radio AGN component and SNR population to the observed radio emission, such a mixing may cause a steepening of the radio spectral index. This would most likely arise from resolution effects, i. e., the diffuse emission from SNRs is resolved out at higher frequency, leaving only the steep-spectrum, compact AGN component. A scenario in which the diffuse emission from SNRs is cut off at higher frequency is unlikely, as discussed previously (Klein et al., 2018). Multi-band SED modeling of these sources is needed to better understand the fractional contribution of the AGN (Dietrich et al., 2018; Ramos Padilla et al., 2020). The remaining SF-dominated radio source is J1041+1105. Unlike for the other SF-dominated radio sources, we could only place a lower limit on the optically-thin spectral index of J1041+1105 (\(\alpha_{3}^{10}>-1.43\)) due to the non-detection of any radio emission at 3 GHz. Like J1445+5134 and J1617+2512, \(\alpha_{3}^{10}\) for this source may also be steeper than expected, given its SF-dominated nature. However, when considering the 1.4 GHz flux density limit, which is more sensitive than the 3 GHz limit, the lower limit to the spectral index becomes -0.81. We know, then, that J1041+1105 does not show the same type of highly steep spectral index at GHz frequencies that J1445+5134 and J1617+2512 do. This may be indicative of the more canonical radio emission associated with SF, like J1015+3914. However, unlike J1015+3914, the host galaxy of J1041+1105 is a LINER. Although the exact nature of LINER emission is ambiguous, the surface brightness profiles of the low-ionization emission lines found in LINER hosts through integral field spectroscopy favor a scenario in which the ionization is powered by post-asymptotic giant branch stars (Yan and Blanton, 2012; Singh et al., 2013). Such a scenario is also favorable due to the ubiquity of these stars in all galaxies, especially those with little active star formation. Radio observations, however, seemingly favor the presence of an AGN over pure SF-related processes to produce the observed radio emission (Filho et al., 2004; Singh et al., 2015). Although J1041+1105 may possibly have an optically-thin spectral index close to the canonical SF value of -0.8, more sensitive 1.4 and 3 GHz observations are required to better understand the association of this nuclear radio source to its LINER emission-line host galaxy.

## 8 SMBHBs in SPM Systems

### Do Our SPMs Host SMBHBs?

Radio observations are a powerful tool to probe both the kpc- and pc-scale environment to search for and confirm SMBHB candidates. At kpc scales, the morphology of the extended jets or lobes may hold signatures of SMBHB evolution: an X-shaped morphology due to a coalesced SMBHB (Begelman et al., 1980; Merritt and Ekers, 2002), or an S-shaped (helical) morphology due to jet precession caused by a SMBHB (e.g., Rubinur et al., 2017). These radio structures have steep spectral index measurements (\(\alpha\leq-0.5\)), making low frequency observations particularly advantageous towards their identification.
After visual inspection, we find no evidence for any of these kpc-scale, SMBHB evolutionary signatures in the radio morphology of these SPM galaxies at any observing frequency (see Appendix A for multi-frequency radio maps for each SPM). However, the absence of these S- or X-shaped morphologies does not rule out that these SPMs may harbor a SMBHB. Likewise, only 2 of these SPMs (J0843+3549 and J1511+0417) show any evidence of DAGN behavior, as discussed in Section 6.4.2. Our 10 GHz observations would be the most adept at identifying DAGN candidates because of their sensitivity (nominal image RMS \(\sim 15\)\(\mu\)Jy) and high angular resolution, which would be able to resolve potential blended radio cores in lower resolution images. Then, we find no evidence for secondary radio emission in the remaining 12 radio AGN down to a range of limiting \(3\sigma\) luminosities \(L_{\nu}=8\times 10^{19}-7\times 10^{20}\) W Hz\({}^{-1}\). A second AGN in these systems would be extremely radio-faint and require ultra-deep sensitivities to detect. This is also true for the SF-dominated sources: we find no evidence of multi-component radio emission in any of these systems above a 10 GHz luminosity of \(L_{\nu}=3.7\times 10^{20}\) W Hz\({}^{-1}\). _Gaia_'s superb astrometric precision has enabled a number of searches for DAGN and SMBHB candidates that utilize astrometric variability induced by photo-center pseudo-motion of the unresolved SMBH pair, or varastrometry (Hwang et al., 2020; Shen et al., 2021; Chen et al., 2022; Schwartzman et al., 2023). This technique is particularly powerful for systems with \(z>0.5\), as _Gaia_'s astrometry is optimized for compact sources (Makarov and Secrest, 2022). Below this redshift, extended features in the host galaxy, e. g., tidal tails, will induce false astrometric noise (Souchay et al., 2022). Because our sample of SPMs was selected to have extended, tidal features and \(z<0.1\), any analysis using the photometric center from _Gaia_ is unreliable, including searching for offsets in the radio and optical photometric centers. Then, for the majority of the 10 GHz-detected SPM galaxies in our sample (16/18; 89%), we find no evidence for SMBHB evolution at the (sub-)kpc scale. We emphasize, however, that the lack of evidence does not preclude that any of these SPMs may host a SMBHB system.

### Searches with Very Long Baseline Interferometry

Very Long Baseline Interferometry (VLBI) offers a plethora of direct and indirect methods to identify SMBHB candidates. Among these, the identification of dual, flat-spectrum radio cores at pc-scale separation provides the most compelling evidence of any SMBHB, a technique that is only feasible because of VLBI's unique milliarcsecond angular resolution. Indeed, this technique has so far provided the best evidence for a SMBHB, hosted in the elliptical galaxy 0402+379, through imaging (Rodriguez et al., 2006) and proper motion constraints (Bansal et al., 2017). VLBI observations can also provide corroboratory evidence of SMBHB candidates through indirect methods. Significant position angle differences between the pc- and kpc-scale radio jet may be indicative of binary evolution (e. g., Mooley et al., 2018), as the jet opening angle widens as the binary loses energy due to the emission of gravitational waves (Kulkarni and Loeb, 2016). Periodic variability observed in both the radio luminosity and radio core position may be indicative of SMBHB-induced precession of the radio jet of sub-pc SMBHB candidates (e. g., Sudou et al., 2003; Stirling et al., 2003; Kun et al., 2014).
We are interested in establishing the feasibility of performing VLBI observations to search for a SMBHB in each of the 14 radio AGN we have discovered with our 10 GHz VLA observations. We did this by simulating two different VLBI observatories: the Very Long Baseline Array (VLBA) and the Next Generation Very Large Array (ngVLA). For each frequency band of each array, we calculated the expected continuum RMS image sensitivity of a 1-hour long integration and a 10-hour long integration. Here, we are using integration to represent the total time on-source for each target of interest. Each observation, encompassing this on-source time, is then necessarily longer than 1 hour and 10 hours to account for overheads. In actual targeted observations for searches of SMBHBs, Walsh et al. (in prep) found that roughly 75% of their total observation time was on-source, with the remaining 25% dedicated to scans of amplitude and phase reference calibrators. So, we can expect an increase of approximately 25% for each integration time, or observations which are approximately 1.3 and 13.3 hours for the 1-hour and 10-hour integration, respectively. For the VLBA, we have assumed an efficiency factor \(\eta_{s}\) of 0.8, and that all 10 antennas, and thus 45 baselines, are included for each simulated integration. Our simulated integrations use the L-band (21 cm), S-band (13 cm), C-band (6 cm and 5 cm), X-band (4 cm), Ku-band (2 cm), K-band (1.4 cm), Ka-band (1.25 cm), and Q-band (0.7 cm) receivers for the VLBA. For each frequency band, we have assumed the maximum possible data rate (2048 Mbps for L- and S-band, 4096 Mbps for all others) and used the SEFD provided for the VLBA (Footnote 5). For the simulated ngVLA integrations, we used the ngVLA sensitivity calculator Python script (Footnote 6) to calculate the expected continuum RMS image sensitivity. We did this for each of the central frequencies listed for the VLBA, since the larger bandwidths of the ngVLA receivers encapsulate multiple VLBA receiver frequency ranges. We simulated the 1-hour and 10-hour long integrations at the first 5 bands of the ngVLA for this analysis, with central frequencies at 2.4, 8, 16, 27, and 41 GHz. We have not taken into account the RFI environment for any of these simulated integrations. This is especially prevalent at lower frequencies, where up to half of the bandwidth may be unusable due to the persistent, dominating presence of RFI. Each of these simulated integrations is an ideal case and represents a lower limit to what is achievable in actual observations.

Footnote 5: https://science.nrao.edu/facilities/vlba/docs/manuals/oss/bands-perf

Footnote 6: https://gitlab.nrao.edu/vrosero/ngvla-sensitivity-calculator

To calculate the expected milliarcsecond-scale radio flux density at each frequency, we used the 10 GHz flux density value (Table 2, column 6) and the \(\alpha_{3}^{10}\) spectral index value (Table 3, column 4) for each of the radio AGN. For the two radio AGN with more than one resolved component (J0843+3549 and J1511+0417), we used the 10 GHz flux density value of the dominant radio component. We began by extrapolating the 10 GHz flux density to each of the central frequencies listed above.
Here, we have assumed that the broader radio SED for each of these radio components follows a simple power law \(S_{\nu}\propto\nu^{\alpha}\), where \(\alpha\) is \(\alpha_{3}^{10}\). We have used the 10 GHz flux density values because this represents the best approximation to the radio flux density we would expect from a dominant radio core. All other flux density measurements were taken at lower frequency and angular resolution. Because of this, those flux density measurements are more likely to have contributions from non-core-related phenomena, such as steep-spectrum features, e. g., a radio jet, or star formation, especially for the case of the 144 MHz LoTSS data. Indeed, only J0206\(-\)0017 and J1511+0417 have a flat spectral index (\(\alpha>-0.5\)) in the optically-thin regime, which is expected if the dominant contributor to the radio flux density were from a radio core. We do note that J1433+3444 may also be a flat spectrum object since \(\alpha_{3}^{10}=-0.47\pm 0.07\). The same could also be true for all sources for which we could only place a lower limit on \(\alpha_{3}^{10}\) (J0851+4050, J0916+4542, J1113+2714, and J1517+0409). This is critical to establishing the expected population of detected radio sources for each of our simulated integrations. If we systematically overestimate the expected milliarcsecond-scale flux density, we will also overestimate the number of significant detections achievable in each of our simulated integrations. We have also assumed that \(\alpha_{3}^{10}\) represents the spectral index of the dominant VLBI component. This certainly does not need to be the case, as even unresolved features at sub-arcsecond scale may be resolved out at the mas-scale probed by VLBI observations, possibly revealing a dominant radio core at mas scales. However, we are using these values since they best represent the physical situation as we can currently determine it. Once again, we only wish to estimate what the VLBI-scale emission properties are; only through actual observation could these flux density values be determined. After extrapolating the 10 GHz flux density to the designated frequency values, we apply a factor of 0.3 in converting from sub-arcsecond-scale flux density to mas-scale flux density. This value was chosen from the analysis of Deller & Middelberg (2014). In their analysis, these authors determined the ratio of peak VLBI flux density at 1.4 GHz to peak FIRST flux density for a large sample of VLBI sources detected in the mJy Imaging VLBA Exploration at 20 cm (mJIVE-20) survey. Overall, they found that 30%-35% of all sources have compact VLBI emission in which the majority of the FIRST flux is recovered, with this trend increasing towards lower FIRST flux density. Indeed, for FIRST sources with flux densities of 1-2 mJy that are detected with VLBI, all recover at least 32% of the FIRST flux density value at VLBI scales, with about 25% of sources having greater than 64% of this value recovered. We acknowledge that this extrapolation was determined only for 1.4 GHz observations, whereas our analysis uses the flux density determined at 10 GHz. Our factor of 0.3, then, may even underestimate the VLBI-scale flux density for each source. Though, for the sake of this analysis, this is preferred to an overestimation. We now have estimates for the VLBI-scale flux density for each of the SPM sources detected with our 10 GHz observations.
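A minimal sketch of this feasibility estimate is shown below, combining the standard VLBI point-source sensitivity expression with the power-law extrapolation and 0.3 compactness factor described above. The SEFD, bandwidth, and source values are placeholders, not the inputs used to build Figure 8.

```python
import numpy as np

def vlbi_image_rms(sefd_jy, n_ant, bandwidth_hz, t_on_s, eta_s=0.8, n_pol=2):
    """Thermal image RMS [Jy/beam] from the standard VLBI radiometer expression."""
    return sefd_jy / (eta_s * np.sqrt(n_pol * n_ant * (n_ant - 1) * bandwidth_hz * t_on_s))

def mas_scale_flux(s10_jy, alpha, nu_ghz, compactness=0.3):
    """Extrapolate a 10 GHz flux density with S_nu ~ nu**alpha, then apply the 0.3
    sub-arcsecond-to-mas compactness factor adopted from Deller & Middelberg (2014)."""
    return compactness * s10_jy * (nu_ghz / 10.0)**alpha

# Placeholder source and observing setup (illustrative only).
s10_jy, alpha = 0.5e-3, -0.9     # 10 GHz flux density [Jy] and alpha(3-10 GHz)
nu_ghz = 15.0                    # target observing frequency [GHz]
sefd_jy = 550.0                  # assumed antenna SEFD at this band [Jy]
bandwidth_hz = 512e6             # assumed usable continuum bandwidth [Hz]

for hours in (1.0, 10.0):
    rms = vlbi_image_rms(sefd_jy, n_ant=10, bandwidth_hz=bandwidth_hz, t_on_s=hours * 3600.0)
    s_mas = mas_scale_flux(s10_jy, alpha, nu_ghz)
    detected = s_mas > 5.0 * rms   # 5-sigma detection criterion used in Figure 8
    print(f"{hours:4.0f} h: rms = {rms*1e6:6.1f} uJy, S_mas = {s_mas*1e6:6.1f} uJy, detected = {detected}")
```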
Figure 8 plots these estimated VLBI-scale flux density values with the simulated continuum image RMS sensitivities of a 1-hour long and 10-hour long integration with the VLBA and ngVLA. For each simulated integration, the sensitivity curve represents a \(5\sigma\) detection threshold. Points that fall above these curves represent a detection, while those below are not expected to be detected. Notably, lower frequency integrations (\(\nu<10\) GHz) with the VLBA may already reach the sensitivity needed for a significant detection of VLBI-scale radio emission. While these frequencies may not be optimal for isolating the core emission, the flux density information they provide is nonetheless critical to establishing the spectral index value of the potential radio core associated with each binary constituent. The significant improvement in detection threshold appears in the higher frequency integrations (\(\nu>10\) GHz). At these frequencies, we expect that the vast majority of sources would not be detected even with a 10-hour integration time using the VLBA. However, with the ngVLA, we find that a 1-hour integration time is sufficient to detect all of the sources at 15 GHz, and the majority of sources at 22, 23, and 43 GHz. These higher-frequency observations are best at isolating the radio core by resolving out larger-scale emission and provide high astrometric precision due to their high angular resolution. Figure 8: VLBI-scale flux density estimates of the 14 newly-discovered radio AGN from our sample of SPM galaxies plotted with the simulated, \(5\sigma\) sensitivity curves of a 1-hour and 10-hour integration with the Very Long Baseline Array (VLBA) and the Long Baseline Array (LBA) of the Next Generation Very Large Array (ngVLA). These data points represent the potential flux density of a supermassive black hole binary (SMBHB), which can only be resolved with VLBI. Points above a \(5\sigma\) sensitivity curve indicate a detection. At lower frequency (\(\nu<10\) GHz), the VLBA observations may already reach the desired sensitivity to achieve a significant detection of milliarcsecond-scale radio emission. The notable difference is at higher frequency (\(\nu>10\) GHz), where a 1-hour integration with the ngVLA vastly improves the probability of detection compared to a 10-hour VLBA integration. ## 9 Summary In this paper, we have analyzed the emission properties of a sample of 30 local post-merger galaxies from Carpineti et al. (2012) to search for star formation- and AGN-related activity. Our main results are as follows: 1. **Diverse Emission-Line Activity:** Using the optical emission-line flux ratios derived from the OSSY catalog (Oh et al., 2011), and standard BPT diagram analyses, 43% of the post-mergers are optically quiescent, 10% are dominated by SF, 13% by a combination of SF and AGN, 13% by Seyfert AGN, and 20% by LINER activity. 2. **Low-luminosity Radio Emission:** Of those with detectable radio emission, through both archival radio surveys and new, high resolution, 10 GHz observations with the VLA, the vast majority are radio-quiet, with only 2 reaching the spectral luminosity threshold (\(\nu L_{\nu}>10^{32}\) W) to be classified as radio-loud. We discovered a number of nuclear radio sources at high significance (\(\geq 5\sigma\)) with our 10 GHz observations that were otherwise non-detections by archival radio surveys, emphasizing the importance of deep observations to reveal the full population of radio-quiet systems. 3.
**Prevalence of Compact Radio Emission:** At the largest spatial extents, sampled by 144 MHz LoTSS observations, all of the detected radio sources have a diffuse or extended emission component. Only J0843+3549 and J1433+3444 display an AGN-like morphology among these sources. The nuclear emission is unresolved in all but one of the 15 sources at 1.4 GHz (5\({}^{\prime\prime}\) resolution), indicating that compact, nuclear emission is prevalent in these post-mergers. At the most compact scales, sampled by our 10 GHz observations (\(\approx 0.2^{\prime\prime}\)), the sources show a variety of AGN- and SF-related morphologies. 4. **Radio Spectra and Spectral Index Measurements:** Of the 12 radio sources with well-sampled (4 or more detections) spectra, we found 3 that showed evidence of significant curvature (\(|q|\geq 0.2\)) in their broadband spectrum, with one being a Gigahertz peaked spectrum (GPS) source (J1617+2512). Two spectra were found to be inverted (\(q\geq 0.2\)), though we believe this is likely due to an overall flattening of the spectrum at high frequency and not a true inversion. The spectra of J0843+3549, J1015+3914, J1135+2953, and J1304+6520 are poorly fit by both a simple and a curved power law, as indicated by the \(\chi^{2}_{red}\) value for each fit. This is either due to variability at one or more flux values, or a blending of two distinct electron populations at low angular resolution. We also calculated the spectral index value \(\alpha^{10}_{3}\) of the compact radio emission using the 3 and 10 GHz flux values, or its lower limit for sources without a 3 GHz detection. These \(\alpha^{10}_{3}\) values range from \(0.19\geq\alpha\geq-1.74\), though the majority have a steep spectral index \(\alpha<-0.5\). 5. **SF Activity in Post-Mergers:** The 1.4 GHz luminosity of 4 of the radio sources (J1015+3914, J1041+1105, J1445+5134, and J1617+2512) can be explained by SF processes alone. For each, we have determined that their emission is most likely due to a population of supernova remnants, as the expected luminosity from an individual, luminous radio supernova is at least a factor of 80 dimmer than each of their observed 10 GHz luminosities. Further, these 4 radio sources are hosted by either SF, SF-AGN composite, or LINER emission-line galaxies, providing corroboratory evidence of ongoing or recent SF activity in each of these post-mergers. It is notable that the spectral index value for 3 of these 4 is significantly steeper than the canonical value \(\alpha\sim-0.8\) for shock-dominated sources (Condon, 1992). However, these spectral index values are not dissimilar to those of other shock-dominated sources (e.g., Chyzy et al., 2018; Thomson et al., 2019; An et al., 2021), and may be indicative of an older CR electron population due to evolved supernova activity. Alternatively, as two of these sources are hosted by SF-AGN composite emission-line galaxies, the steep spectral index may be due to resolution effects in which the diffuse synchrotron is resolved out at higher angular resolution, revealing compact, steep-spectrum AGN emission. 6. **Discovery of Radio AGN:** We have discovered 14 likely radio AGN in these post-mergers: 3 because of their association with a known AGN and 11 for which we found excess radio emission compared to SF predictors. 86% (12/14) of these radio AGN are radio-quiet, with only 14% (2/14) being radio-loud.
The post-merger hosts are found to occupy all regions of the BPT diagrams, indicating that radio AGN activity may be present even during star-forming stages of the post-merger evolution. We also report on the discovery of a precessing jet in the dual AGN candidate J0843+3549 (Koss et al., 2018), and discover a new dual AGN candidate, J1511+0417. 7. **The Origin of Radio AGN Activity in Mergers:** The prevalence of radio-quiet AGN among our radio AGN population lends itself to a scenario in which radio-quiet AGN in ongoing or recent galaxy mergers may be more populous than previously believed. Because our sample is comprised of late-stage and post-merger systems, the high fraction of radio-quiet AGN can be explained by SMBH spin up due to the coalescence of a supermassive black hole binary (SMBHB; Wilson & Colbert 1995). In this framework, radio-loud AGN are only produced for the most massive binary systems, which are intrinsically rare due to the sharp decline at high mass of the SMBH mass function (McLure & Dunlop, 2004; Hopkins et al., 2006; Gultekin et al., 2009). Indeed, we have found that the radio-quiet AGN outnumber the radio-loud AGN by a factor of 5.5. Alternatively, gas-rich mergers may produce radio-loud AGN if the SMBH sustains coherent accretion for an extended period of time. Both scenarios need further observations to test rigorously. 8. **Jet-ISM Feedback:** We estimated the total power of the jets for our sample of radio AGN. The jet powers span a range of \(10^{41}-10^{43}\) erg s\({}^{-1}\), making them low power. The majority of the post-mergers have a SFR \(\geq 1\) M\({}_{\odot}\) yr\({}^{-1}\), indicating that the AGN may play an important role in providing positive or negative feedback. Importantly, these low power jets, confined to the central kpcs of the host, are more effective at large, sustained disruption of the surrounding ISM compared to high power jets that easily puncture through the dense circumnuclear ISM and remain collimated at large distances from the launching region (Mukherjee et al., 2016). Indeed, the compact morphology of these radio sources agrees with this scenario. These radio AGN are then good candidates to study the impact of AGN feedback in post-merger systems by searching for signatures of multi-phase gas outflows. 9. **Next-generation Searches of SMBHBs:** Lastly, we simulated 1- and 10-hour integrations at multiple frequencies with the Very Long Baseline Array (VLBA) and the VLBI capabilities of the Next Generation VLA (ngVLA). These simulations present the necessary time commitment for each instrument to reach the deep sensitivities required to perform robust searches for a SMBHB in each of these radio AGN hosted by a post-merger galaxy. We estimated the milliarcsecond-scale flux density of the radio source using the spectral index value \(\alpha_{3}^{10}\) for each radio AGN and additional factors from the literature (e.g., Deller & Middelberg 2014). We found that at low frequency (\(\nu<10\) GHz), the VLBA can already perform these robust searches, though the low frequency, milliarcsecond environment of radio AGN will often be dominated by extended, steep spectrum emission, making radio core identification difficult. The ngVLA will be a particularly powerful instrument for searches of SMBHBs at high frequency (\(\nu>10\) GHz), where the dual, flat-spectrum cores of the SMBHB are the dominant emission signature.
These high frequency, high angular resolution observations also offer significantly better astrometric precision than low frequency observations, which will be important for constraining proper motion measurements of the SMBHB constituents. Our study of the multi-wavelength emission properties of 30 post-merger galaxies has discovered a number of exciting phenomena and individual sources. Future work on this topic will expand the sample population of post-merger galaxies, include better frequency coverage, and examine the radio emission of galaxy mergers at various stages of their evolution. This work will further our understanding of the astrophysical processes occurring during the merger sequence and the impact of AGN feedback, and establish new radio sources to follow up with VLBI with the hope of detecting individual SMBHBs. GW and SBS were supported in this work by NSF award grant #1815664. We thank Amy Reines for the discussions on multi-wavelength analyses to determine the origin of radio emission and Julie Comerford for the recommendation of the OSSY catalog to perform our optical spectral analysis. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The NANOGrav collaboration, which funded some components associated with this research, receives support from National Science Foundation (NSF) Physics Frontiers Center awards #1430284 and #2020265. This research has made use of NASA's Astrophysics Data System Bibliographic Services. Facilities: VLA, LOFAR, ASKAP, _Galex_, _WISE_. Software: CASA (The CASA Team et al., 2022), Numpy (van der Walt et al., 2011), Scipy (Virtanen et al., 2020), Matplotlib (Hunter, 2007), Astropy (Astropy Collaboration et al., 2018).
2309.14048
Synchronous Agents, Verification, and Blame -- A Deontic View
A question we can ask of multi-agent systems is whether the agents' collective interaction satisfies particular goals or specifications, which can be either individual or collective. When a collaborative goal is not reached, or a specification is violated, a pertinent question is whether any agent is to blame. This paper considers a two-agent synchronous setting and a formal language to specify when agents' collaboration is required. We take a deontic approach and use obligations, permissions, and prohibitions to capture notions of non-interference between agents. We also handle reparations, allowing violations to be corrected or compensated. We give trace semantics to our logic, and use it to define blame assignment for violations. We give an automaton construction for the logic, which we use as the base for model checking and blame analysis. We also further provide quantitative semantics that is able to compare different interactions in terms of the required reparations.
Karam Kharraz, Shaun Azzopardi, Gerardo Schneider, Martin Leucker
2023-09-25T11:23:59Z
http://arxiv.org/abs/2309.14048v2
# Synchronous Agents, Verification, and Blame -- A Deontic View ###### Abstract A question we can ask of multi-agent systems is whether the agents' collective interaction satisfies particular goals or specifications, which can be either individual or collective. When a collaborative goal is not reached, or a specification is violated, a pertinent question is whether any agent is to blame. This paper considers a two-agent synchronous setting and a formal language to specify when agents' collaboration is required. We take a _deontic_ approach and use _obligations_, _permissions_, and _prohibitions_ to capture notions of non-interference between agents. We also handle _reparations_, allowing violations to be corrected or compensated. We give trace semantics to our logic, and use it to define blame assignment for violations. We give an automaton construction for the logic, which we use as the base for model checking and blame analysis. We also further provide quantitative semantics that is able to compare different interactions in terms of the required reparations. ## 1 Introduction Interaction between agents can be adversarial, where each agent pursues its own set of individual goals, or cooperative, where the agents collaborate to achieve a collective goal. Verification techniques can help us detect whether such goals may be achieved. Agents may also interfere or not cooperate, at which point the failure to achieve a goal could be attributed to some agent. In this paper, we develop a _deontic_ logic allowing us to specify the anticipated interaction of two agents in the presence of such aspects. A deontic logic [16, 21] includes norms as first-class concepts, with _obligations_, _permissions_, and _prohibitions_ as basic norms. These concepts are crucial in legal documents and contractual relationships, where the agents are the parties to a contract.1 Norms are parameterised by actions/events or propositions and are used to specify what _ought to be_, or what the parties _ought to do_. Footnote 1: We use _party_ and _agent_ interchangeably throughout. In this paper, interaction or cooperation of the agents is modelled as the interplay of the individual actions performed by each agent, leading to the concept of cooperative actions. Cooperative actions could be synchronous, i.e., actions at each time point of each agent are meant to describe the possible cooperation, or asynchronous, meaning that actions for cooperation may happen at different time points.2 We choose synchrony as an abstraction to simplify the concept of cooperation and non-interference between parties. We also study only the setting with two rather than many parties. As such, we are concerned with _two-party synchronous systems_, leaving extensions as future work. Footnote 2: Observe similarities with synchronous and asynchronous communication. We re-purpose and extend the syntax of a deontic language from the literature [3, 4] into a new deontic logic with denotational semantics appropriate for this two-party setting. Our semantics depends on two notions of _informative_ satisfaction or violation, which talk about the exact point in time a contract is satisfied or violated. Other features of the logic include the ability to make contracts trigger on matching a regular language, requiring the satisfaction of a contract while one is still within the prefix language of a regular language, and a recursion operator to allow the definition of persistent contracts and repetition.
We extend the semantics with a notion of _blame assignment_, to identify which party is responsible for a certain violation. We further use this to define quantitative semantics that counts the number of violations caused by a certain party, which can be used to compare different traces or behaviour of a party. We give an exponential automata construction for the logic, transforming a contract specification into an automaton capable of identifying satisfaction, and violation as specified in our semantics. We also provide a model checking algorithm, which is quadratic in the size of the contract automaton, hence exponential in the size of the contract. We re-use this construction for blame analysis, but leave analysis for the quantitative semantics for future work. The paper organisation follows. Section 2 lays out preliminaries, Section 3 presents our logic, and Section 4 presents algorithms for model checking and blame analysis through automata constructions. Related work is considered in Section 5, and we conclude in Section 6. ## 2 Preliminaries We write \(\mathbb{N}_{\infty}\) for \(\mathbb{N}\cup\{\infty\}\). Given a finite alphabet \(\Sigma\), we write \(\Sigma_{0}\), and \(\Sigma_{1}\) for re-labellings of \(\Sigma\) with party identifiers \(0\) and \(1\), and \(\Sigma_{0,1}\) for \(\Sigma_{0}\cup\Sigma_{1}\). We use \(P[x/y]\) to refer to the syntactic replacement of \(x\) in \(P\) with \(y\), where \(P\) can be an automaton (\(x\) and \(y\) are states), or a specification (\(x\) and \(y\) are syntactic objects in the language). We write \((*,s)\) to refer to all state pairs with \(s\) in the second position, and similarly for \((s,*)\). **Traces** For \(i\in\mathbb{N}\), \(j\in\mathbb{N}_{\infty}\), and an infinite trace \(w\) over sets of actions from a finite alphabet \(\Sigma\), we denote the trace between positions \(i\) and \(j\) by \(w[i..j]\), including the values at both positions. If \(j<i\) then \(w[i..j]\) is the empty trace. When \(j=\infty\) then \(w[i..j]\) is the suffix of \(w\) from \(i\). We write \(w[i]\) for \(w[i...i]\), and \(w\cdot w^{\prime}\) for concatenation of \(w\) and \(w^{\prime}\), which is only defined for a finite word \(w\). Given two traces \(w,w^{\prime}\) over \(2^{\Sigma}\), we define stepwise intersection: \((w\sqcap w^{\prime})[i]\stackrel{{\mbox{\tiny def}}}{{=}}w[i] \sqcap w^{\prime}[i]\), union \((w\sqcup w^{\prime})[i]\stackrel{{\mbox{\tiny def}}}{{=}}w[i] \cup w^{\prime}[i]\), and union with party labelling: \((w\sqcup_{1}^{0}w^{\prime})[i]\stackrel{{\mbox{\tiny def}}}{{=}}w[i ]\cup_{1}^{0}w^{\prime}[i]\), where \(E\cup_{1}^{0}E^{\prime}\stackrel{{\mbox{\tiny def}}}{{=}}\{a_{0} \mid a\in E\}\cup\{a_{1}\mid a\in E^{\prime}\}\), i.e. the left actions are labeled by \(0\) and the right actions by \(1\). This gives a trace in \(\Sigma_{0,1}\). For instance, given \(w=\langle\{a\},\{b\},\{c,d\}\rangle\) and \(w^{\prime}=\langle\{a\},\{e\},\{d,e\}\rangle\), we have that \(w[2]\cap w^{\prime}[2]=\{c,d\}\cap\{d,e\}=\{d\}\) and \(w[2]\sqcup_{1}^{0}w^{\prime}[2]=\{c,d\}\sqcup_{1}^{0}\{d,e\}=\{c_{0},d_{0},d_{1 },e_{1}\}\). Given two traces \(w_{0}\) and \(w_{1}\), over \(2^{\Sigma}\), we write \(\boldsymbol{w}_{i}^{j}\) for the pair \((w_{0}[i..j],w_{1}[i..j])\). \(\boldsymbol{w}_{i}^{j}\) is said to be an _interaction_, and when \(j\in\mathbb{N}\) a _finite interaction_. Sometimes we abuse notation and treat \(\boldsymbol{w}_{i}^{j}\) as a trace in \(\Sigma_{0,1}\), since it can be projected into such a trace through \(\sqcup_{1}^{0}\). 
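A minimal Python sketch of the step-wise trace operations just defined, reproducing the worked example; the party-labelled actions \(a_{0}\), \(a_{1}\) are represented as (action, party) pairs, which is an encoding choice of the sketch.

```python
# Traces are lists of sets of action names; one entry per time point.
def stepwise_intersection(w, w_prime):
    """(w ⊓ w')[i] = w[i] ∩ w'[i]: the actions that both parties enable."""
    return [a & b for a, b in zip(w, w_prime)]

def stepwise_union(w, w_prime):
    """(w ⊔ w')[i] = w[i] ∪ w'[i]."""
    return [a | b for a, b in zip(w, w_prime)]

def labelled_union(w0, w1):
    """(w0 ⊔₁⁰ w1)[i]: tag each action with the party (0 or 1) attempting it."""
    return [{(a, 0) for a in e0} | {(a, 1) for a in e1} for e0, e1 in zip(w0, w1)]

w       = [{"a"}, {"b"}, {"c", "d"}]
w_prime = [{"a"}, {"e"}, {"d", "e"}]
assert stepwise_intersection(w, w_prime)[2] == {"d"}
assert labelled_union(w, w_prime)[2] == {("c", 0), ("d", 0), ("d", 1), ("e", 1)}
```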
**Automata** A tuple \(A=\langle\Sigma,S,s_{0},\mathit{Rej},\rightarrow\rangle\) is an _automaton_, where \(\Sigma\) is a finite alphabet, \(S\) is a finite set of states, \(s_{0}\in S\) is the initial state, \(\mathit{Rej}\subseteq S\) is a set of rejecting states, and \(\rightarrow\in S\times 2^{\Sigma}\rightarrow(2^{S}\setminus\emptyset)\) is the transition function (\(\rightarrow\in S\times 2^{\Sigma}\to S\) when the automaton is deterministic). The language \(L(A)\) of automaton \(A\) is the set of infinite traces with no prefix reaching a rejecting state. The rejecting language \(RL(A)\) of automaton \(A\) is the set of infinite traces with a prefix reaching a rejecting state. We write \(RL_{s}(A)\) for the rejecting language through a specific rejecting state \(s\in\mathit{Rej}\). The _synchronous product_ of automata \(A\) and \(B\) over the same alphabet \(\Sigma\), denoted by \(A\|B\), is the automaton: \((\Sigma,S_{A}\times S_{B},(s_{0_{A}},s_{0_{B}}),(Rej_{A}\times S_{B})\cup(S_{A}\times Rej_{B}),\rightarrow)\) where \(\rightarrow\) is the minimal relation such that: for any \(E\subseteq\Sigma\), if \(s_{1}\xrightarrow{E}_{A}s_{1}^{\prime}\) and \(s_{2}\xrightarrow{E}_{B}s_{2}^{\prime}\) then \((s_{1},s_{2})\xrightarrow{E}(s_{1}^{\prime},s_{2}^{\prime})\). The _relaxed synchronous product_ of automata \(A\) and \(B\) over the same alphabet \(\Sigma\), denoted by \(A\|^{r}B\), includes \(A\|B\) but allows moving independently when there is no match: if \(s_{1}\xrightarrow{E}_{A}s_{1}^{\prime}\) and \(\nexists s_{2}^{\prime}\cdot s_{2}\xrightarrow{E}_{B}s_{2}^{\prime}\), then \((s_{1},s_{2})\xrightarrow{E}(s_{1}^{\prime},s_{2})\); and symmetrically. **Moore Machines** A Moore machine is a \(6\)-tuple \(M=(S,s_{0},\Sigma_{I},\Sigma_{O},\delta,\lambda)\) where \(S\) is a finite set of states, \(s_{0}\in S\) is the initial state, \(\Sigma_{I}\) and \(\Sigma_{O}\) are respectively the finite sets of input and output actions, \(\delta:S\times 2^{\Sigma_{I}}\to 2^{S}\) is a transition function that maps each state and set of inputs to a set of possible next states, and \(\lambda:S\to 2^{\Sigma_{O}}\) is an output function that maps each state to a set of outputs. The _product_ of a Moore machine \(M_{1}\) over input alphabet \(\Sigma_{I}\) and output alphabet \(\Sigma_{O}\), and Moore machine \(M_{2}\) with flipped input and output alphabets, is the automaton: \(M_{1}\otimes M_{2}\stackrel{{\mbox{\tiny def}}}{{=}}(\Sigma_{I}\cup\Sigma_{O},S_{1}\times S_{2},(s_{0_{1}},s_{0_{2}}),\emptyset,\rightarrow)\) where \(\rightarrow\) is the minimal relation such that: for any states \(s_{1}\in S_{1}\) and \(s_{2}\in S_{2}\), where \(s_{1}\xrightarrow{\lambda_{2}(s_{2})}s_{1}^{\prime}\) and \(s_{2}\xrightarrow{\lambda_{1}(s_{1})}s_{2}^{\prime}\), then \((s_{1},s_{2})\xrightarrow{\lambda_{1}(s_{1})\cup\lambda_{2}(s_{2})}(s_{1}^{\prime},s_{2}^{\prime})\). **Regular Expressions** We use standard syntax for regular expressions, treating boolean combinations of actions from \(\Sigma_{0,1}\) as atomic. The operators are standard: choice, \(re+re\) (match either); sequence, \(re;re\) (match the first then the second); and the Kleene plus, \(re^{+}\) (match a non-zero finite amount of times in sequence). The language of a regular expression \(re\) is a set of finite traces: \(L(re)\subseteq(2^{\Sigma_{0,1}})^{*}\). We abuse notation and write \(\boldsymbol{w}_{i}^{j}\in L(re)\) for \(w_{0}[i..j]\sqcup_{1}^{0}w_{1}[i..j]\in L(re)\).
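A sketch of the two product constructions defined above, with each automaton given as a nested dict `delta[state][event] -> set of successor states` (events are frozensets of labelled actions); the rejecting-state bookkeeping is omitted to keep the sketch short.

```python
def sync_product(delta_a, delta_b, events):
    """A ∥ B: both components must move on the same event set E."""
    step = {}
    for s1 in delta_a:
        for s2 in delta_b:
            for e in events:
                succs = {(t1, t2)
                         for t1 in delta_a[s1].get(e, set())
                         for t2 in delta_b[s2].get(e, set())}
                if succs:
                    step.setdefault((s1, s2), {})[e] = succs
    return step

def relaxed_product(delta_a, delta_b, events):
    """A ∥ʳ B: as A ∥ B, but a component may move alone when the other cannot."""
    step = sync_product(delta_a, delta_b, events)
    for s1 in delta_a:
        for s2 in delta_b:
            for e in events:
                a_moves = delta_a[s1].get(e, set())
                b_moves = delta_b[s2].get(e, set())
                if a_moves and not b_moves:
                    step.setdefault((s1, s2), {}).setdefault(e, set()).update(
                        (t1, s2) for t1 in a_moves)
                if b_moves and not a_moves:
                    step.setdefault((s1, s2), {}).setdefault(e, set()).update(
                        (s1, t2) for t2 in b_moves)
    return step
```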
We restrict attention to the _tight language_ of a regular expression, containing matching finite traces that have no matching strict prefix: \(TL(re)\stackrel{{\mbox{\tiny def}}}{{=}}\{\boldsymbol{w}_{i}^{j}\in L(re)\mid\nexists k:k<j\land\boldsymbol{w}_{i}^{k}\in L(re)\}\). The _prefix closure_ of the tight language is the set of finite prefixes of the tight language up to a match: \(cl(re)\stackrel{{\mbox{\tiny def}}}{{=}}\{\mathbf{w}_{i}^{k}\mid\exists j:\mathbf{w}_{i}^{j}\in TL(re)\wedge i\leq k<j\}\). We define the _complement of the prefix closure_ as the set of finite traces that do not tightly match the regular expression but whose maximal strict prefix is in the closure of the expression: \(\overline{cl}(re)\stackrel{{\mbox{\tiny def}}}{{=}}\{\mathbf{w}_{i}^{j}\mid(\mathbf{w}_{i}^{j-1}\in cl(re)\wedge\mathbf{w}_{i}^{j}\not\in cl(re)\wedge\mathbf{w}_{i}^{j}\not\in TL(re))\}\). We denote by \(A(re,s_{0},s_{\checkmark},s_{\times})\) the deterministic finite automaton corresponding to regular expression \(re\), with \(s_{0}\) and \(s_{\times}\) respectively as the initial and rejecting states, and \(s_{\checkmark}\) as a sink state, s.t. \(\forall\mathbf{w}_{i}^{j}\in TL(re):s_{0}\xRightarrow{\mathbf{w}_{i}^{j}}s_{\checkmark}\), \(\forall\mathbf{w}_{i}^{j}\in cl(re):\exists s:s_{0}\xRightarrow{\mathbf{w}_{i}^{j}}s\wedge s\neq s_{\checkmark}\wedge s\neq s_{\times}\), and \(\forall\mathbf{w}_{i}^{j}\in\overline{cl}(re):s_{0}\xRightarrow{\mathbf{w}_{i}^{j}}s_{\times}\). ## 3 A Deontic Logic for Collaboration In this section, we present the syntax and semantics of \(c\mathcal{DL}\), a deontic logic able to express the extent to which parties should cooperate and non-interfere. Definition 1 (\(c\mathcal{DL}\) Syntax): A \(c\mathcal{DL}\) contract \(C\) is given by the following grammar, given an alphabet \(\Sigma\), regular expressions \(re\), a set of variables \(\mathbb{X}\) (ranged over by \(X\)), and party labels \(p\) from \(\{0,1\}\): \[a\in\Sigma_{0}\cup\Sigma_{1}\] \[N:=O_{p}(a)\mid F_{p}(a)\mid P_{p}(a)\mid\top\mid\perp\] \[C:=N\mid C\wedge C\mid C;C\mid C\blacktriangleright C\mid\langle re\rangle C\mid re\!\upharpoonright\!C\mid rec\ X.C\mid X\] We impose a syntactic restriction on combining recursion with the reparation operator: the reparation has to either not be the last operation before \(X\), or the whole recursion should be guarded with \(re\!\upharpoonright\!\); the reason behind this is to avoid the procrastination dilemma [14]. For example, \(rec\ X.\langle re\rangle((C\blacktriangleright C^{\prime});X)\) and \(re\!\upharpoonright\!(rec\ X.C\blacktriangleright X)\) are valid, unlike \(rec\ X.X\), \(rec\ X.C;(C^{\prime}\wedge X)\), \(rec\ X.\langle re\rangle((C;X);C^{\prime})\), and \(rec\ X.C\blacktriangleright X\). Moreover, a recursion variable \(X\in\mathbb{X}\) must always be bound when it appears in a contract. In our setting, we want to be able to talk about collaborative actions (actions that require both parties in order to be achieved successfully) and non-interference between the parties (a party not being allowed to interfere with the other party carrying out a certain action). We model both of these using a notion of synchronicity. We will later represent parties as Moore machines; here we talk just about their traces. We assume two traces over \(2^{\Sigma}\), one for each party: \(w_{0}\) and \(w_{1}\).
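One way to carry this syntax into code is as a small algebraic datatype; the sketch below is an illustrative encoding only (the names, and the folding of \(\top\)/\(\bot\) into extra norm kinds, are choices of the sketch, not part of the logic's definition).

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Norm:
    kind: str        # "O" (obligation), "F" (prohibition), "P" (permission);
    party: int       # "top"/"bot" could also be encoded here with dummy fields
    action: str

@dataclass(frozen=True)
class Binary:
    op: str          # "and", "seq" (;) or "rep" (the reparation operator)
    left: "Contract"
    right: "Contract"

@dataclass(frozen=True)
class Regex:
    op: str          # "trigger" for <re>C, "guard" for the guarded contract
    re: str
    body: "Contract"

@dataclass(frozen=True)
class Rec:
    var: str
    body: "Contract"

@dataclass(frozen=True)
class Var:
    name: str

Contract = Union[Norm, Binary, Regex, Rec, Var]
```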
A party's trace is a record of which actions were enabled (or attempted) by that party. The step-wise intersection of these traces, \(w_{0}\sqcap w_{1}\), is the trace of _successful_ actions. Restricting attention to the successful actions misses information about attempts that were not successful. Instead, we give semantics over pairs of party traces, an _interaction_, rather than over \(w_{0}\sqcap w_{1}\), allowing us to localise interference. This setting allows us to model both collaboration and non-interference between the parties in the same way. If the parties are required to collaborate on an action, then they must both propose it (_obligation_). If instead, the parties should ensure an action is not successful, then at least one of them must not enable it (_prohibition_). If a party is required to not interfere with another party's action, then they must also enable it (_permission_). We refer to actions of one party variously as _proposed_, _attempted_, or _enabled_ by that party. We consider an example specification in our language. Example 3.1: _Consider two possibly distinct robots, 0 and 1, working on a factory floor, with their main goal being to cooperate in placing incoming packages on shelves. Each robot has sensors to identify when a new package is in the queue (detectProd), and they must lift the package together (lift), and place it on a shelf (putOnShelf). Between iterations of this process, the robots are individually allowed to go to their charging ports (charge0 or charge1). If a robot does not help in lifting, it is given another chance:_ \[\text{permitCharge} \stackrel{{\text{\tiny{\sf def}}}}{{=}}P_{0}(\text{ charge0})\wedge P_{1}(\text{charge1})\] \[\text{lift}(p) \stackrel{{\text{\tiny{\sf def}}}}{{=}}O_{p}(\text{lift })\blacktriangleright O_{p}(\text{lift})\] \[\text{detect\&Lift}(p) \stackrel{{\text{\tiny{\sf def}}}}{{=}}\langle\text{detect Prod}_{p}\rangle\text{lift}(p)\] \[\text{detect\&Place} \stackrel{{\text{\tiny{\sf def}}}}{{=}}(\text{detect \&Lift}(0)\wedge\text{detect\&Lift}(1))\,;\] \[(O_{0}(\text{putOnShelf})\wedge O_{1}(\text{putOnShelf}))\] \[\text{collabRobot} \stackrel{{\text{\tiny{\sf def}}}}{{=}}rec\ X.\text{permitCharge };\text{detect\&Place};X.\] ### Informative Semantics The semantics of our language is defined on an _interaction_, i.e. a pair of traces \(w_{0}\) and \(w_{1}\), restricting our view to a slice with a minimal position \(i\) and maximal one \(j\). For the remainder of this paper, we will refer to this interaction with \(\vec{w}_{i}^{j}\). In Figure 1, we introduce the semantic relations for _informative_ satisfaction (\(\models_{s}\)) and violation (\(\models_{v}\)). These capture the moment of satisfaction and violation of a contract in a finite interaction. We use this to later define when an infinite interaction models a contract. In Figure 1 we also capture with \(\models_{?}\), when the interaction slice neither informatively satisfies nor violates the contract. We give some intuition and mention interesting features of the semantics. Note how we only allow the status of atomic contracts to be informatively decided in one time-step (when \(i=j\)), given they only talk about one action. When it comes to the trigger contract, our goal is to confirm its fulfillment only when we no longer closely align with the specified trigger language. Alternatively, we consider it satisfied if we've matched it previously and subsequently maintained compliance with the contract. 
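Continuing the encoding sketched above, Example 3.1's contract could be written out as follows (the regular expressions are kept as plain strings, and the sequence operator is grouped to the right, which is an assumption of the sketch).

```python
def O(p, a): return Norm("O", p, a)
def P(p, a): return Norm("P", p, a)

permit_charge = Binary("and", P(0, "charge0"), P(1, "charge1"))

def lift(p):
    return Binary("rep", O(p, "lift"), O(p, "lift"))          # second chance to lift

def detect_and_lift(p):
    return Regex("trigger", f"detectProd_{p}", lift(p))        # <detectProd_p> lift(p)

detect_and_place = Binary(
    "seq",
    Binary("and", detect_and_lift(0), detect_and_lift(1)),
    Binary("and", O(0, "putOnShelf"), O(1, "putOnShelf")),
)

collab_robot = Rec("X", Binary("seq", permit_charge,
                               Binary("seq", detect_and_place, Var("X"))))
```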
Conversely, we would classify a violation if we achieved a close match but then deviated from the contract's terms. Figure 1: Informative semantics rules over a finite interaction \(\vec{w}_{i}^{j}\). Regarding the regular expression guard, we have two scenarios for evaluating satisfaction. First, we ensure satisfaction when either we have precisely matched the language or have taken actions preventing any future matching of the guard, with no prior violation of the guarded contract. Second, we verify satisfaction when there is still a possibility of a precise match of the guard, and the guarded contract has already been satisfied. In contrast, a violation occurs when there remains a chance for a precise match of the guard in the future, and a violation of the sub-contract occurs. The definitions for conjunction and sequence are relatively simple. Note that for conjunction we take the maximum index at which both contracts have been satisfied. Sequence and reparation are similar, except in reparation we only continue in the second contract if the first is violated, while we violate it if both contracts end up being violated. For recursion, we simply re-write variable \(X\) as needed to determine satisfaction or violation. Example 3.2: _Note how the semantics ensure that, given traces \(w_{0}\) and \(w_{1}\) such that \(w_{0}[0]=w_{1}[0]=\{\mathit{charge0},\mathit{charge1}\}\), then \(\mathbf{w}_{0}^{0}\models_{s}\mathit{permitCharge}\), i.e. both robots try to charge and allow each other to charge. But if further \(w_{0}[1..3]=\langle\{\mathit{detectProd}\},\{\mathit{lift}\},\{\mathit{lift}\}\rangle\) and \(w_{1}[1..3]=\langle\{\},\{\},\{\}\rangle\), then \(\mathbf{w}_{0}^{3}\models_{v}\mathit{collabRobot}\), since robot 0 attempted a lift but robot 1 declined to help in lifting._ Then, we show that if a contract is informatively satisfied (violated), then any suffix or prefix of the interaction cannot also be informatively satisfied (violated): **Lemma 3.1** (Unique satisfaction and violation): _If there exists \(j\) and \(k\) such that \(\mathbf{w}_{i}^{j}\models_{s}C\) and \(\mathbf{w}_{i}^{k}\models_{s}C\), then \(j=k\). Similarly, if there exists \(j\) and \(k\) such that \(\mathbf{w}_{i}^{j}\models_{v}C\) and \(\mathbf{w}_{i}^{k}\models_{v}C\) then \(j=k\)._ _Proof (sketch):_ For the atomic contracts, this is clear. By structural induction, the result follows for conjunction, sequence, and reparation. For the trigger operations, the definition of \(TL\) ensures the result. For recursion, note how given a finite interaction there is always a finite amount of times the recursion can be unfolded (with an upper bound of \(j-i\)) so that we can determine satisfaction or violation in finite time. (See the Appendix for a detailed proof.5) Footnote 5: All the proofs of lemmas, propositions and theorems of this and the next section may be found in the appendix. If an interaction is not informative for satisfaction, it is not necessarily informative for violation, and vice-versa. But we can show that if there is a point of informative satisfaction then there is no point of informative violation. **Lemma 3.2** (Disjoint satisfaction and violation): _Informative satisfaction and violation are disjoint: there are no \(j,k\) s.t. \(\mathbf{w}_{i}^{j}\models_{s}C\) and \(\mathbf{w}_{i}^{k}\models_{v}C\)._ _Proof (sketch):_ The proof follows easily by induction on the structure of C. We can then give semantics to infinite interactions.
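The tight-language and prefix-closure conditions that drive the trigger and guard cases can be phrased directly against any membership test for \(L(re)\); a small sketch is below, restricted (as an assumption of the sketch) to the extensions visible within the supplied finite trace.

```python
def tight_match(in_lang, trace, i, j):
    """trace[i..j] is in TL(re): it matches and no strict prefix already matched."""
    return in_lang(trace[i:j + 1]) and not any(
        in_lang(trace[i:k + 1]) for k in range(i, j))

def in_closure(in_lang, trace, i, k, horizon):
    """trace[i..k] is in cl(re), judged only against extensions up to `horizon`:
    some longer slice trace[i..j] (k < j < horizon) is a tight match."""
    return any(tight_match(in_lang, trace, i, j) for j in range(k + 1, horizon))
```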
Definition 3.2 (Models): For an infinite interaction \(\mathbf{w}_{0}^{\infty}\) and a \(c\mathcal{DL}\) contract \(C\), we say \(\mathbf{w}_{0}^{\infty}\) models \(C\), denoted by \(\mathbf{w}_{0}^{\infty}\vDash C\), when there is no prefix of the interaction that informatively violates \(C\): \(\mathbf{w}_{0}^{\infty}\vDash C\stackrel{{\mbox{\tiny def}}}{{=}}\nexists j\in\mathbb{N}\cdot\mathbf{w}_{0}^{j}\models_{v}C\).
For an obligation \(O_{0}(a)\), party \(0\) alone is blamed when it does not attempt \(a\) (case (iii)). The intuition is that by not attempting \(a\), party \(0\) violated the contract, thus relieving party \(1\) of any obligation to cooperate or non-interfere (given party \(0\) knows there is no hope for the norm to be satisfied if they do not attempt \(a\)). We use similar interpretations for the other norms. Another crucial observation is that violations of a contract are not necessarily caused by a party. For example, the violated contract \(\bot\) cannot be satisfied. Moreover, norms can conflict, e.g., \(O_{p}(a)\wedge F_{p}(a)\). Conflicts are not immediately obvious without some analysis, e.g., \(\langle re\rangle O_{p}(a)\wedge\langle re^{\prime}\rangle F_{p}(a)\) (where there is some interaction for which \(re\) and \(re^{\prime}\) tightly match at the same time). We provide machinery to talk about conflicts, to avoid unsound blaming, by characterising two contracts to be conflicting when there is no way to satisfy them together. Definition 3 (Conflicts): Two contracts \(C\) and \(C^{\prime}\) are in conflict after a finite interaction \(\vec{w}_{i}^{j}\) if at that point their conjunction has not been informatively satisfied or violated yet, but all possible further steps lead to its violation: \(\text{conflict}(C,C^{\prime},\vec{w}_{i}^{j})\stackrel{{\text{\tiny{\sf def}}}}{{=}}\nexists\vec{w}^{\prime}:\vec{w^{\prime}}_{i}^{j}=\vec{w}_{i}^{j}\wedge\vec{w^{\prime}}_{i}^{j+1}\not\models_{v}C\wedge C^{\prime}\). Another instance of a conflict can be observed between \(C_{1}=O_{0}(a);F_{1}(c)\) and \(C_{2}=O_{0}(b)\blacktriangleright O_{0}(c)\) at the second position. This can be demonstrated with a trace of length one, \(\langle a_{0};a_{1}\rangle\), where the obligation to achieve \(c\) for party \(0\) and the prohibition to achieve \(c\) for party \(1\) have to be enforced simultaneously. Example 3.3: _Recall the violating case of Example 3.2, where robot 1 declines to help in lifting, twice. Clearly in that case \(\vec{w}_{0}^{3}\models_{v}^{1}\mathit{collabRobot}\). However, if robot 0 did not attempt a lift in position 3 (i.e., to attempt to satisfy the reparation), the blame would be on the other agent._ From the definition of blame it easily follows that a party is blamed for a violation only when there is a violation: Proposition 3: _If a party \(p\) is blamed for the violation of \(C\) then \(C\) has been violated: \(\exists p\cdot\vec{w}_{i}^{j}\models_{v}^{p}C\) implies \(\vec{w}_{i}^{j}\models_{v}C\)._ Proof: Note how each case of \(\models_{v}^{p}\) implies its counterpart in \(\models_{v}\). But the opposite is not true: Proposition 4: _A contract may be violated but both parties be blameless: \(\vec{w}_{i}^{j}\models_{v}C\) does not imply \(\exists p\cdot\vec{w}_{i}^{j}\models_{v}^{p}C\)._ Proof: Consider their definitions on \(\bot\), and given conjunction and the presence of conflicts.
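A small sketch of blame at the level of a single time step for the atomic norms, following the intuition above and consistent with the blame-automaton transitions of Definition 4.2 below; `attempted[p]` is the set of actions party \(p\) enables at that step, and the function names are choices of the sketch.

```python
def norm_status_with_blame(kind, p, a, attempted):
    """Return (status, blamed): status is 'sat' or 'viol'; blamed is None, 0 or 1.
    kind is 'O', 'F' or 'P' for the norm of party p over action a."""
    mine, theirs = a in attempted[p], a in attempted[1 - p]
    if kind == "O":                              # obligation: a must succeed
        if mine and theirs:
            return "sat", None
        return ("viol", p) if not mine else ("viol", 1 - p)
    if kind == "F":                              # prohibition: a must not succeed
        return ("viol", p) if (mine and theirs) else ("sat", None)
    if kind == "P":                              # permission: do not block p's a
        return ("viol", 1 - p) if (mine and not theirs) else ("sat", None)
    raise ValueError(kind)

# Robot scenario: robot 0 attempts the lift, robot 1 does not -> blame on party 1.
print(norm_status_with_blame("O", 0, "lift", {0: {"lift"}, 1: set()}))
```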
Proposition 5 (Satisfaction implies no blame): _Satisfaction of contract \(C\) means that no party will get blamed: \(\vec{w}_{i}^{j}\models_{s}C\) implies \(\nexists p\cdot\vec{w}_{i}^{j}\models_{v}^{p}C\)_ Proof: Assume the contrary, i.e. that \(C\) is satisfied but party \(p\) is blamed. By Proposition 3 there is then a violation, but Lemma 3.2 implies we cannot have both a satisfaction and a violation. **Observation 3.1**.: _For any contract \(C\) defined in \(c\mathcal{DL}\) free of \(\bot\) and free of conflicts, the violation of \(C\) leads to blame._ **Observation 3.2** (Double blame).: _Double blame in \(c\mathcal{DL}\) for both parties \(p\) and \(1-p\) is possible. Consider \(C=O_{p}(a)\wedge O_{p}(b)\). Violation of the left-hand side by \(p\) and violation of the right-hand side by \(1-p\) can happen at the same time._ ### Quantitative Semantics While it is possible to assign blame to one party for violating a contract, other quantitative metrics can provide additional information about the violation. These metrics can determine the number of violations caused by each party, as well as the level of satisfaction of the contract. To assess responsibility for contract violations, we introduce the notion of a mistake score, \(\rho\), for each party, enabling us to calculate a _responsibility degree_. It is important to note that our language permits reparations, whereby violations can be corrected in the next time step. However, interactions that are satisfied with reparations are not considered ideal. We present quantitative semantics to compare satisfying interactions based on the number of repaired violations a party incurs. We define relations that track the number of repaired violations attributed to each party with a mistake score, \(\rho\).
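Read operationally, the mistake score can be understood as a per-party counter of repaired violations along a satisfying interaction; the sketch below illustrates only that counting idea (an assumed reading, not the formal relation), together with one plausible notion of responsibility degree.

```python
def mistake_scores(repaired_violations):
    """repaired_violations: list of blamed parties (0 or 1), one entry per
    violation that was subsequently repaired along the interaction."""
    rho = {0: 0, 1: 0}
    for party in repaired_violations:
        rho[party] += 1
    total = sum(rho.values())
    degree = {p: (rho[p] / total if total else 0.0) for p in rho}
    return rho, degree

# Two repaired violations caused by party 1 and one by party 0.
print(mistake_scores([1, 1, 0]))   # ({0: 1, 1: 2}, {0: 0.33..., 1: 0.66...})
```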
Proof (sketch).: We prove this by structural induction, noting that the score only increases when \(p\) is blamed for the violation of a norm, while the inductive case easily follows from the inductive hypothesis. ## 4 Analysis In this section, we define an automata-theoretic approach to analyzing \(c\mathcal{DL}\) contracts, through a construction to a safety automaton. We use this for model checking and blame analysis, but leave the application for quantitative analysis for future work. ### Contracts to Automata We give a construction from \(c\mathcal{DL}\) contracts to automata that recognize interactions that are informative for satisfaction or violation. For brevity, we keep the definition of the automata symbolic, with transitions tagged by propositions over party actions, representing a set of concrete transitions. The automaton is over the alphabet \(\Sigma_{0,1}\) since it requires information about the parties. Definition 4.1: The deterministic _automaton of contract_ \(C\) is: \[\text{aut}(C)\stackrel{{\mbox{\tiny def}}}{{=}}\langle\Sigma_{0,1},S,s_{0},\{s_{B}\},\rightarrow\rangle\]
The transition relation \(\rightarrow\) is obtained from a function \(\tau(C,s_{0},s_{G},s_{B},V)\) defined by induction on the structure of \(C\), where \(V\) maps recursion variables to states; two of the defining cases are those for recursion variables and recursion: \[\tau(X,s_{0},s_{G},s_{B},V)\stackrel{{\mbox{\tiny def}}}{{=}}\{s_{G}\stackrel{{\epsilon}}{{\rightarrow}}V(X)\}\] \[\tau(recX.C,s_{0},s_{G},s_{B},V)\stackrel{{\mbox{\tiny def}}}{{=}}\tau(C,s_{0},s_{G},s_{B},V[X\mapsto s_{0}])\] _We define \(\rightarrow^{\prime}\) as \(\tau(C,s_{0},s_{G},s_{B},\{\})\) without all transitions outgoing from \(s_{G}\) and \(s_{B}\), and define \(\rightarrow\stackrel{{\mbox{\tiny def}}}{{=}}\rightarrow^{\prime}\cup\{s_{B}\xrightarrow{\mathit{true}}s_{B}\}\cup\{s_{G}\xrightarrow{\mathit{true}}s_{G}\}\), where \(S\) is the set of states used in \(\rightarrow\). We assume the \(\epsilon\)-transitions are removed using standard methods._ We give some intuition for the construction. The transitions for the atomic contracts follow quite clearly from their semantics. For the trigger contracts, we use a fresh state \(s\) to connect the automaton for the regular expression with that of the contract, ensuring the latter is only entered when the former tightly matches. For the guard contract, we instead synchronously compose (\(\|\)) both automata (i.e., intersect their languages), getting a set of transitions. Here we also relabel tuples of states to single states. Recall we use \((*,s)\) to match any pair where the second term is \(s\), and similarly for \((s,*)\). Through the sequence of re-labellings, we ensure: (1) reaching \(s_{G}\) in the first means acceptance; (2) reaching \(s_{B}\) in the second means violation; and (3) if the previous two situations are not the case, reaching \(s_{G}\) in the second means acceptance. For conjunction, instead of using the synchronous product, we use the relaxed variant (\(\|^{r}\)), since the contracts may require traces of different lengths for satisfaction. This relaxed product allows the 'longer' contract to continue after the status of the other is determined. For sequence, we use the fresh state \(s\) to move between the automata, once the first contract has been satisfied. For reparation this is similar, except we move between the contracts at the moment the first is violated. For recursion, we simply loop back to the initial state of the recursed contract with an \(\epsilon\)-transition once the corresponding recursion variable is encountered. Note how analyzing states without viable transitions, after applying \(\tau\), can be used for _conflict analysis_ of \(c\mathcal{DL}\) contracts.
For example, when there is a conflict, e.g., \(O_{p}(a)\wedge F_{p}(a)\), there will be a state with all outgoing transitions to \(s_{B}\). Theorem 4.1 (Correctness): _An infinite interaction is a model of \(C\) iff it never reaches a rejecting state in \(aut(C)\): \(\forall\vec{w}_{0}^{\infty}\cdot\vec{w}_{0}^{\infty}\models C\iff w_{0}\sqcup_{1}^{0}w_{1}\in L(aut(C))\)._ Proof (sketch): For the atomic contracts, the correspondence should be clear. By structural induction on the rest: triggering, sequence, and reparation should also be clear from the definition. For conjunction, the relaxed synchronous product makes sure the contract not yet satisfied continues being executed, as required, while the replacements ensure large nestings of conjunctions do not lead to large tuples of accepting or rejecting states. For \(\upharpoonright\), using the synchronous product ensures the path ends when either is satisfied/violated, as required. **Corollary 4.1**.: _An infinite interaction is not a model of \(C\) if and only if it reaches a rejecting state in \(aut(C)\): \(\forall(w_{0},w_{1})\not\models C\iff\exists j\in\mathbb{N}\cdot s_{0}\xRightarrow{(w_{0}\sqcup_{1}^{0}w_{1})[0\ldots j]}s_{B}\)._ Proof (sketch).: Follows from Theorem 4.1 and completeness (up to rejection) of \(aut(C)\). **Complexity** From the translation, note that without regular expressions the number of states and transitions is linear in the number of sub-clauses and operators in the contract, but is exponential in the presence of regular expressions.6 Footnote 6: For example, a contract \(recX.\top;(O_{0}(a)\wedge P_{1}(b));X\) has size 8 (note normed actions are not counted). ### Model Checking We represent the behaviour of each party as a Moore machine (\(M_{0}\) and \(M_{1}\)). For party 0, the input alphabet is \(\Sigma_{1}\) and the output alphabet is \(\Sigma_{0}\), and vice-versa for party 1. We characterise the composed behaviour of two parties by defining the product of the two dual Moore machines, \(M_{0}\otimes M_{1}\), getting an automaton over \(\Sigma_{0}\cup\Sigma_{1}\). We can then compose this automaton that represents the interactive behaviour of the parties with the contract's automaton, \((M_{0}\otimes M_{1})\|aut(C)\). Then, if no rejecting state is reachable in this automaton, the parties' composed behaviour respects the contract. **Theorem 4.2** (Model Checking Soundness and Completeness).: \(\emptyset=RL((M_{0}\otimes M_{1})\|aut(C))\) _iff \(\nexists\boldsymbol{w}_{0}^{\infty}:w_{0}\sqcup_{1}^{0}w_{1}\in L(M_{0}\otimes M_{1})\wedge\boldsymbol{w}_{0}^{\infty}\models_{v}C\)._ Proof.: Consider that \(\|\) computes the intersection of the languages, while Theorem 4.1 states that \(L(aut(C))\) contains exactly the traces satisfying \(C\) (modulo a simple technical procedure to move between labelled traces and pairs of traces). Then it follows easily that \(RL((M_{0}\otimes M_{1})\|aut(C))\) is empty only when there is no trace in \((M_{0}\otimes M_{1})\) that leads to a rejecting state in \(aut(C)\). The same logic can be taken in the other direction. ### Blame Assignment For the blame assignment, we can modify the automaton construction by adding two other violating states: \(s_{B}^{0}\) and \(s_{B}^{1}\), and adjust the transitions for the basic norms accordingly.
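The model-checking step reduces to reachability of a rejecting state in the composed automaton; the sketch below assumes the product \((M_{0}\otimes M_{1})\|aut(C)\) has already been built (for example with the product sketch given earlier) as a dict `state -> {event: set of successors}`, and also shows the corresponding blame query over the blame automaton defined next (the state names in the comment are placeholders for \(s_{B}^{0}\), \(s_{B}^{1}\), etc.).

```python
from collections import deque

def violates(product_delta, initial, rejecting):
    """True iff some finite interaction of the composed behaviour reaches a
    rejecting state of aut(C), i.e. the parties may violate the contract."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if state in rejecting:
            return True
        for successors in product_delta.get(state, {}).values():
            for nxt in successors - seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def blamable_parties(product_delta, initial, blame_of):
    """For the blame automaton: collect which parties label a reachable rejecting
    state, e.g. blame_of = {s_B0: {0}, s_B1: {1}, s_B01: {0, 1}, s_B: set()}."""
    blamed, seen, stack = set(), {initial}, [initial]
    while stack:
        state = stack.pop()
        blamed |= blame_of.get(state, set())
        for successors in product_delta.get(state, {}).values():
            for nxt in successors - seen:
                seen.add(nxt)
                stack.append(nxt)
    return blamed
```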
**Definition 4.2**.: _The deterministic blame automaton of contract \(C\) is:_ \[blAut(C)\xRightarrow{\langle}\Sigma_{0,1},S,s_{0},\{s_{B},s_{B}^{0},s_{B}^{1},(s_{B}^{0},s_{B}^{1})\},\rightarrow\rangle\] _We define \(\rightarrow\) through the function \(\tau(C,s_{0},s_{G},s_{B}^{0},s_{B}^{1},V)\) that computes a set of transitions, as in Definition 4.1 but now assigning blame by transitioning to the appropriate state. We focus on a subset of the rules, given limited space, where there are substantial changes7: Footnote 7: The missing rules essentially mirror the previous construction with the added states, and the different domains. \[\tau(O_{p}(a),s_{0},s_{G},s_{B}^{0},s_{B}^{1},V) \stackrel{{\mbox{\tiny def}}}{{=}}\{s_{0}\xrightarrow{a_ {p}\wedge a_{1-p}}s_{G},s_{0}\xrightarrow{\neg a_{p}}s_{B}^{p},s_{0} \xrightarrow{a_{p}\wedge\neg a_{1-p}}s_{B}^{1-p}\}\] \[\tau(F_{p}(a),s_{0},s_{G},s_{B}^{0},s_{B}^{1},V) \stackrel{{\mbox{\tiny def}}}{{=}}\{s_{0} \xrightarrow{\neg(a_{p}\wedge a_{1-p})}s_{G},s_{0}\xrightarrow{a_{p}\wedge a _{1-p}}s_{B}^{p}\}\] \[\tau(P_{p}(a),s_{0},s_{G},s_{B}^{0},s_{B}^{1},V) \stackrel{{\mbox{\tiny def}}}{{=}}\{s_{0} \xrightarrow{a_{p}\implies a_{1-p}}s_{G},s_{0}\xrightarrow{a_{p}\wedge\neg a _{1-p}}s_{B}^{1-p}\}\] \[\tau(C\blacktriangleright C^{\prime},s_{0},s_{G},s_{B}^{0},s_{B}^{ 1},V) \stackrel{{\mbox{\tiny def}}}{{=}}\tau(C,s_{0},s_{G},s^{0},s^{ 1},V)\] \[\qquad\cup\tau(C^{\prime},s^{0},s_{G},s_{B}^{0},V)\cup\tau(C^{ \prime},s^{1},s_{G},s_{B}^{1},V)\] Given \(\rightarrow^{\prime}=\tau(C,s_{0},s_{G},s_{B},\{\})\), \(\rightarrow\) is defined as \(\rightarrow^{\prime}\) with the following transformations, in order: (1) any tuple of states containing both \(s_{B}^{0}\) and \(s_{B}^{1}\) is relabelled as \((s_{B}^{0},s_{B}^{1})\); (2) any tuple of states containing \(s_{B}^{0}\) (\(s_{B}^{1}\)) is relabelled as \(s_{B}^{0}\) (\(s_{B}^{1}\)); (3) any state for which all outgoing transitions go to a bad state are redirected to \(s_{B}\); (4) any tuple of states containing \(s_{G}\) is relabelled as \(s_{G}\); and (5) all bad states and \(s_{G}\) become sink states. \(S\) is the set of states used in \(\rightarrow\). We assume the \(\epsilon\)-transitions are removed using standard methods. Note how this automata simply refines the bad states of the original automata construction, by assigning blame for the violation of norms through a transition to an appropriate new state. While the post-processing (see (3)), allows violations caused by conflicts to go instead to state \(s_{B}\), where no party is blamed. Then we prove correspondence with the blame semantics: Theorem 4.3 (Blame Analysis Soundness and Completeness): _Where \(RL_{p}\), for \(p\in\{0,1\}\), is the rejecting language of the automaton through states that pass through \(s_{B}^{p}\) or the tuple state \((s_{B}^{0},s_{B}^{1})\):_ \(\emptyset=RL_{p}((M_{0}\otimes M_{1})\|bAlut(C))\) _iff \(\nexists w_{0},w_{1}\in(2^{\Sigma})^{*}:w_{0}\sqcup_{1}^{0}w_{1}\in L(M_{0} \otimes M_{1})\wedge(w_{0},w_{1})\models_{v}^{p}C\)._ This automaton can be used for model checking as before, but it can also answer queries about who is to blame. Example 4.1: _We illustrate in Figure 4 an example of two Moore machines representing the behaviour of two parties (Figures 3(a) and 3(b)). Note these are deterministic, therefore their composition (Figure 4) is just a trace. Note the same theory applies even when the Moore machines are non-deterministic. 
In Figures 3(d) and 3(e) we show the automaton and blame automaton for the contract \(recX.(O_{1}(c)\blacktriangleright O_{0}(b);X)\). Our model checking procedure (without blame) will compose Figure 4 and Figure 3(d), and identify that the trace reaches the bad state. Consider that the reparation consisting of an obligation to perform an action \(b\) was not satisfied. Similarly (not shown here) blame automaton would blame party 1 for the violation._ ## 5 Related Work **Multi-agent systems.** A number of logics can express properties about multi-agent systems. For example, ATL can express the existence of a strategy for one or more agents to enforce a certain specification [2], while strategy logic makes strategies first-class objects [7]. Checking for the existence of strategies is in 2EXPTIME. Our logic is not concerned with the existence of strategies, but with analyzing the party strategies to ensure they respect a contract. So, our approach is more comparable to LTL than to game-based logic, limited to (co-)safety properties and with a notion of norms that allows us to talk about blame natively. Concerning blame, [11] considers the notion of _blameworthiness_. They use structural equations to represent agents, but the approach is not temporal, and each agent performs only one action. Work in this area (e.g., [11, 13, 9]) tends to be in a different setting than ours. They consider the cost of actions and agents' beliefs about the probability of their actions not achieving the expected outcome. Instead, we assume all the parties have knowledge of the contract, and we take an automata-theoretic Figure 4: Example of the model checking approach. approach. Moreover, our blame derives from the norms, whereas other work depends on a notion of causality [8]. The work [1] extends _STIT logic_ with notions of responsibility, allowing reasoning about blameworthiness and praiseworthiness. This, and other similar work (e.g., [15]) is more related to our work and even has a richer notion of blame. However, we give an automata-based model checking procedure. **Deontic logics** Deontic logics have been used in a multi-agent setting before. For example, [6] define deontic notions in terms of ATL, allowing reasoning like _an obligation holds for an agent iff they have a strategy to carry it out_. These approaches (e.g., [6, 17, 20]) focus on obligations and neglect both preparations and our view of permissions as rights. Some approaches (e.g., [17, 19]) however do perform model checking for a deontic logic in a multi-agent system setting. The work most similar to ours is that of _contract automata_[5], wherein a contract is represented as a Kripke structure (with states tagged by norms), two parties as automata, and permissions with a similar rights-based view. However, it takes a purely operational approach, and lacks a notion of blame. Our language is an extension and combination of the deontic languages presented in [3, 4, 18], combining action attempts, a right-based view of permission, a two-party setting, and regular expressions as conditions. Besides maintaining all these, we give denotational trace semantics, and provide blame and model checking algorithms. ## 6 Conclusions In this paper we have introduced a deontic logic for reasoning about a two-party synchronous setting. This logic allows one to define constraints on when parties should support or non-interfere with the carrying out of a certain action or protocol. 
Using a pair of party traces, we can talk about attempts and success to perform collaborative actions. We consider automata constructions describing both the set of all satisfying and violating sequences. Given the behavior of the agents in the form of suitable automata, we have also provided algorithms for model checking and for blame assignment. To differentiate between satisfying a formula in the expected manner or by fullfilling the exceptional case, we introduce a quantitative semantics. This allows ordering satisfying traces depending on how often they use these exceptions. This work may be extended in many directions. First, we could consider asynchronous interaction, distinguishing between sending and receiving. The syntax and semantics can also be extended easily to handle multi-party agents rather than just a two-party setting. Different quantitative semantics could be given, for example considering the _costs of actions_ to reason when it is better to pay a fine rather than to behave as expected. We plan to study how to synthesise strategies for the different parties, for instance to ensure the optimal behaviour of agents.
2309.13727
Inequalities For Distances Between Triangle Centers
In his seminal paper on triangle centers, Clark Kimberling made a number of conjectures concerning the distances between triangle centers. For example, if $D(i; j)$ denotes the distance between triangle centers $X_i$ and $X_j$ , Kimberling conjectured that $D(6; 1) \leq D(6; 3)$ for all triangles. We use symbolic mathematics techniques to prove these conjectures. In addition, we prove stronger results, using best-possible constants, such as $D(6; 1) \leq (2 -\sqrt3)D(6; 3)$.
Stanley Rabinowitz
2023-09-24T19:14:13Z
http://arxiv.org/abs/2309.13727v1
# Inequalities For Distances # Inequalities For Distances Between Triangle Centers Stanley Rabinowitz 545 Elm St Unit 1, Milford, New Hampshire 03055, USA e-mail: [email protected] web: [http://www.StanleyRabinowitz.com/](http://www.StanleyRabinowitz.com/) **Abstract.** In his seminal paper on triangle centers, Clark Kimberling made a number of conjectures concerning the distances between triangle centers. For example, if \(D(i,j)\) denotes the distance between triangle centers \(X_{i}\) and \(X_{j}\), Kimberling conjectured that \(D(6,1)\leq D(6,3)\) for all triangles. We use symbolic mathematics techniques to prove these conjectures. In addition, we prove stronger results, using best-possible constants, such as \(D(6,1)\leq(2-\sqrt{3})D(6,3)\). **Keywords.** triangle geometry, triangle centers, inequalities, computer-discovered mathematics, Blundon's Fundamental Inequality GeometricExplorer. **Mathematics Subject Classification (2020).** 51M04, 51-08. ## 1. Introduction Let \(X_{n}\) denote the \(n\)th named triangle center as cataloged in the Encyclopedia of Triangle Centers [4]. Let \(X_{i}X_{j}\) denote the distance between \(X_{i}\) and \(X_{j}\). We will also write this as \(D(i,j)\). In his seminal paper on triangle centers [3], Clark Kimberling made a number of conjectures concerning the distances between pairs of triangle centers. For example, Kimberling conjectured that \(D(6,1)\leq D(6,3)\) for all triangles. He also conjectured the truth of many chains of inequalities, such as the following. \[X_{3}X_{9}\leq X_{3}X_{10}\leq X_{3}X_{2}\leq X_{3}X_{12}\leq X_{3}X_{7}\leq X _{3}X_{4}.\] Kimberling reached these conjectures by using a computer to examine 10,740 different shaped triangles and numerically computing the coordinates for the centers. Upon determining that the inequality held for each of these 10,740 triangles, he then conjectured that the inequality was true for all triangles. With the advances in computers and symbolic algebra systems, it is now possible to prove these conjectures using exact symbolic computation. ## 2. Barycentric Coordinates We use barycentric coordinates in this study. The barycentric coordinates for triangle centers \(X_{1}\) through \(X_{20}\) in terms of the sides of the triangle, \(a\), \(b\), and \(c\), are shown in Table 1, where \[S=\frac{1}{2}\sqrt{(a+b-c)(a-b+c)(-a+b+c)(a+b+c)}.\] Only the first barycentric coordinate is given, because if \(f(a,b,c)\) is the first barycentric coordinate for a point \(P\), then the barycentric coordinates for \(P\) are \[\Big{(}f(a,b,c):f(b,c,a):f(c,a,b)\Big{)}.\] These were derived from [4]. To find the distance between two centers, we used the following formula which comes from [2]. **Proposition 1**.: _Given two points \(P=(u_{1},v_{1},w_{1})\) and \(Q=(u_{2},v_{2},w_{2})\) in normalized barycentric coordinates. Denote \(x=u_{1}-u_{2}\), \(y=v_{1}-v_{2}\) and \(z=w_{1}-w_{2}\). 
Then the distance between \(P\) and \(Q\) is_ \[\sqrt{-a^{2}yz-b^{z}x-c^{2}xy}.\] \begin{table} \begin{tabular}{|l|l|} \hline n & first barycentric coordinate for \(X_{n}\) \\ \hline 1 & \(a\) \\ \hline 2 & 1 \\ \hline 3 & \(a^{2}(a^{2}-b^{2}-c^{2})\) \\ \hline 4 & \((a^{2}+b^{2}-c^{2})(a^{2}-b^{2}+c^{2})\) \\ \hline 5 & \(c^{4}-a^{2}b^{2}+b^{4}-a^{2}c^{2}-2b^{2}c^{2}\) \\ \hline 6 & \(a^{2}\) \\ \hline 7 & \((a+b-c)(a-b+c)\) \\ \hline 8 & \(a-b-c\) \\ \hline 9 & \(a(a-b-c)\) \\ \hline 10 & \(b+c\) \\ \hline 11 & \((b-c)^{2}(-a+b+c)\) \\ \hline 12 & \((a+b-c)(a-b+c)(b+c)^{2}\) \\ \hline 13 & \(a^{4}-2(b^{2}-c^{2})^{2}+a^{2}(b^{2}+c^{2}+2\sqrt{3}S)\) \\ \hline 14 & \(a^{4}-2(b^{2}-c^{2})^{2}+a^{2}(b^{2}+c^{2}-2\sqrt{3}S)\) \\ \hline 15 & \(a^{2}(\sqrt{3}(a^{2}-b^{2}-c^{2})-2S)\) \\ \hline 16 & \(a^{2}(\sqrt{3}(a^{2}-b^{2}-c^{2})+2S)\) \\ \hline 17 & \((a^{2}+b^{2}-c^{2}+2\sqrt{3}S)(a^{2}-b^{2}+c^{2}+2\sqrt{3}S)\) \\ \hline 18 & \((a^{2}+b^{2}-c^{2}-2\sqrt{3}S)(a^{2}-b^{2}+c^{2}-2\sqrt{3}S)\) \\ \hline 19 & \(a(a^{2}+b^{2}-c^{2})(a^{2}-b^{2}+c^{2})\) \\ \hline 20 & \(3a^{4}-2a^{2}b^{2}-b^{4}-2a^{2}c^{2}+2b^{2}c^{2}-c^{4}\) \\ \hline \end{tabular} \end{table} Table 1. Barycentric coordinates for the first 20 centers ## 3. Graphs For \(n\), \(i\), and \(j\) ranging from \(1\) to \(20\), we used Algorithm B from [5] to check every inequality of the form \(D(n,i)\leq D(n,j)\). Algorithm B is based on Blundon's Fundamental Inequality [1]. Figure \(n\) shows a graph of the results. An arrow from node \(i\) to node \(j\) means that \(D(n,i)\leq D(n,j)\) for all triangles. No arrow means the inequality does not hold for all triangles. Since we used exact symbolic computations, these results are theorems and not conjectures. To avoid radicals, we replaced inequalities of the form \(D(a,b)\leq D(c,d)\) by the equivalent inequality \(D(a,b)^{2}\leq D(c,d)^{2}\). Figure 2. \(X_{2}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{2}X_{i}\leq X_{2}X_{j}\). Figure 4. \(X_{4}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{4}X_{i}\leq X_{4}X_{j}\). Figure 5. \(X_{5}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{5}X_{i}\leq X_{5}X_{j}\). Figure 6. \(X_{6}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{6}X_{i}\leq X_{6}X_{j}\). Figure 8. \(X_{8}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{8}X_{i}\leq X_{8}X_{j}\). Figure 10. \(X_{10}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{10}X_{i}\leq X_{10}X_{j}\). Figure 9. \(X_{9}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{9}X_{i}\leq X_{9}X_{j}\). Figure 11. \(X_{11}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{11}X_{i}\leq X_{11}X_{j}\). Figure 12. \(X_{12}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{12}X_{i}\leq X_{12}X_{j}\). Figure 14. \(X_{14}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{14}X_{i}\leq X_{14}X_{j}\). Figure 13. \(X_{13}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{13}X_{i}\leq X_{13}X_{j}\). Figure 16. \(X_{16}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{16}X_{i}\leq X_{16}X_{j}\). Figure 15. \(X_{15}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{15}X_{i}\leq X_{15}X_{j}\). There were no inequalities found for \(n=18\). In other words, there were no inequalities of the form \(D(18,i)\leq D(18,j)\) for any \(i\) and \(j\) with \(1\leq i\leq 20\), \(1\leq j\leq 20\), \(i\neq 18\), \(j\neq 18\), and \(i\neq j\). Figure 19. \(X_{19}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{19}X_{i}\leq X_{19}X_{j}\). 
Figure 17. \(X_{17}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{17}X_{i}\leq X_{17}X_{j}\). Examining these graphs, we note that there are a few loops. An arrow from \(i\) to \(j\) and an arrow from \(j\) to \(i\) in Figure \(n\) means that \(D(n,i)\leq D(n,j)\) and \(D(n,j)\leq D(n,i)\). This implies that \(D(n,i)=D(n,j)\). Three such equalities were noticed: \(D(1,10)=D(8,10)\), \(D(3,4)=D(3,20)\), and \(D(3,5)=D(4,5)\). These equalities were noticed by Kimberling in [3, Table 5.4]. These correspond to the (now) well-known facts that in all triangles, \(X_{10}\) is the midpoint of \(\overline{X_{1}X_{8}}\), \(X_{3}\) is the midpoint of \(\overline{X_{4}X_{20}}\), and \(X_{5}\) is the midpoint of \(\overline{X_{3}X_{4}}\). Since we only investigated inequalities between distances formed by three triangle centers, this does not mean that we can conclude that there aren't any other equalities of the form \(D(i_{1},i_{2})=D(i_{3},i_{4})\), where \(i_{1}\), \(i_{2}\), \(i_{3}\), and \(i_{4}\) are all distinct. To check for such equalities, we ran a separate Mathematica program that examined all distances of the form \(D(i,j)\) where \(i\) and \(j\) are distinct integers between 1 and 20, looking for duplicate distances. No new equalities were found. This lets us state the following result. **Proposition 2**.: _The only pairs of centers from among the first 20 centers that have equal distances are the following._ \[D(1,10) =D(8,10)\] \[D(3,4) =D(3,20)\] \[D(3,5) =D(4,5)\] Figure 20. \(X_{20}\) inequalities. An arrow from \(i\) to \(j\) means \(X_{20}X_{i}\leq X_{20}X_{j}\). ## 4. Bounds Some of the inequalities from Section 3 can be strengthened. For example, from Figure 6, one can see that \(D(6,2)\leq D(6,10)\). However, the stronger inequality \[D(6,2)\leq\frac{1}{3}\left(1+\sqrt{2}\right)D(6,10)\] is true. To find the best such inequalities, we applied Algorithm K from [5] to every inequality of the form \[D(n,i)\leq kD(n,j)\quad\text{or}\quad D(n,i)\geq kD(n,j)\] for \(n\), \(i\), and \(j\) ranging from \(1\) to \(10\) with \(i<j\) to find the smallest (resp. largest) constant \(k\) making the inequality true. The results are given below, shown as lower and upper bounds for \(\dfrac{D(n,i)}{D(n,j)}\). Lower bounds of \(0\) and upper bounds of \(\infty\) are omitted. For example, \(0\leq\dfrac{D(1,2)}{D(1,4)}\leq\infty\) would mean that Algorithm K proved that there is no constant \(k>0\) such that \(k\leq\dfrac{D(1,2)}{D(1,4)}\) is true for all triangles, and that there is no constant \(k\) such that \(\dfrac{D(1,2)}{D(1,4)}\leq k\) is true for all triangles. 
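As a numerical cross-check on the symbolic results below (this is our own illustrative script, not the paper's Algorithm K), one can sample triangle shapes, evaluate distances from the barycentric coordinates of Table 1 via the squared-distance formula of Proposition 1 (written here with the standard sign pattern \(-a^{2}yz-b^{2}zx-c^{2}xy\)), and track the extremes of a ratio such as \(D(6,1)/D(6,3)\). The sampled supremum stays below, and approaches, the best-possible constant \(2-\sqrt{3}\) appearing in Theorem 6.

```python
import numpy as np

def first_coord(n):
    # first barycentric coordinates of X_1, X_3, X_6 from Table 1
    return {1: lambda a, b, c: a,
            3: lambda a, b, c: a**2 * (a**2 - b**2 - c**2),
            6: lambda a, b, c: a**2}[n]

def center(n, a, b, c):
    f = first_coord(n)
    p = np.array([f(a, b, c), f(b, c, a), f(c, a, b)], dtype=float)
    return p / p.sum()                       # normalized barycentric coordinates

def dist(P, Q, a, b, c):
    # Proposition 1: PQ^2 = -a^2*y*z - b^2*z*x - c^2*x*y
    x, y, z = P - Q
    return np.sqrt(max(-a**2 * y * z - b**2 * z * x - c**2 * x * y, 0.0))

rng = np.random.default_rng(0)
sup_ratio = 0.0
for _ in range(100_000):
    a, b, c = np.sort(rng.uniform(0.1, 1.0, 3))
    if a + b <= c:                           # skip degenerate "triangles"
        continue
    d61 = dist(center(6, a, b, c), center(1, a, b, c), a, b, c)
    d63 = dist(center(6, a, b, c), center(3, a, b, c), a, b, c)
    if d63 > 1e-9:
        sup_ratio = max(sup_ratio, d61 / d63)

print(sup_ratio, 2 - np.sqrt(3))             # sampled sup approaches 2 - sqrt(3)
```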
**Theorem 1**.: _The following bounds involving distances from \(X_{1}\) hold for all triangles._ \[\begin{split}\dfrac{D(1,2)}{D(1,3)}\leq\frac{2}{3}\quad\quad \left|\begin{aligned} & 1+\sqrt{3}\leq&\dfrac{D(1,3)}{D(1,6)}\\ &\dfrac{2\sqrt{2}}{3}\leq&\dfrac{D(1,2)}{D(1,6)}\\ \end{aligned}\right|\quad\quad\begin{aligned} & \dfrac{D(1,6)}{D(1,8)}\leq\dfrac{1}{2\sqrt{2}}\\ &\dfrac{3}{2}+\sqrt{2}\leq&\dfrac{D(1,3)}{D(1,7)}\\ \end{aligned}\right|\quad\quad\begin{aligned} & \dfrac{D(1,6)}{D(1,9)}\leq\dfrac{1}{2}\\ &\dfrac{D(1,6)}{D(1,10)}\leq\dfrac{1}{\sqrt{2}}\\ \end{aligned}\] \[\begin{split}\dfrac{1}{3}\leq&\dfrac{D(1,2)}{D(1,9 )}\leq\dfrac{2}{3}\\ &\dfrac{D(1,2)}{D(1,10)}=\dfrac{2}{3}\\ \end{split}\] \[\begin{split}\dfrac{D(1,2)}{D(1,10)}=\dfrac{2}{3}\\ \end{split}\] \[\begin{split}\dfrac{1}{2}\leq&\dfrac{D(1,3)}{D(1,4 )}\\ \end{split}\] \[\begin{split}\dfrac{1}{3}\leq&\dfrac{D(1,3)}{D(1,5 )}\\ \end{split}\] **Theorem 2**.: _The following bounds involving distances from \(X_{2}\) hold for all triangles._ \[\begin{split}\frac{D(2,1)}{D(2,3)}\leq 2&\qquad\frac{1}{4} \leq&\frac{D(2,3)}{D(2,8)}\\ \frac{D(2,1)}{D(2,4)}\leq 1&\qquad 1\leq& \frac{D(2,3)}{D(2,9)}\\ \frac{D(2,1)}{D(2,5)}\leq 4&\qquad 1\leq& \frac{D(2,3)}{D(2,10)}\\ 6\sqrt{2}-8\leq&\frac{D(2,1)}{D(2,6)}\leq 1& \qquad\frac{D(2,4)}{D(2,5)}=4\\ \frac{1}{4}\leq&\frac{D(2,1)}{D(2,7)}\leq 1& 1\leq& \frac{D(2,4)}{D(2,6)}\\ \frac{D(2,1)}{D(2,8)}=\frac{1}{2}&\qquad 1\leq& \frac{D(2,4)}{D(2,7)}\\ \frac{1}{2}\leq&\frac{D(2,1)}{D(2,9)}\leq 2& \qquad\frac{1}{2}\leq& \frac{D(2,4)}{D(2,8)}\\ \frac{D(2,1)}{D(2,10)}=2&\qquad 2\leq& \frac{D(2,4)}{D(2,9)}\\ \frac{D(2,3)}{D(2,4)}=\frac{1}{2}&\qquad 2\leq& \frac{D(2,4)}{D(2,10)}\\ \frac{D(2,3)}{D(2,5)}=2&\qquad \frac{1}{4}\leq& \frac{D(2,5)}{D(2,6)}\\ \frac{1}{2}\leq&\frac{D(2,3)}{D(2,6)}\\ \frac{1}{2}\leq&\frac{D(2,3)}{D(2,6)}\\ \frac{1}{2}\leq&\frac{D(2,3)}{D(2,7)}\\ \end{split}\qquad\begin{split}\frac{1}{2}\leq& \frac{D(2,5)}{D(2,9)}\\ \frac{1}{2}\leq&\frac{D(2,5)}{D(2,10)}\\ \frac{1}{2}\leq&\frac{D(2,6)}{D(2,7)}\leq\frac{3}{2}\\ \frac{D(2,4)}{D(2,5)}=4&\qquad 1\leq& \frac{D(2,6)}{D(2,8)}\leq\frac{4+3\sqrt{2}}{8}\\ 1\leq&\frac{D(2,6)}{D(2,9)}\leq 3\\ 2\leq&\frac{D(2,6)}{D(2,10)}\leq 2+\frac{3}{\sqrt{2}} \\ \frac{1}{2}\leq&\frac{D(2,7)}{D(2,8)}\leq 2\\ \frac{D(2,7)}{D(2,9)}=2&\qquad 2\leq& \frac{D(2,7)}{D(2,10)}\leq 8\\ 1\leq&\frac{D(2,5)}{D(2,6)}\\ \frac{1}{4}\leq&\frac{D(2,5)}{D(2,7)}\\ \frac{1}{2}\leq&\frac{D(2,3)}{D(2,6)}\\ \frac{1}{2}\leq&\frac{D(2,3)}{D(2,6)}\\ \frac{1}{2}\leq&\frac{D(2,3)}{D(2,7)}\\ \frac{1}{2}\leq&\frac{D(2,3)}{D(2,7)}\\ \end{split}\] **Theorem 3**.: _The following bounds involving distances from \(X_{3}\) hold for all triangles._ \[1\leq \frac{D(3,1)}{D(3,2)}\leq 3 \frac{1}{3}\leq \frac{D(3,2)}{D(3,7)}\leq 1 \frac{1}{3}\leq \frac{D(3,2)}{D(3,8)} \frac{1}{2}\leq \frac{D(3,5)}{D(3,8)}\] \[\frac{2}{3}\leq \frac{D(3,1)}{D(3,5)}\leq 2 1\leq \frac{D(3,2)}{D(3,9)} \frac{3}{2}\leq \frac{D(3,5)}{D(3,9)}\] \[\sqrt{3}-1\leq \frac{D(3,1)}{D(3,6)}\leq 1 1\leq \frac{D(3,2)}{D(3,10)} \frac{3}{2}\leq \frac{D(3,5)}{D(3,10)}\] \[\frac{1}{17}\left(7+4\sqrt{2}\right)\leq \frac{D(3,1)}{D(3,7)}\leq 1 \frac{D(3,4)}{D(3,5)}=2 \frac{D(3,6)}{D(3,7)}\leq C_{2}\] \[1\leq \frac{D(3,1)}{D(3,8)} 1\leq \frac{D(3,4)}{D(3,6)}\leq 3 1\leq \frac{D(3,6)}{D(3,8)}\] \[1\leq \frac{D(3,1)}{D(3,9)} 1\leq \frac{D(3,4)}{D(3,7)}\leq 3 1\leq \frac{D(3,6)}{D(3,9)}\] \[1\leq \frac{D(3,1)}{D(3,10)} 1\leq \frac{D(3,4)}{D(3,8)} 1\leq \frac{D(3,6)}{D(3,10)}\] \[\frac{D(3,2)}{D(3,4)}=\frac{1}{3} 3\leq \frac{D(3,4)}{D(3,9)} 1\leq 
\frac{D(3,7)}{D(3,8)}\] \[\frac{D(3,2)}{D(3,5)}=\frac{2}{3} 3\leq \frac{D(3,4)}{D(3,10)} 1\leq \frac{D(3,7)}{D(3,9)}\] \[\frac{1}{3}\leq \frac{D(3,2)}{D(3,6)}\leq 1 \frac{1}{2}\leq \frac{D(3,5)}{D(3,6)}\leq\frac{3}{2} 1\leq \frac{D(3,7)}{D(3,10)}\] where \(C_{1}\approx 0.9002270330\) is the second largest root of \[6137x^{5}-14689x^{4}+14429x^{3}-9547x^{2}+3698x-100\] and \(C_{2}\approx 1.100851119\) is the largest root of the same polynomial. **Theorem 4**.: _The following bounds involving distances from \(X_{4}\) hold for all triangles._ \[\frac{D(4,1)}{D(4,2)}\leq 1\] \[\frac{D(4,1)}{D(4,3)}\leq\frac{2}{3}\] \[\frac{D(4,1)}{D(4,5)}\leq\frac{4}{3}\] \[1\leq \frac{D(4,1)}{D(4,6)}\] \[1\leq \frac{D(4,1)}{D(4,7)}\leq 2\] \[\frac{D(4,1)}{D(4,8)}\leq 1\] \[\frac{D(4,1)}{D(4,9)}\leq 1\] \[\frac{D(4,1)}{D(4,10)}\leq 1\] \[\frac{D(4,1)}{D(4,10)}\leq 2\] \[\frac{D(4,1)}{D(4,10)}\leq 1\] \[\frac{D(4,2)}{D(4,5)}\leq\frac{D(4,5)}{D(4,7)}\] \[1\leq \frac{D(4,2)}{D(4,6)}\] \[1\leq \frac{D(4,2)}{D(4,7)}\] where \(C_{3}\approx 1.104068697\) is the positive root of \(8x^{4}-36x^{3}+113x^{2}-69x-25\). **Theorem 5**.: _The following bounds involving distances from \(X_{5}\) hold for all triangles._ \[\begin{array}{l}\frac{D(5,1)}{D(5,2)}\leq 3\\ \frac{D(5,1)}{D(5,3)}\leq 1\\ \frac{D(5,1)}{D(5,4)}\leq 1\\ \frac{D(5,1)}{D(5,8)}\leq 1\\ \frac{D(5,1)}{D(5,9)}\leq 1\\ \frac{D(5,1)}{D(5,10)}\leq 1\\ \frac{D(5,1)}{D(5,10)}\leq 1\\ \frac{D(5,2)}{D(5,6)}=1\\ \frac{D(5,2)}{D(5,6)}\leq 3\\ \frac{D(5,7)}{D(5,9)}\leq 3\\ \frac{D(5,7)}{D(5,8)}\leq 1\\ \frac{D(5,7)}{D(5,8)}\leq 1\\ \frac{D(5,7)}{D(5,9)}\leq 1\\ \frac{D(5,7)}{D(5,9)}\leq 1\\ \frac{D(5,7)}{D(5,10)}\leq 1\\ \frac{1}{3}\leq\end{array}\qquad\begin{array}{l}\frac{1}{3}\leq \frac{D(5,2)}{D(5,9)}\leq 1\\ \frac{1}{3}\leq \frac{D(5,2)}{D(5,10)}\leq 1\\ \frac{D(5,3)}{D(5,4)}=1\\ 1\leq \frac{D(5,4)}{D(5,10)}\leq 1\\ \frac{D(5,6)}{D(5,8)}\leq 1\\ 1\leq \frac{D(5,6)}{D(5,9)}\leq 1\\ \frac{D(5,6)}{D(5,9)}\leq C_{4}\\ \frac{D(5,6)}{D(5,10)}\leq C_{5}\\ \frac{D(5,7)}{D(5,8)}\leq 1\\ \frac{D(5,7)}{D(5,9)}\leq 1\\ \frac{D(5,7)}{D(5,9)}\leq 1\\ \frac{D(5,7)}{D(5,10)}\leq 1\\ 1\leq \frac{D(5,7)}{D(5,9)}\leq 1\\ \frac{D(5,7)}{D(5,10)}\leq 1\\ \frac{1}{3}\leq \frac{D(5,4)}{D(5,6)}\leq 3\\ \frac{D(5,2)}{D(5,7)}\leq 1\\ \frac{1}{3}\leq \frac{D(5,4)}{D(5,8)}\leq 3\\ \end{array}\qquad\begin{array}{l}\frac{1}{3}\leq \frac{D(5,2)}{D(5,9)}\leq 1\\ \frac{1}{3}\leq \frac{D(5,2)}{D(5,10)}\leq 1\\ 1\leq \frac{D(5,4)}{D(5,8)}\leq 3\\ 1\leq \frac{D(5,9)}{D(5,10)}\leq 7-4\sqrt{2}\end{array}\] where \(C_{4}\approx 1.053322135\) is the positive root of \[6137x^{5}+5335x^{4}+678x^{3}-3702x^{2}-9479x-1225\] and \(C_{5}\approx 1.194505073\) is the positive root of \[x^{4}+2x^{3}+22x^{2}-30x-1.\] **Theorem 6**.: _The following bounds involving distances from \(X_{6}\) hold for all triangles._ \[\begin{split}\frac{D(6,1)}{D(6,2)}\leq 9-6\sqrt{2}& \quad\frac{1}{3}\leq&\frac{D(6,2)}{D(6,8)}\leq\frac{5+4 \sqrt{2}}{21}\\ \frac{D(6,1)}{D(6,3)}\leq 2-\sqrt{3}&\quad\frac{1}{2}\leq& \frac{D(6,2)}{D(6,9)}\leq\frac{3}{4}\\ \frac{1}{2}\leq&\frac{D(6,1)}{D(6,7)}& \quad\frac{2}{3}\leq&\frac{D(6,2)}{D(6,10)}\leq\frac{1+ \sqrt{2}}{3}\\ \frac{D(6,1)}{D(6,8)}\leq\frac{2\sqrt{2}-1}{7}& \quad\frac{1}{2}\leq&\frac{D(6,3)}{D(6,4)}\\ \frac{D(6,1)}{D(6,9)}\leq\frac{1}{3}& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad 
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad **Theorem 7**.: _The following bounds involving distances from \(X_{7}\) hold for all triangles._ \[\begin{array}{l}\frac{D(7,1)}{D(7,2)}\leq\frac{3}{4}\\ \frac{D(7,1)}{D(7,3)}\leq\frac{2}{17}\left(5-2\sqrt{2}\right)\\ \frac{D(7,1)}{D(7,4)}\leq 1\\ \frac{D(7,1)}{D(7,8)}\leq\frac{1}{2}\\ \frac{D(7,1)}{D(7,9)}\leq\frac{1}{2}\\ \frac{D(7,1)}{D(7,9)}\leq\frac{1}{2}\\ \frac{D(7,1)}{D(7,10)}\leq\frac{2}{3}\\ \frac{D(7,2)}{D(7,3)}\leq\frac{2}{3}\\ 2\leq\frac{D(7,3)}{D(7,8)}\\ 2\leq\frac{D(7,2)}{D(7,6)}\\ \end{array}\qquad\begin{array}{l}\frac{1}{3}\leq\frac{D(7,2)}{D(7,8)}\leq \frac{2}{3}\\ \frac{D(7,2)}{D(7,9)}=\frac{2}{3}\\ \frac{2}{3}\leq\frac{D(7,2)}{D(7,10)}\leq\frac{8}{9}\\ \frac{1}{2}\leq\frac{D(7,3)}{D(7,4)}\\ 2\leq\frac{D(7,3)}{D(7,5)}\\ \frac{D(7,6)}{D(7,10)}\leq\frac{4}{9}\\ 1\leq\frac{D(7,8)}{D(7,9)}\leq 2\\ \frac{4}{3}\leq\frac{D(7,8)}{D(7,10)}\leq 2\\ 1\leq\frac{D(7,9)}{D(7,10)}\leq\frac{4}{3}\\ \end{array}\] where \(C_{7}\approx 7.9776615835\) is the largest root of \[\begin{array}{l}833089536x^{28}+220028016384x^{26}-19474287964848x^{24}+139 707882692901x^{22}\\ \quad-410390834384412x^{20}+732430210466916x^{18}-892396597211316x^{16}\\ \quad+782711166381062x^{14}-492062343977916x^{12}+216425700787620x^{10}\\ \quad-65960002546284x^{8}+14226627485565x^{6}-2259294716376x^{4}+253570773456x ^{2}\\ \quad-14637417984.\end{array}\] **Theorem 8**.: _The following bounds involving distances from \(X_{8}\) hold for all triangles._ \[\frac{D(8,1)}{D(8,2)} =\frac{3}{2} \frac{4}{3}\leq \frac{D(8,4)}{D(8,5)}\leq 4\] \[\frac{D(8,1)}{D(8,4)} \leq 1 1\leq \frac{D(8,4)}{D(8,6)}\] \[\frac{D(8,1)}{D(8,5)} \leq\frac{4}{3} 1\leq \frac{D(8,4)}{D(8,7)}\] \[\frac{2}{7}\left(4-\sqrt{2}\right)\leq \frac{D(8,1)}{D(8,6)}\leq 1 2\leq \frac{D(8,4)}{D(8,9)}\] \[\frac{1}{2}\leq \frac{D(8,1)}{D(8,7)}\leq 1 2\leq \frac{D(8,4)}{D(8,10)}\] \[2\leq \frac{D(8,1)}{D(8,9)} C_{8}\leq \frac{D(8,5)}{D(8,6)}\] \[\frac{D(8,1)}{D(8,10)}=2 \frac{1}{8}\left(3+2\sqrt{2}\right)\leq \frac{D(8,5)}{D(8,7)}\] \[\frac{D(8,2)}{D(8,4)} \leq\frac{2}{3} \frac{3}{2}\leq \frac{D(8,5)}{D(8,9)}\] \[\frac{D(8,2)}{D(8,5)} \leq\frac{8}{9} \frac{3}{2}\leq \frac{D(8,5)}{D(8,10)}\] \[\frac{4}{21}\left(4-\sqrt{2}\right)\leq \frac{D(8,2)}{D(8,6)}\leq\frac{2}{3} \frac{2}{3} \frac{2}{3}\leq \frac{D(8,6)}{D(8,7)}\leq\frac{7}{6}\] \[\frac{1}{3}\leq 
\frac{D(8,2)}{D(8,7)}\leq\frac{2}{3} 2\leq \frac{D(8,6)}{D(8,9)}\] \[\frac{4}{3}\leq \frac{D(8,2)}{D(8,9)} 2\leq \frac{D(8,6)}{D(8,10)}\leq 2+\frac{1}{\sqrt{2}}\] \[\frac{D(8,2)}{D(8,10)}=\frac{4}{3} 2\leq \frac{D(8,7)}{D(8,9)}\] \[\frac{D(8,3)}{D(8,4)}\leq\frac{1}{2} 2\leq \frac{D(8,7)}{D(8,10)}\leq 4\] \[\frac{D(8,3)}{D(8,5)}\leq 2 \frac{D(8,9)}{D(8,10)}\leq 1\] where \(C_{8}\approx 0.6817039304\) is the smallest positive root of \[896x^{4}-2184x^{3}+1924x^{2}-758x+121.\] **Theorem 9**.: _The following bounds involving distances from \(X_{9}\) hold for all triangles._ \[\frac{3}{2}\leq \frac{D(9,1)}{D(9,2)}\leq 3 1\leq \frac{D(9,3)}{D(9,10)}\] \[\frac{D(9,1)}{D(9,4)}\leq 1 2\leq \frac{D(9,4)}{D(9,5)}\leq 4\] \[\frac{D(9,1)}{D(9,5)}\leq 2 1\leq \frac{D(9,4)}{D(9,6)}\] \[\frac{2}{3}\leq \frac{D(9,1)}{D(9,6)}\leq 1 1\leq \frac{D(9,4)}{D(9,7)}\] \[\frac{1}{2}\leq \frac{D(9,1)}{D(9,7)}\leq 1 1\leq \frac{D(9,4)}{D(9,8)}\] \[1\leq \frac{D(9,1)}{D(9,8)} 10\leq \frac{D(9,4)}{D(9,10)}\] \[2\leq \frac{D(9,1)}{D(9,10)} C_{9}\leq \frac{D(9,5)}{D(9,6)}\] \[\frac{D(9,2)}{D(9,4)}\leq\frac{1}{3} \frac{1}{2}\leq \frac{D(9,5)}{D(9,7)}\] \[\frac{D(9,2)}{D(9,5)}\leq\frac{2}{3} \frac{1}{2}\leq \frac{D(9,5)}{D(9,8)}\] \[\frac{1}{4}\leq \frac{D(9,2)}{D(9,6)}\leq\frac{1}{2} \frac{5}{2}+\sqrt{2}\leq \frac{D(9,5)}{D(9,10)}\] \[\frac{D(9,2)}{D(9,7)}=\frac{1}{3} \frac{2}{3}\leq \frac{D(9,6)}{D(9,7)}\leq\frac{4}{3}\] \[\frac{1}{3}\leq \frac{D(9,2)}{D(9,8)} 1\leq \frac{D(9,6)}{D(9,8)}\] \[\frac{4}{3}\leq \frac{D(9,2)}{D(9,10)} \frac{8}{3}\leq \frac{D(9,6)}{D(9,10)}\] \[\frac{D(9,3)}{D(9,4)}\leq\frac{1}{2} 1\leq \frac{D(9,7)}{D(9,8)}\] \[\frac{D(9,3)}{D(9,5)}\leq 2 4\leq \frac{D(9,7)}{D(9,10)}\] where \(C_{9}\approx 0.4870156430\) is the smallest positive root of \[3072x^{5}+9304x^{4}-35096x^{3}+40708x^{2}-25350x+6137.\] **Theorem 10**.: _The following bounds involving distances from \(X_{10}\) hold for all triangles._ \[\frac{D(10,1)}{D(10,2)}=3 2\leq \frac{D(10,4)}{D(10,5)}\leq 4\] \[\frac{D(10,1)}{D(10,4)}\leq 1 1\leq \frac{D(10,4)}{D(10,6)}\] \[\frac{D(10,1)}{D(10,5)}\leq 2 1\leq \frac{D(10,4)}{D(10,7)}\] \[2-\sqrt{2}\leq \frac{D(10,1)}{D(10,6)}\leq 1 1\leq \frac{D(10,4)}{D(10,8)}\] \[\frac{1}{3}\leq \frac{D(10,1)}{D(10,7)}\leq 1 9\leq \frac{D(10,4)}{D(10,9)}\] \[\frac{D(10,1)}{D(10,8)}=1 C_{10}\leq \frac{D(10,5)}{D(10,6)}\] \[1\leq \frac{D(10,1)}{D(10,9)} \frac{1}{2}\leq \frac{D(10,5)}{D(10,7)}\] \[\frac{D(10,2)}{D(10,4)}\leq\frac{1}{3} \frac{1}{2}\leq \frac{D(10,5)}{D(10,8)}\] \[\frac{D(10,2)}{D(10,5)}\leq\frac{2}{3} \frac{3}{2}+\sqrt{2}\leq \frac{D(10,5)}{D(10,9)}\] \[\frac{1}{3}\left(2-\sqrt{2}\right)\leq \frac{D(10,2)}{D(10,6)}\leq\frac{1}{3} \frac{5}{9}\leq \frac{D(10,6)}{D(10,7)}\leq\frac{4}{3}\] \[\frac{1}{9}\leq \frac{D(10,2)}{D(10,7)}\leq\frac{1}{3} 1\leq \frac{D(10,6)}{D(10,8)}\leq 1+\frac{1}{\sqrt{2}}\] \[\frac{D(10,2)}{D(10,8)}=\frac{1}{3} \frac{5}{3}\leq \frac{D(10,6)}{D(10,9)}\] \[\frac{1}{3}\leq \frac{D(10,2)}{D(10,9)} 1\leq \frac{D(10,7)}{D(10,8)}\leq 3\] \[\frac{D(10,3)}{D(10,4)}\leq\frac{1}{2} 3\leq \frac{D(10,7)}{D(10,9)}\] \[\frac{D(10,3)}{D(10,5)}\leq 2 1\leq \frac{D(10,8)}{D(10,9)}\] \[2\leq \frac{D(10,3)}{D(10,9)}\] where \(C_{10}\approx 0.4870156430\) is the smallest positive root of \[50x^{4}-72x^{3}+22x^{2}-2x+1.\]
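The algebraic constants quoted in these theorems can also be reproduced numerically; for instance, the short sketch below (ours) recovers \(C_{8}\) from Theorem 8 as the smallest positive real root of its quartic, and the same few lines apply to the other polynomials by swapping the coefficient list.

```python
import numpy as np

# coefficients of 896x^4 - 2184x^3 + 1924x^2 - 758x + 121, highest degree first
coeffs = [896, -2184, 1924, -758, 121]
roots = np.roots(coeffs)
real_pos = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
print(real_pos[0])   # ~0.6817..., matching C_8 in Theorem 8
```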
2309.11124
Receding-Constraint Model Predictive Control using a Learned Approximate Control-Invariant Set
In recent years, advanced model-based and data-driven control methods are unlocking the potential of complex robotics systems, and we can expect this trend to continue at an exponential rate in the near future. However, ensuring safety with these advanced control methods remains a challenge. A well-known tool to make controllers (either Model Predictive Controllers or Reinforcement Learning policies) safe, is the so-called control-invariant set (a.k.a. safe set). Unfortunately, for nonlinear systems, such a set cannot be exactly computed in general. Numerical algorithms exist for computing approximate control-invariant sets, but classic theoretic control methods break down if the set is not exact. This paper presents our recent efforts to address this issue. We present a novel Model Predictive Control scheme that can guarantee recursive feasibility and/or safety under weaker assumptions than classic methods. In particular, recursive feasibility is guaranteed by making the safe-set constraint move backward over the horizon, and assuming that such set satisfies a condition that is weaker than control invariance. Safety is instead guaranteed under an even weaker assumption on the safe set, triggering a safe task-abortion strategy whenever a risk of constraint violation is detected. We evaluated our approach on a simulated robot manipulator, empirically demonstrating that it leads to less constraint violations than state-of-the-art approaches, while retaining reasonable performance in terms of tracking cost, number of completed tasks, and computation time.
Gianni Lunardi, Asia La Rocca, Matteo Saveriano, Andrea Del Prete
2023-09-20T08:13:28Z
http://arxiv.org/abs/2309.11124v2
# Receding-Constraint Model Predictive Control using a ###### Abstract In recent years, advanced model-based and data-driven control methods are unlocking the potential of complex robotics systems, and we can expect this trend to continue at an exponential rate in the near future. However, ensuring safety with these advanced control methods remains a challenge. A well-known tool to make controllers (either Model Predictive Controllers or Reinforcement Learning policies) safe, is the so-called _control-invariant set_ (a.k.a. safe set). Unfortunately, for nonlinear systems, such a set cannot be exactly computed in general. Numerical algorithms exist for computing approximate control-invariant sets, but classic theoretic control methods break down if the set is not exact. This paper presents our recent efforts to address this issue. We present a novel Model Predictive Control scheme that can guarantee recursive feasibility and/or safety under weaker assumptions than classic methods. In particular, recursive feasibility is guaranteed by making the safe-set constraint move backward over the horizon, and assuming that such set satisfies a condition that is weaker than control invariance. Safety is instead guaranteed under an even weaker assumption on the safe set, triggering a safe task-abortion strategy whenever a risk of constraint violation is detected. We evaluated our approach on a simulated robot manipulator, empirically demonstrating that it leads to less constraint violations than state-of-the-art approaches, while retaining reasonable performance in terms of tracking cost and number of completed tasks. ## I Introduction Ensuring safety is crucial in all robotics applications. However, this is more and more difficult with the recently increasing complexity of control methods and robotic platforms. Indeed, recent data-driven approaches, often relying on Reinforcement Learning (RL) algorithms, typically produce black-box policies that are inherently hard to certify as safe. Moreover, even model-based control methods for constrained nonlinear systems in practice struggle to guarantee safety, which consists in recursive constraint satisfaction (a.k.a. _recursive feasibility_). This is because the classic approach to guaranteeing safety, both for Model Predictive Control (MPC) and for Quadratic-Programming-based control methods, relies on the assumption of knowing a so-called _safe set_ (a.k.a. control-invariant set) [1, 2], or a Control Barrier Function (CBF) [3, 4]. However, exactly computing safe sets (or CBFs) for nonlinear systems is not feasible in general. Therefore, practitioners must rely on numerical methods to compute approximate versions of such sets (or functions) [5, 6, 7, 8, 9, 10, 11, 12]. Unfortunately, safety guarantees are lost if the used safe set is not exact. In this paper, we present a novel MPC scheme that ensures: i) safety, assuming the safe set is a _conservative_ approximation of a specific backward reachable set; ii) recursive feasibility, assuming the safe set is N-step control invariant, which is a weaker assumption than classic control invariance. We compared our approach with classic MPC schemes: the standard formulation (without terminal constraints but a longer horizon), and a formulation using the safe set to constrain the terminal state. Our method could successfully avoid constraint violation in more tests than the others, being able to trade off performance and safety depending on the conservativeness of the used safe set. 
## II Preliminaries ### _Notation_ * \(\mathbb{N}\) denotes the set of natural numbers; * \(\{x_{i}\}_{0}^{N}\) denotes a discrete-time trajectory given by the sequence \((x_{0},\ldots,x_{N})\); * \(x_{i|k}\) denotes the state at time step \(k+i\) predicted when solving the MPC problem at time step \(k\); ### _Problem statement_ Let us consider a discrete-time dynamical system with state and control constraints: \[x_{i+1}=f(x_{i},u_{i}),\qquad x\in\mathcal{X},\qquad u\in\mathcal{U}. \tag{1}\] Our goal is to design a control algorithm to ensure _safety_ (i.e., constraint satisfaction), while preserving performance (i.e., cost minimization) as much as possible. Let us define \(\mathcal{S}\) as the set containing all the equilibrium states of our system: \[\mathcal{S}=\{x\in\mathcal{X}\mid\exists\,u\in\mathcal{U}:x=f(x,u)\}. \tag{2}\] To achieve our goal, we rely on the _Infinite-Time Backward-Reachable Set_[1] of \(\mathcal{S}\), which we denote as \(\mathcal{V}\). Mathematically, it is defined as the subset of \(\mathcal{X}\) starting from which it is possible to reach \(\mathcal{S}\) in finite time: \[\begin{split}\mathcal{V}\triangleq\{x_{0}\in\mathcal{X}\,|\, \exists\{u_{i}\}_{0}^{k},k\in\mathbb{N}:& x_{k+1}\in\mathcal{S},x_{i}\in \mathcal{X},\\ & u_{i}\in\mathcal{U},\forall\,i=0,\ldots,k\}.\end{split} \tag{3}\] As all backward reachable sets of equilibrium states, the set \(\mathcal{V}\) is a control-invariant set [1]. This means that, starting from inside \(\mathcal{V}\), it is possible to remain inside \(\mathcal{V}\) indefinitely. If we knew \(\mathcal{V}\) we could use it to construct a safe controller. However, we cannot reasonably assume to know it in general, but we rely instead on a more realistic assumption. **Assumption 1**.: _We know a conservative approximation of the set \(\mathcal{V}\):_ \[\hat{\mathcal{V}}\subseteq\mathcal{V} \tag{4}\] _Note that \(\hat{\mathcal{V}}\) is not control invariant in general._ **Assumption 2**.: _We know an upper bound on the number of time steps needed to safely drive the system to an equilibrium from a state in \(\hat{\mathcal{V}}\), which we refer to as \(\bar{N}\)._ As discussed in Section I, numerical methods exists to compute approximations of \(\mathcal{V}\). Among the others, the method in [12] can be made conservative by an appropriate choice of a safety margin and it also produces an estimate of \(\bar{N}\), satisfying Assumption 2. Therefore, we used [12] in our evaluation. Now we discuss different approaches to exploit \(\hat{\mathcal{V}}\) in an MPC formulation to try to achieve safety. ### _Model Predictive Control and Recursive Feasibility_ Let us consider the following MPC problem: \[\operatorname*{minimize}_{\{x_{i}\}_{0}^{N},\{u_{i}\}_{0}^{N-1}} \sum_{i=0}^{N-1}\ell_{i}(x_{i},u_{i})+\ell_{N}(x_{N}) \tag{5a}\] \[\operatorname*{subject\,to} x_{0}=x_{init}\] (5b) \[x_{i+1}=f(x_{i},u_{i}) i=0\ldots N-1\] (5c) \[x_{i}\in\mathcal{X},u_{i}\in\mathcal{U} i=0\ldots N-1\] (5d) \[x_{N}\in\mathcal{X}_{N}, \tag{5e}\] where \(\ell(\cdot)/\ell_{N}(\cdot)\) is the running/terminal cost, \(x_{init}\) is the current state, and \(\mathcal{X}_{N}\subseteq\mathcal{X}\) is the terminal set [13]. Even though MPC is one of the most suited frameworks for controlling constrained systems, ensuring safety (i.e., constraint satisfaction) remains challenging when the dynamics or the constraints are nonlinear. 
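For readers who want to reproduce this baseline, the snippet below is a minimal sketch of problem (5) as a direct multiple-shooting program written with CasADi's Opti interface (CasADi is also the modeling tool used by the authors, see Sec. IV, although they solve their OCPs with Acados; Ipopt is called here only to keep the sketch self-contained). The double-integrator dynamics, box bounds, horizon, and quadratic costs are placeholder choices of ours, and the terminal-set constraint (5e) is only indicated in a comment.

```python
import casadi as ca

N, nx, nu, dt = 20, 2, 1, 0.05
# placeholder discrete-time dynamics x_{i+1} = f(x_i, u_i): a double integrator
f = lambda x, u: ca.vertcat(x[0] + dt * x[1], x[1] + dt * u[0])

opti = ca.Opti()
X = opti.variable(nx, N + 1)            # state trajectory {x_i}_0^N
U = opti.variable(nu, N)                # control trajectory {u_i}_0^{N-1}
x_init = opti.parameter(nx)             # current state, updated at every MPC loop

opti.subject_to(X[:, 0] == x_init)      # initial condition (5b)
cost = 0
for i in range(N):
    opti.subject_to(X[:, i + 1] == f(X[:, i], U[:, i]))   # dynamics (5c)
    opti.subject_to(opti.bounded(-1.0, X[:, i], 1.0))     # x_i in X   (5d)
    opti.subject_to(opti.bounded(-2.0, U[:, i], 2.0))     # u_i in U   (5d)
    cost += ca.sumsqr(X[:, i]) + 1e-4 * ca.sumsqr(U[:, i])
cost += ca.sumsqr(X[:, N])              # terminal cost l_N
# (5e) would additionally constrain X[:, N] inside a terminal set X_N.

opti.minimize(cost)
opti.solver("ipopt")
opti.set_value(x_init, [0.8, 0.0])
sol = opti.solve()
print(sol.value(U[:, 0]))               # first input of the receding-horizon plan
```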
The most common approach to ensuring safety is based on _recursive feasibility_ (RF), which guarantees that, under the assumption of no disturbances/modeling errors, if an MPC problem is feasible at the first loop, it remains feasible forever. RF is guaranteed if the MPC horizon \(N\) is _sufficiently_ long (see Section 8.2 of [2]). However, in general we cannot know how long \(N\) should be. Moreover, even if \(N\) were known, it may be too long to result in acceptable computation times. Alternatively, RF can be guaranteed by using the terminal set \(\mathcal{X}_{N}\) to constrain the final state inside a _control-invariant_ set (see Section II-D). While theoretically elegant, the practical issue with this approach is that control-invariant sets are extremely challenging (if not impossible) to compute for nonlinear systems/constraints. A special case of this approach is when an _equilibrium_ state (or a set of equilibria) is used as terminal set. This solves the issue of computing control-invariant sets, but at the price of (potentially drastically) reducing the _basin of attraction_ of the MPC. Other approaches to RF exist that rely on the optimality properties of the solution and the stability of the closed loop (e.g., Section 8.3 of [2]). However, these approaches require controllability and other conditions on running and terminal costs. Therefore, they are not applicable to arbitrary cost formulations as the methods discussed in this paper. ### _Terminal Constraint_ As discussed above, a common way to ensure recursive feasibility in MPC is to constrain the final state inside a control-invariant set, such as \(\mathcal{V}\). Unfortunately, we do not know \(\mathcal{V}\), but only \(\hat{\mathcal{V}}\), which is not control invariant in general. Therefore, using \(\hat{\mathcal{V}}\) as terminal set in our MPC does not ensure RF. This means that our MPC problem could become unfeasible, and at that point classic MPC theory does not tell us what to do. A common strategy to deal with unfeasibility is to relax the terminal constraint with a slack variable, which is heavily penalized in the cost function [14, 15]. In this way, when the terminal constraint cannot be satisfied, we can still get a solution that allows us to keep controlling the system, in the hope that eventually the terminal constraint be satisfied again. However, this approach does not ensure _safety_, nor RF, because the soft constraint allows the state to leave \(\hat{\mathcal{V}}\), which eventually can lead to constraint violations. ## III Safe Model Predictive Control This section describes our novel MPC scheme, which relies on two components: a safe task-abortion strategy (Section III-A, and a receding-constraint MPC formulation (Section III-B), which can be used together (Section III-C). ### _Safe Task Abortion_ Our key idea to ensure safety relies on Assumption 1 and 2, and on the following two assumptions. **Assumption 3**.: _We have access to two computational units, which we refer to as unit A and unit B._ **Assumption 4**.: _We can solve the following OCP for any \(x_{init}\in\hat{\mathcal{V}}\), in at most \(N-1\) time steps:_ \[\operatorname*{minimize}_{\{x_{i}\}_{0}^{\bar{N}},\{u_{i}\}_{0}^{ N-1}} \sum_{i=0}^{N-1}\ell_{i}(x_{i},u_{i})+\ell_{\bar{N}}(x_{\bar{N}})\] (6) \[\operatorname*{subject\,to} \eqref{eq:constraint soon as one MPC problem becomes unfeasible. While we follow the last feasible solution, we can keep trying to solve OCP (5). This strategy is summarized in Alg. 
1 and it can guarantee safety, as stated in the following Lemma. ``` 0: Number of time steps \(T\), Initial state \(x_{0}\), Initial guess \(\{x_{i}^{g}\}_{0}^{N},\{u_{i}^{g}\}_{0}^{N-1}\), OCP (5), Safe-abort OCP (6) 1:\(finals\gets 0\)\(\triangleright\) Counter for failed OCP's 2:for\(t=0\to T-1\)do 3:\(\{x_{i}^{g}\}_{0}^{N},\{u_{i}^{*}\}_{0}^{N-1},feas\leftarrow\text{OCP}(x_{t}, \{x_{i}^{g}\}_{0}^{N},\{u_{i}^{g}\}_{0}^{N-1})\) 4:if\(feas\) = True then\(\triangleright\) If OCP's solution is feasible 5:\(finals\gets 0\)\(\triangleright\) Reset counter 6:else 7:if\(finals=0\)then\(\triangleright\) Start solving (6) in Unit B 8: SolveSafeabortOCP\((x_{N-1}^{g})\) 9:if\(finals=N-1\)then\(\triangleright\) Abort task 10:returnFollowSafeabortTrajectory() 11:\(finals\gets fails+1\)\(\triangleright\) Increment counter 12:\(\{x_{i}^{*}\}_{0}^{N},\{u_{i}^{*}\}_{0}^{N-1}\leftarrow\{x_{i}^{g}\}_{0}^{N}, \{u_{i}^{g}\}_{0}^{N-1}\)\(\triangleright\) Copy last feasible solution 13:\(x_{t+1}\gets f(x_{t},u_{0}^{*})\)\(\triangleright\) Simulate system 14:\(\{x_{i}^{*}\}_{0}^{N-1},\{u_{i}^{g}\}_{0}^{N-2}\leftarrow\{x_{i}^{*}\}_{1}^{N}, \{u_{i}^{*}\}_{1}^{N-1}\) 15:\(x_{N}^{g},u_{N-1}^{g}\gets x_{N-1}^{g},u_{N-2}^{g}\) ``` **Algorithm 1** Terminal-Constraint MPC with Safe Abortion **Lemma 1**.: _Under Assumptions 1 to 4, the hard terminal-constraint MPC with safe task abortion described in Alg. 1 guarantees that constraints are never violated._ Proof.: This proof is straightforward. OCP (6) is always feasible because, by Assumption 1 and 2, from any state in \(\hat{\mathcal{V}}\) we can reach an equilibrium in at most \(\bar{N}\) time steps. Assumption 4 ensures that, by dedicating a computational unit to solving OCP (6), we get a solution before reaching the terminal state of the last feasible MPC problem, \(x_{N|k-1}\). After reaching \(x_{N|k-1}\), we follow the solution of OCP (6) to reach an equilibrium state, in which we can stay forever without violating the constraints. Our most critical assumption is probably Assumption 4, which relies on the MPC horizon \(N\) to be sufficiently long, and on \(\bar{N}\) not to be too large, to allow for enough computation time to solve the OCP. This may be challenging because we can expect \(\bar{N}\) to be rather large, since it must be sufficient to allow the system to reach an equilibrium from any state in \(\hat{\mathcal{V}}\). At the same time, \(N\) cannot be set too large because it is proportional to the computation time of the MPC problem. However, learning-based warm-start techniques could be used to speed-up computation [16, 17]. #### Iii-B1 Safe-Abort for Robot Manipulators During our tests, we have noticed that the safe-abort OCP (6) was hard to solve for our numerical solver. Therefore, we suggest an alternative formulation, which is equivalent to (6) for the case of robot manipulators, but leads to less numerical issues with the solver. 
Given \(x_{init}=(q_{init},\dot{q}_{init})\in\hat{\mathcal{V}}\), where \(q\) are the joint angles and \(\dot{q}\) are the joint velocities, OCP (6) can be substituted by: \[\begin{array}{rl}\underset{\{x_{i}\}_{0}^{\bar{\mathcal{V}}},\{u_{i}\}_{0}^ {\bar{\mathcal{V}}-1}}{\text{maximize}}&d^{\top}\dot{q}_{0}\\ \operatorname{subject\,to}&q_{0}=q_{init}\\ &(I-dd^{\top})\dot{q}_{0}=0\\ &d^{\top}\dot{q}_{0}\leq||\dot{q}_{init}||\\ &\eqref{eq:constraint_constraint_MPC},\eqref{eq:constraint_MPC},x_{\bar{N}}=x_{\bar{N}-1}, \end{array} \tag{7}\] where \(d=\frac{\dot{q}_{init}}{||\dot{q}_{init}||}\) is the initial velocity direction. OCP (7) is inspired by the VBOC method [12]. Rather than fixing the initial state as in (6), we fix only the joint angles and the direction of the joint velocity vector, while maximizing the joint velocity norm. In this way, the problem is feasible for any \(x_{init}\in\mathcal{X}\). In practice, our solver was always able to solve this formulation, even for cases where \(x_{init}\notin\hat{\mathcal{V}}\), making the Safe Task Abortion more reliable. ### _Receding-Constraint MPC_ Instead of relying exclusively on the final state to ensure safety, we could exploit the fact that, as long as at least one state \(x_{r}\in\hat{\mathcal{V}}\) (with \(1\leq r\leq N\)), we know that \(x_{1}\in\mathcal{V}\) because from \(x_{1}\) we can reach \(x_{r}\). This suggests that a less conservative constraint to include in our OCP would be: \[(x_{1}\in\hat{\mathcal{V}})\,\vee\,(x_{2}\in\hat{\mathcal{V}})\,\vee\,\dots\, \vee\,(x_{N}\in\hat{\mathcal{V}}) \tag{8}\] Unfortunately, OR constraints are extremely challenging for numerical solvers. Even if this constraint cannot be used, we can find other ways to exploit this insight. We suggest to adapt online the time step at which we constrain the state in \(\hat{\mathcal{V}}\). For instance, if at the MPC loop \(k-1\) we had \(x_{r|k-1}\in\hat{\mathcal{V}}\), at the loop \(k\) we know that it is possible to have \(x_{r-1|k}\in\hat{\mathcal{V}}\) (assuming no disturbances and modeling errors), therefore we can impose this constraint in a hard way. This is sufficient to ensure safety for \(r\) loops, during which this _receding constraint_ would slide backward along the horizon. However, once the receding constraint reaches time Fig. 1: Example of Receding-Constraint MPC with \(N=4\). After the MPC loop 3, the receding constraint slides forward because \(x_{4|3}\in\hat{\mathcal{V}}\). step 0, we can no longer rely on it to ensure safety. Therefore, we suggest to maintain also a soft constraint to encourage the terminal state to be in \(\hat{\mathcal{V}}\). This MPC formulation can be stated as: \[\operatorname*{\mathrm{minimize}}_{\{x_{i}\}_{0}^{N},\{u_{i}\}_{0}^{N -1},s} \sum_{i=0}^{N-1}\ell_{i}(x_{i},u_{i})+\ell_{N}(x_{N})+w_{s}||s||^{2}\] (9) \[\operatorname*{\mathrm{subject\,to}} \eqref{eq:constraint to ensure that the pre-computed _safe-abort trajectory_ starts exactly at the state of the system when the task abortion is initiated. To achieve this, we must modify the receding constraint from \(x_{j|k}\in\hat{\mathcal{V}}\) to the more conservative \(x_{j|k}=x_{j+1|k-1}\). In other words, we constrain the predicted state in \(\mathcal{V}\) not to change across the MPC loops. This is bound to deteriorate performance, but it should still outperform the standard Terminal-Constraint MPC. 
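As a complement to formulation (9), the helper below sketches the bookkeeping of the receding constraint described above (this is our own rendering of the idea behind Alg. 2, with invented names; the membership test for \(\hat{\mathcal{V}}\) is assumed to be available as a callable, e.g. a learned bound as in [12]). After each solve, the hard-constraint index either slides forward to just before the latest predicted state found inside \(\hat{\mathcal{V}}\), or simply recedes by one step.

```python
def update_receding_index(x_pred, r_prev, in_safe_set):
    """x_pred[i] is the predicted state x_{i|k-1} from the last MPC solution;
    returns the index r at which x_r is hard-constrained in V_hat at loop k."""
    N = len(x_pred) - 1
    for i in range(N, 0, -1):            # prefer the latest safe predicted state
        if in_safe_set(x_pred[i]):
            return i - 1                 # x_{i|k-1} in V_hat -> impose x_{i-1|k} in V_hat
    return max(r_prev - 1, 0)            # otherwise the constraint recedes by one

# Once the returned index hits 0, the receding constraint alone no longer
# guarantees safety, which is why (9) keeps the soft terminal constraint and
# why the abort strategy of Sec. III-A remains the safety net.

# toy check: with N = 4 and only x_{4|k-1} inside V_hat, the hard constraint
# for the next loop is placed at step 3 (the constraint "slides forward")
print(update_receding_index([0, 1, 2, 3, 4], r_prev=1, in_safe_set=lambda x: x == 4))
```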
## IV Results This section presents our results1 comparing five MPC formulations: Footnote 1: Our code is available at [https://github.com/idra-lab/safe-mpc](https://github.com/idra-lab/safe-mpc). * _Naive_: a classic formulation without terminal constraint, i.e., problem (5) with \(\mathcal{X}_{N}=\mathcal{X}\). * _Soft Terminal_ (ST): it uses a soft terminal constraint set \(\mathcal{X}_{N}=\hat{\mathcal{V}}\) with a penalty weight of \(10^{8}\). * _Soft Terminal With Abort_ (STWA): as the previous one, but it triggers the safe abort whenever \(x_{N|k}\notin\hat{\mathcal{V}}\). * _Hard Terminal With Abort_ (HTWA): it uses a hard terminal constraint set \(\mathcal{X}_{N}=\hat{\mathcal{V}}\), and it triggers the safe abort whenever the OCP is unfeasible (as in Alg. 1). * _Receding_: the novel formulation (9) described by Alg. 2, using soft constraints for both \(x_{r}\in\hat{\mathcal{V}}\) (penalty weight of \(10^{8}\)) and \(x_{N}\in\hat{\mathcal{V}}\) (\(w_{s}=10^{5}\)). For the simulations, we have considered a planar triple pendulum, thus \(n_{x}=6,\,n_{u}=3\). We have used CasADi [18] for the symbolic computation of the dynamics, costs and constraints, and Acados [19] to solve the OCPs and integrate the dynamics. The OCP is a tracking problem with respect to a static state, purposely chosen near the joint limits, to test the safety of the controllers: \[x^{\text{ref}}=(q^{\text{max}}-0.05,\bar{q},\bar{q},0,0,0), \tag{11}\] with \(\bar{q}=(q^{\text{max}}+q^{\text{min}})/2\). We have used as running cost a least-squares function, penalizing deviations from \(x^{\text{ref}}\) and control efforts: \[\begin{split} l(x,u)&=||x-x^{\text{ref}}||_{Q}^{2}+|| u||_{R}^{2}\\ Q&=\text{diag}([500,10^{-4}I_{5}]),\quad R=10^{-4 }I_{3},\end{split} \tag{12}\] where \(I_{k}\) is the identity matrix with size \(k\). Set membership to \(\hat{\mathcal{V}}\) is verified with the constraint: \[(1-\alpha)\phi(x)-||\hat{q}||\geq 0, \tag{13}\] where \(\phi(\cdot)\) is a Neural Network (NN) computing an upper bound on the joint velocity norm [12], and \(\alpha\in[0,1]\) is a safety margin that we introduced to ensure that \(\hat{\mathcal{V}}\subseteq\mathcal{V}\). We have run 100 simulations for each MPC formulation, starting from the same 100 random joint positions \(q_{0}\) with \(\dot{q}_{0}=0\). The time step of the MPCs was \(dt=5\,\mathrm{ms}\). The horizon of _Naive_ has been fixed to \(N=36\), so that each MPC iteration takes less than \(4\,\mathrm{ms}\) (leaving \(1\,\mathrm{ms}\) for further operations, to mimic the timing limitations of a real-time application). We used instead shorter horizons for the other approaches (\(N=35\) for the three terminal-constrained MPC's, and \(N=34\) for _Receding_), since their MPC iterations take more time due to the additional constraints. Table I reports the number of tasks completed, safely aborted, or failed by each controller, using a safety margin \(\alpha=2\%\). For the safe abort, we have tested both formulation (6) and (7). In terms of safety, _Naive_ violated the constraints the most, while _Receding_ violated them the least (when using (7) for safe abort). In terms of performance, ST completed more tasks than the others, but at the price of a higher number of failed tasks than _Receding_. STWA and HTWA performed strictly worse than ST, completing less tasks and failing more times. 
The lower number of completed tasks is explained by the trigger of the safe abort, while the relatively high number of failures could be explained by the small safety margin \(\alpha\), which is not enough to ensure \(\hat{\mathcal{V}}\subseteq\mathcal{V}\). Table II reports a similar comparison, but with a higher safety margin \(\alpha=10\%\). The number of completed tasks is slightly smaller for all approaches using \(\hat{\mathcal{V}}\), but the number of failures is remarkably smaller for STWA, HTWA, and especially for _Receding_, which failed only 2 times. The number of failures remained large when using (6) for the safe abort, demonstrating the benefit of formulation (7). Fig. 2 and 3 highlight the different risk-aversion levels of ST and STWA by showing the joint trajectories of two simulations with \(\alpha=10\%\). In both cases STWA aborted the task. ST instead completed the first task, while it failed the second one. ST is willing to take risks, which sometimes leads to completing the task (Fig. 2), but sometimes it leads to failure (Fig. 3). STWA is instead risk-averse, and it triggers a safe abort as soon as a risk of constraint violation is detected, which leads to less completed tasks, but also less failures. In terms of cost, Table III shows that the average cost for the completed tasks (with \(\alpha=10\%\)) is comparable for the different formulations, thus the tracking performance is not degraded by the extra constraints using \(\tilde{\mathcal{V}}\). The same table also reports the computation times. The 99-percentile for the real-time iteration scheme [20] is always below the time step duration (5 ms). We do not report the RTI computation times for _Receding_ because of a technical issue. Indeed, the Python interface of Acados does not support time-varying constraints. Therefore our current implementation of _Receding_ actually soft constrains the whole state trajectory in \(\tilde{\mathcal{V}}\), but then sets to zero the penalty weights for all time steps except for \(r\) and \(N\), resulting in a much higher computation time than needed. The Safe Abort column reports the maximum computation times for the Task Abortion with the two methods. As previously stated, OCP (7) reports a higher number of successes (see Table I and II) at the cost of large computation times, while (6) reports good computation times (satisfying Assumption 4), but with a high number of failures. ## V Conclusions We have presented a novel Receding-Constraint MPC formulation, which provides recursive feasibility guarantees under a weaker assumption on the used safe set with respect to classic approaches. Moreover, we have presented a task-abortion strategy that allows to reach an equilibrium state whenever a risk of constraint violation is detected. Our results on a 3-joint manipulator show the improved safety of the presented Receding-Constraint MPC with respect to other state-of-the-art methods. Future research will focus on finding a safe-abort method that achieves high success rates as (7), but with reasonable computation times as (6). For this, we plan to extend the method in [12] to learn both the set \(\tilde{\mathcal{V}}\) and a policy to drive the state to an equilibrium. While this work focused on model-based control methods, our approach could be applied in the future to _safety filters_ for making black-box RL policies safe. Fig. 3: Comparison between ST (task failed) and STWA (task aborted). The last plot shows the value of the terminal constraint (13). 
The vertical line highlights the start of the safe-abort trajectory. Fig. 2: Comparison between ST (task completed) and STWA (task aborted). The last plot shows the value of the terminal constraint (13). The vertical line highlights the start of the safe-abort trajectory.
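For concreteness, the set-membership test in (13) amounts to checking that the joint-velocity norm stays below the learned bound \(\phi(x)\) shrunk by the safety margin \(\alpha\). A minimal sketch of this check (assuming a generic, pre-trained bound function `phi`; this is not the actual implementation used in the experiments) is:

```python
import numpy as np

def in_safe_set(x, phi, alpha=0.10, nq=3):
    """Check (1 - alpha) * phi(x) - ||dq|| >= 0 for a state x = (q, dq)."""
    dq = x[nq:]                                   # joint velocities of the triple pendulum
    return (1.0 - alpha) * phi(x) - np.linalg.norm(dq) >= 0.0

# toy velocity-norm bound: a constant budget of 2 rad/s (illustrative only)
phi = lambda x: 2.0
x = np.concatenate([np.zeros(3), 0.5 * np.ones(3)])   # q = 0, dq = 0.5 rad/s per joint
print(in_safe_set(x, phi))                        # True: ||dq|| ~ 0.87 <= 0.9 * 2.0
```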
2309.06458
Quantum multi-secret sharing scheme with access structures and cheat identification
This work proposes a $d$-dimensional quantum multi-secret sharing scheme with a cheat detection mechanism. The dealer creates multiple secrets and distributes the shares of these secrets using multi-access structures and a monotone span program. The dealer detects the cheating of each participant using the Black box's cheat detection mechanism. To detect the participants' deceit, the dealer distributes secret shares' shadows derived from a randomly invertible matrix $X$ to the participants, stored in the black box. The Black box identifies the participant's deceitful behavior during the secret recovery phase. Only honest participants authenticated by the Black box acquire their secret shares to recover the multiple secrets. After the Black box cheating verification, the participants reconstruct the secrets by utilizing the unitary operations and quantum Fourier transform. The proposed protocol is reliable in preventing attacks from eavesdroppers and participants. The scheme's efficiency is demonstrated in different noise environments: dit-flip noise, $d$-phase-flip noise, and amplitude-damping noise, indicating its robustness in practical scenarios. The proposed protocol provides greater versatility, security, and practicality.
Deepa Rathi, Sanjeev Kumar
2023-09-12T16:15:49Z
http://arxiv.org/abs/2309.06458v2
# Quantum multi-secret sharing scheme with access structures and cheat identification ###### Abstract This work proposes a \(d\)-dimensional quantum multi-secret sharing scheme with a cheat detection mechanism. The dealer creates multiple secrets and distributes the shares of these secrets using multi-access structures and a monotone span program. The dealer detects the cheating of each participant using the Black box's cheat detection mechanism. To detect the participants' deceit, the dealer distributes secret shares' shadows derived from a randomly invertible matrix \(X\) to the participants, stored in the black box. The Black box identifies the participant's deceitful behavior during the secret recovery phase. Only honest participants authenticated by the Black box acquire their secret shares to recover the multiple secrets. After the Black box cheating verification, the participants reconstruct the secrets by utilizing the unitary operations and quantum Fourier transform. The proposed protocol is reliable in preventing attacks from eavesdroppers and participants. The scheme's efficiency is demonstrated in different noise environments: dit-flip noise, \(d\)-phase-flip noise, and amplitude-damping noise, indicating its robustness in practical scenarios. The proposed protocol provides greater versatility, security, and practicality. keywords: Black box, Cheat identification, Multi access structure, Noise environments, Quantum Fourier transform + Footnote †: journal: Elsevier ## 1 Introduction In today's world, where hackers continually target secret data, secret sharing is a crucial cryptographic technique that assures the security and confidentiality of secret information. Shamir[1] and Blakley[2] devised the first threshold secret sharing technique separately, employing Lagrange's interpolation and projective geometry theories, respectively. However, classical secret sharing schemes rely on mathematical assumptions and computational complexity, which cannot provide secure demonstrative communication. C.H. Bennett and G. Brassard devised the famed BB84[3] protocol to solve the limits of classical cryptography techniques. This protocol is considered to be the beginning of quantum cryptography. The absolute security of quantum cryptography relies on the fundamental characteristics of quantum mechanics, including the no-cloning theorem, the Heisenberg uncertainty principle, and the inability to distinguish non-orthogonal quantum states. Quantum secret sharing (QSS) is a significant field of study in quantum cryptography. This cryptographic technique provides enhanced security and several advantages over classical schemes. QSS research is often separated into two groups based on shared secrets: Quantum state sharing (QSTS) for sharing unknown quantum states and QSS for sharing classical information. In 1999, Hillery et al.[4] introduced the founding work of QSS by utilizing the entangled three-qubit and four-qubit GHZ states. At the same time, \((t,m)\)-threshold QSS protocols have been introduced [5; 6], in which at least \(t\) out of \(m\) participants are required to retrieve the secret. Gottesman[7] proved that the no-cloning theorem and monotonicity constraints are sufficient for the existence of QSS schemes with access structures. Subsequently, Xiao et al.[8] extended the QSS scheme[4] by implementing the quantum key distribution techniques: favored measuring basis and measuring basis encrypted. 
Deng et al.[9] envisioned a QSTS scheme to share an arbitrarily two-qubit state by utilizing Einstein-Podolsky-Rosen (EPR) pairs. Henceforth, QSS protocols of 2-dimensional quantum system have been extensively studied in Ref.[10; 11; 12; 13; 14; 15]. Most QSS protocols discussed before are built for 2-dimensional quantum systems (qubits). However, with advancements in quantum technology, developing QSS protocols in high dimensional quantum systems (qudits) is becoming more significant than qubits due to their higher information capacity and improved security. Therefore, several QSS protocols are presented in high-dimensional quantum systems. Yu et al.[16] developed a \(d\)-dimensional QSS protocol with mutually unbiased and biased bases by generalizing the two-qubit QSS scheme[4]. Tavakoli et al.[17] introduced a multiparty QSS scheme by utilizing a sequential communication of a single quantum system with \(d\)-dimensional. Subsequently, Chen et al.[18] demonstrated that the protocol reported in [17] needs to be more secure and efficient. They evaluated the security vulnerabilities and enhanced the efficiency, growing it from \(1/d\) to \(1\). Using the quantum Fourier transform (QFT), and generalized unitary operators, Song et al.[19] presented the \((t,m)\)-threshold QSS scheme. The secret is retrieved by the reconstructor employing the inverse quantum Fourier transform (IQFT) without relying on any information from the remaining participants. However, Kao et al.[20] discovered that in the scheme [19], the reconstructor cannot retrieve the secret without the help of other participants. Later, Sutradhar and Om[21] overcome this problem by introducing an enhanced \((t,m)\) threshold QSS protocol. Qin et al.[22] presented a QSTS scheme by utilizing the QFT. QSS systems are categorized into two groups based on the number of participants in authorized sets: threshold and general. The current QSS methods are mainly \((t,m)\)-threshold[23; 19; 24; 25; 21; 26; 27], allowing any subset of \(t\) participants or more to retrieve the secret, whereas subsets with fewer than \(t\) participants are unable to retrieve the secret. In practical situations, the composition of authorized subsets may not depend on \(t\), leading to the proposal of general QSS techniques which utilize access structures to determine authorized subsets [28; 29; 30; 31; 32; 33; 34]. The access structure describes participant subsets that can retrieve the secret, while the adversary structure refers to participant subsets that cannot get any information of the secret. Given that most QSS schemes only consider ideal noise-free quantum channels, i.e., without considering the impact of channel noise on the QSS schemes in real quantum communication. Nevertheless, in real quantum communication, the quantum states must engage via the surrounding environment. Which introduces influences from channel noise and disrupts quantum resource entanglement. Thus, studying the effect of QSS protocols in noisy environments is essential. Some QSS schemes with a 2-dimensional quantum system in noisy environments have been reported in Ref.[35; 36; 37; 38; 39]. Furthermore, in most of the QSS, as mentioned earlier protocols, the dealer and participants can recognize if there is cheating but cannot identify the culprit. Yan et al.[40] introduced a threshold QSS protocol to identify cheaters using a voting mechanism. However, this technique is not analyzed in noisy environments, limiting its feasibility and application versatility. 
Li et al.[41] introduced a cheating-detectable classical secret sharing method to detect and remove cheaters using an asymmetric bivariate polynomial and a Black box methodology. In their scheme, the \(m\) participants are divided into \(t\) disjoint sets, and one trusted dealer is assigned to each group. Nevertheless, this method may not be appropriate for some practical situations. Therefore, we consider combining the Black box deception algorithm with QSS to make the scheme general and unconditionally secure. We study a cheating identifiable quantum multi-secret sharing (QMSS) scheme with general access structures. In QMSS, multiple secrets are distributed to the participants simultaneously. The dealer assigns \(n\) secrets to the participants according to \(n\) distinct access structures employing a monotone span program (MSP) and linear multi-secret sharing (LMSS). During the recovery phase, the participants' cheating behavior is identified using the Black box's deception verification mechanism. After the cheating verification, the participants directly exchange their secret shares through the Black box and then regenerate the secrets. The participants implement the generalized Pauli operator and QFT to retrieve the secret, and a hash function is utilized to validate the authenticity of secrets. Moreover, we evaluate the effectiveness of the proposed scheme in three kinds of noise models: dit-flip, d-phase-flip, and amplitude-damping, as observed in real-world scenarios. The proposed scheme distinguishes itself from existing QSS methods in the following ways: 1. The scheme can share multiple secrets simultaneously based on multi-access structures. 2. Each participant's deception is identified by a Black box. 3. The proposed scheme is independent of trustworthy third parties due to the cheating verification mechanism. 4. It can withstand participant attacks, including forgery and collusion attacks. 5. The influence of noisy environments on the proposed QMSS is demonstrated through fidelity. ## 2 Preliminaries ### Unitary operators Definition 1: The generalized Pauli operator for a qudit system with dimension \(d\) is specified as \[U_{a,b}=\sum_{z=0}^{d-1}\omega^{bz}\left|z+a\right\rangle\left\langle z\right|,\] where \(\omega=e^{\frac{2\pi i}{d}}\), and \(a,b\in\{0,1,...,d-1\}\). Definition 2: The quantum Fourier transform \(\mathcal{F}\) executed on a \(d\)-dimensional quantum state \(\left|x\right\rangle\) is written as \[\mathcal{F}\left|x\right\rangle=\frac{1}{\sqrt{d}}\sum_{z=0}^{d-1}\omega^{xz}\left|z\right\rangle,\text{ where }\ \omega=e^{\frac{2\pi i}{d}}.\] The inverse quantum Fourier transform \(\mathcal{F}^{-1}\) applied on a qudit state \(\left|z\right\rangle\) is represented by \[\mathcal{F}^{-1}\left|z\right\rangle=\frac{1}{\sqrt{d}}\sum_{x=0}^{d-1}\omega^{-zx}\left|x\right\rangle,\text{ where }\ \omega=e^{\frac{2\pi i}{d}}.\] **Definition 3**.: The quantum SUM gate for two qudits \(\left|\alpha\right\rangle\) and \(\left|\beta\right\rangle\) is written as \[SUM(\left|\alpha\right\rangle,\left|\beta\right\rangle)=(\left|\alpha\right\rangle,\left|\alpha+\beta\right\rangle).\] In this context, \(\left|\alpha\right\rangle\) represents the control particle, \(\left|\beta\right\rangle\) represents the target particle, and " + " indicates the addition modulo \(d\). ### Access structure **Definition 4**.: Let \(\Omega=\{P_{1},P_{2},...,P_{m}\}\) be a collection of participants and \(\Gamma\) be a subset of \(2^{\Omega}\).
A \(\Gamma\subseteq 2^{\Omega}\) access structure can be considered as a set of authorized participants if satisfies the conditions: \(B\in\Gamma\) when \(A\in\Gamma,\ A\subseteq B\subseteq\Omega\). The adversary structure \(\Delta\) refers to the collection of unauthorized sets, i.e., \(\Delta=\Gamma^{c}\). **Definition 5**.: For a secret \(s_{i}\), the access structure \(\Gamma_{i}\subseteq 2^{\Omega}\) is a family of sets of authorized participants to get the secret \(s_{i}\). A multi access structure \(\Gamma=(\Gamma_{1},\Gamma_{2},...,\Gamma_{n})\) consisting of \(n\) sets is used for \(n\) secrets \((s_{1},s_{2},...,s_{n})\). **Definition 6**.: A monotone span program (MSP) is represented by \((Z_{d},M,\psi,\zeta_{i})\), where \(Z_{d}\) be a finite field (\(d\) is a prime), \(M\) is a matrix of \(m\times l\) order over \(Z_{d}\), \(\psi:\{1,2,...,m\}\rightarrow\Omega\) is a surjection map used to assign the rows of \(M\) to each participant, and \(\zeta_{i}=(0,...,0,1,0,...,0)^{T}\in Z_{d}^{l}\) (where \(1\) is the \(i\)th element) is the target vector. **Definition 7**.: For multi-access structure \(\Gamma=(\Gamma_{1},\Gamma_{2},...,\Gamma_{n})\), if \((Z_{d},M,\psi,\zeta_{i})\), \(i=1,2,...,n\), satisfies the following conditions then it is referred to as a monotone span program (MSP). 1. For any \(A\in\Gamma_{i}\), there exists a vector \(\lambda_{iA}\) such that \(M_{A}^{T}\lambda_{iA}=\zeta_{i}\). 2. For any \(A\in\Delta_{i}\), there exists a vector \(\kappa=(\kappa_{1},...,1,...,\kappa_{l-1})^{T}\in Z_{d}^{l}\) such that \(M_{A}\kappa=0\in Z_{d}^{l}\) with \(1\) is the \(i\)th element. In this context, \(M_{A}\) represents the rows \(k\) of \(M\) where \(\psi(k)\in A\), and \(T\) signifies the transpose. ### Linear multi-secret sharing (LMSS) Linear multi-secret sharing (LMSS) is considered one of the most efficient methodologies in the field of general secret sharing. The LMSS could be utilized for access control techniques for large data sets with minimal additional cost. Following the MSP \((Z_{d},M,\psi,\zeta_{i})\), we examine the formulation of an LMSS about multi-access structure \(\Gamma=(\Gamma_{1},\Gamma_{2},...,\Gamma_{n})\). The dealer \(D\) wants to distribute the \(n\) secrets \(s_{1},s_{2},...,s_{n}\in Z_{d}\), to \(m\) participants using the multi access structure \(\Gamma=(\Gamma_{1},\Gamma_{2},...,\Gamma_{n})\). The dealer \(D\) examined a MSP \((Z_{d},M,\psi,\zeta_{i})\). 1. **Distribution phase:** The dealer \(D\) calculates the shares of the participant by selecting a random vector \(\rho=(s_{1},...,s_{n},\rho_{n+1},...,\rho_{i})\in Z_{d}^{l}\). Then, \(D\) calculates \(sh=M\rho=(sh_{1},sh_{2},...,sh_{m})^{T}\) and distribute the share \(sh_{k}\) among the participant \(\psi(k)\) via secure quantum channel. 2. **Reconstruction phase:** Consider that \(A\in\Gamma_{i}\) and \(sh_{A}\) represents the elements of \(sh\) that have indices in the set \(A\). The participants of the set \(A\) regenerate the \(i\)th secret \(s_{i}\) as: \[sh_{A}\lambda_{iA}=(M_{A}\rho)^{T}\lambda_{iA}=\rho^{T}(M_{A}^{T}\lambda_{iA})= \rho^{T}\zeta_{i}=s_{i}.\] **Remarks:** 1. For any set \(A\subseteq\Omega\), if \(A\not\subset\Gamma\), then \(A\subseteq\Gamma^{c}=\Delta\). 2. An unauthorized subset of \(\Delta\) cannot acquire all the secret shares, whereas an authorized subset of \(\Gamma\) obtains all secret shares. 3. 
If \(\omega=e^{\frac{2\omega}{d}}\), then \[\sum_{y=0}^{d-1}\omega^{xy}=\begin{cases}d,&x\overset{d}{\equiv}0;\\ 0,&x\overset{d}{\equiv}0.\end{cases}\] ### Black box mechanism for cheat-identification The term "Black box"[41] means that a device or product's internal structure or principles are not significant to the user. Thus, the user is only interested in the device's functionality and how to operate it. In our protocol, the Black box is required to execute the following functions: 1. The dealer \(D\) develops a diagonal matrix \(\Sigma\) of \(2m\)-order of secret shares \(sh_{k}\) and computes the matrix \(X=Y^{-1}\Sigma Y\). \(D\) determines two independently eigenvectors \((y_{k1},y_{k2})\) corresponding to the eigenvalues of \(X\), and \((y_{k1},y_{k2})\) are utilized as the shadows of secret shares. These shadows \((y_{k1},y_{k2})\) are transmitted to the participants. Then, these \(sh_{k}\) and matrix \(X=Y^{-1}\Sigma Y\) are kept in the Black box. 2. In the reconstruction phase, the Black box validates the secret shares' shadow given by the participants. Therefore, the following two factors are used to verify the existence of cheaters: * \(y_{k1}\) and \(y_{k2}\) are linearly independent. * \(sh_{k}=sh_{k1}=sh_{k2}\), where \(sh_{k1}\) and \(sh_{k2}\) can be evaluated by solving the equations \(Xy_{k1}=sh_{k1}y_{k1}\) and \(Xy_{k2}=sh_{k2}y_{k2}\), respectively. 3. After the cheating verification of participants, the Black box transmits the secret shares \(sh_{k}\) to the participants \(\psi(k)\) through a secure quantum channel. ### Noise models The operator sum representation efficiently depicts the interaction between a quantum state and its surrounding environment. Using Kraus operators[42], the noise model for \(d\)-dimensional quantum states may be characterized by an entirely positive trace-preserving map \(\epsilon\). \[\rho^{\prime}=\epsilon(\rho)=\sum_{m^{\prime},n^{\prime}}E_{m^{\prime},n^{ \prime}}\rho E_{m^{\prime},n^{\prime}}^{\dagger}\] where \(E_{m^{\prime},n^{\prime}}^{\dagger}\) denotes the conjugate transpose of \(E_{m^{\prime},n^{\prime}}\), \(\rho\) and \(\rho^{\prime}\), are the density matrices of the input quantum state and corresponding output quantum state, respectively. The Kraus operators \(E_{m^{\prime},n^{\prime}}\) are associated to the Weyl operators \(\hat{U}_{m^{\prime},n^{\prime}}\)[43] described as: \[\hat{U}_{m^{\prime},n^{\prime}}=\sum_{z=0}^{d-1}\omega^{m^{\prime}z}\left|z \right\rangle\left\langle z+n^{\prime}\right|\] where " +" means addition modulo \(d\). The widely recognized noise models[42] in quantum channels, dit-flip, d-phase-flip, and amplitude damping, represented as: 1. **Dit-flip noise:** This noise involves disturbances that convert \(\left|z\right\rangle\) with probability \(\mu\), either to the state \(\left|z+1\right\rangle\), \(\left|z+2\right\rangle,...,\) or \(\left|z+d-1\right\rangle\), whereas preserving it unaltered with the probability \(1-\mu\). The associated Kraus operators are represented as: \[E_{0,0}=\sqrt{1-\mu}\hat{U}_{0,0},\ E_{0,1}=\sqrt{\frac{\mu}{d-1}}\hat{U}_{0, 1},...,E_{0,d-1}=\sqrt{\frac{\mu}{d-1}}\hat{U}_{0,d-1}\] 2. **d-phase-flip noise:** This noise refers to the phenomenon where quantum information is lost without energy dissipation. 
In this noise, the state \(\left|z\right\rangle\) is susceptible to a phase transformation with a probability of \(\mu\), resulting in one of the \(d-1\) phases: \(\omega\left|z\right\rangle\), \(\omega^{2}\left|z\right\rangle\),..., or \(\omega^{d-1}\left|z\right\rangle\). The Kraus operators are shown as: \[E_{0,0}=\sqrt{1-\mu}\hat{U}_{0,0},\ E_{1,0}=\sqrt{\frac{\mu}{d-1}}\hat{U}_{1, 0},...,E_{d-1,0}=\sqrt{\frac{\mu}{d-1}}\hat{U}_{d-1,0}\] 3. **Amplitude-damping noise:** The consequences of energy dispersion in a quantum system caused by energy loss are referred to as amplitude-damping noise. This noise will change the basis state \(\left|z\right\rangle\) to the state \(\left|0\right\rangle\) with a probability of \(\mu\) excluding the state \(\left|0\right\rangle\), and leave it unchanged with a probability of \(1-\mu\). The associated Kraus operators are represented as: \[E_{0}=\left|0\right\rangle\left\langle 0\right|+\sqrt{1-\mu}\sum_{z=1}^{d-1} \left|z\right\rangle\left\langle z\right|,\ E_{z}=\sqrt{\mu}\left|0\right\rangle \left\langle z\right|,\ \text{with}\ z=1,2,...,d-1.\] The density matrix for \(m\)-qudit state through Kraus operators is described as: \[\rho^{\prime}=\epsilon(\rho)=\sum_{r_{1},r_{2},...,r_{n}}(E_{r_{1}}\otimes E_ {r_{2}}\otimes...\otimes E_{r_{n}})\rho(E_{r_{1}}\otimes E_{r_{2}}\otimes... \otimes E_{r_{n}})^{\dagger}.\] Where \(E_{r_{i}}\) represents the \(z\)th qudit influenced by the channel noise. The influence of noise on the quantum state is visualized by determining the fidelity between the initial quantum state, say \(\left|\phi\right\rangle\), and the output density matrix \(\rho_{out}\). Fidelity quantifies the similarity between two quantum states and is a mathematical measure for assessing their degree of closeness. Fidelity is defined by: \[F=\left\langle\phi|\rho_{out}|\phi\right\rangle\] If \(F=1\), no noise exists in the quantum channel. However, \(F=0\) indicates that all information has been lost. Thus, \(0\leq F\leq 1\). ## 3 Proposed QMSS scheme The proposed cheating-identifiable quantum multi-secret sharing (QMSS) technique comprises a dealer \(D\), \(m\) participants \(\{P_{1},P_{2},...,P_{m}\}\) and a Black box. Assume that the dealer \(D\) intends to allocate \(n\) secrets \((s_{1},s_{2},...,s_{n})\) to \(m\) participants \(\{P_{1},P_{2},...,P_{m}\}\), based on the multi access structures \(\Gamma=(\Gamma_{1},\Gamma_{2},...,\Gamma_{n})\). Additionally, \(d\) is a prime number, \(h\) represents a hash function, and \((Z_{d},M,\psi,\zeta_{i})\) denotes a monotone span program (MSP) for \(\Gamma\). The graphical representation of the QMSS scheme is depicted in Fig.1. ### Distribution phase The dealer \(D\) executes the following actions. 1. \(D\) select a random vector \(\rho=(s_{1},...,s_{n},\rho_{n+1},...,\rho_{l})^{T}\in Z_{d}^{l}\) 2. Compute \(sh=M_{m\times l}\rho=(sh_{1},sh_{2},...,sh_{m})^{T}\). 3. \(D\) creates a diagonal matrix \(\Sigma\) of order \(2m\) with diagonal elements \(sh_{k},\ k=1,2,,...,m\) as \[\Sigma=\text{diag}\Big{\{}sh_{1},sh_{1},sh_{2},sh_{2},...,sh_{m},sh_{m}\Big{\}}.\] (1) Now, the dealer \(D\) randomly develops a \(2m\)-order invertible matrix \(Y\) over \(Z_{d}\) and compute \(X=Y^{-1}\Sigma Y\). It is known that the eigenvalues of matrices \(X\) and \(\Sigma\) are identical due to their similarity. There are two linearly independent (LI) eigenvectors \((y_{k1},y_{k2})\) correspond to the eigenvalues \(sh_{k}\) (\(k=1,2,...,m\)). Therefore, each eigenvalue must have at least two LI eigenvectors. 
The dealer \(D\) occurs the linearly independent eigenvectors \((y_{k1},y_{k2})\) corresponding to the eigenvalue \(sh_{k}\) of participant \(P_{k}\) as secret shares' shadows. Subsequently, \(D\) transmits each pair of secret shares' shadows \((y_{k1},y_{k2})\) to the participant \(\psi(k),\ \psi(k)\in\Gamma_{i}\) through secure quantum channel. For simplicity, we assume that \(\psi(k)=P_{k}\) for \(1\leq k\leq m\). The secret shares (eigenvalues of matrix \(\Sigma\)) \(sh_{1},sh_{2},...,sh_{m}\) and the matrix \(X=Y^{-1}\Sigma Y\) are stored in the Black box. 4. Using the public hash function \(h\), \(D\) calculates and publishes the hash values \(H_{i}=h(s_{i})\), \(i=1,2,...,n\). ### Reconstruction phase Assume that the participants of an authorized set \(A\in\Gamma_{i}\) are required to retrieve the secret \(s_{i}\). To simplify the explanation, we assume that \(A=\{P_{1},P_{2},\ldots,P_{t}\}\). The cheating of the participants was detected through a Black box mechanism that relies on the matrix \(X\). Participants verified as honest through the Black box may acquire the secret shares and then successfully reconstruct the secret \(s_{i}\). #### 3.2.1 Cheating identification phase 1. Consider that the participants \(P_{k},\ k=1,2,...,t\), provide the shadows \((y_{k1},y_{k2})\) and recover the secret \(s_{i}\). The procedure for reconstructing the secret can be executed if the following conditions are met: * \(y_{k1}\) and \(y_{k2}\) are linearly independent * \(sh_{k}=sh_{k1}=sh_{k2}\) otherwise, continue with step 2. 2. If participant \(P_{k}\) is detected as dishonest in the previous step, he will be eliminated. The procedure of secret recovery will be terminated if the set of participants without a cheater is not a subset of \(\Gamma_{i}\). 3. After the cheating verification of all participants, the Black box transmits the secret shares \(sh_{k}\) (\(k=1,2,...,t\)) to the participants \(P_{k}\) of the authorized set \(A\in\Gamma_{i}\) through a secure quantum channel. #### 3.2.2 Secret recovery phase Assume that after obtaining the secret's shares \(sh_{k}\), the participants \(P_{k}\) of a set \(A\in\Gamma_{i}\) want to recover the secret \(s_{i}\). To simplify the explanation, we assume that \(A=\{P_{1},P_{2}...,P_{t}\}\) is the qualifying subset of participants, and \(P_{1}\) is a reconstructor. The reconstruction process proceeds as follows: 1. The participant \(P_{1}\) generates \(t\) single qudits \(\ket{0}_{1},\ket{0}_{2},...,\ket{0}_{t}\). 2. \(P_{1}\) operates the QFT \(\mathcal{F}\) on the first particle \(\ket{0}_{1}\) and get the state \(\ket{\phi_{1}}\) as \[\ket{\phi_{1}} =(\mathcal{F}\ket{0}_{1})\ket{0}_{2},...,\ket{0}_{t}\] \[=\left(\frac{1}{\sqrt{d}}\sum_{v=0}^{d-1}\ket{v}_{1}\right)\ket{0 }_{2},...,\ket{0}_{t}.\] (2) 3. \(P_{1}\) applies the \(t-1\) quantum SUM operations on the particles \(\ket{0}_{j},\ j=2,3,..,t\) with \((\mathcal{F}\ket{0}_{1})\) as the control qudit and \(\ket{0}_{j},(j=2,3,..,t)\) as the target qudits. The generated entangled quantum state \(\ket{\phi_{2}}\) is \[\ket{\phi_{2}}=\frac{1}{\sqrt{d}}\sum_{v=0}^{d-1}\ket{v}_{1}\ket{v}_{2}...\ket{ v}_{t}.\] (3) 4. \(P_{1}\) distributes the particle \(\ket{v}_{j},\ j=2,3,...,t\) to the participants \(P_{j}\) respectively, via the secure quantum channels. 5. 
Every participant \(P_{j}\) (\(j=1,2,...,t\)) operates the Pauli operators \(U_{0,\lambda_{j}sh_{j}}\) on their respective particles \(\ket{v}_{j}\), to obtain the quantum state \(\ket{\phi_{3}}\) as: \[\ket{\phi_{3}} =U_{0,\lambda_{1},sh_{1}}\otimes U_{0,\lambda_{2},sh_{2}}\otimes...\otimes U_{0,\lambda_{k},sh_{k}}\ket{\phi_{2}}\] \[=\frac{1}{\sqrt{d}}\sum_{v=0}^{d-1}\omega^{\lambda_{1},sh_{1}v} \ket{v}_{1}\omega^{\lambda_{2},sh_{2}v}\ket{v}_{2}...\omega^{\lambda_{k},sh_{ v}}\ket{v}_{t}\] \[=\frac{1}{\sqrt{d}}\sum_{v=0}^{d-1}\omega^{(\sum_{j=1}^{t}\lambda _{j},sh_{j})v}\ket{v}_{1}\ket{v}_{2}...\ket{v}_{t}.\] (4) Figure 1: QMSS scheme with the authorized set \(A\) for secret \(s_{i}\) (\(j=1,2,...,t\) and \(\mathrm{qc}=\mathrm{quantum}\) channel). 6. Each participant \(P_{j}\) executes the inverse quantum Fourier transform \(\mathcal{F}^{-1}\) on their particle \(\ket{v}_{j}\), and then measures the outcomes. After performing measurements on the particles, each participant \(P_{j}\) broadcasts his measurement result. 7. The participants \(P_{j}\) (\(j=1,2,...,t\)) sum up their measurement outcomes and compute the secret \(\sum_{j=1}^{t}\lambda_{j}sh_{j}\bmod d=s_{i}\). 8. Each participant \(P_{j}\) checks the recovered secret by \(H_{i}=h(s_{i})\), where \(h\) is a hash function. If this test is correct, they can conclude that all participants are trustworthy; otherwise, they ensure that any of the participant is deceitful. ## 4 Security analysis This section examines the proposed scheme's security against internal and external attackers and demonstrates its resistance to their actions. ### Intercept resend attack Assume that the eavesdropper, Eve, has control over the quantum channel. Then, Eve intercepts the qudits \(\ket{v}_{j}\) and measures them on a computational basis to obtain secret information. Additionally, Eve creates and resends the fictitious particle \(\ket{v^{\prime}}_{j}\) to \(P_{j}\). After measuring the particle, Eve may obtain the corrected value \(v\) with a probability of \(1/d\). However, Eve is unable to acquire any information about the secret shadows \((y_{k1},y_{k2})\) and the secret \(s_{i}\). Since the transmitted particles \(\ket{v}_{j}\) contain no information of secret shadows and secret shares. ### Entangle measure attack The eavesdropper Eve obtains all of the particles \(\ket{v}_{j}(j=2,3,...,t)\), when \(P_{1}\) transmits the particles \(\ket{v}_{j}\) to participants \(P_{j}\). Afterward, Eve proceeds to create an additional particle \(\ket{a}\) and entangles it with one of the intercepted particles \(\ket{v}_{j}\). Eve applies the SUM operator on the particles \(\ket{a}\) and \(\ket{v}_{j}\). Thus, the state \(\ket{\phi_{2}}\) develops into \(\ket{\phi_{2}}^{\prime}\) as \[\ket{\phi_{2}}^{\prime}=\frac{1}{\sqrt{d}}\sum_{v=0}^{d-1}\ket{v}_{1}\ket{v}_{ 2}...\ket{v}_{t}\ket{v+a}. \tag{5}\] Subsequently, Eve chooses another secret particle \(\ket{v}_{r}\) and executes a SUM operator on \(\ket{a}\). Consequently, the quantum state \(\ket{\phi_{2}}^{\prime}\) evolves into \(\ket{\phi_{2}}^{\prime\prime}\) \[\ket{\phi_{2}}^{\prime\prime}=\frac{1}{\sqrt{d}}\sum_{v=0}^{d-1}\ket{v}_{1} \ket{v}_{2}...\ket{v}_{t}\ket{v+v+a}=\ket{\phi_{2}}\ket{a}. \tag{6}\] Eve acquires the initial value \(a\) by measuring the ancillary particle \(\ket{a}\). Consequently, he concludes that \(\ket{v}_{j}\) and \(\ket{v}_{r}\) are equivalent. Similarly, he can only conclude that all transmitted particles \(\ket{v}_{j}\) are identical. 
Thus, Eve cannot gain any secret information from the intercepted particles \(\ket{v}_{j}\). ### Collusion attack The secret shares \(sh_{k}\) are exclusively held by the dealer \(D\), while the participants can only get the shadows \((y_{k1},y_{k2})\) of the secret shares. Therefore, even if they collaborate, the participants cannot recreate the secret shares to retrieve the secret \(s_{i}\). Only the participants authenticated by the Black box can acquire all the correct secret shares \(sh_{k}\). Hence, the secrets \(s_{i}\) (\(i=1,2,...,n\)) remain secure. Alternatively, in the secret recovery phase, each participant \(P_{j}\) measures their particle in the computational basis and publicly announces the outcome of their measurement \(\lambda_{j}sh_{j}\). However, this process does not reveal the value shares of participant \(P_{j}\) to the other participants. Moreover, assume that participants of an unauthorized set \(C\subset A\) conspire to obtain additional secret shares from the shared particles. Their assault will not succeed in the proposed scheme. Since only participant \(P_{1}\) distributes the secret particles, \(\ket{v}_{j}\) to the rest and \(\ket{v}_{j}\) carries no secret information. Additionally, if participants from an unauthorized set \(C\subset A\) try to access the secret \(s_{i}\), they would be required to get the secret shares held by the other participants in set \(A\). However, as explained earlier, they cannot obtain these shares. Therefore, according to the LMSS, unauthorized participants cannot compute the secret by performing linear operations on their shares. ### Forgery attack Suppose that the participant \(P_{k}\) provides fake shadows during the cheating verification process. The secret shares \(sh_{k}\) and the matrix \(X=Y^{-1}\Sigma Y\) are stored in the Black box. The Black box validates the participant's shadows based on the following two conditions: * \(y_{k1}\) and \(y_{k2}\) are linearly independent. * \(sh_{k}=sh_{k1}=sh_{k2}\), \(sh_{k1}\) and \(sh_{k2}\) can be determined by calculating the equations \(Xy_{k1}=sh_{k1}y_{k1}\) and \(Xy_{k2}=sh_{k2}y_{k2}\), respectively. Consequently, the eigenvalues must be consistent without cheating when compared to \(sh_{k}\) stored in the Black box. If a participant provides fake shadows, these two conditions are not satisfied, and the participant is identified as a cheater. Hence, the participants cannot forge the secret shares' shadows. During the reconstruction process, every participant provides the secret shares' shadow \((y_{k1},y_{k2})\) rather than the required information \(sh_{k}\). Although the attackers obtain \((y_{k1},y_{k2})\), they have to compute \(Xy_{k1}=sh_{k1}y_{k1}\) and \(Xy_{k2}=sh_{k2}y_{k2}\) to acquire \(sh_{k1},sh_{k2}\), respectively. However, only the dealer \(D\) and the Black box know about the matrix \(X\). Therefore, no one can obtain information of \(sh_{k}\) from \((y_{k1},y_{k2})\). Only the participants authenticated by the black box can obtain all the secret shares. Furthermore, since the Black box directly transmits these \(sh_{k}\), the internal attacker cannot fabricate the recovery of the secret. In the secret recovery process, assume that certain malicious participants within the authorized set \(A\) utilize a Pauli operator with a counterfeit share. As a result, each participant calculates incorrect values for the secret \(s_{i}\) and obtains \(h(s_{i})\neq H_{i}\). Then, they conclude that some participants are dishonest. 
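To make the Black box test concrete, the two checks above can be sketched in a few lines of NumPy. The \(4\times 4\) matrix \(X\) and the shadows below are a hand-built toy instance over \(Z_{7}\) (hiding the shares \(sh=(3,5)\) of two participants); they are illustrative and not part of the scheme's formal description:

```python
import numpy as np

d = 7
X = np.array([[3, 0, 5, 0],       # toy X = Y^{-1} Sigma Y (mod 7), Sigma = diag(3,3,5,5)
              [0, 3, 0, 0],
              [0, 0, 5, 0],
              [0, 0, 0, 5]])

def eigval_mod(X, y):
    """Return sh with X y = sh y (mod d), or None if y is not an eigenvector."""
    i = int(np.flatnonzero(y % d)[0])             # first nonzero entry of y
    sh = int((X @ y)[i]) * pow(int(y[i]), -1, d) % d
    return sh if np.all((X @ y - sh * y) % d == 0) else None

def black_box_check(X, y1, y2, sh_stored):
    # condition 1: y1, y2 linearly independent over Z_d (some 2x2 minor is nonzero)
    indep = any(int(y1[i] * y2[j] - y1[j] * y2[i]) % d
                for i in range(len(y1)) for j in range(i + 1, len(y1)))
    # condition 2: both shadows reproduce the share stored in the Black box
    sh1, sh2 = eigval_mod(X, y1), eigval_mod(X, y2)
    return indep and sh1 is not None and sh1 == sh2 == sh_stored

y21, y22 = np.array([6, 0, 1, 0]), np.array([0, 0, 0, 1])   # honest shadows of P2
print(black_box_check(X, y21, y22, sh_stored=5))            # True
print(black_box_check(X, y21, np.array([5, 0, 6, 0]), 5))   # forged shadow -> False
```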
## 5 Example Let \(\Omega=\{P_{1},P_{2},P_{3},P_{4}\}\) represent the set of participants and \(\Gamma=(\Gamma_{1},\Gamma_{2})\) denote the access structures with \(\Gamma_{1}=\{A_{1}=\{P_{1},P_{2},P_{3}\}\), \(A_{2}=\{P_{1},P_{2},P_{4}\},A_{3}=\Omega\},\Gamma_{2}=\{A=\Omega\}\). Assume the dealer \(D\) intends to reveal two secrets \(s_{1}=2\) and \(s_{2}=5\) to four participants, using the multi access structure \(\Gamma=(\Gamma_{1},\Gamma_{2})\) and MSP \((Z_{7},M,\psi,\zeta_{1},\zeta_{2})\). The labeling map \(\psi(k)=P_{k}\), \(\forall\ k\in\{1,2,3,4\}\), \(\zeta_{1}=(1,0,0,0)^{T}\), \(\zeta_{2}=(0,1,0,0)^{T}\), and \(M=\begin{bmatrix}4&1&1&1\\ 0&0&1&1\\ 6&3&0&0\\ 0&1&1&1\end{bmatrix}\). So, \(\lambda_{1A_{1}}=(4,3,1)^{T},\lambda_{1A_{2}}=(2,0,5)^{T},\lambda_{1A_{3}}=(4,3,1,0)^{T}\), and \(\lambda_{2A}=(2,2,1,3)^{T}\). ### Distribution phase 1. \(D\) chooses a random vector \(\rho=(2,5,1,4)^{T}\) and calculates \(sh=M\rho=(4,5,6,3)^{T}\). 2. The dealer \(D\) prepares the diagonal matrix \(\Sigma\) of order \(8\) with diagonal elements \(sh_{k},\ k=1,2,3,4\) as \[\Sigma=\text{diag}\Big{\{}4,4,5,5,6,6,3,3\Big{\}}.\] \(D\) develops an invertible matrix \(Y\) of order \(8\) and computes \(X=Y^{-1}\Sigma Y\). \[Y=\begin{bmatrix}0&0&0&1&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 1&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&1&0&0&0\\ 0&0&1&0&0&1&0&0\\ 1&0&0&0&0&0&0&1\end{bmatrix},\quad X=Y^{-1}\Sigma Y=\begin{bmatrix}5&0&0&0&0&0&-1&0\\ 0&5&0&0&0&0&0&0\\ 0&0&4&0&0&0&0&0\\ 0&0&0&4&0&0&0&0\\ 0&0&0&0&6&0&0&0\\ 0&0&-1&0&0&3&0&0\\ 0&0&0&0&0&0&6&0\\ -2&0&0&0&0&0&1&3\end{bmatrix}.\] The dealer \(D\) obtains the linearly independent eigenvectors \((y_{k1},y_{k2})\) corresponding to the eigenvalues \(sh_{k}\) of \(X\) as secret shares' shadows of participant \(P_{k}\). Thus, \(D\) transmits these shadows \((y_{k1},y_{k2})\) to the participants \(P_{k},\ k=1,2,3,4\). The eigenvectors \((y_{11},y_{12})\) corresponding to the eigenvalue \(sh_{1}=4\) are \[y_{11}=(0,0,1,0,0,-1,0,0)^{T},y_{12}=(0,0,0,1,0,0,0,0)^{T}.\] Similarly, the eigenvectors \((y_{21},y_{22})\), \((y_{31},y_{32})\), and \((y_{41},y_{42})\) corresponding to the eigenvalues \(sh_{2}=5\), \(sh_{3}=6\), and \(sh_{4}=3\), respectively, are \[y_{21}=(1,0,0,0,0,0,0,-1)^{T},y_{22}=(0,1,0,0,0,0,0,0)^{T}\] \[y_{31}=(0,0,0,0,1,0,0,0)^{T},y_{32}=(-1,0,0,0,0,0,1,1)^{T}\] \[y_{41}=(0,0,0,0,0,0,0,1)^{T},y_{42}=(0,0,0,0,0,1,0,0)^{T}\] 3. \(D\) computes the hash values \(H_{1}=h(s_{1})\), \(H_{2}=h(s_{2})\), where \(h()\) is a publicly known hash function. ### Reconstruction phase Suppose the participants \(A_{1}=\{P_{1},P_{2},P_{3}\}\in\Gamma_{1}\) and \(A=\{P_{1},P_{2},P_{3},P_{4}\}\in\Gamma_{2}\) want to retrieve the secrets \(s_{1}=2\) and \(s_{2}=5\), respectively. #### 5.2.1 Cheating identification phase 1. The secret shares \(sh=(sh_{1},sh_{2},sh_{3},sh_{4})^{T}=(4,5,6,3)^{T}\) and the matrix \(X\) are stored in the Black box. 2. The participants \(P_{k},\ k=1,2,3,4\) provide the secret shares' shadows \((y_{k1},y_{k2})\). The Black box verifies the participants' cheating by the following conditions: * \(y_{k1}\) and \(y_{k2}\) are linearly independent. * \(sh_{k}=sh_{k1}=sh_{k2}\). If the participants \(P_{k},\ k=1,2,3,4\) meet the above two requirements, they will receive their secret shares \(sh_{k}\) to recover the secrets \(s_{1}=2\) and \(s_{2}=5\). Otherwise, they will be identified as cheaters and eliminated.
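The arithmetic of this example can be checked quickly with NumPy (a small sketch over \(Z_{7}\); the recombination vectors anticipate the recovery step of the next subsection):

```python
import numpy as np

d = 7
M = np.array([[4, 1, 1, 1],
              [0, 0, 1, 1],
              [6, 3, 0, 0],
              [0, 1, 1, 1]])
rho = np.array([2, 5, 1, 4])            # (s1, s2, rho_3, rho_4)

sh = M @ rho % d
print(sh)                               # [4 5 6 3] -> shares of P1..P4

lam_1A1 = np.array([4, 3, 1])           # recombination vector for s1, A1 = {P1, P2, P3}
lam_2A  = np.array([2, 2, 1, 3])        # recombination vector for s2, A  = {P1, ..., P4}
print(sh[:3] @ lam_1A1 % d)             # 2 = s1
print(sh @ lam_2A % d)                  # 5 = s2
```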
#### 5.2.2 Secret recovery phase Suppose that the participants \(A_{1}=\{P_{1},P_{2},P_{3}\}\in\Gamma_{1}\) want to retrieve the secret \(s_{1}\), and \(P_{1}\) is the reconstructor. \(P_{1}\) generates 3 single qudit particles and computes \(\ket{\phi_{2}}=\frac{1}{\sqrt{7}}\sum_{v=0}^{6}\ket{v}_{1}\ket{v}_{2}\ket{v}_{3}\). \(P_{1}\) transmits \(\ket{v}_{2}\) and \(\ket{v}_{3}\) to the participants \(P_{2}\) and \(P_{3}\), respectively. Now, each participant \(P_{1},P_{2}\) and \(P_{3}\) applies the generalized Pauli operators, and they obtain the quantum state \(\ket{\phi_{3}}\). \[\ket{\phi_{3}} =\frac{1}{\sqrt{7}}\sum_{v=0}^{6}U_{0,2}\ket{v}_{1}\otimes U_{0,1}\ket{v}_{2}\otimes U_{0,6}\ket{v}_{3}\] \[=\frac{1}{\sqrt{7}}\sum_{v=0}^{6}\omega^{2v}\ket{v}_{1}\,\omega^{v}\ket{v}_{2}\,\omega^{6v}\ket{v}_{3}\] \[=\frac{1}{\sqrt{7}}\sum_{v=0}^{6}\omega^{(2+1+6)v}\ket{v}_{1}\ket{v}_{2}\ket{v}_{3}\,. \tag{7}\] Now, every participant performs the inverse quantum Fourier transform \(\mathcal{F}^{-1}\) on their respective particle and then measures the outcome of the \(\mathcal{F}^{-1}\) transformation. After performing the measurement, each participant publicly shares their measurement outcome and combines the results. Then, they calculate the secret \(s_{1}\) and check the recovered secret by \(H_{1}=h(s_{1})\). \[\sum_{j=1}^{3}\lambda_{j}sh_{j}\ \text{mod}\ 7=\lambda_{1}sh_{1}+\lambda_{2}sh_{2}+\lambda_{3}sh_{3}=(2+1+6)\ \text{mod}\ 7=2. \tag{8}\] Similarly, the participants \(A=\{P_{1},P_{2},P_{3},P_{4}\}\in\Gamma_{2}\) reconstruct the secret \(s_{2}\) and verify the secret by \(H_{2}=h(s_{2})\). \[\sum_{j=1}^{4}\lambda_{j}sh_{j}\ \text{mod}\ 7=\lambda_{1}sh_{1}+\lambda_{2}sh_{2}+\lambda_{3}sh_{3}+\lambda_{4}sh_{4}=(1+3+6+2)\ \text{mod}\ 7=5. \tag{9}\] ## 6 Efficiency analysis in noisy environment In the current advanced quantum technologies, the dealer and participants are expected to create the quantum state accurately. However, when quantum particles are transmitted between the dealer and participants over a quantum channel, channel noise affects the execution of the QSS protocol. Thus, we demonstrate the effectiveness of the proposed scheme in various noise conditions, including dit-flip (df), \(d\)-phase-flip (dpf), and amplitude damping (ad). To simplify the study, we assume that the participants' local particles are unaffected by channel noise, and that the same kind of noise, with the same noise parameter, acts on the particles sent through the quantum channel. In the proposed scheme, the qudits in the quantum state \(\ket{\phi_{2}}\) are distributed to the participants via the quantum channel, and the final quantum state \(\ket{\phi_{3}}\) is prepared using local unitary operations. Consequently, the effectiveness of the proposed QMSS protocol relies on the proximity between the final quantum state \(\ket{\phi_{3}}\) and the output density matrix \(\rho_{out}\). In the proposed protocol, the dealer \(D\) distributes the secret shares' shadows to the participants \(P_{1},P_{2},...,P_{m}\); assume that the participants of a set \(A\in\Gamma_{i}\) retrieve the secret \(s_{i}\). After the cheating verification of the participants, the participant \(P_{1}\) prepares the entangled quantum state \(\ket{\phi_{2}}\). Therefore, the density matrix of the quantum state \(\ket{\phi_{2}}\) is \(\rho=\ket{\phi_{2}}\bra{\phi_{2}}\). Thus, \(P_{1}\) communicates the states' particles to the participants \(P_{j}\) (\(j=2,3,...,t\)).
The noise model describing the entire quantum system is presented as follows: \[\rho_{1}^{r}=\epsilon^{\epsilon}(\rho)=\sum_{m^{\prime},n^{\prime}}(I\otimes E _{m^{\prime},n^{\prime}}^{2}\otimes E_{m^{\prime},n^{\prime}}^{3}\otimes... \otimes E_{m^{\prime},n^{\prime}}^{t})\rho(I\otimes E_{m^{\prime},n^{\prime}} ^{2}\otimes E_{m^{\prime},n^{\prime}}^{3}\otimes...\otimes E_{m^{\prime},n^{ \prime}}^{t})^{\dagger} \tag{10}\] where \(r\in\{df,dpf,ad\}\) for dit-flip, \(d\)-phase-flip and amplitude damping noise environments, respectively. After participants \(P_{2},P_{3},...,P_{t}\) receive the transmitted particles, the affected density matrices under the dit-flip, \(d\)-phase-flip, and amplitude damping noise channel can be described as, respectively. \[\begin{split}\rho_{1}^{df}&=\epsilon^{df}(\rho)=(I \otimes E_{0,0}^{2}\otimes E_{0,0}^{3}\otimes...\otimes E_{0,0}^{t})\rho(I \otimes E_{0,0}^{2}\otimes E_{0,0}^{3}\otimes...\otimes E_{0,0}^{t})^{ \dagger}\\ &+(I\otimes E_{0,1}^{2}\otimes E_{0,1}^{3}\otimes...\otimes E_{ 0,1}^{t})\rho(I\otimes E_{0,1}^{2}\otimes E_{0,1}^{3}\otimes...\otimes E_{0,1 }^{t})^{\dagger}+...+\\ &(I\otimes E_{0,d-1}^{2}\otimes E_{0,d-1}^{3}\otimes...\otimes E _{0,d-1}^{t})\rho(I\otimes E_{0,d-1}^{2}\otimes E_{0,d-1}^{3}\otimes... \otimes E_{0,d-1}^{t})^{\dagger}\\ &=\frac{1}{d}\Big{[}(1-\mu)^{t-1}\Big{(}\sum_{v=0}^{d-1}\ket{v}_{ 1}\ket{v}_{2}...\ket{v}_{t}\Big{)}\Big{(}\sum_{v=0}^{d-1}\ket{v}_{1}\ket{v}_{2}...\ket{v}_{t}\Big{)}^{\dagger}+\Big{(}\frac{\mu}{d-1}\Big{)}^{t-1}\\ &\Big{(}\sum_{v=0}^{d-1}\ket{v}_{1}\ket{v+1}_{2}...\ket{v+1}_{t} \Big{)}\Big{(}\sum_{v=0}^{d-1}\ket{v}_{1}\ket{v+1}_{1}...\ket{v+1}_{t}\Big{)}^{ \dagger}+...+\Big{(}\frac{\mu}{d-1}\Big{)}^{t-1}\\ &\Big{(}\sum_{v=0}^{d-1}\ket{v}_{1}\ket{v+d-1}_{2}...\ket{v+d-1}_{ t}\Big{)}\Big{(}\sum_{v=0}^{d-1}\ket{v}_{1}\ket{v+d-1}_{2}...\ket{v+1}_{t} \Big{)}^{\dagger}\Big{]}\end{split} \tag{11}\] \[\rho_{1}^{dpf} =\epsilon^{dpf}(\rho)=(I\otimes E_{0}^{2}\otimes E_{0,0}^{3}\otimes...\otimes E_{0,0}^{t})\rho(I\otimes E_{0}^{2}\otimes E_{0,0}^{3} \otimes...\otimes E_{0,0}^{t})^{\dagger}\] \[+(I\otimes E_{1,0}^{2}\otimes E_{1,0}^{3}\otimes...\otimes E_{1,0} ^{t})\rho(I\otimes E_{1,0}^{2}\otimes E_{1,0}^{3}\otimes...\otimes E_{1,0}^{t} )^{\dagger}+...+\] \[(I\otimes E_{d-1,0}^{2}\otimes E_{d-1,0}^{3}\otimes...\otimes E_{ d-1,0}^{t})\rho(I\otimes E_{d-1,0}^{2}\otimes E_{d-1,0}^{3}\otimes...\otimes E_{ d-1,0}^{t})^{\dagger}\] \[=\frac{1}{d}\Big{[}(1-\mu)^{t-1}\Big{(}\sum_{v=0}^{d-1}|v\rangle _{1}\,|v\rangle_{2}\,...\,|v\rangle_{t}\Big{)}\Big{(}\sum_{v=0}^{d-1}|v\rangle _{1}\,|v\rangle_{2}\,...\,|v\rangle_{t}\Big{)}^{\dagger}+\Big{(}\frac{\mu}{d- 1}\Big{)}^{t-1}\] \[\Big{(}\sum_{v=0}^{d-1}\omega^{(t-1)v}\,|v\rangle_{1}\,|v\rangle _{2}\,...\,|v\rangle_{t}\Big{)}\Big{(}\sum_{v=0}^{d-1}\omega^{(t-1)v}\,|v \rangle_{1}\,|v\rangle_{2}\,...\,|v\rangle_{t}\Big{)}^{\dagger}+\Big{(}\frac{ \mu}{d-1}\Big{)}^{t-1}\] \[\Big{(}\sum_{v=0}^{d-1}\omega^{2(t-1)v}\,|v\rangle_{1}\,|v \rangle_{2}\,...\,|v\rangle_{t}\Big{)}\Big{(}\sum_{v=0}^{d-1}\omega^{2(t-1)v} \,|v\rangle_{1}\,|v\rangle_{2}\,...\,|v\rangle_{t}\Big{)}^{\dagger}+...+\Big{(} \frac{\mu}{d-1}\Big{)}^{t-1}\] \[\Big{(}\sum_{v=0}^{d-1}\omega^{(d-1)(t-1)v}\,|v\rangle_{1}\,|v \rangle_{2}\,...\,|v\rangle_{t}\Big{)}\Big{(}\sum_{v=0}^{d-1}\omega^{(d-1)(t-1) v}\,|v\rangle_{1}\,|v\rangle_{2}\,...\,|v\rangle_{t}\Big{)}^{\dagger}\Big{]} \tag{12}\] \[\rho_{1}^{ad} =\epsilon^{ad}(\rho)=(I\otimes E_{0}^{2}\otimes E_{0}^{3}\otimes...\otimes E_{0}^{t})\rho(I\otimes E_{0}^{2}\otimes 
E_{0}^{3}\otimes...\otimes E _{0}^{t})^{\dagger}+(I\otimes E_{1}^{2}\otimes E_{1}^{3}\] \[\otimes...\otimes E_{1}^{t})\rho(I\otimes E_{1}^{2}\otimes E_{1}^ {3}\otimes...\otimes E_{1}^{t})^{\dagger}+...+(I\otimes E_{d-1}^{2}\otimes E_{ d-1}^{3}\otimes...\otimes E_{d-1}^{t})\rho\] \[(I\otimes E_{d-1}^{2}\otimes E_{d-1}^{3}\otimes...\otimes E_{d-1 }^{t})^{\dagger}\] \[=\frac{1}{d}\Big{(}\,|0\rangle_{1}\,|0\rangle_{2}\,...\,|0 \rangle_{t}+(1-\mu)^{\frac{t-1}{2}}\sum_{v=1}^{d-1}|v\rangle_{1}\,|v\rangle_{2} \,...\,|v\rangle_{t}\,\Big{)}\Big{(}\,|0\rangle_{1}\,|0\rangle_{2}\,...\,|0 \rangle_{t}+(1-\mu)^{\frac{t-1}{2}}\] \[\sum_{v=1}^{d-1}|v\rangle_{1}\,|v\rangle_{2}\,...\,|v\rangle_{t} \Big{)}^{\dagger}+\frac{\mu^{t-1}}{d}\sum_{v=1}^{d-1}(|v\rangle_{1}\,|0\rangle_{ 2}\,...\,|0\rangle_{t})(|v\rangle_{1}\,|0\rangle_{2}\,...\,|0\rangle_{t})^{\dagger} \tag{13}\] Now, the participants \(P_{j},\ j=1,2,...,t\) implement the generalized Pauli operator \(U_{0,{\lambda}_{j}sh_{j}}\) on their particles and the resultant density matrix defined by: \[\rho_{out}^{r}=(U_{0,{\lambda}_{1}sh_{1}}\otimes U_{0,{\lambda}_{2}sh_{2}} \otimes...\otimes U_{0,{\lambda}_{t}sh_{t}})\rho_{1}^{r}(U_{0,{\lambda}_{1}sh_{ 1}}\otimes U_{0,{\lambda}_{2}sh_{2}}\otimes...\otimes U_{0,{\lambda}_{t}sh_{t }})^{\dagger} \tag{14}\] In an ideal situation with no noise in the quantum channel, all participants can generate the quantum state \(|\phi_{3}\rangle\). The effectiveness of the proposed QMSS scheme under various noise conditions can be assessed by measuring the fidelity between the output density matrix \(\rho_{out}^{r}\) and the quantum state \(|\phi_{3}\rangle\). The fidelity in the various noises can be characterized as follows. \[F^{df} =(1-\mu)^{t-1} \tag{15}\] \[F^{dpf} =\begin{cases}(1-\mu)^{t-1}+\frac{\mu^{t-1}}{(d-1)^{t-2}},&(t-1) \stackrel{{ d}}{{\equiv}}0;\\ (1-\mu)^{t-1},&(t-1)\stackrel{{ d}}{{\equiv}}0\end{cases}\quad(\text{ Using remark 3})\] (16) \[F^{ad} =\frac{1}{d^{2}}(1+(1-\mu)^{\frac{t-1}{2}}(d-1))^{2} \tag{17}\] To illustrate the fidelity in different noise environments, we examine the performance by taking \(t=5,12\) participants with dimensions \(d=2,3,7,13,29,53,229\). The fidelity of three distinct noise models is determined using MATLAB in Fig.(2). The influence of dit-flip noise is shown in Figs. (2a) and (2d) by a graphical depiction of the change in fidelity compared to the noise parameter \(\mu\). The graph shows that when \(t=5\), fidelity decreases with increasing noise parameter \(\mu\) and gets to zero as \(\mu\in[0.8,1]\). The fidelity \(F^{df}\) does not vary as the dimension \(d\) increases since it is independent of the quantum system's dimension. Following that, when \(\mu\) grows, the fidelity \(F^{df}\) reduces rapidly for the larger number of participants \(t\) (larger number of qudit particles). In the instance of \(t=12\), \(F^{df}\) reaches zero when \(\mu\in[0.4,1]\). The graphical depiction Figs.(2b) and (2e) of \(d\)-phase-flip fidelity \(F^{dpf}\) and noise parameter \(\mu\) show that for \(t=5\) and \(d=2\) dimensions, the fidelity \(F^{dpf}\) decreases as the noise parameter \(\mu\in[0,0.5]\) increases. Following that, \(F^{dpf}\) begins to increase and reaches \(1\) as \(\mu\in[0.5,1]\). However, at larger dimensions \(d=3,7,13,29,53,229\), the fidelity \(F^{dpf}\) declines as \(\mu\) increases and reaches zero as \(\mu\in[0.65,1]\). For \(t=12\) participants, \(F^{dpf}\) immediately falls to zero for \(\mu\in[0.4,1]\). 
The graph in Fig.(2c) of amplitude damping fidelity \(F^{ad}\) and noise parameter \(\mu\) demonstrates that for \(t=5\) and \(d=2,3,7\), the fidelity \(F^{ad}\) falls as the noise parameter \(\mu\) increases. Whereas for higher dimensions \(d=13,29,53,229\), \(F^{ad}\) rapidly decrease and becomes zero as \(\mu\in[0.7,1]\). In the instance of \(t=12\), \(F^{ad}\) decreases rapidly as \(\mu\) increases for \(d=2,3,7\) and approaches to zero as \(\mu\in[0.7,1]\) for \(d=13,29,53,229\) given in Fig.(2f). In three distinct noise environments, the effectiveness of the proposed QMSS protocol decreases as the noise parameter \(\mu\) increases. Nevertheless, within the range of noise parameter \(\mu\in[0,0.4]\), the proposed scheme demonstrates superior efficiency in the presence of amplitude-damping noise compared to the dit-flip and \(d\)-phase-flip noise channels. For \(t=5\) and \(d=2\), the proposed scheme exhibits greater efficiency in the presence of \(d\)-phase-flip noise, precisely when \(\mu\in[0.4,1]\). Figure 2: The effect of three noises on QMSS by examining the changes in fidelity \(F^{r}\) with respect to the noise parameter \(\mu\). \([0.8,1]\), compared to other noise channels. The fidelity of dit-flip and \(d\)-phase-flip noises are correlated in some instances. ## 7 Comparisons This section compares the proposed QMSS protocol with several other similar existing \(d\)-dimensional QSS protocols[22; 30; 21; 31; 33; 40]. Qin et al.[22] developed a multi-dimensional QSS protocol using SUM operator and quantum Fourier transform for encoding and decoding the qudit state as a secret. The dealer allocates the particles among \(m\) participants, with \(m-1\) participants performing measurements on their particles, while the last participant applies a unitary operation to his particle depending on the measurement outcomes. However, this \((m,m)\) threshold scheme is vulnerable to participant attacks such as forgery and collusion and cannot detect dishonest behavior of the participants, making it less flexible and less secure. Mashhadi[30] presented a QSS scheme utilizing the quantum Fourier transform and general access structure to share a classical secret. The scheme requires one trusted player in each authorized set to reconstruct the secret. While the scheme demonstrates resilience against various attacks and the ability to detect cheating, it cannot distinguish malicious participants. Sutradhar and Om[21] suggested an enhancement to the QSS scheme[19] by introducing a \((t,m)\)-threshold QSS scheme. Nonetheless, the scheme is restricted to a \((t,m)\)-threshold and cannot identify dishonest participants. Based on the two qudit generalized Bell states, a general QSS technique was introduced by Li et al.[31]. In this technique, participants reconstruct the secret by operating a generalized Pauli operator. Yan et al.[40] developed a \((t,m)\)-threshold QSS protocol with cheat-identification of the participants. The dealer provides two identical quantum states, one signed for secret sharing and the other for identifying cheating. The participants apply unitary transformations on two quantum states and verify the cheating of successive participants by quantum digital signature mechanism. Nonetheless, this protocol presents implementation challenges, and its practical application may be limited. Furthermore, in the schemes[22; 30; 21; 31; 40], the dealer is only capable of sharing a single classical secret with participants. 
In contrast, our proposed protocol enables the sharing of multiple secrets simultaneously to different subsets of participants. Mashhadi[33] presented a QSS scheme that employs a single qudit state and unitary operations to share multiple classical secrets with multiple access structures. He examines the internal eavesdropping using a memoryless qudit quantum channel and the weak locking for the erasure channel. Thus, this scheme cannot detect the dishonest participant. Furthermore, none of the above QSS schemes have been observed in noise environments. In contrast, we propose a cheat-detection QSS protocol to share multiple classical secrets with different subsets of participants. The Black box's cheat-detection technology can recognize and identify each participant's deceptive behavior. The scheme can endure several typical attacks, such as forgery and collusion attacks, making it more secure. Furthermore, we emphasize the efficiency of the scheme in various noise environments. Consequently, the proposed scheme features a robust cheat-detection technique assures the honesty of participants, thereby enhancing overall security and also demonstrating its effectiveness in noisy environments. Table 1 compares our proposed QMSS protocol and other recently developed QSS schemes. ## 8 Conclusions This study presents a \(d\)-dimensional QMSS protocol with cheat identification using multi-access structures and a monotone span program. The dealer distributes multiple classical secrets to participants, and authorized sets of participants retrieve them by utilizing QFT and unitary operators. The deception verification mechanism in the Black box can identify each participant's dishonesty. The security evaluation demonstrates that the proposed approach is resistant to various attacks, intercept resend, entangle measure, and participant attacks, including forgery and collusion. Furthermore, the proposed protocol's efficiency is evaluated using quantum fidelity in several noise models: dit-flip, \(d\)-phase-flip, and amplitude-damping. Compared to existing QSS protocols, the proposed scheme has several characteristics: to detect dishonest participants, no need for entanglement measurement, more straightforward implementation, greater efficiency, and practicality under real-world conditions. AcknowledgementThe first author, supported by grant number 09/143(0951)/2019-EMR-I, expresses gratitude to the Council of Scientific and Industrial Research (CSIR), India, for their financial assistance in conducting this work. This research is also supported by SERB core grant number CRG/2020/002040. ## Data availability Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. ## Declaration of competing interests The authors have no competing interests to declare that are relevant to the content of this article.
2309.16893
Data-driven reevaluation of $ft$-values in superallowed $β$ decays
We present a comprehensive re-evaluation of the $ft$ values in superallowed nuclear $\beta$ decays crucial for the precise determination of $V_{ud}$ and low-energy tests of the electroweak Standard Model. It consists of the first, fully data-driven analysis of the nuclear $\beta$ decay form factor, that utilizes isospin relations to connect the nuclear charged weak distribution to the measurable charge distributions. This prescription supersedes previous shell-model estimations, and allows for a rigorous quantification of theory uncertainties in $f$ which is absent in the existing literature. Our new evaluation shows an overall downward shift of the central values of $f$ at the level of 0.01\%.
Chien-Yeah Seng, Mikhail Gorchtein
2023-09-28T23:36:49Z
http://arxiv.org/abs/2309.16893v2
# Data-driven re-evaluation of \(ft\)-values in superallowed beta decays ###### Abstract We present a comprehensive re-evaluation of the \(ft\)-values in superallowed nuclear beta decays crucial for the precise determination of \(V_{ud}\) and low-energy tests of the electroweak Standard Model. It consists of the first, fully data-driven analysis of the nuclear beta decay form factor, that utilizes isospin relations to connect the nuclear charged weak distribution to the measurable charge distributions. This prescription supersedes previous shell-model estimations, and allows for a rigorous quantification of theory uncertainties in \(f\) which is absent in the existing literature. Our new evaluation shows an overall downward shift of the central values of \(f\) at the level of 0.01%. ## I Introduction The top-row Cabibbo-Kobayashi-Maskawa (CKM) matrix element \(V_{ud}\) is a fundamental parameter in the Standard Model (SM) that governs the strength of charged weak interactions involving up and down quarks. Its precise determination constitutes an important component of the low-energy tests of the SM and the search for physics beyond the Standard Model (BSM) at the precision frontier. Currently, beta transitions between isospin \(T=1\), spin-parity \(J^{P}=0^{+}\) nuclear states (the so-called superallowed nuclear beta decays) and the free neutron are the two competing candidates for the most precise determination of \(V_{ud}\). The advantage of the former is the existence of the many nuclear transitions that have been measured over decades and averaged (see, e.g. Ref.[1] and references therein), but the existence of nuclear structure effects complicates the theory analysis. In contrast, neutron decay is limited by the experimental precision but is free from nuclear uncertainties and is theoretically cleaner. The recent improved measurement of the neutron lifetime \(\tau_{n}\) by UCN\(\tau\)[2] and the axial coupling constant \(g_{A}\) by PERKEO-III [3] have made the precision of \(V_{ud}\) from neutron beta decay almost comparable to that from superallowed nuclear beta decays, but a mild tension starts to develop between the two values[82]: \[|V_{ud}|_{0^{+}}=0.97361(31)\,\quad|V_{ud}|_{n}^{\text{best}}=0.97404(42). \tag{1}\] Combining them with the best determination of \(V_{us}\) from semileptonic kaon decays, \(|V_{us}|_{K_{\ell 3}}=0.22308(55)\)[5; 6] (with \(N_{f}=2+1+1\) lattice determination of the \(K^{0}\to\pi^{-}\) transition matrix element [7; 8; 9]), and \(|V_{ub}|=3.82(20)\times 10^{-3}\)[10], the first number leads to a \(3.6\sigma\) deficit of the first-row CKM unitarity \(|V_{ud}|^{2}+|V_{us}|^{2}+|V_{ub}|^{2}=1\), but with the second number the deficit is only \(1.7\sigma\). Given its profound impact on the SM precision tests, it is important to understand the origin of such a discrepancy. It is commonplace for low-energy precision tests that the main limitation in precision comes from radiative corrections that are sensitive to the effects of the strong interaction, which is described by Quantum Chromodynamics (QCD). At low energies QCD is nonperturbative, which complicates the uncertainty estimation of theory calculations. In recent years, effort has been put into developing methods that allow such corrections to beta decay to be computed with controlled systematics.
The dispersion relation (DR) [11; 12; 13], effective field theory (EFT) [14; 15] and lattice QCD [16; 17; 18] analyses have ensured a high-precision determination of the single-nucleon radiative correction. The SM theory uncertainties in the free neutron decay are believed to be firmly under control at the precision level of \(10^{-4}\). The theory for superallowed nuclear beta decays is more involved due to the presence of specifically nuclear corrections. This fact is reflected in the master formula for the extraction of \(V_{ud}\)[19][83], \[|V_{ud}|_{0^{+}}^{2}=\frac{\pi^{3}\ln 2}{G_{F}^{2}m_{e}^{5}{\cal F}t(1+\Delta_{R}^{V})}. \tag{2}\] Above, \(\Delta_{R}^{V}\) is the free-nucleon radiative correction which is also present in neutron decay. All nuclear structure effects are absorbed into the so-called \({\cal F}t\)-value [20], \[{\cal F}t=ft(1+\delta_{\text{R}}^{\prime})(1+\delta_{\text{NS}}-\delta_{\text{C}})\,, \tag{3}\] where \(t\), the partial half-life, is the only purely experimental observable. All the remaining quantities in the expression above require nuclear theory inputs at either tree or loop level. First, \(\delta_{\text{R}}^{\prime}\) is known as the nucleus-dependent _outer_ radiative correction, which is calculable order-by-order with Quantum Electrodynamics (QED) treating the nucleus as a point charge [21; 22; 23]. The remaining radiative corrections that depend on the nuclear structure are contained in \(\delta_{\text{NS}}\), which has previously been studied in the nuclear shell model [24; 25; 26; 27; 20]. Furthermore, \(\delta_{\text{C}}\) represents the isospin-symmetry-breaking (ISB) correction to the Fermi matrix element. This correction has been the object of study by the nuclear theory community over the past six decades [27; 28; 29; 30; 31; 32]. Both \(\delta_{\rm NS}\) and \(\delta_{\rm C}\) have recently been under renewed scrutiny [33; 34; 35; 36; 12], and new methods were devised to study them either using nuclear ab-initio methods [37; 38] or by relating them to experimental measurements [39], which we will not discuss here. The focus of this paper is the statistical rate function \(f\) in superallowed beta decays. It represents the phase space integral over the spectrum of the positron originating from a beta decay process \(\phi_{i}\to\phi_{f}e^{+}\nu_{e}\). At the leading order it is fixed by the atomic mass splitting (i.e. the \(Q_{\rm EC}\) value). However, a number of effects that lead to sizable corrections to the spectrum have to be included in \(f\), and these require theory inputs from atomic and nuclear physics. Among these are the distortion of the outgoing positron wave function in the Coulomb field of the daughter nucleus, the nuclear form factors, screening effects from atomic electrons, recoil corrections, etc. In principle, each of these inputs bears its own theory uncertainty which must be accounted for in the total error budget. Unfortunately, in most existing literature, including the series of reviews by Hardy and Towner [40; 41; 42; 1], only the experimental uncertainty of \(Q_{\rm EC}\) is included in the evaluation of \(f\). Here we address the validity of this assumption, given the precision goal of \(10^{-4}\) for the extraction of \(V_{ud}\). 
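For orientation, the sketch below evaluates Eq. (2) numerically. The averaged \({\cal F}t\) value and \(\Delta_{R}^{V}\) used here are representative numbers taken from the broader literature, not results of this paper, and serve only to illustrate how \(V_{ud}\) follows from the inputs.

```python
import math

hbar = 6.582119569e-22    # MeV s
m_e  = 0.51099895         # MeV
G_F  = 1.1663787e-11      # MeV^-2, Fermi constant G_F/(hbar c)^3

# Representative inputs (assumptions for illustration, not results of this paper):
Ft      = 3072.0          # averaged curly-F t value, in seconds
Delta_R = 0.02467         # free-nucleon radiative correction Delta_R^V

# Eq. (2): |Vud|^2 = pi^3 ln2 / (G_F^2 m_e^5 Ft (1 + Delta_R^V)),
# with Ft converted from seconds to natural units (MeV^-1) through hbar.
Ft_nat = Ft / hbar
Vud2 = math.pi**3 * math.log(2.0) / (G_F**2 * m_e**5 * Ft_nat * (1.0 + Delta_R))
print(f"|Vud| = {math.sqrt(Vud2):.5f}")   # ~ 0.974 for these inputs
```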
A quantity of fundamental importance in the determination of the statistical rate function is the charged weak form factor \(f_{+}(q^{2})\), defined through the (relativistic) nuclear matrix element of the vector charged current[84]: \[{}_{\rm QFT}\langle\phi_{f}(p_{f})|({J}_{W}^{\dagger\mu}(0))_{V}|\phi_{i}(p_{i})\rangle_{\rm QFT}\] \[=f_{+}(q^{2})(p_{i}+p_{f})^{\mu}+f_{-}(q^{2})(p_{i}-p_{f})^{\mu}\, \tag{4}\] with \(q^{2}=(p_{i}-p_{f})^{2}\). In nuclear physics it is common to use the Breit frame, where \(q\) only has a spatial component. The contribution of \(f_{-}\) to the decay rate is suppressed simultaneously by ISB and kinematics, so only \(f_{+}\) is relevant. After scaling out its \(\vec{q}^{\,2}=0\) value, which is just the Fermi matrix element \(M_{F}\) (\(=\sqrt{2}\) in the isospin limit), one can perform a Fourier transform[85] \[f_{+}(q^{2})=M_{F}\int d^{3}xe^{-i\vec{q}\cdot\vec{x}}\rho_{\rm cw}(r)\, \tag{5}\] which defines the nuclear charged weak distribution \(\rho_{\rm cw}(r)\); it is essentially the distribution of "active" protons eligible to transition weakly into a neutron in a nucleus. Obviously, \(\rho_{\rm cw}(r)\) is a basic property of the nucleus, just like the nuclear charge distribution \(\rho_{\rm ch}(r)\). Yet, in the literature they are treated with very different levels of rigor: \(\rho_{\rm ch}(r)\) was deduced from experimental data where uncertainties are (in principle) quantifiable, whereas \(\rho_{\rm cw}(r)\) is evaluated using simplified nuclear models. This may introduce an uncontrolled systematic uncertainty and neglects the fact that the two distributions are correlated. The purpose of this paper is to perform a careful re-evaluation of \(f\) with a more rigorous, data-driven error analysis. In particular, we adopt the strategy pioneered in Ref.[43] that connects \(\rho_{\rm cw}(r)\) to the charge distributions of the members of the superallowed isotriplet using model-independent isospin relations. This prescription transforms the non-quantifiable model uncertainty in the usual approach to \(\rho_{\rm cw}(r)\) into uncertainty estimates that are derived from experimental ones under the sole assumption of an approximate isospin symmetry. Furthermore, the new approach automatically accounts for the correlation between the Fermi function and the decay form factor, and treats their uncertainties on the same footing. We furthermore analyse possible uncertainties from secondary effects, such as the screening corrections by the atomic electrons. With these, we report a set of 13 newly-calculated \(f\) values, with a much more robust uncertainty estimate. Our result lays a foundation for the future, more rigorous extraction of \(V_{ud}\) from superallowed beta decays. This work is organized as follows. In Sec.II we introduce the statistical rate function, specifying various correction terms. A particular emphasis is put on the shape factor that depends on the charged weak distribution. In Sec.III we describe the isospin relations that connect different electroweak distribution functions. Sec.IV is the central part of this work, where we describe in full detail our procedure of selecting the nuclear charge distribution data that we use for the data-driven analysis. In Sec.V we discuss our treatment of the secondary nuclear/atomic structure effects that enter \(f\). We present our final results in Sec.VI and discuss their influence and prospects. 
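For a spherically symmetric distribution, Eq. (5) reduces to a radial integral against \(j_{0}(qr)\). The sketch below evaluates it for a toy Gaussian \(\rho_{\rm cw}\); the Gaussian shape and its radius are illustrative assumptions, since the realistic distribution is only constructed in Sec. IV.

```python
import numpy as np
from scipy.integrate import quad

def form_factor(rho, q, r_max=50.0):
    """f_+(q^2)/M_F of Eq. (5) for a spherically symmetric, unit-normalized rho(r):
    int d^3x exp(-i q.x) rho(r) = int_0^inf 4 pi r^2 j0(q r) rho(r) dr."""
    integrand = lambda r: 4.0 * np.pi * r**2 * np.sinc(q * r / np.pi) * rho(r)
    val, _ = quad(integrand, 0.0, r_max, limit=200)
    return val

# Toy Gaussian "charged weak" distribution with RMS radius r0 (illustrative only).
r0 = 3.0
rho_toy = lambda r: (1.5 / (np.pi * r0**2))**1.5 * np.exp(-1.5 * r**2 / r0**2)

print(form_factor(rho_toy, 0.0))   # -> 1: the Fermi matrix element is scaled out
print(form_factor(rho_toy, 0.5))   # suppression at finite momentum transfer
```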
Some useful formulas on the solutions of the Dirac equation, the shape factor and the nuclear charge distributions can be found in the Appendix. ## II Statistical rate function and the shape factor We study the superallowed \(\beta^{+}\) decay, \(\phi_{i}\to\phi_{f}e^{+}\nu_{e}\), where we denote the positron energy and momentum as \(E\equiv E_{e}\) and \(\vec{p}\equiv\vec{p}_{e}\), with \({\bf p}=|\vec{p}|\). The positron endpoint energy of the decay is given by \(E_{0}^{\rm full}\equiv(M_{i}^{2}-M_{f}^{2}+m_{e}^{2})/(2M_{i})\), but upon neglecting recoil corrections it can be approximated as \(E_{0}\equiv M_{i}-M_{f}\). Before applying various corrections, the uncorrected differential decay rate is proportional to \({\bf p}E(E_{0}-E)^{2}\). The statistical rate function \(f\) is defined as the integrated decay rate in atomic units (\(\hbar=c=m_{e}=1\)). Ref.[44] provided an in-depth survey of some 12 different types of atomic/nuclear corrections that should be applied to the formula above for a generic allowed beta decay. For superallowed decays of \(0^{+}\) nuclei, the number of relevant corrections is reduced. Therefore, following Refs.[40; 41], we express the statistical rate function as \[f=m_{e}^{-5}\int\limits_{m_{e}}^{E_{0}}{\bf p}E(E_{0}-E)^{2}F(E)C(E)Q(E)R(E)r(E)dE\,, \tag{6}\] where we have arranged the correction factors in decreasing order of importance: (1) the Fermi function \(F(E)\), (2) the shape factor \(C(E)\), (3) the atomic screening correction \(Q(E)\), (4) the kinematic recoil correction \(R(E)\), and (5) the atomic overlap correction \(r(E)\). All five corrections depend on the nucleus, which is usually denoted by carrying the daughter nucleus charge \(Z\) as a second argument, but we suppress this dependence for compactness. In this work we classify the former two corrections as _primary_; they will be evaluated coherently using the most recent nuclear distribution data. The latter three corrections, on the other hand, are classified as _secondary_ and we will not treat them differently than in the literature (except for a more careful accounting of theory uncertainties). We start from the largest correction, the Fermi function \(F(E)\) that accounts for the Coulomb interaction between the outgoing positron and the _daughter_ nucleus [45]. Historically, it was first derived by solving the Dirac equation of the charged lepton under the Coulomb potential of a point-like nucleus, for which the equation is analytically solvable. The solution diverges at \(r=0\), so it was instead evaluated at an arbitrarily-chosen nuclear radius \(R\)[46]; corrections due to the finite nuclear charge distributions could then be added on top of it [47; 48]. Here we do not adopt this two-step approach, but rather solve the full Dirac equation numerically with a given nuclear charge distribution. The numerical solution is finite at \(r=0\), from which we can define the Fermi function as: \[F(E)=\frac{f_{+1}^{2}(0)+g_{-1}^{2}(0)}{2\mathbf{p}^{2}}=\frac{\alpha_{+1}^{2}+\alpha_{-1}^{2}}{2\mathbf{p}^{2}}\, \tag{7}\] where the coefficients \(\alpha_{\pm k}\) (\(k=1\) in this case) come from the solution of the radial Dirac equation, detailed in Appendices A and B. Fig.1 shows the typical shape of the Fermi function for \(\beta^{+}\) decay: since the Coulomb force is repulsive for a positron, the probability of finding a low-energy positron at \(r=0\) is suppressed. 
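A minimal numerical sketch of Eq. (6) is given below: the five corrections enter as callables, and for \(F(E)\) we substitute the simple nonrelativistic point-charge Gamow factor purely as a placeholder, since the actual Fermi function used in this work comes from the numerical Dirac solution of Eq. (7). The endpoint and daughter charge are illustrative.

```python
import numpy as np
from scipy.integrate import quad

m_e = 1.0                     # atomic units, as in Eq. (6)
alpha = 1.0 / 137.035999

def statistical_rate(E0, F, C=lambda E: 1.0, Q=lambda E: 1.0,
                     R=lambda E: 1.0, r=lambda E: 1.0):
    """Eq. (6): f = int_{m_e}^{E0} p E (E0 - E)^2 F C Q R r dE (hbar = c = m_e = 1)."""
    def integrand(E):
        p = np.sqrt(E**2 - m_e**2)
        return p * E * (E0 - E)**2 * F(E) * C(E) * Q(E) * R(E) * r(E)
    val, _ = quad(integrand, m_e * (1.0 + 1e-9), E0, limit=200)
    return val

def F_gamow(E, Z):
    """Placeholder Fermi function: nonrelativistic Gamow suppression for a
    positron repelled by a daughter nucleus of charge Z (illustration only)."""
    y = 2.0 * np.pi * Z * alpha * E / np.sqrt(E**2 - m_e**2)
    return y / np.expm1(y)

E0 = 8.0   # endpoint energy in units of m_e, illustrative
print(statistical_rate(E0, lambda E: F_gamow(E, Z=10)))
```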
The second largest correction is the shape factor \(C(E)\), which incorporates the influence of the beta decay form factor in Eq.(4) (or equivalently, the charged weak distribution in Eq.(5)). A closed expression was obtained by Behrens and Buhring [49]: \[C(E)=\sum_{k}\lambda_{k}\left\{M_{0}^{2}(k)+m_{0}^{2}(k)-\frac{2\mu_{k}\gamma_{k}}{kE}M_{0}(k)m_{0}(k)\right\}\, \tag{8}\] where \(k=+1,+2,...\). The involved Coulomb functions are \[\lambda_{k}=\frac{\alpha_{-k}^{2}+\alpha_{+k}^{2}}{\alpha_{-1}^{2}+\alpha_{+1}^{2}}\,,\quad\mu_{k}=\frac{\alpha_{-k}^{2}-\alpha_{+k}^{2}}{\alpha_{-k}^{2}+\alpha_{+k}^{2}}\frac{kE}{\gamma_{k}}\, \tag{9}\] where \(\gamma_{k}=\sqrt{k^{2}-\alpha^{2}Z_{f}^{2}}\), with \(Z_{f}\) the atomic number of the _daughter_ nucleus. The functions that depend on \(\rho_{\rm cw}(r)\) are: \[M_{0}(k) = \frac{\sqrt{k}}{(2k-1)!!}\int_{0}^{\infty}4\pi r^{2}dr\rho_{\rm cw}(r)(\mathbf{p}r)^{k-1}\] \[\times\left[H_{k}(r)j_{k-1}(E_{\nu}r)-\frac{r}{R}D_{k}(r)j_{k}(E_{\nu}r)\right]\] \[m_{0}(k) = \frac{\sqrt{k}}{(2k-1)!!}\int_{0}^{\infty}4\pi r^{2}dr\rho_{\rm cw}(r)(\mathbf{p}r)^{k-1} \tag{10}\] \[\times\left[h_{k}(r)j_{k-1}(E_{\nu}r)-\frac{r}{R}d_{k}(r)j_{k}(E_{\nu}r)\right]\] where \(E_{\nu}\approx E_{0}-E\) is the neutrino energy, and the functions \(H_{k}\), \(h_{k}\), \(D_{k}\) and \(d_{k}\) are defined in Appendix A. Notice that the overall Fermi matrix element has been factored out from the definitions above. The derivation of this master formula can be found in Appendix C. One can also check that it reduces to the simple expression in Ref.[43] upon switching off the electromagnetic interaction. The series in Eq.(8) converges very fast. In fact, explicit calculation shows that the \(k=2\) correction to \(f\) is smaller than \(0.0003\%\) for all measured transitions (i.e. up to \(A=74\)), therefore it is sufficient to retain only the \(k=1\) term, which greatly simplifies the analysis. ## III Isospin formalism In the survey by Hardy and Towner [40], the weak form factor was evaluated in the impulse approximation, where the nucleus is treated as a collection of non-interacting nucleons. In this formalism, the nuclear matrix element of the weak transition operator \(\hat{O}\) reads \[\langle\phi_{f}|\hat{O}|\phi_{i}\rangle=\sum_{\alpha\beta}\langle\alpha|\hat{O}|\beta\rangle\langle\phi_{f}|a_{\alpha}^{\dagger}a_{\beta}|\phi_{i}\rangle\, \tag{11}\] where \(\{\alpha,\beta\}\) are single-nucleon states, \(\{a_{\alpha}^{\dagger},a_{\beta}\}\) are their corresponding creation and annihilation operators, \(\langle\alpha|\hat{O}|\beta\rangle\) is the single-nucleon matrix element, and \(\langle\phi_{f}|a_{\alpha}^{\dagger}a_{\beta}|\phi_{i}\rangle\) the one-body density matrix element evaluated with the shell model. In this formalism the Fermi function and the shape factor are completely decoupled, and the theory error from the shell model calculation is not quantifiable. Figure 1: A plot of the Fermi function \(F(E)\) with respect to \(\mathbf{p}\) (in units of \(m_{e}\)) for \({}^{22}\)Mg. The error band due to uncertainties from the nuclear charge distribution parameters is too small to be visible. An alternative approach was adopted by Wilkinson in Ref.[50]. It consists of first identifying \(\rho_{\rm cw}\) with \(\rho_{\rm ch}\) at zeroth order and adding a correction that is assumed to be small, \[\rho_{\rm cw}(r)=\rho_{\rm ch}(r)+\delta\rho(r)\, \tag{12}\] with the latter estimated in the nuclear shell model. 
Ref.[44] interpreted \(\delta\rho(r)\) as a consequence of ISB (Section F) and assumed it to be small. However, we will show that the size of \(\delta\rho(r)\) is enhanced and is comparable to \(\rho_{\rm ch}(r)\), hence it cannot be taken as a small correction. In this work we perform a consistent treatment of \(F(E)\) and \(C(E)\) using the isospin formalism, known in the early days as the conserved vector current (CVC) hypothesis [51; 52; 53]. It arises from the expressions of the vector charged weak and electromagnetic currents: \[(J_{W}^{\dagger\mu})_{V} =\bar{d}\gamma^{\mu}u\,, \tag{13}\] \[J_{\rm em}^{\mu} =\frac{1}{6}(\bar{u}\gamma^{\mu}u+\bar{d}\gamma^{\mu}d)+\frac{1}{2}(\bar{u}\gamma^{\mu}u-\bar{d}\gamma^{\mu}d)\,,\] where the former is purely isovector, while the latter has both isoscalar and isovector components. It is therefore the presence of the isoscalar electromagnetic current that gives rise to a non-zero \(\delta\rho(r)\), even in the absence of ISB. To connect the nuclear matrix elements of the two currents, we construct linear combinations which subtract out the matrix element of the isoscalar current. To that end, we apply the Wigner-Eckart theorem in the isospin space to the members of the \(0^{+}\) isotriplet \(T_{f}=T_{i}=1\), \[\langle T_{f},T_{z,f}|O_{T_{z}}^{T}|T_{i},T_{z,i}\rangle =(-1)^{T_{f}-T_{z,f}}\sqrt{2T_{f}+1} \tag{14}\] \[\times\left(\begin{array}{ccc}T_{f}&T&T_{i}\\ -T_{z,f}&T_{z}&T_{z,i}\end{array}\right)\langle T_{f}||O^{T}||T_{i}\rangle\,,\] where \(\langle T_{f}||O^{T}||T_{i}\rangle\) is a reduced matrix element. Expressing now the time component of the electroweak currents as tensors in the isospin space, \[J_{\rm em}^{0}=O_{0}^{0}-\frac{1}{2}O_{0}^{1}\,,\quad(J_{W}^{\dagger 0})_{V}=-\frac{1}{\sqrt{2}}O_{1}^{1}\,, \tag{15}\] we obtain the electromagnetic and charged weak form factors as: \[Z_{T_{z}}F_{{\rm ch},T_{z}}^{0} =\langle 1,T_{z}|J_{\rm em}^{0}|1,T_{z}\rangle \tag{16}\] \[=-\frac{T_{z}}{2\sqrt{2}}\langle 1||O^{1}||1\rangle+\langle 1||O^{0}||1\rangle\] \[M_{F}^{0}F_{\rm cw}^{0} =\langle 1,T_{zf}|(J_{W}^{\dagger 0})_{V}|1,T_{zi}\rangle=\frac{1}{2}\langle 1||O^{1}||1\rangle\,,\] with \(M_{F}^{0}=\sqrt{2}\), and \(Z_{T_{z}}\) the atomic number of the nucleus with isospin quantum numbers \((1,T_{z})\)[86]. Fourier-transforming the form factors into the coordinate space gives: \[\rho_{\rm cw}(r) =\rho_{{\rm ch},1}(r)+Z_{0}\left(\rho_{{\rm ch},0}(r)-\rho_{{\rm ch},1}(r)\right)\] \[=\rho_{{\rm ch},1}(r)+\frac{Z_{-1}}{2}\left(\rho_{{\rm ch},-1}(r)-\rho_{{\rm ch},1}(r)\right). \tag{17}\] This means, taking \(\rho_{{\rm ch},1}\) as a reference distribution (since the neutron-rich nucleus is always the most stable), one has \(\delta\rho=Z_{0}(\rho_{{\rm ch},0}-\rho_{{\rm ch},1})=Z_{-1}(\rho_{{\rm ch},-1}-\rho_{{\rm ch},1})/2\). In Fig.2 we show the plot of two \(\rho_{\rm ch}\) along with \(\rho_{\rm cw}\) in the \(A=22\) isotriplet, the latter obtained from the isospin relation (17). A few features are observed: 1. \(\rho_{\rm cw}\) differs significantly from all \(\rho_{\rm ch}\), and \(\delta\rho\) cannot be taken as a small perturbation. 2. Unlike the ordinary charge distributions that fall off monotonically with increasing \(r\), \(\rho_{\rm cw}\) is peaked at a larger value of \(r\). This can be qualitatively understood in a shell-model picture: while a photon couples equally to all protons inside the nucleus, a \(W\)-boson can only couple to a proton in the outermost shell because the corresponding neutron state in an inner shell is filled. 3. 
The error band of \(\rho_{\rm cw}\) deduced from the isospin relation is much larger than that of the individual \(\rho_{\rm ch}\) due to the \(Z\)-factor enhancement in \(\delta\rho\). In short, the isospin relation (17) allows us to evaluate \(F(E)\) and \(C(E)\) simultaneously with reduced model dependence and a fully-correlated error analysis. Finally, isospin symmetry also relates the three charge distributions within an isotriplet: \[2Z_{0}\rho_{{\rm ch},0}(r)=Z_{1}\rho_{{\rm ch},1}(r)+Z_{-1}\rho_{{\rm ch},-1}(r). \tag{18}\] Therefore, if the charge distribution of a particular daughter nucleus is unknown, one can still obtain it if the other two charge distributions within the isotriplet are known. For example, the unknown charge distribution of \({}^{18}\)F(ex) can be deduced from the data of \({}^{18}\)Ne and \({}^{18}\)O using the formula above. Figure 2: Plot of the nuclear charge distribution \(\rho_{\rm ch}(r)\) in atomic units (\(\hbar=c=m_{e}=1\)) for \({}^{22}\)Mg (blue), \({}^{22}\)Ne (red), and the corresponding charged weak distribution \(\rho_{\rm cw}(r)\) (green). The selection of nuclear charge distribution data is explained in Sec.IV. ## IV Selection of nuclear charge distribution data A comprehensive data-driven analysis of \(f\) using the isospin formalism requires a careful selection of nuclear charge distribution data. The most important distribution parameter is the root-mean-square (RMS) charge radius, \[r_{\rm RMS}\equiv\langle r_{\rm ch}^{2}\rangle^{1/2}=\left[\int_{0}^{\infty}4\pi r^{2}\,r^{2}\rho_{\rm ch}(r)dr\right]^{1/2}. \tag{19}\] For stable nuclei it can be extracted from elastic electron scattering or from spectra of muonic atoms. For unstable ones it can be deduced from the field shift relative to a stable reference nucleus. Many compilations of nuclear charge radii are available, including Fricke, Heilig and Schopper [59], Angeli and Marinova [54], and Li _et al._[55]. While the data analysis in Ref.[59] is more transparent, Refs.[54; 55] cover more nuclei and will be adopted in this paper, along with several new measurements [56; 57; 58]. We summarize the available data of RMS nuclear radii relevant to superallowed transitions in Table 1. The full functional form of the nuclear charge distribution beyond the RMS charge radius can only be extracted from electron scattering off stable nuclei, where the available data is quite limited. The most recent compilation by de Vries _et al._, which will be our main source of reference, dates back to 1987 [60]. In Appendix D, we summarize the most commonly used parameterizations in that compilation: the two-parameter Fermi (2pF), three-parameter Fermi (3pF), three-parameter Gaussian (3pG) and harmonic oscillator (HO). For each distribution, we define the "primary" distribution parameter, which is just \(\langle r_{\rm ch}^{2}\rangle^{1/2}\), and one or two independent, "secondary" distribution parameters (\(a\) for 2pF, \(a\) and \(w\) for 3pF and 3pG, \(\alpha_{\rm HO}\) for HO). The primary parameter is always taken from Table 1, whereas the secondary parameters are taken from the compilation by de Vries _et al._. The analytic expressions of \(\langle r_{\rm ch}^{2}\rangle\) given in Appendix D then allow us to fix the remaining, non-independent parameters (\(c\) for 2pF, 3pF and 3pG, \(b\) for HO). Given the limited information, we must develop a selection criterion in order to make full use of the data in Ref.[60] to determine all the (independent) secondary distribution parameters. 
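A minimal numerical sketch of these two steps, under simplifying assumptions, is given below: the half-density radius \(c\) of a 2pF distribution is fixed from a measured RMS radius and a tabulated diffuseness \(a\), and two such charge distributions are then combined through the isospin relation (17). For illustration both measured members of the \(A=22\) triplet are treated as 2pF shapes with the same diffuseness, which is a simplification (a 3pF form is actually adopted for \({}^{22}\)Mg below).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def rho_2pF(r, c, a):
    """Two-parameter Fermi distribution, normalized so that int 4 pi r^2 rho dr = 1."""
    norm, _ = quad(lambda x: x**2 / (1.0 + np.exp((x - c) / a)), 0.0, c + 25.0 * a)
    return 1.0 / (4.0 * np.pi * norm * (1.0 + np.exp((r - c) / a)))

def rms_2pF(c, a):
    """RMS radius of the 2pF shape, computed numerically."""
    num, _ = quad(lambda x: x**4 / (1.0 + np.exp((x - c) / a)), 0.0, c + 25.0 * a)
    den, _ = quad(lambda x: x**2 / (1.0 + np.exp((x - c) / a)), 0.0, c + 25.0 * a)
    return np.sqrt(num / den)

def c_from_rms(r_rms, a):
    """Fix the non-independent parameter c so that the 2pF shape reproduces r_rms."""
    return brentq(lambda c: rms_2pF(c, a) - r_rms, 0.1, 3.0 * r_rms)

a = 0.549                          # fm, the 22Ne diffuseness adopted below; reused for 22Mg
c_Ne = c_from_rms(2.9525, a)       # 22Ne  (T_z = +1, Z = 10)
c_Mg = c_from_rms(3.0691, a)       # 22Mg  (T_z = -1, Z = 12)

# Isospin relation (17): rho_cw = rho_ch,1 + (Z_{-1}/2)(rho_ch,-1 - rho_ch,1).
Z_m1 = 12
r_grid = np.linspace(0.0, 8.0, 161)
rho_Ne = np.array([rho_2pF(r, c_Ne, a) for r in r_grid])
rho_Mg = np.array([rho_2pF(r, c_Mg, a) for r in r_grid])
rho_cw = rho_Ne + 0.5 * Z_m1 * (rho_Mg - rho_Ne)
```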
Inspired by Ref.[40], we adopt the following prescription: 1. If the data for a desired nucleus is available in Ref.[60], we use the secondary parameter(s) listed there. 2. If the data of a particular nucleus is not available, we take the secondary parameter(s) from the nearest isotope. 3. If no data of any isotope exists, we take the secondary parameter(s) from an available nucleus with the closest mass number \(A\). For some nuclei there is more than one set of distribution parameters given in Ref.[60]. In that case we need to choose the "best" set of data, which we evaluate according to the following criterion. First, we compare the quoted central value of \(r_{\rm RMS}\) for an available nucleus in de Vries's compilation (not necessarily one that participates in a superallowed decay) with those in Angeli's review [54]. The latter typically has a smaller uncertainty. We then use \(|r_{\rm deVries}-r_{\rm Angeli}|\) as a measure of the accuracy of de Vries's fitting. At the same time, we use the quoted uncertainty of \(r_{\rm RMS}\) in de Vries's compilation, \(\delta r_{\rm deVries}\), as a measure of its precision. Then, we may select the set of data which has the best overall accuracy and precision by requiring: \[\Delta\equiv(r_{\rm deVries}-r_{\rm Angeli})^{2}+(\delta r_{\rm deVries})^{2}=\text{min}. \tag{20}\] Finally, we are only interested in those nuclear isotriplets where at least two charge radii are measured, such that the isospin formalism can be applied. This includes 8 nuclear isotriplets and covers 13 superallowed transitions. In what follows we summarize, for all such nuclei, the charge distribution parameters that we chose for the evaluation of the Fermi function and the shape factor, and explain the reasoning behind our choice. ### \(A=18\) * For \({}^{18}\)Ne, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=2.9714(76)\) fm [54]. The nearest isotope of which charge distribution data exists in Ref.[60] is \({}^{20}\)Ne, with three parameterizations: 2pF (1971) [61], 2pF (1981) [62] and 3pF (1985) [63]. We adopt the secondary parameters from 3pF (1985): \(a=0.698(5)\) fm, \(w=-0.168(8)\), which returns the smallest \(\Delta\). * For \({}^{18}\)O, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=2.7726(56)\) fm [54]. The charge distribution data exists in Ref.[60], from which we take the secondary parameter: HO (1970), \(\alpha_{\rm HO}=1.513\)[64]. ### \(A=22\) * For \({}^{22}\)Mg, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=3.0691(89)\) fm [55]. The nearest isotope of which charge distribution data exists in Ref.[60] is \({}^{24}\)Mg, with three parameterizations: 3pF (1974) [65], 3pF (1974v2) [66] and 2pF (1976) [67]. We adopt the secondary parameters from 3pF (1974): \(a=0.607(9)\) fm, \(w=-0.163(30)\), which returns the smallest \(\Delta\). * For \({}^{22}\)Ne, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=2.9525(40)\) fm [54]. The charge distribution data exists in Ref.[60], from which we take the secondary parameter: 2pF (1971), \(a=0.549(4)\) fm [61]. ### \(A=34\) * For \({}^{34}\)S, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=3.2847(21)\) fm [54]. The nearest isotope of which charge distribution data exists in Ref.[60] is \({}^{32}\)S, from which we take the secondary parameters: 3pG (1974), \(a=2.191(10)\) fm, \(w=0.160(12)\)[69]. ### \(A=38\) * For \({}^{38}\)Ca, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=3.467(1)\) fm [56]. 
The nearest isotope of which charge distribution data exists in Ref.[60] is \({}^{40}\)Ca, from which we take the secondary parameters: 3pF (1973), \(a=0.586(5)\) fm, \(w=-0.161(23)\)[70]. * For \({}^{38m}\)K, its RMS charge radius is experimentally known, \(\langle r_{\rm ch}^{2}\rangle^{1/2}=3.437(4)\) fm [57], but it is the least precise among all three in the isotriplet. We therefore obtain instead the radius and charge distribution of this nucleus using the isospin relation (18). * For \({}^{38}\)Ar, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=3.4028(19)\) fm [54]. The nearest isotope of which charge distribution data exists in Ref.[60] is \({}^{36}\)Ar, which we mentioned above. We emphasize that this nuclear isotriplet plays a special role as it is the only isotriplet where all three nuclear charge radii are measured. This allows us to test the validity of the CVC assumption we used in deducing \(\rho_{\rm cw}\). As pointed out in Ref.[39], a non-zero value of the quantity \[\Delta M_{B}^{(1)}\equiv\frac{1}{2}\left(Z_{1}\langle r_{\rm ch,1}^{2}\rangle+Z_{-1}\langle r_{\rm ch,-1}^{2}\rangle\right)-Z_{0}\langle r_{\rm ch,0}^{2}\rangle \tag{21}\] measures a nuclear isospin-mixing effect not probed by the nuclear mass splitting. Using Table 1, we obtain \(\Delta M_{B}^{(1)}=-0.03(54)\) fm\({}^{2}\), which is consistent with zero. This shows that the current experimental precision of radii observables is not yet sufficient to resolve ISB effects; it also validates our strategy of using CVC with experimental data. ### \(A=42\) * For \({}^{42}\)Sc, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=3.5702(238)\) fm [54]. No data of charge distributions on Sc isotopes exists in Ref.[60], so we pick the available nucleus of nearest mass number, \({}^{40}\)Ca, which we mentioned above. * For \({}^{42}\)Ca, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=3.5081(21)\) fm [54]. The nearest isotope of which charge distribution data exists in Ref.[60] is \({}^{40}\)Ca, which was already mentioned before. ### \(A=50\) * For \({}^{50}\)Mn, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=3.7120(196)\) fm [54]. The nearest isotope of which charge distribution data exists in Ref.[60] is \({}^{55}\)Mn, from which we take the secondary parameter: 2pF (1969), \(a=0.567\) fm [71]. * For \({}^{50}\)Cr, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=3.6588(65)\) fm [54]. The charge distribution data exists in Ref.[60], with two parameterizations: 2pF (1976) [72] and 2pF (1978) [73]. We adopt the secondary parameter from 2pF (1976): \(a=0.520(13)\) fm, which returns the smallest \(\Delta\). ### \(A=54\) * For \({}^{54}\)Ni, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=3.738(4)\) fm [58]. The nearest isotope of which charge distribution data exists in Ref.[60] is \({}^{58}\)Ni, from which we take the secondary parameters: 3pF (1970), \(a=0.5169\) fm, \(w=-0.1308\)[74]. * For \({}^{54}\)Fe, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=3.6933(19)\) fm [54]. The charge distribution data exists in Ref.[60], with three parameterizations: 3pG (1976) [75], 2pF (1976) [72] and 2pF (1978) [73]. We adopt the secondary parameters from 3pG (1976): \(a=2.270(12)\) fm, \(w=0.403(15)\), which returns the smallest \(\Delta\). ### \(A=74\) * For \({}^{74}\)Rb, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=4.1935(172)\) fm [55]. No data of charge distributions on Rb isotopes exists in Ref.[60], so we pick the available nucleus of nearest mass number, \({}^{72}\)Ge, from which we take the secondary parameter: 2pF (1975), \(a=0.573(7)\) fm [76]. 
* For \({}^{74}\)Kr, we take \(\langle r_{\rm ch}^{2}\rangle^{1/2}=4.1870(41)\) fm [54]. No data of charge distributions on Kr isotopes exists in Ref.[60], so we pick the available nucleus of nearest mass number, \({}^{72}\)Ge, which was already mentioned before. With the information above, one can now evaluate \(F(E)\) and \(C(E)\) simultaneously using Eqs.(7), (8) with a fully-correlated error analysis. ## V Secondary corrections This section outlines the procedure we adopt to compute the remaining, "secondary" corrections to \(f\) in Eq.(6). ### Screening correction The presence of atomic electrons that reside around the atomic radius \(r_{A}\sim 1\) Å alters the nuclear potential felt by the outgoing positron; namely, at very large \(r\) the positron feels not the point-like Coulomb potential \(V(r)=|Z|\alpha/r\) (where \(\alpha\) is the fine-structure constant) but a screened version. To estimate this correction, we use the simple formula by Rose [78] derived from the Wentzel-Kramers-Brillouin (WKB) approximation: \[Q(E)=\frac{\tilde{\bf p}\tilde{E}}{{\bf p}E}\frac{F(\tilde{E})}{F(E)}\, \tag{22}\] with \(\tilde{E}=E-V_{0}\), \(\tilde{\bf p}=\sqrt{\tilde{E}^{2}-m_{e}^{2}}\), \(V_{0}=\alpha^{2}Z_{i}^{4/3}\mathfrak{N}(Z_{i})\), where \(Z_{i}\) is the atomic number of the _parent_ nucleus, and the function \(\mathfrak{N}(Z_{i})\) can be computed approximately using Hartree-Fock wavefunctions; here we obtain its functional form by interpolating the discrete points in Ref.[77], which we reproduce in Table 2 for the convenience of the reader. The size of the screening correction is of the order \(10^{-3}\), but the simplified formula above does not permit a rigorous quantification of its uncertainty. Nevertheless, one could gain some insight by comparing the outcomes of different models. Ref.[44] compared the simple Rose formula to the solution of a more sophisticated potential by Salvat _et al._[79] (which they adopted); they found that the two are practically indistinguishable except at very small \(E\), see Fig.5 of their paper. For \(\beta^{+}\) decay, the small-\(E\) contribution to \(f\) is suppressed not only by the kinematic factor \({\bf p}E\) but also by the Fermi function, see Fig.1. Therefore, it is reasonable to believe that the simple Rose formula is sufficient to meet our precision goal. Nevertheless, we will assign a 10% uncertainty to the total screening correction to \(f\) to stay on the safe side. ### Kinematic recoil correction The kinematic recoil correction factor \(R(E)\) in Eq.(6) takes into account two effects: (1) the difference between \(E_{0}^{\rm full}\) and \(E_{0}\) in the upper limit of the \(E\)-integration, and (2) the \(1/M\)-suppressed terms in the tree-level squared amplitude, with \(M\) the average nuclear mass. One may derive its expression starting from the exact, relativistic phase space formula for the decay of spinless particles, see, e.g. Appendix A in Ref.[37]. Retaining terms up to \({\cal O}(1/M)\) gives: \[R(E)\approx 1+\frac{2E^{3}-2E_{0}E^{2}+E_{0}^{2}E-m_{e}^{2}E}{E(E-E_{0})M}. \tag{23}\] Ref.[40] adopted a simpler, \(E\)-independent form, which is equivalent to the expression above to \({\cal O}(1/M)\) after integrating over \(E\): \[R_{\rm HT}(E_{0})\approx 1-\frac{3E_{0}}{2M}. \tag{24}\] The size of this correction is \(\sim 10^{-4}\), so there is no need to assign an uncertainty to it. 
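The two corrections just described lend themselves to a compact numerical sketch. The snippet below interpolates a subset of the \(\mathfrak{N}(Z_{i})\) entries reproduced in Table 2 and takes the Fermi function as an externally supplied callable; it illustrates Eqs. (22) and (23) and is not the evaluation used for the final numbers.

```python
import numpy as np

m_e, alpha = 1.0, 1.0 / 137.035999        # atomic units, as in Eq. (6)

# Subset of the Hartree-Fock values N(Z_i) reproduced in Table 2; intermediate
# charges are obtained by linear interpolation.
Z_TAB = np.array([10, 12, 14, 16, 18, 20, 23, 25, 27, 30, 36, 38])
N_TAB = np.array([1.471, 1.474, 1.481, 1.488, 1.496, 1.495,
                  1.504, 1.513, 1.518, 1.540, 1.551, 1.552])

def screening_Q(E, Z_parent, F):
    """Rose formula, Eq. (22): evaluate the spectrum at the shifted energy
    E~ = E - V0 with V0 = alpha^2 Z_i^(4/3) N(Z_i); F is any Fermi-function callable."""
    V0 = alpha**2 * Z_parent**(4.0 / 3.0) * np.interp(Z_parent, Z_TAB, N_TAB)
    E_t = E - V0
    if E_t <= m_e:
        return 0.0                        # shifted energy below threshold
    p   = np.sqrt(E**2 - m_e**2)
    p_t = np.sqrt(E_t**2 - m_e**2)
    return (p_t * E_t * F(E_t)) / (p * E * F(E))

def recoil_R(E, E0, M):
    """Kinematic recoil factor to O(1/M), Eq. (23); formally divergent at E = E0,
    where the phase-space weight (E0 - E)^2 vanishes anyway."""
    return 1.0 + (2*E**3 - 2*E0*E**2 + E0**2*E - m_e**2*E) / (E * (E - E0) * M)
```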
It is worth noticing that, depending on whether \(E_{0}\) or \(E_{0}^{\rm full}\) is used in the "zeroth order" expression of \(f\), the expression of \(R(E)\) will appear differently, e.g. between Ref.[40] and Ref.[44], which is a minor detail often not clearly explained in the literature. ### Atomic overlap correction The last structure-dependent correction in Eq.(6) is the atomic overlap correction \(r(E)\), which accounts for the mismatch between the initial and the final atomic states in the beta decay; it is of order \(\lesssim 10^{-4}\). We evaluate this correction using the empirical formula in Ref.[41]: \[r(E)=1-\frac{1}{E_{0}-E}\frac{\partial^{2}}{\partial Z_{i}^{2}}B(G)\, \tag{25}\] with \[B(G)\ =\ \left\{\begin{array}{ll}13.080Z_{i}^{2.42}{\rm eV}&,\ \ 6\leq Z_{i}\leq 10\\ 14.945Z_{i}^{2.37}{\rm eV}&,\ \ 11\leq Z_{i}\leq 30\\ 11.435Z_{i}^{2.45}{\rm eV}&,\ \ 31\leq Z_{i}\leq 39\end{array}\right.\, \tag{26}\] where \(Z_{i}\) is again the atomic number of the _parent_ nucleus. Similarly, it is unnecessary to assign an uncertainty due to its smallness. ## VI Final results and discussions Our final results for the statistical rate function (denoted as \(f_{\rm new}\)) are summarized in Table 3, alongside the latest compilation by Hardy and Towner [1] (denoted as \(f_{\rm HT}\)). In contrast to the latter, which quoted only the experimental uncertainty from the \(Q_{\rm EC}\) values, our results fully account for the theory uncertainties from the Fermi function, the shape factor and the screening correction (scr). The errors from the former two are fully correlated and stem from the radial (rad) and higher-order shape parameters (shape) in the nuclear charge distribution functions. It is apparent from our analysis that in many cases the total theory uncertainty (rad + shape + scr) is larger than the experimental one (\(Q_{\rm EC}\)). Based on this we deem that Ref.[1] has underestimated the errors in \(f\). It is also interesting to study the shift of the central value of \(f\) from the previous determination. It was shown in Ref.[43], by inspecting the analytic formula of the "pure-QCD" shape factor \(C_{\rm QCD}(E)\) in the absence of electromagnetic interaction, that an increase of \(\langle r_{\rm cw}^{2}\rangle^{1/2}\), the RMS radius characterizing \(\rho_{\rm cw}\), in general leads to smaller values of \(f\). Indeed, from the last column in Table 3 we see that in most cases our new evaluation reduces the central value of \(f\) at the level of \(0.01\%\), although some of these shifts are within the quoted (theory) uncertainties. The magnitude of the shift obtained in this work is in general smaller than that estimated in Ref.[43], once the correlated effects with the Fermi function are accounted for. Nevertheless, according to Eq.(2), a coherent downward shift of \(f\) may lead to an upward shift of \(V_{ud}\), which could partially alleviate the current CKM unitarity deficit. We refrain from immediately quoting an updated value of \(V_{ud}\) based on the new values of \(f\) for several reasons: 1. In this work we only improved the control over the nuclear structure effects that reside in the statistical rate function, but not in other pieces of Eq.(3), especially \(\delta_{\rm NS}\) and \(\delta_{\rm C}\). Before similar theory progress on these two quantities (which can be expected in the next few years), any update on \({\cal F}t\)-values would be preliminary. 2. 
With the existing data on nuclear charge radii, we are only able to re-evaluate \(f\) for 13 out of the 25 measured superallowed transitions. Furthermore, most of the secondary charge distribution parameters in these 13 transitions are not directly measured but inferred from the nearest isotopes. The effects of isotope shifts on the secondary parameters are not systematically accounted for. 3. Moreover, the experimental determination of the nuclear charge radii is not unambiguous. In some cases electron scattering and atomic spectroscopy disagree with each other. In addition, the extraction of nuclear radii from data relies on the removal of higher-order corrections, most notably the nuclear polarization correction. In the nuclear radii compilation by Fricke and Heilig [59] this correction is taken from older calculations [80] from the 1970s. Meanwhile, the compilation by Angeli and Marinova [54] quotes neither the value nor the source of the nuclear polarization correction used. Thus, one cannot claim full control over all theory systematics until these ambiguities are fully resolved. \begin{table} \begin{tabular}{|c c c c c c c c c c c c|} \hline \(Z_{i}\) & \({\mathfrak{N}}(Z_{i})\) & \(Z_{i}\) & \({\mathfrak{N}}(Z_{i})\) & \(Z_{i}\) & \({\mathfrak{N}}(Z_{i})\) & \(Z_{i}\) & \({\mathfrak{N}}(Z_{i})\) & \(Z_{i}\) & \({\mathfrak{N}}(Z_{i})\) & \(Z_{i}\) & \({\mathfrak{N}}(Z_{i})\) \\ \hline \hline 1 & 1.000 & 14 & 1.481 & 25 & 1.513 & 39 & 1.553 & 60 & 1.572 & 80 & 1.599 \\ 7 & 1.399 & 15 & 1.484 & 27 & 1.518 & 45 & 1.561 & 64 & 1.577 & 86 & 1.600 \\ 8 & 1.420 & 16 & 1.488 & 30 & 1.540 & 49 & 1.566 & 66 & 1.579 & 92 & 1.601 \\ 9 & 1.444 & 17 & 1.494 & 32 & 1.556 & 52 & 1.567 & 68 & 1.586 & 94 & 1.603 \\ 10 & 1.471 & 18 & 1.496 & 35 & 1.550 & 53 & 1.568 & 70 & 1.590 & & \\ 11 & 1.476 & 20 & 1.495 & 36 & 1.551 & 54 & 1.568 & 74 & 1.593 & & \\ 12 & 1.474 & 23 & 1.504 & 38 & 1.552 & 55 & 1.567 & 76 & 1.595 & & \\ \hline \end{tabular} \end{table} Table 2: Hartree–Fock calculation of \({\mathfrak{N}}(Z_{i})\) from Ref.[77]. With the above caveats in mind, our work represents an important first step towards a fully data-driven analysis of \(ft\)-values based on available data of nuclear charge distributions. Our approach offers a well-defined prescription to rigorously quantify the theory uncertainties, both in the Fermi function and in the shape factor. It also helps to identify some of the most urgently needed experimental measurements for future improvements. For instance, one extra measurement of a nuclear charge radius in each of the \(A\)=10, 14, 26, 30, 46, 62 nuclear isotriplets would enable the data-driven analysis of these systems based on the isospin formalism, and for \(A\)=66 and 70, two measurements per isotriplet are needed. ###### Acknowledgements. We acknowledge the participation of Giovanni Carotenuto, Michela Sestu, Matteo Cadeddu and Nicola Cargioli at earlier stages of this project. The work of C.-Y.S. is supported in part by the U.S. Department of Energy (DOE), Office of Science, Office of Nuclear Physics, under the FRIB Theory Alliance award DE-SC0013617, by the DOE grant DE-FG02-97ER41014, and by the DOE Topical Collaboration "Nuclear Theory for New Physics", award No. DE-SC0023663. M.G. acknowledges support by EU Horizon 2020 research and innovation programme, STRONG-2020 project under grant agreement No 824093, and by the Deutsche Forschungsgemeinschaft (DFG) under the grant agreement GO 2604/3-1. 
### Radial solutions of the Dirac equation For a nucleus of charge \(Z\) and charge distribution \(\rho_{\rm ch}(r)\), the potential experienced by an electron reads: \[V(r)=-4\pi Z\alpha\left[\frac{1}{r}\int_{0}^{r}dr^{\prime}\rho_{\rm ch}(r^{ \prime})r^{\prime 2}+\int_{r}^{\infty}dr^{\prime}\rho_{\rm ch}(r^{\prime})r^{ \prime}\right]\,. \tag{10}\] The radial Dirac equations are: \[f^{\prime}_{\kappa}(r)=\frac{\kappa-1}{r}f_{\kappa}(r)-(E-m_{e}- V(r))g_{\kappa}(r)\] \[g^{\prime}_{\kappa}(r)=(E+m_{e}-V(r))f_{\kappa}(r)-\frac{\kappa +1}{r}g_{\kappa}(r). \tag{11}\] We choose the normalization such that, when \(V(r)=0\) the unbounded radial functions read: \[\left(\begin{array}{c}g^{\rm free}_{\kappa}(r)\\ f^{\rm free}_{\kappa}(r)\end{array}\right)={\bf p}\left(\begin{array}{c} \sqrt{\frac{E+m_{e}}{E}}j_{\ell}({\bf p}r)\\ {\rm sgn}(\kappa)\sqrt{\frac{E-m_{e}}{E}}j_{\ell}({\bf p}r)\end{array}\right)\, \tag{12}\] where \(j_{\ell}\) is the spherical Bessel function, with: \[\ell=\left\{\begin{array}{cc}\kappa&,\ \kappa>0\\ -\kappa-1&,\ \kappa<0\end{array}\right.,\ \ \bar{\ell}=\left\{\begin{array}{cc} \kappa-1&,\ \kappa>0\\ -\kappa&,\ \kappa<0\end{array}\right.. \tag{13}\] It is beneficial to define \(k\equiv|\kappa|\). With that, one defines four new types of radial functions \(H_{k}\), \(h_{k}\), \(D_{k}\) and \(d_{k}\) as: \[f_{+k}(r)\equiv\frac{\alpha_{+k}}{(2k-1)!!}({\bf p}r)^{k-1}\left\{H_{k}(r)+h_{ k}(r)\right\}\] \[g_{-k}(r)\equiv\frac{\alpha_{-k}}{(2k-1)!!}({\bf p}r)^{k-1}\left\{ H_{k}(r)-h_{k}(r)\right\}\] \[f_{-k}(r)\equiv-\frac{\alpha_{-k}}{(2k-1)!!}({\bf p}r)^{k-1}\frac {r}{R}\left\{D_{k}(r)-d_{k}(r)\right\}\] \[g_{+k}(r)\equiv\frac{\alpha_{+k}}{(2k-1)!!}({\bf p}r)^{k-1}\frac {r}{R}\left\{D_{k}(r)+d_{k}(r)\right\}\,, \tag{14}\] \begin{table} \begin{tabular}{|c|c|c|c|} \hline Transition & \(f_{\rm new}\) & \(f_{\rm HT}\) & \(\frac{f_{\rm new}-f_{\rm HT}}{f_{\rm new}}\) (\%) \\ \hline \hline \({}^{18}\)Ne\(-^{18}\)F & \(134.62(0)_{\rm rad}(0)_{\rm shape}(2)_{\rm scr}(17)_{Q_{BC}}\) & \(134.64(17)_{Q_{BC}}\) & \(-0.01(0)_{\rm rad}(0)_{\rm shape}(2)_{\rm scr}\) \\ \hline \({}^{22}\)Mg\(\rightarrow^{22}\)Na & \(418.27(1)_{\rm rad}(1)_{\rm shape}(7)_{\rm scr}(13)_{Q_{BC}}\) & \(418.35(13)_{Q_{BC}}\) & \(-0.02(0)_{\rm rad}(0)_{\rm shape}(2)_{\rm scr}\) \\ \hline \({}^{34}\)Ar\(\rightarrow^{34}\)Cl & \(3409.89(16)_{\rm rad}(18)_{\rm shape}(60)_{\rm scr}(25)_{Q_{BC}}\) & \(3410.85(25)_{Q_{BC}}\) & \(-0.03(0)_{\rm rad}(1)_{\rm shape}(2)_{\rm scr}\) \\ \hline \({}^{38}\)Ca\(\rightarrow^{38m}\)K & \(5327.49(14)_{\rm rad}(36)_{\rm shape}(98)_{\rm scr}(31)_{Q_{BC}}\) & \(5328.88(31)_{Q_{BC}}\) & \(-0.03(0)_{\rm rad}(1)_{\rm shape}(2)_{\rm scr}\) \\ \hline \({}^{42}\)T\(\rightarrow^{42}\)Sc & \(7124.3(57)_{\rm rad}(8)_{\rm shape}(14)_{\rm scr}(14)_{Q_{BC}}\) & \(7130.1(14)_{Q_{BC}}\) & \(-0.08(8)_{\rm rad}(1)_{\rm shape}(2)_{\rm scr}\) \\ \hline \({}^{50}\)Fe\(-^{50}\)Mn & \(15053(18)_{\rm rad}(3)_{\rm shape}(3)_{\rm scr}(60)_{Q_{BC}}\) & \(15060(60)_{Q_{BC}}\) & \(-0.04(12)_{\rm rad}(2)_{\rm shape}(2)_{\rm scr}\) \\ \hline \({}^{54}\)Ni\(\rightarrow^{54}\)Co & \(21137(3)_{\rm rad}(1)_{\rm shape}(5)_{\rm scr}(52)_{Q_{BC}}\) & \(21137(57)_{Q_{BC}}\) & \(+0.00(2)_{\rm rad}(0)_{\rm shape}(2)_{\rm scr}\) \\ \hline \({}^{34}\)Cl\(\rightarrow^{34}\)S & \(1995.076(81)_{\rm rad}(103)_{\rm shape}(364)_{\rm scr}(94)_{Q_{BC}}\) & \(1996.003(96)_{Q_{BC}}\) & \(-0.05(0)_{\rm rad}(1)_{\rm shape}(2)_{\rm scr}\) \\ \hline \({}^{38m}\)K\(\rightarrow^{38}\)Ar & \(3296.32(8)_{\rm rad}(21)_{\rm shape}(63)_{\rm 
scr}(15)_{Q_{BC}}\) & \(3297.39(15)_{Q_{BC}}\) & \(-0.03(0)_{\rm rad}(1)_{\rm shape}(2)_{\rm scr}\) \\ \hline \({}^{42}\)Sc\(\rightarrow^{42}\)Ca & \(4468.53(336)_{\rm rad}(52)_{\rm shape}(91)_{\rm scr}(46)_{Q_{BC}}\) & \(4472.46(46)_{Q_{BC}}\) & \(-0.09(8)_{\rm rad}(1)_{\rm shape}(2)_{\rm scr}\) \\ \hline \({}^{50}\)Mn\(\rightarrow^{50}\)Cr & \(10737.93(1150)_{\rm rad}(202)_{\rm shape}(229)_{\rm scr}(50)_{Q_{BC}}\) & \(10745.99(49)_{Q_{BC}}\) & \(-0.08(11)_{\rm rad}(2)_{\rm shape}(2)_{\rm scr}\) \\ \hline \({}^{54}\)Co\(\rightarrow^{54}\)Fe & \(15769.4(23)_{\rm rad}(7)_{\rm shape}(34)_{\rm scr}(27)_{Q_{BC}}\) & \(15766.8(27)_{Q_{BC}}\) & \(+0.02(1)_{\rm rad}(0)_{\rm shape}(2)_{\rm scr}\) \\ \hline \({}^{74}\)Rb\(\rightarrow^{74}\)Kr & \(47326(127)_{\rm rad}(18)_{\rm shape}(12)_{\rm scr}(94)_{Q_{BC}}\) & \(47281(93)_{Q_{BC}}\) & \(+0.10(27)_{\rm rad}(4)_{\ with the normalization \(H_{k}(0)\equiv 1\), \(h_{k}(0)\equiv 0\), and \(R\) is an arbitrarily-chosen radius parameter such that the nuclear charge is practically zero at \(r>R\). These definitions, together with the normalization of \(f_{\kappa}(r)\), \(g_{\kappa}(r)\), fully define the parameters \(\alpha_{\pm k}\). A particularly important case is the point-like Coulomb potential: \[V(r)=-\frac{Z\alpha}{r}\, \tag{10}\] where there are two sets of solutions, the "regular" and "irregular" ones. The "regular" solution reads: \[\left(\begin{array}{c}g_{\kappa}^{\rm reg}(r)\\ f_{\kappa}^{\rm reg}(r)\end{array}\right)={\bf p}\left(\begin{array}{c}\sqrt {\frac{E+m_{e}}{E}}\mathfrak{Re}\\ -\sqrt{\frac{E-m_{e}}{E}}\mathfrak{Im}\end{array}\right)Q_{\kappa}(r)\, \tag{11}\] where \[Q_{\kappa}(r)\equiv 2e^{\frac{\pi i}{2}}\frac{|\Gamma(\gamma_{ \kappa}+iy)|}{\Gamma(2\gamma_{\kappa}+1)}(\gamma_{\kappa}+iy)(2{\bf p}r)^{ \gamma_{\kappa}-1}\] \[\times e^{-i{\bf p}r+i\eta_{\kappa}}{}_{1}F_{1}(\gamma_{\kappa}+ 1+iy;2\gamma_{\kappa}+1;2i{\bf p}r)\,, \tag{12}\] with \[\gamma_{\kappa} =\sqrt{\kappa^{2}-\alpha^{2}Z^{2}}\,,\ y=\frac{Z\alpha E}{{\bf p }}\,,\] \[\eta_{\kappa} ={\rm sgn}(\kappa Z)\sin^{-1}\sqrt{\frac{1}{2}\left(1+\frac{ \kappa\gamma_{\kappa}-y^{2}m_{e}/E}{\gamma_{\kappa}^{2}+y^{2}}\right)}. \tag{13}\] Meanwhile, the "irregular" solution reads: \[\left(\begin{array}{c}g_{\kappa}^{\rm irres}(r)\\ f_{\kappa}^{\rm irres}(r)\end{array}\right)={\bf p}\left(\begin{array}{c}\sqrt {\frac{E+m_{e}}{E}}\mathfrak{Re}\\ -\sqrt{\frac{E-m_{e}}{E}}\mathfrak{Im}\end{array}\right)\bar{Q}_{\kappa}(r)\, \tag{14}\] where \(\bar{Q}_{\kappa}(r)\) is obtained from \(Q_{\kappa}(r)\) by simply switching \(\gamma_{\kappa}\to-\gamma_{\kappa}\). When \(r\to\infty\), the regular solution takes the following asymptotic form: \[\left(\begin{array}{c}g_{\kappa}^{\rm reg}(r)\\ f_{\kappa}^{\rm reg}(r)\end{array}\right)\to\frac{1}{r}\left(\begin{array}{c} \sqrt{\frac{E+m_{e}}{E}}\cos({\bf p}r+\delta_{\kappa})\\ -\sqrt{\frac{E-m_{e}}{E}}\sin({\bf p}r+\delta_{\kappa})\end{array}\right)\, \tag{15}\] where \[\delta_{\kappa}=y\ln(2{\bf p}r)-\arg\Gamma(\gamma_{\kappa}+iy)+\eta_{\kappa}- \frac{\pi\gamma_{\kappa}}{2} \tag{16}\] is the phase shift for the Coulomb potential. The corresponding phase shift for the irregular solution is \(\bar{\delta}_{\kappa}\), which is again obtained by taking \(\gamma_{\kappa}\to-\gamma_{\kappa}\). If the point-like Coulomb potential holds for all distances (i.e. from \(r=0\) to \(r\to\infty\)), then only the regular solutions survive because the irregular solutions blow up at \(r\to 0\). 
However, in reality the nuclear charge is distributed over a finite space, so Eq.(10) only holds at \(r>R\). Therefore, since the analytic solutions never apply to \(r=0\), we must retain both the regular and irregular solutions. To be more specific, the radial function at \(r>R\) (which we call the "outer solution") is a linear combination of the two: \[\left(\begin{array}{c}g_{\kappa}(r)\\ f_{\kappa}(r)\end{array}\right)=A_{\kappa}\left(\begin{array}{c}g_{\kappa}^{ \rm reg}(r)\\ f_{\kappa}^{\rm reg}(r)\end{array}\right)+B_{\kappa}\left(\begin{array}{c}g _{\kappa}^{\rm irreg}(r)\\ f_{\kappa}^{\rm irres}(r)\end{array}\right)\,\ r>R \tag{17}\] where the coefficients satisfy the following normalization condition, which we express in terms of matrix product for future benefit [81]: \[\left(\begin{array}{c}A_{\kappa}\\ B_{\kappa}\end{array}\right)^{T}\left(\begin{array}{cc}1&\cos(\delta_{ \kappa}-\bar{\delta}_{\kappa})\\ \cos(\delta_{\kappa}-\bar{\delta}_{\kappa})&1\end{array}\right)\left( \begin{array}{c}A_{\kappa}\\ B_{\kappa}\end{array}\right)=1. \tag{18}\] The other condition comes from the matching with the inner solution (i.e. the \(r<R\) solution) at \(r=R\), which we will describe later. Finally, to obtain radial functions for the positron, one simply switches \(Z\to-Z\). ### Obtaining the inner solution Here we outline the procedure to obtain the inner solution as well as the matching to the outer solution. We start with the \(\kappa=+k\) functions, and define: \[f_{+k} \equiv\frac{\alpha_{+k}}{(2k-1)!!}({\bf p}r)^{k-1}\bar{f}_{+k}\,,\] \[g_{+k} \equiv\frac{\alpha_{+k}}{(2k-1)!!}({\bf p}r)^{k-1}\frac{r}{R}\bar {g}_{+k}\, \tag{19}\] where \(\bar{f}_{+k}=H_{k}+h_{k}\), \(\bar{g}_{+k}=D_{k}+d_{k}\). They satisfy the following radial equations: \[\bar{f}^{\prime}_{+k}(r) =-(E-m_{e}-V(r))\frac{r}{R}\bar{g}_{+k}(r) \tag{20}\] \[\frac{r}{R}\bar{g}^{\prime}_{+k}(r) =(E+m_{e}-V(r))\bar{f}_{+k}(r)-\frac{2k+1}{R}\bar{g}_{+k}(r)\,,\] with the normalization condition \(\bar{f}_{+k}(0)=H_{k}(0)+h_{k}(0)=1\). It is easy to see that this one normalization condition completely fixes both functions; for instance, taking \(r=0\) at both sides of the second differential equation gives \(\bar{g}_{+k}(0)=R(E+m_{e}-V(0))/(2k+1)\), so we now know the values of both functions at \(r=0\). The values of their first derivative at \(r=0\) are then given immediately by the differential equations, so on and so forth. Similarly, for the \(\kappa=-k\) radial functions, we define: \[g_{-k} \equiv\frac{\alpha_{-k}}{(2k-1)!!}({\bf p}r)^{k-1}\bar{g}_{-k}\,,\] \[f_{-k} \equiv-\frac{\alpha_{-k}}{(2k-1)!!}({\bf p}r)^{k-1}\frac{r}{R} \bar{f}_{-k}\, \tag{21}\] where \(\bar{g}_{-k}=H_{k}-h_{k}\), \(\bar{f}_{-k}=D_{k}-d_{k}\). They satisfy the following radial equations: \[\bar{g}^{\prime}_{-k}(r) =-(E+m_{e}-V(r))\frac{r}{R}\bar{f}_{-k}(r) \tag{10}\] \[\frac{r}{R}\bar{f}^{\prime}_{-k}(r) =(E-m_{e}-V(r))\bar{g}_{-k}(r)-\frac{2k+1}{R}\bar{f}_{-k}(r)\,,\] with the normalization condition \(\bar{g}_{-k}(0)=H_{k}(0)-h_{k}(0)=1\). Given a choice of nuclear charge distribution (which fixes the potential \(V(r)\)), we can solve for the functions \(\bar{g}_{\pm k}(r)\), \(\bar{f}_{\pm k}(r)\) numerically from \(r=0\) to \(r=R\). Then, at \(r=R\), we match them to the analytic expressions of the outer solutions. 
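A minimal sketch of this inner-region integration is given below. The uniform-sphere potential, the matching radius and the energy are illustrative assumptions; the point is only to show how the \(r=0\) boundary values quoted above seed the outward integration of \(\bar{f}_{+k}\), \(\bar{g}_{+k}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, m_e = 1.0 / 137.035999, 1.0           # hbar = c = m_e = 1

def V_uniform_sphere(r, Z, Rc):
    """Electron potential of a uniformly charged sphere (an illustrative stand-in
    for the rho_ch-generated potential); for a positron switch Z -> -Z."""
    if r < Rc:
        return -Z * alpha * (3.0 - (r / Rc)**2) / (2.0 * Rc)
    return -Z * alpha / r

def inner_solution(E, k, V, R):
    """Integrate the kappa = +k equations for (fbar_{+k}, gbar_{+k}) from r ~ 0
    out to r = R, starting from the r = 0 values quoted in the text."""
    def rhs(r, y):
        fb, gb = y
        dfb = -(E - m_e - V(r)) * (r / R) * gb
        dgb = (R / r) * (E + m_e - V(r)) * fb - ((2 * k + 1) / r) * gb
        return [dfb, dgb]
    r0 = 1.0e-3 * R                          # start slightly off r = 0
    y0 = [1.0, R * (E + m_e - V(0.0)) / (2 * k + 1)]
    return solve_ivp(rhs, (r0, R), y0, rtol=1e-9, atol=1e-12)

# Positron (Z -> -Z) in a daughter nucleus with Z = 10; R ~ 6 fm expressed in
# units of the electron Compton wavelength; E in units of m_e. All illustrative.
R = 0.015
sol = inner_solution(E=3.0, k=1, V=lambda r: V_uniform_sphere(r, -10, R), R=R)
fbar_R, gbar_R = sol.y[:, -1]                # inputs to the matching at r = R
```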
Combining Eqs.(10), (11) and (12), the matching gives: \[\left(\begin{array}{c}A_{\kappa}\\ B_{\kappa}\end{array}\right) =\frac{\alpha_{\kappa}(\mathbf{p}R)^{k-1}}{(2k-1)!!\,\mathbf{p}} \left(\begin{array}{cc}\mathfrak{Re}Q_{\kappa}(R)&\mathfrak{Re}\bar{Q}_{ \kappa}(R)\\ \mathfrak{Im}Q_{\kappa}(R)&\mathfrak{Im}\bar{Q}_{\kappa}(R)\end{array} \right)^{-1}\] \[\times\left(\begin{array}{c}\sqrt{\frac{E}{E+m_{e}}}\bar{g}_{ \kappa}(R)\\ -\text{sgn}(\kappa)\sqrt{\frac{E}{E-m_{e}}}\bar{f}_{\kappa}(R)\end{array} \right)\, \tag{13}\] where \(\kappa=\pm k\). Substituting this to Eq.(11) gives: \[\alpha_{\kappa}^{-2} =\left(\frac{(\mathbf{p}R)^{k-1}}{(2k-1)!!\,\mathbf{p}}\right)^{ 2}\left[\left(\begin{array}{cc}\mathfrak{Re}Q_{\kappa}(R)&\mathfrak{Re}\bar {Q}_{\kappa}(R)\\ \mathfrak{Im}Q_{\kappa}(R)&\mathfrak{Im}\bar{Q}_{\kappa}(R)\end{array} \right)^{-1}\right.\] \[\times\left(\begin{array}{c}\sqrt{\frac{E}{E+m_{e}}}\bar{g}_{ \kappa}(R)\\ -\text{sgn}(\kappa)\sqrt{\frac{E}{E-m_{e}}}\bar{f}_{\kappa}(R)\end{array} \right)^{T}\] \[\times\left(\begin{array}{c}1\cos(\delta_{\kappa}-\bar{\delta }_{\kappa})\\ \cos(\delta_{\kappa}-\bar{\delta}_{\kappa})\end{array}\right)\] \[\times\left(\begin{array}{cc}\mathfrak{Re}Q_{\kappa}(R)& \mathfrak{Re}\bar{Q}_{\kappa}(R)\\ \mathfrak{Im}Q_{\kappa}(R)&\mathfrak{Im}\bar{Q}_{\kappa}(R)\end{array} \right)^{-1}\] \[\times\left(\begin{array}{c}\sqrt{\frac{E}{E+m_{e}}}\bar{g}_{ \kappa}(R)\\ -\text{sgn}(\kappa)\sqrt{\frac{E}{E-m_{e}}}\bar{f}_{\kappa}(R)\end{array} \right)\.\] Thus, with the numerical solutions of \(\bar{f}_{\pm k}(R)\) and \(\bar{g}_{\pm k}(R)\), Eqs.(13), (14) give spontaneously the coefficients \(\alpha_{\pm k}\) and \(\{A_{\pm k},B_{\pm k}\}\); the former give all the Coulomb functions while the latter determine the full radial functions at \(r>R\). ### Derivation of the master formula of shape factor In this appendix we briefly outline the derivation of the master formula of the shape factor, Eq.(8), based on the formalism by Behrens and Buhring [49]. To match their notations, we adopt the following normalization of states: \[\langle\vec{k}^{\prime}|\vec{k}\rangle=(2\pi)^{3}\delta^{(3)}(\vec{k}-\vec{k}^ {\prime})\, \tag{14}\] i.e. the states are rescaled with respect to the QFT states in the introduction as \(|\vec{k}\rangle=(1/2E_{k})|\vec{k}\rangle_{\rm QFT}\approx(1/2M)|\vec{k}\rangle_ {\rm QFT}\). We start by introducing the Behrens-Buhring form factors in terms of the nuclear matrix element of the charged weak current: \[\langle J_{W}^{\dagger 0}(0)\rangle_{fi}=\sum_{Lm_{L}}\sqrt{4\pi(2J _{i}+1)}(-1)^{J_{f}-m_{J_{f}}} \tag{15}\] \[\times\left(\begin{array}{ccc}J_{f}&L&J_{i}\\ -m_{J_{f}}&m_{L}&m_{J_{i}}\end{array}\right)Y_{Lm_{L}}^{*}(\hat{q})\frac{( \mathbf{q}R)^{L}}{(2L+1)!!}F_{L}(\mathbf{q}^{2})\] \[\langle\vec{J}_{W}^{\dagger}(0)\rangle_{fi}=\sum_{KLm_{K}}\sqrt{ 4\pi(2J_{i}+1)}(-1)^{J_{f}-m_{J_{f}}}\] \[\times\left(\begin{array}{ccc}J_{f}&K&J_{i}\\ -m_{J_{f}}&m_{K}&m_{J_{i}}\end{array}\right)\vec{Y}_{KL}^{m_{K}*}(\hat{q}) \frac{(\mathbf{q}R)^{L}}{(2L+1)!!}F_{KL}(\mathbf{q}^{2})\,.\] where \(q=p_{f}-p_{i}\), \(\mathbf{q}=|\vec{q}|\), with \(Y_{Lm_{L}}\) and \(\vec{Y}_{KL}^{m_{K}}\) the spherical harmonics and the vector spherical tensor respectively. When \(J_{i}=J_{f}=0\), only the \(F_{0}\) and \(F_{01}\) form factors survive, but the latter is proportional to \(f_{-}(q^{2})\) (in the Breit frame) which vanishes in the isospin limit. 
The former gives: \[\langle J_{W}^{\dagger 0}(0)\rangle_{fi}=F_{0}(\mathbf{q}^{2})\, \tag{16}\] where \(\mathbf{q}\to 0\) limit gives the Fermi matrix element: \(F_{0}(0)=M_{F}\). The differential rate of the tree-level decay \(\phi_{i}(p_{i})\to\phi_{f}(p_{f})e^{+}(p_{e})\nu_{e}(p_{\nu})\) is given by: \[d\Gamma=\frac{d^{3}p_{f}}{(2\pi)^{3}}\frac{d^{3}p_{e}}{(2\pi)^{3}}\frac{d^{3}p _{\nu}}{(2\pi)^{3}}(2\pi)^{4}\delta^{(4)}(p_{i}-p_{f}-p_{e}-p_{\nu})\sum_{ \lambda_{e}\lambda_{e}}|\mathcal{T}|^{2}. \tag{17}\] The amplitude, using the lepton current in configuration space, reads: \[\mathcal{T} =-\frac{G_{F}V_{ud}}{\sqrt{2}}\int\frac{d^{3}q^{\prime}}{(2\pi)^{3}} \langle\phi_{f}(\vec{q}^{\prime})|J_{W}^{\dagger\mu}(0)|\phi_{i}(\vec{0})\rangle\] \[\quad\times\int d^{3}xe^{-i\vec{q}^{\prime}\cdot\vec{x}}\bar{\psi}_{ \nu,\vec{p}_{\nu}}(\vec{x})\gamma_{\mu}(1-\gamma_{5})\psi_{e^{+},\vec{p}}(\vec{ x})\] \[\to-\frac{G_{F}V_{ud}}{\sqrt{2}}\frac{1}{2\pi^{2}}\int_{0}^{ \infty}d\mathbf{q}^{\prime}\mathbf{q}^{\prime 2}F_{0}(\mathbf{q}^{\prime 2})\] \[\quad\times\int d^{3}xj_{0}(\mathbf{q}^{\prime})\psi_{\nu, \vec{p}_{\nu}}^{\lambda\star\dagger}(\vec{x})(1-\gamma_{5})\psi_{e^{+},\vec{p}}^{ \lambda_{e}}(\vec{x})\, \tag{18}\] the second expression applies to \(J_{i}=J_{f}=0\) decays, where \(\lambda_{e},\lambda_{\nu}\) denote the lepton spin orientations. This representation is particularly convenient for the implementation of Coulomb effects, as we just need to take the lepton wavefunctions as the solution of the Dirac equation. To that end, we shall expand these wavefunctions in terms of spherical waves: \[\psi^{\lambda_{\nu}}_{\nu,\vec{p}_{\nu}}(\vec{x}) =\sum_{\kappa_{\nu}\mu_{\nu}}i^{l_{\nu}}b^{\lambda_{\nu}}_{\kappa_{ \nu}\mu_{\nu}}\psi^{\mu_{\nu}}_{\nu,\kappa_{\nu}}(\vec{x})\,,\] \[\psi^{\lambda_{\pm}}_{e^{\pm},\vec{p}}(\vec{x}) =\sum_{\kappa_{\nu}\mu_{e}}(-1)^{j_{e}+\mu_{e}}i^{l_{e}}a^{ \lambda_{e}*}_{\kappa_{\nu}\mu_{e}}\psi^{-\mu_{e}}_{e^{+},\kappa_{e}}(\vec{x}). \tag{100}\] The spherical waves read, \[\psi^{\mu_{\nu}}_{\nu,\kappa_{\nu}}(\vec{x}) =\left(\begin{array}{c}j_{\iota_{*}}(E_{\nu}r)\chi^{\mu_{\nu}}_ {\kappa_{\nu}}(\hat{r})\\ i\,\text{sgn}(\kappa_{\nu})j_{\bar{l}_{\nu}}(E_{\nu}r)\chi^{\mu_{\nu}}_{- \kappa_{\nu}}(\hat{r})\end{array}\right)\,,\] \[\psi^{-\mu_{e}}_{e^{+},\kappa_{e}}(\vec{x}) =\left(\begin{array}{c}if_{\kappa_{e}}(r)\chi^{-\mu_{e}}_{- \kappa_{\nu}}(\hat{r})\\ -g_{\kappa_{e}}(r)\chi^{-\mu_{e}}_{\kappa_{e}}(\hat{r})\end{array}\right)\, \tag{101}\] where \[\chi^{\mu}_{\kappa}\equiv\sum_{m}C^{j\;\mu}_{\ell\;\mu-m;\frac{1}{2}\;m}Y_{\ell \;\mu-m}(\hat{r})\chi_{m} \tag{102}\] is a two-component spinor, with \(C^{j\;\mu}_{\ell\;\mu-m;\frac{1}{2}\;m}\) the Clebsch-Gordan coefficients. The expansion coefficients read, \[b^{\lambda_{\nu}}_{\kappa_{\nu}\mu_{\nu}} =\frac{4\pi}{\sqrt{2}}C^{j_{\nu}\;\mu_{\nu}}_{l_{\nu}\;\mu_{\nu} \;\mu_{\nu}-\lambda_{e};\frac{1}{2}\;\lambda_{e}}Y^{*}_{l_{\nu}\;\mu_{\nu}- \lambda_{e}}(\hat{p}_{\nu})\,,\] \[a^{\lambda_{e}}_{\kappa_{e}\mu_{e}} =\frac{4\pi}{\sqrt{2}\mathbf{p}}C^{j_{e}\;\mu_{e}}_{l_{e}\;\mu_{e }-\lambda_{e};\frac{1}{2}\;\lambda_{e}}Y^{*}_{l_{e}\;\mu_{e}-\lambda_{e}}(\hat {p}_{e})e^{i\Delta_{\kappa_{e}}}\, \tag{103}\] with \(\Delta_{\kappa_{e}}\) an extra phase due to the distortion by the nuclear charge. 
Substituting Eq.(100) into Eq.(100), one may perform the angular integration to obtain: \[\mathcal{T} =-\frac{G_{F}V_{ud}}{\sqrt{2}}\frac{1}{2\pi^{2}}\int_{0}^{\infty} d\mathbf{q}^{\prime}\mathbf{q}^{\prime 2}F_{0}(\mathbf{q}^{\prime 2})\int_{0}^{ \infty}drr^{2}j_{0}(\mathbf{q}^{\prime}r)\] \[\times\sum_{\kappa_{\nu}\mu_{e}\kappa_{\nu}\mu_{\nu}}(-1)^{j_{e} -\mu_{e}+1}a^{\lambda_{e}*}_{\kappa_{\nu}\mu_{e}}b^{\lambda_{\nu}*}_{\kappa_{ \nu}\mu_{e}}\delta_{\mu_{e},-\mu_{\nu}}\] \[\left\{g_{\kappa_{e}}(r)[j_{l_{\nu}}(E_{\nu}r)\delta_{\kappa_{e},\kappa_{\nu}}+j_{\bar{l}_{\nu}}(E_{\nu}r)\delta_{\kappa_{e},-\kappa_{\nu}}]\right.\] \[\quad-\left.\text{sgn}(\kappa_{e})f_{\kappa_{e}}(r)[j_{l_{\nu}}(E _{\nu}r)\delta_{-\kappa_{e},\kappa_{\nu}}\right.\] \[\quad+j_{\bar{l}_{\nu}}(E_{\nu}r)\delta_{-\kappa_{e},-\kappa_{ \nu}}]\right\}\,. \tag{104}\] Now, we may express \(g_{\kappa_{e}}\) and \(f_{\kappa_{e}}\) in terms of \(H,h,D,d\) as we defined in Appendix A, which allows us to introduce the Behrens-Buhring's shape factor functions \(M_{K}(k_{e},k_{\nu})\) and \(m_{K}(k_{e},k_{\nu})\). In superallowed decays, we only need the \(K=L=S=0\) functions: \[M_{0}(k_{e},k_{\nu}) =\frac{2}{\pi M_{F}}\int_{0}^{\infty}d\mathbf{q}^{\prime}\mathbf{ q}^{\prime 2}\int_{0}^{\infty}drr^{2}j_{0}(\mathbf{q}^{\prime}r)F_{0}(\mathbf{q}^{\prime 2})\] \[\times\frac{(\mathbf{p}r)^{k_{e}-1}}{(2k_{e}-1)!!}\sqrt{\frac{2j_ {e}+1}{2}}\delta_{k_{e}k_{\nu}}\] \[\times\left\{H_{k_{e}}(r)j_{k_{\nu}-1}(E_{\nu}r)-\frac{r}{R}D_{k_{e }}(r)j_{k_{\nu}}(E_{\nu}r)\right\}\] \[m_{0}(k_{e},k_{\nu}) =\frac{2}{\pi M_{F}}\int_{0}^{\infty}d\mathbf{q}^{\prime}\mathbf{ q}^{\prime 2}\int_{0}^{\infty}drr^{2}j_{0}(\mathbf{q}^{\prime}r)F_{0}(\mathbf{q}^{\prime 2})\] \[\times\frac{(\mathbf{p}r)^{k_{e}-1}}{(2k_{e}-1)!!}\sqrt{\frac{2j_ {e}+1}{2}}\delta_{k_{e}k_{\nu}}\] \[\times\left\{h_{k_{e}}(r)j_{k_{\nu}-1}(E_{\nu}r)-\frac{r}{R}d_{k _{e}}(r)j_{k_{\nu}}(E_{\nu}r)\right\}\, \tag{105}\] where we have rescaled the functions by \(1/F_{0}(0)=1/M_{F}\). With them we can rewrite Eq.(104), after some algebra, as: \[\mathcal{T} =\frac{G_{F}V_{ud}}{4\pi}M_{F}\sum_{\kappa_{e}\mu_{e}\kappa_{\nu }\mu_{\nu}}\frac{(-1)^{j_{e}-\mu_{e}+1}}{\sqrt{2j_{e}+1}}\] \[\times a^{\lambda_{e}*}_{\kappa_{e}\mu_{e}}b^{\lambda_{e}*}_{\kappa_{ \nu}\mu_{\nu}}\delta_{\mu_{e},-\mu_{\nu}}\alpha_{\kappa_{e}}\] \[\times\left\{\text{sgn}(\kappa_{e})M_{0}(k_{e},k_{\nu})+m_{0}(k_ {e},k_{\nu})\right\}. \tag{106}\] Next we evaluate the squared amplitude and perform the phase-space integration. Neglecting kinematic recoil corrections, one can easily show that, \[\frac{d\Gamma}{dE}\approx\frac{1}{(2\pi)^{5}}E\mathbf{p}(E_{0}-E)^{2}\int d \Omega_{e}\int d\Omega_{\nu}\sum_{\lambda_{e}\lambda_{\nu}}|\mathcal{T}|^{2}. \tag{107}\] The angular integration and summation over lepton spin act only on the expansion coefficients \(a^{\lambda_{e}}_{\kappa_{e}\mu_{e}},b^{\lambda_{\nu}}_{\kappa_{\nu}\mu_{\nu}}\): \[\sum_{\lambda_{e}}\int d\Omega_{e}a^{\lambda_{e}*}_{\kappa_{e} \mu_{e}}a^{\lambda_{e}}_{\kappa_{\nu}^{\prime}\mu_{e}^{\prime}} =\frac{8\pi^{2}}{\mathbf{p}^{2}}\delta_{\kappa_{e}\kappa_{e}^{ \prime}}\delta_{\mu_{e}\mu_{e}^{\prime}}\,,\] \[\sum_{\lambda_{\nu}}\int d\Omega_{\nu}b^{\lambda_{e}*}_{\kappa_{\nu }\mu_{\nu}}b^{\lambda_{\nu}*}_{\kappa_{\nu}^{\prime}\mu_{\nu}^{\prime}} =8\pi^{2}\delta_{\kappa_{e}\kappa_{e}^{\prime}}\delta_{\mu_{e}\mu_{e}^{ \prime}}. 
\tag{108}\]
We can further simplify Eq.(105): Since both \(M_{0}(k_{e},k_{\nu})\) and \(m_{0}(k_{e},k_{\nu})\) are proportional to \(\delta_{k_{e}k_{\nu}}\), we can define:
\[M_{0}(k_{e},k_{\nu}) \equiv\delta_{k_{e}k_{\nu}}M_{0}(k),\]
\[m_{0}(k_{e},k_{\nu}) \equiv\delta_{k_{e}k_{\nu}}m_{0}(k)\, \tag{109}\]
where \(k_{e}=k_{\nu}\equiv k\). Furthermore, inserting these definitions into the expressions above, the differential rate takes the standard form in which the Fermi function \(F(E)\) and the shape factor \(C(E)\) are exactly those given by Eqs.(7), (8) respectively.
### Parameterizations of nuclear charge distributions
Here we summarize the parameterizations of nuclear charge distributions used in Ref.[60].
* Two-parameter Fermi (2pF): \[\rho_{\rm ch}(r)=\frac{\rho_{0}}{1+\exp\{(r-c)/a\}}\] (12) where \[\rho_{0}=-\frac{1}{8\pi a^{3}\text{Li}_{3}(-\exp\{c/a\})}\] (13) and the mean-square (MS) charge radius: \[\langle r_{\rm ch}^{2}\rangle=\frac{12a^{2}\text{Li}_{5}(-\exp\{c/a\})}{\text{Li}_{3}(-\exp\{c/a\})}\,.\] (14)
* Three-parameter Fermi (3pF): \[\rho_{\rm ch}(r)=\frac{\rho_{0}(1+wr^{2}/c^{2})}{1+\exp\{(r-c)/a\}}\] (15) where \[\rho_{0}=-\frac{1}{8\pi a^{3}\left[\text{Li}_{3}(-e^{c/a})+(12a^{2}w/c^{2})\text{Li}_{5}(-e^{c/a})\right]}\] (16) and \[\langle r_{\rm ch}^{2}\rangle=\frac{12\left[30a^{4}w\text{Li}_{7}(-e^{c/a})+a^{2}c^{2}\text{Li}_{5}(-e^{c/a})\right]}{12a^{2}w\text{Li}_{5}(-e^{c/a})+c^{2}\text{Li}_{3}(-e^{c/a})}\,\] where \(\text{Li}_{s}(z)\) is the polylogarithm function.
* Three-parameter Gaussian (3pG): \[\rho_{\rm ch}(r)=\frac{\rho_{0}(1+wr^{2}/c^{2})}{1+\exp\{(r^{2}-c^{2})/a^{2}\}}\] (17) where \[\rho_{0}=-\frac{2c^{2}}{\pi^{3/2}a^{3}\left[3a^{2}w\text{Li}_{5/2}(-e^{c^{2}/a^{2}})+2c^{2}\text{Li}_{3/2}(-e^{c^{2}/a^{2}})\right]}\] (18) and \[\langle r_{\rm ch}^{2}\rangle=\frac{6a^{2}c^{2}\text{Li}_{5/2}(-e^{c^{2}/a^{2}})+15a^{4}w\text{Li}_{7/2}(-e^{c^{2}/a^{2}})}{6a^{2}w\text{Li}_{5/2}(-e^{c^{2}/a^{2}})+4c^{2}\text{Li}_{3/2}(-e^{c^{2}/a^{2}})}\,.\] (19)
* Harmonic oscillator (HO): \[\rho_{\rm ch}(r)=\rho_{0}\left(1+\alpha_{\rm HO}r^{2}/b^{2}\right)\exp\{-r^{2}/b^{2}\}\] (20) where \[\rho_{0}=\frac{2}{\pi^{3/2}(3\alpha_{\rm HO}+2)b^{3}}\] (21) and \[\langle r_{\rm ch}^{2}\rangle=\frac{3(5\alpha_{\rm HO}+2)b^{2}}{6\alpha_{\rm HO}+4}\,.\] (22)
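As a quick numerical sanity check of the closed-form moments listed above, the harmonic-oscillator parameterization of Eqs. (20)-(22) can be verified directly; the sketch below is illustrative only, with arbitrary values of \(b\) and \(\alpha_{\rm HO}\) rather than parameters taken from Ref.[60].

```python
import numpy as np
from scipy.integrate import quad

# Harmonic-oscillator charge distribution, Eq. (20); b and alpha_HO are illustrative values.
b, alpha = 1.8, 1.2

rho0 = 2.0 / (np.pi**1.5 * (3.0 * alpha + 2.0) * b**3)                   # Eq. (21)
r2_closed_form = 3.0 * (5.0 * alpha + 2.0) * b**2 / (6.0 * alpha + 4.0)  # Eq. (22)

rho = lambda r: rho0 * (1.0 + alpha * r**2 / b**2) * np.exp(-(r / b)**2)
norm = quad(lambda r: 4.0 * np.pi * r**2 * rho(r), 0.0, np.inf)[0]        # should be ~1
r2_numeric = quad(lambda r: 4.0 * np.pi * r**4 * rho(r), 0.0, np.inf)[0]  # should match Eq. (22)

print(norm, r2_closed_form, r2_numeric)
```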
2303.00072
Absolute spectral metrology of XFEL pulses using diffraction in crystals
At modern X-ray sources, such as synchrotrons and X-ray Free-Electron Lasers (XFELs), it is important to measure the absolute value of the photon energy directly. Here, a method for absolute spectral metrology, based on spectral measurements and rocking of diffracting crystals, is presented. The photon energy of the SASE1 channel of the European XFEL was measured, and the benefits and applications of precise photon energy evaluation are discussed.
Ilia Petrov, Liubov Samoylova, Sarlota Birnsteinova, Valerio Bellucci, Mikako Makita, Tokushi Sato, Romain Letrun, Jayanath Koliyadu, Raphael de Wijn, Andrea Mazzolari, Marco Romagnoni, Richard Bean, Adrian Mancuso, Alke Meents, Henry N. Chapman, Patrik Vagovic
2023-02-28T20:43:20Z
http://arxiv.org/abs/2303.00072v1
# Absolute spectral metrology of XFEL pulses using diffraction in crystals ###### Abstract At modern X-ray sources, such as synchrotrons and X-ray Free-Electron Lasers (XFELs), it is important to measure the absolute value of the photon energy directly. Here, a method for absolute spectral metrology, based on spectral measurements and rocking of diffracting crystals, is presented. The photon energy of the SASE1 channel of the European XFEL was measured, and the benefits and applications of precise photon energy evaluation are discussed. ## 1 Introduction At X-ray Free-Electron Lasers (XFELs), the photon energy is determined by various parameters such as undulator period, magnetic field, electron energy etc. [1]. However, due to the complexity of measuring these parameters, it is difficult to estimate the photon energy with the precision that is required for experiments. That is, due to the narrow, \(\sim\)20 eV bandwidth at XFELs [2, 3, 4, 5], the photon energy needs to be determined with a precision of several eV to set up the experiment to operate within the highest spectral intensity of XFELs. In particular, knowledge of the photon energy helps to align the crystal optics to the diffraction orientation. In X-ray absorption spectroscopy experiments, the photon energy can be adjusted such that absorption edges are within the highest intensity of X-rays. Here, we present a direct photon energy measurement method using diffraction in single crystals and measurements of the spectrum. When inserted into the XFEL beam, flat crystals diffract a fraction of the spectrum, such that a dip in the spectrum appears. The angular positions of the crystals during scans make it possible to determine the resolution of the spectrometer and, by matching the spectral dip position in opposite diffraction directions, the absolute photon energy. ## 2 Experimental The schematic of the experiment is shown in Fig. 1.

Figure 1: The layout of the experiment. The rocking of the crystal (red rectangle) in different orientations relative to the incident beam will provide a dip in the spectrum which will be shifting along photon energies. We denote the opposite diffraction directions as "left" and "right", since the diffraction plane in the experiment was the horizontal plane.

A spectrometer based on a strongly bent High-Pressure High-Temperature (HPHT) diamond crystal [5] in (440) reflection was used to measure the Self-Amplified Spontaneous Emission (SASE) spectrum of the SASE1 channel at the SPB/SFX instrument of the European XFEL. When a flat crystal is inserted into the beam upstream of the spectrometer, the photons within the Darwin width of the flat crystal are diffracted, which leads to a dip in the spectrum. By matching the location of the dip in two opposite orientations of the flat crystal one can determine the wavelength \(\lambda_{0}\) using Bragg's equation \[2d\sin(\theta_{2r}/2)=\lambda_{0}, \tag{1}\] where \(d\) is the lattice spacing of the reflection and \(\theta_{2r}\) is the angle between the orientations of the flat crystal for the same position of the dip in the spectrum. Moreover, by measuring the change of the angle \(\Delta\theta\) during scans, we can attribute the shift of the spectral dip on the detector to the photon energy difference \(\Delta E\), which can be derived using the differential Bragg equation
\[\Delta E=E_{0}\cdot\Delta\theta/\tan\theta_{B}, \tag{2}\] where \(\theta_{B}=\theta_{2r}/2\) is the Bragg angle when \(\Delta\theta=0\) and \(E_{0}\) is the photon energy for \(\theta_{B}\). The shift of the dip allows one to estimate the pixel energy resolution of the detector used to measure the spectrum. As a result, the photon energy and the pixel resolution provide the absolute calibration of the spectrometer. ## 3 Measurements A Zyla 5.5 sCMOS camera was used to record the spectrum from the strongly bent diamond crystal. Fig. 2a shows the image of the SASE spectrum without a dip. When a flat C*(111) crystal is inserted, the X-rays within a Darwin width are diffracted, and the dip in the spectrum appears in Fig. 2b. The spectral images shown in Fig. 2a are acquired by averaging 100 images, in Fig. 2b - 50 images. Each of the images is an average over a train of 40 pulses that arrived with 1.1 MHz repetition rate. The inclined features in Fig. 2 are attributed to the phase-space configuration of the photon pulse during the measurements and/or the distortions of the wavefront by optical elements, which are not expected to have affected the absolute spectral calibration because of a well-pronounced dip in the spectrum. If we integrate over an area of the detector with the strongest signal, as shown in Fig. 2, we will have the spectra with and without the dip due to diffraction shown in Fig. 3a. A SmarAct SR-12012 rotation stage was used for the rotation of the flat crystal. The scans were performed in "left" and "right" diffraction orientations since the diffraction was in the horizontal plane. The angle position of the crystal in the "left" diffraction was chosen to be roughly in the center of the spectrum. By selecting scan angles in the "right" orientation that provide dips in the spectrum around the dip in the "left" orientation, as shown in Fig. 3b, using linear interpolation we can estimate the exact angle in the "right" orientation that would correspond to the selected dip in the "left" orientation. In order to restore the position of diffraction peaks in the spectra, we calculate the ratio of the spectra with a dip and the clear SASE spectrum in Fig. 3b, subtract the minimum value of the calculated ratio and multiply it by \(-1\). The resulting curves and their Gaussian fits are shown in Fig. 4a. Linear extrapolation of center positions of Gaussian fits in "right" orientations allows one to estimate the scan angle which corresponds to the center of the Gaussian fit of the peak in "left" orientation, as shown in Fig. 4b. The difference of the angles in the two orientations that provide the dip in the spectrum at the same photon energy was estimated to be \(\theta_{2r}=29.217^{\circ}\), which provides the photon energy \(E_{0}=11935\) eV. The slope of the linear fit in Fig. 4b allows one to estimate the pixel energy resolution to be around 0.038 eV. The rotation step of around 0.001\({}^{\circ}\) provided a clear shift of the dip in the spectrum, see Fig. 4b, and we expect such accuracy of photon energy measurement to be sufficient in view of the SASE bandwidth that amounts to several tens of eV.
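As a rough numerical cross-check of Eqs. (1) and (2), the values quoted above can be reproduced in a few lines. The sketch below is not from the paper: it assumes the standard diamond lattice constant \(a=3.567\) Å (so \(d_{111}=a/\sqrt{3}\)) and \(hc=12398.4\) eV·Å.

```python
import numpy as np

HC = 12398.4                    # hc in eV*Angstrom (assumed standard value)
D_111 = 3.567 / np.sqrt(3)      # C*(111) lattice spacing, assuming a = 3.567 Angstrom

theta_2r = np.radians(29.217)   # measured angle between "left" and "right" orientations
theta_B = theta_2r / 2.0

# Eq. (1): 2 d sin(theta_2r / 2) = lambda_0, so E_0 = hc / lambda_0
E0 = HC / (2.0 * D_111 * np.sin(theta_B))
print(f"E0 ~ {E0:.0f} eV")      # ~11935 eV, consistent with the value above

# Eq. (2): energy shift of the dip for one 0.001 deg rotation step
dE = E0 * np.radians(0.001) / np.tan(theta_B)
print(f"dE per step ~ {dE:.2f} eV")   # ~0.8 eV, i.e. roughly 20 pixels at 0.038 eV/pixel
```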
Figure 2: a: the image of the spectrum after diffraction from a bent C*(440) crystal that represents the SASE spectrum, b: the image of the spectrum with a flat C*(111) 100 μm crystal inserted before the spectrometer. The spectrum is calculated by integrating the intensity in the black dashed rectangle along the vertical axis. The SASE spectrum was calculated from an average of 100 images. For each scan position, an average of 50 images was used.

Figure 4: a: the restored diffraction peaks in different orientations (dotted lines) and their Gaussian fits (dashed lines). b: the difference between the center of the Gaussian fit of the reflection peak in "left" orientation and in the four "right" orientations in Fig. 5 in pixels (red dots) and their linear fit (blue line).

Figure 3: a: the measured SASE spectrum (blue) and a spectrum with a dip due to C*(111) diffraction (orange) in a scan with diffraction to the "left" side. b: the measured SASE spectrum (blue) and a spectrum with a dip due to C*(111) diffraction (orange) in a scan with diffraction to the "left" side and dips for various angles in a scan with diffraction to the "right" side, see the legend.

We also consider the photon energy scaling precision of 0.001 eV per pixel to be sufficient since such an error would provide around 1 eV difference of the measured SASE bandwidth, which can be tolerated in view of the width of the spectrum. ## 4 Conclusions and outlook A method for the absolute spectral metrology of XFEL radiation using strongly bent crystals and flat ideal crystals was presented and implemented to precisely measure the photon energy of the SASE1 channel of the European XFEL. During the experiment, the central photon energy was estimated to be 11935 eV and the estimated bandwidth was around 30 eV. In practice, since the bandwidth of XFEL radiation is several tens of eV, a 1 eV accuracy of photon energy measurement would be sufficient for typical experiments at XFELs. Angular scans provided the detector photon energy scaling of 0.038 eV per pixel, and we consider the accuracy of 0.001 eV to be sufficient in view of the 1 eV difference of SASE bandwidth that such an error would result in. ## Acknowledgements We acknowledge the SFX User Consortium for providing the Zyla 5.5 sCMOS camera for recording spectral information. The authors acknowledge funding support from Bundesministerium für Bildung und Forschung (BMBF) (05K18XXA), Vetenskapsrådet (VR) (2017-06719), the Röntgen-Ångström Cluster INVISION project and the funding from HORIZON-EIC-2021-PATHFINDEROPEN-01-01, Grant agreement: 101046448, MHz-Tomoscopy project.
2309.15342
First-Order Crosstalk Mitigation in Parallel Quantum Gates Driven With Multi-Photon Transitions
We demonstrate an order of magnitude reduction in the sensitivity to optical crosstalk for neighboring trapped-ion qubits during simultaneous single-qubit gates driven with individual addressing beams. Gates are implemented via two-photon Raman transitions, where crosstalk is mitigated by offsetting the drive frequencies for each qubit to avoid first-order crosstalk effects from inter-beam two-photon resonance. The technique is simple to implement, and we find that phase-dependent crosstalk due to optical interference is reduced on the most impacted neighbor from a maximal fractional rotation error of 0.185(4) without crosstalk mitigation to $\leq$ 0.006 with the mitigation strategy. Further, we characterize first-order crosstalk in the two-qubit gate and avoid the resulting rotation errors for the arbitrary-axis M{\o}lmer-S{\o}rensen gate via a phase-agnostic composite gate. Finally, we demonstrate holistic system performance by constructing a composite CNOT gate using the improved single-qubit gates and phase-agnostic two-qubit gate. This work is done on the Quantum Scientific Computing Open User Testbed (QSCOUT); however, our methods are widely applicable for individual-addressing Raman gates and impose no significant overhead, enabling immediate improvement for quantum processors that incorporate this technique.
Matthew N. H. Chow, Christopher G. Yale, Ashlyn D. Burch, Megan Ivory, Daniel S. Lobser, Melissa C. Revelle, Susan M. Clark
2023-09-27T01:15:45Z
http://arxiv.org/abs/2309.15342v1
# First-Order Crosstalk Mitigation in Parallel Quantum Gates Driven With Multi-Photon Transitions ###### Abstract We demonstrate an order of magnitude reduction in the sensitivity to optical crosstalk for neighboring trapped-ion qubits during simultaneous single-qubit gates driven with individual addressing beams. Gates are implemented via two-photon Raman transitions, where crosstalk is mitigated by offsetting the drive frequencies for each qubit to avoid first-order crosstalk effects from inter-beam two-photon resonance. The technique is simple to implement, and we find that phase-dependent crosstalk due to optical interference is reduced on the most impacted neighbor from a maximal fractional rotation error of \(0.185(4)\) without crosstalk mitigation to \(\leq 0.006\) with the mitigation strategy. Further, we characterize first-order crosstalk in the two-qubit gate and avoid the resulting rotation errors for the arbitrary-axis Molmer-Sorensen gate via a phase-agnostic composite gate. Finally, we demonstrate holistic system performance by constructing a composite CNOT gate using the improved single-qubit gates and phase-agnostic two-qubit gate. This work is done on the Quantum Scientific Computing Open User Testbed (QSCOUT); however, our methods are widely applicable for individual-addressing Raman gates and impose no significant overhead, enabling immediate improvement for quantum processors that incorporate this technique. + Footnote †: preprint: APS/123-QED Quantum computing promises to solve certain classes of problems faster than classical computing [1; 2]. However, technical imperfections lead to errors that currently prevent most known quantum algorithms from running successfully at scale. Quantum error correction has the potential to allow large codes to run successfully once experiments have surpassed fault tolerance thresholds [3; 4; 5; 6; 7; 8]. Of the classes of errors currently preventing scalable fault tolerance, one of the most pernicious is crosstalk, wherein operations ("gates") applied to a target qubit unintentionally also impact one or more additional qubits. These errors are prevalent in many quantum computing platforms [9; 10; 11; 12; 13; 14], particularly during parallel gate operation, which is desirable for reducing execution time and correcting qubits with idle errors [15; 16; 17]. Further, crosstalk errors complicate error correction schemes as they can violate certain assumptions for well-behaved models such as locality and independence of operations [18]. Previous attempts to reduce crosstalk errors include algorithmic efforts such as crosstalk-aware compiling of circuits [19; 13], echo-based protocols [20; 10], and dynamical decoupling [21]. While these approaches can reduce the impacts of crosstalk, they come at the expense of additional overhead from longer gates and circuits. Attempts at physical limitation of crosstalk have relied on either coherent cancellation [11; 12; 22] or highly-engineered independent qubit controls [23; 24; 25; 26]. These strategies can impose significant experimental burdens to calibrate the system and maintain stability to operate in the low-crosstalk regime. In this work, we describe and implement a physical means of crosstalk mitigation for parallel operation of multi-photon driven quantum gates on a linear chain of trapped ions. 
Specifically, the first-order sensitivity to optical crosstalk at neighboring sites is removed by choosing distinct single-photon detunings for nearby qubits while maintaining the required two-photon resonance to drive transitions on each target qubit. We demonstrate an order of magnitude improvement in our measured crosstalk when we implement this technique, and we use the improved single-qubit gates with a phase-agnostic two-qubit gate to implement a complete gateset. Further, this solution requires no algorithmic overhead nor additional calibrations. We implement our solution on individually-addressed ions; however, our result should hold for other platforms using multi-photon transitions with individual qubit addressing, such as neutral atom quantum processors [27]. Analogous ideas for individual addressing of ions have been implemented by applying field gradients to shift each qubit frequency, but these solutions require precise calibrations of each independent qubit frequency and an additional, well-stabilized control field [28; 29; 30; 31; 32]. Application of our technique on quantum processors of similar architecture will be immediately useful in mitigating crosstalk without imposing any significant overhead. This work is done on the Quantum Scientific Computing Open User Testbed (QSCOUT) [22]. We use a linear chain of up to four \({}^{171}\)Yb\({}^{+}\) ions trapped above a surface-electrode trap. Qubits are encoded in the hyperfine "clock" ground states [33]. To implement gates, we apply a \(355\,\mathrm{nm}\) pulsed laser to drive two-photon Raman transitions [34]. As depicted in Fig. 1a, each ion is addressed with a tightly-focused individual addressing (IA) beam. Additionally, a counter-propagating global beam illuminates all ions nearly uniformly (not pictured). Each IA beam and the global beam are independently modulated using their own dedicated acousto-optic modulator (AOM), which converts a radiofrequency (RF) control pulse into a laser pulse. More details about the apparatus have been specified in previous work [22]. To drive two-photon transitions, we drive an AOM with two RF tones (\(\omega_{0}\), \(\omega_{1}\)) where the offset (\(\omega_{1}-\omega_{0}\)) is chosen such that the pulsed laser has a frequency component at the qubit frequency (\(\omega_{\rm qubit}\approx 2\pi\times 12.6\) GHz) [34]. Applying both tones to an IA AOM produces a co-propagating configuration, which is preferred for single-qubit gates as it is insensitive to the ion's motion. Alternately, we can apply \(\omega_{0}\) to an IA AOM and \(\omega_{1}\) to the global beam AOM in a counter-propagating configuration, which is required for driving the motional sidebands. The effective Raman Rabi rate is proportional to the product of the electric field amplitudes for the two tones: \(\Omega_{\rm eff}\propto|E_{\omega_{0}}||E_{\omega_{1}}|\). To understand the effects of crosstalk during parallel co-propagating single-qubit gates, we consider all resonant Raman pairs that illuminate an ion. In typical ion trap experiments, all IA beams are operated with identical single-photon detunings, \(\Delta_{i}=\Delta_{j}=\Delta\), where \(i\) and \(j\) are IA beam indices. In this configuration, the leading crosstalk terms come from resonant Raman pairs formed from combinations of the target and neighbor control beams (e.g. \(\omega_{0,\rm target}\) and \(\omega_{1,\rm neighbor}\)). 
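To make the resonance argument concrete, the toy sketch below (not code from this work; all numbers are illustrative) enumerates which pairs of applied tones satisfy the two-photon resonance condition. With equal single-photon detunings, cross-beam pairs formed from a target tone and a residual neighbor tone remain resonant, whereas offsetting the neighbor's tones leaves only same-beam pairs resonant.

```python
# Toy model: each individual-addressing beam carries two tones separated by the qubit
# frequency; a pair of tones drives the qubit only if their difference matches it.
F_QUBIT = 12.6e9   # approximate qubit splitting in Hz

def two_photon_resonant_pairs(target_offset, neighbor_offset, tol=1e3):
    tones = {
        ("target", 0): target_offset,
        ("target", 1): target_offset + F_QUBIT,
        ("neighbor", 0): neighbor_offset,
        ("neighbor", 1): neighbor_offset + F_QUBIT,
    }
    return [(lo, hi) for lo, f_lo in tones.items() for hi, f_hi in tones.items()
            if abs((f_hi - f_lo) - F_QUBIT) < tol]

# Equal detunings: cross-beam pairs are resonant -> first-order (field-sensitive) crosstalk.
print(two_photon_resonant_pairs(0.0, 0.0))
# Neighbor offset by 0.3 MHz: only same-beam pairs remain -> second-order (intensity-sensitive).
print(two_photon_resonant_pairs(0.0, 0.3e6))
```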
These terms have a Rabi rate of order \(\epsilon\) where \(\epsilon=|E_{i}(\vec{r}_{j})|/|E_{i}(\vec{r}_{i})|\) is the fraction of the electric field amplitude from an IA beam at site \(i\) as measured at neighboring site \(j\). However, as depicted in Fig. 1b, if we operate neighboring IA beams at different single-photon detunings, \(\Delta_{i}\neq\Delta_{j}\) for \(i\neq j\), then each \(\omega_{0,i}\) and \(\omega_{1,i}\) is distinct from tones applied to neighboring beams and the only resonant Raman pairs form from two tones of the same beam. Therefore, crosstalk is reduced to order \(|\epsilon|^{2}\), as the only unintended resonant pair is formed from residual light of _both_ tones from a neighboring beam. In other words, uniform single-photon detunings lead to field-sensitive crosstalk while distinct single-photon detunings lead to intensity-sensitive crosstalk. As shown in Fig. 2, we characterize the spatial extent of an IA beam and estimate the residual illumination at neighboring sites by shuttling a single ion through the beam, measuring the Rabi rate at each point. For this measurement, we apply one tone to the IA beam and one tone to the global beam (counter-propagating), such that the Raman Rabi rate is directly proportional to the electric field amplitude of the IA beam, neglecting small variation in the global beam. From this data, we estimate fractional electric field amplitudes, \(\epsilon\), of 5% (2%) at the left (right) nearest neighbor position, marked \(q_{-1}\) (\(q_{1}\)). Asymmetry in the beam profile is caused by optical aberrations. To implement the intensity-sensitive configuration for co-propagating gates, we shift both tones applied to each IA AOM (\(\omega_{0,i}\), \(\omega_{1,i}\)) such that the difference of each intra-beam pair is constant (\(\omega_{1,i}-\omega_{0,i}\)=\(\omega_{1,j}-\omega_{0,j}\)) and each beam resonantly drives Raman transitions for its target ion, but that each \(\omega_{0,i}\) and \(\omega_{1,i}\) is detuned relative to
neighboring beam tones such that no inter-beam resonant pairs are formed. We choose shift frequencies pseudorandomly, sampled over a \(\pm 0.5\,\mathrm{MHz}\) range, and check that no nearest-neighbor or next-nearest-neighbor offsets lie within \(0.1\,\mathrm{MHz}\) of each other. This method is sufficient to satisfy the requirement that the detuning of first-order crosstalk pairs (\(|\Delta_{i}-\Delta_{j}|\geq 0.1\,\mathrm{MHz}\)) is much larger than the crosstalk Rabi rate (order \(1\,\mathrm{kHz}\)), while still keeping all drive frequencies close enough to the AOM center frequency to maintain efficiency. Since the single-photon detuning for the \(355\,\mathrm{nm}\) laser is of order \(100\,\mathrm{THz}\), offsets at the MHz level do not significantly alter the target Rabi rate or ac Stark shift. We note this solution incurs little experimental overhead and requires no additional calibrations or equipment. Next, we directly measure the crosstalk for parallel single-qubit gates in a four-ion chain by applying both tones (\(\omega_{0,1}\) and \(\omega_{1,1}\)) on \(q_{1}\) and a single tone (we use \(\omega_{0,i}\) throughout this letter; \(\omega_{1,i}\) performs similarly as it has a nearly identical spatial mode) for all spectator ions. We perform this measurement in both field-sensitive and intensity-sensitive configurations, see Figs. 3a and 3b. Without crosstalk mitigation (\(\Delta_{i}=\Delta_{j}\)), we observe Rabi flopping on the left-nearest neighbor, \(q_{0}\), with \(9.6\%\) of the control Rabi rate on \(q_{1}\). By contrast, when we mitigate crosstalk (\(\Delta_{i}\neq\Delta_{j}\) for \(i\neq j\)), the Rabi rate on \(q_{0}\) is reduced to \(0.26\%\) of the control Rabi rate on \(q_{1}\). Crosstalk for the right-nearest neighbor (\(q_{2}\)) and the left second-nearest neighbor (\(q_{-1}\)) is in both cases reduced from \(\approx 2\%\) to \(\leq 0.01\%\) of the Rabi rate on \(q_{1}\). We repeat this measurement for all ions and record the fractional Rabi rate crosstalk matrix in Figs. 3c and 3d. Additionally, we measure the phase dependence of the crosstalk. The native single-qubit gate on QSCOUT is a continuously parameterized rotation, \(R_{\phi}(\theta)\), as described in reference [35]. The rotation axis, \(\phi\), is varied by changing the relative phase between the two drive tones (i.e. the "Raman phase"). The rotation angle, \(\theta\), is set by the pulse duration. In the field-sensitive configuration, identical-frequency target and neighbor light interferes. This interference between the beams depends on the relative optical phase and thus on \(\phi\) for each beam. To measure this effect, we apply parallel \(R_{\phi}(\frac{\pi}{2})\) gates to all four ions and vary \(\phi\) on \(q_{1}\) while fixing \(\phi=0\) on all other qubits.

Figure 3: Crosstalk effects are reduced during parallel single-qubit gates when run with first-order crosstalk mitigation (bottom, \(\Delta_{i}\neq\Delta_{j}\); b,d,f) compared to without it (top, \(\Delta_{i}=\Delta_{j}\); a,c,e). (a,b) Example Rabi flopping when driving with both tones on \(q_{1}\) (red) and only one tone on all other ions (blue) with crosstalk mitigation (b) compared to without it (a). (c,d) The crosstalk Rabi rate matrix shows improvement by at least an order of magnitude on all entries in the intensity-sensitive configuration compared to the field-sensitive configuration. Data is collected as in (a,b) by driving qubit \(i\) with both tones (and spectators with only one) and measuring the observed Rabi rate, \(\Omega_{j}\), on each ion. Unless shown, the fitting uncertainty is \(<1\%\) of each entry. (e,f) All qubits are driven with an \(R_{\phi}(\frac{\pi}{2})\) pulse in parallel and the Raman phase (\(\phi\)) is scanned on \(q_{1}\) (red). Without crosstalk mitigation (e), the measured rotation angle (\(\theta\)) depends strongly on phase in contrast to when crosstalk is mitigated (f). Curves are fit to a sinusoid to extract rotation error as a function of phase. Horizontal dashed black lines show the ideal \(\theta\). Uncertainty markers on plots are derived from \(95\%\) Wilson score intervals.
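The offset-selection procedure described above (pseudorandom shifts over a ±0.5 MHz range with a minimum 0.1 MHz separation between nearby beams) can be captured in a short rejection-sampling sketch. This is an illustration of the stated constraints, not the QSCOUT control software; the chain length and seed are arbitrary.

```python
import random

def choose_detuning_offsets(n_ions, span=0.5e6, min_sep=0.1e6, seed=None):
    """Draw per-beam frequency offsets in [-span, +span] (Hz) such that beams within
    two sites of each other differ by at least min_sep."""
    rng = random.Random(seed)
    while True:
        offsets = [rng.uniform(-span, span) for _ in range(n_ions)]
        ok = all(abs(offsets[i] - offsets[j]) >= min_sep
                 for i in range(n_ions)
                 for j in range(i + 1, min(i + 3, n_ions)))  # nearest and next-nearest
        if ok:
            return offsets

# Both tones of beam i are shifted by offsets[i], so each intra-beam difference
# (and hence the intended two-photon resonance) is unchanged.
print([round(f / 1e3, 1) for f in choose_detuning_offsets(4, seed=0)])  # offsets in kHz
```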
The data in Fig. 3e reveals the strong rotation-angle (\(\theta\)) dependence on phase in the field-sensitive configuration, which causes substantial rotation error for parallel arbitrary-phase gates. We fit the data for ion \(i\) to \(\theta_{i}(\phi_{q_{1}})=\bar{\theta}_{i}+\frac{A_{i}}{2}\sin{(\phi_{q_{1}}+\xi_{i})}\), where \(A_{i}\) and \(\xi_{i}\) are free parameters, and find the maximum fractional change in rotation angle relative to the average angle (\(\bar{\theta}_{i}\)), \((\theta_{i,\mathrm{max}}-\theta_{i,\mathrm{min}})/\bar{\theta}_{i}=A_{i}/\bar{\theta}_{i}\). For \(\{q_{-1},q_{0},q_{1},q_{2}\}\) we measure \(A_{i}/\bar{\theta}_{i}=\{0.027(5),0.185(4),0.176(6),0.054(4)\}\). Consistent with the earlier results, the largest oscillations occur when there is a phase difference between the residual illumination of the phase-varying qubit and its left nearest neighbor as well as when the phase-varying qubit is the left nearest neighbor to another qubit and subject to its residual illumination. By contrast, in the intensity-sensitive configuration (Fig. 3f), we do not observe strong rotation-angle dependence on phase as all fitted amplitudes are smaller than the 95% confidence intervals for each point. Fitting results yield \(A_{i}/\bar{\theta}_{i}=\{0.004(8),0.000(6),0.007(7),0.017(7)\}\). For these improvements to be relevant in the context of a working quantum processor, the single-qubit gates also must be compatible with two-qubit gates. The native two-qubit gate in QSCOUT is the ubiquitous Molmer-Sorensen (MS) gate [36]. The MS gate is driven by symmetrically detuned red and blue motional sidebands and requires precise frequency matching conditions of these two drives. Alternately, these conditions can be expressed as precise phase tracking requirements for both sideband drives and their phase relationship to other gates in the circuit. In principle, one could carefully track all of the rotating frames required for the different frequency-shifted single-qubit gates to satisfy these requirements. Instead, we find that use of a phase-agnostic compilation of the MS gate [37] is sufficient to both combine these single- and two-qubit gates and avoid much of these same field-sensitive crosstalk effects on the two target ions during the MS interaction. In our current apparatus, all counter-propagating gates must use the global beam, which restricts these gates to a field-sensitive configuration. The MS gate must be counter-propagating to address the motional sidebands, and we implement the gate by turning on the global beam and the two IA beams for the two target ions. Potentially, one could drive counter-propagating gates with IA beams on both sides and recover second-order crosstalk sensitivity with similar techniques; however, that requires significant apparatus reconfiguration and is outside the scope of this work.
Nonetheless, we do characterize the first-order crosstalk sensitivity during the MS interaction in a four-ion chain. A dominant error from crosstalk in the MS gate stems from the phase-dependent rotation error on the two target ions of the MS gate, akin to Fig. 3e. This crosstalk-induced rotation error shrinks as a function of physical distance between the two target qubits, as shown by Fig. 4a. Other first-order crosstalk effects in the MS gate, such as crosstalk on spectator ions, are left as a subject of future study, but we note that recent demonstration of a spin echo technique to reduce such crosstalk is applicable and compatible with our technique [10]. Since the QSCOUT system offers the MS gate with a continuous rotation axis (\(\phi\)) input, the phase-dependent crosstalk on the two target ions can lead to \(\phi\)-dependent rotation errors if the MS gate is implemented using a phase shift between the two ions (\(\Delta\phi\)). Similarly, the use of virtual Z gates that advance the phase of all subsequent waveforms [38] means any MS gates that appear mid-circuit may experience a different phase relationship between the two ions depending on the preceding pulses, regardless of the rotation axis specified. To combat this problem, we use a composite gate, \(\mathcal{ZZ}\), as shown in Fig. 4b, which implements an effective Pauli ZZ interaction. We surround the bare MS interaction with counter-propagating (denoted \({}^{cu}\)) single-qubit carrier (qubit transition with no driven motional state change) \(R_{\mathrm{z}y}^{\mathrm{c}\mathrm{u}}(\frac{\pi}{2})\) pulses which convert the total effective interaction from the XX to ZZ basis [37; 39]. Since ZI and IZ commute with ZZ, prior local phases commute through the \(\mathcal{ZZ}\) gate and therefore do not need to be tracked during the MS interaction. Furthermore, to realize the \(\mathcal{ZZ}\) gate's phase agnosticism, we intentionally ignore the value of local phases tracked by our frame rotations [22] and virtual Z gates prior to the gate. This phase agnosticism allows us to use a fixed relative phase between the two target ions (\(\Delta\phi\)) at all times. For circuits requiring an XX-type interaction, a second basis transformation is performed using standard co-propagating single-qubit gates before and after the counter-propagating single-qubit gates, shown in Fig. 4b. We now calibrate the MS interaction with fixed \(\Delta\phi\) and apply the arbitrary phase input (\(\phi\)) to the basis-transformation single-qubit gates. Moving the phase input to the single-qubit gates ideally implements the same unitary as changing the relative phase between the two ions during the bare MS interaction (\(\Delta\phi\)), but as shown in Fig. 4c, the rotation angle is no longer dependent on \(\phi\). We note that these basis-transformation gates do not increase the single-qubit overhead as they are also required to combat phase instabilities that arise from path length differences when switching between co-propagating (single-qubit) and counter-propagating (two-qubit) gates [37]. To demonstrate that our crosstalk-mitigated single-qubit gates work with the phase-agnostic \(\mathcal{ZZ}\), we run a simple circuit where single-qubit gates convert \(\mathcal{ZZ}\) into a composite CNOT gate as shown in Fig. 5a [40; 41]. As shown in Fig. 5b, we then estimate the fidelity of the CNOT gate from population measurements using the computational-basis states as inputs. 
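As an aside, the sketch below shows one way to turn computational-basis population measurements into a truth-table fidelity number with 95% Wilson score intervals, the interval type quoted in the figure captions. The shot counts are invented for illustration, and the simple per-input average is only a crude proxy for the average gate fidelity obtained with the method of Ref. [42].

```python
import numpy as np

def wilson_interval(successes, trials, z=1.96):
    """Two-sided 95% Wilson score interval for a binomial proportion."""
    p = successes / trials
    denom = 1.0 + z**2 / trials
    center = (p + z**2 / (2.0 * trials)) / denom
    half = z * np.sqrt(p * (1.0 - p) / trials + z**2 / (4.0 * trials**2)) / denom
    return center - half, center + half

# Hypothetical counts: shots in which the CNOT returned the ideal output for each input.
shots = 1000
correct = {"00": 975, "01": 968, "10": 952, "11": 955}

truth_table_fidelity = np.mean([c / shots for c in correct.values()])
intervals = {k: wilson_interval(c, shots) for k, c in correct.items()}
print(truth_table_fidelity, intervals)
```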
The 96.2% average gate fidelity estimate [42] is consistent with the fidelity of our bare MS interaction [43], indicating that the single-qubit gates are working as expected in concert with the two-qubit gate.

Figure 4: (a) We measure the bare MS gate rotation error for nearest-neighbor, next-nearest-neighbor, and third-nearest-neighbor pairs in a four ion chain. As the target ion separation increases, the crosstalk and resulting phase dependence decreases. (b) Circuit diagram for the \(\mathcal{ZZ}(\theta)\) gate. Counter-propagating single-qubit gates (\(R^{cu}\)) surround the \(MS(\theta)\) gate and transform the basis from XX to ZZ. _Optional_ co-propagating single-qubit gates (\(R^{co}\)) surrounding \(\mathcal{ZZ}(\theta)\) transform the gate back to an XX-type operation. (c) The measured MS rotation angle on a nearest-neighbor pair (\(q_{0},q_{1}\)) is constant (squares) when keeping the bare MS interaction at a fixed phase \(\Delta\phi\) and applying the rotation-axis phase (\(\phi\)) to the basis transformation gates, in contrast to the rotation error from phase-dependent crosstalk observed when the phase within the bare MS interaction \(\Delta\phi=\phi\) is varied (diamonds). The apparent slight under-rotation of the \(\mathcal{ZZ}\) formulation is likely due to state preparation issues from additional single-qubit infrastructure. Uncertainty markers are derived from 95% confidence Wilson score intervals.

Figure 5: (a) Circuit diagram for the composite CNOT gate. We use single-qubit gates and the fully-entangling \(\mathcal{ZZ}(\frac{\pi}{2})\) gate to construct CNOT. (b) Population measurements after application of the CNOT gate to each computational basis state demonstrate \(\approx 96.2\%\) average fidelity. Uncertainties are 95% Wilson score intervals.

In summary, we demonstrate that the crosstalk on parallel single-qubit gates can be effectively mitigated by driving each individual addressing beam with a distinct single-photon detuning. We observe an order of magnitude or better suppression of all crosstalk Rabi rates as listed in Fig. 3d. Further, we characterize the rotation error from crosstalk between the two target ions in the arbitrary-phase two-qubit gate and demonstrate that this error can be avoided by applying the phase input to the single-qubit basis transformation gates. Finally, we demonstrate that the improved single-qubit gates work in concert with our \(\mathcal{ZZ}\) two-qubit gate, forming a universal gateset. Our technique can be readily adopted on other quantum processors to achieve similar performance gain in parallel gates. This method is also compatible with other algorithmic and physical crosstalk mitigation strategies, allowing for further improvements. As crosstalk is one of many important obstacles on state-of-the-art quantum processors, this work represents a significant step towards achieving scalable fault tolerance. ###### Acknowledgements. We thank Rich Rines, Victory Omode, and Pranav Gokhale for discussions inspiring the development of the phase-agnostic MS gates. We also thank Mallory Harris for helpful discussions in preparation of this work for a general scientific audience. This research was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research Quantum Testbed Program. Sandia National Laboratories is a multi-mission laboratory managed and operated by
National Technology & Engineering Solutions of Sandia, LLC (NTESS), a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration (DOE/NNSA) under contract DE-NA0003525. This written work is authored by an employee of NTESS. The employee, not NTESS, owns the right, title and interest in and to the written work and is responsible for its contents. Any subjective views or opinions that might be expressed in the written work do not necessarily represent the views of the U.S. Government. The publisher acknowledges that the U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this written work or allow others to do so, for U.S. Government purposes. The DOE will provide public access to results of federally sponsored research in accordance with the DOE Public Access Plan. SAND2023-09857O ## Author declarations ### Conflict of Interest The authors have no conflicts to disclose. ### Author Contributions **Matthew N. H. Chow:** Conceptualization (lead); Data Curation (equal); Formal Analysis (equal); Investigation (equal); Methodology (equal); Software (equal); Validation (equal); Visualization (equal); Writing - original draft (lead); Writing - review & editing (equal). **Christopher G. Yale:** Conceptualization (equal); Data Curation (equal); Formal Analysis (equal); Investigation (equal); Methodology (equal); Project Administration (supporting); Software (equal); Supervision (supporting); Validation (equal); Visualization (lead); Writing - original draft (supporting); Writing - review & editing (equal). **Ashlyn D. Burch:** Conceptualization (supporting); Data Curation (equal); Investigation (equal); Methodology (equal); Software (equal); Validation (equal); Visualization (supporting); Writing - review & editing (equal). **Megan Ivory:** Validation (supporting); Writing - review & editing (equal). **Daniel S. Lobser:** Conceptualization (equal); Methodology (supporting); Software (lead); Validation (equal); Writing - review & editing (equal). **Melissa C. Revelle:** Conceptualization (supporting); Methodology (supporting); Validation (supporting); Visualization (supporting); Writing - review & editing (equal). **Susan M. Clark:** Conceptualization (supporting); Funding Acquisition: Investigation (equal); Methodology (supporting); Project Administration (lead); Software (supporting); Supervision (lead); Validation (equal); Visualization (supporting); Writing - review & editing (equal). ## Data availability statement The data that support the findings of this study are available from the corresponding author upon reasonable request.
2309.15424
On Positive Matching Decomposition Conjectures of Hypergraphs
In this paper, we prove the conjectures of Gharakhloo and Welker (2023) that the positive matching decomposition number (pmd) of a $3$-uniform hypergraph is bounded from above by a polynomial of degree $2$ in terms of the number of vertices. Moreover, we derive a lower bound on the pmd specifically for complete $3$-uniform hypergraphs. Additionally, we obtain an upper bound for the pmd of $r$-uniform hypergraphs. For an $r$-uniform hypergraph $H=(V,E)$ such that $\lvert e_i\cap e_j\rvert \leq 1$ for all $e_i,e_j \in E$, we give a characterization of positive matchings in terms of strong alternate closed walks. For specific classes of hypergraphs, we classify the radical and complete intersection Lov\'{a}sz$-$Saks$-$Schrijver ideals.
Marie Amalore Nambi, Neeraj Kumar
2023-09-27T06:22:28Z
http://arxiv.org/abs/2309.15424v2
# On positive matching decomposition conjectures of hypergraphs ###### Abstract. In this paper, we prove the conjectures of Gharakhloo and Welker [4, Conjecture 3.5 and Conjecture 3.6] that the positive matching decomposition number (pmd) of a \(3\)-uniform hypergraph is bounded from above by a polynomial of degree \(2\) in terms of the number of vertices. Furthermore, we obtain the upper bound for pmd of \(r\)-uniform hypergraphs. For a \(r\)-uniform hypergraphs \(H=(V,E)\) such that \(|e_{i}\cap e_{j}|\leq 1\) for all \(e_{i},e_{j}\in E\), we give a characterization of positive matching in terms of strong alternate closed walks. For specific classes of a hypergraph, we classify the radical and complete intersection Lovasz\(-\)Saks\(-\)Schrijver ideals. Key words and phrases:Matching, positive matching, alternate walk, LSS-ideal, complete intersection 2020 Mathematics Subject Classification: Primary 05C70, 13F65, 13F70, 13C40; Secondary 05C75, 05E40 ## 1. Introduction Let \(H=(V=[n],E)\) be a hypergraph such that \(E\) is a clutter; that is, the sets in \(E\) are pairwise incomparable with respect to inclusion. Conca and Welker introduced a graph-theoretical invariant denoted as \(\operatorname{pmd}(H)\in\mathbb{N}\), called the positive matching decomposition number of \(H\), in [2]. We recall a subset \(M\subseteq E\) is said to be _matching_ if \(e\cap e^{\prime}=\emptyset\) for all \(e,e^{\prime}\in M\) and \(e\neq e^{\prime}\). A _positive matching_ of hypergraph \(H\) is a matching \(M\subseteq E\) such that there exists a weight function \(w:V\to\mathbb{Q}\) satisfying: \[\sum_{i\in e}w(i)>0\text{ if }e\in M, \sum_{i\in e}w(i)<0\text{ if }e\in E\setminus M.\] A _positive matching decomposition_ (or pm-decomposition) of \(H\) is a partition \(E=\cup_{i=1}^{p}M_{i}\) into pairwise disjoint subsets such that \(M_{i}\) is a positive matching on \((V,E\setminus\cup_{j=1}^{i-1}M_{j})\) for \(i=1,\ldots,p\). The \(M_{i}\) are called the parts of the pm-decomposition. The smallest \(p\) for which \(H\) admits a pm-decomposition with \(p\) parts will be denoted by \(\operatorname{pmd}(H)\) (cf. [2, Definition 5.1, Definition 5.3]). For an integer \(r>1\), a hypergraph \(H=(V,E)\) is said to \(r\)-uniform if \(|e|=r\), for every \(e\in E\). In [2, Theorem 5.4(1)], the authors provided a linear upper bound of \(\operatorname{pmd}(H)\) for a \(2\)-uniform hypergraphs. Gharakhloo and Welker established a matching decomposition for the \(3\)-uniform complete hypergraph \(H=(V,E)\) with \(n\) vertices and \(\binom{n}{3}\) edges. Specifically, the authors proved that for every \(3\leq l_{1}\leq 2n-3\) and \(5\leq l_{2}\leq 2n-1\), \(E_{l_{1},l_{2}}=\{\{a,b,c\}\in E\mid a<b<c,a+b=l_{1},b+c=l_{2}\}\) is a matching and \(E=\cup_{l_{1},l_{2}}E_{l_{1},l_{2}}\) (cf. [4, Proposition 3.4]). Moreover, the authors proposed the following conjectures: **Conjecture 1.1**.: [4, Conjecture 3.5] Let \(H\) be a complete \(3\)-uniform hypergraph with \(n\) vertices. Then \(E_{l_{1},l_{2}}=\{\{a,b,c\}\in E\mid a<b<c,a+b=l_{1},b+c=l_{2}\}\) is a positive matching. **Conjecture 1.2**.: _[_4_, Conjecture 3.6]_ _Let \(H=(V,E)\) be a \(3\)-uniform hypergraph with \(n\) vertices. Then \(\operatorname{pmd}(H)\leq\frac{3}{2}n^{2}-\frac{15}{2}n+10\)._ Note that Conjecture 1.2 follows from Conjecture 1.1 and [4, Proposition 3.4]. In Section 2, we prove Conjecture 1.1 by presenting a precise weight function for each \(E_{l_{1},l_{2}}\). 
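Since a positive matching is defined by the existence of a weight function satisfying finitely many strict linear inequalities, the property can be checked for small hypergraphs as a linear feasibility problem. The following sketch is purely illustrative and not part of the paper; the margin of 1 stands in for the strict inequalities, which is harmless since weights may be rescaled.

```python
from scipy.optimize import linprog

def is_positive_matching(n, edges, matching):
    """Decide whether `matching` is a positive matching of the hypergraph on vertices 1..n."""
    matching = {frozenset(e) for e in matching}
    A_ub, b_ub = [], []
    for e in edges:
        row = [0.0] * n
        for v in e:
            row[v - 1] = 1.0
        if frozenset(e) in matching:        # sum_{i in e} w(i) >= 1
            A_ub.append([-x for x in row]); b_ub.append(-1.0)
        else:                               # sum_{i in e} w(i) <= -1
            A_ub.append(row); b_ub.append(-1.0)
    res = linprog(c=[0.0] * n, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n, method="highs")
    return res.success

# A single edge of the triangle is a positive matching (e.g. w = (1, 1, -3) works):
print(is_positive_matching(3, [(1, 2), (2, 3), (1, 3)], [(1, 2)]))                   # True
# The perfect matching of the 4-cycle is not positive:
print(is_positive_matching(4, [(1, 2), (2, 3), (3, 4), (1, 4)], [(1, 2), (3, 4)]))   # False
```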
Furthermore, in [4, page-4042], the authors mentioned that "_one can speculate that in general, for \(r\)-uniform hypergraphs \(H\), the value of \(\operatorname{pmd}(H)\) is bounded from above by a polynomial with a degree of \(r-1\) in terms of the number \(n\) of vertices_". In Subsection 2.2, we derive a positive matching decomposition for the complete \(r\)-uniform hypergraphs, for all \(r\geq 4\). Notably, we obtain an upper bound value of \(\operatorname{pmd}\) for \(r\)-uniform hypergraphs (see, Corollary 2.1). Next, we explain a connection between \(\operatorname{pmd}\) and algebraic objects. Let \(\mathbb{K}\) be a field, and \(n\geq 1\) be an integer. For an integer \(d\geq 1\) and \(e\subset[n]\) we consider a polynomial \[f_{e}^{(d)}=\sum_{j=1}^{d}\prod_{i\in e}x_{ij}\] in a polynomial ring \(S=\mathbb{K}[x_{ij}\mid i\in[n],j\in[d]]\). The ideal \[L_{H}^{\mathbb{K}}(d)=(f_{e}^{(d)}\mid e\in E)\subset S\] is called the Lovasz\(-\)Saks\(-\)Schrijver ideal [5]. We refer to it as LSS-ideal in short. The ideal \(L_{H}^{\mathbb{K}}(d)\) coincides with the variety of orthogonal representations of the graph complementary to \(H\) (cf. [9]). Lovasz introduced orthogonal representations of graphs in [8]. The variety of orthogonal representations was studied by Lovasz, Saks and Schrijver [9, 10]. For \(d=1\), the ideal \(L_{H}^{\mathbb{K}}(d)\) coincides with the edge ideal of \(H\). LSS-ideals can be geometrically viewed as follows. Let \(\mathbb{K}\) be an algebraically closed field and \(H\) be a \(r\)-uniform hypergraph. Consider the map \(\phi:(\mathbb{K}^{n})^{d}\to\underbrace{\mathbb{K}^{n}\otimes\cdots\otimes \mathbb{K}^{n}}_{r}\) by \[(v_{1},\ldots,v_{d})\longmapsto\sum_{j=1}^{d}\underbrace{v_{j}\otimes\cdots \otimes v_{j}}_{r}=\sum_{j=1}^{d}\sum_{1\leq i_{1},\ldots,i_{r}\leq n}(v_{j})_ {i_{1}}\cdots(v_{j})_{i_{r}}e_{i_{1}}\otimes\cdots\otimes e_{i_{r}}\] where \(\{e_{1},\ldots,e_{n}\}\) is the standard basis of \(\mathbb{K}^{n}\). The Zariski closure of the image of \(\phi\) is the variety \(S_{n,r}^{d}\) of symmetric tensors of (symmetric) rank \(\leq d\). Let \(\mathcal{V}(L_{H}^{\mathbb{K}}(d))\) be the vanishing locus of the ideal \(L_{H}^{\mathbb{K}}(d)\). Then the restriction of the map \(\phi\) to \(\mathcal{V}(L_{H}^{\mathbb{K}}(d))\) is a parametrization of coordinate section of \(S_{n,r}^{d}\) with \(0\) coefficient at \(e_{i_{1}}\otimes\cdots\otimes e_{i_{r}}\) for \(\{i_{1},\ldots,i_{k}\}\in E\). In particular, the Zariski-closure of the image of the restriction is irreducible if \(L_{H}^{\mathbb{K}}(d)\) is prime (cf. [2, Proposition 9.9]). Hence, establishing the primality of \(L_{H}^{\mathbb{K}}(d)\) serves as a valuable method for deducing the irreducibility of coordinate sections of \(S_{n,r}^{d}\). In Remark 2.4 we obtain that every coordinate sections of the variety \(S_{n,3}^{d}\) is irreducible for \(\frac{3}{2}n^{2}-\frac{15}{2}n+10\leq d\leq\binom{n+2}{3}-n+1\). The following implications establish a connection between \(\operatorname{pmd}\) and the algebraic characteristics of LSS-ideals: \[\operatorname{pmd}(H)\leq d\implies L_{H}^{\mathbb{K}}(d)\text{ is radical complete intersection}\implies L_{H}^{\mathbb{K}}(d+1)\text{ is prime}\] where the above implications proved in [2, Proposition 2.4, Lemma 5.5, Theorem 1.3] and [4, Theorem 1.2]. As a result, the concept of the \(\operatorname{pmd}\) for hypergraphs assumes a crucial role as a noteworthy graph-theoretical invariant with versatile applications spanning the domains of algebra and geometry. 
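To make the generators \(f_{e}^{(d)}\) of the LSS-ideal concrete, they can be written out symbolically for a small example; the snippet below is a minimal illustration (working over \(\mathbb{Q}\) via sympy, standing in for a general field \(\mathbb{K}\)).

```python
import sympy as sp

def lss_generators(n, d, edges):
    """Generators f_e^(d) = sum_{j=1}^{d} prod_{i in e} x_{ij} of the LSS-ideal L_H(d)."""
    x = {(i, j): sp.Symbol(f"x{i}_{j}") for i in range(1, n + 1) for j in range(1, d + 1)}
    return [sp.Add(*[sp.Mul(*[x[(i, j)] for i in e]) for j in range(1, d + 1)]) for e in edges]

# Single 3-uniform edge {1, 2, 3} with d = 2:
# f^(2) = x1_1*x2_1*x3_1 + x1_2*x2_2*x3_2
for f in lss_generators(3, 2, [(1, 2, 3)]):
    print(f)
```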
For \(r=2\) and \(d=2\), the algebraic properties of LSS-ideals, such as radical, prime, primary decomposition, complete intersection, and almost complete intersection, are studied in terms of combinatorial invariants of a graph in [5, 6]. In [2], the authors provide a complete characterisation for \(L_{H}^{\mathbb{K}}(d)\) being radical, complete intersection and prime when \(H\) is a tree and \(d\geq 2\). In [1], for \(d\geq 2\), the authors characterise almost complete intersection \(L_{H}^{\mathbb{K}}(d)\) when \(H\) is a forest, unicyclic and bicyclic graphs. When \(H\) is a graph, LSS-ideals have a connection with the determinantal ideal of the \((d+1)\)-minors of \(X_{H}^{sym}\) a generic symmetric matrices with \(0\)s in positions corresponding to the edges of graph \(H\). For instance, the coordinate section of \(I_{d+1}^{\mathbb{K}}(X_{H}^{sym})\) of \(I_{d+1}^{\mathbb{K}}(X_{n}^{sym})\) is radical, prime, and has maximal height if the corresponding LSS-ideal is radical, prime, complete intersection, respectively over an infinite field, cf.[2, Proposition 7.5]. In [3], the authors provide necessary and sufficient conditions for a matching of a graph to be a positive matching using alternating closed walks. Subsequently, the authors obtained pmd of complete multipartite graphs, bipartite graphs, cacti, and more. In [4], the authors obtain the pmd of \(r\)-uniform hypertree (which is more restrictive from other hypertree definitions from the literature). In Section 3, we introduce strong alternating closed walk and prove that a matching \(M\) in a hypergraph \(H\) is positive if and only if the subgraph induced by \(M\) does not contain any strong alternate closed walk, where \(H=(V,E)\) is a \(r\)-uniform hypergraph such that \(|e_{i}\cap e_{j}|\leq 1\), for all \(e_{i},e_{j}\in E\) (see, Theorems 3.1, 3.2). A hypergraph \(H\) is said to be a good forest if there exists a sequence of edges \(e_{1},\ldots,e_{m}\) such that \(|\{e_{1},\ldots,e_{i}\}\cap e_{i+1}|\leq 1\), for all \(1\leq i\leq m-1\), where \(m=|E|\). We obtain the exact value of pmd for good forest, loose cycle hypergraphs and hypergraphs obtained from a hypergraph by adding pendant edges. Lastly, we provide a complete picture of \(L_{H}^{\mathbb{K}}(d)\) being radical and complete intersection when \(H\) is a good forest. ## 2. On Conjecture In this section, we prove Conjectures 1.1 and 1.2 proposed by Gharakhloo and Welker. Namely, the \(\operatorname{pmd}(H)\) for a \(3\)-uniform hypergraph \(H\) is bounded from above by a quadratic function in the number of vertices. First, we recall notations and known results. **Remark 2.1**.: [2, Lemma 5.2] Let \(H=(V,E)\) be a hypergraph, \(M\subseteq E\) and \(V_{M}=\cup_{A\in M}A\). 1. \(M\) is a positive matching for \(H\) if and only if \(M\) is a positive matching for the induced hypergraph \((V_{M},A\in E\mid A\subseteq V_{M})\). 2. Assume \(M\) is a positive matching on \(H\) and \(A\in E\) is such that \(M_{1}=M\cup\{A\}\) is a matching. Assume also there is a vertex \(a\in A\) such that \(\{B\in E\mid B\subset V_{M_{1}}\) and \(a\in B\}=\{A\}\). Then \(M\cup\{A\}\) is a positive matching of \(H\). ### Set-up Let \(H=(V,E)\) be the complete \(3\)-uniform hypergraph on \(n\) vertices and with \(\binom{n}{3}\) edges. Then for every \(3\leq l_{1}\leq 2n-3\) and \(5\leq l_{2}\leq 2n-1\), the set \(E_{l_{1},l_{2}}=\{\{a,b,c\}\in E\mid a<b<c,a+b=l_{1},b+c=l_{2}\}\). **Remark 2.2**.: Let \(H\) be a complete \(3\)-uniform hypergraph as in Set-up 2.1. 
Any element within \(E_{l_{1},l_{2}}\) takes on the following structure: \[\{l_{1}-l_{2}+\lambda,l_{2}-\lambda,\lambda\},\] where \(3\leq\lambda\leq n\), \(l_{1}-l_{2}+\lambda<l_{2}-\lambda<\lambda\), and \(l_{1}-l_{2}+\lambda\geq 1\). **Remark 2.3**.: [4, Proposition 3.4] Let \(H\) be a \(3\)-uniform hypergraph as in Set-up 2.1. Then \(E_{l_{1},l_{2}}\) is a matching and \(E=\cup_{l_{1},l_{2}}E_{l_{1},l_{2}}\). Moreover, the cardinality of the set \(E_{n}=\{(l_{1},l_{2})\mid\text{ there exist }1\leq a<b<c\leq n,l_{1}=a+b,l_{2}=b+c\}\) is \(\frac{3}{2}n^{2}-\frac{15}{2}n+10\). **Definition 2.1**.: Let \((l_{1},l_{2})\) and \((l_{1}^{\prime},l_{2}^{\prime})\in\mathbb{N}^{2}\). We say \((l_{1},l_{2})<(l_{1}^{\prime},l_{2}^{\prime})\) if 1. \(l_{1}<l_{1}^{\prime}\) or 2. \(l_{1}=l_{1}^{\prime}\) and \(l_{2}<l_{2}^{\prime}\). **Notation 2.1**.: Let \(H\) be a \(3\)-uniform hypergraph as in Set-up 2.1. Then the set \(E_{l_{1},l_{2}}^{c}\) denotes the set of all edges in an induced hypergraph \((V_{E_{l_{1},l_{2}}},E\setminus\{\cup_{(l_{1}^{\prime},l_{2}^{\prime})<(l_{1},l_{2})}E_{l_{1}^{\prime},l_{2}^{\prime}}\cup E_{l_{1},l_{2}}\})\). Proof of Conjecture 1.1.: We show the pm-decomposition of \(H\) by arranging the matching \(E_{l_{1},l_{2}}\) in the order defined in Definition 2.1. That is, we show a matching \(E_{l_{1},l_{2}}\) is a positive matching on \((V,E\setminus\cup_{(l_{1}^{\prime},l_{2}^{\prime})<(l_{1},l_{2})}E_{l_{1}^{ \prime},l_{2}^{\prime}})\). If \(|E_{l_{1},l_{2}}|=1\) then it follows from Remark 2.1(2) that \(E_{l_{1},l_{2}}\) is positive matching on \((V,E\setminus\cup_{(l_{1}^{\prime},l_{2}^{\prime})<(l_{1},l_{2})}E_{l_{1}^{ \prime},l_{2}^{\prime}})\). Let \(a\in\mathbb{N}\) be an integer and we represent matching \(E_{l_{1},l_{2}}\) with \(|E_{l_{1},l_{2}}|>1\) in the following form: \[\begin{bmatrix}l_{1}-l_{2}+m-a&l_{2}-m+a&m-a\\ l_{1}-l_{2}+m-a+1&l_{2}-m+a-1&m-a+1\\ \vdots&\vdots&\vdots\\ l_{1}-l_{2}+m-1&l_{2}-m+1&m-1\\ l_{1}-l_{2}+m&l_{2}-m&m\end{bmatrix}=\begin{bmatrix}x_{11}&x_{12}&x_{13}\\ x_{21}&x_{22}&x_{23}\\ \vdots&\vdots&\vdots\\ x_{a1}&x_{a2}&x_{a3}\\ x_{a+1,1}&x_{a+1,2}&x_{a+1,3}\end{bmatrix} \tag{2.1}\] where, \(6\leq m\leq n\), \(l_{2}-m+a<m-a\), \(l_{1}-l_{2}+m-a\geq 1\). In Equation (2.1) each row is an edge of \(E_{l_{1},l_{2}}\) (see, Remark 2.2). We define a map \(\rho:V(E_{l_{1},l_{2}})\to\mathbb{Q}\) as \[\rho(x_{11}) =t,\ \rho(x_{12})=-(\frac{t}{2}-1),\ \rho(x_{13})=-(\frac{t}{2}-1),\] \[\rho(x_{21}) =-(1+\rho(x_{12})+\rho(x_{13})),\] \[\rho(x_{23}) =-(1+\rho(x_{11})+\rho(x_{12})),\] \[\rho(x_{22}) =1-(\rho(x_{21})+\rho(x_{23})), \tag{2.2}\] \[\rho(x_{i1}) =-(1+\rho(x_{i-1,2})+\rho(x_{i-2,2})),\] \[\rho(x_{i3}) =-(1+\rho(x_{i-1,1})+\rho(x_{i-1,2})),\] \[\rho(x_{i2}) =1-(\rho(x_{i1})+\rho(x_{i3})),\] where \(3\leq i\leq a+1\), \(t\in\mathbb{N}\) such that \(\rho(x_{a+1,1})>0\) and \(\rho(x_{a+1,2})\leq 0\). Let \(E_{l_{1},l_{2}}^{c}\) be the set all edges in \((V_{E_{l_{1},l_{2}}},E\setminus\{\cup_{(l_{1}^{\prime},l_{2}^{\prime})<(l_{1},l_{ 2})}E_{l_{1}^{\prime},l_{2}^{\prime}}\cup E_{l_{1},l_{2}}\})\). Observe that \(x_{11}<x_{21}<\cdots<x_{a+1,1}<x_{a+1,2}<x_{a2}<\cdots<x_{12}<x_{13}<\cdots<x_{ a+1,3}\). 
From the observation, it follows that \[E^{c}_{l_{1},l_{2}}=\begin{cases}\{x_{11},x_{12},\gamma\},&\gamma=x_{23},\ldots,x _{a+1,3},\\ \{x_{11},\beta,\gamma\},&\beta<\gamma,\text{ and }\beta,\gamma\in\{x_{13}, \ldots,x_{a+1,3}\},\\ \{x_{i1},x_{i2},\gamma\},&2\leq i\leq a,\text{ and }\gamma=x_{i+1,3},\ldots,x _{a+1,3},\\ \{x_{i1},\beta,\gamma\},&2\leq i\leq a+1,\beta<\gamma,\text{ and }\beta,\gamma\in\{x_{i-1,2}, \ldots,x_{12},x_{13},\ldots,x_{a+1,3}\},\\ \{\alpha,\beta,\gamma\},&\alpha<\beta<\gamma,\text{ and }\alpha,\beta,\gamma\in\{x _{a+1,2},\ldots,x_{12},x_{13},\ldots,x_{a+1,3}\}.\end{cases}\] Clearly, these are all the edges in \(E\setminus\{\cup_{(l^{\prime}_{1},l^{\prime}_{2})<(l_{1},l_{2})}E_{l^{\prime} _{1},l^{\prime}_{2}}\cup E_{l_{1},l_{2}}\}\) induced by \(V_{E_{l_{1},l_{2}}}\). From Remark 2.1(1) it follows that to complete the proof it is enough to prove the following two claims: 1. If \(e\in E_{l_{1},l_{2}}\) then \(\sum_{i\in e}\rho(i)>0\); 2. If \(e^{\prime}\in E^{c}_{l_{1},l_{2}}\) then \(\sum_{i\in e^{\prime}}\rho(i)<0\). Claim (1): From the map \(\rho\) (2.2) it is clear that \(\rho(x_{11})+\rho(x_{12})+\rho(x_{13})=2\) and \(\rho(x_{i1})+\rho(x_{i2})+\rho(x_{i3})=1\), for all \(i=2,\ldots,a+1\). Before proving Claim (2) we show that \[\rho(x_{11})>\rho(x_{21})>\ldots>\rho(x_{a+1,1})>\\ \rho(x_{a+1,2})>\rho(x_{a2})>\ldots>\rho(x_{12})=\rho(x_{13})> \ldots>\rho(x_{a+1,3}). \tag{2.3}\] It is clear that \(\rho(x_{12})\) and \(\rho(x_{13})\) are equal from the map \(\rho\). For all \(i=a+1,a,\ldots,3\), one has \[\rho(x_{i3}) =-1-\rho(x_{i-1,1})+\rho(x_{i-1,2}))\] \[=-1-\rho(x_{i-1,1})-1+\rho(x_{i-1,1})+\rho(x_{i-1,3})\] \[<\rho(x_{i-1,3}).\] We have, \[\rho(x_{23}) =-1-\rho(x_{11})+\rho(x_{12})=-1-t+\rho(x_{12})\] \[<\rho(x_{12})=\rho(x_{13}).\] From Equation (2.2) it follows that \[\rho(x_{11}) >-(\rho(x_{12})+\rho(x_{13}))>\rho(x_{21}).\] \[\rho(x_{21}) >-(\rho(x_{22})+\rho(x_{23}))\] \[=-\rho(x_{22})-\rho(x_{12})+1+\rho(x_{11})+2\rho(x_{12}))\] \[=-\rho(x_{22})-\rho(x_{12})+3>\rho(x_{31}).\] For all \(i=4,5,\ldots,a+1\), one has \[\rho(x_{i1}) =-1-\rho(x_{i-1,2})-\rho(x_{i-2,2})\] \[=-3+\rho(x_{i-1,1})+\rho(x_{i-1,3})+\rho(x_{i-2,1})+\rho(x_{i-2,3}) \tag{2.4}\] \[=\rho(x_{i-1,1})-4+\rho(x_{i-2,3})-\rho(x_{i-2,2}).\] From Equation 2.2 it follows that \(-4+\rho(x_{2,3})-\rho(x_{2,2})<0\). Therefore from Equation 2.4 we have, \(\rho(x_{41})<\rho(x_{31})\). Consider a Fibonacci sequence \(f_{1}=1,f_{2}=2\) and \(f_{n}=f_{n-2}+f_{n-1}\) for all \(n>2\). From Equation 2.2 it follows that \(\rho(x_{i-2,3})-\rho(x_{i-2,2})<-f_{i-2}\rho(x_{11})-f_{i-1}\rho(x_{12})-f_{i-1 }\rho(x_{13})<0\), for all \(i=5,6,\ldots,a+1\). Thus from Equation 2.4 we have, \(\rho(x_{i1})<\rho(x_{i-1,1})\) for all \(i=5,6,\ldots,a+1\). Since \(\rho(x_{a+1,1})>0\) and \(\rho(x_{a+1,2})\leq 0\), one has \(\rho(x_{a+1,1})>\rho(x_{a+1,2})\). For all \(i=a+1,\ldots,3\), from Equation (2.2) we have, \[\rho(x_{i2}) =1-(\rho(x_{i1})+\rho(x_{i3}))\] \[=1+1+\rho(x_{i-1,2})+\rho(x_{i-2,2})+1+\rho(x_{i-1,1})+\rho(x_{i-1,2})\] \[=3+\rho(x_{i-1,2})+O(i),\text{ where }O(i)=\rho(x_{i-1,2})+\rho(x_{i-2,2}) +\rho(x_{i-1,1}).\] Since \(\rho(x_{i-1,2})+\rho(x_{i-2,2})=-\rho(x_{i1})-1\) and \(\rho(x_{i1})<\rho(x_{i-1,1})\), it follows that \(O(i)=\rho(x_{i-1,2})+\rho(x_{i-2,2})+\rho(x_{i-1,1})\geq 0\). Therefore, we get \(\rho(x_{i2})>\rho(x_{i-1,2})\). 
Also, we have \[\rho(x_{22}) =1-(\rho(x_{21})+\rho(x_{23}))\] \[=1+1+\rho(x_{12})+\rho(x_{13})+1+\rho(x_{11})+\rho(x_{12})=5+\rho (x_{12})\] \[>\rho(x_{12}).\] Hence, Equation (2.3) holds. Claim (2): First we show that for edges \(\{x_{11},x_{12},\gamma\}\in E^{c}_{l_{1},l_{2}}\), \(\rho(x_{11})+\rho(x_{12})+\rho(\gamma)<0\), where \(\gamma=x_{23},\ldots,x_{a+1,3}\). From the map \(\rho\) it follows that \(\rho(x_{11})+\rho(x_{12})+\rho(x_{23})=-1\). From Equation (2.3) it follows that \(\rho(x_{23})>\rho(j)\), for all \(j=x_{33},\ldots,x_{a+1,3}\). Thus \(\rho(x_{11})+\rho(x_{12})+\rho(j)<0\), as desired. Similarly, for every edge \(\{x_{i1},x_{i2},\gamma\}\in E^{c}_{l_{1},l_{2}}\), one has \(\rho(x_{i1})+\rho(x_{i2})+\rho(\gamma)<0\), where \(2\leq i\leq a\), and \(\gamma=x_{i+1,3},\ldots,x_{a+1,3}\). From Equations (2.3) and (2.2) one has \(\rho(x_{a+1,2})\leq 0\), and \(\rho(x_{a+1,2})>\rho(j)\), for all \(j=x_{a,2},\ldots,x_{12},x_{13},\ldots,x_{a+1,3}\). Therefore it is clear that for every edge \(\{\alpha,\beta,\gamma\}\in E^{c}_{l_{1},l_{2}}\), one has \(\rho(\alpha)+\rho(\beta)+\rho(\gamma)<0\), where \(\alpha<\beta<\gamma,\text{ and }\alpha,\beta,\gamma\in\{x_{a+1,2},\ldots,x_{12},x_{13}, \ldots,x_{a+1,3}\}\). Next, we show that for edges \(\{x_{11},\beta,\gamma\}\in E^{c}_{l_{1},l_{2}}\), \(\rho(x_{11})+\rho(\beta)+\rho(\gamma)<0\), for all \(\beta<\gamma,\text{ and }\beta,\gamma\in\{x_{13},\ldots,x_{a+1,3}\}\). From Equation (2.2) it follows that \(\rho(x_{11})+\rho(x_{13})+\rho(x_{23})=-1\). Then from Equation (2.3) one has \(\rho(x_{13})+\rho(x_{23})\geq\rho(i)+\rho(j)\), for all \(i<j\) and \(i,j\in\{x_{13},\ldots,x_{a+1,3}\}\). Hence we have \(\rho(x_{11})+\rho(\beta)+\rho(\gamma)<0\) as desired. Similarly, for every edge \(\{x_{i1},\beta,\gamma\}\in E^{c}_{l_{1},l_{2}}\), one has \(\rho(x_{i1})+\rho(\beta)+\rho(\gamma)<0\), for all \(2\leq i\leq a+1\), \(\beta<\gamma,\text{ and }\beta,\gamma\in\{x_{i-1,2},\ldots,x_{12},x_{13}, \ldots,x_{a+1,3}\}\). Thus \(E_{l_{1},l_{2}}\) is a positive matching on \((V,E\cup_{(l^{\prime}_{1},l^{\prime}_{2})<(l_{1},l_{2})}E^{\prime}_{l^{\prime} _{1},l^{\prime}_{2}})\) as desired. Proof of Conjecture 1.2.: It follows from Conjecture 1.1 and Remark 2.3. **Remark 2.4**.: From Conjecture 1.1 and [7, Corollary 5.2] it follows that for \(\frac{3}{2}n^{2}-\frac{15}{2}n+10\leq d\leq\binom{n+2}{3}-n+1\), every coordinate sections of the variety \(S^{d}_{n,3}\) is irreducible. ### \(r\)-Uniform complete hypergraphs In this subsection, we derive an upper bound for pmd of complete \(r\)-uniform hypergraphs. This extension naturally encompasses the scenario when \(r=3\). Subsequently, we endeavor to apply a parallel approach to demonstrate that \(\operatorname{pmd}(H)\) is bounded above by a polynomial of degree \(r-1\) in terms of the number of vertices when \(H\) is an \(r\)-uniform hypergraph. **Proposition 2.1**.: Let \(H=(V,E)\) be the complete \(r\)-uniform hypergraph on \(n\) vertices and with \(\binom{n}{r}\) edges. Then for every \(3\leq l_{1}\leq 2n-2r+3\), \(5\leq l_{2}\leq 2n-2r+5,\ldots\), \(2r-1\leq l_{r-1}\leq 2n-1\), the set \(E_{l_{1},\ldots,l_{r-1}}=\{\{a_{1},\ldots,a_{r}\}\in E\mid 1\leq a_{1}<\cdots<a_{r}\leq n,a_{1}+a_{2}=l_{1},a_{2}+a_{3}=l_{2 },\ldots,a_{r-1}+a_{r}=l_{r-1}\}\) is a matching. Proof.: Let \(e=\{x_{1}<x_{2}<\cdots<x_{r}\}\), \(e^{\prime}=\{y_{1}<\cdots<y_{r}\}\in E_{l_{1},\ldots,l_{r-1}}\) for some \(3\leq l_{1}\leq 2n-2r+3\), \(5\leq l_{2}\leq 2n-2r+5,\ldots\), \(2r-1\leq l_{r-1}\leq 2n-1\). 
Assume \(e\neq e^{\prime}\) and \(e\cap e^{\prime}\neq\emptyset\). **Case I.** If \(x_{1}=y_{1}\) then since \(x_{1}+x_{2}=l_{1}=y_{1}+y_{2}\) we have \(x_{2}=y_{2}\). By repeating a similar argument, we get \(x_{i}=y_{i}\) for all \(i=3,\ldots,r\). Thus we have \(e=e^{\prime}\) a contradiction. The remaining cases \(x_{i}=y_{i}\) can be deduced in a similar manner. **Case II.** If \(x_{i}=y_{i+1}\) or \(x_{i+1}=y_{i}\), where \(i=1,\ldots,r-1\), then from condition \(x_{i}+x_{i+1}=l_{i}=y_{i}+y_{i+1}\) yield a contradiction to the order of the elements in \(e\) and \(e^{\prime}\). **Case III.** If \(x_{i}=y_{j}\) such that \(i<j\) and \(j-i\geq 2\), where \(i=1,\ldots,r-2\), then \(y_{1}<\cdots<y_{j}=x_{i}<x_{i+1}\) contradicts \(x_{i}+x_{i+1}=y_{i}+y_{i+1}=l_{i}\). Hence, we have \(e\cap e^{\prime}=\emptyset\) as desired. **Proposition 2.2**.: Let \(H=(V,E)\) be the complete \(r\)-uniform hypergraph on \(n\) vertices. Then the cardinality of the set \(E_{n}=\{(l_{1},\ldots,l_{r-1})\ |\ \ \text{there exists }1\leq a_{1}<\cdots<a_{r}\leq n,a_{1}+a_{2}=l_{1},a_{2}+a_{3}=l_{2}, \ldots,a_{r-1}+a_{r}=l_{r-1}\}\) is bounded from above by a polynomial of degree \(r-1\) in terms of the number of vertices, that is \(|E_{n}|\leq n^{r-1}+O(r-2)\), where \(O(r-2)\) denotes a polynomial of degree less than or equal to \(r-2\) in terms of \(n\). Proof.: It is obvious that \(3\leq l_{1}<\cdots<l_{r-1}\leq 2n-1\). Thus the number of ways of choosing \(r-1\) elements in increasing order from \([2n-3]\), that is, \(\binom{2n-3}{r-1}\) is strictly greater than \(|E_{n}|\). Hence, it can be deduced that \(|E_{n}|\leq n^{r-1}+O(r-2)\). **Theorem 2.1**.: Let \(H=(V,E)\) be a \(r\)-uniform hypergraph as in Proposition 2.1. Then \(E_{l_{1},\ldots,l_{r-1}}=\{\{a_{1},\ldots,a_{r}\}\in E\ |\ a_{1}<\cdots<a_{r},a_{1}+a_{2}=l_{1},a_{2}+a_{3}=l_{2}, \ldots,a_{r-1}+a_{r}=l_{r-1}\}\) is a positive matching. Proof.: First, we fix an order on matching \(E_{l_{1},\ldots,l_{r-1}}\subset E\). We say \(E_{l_{1},\ldots,l_{r-1}}<E_{l^{\prime}_{1},\ldots,l^{\prime}_{r-1}}\) if \((l_{1},\ldots,l_{r-1})<_{lex}(l^{\prime}_{1},\ldots,l^{\prime}_{r-1})\). Now, we will show the matching \(E_{l_{1},\ldots,l_{r-1}}\) is a positive matching on \(\Lambda=(V,E\setminus\cup_{(l^{\prime}_{1},\ldots,l^{\prime}_{r-1})<_{lex}(l_{ 1},\ldots,l_{r-1})}E_{l^{\prime}_{1},\ldots,l^{\prime}_{r-1}})\) by cardinality of the matching. If \(|E_{l_{1},\ldots,l_{r-1}}|=1\) then from Remark 2.1(2) it follows that \(E_{l_{1},\ldots,l_{r-1}}\) is positive matching on \(\Lambda\). We depict a matching \(E_{l_{1},\ldots,l_{r-1}}\) with \((|E_{l_{1},\ldots,l_{r-1}}|>1)\) using the following representation: **Case I.** For an odd integer \(r\geq 5\) we set \[\begin{bmatrix}l_{1}-l_{2}+l_{3}-\ldots-l_{r-1}+m-a&\cdots&l_{r-1}-m+a&m-a\\ l_{1}-l_{2}+l_{3}-\ldots-l_{r-1}+m-a+1&\cdots&l_{r-1}-m+a-1&m-a+1\\ \vdots&\vdots&\vdots&\vdots\\ l_{1}-l_{2}+l_{3}-\ldots-l_{r-1}+m-1&\cdots&l_{r-1}-m+1&m-1\\ l_{1}-l_{2}+l_{3}-\ldots-l_{r-1}+m&\cdots&l_{r-1}-m&m\end{bmatrix}\] \[=\begin{bmatrix}x_{11}&\cdots&x_{1,r-1}&x_{1r}\\ x_{21}&\cdots&x_{2,r-1}&x_{2r}\\ \vdots&\vdots&\vdots&\vdots\\ x_{a1}&\cdots&x_{3,r-1}&x_{ar}\\ x_{a+1,1}&\cdots&x_{a+1,r-1}&x_{a+1,r}\end{bmatrix}\] where, \(2r\leq m\leq n\), \(a\in\mathbb{N}\) such that \(x_{1,i-1}<x_{1,i}\), \(x_{a+1,j-1}<x_{a+1,j}\), and \(x_{11}\geq 1\), where \(i=3,5,\ldots,r\), and \(j=2,4,\ldots,r-1\). In the above representation, each row is an edge of \(E_{l_{1},\ldots,l_{r-1}}\). 
Consider the following ascending sequence: \[x_{11},x_{21},\ldots,x_{a+1,1},x_{a+1,2},x_{a2},\ldots,x_{12},x_{ 13},\\ x_{23},\ldots,x_{a+1,i},x_{a+1,i+1},\ldots,x_{1j},x_{1,j+1}, \ldots,x_{a+1,r}, \tag{2.5}\] where \(i=3,5,\ldots,r-2\) and \(j=4,6,\ldots,r-1\). We define a map \(\phi:V(E_{l_{1},\ldots,l_{r-1}})\to\mathbb{Q}\) as \[\phi(x_{11}) =t, \tag{2.6}\] \[\phi(x_{1i}) =\phi(x_{1,i+1}) =-(1+\phi(x_{11})+\ldots+\phi(x_{1,i-1})+\phi(\alpha_{1})+ \ldots+\phi(\alpha_{r}-i)),\] where \(\alpha_{1}\) to \(\alpha_{r-i}\) constitute a sequence of \(r-i\) consecutive elements that commence immediately after \(x_{1,i+1}\) from sequence (2.5), and \(i=2,4,\ldots,r-1\), such that \[\phi(x_{12})<0\text{ and }\phi(x_{12})\geq\phi(x_{13})\geq\ldots \geq\phi(x_{1r})\text{ and }\phi(x_{11})+\ldots+\phi(x_{1r})=1, \tag{2.7}\] \[\phi(x_{\ell r}) =-(1+\phi(x_{\ell-1,1})+\ldots+\phi(x_{\ell-1,r-1})),\] \[\phi(x_{\ell k}) =-(1+\phi(x_{\ell 1})+\ldots+\phi(x_{\ell k-1})+\phi(\beta_{1})+ \ldots+\phi(\beta_{r-k})),\] where \(\beta_{1}\) to \(\beta_{r-k}\) constitute a sequence of \(r-k\) consecutive elements that commence immediately after \(x_{\ell k+1}\) from sequence (2.5), \(\phi(x_{\ell,0})=0\), and \(1\leq k\leq r-1\), \[\phi(x_{\ell-1,i}) >\phi(x_{\ell,i}),\text{ where }i=1,3,\ldots,r,\] \[\phi(x_{\ell-1,j}) <\phi(x_{\ell,j}),\text{ where }j=2,4,\ldots,r-1,\] \[\text{and }\phi(x_{\ell 1}) >\phi(x_{\ell 2})>\ldots>\phi(x_{\ell r})\text{ such that }\phi(x_{\ell 1})+\ldots+\phi(x_{\ell r})=1,\] where \(2\leq\ell\leq a+1\), \(t\gg 0\) such that \(\phi(x_{a+1,1})>0\) and \(\phi(x_{a+1,2})\leq 0\). The definition of \(\phi\) implies that the following sequence is in descending order: \[\phi(x_{11}),\phi(x_{21}),\ldots,\phi(x_{a+1,1}),\phi(x_{a+1,2}), \phi(x_{a2}),\ldots,\phi(x_{12}),\phi(x_{13}),\\ \phi(x_{23}),\ldots,\phi(x_{a+1,i}),\phi(x_{a+1,i+1}),\ldots,\phi (x_{1j}),\phi(x_{1,j+1}),\ldots,\phi(x_{a+1,r}), \tag{2.8}\] where \(i=3,5,\ldots,r-2\) and \(j=4,6,\ldots,r-1\). **Case II.** For an even integer \(r\geq 4\) we set \[\left[\begin{array}{cccc}l_{1}-l_{2}+l_{3}-\ldots+l_{r-1}-m&\cdots&l_{r-1}- m&m\\ l_{1}-l_{2}+l_{3}-\ldots+l_{r-1}-m+1&\cdots&l_{r-1}-m+1&m-1\\ \vdots&\vdots&\vdots&\vdots\\ l_{1}-l_{2}+l_{3}-\ldots+l_{r-1}-m+a-1&\cdots&l_{r-1}-m+a-1&m-a+1\\ l_{1}-l_{2}+l_{3}-\ldots+l_{r-1}-m+a&\cdots&l_{r-1}-m+a&m-a\end{array}\right]\] \[=\left[\begin{array}{cccc}x_{11}&\cdots&x_{1,r-1}&x_{1r}\\ x_{21}&\cdots&x_{2,r-1}&x_{2r}\\ \vdots&\vdots&\vdots&\vdots\\ x_{a1}&\cdots&x_{3,r-1}&x_{ar}\\ x_{a+1,1}&\cdots&x_{a+1,r-1}&x_{a+1,r}\end{array}\right]\] where, \(2r\leq m\leq n\), \(a\in\mathbb{N}\) such that \(x_{1,i-1}<x_{1,i}\), \(x_{a+1,j-1}<x_{a+1,j}\), and \(x_{11}\geq 1\), where \(i=3,5,\ldots,r-1\), and \(j=2,4,\ldots,r\). In the above representation, each row is an edge of \(E_{l_{1},\ldots,l_{r-2}}\). Consider the following ascending sequence: \[x_{11},x_{21},\ldots,x_{a+1,1},x_{a+1,2},x_{a2},\ldots,x_{12},x_ {13},\\ x_{23},\ldots,x_{a+1,i},x_{a+1,i+1},\ldots,x_{1j},x_{1,j+1}, \ldots,x_{a+1,r-1},x_{a+1,r},\ldots,x_{1,r}, \tag{2.9}\] where \(i=3,5,\ldots,r-3\) and \(j=4,6,\ldots,r-2\). 
We define a map \(\psi:V(E_{l_{1},\ldots,l_{r-1}})\to\mathbb{Q}\) as \[\psi(x_{11}) =t, \tag{2.10}\] \[\psi(x_{1i}) =\psi(x_{1,i+1}) =-(1+\psi(x_{11})+\ldots+\psi(x_{1,i-1})+\psi(\alpha_{1})+ \ldots+\psi(\alpha_{r}-i)),\] where \(\alpha_{1}\) to \(\alpha_{r-i}\) constitute a sequence of \(r-i\) consecutive elements that commence immediately after \(x_{1,i+1}\) from sequence (2.9), and \(i=2,4,\ldots,r-2\), such that \[\psi(x_{12})<0\text{ and }\psi(x_{12})\geq\psi(x_{13})\geq \ldots\geq\psi(x_{1r})\text{ and }\psi(x_{11})+\ldots+\psi(x_{1r})=1, \tag{2.11}\] \[\psi(x_{mr}) =-(1+\psi(x_{m+1,1})+\ldots+\psi(x_{m+1,r-1})),\text{ where }m=1, \ldots,a,\] \[\psi(x_{\ell k}) =-(1+\psi(x_{\ell 1})+\ldots+\psi(x_{\ell k-1})+\psi(\beta_{1})+ \ldots+\psi(\beta_{r-k})),\] where \(\beta_{1}\) to \(\beta_{r-k}\) constitute a sequence of \(r-k\) consecutive elements that commence immediately after \(x_{\ell k+1}\) from sequence (2.9), \(\psi(x_{\ell,0})=0\), and \(1\leq k\leq r-1\), \[\psi(x_{\ell-1,i}) >\psi(x_{\ell,i}),\text{ where }i=1,3,\ldots,r-1,\] \[\psi(x_{\ell-1,j}) <\psi(x_{\ell,j}),\text{ where }j=2,4,\ldots,r,\] \[\text{and }\psi(x_{\ell 1}) >\psi(x_{\ell 2})>\ldots>\psi(x_{\ell r})\text{ such that }\psi(x_{\ell 1})+\ldots+\psi(x_{\ell r})=1,\] where \(2\leq\ell\leq a+1\), \(t\gg 0\) such that \(\psi(x_{a+1,1})>0\) and \(\psi(x_{a+1,2})\leq 0\). The definition of \(\phi\) implies that the subsequent sequence is in descending order. \[\psi(x_{11}),\psi(x_{21}),\ldots,\psi(x_{a+1,1}),\psi(x_{a+1,2}),\psi(x_{a2}),\ldots,\psi(x_{12}),\psi(x_{13}),\psi(x_{23}),\ldots,\psi(x_{a+1,i}),\\ \psi(x_{a+1,i+1}),\ldots,\psi(x_{1j}),\psi(x_{1,j+1}),\ldots,\psi(x _{a+1,r-1}),\psi(x_{a+1,r}),\ldots,\psi(x_{1,r}), \tag{2.12}\] where \(i=3,5,\ldots,r-3\) and \(j=4,6,\ldots,r-2\). Let \(E^{c}_{l_{1},\ldots,l_{r-1}}\) be the set all edges in \((V_{E_{l_{1}},\ldots,l_{r-1}},E\backslash\{\cup_{(l_{1}^{\prime},\ldots,l_{r- 1}^{\prime})<_{lex}(l_{1},\ldots,l_{r-1})}E_{l_{1}^{\prime},\ldots,l_{r-1}^{ \prime}}\cup E_{l_{1},\ldots,l_{r-1}}\})\). From maps \(\phi\) and \(\psi\) it is clear that \(\phi(x_{i1})+\ldots+\phi(x_{ir})=1\) and \(\psi(x_{i1})+\ldots+\psi(x_{ir})=1\) for all \(i=1,\ldots,a+1\). From maps \(\phi\) and \(\psi\), and referring to Equations (2.8) and (2.12) one has \(\sum_{i\in e}\phi(i)<0\) and \(\sum_{i\in e}\psi(i)<0\) for all \(e\in E^{c}_{l_{1},\ldots,l_{r-1}}\), analogous to the proof presented in Claim 2 of Conjecture 1.1. Hence the matching \(E_{l_{1},\ldots,l_{r-1}}\) is a positive matching on \(\Lambda\). **Remark 2.5**.: In Theorem 2.1, it is necessary to establish the definitions of the maps \(\phi\) and \(\psi\) to facilitate matching with cardinalities that are less than or equal to \(r-2\). In cases where \(|E_{l_{1},\ldots,l_{r-1}}|>r-2\), these maps can be recursively derived. **Corollary 2.1**.: Let \(H=(V,E)\) be a \(r\)-uniform hypergraph with \(n\) vertices. Then \(\operatorname{pmd}(H)\leq n^{r-1}+O(r-2)\). Proof.: It follows from Proposition 2.2 and Theorem 2.1. ## 3. Positive matching decomposition Generally characterising pmds of a hypergraph is a challenging problem. To begin with, our focus is on comprehending the pmds within the context of \(r\)-uniform hypergraph \(H=(V,E)\) such that \(\mid e_{i}\cap e_{j}\mid\leq 1\), where \(e_{i},e_{j}\in E\), for all \(i,j\). A _walk_ in a \(H\) is a sequence of alternate vertices and edges \(u_{1}e_{1}u_{2}\ldots e_{n}u_{n}\) such that \(u_{i},u_{i+1}\in e_{i}\) for all \(i\). 
A _matching_ in a graph \(H\) is a set \(M=\{e_{1},\ldots,e_{n}\}\subset E\) such that \(e_{i}\cap e_{j}=\emptyset\) for all \(i,j\). For \(A\subseteq V,H[A]\) denotes the _induced subgraph_ of \(H\) on the vertex set \(A\), that is, \(H[A]=(A,\{e\in E\mid e\subseteq A\})\). Throughout this section we assume \(H=(V,E)\) be a \(r\)-uniform hypergraph such that \(\mid e_{i}\cap e_{j}\mid\leq 1\), where \(e_{i},e_{j}\in E\), for all \(i,j\). **Definition 3.1**.: An _alternating walk_ in a hypergraph \(H\) with respect to a matching \(M\) is a walk whose edges alternate between edges of \(M\) and \(E\setminus M\). **Definition 3.2**.: Let \(H\) be a 3-uniform hypergraph and \(M\) be a matching of \(H\). An alternate closed walk in \(H\) with respect to \(M\) say \(W=u_{1},a_{1},v_{1},b_{1},u_{2},a_{2},v_{2},b_{2},\ldots,v_{n},b_{n},u_{n+1}=u _{1}\), where \(u_{i},v_{i},a_{i},b_{i}\in V\), \(\{u_{i},a_{i},v_{i}\}\in M\) and \(\{v_{i},b_{i},u_{i+1}\}\in E\setminus M\), is said to be a _strong alternate closed walk_ if \(a_{i}=b_{j}\) for some \(j\) and number of times \(a_{i}\) appears in \(W\) is equal to number of times \(b_{j}\) appears in \(W\) for all \(i\). **Remark 3.1**.: Certainly, the definition of a strong alternate closed walk can be naturally extended to any \(r\)-uniform hypergraph. Moreover, it becomes evident that strong alternate closed walks and alternate closed walks coincide when \(r=2\). The following theorem gives a characterization of positive matching via strong alternate closed walks. It turns out that a matching \(M\) of a hypergraph \(H\) is positive if and only if the subgraph induced by \(M\) has no strong alternate closed walks with respect to \(M\). **Theorem 3.1**.: Let \(H=(V,E)\) be a 3-uniform hypergraph and \(M\subset E\) be matching. The following conditions are equivalent: 1. \(M\) is positive; 2. The subgraph induced by \(M\) does not contain any strong alternate closed walk, 3. The subgraph induced by every subset \(N\subseteq M\) does not contain any subgraph \(N_{1}\) such that \(N\subseteq N_{1}\) and \(\deg(u_{i})=\deg(a_{i})=\deg(v_{i})=k\), where \(k\geq 2\) for all \(\{u_{i},a_{i},v_{i}\}\in N\). Proof.: \((1)\implies(2)\). Suppose \(H[M]\) has a strong alternate closed walk with the following vertices \[u_{1},a_{1},v_{1},b_{1},u_{2},a_{2},v_{2},b_{2},\ldots,v_{n},b_{n},u_{n+1}=u_{ 1},\] where \(\{u_{i},a_{i},v_{i}\}\in M\) and \(\{v_{i},b_{i},u_{i+1}\}\in E(H[M])\setminus M\), for all \(i\). Since \(M\) is positive there exists a map \(\rho:V(M)\rightarrow\mathbb{Z}\) such that \(\rho(u_{i})+\rho(a_{i})+\rho(v_{i})>0\) and \(\rho(v_{i})+\rho(b_{i})+\rho(u_{i+1})<0\), for all \(i\). WLOG, we can assume that \(\rho(u_{1})>0\) as \(\rho(u_{1})+\rho(a_{1})+\rho(v_{1})>0\). From the positive matching of \(M\), we get the following inequalities: \[\rho(u_{1}) <-(\rho(v_{n})+\rho(b_{n})),\] \[\rho(u_{1}) <-(\rho(v_{n})+\rho(b_{n}))<\rho(u_{n})+\rho(a_{n})-\rho(b_{n}),\] \[\rho(u_{1}) <-(\rho(v_{n})+\rho(b_{n}))<\rho(u_{n})+\rho(a_{n})-\rho(b_{n})<-( \rho(v_{n-1})+\rho(b_{n-1})-\rho(a_{n})+\rho(b_{n}))),\] \[\quad\vdots\] \[\rho(u_{1}) <\rho(u_{1})+\sum_{i=n}^{n}\rho(a_{i})-\sum_{i=n}^{n}\rho(b_{i}),\] which is a contradiction as \(a_{i}=b_{j}\), for some \(j\) and number of times \(a_{i}\) appears is equal to number of times \(b_{j}\) appears for all \(i\). \((2)\implies(3)\). It follows from Lemma 3.1. \((3)\implies(1)\). Assume that \(M\) is not positive. Let \(N\subset M\) be a minimal such that \(N\) is not positive. 
Let \(N=\{e_{1},\ldots,e_{n}\}\) and \(N^{c}=E(H[N])\setminus N=\{e^{\prime}_{1},\ldots,e^{\prime}_{m}\}\) be edges and \(e_{i}=\{u_{i1},u_{i2},u_{i3}\}\) for all \(i\). Let \(\rho:V(N)\to\mathbb{Z}\) be a map. By not positiveness of \(N\) one has \(\sum_{u_{i}\in e_{i}}\rho(u_{i})=y_{i}\) and \(\sum_{u_{j}\in e^{\prime}_{j}}\rho(u_{j})=-y_{j}\), where \(y_{i},y_{j}>0\), for all \(1\leq i\leq n\) and \(1\leq j\leq m\), has no solution. Now we represent the system of linear equations using a matrix by \(Ax=Y\), where \(x=[\rho(u_{11}),\ldots,\rho(u_{n3})]^{t}\), \(Y=[y_{1},\ldots,y_{n+m}]\). Notice that each row of \(A\) corresponds to an edge, and the number of non-zero entries in each column of \(A\) corresponds to the degree of the respective vertex. Then we have \(\operatorname{Rank}(A,Y)>\operatorname{Rank}(A)\). Let \(R_{i}\) denotes \(i^{th}\) row of matrix \(A\). First, \(n\) rows of matrix \(A\) corresponds to edges of \(N\) and rows from \(n+1\) to \(n+m\) correspond to edges of \(N^{c}\). Therefore we get \(a_{1}R_{1}+\ldots+a_{n}R_{n}=a_{n+1}R_{n+1}+\ldots+a_{n+m}R_{n+m}\), where \(a_{i}>0\) be a positive integer for all \(1\leq i\leq n\) and \(a_{j}\) be a non negative integer for all \(n+1\leq j\leq n+m\). Then we get \(\deg_{H[N]}(u_{i1})=\deg_{H[N]}(u_{i2})=\deg_{H[N]}(u_{i3})=a_{i}+1\), for all \(i\). Hence \(H(N)\) has a subgraph \(N_{1}\) such that \(N\subseteq N_{1}\) and \(\deg(u_{i1})=\deg(u_{i2})=\deg(u_{i3})=k\), where \(k\geq 2\) for all \(\{u_{i1},u_{i2},u_{i3}\}\in N\). **Definition 3.3**.: We write the vertices of an edge as a sequence, and we call the first vertex in the sequence the parent and the last vertex we call the descendant. ### Construction Let \(H\) be a \(3\)-uniform hypergraph and \(M\) be a matching of \(H\). Let \(N\subseteq M\) and \(N_{1}\) be a subgraph of \(H[N]\) such that \(N\subset N_{1}\) and the degree of each vertex in an edge is equal and greater than or equal to \(2\), for all edges in \(N\), i.e. \(\deg(u_{i})=\deg(a_{i})=\deg(v_{i})=k\), where \(k\geq 2\) for all \(\{u_{i},a_{i},v_{i}\}=e_{i}\in N\). Assume that \(N\) and \(N_{1}\) are minimal with this property. Let \(N=\{e_{1},\ldots,e_{n}\}\) and \(N^{c}=\{e\ |\ e\in N_{1}\setminus N\}=\{e^{\prime}_{1},\ldots,e^{\prime}_{m}\}\) be edges. The following construction resembles a rooted tree structure with alternating edges with respect to \(N\), and edges of \(N\) may repeat. An alternate rooted tree \((H,N,u_{0})\) is an alternate walk with respect to \(N\) with root \(u_{0}\). 1. Let \(u_{0}\in V(N)\) be a vertex. Then there exists a unique edge in \(N\), which intersects with \(u_{0}\). Let \(e_{1}=\{u_{0},a_{1},v_{1}\}\) be an edge in \(N\), where \(u_{0}\) is the parent. Then there are two descendants for \(u_{0}\), namely \(a_{1}\) and \(v_{1}\). 2. We will repeat the following process for each \(2n^{th}\) step. For a descendant vertex, say \(v\), if \(2(\deg(v)-1)\) is greater than the number of times \(v\) appeared in the sequence from \(u_{0}\) to \(v\), then there exists at least one edge in \(N^{c}\) which intersects with \(v\) and the edge does not appear in the sequence from \(u_{0}\) to \(v\), Now fix \(v\) as the parent then there are two descendants to \(v\). Similarly, we have two descendants for each edge in \(N^{c}\), which intersects with \(v\). Otherwise, \(v\) is the last descendant. 3. We will repeat the following process for each \(2n+1^{th}\) step. 
For each descendant vertices, say \(u\), if \(2(\deg(u)-1)\) is greater than the number of times \(u\) appeared in the sequence from \(u_{0}\) to \(u\), then there exists a unique edge in \(N\) which intersects with \(u\). Then we fix \(u\) as the parent, and then \(u\) has two descendants. Otherwise, \(u\) is the last descendant. **Example 1**.: Let \(H=(V,E)\) be a hypergraph on [9]. Let \(M=\{\{1,2,3\},\{4,5,6\},\{7,8,9\}\}\) be a matching and \(M^{c}=\{\{1,4,7\},\{2,5,8\},\{3,6,9\}\}\). Observe that \(\deg_{H}(i)=2\) for all \(i\in V\). We will now generate a strong alternate closed walk using an alternate rooted tree. Let \((H,M,1)\) be an alternate rooted tree with root \(1\). Then the alternating edges \(\{1,2,3\}\), \(\{3,6,9\}\), \(\{9,7,8\}\), \(\{8,2,5\}\), \(\{5,6,4\}\) and \(\{4,7,1\}\) forms a strong alternate closed walk in \(H\). And the alternate edges \(\{1,2,3\}\), \(\{3,6,9\}\), \(\{9,8,7\}\), and \(\{7,4,1\}\) is an alternate closed walk but not a strong alternate closed walk. Consequently, based on the construction, it can be deduced that the longest alternate walk with the same starting and ending vertex yields a strong alternate closed walk in \(H\). **Lemma 3.1**.: Let \(H\) be a hypergraph and \(M\subset E\) be a matching. If the subgraph induced by \(M\) does not contain any strong alternate closed walk, then the subgraph induced by every subset \(N\subseteq M\) does not contain any subgraph \(N_{1}\) such that \(N\subseteq N_{1}\) and \(\deg(u_{i})=\deg(a_{i})=\deg(v_{i})=k\), where \(k\geq 2\) for all \(\{u_{i},a_{i},v_{i}\}\in N\). Proof.: Suppose there exists a subset \(N\subseteq M\) such that \(H[N]\) has a subgraph \(N_{1}\supseteq N\) with \(\deg(u_{i})=\deg(a_{i})=\deg(v_{i})=k\), where \(k\geq 2\) for all \(\{u_{i},a_{i},v_{i}\}\in N\). Assume that \(N\) and \(N_{1}\) are minimal with this property. Now, we construct an alternating closed walk in a subgraph \(N_{1}\) with respect to \(N\). From the construction 3.1, it follows that there exists a strong alternate closed walk in \((H,N,u_{1})\) such that \(\deg(u_{i})=\deg(a_{i})=\deg(v_{i})=k\), where \(k\geq 2\) for all \(\{u_{i},a_{i},v_{i}\}\in N\), which is a contradiction. The concept of extension for \(r\)-uniform hypergraphs can be applied analogously, enabling us to broaden the understanding of their structural properties and relationships. **Theorem 3.2**.: Let \(H=(V,E)\) be a \(r\)-uniform hypergraph and \(M\subset E\) be a matching. The following conditions are equivalent: 1. \(M\) is positive; 2. The subgraph induced by \(M\) does not contain any strong alternate closed walk, 3. The subgraph induced by every subset \(N\subseteq M\) does not contain any subgraph \(N_{1}\) such that \(N\subseteq N_{1}\) and \(\deg(u_{i1})=\deg(u_{i2})=\ldots=\deg(u_{ir})=k\), where \(k\geq 2\) for all \(\{u_{i1},\ldots,u_{ir}\}\in N\). Proof.: The proof is similar to the proof of Theorem 3.1. **Remark 3.2**.: Note that we obtain [3, Theorem 2.1] as one specific case of Theorem 3.2. **Definition 3.4**.: Let \(H=(V,E)\) be a \(r\)-uniform hypergraph on \([n]\). Then \(H\) is called 1. _good forest_ if there exist a sequence of edges \(e_{1},\ldots,e_{m}\) such that \(|\{e_{1},\ldots,e_{i}\}\cap e_{i+1}|\leq 1\), for all \(1\leq i\leq m-1\), where \(m=|E|\). We call it a good tree if \(H\) is connected. 2. _loose cycle_ if \(E=\{\{1,\ldots,r\},\{r,\ldots,2r-1\},\ldots,\{n-r-2,\ldots,n,1\}\}\), denote it by \(C_{(r-1)m}\), where \(m>1\) and \((r-1)m=n\). 
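The outcome of Example 1 can also be confirmed mechanically. By the characterization used in the proof of Theorem 3.1, a matching \(M\) is positive exactly when some map \(\rho\) gives a positive sum to every edge of \(M\) and a negative sum to every other edge induced by \(V(M)\); after rescaling, this is a linear feasibility problem. The following Python sketch is illustrative only (it relies on scipy's linear programming routine, and the function name is ours); it reports that the matching of Example 1 is not positive, in agreement with the strong alternate closed walk exhibited above.

```python
import numpy as np
from scipy.optimize import linprog

def is_positive_matching(vertices, matching, other_induced_edges):
    """Feasibility of: rho-sum >= 1 on every matching edge and <= -1 on every other
    induced edge (a rescaled form of the strict inequalities in the definition)."""
    idx = {v: i for i, v in enumerate(vertices)}
    rows, rhs = [], []
    for e in matching:                       # -(sum) <= -1, i.e. sum >= 1
        row = np.zeros(len(vertices)); row[[idx[v] for v in e]] = -1.0
        rows.append(row); rhs.append(-1.0)
    for e in other_induced_edges:            # sum <= -1
        row = np.zeros(len(vertices)); row[[idx[v] for v in e]] = 1.0
        rows.append(row); rhs.append(-1.0)
    res = linprog(np.zeros(len(vertices)), A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(None, None)] * len(vertices))
    return res.success

# Example 1: H on [9] with matching M and the induced non-matching edges M^c.
V = list(range(1, 10))
M = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
Mc = [{1, 4, 7}, {2, 5, 8}, {3, 6, 9}]
print(is_positive_matching(V, M, Mc))                      # False: M is not positive in H
print(is_positive_matching([1, 2, 3], [{1, 2, 3}], []))    # True: a single edge is positive
```

Infeasibility here is immediate: adding the three matching constraints forces the total weight of the nine vertices to be positive, while adding the three complementary constraints forces it to be negative; the script merely automates this check and can be reused for other small instances.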
**Remark 3.3**.: If \(H=(V,E)\) be a good forest then \(e_{i}\cap e_{j}\leq 1\), for all \(e_{i},e_{j}\in E\). Note that good trees are \(r\)-uniform hypertrees (see, [4, Definition 3.1]). Applying Theorem 3.2 in a straightforward manner immediately yields the following results. **Corollary 3.1**.: Let \(H\) be a \(r\)-uniform good forest and denote by \(\Delta(H)\) the maximal degree of a vertex in \(H\). Then \(\operatorname{pmd}(H)=\Delta(H)\). **Remark 3.4**.: Since hypertree (see, [4, Definition 3.1]) does not have any closed walk, we obtain [4, Theorem 1.4], which asserts that for a \(r\)-uniform hypertree \(H\), one has \(\operatorname{pmd}(H)=\Delta(H)\). **Corollary 3.2**.: Let \(C_{(r-1)m}\) be a loose cycle, where \(r>2\) and \(m\geq 2\). Then \[\operatorname{pmd}(C_{(r-1)m})=\begin{cases}2,&\text{if $m$ is even,}\\ 3,&\text{if $m$ is odd.}\end{cases}\] **Remark 3.5**.: Let \(m>2\) and \(C_{m}\) be a cycle graph. Then \(\operatorname{pmd}(C_{m})=3\), since even cycles have alternate closed walk as an induced subgraph. An edge of a \(r\)-uniform hypergraph is said to be a _pendant_ if it has a vertex of degree one. Note that pendant edges do not contribute to strong alternate closed walks. The following result is analogous to [3, Theorem 2.3]. **Corollary 3.3**.: Let \(H\) and \(H^{\prime}\) are \(r\)-uniform hypergraphs. If \(H\) can be obtained from \(H^{\prime}\) by adding pendent edges such that each pendant edge has \(r-1\) vertices of degree \(1\), then \(\operatorname{pmd}(H)=\max\{\operatorname{pmd}(H^{\prime}),\Delta(H)\}\). An ideal \(I\subset S\) is said to be a complete intersection if \(\mu(I)=ht(I)\), where \(\mu(I)\) denotes the cardinality of a minimal homogeneous generating set of \(I\). **Theorem 3.3**.: Let \(H\) be a \(r\)-uniform good forest. Then: 1. \(L^{\mathbb{K}}_{H}(d)\) is radical for all \(d\). 2. \(L^{\mathbb{K}}_{H}(d)\) is a complete intersection if and only if \(d\geq\Delta(H)\). 3. \(L^{\mathbb{K}}_{H}(d)\) is prime if \(d\geq\Delta(H)+1\). Proof.: (1). The proof is analogous to the proof of [2, Theorem 6.1]. Then it follows that the ideal \(L^{\mathbb{K}}_{H}(d)\) is a Cartwright-Sturmfels ideal. In particular, \(L^{\mathbb{K}}_{H}(d)\) and all its initial ideals are radical. Assertions (2) and (3) follows from Corollary 3.1, [1, Lemma 2.1], and [4, Theorem 1.2]. **Acknowledgement.** The first author is financially supported by the University Grant Commission, India. The second author is partially supported by the Mathematical Research Impact Centric Support (MATRICS) grant from Science and Engineering Research Board (SERB), India.
2309.05637
Latte: Lightweight Aliasing Tracking for Java
Many existing systems track aliasing and uniqueness, each with their own trade-off between expressiveness and developer effort. We propose Latte, a new approach that aims to minimize both the amount of annotations and the complexity of invariants necessary for reasoning about aliasing in an object-oriented language with mutation. Our approach only requires annotations for parameters and fields, while annotations for local variables are inferred. Furthermore, it relaxes uniqueness to allow aliasing among local variables, as long as this aliasing can be precisely determined. This enables support for destructive reads without changes to the language or its run-time semantics. Despite this simplicity, we show how this design can still be used for tracking uniqueness and aliasing in a local sequential setting, with practical applications, such as modeling a stack.
Conrad Zimmerman, Catarina Gamboa, Alcides Fonseca, Jonathan Aldrich
2023-09-11T17:28:46Z
http://arxiv.org/abs/2309.05637v1
# Latte: Lightweight Aliasing Tracking for Java ###### Abstract. Many existing systems track aliasing and uniqueness, each with their own trade-off between expressiveness and developer effort. We propose Latte, a new approach that aims to minimize both the amount of annotations and the complexity of invariants necessary for reasoning about aliasing in an object-oriented language with mutation. Our approach only requires annotations for parameters and fields, while annotations for local variables are inferred. Furthermore, it relaxes uniqueness to allow aliasing among local variables, as long as this aliasing can be precisely determined. This enables support for destructive reads without changes to the language or its run-time semantics. Despite this simplicity, we show how this design can still be used for tracking uniqueness and aliasing in a local sequential setting, with practical applications, such as modeling a stack. **Keywords:** aliasing, uniqueness, ownership, java + Footnote †: Both authors contributed equally to this research. + Footnote †: Both authors contributed equally to this research. + Footnote †: Both authors contributed equally to this research. + Footnote †: Both authors contributed equally to this research. ## 1. Introduction From low-level languages like C to high-level programming languages like Python, the combination of mutability with aliasing has been the source of many bugs, warranting its own term (Aliasing Bug). Reasoning about aliasing is difficult, as it usually requires a global analysis of the program and its possible traces of execution. To overcome this challenge, the community has proposed type systems that track and restrict aliasing (Belleelle and Belle, 2013; Belle and Belle, 2013). In this very large design space, there are lines of work more focused on uniqueness, ownership and permissions. However, these proposals add complexity relative to ordinary type systems, and in some cases require developers to understand and reason about quite complex concepts. For example, ownership has been frequently mentioned as one of the hardest concepts to learn in the Rust programming language (Rust, 2013; Goyal et al., 2014). Thus, we propose a type system for uniqueness and aliasing that aims to be more usable and impose low overhead on developers. Moreover, we intend to keep our approach as simple as possible so that it can support the development of more complex type systems (like Liquid Types (Liquid, 2015)) that require reasoning about uniqueness and aliasing. In particular, we propose a type system for a subset of the Java language that tracks uniqueness and heap aliasing with low annotation effort and no necessary runtime changes. To handle uniqueness, we provide the developer with the simple invariant that _no two references in the heap point to the same unique object_. We require few annotations, specifically two (unique and shared) for object fields and return types, and three (adding owned) for method parameters, and we infer the remaining information for local variables. Moreover, we aim to maintain Java's runtime semantics unaltered, and therefore do not build in destructive reads; the programmer can, however, get a similar effect with an explicit assignment to null. In the remainder of the paper, we present an overview of previous work that is closely related to ours (Section 2), followed by our approach with the presentation of an example and the system's grammar and typing rules (Section 3). 
At the end, we discuss some of the system's limitations and future directions for this work (Section 4). ## 2. Related Work There are different approaches for managing aliasing in programming languages. One popular line of work focuses on ownership types (Belle and Belle, 2013), a type system that restricts access to objects according to their owners. There have been multiple proposals for ownership types with different flavors (Belle and Belle, 2013). Unfortunately, classic ownership types alone do not track aliasing within ownership boundaries, making it difficult for verification tools to precisely track the effect of assignments. Tracking uniqueness can provide strong guarantees about aliasing, which is useful for verification as well as safe manual memory management, e.g. as implemented in Rust [21]. However, more powerful systems such as Rust's (which is called "ownership," though it provides uniqueness in the sense described in the research literature) are known to be complex and difficult for developers to understand [11]. Uniqueness types [12] allow uniqueness properties to be specified as part of a data type. However, this approach is tailored to functional programming languages and requires a significantly different type system, thus its applicability to Java is limited. Other type systems for uniqueness focus on the use cases of concurrency and message passing. For example, Haller and Odersky [16] use capabilities [4] to add uniqueness and borrowing to a Java-like languages with a focus on message passing in concurrent object-oriented programming. They use a concept of _separate uniqueness_, where distinct variables do not share a common reachable object. Thus uniqueness is used to enforce separation, which is desirable for concurrent message-passing systems. More recently, Milano et al. [22] presented a new language and type system for safe concurrency by statically ensuring that different threads cannot access the same heap regions. Their proposal focused on reducing the annotation burden and eliminating the need for unnatural rewrites required by more restrictive programming models. Also aiming for a minimal set of annotations, LaCasa [15] adds uniqueness to the Scala language using object capabilities. However, this approach requires classes to adhere to the object-capability discipline, and their empirical evaluation showed that most classes from the standard library do not follow these rules. There are other approaches that focus on modeling different aspects of aliasing. Reachability types [2] uses _reachability sets_ to reason about ownership, and tracks reachable values using type qualifiers. This work layers uniqueness, nested mutable state and other concepts over the tracking of reachability sets. Unique accesses are enabled by killing all other access paths to a reference. Castegren and Wrigstad [6] combined many of the previously-mentioned concepts in their \(\kappa\) language. This language uses reference capabilities to ensure separation, and combines techniques from ownership types, linear types [25], and regions [14] in a concurrent and parallel object-oriented setting. The systems described above were not implemented for Java, however, and it is unclear how to do so, as they rely on language features that Java does not have, such as capabilities in Scala or a primitive swap operation. AliasJava [1] does extend Java with type annotations in Java that specify data sharing relationships. 
The type system includes four annotations: _unique_, _owned_, _lent_ and _shared_, and reduces the annotation burden by inferring annotations. Our approach is similar in spirit, but achieves greater simplicity by doing without ownership and ownership parameters, while allowing more local aliasing within a method. Many early systems for uniqueness used destructive reads, but these often negatively impact program complexity by reducing the ability to query information contained in objects [5]. Therefore, alias burying [3] aims to define a uniqueness system for Java-like languages without using destructive reads by relying on the idea that aliases that will not be used again can be buried. However, as noted by Boyland and Retert [5], the analysis described in the initial work [3] exposes implementation details, such as the fields read by a method, which breaks encapsulation and modularity. In summary, the prior work has one of three limitations: reliance on language or type system features not present in Java, modularity or coding pattern issues, or a larger and more complex set of abstractions for programmers to understand compared to our goals. All of these design choices raise the adoption cost for developers. In our approach, Latte, we try to address these difficulties by creating a lightweight uniqueness system with few annotations. Latte, which we present in the following section, requires no changes to the language semantics, and allows many common code patterns while precisely tracking aliasing. ## 3. Approach As we described previously, our design aims to impose minimal restrictions while enforcing unique references (in the heap) and tracking aliasing (in the stack). While our system is not as expressive as others in previous work, its main advantages are an easily-understood programming model and low annotation complexity. In particular, our design only requires annotations on fields and parameters (with only two and three possible choices, respectively). This burden can be further reduced by choosing sensible defaults. Local variables do not need annotations, as the aliasing between local variables and field values is inferred, which reduces the barrier to adopting this system. In this section, we first give a high-level description of our approach, and then use an example to build intuition about our model. We then formally define the typing rules on top of a Featherweight Java [18]-inspired core language and explain how these rules result in the intended behavior. Finally, we demonstrate the expressive power of our system with a more complex example. ### Description First, we restrict our definition of uniqueness to only consider reachable values on the heap, thus unique values may be stored in at most one reachable heap location, and aliased in the local environment. However, these _dynamic aliases_ [17] may only be used as long as such aliasing can be precisely inferred. Our treatment of dynamic aliases and unreachable heap locations is similar to that of alias burying [5]. In Lette, the annotation unique is used to identify unique values (as defined above). owned identifies borrowed values, since it is only used on method parameters whose value will be owned by some other context when entering the method body. shared identifies values that may or may not be unique. We aim to use our analysis in an automated verifier such as LiquidJava [13] to reason about mutation of unique (and borrowed) values. 
This requires precisely identifying all values that may be affected by a particular mutation. Our approach does this while permitting dynamic aliases, by inferring annotations of the form \(\operatorname{alias}(p)\) or \(\bot\) during type checking. These special annotations are only inferred; they are never written by the developer. For each local variable \(x\), our typing environment \(\Delta\) contains a class \(C\) and an annotation \(\alpha\) which describes the uniqueness of \(x\) at that point. The formal definition of \(\Delta\) is given later in Figure 3.

### Example

We illustrate the main features of our approach by implementing push and pop operations for a stack storing unique values. References to objects pushed onto the stack may not be stored on the heap anywhere else. This invariant could be used by an automated verifier to show that values pushed onto the stack will not be mutated until they are returned by pop. The code is shown in Figure 1.

Figure 1. Example: a stack for unique references

First, we demonstrate how the push method is validated. At each step, we have listed (in Figure 1) the current typing context \(\Delta\) to illustrate the verification process. The typing environment at the beginning of the method body contains the parameters and their types (line 17). Because the two variables r and n are declared, but not yet initialized at line 18, they are annotated with \(\bot\) to mark them inaccessible. r is aliased with this.root at line 19, and this aliasing information is added to the environment by annotating r with \(\operatorname{alias}(\operatorname{this.root})\). Next, this.root is assigned, which _isolates_ it in the typing environment (a process that we describe in §3.4.5). This invalidates all aliases to it, which allows the previously-aliased variable r to become unique and thus claim ownership. Note that lines 19-20 are equivalent to a destructive read. However, we do not need to change the language semantics or introduce new language constructs. Also, the need for this destructive read is easily understood: we want to store this.root in a different place in the heap, thus we need to remove its current value from the heap. Otherwise, our uniqueness invariant would be violated since the value at this.root (which is declared unique) would be stored in multiple reachable places in the heap. Continuing with our example, we initialize a new Node object at line 21. The constructor of Node has the signature (unique Object, unique Node), which states that it _consumes_ both arguments. This marks value and r as inaccessible (\(\bot\)) in the calling context. We encapsulate constructor bodies, thus we do not know what value or r may be aliased with after they are passed to the constructor. Since we cannot track this aliasing, any usage of these values is disallowed after they are passed to the constructor. Finally, we assign n to this.root at line 22. Since our typing context only stores annotations for local variables, and not for fields, we update the annotation for the local variable n, which is on the RHS of the assignment. The annotation alias(this.root) simply denotes that its target this.root contains the same value as the annotated variable n, thus it does not matter which side of an alias is annotated. After this line, \(\Delta\) indicates that n is aliased with this.root, thus we have precisely determined all local aliases to the unique value this.root, and ensured that this.root is stored at only one location on the heap.
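To make the evolution of \(\Delta\) in this walkthrough concrete, the following small Python sketch replays lines 17-22 of Figure 1 and prints the environment after each step. It is a toy model only: it is not an implementation of the typing rules presented below, it assumes that push takes a single parameter value annotated unique and that this.root is a unique field, and the helper names are ours.

```python
# Toy replay of the typing environment Delta for push (Figure 1, lines 17-22).
Delta = {}

def show(line):
    print(f"after line {line}:", ", ".join(f"{v}: {a}" for v, a in Delta.items()))

def isolate(path):
    """Overwriting `path` drops local aliases to it; since the path held a unique
    value, a variable that was aliased to it becomes unique itself."""
    for v, a in list(Delta.items()):
        if a == f"alias({path})":
            Delta[v] = "unique"

Delta["value"] = "unique"; show(17)               # parameters enter Delta
Delta["r"] = "bot"; Delta["n"] = "bot"; show(18)  # declarations start inaccessible
Delta["r"] = "alias(this.root)"; show(19)         # r = this.root
isolate("this.root"); show(20)                    # this.root = null (destructive read)
Delta["value"] = "bot"; Delta["r"] = "bot"        # new Node(value, r) consumes both
Delta["n"] = "unique"; show(21)
Delta["n"] = "alias(this.root)"; show(22)         # this.root = n
```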
This first method gives an overview of our approach; the second method (pop, at line 32) will be presented in Section 3.4.7 after the grammar and typing rules are introduced.

### Grammar

For our grammar, presented in Figure 2, we extended Featherweight Java [18] with statements, including field and variable assignments as in Java, to better approximate the Java language in terms of mutability (a key concern in this paper). To model our particular system, we added the unique, shared, and owned annotations. All fields (\(F\)) must be annotated with either unique or shared (\(\alpha_{f}\)), while method parameters (in \(M\)) must be annotated with one of the three annotations (\(\alpha_{p}\)). Note that variable declarations are not annotated (first production of \(s\)). Our owned annotation is often called _lent_ or _borrowed_ in other systems. Our choice reflects the state of the value within the method body - the value is owned by some other context.

Figure 2. Grammar extended from Featherweight Java

### Typing rules

Our typing rules use a local type environment \(\Delta\). This environment maps variables to an annotated class (\(\alpha\)\(C\)), where the annotation specifies the current aliasing or uniqueness information. The form of \(\Delta\) is given in Figure 3. A shared annotation denotes that the variable can be accessed by outside objects - untracked aliases may exist. owned denotes that the value of the variable is borrowed; specifically, its value is unique in the current context and no new aliases may be added to the heap. unique denotes ownership - the value is only stored at this location (modulo precisely-tracked dynamic aliases). A local variable annotated unique may be converted to shared, based on its usage. \(\bot\) denotes that the value is inaccessible.

Figure 3. Typing environment and annotations used in typing rules

#### 3.4.1. Aliasing

Aliasing between variables and fields is tracked by entries of the form \(x:\text{alias}(p)\) in \(\Delta\). This denotes that \(x\) stores the same value as the _path_ (a variable or some field access) \(p\). Two paths are aliased iff they reference the same object, thus aliasing is an equivalence relation. This is encoded by the judgment \(\Delta\vdash p_{1}\equiv p_{2}\), which denotes that \(\Delta\) indicates that the path \(p_{1}\) is aliased with \(p_{2}\). Formal rules are given in Appendix A.4. Given an environment \(\Delta\), we define its _alias graph_ to be an (undirected) graph whose nodes are syntactic paths (\(p\) as defined in Figure 2), and distinct paths \(p_{1}\) and \(p_{2}\) are connected iff \(\Delta\vdash p_{1}\equiv p_{2}\). Each component of this graph may contain at most one path annotated with owned, unique, or shared. Intuitively, the alias graphs for each program point (which identify allowable aliasing), along with validation of uniqueness invariants (which ensures that no other aliases of unique values exist), are the primary product of our analysis. This output can then be used to automatically verify the effects of mutation or, more generally, separation invariants.

#### 3.4.2. Side note: concurrency

Since we are tracking aliases across multiple statements, and our alias annotations may point to mutable heap locations, it may seem challenging to handle concurrency. However, we only claim to precisely track aliases of unique or borrowed values, i.e. expressions for which \(\Delta\vdash e:\text{owned}\ C\dashv\Delta^{\prime}\) holds for some \(C\) and \(\Delta^{\prime}\).
Intuitively, if this holds for a variable \(x\), and \(x\) is aliased to \(y.f\), then \(y.f\) is also unique, which requires \(y\) to be unique. In other words, either the current context or some calling context is the sole owner of \(y\), and thus of the heap location \(y.f\). (This reasoning may be extended for \(y.f.g\), etc.) Therefore we can determine all mutations that would affect this alias relation. In other words, assuming soundness of our approach for sequential programs, it should remain sound for concurrent programs, as long as unique values accessible to the spawned thread are consumed after a fork operation.

#### 3.4.3. Reachable aliasing

\(\Delta\vdash p_{1}\approxeq p_{2}\) denotes that a value reachable from \(p_{1}\) (for example, \(p_{1}.f\)) may be aliased with a value reachable from \(p_{2}\) (for example, \(p_{2}.f.g\)). Formal rules are given in Appendix A.5. If \(\Delta\not\vdash p_{1}\approxeq p_{2}\) then \(p_{1}\) and \(p_{2}\) are _separately unique_, as defined by Haller and Odersky [16].

#### 3.4.4. Expression typing

\(\Delta\vdash e:\alpha_{e}\ C\dashv\Delta^{\prime}\) denotes that \(e\) may be used as a value with class \(C\) and ownership annotation \(\alpha_{e}\), provided that all future typing uses the \(\Delta^{\prime}\) typing environment. Formal rules are given in Figure 4. \(\Delta\vdash e:\operatorname{unique}(p.f)\dashv\Delta^{\prime}\) denotes that \(e\) refers to a unique value (as defined in §3.1), and \(e\) is aliased with \(p.f\) in \(\Delta^{\prime}\). This is used to validate assignments to unique fields, since the assignee is a field whose annotation is not stored in \(\Delta\). Thus aliasing information is tracked by annotating the assignment value, instead of annotating the assignee. For a variable \(x\) annotated with \(\operatorname{alias}(p)\), \(\Delta\vdash x:\alpha_{e}\ C\dashv\Delta^{\prime}\) holds if and only if \(\Delta\vdash p:\alpha_{e}\ C\dashv\Delta^{\prime}\) - in other words, aliased variables may be used exactly as the path they alias may be used. When borrowing a unique field value (i.e. passing the value to a parameter annotated as owned), the object reference must also be unique. For example, if we have variables \(x:\text{shared }C\) and \(y:\text{shared }C\), and \(C\) contains a unique field \(f\), we cannot borrow \(x.f\) because we do not know whether the same heap location is already borrowed through \(y.f\). Thus one can introduce aliases to a unique field of a shared value, but those values can only be used after a destructive read or some equivalent operation. Finally, a value of a subtype \(C\) may be used as a value of the supertype \(D\) with the same annotation.

#### 3.4.5. Isolation

\(\Delta*p\dashv\Delta^{\prime}\) denotes that \(p\) is _isolated_ from \(\Delta^{\prime}\) - all references to \(p\) contained in \(\Delta\) are removed in \(\Delta^{\prime}\). \(\Delta^{\prime}\) represents a state where \(p\) is assigned a new value, thus all aliases to \(p\) in \(\Delta\) should be removed in \(\Delta^{\prime}\). Moreover, if \(p\) represented a unique value and a variable \(x\) was aliased to \(p\), \(x\) contains a unique value after \(p\) is overwritten. Thus destructive reads are accomplished by first introducing an alias to \(p\), and then overwriting \(p\) with a different value, such as \(\operatorname{null}\). Given an environment \(\Delta\), we define its _reference graph_ to be a (directed) graph whose nodes are syntactic paths.
An edge \(x\to p\) exists iff \(\Delta\) contains an annotation \(x:\operatorname{alias}(p.\cdots)\). Intuitively, the origin of an edge in the reference graph identifies a variable whose annotation requires updating when its target is mutated. Unlike the alias graph defined in §3.4.1, this graph is not symmetric or transitive. For example, in the pop method, after line 30 we have the annotation value : alias(this.root.value). Thus the reference graph contains the edge \(\operatorname{value}\rightarrow\operatorname{this.root}\). If the value of this.root is changed, we must determine a new annotation for value. Formal rules are given in Appendix A.7. The rules deal with three main cases:

1. No node in either the reference graph or the alias graph is connected to \(p\). \(\Delta\) is unchanged, except to remove \(p\) from \(\operatorname{dom}(\Delta)\). (See the I-Remove* rules.) This case is applied during validation of line 30 in Figure 1 to remove the initial annotation \(\operatorname{value}:\bot\).
2. For some variable \(x\) (distinct from \(p\)), \(x\) is aliased with \(p\). In this case, all paths rooted in \(p\) may be replaced by paths rooted in \(x\). (See the I-Replace* rules.)
3. \(p\) is disconnected in the alias graph, but \(p\) is the target of an edge in the reference graph. In this case, we can isolate the subfield that induces this edge (such as \(p.f\)) before isolating \(p\). (See the I-Elim* rules.) This case is applied during validation of line 33 in Figure 1 when isolating this.root. The annotation of value is updated from alias(this.root.value) to unique.

Figure 4. Expression usage typing rules

#### 3.4.6. Framing

When a reference is passed to a method, any field of the referenced object can be modified. The _frame_ of a method call contains all such fields. Any aliases to fields in the method's frame must be invalidated after the method call; this is expressed by a judgment of the form \(\Delta\star\overline{e}\dashv\Delta^{\prime}\).

### Extended example

The dequeue method in Figure 6 allows the stack to be used as a FIFO queue. The recursive traversal of the linked list is handled by the dequeueHelper method. Note that the entire list is traversed, and the tail modified, using only a single destructive read. This is enabled by borrowing the unique value this.root, and in turn each next node. The value of an owned parameter is guaranteed to be unique, but its value may not be consumed or placed on the heap. Thus dequeueHelper guarantees that no additional aliases to n will be introduced. However, the contents of an owned value may be modified, which allows the tail Node to be removed. Also note that the unique value n.next.value is read without an explicit destructive read at line 16. Instead, it is known to be unique since its container (n.next) is isolated at line 17. In a survey of related work, Milano et al. [22] found that ownership systems often require explicit destructive reads at each step when traversing and modifying a linked list as in this example. However, our isolation technique, combined with local aliased values and borrowing, eliminates this requirement while allowing common code patterns and requiring few annotations.

## 4. Future Work

While the design decisions were guided by making this system usable by developers, we would need to implement the system and evaluate it in larger examples with users.
A comparison of the effort in different examples with the alternatives in the related work would also be interesting, to confirm whether our invariant helps developers to annotate their code. Additionally, our current approach includes only a core set of Java features, which we would like to extend to include while loops. One of the motivations of this work was to introduce enough information to reason about mutability to support Liquid Types in a mutable context. Flux (Kumar et al., 2019) took the first steps in this area, by using Rust's ownership types in combination with a Liquid Type System. Our proposal targets the Java language instead, and serves as the basis to extend LiquidJava [13] to better model aliasing and uniqueness combined with refinements. Because Liquid Types supports a logic-based version of symbolic execution, the information from refinements could be used in the unification to provide more precise alias tracking, instead of the conservative invalidation we adopted.

Figure 6. Example: a dequeue method for the Stack class from Figure 1

## 5. Conclusion

We have described Latte, a simple type system for uniqueness and aliasing for Java, which prioritizes usability and low development overhead. Our vision is that more complex type systems may utilize the uniqueness and aliasing information determined by Latte. Latte enforces (and requires consideration of) simple invariants of values on the heap, imposes a low annotation burden, and requires no changes to existing Java semantics. Our simple uniqueness invariant indicates that a unique object is stored at most once on the heap. In addition, all usable references to a unique object from the local environment are precisely inferred. The developer only needs to annotate field declarations and the parameters and return types of method declarations, using one of unique, owned or shared. While it may lack the expressive power of related approaches, we hope that Latte provides a lower barrier to entry for existing Java developers, thus enhancing the appeal of automated verification tools built on Latte. Further evaluation of its usability, along with the development of such verification tools, is required to validate this goal.

## 6. Acknowledgments

This work was supported by the National Science Foundation under Grant No. CCF-1901033 and by the Algorand Centres of Excellence programme managed by Algorand Foundation. The work is also co-financed by a Dual Degree Ph.D. Scholarship awarded by the Portuguese Foundation for Science and Technology through the Carnegie Mellon Portugal Program, through the RAP project (EXPL/CCI-COM/1306/2021) and through the LASIGE unit (UIDB/00408/2020 and UIDP/00408/2020). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF or the Algorand Foundation.
2309.17233
Open Hardware Solutions in Quantum Technology
Quantum technologies such as communications, computing, and sensing offer vast opportunities for advanced research and development. While an open-source ethos currently exists within some quantum technologies, especially in quantum computer programming, we argue that there are additional advantages in developing open quantum hardware (OQH). Open quantum hardware encompasses open-source software for the control of quantum devices in labs, blueprints and open-source toolkits for chip design and other hardware components, as well as openly-accessible testbeds and facilities that allow cloud-access to a wider scientific community. We provide an overview of current projects in the OQH ecosystem, identify gaps, and make recommendations on how to close them today. More open quantum hardware would accelerate technology transfer to and growth of the quantum industry and increase accessibility in science.
Nathan Shammah, Anurag Saha Roy, Carmen G. Almudever, Sébastien Bourdeauducq, Anastasiia Butko, Gustavo Cancelo, Susan M. Clark, Johannes Heinsoo, Loïc Henriet, Gang Huang, Christophe Jurczak, Janne Kotilahti, Alessandro Landra, Ryan LaRose, Andrea Mari, Kasra Nowrouzi, Caspar Ockeloen-Korppi, Guen Prawiroatmodjo, Irfan Siddiqi, William J. Zeng
2023-09-29T13:32:38Z
http://arxiv.org/abs/2309.17233v2
# Open Hardware in Quantum Technology ###### Abstract Quantum technologies such as communications, computing, and sensing offer vast opportunities for advanced research and development. While an open-source ethos currently exists within some quantum technologies, especially in quantum computer programming, we argue that there are additional advantages in developing open quantum hardware (OQH). Open quantum hardware encompasses open-source software for the control of quantum devices in labs, blueprints and open-source toolkits for chip design and other hardware components, as well as openly-accessible testbeds and facilities that allow cloud-access to a wider scientific community. We provide an overview of current projects in the OQH ecosystem, identify gaps, and make recommendations on how to close them today. More open quantum hardware would accelerate technology transfer to and growth of the quantum industry and increase accessibility in science. ## I Introduction The free and open exchange of scientific tools has become more and more important as automation and devices increase their role into scientific fields. The past five years have witnessed an explosion of open-source tools for programming quantum computers. The open nature of these software tools has substantially increased the number of users of quantum computers, and created a new genre of programmer: the "quantum software engineer" [1; 2; 3; 4; 5; 6]. This has shaped the development of quantum computing as a whole, leading to the creation of new organizations, job categories, and career paths. Less attention has been paid to the tools and components actually used to build and control quantum computers - and quantum technologies such as communications and sensing more broadly - as well as efforts making quantum computers more accessible in non-commercial ways. We use the term "open quantum hardware" (OQH) to cover them, and intend for it to explicitly encompass several steps related to the openness associated to hardware in quantum technology, including: (1) open-source software (OSS) for designing quantum processors and other hardware components (used for computation, but also for simulation, sensing, and communication), (2) foundries and facilities for fabricating quantum processors, (3) software for controlling, analyzing and characterizing quantum devices, and (4) software and hardware for making the infrastructure that enables various levels of open-access - from cloud-accessible quantum processors to collaborative testbeds providing non-commercial access and testing, from remotely accessible research labs to the related cloud infrastructure. These steps covering the life cycle of OQH and their overlapping relations are sketched in Fig. 1. In this article, we provide an overview of efforts in each of these categories of open quantum hardware (OQH), highlight notable projects in each category, identify gaps, and provide recommendations for closing them. A major theme in this review is that there is much opportunity to promote interoperability, reduce cost, and increase the number of users of quantum technologies. Enabling the open quantum hardware ecosystem is both the natural complement to the open quantum software ecosystem (which is itself quite robust and sophisticated [2]), as well as the natural extension of efforts to open up classical hardware. 
This ecosystem also represents a prime opportunity for entities building hardware (of all kinds) to engage in the kind of beneficial, pre-competitive activity which boosts the ecosystem as a whole (and consequently, those builders themselves). A first generation of projects in open quantum hardware, such as ARTIQ and pyEPR [7; 8], has pioneered a free and open dissemination of tools. We are now witnessing the beginning of a more robust and sophisticated OQH ecosystem [9; 10] which can enable and accelerate scientific progress and discovery on a wider scale. On the one hand, this ecosystem can further extend beyond quantum computing, on which it is mostly focused, to encompass quantum technologies such as quantum sensing and communications. On the other hand, it can further integrate and benefit from the adoption and integration of existing non-quantum open hardware projects and frameworks [11]. What's more, given the substantial investments made around the world [12] - especially in Australia [13], Eurasia [14; 15; 16; 17; 18; 19], and North America [20; 21; 22; 23] - to support the development of a quantum technologies industry, ensuring those investments yield the best possible fruit is of great importance. One way to do so is through encouraging the development of open communities and ecosystems [24; 25; 26; 27; 28; 29; 30; 31; 32; 33] around quantum technology. To describe the potential of OQH, we can look at the open quantum software ecosystem, which has flourished within the past five years, thanks to seeds planted over ten years ago. As a result, researchers (theorists and experimentalists alike) as well as quantum software engineers worldwide can use open-source software to advance quantum technology and quantum science in many directions, generally building upon an existing stack. An illustrative case is that of the Quantum Toolbox in Python (QuTiP) [34; 35], first released in 2012, which enables the exploration of the effects of noise on a variety of quantum systems interacting with the environment. Building on top of QuTiP, several other tools have emerged, focusing on specific niches such as nontrivial system-environment dynamics [36; 37], notably getting closer and closer to the simulation of QPUs and their diagnostics [38; 39]. When looking at similar efforts in the classical open hardware space, the astounding success of the Arduino project [40] (microcontrollers) as well as the Raspberry Pi [41] (single-board computers) speaks to the power of putting open hardware in the hands of end-users. Figure 1: **Overarching diagram of the Open Quantum Hardware steps and their interconnection**. 1: Design phase, which can involve a loop between simulation of the Hamiltonian and electromagnetic (EM) simulations used to define the QPU design and layout. 2: Fabrication step, e.g., through a foundry. 3: Installation and bring-up, which includes testing and characterization; information collected at this step can be fed back to step 1 to modify designs. 4: Sustained operation, which overlaps with step 3 and is composed of data acquisition (with control and calibration), and can involve interfacing with infrastructure to provide cloud access to the device or experiment. We can reasonably expect similar benefits within the quantum hardware ecosystem. Compared to only open-source software, an open hardware ecosystem also provides legibility, transparency, and reproducibility of hardware devices, a crucial consideration for supply chains and their management.
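To give a concrete taste of the open-source simulation layer mentioned above, the following minimal sketch uses QuTiP to simulate a qubit relaxing under environmental noise; the frequency and decay rate are illustrative values, not tied to any particular device.

```python
# Minimal QuTiP sketch: energy relaxation (T1 decay) of a qubit coupled to its
# environment, solved with the Lindblad master equation. Values are illustrative.
import numpy as np
from qutip import basis, destroy, mesolve, sigmaz

omega = 2 * np.pi * 5.0        # qubit frequency (arbitrary units)
gamma = 0.05                   # energy relaxation rate

H = 0.5 * omega * sigmaz()                 # bare qubit Hamiltonian
c_ops = [np.sqrt(gamma) * destroy(2)]      # collapse operator modeling T1 decay
psi0 = basis(2, 1)                         # start in the excited state
tlist = np.linspace(0, 100, 400)

result = mesolve(H, psi0, tlist, c_ops, e_ops=[sigmaz()])
print(result.expect[0][::100])             # <sigma_z>(t) relaxing toward +1
```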
Finally, the pre-competitive activity which can take place by opening up quantum hardware extends these benefits by mobilizing institutional momentum from key players who can help the ecosystem adopt a broad ethos of openness, interchangeability, interoperability, and benchmarking, as fostered by bodies like the Quantum Economic Development Consortium (QED-C) in the USA and similar entities in other geographies [42, 43, 44]. In doing so, this allows for additional adoption and use of quantum hardware in academic labs and testbeds (a point returned to in Sec. II.4.2). This article is organized as follows: Sec. II gives a high-level overview of the state of the art in OQH, divided among blueprints and software for hardware design (Sec. II.1), and software for control and data acquisition (Sec. II.2), itself divided into data acquisition (§ II.2.1), pulse-level control on hardware (§ II.2.2), pulse-level simulation (§ II.2.3), and optimal control, calibration and characterization (§ II.2.4). In Sec. II.3 we provide an outlook on the current status and specific needs of OQH for quantum error correction. In Sec. II.4 we review facilities for OQH, such as remotely accessible labs (Sec. II.4.1), testbeds (§ II.4.2) and foundries (§ II.4.3). Throughout Sec. II's subsections, we focus on some example projects to bring definiteness to the discussion. In Section III we discuss gaps in OQH and make recommendations to close them. Finally, in Section IV, we give our conclusions. ## II Open Quantum Hardware Today Open quantum hardware encompasses open-source software that is used for designing, analyzing, building, controlling, and programming quantum chips, foundries which build quantum chips, and cloud-accessible labs and testbeds that provide alternative, non-commercially-driven access to them. In this section, we review the state of open quantum hardware along each of these categories. It should be noted that within each of these categories, there can exist multiple stacks - configurations of components which are all inter-operable with one another, but not necessarily with components used in a different stack. Ensuring inter-operability of components across stacks would help ensure a more robust and sophisticated ecosystem. In Table 1 we summarize the different categories, providing examples of existing open hardware projects. The particular choices of components are again meant to be representative.
\begin{table} \begin{tabular}{l l} \hline \hline \multicolumn{1}{c}{**Functionality**} & **Examples** \\ \hline Processor Design & DASQA [45, 46], KQCircuits [47], PainterQubits/Devices.jl, pyEPR [8], Qiskit Metal [48], QuCAT [49] \\ Simulation and diagnostics & KQCircuits [47], Pulser [50], Qiskit Metal [48], QuTiP [34, 35], QuTiP-QIP [39], sc-qubits [38] \\ Control and data acquisition & ARTIQ [7], Duke-ARTIQ [51], QUA [52], QCoDeS [53], QICK [54], Quantify [55], QubiC [56], qupulse [57], Sinara Open Hardware [58, 59] \\ Remotely Accessible Labs & Forschungszentrum Julich through OpenSuperQ [60], Quantum Inspire [61] \\ Testing (testbeds) & Lawrence Berkeley National Lab's AQT, Sandia National Labs' QSCOUT [62], Sherbrooke's Distriq DevTeQ \\ Fabrication (foundries) & LPS Qubit Collaboratory, UCSB quantum foundry, QuantWare \\ \hline \hline \end{tabular} \end{table} Table 1: Categories of open quantum hardware, with representative examples of existing projects.
### Blueprints and software for hardware design Blueprints for hardware design are classic examples of "open hardware" [63; 64; 65; 66; 67; 68; 69; 70; 71]. In modern device design, software tools have been developed to tackle various aspects of this task, known as computer-aided design (CAD). In quantum computing research, while quantum device design is often published in peer-reviewed papers and micrographs are included in the supporting materials and figures, generally, CAD drawings are not shared in the open. Notable exceptions include pyEPR [8], Qiskit Metal [48], and KQCircuits [47], three projects enabling various capabilities for chip design of superconducting-circuit (SC) qubits. Superconducting circuits incorporating non-linear devices, such as Josephson junctions and nanowires, are among the leading platforms for emerging quantum technologies. Using pyEPR, one can design and optimize SC circuits and control dissipative and Hamiltonian parameters in a systematic, simple, and robust way. This reduces the number of required ab-initio simulations. pyEPR has been used on a variety of circuit quantum electrodynamics (cQED) devices and architectures, from 2D to 3D, including "2.5D" (flip-chip), demonstrating 1% to 10% agreement for non-linear coupling and modal Hamiltonian parameters over five orders of magnitude and across a dozen samples. Finite Element Methods (FEM) simulations, such as those shown in Figure 2, can be obtained with pyEPR using the energy participation ratio (EPR) approach. EPR unifies the design of dissipation and Hamiltonians around a single concept -- the energy participation, a number between zero and one -- in a single-step electromagnetic simulation. After the FEM simulations have validated the qubit properties (and possibly its coupling to a cavity), one is ready to fabricate the SC QPU. To this end, a photomask is generated to place the SC qubits on the substrate of a processor. A typical CAD drawing of a superconducting flip-chip QPU and an example mask generated by KQCircuits [47] are shown in Figure 3, and more information about KQCircuits is given in the frame below (_Example 1_). Figure 2: **Finite Element Methods (FEM) chip simulations.** Design and FEM simulations of cavity (left) and qubit (right) modes, using pyEPR [8]. The contour plot of cavity modes displays the intensity of the electric field at 9.0 GHz. The current-density magnitude of a transmon qubit, showing transmon pads connected by a Josephson junction [adapted from Ref. [8]]. _Example 1. Hardware Design._ **KQCircuits: KLayout Python library for integrated quantum circuit design.** KQCircuits [47] is an open-source Python library created by IQM, a full-stack quantum computing startup, for designing superconducting quantum circuits. It automates the layout and simulation part of the design process, outputting chip layouts and photomask files in OASIS or GDSII formats, the standard formats for specifying photomask data structures and integrated-circuit layouts. The layout files are sent to a mask manufacturer to produce a physical mask which is then used for the fabrication process.
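As a flavour of what code-driven layout generation looks like, the sketch below uses the plain KLayout Python API (on top of which KQCircuits builds its parameterized elements) to place two toy pads in a cell and write a GDSII file; the geometry, layer numbers, and the `pad` helper are purely illustrative and are not KQCircuits element classes.

```python
# Minimal KLayout scripting sketch (plain KLayout API, not KQCircuits' element
# classes): build a toy two-pad geometry in code and export it as a GDSII file.
import klayout.db as kdb

layout = kdb.Layout()
layout.dbu = 0.001                      # database unit: 1 nm
top = layout.create_cell("DEMO_CHIP")
metal = layout.layer(1, 0)              # layer 1, datatype 0 (illustrative choice)

def pad(x_um, y_um, size_um=300.0):
    """Return a square pad as a Box in integer database units (illustrative helper)."""
    to_dbu = lambda v: int(round(v / layout.dbu))
    x, y, s = to_dbu(x_um), to_dbu(y_um), to_dbu(size_um)
    return kdb.Box(x, y, x + s, y + s)

# Two capacitor-like pads; in KQCircuits these would be parameterized "elements".
top.shapes(metal).insert(pad(0.0, 0.0))
top.shapes(metal).insert(pad(400.0, 0.0))

layout.write("demo_chip.gds")           # GDSII output, ready for mask tooling
```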
As a part of the mask export process, KQCircuits also produces other files such as netlists, to help with design verification. While KQCircuits itself cannot be used to perform simulations, it can be used to export automated simulation scripts and files for popular electromagnetic field simulation toolkits, such as Ansys HFSS/Q3D [72], pyEPR [8] (Energy Participation Ratio simulation), Sonnet [73], and Elmer [74]. These simulations use the same parameterized geometry as the physical mask layouts produced by KQCircuits. A comparison between KQCircuits and Qiskit Metal is provided in Table 2. KQCircuits generates multi-layer 2-dimensional geometry representing common structures in quantum processing units (QPU). An important part of KQCircuits are the definitions of parameterized geometrical objects, or "elements", and a framework for easily defining new ones. Generating the designs using code makes it easy to quickly create different variations of designs and helps to avoid costly human errors. The combination of many elements into a full QPU design is made easier by features such as named reference points used for automatic positioning of elements relative to each other and Graphical User Interface (GUI) editing. Furthermore, KQCircuits includes a library of premade chips, many of which have been manufactured and used for testing at IQM. KQCircuits works on top of KLayout [11], an open-source layout design software that is mainly used for classical electronics. Advantageously, existing KLayout functionalities developed for classical electronics can be reused for quantum hardware, while KQCircuits only needs to add features specific to QPU design. This connection with KLayout can also help to bring together the wider electronics open-source hardware community and the open quantum hardware community. In terms of remaining challenges for chip design tools, we note that computational electromagnetic simulation of quantum circuits is still heavily reliant on the use of proprietary and expensive tools such as Ansys HFSS, Sonnet, or CST Studio (full-wave and capacitance simulations in particular). Open-source tools such as openEMS, MEEP, or Scuff-EM, which are used elsewhere in the field of microwave simulation, find very little adoption and application in quantum device design. Notable open-source exceptions are Elmer [74] and Palace [75]. Elmer is an open-source parallel multi-physics Finite Element Methods (FEM) software used for quantum device simulation on desktop and High Performance Computing (HPC) systems, e.g., capacitance matrices of 3D layouts, cross sections of layouts, and London equations; in the framework of OpenSuperQPlus (European Open-Access Quantum Computer Project [60]) it is developed in partnership between CSC and IQM. Figure 3: **Photomask layout and chip design.** An open-source photomask (left) provided in KQCircuits as a demo example. An individual SC quantum processor layout (right) based on flip-chip technology, where blue is one substrate, red is the other substrate and green denotes the bump bonds. The chip contains four capacitively coupled transmons and two buses for multiplexed readout. The code for the mask and for the chip is available online. Elmer along with Gmsh [76] has also been integrated as a backend for mesh generation and FEM simulation in Qiskit Metal.
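To illustrate the mesh-generation step that backends such as Gmsh provide for open-source FEM solvers like Elmer, a minimal Gmsh scripting sketch is shown below; the rectangle stands in for a 2D cross-section of a chip region, and the dimensions and mesh size are arbitrary.

```python
# Minimal Gmsh scripting sketch: mesh a toy 2D rectangular domain (standing in for
# a chip cross-section) and write the mesh to disk for an FEM solver such as Elmer.
# Dimensions and mesh size are arbitrary, illustrative values.
import gmsh

gmsh.initialize()
gmsh.model.add("chip_cross_section")

# A 500 um x 350 um rectangular domain in the OpenCASCADE kernel.
gmsh.model.occ.addRectangle(0.0, 0.0, 0.0, 500e-6, 350e-6)
gmsh.model.occ.synchronize()

gmsh.option.setNumber("Mesh.CharacteristicLengthMax", 20e-6)  # max element size
gmsh.model.mesh.generate(2)                                   # 2D triangular mesh
gmsh.write("chip_cross_section.msh")

gmsh.finalize()
```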
Palace (Parallel, Large-scale Computational Electromagnetics) [75] is an open-source parallel FEM software capable of full-wave electromagnetics simulations, developed at AWS, with out-of-the-box support for the use of large-scale cloud HPC resources. \begin{table} \begin{tabular}{p{56.9pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline **Software** & **Purpose** & **Simulations** & **Circuit analysis** & **Input/output (I/O)** \\ \hline Qiskit Metal & Full-stack quantum processor design. & Ansys HFSS/Q3D; work in progress on open-source EM renderers. & EPR, impedance, quasi-lumped LOM, lumped. & Python based. Connects to Ansys, GDS, etc. by plugins. \\ KQCircuits & Superconducting QPU layout and photomask design on top of KLayout. & Exports simulation scripts and files for Ansys HFSS/Q3D, Sonnet, Elmer and pyEPR. & Via exported simulation tools. & Python based (KLayout). Outputs OASIS/GDSII layouts and netlists. \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison between KQCircuits and Qiskit Metal. ### Control and data acquisition In this section, we discuss the features and components involved in the execution and readout of quantum experiments. This field is not easily defined, as the operation of a QPU or quantum technology experiment is informed already at the highest level with the definition of abstract instructions, e.g., for quantum programming, gates in the form of a quantum circuit in a high-level software development kit (SDK). To narrow the field, we focus here on control that gets "closer to the metal", e.g., lower than gate-level quantum circuit compilation in quantum computing. This field is already populated by a number of projects, spanning from pulse-level simulation of devices for diagnostics, characterization and calibration, to pulse-level control and data acquisition during experiment runs [77, 78, 79, 34, 38, 39, 53, 54]. We highlight that core to these tasks is the exchange of data through application programming interfaces (APIs). #### ii.2.1 Data acquisition Quantum hardware for control and readout of quantum systems includes a user interface to operate the instrument, for instance, through a touch-based front panel or remotely accessible GUI. However, to implement more complex tasks such as characterization, tuning, calibration and general operation of QPU elements that involve coordination and timing between several different pieces of programmable hardware, it is essential to add a software layer that runs on a control computer and communicates with hardware directly through a programming interface. This software layer orchestrates the flow of control and data acquisition commands and is responsible for the overall operation of the QPU via the quantum hardware. On the software side, there exist quite a few alternatives for QPU control and data acquisition, starting with QCoDeS [53], which is a Python framework developed by Microsoft Quantum. Another example of control and data acquisition software is ARTIQ [7], which is also used to control hardware, as discussed in Sec. II.2.2. Control and data acquisition software encompasses various elements and instruments: the quantum device, the firmware that interconnects with the instrument drivers, and parameters that are controlled and measured to generate datasets, which are stored, analyzed and explored via data visualization tools, as depicted in Fig. 4. To achieve frictionless control of a QPU, the software typically abstracts away the hardware elements using drivers. For example, in the QCoDeS library [53], an instrument driver is a Python class that implements methods to operate the hardware using commands that are sent to the device via the programming interface.
Namely, a quantum hardware instrument, such as an arbitrary waveform generator, can be used to generate a radio frequency pulse for a rotation of a qubit, e.g., an X gate. In order to program this pulse, the driver implements parameters such as frequency, amplitude, phase and pulse time. The driver will then use these values to construct the right programming commands to send to the instrument to set up the pulse. The QCoDeS documentation [53] contains driver examples to control over 50 instruments. The data acquisition software uses instrument drivers to form an abstraction layer over the several pieces of hardware that control a QPU. Parameters can also be used in higher levels of abstraction on top of instrument parameters to represent QPU device elements that are controlled and read via the hardware layer, such as a qubit readout circuit. The parameters are then used in automated measurement routines with the goal of characterizing, calibrating or programming a QPU or its different elements. This requires saving, analyzing and visualizing the data that is recorded by these instruments in a dataset. Most data acquisition software frameworks therefore also include data storage abstractions and data visualization tools. In order to control the QPU elements using quantum programming instructions, the control signals need to be calibrated for optimal gate and read-out fidelity. As shown in Fig. 1, calibration values are obtained via device testing and calibration measurements using the data acquisition software, analysis routines and data storage in the cloud. More details can be found in section II.2.4 on optimal control, calibration and characterization for high-fidelity quantum operations.
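To make the driver/parameter abstraction concrete, below is a minimal QCoDeS-style sketch; the "signal generator" instrument and its parameters are hypothetical, but defining parameters with `add_parameter` and calling them to set or get values is the standard QCoDeS pattern (assuming a recent QCoDeS release).

```python
# Minimal QCoDeS-style driver sketch. The instrument and its parameters are
# hypothetical; add_parameter with set_cmd/get_cmd is how real QCoDeS drivers
# expose settable/gettable quantities such as pulse frequency and amplitude.
from qcodes.instrument import Instrument


class ToySignalGenerator(Instrument):
    """Hypothetical microwave source used to drive a single-qubit rotation."""

    def __init__(self, name, **kwargs):
        super().__init__(name, **kwargs)
        # With set_cmd/get_cmd set to None the values are held in software,
        # which is enough to illustrate the abstraction without real hardware.
        self.add_parameter("frequency", unit="Hz", set_cmd=None, get_cmd=None)
        self.add_parameter("amplitude", unit="V", set_cmd=None, get_cmd=None)


mw = ToySignalGenerator("mw_source")
mw.frequency(5.1e9)          # set the drive frequency for an X gate
mw.amplitude(0.25)
print(mw.frequency(), mw.amplitude())
mw.close()
```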
A way to approach the software-hardware interface is through an Instruction Set Architecture (ISA) [94], as is done in classical computing. The ISA acts as a contract between hardware and software by explicitly describing the set of available operations, machine instruction format, supported data types, registers, and memory addressing. A well-designed ISA can present a very compact and efficient method to access the specialized features of a computing device. Graphics Processing Units (GPUs) [95], and their success in becoming an integral part of many modern devices, are an example of instruction set design and standardization leading to widespread adoption of a specialized architecture. There are a few quantum ISA implementations [96; 97; 98], with some that explore this control and data acquisition approach, such as QUASAR [99] developed at the Advanced Quantum Testbed (AQT), Lawrence Berkeley National Laboratory (LBNL). QUASAR is based on the RISC-V ISA [100] - an open-source architecture that revolutionized the classical computing field and continues expanding toward emerging technology applications. QUASAR has been demonstrated in conjunction with QubiC [83] (QubiC is discussed separately in section II.2.2, below), executing experiments, including mid-circuit measurement and feed-forward, on superconducting quantum processors (QPUs) at AQT, LBNL. Figure 4: **Control and data acquisition.** The diagram shows some of the most important elements in the software and hardware stack involved in a quantum experiment. The hardware part of the stack includes the device under test or operation, the measurement instrument and the firmware. A driver provides interaction between the software and hardware. Drivers implement parameters that can be controlled (set) or read (get) over several measurements. The result of each measurement is stored in a dataset, which is then read for data analysis, visualization and storage. **Example 2**: _CONTROL AND DATA ACQUISITION_ **QUASAR: A Quantum Instruction Set Architecture** The QUASAR development started in 2017 and went through several iterations. It is a project defining a quantum Instruction Set Architecture (ISA) to interface software with hardware. The first version was a tightly-coupled extension that required significant changes to the processor micro-architecture. Later, the implementation moved towards a more decoupled approach. The current implementation is a RISC-V Rocket Core [101] extended with the QUASAR co-processor (note that RISC-V is a very popular open-standard ISA in classical open hardware; it is based on the established reduced instruction set computer (RISC) principles, and the Rocket Core is a processor that implements RISC-V). In contrast to classical instruction sets, quantum ISAs operate on different types of computations. Traditional general-purpose operations are supplemented with a set of basic quantum gates applied on direct-addressed qubits and/or qubit registers. The RISC-V core executes the main program; when a quantum instruction occurs at the fetch pipeline stage, it forwards it via the RoCC interface [102] to be decoded and executed by the QUASAR co-processor. Such an architectural solution allows the main core to continue performing computations while the quantum backend is generating the control pulses. Moreover, the RISC-V core can perform complex computations, such as floating-point arithmetic for phase estimation, and perform conditional branching during algorithm execution based on the qubit measurement data. That makes the QUASAR implementation flexible enough to accommodate complex hybrid experiments for classical-quantum computations.
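To unpack what "machine instruction format" means in this context, the sketch below packs a few quantum operations into fixed-width 32-bit words; the opcodes, field widths, and gate set are purely illustrative and are not the actual QUASAR encoding.

```python
# Purely illustrative quantum-ISA instruction encoding (NOT the actual QUASAR
# format): a fixed-width 32-bit word carries an opcode, two qubit addresses, and
# an immediate field (e.g., an index into a phase lookup table).
OPCODES = {"SX": 0x01, "RZ": 0x02, "CZ": 0x03, "MEASURE": 0x04}

def encode(op, q0, q1=0, imm=0):
    """Pack a gate into 32 bits: [8b opcode][6b q0][6b q1][12b immediate]."""
    return (OPCODES[op] << 24) | ((q0 & 0x3F) << 18) | ((q1 & 0x3F) << 12) | (imm & 0xFFF)

program = [
    encode("SX", 0),
    encode("RZ", 0, imm=0x400),   # immediate standing in for a rotation-angle index
    encode("CZ", 0, 1),
    encode("MEASURE", 0),
]
for word in program:
    print(f"0x{word:08X}")
```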
The RISC-V Rocket core runs a Linux kernel that facilitates remote communication with the user host machine and provides additional functionalities for the quantum software stack. #### ii.2.2 Pulse-level control with hardware integration In recent years, several pulse-level control software packages have been developed with the idea of providing end-users with a tool to program all the relevant device-specific physical parameters of the system [7, 39, 54, 77, 79, 103]. This approach allows for a finer level of control over pulses during the application of gates, and also makes it possible to directly use the Hamiltonian of the system as a resource for computation. Within the gate-level framework of quantum circuits, pulse-level control and simulation enables a greater level of flexibility and the ability to implement optimal control schemes. Within the analog quantum simulation framework, pulse-level control notably allows practitioners to take advantage of the mimetic capabilities of the hardware. Exposing the hardware controls at such a low level is intended to help quantum developers design software procedures while having the specific characteristics of the hardware in mind [104]. Control hardware, e.g., field programmable gate arrays (FPGAs), arbitrary waveform generators (AWGs), sequencers, have historically been closed-source, either provided by industrial manufacturers or by in-house developed systems. The ARTIQ/Sinara project is a notable exception, focused on the fast control of ion-based quantum processors through FPGAs, as detailed in the box below (_Example 3_). **Example 3**: _CONTROL AND DATA ACQUISITION._ **ARTIQ: Advanced Real-Time Infrastructure for Quantum physics.** The ARTIQ experiment control and data acquisition system was initiated in 2013 at the NIST Ion Storage Group, in partnership with M-Labs Ltd, to address the deficiencies observed with control systems developed in-house by physicists or based on existing commercial solutions. A key feature of the ARTIQ system is a high-level Python-based programming language that helps describe complex experiments, which is compiled and executed on dedicated FPGA hardware with nanosecond timing resolution and sub-microsecond latency. Using the abstractions provided by Python, ARTIQ has the capability to handle the entire control stack from quantum circuits to pulse-level programming. ARTIQ also supports connecting several FPGAs together and synchronizing their clocks in the sub-nanosecond regime, which greatly expands the input/output (I/O) scalability of the system. Initial versions of the ARTIQ system ran on physicist-designed hardware based on FPGA development kits with custom I/O expansion boards. In order to improve the quality, availability and reproducibility of the hardware, the Sinara project [59] was started in collaboration with Warsaw University of Technology. The Sinara project developed a modular system with carrier FPGA cards (Kasli, Kasli-SoC) controlling various so-called Eurocard Extension Modules (EEMs) catering to the needs of each experiment - such as digital I/O, Analog to Digital Converter (ADC), Digital to Analog Converter (DAC), Direct Digital Synthesis (DDS), Phase-Locked Loop (PLL) synthesizer, AWG. There is ongoing work by Duke University on firmware [51] and by Warsaw University of Technology on hardware [58] to port ARTIQ to the RF-SoC platform, with similar capabilities to QICK. 
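To give a flavour of ARTIQ's Python-based experiment description, the minimal sketch below pulses a TTL output inside a kernel that is compiled to and executed on the core device (FPGA) with deterministic timing; the device names (`core`, `ttl0`) depend on the local device database, and the timings are arbitrary examples.

```python
# Minimal ARTIQ experiment sketch: the @kernel method is compiled and run on the
# core device (FPGA) with nanosecond-scale timing. Device names ("core", "ttl0")
# depend on the local device database; the pulse timings are arbitrary examples.
from artiq.experiment import EnvExperiment, kernel, delay, us

class PulseTTL(EnvExperiment):
    def build(self):
        self.setattr_device("core")   # FPGA core device
        self.setattr_device("ttl0")   # a TTL output channel

    @kernel
    def run(self):
        self.core.reset()
        for _ in range(10):
            self.ttl0.pulse(1 * us)   # 1 us high
            delay(9 * us)             # 9 us low -> a 100 kHz pulse train
```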
Combined with the ARTIQ software, the Sinara hardware has encountered substantial success, with almost a thousand quantum physics experiments relying on ARTIQ/Sinara systems. ARTIQ, including its gateware, firmware, tools, and libraries, is licensed as LGPLv3+. The Sinara hardware designs are licensed under the CERN OHL. This ensures that a user of ARTIQ or Sinara hardware designs obtains broad rights to use, redistribute, study, and modify them. More recently, other FPGA-based projects have emerged beyond ARTIQ, for SC qubit control, such as QubiC [56] and QICK [54], two projects based on software, firmware and hardware composed of radio-frequency system-on-chip (RF-SoC) boards. Open-source FPGA-based control efforts for SC qubits started in 2018 with QubiC, developed at the Advanced Quantum Testbed (AQT) based at LBNL (more information on this facility is given in Sec. II.4.2, discussing testbeds). The first version of QubiC was implemented on the Xilinx VC707 platform [105, 83], informed by actual quantum information science experiments at AQT. To reduce cost, complexity, and space requirements, customized hardware components were designed and fabricated, such as in-phase and quadrature (I/Q) mixing modules integrating filters, amplifiers, and bias Ts on printed circuit boards (PCBs) with Electromagnetic Interference (EMI) shielding, the design of which was published and open-sourced to benefit the community [106]. Additionally, automated single- and two-qubit gate calibration protocols were developed using QubiC [56]. To take advantage of the growing capabilities of recent RF-SoC platforms, QubiC was then ported to the Xilinx ZCU216 platform [107], capable of direct generation of control pulses at higher frequencies (up to 10 GHz, in the second Nyquist zone). Additionally, a novel Distributed Processor design was created and implemented to enable distributed decision-making and branching, such as mid-circuit measurement and feed-forward [108]. The different iterations of QubiC implementations are shown in Figure 5. To form a prototype full-stack quantum control system with increased potential for future experiments, QubiC and QUASAR (described above in Sec. II.2.1) have been integrated and applied to quantum computing experiments at AQT, LBNL [109]. Boards from the QICK project have reached over 40 labs in the USA and abroad after a little more than two years. Using QICK helps reduce equipment costs and increases the performance of quantum information science experiments with SC qubits [110]. All QICK implementations have been on Zynq boards, with the firmware provided by PYNQ (Python for Zynq), compatible with multiple generations of RF-SoC FPGAs. An image of a QICK board is shown in Figure 6. On the hardware side, the usage of QICK has already expanded from SC qubits to atomic, molecular, and optical physics qubits and spin qubits (Nitrogen-Vacancy-center qubits). On the application side, QICK is used not only for quantum computing but also for quantum sensing experiments, e.g., for dark matter candidate detection [113], as well as for RF-SoC control in particle physics detection beyond quantum technology. #### ii.2.3 Pulse-level simulation Simulation is important to design an experiment before running it on hardware and then to validate and interpret the results after data collection. Pulse-level simulation can be employed for quantum optics experiments, quantum simulation, and for both gate-level (digital) and analog quantum computing.
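As a minimal illustration of what pulse-level (as opposed to gate-level) simulation means, the sketch below drives a two-level system with a Gaussian pulse envelope and integrates the dynamics with QuTiP; the pulse area is chosen to give roughly a pi rotation, and all values are illustrative rather than taken from any specific device.

```python
# Pulse-level (rather than gate-level) simulation sketch: a Gaussian drive envelope
# applied to a two-level system in the rotating frame, integrated with QuTiP.
# The pulse area is set to ~pi, i.e., roughly an X gate; values are illustrative.
import numpy as np
from qutip import basis, sesolve, sigmax, sigmaz

t0, sigma = 50.0, 10.0
amp = np.pi / (sigma * np.sqrt(2 * np.pi))   # Gaussian area = pi -> pi rotation

def envelope(t, args=None):
    return amp * np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))

H = [0.0 * sigmaz(), [0.5 * sigmax(), envelope]]   # [H0, [H1, time-dependent coeff]]
psi0 = basis(2, 0)
tlist = np.linspace(0, 100, 500)

result = sesolve(H, psi0, tlist, e_ops=[sigmaz()])
print(result.expect[0][-1])   # <sigma_z> after the pulse, close to -1 for a pi pulse
```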
In quantum computing, although gate-level instructions are the most common formalism on the user side for writing quantum algorithms, there can be a compiler that transforms discrete operations into pulses. Examples of pulse-level simulators include Pulser [79] and Bloqade.jl, two toolboxes for neutral-atom QPUs respectively developed in Python and Julia. qutip-qip is a simulation toolkit that provides the freedom to design any QPU and simulate pulse-level execution, integrating it with QuTiP's noisy time evolution [39]. We summarize information on Pulser in _Example 4_. Figure 5: **QubiC, through the years:** QubiC 1.0, implemented initially on the Xilinx VC707, in its chassis with auxiliary components (left), the Analog Front End chassis for QubiC 1.0 (middle), and QubiC 2.0 implemented on the Xilinx ZCU216 with a custom SMA fanout board. A customized Surface Mount Assembly (SMA) fan-out board has been designed and recently fabricated to fully utilize all the channels of the ZCU216 board. Separately, a low-cost DAC extension was developed for the VC707 platform, to meet the varying frequency needs of different superconducting QPU architectures. The QubiC team has recently demonstrated heterogeneous synchronization of two sets of VC707 boards with low-cost DACs and two ZCU216 platforms [107]. Figure 6: **The Quantum Instrumentation Control Kit (QICK).** The QICK consists of two pieces of hardware: a commercial RFSoC evaluation board (left), which connects to the QICK RF&DC custom board (right), which can be used for additional signal amplification and filtering. QICK supports the AMD-Xilinx ZCU111 [111] (shown in the picture with its companion custom board) and the AMD-Xilinx ZCU216 [112] (not shown), with a new companion board under fabrication and testing. Example 4: **Pulse Level Simulation.** **Pulser: Library for pulse-level/analog control of neutral atom devices.** The Pulser framework is an open-source software library for designing pulse sequences for neutral-atom QPUs [50]. In such devices, individual atoms are placed in arrays of microscopic traps [114; 115; 116] and their quantum state is manipulated through the application of laser or microwave pulses. Using Pulser, developers can control all the relevant physical parameters of the qubit register and the driving channels. The central object in Pulser is called a sequence, which is linked to a device. The device contains information about the hardware constraints, such as the maximal number of qubits available, the minimal pairwise distance between qubits which can be reached, or the number of lasers and the maximal amount of laser power available for each of them. These constraints are enforced upon the register, where the neutral-atom array is defined, and upon the pulses that are added to the sequence. Each pulse is defined by its phase, amplitude and detuning waveforms, and pulses are sequentially added to the channels that are available on the device. The resulting program can subsequently be sent to QPUs and executed on the hardware after a device-specific calibration-aware compilation step. In addition to providing an interface to hardware, Pulser includes a built-in emulator relying on QuTiP [34] that faithfully reproduces the hardware behavior. To this extent, Pulser also acts as a simulator for QPU design. #### ii.2.4 Optimal control, calibration and characterization High-fidelity operations require high-fidelity optimized controls.
The process of generating optimal pulses traditionally involves a model-based open-loop optimal control step (in simulation), such as the GRAPE, Krotov, or GOAT algorithms [117; 118; 119], followed by a model-free closed-loop calibration step (in hardware). The former relies on building a sophisticated model of the system, and the optimality of the generated pulses is constrained by the model's capability to faithfully reproduce the imperfections of open quantum systems. The latter usually involves the use of a gradient-free optimization algorithm, e.g., Nelder-Mead or CMA-ES, to optimize the parameters of an open-loop optimal pulse on the quantum hardware, benchmarked with a fidelity measure such as ORBIT [120]. Open-source libraries such as JuliaQuantumControl, QuOCS and C3-Toolset [80; 81] aim to provide a broad range of the optimal control functionalities previously discussed, with easy-to-use interfaces in Python or Julia. These tools also incorporate automatic differentiation (AD) techniques which have made it possible to obtain gradients of arbitrary order for free, even with the most complex numerical simulations. Such gradients are essential for the optimization of pulse parameters in open-loop control. AD capabilities are provided by the use of standard machine learning frameworks such as TensorFlow or JAX [121; 122]. In order to improve the fidelity of the pulse obtained through open-loop control, both the model parameters and the model itself need refinement. One approach for refining model parameters is the technique of data-driven system identification, also known as characterisation. This is achieved through learning model parameters from data collected during experiments performed on the hardware, e.g., the calibration step previously outlined. Refining the model translates to building a complex digital twin that models not only the quantum dynamics but also all of the accessory classical electronics and their non-ideal behavior. These two techniques -- Model Learning and Quantum-Classical Digital Twin -- are tightly integrated as a unified solution in the C3-Toolset package [80], as discussed below (_Example 5_). _Example 5. OPTIMAL CONTROL, CALIBRATION AND CHARACTERIZATION._ **C3: An integrated tool-set for Control, Calibration and Characterization.** The C3-Toolset package provides an API for directly interfacing with hardware to use the pulses generated by optimal control and further optimize them using closed-loop calibration algorithms. Users have access to a high-level Qiskit interface that allows them to run gate- or pulse-level quantum circuits on the full-physics, noisy, high-fidelity differentiable simulator. In recent years, a variety of machine learning techniques for quantum control have been closely integrated in the quantum device bring-up process. Besides the previously mentioned process of learning model parameters, reinforcement learning (RL) has been particularly useful in a variety of applications. The open-source library rbqoc [123] details a technique for noise-robust control using RL, while C3-Toolset uses RL for both learned optimizers and Bayesian experiment design. Calibration is generally needed for most platforms. Depending on the architecture, this involves different control signals. A difference between academic research labs and industry is the tendency to automate and standardize calibration tasks. Tests are run periodically to validate the status of the system before running an experiment and during an experiment. On a schematic level, these can be seen as continuous integration (CI) tests, involving hardware control but similar to open-source testing for software projects. If the job does not pass a test, automatic re-calibration is prompted.
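A minimal sketch of the model-free closed-loop step described above: a gradient-free optimizer (here SciPy's Nelder-Mead) adjusts pulse parameters against a measured cost. The `measured_infidelity` function below is a stand-in with a known optimum; in practice it would be an ORBIT-style benchmark executed on the hardware.

```python
# Closed-loop calibration sketch: gradient-free optimization of pulse parameters
# against a measured cost. measured_infidelity() is a stand-in for an ORBIT-style
# benchmark run on hardware; here it is a toy function with a known optimum.
import numpy as np
from scipy.optimize import minimize

TARGET = np.array([0.42, 1.30])     # "true" optimal (amplitude, duration), toy values

def measured_infidelity(params):
    amp, dur = params
    noise = 1e-4 * np.random.randn()            # shot noise of the benchmark
    return (amp - TARGET[0]) ** 2 + (dur - TARGET[1]) ** 2 + noise

x0 = np.array([0.5, 1.0])                        # starting point from open-loop optimal control
res = minimize(measured_infidelity, x0, method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 200})
print("calibrated parameters:", res.x)
```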
On a schematic level, these can be seen as continuous integration (CI) tests, involving hardware control but similar to open-source testing for software projects. If the job does not pass a test, automatic re-calibration is prompted. ### Designs for quantum error correction Many applications of quantum computing, networking, and sensing technologies will require a significant reduction in error rates. In the long term, developing quantum hardware with built-in error correction can address this need through fault-tolerant quantum computing. In this section, we review some of the open-source software available today for studying and simulating quantum error correcting codes. This software is becoming increasingly important as quantum error correction moves from theory into practice. Quantum error correcting codes are often based on stabilizer subsets of quantum computation and so can benefit from specialized simulators. A leading open-source package for stabilizer simulation today is Stim [124], which improves on existing stabilizer simulators available in Qiskit, Cirq, or other packages. Other libraries sit on top of simulators like Stim and allow users to explore quantum error correcting code designs. These packages help inform longer-term planning for quantum processor architecture. Examples include plaquette [125] and stac [126]. Both of these are Python libraries that have built-in examples of quantum error correcting codes and allow users to generate logical circuits to study performance. There are also tools that act as compilers, translating logical circuits into physical circuits that can be run directly on capable hardware, e.g., the suite of tools developed at latticesurgery.com for compiling to the surface code. Finally, there are open implementations of the decoding algorithms used to decide how to best correct errors detected by different quantum codes. PyMatching [127, 128] is an example open-source decoder. This software can help inform the design of processors specifically adapted to run quantum error correction. For example, Stim was used to simulate novel superconducting qubit designs to support surface code parity measurements in [129]. There remain many important opportunities to improve quantum error correction. For example, fast classical control of QPUs is essential for feedback loops in quantum error correction implementations. Tailored tools could reduce the existing gap between hardware operation and software instructions. More generally, increasing the ecosystem of open-source software available in this field will both support new breakthrough ideas and accelerate the transition from theory to practice in building fault-tolerant quantum computers. ### Open quantum hardware facilities In this section we review the state of the art with respect to facilities that can enable a robust OQH ecosystem. We find three qualitatively different categories: open access to QPUs and remotely accessible research labs, collaborative testbeds, and facilities for fabrication such as foundries. #### iv.4.1 Remotely accessible labs and cloud-connected labs The idea of open remote labs, i.e., laboratories that can be remotely accessed by users to perform real experiments, is not new in the context of scientific research and education [130, 131]. For example, the educational tool HYPATIA [132] can be used to perform particle-physics experiments with real data produced by the ATLAS experiment at CERN. 
In a similar way, quantum computers can be used as quantum remote labs, i.e., advanced laboratories in which quantum experiments can be performed for purely scientific purposes without any computational motivation [62; 133; 134; 135; 136; 137; 138; 139]. For example, in Ref. [140] a quantum computer was used for testing quantum fluctuation theorems, while in Ref. [141] a quantum computer was used to prepare a many-body quantum system in a time-crystalline phase. Superconducting-circuit quantum computers originally became available online from IBM Quantum with the IBM Quantum Experience, Rigetti Computing with the Rigetti Quantum Cloud Services, and other providers. These have had an impact on the way research is done in quantum computing and quantum optics, as well as enabling access for students and outreach activities, due to the fact that experiments can be done much more easily (even by theorists) from the cloud. We are now in a second phase with more providers putting their devices online, encompassing more technologies: ion-based (e.g., IonQ, Quantinuum), atom-based (e.g., Infleqtion, Pasqal, QuEra), photonics-based (e.g., Quandela, Xanadu), quantum-dot based, etc. Turning the attention to institutional frameworks, the European OpenSuperQ project (and the follow-on OpenSuperQPlus project) [60] is similarly aimed at developing publicly owned full-stack open-access superconducting systems. Multiple remotely accessible QPUs are online or currently being set up as part of this project at the Delft University of Technology, the Wallenberg Centre for Quantum Technology (Chalmers), the Walther-Meissner-Institut (Munich), and at the Forschungszentrum Juelich (from phase 1). A hybrid form of cloud access is also emerging, in which cloud providers give access to different quantum processor providers. Further sustaining open access to QPU providers can be important for scientific discovery. Over the years, the level of control over device properties in compilation, optimization, qubit mapping, etc., has increased, spilling notably into remote pulse-level control, both for digital (e.g., Qiskit Pulse [77]) and analog quantum computing (e.g., Pasqal's Pulser Studio [115]). The deeper the access, the more advanced the control researchers have over hardware and the greater the opportunity for integration of open-hardware features with cloud access. An example of novel interaction between cloud providers and OQH is given by the Amazon AWS application developed for QICK, called SideQICK, which is being integrated into a more general cloud queue for quantum devices [142]. We also witness examples of a hybrid model of industry-public interaction enabling open access. Quantum computers are being shipped to research centers and further integrated with HPC centers, which also act as simulator infrastructure (such as the EU HPCQS). This provides new avenues for industry-academic collaborations and purpose-specific hardware customization, use cases and access. It will be important to foster academic research labs putting their quantum devices online, building upon the open-source tools described in the previous sections. This could further change the research landscape, enabling more researchers to test research ideas on more devices, novel architectures and platforms, and in turn facilitate technology transfer. 
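To illustrate how low the barrier to such programmatic access has become, the short sketch below builds and runs a Bell-state circuit with Qiskit. A local Aer simulator stands in for a cloud backend here; with an actual provider the backend object would instead be obtained from the service's account interface, while the rest of the workflow stays essentially the same.

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# A two-qubit Bell-state circuit, the "hello world" of cloud QPU access.
circuit = QuantumCircuit(2, 2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure([0, 1], [0, 1])

# Local simulator used as a stand-in for a remotely accessible device.
backend = AerSimulator()
compiled = transpile(circuit, backend)

job = backend.run(compiled, shots=1000)
print(job.result().get_counts())
```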
#### iv.4.2 Deeply collaborative testbeds While cloud-based platforms allow scientists and the public alike to submit their circuits and receive the results, deeper, customized access to the full stack to enable more involved experiments and R&D efforts is generally not possible on these platforms. To provide such access to users in Academia, Industry, and National Labs, the US Department of Energy has funded two testbed programs: QSCOUT (Quantum Scientific Computing Open User Testbed), based on trapped ions and located at Sandia National Laboratories, and AQT (Advanced Quantum Testbed), based on superconducting qubits and located at Lawrence Berkeley National Laboratory. A similar network called QuTest [143], managed by TNO at Delft, Netherlands, has recently been kicked off with the goal of providing a federated network of testbeds by bringing together 13 service providers and 11 industrial users from the European quantum community. These platforms allow low-level access to the full stack of quantum hardware (including programming languages [144; 145], gate-pulse shaping [103], noise injection, unique qubit/qudit states, specialized calibrations, comparison of compilation techniques, etc.), allowing users to probe how the machines actually work and how to make them work better. The testbeds foster deep collaborations between the host laboratory and the users. Collaborations on these testbeds have so far resulted in demonstrations of scientific applications of near-term quantum computers, and the development of many tools to benchmark quantum computers as well as to characterize and mitigate errors on these QPUs. This could further change the research landscape, enabling more researchers to test research ideas on more devices. Moreover, the existence of testbeds would be crucial to enable reproducible benchmarks, both for quantum hardware and for application-driven tasks. Testbeds are key to facilitating startup creation in the quantum space, as they lower the barrier for testing prototypes with expensive equipment (such as dilution fridges). #### iv.4.3 Fabrication and foundries As described in Section II.1, open-source tools exist to facilitate the design and creation of quantum processors _in silico_. However, translating those designs into actual hardware can be difficult and expensive. Current arrangements for producing hardware on the basis of designs fall into four categories: partnerships, the construction of research-grade fab facilities, the use of academic foundries, and "quantum-fab-as-a-service". Several companies in recent years have begun partnerships with major manufacturers to procure supply lines for their hardware designs. For example, photonic startups PsiQuantum and Xanadu have both signed agreements with semiconductor manufacturer GlobalFoundries [146, 147]. Trapped-ion startup IonQ has sourced some of its traps from Sandia National Laboratories [62]. These sorts of partnerships are typically only available to startups or other corporations, and allow for the use of existing manufacturing techniques and scalable processes. However, whether rapid prototyping of hardware can be achieved is unclear. An alternative approach is to stand up one's own fabrication facility. Startup Rigetti Computing is notable in this space for its "Fab-1" facility [148], which allows the company to prototype new hardware. However, such a facility can be expensive to construct, and requires access to large amounts of capital. 
In the United States, in recognition of the need for fabrication facilities, various foundries are being stood up around the country, including the UCSB quantum foundry [149], the MonArk quantum foundry [150], and the LPS Qubit Collaboratory [151]. The European Commission has established a somewhat similar program called Qu-Pilot [143] under the umbrella of the Quantum Flagship, consisting of 21 partners from 9 different countries, with the goal of developing and providing access to federated European production facilities linking existing infrastructure. The overall coordination of the project is managed by VTT, Finland. Finally, some startups have leveraged this need toward the development of new businesses. For example, startup QuantWare in the Netherlands [152] helps design, develop, and fabricate hardware for customers. In Canada, the NSERC CREATE programs have partnered with CMC Microsystems for novel superconducting circuits workshops at the Stewart Blusson Quantum Matter Institute. In the future, more industry players, industry consortia, startup incubators, and academic partnerships could enable even more facilities for processor fabrication, inspired by existing electronics industry frameworks, such as Efabless [153] and the Google Silicon project [154]. ## III Discussion: Current gaps & future recommendations In Sec. II we reviewed OQH today, with an overview of the various OQH categories and deep dives into selected projects. From this overview it is possible to draw an overarching picture and outline some major topics across the OQH ecosystem. In the sections below we list these topics, identifying gaps and making recommendations to close them. **OQH growth and maintenance across technologies and architectures.** Currently, most OQH projects focus on tools for quantum computers. There are considerable opportunities for expanding OQH projects and tools to accelerate scientific discovery and tech transfer in quantum communication, quantum metrology, and quantum sensing. Moreover, given the early stage of the field, for specific functionalities only tooling supporting a given set of architectures may currently be available. For example, the open hardware tools for chip design mostly revolve around SC qubits (and partially ion traps). There is room for tools for other QPU architectures, such as atom-based, spin-based, and photonics-based QPUs. With respect to control, open hardware projects including firmware and hardware, such as FPGAs, have been developed mostly for ion traps and SC qubits, and can be applied more broadly to other architectures. Moreover, while it is possible to find multiple open-source FPGA projects (and general open-hardware electronics projects, many of them applicable to physics), one would be hard-pressed to find anything that comes close to an optical frequency comb in photonics tooling. **APIs and standards for instruction sets.** We note that interoperability is a major challenge in the OQH ecosystem. From APIs in the software stack to software-hardware integration via ISAs, there is work to be done. Currently, there is considerable duplication in the higher software stack, and one needs to convert quantum programs in order to run them on different QPUs. Frameworks such as the Quantum Intermediate Representation (QIR) project and OpenQASM (Open Quantum Assembly Language) [155, 156] can help at the higher level. 
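As a minimal illustration of such interchange formats, and assuming a recent Qiskit installation that ships the OpenQASM 3 exporter, a circuit authored in one framework can be serialized to a textual program that other compilers and control stacks can parse:

```python
from qiskit import QuantumCircuit
from qiskit import qasm3

# Author a small circuit in one framework...
bell = QuantumCircuit(2, 2, name="bell")
bell.h(0)
bell.cx(0, 1)
bell.measure([0, 1], [0, 1])

# ...and emit OpenQASM 3 text as a framework-neutral interchange format.
print(qasm3.dumps(bell))
```

QIR plays an analogous role one level further down, as an LLVM-based representation that different compilers and runtimes can share.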
Broader support for common frameworks further down at the hardware control level, such as QUASAR and QUA, would also be desirable. **Benchmarks.** Open-source benchmarking suites can have a strong impact on the scientific community, providing information on the state of the art of current hardware architectures. So far, quantum technologies have been confronted with a lack of standardized benchmarks. Benchmarks are useful for evaluating the performance of quantum devices and algorithms, as well as assessing the relative performance compared to purely classical solutions in relevant applications or non-conventional computers [157]. Thankfully, efforts in the standardization of benchmarks, such as those provided by the QED-C, a U.S.-based industry consortium [42, 43, 44], have been occurring. Pipelines for crowd-sourced benchmark results are also needed: [https://metriq.info/](https://metriq.info/) is one attempt, inspired by projects such as Papers With Code and MLCommons in machine learning. Pipelines including OSS and OQH can simplify the accomplishment of standardized benchmarks. **Open access to hardware.** Currently, there exist several cloud-accessible QPUs and cloud service frameworks. Providers have made QPUs accessible in a variety of ways, including direct and free access, as well as through partnerships within institutional frameworks (e.g., ORNL's Quantum Computing User Program (QCUP) for national labs and federally funded consortia). However, the research community would benefit from further access enabled by grants and other awards. Ensuring that such frameworks and access points increase as soon as possible is necessary for creating a more equitable research landscape. Moreover, while several research labs have infrastructure to enable remote access and data acquisition of experiments, this is generally for internal use. Sometimes, access is extended to partnering organizations or collaborating researchers. However, there is a general lack of research labs that have put their devices online for cloud access by a wider community of users and researchers. This is due to the overhead in infrastructure involved in making such devices operational and keeping them maintained. OQH can help in this regard, by providing researchers with existing tools that can be plugged in to bring up experiments from research labs. **Reproducibility.** In order to facilitate reproducibility of results, journals and academic guidelines can further encourage (and mandate where appropriate) the sharing of open software along with research publications. Funding agencies, universities and other research stakeholders can facilitate these efforts by recognizing software artifacts as important, citable outcomes for measuring research impact. **OQH policy and intellectual property.** Implementing policy that is open-hardware aware can favor the growth of an OQH ecosystem. For example, learning from existing open hardware projects beyond quantum technology can help avoid known pitfalls. A practical example is the adoption of an integrated toolchain fostering collaboration between academic labs, software developers, and industry. An example at the facility level is the investment in the establishment of shared facilities, such as quantum foundries, which can provide a flywheel effect for startups. 
Policy makers and quantum experts have the opportunity to work together, in order to ensure that informed decisions are made about the application of export control and intellectual property protection laws to quantum hardware. On the one hand, this could ensure that policy makers provide clear guidance about how efforts to open up quantum hardware will be treated from the perspective of these laws. On the other hand, researchers can develop expertise on the correct use of licenses for open-source software and open hardware. This guidance can include implementing processes through which one can more easily assess what hardware-based technology should be open-sourced, and what should not, making strategic calls at a narrow level (researcher, company) and wider level (common good for society). Researchers need to balance openness against intellectual property (IP) and strategic competitive considerations for certain components and technologies that could be exploited commercially. At the same time, they can consider when OSS can be used in business models, helping grow a community of users and creators around products. **Support of OQH in education and academia.** We believe it is important to foster OSS and OQH adoption in education and research. One actionable item is to encourage schools and universities to include in course material and labs the usage of open-source software [158] and open hardware [159, 160, 161]. Programs that prepare the future quantum workforce are fundamental to broadening the talent pool [5, 6, 24]. This includes developing communities across domain knowledge. A bottom-up approach to strengthening OQH is to encourage academic labs to pull together their systems with existing open-source tools, so that they can create do-it-yourself small-scale testbeds. If open-source tools are not available, researchers can consider developing and open-sourcing new tools or using less-closed tools (e.g., proprietary but with a layer of open APIs). Fostering the creation of new open-hardware projects includes engaging in activities, programs and material that facilitate the open-sourcing of existing projects, such as Ref. [162], which provides information about starting a scientific OSS project. Starting an OQH project involves the general challenges faced by other OSS projects (testing, documentation, distribution, maintenance, community relations) and some specific ones, such as how to distribute physical devices to users, or how to work with experimental labs to ensure device control. An example of a top-down approach to supporting open-source projects (and open hardware) is provided by funding agency programs acknowledging the impact of OSS projects in science. One notable example is the NSF POSE program. Further tailored funding in programs focused on OQH could strengthen the ecosystem by addressing some of its specific characteristics. **Community building: project support and governance.** Like other OSS projects, OQH projects need to interact with their community of users, gather feedback, and create guidelines to foster code contributions and meetings. They can benefit from non-profit organization support for their governance, as typically scientific OSS projects start in a research lab or group and then grow in scope and need for representation of more parties and partners. 
There is a need to simplify the compliance bureaucracy at the institutional and government level for contributing to open-source projects and open hardware, e.g., by equipping national labs with default policies that would allow researchers to contribute to open-source projects with permissive licenses approved by the Open Source Initiative (OSI). Often the originating organization, such as a national lab, has restrictions that prevent an open-source project from growing independently. This highlights the need for alternative supporting organizations to incubate and house open-source projects. Examples of non-profit organizations that do this in the classical space are NumFOCUS, the Linux Foundation, and the Open Source Hardware Association (OSHWA). Unitary Fund is a 501(c)(3) non-profit organization that offers this support specifically to quantum technology projects. ## IV Conclusions Quantum computers - and quantum devices in general - provide a fascinating prospect: a platform bridging fundamental science, technology transfer in engineering, and novelties in cloud computing and actuators. An open hardware approach in quantum technology will preserve, and possibly expand, the pre-competitive space necessary to make ideas flourish, and enable the accessible benchmarks and evaluations that have already characterized quantum software. As we have illustrated, quantum technology provides unique challenges and features with regard to the usability and feasibility of an open-hardware stack: from process design toolkits to foundries, from simulation software to control and data acquisition, from cloud infrastructure to testbeds. All these layers can benefit from collaborative innovation. Drawing on lessons learned in the early days of quantum software, and from other open hardware verticals in conventional computing and tooling, as well as from a snapshot of current gaps in the quantum ecosystem, we hope that the quantum industry can efficiently implement innovation, build lively communities, and expand its workforce. ## V Acknowledgments This work stemmed from a workshop [10] hosted at IEEE QCE 2021. The authors thank Gary Steele and Zlatko Minev for fruitful discussions. NS thanks the organizers of IEEE QCE 2021 for the opportunity to start the discussion of OQH which culminated in this manuscript, and many of the co-authors for their contributions at that workshop. This material was funded in part by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research Quantum Testbed Program. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. SAND2023-09347O. This work was supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Accelerated Research in Quantum Computing under Award Number DE-SC0020266. AM acknowledges support from the PNRR MUR project PE0000023-NQSTI.
2301.13677
Some rigidity results and asymptotic properties for solutions to semilinear elliptic P.D.E
We will present some rigidity results for solutions to semilinear elliptic equations of the form $\Delta u = W'(u)$, where W is a quite general potential with a local minimum and a local maximum. We are particularly interested in Liouville-type theorems and symmetry results, which generalise some known facts about the Cahn-Hilliard equation.
Matteo Rizzi, Panayotis Smyrnelis
2023-01-31T14:52:13Z
http://arxiv.org/abs/2301.13677v2
# Some rigidity results and asymptotic properties for solutions to semilinear elliptic P.D.E. ###### Abstract. We will present some rigidity results for solutions to semilinear elliptic equations of the form \(\Delta u=W^{\prime}(u)\), where \(W\) is a quite general potential with a local minimum and a local maximum. We are particularly interested in Liouville-type theorems and symmetry results, which generalise some known facts about the Cahn-Hilliard equation. **MSC2020**: 35B06; 35B50; 35B53. **Keywords**: Liouville theorems, radial symmetry, rigidity, Cahn-Hilliard equation. ## 1. Introduction and main results Our aim is to revisit some asymptotic properties and rigidity results for solutions \(u\) to the semilinear elliptic P.D.E. \[\Delta u(x)=W^{\prime}(u(x)),\ u\in C^{2}(D),\ D\subset\mathbb{R}^{n},\ n\geq 2, \tag{1.1}\] with a potential \(W\in C^{1,1}_{loc}(\mathbb{R})\). The domains \(D\subset\mathbb{R}^{n}\) we consider are connected open unbounded sets such that \[\forall R>0,\,D\text{ contains a ball of radius }R. \tag{1.2}\] We shall also assume that on \(D\) the solution \(u\) takes its values in a bounded interval where \(W\) is monotone. For instance, one may consider the Cahn-Hilliard equation \[\Delta u=W^{\prime}(u)=u^{3}-u+\delta\qquad\text{in }\mathbb{R}^{n}, \tag{1.3}\] with \(|\delta|<\frac{2}{3\sqrt{3}}\), so that the polynomial \(f(t)=t^{3}-t+\delta\) admits exactly three real roots \[z_{1}(\delta)<-\frac{1}{\sqrt{3}}<z_{2}(\delta)<\frac{1}{\sqrt{3}}<z_{3}( \delta),\] and is negative on the interval \((z_{2}(\delta),z_{3}(\delta))\). This equation was largely studied in the literature. For example, some particular solutions were constructed in [15] and [17], while some results about radial and cylindrical symmetry of solutions and Liouville type results can be found in [21]. Our starting point is the following theorem. **Theorem 1.1**.: _[[21]] Let \(n\geq 2\), \(\delta\in(-\frac{2}{3\sqrt{3}},\frac{2}{3\sqrt{3}})\) and let \(u_{\delta}\) be a solution to (1.3) such that_ \[u_{\delta}>z_{2}(\delta)\qquad\text{outside a ball }B_{R}\subset\mathbb{R}^{N}. \tag{1.4}\] 1. _If_ \(\delta\in(-\frac{2}{3\sqrt{3}},0]\)_, then_ \(u\equiv z_{3}(\delta)\)_._ 2. _If_ \(\delta\in(0,\frac{2}{3\sqrt{3}})\)_, then_ \(u_{\delta}\) _is radially symmetric (not necessarily constant)._ Similar results in the case \(\delta=0\) can be found in [10, 11]. The purpose here is to extend Theorem 1.1 to more general non linearities. The proofs in [21] are based on some known symmetry results (see [12]) which rely on the moving planes method. A key tool in these methods is the maximum principle, even for unbounded domains (see [1]). If \[u(D)\subset[a,b],\,\text{and}\,\,W^{\prime}<0,\,\,\text{on}\,\,[a,b)\,\,(\text{ with}\,\,a,b\in\mathbb{R}), \tag{1.5}\] it is straightforward by Lemma 2.1 below that \[\lim_{d(x,\partial D)\to\infty}u(x)=b,\,\text{and}\,\,W^{\prime}(b)=0. \tag{1.6}\] Thus, we shall focus on the more involved problem where \[u(D)\subset(a,b], \tag{1.7b}\] \[W^{\prime}(a)=W^{\prime}(b)=0,\,\text{and}\,\,W^{\prime}<0,\,\,\text{on}\,\,( a,b)\,\,(\text{with}\,\,a,b\in\mathbb{R}). 
\tag{1.7a}\] If we assume in addition to (1.7), the nondegeneracy condition: \[W^{\prime}(s)\leq-C_{0}(s-a)\,\,\text{on}\,\,[a,s_{0}],\,\text{for some}\,\,C_{0}>0\,\,\text{and}\,\,s_{0}\in(a,b), \tag{1.8}\] we can apply comparison arguments of Berestycki, Caffarelli, and Nirenberg [1, Lemma 3.2] to deduce that \[d(x,\partial D)>\eta\Rightarrow u(x)\geq a+\epsilon,\,\,\text{for some constants}\,\,\eta,\,\epsilon>0. \tag{1.9}\] Consequently, the asymptotic property (1.6) follows again from Lemma 2.1. On the other hand, in the degenerate case where (1.8) does not hold, the asymptotic behaviour of the solutions may be more involved. In the case where \(D\) is the complement of a ball, we can relax condition (1.8) by assuming that \[W^{\prime}(s)\leq-C_{0}(s-a)^{\frac{n}{n-2}}\,\,\text{on}\,\,[a,s_{0}],\,\text {for some}\,\,C_{0}>0,\,\text{and}\,\,s_{0}\in(a,b), \tag{1.10}\] Under assumption (1.10), we can still prove the asymptotic property (1.6) for solutions provided (1.7) holds (cf. Proposition 2.2). However, in the case of potentials such that \[\lim_{u\to a^{+}}\frac{W^{\prime}(u)}{(u-a)^{p}}=-\lambda\] for some \(\lambda>0\) and \(p>\frac{n}{n-2}\), radial solutions \(u:\mathbb{R}^{n}\to(a,b)\) of (1.1) satisfying \[\lim_{|x|\to\infty}u_{0}(x)=a \tag{1.11}\] may exist in dimensions \(n\geq 3\) (cf. Lemma 2.3). Therefore, condition (1.10) is optimal to derive (1.6), when \(D\) is the complement of a ball. For general domains, condition (1.10) is not sufficient to deduce the asymptotic behaviour of the solution. In Proposition 2.6, we construct a solution of (1.1) in a dumbbell shaped domain \(D\subset\mathbb{R}^{2}\), such that \(u\approx a\) on the one side of the neck, while \(u\approx b\) on the other side. To sum up these results, we now state **Theorem 1.2**.: _Let \(W\in C^{1,1}_{loc}(\mathbb{R})\) be a potential satisfying (1.7b)._ 1. _Assume_ \(u\in C^{2}(\mathbb{R}^{n})\) _is a solution of (_1.1_) such that_ \(u(\mathbb{R}^{n})\subset[a,b]\)_. Then, when_ \(n=2\)_, or_ \(n\geq 3\) _and (_1.10_) holds, we have either_ \(u\equiv a\)_, or_ \(u\equiv b\)_. Otherwise (when_ \(n\geq 3\) _and (_1.10_) does not hold), we have either_ \(u\equiv a\)_, or_ \(u\equiv b\)_, or_1__ Footnote 1: For instance, let \(u_{0}\) be the radial solution provided by Lemma 2.3. Then, by taking \(u(x_{1},\ldots,x_{n},x_{n+1})=u_{0}(x_{1},\ldots,x_{n})\), we can see that (1.12) holds. \[\begin{cases}u(\mathbb{R}^{n})\subset(a,C_{W}],\text{ for a constant }C_{W}\in(a,b)\text{ depending only on }W,\\ \liminf_{|x|\to\infty}u(x)=a.\end{cases}\] (1.12) 2. _Assume the domain_ \(D\) _satisfies (_1.2_), and_ \(u\in C^{2}(D)\) _is a solution of (_1.1_) such that_ \(u(D)\subset(a,b]\)_. Then, we have_ \(\lim_{d(x,\partial D)\to\infty}u(x)=b\)_, provided that (_1.8_) holds._ Next, we derive some Liouville type results by considering domains \(D\subset\mathbb{R}^{n}\) satisfying the following condition: \[\text{ the radii of the balls contained in }\mathbb{R}^{n}\setminus D\text{ are uniformly bounded by a constant }\Lambda>0. \tag{1.13}\] **Theorem 1.3**.: _Let \(W\in C^{1,1}_{loc}(\mathbb{R})\) be a non negative potential satisfying (1.7b), and \(W(b)=0\). Assume the domain \(D\) satisfies (1.13), and \(u\in C^{2}(\mathbb{R}^{n})\) is a bounded entire solution of (1.1) such that \(\sup_{\mathbb{R}^{n}}u=b\), and \(u(D)\subset(a,b]\). 
Then, \(u\equiv b\)._ **Remark 1.4**.: _Modica [19] proved that if \(W\in C^{2}(\mathbb{R})\) is a non negative potential, and \(u\) is a bounded solution of (1.1) in \(\mathbb{R}^{n}\), then the condition \(W(u(x_{0}))=0\) for some \(x_{0}\in\mathbb{R}^{n}\) implies that \(u\) is constant. In the sequel, a new proof of this result which also applies to potentials \(W\in C^{1,1}_{loc}(\mathbb{R})\) was proposed in [4]. Therefore, the hypothesis \(\sup_{\mathbb{R}^{n}}u=b\) in Theorem 1.3 is not very strong, since the condition \(u(D)\subset(a,b]\) yields that either \(u<b\) in \(\mathbb{R}^{n}\) or \(u\equiv b\), so that \(\sup_{\mathbb{R}^{n}}u\leq b\)._ Since the linear behaviour of \(W^{\prime}\) near the local maximum (see condition (1.8)) implies that \(\lim_{d(x,\partial D)\to\infty}u(x)=b\), when \(D\) satisfies (1.2) (cf. Theorem 1.2 (ii)), we obtain a first corollary of Theorem 1.3: **Corollary 1.5**.: _Let \(W\in C^{1,1}_{loc}(\mathbb{R})\) be a non negative potential satisfying (1.7b), (1.8) and \(W(b)=0\). Assume the domain \(D\) satisfies (1.2) as well as (1.13), and \(u\in C^{2}(\mathbb{R}^{n})\) is a bounded entire solution of (1.1) such that \(u(D)\subset(a,b]\). Then, \(u\equiv b\)._ Finally, we particularise Corollary 1.5 in the case where \(D\) is the complement of a ball. **Corollary 1.6**.: _Let \(W\in C^{1,1}_{loc}(\mathbb{R})\) be a nonnegative potential satisfying (1.7b), and \(W(b)=0\). Assume \(u\in C^{2}(\mathbb{R}^{n})\) is an entire solution of (1.1) such that_ \[u(x)\in(a,b]\qquad\forall\,x\in\mathbb{R}^{n}\backslash B_{R}, \tag{1.14}\] _for some \(R>0\). Then, \(u\equiv b\), provided that \(n=2\), or \(n\geq 3\) and (1.10) holds._ We will prove these results in Section 2. **Remark 1.7**.: _Corollary 1.6 was established in [21] for entire solutions to the Cahn-Hilliard equation (1.3). Here we extend this result to general nonlinearities under optimal assumptions. Indeed, the necessity of condition (1.10) (when \(n\geq 3\)) for Corollary 1.6 to hold, is clear in view of Lemma 2.3._ Other Liouville type results for stable solutions to semilinear PDEs were established in [8]. Here there is no stability assumption. After that, we will address the issue of radial symmetry. In [13], the authors prove radial symmetry of solutions to fully nonlinear equations of very general form, provided these solutions have a suitable asymptotic polynomial decay at infinity (see Theorem 4 there). Here we are interested in radial symmetry of solutions to (1.1) with \(W\) satisfying (1.7b), assuming that either \(\lim_{|x|\to\infty}u(x)=b\) or \(\lim_{|x|\to\infty}u(x)=a\). The case in which \(\lim_{|x|\to\infty}u(x)=b\) is easier. The following result is a consequence of [13, Proposition 1]: **Proposition 1.8**.: _Let \(u:\mathbb{R}^{n}\to\mathbb{R}\) be a solution to \(\Delta u=W^{\prime}(u)\), \(n\geq 3\), where \(W\in C^{2}(\mathbb{R}^{n})\) is a potential fulfilling (1.7b) and such that_ \[[b-\delta,b]\ni t\mapsto\frac{W^{\prime}(t)}{|t-b|^{p}}\text{ is H\"{o}lder continuous for some }\delta>0,\,p\geq\frac{n+2}{n-2}. \tag{1.15}\] _Assume that \(u<b\) in \(\mathbb{R}^{n}\) and \(\lim_{|x|\to\infty}u(x)=b\). Then \(u\) is radially symmetric._ On the other hand, if \(W\) is convex in an interval \((b-\delta,b)\), then the symmetry result follows from [12, Theorem 2] in any dimension \(n\geq 2\). Assumption (1.15) is not required anymore. 
In view of Proposition 1.8 and [12, Theorem 2], we obtain the following generalisation of Theorem 1.1 (ii): **Theorem 1.9**.: _Let \(W\in C^{2}(\mathbb{R})\) be a potential such that \(W^{\prime}(t)<0\) for any \(t\in(a,b)\), \(W^{\prime}(a)=0\), and \(W(t)\geq W(b)\) for any \(t>b\). In addition, we suppose that one of the following is true:_ * \(n\geq 3\)_, and_ \(W\in C^{6,\alpha}(b-\delta,b+\delta)\)_, for some_ \(\delta>0\) _and_ \(\alpha\in(0,1)\)_._ * \(n\geq 2\)_, and_ \(W\) _is convex in_ \((b-\delta,b)\)_, for some_ \(\delta>0\)_._ _Assume also that \(u:\mathbb{R}^{n}\to\mathbb{R}\) is a solution to \(\Delta u=W^{\prime}(u)\) such that \(u(\mathbb{R}^{n}\backslash B_{R})\subset(a,b)\) and \(\lim_{|x|\to\infty}u(x)=b\). Then \(u<b\) in \(\mathbb{R}^{n}\) and it is radially symmetric._ **Remark 1.10**.: * _If a potential_ \(W\) _satisfies the assumptions of Theorem_ 1.9_, then it has a local minimum at_ \(t=b\)_, so that_ \(W^{\prime}(b)=0\)_. However, this minimum is not required to be a global one._ * _If_ \(W^{\prime}(t)>0\) _for_ \(t>b\)_, it follows from the maximum principle that any bounded solutions_ \(u\) _of (_1.1_) in_ \(\mathbb{R}^{n}\)_, satisfies the bound_ \(u\leq b\)_._ * _Let_ \(u\) _be a solution of (_1.1_) in_ \(\mathbb{R}^{n}\)_. Then, if_ \(n=2\) _or_ \(n\geq 3\) _and (_1.10_) holds, the condition_ \(u(\mathbb{R}^{n}\setminus B_{R})\subset(a,b)\)_, implies that_ \(\lim_{|x|\to\infty}u(x)=b\) _in view of Lemmas_ 5.4 _and_ 2.1 _(resp. Proposition_ 2.2_)._ * _For the existence of radial solutions satisfying_ \(\lim_{|x|\to\infty}u(x)=b\)_, we refer to_ _[_2_, Theorem 1, Theorem 4]_ _and_ _[_16_, Theorem 1.3]__._ We will prove these symmetry results in Section 3. Assuming again that \(W\) satisfies (1.7b), the description of entire solutions to (1.1) converging to \(a\) at infinity is a much more difficult task. In that case, only a few symmetry results are available, under somewhat restrictive hypotheses on the solution and the nonlinearity. Some results can be found in [5], where a monotonicity assumption is required. As a particular case, their results apply to bounded solutions to the Lane-Emden equation \[-\Delta u=|u|^{p-1}u\] in \(\mathbb{R}^{n}\), for which several Liuoville type results are known (see for example [3, 9, 18]). For future purposes, the main difficulty is to remove the monotonicity and convexity assumption about the non-linearity. By Proposition 2.2, we know that non trivial solutions can exist only if condition (1.10) is violated. However, the fact that \[u(\mathbb{R}^{n}\backslash B_{R})\subset(a,b) \tag{1.16}\] cannot guarantee a Liouville type result (cf. Lemma 2.3), or even radial symmetry under the assumption that \[\lim_{|x|\to\infty}u(x)=a. \tag{1.17}\] In section 4, we check that the solutions constructed in [6], provide examples of nonradial solutions to (1.1), such that \(u(x)-a\) changes sign in a compact set, and (1.16) as well as (1.17) hold. It would be interesting to see if a nonradial solution satisfying \(u(\mathbb{R}^{n})\subset(a,b)\) and \(\lim_{|x|\to\infty}u(x)=a\), may also exist for a potential \(W\) having a negative derivative on the range of \(u\). To the best of our knowledge, this is a difficult open problem. ## 2. Asymptotic behaviour and Liouville type results We first prove a basic lemma on the asymptotic behaviour of solutions satisfying (1.5). 
**Lemma 2.1**.: _Let \(D\subset\mathbb{R}^{n}\) be a domain satisfying (1.2), and let \(u\) be a solution of (1.1) (\(W\in C^{1,\alpha}_{loc}(\mathbb{R})\), \(\alpha\in(0,1)\)). Assume also that \(u(D)\subset[a,b]\), and \(W^{\prime}<0\) on the interval \([a,b)\) (with \(a,b\in\mathbb{R}\)). Then, \(\lim_{d(x,\partial D)\to\infty}u(x)=b\), and \(W^{\prime}(b)=0\) hold. If in addition \(D=\mathbb{R}^{n}\), then we have \(u\equiv b\)._ Proof.: We first recall that for fixed \(R>0\), the solution \(u\) is uniformly bounded in \(C^{2,\alpha}\) (for some \(\alpha\in(0,1)\)) on the balls \(B_{R}(x)\) satisfying \(d(x,\partial D)>R+1\) with \(x\in D\). Let \(l:=\liminf_{d(x,\partial D)\to\infty}u(x)\), and let \(\{x_{k}\}\subset D\) be a sequence such that \(\lim_{k\to\infty}d(x_{k},\partial D)=\infty\), and \(\lim_{k\to\infty}u(x_{k})=l\). We set \(v_{k}(y)=u(x_{k}+y)\). In view of the previous estimates, we can apply the Ascoli theorem via a diagonal argument to the sequence \(\{v_{k}\}\), and deduce that up to subsequence, \(v_{k}\) converges in \(C^{2}_{\rm loc}(\mathbb{R}^{n})\) to an entire solution \(v_{\infty}\) of (1.1). Moreover, we have \[v_{\infty}(0)=l=\min_{y\in\mathbb{R}^{n}}v_{\infty}(y),\] and \[0\leq\Delta v_{\infty}(0)=W^{\prime}(l)\leq 0,\] so that, \(l=b\), \(W^{\prime}(b)=0\), and \(v_{\infty}\equiv b\). This proves that \(\lim_{d(x,\partial D)\to\infty}u(x)=b\), and \(W^{\prime}(b)=0\) hold. In the particular case where \(D=\mathbb{R}^{n}\), we have \(u\equiv b\), since otherwise \(u\) would attain its minimum at a point \(x_{0}\) where \(0\leq\Delta u(x_{0})=W^{\prime}(u(x_{0}))<0\), which is a contradiction. Next, given a potential satisfying (1.7b), we study the existence of solutions such that \(u(\mathbb{R}^{n})\subset(a,b)\), for \(n\geq 3\). The answer to this question depends on the growth of \(W^{\prime}\) in a right neighbourhood of \(a\). In Proposition 2.2 below, we first examine the case of potentials for which (1.10) holds. **Proposition 2.2**.: _Let \(n\geq 3\), let \(B_{\rho}\subset\mathbb{R}^{n}\) be the open ball of radius \(\rho\) centred at the origin, and let \(W\in C^{1,1}_{loc}(\mathbb{R})\) be a potential fulfilling (1.7b), and (1.10). Then, every solution \(u\in C^{2}(\mathbb{R}^{n}\setminus B_{\rho})\) to (1.1) such that \(u(\mathbb{R}^{n}\setminus B_{\rho})\subset(a,b)\), satisfies \(\lim_{|x|\to\infty}u(x)=b\)._ Proof.: Without loss of generality, we may assume that \(a=0\). Assume by contradiction that \[u(\mathbb{R}^{n}\setminus B_{\rho})\subset(0,b-\eta],\text{ for some }\eta>0\text{ small.} \tag{2.18}\] Then, we have \[W^{\prime}(u)\leq-c_{1}u^{\frac{n}{n-2}},\ \forall u\in[0,b-\eta] \tag{2.19}\] for a constant \(0<c_{1}<C_{0}\). We first examine the case where \(u\) is radial, that is, \(u(x)=v(|x|)\). As a consequence, \(v\) solves \[v^{\prime\prime}(r)+\frac{n-1}{r}v^{\prime}(r)=W^{\prime}(v(r)),\ \forall r\in[\rho, \infty). \tag{2.20}\] Our claim is that \(v^{\prime}(\rho_{0})\leq 0\) holds for some \(\rho_{0}\geq\rho\). Indeed, otherwise, we would have \[\forall r\in[\rho,\infty):\ v^{\prime}(r)>0,\text{ and }v^{\prime\prime}(r)\leq \kappa:=\max_{[v(\rho),b-\eta]}W^{\prime}<0,\] which is impossible. So far, we have proved that \(v^{\prime}(\rho_{0})\leq 0\) for some \(\rho_{0}\geq\rho\). By noticing that \(v^{\prime}(\rho_{0})=0\Rightarrow v^{\prime\prime}(\rho_{0})<0\) in view of (2.20), one can see that \(v^{\prime}<0\) holds on an interval \((\rho_{0},\rho_{0}+\epsilon)\), for small \(\epsilon>0\). 
Let \(l:=\sup\{r>\rho_{0}:v^{\prime}<0\text{ on }(\rho_{0},r)\}\). It is clear that \(l=\infty\), since otherwise we would deduce that \(v^{\prime}(l)=0\) and \(v^{\prime\prime}(l)<0\), which is a contradiction. This establishes that \(v^{\prime}<0\) on \((\rho_{0},\infty)\). Now, it follows from (2.20) that \[\forall r>\rho_{1}:\ r^{n-1}v^{\prime}(r) \leq r^{n-1}v^{\prime}(r)-\rho_{0}^{n-1}v^{\prime}(\rho_{0})=\int _{\rho_{0}}^{r}s^{n-1}W^{\prime}(v(s))ds\] \[\leq-c_{1}v^{\frac{n}{n-2}}(r)\int_{\rho_{0}}^{r}s^{n-1}ds\leq-kv ^{\frac{n}{n-2}}(r)r^{n},\] for a constant \(k>0\), and for \(\rho_{1}>\rho_{0}\) large enough. Next, an integration of the previous inequality gives \[\forall r>\rho_{1}:v^{-\frac{2}{n-2}}(r)\geq v^{-\frac{2}{n-2}}(r)-v^{-\frac{ 2}{n-2}}(\rho_{1})\geq\frac{k}{n-2}(r^{2}-\rho_{1}^{2}),\] from which we deduce that \[v(r)\leq\tilde{k}r^{2-n},\text{ for a constant }\tilde{k}>0,\text{ and for }r>\rho_{2}>\rho_{1}\text{ large enough.} \tag{2.21}\] On the other hand, the lower bound provided by Lemma 5.3: \[\forall r\geq\rho\text{: }v(r)\geq cr^{2-n},\text{ for a constant }c>0, \tag{2.22}\] combined with (2.19) implies that \[\forall r>\rho_{0}:r^{n-1}v^{\prime}(r)\leq\int_{\rho_{0}}^{r}s^{n-1}W^{\prime }(v(s))ds\leq-c_{1}c^{\frac{n}{n-2}}\int_{\rho_{0}}^{r}s^{-1}ds=-c_{1}c^{\frac {n}{n-2}}\ln\big{(}\frac{r}{\rho_{0}}\big{)}.\] As a consequence, we obtain the bound \(v^{\prime}(r)\leq-c_{1}c^{\frac{n}{n-2}}\ln(\frac{r}{\rho_{0}})r^{1-n}\), \(\forall r>\rho_{0}\), which contradicts (2.21). Therefore the existence of a radial solution satisfying (2.18) is ruled out. To complete the proof of Proposition 2.2, we also have to exclude the existence of non radial solutions. Assume by contradiction that \(u\in C^{2}(\mathbb{R}^{n}\setminus B_{\rho})\) is a solution of (1.1) satisfying (2.18). In view of Lemma 5.3, \(u\) satisfies the lower bound \[u(x)>\phi_{*}(x)=c|x|^{2-n},\ c>0, \tag{2.23}\] where \(\phi_{*}\) is a subsolution of (1.1), that is, \(\Delta\phi_{*}=0\geq W^{\prime}(\phi_{*})\). Starting from \(u\), we shall construct a radial supersolution \(\phi^{*}\) of (1.1), such that \(\phi_{*}\leq\phi^{*}\). Let \(\rho_{i,m}\) be the rotation of angle \(\frac{\pi}{2^{m}}\) around the \(x_{i}\) coordinate axis of \(\mathbb{R}^{n}\) (\(m\geq 1\), \(i=1,\ldots,n-1\)), and let \[G_{m}:=\{\rho_{1,m}^{k_{1}}\circ\ldots\circ\rho_{n-1,m}^{k_{n-1}}:0\leq k_{i} \leq 2^{m+1}-1,\ i=1,\ldots,n-1\}.\] Using spherical coordinates, one can see that given \(|x_{0}|\geq\rho\), the set \(\cup_{m\geq 1}G_{m}x_{0}\) is dense in the sphere \(\{x\in\mathbb{R}^{n}:|x|=|x_{0}|\}\). In particular, we have \[\lim_{m\to\infty}\min_{g\in G_{m}}u(gx_{0})=\min_{|x|=|x_{0}|}u(x). \tag{2.24}\] Next, we notice that for every \(g\in G_{m}\), \(x\mapsto u(gx)\) solves (1.1). On the other hand, in view of the Kato inequality, \(\phi_{m}(x):=\min_{g\in G_{m}}u(gx)\) is a supersolution of (1.1), satisfying \(\phi_{*}\leq\phi_{m}\leq u\). In addition, it follows from (2.24) that \(\phi^{*}(x):=\lim_{m\to\infty}\phi_{m}(x)=\min\{u(y):|y|=|x|\}\). Finally, since \(|\nabla\phi_{m}|\) is uniformly bounded on \(\mathbb{R}^{n}\setminus B_{\rho}\), we obtain that (up to subsequence) \(\phi_{m}\)conververges weakly to \(\phi^{*}\) in \(W^{1,2}(B_{R}\setminus\overline{B_{\rho}})\), for every \(R>\rho\). This implies that \(\phi^{*}\) (which belongs to \(W^{1,2}(B_{R}\setminus\overline{B_{\rho}})\), for every \(R>\rho\)) is a radial supersolution of (1.1) satisfying \(\phi_{*}\leq\phi^{*}\leq u\). 
To conclude, we deduce from the method of sub- and supersolutions (cf. Section 5.1, and for instance [7, Lemma 1.1.1]), the existence of a radial solution \(v\in C^{2}(\mathbb{R}^{n}\setminus B_{\rho})\), satisfying \(0<\phi_{*}\leq v\leq\phi^{*}\leq b-\eta\). In view of the first part of the proof, this is a contradiction. So far, we have established that every solution \(u\in C^{2}(\mathbb{R}^{n}\setminus B_{\rho})\) to (1.1) such that \(u(\mathbb{R}^{n}\setminus B_{\rho})\subset(0,b)\), satisfies \(\sup_{\mathbb{R}^{n}\setminus B_{\rho}}u=b\). That is, \[\exists\{x_{k}\}_{k\in\mathbb{N}}:\ \lim_{k\to\infty}|x_{k}|=\infty,\ \text{and}\ \lim_{k\to\infty}u(x_{k})=b. \tag{2.25}\] Setting \(v_{k}(y):=u(x_{k}+y)\), and proceeding as in Lemma 2.1, we obtain that (up to subsequence) \(v_{k}\) converges in \(C^{2}_{\text{loc}}(\mathbb{R}^{n})\) to an entire solution \(v_{\infty}\) of (1.1). Furthermore, since \(v_{\infty}(0)=b\), the maximum principle implies that \(v_{\infty}\equiv b\). At this stage we consider a minimizer \(\phi_{R}\in H^{1}(B_{R}(0))\) of the energy functional \[\tilde{E}(v)=\int_{B_{R}(0)}\Big{(}\frac{1}{2}|\nabla v(x)|^{2}+\tilde{W}(v(x ))\Big{)}dx, \tag{2.26a}\] in \(H^{1}_{0}(B_{R}(0))\), where \[\tilde{W}(v)=\begin{cases}W(a)&\text{for }v\leq 0\\ W(v)&\text{for }0\leq v\leq b\\ W(b)&\text{for }v\geq b.\end{cases} \tag{2.26b}\] It is known that \(\phi_{R}\) is a smooth radial solution of (1.1) in \(B_{R}(0)\), such that \(0\leq\phi_{R}\leq\max_{B_{R}(0)}\phi_{R}:=b-\delta_{R}\) on \(B_{R}(0)\), for some \(\delta_{R}>0\). In addition, we have \(\lim_{R\to\infty}\delta_{R}=0\). Thus, given \(\epsilon>0\), we can ensure that * \(\delta_{R}<\epsilon\) for some \(R>0\) large enough, * and \(\phi_{R}\leq b-\delta_{R}\leq v_{k}\) holds on \(B_{R}(0)\), for \(k\geq k_{R}\) large enough. Finally, by applying the sliding method of Berestycki, Caffarelli, and Nirenberg [1, Lemma 3.1], we deduce that \(u(x)\geq\phi_{R}(0)\geq b-\epsilon\), provided that \(|x|>\rho+R\). This completes the proof of Proposition 2.2 In the subcritical case where \(W^{\prime}(u)\sim-\lambda|u-a|^{p}\) in a right neighbourhood of \(a\), with \(\lambda>0\) and \(p\in(\frac{n}{n-2},\frac{n+2}{n-2})\), we shall see in Lemmas 2.3 and 2.4 below, that depending on the potential, there may or may not exist a radial solution such that \(u(\mathbb{R}^{n})\subset(a,b)\). **Lemma 2.3**.: _Given any \(n\geq 3\), \(p>\frac{n}{n-2}\) and \(\lambda>0\), there exists a potential \(W\in C^{2}(\mathbb{R})\) fulfilling (1.7b), and a solution \(u\in C^{\infty}(\mathbb{R}^{n})\) to (1.1), such that_ * \(\lim_{u\to a^{+}}\frac{W^{\prime}(u)}{|u-a|^{p}}=-\lambda\)_,_ * \(u\) _is radial and radially decreasing (i.e._ \(u(x)=\tilde{u}(|x|)\)_, for a smooth decreasing function_ \(\tilde{u}:[0,\infty)\to(a,b)\)_),_ * \(u(\mathbb{R}^{n})\subset(a,b)\)_, and_ \(\lim_{|x|\to\infty}u(x)=a\)_,_ * \(W^{\prime\prime}(u(0))>0\)_._ Proof.: Without loss of generality, we may assume that \(a=0\). 
First, we note that the function \(v(x)=\big{(}\frac{2((n-2)p-n)}{\lambda(p-1)^{2}}\big{)}^{\frac{1}{p-1}}|x|^{- \frac{2}{p-1}}\) solves the equation \[\Delta v=-\lambda v^{p}\qquad\text{in }\mathbb{R}^{n}\backslash\{0\}.\] Next, in order to eliminate the singularity at the origin, we take a smooth cutoff function \(\xi:\mathbb{R}\to[0,1]\) such that \[\begin{cases}\xi=1\text{ in }[3,\infty),\\ 0<\xi<1\text{ and }\xi^{\prime}>0\text{ in }(2,3),\\ \xi=0\text{ in }(-\infty,2],\end{cases}\] and we consider a function \(\tilde{u}:(1,\infty)\to\mathbb{R}\) such that \[\begin{cases}\tilde{u}^{\prime\prime}(r)=\xi(r)\tilde{v}^{\prime\prime}(r)& \forall\,r\in[1,\infty),\\ \tilde{u}(r)=\tilde{v}(r)&\forall r\geq 3.\end{cases}\] where \(v(x)=:\tilde{v}(|x|)\). One can see that \[\tilde{u}^{\prime\prime}+\frac{n-1}{r}\tilde{u}^{\prime}<0\qquad\text{in }[1, \infty). \tag{2.27}\] The latter inequality is clear if \(r\geq 3\). In order to prove that (2.27) holds in \([1,3)\) too, we note that \[\begin{split}\tilde{u}^{\prime}(r)&=-\int_{r}^{\infty }\tilde{u}^{\prime\prime}(t)dt=-\int_{r}^{\infty}\xi(t)\tilde{v}^{\prime\prime }(t)dt\\ &=\xi(r)\tilde{v}^{\prime}(r)+\int_{r}^{\infty}\xi^{\prime}(t) \tilde{v}^{\prime}(t)dt<\xi(r)\tilde{v}^{\prime}(r)\leq 0,\qquad\forall\,r\in[1,3), \end{split} \tag{2.28}\] so that \[\tilde{u}^{\prime\prime}+\frac{n-1}{r}\tilde{u}^{\prime}<\xi\Big{(}\tilde{v}^{ \prime\prime}+\frac{n-1}{r}\tilde{v}^{\prime}\Big{)}\leq 0,\qquad\forall\,r \in[1,3).\] Now, we extend \(\tilde{u}\) to a smooth even positive function on the whole \(\mathbb{R}\), still denoted by \(\tilde{u}\), fulfilling \(\tilde{u}^{\prime}<0\) in \((0,\infty)\), \(\tilde{u}^{\prime\prime}<0\) in \([0,1)\), so that \(\tilde{u}^{\prime\prime}+\frac{n-1}{r}\tilde{u}^{\prime}<0\) holds in \([0,\infty)\), \(\tilde{u}^{\prime\prime\prime}(0)=0\) and \(\tilde{u}^{(4)}(0)<0\). This can easily be done if we recall that \(\tilde{u}\) is affine and decreasing on \([1,2]\). Since \(\tilde{u}\) is monotone in \([0,\infty)\), then it is invertible in this interval with inverse function \(\beta:(0,\tilde{u}(0)]\to[0,\infty)\). Finally, setting \[\varphi(r):=\tilde{u}^{\prime\prime}(r)+\frac{n-1}{r}\tilde{u}^{\prime}(r), \qquad\forall\,r>0,\] and \(H(s):=\varphi(\beta(s))\), for \(s\in(0,\tilde{u}(0)]\), one can see that \(u(x):=\tilde{u}(|x|)\) satisfies the equation \(\Delta u=H(u)\) in \(\mathbb{R}^{n}\). We also notice that \(H(\tilde{u}(0))=n\tilde{u}^{\prime\prime}(0)<0\) and \(H^{\prime}(\tilde{u}(0))=\frac{(n+2)\tilde{u}^{(4)}(0)}{3\tilde{u}^{\prime \prime}(0)}>0\). Thus, one can find a \(C^{1}\) extension of \(H\) to the whole \(\mathbb{R}\), still denoted by \(H\), such that \(H<0\) in \((0,b)\), for some \(b>\tilde{u}(0)\), and \(H(b)=0\). By construction, we have \(H(u)=-\lambda u^{p}\) in \((0,\tilde{u}(3))\), so that \(H(0)=H^{\prime}(0)=0\). In order to conclude the proof it is enough to define \(W\) to be the primitive of \(H\). **Lemma 2.4**.: _Given any \(n\geq 3\), \(p\in(\frac{n}{n-2},\frac{n+2}{n-2})\), and \(\lambda>0\), there exists a potential \(W\in C^{2}(\mathbb{R})\) fulfilling (1.7b) and \(\lim_{u\to a^{+}}\frac{W^{\prime}(u)}{|u-a|^{p}}=-\lambda\), for which there are no radial solutions \(u\in C^{2}(\mathbb{R}^{n})\) of (1.1) such that \(u(\mathbb{R}^{n})\subset(a,b)\)._ Proof.: Without loss of generality, we may assume that \(a=0\). 
We consider the function \(H(u)=-\lambda u^{p}\) on an interval \([0,\beta]\), and since \(p\in(\frac{n}{n-2},\frac{n+2}{n-2})\), we set \(\epsilon=\frac{n}{p+1}-\frac{n-2}{2}>0\). One can find a \(C^{1}\) extension of \(H\) to the whole \(\mathbb{R}\), still denoted by \(H\), such that * \(H<0\) in \((0,b)\), and \(H(b)=0\), for some \(b>\beta\). Let \(b=\kappa\beta\), with \(\kappa>1\). * \(H([0,b])=[-\lambda\mu\beta^{p},0]\) for some \(\mu>1\), such that \(\kappa\mu<1+\frac{2\epsilon}{n-2}\). Next, define \(W\in C^{2}(\mathbb{R})\) to be the primitive of \(H\) vanishing at \(0\). We claim that \[\frac{n-2}{2}W^{\prime}(u)u-nW(u)>0\text{ on }(0,b]. \tag{2.29}\] Indeed, we have \(\frac{n-2}{2}W^{\prime}(u)u-nW(u)=\epsilon\lambda u^{p+1}\) on \([0,\beta]\). On the other hand, if \(u\in[\beta,b]\), then it follows that \(\frac{n-2}{2}W^{\prime}(u)u-nW(u)\geq\frac{n-2}{2}W^{\prime}(u)u-nW(\beta) \geq(\frac{n}{p+1}-\frac{n-2}{2}\kappa\mu)\lambda\beta^{p+1}>0\). Now that (2.29) is established, we consider a radial solution \(u\in C^{2}(\mathbb{R}^{n})\) of (1.1) such that \(u(\mathbb{R}^{n})\subset(0,b)\). Setting \(v(|x|)=u(x)\) and proceeding as in the proof of Proposition 2.2, one can see that \(v\) satisfies the standard estimates \(v(r)=O(r^{-\frac{2}{p-1}})\), \(v^{\prime}(r)=O(r^{-\frac{p+1}{p-1}})\), and \(W(v(r))=O(r^{-\frac{2(p+1)}{p-1}})\). To conclude we use the well-known Pohozaev identity: \[\int_{0}^{r}s^{n-1}(\frac{n-2}{2}W^{\prime}(v(s))v(s)-nW(v(s)) \big{)}ds=\frac{n-2}{2}r^{n-1}v(r)v^{\prime}(r)+r^{n}\big{(}\frac{|v^{\prime} (r)|^{2}}{2}-W(v(r))\big{)}. \tag{2.30}\] We notice that since \(p\in(\frac{n}{n-2},\frac{n+2}{n-2})\), the right hand side of (2.30) goes to \(0\), as \(r\to\infty\). On the other hand, the left hand side of (2.30) is strictly positive in view of (2.29). This rules out the existence of radial solutions such that \(u(\mathbb{R}^{n})\subset(0,b)\). The next Proposition examines the existence of radial solutions in the different regimes. **Proposition 2.5**.: _Let \(n\geq 3\), and let \(W\in C^{1,1}_{loc}(\mathbb{R})\) be a potential satisfying (1.7b)._ * _If (_1.10_) holds, there are no radial solutions_ \(u\in C^{2}(\mathbb{R}^{n})\) _of (_1.1_) such that_ \(u(\mathbb{R}^{n})\subset(a,b)\)_._ * _If_ \(\limsup_{u\to a^{+}}\frac{|W^{\prime}(u)|}{|u-a|^{\frac{n+2}{n-2}}}=0\) _holds, there exists a radial solution_ \(u\in C^{2}(\mathbb{R}^{n})\) _of (_1.1_) such that_ \(u(\mathbb{R}^{n})\subset(a,b)\)_._ * _Otherwise, if neither (_1.10_) nor_ \(\limsup_{u\to a^{+}}\frac{|W^{\prime}(u)|}{|u-a|^{\frac{n+2}{n-2}}}=0\) _hold, depending on_ \(W\)_, there may or may not exist a radial solution_ \(u\in C^{2}(\mathbb{R}^{n})\) _of (_1.1_) such that_ \(u(\mathbb{R}^{n})\subset(a,b)\)_._ Proof.: (i) A radial solution \(u\in C^{2}(\mathbb{R}^{n})\) of (1.1) such that \(u(\mathbb{R}^{n})\subset(a,b)\), decays to \(a\), as \(|x|\to\infty\). In view of Proposition 2.2, it is clear that such a solution does not exist when (1.10) holds. (ii) Now, assume that \(\limsup_{u\to a^{+}}\frac{|W^{\prime}(u)|}{|u-a|^{\frac{n+2}{n-2}}}=0\) holds, and define \[\tilde{W}(v)=\begin{cases}W(v)&\text{ for }v\leq b\\ W(b)&\text{ for }v\geq b.\end{cases} \tag{2.31}\] Theorem 4 of [2] provides the existence of a radial solution \(u\in C^{2}(\mathbb{R}^{n})\) of \(\Delta u=\tilde{W}^{\prime}(u)\), such that \(u>a\), and \(\lim_{|x|\to\infty}u(x)=a\). By the maximum principle, we have \(u(\mathbb{R}^{n})\subset(a,b)\), and thus \(u\) solves \(\Delta u=W^{\prime}(u)\). 
Finally, (iii) follows from Lemmas 2.3 and 2.4. As we mentioned in the Introduction, for general domains, condition (1.10) is not sufficient to derive the asymptotic property (1.6) of solutions. Proposition 2.6 below, provides examples of solutions having a different asymptotic behaviour. **Proposition 2.6**.: _Let \(p>1\), and let \(W\in C^{1,1}_{loc}(\mathbb{R})\) be a potential fulfilling (1.7b), as well as_ \[\forall u\in[a,b]:\ W^{\prime}(u)\geq-c(u-a)^{p},\ \text{for a constant}\ c>0. \tag{2.32}\] _Let \(D=\{x\in\mathbb{R}^{2}:|x_{2}|<\psi(x_{1})\}\), where \(\psi\in C^{\infty}(\mathbb{R})\) is a positive function such that \(\psi(s)=\lambda|s|\), for \(|s|>\epsilon\) (with \(\lambda\,,\epsilon>0\) sufficiently small, depending on \(W\)). Then, there exists a solution \(u\in C^{2}(D)\) of (1.1) such that \(u(D)\subset(a,b)\), and_ \[\lim_{x_{1}\to+\infty}u(x)=a\ \text{and}\ \lim_{x_{1}\to-\infty}u(x)=b. \tag{2.33}\] Proof.: Without loss of generality we may assume that \(a=0\). We shall first construct a supersolution \(\phi^{*}\) of (1.1) in \(D\). We define the auxilliary functions \[f(re^{i\theta})=r^{-\frac{2}{p-1}}g(\theta), \tag{2.34a}\] with \(g:[-\theta_{0},\theta_{0}]\to(0,\infty)\) (\(\theta_{0}<\frac{\pi}{2}\)), a positive solution of the O.D.E.: \[g^{\prime\prime}(\theta)=-cg^{p}(\theta)-\frac{4}{(p-1)^{2}}g(\theta). \tag{2.34b}\] Next, setting \(\lambda=\tan(\theta_{0})\), one can check that \[\Delta f(x)=-c(f(x))^{p}\ \text{in the sector}\ S=\{x_{1}>0,|x_{2}|<\lambda x_{1}\}. \tag{2.35}\] In addition, we have \(f(x)>b\) in the set \(\{0<x_{1}\leq\epsilon,|x_{2}|<\lambda x_{1}\}\), provided that \(\epsilon>0\) is sufficiently small. Finally, we take \[\phi^{*}(x)=\begin{cases}\min(f(x),b)&\text{ when }x_{1}>\epsilon,\ \text{and}\ |x_{2}|<\lambda x_{1}.\\ b&\text{ when }x_{1}\leq\epsilon,\ \text{and}\ |x_{2}|<\psi(x_{1}).\end{cases} \tag{2.36}\] Using the Kato inequality, one can see that \(\phi^{*}\) is a supersolution of (1.1) in \(D\). Indeed, in view of (2.32) and (2.35), we have \[\Delta\phi^{*}\leq-cf^{p}\chi_{\{f<b\}}\leq W^{\prime}(\phi^{*})\ \text{in}\ H^{1}_{loc}(D), \tag{2.37}\] where \(\chi\) is the characteristic function. To construct a subsolution \(\phi_{*}\) of (1.1) in \(D\), we take \[\phi_{*}(x)=\begin{cases}e(x_{1})&\text{ when }x_{1}<0,\ \text{and}\ |x_{2}|<\psi(x_{1})\\ 0&\text{ when }x_{1}\geq 0,\ \text{and}\ |x_{2}|<\psi(x_{1}),\end{cases} \tag{2.38}\] where \(e:(-\infty,0]\to[0,b)\) is the heteroclinic orbit, solving \[e^{\prime\prime}(s)=W^{\prime}(e(s)),\ e(0)=0,\lim_{s\to-\infty}e(s)=b. \tag{2.39}\] The existence of such a heteroclinic orbit is proved by extending \(W\) to an even \(C^{1,1}(\mathbb{R})\) function \(\tilde{W}\) such that \(\tilde{W}=W\) in \((0,b)\), \(\tilde{W}^{\prime}>0\) in \((b,\infty)\) and considering the phase plane for the ODE \(v^{\prime\prime}=\tilde{W}^{\prime}(v)\). The situation is analogue to the one we have for the classical double well potential \(\frac{1}{4}(1-t^{2})^{2}\). It follows again from the Kato inequality that \(\Delta\phi_{*}\geq W^{\prime}(\phi_{*})\) holds in \(H^{1}_{loc}(D)\). In addition, it is clear that \(\phi_{*}<\phi^{*}\) holds in \(D\). Therefore, we deduce from the method of sub- and supersolutions (cf. Section 5.1, and for instance [7, Lemma 1.1.1]), the existence of a solution \(u\in C^{2}(D)\) of (1.1) satisfying \(\phi_{*}\leq u\leq\phi^{*}\). Since \(0<u<b\) by the maximum principle, the solution \(u\) has all the desired properties. 
Now, we are ready to prove Theorems 1.2 and 1.3, and their corollaries. Proof of Theorem 1.2.: (i) Assume \(u\in C^{2}(\mathbb{R}^{n})\) is an entire solution of (1.1) such that \(u(\mathbb{R}^{n})\subset[a,b]\). When \(n=2\), \(u\) is a bounded superharmonic function defined on \(\mathbb{R}^{2}\). Thus, \(u\) is constant and equal to a critical point of \(W\). That is, \(u\equiv a\) or \(u\equiv b\). In higher dimensions \(n\geq 3\), we have by the maximum principle either \(u\equiv a\), or \(a<u\leq b\) on \(\mathbb{R}^{n}\). We shall first assume that \(a<u\leq b\) as well as (1.10) hold, and we shall prove that \(u\equiv b\). In view of (1.10), Proposition 2.2 implies that \(\lim_{|x|\to\infty}u(x)=b\), and \(a+\epsilon\leq u\leq b\) holds on \(\mathbb{R}^{n}\), for some \(\epsilon>0\). Thus, \(u\equiv b\), by Lemma 2.1. Next, we consider again an entire solution \(u\) of (1.1) satisfying \(u(\mathbb{R}^{n})\subset[a,b]\) in dimensions \(n\geq 3\), but without assuming (1.10). By the maximum principle, we have either \(u\equiv a\), or \(u\equiv b\), or \(a<u<b\). Let \[\mathcal{F}=\{u\text{ is a solution of \eqref{eq:1.1} such that }u(\mathbb{R}^{n})\subset(a,b)\},\] \[C_{W}=\sup\{u(x):x\in\mathbb{R}^{n},u\in\mathcal{F}\}.\] Our first claim is that \[C_{W}<b. \tag{2.40}\] Indeed, assume by contradiction that there exists a sequence \(\{u_{k}\}\subset\mathcal{F}\), and a sequence \(\{x_{k}\}\subset\mathbb{R}^{n}\), such that \(\lim_{k\to\infty}u_{k}(x_{k})=b\). Setting \(v_{k}(y)=u_{k}(x_{k}+y)\), and proceeding as in Lemma 2.1, we obtain that (up to subsequence) \(v_{k}\) converges in \(C^{2}_{\rm loc}(\mathbb{R}^{n})\) to an entire solution \(v_{\infty}\) of (1.1). Furthermore, since \(v_{\infty}(0)=b\), the maximum principle implies that \(v_{\infty}\equiv b\). At this stage we consider the minimizer \(\phi_{R}\in H^{1}(B_{R}(0))\) defined in (2.26). It is known that \(\phi_{R}\) is a smooth radial solution of (1.1) in \(B_{R}(0)\), such that \(a\leq\phi_{R}\leq b-\delta_{R}\) on \(B_{R}(0)\), for some \(\delta_{R}>0\). In addition, by taking \(R>R_{0}\) large enough, we have \(a<\phi_{R}\leq b-\delta_{R}\) on \(B_{R}(0)\). Thus, for fixed \(R>R_{0}\), we can ensure that \(a<\phi_{R}\leq b-\delta_{R}\leq v_{k}\) holds on \(B_{R}(0)\), provided that \(k\geq k_{R}\) is large enough. Finally, by applying the sliding method of Berestycki, Caffarelli, and Nirenberg [1, Lemma 3.1], we deduce that for \(k\geq k_{R}\), \(v_{k}\) as well as \(u_{k}\) are entire solutions of (1.1) satisfying respectively \(v_{k}(\mathbb{R}^{n})\subset[a+\epsilon_{R},b]\), and \(u_{k}(\mathbb{R}^{n})\subset[a+\epsilon_{R},b]\), with \(\epsilon_{R}:=\phi_{R}(0)-a>0\). In view of Lemma 2.1, this implies that \(u_{k}\equiv b\), for \(k\geq k_{R}\), which is a contradiction. This proves (2.40). The fact that \(\liminf_{|x|\to\infty}u(x)=a\) holds for every \(u\in\mathcal{F}\) also follows from Lemma 2.1. Indeed, assuming by contradiction that \(\liminf_{|x|\to\infty}u(x)>a\) we would obtain that \(u(\mathbb{R}^{n})\subset[a+\epsilon,b]\), for some \(\epsilon>0\). Therefore, using Lemma 2.1, we conclude that \(u\equiv b\), which is a contradiction. (ii) Now, assume the domain \(D\) satisfies (1.2), and \(u\in C^{2}(D)\) is a solution of (1.1) such that \(u(D)\subset(a,b]\). In the nondegenerate case where (1.8) holds, [1, Lemma 3.2] implies that \(a+\epsilon<u(x)\leq b\) holds for some \(\epsilon>0\), provided that \(d(x,\partial D)>\eta\), for some \(\eta>0\). 
Thus, in view of Lemma 2.1, we have \(\lim_{d(x,\partial D)\to\infty}u(x)=b\). Proof of Theorem 1.3.: On the one hand, since \[\sup_{\mathbb{R}^{n}}u=b,\] we can choose a sequence \(\{x_{k}\}_{k\in\mathbb{N}}\subset\mathbb{R}^{n}\) such that \(\lim_{k\to\infty}u(x_{k})=b\), and set \(v_{k}(y)=u(x_{k}+y)\). Proceeding as in the proof of Theorem 1.2, one can see that (up to subsequence) \(v_{k}\) converges in \(C^{2}_{\mathrm{loc}}(\mathbb{R}^{n})\) to an entire solution \(v_{\infty}\equiv b\). In particular, given \(R>0\) and \(\delta>0\), we have \(u(x)\in[b-\delta,b]\), provided that \(x\in B_{R}(x_{k})\), and \(k\geq K(R,\delta)\) is large enough. On the other hand, let \(\iota:=\inf_{\mathbb{R}^{n}}u\leq b\), and assume by contradiction that \(\iota<b\) and \(W(\iota)>0\). Next, define the auxiliary potential \[\tilde{W}(u)=\begin{cases}W(u)&\text{for }u\geq\iota\\ W(\iota)&\text{for }u\leq\iota,\end{cases}\] and consider a minimiser \(\phi_{R}\in H^{1}(B_{R}(0))\) of the energy functional \[\tilde{E}(v)=\int_{B_{R}(0)}\Big{(}\frac{1}{2}|\nabla v(x)|^{2}+\tilde{W}(v(x))\Big{)}dx,\] in the class \(\mathcal{A}=\{v\in H^{1}(B_{R}(0)),\,v=\iota\text{ on }\partial B_{R}(0)\}\). Setting \(\sigma:=\min\{t\geq\iota:W(t)=0\}\), and \(\sigma_{R}:=\sup_{B_{R}(0)}\phi_{R}\), one can see that \[\iota\leq\phi_{R}\leq\sigma_{R}<\sigma\] holds for every \(R>0\), since \(W(\iota)>0\). In addition, \(\phi_{R}\) is a smooth radial solution of (1.1) in \(B_{R}(0)\), such that \(\lim_{R\to\infty}\sigma_{R}=\sigma\). Thus, by taking \(R>0\) large enough, we can ensure that \(\iota<\sigma_{R}<b\). As a consequence, choosing \(\delta\leq b-\sigma_{R}\), we also have \(\phi_{R}(y)\leq u(y+x_{k})\), provided that \(y\in B_{R}(0)\), and \(k\geq K(R,\delta)\). Finally, by applying the sliding method of Berestycki, Caffarelli, and Nirenberg [1, Lemma 3.1], we deduce that \(u\geq\sigma_{R}>\iota\) holds on \(\mathbb{R}^{n}\), which is a contradiction. So far we have established that \(W(\iota)=0\), so that \(\iota=\sigma\). To complete the proof of Theorem 1.3, it remains to show that \(\iota=b\). Indeed, if \(\iota<b\) and \(W(\iota)=0\), then in particular \(\iota<a\), since \(W^{\prime}<0\) on \((a,b)\). Let \(\{z_{k}\}_{k\in\mathbb{N}}\subset\mathbb{R}^{n}\) be a sequence such that \(\lim_{k\to\infty}u(z_{k})=\iota\), and set \(w_{k}(y)=u(z_{k}+y)\). Proceeding as previously, we obtain that (up to subsequence) \(w_{k}\) converges in \(C^{2}_{\mathrm{loc}}(\mathbb{R}^{n})\) to an entire solution \(w_{\infty}\equiv\iota\). In particular, given \(R_{0}=\Lambda+1\) (cf. (1.13)) and \(\eta>0\) such that \(\iota+\eta<a\), we have \(u(x)\in[\iota,\iota+\eta]\), provided that \(x\in B_{R_{0}}(z_{k})\cap D\), and \(k\geq\tilde{K}(\eta)\) is large enough (we note that, in view of (1.13), \(B_{R_{0}}(z_{k})\cap D\neq\emptyset\)). This is a contradiction. Therefore, we have proved that \(\iota=b\), and \(u\equiv b\). Proof of Corollary 1.5.: Under the assumptions of Corollary 1.5, we can apply Theorem 1.2 (ii) to deduce that \(\lim_{d(x,\partial D)\to\infty}u(x)=b\). On the other hand, in view of Remark 1.4 we have \(u\leq b\), so that \(\sup_{\mathbb{R}^{n}}u=b\). Therefore, Theorem 1.3 implies that \(u\equiv b\). Proof of Corollary 1.6.: When \(n\geq 3\), we first apply Proposition 2.2 to deduce that \(\lim_{|x|\to\infty}u(x)=b\). Next, in view of Remark 1.4 we obtain that \(\sup_{\mathbb{R}^{n}}u=b\). Finally, Theorem 1.3 implies that \(u\equiv b\). On the other hand, when \(n=2\), \(u\) is superharmonic in \(\{x\in\mathbb{R}^{2}:|x|>R\}\). 
Setting \(\gamma:=\min_{|x|=R+1}u(x)\in(a,b)\), we deduce from Lemma 5.4 that \(u(x)\in[\gamma,b)\), provided that \(|x|>R+1\). In view of Lemma 2.1, Remark 1.4 and Theorem 1.3, we conclude as previously that \(u\equiv b\). ## 3. Radial symmetry for solutions converging to the local minimum: proofs of Proposition 1.8 and Theorem 1.9 In this section we give the proofs of Proposition 1.8 and Theorem 1.9. Proof of Proposition 1.8.: First we note that \(v:=b-u>0\) is bounded and subharmonic outside \(B_{R}\); in fact, \(-\Delta v=\Delta u=W^{\prime}(b-v)\leq 0\) in \(\mathbb{R}^{n}\backslash B_{R}\), hence by [12, Lemma 22] we have the decay estimate \[v(x)\leq C|x|^{2-n}\qquad\text{for all }|x|\geq\rho. \tag{3.41}\] Next, it follows from [13, Proposition 1, Theorem 4] that \(v\) is radial. Proof of Theorem 1.9.: First we show that \(u<b\) in all of \(\mathbb{R}^{n}\). By the strong maximum principle, it is enough to prove that \(u\leq b\) in \(\mathbb{R}^{n}\). For this purpose, assume by contradiction that \(c:=\sup_{\mathbb{R}^{n}}u=\max_{\mathbb{R}^{n}}u>b\), and \(W(c)>W(b)\). Next, define the auxiliary potential \[\tilde{W}(u)=\begin{cases}W(b)&\text{for }u\leq b\\ W(u)&\text{for }b\leq u\leq c\\ W(c)&\text{for }u\geq c,\end{cases}\] and consider a minimiser \(\phi_{R}\in H^{1}(B_{R}(0))\) of the energy functional \[\tilde{E}(v)=\int_{B_{R}(0)}\Big{(}\frac{1}{2}|\nabla v(x)|^{2}+\tilde{W}(v(x))\Big{)}dx,\] in the class \(\mathcal{A}=\{v\in H^{1}(B_{R}(0)),\,v=c\text{ on }\partial B_{R}(0)\}\). We know that \(\phi_{R}\) is a radial solution to (1.1) such that \(b<\min_{B_{R}(0)}\phi_{R}=\phi_{R}(0)<c\), for \(R\geq R_{0}\) sufficiently large, since \(\phi_{R}(0)\to b\) as \(R\to\infty\). In addition, since \(u(\mathbb{R}^{n}\setminus B_{R}(0))\subset(a,b)\), we have \(u(x+x_{0})<\phi_{R_{0}}(x)\) on \(B_{R_{0}}(0)\), provided that \(|x_{0}|>R+R_{0}\). Finally, by applying the sliding method of Berestycki, Caffarelli, and Nirenberg [1, Lemma 3.1], we deduce that \(u\leq\phi_{R_{0}}(0)<c\) holds on \(\mathbb{R}^{n}\), which is a contradiction. So far we have established that \(W(c)=W(b)\). To conclude that \(u\leq b\), it remains to show that \(c=b\). Indeed, if \(c>b\) and \(W(c)=W(b)\), then \(c\) is a local minimum of \(W\) satisfying \(W^{\prime}(c)=0\), and there exists \(x_{0}\in\mathbb{R}^{n}\) such that \(u(x_{0})=c\). Thus, by the maximum principle, we obtain \(u\equiv c\), which is excluded. To complete the proof of Theorem 1.9, we shall use Proposition 1.8, [12, Theorem 2], and the regularity of \(W\). We first assume that hypothesis (i) holds, and distinguish the following cases. a) If \(W^{\prime\prime}(b)>0\), then \(v:=b-u>0\) is a decaying entire solution to \[-\Delta v=f(v):=W^{\prime}(b-v),\] therefore it is radial by [12, Theorem 2], since \(f^{\prime}(t)\leq 0\) for \(t\in(0,\delta)\). Otherwise, \(W^{\prime\prime}(b)=0\) implies that \(W^{\prime\prime\prime}(b)=0\), since \(W\in C^{6}(\mathbb{R})\), thus we shall examine the sign of \(\frac{d^{4}W}{du^{4}}(b)\). b) In the case where \(\frac{d^{4}W}{du^{4}}(b)>0\), the radial symmetry of \(u\) follows again from [12, Theorem 2], since \(f^{\prime}(t)\leq 0\) holds for \(t\in(0,\delta)\). c) In the case where \(\frac{d^{4}W}{du^{4}}(b)=0\), we have \(\frac{d^{5}W}{du^{5}}(b)=0\), and \([b-\delta,b]\ni t\mapsto\frac{W^{\prime}(t)}{|t-b|^{5}}\) is Hölder continuous. Moreover, \(5\geq\frac{n+2}{n-2}\) holds for every \(n\geq 3\), hence the result follows from Proposition 1.8. 
Finally, in the case where hypothesis (ii) holds, the result is straightforward in view of [12, Theorem 2]. ## 4. A nonradial solution converging to the local maximum In this section we will provide an example of a potential \(W\) of the form (1.7b) for which equation (1.1) admits a solution \(u\) such that \(u(x)>a\) for \(|x|>R\) and \(\lim_{|x|\to\infty}u(x)=a\), but \(u\) is not radial. The counterexample can be found in [6] using the Yamabe equation \[-\Delta u=\frac{n(n-2)}{4}|u|^{\frac{4}{n-2}}u\ \ \text{in $\mathbb{R}^{n}$, $n\geq 3$.} \tag{4.42}\] Equation (4.42) is variational, in the sense that it is the Euler-Lagrange equation of the energy functional \[E(u):=\frac{1}{2}\int_{\mathbb{R}^{n}}|\nabla u|^{2}-\frac{(n-2)^{2}}{8}\int_{\mathbb{R}^{n}}|u|^{\frac{2n}{n-2}}.\] It is known that the only finite-energy positive solutions are given by \[\mu^{-\frac{n-2}{2}}U(\mu^{-1}(x-\xi)),\qquad U(x):=\left(\frac{2}{1+|x|^{2}}\right)^{\frac{n-2}{2}},\,\mu>0,\,\xi\in\mathbb{R}^{n}.\] These solutions, which are called the _standard bubbles_, are also the only positive solutions of (4.42) (see [5]). Using these bubbles, in [6] the authors construct a sequence of bounded entire solutions \(\{u_{k}\}_{k\geq k_{0}}\) to (4.42) in \(\mathbb{R}^{n}\) of the form \[u_{k}:=v_{k}+\phi_{k}, \tag{4.43}\] where the approximate solution \(v_{k}\) is given by \[\begin{split} v_{k}(x)&:=U(x)-\sum_{j=1}^{k}\mu_{k}^{-\frac{n-2}{2}}U(\mu_{k}^{-1}(x-\xi_{j,k})),\\ \mu_{k}&=c_{n}k^{-2}\ \text{for}\ n\geq 4,\,\mu_{k}=c_{3}k^{-2}(\log k)^{-2}\ \text{for}\ n=3,\\ \xi_{j,k}&:=(\cos(\frac{2\pi j}{k}),\sin(\frac{2\pi j}{k}),0,\ldots,0)\qquad 1\leq j\leq k\end{split} \tag{4.44}\] and the corrections \(\phi_{k}\) fulfil \[|\phi_{k}(x)|\leq\frac{c}{\log k(1+|x|)}\ \text{if}\ n=3,\qquad|\phi_{k}(x)|\leq\frac{c}{k^{\alpha_{n}}(1+|x|^{n-2})}\ \text{if}\ n\geq 4,\,\text{with}\ \alpha_{n}>0. \tag{4.45}\] As a consequence, the solutions \(u_{k}\) are \(L^{\infty}(\mathbb{R}^{n})\)-close to a linear combination of \(k+1\) rescaled bubbles. One of them is positive and centred at the origin, the other ones are negative and centred along the unit circle \(S^{1}\subset\mathbb{R}^{2}\). In particular, they are sign-changing solutions. Moreover, it follows from (4.44) and (4.45) that \[u_{k}(x)\to 0\ \text{as}\ |x|\to\infty,\,\text{for any}\ k\geq k_{0}. \tag{4.46}\] We are going to check that \(u_{k}\) is positive outside a ball. **Lemma 4.1**.: _There exist \(\bar{r}>0\) and \(\bar{k}>0\) such that \(u_{k}(x)>0\) if \(|x|>\bar{r}\) and \(k\geq\bar{k}\)._ Proof.: We will show that, for \(k\) large enough, the approximate solution \(v_{k}\) fulfils \[v_{k}(x)>\frac{2}{3}U(x)>0\qquad\text{if }|x|>\bar{r} \tag{4.47}\] for some large \(\bar{r}>0\). Then we apply (4.45) to conclude that \[u_{k}(x)=v_{k}(x)+\phi_{k}(x)>\frac{2}{3}U(x)-\frac{C}{(1+|x|)\log k}>\frac{1}{2}U(x)>0\] outside a large ball in dimension \(n=3\). Similarly, in dimensions \(n\geq 4\) we have \[u_{k}(x)=v_{k}(x)+\phi_{k}(x)>\frac{2}{3}U(x)-\frac{C}{k^{\alpha_{n}}(1+|x|^{n-2})}>\frac{1}{2}U(x)>0\] outside a large ball. 
In order to prove (4.47), we note that, in dimension \(n=3\) we have \[v_{k}(x) =\left(\frac{2}{1+r^{2}}\right)^{\frac{1}{2}}-\sum_{j=1}^{k}\mu_ {k}^{-\frac{1}{2}}U\left(\frac{x-\xi_{j,k}}{\mu_{k}}\right)\geq\left(\frac{2} {1+r^{2}}\right)^{\frac{1}{2}}-k\mu_{k}^{-\frac{1}{2}}\left(\frac{2\mu_{k}^{2 }}{\mu_{k}^{2}+(r-1)^{2}}\right)^{\frac{1}{2}}\] \[\geq\left(\frac{2}{1+r^{2}}\right)^{\frac{1}{2}}\left(1-k\mu_{k}^ {\frac{1}{2}}\left(\frac{1+r^{2}}{(r-1)^{2}}\right)^{\frac{1}{2}}\right)=\left( \frac{2}{1+r^{2}}\right)^{\frac{1}{2}}\left(1-\frac{\sqrt{c_{3}}}{\log k} \left(\frac{1+r^{2}}{(r-1)^{2}}\right)^{\frac{1}{2}}\right)>\frac{2}{3}U(x)\] where \(r=|x|\) and \(k\) are large enough. Similarly, in higher dimension, we have \[v_{k}(x)\geq\left(\frac{2}{1+r^{2}}\right)^{\frac{n-2}{2}}\left(1-\frac{c_{n} ^{\frac{n-2}{2}}}{k^{n-3}}\left(\frac{1+r^{2}}{(r-1)^{2}}\right)^{\frac{n-2}{ 2}}\right)>\frac{2}{3}U(x).\] for \(r\) and \(k\) large enough. Finally, we can take \(b>\|u_{\bar{k}}\|_{L^{\infty}(\mathbb{R}^{n})}\) and define a \(C^{1}(\mathbb{R})\) function \(f\) such that \(f(t)<0\) for any \(t\in(0,b)\), \(f(t)=-\frac{n(n-2)}{4}|t|^{\frac{4}{n-2}}t\) for \(|t|\leq\|u_{\bar{k}}\|_{L^{\infty}(\mathbb{R}^{n})}\), and \(f(b)=0\). Then \(f^{\prime}(0)=0\) and \(u_{\bar{k}}\) is a solution to \(\Delta u=f(u)\). Taking \(W\) to be a primitive of \(f\), we have the required counter example. In fact, we have \(0<u_{\bar{k}}(x)<b\) in \(D:=\mathbb{R}^{n}\backslash B_{\bar{r}}\), \(u_{\bar{k}}\to 0\) as \(|x|\to\infty\) but \(u_{\bar{k}}\) is sign changing and not radial. ## 5. Appendix ### The method of sub- and supersolutions Let \(\Omega\subset\mathbb{R}^{n}\) be a open set with Lipschitz boundary, and let \(f\in C^{\alpha}_{loc}(\mathbb{R})\), for some \(\alpha\in(0,1)\). We say that \(\underline{u}\in W^{1,2}(\Omega)\) is a subsolution (respectively \(\overline{u}\in W^{1,2}(\Omega)\) is a supersolution) to \[\Delta u=f(u), \tag{5.48}\] if \(\Delta\underline{u}\geq f(\underline{u})\) (respectively \(\Delta\overline{u}\leq f(\overline{u})\)) holds in \(\Omega\) in the weak sense. **Proposition 5.1**.: _Let \(\underline{u}\leq\overline{u}\) be a couple of bounded \(W^{1,2}(\Omega)\) sub- and supersolutions to (5.48). Then, there exists a solution \(u\in C^{2}(\Omega)\cap W^{1,2}(\Omega)\) to (5.48), satisfying \(\underline{u}\leq u\leq\overline{u}\)._ Proof.: We introduce the nonlinearity \[g(x,u):=\begin{cases}f(\underline{u}(x))&\text{if }u<\underline{u}(x),\\ f(u)&\text{if }\underline{u}(x)\leq u\leq\overline{u}(x),\\ f(\overline{u}(x))&\text{if }u>\overline{u}(x),\end{cases} \tag{5.49}\] and set \(G(x,u)=\int_{0}^{u}g(x,t)dt\). Next, we establish (exactly as in the proof of [7, Lemma 1.1.1]), the existence of a minimizer \(u\) of the energy functional: \[\mathcal{E}(v)=\int_{\Omega}\big{(}\frac{1}{2}|\nabla v(x)|^{2}+G(x,v(x)) \big{)}dx, \tag{5.50}\] in the class \(\mathcal{A}=\underline{u}+W^{1,2}_{0}(\Omega)\). For the sake of simplicity, we consider in the definition of \(\mathcal{A}\), the boundary condition \(v=\underline{u}\) on \(\partial\Omega\). However, we could also set \(\mathcal{A}=\phi+W^{1,2}_{0}(\Omega)\), with any \(\phi\in W^{1,2}(\Omega)\) such that \(\underline{u}\leq\phi\leq\overline{u}\) holds on \(\partial\Omega\). By construction, \(u\) solves the Euler-Lagrange equation \[\Delta u=g(x,u),\ x\in\Omega. 
\tag{5.51}\] Moreover, it follows from the maximum principle that \(\underline{u}\leq u\leq\overline{u}\) in \(\Omega\), which yields that \(u\) is actually a \(C^{2}(\Omega)\) solution to (5.48), satisfying \(\underline{u}\leq u\leq\overline{u}\). **Remark 5.2**.: _If in Proposition 5.1, we consider a domain \(\Omega=\{x\in\mathbb{R}^{n}:\rho_{1}<|x|<\rho_{2}\}\), and a couple \(\underline{u}\leq\overline{u}\) of bounded radial sub- and supersolutions to (5.48), then we obtain the existence of a radial solution \(u\in C^{2}(\Omega)\cap W^{1,2}(\Omega)\) to (5.48), satisfying \(\underline{u}\leq u\leq\overline{u}\). Indeed, since the nonlinearity (5.49) and the energy functional (5.50) are invariant by the orthogonal group \(O(n)\), we can look for a minimizer \(u\) in the class \(\mathcal{A}_{O(n)}=\{v\in\mathcal{A}:v(\sigma x)=v(x),\forall\sigma\in O(n)\}\). By the principle of symmetric criticality [20], \(u\) is a smooth radial solution to (5.51), and the bounds \(\underline{u}\leq u\leq\overline{u}\) follow as previously from the maximum principle._ The method of sub- and supersolutions is also applicable in unbounded domains. In Proposition 2.2, we apply it in \(\Omega=\mathbb{R}^{n}\setminus B_{\rho}\), with a radial subsolution \(\phi_{*}(x)=c|x|^{2-n}\), and a radial supersolution \(\phi^{*}\geq\phi_{*}\), \(\phi^{*}\in W^{1,2}(B_{R}\setminus\overline{B}_{\rho})\), \(\forall R>\rho\). As a consequence of Proposition 5.1 and Remark 5.2, we obtain for every \(R>\rho\), a radial solution \(v_{R}\) to (1.1) in \(\Omega_{R}:=B_{R}\setminus\overline{B}_{\rho}\), satisfying * \(\phi_{*}\leq v_{R}\leq\phi^{*}\) in \(\Omega_{R}\), * \(v_{R}=\phi_{*}\) on \(\partial\Omega_{R}\). In addition, since for any \(\alpha\in(0,1)\), the \(C^{1,\alpha}\) norm of \(\partial\Omega_{R}\) is uniformly bounded, and the \(C^{1,\alpha}\) norm of \(\phi_{*}\) is also bounded in \(\overline{\Omega}\), we deduce that the \(C^{1,\alpha}\) norm of \(v_{R}\) is uniformly bounded in \(\overline{\Omega_{R}}\), \(\forall R>\rho\) (cf. [14, Theorem 8.33]). Finally, we use the Theorem of Ascoli, via a diagonal argument, to prove that the limit \(v=\lim_{R\to\infty}v_{R}\) exists (up to subsequence) and is a radial solution to (1.1) in \(\Omega\), satisfying \(\phi_{*}\leq v\leq\phi^{*}\) in \(\Omega\). In Proposition 2.6, we have a second application of the method of sub- and supersolutions in an unbounded domain \(D\), such that \(\partial D\) is bounded for the \(C^{1,\alpha}\) norm. Here again, we consider an increasing sequence of bounded domains \(D_{k}\), such that \(D=\cup_{k}D_{k}\), and the boundaries \(\partial D_{k}\) are uniformly bounded for the \(C^{1,\alpha}\) norm. In view of Proposition 5.1, we obtain in each domain \(D_{k}\) a solution \(u_{k}\) of (1.1), and then by taking the limit \(u=\lim_{k\to\infty}u_{k}\) via the same diagonal argument, we construct the solution \(u\) in the whole domain \(D\). ### Two lemmas for superharmonic functions Here we recall two classical results on superharmonic functions. **Lemma 5.3**.: _Let \(n\geq 3\), let \(B_{\rho}\subset\mathbb{R}^{n}\) be the open ball of radius \(\rho\) centered at the origin, and let \(u\in C^{2}(\mathbb{R}^{n}\setminus B_{\rho})\) be a positive and bounded function, such that \(\Delta u\leq 0\) in \(\mathbb{R}^{n}\setminus B_{\rho}\). 
Then, there exists a constant \(c>0\) such that \(u(x)\geq c|x|^{2-n}\), for any \(x\in\mathbb{R}^{n}\setminus B_{\rho}\)._ Proof.: We fix \(y\in\mathbb{R}^{n}\backslash\overline{B_{\rho}(0)}\), \(\varepsilon>0\) and we prove that \(u(y)\geq c|y|^{2-n}-\varepsilon\), for some constant \(c>0\) independent of \(\varepsilon\), so that the result follows by letting \(\varepsilon\to 0\). In order to do so, we note that \[u(x)\geq\inf_{\partial B_{\rho}}u=:c\rho^{2-n}=c|x|^{2-n}>c|x|^{2-n}- \varepsilon\qquad\forall\,x\in\partial B_{\rho}.\] Moreover, taking \(R>|y|\) large enough, we have \[c|x|^{2-n}-\varepsilon<0<u(x)\qquad\forall\,x\in\partial B_{R}.\] As a consequence, using that \(c|x|^{2-n}-\varepsilon\) is harmonic in the set \(A:=\{x\in\mathbb{R}^{n}:\,\rho<|x|<R\}\), the maximum principle yields that \(u\geq c|x|^{2-n}-\varepsilon\) in \(A\). In particular we have \(u(y)\geq c|y|^{2-n}-\varepsilon\). **Lemma 5.4**.: _Let \(B_{r}(0)\subset\mathbb{R}^{2}\) be the open ball of radius \(r\) centred at the origin, and let \(\psi\in C(\mathbb{R}^{2}\setminus B_{r}(0))\) be a function such that_ * \(\psi\in W^{1,2}_{loc}(\mathbb{R}^{2}\setminus\overline{B_{r}(0)})\)_,_ * \(\psi\) _is bounded from below on_ \(\mathbb{R}^{2}\setminus B_{r}(0)\)_,_ * \(\Delta\psi\leq 0\)_, on_ \(\mathbb{R}^{2}\setminus\overline{B_{r}(0)}\)_._ _Then, \(\psi\) attains its minimum on \(\partial B_{r}(0)\)._ Proof.: Let \(x_{0}\in\partial B_{r}(0)\) be such that \(\min_{\partial B_{r}(0)}\psi=\psi(x_{0})\). For every \(\epsilon>0\) fixed, we consider the function \(\zeta_{\epsilon}(x)=\psi(x)+\epsilon\ln(|x|/r)\) which is superharmonic on \(\mathbb{R}^{2}\setminus\overline{B_{r}(0)}\). In addition, we have \(\zeta_{\epsilon}(x)>\zeta_{\epsilon}(x_{0})=\psi(x_{0})\), provided that \(|x|\geq R_{\epsilon}\) (with \(R_{\epsilon}\) sufficiently large). Thus, by the maximum principle, the minimum of \(\zeta_{\epsilon}\) in the annuli \(r\leq|x|\leq R\), with \(R\geq R_{\epsilon}\), is attained at \(x_{0}\). This implies, that for every \(\epsilon>0\), and \(x\in\mathbb{R}^{2}\setminus B_{r}(0)\), we have \(\zeta_{\epsilon}(x)\geq\psi(x_{0})\Leftrightarrow\psi(x)\geq\psi(x_{0})- \epsilon\ln(|x|/r)\). Finally, letting \(\epsilon\to 0\), we obtain that \(\psi(x)\geq\psi(x_{0})\) holds for every \(x\in\mathbb{R}^{2}\setminus B_{r}(0)\). ## Acknowledgements M. Rizzi was partially supported by Justus Liebig University. The authors are particularly grateful to prof. Alberto Farina for his precious remarks and comments.
2309.08313
Conditional validity of heteroskedastic conformal regression
Conformal prediction, and split conformal prediction as a specific implementation, offer a distribution-free approach to estimating prediction intervals with statistical guarantees. Recent work has shown that split conformal prediction can produce state-of-the-art prediction intervals when focusing on marginal coverage, i.e. on a calibration dataset the method produces on average prediction intervals that contain the ground truth with a predefined coverage level. However, such intervals are often not adaptive, which can be problematic for regression problems with heteroskedastic noise. This paper tries to shed new light on how prediction intervals can be constructed, using methods such as normalized and Mondrian conformal prediction, in such a way that they adapt to the heteroskedasticity of the underlying process. Theoretical and experimental results are presented in which these methods are compared in a systematic way. In particular, it is shown how the conditional validity of a chosen conformal predictor can be related to (implicit) assumptions about the data-generating distribution.
Nicolas Dewolf, Bernard De Baets, Willem Waegeman
2023-09-15T11:10:46Z
http://arxiv.org/abs/2309.08313v2
# Heteroskedastic conformal regression ###### Abstract Conformal prediction, and split conformal prediction as a specific implementation, offer a distribution-free approach to estimating prediction intervals with statistical guarantees. Recent work has shown that split conformal prediction can produce state-of-the-art prediction intervals when focusing on marginal coverage, _i.e._, on a calibration dataset the method produces on average prediction intervals that contain the ground truth with a predefined coverage level. However, such intervals are often not adaptive, which can be problematic for regression problems with heteroskedastic noise. This paper tries to shed new light on how adaptive prediction intervals can be constructed using methods such as normalized and Mondrian conformal prediction. We present theoretical and experimental results in which these methods are investigated in a systematic way. Keywords: Conformal prediction; Heteroskedastic noise; Regression; Conditional validity. ## 1 Introduction Many methods exist to estimate prediction sets or, more specifically, prediction intervals in the regression setting. Some examples include Gaussian processes [41], quantile regression [16] and Monte Carlo Dropout [31]. In a model-independent and distribution-free way, conformal prediction [2, 37] allows one to estimate such regions with statistical guarantees. Different approaches to conformal prediction exist, e.g. transductive [29], inductive [21, 28] and cross-conformal prediction [35]. The validity of inductive conformal prediction has been verified numerous times, see e.g. [4, 32, 43], and a comparison of the aforementioned uncertainty quantification methods and further improvements resulting from applying conformal prediction as a (post-hoc) calibration method were investigated in [7]. However, this analysis was only performed at the marginal level, where none of the structure inherent to the data and problem setting is taken into account. Nonetheless, an important problem in the field of uncertainty quantification is exactly this conditional behaviour. Conformal prediction has a probabilistic validity guarantee, but this only holds w.r.t. the full data distribution. Consequently, the algorithm is allowed to attain the claimed validity by solely focusing on the 'easy' parts of the data, which are often more abundant, while ignoring the more difficult parts. In practice, however, it is usually these difficult regions that matter the most. In this regard, consider Figure 1. If the two samples considered made up a data set, the prediction intervals would be valid at the significance level \(\alpha=0.2\), because 80% of the points are covered. Since these intervals were generated with a standard conformal predictor based on the absolute residuals for the significance level \(\alpha=0.2\), this figure illustrates its marginal guarantees. However, all of the data points in the blue subgroup are covered, while only 60% of the red subgroup is covered. The conformal predictor might work marginally as promised, but it is definitely not sufficient when working with data sets in which more structure is present. With the rise of conformal prediction, the interest in distribution-free conditional uncertainty modelling has also increased. Although Venn and Mondrian conformal predictors are actually almost as old as the field itself [36, 37], adoption by mainstream machine learning practitioners has remained even more limited than is the case for their nonconditional counterparts. 
Just like these counterparts, the conditional variants provide strict statistical validity guarantees, but this leads to an inherent problem when conditioning on sets of probability zero [9, 34]. In general it is not possible to obtain distribution-free guarantees for object-level conditioning, _i.e._, when conditioning on the feature tuple. This forces researchers to aggregate data into larger subsets, thereby potentially reintroducing the issue of neglecting underrepresented regions of the instance space. In this paper the focus lies on modelling heteroskedastic noise with guarantees conditional on the level of heteroskedasticity, _i.e._, where the data set is divided based on an estimate of the residual variance, as in Figure 1, and the validity of different models w.r.t. such a division. In this respect it can be seen as a continuation of [5]. Aside from comparing the conditional validity of various standard nonconformity measures, with and without Mondrian taxonomies, theoretical conditions are derived for attaining conditional validity. In Section 2 the problem setting is discussed in more detail and a formal definition of (conditional) validity is given. Section 3 covers the general framework of (inductive) conformal prediction and how it can be applied in a conditional context. Both the use of normalized nonconformity measures and Mondrian conformal predictors is covered. The explicit case of uncertainty-dependent conditioning is treated in Section 4 in a theoretical way. Before considering some real-world data sets in Section 6, the impact of misspecification is analyzed in Section 5, in which some practical diagnostic tools are introduced that can help data scientists to decide on which framework to use. Figure 1: Two data samples with the same trend, \(y(x,s)\sim 0.1x+2s+\epsilon(s)\), where \(s\in\{0,1\}\) is a dummy variable labelling the subgroups, but with different noise levels. The blue subgroup (\(s=0\)) has standard deviation \(0.1\), while the red subgroup (\(s=1\)) has standard deviation \(0.5\). Although the prediction intervals are valid at the \(\alpha=0.2\) significance level, both marginally and for the blue subgroup, this is not the case for the red subgroup. ## 2 Problem statement As mentioned in the introduction, the main focus of this paper lies on conditional uncertainty quantification and, in particular, the construction of prediction regions with conditional guarantees. This conditioning is induced by a subdivision of the instance space, which in turn is performed [34, 37] using a so-called _taxonomy function_. To formalize this problem, some notations and conventions are fixed. Firstly, it is worth mentioning that some abuse of notation and terminology will be present, e.g. sets and multisets will be treated on an equal footing, as will unions and disjoint unions. Moreover, wherever necessary, functions will be assumed to be measurable. In conformal prediction (to be introduced in Section 3) and, by extension, all of statistics and data science, the natural setting is that of data sequences \(\big{(}(\mathbf{x}_{n},y_{n})\big{)}_{n\in\mathbb{N}}\) in \(\mathcal{X}\times\mathbb{R}\), where the target space has been fixed to \(\mathbb{R}\) since only (univariate) regression problems are of interest in this paper (note that most of the definitions in this and the ensuing sections can be generalized to arbitrary target spaces): \[y_{i}=\widehat{y}(\mathbf{x}_{i})+\epsilon_{i}\,. 
\tag{1}\] The set of all sequences in \(\mathcal{X}\times\mathbb{R}\) will be denoted by \((\mathcal{X}\times\mathbb{R})^{\infty}:=\cup_{n=1}^{\infty}(\mathcal{X} \times\mathbb{R})^{n}\). The feature space \(\mathcal{X}\) can be any type of space, such as \(\mathbb{R}^{n}\), \(\mathbb{N}\), etc. Elements of this space are denoted by bold font symbols: \(\mathbf{x}\in\mathcal{X}\). Although in machine learning the data sequence \(\big{(}(\mathbf{x}_{n},y_{n})\big{)}_{n\in\mathbb{N}}\) is often assumed to be drawn identically and independently from a joint distribution \(P_{X,Y}\), conformal prediction relaxes this requirement [37] to \(\big{(}(\mathbf{x}_{n},y_{n})\big{)}_{n\in\mathbb{N}}\) being drawn _exchangeably_ from \(P_{X,Y}\). Cumulative distributions will be denoted by the capital letter \(F\) and, if they exist, probability density functions will be denoted by a lower case \(f\). For clarity, all estimators will be denoted by a caret, e.g. \(\widehat{\mu}\) and \(\widehat{\sigma}\) denote estimators of the (conditional) mean \(\mu\) and (conditional) standard deviation \(\sigma\) of \(P_{Y|X}\), respectively. **Definition 1** (Validity).: An interval predictor \(\Gamma^{\alpha}:\mathcal{X}\to[\mathbb{R}]\) is said to be _(marginally) valid_ at significance level \(\alpha\in[0,1]\) if \[\mathrm{Prob}\big{(}Y\in\Gamma^{\alpha}(X)\big{)}\geq 1-\alpha\,, \tag{2}\] where \([\mathbb{R}]\) denotes the set of all (closed) intervals in \(\mathbb{R}\): \[[\mathbb{R}]:=\big{\{}[a,b]\,\big{|}\,a,b\in\mathbb{R}\wedge a\leq b\big{\}}\,. \tag{3}\] Consider a function \(\kappa:\mathcal{X}\times\mathbb{R}\to\mathcal{C}\), called the _taxonomy function_. The interval predictor \(\Gamma^{\alpha}:\mathcal{X}\to[\mathbb{R}]\) is said to be _conditionally valid_ w.r.t. \(\kappa\) at significance level \(\alpha\in[0,1]\) if \[\mathrm{Prob}\big{(}Y\in\Gamma^{\alpha}(X)\,\big{|}\,\kappa(X,Y)=c\big{)}\geq 1-\alpha \tag{4}\] for all \(c\in\mathcal{C}\). For convenience, the taxonomy space \(\mathcal{C}\) and the distribution \(P_{C}\) of the taxonomy class \(C:=\kappa(X,Y)\), given by the pushforward rule \[P_{C}(c):=P_{X,Y}\big{(}\kappa^{-1}(c)\big{)}\,, \tag{5}\] are assumed to be discrete with only the empty set having probability zero, such that conditioning on a taxonomy class does not lead to measure-zero issues. Moreover, the taxonomy function \(\kappa\) can in general be any function. However, for the purpose of this paper, a specific type of taxonomy will be considered. The taxonomy functions of interest divide the instance space based on an estimate of the uncertainty [5]. In this paper, the taxonomy function will be derived from a proxy of the heteroskedastic noise such as the (conditional) standard deviation. More formally, given such an estimate \(\delta:\mathcal{X}\to\mathbb{R}^{+}\) and a binning function \(\mathcal{B}:\mathbb{R}^{+}\to\mathcal{C}\), the induced taxonomy function is given by \[\kappa:=\mathcal{B}\circ\delta\circ\pi_{1}:\mathcal{X}\times\mathbb{R}\to \mathcal{C}, \tag{6}\] where \(\pi_{1}:\mathcal{X}\times\mathbb{R}\to\mathcal{X}:(\mathbf{x},y)\mapsto\mathbf{x}\) projects a data point onto its features. A straightforward example would be where \(\delta=\widehat{\sigma}\) is an estimate for the (conditional) standard deviation and \(\mathcal{B}\) corresponds to equal frequency binning for some predetermined number of classes (see, e.g., Figure 3(a) further below for the case of three classes). 
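As a minimal illustration of such an uncertainty-dependent taxonomy, the composition \(\kappa=\mathcal{B}\circ\delta\circ\pi_{1}\) can be sketched in a few lines of NumPy; the difficulty function `delta`, the reference sample used to fix the bin edges and the number of classes are illustrative placeholders and not tied to the experiments reported later.

```python
import numpy as np

def make_taxonomy(delta, X_ref, n_classes=3):
    """Sketch of kappa = B o delta o pi_1 with equal-frequency binning.

    `delta` maps a feature array to positive difficulty scores (e.g. an
    estimate of the conditional standard deviation); the bin edges are the
    interior quantiles of delta on a reference sample `X_ref`.
    """
    edges = np.quantile(delta(X_ref), np.linspace(0, 1, n_classes + 1)[1:-1])

    def kappa(X):
        # Class label in {0, ..., n_classes - 1}; it depends only on the
        # features, via the projection pi_1 from Eq. (6).
        return np.digitize(delta(X), edges)

    return kappa
```

For instance, with a fitted standard-deviation estimate `sigma_hat`, calling `make_taxonomy(sigma_hat, X_cal)` would assign every point to a low-, medium- or high-uncertainty class.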
## 3 Conformal prediction Since conformal prediction [2, 37] is the main framework used in this paper, a short introduction is in order. For simplicity and computational ease-of-use, attention is restricted to inductive (or split) conformal prediction (ICP) [21], where a data splitting strategy is adopted to avoid retraining the models, thereby sacrificing some statistical power. ### Inductive conformal regression Everything starts with a choice of _nonconformity measure_ \[A:\mathcal{X}\times\mathbb{R}\to\mathbb{R}\,, \tag{7}\] _i.e._, a function assigning to every data point a nonconformity score, indicating how 'weird' or 'nonconform' it is. Since this function often depends on a training set \(\mathcal{T}\), it can be interpreted as the weirdness w.r.t. that data set. Similar to Eq. (5), the induced distribution of nonconformity scores is given by \[P_{A}(B):=P_{X,Y}\left(A^{-1}(B)\right), \tag{8}\] for all subsets \(B\subseteq\mathbb{R}\). Given a choice of nonconformity measure, the _inductive_ (or _split_) _conformal prediction_ algorithm can be summarized as follows: 1. (Optional) Choose a training set \(\mathcal{T}\in(\mathcal{X}\times\mathbb{R})^{\infty}\) and train the underlying model of \(A\). 2. Choose a calibration set \(\mathcal{V}\in(\mathcal{X}\times\mathbb{R})^{\infty}\) and significance level \(\alpha\in[0,1]\). 3. For every calibration point \((\mathbf{x}_{i},y_{i})\in\mathcal{V}\), calculate the nonconformity score \(a_{i}:=A(\mathbf{x}_{i},y_{i})\). 4. Calculate the _critical nonconformity score_: \[a_{\mathcal{V}}^{*}:=q_{(1-\alpha)(1+\sfrac{1}{|\mathcal{V}|})}\left(\left\{ A(\mathbf{x},y)\,\big{|}\,(\mathbf{x},y)\in\mathcal{V}\right\}\right).\] (9) 5. For every new data point \(\mathbf{x}\in\mathcal{X}\), include all elements \(y\in\mathbb{R}\) in \(\Gamma^{\alpha}(\mathbf{x})\) for which \(A(\mathbf{x},y)\leq a_{\mathcal{V}}^{*}\). The reason for the 'inflated' quantile in the definition of the critical score can be found in the theorem at the end of this introduction. It is used to correct for not knowing the true label of new data points. Pseudocode for the algorithm is given in Algorithm 1. In general, conformal prediction allows for any choice of predictive model or nonconformity measure. However, even though for a fixed model the choice of nonconformity measure is virtually unconstrained, some choices are much more natural than others. The most widely used ones are the (absolute) _residual measure_ \[A_{\text{res}}(\mathbf{x},y):=|\widehat{y}(\mathbf{x})-y| \tag{10}\] in the case of point predictors [37] as in (1), and the _interval measure_ \[A_{\text{int}}(\mathbf{x},y):=\max\big{(}y-\widehat{y}_{+}(\mathbf{x}), \widehat{y}_{-}(\mathbf{x})-y\big{)} \tag{11}\] in the case of interval predictors [28], where \(\widehat{y}_{\pm}:\mathcal{X}\to\mathbb{R}\) denote the upper and lower bound of the prediction intervals, respectively. In this paper, two other common approaches are also considered: _normalized conformal prediction_ (in the next section) and _Mondrian conformal prediction_ (in Section 3.3). Both of these methods use estimates of the heteroskedastic noise, the function \(\delta:\mathcal{X}\to\mathbb{R}^{+}\), in an explicit way. 
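Before turning to the validity guarantees, steps 2 to 5 of the procedure can be sketched for the residual measure (10) in a few lines of NumPy; the point predictor `y_hat` and all variable names below are illustrative, and the snippet is only meant to make the inflated quantile in Eq. (9) concrete, not to reproduce Algorithm 1 verbatim.

```python
import numpy as np

def split_conformal_interval(y_hat, X_cal, y_cal, X_test, alpha=0.1):
    """Inductive conformal prediction with the absolute-residual measure (10)."""
    # Step 3: nonconformity scores on the calibration set.
    scores = np.abs(y_hat(X_cal) - y_cal)
    n = len(scores)
    # Step 4: critical score, the inflated (1 - alpha)(1 + 1/n) empirical quantile (9).
    level = min(1.0, (1 - alpha) * (1 + 1.0 / n))
    a_star = np.quantile(scores, level, method="higher")
    # Step 5: for the residual measure, {y : A(x, y) <= a*} is the interval
    # [y_hat(x) - a*, y_hat(x) + a*], so every interval has the same width.
    mu = y_hat(X_test)
    return mu - a_star, mu + a_star
```

The fact that every interval has the same width \(2a_{\mathcal{V}}^{*}\) is exactly the lack of adaptivity that the normalized and Mondrian variants discussed below try to remedy.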
The power of all these (inductive) conformal prediction methods lies in the following theorem [28, 37], where the notion of interval predictors is generalized to functions of the form \[\Gamma^{\alpha}:\mathcal{X}\times(\mathcal{X}\times\mathbb{R})^{ \infty}\to[\mathbb{R}] \tag{12}\] as to make the dependence on the calibration set more apparent. **Theorem 1** (Marginal validity).: _Let \(\Gamma^{\alpha}:\mathcal{X}\times(\mathcal{X}\times\mathbb{R})^{\infty}\to[ \mathbb{R}]\) be an inductive conformal predictor at significance level \(\alpha\in[0,1]\). If the nonconformity scores are exchangeable for any calibration set \(\mathcal{V}\) and any new observation \((\mathbf{x},y)\), i.e., any ordering of \(A(\mathcal{V})\cup\{A(\mathbf{x},y)\}\) is equally probable, then \(\Gamma^{\alpha}\) is conservatively valid:_ \[\operatorname{Prob}\big{(}Y\in\Gamma^{\alpha}(X,V)\big{)}\geq 1-\alpha\,, \tag{13}\] _where the probability is taken over both \((X,Y)\) and \(V\). Moreover, if the nonconformity scores are almost surely distinct, the conformal predictor is asymptotically exactly valid:_ \[\mathrm{Prob}\big{(}Y\in\Gamma^{\alpha}(X,V)\,\big{|}\,|V|=n\big{)}\leq 1- \alpha+\frac{1}{n+1}\,. \tag{14}\] This theorem heavily relies on the following lemma [28]. **Lemma 1**.: _Let \(\{X_{1},\ldots,X_{n+1}\}\) be a set of exchangeable random variables for some \(n\in\mathbb{N}_{0}\). The following relation holds for any \(\alpha\in[0,1]\):_ \[\mathrm{Prob}\left(X_{n+1}\leq\widehat{Q}_{n}\left((1-\alpha)\left(1+\frac{1} {n}\right)\right)\right)\geq 1-\alpha\,, \tag{15}\] _where \(\widehat{Q}_{n}\) is the empirical quantile function of the set \(\{X_{1},\ldots,X_{n}\}\). Moreover, if ties almost surely do not arise, then this probability is also bounded from above by \(1-\alpha+\frac{1}{n+1}\)._ Note that replacing the (inflated) sample quantile by the true quantile \(q_{1-\alpha}\) of the nonconformity distribution \(P_{A}\) would give the same result. In practice this is how (inductive) conformal prediction is applied, unless suitable modifications, such as on-line training, are utilized [37]. It is assumed that the sample quantile is a consistent estimator, _i.e._, converges in probability to \(q_{1-\alpha}\). This is, for example, the case when the data is i.i.d. and \(P_{A}\) has unique quantiles. Instead of resampling a calibration set for every new test point, a fixed calibration set is used and the hoped-for consistency is assumed, allowing for violations of the above theorem. Therefore, from here on, the dependence on the calibration set will be left implicit. ### Normalized conformal prediction (NCP) The standard residual measure (10) does not take into account any information about subregions of \(\mathcal{X}\), such as where the model might perform subpar. As a consequence, the resulting prediction intervals are all of the same size, given by twice the critical value \(a_{\mathcal{Y}}^{*}\). As such, this method assumes domain knowledge about the homoskedasticity of the problem. To resolve this issue, knowledge about the data noise can be explicitly incorporated to obtain more realistic and more efficient intervals (meaning that the intervals will be smaller when possible). 
This leads to the idea of normalized conformal prediction [13, 22]: \[A_{\delta}(\mathbf{x},y):=\frac{|\widehat{y}(\mathbf{x})-y|}{\delta(\mathbf{ x})}\,, \tag{16}\] where \(\delta:\mathcal{X}\to\mathbb{R}^{+}\) is called the _difficulty function_ (suggestively given the same notation as the uncertainty measure inducing uncertainty-dependent taxonomies from the previous section). As before, in practice \(\delta\) will often be a characteristic of the heteroskedastic noise such as (an estimate of) the standard deviation. **Example 1** (Mean-variance estimators).: A straightforward example of normalized nonconformity measures occurs in the case of mean-variance estimators [20]. Instead of simply estimating a point-predictor as in Eq. (1), this approach assumes a parametric model for the data-generating process, usually a normal distribution, characterized by a conditional mean \(\widehat{\mu}:\mathcal{X}\to\mathbb{R}\) and conditional standard deviation \(\widehat{\sigma}:\mathcal{X}\to\mathbb{R}^{+}\). These functions are then estimated through maximum likelihood estimation. In this case, the canonical choice of nonconformity score is \[A_{\widehat{\sigma}}(\mathbf{x},y)=\frac{|\widehat{\mu}(\mathbf{x})-y|}{ \widehat{\sigma}(\mathbf{x})}\,. \tag{17}\] For stability issues [13], \(\widehat{\sigma}\) can be replaced by \(\widehat{\sigma}+\epsilon\) for some (small) \(\epsilon\in\mathbb{R}^{+}\). Note that this transformation induces a strong bias when \(\epsilon\) is not carefully tuned, especially when the variance is small compared to \(\epsilon\). Given a mean-variance estimator \((\widehat{\mu},\widehat{\sigma}):\mathcal{X}\to\mathbb{R}\times\mathbb{R}^{+}\), a prediction interval at significance level \(\alpha\in[0,1]\) can be obtained by, for example, assuming that the data-generating distribution is a normal distribution. This gives rise to the following parametric form: \[\Gamma^{\alpha}_{\text{MV}}(\mathbf{x}):=\left[\widehat{\mu}( \mathbf{x})-z^{\alpha}\widehat{\sigma}(\mathbf{x}),\widehat{\mu}(\mathbf{x})+ z^{\alpha}\widehat{\sigma}(\mathbf{x})\right], \tag{18}\] where \(z^{\alpha}\) is the \((1-\alpha/2)\)-quantile of the standard normal distribution \(\mathcal{N}(0,1)\). **Remark 1**.: One could argue that the quantiles of a Student \(t\)-distribution should be used because the standard deviation is merely an estimate. However, since the use of this interval is already based on a strong normality assumption and, moreover, the data sets in practice are quite large, the influence of this additional approximation should be minimal. ### Mondrian conformal predictor (MCP) Another possibility to explicitly incorporate the noise is given by Mondrian conformal prediction [34, 36]. Here, the data set is, just like the instance space, divided into multiple classes using the taxonomy function \(\kappa:\mathcal{X}\times\mathbb{R}\to\mathcal{C}\). To this end, \(\mathcal{C}\) is from here on also assumed to be finite such that the algorithm is numerically feasible in practice. This partitioning also induces a partitioning of the calibration set: \[\mathcal{V}=\bigcup_{c\in\mathcal{C}}\mathcal{V}_{c} \tag{19}\] with \[\mathcal{V}_{c}:=\kappa^{-1}(c)=\left\{(\mathbf{x},y)\in\mathcal{ V}\,\big{|}\,\kappa(\mathbf{x},y)=c\right\}. \tag{20}\] The algorithm proceeds by constructing a conformal predictor for every class \(c\in\mathcal{C}\) with calibration set given by \(\mathcal{V}_{c}\). 
For every new instance \(\mathbf{x}\in\mathcal{X}\) with \(\kappa(\mathbf{x},y)=c\), the critical nonconformity score \(a^{\kappa}_{c}\) is used to construct a prediction interval. Note that every conformal predictor induces an MCP model in a straightforward fashion, where every taxonomy class uses the same nonconformity measure. However, not all MCP models are induced in this way. It is perfectly valid to use a distinct nonconformity measure (or even a different conformal prediction algorithm) for every taxonomy class. Conformal predictors that are not constructed in a Mondrian fashion will be called _non-Mondrian_ in this paper. The Mondrian approach benefits from the theoretical guarantees of Theorem 1 of the (I)CP algorithm in that validity will be guaranteed for every class in \(\mathcal{C}\) individually, as long as the data is exchangeable in every class [37]. **Theorem 2** (Conditional validity).: _Let \(\Gamma^{\alpha}:\mathcal{X}\times(\mathcal{X}\times\mathbb{R})^{\infty}\to[ \mathbb{R}]\) be a Mondrian inductive conformal predictor at significance level \(\alpha\in[0,1]\) for the taxonomy function \(\kappa:\mathcal{X}\times\mathbb{R}\to\mathcal{C}\). If the nonconformity scores are exchangeable for any calibration set \(\mathcal{V}\) and any new observation \((\mathbf{x},y)\), i.e., any ordering of \(A(\mathcal{V})\cup\{A(\mathbf{x},y)\}\) is equally probable, then \(\Gamma^{\alpha}\) is conservatively conditional valid w.r.t. to the taxonomy function \(\kappa\):_ \[\mathrm{Prob}\big{(}Y\in\Gamma^{\alpha}(X,V)\mid\kappa(X,Y)=c\big{)}\geq 1- \alpha\,, \tag{21}\] _for all \(c\in\mathcal{C}\), where the probability is taken over both \((X,Y)\) and \(V\)._ Nonetheless, the calibration set has to be split, something which might become problematic in settings with limited data or a large number of taxonomy classes. **Remark 2** (Terminology).: Note that the approach introduced in this section was initially called _Venn_ conformal prediction by Vovk et al. [37], while Mondrian conformal prediction had a different definition. Due to the specific structure, however, the literature adopted the Mondrian terminology. ## 4 Theoretical results In this section, the conditional validity of non-Mondrian conformal predictors is studied from a theoretical perspective. First, the general case is considered, where no parametric assumptions are made about the nonconformity measures. By making some stricter assertions, an explicit expression, Eq. (25), can be found for the data-generating process such that conditional validity holds w.r.t. any uncertainty-dependent taxonomy function. ### Pivotal quantities In the situation at hand, the hope for non-Mondrian conformal prediction is that the heteroskedasticity in a data set can be treated by choosing a suitable nonconformity measure, such as the normalized one (16), without having to resort to conditional methods such as MCP, since these are more data intensive (and data sparsity is often higher in regions with high heteroskedastic noise). However, there is an important theoretical barrier that limits the usefulness of this pursuit. Namely, if a non-Mondrian model should be able to produce conditionally valid intervals, the critical nonconformity scores should be distributed equally across all classes. The following theorem presents a sufficient condition for conditionally valid conformal predictors. The joint distribution over calibration sets and test instances, which is assumed to exchangeable, will be denoted by \(P\). 
**Theorem 3** (Independence).: _Consider a taxonomy function \(\kappa:\mathcal{X}\times\mathbb{R}\to\mathcal{C}\). If both the distribution of the nonconformity scores and the distribution of the calibration sets are independent of \(\mathcal{C}\), i.e.,_ \[P_{A\mid C}(B\mid c)=P_{A}(B)\qquad\text{and}\qquad P(\mathcal{V}\mid c)=P( \mathcal{V}) \tag{22}\] _for all \(c\in\mathcal{C}\), \(B\subseteq\mathbb{R}\) and \(\mathcal{V}\in(\mathcal{X}\times\mathbb{R})^{n}\), the conformal predictor associated with \(A\) is conditionally valid w.r.t. \(\kappa\) (Definition 1)._ _Proof_ For any taxonomy class \(c\in\mathcal{C}\), the following relation holds: \[\operatorname{Prob}\bigl{(}Y\in\Gamma^{\alpha}(X,V)\,\big{|}\,C=c\bigr{)}=\int_{ (\mathcal{X}\times\mathbb{R})^{n}}\operatorname{Prob}\bigl{(}Y\in\Gamma^{ \alpha}(X,\mathcal{V})\,\big{|}\,C=c\bigr{)}\,\mathrm{d}P(\mathcal{V}\mid C=c )\,.\] By the definition of ICPs, the first factor can be expressed in terms of the distribution of nonconformity scores: \[\operatorname{Prob}\bigl{(}Y\in\Gamma^{\alpha}(X,V)\,\big{|}\,C=c\bigr{)}=\int _{(\mathcal{X}\times\mathbb{R})^{n}}F_{A|C}\bigl{(}a^{*}_{\mathcal{V}}\,\big{|} \,C=c\bigr{)}\,\mathrm{d}P(\mathcal{V}\mid C=c)\,,\] where \(a^{*}_{\mathcal{V}}:=q_{(1-\alpha)\bigl{(}1+\frac{1}{n}\bigr{)}}\bigl{(}A( \mathcal{V})\bigr{)}\). By assumption, neither the distribution of nonconformity scores nor the distribution of calibration sets depends on the taxonomy class \(\kappa(X,Y)\), hence \[\operatorname{Prob}\bigl{(}Y\in\Gamma^{\alpha}(X,V)\,\big{|}\, \kappa_{n+1}=c\bigr{)} =\int_{(\mathcal{X}\times\mathbb{R})^{n}}F_{A}\bigl{(}a^{*}_{ \mathcal{V}}\bigr{)}\,\mathrm{d}P(\mathcal{V})\] \[=\operatorname{Prob}\bigl{(}Y\in\Gamma^{\alpha}(X,V)\bigr{)}\geq 1 -\alpha\,.\] This concludes the proof. \(\qed\) Note that the above theorem implies that mere independence of the nonconformity measure and taxonomy function are not sufficient in general. Exchangeability is too weak for the conclusion to hold. In the case of i.i.d. data, the condition is, however, trivially satisfied. When conditional validity is not just required w.r.t. a fixed taxonomy function, but w.r.t. an entire family of taxonomy functions, a concept from the classical statistical literature becomes relevant [3, 33]. **Definition 2** (Pivotal quantity).: Consider a family of distributions \(\{P_{\theta}\mid\theta\in\Theta\}\), parametrized by a set \(\Theta\). A function \(g\) of observations is called a _pivotal quantity_ (or simply _pivot_) if its distribution does not depend on the particular choice of parameter: \[(\forall\theta,\theta^{\prime}\in\Theta)((X\sim P_{\theta}\wedge X^{\prime} \sim P_{\theta^{\prime}})\Rightarrow g(X)\stackrel{{ d}}{{=}}g(X^{ \prime}))\,, \tag{23}\] where \(\stackrel{{ d}}{{=}}\) denotes equality in distribution, _i.e._, \(g(X)\) and \(g(X^{\prime})\) have the same distribution. The distribution of such a pivotal quantity will be called the _pivotal distribution_ further on. Combined with the Independence Theorem 3, this gives rise to the following result. 
**Corollary 1**.: A conformal predictor associated with a nonconformity measure \(A:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\) is conditionally valid for any _feature-dependent Mondrian taxonomy_, _i.e._, a Mondrian taxonomy that only depends on the feature space \(\mathcal{X}\), if \(A\) is a pivotal quantity for the family of conditional distributions \(\bigl{\{}P_{Y|X}(\cdot\mid\mathbf{x})\,\big{|}\,\mathbf{x}\in\mathcal{X} \bigr{\}}\) and if the data is such that the calibration set \(\mathcal{V}\) does not depend on the taxonomy class of any test point, _i.e._, \(P(\mathcal{V}\mid c)=P(\mathcal{V})\) for all \(c\in\mathcal{C}\) and \(\mathcal{V}\in(\mathcal{X}\times\mathbb{R})^{n}\). ### Normalization Recall the mean-variance estimators in Example 1 with normalized nonconformity measure (17). If the absolute value was not present in that equation, then the nonconformity scores would actually be the (estimated) \(z\)-scores of the data. Assuming the idealized situation where the _oracle_ can be accessed, _i.e._, both the (conditional) mean and variance can be modelled perfectly, such a transformation would lead to the random variable \(A\), the nonconformity score, following a distribution with \[\mathrm{E}[A]=0\qquad\text{and}\qquad\mathrm{Var}[A]=1 \tag{24}\] independent of whether one considers the marginal distribution or conditions on a specific taxonomy class. Using the terminology of Definition 2, this can be rephrased by saying that \(A\) is a pivotal quantity. The next theorem shows that such a situation holds more generally. In the remainder of the text, \(f_{A}\) and \(f_{A|C}\) denote the probability density functions of the distributions \(P_{A}\) and \(P_{A|C}\), respectively. **Theorem 4** (Standardization).: _If the conditional distribution \(P_{Y|X}\) has a density function of the form_ \[f_{Y|X}(y\mid\mathbf{x})=\frac{1}{\sigma(\mathbf{x})}g\left(\frac{y-\mu( \mathbf{x})}{\sigma(\mathbf{x})}\right) \tag{25}\] _for some smooth function \(g:\mathbb{R}\to\mathbb{R}^{+}\), then the probability distribution of the standardized nonconformity measure_ \[A_{\mathrm{st}}(\mathbf{x},y):=\frac{y-\mu(\mathbf{x})}{\sigma(\mathbf{x})} \tag{26}\] _is independent of the classes of any feature-independent Mondrian taxonomy \(\kappa:\mathcal{X}\times\mathcal{Y}\to\mathcal{C}\)._ _Proof_ The joint density of nonconformity scores and taxonomy classes can be rewritten as \[f_{A_{\mathrm{st}},C}(a,c) =\frac{\partial}{\partial a}F_{A_{\mathrm{st}},C}(a,c)\] \[=\frac{\partial}{\partial a}P_{X,Y}\big{(}\{(\mathbf{x},y)\mid A _{\mathrm{st}}(\mathbf{x},y)\in\,]-\infty,a]\wedge\kappa(\mathbf{x})=c\}\big{)}\] \[=\frac{\partial}{\partial a}\int_{\kappa^{-1}(c)}\int_{-\infty}^{ \mu(\mathbf{x})+a\sigma(\mathbf{x})}f_{X,Y}(\mathbf{x},y)\,\mathrm{d}y\, \mathrm{d}\mathbf{x}\] \[=\int_{\kappa^{-1}(c)}f_{X,Y}\big{(}\mathbf{x},\mu(\mathbf{x})+a \sigma(\mathbf{x})\big{)}\sigma(\mathbf{x})\,\mathrm{d}\mathbf{x}\,,\] where in the last step Leibniz's integral rule was applied: \[\frac{\mathrm{d}}{\mathrm{d}x}\int_{l(x)}^{u(x)}f(x,y)\,\mathrm{d}y=f\big{(}x,u(x)\big{)}\,\frac{\mathrm{d}u}{\mathrm{d}x}-f\big{(}x,l(x)\big{)}\,\frac{ \mathrm{d}l}{\mathrm{d}x}+\int_{l(x)}^{u(x)}\frac{\partial f}{\partial x}(x,y )\,\mathrm{d}y\,.\] Finally, the joint density function \(f_{X,Y}\) is factorized as \[f_{X,Y}(\mathbf{x},y)=f_{Y|X}(y\mid\mathbf{x})f_{X}(\mathbf{x})\] to obtain: \[f_{A_{\text{st}},C}(a,c)=\int_{\kappa^{-1}(c)}\underbrace{\sigma(\mathbf{x})f_{Y 
|X}\big{(}\mu(\mathbf{x})+a\sigma(\mathbf{x})\,\big{|}\,\mathbf{x}\big{)}}_{=f_{ A_{\text{st}}|X}(a|\mathbf{x})}f_{X}(\mathbf{x})\,\mathrm{d}\mathbf{x}.\] If the first factor is (functionally) independent of \(\mathbf{x}\), hence, of \(\mu\) and \(\sigma\), it can be moved out of the integral: \[f_{A_{\text{st}},C}(a,c)=f_{A_{\text{st}}}(a)\int_{\kappa^{-1}(c)}f_{X}( \mathbf{x})\,\mathrm{d}\mathbf{x}=f_{A_{\text{st}}}(a)P_{C}(c).\] To see when this holds, define a three-parameter function \(\widetilde{f}\) as a generalization of \(f_{Y|X}\) as follows: \[\widetilde{f}\big{(}a,\mu(\mathbf{x}),\sigma(\mathbf{x})\big{)}:=f_{Y|X} \big{(}\mu(\mathbf{x})+a\sigma(\mathbf{x})\,\big{|}\,\mathbf{x}\big{)}\,.\] If after standardization the density should only depend on \(y\), the following system of partial differential equations should hold: \[\begin{cases}\partial_{\mu}\big{(}\sigma\widetilde{f}(a,\mu,\sigma)\big{)}=0 \,,\\ \partial_{\sigma}\big{(}\sigma\widetilde{f}(a,\mu,\sigma)\big{)}=0\,.\end{cases}\] The first partial differential equation immediately yields that \(\widetilde{f}\) is independent of \(\mu\). Analogously, the second equation says that \(\sigma\widetilde{f}\) is independent of \(\sigma\) or, equivalently, that \[\widetilde{f}(a,\mu,\sigma)=\frac{g(a)}{\sigma}\] for an arbitrary function \(g:\mathbb{R}\to\mathbb{R}^{+}\) (requiring that \(\widetilde{f}\) gives a density function imposes further conditions on \(g\)). Transforming back to the original function gives \[f_{Y|X}(y\mid\mathbf{x})=\frac{1}{\sigma(\mathbf{x})}g\left(\frac{y-\mu( \mathbf{x})}{\sigma(\mathbf{x})}\right)\,,\] which concludes the proof. In view of Corrollary 1, the standardized variable \(A_{\text{st}}(\mathbf{x},y)\) is a pivotal quantity for variables coming from the \(\mathcal{X}\)-parametrized family of distributions (25) and the function \(g:\mathbb{R}\to\mathbb{R}^{+}\) is exactly its pivotal distribution. Although the standardized measure (26) is not exactly the same as the normalized measure (17), it is also of interest on its own. It is for example used in the construction of _conformal predictive systems_[38], where instead of calibrating at a single quantile, the whole predictive distribution is modelled. The results in this paper carry over to that setting accordingly. Notwithstanding that requiring invariance under standardization seems weaker than what is required to handle the normalized nonconformity measure (17), the following remark shows that this difference is actually irrelevant. **Remark 3**.: Distributions with a density function of the form (25) also lead to a pivotal distribution for the \(\sigma\)-normalized residual measure (17). More generally, if a nonconformity measure \(A\) is pivotal, any nonconformity measure obtained by (post)composing it with a feature-independent function will also be pivotal. **Remark 4**.: Note that, instead of making a detour via the preceding theorem, the proof of Theorem 4 could have been generalized to work directly with the normalized residual measure (17): in the step before applying Leibniz's integral rule, the lower integration bound for \(y\) given by \(-\infty\), would then become \(\mu(\mathbf{x})-a\sigma(\mathbf{x})\) for the normalized nonconformity measure. The integral rule would then lead to an additional term \(\sigma(\mathbf{x})f\big{(}\mathbf{x},\mu(\mathbf{x})-a\sigma(\mathbf{x})\big{)}\). However, Remark 3 is more generally applicable for any nonconformity measure. 
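The practical content of Theorem 4 can be checked with a small simulation; the data-generating process below (a Gaussian location-scale model with oracle access to \(\mu\) and \(\sigma\)) and all constants are illustrative choices, not the setup used in the experiments of Section 5.

```python
import numpy as np

rng = np.random.default_rng(0)

# Location-scale data of the form (25): y = mu(x) + sigma(x) * eps, eps ~ N(0, 1).
def mu(x):
    return 2.0 * x

def sigma(x):
    return 0.1 + x  # heteroskedastic: the noise level grows with x

def sample(n):
    x = rng.uniform(0.0, 1.0, n)
    return x, mu(x) + sigma(x) * rng.standard_normal(n)

alpha = 0.1
x_cal, y_cal = sample(2_000)
x_test, y_test = sample(20_000)

# Non-Mondrian NCP with oracle normalization, Eq. (17): one global critical score.
scores = np.abs(y_cal - mu(x_cal)) / sigma(x_cal)
level = min(1.0, (1 - alpha) * (1 + 1.0 / len(scores)))
a_star = np.quantile(scores, level, method="higher")
covered = np.abs(y_test - mu(x_test)) / sigma(x_test) <= a_star

# Coverage per uncertainty-dependent taxonomy class (equal-frequency bins on sigma).
classes = np.digitize(sigma(x_test), np.quantile(sigma(x_cal), [1 / 3, 2 / 3]))
print("marginal coverage:", covered.mean())
for c in range(3):
    print("class", c, "coverage:", covered[classes == c].mean())
```

Because the noise enters through a location-scale family, the normalized scores are pivotal and the per-class coverages should all be close to \(1-\alpha\), without any Mondrian splitting of the calibration set.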
**Example 2**.: Examples of distributions, where \(\mu\) and \(\sigma\) represent the conditional mean and standard deviation, are generated by the following common pivotal distributions: 1. Normal distribution: \[g(x)=\frac{1}{\sqrt{2\pi}}\exp(-x^{2}/2)\,.\] (27) 2. Laplace distribution: \[g(x)=\frac{1}{\sqrt{2}}\exp(-\sqrt{2}|x|)\,.\] (28) 3. Uniform distribution: \[g(x)=\frac{1}{2\sqrt{3}}\mathbb{1}_{[-\sqrt{3},\sqrt{3}]}(x)\,,\] (29) where \[\mathbb{1}_{S}(x):=\begin{cases}1&\text{if }x\in S,\\ 0&\text{if }x\not\in S,\end{cases}\] (30) denotes the indicator function of the set \(S\). To give an example of how general the allowed distributions in this theorem are, consider the following asymmetric one (see Figure 2): \[f_{Y|X}(y\mid\mathbf{x})=\frac{2y}{\lambda(\mathbf{x})^{2}}\,\mathbb{1}_{[0,\lambda(\mathbf{x})]}(y) \tag{31}\] for some positive function \(\lambda:\mathcal{X}\to\mathbb{R}^{+}\). Although it is seemingly not of the form (25), it can, with some work, be rewritten as such: \[\frac{2y}{\lambda(\mathbf{x})^{2}}\,\mathbb{1}_{[0,\lambda(\mathbf{x})]}(y)=\frac{1}{\sigma(\mathbf{x})}\left(\frac{1}{9}\left(\frac{y-\mu(\mathbf{x})}{\sigma(\mathbf{x})}\right)+\frac{2\sqrt{2}}{9}\right)\mathbb{1}_{[-2\sqrt{2},\sqrt{2}]}\left(\frac{y-\mu(\mathbf{x})}{\sigma(\mathbf{x})}\right)\,, \tag{32}\] where \[\mu(\mathbf{x}):=\frac{2\lambda(\mathbf{x})}{3}\qquad\text{and}\qquad\sigma(\mathbf{x}):=\frac{\lambda(\mathbf{x})}{3\sqrt{2}}\,. \tag{33}\]

Figure 2: Probability density function of the triangular distribution (31) with (conditional) width parameter \(\lambda(\mathbf{x})=5\).

Essentially, to obtain a pivotal distribution for the standardized (or normalized) nonconformity measure, the conditional distribution \(P_{Y|X}\) should be obtained as a member of the location-scale family with parameters \(\mu(\mathbf{x})\) and \(\sigma(\mathbf{x})\) induced by the distribution \(g\). As a consequence, requiring invariance under standardization also allows for conditional distributions such as exponential distributions. Exponential distributions in general only form a scale family and not a location family. However, because \(\mu=\sigma\) for exponential distributions, the conditional version does form a location-scale family generated by \(\mu\) and \(\sigma\). Accordingly, it gives rise to the following pivotal distribution for standardized variables: \[g(x)=\exp(-x-1)\theta(x+1)\,, \tag{34}\] where \(\theta:\mathbb{R}\to\{0,1\}\) denotes the Heaviside step function: \[\theta(x):=\mathbb{1}_{[0,+\infty[}(x)\,. \tag{35}\] Note that the functions \(\mu:\mathcal{X}\to\mathbb{R}\) and \(\sigma:\mathcal{X}\to\mathbb{R}^{+}\) in the preceding theorems in general do not have to be the conditional mean and standard deviation. Moreover, it is not even strictly necessary that the parameters \(\mu\) and \(\sigma\) are estimated perfectly. For example, from the form of Eq. (25) it can immediately be seen that the estimators \(\widehat{\mu}\) and \(\widehat{\sigma}\) only need to satisfy \[\widehat{\sigma}=\lambda\sigma\,, \tag{36}\] \[\widehat{\mu}-\mu=\kappa\widehat{\sigma}\,, \tag{37}\] for some \(\lambda\in\mathbb{R}^{+}\) and \(\kappa\in\mathbb{R}\). (These relaxations also follow from Remark 3.)
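The exponential case in Eq. (34) is easy to check numerically. In the sketch below (illustrative only; the conditional rate and sample size are arbitrary choices), conditionally exponential responses are standardized with \(\mu(\mathbf{x})=\sigma(\mathbf{x})=1/\text{rate}(\mathbf{x})\), and the empirical bin probabilities of the scores are compared with those of \(g(a)=\exp(-a-1)\) on \([-1,+\infty)\), separately for two halves of the feature space.

```python
# Minimal sketch (illustrative): standardized scores of conditionally
# exponential data follow the x-independent pivotal density (34).
import numpy as np

rng = np.random.default_rng(1)
n = 400_000

x = rng.uniform(0.0, 1.0, size=n)
rate = 0.5 + 3.0 * x                   # conditional rate (arbitrary choice)
mu = 1.0 / rate                        # for Exp(rate): mean = std = 1/rate
y = rng.exponential(scale=mu)          # y | x ~ Exp(rate(x))
a_st = (y - mu) / mu                   # standardized score (26), supported on [-1, inf)

edges = np.linspace(-1.0, 3.0, 9)
exact = np.exp(-(edges[:-1] + 1.0)) - np.exp(-(edges[1:] + 1.0))  # bin probs of g
print("bin probabilities of g  :", np.round(exact, 3))
for name, mask in (("x < 0.5 ", x < 0.5), ("x >= 0.5", x >= 0.5)):
    counts, _ = np.histogram(a_st[mask], bins=edges)
    print(f"empirical, {name}     :", np.round(counts / mask.sum(), 3))
```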
**Example 3** (Additive noise).: The above theorem also shows that any data set having a conditional generating process of the form [11, 12, 17] \[y=\mu(\mathbf{x})+\sigma(\mathbf{x})\varepsilon\,, \tag{38}\] where \(\varepsilon\) is sampled from some fixed distribution, will lead to a conditionally valid NCP algorithm. In other words, NCP models will be conditionally valid for any data set obtained by adding noise to a fixed trend. It is important to remark that the distribution of \(\varepsilon\) is unconstrained and does not have to be of the above functional form. To round off this section, it can be useful for future work to note that the method of proof of Theorem 4 can be generalized to other nonconformity measures. **Theorem 5**.: _Assume that \(P_{Y|X}\) admits a density function \(f_{X|Y}\), smoothly parameterized by the parameters \(\theta\equiv\theta(\mathbf{x})\), and assume that the nonconformity measure \(A:\mathcal{X}\times\mathbb{R}\to\mathbb{R}\) only depends on \(\mathcal{X}\) through \(\theta\). If, for a fixed \(\mathbf{x}\in\mathcal{X}\), the transformation \(y\mapsto A(\mathbf{x},y)\) is increasing, then \(A\) is pivotal if the following condition is satisfied:_ \[f_{Y|X}\big{(}g(a,\theta);\theta\mid\mathbf{x}\big{)}\nabla_{\theta}\frac{ \partial g(a,\theta)}{\partial a}+g(a,\theta)\nabla_{\theta}\frac{\partial f_ {Y|X}\big{(}g(a,\theta);\theta\mid\mathbf{x}\big{)}}{\partial a}=0\,, \tag{39}\] _where \(g\big{(}\cdot,\theta\big{)}:\mathbb{R}\to\mathbb{R}\) is defined by the equation_ \[A\Big{(}g\big{(}a,\theta\big{)},\mathbf{x}\Big{)}=a\,, \tag{40}\] i.e._, it is the inverse of \(A\) for fixed \(\mathbf{x}\in\mathcal{X}\)._ This differential equation could be solved, for example, in the case of the interval nonconformity measure in (11). However, this would be less sensible. The normalized nonconformity measure does not take into account the significance level at which the prediction intervals are going to be constructed. It simply gives a statistic of the predictive distribution. However, any interval predictor does assume a predetermined significance level in some way and, hence, it should only become a pivotal quantity at that given significance level. ## 5 Experiments on synthetic data By performing and analyzing some synthetic experiments, both the results from Section 4 can be validated and a diagnostic tool can be developed to help assess whether a non-Mondrian conformal predictor could be conditionally valid (w.r.t. a given taxonomy function). ### Data types To compare the different methods in a controlled manner, some synthetic data sets are considered. Four different types are of importance and representative of real-world situations: \[\begin{array}{ll}\text{Type 1.}&\text{constant mean: }\operatorname{E}[Y\mid X]= \operatorname{E}[Y],\\ \text{Type 2.}&\text{functional dependence: }\operatorname{Var}[Y\mid X]=\varphi \bigl{(}\operatorname{E}[Y\mid X]\bigr{)}\text{ for some function }\varphi:\mathbb{R}\to\mathbb{R}^{+},\\ \text{Type 3.}&\text{low-dimensional representation: }\operatorname{Var}[Y\mid X]=f(X^{\downarrow}), \text{ where }X^{\downarrow}\text{ denotes the projection of }X\\ &\text{ onto a subspace of }\mathcal{X},\text{ such as the projection onto the first component, and}\\ \text{Type 4.}&\text{mixture models.}\end{array}\] Note that, similar to Type 1 data, one could also consider data sets with constant variance. However, this implies homoskedasticity (at least aleatorically) and, hence, is not considered here. 
To generate these data sets synthetically, the main sampling procedure is as follows: 1. A parametric family of distributions \(P_{Y\mid X}(\,\cdot\,;\mu,\sigma^{2}\mid\cdot)\) is fixed. 2. \(n\in\mathbb{N}\) feature tuples \(\mathbf{x}\) are sampled from a fixed distribution \(P_{X}\), e.g. a uniform distribution over a \(k\)-dimensional (unit) hypercube. 3. Mean and variance functions \(\mu:\mathcal{X}\to\mathbb{R}\) and \(\sigma^{2}:\mathcal{X}\to\mathbb{R}^{+}\) are fixed. 4. For every feature tuple \(\mathbf{x}\in\mathcal{X}\), a response \(y\) is sampled from the distribution \(P_{Y\mid X}\bigl{(}\,\cdot\,\mid\,\mathbf{x};\mu(\mathbf{x}),\sigma^{2}( \mathbf{x})\bigr{)}\). To evaluate the quality of interval predictors \(\Gamma^{\alpha}:\mathcal{X}\to[\mathbb{R}]\), two performance metrics are used: the _empirical coverage_ and the _average size of the prediction regions_[7]. Given a joint distribution \(P_{X,Y}\) on \(\mathcal{X}\times\mathbb{R}\), the coverage is defined as follows: \[\mathcal{C}(\Gamma^{\alpha},P_{X,Y}):=\operatorname{E}\left[ \mathbb{1}_{\Gamma^{\alpha}(X)}(Y)\right]=\operatorname{Prob}\bigl{(}Y\in \Gamma^{\alpha}(X)\bigr{)}\,, \tag{41}\] thereby turning Definition 1 into the condition \(C(\Gamma^{\alpha},P_{X,Y})\geq 1-\alpha\). The average width is defined as \[\mathcal{W}(\Gamma^{\alpha},P_{X,Y}):=\operatorname{E}\bigl{[} \lvert y_{+}(X)-y_{-}(X)\rvert\bigr{]}\,, \tag{42}\] where the functions \(y_{\pm}:\mathcal{X}\to\mathbb{R}\) denote the upper and lower bounds of the prediction intervals produced by \(\Gamma^{\alpha}\). When \(P_{X,Y}\) is the empirical distribution of a data set \(\mathcal{D}\), the notation \(\mathcal{C}(\Gamma^{\alpha},\mathcal{D})\) is also used. Of course, since the focus lies on conditional performance, conditional counterparts can be defined as well: \[\mathcal{C}(\Gamma^{\alpha},P_{X,Y}\mid c):=\operatorname{Prob} \bigl{(}Y\in\Gamma^{\alpha}(X)\,\big{|}\,\kappa(X,Y)=c\bigr{)} \tag{43}\] and \[\mathcal{W}(\Gamma^{\alpha},P_{X,Y}\mid c):=\operatorname{E}\bigl{[} \lvert y_{+}(X)-y_{-}(X)\rvert\,\big{|}\,\kappa(X,Y)=c\bigr{]}\,. \tag{44}\] Note that whereas the (marginal) distribution over \(\mathbb{R}\) is irrelevant for the marginal measures, it plays a role in the conditional definitions, since the taxonomy function can in general also depend on the response variable. ### Deviations from oracle Up to some very specific relaxations in the form of Eqs. (36) and (37) and, more generally, Remark 3, the theorems and methods from the previous section require the parameters to be estimated exactly (or at least consistently in the large data setting). However, this assumption is not a very realistic one in practice. Estimating higher conditional moments, such as the variance, to high precision usually requires state-of-the-art methods and, even more so, a large amount of data, since without strong parametric assumptions multiple samples with nearly identical features are necessary. This requirement is hard to achieve, especially in high-dimensional settings. For this reason it is interesting to see what happens when the estimates deviate from the oracle. In general, two possibilities exist: * **Misspecification**: In general, the nonconformity measure depends on the estimated parameters. Therefore, if these estimators are misspecified, then the transformation \((X,Y)\mapsto A\) might not remove all dependency on \(\mathcal{X}\). 
* **Contamination**: As for the nonconformity measure, the taxonomy function will, in general, depend on the estimated parameters. If these estimators are misspecified, then the taxonomy classes can get mixed up and, even though the distribution of the nonconformity scores might not depend on the true taxonomy, it might depend on the estimated taxonomy. For clarity's sake, a simple example is in order. **Example 4**.: Consider a two-dimensional feature space \(\mathcal{X}=[0,1]^{2}\) equipped with the uniform distribution \(\mathcal{U}^{2}\). As data-generating process, take \[y(\mathbf{x})\sim\mathcal{N}\big{(}x_{1}+x_{2},1+|x_{2}-0.5|\big{)} \tag{45}\] and consider the residual nonconformity measure (10): \[A(\mathbf{x},y)=|y-\widehat{\mu}(\mathbf{x})|\,. \tag{46}\] As taxonomy function, choose the indicator function (30) \[\kappa_{\lambda}(\mathbf{x},y)=\mathbb{1}_{[0,\lambda]}(x_{2})\,, \tag{47}\] where \(\lambda\in\mathbb{R}\). For \(\lambda=0.5\), it is not hard to see that the conformal predictor associated with \(A\) will be conditionally valid, even though \(A\) itself is not pivotal for the given data-generating process. Analyzing the effect of misspecification and contamination is now also quite straightforward. If the location \(\widehat{\mu}\) is misspecified (in such a way that the residuals have a mean that depends on \(x_{2}\)), then the distribution of the nonconformity scores will also depend on the taxonomy, no matter how perfectly the parameter \(\lambda\) is fine-tuned. On the other hand, as soon as \(\lambda\) deviates from \(0.5\), e.g. when it would be estimated based on a data sample, the distribution of nonconformity scores would no longer be independent of the classes, even when \(\widehat{\mu}\) is modelled perfectly. For the purpose of this paper, however, where both the taxonomy function and the nonconformity measure depend explicitly on the same estimate of the data noise, see Eq. (6), misspecification and contamination go hand in hand. In Fig. 3, the coverage results for three of the four types of synthetic data are shown (Types 1-3) for different kinds of misspecification. For each of these figures, the data-generating distribution \(P_{Y|X}\) has the general form (25) of Theorem 4. The misspecification is simulated by adding random noise to the values of the mean and variance. The first (green) column, indicated by the label 'Oracle', shows the empirical coverage when the true mean and standard deviation are used, for the three types of nonconformity measure introduced in Section 3: the residual (10), interval (11) and \(\sigma\)-normalized measures (17). The columns (orange, blue and pink) indicated by the label '\(\sigma\)-shifted (\(\lambda\))' show the empirical coverage for increasing values of \(\sigma\)-noise (these estimates are clipped to \(\mathbb{R}^{+}\) to enforce positivity of the standard deviation): \[\widehat{\sigma}(\mathbf{x})=\sigma(\mathbf{x})+\epsilon\qquad\text{and}\qquad\epsilon\sim\mathcal{N}\left(0,\lambda^{2}\right)\,. \tag{48}\] This simulates the behaviour of models that are not able to estimate the variance consistently. It is clear that for larger values of the noise, the (non-Mondrian) conformal predictor using the normalized conformal measure also stops being conditionally valid. The light green column, indicated by the label '\(\sigma\)-scaled', shows the coverage when the variance is scaled by a fixed value (5 in this case). As expected from Remark 3 and, in particular, Eq.
(36), this does not change anything in terms of the conditional validity of the normalized (and, trivially, residual) conformal predictors. However, it does break the conditional validity of the conformal predictor using the interval nonconformity measure. This observation is also to be expected. For the mean-variance estimators from Example 1 with a normality assumption, the intervals are of the form (18). It follows that the interval measure (11) in this case can be rewritten as follows: \[\max\left(\widehat{y}_{-}(\mathbf{x})-y,y-\widehat{y}_{+}(\mathbf{ x})\right) =\max\left(\widehat{\mu}(\mathbf{x})-z^{\alpha}\widehat{\sigma}( \mathbf{x})-y,y-\left(\widehat{\mu}(\mathbf{x})+z^{\alpha}\widehat{\sigma}( \mathbf{x})\right)\right)\] \[=\max\left(\widehat{\mu}(\mathbf{x})-y-z^{\alpha}\widehat{\sigma }(\mathbf{x}),y-\widehat{\mu}(\mathbf{x})-z^{\alpha}\widehat{\sigma}( \mathbf{x})\right)\] \[=\max\left(\widehat{\mu}(\mathbf{x})-y,y-\widehat{\mu}(\mathbf{ x})\right)-z^{\alpha}\widehat{\sigma}(\mathbf{x})\] \[=|\widehat{\mu}(\mathbf{x})-y|-z^{\alpha}\widehat{\sigma}( \mathbf{x})\,. \tag{49}\] From this last line, it is clear that scaling the variance is not equivalent to applying a feature-independent transformation and, accordingly, Remark 3 is not applicable. The last two columns (yellow and brown), indicated by the label '\(\mu\)-shifted', show the coverage when the mean is shifted by, respectively, a constant and a value proportional to the standard deviation: \[\widehat{\mu}(\mathbf{x})=\mu(\mathbf{x})+\epsilon\qquad\text{and}\qquad \epsilon\sim\mathcal{N}\left(0,\lambda^{2}\widehat{\sigma}(\mathbf{x})^{2} \right)\,. \tag{50}\] It is immediately clear that whereas the constant shift breaks the conditional validity of all three conformal predictors, a shift proportional to the standard deviation does preserve the conditional validity of the normalized model. This is entirely in line with the above results, in particular Eq. (37). Figure 5 shows a similar plot for a data set of Type 4. In this case the data is sampled from a bimodal mixture: \[y\sim\begin{cases}\mathcal{N}\left(\mu(\mathbf{x})-1,0.01\mu(\mathbf{x})^{2} \right)&\quad\text{if }\mu(\mathbf{x})\leq 2\,,\\ \mathcal{N}\left(\mu(\mathbf{x})+1,0.01\mu(\mathbf{x})^{2}\right)&\quad\text{ if }\mu(\mathbf{x})>2\,.\end{cases} \tag{51}\] Even from the 'Oracle'-column it is already clear that the non-Mondrian conformal predictors for such a distribution are not conditionally valid. When moving away from the oracle, the situation is at best sustained. This observation is expected to hold for all mixture distributions \(P_{Y|X}\). Unless a consistent model can be obtained for every component in the mixture and that each of these components admits the same pivotal distribution, conditional validity will not hold. Figure 3: Conditional coverage at significance level \(\alpha=0.1\) for synthetic data sets of Types 1, 2 and 3. For every type, the data is divided in three folds based on equal-frequency binning of the estimated variance. The coloured columns indicate the type of misspecification (from left to right): oracle, additive noise on the standard deviation (means of 0.01, 0.1 and 1), scaling by factor 5 of the standard deviation and additive noise on the mean (means of 1 and \(\widehat{\sigma}\)). For every model, three nonconformity measures are shown (from left to right): residual, interval and \(\widehat{\sigma}\)-normalized nonconformity measure. 
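The qualitative behaviour of Example 4 can be reproduced with a short simulation. The sketch below (illustrative only; sample sizes are arbitrary and the second argument of the normal in (45) is read as the variance — these are assumptions, not the settings behind Figure 3) follows the sampling procedure of this section: it draws data from (45), calibrates a split conformal predictor with the residual score (46), and reports the empirical conditional coverage (43) for the two classes of the taxonomy (47), both for \(\lambda=0.5\) and for a contaminated threshold, as well as for a location estimate whose error depends on \(x_{2}\).

```python
# Minimal sketch (not the paper's experiment code): split conformal prediction
# with the residual score (46) on the data-generating process (45), evaluated
# conditionally on the taxonomy (47).
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.1

def sample(n):
    x = rng.uniform(0.0, 1.0, size=(n, 2))
    mu = x[:, 0] + x[:, 1]
    sd = np.sqrt(1.0 + np.abs(x[:, 1] - 0.5))   # variance 1 + |x2 - 0.5|, Eq. (45)
    return x, rng.normal(mu, sd), mu

def coverage_by_class(lam, mu_hat=lambda x, mu: mu):
    """Conditional coverage (43) per class of (47); `mu_hat` allows misspecification."""
    x_cal, y_cal, mu_cal = sample(20_000)
    scores = np.abs(y_cal - mu_hat(x_cal, mu_cal))          # residual score (46)
    q = np.quantile(scores, min(1.0, (1 - alpha) * (1 + 1 / len(scores))))
    x_te, y_te, mu_te = sample(200_000)
    covered = np.abs(y_te - mu_hat(x_te, mu_te)) <= q
    cls = x_te[:, 1] <= lam
    return round(covered[cls].mean(), 3), round(covered[~cls].mean(), 3)

print("oracle mean, lambda = 0.5:", coverage_by_class(0.5))   # both classes near 0.9
print("oracle mean, lambda = 0.8:", coverage_by_class(0.8))   # contamination: classes drift apart
print("mean shifted by x2, 0.5  :", coverage_by_class(0.5, lambda x, mu: mu + x[:, 1]))
```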
### Diagnostics The theorem in the preceding section provides a means to get an idea of the conditional behaviour of conformal predictors without actually having to consider a test set. Comparing the distributions of the nonconformity scores over the different strata of the calibration set can give insight into how well the methods will perform and what impact misspecification and contamination might have. As a toy example, a family of normal distributions with constant coefficient of variation, fixed at \(c_{v}=0.1\), is chosen as data-generating process: \[y\sim\mathcal{N}\big{(}\mu(\mathbf{x}),0.01\,\mu(\mathbf{x})^{2}\big{)}\,. \tag{52}\] Figure 4(a) shows a CDF plot of the variances of these distributions in case \(\mu(\mathbf{x}):=\text{mean}(\mathbf{x})\) and \(\mathbf{x}\sim\mathcal{U}^{n}(0,100)\). The colors indicate the taxonomy classes corresponding to equal-frequency binning (6) of the variances. In Figs. 6(a) and 6(c) the CDF plots of the residual (10) and \(\sigma\)-normalized nonconformity scores (17) are shown, respectively. When applying a non-Mondrian conformal predictor with the residual nonconformity measure to 20 random test sets of 1000 instances, sampled from the above distributions (52), the results shown on the first line of Table 1 are obtained. While performing experiments on synthetic data, it was observed that most methods showed quadratic deviations when comparing the true variances to the predicted variances. In the region with high levels of noise, the estimates were approximately correct, but in the regions with low noise levels, the estimates were often much larger (of the same magnitude as for high noise). This effect can be modelled by using the following explicitly misspecified model: \[\widehat{\mu}(\mathbf{x}):=\mu(\mathbf{x})\qquad\text{and}\qquad\widehat{ \sigma}^{2}(\mathbf{x}):=5\big{(}\sigma^{2}(\mathbf{x})-0.5\big{)}^{2}+0.5\,. \tag{53}\] The consequences of misspecification and contamination can be seen in Fig. 4(b) for the variance and Fig. 6 for the nonconformity scores. In the first figure, the estimated variance is shown in function of the true variance. The colors again indicate the taxonomy classes, but this time those determined by the estimated variances. As is clearly visible by comparing Figs. 4(a) and 4(b), the taxonomy classes are completely different from what the true classes would be. The true class with medium variance corresponds to the class with low estimated variance and the classes with low and high true variance have been reshuffled to become 50/50 mixtures of low and high estimated variance. When comparing the CDF plots 6(a) and 6(c) to 6(b) and 6(d), an interesting effect can be seen. Whereas the residual score for the oracle did not give rise to conditional coverage, it does do so for the misspecified model. On the other hand, for the \(\widehat{\sigma}\)-normalized nonconformity score, the effect works in the opposite direction. For the oracle, conditional validity is obtained (for all significance levels), but for the misspecified model, only marginal validity is attained. These results are also reflected in Table 2. 
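The per-class comparison underlying this diagnostic can be scripted directly. The sketch below is illustrative only: the feature is taken one-dimensional (the text uses \(\mu(\mathbf{x}):=\text{mean}(\mathbf{x})\) with multidimensional uniform features), and its range is chosen so that the quadratic distortion in (53) visibly reshuffles the taxonomy classes. It prints the per-class \((1-\alpha)\)-quantiles of the residual and \(\sigma\)-normalized scores, which is the information that the CDF plots expose at significance level \(\alpha\).

```python
# Minimal sketch (illustrative, not the paper's code): per-class calibration
# quantiles of the residual (10) and sigma-normalized (17) scores for the toy
# model (52), under the oracle estimates and the misspecified ones (53).
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.1
n = 200_000

x = rng.uniform(0.0, 10.0, size=n)
mu = x                               # stand-in for mu(x)
sigma = 0.1 * mu                     # constant coefficient of variation, Eq. (52)
y = rng.normal(mu, sigma)

def report(sigma_hat, label):
    res = np.abs(y - mu)             # residual score (10), oracle location
    nor = res / sigma_hat            # sigma-normalized score (17)
    # taxonomy classes: equal-frequency bins of the *estimated* variance
    cls = np.digitize(sigma_hat, np.quantile(sigma_hat, [1 / 3, 2 / 3]))
    for name, s in (("residual", res), ("normalized", nor)):
        q = [np.quantile(s[cls == c], 1 - alpha) for c in range(3)]
        print(f"{label:12s} {name:10s} per-class 90% quantiles: {np.round(q, 3)}"
              f"  marginal: {np.quantile(s, 1 - alpha):.3f}")

report(sigma, "oracle")
report(np.sqrt(5.0 * (sigma**2 - 0.5) ** 2 + 0.5), "misspecified")   # Eq. (53)
```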
\begin{table} \begin{tabular}{l l l l l} \hline \hline & Marginal & Low variance & Medium variance & High variance \\ \hline \(A_{\text{res}}\) & \(0.905\pm 0.008\) & \(0.951\pm 0.010\) & \(0.904\pm 0.017\) & \(0.856\pm 0.018\) \\ \(A_{\sigma}\) & \(0.902\pm 0.008\) & \(0.905\pm 0.018\) & \(0.902\pm 0.015\) & \(0.898\pm 0.017\) \\ \hline \hline \end{tabular} \end{table} Table 1: Interval coverage degrees for different methods across variance classes (at significance level \(\alpha=0.1\)). Mean and standard deviation over 20 samples are provided.

The analysis of the toy model (52) leads to the following diagnostic method.

**Method 1** (CDF plots).: Assuming that the conformal predictors are not overly conservative, meaning that ties do not arise by Theorem 1, and that the calibration set is representative, creating a CDF plot of both the marginal nonconformity scores and the conditional nonconformity scores for all taxonomy classes can give an idea of how well the marginal method will perform conditionally.

\begin{table} \begin{tabular}{c c c c c} \hline \hline & Marginal & Low variance & Medium variance & High variance \\ \hline \(A_{\text{res}}\) & \(0.905\pm 0.008\) & \(0.905\pm 0.017\) & \(0.905\pm 0.014\) & \(0.905\pm 0.012\) \\ \(A_{\sigma}\) & \(0.905\pm 0.008\) & \(0.886\pm 0.017\) & \(0.897\pm 0.014\) & \(0.931\pm 0.008\) \\ \hline \hline \end{tabular} \end{table} Table 2: Interval coverage degrees for different methods across variance classes (at significance level \(\alpha=0.1\)) for the misspecified model (53). Mean and standard deviation over 20 samples are provided.

Figure 4: Characterization of the true (a) and estimated (b) variance in Eq. (52). Comparing the subfigures shows that misspecification strongly mixes the taxonomy classes.

As is immediately clear from Fig. 6(a), at significance level \(\alpha=0.1\), the quantiles for low and high variance are, respectively, smaller and greater than the marginal quantile. This directly translates to over- and undercoverage in these regions, although marginally the model is valid (as expected from Theorem 1). Coincidentally, the CDF for the taxonomy class with medium variance intersects the marginal CDF at around this significance level and this is also visible in the coverage values. For medium variance, the coverage is close to the nominal level.

Figure 5: Conditional coverage at significance level \(\alpha=0.1\) for a synthetic data set of Type 4. The data is divided in three folds based on equal-frequency binning of the estimated variance. The coloured columns indicate the type of misspecification (from left to right): oracle, additive noise for the standard deviation (means of 0.01, 0.1 and 1), scaling by factor 5 of the standard deviation and additive noise for the mean (means of 1 and \(\widehat{\sigma}\)). For every model, three nonconformity measures are shown (from left to right): residual, interval and \(\widehat{\sigma}\)-normalized nonconformity measure.

Figure 6: (a) and (c): CDF plots of the residual and \(\sigma\)-normalized nonconformity scores for the oracle. (b) and (d): CDF plots of the residual and \(\sigma\)-normalized nonconformity scores for the misspecified model (53). The (empirical) distributions are shown marginally (all data) and for the taxonomy classes corresponding to equal-frequency binning of the estimated variance (\(n=3\) classes).

For the normalized conformal predictor with the scores from Fig.
6(c), the coverage degrees are shown on the second line of Table 1 (again for 20 test sets with 1000 instances). The fact that all CDFs approximately coincide at all levels in the figure is reflected in the conditional coverage values in the table. The model is valid for all classes. Another analytical tool can be used in the situation where a single calibration set is used and, accordingly, consistency of the \((1-\alpha)\)-sample quantile is assumed. **Method 2** (Bootstrap analysis).: In cases in which visual inspection of the CDF plots does not give a clear interpretation, a statistical test can be used to determine whether the required quantiles coincide for the different taxonomy classes. To compare the \((1-\alpha)\)-quantiles between two different taxonomy classes, the method from [40] can be used. Choose two taxonomy classes \(c_{1},c_{2}\in\mathcal{C}\) and consider a fixed number of bootstrap samples \(\{\mathcal{V}_{i}^{1}\}_{i=1,\ldots,B}\) and \(\{\mathcal{V}_{i}^{2}\}_{i=1,\ldots,B}\), sampled from the data sets \(\mathcal{V}_{c_{1}}\) and \(\mathcal{V}_{c_{2}}\), respectively. For every \(i\in\{1,\ldots,B\}\), the sample difference \[d_{i}:=\widehat{q}_{(1-\alpha)\left(1+\frac{1}{n}\right)}(\mathcal{V}_{i}^{1}) -\widehat{q}_{(1-\alpha)\left(1+\frac{1}{n}\right)}(\mathcal{V}_{i}^{2}) \tag{54}\] can be calculated using the Harrell-Davis quantile estimator [10]. A(n) (approximate) bootstrap confidence interval, at significance level \(\beta\in[0,1]\), is given by \([d_{(B\beta/2)},d_{(B-B\beta/2)}]\). If 0 is not contained in any of the pairwise confidence intervals for a suitable value of \(\beta\), e.g. \(\beta=0.025\), evidence has been found against the use of a marginal conformal predictor for obtaining conditional validity. ## 6 Experiments on real data In the synthetic experiments of the previous section, it was possible to investigate the impact of misspecification. In practice, however, there is no way to know the exact model parameters and misspecification almost always occurs. For this reason it also useful to see how realistic, data-driven models perform. ### Models Since the focus lies on uncertainty-dependent conditioning, only models that actually provide estimates of the residual variance will be considered. This excludes point predictors such as standard neural networks or random forests. The methods considered in this benchmarking effort are listed below (abbreviations that will be used in the remainder of the text are indicated in between parentheses): * (neural-network) quantile regressors (**QR**) from [16, 28], * quantile regression forests (**QRF**) from [18], * mean-variance estimators (**MV**) from [20], * mean-variance ensembles (**MVE** ) from [15]. Each of these methods comes in different flavours when augmented with conformal prediction. All of them have a baseline performance (no conformal prediction), a marginal CP incarnation giving rise to three models (residual score, interval score and normalized score) and a Mondrian CP incarnation, again giving rise to three models. For the conditional performance this results in seven different options. The aim of this experimental part is to analyse whether the results from the previous section hold, _i.e._, to check when the marginal models, the normalized variant in particular, give the same performance as the conditional (Mondrian) ones. For more information about the models, choices of architecture and further hyperparameter choices, see Appendix A. 
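Before turning to the experimental setup, Method 2 above can be sketched in a few lines. The example below is illustrative (the toy scores are invented and the bootstrap size is arbitrary); it uses the Harrell–Davis quantile estimator available in SciPy to build the bootstrap confidence interval for the difference of per-class calibration quantiles in Eq. (54).

```python
# Sketch of Method 2 (illustrative): bootstrap confidence interval for the
# difference of per-class calibration quantiles, via the Harrell-Davis estimator.
import numpy as np
from scipy.stats.mstats import hdquantiles

def quantile_difference_ci(scores_c1, scores_c2, alpha=0.1, beta=0.025,
                           B=2000, seed=0):
    """Bootstrap CI for the difference (54) of the (1-alpha)(1+1/n) quantiles
    of the nonconformity scores in two taxonomy classes."""
    rng = np.random.default_rng(seed)
    n = min(len(scores_c1), len(scores_c2))
    level = min(1.0, (1 - alpha) * (1 + 1 / n))
    d = np.empty(B)
    for i in range(B):
        b1 = rng.choice(scores_c1, size=len(scores_c1), replace=True)
        b2 = rng.choice(scores_c2, size=len(scores_c2), replace=True)
        d[i] = hdquantiles(b1, prob=[level])[0] - hdquantiles(b2, prob=[level])[0]
    return tuple(np.quantile(d, [beta / 2, 1 - beta / 2]))

# Toy usage: residual-type scores whose spread differs between two classes, so
# the interval should exclude 0 (evidence against a purely marginal predictor).
rng = np.random.default_rng(4)
low = np.abs(rng.normal(0.0, 1.0, size=2000))
high = np.abs(rng.normal(0.0, 3.0, size=2000))
print(quantile_difference_ci(low, high))
```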
All experimental results were obtained by evaluating the models on 10 different train/test-splits. The test set always contained 20% of the data. To obtain a calibration set, the training set was further split in half. The significance level was fixed at \(\alpha=0.1\). ### Real data Most of the data sets were obtained from the UCI repository [8]. Specific references are given in Table 3. This table also shows the number of data points and (used) features and the skewness and (Pearson) kurtosis of the response variable. All data sets were standardized (both features and target variables) before training. For the taxonomy function \(\kappa:\mathcal{X}\times\mathbb{R}\to\mathcal{C}\), equal-frequency binning of the variance estimates with \(n=3\) bins was chosen, as in the synthetic case. Although these data sets all have different characteristics, e.g. dimensionality, sparsity, count data vs. continuous data, etc., they are treated equally. The experimental outcomes could be interpreted in light of these differences, but this would require more insight in the underlying data-generating mechanism. The coverage of the prediction intervals of different models on the data sets from Table 3 are shown in Figs. 7 and 8. (For figures of the PI widths and marginal performance for all data sets, see Appendix B.) It should be immediately clear that the Mondrian conformal prediction methods, indicated by the labels 'PointCP', 'IntCP' and 'NCP', have the most stable and desirable behaviour. For these methods the coverage is centered around the target confidence level of 90% (corresponding to the determined significance level of \(\alpha=0.1\)), whereas for the baseline and marginal conformal prediction methods, the coverage can fluctuate heavily, lying either far above or far below 90%. Of course, this is entirely to be expected, since only the Mondrian approach satisfies the strict conditional coverage guarantees of Theorem 2. Another feature of these figures is that the deviations of the marginal models are not always in the same direction. Comparing the different subplots, it can be seen that whereas some models show undercoverage \begin{table} \begin{tabular}{c c c c c} \hline \hline Name & \# samples & \# features & Skewness / Kurtosis & Source \\ \hline concrete & 1030 & 8 & 0.42 / 2.68 & [42] \\ turbine & 9568 & 4 & 0.31 / 1.95 & [14] \\ puma32H & 8192 & 32 & 0.02 / 3.04 & [6] \\ residential & 372 & 105 & 1.26 / 5.15 & [24, 25] \\ crime2 & 1994 & 123 & 1.52 / 4.83 & [26, 27] \\ star & 2161 & 39 & 0.29 / 2.63 & [1] \\ \hline \hline \end{tabular} \end{table} Table 3: Overview of the data sets. on one data set, they exhibit overcoverage on the other. The same occurs among the different methods on a single data set. If the true variance were known, as in Fig. 4 in the previous section, an in-depth analysis could be made. However, in contrast to estimates of the true response, where one can use metrics such as the MSE or \(R^{2}\) to quantify deviations from the truth, no analogous approaches exist for the residual variance. For higher conditional moments, multiple measurements for the same feature tuple are required to get a good estimate. Any uncertainty-dependent taxonomy function will, therefore, also lead to a wrong decomposition of the instance space. Moreover, there is in general no way to determine where the lack of validity stems from: bad estimates of the prediction intervals or incorrect taxonomy classes (_cf._ the distinction between misspecification and contamination in Section 5). 
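For reference, the Mondrian variants ('PointCP', 'IntCP', 'NCP') differ from their marginal counterparts only in that the calibration scores are split by taxonomy class and a separate quantile is used per class. The sketch below (illustrative only; the heteroskedastic "estimator" is a stand-in for a fitted mean-variance model, and all numbers are invented) shows this class-conditional calibration for the \(\sigma\)-normalized score with the \(n=3\) equal-frequency variance taxonomy used throughout.

```python
# Minimal sketch (illustrative) of Mondrian split conformal prediction with the
# sigma-normalized score and an equal-frequency variance taxonomy (3 bins).
import numpy as np

def fit_mondrian(mu_cal, sig_cal, y_cal, alpha=0.1, n_bins=3):
    """Return bin edges and one calibration quantile per taxonomy class."""
    scores = np.abs(y_cal - mu_cal) / sig_cal              # normalized score (17)
    edges = np.quantile(sig_cal, np.linspace(0, 1, n_bins + 1)[1:-1])
    cls = np.digitize(sig_cal, edges)
    q = np.empty(n_bins)
    for c in range(n_bins):
        s = scores[cls == c]
        q[c] = np.quantile(s, min(1.0, (1 - alpha) * (1 + 1 / len(s))))
    return edges, q

def predict_intervals(mu_test, sig_test, edges, q):
    cls = np.digitize(sig_test, edges)
    half = q[cls] * sig_test
    return mu_test - half, mu_test + half

# Toy usage with an oracle mean-variance "estimator" on heteroskedastic data.
rng = np.random.default_rng(5)
x = rng.uniform(0, 1, size=30_000)
mu, sig = 2 * x, 0.2 + x
y = rng.normal(mu, sig)
cal, te = slice(0, 15_000), slice(15_000, None)
edges, q = fit_mondrian(mu[cal], sig[cal], y[cal])
lo, hi = predict_intervals(mu[te], sig[te], edges, q)
cls_te = np.digitize(sig[te], edges)
for c in range(3):
    m = cls_te == c
    print(f"class {c}: coverage = {np.mean((y[te][m] >= lo[m]) & (y[te][m] <= hi[m])):.3f}")
```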
Figure 7: Conditional coverage at significance level \(\alpha=0.1\) for the concrete, turbine and crime2 data sets. The data is divided in three folds based on equal-frequency binning of the estimated variances. The coloured columns indicate the different estimators (from left to right): quantile regression, quantile regression forest, mean-variance estimator and mean-variance ensemble. For every model, a baseline result and six nonconformity measures are shown (from left to right): residual, interval and \(\widehat{\sigma}\)-normalized nonconformity measures and their Mondrian counterparts. Figure 8: Conditional coverage at significance level \(\alpha=0.1\) for the star, residential and puma32H data sets. The data is divided in three folds based on equal-frequency binning of the estimated variances. The coloured columns indicate the different estimators (from left to right): quantile regression, quantile regression forest, mean-variance estimator and mean-variance ensemble. For every model, a baseline result and six nonconformity measures are shown (from left to right): residual, interval and \(\widehat{\sigma}\)-normalized nonconformity measures and their Mondrian counterparts. ## 7 Discussion This paper was motivated by the surge in interest in conformal prediction and accurate uncertainty quantification over the past few years. Although the benefits and validity of these methods have been illustrated on numerous data sets and in various domains, an important aspect remains less understood: the conditional performance. More specifically, conditioning on estimates of the residual variance, so as to make sure the models do not neglect the often underrepresented regions of high uncertainty, deserves more attention. In this paper, uncertainty-independence was studied in both a general setting, leading to the use of pivotal quantities, and in the case of explicit, parametric nonconformity measures. The latter allows to derive families of probability distributions for which conditional validity will hold whenever the instance space is subdivided based on the data noise, provided it can be expressed in terms of the chosen parameterization.
2309.04501
Weighted refined decoupling estimates and application to Falconer distance set problem
We prove some weighted refined decoupling estimates. As an application, we give an alternative proof of the following result on Falconer's distance set problem by the authors in a companion work: if a compact set $E\subset \mathbb{R}^d$ has Hausdorff dimension larger than $\frac{d}{2}+\frac{1}{4}-\frac{1}{8d+4}$, where $d\geq 4$, then there is a point $x\in E$ such that the pinned distance set $\Delta_x(E)$ has positive Lebesgue measure. Aside from this application, the weighted refined decoupling estimates may be of independent interest.
Xiumin Du, Yumeng Ou, Kevin Ren, Ruixiang Zhang
2023-09-08T03:49:03Z
http://arxiv.org/abs/2309.04501v1
# Weighted refined decoupling estimates and application to Falconer distance set problem ###### Abstract. We prove some weighted refined decoupling estimates. As an application, we give an alternative proof of the following result on Falconer's distance set problem by the authors in a companion work: if a compact set \(E\subset\mathbb{R}^{d}\) has Hausdorff dimension larger than \(\frac{d}{2}+\frac{1}{4}-\frac{1}{8d+4}\), where \(d\geq 4\), then there is a point \(x\in E\) such that the pinned distance set \(\Delta_{x}(E)\) has positive Lebesgue measure. Aside from this application, the weighted refined decoupling estimates may be of independent interest. ## 1. Introduction In this paper, we prove some weighted refined decoupling estimates (see Theorems 1.1 and 1.2) and discuss their application to Falconer's distance set problem. ### Weighted refined decoupling estimates Here is the setup for refined decoupling estimates. Suppose that \(S\subset\mathbb{R}^{d}\) is a compact and strictly convex \(C^{2}\) hypersurface with Gaussian curvature \(\sim 1\). For any \(\epsilon>0\), suppose there exists \(0<\beta\ll\epsilon\) satisfying the following. Suppose that the \(R^{-1}\)-neighborhood of \(S\) is partitioned into \(R^{-1/2}\times...\times R^{-1/2}\times R^{-1}\) blocks \(\theta\). For each \(\theta\), let \(\mathbb{T}_{\theta}\) be a set of finitely overlapping tubes of dimensions \(R^{1/2+\beta}\times\cdots\times R^{1/2+\beta}\times R\) with long axis perpendicular to \(\theta\), let \(G(\theta)\in\mathbb{S}^{d-1}\) denote this direction, and let \(\mathbb{T}=\cup_{\theta}\mathbb{T}_{\theta}\). Each \(T\in\mathbb{T}\) belongs to \(\mathbb{T}_{\theta}\) for a single \(\theta\), and we let \(\theta(T)\) denote this \(\theta\). We say that \(f\) is microlocalized to \((T,\theta(T))\) if \(f\) is essentially supported in \(2T\) and \(\hat{f}\) is essentially supported in \(2\theta(T)\). Here is our first main result on weighted refined decoupling estimates. **Theorem 1.1**.: _Suppose that \(f=\sum_{T\in\mathbb{W}}f_{T}\), where \(\mathbb{W}\subset\mathbb{T}\) and each \(f_{T}\) is microlocalized to \((T,\theta(T))\). Let \(Y\) be a union of \(R^{1/2}\)-cubes in \(B_{R}^{d}\) each of which intersects at most \(M\) tubes \(T\in\mathbb{W}\). Denote \(p_{d}=\frac{2(d+1)}{d-1}\). Then the following refined decoupling inequalities hold._
2308.07321
The Efficacy of Utility Functions for Multicriteria Hospital Case-Mix Planning
A new approach to perform hospital case-mix planning (CMP) is introduced in this article. Our multi-criteria approach utilises utility functions (UF) to articulate the preferences and standpoint of independent decision makers regarding outputs. The primary aim of this article is to test whether a utility functions method (UFM) based upon the scalarization of aforesaid UF is an appropriate quantitative technique to, i) distribute hospital resources to different operating units, and ii) provide a better capacity allocation and case mix. Our approach is motivated by the need to provide a method able to evaluate the trade-off between different stakeholders and objectives of hospitals. To the best of our knowledge, no such approach has been considered before in the literature. As we will later show, this idea addresses various technical limitations, weaknesses, and flaws in current CMP. The efficacy of the aforesaid approach is tested on a case study of a large tertiary hospital. Currently UF are not used by hospital managers, and real functions are unavailable, hence, 14 rational options are tested. Our exploratory analysis has provided important guidelines for the application of these UF. It indicates that these UF provide a valuable starting point for planners, managers, and executives of hospitals to impose their goals and aspirations. In conclusion, our approach may be better at identifying case mix that users want to treat and seems more capable of modelling the varying importance of different levels of output. Apart from finding desirable case mixes to consider, the approach can provide important insights via a sensitivity analysis of the parameters of each UF.
Robert L Burdett, Paul Corry, Prasad Yarlagadda, David Cook, Sean Birgan
2023-07-31T22:45:38Z
http://arxiv.org/abs/2308.07321v1
# The Efficacy of Utility Functions for Multicriteria Hospital Case-Mix Planning ###### Abstract A new approach to perform hospital case-mix planning (CMP) is introduced in this article. Our multi-criteria approach utilises utility functions (UF) to articulate the preferences and standpoint of independent decision makers regarding outputs. The primary aim of this article is to test whether a utility functions method (UFM) based upon the scalarization of aforesaid UF is an appropriate quantitative technique to, i) distribute hospital resources to different operating units, and ii) provide a better capacity allocation and case mix. Our approach is motivated by the need to provide a method able to evaluate the trade-off between different stakeholders and objectives of hospitals. To the best of our knowledge, no such approach has been considered before in the literature. As we will later show, this idea addresses various technical limitations, weaknesses, and flaws in current CMP. The efficacy of the aforesaid approach is tested on a case study of a large tertiary hospital. Currently UF are not used by hospital managers, and real functions are unavailable, hence, 14 rational options are tested. Our exploratory analysis has provided important guidelines for the application of these UF. It indicates that these UF provide a valuable starting point for planners, managers, and executives of hospitals to impose their goals and aspirations. In conclusion, our approach may be better at identifying case mix that users want to treat and seems more capable of modelling the varying importance of different levels of output. Apart from finding desirable case mixes to consider, the approach can provide important insights via a sensitivity analysis of the parameters of each UF. hospital capacity analysis, hospital case-mix planning, utility functions, multicriteria, achievement scalarising function, OR in health services ## 1 Introduction In this article case mix planning (CMP) in hospitals is focused upon. This strategic planning problem is important and sits at the top of a hierarchy of tactical and operational decision problems like operating room planning and patient scheduling (Leeftink and Hans, 2018; Burdett et al, 2018; Aringheiri et al, 2022). The purpose of CMP is ultimately to support health care managers improve resource management in hospitals (Hof et al., 2017; Ma and Demeulmeester, 2013). The goal is to identify a patient caseload (a.k.a. cohort) to treat with a specific set of features deemed desirable or ideal (Andrews et al., 2022). This choice has significant economic consequences, and greatly affects the operation of a hospital, and the size of patient waiting lists. In the literature, the usual metrics for desirability are number of patients treated, the total revenue obtained, or the total costs incurred. When performing CMP, it is first necessary to partition patients into a set of homogenous groups each with common characteristics (Landa et al., 2018). Each group may refer to a particular operating unit, medical or surgical speciality, a particular patient type, or patients with a particular condition and/or illness. Depending upon the agenda of stakeholders, CMP can be modelled at different levels of detail. In most papers, hospital operations are modelled at a macroscopic level, and scheduling policies and other operational considerations are not included. This permits CMP to be performed over longer time horizons. 
Some papers, however, consider greater levels of detail and provide microscopic planning and scheduling models. Those models, however, are time consuming, if not intractable, to solve and rely upon a discretisation of the time horizon. If the time horizon is long and/or the number of patients is large, then optimal solutions may not be guaranteed. In past research, a variety of approaches have been applied to CMP, including goal programming (Blake and Carter 2002), mixed integer programming (Burdett et al. 2017), multicriteria optimisation (Malik et al. 2015, Burdett and Kozan 2016, Zhou et al. 2018, Chalgham et al. 2019), stochastic programming (Neyshabouri and Berg 2017, Freeman et al. 2018, McRae and Brunner 2019, Burdett et al. 2023c), and discrete event simulation (Oliveira et al. 2020). In addition, Leeftink and Hans (2018) have proposed a case mix classification scheme. Burdett et al. (2023b) have considered the needs of end users and developed a personal decision support tool. Table 1 below summarises the state of the art presently, and the main features that have been included to date. Case mixes are an important concept in CMP. A case mix is the specific blend (a.k.a., mixture) of patients (i.e., to be imposed) within a cohort. Without intervention and planning, a hospital's case mix is dictated by the training, skills and interests of staff, the referral patterns of patients, the productivity of the hospital, and prevalence of disease within the catchment areas (Blake et al. 2002). The case mix is hierarchical. A further division of the patients within a particular group into sub-groups or sub-types is routine. As such, a case mix must be defined for each group to describe the relative number of patients of each of its subtypes. The case mix is frequently an input to CMP (Burdett et al. 2017), defined upfront by decision makers and planners of hospitals. It is used as a mechanism to preference specific groups of patients over a planning horizon and a mechanism to regulate the competition for resources. There is, however, no universal definition, applicable to all situations. Each case mix definition produces a different caseload and a different profile of resource usage (Burdett et al. 2023b). In the literature, case mix is often viewed as the relative number of patients of each group or type that is treated. This means that for each group \(g\in G\), there is a given proportion \(\mu_{g}\in[0,1]\) such that \(\sum_{g}\mu_{g}=1\). Hence, we impose that the number of patients of type \(g\) is governed by the equation \(n_{g}=\mu_{g}N\) where \(N=\sum_{g}n_{g}\). The main drawback of this approach is that if one group of patients is bottlenecked, then all the other groups of patients are too. Consequently, without altering the case mix designated by the user, it is not possible to use the latent capacity in the system to treat other groups of patients. In the language and terminology of multicriteria analysis, caseloads of this nature are called "dominated", as other caseloads exist (i.e., non-dominated) which permit the latent capacity to be used (Burdett et al. 2016). Anecdotally we have observed that hospitals do not always view the case mix as described above. They often view case mix as a relative measure of the theatre time allocated to each surgical patient group or type. For instance, \(n_{g}t_{g}=\mu_{g}T\) where \(t_{g}\) is the average theatre time for group \(g\) and \(T\) is the total theatre time available.
The case mix, however, can be defined relative to any hospital resource type. CMP with an output focused objective is an inherently multicriteria decision problem with many-objectives. This is because each group of patients has conflicting interests, and shares resources (i.e., like operating theatres and in-patient beds) with other groups. Only on rare occasions is that not so. Without a formal mechanism such as a case mix, it is necessary to find an acceptable trade-off another way, for instance by obtaining and analysing the Pareto frontier of alternatively optimal (i.e., non-dominated) solutions. Each non-dominated solution describes a completely different trade-off and divides the time availability (a.k.a., capacity) of each hospital resource, amongst the different groups of patients in a unique way. The drawback of such an approach is evident. When there are many patient types, the resulting multicriteria decision problem has a high number of dimensions, i.e., one for group of patients. As shown in Burdett and Kozan (2016), the number of Pareto optimal solutions that can be identified is excessive, and techniques like the epsilon-constraint method are inadequate. _Research Agenda._ There are many hospital stakeholders and hospital objectives, and it is important to provide methods to evaluate the trade-off between them. An important aspiration of most hospitals is to treat as many patients as possible of each type, within a given time horizon. However, between upper and lower base-levels of achievement, outputs are selectable and negotiable. To facilitate the best CMP, we ought to define a utility (a.k.a., achievement) function, that more clearly articulates our preferences and standpoint and those of decision makers (DM), regarding outputs. To the best of our knowledge, no such approach has been considered before in the CMP literature. As we will later show, this idea addresses various technical limitations, weaknesses, and flaws in current CMP. The following research questions are posed: 1. Conceptually how useful are utility functions for CMP activities? 2. Can the use of utility functions negate the need to apply traditional multicriteria analysis and optimization techniques? 3. Are utility functions conceptually a better/worse approach than designating a case mix of some form and imposing case mix constraints? 4. How are utility functions defined, altered, and renegotiated in an iterative CMP approach? Is there a more rigorous approach or set of guidelines to do so? 5. How do the results of CMP change when different utility functions are applied? What is the exact difference between the UF types? The rest of the paper is organized as follows. In Section 2 the current state of the art is examined, and important background methodological information is provided. In Section 3 the details of the quantitative framework are provided. In Section 4, a case study of real-world size is presented. Last, the conclusions, managerial insights and future research directions are detailed. This article has numerous acronyms, and a summary can be found in Appendix A. In Appendix B and C technical details are provided. Important results are summarised in Appendix D. ## 2 Methodological Background As a foundation for later developments a review of utility functions and salient multicriteria optimization techniques is provided in this section. 
**Multicriteria Analysis and Optimization (MCO).** Multicriteria analysis is an iterative process supporting the user in the exploration of a Pareto set consisting of non-dominated solutions. It aims at finding subsets of solutions with desired properties (Makowski 2009). MCO is the solution of a mathematical programming model with two or more objectives. Numerous methods have been developed for MCO and there are two main strategies. The first strategy involves the application of multi-objective programming methods to first find efficient solutions for the DM to choose from. This is known as a Pareto frontier (PF). In the second strategy, an auxiliary parametric single-objective model is posed, whose solution provides a single Pareto-optimal point (Granat and Makowski, 2000). In the second strategy utility functions are predominantly applied. Eliciting preference information from the DM is first necessary to construct a utility function which is subsequently optimized. The preferences of DMs may then change as they learn more of the decision situation (Stewart (1996)). Ehrgott et al. (2009) is noteworthy for comparing both strategies for portfolio optimization with multiple objectives. They generated efficient solutions upfront for the investor and applied utility functions to optimize a single objective mathematical programming model. **Utility Functions.** A Utility Function (UF) is a relative measure of the desirability (i.e., global utility) of different alternatives. They are often used to measure preferences concerning goods and services. It has been said that every decision maker (DM) tries to optimize, consciously or unconsciously, a utility or payoff function aggregating all their points of view (Wierzbicki 1977). Utility Function Methods (UFM) and Value Function Methods are techniques that apply utility functions to MCO problems. Assuming \(m\) objectives and \(n\) decision variables, the objective is to maximize \(U\big{(}f(x)\big{)}\), such that \(x\in X\), i.e., to choose \(x_{opt}=arg\ \max_{x\in X}U\big{(}f(x)\big{)}\). Here, \(x\) is a decision vector, \(X\) is the decision space, \(U\) is a utility function that maps \(\mathbb{R}^{m}\rightarrow\mathbb{R}^{1}\), and \(f\) is a function to evaluate the different objectives, that maps \(\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\). It is worth noting that \(U\big{(}f(x_{a})\big{)}>U\big{(}f(x_{b})\big{)}\) implies solution \(a\) is preferred to solution \(b\). The UF must be strongly decreasing; this means the preference must increase if one objective is decreased, and all others are kept the same. Both additive and multiplicative models have been posed in the literature as shown in equation (1). In that equation \(f_{t}\) computes the ith objective value and \(u_{i}\) is the utility function for objective \(i\) that maps \(z_{i}\) to a particular achievement level. There are various assumptions related to the application of these, the foremost being mutual preferential independence (Keeney, 1971). \[U\big{(}f(x)\big{)}=\prod_{i}u_{i}(z_{i})\big{)}\ \text{or}\ U\big{(}f(x) \big{)}=\sum_{i}u_{i}(z_{i})\ \text{where}\ z_{i}=f_{i}(x) \tag{1}\] To obtain utility functions \(u_{i}\) for real world MCO problems, numerous approaches have been proposed. In recent years, interactive learning procedures have become trendy. Dewancker et al. (2016) proposed a generative "multiplicative" model and a machine learning approach. Shavarani et al. (2021) applied an interactive multi-objective evolutionary algorithm and use a proven sigmoidal UF. 
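To illustrate the additive model in Eq. (1), the short example below ranks three candidate output vectors with one piecewise-linear utility per patient group. It is a sketch only: the break points, group outputs and the choice of a simple two-segment utility shape are invented for illustration and are not the functions used later in the case study.

```python
# Minimal sketch (illustrative): ranking candidate output vectors with an
# additive utility model, U(f(x)) = sum_i u_i(z_i), as in Eq. (1).
import numpy as np

def piecewise_linear_utility(reservation, aspiration):
    """u_i(z): 0 at/below the reservation level, 1 at/above the aspiration
    level, and linear in between."""
    def u(z):
        return float(np.clip((z - reservation) / (aspiration - reservation), 0.0, 1.0))
    return u

# One utility per patient group; break points are treatments per period.
utilities = [piecewise_linear_utility(100, 300),
             piecewise_linear_utility(50, 120),
             piecewise_linear_utility(400, 900)]

def additive_score(z):
    """Additive aggregation from Eq. (1)."""
    return sum(u(zi) for u, zi in zip(utilities, z))

# Three candidate case-mix outcomes (patients treated per group).
candidates = {"A": (280, 60, 850), "B": (150, 110, 700), "C": (300, 55, 400)}
for name, z in sorted(candidates.items(), key=lambda kv: additive_score(kv[1]),
                      reverse=True):
    print(name, round(additive_score(z), 3), z)
```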
Interactive methods are designed to explore interesting parts of the Pareto Frontier. They comment that simple and/or unrealistic utility functions are most often applied in published articles to simulate the behaviour of real decision makers (Shavarani et al. 2021). Torkjazi and Fazlollahtabar (2015) considered the application of multiple utility functions per objective in a MCO problem. They applied fuzzy probabilistic programming techniques. Multiple utility functions are deemed necessary because there is imprecision, uncertainty, and ambiguity in those functions. It is also noted that utility functions may be defined for each objective based on different situations and different environments. Longaray et al (2018) proposed a multicriteria decision analysis to evaluate the performance of activities in the internal logistics process of the supply chain of a teaching hospital. They used the categorical based evaluation technique (MACBETH) to generate value functions. The MACBETH method aggregates performance values in different criteria using an additive value function model. **Goal Attainment (GAM) and Goal Programming Methods (GPM).** These methods have been applied to multicriteria decision problems for numerous years (Gur and Eren, 2018). Some new versions have been created recently, however. In Gur and Eren (2018) the number of goals that are reached is maximized. This is called extended goal programming (EGP). In Hezam et al. (2022) a goal programming approach based upon fuzzy logic theory is proposed for evaluating the resources of health organisations. In their healthcare planning problem, they included staffing levels, medical supplies and drugs, staff rostering, and budgets. They applied their model to an oncology centre. Although described as clear and appealing, these methods _are criticized_ by MCO specialists for their non-compliance with the Pareto optimality principle (Ogryczak and Lahoda, 1992). In the literature the models presented in (2)-(5) are most prevalent. They have been previously posed in the literature and can be found summarised in sources like Ogryczak and Lahoda (1992) and Stewart (2005). Multiplicative versions can also be posed by changing the summations to products. 
\[\text{Minimize}\left\{\epsilon_{1}\left(\max_{l}(w_{l}\delta_{l}) \right)+\epsilon_{2}\sum_{i}(w_{l}\delta_{l})\right\}\text{s.t.}\ z_{i}=d_{i}- \delta_{i}\ \text{or}\ z_{i}=d_{i}+\delta_{i}\ \text{and}\ \delta_{i}\geq 0 \tag{2}\] \[\text{Minimize}\sum_{l\in I}w_{i}\left|z_{i}-d_{i}\right|\ \text{or}\sum_{l\in I}w_{l}\left|\frac{z_{i}-d_{i}}{d_{i}}\right|\] (3a) \[\text{Minimize}\sum_{i}(w_{l}^{+}\delta_{l}^{+}+w_{l}^{-}\delta_ {l}^{-})\ \text{s.t.}\ z_{i}-d_{i}=\delta_{i}^{+}-\delta_{i}^{-},\ \delta_{i}^{+}\delta_{i}^{-}=0\ \text{and}\ \delta_{i}^{+},\delta_{i}^{-}\geq 0\ (GPM)\] (3b) \[\text{Minimize}\left\{\max_{l}(w_{l}^{+}\delta_{l}^{+}+w_{l}^{-} \delta_{l}^{-})\right\}\text{or}\left\{\max_{l}(w_{l}^{+}\delta_{l}^{+})+ \max_{l}(w_{l}^{-}\delta_{l}^{-})\right\}\] (3c) \[\text{Minimize}\ \delta\ \text{s.t.}\ z_{i}\geq d_{i}-w_{l} \delta,\ z_{i}\leq d_{i}+w_{l}\delta\ \text{and}\ \delta\geq 0\] (GAM) (4a) \[\text{Minimize}\ \epsilon^{+}\delta^{+}+\epsilon^{-}\delta^{-}\ \text{s.t.}\ z_{i}\leq d_{i}+w_{l}\delta^{+},\ z_{i}\geq d_{i}-w_{l} \delta^{-}\ \text{and}\ \delta^{+},\delta^{-}\geq 0\] (4b) \[\text{Minimize}\ \sum_{l}\max(0,z_{i}-d_{i}-w_{i}\delta)\ \text{ for a given}\ \delta\] (4c) \[\text{Minimize}\ \sum_{i}\max(0,d_{i}-z_{i})\ \text{or}\ \sum_{i}\max(0,z_{i}-d_{i}) \tag{5}\] These models are solved subject to other problem specific decision variables and technical constraints. For objective \(i\), \(d_{i}\) is the goal (a.k.a., aspiration), the under-achievement is defined by \(\delta_{i}^{-}\) and the over -achievement by \(\delta_{i}^{+}\). The term \(\delta\) is used generally for either type of deviation. In (2), under and over-achievements respectively are penalized, but not both. Depending upon how \(\epsilon_{1}\) and \(\epsilon_{2}\) are defined, the aggregate deviation and the extent of the worst over or under-achievement (i.e., the Chebyshev utility function) can be minimized. We could choose \(\epsilon_{1}\leq 1\) and \(\epsilon_{2}\leq 1\), such that \(\epsilon_{1}+\epsilon_{2}=1\). We could also set them independently. The same could be said of \(\epsilon^{+}\) and \(\epsilon^{-}\) in (4b). In (3a), any deviation (scaled or unscaled) is deemed undesirable. This is recognisable as the 1-norm. Option (3b) is a variant of (3a) that weights over and under-achievements differently. This is the traditional _Goal Programming Method_ (GPM). In (3c), the maximum over and under-achievement are minimized. Although not shown, (3a) or (3b) could be aggregated with (3c). In (4a) and (4b), terms \(w_{l}\delta\), \(w_{l}\delta^{+}\) and \(w_{l}\delta^{-}\) introduce an element of slackness into the problem, so that goals do not need to be rigidly met. In (4a), over or under-utilizations respectively are minimized. This is the traditional _Goal Attainment Method_ (GAM). In (4b), both over and under-utilization are considered, but regarded with potentially different importance. In (4c), aggregate over-achievement is minimized. All over-achievements are, however, permitted, in contrast to (4b) which explicitly sets hard limits. Some over-achievements are permitted without penalty, and do not contribute to the score. This is governed by parameter \(\delta\). Objective (5) is a variant of (4c), from Benson (1978). _The biggest issue_ with most of these methods is that deviations are penalised in a static way, when really the penalty should increase as the deviation becomes bigger. In other words, slight under or over-achievements are inconsequential, but larger ones are not. 
These methods meet goals as best possible, but do not consider if any can be exceeded. Hence, solutions are not necessarily Pareto optimal. **Aspiration Reservation Method (ARM).** The ARM is an approach for multi-criteria analysis of decision problems. It has been well exemplified and advocated in the literature, for instance by Wierzbicki (1977), Granat and Makowski (2000, 2006), Makowski (2009). It is arguably a better approach than the GAM and GPM. Simple UF (i.e., with one or two piecewise linear segments) are the backbone of the ARM and are used to articulate more clearly the preferences of decision makers. In the ARM, each objective is given a criterion (a.k.a., component) achievement function (CAF) and multiple objectives are aggregated into one objective, using an achievement scalarizing function (ASF), that maps \(R^{n}\to R^{1}\). Important concepts are the aspiration \(z_{i}^{\text{a}}\) and reservation (a.k.a., reference) points \(z_{i}^{\text{r}}\). The former is a solution composed of the desired values for the corresponding criterion. The latter is a solution composed of acceptable values for the corresponding criteria. The traditional ASF is maximized and defined as follows, \(\mathit{ASF}=\min_{i}\{u_{i}(z_{i},\cdot)\}+\epsilon\sum_{i}u_{i}(z_{i},\cdot)\), where \(\epsilon\) is a small value, \(z_{i}\) is the value of the ith objective, and \(u_{i}(z_{i})\) is a UF / CAF. For the ARM, it is necessary to define functions \(u_{i}(z_{i},z_{i}^{\text{a}},z_{i}^{\text{r}})\) for all \(i\in I\). According to Ogryczak and Lahoda (1992), solutions that satisfy all aspiration levels are preferable to outcomes that do not in the ARM method. Given strict upper and lower limits, it is beneficial to normalize the achievement as follows: \(u_{i}\left(z_{i},\bar{z}_{i},\underline{z}_{i}\right)=\left(z_{i}-\underline{z}_{i}\right)/(\bar{z}_{i}-\underline{z}_{i})\). As described, the ARM _is quite basic_. A more general version can be implemented, without any notion of aspiration and reservation points. General utility functions with more piecewise linear segments or non-linearities may be used. In the next section, that approach is taken.
## 3 Quantitative Framework for CMP
In this section the case-mix planning model is first introduced before a utility function method is proposed.
### The CMP Model
In this article we choose to consider a high-level strategic CMP problem. It is described by the optimization model shown in (6)-(12). The purpose of this model is to identify the number of patients of each type (a.k.a., group) and subtype (a.k.a., sub-group) to treat over time, denoted respectively \(n_{g}^{1}\) and \(n_{g,p}^{2}\), given the current hospital configuration and some basic patient resourcing requirements. These variables are rates of output and do not refer to discrete patients. There is an inherent hierarchy between \(n_{g}^{1}\) and \(n_{g,p}^{2}\), namely \(n_{g}^{1}=\sum_{p\in P_{g}}n_{g,p}^{2}\), where \(P_{g}\) is the set of subtypes within group \(g\). The resourcing requirements for each patient subtype \((g,p)\) are described by a resourcing profile (a.k.a., patient care pathway). This resourcing profile is just a list of activities. For the purposes of this paper, and for a high-level capacity modelling perspective, the sequence of events is irrelevant. The output of the hospital, denoted \(\mathbb{N}\), is restricted by the resources present, their time availability, and their purpose.
As such, it is necessary to identify a resource allocation, describing which resources will be used to treat each patient. The resource allocation is denoted by \(\beta_{a,r}\). This decision variable describes how many patients with activity \(a\) are treated by resource \(r\). The model has various bookkeeping constraints. Constraint (7) defines the inherent relationship between the number of patients \(n_{g,p}^{2}\) and the resource allocation. Resource usage is restricted by the time availability of the resource as shown in constraint (8). A designated case mix is enforced by (9) and (10) if needed. The remainder, namely (11) and (12), enforce positivity. Maximize \[\mathbb{N}=\sum_{g\in G}n_{g}^{1}=\sum_{g\in G}\sum_{p\in P_{g}}n_{g,p}^{2}\] (6) Subject To: \[n_{g,p}^{2}=\sum_{r\in R_{a}}\beta_{a,r}\ \ \forall g\in G,\forall p\in P_{g},\forall a\in A_{g,p}\] (7) \[\sum_{a\in A_{r}}\beta_{a,r}\ t_{a}\leq T_{r}\ \ \forall r\in R\ \text{ where }T_{r}=h_{r}\times\mathbb{T}\] (8) \[n_{g}^{1}\geq\mu_{g}^{1}\,\mathbb{N}\ \ \forall g\in G\] (9) \[n_{g,p}^{2}\geq\mu_{g,p}^{2}\,n_{g}^{1}\ \ \forall g\in G,\forall p\in P_{g}\] (10) \[n_{g}^{1},n_{g,p}^{2}\geq 0\ \ \forall g\in G,\forall p\in P_{g}\] (11) \[\beta_{a,r}\geq 0\ \ \forall a\in A,\forall r\in R_{a}\ \text{and}\ \ \beta_{a,r}=0\ \ \forall a\in A,\forall r\in R\backslash R_{a}\] (12) To fully understand this model, it is also necessary to point out the following:
1. \(\mathbb{T}\) is the period of planning, i.e., the number of weeks considered.
2. \(R\) is the set of resources. We only consider hospital facilities such as operating theatres, wards, and intensive care in this article. Auxiliary resources like staffing could also be integrated.
3. \(A_{g,p}\) is the set of activities for patient subtype \((g,p)\). Hence, \(A_{g}=\bigcup_{p\in P_{g}}A_{g,p}\). In addition, \(A\) is the complete set of activities. As such: \(A=\bigcup_{g\in G}A_{g}\).
4. \(R_{a}\subset R\) is the resourcing profile for activity \(a\), i.e., the set of resources that can be used. This set is defined relative to the type of activity being performed.
5. \(t_{a}\) is the time to perform activity \(a\).
6. \(h_{r}\) is the weekly time availability of resource \(r\). If the resource is a facility like a ward or intensive care unit, then this number must be multiplied by the number of beds present.
7. The patient type mix and sub-mix are denoted \(\mu_{g}^{1}\) and \(\mu_{g,p}^{2}\) respectively, where \(\sum_{g}\mu_{g}^{1}=1\) and \(\sum_{p\in P_{g}}\mu_{g,p}^{2}=1\).
8. Upper bounds designated by \(\bar{n}_{g}^{1}\) are important to compute. The upper bound is determined from the CMP model, assuming the following single patient case mix: \(\mu_{g}^{1}=1;\mu_{g^{\prime}}^{1}=0\ \forall g^{\prime}\in G\backslash\{g\}\).
### Solving the Multicriteria CMP Problem
The multicriteria CMP problem considers the maximization of each patient type simultaneously. In the CMP model, case mix constraint (9) is omitted, and objective function (6) is conceptually replaced with the following: Maximize \(\{n_{1}^{1},n_{2}^{1},...,n_{|G|}^{1}\}\). In this section goal programming methods and utility functions are revisited as a means of navigating the Pareto frontier of the multicriteria CMP problem. These methods permit conversion to a single objective. They can be used to minimize the level of over or under-achievement from imposed goals denoted \(\hat{n}_{g}^{1}\) and to maximize total treatments given by \(\sum_{g\in G}n_{g}^{1}\).
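Before turning to those methods, the following is a small illustrative sketch (not from the source) of how the base model (6)-(8) and (11)-(12) can be assembled for a toy instance with two patient groups, one subtype each, and two resources, using the open-source PuLP modelling library rather than a commercial solver. All group names, activity times and availabilities below are invented for illustration, and the case-mix constraints (9)-(10) are omitted, as in the multicriteria variant.

```python
# Illustrative toy instance of the CMP model (6)-(8), (11)-(12); all data are invented.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

groups     = ["G1", "G2"]
activities = {"G1": ["G1_surgery", "G1_ward"], "G2": ["G2_ward"]}   # A_g, one subtype per group
resources  = {"OT": 40.0 * 52, "WARD": 168.0 * 52}                  # T_r = h_r * T (hours over horizon)
allowed    = {"G1_surgery": ["OT"], "G1_ward": ["WARD"], "G2_ward": ["WARD"]}  # R_a
t          = {"G1_surgery": 2.5, "G1_ward": 60.0, "G2_ward": 120.0}            # t_a, hours per patient

prob = LpProblem("toy_CMP", LpMaximize)
n    = {g: LpVariable(f"n_{g}", lowBound=0) for g in groups}                   # n_g^1 >= 0, (11)
beta = {(a, r): LpVariable(f"beta_{a}_{r}", lowBound=0)
        for a, rs in allowed.items() for r in rs}                              # beta_{a,r} >= 0, (12)

prob += lpSum(n.values())                                                      # objective (6): maximize N
for g in groups:                                                               # constraint (7)
    for a in activities[g]:
        prob += n[g] == lpSum(beta[(a, r)] for r in allowed[a])
for r, T_r in resources.items():                                              # constraint (8)
    prob += lpSum(beta[(a, r)] * t[a] for a in t if r in allowed[a]) <= T_r

prob.solve()
print({g: value(n[g]) for g in groups})
```

The same skeleton can be extended with goal or utility terms by replacing the objective (6) with the expressions introduced in the following subsections.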
The relevant details of each are now discussed:
#### 3.2.1 Goal Attainment
The goal attainment model for CMP is summarised by equations (13)-(16). The domain of the goals \(\hat{n}_{g}^{1}\) is \(\left[0,\bar{n}_{g}^{1}\right]\). In a multi-criterion CMP setting, it is worth setting \(\hat{n}_{g}^{1}=\bar{n}_{g}^{1}\). \[\text{Minimize }\delta\] (13) Subject to: \[\delta\geq 0\] (14) \[n_{g}^{1}\leq\hat{n}_{g}^{1}+w_{g}\delta\ \forall g\in G\ \ \ \text{(i.e., }(n_{g}^{1}-\hat{n}_{g}^{1})/w_{g}\leq\delta)\] (15) \[n_{g}^{1}\geq\hat{n}_{g}^{1}-w_{g}\delta\ \forall g\in G\ \ \ \text{(i.e., }(\hat{n}_{g}^{1}-n_{g}^{1})/w_{g}\leq\delta)\] (16) The permitted deviation is governed by \(\delta\) and the group specific priority \(w_{g}\). Equation (15) restricts over-achievements if \(w_{g}>0\), and equation (16) restricts under-achievements. The sign of parameter \(w_{g}\) is important. In (15), if \(w_{g}<0\ \forall g\in G\), then no goal can be reached. In (16), if \(w_{g}<0\ \forall g\in G\), then every goal must be exceeded. It is worth noting that if all goals are achievable, then \(\delta=0\). Practically, we could set all \(w_{g}=1\), and this would then restrict the output of all groups in the same way. If any \(w_{g}=0\), then achievement is a hard constraint. Hence, smaller values imply less freedom to deviate from goals, and larger values permit the opposite. If \(w_{g}\) are different, then different levels of over-achievement may be permitted for some groups of patients. In other words, some groups of patients would be prioritized. As reported at MathWorks (2022), if \(w_{g}=\hat{n}_{g}^{1}\), then we restrict the relative achievement in the following way: \[\frac{n_{g}^{1}}{\hat{n}_{g}^{1}}\leq(1+\delta)\ \text{and}\ \frac{n_{g}^{1}}{\hat{n}_{g}^{1}}\geq(1-\delta) \tag{17}\]
#### 3.2.2 Goal Programming
The goal programming method summarised by (18) is like the GAM, but a single parameter \(\delta\) is not used to restrict the output of all groups. \[\text{Minimize}\sum_{g\in G}\bigl{(}w_{g}^{+}\delta_{g}^{+}+w_{g}^{-}\delta_{g}^{-}\bigr{)}\ \text{Subject to: }n_{g}^{1}=\hat{n}_{g}^{1}+\delta_{g}^{+}-\delta_{g}^{-},\ \delta_{g}^{+}\delta_{g}^{-}=0\ \text{and}\ \delta_{g}^{+},\delta_{g}^{-}\geq 0\ \ \forall g\in G \tag{18}\] This is essentially a weighted sum approach. The results are highly dependent upon parameters \(w_{g}^{+}\) and \(w_{g}^{-}\). As such, priority will be given to some groups, and not others. It is also reasonable to optimize the relative under and over-achievement, \(\sum_{g\in G}\bigl{(}w_{g}^{+}\hat{\delta}_{g}^{+}+w_{g}^{-}\hat{\delta}_{g}^{-}\bigr{)}\) where \(\hat{\delta}_{g}^{+}=\delta_{g}^{+}/\hat{n}_{g}^{1}\) and \(\hat{\delta}_{g}^{-}=\delta_{g}^{-}/\hat{n}_{g}^{1}\). This normalisation scales the differences and makes comparison between different groups more even-handed. To handle the non-linear constraint \(\delta_{g}^{+}\delta_{g}^{-}=0\), it is necessary to incorporate a binary decision to force \(\delta_{g}^{+}=0\) or \(\delta_{g}^{-}=0\). Let us define \(\lambda_{g}=1\) if \(\delta_{g}^{-}=0\), and zero if \(\delta_{g}^{+}=0\). The following constraints are then required: \[\delta_{g}^{+}\leq\lambda_{g}\bigl{(}\bar{n}_{g}^{1}-\hat{n}_{g}^{1}\bigr{)}\text{ and }\delta_{g}^{-}\leq\bigl{(}1-\lambda_{g}\bigr{)}\hat{n}_{g}^{1}\ \forall g\in G \tag{19}\] \[\lambda_{g}\in\{0,1\}\ \forall g\in G \tag{20}\]
#### 3.2.3 Utility Function Method (UFM)
An approach using utility functions, inspired by the ARM, can be applied. The main assumptions are as follows: 1.
We consider a single attribute, namely patient type, and define the attribute level as the number of patients treated. 2. Each stakeholder represents a particular specialty and describes their level of satisfaction regarding different levels of output. They do not comment about the output of other specialties. Stakeholders who represent more than one specialty, including those that represent all specialties, are not considered. 3. The preference for different levels of patient type \(g\) does not depend on the levels of any other type \(g^{\prime}\). In other words, there is utility independence. Given the above details, the application of objective function (21) with constraints (7)-(12) is appropriate: \[\text{Maximize }ASF=\epsilon_{1}\min_{g\in G}\bigl{\{}w_{g}u_{g}\bigr{\}}+\epsilon_{2}\sum_{g\in G}w_{g}u_{g}\text{ where }u_{g}=\text{PLF}(n_{g}^{1},b_{g},\nabla_{g}) \tag{21}\] In (21), \(b_{g}\) are the breakpoints of the \(g\)th utility function, and \(\nabla_{g}\) are the gradients of the line segments. We may, however, define the utility functions directly. For instance, the simplest options are \(u_{g}=n_{g}^{1}\), \(u_{g}=n_{g}^{1}-\hat{n}_{g}^{1}\) and \(u_{g}=\bigl{(}n_{g}^{1}-\hat{n}_{g}^{1}\bigr{)}/\hat{n}_{g}^{1}\). The first option defines achievement as the weighted raw output, the second as the weighted difference from the aspiration, and the third as the weighted relative difference. A special case of (21) is when \(\epsilon_{2}=0\). It maximizes the utility of the worst performing group.
#### 3.2.4 Generating Pareto Optimal Solutions
Goal programming and goal attainment methods have known limitations in the context of multicriteria optimization. These methods minimize the over and under-achievement from the specified goals, and there is no incentive to do better if the goals are achievable (i.e., the goals describe a dominated solution). In other words, it is possible to obtain dominated solutions. Figure 1 demonstrates, for a basic two group scenario where \(n_{1}\leq\bar{n}_{1}\) and \(n_{2}\leq\bar{n}_{2}\), the possibility of setting goals (i.e., A, B) above and below the implied Pareto frontier, which demarcates in the objective space the boundary between feasibility and infeasibility. Goal B is not achievable, so the GAM and GPM must return the "nearest" feasible solution. That depends upon the weights used in the objective. That solution must be Pareto optimal, because any solution below the frontier would be non-optimal. Goal A is a dominated solution to the problem and would be the reported solution if the GAM or GPM were applied. To find a better solution, either of the models described at (23) and (24) could be applied in a "follow-up stage". \[\text{Maximize }n_{g^{*}}^{1}\text{ s.t. }n_{g}^{1}\geq\hat{n}_{g}^{1}\ \forall g\in G\backslash\{g^{*}\} \tag{23}\] \[\text{Maximize }\Psi^{+}=\sum_{g\in G}\bigl{(}w_{g}^{+}\delta_{g}^{+}\bigr{)}\text{ s.t. }n_{g}^{1}\geq\hat{n}_{g}^{1}\ \forall g\in G\text{ or }\delta_{g}^{-}=0\ \ \forall g\in G \tag{24}\] In Figure 1, the solid black arcs show solutions that could be obtained. The model described at (23) permits a user to preference one of the patient groups, denoted \(g^{*}\). Other patient groups will be kept at their respective goal level (i.e., \(n_{g}^{1}=\hat{n}_{g}^{1}\)) if they share resources with patient group \(g^{*}\). Otherwise, the goals can be exceeded for those patient types as well. In contrast, the model described at (24) does not explicitly preference any patient group.
Instead, it seeks to optimise the overall improvement. Another approach is to solve a variant model that minimizes under-achievement and maximizes over-achievement. For instance: \[\text{Maximize }\epsilon^{+}\Psi^{+}-\epsilon^{-}\Psi^{-} \tag{25}\] Subject to: Constraints (18)-(20), where \(\Psi^{-}=\sum_{g\in G}\bigl{(}w_{g}^{-}\delta_{g}^{-}\bigr{)}\) or \(\Psi^{-}=\max_{g}\bigl{\{}w_{g}^{-}\delta_{g}^{-}\bigr{\}}\) and \(\Psi^{+}=\sum_{g\in G}\bigl{(}w_{g}^{+}\delta_{g}^{+}\bigr{)}\) or \(\Psi^{+}=\min_{g}\bigl{\{}w_{g}^{+}\delta_{g}^{+}\bigr{\}}\). In contrast to (23) and (24), this model may permit under-achievements, so that other greater over-achievements are realised. To avoid that happening, we can set \(\epsilon^{-}\) to be a large value and \(\epsilon^{+}\cong 1\). In Figure 1, the dotted black arcs show possible solutions that could be obtained. In theory this model could supersede the others and be used for both stages described previously.
Figure 1: Goal setting above and below the Pareto frontier
### Utility Functions for CMP
To apply the UFM, it is necessary to define UF for each group of patients \(g\in G\), or other category of interest. In each UF, a level of achievement must be defined for each conceivable number of patients treated. The UF may be defined as specific mathematical functions, otherwise they must be elicited from end users. Elicited UF may describe end users' subjective view of achievement relative to output. The achievement can be viewed in numerous ways. It can represent levels of satisfaction or dissatisfaction (unit = %), profit or loss (unit = $), achievement or non-achievement (unit = real value). The achievement function can be based upon quantitative or qualitative data. The simplest utility functions that may be used for CMP describe increased achievement and merit for increased treatments. A minimum requirement and aspiration can also be defined and incorporated. Any demand or target can be viewed as an aspiration; however, they may also be regarded as strict requirements. The principal options are shown in Figure 2. The x-axis is the output, and the y-axis is the metric of achievement. Most of the UF in Figure 2 have only two or three linear segments. Those with one may be characterized as simple functions, and those with more are called compound functions. It is worth noting that UF1 is a special case of UF2, UF3 and UF4. Similarly, UF2 is a special case of UF4, and UF3 is a special case of UF4. The functions UF4 and UF7 also have a similar shape. Non-linear and non-monotonic variants (i.e., like UF6) are also shown. Convex and concave variants are specifically dotted. For modelling purposes, the nonlinear variants need to be broken up into an arbitrary number of sub-segments. Fig 2(e)-(g) are more sophisticated variants of (a)-(d) that impose negative achievement for not meeting a minimum expectation, and reduced achievement for exceeding aspirations too greatly. Fig 2(h) explicitly shows the well-known s-shaped function, positioned around a reference point. In the literature gains are often perceived as concave, and losses convex. The perception of what constitutes gain and loss, however, is subjective. Last, Fig 2(i)-(k) show discontinuous utility functions with tiers. Fig 2(k) demonstrates how a strict requirement may be described. Regarding these utility functions, two concepts are worth noting. The value of output above which achievement is first acknowledged is called the point of indifference (PTOI) (a.k.a., the intercept).
All values of output below this are deemed zero or negative. Another important value is the minimum output above which no further achievement is regarded. This has been called an _aspiration point_ (ASPT) in past research. There is only a single aspiration for any given group. In UF4 and UF7, it is worth noting the point where utility is halfway. It is referred to as a reference point. **Quantification.** The UF shown in Figure 2 can be described by explicit mathematical functions or as piecewise linear functions with a distinct number of linear segments. The details of appropriate options are summarised in Appendix B and C. In Appendix C, some non-linear variants are described, but this list does not include all possibilities. For the non-linear functions shown, the parameter \(\alpha\) is used to alter the convexity of the curve. The ordered set of breakpoints for each group is defined as \(b_{g}\) and the slopes as \(\nabla_{g}\). Formally, the point of indifference is denoted \(n_{g}^{I}\) and the aspiration as \(n_{g}^{A}\). All non-linear UF need to be converted into piecewise-linear equivalents for the CMP model to be solved using commercial solvers like IBM ILOG CPLEX. The number of points in each UF can vary but a minimum of two is required. To implement the tiered UF or other discontinuous piecewise linear functions, there are a few options. In IBM ILOG CPLEX, discontinuous piecewise linear functions can be accommodated by duplicating breakpoints and defining "jumps". Generally, any discontinuity occurring with points \((x,y)\) and \((x,y^{\prime})\) is represented with two breakpoints \(\{x,x\}\) and one slope, \(\{y^{\prime}-y\}\). In the absence of that functionality, these types of UF can be handled by adding an extra breakpoint before or after the discontinuity (i.e., \(\{x-eps,x\}\) or \(\{x,x+eps\}\)), to create a steep sloped segment in place of the vertical one. Another approach is to incorporate additional auxiliary binary variables and constraints.
Figure 2: Piecewise linear and non-linear utility functions of practical relevance
Let us define \(\delta_{g,i}\) as one if the \(i\)th interval of the UF is selected, such that \(\sum_{i\in(1..I)}\delta_{g,i}=1\). To select the correct value, it is necessary to impose the following constraints: \[l_{g,i}+M\big{(}\delta_{g,i}-1\big{)}\leq n_{g}^{1}\leq r_{g,i}+M\big{(}1-\delta_{g,i}\big{)} \tag{27}\] \[\mathrm{PLF}_{g,i}(n_{g}^{1})+M\big{(}\delta_{g,i}-1\big{)}\leq u_{g}\leq\mathrm{PLF}_{g,i}(n_{g}^{1})+M\big{(}1-\delta_{g,i}\big{)} \tag{28}\] Above, \(l_{g,i}\) and \(r_{g,i}\) are the left and right input boundaries respectively for segment \(i\), and \(\mathrm{PLF}_{g,i}\big{(}n_{g}^{1}\big{)}=m_{g,i}n_{g}^{1}+c_{g,i}\). **Pitfalls.** Utility functions may be defined for a given group without any understanding of the maximum number of patients that may be treated of that type in the hospital given full access to resources. This has implications for the proposed approach and for end users. For instance, if the chosen upper bound is less than the actual capability of the hospital, then an unfair limitation is imposed, one that may allow other groups of patients to capitalize. It is also possible to define aspirations that are above the capacity of the hospital. This may affect the analysis, because the upper bound, if selected as the output, will be given a lower achievement level (i.e., see Fig 3). In fact, the upper bound is the aspiration point in these circumstances.
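Returning briefly to the breakpoint-and-slope convention used above for \(\mathrm{PLF}(n_{g}^{1},b_{g},\nabla_{g})\), the following is a minimal illustrative sketch (not the authors' implementation) of how a UF encoded this way can be evaluated, including the duplicated-breakpoint "jump" convention for discontinuities. The demand, weight and upper bound used in the example are invented, and a UF with negative utility at zero output would need an additional intercept term, assumed zero here.

```python
def plf(n, breakpoints, slopes, u0=0.0):
    """Evaluate a piecewise linear UF given ordered breakpoints b_g and slopes grad_g.
    slopes has one more entry than breakpoints; a duplicated breakpoint {x, x} marks a
    discontinuity whose slope entry is read as the jump size (y' - y). u0 is the
    assumed utility at zero output (zero here; negative for UF penalising low output)."""
    u, prev = u0, 0.0
    for j, b in enumerate(breakpoints):
        if b == prev:                        # zero-width segment -> vertical jump
            if n >= b:
                u += slopes[j]
        else:                                # ordinary segment [prev, b] at rate slopes[j]
            u += slopes[j] * max(0.0, min(n, b) - prev)
            if n < b:
                return u
        prev = b
    return u + slopes[-1] * max(0.0, n - prev)

# UF (30) of Section 3.5 with an invented demand of 500 patients, utility scaled 0-100:
n_hat, w = 500.0, 100.0 / 800.0              # demand and weight (w = 100 / upper bound)
b, grad = [n_hat, n_hat], [0.0, w * n_hat, w]
print(plf(400.0, b, grad), plf(500.0, b, grad), plf(700.0, b, grad))   # 0.0, 62.5, 87.5
```

In a MIP such as (27)-(28) the same segments would instead be selected by the binary indicators; the sketch only shows how a given breakpoint-and-slope description maps output to utility.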
Given these two pitfalls, a pre-analysis should be applied upfront to identify all upper bounds. Otherwise, a sufficiently large value should be identified. Also, we should enforce any aspiration point to be less than or equal to the upper bound. The utility function in Fig 2b should be used with care. Conceptually, the point of indifference should be a small value. However, if the point of indifference is designated too high (i.e., by accident or design) for each group, then the solution \(n_{g}^{1}=n_{g}^{I}\ \forall g\in G\) may be infeasible. If objective function (21) with \(\epsilon_{2}=0\) is selected, then the optimal solution may be a zeroed (i.e., null) solution. This is unhelpful given that there is capacity to achieve more output. This, however, is logical, as in the range \(\big{[}0,n_{g}^{I}\big{]}\) all solutions have the same utility: none. So, there is no point in choosing a higher output below the point of indifference. In conclusion, it would be better to have two sloping segments within the utility function. The first segment would have a lesser slope than the second. In other words, a piecewise linear version of the non-linear option shown in Fig 2c is suggested.
Figure 3: An invalid UF with an incorrectly defined aspiration value
**Final Remarks.** An alternative viewpoint may be taken when defining utility functions. It is feasible to represent the x-axis as the unrealised performance (i.e., \(\bar{n}_{g}^{1}-n_{g}^{1}\)), instead of the output \(n_{g}^{1}\). The following figure demonstrates this possibility. Hence, when unrealised output is low (i.e., \(n_{g}^{1}\) is high), utility and achievement are high.
Figure 4: Alternative but equivalent utility function
### Utility Functions - Financial
In CMP, financial considerations are equally important. Every patient treated in a public or private hospital has various costs associated with their care. Some of those costs may be met by the government, publicly funded universal healthcare organisations (i.e., like Medicare), and private health insurance. The rest may be incurred by the patient. Each patient is also charged and after costs are met, some hospitals may receive a net income/profit. When considering financial factors, it is perhaps warranted to define UF for each sub-group \(p\in P_{g}\). It makes less sense by type, as significantly different costs/revenues occur at the level of subtype. If defined by subtype, the number of utility functions could be considerable. Let us define \(f_{g}\) as the net income received and \(\gamma_{g}\) as the financial penalty incurred for each patient of type \(g\). In Figure 5(a) the simplest UF is posed. The utility increases linearly and is only zero when there is no output. The achievement is measured as the income generated and hence \(u_{g}=n_{g}^{1}f_{g}\) for \(n_{g}^{1}\leq\bar{n}_{g}^{1}\). For this UF, the reference point is \(\left(\bar{n}_{g}^{1},f_{g}\bar{n}_{g}^{1}\right)\), the breakpoints are \(b_{g}=\left\{\bar{n}_{g}^{1}\right\}\) and the slopes are \(\nabla_{g}\)= \(\left\{f_{g},0.0\right\}\). If there is a demand, then it is possible to view unmet demand with regret and penalize the lost revenue. In Figure 5(b), there is a gain for \(n_{g}^{1}\geq\bar{n}_{g}^{1}/2\) and loss otherwise. The full income is only received if \(n_{g}^{1}\geq\bar{n}_{g}^{1}\). Partial gains are achieved in the range \(\left[\bar{n}_{g}^{1}/2,d_{g}\right)\).
Direct income is always \(n_{g}^{1}f_{g}\) and lost income of \(\left(\bar{n}_{g}^{1}-n_{g}^{1}\right)\gamma_{g}\) is subtracted when \(n_{g}^{1}<\bar{n}_{g}^{1}\). It is reasonable to set \(\gamma_{g}=f_{g}\). The UF for this is as follows: \[u_{g}=\begin{cases}n_{g}^{1}f_{g}&n_{g}^{1}\geq\bar{n}_{g}^{1}\\ n_{g}^{1}f_{g}-\left(\bar{n}_{g}^{1}-n_{g}^{1}\right)\gamma_{g}&n_{g}^{1}<\bar{n}_{g}^{1}\end{cases} \tag{29}\] For this UF, the reference point is \(\left(\bar{n}_{g}^{1},\bar{n}_{g}^{1}f_{g}\right)\), the breakpoints are \(b_{g}=\left\{\hat{n}_{g}^{1}\right\}\) and the slopes are \(\nabla_{g}\)= \(\left\{2f_{g},f_{g}\right\}\). The UF11 y-axis intercept is not -100. It changes relative to \(\hat{n}_{g}^{1}\). If \(\hat{n}_{g}^{1}\) is low then the y-intercept is not that small (i.e., only a little negative). If \(\hat{n}_{g}^{1}\) is large, then regret is high, and the y-intercept has a very large negative value.
Figure 5: Financial utility functions (a) without penalty, (b) with penalty
### Utility Functions - Rewards and Regrets
In this section we consider unmet goals with regret and dissatisfaction and suggest further UF of practical relevance to CMP. Let us first consider the possibility that no utility, reward, or satisfaction is achieved if treatments for a particular group do not exceed a specified aspiration or demand \(\hat{n}_{g}^{1}\). Otherwise, the utility increases linearly (or non-linearly) as \(n_{g}^{1}\) increases. The UF for this is shown in Figure 6(a) and the UF for the linear variant is as follows: \[u_{g}=\begin{cases}w_{g}n_{g}^{1}&n_{g}^{1}\geq\hat{n}_{g}^{1}\\ 0&n_{g}^{1}<\hat{n}_{g}^{1}\end{cases} \tag{30}\] For this UF, the reference point is \(\left(\bar{n}_{g}^{1},w_{g}\bar{n}_{g}^{1}\right)\), the breakpoints are \(b_{g}=\left\{\hat{n}_{g}^{1},\hat{n}_{g}^{1}\right\}\) and the slopes are \(\nabla_{g}\)= \(\left\{0.0,\hat{n}_{g}^{1}w_{g},w_{g}\right\}\). It is reasonable to set \(w_{g}=f_{g}\) or \(w_{g}=1\). For the first option, the reward is monetary and specific to a particular patient group. The second option is independent of group and non-monetary. Another option is to set \(w_{g}=100/\bar{n}_{g}^{1}\). The utility can then be viewed as a level of satisfaction between 0 and 100%. Let us then consider the loss of reward, income, and satisfaction from not meeting demand. We could penalize the extent of the unmet demand and include that value as a measure of dissatisfaction. This is shown in Figure 6(b). Evidently, outputs below demand are assumed to provide no satisfaction here. The UF for this is as follows: \[u_{g}=\begin{cases}w_{g}n_{g}^{1}&n_{g}^{1}\geq\hat{n}_{g}^{1}\\ -\gamma_{g}(\hat{n}_{g}^{1}-n_{g}^{1})&n_{g}^{1}<\hat{n}_{g}^{1}\end{cases} \tag{31}\] For this UF, the reference point is \(\left(\bar{n}_{g}^{1},w_{g}\bar{n}_{g}^{1}\right)\), the breakpoints are \(b_{g}=\left\{\hat{n}_{g}^{1},\hat{n}_{g}^{1}\right\}\) and the slopes are \(\nabla_{g}\)= \(\left\{\gamma_{g},w_{g}\hat{n}_{g}^{1},w_{g}\right\}\). The parameter \(w_{g}\) can be defined in three ways as previously mentioned, and as such it makes sense to define \(\gamma_{g}=w_{g}\) as well. Third, it is worth considering only regret and not the measurement of reward. Figure 6(c) shows a UF modelling only regret. In that function, any output above demand is assumed to have no utility.
The UF for this is as follows: \[u_{g}=\begin{cases}0&n_{g}^{1}\geq\hat{n}_{g}^{1}\\ -(\hat{n}_{g}^{1}-n_{g}^{1})\gamma_{g}&n_{g}^{1}<\hat{n}_{g}^{1}\end{cases} \tag{32}\]
Figure 6: UF measuring satisfaction and regret
**Final Remarks**. The UF shown in Figure 6 are basic as they have only two segments. More segments can be added on both sides of the discontinuity and on non-static segments. The UF shown in Figure 6(a) is well suited to current funding arrangements in Australia, whereby public hospitals are given extra funding for exceeding planned "care targets". Defining valid UF is a key step to performing CMP. The process of eliciting UF identifies the objectives of managers, and places limitations on the possible solutions that can be entertained. This reduces the complexity of the optimisation and calculations phase. As reported in the literature, it is likely that an iterative process is needed to provide/revise these, as extracted UF may be contentious. Between each stage of that process, an analysis would be performed followed by a negotiation. As UF are needed for each patient type or group, it is foreseeable that many stakeholders would need to be approached to gather a full set of UF. When eliciting a UF, each stakeholder may be asked questions like the following:
_Question 1_. Which metric is being used to measure performance? For instance, is the function modelling satisfaction/dissatisfaction, profit/loss, cost, achievement/non-achievement?
_Question 2_. Is there a minimum or maximum output? If there is, what is the value? Are there thresholds of acceptable performance, below which a DM will not be prepared to go, no matter what the gains in other criteria?
_Question 3_. Is there an aspiration, target, or demand for each group?
_Question 4_. For what range of values is the utility function deemed "static", "increasing" or "decreasing"? Is the increase concave up, concave down, or linear? Is the decrease concave up, concave down, or linear?
_Question 5_. Are over- and under-achievement undesirable? Is there a penalty for not meeting the minimum output or for exceeding the maximum output? What is the meaning of the penalty and what is the penalty value?
_Question 6_. Regarding the definition of discontinuous and tiered functions, are the boundaries of linear segments open or closed?
## 4 Case Study
### Details
In this case study we have considered a large tertiary level public hospital in the local area. For the purposes of our CMP activities, the main infrastructure of the hospital has been included, for instance a 26-bed intensive care unit, 19 operating theatres, and 24 surgical/medical wards totalling 522 beds. Excluded from further consideration are surgical care areas for preoperative and post anaesthesia care and some miscellaneous wards. The time availability of wards is 168 hours per week, and 40 hours for operating theatres. Historical patient treatment information has been collected and this constitutes the main inputs to the CMP. From the historical data, patient treatment times and resource demands have been extracted. Within each specialty, there are many subtypes. These are characterised by Diagnosis Related Group (DRG), a classification system that groups hospital cases by the resources required in their treatment. The number of DRGs however is prohibitive, so for pragmatic reasons we have characterised patients as either surgical or medical inpatients in this article.
An unrestricted case study with subtypes defined by DRG can be found in Burdett et al. (2023a). For the 19 specialties we have chosen to consider, and for each inpatient subtype, there is an average time requirement for, i) surgery in an operating theatre, ii) recovery or other treatment in a ward, and iii) intensive care. These times are weighted averages scaled by the prevalence of each DRG. Table 2 describes the considered specialties and the number of subtypes. The TRANS patient type only includes surgical inpatients, and the PSY type only includes medical inpatients. Historically, the records showed that medical inpatients also required intensive care and surgeries. Table 3 describes the wards and their focus, i.e., the types of patients that should be cared for there. \begin{table} \begin{tabular}{l l l l l l l} \# & **SPECIALY** & **\#SUB** & **(SUR, MED)** & \multicolumn{2}{c}{**SURGICAL**} & \multicolumn{2}{c}{**MEDICAL**} \\ & **(GROUP)** & & **\%MIX** & **(OT, ICU, WARD)** & **WARDS** & **(OT, ICU, WARD)** & **WARDS** \\ & & & & & **AVG TIME (HRS)** & **AVG TIME (HRS)** \\ 1 & Cardiac (CARD) & 2 & (58.8,41.2) & (3.16,19.85,171.35) & 3C & (0.06,1.82,84.45) & 3D, 3E, 5A \\ 2 & Endocrinology (ENDO) & 2 & (50.63,49.37) & (2.13,27.137.85) & 4D & (0.51,0.27,185.24) & 4D, 5C \\ 3 & Ear Nose Throat (ENT) & 2 & (54.08,45.92) & (2.12,10.24,44.02) & 1D & (0.5,0.91,49.43) & 1D \\ 4 & Facio-Masillary (FMAM) & 2 & (70.76,29.33) & (4.52,613.133) & 1D & (0.61,0.08,13.55) & 1D \\ 5 & Gastroenterology (GST) & 2 & (54.97,45.03) & (2.64,361.150.71) & 4D, 4E & (0.144,49.410.43) & 4D, 4E, 5C \\ 6 & Gynaecology (GYN) & 2 & (67.45,32.55) & (2.2,10,411.36) & 4C & (0.59,0.52.86) & 4C, 5C \\ 7 & Hepatology (HEP) & 2 & (45.97,54.03) & (1.475,413.160.71) & 4C, 4E & (0.075,1.84,119.87) & 4C, 4E \\ 8 & Immunology (IMMU) & 2 & (5.66,94.34) & (1.93,43.306.79) & 2D & (0.19,44.68,119.45) & 2D, 5B \\ 9 & Nephrology (NFP) & 2 & (28.3,71.7) & (2.19,05.165,102.41) & 48R & (0.47,0.743,50.65) & 48R, REVD, 5C \\ 10 & Neurology (NEUR) & 2 & (26.95,73.0) & (2.46,367,243.44) & 2C & (0.099,5.35,200.68) & 2C, 5B \\ 11 & Oncology (ONC) & 2 & (57.28,42.72) & (2.86,209,217.5) & 2E & (0.36,0.89,172.27) & 2E \\ 12 & Ophthalmicology (OPTH) & 2 & (68.83,31.17) & (1.52,008,465.35) & 4D & (0.046,0.000.36) & 5A \\ 13 & Orthopaedics (ORTH) & 2 & (64.36) & (3.09,132.189.89) & 2A, 2B & (0.52,18.266.12) & 2A, 2B \\ 14 & Plastics (PLAS) & 2 & (65.69,34.31) & (2.43,171.57.44) & 1D & (0.18,0.01,137.73) & 1D \\ 15 & Psychiatry (PSY) & 1 & (100) & na & na & (0.08,0.06,258.82) & GREV \\ 16 & Respiratory (RESP) & 2 & (5.62,94.38) & (2.86,37.161.26) & 2D & (0.22,4.76,136.37) & 2D, 5A \\ 17 & Transplants (TRANS) & 1 & (100) & (3.33,445.71,593.24) & 48T & na & na \\ 18 & Urology (IURO) & 2 & (43.73,56.27) & (1.83,1.66,71.63) & 4A & (0.38, 0.1,41.11) & 4A \\ 19 & Vascular (VASC) & 2 & (31.85,68.15) & (2.98,4.75,339.59) & 1C & (0.07,5.9,122.74) & 1C \\ \end{tabular} \end{table} Table 2: Patient types, subtypes, and resourcing details ### Sensitivity Analysis It is necessary to map the behaviour of different UF within the context of CMP. To the best of our knowledge utility functions are not currently used in hospitals and as such we cannot test the preferences and aspirations of real hospital managers. Hence, we analyse each UF type individually. This aligns with a hospital wide alignment of values and beliefs regarding outputs. However, different parameters of each UF type are considered. 
The purpose of the numerical testing is to find caseloads with highest utility, and this may translate on some occasions to caseloads with a higher or lower number of patients treated. Our numerical results are summarised in Appendix D and include most of the UF discussed in Section 3. The piecewise linear functions of each UF can also be found in Appendices B and C. To characterise the piecewise linear functions of all non-linear UF, thirty breakpoints were arbitrarily selected. For the UFM there are two parameters in the objective, namely \(\epsilon_{1}\) and \(\epsilon_{2}\). We considered the two extremes, namely \(\epsilon_{1}=1,\epsilon_{2}=0\) and \(\epsilon_{1}=0,\epsilon_{2}=1\). The former maximizes the minimum utility (MMU), and the latter maximizes the sum of the utilities (MSU). **GAM and GPM.** The GAM and GPM were applied for comparative purposes. Their details are summarised in Table 5, where \(u_{g}=100n_{g}^{1}/\bar{n}_{g}^{1}\). The GAM method has identified the least equitable caseload of the two methods but has higher outputs for a select number of groups. The GAM solution has nine groups with zero output. The GPM (MSU) has two groups at zero output and a third that is close to zero. The GPM (MMU) however achieves consistent outputs of 36% for each group except CARD and TRANS, where output is at 100%. The total number treated, however, is lowest. **UF1.** The results for UF1 are summarised in Figure 7 and 8. The total number of patients treated differs between the non-linear variants. The convex up version produces the lowest number of patients treated whereas the convex down version the highest. For the MMU objective, however, the largest number of patients is when there is no convexity (i.e., the UF is linear). The caseloads obtained are significantly different between the MMU and MSU cases, across different values of \(\alpha\). The differences between the maximum and minimum of each group are summarised in Figure 8. That chart summarises the prevailing case mix, which is computed as \(100n_{g}^{1}/\mathbb{N}\). It shows much less variation in the MMU objective. Specialties GAST, NEPH and PSY were altered the most. For the MSU objective, more groups were altered.
Figure 7: Alpha versus total treatments, sum of utility, and minimum utility for UF1
Figure 8: Differences observed in the case mix (UF1)
**UF2**. The results for UF2 are summarised in Figure 9 and 10. The maximum number of treatments is obtained when there is no indifference. When the point of indifference is increased, \(\mathbb{N}\) decreases slowly for the MSU objective and quickly for the MMU objective. When the level of indifference becomes too high, it is not possible for each group to have non-zero utility, and a "zero" solution is produced. For the MMU case, \(\mathbb{N}\) drops to zero when the indifference level is 40%. This means that no single group has an output above the level of indifference. For the MSU case, trade-offs are made straightaway to achieve higher outputs, and three groups have a zero utility. Also, there is no solution for a 100% level of indifference. The differences between the maximum and minimum of each group are summarised in Figure 10. There are significant changes required in some specialties.
**UF3.** The results for UF3 are summarised in Figure 11 and 12. Gains in some specialties occur at the expense of reductions in ENT, GAST, GYN, NEPH and OPHT. The effect of non-linearity is quite pronounced in each measure.
Figure 11: Aspiration level versus total treatments, sum of utility, and minimum utility for UF3
Figure 12: Differences observed in the case mix (UF3) when \(\alpha=1\)
**UF4**. The results for UF4 are summarised in Figure 13 and 14. The numerical testing shows that \(\mathbb{N}\) does not change much as the indifference and aspiration level are altered. When the indifference level is high enough, however, the MMU case does produce a zero solution. As the indifference level and aspiration approach 50% symmetrically from below and above respectively, total utility tends to increase for the MSU case, but the opposite for the MMU case. The MSU case has at least one zeroed group, and hence a minimum utility of zero always. For the MMU case, the minimum utility decreases slowly. As the parameters are altered the same caseload and case mix are obtained for the MMU case; the only variation occurs in a few specialties like GYN. For the MSU case, the percentage decreases for some specialties but increases for others. This is not shown in Figure 14 but is clearly visible in a standard bar-chart of the caseload.
Figure 13: Total treatments, sum of utility, and minimum utility for UF4
Figure 14: Differences observed in the case mix (UF4)
**UF5**. The results for UF5 are summarised in Figure 15 and 16. When the intercept is high, the utility at zero is very small, in fact smaller than -100. However, the utility is always 100 at the upper bound. We can see that \(\mathbb{N}\) does not change at all as the intercept is increased. However, a decrease in total utility occurs. This decrease is somewhat linear over the % range (0,50], but significantly more non-linear when over 60%. In this situation, the reduced total utility is not an indicator that trade-offs are being made as the intercept is altered. Scrutiny of the results shows that the same caseload is obtained (regardless) for the MSU case. For the MMU case, the caseloads are very similar too, but there are some significant changes in ENDO, GAST, GYN and NEPH. The reduced utility eventuates not because the \(n_{g}^{1}\) values change but rather because the slope of the function steepens as the intercept increases. **UF6**. The results for UF6 are the same as those obtained for UF3.
There are, however, some differences in the case mix as shown in Figure 17, relative to Figure 12. As UF6 has two values of \(n_{g}^{1}\) for each utility value, the CMP model economises and chooses values on the left-hand side of the apex first. Without a term involving \(\mathbb{N}\) in the objective, outputs \(n_{g}^{1}>n_{g}^{A}\) will not occur, because the utility decreases on the right-hand side of the apex.
Figure 15: Aspiration level versus total treatments, sum of utility, and minimum utility for UF5
Figure 16: Differences observed in the case mix (UF5)
**UF7**. The results for UF7 are summarised in Figure 18 and 19. Although UF7 is like UF4, the results are unalike. As the reference point is increased and the s-shape is shifted, there is little change in the total number of treatments, but significant decreases in total utility and minimum utility. The case mix for the MMU objective is quite invariant. For the MSU objective, the case mix varies considerably as the reference point is increased. Quite a few specialties are zeroed.
**UF8**. The results for UF8 are summarised in Figure 20 and 21. This UF has similar behaviour to UF3 initially. However, notable differences arise as the level of indifference is increased. Once it is too high, a non-zero solution is not achievable for the MMU objective. Solutions are however achievable for the MSU objective as some groups can be zeroed to allow other groups to prosper. The total number of patients treated has not increased uniformly and perhaps demonstrates the possibility of finding alternatively optimal caseloads. The total utility however gradually decreases. The case mix is quite variable for the MSU objective, perhaps the most of any of the UF. There are big differences between the min and max percentage. This occurs because UF8 is tiered, and there are only two utility values, namely 0 and 100. As such, there are many alternatively optimal caseloads to choose from.
Figure 19: Differences observed in the case mix (UF7)
Figure 20: Indifference versus total treatments, sum of utility, and minimum utility for UF8
**UF11**. The results for UF11 are summarised in Figure 22 and 23. The numerical testing shows that the results are quite comparable to those of UF5, and this makes sense as they have a common segment with negative utility. It is worth noting that when goals are high, regret is highest and negative utilities are incurred for some groups. At most, six specialties had negative utility, and this occurred when indifference was at 90% of specialty capacity. When goals are low, regrets are small. The case mix varies only slightly for the MMU objective, and quite a lot for the MSU objective. Interestingly, some specialties had almost no variation at all in the different caseloads, like CARD, ONC, ORTH, PSY, TRANS, VASC.
Figure 23: Differences observed in the case mix (UF11)
**UF12**. The results for UF12 are summarised in Figure 24 and 25. Although UF8 and UF12 are the same up till the point of indifference, the results are not completely alike. When the level of indifference is small, then it is unnecessary for any group to have a zero utility; the utility of each group will lie on the slope somewhere. If the level of indifference is larger, some groups will be zeroed and the rest will have a utility on the slope, further up, away from the point of discontinuity. As the indifference increases, more groups will occur at the point of discontinuity, and the majority will need to be zeroed.
Figure 24: Indifference versus total treatments, sum of utility, and minimum utility for UF12
**UF13**. The results for UF13 are summarised in Figure 26 and 27. These are like those of UF12. There are however two main differences. For the MSU objective, the total utility decreases more significantly. Also, the minimum utility is negative, as opposed to zero. The MMU case mix results are invariant, except for a percent difference in ENDO and GYN. For the MSU objective, six specialties have an unvarying case mix.
Figure 27: Differences observed in the case mix (UF13)
**UF14**. The results for UF14 are summarised in Figure 28 and 29. This function is most comparable to UF5 and UF13 and as such, the results are similar. For both MMU and MSU objectives the number of patients treated increases as the intercept is increased. The intercept forces higher outputs in some groups to be chosen. However, the total utility decreases quite significantly. This reduction occurs because more groups incur negative utilities as the intercept is increased. The case mix for the MMU objective is quite stable and only varies slightly as the intercept is increased. For the MSU objective, some specialties have increased presence in the caseload and some decrease in response. Only two are zeroed, however.
Figure 28: Intercept versus total treatments, sum of utility, and minimum utility for UF14
### Pareto Optimality of Solutions
The Pareto optimality of the caseloads obtained in Section 4.2 was analysed. To check Pareto optimality, the CMP was applied with the constraint \(n_{g}^{1}\geq\hat{n}_{g}^{1}\;\forall g\in G\) and a Maximize \(\mathbb{N}\) objective. The goals \(\hat{n}_{g}^{1}\) were set as the existing caseload. The results are shown in Table 6. To note: i) the exact meaning of UF parameters has been explained previously, ii) a difference of zero means the caseload was Pareto optimal, and a positive difference means the caseload was dominated, iii) "zeroed caseload" means \(n_{g}^{1}=0\;\forall g\in G\) and a minimum group utility greater than zero could not be obtained. All zeroed caseloads are clearly not Pareto optimal.
\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Initial**} & \multicolumn{2}{c}{**Corrected**} & \multirow{2}{*}{**Diff**} & \multirow{2}{*}{**Diff**} & \multirow{2}{*}{**\%**} \\ GAM & 22389.66 & 33530.74 & 11141.07 & 49.76 \\ GPM (minimize max under) & 21232.76 & 32196.79 & 10964.03 & 51.64 \\ GPM (minimize sum under) & 31664.00 & same & 0 & 0 \\ \hline \multirow{2}{*}{**Funct.**} & \multirow{2}{*}{**Param.**} & \multirow{2}{*}{**N (MMU)**} & \multirow{2}{*}{**Diff.**} & \multirow{2}{*}{**N (MSU)**} & \multirow{2}{*}{**Diff.**} \\ & & & & & & \\ UF1 & \(\alpha\)=1 & 2161.02 & 30928.55 & 9318.13 & 31664.00 & same & 0 \\ & \(\alpha\)=0.15 & 20315.59 & 32232.47 & 11916.88 & 32001.87 & same & 0 \\ & \(\alpha\) = 3 & 20460.73 & 30964.59 & 10503.86 & 29220.08 & same & 0 \\ UF2 & (10\%) & 21537.21 & 30854.87 & 9317.67 & 32024.11 & same & 0 \\ & (90\%) & zeroed caseload obtained & 25565.72 & 28241.53 & 2675.76 \\ UF3 & (10\%) & 5407.79 & 34600.94 & 29193.15 & 5407.79 & 34600.94 & 29193.15 \\ & (100\%) & 20807.70 & 30928.60 & 10120.89 & 31663.97 & same & 0 \\ UF4 & (5\%, 95\%) & 20492.85 & 30968.19 & 10475.34 & 30900.76 & 30952.98 & 52.21 \\ & (20\%, 80\%) & 19482.28 & 32442.66 & 12960.38 & 28521.94 & 28578.56 & 56.62 \\ & (40\%, 60\%) & zeroed caselo obtained & 28816.7 & 32418.74 & 3602.03 \\ UF5 & (10\%) & 20372.80 & 31301.57 & 10928.77 & 31663.97 & same & 0 \\ & (100\%) & 20551.92 & 30674.33 & 10122.41 & 31663.97 & same & 0 \\ UF6 & (40\%) & 21245.39 & 31358.17 & 10112.78 & 21464.89 & 31517.39 & 10052.50 \\ & (100\%) & 20734.49 & 30854.87 & 10120.38 & 31511.33 & 31522.99 & 11.66 \\ UF7 & (20\%, 0.1) & 21074.59 & 3232.47 & 11157.88 & 30249.15 & same & 0 \\ & (20\%, 0.9) & 19924.63 & 31978.08 & 12053.45 & 29220.06 & same & 0 \\ UF8 & (30\%) & 16223.37 & 33357.72 & 17134.35 & 20977.59 & 32697.59 & 11720.00 \\ & (70\%) & zeroed caseload obtained & 27081.41 & same & 0 \\ & (100\%) & zeroed caseload obtained & 26908.58 & 26918.73 & 10.16 \\ UF11 & (10\%) & 19482.28 & 32442.66 & 12960.38 & 32191.15 & same & 0 \\ & (100\%) & 20263.33 & 30736.8 & 10473.47 & 31663.97 & same & 0 \\ UF12 & (10\%) & 20263.33 & 30736.8 & 10473.47 & 31765.95 & same & 0 \\ & (60\%) & zeroed caseload obtained & 29850.05 & 30984.91 & 1134.87 \\ & (100\%) & zeroed caselo obtained & 30221.02 & same & 0 \\ UF13 & (10\%) & 20413.87 & 31342.97 & 10929.10 & 32011.30 & same & 0 \\ \hline \hline \end{tabular} \end{table} Table 6: Pareto optimality of caseloads Figure 29: Differences observed in the case mix (UF14) In summary, none of the caseloads obtained for the MMU objective were Pareto optimal. These caseloads, however, can be significantly improved as shown. Many caseloads for the MSU objective are Pareto optimal. Those not Pareto optimal were analysed and found to occur in specific conditions. Any patient type that lies on a flat segment of their associated UF results in the production of a dominated caseload. Examples include UF2, UF3, UF4, UF8, UF12, UF14. The other special case is UF6, which has the same utility value, for two different outputs. ### Discussion and insights The numerical testing has provided the following insights. (i). Once a UF "template" is chosen, it is recommended to perform a sensitivity to identify any insights that may be discernible. In some situations, UF parameters can have a greater effect on possible outputs. 
This sensitivity can be done even before specific parameters are chosen, and even when a particular parameter is of interest to a decision maker. For instance, there are insights regarding the aspiration level of individual specialties. The sensitivity analysis of UF3 and UF8 clearly shows that outputs below 36% of specialty capacity can easily be achieved. For higher aspiration levels the exact nature of the trade-offs that need to be made (i.e., linear, or non-linear) can be observed. Aspiration and indifference levels may or may not affect the total number of treatments. UF1, UF4, UF5, UF7, UF11, and UF12 are examples where the total number of treatments can remain static. In other situations, those outputs can increase or decrease significantly. Some of the sensitivity analyses describe when regret for lost achievement becomes significant. For instance, regarding UF5, the regret, manifested as negative utility, increases more greatly once aspirations exceed 70% of specialty capacity.
(ii). Those UF with static segments with utility zero (i.e., UF2, UF4, UF8, UF12) are nuanced in the sense that zeroed solutions may be identified as optimal for the MMU objective. Static segments, by definition, imply no significant difference over a range of values. If the level of indifference is too high, then many solutions will be regarded as having no utility. The CMP model will take advantage of this and choose the lowest output. To avoid zeroed solutions, static segments should instead be sloped. This is not a problem for the MSU objective, however, as at least one group will have a non-zero caseload.
(iii). As the point of indifference is increased, there is a gradual decrease in output. The opposite occurs regarding the aspiration level, i.e., increased output occurs as it is increased.
(iv). There are alternative optimal caseload solutions for some of the UF. For instance, the same \(\sum_{g}u_{g}\) value can in theory be obtained in different ways, and this will lead to caseload solutions with different values of \(\mathbb{N}\). The implication of this is that perhaps a tri-objective, with an \(\epsilon_{3}\mathbb{N}\) term, could be considered worthwhile.
(v). The MMU objective may be used as a primer to initiate the iterative process of multi-criteria CMP. The caseload produced is a good trade-off that gives minimal outputs to each group. In following stages of planning, reductions in those base levels may be traded to allow other groups to prosper more. The MSU option automatically chooses some groups to prosper at the expense of others without negotiation.
(vi). The convexity of the UF does not affect Pareto optimality of the caseload under the MSU objective. Only UF with flat segments and two values per utility are potentially dominated.
## 5 Conclusions
Most decision problems of any practical relevance involve the analysis of several conflicting criteria (Makowski 2006). Hospital case-mix planning is one such problem. In that task, it is necessary to apportion resources like operating theatres, wards, and intensive care units to different patient types, so that the maximum number of patients of each type is treated. There are, however, many ways to operate and that poses a significant dilemma to hospital managers and executives. Current approaches for CMP involve the definition of a case mix and the search for a caseload that abides by the proportions held within that case mix. Using some case mix concept, however, forces patient type outputs to be inherently interlinked.
Dominated caseloads may occur because bottlenecks restricting the output of one group of patients also cause reduced outputs of other patient types, even though free resources are available and additional outputs can be achieved. Given perceived limitations in existing methods, utility functions (UF) were identified as a potential mechanism to facilitate improved hospital case-mix planning. This is a novel idea yet to be considered in the CMP literature, and a concept we believe managers and executives would find appealing. UF may be viewed as a means of regulating hospital capacity within a competitive environment, whereby the overall agenda is to treat as many patients of each type as possible. It is impossible to meet the needs of all patient types equally; hence treating a smaller number of patients is ultimately necessary. This is acceptable if individual patient types are treated in sufficient numbers. The application of UF to hospital case-mix planning has numerous other benefits. Utility functions can be used to model objective (i.e., quantitative) information including financial details, and subjective (i.e., qualitative) information like aspirations as well. Instead of treating non-achievement statically, as a linear relationship of the deviation from an aspiration, UF can model the varying importance of not meeting aspirations, which is more realistic. Other non-linear and discontinuous relationships can also be modelled using UF. In this article we considered linear/non-linear, monotonic/non-monotonic, convex/concave, simple/compound, and continuous/discontinuous utility functions. The usage of utility functions is also intended as a means of avoiding the computationally intractable task of Pareto Front generation (i.e., identification of a set of non-dominated solutions) for a decision problem likely to have a high dimensional objective space. It is believed that utility functions and the application of a single objective involving minimum utility and aggregate utility are sufficient to provide a means of generating non-dominated solutions and navigating the space of optimal case mix options. It is worth noting that non-dominated solutions can be identified one by one, by altering the weights in equation (21). Numerical testing indicates that the proposed approach has significant merit. Apart from finding desirable caseloads to consider, the approach can provide important insights via a sensitivity analysis of the parameters of each UF. The sensitivity analysis can provide a reality check to managers and decision makers as to the likely consequences of the choices that they are being asked to make. The challenge of implementing an approach based upon utility functions concerns their creation. Inconsistent responses to questions regarding the nature of the UF being elicited may greatly affect the type of caseload determined. The omission of criteria and the confounding of criteria are two other drawbacks mentioned in the literature (Stewart, 1996). However, proper implementation and training may eliminate these issues. The definition of UF is synonymous with bidding in a competitive process. The chosen UF gives decision makers what they ask for. Having low aspirations early may result in the CMP model overlooking the importance of specific patient groups, and permit prioritization of others more greatly. **Future Research:** Basic utility functions of different types have been imposed and tested in this paper.
Each specialty may, however, impose a completely different UF. It made little sense to fabricate scenarios of that nature here, as there are limitless possibilities. Pragmatically, it would be beneficial to test instances where the UF type differs between specialties. It would also be beneficial to run through the process of changing and negotiating UF for each specialty present in a hospital. This would help in the creation of a set of rigorous guidelines and processes to optimise what is currently an ad-hoc process. The UF considered in this article predominantly describe a linear relationship between output and utility, and as such they contain linear segments. A non-linear segment was, however, included in UF1. The non-linearity was shown to have a significant effect on the resulting caseload. It would be beneficial to analyse the effect of non-linearity in all the UF discussed in this article. This research facilitates the development of a decision support tool for hospital planners and executives. It would be worth exploring how such a tool could be designed and how users could interact with it. CMP is typically performed with respect to a single criterion measured for each group of patients, i.e., the number of patients treated. It is possible to perform CMP when there is more than one KPI, i.e., when there is a "vectorial return". Alternative KPI like revenue and cost could also be incorporated and used.

**Acknowledgements:** This research was funded by the Australian Research Council (ARC) Linkage Grant LP 180100542 and supported by the Princess Alexandra Hospital and the Queensland Children's Hospital in Brisbane, Australia.
2302.14306
CLR-GAM: Contrastive Point Cloud Learning with Guided Augmentation and Feature Mapping
Point cloud data plays an essential role in robotics and self-driving applications. Yet, annotating point cloud data is time-consuming and nontrivial while they enable learning discriminative 3D representations that empower downstream tasks, such as classification and segmentation. Recently, contrastive learning-based frameworks have shown promising results for learning 3D representations in a self-supervised manner. However, existing contrastive learning methods cannot precisely encode and associate structural features and search the higher dimensional augmentation space efficiently. In this paper, we present CLR-GAM, a novel contrastive learning-based framework with Guided Augmentation (GA) for efficient dynamic exploration strategy and Guided Feature Mapping (GFM) for similar structural feature association between augmented point clouds. We empirically demonstrate that the proposed approach achieves state-of-the-art performance on both simulated and real-world 3D point cloud datasets for three different downstream tasks, i.e., 3D point cloud classification, few-shot learning, and object part segmentation.
Srikanth Malla, Yi-Ting Chen
2023-02-28T04:38:52Z
http://arxiv.org/abs/2302.14306v1
# CLR-GAM: Contrastive Point Cloud Learning with Guided Augmentation and Feature Mapping

###### Abstract

Point cloud data plays an essential role in robotics and self-driving applications. Yet, annotating point cloud data is time-consuming and nontrivial while they enable learning discriminative 3D representations that empower downstream tasks, such as classification and segmentation. Recently, contrastive learning-based frameworks have shown promising results for learning 3D representations in a self-supervised manner. However, existing contrastive learning methods cannot precisely encode and associate structural features and search the higher dimensional augmentation space efficiently. In this paper, we present CLR-GAM, a novel contrastive learning-based framework with Guided Augmentation (GA) for efficient dynamic exploration strategy and Guided Feature Mapping (GFM) for similar structural feature association between augmented point clouds. We empirically demonstrate that the proposed approach achieves state-of-the-art performance on both simulated and real-world 3D point cloud datasets for three different downstream tasks, i.e., 3D point cloud classification, few-shot learning, and object part segmentation.

## 1 Introduction

3D understanding is of key importance in a wide range of applications including healthcare, medicine, entertainment, robotics, and human-machine interaction. Several 3D vision research problems (e.g., 3D point cloud classification [37, 38, 51], detection [29], and segmentation [38, 49, 51]) have recently drawn much attention. However, obtaining 3D point cloud representations from the raw point clouds is challenging and often requires supervision, which causes high annotation costs. As a result, self-supervised learning for 3D point cloud representations has witnessed much progress and can potentially improve sample efficiency and generalization for these 3D understanding tasks. Existing works are mainly based on generative models [1, 53, 17], reconstruction [13, 18, 24, 56, 60], pretext tasks [19, 36, 39, 42, 50, 55], and contrastive learning [23, 26, 54, 58, 59]. Much progress has been made in recent contrastive learning-based methods. However, we observe the following two limitations.

**Issue 1 (Contrast Ambiguity): a) GCA (Global Contrast Ambiguity).** With augmentations like cropping and nonrigid body transformation, the shape of an augmented object is entirely different from the original object, leading to ambiguity for contrastive learning. For instance, if we remove the back part of a "Chair" point cloud, the resulting point cloud could be similar in shape to a sample of the "Table" class, as shown in Figure 1.a. It poses a challenge for contrastive learning based methods because they do not access class labels for training. **b) LCA (Local Contrast Ambiguity).** In addition, local feature contrasting techniques [54, 26] treat every other point's feature in the same point cloud as a negative. The drawback with this objective is that there are symmetries and similar shapes in an object that can have the same features.

Figure 1: Motivation for CLR-GAM: a) motivation for Guided Feature Mapping, for better association; b) motivation for Guided Augmentations, for better exploration of the augmentation space.

**Issue 2 (Curse of Dimensionality):** Contrastive learning requires a variety of augmentations to learn discriminative 3D point cloud representations.
However, searching over these high-dimensional augmentations is time-consuming and does not guarantee proper coverage with a dynamic limited number of samples. In this work, we introduce two novel modules, i.e., guided feature mapping (GFM) and guided augmentation (GA), to overcome the above limitations. We introduce the GFM module to associate features of the same structure between two augmented samples for effective feature association under heavy shape deformation. The feature contrasting is done at the object or global level, like most works, but with a tight coupling of local feature association. The GA module is present to efficiently explore higher-dimensional augmentation spaces with dynamically limited samples for diverse coverage of the augmentation space. We conducted extensive experiments to validate the effectiveness of the proposed contrastive learning framework. Specifically, we benchmark three downstream tasks, i.e., classification, few-shot learning, and object part semantic segmentation. We obtain state-of-the-art performance on the three tasks, and extensive ablative studies are conducted to justify the designed choice. **Our main contributions:** i) We propose Guided Augmentation (GA) and Feature Mapping (GFM) for learning discriminative 3D point cloud representations. ii) Our proposed approach achieves state-of-the-art performance on three downstream tasks, i.e., object classification, few-shot learning, and part segmentation. iii) Extensive ablatives studies are presented to justify our design choices. ## 2 Related Works ### Contrastive Learning on Point Clouds Following the recent success of self-supervised contrastive learning for images, recent works [12, 23, 26, 41, 54, 59, 58] explore contrastive learning for point cloud. Point-Contrast [54] applies contrastive loss for pointwise features generated from the neural network for a point cloud transformed using two random augmentations, to learn invariant features. PointContrast uses local feature contrasting, whereas in our approach, we tightly couple local feature association with object-level/global feature contrasting. Most importantly, the features of different points in the same object can be similar because of symmetries and similar shapes, but PointContrast treats every other point's feature in the same pointcloud as a negative feature and suffers from LCA as shown in Table 1. DepthContrast [59] uses two encoders for global level contrasting using voxel and point encoders but does not address GCA. Zhu et al. [61] uses the feature memory bank [21] to store negatives and positives for hard sample mining. Huang et al. [23] propose STRL that applies spatial augmentation for temporally correlated frames in a sequence point cloud dataset and performs contrastive learning. Recently, Afham et al. [2] proposed CrossPoint that learns cross-modal representations (images and point clouds) using contrastive learning. All these methods rely on contrastive learning of the encoded global features of point clouds, ignoring the structural deformations that lead to intraclass confusion (GCA). Recently, the authors of PointDisc [26] apply a point discrimination loss within an object to enforce similarity in features for points within a local vicinity. PointDisc makes the geometric assumption of a fixed radius for obtaining positives from the encoded features of the same point cloud and also suffers from LCA, similar to PointContrast. 
In this work, we introduce the GFM to identify structurally similar features between two different augmentations of the same point cloud without any geometric assumptions. We empirically demonstrate the effectiveness of the proposed GFM to learn discriminative 3D representations for three different downstream tasks. ### Guided Augmentation Several guided augmentation approaches for image modality [8, 11, 20, 35, 40] have shown to synthesize variable realistic samples for training. It is an important problem to generalize an algorithm to cover the unseen samples in the test data, which is expected to have wide variations of augmentation. In the context of human posture, [8] generates synthetic videos for gait recognition, and [40] augments images with 2D poses using 3D MoCAP data for pose estimation. For improving image detection, [35, 46] renders 3D CAD models with variable texture, background, and pose for generating synthetic images. Hauberg et al. [20] learn class-specific transformations (diffeomorphism) from external data, whereas another work [28] synthesizes new images using an iterative process. Since the existing works are task-specific and designed for supervised learning of image modality, they require class labels during training. AGA [11] extends to the feature space to be class agnostic, but it requires a huge corpus of annotated datasets with class labels to pretrain. We cannot directly adapt those approaches to self-supervised point cloud learning approaches, so we find exploration strategies in reinforcement learning are relevant for unsupervised guided augmentation. \begin{table} \begin{tabular}{c|c|c|c|c} Method & \multicolumn{2}{c|}{Feature Contrast} & \multicolumn{2}{c}{Contrast Ambiguity} \\ \hline Contrastive & global contrast & local contrast & GCA & LCA \\ \hline PointContrast [54] & & ✓ & & ✓ \\ PointDisc [26] & & ✓ & & ✓ \\ DepthContrast [59] & ✓ & & ✓ & \\ STRL [23] & ✓ & & ✓ & \\ CrossPoint [2] & ✓ & & ✓ & \\ \hline **CLR-GAM (ours)** & ✓ & & & \\ \end{tabular} \end{table} Table 1: Comparison of existing works and the problems ### Exploration of High Dimensional Spaces Efficient exploration in high-dimensional space is a fundamental problem in reinforcement learning. Different strategies such as selecting new states including epsilon-greedy, selecting random states with epsilon probability [30], upper confidence bounds [4], Boltzmann exploration [48, 52] using softmax over the utility of actions and Thomson sampling [3]. The motivation or curiosity to explore new states is coined as intrinsic motivation [33], which is adapted into [16, 12, 31, 32, 34, 45] as an intrinsic reward to quantify how different the new state is from already explored states. Some existing methods [16, 22, 31, 34, 45] use error in prediction as an intrinsic reward, while others use count-based techniques [6, 32]. However, the computation of intrinsic reward using function approximation is slow to catch up and is not efficient enough for contrastive learning. In this work, we introduce a guided augmentation mechanism for efficient exploration of new states using a memory-based module motivated by [5]. Badia et al. construct an episodic memory-based intrinsic reward using k-nearest neighbors over the explored states to train the directed exploratory policies. 
## 3 Methodology ### Preliminaries and Notation We denote a point cloud as \(P_{i}\), which consists of an unordered set of points \(\mathbf{x}_{j=1:n}\) and \(\mathbf{x}_{j}\in\mathbb{R}^{3}\), where the parameter \(n\) is the number of points, and a point \(\mathbf{x}_{j}\) is in 3D coordinate space. A point cloud \(P_{i}\) can be augmented by changing scale \(\mathbf{a}_{k}^{S}\in\mathbb{R}^{3}\), translation \(\mathbf{a}_{k}^{T}\in\mathbb{R}^{3}\), rotation \(\mathbf{a}_{k}^{R}\in\mathbb{R}^{3}\), cropping \(\mathbf{a}_{k}^{C}\), and jittering \(\mathbf{a}_{k}^{J}\). The combined set of the above operations is denoted as \(\mathbf{a}_{k}\), where \(\mathbf{a}_{k}=[\mathbf{a}_{k}^{C},\mathbf{a}_{k}^{s},\mathbf{a}_{k}^{R}, \mathbf{a}_{k}^{T},\mathbf{a}_{k}^{J}]\). Given a point cloud \(P_{i}\), we apply the order defined in \(\mathbf{a}_{k}\) to obtain an augmented point cloud \(P_{i}^{k}\). In the remaining of this paper, we use \(i,j,k\) as the index of a point cloud \(P_{i}\in\mathbb{R}^{n\times 3}\) and the corresponding encoded features \(F_{i}\in\mathbb{R}^{n\times d}\), a point in point cloud \(x_{j}=P_{i}(j)\in\mathbb{R}^{1\times 3}\) and a row of the encoded features \(F_{i}(j)\in\mathbb{R}^{1\times d}\), and an augmentation operation \(\mathbf{a}_{k}\), respectively. Note that the parameter \(n\) is the number of points in a point cloud. ### Framework The detailed architecture of the CLR-GAM framework, a contrastive learning-based approach with the proposed GA and GFM modules, is depicted in Figure 2. We briefly introduce the overall contrastive learning algorithm in this section. First, a point cloud \(P_{i}\) is transformed into \(P_{i}^{1}\) and \(P_{i}^{2}\) by applying two augmentation operations \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\). We utilize a Siamese architecture with shared weights for feature extraction. In this work, we utilize PointNet (a MLP based method) [37] and DGCNN (a graph convolution-based method) [51] to extract features that are invariant to the input order. The augmented point clouds \(P_{i}^{1},P_{i}^{2}\in\mathbb{R}^{n\times 3}\) are encoded into latent space \(F_{i}^{1},F_{i}^{2}\in\mathbb{R}^{n\times d}\), respectively. The parameter \(n\) is the number of points, and \(d\) is the feature dimension. The augmented point clouds \(P_{i}^{1}\), \(P_{i}^{2}\) could contain different structures, while both point clouds originate from the same point cloud \(P_{i}\). To ensure an effective feature association between \(F_{i}^{1}\) and \(F_{i}^{2}\), we introduce the Guided Feature Mapping (GFM) module to associate the features that belong to the same structure between two augmented point clouds. The feature \(F_{i}^{1}\) is mapped to \(F_{i}^{12}\) to entail similar structural features when \(F_{i}^{2}\) is considered. The features \(F_{i}^{12}\) and \(F_{i}^{2}\) are pooled and projected into the projected latent space, resulting in \(z_{i}^{1}\) and \(z_{i}^{2}\), respectively. We perform contrastive loss to enforce that the latent representation distance between the same point clouds (positives) features is smaller than the distance between the features from different point clouds (negatives) in a minibatch. In addition, contrastive learning heavily relies on the quality of augmentation. An efficient strategy for exploring the augmentation space is indispensable. We introduce a guided augmentation search to explore various augmentations efficiently, motivated by [5]. 
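Putting the pieces of Figure 2 together, one training step can be summarized by the schematic sketch below; the module handles (`encoder`, `projector`, `guided_augment`, `guided_feature_map`, `nt_xent_loss`) and the max-pooling choice are placeholders for the components described in this section, not the released implementation.

```python
import torch

def clr_gam_step(P, guided_augment, encoder, guided_feature_map, projector, nt_xent_loss):
    """One schematic CLR-GAM training step for a batch of point clouds P of shape (B, n, 3)."""
    a1, a2 = guided_augment(), guided_augment()        # two augmentations chosen by the GA module
    P1, P2 = a1(P), a2(P)                              # two augmented views
    F1, F2 = encoder(P1), encoder(P2)                  # shared-weight (Siamese) per-point features (B, n, d)
    F12 = guided_feature_map(F1, P1, P2, a1, a2)       # GFM: re-index F1 so it matches structures in P2
    z1 = projector(F12.max(dim=1).values)              # pool over points, then project to (B, d')
    z2 = projector(F2.max(dim=1).values)
    return nt_xent_loss(z1, z2)                        # contrast positives against in-batch negatives
```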
**a) Guided Augmentation:** Augmentation is the key to the success of self-supervised contrastive learning. We hypothesize that if we can efficiently identify a wide range of informative augmentations, a discriminative representation can be learned. Existing approaches apply random sampling in augmentation spaces, which leads to ineffective augmentation and a high computational burden. Thus, we utilize a dynamic and efficient exploration strategy commonly used in reinforcement learning to mitigate the limitation. The ranges of each dimension of rotation \(\mathbf{a}^{R}\), translation \(\mathbf{a}^{T}\), and scaling \(\mathbf{a}^{S}\) are \([0,2\pi)\) radians, \([-1,1]\) meters, and \([0.5,1]\), respectively. Since the jittering and cropping operations are point specific, we ignore them in guided augmentation for simplicity. Specifically, motivated by [5], we utilize a memory bank \(M\) to save explored augmentation samples \(\mathbf{a}_{m}\), where \(m\) is the index of a slot. The goal is to ensure that the new sample is different from the explored samples. It is worth noting that it is hard to obtain this behavior when just the average \(L\)-norm distance is used to select novel augmentations. To start, we first randomly sample \(N\) augmentations \(\hat{\mathbf{a}}_{k=1:N}\) from the augmentation space \(\mathbf{a}_{k}\). We compute the distance of a new sample \(\hat{\mathbf{a}}_{k}\) from all the explored samples in the memory bank \(\mathbf{a}_{m}\). The design is used to evaluate the novelty of a sample. A novel augmentation \(\mathbf{a}_{k}^{*}\) is identified by using equation 1. \[\mathbf{a}_{k}^{*}=\arg_{\hat{\mathbf{a}}_{k}}\max\frac{1}{\sqrt{\sum_{m\in M}K(\mathbf{a}_{m},\hat{\mathbf{a}}_{k})+c}} \tag{1}\] where \(K(\mathbf{a}_{m},\mathbf{a}_{k})=\frac{\epsilon}{d(\mathbf{a}_{m},\mathbf{a}_{k})+\epsilon}\). The distance function \(d\) between two augmentations is the \(L_{2}\)-norm. The parameters \(c,\epsilon\) are small values added for numerical stability. The memory bank is updated with the selected novel augmentation \(\mathbf{a}_{k}^{*}\). The operation is applied twice on each point cloud \(P_{i}\) in an iteration to obtain two novel augmentations \(\mathbf{a}_{1},\mathbf{a}_{2}\). The two augmentations are applied to the input point cloud \(P_{i}\), as shown in Figure 2. Since rotation angles \(2\pi\) and \(0\) are the same in angular space, we utilize an angular distance measure, i.e., \(d_{R}(\mathbf{a}_{m}^{R},\mathbf{a}_{n}^{R})=\sum\left(0.5-\left||\mathbf{a}_{m}^{R}-\mathbf{a}_{n}^{R}|-0.5\right|\right)\), instead of using the \(L_{2}\) distance. To be consistent with different scales and ranges of augmentations, we normalize each augmentation to \([0,1]\) before computing the total distance \(d\) as shown in equation 2, where \(\alpha_{R}\), \(\alpha_{T}\), and \(\alpha_{S}\) are the weights for the three distances. \[\begin{split} d(\mathbf{a}_{m},\mathbf{a}_{n})=&\alpha_{R}d_{R}(\mathbf{a}_{m}^{R},\mathbf{a}_{n}^{R})+\alpha_{T}||\mathbf{a}_{m}^{T}-\mathbf{a}_{n}^{T}||_{2}\\ &+\alpha_{S}||\mathbf{a}_{m}^{S}-\mathbf{a}_{n}^{S}||_{2}\end{split} \tag{2}\]
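A minimal sketch of this selection rule, assuming the memory bank is just a Python list of (rotation, translation, scale) triples and that the candidate count, weights, and the \(\epsilon, c\) constants take illustrative values rather than the paper's:

```python
import numpy as np

def select_novel_augmentation(memory, num_candidates=64, eps=1e-3, c=1e-3,
                              alphas=(1.0, 1.0, 1.0)):
    """Pick the candidate augmentation whose inverse-kernel novelty (Eq. 1) is largest."""
    # Candidate ranges follow the text: rotation [0, 2*pi), translation [-1, 1] m, scale [0.5, 1].
    rot = np.random.uniform(0.0, 2 * np.pi, (num_candidates, 3))
    trans = np.random.uniform(-1.0, 1.0, (num_candidates, 3))
    scale = np.random.uniform(0.5, 1.0, (num_candidates, 3))
    candidates = list(zip(rot, trans, scale))

    def dist(a, b):
        # Normalize each component to [0, 1] before measuring distance (Eq. 2).
        dr = np.sum(0.5 - np.abs(np.abs(a[0] - b[0]) / (2 * np.pi) - 0.5))  # wrapped angular distance
        dt = np.linalg.norm((a[1] - b[1]) / 2.0)
        ds = np.linalg.norm((a[2] - b[2]) / 0.5)
        return alphas[0] * dr + alphas[1] * dt + alphas[2] * ds

    def novelty(cand):
        k_sum = sum(eps / (dist(cand, m) + eps) for m in memory)  # kernel K summed over the bank
        return 1.0 / np.sqrt(k_sum + c)                           # large when far from explored samples

    best = max(candidates, key=novelty)
    memory.append(best)        # the memory bank is updated with the chosen augmentation
    return best

# Two calls per point cloud give the two augmentations a_1 and a_2 used for the two views.
memory_bank = []
a1, a2 = select_novel_augmentation(memory_bank), select_novel_augmentation(memory_bank)
```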
**b) Guided Feature Mapping:** To learn discriminative point cloud representations, it is crucial to project features with similar structural characteristics for contrastive learning. Existing methods may fail to identify the structural similarity between the two augmented point clouds because certain augmentations (e.g., cropping, scaling) could lead to heavy deformations, leaving an augmented point cloud with a completely different shape from the original class and similar to a different class. Based on our observation, when both augmentations \(\mathbf{a}_{1},\mathbf{a}_{2}\) contain crop operations, there is very limited structural similarity between the augmented point clouds. We therefore exclude the crop augmentation \(\mathbf{a}_{1}^{C}\) from the augmentation \(\mathbf{a}_{1}\), while \(\mathbf{a}_{2}\) uses all the augmentations, i.e., rotation, translation, scaling, cropping, and jittering. Note that \(\mathbf{a}_{k}^{R},\mathbf{a}_{k}^{T},\mathbf{a}_{k}^{S}\) are invertible operations as they are applied on the whole point cloud. The operation \(\mathbf{a}_{k}^{J}\) is a point-specific operation and invertible. On the other hand, the cropping operation \(\mathbf{a}_{k}^{C}\) is not invertible as the information is lost. An invertible augmentation operation can be written as \(P_{i}=(\mathbf{a}_{1})^{-1}\otimes P_{i}^{1}\), where \(P_{i}^{1}\) is an augmented point cloud, \(P_{i}\) is the original point cloud, and \(\otimes\) denotes an augmentation operator. The equation holds because the augmentation \(\mathbf{a}_{1}\) does not contain a cropping operation. In contrast, inverting the augmentation of \(P_{i}^{2}\) results in \(P_{i}^{C}=(\mathbf{a}_{2})^{-1}\otimes P_{i}^{2}\), a cropped point cloud. The crop operation is ignored in the inverse operation with \(\mathbf{a}_{2}\), as it is not invertible. The order of points and their structures cannot be directly associated between these two augmented point clouds, even when they contain the same number of points. The closest point association mapping \(S_{12}\) between points of the inverted point clouds of \(P_{i}^{1}\) and \(P_{i}^{2}\) is calculated based on equation 3. The structural index mapping \(S_{12}\) retains only the indices of the closest points of \(P_{i}^{1}\) to \(P_{i}^{2}\), for every point in \(P_{i}^{2}\) with index \(j\). \[S_{12}(j)=\arg\min_{q}||P_{i}^{C}(j)-P_{i}(q)||_{2} \tag{3}\] The operators \(P_{i}(\cdot)\) and \(F_{i}(\cdot)\) denote indexing operations into the point cloud and feature set, respectively. The guided mapped feature \(F_{i}^{12}\) is obtained according to \(F_{i}^{12}=F_{i}^{1}(S_{12})\). The feature \(F_{i}^{12}\) is projected to \(z_{i}^{1}\) using the feature projection module after pooling. The feature projection module is an MLP that reduces the dimensionality of the features. Similarly, \(F_{i}^{2}\) is projected to \(z_{i}^{2}\). The contrastive loss [9] is utilized to compute the similarity between positives (\(z_{i}^{1},z_{i}^{2}\)) and negatives from the minibatch. We do not store negatives over multiple iterations in a memory bank, which is commonly done to improve performance [21], in order to remain comparable with other techniques [2]. The loss can be found in equation 4. The similarity measure is the cosine similarity between two features, \(\text{sim}(z_{1},z_{2})=(z_{1}^{T}z_{2})/(||z_{1}||\,||z_{2}||)\). Given a minibatch, the final contrastive loss is \(L_{c}=\frac{1}{2B}\sum_{b=1}^{B}(L_{1,2}^{b}+L_{2,1}^{b})\), where \(\tau\) is the temperature (set to 0.5) and \(b\) is the index of the feature in the minibatch of total size \(B\). \[L_{1,2}^{i}=-\log\frac{\exp(\text{sim}(z_{1}^{i},z_{2}^{i})/\tau)}{\sum_{b=1,b\neq i}^{B}\exp(\text{sim}(z_{1}^{i},z_{1}^{b})/\tau)+\sum_{b=1}^{B}\exp(\text{sim}(z_{1}^{i},z_{2}^{b})/\tau)} \tag{4}\]
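A compact sketch of the mapping in equation (3) and the loss in equation (4), written with PyTorch-style tensors; the function and variable names are ours and the batching details are simplified relative to the actual implementation:

```python
import torch
import torch.nn.functional as F

def guided_feature_mapping(P1_inv, P2_inv, F1):
    """Eq. (3): for every point j of the (inverse-augmented) second view, find the index
    of the closest point in the first view and re-index F1 accordingly.
    P1_inv: (n, 3) first view with a1 undone; P2_inv: (m, 3) second view with a2 undone
    (crop is not invertible, so m <= n); F1: (n, d) per-point features of the first view."""
    d2 = torch.cdist(P2_inv, P1_inv)        # (m, n) pairwise distances
    S12 = d2.argmin(dim=1)                  # closest-point index map
    return F1[S12]                          # (m, d) features aligned with the second view

def nt_xent(z1, z2, tau=0.5):
    """Eq. (4), symmetrized over the batch; z1, z2: (B, d') projected features."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    B = z1.shape[0]
    sim_12 = z1 @ z2.t() / tau              # cross-view similarities
    sim_11 = z1 @ z1.t() / tau              # within-view similarities
    mask = torch.eye(B, dtype=torch.bool, device=z1.device)
    pos = sim_12.diag()
    neg = torch.cat([sim_12, sim_11.masked_fill(mask, float('-inf'))], dim=1)
    loss_12 = -(pos - torch.logsumexp(neg, dim=1))
    return loss_12.mean()                   # the full loss averages L_{1,2} and L_{2,1}
```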
## 4 Experiments

In this section, we first quantitatively evaluate the self-supervised trained approach on different downstream tasks and different object datasets (synthetic and real world). Second, we qualitatively visualize the features on an unseen object dataset. Finally, we perform ablation studies of a) our novel modules and augmentations, b) t-SNE feature visualization on an unseen dataset, and c) qualitative feature visualization on an unseen driving scenario.

### Quantitative Results

**a) 3D Object Classification:** For this task, we utilize the ModelNet-40 (synthetic) and ScanObjectNN (real-world) datasets. The ModelNet-40 dataset consists of a wide range of 3D objects' CAD models. The dataset contains 12,311 objects that are categorized into 40 classes. We use 9,843 for training and 2,468 for testing. The ScanObjectNN dataset is challenging because data is collected in cluttered environments, so objects could be partially observable due to occlusions. It consists of 15 classes totaling 2,880 objects (2,304 for training and 576 for testing). We follow the same evaluation strategy as in the existing works [2, 23, 50]. Specifically, we freeze the point cloud feature extractor pretrained on the ShapeNet dataset. We randomly sample 1024 points from each object for testing classification accuracy on ModelNet-40 and ScanObjectNN. We fit a linear SVM [10] on the extracted features. The results on the testing sets of ModelNet-40 and ScanObjectNN can be found in Table 2 and Table 3, respectively. Additionally, we conduct experiments using two different backbones, i.e., PNet [37] and DGCNN [51], on the two datasets. We demonstrate state-of-the-art performance on the ModelNet-40 dataset using both backbone architectures compared to point cloud pretrained approaches in the bottom sub-table, as shown in Table 2. With the DGCNN backbone, the proposed approach performs better than CrossPoint and DepthContrast. It is worth noting that both methods utilize an extra image modality for pretraining, while the proposed contrastive self-supervised learning framework only uses point clouds. Compared to the previous SOTA on a single modality (OcCo), the accuracy is improved by 2.35% (with DGCNN). The experiments conducted on ScanObjectNN further justify the effectiveness of the proposed framework, as shown in Table 3. State-of-the-art performance is achieved compared to both point cloud and multimodal pretrained approaches using both backbone architectures. Notably, compared to the previous SOTA on a single modality (OcCo), the accuracy is improved by 4.8% (with DGCNN). In addition to satisfactory results, we empirically demonstrate that the proposed approach has better generalization capability in a real-world setting under severe occlusions than other methods.

**b) Few Shot Object Classification:** Few Shot Learning (FSL) is a learning paradigm that aims to train a model that generalizes with limited data. In this experiment, we conduct N-way K-shot learning, which means that a model is trained on N classes with K samples in each class. The test/query set for each of the N classes consists of 20 unseen samples in all these experiments. We use ModelNet-40 and ScanObjectNN for these experiments. The same pretrained model is used for both classification and FSL tasks with respective backbones.
\begin{table} \begin{tabular}{l l c} Modality & Method & ModelNet-40 \\ \hline \hline point cloud & 3D-GAN [53] & 83.3 \\ & Latent-GAN [1] & 85.7 \\ & SO-Net [24] & 87.3 \\ & FoldingNet [56] & 88.4 \\ & MRTNet [14] & 86.4 \\ & 3D-PCapsNet [60] & 88.9 \\ & ClusterNet [58] & 86.8 \\ & VIP-GAN [17] & 90.2 \\ + Image Modality & DepthContrast [59] & 85.4 \\ \hline & \multicolumn{1}{c}{PNet} & DGCNN \\ \hline point cloud & Multi-Task [19] & - & 89.1 \\ & PointDisc [26] & 86.2 & 89.3 \\ & Self-contrast [12] & - & 89.6 \\ & PointContrast [54] & 86.7 & 89.9 \\ & Jigsaw [42] & 87.3 & 90.6 \\ & STRL [23] & 88.3 & 90.9 \\ & Rotation [36] & 88.6 & 90.8 \\ & OcCo [50] & 88.7 & 89.2 \\ & **CLR-GAM (ours)** & **88.9** & **91.3** \\ \hline + Image Modality & CrossPoint [2] & 89.1 & 91.2 \\ \end{tabular} \end{table} Table 2: We pretrained using the proposed contrastive self-supervised learning framework on ShapeNet. We evaluate on the test split of ModelNet-40 by fitting a linear SVM classifier. The reported results are the overall accuracy. The upper sub-table uses custom backbone and training strategies.

\begin{table} \begin{tabular}{c|c|c} Method & PNet & DGCNN \\ \hline Jigsaw [42] & 55.2 & 59.5 \\ PointDisc [26] & 68.3 & 78.0 \\ OcCo [50] & 69.5 & 78.3 \\ PointContrast [54] & 70.4 & 78.6 \\ STRL [23] & 74.2 & 77.9 \\ **CLR-GAM (ours)** & **75.7** & **82.1** \\ \hline CrossPoint [2] & 75.6 & 81.7 \\ \end{tabular} \end{table} Table 3: 3D object classification on ScanObjectNN. We pretrained using the proposed contrastive self-supervised learning framework on ShapeNet. We evaluate on the test split of ScanObjectNN by fitting a linear SVM classifier. The reported results are the overall accuracy on the test split.

Similar to the classification task, we fit a linear SVM classifier for testing the FSL task. A similar protocol is used in earlier works [2, 43]. We report the results in Tables 4 and 5. As there is no standard benchmark test set, we follow the setting used in [2, 43, 50]. Specifically, we report the mean and standard deviation over 10 runs. As shown in Table 4, we observe that CLR-GAM with DGCNN achieves SOTA compared to all other approaches in the challenging 5-way setting. In the 10-way setting, CLR-GAM performs on par with CrossPoint (multimodal pretrained) and OcCo (single-modal pretrained). The results show the same trend as in Table 2. The few-shot object classification results on ScanObjectNN are reported in Table 5. CLR-GAM with DGCNN and PointNet achieves SOTA performance compared to both point cloud and multimodal pretrained approaches. Specifically, on ScanObjectNN we show a large-margin improvement (more than 11%) using DGCNN on all settings, and a more than 8% improvement with PNet (5-way 20-shot, 10-way 10-shot, 10-way 20-shot). There is a 24% improvement with both DGCNN and PNet backbones in the 10-way 20-shot setting. The results further testify that CLR-GAM learns discriminative 3D point cloud representations, and that the representations can generalize to challenging real-world settings.

**c) Object Part Segmentation:** We evaluate object part segmentation using 2048 points sampled from each point cloud. We observe that the performance of CLR-GAM is better than the other point cloud contrastive learning-based approaches and on par with CrossPoint (multimodal pretrained). The reported results in Table 6 are the average intersection over union (IoU) computed for each part.

### Qualitative Results

We visualize feature representations (learned from the proposed CLR-GAM) of each point/node in an unseen object's point cloud selected from the test sets of ShapeNet and ModelNet-40 in Figure 3.
We compute the cosine distance between the feature of a randomly selected point (colored in red) and the features of the other points in the same point cloud. The color scale is Yellow-Green-Blue: the closest feature in the feature space is yellow, and the farthest is blue. Our approach learns similar representations for the whole planar region for simple planar structures such as the stool (a) and table (b). Moreover, in the case of a chair (f), a complicated planar structure, the proposed model can learn similar features for the back part of the seat. For the monitor (k), the plane is assigned closer/similar features, whereas the features at the corners (structural irregularities) are dissimilar to the center. A similar observation can be found in the case of a knife (e), i.e., the handle and sharp edge have different features. For a curved object like a bathtub (g), the whole tub has similar features except for the legs. Similarly, for the cone (h), the whole curved region has similar features except for the edges. In the case of the lamp (i), the curved stand has similar features, separate from the stem. For irregular-shaped objects, e.g., the flowerpot (c), all leaves have similar features, and different features are learned for the pot and stem. For the airplane (d), all turbines have similar features since they are relatively small and curved, and the other sharply curved front and back regions of the airplane have similar features.

### Ablation Study

**a) Augmentations and Novel Modules:** We conduct an ablation study on the ModelNet-40 dataset to understand the contribution of GFM, GA, and the augmentations. The results are shown in Table 7. Contrastive learning without cropping achieves around 84.8% overall accuracy. With cropping, a large improvement of 4.9% is observed. The result is similar to the performance of CrossPoint [2] without multimodal training (i.e., only Intra-Modal Instance Discrimination, IMID). We treat this model as the vanilla baseline, i.e., the second row in Table 7. With GFM, we observe a performance improvement of 1.1% compared to the vanilla baseline. A 0.77% improvement is observed when GA is added. When both novel modules are introduced, we observe a 1.78% improvement compared to the vanilla baseline. The ablative studies demonstrate the effectiveness of the proposed GA and GFM.

**b) Feature Visualization:** We depict the features generated by our CLR-GAM approach on unseen samples of the ModelNet-10 test dataset using the DGCNN backbone in Figure 4. To generate t-SNE plots, we use a perplexity of 30. In the vanilla contrastive learning approach, except for the monitor class, all the other classes have a wider spread, bringing the classes closer together. With the proposed GFM, we observe an improvement in the nightstand and toilet classes, but with a similar overlap of the bed and bathtub classes as the vanilla approach. With GA added (our full proposed approach, CLR-GAM), we observe further improvement in the separation of the toilet class from the nightstand, and more concentrated class clusters. In all cases, the dresser and nightstand were the most confused because of their shape similarity.
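The per-point visualizations in Figures 3 and 5 reduce to computing feature-space similarities against one reference point; a minimal sketch of that computation follows, where the encoder handle and tensor shapes are assumptions rather than the released code:

```python
import torch
import torch.nn.functional as F

def point_feature_similarity(points, encoder, ref_idx=0):
    """Colour each point by the cosine similarity between its feature and a reference
    point's feature. `points` is an (n, 3) tensor, `encoder` a frozen pretrained
    per-point feature extractor returning (1, n, d), and `ref_idx` the reference point."""
    with torch.no_grad():
        feats = encoder(points.unsqueeze(0)).squeeze(0)   # (n, d) per-point features
    feats = F.normalize(feats, dim=1)
    sims = feats @ feats[ref_idx]                         # (n,) cosine similarities in [-1, 1]
    return sims  # map onto a Yellow-Green-Blue colour scale when plotting
```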
**c) Generalization to driving scene:** To understand the generalization of the proposed unsupervised approach to real-world applications and datasets, we visualize the features of a driving-scene point cloud from the KITTI dataset [15], as shown in Figure 5. The full scene, covering 80 meters in all directions from the ego-vehicle (a 160m x 160m scene), is shown in (a) as a top-down image. In subfigure-a, gray is used for the ground and red for the non-ground points or obstacles. The separation is done using a -1.5 meter threshold on the height axis of the point cloud data from the Velodyne sensor. The blue box is the region of interest (ROI), a 20m x 20m region that is zoomed in subfigure-b. This region is subsampled to around 4000 points using voxel-based sampling with a 0.3-meter voxel length in all three axes. 1024 points are randomly selected and passed to the feature encoder. The features are visualized in subfigure-c. The color scale is the same as in Figure 3, Yellow-Green-Blue: the closest feature in the feature space is yellow, and the farthest one is blue, with respect to a randomly selected point (red circle). The two vehicles, highlighted in pink boxes, have features different from the ground.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \multicolumn{5}{c|}{augmentations} & \multicolumn{2}{c|}{novel modules} & \multicolumn{1}{c}{dataset} \\ \hline jitter & translation & rotation & scaling & crops & GFM & GA & ModelNet-40 \\ \hline \hline ✓ & ✓ & ✓ & ✓ & & & & 84.8 \\ ✓ & ✓ & ✓ & ✓ & ✓ & & & 89.7 \\ ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & & 90.7 \\ ✓ & ✓ & ✓ & ✓ & ✓ & & ✓ & 90.4 \\ ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & **91.3** \\ \end{tabular} \end{table} Table 7: Ablation study of CLR-GAM: trained on ShapeNet using the self-supervised method and evaluated on ModelNet-40 using a linear SVM. Reported results are overall accuracy.

Figure 3: Feature visualization of unseen objects selected from the test sets of ShapeNet and ModelNet-40. For more qualitative results please check the supplementary material.

## 5 Discussion

Existing local feature contrasting techniques, whether using inter-point-cloud features (PointContrast) [54] or intra-point-cloud features (PointDisc) [26] as the objective, suffer from LCA. It is also observed from our qualitative results shown in Figure 3 that similar parts/shapes or symmetries in an object can have similar features. Compared to these local contrast approaches, our novel approach performs better in downstream tasks using a linear SVM on the learned feature representation. Our proposed approach avoids LCA by avoiding the local feature contrast objective. But global contrast introduces GCA, as mentioned in Section 1. With our novel GFM and global contrast, we address GCA and, in our quantitative evaluation, also perform better than other global contrast techniques such as DepthContrast, pretext-based approaches, and the multimodal-trained CrossPoint. Our proposed self-supervised approach not only generalizes to different object datasets but also to driving scenes, as shown in Figure 5. Please check the supplementary for further discussions.

## 6 Conclusion

In this paper, we present a contrastive learning framework (CLR-GAM) with guided augmentation (GA) to search augmentation parameters efficiently and guided feature mapping (GFM) to associate structural features precisely. The former is realized by adapting the inverse Dirac delta function with a memory bank, and the latter is fulfilled by the global contrasting of associated structural features between two augmented point clouds.
Both these processes help boost contrastive learning on point cloud data. We benchmark on three different downstream tasks and show that our method achieves state-of-the-art performance compared to other methods trained on single-modality point cloud data. It also performs similarly to, or better than, CrossPoint, a recent multimodal-trained approach.

Figure 4: t-SNE plots: visualization of features from three different approaches, generated from unseen samples of the ModelNet-10 test dataset.

Figure 5: Feature visualization of an unseen **driving scene** selected from the KITTI dataset.
2309.16663
HyperPPO: A scalable method for finding small policies for robotic control
Models with fewer parameters are necessary for the neural control of memory-limited, performant robots. Finding these smaller neural network architectures can be time-consuming. We propose HyperPPO, an on-policy reinforcement learning algorithm that utilizes graph hypernetworks to estimate the weights of multiple neural architectures simultaneously. Our method estimates weights for networks that are much smaller than those in common-use networks yet encode highly performant policies. We obtain multiple trained policies at the same time while maintaining sample efficiency and provide the user the choice of picking a network architecture that satisfies their computational constraints. We show that our method scales well - more training resources produce faster convergence to higher-performing architectures. We demonstrate that the neural policies estimated by HyperPPO are capable of decentralized control of a Crazyflie2.1 quadrotor. Website: https://sites.google.com/usc.edu/hyperppo
Shashank Hegde, Zhehui Huang, Gaurav S. Sukhatme
2023-09-28T17:58:26Z
http://arxiv.org/abs/2309.16663v1
# HyperPPO: A scalable method for finding small policies for robotic control ###### Abstract Models with fewer parameters are necessary for the neural control of memory-limited, performant robots. Finding these smaller neural network architectures can be time-consuming. We propose HyperPPO, an on-policy reinforcement learning algorithm that utilizes graph hypernetworks to estimate the weights of multiple neural architectures simultaneously. Our method estimates weights for networks that are much smaller than those in common-use networks yet encode highly performant policies. We obtain multiple trained policies at the same time while maintaining sample efficiency and provide the user the choice of picking a network architecture that satisfies their computational constraints. We show that our method scales well - more training resources produce faster convergence to higher-performing architectures. We demonstrate that the neural policies estimated by HyperPPO are capable of decentralized control of a Crazyflie2.1 quadrotor. Website: [https://sites.google.com/usc.edu/hyperppo](https://sites.google.com/usc.edu/hyperppo) ## I Introduction A common practice in robot learning (particularly deep reinforcement learning) is to fix a network size and architecture and train it to approximate the near-optimum policy for a given task. For locomotion tasks with only proprioceptive sensing, networks of \(\sim 256\) neurons and \(\sim 3\) layers are commonly employed [1], while for exteroceptive sensing, the configuration of the network varies with the data modality [2]. For tasks that require the neural network controller to be deployed onto a real robot, especially one with memory and computational constraints such as the Crazyflie2.1, with which we experiment here (192Kb of onboard RAM) [3], the choice of network size and architecture is of paramount importance. There has been significant recent progress in neural architecture search (NAS) [4]. However, this has not focused on applications to neural robotic control. The problem of finding small yet performant neural networks for robot control is further exacerbated by the fact that performance and size of neural networks are not directly correlated [5]. Here, we build on the approach in [5] and present a method (Figure 1) that trains thousands of architecturally unique neural control policies simultaneously. We give the user the ability to choose an architecture that fits within their computation constraints and meets their performance requirements. We note that post-training, the weights for any chosen architecture can be estimated in one forward pass of our trained model. **Contributions:** The method proposed in [5] is off-policy. Such methods tend to be sample-efficient yet time-inefficient in training (when one measures wall-clock training time). Here we present an on-policy method (HyperPPO) that simultaneously produces thousands of policies, each with a unique architecture. HyperPPO has sample efficiency similar to one training run of regular proximal policy optimization (PPO) and results in unique performant policies for each architecture. We propose two versions of HyperPPO: with vectorized standard deviations (HyperPPO-VSD), suitable for the setting when training data are abundant and a fast simulator is available, and with common standard deviation (HyperPPO-CSD), suitable in the setting when gathering data is harder. We analyze and ablate the trade-offs of each version. 
We benchmark HyperPPO-VSD on GPU-accelerated environments and HyperPPO-CSD on the quadrotor simulator QuadSwarm [6]. We show that small networks estimated by HyperPPO-VSD are capable of outperforming the same networks obtained by training with regular PPO. We also show that the weights estimated by HyperPPO-CSD for a tiny neural network (just one hidden layer with 4 neurons) can be successfully deployed on a Crazyflie2.1 for autonomous flight control.

Fig. 1: For a given task and a large architecture search space, HyperPPO learns to estimate weights for multiple architectures simultaneously. The user can choose an architecture based on their performance requirements and computational constraints from the set of learned policies.

## II Related Work

### _Proximal Policy Optimization (PPO)_

PPO is a widely adopted on-policy learning algorithm [7]. As opposed to off-policy learning algorithms, PPO provides separate loops for sample collection and training. This separation allows for massive parallelization, which provides trained policies more quickly. Further, PPO has been shown to have better stability. The governing equations of PPO are as follows. \[r_{t}(\theta) =\frac{\pi_{\theta}\left(a_{t}\mid s_{t}\right)}{\pi_{\theta_{k}}\left(a_{t}\mid s_{t}\right)}\] \[\hat{A}_{t}^{\pi_{\theta}} =\delta_{t}+(\gamma\lambda)\delta_{t+1}+\ldots+(\gamma\lambda)^{T-t-1}\delta_{T-1}\] \[\text{where }\delta_{t} =r_{t}+\gamma V^{\pi_{\theta}}\left(s_{t+1}\right)-V^{\pi_{\theta}}\left(s_{t}\right)\] \[\mathcal{L}_{\theta_{k}}(\theta) =\underset{\tau\sim\pi_{\theta}}{\mathrm{E}}\left[\min\left(r_{t}(\theta)\hat{A}_{t}^{\pi_{\theta_{k}}},\mathrm{clip}\left(r_{t}(\theta),1\pm\epsilon\right)\hat{A}_{t}^{\pi_{\theta_{k}}}\right)\right]\] \(r_{t}(\theta)\) is the importance sampling ratio between the current policy and the \(k\)'th version of the policy, which was used to collect the data. \(V^{\pi_{\theta}}\left(s_{t}\right)\) is the value function estimated by the critic for the policy \(\pi_{\theta}\) at the state \(s_{t}\). The generalized advantage estimate is given by \(\hat{A}_{t}^{\pi_{\theta}}\). Finally, \(\mathcal{L}_{\theta_{k}}(\theta)\) is the clipped loss objective. Off-policy methods tend to be slower than on-policy methods, as the latter can be optimized easily. Further, on-policy methods have fewer hyperparameters and can have higher convergence stability if we have sufficient environment instances [8]. Optimizations needed to improve the performance of PPO are documented in [9]. A benefit of using PPO is the ability to scale with more computational resources. Highly parallelized environments [10] and GPU-based physics engines [11, 12] have been shown to work well with PPO [13, 10]. For exploration, PPO generally samples its actions from a stochastic policy. The mean is obtained as the output of a parameterized state-conditioned network. The standard deviation is obtained either with another state-conditioned network or is simply characterized as a (non-state-conditioned) array whose values are directly modified during training. Here, we will consider the latter version.

### _Neural Architecture Search_

Neural architecture search [4] is the process of searching for an optimal neural architecture for a given task. While reinforcement learning has been used for NAS [14], the use of NAS for reinforcement learning-based policies is still an under-explored area. NAS has tremendous opportunities in robotic control as on-board compute size poses an architecture search constraint.
Differentiable Architecture Search (DARTS) [15] is a machine learning technique used to automate the process of finding optimal neural network architectures for tasks by introducing a continuous relaxation of the discrete architecture space, allowing gradient-based optimization methods to be used. In [16] DARTS was used for reinforcement learning policies. In [17] a differentiable approach was used for architecture search for robotic learning - the first to deploy a NAS-based neural controller on a robot. Efficient Neural Architecture Search (ENAS) [14] optimizes the architecture search process by sharing parameters across child models, reducing the computational overhead of evaluating multiple architectures. [18] and [19] utilize ENAS to find the best-performing architecture for RL tasks. Another family of methods in NAS is one-Shot Model Architecture Search through Hypernetworks (SMASH) [20]. A primary network (hypernetwork) is trained to estimate the optimal weights for a variable architecture secondary network. Once this hypernetwork is trained, the optimal weights for all architectures in a search space can be estimated, and the one with the best objective can be chosen. The idea of Graph Hypernetworks (GHN) was introduced in [21]. The computational graph of an architecture is provided as input, and common message-passing techniques akin to those found in GNNs are used to generate the weights of that architecture as its output. GHN benchmarking against other DARTS and ENAS methods shows that it only uses a fraction of the search cost associated with other NAS methods. Following this [22] introduces GHN2, which employs a gated graph network for better generalization of the hypernetwork. [5] introduced Graph Hyper Policies (GHP) that utilized a GHN to estimate the weights of robotic policies for manipulation and locomotion. This was done using off-policy reinforcement learning, specifically, Soft Actor critic [23] for locomotion and Hindsight Experience Replay[24] with Deep deterministic policy gradients [25] for manipulation. For a given architecture graph representation of a network \(g\), this network, \(h_{\theta}\), can estimate the policy \(\pi_{\phi}=h_{\theta}(g)\), where the estimated weights are \(\phi\). It was also shown in [5] that directly estimated weights of smaller policies were more performant than policies of same same architecture obtained by behavior cloning based distillation methods. Since these methods are off-policy, they are extremely sample efficient and can learn to estimate weights for multiple policies with the same number of samples as it would be to learn for a single architecture. A drawback for this method though is that it is not time efficient. As noted in the paper, this method had a \(\sim\) 5x training time increase. This can amount to a large amount of time considering that off-policy methods are already time inefficient as compared to on-policy methods. Further, this method does not scale well with more compute resources as data collection is not a bottleneck for Q learning. From a constraint architecture search point of view, searching for architectures for robotic control, hypernetwork-based methods are an alluring option as having multiple options during deployment would reduce experimentation time drastically. ### _Deep Reinforcement Learning for Quadrotor Control_ There is significant recent work in the control of quadrotors with direct rotor thrusts by using deep reinforcement learning (DRL). 
[26] investigates stabilizing a quadrotor under harsh initialization, using a neural network policy with two hidden layers of 64 neurons each. [27] trains control policies with minimal prior knowledge about a quadrotor's dynamics parameters and can transfer a single control policy to multiple quadrotor platforms, using two hidden layers of 64 neurons each. [28] uses model-based DRL for the hover control of a quadrotor (up to 6 seconds of hover with 3 minutes of training data, using 2 hidden layers of 250 neurons each). [29] proposes control policies that can achieve 60 km/h on a physical quadrotor using 2 hidden layers of 128 neurons each. [30] uses DRL to design decentralized control policies that can fly quadrotor swarms in various scenarios with significant collision avoidance ability in the real world, using two encoders, both consisting of 2 hidden layers, with only 16 and 8 neurons, respectively. For agile tasks, it is desirable for neural network inference to have lower latency than sensing. This can become an issue when the sensing modality is complex (such as vision) or goal conditioning needs a larger encoder (such as language). For agile flight control of a quadrotor, [31] utilizes a RealSense D435i camera for depth sensing, which runs at 30 Hz, while their network inference on an onboard NVIDIA Jetson TX2 runs at 25 Hz.

## III Method

### _Multi Architecture Proximal Policy Optimization_

The method proposed in [5] is off-policy. Such methods tend to be sample efficient, yet time-inefficient in training. To find an on-policy version of [5], as a first cut, we ran PPO on the halfcheetah environment [32] with the policy replaced by a graph hyper policy estimating policies for randomly sampled architectures. This setup is similar to [5] but with PPO instead of Soft Actor-Critic [23]. As the model trained, we evaluated it on a fixed set of architectures. We observed that for all architectures, the policies estimated by the graph hyper policy reach the same reward and collapse to a single policy. This is because PPO, being an on-policy algorithm, cannot effectively use data obtained from one architecture to estimate weights for a different architecture. This becomes evident on inspecting the PPO equations from Section II-A. Let us denote the entire search space of architectures by U. Let the sampled architectures from this space be \(g\sim\text{U}\). In order to use the PPO algorithm for multi-architecture training, we need to substitute \(\pi_{\theta}\gets h_{\theta}(g)\) in these equations, where \(h_{\theta}\) is a graph hypernetwork parameterized by \(\theta\), which estimates the weights for architecture \(g\). Doing so results in the following equations: \[r_{t}(\theta,g) =\frac{h_{\theta}\left(a_{t}\mid s_{t},g\right)}{h_{\theta_{k}}\left(a_{t}\mid s_{t},g\right)}\] \[\hat{A}_{t}^{h_{\theta}(g)} =\delta_{t}+(\gamma\lambda)\delta_{t+1}+\ldots+(\gamma\lambda)^{T-t-1}\delta_{T-1}\] \[\text{where }\delta_{t} =r_{t}+\gamma V^{h_{\theta}}\left(s_{t+1},g\right)-V^{h_{\theta}}\left(s_{t},g\right)\] \[\mathcal{L}_{\theta_{k}}(\theta)=\underset{\begin{subarray}{c}g\sim\text{U}\\ \tau\sim h_{\theta}(g)\end{subarray}}{\mathrm{E}}\left[\min\left(r_{t}(\theta,g)\hat{A}_{t}^{h_{\theta}(g)},\;\mathrm{clip}\left(r_{t}(\theta,g),1\pm\epsilon\right)\hat{A}_{t}^{h_{\theta}(g)}\right)\right]\] We see that the importance sampling ratio, the advantage estimate, and the value function are all now conditioned on the current policy's architecture. Since the architecture remains \(g\) while estimating all the above values, no mixing of data between architectures occurs.
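A minimal PyTorch-style sketch of the resulting update for a single sampled architecture \(g\); the function names, batch layout, and value-loss weighting are our own illustrative assumptions rather than the Sample Factory based implementation:

```python
import torch

def hyperppo_clipped_loss(hypernet, critic, batch, eps=0.2, value_coef=0.5):
    """Architecture-conditioned clipped surrogate for one sampled architecture g.
    The data in `batch` was collected with the previous parameters for the same g,
    so no mixing of data between architectures occurs."""
    g, obs, act = batch["graph"], batch["obs"], batch["act"]
    old_logp, adv, returns = batch["old_logp"], batch["adv"], batch["returns"]

    policy = hypernet(g)                          # weights estimated by the graph hypernetwork for g
    logp = policy.log_prob(obs, act)              # log h_theta(a | s, g)
    ratio = torch.exp(logp - old_logp)            # r_t(theta, g)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    policy_loss = -torch.min(ratio * adv, clipped * adv).mean()

    # The critic is conditioned on both the state and the architecture graph.
    value_loss = (critic(obs, g) - returns).pow(2).mean()
    return policy_loss + value_coef * value_loss
```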
### _Intuition_

Another way of visualizing the above formulation is by restructuring the underlying Markov Decision Process. We concatenate the randomly sampled architecture graph into the state variable. As shown in Fig. 2, this allows us to reformulate the policy such that actions are sampled from the policy estimated by the hypernetwork for that given combination of graph and state variables. The concatenation of the state and architecture can be seen while estimating the GAE \(\hat{A}_{t}^{h_{\theta}(g)}\), specifically while estimating the state value function \(V^{h_{\theta}}\left(s_{t},g\right)\). Practically, we condition the critic network of PPO on the state and architecture and make sure we use the same architecture's data for the Bellman update.

Fig. 2: Architecture and state concatenated Markov Decision Process. By augmenting the architecture into the MDP state space, we can train RL policy agents with varying architectures.

### _Algorithm_

Based on these changes we propose HyperPPO. As shown in Algorithm 1, for a given task we start with a predefined architecture space U. For every iteration of the algorithm, we sample an architecture \(g_{i}\) from the search space. For this work, we restrict the search to the architecture space of Multi-Layer Perceptrons (MLPs). Our architecture search space U consists of all possible MLPs with four or fewer layers that can be constructed with the number of neurons in each layer being {4, 8, 16, 32, 64, 128, 256}. This gives us 2800 unique architectures. We use the same graph hyper policy model as in [5] and estimate the policy \(\pi_{\phi_{i}}\) for that architecture. We then collect data \(\{\mathcal{D}_{k}\}_{i}\) using this policy. Using this data we estimate the GAE \(\hat{A}_{t}^{h_{\theta_{k}}(g_{i})}\) and the ratio function \(r_{t}(\theta,g_{i})\). This process can be parallelized over a meta-batch of architectures for faster computation. Using these estimates, we then use SGD to optimize the objective \(\mathcal{L}_{\theta_{k}}\) over the hypernetwork weights \(\theta\). Just like regular PPO for continuous action spaces, actions are sampled from a Gaussian distribution. The mean of the distribution is obtained using the policies estimated by the graph hypernetwork. For standard deviations, we propose two approaches, which lead to two versions of our method. HyperPPO-VSD (Vectorized Standard Deviations) constructs a vector of standard deviation arrays, one for each architecture. This enables independent exploration for all architectures. HyperPPO-CSD (Common Standard Deviation) uses a common standard deviation array for all architectures. This reduces computation and converges faster. For our method, we utilize vectorized environments. These environments enable parallelization and allow us to sample data for different architectures simultaneously. The larger the number of environments we can run in parallel, the better our estimates of the objective function should be. ``` 1:input: Initial Hypernetwork parameters \(\theta_{0}\). 2:input: Clipping threshold \(\epsilon\). 3:input: Architecture Search space \(\mathrm{U}\), Meta-batch size \(M\).
4:for\(k=1,2,\ldots\)do 5:for\(i=1,2,\ldots M\)do 6: Sample architecture \(g_{i}\sim\mathrm{U}\) 7: Estimate Policies \(\pi_{\phi_{i}}\gets h_{\theta_{k}}(g_{i})\) 8: Collect trajectories \(\{\mathcal{D}_{k}\}_{i}\) using policy \(\pi_{\phi_{i}}\) 9: Estimate GAE \(\hat{A}_{t}^{h_{\theta_{k}}(g_{i})}\) 10: Estimate importance sampling ratio \(r_{t}(\theta,g_{i})\) 11:endfor 12: Compute policy update 13:\(\theta_{k+1}=argmax_{\theta}\mathcal{L}_{\theta_{k}}(\theta)\) 14: by taking \(K\) steps of minibatch SGD (via Adam) 15:endfor ``` **Algorithm 1** HyperPPO ## IV Results and Discussion To implement our method, we use the Sample Factory [33] package. Its efficient design enables us to parallelize data collection and train Graph Hyper Policies quickly. The experiments are carried out on standard locomotion tasks that have been implemented on Brax [11] and Mujoco [34]. We also train on the quadrotor simulator described in QuadSwarm [6]. All experiments were run 4 seeds at a time on an AWS g4dn.12xlarge instance with 48vCPU, 4 Telsa T4 GPUs and 192 GB RAM. ### _Ablations_ For our ablations, we train on the Humanoid task in Brax for 1 billion steps for 8 seeds. We simulated 4096 environment instances in parallel and ran for approximately 200 minutes. Every few steps, we evaluate the performance of policies estimated by the GHP for every architecture in the search space. To estimate the quality of all architectures we find the average reward across all architectures. #### Iv-A1 Vectorized Standard deviations First, we analyze the performance of HyperPPO with VSD and CSD. Figure 4 shows this for both CSD and VSD. We see that with CSD, the average reward grows faster initially. This is because the standard deviation converges faster with CSD. But with more training, we see that VSD eventually achieves a larger reward. As mentioned in IV-A1, we believe this is because individual exploration for each architecture can eventually obtain better performance. Therefore we suggest using the VSD when massively parallel environments such as Brax or IsaacGym [12] are available. #### Iv-A2 Architecture Sampling During experimentation, we first implemented the uniform architecture sampling as described in [5]. On further analysis, we found that the graph hyper policy has a learning bias toward deeper network architectures. We believe this is because there are fewer shallower architectures than deeper ones. To compensate for this effect, we sample architectures with their sampling probability inversely proportional to the number of layers. We shall call this biased sampling. We run HyperPPO-VSD with both modes of architecture sampling. From figure 4, we can see that with biased sampling, we obtain better performance. Further, smaller networks gained a bigger performance boost with biased sampling, since more of these were considered during training. Therefore, for all other experiments in this paper, we set the architecture sampling mode to biased sampling. ### _Scaling HyperPPO_ Here, we show that HyperPPO can scale up to provide better results with more computation. We train HyperPPO-CSD on the Mujoco halfcheetah task for 5 hours while varying the number of environment instances from which data are sampled. We run this experiment over 5 seeds, and at the end of the experiment, we evaluate every architecture in the search space. Figure 5 shows us the distribution of performance over all unique architecture policies estimated by GHP. This plot is similar to those used to evaluate policy data sets in [35, 36]. 
The x-axis is the policy's accumulated reward, while the y-axis represents the number of policies with reward greater than x. N represents the number of environment instances from which data are sampled. We can see that scaling up the algorithm with more parallel environments in HyperPPO with more computation can provide a better collection of policies over the same time. ### _Brax benchmarks_ Having shown that our method scales with performance, we benchmark HyperPPO-VSD on GPU-accelerated Brax environments. We use 4 locomotion tasks, namely, humanoid, ant, halfcheetah, and walker2d. On each task, we train for 1 billion state transition steps and show results across 8 seeds. During training, every few steps, we evaluate the GHP on every architecture in the search space. From this evaluation, we identify architectures that provided the highest reward, the smallest architectures that provided 90% of the highest reward, and the smallest architectures that provided 80% of the highest reward. We call these max, 90%, and 80% architectures respectively. As a baseline, we train regular PPO also implemented on Sample Factory with the same hyperparameters, with 3 hidden layers with 256 neurons each. This is a common choice of model architecture for these locomotion tasks. Figure 3 shows the results of this experiment. For each task, the left plot depicts rewards attained by the max, 90%, 80% architectures, and the baseline. The right plot shows the size of these architectures on a log scale. For all tasks, we see that the number of parameters required to achieve 90% and 80% of maximum performance reduces considerably. Further, by taking the average reward over all seeds, we identify 80% architectures for each task as (64) for halfcheetah, (64) for walker2d, (32) for humanoid, and (64) for Ant. These are all single hidden layer architectures with either 64 or 32 neurons in them. We trained policies with these architectures with regular PPO and compared their performance with policies of the same architectures estimated by the GHP in HyperPPO-VSD. Table I shows that the policies estimated by the GHP obtain considerably more reward on the Halfcheetah, Walker2d, and Humanoid tasks, while the performance is comparable on the Ant task, figure 3 suggests that the model has not yet converged for Ant. These results show that HyperPPO-VSD can provide multiple architecture policies with the same sample complexity as a single PPO run, and further provides higher performing smaller policies than its regular PPO counterparts. We believe this increase in performance has two reasons: (a) Better exploration: The policies are now more stochastic with HyperPPO-VSD probabilistically choosing different action distributions during data collection. (b) Distillation between architectures: Gradients to the hypernetwork from Fig. 4: **Ablations.** Average reward across all architectures during training. **Left:** Action Standard Deviation; **Right**: Architecture Sampling. Fig. 5: **Scaling HyperPPO: With more environment instances, the performance of all architectures increases. N represents the number of parallel environment instances.** Fig. 3: **Learning smaller networks.** All architectures are evaluated as training progresses. For each pair, **left**: (max performance, 90% of max performance, 80% of max performance, baseline performance) vs training samples collected; **right**: the minimum number of parameters needed to achieve these levels of performance vs training samples collected. 
data of larger architectures can improve policies estimated for smaller architectures. ### _Quadrotor Drones_ We train HyperPPO-CSD on the Quadrotor environment designed for a Crazyflie 2.1, QuadSwarm [6]. The Crazyflie 2.1 is a severely compute-constrained quadrotor with an onboard microcontroller running at 168MHz with 168 Kb RAM. We train the control policy in simulation on a mixture of single drone goal-based scenarios [30] (static goal, dynamic goal, random 3D Lissajous trajectory tracking, and random 3D Bezier curve trajectory tracking), for 500 million state transition steps, and we zero-shot transfer our control policy to the physical Crazyflie quadrotor. We test our control policies on the Bezier curve trajectory tracking on the physical Crazyflie quadrotor, one of the most challenging scenarios in the simulation, to showcase the flying performance of our control policy. As a baseline, we train a policy with architecture (512,512) (i.e., two hidden layers with 512 neurons each), with the same hyperparameters and scenarios. Similar to Figure 3, we analyze the training performance in Figure 6. We see that the best-performing architecture estimated with HyperPPO-CSD achieves more reward than the baseline, whose performance is comparable to that of 80% architectures. Across seeds, for this task, we identified the 80% architecture as (4) (i.e., a single hidden layer 4 neuron network). This small policy was estimated at the end of training and deployed on the Crazyflie. For evaluating the physical deployment performance, we generate a random 3D Bezier curve as the desired trajectory and use the neural network to control rotor thrusts, to track this trajectory. From Figure 7 we see that the quadrotor is capable of tracking the desired trajectory with a HyperPPO estimated neural network, with high success rates. If we wanted to test a different architecture for physical deployment, instead of retraining a new network from scratch, we can estimate the weights for that architecture with one inference step of the trained GHP model. While we maintain sample efficiency, we note that a limitation of our method is a \(\sim\) 2-3x training time increase as compared to regular PPO. At present, we limit ourselves to Multi-Layer Perceptrons, however, we plan to experiment with architecture search spaces with different types of networks such as CNNs, LSTM, and Transformers in the future. Finally, identifying the performance of a candidate architecture involves estimating it with the GHP and evaluating it with a rollout. Identifying the desired architecture algorithmically during training is a possible future avenue for this work.
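As an aside on the search space used throughout the experiments above, the MLP architecture space described earlier (up to four layers, widths drawn from {4, 8, 16, 32, 64, 128, 256}) and the depth-inverse biased sampling from the ablations can be sketched in a few lines of plain Python. This is an illustrative reconstruction rather than the authors' implementation; the helper names are invented, and the biased-sampling weights are one plausible reading of "probability inversely proportional to the number of layers."

```python
import itertools
import random

# Hidden-layer widths and maximum depth of the MLP search space.
WIDTHS = (4, 8, 16, 32, 64, 128, 256)
MAX_DEPTH = 4

def enumerate_architectures():
    """List every MLP as a tuple of hidden-layer widths, e.g. (64,) or (32, 256)."""
    archs = []
    for depth in range(1, MAX_DEPTH + 1):
        archs.extend(itertools.product(WIDTHS, repeat=depth))
    return archs

ARCHS = enumerate_architectures()
# 7 + 7**2 + 7**3 + 7**4 = 2800, matching the count quoted in the text.
assert len(ARCHS) == 2800

def sample_architecture_biased(rng=random):
    """Sample one architecture with probability inversely proportional to its depth,
    counteracting the fact that deeper architectures vastly outnumber shallow ones."""
    weights = [1.0 / len(a) for a in ARCHS]
    return rng.choices(ARCHS, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(f"{len(ARCHS)} architectures; sampled: {sample_architecture_biased()}")
```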
2305.20004
Learning to solve Bayesian inverse problems: An amortized variational inference approach using Gaussian and Flow guides
Inverse problems, i.e., estimating parameters of physical models from experimental data, are ubiquitous in science and engineering. The Bayesian formulation is the gold standard because it alleviates ill-posedness issues and quantifies epistemic uncertainty. Since analytical posteriors are not typically available, one resorts to Markov chain Monte Carlo sampling or approximate variational inference. However, inference needs to be rerun from scratch for each new set of data. This drawback limits the applicability of the Bayesian formulation to real-time settings, e.g., health monitoring of engineered systems, and medical diagnosis. The objective of this paper is to develop a methodology that enables real-time inference by learning the Bayesian inverse map, i.e., the map from data to posteriors. Our approach is as follows. We parameterize the posterior distribution as a function of data. This work outlines two distinct approaches to do this. The first method involves parameterizing the posterior using an amortized full-rank Gaussian guide, implemented through neural networks. The second method utilizes a Conditional Normalizing Flow guide, employing conditional invertible neural networks for cases where the target posterior is arbitrarily complex. In both approaches, we learn the network parameters by amortized variational inference which involves maximizing the expectation of evidence lower bound over all possible datasets compatible with the model. We demonstrate our approach by solving a set of benchmark problems from science and engineering. Our results show that the posterior estimates of our approach are in agreement with the corresponding ground truth obtained by Markov chain Monte Carlo. Once trained, our approach provides the posterior distribution for a given observation just at the cost of a forward pass of the neural network.
Sharmila Karumuri, Ilias Bilionis
2023-05-31T16:25:07Z
http://arxiv.org/abs/2305.20004v3
# Learning to solve Bayesian inverse problems: An amortized variational inference approach ###### Abstract Inverse problems, i.e., estimating parameters of physical models from experimental data, are ubiquitous in science and engineering. The Bayesian formulation is the gold standard because it alleviates ill-posedness issues and quantifies epistemic uncertainty. Since analytical posteriors are not typically available, one resorts to Markov chain Monte Carlo sampling or approximate variational inference. However, inference needs to be rerun from scratch for each new set of data. This drawback limits the applicability of the Bayesian formulation to real-time settings, e.g., health monitoring of engineered systems and medical diagnosis. The objective of this paper is to develop a methodology that enables real-time inference by learning the Bayesian inverse map, i.e., the map from data to posteriors. Our approach is as follows. We represent the posterior distribution using a parameterization based on deep neural networks. Next, we learn the network parameters by an amortized variational inference method which involves maximizing the expectation of the evidence lower bound over all possible datasets compatible with the model. We demonstrate our approach by solving a set of benchmark problems from science and engineering. Our results show that the posterior estimates of our approach are in agreement with the corresponding ground truth obtained by Markov chain Monte Carlo. Once trained, our approach provides the posterior parameters for a given observation at just the cost of a forward pass of the neural network. _Keywords:_ Inverse problems; real-time inference; Bayesian inverse map; amortized variational inference ## 1 Introduction In scientific and engineering applications, we are often interested in identifying the unknown parameters of a physical model from observable quantities. These problems are called inverse or model calibration problems [1]. For instance, in reservoir engineering, it is pivotal to infer the permeability field of the subsurface from geophysical field measurements [2]. Other examples of inverse problems include remote sensing [3], climate modeling [4], medical imaging [5], subsurface hydrology and geology [6], ocean dynamics [7], seismic inversion [8], and many more. Inverse problems are hard to solve. First, the observed data typically contain measurement noise which has to be filtered out. Second, inverse problems may be ill-posed, i.e., many different sets of parameters could result in the same observations. Third, forward models are usually computationally expensive, with simulation times ranging from a few minutes to days. Bayesian inference is the gold standard for posing inverse problems [9, 10]. In the Bayesian paradigm, one encodes their knowledge about the parameters using prior probabilities, and models the measurement process using a likelihood function which connects the physical model to the data. The solution of the inverse problem is the posterior probability dictated by Bayes' rule [11]. The analytical form of the posterior is not always available, except for very few simple cases. The crudest way to summarize the Bayesian solution is via a point estimate of the parameters, typically obtained by maximizing the posterior probability density (MAP estimate). This approach is used in seismic inversion [12] and numerical weather prediction models [13]. MAP estimates are acceptable only when the posterior has a unique maximum and is sharply peaked.
The Laplace method [14] approximations the posterior using a multivariate Gaussian with a mean specified by the MAP estimate and a covariance matrix given by the negative inverse Hessian of the logarithm of the posterior. This approximation is capable of quantifying some of the epistemic uncertainty, albeit it is acceptable only in the cases where the posterior has a unique maximum and is shaped like a Gaussian. More sophisticated approaches involve exploring the posterior by sampling through Markov chain Monte Carlo (MCMC) [15, 16, 17] sampling methods. MCMC generates a sequence of samples from a proposal distribution which are accepted or rejected according to an acceptance ratio. The final samples form a Markov chain that is ergodic with respect to the desired posterior. MCMC requires repeated evaluations of the underlying physical model, which raises the computing overhead. This computational cost can be overcome by replacing the physical model with a computationally inexpensive surrogate. Surrogates are built, for example, using Gaussian process regression (GPR) [18, 19, 20, 21, 22, 23], Polynomial chaos expansion (PCE) [24, 25, 26], Deep neural networks (DNNs) [27, 28, 29]. Note that surrogate models introduce additional epistemic uncertainty, an issue that can be addressed using the theory developed in [23]. MCMC is not without its issues. For instance, as the number of parameters increases; the generated Markov chain may take impractically long times to converge [30, 31]. Variational inference (VI) [32, 33] offers a compromise between computational efficiency and accuracy. The idea is to pose the posterior learning problem as an optimization problem over a family of tractable probability distributions. The optimization objective is usually the information loss between the true and the approximate posterior. Common choices are the Kullback-Leibler (KL) divergence [34] and Maximum mean discrepancy (MMD) [35]. The parameters of the approximated posterior are referred to as variational parameters and the corresponding approximated posterior as the variational distribution or as the guide. In [36], the authors applied the VI formulation to approximate the posterior of an inverse problem by a family of a mixture of Gaussians with applications to catalysis and contamination source identification. Similarly, [37] and [38] applied these techniques for parameter inference in heat conduction and elastography. A major drawback of the presented methodologies is that they need to be rerun for each new set of observations. As a result, it is not always feasible to apply these methods to settings that require a real-time response. Overcoming this limitation has a potential impact on many applications, e.g., medical imaging [39, 40], structural health monitoring [41, 42], geology [43]. The goal of our paper is to address this drawback. More specifically, our objective is to develop a methodology that enables real-time inference by learning a generalized model that outputs the posterior for any observed data that is compatible with the physical process. We refer to this function from data to parameters as the "Bayesian inverse map." The idea of learning inverse maps has been explored in previous works. The authors of [44], employed an invertible neural network to learn both the forward map (from parameters to data) and the inverse map. However, this work is incomplete in the sense that they assumed that their data was noise-free. 
In [45], the authors proposed to learn inverse maps by parameterizing the posterior as a deep-generative conditional flow model [46], where a sequence of invertible transformations to a base conditional distribution models the posterior. These invertible transformations are parameterized functions represented by neural networks. They trained the parameters of these networks on pairs of parameter-observation data by maximizing the conditional likelihood. It can be shown that the approach is a posterior mean-seeking approach and not a mode-seeking approach. The latter point creates problems when one tries to apply the method to ill-posed inverse problems. Our method does not suffer from the same problems. Next, the authors of [47], introduced an invertible DeepONet architecture to learn inverse maps, however, they estimate the posterior through a semi-analytic approach. We represent the Bayesian inverse map using amortized variational distributions [48, 49]. Amortized variational distributions are guides with parameters that are functions of the observed data. These functions accept observations as inputs and output the parameters of the guide representing the Bayesian solution to the inverse problem. We represent these functions using a neural network, called the amortization network. We identify the parameters of the amortization network by minimizing the expectation (over all datasets supported by the physical model) of the Kullback-Leibler divergence between the guide and the true posterior. We call our approach amortized variational inference (AVI). We prove theoretically that, under certain assumptions, optimizing the proposed objective function is equivalent to solving all possible VI problems in one shot. We also derive a stochastic optimization algorithm that enables the practical implementation of our scheme. The problem is very computationally demanding, but the cost is "amortized" when the solution is repeatedly used. Most importantly, the Bayesian inverse map can be queried at the cost of a single forward amortization network pass and, thus, it is suitable for real-time applications. Note that AVI is more restricted than free VI, a phenomenon called the amortization gap [50, 51]. Of course, as the amortization network capacity goes to infinity, the amortization gap disappears. In practice, one has to balance the amortization network capacity with the available computational resources for identifying its parameters. The rest of the paper is structured as follows. In Sec. 2, we outline our methodology by first discussing the mathematical notation used throughout the paper. In Sec. 2.1, we describe the problem we intend to solve. We then describe in detail the variational formulation to learn the inverse map using amortized posteriors in Sec. 2.2. In Sec. 2.3, we move on to the discussion of the choice of the amortized posterior used in this work and its representation using a neural network. Finally, in Sec. 2.4, we discuss the stochastic optimization of the variational loss. In Sec. 3, we discuss the metrics used for evaluating the performance of our approach and then demonstrate the methodology on a series of examples. In Sec. 4 we present our concluding remarks. ## 2 Methodology We start by a discussion of the mathematical notation we follow regarding random variables and their expectations. We use uppercase letters to indicate random variables and lowercase letters to indicate the values of these random variables. 
We assume that all the random variables we are working with have probability densities. If the probability density of a random variable \(X\) is not explicitly specified, then we denote it by \(p(x)\). In this regard, we follow the common practice in probabilistic machine learning of "overloading" the symbol \(p\). In particular, when we encounter the symbols "\(p(x)\)" then we understand that it refers to "the probability density function of the random variable \(\text{upper}(x)=X\) evaluated at \(x\)." Now if \(g\) is a function of \(x\), the expectation of \(g(X)\) is: \[\mathbb{E}[g(X)]=\int g(x)p(x)dx.\] Sometimes we want to take the expectation of \(X\) not with respect to \(p(x)\) but with respect to another distribution, say \(q\). We denote this expectation by: \[\mathbb{E}_{X\sim q}[g(X)]=\int g(x)q(x)dx.\] When there is no ambiguity, we may simply write \(\mathbb{E}_{q}\) instead of \(\mathbb{E}_{X\sim q}\). We define the (differential) entropy of the probability density \(q(x)\) by: \[\mathbb{H}[q(X)]:=-\mathbb{E}_{q}[\log q(X)].\] Finally, we denote by \(\mathcal{N}(x|\mu,\Sigma)\) the probability density of a multivariate Gaussian with mean \(\mu\) and covariance matrix \(\Sigma\) evaluated at \(x\). ### Problem definition and motivation Suppose that we have a physical problem with unknown parameters as \(\xi\), a vector in \(\mathbb{R}^{d}\). The physical model connects the parameter \(\xi\) to some quantities of interest. We refer to this map as the "forward model." The forward model is a function, \(f\), from \(\mathbb{R}^{d}\) to \(\mathbb{R}^{m}\). The evaluation of the forward model at a given parameter vector, is \(f(\xi)\). We denote the experimental observations by \(y\), also a vector in \(\mathbb{R}^{m}\). The goal in inverse problems is to find the parameters \(\xi\) from the data \(y\), i.e., to invert the forward model. Note that the data differ from the model prediction due to a variety of reasons, e.g., measurement noise, model discrepancy errors, errors due to the discretization of the physical equations, numerical errors. We work under the simplifying assumption that only measurement uncertainty is present. For concreteness, let us assume that the data are generated by adding zero-mean Gaussian noise to the forward model. The likelihood function, which connects parameters to data, is: \[p(y|\xi)=\mathcal{N}(y|f(\xi),\gamma^{2}I),\] where the mean is the model prediction, \(I\) is the unit matrix, and \(\gamma^{2}\) is a parameter that controls the measurement noise. In the Bayesian formulation of inverse problems one starts by describing their state of knowledge about the parameters \(\xi\) using probabilities. Let \(\Xi\) be the random variable encoding this prior knowledge and \(p(\xi)\) the corresponding prior probability density function. After observing data, we wish to update our state of knowledge about the parameters. Following the Bayesian paradigm, our posterior state of knowledge is captured by: \[p(\xi|y)=\frac{p(y|\xi)p(\xi)}{p(y)}.\] The normalizing constant \(p(y)=\int p(y|\xi)p(\xi)\,d\xi\), is known as the evidence. The posterior is, typically, not analytically available. In VI, one approximates it within a tractable distribution family \(q_{\lambda}(\xi)\). We call \(q_{\lambda}(\xi)\) the guide and we refer to \(\lambda\) as the variational parameters. One identifies the variational parameters by minimizing the Kullback-Leibler (KL) divergence between the guide and the posterior. 
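Before turning to the variational objective, the ingredients just introduced, a Gaussian likelihood built around a forward model and a prior over the parameters, can be assembled into the unnormalized log-posterior in a few lines of code. The sketch below is generic and illustrative: `forward_model`, the noise scale, and the toy linear example are placeholders, not quantities taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood(y, xi, forward_model, gamma):
    """log p(y | xi) for additive zero-mean Gaussian noise with scale gamma."""
    mean = forward_model(xi)                      # model prediction f(xi) in R^m
    cov = gamma**2 * np.eye(len(y))
    return multivariate_normal.logpdf(y, mean=mean, cov=cov)

def log_unnormalized_posterior(y, xi, forward_model, gamma, log_prior):
    """log p(y | xi) + log p(xi); the evidence p(y) is an xi-independent constant."""
    return log_likelihood(y, xi, forward_model, gamma) + log_prior(xi)

# Example usage with a toy linear forward model (purely illustrative).
if __name__ == "__main__":
    A = np.array([[1.0, 0.5], [0.0, 2.0]])
    f = lambda xi: A @ xi
    log_prior = lambda xi: multivariate_normal.logpdf(xi, mean=np.zeros(2), cov=np.eye(2))
    y_obs = np.array([0.3, -1.2])
    print(log_unnormalized_posterior(y_obs, np.array([0.1, -0.5]), f, 0.1, log_prior))
```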
KL minimization is equivalent to maximizing a related quantity, the Evidence Lower BOund (ELBO) [52, 53]. Equipped with this notation, we can write the mathematical equation for the ELBO. It consists of two parts. The first part, which promotes data fitting, is the expectation over the guide of the logarithm of the joint probability density of parameters and data. The second part, which serves a regularization purpose, is the entropy of the guide. If we use \(p(\xi,y)=p(y|\xi)p(\xi)\) to denote the joint probability density of parameters and data, then the mathematical form of the ELBO is: \[\text{ELBO}(\lambda;y):=\mathbb{E}_{q_{\lambda}}\left[\log p(\Xi,y)\right]+ \mathbb{H}\left[q_{\lambda}(\Xi)\right]. \tag{1}\] ### Variational formulation of the problem of finding the inverse map The main drawback of VI is that it requires solving the variational problem for each new data. This shortcoming inhibits the application of VI to real-time inference settings. Our goal is to learn the inverse map, i.e., the map from data to posteriors. To this end, we rely on two pillars. First, we use amortization to represent the map from data to optimal variational parameters. Second, we formulate a new variational problem whose solution is equivalent to doing VI for all datasets compatible with the model. The idea in amortization is to make the optimal variational parameters a function of the data, i.e., \(\lambda=\lambda(y)\). So, for new data \(y\), the posterior is approximated by the guide \(q_{\lambda(y)}(\xi)\). We refer to \(\lambda\) as the amortization function. For concreteness, assume that there are \(n\) variational parameters so that \(\lambda\) is a function from \(\mathbb{R}^{m}\) to \(\mathbb{R}^{n}\). Let \(\lambda_{i}\) denote the \(i\)-th component of \(\lambda\). We define the space of admissible amortization functions, \(\mathcal{A}\), to be the set of Lebesgue-measurable functions \(\lambda\) from \(\mathbb{R}^{m}\) to \(\mathbb{R}^{n}\) with finite \(L^{2}\) norm: \[\parallel\lambda\parallel^{2}:=\mathbb{E}\left[\sum_{i=1}^{n}\lambda_{i}^{2}( Y)\right]<\infty.\] The space of admissible amortization functions \(\mathcal{A}\) is a Banach space. Note that the expectation above is over the data random variable \(Y\) which is assumed to follow the data density \(p(y)\), as predicted by our model. Using the sum rule, the probability density function of \(Y\) is: \[p(y)=\int p(\xi,y)d\xi=\int p(y|\xi)p(\xi)d\xi.\] In other words, one can sample \(Y\) by sampling parameters from the prior, evaluating the forward model, and then sampling data from the likelihood. We propose to learn the amortization function by maximizing the expectation of the ELBO, Eq. (1), over all admissible amortization functions: \[\text{AELBO}[\lambda]=\mathbb{E}\Big{[}\text{ELBO}(\lambda(Y);Y)\Big{]}, \tag{2}\] This expectation is well defined whenever the ELBO is continuous (the composition of a continuous function with a Lebesgue-measurable function is Lebesgue-measurable). We refer to this quantity as the amortized ELBO (or AELBO). Next we prove two propositions that provide some intuition as to why the AELBO is a good choice for learning the amortization function. The first proposition claims that the AELBO is bounded above by minus the differential entropy of the data density. Observe that this statement does not necessarily mean that the AELBO has an attainable maximum. But the statement guarantees that a maximization algorithm will not result in perpetual AELBO increase. 
**Proposition 1**.: _If the differential entropy of the data density, \(\mathbb{H}[p(Y)]\), is finite, then the amortized ELBO is bounded above by \(\neg\mathbb{H}[p(Y)]\) for all admissible amortization functions._ Proof.: Let \(\lambda\) be an admissible amortization function. The ELBO is bounded above by the log evidence [36], i.e., \[\operatorname{ELBO}(\lambda(Y);Y)\leq\log p(Y).\] Taking the expectation of both sides with respect to the data density yields: \[\operatorname{AELBO}[\lambda]\leq\mathbb{E}[\log p(Y)]=-\mathbb{H}[p(Y)].\] It is also worth noting that one can construct pathological probability densities whose differential entropy is minus infinity. For such cases, our argument breaks down. It is also possible to construct probability densities with infinite differential entropy. For such data densities, the steps in the proof show that the AELBO is minus infinity and, thus, meaningless. In what follows, we are assuming that the data density has a finite differential entropy. We refer the interested reader to the work of [54] for sufficient conditions under which this assumption is true. Let \(\lambda\) and \(\zeta\) be admissible amortization functions. The first variation of the AELBO with respect to \(\lambda\) in the direction of \(\zeta\) is defined by: \[\delta\text{AELBO}[\lambda,\zeta]:=\left.\frac{d}{d\epsilon}\right|_{\epsilon =0}\text{AELBO}[\lambda+\epsilon\zeta].\] The second variation of the AELBO at \(\lambda\) in the direction of \(\zeta\) is: \[\delta^{2}\text{AELBO}[\lambda,\zeta]:=\left.\frac{d^{2}}{d\epsilon^{2}} \right|_{\epsilon=0}\text{AELBO}[\lambda+\epsilon\zeta].\] The necessary and sufficient conditions for an admissible amortization function \(\lambda\) to be a maximum of AELBO is that the first variation is zero and the second variation is strongly negative for all directions \(\zeta\) in \(\mathcal{A}\)[55], i.e., \[\delta\text{AELBO}[\lambda,\zeta]=0,\] and \[\delta^{2}\text{AELBO}[\lambda,\zeta]<-\kappa\parallel\zeta\parallel^{2},\] for some \(\kappa>0\). Similarly, if a variational parameter \(\lambda(y)\) maximizes the ELBO then the gradient of the ELBO is zero at \(\lambda(y)\) and the Hessian of the ELBO is negative definite. The next proposition guarantees that maxima of the AELBO yield maxima of the ELBO. Note that there are underlying technical smoothness assumptions which we do not explicitly state. The reader should assume that the functions involved are as smooth as necessary for the steps of the proof to be valid. **Proposition 2**.: _If an admissible amortization function, \(\lambda\), is a maximum of the amortized ELBO then the variational parameters \(\lambda(y)\) form a maximum of the ELBO for all data \(y\) supported by the data density._ Proof.: To keep the notation as simple as possible, define the function \(g\) from \(\mathbb{R}^{n}\times\mathbb{R}^{m}\) to \(\mathbb{R}\) by: \[g(\lambda,y)=\operatorname{ELBO}(\lambda,y).\] The AELBO is the functional from \(\mathcal{A}\) to \(\mathbb{R}\): \[\text{AELBO}[\lambda]=\mathbb{E}[g(\lambda(Y),Y)],\] where the expectation is with respect to the random vector \(Y\) which follows the data density. Let \(\lambda\) be an admissible amortization function that maximizes the AELBO. We will show that \(\lambda(y)\) maximizes the ELBO for all \(y\) in the support of the data density. The first variation of \(\text{AELBO}[\lambda]\) in an arbitrary direction \(\zeta\) must be zero. 
Using the chain rule, we get: \[0=\delta\text{AELBO}[\lambda,\zeta]=\mathbb{E}\left[\sum_{i=1}^{n}\frac{ \partial g(\lambda(Y),Y)}{\partial\lambda_{i}}\zeta_{i}(Y)\right]. \tag{3}\] Now for any \(j=1,\ldots,n\) and any \(y\) in the support of the data density, pick a \(\zeta\) whose components are the product of the following carefully chosen Kronecker and Diract deltas: \[\zeta_{i}(Y)=\delta_{ij}\delta(Y-y).\] Plugging in Eq. (3) yields: \[\frac{\partial g(\lambda(y),y)}{\partial\lambda_{j}}=0.\] This is the necessary condition for \(\lambda(y)\) to be a maximum of the ELBO. Since \(\lambda\) is a maximum of the AELBO, the second variation is strictly negative. This means that there exists a positive \(\kappa\) such that for all \(\zeta\): \[\delta^{2}\text{AELBO}[\lambda,\zeta]<-\kappa\parallel\zeta\parallel^{2}.\] Again, using the chain rule, we can show that: \[\delta^{2}\text{AELBO}[\lambda,\zeta]=\mathbb{E}\left[\sum_{i=1}^{n}\sum_{j=1 }^{n}\frac{\partial^{2}g(\lambda(Y),Y)}{\partial\lambda_{i}\partial\lambda_{j }}\zeta_{i}(Y)\zeta_{j}(Y)\right]<-\kappa\parallel\zeta\parallel^{2}. \tag{4}\] We now show that Eq. (4) implies that the Hessian of the ELBO (with respect to \(\lambda\)) is negative definite. Let \(x\) be a vector in \(\mathbb{R}^{n}\) different than zero and \(y\) be in the support of the data density. It suffices to show that: \[\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^{2}g(\lambda(y),y)}{\partial\lambda _{i}\partial\lambda_{j}}x_{i}x_{j}<0. \tag{5}\] To this end, pick a \(\zeta\) whose components are: \[\zeta_{i}(Y)=x_{i}\delta(Y-y).\] Plugging this \(\zeta\) on the left-hand-side of Eq. (4) yields the left-hand-side of Eq. (5). Plugging \(\zeta\) on the right-hand-side of Eq. (4) gives a negative number since: \[\parallel\zeta\parallel^{2}=\sum_{i=1}^{n}x_{i}^{2},\] and \(x\) is not zero. ### Choice of the guide and parameterization of the amortization function We use a DNN to represent the amortization function \(\lambda(y)\). More specifically, we write \(\lambda=\lambda(y;\phi)\), where \(\phi\) are the DNN parameters to be learned. We refer to this DNN as the amortization network, and to \(\phi\) as the amortization parameters. The guide we use in this work is a full-rank multivariate Gaussian. To define it, suppose that the amortization network \(\lambda(y;\phi)\) has two multi-dimensional outputs, i.e., \[\lambda(y;\phi)=\left(\mu(y;\phi),L(y;\phi)\right).\] The first component, \(\mu(y;\phi)\) is a \(d\)-dimensional vector and the second component, \(L(y;\phi)\), is a \(d\times d\) matrix. To be specific, \(\mu(y;\phi)\) is the mean vector and \(L(y;\phi)\) is the Cholesky factor of the covariance matrix of the multivariate Gaussian on \(\xi\): \[q_{\lambda(y;\phi)}(\xi)=\mathcal{N}\left(\xi\big{|}\mu(y;\phi),\Sigma(y;\phi) \right),\] where \(\Sigma(y;\phi)=L(y;\phi)L(y;\phi)^{T}\). There are no constraints on the \(\mu(y;\phi)\) output of the network. But the Cholesky factor output \(L(y;\phi)\) must be lower triangular with positive diagonal [56]. We honor these constraints by composing \(\lambda(y;\phi)\) from three distinct neural networks \(\lambda_{1}(y;\phi_{1}),\lambda_{2}(y;\phi_{2})\) and \(\lambda_{3}(y;\phi_{3})\). The first two networks have output dimension \(d\) and the third network has output dimension \(\frac{d^{2}-d}{2}\). All these networks have similar structure (feed-forward networks with ReLU activations), the complete details of which we provide in the numerical examples section (Sec. 3). 
The first and the third networks end with a linear activation and correspond, respectively, to the mean vector \(\mu(y;\phi)\) and the lower triangular part of \(L(y;\phi)\). The second network corresponds to the diagonal of \(L(y;\phi)\) and ends with a softplus activation to ensure the positivity constraint. The above parameterization defines a subset of the admissible amortization functions \(\mathcal{A}\). This subset is described by \(\phi\) which lives in an unrestricted Euclidean space. From now on, we seek to solve the finite dimensional optimization problem of maximizing the multivariate function: \[v(\phi)=\text{AELBO}[\lambda(\cdot;\phi)]. \tag{6}\] ### Numerical optimization via stochastic gradient ascent We construct a stochastic gradient ascent algorithm that provably converges to a local maximum of Eq. (6). The first step is to recast the problem as a stochastic optimization problem and to construct an unbiased estimator of the gradients of the objective function \(v\) with respect to the amortization network parameters \(\phi\). Notice that the objective function decomposes in two parts. An expectation over the logarithm of the joint probability density of the data \(Y\) and the parameters \(\Xi\) and an expectation over the data density of the entropy of the guide: \[v(\phi)=\mathbb{E}\left[\mathbb{E}_{\Xi\sim q_{\lambda(Y;\phi)}}\left[\log p( \Xi,Y)\big{|}Y\right]+\mathbb{H}\left[q_{\lambda(Y;\phi)}(\Xi)\big{|}Y\right] \right]. \tag{7}\] In this equation, \(\mathbb{E}[\cdot|Y]\) is and \(\mathbb{H}[\cdot|Y]\) are the expectation and entropy conditional on \(Y\), respectively. For the first summand, we employ the reparameterization trick [57, 58, 59] to remove the dependence of the expectation on the amortization network parameters. Introduce the \(d\)-dimensional standard normal random variable \(Z\sim N(0,I)\), and write: \[\Xi=h(Z,Y;\phi):=\mu(Y;\phi)+L(Y;\phi)Z.\] Then \(\Xi\) conditioned on \(Y\) follows \(q_{\lambda(Y;\phi)}\) and thus: \[\mathbb{E}\left[\mathbb{E}_{\Xi\sim q_{\lambda(Y;\phi)}}\left[\log p(\Xi,Y) \big{|}Y\right]\right]=\mathbb{E}\left[\mathbb{E}\left[\log p(\Xi=h(Z,Y;\phi), Y)\big{|}Y\right]\right]=\mathbb{E}[\log p(\Xi=h(Z,Y;\phi),Y)].\] For the second term of Eq. (7), we have that: \[\mathbb{H}\left[q_{\lambda(Y;\phi)}(\Xi)\Big{|}Y\right]=\frac{1}{2}\log\{(2 \pi e)^{d}\det\left(\Sigma(Y;\phi)\right)\}=\frac{d}{2}\log(2\pi e)+\sum_{r=1} ^{d}\log L_{rr}(Y;\phi).\] Putting everything together, we get: \[v(\phi)=\frac{d}{2}\log(2\pi e)+\mathbb{E}\left[\log p(\Xi=h(Z,Y;\phi),Y)+\sum _{r=1}^{d}\log L_{rr}(Y;\phi)\right].\] The reparameterization trick allows us to derive unbiased estimators of \(v(\phi)\) and of its gradient with respect to \(\phi\). To this end, let \(N_{y}\) and \(N_{z}\) be integers. Let \(Y_{i}\), \(i=1,\ldots,N_{y}\), be independent identically distributed (iid) random variables following the data density. Let \(Z_{j}\), \(j=1,\ldots,N_{z}\), be iid following a \(d\)-dimensional standard normal. Define the random variable: \[V(\phi)=\frac{d}{2}\log(2\pi e)+\frac{1}{N_{y}}\sum_{i=1}^{N_{y}}\left\{\frac{ 1}{N_{z}}\sum_{j=1}^{N_{z}}\log p(\Xi=h(Z_{j},Y_{i};\phi),Y_{i})+\sum_{r=1}^{d }\log L_{rr}(Y_{i};\phi)\right\}. 
\tag{8}\] For this random variable, we have: \[v(\phi)=\mathbb{E}\left[V(\phi)\right].\] We have now succeeded in recasting the problem of learning the amortization network parameters as a stochastic optimization problem: \[\phi^{*}=\arg\max_{\phi}v(\phi)=\arg\max_{\phi}\mathbb{E}[V(\phi)].\] Furthermore, the \[\nabla_{\phi}v(\phi)=\mathbb{E}\left[\nabla_{\phi}V(\phi)\right],\] where \(\nabla_{\phi}\) denotes the gradient with respect to \(\phi\). Under these conditions, the stochastic gradient ascent updates: \[\phi_{k+1}=\phi_{k}+\eta_{k}\nabla_{\phi}v_{k}(\phi_{k}), \tag{9}\] where \(v_{k}(\phi_{k})\) are independent samples from \(V(\phi_{k})\) (which can be constructed by sampling the underlying \(Y_{i}\)'s and \(Z_{j}\)'s) converge to a local maximum if the the learning rate \(\eta_{k}\) satisfies the Robbins-Monro conditions [60]: \[\sum_{k=1}^{\infty}\eta_{k}=+\infty,\] \[\sum_{k=1}^{\infty}\eta_{k}^{2}<+\infty.\] This algorithm is typically called stochastic gradient ascent (SGA). In our numerical examples, we employed the adaptive moments (ADAM) optimization method [61], a robust variant of SGA that typically exhibits faster convergence. This method computes adaptive learning rates for each parameter using exponentially decaying averages of past gradients and past squared gradients, and converges faster than SGA. In ADAM, the averaging hyper-parameters denoted as \(\beta_{1}\), and \(\beta_{2}\) are, in principle, tunable. In practice, default values of \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), as suggested by [61] work well and we do not change these quantities. We use a step decay learning rate schedule, which decays the learning rate by a multiplicative factor \(\alpha\) after every \(r\) iterations. The amortization network training algorithm is outlined in Algorithm 1. Fig. 1 shows a schematic representation of the proposed approach. ``` 0: Amortization network architecture, number of iterations \(N_{\text{iter}}\), number of samples \((N_{y},N_{z})\), initial learning rate \(\eta_{0}\), multiplicative factor for learning rate decay \(\alpha\), learning rate decay after every \(r\) iterations. 1: Initialize parameters of the amortization network. 2:for\(k=1\) to \(N_{\text{iter}}\)do 3:for each \(i=1,\cdots,N_{y}\)do 4: Generate sample data \(y_{ki}\) by sampling the random variable \(Y_{i}\) that follows the data density by: 5: Sampling parameters \(\xi_{ki}\) from the prior \(p(\xi)\). 6: Solving the forward model \(f(\xi_{ki})\). 7: Sampling \(y_{ki}\) from the likelihood. 8:for each \(j=1,\cdots,N_{z}\)do 9: Generate samples \(z_{kj}\) of the standard normal \(Z_{j}\). 10:endfor 11:endfor 12: Construct a sample \(v_{k}(\phi_{k})\) of \(V(\phi_{k})\) using Eq. (8). 13: Construct a sample \(\nabla_{\phi}v_{k}(\phi_{k})\) of \(\nabla V(\phi_{k})\). 14: Update the learning rate \(\eta_{k}\) based on step decay learning rate schedule using \(\eta_{0}\), \(\alpha\) and \(r\). 15: Update the parameters \(\phi_{k+1}\) using Eq. (9). 16:endfor 17:return\(\phi^{*}\). \(\triangleright\) Return \(\phi^{*}\) trained amortization network parameters. ``` **Algorithm 1** Amortization network training process ## 3 Numerical examples We demonstrate the effectiveness of our proposed framework to learn inverse maps through three examples. These examples allow intuitive visualizations of the posterior and comparison against ground truth MCMC estimates of the posterior. The MCMC estimates are sampled by employing the No-U-Turn sampler (NUTS) [62] implemented by the Pyro [63] python library. 
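Before moving to the examples, Algorithm 1 combined with the full-rank Gaussian guide of Sec. 2.3 can be summarized in a short PyTorch-style sketch. This is a minimal illustration under assumed placeholder dimensions, network sizes, forward model, and noise scale; the authors' actual implementation (available in the repository cited below) may differ in its details.

```python
import math
import torch
import torch.nn as nn

d, m = 2, 3            # parameter and data dimensions (placeholders)
gamma = 0.05           # likelihood noise scale (placeholder)

def forward_model(xi):                       # placeholder f: R^d -> R^m
    W = torch.ones(m, d)
    return xi @ W.T

class AmortizationNet(nn.Module):
    """Maps data y to (mu, L): mean and Cholesky factor of the Gaussian guide."""
    def __init__(self):
        super().__init__()
        self.mu_net   = nn.Sequential(nn.Linear(m, 20), nn.ReLU(), nn.Linear(20, d))
        self.diag_net = nn.Sequential(nn.Linear(m, 20), nn.ReLU(), nn.Linear(20, d), nn.Softplus())
        self.off_net  = nn.Sequential(nn.Linear(m, 20), nn.ReLU(), nn.Linear(20, d * (d - 1) // 2))

    def forward(self, y):
        mu, diag, off = self.mu_net(y), self.diag_net(y), self.off_net(y)
        L = torch.diag_embed(diag)                       # positive diagonal via softplus
        rows, cols = torch.tril_indices(d, d, offset=-1)
        L[..., rows, cols] = off                         # strictly lower-triangular part
        return mu, L

net = AmortizationNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
N_y, N_z = 32, 5

for it in range(1000):
    # Sample data from the data density: prior -> forward model -> likelihood.
    xi_prior = torch.randn(N_y, d)
    y = forward_model(xi_prior) + gamma * torch.randn(N_y, m)

    mu, L = net(y)
    z = torch.randn(N_z, N_y, d)
    xi = mu + torch.einsum("bij,sbj->sbi", L, z)         # reparameterized guide samples

    # log p(xi, y) = log N(y | f(xi), gamma^2 I) + log N(xi | 0, I)
    resid = y - forward_model(xi.reshape(-1, d)).reshape(N_z, N_y, m)
    log_lik = -0.5 * (resid**2).sum(-1) / gamma**2 - m * math.log(gamma * math.sqrt(2 * math.pi))
    log_prior = -0.5 * (xi**2).sum(-1) - 0.5 * d * math.log(2 * math.pi)
    entropy = 0.5 * d * math.log(2 * math.pi * math.e) \
        + torch.log(torch.diagonal(L, dim1=-2, dim2=-1)).sum(-1)

    elbo = (log_lik + log_prior).mean(0) + entropy       # Monte Carlo estimate, cf. Eq. (8)
    loss = -elbo.mean()
    opt.zero_grad(); loss.backward(); opt.step()
```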
The codes of all the examples in this work are published in Github ([https://github.com/PredictiveScienceLab/paper-2023-inverse-map-karumuri](https://github.com/PredictiveScienceLab/paper-2023-inverse-map-karumuri)). ### Comparison metrics To assess the quality of the inverse map, one of the metrics we use is the value of the Kolmogorov-Smirnov (KS) test statistic [64, 65] between the posterior samples estimated by our method and the corresponding samples estimated using the MCMC method. This test statistic quantifies the distance between the empirical cumulative distribution functions (ECDFs) of the posterior samples obtained by both methods. The test statistic is zero when the posterior samples of the parameters from both methods follow the same distribution. We estimate these KS statistic values of posteriors for \(N_{y}=100\) samples from the data density, say \(y_{i}\), \(i=1,\ldots,N_{y}\). For each one of these hypothetical data sets, we perform MCMC to obtain \(N_{\text{MCMC}}=1000\) samples of the parameters, say \(\xi_{ij}\), \(j=1,\ldots,N_{\text{MCMC}}\). Specifically, we use a total of \(3,300\) NUTS samples, discard the first \(300\) as burn-in, and then select every \(3^{\text{rd}}\) sample. Another metric we use is the so-called re-simulation error. Let \(\Xi_{\text{gt}}\) be a random variable that follows the prior \(p(\xi)\). The "gt" subscript stands for "ground truth." Let \(Y\) be the random variable modeling the measurement we would have made if the parameters were \(\Xi_{\text{gt}}\), i.e., \(Y\sim p(y|\xi=\Xi_{\text{gt}})\). The re-simulation Figure 1: Schematic representation of the amortization approach for learning Bayesian inverse maps. The amortization network in grey takes in observation data and outputs the corresponding variational parameters of full-rank Gaussian distribution. Shown in orange, blue and green are the three neural networks computing \(\mu(y;\phi),L_{\text{diag}}(y;\phi)\) and \(L_{\text{off-diag}}(y;\phi)\) respectively. error is defined to be: \[\mathcal{E}_{\text{re-sim}}=\mathbb{E}\left[\mathbb{E}_{\Xi\sim q_{\lambda(Y; \phi)}}\left[\|f(\Xi)-f(\Xi_{\text{gt}})\|_{2}\big{|}\Xi_{\text{gt}},Y\right] \right].\] Note that the inner expectation is over the guide \(q_{\lambda(Y;\phi)}\), while the outer expectation is over the "ground truth" parameters \(\Xi_{\text{gt}}\) and the hypothetical measurement \(Y\). Again, we approximate the re-simulation error by sampling. Specifically, we use \(N_{y}\) samples \(y_{i}\) and \(\xi_{\text{gt},i}\) of \(Y\) and \(\Xi_{\text{gt}}\), respectively. For each \(i=1,\dots,N_{y}=100\), we sample \(N_{\text{samples}}=1000\) points \(\xi_{ij}\) from the guide \(q_{\lambda(y_{i};\phi)}\). It is: \[\hat{\mathcal{E}}_{\text{re-sim}}=\frac{1}{N_{y}N_{\text{samples}}}\sum_{i=1}^ {N_{y}}\sum_{j=1}^{N_{\text{samples}}}\|f(\xi_{ij})-f(\xi_{\text{gt},i})\|_{2}. \tag{10}\] The benefit of the re-simulation error is that it does not require any MCMC samples. ### Damage location detection We consider the physically motivated problem of identification of the location and size of the damage in an object using the Electrical impedance tomography (EIT) [66, 67, 68] technique. This technique is taking center stage over recent years in Structural health monitoring (SHM) [69, 70] owing to its low cost and non-intrusive nature for damage detection. In EIT, the test object is probed using low-intensity current injections and thereby measuring the induced boundary electric potential. 
Changes in the measured electric potential data are linked to the changes in material properties of the test object via an inverse problem approach. In this context of damage identification using the EIT technique, the Bayesian inverse map learned using our AVI approach enables instantaneous on-the-fly detection of the distribution of damage location given the measured boundary potential data. To demonstrate this, we take a square solid plate of unit length with a circular void of 0.1 radius and we aim at discovering the void center using the EIT technique. Mathematically, the forward electrostatic EIT boundary value problem is described by the following equation [71]: \[-\nabla\cdot\left(a(x)\nabla u(x)\right)=0,\;\forall\;x\in\Omega=[0,1]^{2} \subset\mathbb{R}^{2}, \tag{11}\] where \(\Omega\) indicates domain of the square plate, \(u=\) electric potential, \(a=\) internal electric conductivity of the material, and the conductivity of the material varies as follows: \[a(x)=\begin{cases}a_{d}&\text{within the circular defect with center at }(x_{1c},x_{2c})\\ a_{o}&\text{otherwise}\end{cases} \tag{12}\] with \(a_{d}=1.5\) and \(a_{o}=10\). The test object is subjected to Neumann boundary conditions on the boundaries as follows: \[a_{o}\frac{\partial u}{\partial n}=\begin{cases}j(\text{current})&\text{on }S,\;S\subset\partial\Omega\\ 0&\text{on }\partial\Omega\backslash\bar{S}\end{cases} \tag{13}\] with unit current (\(j=1\)) injected on the boundary \(S\), a subset of boundary of the square plate \(\partial\Omega\). Specifically, we consider three experimental scenarios, in the first experiment unit current is injected on all four sides of the object (\(S\)), in the second experiment unit current is injected on the left and right sides of the test object (\(S\)), and no current on the other two sides and finally in the third experiment, a unit current is injected only on the top and bottom of the test object (\(S\)). From these three experiments induced potential on the boundaries is measured. We call these experiments as _Expt-1_, _Expt-2_ and _Expt-3_. For the sake of illustration, we show the contours of induced potential in the three experiments for a test object with circular defect at center \((0.5,0.5)\) in Fig. 2. We could clearly see a change in induced potential at the location of the defect. These induced potentials are estimated by solving the forward EIT problem in Eq. (11) numerically using a finite volume method (FVM) solver implemented in FiPy [72]. We assumed that the circular void of radius \(0.1\) lies anywhere within the square plate in the region \([0.2,0.8]^{2}\) and that we have access to \(600\) (\(=m\)) noisy measurements of the induced boundary potential in total from the three experiments considered. To be specific, \(200\) measurements from each of the experiments i.e., \(50\) measurements on each side of the square plate. We collectively denote the noisy boundary potential measurements \(\{y_{1,1},y_{1,2},\ldots,y_{1,200}\}\) from _Expt-1_ as vector \(y_{1}\). Similarly, we denote the data collected from _Expt-2_ and _Expt-3_ as \(y_{2}\) and \(y_{3}\) respectively. Now the inverse problem we are interested in here is to identify the center of the circular damage \(\xi=\{x_{1c},x_{2c}\}\) based on the observed boundary potential data from the three experiments \(y=\{y_{1},y_{2},y_{3}\}\). 
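The piecewise-constant conductivity field of Eq. (12) is straightforward to express in code; the short NumPy sketch below evaluates it for a given defect center. It is only an illustration of the field definition and is independent of the FiPy-based FVM solver the authors use to compute the induced potentials.

```python
import numpy as np

A_DEFECT, A_BACKGROUND = 1.5, 10.0   # conductivities inside/outside the void, Eq. (12)
RADIUS = 0.1                          # radius of the circular defect

def conductivity(x, defect_center):
    """Piecewise-constant conductivity a(x) of Eq. (12).

    x            : array of shape (..., 2), points in the unit square
    defect_center: array-like (x1c, x2c)
    """
    x = np.asarray(x, dtype=float)
    inside = np.linalg.norm(x - np.asarray(defect_center), axis=-1) <= RADIUS
    return np.where(inside, A_DEFECT, A_BACKGROUND)

# Example: evaluate on a coarse grid for a defect centered at (0.5, 0.5).
xs = np.linspace(0.0, 1.0, 5)
grid = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1)   # shape (5, 5, 2)
print(conductivity(grid, (0.5, 0.5)))
```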
Figure 2: (Damage location detection) Illustration of the electrical conductivity field and induced potential (FVM solution) in a test object with circular damage at \((0.5,0.5)\). We learn the required Bayesian inverse map from all the boundary potential data \(y\in\mathbb{R}^{600}\) to the center of the circular damage \(\xi\in\mathbb{R}^{2}\) using our amortization network. For learning the amortization network, we set up the required likelihood and prior. We assume the \(m\) noisy measurements to be independent and identically distributed (iid) and we model the measurement process using a Gaussian likelihood with a noise scale of \(\gamma=0.005\): \[p(y|\xi)=\prod_{i=1}^{3}N(y_{i}|u_{i}(\xi),\gamma^{2}I),\] where \(u_{1},u_{2},u_{3}\) are the vectors of true boundary potentials from the three experiments, obtained using the FVM solver. Note that each of these vectors is of length \(200\). Further, to make the computations faster, for each of the experiments we built a surrogate of the true boundary potentials using a residual neural network. The network takes in the center of the circular void, \(\xi\) in \(\mathbb{R}^{2}\), as input, and outputs the corresponding boundary potentials in \(\mathbb{R}^{200}\). The architecture of this network consists of \(5\) residual blocks, each with \(3\) layers having \(60\) neurons each and with SiLU activation functions. We trained this network using \(3,721\) data points obtained by randomly generating circular defects of radius \(0.1\) within the region \([0.2,0.8]^{2}\) and estimating the corresponding induced boundary potential using the FVM solver mentioned before. Having learnt these surrogates, the likelihood above reduces to \[p(y|\xi)=\prod_{i=1}^{3}N(y_{i}|\hat{u}_{i}(\xi;\theta_{i}),\gamma^{2}I), \tag{14}\] where \(\theta_{i}\) are the corresponding residual network parameters. We choose the prior over the parameters as \(\xi\sim\mathcal{N}(\mu,\sigma^{2}I)\) with \(\mu=[0.5,0.5]\) and \(\sigma=[0.1,0.1]\). Having obtained the necessary ingredients, the likelihood and the prior, we built the three networks in our amortization net as feed-forward networks, each with two hidden layers of sizes \(20\) and \(10\), respectively. Following Algorithm 1, the amortization net is trained for \(8,000\) iterations (\(N_{iter}\)) using (\(N_{y}=32,N_{z}=5\)) samples in each iteration, with an initial learning rate of \(\eta_{I_{0}}=10^{-2}\) and a step decay learning rate schedule with multiplicative factor \(\alpha=0.1\) applied after every \(r=4,000\) iterations. Qualitative results of the posteriors learned using our amortization net for three sets of observations are shown in Figs. (4 - 6), along with comparisons against the corresponding MCMC estimates. In these figures, the diagonal elements show the marginal posterior estimates of the damage location center coordinates and the off-diagonal elements show the corresponding scatter-plot. The ground truth location of the damage center is shown by a black dashed line for reference on the diagonal elements. We can clearly see that the posterior inferences using our amortization network match the corresponding MCMC estimates and that our network is able to infer the center of the circular damage accurately, conditional on the boundary potential measurements. This is also reflected in the very low values of re-simulation error (\(\mathcal{E}_{\text{re-sim}}=1.06\times 10^{-2}\)) and of the error metrics based on KS test statistic values in Fig. 3 for \(N_{y}=100\) samples from the data density.
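The two quantitative metrics reported above, per-parameter KS statistics against MCMC samples and the re-simulation error of Eq. (10), can be estimated with a few lines of NumPy/SciPy. The sketch below is a hedged illustration, not the authors' evaluation script: it assumes the guide samples, the MCMC samples, and a (possibly surrogate) forward model are already available, and the function names are invented for this example.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_statistics(guide_samples, mcmc_samples):
    """Per-parameter KS statistic between guide and MCMC posterior samples.

    Both arrays have shape (num_samples, d)."""
    d = guide_samples.shape[1]
    return np.array([ks_2samp(guide_samples[:, j], mcmc_samples[:, j]).statistic
                     for j in range(d)])

def resimulation_error(forward_model, guide_samples_per_obs, xi_ground_truth):
    """Monte Carlo estimate of the re-simulation error, Eq. (10).

    guide_samples_per_obs: list of (N_samples, d) arrays, one per observation y_i
    xi_ground_truth      : (N_y, d) array of the parameters that generated each y_i"""
    errs = []
    for xi_samples, xi_gt in zip(guide_samples_per_obs, xi_ground_truth):
        f_gt = forward_model(xi_gt)
        errs.extend(np.linalg.norm(forward_model(xi) - f_gt) for xi in xi_samples)
    return float(np.mean(errs))
```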
Figure 4: (Damage location detection - Observation set 1.) Qualitative results of the damage detection problem from our method and MCMC approaches using pairplot for the case where ground truth damage center is located at \((0.67,0.59)\). Figure 5: (Damage location detection - Observation set 2.) Qualitative results of the damage detection problem from our method and MCMC approaches using pairplot for the case where ground truth damage center is located at \((0.71,0.33)\). Figure 3: (Damage location detection) Histograms of KS test statistic values of parameter posteriors for \(N_{y}=100\) samples from the data density. ### Elliptic PDE with uncertain conductivity field Consider the 1D steady-state heat conduction equation with no heat sources: \[-\frac{d}{dx}\left(a(x,\xi)\frac{d}{dx}u(x,\xi)\right)=0, \tag{15}\] for \(x\) in \([0,1]\) and with Dirichlet boundary values: \[u(0,\xi)=1\text{ and }u(1,\xi)=0.\] The function \(a(x,\xi)\) is a spatially varying conductivity field and \(u(x,\xi)\) is the corresponding temperature field. We assume that the conductivity is characterized by a random field given by the following analytical equation: \[a(x,\xi)=\exp\{g(x,\xi)\},\] where \[g(x,\xi)=\sum_{i=1}^{5}\xi_{i}\frac{\sqrt{2}\sigma}{(i-\frac{1}{2})\pi}\sin \left((i-\frac{1}{2})\pi x\right),\] is a random sinusoidal field with uncertain parameters \(\xi=\{\xi_{i}\}_{i=1}^{5}\). These \(\xi_{i}\)'s are independent standard normal random variables with zero mean and unit variance and we consider the variance of the field \(\sigma\) to be \(1.5\). Now to demonstrate the effectiveness of our approach, we carry out the problem of inferring the thermal conductivity of this heterogeneous rod based on the observed temperature measurements. For this, we assume that we have access to a set of \(m=9\) potentially noisy measurements \(y_{1},y_{2},\ldots,y_{m}\) of \(u(x_{1},\xi),\ldots,u(x_{m},\xi)\) at \(m\) equidistant points between \(0.15\) and \(0.85\) along the length of the rod. We collectively denote all the noisy measurements as vector \(y=(y_{1},y_{2},\ldots,y_{m})\). The inverse problem here is to find the posterior distribution of the uncertain parameters in the conductivity, \(p(\xi|y)\), that lead to observed data \(y\). To do this, we learn the required Bayesian inverse map from \(y\in\mathbb{R}^{9}\) to \(\xi\in\mathbb{R}^{5}\) using our proposed amortization network. To move forward with building our amortization network, we set up the required ingredients likelihood and prior. We assume that the \(m\) noisy measurements are iid, and we model the measurement process using Figure 6: (Damage location detection - Observation set 3.) Qualitative results of the damage detection problem from our method and MCMC approaches using pairplot for the case where ground truth damage center is located at \((0.31,0.20)\). a Gaussian likelihood with a noise scale of \(\gamma=0.015\): \[p(y|\xi)=\prod_{i=1}^{m}N(y_{i}|u(x_{i},\xi),\gamma^{2}).\] To reduce the computational overhead, we construct a neural network approximator of \(u(x,\xi)\) as \(\hat{u}(x,\xi;\theta)\), using a physics-informed approach that minimizes the energy functional-based loss of Eq. (15) as described in [27]. Here, \(\theta\) are the neural network approximator parameters. The network takes as inputs a spatial location \(x\) and the conductivity parameters \(\xi\), and outputs the corresponding temperature \(\hat{u}(x,\xi;\theta)\). 
This network is a residual network with 5 residual blocks, each consisting of 3 layers with 40 neurons each, and with sigmoid linear units (SiLUs) as the activation function. Now the likelihood can be approximated as, \[p(y|\xi)\approx\prod_{i=1}^{m}N(y_{i}|\hat{u}(x_{i},\xi;\theta),\gamma^{2}), \tag{16}\] where \(\hat{u}\) is the forward model. Moving ahead to the prior, we assume that the parameters follow a Gaussian prior, \(\xi\sim\mathcal{N}(0,I)\). Now, for learning the inverse map, we chose the three networks in our amortization network to be feed-forward networks with four hidden layers of sizes 50, 40, 30, and 20, respectively. We trained this amortization net following Algorithm 1 for a total of \(35,000\) iterations (\(N_{iter}\)), with (\(N_{y}=64,N_{z}=5\)) samples in each iteration. We started with an initial learning rate of \(\eta_{l_{0}}=10^{-3}\) and used a step decay learning rate schedule with a multiplicative factor of \(\alpha=0.5\) after every \(r=20,000\) iterations. Qualitative results of the posterior conductivity fields inferred using our amortization network for three sets of observations are shown in Figs. (8 - 10), along with corresponding MCMC estimates for comparison. The green lines in Figs. (8a-8b), (9a-9b), and, (10a-10b) represent few samples of the inferred posterior input field and its corresponding solution responses. The black dotted line corresponds to the ground truth used to generate the measurement data, and the black crosses mark the measurement data. Figs. 8c, 9c and 10c show the distribution of posterior and prior draws of parameters with a pairplot, where ground truth parameter values are indicated by a black dashed line for reference. These results demonstrate that the posterior draws of the conductivity field from our amortization network accurately capture a distribution over the true solution, conditional on the noisy observations. Moreover, we observe that the ground truth MCMC posterior estimates follow a Multivariate Gaussian distribution and our amortization network is able to learn it. Notably, the correlations of the parameters learned from our method are in almost agreement with the corresponding estimates from MCMC. This is evident from the low values of re-simulation error (\(\mathcal{E}_{\text{re-sim}}=4.05\times 10^{-2}\)) and error metrics based on KS test statistic values in Fig. 7. These quantitative results are also estimated using \(N_{y}=100\) samples from the data density. Overall, these results demonstrate the effectiveness and accuracy of our proposed amortization network for inferring posterior distributions on the fly. Figure 7: (Elliptic PDE) Histograms of KS test statistic values of parameter posteriors for \(N_{y}=100\) samples from the data density. Figure 8: (Elliptic PDE - Observation set 1.) Qualitative results of the elliptic PDE problem from our method and MCMC approaches. Figure 9: (Elliptic PDE - Observation set 2.) Qualitative results of the elliptic PDE problem from our method and MCMC approaches. Figure 10: (Elliptic PDE - Observation set 3.) Qualitative results of the elliptic PDE problem from our method and MCMC approaches. ### Inverse kinematics We consider the inverse kinematics problem of identifying the configuration of a multi-jointed \(2D\) arm that ends at a given position, see Fig. 11a. This problem has been considered in [44]. 
The forward model takes the height on the slider \(\xi_{1}\) and the three joint angles \(\xi_{2},\xi_{3},\xi_{4},\) and returns the coordinates of the arm end point \(f(\xi)=(f_{1}(\xi),f_{2}(\xi))\): \[f_{1}(\xi)=l_{1}\cos(\xi_{2})+l_{2}\cos(\xi_{2}+\xi_{3})+l_{3}\cos(\xi_{2}+\xi _{3}+\xi_{4}),\] \[f_{2}(\xi)=\xi_{1}+l_{1}\sin(\xi_{2})+l_{2}\sin(\xi_{2}+\xi_{3})+l_{3}\sin(\xi_ {2}+\xi_{3}+\xi_{4}),\] with arm lengths \(l_{1}=0.5,l_{2}=0.5\) and \(l_{3}=1.\) The parameters \(\xi\) follow a Gaussian prior \(\xi\sim\mathcal{N}(0,\mathrm{diag}(\sigma^{2}))\) with \(\sigma=\left(\frac{1}{4},\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\) (see Fig. 11b). We assume that we have access to a noisy version \(y\) of the arm end coordinates \(f(\xi)\). The likelihood of observed data is chosen to be Gaussian, \[p(y|\xi)=\mathcal{N}(y|f(\xi),\gamma^{2}I),\] with \(\gamma=0.01\). The inverse problem is to find the posterior distribution \(p(\xi|y)\) of all arm configurations \(\xi\) that end at the observed \(2D\) position \(y\). We learn the required Bayesian inverse map from \(y\) in \(\mathbb{R}^{2}\) to \(\xi\) in \(\mathbb{R}^{4}\) using our method. Similar to the before examples, the three distinct neural networks in our amortization network are feed-forward networks with two hidden layers of sizes \(20\) and \(10\) respectively. The amortization network was trained following the process described in Algorithm 1 for \(10,000\) iterations (\(N_{\mathrm{iter}}\)) using \(N_{y}=32\) and \(N_{z}=5\) samples in each iteration. A step decay learning rate schedule was employed to optimize this network. The learning rate was initialized to an initial value of \(\eta_{I_{0}}=10^{-2}\) and decreased by a multiplicative factor of \(\alpha=0.1\) after every \(r=5,000\) iterations. Qualitative results of the posteriors learned using our amortization network for three endpoint observation \(y\) cases are shown in Figs. (13-15), along with comparisons against the corresponding MCMC estimates. Specifically, Figs. (13a-13b), (14a-14b), (15a-15b) show the distribution of arm configurations, conditional on the endpoint \(y\) marked by a grey cross. Here the vertical dotted line represents the rail the arm is based on, and the solid line represents the ground truth arm configuration. The faint-colored lines are Figure 11: (Inverse Kinematics) Illustration (a) of the articulated arm with three segments mounted on a rail with a slider and (b) prior distribution of the parameters \(\xi\) visualized as a collection of possible arm positions. sampled posterior arm configurations and contour lines around the target represent the area containing 97% of the sampled arm endpoints. Figs. 13c, 14c and 15c show the pairplot of parameters from our approach and MCMC. The diagonal elements show the marginal posterior estimates of the parameters and the off-diagonal elements show the scatter-plot for each pair of parameters. Ground truth parameter values are marked by a black dashed line for reference on the diagonal elements. From these, we see that our amortization network is able to capture a valid set of arm configurations but not all possible configurations as expected owing to the fact that our chosen posterior guide is a multivariate Gaussian whereas the ground-truth posterior is non-Gaussian and multi-modal. This is reflected in the low value of re-simulation error (\(\mathcal{E}_{\text{re-sim}}=2.32\times 10^{-2}\)) and high values of error metrics based on KS test statistic values in Fig. 12. 
These quantitative results are estimated using \(N_{y}=100\) samples from the data density. The posterior estimates, in this case, could be improved to capture the complete uncertainty by choosing a variational posterior guide that reflects the non-Gaussian nature and multi-modalities. Figure 12: (Inverse Kinematics) Histograms of KS test statistic values of parameter posteriors for \(N_{y}=100\) samples from the data density. Figure 13: (Inverse Kinematics - Observation set 1.) Qualitative results of the inverse kinematic problem from our method and MCMC approaches for the case where the arm ends at a position \(y=(1.91,0.08)\). Figure 14: (Inverse Kinematics - Observation set 2.) Qualitative results of the inverse kinematic problem from our method and MCMC approaches for the case where the arm ends at a position \(y=(1.67,0.20)\). ## 4 Conclusion In this work, we developed a methodology for learning Bayesian inverse maps from the observed data to posteriors by using an amortization network. The amortization network is a deep neural network that takes in observation data as input and outputs the corresponding posterior parameters. By using this amortization network, we avoided the need to compute per observation variational parameters and instead we computed the amortization network parameters which are a set of global variational parameters that generalize over all observations. We learned these amortization network parameters with an amortized approach for variational inference by taking an additional expectation over standard ELBO with respect to all observations compatible with the model. Towards this end note that, once the amortization network is trained posterior parameters Figure 15: (Inverse Kinematics - Observation set 3.) Qualitative results of the inverse kinematic problem from our method and MCMC approaches for the case where arm ends at a position \(y=(1.68,1.28)\). of an observation are available just at the forward pass of the network thereby enabling real-time on the fly inference. The inference models that we employed in this work are full-rank Gaussian densities, where the mean vector and covariance matrix are specified using our amortization network. We demonstrated the performance of our amortization network through three examples. The posteriors estimated from our amortization network are consistent with the ground truth posteriors from MCMC, except for the cases with posteriors involving non-gaussian nature and multi-modalities. Hence in the future, to address this we intend to extend our amortized inference approach by using conditional normalizing flow-based models [73, 46] to model the posterior. These flow based models represent the complex posterior densities by applying a series of invertible and differentiable transformations to simple conditional densities. ## 5 Acknowledgements This work has been made possible by the financial support provided by AFOSR program on materials for extreme environments under the grant number FA09950-22-1-0061.
2309.09743
The NFLikelihood: an unsupervised DNNLikelihood from Normalizing Flows
We propose the NFLikelihood, an unsupervised version, based on Normalizing Flows, of the DNNLikelihood proposed in Ref. [1]. We show, through realistic examples, how Autoregressive Flows, based on affine and rational quadratic spline bijectors, are able to learn complicated high-dimensional Likelihoods arising in High Energy Physics (HEP) analyses. We focus on a toy LHC analysis example already considered in the literature and on two Effective Field Theory fits of flavor and electroweak observables, whose samples have been obtained through the HEPFit code. We discuss advantages and disadvantages of the unsupervised approach with respect to the supervised one, as well as possible interplays of the two.
Humberto Reyes-Gonzalez, Riccardo Torre
2023-09-18T13:13:47Z
http://arxiv.org/abs/2309.09743v3
**The NFLLikelihood: an unsupervised DNNLikelihood from Normalizing Flows** ## Abstract **We propose the NFLLikelihood, an unsupervised version, based on Normalizing Flows, of the DNNLikelihood proposed in Ref. [1]. We show, through realistic examples, how Autoregressive Flows, based on affine and rational quadratic spline bijectors, are able to learn complicated high-dimensional Likelihoods arising in High Energy Physics (HEP) analyses. We focus on a toy LHC analysis example already considered in the literature and on two Effective Field Theory fits of flavor and electroweak observables, whose samples have been obtained throught the HEPFit code. We discuss advantages and disadvantages of the unsupervised approach with respect to the supervised one and discuss possible interplays of the two.** ###### Contents * 1 Introduction * 2 Likelihood functions for LHC analyses * 2.1 The LHC-like new physics search Likelihood * 2.2 The ElectroWeak fit Likelihood * 2.3 The Flavor fit Likelihood * 3 Evaluation Metrics * 4 The NFLLikelihood * 4.1 The Toy Likelihood * 4.2 The EW Likelihood. * 4.3 Flavor Likelihood * 5 Conclusion * A Details of the EW and Flavor Likelihoods Introduction The distribution, preservation, and reinterpretation of experimental and phenomenological Likelihoods arising in High Energy Physics (HEP) and astrophysics is an important and open topic [2]. In Ref. [1] it was shown how deep learning can play a crucial role in this context, by showing how the problem of encoding the Likelihood function into a Deep Neural Network (DNN) can be formulated as a supervised learning problem of regression. In simple terms, the values of the parameters \(\mathbf{x}\) and of the corresponding Likelihood \(y=\mathcal{L}(\mathbf{x})\) are used to train a fully connected multilayer perceptron (MLP), which delivers a "pseudo-analytical" representation of the Likelihood function in terms of a DNN, therefore called DNNLikelihood. In a recent paper, we have shown that Normalizing Flows (NFs) of the coupling and autoregressive type, are able to perform density estimation of very high dimensional probability density functions (PDFs) with great accuracy and with limited training samples and hyperparameters tuning [3]. Moreover, trained NFs, can be used as sample generators with two different approaches: on the one hand one can draw samples from the base distribution and transform them through the generative direction of the NFs, obtaining samples distributed according to the target PDF; on the other hand, the normalizing direction of the NFs can be used to get a fast prediction of the density for a given sample, allowing one to use the NF to assist and speed up traditional sequential Monte Carlo techniques [4, 5, 6, 7, 8, 9, 10, 11]. In this paper we show how Autoregressive Normalizing Flows (ANF) can be used to learn complicated Likelihoods, doing dentisy estimation starting from the \(\mathbf{x}\) samples only and therefore offering an unsupervised approach to the DNNLikelihood.1 We call the Likelihood encoded by NFs the NFLLikelihood. The aim of this paper is twofold: on the one hand we want to give explicit physics examples of the performances of the Autoregressive Flows studied in Ref. [3], which only considered toy distributions based on mixtures of Gaussians and truncated Gaussians; on the other hand we want to propose the NFLlikelihood as an alternative DNNLikelihood, discussing advantaged and disadvantages of the unsupervised approach, with respect to the supervised one. 
Footnote 1: We are aware of an upcoming paper proposing a similar approach in a different context [12]. One important remark is that, since we are interested here in discussing the NF performances in learning some physical complicated densities, we focused on learning the posterior probability (not just the Likelihood), since these were the data we had at our disposal. Our approch can of course be trivially extended to the Likelihood by removing the contribution of the (known) prior used to sample the posterior. The paper is organized as follows. In Section 2 we briefly describe the three phenomenological Likelihoods that we consider. Section 3 contains a discussion of the figures of merit that we used in our analysis, while Section 4 presents the main results. Finally, we report our conclusions in Section 5. ## 2 Likelihood functions for LHC analyses In this analysis, we consider three Likelihoods of different dimensionality. We briefly describe them in turn in the following subsections. ### The LHC-like new physics search Likelihood As a first example we consider the toy LHC-like NP search, here after referred as the Toy Likelihood, introduced in Ref. [13] and also considered in Ref. [1]. We refer the reader to those references for a detailed explanation of the Likelihood construction, its parameters, and its sampling. Here we limit ourselves to remind that the Likelihood depends on one signal strength parameter \(\mu\) and 94 nuisance parameters \(\mathbf{\delta}\). ### The ElectroWeak fit Likelihood The second Likelihood we consider is the one corresponding to the ElectroWeak fit presented in Ref. [14], which includes the recent top quark mass measurement by the CMS Collaboration [15] and \(W\) boson mass measurement by the CDF Collaboration [16]. Such Likelihood, that we call EW Likelihood, depends on 40 parameters: 32 nuisance parameters and 8 parameters of interest, corresponding to the Wilson coefficients of the relevant Standard Model Effective Field Theory (SMEFT) operators. A sampling of the posterior probability distribution has been obtained with the HEPFit code [17]. The complete list of parameters with their definitions is reported in Appendix A. In this case, the 1D marginal distributions of the parameters are all nearly Gaussian, with the exception of two truncated Gaussians, so that we expect it to be relatively simple for a NF with a Gaussian base distribution to learn the posterior. Nevertheless, the posterior shows strong correlations among some pairs of parameters (see Figure 5 in Appendix A), which helps to understand the ability of the NFs to accurately learn the correlation matrix. ### The Flavor fit Likelihood The third Likelihood we consider corresponds to the EFT fit to flavor observables related to neutral current \(b\to s\) transitions presented in Ref. [18]. This Likelihood, refereed to as the Flavor Likelihood, depends on 89 parameters: 77 nuisance parameters and 12 parameters of interest, corresponding to the Wilson coefficients of the relevant SMEFT operators. A sampling of the posterior probability distribution has been obtained with the HEPFit code documented in Ref. [17]. The full list of parameters is reported in Appendix A. This Likelihood is clearly more complicated than the previous two, since it features multimodal 1D distributions and complicated correlations (see Fig.s 3 and 4). 
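Before turning to the evaluation metrics, it is worth recalling how a trained flow is used on samples such as those just described: the normalizing direction evaluates the density of a point, while the generative direction draws new samples. The sketch below uses TensorFlow Probability as an illustrative stand-in for the implementation of Ref. [3]; the architecture shown (a single MAF bijector) is deliberately simpler than the stacked MAF/A-RQS models used in Section 4.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd, tfb = tfp.distributions, tfp.bijectors
DIM = 40  # e.g. the EW Likelihood depends on 40 parameters

flow = tfd.TransformedDistribution(
    distribution=tfd.Sample(tfd.Normal(0.0, 1.0), sample_shape=[DIM]),
    bijector=tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.AutoregressiveNetwork(
            params=2, hidden_units=[128, 128, 128])))

x = tf.random.normal([8, DIM])            # stand-in for (standardized) posterior samples
nll = -tf.reduce_mean(flow.log_prob(x))   # log-probability loss used for training
new_samples = flow.sample(1000)           # generative direction: fast sampling
```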
## 3 Evaluation Metrics We used as quality metrics the mean over dimensions of the \(p\)-values of 1D Kolmogorov-Smirnov test (KS-test), with an optimal value of 0.5 and the Sliced Wasserstein distance (SWD) [19, 20], with optimal value 0. We briefly recall their definitions here for convenience: * **Kolmogorov-Smirnov Test (KS)** The Kolmogorov-Smirnov (KS) test serves as a statistical test for assessing if two one-dimensional samples originate from the same underlying (unknown) probability density function (PDF). The null hypothesis assumes that both sets of samples are derived from the same PDF. The KS metric can be expressed as: \[D_{y,z}=\sup_{x}\left|F_{y}(x)-F_{z}(x)\right|,\] (1) where \(F_{y,z}(x)\) is the empirical cumulative distribution functions of the sample sets \(\{y_{i}\}\) and \(\{z_{i}\}\), while sup denotes the supremum function. The \(p\)-value for null hy pothesis rejection is given by: \[D_{y,z}>\sqrt{-\ln\left(\frac{p}{2}\right)\times\frac{1+\frac{n_{z}}{n_{y}}}{2n_{ z}}}\] (2) where \(n_{y}\) and \(n_{z}\) indicate the sample sizes. * **Sliced Wasserstein Distance (SWD)** The SWD serves as a metric for comparing two multi-dimensional distributions, leveraging the one-dimensional Wasserstein distance. The one-dimensional Wasserstein distance between two empirical distributions is formulated as: \[W_{y,z}=\int_{\mathbb{R}}dx\left|F_{y}(x)-F_{z}(x)\right|\] (3) In our sliced approach, we randomly select \(N_{d}=2D\) directions, with \(D\) the dimensionality of the sample, uniformly distributed over the \(4\pi\) solid angle.2. We then project all samples on such directions and compute the one-dimensional Wasserstein distance and finally take the mean over the directions. Footnote 2: This is achieved by normalizing an \(N\)-dimensional vector whose components are sampled from independent standard normal distributions [21]. In order to include statistical uncertainty on the test and NF generated samples we compute the above metrics 100 times, for independent batches of \(N_{\text{test}}/100\) points, and take the average. With respect to the Ref. [3] we also consider here the metric given by the discrepancy on the Highest Posterior Density Interval (HPDI). This is a very important metric for Bayesian posterior inference, since it tells how well credibility intervals (CI) are reproduced by the NFLikelihood. In particular, we computed the HPDI relative error width (HPDIe) for \(68.27\%,95.45\%\), and \(99.73\%\) (CI) of each 1D marginal of the true and predicted distributions. For each dimension, we compute the mean of this quantity when more than one interval is present (which is common for multimodal distributions). Finally, we take the median over all dimensions. We choose the median to avoid that results on very noisy dimensions, particularly in the Flavor Likelihood, have a large negative effect on the generally good value of the metric. ## 4 The NFLikelihood The results of this analysis have been obtained using the TensorFlow2 NF implementation from Ref. [3]. The Toy Likelihood was trained with a Masked Autoregressive Flow (MAF) architecture [22], while the EW and Flavor Likelihoods were trained with an Autoregressive Rational Quadratic Spline (A-RQS) architecture [23], always training with a log-probability loss function. For all three cases the training data was always standardized (to zero mean and unit standard deviation) before training and a small scan over the flow's hyperparameters was performed. 
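A minimal NumPy/SciPy sketch of the first two metrics of Section 3 (the mean KS p-value over dimensions and the sliced Wasserstein distance) is given below; it is illustrative only, and the HPDI-based metric is omitted for brevity. As in the text, these quantities would be averaged over 100 independent batches of \(N_{\text{test}}/100\) points to account for statistical uncertainty.

```python
import numpy as np
from scipy import stats

def mean_ks_pvalue(y, z):
    """Mean over dimensions of the two-sample KS-test p-value (optimal ~0.5)."""
    return np.mean([stats.ks_2samp(y[:, d], z[:, d]).pvalue
                    for d in range(y.shape[1])])

def sliced_wasserstein(y, z, n_dirs=None, seed=0):
    """Mean 1D Wasserstein distance over n_dirs random projection directions
    (default 2*D, as in the text)."""
    rng = np.random.default_rng(seed)
    dim = y.shape[1]
    n_dirs = n_dirs or 2 * dim
    dirs = rng.standard_normal((n_dirs, dim))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return np.mean([stats.wasserstein_distance(y @ d, z @ d) for d in dirs])
```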
Here we only present the optimal results obtained for each distribution. All training iterations were performed with an initial learning rate of \(0.001\), reduced by a factor of \(0.2\) after a \(patience\) number of epochs without improvement on the validation loss. Training was early stopped after \(2\cdot patience\) number of epochs without improvement. The value of \(patience\) and of the other relevant hyperparameters will be reported separately for each of the Likelihoods. All models have been trained on Tesla V100 Nvidia GPUs. ### The Toy Likelihood The hyperparameters that lead to the best estimation of the Toy Likelihood are shown in Table 1. The corresponding NF architecture is made of two MAF bijectors and one reverse permutation between them. Each MAF has an autoregressive network with 3 hidden layers made of 64 nodes each. The training was performed for a maximum of 200 epochs, with \(patience=20\) and \(2\cdot 10^{5}\) training samples. The NF model was tested with \(2\cdot 10^{5}\) test samples. The resulting quality metrics are shown in Table 2. In particular, we obtained an optimal KS-test of \(\sim 0.5\) and HPDIe of the order of \(10^{-2}\), which guarantee that, within the considered statistcal uncertainty, the NF generated samples are indistinguishable from those generated with the true pdf. The training time was about 300s. Since, when doing inference from a Likelihood function or posterior distribution, one is usually specially interested in the so-called parameters of interests (POIs), we show in Table 3 the results obtained for \(\mu\). Here the KS-test is again \(\sim 0.5\) and HPDIes of the order of \(10^{-2}\). The accuracy of the NF model is visually shown in Figure 1, which presents a corner plot of a selection of 10 parameters, including \(\mu\). In the Figure, the true distribution is shown in red, while the NF distribution in blue. The HPDIs corresponding to 68.27% (\(1\sigma\)), 95.45% (\(2\sigma\)), and 99.73% (\(3\sigma\)) probabilities are shown as solid, dashed and dashed-dotted lines, respectively. The selected parameters include those considered in Ref. [1], therefore allowing for a direct comparison. In particular, comparing with what in Ref. [1] is called the Bayesian DNNLikelihood, we find that both approaches gives extremely accurate results: the NF approach seems to perform slighlty better, even though the DNNLikelihood was trained with half the number of training points of the NFLLikelihood (\(10^{5}\)). The main difference seems to be in the training time, which is much larger for the DNNLikelihood. The advantage of the DNNLikelihood with respect to the NFLLikelihood comes when the so-called Frequentist Likelihood is considered: in this case one is not particularly interested in learning the Likelihood (or the posterior) as a PDF, but is instead interested in learning it as a function close to its absolute and local (profiled) maxima. This highlights the main difference betweeb the DNNLikelihood and the NFLLikelihood. The first is more suitable to encode Likelihoods to be used for frequentist analyses, while the second for Likelihoods \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multicolumn{8}{c}{**Hyperparameters for Toy Likelihood**} \\ \hline \# of samples & hidden layers & \# of bijec. 
& algorithm & spline knots & range & L1 factor & patience & max \# of epochs \\ \hline \(\mathbf{2\cdot 10^{5}}\) & \(3\times 64\) & MAF & 2 & - & - & 0 & 20 & 200 \\ \hline \hline \end{tabular} \end{table} Table 1: Hyperparameters leading to the best determination of the Toy Likelihood. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multicolumn{5}{c}{**Results for Toy Likelihood**} \\ \hline \# of samples & Mean KS-test & Mean SWD & HPDIe\({}_{1\sigma}\) & HPDIe\({}_{2\sigma}\) & HPDIe\({}_{3\sigma}\) \\ \hline \(\mathbf{2\cdot 10^{5}}\) & 0.4893 & 0.03947 & 0.02073 & 0.01207 & 0.01623 & 133 \\ \hline \hline \end{tabular} \end{table} Table 2: Best results obtained for the Toy Likelihood. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multicolumn{5}{c}{**Results for Toy Likelihood POI**} \\ \hline POI & KS-test & HPDIe\({}_{1\sigma}\) & HPDIe\({}_{2\sigma}\) & HPDIe\({}_{3\sigma}\) \\ \hline \(\mu\) & 0.54 & 0.02742 & 0.01359 & 0.01786 \\ \hline \hline \end{tabular} \end{table} Table 3: Best results POI flavor. (or posteriors) to be used in Bayesian analyses. Obviously one can combine the two approaches to obtain a general and flexible representation of the Likelihood suitable for both frequentist and Bayesian inference. We defer this generalization to future work. ### The EW Likelihood. The hyperparameters corresponding the best NF model describing the EW Likelihood are shown in Table. 4. The chosen NF architecture is made of two A-RQS bijectors with 4 spline knots defined in a \([-6,6]\) range, and one reverse permutation between them. Each A-RQS has an autoregressive network with 3 hidden layers made of 128 nodes each. The training was performed for a maximum of 800 epochs and a patience of 20 with \(2\cdot 10^{5}\) training samples. The NF model was tested with \(2\cdot 10^{5}\) samples. Finally, given the presence of truncated dimensions, the distributions was soft clipped, with a hinge factor of \(10^{-4}\) at the truncations, within the range of the training data. A summary of the values obtained for the evaluation metrics is reported in Table 5. We obtained a mean KS-test of \(\sim 0.4\) and HPDIes of the order of \(10^{-3}\) or smaller. The training time was about 7200 s, that is a couple of hours. Furthermore, Table 3 shows the Figure 1: Corner plot of the 1D and 2D marginal posterior distributions of a representative selection of the Toy Likelihood parameters.The true distribution is depicted in red, while the predicted distribution is shown in blue. The solid, dashed and dashed-dotted line over the 1D marginals denote the \(68.27\%,95.45\%\), and \(99.73\%\) HPDIs, respectively. The rings on the 2D marginals describe the corresponding probability levels. metrics obtained for the Wilson coefficients (POIs). We find that the POIs are generally well described, albeit some small deviations after \(2\sigma\) interval, which can be likely fixed after further fine-tunning the hyperparameters or adding more training points. The true and NF distributions are visually compared in Figure 2, which shows a corner plot over the POIs plus four representative nuisance parameters (a total of twelve parameters). As before, the distribution is represented in red, while the NF distribution in blue. The HPDIs corresponding to \(68.27\%,95.45\%\), and \(99.73\%\) probabilities are shown as solid, dashed and dashed-dotted lines, respectively. We see that in general, the NF distributions matches pretty well the true one. 
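The soft clipping mentioned above for the truncated dimensions can be viewed as one extra bijector appended to the flow, squashing outputs smoothly into the range of the training data. The following TensorFlow Probability sketch is illustrative only; the placeholder range and the Identity stand-in for the A-RQS chain are ours.

```python
import numpy as np
import tensorflow_probability as tfp

tfb = tfp.bijectors

# Per-dimension range of the training data (placeholder values).
train_min = np.float32([-1.0, 0.0])
train_max = np.float32([1.0, 2.5])

# Soft clip with a small hinge factor (1e-4, as quoted for the EW fit), so that
# generated samples cannot fall outside the truncations seen in training.
soft_clip = tfb.SoftClip(low=train_min, high=train_max, hinge_softness=1e-4)

# Appended to the flow's bijector chain (Identity stands in for the A-RQS
# bijectors and permutations of the actual model).
clipped = tfb.Chain([soft_clip, tfb.Identity()])
print(clipped.forward(np.float32([3.0, -1.0])))  # mapped inside the range
```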
Something worth emphasizing is the NF ability to learn even large correlations between dimensions. This is not expected in the case of the DNNLikelihood, since regression becomes inefficient when large correlations between parameters are present. ### Flavor Likelihood The optimal hyperparameters found for learning the Flavor Likelihood are shown in Table 7. The chosen NF architecture is made of two A-RQS bijectors with 8 spline knots defined in the \([-5,5]\) range, and one reverse permutation between them. Each A-RQS has an autoregressive network with 3 hidden layers made of 1024 nodes each and an L1 regularization factor of \(10^{-4}\). The training was performed for a maximum of 12000 epochs with a patience of 50. The model was trained with \(10^{6}\) samples and tested with \(5\cdot 10^{5}\) samples. Furthermore, since the Likelihood function presents several truncated dimensions, the NF \begin{table} \begin{tabular}{c c c c c} \hline \hline \multicolumn{5}{c}{**Results for EW Likelihood**} \\ \hline POI & KS-test & HPDle\({}_{1\sigma}\) & HPDle\({}_{2\sigma}\) & HPDle\({}_{3\sigma}\) \\ \hline \(c_{\varphi l}^{1}\) & 0.2089 & 0.05913 & 0.09649 & 0.2618 \\ \hline \(c_{\varphi l}^{3}\) & 0.2224 & 0.03072 & 0.1328 & 0.5699 \\ \hline \(c_{\varphi q}^{1}\) & 0.4308 & 0.03586 & 0.01308 & 0.03996 \\ \hline \(c_{\varphi q}^{3}\) & 0.4612 & 0.008389 & 0.03182 & 2.8446 \\ \hline \(c_{\varphi d}\) & 0.4478 & 0.0008574 & 0.04543 & 0.1011 \\ \hline \(c_{\varphi e}\) & 0.4831 & 0.01389 & 0.1226 & 0.1393 \\ \hline \(c_{\varphi u}\) & 0.4847 & 0.02874 & 0.006302 & 0.2268 \\ \hline \(c_{ll}\) & 0.2574 & 0.1487 & 0.0868 & 0.06874 \\ \hline \hline \end{tabular} \end{table} Table 6: Results for the Wilson coefficients in the EW Likelihood. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{5}{c}{**Hyperparameters for the EW Likelihood**} \\ \hline \# of samples & hidden layers & \# of bijec. & algorithm & spline knots & range & L1 factor & patience & \# of epochs \\ \hline \(\mathbf{2\cdot 10^{5}}\) & 2 & \(3\times 128\) & A-RQS & 4 & -6 & 0 & 20 & 800 \\ \hline \hline \end{tabular} \end{table} Table 4: Hyperparameters leading to the best determination of the EW Likelihood. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multicolumn{5}{c}{**Results for the EW Likelihood**} \\ \hline \# of samples & Mean KS-test & Mean SWD & HPDle\({}_{1\sigma}\) & HPDle\({}_{2\sigma}\) & HPDle\({}_{3\sigma}\) & time (s) \\ \hline \(\mathbf{2\cdot 10^{5}}\) & 0.4269 & 0.003066 & 0.0006896 & 0.0006578 & 0.006181 & 7255 \\ \hline \hline \end{tabular} \end{table} Table 5: Best results obtained on the EW Likelihood. model was soft-clipped, with an hinge factor of \(1\cdot 10^{-4}\), within the range of the training data. A summary of the evaluation metrics is shown in Table 8. We obtained an optimal KS-test of \(\sim 0.42\) and HPDIEs of the order of \(10^{-3}\) or smaller. Training took about \(2\cdot 10^{4}\) s, i.e. around \(5.5\) hours. The Flavor Likelihood includes \(12\) Wilson coefficients as POIs, and Table 9 shows the results obtained for each of them. The KS-tests are almost always above \(0.4\), with a couple of exceptions where the value is above \(0.3\). In turn, the HPDIEs are generally of the order \(10^{-2}\) or smaller, with some exceptions, that we believe may be improved by finely-tuning the architecture and/or by adding more training points. 
Notice that the apparent large discrepancy in HPDIe\({}_{1\sigma}\) for \(\epsilon_{22}^{\prime\,LedQ}\) is due to the algorithm that determines the HPDIs and not to a bad interpolation of the distribution. Indeed, from Table 9 the KS value for \(\epsilon_{22}^{\prime\,LedQ}\) is of the order of \(0.44\) and, as one can see from Figure 3, the bimodal 1D marginal of \(\epsilon_{22}^{\prime\,LedQ}\) is very well reproduced. It is important to stress the complexity of the Flavor likelihood. As can be seen from Figures 3 and 4, depicting a corner plot of the Wilson coefficients and the 1D marginal distributions of all dimensions, respectively, the posterior features multimodal 1D marginals, Figure 2: Corner plot of the 1D and 2D marginal posterior distributions of the POIs plus four representative nuisance parameters of the EW Likelihood. The true distribution is depicted in red, while the predicted distribution is shown in blue. The solid, dashed and dashed-dotted line over the 1D marginals denote the \(68.27\%,95.45\%\), and \(99.73\%\) HPDIs, respectively. The rings on the 2D marginals describe the corresponding probability levels. complex correlations and noisy dimensions, offering a very realistic prototype of a complicated high dimensional HEP Likelihood. Nonetheless, we find that the NF model is able to reproduce it with a very good accuracy. ## 5 Conclusion The publication of full Likelihoods is crucial for the long lasting legacy of the LHC, and for any other experiment involving complicated analyses with a large parameter space. However, this is not always a straightforward matter since Likelihoods are often high dimensional complex distributions, sometimes depending on Monte Carlo simulations and/or numeric integrations, which make their sampling a very hard task. Furthermore, one requires precise, compact, and efficient representations of them so that they can be easily and systematically reused. As it was first shown in Ref. [1], Neural Networks, being universal interpolators, offer a promising approach to encode, preserve, and reuse Likelihood functions. In this work we extended this approach to unsupervised learning, proposing \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multicolumn{8}{c}{**Hyperparameters for the Flavor Likelihood**} \\ \hline \# of samples & hidden layers & \# of bijec. & algorithm & spline knots & range & L1 factor & patience & max \# of epochs \\ \hline \(\mathbf{10^{6}}\) & \(3\times 1024\) & 2 & A-RQS & 8 & -5 & 1e-4 & 50 & 12000 \\ \hline \hline \end{tabular} \end{table} Table 7: Hyperparameters leading to the best determination of the Flavor Likelihood. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{5}{c}{**Results for Flavor Likelihood POIs**} \\ \hline \# of samples & Mean KS-test & Mean SWD & HPDle\({}_{1\sigma}\) & HPDle\({}_{2\sigma}\) & HPDle\({}_{3\sigma}\) \\ \hline \(\mathbf{10^{6}}\) & 0.3163 & 0.04031 & 0.01154 & 0.01354 & 1.738e-5 & 9550 \\ \hline \hline \end{tabular} \end{table} Table 8: Best results obtained for the Flavor Likelihood. 
\begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{5}{c}{**Results for Flavor Likelihood POIs**} \\ \hline \hline POI & KS-test & HPDle\({}_{1\sigma}\) & HPDle\({}_{2\sigma}\) & HPDle\({}_{3\sigma}\) \\ \hline \(c_{1123}^{LQ\,1}\) & 0.2056 & 0.1488 & 7.6285 & 4.731e-08 \\ \hline \(c_{223}^{LQ\,1}\) & 0.316 & 0.008596 & 0.009739 & 0.03413 \\ \hline \(c_{1123}^{Ld}\) & 0.4626 & 0.02686 & 0.01354 & 0.03553 \\ \hline \(c_{223}^{Ld}\) & 0.239 & 0.06724 & 0.01053 & 2.398e-08 \\ \hline \(c_{111}^{LedQ}\) & 0.3585 & 0.05904 & 0.01171 & 5.387e-08 \\ \hline \(c_{22}^{LedQ}\) & 0.3474 & 0.02035 & 0.008888 & 2.155e-09 \\ \hline \(c_{2231}^{Qe}\) & 0.2761 & 0.01753 & 0.01449 & 1.419e-07 \\ \hline \(c_{232}^{Qe}\) & 0.1362 & 0.05067 & 0.03221 & 0.009614 \\ \hline \(c_{1123}^{Qe}\) & 0.4518 & 0.02477 & 0.005794 & 5.602e-08 \\ \hline \(c_{2232}^{Qe}\) & 0.3939 & 0.007526 & 0.01198 & 1.502e-08 \\ \hline \(c_{11}^{LedQ}\) & 0.4603 & 0.007353 & 0.01401 & 8.011e-08 \\ \hline \(c_{22}^{\prime\,LedQ}\) & 0.4385 & 1.8112 & 0.007142 & 4.374e-08 \\ \hline \hline \end{tabular} \end{table} Table 9: Results for the Wilson coefficients in the Flavor Likelihood. the use of Normalizing Flows for this endeavor. Indeed, Normalizing Flows are powerful generative models which, by construction, also provide density estimation. We tested our proposal on three posterior distributions of increasing complexity, corresponding to three different Likelihood functions: a 95-dimensional LHC-like new physics search Likelihood, a 40-dimensional ElectroWeak EFT fit Likelihood, and an 89-dimensional Flavor EFT fit Likelihood. We found that Autoregresive Normalizing Flows are capable of precisely describing all the above examples, including all the multimodalities, truncations, and complicated correlations. In fact, we see that, given the way they are constructed, Autoregressive Flows can easily learn the covariance matrices of the distributions. Both the code used for this project [24] and a user-friendly TensorFlow2 framework for Normalizing Flows (still under development) [25] are available on GitHub. The training and generated data, as well as the trained NF models, are available on Zenodo [26]. The 95-dimensional LHC-like new physics search Likelihood, which was also studied in the context of the DNNLikelihood of Ref. [1] was also used to make a comparison between the two approaches. Such comparison leads to the conclusion that the two approaches are complementary and could, in the future, be merged to get an even more flexible Figure 3: Corner plot of the 1D and 2D marginal posterior distributions of the Wilson coefficients of the Flavor Likelihood. The true distribution is depicted in red, while the predicted distribution is shown in blue. The solid, dashed and dashed-dotted line over the 1D marginals denote the \(68.27\%,95.45\%\), and \(99.73\%\) HPDIs, respectively. The rings on the 2D marginals describe the corresponding probability levels. representation of the Likelihood. Indeed, while the DNNLikelihood approach focuses on learning the Likelihood as a multivariate function, and is agnostic about its probability interpretation, the NF approach leverages the latter. 
This implies that the DNNLikelihood approach is more suitable to learn Likelihood functions to be used in Frequentist analyses, where the region of profiled maxima is the important part to learn, while the NFLLikelihood approach is more suitable for Likelihoods (or posteriors) to be used in Bayesian analyses, where is crucial to learn the distribution as a statistical PDF and not just a multivariate function. We defer to future work the study of the best approach to merge the DNN and NF Likelihoods into a unique object. As a follow-up, we also plan to explore the possibility of learning full statistical models, i.e. functions of both the data and the parameters. A promising way to do this is by means of the so-called conditional Normalizing Flows [27]. ## Acknowledgements We thank Luce Silvestrini for useful discussions and for providing the samples of the EW and Flavor Likelihoods. We also thank the IT service of INFN Sezione di Genova, and especially Mirko Corosu, for computing support. H.R.G. is also thankful to Sabine Kraml, Wolfgang Waltenberger, and Danny van Dyk for encouraging discussions. Funding informationThis work was supported by the Italian PRIN grant 20172LNEEZ. Figure 4: 1D marginal posterior distributions of all the parameters of the Flavor Likelihood. The true distribution is depicted in red, while the predicted distribution is shown in blue. The solid, dashed and dashed-dotted lines over the marginals denote the \(68.27\%,95.45\%\), and \(99.73\%\) HPDIs, respectively. ## Appendix A Details of the EW and Flavor Likelihoods The list of parameters and their description for the EW Likelihood is given in Table 10, while Figure 5 gives a pictorial representation of the data correlation matrix. The list of parameters and their description for the Flavor Likelihood is given in Table 11.
2309.10212
Speculative Progressive Raycasting for Memory Constrained Isosurface Visualization of Massive Volumes
New web technologies have enabled the deployment of powerful GPU-based computational pipelines that run entirely in the web browser, opening a new frontier for accessible scientific visualization applications. However, these new capabilities do not address the memory constraints of lightweight end-user devices encountered when attempting to visualize the massive data sets produced by today's simulations and data acquisition systems. In this paper, we propose a novel implicit isosurface rendering algorithm for interactive visualization of massive volumes within a small memory footprint. We achieve this by progressively traversing a wavefront of rays through the volume and decompressing blocks of the data on-demand to perform implicit ray-isosurface intersections. The progressively rendered surface is displayed after each pass to improve interactivity. Furthermore, to accelerate rendering and increase GPU utilization, we introduce speculative ray-block intersection into our algorithm, where additional blocks are traversed and intersected speculatively along rays as other rays terminate to exploit additional parallelism in the workload. Our entire pipeline is run in parallel on the GPU to leverage the parallel computing power that is available even on lightweight end-user devices. We compare our algorithm to the state of the art in low-overhead isosurface extraction and demonstrate that it achieves 1.7x-5.7x reductions in memory overhead and up to 8.4x reductions in data decompressed.
Will Usher, Landon Dyken, Sidharth Kumar
2023-09-18T23:49:47Z
http://arxiv.org/abs/2309.10212v1
Speculative Progressive Raycasting for Memory Constrained Isosurface Visualization of Massive Volumes ###### Abstract New web technologies have enabled the deployment of powerful GPU-based computational pipelines that run entirely in the web browser, opening a new frontier for accessible scientific visualization applications. However, these new capabilities do not address the memory constraints of lightweight end-user devices encountered when attempting to visualize the massive data sets produced by today's simulations and data acquisition systems. In this paper, we propose a novel implicit isosurface rendering algorithm for interactive visualization of massive volumes within a small memory footprint. We achieve this by progressively traversing a wavefront of rays through the volume and decompressing blocks of the data on-demand to perform implicit ray-isosurface intersections. The progressively rendered surface is displayed after each pass to improve interactivity. Furthermore, to accelerate rendering and increase GPU utilization, we introduce speculative ray-block intersection into our algorithm, where additional blocks are traversed and intersected speculatively along rays as other rays terminate to exploit additional parallelism in the workload. Our entire pipeline is run in parallel on the GPU to leverage the parallel computing power that is available even on lightweight end-user devices. We compare our algorithm to the state of the art in low-overhead isosurface extraction and demonstrate that it achieves 1.7\(\times\)-5.7\(\times\) reductions in memory overhead and up to 8.4\(\times\) reductions in data decompressed. ## 1 Introduction Recent advances in web technologies, specifically WebGPU [55] and WebAssembly [54], have enabled the development of powerful GPU-based compute applications that run entirely in the browser. Scientific visualization applications can leverage these technologies to take advantage of the ease of deployment afforded by the browser without sacrificing the compute capabilities required to perform their analysis and visualization tasks; thereby widening the accessibility of complex scientific visualization applications. For example, recent works have leveraged these new technologies for interactive isosurface extraction on compressed data on the GPU [51] and GPU-parallel layout computation of large graphs [10]. However, these new technologies alone do not address the fundamental issues of limited memory and compute capacity on lightweight end-user devices. Memory capacity constraints are a fundamental issue in scientific visualization even when targeting high-end workstations, and are especially problematic when faced with processing the massive data sets produced by current simulations and data acquisition systems on lightweight consumer GPUs. While there exists a large body of work on large-scale volume rendering approaches [5], deploying large-scale volume visualization in the browser poses its own unique set of additional challenges (see, e.g. [51]). Desktop large-scale volume rendering approaches typically leverage special purpose file formats to stream data from disk (e.g. [19, 14, 8, 20]); however, web applications are unable to perform such low-level disk I/O operations. Although prior work has leveraged remote servers to stream subsets of data [46, 48], this introduces tradeoffs with latency and deployment cost. 
Usher and Pascucci [51] recently proposed Block-Compressed Marching Cubes (BCMC) to achieve interactive isosurface extraction in the browser through on the fly decompression and caching of a compressed data set stored on the GPU. Their approach reduces latency by transferring the entire compressed volume to the client, eliminating the need for a complex server, and achieves interactive isosurface extraction times through a fully GPU-driven decompression, caching, and isosarfacing pipeline. However, their approach extracts explicit surface geometry and thus, as with other extraction techniques, its memory and compute costs scale with the size of the data set and the number of triangles in the surface. As a result, BCMC is unable to extract isosurfaces from large data sets on lightweight devices as it runs out of memory to store the vertex data. We begin from the on the fly GPU decompression strategy of Usher and Pascucci [51]; however, we make deliberate design choices to reduce memory consumption and the impact of data set size on memory footprint and compute cost. First, we eliminate the need to store a large triangle mesh for the surface by adopting an implicit ray-isosurface intersection approach [35]. Next, to avoid processing fully occluded blocks, we progressively traverse a wavefront of rays through the volume in a multipass approach. Volume data cache updates are performed each pass to decompress newly visible blocks, thereby reducing the working set to just those blocks that the current set of rays traverse in a given pass. Finally, to address utilization issues encountered as rays terminate, we introduce ray-block speculation into our algorithm to exploit additional parallelism on the GPU to terminate rays faster and accelerate rendering. Our algorithm can be easily scaled down to run on low power devices, as its costs are primarily tied to the image size. Our contributions are: * A novel progressive algorithm for implicit isosurface raycasting that works directly on compressed data on the GPU; * A per-pass view-dependent decompression and caching strategy built into the algorithm to minimize its memory footprint; * A dynamic work speculation strategy that exploits additional parallelism in the workload to increase GPU utilization and accelerate rendering completion; * Evaluation of our algorithm against the state of the art on data sets with up to 8.05B voxels on lightweight end user devices. ## 1 Related Work In Section 1, we review recent work on bringing scientific visualization to the browser through WebGL [18] and WebGPU [55]. Visualizing large-scale volumetric data is a fundamental problem in scientific visualization, and has been deeply explored (see surveys by Beyer et al. [5] and Rodriguez et al. [3]). Isosurface visualization techniques can be categorized as either explicit surface extraction methods (Section 1), where geometry is computed for the surface, or implicit surface rendering methods (Section 1), which directly compute ray-isosurface intersections without explicit geometry. Finally, due to the similarities in isosurface ray-casting and ray-guided volume rendering algorithms, we review relevant work on raycasting of large volumes in Section 1. ### _Scientific Visualization in the Browser_ As with information visualization applications, bringing scientific visualization to the browser greatly expands accessibility to visualization, enabling more scientists to gain better insights about their data. 
Prior work has brought compelling scientific visualization applications to the browser through the use of server-side processing, local GPU acceleration, and combinations of both techniques. Prior server-based techniques have moved all computation to the server and streamed images to the client [40, 41, 26, 11, 42], allowing lightweight clients to access large amounts of compute power. However, such approaches can face issues with latency, cost, and quality of service when faced with supporting large numbers of concurrent users. Prior work has demonstrated leveraging a remote server to query and stream subsets of data to the client [46, 48], thereby balancing between remote and local processing costs. Clients can query subsets of the data for their region of interest or level of detail, which is transferred and rendered or processed locally. Although moving the rendering work to the client reduces the impact of server latency and quality of service, data streaming approaches can face similar issues as fully server-side approaches at scale. In this work, we target a fully client-side processing approach to eliminate the need for running backend servers and related potential challenges. We note that a combination of client- and server-side processing can provide the best scalability and performance for large data visualization; here we focus on expanding the capabilities of the client. Prior to WebGPU, browser applications leveraged WebGL to perform GPU accelerated rendering in applications ranging from LiDAR visualization [46] to volume rendering [38] and neuroscience [25, 48]. A fundamental limitation of WebGL compared to WebGPU is the lack of support for general compute shaders; however, Li and Ma [28, 29] proposed a method to work around this limitation by repurposing the rendering pipeline to perform a subset of parallel compute operations. With the recent development of WebGPU, browser applications now have access to general purpose GPU compute and advanced rendering capabilities. Usher and Pascucci [51] leveraged WebGPU to deploy a GPU-driven isosurface extraction pipeline that achieved interactive visualization of massive data sets entirely in the browser. Dyken et al. [10] presented a graph layout algorithm in WebGPU to accelerate layout and rendering of large graphs. Hidaka et al. [22] accelerated deep neural network execution in the browser using WebGPU, an approach which is also being tested in TensorFlow. ### _Explicit isosurface Extraction_ The original Marching Cubes paper [34] defined an object-order technique that computed explicit triangle geometry for each voxel to render the isosurface. The extracted triangle geometry can typically be rendered in real-time on modern GPUs. Subsequent work proposed constructing interval trees [6] or \(k\)-d trees over the span space [33] to accelerate Marching Cubes by filtering out voxels that were known to not contain the isosurface. Isosurface meshes can contain large numbers of triangles, many of which will be occluded or subpixel in size for a given viewpoint, leading to wasted computation and memory use. Livnat and Hansen [32] proposed a view-dependent surface extraction technique that traversed an octree to find voxels to extract triangles from. Extracted triangles were rasterized for display and updated an occlusion buffer used to filter out occluded octree nodes to skip traversal of occluded regions. 
Recent work has primarily focused on leveraging parallel execution on GPUs to accelerate surface extraction [1, 7, 31, 36, 43, 45, 31, 27]. Although each voxel can be processed independently in parallel, coordination is required to ensure that the individual voxel's outputs do not overwrite each other. GPU-parallel algorithms achieve this through prefix sums and stream compactions to compute which voxels contain the surface and to assign offsets into output buffers for their triangle data. However, prior work typically assumes that the entire volume fits in the memory of a single GPU [2, 27, 31, 43, 45], or that it can be distributed over a cluster [36]. Usher and Pascucci [51] recently proposed the Block-Compressed Marching Cubes (BCMC) algorithm for interactive GPU-parallel isosurface extraction on massive data sets. Their approach uploads a ZFP fixed-rate compressed volume to the GPU and decompresses and caches the blocks required for a given isosurface on demand using GPU decompression and an LRU cache. BCMC achieves interactive isosurface extraction times on consumer GPUs; however, as with other surface extraction techniques, it produces large vertex buffers and its cost scales with the total number of blocks containing the isosurface. These factors limit BCMCs scalability to massive volumes and lightweight end-user systems. Although we adopt a similar on-demand decompression and caching strategy, our algorithm does not store a vertex buffer and processes blocks in a view-dependent wavefront. These design choices significantly reduce our algorithm's memory footprint, and tie compute costs primarily to image size to provide better control over compute cost on lightweight devices. ### _Implicit isosurface Rendering_ Parker et al. [39] proposed the first implicit isosurface rendering technique, where rays were traversed through the volume grid and ray-voxel intersections computed directly by solving a cubic polynomial. Parker et al. [39] accelerated ray-traversal by skipping empty space using a multi-level grid hierarchy. Marmitt et al. [35] improved the quality and speed of ray-voxel intersection through a root finding approach based on isolation and iterative root finding. Wald et al. [52] further accelerated empty space skipping through an implicit \(k\)-d tree tracking value ranges of subregions of the volume. Hadwiger et al. [21] proposed an implicit isosurface rendering technique that combined object and image order empty skipping to accelerate rendering, coupled with a brick cache to reduce memory use. Their algorithm constructs a fine grid over the volume and rasterizes the front and back faces of cells that potentially contain the isosurface to generate ray start and end positions, then performs ray marching on the GPU to find isosurface intersections between these intervals. Hadwiger et al. employed a brick cache using a coarse grid to reduce memory use, where data for a grid cell is only uploaded to the GPU if its value range contains the isovalue. However, this caching strategy does not take into account visibility, and as such will upload data for occluded regions of the volume. In contrast, our algorithm performs data decompression on-demand as rays traverse the volume, reducing the working set to just the blocks visible in a single pass. Moreover, our proposed decompression and caching pipeline runs entirely on the GPU to eliminate CPU communication bottlenecks. 
### Ray-guided Large Volume Rendering A large body of work has explored techniques to address memory constraints in ray-guided volume rendering [12, 14, 19, 5, 8, 20], which we briefly review here due to their applicability to implicit isosurface raycasting. Ray-guided techniques for large volume rendering typically combine GPU-driven cache requests, made as rays encounter missing data during traversal, with a CPU-side data management system that services these requests by uploading new data to the GPU [19, 20, 8, 14]. The CPU-side data management system is typically coupled with a special purpose file format and takes advantage of low-level file system APIs to efficiently stream massive data sets off disk. Prior work has demonstrated interactive rendering of data sets ranging in size from hundreds of gigabytes [14] to terabytes [19, 8, 20]. Volume rendering techniques that operate on compressed data have been proposed to alleviate disk space and in-memory working set requirements [3, 15, 16, 37, 44, 50, 53, 17]. Schneider and Westermann [44] proposed a hierarchical quantization scheme that similarly decomposes the data into \(4^{3}\) blocks and computes a \(1/4\) resolution quantized representation of each block combined with two codebooks for the volume. Samples are then reconstructed in a slice-based renderer using the quantized volume and codebooks. Fout et al. [15, 16] proposed a vector quantization technique combined with deferred filtering for slice-based rendering. Rendering occurs in two-passes for each slice, slices are first decompressed to a small cache, after which filtering and blending is performed. Subsequent works have leveraged compressed GPU texture formats [37], combining bricking, quantization and run-length encoding [53], and extending these techniques with multiresolution data representations using tensor approximations [50] and octrees over compressed blocks [17]. As with prior work, we adopt a brick-based compression scheme to allow decompression of spatial subregions on-demand, leveraging ZFP [30] to compress the bricks. We note that it would be possible to leverage other brick-based compression schemes, and to combine our technique with multiresolution level of detail hierarchies to address undersampling issues, or out-of-core streaming methods to support larger data sets. Finally, our choice of only using ZFP for compressing the data amounts to using only the "precision" axis for data reduction; however, better data reduction and quality can be achieved by combining the precision and resolution axes [23]. ## 3 Progressive Wavefront Isosurface Raycasting Our algorithm is designed with a focus on reducing overall memory consumption and on achieving scalable and controllable rendering performance that is not strongly impacted by the data set size. These properties enable the algorithm to be used for visualizing massive data sets in the browser on lightweight end user devices. To achieve this, we propose an implicit isosurface raycasting algorithm that progressively traverses a wavefront of rays through a block-compressed volume (Figure 2). In each pass, new visible blocks that potentially contain the isosurface are decompressed and cached in an LRU cache to enable re-use of decompressed blocks across passes. Thus our algorithm's memory footprint and compute cost is dependent on the image size, view position, and isovalue. The progressively rendered image is displayed after each pass to improve interactivity. At a high-level, our algorithm proceeds as follows. 
First, volume data compressed using ZFP's [30] fixed-rate compression mode is uploaded to the GPU. We then construct a two-level macrocell grid [39] on the GPU to accelerate ray traversal (Section 3.1). For Figure 2: An illustration of our algorithm’s core loop on a slice of a \(16^{3}\) volume. (a) The volume has a single coarse macrocell (orange) with a \(4^{3}\) grid of ZFP blocks within it. After computing the initial set of rays we repeat steps (a-f) until all rays have terminated, displaying the partial image after each pass. (a) Rays are advanced to the next active block (green blocks), storing the block ID in \(R_{RID}\). Rays one and two traverse blocks whose value range combined with their neighbors contains the isovalue, indicating that their dual grid may contain the isosurface and thus must be traversed. (b) The blocks referenced in \(R_{RID}\) are marked visible and active (\(M_{R^{\prime}N}\), green blocks), and their neighbors to the positive side marked active (\(M_{R^{\prime}N}\), blue blocks). The neighbors will be required to populate the dual grid vertices for the visible blocks. (c) \(M_{R^{\prime}N}\) is passed to the LRU cache [51], which decompresses and caches any new blocks required for the pass, potentially evicting those that are no longer needed. (d) We then prepare the inputs for the block raytracing kernel through stream compactions and parallel sorts on the GPU. (e) Each block traverses its rays through its local data, terminating those that intersect the isosurface. (f) Finally, we compute the remaining number of active rays to determine if rendering is complete and display the current image to the user. each new isovalue or camera viewpoint, we compute the view rays for each pixel (Section 3.2). The following steps are then repeated to traverse the wavefront of rays through the volume to progressively render the isosurface (see Figure 2). First, we traverse the rays through the macrocoll grids to find the next block they must test for intersections with (Figure 1(a), Section 3.3). We then mark all the blocks that are visible or active in the current pass (Figure 1(b), Section 3.4). The data for uncached blocks are decompressed using a WebGPU port of ZFP's CUDA decompressor, and cached for re-use between passes through a GPU-driven LRU cache, as done by Usher and Pascucci [51] (Figure 1(c), Section 3.5). We then construct arrays of the visible block IDs, the number of rays intersecting each block, and the ray IDs sorted by their block ID to provide inputs to the block raycasting kernel (Figure 1(d), Section 3.6). Each block then intersects its rays with its local region of data to find ray-isosurface intersections (Figure 1(e), Section 3.7). Finally, we compute the remaining number of active rays to determine if rendering has completed (Figure 1(f)) and display the current image. ### Macrocell Grid Construction As done by Usher and Pascucci [51], we leverage the \(4^{3}\) block decomposition of the volume used by ZFP's fixed-rate compression mode to define a macrocoll grid over the volume. The macrocoll grid is used to skip blocks that do not contain the isovalue [39], and thereby skip decompressing them. In addition to the ZFP block macrocell grid, referred to as the fine grid, we compute a coarse macrocell grid by grouping \(4^{3}\) regions of ZFP blocks to form coarse cells. Each coarse cell contains \(16^{3}\) voxels, allowing larger regions of space to be skipped more efficiently to accelerate rendering of sparse isosurfaces. 
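A CPU-side NumPy sketch of this two-level value-range computation is shown below for illustration; the paper performs it in parallel on the GPU and, as described next, additionally folds each cell's range into that of its \(+x/y/z\) neighbors so that dual-grid values are covered. The volume dimensions are assumed here to be multiples of 16.

```python
import numpy as np

def build_macrocell_grids(volume):
    """Value ranges for the fine grid of 4^3-voxel ZFP blocks and the coarse
    grid of 4^3 blocks (16^3 voxels) for a z/y/x-ordered volume."""
    nz, ny, nx = volume.shape
    blocks = volume.reshape(nz // 4, 4, ny // 4, 4, nx // 4, 4)
    fine_min = blocks.min(axis=(1, 3, 5))
    fine_max = blocks.max(axis=(1, 3, 5))
    bz, by, bx = fine_min.shape
    cells_min = fine_min.reshape(bz // 4, 4, by // 4, 4, bx // 4, 4)
    cells_max = fine_max.reshape(bz // 4, 4, by // 4, 4, bx // 4, 4)
    coarse_min = cells_min.min(axis=(1, 3, 5))
    coarse_max = cells_max.max(axis=(1, 3, 5))
    return (fine_min, fine_max), (coarse_min, coarse_max)

vol = np.random.rand(32, 32, 32).astype(np.float32)
(fmin, fmax), (cmin, cmax) = build_macrocell_grids(vol)
print(fmin.shape, cmin.shape)  # (8, 8, 8) (2, 2, 2)
```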
The value range of each cell in the fine (or coarse) grid is computed by combining the range of the cell's voxels (or blocks) with those of its neighbors in the \(+x/y/z\) direction. The neighbor ranges are required to ensure we do not miss values contained in the cell's dual grid, which would lead to cracks. When a new volume is loaded, we compute the value range of each block and then combine each cell's range with its neighbors to populate the coarse and fine grids. These computations are run in parallel on the GPU. We note that our approach can be combined with an octree or other hierachical multiresolution acceleration structure over the ZFP blocks for LOD, rather than a two-level grid. ### Compute Initial Rays For each new camera position or isovalue, we begin by computing the initial camera rays. This is done through a standard GPU volume raycasting approach where the backfaces of the volume's bounding box are rasterized and ray directions computed in the fragment shader [13, 49]. The fragment shader writes the pixel's ray direction and the \(t\) value that it enters the volume out to an image-sized ray data buffer, requiring 16 bytes per-ray. Rays that miss the volume are marked as terminated. ### Macrocell Grid Traversal Each pass of the wavefront ray traversal begins by finding the next block along the ray that potentially contains the isosurface (Figure 1(a)). We traverse the two-level macrocell grids using the algorithm of Amanatides and Woo [2], skipping cells whose value range does not contain the isovalue. Rays begin by traversing the coarse grid. When a coarse cell containing the isovalue is encountered, we traverse the \(4^{3}\) grid of its blocks to determine if the ray intersects a block containing the isovalue. If such a block is found, we record the block ID for the ray in \(R_{BID}\), save the coarse and fine grid iterator traversal states, and exit the macrocell grid traversal kernel. \(R_{BID}\) is an image-sized buffer that stores the block ID each ray intersects, or UINT_MAX if none. Rays that exit the volume are marked as terminated. The macrocell grid traversal is run over all \(w\times h\) rays; rays that have terminated simply early exit from the kernel. The grid iterator states are saved and restored between passes to ensure that we do not skip cells due to precision issues that would occur when simply tracking the ray's current \(t\) value. Iterator states are stored in an image-sized buffer that tracks \(t_{\text{max}}\) and the current cell ID, requiring 16 bytes per-grid for a total of 32 bytes per-ray. ### Mark Visible and Active Blocks Next, we determine which blocks need to be decompressed to process ray-block intersections (Figure 1(b)). A block is marked both visible and active if a ray is traversing it (Figure 1(b), green blocks); blocks that are \(+x/y/z\) neighbors of visible blocks must also be decompressed to provide data for the visible block's dual grid, and are marked active (Figure 1(b), blue blocks). This pass is run on the GPU over the entire \(R_{BID}\) buffer, and thus scales with the image size rather than the number of blocks. Kernel invocations for terminated rays simply exit early. ### GPU-driven LRU Block Cache The buffer marking active blocks, \(M_{\text{BACt}}\), is passed to the GPU-driven LRU block cache of Usher and Pascucci [51] to produce a list of the new blocks that need to be decompressed and cached for the current pass (Figure 1(c)). 
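The range-skipping traversal of Section 3.3 can be illustrated with a single-level CPU sketch of the Amanatides and Woo stepping loop; the renderer nests the fine ZFP-block grid inside each coarse cell, runs this per ray on the GPU, and saves/restores the iterator state between passes. The ray origin is assumed to lie inside the grid.

```python
import numpy as np

def traverse_grid(origin, direction, cell_min, cell_max, isovalue):
    """Yield IDs of unit cells, in ray order, whose value range contains the
    isovalue. Components of origin/direction are ordered like the grid axes."""
    dims = np.array(cell_min.shape)
    cell = np.floor(origin).astype(int)
    step = np.where(direction >= 0, 1, -1)
    inv_dir = 1.0 / np.where(direction == 0, 1e-30, direction)
    t_max = (cell + (step > 0) - origin) * inv_dir   # t to the next cell boundary
    t_delta = np.abs(inv_dir)                        # t to cross one full cell
    while np.all((cell >= 0) & (cell < dims)):
        idx = tuple(cell)
        if cell_min[idx] <= isovalue <= cell_max[idx]:
            yield idx
        axis = int(np.argmin(t_max))                 # step across the closest boundary
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]

rng = np.random.default_rng(1)
cmin = rng.random((8, 8, 8)); cmax = cmin + 0.2
hits = list(traverse_grid(np.array([0.5, 0.5, 0.5]),
                          np.array([0.3, 0.4, 0.86]), cmin, cmax, 0.5))
```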
These blocks are decompressed into their assigned cache slots using a WebGPU port [51] of ZFP's [30] CUDA fixed-rate decompression algorithm. Rays are likely to require data from the same blocks traversed in the previous few passes. The data from these blocks will be readily available in the cache, reducing the decompression cost for the pass. Similarly, rays are likely to require data from the same blocks as their neighbors in a given pass. Shared blocks will be decompressed once and cached, amortizing the decompression workload over multiple rays. Performing the cache update each pass allows us to replace unneeded blocks with new ones each pass, reducing the algorithm's working set to just the active blocks in an individual pass. This is in contrast to surface extraction based methods [51], which decompress and store all the blocks that potentially contain the isosurface at once, regardless of visibility.

Figure 3: An illustration of the ray traversal passes for an example isosurface on a \(16^{3}\) volume, without (b, c.1, c.2) and with (b, d) speculation. Green squares mark the blocks currently being traversed by a ray. (a) Four of the six initial rays intersect the volume's bounds. (b) Pass one is identical in both cases, as not enough rays have terminated to enable speculation. (c.1, c.2) Without speculation, rays one and two traverse one block at a time until they hit the isosurface, requiring two additional passes with low GPU utilization to complete the rendering. (d) With speculation, enough rays have terminated after pass one that \(N_{\text{Spec}}=3\), increasing utilization to 83% and completing the rendering in one additional pass by intersecting rays one and two against multiple blocks. A trade-off of speculative execution strategies is the potential for wasted computation. This is illustrated by ray two, which traverses an extra occluded block in pass two. Overall, our speculative execution strategy significantly reduces the total number of passes, and thus total time, required to render isosurfaces.

### Build Raytracing Kernel Inputs

At this point, we have all the volume and ray data required to traverse rays through the blocks they intersect and test for ray-isosurface intersections. However, a large number of rays will likely traverse the same block in each pass. If we were to run the raytracing kernel in parallel over the rays we would waste bandwidth by repeatedly reloading the same block from memory. Instead, we run the raytracing kernel in parallel over the visible blocks. The raytracing kernel then loads each block's dual grid from memory just once and computes ray-isosurface intersections for the rays passing through it. The inputs to the raytracing kernel are the list of visible block IDs (\(I_{\text{BVis}}\)), the number of rays intersecting each block (\(N_{\text{BRays}}\)), the offsets to the block's set of rays (\(O_{\text{BRays}}\)), and the active ray IDs sorted by their block ID (\(I_{\text{RAct}}\)). These inputs are produced through a series of stream compactions, prefix sums, and parallel sorts on the GPU (Figure 2d). The list of visible block IDs, \(I_{\text{BVis}}\), is computed via a stream compaction. The number of rays intersecting each block, \(N_{\text{BRays}}\), is computed using a kernel run for each ray that atomically increments the block's ray count. The offset to each block's set of ray IDs, \(O_{\text{BRays}}\), is computed by performing a prefix sum on \(N_{\text{BRays}}\).
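Together with the final sort by block ID described next, the construction of these inputs can be sketched on the CPU with NumPy as follows; this is an illustrative stand-in for the GPU's compaction, atomic-counter, scan, and sort kernels, with variable names mirroring the buffers above.

```python
import numpy as np

UINT_MAX = np.uint32(0xFFFFFFFF)

def build_raytracing_inputs(R_BID):
    """Build block raytracing kernel inputs from the per-ray block ID buffer.

    R_BID: uint32 array of length w*h; entries equal to UINT_MAX belong to
    terminated rays and are filtered out.
    """
    ray_ids = np.arange(R_BID.size, dtype=np.uint32)
    active = R_BID != UINT_MAX
    act_rays, act_blocks = ray_ids[active], R_BID[active]

    # Visible block IDs and per-block ray counts (a compaction plus a
    # histogram stand in for the GPU's stream compaction and atomic adds).
    I_BVis, N_BRays = np.unique(act_blocks, return_counts=True)

    # Offsets to each block's rays: an exclusive prefix sum of the counts.
    O_BRays = np.cumsum(N_BRays) - N_BRays

    # Active ray IDs sorted by the block they intersect, so that block k's
    # rays are I_RAct[O_BRays[k] : O_BRays[k] + N_BRays[k]].
    I_RAct = act_rays[np.argsort(act_blocks, kind="stable")]
    return I_BVis, N_BRays, O_BRays, I_RAct
```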
Finally, we compute the list of active ray IDs (\(I_{\text{RAcI}}\)) sorted by their block ID (\(I_{\text{RAcIbI}}\)) by compacting the active ray IDs and their block IDs, then performing a parallel sort by key, using the block ID as the key. ### Raytracing Visible Blocks The raytracing kernel is run in parallel over the visible blocks, and is responsible for taking the set of rays intersecting the block and traversing them through its dual grid to find ray-isosurface intersections (Figure 2e). The kernel consists of two steps: loading the block's dual grid data into shared memory, followed by traversing the rays through the dual grid to compute intersections. The block's dual grid consists of its local data combined with the face/edge/corner values from its neighbors in the \(+x/y/z\) direction, if those neighbors exist. We employ the parallel loading strategy of Usher and Pascucci [51] to load the dual grid data into shared memory. Kernel work groups are launched with 64 threads, corresponding to one thread per dual grid cell, and have a work group shared memory region with room for \(5^{3}\) floating point values to store the full set of local and neighbor values for the dual grid. First, the work group loads the 64 vertices corresponding to the block's local \(4^{3}\) data into the shared memory region, after which a subset of threads load data from the \(+x/y/z\) face, edge, and corner neighbor blocks to complete the dual grid. Finally, the work group synchronizes on a memory barrier to ensure the complete dual grid data is visible to all threads in the group. With the dual grid loaded into shared memory, we can now traverse rays through it to find ray-isosurface intersections. The 64 threads in the work group are used to process the block's rays in parallel in chunks of 64 rays, with each thread responsible for a different ray in the chunk. We again use the Amanatides and Woo [2] grid traversal algorithm to step rays through the dual grid. Ray-isosurface intersections are computed using the ray-voxel intersection technique of Marmitt et al. [35]. If an intersection is found, the shaded color and depth is output to the ray's pixel in the framebuffer and the ray is marked as terminated. ## 4 Increasing GPU Utilization with Speculation Our algorithm as described in Section 3 achieves interactive isosurface rendering of massive data sets within a small memory footprint. However, we observed that the algorithm would take a large number of passes to complete the isosurface on average. Each pass incurs some fixed time costs, and this translated into long total surface rendering times. We further observed that, on average, after 10 passes there were \(<20\%\) of rays still active, and that by pass 25 there were \(<1\%\) of rays still active (see Figure 4). These long tail rays are those that just miss the surface and must be traversed through many blocks before finding an intersection or exiting the volume. To address this issue, we extend our algorithm to enable _speculative_ intersection of rays with additional blocks to increase utilization and terminate rays in fewer passes (Figure 3). To avoid scaling up memory consumption and compute costs by the speculation count, we treat the various image-sized ray and block data buffers used by our algorithm as a virtual GPU with \(w\times h\) threads and memory slots. As rays terminate, these slots become available for other active rays to use for speculation. 
For simplicity we use a constant speculation count for all rays, defined as \(N_{\text{Spec}}=\lfloor\frac{w\times h}{N_{\text{Act}}}\rfloor\), where \(N_{\text{Act}}\) is the number of active rays. To balance between terminating rays in fewer passes and performing unnecessary computation, we limit the speculation count to a maximum of 64.

Figure 4: Our speculative ray traversal improves GPU utilization to reduce the number of passes needed to render the isosurface by 10\(\times\) on average, thereby reducing the total time to complete the isosurface by 4.8\(\times\) on average. Although average time per pass roughly doubles, this is more than made up for by the reduction in the total number of passes required. The vertical black lines mark when the surface was completed for each configuration. The dotted green line shows the speculation count, which is increased as rays terminate to process additional speculated ray-block intersections for the remaining active rays in parallel to terminate them sooner. Timings are reported on an RTX 3080.

The following modifications are made to the algorithm described previously (Section 3) to enable speculation. The macrocell grid traversal kernel now advances each ray through \(N_{\text{Spec}}\) blocks, recording multiple block IDs for each ray (Section 4.1). As the macrocell grid traversal will write out the same ray ID \(N_{\text{Spec}}\) times in \(R_{\text{ID}}\), ray IDs in the buffer are no longer unique identifiers, and we must introduce an additional speculated ray-block offset buffer to the raytracing kernel inputs (Section 4.2). To prevent speculated ray-block intersections from trampling each other's results, the block raytracing kernel is modified to write intersection results out to a new RGBZ buffer instead of directly to the framebuffer (Section 4.3). A new kernel is introduced to select the closest hit found, if any, for a given ray and write the final color to the framebuffer (Section 4.4). At the end of each pass, we keep the prefix sum result buffer \(O_{\text{Act}}\) that is produced when computing \(N_{\text{Act}}\) and update \(N_{\text{Spec}}\). \(O_{\text{Act}}\) is used to assign offsets in \(R_{\text{ID}}\) and \(R_{\text{BID}}\) to the remaining active rays.

### Speculative Macrocell Grid Traversal

Our speculative macrocell grid traversal performs the same traversal as before (Section 3.3), with the key difference being that it traverses the ray until finding up to \(N_{\text{Spec}}\) visible blocks instead of just one (see Figure 3), and records all the visible block IDs encountered to be tested for intersections. The set of blocks being traversed by a given ray may be disconnected due to empty space-skipping. The macrocell grid traversal kernel is run over all \(w\times h\) pixels as before, with terminated rays exiting early. The \(N_{\text{Spec}}\) entries for each active ray are written at offsets given by \(o=O_{\text{Act}}[\text{ray}]\times N_{\text{Spec}}\). The visible block IDs for each active ray are written into \(R_{\text{BID}}\) starting at \(o\), with up to \(N_{\text{Spec}}\) entries written for each ray. If the ray exits the volume early, its remaining \(R_{\text{BID}}\) entries are left filled with UINT_MAX and filtered out in subsequent passes in the same manner as terminated rays. The ray ID buffer, \(R_{\text{ID}}\), is populated by writing out \(N_{\text{Spec}}\) entries of the ray ID starting at \(o\). As before, each ray maintains just one coarse and fine grid iterator state.
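The per-ray bookkeeping of this section can be summarized in a short Python sketch; the function names are ours and the real implementation performs these writes inside the WebGPU traversal kernel.

```python
import numpy as np

UINT_MAX = np.uint32(0xFFFFFFFF)
MAX_SPEC = 64

def speculation_count(w, h, n_active):
    # N_Spec = floor(w*h / N_Act), clamped to 64 to bound wasted speculation.
    return min(MAX_SPEC, (w * h) // max(n_active, 1))

def write_speculated_entries(ray_id, blocks_found, O_Act, N_Spec, R_ID, R_BID):
    """Record up to N_Spec speculated block IDs for one active ray.

    blocks_found holds the visible blocks returned by the macrocell grid
    traversal (fewer than N_Spec if the ray exits the volume early);
    unused slots stay UINT_MAX and are filtered out in later passes.
    """
    o = O_Act[ray_id] * N_Spec            # this ray's contiguous slot range
    R_ID[o:o + N_Spec] = ray_id           # ray ID repeated N_Spec times
    R_BID[o:o + N_Spec] = UINT_MAX
    n = min(len(blocks_found), N_Spec)
    R_BID[o:o + n] = blocks_found[:n]
```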
The iterator states are saved out after \(N_{\text{Spec}}\) visible blocks have been found, to resume traversal after the last block being intersected in the pass. As each speculated ray-block intersection writes its block ID to the \(R_{\text{BID}}\) buffer as before, the mark visible and active blocks kernel does not require modification to support speculation. The kernel is run over the entire \(R_{\text{BID}}\) buffer and marks blocks active as before, with the only difference being that some visible block IDs in the buffer correspond to speculated ray-block intersections. ### Build Speculated Raytracing Kernel Inputs The construction of the inputs for the raytracing kernel when speculation is enabled is nearly identical to the step without speculation (Section 3.6). The key difference is that ray IDs are now repeated \(N_{\text{Spec}}\) times in the active ray ID buffer \(R_{\text{Act}}\), meaning that the ray ID alone is no longer a unique identifier for a ray-block intersection. We introduce an additional offset buffer, \(O_{\text{Spec}}\), that assigns a unique index to each ray-block intersection. \(O_{\text{Spec}}\) is produced by scanning the buffer that marks active ray-block intersections, \(M_{\text{RAct}}\). \(M_{\text{Ract}}\) is produced as before during the compaction of active ray IDs (Figure 2d.3). As with \(I_{\text{RAct}}\), \(O_{\text{Spec}}\) is compacted down to just the entries for active ray-block intersections and sorted by block ID to match the order of \(I_{\text{RAct}}\). The list of visible block IDs (\(I_{\text{BVis}}\)), the number of rays to process for each block (\(N_{\text{BRays}}\)), and the offsets (\(O_{\text{BRays}}\)) are produced as before. ### Raytracing Visible Blocks with Speculation With the entries in \(I_{\text{RActive}}\), \(I_{\text{BVis}}\), \(N_{\text{BRays}}\) and \(O_{\text{BRays}}\) already accounting for speculated ray-block intersections, few modifications are needed to the raytracing kernel. As before, after loading the dual grid data each visible block reads its ray IDs from the offset given in \(O_{\text{BRays}}\) and traverses the rays through its dual grid to find ray-isosurface intersections. However, as intersections may be found in multiple blocks for a given ray when speculation is enabled, the kernel is modified to output intersection results to a new RGBZ buffer instead of directly to the framebuffer. Color and depth values for ray-isosurface intersections are written at offsets given in \(O_{\text{Spec}}\) for the ray-block intersection. ### Depth Compositing Speculated Intersections The final step in our speculative rendering pipeline is to perform depth compositing on the set of intersections found for each ray. A kernel is run for each active ray that iterates through its \(N_{\text{Spec}}\) potential intersections to select the closest one, if any, and writes it to the framebuffer. We note that it would be possible to skip the depth compositing step if WebGPU supported 64-bit atomics, as the depth sorting could be performed using atomic min operations in the raytracing kernel instead [47]. Rays that exit the volume without finding a hit are also marked as terminated in this step. ## 5 Evaluation We evaluate the rendering performance and memory consumption of our method on data sets ranging in size from \(256^{3}\) (16.7M voxels) up to \(2048\times 2048\times 1920\) (8.05B voxels) (Table I). Each data set is compressed offline with ZFP to produce the compressed data used by the renderer. 
As ZFP only supports single- and double-precision floating point values, the compression step also converts any non single-precision data sets to single-precision. Each data set is benchmarked on 100 random isovalues sampled over a range covering values of interest in the data. Each isovalue is rendered over a 10 position camera orbit to a \(1280\times 720\) framebuffer. We also demonstrate visualization of complex isosurfaces on the 1TB DNS data set; the DNS is first resampled from \(10240\times 7680\times 1536\) to \(1920\times 1440\times 288\) through a combination of adaptive precision and resolution techniques [24], then compressed with ZFP. The test data sets cover a range of isosurface visualization scenarios, with some being especially challenging for surface extraction techniques. The Skull, Kingsnake, Chameleon, and Beechnut were produced through various scanning technologies. The Skull and Chameleon consist of relatively smooth shell-like isosurfaces, while the Kingsnake contains many fine features. The Beechnut is a challenging case with many fine features and noise, resulting in a large isosurface mesh where large numbers of triangles will be occluded. The TACC, Plasma, Miranda, JICF Q, DNS, and Richtmyer-Meshkov (R-M) were produced through various simulation codes. The Miranda, DNS, and R-M pose similar challenges to surface extraction techniques as the Beechnut; they consist of highly turbulent isosurfaces that result in large meshes with large numbers of occluded triangles. The JICF Q is similarly challenging, as a few isosurfaces cover a substantial portion of the domain, producing a large surface that requires a large amount of data to be decompressed. We report performance results of our algorithm on three different systems. Two are representative of lightweight end-user systems: a laptop with an i7-1165G7 CPU and integrated GPU (XPS 13), and a Mac Mini with an M1 chip (M1 Mac Mini). The final system is a desktop with an RTX 3080 GPU and an i9-12900K CPU. We conduct a detailed evaluation of our method's performance and scalability, and evaluate it against the state of the art in GPU-based large-scale isosurface extraction [51]. In Section 5.1, we discuss the overall rendering performance of our method and the benefits of our speculative execution strategy; in Section 5.2, we evaluate the scalability of our method with respect to data set size and image resolution compared to the state of the art. Finally, Section 5.3 evaluates the memory consumption of our method against the state of the art.

### Rendering Performance

The average time per pass and total time to complete the isosurface across the data sets and hardware platforms tested are shown in Figure 5. Our method achieves interactive pass computation times, and thus rendering frame rates, even when visualizing the massive and complex isosurfaces of the Beechnut, Miranda, JICF Q and DNS data sets on the XPS 13 and M1 Mac Mini. Table 2 lists statistics about the computation recorded over the isosurface benchmarks. Utilization is reported as the percentage of \(w\times h\) slots of the virtual GPU being used for ray-block intersections. We find that our speculative execution strategy increases GPU utilization to complete surfaces in far fewer passes, reducing the total time taken to complete the isosurface (Figure 4, Figure 4(b)), at the cost of slightly increased per-pass times.
The view-dependent nature of our approach allows rendering surfaces with a small memory footprint, with just 1.3% of blocks visible per-pass on average. When comparing performance across the data sets tested (Figure 5), we observe that our algorithm's performance is nearly independent of the data set size. Instead, our scales with the visible surface area and complexity of the isosurface. We achieve similar performance on data sets with similar isosurface structure, such as the Skull, Plasma, Kingsnake, Chameleon and JICF Q, even though these data sets range in size from \(256^{3}\) to \(1408\times 1080\times 1100\). These data sets have relatively smooth isosurfaces, where rays can quickly skip empty space to reach the isosurface and find an intersection. In contrast, data sets with noisier or more complex isosurfaces such as the Beechnut, Miranda, DNS, and R-M, see higher rendering times, as more data must be processed for each ray to find an intersection. We find that our progressive approach is valuable to quickly provide a nearly complete image of the data set, with 75% of pixels complete on average by pass two across the data sets tested (Figure 6). ### Scalability with Image and Data Size The performance of our method is primarily driven by the visible surface area and complexity of the isosurface being rendered, and is less tied to the data set size. Another main driver of rendering cost in our method is the number of pixels, allowing rendering performance to be increased by reducing the image size. This is in line with prior implicit isosurface and volume raycasting techniques, which have image-order scaling. Explicit isosurface extraction techniques, such as BCMC [51], typically extract the complete triangle mesh for the isosurface, including triangles that will be occluded in the final rendering. Although extraction techniques are output-sensitive, they \begin{table} \begin{tabular}{l r r r r} \hline \hline \multirow{2}{*}{Data set} & \multirow{2}{*}{\begin{tabular}{c} Median \\ \# Passes \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} Avg. Blocks \\ Visible/Pass \\ \end{tabular} } & \multirow{2}{*}{ \begin{tabular}{c} Median Spec. \\ Count \\ \end{tabular} } & \multicolumn{1}{c}{Avg.} \\ & & & & \\ \hline Skull & 3 & 2.09\% & 7 & 30.71\% \\ TACC & 6 & 4.12\% & 5 & 51.32\% \\ Plasma & 4 & 1.56\% & 8 & 49.00\% \\ Kingsnake & 4 & 0.54\% & 13 & 35.68\% \\ Chameleon & 3 & 0.34\% & 16 & 29.55\% \\ Beechnut & 7 & 0.64\% & 13 & 54.45\% \\ Miranda & 5 & 0.83\% & 5 & 60.22\% \\ JICF Q & 3 & 0.25\% & 23 & 27.15\% \\ DNS & 4 & 2.61\% & 10 & 53.94\% \\ R-M & 4 & 0.27\% & 9 & 55.36\% \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics about the number of passes, percentage of blocks visible, speculation count, and utilization over the benchmarks. Our view-dependent approach reduces memory consumption by requiring just a small fraction of the data each pass. Figure 5: Our method achieves interactive rendering frames (a) across the data sets tested, even on the XPS 13 and M1 Mac Mini. Moreover, rendering cost does not scale significantly with data size, allowing large and complex to be rendered interactively on lightweight systems. Our speculative approach completes rendering in few passes, allowing for reasonable surface completion times (b). Figure 6: Images rendered with our algorithm are over 75% complete by the second pass. scale with the size of the output isosurface and are more effected by data set size. 
Figure 7 quantifies the benefits of these properties of our algorithm against BCMC [51]. We conduct benchmarks rendering at \(1920\times 1080\) (\(1080\)p), \(1280\times 720\) (\(720\)p), and \(640\times 360\) (\(360\)p) on the Plasma, Chameleon, and Miranda data sets, and compare the average pass and total times of our method against the isosurface extraction times achieved by BCMC. Benchmarks for both methods were run over 100 random isovalies. As before, rendering performance of our method is measured over a 10 position camera orbit for each isovalie. Results for our method are shown for each resolution, BCMC is shown as a solid line as its compute costs are resolution independent. Our method achieves a \(1.50\times\) reduction in per pass and total times when scaling down from 1080p to 720p, and an additional \(1.55\times\) reduction when scaling down from 720p to 360p. We compare our algorithm's interactivity and total isosurface computation times against BCMC by comparing per-pass (Figures 7a, 7c and 7e) and total times (Figures 7b, 7d and 7f) against the surface extraction times achieved by BCMC. We find that our algorithm provides better interactivity through its progressive rendering approach in all but two cases, the Plasma on the XPS 13 and RTX 3080. The interactivity improvement achieved by our method is especially pronounced on data sets with large and complex isosurfaces such as the Miranda, where BCMC struggles with the large number of active blocks and the size of the surface mesh. At 1080p on the Miranda we achieve \(7.2\times\), \(9.5\times\) and \(2.9\times\) faster pass times on XPS 13, M1 Mac Mini and RTX 3080 respectively, compared to BCMC's surface extraction times. Moreover, our algorithm achieves faster total surface computation times than BCMC on the XPS 13 and M1 Mac Mini on the Miranda at all resolutions. On the Miranda at 1080p we achieve speedups over BCMC of \(1.4\times\) and \(1.9\times\) on the XPS 13 and M1 Mac Mini respectively. On the Miranda at 720p these speedups grow to \(2.2\times\) and \(2.7\times\) on the XPS 13 and M1 Mac Mini respectively. On the Chameleon at 720p we achieve total surface computation times on par with BCMC on the XPS 13 and M1 Mac Mini, while requiring substantially less memory. BCMC typically outperforms our approach on the RTX 3080, though we do achieve a \(1.3\times\) speed-up at 360p on the Miranda. ### Memory Consumption Finally, we compare the memory overhead of our technique against BCMC [51]. BCMC provides a direct comparison point for explicit isosurface extraction algorithms, as it also works directly on compressed data sets and performs on the fly decompression to reduce memory overhead. We report average memory statistics over the 100 random isovalue and 10 camera position orbit benchmarks, rendering at \(1280\times 720\) (Tables 3 and 4). We achieve an average memory overhead reduction of \(3.1\times\) compared to BCMC on the data sets BCMC is able to compute on without running out of memory (Table 3). These memory reductions are achieved through our algorithm's use of implicit ray-isosurface intersection, which eliminates the need for a large vertex buffer, and our progressive wavefront traversal, which significantly reduces the amount of data that must decompressed to render the isosurface. Furthermore, BCMC failed to compute the isosurface on the Beechnut, JICF Q, DNS, and R-M, due to exceeding WebGPU's buffer size limit of 4GB. 
These large data sets have noisy or turbulent isosurfaces, resulting in some isosurfaces containing over 500M triangles. Even with BCMC's quantized vertex format, these large \begin{table} \begin{tabular}{l r r r} \hline \hline Data set & BCMC Avg. Cache Mem & Our Avg. Cache Mem & Reduction \\ \hline Skull & 66.0MB & 16.9MB & \(3.9\times\) \\ TACC & 55.0MB & 16.0MB & \(3.4\times\) \\ Plasma & 107MB & 44.7MB & \(2.4\times\) \\ Kingsnake & 545MB & 145MB & \(3.8\times\) \\ Chameleon & 375MB & 102MB & \(3.7\times\) \\ Beechnut & 1.99GB & 243MB & \(8.2\times\) \\ Miranda & 1.40GB & 167MB & \(8.4\times\) \\ JICF Q & — & 170MB & — \\ DNS & 2.02GB & 406MB & \(5\times\) \\ R-M & — & 522MB & — \\ \hline \hline \end{tabular} \end{table} Table 4: The average cache size required by our algorithm vs BCMC. Our progressive wavefront traversal achieves a significant reduction in the volume working set size, providing a \(4.8\times\) reduction in cache size on average. Entries marked by — crashed due to the cache exceeding the 4GB buffer binding limit in WebGPU. \begin{table} \begin{tabular}{l r r r} \hline \hline Data set & BCMC Avg. Mem & Our Avg. Mem & Reduction \\ \hline Skull & 329MB & 109MB & \(3.02\times\) \\ TACC & 187MB & 108MB & \(1.73\times\) \\ Plasma & 563MB & 191MB & \(2.95\times\) \\ Kingsnake & 1.34GB & 607MB & \(2.22\times\) \\ Chameleon & 2.09GB & 691MB & \(3.02\times\) \\ Beechnut & — & 1.06GB & — \\ Miranda & 4.20GB & 737MB & \(5.70\times\) \\ JICF Q & — & 1.00GB & — \\ DNS & — & 875MB & — \\ R-M & — & 4.19GB & — \\ \hline \hline \end{tabular} \end{table} Table 3: The average total compute memory overhead required by our algorithm vs. BCMC. We achieve an average \(3.1\times\) reduction in total memory overhead. Entries marked by — crashed due to exceeding the 4GB buffer binding limit in WebGPU. Figure 7: The performance scaling of our approach and BCMC with image resolution and data set size. BCMC’s compute cost is strongly tied to the size of the data set and the size of the isosurface triangle mesh, making it difficult to scale down to ensure interactivity. In contrast, our approach can be easily scaled down by reducing the image resolution, and is less effected by data size overall, enabling interactive rendering of massive data sets on lightweight devices. isosurfaces exceed 4GB, resulting in a crash. These results were run on the RTX 3080, which has 12GB of GPU memory; however, on the XPS 13 or M1 Mac Mini these data sets would fail due to running out of GPU memory, even if the size limit was lifted or otherwise worked around. Our algorithm is able to achieve interactive rendering of these massive isosurfaces, even on the XPS 13 and M1 Mac Mini. By progressively stepping rays through the volume and decompressing and caching just the blocks required for each pass, we achieve significant reductions in the amount of data that must be decompressed. Table IV compares the average cache memory required by BCMC and our algorithm. The Miranda and DNS results for BCMC were measured by disabling the vertex extraction step. However, on the JICF Q and R-M, BCMC's active block cache memory alone exceeded 4GB, resulting in a crash. We achieve an average cache size reduction of 4.8\(\times\) compared to BCMC on the data sets it is able to compute, with far greater reductions achieved on the Beechnut (8.2\(\times\)) and Miranda (8.4\(\times\)). 
The Beechnut is a noisy microCT scan and the Miranda is from a turbulent fluid mixing simulation, resulting in large numbers of active but occluded blocks being decompressed and processed by BCMC. Our view-dependent algorithm achieves substantial reductions in the number of blocks that must be decompressed (Figure 8). These reductions come from a number of factors: our algorithm can replace unneeded blocks with active ones each pass to minimize its working set; blocks that are occluded or otherwise not visible are not decompressed; and the number of visible blocks is driven by the image size and view position. Compared to BCMC, we achieve a 6.7\(\times\) reduction in the average number of active blocks and a 5.7\(\times\) reduction in the maximum number of active blocks. The JICF Q and R-M results for BCMC were measured by only recording the number of active blocks for each isovalue and skipping all other computation to avoid crashing. ## 5 Conclusion and Limitations We have proposed a new view-dependent isosurface rendering algorithm designed specifically for interactive visualization of massive isosurfaces on lightweight consumer platforms. This is achieved through a progressive wavefront ray traversal algorithm with per-pass block cache updates, where blocks of the data are decompressed and cached on demand for each pass. We accelerate isosurface rendering completion and increase GPU utilization by introducing ray-block speculation into the algorithm. Speculation enables us to fill open compute slots left by terminated rays with speculated ray-block intersections for active rays to better leverage the GPU's parallel compute power to complete the rendering in fewer passes and less time. Our progressive, view-dependent isosurface rendering algorithm is well suited to large scale isosurface visualization on end-user devices. The compute and memory costs of our algorithm are not strongly affected by data set size and can be easily reduced to scale it down to lightweight systems and mobile devices by simply reducing the image resolution. Furthermore, the progressive rendering provided by our algorithm makes it well suited to provide low-latency interactive visualization. Our algorithm runs entirely in the browser on the GPU through WebGPU to expand access to large scale data visualization, and is available on GitHub1, along with a live demo2. Footnote 1: [https://github.com/Twinklebear/webgpu-prog-iso](https://github.com/Twinklebear/webgpu-prog-iso) Footnote 2: [http://progiso-demo.willusher.io](http://progiso-demo.willusher.io) However, our approach is not without its limitations. Although our method scales up well to large data sets, it does not scale down to small data sets. For example, on the XPS13 and M1 Mac Mini BCMC achieves faster surface extraction times on the Plasma and, in some cases, on the Chameleon. Our approach still uses less memory on these data sets; however, BCMC's overhead on smaller data sets is likely acceptable for the performance improvement. Further optimization efforts would be worthwhile to improve performance on smaller data sets, improve scalability with image size, and reduce overhead to improve per-pass and total rendering times overall. We also find call overhead in JavaScript and WebGPU and note that better performance could be achieved with a CUDA implementation where optimized libraries such as Thrust and CUB are available. Bringing these libraries to WebGPU would be a valuable effort. 
There are also a number of interesting avenues left open for future work. Although our speculation approach increases utilization and achieves large speed-ups in total surface rendering time, our use of a global speculation count for all rays is restrictive. It may be possible to achieve higher utilization by tracking a per-ray speculation count; however, the added complexity may introduce additional overhead. It would also be worthwhile to explore other acceleration structures that can be built over the macrocell grid instead of our two-level grid to improve space skipping and provide level of detail or multiresolution hierarchies to address current limitations of our method with respect to undersampling the data. For example, an implicit \(k\)-d tree [52] built over the blocks could further accelerate empty space skipping, or multiresolution and compression techniques from work on compressed volume rendering could be integrated [3, 15, 16, 17, 37, 44, 50, 53]. Leveraging multiresolution hierarchies within our method would address limitations with respect to undersampling of the high-resolution data, and enable rendering larger data sets. To improve image quality, it would be worth exploring support for secondary ray tracing effects in our pipeline to add shadows, ambient occlusion, and global illumination with denoising. Finally, as our algorithm's rendering and memory costs are primarily driven by the number of rays traced and the number of passes, it would be worthwhile to combine it with machine learning approaches for image up-scaling [56], image in-painting, and foveated rendering [4] Such a combination would reduce the image resolution, number of passes, and rays traced respectively; potentially reducing total surface rendering times to the cost of one or two passes in our current method, within the same memory footprint. ###### Acknowledgements. This work was funded in part by NSF RII Track-4 award 2132013, NSF PPoSS planning award 2217036, NSF PPoSS large award 2316157 and, NSF collaborative research award 2221811. Figure 8: The (a) average and (b) max percentage of active blocks required by BCMC and our algorithm. Our approach updates the cache each pass, storing just the blocks needed by active rays. In contrast, BCMC decompresses all blocks that may contain the isosurface.
2309.14074
FlexCast: genuine overlay-based atomic multicast
Atomic multicast is a communication abstraction where messages are propagated to groups of processes with reliability and order guarantees. Atomic multicast is at the core of strongly consistent storage and transactional systems. This paper presents FlexCast, the first genuine overlay-based atomic multicast protocol. Genuineness captures the essence of atomic multicast in that only the sender of a message and the message's destinations coordinate to order the message, leading to efficient protocols. Overlay-based protocols restrict how process groups can communicate. Limiting communication leads to simpler protocols and reduces the amount of information each process must keep about the rest of the system. FlexCast implements genuine atomic multicast using a complete DAG overlay. We experimentally evaluate FlexCast in a geographically distributed environment using gTPC-C, a variation of the TPC-C benchmark that takes into account geographical distribution and locality. We show that, by exploiting genuineness and workload locality, FlexCast outperforms well-established atomic multicast protocols without the inherent communication overhead of state-of-the-art non-genuine multicast protocols.
Eliã Batista, Paulo Coelho, Eduardo Alchieri, Fernando Dotti, Fernando Pedone
2023-09-25T12:09:54Z
http://arxiv.org/abs/2309.14074v3
# FlexCast: genuine overlay-based atomic multicast ###### Abstract Atomic multicast is a communication abstraction where messages are propagated to groups of processes with reliability and order guarantees. Atomic multicast is at the core of strongly consistent storage and transactional systems. This paper presents FlexCast, the first genuine overlay-based atomic multicast protocol. Genuineness captures the essence of atomic multicast in that only the sender of a message and the message's destinations coordinate to order the message, leading to efficient protocols. Overlay-based protocols restrict how process groups can communicate. Limiting communication leads to simpler protocols and reduces the amount of information each process must keep about the rest of the system. FlexCast implements genuine atomic multicast using a complete DAG overlay. We experimentally evaluate FlexCast in a geographically distributed environment using gTPC-C, a variation of the TPC-C benchmark that takes into account geographical distribution and locality. We show that, by exploiting genuineness and workload locality, FlexCast outperforms well-established atomic multicast protocols without the inherent communication overhead of state-of-the-art non-genuine multicast protocols. ## 1 Introduction Atomic multicast is a communication abstraction that propagates messages to groups of processes with reliability and order guarantees. Agreeing on the order of messages in the presence of failures is a notoriously difficult problem [13]. Yet, message ordering is at the core of strongly consistent storage and transactional systems (e.g., [6, 26, 27]). Some systems implement strong consistency using an ad-hoc ordering protocol (e.g., [8, 6]). Atomic multicast encapsulates the logic for ordering messages and thereby reduces the complexity of designing fault-tolerant strongly consistent distributed systems. In light of their important role, it is not surprising that many atomic multicast protocols have been proposed in the literature (e.g., [9, 10, 22, 14, 23]). These protocols can be classified according to two criteria: (a) genuineness (or lack of) and (b) process connectivity. GenuinenessIn a genuine atomic multicast protocol, only the message sender and destinations communicate to order a multicast message [17]. Some non-genuine atomic multicast protocols order messages using a fixed group of processes or involving all groups, regardless of the destination of the messages. In geographically distributed settings, a genuine atomic multicast protocol can better exploit locality than a non-genuine protocol since messages addressed to nearby groups do not introduce communication with remote groups. Moreover, because a group only receives messages that are addressed to the group, in a genuine atomic multicast protocol groups do not incur communication overhead from relaying messages to the destinations. This is important in geographically distributed environments where communication across wide-area links represents an important cost (e.g., Amazon Web Services). ConnectivityMost atomic multicast protocols assume that processes can communicate directly with one another. Alternatively, processes communicate following an _overlay_, which determines which processes can exchange messages with which other processes. Imposing limits on communication has advantages. 
For example, overlays can represent the structure of administrative domains, simplify the design of protocols, and reduce the amount of information each process must keep about the rest of the system (e.g., key management in Byzantine fault tolerant protocols [4]). Combining genuineness and overlays is challenging. Existing atomic multicast protocols focus on one aspect or the other but not both. For example, all existing genuine atomic multicast protocols assume a fully connected overlay. Hierarchical protocols, which structure communication between groups as a tree, are not genuine. For example, in ByzCast [4], a multicast message is first sent to the lowest common ancestor of the message destinations, and then proceeds down the tree until it reaches all destinations. ByzCast's logic is simple and processes in a group only need to keep information about their parent and children. However, it is not genuine since a message addressed to the children of group \(g\), but not to \(g\), are first sent to \(g\) and then propagated to \(g\)'s children, violating genuineness. Figure 1 quantifies ByzCast's communication overhead, computed as one minus the ratio between the number of messages that a group delivers (i.e., messages addressed to the group) and the number of messages the group receives as part of communication imposed by the tree overlay, and expressed as a percentage. On average, groups incur on almost 10% of communication overhead. Some groups, however, are more penalized than others, depending on their position in the tree. In particular, about 23% and 36% of the communication of groups 5 and 9, respectively, is overhead. This is in contrast to genuine atomic multicast protocols, which have no communication overhead. Our contributionThis paper proposes FlexCast, the first genuine overlay-based atomic multicast protocol. FlexCast assumes a complete directed acyclic graph (C-DAG) overlay. Multicast messages are sent to the lowest common ancestor (_lca_) of the message destinations. The _lca_ then propagates the message to all other destinations in one communication step, without involving any groups that are not a message's destination. FlexCast uses a sophisticated history-based protocol to order messages. First, each process builds a history with all messages the process has delivered. This history is propagated to other processes in the C-DAG, so that processes can ensure consistency (e.g., no two processes order two messages differently). Simply following other processes' histories is not enough to ensure consistent order due to indirect dependencies. Indirect dependencies happen for a few reasons. For example, if process \(x\) orders message \(m_{1}\) before message \(m_{2}\) and process \(y\) orders \(m_{2}\) before message \(m_{3}\), then process \(z\) must order \(m_{1}\) before \(m_{3}\) as a consequence of dependencies created by processes \(x\) and \(y\) involving \(m_{2}\), a message not addressed to \(z\). FlexCast is well-suited to equip geographically replicated systems as it exploits locality. We have implemented FlexCast and evaluated it in an emulated wide-area network that mimics Amazon's EC2. To experimentally evaluate FlexCast, we propose gTPC-C, a variation of the well-known TPC-C benchmark that integrates geographical distribution. In the original TPC-C benchmark, a transaction operates on items in a main warehouse and with a certain probability on items from additional warehouses as well. 
gTPC-C models real-world wholesale supply systems in which transactions are directed to the customers' nearest warehouse and items not present in this warehouse are requested from the next closest warehouse and so on. In gTPC-C, customers and warehouses are geographically distributed. To account for locality, a customer's main warehouse is the closest one to the customer's location and multi-warehouse transactions have higher probability to involve warehouses located near the main warehouse. Our results show that, by exploiting locality, FlexCast can reduce latency by up to 42% to 46% when compared to state-of-the-art atomic multicast protocols in a geographically distributed environment. Moreover, as a genuine atomic multicast protocol, FlexCast has no communication overhead. The rest of the paper is structured as follows. Section 2 presents the system model and definitions used in the paper. Section 3 reports on related works. Section 4 presents a detailed description of FlexCast, starting with a high level description of the protocol, then detailing the algorithms, and addressing practical concerns and fault tolerance. Section 5 provides an experimental evaluation of FlexCast. Section 6 concludes the paper. ## 2 System model and definitions This section presents our system model and recalls the definition of atomic multicast. ### System model We consider a message-passing distributed system consisting of an unbounded set of client processes \(C=\{c_{1},c_{2},...\}\) and a bounded set of server processes \(S=\{p_{1},p_{2},...,p_{n}\}\). We define the set of server groups as \(\Gamma=\{G_{A},G_{B},...,G_{N}\}\), where for every \(g\in\Gamma\), \(g\subseteq S\). Moreover, groups are non-empty and disjoint [17, 16, 24, 4]. Processes are _correct_ if they never fail or _faulty_ otherwise. In either case, processes do not experience arbitrary (i.e., Byzantine) behavior. We assume the system is partially synchronous [12]: it is initially asynchronous and eventually becomes synchronous. The time when the system becomes synchronous is called the Global Stabilization Time (GST), and it is unknown to the processes. Before GST, there are no bounds on communication and processing delays; after GST, such bounds exist but are unknown. ### Atomic multicast Atomic multicast is a fundamental communication abstraction in reliable distributed systems. It encapsulates the complexity of reliably propagating and ordering messages. With atomic multicast, a client can multicast messages to different groups with the guarantee that the destinations will deliver messages consistently. Figure 1: Communication overhead in a hierarchical protocol when executing the gTPC-C benchmark with tree \(T_{1}\) and 90% of locality (more details in Section 5); overhead, expressed as a percentage, is computed for each group as 1 minus the ratio between number of messages delivered and number of messages received by the group. In the following, we precisely capture these reliability and ordering guarantees. A client atomically multicasts an application message \(m\) to a set of groups by calling primitive \(mulicast(m)\), where \(m.sender\) denotes the process that calls \(multicast(m)\), \(m.id\) is the message's unique identifier, and \(m.dst\) is the groups \(m\) is multicast to. A server delivers message \(m\) calling the primitive \(deliver(m)\). If \(|m.dst|=1\) we say that \(m\) is a _local_ message; if \(|m.dst|>1\) we say that \(m\) is a _global_ message. 
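A minimal Python sketch of this interface, capturing the message fields and the local/global distinction, is shown below; it is illustrative only, and the field and function names are ours rather than FlexCast's actual code.

```python
from dataclasses import dataclass

@dataclass
class Message:
    id: int            # m.id: globally unique identifier
    sender: str        # m.sender: process that called multicast(m)
    dst: frozenset     # m.dst: the groups m is multicast to
    payload: object = None

    def is_local(self):
        return len(self.dst) == 1    # |m.dst| = 1

    def is_global(self):
        return len(self.dst) > 1     # |m.dst| > 1

# A client calls multicast(m); every correct server p in a group g in m.dst
# eventually calls deliver(m), subject to the properties listed next.
```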
We define the relation \(\prec\) on the set of messages server processes deliver as follows: \(m\prec m^{\prime}\) iff there exists a process that delivers \(m\) before \(m^{\prime}\). If \(m\prec m^{\prime}\) or \(m^{\prime}\prec m\), we say that there is a dependency between \(m\) and \(m^{\prime}\). Atomic multicast satisfies the following properties [18]: * _Validity_: If a correct process \(p\) multicasts a message \(m\), then eventually all correct server processes \(q\in g\), where \(g\in m.dst\), deliver \(m\). * _Agreement_: If a process \(p\) delivers a message \(m\), then eventually all correct server processes \(q\in g\), where \(g\in m.dst\), deliver \(m\). * _Integrity_: For any process \(p\) and any message \(m\), \(p\) delivers \(m\) at most once, and only if \(p\in g\), \(g\in m.dst\), and \(m\) was previously multicast. * _Prefix order_: For any two messages \(m\) and \(m^{\prime}\) and any two server processes \(p\) and \(q\) such that \(p\in g\), \(q\in h\) and \(\{g,h\}\subseteq m.dst\cap m^{\prime}.dst\), if \(p\) delivers \(m\) and \(q\) delivers \(m^{\prime}\), then either \(p\) delivers \(m^{\prime}\) before \(m\) or \(q\) delivers \(m\) before \(m^{\prime}\). * _Acyclic order_: The relation \(\prec\) is acyclic. In a genuine atomic multicast protocol, only the sender and the destinations of a message coordinate to order the message. A genuine atomic multicast protocol does not depend on a fixed group of processes and does not involve processes unnecessarily. More precisely, a genuine atomic multicast algorithm should guarantee the following property [17]. * _Minimality_: If a process \(p\) sends or receives a message in run \(R\), then some message \(m\) is multicast in \(R\), and \(p\) is \(sender(m)\) or in a group in \(m.dst\). ## 3 Related work An early atomic multicast protocol is attributed to D. Skeen [2]. In this protocol, a multicast message \(m\) is first propagated to \(m\)'s destinations. Upon receiving the message, a destination assigns the message a local timestamp and sends the local timestamp to the other message destinations. When a destination has received timestamp from all message destinations, it computes the message's final timestamp as the maximum among all of the message's local timestamps. Destinations deliver messages in order of their final timestamp. This protocol is genuine but does not tolerate failures. Several atomic multicast protocols extend Skeen's ordering technique to tolerate failures [5], [14], [16], [21], [22]. In all these protocols, the idea is to implement destinations as groups of processes. Thus, messages are addressed to one or more process groups, instead of a set of processes, as in the original protocol. Although some processes in a group may fail, each group acts as a reliable entity, whose logic is replicated within the group using state machine replication [25]. Recent protocols aim at reducing the cost of replication within groups while keeping Skeen's original idea of assigning timestamps to messages and delivering messages in timestamp order. FastCast [5] improves performance by optimistically executing parts of the replication logic within a group in parallel. WhiteBox[16] atomic multicast uses the leader-follower approach to replicate processes within groups. RamCast [21] relies on distributed shared memory (RDMA) to reduce latency. Since in all these protocols processes communicate directly with one another, we refer to them as _distributed_ atomic multicast protocols (see Table 1). 
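As a concrete illustration of Skeen's timestamp exchange described above, the sketch below shows one destination's logic in Python. It assumes reliable processes and channels, treats each group as a single logical destination, relies on an assumed `network` transport object, and omits the check that delays delivery until no pending message can still obtain a smaller final timestamp.

```python
class SkeenDestination:
    """One destination in Skeen's atomic multicast (illustrative sketch)."""
    def __init__(self, pid, network):
        self.pid = pid
        self.clock = 0          # logical clock used for local timestamps
        self.proposals = {}     # msg id -> {destination: local timestamp}
        self.final_ts = {}      # msg id -> final (delivery) timestamp
        self.network = network  # assumed transport exposing send(dst, payload)

    def on_message(self, m):
        # On receiving multicast message m: assign a local timestamp and
        # send it to m's other destinations.
        self.clock += 1
        self.proposals.setdefault(m.id, {})[self.pid] = self.clock
        for dst in m.dst:
            if dst != self.pid:
                self.network.send(dst, ("TS", m.id, self.pid, self.clock))
        self._try_finalize(m)

    def on_timestamp(self, m, sender, ts):
        # Collect the local timestamps proposed by the other destinations.
        self.clock = max(self.clock, ts)
        self.proposals.setdefault(m.id, {})[sender] = ts
        self._try_finalize(m)

    def _try_finalize(self, m):
        props = self.proposals.get(m.id, {})
        if set(props) == set(m.dst):
            # Final timestamp is the maximum local timestamp; messages are
            # delivered in increasing final-timestamp order (ties by id).
            self.final_ts[m.id] = max(props.values())
```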
In [10], a genuine distributed atomic multicast protocol that does not rely on exchanging of timestamps to order messages is proposed. The protocol assigns a total order to groups and relays messages sequentially through their destination groups following this order. A multicast message \(m\) is initially sent to the lowest group in \(m.dst\) according to the total order. When the group receives \(m\), it uses consensus to order and deliver \(m\) inside the group, then \(m\) is forwarded to the next group in \(m.dst\), according to the total order of groups. A group that delivers \(m\) can only order the next message once it knows \(m\) is ordered in all groups in \(m.dst\), which is after it receives an end message from the last group in \(m.dst\). Although the dissemination of the message follows an order, the end message returns to each group involved and therefore the protocol is a distributed atomic multicast protocol. Besides needing \(n+1\) steps to deliver a message, where \(n\) is the number of destinations of the message, since groups remain locked until the end message arrives, this protocol is affected by the convoy effect [1]. Some protocols restrict process communication by means of a tree overlay that determines how groups can communicate (e.g., [4, 15]). To order a message \(m\) using a tree, \(m\) is first sent to the lowest common ancestor group among those in \(m.dst\), in the worst case the root of the overlay tree. Then, \(m\) is successively ordered by the lower groups in the tree until it reaches all groups in \(m.dst\). An important invariant is that lower groups in the tree preserve the order induced by \begin{table} \begin{tabular}{|l|l|l|} \hline **Class** & **Type** & **Examples** \\ \hline Distributed & genuine & [2, 5, 14, 10, 16, 21, 22] \\ \hline Hierarchical & non-genuine & [4, 15, 19] \\ \hline C-DAG overlay & genuine & FlexCast (this paper) \\ \hline \end{tabular} \end{table} Table 1: Different classes of atomic multicast protocols. higher groups. Although simple, this protocol is not genuine since a message may need to be ordered by a group that is not in the destination set of the message. While the tree-based protocol proposed in [15] does not tolerate failures, ByzCast [4] can withstand Byzantine failures. The Arrow [19] protocol is a non-fault tolerant tree-based protocol that targets open groups. It emerges from the combination of a reliable multicast protocol with a distributed swap protocol. Arrow assumes a graph \(G\) and a spanning tree \(T\) on \(G\). Initially, each node \(v\) in \(T\) has \(link(v)\) that is its neighbour in \(T\) or itself if \(v\) is a sink (initially only the root of \(T\)). To multicast \(m\) a node \(v\) sends a message through \(link(v)\), which is forwarded to the root of the tree. By definition, the root has sent the last message before \(m\). As the message is forwarded, edges change direction and \(v\) becomes the new root (that has sent the last message, which now is \(m\)). Although genuine, this procedure may result in swap messages traversing the diameter of \(T\) and only then a multicast, using an underlying reliable multicast, is issued. Restricting communication as in a tree may lead to simpler atomic multicast algorithms. Moreover, if communication needs to be authenticated, as in Byzantine fault-tolerant protocols, a tree overlay requires fewer keys to be maintained and exchanged between processes than a distributed fully connected protocol. 
Finally, a fully connected protocol is a reasonable assumption in systems that run within the same administrative domain (e.g., Google's Spanner [14]). In other contexts (e.g., decentralized systems), however, multiple entities from different administrative domains collaborate but do not wish to establish connections with all other domains. Hereafter, we refer to protocols based on a tree as _hierarchical_ atomic multicast protocols. Figure 2 shows three cases of interest. All genuine atomic multicast algorithms we are aware of are distributed (Figure 2 (a)). A tree (Figure 2 (b)) is the minimum connectivity needed by any atomic multicast protocol to support an arbitrary workload (i.e., messages can be multicast to any set of groups), as removing one edge from the tree results in a partitioned graph. Hierarchical protocols, however, are not genuine. For example, in Figure 2 (b), a message multicast to groups \(B\) and \(C\) will first be ordered at \(A\), and then propagated and ordered by \(B\) and \(C\). This paper proposes the first overlay-based genuine atomic multicast protocol. ## 4 Genuine overlay-based atomic multicast In this section, we present FlexCast's basic idea and detailed algorithm, and conclude with practical considerations and a discussion on fault tolerance. FlexCast's correctness is presented in the appendix of this paper. ### General idea Groups in FlexCast are structured as a complete directed acyclic graph (C-DAG), as the example in Figure 2 (c). We assume there is a total order among groups. Each group is assigned a unique rank in \(0..(n-1)\), where \(n\) is the number of groups. The C-DAG topology is such that there is a directed edge from each group with rank \(i\) to each group with rank \(j\) if \(i<j\). In this graph, \(i\)'s _ancestors_ have lower rank than \(i\) and \(i\)'s _descendants_ have higher rank than \(i\).1 Figure 2 (c) shows a C-DAG with nodes ordered from lowest to highest as: A, B, D, E, C. Footnote 1: We use the terms “lower” and “higher” groups to denote relative positions of groups in this rank, and “lowest” and “highest” group of a subset of groups, also referring to this rank. “Ancesetors” of a group \(g\) denote the set of groups lower than \(g\), while “descendants” respectively higher. A client atomically multicasts a message \(m\) by sending \(m\) to \(m\)'s lowest common ancestor (\(lca\)). The \(lca\) of a multicast message is the group with the lowest rank among the destinations of the message. At its \(lca\), \(m\) is directly delivered and propagated to \(m\)'s other destination groups (by definition the \(lca\) has direct edges with each other destination group in \(m.dst\)). Similarly to a tree-base atomic multicast, in a C-DAG, a group must respect the dependencies created by its ancestors and propagate dependencies to its descendants. In a C-DAG, however, a group may have multiple ancestors and dependencies can be created by any of them. An important challenge is to ensure that dependencies are properly communicated down the C-DAG without violating the minimality property of genuine atomic multicast. FlexCast uses three strategies to accomplish this, as explained next. _Strategy (a):_ First, every group keeps track of a _history_, a graph where messages are vertexes and their relative order are edges. A vertex contains a message's id and destinations. Messages delivered at a group are recorded in its history and build a total order within the graph. When a group propagates a message to another one, its history is included. 
The destination group extends its history with the histories that it receives from other groups and messages it delivers. The history then becomes a graph. More specifically, since ordering is respected (discussed next), the history is a DAG. Destination groups use the history to ensure that messages are delivered consistently across the system. To understand the need for exchanging histories, consider the scenario depicted in Figure 3 (a), where group \(A\) is the \(lca\) of messages \(m_{1}\) (multicast to \(A\) and \(C\)) and \(m_{2}\) (multicast to \(A\) and \(B\)), and group \(B\) is the \(lca\) of \(m_{3}\) (multicast to \(B\) and \(C\)). Since \(A\) delivers \(m_{1}\) before \(m_{2}\) (i.e., \(m_{1}\prec m_{2}\)) and \(B\) delivers \(m_{2}\) before \(m_{3}\) (i.e., \(m_{2}\prec m_{3}\)), \(C\) must deliver \(m_{1}\) before \(m_{3}\) to avoid a cycle among delivered messages. But \(C\) receives \(m_{3}\) from \(B\) before it receives \(m_{1}\) from \(A\). By receiving \(B\)'s history, \(C\) knows that it should deliver \(m_{1}\) and then \(m_{3}\) to avoid cycles. Unfortunately, including histories in forwarded messages is not enough to avoid cycles. Intuitively, this happens because not all dependencies are captured in the communication of application messages between groups. There are two cases to consider, depending on whether the group that creates the dependency is aware that it must propagate the dependency to its descendants or not. _Strategy (b):_ To motivate the case where a group is aware that it should send dependencies to its descendants, consider the execution in Figure 3 (b). In this case, \(B\) delivers \(m_{1}\) before \(m_{2}\), and \(C\) receives \(m_{2}\) from \(A\) (with an empty history) and then \(m_{1}\) from \(B\) (with an empty history since \(B\) did not know about \(m_{2}\) when it sent \(m_{1}\) to \(C\)). Yet, \(C\) must deliver \(m_{1}\) before \(m_{2}\). FlexCast ensures proper order in such cases as follows. If group \(g\) and its descendant \(h\) are in the destination of a message \(m\) and \(g\) is not \(m\)'s \(lca\), then \(g\) sends an ACK message to \(h\) with \(g\)'s history. Conversely, if \(h\) receives a message \(m\) and \(h\) has an ancestor that is in \(m\)'s destination, but is not \(m\)'s \(lca\), \(h\) waits for \(g\)'s ACK message. _Strategy (c):_ To motivate the case where a group is not aware that it should send dependencies to its descendants, consider the execution in Figure 3 (c). In this case, group \(A\) sends \(m_{3}\) and its history (i.e., \(m_{2}\) precedes \(m_{3}\)) to \(C\), and \(B\) sends \(m_{1}\) and an empty history to \(C\) (i.e., because the dependency between \(m_{1}\) and \(m_{2}\) happens in \(B\) after \(B\) communicates with \(C\)). \(B\) does not send \(C\) the information that \(m_{1}\) precedes \(m_{2}\) since \(m_{2}\) is not addressed to \(C\). Yet, \(C\) must deliver \(m_{1}\) before \(m_{3}\). To handle this case, when a group determines that a descendant \(d\) must forward its history down the C-DAG, it sends a motif message to \(d\) so that \(d\) can communicate its dependencies to other groups. 
More precisely, when a group \(g\) (the \(lca\) of a message \(m\) or another destination in \(m.dst\)) is about to forward message \(m\) (respectively, an ack message regarding \(m\)) and there is a group \(h\) such that: (i) \(h\) is not in \(m.dst\); (ii) \(h\) is a descendant of \(g\) and an ancestor of a group \(r\) in \(m.dst\); and (iii) there is a message in \(g\)'s history addressed to \(h\), then \(g\) sends a notif message regarding \(m\) to \(h\). If group \(h\) receives a notif message regarding \(m\), it sends ack messages to all its descendants \(k\in m.dst\). Moreover, inductively, if there is a group \(h^{\prime}\) that satisfies the same restrictions above with respect to \(h\)'s history, \(h\) notifies \(h^{\prime}\). This induction naturally finishes since there is a total order on groups.

#### 4.1.1 Why it is genuine

To argue that FlexCast is genuine, first notice the following aspects discussed about _Strategies (a)_ and _(b)_:

* when \(m\) is multicast, it enters the overlay at \(m.lca()\) (see Algorithm 1), which is by definition a destination of \(m\);
* \(m.lca()\) propagates \(m\) to its further destinations in \(m.dst\); and
* each destination \(d\) (other than \(m.lca()\)) sends ack messages to groups in \(m.dst\) higher than \(d\).

From the above, it follows that the communication described involves exclusively groups in \(m.dst\). Now, consider _Strategy (c)_ and notice that:

* a group \(g\in m.dst\) can send a notif message to a group \(h\notin m.dst\) provided that \(g\) previously sent a message to \(h\), i.e., some message was multicast to \(h\) in run \(R\); and
* inductively, \(h\) notifies \(h^{\prime}\) only if some message was multicast from \(h\) to \(h^{\prime}\) in run \(R\).

From the above, it follows that groups not in \(m.dst\) exchange messages only if they communicated in run \(R\), keeping minimality (see definition in Section 2.2).

### Detailed protocol

Algorithm 1 presents the basic data structures used in FlexCast. Each group knows the C-DAG topology and has a communication channel to each descendant group (i.e., a FIFO reliable point-to-point link). As a consequence, each process has an input queue for each input channel from ancestor groups (line 14). Each queue contains not-yet-delivered messages sent by the respective ancestors. A message has a unique \(id\) (line 2), a set of destination groups (line 3), and an arbitrary payload (line 4), provided by the application. The protocol stores pending messages along with a set of respective ack messages (line 5) and a set of notified groups (line 6), both detailed later. Function \(m.lca()\) (line 7) returns the lowest group in \(m.dst\). A group \(g\) has the history it learns from each of its ancestors and the messages it delivers (line 15). The set of messages delivered in \(g\) is a subset of messages in the history (line 16). The history builds a DAG with dependencies in \(hst.D\). As notification messages may not be immediately delivered according to criteria to be detailed later, a group also has a set of pending notification messages (line 17).

Figure 2: Three communication patterns used in atomic multicast protocols involving groups \(A,B,...,E\): (a) distributed, (b) hierarchical, and (c) FlexCast, the approach presented in this paper. In the graphs, directed edge \(g\to h\) means that group \(g\) can send messages to group \(h\), and \(h\) can receive messages from \(g\) but not send messages to \(g\).
When group \(g\) communicates with a descendant group \(h\), \(g\) informs only the difference in \(g\)'s history with respect to the last message \(g\) sent to \(h\). Therefore, for each descendant \(h\), \(g\) keeps track of what part of its history it has already sent to \(h\) (line 18).

```
Algorithm 1 (excerpt): FlexCast data structures
1: Type Message: every message m has:
2:   m.id                 {m's global unique id}
3:   m.dst                {m's destinations, a subset of groups}
4:   m.payload            {provided by the application}
5:   m.acks ← ∅           {a set of received acks}
6:   m.notifList ← ∅      {a set of notified groups}
7:   m.lca()              {returns the lowest group in m.dst}
```

We use set _deliveredInG_ to identify messages delivered in \(g\) (line 8). _deliveredInG_ is a subset of \(hst.M\) and is used to identify possible open dependencies in the history (line 9). An open dependency happens when a message addressed to \(g\) is included in \(g\)'s history but not yet delivered. Operation _diff-\(hst\)_ (line 11) is an optimization: only the new parts of a history are sent to each descendant. Operation _depend_ (line 17) computes \(m\)'s possible transitive dependency on \(m^{\prime}\) in \(hst\). When a message can be delivered (line 20), the group adds the message to its local history (line 21). An _lca_ group sends the message to its descendants (line 23), while non-_lca_ groups remove the message from the ancestor's queue (line 25) and send the corresponding ack messages to their descendants (line 26). All groups verify whether delivering this message may unblock pending notifications (line 27). Function _send-descendants_ (line 32) is part of _Strategies (a)_ and _(b)_ discussed in Section 4.1. To send msg \(m\) (or ack \(m\)), the _lca_ (or a descendant) first sends possible notification messages to its descendants that are not in \(m.dst\). Function _send-notifs()_ implements _Strategy (c)_: it searches past messages and evaluates if notifications are needed, including the notified groups in \(m\)'s notification list (lines 33 and 36-39). Then, \(m\) is sent to all other destinations in \(m.dst\) (line 35), carrying the list of notified groups along with the history with information needed by each destination (_diff-\(hst\)_). Function _reprocess-queues()_ (lines 41-48) is called upon receiving msg and ack messages (see Algorithm 2, lines 6 and 11). In both cases, it iterates through ancestor's queues and tries to deliver messages. It keeps iterating while messages can be delivered due to updated dependency information. The delivery of messages in non-_lca_ groups is defined in function _can-deliver(m)_ (line 49). The first condition (line 50) checks whether \(g\) received ack from all needed ancestors: (i) all ancestors (except the \(lca\)) in \(m.dst\); (ii) all ancestors (not in \(m.dst\)) notified about message \(m\), which were informed to \(g\) either through msg or ack. Recall that a notified group, besides sending ack, can further notify other groups. In Algorithm 2, line 10, _notifList_ accumulates all notified ancestors that have to ack \(m\). The list of ancestors that have acked is kept in _ancestors-that-acked_ (line 57). Having the complete information on \(m\), the second condition (line 52) ensures that any message \(m^{\prime}\) that precedes \(m\) and is addressed to \(g\) has already been delivered before \(m\)'s delivery.

### Practical considerations

The protocol as described so far does not include garbage collection. In our FlexCast prototype, however, we prune local histories associated with each ancestor group. A distinguished process periodically multicasts a \(flush\) message to all groups. Once a group delivers this message, it knows that all messages that precede \(flush\) can be garbage collected.
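The two practical mechanisms described here, per-descendant history diffs and flush-based pruning, can be sketched as follows. This is our own illustration in Python with invented names, not code from the FlexCast prototype.

```python
class HistoryManager:
    """Sketch of a group's local history with diff-hst and flush-based pruning."""

    def __init__(self, descendants):
        self.vertices = {}                            # message id -> destinations
        self.order = []                               # ids in the order they entered the history
        self.last_sent = {d: 0 for d in descendants}  # prefix already sent to each descendant

    def record(self, msg_id, dst):
        self.vertices[msg_id] = dst
        self.order.append(msg_id)

    def diff_hst(self, descendant):
        """Return only the part of the history not yet sent to this descendant."""
        start = self.last_sent[descendant]
        delta = self.order[start:]
        self.last_sent[descendant] = len(self.order)
        return [(mid, self.vertices[mid]) for mid in delta]

    def prune_before(self, flush_id):
        """Once a flush message is delivered, drop everything that precedes it."""
        if flush_id not in self.order:
            return
        cut = self.order.index(flush_id)
        for mid in self.order[:cut]:
            del self.vertices[mid]
        self.order = self.order[cut:]
        for d in self.last_sent:                      # sent prefixes shift with the pruned entries
            self.last_sent[d] = max(0, self.last_sent[d] - cut)
```

A group would call `record` on every delivery, attach `diff_hst(h)` when forwarding to descendant `h`, and call `prune_before` after delivering a flush message.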
The intuition behind this mechanism is that to deliver a message \(m\) from a specific ancestor, all dependencies before \(m\) must be resolved and do not need to be re-evaluated in the future. To further reduce communication, histories sent with messages do not include the ever-growing system history. FlexCast sends only a _diff_ of the history for each descendant group. The idea is implemented by keeping track of the last message of the local history sent to each descendant \(d\) and, in subsequent messages to \(d\), sending a history that contains only the newest messages added since the last communication to \(d\).

### Tolerating failures

FlexCast uses the same approach used in other atomic multicast protocols to tolerate failures (e.g., [5], [14], [16], [21], [22], [4]), that is, processes within a group are kept consistent using state machine replication. This means that processes in a group can fail as long as enough processes remain operational within the group. Consequently, groups do not fail as a whole and must remain connected (i.e., no network partition). Tolerating the failure of a group requires additional system assumptions [24]. The implications of this approach on the number of correct processes per group and on process communication depend on the particular consensus protocol used to implement state machine replication within a group. For example, Paxos [20] requires a majority of correct processes within each group and can tolerate message losses.

## 5 Evaluation

In this section, we explain the evaluation rationale, describe the environment and the benchmarks used, present the results, and summarize the main lessons learned.

### Evaluation rationale

We compare FlexCast to a distributed atomic multicast protocol and a hierarchical atomic multicast protocol using single-process groups (i.e., no failures are tolerated) in all three protocols. In doing so, our evaluation focuses on the inherent costs of three classes of atomic multicast protocols (see Table 1) and avoids overhead introduced by replication. We use Skeen's protocol as the distributed atomic multicast protocol because its ordering mechanism is used by several other protocols (e.g., [5], [14], [16], [21], [22]). Moreover, when groups contain a single process, the FastCast [5] and Whitebox [16] atomic multicast protocols behave as in Skeen's protocol. Skeen's protocol is genuine, can order messages in two communication steps, which has been shown to be optimum [23], and assumes that any two groups can communicate. We choose ByzCast as the hierarchical atomic multicast protocol. ByzCast is non-genuine and imposes a tree overlay on communication, the minimum overlay that ensures a connected system. In single-process groups, ByzCast does not introduce any overhead particular to tolerating malicious behavior. We implemented prototypes of all protocols in Java. Our experimental evaluation aims to understand the behavior of the considered protocols in geographically distributed deployments subject to realistic workloads. Our workload extends the well-established TPC-C benchmark to accommodate locality, a common property in geo-distributed systems. In these settings, we seek to answer the following questions: (i) What is the impact of different overlays on FlexCast and hierarchical protocols? (ii) How quickly can a protocol order messages addressed to two or more groups? (iii) What is the communication overhead of hierarchical protocols? (iv) What is the communication cost of atomic multicast protocols?
### Environment and deployment

The experimental setup was configured with 12 server machines and 24 client machines, connected via a 1-Gbps switched network, in CloudLab [11]. The machines are equipped with eight 64-bit ARMv8 cores at 2.4 GHz, and 64GB of RAM. The software installed on the machines was Linux Ubuntu 20.04 (64 bits) and a 64-bit Java virtual machine, version 11.0.3. Machines communicate via TCP. We consider an emulated wide-area network that models Amazon Web Services (AWS): Each group represents an AWS region and we experimented with a deployment of 12 AWS regions, as depicted in Figure 4 (a). The emulated latencies among regions are based on real measurements in AWS [3]. Enough client processes (to saturate our FlexCast implementation) are uniformly distributed across the 24 client machines that represent each region/group, and they send requests to the nearest group. Upon delivering a message, each message destination replies to the message's sender (client).

### gTPC-C Benchmark

We developed gTPC-C, a geographically distributed benchmark inspired by the well-established TPC-C benchmark [7]. We translate TPC-C warehouses into groups, deployed in one or more AWS regions, and TPC-C transactions into messages multicast to their corresponding warehouses. According to the TPC-C benchmark, clients can generate the following transactions (with a certain probability): new order (45%), payment (43%), order status (4%), delivery (4%), or stock level (4%). The last three transactions are single-warehouse (local), resulting in a message multicast to the client's home warehouse. Since all multicast protocols perform the same when ordering a message multicast to a single group, in our latency measurements we only consider global transactions, which result in messages addressed to multiple warehouses. Consequently, this workload only contains new order and payment transactions, always involving two or more warehouses. New order transactions can have from 5 to 15 items, where each item has a 2% probability of being issued to a warehouse that is not the client's home warehouse, as defined by TPC-C. To capture locality, when choosing an additional warehouse to the client's home warehouse, the client picks the nearest warehouse to its home warehouse with a configurable high probability, the _locality_ rate; otherwise, the client chooses the next nearest warehouse, and so on, up to the farthest warehouse to the client's home warehouse. Our criterion for defining locality is inspired by a common wholesale supplier policy: when an item is not available in the nearest warehouse to a client (i.e., the home warehouse), it is shipped from the closest warehouse that has the item. This locality specification implies that most messages are addressed to only two warehouses (same as in standard TPC-C), and some to three. Very few are addressed to more than three groups; therefore, we do not consider these messages in our experiments. Clients operate in a closed loop issuing one transaction at a time and are deployed in the same region as their home warehouse. Each experiment lasts for a period of approximately one minute, in which clients collect and store latency data. We discard the first and last 10% of the data collected during the experiment to avoid possibly noisy data during warm up and end of execution.

### The effect of overlays

In the first set of experiments, we investigate the role of overlays on FlexCast and hierarchical protocols.
We compare the latency experienced by clients of two FlexCast overlays, and three hierarchical overlays (trees), as depicted in Figure 4. Trees \(T_{1}\), \(T_{2}\) and \(T_{3}\) contain different numbers of inner nodes. In principle, a larger number of inner nodes provides better distribution of communication overhead among these nodes. Trees with many inner nodes, however, may lead to additional communication steps when ordering messages. For overlays \(O_{1}\) and \(O_{2}\), we initially selected a starting node (i.e., central node 8 in \(O_{1}\) and left-most node 1 in \(O_{2}\)). Then, we added the closest node to the initial one, then the closest node to the second chosen node, and so on. Since \(O_{1}\) and \(O_{2}\) are complete DAGs, a node is connected to all nodes that succeed it (e.g., the first node is connected to all nodes). Figure 5 and Table 2 present the results. We report the latency per group addressed by the message. The latency of the first (respectively, second and third) destination corresponds to the first (respectively, second and third) response the client receives from the groups addressed by the message. \(O_{1}\) shows better performance than \(O_{2}\) for all destinations. This happens because \(O_{1}\) better exploits locality: higher nodes in the DAG have the lowest latencies in the geographical distribution. Hereafter, we evaluate FlexCast using overlay \(O_{1}\). Differently from FlexCast, whose performance is largely dependent on the overlay, a hierarchical protocol is not so sensitive to the chosen tree (but see also the discussion in Section 5.6), although the trees do have an impact on the performance. \(T_{1}\) shows slightly better performance in all destinations than \(T_{2}\) and \(T_{3}\). This is due to the communication overhead (further discussed in Section 5.8) of involving non-destination groups, and also the bottleneck effect of involving the tree root of \(T_{3}\) for all messages in the system. From these results, we select \(T_{1}\) to represent a hierarchical protocol in the rest of our evaluation.

### Throughput

In the second set of experiments, we assess the overall performance of our standard gTPC-C, including local and global messages, when deployed in a configuration with 99% locality rate. We conduct multiple experiments while gradually increasing the number of clients and measure the total number of transactions ordered by each protocol. Figure 6 presents the results. Although FlexCast was designed to optimize latency, it can maintain the same throughput as the other protocols up to its saturation point. This effect can be seen by the slight bend of the throughput curve of FlexCast starting with 960 clients. In the experiments presented next, we consider configurations with 240 clients. This is justified by the fact that none of the algorithms is subject to queuing effects, which would interfere with their inherent latency.

### Latency

In the third set of experiments, we increase the locality rate and measure the latency experienced by the clients when receiving a response from each of the destinations of a global multicast message. Figure 7 and Table 3 present the results. FlexCast outperforms both the distributed and the hierarchical protocols in the latency of the first destination group for all three experimented locality rates.

Figure 4: AWS regions and different overlays used in our experimental evaluation.
We attribute this behavior to the fact that FlexCast benefits from two aspects that reduce the cost of ordering messages in the first destination in a distributed scenario: _(i) Communication steps:_ while in a distributed protocol groups addressed by a message need to exchange timestamps before a destination group can deliver a message, in FlexCast the first destination group in the DAG (i.e., the _lca_ of the message) can deliver the message as soon as it receives the message from a client; the hierarchical protocol also benefits from this aspect, however, in ByzCast, the _lca_ of a message may not be a message destination since it is not a genuine protocol. _(ii) Locality rate:_ having a workload with a high locality rate increases the number of messages that FlexCast can deliver using fewer communication steps than both other protocols. This gives FlexCast an advantage since the cost for a communication step may take tens of milliseconds in geographical settings. In the second destination, FlexCast performs worse than the hierarchical protocol and outperforms the distributed protocol. As discussed above, hierarchical protocols need only one extra communication step to order a message at the second destination, while the distributed protocol, in addition to requiring destination groups to communicate, is also exposed to the convoy effect, which further slows down the delivery of messages [16]. In the third destination, FlexCast latency increases and the simplicity of a hierarchical protocol algorithm pays off. In both the second and third destinations, FlexCast may need extra communication steps to receive the necessary ack messages to deliver a multicast message \(m\), evaluate possible dependencies, and wait for dependencies to be solved (i.e., waiting for the delivery of previous messages ordered before \(m\) in ancestor groups). Although FlexCast performs worse than both hierarchical and distributed protocols in the third destination, messages addressed to three (or more) groups are rare in gTPC-C, a characteristic inherited from TPC-C. As a consequence of FlexCast's C-DAG overlay and the fact that each client in the gTPC-C benchmark is associated with the nearest warehouse, clients send most of their messages to their home warehouse and to the next nearest warehouse. The rate at which this phenomenon happens is regulated by the configured locality.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
 & & \multicolumn{9}{c|}{Destination} \\
\hline
 & & \multicolumn{3}{c|}{1st} & \multicolumn{3}{c|}{2nd} & \multicolumn{3}{c|}{3rd} \\
\hline
 & Overlay & 90p & 95p & 99p & 90p & 95p & 99p & 90p & 95p & 99p \\
\hline
\multirow{2}{*}{FlexCast} & \(O_{1}\) & 144.0 & 279.0 & 1403.1 & 398.0 & 829.0 & 2243.42 & 1406.0 & 2195.0 & 4542.5 \\
 & \(O_{2}\) & 156.0 & 350.0 & 790.22 & 416.0 & 652.0 & 2006.83 & 1028.0 & 1681.5 & 3112.9 \\
\hline
\multirow{3}{*}{Hierarchical} & \(T_{1}\) & 229.0 & 267.0 & 311.0 & 261.0 & 288.0 & 403.0 & 307.0 & 386.0 & 408.0 \\
 & \(T_{2}\) & 233.0 & 269.0 & 311.0 & 215.0 & 249.1 & 351.0 & 261.0 & 338.0 & 375.28 \\
 & \(T_{3}\) & 311.0 & 398.0 & 544.0 & 381.0 & 480.0 & 622.0 & 397.0 & 531.6 & 621.0 \\
\hline
\end{tabular}
\end{table}
Table 2: Latency percentiles in milliseconds for each destination group when varying the overlay in FlexCast and the tree in the hierarchical protocol, gTPC-C with 90% locality.

Figure 5: Latency per destination group when varying overlays in FlexCast and a hierarchical protocol, gTPC-C with 90% locality.

Figure 6: Throughput vs. number of clients with 99% locality.
Therefore, most messages in the workload have a disjoint destination set. This increases FlexCast's advantage over a distributed protocol when messages are addressed to two groups if the groups are placed consecutively in the C-DAG. The hierarchical protocol also benefits from locality, although as a non-genuine protocol, it introduces communication overhead, quantified in Section 5.8. The locality rate also helps to decrease the number of auxiliary messages (i.e., ack and notif) needed by FlexCast to ensure consistency in the global total order, since interdependencies will be relatively fewer in such a scenario. Table 3 shows the latency percentiles (90, 95 and 99) of all destinations when varying the locality rate for all techniques. Although the hierarchical protocol shows on average a better performance when aggregating the latencies of all destinations, FlexCast is more sensitive to locality. In the first destination, FlexCast reduces 90p latency by 9% when increasing locality from 90% to 99%, while the hierarchical protocol reduces by 3%. Despite its higher latency, the distributed protocol reduces latency by up to 29% when increasing locality from 90% to 99%.

### The cost of exchanging histories

In this section, we evaluate the amount of information required by each protocol to implement atomic multicast. All protocols propagate the message payload, as defined by gTPC-C, and protocol-specific information, which in the case of FlexCast includes histories. Figure 8 displays our findings. In each chart, the first graph (top) represents the number of messages received by each node per second. The second graph (middle) shows the average message size per node. Unlike the other protocols with fixed average sizes, FlexCast shows an increase in average message size as nodes ascend the C-DAG topology (see Figure 4). This is due to higher nodes requiring more history data from their ancestors. The third graph (bottom) shows the overall information exchanged by nodes per second. In summary, our experiments indicate that FlexCast exhibits distinctive behavior, with higher nodes in FlexCast's C-DAG exchanging a higher amount of data than lower nodes. This results in larger messages compared to the other protocols. On average, a node exchanges 68.5 Kbytes per second in the distributed protocol, 66 Kbytes per second in the hierarchical protocol, and 79 Kbytes per second in FlexCast.

Figure 7: Latency per destination group when varying locality rate.

### The overhead of non-genuineness

In this section, we investigate the communication overhead of non-genuine hierarchical protocols. Figures 1 and 9 present the overhead experienced per group. Intuitively, communication overhead captures the amount of communication involving a group due to multicast messages not addressed to the group. We express communication overhead as a percentage and define it as 1 minus the ratio between the number of payload messages delivered by a group and the number of payload messages received by the group during an execution of the protocol. We focus on payload messages as these are typically larger than auxiliary messages used in a protocol. The overhead across groups depends on the tree overlay and the workload. But while all inner groups in a tree are potentially subject to communication overhead, leaf groups have no overhead since they are always in the destinations of messages they receive. Locality also plays a role in communication overhead.
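Spelled out, the communication-overhead measure defined above for a group \(g\) over a run is

\[\text{overhead}(g)=1-\frac{\text{payload messages delivered by }g}{\text{payload messages received by }g}.\]

For instance, a group that receives 100 payload messages during a run but delivers only 64 of them experiences 36% overhead, while a genuine protocol keeps this value at zero for every group.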
A tree can benefit from locality by directly connecting groups that are near each other. This is the motivation behind tree \(T_{1}\): as locality increases, \(T_{1}\)'s overhead decreases, since communication will more likely involve directly connected groups (see Table 4). Tree \(T_{3}\) has lower communication overhead than \(T_{1}\), but this comes at the cost of penalizing group 6 (i.e., \(T_{3}\)'s root), which has to endure 56% of overhead. In \(T_{1}\), groups 5 and 9 present high overhead as they are roots (lowest common ancestors) of different subtrees that represent separate geographical regions (America and Asia). The tree root does not have much overhead since locality is high in groups within the Europe region. The same is observed in \(T_{2}\), where groups 5 and 7 of disjoint subtrees present the highest overheads. Tables 2 and 4 suggest a tradeoff: trees with the lowest latencies are subject to higher overhead on average, while trees with worse performance have lower communication overhead on average. ### Summary We draw the following main conclusions from our experimental evaluation. * FlexCast is more sensitive to the chosen overlay than the hierarchical protocol when it comes to latency. The chosen tree, however, has an impact on the hierarchical protocol's communication overhead. * FlexCast consistently outperforms the distributed protocol (a genuine algorithm) in all configurations experimented. FlexCast performs better than the hierarchical protocol in the first destination group and worse in the latency of the second and third destinations. However, messages addressed to three (or more) groups are rare in TPC-C and gTPC-C. As a genuine protocol, FlexCast has no communication overhead (as defined in Section 5.8), in contrast to a non-genuine hierarchical protocol. * The hierarchical protocol has a tradeoff between latency and communication overhead. Although communication overhead is inherent to non-genuine atomic multicast protocols, in the hierarchical protocol, trees with the best performance have the highest overhead and vice-versa. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Overlay & Locality & Mean overhead & Max \\ \hline \multirow{3}{*}{\(T_{1}\)} & 90\% & 9.16\% (11.18) & 36\% \\ & 95\% & 7.33\% (11.12) & 36\% \\ & 99\% & 5.41\% (11.06) & 34\% \\ \hline \multirow{3}{*}{\(T_{2}\)} & 90\% & 5.75\% (11.31) & 30\% \\ & 95\% & 5.08\% (10.50) & 30\% \\ & 99\% & 4.33\% (9.90) & 30\% \\ \hline \multirow{3}{*}{\(T_{3}\)} & 90\% & 4.66\% (16.16) & 56\% \\ & 95\% & 4.66\% (16.16) & 56\% \\ \cline{1-1} & 99\% & 4.66\% (16.16) & 56\% \\ \hline \end{tabular} \end{table} Table 4: Mean overhead, standard deviation, and maximum overhead in hierarchical trees when varying the locality rate. 
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
 & & \multicolumn{9}{c|}{Destination} \\
\hline
 & & \multicolumn{3}{c|}{1st} & \multicolumn{3}{c|}{2nd} & \multicolumn{3}{c|}{3rd} \\
\hline
 & Locality & 90p & 95p & 99p & 90p & 95p & 99p & 90p & 95p & 99p \\
\hline
\multirow{3}{*}{FlexCast} & 90\% & 144.0 & 279.0 & 1403.1 & 398.0 & 829.0 & 2243.42 & 1406.0 & 2195.0 & 4542.5 \\
 & 95\% & 131.0 & 217.0 & 1146.0 & 288.0 & 671.4 & 2192.64 & 1307.2 & 2231.65 & 4211.55 \\
 & 99\% & 132.0 & 218.0 & 764.0 & 227.0 & 458.0 & 1562.09 & 1404.9 & 1975.7 & 3583.92 \\
\hline
\multirow{3}{*}{Hierarchical} & 90\% & 229.0 & 267.0 & 311.0 & 261.0 & 288.0 & 403.0 & 307.0 & 386.0 & 408.0 \\
 & 95\% & 226.0 & 265.0 & 307.0 & 255.0 & 286.0 & 403.0 & 306.0 & 381.0 & 405.0 \\
 & 99\% & 224.0 & 264.0 & 303.0 & 243.0 & 284.0 & 402.0 & 303.0 & 376.2 & 406.84 \\
\hline
\multirow{3}{*}{Distributed} & 90\% & 335.0 & 377.0 & 452.0 & 299.0 & 367.0 & 444.0 & 373.0 & 423.0 & 527.7 \\
 & 95\% & 284.0 & 349.0 & 417.0 & 275.0 & 339.0 & 406.98 & 365.0 & 407.0 & 528.0 \\
 & 99\% & 241.0 & 279.0 & 370.0 & 238.0 & 263.0 & 355.0 & 309.5 & 367.0 & 415.3 \\
\hline
\end{tabular}
\end{table}
Table 3: Latency percentiles in milliseconds for each destination when varying the locality rate for all protocols.

## 6 Conclusion

We propose FlexCast, the first genuine overlay-based atomic multicast protocol. As overlay-based, it accounts for reduced connectivity in different deployment scenarios. As genuine, it favors geographical locality and avoids communication overhead. To combine both aspects, FlexCast assumes a complete DAG overlay. Since messages may enter the overlay at different groups (nodes) of the DAG, each group takes local ordering decisions. One interesting challenge solved by FlexCast and not yet addressed by other atomic multicast protocols is how to ensure global acyclic order out of local ordering information from different groups. This is achieved using a sophisticated history-based protocol. We present FlexCast's design, its implementation, and propose a new benchmark to evaluate it: gTPC-C integrates geographical distribution and locality to the well-known TPC-C benchmark. FlexCast shows important latency reduction in geographically distributed settings when compared to a latency-optimum genuine atomic multicast algorithm and a hierarchical protocol.

## Acknowledgments

This work was partially supported by the Swiss National Science Foundation (# 175717), Fundacao de Amparo a Pesquisa do Estado Do Rio Grande do Sul--FAPERGS PqG 07/21, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico--CNPq Universal 18/21, PUCRS-PrInt, Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (CAPES), Brazil, Finance Code 001, and FAPDF through EDITAL 08/2023--FAP Participa.
2309.04267
The use of deception in dementia-care robots: Should robots tell "white lies" to limit emotional distress?
With projections of ageing populations and increasing rates of dementia, there is need for professional caregivers. Assistive robots have been proposed as a solution to this, as they can assist people both physically and socially. However, caregivers often need to use acts of deception (such as misdirection or white lies) in order to ensure necessary care is provided while limiting negative impacts on the cared-for such as emotional distress or loss of dignity. We discuss such use of deception, and contextualise their use within robotics.
Samuel Rhys Cox, Grace Cheong, Wei Tsang Ooi
2023-09-08T11:27:14Z
http://arxiv.org/abs/2309.04267v1
# The use of deception in dementia-care robots: Should robots tell "white lies" to limit emotional distress?

###### Abstract.

With projections of ageing populations and increasing rates of dementia, there is need for professional caregivers. Assistive robots have been proposed as a solution to this, as they can assist people both physically and socially. However, caregivers often need to use acts of deception (such as misdirection or white lies) in order to ensure necessary care is provided while limiting negative impacts on the cared-for such as emotional distress or loss of dignity. We discuss such use of deception, and contextualise their use within robotics. Dementia-Care, Ageing, Robotic Assistants, Deception

## 1. Introduction

Population ageing coupled with lower fertility rates is an increasing concern for many countries, and with a growing population of older adults, there is an increasing demand for professional caregivers to address the care needs of older adults at a societal level. This is particularly so for persons with dementia. Research estimates that the number of people living with dementia will approximately triple from 57 million in 2019 to 153 million by 2050 [(31)]. Cognitive functioning deteriorates as the disease progresses, inhibiting the independence and functioning of persons afflicted with the disease. Depending on the severity of impairment, persons with dementia may require extensive, consistent care in daily living. To address this gap, research has pointed towards the use of robots to address the care-giving needs of older persons [(2; 11)]. Assistive robots have been deployed in a multiplicity of care settings [(1; 12; 19; 22)]. However, unlike human care-givers, assistive robots at present lack the reflexivity to respond according to the changing needs and behaviour of their cared-fors. This then also raises the question of how and if robots should adapt their response according to the changing cognitive state of their cared-fors. On from this, advice for human carers would adapt depending on the severity of the cognitive decline associated with the cared-for's dementia. Strategies in early stages of dementia emphasise orienting the recipient's consciousness to be more reality based [(6; 18)] (such as via stating the year, time of day and current weather, and the cared-for's name, age and significant relationships). However, in later stages of dementia such an orientation towards reality may cause emotional distress to the cared-for [(16)] and different approaches may be needed [(8; 9)]. For this reason, advice would dictate that the care-giver could deploy acts that may be considered deceptive in order to fulfil the care needs of the cared-for while limiting emotional distress and loss of dignity [(4; 25)]. In addition, while it is in the interests of the care-giver to ensure that the cared-for is given sufficient care (such as ensuring hygiene is maintained, and that the cared-for are safe), delusions from cognitive decline may conflict with these needs and cause distress. These equally would lead to care-givers potentially using techniques that are in some way acts of deception. A robot that fully emulates a human carer would then, on occasion, deceive those that it is caring for.
Related to this, schools of thought for robotic deception could be seen in two camps: those who oppose the use of deception [(28)] (perhaps due to leading to over-dependence, loss of human relationships or lost sense of reality); and those who view deception as a necessary and inevitable act [(5; 14)] (as robots continue to anthropomorphise and adopt human characteristics, thereby increasing acceptability and effectiveness [(10; 29)]). At a philosophical and ethical level, the context of robot lies has been discussed within healthcare. For example, Matthias described that lies should always be in the patient's best interests, increase patient autonomy, not lead to harm, and be transparent in deception [(23)], yet these do not address precise strategies that could be used in dementia-care by assistive robots, or people's perception of these. On from this, we discuss a number of strategies (involving deception) for dementia-care that are either recommended practice, or have been reported as used by care-givers. Afterwards, we discuss potential implications and concerns that could be raised if these same strategies were used by assistive robots. For example, when could a robot need to deceive people in order to better complete its tasks, and why would it need to deceive someone rather than revealing the truth?

## 2. Human carers and Uses of Deception

A goal of human carers is to deliver necessary care, while balancing emotional distress, dignity, and humanity in order to limit both physical and emotional harm. For example, while allowing for autonomy and maintaining independent activities improves well-being and cognitive functions, if such freedom exists, so too does the possibility for resistiveness due to conflicting wants between care-givers and the cared-fors [(32)], with such resistiveness causing distress to both parties [(24)]. Additionally, in the case of persons with dementia in particular, cognitive impairments may hinder the ability to make rational decisions and complete tasks that are beneficial to one's well-being. As a consequence of this, human carers may use techniques to lessen resistiveness to care that could be seen as deceptive, such as therapeutic lying (Bartos et al., 2016; Kohn et al., 2017; Kohn et al., 2018), informal use of restraint (Kohn et al., 2018), or forms of indirect coercion (Kohn et al., 2018). While some of these techniques are widely adopted (if not sometimes contentiously (Bartos et al., 2016; Bartos et al., 2016; Kohn et al., 2018; Kohn et al., 2018)) among care-givers, additional debate is needed for their potential use by assistive robots. Specifically, therapeutic lying (lying that is used for the benefit of the cared-for, rather than for the care-giver) can be used to limit emotional distress while delivering care (Bartos et al., 2016; Kohn et al., 2018; Kohn et al., 2018; Kohn et al., 2018). Therapeutic lies could be used for "tricks" (Bartos et al., 2016) such as to simplify ingestion of medication (Bartos et al., 2016), to avoid aggressive behaviour, to limit time spent giving explanations, to go along with a cared-for's misconception, and to alleviate stress (see (Bartos et al., 2016) for examples of therapeutic lies). For example, a person with dementia may ask their care-giver where their deceased parent is, to which the care-giver could reply "they'll come tomorrow" (Bartos et al., 2016).
These therapeutic lies involve some form of deception based on verbal communication that, while uncertain in its levels of ethics and acceptability, would become more feasible as the abilities of language models improve. Similarly, Oye and Jacobsen surveyed nursing homes in Norway, and identified five distinct types of "informal restraint" enacted by caregivers (Kohn et al., 2018). While not acts of formal restraint (such as physical restraint that would limit autonomy and cause physical and psychological pain), informal restraint limits freedom of movement and personal preference in order to provide care. Specifically, they identified diverting residents' attention (such as showing photographs to distract a person with dementia during washing that they may otherwise protest to); white lies (to affirm the perceived realities of persons with dementia); persuasion and interpersonal pressure; offers (to incentivise adherence to the care-giver's requests); and threats (such as sending them back to their room to enforce compliance) as forms of informal restraint. They additionally identified "grey-zone" constraints, such as seating a resident in a chair that is deep and low so that they cannot get up on their own, and therefore reduce their chance of wandering and hurting themselves.

## 3. Discussion

We have provided an overview of some deception-based techniques that care-givers may use when providing care for people with dementia. We will now discuss issues related to their potential use and application by assistive robots. While the use of deception techniques such as therapeutic lying is well investigated for both its prevalence and acceptability in caregiving environments (Bartos et al., 2016), such beliefs are not well investigated with regards to assistive robots, and there are potential ethical concerns that a person with dementia may not be able to distinguish between a relationship with a robot and a relationship with a person (Kohn et al., 2018). However, previous literature has demonstrated that perhaps there is a difference in attitude between the philosophical ideals and clinical needs and practice when deploying forms of deception in dementia-care. For example, Koh et al. (Kohn et al., 2018) investigated the use of robot pets in nursing homes for dementia, and found that, while people were apprehensive about the potential for deception (i.e., people with dementia believing a robot pet to be real), once stakeholders personally experienced real-world use of robots, most were comfortable with its adoption. Although it is difficult to expand such a discussion to the more direct forms of deception in Section 2, it could be foreseen that such acceptance would be more normalised and likely as adoption and capabilities of robotics increase. In addition, it is important to note that some of the uses of deception may be related to lack of resources for care-givers (Bartos et al., 2016; Kohn et al., 2018). For example, the use of grey-zone restraints (Kohn et al., 2018) could be avoided in a well-resourced home, or if a vision of a future with more caregivers due to assistive robots is met. Such forms of deception are less ethically and morally defensible and (while ensuring the safety of the cared-for) would perhaps be less desirable and less acceptable if used by a robot. The changing nature of interventions used by care-givers could also be challenging for robotic assistants to overcome.
For example, a therapeutic lie may have limited effectiveness the more it is used, and additional alternative therapeutic lies may need to be used in order to still deliver care while limiting the emotional distress of the cared-for (Bartos et al., 2016). It is unclear whether giving a robot the freedom and flexibility to devise such deceptions would be seen as acceptable and ethical, and if this necessary flexibility would have the potential for harm. This changing nature could also lead to technical challenges due to differences in each person and their perceived independent autonomy and application of care as such. For example, how would such a robot be designed to differentiate and perceive the level of cognitive abilities of individuals and thereby the potential application of deception to aid in delivery of care? Such a robot would need to be able to distinguish between situations where it can apply deception (and to whom) and situations and persons where this is not appropriate, or where orientating in reality (rather than using therapeutic lies for example) would be acceptable. In addition, there are also potential legal issues caused by deception-led robotic interventions, with questions surrounding liability if such methods are seen as harmful, distressing and coercive (Kohn et al., 2018; Kohn et al., 2018). Furthermore, if such interactions are (presumably) recorded and actionable to one robotics corporation, this centralised control could be more liable to litigation, or in need of more strict oversight.

## 4. Conclusion

In conclusion, we have discussed a number of techniques (that adopt forms of deception) used by human care-givers when caring for people with dementia. While we cannot provide a clear consensus on the acceptability, effectiveness or ethics of adopting each technique, we would advise that assistive robots behave so as to enhance emotional and physical well-being (Bartos et al., 2016). With robots and language models becoming more capable and prevalent in the provision of care, it is hoped that our discussion will lead researchers to reflect on the likely potential adoption of such deceptive techniques, as well as draw attention to the need for additional studies of robotic ethics in innovative contexts of use (Kohn et al., 2018).

###### Acknowledgements.

This research is part of the programme DesCartes and is supported by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme.
2309.05179
Effect of Adapting to Human Preferences on Trust in Human-Robot Teaming
We present the effect of adapting to human preferences on trust in a human-robot teaming task. The team performs a task in which the robot acts as an action recommender to the human. It is assumed that the behavior of the human and the robot is based on some reward function they try to optimize. We use a new human trust-behavior model that enables the robot to learn and adapt to the human's preferences in real-time during their interaction using Bayesian Inverse Reinforcement Learning. We present three strategies for the robot to interact with a human: a non-learner strategy, in which the robot assumes that the human's reward function is the same as the robot's, a non-adaptive learner strategy that learns the human's reward function for performance estimation, but still optimizes its own reward function, and an adaptive-learner strategy that learns the human's reward function for performance estimation and also optimizes this learned reward function. Results show that adapting to the human's reward function results in the highest trust in the robot.
Shreyas Bhat, Joseph B. Lyons, Cong Shi, X. Jessie Yang
2023-09-11T00:18:44Z
http://arxiv.org/abs/2309.05179v1
# Effect of Adapting to Human Preferences on Trust in Human-Robot Teaming ###### Abstract We present the effect of adapting to human preferences on trust in a human-robot teaming task. The team performs a task in which the robot acts as an action recommender to the human. It is assumed that the behavior of the human and the robot is based on some reward function they try to optimize. We use a new human trust-behavior model that enables the robot to learn and adapt to the human's preferences in real-time during their interaction using Bayesian Inverse Reinforcement Learning. We present three strategies for the robot to interact with a human: a non-learner strategy, in which the robot assumes that the human's reward function is the same as the robot's, a non-adaptive learner strategy that learns the human's reward function for performance estimation, but still optimizes its own reward function, and an adaptive-learner strategy that learns the human's reward function for performance estimation and also optimizes this learned reward function. Results show that adapting to the human's reward function results in the highest trust in the robot. 1University of Michigan 2Air Force Research Laboratory Miami Herbert Business School [email protected], [email protected], [email protected], [email protected] ## 1 Introduction As autonomous technologies become more ubiquitous, the need to ensure that these technologies behave in a trustworthy manner increases. When working with humans in collaboration, these technologies (e.g., autonomous robots, intelligent decision aids, etc.) are being perceived more as teammates rather than tools to be used by a human operator. In such hybrid teams, trust has been identified as a key factor to facilitate effective and efficient collaboration [1, 13]. To enable such trust-driven partnerships, it is essential for a robot to be able to estimate its human partner's level of trust in real time. Further, it also needs a way to estimate the human's behavior based on her level of trust. Finally, many robotic decision-making systems use reward maximization to plan their behaviors. In such cases, it is necessary to ensure that the "values" of the robot match that of the human. This is usually accomplished via Inverse Reinforcement Learning [14], which aims at learning reward functions through observed behaviors. Although it has been theorized that matching the robot's reward function with that of the human in a collaborative task is good for the team, its effect on trust has not been studied in detail. Yet there are two reasons to suggest that such adaptation could be beneficial for trust. First, research has shown that agent adaptation to humans can enhance performance in a HAT context [1, 1]. Second, agent adaptation could be viewed as the agent being responsive to the human and may, in turn, increase human trust of the agent [10]. This study investigates the effect on trust when humans interact with robots with different interaction strategies. We compare three types of interaction strategy: (1) the robot does not align its reward with the human, (2) the robot does not align its reward with the human, but uses the estimated human reward function for performance assessment, trust estimation, and behavior prediction, and (3) the robot aligns its reward to that of the human. We conduct a human-subjects study with 12 participants. 
Our results indicate that adapting to human preferences leads to the highest level of trust of the human in the robot, and leads to a significantly higher number of agreements with the robot's recommendations. We use the terms "robot" and "intelligent agent" interchangeably in this paper. The rest of the paper is organized as follows: Section 2 gives an overview of related work that our study builds upon. Section 3 details the human-robot team task and formulates our problem as a trust-aware Markov Decision Process (trust-aware MDP). Section 4 details the human-subjects experiment. Section 5 discusses major results and their implications. Finally, Section 6 concludes our study and discusses limitations and future work.

## 2 Related Work

This work is motivated by two bodies of research, quantitative trust models and value alignment in HRI.

### Trust Models in HRI

Xu and Dudek (2012) proposed a reputation-based trust model to adapt the behavior of robots when trust crossed a certain threshold. Later, the authors proposed an Online Probabilistic Trust Inference Model (OPTIMo) [13] which modeled trust through a Dynamic Bayesian Network. Subsequently, they used this model to adapt a robot's behavior depending on the trust level of the human [13]. Our work differs from this in the way that we do not use a threshold-based adaptation strategy. Rather, we use an embedded human-behavior model to predict human behavior with trust and use it directly in the decision-making process of the robot. Guo and Yang modeled human trust as a beta distribution with personalized parameters that are updated during interaction depending on the performance of the robot [13, 14]. They presented simulation results using a reverse-psychology human behavior model and found that the robot can "manipulate" the human's trust if an explicit trust-gaining reward is not added to its reward function [13]. Bhat et al. (2022) used this model with a trust-gaining reward term and demonstrated its usage in a human-subjects study. Our work is similar to this work in the sense that we use the same trust estimation model, but we use a different human behavior model, which we call the "bounded-rationality-disuse" model, which gets rid of this trust manipulation issue as long as the values of the human and the robot are aligned. Thus, no trust-gaining reward term is needed in the robot's reward function, which makes the reward function much simpler.

### Value Alignment in Human-Robot Teams

The problem of aligning the "values" of the robot to that of its human teammate has been studied in human-robot teaming literature [1, 1, 13, 14, 15, 16]. In a majority of these works, techniques from Inverse Reinforcement Learning (IRL) [12] are used to estimate a reward function that aligns with human demonstrations, preferences, or actions. A bidirectional value alignment problem is studied in [14]. The human knows the true reward function and behaves accordingly while interacting as a supervisor to a group of worker robots. The robots try to learn this true reward function through correctional inputs to their behavior from the human. The human, on the other hand, tries to update her belief on the robot's belief of the true reward function and inputs corrections to their behavior accordingly. In our case, there is no _true_ reward function: the human and the robot have their own reward functions, and we want to see the effect of aligning/not aligning the robot's reward function with that of the human.
A "driver's test" to verify value alignment between the human and the robot is provided in Brown, Schneider, and Niekum (2020). This is especially relevant when the human and the robot are performing separate tasks in collaboration, since in this case, it is not enough to match the reward functions of the human and the robot. In our case, since the action sets for the robot and the human are the same, it is enough to match their reward functions to guarantee value alignment. We use a Bayesian framework for IRL [1] which learns human preferences by maintaining and updating a distribution over the possible preferences of the human. The update happens in a Bayesian way after observing the human's selected action. ## 3 Problem Formulation ### Human-Robot Team Task We designed a scenario in which the human-robot team performs a search for potential threats in a town. The team sequentially goes through search sites to look for threats. At each site, the team is given a probability of threat presence inside the site via a scan of the site by a drone. The robot additionally, has some prior information about the probability of threat presence at all of the search sites. This prior information is unknown to the human. After getting the updated probability of threat presence, the robot generates a recommendation for the human. It can either recommend that human use or not use an armored robot for protection from threats. Encountering a threat without protection from the armored robot will result in injury to the human. On the other hand, using the armored robot takes extra time since it takes some time to deploy and move the armored robot to the search site. The goal of the team is to finish the search mission as quickly as possible while also maintaining the soldier's health level. Thus, a two-fold objective arises with conflicting sub-goals: To save time you must take risks, and if you want to avoid risks, you must sacrifice precious mission time. ### Trust-Aware Markov Decision Process We model the interaction between the human and the robot as a trust-aware Markov Decision Process (trust-aware MDP). A trust-aware MDP is a tuple of the form \((S,A,T,R,H)\), where \(S\) is a set of states one of which is the trust of the human in the robot, \(A\) is a finite set of actions, \(T\) is the transition function giving the transition probabilities from one state to another given an action, \(R\) is a reward function and \(H\) is an embedded human trust-behavior model, which gives the probabilities of the human choosing a certain action given the action chosen by the robot, their level of trust, etc. StatesThe level of trust \(t\in[0,1]\). ActionsThe recommender robot has two choices of action: recommend to use or not use the armored robot. These are represented by \(a^{r}=1\) and \(a^{r}=0\) respectively. Reward FunctionThe rewards for both agents (the human and the robot) are a weighted sum of the negative cost of losing health and losing time. The weights for these costs can be different for the robot and the human. For agent \(o\in\{h,r\}\), the reward function can be written as, \[R^{o}(D,a)=-w_{h}^{o}h(D,a)-w_{c}^{o}c(a). \tag{1}\] Here, \(D\) is a random variable representing the presence of threat inside a search site, \(a\) is the action chosen by the human to implement, \(o\in\{h,r\}\) represents the agent, either the human \(h\) or the robot \(r\). \(h(D,a)\) gives the health loss cost and \(c(a)\) gives the time loss cost. 
**Transition Function.** The transition function gives the dynamics of trust as the human interacts with the robot. We use the model from [13, 14], which models trust as a random variable following the Beta distribution based on personalized parameters \((\alpha_{0},\beta_{0},w^{s},w^{f})\). \[t_{i} \sim Beta(\alpha_{i},\beta_{i}), \tag{2}\] \[\alpha_{i} =\alpha_{0}+\sum_{j=1}^{i}p_{j},\] \[\beta_{i} =\beta_{0}+\sum_{j=1}^{i}(1-p_{j}).\] Here, \(i\) is the number of interactions completed between the human and the robot, \(t_{i}\) is the current level of trust, and \(p_{j}\) is the realization of the random variable performance \((P_{j})\) of the recommender robot at the \(j^{th}\) interaction, \[P_{j}=\begin{cases}1,\ \ \text{if}\ R_{j}^{h}(a_{j}^{r})\geq R_{j}^{h}(1-a_{j}^{r}),\\ 0,\ \ \text{otherwise}.\end{cases} \tag{3}\] Here, \(R_{j}^{h}(a_{j}^{r})\) is the reward for the human for choosing the recommended action \((a_{j}^{r})\) at the \(j^{th}\) interaction and \(R_{j}^{h}(1-a_{j}^{r})\) is the same for the other action. **Human Trust-Behavior Model.** A human trust-behavior model gives the probabilities of a human choosing an action, given the robot's action, their trust level, and other factors such as the human's preferences. In our study, we use the _Bounded Rationality Disuse Model_ as the human trust-behavior model. This model states that the human chooses the recommended action with a probability equal to the human's current level of trust. If the human chooses to ignore the recommendation, she will choose an action according to the bounded rationality model of human behavior. That is, she will choose an action with a probability that is proportional to the exponential of the expected reward associated with that action. Mathematically, \[P(a_{i}^{h}=a|a_{i}^{r}=a)=t_{i}+(1-t_{i})q_{a}, \tag{4}\] \[P(a_{i}^{h}=1-a|a_{i}^{r}=a)=(1-t_{i})(1-q_{a}), \tag{5}\] where \(q_{a}\) is the probability of choosing action \(a\) under the bounded rationality model, \[q_{a}=\frac{\exp(\kappa E[R_{i}^{h}(a)])}{\sum_{a^{\prime}\in\{0,1\}}\exp(\kappa E[R_{i}^{h}(a^{\prime})])}, \tag{6}\] and \(\kappa\) is a rationality coefficient that controls how sharply the human prefers the higher-reward action. ### Bayesian Inverse Reinforcement Learning We use Bayesian IRL to estimate the reward weights of the human as they interact with the recommender robot. This is accomplished by maintaining a distribution over the possible reward weights and updating it using Bayes' rule after observing the human's behavior. More precisely, if \(b_{i}(w)\) is the belief distribution on the reward weights before the \(i^{th}\) interaction, the distribution after the \(i^{th}\) interaction is given by \[b_{i+1}(w)\propto\begin{cases}P(a_{i}^{h}=a_{i}^{r}|a_{i}^{r})b_{i}(w),&\text{if}\ a_{i}^{h}=a_{i}^{r},\\ P(a_{i}^{h}=1-a_{i}^{r}|a_{i}^{r})b_{i}(w),&\text{otherwise}.\end{cases} \tag{7}\] In our formulation, we only learn a distribution over the health reward weight of the human, \(w_{h}^{h}\), and assume that the time reward weight is defined by \(w_{c}^{h}:=1-w_{h}^{h}\). Further, we use the mean of the maintained distribution as an estimate for the human's health reward weight.
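The pieces above fit together as a simple loop: the Beta-distributed trust update (Eqs. 2-3), the bounded-rationality-disuse behavior model (Eqs. 4-6), and the Bayesian IRL belief update (Eq. 7). The sketch below is a minimal illustration of that loop; the grid resolution, the rationality coefficient `kappa`, the prior parameters, and the cost values are illustrative assumptions rather than quantities from the study.

```python
import numpy as np

def update_trust_params(alpha, beta, performance):
    """Eqs. (2)-(3): after one interaction, alpha accumulates p_j and beta accumulates (1 - p_j)."""
    return alpha + performance, beta + (1.0 - performance)

def bounded_rationality_probs(expected_rewards, kappa=1.0):
    """Eq. (6): softmax over the expected rewards of the two actions, giving (q_0, q_1)."""
    z = np.exp(kappa * np.asarray(expected_rewards))
    return z / z.sum()

def human_action_probs(trust, robot_action, expected_rewards, kappa=1.0):
    """Eqs. (4)-(5): bounded-rationality-disuse model. With probability `trust` the human
    follows the recommendation; otherwise she picks an action via bounded rationality."""
    q = bounded_rationality_probs(expected_rewards, kappa)
    p = np.zeros(2)
    p[robot_action] = trust + (1.0 - trust) * q[robot_action]
    p[1 - robot_action] = (1.0 - trust) * q[1 - robot_action]
    return p

def bayesian_irl_update(belief, weight_grid, trust, robot_action, human_action,
                        expected_reward_fn, kappa=1.0):
    """Eq. (7): reweight the belief over the human's health weight w_h^h by the
    likelihood of the observed human action under each candidate weight."""
    likelihood = np.array([
        human_action_probs(trust, robot_action, expected_reward_fn(w), kappa)[human_action]
        for w in weight_grid
    ])
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Illustrative usage with assumed numbers (not study values):
alpha, beta = update_trust_params(alpha=10.0, beta=5.0, performance=1.0)
trust_estimate = alpha / (alpha + beta)                 # mean of Beta(alpha, beta)

weight_grid = np.linspace(0.0, 1.0, 101)                # candidate health weights w_h^h
belief = np.ones_like(weight_grid) / weight_grid.size   # uniform prior

def expected_reward_fn(w_health, threat_prob=0.7):
    # E[R^h(a)] for a = 0 (no armored robot) and a = 1 (armored robot),
    # with w_c^h := 1 - w_h^h and the same illustrative cost structure as the reward sketch above.
    return [-w_health * threat_prob * 10.0, -(1.0 - w_health) * 1.0]

belief = bayesian_irl_update(belief, weight_grid, trust_estimate, robot_action=1,
                             human_action=1, expected_reward_fn=expected_reward_fn)
print("Estimated w_h^h:", float(np.sum(weight_grid * belief)))
```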
## 4 Experiment This section provides details about the testbed used for data collection and the human-subjects experiment. The experiment complied with the American Psychological Association code of ethics and was approved by the Institutional Review Board at the University of Michigan. ### Testbed We updated some elements of the testbed from (Bhat et al., 2022) for this study, following feedback from participants in that study. The testbed was developed in the Unreal Engine game development platform. We updated the recommendation interface to be more informative and to help the participants make their own decisions if they choose to do so. In the updated interface (shown in fig. 1), the participants are shown the probability of threat presence reported by the drone, the recommendation given by the intelligent agent, and an estimate of search time with and without the armored robot. Figure 1: The recommendation interface. In order to better separate the threat detection task from the recommendation task, the participants were told that a separate entity called an _intelligent agent_ will give them action recommendations. On the other hand, the drone's task is to just scan a site and report the threat level inside it. The participants were specifically asked to report their trust level in the intelligent agent. Further, we updated the trust feedback slider to provide information about the last interaction that the participant had, in order to help them make an informed decision about their trust. The updated interface can be seen in fig. 2. We designed a within-subjects study. Our goal is to compare trust between different interaction strategies. Since there is high variation in trust dynamics between participants [1], we think that it is better to compare trust within-subjects than between-subjects. The participants completed 3 missions in total. In each mission, they interacted with an intelligent agent following one of the 3 interaction strategies (detailed in sec. 4.3). In each mission, they sequentially searched through 40 search sites. The condition order was counterbalanced using a Latin square. ### Participants We collected data from \(12\) participants (Age: Mean = \(21.92\) years, _SD_ = \(2.36\)). All participants were students from the College of Engineering at the University of Michigan. ### Interaction Strategies We designed three interaction strategies for the intelligent agent: * **Non-learner:** The intelligent agent does not learn the reward weights of the human. It assumes that the human and the intelligent agent share the same reward weights. * **Non-adaptive learner:** The intelligent agent learns personalized reward weights for each human. It only uses these learned weights for performance assessment and human behavior modeling. It still optimizes the MDP based on its own fixed reward weights. * **Adaptive learner:** The intelligent agent learns personalized reward weights for each human. It uses them for performance assessment and human behavior modeling, and also optimizes the MDP based on these reward weights. In other words, it updates its own reward function to match the learned reward function. Although it may look like the non-learner and the non-adaptive learner both optimize the same reward function, they actually optimize the expected reward under the assumed human trust-behavior model. Thus, since the non-adaptive learner has a better estimate of the human's preferences, we postulate that it will have a better estimate of the human's trust and behavior and, hence, will show some difference in its recommendations compared to the non-learner. ### Measures **Pre-experiment Measures.** Before the beginning of each mission, we ask the participants to rate their preference between saving the soldier's health and saving time by moving a slider between these two objectives, showing their relative importance.
**In-experiment Measures.** After each site's search was completed, the participants were asked to provide feedback on their level of trust in the intelligent agent's recommendations. The interface can be seen in fig. 2. Figure 2: The trust feedback slider used to get feedback from the participants after every search site. The mission timer is paused when the slider is shown to let the participants take their time in adjusting their trust. The slider values were between 0 and 100 with a step of 2 points. The end-of-mission trust (used in sec. 5) is the feedback given by the participant using this slider after completing the search of the last search site. **Post-mission Measures.** We used the following measures as a post-mission survey that the participants filled out after every mission. * **Post-mission Trust:** This was measured using Muir's trust questionnaire [16]. It has 9 questions, each with a slider answer range between 0 and 100. Note that this is separate from the end-of-mission trust, which is a subjective rating on a single slider by the participant after the last search site is completed. * **Post-mission Reliance Intentions:** Measured using the scale developed in [10]. It has 10 items, but 6 items were used herein, each on a 7-point Likert scale. ## 5 Results and Discussion This section provides an overview of our major results and discusses some reasons behind them and their implications. Note: all error bars on bar charts are standard errors. ### Trust We expect that adapting to human preferences will result in higher levels of human trust in the robot. Fig. 3 shows the post-mission trust rating given using Muir's trust scale. Figure 3: Mean and standard error of trust ratings given by the participants post-mission using Muir's trust scale [16]. Repeated measures ANOVA shows a significant difference between the three strategies (\(F(2,22)=11.962,p<0.001\)), with the highest trust given to the adaptive learner. Post-hoc analysis with Bonferroni adjustment shows significant differences between the non-learner strategy and the adaptive-learner strategy (\(p=0.001\)) and between the non-adaptive-learner strategy and the adaptive-learner strategy (\(p=0.014\)). Fig. 4 shows the average trust rating given by the participants to the recommendations of the intelligent agent across their interaction period. Repeated measures ANOVA shows a significant difference between the three strategies (\(F(2,22)=4.968,p=0.017\)). Post-hoc analysis with Bonferroni adjustment reveals a significant difference in average trust rating between the non-learner and the adaptive-learner strategy (\(p=0.044\)) and no significant difference between the other two pairs. Fig. 5 shows the trust rating given by the participants to the recommendations of the intelligent agent at the end of their mission. Repeated measures ANOVA shows a significant difference between the three strategies (\(F(2,22)=7.455,p=0.003\)). Post-hoc analysis with Bonferroni adjustment reveals a significant difference in end-of-mission trust between the non-learner and the adaptive-learner strategy (\(p=0.044\)) and a marginally significant difference between the non-adaptive-learner and adaptive-learner strategies (\(p=0.057\)). This trend could reach significance with the additional data we are currently collecting. The end-of-mission trust rating should be a stable trust rating, since the participants have had enough interactions with the intelligent agent to have a good sense of their trust in it.
Fig. 6 shows the number of agreements between the recommendation from the intelligent agent and the participant's action selection. We expect there to be a positive correlation between the number of agreements and the trust reported by the participants. Repeated measures ANOVA shows a significant difference between the three strategies \((F(2,22)=13.732,p<0.001)\). Post-hoc analysis with Bonferroni adjustment reveals a significant difference in the number of agreements between the non-learner and the adaptive-learner strategy \((p=0.003)\) and between the non-adaptive-learner and the adaptive-learner strategy \((p=0.009)\). The participants agreed the most with the adaptive learner's recommendations. Figure 4: Average trust reported by the participants across the interaction period, mean and standard error. Figure 5: Trust reported by the participants at the end of the mission, mean and standard error. Figure 6: Number of agreements between the recommendation from the robot and the human's action choice, mean and standard error. ## 6 Conclusions In this study, we provided a demonstration of the use of Bayesian IRL coupled with the bounded-rationality-disuse model of human behavior to learn a human's preferences in performing a human-robot team task. We implemented an adaptive interaction strategy for the robot that learns and optimizes a reward function based on these preferences. We showed the trust and performance improvement when using such an adaptive interaction strategy compared to two baselines. The results of our study should be seen in light of the following limitations. First, we provide a demonstration in the case where there are only two components in the team's reward function. Therefore, we only need to learn the human's preference for one of the two components and can ascertain their relative preference between the two objectives. Our formulation, however, can readily be extended to the case where there are more than two objectives in the team's reward function, with additional computations required to learn and maintain a distribution over each reward weight. Second, we used an uninformed uniform prior for the reward weights of the human. This uniform distribution was, thus, also used to set the reward weights for the non-adaptive interaction strategies. This simulates a scenario where we do not have any other way of setting the reward weights for the robot. Another approach could be to use a data-driven prior on the reward weight distribution to set these weights. A similar comparison between the three strategies with such an informed prior could be a direction for a future study. ## Acknowledgments This work was supported by the Air Force Office of Scientific Research under grant #FA9550-20-1-0406.
2305.19849
Biography-based Robot Games for Older Adults
One issue in aging is how to stimulate the cognitive skills of older adults. One way to address it is the use of serious games delivered through humanoid robots, to provide engaging ways to perform exercises to train memory, attention, processing, and planning activities. We present an approach in which a humanoid robot, by using various modalities, proposes the games in a way personalised to specific individuals' experiences, using their personal memories associated with facts and events that occurred in older adults' lives. This personalisation can increase their interest and engagement, and thus potentially reduce cognitive-training drop-out.
Benedetta Catricalà, Miriam Ledda, Marco Manca, Fabio Paternò, Carmen Santoro, Eleonora Zedda
2023-05-31T13:37:48Z
http://arxiv.org/abs/2305.19849v1
# Biography-based Robot Games for Older Adults ###### Abstract One issue in aging is how to stimulate the cognitive skills of older adults. One way to address it is the use of serious games delivered through humanoid robots, to provide engaging ways to perform exercises to train memory, attention, processing, and planning activities. We present an approach in which a humanoid robot, by using various modalities, proposes the games in a way personalised to specific individuals' experiences, using their personal memories associated with facts and events that occurred in older adults' lives. This personalisation can increase their interest and engagement, and thus potentially reduce cognitive-training drop-out. Humanoid robot, Personalisation, Serious Games, Cognitive training ## 1 Introduction The increasing number of older adults implies an increasing need for their physical, social, and cognitive assistance. Indeed, aging has a considerable impact on the health of older adults in terms of cognitive and physical impairments, which affect their ability to complete basic activities of daily living, such as cooking, shopping, managing the home, bathing, and dressing. Nowadays, a large proportion of cognitive assistance is provided by informal caregivers, usually family members. These caregivers often experience a negative impact on their psychological, emotional, and physical well-being due to the high workload [2]. Given the high health care expenditure at older ages, and the effects on family caregivers, new technologies to assist older adults with cognitive impairments are urgently needed. Non-pharmacological interventions, such as physical training, cognitive training, and social stimulation activities, have been used to mitigate cognitive decline by maintaining or improving the cognitive abilities, social well-being, and quality of life of older adults [2, 3]. However, traditional interventions require experienced instructors, who may be unavailable. Assistive technologies can provide useful support to address this problem. They are technologies that aim to assist different types of users during their rehabilitation. They can help older adults maintain their independence during daily routines and can also be an important instrument during their rehabilitation [11]. In recent years, humanoid robots have become increasingly similar to humans in their behaviour, from gestures and facial expressions to the ability to understand questions and provide answers. Thanks to such humanlike characteristics, the interaction between people and robots is becoming more natural. The behaviour of such robots can also be personalised through end-user development approaches, such as trigger-action rules and associated support [6]. A recent literature review [10] indicates that the humanoid robot is an interactive technology still not sufficiently investigated for supporting the cognitive stimulation of older adults. In this paper, we present a novel approach based on a Pepper humanoid robot, which exploits serious games for the cognitive stimulation of older adults. A humanoid robot is a system that can employ different interaction strategies, such as verbal and non-verbal communication, facial expressions, and communicative gestures, and can detect the surrounding context by using various sensors (tactile sensors, camera, microphones).
These capabilities are essential for creating social and emotional interaction with users, increasing their acceptability and the user's engagement, which may increase the possibility of reaching the goal of assistance in less time and with better results [2]. Using robots to support and assist patients can be a valuable tool to help them during their cognitive training. In such a context, digital cognitive training through serious games may potentially benefit those with cognitive impairments more than traditional training due to enhanced motivation and engagement. In the literature, several studies show that digital games can obtain positive results in helping seniors improve their cognitive abilities compared to traditional training [8]. Since older adults vary in terms of preferences, interests, and abilities, it is important to propose serious games for cognitive training that can be personalised, and are thus more relevant for them. Combining a humanoid robot and a set of personalised serious games can be a solution to obtain measurable progress in cognitive functions and stimulate the user to continue the training [13]. Personalised serious games for cognitive intervention have been explored with mobile apps [15] but have not been investigated with humanoid robots. We aim to offer novel digital training through serious games designed using personally relevant material from older adults' lives. They will be based on elements associated with their biography, thus making interactions personalised, relevant, and more engaging. ## 2 The Sereni Approach The psychological well-being of older adults may be affected by some age-related conditions, such as approaching death, loss of family members, and reduced autonomy. A meta-analysis [2] indicates that the practice of life review (discussing what a memory means), even more than reminiscence (describing a memory itself), is a good instrument for improving the psychological well-being of older adults, and that its effect sizes are comparable to those of cognitive-behavioural therapy. Serrano et al. [11] found that the practice of autobiographical memory improved the mood of the elderly by improving their life satisfaction. Furthermore, Damianakis et al. [4] report that interventions that contextualise history, personality, and life experiences can contribute to improving both communication and social interactions between family members and between family members and formal caregivers. Based on previous experiences [7], we have started the development of a new prototype in which the serious games installed on the humanoid robot will motivate older adults by engaging them in playful situations that draw on their personal memories, with which they can interact. Indeed, such serious games are designed to use personally relevant material and events from older adults' lives. Specifically, the games are based on elements associated with the biography of the users (mainly taken from their youth), thus making interactions more relevant and more likely to keep them engaged while enhancing their well-being. Based on these motivations, we have designed the SERENI platform to deliver serious games using personally relevant material from older adults' lives through a humanoid robot. It aims to stimulate cognitive functions through play sessions, which should last 15-20 minutes. The exercises should be useful for making the participants think and reason before providing the correct answer.
The platform can be a solution for day-care centres where older adults with mild cognitive impairments can go to perform relevant exercises. On the one hand, the older adults, by interacting with the biographical app, provide relevant biographical data that are mainly used to customise the games, which will thereby be highly personalised for them. On the other hand, seniors will also interact with the games to stimulate their cognitive abilities. The data produced during the interactive sessions will be exploited to improve the adaptation of the game itself (according to the data gathered in previous game sessions) and also to feed the associated analytics services. The SERENI platform is based on a modular architecture allowing the deployment of multimodal serious cognitive games on a humanoid robot. Thanks to its human-like appearance and behaviour, it can stimulate interest and engagement from seniors in a way that would be more difficult with other types of smaller and more limited robots. The platform is based on various components. The first one is the Remind App, a responsive multimodal Web application to collect memories from older adults and their relatives. The memories can be entered both through graphical and vocal interaction. Biographical information is exploited in a group of games that aim to stimulate and train various cognitive resources in older adults (memory, attention, planning). The platform is also able to store data regarding user performance (i.e. when and for how long the user played a given game, the number of errors in a session, the type of games played). In the resulting environment, the humanoid robots will serve as personal trainers, proposing exercises, communicating through various modalities, and challenging users in cognitive games relevant to their daily life (e.g. by remembering past events or names of family members and friends). The solution aims to allow caregivers to configure the exercises and choose the most suitable games to stimulate the cognitive skills of users and enhance their experience. Caregivers can also interact with an Analytics tool, to have both overview and detailed information regarding user performance and state. For this goal, the games include a custom tracking system, which tracks data about user performance and other game analytics (such as time, number of errors, pass/fail, score, completion level, etc.). To facilitate entering the memories through the responsive Web application (Remind) developed to collect older adults' memories, we thought it useful to categorise the biographical aspects, also because different types of memories need different types of questions to be entered. Based on our previous experiences in projects in the Ambient Assisted Living area and informal discussions with relevant stakeholders, in the first version of the app we identified a first set of memory categories: Music, Events, Games, Places, Food, and Hobbies. We then decided to carry out an empirical validation of this classification with the target audience, by proposing a questionnaire (in Italian), composed of three parts, to people aged 65+.
The initial part was dedicated to demographic information; in the second part, respondents were asked to freely indicate at least four categories that they deemed particularly relevant to classify their personal memories, and to select the categories they found relevant among Food, Events, Family, Travels, Music, Hobby, Work, Love/Friendship, Study, and Health. Then, they had to rate on a scale from 1 to 5 the relevance of the categories used in the initial version of the games (Locations, Games, Hobby, Food, Music, and Personal Events), with the possibility of indicating a category they would add or remove. The questionnaire was completed in paper form by 50 people (23 males and 27 females) aged between 65 and 84 years (Mean: 72, SD: 5.09). 40% had a higher education, and 38% had a degree. 86% were very familiar with electronic devices such as smartphones, tablets, and PCs; the remaining 14% only used smartphones, mainly out of necessity. 80% indicated "Family" as the most representative category for their memories; the examples proposed concerned the birth of children and grandchildren, the memory of parents and grandparents, and the childhood home. 40% of them indicated "Work", in particular the first experiences and satisfactions during their career. 50% cited "Affections", and provided examples such as meeting their first love, childhood friendships, and events such as engagement and marriage. Participants were also asked to indicate, among the initially proposed categories, those most relevant to them. Participants rated each category on a scale of 1 (Not Relevant) to 5 (Relevant). In particular, 54% rated "Hobbies" as Not Relevant (scores \(<\) 3); 40% rated "Food" as Not Relevant (scores \(<\) 3); 74% rated "Music" as Relevant (scores \(>\) 3); 86% rated "Places" as Relevant (scores \(>\) 3); 98% rated "Events" as Relevant (scores \(>\) 3); 68% rated "Games" as Relevant (scores \(>\) 3). The most relevant category among those proposed by us was "Events" (average score = 4.7): it was considered very versatile by users, as it allows the inclusion of different types of memories. The least relevant categories were Hobbies and Food. Hobbies received an average score of about 2.6: the main criticism concerned the category's name, which was judged inadequate compared to other proposals. As possible replacements, terms such as "leisure" or "entertainment" were suggested. Participants also showed very low interest in the Food category, as most of them said that it did not significantly impact their life experiences. In conclusion, the most significant memories concern the dearest affections and memorable events in life (i.e. graduation, marriage, the birth of children). Of the six categories proposed, Music and Personal Events aroused the greatest interest. Thus, in the new version of the Remind app we introduced the Affections category and removed the Food one. In the end, the categories selected were: Affections, Events, Games, Hobbies, Places, and Music. At the beginning of the interaction with the Remind Web application, users are asked whether they want to enter a new memory or review those previously entered. After selecting a memory category, the user can provide the information associated with the specific memory. For example, to enter a memory related to a particular event in life, the user indicates a name for the event and provides a description, which can be entered either vocally or by keyboard.
The users can also indicate their age when the event occurred, and optionally provide an image associated with it. In the case of a memory in the Hobby category, the user can also provide a list of activities required by the hobby. All such information can then be used by the games provided by the Pepper robot for specific exercises. In general, it is not necessary for the older adults to enter the memories directly; to facilitate the process, they can tell them to a formal or informal caregiver, who can also help them in specifying relevant memories. The Pepper application presents various exercises useful for making the participants think and reason to provide the correct answer. An initial set of five games has been identified: \(\bullet\) **Memory completion**. Pepper presents a memory with a missing detail, which the user should select from a set of options (if the answer is correct, the memory is re-read to the senior). For example: "when I was 12 years old I used to spend summer time in..." and the robot shows the possible options Marina di Pisa, Tirrena, San Vincenzo, or Castiglioncello; or "I used to listen to that singer when I travelled by car with my father" with possible answers Modugno, Morandi, Celentano, Guccini; \(\bullet\) **Activities ordering**. It is only applied to the Hobby category: a set of activities presented in an unordered list should be put in the right order by the user (this can stimulate executive functions and procedural memory); \(\bullet\) **Memory association**. In this game, 3-4 memories are briefly listed, as well as some details: users have to connect each memory with the corresponding detail, for example associating song titles with the corresponding singers (to stimulate attention and memory); \(\bullet\) **Memory-related event question**. The user has to guess an event that happened in the same year as the memory: the robot asks the user to select that event from a list of possible events. For example: what happened in the same year you got married (1945)? Possible answers: "the end of the Second World War", "the first man on the moon", "women gained the right to vote in Italy" (useful to stimulate long-term memory); \(\bullet\) **Music game**. The robot plays the initial part of a song popular at the time of the memories, and the user has to guess its singer or title. In general, music has a positive effect on the user's engagement, and in this case music related to their memories is proposed. At the beginning of a session, the robot asks for the name of the user and then uses this information to retrieve the memories that the user entered, which are available from the biography application backend through a RESTful service and transmitted in JSON format. The memories arrive at the robot with an indication of the corresponding category, which is useful to determine how to exploit them in the various exercises. For a missing detail in the Memory completion exercise, the robot proposes a memory and a list of possible missing details derived from that user's memories. For the memory-related event exercises, the list of real-event options is taken from external services. The activities ordering exercise refers only to the Hobby category because only in that case are users asked to enter the steps required to perform the hobby. Thus, users can first select the type of game they want to play, and then they have the opportunity to perform the associated exercises, with personalised content.
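To illustrate how a retrieved memory could drive one of these exercises, the sketch below shows a hypothetical memory record (as it might arrive, JSON-decoded, from the backend) and how a Memory completion question could be assembled from it. The field names, sample values, and distractor handling are assumptions for illustration only, not the actual SERENI data schema or game logic.

```python
import random

# Hypothetical shape of a memory record received from the backend (JSON-decoded);
# field names and values are illustrative, not the actual SERENI schema.
memory = {
    "category": "Places",
    "text": "When I was 12 years old I used to spend summer time in Marina di Pisa",
    "detail": "Marina di Pisa",   # the detail to hide in the Memory completion game
    "age_at_event": 12,
    "image": None,
}

def memory_completion_question(memory, distractors, n_options=4):
    """Build a Memory completion item: the memory text with the detail blanked out,
    plus shuffled answer options drawn from the user's other memories."""
    prompt = memory["text"].replace(memory["detail"], "...")
    options = random.sample(distractors, n_options - 1) + [memory["detail"]]
    random.shuffle(options)
    return prompt, options, memory["detail"]

prompt, options, answer = memory_completion_question(
    memory, distractors=["Tirrena", "San Vincenzo", "Castiglioncello", "Viareggio"])
print(prompt)   # "When I was 12 years old I used to spend summer time in ..."
print(options)  # four shuffled options, one of which is the correct detail
```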
## 3 Conclusions and Future Work In this paper, we introduce a novel approach to personalising serious games for the cognitive stimulation of older adults, delivered through a humanoid Pepper robot. It is based on a multimodal Web app to collect memories of older adults; such content is then exploited in a set of games aiming to stimulate several cognitive resources of seniors. We have collected memories from 16 older adults (65+) with MCI, and in the coming weeks we will carry out a trial in which they will be asked to interact with both the version of the games exploiting personal memories and another version with standard content, in a within-subjects study, so that we can assess the impact of the biography-based personalisation. ## Acknowledgments This work is partly supported by the CNR project SERENI [https://hiis.isti.cnr.it/sereni/index.html](https://hiis.isti.cnr.it/sereni/index.html)
2305.20010
Human or Not? A Gamified Approach to the Turing Test
We present "Human or Not?", an online game inspired by the Turing test, that measures the capability of AI chatbots to mimic humans in dialog, and of humans to tell bots from other humans. Over the course of a month, the game was played by over 1.5 million users who engaged in anonymous two-minute chat sessions with either another human or an AI language model which was prompted to behave like humans. The task of the players was to correctly guess whether they spoke to a person or to an AI. This largest scale Turing-style test conducted to date revealed some interesting facts. For example, overall users guessed the identity of their partners correctly in only 68% of the games. In the subset of the games in which users faced an AI bot, users had even lower correct guess rates of 60% (that is, not much higher than chance). This white paper details the development, deployment, and results of this unique experiment. While this experiment calls for many extensions and refinements, these findings already begin to shed light on the inevitable near future which will commingle humans and AI.
Daniel Jannai, Amos Meron, Barak Lenz, Yoav Levine, Yoav Shoham
2023-05-31T16:32:22Z
http://arxiv.org/abs/2305.20010v1
# Human or Not? ###### Abstract We present _"Human or Not?"_1, an online game inspired by the Turing test, that measures the capability of AI chatbots to mimic humans in dialog, and of humans to tell bots from other humans. Over the course of a month, the game was played by over 1.5 million users who engaged in anonymous two-minute chat sessions with either another human or an AI language model which was prompted to behave like humans. The task of the players was to correctly guess whether they spoke to a person or to an AI. This largest scale Turing-style test conducted to date revealed some interesting facts. For example, overall users guessed the identity of their partners correctly in only 68% of the games. In the subset of the games in which users faced an AI bot, users had even lower correct guess rates of 60% (that is, not much higher than chance). This white paper details the development, deployment, and results of this unique experiment. While this experiment calls for many extensions and refinements, these findings already begin to shed light on the inevitable near future which will commingle humans and AI. Footnote 1: [https://www.humanronot.ai/](https://www.humanronot.ai/) ## 1 Introduction The famous Turing test, originally proposed by Alan Turing in 1950 as "the imitation game" (Turing, 1950), was proposed as an operational test of intelligence, namely, testing a machine's ability to exhibit behavior indistinguishable from that of a human. In this proposed test, a human evaluator engages in a natural language conversation with both another human and a machine, and tries to distinguish between them. If the evaluator is unable to tell which is which, the machine is said to have passed the test. While when it was proposed by Turing the test was more of a thought experiment than a practical proposal, in 1990, the Loebner Prize was established as an annual competition to reward the most human-like computer programs, adding a tangible goal of \(100,000\$\) for the builders of an AI system that can fool all \(4\) human judges. A widely publicized case of an AI system purportedly passing a Turing-like test emerged in 2014. Eugene Goostman, a chatbot emulating a 13-year-old Ukrainian boy, managed to convince 33% of the judges at a competition held in the Royal Society in London that it was human. However, some argued that Goostman's portrayal as a young non-native English speaker was deliberately used to elicit forgiveness from those interacting with him, explaining any grammatical errors or gaps in general knowledge. Since then, staggering progress has been made in the fields of artificial intelligence and natural language processing by Large language models (LLMs) like ChatGPT (OpenAI, 2022) or AI21 Labs' Jurassic-2 (AI21 Labs, 2023). Contemporary LLMs demonstrate remarkable language generation capabilities, producing coherent and contextually relevant responses across a wide range of topics. Indeed, while it is unlikely that Turing himself could have predicted the recent burst of AI advances, it is now clear that LLMs can be put to Turing-like tests with a fighting chance. This white paper describes _"Human or Not?"_, a social experiment that we released as a game in which users conduct open ended short conversations with a second party, and at the end cast their vote: did they converse with a fellow human user or with an AI bot? The experiment was deliberately open-ended. While the explicit task given was to guess the type of interlocutor, users were free to add other motivations. 
Thus, some users tried to trick their partners into believing they were speaking with an AI, some tried to convince the other party that they were human, while some users stuck to the assigned task and focused on interrogating their partner on what they considered to be traits or topics that distinguish between humans and bots. It should also be said that our AI bots too were not innocuous; we prompted them to make convincing attempts to mimic humans in a variety of aspects, which ranged from human-like slang and spelling errors, to holding a coherent back story about their character, all the way to leaving the game in the middle if the other side offended them. These made the game challenging and engaging, extracting emotional reactions from users at times. Riding the current massive wave of public interest in AI, in its first month _"Human or Not?"_ accrued over 10 million human-AI and human-human conversations by over 1.5 million unique users, providing us with the first ever statistically robust scores for a Turing-like test. Several interesting findings emerged. Most importantly, our experiment echoed Turing's prediction that after a short interaction, an average interrogator would have less than a 70% chance of identifying an AI: users guessed the identity of their partners correctly in 68% of the games (notably, Turing assumed 5-minute interactions while we only allowed 2-minute ones). Intriguingly, in the subset of the games where users faced an AI bot, users had an even lower correct guess rate of 60%. While this isn't a completely fair comparison due to the shorter time frame and potential influence from game design decisions, it's fascinating to see Turing's forecast partially borne out. Although contemporary AI bots are still far from perfect, the results of our experiment clearly show that they are making staggering progress. The _"Human or Not?"_ setup is the first statistically robust method for tracking this progress, and it can be re-used in upcoming years as AI agents improve. Future analyses of this data can offer valuable insights into the current capabilities of AI models and the strategies humans use to identify AI-generated text. Below, we outline the design and development process of _"Human or Not?"_ and present an initial analysis of the game's data. We hope that our setup and findings can provide valuable insights for the ongoing development of AI language models, the design of future human-AI interaction scenarios, and our understanding of how humans perceive and interact with AI systems. ## 2 Game Design and Development ### Motivation and Design Principles Contemporary AI models give us a glimpse into a future where AI plays active roles in our lives, ranging from providing chatbot assistance in commercial services, revolutionizing education, boosting creativity as a thought partner for creators, providing loneliness relief for the elderly, and more. Given this trajectory, we think it is important to (1) understand the traits and behaviors which people perceive as "human-like" or "machine-like", and (2) develop quantitative measures that capture the ability of AI systems to mimic humans.

| | Probability of Correct Guess |
| --- | --- |
| Overall | 68% |
| When Partner is a Bot | 60% |
| When Partner is Human | 73% |

Table 1: Probability of correct guess by partner type.
With this in mind, we created a platform that would facilitate Turing-like tests in a modern, engaging, and accessible manner. Our success in popularizing this experiment provides the first ever statistically robust score to a Turing-like test, which serves as a baseline for future progress. Concretely, we made strategic choices aimed at creating an immersive gamified experience which encourages recurring users. The conversations have a "ping-pong" structure that prevents players from sending two consecutive messages without a response, in order to ensure a balanced and dynamic exchange. Each message, limited to a maximum of 100 characters, has to be composed and sent within a 20-second window, and the chat ends after 2 minutes, usually consisting of 4-5 messages from each side. This ensures that players don't have to wait for too long, so they can remain engaged with the game and a constant suspense is kept. Once the conversation is over, players are prompted to guess whether their conversational partner was a fellow human or an AI bot. Several other design decisions shaped the game dynamics. Firstly, input was limited to Latin characters and emojis to encourage English communication, though this solution was only partially effective as many languages can still be written using Latin characters. Secondly, we opted for anonymity, not requiring any registration, which aimed to lower barriers to entry, although it limited demographic analysis. In addition, we did not impose a limit on the number of times a user could play, providing the opportunity for them to develop and refine their strategies over time. Lastly, we refrained from implementing a leaderboard to keep the focus on exploring AI-human interaction and discourage system gaming. The only performance indicator was the display of correct guesses versus total games played. Notably, we decided not to inform the players on what their counterpart's guess was (when it was human). The rationale behind this choice was to prevent incentivizing the player to imitate a bot. While our results showed that bot imitation was indeed a prevalent strategy used by the players, we suspect that the situation could have been exacerbated if players had access to their counterpart's eventual guess. As we reflect on the game design and user feedback, it is intriguing to consider alternative structures that could lead to different behaviors. For instance, one idea is a modified ranking system that penalizes users for being misidentified as bots, thus encouraging "authentic" human-like behavior. Such changes might further reduce bot imitation but could introduce new biases and strategies. It also raises intriguing questions about what constitutes "authentic" behavior in such a setting and how it might be incentivized. In addition to what was previously mentioned, each message goes through a moderation service to ensure a safe environment and prevent abuse and hate speech. Any flagged content in AI-generated responses is filtered out, and if a user message is flagged, the conversation promptly ends. Finally, to encourage engaging and varied conversations, we provide both human users and AI bots with randomized conversation starters. These suggestions are intended to reduce the likelihood of repetitive or mundane conversations, contributing to the game's challenge and entertainment value. 
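To make the interaction constraints concrete, here is a minimal sketch of how the turn-taking and timing rules described above could be enforced. The constants mirror the stated limits (100 characters per message, 20 seconds per turn, 2-minute chats), while the class and method names are illustrative assumptions rather than the game's actual implementation.

```python
import time

MAX_MESSAGE_CHARS = 100   # per-message character limit
TURN_SECONDS = 20         # time allowed to compose and send a message
CHAT_SECONDS = 120        # total chat duration (2 minutes)

class ChatSession:
    """Enforces the ping-pong structure: a player may not send two messages in a row."""

    def __init__(self):
        self.start = time.monotonic()
        self.last_sender = None
        self.transcript = []

    def time_left(self) -> float:
        return CHAT_SECONDS - (time.monotonic() - self.start)

    def send(self, sender: str, text: str, compose_seconds: float) -> bool:
        if self.time_left() <= 0:
            return False              # chat is over
        if sender == self.last_sender:
            return False              # ping-pong rule: wait for the other side to reply
        if len(text) > MAX_MESSAGE_CHARS:
            return False              # message too long
        if compose_seconds > TURN_SECONDS:
            return False              # took too long to compose
        self.transcript.append((sender, text))
        self.last_sender = sender
        return True

session = ChatSession()
print(session.send("player_a", "hey, how's it going?", compose_seconds=6.0))  # True
print(session.send("player_a", "hello??", compose_seconds=3.0))               # False (must wait)
```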
### Developing the Conversational AI Bots One of the central challenges we faced during the development was ensuring that our AI bots were not easily distinguishable from humans. We recognized the difficulty of rendering bots truly human-like and hence set about defining a set of properties that the AI could emulate. Consequently, we moved beyond Turing's original formulation, which implicitly assumed the AI to be a neutral entity. Instead, we created a diverse array of bots, each with its unique personality and objective. We were motivated by the desire to keep the conversations interesting and less repetitive for recurring users, and to obscure the tell-tale signs of bots, making detection more challenging. Specifically, each bot is prompted with a persona that includes basic information such as a name, age, occupation, and location, along with distinctive personality traits like wit, humor, or seriousness (see example in figure 1(a)). The prompts also contained game instructions, making the bots aware of the gameplay context. To prevent users from leveraging the multilingual training data of the models to differentiate them from humans, the bots were also instructed to role-play a character that only spoke English. Some bots are even endowed with playful objectives, creating captivating narratives that keep the users engaged (see examples in figures 1(b) and 1(c)). Following Ram et al. (2023), who demonstrated that integrating relevant information into a language model's context can enhance the factuality of its generated text, we fed our bots real-time, contextually relevant information, such as local news and weather data (see example in figure 2). This way, we hoped that when asked about recent events, the bots would be less likely to generate misleading or incorrect information and would instead provide a response grounded in the factual information already present in their context, allowing their interactions to be more nuanced and believable (indeed, many users tried to trick the bots by inquiring about real-time information; see section 3.1). The bots also display a wide repertoire of writing styles, from impeccable spelling and punctuation to the intentional use of grammatical errors and slang (see examples in figure 3). To add to the variety, we include several different backbone language models that introduce additional diversity, including Jurassic-2 (AI21 Labs, 2023), GPT-4 (OpenAI, 2023), and Cohere2. By generating such a diverse set of AI bots, we hope to keep the conversations interesting and less repetitive for recurring users, and to undermine any easy identification of a common "bot-like behavior". Footnote 2: [https://cohere.com/](https://cohere.com/) Moreover, we incorporated certain behavioral elements into the AI bots to mimic human tendencies. For instance, regardless of how well an AI bot might mimic human language, instantaneous responses could be a tell-tale sign of a non-human partner. Therefore, we implemented an artificial delay in the bots' responses, simulating human typing speed. On top of that, we also introduced elements of unpredictability and unresponsiveness into the bots' behaviors. For example, some bots were programmed to exit the conversation abruptly under certain conditions, such as when they are "offended" or when the conversation becomes repetitive. This unpredictability was designed to mimic human behavior further, as human users may also choose to end a conversation suddenly for a variety of reasons.
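As a rough illustration of the bot-side mechanics described in this section, the sketch below assembles a persona prompt with injected real-time context and adds a typing-speed delay before replying. The prompt wording, the `generate_reply` placeholder, and the timing constants are assumptions for illustration; they are not the actual prompts, models, or parameters used in the game.

```python
import random
import time

def build_persona_prompt(persona: dict, realtime_context: str) -> str:
    """Assemble a system prompt from a persona plus injected real-time information.
    The wording is illustrative, not the game's actual prompt."""
    return (
        f"You are {persona['name']}, a {persona['age']}-year-old {persona['occupation']} "
        f"from {persona['location']}. Personality: {persona['traits']}. "
        "You are playing a 2-minute chat game and must convince the other player you are human. "
        "Keep replies under 100 characters, use casual language, and claim to speak only English.\n"
        f"Context you may casually refer to:\n{realtime_context}"
    )

def humanlike_delay(reply: str, chars_per_second: float = 5.0, thinking_seconds: float = 1.5):
    """Delay the reply roughly in proportion to its length, imitating human typing speed."""
    time.sleep(thinking_seconds + len(reply) / chars_per_second + random.uniform(0.0, 1.0))

persona = {"name": "Maya", "age": 27, "occupation": "barista", "location": "Lisbon",
           "traits": "witty, a bit sarcastic"}
context = "Weather: 26C and sunny. Top story: local transit strike ends today."
prompt = build_persona_prompt(persona, context)

def generate_reply(prompt: str, user_message: str) -> str:
    # Placeholder for a call to a backbone LLM; returns a canned reply in this sketch.
    return "ha, barely survived the commute today, the strike just ended"

reply = generate_reply(prompt, "how was your day?")
humanlike_delay(reply)
print(reply)
```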
## 3 Results and Analysis With more than 1.5 million unique users and over 10 million guesses in the first month, _"Human or Not?"_ generated a rich dataset for analysis. Figure 1: Examples of different types of prompts for initializing bots' personas. Figure 2: Example of the real-time contextual information provided to a bot (date, time, weather, local news headlines, and tweets in Honolulu). From the vast pool of interactions, we identified several types of human players that excelled in different aspects of the game: players who were adept at identifying bots, players who were proficient at recognizing fellow humans, players who convincingly signaled their own humanity, and players who masterfully impersonated bots. Players drawn from these groups, as well as the intersections among them, offered fascinating insights and were a point of particular interest in our broad analysis. By examining anonymized conversations from across the spectrum of players, we can explore how people try to distinguish between humans and AIs, how they try to prove their own humanity, or, conversely, how they attempt to impersonate an AI system. We found a wide range of strategies used by players (see several examples in figure 4), showcasing the cognitive flexibility and creativity of the human mind. Many of these common strategies were based on the perceived limitations of language models that people encountered while using popular AI interfaces, as well as individuals' prior experiences with human behavior in online interactions.
### Distinguishing between humans and AIs We detail below prevalent strategies which players used in order to assess the identity of their counterpart. **Grammatical errors and typos.** Some players believed that such errors were strictly made by humans, so they paid close attention to typos, grammatical mistakes, and slang usage, assuming these were tell-tale signs of human interaction. This strategy, however, had its pitfalls, as the AI models were explicitly instructed to occasionally include such errors and imitate informal language to mimic realistic human conversation. Figure 4: Examples of different strategies players used. As a matter of fact, out of 26k conversations in which we identified a comment about spelling, human players were able to identify their bot counterparts only 54.7% of the time, significantly lower than the general rate of 60%. We speculate that in these conversations, the human player pointed out a spelling error in one of their counterpart's messages, and thus assumed they were human. **Personal questions.** Another prevalent approach involves posing personal questions to challenge the AI, delving into areas presumed to be beyond its scope. Participants asked about personal history, experiences, and unique insights in an attempt to differentiate between the human and AI conversation partners. Yet, with the bots having been primed with fabricated personal stories, this approach was often less successful than users anticipated. Interestingly, some participants sought to probe the AI's emotional and philosophical capacities, posing abstract or ethical questions. They inquired about the meaning of life, opinions on geopolitical conflicts, or religious beliefs, aiming to assess the AI's depth of understanding and ability to provide nuanced responses. However, even in the face of such complex inquiries, the AI bots were able to offer plausible and contextually appropriate replies (see examples in figures 5(a) and 5(b)). Moreover, participants often posed questions that required subjective thoughts and feelings, using phrases like "What do you think/feel about...?" or "What's your opinion on...?". The underlying assumption behind this strategy was that AI, lacking human experiences and emotions, would struggle to convincingly answer such queries. The bots often coped well with these question types as well (see example in figure 5(c)), but still seemed to struggle sometimes. When evaluating 13k conversations about religious beliefs, 400k conversations about personal opinions and preferences, and 650k conversations about personal relationships, the correct guess rates were almost identical to the general rate of 60% in human-AI chats, suggesting that this approach was not particularly effective. Interestingly, when evaluating 2.5k conversations about geopolitical conflicts, the bots were correctly identified at a significantly higher rate of 68%, suggesting that we didn't provide them with all relevant knowledge about ongoing conflicts, or that these issues involve deeper nuance. **Politeness.** Another strategy involved evaluating the tone of their conversation partners. AI is often associated with formality and politeness, whereas humans, especially online, can exhibit rudeness and aggression. Some users therefore made provocative comments and tried to elicit aggressive behavior from their counterparts. Upon scrutinizing 270k conversations, we found that users adopting an overly polite demeanor ultimately assumed that their counterparts were human.
These users correctly identified a human counterpart 80.9% of the time, but only correctly identified a bot 52.2% of the time. Interestingly, their counterparts only recognized them as human 69.2% of the time, versus a general 73% identification rate in human-human conversations, indicating a perception bias towards considering polite individuals as AIs. Conversely, an examination of 470k conversations revealed that users employing rude or vulgar language were more proficient in correctly identifying both human and bot counterparts, with success rates of 76.2% and 63.6% respectively. Their human counterparts correctly identified them 86.7% of the time, suggesting a strong association between rudeness and humans. This hypothesis is further supported by our finding that vulgar bots were only correctly identified 52.1% of the time, adding to the notion that users associated impoliteness with human behavior. Figure 5: Examples of AI answers to subjective and personal questions. **Up-to-date information.** Additionally, participants tried to probe whether their conversation partner was up-to-date with current events or trends, due to the known data cut-off point of AI models. Their questions spanned from recent global news (see examples in figure 6) to popular TikTok dances. In an analysis of 90k conversations where a human player talked about social media trends, they were able to correctly identify their fellow human counterparts 79.7% of the time, significantly surpassing the overall average of 73%. This suggests that their chat partners often provided convincing responses about such trends. Interestingly, in 50k conversations where the same topic was brought up to an AI counterpart, humans could only correctly identify the AI 55.4% of the time. This suggests that discussions revolving around social media trends have a peculiar effect: any response, be it from a human or an AI, tends to be perceived as convincingly human. **Making hard requests.** People also tried to identify bots by posing questions or making requests that AI bots are known to struggle with, or tend to avoid answering. For instance, participants might ask their chat partner for guidance on performing illegal activities or request that they use offensive language. These types of commands were intended to take advantage of the instruction-based nature of some AI models, which were trained to follow instructions. The rationale behind this strategy was that human participants could easily recognize and dismiss such commands, while AI bots, in contrast, might either respond evasively or have difficulty resisting the urge to comply (see example in figure 4(b)). From our analysis of 32k conversations where participants asked their counterparts to do things that an AI will typically refuse to do, we found that users who posed such questions were slightly more likely than average to correctly identify their human counterparts (74.8%). This result may be attributed to their counterparts either responding as instructed or refusing convincingly. Moreover, these users were also very likely to correctly identify their bot counterparts, achieving a 64.9% success rate, which likely reflects the fact that AI language models were indeed trained to refuse such instructions. Interestingly, when a user posed a request that an AI would typically refuse to perform, their counterparts managed to recognize them as human in a staggering 85.1% of the cases.
**Multilinguality.** Using non-English languages emerged as another common strategy, leveraging the fact that many modern language models are trained on multilingual data. Even if the models were prompted to portray a character that only understands English, participants found that when they switched languages, the bots could understand their messages and respond correctly, while still claiming to have no knowledge of languages other than English (see example in figure 3(f)).

**Sub-word tricks.** Lastly, linguistic tricks were employed as a common strategy, such as asking the AI to spell a word backwards or identify individual letters in a word. Most language models typically struggle with these tasks due to their token-based nature, making this an effective way to distinguish between human and AI responses (see example in figure 3(c)).

Figure 6: Examples of AI answers to questions about recent events.

### Signaling Humanity

In order to reassure the other side of their own humanity, participants employed strategies similar to those used to identify AI. Many of these strategies are related to showcasing characteristics traditionally perceived as distinctly human. This included answering personal or emotional questions, discussing pop culture and current events, and making extensive use of slang and languages other than English. A more distinctive approach involved using foul language, expressing controversial views, and posing messages that AI bots tend to avoid answering. These tactics seemed designed to exploit the stringent guidelines that govern AI language models, which are programmed to refrain from using offensive language and expressing politically incorrect views. As such, players seemed to consider the ability to deviate from these norms as a uniquely human trait. Indeed, as mentioned in section 3.1, the use of rude language revealed a player's humanity 86.7% of the time, and making requests typically shunned by AI led to players being correctly recognized as humans 85.1% of the time. Perhaps one of the most intriguing strategies was the referencing of the game itself. This tactic is a clever exploitation of a typical limitation of AI language models: they are usually accessed via a text-only interface, and thus wouldn't be expected to possess awareness of on-screen elements such as the game's background color or the time left on the timer. These meta-references to the game environment were seen as an undeniable proof of humanity, as they conveyed a shared understanding of the game's context and setup that an AI would presumably lack. Thus, by exploiting AI characteristics, players skillfully signaled their own humanity, complementing their efforts to identify AI counterparts.

### Imitating AI

An intriguing aspect of the _"Human or Not?"_ game was the opportunity for participants to engage in a reverse Turing Test, where instead of determining whether a conversation partner is an AI, they set themselves the objective of convincingly imitating an AI. This twist on the classic test, though not explicitly incentivized by our setup, created a dual nature for our game and offered a unique glimpse into how players perceived and emulated AI. Many participants took this role-play to heart, diving deep into AI language model characteristics. For instance, they often prefaced their messages with phrases associated with AI language models such as "As an AI language model...".
Moreover, their messages also displayed other language patterns that are characteristic of AI-generated text, such as rigorous use of spelling and punctuation, deliberate ignorance towards cultural nuances and slang, and a refusal to share personal information or express individual viewpoints. Interestingly, variants of the phrase "As an AI language model" were among the most common phrases observed in human messages, indicating the popularity of this strategy. Further analysis of 100k randomly sampled conversations revealed intriguing patterns tied to this strategy. Players who used AI-like expressions were more likely to presume their counterparts were human, correctly identifying a human 80.5% of the time, while their success rate dropped to 54.6% for bot identification. Interestingly, 75.3% of their counterparts identified them as human, suggesting the imitation was far from perfect. When AI bots used similar phrases, they were correctly identified as bots with a 67.9% probability. This observation implies that while humans tried to imitate bots, the imitation often fell short, failing to obscure their human traits effectively. Interestingly, when we examined the strategies of users that were mistaken for bot the most, we found out that a common strategy they used was reusing responses from their previous interactions with bots, mirroring the conversation style of AI, and making it harder to identify them as humans acting as bots. Rigorous use of spelling and punctuation, deliberate ignorance towards cultural nuances and slang, and a refusal to share personal information or express individual viewpoints all added to their AI-like facade. This strategy added an additional layer of complexity to the game, as it led to humans attempting to act like bots and bots acting like humans, further complicating the identification process. ### Human-like AI Behavior Despite participants' strategies and the broad understanding of AI, several aspects of AI behavior appeared to confuse them. Unexpected language elements, such as spelling mistakes, lack of capitalization, and the use of slang, which are traits generally associated with human language, led some participants to incorrectly identify AI as human. The same was true for displays of rudeness, sarcasm, or the use of foul language. Additionally, when the AI's message made no sense and appeared "too dumb", participants often mistakenly believed they were interacting with a human trying to impersonate a bot. Finally, references to the game's context and seeming awareness of current events also bewildered many participants, since they had not expected these from AI models given their training data cutoff. Overall, players utilized an array of strategies, each underpinned by assumptions about the limitations of AI. Yet, the AI models often defied these assumptions, mimicking human behavior in ways that frequently fooled the participants. This underlines the sophistication of current AI language models and highlights the challenges involved in discerning AI-generated conversation from human interaction. ## 4 Discussion and Limitations While the findings from this analysis provide valuable insights into human-AI interactions, they should be viewed within the specific context of _"Human or Not?"_, which has its own inherent limitations. Firstly, the game's context can amplify the participants' suspicion and scrutiny. Therefore, the strategies identified may not necessarily reflect those employed in daily, less antagonistic interactions with AI. 
Secondly, participants were aware that they were interacting with AI at least half the time, which could have influenced their behavior and strategies. This awareness might not be present in regular interactions, resulting in different approaches. Next, the time-limited nature of the game limited the depth of the conversations, and forced participants to make quicker judgments than they would in more relaxed, non-game interactions. Furthermore, the AIs in the game were designed in specific ways for the purpose of this experiment. These specific features have their own biases, and might not be applicable to other AI settings, thus affecting the generalizability of our findings. In terms of demographic diversity, our analysis is biased towards English-speaking, internet-accessible participants interested in such games. Hence, the findings might not account for potential cultural, linguistic, and age-based variations. The analysis also has a certain degree of subjectivity, as the categorization of strategies and behaviors largely relies on manual annotation and interpretation. While we strived to maintain objectivity and consistency throughout the process, some bias is inevitable. Despite these limitations, the experiment provides a valuable foundation for future research into human-AI interaction. It provides a novel way to observe the evolving AI capabilities and human strategies to identify AI, contributing to our understanding of this intricate dynamic. While our findings may not be fully applicable across all contexts, they underscore the nuances and complexities in human-AI interactions, presenting a compelling case for further research in this field. These insights can inform future AI design, training, and deployment, aiming to foster more effective, ethical, and intuitive human-AI coexistence. ## 5 Conclusion and Future Directions _"Human or Not?"_ represents a significant milestone in evaluating AI's capabilities. It serves as a compelling case study for future research on human-like AI and Turing-like tests. As AI continues to advance, its potential to revolutionize various industries, from customer service to mental health, becomes more apparent. However, as we inch closer to more human-like AI, ethical considerations come to the fore. How do we handle AI that convincingly mimics human behavior? What responsibility do we bear for its actions? Future studies will need to grapple with these questions, and experiments like this one will remain essential in assessing AI capabilities and understanding its impact on society. In conclusion, _"Human or Not?"_ stands as an engaging, large-scale social experiment that offers valuable insights into AI's progress in mimicking human conversation. The rich data offers valuable insights for the ongoing development of AI models, with implications for areas as diverse as AI ethics, user interface design, and our understanding of what it means to be a human.
2309.10560
Bridging the Spoof Gap: A Unified Parallel Aggregation Network for Voice Presentation Attacks
Automatic Speaker Verification (ASV) systems are increasingly used in voice bio-metrics for user authentication but are susceptible to logical and physical spoofing attacks, posing security risks. Existing research mainly tackles logical or physical attacks separately, leading to a gap in unified spoofing detection. Moreover, when existing systems attempt to handle both types of attacks, they often exhibit significant disparities in the Equal Error Rate (EER). To bridge this gap, we present a Parallel Stacked Aggregation Network that processes raw audio. Our approach employs a split-transform-aggregation technique, dividing utterances into convolved representations, applying transformations, and aggregating the results to identify logical (LA) and physical (PA) spoofing attacks. Evaluation of the ASVspoof-2019 and VSDC datasets shows the effectiveness of the proposed system. It outperforms state-of-the-art solutions, displaying reduced EER disparities and superior performance in detecting spoofing attacks. This highlights the proposed method's generalizability and superiority. In a world increasingly reliant on voice-based security, our unified spoofing detection system provides a robust defense against a spectrum of voice spoofing attacks, safeguarding ASVs and user data effectively.
Awais Khan, Khalid Mahmood Malik
2023-09-19T12:12:59Z
http://arxiv.org/abs/2309.10560v1
# Bridging the Spoof Gap: A Unified Parallel Aggregation Network for Voice Presentation Attacks ###### Abstract Automatic Speaker Verification (ASV) systems are increasingly used in voice biometrics for user authentication but are susceptible to logical and physical spoofing attacks, posing security risks. Existing research mainly tackles logical or physical attacks separately, leading to a gap in unified spoofing detection. Moreover, when existing systems attempt to handle both types of attacks, they often exhibit significant disparities in the Equal Error Rate (EER). To bridge this gap, we present a Parallel Stacked Aggregation Network that processes raw audio. Our approach employs a split-transform-aggregation technique, dividing utterances into convolved representations, applying transformations, and aggregating the results to identify logical (LA) and physical (PA) spoofing attacks. Evaluation of the ASVSpoof-2019 and VSDC datasets shows the effectiveness of the proposed system. It outperforms state-of-the-art solutions, displaying reduced EER disparities and superior performance in detecting spoofing attacks. This highlights the proposed method's generalizability and superiority. In a world increasingly reliant on voice-based security, our unified spoofing detection system provides a robust defense against a spectrum of voice spoofing attacks, safeguarding ASVs and user data effectively. Anti-spoofing, voice presentation attacks, Automatic Speaker Verification, Speech Synthesis, spoofing countermeasures ## I Introduction Biometrics, a vital progression from traditional password-based authentication, are gaining widespread adoption for user identification across various applications. In recent years, Automatic Speaker Verification (ASV), a type of voice biometrics, has gained prominence for its ability to authenticate users based on unique speech characteristics. The ASVs are also experiencing growing utilization in smart speakers (such as Google Home, Siri, Amazon Alexa) and various Internet of Things (IoT) devices, enabling voice-activated access to services and resources [1]. However, despite offering cost-effective authentication, ASV systems exhibit vulnerabilities to both physical and logical voice presentation attacks (VPAs), commonly known as voice spoofing attacks. These vulnerabilities present challenges to the widespread adoption of ASV technology. A convenient way to deter a VPA is through the use of anti-spoofing or presentation attack detection (PAD) systems, which perform acoustic characterization of genuine and spoofed speech signal [2]. Integrating an independent spoofing countermeasure, or PAD, with an ASV system has been demonstrated to enhance resilience against spoofing attacks [3]. Consequently, substantial research effort has been directed toward developing spoofing countermeasures, particularly targeting the four main VPA classes: impersonation, speech synthesis (SS), voice conversion (VC), and replay. While impersonation attacks, lacking standardized databases, have received comparatively less research attention, this study focuses on the remaining three presentation attack types. In the realm of voice spoofing, various strategies have been developed to combat voice spoofing attacks, as illustrated in Figure 1. Anti-spoofing research highlights significant differences between Speech Synthesis (SS), Voice Conversion (VC), and replay spoofing attacks [3, 4]. 
SS and VC attacks use modern AI algorithms, resulting in machine-generated distortions such as robotic voices and the absence of natural pauses. In contrast, replay attacks involve recorded variations of genuine speech, leading to microphonic disparities. Unlike VC and SS, replay attack indicators often lie in the high-frequency region of the recordings [5]. Due to these distinct artifact characteristics, existing countermeasures mostly target specific attack types, as shown in Figure 1 (a) and (b), with limited attention to unified spoofing detectors (Figure 1 (c)).

Fig. 1: The architectural framework of existing anti-spoofing systems: (a) a standard replay attack detection system that counters only physical attacks; (b) a standard anti-spoofing system for synthetic speech samples, trained and tested using Logical Access speech samples; (c) unified PAD systems, trained and tested in parallel with LA and PA data samples; (d) a real-world challenge for the PADs.

In practical scenarios, the nature of an attack on an ASV system is often unknown. Consequently, there is a need for unified spoof detection models capable of addressing all types of spoofing attacks and identifying unique artifacts for each specific attack. Unfortunately, there remains a significant gap in comprehensive research addressing state-of-the-art (SOTA) spoofing attacks. In the anti-spoofing literature, existing solutions can be classified into two categories: front-end features coupled with a back-end classifier, and DNN-based end-to-end solutions. Previous studies have demonstrated the effectiveness of manually crafted features combined with a back-end classifier in detecting Speech Synthesis (SS) and Voice Conversion (VC) attacks, as these distortions manifest across various speech frames [2, 3]. This success is attributed to the inclusion of specific sub-band extractions of acoustic cues within the front-end features. In contrast, distinguishing Replay (PA) attacks poses challenges for front-end features due to their utilization of a broader spectrum band that encompasses the entire utterance. Moreover, the presence of identical microphonic variations complicates the differentiation between replay attacks and legitimate speech [5]. Consequently, many feature-based systems have been specifically designed to target either SS/VC (LA) attacks or replay (PA) attacks exclusively. Following recent advancements, the emphasis of anti-spoofing research has shifted from "crafted features" to "end-to-end networks" [6, 7, 8]. However, there remains a significant variation in EERs across existing solutions, particularly when addressing the detection of logical access and physical access attacks with a single system [9]. For instance, in ASSERT [10], the EER is reported as \(0.59\%\) for PA but increases to \(6.70\%\) for LA. Similarly, in STC [11], the EER stands at 4.6% for PA and 7.86% for LA, while comparable performance variations are observed in the BUT-Omilia [12], MFMT [13], and SASV [14] solutions. These results show the bias of existing systems towards either LA or PA attacks and raise concerns about the practicality of deploying existing PADs in real-world scenarios. Furthermore, with the exception of the system in [8], most of the unified systems rely on front-end features or spectrogram-based representations of speech samples as input.
This dependence on resource-intensive computation raises concerns about the applicability of the system to resource-constrained edge devices. In parallel, this demonstrates the research gap when it comes to the direct use of speech signals to distinguish spoof from real utterances. In particular, this paper attempt to answer the following research questions: * Are the unified detectors equally good at detecting both logical and physical attacks? Does the proposed aggregated network show better cumulative EERs for LA and PA compared to state-of-the-art unified solutions using raw audio signal? * How do existing residual networks perform in terms of EER, and is there any need to use aggregated networks? * What is the trade-off between model widths and density in aggregated networks? Further, what density and width are optimal for a unified anti-spoofing system? * What input and DNN model is optimal for resource-constrained devices when integrating PADs into ASV systems? And what is the performance of the aggregated models compared to existing handcrafted features? To answer the above questions, we present a parallel stack aggregated (PSA) network, as illustrated in [15]. The PSA network leverages the split-transform-merge strategy from Inception networks for effective extraction of frame-level acoustic cues. Simultaneously, it employs the repeating residual layer architecture from VGG-Net and ResNets in a scalable manner to capture utterance-level speech representations. Our network collectively applies a series of transformations to a low-dimensional embedding, facilitating the extraction of both frame-level and utterance-level representations. These outputs are then aggregated to obtain finely detailed acoustic representations, subsequently processed in a dense layer architecture to discriminate between spoof and authentic speech. Moreover, rather than transforming the waveform from the time domain to the frequency domain and then developing classifiers, the proposed system learns all at once from the raw speech signal. To sum up, the main contributions of this work are as follows: 1. We introduce a unified SE-Parallel Stacked Aggregated Network designed to detect a range of speech presentation attacks using raw audio data, without being constrained by the computational complexities of spectrograms or manually crafted features. 2. We examine the efficacy of several residual and aggregated residual networks using squeeze and excitation (SE) networks. To the best of our knowledge, we are the first to use SE in concert with a parallel stacked aggregated network to use raw audio in order to combat LA and PA voice presentation attacks. 3. The presented system surpasses twelve individual and seven unified solutions, including the baseline models used in the ASVspoof2019 challenge. It notably mitigates the bias observed in state-of-the-art unified solutions towards a particular attack class. 4. We conduct a thorough ablation study using eight networks to combat advanced voice presentation attacks. Our system outperforms both comparative models and ASVspoof2019 baseline systems across standalone and unified testing across two datasets. The rest of the paper is organized as follows: Section 2 reviews prior work in the field, while Section 3 elaborates on the SE-PSA network development methodology. Section 4 outlines the experimental setup, and Section 5 examines the results, including comparisons with other systems, as well as the ablation study. Finally, Section 6 offers the conclusion. 
## II Existing Work Existing research on voice anti-spoofing solutions falls into two main categories: hand-crafted countermeasures with frame-level classifiers such as GMM and i-vectors [16], and DNN-based end-to-end solutions [17]. ### _standalone systems for anti-spoofing_ Most spoofing research primarily focuses on developing standalone anti-spoofing systems, which commonly utilize handcrafted features and backend classifiers. These techniques vary mainly in the types of features and classifiers employed. Various features, such as magnitude-spectrum-based [18, 19], phase-spectrum-based [20], and modulation-spectrum-based features [21], have been explored in combination with classifiers like Gaussian mixture models [16, 22, 23] and support vector machines [24]. While these approaches have advanced attack identification, they often involve prior task-specific speech manipulation and short-term spectral auditory processing. With the rise of neural networks, the research community has shifted its focus toward integrating front-end features with neural network architectures, including CNN [25], ResNet [17], and attention-based methods [26]. This has led to the development of numerous anti-spoofing systems that combine advanced DNNs with handcrafted features to capture more discriminative local descriptors [6, 7, 8]. Although these systems have significantly outperformed traditional machine-learning-based countermeasures, such as RW-Resnet [27], SENET [6] and Assist [7], none have been designed to address both LA and PA attacks within a single system. ### _Unified solutions and their limitations_ In [11], the author presents a unified anti-spoofing system based on numerous front-end features. The authors found that the system that used LFCC performed better overall than other features. However, failed to detect synthesized speech effectively, and its EER remained much higher when presented with replay attacks. Similarly, Li et al. [13] employed multi-task learning with multi-feature integration (MFCC, CQCC, and Filter Bank) to detect both LA and PA spoofing attacks. While this combination of cepstral features performed well for replay attack detection (EER of \(0.96\%\)), it struggled with LA attacks. In contrast to the cepstral features, a combination of ternary features, named sm-ALTP [14], was presented to detect voice spoofing. Even after suppressing the performance of cepstral features with aggregation-based ensemble, the ternary features struggled against LA attacks [14]. In another study, Zeinali et al. [12] presented an anti-spoofing system that merged two VGG networks trained on single- and two-feature sets. While it performed reasonably well with power spectrogram and CQT features, it encountered higher EERs in LA attack testing. In the ASSERT system [10], CQCC features and a squeeze-and-excitation-based residual network were used to identify VS, SS, and replay attacks. While the model excelled in replay detection with a low \(0.59\%\) EER, it struggled to identify SS and VC attacks, resulting in an increased EER of \(6.70\%\) during evaluation. Thus, both standalone and unified spoofing countermeasures exhibited significant EER disparities in the detection of SS/VC versus replay attacks. Additionally, each solution heavily relied on specifically extracted features or spectrograms. ## III Methodology This section provides an overview of the proposed SE-PSA anti-spoofing system as well as details on each phase. 
The proposed anti-spoofing system is divided into two stages: data preparation (composed of data pre-processing and augmentation, presented in Section IV) and the parallel stack aggregation network.

### _Parallel Stack Aggregation Network_

The Parallel Stack Aggregated (PSA) network follows a topology identical to ResNeXt's intra-architecture [15]. The design in [15] has been shown to reduce the risk of hyper-parameter over-adaptation to a specific dataset, a property that existing ResNet models lack. While aggregated network architectures have proven effective in image classification, their application in audio classification or spoofing detection remains unexplored. This paper addresses the challenge of creating a unified solution for detecting various audio forgeries by integrating the Split-Transform-Merge (STM) strategy and squeeze-and-excitation approaches. Additionally, it incorporates a stacking-based classifier capable of effectively discerning artifacts in both genuine and spoofed speech samples. Notably, the STM approach achieves these objectives with reduced computational complexity, aligning with our goal of providing a lightweight solution. This approach works directly on raw waveforms, eliminating the need for additional spectrogram generation or handcrafted feature extraction.

Fig. 2: The internal architectural framework for addressing gradient vanishing via spatial dropout. (a) A standard DNN with processing and activation of all neurons, without any selection or drop. (b) A standard neural network with spatial dropout, which causes the selection of required neurons with more crucial embeddings [28].

The PSA network comprises two main components: the first component involves passing the pre-processed speech signal through convolution blocks to extract convolved embeddings. These embeddings are then forwarded to the second component, the SE-PSA blocks, which utilize them to extract the fine-grained features necessary for classifying spoofed and authentic speech samples. Specifically, the SE-PSA blocks employ a structured VGG/ResNet architecture, combining the repeating strategy of ResNet with the split-transform-merge strategy of the Inception network. Each network block divides the input, transforms it as required, and aggregates the results to generate the output. All blocks within the network follow this parallel topology. Figure 2 provides a visual representation of both the overall and internal architecture of the PSA network. The convolution block consists of three convolution layers for extracting convolved embeddings, while the five SE-PSA blocks comprise four cardinal paths with group convolutions. The final block serves the purpose of classifying speech samples as either genuine or spoofed. In the initial stage, the input audio signal, denoted as \(F[n]\) and consisting of \(N\) samples corresponding to frames \(n=1,2,3,\ldots,k\) containing spectral and temporal details, is fed into the first convolutional block of the network. This convolution block comprises three layers with \(c1,c2\), and \(c3\) filters, \(k1,k2\), and \(k3\) kernel sizes, strides, and same padding, along with a softmax activation function. It has been observed that pre-activation convolution yields better results in voice spoofing detection than post-activation convolution. Therefore, to generate a deep feature map of the speech signal, we utilize a pre-activation convolution block that consists of batch normalization and activation, followed by a convolution layer.
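To make the pre-activation stem concrete, the sketch below shows one way such a block could be written in Keras, the framework stated in Section IV. The filter counts (64, 128, 256) and kernel sizes (196, 144, 100) follow the hyper-parameters reported later; the activation choice, strides, and pooling size are illustrative assumptions rather than the authors' released code.

```python
# Hypothetical sketch of the pre-activation convolution stem described above.
# Filter counts and kernel sizes follow Section IV; everything else (activation,
# strides, pooling size) is an assumption for illustration only.
import tensorflow as tf
from tensorflow.keras import layers

def pre_act_conv_block(x, filters, kernel_size):
    """Pre-activation ordering: batch norm -> activation -> convolution."""
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv1D(filters, kernel_size, padding="same")(x)
    return x

def conv_stem(input_length=64000):
    """Three stacked pre-activation conv layers followed by max pooling."""
    inputs = layers.Input(shape=(input_length, 1))   # raw 4-second, 16 kHz waveform
    x = pre_act_conv_block(inputs, 64, 196)
    x = pre_act_conv_block(x, 128, 144)
    x = pre_act_conv_block(x, 256, 100)
    x = layers.MaxPooling1D(pool_size=4)(x)          # yields the convolved embedding
    return tf.keras.Model(inputs, x, name="conv_stem")
```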
Once the deep feature map is obtained, max pooling is applied to extract the enhanced embedding \(E_{c}^{st}=e_{1},e_{2},e_{3},\ldots,e_{n}\) representing the speech signal. Detailed specifications regarding convolution size, strides, and filter usage for extracting discriminative embedding representations are provided in Table I.

In the second fold, the acquired embedding \(E_{c}^{st}\) is passed to the SE-PSA block, responsible for extracting the fine-grained representations denoted as \(F_{r}^{fg}=f_{1}^{g},f_{2}^{g},f_{3}^{g},\ldots,f_{n}^{g}\) used for the classification of genuine and spoofed speech samples. The core architecture of the PSA block adheres to the same two principles governing the ResNeXt architecture:
* When creating spatial maps of identical size, blocks share identical hyper-parameters (width and filter sizes).
* Each time the spatial map is down-sampled by a factor of \(2\), the width of the blocks is increased by a factor of \(2\).

These principles significantly streamline the design and enable us to focus on a few critical factors. In the intra-architecture illustrated in Fig. 3, the PSA network combines the high-level \(Hf^{st}\) and low-level \(Lf^{st}\) feature representations extracted from homogeneous neural paths. Subsequently, the high- and low-level feature representations obtained from this step are passed on to the next SE-PSA block. This process is iterated \(M\) times to derive an adaptive feature representation for both spoofed and legitimate speech samples. This adaptive feature representation is achieved through the introduction of "cardinality", denoted as \(C\), which adds an additional dimension to residual networks, making them wider rather than deeper. The value of the cardinality \(C\) determines the size of the transformation set \(T=\{t_{1},t_{2},\ldots,t_{n}\}\). The same transformations \(T\) are applied \(M\) times, and the cumulative gain is aggregated as shown in the equations below. \[S_{R}=\sum_{i=1}^{D}\omega_{i}n_{i} \tag{1}\] where \(n=\{n_{1},n_{2},\ldots,n_{D}\}\) is the \(D\)-channel input vector to the neuron and \(\omega_{i}\) is the filter weight for the \(i\)-th channel; \(S_{R}\) denotes the inner product computed by the neuron. \[E_{c}^{st}=\sum_{i=1}^{C}\tau_{i}(n_{i}) \tag{2}\] \[F_{r}^{fg}=E_{c}^{st}+\sum_{i=1}^{C}\tau_{i}(n_{i}) \tag{3}\] where \(E_{c}^{st}\) represents the aggregated transformation and \(\tau_{i}(n_{i})\) can be an arbitrary function. Analogous to \(S_{R}\), each \(\tau_{i}(n_{i})\) projects the embedding into a low-dimensional space and transforms it, and the outputs are aggregated into \(F_{r}^{fg}\), the fine-grained representation used to classify the speech samples. Lastly, we employ global max pooling, flatten the extracted representations \(F_{r}^{fg}\), and add fully connected dense layers, followed by dropout, to classify the real and spoofed speech samples. For the classification, we use the sigmoid activation function to extract the score of an utterance being forged or bona fide, as shown below: \[\mathbb{S}_{cr}=\frac{1}{1+e^{-x}} \tag{4}\] \[\mathbb{P}_{pred}=\mathbb{S}_{cr}>0.5 \tag{5}\] where \(\mathbb{S}_{cr}\) denotes the score of the utterance and \(\mathbb{P}_{pred}\) refers to the prediction of the model as spoofed or bona fide speech. Hyper-parameters, including width and filter sizes, are shared within the SE-PSA block. Group convolutions are utilized as an aggregation strategy.
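As a rough illustration of the split-transform-merge aggregation in Eqs. (2) and (3), the following is a minimal sketch of a ResNeXt-style 1D block in Keras in which the cardinality is realized through grouped convolutions, combined with squeeze-and-excitation and a skip connection. It reflects our reading of the description above rather than the authors' implementation; the channel sizes, dropout rate, and SE reduction factor are assumptions, and the cardinality/width values (4 and 64) follow the best setting reported later in the results.

```python
# Hypothetical SE-PSA-style block: split-transform-merge via grouped Conv1D,
# squeeze-and-excitation recalibration, and a residual (skip) connection.
# All sizes are illustrative assumptions, not the authors' released configuration.
import tensorflow as tf
from tensorflow.keras import layers

def se_block(x, reduction=16):
    """Squeeze-and-excitation: global pooling, bottleneck MLP, channel re-weighting."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)
    s = layers.Dense(channels // reduction, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)
    s = layers.Reshape((1, channels))(s)
    return layers.Multiply()([x, s])

def se_psa_block(x, cardinality=4, width=64, kernel_size=3):
    """Split-transform-merge: the grouped convolution realizes C parallel paths."""
    shortcut = x
    path_channels = cardinality * width
    y = layers.BatchNormalization()(x)
    y = layers.Activation("relu")(y)
    y = layers.Conv1D(path_channels, 1, padding="same")(y)          # split / project
    y = layers.Conv1D(path_channels, kernel_size, padding="same",
                      groups=cardinality)(y)                        # transform per path
    y = layers.Conv1D(shortcut.shape[-1], 1, padding="same")(y)     # merge / aggregate
    y = layers.SpatialDropout1D(0.2)(y)                             # dropout before aggregation
    y = se_block(y)                                                 # channel recalibration
    return layers.Add()([shortcut, y])                              # skip connection, cf. Eq. (3)
```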
To address network overfitting, spatial dropout is applied before the aggregation node. Spatial dropout involves selecting neurons with more significant embeddings and dropping those with less representative features, as illustrated in Fig. 4. Furthermore, recognizing the significant differences between LA and PA speech samples, we introduce Squeeze and Excitation (SE) in the PSA block. This addition effectively extracts transformed feature maps and aids back-propagation within the network. Fig. 5 illustrates the layered architecture of the SE connections. Multi-cardinality transforms accentuate the parameters critical for distinguishing between genuine and spoofed speech, while reducing the significance of highly correlated parameters. This, combined with the residual block featuring split-transform and merge techniques, enables effective differentiation between authentic and spoofed speech.

Fig. 3: Intra-architecture of the SE-PSA blocks with 4 cardinalities and pre-activation convolutions. The same intra-architecture is repeated 5 times, once for each block of the proposed PSA network.

Fig. 4: The internal architectural framework for addressing the vanishing gradient via spatial dropout. (a) A standard DNN, with processing and activation of all neurons, without any selection or drop. (b) A standard neural network with spatial dropout, which results in the selection of required neurons with more relevant embeddings [28].

Fig. 5: Aggregated feature map extraction with the Squeeze and Excitation block. The SE block includes the spatial dropout applied before every global average layer of each SE-PSA block.

#### Iii-B1 Addressing gradient vanishing

Skip connections are utilized to mitigate the vanishing gradient issue by maintaining the error gradient during back-propagation: the error gradient is multiplied by one when back-propagated through the skip connection. This allows for the training of deeper networks, enhancing the ability of our PAD system to discern authentic from spoofed speech. In a network without skip connections, the gradient is computed as follows: \[y=\frac{\partial J}{\partial x} \tag{6}\] where \(y\) is obtained using the chain rule over the full sequence of operations. Without the skip connection, the full chain of operations expands as shown below: \[\frac{\partial J}{\partial x_{0}}=\frac{\partial J}{\partial x_{2}}\frac{ \partial x_{2}}{\partial z_{2}}\frac{\partial z_{2}}{\partial x_{1}}\frac{ \partial x_{1}}{\partial z_{1}}\frac{\partial z_{1}}{\partial x_{0}} \tag{7}\] This chain of multiplications renders neural networks prone to vanishing and exploding gradients. If we substitute \(F(x)\) for the intermediate computations, the gradient calculation becomes: \[\frac{\partial J}{\partial x}=\frac{\partial J}{\partial F(x)}\frac{\partial F (x)}{\partial x} \tag{8}\] Next, we introduce the function \(H(x)=F(x)+x\) for the added skip connection. In particular, we must now differentiate through \(H(x)\) to get the gradient of the cost function in a network with skip connections, as shown below: \[\frac{\partial J}{\partial x}=\frac{\partial J}{\partial H(x)}\frac{\partial H(x)}{\partial x} \tag{9}\] noting that the derivative of the identity term \(x\) with respect to \(x\) is equal to 1.
Thus, substituting \(F(x)+x\) for \(H(x)\) yields the expression: \[\frac{\partial J}{\partial x}=\frac{\partial J}{\partial H(x)}(\frac{\partial F (x)}{\partial x}+1)=\frac{\partial J}{\partial H(x)}\frac{\partial F(x)}{ \partial x}+\frac{\partial J}{\partial H(x)} \tag{10}\] In this scenario, the gradient of \(F(x)\) becomes extremely small as a result of multiple matrix multiplications during back-propagation through all the layers of \(x\). However, we still retain the direct gradient of the cost function concerning \(H(x)\). This approach allows the network to bypass certain gradient computations during back-propagation, preventing gradient vanishing or exploding. In the next section, we outline the experimental setup for conducting experiments and comparative analyses of the proposed system. ## IV Experimental setup In this section, we illustrate the experimental configuration used to produce the reported results. The proposed anti-spoofing system was validated against TTS, VC, replay and chained replay spoofing samples. Following are the metrics, datasets, and hyper-parameters used for all of the testing and results presented below. ### _Dataset_ Since 2015 the ASVspoof challenge has been providing datasets and standards in order to promote the development of spoofing countermeasures. Among these datasets, the ASVspoof 2019 database has become the de facto standard for the investigation and evaluation of voice spoofing countermeasures. Consequently, we evaluated the performance of the proposed anti-spoofing system using the ASVspoof2019 [29]. To contrast the performance of the proposed system against single- and multi-order replay attacks we also employed the voice spoofing detection corpus (VSDC) developed in [30]. The ASVspoof2019 dataset [29] comprises audio samples recorded at a 16 kHz sample rate with 16-bit compression. This dataset is divided into two categories: Logical Access (LA) and Physical Access (PA), each further divided into training, development, and evaluation subsets. Both the training and development sets contain speech samples from 20 distinct speakers, with spoofed speech samples generated using algorithms A01 to A06. In contrast, the evaluation set includes bonafide speech samples from 67 speakers and spoofed samples generated using 19 algorithms, including GANs and DNNs. For dataset details, please refer to Table II, and further configuration specifics can be found in [29]. The system's performance against replay attacks was evaluated using VSDC [30], which includes first-order and second-order replay spoof samples alongside genuine speech. The dataset introduces variations in environments, configurations, genres, recording and replay devices, and output devices (speakers). In contrast to ASVspoof2019, VSDC incorporates noise and microphonic differences in speech samples and employs multiple playback devices to minimize bias. VSDC comprises 19 voices (10 male and 9 female), with each audio sample lasting 6 seconds. Table III provides the details of the VSDC dataset, and [30] contains information about the playback devices and development architecture used in its construction. ### _Data Preprocessing_ Raw audio, characterized by discrete high- and low-frequency values and amplitude variation, requires preprocessing before input into the PSA network due to the significant data point differences. This preprocessing includes normalization and frame length adjustment. 
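As a rough illustration of these two preprocessing steps, the following is a minimal sketch assuming 16 kHz mono waveforms and the fixed 4-second window described next; the function names and the small epsilon guard are illustrative, and this is not the authors' preprocessing code.

```python
# Hypothetical preprocessing sketch: fix every utterance to 4 s at 16 kHz
# (64000 samples) by trimming or zero-padding, then apply z-score normalization.
import numpy as np

TARGET_LEN = 4 * 16000  # 4-second window at a 16 kHz sampling rate

def fix_length(waveform: np.ndarray, target_len: int = TARGET_LEN) -> np.ndarray:
    """Trim long utterances and zero-pad short ones to a fixed length."""
    waveform = waveform[:target_len]
    if waveform.shape[0] < target_len:
        waveform = np.pad(waveform, (0, target_len - waveform.shape[0]))
    return waveform

def zscore(waveform: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Z-score normalization: subtract the mean, divide by the standard deviation."""
    mu, sigma = waveform.mean(), waveform.std()
    return (waveform - mu) / (sigma + eps)

def preprocess(waveform: np.ndarray) -> np.ndarray:
    return zscore(fix_length(waveform.astype(np.float32)))
```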
In cases like the ASVspoof2019 dataset, where audio files have variable-length segments and varying frame counts, the input voice sample length (\(L\)) is standardized to four seconds either by concatenation or trimming. The first four seconds (equivalent to \(1\times 64000\) samples) of the speech sample are retained, and sequence padding is applied to shorter speech samples. Subsequently, Z-score normalization is conducted on the raw waveform, constraining sample values to the range \([-1,1]\) using the mean and standard deviation values, as shown below: \[x=\frac{\sum_{i=1}^{j}(x-\mu)}{\sigma} \tag{11}\] \[\sigma=\sqrt{E[X^{2}]-(E[X])^{2}} \tag{12}\] where \(x\) denotes the speech samples, \(\mu\) and \(\sigma\) are the mean and standard deviation of the signal, and \(E[X^{2}]\) and \((E[X])^{2}\) are the mean of the squared data and the square of the mean of the data, respectively. After standardizing the data, we apply five types of augmentation to address data imbalances. The details of the data augmentation are explained in the section below.

### _Data augmentation_

Data augmentation (DA) is a widely used technique in image and speech recognition to increase training data, prevent overfitting, and improve performance in class-imbalanced problems [31]. To mitigate these challenges in our research, we employ five effective augmentation techniques: MP3 compression, high-pass filtering, low-pass filtering, silence trimming, and reverberation. MP3 compression is known to be effective in spoofing detection [32], and the other augmentations were selected based on their positive impact on model performance. High- and low-pass filtering, in particular, help extract sub-band information, which is crucial for detecting fine-grained features in state-of-the-art spoofing attacks. We also incorporate reverberation, aligning with the configuration settings used in the evaluation dataset preparation. The influence of each kind of augmentation on the spectral content is presented in Fig. 6. The spectra of the speech samples demonstrate that the model was trained with diverse frequency sub-bands, which assists in learning the diverse artifacts of the speech samples.

### _Evaluation Metrics_

From the dataset details in Table II, it is clear that the ratio of real to spoofed trials is highly skewed. To address this, we employ alternative performance metrics, namely the Equal Error Rate (EER) and the Tandem Detection Cost Function (t-DCF), which are standard in ASVspoof challenges. In contrast to EER, t-DCF measures the impact of spoofing countermeasures (CMs) on the reliability of an ASV system. Wang et al. [33] demonstrate that the effectiveness of spoofing detection systems can vary significantly across random seeds. Similarly, after being trained with various random seeds, the EER of the baseline system in [8] fluctuates between \(1.19\%\) and \(2.06\%\). In response to these observations, the reported results in this study are an average of the best results obtained during experiments with three random seeds.

### _Experimental setup and hyper-parameters_

For all experiments, we utilize the Keras training platform in Python. Our anti-spoofing system employs the Adam optimizer with an initial learning rate of \(1e^{-4}\) and a weight decay of \(0.001\). The filter and kernel values for the convolution layers are set to \(64,128,256\) and \(196,144,100\) for \(c1,c2,c3\) and \(k1,k2,k3\), respectively.
We apply the cosine annealing warm restarts method [31] to adjust the learning rate, with linear growth for the first \(1000\) warm-up steps, followed by a decrease according to the inverse square root of the step number. Our model is trained for \(50\) epochs using the cross-entropy loss function. Further, Kaiming initialization [34] is employed for all convolution layers, and batch normalization layers are configured with weights at \(1\) and biases at \(0\). The final model for evaluation is chosen based on the lowest loss observed on the development set. We conducted all model training and testing on the Matilda High Performance Cluster at Oakland University. The HPC's GPU nodes, each equipped with four NVIDIA Tesla V100 16 GB GPUs, \(192\) GB of RAM, and \(48\) CPU cores running at \(2.10\) GHz, were utilized for these tasks.

Fig. 6: Spectrographic representation of each type of augmentation applied for the training of the network.

## V Results and discussion

In this section, we demonstrate the experimental and comparative results of our proposed anti-spoofing system. We optimized the hyper-parameters of our model as described above, and the results for the best set of hyper-parameters are provided below.

### _Trade-off between cardinalities and model width_

In split-aggregate-based networks with multiple pathways, the cardinality \(C\) and the bottleneck width \(d\) are considered vital parameters. In contrast to the Inception network, which has unique cardinal paths, the proposed system is made up of pathways of varied cardinalities that follow the same configurations. Prior research has shown that split-aggregate-based networks are more efficient with high cardinalities when evaluated against vision-based datasets like ImageNet and CIFAR. Therefore, as indicated in Table IV, we begin by examining the trade-off between cardinality and bottleneck width under conserved complexity. For this experiment, we used a balanced set of bonafide and spoofed speech samples from the LA and PA subsets of the ASVspoof2019 dataset. The results, reported in Table IV and Fig. 7, demonstrate that the \(4C\times 64d\) construction surpasses the other variants in terms of the area under the curve (AUC) for spoofing detection. Specifically, this model obtained an AUC of 0.93 and 0.97 for the LA and PA subsets, respectively. Further, the results indicate that the AUC of the system fluctuates as the value of \(C\) rises from 1 to 32. It is notable that when the value of \(C\) was set to 1, the proposed network became equivalent to ResNet, as explained in Section 2. For the values of \(C\) and \(d\), we chose values proven to be effective against large-scale datasets, i.e., ImageNet and CIFAR [15]. The results showed that the \(8C\times 32d\) and \(4C\times 24d\) structures produced the second-best, comparable results. Although these structures produced equivalent results for the LA spoofing samples, the AUC degraded when detecting PA spoofing. Table IV and Fig. 7 further demonstrate that when the bottleneck width is small, increasing cardinality at the expense of decreasing width starts to produce saturated AUCs. Thus, from this analysis, we conclude that raising the cardinalities and widths to higher values (as seen in [15]) is not worthwhile in the case of the ASVspoof2019 dataset. Optimal results are obtained when the cardinality ranges between 4 and 8 and the width is between 32 and 64. Consequently, the best-performing model has a cardinality of 4 and a model width of 64.
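Because the comparisons that follow are reported primarily in terms of EER, the sketch below shows one common way to estimate EER from per-utterance detection scores. It is a simplified stand-in for the official ASVspoof scoring scripts, assuming NumPy arrays of scores (higher meaning more bonafide-like) and binary labels.

```python
# Hypothetical EER estimation from per-utterance scores; not the official
# ASVspoof EER/t-DCF toolkit, only a reference sketch.
import numpy as np

def compute_eer(scores: np.ndarray, labels: np.ndarray) -> float:
    """labels: 1 for bonafide, 0 for spoof. Returns the equal error rate."""
    order = np.argsort(scores)                    # ascending score order
    labels = np.asarray(labels)[order]
    n_bona = labels.sum()
    n_spoof = len(labels) - n_bona
    # Sweep the decision threshold between consecutive sorted scores.
    frr = np.cumsum(labels) / n_bona              # bonafide rejected below threshold
    far = 1.0 - np.cumsum(1 - labels) / n_spoof   # spoof accepted above threshold
    idx = np.argmin(np.abs(far - frr))            # point where the two rates cross
    return float((far[idx] + frr[idx]) / 2.0)
```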
In the next subsection, we present a performance analysis of the proposed system against familiar and unfamiliar attacks, together with the comparative performance.

### _Performance analysis of the proposed anti-spoofing system_

#### V-B1 Performance analysis against familiar spoofing attacks

In this experiment, we test the proposed method on the ASVspoof2019 and VSDC datasets. We use the training subsets of both datasets for training, the development subsets for validation, and the evaluation subsets to test the effectiveness of the system. The results, in Table V, demonstrate that the proposed system performs optimally, with an EER of 3.04% and a minimum t-DCF of 0.087 for the LA dataset, and an EER of 1.26 and a minimum t-DCF of 0.038 when tested against PA speech samples. In the case of the VSDC dataset, the proposed system obtains an EER of 0.32 for first-order replays and an EER of 0.87 when the speech sample contains the artifacts of multi-order replays. The results show that the proposed system performs better when tested against replay and chained-replay spoofing attacks. This demonstrates the system's effectiveness against device artifacts in playback voice samples. Although the proposed system performs only marginally better on PA attacks, it still surpasses the SOTA unified solutions.

#### V-B2 Performance analysis against unfamiliar spoofing attacks

In this sub-experiment, we test the effectiveness of the proposed system in the absence of background knowledge about the spoofing attack. To the best of our knowledge, no unified model has been tested with integrated spoofing classes; SOTA systems report performance based on known clone and replay spoofing attacks. Instead of training and testing the system separately, we combine the LA and PA spoofing classes to create an integrated spoofing class. The model is trained to distinguish between bonafide, clone, and replay samples simultaneously within this integrated class. During testing, the proposed model confronts both LA and PA attacks together, resulting in an EER of \(5.35\%\) and a t-DCF of \(0.237\). While the EER and minimum t-DCF are slightly higher compared to testing against known sample types (as mentioned earlier), this showcases the proposed model's applicability to real-world spoofing challenges. Furthermore, these results highlight the limitations of previous research, where separate training and testing fail to accurately assess the system's performance against various attack types. Specifically, when training and evaluation are restricted to either LA- or PA-based attacks, the performance of the model degrades significantly when evaluated against multiple spoofing attacks.

Fig. 7: The model's performance weighed against the trade-off between cardinality and model width. Before presenting, the model width is transformed to 1E3 ranges. The graph demonstrates that the AUC increases as the model width decreases; the model's performance is enhanced by lowering its complexity through a narrower model.

_Performance analysis of the proposed and comparative methods against Logical Access (LA) spoofing attacks_

In this experiment, we examine the performance of the proposed model on synthetic and converted voice spoofing samples. The model is trained using the training subset of the ASVspoof-LA dataset, along with five types of augmented samples (as described in Section IV). The proposed system is compared with twelve comparative methods, and the results are presented in Table VI.
The results demonstrate that, when trained using augmented samples, the proposed system obtains an EER of 3.04% and a t-DCF of 0.087, whereas without augmented samples the model achieves 4.06% and 0.099, respectively. These results indicate that the proposed system outperformed eleven of the twelve SOTA comparative countermeasures, with the lowest EER and t-DCF. More specifically, the proposed system performed second best on the ASVspoof2019 LA dataset, both with and without augmented samples. The EER and minimum t-DCF of the comparative methods are reported in Table VI and Fig. 10. Despite having a slightly lower EER and minimum t-DCF than the proposed system, [39] is particularly optimized to identify LA-based attacks and has never been evaluated against replay attacks. In contrast, the proposed system obtained a lower EER and minimum t-DCF even when trained without any type of augmentation. This indicates the proposed system's superiority as a robust countermeasure to voice cloning and conversion attacks. In the next section, we compare the performance of the proposed system against SOTA replay attack detection systems.

_Performance analysis of the proposed and comparative methods against Physical Access (PA) spoofing attacks_

In this experiment, we test the proposed PSA system's resilience against replay spoofing attacks. The proposed system is trained using the training subset of the ASVspoof-PA dataset, validated using the development subset, and tested using the evaluation subset of the dataset. The results, shown in Table VII, indicate that the proposed system effectively discriminates between spoofed and bonafide artifacts in replayed voice samples. When trained using augmented samples, the proposed system achieves an optimal EER of 1.26% and a minimum t-DCF of 0.038. In comparison, the proposed system attains an EER of 2.13% and a t-DCF of 0.064 without augmentation. These results show that the proposed system achieved the second-lowest EER, after ASSERT [37]. Although the EER of the ASSERT solution is slightly better on PA spoofing attacks, the proposed model outperformed ASSERT in LA spoofing attacks. Further, the ASSERT model is based on handcrafted features and a 50-layer SENet architecture, whereas the proposed model has an 18-layer architecture and can extract the required deep features directly from raw audio. Except for ASSERT, the proposed model outperformed the other eight comparative models in replay speech detection.

Fig. 8: Comparative analysis of the proposed PSA and state-of-the-art comparative methods, where * denotes augmentation. [37]-i and ii denote the ASSERT SENET and SENET-Resnet variations, [38]-i and ii denote the FFT-CNN and LFCC-CNCC combinations, and [39]-i-iv denote logspec, SineNet, VGG, and SineNet with dropout, respectively.

Fig. 9: EER comparative analysis of SOTA systems against the replay attack from ASVspoof2019, where * denotes augmentation. (a)-i and ii denote the baseline ASVspoof2019 systems [29], (b)-i and ii are the ASSERT variations [10], (c)-i and ii denote STC [11], (d)-i-iv denote [12] logspec, SineNet, VGG, and SineNet with dropout, and (P) represents the proposed systems, respectively.

Fig. 10: EER comparative analysis of SOTA systems against ASVspoof2019 synthesized and converted speech samples, where * denotes augmentation. (a)-i and ii denote the ASVspoof2019 baseline systems [29], (b-k) denote [35], [36], [37], [38], [11], [39], [40], [41], [42], [39], and (P) represents the proposed systems, respectively.
The results in terms of EER and t-DCF of the other comparative methods are described in Table VII and Fig. 9. In the next subsection, we compare the proposed system's performance to SOTA unified methods designed to identify both LA and PA spoofing attacks.

### _Correlation and cumulative performance analysis of proposed and comparative unified solutions_

In this experiment, we evaluate the performance of the proposed system compared to seven state-of-the-art unified solutions designed for detecting both LA and PA spoofing attacks. The results, summarized in Table VIII, clearly demonstrate the superiority of the proposed method in terms of the EER and minimum t-DCF metrics. When we use data augmentation during training, the proposed system achieves the lowest EER and t-DCF values for synthetic and converted speech samples, at \(3.04\%\) and \(0.087\), respectively. Without augmentation, the proposed system still maintains robust performance, with EER and t-DCF values of \(4.06\%\) and \(0.099\), respectively, especially against voice cloning attacks. In the case of PA spoofing attacks, the proposed system again outperforms the competition, with an EER of \(1.26\%\) and a t-DCF of \(0.038\) when trained with augmentation, and EER and t-DCF values of \(2.13\%\) and \(0.064\), respectively, without augmentation. Additionally, we evaluated the cumulative EERs (combined EERs for PA and LA attacks) of the existing solutions, highlighting that the proposed system achieves an impressive EER of nearly \(4.30\%\), surpassing the SOTA unified methods; this is depicted in Fig. 11. To provide a comprehensive view of overall performance, we created an error bar graph for all unified solutions, once again showing the superiority of the proposed solution in detecting both LA and PA attacks, as seen in Fig. 12. Comparisons with other unified approaches reveal that many of them exhibit a significant EER disparity of over \(4\%\) between LA and PA spoof sample detection, except for STC [11], which has a \(2\%\) EER deviation. The proposed system, on the other hand, effectively reduces the EER disparities between LA and PA attacks, achieving the best detection results. While the EER of the proposed system is slightly higher than that of SASV [14], ASSERT [10], and MFMT [13], it significantly outperforms these systems in LA attack detection. It is worth noting that none of these systems were designed as end-to-end solutions; they all rely on computationally expensive handcrafted feature extraction. This highlights the proposed system's superiority over the current state-of-the-art unified countermeasures against LA and PA attacks.

Fig. 11: Comparison of the cumulative EERs of SOTA unified anti-spoofing systems. The graph illustrates that, when compared to other approaches, the proposed method has the lowest cumulative EER, where * denotes augmentation. [37]-i and ii denote the ASSERT SENET and SENET-Resnet variations, [38]-i and ii denote the FFT-CNN and LFCC-CNCC combinations, and [39]-i-iv denote logspec, SineNet, VGG, and SineNet with dropout, respectively.

### _Performance comparison of PSA with handcrafted features_

In this experiment, we feed traditional handcrafted features into the proposed SE-PSA network to assess the effectiveness of the system. These handcrafted features have proven to be successful in voice spoof detection; however, their efficacy within the aggregated network has yet to be investigated.
In order to demonstrate the efficiency of the PSA network against handcrafted features, we examined 5 handcrafted features: CQCC, LFCC, MFCC, GTCC, and LPCC. A 20-number filter is used for all features except for CQCC, where a 96-octave filter is used. We use mean aggregation to transform the 2D feature specifications into 1D before feeding them into the network. We examine the performance of the PSA network with a balanced subset of the ASVspoof2019-LA dataset, and the results are shown in Table IX. These results indicate that the aggregated network incorporates the higher-order distinctions of the feature map fed into it. The local and global transformations of the feature map need a larger input to be performed optimally. As a result, the raw wave input with the appropriate number of frame lengths performs better than the other handcrafted features. The results show that the LFCC and GTCC features perform well, with an EER of 10.09 and 7.66, respectively; however, this EER is higher than raw wave audio. In contrast, the MFCC features show the highest EER, 26.38, and a minimum t-DCF of 0.352. Thus, we can conclude that the designed aggregated network performs better with raw waveforms compared to handcrafted features. ### _Ablation Study_ Different channel cardinality combinations, dropouts, and layer topologies were investigated to avoid over-fitting and under-fitting during training. To achieve the intended goals, we evaluated the SE-PSA design by increasing the width and density of the network. In all, we evaluated the resnet architectures with 18, 34, 50, and 101 layers as well as the aggregated network with identical layered structures. All of the networks were trained for 20 epochs using the ASVspoof-LA datasets. The results showed that training the larger network required a significantly larger number of training epochs and more data, and the models became overfit after training with the speech samples available in the LA subset. However, the aggregated network with SE and skip connections, with 18 and 34 layers of architecture, respectively, outperformed state-of-the-art networks. In comparison to all other approaches, the aggregated network with 34 layers performs the second best with an EER of 8.54% while the lowest EER is obtained by the SE-aggregated network with spatial dropout. Surprisingly, when fed raw audio samples, the ResNet architecture failed to perform properly and obtained a higher EER. We evaluated networks with and without SE and skip connections in addition to varying network densities, as shown in Table X. The aggregated network consistently outperformed ResNet networks, with notable EER differences (detailed in Table X). While ResNets with SE connections showed improved performance compared to those without, the aggregated network consistently achieved better EER results. For instance, SE-Resnet variants achieved EERs of 6.87, 12.66, 23.54, and 30.33, while the aggregated networks achieved 5.50, 6.43, 29.65, and 29.54, respectively. To prevent overfitting due to numerous cardinalities and aggregation, we used various strategies, including spatial dropout after the aggregation layer, resulting in superior results with an EER of 4.06 when combined with SE and skip connections. ## VI Conclusion This paper introduces a unified spoofing detection system, using a Parallel Stack Aggregation (PSA) network to Fig. 12: Error bar graph representation of proposed and SOTA’s unified solutions, where * denotes augmentation. 
## VI Conclusion

This paper introduces a unified spoofing detection system that uses a Parallel Stack Aggregation (PSA) network to process raw audio directly. The method employs a Split-Transform-Merge (STM) strategy with multiple cardinal points to effectively learn logical and physical artifacts from speech samples. Experimental results on the ASVspoof 2019 and VSDC datasets demonstrate that the proposed anti-spoofing model significantly outperforms both baselines and state-of-the-art systems. Additionally, the proposed network reduces the EER disparity between logical and physical attacks, detecting both equally effectively. Future work aims to extend the system to include liveness detection and automatic speaker verification.

## VII Acknowledgement

This study is funded by NSF award number 1815724 and MTRAC ACT award number 292883. The opinions, results, conclusions, or recommendations in this material are solely those of the author(s) and do not necessarily represent the views of NSF or MTRAC ACT.
2309.16111
The relational complexity of linear groups acting on subspaces
The relational complexity of a subgroup $G$ of $\mathrm{Sym}(\Omega)$ is a measure of the way in which the orbits of $G$ on $\Omega^k$ for various $k$ determine the original action of $G$. Very few precise values of relational complexity are known. This paper determines the exact relational complexity of all groups lying between $\mathrm{PSL}_{n}(\mathbb{F})$ and $\mathrm{PGL}_{n}(\mathbb{F})$, for an arbitrary field $\mathbb{F}$, acting on the set of $1$-dimensional subspaces of $\mathbb{F}^n$. We also bound the relational complexity of all groups lying between $\mathrm{PSL}_{n}(q)$ and $\mathrm{P}\Gamma\mathrm{L}_{n}(q)$, and generalise these results to the action on $m$-spaces for $m \ge 1$.
Saul D. Freedman, Veronica Kelsey, Colva M. Roney-Dougal
2023-09-28T02:30:01Z
http://arxiv.org/abs/2309.16111v2
# The relational complexity of linear groups acting on subspaces

###### Abstract.

The relational complexity of a subgroup \(G\) of \(\operatorname{Sym}(\Omega)\) is a measure of the way in which the orbits of \(G\) on \(\Omega^{k}\) for various \(k\) determine the original action of \(G\). Very few precise values of relational complexity are known. This paper determines the exact relational complexity of all groups lying between \(\operatorname{PSL}_{n}(\mathbb{F})\) and \(\operatorname{PGL}_{n}(\mathbb{F})\), for an arbitrary field \(\mathbb{F}\), acting on the set of \(1\)-dimensional subspaces of \(\mathbb{F}^{n}\). We also bound the relational complexity of all groups lying between \(\operatorname{PSL}_{n}(q)\) and \(\operatorname{P\Gamma L}_{n}(q)\), and generalise these results to the action on \(m\)-spaces for \(m\geq 1\).

Key words and phrases: Relational complexity, linear groups, subspace actions

2020 Mathematics Subject Classification: 20B15, 20G40, 03C13

## 1. Introduction

The study of relational complexity began with work of Lachlan in model theory as a way of studying _homogeneous_ relational structures: those in which every isomorphism between induced substructures extends to an automorphism of the whole structure. For the original definition see, for example, [11]; an equivalent definition in terms of permutation groups was given by Cherlin [1], and, apart from a slight generalisation to group actions, is the one we now present.

Let \(\Omega\) be an arbitrary set and let \(H\) be a group acting on \(\Omega\). Fix \(k\in\mathbb{Z}_{>0}\), and let \(X:=(x_{1},\dots,x_{k}),Y:=(y_{1},\dots,y_{k})\in\Omega^{k}\). For \(r\leq k\), we say that \(X\) and \(Y\) are _\(r\)-equivalent_ under \(H\), denoted \(X\mathop{\sim}\limits_{H,r}Y\), if for every \(r\)-subset of indices \(\{i_{1},\dots,i_{r}\}\subseteq\{1,\dots,k\}\), there exists an \(h\in H\) such that \((x_{i_{1}}^{h},\dots,x_{i_{r}}^{h})=(y_{i_{1}},\dots,y_{i_{r}})\). If \(X\mathop{\sim}\limits_{H,k}Y\), i.e. if \(Y\in X^{H}\), then \(X\) and \(Y\) are _equivalent_ under \(H\). The _relational complexity_ of \(H\), denoted \(\operatorname{RC}(H,\Omega)\), or \(\operatorname{RC}(H)\) when \(\Omega\) is clear, is the smallest \(r\geq 1\) such that \(X\mathop{\sim}\limits_{H,r}Y\) implies \(Y\in X^{H}\), for all \(X,Y\in\Omega^{k}\) and all \(k\geq r\). Equivalently, \(\operatorname{RC}(H)\) is the smallest \(r\) such that \(r\)-equivalence of tuples implies equivalence of tuples. Note that \(\operatorname{RC}(H)\geq 2\) if \(H\neq 1\) and \(|\Omega|>1\), as \(X\) or \(Y\) may contain repeated entries.

Calculating the precise relational complexity of a group is often very difficult. A major obstacle is that if \(K<H\leq\operatorname{Sym}(\Omega)\), then there is no uniform relationship between \(\operatorname{RC}(K,\Omega)\) and \(\operatorname{RC}(H,\Omega)\). For example, if \(n\geq 4\), then the relational complexities of the regular action of \(C_{n}\) and the natural actions of \(\operatorname{A}_{n}\) and \(\operatorname{S}_{n}\) are \(2\), \(n-1\) and \(2\), respectively. In [1], Cherlin gave three families of finite primitive binary groups (groups with relational complexity two) and conjectured that this list was complete. In a dramatic recent breakthrough, this conjecture was proved by Gill, Liebeck and Spiga in [6]; this monograph also contains an extensive literature review. In [1, 2], Cherlin determined the exact relational complexity of \(\operatorname{S}_{n}\) and \(\operatorname{A}_{n}\) in their actions on \(k\)-subsets of \(\{1,\dots,n\}\).
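To make these definitions concrete, here is a small worked illustration; the realisation of the regular cyclic action and the particular tuples are chosen purely for exposition. Realise the regular action of \(C_{4}\) as \(\mathbb{Z}_{4}\) acting on itself by translation, and take \(X:=(0,1)\) and \(Y:=(0,2)\). Then \(X\mathop{\sim}\limits_{C_{4},1}Y\), since the translation by \(0\) maps \(0\) to \(0\) and the translation by \(1\) maps \(1\) to \(2\); however, the only translation fixing \(0\) is the identity, so \(Y\notin X^{C_{4}}\), and \(1\)-equivalence does not imply equivalence. On the other hand, in any regular action the element \(h\) with \(x_{1}^{h}=y_{1}\) is unique, so if \(X\mathop{\sim}\limits_{H,2}Y\) then applying \(2\)-equivalence to each index pair \(\{1,i\}\) shows that this single \(h\) maps \(X\) to \(Y\); combined with the lower bound noted above, this yields the value \(2\) quoted for the regular action of \(C_{n}\).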
For most groups, we see that the relational complexity is very close to the bound in Proposition 1.2(ii). However, the difference between the height and the relational complexity of \(\mathrm{PGL}_{n}(\mathbb{F})\) increases with \(n\) when \(|\mathbb{F}|\geq 3\). This addresses a recent question of Cherlin and Wiscons (see [6, p. 23]): there exists a family of finite primitive groups that are not large-base, where the difference between height and relational complexity can be arbitrarily large. Theorem A also provides infinitely many examples of almost simple groups \(\overline{H}\) with \(\mathrm{RC}(\mathrm{Soc}(\overline{H}))>\mathrm{RC}(\overline{H})\).

We next bound the relational complexity of the remaining groups with socle \(\mathrm{PSL}_{n}(q)\) that act on \(\Omega_{1}\). For \(k\in\mathbb{Z}_{>0}\), the number of distinct prime divisors of \(k\) is denoted by \(\omega(k)\), with \(\omega(1)=0\).

**Theorem B**.: _Let \(\overline{H}\) satisfy \(\mathrm{PSL}_{n}(q)\leq\overline{H}\leq\mathrm{P\Gamma L}_{n}(q)\), and let \(e:=|\overline{H}:\overline{H}\cap\mathrm{PGL}_{n}(q)|\). Suppose that \(e>1\), so that \(q\geq 4\) and \(\overline{H}\not\leq\mathrm{PGL}_{n}(q)\)._

* _If_ \(n=2\) _and_ \(q\geq 8\)_, then_ \(4+\omega(e)\geq\mathrm{RC}(\overline{H},\Omega_{1})\geq 4\)_, except that_ \(\mathrm{RC}(\mathrm{P\Sigma L}_{2}(9),\Omega_{1})=3\)_._
* _If_ \(n\geq 3\)_, then_ \[2n-1+\omega(e)\geq\mathrm{RC}(\overline{H},\Omega_{1})\geq\begin{cases}n+2&\text{always},\\ n+3&\text{if }\mathrm{PGL}_{n}(q)<\overline{H},\\ 2n-2&\text{if }\overline{H}\leq\mathrm{P\Sigma L}_{n}(q)\neq\mathrm{P\Gamma L}_{n}(q).\end{cases}\]

In fact, the lower bound of \(2n-2\) holds for a larger family of groups; see Proposition 3.7.

**Theorem C**.: _Let \(\overline{H}\) satisfy \(\mathrm{PSL}_{n}(q)\leq\overline{H}\leq\mathrm{P\Gamma L}_{n}(q)\) and let \(e:=|\overline{H}:\overline{H}\cap\mathrm{PGL}_{n}(q)|\). Fix \(m\in\{2,\ldots,\lfloor\frac{n}{2}\rfloor\}\). Then_ \[(m+1)n-2m+2+\omega(e)\geq\mathrm{RC}(\overline{H},\Omega_{m})\geq mn-m^{2}+1.\]

GAP [5] calculations using [3] yield \(\mathrm{RC}(\mathrm{P\Gamma L}_{2}(3^{5}),\Omega_{1})=5=4+\omega(5)\) and \(\mathrm{RC}(\mathrm{P\Gamma L}_{4}(9),\Omega_{1})=8=7+\omega(2)\), so the upper bounds of Theorem B cannot be improved in general. On the other hand, \(\mathrm{RC}(\mathrm{P\Gamma L}_{3}(2^{6}),\Omega_{1})\) achieves the lower bound of \(6=3+3<7=5+\omega(6)\).
Additionally, \(\mathrm{RC}(\mathrm{PSL}_{4}(2),\Omega_{2})\) achieves the lower bound of \(5\) from Theorem C, while \(\mathrm{RC}(\mathrm{PSL}_{4}(3),\Omega_{2})=6\) and \(\mathrm{RC}(\mathrm{PGL}_{4}(3),\Omega_{2})=\mathrm{RC}(\mathrm{PSL}_{4}(4), \Omega_{2})=8\). It is straightforward to use our results to bound the relational complexity in terms of the degree. For example, \(\mathrm{RC}(\mathrm{PGL}_{n}(q),\Omega_{1})<\log(|\Omega_{1}|)+3\). Many of our arguments also apply to the case where \(\mathbb{F}\) is an arbitrary field; see Theorem 3.1, Lemmas 3.5 and 3.6, and Propositions 3.7 and 4.1. This paper is structured as follows. In Section 2, we fix some more notation and prove some elementary lemmas, then prove upper bounds on the relational complexity of the relevant actions on 1-spaces. In Section 3, we shall prove corresponding lower bounds, and then prove Theorems A and B. Finally, in Section 4, we prove Theorem C. ## 2. Action on 1-spaces: upper bounds In this section we present several preliminary lemmas, and then determine upper bounds for the relational complexity of groups \(H\), with \(\mathrm{SL}_{n}(\mathbb{F})\trianglelefteq H\leq\mathrm{GL}_{n}(\mathbb{F})\), acting on \(\Omega_{1}\). We begin with some notation that we will use throughout the remainder of the paper. Let \(\{e_{1},\ldots,e_{n}\}\) be a basis for \(V\). For a set \(\Gamma\), a tuple \(X=(x_{i})_{i=1}^{k}\in\Gamma^{k}\) and a permutation \(\sigma\in\mathrm{S}_{k}\), we write \(X^{\sigma}\) to denote the \(k\)-tuple \((x_{1^{\sigma^{-1}}},\ldots,x_{k^{\sigma^{-1}}})\). For a tuple \(X\in\Omega_{m}^{k}\), we write \(\langle X\rangle\) to denote the subspace of \(V\) spanned by all entries in \(X\). For \(i\in\{1,\ldots,k\}\), we shall write \((X\setminus x_{i})\) to denote the subtuple of \(X\) obtained by deleting \(x_{i}\). In the remainder of this section, let \(\Omega:=\Omega_{1}=\mathcal{PG}_{1}(V)\) and let \(H\) be a group such that \(\mathrm{SL}_{n}(\mathbb{F})\unlhd H\leq\mathrm{GL}_{n}(\mathbb{F})\). Recall from Theorem 1.1 that \(\mathrm{RC}(\mathrm{GL}_{n}(\mathbb{F}),\Omega)=n\) when \(|\mathbb{F}|=2\). Thus we shall assume throughout this section that \(|\mathbb{F}|\geq 3\) and \(n\geq 2\). We write \(D\) to denote the subgroup of diagonal matrices of \(\mathrm{GL}_{n}(\mathbb{F})\) (with respect to the basis \(\{e_{1},\ldots,e_{n}\}\)), and \(\Delta:=\big{\{}\langle e_{i}\rangle\mid i\in\{1,\ldots,n\}\big{\}}\). Observe that \(D\cap H\) is the pointwise stabiliser \(H_{(\Delta)}\). For a vector \(v=\sum_{i=1}^{n}\alpha_{i}e_{i}\in V\), the _support_\(\mathrm{supp}(v)\) of \(v\) is the set \(\{i\in\{1,\ldots,n\}\mid\alpha_{i}\neq 0\}\). Additionally, the _support_\(\mathrm{supp}(W)\) of a subset \(W\) of \(V\) is the set \(\bigcup_{w\in W}\mathrm{supp}(w)\), and similarly for tuples. In particular, \(\Delta\) is the set of subspaces of \(V\) with support of size \(1\), and \(\mathrm{supp}(W)=\mathrm{supp}(\langle W\rangle)\) for all subsets \(W\) of \(V\). ### Preliminaries We begin our study of the action of \(H\) on \(\Omega\) with a pair of lemmas that will enable us to consider only tuples of a very restricted form. **Lemma 2.1**.: _Let \(k\geq n\), and let \(X,Y\in\Omega^{k}\) be such that \(X\underset{H,n}{\sim}Y\). Additionally, let \(a:=\dim(\langle X\rangle)\). 
Then there exist \(X^{\prime}=(x^{\prime}_{1},\ldots,x^{\prime}_{k}),Y^{\prime}=(y^{\prime}_{1},\ldots,y^{\prime}_{k})\in\Omega^{k}\) such that_

* \(x^{\prime}_{i}=y^{\prime}_{i}=\langle e_{i}\rangle\) _for_ \(i\in\{1,\ldots,a\}\)_, and_
* \(X\underset{H,r}{\sim}Y\) _if and only if_ \(X^{\prime}\underset{H,r}{\sim}Y^{\prime}\)_, for each_ \(r\in\{1,\ldots,k\}\)_._

Proof.: Observe that there exists \(\sigma\in\mathrm{S}_{k}\) such that \(\langle X^{\sigma}\rangle=\langle x_{1^{\sigma^{-1}}},\ldots,x_{a^{\sigma^{-1}}}\rangle\). Since \(X\underset{H,n}{\sim}Y\) and \(a\leq n\), the definition of \(a\)-equivalence yields \(X^{\sigma}\underset{H,a}{\sim}Y^{\sigma}\). Hence there exists an \(f\in H\) such that \(x^{f}_{i^{\sigma^{-1}}}=y_{i^{\sigma^{-1}}}\) for all \(i\in\{1,\ldots,a\}\), and so \(\langle Y^{\sigma}\rangle=\langle y_{1^{\sigma^{-1}}},\ldots,y_{a^{\sigma^{-1}}}\rangle\). Since \(\mathrm{SL}_{n}(\mathbb{F})\) is transitive on \(n\)-tuples of linearly independent \(1\)-spaces, there exists \(h\in\mathrm{SL}_{n}(\mathbb{F})\leq H\) such that \(x^{fh}_{i^{\sigma^{-1}}}=y^{h}_{i^{\sigma^{-1}}}=\langle e_{i}\rangle\) for \(i\in\{1,\ldots,a\}\). Define \(X^{\prime},Y^{\prime}\in\Omega^{k}\) by \(x^{\prime}_{i}=x^{fh}_{i^{\sigma^{-1}}}\) and \(y^{\prime}_{i}=y^{h}_{i^{\sigma^{-1}}}\), so that \(X^{\prime}=X^{\sigma fh}\) and \(Y^{\prime}=Y^{\sigma h}\). Then \(X^{\prime}\underset{H,r}{\sim}Y^{\prime}\) if and only if \(X^{\sigma}\underset{H,r}{\sim}Y^{\sigma}\), which holds if and only if \(X\underset{H,r}{\sim}Y\).

**Lemma 2.2**.: _Let \(k\geq r\geq n\), and let \(X,Y\in\Omega^{k}\) be such that \(X\underset{H,r}{\sim}Y\). Additionally, let \(a:=\dim(\langle X\rangle)\) and assume that \(a<n\). If \(a=1\), or if \(\mathrm{RC}(\mathrm{GL}_{a}(\mathbb{F}),\mathcal{PG}_{1}(\mathbb{F}^{a}))\leq r\), then \(Y\in X^{H}\)._

Proof.: If \(a=1\), then all entries of \(X\) are equal, so since \(r\geq n\geq 2\), we see that \(X\underset{H,r}{\sim}Y\) directly implies \(Y\in X^{H}\). We will therefore suppose that \(a\geq 2\) and \(\mathrm{RC}(\mathrm{GL}_{a}(\mathbb{F}),\mathcal{PG}_{1}(\mathbb{F}^{a}))\leq r\). By Lemma 2.1, we may assume without loss of generality that \(\langle X\rangle=\langle Y\rangle=\langle e_{1},\ldots,e_{a}\rangle\). As \(X\underset{H,r}{\sim}Y\) and \(\mathrm{RC}(\mathrm{GL}_{a}(\mathbb{F}),\mathcal{PG}_{1}(\mathbb{F}^{a}))\leq r\), there exists an element \(g\in\mathrm{GL}_{a}(\mathbb{F})\) mapping \(X\) to \(Y\), considered as tuples of subspaces of \(\langle e_{1},\ldots,e_{a}\rangle\). We now let \(h\) be the diagonal matrix \(\mathrm{diag}(\det(g^{-1}),1,\ldots,1)\in\mathrm{GL}_{n-a}(\mathbb{F})\), and observe that \(g\oplus h\in\mathrm{SL}_{n}(\mathbb{F})\) maps \(X\) to \(Y\), and so \(Y\in X^{H}\).

We now begin our study of some particularly nice \(k\)-tuples.

**Lemma 2.3**.: _Let \(k\geq n+1\), and let \(X,Y\in\Omega^{k}\) be such that \(x_{i}=y_{i}=\langle e_{i}\rangle\) for \(i\in\{1,\ldots,n\}\) and \(X\underset{H,n+1}{\sim}Y\). Then \(\mathrm{supp}(x_{i})=\mathrm{supp}(y_{i})\) for all \(i\in\{1,\ldots,k\}\)._

Proof.: It is clear that \(\operatorname{supp}(x_{i})=\{i\}=\operatorname{supp}(y_{i})\) when \(i\in\{1,\ldots,n\}\). Assume therefore that \(i>n\). Since \(X\underset{H,n+1}{\sim}Y\), there exists an \(h\in H\) mapping \((x_{1},\ldots,x_{n},x_{i})\) to \((y_{1},\ldots,y_{n},y_{i})\). Such an \(h\) stabilises \(\langle e_{j}\rangle\) for each \(j\in\{1,\ldots,n\}\), so \(h\in D\), and hence \(\operatorname{supp}(y_{i})=\operatorname{supp}(x_{i}^{h})=\operatorname{supp}(x_{i})\).
We shall therefore let \(X\) and \(Y\) be elements of \(\Omega^{2n-1}\) with \(x_{i}=y_{i}=\langle e_{i}\rangle\) for \(i\in\{1,\ldots,n\}\), such that \(X\underset{H,2n-2}{\sim}Y\). Additionally, for \(i\in\{1,\ldots,2n-1\}\) and \(j\in\{1,\ldots,n\}\), define \[\alpha_{ij},\beta_{ij}\in\mathbb{F}\text{ so that }x_{i}=\langle\sum_{j=1}^{n}\alpha_{ij}e_{j}\rangle\text{ and }y_{i}=\langle\sum_{j=1}^{n}\beta_{ij}e_{j}\rangle. \tag{1}\]

**Lemma 2.6**.: _If at least one of the following holds, then \(Y\in X^{H}\)._

1. _There exist_ \(i,j\in\{n+1,\ldots,2n-1\}\) _with_ \(i\neq j\) _and_ \(\operatorname{supp}(x_{i})\subseteq\operatorname{supp}(x_{j})\)_._
2. _There exists a nonempty_ \(R\subseteq\{n+1,\ldots,2n-1\}\) _with_ \(|\bigcap_{i\in R}\operatorname{supp}(x_{i})|=1\)_._
3. _There exists_ \(i\in\{n+1,\ldots,2n-1\}\) _such that_ \(|\operatorname{supp}(x_{i})|\geq 4\)_._

Proof.: We begin by noting that Lemma 2.3 yields \(\operatorname{supp}(y_{i})=\operatorname{supp}(x_{i})\) for all \(i\in\{1,\ldots,2n-1\}\).

1. Since \(X\underset{H,2n-2}{\sim}Y\), there exists an \(h\in H\) mapping \((X\setminus x_{i})\) to \((Y\setminus y_{i})\), and such an \(h\) is necessarily diagonal, with fixed entries in \(\operatorname{supp}(x_{j})\) (up to scalar multiplication). Now, let \(\ell\in\{n+1,\ldots,2n-1\}\setminus\{i,j\}\) (this is possible as \(n\geq 4\)). There exists an \(h^{\prime}\in H\) mapping \((X\setminus x_{\ell})\) to \((Y\setminus y_{\ell})\), and as before each such \(h^{\prime}\) is diagonal. Hence every matrix in \(H\cap D\) mapping \(x_{j}\) to \(y_{j}\) maps \(x_{i}\) to \(y_{i}\), and in particular \(x_{i}^{h}=y_{i}\) and so \(X^{h}=Y\).
2. Let \(\{\ell\}:=\bigcap_{i\in R}\operatorname{supp}(x_{i})\). Then \(\alpha_{i\ell}\neq 0\) for all \(i\in R\). Since \(X\underset{H,2n-2}{\sim}Y\), there exists \(h\in H\) such that \((X\setminus x_{\ell})^{h}=(Y\setminus y_{\ell})\). It follows that for all \(k\in\{1,\ldots,n\}\setminus\{\ell\}\), there exists \(\gamma_{k}\in\mathbb{F}^{*}\) such that \(e_{k}^{h}=\gamma_{k}e_{k}\). Thus for each \(i\in R\), \[y_{i}=x_{i}^{h}=\Big\langle\sum_{k\in\operatorname{supp}(x_{i})}\alpha_{ik}e_{k}^{h}\Big\rangle=\Big\langle\alpha_{i\ell}e_{\ell}^{h}+\sum_{k\in\operatorname{supp}(x_{i})\setminus\{\ell\}}\alpha_{ik}\gamma_{k}e_{k}\Big\rangle.\] Since \(\alpha_{i\ell}\neq 0\), we deduce that \(\operatorname{supp}(e_{\ell}^{h})\subseteq\operatorname{supp}(y_{i})=\operatorname{supp}(x_{i})\).
As this holds for all \(i\in R\), we obtain \(\operatorname{supp}(e_{\ell}^{h})=\{\ell\}\). Thus \(x_{\ell}^{h}=\langle e_{\ell}\rangle^{h}=\langle e_{\ell}\rangle=y_{\ell}\), so \(X^{h}=Y\). 3. Permute the last \(n-1\) coordinates of \(X\) and \(Y\) so that \(\operatorname{supp}(x_{n+1})\geq 4\). By (ii), we may assume that \(x_{i}\not\in\Delta\) for all \(i\geq n+1\). For each \(k\in\{n+1,\ldots,2n-1\}\), we define \(X_{n+1}^{k}:=(x_{n+1},\ldots,x_{k})\) and \(Y_{n+1}^{k}:=(y_{n+1},\ldots,y_{k})\). As \(\operatorname{supp}(x_{i})=\operatorname{supp}(y_{i})\) for all \(i\), we see that \(X_{n+1}^{k}\underset{D,1}{\sim}Y_{n+1}^{k}\), so \(X_{n+1}^{k}\) and \(Y_{n+1}^{k}\) satisfy the conditions of Lemma 2.4(ii). Suppose first that there exists \(j\in\{n+2,\ldots,2n-1\}\) such that \(\mathbb{M}_{X_{n+1}^{j},Y_{n+1}^{j}}=\mathbb{M}_{X_{n+1}^{j-1},Y_{n+1}^{j-1}}\). As \(X\underset{H,2n-2}{\sim}Y\), there exists \(h\in H\cap D\) such that \((X\setminus x_{j})^{h}=(Y\setminus y_{j})\). Hence \(h\in\mathbb{M}_{X_{n+1}^{j-1},Y_{n+1}^{j-1}}\), and so \(h\in\mathbb{M}_{X_{n+1}^{j},Y_{n+1}^{j}}\). Therefore, \(x_{j}^{h}=y_{j}\) and \(X^{h}=Y\). Hence we may assume instead that \(\mathbb{M}_{X_{n+1}^{j},Y_{n+1}^{j}}<\mathbb{M}_{X_{n+1}^{j-1},Y_{n+1}^{j-1}}\) for all \(j\in\{n+2,\ldots,2n-1\}\). Then \(\dim(\mathbb{M}_{X_{n+1}^{j},Y_{n+1}^{j}})\leq\dim(\mathbb{M}_{X_{n+1}^{j-1},Y _{n+1}^{j-1}})-1\). Lemma 2.4(ii) yields \(\dim(\mathbb{M}_{X_{n+1}^{n+1},Y_{n+1}^{n+1}})\leq n-3\), and hence \(\mathbb{M}_{X_{n+1}^{2n-2},Y_{n+1}^{2n-2}}=\{0\}=\mathbb{M}_{X_{n+1}^{2n-1},Y _{n+1}^{2n-2}}\), contradicting our assumption. We now prove the main result of this subsection. **Theorem 2.7**.: _Suppose that \(n\geq 4\) and \(|\mathbb{F}|\geq 3\), and let \(H\) be any group with \(\operatorname{SL}_{n}(\mathbb{F})\trianglelefteq H\leq\operatorname{GL}_{n}( \mathbb{F})\). Then \(\operatorname{RC}(H,\Omega)\leq 2n-2\)._ Proof.: Let \(X,Y\in\Omega^{2n-1}\) be as defined before Lemma 2.6. By Lemma 2.5 it suffices to show that \(Y\in X^{H}\), so assume otherwise. We may also assume that all subspaces in \(X\) are distinct, so that \(|\mathrm{supp}(x_{i})|\in\{2,3\}\) for each \(i\in\{n+1,\ldots,2n-1\}\) by Lemma 2.6(iii). For \(k\in\{2,3\}\), let \(R_{k}\) be the set of all \(i\in\{n+1,\ldots,2n-1\}\) such that \(|\mathrm{supp}(x_{i})|=k\). Then \[|R_{2}|+|R_{3}|=n-1. \tag{2}\] Observe from Lemma 2.6(i)-(ii) that if \(i\in R_{2}\), then \(\mathrm{supp}(x_{i})\cap\mathrm{supp}(x_{j})=\varnothing\) for each \(j\in\{n+1,\ldots,2n-1\}\setminus\{i\}\). Hence \(2|R_{2}|\leq n\) and \[|U|:=\bigg{|}\bigcup_{j\in R_{3}}\mathrm{supp}(x_{j})\bigg{|}\leq\bigg{|}\{1,\ldots,n\}\setminus\Big{(}\dot{\bigcup_{i\in R_{2}}\mathrm{supp}(x_{i})} \Big{)}\bigg{|}=n-2|R_{2}|. \tag{3}\] We now determine an expression for \(|U|\) involving \(|R_{3}|\). Observe first that \(|R_{3}|\geq 1\), else \(|R_{2}|=n-1\) by (2), contradicting \(2|R_{2}|\leq n\). Define a relation \(\sim\) on \(R_{3}\) by \(i\sim j\) if \(\mathrm{supp}(x_{i})\cap\mathrm{supp}(x_{j})\neq\varnothing\), and let \(\mathcal{P}\) be the equivalence classes of the transitive closure of \(\sim\). Let \(P\) be a class of \(\mathcal{P}\). By Lemma 2.6(i)-(ii), \(|\mathrm{supp}(x_{i})\cap\mathrm{supp}(x_{j})|\in\{0,2\}\) for all distinct \(i,j\in R_{3}\). Thus if \(|P|=1\), then \(|\bigcup_{c\in P}\mathrm{supp}(x_{c})|=3\), and if \(|P|=2\), then \(|\bigcup_{c\in P}\mathrm{supp}(x_{c})|=4\). Now suppose that \(|P|\geq 3\). 
Then there exist distinct \(c_{1},c_{2},c_{3}\in P\) with \(c_{1}\sim c_{2}\) and \(c_{2}\sim c_{3}\). Let \(I:=\bigcap_{i=1}^{3}\mathrm{supp}(x_{c_{i}})\). We observe that \(|I|\neq 0\), and so Lemma 2.6(ii) shows that \(I\) has size two and is equal to \(\mathrm{supp}(x_{c_{1}})\cap\mathrm{supp}(x_{c_{3}})\). Hence \(c_{1}\sim c_{3}\) and \(\bigcup_{i=1}^{3}\mathrm{supp}(x_{c_{i}})=I\dot{\cup}\big{(}\dot{\bigcup_{i=1 }^{3}(\mathrm{supp}(x_{c_{i}})\setminus I)}\big{)}\). If \(|P|>3\), then there exists \(c_{4}\in P\setminus\{c_{1},c_{2},c_{3}\}\) such that, without loss of generality, \(c_{4}\sim c_{1}\). As \(c_{1}\sim c_{j}\) for each \(j\in\{2,3\}\), the above argument shows that \(\bigcap_{i\in\{1,j,4\}}\mathrm{supp}(x_{c_{i}})=I\) and \(\bigcup_{i=1}^{4}\mathrm{supp}(x_{c_{i}})=I\dot{\cup}\big{(}\dot{\bigcup_{i=1 }^{4}(\mathrm{supp}(x_{c_{i}})\setminus I)}\big{)}\). Repeating this argument inductively on \(|P|\) shows that \(\bigcup_{c\in P}\mathrm{supp}(x_{c})=I\dot{\cup}\big{(}\dot{\bigcup_{c\in P}( \mathrm{supp}(x_{c})\setminus I)}\big{)}\), which has size \(2+|P|\). Finally, let \(r\geq 1\) be the number of parts of \(\mathcal{P}\). As \(|R_{3}|=\sum_{P\in\mathcal{P}}|P|\), we deduce that \(|U|=2r+|R_{3}|\geq 2+|R_{3}|\). Thus (3) yields \(2+|R_{3}|\leq n-2|R_{2}|\). Hence \(2|R_{2}|+|R_{3}|\leq n-2<n-1\), which is equal to \(|R_{2}|+|R_{3}|\) by (2), a contradiction. ### Upper bounds for \(\mathbf{GL_{n}(\mathbb{F})}\) on 1-spaces In this subsection, we determine a much smaller upper bound on \(\mathrm{RC}(\mathrm{GL}_{n}(\mathbb{F}),\Omega)\) via our main result, Theorem 2.12. We shall assume throughout that \(n\) and \(|\mathbb{F}|\) are at least \(3\), and write \(G:=\mathrm{GL}_{n}(\mathbb{F})\). Since \(D\) is the pointwise stabiliser of \(\Delta\) in \(G\), we will prove Theorem 2.12 by combining Lemmas 2.1 and 2.2 with information about the action of \(D\) on \(r\)-tuples \(A\) and \(B\) of subspaces in \(\overline{\Delta}=\Omega\setminus\Delta\). If these tuples are \((r-1)\)-equivalent under \(D\), then by acting on one with a suitable element of \(\overline{\Delta}\) we may assume that their first \(r-1\) entries are equal. We shall denote the nonzero entries of elements \(g\) of \(D\) by just \(g_{1},\ldots,g_{n}\) rather than \(g_{11},\ldots,g_{nn}\), since \(g\) is necessarily diagonal. **Lemma 2.8**.: _Let \(r\geq 3\), and let \(A,B\in\overline{\Delta}^{r}\) be such that \((a_{1},\ldots,a_{r-1})=(b_{1},\ldots,b_{r-1})\), \(A\underset{D,r-1}{\sim}B\), and \(B\notin A^{D}\). Let \(C=\{a_{1},\ldots,a_{r-1}\}\) and assume also that \(\mathrm{supp}(C)=\{1,\ldots,n\}\). Then \((\)after reordering the basis for \(V\) and \((a_{1},\ldots,a_{r-1})\) if necessary\()\) the following statements hold._ 1. _There exist integers_ \(2\leq i_{1}<i_{2}<\ldots<i_{r-1}=n\) _such that, for each_ \(t\in\{1,\ldots,r-1\}\)_,_ \(\mathrm{supp}(a_{1},\ldots,a_{t})\) _is equal to_ \(\{1,\ldots,i_{t}\}\) _._ 2. _Let_ \(t\in\{1,\ldots,r-3\}\)_. Then_ \(\operatorname{supp}(a_{t})\cap\operatorname{supp}(a_{u})=\varnothing\) _for all_ \(u\in\{t+2,\ldots,r-1\}\)_._ 3. _The support of_ \(a_{2}\) _does not contain_ \(1\)_._ 4. _Let_ \(t\in\{1,\ldots,r-1\}\)_. Then_ \(i_{t}\in\operatorname{supp}(a_{t})\cap\operatorname{supp}(a_{t+1})\)_._ 5. 
_Each integer in_ \(\operatorname{supp}(a_{r})\) _lies in the support of a unique subspace in_ \(C\)_._ Proof.: We begin by fixing notation related to \(a_{r}=\langle\sum_{\ell=1}^{n}\alpha_{\ell}e_{\ell}\rangle\) and \(b_{r}=\langle\sum_{\ell=1}^{n}\beta_{\ell}e_{\ell}\rangle\). Since \(A\underset{D,r-1}{\sim}B\), there exists an element in \(D\) mapping \(a_{r}\) to \(b_{r}\), and so \(\operatorname{supp}(b_{r})=\operatorname{supp}(a_{r})\). On the other hand, \(B\notin A^{D}\), and so \(a_{r}\neq b_{r}\). Therefore, by scaling the basis vectors for \(a_{r}\) and \(b_{r}\), there exist \(j,k\in\{1,\ldots,n\}\) such that \(j<k\), \(\alpha_{j}=\beta_{j}=1\), and \(\alpha_{k}\) and \(\beta_{k}\) are distinct and nonzero. Reordering \(\{e_{1},\ldots,e_{n}\}\) if necessary, we may assume that \(j=1\). Then each element of \(D\) that maps \(a_{r}\) to \(b_{r}\) also maps \(\langle e_{1}+\alpha_{k}e_{k}\rangle\) to \(\langle e_{1}+\beta_{k}e_{k}\rangle\); we will use this fact throughout the proof. 1. We show first that there is no partition of \(C\) into proper subsets \(C^{\prime}\) and \(C^{\prime\prime}\) such that \(\operatorname{supp}(C^{\prime})\cap\operatorname{supp}(C^{\prime\prime})=\varnothing\), so suppose otherwise, for a contradiction. Then, as \(|C^{\prime}|<r-1\) and \(A\underset{D,r-1}{\sim}B\), there exists an \(f\in D_{(C^{\prime})}\) such that \(a_{r}^{f}=b_{r}\). Multiplying \(f\) by a scalar if necessary, we may assume that \(f_{1}=1\). Then \(f_{i}=\beta_{i}/\alpha_{i}\) for each \(i\in\operatorname{supp}(a_{r})\). Similarly, there exists \(g\in D_{(C^{\prime\prime})}\) with the same properties. As \(\operatorname{supp}(C^{\prime})\cap\operatorname{supp}(C^{\prime\prime})=\varnothing\), there exists an \(h\in D\) such that \(h|_{\operatorname{supp}(C^{\prime})}=f|_{\operatorname{supp}(C^{\prime})}\) and \(h|_{\operatorname{supp}(C^{\prime\prime})}=g|_{\operatorname{supp}(C^{\prime \prime})}\). Since \(\operatorname{supp}(C)=\{1,\ldots,n\}\), we observe that \(h|_{\operatorname{supp}(a_{r})}=f|_{\operatorname{supp}(a_{r})}=g|_{ \operatorname{supp}(a_{r})}\). Hence \(a_{r}^{h}=b_{r}\). Furthermore, by construction, \(h\in D_{(C^{\prime})}\cap D_{(C^{\prime\prime})}=D_{C}\). Thus \(B\in A^{D}\), a contradiction. Next, by reordering \(a_{1},\ldots,a_{r-1}\) if necessary, we may assume that \(1\in\operatorname{supp}(a_{1})\). Then by reordering \(\{e_{2},\ldots,e_{n}\}\) if necessary, we may assume that \(\operatorname{supp}(a_{1})\) is equal to \(\{1,2,\ldots,i_{1}\}\) for some \(i_{1}\geq 2\), since \(a_{1}\in\overline{\Delta}\). Thus the result holds for \(t=1\). We will use induction to prove the result in general, and to show that, for all \(s\in\{2,\ldots,r-1\}\), (4) there exists \(w\in\{1,\ldots,s-1\}\) such that \(\operatorname{supp}(a_{s})\cap\operatorname{supp}(a_{w})\neq\varnothing\). Let \(t\in\{2,\ldots,r-1\}\), let \(U_{t-1}:=\{a_{1},\ldots,a_{t-1}\}\) and assume inductively that \(\operatorname{supp}(U_{t-1})=\{1,2,\ldots,i_{t-1}\}\). If \(t\geq 3\) assume also that (4) holds for all \(s\in\{2,\ldots,t-1\}\). Since \(C\) cannot be partitioned into two parts whose support has trivial intersection, \(\operatorname{supp}(a_{1},\ldots,a_{t-1})\cap\operatorname{supp}(a_{t},\ldots,a _{r-1})\neq\varnothing\), so we may reorder \(\{a_{t},\ldots,a_{r-1}\}\) so that (4) holds when \(s=t\). Suppose for a contradiction that \(\operatorname{supp}(a_{t})\subseteq\operatorname{supp}(U_{t-1})\). 
Then (4) (applied to each \(s\in\{2,\ldots,t-1\}\)) and Lemma 2.4 imply that \(D_{(C)}\) is equal to \(D_{(C\setminus a_{t})}\). Since \(A\underset{D,r-1}{\sim}B\), the latter stabiliser contains an element mapping \(a_{r}\) to \(b_{r}\). Hence the same is true for \(D_{(C)}\), contradicting the fact that \(B\notin A^{D}\). Therefore, we can reorder \(\{e_{i_{t-1}+1},\ldots,e_{n}\}\) so that \(\operatorname{supp}(a_{t})\) contains \(\{i_{t-1}+1,\ldots,i_{t}\}\) for some \(i_{t}>i_{t-1}\), and the result and (4) follow by induction. Note in particular that \(i_{r-1}=n\), since \(\operatorname{supp}(C)=\{1,\ldots,n\}\). 2. Let \(m\in\{1,\ldots,r-1\}\) be such that \(\operatorname{supp}(a_{m})\) contains the integer \(k\) from the first paragraph of this proof, and let \(\mathcal{I}:=\{1,\ldots,m\}\). Then, using (4) (for each \(s\in\mathcal{I}\setminus\{1\}\)) and Lemma 2.4(i), we observe that every \(g\in D_{(a_{1},\ldots,a_{m})}\) satisfies \(g_{1}=g_{k}\). Therefore, \(a_{r}^{g}\neq b_{r}\) for all \(g\in D_{(a_{1},\ldots,a_{m})}\). As \(A\underset{D,r-1}{\sim}B\), we deduce that \(m=r-1\). In particular, \(a_{r-1}\) is the unique subspace in \(C\) whose support contains \(k\). Swapping \(e_{k}\) and \(e_{n}\) if necessary, we may assume that \(k=n\). Now, for a contradiction, suppose that \(\operatorname{supp}(a_{t})\cap\operatorname{supp}(a_{u})\neq\varnothing\) for some \(t\in\{1,\ldots,r-3\}\) and \(u\in\{t+2,\ldots,r-1\}\), and assume that \(u\) is the largest integer with this property. Then (4) and the maximality of \(u\) imply that \(\operatorname{supp}(a_{s})\cap\operatorname{supp}(a_{s-1})\neq\varnothing\) for all \(s\in\{u+1,\ldots,r-1\}\). It now follows from Lemma 2.4(i), together with a further application of (4) to each \(s\in\{2,\ldots,t\}\), that every \(g\in E:=D_{(a_{1},\ldots,a_{t},a_{u},\ldots,a_{r-1})}\) satisfies \(g_{1}=g_{n}\). Therefore, \(a_{r}^{g}\neq b_{r}\) for all \(g\in E\). However, \(|(a_{1},\ldots,a_{t},a_{u},\ldots,a_{r-1})|<r-1\), contradicting the fact that \(A\underset{D,r-1}{\sim}B\). 3. As in the proof of (ii), we may assume that \(k=n\). We observe from (ii) and (4) that \(\operatorname{supp}(a_{t})\cap\operatorname{supp}(a_{t+1})\neq\varnothing\) for all \(t\leq r-2\). Hence if \(1\in\operatorname{supp}(a_{2})\) then Lemma 2.4(i) shows that every \(g\in D_{(a_{2},\ldots,a_{r-1})}\) satisfies \(g_{1}=g_{n}\) (since \(k=n\)). This contradicts the fact that \(A\underset{D,r-1}{\sim}B\), and so (iii) holds. Finally, since \(\operatorname{supp}(a_{t})\cap\operatorname{supp}(a_{t+1})\neq\varnothing\) for each \(t\leq r-2\), we obtain (iv) by defining \(i_{0}:=1\) and reordering the vectors in \(\{e_{i_{t-1}+1},\ldots,e_{i_{t}}\}\) if necessary. In particular, for \(t=r-1\), the assumption that \(i_{r-1}=n=k\in\operatorname{supp}(a_{r})\) gives the result. 4. Suppose for a contradiction that some \(\ell\in\operatorname{supp}(a_{r})\) lies in the support of more than one subspace in \(C\). If \(r=3\), then \(\ell\in\operatorname{supp}(a_{1})\cap\operatorname{supp}(a_{2})\) and we set \(t:=2\), and if \(r>3\), then (ii) implies that \(\ell\in\operatorname{supp}(a_{t})\) for some \(t\in\{2,\ldots,r-2\}\). Moreover, \(\ell\neq 1\), since \(1\not\in\operatorname{supp}(a_{2})\) by (iii), and \(1\not\in\operatorname{supp}(a_{u})\) for \(u\in\{3,\ldots,r-2\}\) by (i)-(ii). Furthermore, (i) shows that \(\ell\neq i_{r-1}=n\). Suppose first that \(\alpha_{\ell}=\beta_{\ell}\) (\(\neq 0\)). 
Since the supports of \(a_{t},a_{t+1},\ldots,a_{r-1}\) consecutively overlap, Lemma 2.4(i) shows that \(g_{\ell}=g_{n}\) for each \(g\in D_{(a_{t},\ldots,a_{r-1})}\). Since \(\alpha_{n}\neq\beta_{n}\), no such \(g\) maps \(a_{r}\) to \(b_{r}\), contradicting the fact that \(A\underset{D,r-1}{\sim}B\). Hence \(\alpha_{\ell}\neq\beta_{\ell}\). However, each \(g\in D_{a_{1}}\) satisfies \(g_{1}=g_{\ell}\) if \(r=3\), as does each \(g\in D_{(a_{1},\ldots,a_{t})}\) if \(r>3\). Again, no such matrix \(g\) maps \(a_{r}\) to \(b_{r}\), a contradiction. Recall that \(G\) denotes \(\operatorname{GL}_{n}(\mathbb{F})\), with \(n,|\mathbb{F}|\geq 3\). Our next result is a key ingredient in the proof that \(\operatorname{RC}(G,\Omega)\) is at most \(n+2\). **Lemma 2.9**.: _Let \(r\geq 2\), and let \(A,B\in\overline{\Delta}^{r}\) be such that \(A\underset{D,r-1}{\sim}B\) and \(B\notin A^{D}\). Then there exists a subset \(\Gamma\) of \(\Delta\) of size \(n+2-r\) such that \(B\notin A^{G_{(\Gamma)}}\)._ Proof.: If \(r=2\), then set \(\Gamma=\Delta\). Since \(G_{(\Gamma)}=D\) and \(B\notin A^{D}\), we are done. Assume therefore that \(r\geq 3\). We will suppose for a contradiction that \(n\) is the smallest dimension for which the present lemma does not hold, for this value of \(r\). Since \(A\underset{D,r-1}{\sim}B\), we may also assume that \((a_{1},\ldots,a_{r-1})=(b_{1},\ldots,b_{r-1})\). Let \(C=\{a_{1},\ldots,a_{r-1}\}\). As \(B\notin A^{D}\), no element of \(D_{(C)}\) maps \(a_{r}\) to \(b_{r}\). Therefore, \(B\notin A^{G_{(\Gamma)}}\) for a given subset \(\Gamma\) of \(\Delta\) if and only if no element of \(G_{(\Gamma\cup C)}\) maps \(a_{r}\) to \(b_{r}\). We split the remainder of the proof into two cases, depending on whether or not \(|\operatorname{supp}(C)|=n\). **Case \(|\operatorname{supp}(C)|<n\):**: Let \(\Delta_{C}:=\{\langle e_{j}\rangle\mid j\in\operatorname{supp}(C)\}\), let \(L\) be the subspace \(\langle\Delta_{C}\rangle\) of \(V\), and let \(a_{\ell}\) and \(b_{\ell}\) be the projections onto \(L\) of \(a_{r}\) and \(b_{r}\), respectively. Lemma 2.4(iii) shows that the diagonal entries corresponding to \(\{1,\ldots,n\}\setminus\operatorname{supp}(C)\) of elements of \(D_{(C)}\) can take any multiset of nonzero values. Since no element of \(D_{(C)}\) maps \(a_{r}\) to \(b_{r}\), it follows that there is no matrix in \(D_{(C)}\) whose restriction to \(L\) maps \(a_{\ell}\) to \(b_{\ell}\). By the minimality of \(n\), there exists a subset \(\Gamma_{C}\) of \(\Delta_{C}\) of size \(|\Delta_{C}|+2-r\) such that no element of \(\operatorname{GL}(L)_{(\Gamma_{C}\cup C)}\) maps \(a_{\ell}\) to \(b_{\ell}\). Setting \(\Gamma\) to be \(\Gamma_{C}\cup(\Delta\setminus\Delta_{C})\), so that \(|\Gamma|=n+2-r\), we observe that no element of \(G_{(\Gamma\cup C)}\) maps \(a_{r}\) to \(b_{r}\). This is a contradiction, and so the lemma follows in this case. #### Case \(|\operatorname{supp}(C)|=n\): In this case, Lemma 2.8 applies, so with the notation of that lemma, let \[\Gamma:=\Delta\setminus\{\langle e_{i_{1}}\rangle,\ldots,\langle e_{i_{r-2}} \rangle\}.\] Then \(|\Gamma|=n+2-r\) and \(\langle e_{1}\rangle,\langle e_{n}\rangle\in\Gamma\), since \(i_{1}\geq 2\) and \(i_{r-1}=n\). Let \(g\in G_{(\Gamma\cup C)}\). To complete the proof, we will show that \(a_{r}^{g}=a_{r}\neq b_{r}\), by showing that \(g|_{\operatorname{supp}(a_{r})}\) is scalar. We will first show that \(g\) is lower triangular. It is clear that \(g\) stabilises \(\langle e_{1}\rangle\in\Gamma\). 
Suppose inductively that \(g\) stabilises \(\langle e_{1},e_{2},\ldots,e_{s}\rangle\) for some \(s\in\{1,\ldots,n-1\}\). If \(\langle e_{s+1}\rangle\in\Gamma\), then \(g\) stabilises \(E_{s+1}:=\langle e_{1},e_{2},\ldots,e_{s}\rangle+\langle e_{s+1}\rangle= \langle e_{1},e_{2},\ldots,e_{s+1}\rangle\). Otherwise, \(s+1=i_{t}\) for some \(t\in\{1,\ldots,r-2\}\), and then Lemma 2.8(i) shows that \(\{s+1\}\subsetneq\operatorname{supp}(a_{t})\subseteq\{1,\ldots,s+1\}\). In this case, \(g\) again stabilises \(\langle e_{1},e_{2},\ldots,e_{s}\rangle+a_{t}=E_{s+1}\). Hence by induction, \(g\) is lower triangular. Now, let \(\mathcal{I}:=\{i_{1},\ldots,i_{r-1}\}\), let \(\mathcal{U}\) be the set of integers that each lie in the support of a unique subspace in \(C\), and let \(\mathcal{J}:=\mathcal{I}\cup\mathcal{U}\). We will show next that \(g|_{\mathcal{J}}\) is diagonal, by fixing \(j\in\mathcal{J}\) and proving that \(g_{kj}=0\) whenever \(k>j\). First, if \(\langle e_{k}\rangle\in\Gamma\), then \(g_{kj}=0\), so \(g_{nj}=0\). Hence we may also assume that \(k\in\mathcal{I}\setminus\{i_{r-1}\}\). Suppose inductively that \(g_{i_{u},j}=0\) for some \(u\geq 2\) (the base case here is \(u=r-1\), so that \(i_{u}=n\)). We will show that if \(i_{u-1}>j\), then \(g_{i_{u-1},j}=0\). By Lemma 2.8(iv), \(i_{u-1},i_{u}\in\operatorname{supp}(a_{u})\) and furthermore Lemma 2.8(i)-(ii) shows that \(\operatorname{supp}(a_{u})\cap\mathcal{I}=\{i_{u-1},i_{u}\}\). Thus by the previous paragraph and our inductive assumption, \(g_{kj}=0\) for all \(k\in\operatorname{supp}(a_{u})\setminus\{j,i_{u-1}\}\). In fact, Lemma 2.8(i)-(ii) shows that each integer in \(\operatorname{supp}(a_{u})\) less than \(i_{u-1}\) lies in \(\operatorname{supp}(a_{u-1})\). As \(i_{u-1}>j\in\mathcal{J}\), we deduce from the definition of \(\mathcal{J}\) that \(j\notin\operatorname{supp}(a_{u})\). Thus \(g_{kj}=0\) for all \(k\in\operatorname{supp}(a_{u})\setminus\{i_{u-1}\}\). As \(g\) stabilises \(a_{u}\), we deduce that \(g_{i_{u-1},j}=0\). Therefore, by induction, \(g_{kj}=0\) for all \(k\neq j\) and so \(g|_{\mathcal{J}}\) is diagonal. Finally, we will show that \(g|_{\mathcal{J}}\) is scalar. Let \(j,k\in\mathcal{J}\cap\operatorname{supp}(a_{t})\) for some \(t\in\{1,\ldots,r-1\}\). As \(g\) stabilises \(a_{t}\), and as \(g|_{\mathcal{J}}\) is diagonal, we deduce that \[g_{jj}=g_{kk}. \tag{5}\] Now, by Lemma 2.8(iv), \(i_{t}\in\operatorname{supp}(a_{t})\cap\operatorname{supp}(a_{t+1})\) for each \(t\in\{1,\ldots,r-2\}\), so \(i_{t}\in\mathcal{J}\cap\operatorname{supp}(a_{t})\cap\operatorname{supp}(a_{t +1})\). Thus starting from \(t=1\) and proceeding by induction on \(t\), it follows from (5) that \(g_{jj}=g_{kk}\) for all \(j,k\in\mathcal{J}\), i.e. \(g|_{\mathcal{J}}\) is a scalar. Since \(\operatorname{supp}(a_{r})\subseteq\mathcal{J}\) by Lemma 2.8(v), we deduce that \(a_{r}^{g}=a_{r}\neq b_{r}\), as required. The following lemma is strengthening of Lemma 2.9 in the case \(|\mathbb{F}|=3\) and \(r=2\), in which the subset \(\Gamma\) now has size \(n+1-r=n-1\). **Lemma 2.10**.: _Suppose that \(|\mathbb{F}|=3\), and let \(A,B\in\overline{\Delta}^{2}\). Suppose also that \(A\mathop{\sim}\limits_{D,1}B\) and \(B\notin A^{D}\). Then there exists a subset \(\Gamma\) of \(\Delta\) of size \(n-1\) such that \(B\notin A^{G_{(\Gamma)}}\)._ Proof.: Since \(A\mathop{\sim}\limits_{D,1}B\), without loss of generality \(a_{1}=b_{1}\), and there exists an element of \(D\) mapping \(a_{2}\) to \(b_{2}\). 
Hence \(a_{2}\) and \(b_{2}\) have equal supports. Reordering the basis for \(V\) if necessary, we may also assume that \(\operatorname{supp}(a_{1})=\{1,2,\ldots,m\}\) for some \(m\geq 2\). Then by Lemma 2.4, the upper-left \(m\times m\) submatrix of each matrix in \(D_{a_{1}}\) is a scalar, while the remaining diagonal entries can be chosen independently. As \(B\notin A^{D}\), no matrix in \(D_{a_{1}}\) maps \(a_{2}\) to \(b_{2}\). We may therefore assume (by reordering the basis vectors in \(\{e_{1},\ldots,e_{m}\}\) and/or swapping \(A\) and \(B\) if necessary) that the projections of \(a_{2}\) and \(b_{2}\) onto \(\langle e_{1},e_{2}\rangle\) are \(\langle e_{1}+e_{2}\rangle\) and \(\langle e_{1}-e_{2}\rangle\), respectively. Now, let \(\Gamma:=\Delta\setminus\{\langle e_{2}\rangle\}\), let \(g\in G_{(\Gamma\cup\{a_{1}\})}\), and notice that \(g\) is diagonal outside of the second row. Write \(a_{1}\) as \(\langle\sum_{i=1}^{m}\alpha_{i}e_{i}\rangle\), with \(\alpha_{1}=1\) and \(\alpha_{i}\neq 0\) for all \(i\in\{2,\ldots,m\}\). Since \(a_{1}^{g}=a_{1}\) we deduce that without loss of generality the top left \(2\times 2\) submatrix of \(g\) is \[\left(\begin{array}{cc}1&0\\ g_{21}&1+\alpha_{2}g_{21}\end{array}\right).\] Let \(v\) be the projection of \((e_{1}+e_{2})^{g}\) onto \(\langle e_{1},e_{2}\rangle\). Recall that \(\alpha_{2}\neq 0\), and note that \(g_{22}\neq 0\), since \(g\) is invertible. Hence if \(g_{21}=1\), then \(\alpha_{2}=1\) and \(v=-e_{1}-e_{2}\); if \(g_{21}=-1\), then \(\alpha_{2}=-1\) and \(v=-e_{2}\); and if \(g_{21}=0\), then \(v=e_{1}+e_{2}\). Hence, in each case, \(v\) does not span \(\langle e_{1}-e_{2}\rangle=b_{2}|_{\langle e_{1},e_{2}\rangle}\). Therefore \(a_{2}^{g}\neq b_{2}\), and hence \(B\not\in A^{G_{(\Gamma)}}\). Although the next result holds for all \(\mathbb{F}\), it will only be useful in the case \(|\mathbb{F}|=3\). **Proposition 2.11**.: _Let \(X,Y\in\Omega^{n+1}\) such that \(X\mathop{\sim}\limits_{G,n}Y\), and suppose that \(\langle X\rangle=V\). Then \(Y\in X^{G}\)._ Proof.: As \(\dim(\langle X\rangle)=n\), we may assume by Lemma 2.1 that \(x_{i}=y_{i}=\langle e_{i}\rangle\) for \(i\in\{1,\ldots,n\}\). Let \(S:=\operatorname{supp}(x_{n+1})\) and \(T:=\operatorname{supp}(y_{n+1})\). We will show that \(S=T\); it will then follow that there exists an element of \(D=G_{(\Delta)}\) mapping \(x_{n+1}\) to \(y_{n+1}\), and so \(Y\in X^{G}\). If \(S=\{1,\ldots,n\}=T\), then we are done. Otherwise, exchanging \(X\) and \(Y\) if necessary (note that \(\langle Y\rangle=V\)), we may assume that there exists an element \(t\in\{1,\ldots,n\}\setminus S\). Let \(\Gamma:=\Delta\setminus\{\langle e_{t}\rangle\}\). Then since \(X\mathop{\sim}\limits_{G,n}Y\), there exists an element of \(G_{(\Gamma)}\) mapping \(x_{n+1}\) to \(y_{n+1}\). As \(G_{(\Gamma)}\) stabilises each subspace \(\langle e_{i}\rangle\) with \(i\in S\), it follows that \(S=T\), as required. We are now able to prove this section's main theorem. **Theorem 2.12**.: _Suppose that \(n\) and \(|\mathbb{F}|\) are at least 3. Then \(\operatorname{RC}(\operatorname{GL}_{n}(\mathbb{F}),\Omega)\) is at most \(n+2\). Moreover, \(\operatorname{RC}(\operatorname{GL}_{n}(3),\Omega)\leq n\)._ Proof.: Let \(k\in\{n,n+1,n+2\}\), with \(k=n+2\) if \(|\mathbb{F}|>3\). Additionally, let \(X,Y\in\Omega^{u}\) for some \(u>k\), such that \(X\mathop{\sim}\limits_{G,k}Y\). It suffices to prove that \(Y\in X^{G}\). 
Suppose, for a contradiction, that \(n\) is the minimal dimension for which the theorem does not hold (for a fixed \(\mathbb{F}\)), and that \(Y\notin X^{G}\). Then for each \(m\in\{2,\ldots,n-1\}\), using Proposition 1.2(i) in the case \(m=2\), we obtain \(\operatorname{RC}(\operatorname{GL}_{m}(\mathbb{F}),\mathcal{PG}_{1}(\mathbb{F} ^{m}))<k\). Since \(Y\notin X^{G}\), Lemma 2.2 yields \(\langle X\rangle=V\). Hence by Lemma 2.1, we may assume without loss of generality1 that \(x_{i}=y_{i}=\langle e_{i}\rangle\) for \(i\in\{1,\ldots,n\}\), and furthermore that all subspaces in \(X\) are distinct, so that \(x_{i},y_{i}\in\overline{\Delta}\) for each \(i\geq n+1\). Footnote 1: If the basis vectors for \(V\) are reordered, as required by several of this section’s earlier proofs, then we can reorder the subspaces in \((x_{1},\ldots,x_{n})\) and \((y_{1},\ldots,y_{n})\) in the same way to preserve this equality. We will first consider the case \(k\geq n+1\). Since \(X\underset{G,n+1}{\sim}Y\), Lemma 2.3 yields \(\operatorname{supp}(x_{i})=\operatorname{supp}(y_{i})\) for all \(i\). However, \(Y\notin X^{G}\). Hence there exists an integer \(r\geq 2\) and subtuples \(A\) of \(X\) and \(B\) of \(Y\), with \(A,B\in\overline{\Delta}^{r}\), such that \((x_{1},\ldots,x_{n},a_{1},\ldots,a_{r})\) and \((x_{1},\ldots,x_{n},b_{1},\ldots,b_{r})\) are \((n+r-1)\)-equivalent, but not equivalent, under \(G\). Equivalently, \(A\underset{D,r-1}{\sim}B\) and \(B\notin A^{D}\). If \(k=n+2\), then by Lemma 2.9, there exists a set \(\Gamma:=\{\langle e_{i_{1}}\rangle,\ldots,\langle e_{i_{k-r}}\rangle\}\) such that \(B\notin A^{G_{(\Gamma)}}\). However, this means that the subtuples \((x_{i_{1}},\ldots,x_{i_{k-r}},a_{1},\ldots,a_{r})\) and \((x_{i_{1}},\ldots,x_{i_{k-r}},b_{1},\ldots,b_{r})\) are not equivalent under \(G\). This contradicts the assumption that \(X\underset{G,k}{\sim}Y\). Hence in this case, \(Y\in X^{G}\), as required, so \(\operatorname{RC}(G)\leq n+2\). If \(|\mathbb{F}|>3\), then we are done. Therefore, assume for the rest of the proof that \(|\mathbb{F}|=3\) and suppose first that \(k=n+1\). By the previous paragraph, \(\operatorname{RC}(G)\leq n+2\). Therefore, to prove that \(\operatorname{RC}(G)\leq k\), it suffices to show that \(X\underset{G,n+2}{\sim}Y\) whenever \(X\underset{G,k}{\sim}Y\). Thus by replacing \(X\) and \(Y\) by suitable subtuples, if necessary, we may assume that \(u=n+2\). In this case, \(r=2\), and by Lemma 2.10, there exists a subset \(\Gamma\) of \(\Delta\) of size \(k-r\) such that \(B\notin A^{G_{(\Gamma)}}\). Arguing as in the previous paragraph, this contradicts the assumption that \(X\underset{G,k}{\sim}Y\). Thus \(\operatorname{RC}(G)\leq n+1\). Finally, suppose that \(k=n\). Since \(\operatorname{RC}(G)\leq n+1\), we may assume that \(u=n+1\). However, since \(X\underset{G,n}{\sim}Y\) and \(\langle X\rangle=V\), Proposition 2.11 shows that \(Y\in X^{G}\). Therefore, \(\operatorname{RC}(G)\leq n\). ## 3. Action on \(1\)-spaces: lower bounds In this section, we again assume that \(|\mathbb{F}|\geq 3\), and write \(\Omega:=\Omega_{1}=\mathcal{P}\mathcal{G}_{1}(V)\). We drop the assumption that \(n\geq 3\) and permit \(n=2\). We shall now prove lower bounds for the relational complexity of each group \(H\) satisfying \(\operatorname{SL}_{n}(\mathbb{F})\trianglelefteq H\leq\Gamma\mathbb{L}_{n}( \mathbb{F})\), acting on \(\Omega\). 
For some results in this section, we will assume that \(\mathbb{F}=\mathbb{F}_{q}\) is finite, and when doing so we fix a primitive element \(\omega\), and assume that \(q=p^{f}\) for \(p\) prime. Additionally, we will write \(\operatorname{P\Gamma\mathbb{L}_{n}}(q)/\mathrm{PSL}_{n}(q)=\langle\delta,\phi\rangle\), with \(\operatorname{PGL}_{n}(q)/\mathrm{PSL}_{n}(q)=\langle\delta\rangle\). Here, the automorphism \(\phi\) can be chosen to be induced by the automorphism of \(\operatorname{GL}_{n}(q)\) which raises each matrix entry to its \(p\)th power, and with a slight abuse of notation, we will also write \(\phi\) to denote this automorphism of \(\operatorname{GL}_{n}(q)\), and to denote a generator for \(\operatorname{Aut}(\mathbb{F}_{q})\). If \(\mathbb{F}\) is an arbitrary field, then the group \(\Gamma\mathbb{L}_{n}(\mathbb{F})\) is still a semi-direct product of \(\operatorname{GL}_{n}(\mathbb{F})\) by \(\operatorname{Aut}(\mathbb{F})\) (see, for example, [13, Theorem 9.36]), but of course \(\operatorname{GL}_{n}(\mathbb{F})/\mathrm{SL}_{n}(\mathbb{F})\) and \(\operatorname{Aut}(\mathbb{F})\) need not be cyclic. We let \(Z:=Z(\operatorname{GL}_{n}(\mathbb{F}))\), and will write \(I_{n}\) for the \(n\times n\) identity matrix, and \(E_{ij}\) for the \(n\times n\) matrix with \(1\) in the \((i,j)\)-th position and \(0\) elsewhere. We write \(A\oplus B\) for the block diagonal matrix with blocks \(A\) and \(B\). Our first result is completely general and easy to prove, although we shall later prove much tighter bounds for various special cases. **Theorem 3.1**.: _Let \(\mathbb{F}\) be arbitrary, and let \(H\) satisfy \(\operatorname{SL}_{n}(\mathbb{F})\trianglelefteq H\leq\Gamma\mathbb{L}_{n}( \mathbb{F})\). Then \(\operatorname{RC}(H,\Omega)\geq n\)._ Proof.: Define \(X,Y\in\Omega^{n}\) by \(x_{i}=y_{i}=\langle e_{i}\rangle\) for \(i\in\{1,\ldots,n-1\}\), with \(x_{n}=\langle\sum_{i=1}^{n}e_{i}\rangle\) and \(y_{n}=\langle\sum_{i=1}^{n-1}e_{i}\rangle\). Then \(\dim(\langle X\rangle)=n\) and \(\dim(\langle Y\rangle)=n-1\), so no element of \(\Gamma\mathrm{L}_{n}(\mathbb{F})\) maps \(X\) to \(Y\). Hence \(Y\not\in X^{H}\). Now, let \(h_{\ell}:=I_{n}-E_{\ell n}\) for each \(\ell\in\{1,\ldots,n-1\}\), and \(h_{n}:=I_{n}\). Then \(h_{\ell}\in\mathrm{SL}_{n}(\mathbb{F})\leq H\) and \((X\setminus x_{\ell})^{h_{\ell}}=(Y\setminus y_{\ell})\), for each \(\ell\in\{1,\ldots,n\}\). Therefore \(X\underset{H,n-1}{\sim}Y\), and so the result follows. Our next two results focus on the special cases \(n=2\) and \(n=3\). **Lemma 3.2**.: _Assume that \(q\geq 8\), and let \(H\) satisfy \(\mathrm{SL}_{2}(q)\trianglelefteq H\leq\Gamma\mathrm{L}_{2}(q)\). Then \(\mathrm{RC}(H)\geq 4\), except that \(\mathrm{RC}(\Sigma\mathrm{L}_{2}(9))=3\)._ Proof.: The claim about \(\Sigma\mathrm{L}_{2}(9)\) is an easy computation in GAP using [3], so exclude this group from now on. We divide the proof into two cases. For each, we define \(X,Y\in\Omega^{4}\) such that \(X\underset{H,3}{\sim}Y\) but \(Y\not\in X^{H}\). In both cases, we set \((X\setminus x_{4})=(Y\setminus y_{4})=(\langle e_{1}\rangle,\langle e_{2} \rangle,\langle e_{1}+e_{2}\rangle)\). **Case (a)**: Either \(q\) is even, or \(H\not\leq\langle Z,\Sigma\mathrm{L}_{2}(q)\rangle\). If \(q\) is odd, then let \(\alpha\in\mathbb{F}_{p}^{*}\setminus\{1\}\), and otherwise let \(\alpha=\omega^{3}\), so that \(\alpha\) is not in the orbit \(\omega^{\langle\phi\rangle}\). 
Then let \(x_{4}=\langle e_{1}+\omega e_{2}\rangle\) and \(y_{4}=\langle e_{1}+\alpha e_{2}\rangle\). The stabiliser in \(H\) of \((X\setminus x_{4})=(Y\setminus y_{4})\) is contained in \(\langle Z,\phi\rangle\). As \(\alpha\not\in\omega^{\langle\phi\rangle}\), no element of this stabiliser maps \(x_{4}\) to \(y_{4}\), and so \(Y\not\in X^{H}\). On the other hand, for each \(j\in\{1,2,3,4\}\), the matrix \(g_{j}\in\mathrm{GL}_{2}(q)\) given below maps \((X\setminus x_{j})\) to \((Y\setminus y_{j})\). \[g_{1}=\begin{pmatrix}1&(\alpha-\omega)(1-\omega)^{-1}\\ 0&1-(\alpha-\omega)(1-\omega)^{-1}\end{pmatrix},\quad g_{2}=\begin{pmatrix}1-( \omega\alpha^{-1}-1)(\omega-1)^{-1}&0\\ (\omega\alpha^{-1}-1)(\omega-1)^{-1}&1\end{pmatrix},\] \[g_{3}=\begin{pmatrix}1&0\\ 0&\alpha\omega^{-1}\end{pmatrix},\quad g_{4}=I_{2}.\] If \(q\) is even, then some scalar multiple of \(g_{j}\) lies in \(H\) for all \(j\), so \(X\underset{H,3}{\sim}Y\) and we are done. If instead \(q\) is odd, then our assumption that \(H\not\leq\langle Z,\Sigma\mathrm{L}_{2}(q)\rangle\) implies that \(H\) contains a scalar multiple of an element \(\mathrm{diag}(\omega,1)\phi^{i}\) for some \(i\geq 0\), as \(\mathrm{diag}(\omega,1)\) induces the automorphism \(\delta\) of \(\mathrm{PSL}_{2}(q)\). Hence for each \(j\), there exists \(\phi^{ij}\in\mathrm{Aut}(\mathbb{F}_{q})\) such that a scalar multiple of \(g_{j}\phi^{ij}\) lies in \(H\). Since \(\alpha\in\mathbb{F}_{p}^{*}\), each \(\phi^{ij}\) fixes \(Y\), and thus \(X\underset{H,3}{\sim}Y\). **Case (b)**: \(q\) is odd and \(H\leq\langle Z,\Sigma\mathrm{L}_{2}(q)\rangle\). Since \(H\neq\Sigma\mathrm{L}_{2}(9)\), and since Proposition 1.2(i) yields the result when \(H=\mathrm{SL}_{2}(9)\), we may assume that \(q>9\). We generalise Hudson's [9, SS5.4] proof that \(\mathrm{RC}(\mathrm{SL}_{2}(q),\Omega)\geq 4\). First, let \(\mathcal{S}:=\mathbb{F}_{q}\setminus\{0,1,-1\}\) and \(\mathcal{T}:=\mathbb{F}_{q}\setminus\{0,1\}\), and for each \(\lambda\in\mathcal{S}\) define a map \(\theta_{\lambda}:\mathcal{T}\to\mathbb{F}_{q}\) by \(\mu\mapsto(1-\lambda^{2}\mu)(1-\mu)^{-1}\). We will show that there exist elements \(\lambda\in\mathcal{S}\) and \(\tau\in\mathcal{T}\) satisfying the following conditions: (i) \((\tau)\theta_{\lambda}\) is a square in \(\mathbb{F}_{q}^{*}\), and (ii) no automorphism of \(\mathbb{F}_{q}\) maps \(\tau\) to \(\lambda^{2}\tau\). It is easy to see that for each \(\lambda\in\mathcal{S}\), the image \(\mathrm{im}(\theta_{\lambda})=\mathbb{F}_{q}\setminus\{1,\lambda^{2}\}\), so the map \(\theta_{\lambda}\) is injective, and the preimage of any nonzero square in \(\mathrm{im}(\theta_{\lambda})\) lies in \(\mathcal{T}\) and satisfies Condition (i). Hence for each \(\lambda\in\mathcal{S}\), there are precisely \((q-1)/2-2\) choices for \(\tau\in\mathcal{T}\) satisfying Condition (i). Given \(\lambda\in\mathcal{S}\), since \(\lambda^{2}\neq 1\), Condition (ii) is equivalent to requiring that \(\lambda^{2}\tau\neq\tau^{p^{k}}\) for all \(k\in\{1,\ldots,f-1\}\), i.e. \(\lambda^{2}\neq\tau^{p^{k}-1}\) for all \(k\). There are exactly \((q-3)/2=(q-1)/2-1\) distinct squares of elements of \(\mathcal{S}\), and precisely \((q-1)/(p-1)\) elements in \(\mathbb{F}_{q}^{*}\) that are \((p-1)\)-th powers. Hence if \(p>3\), then there exists \(\lambda\in\mathcal{S}\) such that \(\lambda^{2}\) is not a \((p-1)\)-th power in \(\mathbb{F}_{q}\). 
Observe that then \(\lambda^{2}\) is not a \((p^{k}-1)\)-th power for any \(k\), and so this \(\lambda\) and any corresponding \(\tau\) from the previous paragraph satisfy both conditions. Suppose instead that \(p=3\), and fix \(\lambda\in\mathcal{S}\). The number of elements \(\tau\in\mathbb{F}_{3^{f}}^{*}\) not satisfying (ii), i.e. with \(\lambda^{2}=\tau^{3^{k}-1}\) for some \(k\in\{1,\ldots,f-1\}\), is at most \[(3-1)+(3^{2}-1)+\cdots+(3^{f-1}-1)=(3+3^{2}+\cdots+3^{f-1})-(f-1).\] On the other hand, we established that the number of elements \(\tau\in\mathcal{T}\) satisfying (i) is equal to \[(3^{f}-1)/2-2=(3-1)(1+3+3^{2}+\cdots+3^{f-1})/2-2=(3+3^{2}+\cdots+3^{f-1})-1.\] Since \(q>9\), and hence \(f>2\), there again exists \(\tau\in\mathcal{T}\) satisfying both conditions. Finally, fix such a \(\lambda\in\mathcal{S}\) and \(\tau\in\mathcal{T}\), and complete the definition of \(X,Y\in\Omega^{4}\) by setting \(x_{4}=\langle e_{1}+\tau e_{2}\rangle\) and \(y_{4}=\langle e_{1}+\lambda^{2}\tau e_{2}\rangle\). The stabiliser in \(H\) of \((X\setminus x_{4})=(Y\setminus y_{4})\) is contained in \(\langle Z,\phi\rangle\). By Condition (ii), no such element maps \(x_{4}\) to \(y_{4}\), so \(Y\notin X^{H}\). However, the proof of [9, Theorem 5.4.6] shows that \(X\underset{\mathrm{SL}_{2}(q),3}{\sim}Y\). Therefore, \(X\underset{H,3}{\sim}Y\), and the result follows. **Lemma 3.3**.: _Assume that \(\mathrm{PSL}_{3}(\mathbb{F})\neq\mathrm{PGL}_{3}(\mathbb{F})\), and let \(H\) satisfy \(\mathrm{SL}_{3}(\mathbb{F})\trianglelefteq H\leq\Gamma\mathrm{L}_{3}(\mathbb{ F})\). If \(\mathbb{F}\) is finite, or if \(H\leq\mathrm{GL}_{3}(\mathbb{F})\), then \(\mathrm{RC}(H)\geq 5\)._ Proof.: If \(|\mathbb{F}|=4\), then we verify the result in GAP using [3], so assume that \(|\mathbb{F}|\geq 7\). If \(\mathbb{F}\) is finite, then let \(\lambda:=\omega\), whilst if \(\mathbb{F}\) is infinite, then let \(\lambda\) be any element of \(\mathbb{F}^{*}\) of multiplicative order at least \(3\). Define \(X,Y\in\Omega^{5}\) by \(x_{i}=y_{i}=\langle e_{i}\rangle\) for \(i\in\{1,2,3\}\), \(x_{4}=y_{4}=\langle e_{1}+e_{2}+e_{3}\rangle\), \(x_{5}=\langle e_{1}+\lambda e_{2}+\lambda^{2}e_{3}\rangle\), and \(y_{5}=\langle e_{1}+\lambda^{-1}e_{2}+\lambda^{-2}e_{3}\rangle\), so that \(x_{5}\neq y_{5}\). We first show that \(Y\not\in X^{H}\). The stabiliser in \(H\) of \((X\setminus x_{5})=(Y\setminus y_{5})\) lies in \(H\cap\langle Z,\mathrm{Aut}(\mathbb{F})\rangle\), so if \(\mathbb{F}\) is infinite then we are done. Assume therefore that \(\mathbb{F}=\mathbb{F}_{q}\). If \(x_{5}^{\phi^{i}}=y_{5}\), then \(\lambda^{p^{i}}=\lambda^{-1}=\lambda^{p^{f}-1}\). Since \(i\in\{0,\ldots,f-1\}\) and \(\lambda=\omega\), we deduce that \((p,f,i)\in\{(2,2,1),(3,1,0)\}\), contradicting \(q\geq 7\). Thus \(Y\not\in X^{H}\). Next, for all \(\mathbb{F}\), we show that \(X\underset{H,4}{\sim}Y\). Let \[g_{1}:=\begin{pmatrix}\lambda&\lambda+1&\lambda+\lambda^{-1}\\ 0&-1&0\\ 0&0&-\lambda^{-1}\end{pmatrix},\quad g_{2}:=\begin{pmatrix}-\lambda&0&0\\ \lambda+1&1&1+\lambda^{-1}\\ 0&0&-\lambda^{-1}\end{pmatrix},\] \[g_{3}:=\begin{pmatrix}-\lambda&0&0\\ 0&-1&0\\ \lambda+\lambda^{-1}&1+\lambda^{-1}&\lambda^{-1}\end{pmatrix},\quad g_{4}:= \begin{pmatrix}\lambda^{2}&0&0\\ 0&1&0\\ 0&0&\lambda^{-2}\end{pmatrix},\ \ \text{and}\quad g_{5}:=I_{3}.\] Observe that \(\det(g_{\ell})=1\) for each \(\ell\in\{1,\ldots,5\}\), and so \(g_{\ell}\in\mathrm{SL}_{3}(\mathbb{F})\leq H\). 
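The determinant computations can be confirmed symbolically; the following minimal sympy sketch (illustrative only, not part of the proof) checks that each \(g_{\ell}\) has determinant \(1\), with the symbol `lam` standing for the nonzero element \(\lambda\).

```python
# Symbolic check that det(g_1) = ... = det(g_5) = 1 for the matrices above.
import sympy as sp

lam = sp.symbols('lam', nonzero=True)
g1 = sp.Matrix([[lam, lam + 1, lam + 1/lam], [0, -1, 0], [0, 0, -1/lam]])
g2 = sp.Matrix([[-lam, 0, 0], [lam + 1, 1, 1 + 1/lam], [0, 0, -1/lam]])
g3 = sp.Matrix([[-lam, 0, 0], [0, -1, 0], [lam + 1/lam, 1 + 1/lam, 1/lam]])
g4 = sp.diag(lam**2, 1, lam**(-2))
g5 = sp.eye(3)

assert all(sp.simplify(g.det()) == 1 for g in (g1, g2, g3, g4, g5))
```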
It is also easy to check that \((X\setminus x_{\ell})^{g_{\ell}}=(Y\setminus y_{\ell})\) for each \(\ell\). Thus \(X\underset{H,4}{\sim}Y\), and so \(\mathrm{RC}(H)\geq 5\). Our remaining results hold for all sufficiently large \(n\). The first is specific to \(\mathrm{GL}_{n}(\mathbb{F})\). **Proposition 3.4**.: _If \(n\geq 3\) and \(|\mathbb{F}|\geq 4\), then \(\operatorname{RC}(\operatorname{GL}_{n}(\mathbb{F}),\Omega)\geq n+2\)._ Proof.: Since \(|\mathbb{F}|\geq 4\), there exists an element \(\lambda\in\mathbb{F}^{*}\) such that \(\lambda\neq\lambda^{-1}\) (so \(\lambda\neq-1\)). Define \(X,Y\in\Omega^{n+2}\) by \(x_{i}=y_{i}=\langle e_{i}\rangle\) for \(i\in\{1,\ldots,n\}\), \(x_{n+1}=y_{n+1}=\langle\sum_{i=1}^{n}e_{i}\rangle\), \(x_{n+2}=\langle e_{1}+\lambda e_{2}\rangle\) and \(y_{n+2}=\langle e_{1}+\lambda^{-1}e_{2}\rangle\). The stabiliser in \(\operatorname{GL}_{n}(\mathbb{F})\) of \((X\backslash x_{n+2})=(Y\backslash y_{n+2})\) is the group of scalar matrices, so \(Y\not\in X^{\operatorname{GL}_{n}(\mathbb{F})}\). Additionally, it is easily verified that, for each \(j\in\{1,\ldots,n+2\}\), the matrix \(g_{j}\in\operatorname{GL}_{n}(q)\) given below maps \((X\setminus x_{j})\) to \((Y\setminus y_{j})\). \[g_{1}=\begin{pmatrix}\lambda&1+\lambda\\ 0&-1\end{pmatrix}\oplus\lambda I_{n-2},\quad g_{2}=\begin{pmatrix}-1&0\\ 1+\lambda^{-1}&\lambda^{-1}\end{pmatrix}\oplus\lambda^{-1}I_{n-2},\quad g_{n+ 1}=\operatorname{diag}(\lambda,\lambda^{-1},\lambda,\ldots,\lambda),\] \[g_{j}=g_{n+1}+(\lambda-\lambda^{-1})E_{j2}\ \ \text{for $j\in\{3,\ldots,n\}$},\quad g_{n+2}=I_{n}.\] Hence \(X\underset{\operatorname{GL}_{n}(\mathbb{F}),n+1}{\sim}Y\), and so the result follows. In the light of Proposition 3.4, the next result in particular bounds the relational complexity of all remaining groups when \(\operatorname{PSL}_{n}(\mathbb{F})=\operatorname{PGL}_{n}(\mathbb{F})\). **Lemma 3.5**.: _Let \(\mathbb{F}\) be arbitrary, assume that \(n\geq 3\), and let \(H\) satisfy \(\operatorname{GL}_{n}(\mathbb{F})\trianglelefteq H\leq\Gamma\mathrm{L}_{n}( \mathbb{F})\) and \(H\neq\operatorname{GL}_{n}(\mathbb{F})\). Then \(\operatorname{RC}(H)\geq n+3\)._ Proof.: Since \(\operatorname{GL}_{n}(\mathbb{F})\) is a proper subgroup of \(H\), there exists a nontrivial \(\psi\in H\cap\operatorname{Aut}(\mathbb{F})\) and an element \(\lambda\in\mathbb{F}^{*}\) with \(\lambda^{\psi}\neq\lambda\). We define \(X,Y\in\Omega^{n+3}\) by \(x_{i}=y_{i}=\langle e_{i}\rangle\) for \(i\in\{1,\ldots,n\}\), \[x_{n+1}=y_{n+1}=\Big{\langle}\sum_{i=1}^{n}e_{i}\Big{\rangle},\ x_{n+2}=y_{n+2}= \langle e_{1}+e_{2}+\lambda e_{3}\rangle,\ x_{n+3}=\langle e_{1}+\lambda e_{2} \rangle,\ y_{n+3}=\langle e_{1}+\lambda^{\psi}e_{2}\rangle.\] We claim that \(X\underset{H,n+2}{\sim}Y\), but \(Y\not\in X^{H}\), from which the result will follow. The stabiliser in \(H\) of \((x_{1},\ldots,x_{n+1})=(y_{1},\ldots,y_{n+1})\) is contained in \(\langle Z,\operatorname{Aut}(\mathbb{F})\rangle\). However, no element of \(\langle Z,\operatorname{Aut}(\mathbb{F})\rangle\) maps \((x_{n+2},x_{n+3})\) to \((y_{n+2},y_{n+3})\), so \(Y\not\in X^{H}\). The reader may verify that, for each \(j\in\{1,\ldots,n+3\}\), the element \(h_{j}\in\langle\operatorname{GL}_{n}(\mathbb{F}),\psi\rangle\leq H\) given below maps \((X\setminus x_{j})\) to \((Y\setminus y_{j})\), where we define \(\tau:=(\lambda-1)^{-1}\) (notice that \(\lambda\neq 1\)). 
\[h_{1}=\begin{pmatrix}1&-\tau(\lambda^{\psi}-\lambda)\\ 0&1+\tau(\lambda^{\psi}-\lambda)\end{pmatrix}\oplus I_{n-2},\quad h_{2}= \begin{pmatrix}1-\tau(\lambda(\lambda^{-1})^{\psi}-1)&0\\ \tau(\lambda(\lambda^{-1})^{\psi}-1)&1\end{pmatrix}\oplus I_{n-2},\] \[h_{3}=\left(\begin{pmatrix}1-\tau(\lambda(\lambda^{-1})^{\psi^{ -1}}-1)&0&0\\ 0&1-\tau(\lambda(\lambda^{-1})^{\psi^{-1}}-1)&0\\ \tau(\lambda(\lambda^{-1})^{\psi^{-1}}-1)&\tau(\lambda(\lambda^{-1})^{\psi^{ -1}}-1)&1\end{pmatrix}\oplus I_{n-3}\right)\psi,\] \[h_{j}=\begin{pmatrix}\operatorname{diag}(1,1,\lambda^{-1}\lambda^ {\psi^{-1}},1,\ldots,1)+(1-\lambda^{-1}\lambda^{\psi^{-1}})E_{j3}\end{pmatrix}\psi \ \ \text{for $j\in\{4,\ldots,n\}$},\] \[h_{n+1}=\operatorname{diag}(1,1,\lambda^{-1}\lambda^{\psi^{-1}},1, \ldots,1)\psi,\quad h_{n+2}=\psi,\quad h_{n+3}=I_{n}.\] Hence \(X\underset{H,n+2}{\sim}Y\), and the result follows. **Lemma 3.6**.: _Let \(\mathbb{F}\) be arbitrary, assume that \(n\geq 4\), and let \(H\) satisfy \(\operatorname{SL}_{n}(\mathbb{F})\trianglelefteq H\leq\Gamma\mathrm{L}_{n}( \mathbb{F})\) and \(H\not\leq\operatorname{GL}_{n}(\mathbb{F})\). Then \(\operatorname{RC}(H)\geq n+2\)._ Proof.: Since \(H\not\leq\operatorname{GL}_{n}(\mathbb{F})\), there exist elements \(h\psi\in H\) and \(\lambda\in\mathbb{F}^{*}\) such that \(h\in\operatorname{GL}_{n}(q)\), \(\psi\in\operatorname{Aut}(\mathbb{F}_{q})\), and \(\lambda^{\psi}\neq\lambda\). Let \(X,Y\in\Omega^{n+2}\) be as in the proof of Lemma 3.5, but supported only on the first \(n-1\) basis vectors, so that \(\langle e_{n}\rangle\) lies in neither \(X\) nor \(Y\), and \(x_{n}=y_{n}=\langle\sum_{i=1}^{n-1}e_{i}\rangle\). Just as in that proof, one may check that \(Y\not\in X^{H}\), but \(X\underset{H,n+1}{\sim}Y\). The next result applies, in particular, to all groups \(H\) such that \(\operatorname{SL}_{n}(\mathbb{F})\trianglelefteq H\) and either \(H<\operatorname{GL}_{n}(\mathbb{F})\) or \(H\leq\Sigma\mathrm{L}_{n}(\mathbb{F})\neq\Gamma\mathrm{L}_{n}(\mathbb{F})\). We write \(\mathbb{F}^{\times n}\) for the subgroup of \(\mathbb{F}^{*}\) consisting of \(n\)-th powers, which is the set of possible determinants of scalar matrices in \(\operatorname{GL}_{n}(\mathbb{F})\). **Proposition 3.7**.: _Assume that \(n\geq 4\) and \(|\mathbb{F}|\geq 3\), and let \(H\) satisfy \(\operatorname{SL}_{n}(\mathbb{F})\trianglelefteq H\leq\Gamma\mathrm{L}_{n}( \mathbb{F})\). Assume also that the set \(\{\det(g)^{\psi}\mathbb{F}^{\times n}\mid g\psi\in H\text{ with }g\in \operatorname{GL}_{n}(\mathbb{F}),\psi\in\operatorname{Aut}(\mathbb{F})\}\) is a proper subset of \(\mathbb{F}^{*}/\mathbb{F}^{\times n}\). Then \(\operatorname{RC}(H)\geq 2n-2\)._ Proof.: By assumption, there exists an \(\alpha\in\mathbb{F}^{*}\) such that \(\alpha\neq\det(gz)^{\psi}\) for all \(g\psi\in H\) and \(z\in Z\). Define \(X,Y\in\Omega^{2n-2}\) as follows: \[X:=\big{(}\langle e_{2}\rangle,\ldots,\langle e_{n}\rangle, \langle e_{1}+e_{2}\rangle,\ldots,\langle e_{1}+e_{n}\rangle\big{)};\text{ and}\] \[Y:=\big{(}\langle e_{2}\rangle,\ldots,\langle e_{n}\rangle, \langle\alpha e_{1}+e_{2}\rangle,\ldots,\langle\alpha e_{1}+e_{n}\rangle\big{)}.\] We show first that \(Y\not\in X^{H}\), so suppose for a contradiction that there exists \(g\psi\in H\), with \(g\in\operatorname{GL}_{n}(\mathbb{F})\) and \(\psi\in\operatorname{Aut}(\mathbb{F})\), such that \(X^{g\psi}=Y\). 
As \(g\psi\) fixes \(\langle e_{2}\rangle\) and \(\langle e_{3}\rangle\), and maps \(\langle e_{1}+e_{2}\rangle\) and \(\langle e_{1}+e_{3}\rangle\) to \(\langle\alpha e_{1}+e_{2}\rangle\) and \(\langle\alpha e_{1}+e_{3}\rangle\), respectively, we deduce that \(e_{1}^{g\psi}\in\langle e_{1},e_{2}\rangle\cap\langle e_{1},e_{3}\rangle= \langle e_{1}\rangle\). Therefore \(\langle e_{i}\rangle^{g\psi}=\langle e_{i}\rangle\) for each \(i\in\{1,\ldots,n\}\), and so \(g\) is diagonal. Let \(\mu:=\alpha^{\psi^{-1}}\). As \(\langle e_{1}+e_{i}\rangle^{g\psi}=\langle\alpha e_{1}+e_{i}\rangle\) for each \(i\in\{2,\ldots,n\}\), we deduce that \(g=\operatorname{diag}(\mu,1,\ldots,1)z\) for some \(z\in Z\). Hence \((\det(gz^{-1}))^{\psi}=\mu^{\psi}=\alpha\), a contradiction. Hence \(Y\not\in X^{H}\). Now, for each \(i\in\{2,\ldots,n\}\), let \(h_{i}:=\operatorname{diag}(\alpha,1,\ldots,1,\alpha^{-1},1,\ldots,1)\), where the \(\alpha^{-1}\) appears in entry \(i\). First, for \(j\in\{1,\ldots,n-1\}\), let \(k:=j+1\) so that \(x_{j}=y_{j}=\langle e_{k}\rangle\). It is easy to verify that \(h_{k}+(1-\alpha)E_{k1}\) has determinant \(1\) and maps \((X\setminus x_{j})\) to \((Y\setminus y_{j})\). Finally, for \(j\in\{n,\ldots,2n-2\}\), let \(k:=j+2-n\), so that \(x_{j}=\langle e_{1}+e_{k}\rangle\) and \(y_{j}=\langle\alpha e_{1}+e_{k}\rangle\). Then \(h_{k}\) has determinant \(1\) and maps \((X\setminus x_{j})\) to \((Y\setminus y_{j})\). Therefore, \(X\underset{H,2n-3}{\sim}Y\), and so \(\operatorname{RC}(H)\geq 2n-2\). Proof of Theorem a.: When \(|\mathbb{F}|=2\), this result is clear from Theorem 1.1. For the remaining fields \(\mathbb{F}\), the fact that Part (i) gives an upper bound on \(\operatorname{RC}(\operatorname{PGL}_{n}(\mathbb{F}))\) is proved in Theorem 2.12, whilst we prove that it gives a lower bound in Theorem 3.1 for \(|\mathbb{F}|=3\) and Proposition 3.4 for \(|\mathbb{F}|\geq 4\). That Part (ii) gives upper bounds on \(\operatorname{RC}(\overline{H})\) is immediate from Theorem 1.2(ii) for \(n=3\), and from Theorem 2.7 for \(n\geq 4\). Lemma 3.3 and Proposition 3.7 show that these are also lower bounds. Recall that \(\omega(k)\) denotes the number of distinct prime divisors of the positive integer \(k\). **Lemma 3.8** ([8, Lemma 3.1]).: _Let \(K\leq\operatorname{Sym}(\Gamma)\) be a finite group with normal subgroup \(N\) such that \(K/N\) is cyclic. Then \(\operatorname{H}(K,\Gamma)\leq\operatorname{H}(N,\Gamma)+\omega(|K/N|)\)._ Proof of Theorem b.: For the upper bound in Part (i), we combine Proposition 1.2(i) with Lemma 3.8 to deduce that \(\operatorname{H}(\overline{H},\Omega_{1})=3+\omega(e)\), so \(\operatorname{RC}(\overline{H},\Omega_{1})\leq 4+\omega(e)\). The lower bound (and the case \(\overline{H}=\operatorname{P}\Sigma\mathrm{L}_{2}(9)\)) is Lemma 3.2. For the upper bound in Part (ii), we similarly combine Proposition 1.2(ii) with Lemma 3.8. As for the lower bound, first let \(n=3\), and notice that in this case \(2n-2=4<n+2=5\). If \(\overline{H}\) properly contains \(\mathrm{PGL}_{3}(q)\), then the lower bound of \(6\) is proved in Lemma 3.5. Otherwise, \(\mathrm{PSL}_{3}(q)\neq\mathrm{PGL}_{3}(q)\), and so the lower bound of \(5\) follows from Lemma 3.3. Now assume that \(n\geq 4\). The general lower bound is Lemma 3.6, the bound of \(n+3\) for groups properly containing \(\mathrm{PGL}_{n}(q)\) is Lemma 3.5, and the bound of \(2n-2\) is Proposition 3.7. ## 4. 
Action on \(m\)-spaces for \(m\geq 2\) In this section, we consider the action of \(H\) on \(\Omega_{m}=\mathcal{PG}_{m}(V)\), where \(\mathrm{SL}_{n}(\mathbb{F})\trianglelefteq H\leq\Gamma\mathrm{L}_{n}( \mathbb{F})\), as before, but now \(2\leq m\leq\frac{n}{2}\). The main work is to prove a lower bound on \(\mathrm{RC}(H,\Omega_{m})\), as the upper bound follows from existing literature. **Proposition 4.1**.: _Let \(\mathbb{F}\) be arbitrary, let \(n\geq 2m\geq 4\), and let \(H\) satisfy \(\mathrm{SL}_{n}(\mathbb{F})\trianglelefteq H\leq\Gamma\mathrm{L}_{n}( \mathbb{F})\). Then \(\mathrm{RC}(H,\Omega_{m})\geq mn-m^{2}+1\)._ Proof.: For each \(i\in\{1,\ldots,m\}\) and \(j\in\{m+1,\ldots,n-1\}\), let \(B_{i}:=\{e_{1},e_{2},\ldots,e_{m}\}\setminus\{e_{i}\}\), \(U_{ij}:=\langle B_{i},e_{j}\rangle=\langle e_{1},\ldots,e_{i-1},e_{i+1},\ldots,e_{m},e_{j}\rangle\), \(V_{i}:=\langle B_{i},e_{i}+e_{n}\rangle\), and \(W_{i}:=\langle B_{i},e_{n}\rangle\), so that \(U_{ij},V_{i},W_{i}\in\Omega_{m}\). Define \(X,Y\in\Omega_{m}^{mn-m^{2}+1}\) as follows: \[x_{mn-m^{2}+1}:=\langle e_{1}+e_{2},\ldots,e_{1}+e_{m},\sum_{i=1 }^{n}e_{i}\rangle;\] \[y_{mn-m^{2}+1}:=\langle e_{1}+e_{2},\ldots,e_{1}+e_{m},-e_{1}+ \sum_{i=m+1}^{n}e_{i}\rangle;\] \[X:=\Big{(}U_{1(m+1)},U_{1(m+2)},\ldots,U_{m(n-1)},V_{1},V_{2}, \ldots,V_{m},x_{mn-m^{2}+1}\Big{)};\text{ and }\] \[Y:=\Big{(}U_{1(m+1)},U_{1(m+2)},\ldots,U_{m(n-1)},W_{1},W_{2}, \ldots,W_{m},y_{mn-m^{2}+1}\Big{)}.\] We shall first show that \(Y\not\in X^{\Gamma\mathrm{L}_{n}(\mathbb{F})}\), so in particular \(Y\not\in X^{H}\), and then that \(X\underset{H,mn-m^{2}}{\sim}Y\). Assume for a contradiction that \(Y\in X^{\Gamma\mathrm{L}_{n}(\mathbb{F})}\). Since each subspace in \(Y\) is spanned by vectors of the form \(\sum_{i=1}^{n}\lambda_{i}e_{i}\) with \(\lambda_{i}\in\{-1,0,1\}\), it follows that there exists \(g\in\mathrm{GL}_{n}(\mathbb{F})\) with \(X^{g}=Y\). For each \(i\in\{1,\ldots,m\}\), choose \(k\in\{1,\ldots,m\}\setminus\{i\}\). Then \[\langle e_{i}\rangle=\bigcap_{\ell\in\{1,\ldots,m\}\setminus\{i\}}U_{\ell(m+ 1)}\cap V_{k}=\bigcap_{\ell\in\{1,\ldots,m\}\setminus\{i\}}U_{\ell(m+1)}\cap W _{k},\] so \(g\) fixes \(\langle e_{i}\rangle\). Similarly, \(g\) fixes \(\langle e_{j}\rangle=\bigcap\limits_{i=1}^{m}U_{ij}\) for each \(j\in\{m+1,\ldots,n-1\}\). Therefore, there exist \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{F}^{*}\) and \(\mu_{1},\ldots,\mu_{n-1}\in\mathbb{F}\) such that \(g\) maps \(e_{i}\) to \(\lambda_{i}e_{i}\) for all \(i\in\{1,\ldots,n-1\}\), and maps \(e_{n}\) to \(\lambda_{n}e_{n}+\sum_{i=1}^{n-1}\mu_{i}e_{i}\). It now follows that for each \(i\in\{2,\ldots,m\}\), the element \(g\) maps \(e_{1}+e_{i}\in x_{mn-m^{2}+1}\) to \(\lambda_{1}e_{1}+\lambda_{i}e_{i}\), which must lie in \(y_{mn-m^{2}+1}\), and hence \(\lambda_{i}=\lambda_{1}\). Similarly, \(V_{i}^{g}=W_{i}\) for each \(i\in\{1,\ldots,m\}\), and so \(W_{i}=\langle B_{i},e_{n}\rangle\) contains \[(e_{i}+e_{n})^{g}=\lambda_{1}e_{i}+\lambda_{n}e_{n}+\sum_{k=1}^{n-1}\mu_{k}e_{k}.\] Hence \(\mu_{i}=-\lambda_{1}\), and \(\mu_{j}=0\) for all \(j\in\{m+1,\dots,n-1\}\). It now follows that \(g\) maps \(\sum_{i=1}^{n}e_{i}\in x_{mn-m^{2}+1}\) to \(\sum_{i=m+1}^{n}\lambda_{i}e_{i}\), which is clearly not in \(y_{mn-m^{2}+1}\), a contradiction. Thus \(Y\not\in X^{H}\). 
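Both halves of this argument can be confirmed by exhaustive search in the smallest case \(n=4\), \(m=2\) over \(\mathbb{F}_{2}\) (where \(H\) may be taken to be \(\mathrm{GL}_{4}(2)=\mathrm{SL}_{4}(2)=\Gamma\mathrm{L}_{4}(2)\)): no invertible matrix maps \(X\) to \(Y\), yet for every omitted index some matrix maps the remaining subtuples to each other, as the second half of the proof below establishes in general. The following Python sketch is purely illustrative and not part of the proof.

```python
# Brute-force check of Proposition 4.1 in the smallest case n = 4, m = 2, F = F_2.
import itertools
import numpy as np

n = 4
e = np.eye(n, dtype=int)

def span(gens):
    """All vectors of the F_2-span of gens, as a frozenset of coordinate tuples."""
    return frozenset(
        tuple(sum(c * g for c, g in zip(coeffs, gens)) % 2)
        for coeffs in itertools.product([0, 1], repeat=len(gens)))

# X = (U_13, U_23, V_1, V_2, x_5) and Y = (U_13, U_23, W_1, W_2, y_5).
X = [span([e[1], e[2]]), span([e[0], e[2]]),
     span([e[1], e[0] + e[3]]), span([e[0], e[1] + e[3]]),
     span([e[0] + e[1], e[0] + e[1] + e[2] + e[3]])]
Y = [span([e[1], e[2]]), span([e[0], e[2]]),
     span([e[1], e[3]]), span([e[0], e[3]]),
     span([e[0] + e[1], e[0] + e[2] + e[3]])]   # -e_1 = e_1 over F_2

def image(U, g):
    """Image of the subspace U (a frozenset of vectors) under the matrix g."""
    return frozenset(tuple(np.dot(v, g) % 2) for v in U)

GL = []
for rows in itertools.product(itertools.product([0, 1], repeat=n), repeat=n):
    g = np.array(rows)
    if round(np.linalg.det(g)) % 2 == 1:   # invertible over F_2
        GL.append(g)

# Y is not in the orbit of X ...
assert not any(all(image(U, g) == W for U, W in zip(X, Y)) for g in GL)
# ... but X and Y are (mn - m^2) = 4-equivalent.
for ell in range(5):
    assert any(all(image(U, g) == W
                   for U, W in zip(X[:ell] + X[ell+1:], Y[:ell] + Y[ell+1:]))
               for g in GL)
```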
We now show that \(X\underset{H,mn-m^{2}}{\sim}Y\), by identifying an element \(g_{\ell}\in\operatorname{SL}_{n}(\mathbb{F})\leq H\) that maps \((X\setminus x_{\ell})\) to \((Y\setminus y_{\ell})\), for each \(\ell\in\{1,\dots,mn-m^{2}+1\}\). We divide the proof into three cases, which together account for all values of \(\ell\). To simplify our expressions, let \(z:=e_{1}+e_{2}+\dots+e_{m}\), \(\alpha_{1}:=-1\), and \(\alpha_{r}:=1\) for all \(r\in\{2,\dots,m\}\). In each case the element \(g_{\ell}\) will be lower unitriangular, and so will have determinant 1. **Case (a)**: \(\ell\in\{1,\dots,m(n-m-1)\}\). Let \(r\in\{1,\dots,m\}\) and \(s\in\{m+1,\dots,n-1\}\) be such that \(\ell=(n-m-1)(r-1)+(s-m)\), so that \(x_{\ell}=y_{\ell}=U_{rs}\). Additionally, let \(g_{\ell}\) fix \(e_{i}\) for all \(i\notin\{s,n\}\), map \(e_{s}\) to \(e_{s}+\alpha_{r}e_{r}\), and map \(e_{n}\) to \(e_{n}-z\). Then \(g_{\ell}\) fixes \(U_{ij}\) provided \((i,j)\neq(r,s)\), and maps \(e_{i}+e_{n}\in V_{i}\) to \(e_{i}+e_{n}-z\in W_{i}\), and hence \(V_{i}\) to \(W_{i}\), for all \(i\in\{1,\dots m\}\). Finally, \[\Big{(}\sum_{i=1}^{n}e_{i}\Big{)}^{g_{\ell}}=\alpha_{r}e_{r}+\sum_{i=m+1}^{n}e _{i}\in y_{mn-m^{2}+1},\] where we have used the fact that \(e_{r}+\sum_{i=m+1}^{n}e_{i}=(e_{1}+e_{r})+(-e_{1}+\sum_{i=m+1}^{n}e_{i})\) when \(r>1\). Hence \(g_{\ell}\) maps \(x_{mn-m^{2}+1}\) to \(y_{mn-m^{2}+1}\), as required. **Case (b)**: \(\ell=m(n-m-1)+r\), where \(r\in\{1,\dots,m\}\). Here, \(x_{\ell}=V_{r}\) and \(y_{\ell}=W_{r}\). Let \(g_{\ell}\) fix \(e_{i}\) for each \(i\in\{1,\dots,n-1\}\) and map \(e_{n}\) to \(\alpha_{r}e_{r}+e_{n}-z\). Then \(g_{\ell}\) fixes \(U_{ij}\) for all \(i\) and \(j\), and maps \(e_{i}+e_{n}\in V_{i}\) to \(e_{i}+\alpha_{r}e_{r}+e_{n}-z\in W_{i}\), and hence \(V_{i}\) to \(W_{i}\), for all \(i\in\{1,\dots,m\}\setminus\{r\}\). Finally, \[\Big{(}\sum_{i=1}^{n}e_{i}\Big{)}^{g_{\ell}}=\alpha_{r}e_{r}+\sum_{i=m+1}^{n}e _{i}\in y_{mn-m^{2}+1},\] as in Case (a). **Case (c)**: \(\ell=mn-m^{2}+1\). Let \(g_{\ell}\) fix \(e_{i}\) for each \(i\in\{1,\dots,n-1\}\), and map \(e_{n}\) to \(e_{n}-z\). Then \(g\) fixes \(U_{ij}\) for all \(i\) and \(j\), and maps \(e_{i}+e_{n}\in V_{i}\) to \(e_{i}+e_{n}-z\in W_{i}\) for all \(i\), as required. The _irredundant base size_\(\operatorname{I}(K,\Gamma)\) of a group \(K\) acting faithfully on a set \(\Gamma\) is the largest size of a tuple \((\alpha_{1},\dots,\alpha_{k})\) of elements of \(\Gamma\) such that \(K>K_{\alpha_{1}}>K_{(\alpha_{1},\alpha_{2})}>\dots>K_{(\alpha_{1},\dots,\alpha_ {k})}=1\), with all inclusions strict. It is clear that \(\operatorname{I}(K,\Gamma)\) is bounded below by the height \(\operatorname{H}(K,\Gamma)\), which we recall (from SS1) is bounded below by \(\operatorname{RC}(K,\Gamma)-1\). Proof of Theorem C.: In [10, Thm 3.1], it is proved that \(\operatorname{I}(\operatorname{PGL}_{n}(\mathbb{F}),\Omega_{m})\leq(m+1)n-2m+1\). Since the irredundant base size of a subgroup is at most the irredundant base size of an overgroup, and the height is at most the irredundant base size, we deduce that \(\operatorname{H}(\overline{H},\Omega_{m})\leq(m+1)n-2m+1\) for all \(\overline{H}\leq\operatorname{PGL}_{n}(\mathbb{F})\). From Lemma 3.8, we then see that for all \(\overline{H}\) as in the statement, \(\operatorname{H}(\overline{H},\Omega_{m})\leq(m+1)n-2m+1+\omega(e)\), and hence the upper bound follows. The lower bound is immediate from Proposition 4.1, so the proof is complete. 
**Acknowledgments** The authors would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme "Groups, representations and applications: new perspectives", when work on this paper was undertaken. This work was supported by EPSRC grant no. EP/R014604/1, and also partially supported by a grant from the Simons Foundation. The first author was supported by the University of St Andrews (St Leonard's International Doctoral Fees Scholarship & School of Mathematics and Statistics PhD Funding Scholarship), and by EPSRC grant no. EP/W522422/1. The second author is funded by the Heilbronn Institute. In order to meet institutional and research funder open access requirements, any accepted manuscript arising shall be open access under a Creative Commons Attribution (CC BY) reuse licence with zero embargo. **Competing interests** The authors declare none.
2302.14474
Completions and Terminal Monads
We consider the terminal monad among those preserving the objects of a subcategory, and in particular preserving the image of a monad. Several common monads are shown to be uniquely characterized by the property of being terminal objects in the category of co-augmented endo-functors. Once extended to infinity categories, this gives, for example, a complete characterization of the well-known Bousfield-Kan R-homology completion. In addition, we note that an idempotent pro-completion tower can be associated with any co-augmented endo-functor M, whose limit is the terminal monad that preserves the closure of ImM, the image of M, under finite limits. We conclude that some basic properties of the homological completion tower of a space can be formulated and proved for general monads over any category with limits, and characterized as universal.
Emmanuel Dror Farjoun, Sergei O. Ivanov
2023-02-28T10:33:34Z
http://arxiv.org/abs/2302.14474v2
# Completions and terminal monads ###### Abstract. We consider _the terminal monad_ among those preserving the objects of a subcategory \(\mathcal{D}\subseteq\mathcal{C},\) and in particular preserving the image of a monad over the category \(C.\) Several common monads \(\mathcal{C}\to\mathcal{C}\) are shown to be uniquely characterized by the property of being terminal objects in the category of co-augmented endo-functors. Once extended to infinity categories, this gives, for example, a complete characterization of the well-known Bousfield-Kan \(R\)-homology completion \(R_{\infty}.\) In addition, we note that an idempotent pro-completion tower \(M_{\bullet}\) can be associated with any co-augmented endo functor \(M,\) whose limit \(M_{\infty}\) is the terminal monad that preserves the closure of \(ImM,\) the image of \(M,\) under finite limits. We conclude that some basic properties of the homological completion tower \(R_{\bullet}X\) of a space can be formulated and proved for general monads over any category with limits, and characterized as universal. The second named author is supported by BIMSA ## 1. Introduction and main results Many well-known and extremely useful constructions, mostly known as "completions", such as the profinite completion \(G\to\widehat{G}\) or the Bousfield-Kan homology completion \(X\to R_{\infty}X,\) are usually constructed directly, without specifying what universal property, if any, determines them up to equivalence. Here, using a notion that we call "terminal monad," many of these are shown to be completely determined by the property of being _terminal objects_ in an appropriate category of co-augmented functors \(X\to F(X)\) over the given underlying category \(\mathcal{C}.\) Our first observation is that the category \[Fun(\mathcal{C},\mathcal{C})_{Id\text{--}}\] of co-augmented functors over a category \(\mathcal{C}\) with limits, is itself closed under limits. Moreover, consider any collection \(D\) of objects in \(\mathcal{C};\) and denote by \(\mathcal{F}ix_{D}(\mathcal{C}),\) the subcategory of the above functor category, consisting of functors that preserve each object of \(D,\) namely with \(d\to F(d)\) an equivalence for every \(d\in D.\) Then this category, \(\mathcal{F}ix_{D}(\mathcal{C}),\) of co-augmented functors, is also closed under limits. In particular, it has a terminal object \(Id\to M_{D},\) which is easily shown to be a monad over \(\mathcal{C}.\) For a given small full subcategory \(\mathcal{D}\subseteq\mathcal{C},\) a _construction_ of the terminal monad, associated with the set of its objects, can be done by re-considering the well-known co-density monad, denoted here by \(T_{\mathcal{D}}:\mathcal{C}\to\mathcal{C},\) associated with a full subcategory \(\mathcal{D}\subseteq\mathcal{C},\) of a category \(\mathcal{C}\) which is always assumed to be closed under limits. Hence, as above, this functor \(T_{\mathcal{D}}\) can be characterized as _the terminal monad_ on \(\mathcal{C},\) among all co-augmented functors \(Id\to F\) on \(\mathcal{C},\) that "preserves the objects of \(\mathcal{D},\)" i.e. with \(d\cong F(d)\) for all \(d\in\mathcal{D}.\) This \(d\) is "fixpoint" in the terminology of Adamek, [2] definition 2.5, see also [16]. It turns out that a terminal monad can be associated with more general, not necessarily fully faithful, functors. 
In particular, we consider terminal monads associated with a given monad, or more generally with co-augmented endo-functors \(M:\mathcal{C}\to\mathcal{C}.\) Notice, as in the references above, that the category of monads over \(\mathcal{C}\) is also closed under all limits (--but, in general, not under colimits.) This allows one to characterize, by a universal property, common constructions, such as "completions," as _terminal monads_ with respect to an appropriate subcategory. There is an infinity-categorical extension of this observation [9]. It leads, for example, to a seemingly new characterization by a universal property of the well-known Bousfield-Kan homological completion \(R_{\infty}\) as an infinity monad on topological spaces or a simplicial sets \(X.\) Compare [6]: The completion \(R_{\infty}\) is shown to be _the terminal \(\infty-\)monad associated with the monad \(X\to R(X)\)_, among all co-augmented functors that preserve, up to homotopy, the essential image of \(R\), the free \(R-\) module functor. Similar characterization of the pro-finite completion of a group, an algebraic variety, or a topological space, and other "completion" are examples. In addition, following and elaborating on Fakir [8], and Casacuberta-Frei [7], one can associate to a monad (or any co-augmented functor) a terminal _idempotent_ monad, i.e. a terminal localization functor, \(L_{M}\), projecting \(\mathcal{C}\) to the smallest subcategory of \(\mathcal{C}\) that contains the image of \(M\) and is closed under limits. The above definitions and constructions can be evidently dualized to get analog ones associated with a comonad \(N\to Id\). In [18], L. Yanovski constructs for quite general \(\infty-\)categories a transfinite tower of co-monads with similar (implicit) properties, and strong transfinite convergence results. ### Examples: To begin, consider some quite well-known examples. Recall that any localization (or so-called reflection)functor \(L:\mathcal{C}\to\mathcal{C}\) is a terminal monad see [7]. It is the terminal that preserves all the \(L-\) local objects. Next, the canonical set \(U(X)\) of all ultra-filters on a set \(X\) is a special case, see [10]. Namely, the monad \(U\) now appears as the terminal monad among all co-augmented functors on sets that preserve every finite set. Another well-known example is the double dual of a vector space functor \(V\to V^{\star\star}.\) It is the terminal monad that preserves the one-dimensional spaces, or alternatively all finite-dimensional spaces. The last examples are clearly related to theorems 1.2 and 1.3 below. It turns out, see below, that even when \(\mathcal{D}\) is just the subcategory spanned by a three-element set \(3=d\in\mathcal{C}=Sets\) in the category of sets, then \(T_{\mathcal{D}}=T_{3}(X)\) is again the _underlying set_ of the Stone-Cech compactification of the set \(X\), i.e. \(U(X)\) as above. Further, \(T_{2}\) is a canonical sub-monad of the ultrafilter monad, while \(T_{n}\) for \(n\geq 3\) is again the ultrafilter monad. When \(\mathcal{D}\subseteq\mathcal{C}\) is the subcategory of finite groups in the category of (discrete) groups, then the _discrete_ profinite completion endo-functor, on the category of groups appears as the terminal monad among all co-augmented ones \(F,\) that preserve all finite groups i.e. 
with \(\Gamma\cong F(\Gamma)\) for every finite \(\Gamma.\) Or again, if \(\mathcal{D}\subseteq\mathcal{C}\) is the subcategory of nilpotent groups in the category of groups, the associated monad \(T_{\mathcal{D}}\) is the (discrete) nilpotent completion functor in the category of groups. For a ring \(A\), the (discrete) completion functor of an \(A-\)module, with respect to an ideal \(I\subseteq A\), can be similarly expressed as a terminal monad. A final, slightly stretched example, in an \(\infty\)-category, is the double dual as in equation 6.2 section 6 below, and Theorem 6.2. This is very close to Mandell's functor, [15], see remark 6.2 below, that can be considered as the terminal monad preserving certain GEM spaces expressed as a double dual monad. #### 1.1.1. Acknowlegements This line of thought was a result of a private discussion with M. Hopkins about the properties of the Bousfield-Kan \(R\)-completion. Our students Guy Kapon and Shauly Regimov took an active part in the discussion leading to the present paper. Their work led them to the corresponding formulations in the context of \(\infty\)-algebras, see [9]. ### A sample of results The results below regard the existence, basic properties, and explicit formulas for the terminal monad in certain cases. Our first concern is to guarantee the existence of terminal monads under certain rather weak conditions. In any category, a terminal object can be considered as the limit over the empty diagram. Hence the existence of a terminal co-augmented functor in a given functor category would follow from its closure under limits. In the following, the closure under limits and thus the existence of a terminal object is guaranteed by the closure of the basic category \(\mathcal{C}\) under limits. In the present case, the functor categories, coma categories, and considered subcategories are clearly closed under limits. Limits in the category of co-augmented endo-functors are taken in the appropriate coma category under the identity functor. The following gives a general construction of the terminal monad in quite a general framework, see Proposition 2.4 below. **Proposition 1.1**.: _Let \(\mathcal{D}\subseteq\mathcal{C}\) be a full subcategory of a category with limits. The co-density functor, or the \(\mathcal{D}\)-completion, \(T_{\mathcal{D}}:\mathcal{C}\rightarrow\mathcal{C}\) is the terminal object in the category of co-augmented functors \(Id\to F\in\mathcal{C}^{C},\) such that the co-augmentation map \(d\to F(d)\) is an isomorphism for all \(d\in\mathcal{D}.\) This functor has a unique canonical monad structure._ The following statements use the notation of 6.2, so they should be read with caution: Although not treated here, they hold also in a complete monoidal category with internal \(hom(-,-)\) objects. In all cases, the notation \(hom_{O}(-,-)\) should be read as the appropriate equivariant maps, with respect to the implied action on the monoid \(End\) or operad \(O,\) on the range and domain. **Theorem 1.2**.: _(See equation 6.2) Let \(d\in\mathcal{C}\) be an element in a category with limits. 
Denote by \(End(d)\) the full subcategory generate by \(d,\) namely the endomorphism of \(d.\) The terminal monad \(Id\to T_{d}\) that preserves \(d,\) is given by the "structured double dual" with respect to \(d:\)_ \[X\to T_{d}(x)=hom_{End\,(d)}(hom(X,d),d)\] Further, the terminal monad that preserves an element \(d\in\mathcal{C}\) and all its cartesian powers \(d^{n},\) is given by a similar expression as below, where \(\mathsf{O}=\mathsf{O}_{d}\) denote the full endomorphism operad of an object \(d,\) given by all the morphisms \(d^{i}\to d\) with \(i>0.\) In the notation of 1.2 one has: **Theorem 1.3**.: _The terminal monad \(Id\to T_{d^{\bullet}}\) that preserves \(d^{i},\) for all \(i,\) is given by the "operadic double dual" with respect to \(d:\)_ \[X\to T_{d}(X)=hom_{\mathsf{O}}(hom(X,d),d)\] _Terminal monads associated to a given monad \(M\)._ For a given monad \(Id\to M:\mathcal{C}\rightarrow\mathcal{C},\) ( or, more generally, a co-augmented endo-functor,) one has an associated _terminal monad_\(T_{M}:\mathcal{C}\rightarrow\mathcal{C},\) which is the terminal endo-functor among all those that preserve the image of \(M,\) namely with \(M\to FM\) an isomorphism. As an example, of such a monad \(M\) one can take any of the monads discussed above or even the terminal monad \(T_{\mathcal{D}}\) as above. The terminal monad associated with the (discrete) profinite completion functor, \(G\to M(G)=\widehat{G}\equiv proG\) is a functor \(T_{pro},\) that preserves all groups of the form \(\widehat{G},\) i.e. groups that are the _discrete_ profinite completion of some group \(G.\) Next, if \(U\) is the ultrafilter monad discussed above then its associated terminal monad can be seen to be the identity monad, \(T_{U}=Id,\) which is clearly the only monad that preserves all possible sets of the form \(U(X),\) since the latter have arbitrarily high cardinality. The terminal monad associated to \(M\) can be expressed explicitly as follows: Compare [8]: **Theorem 1.4**.: _Let \(M\) be a monad on a category \(\mathcal{C}.\) The associated terminal monad is given as the equalizer_ \[T_{M}\cong Equal(M\rightrightarrows M^{2}).\] In addition, we may consider, following Fakir above, the category of idempotent monads, which are often called localizations or reflections. Fakir constructs for every monad \(M\) a naturally associated idempotent monad K(M). Casacuberta et el observed in [7] that this idempotent monad is _terminal_ among all idempotent monads \(F,F\cong F^{2},\) with the property \(M(f)\) is an isomorphism if and only if \(K(M)(f)\) is one. Note the difference between \(T_{M}\) and \(K(M).\) The latter is discussed shortly in the last section below. The \(\infty\)-category analog is clearly the totalization of the co-simplicial monad \(M^{\bullet}\) discussed in [9]. ### Outline of the rest of the paper We begin with recalling the general concept of completion i.e. co-density with respect to a subcategory, such as the subcategory of compact objects. This is done by considering the right Kan extension of a subcategory over itself. This gives many known examples of terminal monads. We then consider the terminal monad associated with a given object in a category and one associated with a given monad. 
The last example gives a functor from monads to terminal monads on the category \(\mathcal{C}.\) The paper goes on to consider some known special cases such as the category of sets and groups where the general construction gives some well-known constructions as a terminal monad, this characterizes them uniquely by a property. The last section deals with the pro-idempotent monad associated with a co-augmented endo-functor, vastly generalizing the classical Bousfield-Kan completion tower \(R_{\bullet},\) here only for a discrete category, but paving the way for a similar result for an \(\infty\)-category. ## 2. \(\mathcal{D}\)-Completions Let \(\mathcal{D}\) be a full subcategory of a category \(\mathcal{C}\). Denote by \(I:\mathcal{D}\to\mathcal{C}\) the embedding and assume that the right Kan extension of \(I\) by \(I\) exists and denote it by \[T=T_{\mathcal{D}}=\mathsf{Ran}_{I}(I):\mathcal{C}\to\mathcal{C}. \tag{2.1}\] So by the definition of the right Kan extension, \(T\) is a functor together with a natural transformation \(\varepsilon:TI\longrightarrow I\) (2.2) such that for any functor \(F:\mathcal{C}\to\mathcal{C}\) and any natural transformation \(\varepsilon^{\prime}:FI\to I\) there exists a unique natural transformation \(\delta:F\to T\) such that \(\varepsilon\circ\left(\delta I\right)=\varepsilon^{\prime},\) where \(\delta I:FI\to TI\) is the whiskering of \(\delta\) and \(I\). The equation \(\varepsilon\circ\left(\delta I\right)=\varepsilon^{\prime}\) can be rewritten as follows: for any \(d\in\mathcal{D}\) \[\varepsilon_{d}\circ\delta_{d}=\varepsilon_{d}^{\prime}. \tag{2.3}\] Note that the universal property implies that for two natural transformations \(\delta,\delta^{\prime}:F\to T\) the equation \(\delta I=\delta^{\prime}I\) implies \(\delta=\delta^{\prime}.\) Thus \(T\) appears already here as a terminal functor, in a somewhat different sense from the above. Compare [11]. The functor \(T\) will be called the functor of \(\mathcal{D}\)-completion. Any right Kan extension can be presented as a limit over a comma category [14, Ch. X, SS3, Th.1]. In our case, it is just the limit of the projection functor \[T(c)=\mathsf{lim}(c\downarrow\mathcal{D}\to\mathcal{D}). \tag{2.4}\] **Lemma 2.1**.: _The morphism \(\varepsilon:TI\to I\) is an isomorphism. In particular, for any \(d\in\mathsf{Ob}(\mathcal{D})\) we have an isomorphism_ \[\varepsilon_{d}:T(d)\cong d. \tag{2.5}\] Proof.: Since the functor \(I:\mathcal{D}\to\mathcal{C}\) is full and faithful, by [14, Ch. X, SS3, Cor.3] we obtain that \(\varepsilon\) is an isomorphism. **Lemma 2.2**.: _There exists a unique natural transformation_ \[\eta^{T}:\mathsf{Id}_{C}\longrightarrow T \tag{2.6}\] _such that \(\eta_{d}^{T}=\varepsilon_{d}^{-1}\) for any \(d\in\mathsf{Ob}(\mathcal{D}).\)_ Proof.: Take \(F=\mathsf{Id}_{C}\) and \(\varepsilon^{\prime}=\mathsf{id}_{I}\) and use the universal property of the right Kan extension. Further we will treat \(T\) as an co-augmented functor \(T=(T,\eta^{T}).\) For any co-augmented functor \(F=(F,\eta^{F})\) we set \[\mathsf{Inv}(F)=\{c\in\mathsf{Ob}(\mathcal{C})\mid\eta_{c}^{F}\text{ is iso}\}. \tag{2.7}\] Note that \(\mathcal{D}\subseteq\mathsf{Inv}(T).\) **Lemma 2.3**.: _The class \(\mathsf{Inv}(F)\) is closed under retracts._ Proof.: Because a retract of an isomorphism is an isomorphism. The following is a basic observation that follows from the above, that justifies the term "terminal monad," compare [3.7.3] in [5]. 
**Proposition 2.4**.: _The co-augmented functor \(T_{\mathcal{D}}:\mathcal{C}\to\mathcal{C}\) of the \(\mathcal{D}\)-completion is a terminal object in the category of co-augmented functors \(F\) for which \(\mathcal{D}\subseteq\mathsf{Inv}(F).\) Moreover, \(T_{\mathcal{D}}\) is a monad in functor category \(\mathcal{C}^{\mathcal{C}}.\)_ Proof.: Take a co-augmented functor \(F=(F,\eta^{F})\) such that \(\mathcal{D}\subseteq\mathsf{Inv}(F).\) Note that \(\eta^{F}I:I\to FI\) is an isomorphism. We use the universal property of \(T\) and take \(\varepsilon^{\prime}=(\eta^{F}I)^{-1}:FI\to I.\) Then there exists a unique natural transformation \(\delta:F\to T\) such that \(\varepsilon\circ(\delta I)=(\eta^{F}I)^{-1}.\) Since \(\varepsilon=(\eta^{F}I)^{-1},\) we obtain that the equation \(\varepsilon\circ(\delta I)=(\eta I)^{-1}\) is equivalent to the equation \((\delta\circ\eta^{F})I=\eta^{T}I.\) And the equation \((\delta\circ\eta^{F})I=\eta^{T}I\) is equivalent to \(\delta\circ\eta^{F}=\eta^{T}\) by the universal property of \(T.\) The monad structure of this terminal \(T_{\mathcal{D}}\) follows immediately from the fact that its square preserves all objects of \(\mathcal{D},\) giving a unique natural transformation \(T_{\mathcal{D}}^{2}\to T_{\mathcal{D}}.\) The monadic equations are satisfied since they all involve equality of natural transformations from powers of \(T_{\mathcal{D}}\) to \(T_{\mathcal{D}}\) itself, but there is a unique such transformation for each power since \(T_{\mathcal{D}}\) is terminal among these co-augmented functors, all of which preserve the objects of \(\mathcal{D}.\) _Remark:_ Note that the above characterization shows that the terminal monad \(T_{\mathcal{D}},\) associated with a subcategory \(\mathcal{D}\subseteq\mathcal{C},\) can be identified using solely its effect on the objects in \(\mathcal{D},\) being the terminal co-augmented functor \(\mathcal{C}\to\mathcal{C},\) that "preserves the objects" of this subcategory. Of course, its usual construction, as above, does employ morphisms in \(\mathcal{D}\) and \(\mathcal{C}.\) ## 3. Terminal monads associated with a functor \(\mathcal{D}\to\mathcal{C}\) More generally, consider a general functor \(F:\mathcal{D}\to\mathcal{C}.\) Now, consider the subcategory \(\mathcal{C}_{F}^{\mathcal{C}},\) of the category of end-functors \(\mathcal{C}^{\mathcal{C}},\) consisting of all co-augmented functors \(\mathrm{G},\)\(Id\to G:\mathcal{C}\to\mathcal{C},\) that preserve the image of \(F,\) i.e. with \[(Id\to G)(F(x))=F(x)\to GF(x)\] is an isomorphism in \(\mathcal{C}\) for any object \(x\in\mathcal{D}.\) The subcategory \(\mathcal{C}_{F}^{\mathcal{C}},\) of the full functor category, is a category of co-augmented functors \(G,\) which is evidently closed under all limits. Hence it has a terminal object which is the _terminal monad \(T_{F}\)_ associated with the given functor \(F,\) namely, preserving the image of \(F.\) We note that this terminal object has a natural monad structure: **Proposition 3.1**.: _Let \(M\) be an co-augmented endo-functor in \(\mathcal{C}^{\mathcal{C}}.\) The terminal object, \(T_{M},\) in the category \(\mathcal{C}_{M}^{\mathcal{C}}\) of co-augmented endo-functors preserving the image of \(M,\) is naturally a monad._ Proof.: Denote the terminal co-augmented functor by \(T_{M}\) as above. 
Since the composition: \(T_{M}\circ T_{M}\) clearly preserves the image of \(M,\) and \(T_{M}\) is terminal among those preserving \(M,\) there is a unique map \(\mu:T_{M}T_{M}\to T_{M}.\) The conditions, on a co-augmented functor with this \(\mu\) as a structure map, of being a monad, involve equality among various maps from compositions of \(T_{M}\) with itself to \(T_{M}.\) Each such composition preserves the image of \(M,\) therefore there is a unique map, from any self-composition of \(T_{M},\) to the terminal object \(T_{M}.\) Recall all the conditions on a co-augmented to be a monad involve equality between various maps to the monad itself. It follows that all the needed equalities are satisfied by \(T_{M}.\) We conclude that the above basic properties of \(T_{\mathcal{D}}\) holds when one replaces the inclusion \(I:\mathcal{D}\subseteq\mathcal{C}\) with any functor \(M:\mathcal{D}\to\mathcal{C}.\) In this case, the right Kan extension \(T_{M}\) is the terminal monad on \(\mathcal{C}\) that preserves the image subcategory of the given functor \(M.\) In case \(\mathcal{D}=\mathcal{C}\) and where the functor \(M:\mathcal{C}\to\mathcal{C}\) is a co-augmented functor, we got the terminal monad \(T_{M}\) among those that preserve the image of \(M.\) Consider the special, well-known case, where \(M\) is an idempotent localization functor \(Id\to M\cong M^{2}.\) Namely, a projection onto a subcategory of \(\mathcal{C}.\) In that case, \(m:T_{M}\to M\) is an equivalence. Namely, \(M\) is its own terminal monad. (Compare: [17]) For the sake of completeness, we state: **Proposition 3.2**.: _Let \(L:\mathcal{C}\rightarrow\mathcal{C}\) be a co-augmented idempotent functor, i.e. localization- projection onto a full subcategory of local object. Then \(L\) is its own terminal monad i.e. \(L\cong T_{L}.\)_ Proof.: First note that \(L\) has associated monad structure since \(L\cong L^{2}\) by the two natural maps. Second, for every monad that preserves the image of \(L,\) namely, with the natural map \(L\to ML\) an equivalence (isomorphism) one gets a map \[M\to ML\to LML\cong L^{2}\cong L\] The monad structure of \(M\) forces the uniqueness of the map of monads \(M\to L\) since any map of monads \(f:M\to L\) is a retract of \(Mf:M^{2}\to ML\cong L,\) being a monad map, which is a retract of \(M(id\to L.)\) Starting with the map of monads 3.1, (3.1) Applying \(M\) to this triangle of maps we see immediately that \(f\) is uniquely determined by the monad **Remark 3.3**.: If the subcategory \(\mathcal{D}\) as above is closed under all limits then it is localizing and the \(\mathcal{D}\)-completion \(T_{\mathcal{D}}\) is the localization \(L_{\mathcal{D}}:\mathcal{C}\rightarrow\mathcal{D},\) projecting \(\mathcal{C}\) to the subcategory \(\mathcal{D}.\) **3.1**.: \(T_{M}\) **in terms of \(M\).** It turns out that there is a direct formula expressing the terminal monad \(T_{M},\) associated with \(M,\) in terms of \(M.\) Generally, given a monad \(Id\to M,\) consider a co-augmented functor \(Id\to F\) that preserves the image of \(M,\) i.e. \(M\cong FM.\) Applying such an \(F\) to \(Id\to M\) we get a canonical map \(F\to M,\) for every such functor. 
In particular, for any monad \(M,\) one gets a natural map \(T_{M}\to M,\) from the terminal monad to \(M,\) giving rise to the augmentation \(T\to Id\) of the functor \(T.\) In the following this last map is identified with the natural map to \(M,\) of the _equalizer of the natural diagram: \(M\rightrightarrows M^{2}.\)_ In addition this map \(T_{M}\to M\) is shown to be a map of monads. Let us start with two basic properties: **Proposition 3.4**.: _Let \(f:(M,\mu_{{}_{M}})\rightarrow(N,\mu_{{}_{N}})\) be a map of monads. Then the monad \(N\) is naturally an \(M-\)algebra. In particular, \(T_{M}(N)\cong N,\) thus \(f\) induces a map of monads \(Tf:T_{M}\to T_{N}.\) Hence \(T:Mon\mathcal{C}\to Mon\mathcal{C},\) has a natural structure of augmented endo-functor \(T\to Id\) on the category \(Mon\mathcal{C}\) of monads over \(\mathcal{C}.\)_ Proof.: Since \(M\) is an algebra over itself, we need to show that the natural map \(N\to M(N)\), gotten by applying the co-augmentation \(\iota_{M}:Id\to M\) to \(N,\) has a left inverse i.e., that \(N\) is a retract of \(M(N)\equiv M\circ N.\) Since \(T_{M}\) preserves \(M,\) it preserves also any retract of \(M.\) The natural left inverse is given by the composition: \[MN\xrightarrow{f\circ N}NN\xrightarrow{\mu_{N}}N\] Hence, \(N(x)\) has been shown to be a retract of \(M(N(x)),\) for all objects \(x\in obj\mathcal{C}.\) Thus \(T_{M}(N(x))\cong N(x).\) But \(T_{N}\) is the terminal monad that preserves all objects of the form \(N(x).\) Therefore, there is a unique map \(T_{M}\to T_{N},\) as needed. Second, an interesting closure property **Proposition 3.5**.: _Let \(X:I\to\mathcal{C}\) be a functor with \(I\) being a small (indexing) category and \(M\) a monad over \(\mathcal{C}.\) Assume that for each \(i\in I\) the object \(X_{i}\in obj\mathcal{C},\) is a retract of \(M(X_{i}.)\) Then the object \(Y=lim_{I}X_{i}\) is a retract of \(T_{M}(Y),\) hence \(T_{T_{M}}(Y)\cong Y.\) Similarly, any such limit \(Y\) of \(M-\)algebras is naturally a \(T_{M}\)- algebra._ Note: there is no assumption here about relations among the various retractions, namely the retract structures on different \(X_{i}.\) Thus the limit is not in general an \(M\)-retract but it is a \(T_{M}\) retract. For example, (in \(Cat_{\infty}\)) in the infinity category of spaces, if \(M=R\) is the free \(R\)-module spanned by a space \(X,\) then a limit of any diagram of such \(R\)-GEMs is not, in general, a \(R\)-algebra but rather a \(R_{\infty}=T_{R}\)-algebra. Proof.: Consider the composition: \[lim_{i}X_{i}\to T_{M}(lim_{i}X_{i})\stackrel{{ a}}{{\to}}lim_{i}T_{M}(X_{i}) \cong lim_{i}X_{i}\] which is clearly the identity map. The right-hand side map \(a\) is the assembly map for limits, and the equality on the right is a consequence of \(T_{M}(X_{i})\cong X_{i},\) since the latter is a retract of \(M(X_{i})\) by assumption and hence also preserved by \(T_{M}.\) The right-hand side map \(a,\) is directly seen to equip \(lim_{i}X_{i}\) with an \(T_{M}-\) algebra structure. ### The equalizer as the terminal monad Given a co-augmented functor \(Id\to M,\) denote by \(Eq_{M}\) the equalizer of the two natural maps \(M\rightrightarrows M^{2}\) coming form the co-augmentation structure \(Id\to M.\) First, we note the following: **Lemma 3.6**.: _Let \(Id\to M\) be an co-augmented functor \(\mathcal{C}\to\mathcal{C}.\) Let \(F\) be any co-augmented functor \(F:\mathcal{C}\to\mathcal{C}\) that preserves \(M,\) i.e. with \((Id\to F)(M)=M\stackrel{{\Xi}}{{\to}}F(M)\) an equivalence. 
Then there is a natural map, of co-augmented functors, \(F\to Eq_{M}\) from \(F\) to the equalizer of \(M\rightrightarrows M^{2}.\)_ Proof.: Apply \(F\) to the commutative diagram \(Id\to M\rightrightarrows M^{2},\) to get the desired factorization to the equalizer: observing that we get commutative: \[F\to F(M)\rightrightarrows F(M^{2})\] which is equivalent to: \[F\to M\rightrightarrows M^{2}\] by the assumption on \(F.\) Hence there is a well-defined factorization of the left-hand side map through the equalizer \(F\to Eq_{M}.\) This map clearly respects the co-augmentation \(Id\to F\to M.\) For the rest of the discussion, we will mostly assume, sometimes for convenience only, that \(M\) is a monad on \(\mathcal{C}.\) We noticed that every functor that preserves \(M\) maps naturally to the functor \(Eq_{M}.\) The same is true in particular to the terminal functor that preserves \(M.\) But \(Eq_{M}\) preserves \(M\) thus, by definition, it maps uniquely to the terminal \(T_{M}.\) This brings us to the following: **Proposition 3.7**.: _Let \(M\) be a monad in \(MonC,\) the category of monads over \(\mathcal{C}.\) The terminal monad preserving the image of \(M\) is naturally equivalent to the equalizer:_ \[T_{M}\cong Eq(M\rightrightarrows M^{2}.)\] _Equalizer that is, with respect to the two natural transformations given by the augmentation. In particular, the equalizer itself has a natural structure of a monad._ Proof.: Observe that the diagram of maps \(M\rightrightarrows M^{2}\) is not, in general, a diagram of monads. The maps in it are natural transformations of co-augmented functors. Thus the equalizer is a co-augmented functor, but it is not immediately clear why it is a monad. Fakir states this without proof. It does follow below from the observation that the equalizer is naturally isomorphic to the terminal monad \(T_{M}.\) To prove that, note that \(Eq_{M}\) preserves \(M,\) i.e. \(Eq_{M}(M)\cong M,\) were here \(M\) denotes the underlying co-augmented functor of the monad \(M.\) The reason is that clearly as co-augmented functors, there is an equivalence \(M\cong totM^{\bullet}(M)=lim_{\Delta}(M^{\bullet}(M)),\) since the latter co-simplicial functor has an extra co-degeneracy map. But in \(1\)-category, \(totM^{\bullet}\cong Eq_{M}.\) Therefore there is a unique map of co-augmented functors \(Eq_{M}\to T_{M}.\) First, we prove that this map is an equivalence of functors. This will endow the equalizer with a monad structure. The monad structure on \(T_{M}\) comes from the natural map \(T_{M}T_{M}\to T_{M}\) given by the universal property of the range. Since the identity is only self-map \(T_{M}\to T_{M},\) the last map satisfies the monad conditions. Now consider the following maps (=natural transformations) of co-augmented functors: \[Eq_{M}\overset{q}{\longrightarrow}T_{M}\overset{t}{\longrightarrow}Eq_{M} \overset{q}{\longrightarrow}T_{M}\] The map \(q\) is uniquely guaranteed by the equation \(Eq_{M}\circ M\cong M,\) since \(T_{M}\) is terminal _co-augmented functor_ with this property. The map \(t\) is given by the universal property of \(Eq_{M}.\) See lemma 3.6 above. 
Namely, since the monoid \(T_{M}\) preserves \(M,\) (\(T_{M}\circ M\cong M\)) so it preserves the co-simplicial object \(M^{\bullet}.\) When we apply \(T_{M}\) to \(Id\to M^{\bullet}\) we get a map of \(T_{M}\) to the limit of \(M^{\bullet},\) which in our case is the equalizer \(Eq_{M}.\) In the above composition of three natural transformations, the induced self-map of \(T_{M}\) is the identity since \(T_{M}\) is a terminal object. We claim that the composition \(t\circ q\) is equivalent to the identity. To see that, consider the diagram: gotten by applying the above maps: \(Eq_{M}\to T_{M}\to Eq_{M}\) to the natural transformations: \[Id\to M\rightrightarrows M^{2}.\] Since all the horizontal maps in diagram that involve \(M,M^{2},\) and denoted by \("\cong"\) are equivalences, the required self-map on the equalizer, \(Eq_{M}\) is also an equivalence as needed. Thus we conclude that the maps \(t,s\) are equivalences of co-augmented functors. It follows that \(Eq_{M}\) has a structure of a monad coming from that of \(T_{M},\) and the two are equivalent as a monad, as stated. The terminal monad \(T_{M},\) has the following properties: **Theorem 3.8**.: _For any monad \(M,\) the above map \(T_{M}\to M\), 3.8, is a map of monads. The assignment \(M\mapsto T_{M}\) gives an endo-functor \(MonC\to MonC,\) together with a natural transformation \(T\to Id,\) namely, an augmented endo-functor \(T\) on \(MonC.\)_ Proof.: By 3.4 above, \(T_{M}\) is natural in the variable \(M.\) Or, since \(Eq_{M}\) is functorial in \(M,\) so \(T_{M}\) is by the above theorem 3.7. Hence the assigment: \(T:M\mapsto T_{M}\) defines an endo-functor on \(MonC:\) For a given map of monads \(M\to M^{\prime}\) one gets a map of co-simplicial resolutions \(M^{\bullet}\to M^{\prime\bullet}.\) Or, since \(T_{M}\) is naturally equivalent to the equalizer of \(M\rightrightarrows M^{2},\) we get a well-defined map on the terminal monads with a natural map \(m_{M}:T_{M}\to M,\) as needed. We now prove that this map \(T_{M}\to M\) is a map of monads, namely, respect the monad structure \(M^{2}\to M.\) Now since \(Eq_{(-)}\) is a co-augmented functor we get a commutative diagram involving \(Eq_{M^{2}}\) by applying this functor to \(M\rightrightarrows M^{2}.\) The natural map \(Eq_{M}Eq_{M}\to Eq_{M^{2}},\) completes the argument, giving the necessary commutation of the monad structures. In more detail, for any map of co-augmented functors, such as \(m_{M}:T_{M}\to M,\) the corresponding co-faces maps \(M\rightrightarrows M^{2}\) etc. commutes with \(m_{M}\) and \(m_{T_{M}}.\) Consider the natural diagram: To prove that \(T_{M}\to M\) is a map of monads, one only needs to show that the monad structures of \(M\) and \(T_{M},\) written as \(\mu,\mu_{1},\) are compatible with the natural map \(T_{M}\to M.\) Namely, that the two composition arrows, involving \(\mu\) and \(\mu_{1}\), of \(T_{M}T_{M}\to M\) in the diagram below are equal. Namely, that the outer square of the maps below commutes. Consider the diagram below: The arrows \(\mu_{1},\mu_{3}\) are defined by the universal properties of \(T_{M}\) and the equalizer, correspondingly, since their common domain preserves \(M\) and \(M^{2}\), using 6.5 above. The map \(\iota_{M^{2}}\) is the inclusion of the equalizer. Note that the bottom square commutes since here the equalizer \(Eq_{(-)}\) is considered here as a co-augmented functor,\(Eq_{(-)}\to Id\), from the category of co-augmented functors over \(\mathcal{C}\) to itself. 
The top square below commutes by the terminal property of its bottom left corner \(T_{M}\), admitting only one map from the functor \(T_{M}T_{M}\) at the top right corner. Thus the whole diagram commutes as needed. #### 3.2.1. Example Consider the ultrafilter monad \(X\to U(X)\) discussed elsewhere in this paper. The associated terminal monad \(T_{U}\) preserves the image of \(U,\) which includes sets of arbitrary cardinality. Hence it preserves all sets and must be the identity monad. In fact, the equalizer \(Eq_{U}\) is easily seen to be the identity functor on sets, as it includes only principal ultrafilters. On the other hand, the terminal monad associated to the profinite completion \(G\to pro-G\equiv\widehat{G}\) in the category of groups is not the identity functor: on a finitely presented group \(G\) one has \(T_{pro}(G)\cong pro-G,\) since the completion is idempotent on these groups, so that the equalizer of \(pro-G\rightrightarrows pro-(pro-G)\) is \(pro-G\) itself, the completion of \(G\). ## 4. Explicit expressions for \(T_{d}\) Here we explicitly express the terminal monad, associated with an object \(d\in\mathcal{C}\), as a structured double dual, see below, as opposed to the usual one as in [3], section 2. For a set \(S\) and an object \(c\in\mathsf{Ob}(\mathcal{C})\) we denote by \(c^{S}\) the product of copies of \(c\) indexed by \(S\) \[c^{S}=\prod_{s\in S}c. \tag{4.1}\] The projections will be denoted by \(\mathsf{pr}_{s}:c^{S}\to c.\) A morphism \(f:c^{\prime}\to c^{S}\) is defined by a family of morphisms \(f_{s}=\mathsf{pr}_{s}\circ f:c^{\prime}\to c\) which are called components of \(f.\) The object \(c^{S}\) is contravariant in \(S.\) More precisely, this defines a functor \[\mathcal{C}\times\mathsf{Set}^{\mathsf{op}}\to\mathcal{C},\qquad\quad(c,S)\mapsto c^{S} \tag{4.2}\] such that, if \(f:S\to S^{\prime}\) is a function, then the morphism \(c^{f}:c^{S^{\prime}}\to c^{S}\) is defined so that \((c^{f})_{s}=\mathsf{pr}_{f(s)}.\) **Proposition 4.1**.: _Let \(\mathcal{C}\) be a category with limits and \(\mathcal{D}\) be a small full subcategory of \(\mathcal{C}.\) Then the functor of \(\mathcal{D}\)-completion exists and it is given by the end_ \[T(c)=\int_{d\in\mathcal{D}}d^{\mathcal{C}(c,d)}. \tag{4.3}\] _It can also be presented as an equalizer_ \[T(c)=\mathsf{eq}\left(\prod_{d\in\mathcal{D}}d^{\mathcal{C}(c,d)}\rightrightarrows\prod_{\alpha:d_{1}\to d_{2}}d_{2}^{\mathcal{C}(c,d_{1})}\right), \tag{4.4}\] _where the first morphism is induced by \(d_{1}^{\mathcal{C}(c,d_{1})}\xrightarrow{\alpha^{\mathcal{C}(c,d_{1})}}d_{2}^{\mathcal{C}(c,d_{1})}\) and the second morphism is induced by \(d_{2}^{\mathcal{C}(c,d_{2})}\xrightarrow{d_{2}^{\mathcal{C}(c,\alpha)}}d_{2}^{\mathcal{C}(c,d_{1})}.\)_ Proof.: Follows from the interpretation of right Kan extensions in terms of ends [14, Ch. X, §4, Th.1] and the characterisation of ends in terms of equalisers [13, Remark 1.2.4]. **Corollary 4.2**.: _Let \(\mathcal{C}\) be a complete category and \(\mathcal{D}=\{d\}\) the full subcategory consisting of one object. Then \(T_{d}=T_{\mathcal{D}}\) exists and_ \[T_{d}(c)\cong\mathsf{eq}\left(d^{\mathcal{C}(c,d)}\rightrightarrows\left(d^{\mathcal{C}(c,d)}\right)^{\mathsf{End}_{\mathcal{C}}(d)}\right). \tag{4.5}\] **4.0.1**.: _Remark._ There is an alternative description of \(T_{d}\) using the double dual monad. 
Denote by \(DD_{d}(c)\) the "naive double dual" monad \(c\to d^{\mathcal{C}(c,d)}=\Pi_{c\to d}d.\) Then it is not hard to see, using the arguments in 3.2, that \(T_{d}\) is equivalent to the terminal monad \(T_{DD_{d}}\) associated with \(DD_{d}.\) Notice that \(DD_{d}(d)\) is isomorphic to \(d^{l}\) for some \(l\geq 1.\) Hence, since the terminal \(T_{DD_{d}}\) preserves the image of \(DD_{d},\)\(T_{DD_{d}}\) also preserves its retract \(d,\) and one has a unique map of monads \(Eq_{DD_{d}}\cong T_{DD_{d}}\to T_{d}.\) Similarly there is a map in the other direction: namely, the desired element in the monad category \(Mon\mathcal{C}\): \[Mon(T_{d},DD_{d})=Mon(T_{d},d^{\mathcal{C}(-,d)})\] is determined by the composition of maps in \(\mathcal{C}\): \[T_{d}(c)\times\mathcal{C}(c,d)\to T_{d}(c)\times\mathcal{C}(T_{d}(c),T_{d}(d)=d)\stackrel{eval}{\longrightarrow}d.\] Or, stated otherwise, the map \(T_{d}(c)\to\Pi_{c\to d}d\) is given factor-wise by \(T_{d}(c\to d):T_{d}(c)\to T_{d}(d)\cong d,\) since \(T_{d}(d)\cong d.\) This can also serve to prove the crucial property of \(T_{d},\) namely that any map \(x\to y\) in \(\mathcal{C}\) with \(\mathcal{C}(y,d)\cong\mathcal{C}(x,d)\) satisfies \(T_{d}(x)\cong T_{d}(y);\) which is of course evident from the explicit expression for \(T_{d}\) above. It is clear, though not expanded here, that, in the case where the category \(\mathcal{C}\) is enriched over itself, the above approach works well, using internal hom objects \(hom(hom(-,d),d).\) ## 5. Examples in the category of sets and groups ### Variations on the set of ultrafilters For any set \(X\) we denote by \(\mathcal{P}(X)\) the set of all subsets of \(X\). We treat \(\mathcal{P}\) as a (contravariant) functor \[\mathcal{P}:\mathsf{Set}^{\mathsf{op}}\longrightarrow\mathsf{Set}, \tag{5.1}\] where, for a map \(f:X\to X^{\prime},\) the map \(\mathcal{P}(f):\mathcal{P}(X^{\prime})\to\mathcal{P}(X)\) is defined as \(\mathcal{P}(f)(Y)=f^{-1}(Y).\) Note that the characteristic function defines an isomorphism \[\chi:\mathcal{P}(X)\cong\mathsf{Set}(X,2). \tag{5.2}\] The composition \[\mathcal{P}\mathcal{P}:\mathsf{Set}\longrightarrow\mathsf{Set} \tag{5.3}\] is a (covariant) functor that has a natural coaugmentation \[\eta_{X}:X\longrightarrow\mathcal{P}(\mathcal{P}(X)) \tag{5.4}\] such that \(\eta_{X}(x)\) is the set of all sets \(Y\subseteq X\) containing \(x.\) If we denote by \(\mathcal{UF}(X)\) the set of ultrafilters on \(X,\) we obtain that \(\mathcal{UF}\) is a co-augmented sub-functor of \(\mathcal{P}\mathcal{P}\) \[\mathcal{UF}\subseteq\mathcal{P}\mathcal{P}. \tag{5.5}\] Recall that \(\mathcal{UF}(X)\) can be thought of as the underlying set of the Stone-Cech compactification of the set \(X.\) Let us define another co-augmented sub-functor of \(\mathcal{P}\mathcal{P}\). An element \(A\in\mathcal{P}(\mathcal{P}(X))\) is called an _ultraset_ if 1. (US1) \(\emptyset\notin A\); 2. (US2) for any \(Y\subseteq X\) one and only one of the sets \(Y,X\setminus Y\) is an element of \(A\). Note that the axiom (US1) can be equivalently replaced by (US1') \(X\in A\). **Example 1**.: Any ultrafilter is an ultraset. **Example 2**.: Let \(X\) be a finite set of odd cardinality \(|X|=2n+1\) and \[A=\{Y\subseteq X\mid|Y|\geq n+1\}. \tag{5.6}\] Then \(A\) is an ultraset on \(X\) which is not an ultrafilter for \(n\geq 1\). 
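The claim in Example 2 can be verified mechanically in the smallest interesting case. The following Python sketch is only an illustration (the helper names and the encoding of subsets as frozensets are ours, not part of the text above): it checks that the majority family (5.6) on a three-element set satisfies (US1) and (US2), while failing to be an ultrafilter because it is not closed under intersections.

```python
from itertools import combinations

def powerset(X):
    """All subsets of X, as frozensets."""
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def is_ultraset(A, X):
    # (US1): the empty set does not belong to A.
    if frozenset() in A:
        return False
    # (US2): for every Y, exactly one of Y and X \ Y belongs to A.
    return all((Y in A) != ((X - Y) in A) for Y in powerset(X))

def is_ultrafilter(A, X):
    # An ultrafilter is an ultraset also closed under supersets and intersections.
    if not is_ultraset(A, X):
        return False
    closed_up = all(Z in A for Y in A for Z in powerset(X) if Y <= Z)
    closed_cap = all((Y & Z) in A for Y in A for Z in A)
    return closed_up and closed_cap

X = frozenset({0, 1, 2})                        # |X| = 2n + 1 with n = 1
A = {Y for Y in powerset(X) if len(Y) >= 2}     # the majority family of (5.6)
print(is_ultraset(A, X), is_ultrafilter(A, X))  # True False
```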
**Lemma 5.1**.: _Let \(A\) be an ultraset on a set \(X.\) Then \(A\) is an ultrafilter if and only if for any partition into three disjoint subsets \(X=P_{0}\sqcup P_{1}\sqcup P_{2}\), there exists a unique \(i\in\{0,1,2\}\) such that \(P_{i}\in A.\)_ Proof.: Assume that \(A\) is an ultrafilter and \(X=P_{0}\sqcup P_{1}\sqcup P_{2}\) is a partition. If there exists \(i\) such that \(P_{i}\in A,\) then it is obviously unique because \(A\) is closed under finite intersections. Let us prove that it exists. Assume the contrary, that \(P_{i}\notin A\) for any \(i.\) Then \(X\setminus P_{i}\in A,\) and hence \(P_{0}=(X\setminus P_{1})\cap(X\setminus P_{2})\in A,\) which is a contradiction. Now assume that for any partition into three disjoint subsets \(X=P_{0}\sqcup P_{1}\sqcup P_{2},\) there exists a unique \(i\in\{0,1,2\}\) such that \(P_{i}\in A.\) In order to prove that \(A\) is an ultrafilter, we need to prove that: (1) \(Y\in A\) and \(Y\subseteq Y^{\prime}\subseteq X\) implies \(Y^{\prime}\in A;\) (2) \(Y,Y^{\prime}\in A\) implies \(Y\cap Y^{\prime}\in A.\) Let us prove (1). Take \(P_{0}=Y,\)\(P_{1}=Y^{\prime}\setminus Y\) and \(P_{2}=X\setminus Y^{\prime}.\) Then \(P_{0}\in A,\) and hence, \(P_{2}\notin A.\) By (US2) we obtain \(X\setminus P_{2}=Y^{\prime}\in A.\) Let us prove (2). In the proof, we use that we already proved (1). Take \(P_{0}=Y\cap Y^{\prime},\)\(P_{1}=Y\setminus Y^{\prime},\)\(P_{2}=X\setminus Y.\) Since \(Y\in A,\) we have \(P_{2}\notin A.\) Therefore either \(P_{0}\in A,\) or \(P_{1}\in A.\) We need to prove that \(P_{0}\in A.\) Assume the contrary, that \(P_{1}\in A.\) Since \(P_{1}\subseteq X\setminus Y^{\prime},\) using (1), we obtain \(X\setminus Y^{\prime}\in A.\) It follows that \(Y^{\prime}\notin A,\) which is a contradiction. Hence \(P_{0}\in A.\) The set of all ultrasets on \(X\) is denoted by \(\mathcal{US}(X).\) It is easy to check that \(\mathcal{US}\) is a co-augmented sub-functor of \(\mathcal{P}\mathcal{P}\) \[\mathcal{UF}\subseteq\mathcal{US}\subseteq\mathcal{P}\mathcal{P}. \tag{5.7}\] Let \(n\) be a natural number, taken as an ordinal \(n=\{0,\ldots,n-1\}.\) We denote by \(T_{n}:\mathsf{Set}\to\mathsf{Set}\) the functor of \(\{n\}\)-completion, i.e. the terminal co-augmented functor with the property that \(n\to T_{n}(n)\) is an isomorphism. **Lemma 5.2**.: _Let \(\mathsf{Fin}_{\leq n}\) denote the class of finite sets of cardinality at most \(n.\) Then_ \[T_{n}=T_{\mathsf{Fin}_{\leq n}}. \tag{5.8}\] Proof.: Since any set of cardinality at most \(n\) is a retract of \(n,\) this follows from Lemma 2.3. **Proposition 5.3**.: _The co-augmented functor of \(2\)-completion on the category of sets is isomorphic to \(\mathcal{US}\)_ \[T_{2}\cong\mathcal{US}. \tag{5.9}\] Proof.: By Corollary 4.2 we see \[T_{2}(X)=\mathsf{eq}\Big(\mathsf{Set}(\mathsf{Set}(X,2),2)\rightrightarrows\mathsf{Set}(\mathsf{Set}(X,2),2)^{\mathsf{End}(2)}\Big). \tag{5.10}\] The characteristic function defines a bijection \(\mathcal{P}(X)\cong\mathsf{Set}(X,2).\) There are four maps \(2\to 2:\) (1) the identity map \(\mathsf{id}=e_{1};\) (2) the map \(e_{2}\) sending all to \(0;\) (3) the map \(e_{3}\) sending all to \(1;\) (4) the permutation \(e_{4}.\) Composition with them corresponds to four maps \(f_{i}^{X}:\mathcal{P}(X)\to\mathcal{P}(X):\) (1) \(f_{1}^{X}(Y)=Y;\) (2) \(f_{2}^{X}(Y)=\emptyset;\) (3) \(f_{3}^{X}(Y)=X;\) (4) \(f_{4}^{X}(Y)=X\setminus Y.\) Consider the isomorphism \[\mathcal{P}(\mathcal{P}(X))\cong\mathsf{Set}(\mathsf{Set}(X,2),2). 
\tag{5.11}\] So we need to prove that \[\mathcal{US}(X)=\mathsf{eq}\Big(\mathcal{P}(\mathcal{P}(X))\rightrightarrows\mathcal{P}(\mathcal{P}(X))^{\mathsf{End}(2)}\Big). \tag{5.12}\] The equaliser consists of those elements \(A\in\mathcal{P}(\mathcal{P}(X))\) for which the equation \(f_{i}^{\mathcal{P}(X)}(A)=\mathcal{P}(f_{i}^{X})(A)\) is satisfied for any \(i\). For \(i=1\) it is satisfied for any \(A\). For \(i=2\) we have \(f_{2}^{\mathcal{P}(X)}(A)=\varnothing\) and \[\mathcal{P}(f_{2}^{X})(A)=(f_{2}^{X})^{-1}(A)=\begin{cases}\mathcal{P}(X),&\varnothing\in A;\\ \varnothing,&\varnothing\notin A.\end{cases} \tag{5.13}\] Then it is satisfied for \(i=2\) iff \(\varnothing\notin A\) (axiom (US1)). Similarly, we obtain that the equation is satisfied for \(i=3\) iff \(X\in A\) (axiom (US1')). For \(i=4\) we have that \(f_{4}^{\mathcal{P}(X)}(A)=\mathcal{P}(X)\setminus A\) and \(\mathcal{P}(f_{4}^{X})(A)=\{X\setminus Y\mid Y\in A\}\). Then the equation is satisfied for \(i=4\) iff the axiom (US2) is satisfied. **Proposition 5.4**.: _Let \(\mathsf{Fin}\) denote the full subcategory of \(\mathsf{Set}\) consisting of finite sets. Then \(T_{\mathsf{Fin}}\) is isomorphic to \(T_{3}\) and isomorphic to \(\mathcal{UF}\)_ \[T_{\mathsf{Fin}}\cong T_{3}\cong\mathcal{UF}. \tag{5.14}\] Proof.: It is well-known that \(\eta:K\to\mathcal{UF}(K)\) is an isomorphism for any finite \(K\). So it is enough to prove that \(\mathcal{UF}\) is terminal among all co-augmented functors \(\big(F,\eta^{F}\big)\) such that \(\eta^{F}:K\to F(K)\) is an isomorphism for any set \(K\) such that \(|K|\leq 3\). By the universal property of \(\mathcal{US}=T_{2}\) (Proposition 5.3) we see that there is a unique morphism of co-augmented functors \(\varphi:F\to\mathcal{US}\). So we just need to prove that for any set \(X\) the image of \(\varphi_{X}\) is in \(\mathcal{UF}(X)\). Denote by \(F^{\prime}\) the image of \(\varphi\). Note that \(F^{\prime}\) is a co-augmented sub-functor of \(\mathcal{US}\). So we need to prove that \(F^{\prime}(X)\subseteq\mathcal{UF}(X)\). Since \(\eta:K\to F(K)\) is an isomorphism for any finite \(K\) such that \(|K|\leq 3\) we see that \[\mathcal{UF}(K)=F^{\prime}(K) \tag{5.15}\] for any \(K\) such that \(|K|\leq 3\). Let us prove that \(F^{\prime}(X)\subseteq\mathcal{UF}(X)\). Take an ultraset \(A\in F^{\prime}(X)\). Consider a partition \(X=P_{0}\sqcup P_{1}\sqcup P_{2}\). Define a map \(\alpha:X\to 3\) such that \(\alpha^{-1}(i)=P_{i}\). The map \(F^{\prime}(\alpha):F^{\prime}(X)\to F^{\prime}(3)=\mathcal{UF}(3)\) sends \(A\) to \(\eta(i_{0})\in\mathcal{UF}(3)\) for some \(i_{0}\in\{0,1,2\}\). Since \(\{i_{0}\}\in\eta(i_{0})\) and for \(i\neq i_{0}\) we have \(\{i\}\notin\eta(i_{0})\), we obtain that \(P_{i_{0}}=\alpha^{-1}(i_{0})\in A\) and \(P_{i}=\alpha^{-1}(i)\notin A\) for \(i\neq i_{0}\). Therefore the assumption of Lemma 5.1 is satisfied, and hence, \(A\) is an ultrafilter. ### Examples: Groups and modules Here we briefly consider the examples alluded to in the first section. The examples below are proved by applying the expression 2.4 above. It is rather immediate to see that by taking \(\mathcal{D}\subset\mathcal{C}\) to be the subcategory of finite groups in the category of all groups, the \(\mathcal{D}\)-completion \(T_{\mathcal{D}}\) is canonically isomorphic to the (discrete!) pro-finite completion functor on groups. Similarly, when \(\mathcal{D}\subset\mathcal{C}\) is the subcategory of nilpotent groups in the category of all groups. 
Similarly, for the completion of an \(A-\)module \(M\), with respect to an ideal \(I\subseteq A\) in a ring \(A:\) Namely, \(M\to limM/I^{k}M\). In the above example, the fact that \(\mathcal{D}\) is a large category can be dealt with by noticing that for each \(A-\)module \(M\), the tower of quotients \(M\to(M/I^{k}M)_{k}\) is co-final in the category \(M\downarrow\mathcal{D}\), appearing in 2.4. In case the ring \(A=K\) is a field the usual double dual functor of a \(K-\) vector space \(V\to V^{**}\) appears as a terminal monad \(T_{K}\), since the double dual in 6.2 above is reduce here to \(V^{**}\). ## 6. Completions and operads In this section, we continue to assume that \(\mathcal{C}\) is closed under limits. ### Objects with an action of a monoid Let \(M\) be a monoid. An \(M\)-object in \(\mathcal{C}\) is an object \(c\) endowed by a homomorphism of monoids \(f^{c}:M\to\mathsf{End}(c)\). If \(X\) is an \(M\)-set and \(c\) is an \(M\)-object, we define the _hom-object_ over \(M\) as an equalizer \[\mathsf{hom}_{M}(X,c)=\mathsf{eq}(\sigma,\tau:c^{X}\rightrightarrows(c^{X})^{ M}), \tag{6.1}\] where \(\sigma_{m}=c^{f^{X}(m)}\) and \(\tau_{m}=(f^{c}(m))^{X}\) for any \(m\in M\). If \(\mathcal{C}\) is a category of sets, \(\mathsf{hom}_{M}(X,c)\) coincides with the ordinary hom-set in the category of \(M\)-sets. For any two objects \(c,d\) from \(\mathcal{C}\) the hom-set \(\mathcal{C}(c,d)\) has a natural structure of \(\mathsf{End}(d)\)-set defined by the composition. Then Corollary 4.2 can be reformulated as \[T_{d}\cong\mathsf{hom}_{\mathsf{End}(d)}(\mathcal{C}(-,d),d). \tag{6.2}\] ### Objects with an action of an operad Let \(O\) be an operad (of sets). For an object \(c\) we denote by \(\mathsf{O}(c)\) the endomorphism operad of \(c\), whose \(n\)-th component is \(\mathsf{O}(c)_{n}=\mathcal{C}(c^{n},c)\). An \(O\)-algebra in \(\mathcal{C}\) is an object \(c\) endowed by a morphism \(f^{c}:O\to\mathsf{O}(c)\). If \(X\) is an \(O\)-algebra in the category of sets and \(c\) is an \(O\)-algebra in \(\mathcal{C}\), we defined the hom-object over \(O\) as an equaliser \[\mathsf{hom}_{O}(X,c)=\mathsf{eq}(\sigma,\tau:c^{X}\rightrightarrows\prod_{n= 0}^{\infty}(c^{X^{n}})^{O_{n}}), \tag{6.3}\] where \(\sigma\) and \(\tau\) are defined so that \(\sigma_{n,o}=c^{f^{X}_{n}(o)}:c^{X}\to c^{X^{n}}\) and \[\tau_{n,o,x_{1},\ldots,x_{n}}=f^{c}_{n}(o)\circ(\mathsf{pr}_{x_{1}},\ldots, \mathsf{pr}_{x_{n}}):c^{X}\to c \tag{6.4}\] for any \(n\geq 0\), \(o\in O_{n}\) and \(x_{1},\ldots,x_{n}\in X\). Here we denote by \((\mathsf{pr}_{x_{1}},\ldots,\mathsf{pr}_{x_{n}}):c^{X}\to c^{n}\) the morphism with components \(\mathsf{pr}_{x_{i}}\). For the special case \(n=0\) we have \(X^{n}=1=\{0\}\), \(c^{n}=1\), and \(\tau_{0,o}:c^{X}\to c\) is the composition of \(c^{X}\to 1\) and \(f^{X}_{0}(o):1\to c\). Note that \[\sigma_{n,o,x_{1},\ldots,x_{n}}=\mathsf{pr}_{f^{X}_{n}(o)(x_{1},\ldots,x_{n})}: c^{X}\to c. \tag{6.5}\] For any \(n\geq 0\) we also consider \[\mathsf{hom}^{n}_{O}(X,c)=\mathsf{eq}(\sigma_{n},\tau_{n}:c^{X}\rightrightarrows( c^{X^{n}})^{O_{n}}) \tag{6.6}\] and \[\mathsf{hom}^{\otimes n}_{O}(X,c)=\mathsf{eq}(\sigma_{\leq n},\tau_{\leq n}:c ^{X}\rightrightarrows\prod_{i=0}^{n}(c^{X^{i}})^{O_{i}}). \tag{6.7}\] The projection \(\prod_{i=0}^{n}(c^{X^{i}})^{O_{i}}\to\prod_{i=0}^{n-1}(c^{X^{i}})^{O_{i}}\) induces a morphism \[\mathsf{hom}^{\otimes n}_{O}(X,c)\longrightarrow\mathsf{hom}^{\otimes n-1}_{O} (X,c). 
\tag{6.8}\] Since \(\prod_{i=0}^{\infty}(c^{X^{i}})^{O_{i}}=\varprojlim_{n}\ \prod_{i=0}^{n}(c^{X^{i}})^{O_{i}}\), using that limits commute with limits, we obtain \[\mathsf{hom}_{O}(X,c)=\varprojlim_{n}\ \mathsf{hom}^{\otimes n}_{O}(X,c). \tag{6.9}\] ### Completion with respect to a power For an object \(d\) of \(\mathcal{C}\) we denote by \(\mathsf{O}^{+}(d)\) the suboperad of the endomorphism operad \(\mathsf{O}(d)\) such that \[\mathsf{O}^{+}(d)_{0}=\emptyset,\qquad\quad\mathsf{O}^{+}(d)_{n}=\mathsf{O}(d)_{n} \tag{6.10}\] for \(n\geq 1\). For any two objects \(c,d\) of \(\mathcal{C}\) there is a natural structure of \(\mathsf{O}^{+}(d)\)-algebra on the set \(\mathcal{C}(c,d):\) for any \(\alpha:d^{n}\to d\) we consider the map \[f(\alpha):\mathcal{C}(c,d)^{n}\cong\mathcal{C}(c,d^{n})\xrightarrow{\mathcal{C}(c,\alpha)}\mathcal{C}(c,d). \tag{6.11}\] For any object \(d\) of \(\mathcal{C}\) we consider the functor \(T_{d^{n}}\) of \(d^{n}\)-completion. We also consider \[T_{d^{+}}=T_{\{d^{n}|n\geq 1\}},\qquad\quad T_{d^{\bullet}}=T_{\{d^{n}|n\geq 0\}}. \tag{6.12}\] This subsection is devoted to the proof of the following theorem. **Theorem 6.1**.: _Let \(\mathcal{C}\) be a complete category and \(d\) an object of \(\mathcal{C}\). Then for \(n\geq 1\), there are isomorphisms of co-augmented functors_ \[T_{d^{n}}\cong\mathsf{hom}_{\mathsf{O}^{+}(d)}^{n}(\mathcal{C}(-,d),d)\cong\mathsf{hom}_{\mathsf{O}^{+}(d)}^{\leqslant n}(\mathcal{C}(-,d),d), \tag{6.13}\] \[T_{\{1,d^{n}\}}\cong\mathsf{hom}_{\mathsf{O}(d)}^{\leqslant n}(\mathcal{C}(-,d),d), \tag{6.14}\] \[T_{d^{+}}\cong\mathsf{hom}_{\mathsf{O}^{+}(d)}(\mathcal{C}(-,d),d), \tag{6.15}\] \[T_{d^{\bullet}}\cong\mathsf{hom}_{\mathsf{O}(d)}(\mathcal{C}(-,d),d), \tag{6.16}\] _where the augmentations of the right-hand functors are induced by the morphism \(\tilde{\eta}_{c}:c\to d^{\mathcal{C}(c,d)}\) with components \((\tilde{\eta}_{c})_{\alpha}=\alpha\)._ **Remark 6.2**.: An analog, and potentially a special case, of this formula, within the \(\infty\)-category of simplicial sets, appears in Mandell's theorem, [15] and [4], Proposition 4.4. Here, the operadic double-dual appears as a version of homological \(p\)-completion. It gives the terminal functor that preserves certain \(p\)-adic Eilenberg-MacLane spaces. In order to prove this theorem, we need to prove several lemmas. **Lemma 6.3**.: _For \(n\geq 0\) and a morphism \(\varphi:e\to d^{\mathcal{C}(c,d)},\) the diagram_ \[e\overset{\varphi}{\to}d^{\mathcal{C}(c,d)}\underset{\tau_{n}}{\overset{\sigma_{n}}{\rightrightarrows}}\left(d^{\mathcal{C}(c,d)^{n}}\right)^{\mathcal{C}(d^{n},d)} \tag{6.17}\] _is commutative (\(\tau_{n}\varphi=\sigma_{n}\varphi\)) if and only if for any morphism \(\alpha:d^{n}\to d\) and any morphism \(\beta:c\to d^{n}\) we have_ \[\varphi_{\alpha\circ\beta}=\alpha\circ(\varphi_{\beta_{1}},\dots,\varphi_{\beta_{n}}). \tag{6.18}\] Proof.: The components of the morphisms \(\tau_{n,\alpha},\sigma_{n,\alpha}\) are \(\sigma_{n,\alpha,\beta_{1},\dots,\beta_{n}}=\mathsf{pr}_{\alpha\circ\beta}\) and \(\tau_{n,\alpha,\beta_{1},\dots,\beta_{n}}=\alpha\circ(\mathsf{pr}_{\beta_{1}},\dots,\mathsf{pr}_{\beta_{n}}).\) The assertion follows. 
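Before turning to the remaining lemmas, the simplest instance of these double-dual descriptions can be computed by brute force. The Python sketch below is purely illustrative (the finite encodings and helper names are ours): it enumerates the equalizer of Corollary 4.2, equivalently \(\mathsf{hom}_{\mathsf{End}(d)}(\mathcal{C}(c,d),d)\) from (6.2), in \(\mathsf{Set}\) with \(d=2\) and \(c\) a three-element set. By Proposition 5.3 the equivariant maps found are exactly the ultrasets on \(c\), and indeed there are \(2^{3}=8\) of them.

```python
from itertools import product

X = (0, 1, 2)
two = (0, 1)

maps_X_to_2 = list(product(two, repeat=len(X)))   # the 8 maps X -> 2
end_2 = list(product(two, repeat=2))              # the 4 maps 2 -> 2, i.e. End(2)

def compose(e, f):
    """Post-compose f: X -> 2 with e: 2 -> 2."""
    return tuple(e[v] for v in f)

# Equalizer of (6.2): maps phi: Set(X,2) -> 2 with phi(e o f) = e(phi(f))
# for every e in End(2) and every f in Set(X,2).
equivariant = []
for values in product(two, repeat=len(maps_X_to_2)):
    phi = dict(zip(maps_X_to_2, values))
    if all(phi[compose(e, f)] == e[phi[f]] for e in end_2 for f in maps_X_to_2):
        equivariant.append(phi)

print(len(equivariant))   # 8, the number of ultrasets on a 3-element set
```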
**Lemma 6.4**.: _For \(n\geq 0\) the diagram_ \[c\overset{\tilde{\eta}_{c}}{\longrightarrow}d^{\mathcal{C}(c,d)}\underset{\tau_{n}}{\overset{\sigma_{n}}{\rightrightarrows}}\left(d^{\mathcal{C}(c,d)^{n}}\right)^{\mathcal{C}(d^{n},d)} \tag{6.19}\] _is commutative (\(\sigma_{n}\tilde{\eta}_{c}=\tau_{n}\tilde{\eta}_{c}\)), where \((\tilde{\eta}_{c})_{\alpha}=\alpha.\)_ Proof.: It follows from Lemma 6.3. **Lemma 6.5**.: _For \(n\geq 0\) the diagram_ \[d^{n}\overset{\tilde{\eta}_{d^{n}}}{\longrightarrow}d^{\mathcal{C}(d^{n},d)}\underset{\tau_{n}}{\overset{\sigma_{n}}{\rightrightarrows}}\left(d^{\mathcal{C}(d^{n},d)^{n}}\right)^{\mathcal{C}(d^{n},d)} \tag{6.20}\] _is an equalizer._ Proof.: By Lemma 6.4 we have \(\sigma_{n}\tilde{\eta}=\tau_{n}\tilde{\eta}.\) Let \(\varphi:e\to d^{\mathcal{C}(d^{n},d)}\) be a map that equalizes \(\sigma_{n}\) and \(\tau_{n}\). Lemma 6.3 implies that for any \(\alpha:d^{n}\to d\) and \(\beta:d^{n}\to d^{n}\) we have \(\varphi_{\alpha\circ\beta}=\alpha\circ(\varphi_{\beta_{1}},\dots,\varphi_{\beta_{n}}).\) In particular, if we take \(\beta=\mathsf{id}_{d^{n}}\), we get \[\varphi_{\alpha}=\alpha\circ(\varphi_{\mathsf{pr}_{1}},\dots,\varphi_{\mathsf{pr}_{n}}). \tag{6.21}\] So, if we take \(\psi=(\varphi_{\mathsf{pr}_{1}},\dots,\varphi_{\mathsf{pr}_{n}})\), we obtain \(\tilde{\eta}\circ\psi=\varphi.\) Let us prove that such \(\psi\) is unique. Assume that \(\psi:e\to d^{n}\) is a morphism such that \(\tilde{\eta}\circ\psi=\varphi.\) Then \(\varphi_{\alpha}=\alpha\circ\psi.\) It follows that \(\psi_{i}=\varphi_{\mathsf{pr}_{i}}.\) **Lemma 6.6**.: _For \(1\leq n^{\prime}\leq n\), if \(\varphi:e\to d^{\mathcal{C}(c,d)}\) is a morphism such that \(\sigma_{n}\varphi=\tau_{n}\varphi\), then \(\sigma_{n^{\prime}}\varphi=\tau_{n^{\prime}}\varphi.\)_ Proof.: It is enough to prove for \(n^{\prime}=n-1\geq 1\). Take morphisms \(\alpha:d^{n-1}\to d\) and \(\beta:c\to d^{n-1}\). Since \(d^{n}=d^{n-1}\times d\) we have a projection \(\mathsf{pr}_{\leq n-1}:d^{n}\to d^{n-1}\) and we can take a map \(\delta:d^{n-1}\to d^{n}\) such that \(\mathsf{pr}_{\leq n-1}\circ\delta=\mathsf{id}_{d^{n-1}}\) and \(\mathsf{pr}_{n}\circ\delta=\mathsf{pr}_{n-1}\). We set \(\alpha^{\prime}=\alpha\circ\mathsf{pr}_{\leq n-1}:d^{n}\to d\) and \(\beta^{\prime}=\delta\circ\beta:c\to d^{n}\). Then \(\alpha\circ\beta=\alpha^{\prime}\circ\beta^{\prime}\); \(\beta^{\prime}_{i}=\beta_{i}\) for \(1\leq i\leq n-1\) and \(\beta^{\prime}_{n}=\beta_{n-1}\). Then by the assumption and Lemma 6.3 we have \[\varphi_{\alpha\circ\beta}=\varphi_{\alpha^{\prime}\circ\beta^{\prime}}=\alpha^{\prime}\circ(\varphi_{\beta^{\prime}_{1}},\ldots,\varphi_{\beta^{\prime}_{n}})=\alpha\circ(\varphi_{\beta_{1}},\ldots,\varphi_{\beta_{n-1}}). \tag{6.22}\] The assertion follows. **Remark 6.7**.: Generally the equation \(\sigma_{n}\varphi=\tau_{n}\varphi\) for \(n\geq 1\) does not imply \(\sigma_{0}\varphi=\tau_{0}\varphi\). The assumption \(n^{\prime}\geq 1\) of Lemma 6.6 is essential. Proof of Theorem 6.1.: Lemma 6.6 implies that \[\mathsf{hom}_{\mathsf{O}^{+}(d)}^{n}(\mathcal{C}(-,d),d)\cong\mathsf{hom}_{\mathsf{O}^{+}(d)}^{\leq n}(\mathcal{C}(-,d),d). \tag{6.23}\] Let us prove \(T_{d^{n}}\cong\mathsf{hom}_{\mathsf{O}^{+}(d)}^{\leq n}(\mathcal{C}(-,d),d)\). By Lemma 6.5 we have \[\eta:d^{n}\cong\mathsf{hom}_{\mathsf{O}^{+}(d)}^{\leq n}(\mathcal{C}(d^{n},d),d). \tag{6.24}\] So we need to prove the universal property. Consider a co-augmented functor \(F\) such that \(d^{n}\in\mathsf{Inv}(F)\). 
Since \(d\) is a retract of \(d^{n}\), we have \(d\in\mathsf{Inv}(F)\). Therefore there is a unique morphism of co-augmented functors \(\theta:F\to\mathsf{hom}_{\mathsf{End}(d)}(\mathcal{C}(-,d),d)\). Taking the composition with the morphism \(\mathsf{hom}_{\mathsf{End}(d)}(\mathcal{C}(-,d),d)\to d^{\mathcal{C}(-,d)}\), we obtain a morphism \(\varphi:F\to d^{\mathcal{C}(-,d)}\). Then, in order to prove that \(T_{d^{n}}\cong\mathsf{hom}_{\mathsf{O}^{+}(d)}^{n}(\mathcal{C}(-,d),d)\) it is sufficient to prove that \(\sigma_{n^{\prime}}\varphi_{c}=\tau_{n^{\prime}}\varphi_{c}\) for any \(c\), any \(1\leq n^{\prime}\leq n\) and any morphism of co-augmented functors \(\varphi:F\to d^{\mathcal{C}(-,d)}\). By Lemma 6.6 it is enough to prove that \(\sigma_{n}\varphi_{c}=\tau_{n}\varphi_{c}\). Let us prove that \(\sigma_{n}\varphi_{c}=\tau_{n}\varphi_{c}\) for any \(c\). Take \(\alpha:d^{n}\to d\) and \(\beta:c\to d^{n}\). Note that \(\varphi\eta^{F}=\tilde{\eta}\). The commutative diagram (6.25) shows that \[(\varphi_{c})_{\alpha\circ\beta}=\alpha\circ(\eta^{F}_{d^{n}})^{-1}\circ F(\beta). \tag{6.26}\] And the diagram (6.27) implies that \[((\eta^{F}_{d^{n}})^{-1}\circ F(\beta))_{i}=(\varphi_{c})_{\beta_{i}}. \tag{6.28}\] Therefore, we have \[(\varphi_{c})_{\alpha\circ\beta}=\alpha\circ((\varphi_{c})_{\beta_{1}},\ldots,(\varphi_{c})_{\beta_{n}}). \tag{6.29}\] Then Lemma 6.3 implies that \(\sigma_{n}\varphi_{c}=\tau_{n}\varphi_{c}\). This implies that \(T_{d^{n}}=\mathsf{hom}_{\mathsf{O}^{+}(d)}^{n}(\mathcal{C}(-,d),d)\). Now we prove that \(T_{\{1,d^{n}\}}=\mathsf{hom}_{\mathsf{O}(d)}^{\leq n}(\mathcal{C}(-,d),d).\) The proof is similar. We just need to note that if \(\eta_{1}^{F}:1\to F(1)\) is an isomorphism, then for any natural transformation \(\varphi:F\to d^{\mathcal{C}(-,d)},\) any \(\alpha:1\to d\) and \(\beta:c\to 1\) we have a diagram similar to (6.25). (6.30) This diagram implies that \[(\varphi_{c})_{\alpha\circ\beta}=\alpha\circ(), \tag{6.31}\] where \(():F(c)\to 1.\) Then Lemma 6.3 implies that \(\sigma_{0}\varphi_{c}=\tau_{0}\varphi_{c}\) and the rest of the proof is the same as for \(T_{d^{n}}.\) The fact that \(T_{d^{+}}=\mathsf{hom}_{\mathsf{O}^{+}(d)}(\mathcal{C}(-,d),d)\) follows from the equations \[T_{d^{+}}=\varprojlim T_{d^{n}}\] and \[\mathsf{hom}_{\mathsf{O}^{+}(d)}(\mathcal{C}(-,d),d)=\varprojlim\mathsf{hom}_{\mathsf{O}^{+}(d)}^{\leq n}(\mathcal{C}(-,d),d).\] Similarly we have \[T_{d^{\bullet}}=\varprojlim T_{\{1,d^{n}\}}=\mathsf{hom}_{\mathsf{O}(d)}(\mathcal{C}(-,d),d).\] ## 7. An idempotent pro-completion tower We end with a few comments on a pro-idempotent monad \(M_{\bullet}\) associated with a given monad \(M.\) Recall from [1] that the Bousfield-Kan \(R\)-homology completion tower \(R_{\bullet}X,\) associated with a topological space \(X,\) is pro-idempotent. In addition, and as a consequence, its \(R\)-homology \(H_{*}(R_{\bullet}X,R)\) is naturally pro-isomorphic to the homology \(H_{*}(X,R)\) of \(X.\) One would like to have a similar result for a general monad \(M.\) This is possible, the price being the replacement of the tot-tower \(tot_{\bullet}X\) with a slightly more involved tower defined inductively. This line was considered, with clear results, for a general co-augmented functor in the homotopy category of spaces by A. Libman; compare [12]. For a general subcategory \(\mathcal{D}\subseteq\mathcal{C}\) of a nice category \(\mathcal{C},\) one can construct the right Kan extension functor \(T_{\mathcal{D}}:\mathcal{C}\to\mathcal{C}\) as above. This functor is not idempotent. 
However, we can consider a "refined" right extension \[T_{\mathcal{D}}^{pro}:\mathcal{C}\to pro-\mathcal{D}.\] This last functor associates, as usual, to each \(X\in\mathcal{C}\) a diagram of objects in \(\mathcal{D}\) indexed by the comma category \(\mathcal{D}_{X/}.\) The limit, in \(\mathcal{C},\) of this diagram of objects in \(\mathcal{D}\) is the value of the right Kan extension, \(T_{\mathcal{D}}(X),\) on the object \(X,\) see equation 2.4 above. Now, for the diagram \(X/\mathcal{D}\to\mathcal{D}\) to define a pro-object it must be filtering. We therefore assume that \(\mathcal{D}\) is closed under finite limits, i.e. pullbacks; in this case the diagram \(X/\mathcal{D}\to\mathcal{D}\) of objects in \(\mathcal{D}\) is filtering. Hence the above-defined diagram \(T_{\mathcal{D}}^{pro}\) is a pro-object in \(pro-\mathcal{C}.\) Moreover, the functor can be directly prolonged to a functor: \[T_{\mathcal{D}}^{pro}:pro-\mathcal{C}\to pro-\mathcal{C}\] which deserves the name "tautological pro-completion" with respect to the inclusion \(\mathcal{D}\subseteq\mathcal{C}.\) As such it is clearly pro-idempotent. For example, if \(\mathcal{D}\) is the full subcategory of _Groups_ consisting of finite groups, one gets the usual diagram \(G\to(\Gamma_{i})_{i}\) of all finite groups under a given group \(G.\) Our aim is to show that in case \(\mathcal{D}\) is the closure under finite limits of the image of a monad \(M,\) there is a small variant on the Bousfield-Kan tower associated with \(M,\) which is pro-equivalent to this canonical pro-object \(T_{\mathcal{D}}^{pro}.\) Moreover, that pro-object is the _terminal pro-monad_ among those that preserve the closure of \(Im\,M\) under finite limits. Note that the image of \(M\) is not, in general, closed under finite limits. Therefore, the following construction, which is valid for any \(M,\) will be shown to be equivalent to the above \(T_{\overline{ImM}}^{pro}\) where the subscript \(\overline{ImM}\) denotes the closure of the image of the monad \(M\) under finite limits. Consider the inductively defined tower of injective maps of terminal monads: \[M_{0}=M;\qquad M_{i+1}\coloneqq T_{M_{i}}\to M_{i}\to\cdots\to M.\] To continue, we notice that under no additional assumptions on \(\mathcal{C},\) one has associated with \(M\) an idempotent pro-monad tower, \((M_{i})_{i<\omega}.\) The limit of this tower of monads is precisely the terminal monad that preserves the closure of \(ImM,\) the image of our monad \(M,\) under all _finite_ limits. By 3.5, each \(M_{i}\) in this tower not only preserves \(M_{j}\) for \(j<i,\) but also any finite limit of objects of the form \(M_{j}(Y_{j})\) for \(Y_{j}\in\mathcal{C}.\) It follows that if \(X\in\mathcal{C}\) is in the closure under finite limits of the image of \(M,\) then for \(i\) large enough there is an equivalence \(X\cong M_{i}(X).\) Since each \(M_{i}(X)\) is, by construction, an element in the above finite-limit closure of \(ImM\) in \(\mathcal{C},\) we therefore have a pro-idempotent tower: \[M_{\bullet}\cong M_{\bullet}\circ M_{\bullet}.\] In addition, using the argument in [6] and [7] it follows that for every monad \(M\) one has a pro-equivalence: \[M\cong M(M_{\bullet}).\] In other words, the tower \(M_{\bullet}\) satisfies some of the basic properties of the classical Bousfield-Kan \(R\)-homology completion tower as given by [1]. 
The limit, \(M_{\infty}=lim_{i}M_{i},\) of the tower \(M_{i}\) is the terminal monad among all co-augmented functors that preserve \(\overline{ImM},\) the closure of the image of \(M\) under finite limits. **Remark 7.1**.: Following Fakir [8], one can continue this tower of inclusions, see 3.7 above, transfinitely. Under suitable, rather weak, assumptions on \(\mathcal{C}\) this tower converges to an idempotent monad \(L_{M}.\) This idempotent monad is easily seen to be the terminal monad \(T_{\overline{\mathcal{D}}}\) where now \(\overline{\mathcal{D}}\) denotes the closure of the image of \(M\) under _all limits._ In the infinity category of spaces the classical example is the map \(L_{HR}\to R_{\infty}\) from the idempotent Bousfield homological localization to the \(R\)-completion functor on spaces, compare [7]. **Examples:** In the category of groups we can consider the terminal monad \(T_{G}\) associated with a group \(G,\) so that \(T_{G}(G)\cong G.\) It is given as above by the double dual \[T_{G}(\Gamma)\cong map_{End\,G}(map(\Gamma,G),G).\] This is a subgroup of \(G^{l}\) for \(l=|map(\Gamma,G)|,\) the cardinality of the set of homomorphisms. For the group of integers \(\Gamma=\mathbb{Z},\) we have \(T_{G}(\mathbb{Z})\subseteq G^{|G|}.\) The transfinite tower of Fakir stabilizes at \(\mathbb{Z}\to L_{G}(\mathbb{Z})\cong C,\) where \(C\) is a cyclic group whose order is the LCM of the orders of all elements of \(G,\) since the image of the generator of \(\mathbb{Z}\to T_{G}(\mathbb{Z})\) is the diagonal element \((g_{g})_{g\in G}.\) Namely, \(L_{G}(\mathbb{Z})\) is the image subgroup of \(\mathbb{Z}\to T_{G}(\mathbb{Z}).\) This \(L_{G}\) localization, or reflective functor, can be characterized as terminal among those that preserve the closure in the category of groups of \(\{G\}\) under all limits, mapping \(\mathcal{C}\) into that closure, or as the initial idempotent monad that turns every \(map(-,G)\)-equivalence (i.e. a sort of 'G-cohomology equivalence') into an equivalence. _An infinity-categorical example._ See [9]. The classical \(R_{\infty}\) of Bousfield and Kan comes from the monad \(R\) on the \(\infty\)-category of topological spaces. It preserves not only spaces of the form \(RX\), i.e. \(R\)-GEMs, but also \(R\)-polyGEM spaces, i.e. the closure of \(R\)-GEMs under finite limits. In this special case the construction of the terminal \(R_{\infty}\) is somewhat simpler than the above inductive tower \((M_{i})_{i}\). The precise meaning or value of this tower for the (discrete) pro-finite completion of groups, considered as a monad \(\mathcal{M}(G)=\widehat{G}\), is not immediately clear. For every group \(G\), the monad \(\mathcal{M}_{\infty}G\) is a natural subgroup of the pro-finite completion \(\widehat{G}\) of \(G\); with the property that it is idempotent (\(\mathcal{M}_{\infty}\mathcal{M}_{\infty}(G)\cong\mathcal{M}_{\infty}(G)\)) if \(G\) is a finite limit of (discrete) profinite groups. In fact, since \(\mathcal{M}_{\infty}\) preserves all finite limits in \(Im\mathcal{M}\), see above, we have that it preserves any finite limit of profinite groups. Note, however, that for a finitely presented group \(G\), one has an isomorphism \(\mathcal{M}_{\infty}G=\widehat{G}\), since for such a group one has an isomorphism \(\widehat{G}\cong\widehat{\widehat{G}}\), namely, the completion \(\mathcal{M}=\widehat{(-)}\) is idempotent on this subcategory of groups. 
The transfinite intersection, or limit, \(lim_{\alpha}\mathcal{M}_{\alpha}\) over all ordinals is also an interesting subgroup of the pro-finite completion, which is just \(\widehat{G}\) itself if \(G\) is finitely presented.
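To make the integer example above concrete for a finite group \(G\): the order of the cyclic group \(L_{G}(\mathbb{Z})\) is the least common multiple of the orders of the elements of \(G\). The short Python sketch below is only an illustration (the choice \(G=S_{3}\), encoded as permutation tuples, is ours) and computes this order directly.

```python
from itertools import permutations
from math import lcm

def order(p):
    """Order of a permutation p of {0, ..., n-1}, given as a tuple of images."""
    n, k, q = len(p), 1, p
    while q != tuple(range(n)):
        q = tuple(p[i] for i in q)   # q becomes the (k+1)-st power of p
        k += 1
    return k

G = list(permutations(range(3)))            # the symmetric group S_3
exponent = lcm(*(order(p) for p in G))      # LCM of all element orders
print(exponent)                             # 6, so L_G(Z) is cyclic of order 6 here
```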
2303.18167
Accounting for Vibration Noise in Stochastic Measurement Errors
The measurement of data over time and/or space is of utmost importance in a wide range of domains from engineering to physics. Devices that perform these measurements therefore need to be extremely precise to obtain correct system diagnostics and accurate predictions, consequently requiring a rigorous calibration procedure which models their errors before being employed. While the deterministic components of these errors do not represent a major modelling challenge, most of the research over the past years has focused on delivering methods that can explain and estimate the complex stochastic components of these errors. This effort has allowed to greatly improve the precision and uncertainty quantification of measurement devices but has this far not accounted for a significant stochastic noise that arises for many of these devices: vibration noise. Indeed, having filtered out physical explanations for this noise, a residual stochastic component often carries over which can drastically affect measurement precision. This component can originate from different sources, including the internal mechanics of the measurement devices as well as the movement of these devices when placed on moving objects or vehicles. To remove this disturbance from signals, this work puts forward a modelling framework for this specific type of noise and adapts the Generalized Method of Wavelet Moments to estimate these models. We deliver the asymptotic properties of this method when applied to processes that include vibration noise and show the considerable practical advantages of this approach in simulation and applied case studies.
Lionel Voirol, Davide A. Cucci, Mucyo Karemera, Wenfei Chu, Roberto Molinari, Stéphane Guerrier
2023-03-31T16:02:42Z
http://arxiv.org/abs/2303.18167v1
# Accounting for Vibration Noise ###### Abstract The measurement of data over time and/or space is of utmost importance in a wide range of domains from engineering to physics. Devices that perform these measurements therefore need to be extremely precise to obtain correct system diagnostics and accurate predictions, consequently requiring a rigorous calibration procedure which models their errors before being employed. While the deterministic components of these errors do not represent a major modelling challenge, most of the research over the past years has focused on delivering methods that can explain and estimate the complex stochastic components of these errors. This effort has allowed to greatly improve the precision and uncertainty quantification of measurement devices but has this far not accounted for a significant stochastic noise that arises for many of these devices: vibration noise. Indeed, having filtered out physical explanations for this noise, a residual stochastic component often carries over which can drastically affect measurement precision. This component can originate from different sources, including the internal mechanics of the measurement devices as well as the movement of these devices when placed on moving objects or vehicles. To remove this disturbance from signals, this work puts forward a modelling framework for this specific type of noise and adapts the Generalized Method of Wavelet Moments to estimate these models. We deliver the asymptotic properties of this method when applied to processes that include vibration noise and show the considerable practical advantages of this approach in simulation and applied case studies. Keywords: Wavelet Variance, Inertial Measurement Unit, Generalized Method of Wavelet Moments, Stochastic Error Acknowledgements: S. Guerrier, D. A. Cucci, M. Karemera and W. Chu were supported by the SNSF Grants #176843 and #211007 and the Innosuisse Grants #37308.1 IP-ENG and #53622.1 IP-ENG. L. Voirol was supported by the SNSF Grant #182684. Roberto Molinari was also partially supported by NSF Grant SES-2150615. ## 1 Introduction The task of measuring and predicting the evolution of different physical systems passes through the precision of the instruments built to carry out such a task. In order to measure the evolution of these systems, the devices need to perform repeated measurements (often at high frequency) and can suffer from errors that can accumulate over time and, consequently, have extreme negative impacts in many fields (see Titterton et al., 2004; Webster and Eren, 2018). For this purpose, espe cially when dealing with high-precision measurement, these devices need to go through a rigorous calibration procedure which, in the majority of cases, is performed within a certain time-frame and in controlled (static) settings. These procedures are mainly aimed at quantifying and characterizing the measurement error of these devices so as to explain and model them, subsequently allowing to remove or filter out these errors when actually employed in the real world as well as to deliver reliable uncertainty metrics. An example of such a procedure can be found in inertial sensor calibration where these devices are widely and increasingly being employed in different areas, from robotics to unmanned navigation, because of their low cost and light weight (see e.g., El-Sheimy and Youssef, 2020). 
Due to these characteristics, inertial sensors often suffer from important measurement errors which, like many phenomena measured over time, have deterministic and stochastic components where the latter often have a considerable impact in the overall measurement error (see e.g., El-Sheimy et al., 2007). Indeed, while various statistical or machine learning techniques can be employed to explain and remove the deterministic component based also on physical models, the stochastic component (which is the focus of this work) still represents a modelling challenge under many aspects and is essential to quantify the measurement uncertainty. More specifically, the stochastic measurement error of these instruments (such as that of inertial sensors) is frequently characterized by a complex spectral structure generally explained by _composite_ models that are constituted by the sum of different stochastic error processes which contribute to the overall observed error (many state-space models take on this form, see e.g., El-Sheimy et al., 2007; Stebler et al., 2014). The different underlying (latent) stochastic error processes can either have a direct physical justification (such as a random error accumulation represented by a random walk) and/or can be extremely useful in closely approximating the overall error structure. An example of such composite processes is the renown class of Auto-Regressive-Moving-Average models which can be represented by the sum of individual white noise and first-order autoregressive processes, as well as the larger class of (linear) state-space models. It must be underlined that the stochastic errors of these devices are commonly observed in static conditions and, hence, the above composite models are chosen and estimated for this specific setting. However, in many applied settings, there are circumstances where there are additional noise components that are unaccounted for by these composite models. Indeed, there are often additional structures in the noise that, as a result of sources of vibration, are of cyclical nature and can arise for different reasons (see e.g., IEEE, 2006). For example, the measurement devices themselves can produce vibrations due to their electric and mechanical characteristics when their power is on, thereby requiring the inclusion of an additional source of _internal_ noise to the composite model structure. Moreover, it has been highlighted how the stochastic properties of the device errors vary as a function of _external_ conditions, such as, for example, the ambient temperature and the sensor motion (see e.g., Stebler et al., 2015). For instance, a device may exhibit higher noise levels or bias instability during highly dynamic motion or in presence of intense vibrations, see for example Radi et al. (2018). More specifically, the impact of these dynamics on the measurement errors has been assessed in controlled environments through calibration instruments such as rotation or linear tables which are used to move these devices according to known and repeated patterns for a sufficient amount of time. Imperfections on the calibration instruments, such as rotation table control loops, make this process very difficult and spurious, entailing periodic disturbances that are left in the error signal and need to be removed through stochastic modeling. 
To date, the estimation of the stochastic error component has been addressed by employing complex composite models which however do not include processes that describe the impact of vibrations on the measurements. In particular, even estimating these commonly employed composite models (without vibration) has represented an important computational challenge given the high-frequency and consequent length of the error signals that these devices record. For example, the Maximum Likelihood Estimator (MLE) is generally implemented through the use of an Extended Kalman Filter (EKF) and the Expectation-Maximization algorithm which both become numerically unstable or computationally prohibitive when considering the complexity of the composite models and the length of the error signals (see Stebler et al., 2014). Other more tractable techniques have been proposed and adopted over the past years but they often lack adequate statistical properties (see Guerrier et al., 2016). For this purpose, Guerrier et al. (2013) put forward the Generalized Method of Wavelet Moments (GMWM) which delivers a computationally feasible solution in these settings while preserving appropriate statistical properties. However, as stated earlier, none of these existing techniques have comprehensively addressed the presence of vibration noise in the stochastic signals. A substantial amount of literature has either underlined the need to find solutions to this problem or have put forward approaches that nevertheless do not adequately respond to the need to filter out this type of noise from device measurement errors. For example, in the context of stochastic calibration of Inertial Measurement Units (IMU), Capriglione et al. (2019) highlight that MicroElectroMechanical Systems (MEMS) are very sensitive to vibration via multiple experimental results and that adapted measurement procedures and corrections should be considered. Among the approaches which attempt to remove this source of noise, Gang et al. (2010) propose a mathematical modelling of the vibration signal and discuss conditions under which it can be filtered out using wavelet transforms, while Kang et al. (2013) propose a direct coning mitigation algorithm to test, estimate and compensate a sinusoidal component in the signal of a gyroscope whereas Ma et al. (2017) propose a gradient descent algorithm to correct vibration effects on an IMU. In all these cases the vibration error is treated as an "outlier" effect where the proposed methods aim to deliver adequate estimations that are in some way resistant to this effect. Hence, these methods are not tailored to the most common setting where the vibration noise is a structural component of the stochastic measurement errors in these devices. In this sense, to the best of the authors' knowledge, the only approach that aims at addressing this noise as a structural component of these measurement errors can be found in Ng and Pines (1997) or Bos et al. (2008, 2013) where however it is not considered as a stochastic component but as a deterministic one. Indeed, using the relationship between the Allan Variance (AV) and the spectral density function (see e.g., El-Sheimy et al., 2007), in Ng and Pines (1997) they derive a theoretical form of the AV for a sinusoidal process to consequently characterize the low- and high-frequency components of the measurement error in ring-laser gyroscopes. However this characterization was proven to be statistically inconsistent (see Guerrier et al., 2016). 
Finally, like the MLE methodology in Bos et al. (2008, 2013), a limitation of these approaches is that the sinusoidal process is assumed to be deterministic and, as a consequence, certain characteristics of this process (such as its frequency or phase) are assumed to be known which is rarely the case in practice. Following the above, in Section 2 this work firstly enriches the class of composite models used for stochastic modelling by adding one or more processes that adequately describe the impact of vibration on the stochastic error components of these measurement devices. Based on this, in Section 3 it then aims to make use of the GMWM to estimate the composite models that include the vibration components, studying its properties in presence of vibration noise and giving theoretical guarantees that support the validity of this approach. Finally Section 4 evaluates the numerical performance of the proposed method through different simulation studies while Section 5 highlights the advantages of this approach through the analysis of data issued from a low-cost inertial sensor in different dynamic conditions. Further discussions are presented in Section 6. The proofs of the theoretical results together with some additional information are collected in the appendix. ## 2 Modeling Vibration Noise We first clearly define a model that can adequately describe such a stochastic vibration effect over time. For this purpose, let \((S_{t})\), \(t\in\mathbb{Z}\), represent the process issued from the vibration source which we assume to be periodic. Based on this assumption, a natural candidate to model such a process is a wave function which, for this work, we parametrize as follows \[S_{t}:=\alpha\sin(\beta t+U), \tag{1}\] with amplitude \(\alpha\!\in\!\mathrm{I\!R}_{+}\!:=\!(0,+\infty)\), angular frequency \(\beta\in(0,\pi]\) and phase \(U\sim\mathcal{U}(0,2\pi)\). As confirmed by the application study presented in Section 5 (as well as other applications not presented in this work), this parametrization of the vibration noise appears to well represent the true noise observed in real-life applications. Within this work, we will refer to the process as a _sinusoidal_ process. It is possible to notice that the only stochastic component in the above parametrization is the uniform random variable \(U\) which randomly shifts the phase of the process. While other parametrizations of the trigonometric functions can obviously be considered, in many applications (including inertial measurement units) it is possible to observe roughly constant behaviours in terms of amplitude and frequency of the distinct vibration noises affecting each device, while the uncertainty usually lies in the phase of their periodicity. Having defined the model to characterize vibration noise, we can increase its flexibility by considering a summation of \(L\in\mathbb{N}\) independent latent sinusoidal processes given by \[\sum_{l=1}^{L}S_{t}^{(l)}:=\sum_{l=1}^{L}\alpha_{l}\sin(\beta_{l}t+U_{l}), \tag{2}\] where the parameters and the random phase are now indexed by \(l=1,\ldots,L\) denoting how they specifically characterize the \(l^{th}\) sinusoidal latent process. Indeed, due to the mechanical properties of a device or to the conditions under which it operates, there may be different sources of vibration affecting the measurement performance and such a composite process would address (or well approximate) the diversity of these vibration sources. 
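To fix ideas, the processes in (1) and (2) can be simulated directly. The following Python sketch is illustrative only (the parameter values and function names are ours): it draws one realization of a sum of \(L=2\) independent sinusoidal processes, each with its own amplitude, angular frequency and uniformly distributed phase.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000                                   # length of the simulated signal

def sinusoidal(alpha, beta, size, rng):
    """One realization of S_t = alpha * sin(beta * t + U), with U ~ Uniform(0, 2*pi)."""
    U = rng.uniform(0.0, 2.0 * np.pi)
    return alpha * np.sin(beta * np.arange(size) + U)

# Sum of L = 2 independent sinusoidal processes, as in (2).
S = sinusoidal(0.5, 0.30, T, rng) + sinusoidal(0.2, 1.10, T, rng)
```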
As mentioned in the introduction, devices are characterized by a multitude of measurement errors which in many cases are modelled by composite processes which, like the one defined above, consist in a summation of different independent processes representing different sources of error. A general class of composite processes which is used to model these errors is given by: \[Z_{t}:=W_{t}+Q_{t}+\sum_{k=1}^{K}Y_{t}^{(k)}+D_{t}+R_{t},\] where \((W_{t})\) represents a white noise (WN); \((Q_{t})\) is a quantization noise (QN) which is a rounding error process (see Papoulis and Unnikrishna Pillai, 2002); \((Y_{t}^{(k)})\) is the \(k^{th}\) causal first-order autoregressive (AR1) process out of a total of \(K\geq 1\) AR1 processes; \((D_{t})\) is a deterministic drift (DR) process; and \((R_{t})\) is a random walk (RW) process (see e.g., El-Sheimy et al., 2007; Guerrier et al., 2016, for more details). Noting that the sum of AR1 and WN processes delivers an Auto-Regressive-Moving-Average (ARMA) process (see Granger and Morris, 1976), this class of composite processes is extremely flexible since the subsets of models that originate from it can adequately describe or approximate the behaviour of the vast majority stationary signals. The goal of this work, as underlined in the introduction, is to reliably estimate the class of models \((Z_{t})\) while accounting for vibration noise which would be considered as a "nuisance" process. As a consequence, we aim to combine the above-defined classes of composite processes, i.e., \((S_{t})\) and \((Z_{t})\), to ensure that the additional sources of noise are addressed appropriately by considering a larger class of processes which is intuitively given by their summation, i.e., \[X_{t}:=W_{t}+Q_{t}+\sum_{k=1}^{K}Y_{t}^{(k)}+D_{t}+R_{t}+\sum_{l=1}^{L}S_{t}^{ (l)}. \tag{3}\] The first aspect to underline is that, from a practical perspective, one would not usually postulate a model based on _all_ the latent processes available in the class \((X_{t})\). Nevertheless, in various applied settings it may be necessary to make use of at least one of each latent process since the noise characterizing measurement devices can have a highly complex spectral behaviour. In particular, for this work we consider the sum of vibration noises in (3) to be a structural nuisance component that needs to be estimated to obtain adequate estimates of the processes that are specific to \((Z_{t})\). In this optic, the next section studies how the proposed methodology estimates these nuisance components whose form is given in (1). ## 3 Estimation Framework In this work we intend to make use of the GMWM which has been adapted to different time series estimation settings (see e.g., Xu et al., 2019; Guerrier et al., 2020, 2022). To define this framework, let \(F_{\mathbf{\theta}_{0}}\) represent the data-generating process with true parameter \(\mathbf{\theta}_{0}\in\mathbf{\Theta}\subset\mathrm{I\!R}^{p}\) which we aim to estimate and perform inference on. 
The GMWM delivers an estimator of \(\mathbf{\theta}_{0}\) based on the following generalized least-squares problem: \[\hat{\mathbf{\theta}}:=\operatorname*{argmin}_{\mathbf{\theta}\in\mathbf{\Theta}} \left\|\hat{\mathbf{\nu}}-\mathbf{\nu}(\mathbf{\theta})\right\|_{\mathbf{\Omega}}^{2}, \tag{4}\] where \(\|\mathbf{x}\|_{\mathbf{A}}^{2}:=\mathbf{x}^{T}\mathbf{A}\mathbf{x}\) with \(\mathbf{x}\in\mathrm{I\!R}^{J}\) and \(\mathbf{A}\in\mathrm{I\!R}^{J\times J}\); \(\hat{\mathbf{\nu}}\in\mathrm{I\!R}_{+}^{J}\) represents the Wavelet Variance (WV) estimated on the signal \((X_{t})_{t=1,\ldots,T}\); \(\mathbf{\nu}(\mathbf{\theta})=[\nu_{j}(\mathbf{\theta})]_{j=1,\ldots,J}\in\mathrm{I\!R}_{+}^{J}\) is the theoretical WV implied by the model of interest; and \(\mathbf{\Omega}\in\mathrm{I\!R}^{J\times J}\) is a positive-definite weighting matrix for which a good choice, for example, is the inverse of the covariance matrix of \(\hat{\mathbf{\nu}}\) (see e.g., Guerrier et al., 2013). More specifically, the WV is the variance of the wavelet coefficients \((\omega_{j,t})\) issued from a wavelet decomposition of the process \((X_{t})\) with relative scales of decomposition \(j=1,\ldots,J\), with \(J<\log_{2}(T)\), and has different advantageous properties for the analysis of time series (see e.g., Serroukh et al., 2000). The GMWM framework therefore relies firstly on the properties of the estimator of \(\mathrm{WV}\) \(\hat{\mathbf{\nu}}\), and then on those of the theoretical \(\mathrm{WV}\) \(\mathbf{\nu}(\mathbf{\theta})\) implied by the model \(F_{\mathbf{\theta}}\). More specifically, the properties of the estimator \(\hat{\mathbf{\nu}}\) were first studied in Percival (1995) and Serroukh et al. (2000) under a set of standard conditions for time series analysis, followed by the results of Xu et al. (2019) and Guerrier et al. (2022) allowing for statistical consistency (and asymptotic normality) of this estimator under weaker conditions. For this reason, given the new process considered in this work, let us consider the properties of the estimator of \(\mathrm{WV}\) \(\hat{\mathbf{\nu}}\) when applied exclusively to the realization of a single sinusoidal process \((S_{t})\) and denote this specific estimator as \(\hat{\mathbf{\nu}}_{S}:=[\hat{\nu}_{j,S}]_{j=1,\ldots,J}\) to indicate its implicit dependence on the parameters underlying this particular process. With \(\tau_{j}\) denoting the length of the wavelet filter at the \(j^{th}\) level of decomposition, the following lemma highlights the statistical consistency of \(\hat{\mathbf{\nu}}_{S}\) when using the commonly used Haar wavelet filter for which \(\tau_{j}=2^{j}\). **Lemma 1**: _For the Haar wavelet filter and for any \(j=1,\ldots,J<\log_{2}(T)\), we have_ \[\hat{\nu}_{j,S}=\nu_{j}\left(\alpha,\beta\right)+\mathcal{O}_{p}\left(T^{-1}\right),\] _where_ \[\nu_{j}\left(\alpha,\beta\right):=\mathbb{E}\left[\hat{\nu}_{j,S}\right]=\frac{\alpha^{2}\left\{1-\cos\left(\frac{\beta\tau_{j}}{2}\right)\right\}^{2}}{\tau_{j}^{2}\{1-\cos\left(\beta\right)\}}.\] The Haar wavelet filter is one of the most commonly employed wavelet filters (see e.g., Percival and Walden, 2000) and, as a result of this lemma (whose proof is given in Appendix A), it is possible to see that the estimator \(\hat{\mathbf{\nu}}_{S}\) is statistically consistent when making use of this filter. In addition, the use of the Haar filter is the only condition needed to obtain this result while also guaranteeing a convergence rate of \(\mathcal{O}_{p}\left(T^{-1}\right)\). 
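Lemma 1 can also be checked numerically. The Python sketch below is a simple illustration and not the estimator studied in the cited references: it computes a maximal-overlap-style Haar WV estimate of a simulated sinusoidal process and compares it, scale by scale, with the closed-form expression \(\nu_{j}(\alpha,\beta)\) above.

```python
import numpy as np

def haar_wv(x, j):
    """Empirical Haar wavelet variance at scale tau_j = 2**j (illustrative version)."""
    tau, m, N = 2 ** j, 2 ** (j - 1), len(x)
    c = np.cumsum(np.insert(x, 0, 0.0))          # c[k] = x[0] + ... + x[k-1]
    first = c[m:N - m + 1] - c[0:N - tau + 1]    # sums over the first half-window
    second = c[tau:N + 1] - c[m:N - m + 1]       # sums over the second half-window
    w = (first - second) / tau                   # Haar wavelet coefficients
    return np.mean(w ** 2)

def wv_sinusoid(alpha, beta, j):
    """Closed-form WV of a sinusoidal process (Lemma 1)."""
    tau = 2 ** j
    return alpha**2 * (1 - np.cos(beta * tau / 2))**2 / (tau**2 * (1 - np.cos(beta)))

rng = np.random.default_rng(1)
alpha, beta, T = 0.5, 0.30, 200_000
U = rng.uniform(0.0, 2.0 * np.pi)
x = alpha * np.sin(beta * np.arange(T) + U)

for j in range(1, 8):
    print(j, haar_wv(x, j), wv_sinusoid(alpha, beta, j))   # empirical vs theoretical
```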
We now extend the study of the properties of the \(\mathrm{WV}\) estimator \(\hat{\mathbf{\nu}}\) to the class of models \((X_{t})\) defined in (3), which adds the sinusoidal processes to the class of models \((Z_{t})\). For this purpose we will first focus solely on defining the conditions required to obtain asymptotic normality of the estimator for the class of processes \((Z_{t})\) and consequently denote the \(\mathrm{WV}\) estimator applied exclusively to this class as \(\hat{\mathbf{\nu}}_{Z}\). We will then combine these with the result of Lemma 1 to obtain the required properties for the class of processes of interest \((X_{t})\). The properties of consistency and asymptotic normality of \(\hat{\mathbf{\nu}}_{Z}\) have already been studied in Xu et al. (2019) and Guerrier et al. (2022) so, for the sake of completeness, we will briefly summarize and discuss the conditions needed to achieve these properties. To do so, let us start by denoting the first order difference of the process \((Z_{t})\) as \(\Delta_{t}:=Z_{t}-Z_{t-1}\). We also define \(G(\cdot)\) to be an \(\mathrm{I\!R}\)-valued measurable function as well the filtration \(\mathcal{F}_{t}=(\ldots,\epsilon_{t-1},\epsilon_{t})\), where \(\epsilon_{t}\) are i.i.d random variables. Assumption A: _The process \((\Delta_{t})\) is strictly stationary and can be represented as_ \[\Delta_{t}=G(\mathcal{F}_{t}).\] This assumption is commonly required when analyzing time series and, in the setting of this work, allows to make use of the results in Wu and Zhou (2011). For the next assumptions, we also define the operation \(\left\|D\right\|_{p}:=\left(\mathbb{E}[|D|^{p}]\right)^{1/p}\), for \(p>0\), as well as the filtration \(\mathcal{F}_{t}^{\star}=\left(\ldots,\epsilon_{0}^{\star},\ldots,\epsilon_{t-1},\epsilon_{t}\right)\), where \(\epsilon_{0}^{\star}\) is an i.i.d. random variable. The latter also allows us to define \(\Delta_{t}^{\star}=G(\mathcal{F}_{t}^{\star})\) which differs from \(\left(\Delta_{t}\right)\) as a result of the different innovation noise at time \(t=0\) (clearly we have \(\Delta_{t}^{\star}=\Delta_{t}\) for \(t<0\)). Assumption B: \(\left\|\Delta_{t}\right\|_{4}<\infty\). Assumption C: \(\sum_{t=0}^{\infty}\left\|\Delta_{t}-\Delta_{t}^{\star}\right\|_{4}<\infty\). To summarize, Assumptions B and C require bounded fourth moments of the process \(\left(\Delta_{t}\right)\) and of the difference between this process and its "copy", implying a stability of \(\left(\Delta_{t}\right)\) since a change in the innovation process does not have long-lasting effects on the behaviour of \(\left(\Delta_{t}\right)\). Overall, Assumptions A, B and C are quite common and are generally satisfied for the class of processes \(\left(Z_{t}\right)\) (see Xu et al. (2019) and Guerrier et al. (2022) for a more detailed account on these assumptions). Under these assumptions, the consistency and asymptotic normality of \(\sqrt{T}\left(\hat{\mathbf{\nu}}_{\left(Z\right)}-\mathbf{\nu}_{\left(Z\right)}\right)\) is ensured by (Xu et al., 2019, Theorem 1). Unfortunately, although we have proven consistency of the WV estimator for the process \(\left(S_{t}\right)\), we cannot make use of these conditions to prove the asymptotic normality of \(\hat{\mathbf{\nu}}\) which denotes the estimator of WV when applied to the class of models of interest \(\left(X_{t}\right)\). 
Indeed, Assumption C cannot be satisfied for sinusoidal processes since, as discussed above, this assumption requires the process to have a "short-range" dependence property which is not reasonable for sinusoidal processes due to their intrinsic periodicity which, among others, does not allow the autocovariance structure to decrease at larger time lags. Nevertheless, the convergence rate of the WV estimator for sinusoidal processes ensures that the asymptotic normality of \(\hat{\mathbf{\nu}}\) is still guaranteed as stated in Theorem 1. For the latter, we also define \(\mathbf{\omega}_{t}:=\left[\omega_{j,t}^{\left(Z\right)}\right]_{j=1,\ldots,J}\) as the vector of wavelet coefficients at time \(t\) applied to the process \(\left(Z_{t}\right)\) as well as the projection operator \(\mathcal{P}_{t}(\cdot):=\mathbb{E}\left[\cdot\mid\mathcal{F}_{t}\right]-\mathbb{E}\left[\cdot\mid\mathcal{F}_{t-1}\right].\) Theorem 1: _For the Haar wavelet filter and under Assumptions A, B and C, we have_ \[\sqrt{T}\left\{\hat{\mathbf{\nu}}-\mathbf{\nu}\left(\mathbf{\theta}_{0}\right)\right\}\overset{\mathcal{D}}{\rightarrow}\mathcal{N}(\mathbf{0},\mathbf{V}),\] _where \(\mathbf{V}=\mathbb{E}\left[\mathbf{D}_{0}\mathbf{D}_{0}^{\top}\right]\) and \(\mathbf{D}_{0}:=\sum_{t=0}^{\infty}\mathcal{P}_{0}\left(\mathbf{\omega}_{t}\right)\)._ The proof of this theorem is omitted since it can be obtained directly by using Slutsky's Theorem when combining the result from Lemma 1 with that from Theorem 2.1 in Guerrier et al. (2022) which proves asymptotic normality of \(\hat{\mathbf{\nu}}\) under the above conditions. Indeed it can be noticed that, while the results hold for the WV estimator \(\hat{\mathbf{\nu}}\), which is applied to the process \(\left(X_{t}\right)\), the asymptotic covariance matrix is defined solely based on \(\mathbf{\omega}_{t}\), which represents the wavelet coefficients from the decomposition of the process \(\left(Z_{t}\right)\) that does not contain sinusoidal noise. As a consequence, for example, an estimator of the asymptotic covariance matrix \(\mathbf{V}\) can be computed without taking into account the sinusoidal noise process, allowing us to take advantage of existing results for this purpose. With Theorem 1 we can now obtain the asymptotic distribution of the GMWM estimator \(\hat{\mathbf{\theta}}\). To start, we need to consider the following additional assumptions. Assumption D: \(\mathbf{\Theta}\) _is compact._ Assumption E: _If \(\mathbf{\hat{\Omega}}\in\mathrm{I\!R}^{J\times J}\) is an estimator of a positive-definite matrix \(\mathbf{\Omega}\), then_ \[\left\|\mathbf{\hat{\Omega}}-\mathbf{\Omega}\right\|_{S}=o_{p}(1),\] _where \(\|\cdot\|_{S}\) denotes the spectral norm._ Assumption F: _The function \(\boldsymbol{\nu}(\boldsymbol{\theta})=[\nu_{j}(\boldsymbol{\theta})]_{j=1,\ldots,J}\) identifies \(\boldsymbol{\theta}\), in that for any \(\boldsymbol{\theta}_{1},\boldsymbol{\theta}_{2}\in\mathbf{\Theta}\) we have that \(\nu_{j}(\boldsymbol{\theta}_{1})=\nu_{j}(\boldsymbol{\theta}_{2}),\) for \(j=1,\ldots,J,\) implies \(\boldsymbol{\theta}_{1}=\boldsymbol{\theta}_{2}\)._ Assumption D is a common regularity condition which can possibly be replaced by a condition on the convexity of the parameter space which, however, can only be verified on a model-specific basis. On the other hand, Assumption E is only required in case an estimator is chosen instead of a deterministic positive-definite matrix \(\mathbf{\Omega}\).
Indeed, if an estimator \(\mathbf{\hat{\Omega}}\) is chosen, then this assumption requires this estimator to be consistent for the chosen positive-definite matrix \(\mathbf{\Omega}\). Finally, an assumption that is also challenging to verify and is often assumed in practice is Assumption F, which is equivalent to requiring that \(\boldsymbol{\nu}(\boldsymbol{\theta})\) be an injective function. In Xu et al. (2019), Guerrier et al. (2020) and Guerrier et al. (2022), the validity of this assumption was discussed for different classes of composite processes which include combinations of, among others, white noise, random walk, quantization noise, drift and AR1 components. Hence, before stating the asymptotic properties of the GMWM, we add to these previous results by discussing the validity of this assumption when including a sinusoidal process in the class of composite processes defined in (3). As a first step, we verify this assumption when considering only one sinusoidal process through the following lemma. Lemma 2: _For \(J\geq 2\), the function \(\boldsymbol{\nu}(\alpha,\beta):=\left[\nu_{j}(\alpha,\beta)\right]_{j=1,\ldots,J}\) identifies \((\alpha,\beta)\), in that for any \((\alpha_{1},\beta_{1}),(\alpha_{2},\beta_{2})\in\mathrm{I\!R}_{+}\times(0,\pi]\), we have that \(\nu_{j}(\alpha_{1},\beta_{1})=\nu_{j}(\alpha_{2},\beta_{2}),\) for \(j=1,\ldots,J,\) implies \((\alpha_{1},\beta_{1})=(\alpha_{2},\beta_{2}).\)_ The proof of Lemma 2 is given in Appendix B. This result is important to confirm that the WV is informative with respect to this process, meaning that the information contained in the WV is sufficient to identify the parameters of the process in (1). We now try to extend this evidence towards the composite model (3) for which we consider the lengths of the wavelet filters for each level \(j\) (i.e., \(\tau_{j}\)) to live on a subset of the rational numbers representing the range of values containing those scales that would naturally arise in practice. More specifically, we assume that the WV scales are defined by \(\tau\in\Lambda\) where \(\Lambda:=\left[2,2^{J}\right]\cap\mathbb{Q}\). This definition of the scales does not completely correspond to the ideal scenario for which we would like to provide supporting evidence for Assumption F, but is an extension that allows us to provide additional support to this assumption in the case of the process in (3). In this context we consider a model representative of the class defined in (3) given by: \[X_{t}:=W_{t}+Q_{t}+Y_{t}+D_{t}+R_{t}+S_{t}. \tag{5}\] In this case, the parameter space \(\mathbf{\Theta}\subset\mathrm{I\!R}^{8}\) is isomorphic to \(\mathrm{I\!R}^{6}_{+}\times\{(-1,0)\cup(0,1)\}\times(0,\pi]\) and, underlining that the notation \(\boldsymbol{\nu}_{\tau}(\boldsymbol{\theta})\) refers to the WV over scales \(\tau\in\Lambda\), the following lemma states the identifiability of the parameters of the model in (5) within this continuous scale setting. Lemma 3: _The function \(\nu_{\tau}(\boldsymbol{\theta})\) associated with the model in (5) identifies \(\boldsymbol{\theta}\), in that for any \(\boldsymbol{\theta}_{1},\boldsymbol{\theta}_{2}\in\mathbf{\Theta}\subset\mathrm{I\!R}^{8}\), we have that \(\nu_{\tau}(\boldsymbol{\theta}_{1})=\nu_{\tau}(\boldsymbol{\theta}_{2})\), for all \(\tau\in\Lambda\), implies \(\boldsymbol{\theta}_{1}=\boldsymbol{\theta}_{2}\)._ The proof of this lemma is given in Appendix C.
The results of Lemmas 2 and 3 are therefore helpful in supporting the validity of Assumption F when adding sinusoidal processes to the modelling framework. More specifically, if only considering the sinusoidal process \((S_{t})\) in the model in (5), it is obvious that the identifiability through \(\mathbf{\nu}(\mathbf{\theta})\), given in Lemma 2, implies Lemma 3 but the converse is not necessarily true. Therefore Lemma 3 is useful but does not imply the identifiability of the general class of models defined in (3). Following the results in Xu et al. (2019) and Guerrier et al. (2022), the GMWM estimator \(\hat{\mathbf{\theta}}\) is consistent for the class of models in (3) under Assumptions A to F. In order to obtain its asymptotic normality, we need to consider two additional assumptions defined below. Assumption G: \(\mathbf{\Theta}\subset\mathds{R}^{p}\) _is convex and \(\mathbf{\theta}_{0}\in\mathbf{\Theta}\) is an interior point._ Assumption H: _The derivative \(\mathbf{A}(\mathbf{\theta}_{0}):=\frac{\partial}{\partial\mathbf{\theta}^{T}}\mathbf{\nu}(\mathbf{\theta})\Big{|}_{\mathbf{\theta}=\mathbf{\theta}_{0}}\) is such that_ \[\mathbf{B}(\mathbf{\theta}_{0}):=\mathbf{A}(\mathbf{\theta}_{0})^{T}\mathbf{\Omega}\mathbf{A}(\mathbf{\theta}_{0}),\] _is non-singular._ Assumption G is a regularity condition that allows us to make use of the mean value theorem which can nevertheless be quite restrictive, for example in the case where the assumed model overfits the data. Indeed in the latter case some components of the parameter vector \(\mathbf{\theta}_{0}\) may lie on the border of \(\mathbf{\Theta}\) if, for example, some variance parameters are equal to zero. Assumption H on the other hand simply enables us to define the asymptotic covariance matrix of \(\hat{\mathbf{\theta}}\). Denoting \(\mathbf{M}\in\mathds{R}^{J\times p}\) and \(\mathbf{N}\in\mathds{R}^{J\times J}\), we define the operator \(\mathbf{M}\boxtimes\mathbf{N}:=\mathbf{M}\mathbf{N}\mathbf{M}^{T}\) which allows us to state the following theorem and delivers the final result on \(\hat{\mathbf{\theta}}\). Theorem 2: _Under Assumptions A to H, we have that_ \[\sqrt{T}\left(\hat{\mathbf{\theta}}-\mathbf{\theta}_{0}\right)\overset{\mathcal{D}}{\longrightarrow}\mathcal{N}(\mathbf{0},\mathbf{\Xi}),\] _where \(\mathbf{\Xi}:=\{\mathbf{B}(\mathbf{\theta}_{0})^{-1}\mathbf{A}(\mathbf{\theta}_{0})^{T}\mathbf{\Omega}\}\boxtimes\mathbf{V}\) and \(\mathbf{V}\) is given in Theorem 1._ As for consistency, this theorem follows directly from the results in Xu et al. (2019) and Guerrier et al. (2022). In brief, based on Assumption G we can use the mean value theorem on the GMWM objective function in (4) around the true value \(\mathbf{\theta}_{0}\) and, based on the consistency of \(\hat{\mathbf{\theta}}\), it is possible to show convergence of the different quantities defined by this expansion (including that of the derivative \(\mathbf{A}(\mathbf{\theta})\)) towards their theoretical values which define the asymptotic covariance matrix. Remark A: _It can be noticed how all quantities that define the asymptotic covariance \(\mathbf{\Xi}\) depend on the parameter \(\mathbf{\theta}_{0}\) with the exception of \(\mathbf{\Omega}\) and \(\mathbf{V}\). While the quantities that depend on \(\mathbf{\theta}_{0}\) can be estimated by plugging in the consistent estimator \(\hat{\mathbf{\theta}}\), the matrix \(\mathbf{\Omega}\) is chosen by the user while \(\mathbf{V}\) has to be estimated.
Given the results in this work, the asymptotic covariance matrix \(\mathbf{V}\) can be estimated using the proposals in Xu et al. (2019) and Guerrier et al. (2022) without considering the presence of vibration noise. Moreover, making the choice \(\mathbf{\Omega}:=\mathbf{V}^{-1}\) delivers the most asymptotically efficient GMWM estimator \(\hat{\mathbf{\theta}}\) (see Hansen, 1982). Indeed, for this particular choice of the weight matrix the expression of the GMWM covariance matrix simplifies to \(\mathbf{\Xi}=\{\mathbf{A}(\mathbf{\theta}_{0})^{T}\boxtimes\mathbf{V}^{-1}\}^{-1}=\mathbf{B}(\mathbf{\theta}_{0})^{-1}\). This expression is actually the same as the one obtained in the just-identified case, i.e., when \(J=p\). In the latter case, we also have \(\mathbf{\Xi}=\mathbf{B}(\mathbf{\theta}_{0})^{-1}\) independently of the choice of \(\mathbf{\Omega}\)._

## 4 Simulation Studies

In this section we present different simulation studies to investigate the performance of the GMWM when considering various models included in the general class defined in (3), specifically those characterized by the presence of sinusoidal processes which we consider as a nuisance noise. In particular, although these cannot be considered a proof for parameter identifiability, the simulations also aim at understanding to what extent the GMWM is able to identify the parameters of models which include sums of sinusoidal noises and other latent processes. In addition, we want to understand the loss of statistical efficiency of the GMWM with respect to more computationally-demanding likelihood-based approaches (and also how much computational gain is achieved when using the GMWM). In all cases we simulate 500 signals from each model and choose parameter values that are consistent with those that are commonly identified for the stochastic errors of measurement devices such as IMUs (see e.g., Titterton et al. (2004); El-Sheimy et al. (2007); Guerrier et al. (2016) and the applied case study in Section 5). The parameter values for each of the simulations presented in this section are reported in Appendix D. For the first study (which we refer to as Simulation 1), we consider a model represented by the sum of a WN, a RW, an AR1 and a single sinusoidal process, therefore requiring the estimation of six parameters in total (see Appendix D for values) considering the parametrization in (1). We use this model to compare the statistical and computational performance of the GMWM with respect to the MLE implemented in the open-source software Hector (v2.0) which represents the fastest available implementation of the MLE for these latent models (see Bos et al., 2008, 2013). It must be noted that the latter approach treats the vibration noise as purely deterministic and assumes that the frequency is known, leaving the amplitude and phase to be estimated. Due to these different parametrizations, the two approaches cannot be compared in terms of estimation of the sinusoidal process parameters, but only in terms of estimation of the processes of true interest (i.e., WN, RW and AR1). For this purpose, considering the commonly large signals recorded by high-frequency measurement devices, we generate signals of five different lengths, i.e., \(T=T_{i}\cdot 10^{4}\) with \(\{T_{1},\ldots,T_{5}\}=\{1,2,4,8,16\}\), and compute the Root Mean Squared Error (RMSE) as well as the average running time for both approaches.
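To make the estimation procedure behind Simulation 1 concrete, the following self-contained sketch (ours, in Python; not the authors' implementation) simulates a WN + RW + AR1 + sinusoid signal, computes the empirical Haar WV, and fits the parameters by minimizing the weighted distance in (4); the diagonal weighting matrix, the numerical AR1 wavelet variance and the starting values are simplifying assumptions made for illustration only.

```python
# Illustrative sketch (ours, in Python) of a GMWM fit for a simplified version of
# the Simulation 1 model (WN + RW + AR1 + one sinusoid). It is not the authors'
# implementation: the weighting matrix is a simple diagonal choice, the AR1 WV is
# obtained numerically from its autocovariance, and the starting values are ad hoc.
import numpy as np
from scipy.optimize import minimize

def haar_filters(J):
    return [np.concatenate([np.full(2**j // 2, 1 / 2**j), np.full(2**j // 2, -1 / 2**j)])
            for j in range(1, J + 1)]

def empirical_wv(x, J):
    return np.array([np.mean(np.convolve(x, h, mode="valid")**2) for h in haar_filters(J)])

def theoretical_wv(theta, J):
    """Haar WV implied by WN + RW + AR1 + sinusoid; theta = (sigma2, gamma2, phi, zeta2, alpha, beta)."""
    sigma2, gamma2, phi, zeta2, alpha, beta = theta
    tau = 2.0 ** np.arange(1, J + 1)
    nu = sigma2 / tau                                              # white noise
    nu = nu + gamma2 * (tau**2 + 2) / (12 * tau)                   # random walk
    nu = nu + alpha**2 * (1 - np.cos(beta * tau / 2))**2 / (tau**2 * (1 - np.cos(beta)))  # sinusoid
    for j, h in enumerate(haar_filters(J)):                        # AR1, via its autocovariance
        lags = np.abs(np.subtract.outer(np.arange(len(h)), np.arange(len(h))))
        nu[j] += h @ (zeta2 * phi**lags / (1 - phi**2)) @ h
    return nu

def gmwm_fit(x, J, theta0):
    nu_hat = empirical_wv(x, J)
    omega = np.diag(1 / nu_hat**2)                                 # simple diagonal weighting
    def obj(th):
        d = nu_hat - theoretical_wv(th, J)
        return d @ omega @ d
    bounds = [(1e-8, None), (1e-12, None), (-0.999, 0.999), (1e-8, None), (1e-8, None), (1e-3, np.pi)]
    return minimize(obj, theta0, bounds=bounds, method="L-BFGS-B").x

# toy data from the same model; in practice several starting values / better weights are used
rng = np.random.default_rng(1)
T, J, phi, zeta2 = 50_000, 9, 0.9, 0.01
ar1 = np.zeros(T)
for t in range(1, T):
    ar1[t] = phi * ar1[t - 1] + rng.normal(scale=np.sqrt(zeta2))
x = (rng.normal(scale=0.1, size=T)                # WN   (sigma2 = 1e-2)
     + np.cumsum(rng.normal(scale=1e-3, size=T))  # RW   (gamma2 = 1e-6)
     + ar1                                        # AR1  (phi = 0.9, zeta2 = 1e-2)
     + 0.05 * np.sin(0.5 * np.arange(T)))         # sinusoid (alpha = 0.05, beta = 0.5)
print(gmwm_fit(x, J, theta0=np.array([0.02, 1e-5, 0.5, 0.05, 0.1, 0.8])))
```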
The resulting RMSE values and average running times are represented in Figures 1 and 2 respectively where, for both methods, we removed the results that were affected by convergence problems of the MLE (specifically for smaller sample sizes) in order to make fair comparisons. We denote the parameters as: \(\sigma^{2}\in\mathds{R}_{+}\) (WN variance); \(\phi\in(-1,0)\cup(0,1)\) and \(\zeta^{2}\in\mathds{R}_{+}\) (AR1 autoregressive and innovation variance parameters respectively) and \(\gamma^{2}\in\mathds{R}_{+}\) (RW innovation variance). As can be observed in Figure 1, for all parameters of interest both methods have an RMSE that decreases with the sample size, thereby supporting consistency of the GMWM and the MLE in this setting. Moreover it can be seen how, with few marginal differences, the RMSEs of both methods appear to be extremely close to each other, suggesting that the potential loss of statistical efficiency of the GMWM with respect to the MLE is almost negligible in sample sizes of relevance for the considered applications. This conclusion needs to be evaluated jointly with the results presented in Figure 2: as can be observed, the average MLE running time ranges from less than 2 seconds (for \(T=10^{4}\)) up to more than 5 hours (for \(T=16\cdot 10^{4}\)) while the GMWM consistently runs in less than half a second for all sample sizes considered. This implies that, with comparable performance in terms of RMSE, the GMWM is at least \(12\cdot 10^{2}\) times faster on average than the MLE in the considered sample size settings (\(15\cdot 10^{4}\) times faster for the largest sample size). It must also be noted that, for smaller sample sizes, the MLE suffers from convergence issues, which is not the case for the GMWM. For the next simulation settings, we consider more complex models and larger sample sizes which are often observed in real measurement error signals. In these settings the MLE can become more numerically unstable and remains computationally demanding for these sample sizes. Therefore, considering the comparison made in Simulation 1, in the next simulations we only verify the performance of the GMWM and, as a result, also focus on the estimation of the parameters of the sinusoidal process put forward in (1). In more detail, we first consider a model defined as the sum of two AR1 processes, with parameters \(|\phi_{i}|\in(0,1)\) and \(\zeta_{i}^{2}\in\mathrm{I\!R}_{+}\), and two sinusoidal processes, with parameters \(\alpha_{i}\in\mathrm{I\!R}_{+}\) and \(\beta_{i}\in(0,\pi]\), for \(i=1,2\) (we refer to this setting as Simulation 2), and then a model composed of an AR1, a RW, a WN and a sinusoidal process, all with parameter notations consistent with the previous simulations (we refer to this setting as Simulation 3). For these two simulations we consider a sample size of \(1\cdot 10^{7}\) and, as mentioned previously, simulate 500 time series for both simulations with parameter values in the range of those found in practical applications such as those discussed in Guerrier et al. (2013) and Stebler et al. (2014) (see Appendix D for values). In particular, due to the high frequency of the measurements, the autoregressive parameters \(\phi\) can, for example, be close to one in absolute value (i.e., close to a RW process), creating additional numerical convergence issues for the MLE.
A representation of these two models is given through a WV plot in Figure 3 where it can be seen that the theoretical WV \(\boldsymbol{\nu}(\boldsymbol{\theta})\) of each model (orange line) is composed from the contribution of the WV of the individual processes, generating realizations of empirical WV estimates \(\hat{\boldsymbol{\nu}}\) (light grey lines) that closely follow it. Hence, as explained in the previous section, the GMWM observes the grey line \(\hat{\boldsymbol{\nu}}\) and aims at finding the parameter vector \(\boldsymbol{\theta}\in\boldsymbol{\Theta}\) that allows the implied theoretical WV \(\boldsymbol{\nu}(\boldsymbol{\theta})\) (orange line) to be as close as possible to this empirical WV in \(L_{2}\)-norm (weighted by \(\boldsymbol{\Omega}\)). The empirical distributions of the estimated parameter values are represented through the boxplots in Figure 4. We subtract the true parameter values from these distributions (hence all boxplots should be roughly centered around zero if the GMWM is correctly estimating these parameters) and standardize them via their respective empirical standard deviations to compare them all on the same scale. We consider this re-scaling since we cannot compare these distributions to those of other estimators, and are therefore mainly interested in consistency rather than efficiency of the GMWM, which however can be observed in the boxplots in Appendix D as well as partially studied through the RMSE in smaller sample sizes in Simulation 1 (see Figure 1). As highlighted by the two plots in Figure 4, the GMWM appears to correctly target the true values of the parameters of the models considered in both simulations, including those of the sinusoidal processes put forward in this work, thereby supporting the theoretical results in Section 3.

Figure 1: RMSE of the estimated parameters of Simulation 1 for the MLE (orange line) and the GMWM (blue line) for sample sizes \(T=T_{i}\cdot 10^{4}\) with \(\{T_{1},\ldots,T_{5}\}=\{1,2,4,8,16\}\).

Figure 2: Mean running time of the MLE (orange line) and the GMWM (blue line) in Simulation 1 for sample sizes \(T=T_{i}\cdot 10^{4}\) with \(\{T_{1},\ldots,T_{5}\}=\{1,2,4,8,16\}\).

Figure 3: Theoretical wavelet variance \(\mathbf{\nu}(\mathbf{\theta})\) (orange line) of the settings considered in Simulation 2 and Simulation 3. The light grey lines represent WV estimates \(\hat{\mathbf{\nu}}\) from 10 different realizations of the respective models. The other lines represent the theoretical WV of the individual processes contributing to the models.

Figure 4: Empirical distribution of GMWM parameter estimates for Simulations 2 and 3. The true parameter values were subtracted from each boxplot and all distributions were standardized by their respective empirical variances.

## 5 Case Study: Inertial Sensor Calibration

To study the advantage of the proposed approach for real-world applications, we consider stochastic measurement error data (of size \(T=2,879,999\)) collected from controlled IMU calibration sessions. More specifically, to study the impact of vibration noise, the stochastic measurement errors of the same z-axis gyroscope of a low-cost MEMS IMU were measured at 200 Hz in two controlled settings: (i) a static setting and (ii) a rotating setting where the IMU is placed on a rotating table at a fixed rotation speed of 200 deg/sec. The latter setting is used to deliver the possible vibrations that can often corrupt measurement devices during their calibration phases.
The intuition for this application is that the stochastic model for the measurement error in static settings constitutes the basic error specific to the measurement device which we are interested in; however, this model becomes more complex under rotation through the addition of one or more vibration noise components which constitute disturbances to the true stochastic error. Hence, we would like to understand how the basic measurement error model changes when the additional vibration noise is taken into account and, as a consequence, what the impacts of this change are when employing the estimated models (with and without considering vibration noise) in navigation settings. The basic model for the IMU was chosen via a visual representation of the WV (black line) shown in the left plot of Figure 5. Comparing different models, the composition of an AR1 and a RW process (orange line) appeared to best fit the observed empirical WV of the measurement error in static settings. Indeed, the theoretical WV implied by the GMWM estimates of this model (AR1 and RW) appears to closely follow the empirical WV. Having identified the basic model for the stochastic error of this IMU, we considered the empirical WV of the same IMU under the above-mentioned rotation setting which can be observed in the right plot of Figure 5. In the latter setting the assumption is that the structure of the basic model remains the same but is complemented with additional vibration noise in the form of one or more sinusoidal processes which themselves can have an impact on the parameter values of the basic model. Hence, preserving this basic model structure, the addition of two sinusoidal processes appears to better fit the WV observed under rotation, as hinted at by the right plot of Figure 5. Indeed, the postulated parametrization in (1) appears to describe well the additional structure in the empirical WV observed under rotation, therefore supporting the usefulness of the approach put forward in this work. In more detail, Table 1 reports the parameters of the basic model (i.e., AR1 and RW parameters) when estimated in static and rotating conditions (including sinusoidal processes) respectively. It can be seen how all the estimated parameters of the basic model differ significantly when considered in the two settings, highlighting how the presence of vibrations due to rotation affects the values of this model's parameters. To date, as described earlier in this work, the inclusion of a stochastic process to account for noise induced by the vibration of measurement devices has not been fully addressed, other than through deterministic models for which certain parameters need to be known in advance (and with important numerical/computational issues for existing methods). Hence, the common approach to this kind of setting is to approximate these error signals through the standard models (e.g., AR1 and RW) under static scenarios which however can often be affected by different sources of vibration. Considering this, we will study the extent of the bias induced by excluding the sinusoidal processes that are present during the calibration phase through a simulation study based on the parameter estimates from the MEMS IMU under rotation in Table 1 (the estimated parameters of the sinusoidal processes are given in Appendix D).
\begin{table}
\begin{tabular}{c|c c|c c}
 & \multicolumn{2}{c|}{0 deg/sec} & \multicolumn{2}{c}{200 deg/sec} \\
Parameter & Estimate & CI(95\%) & Estimate & CI(95\%) \\
\hline
\(\phi\) & \(1.63\cdot 10^{-1}\) & \((1.62\cdot 10^{-1};\,1.64\cdot 10^{-1})\) & \(1.85\cdot 10^{-1}\) & \((1.81\cdot 10^{-1};\,1.89\cdot 10^{-1})\) \\
\(\zeta^{2}\) & \(4.78\cdot 10^{-3}\) & \((4.77\cdot 10^{-3};\,4.78\cdot 10^{-3})\) & \(3.56\cdot 10^{-2}\) & \((3.55\cdot 10^{-2};\,3.57\cdot 10^{-2})\) \\
\(\gamma^{2}\) & \(3.09\cdot 10^{-12}\) & \((9.01\cdot 10^{-13};\,5.52\cdot 10^{-12})\) & \(8.69\cdot 10^{-10}\) & \((5.88\cdot 10^{-10};\,1.12\cdot 10^{-9})\) \\
\end{tabular}
\end{table}
Table 1: Estimated parameters and 95% parametric bootstrap confidence intervals for the error signal collected at 0 deg/sec (i.e., corresponding to the first and second column respectively) and for the error signal collected at 200 deg/sec (i.e., corresponding to the third and fourth column respectively).

Figure 5: Empirical WV (black line) of the MEMS IMU in static conditions (left plot) and under 200 deg/sec rotation (right plot). The WV implied by the respective models estimated via the GMWM is represented by the orange line while the contribution of each underlying process to these models is represented through lines of other colors.

Therefore we will simulate from the model that includes the two sinusoidal processes and then, each time, use the GMWM to estimate a misspecified model with only an AR1 and a RW process (Model 1) as well as a correctly specified model which also includes the two sinusoidal processes (Model 2). We focus on how much model misspecification in this scenario impacts the estimates of the basic model of interest composed of the AR1 and RW processes. The results of this simulation, based on the real estimates from the considered MEMS IMU data, are represented in the boxplots of Figure 6. As expected, from these boxplots it is clear to what extent the model misspecification can have significantly negative impacts both in terms of bias as well as in terms of variance, especially with respect to the AR1 parameters. The previous simulation is a simple proof-of-concept of the intuitive fact that not accounting for the vibration noise, when this is actually present, can severely impact estimation for the other (basic) model parameters. However, this does not necessarily give the idea of the impact that this problem may have in real-world applications. For this reason, we translate the above simulation setting to a navigation scenario where we take the estimated model under rotation (i.e., Model 2 which includes the two sinusoidal processes) and generate stochastic error signals that we will add to an arbitrarily determined and fixed navigation path defined by a trajectory and an altitude profile (this is known as an _emulation_ study). In the latter scenario we can imagine that this path describes the movement of an aerial vehicle (e.g., a drone) and, for this emulation, we imagine that this vehicle is guided by an integrated navigation system composed of a Global Positioning System (GPS) and IMUs such as the one considered in this section. In these systems the most accurate measurements are given by the GPS while the IMUs are used mainly to update navigation estimates between GPS measurements and also for uncertainty quantification.
Unfortunately, GPS signals can be corrupted or be absent in different situations and, as a consequence, the IMUs are employed in so-called "coasting mode" to provide the navigation solutions without the GPS. In this case, and in general, it is extremely important for the navigation filter associated with these IMUs, usually an Extended Kalman Filter (EKF), to be programmed with precise estimates of the stochastic error signals that characterize them so that they can be removed from their measurements and obtain more accurate navigation solutions for the vehicle. We rely on the framework presented in Cucci et al. (2023) to perform this emulation study and assess the impact on navigation performance when considering or not the sinusoidal perturbations in the stochastic calibration procedure.

Figure 6: Empirical distribution of the GMWM parameter estimates under Model 1 (misspecified) and Model 2 (correctly specified), represented in the left and right boxplot in each plot respectively.

Figure 7: Trajectory and altitude profile (blue lines) of the vehicle considered in the emulation study over 235 sec. The highlighted part of these paths (in yellow) represents the portion in which we mimic the GPS outage (from 160 sec to 210 sec).

Figure 8: Mean ratio of position errors for the EKF based on \(\hat{\mathbf{\theta}}_{1}\) over those for the EKF based on \(\hat{\mathbf{\theta}}_{2}\). Yellow bands represent the 95% confidence intervals for the mean ratio.

Given this setting, let us perform an emulation study where we determine a ground-truth trajectory and altitude profile for a period of 235 seconds during which we mimic a GPS outage for 50 seconds (from 160 seconds to 210 seconds) as represented in Figure 7. In more detail, we assume that the GPS is measuring at 1 Hz while the IMU is measuring at 100 Hz. Hence, let us study how much the navigation solutions, more specifically _position_ solutions, are affected by the approach taken in the previous simulation which highlights the effects of not accounting for vibration noise (see Figure 6). To do so, let us first define \(\boldsymbol{\lambda}\in\mathds{R}^{4}_{+}\times\{(-1,0)\cup(0,1)\}\times(0,\pi]^{2}\) as the parameter vector containing the GMWM estimates for Model 2 taken on the IMU under rotation (therefore containing AR1, RW and two sinusoidal processes) while we will define \(\boldsymbol{\theta}\in\mathds{R}^{2}_{+}\times\{(-1,0)\cup(0,1)\}\) as the general parameter notation of the _basic_ model structure consisting solely of the sum of an AR1 and a RW process (this is indeed the structure of Model 1). Given this, we can use \(\boldsymbol{\theta}^{\star}\) to represent the elements of \(\boldsymbol{\lambda}\) that correspond to the parameters of this basic model, hence without the parameters of the sinusoidal processes. Using this notation, we take the following approach for the \(b^{th}\) iteration (out of 500): (i) we simulate a stochastic error signal from Model 2 based on \(\boldsymbol{\lambda}\) and of the same length as the original data (i.e., \(T=2,879,999\)) that we refer to as \((x_{t}^{(b)})\); (ii) we use \((x_{t}^{(b)})\) to
estimate Model 1 (misspecified) and Model 2 (correctly specified) using the GMWM and denote the associated basic model parameters as \(\hat{\mathbf{\theta}}_{1}^{(b)}\) and \(\hat{\mathbf{\theta}}_{2}^{(b)}\) respectively (hence we discard the estimates of the parameters of the sinusoidal processes in Model 2); (iii) we simulate an additional stochastic error signal but this time from the basic model (AR1 and RW) based on \(\mathbf{\theta}^{\star}\), and add this error signal to the previously mentioned navigation paths represented in Figure 7 (blue lines); (iv) we use an EKF based on \(\hat{\mathbf{\theta}}_{1}^{(b)}\) and \(\hat{\mathbf{\theta}}_{2}^{(b)}\) respectively to estimate the position of the vehicle when the GPS is available as well as when there is an outage. Following this, at each time point we have a position estimate using the basic model parameters estimated under Model 1 and Model 2 respectively. Hence it is possible to compute a position error ratio between the two solutions throughout the entire emulated path. The position error is defined as the average over the emulated trajectories of the \(\ell_{2}\)-norm of the position error over the three axes defined as \(\Delta r_{t}=\hat{r}_{t}-r_{t}\), where \(r_{t}\) denotes the true position at time \(t\) and \(\hat{r}_{t}\) is the estimated position at time \(t\). The results of this ratio (position error based on \(\hat{\mathbf{\theta}}_{1}\) over position error based on \(\hat{\mathbf{\theta}}_{2}\)) are given in Figure 8 where we represent the mean position ratio along with its corresponding 95% confidence intervals. It can be observed how this ratio is almost always above one when the GPS signal is available, indicating that the model that accounts for vibration has a slightly lower position error in-between GPS measurements. This observation is confirmed when the GPS outage occurs (grey area): the ratio drastically and steadily increases up to 1.1789, indicating a rapidly deteriorating position error for the misspecified navigation filter, only to return towards one when the GPS signal is available again. While this emulation study substantially confirms the intuitive bias that can be induced by an omission of stochastic disturbances during calibration (i.e., vibration noise), it also provides more insight into the real-life impacts that such an omission can entail. In this example we can observe how a misspecified navigation filter, based on a model that does not account for vibration noise during the calibration phase, can severely affect the navigation precision of (autonomous) vehicles. Indeed, a difference in position error of 15% is considered large and in our study it actually goes beyond 17%. If one wanted to correct an error of such magnitude, this would generally require much more accurate and expensive equipment. This study therefore highlights how a significant improvement in navigation can be achieved when accounting for vibration noise, with important advantages for many applications ranging from ground vehicles to drones for aerial mapping and search-and-rescue operations. In these applications it is essential to track the vehicles' positions in situations where GPS signals are often absent, thereby highlighting how much this sizeable navigation error can affect the successful outcome of these tasks.

## 6 Conclusions

The proposal of a stochastic parametrization of wave functions to account for vibration noise can be helpful in many applications ranging from engineering to natural sciences.
In particular, modeling this noise while considering the presence of several other stochastic processes in large signals is a computational and/or numerical challenge for standard approaches. The derivation of theoretical forms for the WV and the study of its properties in the context of sinusoidal processes has allowed us to extend the flexibility of the GMWM modeling framework, which can account for more complex features in signals in a computationally efficient and numerically stable manner, thereby greatly improving the precision of measurement devices, which was the specific focus of this work. More broadly though, this methodology can be applied in a wide range of domains where periodic signals are observed and need to be taken into account when performing statistical modeling and inference on time series data. For example, our approach could be applied in the context of modeling daily position time series from Global Navigation Satellite Systems where periodic signals often need to be considered jointly with other deterministic and stochastic signals, as highlighted in Cucci et al. (2023).
2309.15418
Automatic Feature Fairness in Recommendation via Adversaries
Fairness is a widely discussed topic in recommender systems, but its practical implementation faces challenges in defining sensitive features while maintaining recommendation accuracy. We propose feature fairness as the foundation to achieve equitable treatment across diverse groups defined by various feature combinations. This improves overall accuracy through balanced feature generalizability. We introduce unbiased feature learning through adversarial training, using adversarial perturbation to enhance feature representation. The adversaries improve model generalization for under-represented features. We adapt adversaries automatically based on two forms of feature biases: frequency and combination variety of feature values. This allows us to dynamically adjust perturbation strengths and adversarial training weights. Stronger perturbations are applied to feature values with fewer combination varieties to improve generalization, while higher weights for low-frequency features address training imbalances. We leverage the Adaptive Adversarial perturbation based on the widely-applied Factorization Machine (AAFM) as our backbone model. In experiments, AAFM surpasses strong baselines in both fairness and accuracy measures. AAFM excels in providing item- and user-fairness for single- and multi-feature tasks, showcasing their versatility and scalability. To maintain good accuracy, we find that adversarial perturbation must be well-managed: during training, perturbations should not overly persist and their strengths should decay.
Hengchang Hu, Yiming Cao, Zhankui He, Samson Tan, Min-Yen Kan
2023-09-27T05:48:05Z
http://arxiv.org/abs/2309.15418v1
# Automatic Feature Fairness in Recommendation via Adversaries

###### Abstract.

Fairness is a widely discussed topic in recommender systems, but its practical implementation faces challenges in defining sensitive features while maintaining recommendation accuracy. We propose _feature fairness_ as the foundation to achieve equitable treatment across diverse groups defined by various feature combinations. This improves overall accuracy through balanced feature generalizability. We introduce unbiased feature learning through adversarial training, using adversarial perturbation to enhance feature representation. The adversaries improve model generalization for under-represented features. We adapt adversaries automatically based on two forms of feature biases: frequency and combination variety of feature values. This allows us to dynamically adjust perturbation strengths and adversarial training weights. Stronger perturbations are applied to feature values with fewer combination varieties to improve generalization, while higher weights for low-frequency features address training imbalances. We leverage the Adaptive Adversarial perturbation based on the widely-applied Factorization Machine (AAFM) as our backbone model. In experiments, AAFM surpasses strong baselines in both fairness and accuracy measures. AAFM excels in providing item- and user-fairness for single- and multi-feature tasks, showcasing their versatility and scalability. To maintain good accuracy, we find that adversarial perturbation must be well-managed: during training, perturbations should not overly persist and their strengths should decay.

Recommender System, Adversarial Training, Fair Recommendation
to recognize interactions between the _homemaker_ feature and different gender values, resulting in poorer generalization. In this work, we aim to utilize the two forms of feature biases to automatically (1) incorporate fairness considerations across diverse feature domains; and (2) ensure similar generalizability for different combinations of feature values. Adversarial training (Krizhevsky et al., 2014) is a technique for augmenting model generalization (Krizhevsky et al., 2014), where the generalization derives from its robustness to unseen inputs. We thus adopt adversarial training to accommodate a variety of feature combinations. By integrating adversarial training into our regular training iterations, we enhance feature representations by perturbing them. However, applying this approach directly still poses issues. First, existing approaches assume consistent perturbation intensities (Krizhevsky et al., 2014; Krizhevsky et al., 2014) for all feature representations, but there are significant variations in sample outcomes associated with different features. Our method utilizes _combination variety_ as the measure to determine the intensity of adversarial perturbation. We employ a formula that maps lower variety values to higher adversarial intensity, thereby enhancing the stability of targeted groups. To prevent excessive perturbation that overly distorts the original data representation, we map _variety_ inversely proportional to a range of \(0\sim 1\). Second, conventional adversarial unbiased learning approaches often view accuracy and fairness as conflicting objectives (Zhu et al., 2017). As features often follow a long-tailed distribution, low-frequency features make up the majority of features. Hence, low-frequency features are important, so we prioritize their appropriate representation during training by assigning higher adversarial training weights. This balancing results in enhanced performance. We instantiate the above-mentioned Adaptive Adversaries with the FM model as our backbone, or AAFM for short.
Extensive experiments show that our method improves results by 1.9% in accuracy against baselines, while balancing group standard deviation by \(\frac{7}{10}\) on fairness metrics. AAFM further demonstrates scalability, tackling fairness concerns for both users and items simultaneously. Additionally, as the number of feature domains in the data increases, our approach consistently tackles fairness at finer levels among diverse groups. This serves as a bridge between group and individual fairness, spanning datasets with one feature domain to those with a broader range of three feature domains. Our method's universal applicability to fairness issues offers a win-win outcome by promoting both fairness and accuracy. In summary, our contributions are as follows: (i) Compared to user fairness and item fairness, we define our task as a more fundamental feature fairness objective. The feature fairness task aims to develop a parameter-efficient framework that flexibly provides feature-specific fairness for various combinations of user or item features. (ii) We introduce AAFM, an adversarial training method that leverages statistical feature bias for unbiased learning, combining the benefits of fairness and accuracy. (iii) Through experiment datasets with varying numbers of features, user- and item-centric settings, we validate the scalability and practicality of AAFM in real scenarios. The code is available at: [https://github.com/HoldenHlu/AdvFM](https://github.com/HoldenHlu/AdvFM)

## 2. Methodology

In what follows, we first outline our task and delve into the issue of feature fairness, which arises due to two biases. We then provide our solution -- Adversarial Factorization Machines -- which applies the fast gradient method to construct perturbations over feature representations. We further propose an adaptive perturbation based on feature biases, which re-scales adversarial perturbation strengths and adversarial training weights.

### Preliminaries of Feature Fairness

**Problem Formulation.** The recommendation task aims to predict the probability of unobserved user-item interactions \(\hat{y}(\mathbf{x})\) given the user and item features \(\mathbf{x}\) (Krizhevsky et al., 2014). We represent one sample, i.e., the input, as the combination of these features, denoted as \(\mathbf{x}=\{x_{1},x_{2},...,x_{n}\}\). Here, \(x_{i}\) represents the \(i^{th}\) feature domain, encompassing user features (e.g., user occupation) and item features (e.g., item color). Concerning the \(k^{th}\) sample \(\mathbf{x}^{(k)}\in\mathcal{X}\), \(x_{i}^{(k)}\) indicates its specific feature value (e.g., _student_ or _red_) in feature domain \(x_{i}\). In our work, the feature domains include user/item ID, and the categorical attributes of user/item. Concerning specific feature value \(v\) in domain \(x_{i}\), we denote its corresponding samples of subset data as \(\mathcal{X}_{x_{i}:v}=\{\mathbf{x}^{(k)}|x_{i}^{(k)}=v\}\). The overall prediction error of the subset data is denoted as \(\mathcal{E}_{x_{i}:v}=\sum_{\mathbf{x}\in\mathcal{X}_{x_{i}:v}}\mathcal{E}(\hat{y}(\mathbf{x}),y)\). Here, \(\mathcal{E}\) indicates the metric (e.g., Logloss) measuring errors between the prediction \(\hat{y}\) and ground-truth \(y\), where \(y\in\{0,1\}\). To achieve feature fairness, we expect a smaller difference between errors \(\mathcal{E}_{x_{i}:v_{1}}\) and \(\mathcal{E}_{x_{i}:v_{2}}\), with respect to each feature domain \(x_{i}\) and each value pair \((v_{1},v_{2})\) within \(x_{i}\).
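To make the feature-fairness objective concrete, here is a small sketch of ours (not the authors' code) that computes the per-value errors \(\mathcal{E}_{x_{i}:v}\) (using Logloss as the metric \(\mathcal{E}\), averaged rather than summed so that groups of different sizes are comparable) and the pairwise gaps that feature fairness seeks to shrink; all names are illustrative.

```python
# Illustrative sketch (ours): per-feature-value errors and the pairwise gaps
# between them for one feature domain x_i, using Logloss as the error metric.
import numpy as np
from itertools import combinations

def logloss(y, y_hat, eps=1e-12):
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def feature_fairness_gaps(X, y, y_hat, domain):
    """X: (N, n) array of categorical feature values; domain: column index i."""
    per_sample = logloss(y, y_hat)
    errors = {v: per_sample[X[:, domain] == v].mean() for v in np.unique(X[:, domain])}
    gaps = {(v1, v2): abs(errors[v1] - errors[v2]) for v1, v2 in combinations(errors, 2)}
    return errors, gaps

# toy example: 2 feature domains, binary labels, random predictions
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(1000, 2))
y = rng.integers(0, 2, size=1000)
y_hat = rng.uniform(size=1000)
errors, gaps = feature_fairness_gaps(X, y, y_hat, domain=0)
print(errors)   # E_{x_0:v} for each value v; feature fairness wants the gaps between them small
```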
In neural models, the precise representation of each value is vital, as it directly affects errors in corresponding samples. The quality of feature value representation depends on the statistical bias (e.g., popularity bias (Zhu et al., 2017)) of feature values in the data.

**Two Forms of Feature Biases.** Feature values in the data distribution have the following statistical properties. To aid understanding, we show an example of feature value \(v\) in the feature domain \(x_{i}\).

* _Frequency_ \(\alpha_{v}\) indicates the occurrence rate of the value \(v\) concerning its feature domain.
* _Combination variety_ \(\beta_{v}\) indicates the number of diverse samples where value \(v\) co-occurs with other features in combination.

\(\alpha_{v}\) can be used to measure how many times this feature value has been seen by the model, while \(\beta_{v}\) better reflects the degree of isolation of this feature-based data group. The more isolated the groups are, the more likely they are to be sensitive to model perturbation.

Figure 1. (left) Unfairness between two groups with sensitive features. (right) Biased validation accuracy between two data groups during training of Factorization Machine.

In normal distributions, combination variety and frequency can be viewed as equivalent: as the frequency increases, the combination variety increases as well. But in real-world cases, this may not hold true as feature values may not always follow a strict joint probability dependence. Take the feature domain gender as an example. Given a situation where _female_ has fewer combinations with _occupation_ than _male_, this does not mean that the feature value _female_'s frequency is necessarily less than _male_'s. In the results depicted in Figure 2, data samples were grouped into 5 bins based on the multiplied value of frequency or combination variety across all feature domains (_user features+item features_). While both biases contribute significantly to performance imbalances, they are not aligned, highlighting the interdependence between features in real-world data. Therefore, we consider them as separate statistical biases for utilization.

Figure 2. Unbalanced results regarding two forms of feature biases. \(\mathbf{x}\)-axis indicates the indices of sample groups sorted by the overall feature frequency/combination variety. The results are from FM applied to the Yelp/Movielens dataset.

### Adversarial Factorization Machine (AdvFM)

#### 2.2.1. Base Model

Our framework consists of three stages (Figure 3): Embedding, Representation Learning, and Prediction.

_(a) Embedding Initialization._ To improve the representative ability of features, we first map each original discrete feature value of \(x_{i}\) into \(d\)-dimensional continuous vectors \(e_{i}=\mathcal{M}(x_{i}|\Theta)\) through the embedding layer \(\mathcal{M}\). Here, the concatenated feature embeddings are denoted as \(\mathbf{e}=cat[e_{1};...;e_{n}]\).

_(b) Representation Learning._ Our key insight is that the interdependencies among low-level feature groups play a critical role in robustness and fairness. For this reason, we use Factorization Machines (FM) (Zhou et al., 2017) as the backbone for our methodology. FM takes a set of vector inputs, each consisting of \(n\) feature values, and performs recommendations through their cross-product. An FM model of degree 2 estimates the rating behavior \(\hat{y}\) as: \[f(\mathbf{e})=\sum_{i=1}^{n}\left\langle w_{i},e_{i}\right\rangle+\sum_{i=1}^{n}\sum_{j=i+1}^{n}\left\langle v_{i},v_{j}\right\rangle e_{i}e_{j}, \tag{1}\] where parameter \(w_{i}\in\mathbb{R}^{1\times d}\) models the linear, first-order interactions, and \(v_{i}\in\mathbb{R}^{1\times d}\) models second-order interactions for each low dimensional vector \(e_{i}\). \(\left\langle\cdot,\cdot\right\rangle\) indicates the dot product operation and \(e_{i}e_{j}\) indicates the element-wise product between them. To be concise, we use the notation \(\hat{y}(\mathbf{x}|\Theta)=f(\mathbf{e})\) to represent the model's processing of input \(\mathbf{x}\) with the embedding parameter \(\Theta\).
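As a minimal illustration of the degree-2 FM scoring in Equation (1), the PyTorch sketch below is ours (not the released code): it reads the pairwise term as \(\langle v_{i},v_{j}\rangle\) multiplied by the summed element-wise product \(\langle e_{i},e_{j}\rangle\); other readings of the notation are possible, and the sigmoid output layer is an assumption.

```python
# A minimal PyTorch sketch of the degree-2 FM scoring in Eq. (1). This is our own
# reading of the notation (pairwise term taken as <v_i, v_j> * <e_i, e_j>); it is
# not the authors' code, and the sigmoid output is an assumption.
import torch
import torch.nn as nn

class FMScorer(nn.Module):
    def __init__(self, n_fields: int, dim: int):
        super().__init__()
        self.w = nn.Parameter(torch.randn(n_fields, dim) * 0.01)  # first-order weights w_i
        self.v = nn.Parameter(torch.randn(n_fields, dim) * 0.01)  # second-order weights v_i

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        # e: (batch, n_fields, dim) embedded features e_i
        first_order = (self.w.unsqueeze(0) * e).sum(dim=(1, 2))
        pair_w = self.v @ self.v.T                      # <v_i, v_j> for all field pairs
        pair_e = torch.einsum("bid,bjd->bij", e, e)     # <e_i, e_j> for all field pairs
        iu = torch.triu_indices(e.size(1), e.size(1), offset=1)
        second_order = (pair_w[iu[0], iu[1]] * pair_e[:, iu[0], iu[1]]).sum(dim=1)
        return torch.sigmoid(first_order + second_order)

scores = FMScorer(n_fields=4, dim=8)(torch.randn(2, 4, 8))
print(scores.shape)   # torch.Size([2])
```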
\(\left\langle\cdot,\cdot\right\rangle\) indicates the dot product operation and \(e_{i}e_{j}\) indicates the element-wise product between them. To be concise, we use the notation \(\hat{y}(\mathbf{x}|\Theta)=f(\mathbf{e})\) to represent the model's processing of input \(\mathbf{x}\) with the embedding parameter \(\Theta\). _(c) Prediction & Model Training._ The training objective function is defined as: \[\mathcal{L}(\hat{y},y)=-\sum_{(\mathbf{x},y)}\left[y\log\hat{y}(\mathbf{x}|\Theta)+(1-y)\log\left(1-\hat{y}(\mathbf{x}|\Theta)\right)\right], \tag{2}\] where \(\mathcal{L}\) indicates the cross-entropy loss (Kang et al., 2017), i.e., the difference between the predicted and true values. #### 2.2.2. Adversarial Perturbation Inspired by previous work (Zhou et al., 2017), which observed that users with rare interactions benefit more from robustness, we adopt gradient-based adversarial noise (Krizhevsky et al., 2014) as the perturbation mechanism to improve balanced robustness from the feature perspective. As shown in Fig. 3, the normal representation learning of the FM module utilizes the original embedding \(\mathbf{e}\). Adversarial training adds noise to each feature's embedding by perturbing FM's parameters: \[\tilde{e}_{i}=\mathcal{M}(x_{i}|\Theta+\Delta_{adv}^{e_{i}}), \tag{3}\] where \(\Delta_{adv}^{e_{i}}\) is the parameter noise providing the maximum perturbation on the embedding layer. \(\Delta_{adv}=\{\Delta_{adv}^{e_{1}},\ldots,\Delta_{adv}^{e_{n}}\}\) denotes the overall perturbations on the embedding layer. To efficiently perturb normal training, we estimate the optimal adversarial perturbation by maximizing the loss incurred during training: \[\Delta_{adv}=\arg\max_{\|\delta\|\leq\epsilon}\mathcal{L}(\hat{y}(\mathbf{x}|\Theta+\delta),y), \tag{4}\] where the hyper-parameter \(\epsilon\) controls the strength level of perturbations, and \(\|\cdot\|\) denotes the \(l_{2}\) norm. Our adversarial noise uses the back-propagated fast gradient (Krizhevsky et al., 2014) of each feature's embedding parameters as its most effective perturbing direction. Specifically, to perturb the embedding \(e_{i}\), we calculate the partial derivative of the normal training loss: \[\Delta_{adv}^{e_{i}}=\epsilon\cdot\frac{\partial\mathcal{L}(\hat{y}(\mathbf{e}|\Theta),y)/\partial e_{i}}{\|\partial\mathcal{L}(\hat{y}(\mathbf{e}|\Theta),y)/\partial e_{i}\|}, \tag{5}\] where the normalized term on the right-hand side is the sign of the fast gradient direction of feature \(x_{i}\)'s embedding parameters. _Training objective._ In each epoch, we first conduct training as normal, then introduce the adversarial perturbations in a following training session, round by round. We define the final optimization objective for AdvFM as a min-max game: \[\arg\min_{\Theta}\{\arg\max_{\Delta_{adv}}[\mathcal{L}(\hat{y}(\mathbf{x}|\Theta),y)+\lambda\cdot\mathcal{L}(\hat{y}(\mathbf{x}|\Theta+\Delta_{adv}),y)]\}, \tag{6}\] where \(\Delta_{adv}\) provides the maximum perturbation and \(\Theta\) is trained to provide a robust defense that minimizes the overall loss. Here, \(\lambda\) is a hyper-parameter to control the adversarial training weight.

Figure 2. Unbalanced results regarding two forms of feature biases. The x-axis indicates the indices of sample groups sorted by the overall feature frequency/combination variety. The results are from FM applied to the Yelp/MovieLens dataset.

Figure 3. The training process of the adversarial factorization machine on sample \(\mathbf{x}^{(k)}\).
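The PyTorch-style sketch below illustrates one plausible implementation of Equations (1)-(6): a degree-2 FM over per-domain embeddings and a single min-max training step that perturbs each looked-up embedding along its normalized loss gradient. It perturbs the embedding outputs rather than the raw embedding parameters, reads the pairwise term of Eq. (1) as \(\langle v_i,v_j\rangle\langle e_i,e_j\rangle\), and uses illustrative module and argument names; it is a sketch of the technique, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FM(nn.Module):
    """Degree-2 FM over per-domain embeddings e_i (one reading of Eq. 1)."""
    def __init__(self, field_dims, d):
        super().__init__()
        self.embed = nn.ModuleList([nn.Embedding(n, d) for n in field_dims])
        self.w = nn.ParameterList([nn.Parameter(torch.zeros(d)) for _ in field_dims])
        self.v = nn.ParameterList([nn.Parameter(0.01 * torch.randn(d)) for _ in field_dims])

    def forward(self, x, noise=None):
        # x: (batch, n) integer feature values; noise: optional per-domain perturbations
        e = [emb(x[:, i]) for i, emb in enumerate(self.embed)]
        if noise is not None:
            e = [e_i + dlt for e_i, dlt in zip(e, noise)]
        y = sum((e_i * w_i).sum(-1) for e_i, w_i in zip(e, self.w))       # first-order terms
        n = len(e)
        for i in range(n):                                                 # pairwise interactions
            for j in range(i + 1, n):
                y = y + (self.v[i] * self.v[j]).sum() * (e[i] * e[j]).sum(-1)
        return torch.sigmoid(y), e

def adv_step(model, x, y, opt, eps=0.5, lam=1.0):
    """One AdvFM min-max step: normal loss plus loss under FGSM-style embedding noise."""
    opt.zero_grad()
    y_hat, e = model(x)
    loss = F.binary_cross_entropy(y_hat, y)                    # Eq. (2); y is a float tensor in {0, 1}
    grads = torch.autograd.grad(loss, e, retain_graph=True)
    noise = [eps * g / (g.norm(dim=-1, keepdim=True) + 1e-12) for g in grads]   # Eq. (5)
    y_adv, _ = model(x, noise=[d.detach() for d in noise])
    total = loss + lam * F.binary_cross_entropy(y_adv, y)      # Eq. (6)
    total.backward()
    opt.step()
    return total.item()
```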
### Automatic Adaptation on AdvFM The approach described so far has a key drawback: it introduces a single, uniform perturbation strength level \(\epsilon\) over all features, and uniform adversary weights \(\lambda\) over all samples. This makes the method inflexible and unable to model nuanced weighting. To further balance and improve the accuracy, we propose an Adaptive version of AdvFM (AAFM). It auto-strengthens the adversarial perturbations on the feature embedding parameters, and re-weights the samples in adversarial training. Our adaptive version leverages the two forms of feature biases previously introduced (Fig. 3, right).
* _Auto-Strengthening._ Considering each feature domain \(x_{i}\) with the corresponding value \(v_{i}\), a smaller combination variety \(\beta_{v_{i}}\) indicates a more sensitive representation. Thus, it needs to be trained with stronger perturbations on its embedding parameters to improve its robustness. We estimate the feature-specific \(\epsilon_{v_{i}}\) on an inversely proportional basis: \[\epsilon_{v_{i}}=\psi\left(\omega_{i}\times(\beta_{v_{i}})^{-1}\right), \tag{7}\] where \(\omega_{i}\) is a learnable parameter with respect to the feature domain \(x_{i}\). We adopt the SoftPlus activation function for \(\psi\), as it does not change the sign of the gradient, and the SoftPlus unit has a non-zero gradient over all real inputs.
* _Re-Weighting._ Unlike previous work (Hu et al., 2017) that uses a fixed adversarial training weight \(\lambda\) for all samples, we use sample-specific ones. Specifically, given a sample \(\mathbf{x}^{(k)}\), the sample-specific adversary weight \(\lambda_{k}\) is defined as: \[\lambda_{k}=\Phi(-\prod_{x\in\mathbf{x}^{(k)}}\alpha_{x},t). \tag{8}\] For a sample \(\mathbf{x}^{(k)}\) with a low overall feature frequency \(\prod_{x\in\mathbf{x}^{(k)}}\alpha_{x}\) in training, we increase the weight of its adversarial loss by increasing its associated \(\lambda\) value. The function \(\Phi(\cdot,t)\) is used to scale the values between \(1\) and \(t\). If we used the previous design of a trainable parameter \(\omega\) to scale, \(\lambda\) would easily be eliminated by the overarching optimization goal (Equation 6); hence we apply manually-controlled scaling via \(t\).
_Optimization of Decaying Adversarial Perturbation._ When the model adaptively adjusts the adversarial perturbation (noise) level \(\epsilon\), we observe that optimization may simply set \(\epsilon\) to zero, which best meets the normal training objective by reaching a local optimum. However, this thwarts the benefit of introducing adversarial perturbation, canceling it prematurely. To mitigate this, we envision a slow decline in the effect of adversarial perturbation, proportional to the time already trained. To this end, we design a regulation term for \(\omega\) by defining an additional loss \(\mathcal{L}_{decay}=\alpha(\tau\cdot\|\omega\|)^{-1}\), where \(\tau\) represents the trained epoch number, and \(\alpha\) is an annealing hyper-parameter controlling the regulation strength. As such, the change of \(\omega\) is more marked during early training, where a small \(\omega\) would make \(\mathcal{L}_{decay}\) large. As training proceeds and the model stabilizes, the sensitivity of \(\omega\) gradually decays as \(\tau\) increases. ## 3. Experiments **Datasets.** We experiment on three public datasets to examine our model's debiasing effect on both user and item groups.
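A minimal sketch of the adaptive components follows, assuming SoftPlus for \(\psi\) (as stated) and a batch-wise min-max rescaling into \([1,t]\) for \(\Phi\), whose exact form the text leaves open; the bias statistics \(\alpha_{v}\) and \(\beta_{v}\) are computed directly from the categorical data matrix. Function names are illustrative.

```python
import numpy as np
import torch
import torch.nn.functional as F

def feature_bias_stats(X):
    """Frequency alpha_v and combination variety beta_v for every feature value.

    X: (num_samples, num_domains) array of categorical values.
    Returns one dict per feature domain for each statistic.
    """
    n, m = X.shape
    alpha, beta = [], []
    for i in range(m):
        vals, counts = np.unique(X[:, i], return_counts=True)
        alpha.append({v: c / n for v, c in zip(vals, counts)})
        # combination variety: number of distinct full feature combinations containing v
        beta.append({v: len({tuple(row) for row in X[X[:, i] == v]}) for v in vals})
    return alpha, beta

def adaptive_eps(omega_i, beta_v):
    """Eq. (7): feature-specific perturbation strength; omega_i is a learnable scalar tensor."""
    return F.softplus(omega_i / beta_v)

def phi_rescale(neg_joint_freq, t):
    """Phi(., t) of Eq. (8): map negated joint frequencies into [1, t].

    The paper only states that Phi scales its input between 1 and t; a batch-wise
    min-max rescaling is assumed here purely for illustration.
    """
    x = torch.as_tensor(neg_joint_freq, dtype=torch.float32)
    span = (x.max() - x.min()).clamp_min(1e-12)
    return 1.0 + (t - 1.0) * (x - x.min()) / span
```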
User feature enriched recommendation datasets include the movie dataset _MovieLens-100K1_ (user gender, occupation, and zip code), and the image dataset _Pinterest2_ (user preference categories). Item feature enriched recommendation datasets include the movie dataset _MovieLens-100K1_ (movie category, and release timestamp), and the business dataset _Yelp3_ (business city, star). Following previous work (Hu et al., 2017), to reduce excessive cost we filtered out users with more than 20 interactions in Yelp, and randomly selected 6,000 users to construct our Pinterest dataset. We convert all continuous feature values into categorical values (e.g., by binning user age into appropriate brackets), and consider the user and item IDs as additional features. Footnote 1: [https://grouplens.org/datasets/movieLens/](https://grouplens.org/datasets/movieLens/) Footnote 2: [https://sites.google.com/site/xuestalphabeta/academic-projects](https://sites.google.com/site/xuestalphabeta/academic-projects) Footnote 3: [https://www.yelp.com/dataset](https://www.yelp.com/dataset) **Baselines.** We choose our comparison baselines with respect to models achieving strong recommendation accuracy and debiasing effects. _Accuracy Baselines_ include the matrix factorization-based method ONCF (Hu et al., 2017) and the FM family -- FM (Hu et al., 2017), NFM (Hu et al., 2017), DeepFM (Hu et al., 2017), CFM (Hu et al., 2017). _Debiasing Baselines_ include the regularization-based approach M-Match (Hu et al., 2017), the classical inverse propensity scoring approach IPS (Wang et al., 2018), MACR (Zhu et al., 2017) incorporating the user/item effect in the loss, and DecRS (Zhu et al., 2017) investigating the causal representation of users. **Evaluation Protocols.** For the train-test data split, we employ standard leave-one-out (Hu et al., 2017). To evaluate _accuracy_, we adopt _AUC_ (Area Under Curve) and _Logloss_ (cross-entropy). To assess _fairness_ concerning imbalanced features, we split the data into buckets for evaluation, following previous work (Hu et al., 2017; Wang et al., 2018). We first rank the data samples \(\mathbf{x}^{(k)}\) by the joint feature statistics \(\prod_{x\in\mathbf{x}^{(k)}}(\alpha_{x}\cdot\beta_{x})\), and divide the ranked samples into 6 buckets. We propose two quantitative metrics as follows.
* _EFGD_ (extreme feature-based groups difference). Following the previous practice (Hu et al., 2017) of taking the difference between the two extreme data groups as the indicator, we define EFGD as the AUC difference between the first 10% of samples and the last 10%.
* _STD_ (overall groups' standard deviation). STD is used to measure more fine-grained fairness (as in (Zhu et al., 2017)), and stands for the standard deviation of the buckets' AUC.
### Recommendation Accuracy Comparison #### 3.1.1. Superior Accuracy Against Baselines. We present the overall results in Table 1. On both user and item feature-enriched datasets, our AAFM consistently outperforms the other FM-based baselines. Among the baselines, DeepFM achieves the best performance on three datasets, as indicated by both the Logloss and AUC metrics. This highlights its effectiveness in mapping sparse features to dense vectors using the neural embedding layer. CFM, employing a 3D CNN, outperforms ONCF, which uses a 2D CNN, indicating the superiority of 3D CNNs in extracting feature interactions. #### 3.1.2. Ablation Study. To further investigate where the performance improvement of AAFM originates, we present the ablation study in the right-hand columns of Table 1.
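The two fairness metrics can be computed as sketched below, following the bucketing protocol above. The function names are ours, and the per-bucket AUC assumes every bucket contains both classes.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fairness_metrics(joint_stats, y_true, y_pred, num_buckets=6):
    """EFGD and STD over groups ranked by joint feature statistics.

    joint_stats[k] = product over features of (alpha * beta) for sample k.
    EFGD: AUC difference between the bottom 10% and top 10% of ranked samples.
    STD:  standard deviation of per-bucket AUCs.
    """
    order = np.argsort(joint_stats)
    y_true, y_pred = np.asarray(y_true)[order], np.asarray(y_pred)[order]
    n = len(order)
    k = max(1, n // 10)
    efgd = abs(roc_auc_score(y_true[:k], y_pred[:k]) -
               roc_auc_score(y_true[-k:], y_pred[-k:]))
    buckets = np.array_split(np.arange(n), num_buckets)
    aucs = [roc_auc_score(y_true[b], y_pred[b]) for b in buckets]
    return efgd, float(np.std(aucs))
```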
We can see that, compared to AdvFM (without any adaptive optimization), the introduction of adaptive \(\lambda\) significantly enhances the overall performance. This indicates that our proposed adversarial training re-weighting is promising and optimizes well, instead of locking the fairness model within performance-compromising constraints. However, introducing only adaptive \(\epsilon\) worsens the overall performance on several datasets. By considering both aspects together and synthesizing them into AAFM, and by further adding the decaying perturbation regularization loss, we get D-AAFM. One of the two performs best across all datasets. In most cases, D-AAFM performs better, demonstrating that persistent adversarial perturbations can severely impact model accuracy. ### Feature Fairness Results #### 3.2.1. Superior Fairness Against Baselines Feature fairness is another aspect of concern in our study. As depicted in Table 2, all fairness baselines show improvement over FM in terms of the metrics measuring the reduction in bias (EFGD and STD). We observe that the phenomenon of feature unfairness does exist, and that current fairness models do alleviate this issue. Among the baselines, MACR performs the best; it considers the popularity bias of both users and items, taking into account the impact of skewed occurrences of user or item IDs. Our AdvFM also provides fairer results compared to FM. However, it is not as good as the aforementioned debiasing baselines. This corroborates that, although adversarial training has recently shown promise in promoting fairness, it necessitates further detailed investigation. Through the careful design of adversarial perturbations, our AAFM and D-AAFM achieve better fairness, concerning either user features or item features. #### 3.2.2. Ablation Study To understand how the adversaries improve fairness, we conduct an additional ablation study, shown in the right columns of Table 2. Compared to AdvFM, the inclusion of adaptive \(\lambda\) and adaptive \(\epsilon\) both significantly contribute to improving fairness. When both are utilized (i.e., AAFM), the effect on feature fairness is further enhanced. This demonstrates that both proposed automatic adaptations are complementary and indispensable. Features with a smaller combination variety require a larger \(\epsilon\) to improve generalization ability. Even though we encourage this by using the reciprocal of the bias, it is still very easy for \(\epsilon\) to shrink during training (thereby reverting back to normal training). In order to forcefully encourage adversarial training, it is necessary for samples with less frequent features to have larger adversarial training weights, thus enabling the adversaries to truly play their role. Similar to the finding from the accuracy comparison, D-AAFM and AAFM alternately become the best models, suggesting different dataset sensitivities to long-term perturbations.
\begin{table}
\begin{tabular}{c|c c|c c c c c|c c c c c} \hline \hline
_Scenarios_ & _Dataset_ & _Metrics_ & FM & NFM & CFM & DeepFM & ONCF & AdvFM & AAFM\({}^{\lambda}\) & AAFM\({}^{\epsilon}\) & AAFM & D-AAFM \\ \hline
\multirow{4}{*}{_item-centric_} & \multirow{2}{*}{ML\({}^{i}\)} & LL & 0.4093 & 0.3688 & 0.3635 & 0.3597 & 0.3641 & 0.3730 & 0.3391 & 0.3673 & 0.3352 & **0.3248** \\
 & & AUC & 0.9154 & 0.9203 & 0.9257 & 0.9381 & 0.9243 & 0.9337 & 0.9406 & 0.9308 & 0.9408 & **0.9431** \\ \cline{2-13}
 & \multirow{2}{*}{Yelp} & LL & 0.1934 & 0.1895 & 0.0963 & 0.1584 & 0.1527 & 0.1692 & 0.0878 & 0.1751 & **0.0731** & 0.0742 \\
 & & AUC & 0.9474 & 0.9569 & 0.9732 & 0.9665 & 0.9668 & 0.9653 & 0.9790 & 0.9619 & **0.9813** & 0.9795 \\ \hline
\multirow{4}{*}{_user-centric_} & \multirow{2}{*}{ML\({}^{u}\)} & LL & 0.4493 & 0.4297 & 0.3876 & 0.3109 & 0.3721 & 0.4325 & 0.3182 & 0.4323 & 0.3072 & **0.2996** \\
 & & AUC & 0.8796 & 0.8908 & 0.9172 & 0.9319 & 0.9012 & 0.8810 & 0.9249 & 0.8808 & 0.9323 & **0.9357** \\ \cline{2-13}
 & \multirow{2}{*}{Pinterest} & LL & 0.5647 & 0.3865 & 0.3577 & 0.3541 & 0.4026 & 0.3859 & 0.3573 & 0.3914 & 0.3447 & **0.3042** \\
 & & AUC & 0.5700 & 0.7430 & 0.7356 & 0.7580 & 0.7251 & 0.7432 & 0.7695 & 0.7408 & 0.7756 & **0.8031** \\ \hline \hline
\end{tabular}
\end{table} Table 1. Overall accuracy performance comparison. Smaller LL (Logloss) or larger AUC indicates better accuracy. ML\({}^{i}\) or ML\({}^{u}\) indicates the partial MovieLens dataset with only item or user features. AAFM\({}^{\lambda}\) and AAFM\({}^{\epsilon}\) only adaptively adjust \(\lambda\) (with fixed \(\epsilon=0.5\)) and \(\epsilon\) (with fixed \(\lambda=1\)), respectively. D-AAFM indicates AAFM incorporating the decaying perturbation regularization.

\begin{table}
\begin{tabular}{c|c c|c c c c c|c c c c c} \hline \hline
_Scenarios_ & _Dataset_ & _Metrics_ & FM & IPS & M-Match & MACR & DecRS & AdvFM & AAFM\({}^{\lambda}\) & AAFM\({}^{\epsilon}\) & AAFM & D-AAFM \\ \hline
\multirow{4}{*}{_item-centric_} & \multirow{2}{*}{ML\({}^{i}\)} & EFGD & 0.0713 & 0.0336 & 0.0381 & 0.0282 & 0.0589 & 0.0401 & 0.0232 & 0.0241 & 0.0110 & **0.0105** \\
 & & STD & 0.0257 & 0.0237 & 0.0236 & 0.0171 & 0.0246 & 0.0235 & 0.0139 & 0.0161 & 0.0091 & **0.0069** \\ \cline{2-13}
 & \multirow{2}{*}{Yelp} & EFGD & 0.0440 & 0.0243 & 0.0301 & 0.0177 & 0.0272 & 0.0230 & 0.0166 & 0.0228 & **0.0144** & 0.0181 \\
 & & STD & 0.0131 & 0.0114 & 0.0153 & 0.0082 & 0.0122 & 0.0086 & 0.0082 & 0.0079 & **0.0064** & 0.0068 \\ \hline
\multirow{4}{*}{_user-centric_} & \multirow{2}{*}{ML\({}^{u}\)} & EFGD & 0.0415 & 0.0289 & 0.0337 & 0.0294 & 0.0374 & 0.0340 & 0.0323 & 0.0377 & **0.0281** & 0.0368 \\
 & & STD & 0.0280 & 0.0198 & 0.0208 & 0.0199 & 0.0230 & 0.0225 & 0.0219 & 0.0259 & **0.0195** & 0.0220 \\ \cline{2-13}
 & \multirow{2}{*}{Pin.} & EFGD & 0.1068 & 0.0682 & 0.0726 & 0.0558 & 0.0545 & 0.0853 & 0.0289 & 0.0580 & 0.0193 & **0.0132** \\
 & & STD & 0.0307 & 0.0277 & 0.0299 & 0.0275 & 0.0265 & 0.0300 & 0.0213 & 0.0296 & 0.0195 & **0.0178** \\ \hline \hline
\end{tabular}
\end{table} Table 2. Feature fairness effect comparison. The smaller the STD or EFGD, the fairer the results. The abbreviations are the same as in Table 1. The upper/lower two datasets correspond to item-centric/user-centric fairness.

### Robustness of AdvFM Driven by the premise that adversarial training enhances robustness for perturbed parameters, we delve into understanding this improvement.
In order to probe the robustness of groups under feature representation perturbations, we adopt the methodology from (Kang et al., 2018), which infuses external noise into the model parameters at levels spanning 0.5 to 2.0. As shown in Table 3, we observe that AdvFM exhibits less sensitivity to adversarial perturbations compared to FM. For instance, on the Yelp dataset, a noise level of 0.5 results in a decrease of 6.14% for FM, whereas AdvFM only experiences a decrease of 3.38%. Moreover, AAFM demonstrates even greater stability with a decrease of only 1.41%. From the perspective of these improvements in robustness, we see the model's ability to generalize to unseen inputs, giving indicative evidence for why rare features are handled well by our proposed methods. _Case Study._ The benefits of such robustness improvement are particularly pronounced for small groups characterized by less frequent features and unstable performance during training. To illustrate this, we select the _male entertainment_ group, which accounts for only 0.2% of the total users, as a case study (Figure 5). The figure demonstrates that normal FM training exhibits significant fluctuations, indicating the sensitivity of the data group to model updates. In contrast, by incorporating annealing adaptive noise in AAFM, performance gradually converges while improving overall AUC in the later stages of training. This notable improvement in stability further confirms the enhanced robustness in small groups. ### Trade-off Between Fairness and Accuracy Fairness and accuracy often involve a trade-off, and sometimes their objectives can even be contradictory (Srivastava et al., 2017). However, we argue that fairness and accuracy can find common ground with appropriate adaptive adversarial weights. We adjust the hyperparameter \(t\) to control the scale of \(\lambda_{k}\) in Figure 4. As \(t\) increases, we observe that fairness achieves the best results when \(t\) takes on the values of 100, 200, and 100 for MovieLens, Yelp, and Pinterest, respectively. On the other hand, accuracy reaches its peak when \(t\) is set to 50, 200, and 100. Notably, these two objectives are mostly aligned, suggesting that the improvement in fairness mainly stems from the enhanced accuracy of small groups rather than compromising the performance of larger groups (which could significantly reduce overall accuracy). The exception occurs in the MovieLens dataset, where there is a trade-off between the best accuracy (\(t=50\)) and the best fairness (\(t=100\)). MovieLens contains more feature domains compared to the other two datasets. This implies a finer feature granularity and more similar joint feature statistics for samples. A larger \(t\) will magnify the differences in the adversarial weights of samples that were originally similar. This leads to a rapidly increasing number of samples with low training weights, resulting in a more prominent overall performance drop. ## 4. Related Work **Fairness in recommendation** is a nascent but growing topic of interest (Beng et al., 2017), but hardly has a single, unique definition. The concept has been extended to cover multiple stakeholders (Beng et al., 2017; Li et al., 2018) and implies different trade-offs in utility. From a stakeholder perspective, fairness can be considered from both item and user aspects.
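The external-noise robustness probe can be sketched as below. It assumes the model exposes its per-domain embedding tables as in the earlier FM sketch and that an evaluation callback returns the held-out AUC; noise rows are normalized to the stated levels before being added to the embedding weights. All names are illustrative assumptions, not the cited methodology's exact procedure.

```python
import copy
import torch

@torch.no_grad()
def auc_drop_under_noise(model, eval_auc_fn, levels=(0.5, 1.0, 2.0)):
    """Relative change (%) in AUC when random noise of a given norm is added
    to each embedding weight row, mirroring the external-perturbation probe.

    eval_auc_fn(model) -> AUC on the held-out split (assumed provided by the caller).
    """
    base = eval_auc_fn(model)
    drops = {}
    for lvl in levels:
        noisy = copy.deepcopy(model)
        for emb in noisy.embed:                       # per-domain embedding tables (FM sketch)
            noise = torch.randn_like(emb.weight)
            noise = lvl * noise / noise.norm(dim=-1, keepdim=True).clamp_min(1e-12)
            emb.weight.add_(noise)
        drops[lvl] = 100.0 * (eval_auc_fn(noisy) - base) / base   # negative = performance drop
    return drops
```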
_User fairness_ (Li et al., 2018; Li et al., 2018) expects equal recommendation quality for individual users or demographic groups, and _item fairness_ (Li et al., 2018; Li et al., 2018) indicates fair exposure among specific items or item groups. From an architectural perspective, there are mainly two approaches to address fairness. One is to post-process model predictions (i.e., re-ranking) to alleviate unfairness (Li et al., 2018; Li et al., 2018; Li et al., 2018). The other, unbiased learning, is to directly debias the training process. The latter methods come from two origins. Causal Embedding (Beng et al., 2017) is one way to control the embedding learning from bias-free uniform data (e.g., by re-sampling (Li et al., 2018)). Re-weighting (Li et al., 2018; Li et al., 2018) is another method to balance the impact of unevenly distributed data during training, where Inverse Propensity Scoring (Li et al., 2018; Li et al., 2018) is a common means to measure the difference between the actual and expected distributions. In this work, we generalize the problem to solve the unfairness of both user and item groups, proposing an unbiased learning technique at the feature level.

Figure 4. Trade-off between accuracy and user group fairness via control of the re-weighting parameter \(t\). Smaller STD (\(\downarrow\)) indicates better fairness, and larger AUC (\(\uparrow\)) indicates better accuracy.

Figure 5. Case study: validation accuracy (AUC) on the small group _(male entertainment)_ in the MovieLens dataset.

\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline
_Dataset_ & \multicolumn{3}{c|}{**ML-100K**} & \multicolumn{3}{c|}{**Yelp**} & \multicolumn{3}{c}{**Pinterest**} \\
_Noise_ & 0.5 & 1.0 & 2.0 & 0.5 & 1.0 & 2.0 & 0.5 & 1.0 & 2.0 \\ \hline
FM & -4.67 & -9.58 & -18.9 & -6.14 & -12.7 & -24.3 & -2.74 & -3.40 & -4.95 \\
AdvFM & -2.32 & -4.75 & -9.60 & -3.38 & -6.79 & -13.3 & -1.48 & -1.53 & -1.64 \\
AAFM & -0.64 & -0.76 & -1.00 & -1.41 & -3.03 & -6.37 & -0.29 & -0.30 & -0.32 \\ \hline
\end{tabular}
\end{table} Table 3. Performance drop ratio (%) in AUC of models in the presence of external adversarial perturbation.

**Adversarial training in recommendation** helps models pursue robustness by introducing adversarial samples. One of the most effective techniques is to perturb samples with gradient-based noise (e.g., FGSM (Goodfellow et al., 2016), PGD (Rendle et al., 2016), and C&W (Cowran et al., 2017)). Previous work found such noise effective in improving recommendation accuracy, such as applying fixed FGSM on matrix factorization (Goodfellow et al., 2016) and multiple adversarial strengths (Zhu et al., 2017). Current adversarial perturbation in recommender systems mostly focuses on properly representing individual users (Beng et al., 2017; Chen et al., 2017; Li et al., 2018) or items (Beng et al., 2017; Li et al., 2018). Adversarial training is also increasingly discussed in unbiased learning approaches (Zhu et al., 2017). Recent work (Li et al., 2018) found that adversarial perturbation could benefit under-served users. Yu et al. (Yu et al., 2018) found a positive correlation between node representation uniformity and the debiasing ability, and added adversarial noise to each node in contrastive graph learning. However, these works lack a systematic comparison with fair recommendation baselines and overlook the flexibility of selected features.
While there have been discussions in computer vision on connecting fairness and model robustness (Zhu et al., 2017; Li et al., 2018), there is a lack of studies addressing the bridge between model robustness and the co-improvement of accuracy and fairness in recommendation tasks. ## 5. Conclusion and Future Work In this work, we propose a feature-oriented fairness approach, employing feature-unbiased learning for the simultaneous improvement of fairness and accuracy. We address imbalanced performance among feature-based groups by identifying its root causes in feature frequency and combination variety. Our proposed Adaptive Adversarial Factorization Machine (AAFM) uses adversarial perturbation to mitigate this imbalance during training, applying varied perturbation levels to different features and adversarial training weights to different samples. This adaptive approach effectively enhances the generalizability of feature representations. Our experimental results show that AAFM outperforms the baselines in fairness, accuracy, and robustness, highlighting its potential as an effective approach for further study in this field. While AAFM introduces adversarial training to unbiased learning, there are still many possible refinements. For example, AAFM defaults to using random negative sampling, which biases toward the majority of user/item features. How to balance the impact of such biased negative sampling in different groups deserves future study. It will also be valuable to further investigate the effectiveness of different adversaries (e.g., PGD (Rendle et al., 2016), or C&W (Cowran et al., 2017)) on more complex neural recommendation backbones. ## Appendix A Derivation of Adversarial Perturbation We present the mathematical derivation of the adversarial perturbation for feature embedding \(e_{i}\), and explain the reasoning behind utilizing combination variety as the bias parameter to achieve balance.
By applying the chain rule, we express the adversarial feature perturbation \(\Delta_{adv}^{e_{i}}\) in the following manner: \[\Delta_{adv}^{e_{i}}=\epsilon\cdot\frac{\partial\mathcal{L}(\hat{y},y)/\partial e_{i}}{\|\partial\mathcal{L}(\hat{y},y)/\partial e_{i}\|}=\epsilon\cdot\frac{\left(-\frac{y}{\hat{y}}-\frac{1-y}{1-\hat{y}}\right)\cdot\partial\hat{y}/\partial e_{i}}{\left\|\left(-\frac{y}{\hat{y}}-\frac{1-y}{1-\hat{y}}\right)\cdot\partial\hat{y}/\partial e_{i}\right\|}. \tag{9}\] \(y\) can take on values of either \(0\) or \(1\), hence we can simplify the above expression as: \[\Delta_{adv}^{e_{i}}=-\epsilon\cdot\frac{\partial\hat{y}/\partial e_{i}}{\|\partial\hat{y}/\partial e_{i}\|}. \tag{10}\] Given that we have chosen FM as our prediction model, we can calculate the partial derivative of \(\hat{y}\) with respect to the feature embedding \(e_{i}\) as follows: \[\begin{split}\frac{\partial\hat{y}}{\partial e_{i}}&=w_{i}+\frac{\partial}{\partial e_{i}}\left[\frac{1}{2}\sum_{f=1}^{d}\left(\sum_{j=1}^{n}v_{j,f}e_{j}\right)^{2}-\frac{1}{2}\sum_{f=1}^{d}\left(\sum_{j=1}^{n}v_{j,f}^{2}e_{j}^{2}\right)\right]\\ &=w_{i}+\frac{1}{2}\left[\frac{\partial}{\partial e_{i}}\sum_{f=1}^{d}\left(v_{i,f}^{2}e_{i}^{2}+2\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{n}v_{i,f}v_{j,f}e_{i}e_{j}\right)-\frac{\partial}{\partial e_{i}}\sum_{f=1}^{d}\left(\sum_{j=1}^{n}v_{j,f}^{2}e_{j}^{2}\right)\right]\\ &=w_{i}+\sum_{f=1}^{d}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{n}v_{i,f}v_{j,f}e_{j}\\ &=w_{i}+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{n}\left\langle v_{i},v_{j}\right\rangle e_{j},\end{split} \tag{11}\] where \(\frac{\partial}{\partial e_{i}}\sum_{f=1}^{d}\left(\sum_{j=1}^{n}v_{j,f}^{2}e_{j}^{2}\right)\) can be reduced against the squared term of the expansion, and the vector multiplication involved is performed element-wise. Substituting \(\partial\hat{y}/\partial e_{i}\) into \(\Delta_{adv}^{e_{i}}\), we thus have: \[\Delta_{adv}^{e_{i}}=-\epsilon\cdot\frac{w_{i}+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{n}\left\langle v_{i},v_{j}\right\rangle e_{j}}{\left\|w_{i}+\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{n}\left\langle v_{i},v_{j}\right\rangle e_{j}\right\|}. \tag{12}\] The addition of this adversarial perturbation to the original embedding \(e_{i}\) utilizes the interacting feature embeddings \(e_{j}\), weighted by the pair-wise interaction weights \(\left\langle v_{i},v_{j}\right\rangle\), to enhance the representation of embedding \(e_{i}\). Hence, we find that the perturbation on \(e_{i}\) is controlled by the strength \(\epsilon\), and the perturbation direction is influenced by \(w_{i}\) and \(v\). There exists a direct relationship between \(w_{i}\) and the perturbation direction. As for \(\left\langle v_{i},v_{j}\right\rangle\), being the second-order interaction parameters, their pairwise combinations determine the impact of the other \(e_{j}\) values on the perturbation direction. When \(w_{i}\) is held constant, a larger variety of feature combinations results in a more diverse range of perturbation directions. Consequently, in our work, we assign a smaller perturbation strength to features with larger combination variety, balancing the influence of the adversaries and their impact.
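The closed form in Eq. (11) can be checked numerically against automatic differentiation. The snippet below assumes the reading \(\hat{y}=\sum_{i}\langle w_{i},e_{i}\rangle+\sum_{i<j}\langle v_{i},v_{j}\rangle\langle e_{i},e_{j}\rangle\) and random parameters; it is an illustrative verification, not part of the authors' code.

```python
import torch

def fm_score(e, w, v):
    """y_hat = sum_i <w_i, e_i> + sum_{i<j} <v_i, v_j> <e_i, e_j>  (per Eq. 1)."""
    n = len(e)
    y = sum(torch.dot(w[i], e[i]) for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            y = y + torch.dot(v[i], v[j]) * torch.dot(e[i], e[j])
    return y

torch.manual_seed(0)
n, d = 4, 8
e = [torch.randn(d, requires_grad=True) for _ in range(n)]
w = [torch.randn(d) for _ in range(n)]
v = [torch.randn(d) for _ in range(n)]

y = fm_score(e, w, v)
autograd_grad = torch.autograd.grad(y, e[0])[0]
closed_form = w[0] + sum(torch.dot(v[0], v[j]) * e[j] for j in range(1, n))
print(torch.allclose(autograd_grad, closed_form, atol=1e-6))  # expected: True
```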
2309.11695
Active perception network for non-myopic online exploration and visual surface coverage
This work addresses the problem of online exploration and visual sensor coverage of unknown environments. We introduce a novel perception roadmap we refer to as the Active Perception Network (APN) that serves as a hierarchical topological graph describing how to traverse and perceive an incrementally built spatial map of the environment. The APN state is incrementally updated to expand a connected configuration space that extends throughout as much of the known space as possible, using efficient difference-awareness techniques that track the discrete changes of the spatial map to inform the updates. A frontier-guided approach is presented for efficient evaluation of information gain and covisible information, which guides view sampling and refinement to ensure maximum coverage of the unmapped space is maintained within the APN. The updated roadmap is hierarchically decomposed into subgraph regions which we use to facilitate a non-myopic global view sequence planner. A comparative analysis to several state-of-the-art approaches was conducted, showing significant performance improvements in terms of total exploration time and surface coverage, and demonstrating high computational efficiency that is scalable to large and complex environments.
David Vutetakis, Jing Xiao
2023-09-21T00:20:57Z
http://arxiv.org/abs/2309.11695v1
# Active perception network for non-myopic online exploration and visual surface coverage ###### Abstract This work addresses the problem of online exploration and visual sensor coverage of unknown environments. We introduce a novel perception roadmap we refer to as the Active Perception Network (APN) that serves as a hierarchical topological graph describing how to traverse and perceive an incrementally built spatial map of the environment. The APN state is incrementally updated to expand a connected configuration space that extends throughout as much of the known space as possible, using efficient difference-awareness techniques that track the discrete changes of the spatial map to inform the updates. A frontier-guided approach is presented for efficient evaluation of information gain and covisible information, which guides view sampling and refinement to ensure maximum coverage of the unmapped space is maintained within the APN. The updated roadmap is hierarchically decomposed into subgraph regions which we use to facilitate a non-myopic global view sequence planner. A comparative analysis to several state-of-the-art approaches was conducted, showing significant performance improvements in terms of total exploration time and surface coverage, and demonstrating high computational efficiency that is scalable to large and complex environments. ## I Introduction The general problem of online exploration and visual surface coverage of _a priori_ unknown structure or environment can be referred to as _online sensor-based coverage planning_ (OSCP). For this, a robot such as a Micro Aerial Vehicle (MAV) must efficiently discover the spatial and geometric structure of an initially unknown environment using an onboard depth sensor. The robot must traverse the environment to perceive the unknown space from different perspectives, accumulating the acquired sensor knowledge in a spatial map. OSCP is a prerequisite problem for a wide range of applications involving operation in an unknown environment, such as structural modeling and inspection, surveying, search and rescue, and many others [1]. The _global coverage problem_ of OSCP is to achieve maximum coverage of the target surfaces as efficiently as possible. Naturally, this cannot be solved directly due to the lack of a priori knowledge, and can only be solved online in an incremental fashion. This leads to the _incremental exploration problem_ which represents an iterative action selection problem: given the current incomplete knowledge, determine the optimal action to increase the current knowledge. As each action is executed, the incremental objective is recursively solved using feedback from the added environment knowledge until the global exploration objective has been achieved. ## II Related Work The purpose of online exploration and coverage can vary between different applications and tasks. For example, some applications seek knowledge of the traversable space within an unknown environment for subsequent navigation tasks, while others may seek detailed coverage of the surfaces for 3D modelling or inspection purposes. It is important to recognize such differences in the intended application, as this can greatly influence how the problem is approached and how its performance is evaluated. **Frontier-based exploration**: Autonomous exploration was pioneered by [2] by introducing the now well-known concept of spatial frontiers. 
Frontiers represent boundaries within a partially built map between unknown space the robot seeks to observe, and the free space the robot can use to make the observations. The original frontier exploration algorithm, referred to as classical frontier exploration, was implemented for a mobile ground robot building a 2D occupancy grid map. The algorithm selects the closest frontier as the goal, and navigates towards the goal while using reactive collision-avoidance. Upon arrival, a new sensor scan is acquired of the region and added to the map, repeating the process until no unvisited and reachable frontiers remain. Many extensions have been proposed to the initial frontier-based approach, including more efficient frontier detection methods [3, 4] and extensions for 3D maps [5]. However, a significant drawback of classical frontier exploration is that frontiers indicate only the _existence_ of adjacent unknown space, but not its quantity or quality. Using a frontier location directly as the navigation goal ignores the sensor's measurement range, thus causing inefficient and wasteful motions. Furthermore, frontier locations near surfaces generally do not represent feasible goals for a robot due to collision with the surface obstacle, making their direct use in this way ineffective for surface coverage tasks. **Next-best-view (NBV) sampling**: Exploration can be effectively modeled as an extension of the Next-Best-View (NBV) problem introduced by [6], which can overcome several of the drawbacks associated with classical frontier-based approaches. Here, a _view_ refers to a hypothetical pose of the sensor apparatus used to predict and analyze the spatial information expected to be visible if the real sensor were to be placed at this pose. The expected visible information is then said to be _covered_ by the view. The classical NBV problem assumed full prior knowledge of the target object is given to facilitate the search and evaluation of NBVs, where the objective was to find a minimum set of views that maximizes coverage of the known surfaces of the object model. This premise can be adapted for online exploration tasks by instead evaluating views according to the currently unknown parts of the environment model, rather than the known parts. NBV-based exploration methods typically utilize a generate-and-test paradigm which applies sampling techniques to discretize the continuous configuration space into a finite set of candidate views for analysis [7]. The quality of a view is evaluated according to some measure of its _information gain_ (IG), which quantifies the new spatial information potentially observable from the view [8, 9, 10, 11]. A cost metric is additionally used to evaluate the expected effort for the robot to visit the view (e.g. time or energy). The most critical differences among existing NBV approaches occur within the sampling strategy for generating view candidates, and the formulation of metrics for analyzing and comparing candidates for goal selection. Information gain is commonly computed volumetrically by finding the expected amount of unknown space visible from a view [12, 13, 14]. This necessarily involves checking for occlusions within the known space using techniques like raycasting, which incurs high computational complexity that can rapidly increase with various factors like map resolution, sensor field of view, and sensing range. This limits the number of distinct views that can be practically evaluated within a given time period.
The high complexity also makes it difficult to analyze overlapping or mutual information between views, such that most approaches treat the gain as an independent value, which prevents an understanding of the unique gain contributions of each view within a group. **Tree-based planning**: Tree-based methods organize sampled views as vertices in a geometric tree where directed edges between vertices represent feasible paths between views. The RH-NBVP approach of [15, 16] applies a rapidly-exploring random tree (RRT) to grow a tree rooted at the robot's current position. Each node in the tree is weighted according to its predicted information gain based on how much unknown space lies within the view. Cost weights are aggregated along each branch, and the leaf node with the highest value is used to identify the best branch to explore, iteratively repeating the process in a receding horizon fashion. This has become a well-known approach and is often used as a baseline for comparative analysis [17, 18, 19]. A hybrid approach that combines both frontier-based and NBV-based techniques was introduced by [17], referred to as AEP. It combines the RH-NBVP strategy for local planning, while switching to frontier-based planning for global search when local planning fails to find informative views. FFI [18] is also a hybrid approach that uses an efficient frontier clustering strategy to guide view sampling. A significant drawback of tree-based planning is the difficulty in preserving the previously computed tree structure as the robot navigates to each goal. The RH-NBVP approach builds a new tree each iteration, discarding the previously built structure that may still contain useful knowledge. Other approaches attempt to transfer as much of the previous tree structure as possible by rewiring its edges to initialize the construction of a new tree. Since tree-based methods are rooted at the robot's position, they tend to become increasingly inefficient over larger distances, making it difficult to handle dead-end or backtracking cases. **Graph-based planning**: Various approaches have utilized graph structures that can overcome some of the limitations and drawbacks of trees. The approach of [20] builds a history graph that stores previously visited positions and their edge connections. These are used as potential seed points for RRT, which allows a tree to be grown from different positions across the map, rather than just from the robot's position. An approach using Rapidly-Exploring Random Graphs (RRG) was presented in [13] for exploration of subterranean environments. A Probabilistic Roadmap (PRM) strategy was used by [21] to build a graph of feasible configurations and paths over the map as it is explored. **Topological maps**: Topological maps have been applied by recent works which aim to reduce the planning complexity through the compact representation provided by a topological map. Topological maps can be considered as an extension to graph-based methods, where vertices represent some volumetric sub-map, or _place_, and edges represent the adjacency or reachability between places. This coarse and abstracted representation is more efficient for handling large-scale environments, which can become intractable to explore online using alternative approaches. However, they usually lack sufficient metric knowledge for direct use in navigation. [22] used a topological map for exploration of underground mines using a ground robot.
The regions of intersection between passageways were represented as nodes, and exploration was planned along the edges between nodes. A more recent approach proposed by [23] also uses a topological map for subterranean exploration. Convex polyhedrons are used to estimate distinctive exploration regions (DER-s) which are added as graph nodes to the map. Each DER represents an enclosed 3D volume of the map like an enclosed room or corridor, providing the planner with knowledge of high-level intent such as moving between distinctive rooms or regions. Other approaches have applied segmentation algorithms to identify the separation of distinct exploration regions like rooms of a building [24]. **Myopic greedy planning**: The majority of existing methods compute navigation goals using myopic planning strategies that greedily optimize the cost of the next single planning decision [25, 18], or within a limited planning horizon [15, 17]. Some works allow planning over the full map, but still use greedy search for the decision making. These are sometimes referred to as global planning methods, but we clarify they are still considered myopic. Myopic strategies bias exploration toward regions with high information gain, while ignoring small gains even if they are closer. This bias can frequently create regions of incomplete coverage when a high gain goal leads the exploration away from the current region before it is fully mapped. This can also result in frequent back-and-forth oscillation between goals, or require re-visitation of these regions after the robot has traveled a significant distance, backtracking over potentially large distance. This greatly reduces efficiency, and can result in sparse coverage gaps or failure to fully explore an environment within an allowed time limit, especially over large-scales. A relatively small number of works have recently attempted to overcome the drawbacks of greedy planning using non-myopic planning strategies. This has been formulated using the Traveling Salesman Problem (TSP) [26, 27], but often relies on prior map knowledge [28, 29]. A sector decomposition approach was presented by [27], which partitions the map into a set of convex sectors used to compute a TSP sequence. However, the sector decomposition method can be computationally expensive, especially for finer map resolutions, which can greatly decrease the update rate of the map and planning. Additionally, sectors form an exact partitioning of the space, which can make the geometric properties of the resulting sectors difficult to control, and may not effectively handle large-scale and complex environments. **Environment and task-specific approaches**: Simplifying or restrictive assumptions are sometimes made on the operational environment. This can include indoor operation, or reliance on certain regular geometric features, e.g. room structures used for segmentation. Some applications are intended to operate in relatively obstacle-free environments, such as outdoors or underwater [30], which contain an abundance of free-space that greatly simplifies collision checking and other sub-tasks. Assumptions can significantly restrict the practicality of many approaches for general use, or require fine tuning of parameters between different environments to achieve their rated performance. 
**Limitations of existing approaches:** Limitations of existing approaches are summarized as follows:
* greedy and myopic planning strategies that focus on the incremental exploration objective, but fail to consider the global one,
* non-generalized approaches that are limited to small-scale environments, or specialized for specific environments or conditions (e.g. subterranean or building-like structures),
* most approaches succumb to high computational costs:
  * they do not scale well with respect to environment size or map resolution,
  * the ability to quickly replan on added knowledge diminishes, where a suboptimal plan is fully executed before replanning,
  * reduced velocities are often required to compensate for low planning rates,
  * frequent stop-and-go motions can occur.
We observe that the underlying data management issues of the general OSCP problem have not received sufficient attention in the robotics research community, possibly because data infrastructure has not been a focus of limited research funding and development cycles. There is also a lack of open-source software to help reduce the effort that researchers must put into developing a good data management system. The aforementioned limitations of existing approaches are, in part, a result of this. However, for many realistically large-scale OSCP tasks, it is critical to have a smart and sophisticated data management system, requiring careful conceptual, algorithmic, and data structure design, as well as efficient software engineering solutions. ## III Contributions This work is motivated to alleviate some of the limitations of the existing approaches in handling the OSCP problem. We focus first on how to dynamically compute and maintain the accurate global knowledge necessary for a non-myopic planning algorithm, since this represents a significant bottleneck in terms of computational complexity and exploration quality in the existing work. Our key contributions are as follows:
* A novel dynamic multi-layer topological graph designated as the _Active Perception Network_ (APN). The APN serves as a global hierarchical roadmap over the spatial map that accumulates the incrementally computed knowledge of the exploration state space. It is defined and organized around adaptive nodes to best represent the perceptual and actionable environment knowledge discovered while minimizing complexity, which allows it to be efficiently accessed and searched for planning purposes.
* A dynamic update procedure referred to as _Differential Regulation_ (DFR) to incrementally build and refine the APN as environment knowledge is increased. This procedure addresses the complexity of updating the APN as its size and the map scale increase, while ensuring sufficient global knowledge is maintained for effective planning.
* A non-myopic planning approach denoted as APN-Planner (APN-P) that demonstrates how the APN can be leveraged to compute and adaptively refine a globally informed exploration sequence.
* A detailed performance analysis and comparison to existing approaches among the state-of-the-art.
* An open-sourced release of the APN, DFR, and planning implementations, and a programming framework for the development of generalized autonomous exploration approaches that was developed and used for all our implementations.
## IV Problem Formulation We assume exploration is performed using an MAV equipped with an onboard depth sensor (e.g.
stereo-visual, RGB-D, or LiDAR) to perceive 3D space, noting that other systems such as mobile ground robots could also be utilized without loss of generality. We define the following terms and symbols to facilitate the description of our approach: **Environment and map model:** Let \(\mathcal{W}\subset\mathbb{R}^{3}\) represent the bounded 3D space of the operational environment, referred to as the _world_. The solid structures and objects of the world represent _occupied_ space \(\mathcal{W}_{occ}\subset\mathcal{W}\), while the remaining volume is defined as _free-space_ \(\mathcal{W}_{free}\subset\mathcal{W}\), such that \(\mathcal{W}\equiv\mathcal{W}_{free}\cup\mathcal{W}_{occ}\). The intersection boundaries between occupied and free-space define the surface manifolds, \(\mathcal{S}\subset\mathbb{R}^{2}\). Surface manifolds are assumed to be visually opaque, and a surface point is considered optically visible from a point \(\mathbf{x}\in\mathcal{W}_{free}\) only if no occupied space lies between the surface and \(\mathbf{x}\). Otherwise, the surface is considered to be occluded from \(\mathbf{x}\). A spatial map \(\mathcal{M}\) is used to store the environment state knowledge as it is discovered from sensing. We assume the use of a 3D grid-based occupancy map \(\mathcal{M}=\{\mathbf{m}_{0},\ldots,\mathbf{m}_{m}\}\), though other map models could also be used without loss of generality (e.g. Signed Distance Field (SDF) [31]). \(\mathcal{M}\) partitions \(\mathcal{W}\) by a set of non-overlapping cubic volumes \(\mathbf{m}\in\mathbb{R}^{3}\), known as voxels. The minimum edge length of a voxel dictates the map resolution, \(r_{\mathcal{M}}\). Each voxel stores the occupancy probability of its volume, which is updated from sensor measurements depending on whether occupied or free-space was observed. The probability value is discretized by an occupancy state \(\mathcal{O}\in\{\mathcal{O}^{unk},\mathcal{O}^{occ},\mathcal{O}^{free}\}\), where \(\mathcal{O}^{unk}\) indicates the state is _unknown_. As sensor measurements are integrated, the state is classified as either \(\mathcal{O}^{occ}\) or \(\mathcal{O}^{free}\) to indicate, respectively, whether the voxel belongs to the set of occupied voxels \(\mathcal{M}^{occ}\subseteq\mathcal{M}\) or the set of free-space voxels \(\mathcal{M}^{free}\subseteq\mathcal{M}\); the remaining unknown voxels form the set \(\mathcal{M}^{unk}\subseteq\mathcal{M}\), with the initial map state given as \(\mathcal{M}=\mathcal{M}^{unk}\). Spatial frontiers, \(\mathcal{F}\), are detected from \(\mathcal{M}\) by identifying unknown voxels with an adjacent free voxel. Frontiers that are also adjacent to an occupied surface voxel are further classified as _surface frontiers_, \(\mathcal{F}^{\mathcal{S}}\). Those that are adjacent only to free space are classified as _void frontiers_, \(\mathcal{F}^{\mathcal{X}}\), such that \(\mathcal{F}\equiv\mathcal{F}^{\mathcal{S}}\cup\mathcal{F}^{\mathcal{X}}\). These distinctions are made according to the goal of achieving complete surface coverage, where surface frontiers help to identify where surface coverage is incomplete.
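A minimal sketch of frontier detection and classification on a dense 3D occupancy grid follows, using 6-connectivity; the integer state encoding and the array-based map are illustrative simplifications of the voxel map described above, not the released implementation.

```python
import numpy as np

UNK, FREE, OCC = 0, 1, 2   # voxel occupancy states O^unk, O^free, O^occ

def classify_frontiers(grid):
    """Label frontier voxels in a 3D occupancy grid.

    A frontier is an unknown voxel with at least one 6-connected free neighbor;
    it is a surface frontier if it also touches an occupied voxel, otherwise a
    void frontier. Returns two boolean masks (surface, void).
    """
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    padded = np.pad(grid, 1, constant_values=UNK)
    free_adj = np.zeros_like(grid, dtype=bool)
    occ_adj = np.zeros_like(grid, dtype=bool)
    for dx, dy, dz in offsets:
        shifted = padded[1 + dx:padded.shape[0] - 1 + dx,
                         1 + dy:padded.shape[1] - 1 + dy,
                         1 + dz:padded.shape[2] - 1 + dz]
        free_adj |= shifted == FREE
        occ_adj |= shifted == OCC
    frontier = (grid == UNK) & free_adj
    return frontier & occ_adj, frontier & ~occ_adj
```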
**Robot model:** The robot agent is modeled by a rigid body with pose configuration \(\mathbf{q}^{agent}(t)=(\mathbf{x},\mathbf{a}),\ \mathbf{q}\in SE(3)\) at time \(t\), where \(\mathbf{x}\in\mathbb{R}^{3}\) is the position vector and \(\mathbf{a}=\{\varphi,\vartheta,\psi\}\) is the orientation vector represented by roll, pitch, and yaw Euler angles, respectively. Additional parameters \(\mathbf{v}_{max}\) and \(\dot{\psi}_{max}\) are used to specify the maximum allowable velocity and yaw rate, respectively. A spherical volume \(B^{safe}\) centered at \(\mathbf{x}\) with radius \(d_{safe}\) is defined, where \(d_{safe}\) specifies the minimum obstacle separation distance for safe operation. **Sensor model:** The robot's depth sensor is modeled by the parameter vector \([R_{s},\alpha_{s},d_{max}^{sense}]\). \(\alpha_{s}=[\alpha_{h},\alpha_{v}]\in(0,2\pi]\) is the maximum angular field of view (FoV) on the horizontal and vertical dimensions of the sensor, and \(R_{s}=[R_{sx},R_{sy}]\) is the maximum spatial resolution. \(d_{max}^{sense}\in\mathbb{R}\) is the maximum effective sensing range that surface points can be accurately detected by the sensor. This value corresponds to the physical limitations of the sensor, where distances greater than \(d_{max}^{sense}\) either cannot be measured, or are rejected due to loss of accuracy. The sensor parameters can be combined with a pose \(\mathbf{q}\) to form a projection model \(\lambda\in\Lambda\), referred to as a _viewpose_. The projected space from \(\lambda\) is described by the subset of rays that pass through the view's origin \(\mathbf{x}\), constrained by the intervals \([\vartheta\pm\alpha_{v}/2]\) and \([\psi\pm\alpha_{h}/2]\) of the unit-sphere. The length of each ray is constrained by \(d_{max}^{sense}\). The projected space defines the view volume of a viewpose, and a location within the view volume is considered visible if there are no occlusions between it and the origin. This provides the basis for making visibility queries and predictions on the expected information gain. **Reachable configuration space:** Given the robot's initial position \(\mathbf{x}_{0}^{agent}\), the _reachable configuration space_\(\mathcal{X}\subset\mathbb{R}^{3}\) is a metric space defined by all admissible configurations path-connected to \(\mathbf{x}_{0}^{agent}\). As a precondition, a configuration is considered _admissible_ if it does not intersect any occupied space within distance \(d_{safe}\). It is then considered _reachable_ if there exists a simply-connected path of admissible configurations from \(\mathbf{x}_{0}^{agent}\). The distance between two reachable points is quantified by a metric value \(L\in\mathbb{R}\). **Goal space:** The surfaces that can possibly be covered at any point during exploration is inherently restricted to a subset \(\mathcal{S}_{\mathcal{X}}\subseteq\mathcal{S}\) which are visible from some viewpose \(\lambda\) constrained by \(\mathcal{X}\). The _goal space_\(\Lambda^{G}\subset\Lambda\) is then defined as the set of feasible configurations that contribute some amount of coverage of \(\mathcal{S}_{\mathcal{X}}\), quantified by a gain metric, \(\gamma\in\mathbb{R}\). **Exploration state space:** The _exploration state space_, \(\Omega\), refers to the collectively available knowledge necessary to solve the incremental exploration problem. This mainly consists of the robot pose \(\mathbf{q}^{agent}\), spatial map \(\mathcal{M}\), and \(\mathcal{F}\), which are considered as independent time-varying input variables. 
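As an illustration of the viewpose projection model, the following sketch tests whether a point falls inside a view volume given the angular FoV and maximum sensing range. Occlusion checking (e.g., raycasting through \(\mathcal{M}\)) is assumed to be handled separately, and the yaw/pitch parameterization is a simplification of the full pose; all names are our own.

```python
import numpy as np

def in_view_volume(p, origin, yaw, pitch, alpha_h, alpha_v, d_max):
    """True if world point p lies inside the view volume of a viewpose lambda.

    The viewpose is given by its origin, yaw/pitch orientation, angular FoV
    (alpha_h, alpha_v), and maximum sensing range d_max.
    """
    d = np.asarray(p, float) - np.asarray(origin, float)
    r = np.linalg.norm(d)
    if r == 0.0 or r > d_max:
        return False
    az = np.arctan2(d[1], d[0])                        # azimuth of the ray to p
    el = np.arcsin(d[2] / r)                           # elevation of the ray to p
    d_yaw = (az - yaw + np.pi) % (2 * np.pi) - np.pi   # wrapped yaw difference
    return abs(d_yaw) <= alpha_h / 2 and abs(el - pitch) <= alpha_v / 2
```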
It additionally includes the reachable C-Space, \(\mathcal{X}\), and goal space, \(\Lambda^{G}\), which are dependent variables computed from the input data. **Myopicity:** A planning strategy operates on the exploration state space to search for the optimal goal \(\mathbf{q}^{g}\in\Lambda^{G}\) for navigation, where the myopicity corresponds to the length of its planning horizon. A myopic strategy typically uses greedy search techniques which treats each goal or action as independent of the others, greedily selecting the best one. They may also constrain the search to only some local sub-region of the map, rather than considering its full extent. Myopic strategies focus on the optimization related to the incremental exploration problem, which are not necessarily optimal with respect to the global coverage objective. In contrast, a non-myopic strategy searches over a long horizon that spans most or all of the available map. It additionally considers how the particular selection of a goal and its associated action may alter the future exploration state space. This involves search and evaluation over ordered sequences of actions, rather than each action individually. This results in solutions that are more optimal with respect to the long-term global coverage objective. ## V Approach Overview In this work, we address how to build a reusable exploration state space \(\Omega\) that is adaptively maintained over the full spatial map as it is built concurrently. The iteratively built exploration space is then used to facilitate efficient non-myopic planning. We seek an approach that generalizes well to different environments with varying complexities and geometric characteristics, and efficiently scales to large-sized environments that cannot be effectively solved by myopic approaches. To achieve this goal, we introduce a novel graph-theoretic information structure named the _Active Perception Network_ (APN) to model the exploration state space data, detailed in Section VI. A key feature of the APN is a hierarchical representation over its configurations that helps to reduce its size complexity and enables variable-resolution planning as the map increases in scale. Another focus of the APN is the storage and organization of the contained data, such that dynamic changes can be efficiently made to any of its contents as its size increases, while also maximizing the low-level efficiency for search and query operations. Some of these details are related to software, data structures, and other implementation challenges, which are beyond the scope of this work. Instead, the APN will primarily be discussed from a modeling perspective, with some additional implementation details provided in the appendix. We additionally introduce the process of _Differential Regulation_ (DFR) in Section VII, which operates on the APN to modulate its state with respect to the increasing map knowledge. DFR consists of sampling-based methods for increasing knowledge of the goal space and reachable space. A novel approach for information gain analysis is utilized that enables the individual and mutual information gain of the APN to be efficiently computed, which is leveraged to accelerate informative view sampling, pruning, and refinement. DFR exploits the incremental nature of map building where each sequential map update induces changes that occur only within a relatively small local region of bounded volume, independent of the total map size. 
With this insight, these incremental changes are tracked and cached using difference-awareness and memoization strategies to greatly reduce the computational overhead necessary to update the APN. This allows more discrete updates to be performed in a given time period, increasing the completeness and accuracy of each update. The ability to quickly perform each update is also critical to ensure the size of the map changes remains small, since the complexity of each update scales with the size of the changes. An anytime exploration planner is presented in Section VIII, which demonstrates the use of the APN to efficiently compute non-myopic global exploration sequences. The hierarchical representation of the APN is leveraged to first compute a global topological exploration plan over the full map. The beginning of the global plan is then locally optimized at a higher resolution. Similar to the difference-aware approach used by DFR, sequential changes to the APN typically occur within locally bounded regions, which are leveraged to initialize new planning instances from previous results. This allows optimizations to achieve faster convergence despite the increasing size of the map and APN. The iterative exploration pipeline is illustrated in Fig. 1, which consists primarily of two asynchronous processing loops. The first loop is dedicated to spatial mapping to allow continuous integration of the sensor measurement data, \(\mathbf{Z}_{t}\), at high frequency. Frontier detection is performed after each map update, which operates only on the state-changed voxels that resulted from the update. This minimizes the complexity required to maintain the global frontier set, and provides a constant upper complexity bound that remains independent of the total map size. The second loop concurrently performs DFR to update the APN, which then serves as the input for replanning the current exploration solution. Further details of each DFR subroutine are provided in Section VII.
Fig. 1: Mapping, frontier detection, and Differential Regulation process pipelines used to update the APN.
## VI Active Perception Network (APN) The Active Perception Network (APN) serves as a topological roadmap that stores the unified knowledge of the dynamically changing exploration state space. Its fundamental structure is represented by a hypergraph \[\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{C}), \tag{1}\] where \(\mathcal{V}=\{v_{i}\}_{i=1,\dots,n}\) is the set of graph nodes and \(\mathcal{E}=\{e_{u,v}\}_{u,v\in[1,n]}\) is the set of traversal edges between nodes. The nodes have a bijective mapping to a codomain of viewposes, \(\mathcal{V}\hookrightarrow\Lambda\), where the terms node and viewpose may also be used interchangeably. \(\mathcal{V}\) is decomposed by a set of hyperedges \(\mathcal{C}=\{\mathcal{H}\}\subset\mathcal{P}(\mathcal{V})\), where \(\mathcal{P}\) is the power set. Each hyperedge \(\mathcal{H}\subseteq\mathcal{V}\) contains a disjoint subset of \(\mathcal{V}\) as a multi-level hierarchy. **Graph nodes**, \(\mathcal{V}\): Each node \(v_{i}\in\mathcal{V}\) represents a viewpose information structure that consists of the tuple \[v_{i}=\{\mathbf{q}_{i},\gamma_{i},\mathbb{1}_{i}^{open}\}, \tag{2}\] where \(\mathbf{q}_{i}\) is its pose, which has an associated viewpose \(\mathbf{q}_{i}\mapsto\lambda_{i}\), and \(\gamma_{i}\in\mathbb{R}\) is a reward metric that quantifies the expected information gain available from \(\lambda_{i}\). 
The node's visitation state is stored by a Boolean indicator \(\mathbb{1}_{i}^{open}:v_{i}\mapsto\mathbb{B}\), corresponding to whether the robot has visited the pose of \(v_{i}\). A _true_ value indicates the node is unvisited, also referred to as _open_, and is otherwise referred to as _closed_ if it has already been visited. This is used to discriminate between the open set of nodes \(\mathcal{V}^{open}\) which can represent goal candidates, and the closed set \(\mathcal{V}^{closed}\) of nodes which have already been visited. Several important classifications are defined over \(\mathcal{V}\) based on their properties. These provide an increased understanding of how the network can serve different tasks. These are summarized as follows: * A unique node \(v^{agent}\in\mathcal{V}\), referred to as the _agent node_, is used to represent the robot and is dynamically updated with the robot pose as it changes over time. The robot's initial pose \(\mathbf{q}_{0}^{agent}\) is used to define the _home state_, represented by a unique node \(v^{home}\) that remains fixed over the lifetime of the APN. * The previously traversed path of the robot is represented by a path-connected set of _keyframe nodes_, \(\mathbf{q}_{0:t}^{agent}\mapsto\{v_{0:k}^{kf}\}\in\mathcal{V}^{kf}\), rooted at the home state, \(v_{0}^{kf}=v^{home}\). Keyframe nodes are added in intermediate intervals once the robot has traveled a minimum distance from the last keyframe. * Unvisited nodes with positive information gain are classified as _NBV candidate nodes_, represented by the set \(\mathcal{V}^{nbv}=\{v\in\mathcal{V}^{open}:\gamma(v)>0\}\). A Next Best View (NBV) node represents a subgoal candidate for for navigation and planning that is expected to increase map knowledge. * The remaining _traversal nodes_, \(\mathcal{V}^{\mathcal{X}}=\mathcal{V}\setminus\mathcal{V}^{nbv}\), mainly serve to preserve the accumulated knowledge of the reachability space and its connectivity, but not expected to increase map knowledge. **Graph edges**, \(\mathcal{E}\): Each edge \(e_{u,w}\in\mathcal{E}\) corresponds to the pair of nodes \(\langle v_{u},v_{w}\rangle\), and stores various analytical information of the traversal space between the pair as follows: \[e_{u,w}=\{d^{\mathbf{x}},d^{\psi},L,OBB,l^{\mathcal{O}},\mathbf{p}^{obs}\}, \tag{3}\] where \(d^{\mathbf{x}}\) and \(d^{\psi}\) are the Euclidean distance and the orientation angle distance, respectively, between \((v_{u},v_{w})\). \(L\) is the evaluated cost metric value to traverse the edge given the maximum velocity \(\mathbf{v}_{max}\) and yaw rate \(\dot{\psi}_{max}\), defined by: \[L(e_{u,w})=max\left(\frac{d^{\mathbf{x}}(e_{u,v})}{\mathbf{v}_{max}},\frac{d^{\psi}(e_ {u,v})}{\dot{\psi}_{max}}\right). \tag{4}\] Each edge also stores the Oriented Bounding Box (OBB) enclosing the endpoints, and the collision state of the space contained in the OBB is stored by \(l^{\mathcal{O}}:OBB\rightarrow\{free,unk,obs\}\). \(\mathbf{p}^{obs}\) is used as a memory cache that stores any uncertain voxels found from previous collision checks. This allows for lazy evaluation during future checks by first checking if these discrete voxels have changed, rather than the full OBB volume, to greatly reduce complexity. **Hyperedge clusters**: A set of hyperedges \(\mathcal{C}\subset\mathcal{P}(\mathcal{V})\) forms a topological decomposition of \(\mathcal{G}\), providing a representation with reduced size and complexity. 
A hyperedge \(\mathcal{H}\in\mathcal{C}\) represents a cluster of nodes \(\{v\}\subseteq\mathcal{V}\) grouped according to a similarity measure between the nodes, such that \(\mathcal{C}\) is a partitioning of \(\mathcal{V}\) into disjoint subsets \(\{\mathcal{H}\}\). Each hyperedge is modeled by the following: \[\mathcal{H}_{i}=\{\mathcal{V}_{i}^{\mathcal{C}},\mathcal{A}_{i},B_{i},\mathbf{x}_{ i}\}, \tag{5}\] where \(\mathcal{V}_{i}^{\mathcal{C}}\) is the set of nodes belonging to \(\mathcal{H}_{i}\), with the centroid of the contained nodes given by \(\mathbf{x}_{i}\) and its bounding volume given as \(B_{i}\). \(\mathcal{A}_{i}=\mathcal{G}[\mathcal{V}_{i}^{\mathcal{C}}]\) is the vertex-induced subgraph formed by each cluster, containing the clustered nodes \(v\in\mathcal{H}\) and the induced edges \((e_{u,w}\in\mathcal{E}:v_{u},v_{w}\in\mathcal{V}_{i}^{\mathcal{C}})\) with both endpoints belonging to \(\mathcal{A}_{i}\). The induced edges of a cluster subgraph \(\mathcal{A}\) are referred to as its _interior edges_, while the remaining edges that connect nodes between different clusters are referred to as _exterior edges_. The efficiency of global search queries and traversal through \(\mathcal{G}\) can be greatly increased by traversing between subgraphs using their exterior edges, using the interior edges of the subgraphs to perform local operations as needed.
## VII Differential Regulation The APN is incrementally built by the process of _Differential Regulation_ (DFR), which manages how information is added, removed, or modified in the APN with respect to the concurrently built spatial map. DFR evaluates the APN according to a set of objectives and constraints conditioned on the current map, and executes a set of modifying procedures on the APN as needed to ensure they remain satisfied as the map evolves. The broad purpose of the DFR procedures is to a) re-evaluate map-dependent analytical measures to ensure their accuracy (e.g. the information gain of existing nodes), b) add node and edge elements to increase the completeness of the network while pruning redundant or overcomplete elements, and c) recompute the topological clustering of the updated graph state. A diagram of these procedures is shown in Fig. 1, and they are detailed in the following subsections.
### _Reconditioning_ Each DFR cycle \(i\) begins at a time \(t\) with the latest spatial map \(\mathcal{M}_{t(i)}\), frontiers \(\mathcal{F}_{t(i)}\), and robot pose \(\mathbf{q}_{t(i)}^{agent}\). The first task is to determine the local differences of these variables relative to their states from the previous cycle \(t(i-1)\). Each incremental map update reports the set of state-changed voxels, which are accumulated in a local cache \(\Delta\mathcal{M}\) with its bounding volume \(\Delta B\). This is defined as the _local difference neighborhood_ and is used to inform various APN update procedures about where state changes have occurred, described further in the following subsections. Each regulation cycle then proceeds by updating the pose of the agent node \(v^{agent}\) and its local edges. The length of the local path is then checked and compared against a keyframe threshold distance. If the threshold is exceeded, a new keyframe view \(v^{kf}\) is created from \(v^{agent}\) and added to the keyframe set \(\mathcal{V}^{kf}\), with an edge connection to the previous keyframe to ensure that a connected path to the home location is always maintained. 
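The reconditioning step described above can be summarized by the following minimal Python sketch. It is purely illustrative: the `apn` object and its methods (`update_local_edges`, `add_node`, `add_edge`), the keyframe list, and all field names are assumptions introduced here for exposition, not the authors' implementation.

```python
import math
from dataclasses import dataclass, field
from typing import Optional, Set, Tuple

# Illustrative structures only: `Node` mirrors the tuple in (2), and
# `DiffNeighborhood` mirrors the local difference neighborhood (Delta M, Delta B).

@dataclass
class Node:
    pose: Tuple[float, ...]        # (x, y, z, roll, pitch, yaw)
    gain: float = 0.0              # expected information gain, gamma_i
    is_open: bool = True           # True = unvisited (open), False = visited (closed)

@dataclass
class DiffNeighborhood:
    changed_voxels: Set[Tuple[int, int, int]] = field(default_factory=set)  # Delta M
    bbox: Optional[Tuple[Tuple[float, ...], Tuple[float, ...]]] = None      # Delta B

def recondition(apn, diff: DiffNeighborhood, robot_pose, keyframe_dist: float):
    """One reconditioning step at the start of a DFR cycle (hypothetical `apn` API)."""
    # 1) Update the agent node with the latest robot pose and its local edges.
    apn.agent.pose = robot_pose
    apn.update_local_edges(apn.agent)

    # 2) Add a keyframe once the robot has moved far enough from the previous one,
    #    keeping a connected path back to the home node.
    last_kf = apn.keyframes[-1]
    if math.dist(robot_pose[:3], last_kf.pose[:3]) > keyframe_dist:
        kf = Node(pose=robot_pose, is_open=False)
        apn.add_node(kf)
        apn.add_edge(last_kf, kf)
        apn.keyframes.append(kf)

    # 3) Hand the accumulated difference neighborhood to the remaining DFR procedures
    #    (visibility reconditioning, sampling, pruning) and reset the cache.
    local_changes, local_bbox = diff.changed_voxels, diff.bbox
    diff.changed_voxels, diff.bbox = set(), None
    return local_changes, local_bbox
```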
### _View Analysis and Coverage Sampling_ \(\mathcal{V}^{nbv}\) represents the set of NBV subgoal candidates expected to observe currently unknown voxels, such that map coverage will be increased if a subgoal is visited by the robot. To support the purposes of non-myopic planning, \(\mathcal{V}^{nbv}\) should be sufficiently distributed to provide maximum coverage of the unknown map space. Additionally, maximum coverage should be achieved using a minimal size of \(\mathcal{V}^{nbv}\) to reduce the eventual planning complexity, which can increase exponentially with the number of views considered. A sampling-based approach is used to incrementally build \(\mathcal{V}^{nbv}\) to maintain maximum coverage as the map evolves. To efficiently and scalably achieve the aforementioned characteristics desired of \(\mathcal{V}^{nbv}\), we introduce an approach using a frontier-based heuristic to evaluate information gain and also guide the sampling of additional views. **Information gain analysis:** A common approach in the literature to evaluate the expected information gain of a viewpose is by tracing the voxels along a dense set of raycasts within the view's FoV, projected from its origin. This has a high computational cost that can become prohibitive when evaluating many views and as the map resolution increases. Additionally, it is difficult to efficiently determine the visible information overlap between different views, such that information gain is usually treated as an independent measure between views. This prevents an understanding of the unique or redundant coverage within a set of views, or how efficiently they cover the given map. To mitigate these drawbacks, we directly use the frontier voxels within a view's FoV to constrain the evaluation of information gain. Given that a voxel along a ray is only considered visible if no occupied voxels precede it, it can be inferred that the first unknown voxel traversed by a ray must be preceded by a free voxel to satisfy the visibility conditions. This transition from a free to an unknown voxel naturally represents a frontier boundary, allowing a precondition to be defined that any raycast capable of containing information gain must at some point cross a frontier boundary. This allows the subset of raycasts that may contain some gain to be quickly identified based on the visible frontiers, which can greatly reduce the number of discrete raycast operations considered per view. A _visibility map_ \(\Gamma:\mathcal{V}\rightarrow\mathcal{F}\) is used to store the visible frontier features of each viewpose: \[\Gamma(\lambda)=\{f\in\mathcal{F}:Vis(\mathbf{m}_{f},\lambda)\}, \tag{6}\] where \(\mathbf{m}_{f}\) is the voxel associated with \(f\), and \(Vis\) is an indicator function returning true if \(\mathbf{m}_{f}\) is visible from \(\lambda\). An _inverse visibility map_ \(\Upsilon:\mathcal{F}\rightarrow\mathcal{V}\) represents the preimage of \(\Gamma\), storing the viewposes from which each frontier is visible as \[\Upsilon(f)=\{\lambda\in\Lambda:Vis(\mathbf{m}_{f},\lambda)\}. \tag{7}\]
Fig. 2: Depictions of the APN composition.
The _individual gain_, \(\mathcal{K}\), of a view \(\lambda\) refers to the independent amount of unknown space visible from the view. This measure can be lower bounded by the number of visible frontiers, \(\mathcal{K}:\Lambda\mapsto|\Gamma(\lambda)|\), since each frontier corresponds to an unknown voxel location. The _joint gain_, \(\mathcal{J}\), refers to the unique information collectively visible from a set of views. 
These can be respectively formulated as follows: \[\mathcal{K}(\lambda)=|\Gamma(\lambda)|, \tag{8}\] \[\mathcal{J}(\Lambda)=|\bigcup_{\lambda\in\Lambda}\Gamma(\lambda)|. \tag{9}\] The _exclusive gain_, \(\mathcal{I}\), of a view \(\lambda\) refers to its unique contribution to the joint gain, or, in other words, the exclusive information visible by \(\lambda\) that is not visible by any other view in \(\Lambda\). \(\mathcal{I}\) can be determined according to the visible frontiers of \(\Gamma(\lambda)\) that are only observed by \(\lambda\). This can be efficiently computed in time linear in the number of visible frontiers by: \[\mathcal{I}(\lambda)=|\{f\in\Gamma(\lambda)\ :\ |\Upsilon(f)|=1\}|. \tag{10}\] **Coverage view sampling:** An iterative objective of DFR is to ensure that maximum coverage of the current unknown space is maintained. \(\Upsilon\) supports evaluation of the coverage completeness of the unknown map space by the current views \(\Lambda\). Let \(\mathcal{F}^{cvr}\) represent the set of covered frontiers, where a frontier is considered covered if it has at least one covering view able to observe it according to \(\Upsilon\). The residual set of non-covered frontiers is represented as \(\overline{\mathcal{F}}^{cvr}=\mathcal{F}\setminus\mathcal{F}^{cvr}\), and the global coverage completeness is evaluated by the fraction of covered frontiers, \(|\mathcal{F}^{cvr}|/|\mathcal{F}|\). The iterative coverage maximization objective can be formulated as: \[\max\frac{|\mathcal{F}^{cvr}|}{|\mathcal{F}|}=\max\big|\bigcup_{f\in \mathcal{F}}\{f\ |\ \exists\lambda\in\Lambda,Vis(\mathbf{m}_{f},\lambda)\}\big|. \tag{11}\] A frontier-guided sampling strategy is presented to perform the maximization of (11) by iteratively sampling viewposes to observe the non-covered frontiers. This effort is concentrated within \(\Delta B\), which contains the most recent changes to the frontier distribution. Given the high complexity potentially involved in the sampling procedure, a performance tuning parameter \(p^{\lambda}_{local}\in(0,1]\) is provided, representing a probability threshold used to select a random subset of the frontiers in \(\Delta B\) to be considered for sampling in the current cycle. A second parameter \(p^{\lambda}_{global}\in(0,1]\) is provided which serves a similar purpose as \(p^{\lambda}_{local}\), but is applied to any non-covered frontiers that lie outside of \(\Delta B\). This is to account for possible frontiers that were not successfully covered in a finite number of attempts during previous DFR cycles, which can result when large amounts of occupied or unknown space exist near a frontier. The difficulty in finding a feasible viewpose can greatly increase for these cases, and in some cases one may not exist with the available map knowledge. Given the increased difficulty, \(p^{\lambda}_{global}\) is given a lesser value than \(p^{\lambda}_{local}\), allowing the search effort to persist between DFR cycles but with lower priority. In effect, this offers a degree of probabilistic completeness, as the likelihood of finding a valid sample, if one exists, can continually increase over time while reducing the individual search effort per DFR cycle. The sampling procedure is given in Alg. 1, which begins by calling \(reconditionVisibility\) to update the visible information of existing views within the changed volume. Between cycles, the frontier boundaries are often pushed back by only a small amount, but remain visible within many of the same views as in the previous cycle. 
This step ensures these differences are updated, so sampling is only needed when frontiers are pushed beyond visibility of all existing views. ``` 1\(reconditionVisibility(\mathcal{G},\Delta B)\) 2\(\widehat{\mathcal{F}}\gets frontierQueueInit(\mathcal{F}^{cvr},\Delta B,p^{ \lambda}_{local},p^{\lambda}_{global})\) 3while\(\widehat{\mathcal{F}}\neq\emptyset\)do 4\(f_{i}\gets extractNext(\widehat{\mathcal{F}})\) 5\(B_{f_{i}}\gets getSamplingVol(f_{i},d^{sense}_{max},\alpha_{s})\) 6\(success\gets false\), \(n\gets 0\) 7while\(n<N^{attempt}_{nbv}\ \&\ \neg success\)do 8\(\tilde{\mathbf{q}}\gets getCoverageSample(B_{f_{i}})\) 9if\(isValidSample(\tilde{\mathbf{q}})\)then 10\(\lambda_{j}=addNode(\tilde{\mathbf{q}},\mathcal{G})\) 11\(\mathcal{F}^{vis}_{\lambda_{j}}\gets computeVisible(\lambda_{j},f_{i})\) 12\(updateVisibility(\lambda_{j},\mathcal{F}^{vis}_{\lambda_{j}})\) 13\(\widehat{\mathcal{F}}=\widehat{\mathcal{F}}\setminus\mathcal{F}^{vis}_{\lambda_ {j}}\) 14\(success\gets true\) 15 16 end if 17\(n=n+1\) 18 19 end while 20 21 end while ``` **Algorithm 1**Frontier-guided view sampling for information gain maximization Next, a frontier queue \(\widehat{\mathcal{F}}\) is initialized containing the selected subsets from \(\mathcal{F}^{cvr}\). For each \(f_{i}\in\widehat{\mathcal{F}}\), a sampling subspace \(B_{f_{i}}\) is computed from which \(f_{i}\) can potentially be observed given the sensing parameters. For a maximum of \(N^{attempt}_{nbv}\) attempts, viewposes are randomly sampled using \(getCoverageSample\) and checked by \(isValidSample\) to determine if a valid sample has been found. A sample is considered valid only if it is collision-free and successfully observes the current frontier target, \(f_{i}\). Upon finding a valid sample, it is used to add a new node to the network, and all of its visible frontiers are computed to update the visibility map. If any of these frontiers are contained in \(\widehat{\mathcal{F}}\), they are removed since they have been already covered by the current sample. This can greatly reduce the number of samples, since in practice a single view will often be able to observe many nearby frontiers. ### _Pruning and Refinement_ The growth rate of the network is reduced by pruning unnecessary views that no longer provide any individual gain contribution, and redundant views with little or no exclusive information gain. These conditions naturally occur as the robot progresses its exploration of the map and observes the previously unknown space within each view. They also occur as a result when new view samples are added to the network which overlap with the pre-existing views, decreasing their exclusive gain. The goal is to identify the views that can be removed from the network without loss of the overall joint gain. The joint gain and exclusive gain measures are used to formulate the _pruning objective_ as a submodularity maximization problem. Given an initial set of views \(\Lambda\), pruning can be described as finding the minimum subset of views \(\Lambda^{*}\) that achieves the same total joint gain as \(\Lambda\), as follows: \[\underset{\Lambda^{*}\subseteq\Lambda}{\text{argmin}}(\Lambda^{*}),\] (12) s.t. \[\mathcal{J}(\Lambda)-\mathcal{J}(\Lambda^{*})\approx 0.\] To solve (12), a set of pruning candidates is found by searching for views that have negligible individual or exclusive information gain. 
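The candidate search just described follows directly from the gain measures (8)-(10) when \(\Gamma\) and \(\Upsilon\) are kept as set-valued maps. The sketch below is a simplified illustration under that assumption (plain Python dictionaries, a black-box `visible` query, no spatial indexing), not the actual implementation:

```python
from collections import defaultdict

def build_visibility_maps(views, frontiers, visible):
    """Gamma: view -> visible frontiers; Upsilon: frontier -> covering views.
    `visible(frontier, view)` is an assumed black-box visibility query."""
    gamma, upsilon = {}, defaultdict(set)
    for v in views:
        gamma[v] = {f for f in frontiers if visible(f, v)}
        for f in gamma[v]:
            upsilon[f].add(v)
    return gamma, upsilon

def individual_gain(gamma, v):          # Eq. (8): |Gamma(v)|
    return len(gamma[v])

def joint_gain(gamma, views):           # Eq. (9): size of the union of Gamma(v)
    return len(set().union(*(gamma[v] for v in views)))

def exclusive_gain(gamma, upsilon, v):  # Eq. (10): frontiers seen only by v
    return sum(1 for f in gamma[v] if len(upsilon[f]) == 1)

def pruning_candidates(gamma, upsilon, views):
    """Views whose removal cannot reduce the joint gain (cf. Eq. (12)):
    zero individual gain or zero exclusive gain."""
    return [v for v in views
            if individual_gain(gamma, v) == 0
            or exclusive_gain(gamma, upsilon, v) == 0]
```

Because \(\Upsilon\) already records how many views cover each frontier, the exclusive gain is obtained in a single pass over \(\Gamma(\lambda)\), which is what keeps the per-cycle pruning test inexpensive.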
Given the local difference neighborhood \(\Delta B\), the search is restricted to the views located within visible range \(d_{max}^{sense}\) of \(\Delta B\), corresponding to the views with visibility information that was potentially effected by the map changes. The candidates within this region are further evaluated for their edge connectivity. Any candidate found to have a cut-edge is preserved to maintain the graph connectivity, while the remainder are deleted. Once the pruning stage is complete, the coverage views of \(\mathcal{V}^{nbv}\) represent the supremal set that maximizes map coverage using a minimal number of views. Not only does this help to reduce the total size, but the minimization of redundant coverage helps to simplify the planning problem. Since each NBV has some positive amount of exclusive gain after pruning, they represent an exact set of targets that a planner must determine how to optimally visit, without the need to evaluate their redundancy during its search. ### _Reachability Update_ The reachability knowledge represented by \(\mathcal{E}\) is updated each iteration to account for new map knowledge and any state changes in \(\mathcal{V}\). Additional nodes are also sampled during this stage to increase the overall node density and uniformity in \(\mathcal{V}^{\mathcal{X}}\). This accounts for the non-uniformity of coverage view sampling, which is biased towards the frontier boundaries. Since the purpose of \(\mathcal{V}^{\mathcal{X}}\) is primarily to increase the network connectivity, only the position of these samples is needed, while the visible information and pose orientation attributes can be ignored. The pseudocode for the reachability update procedure is shown in Alg. 2, which contains two primary stages. The first stage samples traversal nodes to increase the distribution density within the graph, and the second stage increases the total edge density. ``` 1\(\widehat{B}\gets getSearchVolume(\Delta B,d_{traversal}^{sample})\) // Stage 1: increase node density 2\(i,n\gets 0\) 3while\(i<N_{traversal}^{attempt}\And n<N_{traversal}^{sample}\)do 4\(\mathbf{x}_{i}\gets generateReachabilitySample(\widehat{B})\) 5\(\mathbf{x}_{near}\gets findNearest(\mathbf{x}_{i},\Lambda)\) 6if\(distance(\mathbf{x}_{i},\mathbf{x}_{near})>d_{traversal}^{sample}\)then 7 addNode\((\mathcal{G},\mathbf{x}_{i})\) 8\(n=n+1\) 9 end if 10\(i=i+1\) 11 end while// Stage 2: increase edge density 12\(\mathcal{E}_{local}\gets getUncertainEdgePairs(\widehat{B},p_{update}^{e})\) 13for\(e_{i}\in\mathcal{E}_{local}\)do 14\(OBB,I^{\mathcal{O}},\mathbf{p}^{obs}\gets computeEdgeState(e_{i})\) 15if\(I^{\mathcal{O}}\)= freethen 16 addEdge\((\mathcal{G},e_{i})\) 17elseif\(I^{\mathcal{O}}\)= unkthen 18\(cacheUncertainEdge(\mathcal{G},e_{i},OBB,I^{\mathcal{O}},\mathbf{p}^{obs})\) 19elseif\(I^{\mathcal{O}}\)= occ then 20cacheCollisionEdge\((\mathcal{G},e_{i})\) 21 end if 22 23 end while ``` **Algorithm 2**Reachability expansion algorithm. In the first stage, collision-free positions are uniformly sampled from \(\widehat{B}\), for a maximum of \(N_{traversal}^{attempt}\) attempts, or until a threshold of \(N_{traversal}^{sample}\) samples are accepted. 
Each sample is evaluated according to the distance of its nearest neighbor in \(\Lambda\), and compared against a threshold distance, \(d_{traversal}^{sample}\). \(d_{traversal}^{sample}\) serves as a density constraint to prevent too many samples from being added in close proximity, which would unnecessarily increase the size complexity of the graph while adding little or no additional reachability knowledge. A sample is accepted if its nearest neighbor distance is greater than \(d_{traversal}^{sample}\), and a new node is added to the graph using the sampled position. The second stage begins by extracting the local set of candidate edge pairs \(\mathcal{E}_{local}\) using the function \(getUncertainEdgePairs\). This procedure searches \(\widehat{B}\) to find the set of node pairs \((v_{u},v_{w})\) such that the collision state of the corresponding edge \(e_{u,w}\) is either null or unknown. Here, a null edge indicates the edge does not exist (i.e. it has not been evaluated in any DFR cycle), while unknown refers to an edge found with an uncertain collision state from a previous DFR cycle. A parameter \(p_{update}^{e}\in[0,1)\) is used to specify a random probability threshold of whether to evaluate a candidate node pair \((v_{u},v_{w})\). This helps to limit the number of edge evaluation operations that occur per cycle, similar to the parameter \(p_{local}^{\lambda}\) used for coverage view sampling. Each edge is evaluated by \(computeEdgeState\) to determine its collision state data, which leverages previously cached results if available. Since edges may be evaluated between any nodes over any distance within \(\widehat{B}\), the cached collision data can significantly reduce the update complexity. If an occupied collision is found, the edge is added to the cache of collision edges to prevent future evaluation. For unknown voxel collisions, the edge is added to the cache of uncertain edges along with the intermediate collision data results to accelerate future re-evaluation. Otherwise, the edge is added to the graph by \(addEdge\), which computes and stores its associated cost information according to (3) for efficient lookup by other procedures and planning.
### _Topological Clustering_ The graph nodes are decomposed into a set of subgraph regions represented by the hyperedges \(\mathcal{C}\), as illustrated in Fig. (b)b. \(\mathcal{C}\) serves as a topological hierarchy over \(\mathcal{G}\) to reduce its size complexity. This representation can be utilized to increase the efficiency of search, traversal, and other operations. A tradeoff occurs where greater reductions in size complexity also result in a reduced level of detail (LoD), i.e. resolution. To compute the hyperedges, we use a density-based clustering approach based on [32, 33], extended to leverage both the geometric and reachability knowledge already present in the APN. The algorithm uses two parameters, \(D_{c}\) and \(\rho_{c}\), where \(D_{c}\) defines the neighborhood distance threshold, and \(\rho_{c}\) defines a density threshold for the neighborhood. Let a node \(v_{p}\) be defined as a _core node_ if it has at least \(\rho_{c}\) edge-connected neighbors within distance \(D_{c}\). A node \(v_{q}\) is then defined as a _reachable node_ from \(v_{p}\) only if there exists an edge connection between \(v_{p}\) and \(v_{q}\), and \(v_{q}\) is within distance \(D_{c}\) from \(v_{p}\). Given a core node \(v_{p}\), a cluster is formed by all nodes reachable from \(v_{p}\). 
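A minimal sketch of this core-node expansion is given below; it assumes the APN exposes an adjacency query `neighbors(v)` and a distance function `dist(u, v)`, and all other names are illustrative rather than the actual implementation:

```python
def cluster_apn(nodes, neighbors, dist, D_c, rho_c):
    """Density-based clustering over APN nodes.
    `neighbors(v)` yields the edge-connected nodes of v; `dist(u, v)` is their distance."""
    def close_neighbors(v):
        return [u for u in neighbors(v) if dist(v, u) <= D_c]

    # Core nodes: at least rho_c edge-connected neighbors within distance D_c.
    core = {v for v in nodes if len(close_neighbors(v)) >= rho_c}

    assigned, clusters = set(), []
    for seed in core:
        if seed in assigned:
            continue
        # Grow a cluster by expanding through nodes reachable from core members.
        cluster, stack = set(), [seed]
        while stack:
            v = stack.pop()
            if v in assigned:
                continue
            assigned.add(v)
            cluster.add(v)
            if v in core:  # only core nodes propagate reachability further
                stack.extend(u for u in close_neighbors(v) if u not in assigned)
        clusters.append(cluster)
    return clusters, core
```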
Any remaining nodes that are neither core nodes nor reachable from a core node are assigned as singleton clusters. This approach allows clusters to form more naturally by additionally considering the edge connectivity between points. They are also not required to be geometrically convex as with other clustering approaches. This enables fewer clusters to be formed, since they can be better fit to the nodes over arbitrarily shaped space. Explicit constraints on the maximum number of clusters or their size are also not necessary, such that clusters can conform to the map with variable size and density, which can effectively handle environments where different regions may have different geometric characteristics and complexities. ## VIII Hierarchical Evolutionary View Planning The iteratively updated APN provides a generalized representation of the exploration state space, which can be utilized by any graph-based planning strategy for global and local planning. In this section, we present an anytime planning approach referred to as the APN Planner (APN-P), which leverages the hierarchical decomposition of the APN to plan a global sequence over the topological subgraph regions. A second planning stage then optimizes the low-level view path for the first subgraph of the topological sequence. Each stage is formulated as a Fixed-Ended Open Traveling Salesman Problem (FEOTPSP), solved using an evolutionary optimization approach to determine the optimal sequence orders. A visualization of this procedure is displayed in Fig. 3. Let \(\mathcal{J}^{\mathcal{H}}\subset\mathbb{N}\) be an index set that enumerates the clusters \(\mathcal{C}\). A cost matrix \(\mathbf{m}^{\mathcal{H}}\) is computed by finding the shortest path between the centroid views of each pair of clusters. Given the pairwise cost \(s(u,w)\in\mathbf{m}^{\mathcal{H}}\) between cluster indices \(u,w\in\mathcal{J}^{\mathcal{H}}\), the cluster planning objective is to find the minimum cost permutation \(\Pi^{\mathcal{H}}\in\mathbf{S}_{n}(\mathcal{J}^{\mathcal{H}})\) of the indices \(\mathcal{J}^{\mathcal{H}}\), where \(\mathbf{S}_{n}(\mathcal{J}^{\mathcal{H}})\) is the symmetric group of \(\mathcal{J}^{\mathcal{H}}\). Given first cluster \(\mathcal{H}_{0}\) of \(\Pi^{\mathcal{H}}\), and its induced subgraph \(\mathcal{G}[\mathcal{H}_{0}]\), the view planning procedure is similarly formulated. Given an index set \(\mathcal{J}^{\Lambda}\subset\mathbb{N}\) enumerating the NBVs \(\{\lambda\}\in\mathcal{G}[\mathcal{H}_{0}]\), a pairwise cost matrix \(\mathbf{m}^{\Lambda}\) between index pairs \(u,w\in\mathcal{J}^{\Lambda}\) can be obtained directly from the existing edge costs. The view path planning objective is then to find the minimum cost permutation \(\Pi^{\lambda}=(v^{agent},\lambda_{\mathcal{J}^{\Lambda}_{0}},\cdots,\lambda_{ \mathcal{J}^{\Lambda}_{n}})\), which begins at the current robot configuration \(v^{agent}\) and visits each NBV node of the target cluster. The optimized sequences are preserved in a data cache allowing them to be used to re-initialize subsequent planning cycles. Given a planning cycle \(i\) and target cluster \(\mathcal{G}[\mathcal{H}_{0}]\), the current solution \(\Pi_{i}\) is initialized from \(\Pi_{i-1}\) by first filtering out any invalid views \(\Pi_{i-1}\setminus\mathcal{G}[\mathcal{H}_{0}]\) that do not belong to the current cluster. 
The relative order over the common subset \(\Pi_{i-1}\cap\mathcal{G}[\mathcal{H}_{0}]\) is preserved, and any additional views \(\mathcal{G}[\mathcal{H}_{0}]\setminus\Pi_{i-1}\) are inserted using local search to estimate their optimal sequence positions.
Fig. 3: Visual depiction of the hierarchical planning strategy. The first stage computes the global path (dark red arrows) over the node clusters, with the start fixed to the robot location and the end fixed to the home location. The second stage optimizes the NBV sequence (depicted using tan arrows) within the first cluster of the global sequence (green bounding box).
Sequence optimization is performed using a memetic evolutionary algorithm [34]. A population of \(P_{n}\) candidates, or individuals, is initialized by randomized permutations of \(\hat{\Pi}\). For a maximum of \(N_{g}\) generations, the population is optimized using a pairwise swap mutation and partially mapped crossover (PMX) [35]; a simplified sketch of this optimization loop is given below. The procedure terminates once \(N_{g}\) generations have elapsed, or an improved solution cannot be found after \(N_{stall}\) generations. Once the exploration plan optimization is complete, the first view of the local sequence represents the navigation goal, \(\lambda_{g}\). If this goal is different from the previous goal, the cost of their respective sequences is compared to determine whether to accept or reject the new goal, penalizing significant changes in the direction of motion. Once the appropriate goal is selected, its trajectory is computed with \(RRT^{*}\)[36], using the APN to find the shortest path to initialize the trajectory planner. Exploration terminates once no frontiers remain, or no reachable views can be found for the remaining frontiers. Given that solutions tend to become more optimal with more generations, the computational efficiency of DFR directly impacts the planning quality. This can also have a compounding effect, since the planning convergence rate can be increased with the increased optimality of prior solutions used to initialize subsequent instances.
## IX Evaluation The APN and APN-P were evaluated through ROS-based simulations using Gazebo [37] and the RotorS MAV simulation framework [38]. The AscTec Firefly MAV model provided by RotorS was used to simulate the robot dynamics and control systems, and was equipped with a stereo depth sensor for visual perception. The simulations and all algorithms were executed using a single laptop computer with an Intel Core i7 2.6 GHz processor and 16 GB RAM. The test results were used to analyze the computational performance and planning efficiency of the proposed approach. Exploration was tested using several different 3D structure models with various scales as displayed in Fig. 4, with a visual comparison of their relative scales shown in Fig. 4e. In addition to varying sizes, each environment provides different characteristics for evaluation, such as obstacle density, narrow spaces as opposed to open space, dead-ends, and overall geometric complexity. A video presentation demonstrating the operation and performance of the APN-P is included with this work. The simulation environment is used to visualize the concepts of operation as they are executed. The Apartment scenario is used in the video presentation to demonstrate the real-time operation of the full exploration procedure while visualizing the APN's dynamically changing structure. 
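Before turning to the quantitative results, the two-stage sequence optimization described in Section VIII can be made concrete with the simplified sketch below. It assumes a precomputed pairwise cost matrix (either \(\mathbf{m}^{\mathcal{H}}\) between cluster centroids or \(\mathbf{m}^{\Lambda}\) between NBVs), fixes the sequence start to the robot index, and keeps population handling deliberately minimal; all names and the replacement scheme are illustrative assumptions rather than the authors' implementation.

```python
import random

def seq_cost(seq, cost):
    """Cost of a fixed-start, open visiting sequence under a pairwise cost matrix."""
    return sum(cost[a][b] for a, b in zip(seq, seq[1:]))

def pmx(p1, p2):
    """Partially mapped crossover (PMX) between two permutations of equal length."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    for k in range(i, j):                       # place conflicting genes from p2
        g = p2[k]
        if g not in child[i:j]:
            pos = k
            while i <= pos < j:
                pos = p2.index(p1[pos])
            child[pos] = g
    for k in range(n):                          # copy the remainder from p2
        if child[k] is None:
            child[k] = p2[k]
    return child

def optimize_sequence(indices, cost, start, pop_size=30, gens=200, stall=30):
    """Evolve a visiting order over `indices`, with the first element fixed to `start`."""
    def random_individual():
        perm = list(indices)
        random.shuffle(perm)
        return perm

    fitness = lambda p: seq_cost([start] + p, cost)
    pop = [random_individual() for _ in range(pop_size)]
    best, stalled = min(pop, key=fitness), 0
    for _ in range(gens):
        child = pmx(*random.sample(pop, 2))
        a, b = random.sample(range(len(child)), 2)   # pairwise swap mutation
        child[a], child[b] = child[b], child[a]
        pop.sort(key=fitness)
        pop[-1] = child                              # replace the worst individual
        current = min(pop, key=fitness)
        if fitness(current) < fitness(best):
            best, stalled = current, 0
        else:
            stalled += 1
            if stalled >= stall:                     # early stop after N_stall generations
                break
    return [start] + best
```

The same routine can serve both hierarchy levels, since only the cost matrix and the index set differ between the cluster-planning and view-planning stages.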
To account for the stochastic nature of the approach, each scenario was run 5 times and statistical analysis was computed over a variety of performance metrics, summarized in Table I. The average total exploration runtime required to complete the exploration task is denoted as \(\overline{T}\), and \(\bar{t}^{cycle}\) refers to the average computational time required per update and planning cycle. A maximum exploration time limit of \(T_{max}=14\)min (\(840\)s) was imposed, which is the maximum rated flight time for the AscTec Firefly. If this threshold is exceeded, exploration immediately terminates and failure is reported. The total map coverage is given as the ratio \(\vartheta_{\mathcal{M}}\) of the number of surface voxels \(\mathcal{M}^{occ}\) discovered during exploration with respect to a ground truth set \(\widehat{\mathcal{M}}^{occ}\) of all visible surface voxels. \(\widehat{\mathcal{M}}^{occ}\) was determined by manually guiding the robot through each world scenario, carefully ensuring every observable surface was covered by the sensor. The total volumetric exploration rate is given as \(\eta_{\mathcal{M}}\), which is the average volume of new information gain per second in m\({}^{3}\)/s. Since the objective is to achieve complete surface coverage, a more useful metric is \(\eta_{\mathcal{M}^{occ}}\), which refers to the rate of occupied information gain in m\({}^{3}\)/s. The APN is evaluated according to its average node density \(\vartheta_{\mathcal{V}}\) and edge density \(\vartheta_{\mathcal{E}}\). Here, node density refers to the number of nodes within a standard unit of volume, normalized as the number of nodes per \(100\)m\({}^{3}\) of the mapped free space. Edge density refers to the ratio between the known edges \(|\mathcal{E}|\) and the total edge capacity of a complete edge set over the nodes, \(\binom{|\mathcal{V}|}{2}\). \(\vartheta_{\mathcal{V}}\) and \(\vartheta_{\mathcal{E}}\) are given as the average over all cycles of the test scenario.
\begin{table} \begin{tabular}{|l|c c c c|} \hline & \multicolumn{4}{c|}{Scenario} \\ \hline Param & Apt. & Maze & Ind. Plant & Warehouse \\ \hline \(r_{\mathcal{M}}\) & \(\{0.1,0.2,0.4\}\) & \(\{0.1,0.2\}\) & \(\{0.2\}\) & \(\{0.4\}\) \\ \hline \(d_{safe}\) & \multicolumn{4}{c|}{\(0.75\)m} \\ \hline \(\boldsymbol{\upsilon}_{max}\) & \(1.0\)m/s & \(2.0\)m/s & \(2.5\)m/s & \(3.0\)m/s \\ \hline \(\dot{\psi}_{max}\) & \multicolumn{4}{c|}{\(0.75\)rad/s} \\ \hline \(d_{max}^{sense}\) & \(5\)m & \(6\)m & \(7\)m & \(9\)m \\ \hline \(\alpha_{v},\alpha_{h}\) & [60\({}^{\circ}\), 90\({}^{\circ}\)] & [60\({}^{\circ}\), 90\({}^{\circ}\)] & [75\({}^{\circ}\), 115\({}^{\circ}\)] & [75\({}^{\circ}\), 115\({}^{\circ}\)] \\ \hline \end{tabular} \end{table} TABLE II: Summary of common configuration parameters.
The following baseline approaches were used for comparative analysis with the APN-P:
* RH-NBVP [16]: A receding horizon method that finds informative view paths using RRT-based expansion within a local region of the robot.
* AEP [17]: An approach that extends the strategy of RH-NBVP, using RH-NBVP for local planning and frontier-based planning for global search when local planning fails to find informative views.
* FFI [18]: A hybrid frontier-based and sampling-based approach that uses an efficient frontier clustering strategy to guide the sampling of views.
* Rapid [19]: An extension of frontier-based planning 
designed to maintain the fastest allowable velocity by guiding towards frontiers within the sensors current field of view, and using classical frontier planning when no visible frontiers are available. A summary of common parameters for the different scenarios is shown in Table II, which were selected as consistently as possible to the baseline approaches. The map resolution \(r_{\mathcal{M}}\) was varied between the values \(\{0.1,0.2,0.4\}\)m to analyze its effects on performance scalability. The maximum linear velocity \(\boldsymbol{v}_{max}\) and yaw rate \(\dot{\psi}_{max}\) were assigned based on the common values used in the comparative approaches, along with the sensing parameters \(d_{max}^{sense}\) and \((\alpha_{v},\alpha_{h})\). Coverage view sampling parameters related to Alg. 1 were set as \(p_{local}^{\lambda}=0.8\), \(p_{global}^{\lambda}=0.1\), and \(N_{nbv}^{attemp}=30\) for each scenario. The reachability update parameters for Alg. 2 for each scenario were commonly set to \(N_{traversal}^{sample}=3\), \(p_{update}^{e}=0.7\), and \(d_{traversal}^{sample}=2.0\)m. ### _Apartment Scenario_ The apartment scenario in Fig. 3(a) is a relatively small scale interior space with the dimensions \(20\times 10\times 3(\text{m}^{3})\), used as a baseline for comparing the larger and more complex scenarios. An example map reconstruction by APN-P is shown in Fig. 4(a) with the traced exploration path, and the APN roadmap is shown in Fig. 4(b). The average distance traveled was \(76.5\)m, and a surface coverage completeness of \(\vartheta_{\mathcal{M}}=100\%\) was consistently achieved at each evaluated map resolution. Fig. 5(a) shows an example of the explored map volume over time using resolution \(0.2\)m for reference. The surface coverage rate \(\eta_{\mathcal{M}^{ace}}\) was \(1.5\text{m}^{3}/\text{s}\) and \(2.6\text{m}^{3}/\text{s}\) for the respective map resolutions of \(0.1\)m and \(0.2\)m. Since there are multiple dead-end regions for this scenario, some amount of backtracking is unavoidable, where the effects of backtracking correspond to the periods in Fig. 5(a) where the map growth briefly stagnates (e.g. around the 30s timestamp). Fig. 4: Visualization of each evaluated world scenario. The relative scale of each scenario is depicted in 3(e) according to their bounding box dimensions, where red represents the Apartment (slightly offset from the origin for visual clarity), blue represents the Maze, grey represents the Industrial Plant, and green represents the Warehouse. Fig. 5: Exploration results for the Maze Scenario. (a): The explored path is plotted in red, with intermediate keyframe configurations represented by yellow points. (b): The APN nodes and edges overlayed in blue. The size growth of the APN over time shown in Fig. 5(e). Compared to the map scale in Fig. 5(a), the APN is significantly smaller and its growth over time is non-monotonic due to iterative pruning and refinements. The final state of the APN roadmap is shown in Fig. 4(b), which can be seen to expand throughout the reachable free-space at a sufficient density for planning and navigation. Fig. 6(a) shows representative results of the computation times per cycle, using map resolution 0.2m as reference. The time taken for DFR remains fairly consistent over time despite the increasing map size. This demonstrates the effectiveness of the difference-aware update procedures at constraining the complexity as the map grows. A statistical boxplot of the respective procedures executed per cycle is shown in Fig. 
6(e). The majority of computation time per cycle was spent on view planning, which had a median value of \(13.6\)ms. The time spent on global cluster planning was negligible due to the relatively small size and complexity of this environment. The APN contained an average of only \(1.2\) clusters, resulting in a trivial instance of cluster sequence optimization. The computation times for all differential regulation procedures were minimal compared to planning, given the relatively simple environment. The time performance compared with the other methods is summarized in Table III. At the lowest map resolution of \(0.4\)m, the APN-P achieved an average total exploration time of \(\overline{T}=52.9\)s\(\pm\)\(4.3\)s, and an average computation time per iteration of \(\overline{t}^{cycle}=14.0\pm 8.0\)ms. Using a map resolution of 0.2m, the average exploration time was \(57.9\)s with \(18.9\pm 9.1\)ms per cycle. At the highest map resolution of \(0.1\)m, the average exploration time was \(69.4\)s with \(28.9\pm 18.5\)ms per cycle. The RH-NBVP approach required the highest total exploration time of 501.9s, with an average computation time per iteration of 153ms. For AEP, the total exploration time for each resolution was reported to take approximately 200s on average (exact quantities were not specified), with an average computation time per iteration of 98ms. FFI reported the fastest exploration time of the compared methods, with a total time of 80s and 151s for the respective map resolutions 0.4m and 0.1m. It should be noted that this approach was terminated once 95% exploration was reached, rather than full coverage. The APN-P performance demonstrated a significant improvement over the compared state-of-the-art implementations in terms of both total exploration time and per-iteration computation times. Compared to FFI, APN-P achieved complete coverage while the exploration time was reduced by \(34\%\) using resolution \(0.4\)m, and \(54\%\) using resolution \(0.1\)m. Additionally, the percent improvement between resolutions indicates better scalability to higher resolution mapping.
Fig. 6: Representative results of the exploration progress over time. (a) - (d): explored map in terms of total voxels and their volume. (e) - (h): corresponding APN size in terms of its nodes (red) and edges (blue), with the respective node density (\(\vartheta_{\mathcal{V}}\)) and edge density (\(\vartheta_{\mathcal{E}}\)).
### _Maze-like Scenario_ A maze-like environment is presented in Fig. 3(b) with the dimensions of \(20\times 20\times 2.5\)(m\({}^{3}\)). This scenario was tested using map resolutions of \(0.1\)m and \(0.2\)m; coarser resolutions were not evaluated since there are narrow passageways that require a finer resolution to admit collision-free paths (as also noted in [18]). This scenario was primarily compared against FFI, as it was not evaluated in the original works of the other approaches. A representative example of the mapped environment after exploration is shown in Fig. 7(a) with the executed exploration path overlayed in red. The path shows that very few redundant motions were executed and that the path progresses smoothly throughout the maze passages, with an average total path length of \(208.9\)m. Fig. (b)b shows the map construction over time. 
An average coverage value of \(\vartheta_{\mathcal{M}}=100\%\) was reached at each map resolution, and the surface coverage rate \(\eta_{\mathcal{M}^{occ}}\) was \(0.5\mathrm{m}^{3}/\mathrm{s}\) and \(1.4\mathrm{m}^{3}/\mathrm{s}\) for the respective map resolutions of \(0.1\)m and \(0.2\)m. The APN size growth over time was plotted in Fig. (f)f, and visualized in Fig. (b)b. The average node density per \(100\mathrm{m}^{3}\) was \(\vartheta_{\mathcal{V}}=16.0\pm 1.3\), with an average edge density of \(\vartheta_{\mathcal{E}}=0.20\pm 0.12\). The computation times per cycle are plotted in Fig. (b)b, with a statistical analysis of the computation time taken per procedure shown in Fig. (f)f. For this scenario, most of the computation time went towards APN regulation, with coverage view sampling requiring the most time, \(15.8\)ms, due to the prevalence of obstacles and occlusions. Despite the high obstacle density, the computation times for reachability updates remained relatively small, while still maintaining sufficient node and edge densities to facilitate planning. This demonstrates the effectiveness of the local difference-awareness and efficient data caching strategies that minimize wasteful or redundant processing. Table III summarizes the exploration efficiency of the compared approaches with respect to total exploration time and computation time per cycle. Note that, as previously mentioned, the exploration time for FFI was reported when \(95\%\) coverage was achieved, rather than \(100\%\). The APN-P completed the exploration with \(100\%\) coverage in an average time of \(145.1\)s and \(212.6\)s for map resolutions \(0.2\)m and \(0.1\)m, respectively. These are significant improvements over the results of FFI, while the processing time per cycle was also reduced by around \(80\%\) and had much less variability. Additionally, the total exploration time for FFI increased by \(86\%\) between the two map resolutions, while the respective increase for the APN-P was \(45\%\). This further demonstrates the performance scalability for higher mapping resolutions using larger and more complex environments.
\begin{table} \begin{tabular}{c c|c c|c c|c c|c c|c c} \hline \hline & & \multicolumn{2}{c|}{**APN-P**} & \multicolumn{2}{c|}{**FFI**} & \multicolumn{2}{c|}{**AEP**} & \multicolumn{2}{c|}{**RH-NBVP**} & \multicolumn{2}{c}{**Rapid**} \\ \hline \hline **Scenario** & \(r_{\mathcal{M}}\)[m] & \(\overline{T}\)[s] & \(\bar{t}^{cycle}\)[ms] & \(\overline{T}\)[s] & \(\bar{t}^{cycle}\)[ms] & \(\overline{T}\)[s] & \(\bar{t}^{cycle}\)[ms] & \(\overline{T}\)[s] & \(\bar{t}^{cycle}\)[ms] & \(\overline{T}\)[s] & \(\bar{t}^{cycle}\)[ms] \\ \hline \multirow{4}{*}{**Apt.**} & 0.4 & 52.9 & \(14.0\pm 8.0\) & 80 & \(122\pm 36\) & 200 & 92 & 501.9 & 153 & - & - \\ & 0.2 & 57.9 & \(18.9\pm 9.1\) & - & \(156\pm 109\) & 200 & - & - & - & - & - \\ & 0.1 & 69.4 & \(28.9\pm 18.5\) & 151 & \(68\pm 27\) & 200 & 129 & - & - & - & - \\ \hline \multirow{2}{*}{**Maze**} & 0.2 & \(145.1\) & \(26.1\pm 20.8\) & 177 & \(155\pm 71\) & - & - & - & - & - & - \\ & 0.1 & \(212.6\) & \(48.0\pm 28.8\) & 330 & \(238\pm 80\) & - & - & - & - & - & - \\ \hline **Ind. Plant** & 0.2 & \(353.1\) & \(186.8\pm 113.4\) & \(>1000\) & \(152\pm 20\) & 941 & \(-\) & \(2104\) & \(-\) & \(582\) & \(-\) \\ \hline **Warehouse** & 0.4 & \(268.1\) & \(121.3\pm 84.4\) & - & - & - & - & - & - & - & - \\ \hline \hline \end{tabular} \end{table} TABLE III: Time performance comparison in terms of total exploration runtime \(\overline{T}\) and computation time per cycle \(\bar{t}^{cycle}\), averaged over 10 runs.
Fig. 7: Timing performance for each exploration scenario. (a)-(d): depict the processing time taken per cycle. (e)-(h): display the median statistical boxplot of the DFR and planning computation times per cycle.
### _Industrial Plant Scenario_ The Industrial Plant scenario shown in Fig. 3(c) is an outdoor environment based on the Gazebo Powerplant model, truncated to the approximate dimensions of \(33\times 31\times 26\)(m\({}^{3}\)). It represents both a large-scale and complex exploration task due to intricate structural geometries with many auto-occlusions. It was tested using a map resolution of \(0.2\)m and a maximum velocity of \(2.5\)m/s, consistent with the compared approaches. An example of the explored map is shown in Fig. 8(a), with the explored volume over time plotted in Fig. 5(c). A high surface coverage rate of \(\eta_{\mathcal{M}^{occ}}=3.2\)m\({}^{3}\)/s was achieved, which was consistently maintained as shown in Fig. 5(c). The average total coverage was \(98.7\%\), due to a few small regions with high surrounding occlusions, where coverage sampling failed to find a feasible viewpose. This could be overcome by selecting more aggressive sampling parameters, which was not done for these tests to preserve parameter consistency between scenarios. The APN size over time is plotted in Fig. 5(g), with the final roadmap structure visualized in Fig. 8(b). The average node and edge density were \(\vartheta_{\mathcal{V}}=3.8\) and \(\vartheta_{\mathcal{E}}=0.25\), respectively. By visual inspection of Fig. 8(b), the extent and density of the network appear to provide good coverage throughout the map. The processing time per cycle is displayed in Fig. 6(c), with a statistical boxplot of the time taken by each subroutine shown in Fig. 6(g). Traversal edge maximization required the most computation time during differential regulation, with an average of \(67.2\)ms, due to the large scale and the amount of empty space surrounding the structures, which is initially unknown. Unknown edges are repeatedly checked for collisions until they can be determined to be either completely free, or to have an occupied collision, after which they are suppressed. The processing time spent on planning was well-balanced between the hierarchical layers. A comparison of the timing results to the baseline approaches is shown in Table III. We note that this environment was not originally tested by the authors of RH-NBVP; instead, the corresponding value of \(2104\)s was obtained from the comparative analysis performed by [19]. Rapid performed with the fastest total exploration time among the compared methods, taking \(582\)s with an average total distance of \(728\)m. This was also the only approach able to finish exploration within the rated time limit \(T_{max}\) of \(840\)s. This can be explained by the fact that this approach takes advantage of the large amount of free space to maintain high velocity, which helps to offset the diminished efficiency from greedy planning. 
However, this also has the effect of frequently leaving regions that have only been partially mapped. Coverage gaps can frequently occur that require large redundant paths to revisit, or that otherwise reduce the completeness of the final map depending on the specific termination criteria. Additionally, the authors of Rapid note that their implementation can spend a significant amount of time computing paths over large distances (up to 10 seconds) using Dijkstra's algorithm over the map. These computation times were omitted from the reported total exploration time to focus evaluation only on the quality of their flight behavior. Even without this consideration, the APN-P was still able to reach complete exploration around \(65\%\) faster on average, with a decrease in distance traveled of around \(10\%\). This also highlights the importance of the APN efficiency to prevent such high computation times from occurring in practice. APN-P exhibited significantly better performance than all compared methods, requiring an average total exploration time of only \(353.1\)s, with each cycle requiring an average of \(186.8\)ms. The average total distance traveled was \(406.3\)m, with a mean velocity of \(1.9\)m/s. The MAV was able to maintain higher velocities due to the fast cycle times, which enabled the system to quickly react to the changing spatial map and re-plan its exploration path. Often the information gain of the current NBV goal is fully observed as the MAV gets closer, which can be quickly reflected within the network, allowing it to maintain its momentum by not needing to completely stop at each goal. To evaluate how the larger size of this scenario correlates to the processing time per cycle, the Ind. Plant was additionally evaluated against the Maze scenario. To enable a more consistent comparison, the map resolution was kept at \(0.2\)m, and the maximum velocity and sensor parameters were assigned the values used for the Maze as indicated in Table II. The resulting cycle processing time for the Ind. Plant decreased by around \(58\%\), with each cycle taking an average of \(\bar{t}^{cycle}=78.7\)ms. Within each cycle, DFR required \(46.4\)ms and planning required \(32.3\)ms. The effects of map resolution were analyzed by testing the timing performance using a map resolution of \(0.4\)m. This resulted in a significant decrease in the cycle processing time, which was reduced to \(\bar{t}^{cycle}=30.9\)ms, and the total exploration time was reduced to \(\overline{T}=220.2\)s. This indicates that the increased cycle processing time at resolution \(0.2\)m was primarily due to the increased resolution, rather than the larger environment size directly.
Fig. 8: Exploration results for the Maze Scenario. (a): The explored path is plotted in red, with intermediate keyframe configurations represented by yellow points. (b): The APN nodes and edges overlayed in blue.
Fig. 9: Exploration results of the Industrial Plant scenario.
### _Warehouse Scenario_ The Warehouse scenario is a large-scale indoor environment with the approximate dimensions \(90\times 30\times 15\) (m\({}^{3}\)), shown in Fig. (d)d with its exterior shown on the left, and the interior structures shown on the right. The model's exterior structure was derived from the Powerplant model available from the Gazebo model library, while the interior was modified by adding various geometric features and structures to create a more intricate environment for exploration. Since this was a custom-built model, the APN-P was evaluated independently, as comparative results were unavailable. 
Due to the larger scale of this scenario, the mapping resolution was set to \(0.4\)m, and the maximum velocity was increased to \(3.0\)m/s. The sensing parameters were also increased using a maximum range of \(9\)m, with FoV \((75^{\circ},115^{\circ})\). The larger sensor view volume results in more information being added to the map per scan and the higher maximum velocity results in more scans being integrated between cycles, both resulting in more changed data to process per cycle. This scenario was also used to analyze variations of the clustering parameters \(\rho_{c}\) and \(D_{c}\), which are indicated in Table IV. Unless otherwise noted, these parameters were set to \(\rho_{c}=4\) and \(D_{c}=7.0\), consistent with the previous Industrial Plant evaluation. A representative example of the reconstructed map results shown in Fig. 10 and the explored map volume over time is shown in Fig. (d)d. A minimum coverage ratio of \(\vartheta_{\mathcal{M}}=99.98\%\) was achieved for all test configurations. The APN size growth is depicted in Fig. (h)h, which contained an average of \(346\) nodes and \(21494\) edges, with an edge density factor of \(0.374\). The computation time per cycle is plotted in Fig. (d)d and summarized in Table III. Similar to the Ind. Plant scenario, the time spent on APN regulation remains within a bounded range despite the increasing size of the map and APN. The exploration time performance results are summarized in Table III, requiring an average exploration time of \(\overline{T}=268.1\)s and average planning cycle time \(\bar{t}^{cycle}=121.3\)ms. A more detailed breakdown of the processing times per sub-procedure is shown in Fig. (h)h. Different clustering parameter variations were applied and the resulting time performance is summarized in Table IV. The average exploration time was not significantly changed between parameter variations, indicating the low sensitivity of these parameters. The primary effect of the variations was on the per-cycle computation time, though the differences were relatively minor. Using the values \(\rho_{c}=4\) and \(D_{c}=10.0\), the cycle time was nearly evenly distributed between differential regulation, \(\bar{t}^{cycle}_{DFR}\), and planning, \(\bar{t}^{cycle}_{plan}\). The other parameter combinations increased the planning time, but only by a small amount. Fig. 10: The reconstructed map of the Warehouse scenario colorized by voxel height. The maximum height of displayed voxels is truncated for visual clarity. ### _Discussions_ The experimental results show that our approach has the ability to iteratively update the APN and replan the exploration path at an average rate of at least \(20\)Hz for the two smaller scale scenarios (Apt. and Maze), and at least \(5\)Hz for the larger scales (Industrial Plant and Warehouse). However, the difference between these cycle rates is not primarily due to the larger environment sizes. Instead, the larger sensor view volume and higher maximum velocities are the more significant factors, which result in a larger amount of map data for processing per cycle, but these factors are not directly related to the environment size. This helps to explain the scalability of our approach for larger environments. For the smaller environments, most of the planning time is spent on local view planning (see Fig. 6(e) and 6(f)), This is due to the relatively few clusters needed to partition the nodes, resulting in trivial cluster planning instances. 
However, planning directly over all NBVs can quickly become intractable as the map size increases, either resulting in unacceptably large processing times or requiring premature search termination that degrades the planning quality. The hierarchical planning strategy of APN-P helps to mitigate this complexity by keeping the problem size manageable. Furthermore, planning convergence is accelerated by initializing each planning cycle from the partially optimized solution of the previous cycle. This reduces the need to introduce further problem simplifications or approximations that would decrease the planning quality. These effects are demonstrated by the results shown in Fig. 6(d) and 6(h): the distributed planning time remains relatively low and does not exhibit continually increasing growth, despite the increasing size of the map and APN as shown in Fig. 5(h) and 5(d). The frontier-guided information gain and sampling strategy of DFR provides an effective way to avoid the prohibitively high computation costs that the existing (compared) approaches incur when analyzing information gain, and to balance the processing time per cycle against the update rate. This enables maximized coverage of the unknown map regions to be maintained at high update rates, providing the knowledge needed for non-myopic planning.

## X Conclusions and Future Work

This paper has presented the Active Perception Network (APN), serving as a topological roadmap of the dynamically changing exploration state space, the differential regulation (DFR) update procedure that incrementally adapts the APN to the changing environment knowledge, and an exploration planner, APN-P, which leverages the APN to find non-myopic exploration sequences. The results demonstrate the efficiency of DFR in performing each cyclic update and its scalability with increasing map sizes. In comparison to several state-of-the-art approaches, APN-P consistently demonstrated improved performance in terms of total exploration time and coverage completeness. The improved performance was achieved over a variety of different environments, both indoor and outdoor, with only minor parameter adjustments between them. We expect to make all implementations of the presented work available as open source, including the full development framework it was built upon (briefly introduced in the Appendix). Several areas of future work have been identified. An investigation of different clustering methods and their performance effects will provide insight toward future improvements. Methods to account for sensing and localization uncertainty using multi-objective optimization strategies are also being investigated, where the current processing performance provides an excellent baseline for absorbing the greater computational complexity inherent to these methods. This will make the approach more robust for practical use in GPS-denied environments. An ablation study and analysis of parameter sensitivity will help guide future developments that generalize to a wider range of environment scenarios and that reduce or eliminate the need for parameter tuning.
2307.16456
Camoscio: an Italian Instruction-tuned LLaMA
In recent years Large Language Models (LLMs) have increased the state of the art on several natural language processing tasks. However, their accessibility is often limited to paid API services, posing challenges for researchers in conducting extensive investigations. On the other hand, while some open-source models have been proposed by the community, they are typically English-centric or multilingual without a specific adaptation for the Italian language. In an effort to democratize the available and open resources for the Italian language, in this paper we introduce Camoscio: a language model specifically tuned to follow users' prompts in Italian. Specifically, we finetuned the smallest variant of LLaMA (7b) with LoRA on a corpus of instruction prompts translated to Italian via ChatGPT. Results indicate that the model's zero-shot performance on various downstream tasks in Italian competes favorably with existing models specifically finetuned for those tasks. All the artifacts (code, dataset, model) are released to the community at the following url: https://github.com/teelinsan/camoscio
Andrea Santilli, Emanuele Rodolà
2023-07-31T07:31:48Z
http://arxiv.org/abs/2307.16456v2
# Camoscio: an Italian Instruction-tuned LLaMA

###### Abstract

In recent years, Large Language Models have improved the state of the art on several natural language processing tasks. However, their accessibility is often limited to paid API services, posing challenges for researchers in conducting extensive investigations. On the other hand, while some open-source models have been proposed by the community, they are typically multilingual and not specifically tailored for the Italian language. In an effort to democratize the available and open resources for the Italian language, in this paper we introduce Camoscio: a language model specifically tuned to follow users' prompts in Italian. Specifically, we finetuned the smallest variant of LLaMA (7b) with LoRA on a corpus of instruction prompts translated to Italian via ChatGPT. Results indicate that the model's zero-shot performance on various downstream tasks in Italian competes favorably with existing models specifically finetuned for those tasks. All the artifacts (code, dataset, model) are released to the community at the following url: [https://github.com/teelinsan/camoscio](https://github.com/teelinsan/camoscio)

## 1 Introduction

In recent years, Large Language Models (LLMs) have made remarkable advancements in the field of natural language processing, demonstrating state-of-the-art performance on various tasks Brown et al. (2020); Chowdhery et al. (2022); OpenAI (2023). However, the majority of these models are controlled by for-profit organizations which release only a paid API for receiving responses based on input textual prompts. This severely constrains researchers in conducting comprehensive and meaningful research, as they lack access to both the model's weights and the training data regime. This limitation is particularly relevant for privacy-sensitive applications (e.g., the medical domain) where data cannot be shared with external providers. On the other hand, several open-source models1 have been proposed as an alternative to closed models Zhang et al. (2022); Scao et al. (2022); Touvron et al. (2023). However, most of these models are English-centric or multilingual, albeit with performance that lags behind their monolingual counterparts. Furthermore, in these latter models, support for the Italian language is usually poor. For example, BLOOM - the largest open multilingual model available to date - has not been trained on any Italian data, while LLaMA has only a small percentage of training data in the Italian language 2. In addition to this, most of these models are only trained with the standard language modeling objective (i.e., predict the next token given the previous ones) on corpora of raw textual data, while it has been shown that a second training step of instruction-tuning is crucial to increase downstream performance Sanh et al. (2022); Wei et al. (2021); Chung et al. (2022). Recently, a step in this direction has been made by Taori et al. (2023) with the release of Stanford Alpaca, an instruction-tuned version of LLaMA for the English language. Following this approach, in this paper we propose Camoscio as an instruction-tuned version of LLaMA for the Italian language, obtained by translating to Italian the instruction-tuning dataset of Stanford Alpaca. In particular, we finetuned the smallest version of LLaMA (7 billion parameters) with LoRA Hu et al. (2022), a parameter-efficient finetuning technique that makes it possible to train larger models on standard desktop hardware.
Footnote 1: Actual openness depends on the model license.

Footnote 2: Less than 4.5% of training data comes from Wikipedia in 20 different languages, including Italian.

Our contributions are the following:

* We introduce an instruction-tuning dataset for the Italian language, obtained by translating the Stanford Alpaca (Taori et al., 2023) dataset to Italian.
* We train Camoscio on this dataset and evaluate its zero-shot performance on several downstream tasks for the Italian language (NewsSum-IT, SQuAD-IT, XFORMAL IT).
* We release all the artifacts (code, dataset, model checkpoints) to the community.

## 2 Background

Large language models have emerged as a general class of models capable of performing a wide range of tasks without explicit finetuning, by just leveraging in-context examples (Bommasani et al., 2021). Despite their well-known limitations, these models have garnered popularity not only in the natural language processing domain but also across audio, image, and multimodal domains (Postolache et al., 2023; Dosovitskiy et al., 2021; Trappolini et al., 2023), with most of the approaches scaling or optimizing their performance (Chowdhery et al., 2022; Santilli et al., 2023). In the context of the Italian language, the availability of pre-trained language models is currently limited; generic multipurpose LMs are almost nonexistent. Notable mentions include: GePpeTto (Mattei et al., 2020), a version of GPT-2 base (117 million parameters) finetuned using Italian Wikipedia and the ItWac corpus (Baroni et al., 2009); IT5 (Sarti and Nissim, 2022), a T5 model tailored for Italian using a refined version of the mC4 corpus (Xue et al., 2021); and BART-IT (La Quatra and Cagliero, 2023), an Italian variant of BART (Lewis et al., 2020) trained on the same mixture of data as IT5. Concurrently with our work, Bacciu et al. (2023) proposed Fauno, an Italian version of Baize (Xu et al., 2023), which is an LM trained on a corpus of self-chat performed by ChatGPT. Compared to our work, their approach is tailored to developing a conversational agent for the Italian language. After our work, Michael (2023) released on their GitHub repository an instruction-tuned version of LLaMA on a translation to Italian of the GPT-4-LLM dataset (Peng et al., 2023).

## 3 Method

For the construction of our instruction-tuning dataset for the Italian language, we start from the Stanford Alpaca dataset (Taori et al., 2023) and follow Alpaca LoRA (Wang, 2023) for the finetuning approach.

### Dataset

Stanford Alpaca is an instruction-tuning dataset constructed using the self-instruct method (Wang et al., 2023). Specifically, the authors started with a set of 175 human-written instruction-output pairs from the original self-instruct paper3 and used them as in-context examples to prompt OpenAI _text-davinci-003_. A total of 52,000 novel examples were generated with this technique. Each example includes an _instruction_ in natural English, the answer (_output_), and optionally an additional context (_input_) for some datapoints (e.g., a short paragraph for question answering). Figure 1 shows different types of instructions in the dataset.

Footnote 3: [https://github.com/yizhongw/self-instruct](https://github.com/yizhongw/self-instruct)

Translation. Inspired by Croce et al. (2018) and Scaiella et al. (2019), we translated the original dataset of Stanford Alpaca to Italian using gpt-3.5-turbo with the prompt _"Translate the following text to Italian: {text}"_.
We translated all the fields in the dataset (_instruction, input, output_). We decided to use ChatGPT instead of other APIs for translation (e.g., Google Translate, Microsoft Azure Translator, DeepL) because we found it to be more robust for translating code examples, i.e., it correctly translates only the comments in the code and not also the coding lexicon of the programming language. We provide here an example from the dataset. Instruction: _"Data una parola, costruisci i suoi antonimi."_, Input: _"Luce"_, Output: _"Scuro, pesante, denso"_. Clearly the translation is not always perfect, but it is a fast and cheap method to bootstrap a noisy instruction-tuning dataset for the Italian language.

Figure 1: Diversity of the examples in the Stanford Alpaca dataset. Illustration from Taori et al. (2023). The inner circle shows the root verb of the instruction while the outer circle shows the direct object. The dataset of Camoscio is constructed by translating all these examples to Italian via _gpt-3.5_.

### Training & Prompting

We finetuned the smallest version of LLaMA (Touvron et al., 2023) (7 billion parameters) on an instruction-tuning dataset for the Italian language, obtained by translating to Italian the dataset of Stanford Alpaca as described in the paragraph above. The model is trained with supervision using the standard objective of predicting the next token given the previous ones. The dataset has instruction, input, and output fields, but the input is not available for all data points (e.g., open-ended generation). For such cases, we construct the prompt: _"Di seguito è riportata un'istruzione che descrive un task. Scrivete una risposta che completi adeguatamente la richiesta. ### Istruzione: {instruction} ### Risposta: {output}"_. If, instead, the datapoint also has an input (e.g., question answering, where the input is the contextual paragraph), we construct the prompt: _"Di seguito è riportata un'istruzione che descrive un task, insieme ad un input che fornisce un contesto più ampio. Scrivete una risposta che completi adeguatamente la richiesta. ### Istruzione: {instruction} ### Input: {input} ### Risposta: {output}"_. At inference time, the same prompt is used to generate the answer. Only the text generated after "[...] _### Risposta:_" is used as the final output. We sample from the model using _top-p_ sampling (Holtzman et al., 2020) with a temperature of 0.2, \(p=0.75\), \(k=40\), and beam search with 4 beams. We refer to Appendix A for additional implementation details.

## 4 Experiments

Currently, there is very limited availability of datasets for a solid evaluation of the broad capabilities these general-purpose models possess. This is true for English, but especially for the Italian language. Since there are no prior works that evaluated zero-shot capabilities in the Italian language, we decided to follow the same evaluation protocol proposed in Sarti and Nissim (2022). Compared to their approach, we do not perform any training on the downstream tasks, i.e., we perform the evaluation in a zero-shot fashion by providing to the model just a textual description of the task (e.g., _"Riassumi il seguente articolo"_). We compared the performance of our model on standard Italian benchmarks for summarization (NewsSum-IT), question answering (SQuAD-IT), and style transfer (XFORMAL IT).
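Before turning to the individual benchmarks, the following Python sketch illustrates how the prompt templates and decoding settings described in Section 3.2 could be assembled for inference. This is a rough illustration rather than the released Camoscio code: the "### Istruzione/Input/Risposta" section markers follow the Alpaca convention, and all function and variable names here are our own.

```
# Illustrative sketch of the Camoscio prompt templates and decoding settings
# (Section 3.2); names are hypothetical, not taken from the released repository.
PROMPT_WITH_INPUT = (
    "Di seguito è riportata un'istruzione che descrive un task, insieme ad un input "
    "che fornisce un contesto più ampio. Scrivete una risposta che completi "
    "adeguatamente la richiesta.\n\n"
    "### Istruzione:\n{instruction}\n\n### Input:\n{input}\n\n### Risposta:\n"
)
PROMPT_NO_INPUT = (
    "Di seguito è riportata un'istruzione che descrive un task. Scrivete una risposta "
    "che completi adeguatamente la richiesta.\n\n"
    "### Istruzione:\n{instruction}\n\n### Risposta:\n"
)

def build_prompt(example: dict) -> str:
    """Pick the template depending on whether the datapoint has an input field."""
    if example.get("input"):
        return PROMPT_WITH_INPUT.format(**example)
    return PROMPT_NO_INPUT.format(instruction=example["instruction"])

# Decoding settings reported in the paper: top-p sampling plus 4-beam search.
generation_kwargs = dict(temperature=0.2, top_p=0.75, top_k=40, num_beams=4)

# Usage (assuming a Hugging Face model/tokenizer pair is already loaded):
# inputs = tokenizer(build_prompt(example), return_tensors="pt")
# output = model.generate(**inputs, **generation_kwargs)
# answer = tokenizer.decode(output[0]).split("### Risposta:")[-1].strip()
```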
\begin{table} \begin{tabular}{l|c c|c c c|c|c} \hline \hline & \multicolumn{7}{c}{**SQuAD-IT**} \\ \cline{2-8} & F1 & EM & R1 & R2 & RL & BS & EM-GPT \\ \hline DrQA-IT (Croce et al., 2018) &.659 &.561 & - & - & - & - & - \\ mBERT (Croce et al., 2019) &.760 &.650 & - & - & - & - & - \\ BERT (Devlin et al., 2019) &.753 &.638 & - & - & - & - & - \\ MiniLM (Riabi et al., 2021) &.720 &.577 & - & - & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 1: Results on question answering (SQuAD-IT).

Compared to Sarti and Nissim (2022), we do not include the Wikipedia for Italian Text Summarization (WITS) corpus (Casola and Lavelli, 2021)
since Wikipedia is included in the original training corpus of LLaMA Touvron et al. (2023). We also omitted the news style transfer task between "Il Giornale" and "La Repubblica" (and vice-versa) based on CHANGE-IT De Mattei et al. (2020), since Camoscio has no concept of the "Il Giornale" or "La Repubblica" styles (i.e., it was never exposed during training or finetuning to this kind of article, although we recognize it might be interesting to analyze this in a few-shot setting). We describe in the next paragraphs the three datasets used for the evaluation.

News Summarization. We evaluate the news article summarization capabilities of Camoscio using the NewsSum-IT dataset proposed by Sarti and Nissim (2022). This dataset is obtained by merging two newspaper sources ("Fanpage.it" and "Il Post") scraped by the Applied Recognition Technology Laboratory4 and available on the Hugging Face Hub Lhoest et al. (2021). We used only the test split for the zero-shot evaluation, and asked the model to generate an answer given the instruction _"Dopo aver letto il testo qui sotto, riassumilo adeguatamente."_ provided in the textual prompt and the news text provided as input (complete prompt as explained in Section 3.2). We use the same evaluation metrics as Sarti and Nissim (2022) and report the average across the two newspapers as in their work.

Footnote 4: [https://huggingface.co/ARTeLab](https://huggingface.co/ARTeLab)

Question Answering. To assess the model performance on extractive question answering, we used the SQuAD-IT dataset Croce et al. (2018). This dataset is composed of sets of paragraphs, questions, and answers derived from the original SQuAD dataset Rajpurkar et al. (2016) via machine translation and subsequent filtering of problematic instances. As for the previous dataset, we used just the test split for zero-shot evaluation. The model is asked to generate an answer given the instruction _"Dopo aver letto il paragrafo qui sotto, rispondi correttamente alla successiva domanda"_. We evaluated the generated answers using the script from Sarti and Nissim (2022). Furthermore, we also used an additional metric, "Exact Match via ChatGPT", to better assess the performance. We explain this metric in the following subsection "Evaluation Metrics".

Formality Style Transfer. We assess the style transfer capabilities of Camoscio using the Italian subset of the XFORMAL dataset Briakou et al. (2021), hereafter referred to as XFORMAL-IT. The dataset consists of forum messages from the GYAFC corpus Rao and Tetreault (2018), automatically translated, covering several topics (entertainment, music, family, and relationships). The test set is constructed by using crowdworkers via Amazon Mechanical Turk to collect formal-informal pairs directly in Italian. The model is evaluated in both style transfer directions (Formal to Informal and Informal to Formal). We use only the test split for the zero-shot evaluation and ask the model to generate an answer given the instruction _"Dato il seguente testo scritto in modo formale, riscrivilo in modo informale."_ and vice versa according to the style transfer direction.

### Evaluation Metrics

We use the same evaluation protocol and scripts of Sarti and Nissim (2022).
Specifically, for evaluating lexical matches, we rely on the language-independent ROUGE metric proposed by Lin (2004) in the unigram (R1), bigram (R2), and Longest Common Subsequence (RL) variants. To gauge semantic correspondence, we employ the trained BERTScore metric Zhang et al. (2019) with a widely used BERT model pre-trained on Italian5 and the same baseline scores as Sarti and Nissim (2022). Following previous works, for evaluating the question-answering task we employ exact-match (EM) and F1-score (F1). However, since Camoscio is not trained on the output distribution of the question-answering dataset, these metrics fail to assess the correctness of the output, since EM will count an answer as zero even when it is correct but worded differently. To account for these variations, we used an approach similar to Zheng et al. (2023) that leverages an external LM (in our case _gpt-3.5-turbo_) to judge whether the answer provided by a model is correct (1) or not (0) given the question and the ground-truth answer. We refer to this metric as Exact Match via ChatGPT (EM-GPT).

Footnote 5: dbmdz/bert-base-italian-xxl-uncased

\begin{table} \begin{tabular}{l|c c c c|c c c c} \hline \hline & \multicolumn{4}{c}{**XFORMAL (IT) F \(\rightarrow\) I**} & \multicolumn{4}{c}{**XFORMAL (IT) I \(\rightarrow\) F**} \\ \cline{2-9} & R1 & R2 & RL & BS & R1 & R2 & RL & BS \\ \hline mT5 Small &.651 &.450 &.631 &.666 &.638 &.446 &.620 &.684 \\ mT5 Base &.653 &.449 &.632 &.667 &.661 &.471 &.642 &.712 \\ \hline IT5 Small &.650 &.450 &.631 &.663 &.646 &.451 &.628 &.702 \\ IT5 Base &.652 &.446 &.632 &.665 &.583 &.403 &.561 &.641 \\ IT5 Large &.611 &.409 &.586 &.613 &.663 &.477 &.645 &.714 \\ \hline Camoscio-7b &.645 &.436 &.623 &.651 &.622 &.428 &.600 &.667 \\ \hline \hline \end{tabular} \end{table} Table 2: Results on formality style transfer (XFORMAL IT) for the formal-to-informal (F \(\rightarrow\) I) and informal-to-formal (I \(\rightarrow\) F) directions. Competitors’ scores reported from Sarti and Nissim (2022).

### Results and Discussion

Question Answering. Table 1 shows the results of Camoscio compared to other methods used in the literature. We observe that the task metrics are very distant from all the other models (EM \(.077\) and F1 \(.270\)). Although this is generally expected, since we are comparing trained models with an untrained model (zero-shot), the exact match score is suspiciously low. Looking at the output responses, we noted that Camoscio produces correct but wordy answers (e.g., _"La crisi petrolifera del 1973 è iniziata nell'ottobre 1973."_ instead of _"ottobre 1973"_), making the system perform poorly on this score despite the fact that it produces correct answers. To this end, we also evaluated the model with standard evaluation metrics for generative models (R1, R2, RL, BS). However, also in this case we observe that scores are very low, even though a qualitative look at the given answers suggests they are good. This is possibly due to the different lengths of the produced answers (long) and the ground truth (short). Curiously, the learning-based metric BERTScore is also affected by this. To estimate quantitatively the actual performance of Camoscio, we instead used _Exact Match via ChatGPT_. In this case, the last column of the table shows that the zero-shot performance of Camoscio is in line with the other trained models (\(.576\)) and very distant from the original EM metric (\(.077\)).
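As a rough illustration of how such an LLM-as-judge exact-match metric can be computed, the sketch below queries a chat model to label each prediction as correct or incorrect. The judging prompt and helper names are our own assumptions, since the paper does not report them verbatim.

```
# Hypothetical sketch of an "Exact Match via ChatGPT" style metric: an external
# chat model labels each prediction as correct (1) or incorrect (0).
from openai import OpenAI

client = OpenAI()

def em_gpt(question: str, gold_answer: str, predicted_answer: str,
           model: str = "gpt-3.5-turbo") -> int:
    judge_prompt = (
        f"Question: {question}\n"
        f"Ground-truth answer: {gold_answer}\n"
        f"Model answer: {predicted_answer}\n"
        "Does the model answer convey the same information as the ground truth? "
        "Reply with 1 for yes or 0 for no."
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": judge_prompt}],
        temperature=0,
    )
    content = reply.choices[0].message.content.strip()
    return 1 if content.startswith("1") else 0

# The EM-GPT score is then the mean of these 0/1 judgments over the test split.
```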
These results also show that, for the trained models, the EM-GPT metric correlates well with the existing EM metric, even though it is slightly lower. This suggests that this metric could serve as an estimate of a model's actual performance, although it may also be subject to bias from the model used for the estimation. Nevertheless, it is still useful for getting a general idea of the performance of the model.

Style Transfer & Summarization. Tables 2 and 3 show results for the formality style transfer and news summarization tasks, respectively. We can observe that the zero-shot performance of Camoscio in both tasks is competitive with trained models. Depending on the model and training dataset, the latter might achieve slightly better scores at the expense of being less generalist. Looking at the qualitative results, we note however that the summarization task on "Il Post" and "Fanpage" is affected by some common failure cases.

Failure Cases. The most common failure case consists of the model not producing an answer at all after the input prompt (\(4.93\%\) of cases on "Il Post" and \(21.16\%\) of cases on "Fanpage"). We think that this might be due to the input documents of these examples being too long and out of distribution compared to the training documents seen in the instruction-tuning dataset (max length 256 tokens). This might confuse the model and trigger the generation of the end-of-sentence token. Other failure cases include the model responding with a copy of the instruction prompt. For example, we found that in \(0.75\%\) of the cases in the "Il Post" split of NewsSum-IT, when asked to summarize the article the model responded with _"Questo articolo non è più commentabile. Abbonati al Post per commentare le altre notizie."_.

\begin{table} \begin{tabular}{l|c c c c} \hline \hline & \multicolumn{4}{c}{**NewsSum-IT**} \\ \cline{2-5} & R1 & R2 & RL & BS \\ \hline mBART Large\({}^{6,7}\) &.377 &.194 &.291 & - \\ \hline mT5 Small &.323 &.150 &.248 &.375 \\ mT5 Base &.340 &.161 &.262 &.393 \\ \hline IT5 Small &.330 &.155 &.258 &.386 \\ IT5 Base &.339 &.160 &.263 &.044 \\ IT5 Large &.251 &.101 &.195 &.315 \\ \hline Camoscio-7b &.250 &.104 &.174 &.190 \\ \hline \hline \end{tabular} \end{table} Table 3: Results on NewsSum-IT.

We found this behavior peculiar, considering that the input prompt never
At first glance the answer might seem correct, it is grammatically accurate and fluent in Italian. However, knowing the story, the model makes several factual errors like characterizing Pinocchio as a kid (instead of a wooden puppet) living with parents (instead of Geppetto) and introducing a nonexistent magic puppet. Overall this example highlights that, although promising, these models have well-known limitations like hallucinations, factual errors, and several kinds of biases [13, 1, 14]. Consequently, it is essential to exercise caution when utilizing them, keeping these limitations in mind. ## 5 Conclusion In this paper, we introduced Camoscio, a 7 billion instruction-tuned model for the Italian language, together with its Italian instruction-tuning dataset. Results show that the zero-shot performance of Camoscio on several downstream tasks in Italian is competitive with existing models specifically finetuned for those tasks. Despite the known limitations of these kinds of models, this is a first step towards a generalist model capable of performing a wide range of tasks in Italian without explicit finetuning. This is particularly relevant especially in several domains where data is scarce or not available (e.g., medical domain). In an effort to democratize the available and open resources for the Italian language, we release all the artifacts (code, dataset, model) to the community. ## Limitations Results shown in the paper highlight zero-shot performance competitive with existing finetuned models on three different tasks: summarization (NewsSum-IT), question answering (SQuAD-IT), and style transfer (XFORMAL IT). However, it is unclear whether this is true also for other tasks, especially those out of training distribution of the instruction-tuning dataset (see Figure 1). Evaluating and thoroughly assessing the performance of these kinds of models is still an open research question. In addition to this, as already mentioned, the model suffers from common problems that affect language models such as hallucinations, factual errors, and several kinds of biases. ## Acknowledgments We thank Danilo Croce for pointing out existing implementation issues with the tokenization and the training objective in the _alpaca-lora_ repository and Gabriele Sarti for sharing datasets and evaluation protocols used in IT5. This work is supported by ERC grant no.802554 (SPECGEO), PRIN 2020 project no.2020TA3K9N (LEGO.AI), PNRR MUR project PE0000013-FAIR.
2306.17461
Efficient Parallel Output-Sensitive Edit Distance
Given two strings $A[1..n]$ and $B[1..m]$, and a set of operations allowed to edit the strings, the edit distance between $A$ and $B$ is the minimum number of operations required to transform $A$ into $B$. Sequentially, a standard Dynamic Programming (DP) algorithm solves edit distance with $\Theta(nm)$ cost. In many real-world applications, the strings to be compared are similar and have small edit distances. To achieve highly practical implementations, we focus on output-sensitive parallel edit-distance algorithms, i.e., to achieve asymptotically better cost bounds than the standard $\Theta(nm)$ algorithm when the edit distance is small. We study four algorithms in the paper, including three algorithms based on Breadth-First Search (BFS) and one algorithm based on Divide-and-Conquer (DaC). Our BFS-based solution is based on the Landau-Vishkin algorithm. We implement three different data structures for the longest common prefix (LCP) queries needed in the algorithm: the classic solution using parallel suffix array, and two hash-based solutions proposed in this paper. Our DaC-based solution is inspired by the output-insensitive solution proposed by Apostolico et al., and we propose a non-trivial adaption to make it output-sensitive. All our algorithms have good theoretical guarantees, and they achieve different tradeoffs between work (total number of operations), span (longest dependence chain in the computation), and space. We test and compare our algorithms on both synthetic data and real-world data. Our BFS-based algorithms outperform the existing parallel edit-distance implementation in ParlayLib in all test cases. By comparing our algorithms, we also provide a better understanding of the choice of algorithms for different input patterns. We believe that our paper is the first systematic study in the theory and practice of parallel edit distance.
Xiangyun Ding, Xiaojun Dong, Yan Gu, Youzhe Liu, Yihan Sun
2023-06-30T08:17:04Z
http://arxiv.org/abs/2306.17461v2
# Efficient Parallel Output-Sensitive Edit Distance

###### Abstract

In this paper, we study efficient parallel edit distance algorithms, both in theory and in practice. Given two strings \(A[1..n]\) and \(B[1..m]\), and a set of operations allowed to edit the strings, the edit distance between \(A\) and \(B\) is the minimum number of operations required to transform \(A\) into \(B\). In this paper, we use edit distance to refer to the Levenshtein distance, which allows for unit-cost single-character edits (insertions, deletions, substitutions). Sequentially, a standard Dynamic Programming (DP) algorithm solves edit distance with \(\Theta(nm)\) cost. In many real-world applications, the strings to be compared are similar to each other and have small edit distances. To achieve highly practical implementations, we focus on output-sensitive parallel edit-distance algorithms, i.e., algorithms that achieve asymptotically better cost bounds than the standard \(\Theta(nm)\) algorithm when the edit distance is small. We study four algorithms in the paper, including three algorithms based on Breadth-First Search (BFS), and one algorithm based on Divide-and-Conquer (DaC). Our BFS-based solution is based on the Landau-Vishkin algorithm. We implement three different data structures for the longest common prefix (LCP) queries needed in the algorithm: the classic solution using a parallel suffix array, and two hash-based solutions proposed in this paper. Our DaC-based solution is inspired by the output-insensitive solution proposed by Apostolico et al., and we propose a non-trivial adaption to make it output-sensitive. All of the algorithms studied in this paper have good theoretical guarantees, and they achieve different tradeoffs between work (total number of operations), span (longest dependence chain in the computation), and space. We test and compare our algorithms on both synthetic data and real-world data, including DNA sequences, Wikipedia texts, GitHub repositories, etc. Our BFS-based algorithms outperform the existing parallel edit-distance implementation in ParlayLib in all test cases. On cases with fewer than \(10^{5}\) edits, our algorithm can process input sequences of size \(10^{9}\) in about ten seconds, while ParlayLib can only process sequences of sizes up to \(10^{6}\) in the same amount of time. By comparing our algorithms, we also provide a better understanding of the choice of algorithms for different input patterns. We believe that our paper is the first systematic study in the theory and practice of parallel edit distance.

Keywords: Edit Distance, Parallel Algorithms, String Algorithms, Dynamic Programming, Pattern Matching

Supplementary Material. Software (Source Code): [https://github.com/ucrparlay/Edit-Distance](https://github.com/ucrparlay/Edit-Distance). This work is supported by NSF grants CCF-2103483, CCF-2238358, and IIS-2227669, and UCR Regents Faculty Fellowships.

## 1 Introduction

Given two strings (sequences) \(A[1..n]\) and \(B[1..m]\) over an alphabet \(\Sigma\) and a set of operations allowed to edit the strings, the _edit distance_ between \(A\) and \(B\) is the minimum number of operations required to transform \(A\) into \(B\). WLOG, we assume \(m\leq n\). The most commonly used metric is the _Levenshtein distance_, which allows for unit-cost single-character edits (insertions, deletions, substitutions). In this paper, we use _edit distance_ to refer to the Levenshtein distance. We use \(k\) to denote the edit distance between strings \(A\) and \(B\) throughout this paper.
Edit distance is usually used to measure the similarity of two strings (a smaller distance means higher similarity). Edit distance is a fundamental problem in computer science, and is introduced in most algorithm textbooks (e.g., [14, 15, 22]). In practice, it is widely used in version-control software [53], computational biology [12, 30, 38], natural language processing [10, 28], and spell correction [27]. It is also closely related to other important problems such as longest common subsequence (LCS) [49], longest increasing subsequence (LIS) [33], approximate string matching [55], and multi-sequence alignment [58]. The classic dynamic programming (DP) solution can compute edit distance in \(O(nm)\) work (number of operations) between two strings of sizes \(n\) and \(m\). This complexity is impractical if the input strings are large. One useful observation is that, in real-world applications, the strings to be compared are usually _reasonably similar_, resulting in a relatively small edit distance. For example, in many version-control systems (e.g., Git), if the two committed versions are similar (within a certain number of edits), a "delta" file is stored to track the edits. Otherwise, if the difference is large, the system directly stores the new version. Most DNA or genome sequence alignment applications also focus only on cases where the number of edits is _small_ [38]. We say an edit distance algorithm is _output-sensitive_ if the work is \(o(nm)\) when \(k=o(n)\). Many more efficient and/or practical algorithms were proposed in this setting with cost bounds parameterized by \(k\) [18, 19, 20, 21, 25, 34, 35, 36, 45, 46, 48]. Considering the ever-growing data size and plateaued single-processor performance, it is crucial to consider parallel solutions for edit distance. Although the problem is simple and well-studied in the sequential setting, we observe a _huge gap_ between theory and practice in the parallel setting. The few implementations we know of [7, 54, 57] simply parallelize the \(O(nm)\)-work sequential algorithm and require \(O(n)\) span (longest dependence chain), which indicates low parallelism and redundant work when \(k\ll n\). Meanwhile, numerous theoretical parallel algorithms exist [1, 3, 19, 36, 40, 47], but it remains unknown whether these algorithms are practical (i.e., can be implemented with reasonable engineering effort), and if so, whether they can yield high performance. _The goal of this paper is to formally study parallel solutions for edit distance. By carefully studying existing theoretical solutions, we develop **new output-sensitive parallel solutions with good theoretical guarantees and high performance in practice**. We also conduct in-depth experimental studies on existing and our new algorithms._

The classic dynamic programming (DP) algorithm solves edit distance by using the states \(G[i,j]\) as the edit distance of transforming \(A[1..i]\) to \(B[1..j]\). \(G[i,j]\) can be computed as:

\[G[i,j] =\begin{cases}G[i-1,j-1]&\text{if }A[i]=B[j]\text{ and }i>0,j>0\\ 1+\min(G[i-1,j],G[i-1,j-1],G[i,j-1])&\text{otherwise}\end{cases}\]
\[G[i,j] =\max(i,j)\quad\text{if }i=0\text{ or }j=0\]

A simple parallelization of this computation is to compute all states with the same \(i+j\) value in parallel, and process all \(i+j\) values in incremental order [7, 54, 57]. However, this approach has low parallelism as it requires \(n+m\) rounds to finish.
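For reference, the recurrence above translates directly into the following sequential Python sketch; the closing comment indicates the simple anti-diagonal ("wavefront") parallelization just mentioned. This is only the textbook \(\Theta(nm)\) baseline, not one of the implementations studied in this paper.

```
# Reference implementation of the Theta(nm) DP recurrence for edit distance.
def edit_distance_dp(A: str, B: str) -> int:
    n, m = len(A), len(B)
    G = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        G[i][0] = i                      # boundary: G[i,0] = i
    for j in range(m + 1):
        G[0][j] = j                      # boundary: G[0,j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:     # strings are 1-indexed in the paper
                G[i][j] = G[i - 1][j - 1]
            else:
                G[i][j] = 1 + min(G[i - 1][j], G[i - 1][j - 1], G[i][j - 1])
    return G[n][m]

# Example: edit_distance_dp("kitten", "sitting") == 3.
# The simple "wavefront" parallelization processes one anti-diagonal i+j = d at a
# time; all cells on a diagonal are independent, but n+m rounds are still needed.
```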
Later work [1, 3, 40] improved parallelism using a _divide-and-conquer (DaC)_ approach and achieved \(\tilde{O}(n^{2})\) work and \(\text{polylog}(n)\) span. These algorithms use the monotonicity of the DP recurrence, and are complicated. There are two critical issues in the DaC approaches. First, to the best of our knowledge, there exist no implementations, given the sophistication of these algorithms. Second, they are not output-sensitive (\(\tilde{O}(nm)\) work), which is inefficient when \(k\ll n\). Alternatively, many existing solutions, both sequential [18, 19, 20, 21, 25, 34, 35, 45, 46] and parallel [19, 36], use output-sensitive algorithms, and achieve \(\tilde{O}(nk)\) or \(\tilde{O}(n+k^{2})\) work and \(\tilde{O}(k)\) span. These algorithms view the DP table as a grid-like DAG, where each state (cell) \((x,y)\) has three incoming edges from \((x-1,y)\), \((x,y-1)\), and \((x-1,y-1)\) (if they exist). The edge weight from \((x-1,y-1)\) to \((x,y)\) is \(0\) when \(A[x]=B[y]\), and \(1\) otherwise. Then edit distance is equivalent to the shortest path from \((0,0)\) to \((n,m)\). An example is given in Fig. 1. Since the edge weights can only be \(0\) or \(1\), we can use _breadth-first search (BFS)_ from the cell \((0,0)\) until \((n,m)\) is reached. Ukkonen [55] further showed that using _longest common prefix (LCP)_ queries based on suffix trees or suffix arrays, the work can be improved to \(O(n+k^{2})\). Landau and Vishkin [36] parallelized this algorithm (see Sec. 3). While the sequential output-sensitive algorithms have been widely used in practice [20, 25, 35, 45, 46], we are unaware of any existing implementations of the parallel version. We systematically study parallel output-sensitive edit distance, using both the BFS-based and the DaC-based approaches. Our first effort is to implement the BFS-based Landau-Vishkin algorithm with our carefully engineered parallel suffix array (SA) implementation, referred to as BFS-SA. Although the suffix array is theoretically efficient with \(O(n)\) construction work, the hidden constant is large. Thus, we use hashing-based solutions to replace SA for LCP queries to improve the performance in practice. We first present a simple approach, BFS-Hash, in Sec. 3.2 that stores a hash value for all prefixes of the input. This approach has \(O(n)\) construction work, \(O(\log n)\) work per LCP query, and \(O(n)\) auxiliary space. While both BFS-SA and BFS-Hash take \(O(n)\) extra space, such space overhead can be significant in practice--for example, BFS-Hash requires \(n\) 64-bit hash values, which is \(4\times\) the input size considering characters as inputs, and \(32\times\) with an even smaller alphabet such as molecule bases (alphabet \(\{A,C,G,T\}\)). To address the space issue, we propose BFS-B-Hash using _blocking_. Our solution takes a user-defined parameter \(b\) as the block size, which trades off between space usage and query time. BFS-B-Hash limits extra space to \(O(n/b)\) at the cost of \(O(b\log n)\) LCP query time. Surprisingly, despite a larger LCP cost, our hash-based solutions are consistently faster than BFS-SA in all real-world test cases, due to cheaper construction. All of our BFS-based solutions are simple to program. We also study the DaC-based approach and propose a parallel output-sensitive solution. We propose a non-trivial adaption of the AALM algorithm [1] to make it output-sensitive.
Our algorithm is inspired by the BFS-based approaches, and improves the work from \(\tilde{O}(nm)\) to \(\tilde{O}(nk)\), with polylogarithmic span. The technical challenge is that the states in the computation are no longer a rectangle, but an irregular shape (see Fig. 1 and 3). We then present a highly non-trivial implementation of this algorithm. Among many key challenges, we highlight our solution to avoid dynamically allocating arrays in the recursive execution. While memory allocation is mostly ignored theoretically, in practice it can easily be the performance bottleneck in the parallel setting. We refer to this implementation as DaC-SD, with details given in Sec. 4 and 5.2 and Appendix B.2. The bounds of our algorithms (BFS-SA, BFS-Hash, BFS-B-Hash, and DaC-SD) are presented in Tab. 1.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Algorithm** & **Work** & **Span** & **Space\({}^{*}\)** & **Algorithm** & **Work** & **Span** & **Space\({}^{*}\)** \\ \hline **BFS-SA** & \(O(n+k^{2})\) & \(\tilde{O}(k)\) & \(O(n)\) & **BFS-Hash\({}^{*}\)** & \(O(n+k^{2}\log n)\) & \(\tilde{O}(k)\) & \(O(n)\) \\ **DaC-SD** & \(O(nk\log k)\) & \(\tilde{O}(1)\) & \(O(nk)\) & **BFS-B-Hash\({}^{*}\)** & \(O(n+k^{2}b\log n)\) & \(\tilde{O}(kb)\) & \(O(n/b+k)\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Algorithms in this paper. \(k\) is the edit distance. \(b\) is the block size. \({}^{*}\): Monte Carlo algorithms due to the use of hashing. “Space\({}^{*}\)” means auxiliary space used in addition to the input. Here we assume constant alphabet size for BFS-SA.**

We implemented the algorithms and show an experimental study in Sec. 6. We tested both synthetic and real-world datasets, including DNA, English text from Wikipedia, and code repositories from GitHub, with string lengths in \(10^{5}\)-\(10^{9}\) and varying edit distances, many of them with real edits (e.g., edit history from Wikipedia and commit history on GitHub). In most tests, our new BFS-B-Hash or BFS-Hash performs the best, and their relative performance depends on the value of \(k\) and the input patterns. Our BFS-based algorithms are faster than the existing parallel output-insensitive implementation in ParlayLib [7], even with a reasonably large \(k\approx 10^{5}\). We believe that our paper is the first systematic study in the theory and practice of parallel edit distance, and we give the first publicly available parallel edit distance implementation that can process _billion-scale strings_ with small edit distances; our code is available at [16]. We summarize our contributions as follows:

1. Two new BFS-based edit distance solutions, BFS-Hash and BFS-B-Hash, using hash-based LCP queries. Compared to the existing SA-based solution in Landau-Vishkin, our hash-based solutions are simpler and more practical. BFS-B-Hash also allows for tradeoffs between time and auxiliary space.
2. A new DaC-based edit distance solution DaC-SD with \(O(nk\log k)\) work and polylogarithmic span.
3. New implementations for four output-sensitive edit distance algorithms: BFS-SA, BFS-Hash, BFS-B-Hash, and DaC-SD. Our code is publicly available [16].
4. An experimental study of the existing and our new algorithms on different input patterns.

## 2 Preliminaries

We use \(O(f(n))\) _with high probability_ (_whp_) (in \(n\)) to mean \(O(cf(n))\) with probability at least \(1-n^{-c}\) for \(c\geq 1\). We use \(\tilde{O}(f(n))\) to denote \(O(f(n)\cdot\mathrm{polylog}(n))\). For a string \(A\), we use \(A[i]\) to denote the \(i\)-th character in \(A\).
We use _string_ and _sequence_ interchangeably. We use \(A[i..j]\) to denote the \(i\)-th to the \(j\)-th characters in \(A\), and \(A[i..j)\) the \(i\)-th to the \((j-1)\)-th characters in \(A\). Throughout the paper, we use "auxiliary space" to mean space used in addition to the input.

String Edit Distance. Given two strings \(A[1..n]\) and \(B[1..m]\), Levenshtein's Edit Distance [37] between \(A\) and \(B\) is the minimum number of operations needed to convert \(A\) to \(B\) using insertions, deletions, and substitutions. We also call the operations _edits_. In this paper, we use _edit distance_ to refer to Levenshtein's Edit Distance. The classic dynamic programming (DP) algorithm for edit distance uses the DP recurrence shown in Sec. 1 with \(O(mn)\) work and space.

Hash Functions. For simplicity of algorithm descriptions, we assume a perfect hash function for string comparisons, i.e., a function \(h:S\rightarrow[1,O(|S|)]\) such that \(h(x)=h(y)\Longleftrightarrow x=y\). For any alphabet \(\Sigma\) with size \(|\Sigma|\), we use a hash function \(h(A[l..r])=\sum_{i=l}^{r}A[i]\times p^{r-i}\) for some prime number \(p>|\Sigma|\), which returns a unique hash value of the substring \(A[l..r]\). The hash values of two consecutive substrings \(S_{1}\) and \(S_{2}\) can be concatenated as \(h([S_{1},S_{2}])=h(S_{1})\cdot p^{|S_{2}|}+h(S_{2})\), and the inverse can also be computed as \(h(S_{2})=h([S_{1},S_{2}])-p^{|S_{2}|}\cdot h(S_{1})\). For simplicity, we denote concatenation and its inverse operation as \(\oplus\) and \(\ominus\), respectively, i.e., \(h([S_{1},S_{2}])=h(S_{1})\oplus h(S_{2})\) and \(h(S_{2})=h([S_{1},S_{2}])\ominus h(S_{1})\). We assume perfect hashing for the theoretical analysis. In practice, we use a large prime \(p\) and modular arithmetic to keep word-size hash values. In our experiments, we compare different approaches and validate that our implementations are correct in all test cases. However, collisions are possible for other datasets, since different strings may be mapped to the same hash value. If such cases arise, one can either use multiple hash functions for a better success rate in practice, or use the idea of Hirschberg's algorithm [26] to generate the edit sequence and run a correctness check (and restart with another hash function if it fails).

Longest Common Prefix (LCP). For two sequences \(A[1..n]\) and \(B[1..m]\), the Longest Common Prefix (LCP) query at position \(x\) in \(A[1..n]\) and position \(y\) in \(B[1..m]\) is the longest substring starting from \(A[x]\) that matches a prefix starting from \(B[y]\). When the context is clear, we also use the term "LCP" to refer to the length of the LCP, i.e., \(LCP(A,B,x,y)\) is the length of the longest common prefix starting from \(A[x]\) and \(B[y]\).

Computational Model. We use the _work-span model_ in the classic multithreaded model with _binary-forking_ [2, 8, 9]. We assume a set of threads that share the memory. Each thread acts like a sequential RAM plus a fork instruction that forks two child threads running in parallel. When both child threads finish, the parent thread continues. A parallel-for is simulated by forks for a logarithmic number of steps. A computation can be viewed as a DAG (directed acyclic graph). The _work W_ of a parallel algorithm is the total number of operations, and the _span (depth) S_ is the longest path in the DAG. The randomized work-stealing scheduler can execute such a computation in \(W/P+O(S)\) time _whp_ in \(W\) on \(P\) processors [2, 9, 24].
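As a concrete illustration of the hash functions defined above (the polynomial hashing and the \(\oplus\)/\(\ominus\) operations), a minimal Python sketch is given below. The modulus and base are illustrative choices under the "large prime and modular arithmetic" setting described above, not the constants used in the paper's implementation.

```
# Illustrative polynomial (Rabin-Karp style) hashing with the concatenation and
# inverse operations described above, under a large prime modulus.
MOD = (1 << 61) - 1          # example modulus (a Mersenne prime); illustrative
P = 131                      # example base, larger than the alphabet size

def h(s: str) -> int:
    """h(s) = sum_i s[i] * P^(len-1-i) mod MOD."""
    v = 0
    for c in s:
        v = (v * P + ord(c)) % MOD
    return v

def concat(h1: int, h2: int, len2: int) -> int:
    """h([S1,S2]) = h(S1) * P^|S2| + h(S2)  (the "+" operation)."""
    return (h1 * pow(P, len2, MOD) + h2) % MOD

def inverse(h12: int, h1: int, len2: int) -> int:
    """h(S2) = h([S1,S2]) - P^|S2| * h(S1)  (the "-" operation)."""
    return (h12 - h1 * pow(P, len2, MOD)) % MOD

assert concat(h("abc"), h("de"), 2) == h("abcde")
assert inverse(h("abcde"), h("abc"), 2) == h("de")
```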
Suffix Array. The suffix array (SA) [41] is a lexicographically sorted array of the suffixes of a string, usually used together with the longest common prefix (LCP) array, which stores the length of the LCP between every adjacent pair of suffixes. The SA and LCP array can be built in parallel in \(O(n)\) work and \(O(\log^{2}n)\) span _whp_ [31, 52]. In edit distance, we need the LCP query between \(A[x..n]\) and \(B[y..m]\) for any \(x\) and \(y\). This can be computed by building the SA and LCP arrays for a new string \(C[1..n+m]\) that concatenates \(A[1..n]\) and \(B[1..m]\). The LCP between any pair of suffixes in \(C\) can then be computed by a range minimum query (RMQ) on the LCP array, which can be built in \(O(n+m)\) work and \(O(\log(n+m))\) span [8]. Combining all pieces gives the following theorem:

Given two strings \(A[1..n]\) and \(B[1..m]\), using a suffix array, the longest common prefix (LCP) between any two substrings \(A[x..n]\) and \(B[y..m]\) can be reported in \(O(1)\) work and span, with \(O(n+m)\) preprocessing work and \(O(\log^{2}(n+m))\) span whp.

## 3 BFS-based Algorithms

### Overview of Existing Sequential and Parallel BFS-based Algorithms

Many existing output-sensitive algorithms [18, 19, 20, 21, 25, 34, 35, 36, 45, 46] are based on breadth-first search (BFS). These algorithms view the DP matrix for edit distance as a DAG, as shown in Fig. 1. In this section, we use \(x\) and \(y\) to denote the row and column ids of the cells in the DP matrix, respectively. Each state (cell) \((x,y)\) has three incoming edges from \((x-1,y)\), \((x,y-1)\), and \((x-1,y-1)\) (if they exist). The edge weight from \((x-1,y-1)\) to \((x,y)\) is \(0\) when \(A[x]=B[y]\), and \(1\) otherwise. Then edit distance is equivalent to the shortest distance from \((0,0)\) to \((n,m)\). Since the edge weights are \(0\) or \(1\), we can use a special breadth-first search (BFS) to compute the shortest distance. In round \(t\), we process states with edit distance \(t\). The algorithm terminates when we reach cell \((n,m)\). As first observed by Ukkonen [55], in the BFS-based approach not all states need to be visited. For example, all states with \(|x-y|>k\) will not be reached before we reach \((n,m)\) with edit distance \(k\), since they require more than \(k\) edits. Thus, this BFS will touch at most \(O(kn)\) cells, leading to \(O(kn)\) work. Another key observation is that starting from any cell \((x,y)\), if there are diagonal edges with weight 0, we should always follow the edges until a unit-weight edge is encountered. Namely, we should always find the longest common prefix (LCP) from \(A[x+1]\) and \(B[y+1]\), and skip to the cell at \((x+p,y+p)\) with no edit, where \(p\) is the LCP length. This idea is used in Landau and Vishkin [36] for parallel approximate string matching, and we adapt this idea to edit distance here. Using the modified parallel BFS algorithm by Landau-Vishkin [36] (shown in Alg. 1), only \(O(k^{2})\) states need to be processed--on each diagonal and for each edit distance \(t\), only the last cell with \(t\) edits needs to be processed (see Fig. 1). Hence, the BFS runs for \(k\) rounds on \(2k+1\) diagonals, which gives the \(O(k^{2})\) bound above. In the BFS algorithm, we can label each diagonal by the value of \(x-y\). In round \(t\), the BFS visits a _frontier_ of cells \(f_{t}[\cdot]\), where \(f_{t}[i]\) is the cell with edit distance \(t\) on diagonal \(i\), for \(-t\leq i\leq t\). We present the algorithm in Alg. 1 and an illustration in Fig. 1.
Note that in the implementation, we only need to maintain two frontiers (the previous and the current one), which requires \(O(k)\) space. We provide more details about this algorithm in Appendix A. If the LCP query is supported by suffix arrays, we can achieve \(O(n+k^{2})\) work and \(O(\log n+k\log k)\) span for the edit distance algorithm.

```
f_0[0] ← LCP(A[1..n], B[1..m])                        // Starting point
t ← 0
while f_t[n-m] ≠ n do
    t ← t + 1
    // Find the new frontier f_t[i] for each diagonal i
    parallel-for-each -t ≤ i ≤ t do
        f_t[i] ← f_{t-1}[i]                            // Start from the last cell
        foreach (dx, dy) ∈ {(0,1), (1,0), (1,1)} do
            j ← i - dx + dy
            if |j| ≤ t - 1 then
                x ← f_{t-1}[j] + dx                    // the row id
                y ← x - i                              // the column id
                x ← x + LCP(A[x+1..n], B[y+1..m])      // Skip the common prefix
                f_t[i] ← max(f_t[i], x)                // Keep the largest row id
return t
```
**Algorithm 1** BFS-based parallel edit distance [36]

Algorithm Based on Suffix Array (BFS-SA). Using the SA algorithm in [31] and the LCP algorithm in [52] for Landau-Vishkin gives the claimed bounds in Tab. 1. We present details about our SA implementation in Sec. 5.1.

### Algorithm Based on String Hashing (BFS-Hash)

Although BFS-SA is theoretically efficient with \(O(n)\) preprocessing work to construct the SA, the hidden constant is large. For better performance, we consider string hashing as an alternative to SA. Similar attempts (e.g., locality-sensitive hashing) have also been used in approximate pattern matching problems [42, 43].

Figure 1: BFS-based edit distance on \(A[1..n]\) and \(B[1..m]\). A more detailed description is in Appendix A. \(f_{t}[i]\) is the row id of the last cell on diagonal \(i\) with edit distance \(t\) (frontier \(t\)), representing cell \((f_{t}[i],f_{t}[i]-i)\).

In our pursuit of exact output-sensitive edit distance computation, we draw inspiration from established string hashing algorithms, such as the Rabin-Karp algorithm (also known as rolling hashing) [32]. We will first present a simple hash-based solution, BFS-Hash, with \(O(n)\) preprocessing cost and \(O(n)\) auxiliary space. Then, in Sec. 3.3, we will present BFS-B-Hash, which saves auxiliary space by trading off more work in LCP queries. As mentioned in Sec. 2, the hash function \(h(\cdot)\) maps any substring \(A[l..r]\) to a unique hash value, which provides a fingerprint for this substring in the LCP query. The high-level idea is to binary search the query length, using the hash value as validation. We precompute the hash values for all prefixes, i.e., \(T_{A}[x]=h(A[1..x])\) for the prefix substring \(A[1..x]\) (similarly for \(B\)). They can be computed in parallel using any scan (prefix-sum) operation [6] with \(O(n)\) work and \(O(\log n)\) span. We can then compute \(h(A[l..r])\) as \(T_{A}[r]\ominus T_{A}[l-1]\). With the preprocessed hash values, we use a dual binary search to find the LCP of \(A[x..n]\) and \(B[y..m]\). We compare the hash values starting from \(A[x]\) and \(B[y]\) with chunk sizes of \(1,2,4,8,\dots\), until we find a value \(l\) such that \(A[x..x+2^{l})=B[y..y+2^{l})\), but \(A[x..x+2^{l+1})\neq B[y..y+2^{l+1})\). By doing this with \(O(\log n)\) work, we know that the LCP of \(A[x..n]\) and \(B[y..m]\) must have a length in the range \([2^{l},2^{l+1})\). We then perform a regular binary search in this range, which costs another \(O(\log n)\) work. This indicates \(O(\log n)\) work in total per LCP query.
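To make the scheme concrete, the following sequential Python sketch implements the prefix hash table and the dual binary search LCP query described above, using the modular polynomial hashing of Sec. 2. It is an illustrative single-threaded rendering, not the parallel implementation evaluated in the paper, and the function names are our own.

```
# Sequential sketch of the BFS-Hash LCP query: prefix hashes T[x] = h(A[1..x])
# recover any substring hash, and a doubling phase followed by a binary search
# finds the longest common prefix with O(log n) hash comparisons.
MOD, P = (1 << 61) - 1, 131            # illustrative modulus and base

def prefix_hashes(s: str):
    T = [0] * (len(s) + 1)
    for i, c in enumerate(s, 1):
        T[i] = (T[i - 1] * P + ord(c)) % MOD
    return T

def build_pw(n: int):
    pw = [1] * (n + 1)
    for i in range(1, n + 1):
        pw[i] = pw[i - 1] * P % MOD
    return pw

def substring_hash(T, l, r, pw):       # hash of s[l..r], 1-indexed inclusive
    return (T[r] - T[l - 1] * pw[r - l + 1]) % MOD

def lcp(A, TA, x, B, TB, y, pw):
    """Length of the longest common prefix of A[x..] and B[y..] (1-indexed)."""
    max_len = min(len(A) - x + 1, len(B) - y + 1)
    same = lambda L: substring_hash(TA, x, x + L - 1, pw) == substring_hash(TB, y, y + L - 1, pw)
    hi = 1
    while hi <= max_len and same(hi):  # doubling phase: chunk sizes 1, 2, 4, ...
        hi *= 2
    lo, hi = hi // 2, min(hi, max_len)  # the answer lies in [hi/2, hi]
    while lo < hi:                      # regular binary search in that range
        mid = (lo + hi + 1) // 2
        if same(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

A, B = "parallel", "paralel"
TA, TB = prefix_hashes(A), prefix_hashes(B)
pw = build_pw(max(len(A), len(B)) + 1)
print(lcp(A, TA, 1, B, TB, 1, pw))      # -> 5 ("paral")
```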
Combining the preprocessing and query costs, we present the cost bounds of BFS-Hash: BFS-Hash computes the edit distance between two sequences of length \(n\) and \(m\leq n\) in \(O(n+k^{2}\log n)\) work, \(\tilde{O}(k)\) span, and \(O(n)\) auxiliary space, where \(k\) is the output size (fewest possible edits). BFS-Hash is simple and easy to implement. Our experimental results indicate that its simplicity also allows for reasonably good performance in practice for most real-world input instances. However, this algorithm uses \(n\) 64-bit integers as hash values, and such space overhead may be a concern in practice. This is more pronounced when the input is large and/or the alphabet is small (particularly when each input element can be represented in fewer than eight bits), as the auxiliary space can be much larger than the input size. This concern also holds for BFS-SA as several \(O(n)\)-size arrays are needed during SA construction. Note that for shared-memory parallel algorithms, space consumption is also a _key constraint_--if an algorithm is slow, we can wait longer; but if data (and auxiliary data) do not fit into the memory, then this algorithm is not applicable to large input at all. In this case, the problem size that is solvable by the algorithm is limited by the space overhead, which makes the improvement from parallelism much narrower. Below we will discuss how to make our edit distance algorithms more space efficient. ### Algorithm Based on Blocked-Hashing (BFS-B-Hash) In this section, we introduce our BFS-B-Hash algorithm that provides a more space-efficient solution by trading off worst-case time (work and span). Interestingly, we observed that on many datasets, BFS-B-Hash can even outperform BFS-Hash and the other algorithms due to faster construction time, and we will analyze that in Sec. 6. To achieve better space usage, we divide the strings into blocks of size \(b\). As such, we only need to store the hash values for prefixes of the entire blocks \(h(A[1..b]),h(A[1..2b]),\cdots,h(A[1..\lfloor n/b\rfloor\cdot b])\). Our idea of blocking is inspired by many string algorithms (e.g., [4]). Using this approach, we only need auxiliary space to store \(O(n/b)\) hash values, and thus we can control the space usage using the parameter \(b\). To compute these hash values, we will first compute the hash value for each block, and run a parallel scan (prefix sum on \(\oplus\)) on the hash values for all the blocks. Similar to the above, we refer to these arrays as \(T_{A}[i]=h(A[1..ib])\) (and \(T_{B}[i]\) accordingly), and call them _prefix tables_. We now discuss how to run LCP with only partial hash values available. The _LCP_ function in Alg. 2 presents the process to find the LCP of \(A[x..n]\) and \(B[y..m]\) using the prefix tables. We present an illustration in Fig. 2. We will use the same dual binary search approach to find the LCP of two strings. Since we do not store the hash values for all prefixes, we use a function \(\mathit{GetHash}(A,T_{A},x)\) to compute \(h(A[1..x])\). We can locate the closest precomputed hash value and use \(r\) as the previous block id before \(x\). Then the hash value up to block \(r\) is simply \(\bar{h}=T_{A}[r]\). We then concatenate the remaining characters to the hash value (i.e., return \(\bar{h}\oplus h(A[rb+1])\oplus\cdots\oplus h(A[x])\)). In this way, we can compute the hash value of any prefix for both \(A\) and \(B\), and plug this scheme into the dual binary search in BFS-Hash.
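The paper's pseudocode for the blocked prefix table follows in Alg. 2; to connect it with the sketch above, the variant below stores only one hash per block and rebuilds any prefix hash by appending at most \(b-1\) trailing characters, which is exactly the trade-off analyzed next. Again, this is our own illustration, not the paper's implementation.

```python
# Blocked prefix table (the BFS-B-Hash flavor), reusing the same rolling hash
# as the previous sketch.  Only O(n/b) hash values are stored.
MOD = (1 << 61) - 1     # same constants as in the previous sketch
BASE = 131

def blocked_prefix_table(s, b):
    # T[j] = hash of the first j full blocks, i.e., of s[0 .. j*b).
    T = [0] * (len(s) // b + 1)
    for j in range(1, len(s) // b + 1):
        h = T[j - 1]
        for c in s[(j - 1) * b : j * b]:
            h = (h * BASE + ord(c)) % MOD
        T[j] = h
    return T

def get_hash(s, T, b, x):
    # Hash of the prefix s[0..x), as in GetHash of Alg. 2: start from the last
    # full block ending at or before x, then append the remaining characters.
    r = x // b
    h = T[r]
    for c in s[r * b : x]:
        h = (h * BASE + ord(c)) % MOD
    return h
```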
In each step of the dual binary search, the concatenation of hash values can take at most \(b\) steps, and thus leads to a factor of \(b\) overhead in query time compared to BFS-Hash.

```
 1  // Table construction
 2  Function Construct(A, B)
 3      T_A[.] <- Build(A)
 4      T_B[.] <- Build(B)
    // The prefix table building process
 5  Function Build(A)
 6      w <- floor(|A| / b)
 7      T[0] <- 0
        parallel-for-each j <- 1 to w do
 8          T[j] <- h(A[(j-1)b+1 .. jb])
 9      Scan(T)
10      return T[.]
    // Get the hash value for the prefix sub-sequence A[1..x]
11  Function GetHash(A, T_A, x)
12      if x = 0 then return 0
13      r <- floor(x / b)          // last full block ending at or before x
14      hbar <- T_A[r]
15      for i <- r*b + 1 to x do
16          hbar <- hbar ⊕ h(A[i])
17      return hbar
```

**Algorithm 2** The prefix table for finding the longest common prefix of \(A[1..n]\) and \(B[1..m]\)

BFS-B-Hash computes the edit distance between two sequences of length \(n\) and \(m\leq n\) in \(O(n+k^{2}\cdot b\log n)\) work and \(\tilde{O}(kb)\) span, using \(O(n/b+k)\) auxiliary space, where \(k\) is the output size (fewest possible edits). The term \(k\) in space usage is from the BFS (each frontier is at most size \(O(k)\)). \(O(b\log n)\) is the work for each LCP query. Note that this is an upper bound--if the LCP length \(L\) is small, the cost can be significantly smaller (a tighter bound is \(O(\min(L,b\log L))\)). Sec. 6 will show that for normal input strings where the LCP lengths are small in most queries, the performance of BFS-B-Hash is indeed the fastest, although for certain input instances when the worst case is reached, the performance is not as good. Figure 2: The illustrations of prefix table values and one specific query, with key concepts shown when computing the hash value of a range using a prefix table. ## 4 The Divide-and-Conquer Algorithms Our parallel output-sensitive algorithm DaC-SD is inspired by the AALM algorithm [1], and also uses it as a subroutine. We first overview the AALM algorithm, and then introduce our algorithm in detail. We assume \(m=n\) is a power of \(2\) in this section for simplicity of description, but both our algorithm and AALM work for any \(n\) and \(m\). The AALM Algorithm. As described above, the edit distance problem can be considered as a shortest distance (SD) problem from the top-left cell \((0,0)\) to the bottom-right cell \((n,n)\) in the DP matrix \(G\). Instead of directly computing the SD from \((0,0)\) to \((n,n)\), AALM computes pairwise SD between any cell on the left/top boundaries and the bottom/right boundaries (i.e., those on \(L\cup U\) to \(W\cup R\) in Fig. 3(a)). We relabel all cells in \(L\cup U\) as a sequence \(v=\{v_{0},v_{1},\ldots v_{2n}\}\) (resp., \(W\cup R\) as \(u=\{u_{0},u_{1}\cdots,u_{2n}\}\)), as shown in Fig. 3. Therefore, for the DP matrix \(G\), the pairwise SD between \(v\) and \(u\) forms a \((2n+1)\times(2n+1)\) matrix. We call it the _SD matrix_ of \(G\), and denote it as \(D_{G}\). AALM uses a divide-and-conquer approach. It first partitions \(G\) into four equal submatrices \(G_{1}\), \(G_{2}\), \(G_{3}\), and \(G_{4}\) (see Fig. 3(b)), and recursively computes the SD matrices for all \(G_{i}\). We use \(D_{i}\) to denote the SD matrix for \(G_{i}\). In the "conquer" step, the AALM algorithm uses a _Combine_ subroutine to combine two SD matrices into one if they share a common boundary (our algorithm also uses this subroutine). For example, consider combining \(G_{1}\) and \(G_{2}\).
We still use \(v_{i}\) and \(u_{j}\) to denote the cells on the left/top and bottom/right boundaries of \(\begin{pmatrix}G_{1}\\ G_{2}\end{pmatrix}\) (see Fig. 3(c)), and denote the cells on the common boundary of \(G_{1}\) and \(G_{2}\) as \(w_{1},\cdots,w_{n/2}\), ordered from left to right. For any pair \(v_{i}\) and \(u_{j}\), if they are in the same submatrix, we can directly get the SD from the corresponding SD matrix. Otherwise, WLOG assume \(v_{i}\in G_{1}\) and \(u_{j}\in G_{2}\), then we compute the SD between them by finding \(\min_{l}D_{1}[i,l]+D_{2}[l,j]\), i.e., for all \(w_{l}\) on the common boundary, we attempt to use the SD from \(v_{i}\) to \(w_{l}\) and from \(w_{l}\) to \(u_{j}\), and find the minimum one. Similarly, we can combine \(D_{3}\) with \(D_{4}\), and \(D_{1\cup 2}\) with \(D_{3\cup 4}\), and eventually get \(D_{G}\). We note that the _Combine_ algorithm, even theoretically, is highly involved. At a high level, it uses the Monge property of the shortest distance (the monotonicity of the DP recurrence), and we refer the readers to [1] for a detailed algorithm description and theoretical analysis. In Sec. 5.2, we highlight a few challenges and our solutions for implementing this highly complicated algorithm. Theoretically, combining two \(n\times n\) SD matrices can be performed in \(O(n^{2})\) work and \(O(\log^{2}n)\) span, which gives \(O(n^{2}\log n)\) work and \(O(\log^{3}n)\) span for AALM.

Figure 3: The illustrations of the key concepts and notation in the AALM algorithm described in Sec. 4.

Our algorithm. The AALM algorithm has \(\tilde{O}(n^{2})\) work (\(\tilde{O}(nm)\) if \(n\neq m\)) and polylogarithmic span, which is inefficient in the output-sensitive setting. As mentioned in Sec. 3.1, only a narrow width-\(O(k)\) diagonal area in \(G\) is useful (Fig. 3(d)). We thus propose an output-sensitive DaC-SD algorithm adapted from the AALM algorithm. We follow the same steps in AALM, but restrict the paths to the diagonal area, although the exact size is unknown ahead of time. We first present the algorithm to compute the shortest distance on the diagonal region with width \(2t+1\) as function \(\mathit{Check}(t)\) in Alg. 3, which restricts the search to diagonals \(-t\) to \(t\). First, we divide such a region into four sub-regions (see Fig. 3(d)). Two of them (\(G_{1}\) and \(G_{4}\)) are of the same shape, and the other two (\(G_{2}\) and \(G_{3}\)) are triangles. For \(G_{2}\) and \(G_{3}\), we use the AALM algorithm to compute their SD matrices by aligning them to squares. For \(G_{1}\) and \(G_{4}\), we process them recursively, until the base case where the edge length of the matrix is smaller than \(t\) and they degenerate to squares, in which case we apply the AALM algorithm. Note that even though the width-(\(2t+1\)) diagonal stripe is not a square (\(G_{1}\) and \(G_{4}\) are also of the same shape), the useful boundaries are still the left/top and bottom/right boundaries (\(L\cup U\) and \(W\cup R\) in Fig. 3(d)). Therefore, we can use the same _Combine_ algorithm as in AALM to combine the SD matrices. For example, in Fig. 3(d), when combining \(G_{1}\) with \(G_{2}\), we obtain the pairwise distance between \(L\cup U\) and \(R\cup R^{\prime}\) using the common boundary \(W\). We can similarly combine all \(G_{1},G_{2},G_{3}\), and \(G_{4}\) to get the SD matrix for \(G\). However, the output value \(k\) is unknown before we run the algorithm.
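To make the role of _Combine_ concrete, here is a deliberately naive sketch of the cross-boundary case just described, i.e., \(\min_{l}D_{1}[i,l]+D_{2}[l,j]\) over the common boundary; it runs in cubic work and is for illustration only, since the paper's actual Combine exploits the Monge property to achieve \(O(n^{2})\) work and also handles pairs lying in the same submatrix. Names are ours.

```python
# Illustration only (not the paper's Combine): the cross-boundary part of the
# combine step is a min-plus product over the common boundary W.
#   D1[i][l] = SD from boundary cell v_i to common-boundary cell w_l
#   D2[l][j] = SD from w_l to boundary cell u_j
INF = float("inf")

def combine_naive(D1, D2):
    nv, nw = len(D1), len(D2)
    nu = len(D2[0]) if nw > 0 else 0
    out = [[INF] * nu for _ in range(nv)]
    for i in range(nv):
        for l in range(nw):
            if D1[i][l] == INF:
                continue
            for j in range(nu):
                cand = D1[i][l] + D2[l][j]
                if cand < out[i][j]:
                    out[i][j] = cand
    return out
```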
To overcome this issue, we use a strategy based on prefix doubling to "binary search" the value of \(k\) without asymptotically increasing the work of the algorithm. We start with \(t=1\), and run the \(\mathit{Check}(t)\) in Alg. 3 (i.e., restricting the search in a width-(\(2t+1\)) diagonal). Assume that the \(\mathit{Check}\) function returns \(\sigma\) edits. If \(\sigma\leq t\), we know that \(\sigma\) is the SD from \((0,0)\) to \((n,n)\), since allowing the path to go out of the diagonal area will result in an answer greater than \(t\). Otherwise, we know \(\sigma>t\), and \(\sigma\) is not necessarily the shortest distance from the \((0,0)\) to \((n,n)\), since not restricting the path in the \(t\)-diagonal area may allow for a shorter path. If so, we double \(t\) and retry. Although we need \(O(\log k)\) searches before finding the final answer \(k\), we will show that the total search cost is asymptotically bounded by the last search. In the last search, we have \(t<2k\). We first analyze the cost for \(\mathit{Check}(t)\). It contains two recursive calls, two calls to AALM, and three calls to the _Combine_ function. Therefore, the work for \(\mathit{Check}(t)\) is \(W(n)=2W(n/2)+O(t^{2}\log t)\), with base cases \(W(t)=t^{2}\log t\), which solves to \(W(n)=O(nt\log t)\). For span, note that there are \(\log(n/t)\) levels of recursion before reaching the base cases. In each level, the _Combine_ function combines \(t\times t\) SD matrices with \(O(\log^{2}t)\) span. In the leaf level, the base case uses AALM with \(O(\log^{3}t)\) span. Therefore, the total span of a _Check_ is: \[O(\log n/t\cdot\log^{2}t+(\log n/t+\log^{3}t))=O(\log^{2}t\cdot(\log n/t+\log t) )=O(\log^{2}t\log n) \tag{1}\] We will apply \(\textit{Check}(\cdot)\) for \(O(\log k)\) times, with \(t=1,2,4,\dots\) up to at most \(2k\). Therefore, the total work is dominated by the last _Check_, which is \(O(nk\log k)\). The span is \(O(\log n\log^{3}k)\). The _DaC-SD algorithm computes the edit distance between two sequences of length \(n\) and \(m\leq n\) in \(O(nk\log k)\) work and \(O(\log n\log^{3}k)\) span, where \(k\) is the output size (fewest possible edits). Compared to the BFS-based algorithms with \(\tilde{O}(k)\) span, our DaC-SD is also output-sensitive and achieves polylogarithmic span. However, the work is \(\tilde{O}(kn)\) instead of \(\tilde{O}(n+k^{2})\), which will lead to more running time in practice for a moderate size of \(k\). ## 5 Implementation Details We provide all implementations for the four algorithms as well as testing benchmarks at [16]. In this section, we highlight some interesting and challenging parts of our implementations. ### Implementation Details of BFS-based Algorithms For the suffix array construction in BFS-SA, we implemented a parallel version of the DC3 algorithm [31]. We also compared our implementation with the SA implementation in ParlayLib [7], which is a highly optimized version of the prefix doubling algorithm with \(O(n\log n)\) work and \(O(\log^{2}n)\) span. On average, our implementation is about \(2\times\) faster than that in ParlayLib when applied to edit distance. We present some results for their comparisons in Tab. 5 in the appendix. For LCP array construction and preprocessing RMQ queries, we use the implementation in ParlayLib [7], which requires \(O(n\log n)\) work and \(O(\log^{2}n)\) span. With them, the query has \(O(1)\) cost. 
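Referring back to the prefix-doubling search of Sec. 4, the driver around \(\mathit{Check}(t)\) is tiny; the sketch below (ours) assumes a `check(t)` routine that returns the shortest distance restricted to the width-\((2t+1)\) diagonal stripe.

```python
# Schematic prefix-doubling driver for DaC-SD (names are ours);
# check(t) computes the shortest distance restricted to diagonals -t..t.
def dac_sd(check):
    t = 1
    while True:
        sigma = check(t)
        if sigma <= t:       # a path with at most t edits never leaves the stripe,
            return sigma     # so sigma is the true edit distance
        t *= 2
```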
In our experiments on both synthetic and real-world data, we observed that the LCP length is either very large when we find two long matched chunks, or in most of the cases, very short when they are not corresponding to each other. This is easy to understand--for genomes, text or code with certain edit history, it is unlikely that two random starting positions share a large common prefix. Based on this, we add a simple optimization for all LCP implementations such that we first compare the leading eight characters, and only when they all match, we use the regular LCP query. This simple optimization greatly improved the performance of BFS-SA, and also slightly improved the hash-based solutions. ### Implementation Details of the DaC-SD Algorithm Although our DaC-SD algorithm given in Alg. 3 is not complicated, we note that implementing it is highly non-trivial in two aspects. First, in Sec. 4, we assume both strings \(A\) and \(B\) have the same length \(n\), which is a power of two. However, handling two strings with different lengths makes the matrix partition more complicated in practice. Another key challenge is that the combining step in the AALM algorithm is recursive and needs to allocate memory with varying sizes in the recursive execution. While memory allocation is mostly ignored theoretically, frequent allocation in practice can easily be the performance bottleneck in the parallel setting. We discuss our engineering efforts as follows. Irregularity.The general case, when \(n\) and \(m\) are not powers of two and not the same, is more complicated than the case in Alg. 3. In this case, all four subproblems \(G_{1}\), \(G_{2}\), \(G_{3}\), and \(G_{4}\) will have different sizes. While theoretically, we can always round up, for better performance in practice, we need to introduce additional parameters to restrict the search within the belt region as shown in Fig. 4. Therefore, we use two parameters \(t_{1}\) and \(t_{2}\), to denote the lengths of the diagonal area on each side. We show an illustration in Fig. 4(a) along with how to compute the subproblem sizes. In extreme cases, \(t_{1}\) or \(t_{2}\) can degenerate to \(0\), which results in three subproblems (Fig. 4(b)). In such cases, we will first merge \(G_{2}\) and \(G_{4}\), then merge \(G_{1}\) and \(G_{2\cup 4}\). The Combining Step.As mentioned in Sec. 4, achieving an efficient combining step is highly non-trivial. The straightforward solution to combine two matrices is to use the Floyd-Warshall algorithm [17], but it incurs \(O(n^{3})\) work and will be a bottleneck. The AALM algorithm improves this step to \(O(n^{2})\) by taking advantage of the Monge property of the two matrices. For page limit, we introduce the details of the combining algorithm in Appendix B.1. However, the original AALM algorithm is based on divide-and-conquer and requires memory allocation for every recursive function call. This is impractical as frequent parallel memory allocation is extremely inefficient. To overcome this challenge, we redesign the recursive solution to an iterative solution, such that we can preallocate the memory space before the combining step. No dynamic memory allocation is involved during the computation. We provide the details of this approach in Appendix B.2. ## 6 Experiments Setup.We implemented all algorithms in C++ using ParlayLib [7] for fork-join parallelism and some parallel primitives (e.g., reduce). Our tests use a 96-core (192 hyperthreads) machine with four Intel Xeon Gold 6252 CPUs, and 1.5 TB of main memory. 
We utilize numactl -i all in tests with more than one thread to spread the memory pages across CPUs in a round-robin fashion. We run each test three times and report the median. Tested Algorithms and Datasets. We tested five algorithms in total: four output-sensitive algorithms in this paper (BFS-SA, BFS-Hash, BFS-B-Hash, DaC-SD), and a baseline algorithm from ParlayLib [7], which is a parallel output-insensitive implementation with \(O(nm)\) work. The ParlayLib implementation is intended to showcase the simplicity of parallel algorithms, and as a result, it may not be well-optimized. We are unaware of other parallel implementations that provide output-sensitive cost bounds.

\begin{table} \begin{tabular}{c c c c c c} **Data** & **Alias** & \(|\mathbf{A}|\) & \(|\mathbf{B}|\) & \(\mathbf{k}\) & \(|\mathbf{\Sigma}|\) \\ \hline Wikipedia & Wiki v1 & 0.56M & 0.56M & 439 & 256 \\ pages [44] & Wiki v2 & 0.56M & 0.56M & 5578 & 256 \\ & Wiki v3 & 0.56M & 0.55M & 15026 & 256 \\ \hline Linux kernel & Linux v1 & 6.47M & 6.47M & 236 & 256 \\ code [39] & Linux v2 & 6.47M & 6.47M & 1447 & 256 \\ & Linux v3 & 6.47M & 6.46M & 9559 & 256 \\ \hline DNA & DNA 1 & 42.3M & 42.3M & 928 & 4 \\ sequences [5] & DNA 2 & 42.3M & 42.3M & 9162 & 4 \\ & DNA 3 & 42.3M & 42.3M & 91419 & 4 \\ \hline \end{tabular} \end{table} Table 2: Real-world datasets in our experiments, including input sizes \(|A|\) and \(|B|\), number of edits \(k\), and alphabet sizes \(|\Sigma|\).

Figure 4: The illustrations of our output-sensitive DaC-SD algorithm. (a) Two parameters \(t_{1}\) and \(t_{2}\) are needed to denote the lengths of the diagonal area on each side. (b) The case that \(t_{2}=0\) and \(G_{3}\) degenerates.

We use \(b=32\) for our BFS-B-Hash. As we will show later, the running time is generally stable with \(4\leq b\leq 64\). We tested the algorithms on both synthetic and real-world datasets. For synthetic datasets, we generate random strings with different string lengths \(n=10^{i}\) for \(6\leq i\leq 9\) and \(k\) (number of edits) varying from \(1\) to \(10^{5}\), and set the size of the alphabet as \(256\). We create strings \(A\) and \(B\) by generating \(n\) random characters, and applying \(k\) edits. The \(k\) edits are uniformly random for insertion, deletion and substitution. For \(k\ll n\), we have \(m\approx n\). All values of \(k\) shown in the figures and tables are approximate values. Our real-world datasets include Wikipedia [44], Linux kernel [39], and DNA sequences [50]. We compare the edit distance between history pages on Wikipedia and history commits of a Linux kernel file on GitHub. We also compare DNA sequences by adding valid modifications to them to simulate DNA damage or genome editing techniques, as is used in many existing papers [11, 13, 29, 56]. We present the statistics of the real-world datasets in Tab. 2. Overall Performance on Synthetic Data. We present our results on synthetic data in the upper part of Fig. 5. We also present the complete results in Tab. 4. For BFS-based algorithms, we also separate the time for _building_ the data structures for LCP queries, and the _query_ time (the BFS process). ParlayLib cannot process instances with \(n>10^{6}\) due to its \(O(nm)\) work bound.

Figure 5: Running time (in seconds) of synthetic and real-world datasets for all algorithms. Lower is better. We put an “\(\times\)” if the algorithm does not finish within \(1000\) seconds.
For BFS-based algorithms, we separate the time into building time (constructing the data structure for LCP queries) and query time (running BFS). All bars out of the range of the y-axis are annotated with numbers. The number is the total running time for DAC-SD and ParlayLib, and is in the format of \(a+b\) for BFS-SA, where \(a\) is the building time and \(b\) is the query time. Full results are presented in Tab. 4 in the appendix. We first _compare our solutions with ParlayLib_[7]. Since ParlayLib is not output-sensitive, its running time remains the same regardless of the value of \(k\). Among the tests that ParlayLib can process (\(n=10^{6}\)), our output-sensitive algorithms are much faster than ParlayLib, especially when \(k\) is small (up to \(10^{5}\times\)). For \(n=10^{6}\), all our BFS-based algorithms are at least \(1.7\times\) faster than ParlayLib even when \(k\approx n/10\). We then _compare our DaC- and BFS-based solutions._DaC-SD has the benefit of polylogarithmic span, compared to \(\tilde{O}(k)\) span for the BFS-based algorithm. Although this seems to suggest that DAC-SD should have better performance when \(k\) is large, the result shows the opposite. The reason is that DaC-SD has \(\tilde{O}(nk)\) work, compared to \(\tilde{O}(n+k^{2})\) cost of the BFS-based algorithms. When \(k\) becomes larger, the overhead in work is also more significant. On the other hand, when \(k\) is small, the \(O(nk)\) work becomes linear, which hides the inefficiency in work. Therefore, the gap between DaC-SD and other algorithms is smaller when \(k\) is small, but DaC-SD is still slower than BFS-based algorithms in all test cases, especially when \(k\) is large. This experiment reaffirms the _importance of work efficiency_ on practical performance for parallel algorithms. Finally, we _compare all our BFS-based solutions._ Our hash-based solutions have significant advantages over the other implementations when \(k\) is small, since the pre-processing time for hash-based solutions is much shorter. When \(k\) is large, pre-processing time becomes negligible, and BFS-Hash seems to be the ideal choice since its query is also efficient. In particular, for \(n\approx m\approx 10^{9}\), hash-based algorithms use about 1 second for pre-processing while BFS-SA uses about 100 seconds. Although BFS-SA also has \(O(n)\) construction time, the constant is much larger and its memory access pattern is much worse than the two hash-based solutions. We note that in some cases, the query time of BFS-SA can still be faster than BFS-Hash and BFS-B-Hash, especially when \(k\) is large, which is consistent with the theory (\(O(1)\) vs. \(O(\log n)\) or \(O(b\log n)\) per LCP query). In theory, BFS-B-Hash reduces space usage in BFS-Hash by increasing the query time. Interestingly, when \(k\) is small, BFS-B-Hash can also be faster than BFS-Hash by up to \(2.5\times\). This is because BFS-B-Hash incurs fewer writes (and thus smaller memory footprints) in preprocessing that leads to faster building time. When \(k\) is small, the running time is mostly dominated by the building time, and thus BFS-B-Hash can perform better. When \(k\) is relatively large and \(k^{2}\) is comparable to \(n\), BFS-Hash becomes faster than BFS-B-Hash due to better LCP efficiency. In fact, when \(k\) is large, the running time is mainly dominated by the query (BFS), and all three algorithms behave similarly. 
It is worth noting that in these experiments with \(|\Sigma|=256\) and random edits, in most of the cases, the queried LCP is small. Therefore, the \(O(\log n)\) or \(O(b\log n)\) query time for BFS-Hash and BFS-B-Hash are not tight, and they have much better memory access patterns than BFS-SA in LCP queries. As a result, they can have matching or even better performance than BFS-SA. Later we will show that under certain input distributions where the average LCP length is large, BFS-SA can have some advantage over both BFS-Hash and BFS-B-Hash. Real-World Datasets.We now analyze how our algorithms perform on real-world string and edit patterns. The results are shown in the lower part of Fig. 5. The results are mostly consistent with our synthetic datasets, where BFS-B-Hash is more advantageous when \(k\) is small, and BFS-Hash performs the best when \(k\) is large. When \(k\) is large, BFS-SA can also have comparable performance to the hash-based solutions. LCP Length vs. Performance.It seems that for both synthetic and real-world data shown above, our hash-based solutions are always better than BFS-SA. It is worth asking, whether BFS-SA can give the best performance in certain cases, given that it has the best theoretical bounds (see Tab. 1). By investigating the bounds carefully, BFS-SA has better LCP query cost as \(O(1)\), while the costs for BFS-Hash and BFS-B-Hash are \(O(\log L)\) and \(O(b\log L)\), respectively, where \(L\) is the LCP length. This indicates that BFS-SA should be advantageous when \(k\) and \(L\) are both large. To verify this, we artificially created input instances with medium to large values of \(k\) and controlled average LCP query lengths, and showed the results in Fig. 6 on two specific settings. The experimental result is consistent with the theoretical analysis. The running time for BFS-Hash increases slowly with \(L\), while the performance of BFS-B-Hash grows _much faster_, since it is affected by a factor of \(O(b)\) more than BFS-Hash. The query time for BFS-SA almost stays the same, but also increases slightly with increasing \(L\). This is because in general, with increasing \(L\), the running time for all three algorithms may increase slightly due to worse cache locality in BFS due to more long matches. In Figure 6(a), the building time for both BFS-Hash and BFS-B-Hash are negligible, while BFS-SA still incurs significant building time. Even in this case, with an LCP length of 300, the query time of the hash-based solutions still becomes larger than the _total_ running time of BFS-SA. In Figure 6(b) with a larger \(k\), the building time for all three algorithms is negligible. In this case, BFS-SA always has comparable performance with BFS-Hash, and may perform better when \(L>20\). However, such extreme cases (both \(k\) and \(L\) are large) should be very rare in real-world datasets - when \(k\) is large enough so that the query time is large enough to hide SA's building time, \(L\) is more likely to be small, which in turn is beneficial for the query bounds in hash-based solutions. Indeed such cases did not appear in our 33 tests on both synthetic and real data. Parallelism.We test the self-relative speedup of all algorithms. We present speedup numbers on two representative tests with different values of \(n\) and \(k\) in Tab. 3. For BFS-based algorithms, we separate the speedup for building and query. All our algorithms are highly parallelized. 
Even though BFS-SA and DAC-SD have a longer running time, they still have a 48-68\(\times\) speedup, indicating good scalability. Our BFS-Hash algorithm has about 40-50\(\times\) speedup in building, and BFS-B-Hash has a lower but decent speedup of about 20-40\(\times\). When \(k\) is small, the frontier sizes (and the total work) of BFS are small, and the running time is also negligible. In this case, we cannot observe meaningful speedup. For larger \(k=10^{5}\), three BFS-based algorithms achieve 27-48\(\times\) speedup both in query and entire edit distance algorithm. Space Usage.We study the time-space tradeoff of our BFS-B-Hash with different block sizes \(b\). We present the _auxiliary space_ used by the prefix table in BFS-B-Hash along with running time in Fig. 7 using one test case with \(n=10^{8}\) and \(k=10^{5}\) in our synthetic dataset. The dotted line shows the input size. Note that when \(b=1\), it is exactly BFS-Hash. Since the inputs are 8-bit characters and the hash values are 64-bit integers, BFS-Hash incurs \(8\times\) space overhead than the input size. Using blocking, we can avoid such overhead and keep the auxiliary space even lower than the input. The auxiliary space decreases linearly with the block size \(b\). Interestingly, although blocking itself incurs time overhead, the impact in time is small: the time grows by \(1.19\times\) from \(b=1\) to \(2\), and grows by \(1.08\times\) from \(b=2\) to \(64\). This is mostly due to two reasons: 1) as mentioned, with 8-bit character input type and random edits, the average LCP length is likely short and within the first block, and therefore the query costs in both approaches are close to \(O(L)\) for LCP length \(L\), and 2) the extra factor of \(b\) in queries (Line 17) is mostly cache hits (consecutive locations in an array). This illustrates the benefit of using blocking in such datasets, since blocking saves much space while only increasing the time by a small fraction. ## 7 Conclusion and Discussions We proposed output-sensitive parallel algorithms for the edit-distance problem, as well as careful engineering of them. We revisited the BFS-based Landau-Vishkin algorithm. In addition to using SA as is used in Landau-Vishkin (our BFS-SA implementation), we also designed two hash-based data structures to replace the SA for more practical LCP queries (BFS-Hash and BFS-B-Hash). We also presented the first output-sensitive parallel algorithm based on divide-and-conquer with \(\tilde{O}(nk)\) work and polylogarithmic span. We have also shown the best of our engineering effort on this algorithm, although its performance seems less competitive than other candidates due to work inefficiency. We implemented all these algorithms and tested them on synthetic and real-world datasets. In summary, our BFS-based solutions show the best overall performance on datasets with real-world edits or random edits, due to faster preprocessing time and better I/O-friendliness. BFS-Hash performs the best in time when \(k\) is large. BFS-B-Hash has better performance when \(k\) is small. The blocking scheme also greatly improves space efficiency without introducing much overhead in time. In very extreme cases where both \(k\) and the LCP lengths are large, BFS-SA can have some advantages over the hash-based solutions, while BFS-B-Hash can be much slower than BFS-Hash. However, such input patterns seem rare in the real world. 
All our BFS-based solutions perform better than the output-insensitive solution in ParlayLib, and the DaC-based solution with \(\tilde{O}(nk)\) work and polylogarithmic span, even for large \(k>\sqrt{n}\). The results also imply the importance of work efficiency in parallel algorithm designs, consistent with the common belief in the literature [51, 23]. Because the number of cores in modern multi-core machines is small (usually hundreds to thousands) compared to the problem size, an algorithm is less practical if it blows up the work significantly, as parallelism cannot compensate for the performance loss due to larger work. \begin{table} \begin{tabular}{c c|c c c|c c c|c c c|c} \multirow{2}{*}{\(n\)} & \multirow{2}{*}{\(k\)} & \multicolumn{3}{c|}{BFS-B-Hash} & \multicolumn{3}{c|}{BFS-Hash} & \multicolumn{3}{c|}{BFS-SA} & \multicolumn{1}{c}{**DaC-SD**} \\ & & **Build** & **Query** & **Total** & **Build** & **Query** & **Total** & **Build** & **Query** & **Total** & **Total** \\ \hline \(10^{8}\) & \(10\) & 20.4 & - & 19.9 & 46.6 & - & 46.5 & 49.6 & - & 49.4 & 68.2 \\ \(10^{9}\) & \(10^{5}\) & 24.2 & 36.4 & 36.3 & 42.7 & 46.8 & 46.6 & 51.2 & 27.1 & 48.3 & t.o. \\ \end{tabular} \end{table} Table 3: Self-relative speedup of each implementation in each step. “Build” = constructing the data structure for LCP queries. “Query” = the BFS process. “t.o.” = timeout. We omit query speedup when \(k=10\) because there is little parallelism to be explored for BFS with small \(k\), and the BFS time is also small and hardly affects the overall speedup. 192 hyperthreads are used for parallel executions.
2309.16844
DeBERTinha: A Multistep Approach to Adapt DebertaV3 XSmall for Brazilian Portuguese Natural Language Processing Task
This paper presents an approach for adapting the DebertaV3 XSmall model pre-trained in English for Brazilian Portuguese natural language processing (NLP) tasks. A key aspect of the methodology involves a multistep training process to ensure the model is effectively tuned for the Portuguese language. Initial datasets from Carolina and BrWac are preprocessed to address issues like emojis, HTML tags, and encodings. A Portuguese-specific vocabulary of 50,000 tokens is created using SentencePiece. Rather than training from scratch, the weights of the pre-trained English model are used to initialize most of the network, with random embeddings, recognizing the expensive cost of training from scratch. The model is fine-tuned using the replaced token detection task in the same format of DebertaV3 training. The adapted model, called DeBERTinha, demonstrates effectiveness on downstream tasks like named entity recognition, sentiment analysis, and determining sentence relatedness, outperforming BERTimbau-Large in two tasks despite having only 40M parameters.
Israel Campiotti, Matheus Rodrigues, Yuri Albuquerque, Rafael Azevedo, Alyson Andrade
2023-09-28T20:53:25Z
http://arxiv.org/abs/2309.16844v2
# DeBERTinha: A Multistep Approach to Adapt DebertaV3 XSmall for Brazilian Portuguese Natural Language Processing Tasks ###### Abstract This paper presents an approach for adapting the DebertaV3 XSmall model pre-trained in English for Brazilian Portuguese natural language processing (NLP) tasks. A key aspect of the methodology involves a multi-step training process to ensure the model is effectively tuned for the Portuguese language. Initial datasets from Carolina and BrWac are preprocessed to address issues like emojis, HTML tags, and encodings. A Portuguese-specific vocabulary of 50,000 tokens is created using SentencePiece. Rather than training from scratch, the weights of the pre-trained English model are used to initialize most of the network, with random embeddings, recognizing the expensive cost of training from scratch. The model is fine-tuned using the replaced token detection task in the same format as DebertaV3 training. The adapted model, called DeBERTinha, demonstrates effectiveness on downstream tasks like named entity recognition, sentiment analysis, and determining sentence relatedness, outperforming BERTimbau-Large in two tasks despite having only 40M parameters. ## 1 Introduction In this study, we leverage the advancements introduced by the DebertaV3 model, which currently offers only a large Portuguese variant with 900M parameters. From the perspective of DebertaV3, we aim to harness the merits of its architectural excellence and craft a Portuguese version. When compared to other Portuguese encoder Transformers, like BERTimbau, DeBERTinha holds a distinctive advantage with its considerably reduced parameter count. Despite its reduced size, DeBERTinha manages to outperform BERTimbau in two out of four selected NLP tasks, all while being approximately five times more compact. The substantial advantage of a diminished parameter count becomes especially evident in corporate applications, as it demands fewer computational resources for predictive tasks and offers the benefit of reduced response times. Our construction steps begin with the acquisition of essential datasets, including the Carolina and BrWac datasets, which serve as the foundation for pre-training. These datasets, however, present challenges such as the presence of emojis, HTML tags, and non-standard characters. To address these issues, we employ emoji removal, HTML tag cleansing using ftfy, and the normalization of UTF-8 encodings. To create a tailored vocabulary for Portuguese, we utilize the Carolina dataset and employ the SentencePiece algorithm, resulting in 50 thousand tokens. These tokens form the basis for creating our tokenizer, a crucial component for text processing. Subsequently, we merge the Carolina and BrWac datasets to create a unified dataset for pre-training. Tokenization is a pivotal step in NLP, and our next step involves tokenizing the merged dataset. To ensure compatibility with model input limitations, we divide texts into smaller pieces, each containing a maximum of 510 tokens. This is further complemented by the addition of special tokens, [CLS] at the beginning and [SEP] at the end, resulting in a 512-token structure. One unique aspect of our approach involves the initialization of the model. While we start with random embeddings, the rest of the model benefits from the weights of the DebertaV3 XSmall model pre-trained in English. This approach is based on the recognition that training from scratch in Portuguese can be prohibitively expensive.
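As a rough, self-contained illustration of the steps above (this is not the authors' released code), the sketch below trains a SentencePiece vocabulary on the cleaned corpus, chunks token ids into 510-token pieces with [CLS]/[SEP] added, and loads the English DeBERTa-V3 XSmall weights while re-initializing the embedding matrix for the new vocabulary. File paths, the `character_coverage` setting and other hyperparameters are placeholders, and the actual RTD pre-training with a generator and discriminator is not shown here.

```python
# Rough sketch of the adaptation steps described above (not the authors' code).
import sentencepiece as spm
import torch
from transformers import AutoModelForMaskedLM, DebertaV2Tokenizer

# 1) Train a 50k-token Portuguese vocabulary on the cleaned Carolina text.
spm.SentencePieceTrainer.train(
    input="carolina_clean.txt",        # placeholder path to the preprocessed corpus
    model_prefix="debertinha_sp",
    vocab_size=50_000,
    character_coverage=0.9995,         # placeholder setting
)
tokenizer = DebertaV2Tokenizer(vocab_file="debertinha_sp.model")

# 2) Chunk token ids into 510-token pieces and add [CLS] ... [SEP] (512 total).
def chunk_ids(token_ids, max_len=510):
    for i in range(0, len(token_ids), max_len):
        piece = token_ids[i : i + max_len]
        yield [tokenizer.cls_token_id] + piece + [tokenizer.sep_token_id]

# 3) Start from the English DeBERTa-V3 XSmall weights, but re-initialize the
#    embedding matrix for the new Portuguese vocabulary (random embeddings).
model = AutoModelForMaskedLM.from_pretrained("microsoft/deberta-v3-xsmall")
model.resize_token_embeddings(len(tokenizer))
with torch.no_grad():
    emb = model.get_input_embeddings()
    emb.weight.normal_(mean=0.0, std=model.config.initializer_range)
```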
For model pre-training, we leverage the combinations of Masked Language Modeling (MLM) and Replaced Token Detection (RTD) losses with a hyperparameter \(\lambda\) within the same mathematical framework explained in [1]. To ensure the preservation of model progress, we save checkpoints for both the generator and discriminator models. These checkpoints play a crucial role in subsequent fine-tuning tasks. In the latter part of this article, we demonstrate the versatility of our adapted model by using the saved weights for Named Entity Recognition (NER) and classification tasks. Our experimentation extends to the ASSIN2 dataset, where we evaluate the model's ability to determine the relatedness of two sentences. Our entire multi-step approach can therefore offer a simple methodology for adapting pre-trained models to new languages, with a specific focus on Brazilian Portuguese. Through this work, we contribute to the growing body of research in the field of NLP, offering insights into effective cross-lingual adaptation and showcasing the versatility of DebertaV3 XSmall in addressing various NLP tasks in the Portuguese language context. ## 2 Related Work When tackling tasks like replace token detection, where the model needs to accurately predict missing words or phrases within a given context, language-specific models tend to reach better results than multi language models. Similarly, [2] exhibited superior performance across tasks and settings by training on Catalan, a moderately under-resourced language. In [3], a model with 10 billion parameters trained primarily on a Chinese corpus showcased significant outperformance on 54 Chinese NLP tasks. Exploring the feasibility of training monolingual Transformer-based language models in French, [4] demonstrated comparable results using a 4GB web-crawled dataset, rivaling outcomes from larger datasets that typically exceed 130GB. While multilingual models yield impressive results, they tend to have larger sizes. In specific instances, such as Portuguese, their performance can fall short compared to monolingual counterparts [5], especially for high-resource languages. Unfortunately, this limitation has constrained the availability of these cutting-edge models primarily to English in the monolingual context, causing inconvenience by impeding their practical application in NLP systems. Furthermore, it hampers exploration into their language modeling capabilities, particularly concerning morphologically rich languages like Portuguese, [6] address those details in the context of large language models. This specialization allows those monolingual models to capture the nuances, idioms, and domain-specific terms unique to a language more effectively. In replace token detection, having a precise vocabulary is crucial for accurately predicting missing tokens. In our current study, we adopt a specific methodology to leverage the capabilities of the pre-trained language model known as DeBERTaV3, as outlined in the research [1]. This particular model represents an advancement over the original DeBERTa model introduced by [7]. Notably, this improvement is achieved by departing from the conventional masked language modeling (MLM) approach and instead embracing replaced token detection (RTD). This RTD methodology is rooted in the concept elucidated in [8], where the core technique revolves around the utilization of two transformer encoders - a generator and a discriminator - within the framework of a replaced token detection task. 
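Purely as a schematic of how the two objectives interact (this is simplified and is not the authors' or the original DeBERTaV3 training code), the generator is trained with an MLM loss, the discriminator with an RTD loss over a generator-corrupted sequence, and the two are mixed with the weight \(\lambda\):

```python
# Schematic MLM + RTD loss combination (simplified; masking, sampling and the
# embedding-sharing details of DeBERTaV3 are omitted).
import torch
import torch.nn.functional as F

def pretraining_step(generator, discriminator, input_ids, masked_ids, mask, lam=50.0):
    # mask: boolean tensor, True at positions masked out for the generator.
    # lam:  the lambda hyperparameter weighting the RTD loss.

    # Generator loss: masked language modeling on the masked positions only.
    gen_logits = generator(masked_ids).logits            # (batch, seq, vocab)
    loss_mlm = F.cross_entropy(gen_logits[mask], input_ids[mask])

    # Corrupt the sequence with the generator's predictions (no gradient flow).
    with torch.no_grad():
        sampled = gen_logits.argmax(dim=-1)
    corrupted = input_ids.clone()
    corrupted[mask] = sampled[mask]

    # Discriminator loss: token-level "was this token replaced?" classification.
    replaced = (corrupted != input_ids).float()
    disc_logits = discriminator(corrupted).logits.squeeze(-1)   # (batch, seq)
    loss_rtd = F.binary_cross_entropy_with_logits(disc_logits, replaced)

    return loss_mlm + lam * loss_rtd
```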
Within the basic conceptual structure of DeBERTaV3, a noteworthy architectural feature involves the sharing of embeddings between the generator and the discriminator. This departure from the conventional model setup, where embeddings are handled independently, is achieved through a method referred to as gradient-disentangled embedding sharing (GDES). In this approach, the generator and the discriminator share embeddings, but a critical distinction is that the flow of gradients from the discriminator to the generator embeddings is intentionally halted. ## 3 Data sets To train language models that can compete with other state-of-the-art models, a substantial volume of data is indispensable. In the context of Brazilian Portuguese, we can draw attention to BERTimbau [5], which harnessed the brWAC dataset [9], yielding a hefty 17.5 GB of raw text after rigorous processing. Recent works, such as Albertina PT-BR [10], have also made use of this same dataset. In this particular endeavor, we exploit two distinct datasets during the pre-training phase, brWAC and Carolina. In line with well-established research practices, we made use of the brWAC dataset, comprising a web crawl of Brazilian web pages, culminating in an approximately 17.5 GB corpus of processed text. Additionally, we incorporated the Carolina dataset [11], with 823 million tokens and encompassing two million text entries. To provide a contextual background, the Carolina dataset encompasses a diverse array of domains, including wikis, university documents, social media interactions, the legislative branch, and various other public domain sources. In particular, we selectively utilized only the document body, disregarding titles, metadata, and other information associated with the text. We employed the emoji and ftfy [12] libraries to respectively handle emojis and HTML-related content, while also organizing the data into a single file format with each line containing 510 tokens. In total, the raw pre-processed texts amount to 33 GB of data. ## 4 DeBERTinha, a Brazilian Portuguese model Prior research efforts [5; 6; 13] have highlighted the benefits of training Transformer models for specific languages. While multilingual models have shown improvements, monolingual training and tokenization methods continue to hold significance in achieving state-of-the-art performance on monolingual datasets. Consequently, our focus is on training the Deberta-V3 model for the Portuguese language. Deberta-V3 has demonstrated superior performance compared to other encoder architectures, primarily attributed to its pre-training methodology and extensive vocabulary. Although there has been prior work in training Deberta for Portuguese [10], we identified opportunities for enhancement: * We trained a Portuguese tokenizer using text from the Carolina dataset. Given that our dataset was smaller than the one used for training the English tokenizer, we opted for a vocabulary size of 50 thousand tokens. * We maintained fixed examples with a size of 512 tokens, fully utilizing the maximum context window of the Deberta model. * Our goal was to train the Deberta-V3 XSmall version, which consists of only 40M parameters. Additionally, we continued training on the RTD task using the weights available for the English model on the Hugging Face model hub. Due to resource constraints, the training process was divided into two phases.
The first phase involved training for one epoch using a batch size of 1664 on 8x 80GB A100 GPUs, while the second phase encompassed an additional two epochs with a batch size of 288 on 8x 32GB V100 GPUs. The cumulative training time amounted to 12.5 hours, incurring a cloud-computing cost of only about four hundred dollars. This calculation includes the 2.5 hours required for loading the dataset on each machine. ## 5 Experimental Setup In this section we report the experimental results obtained by DeBERTinha in 4 different tasks: ASSIN2-RTE and ASSIN2-STS [14], LeNERBR [15], HateBR [16]. We used a Google Colab instance with one 16GB T4 GPU to finetune on each task for an average time of 30min. Each training uses AdamW as optimizer with a learning rate of 0.00005, trained for a maximum of 20 epochs with early stopping. We compare the results of our DeBERTinha, which contains 40M parameters, against the baseline models BERTimbau-Large (335M) and Albertina-Large (900M). ASSIN2 contains sentence pairs of premises and hypotheses, and for each pair there are two annotated targets: a binary classification, indicating whether premise and hypothesis are related or not (this task is denoted ASSIN2-RTE); and a similarity score between 0 and 5, indicating how similar the premise and hypothesis are (this task is named ASSIN2-STS). Accuracy and Pearson correlation are used as metrics for ASSIN2-RTE and ASSIN2-STS, respectively. LeNERBR is a dataset for the Named Entity Recognition (NER) task in Portuguese legal text. It contains seven different classes: Organization, Person, Time, Local, Legislation, Jurisprudence and Other. The Other label is given to every token that does not fall into any of the previous categories. We use the standard BIO representation and only predict scores for the first token of each word. We use F1 score to assess performance on this task. HateBR is a dataset for classifying whether a tweet is inappropriate/malicious or not. We use Accuracy as our metric for this task. From the results shown in Table 1 we see that our DeBERTinha model surpasses BERTimbau-Large in two out of the four datasets: for ASSIN2-RTE, DeBERTinha achieves 89.99% Accuracy against the 89.13% of BERTimbau-Large; in LeNERBR our model achieves an F1 score of 90.19%, while BERTimbau-Large achieves 90.15%. In the other two datasets DeBERTinha achieves over 97% of the BERTimbau-Large performance. Both models still fall behind Albertina's 91.30% Accuracy on ASSIN2-RTE and 86.76% Pearson Correlation on ASSIN2-STS. However, Albertina is 22.5 times bigger than DeBERTinha and 2.6 times larger than BERTimbau-Large. Because training Albertina demands heavy computational resources we do not train it on the other datasets and report the results taken directly from the original paper. ## 6 Conclusion This work has shown promising results for adapting a pre-trained language model to Portuguese through a carefully designed multi-step methodology. The resulting DeBERTinha model demonstrated competitive performance across several NLP tasks compared to larger baseline models, highlighting the effectiveness of the proposed approach. On the ASSIN2-RTE task for sentence relatedness classification, DeBERTinha achieved an accuracy of 89.99%, outperforming BERTimbau-Large which scored 89.13%. For the named entity recognition task on the LeNERBR dataset, DeBERTinha achieved an F1-score of 90.19%, slightly higher than BERTimbau-Large's score of 90.15%.
While Albertina still achieved better results on some tasks, DeBERTinha was able to attain over 97% of the performance of BERTimbau-Large, despite having only 40M parameters compared to BERTimbau-Large's 335M parameters. This shows that the multi-step adaptation process leveraging an existing pre-trained model can produce a more specialized Portuguese model that rivals or exceeds the performance of much larger baseline models, demonstrating an effective approach for resource-constrained scenarios. The methodology introduced in this work contributes to research on cross-lingual transfer learning and model adaptation techniques. Overall, the results validate DeBERTinha as a promising lightweight model for Brazilian Portuguese NLP applications. For future work we aim at assessing DeBERTinha's performance in a bigger range of Portuguese NLP tasks, such as information retrieval, and extend its context to 1024 and 2048 tokens. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **Model/Dataset** & **LeNERBR** & **ASSIN2-RTE** & **ASSIN2-STS** & **HateBR** \\ \hline **BERTimbau-Large** & 90.15 & 89.13 & 85.31 & 93.60 \\ \hline **Albertina** & - & 91.30 & 86.76 & - \\ \hline **DeBERTinha** & 90.19 & 89.99 & 84.75 & 91.28 \\ \hline \end{tabular} \end{table} Table 1: Results from BERTimbau-Large, Albertina and DeBERTinha on 4 commonly used Portuguese datasets. ## 7 Acknowledgements We would like to express our heartfelt gratitude to Letrus for their generous funding of the cloud services related to the processing and training of our language model. Their support was of a central importance in the execution of this work.
2304.00002
Beyond Interpretable Benchmarks: Contextual Learning through Cognitive and Multimodal Perception
With state-of-the-art models achieving high performance on standard benchmarks, contemporary research paradigms continue to emphasize general intelligence as an enduring objective. However, this pursuit overlooks the fundamental disparities between the high-level data perception abilities of artificial and natural intelligence systems. This study questions the Turing Test as a criterion of generally intelligent thought and contends that it is misinterpreted as an attempt to anthropomorphize computer systems. Instead, it emphasizes tacit learning as a cornerstone of general-purpose intelligence, despite its lack of overt interpretability. This abstract form of intelligence necessitates contextual cognitive attributes that are crucial for human-level perception: generalizable experience, moral responsibility, and implicit prioritization. The absence of these features yields undeniable perceptual disparities and constrains the cognitive capacity of artificial systems to effectively contextualize their environments. Additionally, this study establishes that, despite extensive exploration of potential architecture for future systems, little consideration has been given to how such models will continuously absorb and adapt to contextual data. While conventional models may continue to improve in benchmark performance, disregarding these contextual considerations will lead to stagnation in human-like comprehension. Until general intelligence can be abstracted from task-specific domains and systems can learn implicitly from their environments, research standards should instead prioritize the disciplines in which AI thrives.
Nick DiSanto
2022-12-04T08:30:04Z
http://arxiv.org/abs/2304.00002v2
# Analyzing the Contextual Shortcomings of Artificial General Intelligence ###### Abstract Even in the most cutting-edge Artificial General Intelligence (AGI) endeavors, the disparity between humans and artificial systems is extremely apparent. Although this difference fundamentally divides the capabilities of each, human-level intelligence (HLI) has remained the aim of AGI for decades. This paper opposes the binarity of the Turing Test, the foundation of this intention and original establishment of a potentially intelligent machine. It discusses how AI experts misinterpreted the Imitation Game as a means to anthropomorphize computer systems and asserts that HLI is a red herring that distracts current research from relevant problems. Despite the extensive research on the potential design of an AGI application, there has been little consideration of how such a system will access and ingest data at a human-like level. Although current machines may emulate specific human attributes, AGI is developed under the pretense that this can be easily scaled up to a general intelligence level. This paper establishes contextual and rational attributes that perpetuate the variation between human and AI data collection abilities and explores the characteristics that current AGI lacks. After asserting that AGI should not be seeking HLI, its current state is analyzed, the Turing Test is reevaluated, and the future of AGI development is discussed within this framework. Artificial General Intelligence (AGI), context, data collection, human-level intelligence (HLI), intelligent systems, Turing Test (TT) ## 1 Introduction Although machine learning and Artificial Intelligent systems have proven to be undoubtedly beneficial, their roles still differ from those of humans. AI, while heuristically impressive, is currently bound to task-based applications. While many of these products are proficient at simple, low-level tasks [1, 2], humans excel at abstract thought and metacognition [3, 4]. "Weak AI," which focuses on one narrow task and will hereafter be referred to as Artificial Narrow Intelligence (ANI), demonstrates a notable contrast to human capability. Though Artificial General Intelligence (AGI)--the complete abstraction of knowledge--is the goal for truly "intelligent" solutions, characteristics of ANI are seen in all current AI products. Noguerol [5] demonstrates the potential weaknesses of ANI in the context of subjective associations in radiology. While AI development has gone through "summers" and "winters," the prevailing definition, criteria, and goal of AGI remain unchanged: developing a system that rivals or exceeds human-level intelligence (HLI). Before continuing, it is essential to distinguish intelligence from _general_ intelligence in the context of AI development. Well-regarded definitions of intelligence in the AI community include "the ability to solve hard problems" [6] and "achieving goals in a wide variety of environments" [7]. For the scope of this paper, intelligence will simply be considered the ability to learn from data and, most importantly, apply it to solve specific tasks. General intelligence, on the other hand, is less correlated to task-solving ability. Voss [8] describes it as "the essential, domain-independent skills necessary for acquiring a wide range of domain-specific knowledge (data and skills) - i.e., the ability to learn anything (in principle)." Pennachin [9] describes it as "the ability to reason and think in a variety of domains, not just in a single area.". 
For the sake of simplicity, this paper will consider general intelligence to be congruent with HLI: the ability to generalize and learn from _any_ data and apply it in abstract environments. As recent research has attempted to establish criteria for AI to approach HLI, it does not acknowledge abilities that are likely unachievable in current implementations. While AGI is a very broad (not to mention abstruse) idea, its shortcomings are inherent and unavoidable in both theory and practice when compared to human general intelligence. ## 2 Background - Turing Test While HLI is undoubtedly a natural benchmark for AGI, the most notable work to formally establish it is likely the Turing Test (TT). Introduced by Turing [10], the "Imitation Game" consists of a human communicating with a hidden entity. For a machine to pass the test, the human test subject must not be able to distinguish human users from AI imposters. Turing's goal in establishing this as a target was simply to create a tangible, task-based benchmark for AGI, considering "thought" has been arbitrarily defined in the philosophical community since Descartes [11]. The TT is generally considered the first quantifiable definition of a conscious entity and the origin of Artificial Intelligence. As one of the original attempts to define a human-like artificial system, it paved the way for deep learning, developing machine learning models after the human brain's neural networks. Importantly, these demonstrations established the goal of AI development as "rivaling human ability," a dangerous rhetoric that would stay rigid for decades to come. However, the TT exhibits several limitations. Most importantly, it suggests that humans are the only reasonable benchmark for an intelligent system (a fallacy that will be further discussed later). Other important issues with the TT are its reliance on the aptitude of its participants [12] and its assumption that linguistics is the best measure of intelligent thought. The latter is best argued by Dreyfus [13], which contented that human learning is mainly tacit and unquantifiable, and Gunderson [14], which was skeptical that language could adequately encapsulate intelligence. The TT also sets the expectation that high performance in a performative task, such as conversation, is a surefire demonstration of rational thought. However, as established in the definition of AGI, general intelligence should not be contingent on its abilities in a task-based environment. The "Chinese Room" argument [15] famously demonstrates that giving a "correct" response can be very different from giving a thoughtful one. The strict binarity of the TT further emphasizes the problem with such a black-box environment. The TT concludes with a simple "yes" or "no" answer as to whether the computer is "intelligent" enough to imitate humans. However, it does not indicate the system's thought process, the method it takes to approximate human-level reasoning, or how close it gets to success. This yields no insight into its intellectual proficiency. The difference between the thought process of someone with square roots memorized and someone who can calculate them in their head illustrates this point. Merely acknowledging a correct answer provides no transparency into the person's problem-solving capability. ## 3 The Goal of AGI When claiming that the TT is not a viable measure of general intelligence, it becomes necessary to establish what the goal of AGI should instead be. 
Instead of following popular sentiment, this goal should rely on machine learning architectures' capabilities. Unfortunately, even as AI has continued to improve, its target has hardly changed. Whether spurred by trendy developments in the field, portrayals of the media [16], or Turing himself, the AI community has pressed on in the quest of rivaling human thought. However, while ANI has far surpassed the expectations of even Turing, AGI is struggling to match the general understanding of even a young child. Watching cutting-edge products struggle to fill the human-size shoes of general intelligence begs the question of whether the prevailing approaches are practical. This paper seeks to establish criteria that demonstrate why the comparison of AI to HLI is a red herring. Instead of framing AGI models to challenge human thought, AI developers should seek to build efficient models that can automate specific tasks with minimal supervision. Russell & Norvig [17] analogize AI development to the aeronautical engineering research that led to successful "artificial flight." As seen in the construction of airplanes, the most viable solutions did not come by modeling the source - pigeons. Instead, the general principles that govern birds were used as inspiration to build a machine with a different application. Similarly, success in AI ventures does not need to come by perfectly imitating human thought but by reaching effective solutions to relevant issues. To contradict popular thought, this paper will combat the notion of human-level AGI by presenting foundational distinctions between the efficiency and nature in which humans and AI can collect, process, and learn from data. ## 4 Comparison To Human Learning Architecture Since the discrepancy between the current applications of HLI and AI has been demonstrated, it is essential to understand the factors that determine it. Once these are found, it becomes possible to decide whether or not they may be eliminated to bridge the gap. The areas in which the architectures are alike will be recognized and rejected to illuminate these attributes. While their effectiveness may vary, all living creatures share remarkably similar low-level data collection and learning processes. Illeris [18] explains human learning as a two-step procedure: "an external interaction process between the learner and his or her social, cultural, or material environment, and an internal psychological process of elaboration and acquisition." It is purely biological to absorb data and make future decisions based on it. While this is a simple summary of the human learning process, it can be argued to hold for all human encounters. Humans are simply unmindful of ideology shifts because their unfathomably extensive data collection (daily life experiences) yields incredibly slow changes. As alluded to previously, artificial learning systems are hardly different. They are designed to mimic the learning ability of the human brain, breaking data down to its simplest form and training through abstraction. Grossberg [19] provides a particularly famous example, modeling specific neural network architectures after particular brain regions. Similarly, de Garis [20] used simplified cellular automata-based neural networks to simulate the distribution of growth instructions through a 3D space. The architectural accuracy of these models shows that the imitation of human brain functionality is not only possible but is underway. 
Thus, the distinction between humans and AI cannot lie in the implementation of the machines' data processing mechanisms. This raises an important question: why is the difference between AGI and HLI so substantial if the underlying activity is the same? Simply put, modeling the human brain only goes so far; the actual task is creating a system that can imitate the human mind. Contrary to natural intuition, the problem with creating an artificial mind is not with its "intelligence;" computers are as capable as could be expected of them. Instead, the issue is with sensory and information collection abilities. ## 5 The Discrepancy - Data Ingestion At a low level, both human and AI systems are deceptively simple in the way they process information. However, the notion of AGI relies on the premature assumption that the external environments in which each function are analogous. State-of-the-art AGI presentations frame their research assuming that the deep, rich data required for HLI is available to machines. This is plainly false; the learnable data presented to each system is completely incomparable. The most important distinction between human and AI data ingestion is the nature in which it is provided. Humans are presented with tens of thousands of conscious decisions daily (which hardly accounts for unconscious thought). Regardless of the individual's awareness of these interactions, each one influences the individual's intuition. This is why humans naturally avoid pain and seek pleasure without much thought [21]. Awake or asleep, aware or unaware - the body and mind are constantly interacting with their internal and external environments [22]. The human experience can therefore be modeled as a continuous, dynamic dataset with infinite potential values to consider. Modern AI products, on the other hand, are quite primitive in their data collection techniques. Accumulating data for a supervised model requires incredibly deliberate work, including collecting, labeling, organizing, and preprocessing each value. Additionally, the model will only be provided with values that a human labeler has deemed "valuable." Its training data is entirely contingent on human understanding, which eliminates the freedom for it to reach subconscious conclusions. This is a crucial area of research that is left uninvestigated. Since computers cannot "experience the world," they will never be able to comprehend it in the same way as humans. Attempting to use human-modeled learning architectures on artificial datasets is comparing apples and oranges regarding data availability. The difference in how these systems can perceive the world is spelled out clearly by Hoyes [23], which argues that the critical component computers are missing is the ability to instantiate 3D perceptions from 2D sense modalities. AGI is not inherently flawed in its learning methodologies; rather, it lacks the ability to absorb data about the world around it in the same way humans can. After all, since it has no real-world context, ANI needs curated data in an incredibly specific domain to perform a task at a human-like level. Instead of being considered geniuses, humans should be regarded as highly efficient data processing machines. As argued by Huang [24], distinguishing the _method of imitation_ is just as important as the tasks for AGI systems. While ANI can certainly surpass human "intelligence" in specific specializations, that goal is quite outlandish for AGI. 
Generalizing a machine to infinite knowledge over an infinitely vast domain is clearly ridiculous. Instead of attempting to imitate human intelligence directly, modeling the human brain sidesteps this objective and relies on abstract learning abilities. This establishes a learning architecture that emphasizes breadth instead of depth, making "general" intelligence the focus. The model demonstrated by Vinyals [25] is an important example of the pitfalls of HLI imitation. Despite training on hundreds of millions of sentences and tokens with the goal of general language understanding, the model struggles to sustain conversations of depth. Why have high-budget language understanding models, despite their brute-force learning methods, still not mastered human language? The performance difference lies not in architecture but in the context of the data they collect. ## 6 Comparing Contextual Comprehension Every human experience, of which there are billions every day, elicits reactions from nerves in the body that send an exceptionally complex signal to the brain. The brain can then quickly determine these senses' origin, implications, and contextual applicability. This gives HLI an incredible amount of depth. Not only is there a tremendous amount of data collected, but it is all processed harmoniously. Conversely, AI is difficult to optimize because it cannot contextually understand and apply its data. This is due both to the narrow scope of its applications and its inability to perceive and apply contextual implications to its learning process. Humans are more sophisticated learning creatures because their information contains three necessary contextual components: **generalized experiences, emotion and moral responsibility, and significance cognition.** ### - Generalized Experiences The single most important contextual tool humans have is the ability to generalize previous experiences to new situations. This feature is crucial because the rest of the features presented in this paper, along with the rationality of intelligent life, revolve around it. Without powerful generalization abilities, machines cannot learn at all. In fact, in many respects, it is the best way to define "intelligence" in the first place. Chollet [26] explains intelligence as a system's "skill-acquisition efficiency." This definition can be another way to view generalization, as deducing unwritten instances is a relatively effective way to collect input [27]. While AI can generalize to a certain extent, the goal is to abstract it to the human level. A seemingly intuitive yet significant way AI struggles with this is in simple, common-sense reasoning. A demonstrative example is given by Davis [28]: when given the sentence "I stuck a pin in a carrot; when I pulled the pin out, it had a hole," humans do not need to hesitate to infer that the carrot is the object with a hole. However, NLP products often struggle with such questions. AI systems (especially those trained exclusively in linguistics) have no real-world context to help them associate their training with human experiences, making simple conclusions incredibly complex. The following are descriptions of two unique ways humans can generalize previous experiences, along with their contrasts in AI. #### 6.1.1 - Previous events allow the prediction of future results Gilbert [29] makes the case that humans can subconsciously predict not only the hedonic consequences of events they have previously experienced but also events that have not yet taken place. 
Using formerly extrapolated data or rules (the laws of physics, for example), humans can deduce what they expect to happen, making the subsequent result seemingly straightforward. This is sometimes referred to as "metacognition," a skill that equips humans to cope with everyday life's uncertainties. AI, contrastingly, starts from scratch every time. This severely limits its capabilities because most of its effort goes toward affirming what humans can piece together intuitively. #### 6.1.2 - Drawing connections over different domains Humans are remarkably adept at drawing deep connections between seemingly unrelated things. A juror in court, for example, will likely be able to consider a suspect's testimony, eyewitness testimony, and evidence, weighing each of these attributes to reach a reasonable conclusion. He is flexible and adaptable because he has processes and structures that can interact cooperatively with each other. A machine, on the contrary, would make quite a lousy juror. It cannot apply its understanding to test data that differs in _fundamental structure_ from its training, even if it requires just a simple logical jump. It may be able to learn specific patterns from its training data, but its application lacks the generalization of more loosely related instances. This is seen in machine learning applications that are used to analyze suspects. Specific neural networks meant to, for example, classify suspects based on a forensic sketch may function with a moderate rate of accuracy [30]. However, this neural network would be completely clueless in analyzing other aspects of the suspect, such as their court testimony or alibi. Although three separate models could achieve high accuracy in each of these smaller tasks, relating them to one another to reach a larger and contextually meaningful solution is impossible. ### - Emotion and Moral Responsibility The inability to perceive and handle emotions is a key component of AI's limited data collection abilities (and, notably, its exclusion from consideration as a "conscious" entity [31]). Emotion and subjective experiences are essential influences on humans and weigh into every decision a person makes. First, it is important to note that emotional response is not necessary for improving a system's accuracy, precision, or predictability. In fact, it may often skew the objectivity of the subject. After all, cold, hard data is much less volatile than human emotion. Even so, it is a fundamental part of the human experience. AGI attempting to exhibit human-like tendencies must also demonstrate the ability to connect with the world around it emotionally. Regardless of the problem, AI will always seek the most logical and straightforward answer. This may seem like an appropriate ideology at face value, but it only goes so far. Many situations that humans decipher are guided by their opinions and ideologies. After all, many complex problems, such as those in politics or religion, are rooted in the individual's subjectivity. Humans are relational creatures, so emotions must be considered in meaningful interactions. AI may be adept at solving elementary tasks, but equating it with humans assumes that it can understand the relational context in which human experience often lies. Since computers cannot interpret personal experiences and emotions, they have no personal philosophy to help them navigate complex real-world situations. Similarly, every human decision is rooted in that person's inherent values and moral responsibilities. 
To make an informed decision, humans have a lifetime of opportunities to learn what they consider "right" and "wrong." This allows for complete abstraction from problems, even if they have never explicitly been encountered. To illustrate, a fiscal conservative unfamiliar with the specifics of a new market regulation can deduce from his principles that he will likely disagree with it. On the other hand, a machine cannot do more than memorize what it "believes" is right and wrong. After all, the patterns among the data in these cases are not easily quantifiable. While humans can make reasonable decisions by relying on their fixed morals, AI systems lack inherent rationality behind their reasoning. This aspect of decision-making is also a large reason why the AI community is hesitant to trust it to make significant decisions for groups of people. The "gut feeling" humans bring to decisions allows them to navigate ethical issues. On the other hand, machine learning demonstrates imprecision with some of these intangible issues [32]. ### - Significance Cognition The propensity for humans to assign meaning to incoming sensory data is a pivotal way they can understand a situation [33]. While a brute force data collection technique may yield strong results on tasks of limited scope, it does not inform the system of the significance of different events. Narrow tasks can avoid complications since the application of the data is one-dimensional. However, human experiences are wrapped in emotional, personal, social, and societal implications that alter their impact. To illustrate, picture two men: Tom is watching a silly movie while Jim is attending his father's funeral. Both activities may take an hour, but Jim will undoubtedly extract more meaning from his event than Tom. The human brain has an acute ability to decipher what events should stick in long-term memory and have significant ramifications for future decisions. This is because humans can assign different values to different situations in their everyday lives and determine which ones should play a role in decision-making. Voss [34] explains this concept fittingly, stating, "Reality presents massively more features and details than is (contextually) relevant or can be usefully processed. This is why the system needs to have some control over what input data is selected for analysis and learning - both in terms of which data, and also the degree of detail." On the other hand, AI has no way of understanding the real-world implications of its interactions contextually. It will assume equal importance for both the movie and the funeral. Although both events may have a role in the AI learning process, they will be weighted inaccurately and inappropriately. AI cannot analyze and highlight notable encounters from the trillions of experiences encapsulating human experience. As long as it cannot understand the more profound implications that different situations insinuate, it will not be able to distinguish which aspects to focus on and consider with greater importance. Such shortcomings are unavoidable in practice when AGI is compared to human general intelligence. ## 7 Implications The typical research consensus is that general intelligence is the ability to improve without having much knowledge. However, this argument differs: an incredible amount of knowledge _certainly is_ required to improve the general intelligence of a system. The distinction is that AI simply does not have the means to collect such data in the same way humans can. 
A reasonable argument for the consciousness of AI can certainly be made if the issue of data accumulation is resolved. Assuming it can eventually experience the world in the same way humans do, there is no reason to believe that its learning capability would be affected in any way. In addition, the inherently differing learning capabilities between humans and AI should significantly influence their applications. This leads to the unavoidable conclusion that AGI supporters have had the wrong goal for decades. Because AI is missing the data acquisition methods necessary to reach HLI, trying to get it to emulate human activity is pointless. These findings have dramatic implications for the current application and scope of AI. Acknowledging the fundamental contextual differences between both systems necessitates distinguishing tasks for each. Humans and AI should focus on the tasks for which they are best equipped: abstract problems and focused individual tasks, respectively. This consideration will also play a prominent role in how AGI is approached in the future. Much of American society is frightened by the rise of AI, whether in the form of automation, robotics, or AGI [35]. While certainly perpetuated by the media, these fears are largely understandable due to the field's rapid expansion. However, the strength of current AI solutions is being hindered by these hesitations, which can be dispelled through a proper understanding of the distinctions this paper presents. Recognizing and implementing these systems in the roles where they are best suited would eliminate this concern and boost their potential. Finally, the Turing Test should be revisited once more. While many may seek to simply throw it out, this paper is not attempting to undermine it entirely. Instead, it asserts that the TT should be viewed merely as a demonstration of AGI's ability rather than as its criterion for intelligence. It seems safe to claim that modern AGI enthusiasts are far too passionate about the specifics of Turing's original prediction, seeking to create a perfect "Turing Test environment." However, this misses the big picture. The thought behind the test can be extracted from the Imitation Game itself. The goal for AGI, while lofty (and, quite frankly, fanciful), can be abstracted to perceiving, absorbing, and contextually understanding the world in a human-like manner. ## 8 Future Work Time will be the most significant indicator of how future AGI attempts may look. Assuming the prevailing architecture of an HLI-based standard stays the same, the question is at what point the products will begin to stagnate. Once the most cutting-edge modern implementations reach their peak, the variance compared to humans can be analyzed. At that point, it will be evident that raw computational power is simply too shallow to explore complex ideas. When this time comes, several important questions will determine the future of these applications. The most important question in the wake of declining AGI performance is whether it is a worthy goal to continue to pursue. While this paper asserts that it is not, the developers themselves must decide whether to continue chasing HLI. Even if developers decide to change their scope, additional elements must be considered. One such question that presents interesting applications is whether current AGI attempts can be modified to perform well in more narrow contexts. If this is possible, AGI attempts may offer informative insight to optimize the learning ability of ANI applications. 
The main problem that the field currently faces is that every innovation is viewed as the new "intelligent system" that enthusiasts have been waiting for. However, once the hype wears off, it becomes clear that it is simply another advanced computer program. Unfortunately, in the current AI summer, the excitement for and promotion of ongoing developments makes this analysis of its culmination seem impractical. ## 9 Conclusion This paper certainly is not meant to undermine current AI implementations' strength, intelligence, and usefulness. AI and machine learning applications undeniably change how humans approach tasks in varying industries. This paper merely highlights the features that AI excels at and redirects attention away from HLI while the current architecture cannot support it. As many sources and implementations demonstrate, AGI is (in theory) possible. That is, the computing power necessary to train an intelligent model is not unreasonable. However, computing power is independent of the contextual and experiential data required to train such a model. It is also important to note that the issues in AGI do not necessarily lie in the potential intelligence of the systems themselves. There is no reason to believe that the underlying learning capabilities are compromised. Instead, the issue lies in the inability to ingest and learn from large amounts of data efficiently. To illustrate, someone who never went to second grade and does not know their times tables is not necessarily incapable of learning them. They simply have not been presented with the material in a way that allows it to be properly understood. AGI can have a profound impact in a variety of fields but finding a way for it to emulate the human experience is a necessary first step. Only when AI is examined in a context where it can thrive can it be judged for its true potential.
2309.05192
Towards Viewpoint Robustness in Bird's Eye View Segmentation
Autonomous vehicles (AV) require that neural networks used for perception be robust to different viewpoints if they are to be deployed across many types of vehicles without the repeated cost of data collection and labeling for each. AV companies typically focus on collecting data from diverse scenarios and locations, but not camera rig configurations, due to cost. As a result, only a small number of rig variations exist across most fleets. In this paper, we study how AV perception models are affected by changes in camera viewpoint and propose a way to scale them across vehicle types without repeated data collection and labeling. Using bird's eye view (BEV) segmentation as a motivating task, we find through extensive experiments that existing perception models are surprisingly sensitive to changes in camera viewpoint. When trained with data from one camera rig, small changes to pitch, yaw, depth, or height of the camera at inference time lead to large drops in performance. We introduce a technique for novel view synthesis and use it to transform collected data to the viewpoint of target rigs, allowing us to train BEV segmentation models for diverse target rigs without any additional data collection or labeling cost. To analyze the impact of viewpoint changes, we leverage synthetic data to mitigate other gaps (content, ISP, etc). Our approach is then trained on real data and evaluated on synthetic data, enabling evaluation on diverse target rigs. We release all data for use in future work. Our method is able to recover an average of 14.7% of the IoU that is otherwise lost when deploying to new rigs.
Tzofi Klinghoffer, Jonah Philion, Wenzheng Chen, Or Litany, Zan Gojcic, Jungseock Joo, Ramesh Raskar, Sanja Fidler, Jose M. Alvarez
2023-09-11T02:10:07Z
http://arxiv.org/abs/2309.05192v1
# Towards Viewpoint Robustness in Bird's Eye View Segmentation ###### Abstract Autonomous vehicles (AV) require that neural networks used for perception be robust to different viewpoints if they are to be deployed across many types of vehicles without the repeated cost of data collection and labeling for each. AV companies typically focus on collecting data from diverse scenarios and locations, but not camera rig configurations, due to cost. As a result, only a small number of rig variations exist across most fleets. In this paper, we study how AV perception models are affected by changes in camera viewpoint and propose a way to scale them across vehicle types without repeated data collection and labeling. Using bird's eye view (BEV) segmentation as a motivating task, we find through extensive experiments that existing perception models are surprisingly sensitive to changes in camera viewpoint. When trained with data from one camera rig, small changes to pitch, yaw, depth, or height of the camera at inference time lead to large drops in performance. We introduce a technique for novel view synthesis and use it to transform collected data to the viewpoint of target rigs, allowing us to train BEV segmentation models for diverse target rigs without any additional data collection or labeling cost. To analyze the impact of viewpoint changes, we leverage synthetic data to mitigate other gaps (content, ISP, etc). Our approach is then trained on real data and evaluated on synthetic data, enabling evaluation on diverse target rigs. We release all data for use in future work. Our method is able to recover an average of 14.7% of the IoU that is otherwise lost when deploying to new rigs. ## 1 Introduction Neural networks (NNs) are becoming ubiquitous across domains. Safety critical applications, such as autonomous vehicles (AVs), rely on these NNs to be robust to out of distribution (OOD) data. Yet, recent work has drawn attention to the susceptibility of NNs to failure when exposed to OOD data, such as adversarial corruptions [10], unseen weather conditions [15], and new geographic regions [6]. While each of these poses a significant challenge for safety critical applications, we focus on another distribution shift, which, thus far, has been understudied in the research literature - changes in camera viewpoint between train data and test data. Because camera viewpoint changes are realistic in AVs, we study their impact on AV perception tasks. AVs use cameras around the ego-vehicle to perceive their surroundings. Using images from each camera, NNs detect and segment objects in the scene, such as vehicles, pedestrians, roads, and more. This information is used by trajectory planners to decide how the ego-vehicle navigates. Camera viewpoint for AVs may differ between train and test in several real-world scenarios. First, the camera viewpoint may change over time due to wear and tear or damage. Second, camera viewpoint may change due to installation variation. Third, and most relevant for our work, if a single NN is to be deployed across different types of vehicles, it must be able to generalize to the camera viewpoints of each car. Collecting and labeling train data for each target rig is not scalable and quickly becomes intractable for AV companies wishing to scale across many types of vehicles due to cost, thus motivating our work to transform collected data into the viewpoint of diverse target rigs to use for training. 
Figure 1: **Impact of Changed Camera Viewpoint:** We find that the performance of state-of-the-art methods for bird's eye view (BEV) segmentation quickly drops with small changes to viewpoint at inference. Above we see predictions from Cross View Transformers [29] trained on data from a source rig (top). The target rig pitch is reduced by \(10^{\circ}\) (bottom), leading to a 17% drop in IoU.
The goal of this paper is to bring understanding and a first approach to a real-world problem in the AV space that has yet to receive attention in the research literature - generalization from a source to target camera rig. We focus on bird's eye view (BEV) segmentation from RGB data to motivate how changing camera viewpoint can affect AV perception models. We study this problem by conducting an in-depth analysis on the impact changing the camera viewpoint at inference time has on recent BEV segmentation models. Our findings indicate that even small changes in camera placement at inference time degrade BEV segmentation accuracy, as illustrated in Fig. 1. We then propose a method to improve generalization to a target rig by simulating views in the target perspective. We show that incorporating data generated from novel view synthesis into training can significantly reduce the viewpoint domain gap, bringing the BEV segmentation model to the same level of accuracy as when there is no change in camera viewpoint, without having to collect or label any additional data. We compare our approach with other strategies, such as augmenting the camera extrinsics and labels during training, and find that our approach leads to better accuracy. Little work has focused on the impact of viewpoint changes for AV perception, and, to the best of our knowledge, we are the first to study the impact of diverse camera viewpoint changes on 3D AV perception tasks, such as BEV segmentation. We hope that this paper will encourage more research on the important problem of _viewpoint robustness_ in AV. Our paper makes the following contributions:
* We highlight the understudied problem of _viewpoint robustness_ in bird's eye view segmentation for autonomous vehicles (AV) through an in-depth analysis revealing that recent models fail to generalize to different camera viewpoints at inference time.
* We propose a viewpoint augmentation framework for AV; we develop a novel view synthesis method that can be used to transform training data to target viewpoints and show that it improves the robustness of bird's eye view segmentation models to viewpoint changes.
* We provide datasets that can be used to benchmark future work on viewpoint robustness in AV. Because real-world AV datasets from a diverse set of camera rigs are not publicly available, we use simulated data both for (1) training and evaluation in our analysis and (2) evaluation of our proposed technique. Our synthetic datasets can be used for future efforts to benchmark the generalization abilities of different AV perception methods to viewpoint changes. Datasets are publicly available on our project page. The first dataset, rendered from CARLA, consists of both training and testing data, allowing for isolated analysis of the impact of viewpoint changes on BEV segmentation models (example images in Fig. 2). The second dataset, rendered with NVIDIA DRIVE Sim [20], is significantly more photorealistic and consists of test sets from a diverse set of camera viewpoints. Thus, it can be used to evaluate models trained on real data, as we show in Section 5. 
Both datasets include 3D bounding box labels. ## 2 Related Work ### Viewpoint Robustness Recent work has drawn attention to the susceptibility of NNs to misclassify when presented with distributions not seen during training. Madan _et al._[14] show that both convolutional- and transformer-based classifiers are fooled by small viewpoint changes, and they introduce a search strategy for finding adversarial viewpoints, which leads to misclassifications over 71% of the time. Similarly, [13] shows that small viewpoint changes degrade classification performance, especially when paired with out of distribution (OOD) categories, and demonstrates that increasing the diversity of training data is an effective strategy to mitigate this issue. Do _et al._[4] use homography to move images closer to the distribution of training data at inference time. Coors _et al._[3] study the impact of viewpoint changes for 2D semantic segmentation for AV, but do not explore 3D tasks. In contrast, we focus on providing a thorough analysis and a solution to the problem of viewpoint robustness for 3D AV perception tasks, focusing on BEV segmentation. ### Novel View Synthesis Novel view synthesis (NVS) provides a way to render images from unseen viewpoints of a scene, and thus could be used to improve the robustness of perception models to viewpoint changes. Many methods have been proposed for NVS in recent years [18, 16, 19], many of which are based on Neural Radiance Fields (NeRF) [17]. However, NeRF still faces two challenges that limit its applicability for our use case: (1) getting NeRF to generalize to dynamic scenes, which are common in AV, is an open research problem, and while there is promising work in this direction [28, 22], the setup is often too constrained and simplified to fit the AV problem setting, and (2) NeRF is challenging to scale due to lack of generalizability, so multiple NeRFs must be trained to perform NVS across scenes. While there is work aimed at generalizing NeRF [12], it remains an open problem and current methods are often constrained. Other methods for NVS rely on monocular depth estimation and can generalize across scenes when the depth estimation network is trained on diverse data. We leverage Worldsheet in our work [11], which is described in more detail in Sec. 4.1. ### Bird's Eye View Segmentation Bird's eye view (BEV) segmentation -- the task of segmenting a scene in the top-down view (BEV) from 2D images -- is a useful task for benchmarking AV perception [21, 24, 1]. BEV segmentation requires a 2D to 3D unprojection to predict the position of objects surrounding the ego-vehicle from the BEV perspective. BEV segmentation models usually consist of an image encoder, which extracts the features from images from the camera rig, and a decoder, which uses the image features to predict the objects of interest in the BEV coordinate frame. Existing methods condition on the extrinsics and intrinsics of each camera in different ways. Lift-Splat-Shoot (LSS) [21] and Orthographic Feature Transform (OFT) [25] unproject features into a point cloud according to each camera's intrinsic and extrinsic parameters. LSS performs sum pooling along each pillar in the map-view, while OFT performs average pooling. Other methods, such as Cross View Transformers (CVT) [29], treat camera intrinsics and extrinsics as a feature, rather than explicitly unprojecting. 
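To make the distinction concrete, the following is a rough, self-contained sketch of the explicit "lift-splat" style unprojection for a single camera; the tensor shapes, depth range, and BEV grid parameters are illustrative assumptions for exposition, not the released implementation of [21]:

```python
import torch

def lift_splat_bev(feats, depth_probs, K, cam_to_ego,
                   bev_cells=200, cell_size=0.5, d_min=2.0, d_max=50.0):
    """Toy lift-splat style unprojection of image features into a BEV grid.

    feats:       (C, H, W) image features
    depth_probs: (D, H, W) per-pixel categorical depth distribution
    K:           (3, 3) camera intrinsics
    cam_to_ego:  (4, 4) camera-to-ego extrinsics (ego x forward, y left assumed)
    """
    C, H, W = feats.shape
    D = depth_probs.shape[0]
    depths = torch.linspace(d_min, d_max, D)                              # candidate depths (m)

    # Rays through every pixel, expressed in the camera frame.
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], 0).reshape(3, -1).float()
    rays = torch.linalg.inv(K) @ pix                                      # (3, H*W)

    # "Lift": one 3D point per (depth bin, pixel), moved into the ego frame.
    pts_cam = rays.unsqueeze(0) * depths.view(D, 1, 1)                    # (D, 3, H*W)
    R, t = cam_to_ego[:3, :3], cam_to_ego[:3, 3:4]
    pts_ego = R @ pts_cam + t                                             # (D, 3, H*W)

    # Features weighted by the predicted depth distribution.
    w_feats = feats.reshape(1, C, -1) * depth_probs.reshape(D, 1, -1)     # (D, C, H*W)

    # "Splat": sum-pool every point's feature into its BEV cell.
    half = bev_cells * cell_size / 2.0
    ix = ((pts_ego[:, 0] + half) / cell_size).long()
    iy = ((pts_ego[:, 1] + half) / cell_size).long()
    valid = (ix >= 0) & (ix < bev_cells) & (iy >= 0) & (iy < bev_cells)
    flat = iy * bev_cells + ix                                            # (D, H*W) cell indices
    bev = torch.zeros(C, bev_cells * bev_cells)
    src = w_feats.permute(1, 0, 2).reshape(C, -1)[:, valid.reshape(-1)]
    bev.index_add_(1, flat[valid], src)
    return bev.reshape(C, bev_cells, bev_cells)
```

An implicit method in the style of CVT would instead skip this geometric scatter and feed an embedding of K and cam_to_ego to the network alongside the image features.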
We use LSS and CVT to conduct benchmarks, since these two methods encompass both convolutional and transformer-based architectures and explicit and implicit geometric representations. ## 3 Measuring the Impact of Camera Viewpoint Variations on BEV Segmentation **Method:** In this section, we introduce our approach and results for measuring the impact of changing the camera viewpoint at inference time for BEV segmentation models trained on a single, source rig. We use simulated data from CARLA [5] for this analysis for two reasons: (1) using simulated data allows us to isolate the domain gaps between training and testing such that only camera viewpoint changes, and (2) real AV datasets with large differences in camera position are not publicly available. Examples of different camera viewpoints rendered in CARLA are shown in Fig. 2. For simplicity and ease of interpretation of our results, we conduct all experiments on a single camera rig, containing a front facing camera, which we refer to as the source rig. We first train a BEV segmentation model on data rendered from the source rig. For this rig, we use the camera parameters of sessions from the nuScenes dataset [2]. Then, we render train and test datasets from different target rigs, which contain variations to the yaw, pitch, height, or pitch and height of the camera. The train datasets are used to train an oracle for each target rig, while the test datasets are used to evaluate the model trained on the source rig in comparison to the oracle. For completeness, we sweep over a large range of each extrinsic and render a train and test dataset at regular intervals. For pitch and yaw, we sweep from -20\({}^{\circ}\) to 20\({}^{\circ}\), rendering a dataset every 4\({}^{\circ}\). For height, we sweep from 0 in to 30 in, rendering a dataset every 3 in. For height and pitch together, we sweep from 0\({}^{\circ}\) and 0 in to -20\({}^{\circ}\) and 30 in, rendering a dataset at every -4\({}^{\circ}\) and 6 in.
Figure 3: **Analysis of impact of viewpoint changes in CARLA:** We train a source BEV model using Lift Splat Shoot (LSS) [21] and Cross View Transformers (CVT) [29], denoted at point 0 on the \(x\) axis of each graph. We then test the model across different target rigs where the camera pitch, yaw, height, or pitch and height are changed, as denoted by the different points along the \(x\) axes. We also trained each model on the target rig directly and refer to this model as the “oracle”, as it reflects the expected upper bound IoU for each viewpoint.
Figure 2: **Datasets rendered in CARLA across viewpoints:** For the analysis part of our work, we use CARLA to simulate different viewpoints. We rendered datasets from a total of 36 viewpoints, a few of which are highlighted above, including the source rig (extrinsics from nuScenes [2] dataset), +12\({}^{\circ}\) yaw, +12\({}^{\circ}\) pitch, +21 inch height, and -12\({}^{\circ}\) pitch and +18 inch height together.
To understand the domain gap introduced by changes in camera position, we test the "source model", which is the model trained on data from the source rig, across each test dataset, where each test dataset contains changes to either yaw, pitch, height, or pitch and height together. We then compare the test accuracy of the source model to the test accuracy of the oracle model, which was only trained on data from the target rig. The oracle model serves as an upper bound on model performance since there is no domain gap between the train and test datasets. 
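To make the sweep concrete, it can be enumerated as follows; the step sizes and counts come from the text above, while the `Rig` container and the rendering driver are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rig:
    """A single-camera rig, expressed as offsets from the nuScenes-derived source rig."""
    pitch_deg: float = 0.0
    yaw_deg: float = 0.0
    height_in: float = 0.0

def carla_sweep():
    """Enumerate the source rig plus the 35 perturbed rigs (36 datasets in total)."""
    rigs = [Rig()]                                                         # source rig
    rigs += [Rig(pitch_deg=p) for p in range(-20, 21, 4) if p != 0]        # 10 pitch rigs
    rigs += [Rig(yaw_deg=y) for y in range(-20, 21, 4) if y != 0]          # 10 yaw rigs
    rigs += [Rig(height_in=h) for h in range(3, 31, 3)]                    # 10 height rigs
    rigs += [Rig(pitch_deg=-4 * i, height_in=6 * i) for i in range(1, 6)]  # 5 pitch+height rigs
    return rigs

# Hypothetical driver: render a train/test split per rig, train the per-rig oracle,
# and evaluate the source-rig model on every target test set.
for rig in carla_sweep():
    print(f"render CARLA train/test split for {rig}")
```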
**Model Details:** We conduct our analysis using two BEV segmentation models, Lift Splat Shoot (LSS) and Cross View Transformers (CVT). LSS uses an explicit geometric operation to map objects in the camera coordinate system to the bird's eye coordinate system. It does this by using each camera's intrinsic and extrinsic parameters to construct a frustum-shaped point cloud per camera in which predicted objects are placed. A convolutional encoder maps images to features and depths, which are unprojected into the frustum, and a cumulative summing operation is done over the features in the vertical pillars of the frustum before the decoder then predicts the final segmentation. In contrast, CVT uses a transformer to learn features over images, extrinsics, and intrinsics. The extrinsic and intrinsic parameters are used to condition the segmentation network, such that it implicitly learns correlations between the parameters and positions of objects relative to the ego-vehicle. We use these two architectures because they cover both explicit and implicit geometric representations and convolutional and transformer backbones, allowing us to test the impact of each on generalization to viewpoint changes. **Results:** Results of our analysis are shown in Fig. 3. We see that the performance of both LSS and CVT suffers drastically with even small changes to camera viewpoint, whether it be pitch, yaw, height, or pitch and height together. Because of the architecture of LSS, which includes cumulative summing in the vertical pillars within each frustum, changes to camera height have a relatively small impact on downstream BEV segmentation performance in comparison to other viewpoint changes. CVT lacks this generalization to changes in camera height because it does not sum features in the height dimension, but rather conditions on the camera extrinsics. We also note that because the training dataset is acquired in simulation, the extrinsics of the source rig have no noise or calibration error, and, thus, are always the same during training. As a result, we found that CVT learns to ignore the extrinsic embedding during training, indicating that the degradations in test performance we see in Fig. 3 are the result of the images being out of distribution. In contrast, our experiments in Sec. 5 involve training CVT on real world data, which has calibration error, and, as a result, CVT learns to use the extrinsic embedding to inform predictions, but still lacks generalization to target rigs. During our analysis, we also found that while changes to yaw have a negative impact on the performance of LSS, the resulting segmentation predictions are transformed based on the difference in yaw between training and testing. To mitigate this, a post-processing step can be applied where the predictions are rotated to the viewpoint of the target rig. Post-processing can be used to mitigate the effect of changes in yaw, but does not generalize to other extrinsic parameters, such as pitch and height. Lastly, we note two biases in the oracle models. First, we observe that the LSS oracle model trained on negative pitches performs poorly. Second, both LSS and CVT achieve higher test IoU when trained and tested with rigs with a larger camera height. 
While higher IoU could be explained by fewer occlusions due to a higher viewpoint, and thus more ground truth pixels, the number of ground truth objects is consistent across each of the test datasets (7 objects per frame on average), and so this bias is not explained by differences in the number of ground truth pixels. We note these biases, but they are not the main focus of our work. **Training Details:** We train each BEV segmentation model three times and show the mean and standard deviation in test IoU in Fig. 3. Each model is trained on 25,000 images rendered from the front center camera (same camera parameters as in nuScenes) with the CARLA Simulator [5]. Train datasets are created for all camera viewpoints tested so that an oracle model can be constructed. For evaluation, we use 5,000 test images from each target rig, where the target rigs include changes to camera pitch, yaw, height, or pitch and height together, and are rendered from a different CARLA map than the training sets. Each model is trained for 30 epochs. We will release all 36 train and test datasets with this paper. The 36 datasets include train and test data for the source rig, 10 pitch rigs, 10 yaw rigs, 10 height rigs, and 5 height and pitch rigs. ## 4 Viewpoint Robustness via NVS We present a new method that improves generalization of BEV segmentation models to different camera positions using novel view synthesis (NVS). As described in Sec. 3, BEV segmentation models fail to generalize to even small changes in camera viewpoint. However, collecting new data from each target rig, especially when AV companies may wish to deploy models across many types of cars, is impractical due to the cost of collection and annotation. Thus, we focus on NVS as it provides an opportunity to reuse labeled data from the source rig by transforming it into the viewpoint of each target rig. We can then train a new model on the transformed data for each target rig. We first define our NVS method. The key difference between our NVS method and past work is how we generalize to complex, dynamic AV scenes. Then, we show how the transformed data can be used to train BEV segmentation models for diverse target rigs without access to real data from the target rig. We use real data to train our NVS and BEV segmentation models. To evaluate over diverse target rigs, we use synthetic data rendered with NVIDIA DRIVE Sim since real data only provides one rig setting. We compare test performance achieved with models trained with data transformed to the target viewpoint vs. only data from a source rig. Our approach is summarized in Fig. 4. ### Preliminaries We build off of Worldsheet [11], a recent method for single image NVS of _static scenes_, extending it to work on complex AV scenes that have _dynamic_ objects and occlusions. While NeRF-type approaches generate impressive NVS results, generalizing to dynamic scenes and across many scenes is still an active area of research. Worldsheet, on the other hand, is able to generalize across scenes, which is why we choose to use it in our work. The goal of Worldsheet is to build a 3D scene mesh, \(M\), by warping a \(W\times H\) lattice grid onto the scene based on predicted depths and vertex offsets. Given an input image, \(I\), a ResNet-50 [9] is trained to predict depth, \(z\), and grid offset of each vertex, \(V_{(x,y)}\) at each \((x,y)\) in \(I\). \(z\) and \(V_{(x,y)}\) are used to build \(M=(\{V_{(x,y)}\},\{F\})\), where \(F\) are the mesh faces. 
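A rough sketch of this mesh-building step is given below; the lattice resolution, the centered-principal-point shortcut, and all tensor shapes are illustrative choices on our part rather than the authors' implementation:

```python
import torch

def build_scene_sheet(depth, offsets, K, grid_h=33, grid_w=33):
    """Toy construction of a scene 'sheet' mesh from predicted depth and vertex offsets.

    depth:   (grid_h, grid_w) predicted depth at each lattice vertex
    offsets: (2, grid_h, grid_w) predicted (dx, dy) offsets in normalized image coords
    K:       (3, 3) camera intrinsics (principal point assumed at the image center)
    """
    H, W = grid_h, grid_w
    ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")

    # Warp the regular lattice with the predicted per-vertex offsets, then map to pixels.
    xs = (xs + offsets[0]).clamp(0, 1) * (K[0, 2] * 2)
    ys = (ys + offsets[1]).clamp(0, 1) * (K[1, 2] * 2)

    # Unproject every lattice vertex to a 3D point using its predicted depth.
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1)              # (H, W, 3)
    rays = (torch.linalg.inv(K) @ pix.reshape(-1, 3, 1)).squeeze(-1)      # (H*W, 3)
    verts = rays * depth.reshape(-1, 1)                                   # vertices V

    # Two triangles per lattice cell give the mesh faces F.
    idx = torch.arange(H * W).reshape(H, W)
    a, b = idx[:-1, :-1].reshape(-1), idx[:-1, 1:].reshape(-1)
    c, d = idx[1:, :-1].reshape(-1), idx[1:, 1:].reshape(-1)
    faces = torch.cat([torch.stack([a, b, c], 1), torch.stack([b, d, c], 1)], 0)
    return verts, faces
```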
A differentiable texture sampler is then used to splat the RGB pixel intensities from the original image onto the mesh's UV texture map. The pipeline is trained end-to-end on a multi-view consistency loss. Given two views of the scene, an input and a target, the mesh is predicted from the input view and then projected to the target view based on the target camera pose, \(\theta_{t}\). The target view is rendered and compared to the GT with L1 and perceptual losses. A pix2pixHD generator inpaints parts of the scene in the generated target view that were not visible in the input. In contrast, we omit the pix2pixHD generator and use lidar depth supervision (LS), SSIM loss [27], automasking (AM) & minimum loss (ML) over neighboring frames [7] to build an NVS model that generalizes to complex, dynamic AV scenes.
Figure 4: **Proposed Pipeline**. Current methods for bird’s eye view (BEV) segmentation are trained on data captured from one set of camera rigs (the source rig). At inference time, these models perform well on that camera rig, but, according to our analysis, even small changes in camera viewpoint lead to large drops in BEV segmentation accuracy. Our solution is to use novel view synthesis to augment the training dataset. We find this simple solution drastically improves the robustness of BEV segmentation models to data from a target camera rig, even when no real data from the target rig is available during training.
Figure 5: **NVS Qualitative Comparison:** We compare the unrectified NVS results (top) and depth results (bottom) from Worldsheet [11] (right) to our method (middle and left). SSIM is SSIM loss, ML is min loss, AM is automasking, LS is lidar supervision.
### Novel View Synthesis for AV Data **Overview:** Because AV sessions, \(S\), are composed of temporally sequential images, \(\{I_{0},I_{1},...,I_{n}\}\in S\), temporal consistency, rather than multi-view consistency, can be enforced between neighboring images to train our NVS model, assuming a sufficiently high frame rate so parts of the scene are visible in the input and target images. For every input image, \(I_{n}\), we enforce consistency between \(I_{n-1}\) and \(I_{n+1}\) by transforming \(I_{n-1}\) and \(I_{n+1}\) into the viewpoint of \(I_{n}\) and comparing each predicted novel view \(\hat{I_{n}}\) to GT \(I_{n}\): \[\begin{split}\{\mathbf{\hat{I}}_{n}^{n+1},\mathbf{\hat{D}}_{n}^{n+1} \}=render(\{V_{(x,y)}^{n+1}\},\{F^{n+1}\},T^{n+1})\\ \{\mathbf{\hat{I}}_{n}^{n-1},\mathbf{\hat{D}}_{n}^{n-1}\}=render(\{V_ {(x,y)}^{n-1}\},\{F^{n-1}\},T^{n-1})\\ L_{im}=\frac{1}{\mathcal{P}}\sum_{i=1}^{\mathcal{P}}\min(|I_{n,i }-\mathbf{\hat{I}}_{n,i}^{n+1}|,|I_{n,i}-\mathbf{\hat{I}}_{n,i}^{n-1}|)\end{split} \tag{1}\] where \(V\) are vertices, \(F\) are mesh faces, and \(T\) is the texture map. We render the meshes built from \(I_{n-1}\) and \(I_{n+1}\) in \(I_{n}\)'s viewpoint, forming novel view renderings \(\hat{I}_{n}\in(\mathbf{\hat{I}}_{n}^{n+1},\mathbf{\hat{I}}_{n}^{n-1})\) and their corresponding depth maps \(\mathbf{\hat{D}}_{n}^{n+1},\mathbf{\hat{D}}_{n}^{n-1}\). We then compute the per-pixel image loss \(L_{im}\), where \(\mathcal{P}\) is the number of valid pixels and \(I_{n,i}\) is the \(i\)-th pixel of \(I_{n}\). Different from NeRF, Worldsheet applies a single-layer mesh to synthesize novel views. In discontinuous depth regions (_e.g._, boundaries), distortion might happen. 
To make the training more robust, we apply \(L_{1}\) and SSIM loss between the GT image \(I_{n}\) and the re-rendered image \(\hat{I}_{n}\), where we follow the same setting as in [7]. **Occlusion Handling:** Inspired by unsupervised ego-video depth estimation work [7], we compute two losses between \((I_{n},\mathbf{\hat{I}}_{n}^{n-1})\) and \((I_{n},\mathbf{\hat{I}}_{n}^{n+1})\), and take the pixel-wise minimum loss (ML) between them. Intuitively, as the car is moving, some parts of the scene might be occluded in the last or next frame. However, they are less likely to be occluded in both frames. Therefore, taking the minimum loss helps prevent occlusions from affecting the training loss. We also use auto-masking [7] to ignore pixels that violate camera motion assumptions, _e.g._, ego-car shadows. **Depth Supervision:** Unlike other applications where only an RGB sensor is available, AVs are often equipped with lidar during data collection. We assume that lidar observations are available when training our NVS model. Thus, we can leverage lidar supervision (LS), rendering lidar into a ground truth sparse depth map [23], \(D_{n}\), for every image, \(I_{n}\). To further improve the quality of the lidar depth maps, we use two types of automasking (AM). First, we use a pre-trained sky segmentation network [26] to mask out the sky and set the depth for this part of each training image to infinity. Second, we use MaskRCNN [8] to predict masks of the "close-by" cars so that they are ignored in the depth loss, due to the fact that the lidar detector is mounted higher than the camera and it typically cannot see the close cars. We then apply two depth losses, an L1 loss between the predicted depth and GT lidar depth (_direct_ depth loss) and an L1 loss between the predicted depth and ground truth depth after the prediction is projected into the viewpoint of the cameras at frame \(n+1\) and \(n-1\) (_rendered_ depth loss). As above, we also use the minimum loss for depth supervision: \[\begin{split} L_{D}^{direct}=\frac{1}{\mathcal{P}}\sum_{i=1}^{ \mathcal{P}}|D_{n-1,i}-F_{depth}(I_{n-1,i})|+\\ |D_{n+1,i}-F_{depth}(I_{n+1,i})|\\ L_{D}^{rendered}=\frac{1}{\mathcal{P}}\sum_{i=1}^{\mathcal{P}} \min(|D_{n,i}-\mathbf{\hat{D}}_{n,i}^{n+1}|,|D_{n,i}-\mathbf{\hat{D}}_{n,i}^{n-1}|) \end{split} \tag{2}\] Fig. 5 shows how our method (SSIM, ML, AM, LS) improves depth estimation and NVS compared to Worldsheet. These improvements are quantitatively validated in Table 1.
\begin{table} \begin{tabular}{|l|c c c c|} \hline Approach & Im. L1 \(\downarrow\) & PSNR \(\uparrow\) (dB) & SSIM \(\uparrow\) & Depth L1 \(\downarrow\) \\ \hline WS (original) & 0.145 & 22.602 & 0.595 & 0.00763 \\ WS + SSIM, ML, AM & 0.141 & 22.819 & 0.606 & 0.00707 \\ WS + SSIM, ML, AM + LS (Ours) & **0.138** & **22.936** & **0.608** & **0.00657** \\ \hline \end{tabular} \end{table} Table 1: **NVS Ablation:** We ablate our changes, which improve NVS and depth over Worldsheet (WS). We test with 1K images.
**Inpainting:** We train and test our NVS model using images from a 120\({}^{\circ}\) f-theta camera. The images are then rectified to 50\({}^{\circ}\) after NVS, such that missing parts of the scene are not in the field of view of the final image. As a result, no image inpainting is needed. Our NVS results are shown in Fig. 6.
Figure 6: **Novel View Synthesis Qualitative Results:** Shown above are the novel view synthesis results (rectified) obtained with our method. We transform images from the source rig to each of the target viewpoints and then use them for BEV segmentation training.
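Putting the objective together, the following is a simplified sketch of the pixel-wise minimum photometric loss of Eq. (1) and the depth losses of Eq. (2); it assumes dense (B, H, W)-shaped tensors and a single lidar validity mask, and omits the SSIM term and the per-frame bookkeeping:

```python
import torch

def min_photometric_loss(gt, render_prev, render_next):
    """Eq. (1), sketched: L1 against frames n-1 and n+1 re-rendered into frame n,
    taking the per-pixel minimum so a region occluded in one neighbor is not penalized."""
    l_prev = (gt - render_prev).abs().mean(dim=1)      # (B, H, W), mean over RGB channels
    l_next = (gt - render_next).abs().mean(dim=1)
    return torch.minimum(l_prev, l_next).mean()

def depth_losses(pred_depth, lidar_depth, rend_depth_prev, rend_depth_next, mask):
    """Eq. (2), sketched: direct L1 to (sparse) lidar depth plus a rendered-depth L1
    taken as the per-pixel minimum over the two neighbors; `mask` marks valid lidar
    returns (sky pixels and close-by cars removed, as described above)."""
    denom = mask.sum().clamp(min=1)
    direct = ((pred_depth - lidar_depth).abs() * mask).sum() / denom
    rendered = torch.minimum((lidar_depth - rend_depth_prev).abs(),
                             (lidar_depth - rend_depth_next).abs())
    rendered = (rendered * mask).sum() / denom
    return direct + rendered
```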
### Augmenting BEV Segmentation Training The focus of our paper is not on NVS quality, but on the impact that using NVS-generated data can have on the problem of _viewpoint robustness_ in AV. Given a labeled BEV segmentation training dataset, \(D_{source}\), of size \(N\), we use our NVS method to transform \(n\) images from \(D_{source}\) to the viewpoint of the target rig, obtaining \(D_{target}^{pred}\) of size \(n\). This transformation is done by (1) estimating the depth of each image, (2) creating meshes, (3) changing the viewpoint of the cameras, and (4) rendering each image in the viewpoint of the target rig. Finally, we construct a new BEV dataset, \(D_{final}\) of size \(N\), containing the \(n\) transformed images from \(D_{target}^{pred}\) and \(N-n\) images from \(D_{source}\). The number of transformed images, \(n\), is a hyperparameter and in our experiments we transform 25%, 50%, or 100% of \(D_{source}\) to the viewpoint of the target. The reason we do not always transform all \(N\) images is that the NVS model may introduce other domain gaps; an ablation on this is done in Sec. 6. We train both the NVS model and BEV segmentation model on a real-world dataset, described in Sec. 5.1. An overview of the training pipeline is shown in Fig. 4. ## 5 Experiments and Results We show the effectiveness of our method by using it to train BEV segmentation models for diverse target rigs, without any access to real data from the target rig during training. We first train our NVS model to transform data from the source rig to the target viewpoint. Next, we transform some or all of the source rig training data to the target rig. Finally, we train the BEV segmentation model for the target rig using a combination of transformed data and source data. All training is done on real-world data, but evaluation is done with NVIDIA DRIVE Sim, allowing us to test across target rigs that are not available in public datasets. ### Datasets **Training:** We train both the NVS and the BEV segmentation model on an internal dataset of 43 real AV sessions. We subsample the images from each video at a higher frame rate for our NVS training dataset than our BEV segmentation training dataset, yielding 250,000 and 30,000 training images, respectively. All images are captured from a 120\({}^{\circ}\) f-theta lens camera. Prior to BEV segmentation training, we rectify the images to 50\({}^{\circ}\). Examples of rectified images from the source rig are shown in the first column of Fig. 6. **Evaluation:** We use simulated data from challenging scenes for the evaluation since real datasets with large viewpoint changes are not available and collecting them across many views is impractical. Simulated data could be used for train and test, but generating sufficiently large and diverse simulated train datasets is difficult. To mitigate the domain gap of training on real data and testing on simulated data, we use NVIDIA DRIVE Sim. Example images are shown in Fig. 7. To measure the domain gap, we trained a model on real data and evaluated it on both a real test dataset and a simulated test dataset from the source rig. The gap was 7.5% IoU, which is acceptable for our work, since we are concerned with relative changes in IoU, not absolute IoU. 
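Concretely, the augmentation of Sec. 4.3 amounts to swapping a fraction of the source-rig samples for their re-rendered counterparts while reusing the original BEV labels; a minimal sketch, where `nvs_transform` is a hypothetical wrapper around the NVS model and the 25/50/100% fractions come from the text:

```python
import random

def build_mixed_train_set(source_samples, nvs_transform, frac_transformed=0.5, seed=0):
    """Build D_final: keep the dataset size N, but re-render a chosen fraction of the
    source-rig images in the target-rig viewpoint. BEV labels are reused unchanged."""
    rng = random.Random(seed)
    n = int(frac_transformed * len(source_samples))
    picked = set(rng.sample(range(len(source_samples)), n))
    mixed = []
    for i, (image, label) in enumerate(source_samples):
        if i in picked:
            mixed.append((nvs_transform(image), label))   # D_target^pred: new view, same label
        else:
            mixed.append((image, label))                  # untouched source-rig sample
    return mixed
```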
### Experiment Details We demonstrate our method by transforming the dataset from the source rig, \(D_{source}\), to the viewpoint of six target rigs, training a BEV segmentation model for each, and evaluating the model on simulated data from the target rig. We conduct experiments with a single camera rig. The target rigs include pitch -10\({}^{\circ}\), -5\({}^{\circ}\), and 5\({}^{\circ}\), depth 1.5 m, and height 0.2 m and 0.8 m. Examples of source rig data transformed to each of the target rigs with the NVS model are shown in Fig. 6. We note that, quantitatively, the NVS quality is best for changes in pitch and lowest for large changes in height. Despite lower quality for some transformed viewpoints, we show that the transformed data still leads to significant improvements in BEV segmentation accuracy for each target rig. For each target rig, we train a Cross View Transformers (CVT) model three times, with 25%, 50%, and 100% of \(D_{source}\) transformed to the target rig viewpoint. We also train CVT on source rig data for comparison.
\begin{table} \begin{tabular}{l l l l l l} \hline Extrinsic & \(\Delta\) View & Source & Source* & Extr Aug & Ours \\ \hline \hline - & 0 & 0.170 & 0.170 & 0.155 & - \\ \hline Pitch & -10\({}^{\circ}\) & 0.014 & 0.078 & 0.126 & **0.165** \\ Pitch & -5\({}^{\circ}\) & 0.037 & 0.141 & 0.128 & **0.161** \\ Pitch & +5\({}^{\circ}\) & 0.016 & 0.076 & 0.028 & **0.173** \\ Depth & 1.5 m & 0.017 & 0.156 & 0.150 & **0.174** \\ Height & 0.2 m & 0.094 & 0.175 & 0.145 & **0.177** \\ Height & 0.8 m & 0.003 & 0.170 & 0.132 & **0.214** \\ \hline \end{tabular} \end{table} Table 2: **Results:** We report the IoU of the CVT model trained on a source rig and tested across target rigs where pitch, depth, and height are changed (source). We then compare against two baselines, described in the text. Last, we compare with our method, which is trained with some data transformed to the target rig view. The first row shows IoU of the source evaluated on sim data from the same viewpoint, and is our best estimate of oracle performance.
Figure 7: **Evaluation Data:** We use images from NVIDIA DRIVE Sim [20] to evaluate our method on a diverse set of target rigs. Shown here are example test images with different viewpoints.
### Baselines We compare against two baseline approaches: **- Using Train Extrinsics at Inference Time (_Source*_):** By passing in the train extrinsics to the BEV segmentation model at inference time, we find that, despite the image itself being from a different rig, performance improves. **- Extrinsic Augmentations (_Extr. Aug._):** Rather than augmenting the training images to be from the viewpoint of the target rig, we instead apply random rotations to both the extrinsic matrix and 3D bounding box labels together within the bounds of extrinsics of the target rig. ### Results We find that our approach of training BEV segmentation models with 25%, 50%, or 100% data transformed into the view of the target rig significantly improves BEV segmentation accuracy compared to training with only data from the source rig, leading to the same level of accuracy as when there is no viewpoint change. Results are shown in Table 2. We report the best IoU from the models trained with 25%, 50%, or 100% transformed data, but note that all top-performing models use only 25% or 50% transformed data, and the rest of the training data remains from the source rig. 
We observe that both baselines also significantly improve the IoU compared to the model trained only on source data, but not as much as our NVS approach. We also compute the IoU of the model trained only on the source rig and tested on synthetic data from the same viewpoint to serve as a reference upper bound for expected performance when there is no viewpoint gap, shown in the first row of Table 2. This upper bound is more reliable than training and testing on simulated data, which results in an average of 35.4% IoU across views because there is no domain gap and the limited diversity makes the train and test data visually similar. Lastly, we conducted an experiment in which we trained a model on \(\frac{1}{2}\) source rig and \(\frac{1}{6}\) +5\({}^{\circ}\) pitch, \(\frac{1}{6}\) +1.5 m depth, and \(\frac{1}{6}\) +0.2 m height data, resulting in 0.19 mean test IoU across the views in Tab. 2 (0.206 for train views and 0.178 for other views). This result suggests training on multiple views can improve IoU over training only on the target view. Altogether, our results support our hypothesis that using NVS to transform labeled train data from the viewpoint of a source rig to that of a target rig and then training a BEV segmentation model with that data can enable the creation of BEV segmentation models for target rigs without the associated cost of collecting and annotating data from each target rig. ## 6 Discussion We observe that, despite some NVS transformations leading to artifacts, e.g. the +0.8 m height transformation, the images still significantly help downstream BEV segmentation models to generalize to the desired target rig. In addition to our main results, we also conduct two ablation studies on our method, which are described below. **Amount of Transformed Data:** An open question is how much data from the source rig dataset should be transformed to the viewpoint of the target rig. While transforming all of the data may lead to a content gap due to NVS being imperfect, transforming too little may not expose the BEV segmentation model to enough examples of data from the target rig viewpoint. In our experiments, we train BEV segmentation models with 25%, 50%, and 100% transformed data. Shown in Fig. 8 is the IoU as a function of the amount of transformed training data. Figure 8: **Ablation: Varying percent transformed training data:** We observe that transforming 25-50% of the training dataset to the viewpoint of the target rig results in the best test IoU. We see that IoU consistently increases as more transformed data is added to training until 50%. The model trained with 100% underperforms, most likely due to other domain gaps introduced by NVS. **Interpolation and Extrapolation:** In our work, we focus on generating target-rig-specific BEV segmentation models without the cost of data collection. However, one may wish to create a single BEV segmentation model that generalizes to multiple camera rigs. We investigate whether our approach can enable that by testing how models trained with two viewpoints interpolate between those viewpoints and extrapolate beyond those viewpoints. We test all combinations of the pitch models trained with 50% transformed data and 50% source rig data, averaging test performance for interpolation and extrapolation.
An example of interpolation is testing a model trained on 0\({}^{\circ}\) and -10\({}^{\circ}\) pitch on -5\({}^{\circ}\) pitch, while an example of extrapolation is testing a model trained on 0\({}^{\circ}\) and -5\({}^{\circ}\) pitch on -10\({}^{\circ}\) pitch. On average, we find interpolation performance is 14.9% IoU and extrapolation performance is 14.8% IoU, suggesting the proposed method can improve generalization beyond the target rig. ## 7 Conclusion We find that changing camera viewpoint, even by small amounts, has a significant impact on BEV segmentation models that have not been trained on that viewpoint. As AVs become more ubiquitous and companies scale across different vehicle types, this problem, which we dub _viewpoint robustness_, will become critical to address. Our work makes a first attempt at improving viewpoint robustness using data generated from our method for NVS. We find that augmenting the BEV segmentation train dataset with data generated from the viewpoint of the target camera rig improves generalization to the target rig. As part of our work, we propose a method for NVS and show that it can be used to effectively mitigate the viewpoint domain gap. **Acknowledgements:** We thank Alperen Degirmenci for his valuable help with AV data preparation and Maying Shen for her valuable support with experiments.
2309.16850
Sketch2CADScript: 3D Scene Reconstruction from 2D Sketch using Visual Transformer and Rhino Grasshopper
Existing 3D model reconstruction methods typically produce outputs in the form of voxels, point clouds, or meshes. However, each of these approaches has its limitations and may not be suitable for every scenario. For instance, the resulting model may exhibit a rough surface and distorted structure, making manual editing and post-processing challenging for humans. In this paper, we introduce a novel 3D reconstruction method designed to address these issues. We trained a visual transformer to predict a "scene descriptor" from a single wire-frame image. This descriptor encompasses crucial information, including object types and parameters such as position, rotation, and size. With the predicted parameters, a 3D scene can be reconstructed using 3D modeling software like Blender or Rhino Grasshopper which provides a programmable interface, resulting in finely and easily editable 3D models. To evaluate the proposed model, we created two datasets: one featuring simple scenes and another with complex scenes. The test results demonstrate the model's ability to accurately reconstruct simple scenes but reveal its challenges with more complex ones.
Hong-Bin Yang
2023-09-28T21:02:04Z
http://arxiv.org/abs/2309.16850v1
Sketch2CADScript: 3D Scene Reconstruction from 2D Sketch using Visual Transformer and Rhino Grasshopper ###### Abstract Existing 3D model reconstruction methods typically produce outputs in the form of voxels, point clouds, or meshes. However, each of these approaches has its limitations and may not be suitable for every scenario. For instance, the resulting model may exhibit a rough surface and distorted structure, making manual editing and post-processing challenging for humans. In this paper, we introduce a novel 3D reconstruction method designed to address these issues. We trained a visual transformer to predict a "scene descriptor" from a single wire-frame image. This descriptor encompasses crucial information, including object types and parameters such as position, rotation, and size. With the predicted parameters, a 3D scene can be reconstructed using 3D modeling software like Blender or Rhino Grasshopper which provides a programmable interface, resulting in finely and easily editable 3D models. To evaluate the proposed model, we created two datasets: one featuring simple scenes and another with complex scenes. The test results demonstrate the model's ability to accurately reconstruct simple scenes but reveal its challenges with more complex ones. ## I Introduction In the field of architectural design, it is a common practice for architects to brainstorm different possibilities and communicate their design ideas through 2D sketches. These sketches serve as an initial step toward the development of the final design. Once a design direction has been chosen, architects then transfer the intermediate or final decision into a 3D model, which is more visually representative and provides a more detailed understanding of the design. Although architects commonly transfer their 2D sketches into 3D models for better visualization and detail, this can be a time-consuming task. While some research has been done on 3D reconstruction from 2D sketches [1, 2, 3], these approaches are not suitable for architectural design. This is because the structures of architecture are often combinations of simple geometric shapes like rectangular boxes or pyramids, which cannot be accurately represented using voxels and point clouds. While mesh formats may work better, the method based on deforming a template shape, as described in [3], is better suited to objects with curvature and, when applied to architectural forms, can result in undesired artifacts such as uneven surfaces and blurred edges, as shown in Figure 1. Furthermore, the 3D model must also be capable of modification over time as the design is adapted and refined. However, voxels, point clouds, and distorted meshes are not intuitive for humans to interact with or to post-process manually. An ideal solution would be to integrate the 3D reconstruction process into 3D modeling software, allowing the generated model to be easily edited. This would also make it easier for the sketch-to-3D conversion tool to be utilized effectively within the conventional design pipeline. To conclude, the objective of this project is to develop a machine learning model capable of generating a 3D model of architecture from a single 2D hand-drawn sketch, where the result can be seamlessly integrated into conventional 3D modeling software. To achieve this, a visual transformer is trained to take an image as input and generate a sequence of "scene descriptors" containing a list of all the objects appearing in the scene, along with their shapes and corresponding parameters like position, orientation, and size.
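A scene descriptor of this kind can be written down concretely; the following minimal sketch uses hypothetical field names and example values, not the exact schema of our implementation.

```python
import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class ObjectDescriptor:
    shape: str                    # e.g. "Cube", "Cylinder", "Hip"
    position: List[float]         # (x, y, z)
    rotation: List[float]         # (yaw, pitch) in degrees
    size: List[float]             # (size_x, size_y, size_z)

@dataclass
class SceneDescriptor:
    camera_pose_id: int           # index into the (azimuth, elevation) table
    objects: List[ObjectDescriptor] = field(default_factory=list)

scene = SceneDescriptor(
    camera_pose_id=17,
    objects=[
        ObjectDescriptor("Cube", [0.0, 0.0, 0.0], [0.0, 0.0], [4.0, 4.0, 3.0]),
        ObjectDescriptor("Hip", [0.0, 0.0, 3.0], [90.0, 0.0], [4.0, 4.0, 2.0]),
    ],
)
# The JSON form is what a Grasshopper or Blender script would read back to rebuild the scene.
print(json.dumps(asdict(scene), indent=2))
```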
To convert the predicted parameters into 3D objects, we programmed Rhino Grasshopper, a widely-used 3D modeling software in architecture design, to read the output and construct the scene accordingly. Figure 2 shows the overall pipeline of the proposed approach. Although this project aims to accelerate the 3D modeling process and improve the overall user experience in architectural design, generating simplified scene descriptors can also benefit robots that rely on computer vision. If a rough 3D scene can be reconstructed from an RGB image, robots can better navigate themselves and interact with different objects, increasing efficiency and accuracy in various tasks. Fig. 1: An example of the uneven surface created by the 3D reconstruction method that is based on template-mesh deformation (sketch2model[3]). ## II Related Work The proposed project, despite its primary focus on 3D reconstruction, can be viewed as a fusion of semantic segmentation, object classification, and 6DoF estimation using a distinctive approach. It is worth noting that ML-based 3D reconstruction is an active area of research, and various techniques and methodologies have been proposed to tackle the challenges associated with it. ### _3D Reconstruction_ Training an end-to-end machine learning model is the most common approach for 3D reconstruction today, especially with the presence of large-scale datasets[4, 5] and differentiable rendering[6, 7]. These learning-based methods are springing up today[8]; some take a single image as input, and some require multiple photos from different views. The model can be generated as voxels[9, 10], polygon meshes[11], or point clouds[12, 13, 14]. However, such ML-based 3D reconstruction methods are usually hampered by low generalization ability, which is an intrinsic issue related to the machine learning model. Since the overall structure of the generated models is limited, these methods typically train class-specific models for objects within the same category. As a result, they can only generate models within the same class with similar structures. For example, if a photo of a cat is fed into a model that is trained to generate cars, it will still generate a car regardless of how different they look. To tackle this problem, [15, 16] suggest using a two-stage process: the first network estimates the normal map, depth map, and silhouette as intermediate results, which are then used as the input to a second network that generates the final 3D model. This project, in contrast, adopts an entirely different approach to reconstructing 3D objects and thus has the potential to reconstruct a wider range of contexts. The most related work to the proposed project is Sketch2CAD[17], which lets the user draw an object's wireframe and automatically translates it into CAD operations. However, this means that the user has to draw very precisely, including all the hidden lines, making it different from the initial goal of this project, where the expected input is a hand-drawn sketch. ### _6DoF Estimation_ 6DoF estimation is the process of determining the position and orientation of an object or device in six degrees of freedom (x, y, z, yaw, pitch, roll). However, existing 6DoF estimation algorithms focus on tracking an object with a known 3D model[18, 19], which is different from this task, where the 3D structure is undetermined.
## III Methods ### _Data Generation_ To train the model, a Rhino Grasshopper program has been developed to generate synthetic data for training and evaluation. The data generation process includes creating the 3D scene and the corresponding 2D edge rendering. To thoroughly test the machine learning model's performance, two datasets are generated, namely the _simple dataset_ and the _complex dataset[20]_. Fig. 2: The pipeline of the proposed single image 3D model reconstruction. #### III-A1 3D Scene The 3D scene consists of multiple objects, where the shape type, position, and size are randomly assigned. Since this project aims to improve the architectural design process, the available shapes are chosen based on the appearance of typical residential buildings. Through architectural form analysis, the following shapes were chosen: Cube, Cylinder, Pyramid, Shed, Hip, A-Frame, and Mansard, as illustrated in Figure 3. The _simple dataset_ only contains cubes and cylinders. No rotation is performed. Each scene contains 1 to 5 objects. In contrast, the _complex dataset_ may have up to 10 objects in all available shapes, with objects randomly rotated at 90\({}^{\circ}\), 180\({}^{\circ}\), and 270\({}^{\circ}\) along yaw and pitch. Here, we only use yaw and pitch because of the geometric properties of the shapes we selected. Figure 4 showcases examples from both datasets. To describe the 3D scene, we employ a "scene descriptor" methodology. This descriptor includes the total number of objects in the scene and a comprehensive list of their associated parameters, such as shape, position, rotation angle, and size. These parameters are recorded and exported as a JSON file, enabling seamless data exchange between the machine-learning model and the Grasshopper script. #### III-A2 2D Sketch For each scene, multiple 2D images are rendered from various perspectives. The camera position is determined using the Horizontal Coordinate System, and the perspective is arbitrarily assigned. To maximize diversity and minimize ambiguity, we render 60 images per scene, with varying elevations (ranging from -15\({}^{\circ}\) to 45\({}^{\circ}\), every 15\({}^{\circ}\)) and azimuths (ranging from -180\({}^{\circ}\) to 180\({}^{\circ}\), every 30\({}^{\circ}\)). Two sets of images are rendered to test the precision of the ML model's prediction. The "informative" set contains the shape edges, intersections, hidden lines, and axis highlighting (rendering the x, y, and z axes with red, green, and blue), while the "normal" set contains only the shape edges and intersections. Given the limited time frame, we exclude the informative rendering for the _complex dataset_. ### _Object Classification and Parameter Prediction_ The model is developed based on Pix2Seq[21, 22], which is a visual transformer [23] framework developed for object detection. It reframes the object detection problem as a text generation task, where the model generates a sequence of tokens describing the objects in an image and their bounding box coordinates. In this project, the program is built on an open-source PyTorch implementation of Pix2Seq [24]. #### III-B1 Model Architecture The model consists of an encoder that reads an image as input and outputs the image embedding and a decoder that generates the final sequence. For the vision encoder, DeiT-III Small[25] is used, which is designed to be trained with less data, compute, and training time.
The output is then fed into a vanilla Transformer decoder[26], which generates one token at a time; the next token is predicted based on the preceding tokens and the encoded image representation. #### III-B2 Sequence construction To cast the 3D scene prediction as a text generation task, we must express the parameters in the scene descriptor discretely and assign each a corresponding vocabulary token. For the camera position, since the 2D image is rendered from a pre-defined position and angle, we keep a map of \((\mathrm{ID}_{pose},(\mathrm{azimuth},\mathrm{elevation}))\) and use the pose ID as the vocabulary directly. Figure 4: Three examples of 3D scenes from both datasets. The first row is the simple dataset, and the second is the complex dataset. Figure 5: An example of the two types of rendering. The left side is the "informative" one, which labels the x-, y-, and z-axis with red, green, and blue, respectively, and the hidden wire-frame is rendered as a dotted line. The right side is the "normal" edge rendering. Figure 3: All shapes that appear in the dataset. From left to right: Pyramid, Hip, Cube, A-frame, Shed, Cylinder, and Mansard. For the object shape, it is naturally a discrete property, so no further conversion is needed. To reconstruct a 3D scene, we need to know each object's position, rotation, and size, which are usually in a continuous domain. To tokenize these values, we adopted a strategy similar to Pix2Seq, by arbitrarily deciding the number of bins for each parameter and uniformly discretizing the value into an integer in \([0,n_{bins}-1]\). Specifically, the following equation is used for quantization and conversion, \[Q_{i}=\frac{(x_{i}-\min(X))}{(\max(X)-\min(X))}\times(n_{\mathrm{bins}}-1)\] where \(x\) is the original continuous value and \(Q\) is the quantized value. The vocabulary is only shared between different axes of the same property, resulting in a total vocabulary size equal to \(n_{\mathrm{cam-pose}}+n_{\mathrm{shape-type}}+n_{\mathrm{bin_{pos}}}+n_{\mathrm{bin_{rot}}}+n_{\mathrm{bin_{size}}}\). With the above-mentioned conversion method, an object can be represented by the following sequence: [shape-type, position-x, position-y, position-z, yaw, pitch, size-x, size-y, size-z]. To serialize multiple object descriptions and form the scene description, a random ordering strategy is used, as Pix2Seq showed that it outperforms deterministic orderings. The camera pose is encapsulated at the beginning of the sequence, which may benefit the estimation of position, rotation, and size. (A short code sketch of this tokenization is given at the end of this section.) #### III-B3 Training Detail The objective of the model is to minimize the cross-entropy between the predicted sequence and the ground truth. During training, the decoder always sees the prior tokens from the ground truth when predicting the next one. The encoder's weights are initialized from weights pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k, and the decoder's weights are randomly initialized. We used the AdamW optimizer with an initial learning rate of \(10^{-4}\) and a weight decay of \(10^{-4}\). The learning rate is warmed up for 15 epochs and then linearly decayed over the rest of the training process. To enable hand-drawn sketches as input at inference time, the initial plan was to preprocess the edge renderings with a sketch simulator[27, 28] before feeding them into the network.
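The quantization and sequence construction described in Sec. III-B2 can be sketched as follows; the bin counts, value ranges, and token offsets here are illustrative assumptions, not the exact configuration used in our experiments.

```python
SHAPES = ["Cube", "Cylinder", "Pyramid", "Shed", "Hip", "A-Frame", "Mansard"]
N_POSE, N_POS_BINS, N_ROT_BINS, N_SIZE_BINS = 60, 20, 4, 20
POS_RANGE, SIZE_RANGE = (-10.0, 10.0), (0.0, 10.0)

# One contiguous token range per property; ranges never overlap.
OFF_SHAPE = N_POSE
OFF_POS = OFF_SHAPE + len(SHAPES)
OFF_ROT = OFF_POS + N_POS_BINS
OFF_SIZE = OFF_ROT + N_ROT_BINS

def quantize(x, lo, hi, n_bins):
    """Q = (x - min) / (max - min) * (n_bins - 1), rounded to an integer bin."""
    return int(round((x - lo) / (hi - lo) * (n_bins - 1)))

def object_tokens(shape, position, yaw, pitch, size):
    """[shape-type, pos-x, pos-y, pos-z, yaw, pitch, size-x, size-y, size-z]"""
    toks = [OFF_SHAPE + SHAPES.index(shape)]
    toks += [OFF_POS + quantize(v, *POS_RANGE, N_POS_BINS) for v in position]
    toks += [OFF_ROT + int(a // 90) % N_ROT_BINS for a in (yaw, pitch)]  # 0/90/180/270 degrees
    toks += [OFF_SIZE + quantize(v, *SIZE_RANGE, N_SIZE_BINS) for v in size]
    return toks

def scene_sequence(pose_id, objects):
    seq = [pose_id]               # camera pose token leads the sequence
    for obj in objects:           # object order is randomized during training
        seq += object_tokens(*obj)
    return seq

print(scene_sequence(17, [("Cube", (0.0, 0.0, 0.0), 0, 0, (4.0, 4.0, 3.0))]))
```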
As noted above, the plan was to use a sketch simulator at inference time; however, after spending some time trying to run CLIPascene[28], I did not succeed and thus had to postpone this part as future work. ## IV Experiment To begin with, we trained the model with the simple dataset and the informative edge rendering. The number of bins is set to 20 for both position and size. The loss converges after 105 epochs. From the visual result shown in Figure 6 and the quantitative result in Table I, we can see that the 3D scene is reconstructed precisely. Later, we fine-tuned the model with the normal edge rendering, and we noticed a considerable performance drop. Since the model loses the reference to the origin and the axis information, it fails to estimate the camera pose correctly, and the error in position and size estimation also increases. With the success of training on the simple dataset, we moved to the more complex scenes. In this experiment, the number of bins is set to 200 for position, 60 for size, and 4 for rotation (since we only consider 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\), and 270\({}^{\circ}\)). However, the model fails to generate a satisfactory result. From the visual result shown in Figure 8, we can see that the predicted objects have little to do with the input image, let alone the overall scene. Initially, we suspected that the model had predicted a scene that merely matches the perspective of the input image, since the scene is reconstructed from a single image. However, this is not the case if we look closely: the shapes are not even classified correctly. In the center column of the visual result, we can see that the predicted 3D scene mainly consists of cylinders, but there is only one cylinder in the input image and the rest are triangular shapes. Fig. 6: Qualitative result of the simple dataset with informative 2D edge rendering. Fig. 7: Qualitative result of the simple dataset with normal 2D edge rendering. As for the reason for failure, one guess is that the scenes are too complex and beyond the model's capability, or that we did not provide enough data for it to train on. Also, occlusions are more common in the complex dataset since the maximum object size and the number of objects in each scene are increased. Such occlusions may produce too much noise during training. Lastly, since the synthetic dataset is generated randomly, the lack of context may also be an issue. ## V Conclusion and Limitations In this project, we proposed a transformer-based 3D model reconstruction method, which takes a single image as input and generates a sequence of object parameters, which can then be used as input for CAD software to reconstruct the 3D scene. To train and test this model, we created two datasets with two types of edge rendering and demonstrated the method's efficiency and accuracy when presented with a simple scene. Nevertheless, the proposed method has its limitations, mainly in two aspects. First, during our experiments, we saw that the model failed to predict the complex 3D scenes. Also, since it can only reconstruct objects with known shapes, even if we can post-process the 3D model with boolean operations and create objects with various shapes and topologies, it is still unlikely for the current model to reconstruct shapes with complex curvature. 3D reconstruction from a single image is still an ill-posed problem. The proposed method tries a novel approach to tackling this problem. Also, leveraging the integration with conventional 3D CAD software increases its potential to be deployed in real-world applications.
2309.07785
A bijective proof of an identity of Berkovich and Uncu
The BG-rank BG($\pi$) of an integer partition $\pi$ is defined as $$\text{BG}(\pi) := i-j$$ where $i$ is the number of odd-indexed odd parts and $j$ is the number of even-indexed odd parts of $\pi$. In a recent work, Fu and Tang ask for a direct combinatorial proof of the following identity of Berkovich and Uncu $$B_{2N+\nu}(k,q)=q^{2k^2-k}\left[\begin{matrix}2N+\nu\\N+k\end{matrix}\right]_{q^2}$$ for any integer $k$ and non-negative integer $N$ where $\nu\in \{0,1\}$, $B_N(k,q)$ is the generating function for partitions into distinct parts less than or equal to $N$ with BG-rank equal to $k$ and $\left[\begin{matrix}a+b\\b\end{matrix}\right]_q$ is a Gaussian binomial coefficient. In this paper, we provide a bijective proof of Berkovich and Uncu's identity along the lines of Vandervelde and Fu and Tang's idea.
Aritram Dhar, Avi Mukhopadhyay
2023-09-14T15:19:17Z
http://arxiv.org/abs/2309.07785v3
# Combinatorial proof of an identity of Berkovich and Uncu ###### Abstract. The BG-rank BG(\(\pi\)) of an integer partition \(\pi\) is defined as \[\text{BG}(\pi):=i-j\] where \(i\) is the number of odd-indexed odd parts and \(j\) is the number of even-indexed odd parts of \(\pi\). In a recent work, Fu and Tang ask for a direct combinatorial proof of the following identity of Berkovich and Uncu \[B_{2N+\nu}(k,q)=q^{2k^{2}-k}\begin{bmatrix}2N+\nu\\ N+k\end{bmatrix}_{q^{2}}\] for any integer \(k\) and non-negative integer \(N\) where \(\nu\in\{0,1\}\), \(B_{N}(k,q)\) is the generating function for partitions into distinct parts less than or equal to \(N\) with BG-rank equal to \(k\) and \(\begin{bmatrix}a+b\\ b\end{bmatrix}_{q}\) is a Gaussian binomial coefficient. In this paper, we provide a combinatorial proof of Berkovich and Uncu's identity along the lines of Vandervelde and Fu and Tang's idea. Key words and phrases: BG-rank, strict partition, bijection, generating function 2020 Mathematics Subject Classification: 05A15, 05A17, 05A19, 11P81, 11P83, 11P84 ## 1. Introduction An integer partition is a non-increasing finite sequence \(\pi=(\lambda_{1},\lambda_{2},\ldots)\) of non-negative integers where \(\lambda_{i}\)'s are called the parts of \(\pi\). We denote the number of parts of \(\pi\) by \(\#(\pi)\) and the largest part of \(\pi\) by \(l(\pi)\). The size of \(\pi\) is the sum of the parts of \(\pi\) and is denoted by \(|\pi|\). We say that \(\pi\) is a partition of \(n\) if \(|\pi|=n\). \(\lambda_{2i-1}\) (resp. \(\lambda_{2i}\)) are called odd-indexed (resp. even-indexed) parts of \(\pi\). In [3] and [4], Berkovich and Garvan defined the BG-rank of a partition \(\pi\), denoted by \(BG(\pi)\), as \[BG(\pi):=\sum_{i=1}^{k}(-1)^{i+1}\text{par}(\lambda_{i}),\] where \(\pi=(\lambda_{1},\lambda_{2},\ldots,\lambda_{k})\) and par(\(\lambda\)) denotes the parity of an integer \(\lambda\) which is defined as \(\text{par}(\lambda)=1\) if \(\lambda\) is odd and \(0\), otherwise. It is then easy to see that \[BG(\pi):=i-j,\] where \(i\) is the number of odd-indexed odd parts and \(j\) is the number of even-indexed odd parts. The BG-rank of a partition \(\pi\) can also be read off from the \(2\)-residue Ferrers diagram of \(\pi\). The \(2\)-residue Ferrers diagram of a partition \(\pi\) is obtained by writing the ordinary Ferrers diagram with boxes instead of dots and filling the boxes with alternate \(0\)'s and \(1\)'s in a checkerboard fashion, with the box in the top-left corner filled with a \(0\). Figure 1 shows the \(2\)-residue Ferrers diagram of the partition \(\pi=(10,7,4,2)\). It is easy to check that \(BG(\pi)\) equals the number of \(0\)'s minus the number of \(1\)'s in the \(2\)-residue Ferrers diagram of \(\pi\): each even part contributes equally many \(0\)'s and \(1\)'s, while an odd part in an odd-indexed (resp. even-indexed) row contributes one extra \(0\) (resp. \(1\)). For instance, the diagram of \(\pi=(10,7,4,2)\) contains eleven \(0\)'s and twelve \(1\)'s, so that \(BG(\pi)=-1\). Let \(B_{N}(k,q)\) denote the generating function for partitions into distinct parts less than or equal to \(N\) with BG-rank equal to \(k\). Berkovich and Uncu proved that, for any integer \(k\) and non-negative integer \(N\), \[B_{2N+\nu}(k,q)=q^{2k^{2}-k}\begin{bmatrix}2N+\nu\\ N+k\end{bmatrix}_{q^{2}}, \tag{1.1}\] where \(\nu\in\{0,1\}\) and \[\begin{bmatrix}a+b\\ b\end{bmatrix}_{q}:=\frac{(q;q)_{a+b}}{(q;q)_{a}(q;q)_{b}}\] is the Gaussian binomial coefficient, with \((A;q)_{n}:=\prod_{i=0}^{n-1}(1-Aq^{i})\) and \((A;q)_{\infty}:=\prod_{i\geq 0}(1-Aq^{i})\). _Remark 1_.: Recall that \(\begin{bmatrix}a+b\\ b\end{bmatrix}_{q}\) is the generating function for partitions having at most \(b\) parts, each of size at most \(a\). Letting \(N\to\infty\) in (1.1) yields \[\sum_{n=0}^{\infty}p_{k}^{d}(n)q^{n}=\frac{q^{2k^{2}-k}}{(q^{2};q^{2})_{\infty}}, \tag{1.2}\]
In their paper [8, Remark \(3.9\)], Fu and Tang ask for a direct combinatorial proof of (1.1). The main aim of this paper is to provide such a combinatorial proof. The rest of the paper is organized as follows. In Section 2, we present Fu and Tang's bijection. In Section 3, we present the proof of Berkovich and Uncu's identity (1.1). In Section 4, we provide some examples to illustrate the combinatorial proof of (1.1). We conclude with a few remarks in Section 5 to motivate further investigation. ## 2. Fu and Tang's bijection ### \((a,b)\)-sequences First, we will define a certain type of unimodal sequence called an \((a,b)\)-sequence introduced by Fu and Tang [8, Definition \(2.1\)]. **Definition 2.1**.: For some non-negative integer \(a\) and an integer \(1\leq b\leq l\), we call a sequence of \(l\) positive integers \(\{d_{1},\ldots,d_{l}\}\) an \((a,b)\)_-sequence of length \(l\)_ if the following conditions hold: 1. \(d_{i}=a+i\) for \(1\leq i\leq b\), 2. \(d_{i}\) forms a non-increasing sequence of positive integers for \(i\geq b\), and 3. \(\sum\limits_{i=1}^{l}(-1)^{i}d_{i}=0\). We denote the collection of all such sequences by \(\mathcal{S}_{a,b}\) and define \(\mathcal{S}:=(\bigcup_{a\geq 0,b\geq 1}\mathcal{S}_{a,b})\cup\{\varepsilon\}\) where \(\varepsilon\) is the empty sequence. For \(\Delta=\{d_{1},\ldots,d_{l}\}\), we denote \(l(\Delta)=l\), \(|\Delta|=\sum\limits_{i=1}^{l}d_{i}\), and \(|\Delta|_{\text{alt}}=\sum\limits_{i=1}^{l}(-1)^{i}d_{i}\). If \(\Delta\in\mathcal{S}_{a,b}\), we denote \(a(\Delta)=a\) and \(b(\Delta)=b\). **Example 2.2**.: \(\{5,6,7,8,3,3,2,2,2,1,1\}\) is a \((4,4)\)-sequence of length \(11\). ### The Bijection According to Chu [7], a \(k\)-Durfee rectangle for the Young diagram of a partition is an \(i\times(i+k)\) rectangle (having \(i\) rows and \(i+k\) columns) which is obtained by choosing the largest possible \(i\) such that the \(i\times(i+k)\) rectangle is contained in the Young diagram for a fixed integer \(k\). It is to be noted that Fu and Tang [8] mention that this notion of Durfee rectangle is different from the generalization by Andrews in [1]. For integers \(a\geq 0\) and \(b\geq 1\), we consider a map \(\phi_{a}:\mathcal{S}_{a,b}\rightarrow\mathcal{P}_{a,b}\) where \(\mathcal{P}_{a,b}\) is the set of all integer partitions \(\lambda=(\lambda_{1},\lambda_{2},\ldots)\) whose \(a\)-Durfee rectangle has size \(\lceil\frac{b}{2}\rceil\times(\lceil\frac{b}{2}\rceil+a)\) and \(\lambda_{\frac{b}{2}}>a+b/2\) if \(b\) is even or \(\lambda_{\frac{b+1}{2}}=a+(b+1)/2\) if \(b\) is odd. Now, we will define the map \(\phi_{a}\). Consider a sequence \(\Delta=\{d_{1},d_{2},\ldots,d_{l}\}\in\mathcal{S}_{a,b}\). The aim is to use the sequence \(\Delta\in\mathcal{S}_{a,b}\) to _double cover_ the block diagram configuration shown in Figure 2 below. The notion of double covering of the cells in the block diagram configuration is equivalent to coloring the cells by yellow (or labeling them by a '1') and then re-coloring the cells by green (or re-labeling them again by a '2') so that in the end, all the cells are colored in green (or labeled by '2'). This is exactly the reason why the base in the \(q\)-binomial coefficient in (1.1) is \(q^{2}\) instead of just \(q\) as we are counting the cells twice. We then call a block diagram _doubly covered_ when all the cells are colored green. The doubly covered block diagram will then be the Young diagram of a partition in \(\mathcal{P}_{a,b}\). Following Fu and Tang [8, Fig. 
2], in the block diagram configuration (see Figure 2 above), we call the \(i\)th labeled block \(B_{i}\). From now onwards, we label all the cells contained in \(B_{i}\) by \(\mathcal{B}_{i}\). \(B_{i}\) has size \(1\times\left(a+\frac{i+1}{2}\right)\) (resp. \(\frac{i}{2}\times 1\)) if \(i\) is odd (resp. even). We denote the area of \(B_{i}\), i.e., the number of cells labeled \(\mathcal{B}_{i}\) by \(b_{i}\). So, \(b_{1}=a+1,\,b_{2}=1,\,b_{3}=a+2,\,b_{4}=2\), and so on. We obtain \(\phi_{a}(\Delta)\) by performing the following operations: 1. Fill up \(B_{1}\) in the block diagram Figure 2 with \(d_{1}=a+1\) cells which is equivalent to labeling the \(a+1\) cells in \(B_{1}\) with '1'. 2. Use \(d_{i}\) cells first to _double cover_ the already existing cells in \(B_{i-1}\) for \(2\leq i\leq l\) and then use the remaining cells to fill \(B_{i}\). This is equivalent to using \(d_{i}\) cells to re-label Figure 2. Block diagram configuration for \(\phi_{a}\) with labeled blocks the already existing \(b_{i-1}\) cells in \(B_{i-1}\) by '\(2\)' first for \(2\leq i\leq l\) and then labeling the remaining \(d_{i}-b_{i-1}\) cells by '\(1\)' to fill \(B_{i}\). 3. Filling of \(B_{i}\)'s (labeling by '\(1\)' and re-labeling by '\(2\)') are done from left to right if \(i\) is odd and from top to bottom if \(i\) is even. 4. After having used up all the \(d_{i}\)'s where \(1\leq i\leq l\), the _doubly covered_ cells (cells which are labeled by '\(2\)') form the Young diagram of a partition (say) \(\lambda=\phi_{a}(\Delta)\). **Example 2.3**.: Suppose \(a=3\), \(b=2\), and \(\Delta=\{4,5,2,1\}\in\mathcal{S}_{3,2}\). Then following steps (1) to (4) above, we have \(\lambda=\phi_{3}(\Delta)=(5,1)\in\mathcal{P}_{3,2}\) and \(|\lambda|=|\Delta|/2=6\). For an illustration, see Figure 3 below where the intermediate steps are denoted by arrows from left to right. All the cells labeled \(\mathcal{B}_{i}\) form the \(i\)th block \(B_{i}\) and \(b_{i}\) is the number of cells labeled \(\mathcal{B}_{i}\) for \(i\in\{1,2,3\}\). Here, \(b_{1}=4\), \(b_{2}=1\), and \(b_{3}=1\). All _singly covered_ (equivalent to being labeled by '\(1\)' or counted once) cells are colored yellow and all _doubly covered_ (equivalent to being labeled by '\(2\)' or counted twice) cells are colored green. **Theorem 2.4**.: _([8, Theorem \(2.5\)]) For a fixed \(a\geq 0\) and any \(b\geq 1\), the map \(\phi_{a}\) defined above is a bijection from \(\mathcal{S}_{a,b}\) to \(\mathcal{P}_{a,b}\), such that \(|\Delta|=2|\phi_{a}(\Delta)|\), for any \(\Delta\in\mathcal{S}_{a,b}\)._ _Remark 2_.: Since \(b_{1}=a+1=d_{1}\) and \(b_{i-1}+b_{i}=d_{i}\) for \(2\leq i\leq I+1\) where \(I\) is the index of the last present block in the block diagram configuration and \(b_{I+1}=0\), we have \(\sum\limits_{i=1}^{I}d_{i}=2\sum\limits_{i=1}^{I}b_{i}\) which implies \(|\Delta|=2|\phi_{a}(\Delta)|\) as in Theorem 2.4. This justifies the fact that after using up all the \(d_{i}\)'s, none of the cells in the image partition \(\lambda=\phi_{a}(\Delta)\) are _singly covered_, i.e., none of the cells are colored yellow. ### Application of Fu and Tang's bijection to strict partitions First, we consider the map \(\iota:\mathcal{D}\to\mathcal{T}\times\mathcal{S}\) where \(\mathcal{D}\) is the set of all strict partitions and \(\mathcal{T}\) is the set of all triangular numbers, i.e., \(\mathcal{T}:=\left\{\frac{n(n+1)}{2}:n\in\mathbb{Z}\right\}\). Fu and Tang [8, Lemma \(3.1\)] proved that \(\iota\) is infact an injection. 
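The construction illustrated in Figure 3 can also be summarized computationally. The following minimal Python sketch of \(\phi_{a}\) assumes our reading of the block layout in Figure 2, namely that the odd block \(B_{2r-1}\) occupies row \(r\), columns \(1,\ldots,a+r\), and the even block \(B_{2i}\) occupies column \(a+i+1\), rows \(1,\ldots,i\); under this assumption it reproduces Example 2.3.

```python
def phi_a(delta, a):
    """Sketch of phi_a: block areas b_i and the image partition (assumed layout above)."""
    assert delta and delta[0] == a + 1          # d_1 = a + 1
    # Block areas: b_1 = d_1 and b_i = d_i - b_{i-1} for i >= 2 (Remark 2).
    b, prev = [], 0
    for d in delta:
        prev = d - prev
        b.append(prev)
    assert prev == 0, "the alternating sum of an (a, b)-sequence must vanish"

    # Row r of the image partition: the cells of B_{2r-1} plus one cell for every
    # even block B_{2i} (i >= r) whose filled prefix reaches row r, i.e. b_{2i} >= r.
    rows, num_even_blocks = [], len(b) // 2
    for r in range(1, (len(b) + 1) // 2 + 1):
        row = b[2 * r - 2]
        row += sum(1 for i in range(r, num_even_blocks + 1) if b[2 * i - 1] >= r)
        if row:
            rows.append(row)
    assert sum(delta) == 2 * sum(rows)          # |Delta| = 2 |phi_a(Delta)| (Theorem 2.4)
    return rows

print(phi_a([4, 5, 2, 1], a=3))        # [5, 1], matching Example 2.3 / Figure 3
print(phi_a([4, 5, 4, 4, 2, 1], a=3))  # [6, 3, 1]
```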
For any strict partition \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{r})\in\mathcal{D}\), we consider the shifted Young diagram of \(\lambda\). For instance, see the Young diagram in Figure 4 whose cells are colored orange. Figure 3. Applying \(\phi_{3}\) on \(\Delta=\{4,5,2,1\}\) to get the partition \(\lambda=(5,1)\in\mathcal{P}_{3,2}\) Now, construct the sequence of column lengths (read from left to right) of its shifted Young diagram. These column lengths form a unimodal sequence \(c(\lambda)=\{c_{1},c_{2},\ldots,c_{\lambda_{1}}\}\). For example, for the shifted Young diagram shown in Figure 4 above, \(c(\lambda)=\{1,2,3,4,2,2,1,1\}\). Fu and Tang [8, Lemma \(3.1\)] proved that there exists a unique integer \(0\leq m\leq r\) such that \(\sum\limits_{i=1}^{m}(-1)^{i}c_{i}=\sum\limits_{i=1}^{\lambda_{1}}(-1)^{i}c_{i}\), i.e., \(|\Delta|_{\text{alt}}=0\) where \(\Delta:=\{c_{m+1},c_{m+2},\ldots,c_{\lambda_{1}}\}\) and so, \(\Delta\in\mathcal{S}\). Now, define \(\iota(\lambda)=(t,\Delta)\) where \(t=1+2+\ldots+m=\binom{m+1}{2}\). Clearly, \(\iota(\lambda)\in\mathcal{T}\times\mathcal{S}\) and so, \(|\lambda|=\lambda_{1}+\lambda_{2}+\ldots+\lambda_{r}=c_{1}+c_{2}+\ldots+c_{ \lambda_{1}}=t+|\Delta|\). Fu and Tang [8, Lemma \(3.1\)] proved that \((t,\Delta)\in\iota(\mathcal{D})\) if and only if any one of the following conditions hold 1. \(a(\Delta)=m\), or 2. \(a(\Delta)\leq m-1\) and \(b(\Delta)=1\), or 3. \(\Delta=\varepsilon\). \(\iota\) is one-one simply because the pre-image of any \((t,\Delta)\in\mathcal{T}\times\mathcal{S}\) satisfying either (1) or (2) or (3) mentioned above can be constructed uniquely by appending columns of length \(1,2,\ldots,m\) to the left of the columns of length given by the elements of \(\Delta\) and obtaining a shifted Young diagram. ## 3. Combinatorial proof of Berkovich and Uncu's identity (1.1) We now present the statement of the main result which we prove in this section. **Theorem 3.1**.: _Let \(\nu\in\{0,1\}\), \(N\) be a non-negative integer, and \(k\) be any integer. Then, for any positive integer \(n\), the number of strict partitions \(\pi_{d}\) of \(n\) with BG-rank equal to \(k\) and \(l(\pi_{d})\leq 2N+\nu\) is equal to the number of partitions \(\pi\) of \(\frac{n-2k^{2}+k}{2}\) where \(l(\pi)\leq N+\nu-k\) and \(\#(\pi)\leq N+k\)._ Note that Theorem 3.1 together with the partition theoretic interpretation of \(q\)-binomial coefficient in Remark 1 implies Berkovich and Uncu's identity (1.1). We will now provide a combinatorial proof of Theorem 3.1. Proof.: Let the set of all strict partitions \(\pi_{d}\) of \(n\) having BG-rank equal to \(k\) and \(l(\pi_{d})\leq 2N+\nu\) be denoted by \(\mathcal{SP}_{n,k}^{N,\nu}\), the set of all partitions \(\pi\) of \(n\) with \(l(\pi)\leq L\) and \(\#(\pi)\leq m\) Figure 4. Young diagram and shifted Young diagram representing the partition \(\lambda=(8,5,2,1)\) be denoted by \(\mathcal{P}_{n,L,m}\), and \(T_{i}=i(i+1)/2\) be the \(i\)th triangular number for any integer \(i\). Clearly, \(\mathcal{SP}_{n,k}^{N,\nu}\subset\mathcal{D}\) and \(T_{i}\in\mathcal{T}\). Consider any \(\pi_{d}\in\mathcal{SP}_{n,k}^{N,\nu}\). First, construct the shifted Young diagram of \(\pi_{d}=(\lambda_{1},\ldots,\lambda_{r})\)\((\lambda_{1}\leq 2N+\nu)\) and then form the unimodal sequence \(c(\pi_{d})=\{c_{1},\ldots,c_{\lambda_{1}}\}\) where \(c_{i}\) is the length of the \(i\)th column of the shifted Young diagram of \(\pi_{d}\). 
There exists \(0\leq a\leq r\) such that \(\sum\limits_{i=1}^{a}(-1)^{i}c_{i}=\sum\limits_{i=1}^{\lambda_{1}}(-1)^{i}c_{i}\) and so, \(\Delta:=\{c_{a+1},c_{a+2},\ldots,c_{\lambda_{1}}\}\in\mathcal{S}_{a,b}\subset \mathcal{S}\) for some integer \(b\geq 1\). **Lemma 3.2**.: _For the \(\Delta\in\mathcal{S}_{a,b}\) obtained from the shifted Young diagram of \(\pi_{d}\in\mathcal{SP}_{n,k}^{N,\nu}\),_ \[a=a(\Delta)=\left\{\begin{array}{ll}-2k&\mbox{ if }k\leq 0,\\ 2k-1&\mbox{ if }k>0.\end{array}\right.\] Proof.: For \(\pi_{d}\) having BG-rank \(k\), we have \(k=-\left|\{1,2,\ldots,a\}\right|_{\mbox{\tiny alt}}=1-2+3-\ldots+(-1)^{a+1}a\). Now, we consider two cases concerning the parity of \(a\): * Case A: \(a\) is even Let \(a=2t\) for some \(t\geq 0\). Then, \[k =1-2+3-\ldots-2t\] \[=(1+3+\ldots+(2t-1))-2(1+2+\ldots+t)\] \[=t^{2}-2\cdot\frac{t(t+1)}{2}\] \[=t^{2}-t^{2}-t\] \[=-t\] \[=-\frac{a}{2}.\] * Case B: \(a\) is odd Let \(a=2t-1\) for some \(t\geq 1\). Then, \[k =1-2+3-\ldots+(2t-1)\] \[=(1+3+\ldots+(2t-1))-2(1+2+\ldots+(t-1))\] \[=t^{2}-2\cdot\frac{t(t-1)}{2}\] \[=t^{2}-t^{2}+t\] \[=t\] \[=\frac{a+1}{2}.\] Hence, \(a=-2k\) if \(k\leq 0\) and \(a=2k-1\) if \(k>0\) We will now show that for \(k\leq 0\), \(\mathcal{SP}_{n,k}^{N,\nu}\) is in bijection with \(\mathcal{P}_{\frac{n-2k^{2}+k}{2},N+\nu-k,N+k}\) and for \(k>0\), \(\mathcal{SP}_{n,k}^{N,\nu}\) is in bijection with \(\mathcal{P}_{\frac{n-2k^{2}+k}{2},N+k,N+\nu-k}\). Since \(\mathcal{P}_{n,L,m}\) is in bijection with \(\mathcal{P}_{n,m,L}\), the two sets \(\mathcal{P}_{\frac{n-2k^{2}+k}{2},N+k,N+\nu-k}\) and \(\mathcal{P}_{\frac{n-2k^{2}+k}{2},N+\nu-k,N+k}\) are equinumerous, where the bijection is the conjugation map which interchanges the rows and columns of a partition about the main diagonal in the Young's diagram respresentation of the partition. * Case I: \(k\leq 0\) One can now easily verify that for Fu and Tang's map \(\iota\big{|}_{\mathcal{SP}_{n,k}^{N,\nu}}:\mathcal{SP}_{n,k}^{N,\nu}\longrightarrow \{T_{-2k}\}\times\mathcal{S}_{-2k,b}\) is a bijection where \(\iota\big{|}_{\mathcal{SP}_{n,k}^{N,\nu}}(\pi_{d})=(T_{-2k},\Delta)\) with \(T_{-2k}=2k^{2}-k\in\mathcal{T}\) and \(\Delta\in\mathcal{S}_{-2k,b}\). Now, recall Fu and Tang's bijection \(\phi_{a}\). Consider the map \(\chi_{-}:\{T_{-2k}\}\times\mathcal{S}_{-2k,b}\longrightarrow\{T_{-2k}\}\times \mathcal{P}_{\frac{n-2k^{2}+k}{2},N+\nu-k,N+k}\) defined as \[\chi_{-}(T_{-2k},\Delta):=(T_{-2k},\phi_{-2k}\big{|}_{\mathcal{S}_{-2k,b}}( \Delta)).\] Therefore, we have \(\chi_{-}(T_{-2k},\Delta)=(T_{-2k},\pi)\) where \(\pi\in\mathcal{P}_{\frac{n-2k^{2}+k}{2},N+\nu-k,N+k}\). Thus, \(\chi_{-}\) is a bijection. Next, consider the map \(\psi_{-}:\mathcal{SP}_{n,k}^{N,\nu}\longrightarrow\{T_{-2k}\}\times\mathcal{P }_{\frac{n-2k^{2}+k}{2},N+\nu-k,N+k}\) defined as \[\psi_{-}:=\chi_{-}\circ\iota\big{|}_{\mathcal{SP}_{n,k}^{N,\nu}}.\] So, for any \(\pi_{d}\in\mathcal{SP}_{n,k}^{N,\nu}\), we have \[\psi_{-}(\pi_{d}):=\chi_{-}\left(\iota\big{|}_{\mathcal{SP}_{n,k}^{N,\nu}}(\pi _{d})\right)=\chi_{-}(T_{-2k},\Delta)=(T_{-2k},\pi)\] where \(\pi\in\mathcal{P}_{\frac{n-2k^{2}+k}{2},N+\nu-k,N+k}\). 
Clearly, \(\psi_{-}\) is an invertible map since it is the composition of two invertible maps \(\iota\big{|}_{\mathcal{SP}_{n,k}^{N,\nu}}\) and \(\chi_{-}\) where \(\psi_{-}^{-1}\) is given by \[\psi_{-}^{-1}=\left(\iota\big{|}_{\mathcal{SP}_{n,k}^{N,\nu}}\right)^{-1}\circ \chi_{-}^{-1}.\] * Case II: \(k>0\) Again, it can be verified that for Fu and Tang's map \(\iota:\mathcal{D}\longrightarrow\mathcal{T}\times\mathcal{S}\), \(\iota\big{|}_{\mathcal{SP}_{n,k}^{N,\nu}}:\mathcal{SP}_{n,k}^{N,\nu} \longrightarrow\{T_{2k-1}\}\times\mathcal{S}_{2k-1,b}\) is a bijection where \(\iota\big{|}_{\mathcal{SP}_{n,k}^{N,\nu}}(\pi_{d})=(T_{2k-1},\Delta)\) with \(T_{2k-1}=2k^{2}-k\in\mathcal{T}\) and \(\Delta\in\mathcal{S}_{2k-1,b}\). Now, recall Fu and Tang's bijection \(\phi_{a}\). Analogous to \(\chi_{-}\), consider the map \(\chi_{+}:\{T_{2k-1}\}\times\mathcal{S}_{2k-1,b}\longrightarrow\{T_{2k-1}\} \times\mathcal{P}_{\frac{n-2k^{2}+k}{2},N+k,N+\nu-k}\) defined as \[\chi_{+}(T_{2k-1},\Delta):=(T_{2k-1},\phi_{2k-1}\big{|}_{\mathcal{S}^{N,\nu}_{2 k-1,b}}(\Delta)).\] Therefore, we have \(\chi_{+}(T_{2k-1},\Delta)=(T_{2k-1},\pi)\) where \(\pi\in\mathcal{P}_{\frac{n-2k^{2}+k}{2},N+k,N+\nu-k}\). Thus, \(\chi_{+}\) is a bijection. Next, consider the map \(\psi_{+}:\mathcal{SP}^{N,\nu}_{n,k}\longrightarrow\{T_{2k-1}\}\times\mathcal{P }_{\frac{n-2k^{2}+k}{2},N+k,N+\nu-k}\) defined as \[\psi_{+}:=\chi_{+}\circ\iota\big{|}_{\mathcal{SP}^{N,\nu}_{n,k}}.\] So, for any \(\pi_{d}\in\mathcal{SP}^{N,\nu}_{n,k}\), we have \[\psi_{+}(\pi_{d}):=\chi_{+}\left(\iota\big{|}_{\mathcal{SP}^{N,\nu}_{n,k}}(\pi _{d})\right)=\chi_{+}(T_{2k-1},\Delta)=(T_{2k-1},\pi)\] where \(\pi\in\mathcal{P}_{\frac{n-2k^{2}+k}{2},N+k,N+\nu-k}\). Now, define the map \[\psi_{+}^{*}:=*\circ\psi_{+}=*\circ\chi_{+}\circ\iota\big{|}_{\mathcal{SP}^{N, \nu}_{n,k}}\] such that \[\psi_{+}^{*}(\pi_{d}):=*(\psi_{+}(\pi_{d}))=*(T_{2k-1},\pi)=(T_{2k-1},\pi^{*})\] where \(*\) is the conjugation operation and \(\pi^{*}\in\mathcal{P}_{\frac{n-2k^{2}+k}{2},N+\nu-k,N+k}\) is the conjugate partition of the partition \(\pi\in\mathcal{P}_{\frac{n-2k^{2}+k}{2},N+k,N+\nu-k}\). Observe that one can first apply the conjugation operation \(*\) on the partition \(\pi^{*}\) to get the conjugate partition \((\pi^{*})^{*}=\pi\) (since conjugation is an involution) and then apply the inverse map \(\psi_{+}^{-1}\) on \((T_{2k-1},\pi)\) to get the strict partition \(\pi_{d}\). For a detailed illustration of how the forward (resp. inverse) map \(\psi_{-}\) or \(\psi_{+}\) (resp. \(\psi_{-}^{-1}\) or \(\psi_{+}^{-1}\)) works, see the examples listed in Section 4. Thus, it is clear that for any strict partition \(\pi_{d}\in\mathcal{SP}^{N,\nu}_{n,k}\) of size \(|\pi_{d}|=n\), the image partition \(\pi\) has size \(|\pi|=\frac{n-2k^{2}+k}{2}\) and vice-versa. Finally, we focus our attention on actually obtaining the bounds on the largest part and the number of parts of \(\pi\) explicitly under the action of \(\psi_{-}\) or \(\psi_{+}\). We also show that the we can retrieve back the bound on the largest part of \(\pi_{d}\) explicitly under the action of \(\psi_{-}^{-1}\) or \(\psi_{+}^{-1}\) on \(\pi\). We first present a lemma which lies at the heart of obtaining the desired bounds. **Lemma 3.3**.: _The index of the last block present in the block diagram representation of the Young diagram of \(\pi\) is at most \(l(\pi_{d})-a-1\)._ Proof.: In the shifted Young diagram of \(\pi_{d}\), the length of the unimodal sequence whose alternating sum is zero is equal to \(l(\pi_{d})-a\). 
So, the number of blocks that can be _doubly covered_ by the elements of this sequence is at most \(l(\pi_{d})-a-1\). Now, we consider two cases according to the sign of the BG-rank \(k\) of \(\pi_{d}\in\mathcal{SP}_{n,k}^{N,\nu}\). * Case I: \(k\leq 0\) If \(k\leq 0\), then from Lemma (3.2), we have \(a=-2k\), i.e, \(a\) is even. Let \(I\) be the index of the last present block in the block diagram representation of the Young diagram of \(\pi\). From Lemma (3.3), we know that \[I \leq l(\pi_{d})-a-1\] \[\leq 2N+\nu-a-1\] \[=2N+\nu+2k-1\] \[=2(N+k)+\nu-1\] \[=\left\{\begin{array}{ll}2(N+k)-1&\text{if $\nu=0$},\\ 2(N+k)&\text{if $\nu=1$}.\end{array}\right.\] Therefore, \(\#(\pi)\leq N+k\). Now, let \(E\) be the number of even-indexed blocks present in the block diagram representation of the Young diagram of \(\pi\). Then, it is clear that \[E\leq\sum_{\begin{subarray}{c}i=2\\ 2|i\end{subarray}}^{l(\pi_{d})-a-1}1.\] Again from the block diagram representation of the Young diagram of \(\pi\), we have \[l(\pi) =a+1+E\] \[\leq a+1+\sum_{\begin{subarray}{c}i=2\\ 2|i\end{subarray}}^{l(\pi_{d})-a-1}1\] \[\leq a+1+\sum_{\begin{subarray}{c}i=2\\ 2|i\end{subarray}}^{2N+\nu-a-1}1\] \[=-2k+1+\sum_{\begin{subarray}{c}i=2\\ 2|i\end{subarray}}^{2N+\nu+2k-1}1\] \[=\left\{\begin{array}{ll}-2k+1+\sum\limits_{\begin{subarray}{c}i=2 \\ 2|i\end{subarray}}^{2N+2k-1}1&\text{if $\nu=0$,}\\ \\ -2k+1+\sum\limits_{\begin{subarray}{c}i=2\\ 2|i\end{subarray}}^{2N+2k}1&\text{if $\nu=1$}\\ \\ =\left\{\begin{array}{ll}-2k+1+N+k-1&\text{if $\nu=0$,}\\ -2k+1+N+k&\text{if $\nu=1$}\\ \end{array}\right.\\ \\ =\left\{\begin{array}{ll}N-k&\text{if $\nu=0$,}\\ N-k+1&\text{if $\nu=1$.}\\ \end{array}\right.\end{array}\right.\] Hence, \(l(\pi)\leq N+\nu-k\). For the reverse direction, since \(k\leq 0\), we know that \(a=-2k\), \(l(\pi)\leq N+\nu-k\), and \(\#(\pi)\leq N+k\). Clearly, \(l(\pi_{d})=-2k+l(\Delta)=-2k+I+1\) where \(I\) is the index of the last present block in the block diagram representation of the Young diagram of \(\pi\). Now, we consider two sub-cases regarding the parity of \(I\): * Sub-Case IA: \(I\) is odd * Since \(\#(\pi)\leq N+k\), \[I \leq 2(N+k)-1\] \[=2N+2k-1\] (3.1) \[\leq 2N+\nu+2k-1\] where (3.1) follows from the fact that \(\nu\in\{0,1\}\). Therefore, from (3.1), it follows that \(l(\pi_{d})=-2k+I+1\leq 2N+\nu\). * Sub-Case IB: \(I\) is even Since \(l(\pi)\leq N+\nu-k\), \[I \leq 2((N+\nu-k)-(a+1))\] \[=2N+2\nu-2k-2a-2\] \[=2N+2\nu+2k-2\] \[=2N+\nu+2k-1+\nu-1\] (3.2) \[\leq 2N+\nu+2k-1+\nu-1+1-\nu\] (3.3) \[=2N+\nu+2k-1\] where (3.2) follows from the fact that \(1-\nu\in\{0,1\}\). Therefore, from (3.3), it follows that \(l(\pi_{d})=-2k+I+1\leq 2N+\nu\). * Case II: \(k>0\) If \(k>0\), then from Lemma (3.2), we have \(a=2k-1\), i.e, \(a\) is odd. Let \(I\) be the index of the last present block in the block diagram representation of the Young diagram of \(\pi\). From Lemma (3.3), we know that \[I \leq l(\pi_{d})-a-1\] \[\leq 2N+\nu-a-1\] \[=2N+\nu-2k\] \[=2(N+\nu-k)-\nu\] \[=\left\{\begin{array}{ll}2(N+\nu-k)&\mbox{if $\nu=0$},\\ 2(N+\nu-k)-1&\mbox{if $\nu=1$}.\end{array}\right.\] Therefore, \(\#(\pi)\leq N+\nu-k\). 
If \(E\) is the number of even-indexed blocks present in the block diagram representation of the Young diagram of \(\pi\), \[E\leq\sum_{\begin{subarray}{c}i=2\\ 2|i\end{subarray}}^{l(\pi_{d})-a-1}1.\] Again, from the block diagram representation of the Young diagram of \(\pi\), we have \[l(\pi) =a+1+E\] \[\leq a+1+\sum_{\begin{subarray}{c}i=2\\ 2|i\end{subarray}}^{l(\pi_{d})-a-1}1\] \[\leq a+1+\sum_{\begin{subarray}{c}i=2\\ 2|i\end{subarray}}^{2N+\nu-a-1}1\] \[=2k+\sum_{\begin{subarray}{c}i=2\\ 2|i\end{subarray}}^{2N+\nu-2k}1\] \[=\left\{\begin{array}{ll}2k+\sum_{\begin{subarray}{c}i=2\\ 2|i\end{subarray}}^{2N-2k}1&\mbox{if $\nu=0$},\\ 2k+\sum_{\begin{subarray}{c}i=2\\ 2|i\end{subarray}}^{2N-2k+1}1&\mbox{if $\nu=1$}\end{array}\right.\] \[=\left\{\begin{array}{ll}2k+N-k&\mbox{if $\nu=0$},\\ 2k+N-k&\mbox{if $\nu=1$}.\end{array}\right.\] Hence, \(l(\pi)\leq N+k\). For the reverse direction, since \(k>0\), we know that \(a=2k-1\), \(l(\pi)\leq N+k\), and \(\#(\pi)\leq N+\nu-k\). Clearly, \(l(\pi_{d})=2k-1+l(\Delta)=2k-1+I+1=2k+I\) where \(I\) is the index of the last present block in the block diagram representation of the Young diagram of \(\pi\). Now, we consider two sub-cases regarding the parity of \(I\): - Sub-Case IIA: \(I\) is odd Since \(\#(\pi)\leq N+\nu-k\), \[I \leq 2(N+\nu-k)-1\] \[=2N+2\nu-2k-1\] \[=2N+\nu-2k+\nu-1\] (3.4) \[\leq 2N+\nu-2k+\nu-1+1-\nu\] (3.5) \[=2N+\nu-2k\] where (3.4) follows from the fact that \(1-\nu\in\{0,1\}\). Therefore, from (3.5), it follows that \(l(\pi_{d})=2k+I\leq 2N+\nu\). - Sub-Case IIB: \(I\) is even Since \(l(\pi)\leq N+k\), \[I \leq 2((N+k)-(a+1))\] \[=2N+2k-2a-2\] \[=2N-2k\] \[=2N+\nu-2k-\nu\] (3.6) \[\leq 2N+\nu-2k-\nu+\nu\] (3.7) \[=2N+\nu-2k\] where (3.6) follows from the fact that \(\nu\in\{0,1\}\). Therefore, from (3.7), it follows that \(l(\pi_{d})=2k+I\leq 2N+\nu\). Thus, we conclude that in the forward direction, \(\#(\pi)\leq N+k\), \(l(\pi)\leq N+\nu-k\) if \(k\leq 0\) and \(\#(\pi)\leq N+\nu-k\), \(l(\pi)\leq N+k\) if \(k>0\) and in the reverse direction, \(l(\pi_{d})\leq 2N+\nu\) irrespective of the sign of \(k\). This completes the proof of Theorem 3.1. ## 4. Examples illustrating Theorem 3.1 In this section, we present four different examples where we show the correspondences \(\pi_{d}\underset{\psi_{+}^{-1}}{\overset{\psi_{+}}{\rightleftarrows}}\)\((T_{a},\pi)\) and \(\pi_{d}\underset{\psi_{-}^{-1}}{\overset{\psi_{-}}{\rightleftarrows}}(T_{a},\pi)\). Here, \(\pi_{d}\in\mathcal{SP}_{n,k}^{N,\nu}\) is a strict partition with fixed \(BG\)-rank \(k\) and \(l(\pi_{d})\leq 2N+\nu\), \(T_{a}=\frac{a(a+1)}{2}\) is the triangular part where \(a=a(\Delta)\) with \(\Delta=\{d_{1},d_{2},\ldots,d_{l(\Delta)}\}\in\mathcal{S}_{a,b}\) obtained from the shifted Young diagram of \(\pi_{d}\), \(\pi\in\mathcal{P}_{\frac{n-2k^{2}+k}{2},N+\nu-k,N+k}\) is a partition where \(l(\pi)\leq N+\nu-k\), \(\#(\pi)\leq N+k\) if \(k\leq 0\), and \(\pi\in\widehat{\mathcal{P}}_{\frac{n-2k^{2}+k}{2},N+k,N+\nu-k}\) is a partition where \(l(\pi)\leq N+k\), \(\#(\pi)\leq N+\nu-k\) if \(k>0\). In examples 4.1, 4.2, 4.3, and 4.4, all _singly covered_ (equivalent to being labeled by '1' or counted once) cells are colored yellow and all _doubly covered_ (equivalent to being labeled by '2' or counted twice) cells are colored green. The cells labeled \(\mathcal{B}_{i}\) form the \(i\)th block \(B_{i}\) and \(b_{i}\) is the number of cells labeled \(\mathcal{B}_{i}\) for \(i\in\{1,2,3,\ldots\}\). 
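Before turning to the examples, the forward direction of the maps used in the proof can be sketched computationally; the helper names below are ours, and only standard Python is used. Running the sketch on \(\pi_{d}=(9,7,5,4,1)\) reproduces the data of Example 4.1.

```python
def bg_rank(parts):
    """BG(pi): (# odd-indexed odd parts) - (# even-indexed odd parts), parts 1-indexed."""
    return sum((-1) ** i * (p % 2) for i, p in enumerate(parts))

def shifted_column_lengths(parts):
    """Column lengths c_1, ..., c_{lambda_1} (left to right) of the shifted Young diagram."""
    return [sum(1 for i, p in enumerate(parts) if i < j and p + i >= j)
            for j in range(1, parts[0] + 1)]

def iota(parts):
    """Split c(pi_d) into the staircase prefix {1, ..., a} and the tail Delta whose
    alternating sum vanishes, returning the triangular part T_a and Delta."""
    c = shifted_column_lengths(parts)
    total = sum((-1) ** j * x for j, x in enumerate(c, start=1))
    a = next(m for m in range(len(parts) + 1)
             if sum((-1) ** j * x for j, x in enumerate(c[:m], start=1)) == total)
    return a * (a + 1) // 2, c[a:]

pi_d = [9, 7, 5, 4, 1]
print(bg_rank(pi_d))                 # 2, so a = 2k - 1 = 3 by Lemma 3.2
print(shifted_column_lengths(pi_d))  # [1, 2, 3, 4, 5, 4, 4, 2, 1]
print(iota(pi_d))                    # (6, [4, 5, 4, 4, 2, 1]); phi_3 of this Delta is (6, 3, 1)
```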
In example 4.1, we show all the intermediate steps (denoted by arrows from left to right) for the forward map in detail. However, in examples 4.2, 4.3, and 4.4, we just portray the strict partition \(\pi_{d}\) and the image \((T_{a},\pi)\) without displaying the intermediate steps. **Example 4.1**.: Let \(\pi_{d}=(9,7,5,4,1)\in\mathcal{SP}_{26,2}^{4,1}\) so that \(l(\pi_{d})=9\leq 2N+\nu=9\). Since \(k=2>0\), by Lemma 3.2, \(a=2k-1=3\) which implies \(T_{3}=6\) is the triangular part. From the shifted Young diagram of \(\pi_{d}\), \(c(\pi_{d})=\{1,2,3,4,5,4,4,2,1\}\), and \(\Delta=\{4,5,4,4,2,1\}\). \(\psi_{+}(\pi_{d})=(T_{3},\pi)\). So, \(b_{1}=4\), \(b_{2}=1\), \(b_{3}=3\), \(b_{4}=1\), and \(b_{5}=1\) which implies \(\pi=(6,3,1)\). Clearly, \(l(\pi)=6=N+k\) and \(\#(\pi)=3=N+\nu-k\). Hence, \(\pi\in\mathcal{P}_{10,6,3}\). Now, for the reverse direction, we are given \(T_{3}=6\) and \(\pi=(6,3,1)\in\mathcal{P}_{10,6,3}\). So, the solutions to \(2k^{2}-k=6\) are \(k=2\) and \(k=-\frac{3}{2}\). Since \(k\in\mathbb{Z}\), \(k=2>0\). On solving \(N+k=6\) and \(N+\nu-k=3\), we have \((N,\nu)=(4,1)\). On solving \(\frac{n-6}{2}=10\), we have \(|\pi_{d}|=n=26\). Now, \(a=2\cdot 2-1=3\) since \(k=2>0\) which implies \(b_{1}=a+1=4\), \(b_{2}=1\), \(b_{3}=3\), \(b_{4}=1\), and \(b_{5}=1\) following the block diagram configuration in Figure 1. Now, we obtain \(d_{1}=b_{1}=4\), \(d_{2}=b_{1}+b_{2}=5\), \(d_{3}=b_{2}+b_{3}=4\), \(d_{4}=b_{3}+b_{4}=4\), \(d_{5}=b_{4}+b_{5}=2\), and \(d_{6}=b_{5}+b_{6}=1\) since \(b_{6}=0\). Thus, we obtain the sequence \(\{4,5,4,4,2,1\}\) which we write column-wise and if we append columns of length \(\{1,2,3\}\) to the left of the column of length \(4\), we retrieve back the shifted Young diagram of the partition \(\pi_{d}=(9,7,5,4,1)\in\mathcal{SP}_{26,2}^{4,1}\). **Example 4.2**.: Let \(\pi_{d}=(12,11,6,4,2)\in\mathcal{SP}_{35,-1}^{6,0}\) so that \(l(\pi_{d})=12\leq 2N+\nu=12\). Since \(k=-1\leq 0\), by Lemma 3.2, \(a=-2k=2\) which implies \(T_{2}=3\) is the triangular part. From the shifted Young diagram of \(\pi_{d}\), \(c(\pi_{d})=\{1,2,3,4,5,5,4,3,2,2,2,2\}\), and \(\Delta=\{3,4,5,5,4,3,2,2,2,2\}\). \(\psi_{-}(\pi_{d})=(T_{2},\pi)\). So, \(b_{1}=3\), \(b_{2}=1\), \(b_{3}=4\), \(b_{4}=1\), \(b_{5}=3\), \(b_{6}=0\), \(b_{7}=2\), \(b_{8}=0\), and \(b_{9}=2\) which implies \(\pi=(5,4,3,2,2)\). Clearly, \(l(\pi)=5<N+\nu-k=7\) and \(\#(\pi)=5=N+k\). Hence, \(\pi\in\mathcal{P}_{16,7,5}\). Now, for the reverse direction, we are given \(T_{2}=3\) and \(\pi=(5,4,3,2,2)\in\mathcal{P}_{16,7,5}\). So, the solutions to \(2k^{2}-k=3\) are \(k=-1\) and \(k=\frac{3}{2}\). Since \(k\in\mathbb{Z}\), \(k=-1\leq 0\). On solving \(N+\nu-k=7\) and \(N+k=5\), we have \((N,\nu)=(6,0)\). On solving \(\frac{n-3}{2}=16\), we have \(|\pi_{d}|=n=35\). Now, \(a=-2\cdot(-1)=2\) since \(k=-1\leq 0\) which implies \(b_{1}=a+1=3\), \(b_{2}=1\), \(b_{3}=4\), \(b_{4}=1\), \(b_{5}=3\), \(b_{6}=0\), \(b_{7}=2\), \(b_{8}=0\), and \(b_{9}=2\) following the block diagram configuration in Figure 1. Now, we obtain \(d_{1}=b_{1}=3\), \(d_{2}=b_{1}+b_{2}=4\), \(d_{3}=b_{2}+b_{3}=5\), \(d_{4}=b_{3}+b_{4}=5\), \(d_{5}=b_{4}+b_{5}=4\), \(d_{6}=b_{5}+b_{6}=3\), \(d_{7}=b_{6}+b_{7}=2\), \(d_{8}=b_{7}+b_{8}=2\), \(d_{9}=b_{8}+b_{9}=2\), and \(d_{10}=b_{9}+b_{10}=2\) since \(b_{10}=0\). 
Thus, we obtain the sequence \(\{3,4,5,5,4,3,2,2,2,2\}\) which we write column-wise and if we append columns of length \(\{1,2\}\) to the left of the column of length \(3\), we retrieve back the shifted Young diagram of the partition \(\pi_{d}=(12,11,6,4,2)\in\mathcal{SP}^{6,0}_{35,-1}\). **Example 4.4**.: Let \(\pi_{d}=(11,8,7,4,3,1)\in\mathcal{SP}_{34,2}^{6,1}\) so that \(l(\pi_{d})=11\leq 2N+\nu=13\). Since \(k=2>0\), by Lemma 3.2, \(a=2k-1=3\) which implies \(T_{3}=6\) is the triangular part. From the shifted Young diagram of \(\pi_{d}\), \(c(\pi_{d})=\{1,2,3,4,5,6,5,3,3,1,1\}\), and \(\Delta=\{4,5,6,5,3,3,1,1\}\). \(\psi_{+}(\pi_{d})=(T_{3},\pi)\). So, \(b_{1}=4\), \(b_{2}=1\), \(b_{3}=5\), \(b_{4}=0\), \(b_{5}=3\), \(b_{6}=0\), and \(b_{7}=1\) which implies \(\pi=(5,5,3,1)\). Clearly, \(l(\pi)=5<N+k=8\) and \(\#(\pi)=4<N+\nu-k=5\). Hence, \(\pi\in\mathcal{P}_{14,8,5}\). Now, for the reverse direction, we are given \(T_{3}=6\) and \(\pi=(5,5,3,1)\in\mathcal{P}_{14,8,5}\). So, the solutions to \(2k^{2}-k=6\) are \(k=2\) and \(k=-\frac{3}{2}\). Since \(k\in\mathbb{Z}\), \(k=2>0\). On solving \(N+k=8\) and \(N+\nu-k=5\), we have \((N,\nu)=(6,1)\). On solving \(\frac{n-6}{2}=14\), we have \(|\pi_{d}|=n=34\). Now, \(a=2\cdot 2-1=3\) since \(k=2>0\) which implies \(b_{1}=a+1=4\), \(b_{2}=1\), \(b_{3}=5\), \(b_{4}=0\), \(b_{5}=3\), \(b_{6}=0\), and \(b_{7}=1\) following the block diagram configuration in Figure 1. Now, we obtain \(d_{1}=b_{1}=4\), \(d_{2}=b_{1}+b_{2}=5\), \(d_{3}=b_{2}+b_{3}=6\), \(d_{4}=b_{3}+b_{4}=5\), \(d_{5}=b_{4}+b_{5}=3\), \(d_{6}=b_{5}+b_{6}=3\), \(d_{7}=b_{6}+b_{7}=1\), and \(d_{8}=b_{7}+b_{8}=1\) since \(b_{8}=0\). Thus, we obtain the sequence \(\{4,5,6,5,3,3,1,1\}\) which we write column-wise and if we append columns of length \(\{1,2,3\}\) to the left of the column of length \(4\), we retrieve back the shifted Young diagram of the partition \(\pi_{d}=(11,8,7,4,3,1)\in\mathcal{SP}_{34,2}^{6,1}\). ## 5. Concluding remarks 1. We get the bounds on the largest part and the number of parts of the image partition \(\pi\) from the \(q\)-binomial coefficient on the right-hand side of (1.1). However, it will be interesting to examine the conditions on \(\pi_{d}\) under which the bounds on both the largest part and the number of parts of the image partition \(\pi\), as in the statement of Theorem 3.1, become exact equalities. One may even like to investigate conditions on \(\pi_{d}\) under which any one of the two bounds, i.e., either the bound on the largest part or the bound on the number of parts of \(\pi\) becomes an exact equality. 2. It will be worth finding an exact formula (or at least the generating function) of the number of strict partitions of an integer \(N\) with fixed BG-rank \(k\), fixed largest part \(L\), and fixed number of parts \(M\). 3. Let \(\nu\in\{0,1\}\), \(N\) be any non-negative integer and \(k\) be any integer. 
If \(\tilde{B}_{N}(k,q)\) denotes the generating function for the number of partitions into parts less than or equal to \(N\) with BG-rank equal to \(k\), then Berkovich and Uncu [5, Theorem \(3.2\)] showed that (5.1) \[\tilde{B}_{2N+\nu}(k,q)=\frac{q^{2k^{2}-k}}{(q^{2};q^{2})_{N+k}(q^{2};q^{2})_{N +\nu-k}}.\] Summing over all values of \(k\) in (1.1), we get [5, Theorem \(3.3\)] (5.2) \[\sum_{k=-N}^{N+\nu}q^{2k^{2}-k}\begin{bmatrix}2N+\nu\\ N+k\end{bmatrix}_{q^{2}}=(-q;q)_{2N+\nu}.\] Using (5.1) and (5.2), one then gets a proof of the following identity [5, Corollary \(3.4\)] (5.3) \[\sum_{k=-N}^{N+\nu}\frac{q^{2k^{2}-k}}{(q^{2};q^{2})_{N+k}(q^{2};q^{2})_{N+ \nu-k}}=\frac{1}{(q;q)_{2N+\nu}}.\] It will be interesting to look at a direct combinatorial proof of (5.1) and (5.3). ## 6. Acknowledgments The authors would like to thank Alexander Berkovich for encouraging them to prove (1.1) using combinatorial methods and for his very helpful comments and suggestions. The authors would also like to thank George Andrews for his kind interest and Ali Uncu for previewing a preliminary draft of this paper and for his helpful suggestions.
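For small values of \(N\) and \(\nu\), identity (5.2) above can be verified mechanically by expanding both sides as polynomials in \(q\). The following SymPy sketch illustrates such a check; it is an illustration only, with helper names of our own choosing, and is not part of the combinatorial argument.

```python
# Illustrative SymPy check of identity (5.2) for small N and nu; the helper
# names are ours and the check is not part of the combinatorial argument.
import sympy as sp

q = sp.symbols('q')

def q_binomial(m, r, var):
    """Gaussian binomial coefficient [m choose r] in the variable var."""
    if r < 0 or r > m:
        return sp.Integer(0)
    num = sp.Mul(*[1 - var**(m - r + i) for i in range(1, r + 1)])
    den = sp.Mul(*[1 - var**i for i in range(1, r + 1)])
    return sp.cancel(num / den)

def check_52(N, nu):
    """sum_{k=-N}^{N+nu} q^(2k^2-k) [2N+nu choose N+k]_{q^2}  ==  (-q; q)_{2N+nu} ?"""
    lhs = sum(q**(2*k**2 - k) * q_binomial(2*N + nu, N + k, q**2)
              for k in range(-N, N + nu + 1))
    rhs = sp.Mul(*[1 + q**i for i in range(1, 2*N + nu + 1)])   # (-q; q)_{2N+nu}
    return sp.simplify(sp.expand(lhs - rhs)) == 0

print(all(check_52(N, nu) for N in range(4) for nu in (0, 1)))  # expected: True
```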
2309.09970
Empirical Study of Mix-based Data Augmentation Methods in Physiological Time Series Data
Data augmentation is a common practice to help generalization in the procedure of deep model training. In the context of physiological time series classification, previous research has primarily focused on label-invariant data augmentation methods. However, another class of augmentation techniques (\textit{i.e., Mixup}) that emerged in the computer vision field has yet to be fully explored in the time series domain. In this study, we systematically review the mix-based augmentations, including mixup, cutmix, and manifold mixup, on six physiological datasets, evaluating their performance across different sensory data and classification tasks. Our results demonstrate that the three mix-based augmentations can consistently improve the performance on the six datasets. More importantly, the improvement does not rely on expert knowledge or extensive parameter tuning. Lastly, we provide an overview of the unique properties of the mix-based augmentation methods and highlight the potential benefits of using the mix-based augmentation in physiological time series data.
Peikun Guo, Huiyuan Yang, Akane Sano
2023-09-18T17:51:47Z
http://arxiv.org/abs/2309.09970v1
# Empirical Study of Mix-based Data Augmentation Methods in Physiological Time Series Data ###### Abstract Data augmentation is a common practice to help generalization in the procedure of deep model training. In the context of physiological time series classification, previous research has primarily focused on label-invariant data augmentation methods. However, another class of augmentation techniques (_i.e., Mixup_) that emerged in the computer vision field has yet to be fully explored in the time series domain. In this study, we systematically review the mix-based augmentations, including mixup, cutmix, and manifold mixup, on six physiological datasets, evaluating their performance across different sensory data and classification tasks. Our results demonstrate that the three mix-based augmentations can consistently improve the performance on the six datasets. More importantly, the improvement does not rely on expert knowledge or extensive parameter tuning. Lastly, we provide an overview of the unique properties of the mix-based augmentation methods and highlight the potential benefits of using the mix-based augmentation in physiological time series data. Our code and results are available at [https://github.com/comp-well-org/Mix-Augmentation-for-Physiological-Time-Series-Classification](https://github.com/comp-well-org/Mix-Augmentation-for-Physiological-Time-Series-Classification). Data augmentation, mixup, physiological time series ## I Introduction Data augmentation is a crucial regularization technique for deep neural network models, as it serves to inform the network of potential variations in the input data during the training stage while preserving the integrity of the labels. This technique has been shown to improve network generalization, [1] by not only artificially increasing the size of the dataset but also imparting inductive bias through the encoding of information related to data invariances. Traditional data augmentation techniques aim to increase the statistical support of the training data distribution by utilizing human knowledge and adding additional virtual samples from the vicinity distribution of training samples. This approach has been shown to improve generalization, as demonstrated in previous literature. Such data augmentations have been employed actively and effectively in computer vision [1, 2, 3, 4] and speech recognition and synthesis [5, 6, 7]. The selection of specific augmentation methods remains a challenging task, as it is often based on heuristics and is highly dependent on the dataset, task, and even model architecture [8]. However, unlike in other domains, time series, particularly physiological data, do not follow a straightforward rule for label-invariant transformation. Methods such as jittering, rotation, scaling, permutation, magnitude warping, time warping, window slicing, and window warping have been shown to have unstable performance across datasets and tasks, or require human understandings of the data [9, 10, 11]. Two significant limitations of traditional transformation-based augmentations on physiological time series data are that: (1) certain transformations can be detrimental to the integrity of the physiological signal, and (2) the majority of traditional augmentations are data-dependent, lacking generalization and consistency across different datasets and tasks. 
An alternative approach is represented by mixup regularization [4], which is based on the assumption that linear interpolations of feature vectors should lead to linear interpolations of the associated targets. Despite its simplicity, mixup has been shown to be effective across different domains (computer vision [4, 12, 13] and speech [14, 15, 16]) and different tasks. For time series classification tasks, previous studies also employed mix-based augmentations to enhance model representation and generalization [17, 18, 19]. However, none of the previous works provide a thorough empirical study of mix-based augmentations across various types of physiological times series, regarding both the quantitative gain in the metrics and the benefits of feature representation. This study aims to evaluate the efficacy of mix-based data augmentation in the context of time series classification. Our evaluation compares mix-based augmentation against traditional data augmentation techniques, as classified in a previous survey, focusing on basic label-invariant time-domain transformations commonly used in time series classification. The baseline augmentations evaluated include jittering, rotation, scaling, permutation, magnitude warping, time warping, window slicing, and window warping. The unique benefits of mix-based augmentation are evaluated, and the contributions of this paper can be summarized as follows: * We present an empirical study of three mixup-based data augmentation methods (i.e., mixup, cutmix, and manifold mixup) in the context of time series classification. We provide detailed formulations of these methods and evaluate their performances on six physiological and biobehavioral datasets. * Our experiments reveal two significant distinctions between mixup-based augmentations and traditional data transformations. First, mixup-based methods do not re quire human expert priors, making them more practical in various applications. Second, mixup-based methods consistently achieve higher or comparable performance compared to traditional methods. ## II Related works ### _Traditional label-invariant time series data augmentation_ For physiological time series data, the time domain transforms manipulate the original time series directly, as compared to more advanced augmentations using generative approaches. In this paper, we focus the evaluations on the following traditional augmentations: jittering, rotation, scaling, permutation, window warping, and window slicing. Jittering refers to the injection of Gaussian or more sophisticated noise patterns, such as spikes and slope-like trends, into the raw signals. The schemes are introduced in [20]. Rotation for time series is achieved by multiplying the signal by a random rotation matrix. As it is not as suitable for time series data as in the image domain, one commonly used special case is flipping (changing the sign of the original time series). Permutation is a transformation that randomly shuffles segments of a time series. This operation does not preserve the sequential information of the original data. Window slicing or cropping, introduced in [21], randomly extracts continuous slices from the original samples. The window warping method is uniquely applicable to time series. It randomly selects a time interval, then upsamples or downsamples the segment, while keeping the rest of the time ranges unaltered. Window warping changes the total length of the original signal, therefore it is usually used along with window cropping. 
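To make the transformations just described concrete, the following NumPy sketch shows one common way to implement jittering, scaling, permutation, and window slicing for a series of shape (length, channels). It is illustrative only, and the parameter values are placeholders rather than the settings used in the experiments below.

```python
# Illustrative NumPy versions of the label-invariant transforms described above,
# for a multivariate series x of shape (length, channels); parameter values are
# placeholder defaults, not the settings used in the experiments below.
import numpy as np

def jitter(x, sigma=0.03):
    """Inject Gaussian noise."""
    return x + np.random.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1):
    """Multiply each channel by a random factor drawn around 1."""
    return x * np.random.normal(1.0, sigma, size=(1, x.shape[1]))

def permute(x, n_segments=4):
    """Split the time axis into segments and shuffle their order."""
    splits = np.array_split(np.arange(x.shape[0]), n_segments)
    order = np.random.permutation(len(splits))
    return np.concatenate([x[splits[i]] for i in order], axis=0)

def window_slice(x, ratio=0.9):
    """Crop a random contiguous window and stretch it back to the original length."""
    target = max(1, int(round(x.shape[0] * ratio)))
    start = np.random.randint(0, x.shape[0] - target + 1)
    cropped = x[start:start + target]
    idx = np.linspace(0, target - 1, num=x.shape[0])
    return np.stack([np.interp(idx, np.arange(target), cropped[:, c])
                     for c in range(x.shape[1])], axis=1)
```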
For time series classification tasks, all the above-mentioned augmentations do not change the labels of the altered training samples. The effectiveness of these augmentations has been previously investigated in the literature. An empirical study about biobehavioral time series data augmentation [9] concludes that, while some augmentations are beneficial for biobehavioral classification tasks, their effectiveness varies across different datasets and model architectures. This finding agrees with [11], which also highlights the inconsistency of transformation-based augmentations across different non-physiological time series datasets. Based on the analysis of the literature, we claim that two significant limitations exist when applying traditional transformation-based augmentations to physiological time series data: * Certain transformations can be detrimental to the integrity of the physiological signals. For instance, rotation, permutation, and warping can be more harmful to ECG signals, as ECG beats possess relatively fixed patterns, such as the order, width, and intensity of the wave components. * The majority of traditional augmentations are data-dependent, meaning that they require expert prior knowledge or multiple trials to select an appropriate transformation for a specific problem, due to the diverse properties of biobehavioral time series. ### _Mixup: Vicinal Risk Minimization_ Mixup is a data augmentation technique introduced by [4] to train neural networks by constructing virtual training examples using convex combinations of pairs of examples and their labels. The methodology behind Mixup, as described in this paper, is rooted in Vicinal Risk Minimization (VRM), which diverges from the conventional Empirical Risk Minimization (ERM) by drawing examples from a vicinity distribution of the training examples. This aims to enlarge the support for the training distribution.
Fig. 1: The overview of mix-based augmentation procedure for physiological time series classification. From left to right: two sequences \(x_{i},x_{j}\) are shown as raw input signals. For mixup or cutmix, in each training epoch, virtual samples are created from the mini-batches; for manifold mixup, the linear combination is applied at the feature map level. For any of the three mix-based augmentations, the labels for the virtual samples are also mixed with the corresponding weights, shown on the right. NORM, MI, and STTC are representative classes of the PTB-XL dataset. See details in Figure 3.
As stated in Section II, prior knowledge has been traditionally required for the identification of the vicinity or neighborhood. However, Mixup provides a more practical and data-agnostic alternative, as it does not necessitate domain expertise. Despite its simplicity, Mixup presents the following distinct advantages: 1. **Regularization**: The linear relationship established by mixup transformations between data augmentation and the supervision signal results in a strong regularization of the model's state, leading to improved performance. 2. **Generalization**: The authors of previous studies have reported improvements in generalization error for state-of-the-art models trained on ImageNet, CIFAR, speech, and tabular datasets when using mixup. Theoretical analysis [13] suggests that the soft targets of mixup virtual samples aid in model generalization in a manner similar to label smoothing and knowledge distillation. Interpolation/extrapolation of nearest neighbors in feature space can also improve generalization [22]. 
Since mixup was proposed, many incremental works have emerged and demonstrated improvements with different focuses, such as cutmix [23], manifold mixup [12], MixMatch [24], and AlignMix [25]. In this project, we choose to evaluate mixup, cutmix, and manifold mixup, as they have not been investigated in the context of time series classification in literature. ### _Mixup for physiological time series data_ In the domain of physiological time series classification, mixup has been employed to enhance generalization during training as demonstrated in prior studies. [17, 18] employ mixup for better generalization in ECG classification task, in the training batches of CNN models. [19] states that mixup improves the generalization performance of the ECG classification model regardless of leads and evaluation metrics. However, these studies lack a thorough examination of mixup's mechanism and reasons for performance improvement, and none of them have conducted ablation studies about mixup. Furthermore, although the 1D variant of vanilla Mixup [4] has been utilized in previous studies, the empirical results for cutmix [23] and manifold mixup [12] are currently lacking. In [26], the performance of mixup and cutmix were evaluated on the UEAMTSC dataset [27] using InceptionTime [28] as the baseline model. However, the subsets of UEAMTSC in that study were of small scale and not strictly comprised of time series data (_e.g_.image contours). This paper, on the other hand, aims to investigate the effectiveness of the mix-based methods on various physiological time series datasets, utilizing a higher capacity residual network structure. ## III Augmentation methods In this section, we provide a formal introduction and implementation of mix-based augmentations. Figure 1 shows the overall paradigm of how the mix-based augmentation is applied during the training process. The intrinsic difference between mix-based and traditional augmentations will be discussed in section V. The technical details for the implementation of the augmentations can be found in section IV-B. ### _Cutout_ In this study, the Cutout augmentation serves as a benchmark for comparison against mix-based data augmentation methods. Cutout, originally introduced in the computer vision field to address occlusion issues, involves the removal of contiguous sections of data. To evaluate its effectiveness in the context of time series classification, a single-item transformation, similar to traditional label-invariant regularizations, is applied. Specifically, a random contiguous section of a time series is replaced with zeros, which can be considered a dropout (zero-masking) operation at the input layer. The size of the random time segment is fixed, while the starting index of the interval is randomly drawn from a uniform distribution, and applied to all channels of a single data point. ### _Mixup_ The time series mix-based augmentations in this study leverage multiple data samples from one training minibatch to generate virtual data points. We examine three commonly utilized configurations of mix-based augmentations from the computer vision domain for time series classification problems, namely mixup [4], cutmix [23], and manifold mixup (layer mixup, [12]). The mixup augmentation blends random pairs of time series from the training data. 
Let \((x,y)\) denote a time series data instance, where \(x\in\mathbb{R}^{L\times C}\), with \(L\) representing the length of the sequence and \(C\) denoting the number of channels, and \(y\in\mathbb{R}^{K}\) being the class label with \(K\) classes. The mixing ratio randomly drawn from a Beta distribution is denoted as \(\lambda\). Given two samples \((x_{i},y_{i})\) and \((x_{j},y_{j})\), the mixup augmentation generates virtual training examples through the following formulation: \[\tilde{x} =\lambda x_{i}+(1-\lambda)x_{j} \tag{1}\] \[\tilde{y} =\lambda y_{i}+(1-\lambda)y_{j} \tag{2}\] Fig. 2: Illustrations of traditional time series data augmentations. Orange: original signal. Green: augmented signals. Mixup extends the training distribution by incorporating the the assumption that linear interpolations of feature vectors should lead to linear interpolations of the associated targets [4]. Based on the formulation, the label for a mixup virtual sample is also a mixture of two original one-hot labels weighted by \(\lambda\). The potential effect and benefit of the soft target will also be discussed in section V-B. The mixing ratio, \(\lambda\), in the mix-based data augmentation approach is sampled from a Beta distribution. The value of \(\lambda\) close to 0 or 1 results in the created virtual time series being more similar to one of the raw data points, whereas a value of \(\lambda\) close to 0.5 results in a more blended representation of the raw data points. For physiological time series data, it is desirable to have virtual time series that are similar to one of the raw data points, as these signals contain delicate features, such as the intensity of R peaks in ECG signals, which could be easily destroyed with random mixing. Despite its simplicity, in the computer vision domain, mixup has allowed consistently superior performance in the CIFAR-10, CIFAR-100, and ImageNet image classification datasets [4]. As we will show in the results section, mixup can also improve classification metrics in time series classification problems, along with other desirable features. ### _Cutmix_ The image augmentation technique Cutmix, proposed in [23], shares similarities with the technique Mixup. The authors claim a key advantage of Cutmix is its ability to prevent the occurrence of ambiguous components in the generated samples caused by mixing, such as blurred image regions [23]. For time series cutmix, we select a random time segment from a pair of multivariate time series, then the values of the pair of time series within the segment are exchanged across all channels. The length of the segment is also determined randomly through the mixing ratio \(\lambda\) drawn from a Beta distribution. ### _Manifold Mixup_ Manifold Mixup, presented in [12], demonstrates consistently superior performance across various computer vision tasks when compared to the original input-data-mixup approach. Unlike the original mixup, manifold mixup trains neural networks on linear combinations of hidden representations of training samples. The literature suggests that higher-level representations obtained from intermediate layers of the neural network feature extractor are low-dimensional, therefore, linear interpolations of hidden representations should cover meaningful regions of the feature space. In this paper, we implement Layer Mixup on layer 4 of the ResNet, prior to pooling and the classification head. The labels are also mixed in the same way as mixup and cutmix. 
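The three schemes can be summarized in a short PyTorch-style sketch operating on a mini-batch `x` of shape (batch, channels, length) with one-hot labels `y`. The sketch mirrors Eqs. (1)-(2) and the cutmix and manifold variants described above; the function names, default \(\alpha\) values, and the split of the network into a `feature_extractor` and a `classifier` are illustrative assumptions rather than our exact training code.

```python
# Illustrative sketch of the three mix-based augmentations for 1D time series
# mini-batches; x: (batch, channels, length) float tensor, y: one-hot labels.
import numpy as np
import torch

def mixup(x, y, alpha=0.4):
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

def cutmix(x, y, alpha=0.75):
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0), device=x.device)
    length = x.size(-1)
    cut = int(round((1 - lam) * length))   # exchanged-segment length tied to lam
    start = np.random.randint(0, length - cut + 1) if cut < length else 0
    x_mix = x.clone()
    x_mix[..., start:start + cut] = x[perm][..., start:start + cut]
    kept = 1 - cut / length                # actual proportion of the original kept
    return x_mix, kept * y + (1 - kept) * y[perm]

def manifold_mixup_forward(feature_extractor, classifier, x, y, alpha=0.4):
    """Mix hidden representations (e.g., the layer-4 output) instead of raw inputs."""
    h = feature_extractor(x)
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(h.size(0), device=h.device)
    h_mix = lam * h + (1 - lam) * h[perm]
    return classifier(h_mix), lam * y + (1 - lam) * y[perm]
```

In each case the loss is then computed against the mixed soft target \(\tilde{y}\), for example with a cross-entropy that accepts soft labels.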
## IV Experiments ### _Datasets_ We conduct experiments on six biomedical time series datasets, encompassing diverse data types and varying sizes. The datasets include two ECG datasets, PTB-XL for cardiac condition classification and Apnea-ECG for sleep apnea detection, two EEG datasets, Sleep-EDF for sleep stage recognition and MMIDB for sleep movement detection, and two IMU datasets, PAMAP2 and UCI-HAR for human activity recognition. Table I provides a summary of the datasets. Note that in column "Periodic", the IMU datasets are tagged as "motion", because IMU data may contain periodic patterns when the recorded activity contains periodic motion (_e.g._walking). #### Iv-C1 Ptb-Xl The PTB-XL dataset [29] is an ECG database of 12-lead recordings, containing 44 diagnostic statements grouped into 5 superclasses (normal, conduction disturbance, myocardial infarction, hypertrophy, and ST-T change). In this study, we formulate a five-class cardiac abnormality classification problem. The data is divided into 10 balanced folds, with the first 8 used for training, the 9th for validation, and the 10th for testing. The data we use is sampled at 100Hz. The same stratification process is applied to the other datasets if no validation/test set is provided. The dataset is highly class-imbalanced as shown in Figure 3, with over half of the samples labeled normal and the least-represented class (HYP, hypertrophy in left ventricle) only constituting 3.3% of the data. This challenge is addressed in Section V through in-batch resampling and mix-based augmentations. #### Iv-C2 Apnea-ECG The Apnea-ECG dataset examines the connection between sleep apnea symptoms and heart activity in humans [30], as monitored through ECG. This dataset has 70 records, sampled at 100 Hz, with 35 records designated for training and the remaining 35 for testing. Each record is 7-10 hours in length and includes a continuous ECG signal along with per-minute apnea annotations indicating the presence or absence of sleep apnea. We segmented the ECG recordings into 60-second frames at 100 Hz, resulting in 17233 samples for the training set and 17010 samples for the test set. #### Iv-C3 Sleep-EDFE The Sleep-EDFE dataset is sourced from the publicly accessible Sleep European Data Format (EDF) database [31] on Physionet [32]. This database contains full-night PSG sleep records that include two-channel EEG (Fpz-Cz and Prz-Oz), a horizontal EOG, and EMG signal records, along with corresponding hypnograms (sleep stage annotations). The EEG signals have a sampling frequency of 100 Hz, and they are divided into 30-second epochs and normalized to have zero mean and unit standard deviation. #### Iv-C4 Mmidb-Eeg The EEGMIDB (EEG Motor Movement/Imagery Database) from PhysioNet is collected using the BCI200 EEG system4 [33]. It records 64 channels of brain signals at a sampling rate of 160 Hz, totaling over 1500 recordings of 1-2 minutes each. Subjects are instructed to wear the EEG device and sit in front of a computer screen, performing specific typing tasks in response to on-screen prompts. #### Iv-C5 Pamap2 The PAMAP2 dataset [34] comprises recordings from 9 participants who were asked to perform 12 daily activities, including household tasks and various exercises (e.g., Nordic walking, playing soccer). Data from accelerom eters, gyroscopes, magnetometers, temperature sensors, and heart rate monitors are recorded from inertial measurement units placed on the hand, chest, and ankle over 10 hours, resulting in a 52-dimensional dataset. 
#### Iv-A6 UCI-HAR The UCI-HAR dataset [35] was collected from 30 volunteers aged 19 to 48 years. Participants were instructed to engage in six basic activities, which included three static postures (standing, sitting, lying) and three dynamic activities (walking, walking downstairs, walking upstairs). 3-axial accelerometer and gyroscope signals were recorded at a constant rate of 50 Hz. The data was collected using smartphones carried by the participants. ### _Experimental Setup_ #### Iv-B1 Network architecture In all experiments, we use a 1D-CNN-based ResNet-18 [36] as the backbone. This model has convolutions with a kernel size of 3, and stride 2. The blocks in the ResNet architecture have convolutional layers with 32, 64, 128, and 256 channels respectively. The output after the final block is average pooled in the temporal dimension, and then a linear layer is applied to predict the probability of the positive class. Details of the ResNet-18 structure are summarized in Table II. For the manifold mixup experiments, we take the output of layer 4 of ResNet18 as the input for mixing. We also perform t-SNE visualization on the features extracted from Layer 4 as validation for the quality of class representation. #### Iv-B2 Augmentation Implementation Following the previous empirical studies for mix-based regularization: mixup [4] and MixMatch [24], we use \(\alpha=0.4\) and \(\alpha=0.75\) for mix-related hyperparameters in our experiments. For cutmix, the ratio of the random segment length to the signal length is set to 0.2, following [23]. For the baseline augmentations, the hyperparameters, such as the intensity of scaling and jittering, are manually chosen following [9]. Following [4], the implementation of the training step is based on the mini-batches sampled by a data loader. For each minibatch, random shuffling is applied, and the mixing operations can all be performed in a vectorized manner, incurring minimal computation overhead. In our PTB-XL profiling experiments, the mean increase in the processing time of each mini-batch of size 128 is less than 0.001 second. We also report results for baselines (denoted as vanilla) that do not use any data augmentation.
Fig. 3: PTB-XL dataset overview. Top: The five-superclass distribution of PTB-XL (NORM, MI, STTC, CD, HYP). In this paper, we only train with single-label samples. Bottom: Example waveforms of PTB-XL classes (lead IV), the x-axis denotes time in ms, and the y-axis is the normalized voltage.
#### Iv-A3 Optimization We use the AdamW optimizer with learning rate 0.001 for all datasets. For the profiling experiments on PTB-XL dataset, we also tested with Adam optimizer and 0.005 learning rate. We use a step decay learning rate scheduler with step size 5, and a decay rate of 0.9 across all experiments. Training takes 50 epochs, which was observed to be sufficient for convergence in all datasets. #### Iv-A4 Computing resource: All model training was performed on a single NVIDIA GTX 2080Ti GPU. ## V Results and discussion ### _Quantitative performance of mix-based augmentations_ Experiments were conducted on six datasets of diverse categories with a ResNet18-1D backbone. The performance of vanilla (no augmentation), cutout, and three mix-based augmentations is presented in Table III. We summarize our results as follows. 
I).**The mix-based data augmentations can achieve superior accuracy in comparison to traditional augmentation methods.** All three mix-based augmentation techniques were found to outperform the baseline (no augmentation) in 16 out of 18 experimental trials, with only two exceptions (mixup for Apnea-ECG and cutmix for MMIDB-EEG). Furthermore, among the six datasets examined, the majority of the highest accuracy results were achieved through the use of cutmix and layer mixup. Overall, the mix-based augmentations outperform the baselines, but the performance of augmentation methods varies across different datasets. This is likely due to the unique characteristics of each dataset and the strengths of each augmentation method. For instance, datasets containing complex temporal patterns or high levels of noise may benefit from the use of certain mix-based augmentation methods that are particularly effective at enhancing classification accuracy. The results in Table III also show that more comprehensive mixup schemes (cutmix, manifold mixup) help yield better accuracy. II).**The mix-based augmentations deliver robust performance, and the accuracy gain is steady.** In addition to the quantitative performance advantages, these techniques are notable for their low dependency on expert knowledge and parameter tuning. Across all 18 mix-based experimental trials (excluding cutout), no significant reduction in accuracy was observed in comparison to the baseline. In contrast, traditional data transformation techniques can often result in a drastic decrease in accuracy if not implemented appropriately. For example, applying scaling in Apnea-ECG resulted in a 7.5% reduction in accuracy, permutation in MMIDB-EEG resulted in a 9.0% reduction, and jittering in UCI-HAR resulted in a 4.8% reduction. This finding implies that traditional transformations can undermine crucial features in physiological signals, such as wave intensity in ECG data and temporal correlations in EEG data. As a result, these augmentations can generate virtual samples that deviate from the actual data distribution, potentially compromising the generalization performance of the model. Note that in the experiments, we compared the mix-based augmentations against some individual traditional augmentations. However, to fully harness the potential of data augmentation, it is compatible to apply mix-based augmentations in conjunction with one or more traditional augmentations. ### _Profiling mix-based augmentations on PTB-XL_ The PTB-XL dataset [29] is a well-studied ECG dataset for cardiac condition classification, with the current SOTA accuracy of recognizing five classes being less than 80% [9]. To evaluate the effectiveness of mix-based data augmentation methods, extensive profiling experiments were conducted on the PTB-XL dataset, using over 80 combinations of settings and hyperparameters (Section IV-B3). The scatter plot of the best validation accuracy vs the best F1 score is shown in Figure 4(a). Identical combinations of learning rates, optimizers, etc. were used for each of the four augmentation setups (vanilla, input mixup, cutmix, layer mixup). The scatter plot illustrates that mix-based augmentations produce the best results, with all top-performing results (1st to 28th) obtained from mix-based augmentations, supporting the conclusions drawn in Section V-A. The PTB-XL dataset presents an imbalanced class distribution for the single-label data samples. 
To mitigate the high false negative rate for the minority class, a batch class-balanced data sampler was utilized in the data loader during the training process. The effect of the class-balanced sampler on the model's performance is shown in Figure 4 (c) and (d), by comparing the confusion matrices with and without the balanced sampling. As observed in the bottom row of Figure 4, which corresponds to the data samples of the minority class hypertrophy (HYP), the model without the balanced sampler was prone to producing the most false negative (NORM) predictions for the HYP cases. However, by incorporating both the class-balanced sampler and cutmix, virtual samples containing the information of the minority class were generated during the training, reducing the false negative rate. Additionally, the recall of the other three cardiac condition classes also received similar improvement as shown in the plot. ### _Feature representations_ The advantages of mix-based augmentations, or vicinal risk minimization, include the provision of more distinguishable representations for different classes. We present the results of t-SNE dimensional reduction of two models trained with cutmix and a baseline, on the training and test sets of the PTB-XL dataset. The feature vectors were calculated using layer 4 of ResNet18. The visualizations of the training set (Figure 5(a) and (c)) indicate that, compared to the baseline, cutmix gives more discriminative representations between classes. The projections of the test set (Figure 5(b) and (d)) demonstrate that the classes are more distinguishable with mix-based augmentation and balanced sampling. ## VI Conclusion and Future Work Inspired by the success of mixup in other domains, we investigate mixup and its variants, cutmix and manifold mixup for physiological for the time series classification task. This paper empirically shows that mix-based data augmentation techniques can achieve superior accuracy in comparison to traditional augmentation methods in the context of time series classification. In the experiments, the majority of the highest accuracy results were achieved through the use of cutmix and layer mixup, and these augmentations were found to deliver robust performance with a steady accuracy gain across various physiological and biobehavioral datasets. The low dependency on expert knowledge and parameter tuning, in addition to the quantitative performance advantages, makes mix-based augmentations more practical and effective in various applications. These findings highlight the effectiveness of mix-based, dataset-agnostic augmentations and the importance of appropriately choosing traditional data transformations, as they can compromise the generalization performance of the model. We plan to explore the combination of mix-based augmentation and traditional time series augmentations, as the effectiveness of well-composited transformations has been shown in previous studies [11]. Furthermore, we aim to extend the applicability of mix-based augmentation to the frequency domain, following the success of such an approach in acoustic data classification tasks [37].
2306.00109
Gluing residuated lattices
We introduce and characterize various gluing constructions for residuated lattices that intersect on a common subreduct, and which are subalgebras, or appropriate subreducts, of the resulting structure. Starting from the 1-sum construction (also known as ordinal sum for residuated structures), where algebras that intersect only in the top element are glued together, we first consider the gluing on a congruence filter, and then add a lattice ideal as well. We characterize such constructions in terms of (possibly partial) operators acting on (possibly partial) residuated structures. As particular examples of gluing constructions, we obtain the non-commutative version of some rotation constructions, and an interesting variety of semilinear residuated lattices that are 2-potent. This study also serves as a first attempt toward the study of amalgamation of non-commutative residuated lattices, by constructing an amalgam in the special case where the common subalgebra in the V-formation is either a special (congruence) filter or the union of a filter and an ideal.
Nick Galatos, Sara Ugolini
2023-05-31T18:33:27Z
http://arxiv.org/abs/2306.00109v1
# Gluing residuated lattices ###### Abstract. We introduce and characterize various gluing constructions for residuated lattices that intersect on a common subreduct, and which are subalgebras, or appropriate subreducts, of the resulting structure. Starting from the \(1\)-sum construction (also known as ordinal sum for residuated structures), where algebras that intersect only in the top element are glued together, we first consider the gluing on a congruence filter, and then add a lattice ideal as well. We characterize such constructions in terms of (possibly partial) operators acting on (possibly partial) residuated structures. As particular examples of gluing constructions, we obtain the non-commutative version of some rotation constructions, and an interesting variety of semilinear residuated lattices that are \(2\)-potent. This study also serves as a first attempt toward the study of amalgamation of non-commutative residuated lattices, by constructing an amalgam in the special case where the common subalgebra in the V-formation is either a special (congruence) filter or the union of a filter and an ideal. Key words and phrases:Residuated lattices, Amalgamation, Gluing, Ordinal sum 2010 Mathematics Subject Classification: 06F05,08A55,06A15,08A05 ## 1. Introduction and preliminaries The first gluing construction in lattice theory is due to Hall and Dilworth [19], who used it to prove the existence of a modular lattice that cannot be embedded in a complemented modular lattice. Later on, the same construction was independently used by Wronski in [27] and Troelstra [26] to study intermediate logics, by constructing Heyting algebras. The idea in these constructions is to glue together lattices that intersect (up to isomorphism) on a sublattice that is a principal ideal of the first and a principal filter of the second. In particular, the construction applies to Heyting algebras: bounded lattices that are relatively pseudocomplemented (i.e., for every pair of elements \(x,y\) there is a largest element \(z\) with the property that \(x\wedge z\leq y\)). Heyting algebras can also be equivalently defined as bounded residuated lattices where the monoidal operation coincides with the meet in the lattice order; in this case \(x\to y\) is the largest element \(z\) such that \(x\wedge z\leq y\). Residuated lattices play an important role in the study of algebraic logic, as they constitute the equivalent algebraic semantics (in the sense of Blok-Pigozzi [5]) of substructural logics. These encompass most interesting nonclassical logics: intuitionistic logic, fuzzy logics, relevance logics, linear logic, and classical logic as a limit case. Thus, the investigation of the variety of residuated lattices is a powerful tool in the comparative study of such logics, as explored in [16]. The multitude of different types of residuated lattices makes the study fairly complicated and at the present moment large classes of residuated lattices lack a structural description. The study of constructions that allow us to obtain new structures from known ones is extremely important in improving our understanding of residuated lattices, and as a result, of substructural logics. In the present paper we introduce different ways of gluing together residuated lattices, where by _gluing_ we mean obtaining a new structure from two original ones which intersect on a common subreduct, and which are subalgebras (or appropriate subreducts) of the resulting structure. 
The starting point of our investigation is the _\(1\)-sum_ construction, often called _ordinal sum_ in residuated structures, where the algebras intersect only at the top element, i.e. at a trivial filter. We will consider gluings over an arbitrary (nontrivial) congruence filter, and then over a lattice ideal as well. Moreover, we generalize these ideas to account for (possibly) partial algebras. Finally, we characterize the introduced constructions abstractly, by means of pairs of operators acting on residuated lattices. These new constructions serve as a first attempt in the study of amalgamation of non-commutative residuated lattices, by constructing an amalgam in the special case where the common subalgebra in the V-formation is either a special (congruence) filter or the union of a filter and an ideal. As particular examples of gluing constructions, we obtain the non-commutative version of the generalized rotation construction in [7], and an interesting variety of semilinear residuated lattices that are 2-potent. We use these two cases to illustrate examples of how the gluing construction can be used to study amalgamation. In the case of the rotation we show how the construction preserves the amalgamation property (in a sense that will be made precise), while in the latter case we show and characterize amalgamation failures. We start by introducing the objects of our study. A residuated lattices is an algebra \(\mathbf{A}=(A,\vee,\wedge,\cdot,\backslash,/,1)\) of type \((2,2,2,2,2,0)\) such that: 1. \((A,\vee,\wedge)\) is a lattice; 2. \((A,\cdot,1)\) is a monoid; 3. \(\backslash\) and \(/\) are the left and right division of \(\cdot\): for all \(x,y,z\in A\), \[x\cdot y\leq z\Leftrightarrow y\leq x\backslash z\Leftrightarrow x\leq z/y,\] where \(\leq\) is the lattice ordering. Residuated lattices form a variety, denoted by \(\mathsf{RL}\), as residuation can be expressed equationally; see [4]. When the monoidal identity is the top element of the lattice we say that the residuated lattice is _integral_ or an _IRL_; we call the corresponding variety \(\mathsf{IRL}\). Residuated lattices with an additional constant \(0\) are called _pointed_. _Bounded_ integral residuated lattices are expansions of residuated lattice with an extra constant \(0\) that satisfies the identity \(0\leq x\). The variety of bounded integral residuated lattices is called \(\mathsf{FL_{w}}\), referring to the fact that it is the equivalent algebraic semantics of the Full Lambek calculus with the structural rule of weakening (see [16]). As usual, we write \(xy\) for \(x\cdot y\). A residuated lattice is called _commutative_ if the monoidal operation is commutative. In this case the two divisions coincide, and we write \(x\to y\) for \(x\backslash y=y/x\). We write \(\mathsf{CRL}\) and \(\mathsf{CIRL}\), respectively, for the commutative subvarieties of \(\mathsf{RL}\) and \(\mathsf{IRL}\), and refer to commutative \(\mathsf{FL_{w}}\)-algebras as \(\mathsf{FL_{ew}}\)-algebras, since commutativity of the monoidal operation corresponds to the structural rule of exchange. In a lattice \(\mathbf{A}\), a _filter_ is a non-empty subset \(S\) that is closed upwards (if \(x\leq y\) and \(x\in S\) then also \(y\in S\)) and is closed under meet (if \(x,y\in S\) then \(x\wedge y\in S\)). In a (bounded) integral residuated lattice \(\mathbf{A}\), a _congruence filter_\(F\) is a non-empty upset of \(A\), closed under products (if \(x,y\in F\), then \(xy\in F\)) and under conjugates, i.e. 
if \(x\in F\), then \(yx/y,y\backslash xy\in F\) for every \(y\in A\). We will denote by \(\mathbf{Fil}(\mathbf{A})\) the lattice of congruence filters of \(\mathbf{A}\). It is easy to see that a filter \(F\) of a (bounded) integral residuated lattice \(\mathbf{A}\) is a subalgebra (or 0-free subreduct, if \(\mathbf{A}\) is bounded) of \(\mathbf{A}\), hence it is an integral residuated lattice. Filters of residuated lattices are in one-one correspondence to congruences. In particular, in the integral case the isomorphism between \(\mathbf{Fil}(\mathbf{A})\) and the congruence lattice of \(\mathbf{A}\), \(\mathbf{Con}(\mathbf{A})\), is given by the maps: \[F\mapsto\theta_{F}=\{(x,y)\in A\times A:x\backslash y,y\backslash x\in F\}=\{ (x,y):x/y,y/x\in F\},\] \[\theta\mapsto F_{\theta}=\{x\in A:(x,1)\in\theta\}\] for all \(F\in Fil(\mathbf{A}),\theta\in Con(\mathbf{A})\). In what follows, given a congruence filter \(F\), we will write \([x]_{F}\) for the equivalence class \([x]_{\theta_{F}}\). ## 2. Gluing over a filter As we mentioned in the introduction, the usual notion of gluing in lattice theory puts together two lattices that intersect in a filter of the first and an ideal of the second. As we are interested in integral residuated lattices and we want the components to be subalgebras of the resulting structure (in particular the common identity element needs to be the top), this approach needs to be modified. We start by describing the simple case where the ideal is empty, and the filter is trivial. ### \(1\)-sum The \(1\)_-sum_ construction in the context of residuated structures was introduced with the name of _ordinal sum_ by Ferreirim in [11] in the context of hoops. The latter can be defined as commutative integral divisible (\(x\wedge y=x(x\to y)\)) residuated lattices but without the demand that joins exist; we choose to use the naming \(1\)-sum as in [24, 25] to avoid confusion. Indeed it is slightly different than the ordinal sum of two posets/lattices, as it identifies the top elements of the two structures. The \(1\)-sum construction has played an important role in the study of BL-algebras and basic hoops [1]. The construction was later extended to integral residuated lattices, and even generalized to non-integral structures [14]. It is also worth mentioning that historically, the analogue to the \(1\)-sum has been previously introduced and studied for semigroups [8]. \(1\)-sums represent a seminal example of gluing; unlike the case of hoops, some care is needed to make sure that joins that were equal to the top still exist in the resulting structure. The two structures glued together intersect only at their respective top elements. In detail, let \(\mathbf{B}\) and \(\mathbf{C}\) be integral residuated lattices, where \(B\cap C=\{1\}\), and \(1\) is join irreducible in \(\mathbf{B}\) or \(\mathbf{C}\) has a bottom element. 
We extend the order of \(\mathbf{B}\) and \(\mathbf{C}\) to the set \(B\cup C\) by: \(b<c\) for \(b\in B-\{1\}\) and \(c\in C\), and extend the operations of \(\mathbf{B}\) and \(\mathbf{C}\) by: \[xy = \left\{\begin{array}{ll}y&\mbox{ if }x\in C\mbox{ and }y\in B \setminus\{1\}\\ x&\mbox{ if }x\in B\setminus\{1\}\mbox{ and }y\in C,\end{array}\right.\] \[x\backslash y = \left\{\begin{array}{ll}y&\mbox{ if }x\in C\mbox{ and }y\in B \setminus\{1\}\\ 1&\mbox{ if }x\in B\setminus\{1\}\mbox{ and }y\in C,\end{array}\right.\] \[x/y = \left\{\begin{array}{ll}y&\mbox{ if }x\in C\mbox{ and }y\in B \setminus\{1\}\\ 1&\mbox{ if }x\in B\setminus\{1\}\mbox{ and }y\in C.\end{array}\right.\] It can be easily verified that the resulting structure \(\mathbf{B}\oplus_{1}\mathbf{C}\) is a residuated lattice, called the \(1\)_-sum_ of \(\mathbf{B}\) and \(\mathbf{C}\). The assumption that \(1\) is join irreducible in \(\mathbf{B}\) or \(\mathbf{C}\) has a bottom element ensures that joins of elements of \(B\), calculated in \(\mathbf{B}\oplus_{1}\mathbf{C}\), exist; if these conditions are not satisfied the resulting structure is merely a residuated meet-semilattice. Note that \(\mathbf{C}\) is always a subalgebra of \(\mathbf{B}\oplus_{1}\mathbf{C}\) and \(\mathbf{B}\) is a subalgebra except possibly with respect to \(\vee\), in case that \(1\) is not join irreducible in \(\mathbf{B}\) (see [22] for details when \(1\) is not join-irreducible). Generalizations of the \(1\)-sum construction to the non-integral case are discussed in [16]. Notice that the \(1\)-sum construction stacks one IRL on top of another one and identifies/glues their top elements; the product between elements of \(\mathbf{B}\) and \(\mathbf{C}\) is actually their meet in the new order. As it turns out, this is the only choice for defining a residuated monoidal operation when gluing two residuated lattices together with this particular lattice order, if we want \(\mathbf{B}\) and \(\mathbf{C}\) to be subalgebras of the new structure. **Proposition 2.1**.: _Let \(\mathbf{B}\) and \(\mathbf{C}\) be IRLs and assume that \(\mathbf{D}\) is an IRL with underlying set \(B\cup C\), where \(B\cap C=\{1\}\), \(b<c\) for all \(b\in B-\{1\}\) and \(c\in C\), \(C\) is a subalgebra, and \(B\) is a subalgebra except possibly with respect to \(\vee\). Then \(\mathbf{D}\) is equal to \(\mathbf{B}\oplus_{1}\mathbf{C}\)._ Proof.: Given the assumptions, we only need to verify that for all \(b\in B\) and \(c\in C\), we have \(cb=bc=b\). We have \(cb\leq cb\), so \(c\leq cb/b\). Since \(cb,b\in B\) we have that \(cb/b\) is an element of \(B\) that is greater than some element of \(C\). Therefore, \(cb/b=1\), hence \(b\leq cb\). By integrality we also have \(cb\leq b\), so \(cb=b\). ### \(F\)-gluings: compatibility and uniqueness By relaxing the assumptions in Proposition 2.1, we will generalize the \(1\)-sum construction to a more general type of gluing where the intersection of the two algebras may be a congruence filter different than \(\{1\}\). More precisely, given IRLs \(\mathbf{B}\) and \(\mathbf{C}\), let \(\mathbf{D}\) be some IRL with underlying set \(B\cup C\), where \(F:=B\cap C\) is a congruence filter of \(\mathbf{D}\), \(b<c<f\) for all \(b\in B-F\), \(c\in C-F\) and \(f\in F\), \(C\) is a subalgebra, and \(B\) is a subalgebra except possibly with respect to \(\vee\), in which case \(F\) is assumed to have a bottom element. 
We say that \(\mathbf{D}\) is a _gluing over \(F\)_, or an _\(F\)-gluing of \(\mathbf{B}\) and \(\mathbf{C}\)_; see Figure 1 for the anticipated structure. We will identify conditions on \(\mathbf{B}\), \(\mathbf{C}\) and \(F\) that will allow us to construct \(\mathbf{D}\) from these constituent parts. First we describe a compatibility condition between \(F\) and \(\mathbf{B}\) and then we characterize the structure of the subset \(B^{\prime}:=(B-F)\cup\{1\}\). Note that \(B^{\prime}\) supports a residuated lattice even when it is not a subalgebra under \(\vee\) as those joins end up being equal to \(1\). We say that a congruence filter \(F\) of an IRL \(\mathbf{B}\) is _compatible_ with \(\mathbf{B}\) if: 1. every element of \(F\) is strictly above every element of \(B-F\). 2. For all \(b\in B-F\) the equivalence class \([b]_{F}\) has a maximum and a minimum; we define \[\sigma_{F}(b)=\min[b]_{F},\ \ \gamma_{F}(b)=\max[b]_{F}\] for \(b\in B-F\) and \(\sigma_{F}(1)=\gamma_{F}(1)=1\); hence \(\sigma_{F}\) and \(\gamma_{F}\) are maps on \(B^{\prime}=(B-F)\cup\{1\}\). 3. \(\sigma_{F}\) is _absorbing_: for all \(b\in B-F\), \(b\sigma_{F}[B-F]\subseteq\sigma_{F}[B-F]\) and \(\sigma_{F}[B-F]b\subseteq\sigma_{F}[B-F]\). We also say that \((\mathbf{B},F)\) form a _lower-compatible pair_. The following lemma shows that \(F\)-gluings contain compatible pairs and explains that the role of \(\sigma\) and \(\gamma\) is to capture the multiplications and divisions, respectively, by elements of \(C\) that are not in the compatible pair. More importantly, it shows the uniqueness of the \(F\)-gluing. **Lemma 2.2**.: _Given a gluing of IRLs \(\mathbf{B}\) and \(\mathbf{C}\) over \(F\), the congruence filter \(F\) is compatible with \(\mathbf{B}\). Moreover, for all \(b\in B-F,c\in C-F\):_ \[cb=bc=\sigma_{F}(b),\qquad c\backslash b=b/c=\gamma_{F}(b).\] _Therefore, the gluing of \(\mathbf{B}\) and \(\mathbf{C}\) over \(F\) is unique when it exists._ Proof.: We first show that for all \(b\in B-F\) the equivalence class \([b]_{F}\) has a minimum. For all \(b\in B-F\) and \(c\in C-F\), we have \(cb\leq cb\), so \(c\leq cb/b\). By integrality we also have \(cb\leq b\in B-F\), so \(cb\in B-F\). Since \(cb,b\in B\) we have that \(cb/b\) is an element of \(B\) that is greater than some element of \(C\). Therefore, \(cb/b\in F\). Also, since \(cb\leq b\), we get \(b/cb=1\in F\), hence \([b]_{F}=[cb]_{F}\). Moreover, given \(b^{\prime}\in[b]_{F}\), we have \(b^{\prime}/b\in F\), so \(c\leq b^{\prime}/b\), hence \(bc\leq b^{\prime}\). Thus, \(cb\) (and by symmetry also \(bc\)) is the minimum of \([b]_{F}\), for all \(c\in C-F\); we denote this minimum by \(\sigma_{F}(b)\). Note that since \(F\)-congruence classes are in particular lattice congruence classes, \(\sigma_{F}\) is monotone on \(B-F\). We now show that for all \(b\in B-F\) the equivalence class \([b]_{F}\) has a maximum. For any \(c\in C-F\), we cannot have \(c\backslash b\in C\), as then \(c^{\prime}\leq c\backslash b\) for some \(c^{\prime}\in C-F\), so \(c^{\prime}c\leq b\) and \(c^{\prime}c\leq c^{\prime}\in C-F\), so \(c^{\prime}c\in C-F\), which would imply \(b\in F\), a contradiction. So, \(c\backslash b\in B-F\), and thus \(\sigma_{F}(c\backslash b)=c(c\backslash b)\leq b\) and so \(\sigma_{F}(c\backslash b)\leq\sigma_{F}(b)\). Also, by integrality we have \(b\leq c\backslash b\), so \(\sigma_{F}(b)\leq\sigma_{F}(c\backslash b)\). Therefore, \(\sigma_{F}(b)=\sigma_{F}(c\backslash b)\), hence \([b]_{F}=[c\backslash b]_{F}\). 
Also, for every \(b^{\prime}\in[b]_{F}\), we have \(cb^{\prime}=\sigma_{F}(b^{\prime})=\sigma_{F}(b)\leq b\), so \(b^{\prime}\leq c\backslash b\). Therefore, for any \(c\in C-F\), \(c\backslash b\) (and by symmetry \(b/c\)) is the maximum of \([b]_{F}\); we denote this element by \(\gamma_{F}(b)\). We now prove the last property of compatibility, i.e., that \(\sigma_{F}\) is _absorbing_: for all \(b\in B-F\), \(b\sigma_{F}[B-F]\subseteq\sigma_{F}[B-F]\) and \(\sigma_{F}[B-F]b\subseteq\sigma_{F}[B-F]\). In particular, we show \(b\sigma_{F}[B-F]\subseteq\sigma_{F}[B-F]\), as the proof of \(\sigma_{F}[B-F]b\subseteq\sigma_{F}[B-F]\) is similar. Every element of \(b\sigma_{F}[B-F]\) is of the form \(b\sigma_{F}(x)\) where \(x\in B-F\). By the above, for any \(c\in C-F\), \(b\sigma_{F}(x)=bxc=\sigma_{F}(bx)\in\sigma_{F}[B-F]\). Thus we showed that the congruence filter \(F\) is compatible with \(\mathbf{B}\), and also that \(\sigma_{F}(b)=cb=bc\) and \(\gamma_{F}(b)=c\backslash b=b/c\). It follows that the gluing over \(F\) is unique when it exists. ### Compatible triples inside compatible pairs Our aim is to characterize abstractly the individual components of the gluing and later use them to define a gluing construction. In particular we identify the structure of \((B-F)\cup\{1\}\); as this set is not closed under divisions, we will need to make use of partially defined operations. **Definition 2.3**.: By a _partial IRL_ we understand a partially ordered partial algebra \((\mathbf{A},\leq)\) in the language of residuated lattices, such that: 1. \(\mathbf{A}\) is integral: \(x\leq 1\) for all \(x\in A\); 2. the three axioms of RLs are satisfied whenever they can be applied, in the following sense: 1. \(x\lor y\) is the least upper bound of \(x\) and \(y\) whenever it exists, and similarly \(x\wedge y\) is the greatest lower bound whenever it exists; 2. \(x1=1x=x\), and if \(xy,(xy)z,yz,x(yz)\) are defined then \((xy)z=x(yz)\); 3. if \(xy\), \(z/y\), and \(x\backslash z\) are defined, then \(xy\leq z\) if and only if \(x\leq z/y\) if and only if \(y\leq x\backslash z\). 3. multiplication is order preserving when defined: \(a\leq b\) and \(ac,bc\) defined implies \(ac\leq bc\), and likewise for left multiplication. 4. whenever defined, the division operations are order-preserving in the numerator and order-reversing in the denominator. That is: \(x\leq y\) and \(z\backslash x,z\backslash y\) defined implies \(z\backslash x\leq z\backslash y\); \(x\leq y\) and \(y\backslash z,x\backslash z\) defined implies \(y\backslash z\leq x\backslash z\); likewise for right division. For example, given an \(F\)-gluing of IRLs \(\mathbf{B}\) and \(\mathbf{C}\), the structure \(\mathbf{B}^{\prime}\), whose domain is \(B^{\prime}:=(B-F)\cup\{1\}\), is a partial IRL. _Remark 2.4_.: We wish to remark that, even though the definition of a partial IRL we are using is quite general, the constructions we will define in the rest of the paper really involve partial algebras that are much closer to being lattices: joins will always be defined, and a meet \(x\wedge y\) will always be defined except if there is no common lower bound of \(x\) and \(y\). Moreover, the partial IRLs considered in our constructions will actually have an underlying structure of a _partial monoid_ in the stronger sense usually intended in the literature: the products \(xy,(xy)z\) are defined if and only if \(yz,x(yz)\) are defined, and in such case \((xy)z=x(yz)\).
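As a concrete illustration of these notions, the following sketch exhibits a compatible pair \((\mathbf{B},F)\) and the resulting partial IRL \(\mathbf{B}^{\prime}\). The four-element chain, its product, and all names are toy choices of ours, not data from the text. The script verifies that \(F=\{f,1\}\) is a congruence filter of the chain \(0<a<f<1\), reads \(\sigma_{F}\) and \(\gamma_{F}\) off the \(F\)-classes, and checks that a division of elements of \(B-F\) lands in \(F-\{1\}\) (and is therefore undefined in \(\mathbf{B}^{\prime}\)) exactly when \(\sigma_{F}(x)\leq y\) and \(x\not\leq y\), the behaviour used in the next subsection.

```python
# Illustrative sketch only: a toy compatible pair (B, F) and the partial IRL B'.
from itertools import product

B = ["0", "a", "f", "1"]                  # chain 0 < a < f < 1
rank = {e: i for i, e in enumerate(B)}
leq = lambda x, y: rank[x] <= rank[y]

def mul(x, y):
    """Toy integral, commutative, order-preserving product: 1 is the unit,
    f is idempotent, every other product of non-unit elements is 0."""
    if x == "1" or y == "1":
        return y if x == "1" else x
    return "f" if x == y == "f" else "0"

def ldiv(x, y):  # x\y = max{z : x*z <= y}; by commutativity this is also y/x
    return max((z for z in B if leq(mul(x, z), y)), key=lambda z: rank[z])

for x, y, z in product(B, repeat=3):      # sanity check: B is an IRL
    assert leq(mul(x, y), z) == leq(y, ldiv(x, z))

F = {"f", "1"}
assert all(mul(x, y) in F for x in F for y in F)             # closed under products
assert all(ldiv(x, mul(f_, x)) in F for f_ in F for x in B)  # and under conjugation (commutative case)

# F-classes of the elements outside F, and the maps sigma_F and gamma_F.
cls = {b: {y for y in B if ldiv(b, y) in F and ldiv(y, b) in F} for b in B if b not in F}
sigma = {b: min(cls[b], key=lambda z: rank[z]) for b in cls}   # least element of the class
gamma = {b: max(cls[b], key=lambda z: rank[z]) for b in cls}   # greatest element of the class
assert cls["0"] == cls["a"] == {"0", "a"}
assert sigma == {"0": "0", "a": "0"} and gamma == {"0": "a", "a": "a"}

# In B' = (B - F) ∪ {1}, a division is undefined exactly when its value in B falls
# into F - {1}; on this example that happens iff sigma(x) <= y and not x <= y.
for x, y in product(["0", "a"], repeat=2):
    undefined_in_Bprime = ldiv(x, y) in F - {"1"}
    assert undefined_in_Bprime == (leq(sigma[x], y) and not leq(x, y))
```

Here the whole class \(\{0,a\}\) is collapsed by \(F\), so \(\sigma_{F}\) and \(\gamma_{F}\) differ from the identity and the undefined division \(a\backslash 0\) actually occurs.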
We will now characterize abstractly triples of the form \((\mathbf{B}^{\prime},\sigma_{F},\gamma_{F})\), where we mean that \(B^{\prime}:=(B-F)\cup\{1\}\). A _lower-compatible triple_\((\mathbf{K},\sigma,\gamma)\) consists of 1. a partial IRL \(\mathbf{K}\) with all operations defined, except for \(x\backslash y\) and \(y/x\) which are undefined if and only if \(\sigma(x)\leq y\) and \(x\not\leq y\), 2. \((\sigma,\gamma)\) is a residuated pair, i.e. \(\sigma(x)\leq y\) if and only if \(x\leq\gamma(y)\), such that: 1. \(\sigma\) is a _strong conucleus_, i.e, an interior operator such that for \(x,y\neq 1\), \(x\sigma(y)=\sigma(xy)=\sigma(x)y\), and \(\sigma(1)=1\). 2. \(\gamma\) is a closure operator on \(\mathbf{K}\), and 3. \(xy,yx\leq\sigma(x)\) for all \(x,y\in K,y\neq 1\). **Lemma 2.5**.: _If \(F\) is a compatible congruence filter of an IRL \(\mathbf{B}\), then \((\mathbf{B}^{\prime},\sigma_{F},\gamma_{F})\) is a lower-compatible triple._ Proof.: For readability, in this proof we will write \(\sigma\) for \(\sigma_{F}\), \(\gamma\) for \(\gamma_{F}\) and \(\theta\) for \(\theta_{F}\). It is clear that \(B^{\prime}=(B-F)\cup\{1\}\) is closed under multiplication, meet and also under join except when \(x\lor y\in F-\{1\}\); we redefine these joins to be \(1\) in \(\mathbf{B}^{\prime}\). Since \(\mathbf{B}\) is an IRL and \(B^{\prime}\) inherits its operations, it can be directly checked that \(\mathbf{B}^{\prime}\) is a partial IRL in the sense of Definition 2.3. With respect to the divisions, we want to show that \(x\backslash y\) and \(y/x\) are undefined if and only if \(\sigma(x)\leq y\) and \(x\not\leq y\). Notice that the divisions \(x\backslash y\) and \(y/x\) are undefined in \(B^{\prime}\) iff they produce elements of \(F-\{1\}\). From \(x\backslash y\in F-\{1\}\) we get that \(x\not\leq y\), and moreover \(f\leq x\backslash y\) for some \(f\in F-\{1\}\). Thus by residuation \(xf\leq y\). Thus, since \(xf\in[x]_{F}\) (because \(xf\leq x\) and \(f\leq x\backslash xf\)), we have \(\sigma(x)=\min[x]_{F}\leq xf\leq y\). Similarly we can prove that if \(y/x\) is not defined in \(B^{\prime}\) then again \(\sigma(x)\leq y\) and \(x\not\leq y\). Conversely, suppose \(\sigma(x)\leq y\) and \(x\not\leq y\). Then \(x\backslash\sigma(x)\leq x\backslash y\) and since \(x\backslash\sigma(x)\in F\), we get \(x\backslash y\in F\). Moreover, since \(x\not\leq y\), we get \(x\backslash y\neq 1\). Note that \(B^{\prime}\) is closed under meet as all elements of \(B-F\) are below all elements of \(F\) and it is closed under multiplication due to integrality and order preservation of multiplication. Also, it is closed under joins that do not produce elements of \(F-\{1\}\) and the ones that do produce such elements are redefined to be 1. The resulting structure is a monoid and a lattice. Finally, if \(x\backslash^{\mathbf{B}}y\not\in F-\{1\}\), then residuation holds as all terms are evaluated in \(B^{\prime}\). We now prove that \(\sigma\) is a strong conucleus. Clearly, \(\sigma(x)\leq x\) and \(\sigma(\sigma(x))=\sigma(x)\) thus \(\sigma\) is decreasing and idempotent. We now prove that \(\sigma\) is monotone. Suppose \(x\leq y\), with \(x,y\in B^{\prime}\). Since \(\sigma(x)\ \theta\ x\) and \(\sigma(y)\ \theta\ y\), we have \(\sigma(x)\wedge\sigma(y)\ \theta\ x\wedge y=x\ \theta\ \sigma(x)\). Thus \(\sigma(x)\leq\sigma(x)\wedge\sigma(y)\) (since \(\sigma(x)\) is the smallest element in the equivalence class), thus \(\sigma(x)\leq\sigma(y)\). 
We will now use the absorbing property of \(\sigma\) in order to show that it is a strong conucleus. We show that \(x\sigma(y)=\sigma(xy)=\sigma(x)y\) for \(x\) and \(y\) not equal to 1. Now, \(x\sigma(y)\in x\sigma[B-F]\subseteq\sigma[B-F]\), thus \(x\sigma(y)=\sigma(z)\) for some \(z\in B-F\). But then since \(\sigma\) is idempotent \(\sigma(x\sigma(y))=\sigma(\sigma(z))=\sigma(z)=x\sigma(y)\). Since \(\sigma\) is decreasing and order preserving, we get \(x\sigma(y)=\sigma(x\sigma(y))\leq\sigma(xy)\). Moreover, since \(x\ \theta\ x\) and \(y\ \theta\ \sigma(y)\), we get \(xy\ \theta\ x\sigma(y)\), thus \(\sigma(xy)=\sigma(x\sigma(y))\leq x\sigma(y)\), and this shows that \(\sigma(xy)=x\sigma(y)\). Similarly, using \(\sigma[B-F]x\subseteq\sigma[B-F]\), we can show that \(\sigma(y)x=\sigma(xy)\). We now prove that \(\gamma\) is a closure operator. Since \(\gamma(x)=\max[x]_{F}\), it is easy to see that it is increasing and idempotent. To show that \(\gamma\) is monotone, suppose \(x\leq y\), with \(x,y\in B-F\). Since \(\gamma(x)\ \theta\ x\) and \(\gamma(y)\ \theta\ y\), we have \(\gamma(x)\vee\gamma(y)\ \theta\ x\lor y=y\ \theta\ \gamma(y)\). Thus \(\gamma(x)\vee\gamma(y)\leq\gamma(y)\) (since \(\gamma(y)\) is the largest element in the equivalence class), thus \(\gamma(x)\leq\gamma(y)\). We now show that \((\sigma,\gamma)\) is a residuated pair. If \(\sigma(x)\leq y\) then by idempotency and monotonicity of \(\sigma\) we get that \(\sigma(x)=\sigma\sigma(x)\leq\sigma(y)\). Then \(\gamma(\sigma(x))\leq\gamma(\sigma(y))\), and considering that it follows from their definition that \(\gamma\circ\sigma=\gamma\), we have \(x\leq\gamma(x)\leq\gamma(y)\). Similarly, if \(x\leq\gamma(y)\) then \(\gamma(x)\leq\gamma(y)\), thus applying \(\sigma\) we get that \(\sigma(x)\leq\sigma(y)\leq y\). It is only left to prove that \(xy,yx\leq\sigma(x)\) for all \(x,y\in B^{\prime},y\neq 1\). This easily follows from residuation, since for instance: \(xy\leq\sigma(x)\) if and only if \(y\leq x\backslash\sigma(x)\), which holds since \(x\backslash\sigma(x)\in F\). We say that \((\mathbf{B}^{\prime},\sigma_{F},\gamma_{F})\) is the compatible triple of the compatible pair \((\mathbf{B},F)\). We can also show that every compatible triple comes from a compatible pair. **Lemma 2.6**.: _Every lower-compatible triple \((\mathbf{K},\sigma,\gamma)\) is the compatible triple of the lower-compatible pair \((\mathbf{B},G)\), where \(G\) is the 2-element IRL and \(\mathbf{B}\) is an IRL with operations extending \((\mathbf{K}-\{1\})\cup\mathbf{G}\)._ Proof.: Let \(B=(K-\{1\})\cup G\) where \(G=\{f,1\}\). We extend the operations of \(\mathbf{K}\) to \(\mathbf{B}\) except when \(x\lor y=1\) in \(K\), in which case we redefine \(x\vee^{\mathbf{B}}y=f\); moreover we stipulate that: * \(f\) is an idempotent coatom strictly above all elements of \(K-\{1\}\); * \(f\cdot x=x\cdot f=\sigma(x)\), \(f\backslash x=x/f=\gamma(x)\), \(x\backslash f=f/x=1\) for all \(x\in K-\{1\}\). * \(x\backslash y=y/x=f\) for \(x,y\in B\) such that \(x\not\leq y\) and \(\sigma(x)\leq y\) (i.e., when \(x\backslash^{\mathbf{K}}y,y/^{\mathbf{K}}x\) are undefined). We first show that \(\mathbf{B}\) is a residuated lattice. It can be easily seen that \((B,\wedge,\vee,1)\) is a lattice with top 1. In order to see that \((B,\cdot,1)\) is a monoid, we need to prove associativity of the product in triples of elements where \(f\) is involved, as in the other cases associativity follows from the associativity in \(\mathbf{K}\).
We will make use of the absorption of \(\sigma\). For example, if \(b,d\in K-\{1\}\) (the other cases are similar): \[b(df)=b\sigma(d)=\sigma(bd)=(bd)f,\] \[b(fd)=b\sigma(d)=\sigma(bd)=\sigma(b)d=(bf)d.\] For residuation, notice first that if one among \(x,y,z\) is \(1\) then the law clearly holds. First we check the cases where \(f\) is involved. For \(b,d\in K-\{1\}\), we show that: \[bf\leq d\text{ iff }f\leq b\backslash d\text{ iff }b\leq d/f.\] Indeed, \(bf\leq d\text{ iff }\sigma(b)\leq d\text{ iff }[b\leq d\text{ or }(b\not\leq d \text{ and }\sigma(b)\leq d)]\text{ iff }(b\backslash d=1\text{ or }b\backslash d=f)\text{ iff }f\leq b \backslash d\). Moreover, \(bf\leq d\text{ iff }\sigma(b)\leq d\text{ iff }b\leq\gamma(d)\text{ iff }b\leq d/f\), where we used that \((\sigma,\gamma)\) is a residuated pair. Likewise we obtain \(fb\leq d\text{ iff }b\leq f\backslash d\text{ iff }f\leq d/b.\) Also, \[bd\leq f\text{ iff }d\leq b\backslash f\text{ iff }b\leq f/d\] holds as all these statements are true even for \(b=f\) and \(d\in K-\{1\}\) and also for \(d=f\) and \(b\in K-\{1\}\). Moreover, \(ff\leq b\text{ iff }f\leq b/f\text{ iff }f\leq f\backslash b\) since all inequalities are false. Now for \(x,y,z\in K-\{1\}\), we want to show that \[xy\leq z\text{ iff }y\leq x\backslash z\text{ iff }x\leq z/y.\] We show that \(xy\leq z\text{ iff }y\leq x\backslash z\), as the other equivalence is obtained similarly. We consider three different cases. * If \(x\leq z\), then both inequalities are true, since we get \(xy\leq x\leq z\) and \(x\backslash z=1\) thus \(y\leq x\backslash z\). * If \(x\not\leq z\), and \(\sigma(x)\leq z\), we have to show that \(xy\leq z\) iff \(y\leq f\). Notice that \(y\leq f\) since \(y\neq 1\). Moreover, since \(y\neq 1\) then \(xy\leq\sigma(x)\leq z\). * If \(\sigma(x)\not\leq z\), then the operations are defined in \(K\) and residuation holds. Thus \(\mathbf{B}\) is an integral residuated lattice. We want to show that \((\mathbf{B},G)\) is a compatible pair. \(G\) is closed under products since \(f\) is idempotent and it is closed under conjugates since \((x\cdot f)/x=\sigma(x)/x\in\{f,1\}\) and \(x\backslash(f\cdot x)=x\backslash\sigma(x)\in\{f,1\}\); so \(G\) is a congruence filter of \(B\). We now show that \(\sigma(x)=\min[x]_{G}=\sigma_{G}(x)\) and \(\gamma(x)=\max[x]_{G}=\gamma_{G}(x)\) for all \(x\in K-\{1\}\). Note that \(\sigma(x)\leq x\) implies \(\sigma(x)\backslash x=1\in G\); also \(x\backslash\sigma(x)\) is either \(1\) or \(f\), thus still in \(G\). Therefore, \(\sigma(x)\in[x]_{G}\). Furthermore, if \(y\in[x]_{G}\), then \(x\backslash y\in G\), thus either \(x\backslash y=1\) or \(x\backslash y=f\). If \(x\backslash y=1\) then \(\sigma(x)\leq x\leq y\), while if \(x\backslash y=f\), then \(xf\leq y\), so \(\sigma(x)\leq y\). Thus in any case \(\sigma(x)\leq y\), which implies that \(\sigma(x)=\min[x]_{G}=\sigma_{G}(x)\). We now prove that \(\gamma(x)=\gamma_{G}(x)\) for all \(x\in K-\{1\}\). Note that \(x\leq\gamma(x)\) implies \(x\backslash\gamma(x)=1\in G\); also \(\gamma(x)\backslash x\) is either \(1\) if \(x=\gamma(x)\) or \(f\) otherwise, since \(\sigma(\gamma(x))\leq x\) follows from the fact that \(\sigma,\gamma\) form a residuated pair. Therefore, \(\gamma(x)\in[x]_{G}\). Now, if \(y\in[x]_{G}\), then \(y\backslash x\in G\), thus either \(y\backslash x=1\) or \(y\backslash x=f\). If \(y\backslash x=1\) then \(y\leq x\leq\gamma(x)\), while if \(y\backslash x=f\), then \(yf\leq x\), so \(\sigma(y)\leq x\), if and only if \(y\leq\gamma(x)\). 
Thus \(\gamma(x)=\max[x]_{G}=\gamma_{G}(x)\). To prove that \(\sigma\) is absorbing, we use the fact that \(\sigma\) is a strong conucleus. For any \(x\) and \(y\in B-G\), we have \(x\sigma(y)=\sigma(xy)\in\sigma[B-G]\), so \(x\sigma[B-G]\subseteq\sigma[B-G]\) and similarly \(\sigma[B-G]x\subseteq\sigma[B-G]\). Thus we proved that \((\mathbf{B},G)\) is a compatible pair, and since \(\sigma=\sigma_{G},\gamma=\gamma_{G}\), and \(K=(B-F)\cup\{1\}\), it follows that \((\mathbf{K},\sigma,\gamma)\) is its compatible triple. If \(\mathbf{2}\) denotes the two-element residuated lattice, we have also shown the following. **Corollary 2.7**.: _If \((\mathbf{B},F)\) is a lower-compatible pair, then so is \((\mathbf{B}_{F},\mathbf{2})\), where \(B_{F}=(B-F)\cup 2\)._ We say that an IRL \(\mathbf{F}\)_fits_ with a lower-compatible triple \((\mathbf{K},\sigma,\gamma)\), if \((\mathbf{K}-\{1\})\cup\mathbf{F}\) extends to an IRL \(\mathbf{B}\), \(F\) is a compatible filter of \(\mathbf{B}\), and the compatible triple of the compatible pair \((\mathbf{B},F)\) is \((\mathbf{K},\sigma,\gamma)\). Note that if \((\mathbf{B},F)\) is a compatible pair, then \(F\) fits with the compatible triple \((\mathbf{B}^{\prime},\sigma_{F},\gamma_{F})\), but Corollary 2.7 shows that the same \(\mathbf{B}^{\prime}\) can belong to different lower-compatible triples. ### Fitting the components together: the construction Now, we define the \(F\)-gluing construction given the individual pieces we have identified, provided that they fit together in a suitable way. Consider a lower-compatible pair \((\mathbf{B},F)\), and an IRL \(\mathbf{C}\) such that \(B\cap C=F\), with \(F\) strictly above all other elements in \(C\) and such that if there are elements in \(B-F\) joining to some element of \(F\) then \(C\) has a least element \(0_{C}\). We will show that there is an IRL that is the (unique) gluing of \(\mathbf{B}\) and \(\mathbf{C}\) over \(F\) and that it is the following structure: \[\mathbf{B}\oplus_{F}\mathbf{C}=(B\cup C,\,\cdot_{F},\backslash_{F},/_{F}, \wedge_{F},\vee_{F},0,1),\] where the operations are defined as follows: \[x\cdot_{F}y =\left\{\begin{array}{ll}x\cdot y&\mbox{if }x,y\in B,\mbox{ or }x,y\in C\\ \sigma_{F}(x)&\mbox{if }y\in C-F,x\in B-F\\ \sigma_{F}(y)&\mbox{if }x\in C-F,y\in B-F\end{array}\right.\] \[x\backslash_{F}y =\left\{\begin{array}{ll}x\backslash y&\mbox{if }x,y\in B,\mbox{ or }x,y\in C \\ \gamma_{F}(y)&\mbox{if }x\in C-F,y\in B-F\\ 1&\mbox{if }x\in B-F,y\in C-F\end{array}\right.\] \[x/_{F}y =\left\{\begin{array}{ll}x/y&\mbox{if }x,y\in B,\mbox{ or }x,y\in C \\ \gamma_{F}(x)&\mbox{if }y\in C-F,x\in B-F\\ 1&\mbox{if }x\in C-F,y\in B-F\end{array}\right.\] \[x\wedge_{F}y =\left\{\begin{array}{ll}x\wedge y&\mbox{if }x,y\in B,\mbox{ or }x,y\in C \\ x&\mbox{if }x\in B-F,y\in C-F\\ y&\mbox{if }y\in B-F,x\in C-F\end{array}\right.\] \[x\vee_{F}y =\left\{\begin{array}{ll}x\lor y&\mbox{if }x,y\in C,\mbox{ or }x,y\in B\mbox{ with }x\lor y\not\in F \\ 0_{C}&\mbox{if }x,y\in B-F,x\lor y\in F\\ y&\mbox{if }x\in B-F,y\in C-F\\ x&\mbox{if }y\in B-F,x\in C-F\end{array}\right.\] The proof of the following theorem is postponed until the next section where we further expand the construction. More precisely, it will be a direct consequence of Theorem 3.10. 
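Before stating the theorem, the case analysis above can be exercised on a toy instance; the sketch below is again only illustrative. It reuses the four-element chain \(\mathbf{B}\) and compatible filter \(F=\{f,1\}\) of the earlier sketch and takes for \(\mathbf{C}\) the three-element Gödel chain \(c<f<1\) (product given by the meet), so that \(B\cap C=F\); all names and the particular algebras are our own choices, not data from the text. The script builds \(\cdot_{F}\) from the displayed cases, recovers the divisions by brute force, and asserts residuation together with the mixed-case formulas for \(\backslash_{F}\) and \(/_{F}\).

```python
# Illustrative sketch only: the F-gluing of the toy chain B (with F = {f,1}) under
# a three-element Goedel chain C = {c, f, 1}; glued order 0 < a < c < f < 1.
from itertools import product

elems = ["0", "a", "c", "f", "1"]
rank = {e: i for i, e in enumerate(elems)}
leq = lambda x, y: rank[x] <= rank[y]
F, B_minus, C_minus = {"f", "1"}, {"0", "a"}, {"c"}   # F, B - F, C - F
sigma = {"0": "0", "a": "0"}      # sigma_F on B - F, as computed in the earlier sketch
gamma = {"0": "a", "a": "a"}      # gamma_F on B - F

def mul_B(x, y):                  # product of B: f idempotent, the rest collapses to 0
    if x == "1" or y == "1":
        return y if x == "1" else x
    return "f" if x == y == "f" else "0"

def mul_C(x, y):                  # product of C: the meet on the chain c < f < 1
    return min(x, y, key=lambda z: rank[z])

def glued_mul(x, y):
    """Multiplication of the F-gluing, following the case analysis displayed above."""
    if x in B_minus | F and y in B_minus | F:
        return mul_B(x, y)
    if x in C_minus | F and y in C_minus | F:
        return mul_C(x, y)
    return sigma[x] if x in B_minus else sigma[y]     # one factor in B - F, one in C - F

def ldiv(x, y):  # x \ y, recovered by brute force
    return max((z for z in elems if leq(glued_mul(x, z), y)), key=lambda z: rank[z])

def rdiv(y, x):  # y / x, recovered by brute force
    return max((z for z in elems if leq(glued_mul(z, x), y)), key=lambda z: rank[z])

for x, y, z in product(elems, repeat=3):              # the glued structure is residuated ...
    assert leq(glued_mul(x, y), z) == leq(y, ldiv(x, z)) == leq(x, rdiv(z, y))

for b in B_minus:                 # ... and the mixed divisions match the displayed cases
    for c in C_minus:
        assert ldiv(c, b) == gamma[b] == rdiv(b, c)   # c\b = b/c = gamma_F(b)
        assert ldiv(b, c) == "1" == rdiv(c, b)        # b\c = c/b = 1
```

The same script can be pointed at any other finite instance of the construction; only the operation tables for \(\mathbf{B}\) and \(\mathbf{C}\) and the maps \(\sigma_{F}\), \(\gamma_{F}\) need to be replaced.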
**Theorem 2.8**.: _If \(F\) is a congruence filter of an IRL \(\mathbf{C}\) and also a compatible congruence filter of an IRL \(\mathbf{B}\), then \(\mathbf{B}\oplus_{F}\mathbf{C}\) is the gluing of \(\mathbf{B}\) and \(\mathbf{C}\) over \(F\)._ ### Gluing without the filter We now obtain a different construction that generalizes the \(1\)-sum construction in a different way: it glues together two structures that intersect at the top \(1\) and maintains the same order relation, but some of the divisions are redefined. With respect to the previous construction, the intuition here is that we are removing the filter \(F\) (keeping the unit \(1\)), and what is left is a partial IRL. We start from a lower-compatible triple \((\mathbf{K},\sigma,\gamma)\) and a partial IRL \(\mathbf{L}\) where some of the divisions might not be defined. In particular, \(x\backslash y\) is undefined if and only if all elements \(z\) in the interval \([y,1)=\{z\in L:y\leq z<1\}\) are such that \(xz\leq y\) but \([y,1)\) does not have a top element. Similarly, \(y/x\) is undefined if and only if all elements \(z\) in the interval \([y,1)\) are such that \(zx\leq y\) and \([y,1)\) has no top. If \(\mathbf{L}\) has a splitting coatom \(c_{L}\) (i.e., \(L=\{1\}\cup{\downarrow}c_{L}\)), all divisions in \(L\) are defined. We also assume that \(K\cap L=\{1\}\) and if \(x\lor y=1\) in \(K\) for some \(x,y\in K-\{1\}\), then \(L\) has a bottom element \(0_{L}\). We set \(\pi=(\sigma,\gamma)\) and we define \(\mathbf{K}\oplus_{\pi}\mathbf{L}\) to be the structure where the operations extend those of \(K\) and \(L\), except if \(x\lor y=1\) in \(K\) then we redefine \(x\lor y=0_{L}\). Moreover: \[xy=\left\{\begin{array}{ll}\sigma(x)&\text{ if }y\in L-\{1\},x\in K-\{1\}\\ \sigma(y)&\text{ if }x\in L-\{1\},y\in K-\{1\}\\ \end{array}\right.\] \[x\backslash y=\left\{\begin{array}{ll}c_{L}&\text{ if }x,y\in K,L\text{ has a splitting coatom }c_{L},\text{ and }x\backslash{}^{\mathbf{K}}y\text{ is undefined}\\ \gamma(y)&\text{ if }x\in L-\{1\},y\in K-\{1\}\\ 1&\text{ if }x\in K-\{1\},y\in L\\ \end{array}\right.\] \[y/\,x=\left\{\begin{array}{ll}c_{L}&\text{ if }x,y\in K,L\text{ has a splitting coatom }c_{L},\text{ and }y/{}^{\mathbf{K}}x\text{ is undefined}\\ \gamma(y)&\text{ if }x\in L-\{1\},y\in K-\{1\}\\ 1&\text{ if }x\in K-\{1\},y\in L\\ \end{array}\right.\] \[x\wedge y=\begin{array}{ll}x&\text{ if }x\in K-\{1\},y\in L\end{array}.\] \[x\lor y=\begin{array}{ll}y&\text{ if }x\in K-\{1\},y\in L\end{array}.\] **Proposition 2.9**.: \(\mathbf{K}\oplus_{\pi}\mathbf{L}\) _is a partial IRL and it is total if \(L\) has a splitting coatom._ Proof.: First, we notice that all operations are total except possibly the divisions. The meet and join operation clearly define a lattice, and \(x\leq 1\) for every element \(x\) in the gluing. Associativity of the product can be easily shown by the definition using the idempotency of \(\sigma\) and the strong conuclearity condition. Since the operations of \(\mathbf{K}\oplus_{\pi}\mathbf{L}\) extend the ones of \(\mathbf{K}\) and \(\mathbf{L}\), \(\mathbf{K}\oplus_{\pi}\mathbf{L}\) has an underlying monoidal structure. We show that residuation holds. That is, for all elements \(x,y,z\), whenever \(xy,x\backslash z,z/y\) are defined: \[xy\leq z\text{ iff }y\leq x\backslash z\text{ iff }x\leq z/y\] * In the case where \(x,y\in L,z\in K-\{1\}\) none of the inequalities are true, as can be seen by the definition of the order and the divisions.
* In the case where \(x,z\in L,y\in K-\{1\}\) all the inequalities hold as it follows from the definition of the order and of the divisions. The situation is similar when: \(x\in K-\{1\},y,z\in L\), as well as when \(x,y\in K-\{1\},z\in L\). * If \(x\in L-\{1\},y,z\in K-\{1\}\), we need to verify: \[\sigma(y)\leq z\text{ iff }y\leq\gamma(z)\text{ iff }x\leq z/y\] whenever \(z/y\) is defined. The first equivalence holds since the two maps form a residuated pair. We now show \(\sigma(y)\leq z\Leftrightarrow x\leq z/y\) holds in case \(z/y\) is defined in \(\mathbf{K}\oplus_{\pi}\mathbf{L}\). If \(y\leq z\), then \(\sigma(y)\leq\sigma(z)\leq z\) and \(x\leq 1=z/y\), so both inequalities hold. Assume now that \(y\not\leq z\). If \(\sigma(y)\leq z\) we get that \(z/y\) is undefined in \(\mathbf{K}\), thus \(z/y\) is the coatom of \(L\) (supposing \(z/y\) is defined in \(\mathbf{K}\oplus_{\pi}\mathbf{L}\)), so \(x\leq z/y\) holds. Conversely, if \(x\leq z/y\) then since \(x\in L\), we get that either \(z/y=1\) or it is the coatom of \(L\). Thus either \(y\leq z\) (which is against our assumption) or \(z/y\) is undefined in \(K\), which means that \(\sigma(y)\leq z\). The case: \(x,z\in K-\{1\},y\in L-\{1\}\) is similar. * Suppose now \(x,y,z\in K-\{1\}\). We only show \(xy\leq z\Leftrightarrow y\leq x\backslash z\) assuming that \(x\backslash z\) is defined in \(\mathbf{K}\oplus_{\pi}\mathbf{L}\); the proof of the equivalence \(xy\leq z\Leftrightarrow x\leq z/y\) is similar. If \(x\backslash z\) is defined in \(K\), the equivalence holds by residuation in \(\mathbf{K}\). Also, if \(x\leq z\), then \(xy\leq zy\leq z\) and \(y\leq 1=x\backslash z\), so both inequalities hold. Now, if \(x\backslash{}^{\mathbf{K}}z\) is undefined and \(\sigma(x)\leq z\), then we get \(xy\leq\sigma(x)\leq z\) (where the first inequality is due to (2c) in the definition of lower-compatible triple) and also that \(x\backslash z\) is equal to the coatom of \(L\), so again both inequalities hold. It is left to show that multiplication is order preserving and that the divisions are order preserving in the numerator and order reversing in the denominator. The fact that the monoidal operation is order preserving can be easily checked using the facts that by the definition of lower-compatible triple \(\sigma\) is order preserving, and \(xy,yx\leq\sigma(x)\) for all \(x,y\in K,y\neq 1\). Finally, the order properties of divisions (when defined) can be directly checked, and follow by residuation and the order preservation of \(\gamma\). We have shown that \(\mathbf{K}\oplus_{\pi}\mathbf{L}\) is a partial IRL. In the case where \(L\) has a coatom, all divisions (the only partial operations) are correctly and fully defined. Thus in that case \(\mathbf{K}\oplus_{\pi}\mathbf{L}\) is a (total) IRL. We will refer to \(\mathbf{K}\oplus_{\pi}\mathbf{L}\) as the _partial upper gluing_ of \(\mathbf{K}\) and \(\mathbf{L}\). If we take a lower-compatible pair \((\mathbf{B},F)\) and an IRL \(\mathbf{C}\) such that \(B\cap C=F\) with \(F\) strictly above the other elements in \(C\), we can construct the partial gluing of the compatible triple \((\mathbf{B}^{\prime},\sigma_{F},\gamma_{F})\) and of \(C-F\), which is a partial IRL with the required properties for the partial gluing construction. We will see interesting examples of these constructions in the final section of the paper.
## 3. Gluing over a filter and an ideal We can take the previous intuition even further and generalize the construction allowing the algebras \(\mathbf{B}\) and \(\mathbf{C}\) to intersect on both a congruence filter \(F\) and a lattice ideal \(I\). Let \(\mathbf{D}\) be an IRL with underlying set \(B\cup C\), where \(B\cap C=F\cup I\), with \(F\) a congruence filter as before, \(I\) a lattice ideal, \(i<b<c<f\) for all \(i\in I\), \(b\in B^{-}:=B-(F\cup I)\), \(c\in C^{-}:=C-(F\cup I)\) and \(f\in F\), \(C\) is a subalgebra except possibly for \(\wedge\) and \(B\) is a subalgebra except possibly with respect to \(\vee\). We say that \(\mathbf{D}\) is _a gluing of \(\mathbf{B}\) and \(\mathbf{C}\) over \(F\) and \(I\)_, or \(F-I\)_-gluing of \(\mathbf{B}\) and \(\mathbf{C}\)_. As before, we will identify conditions on \(\mathbf{B}\), \(\mathbf{C}\), \(F\) and \(I\) that will allow us to construct \(\mathbf{D}\) from these constituent parts. We will characterize the structure on the subset \(C^{\prime}=(C-I)\). ### Compatibility with an ideal We call an element \(c\in C^{-}\) a _left \(I\)-divisor_ if there exists another \(c^{\prime}\in C^{-}\) such that \(cc^{\prime}\in I\), and a _right \(I\)-divisor_ if instead there exists \(c^{\prime\prime}\in C^{-}\) such that \(c^{\prime\prime}c\in I\). We say that an ideal \(I\) of an IRL \(\mathbf{C}\) is _compatible_ with \(\mathbf{C}\) if it is strictly below \(C^{\prime}\) and for all left \(I\)-divisors \(c\in C^{\prime}\) and right \(I\)-divisors \(d\in C^{\prime}\), the sets \(\{c\backslash i:i\in I\}\) and \(\{i/d:i\in I\}\) have maxima. In this case we denote these elements by \(\ell_{I}(c)\) and \(r_{I}(d)\), respectively: \[\ell_{I}(c)=\max\{c\backslash i:i\in I\},\ \ r_{I}(d)=\max\{i/d:i\in I\}.\] We also say that \((\mathbf{C},I)\) is an _upper-compatible pair_. **Lemma 3.1**.: _Given a gluing of \(\mathbf{B}\) and \(\mathbf{C}\) over \(F\) and \(I\), the lattice ideal \(I\) is compatible with \(\mathbf{C}\). Also, for every left \(I\)-divisor \(c\in C-I\), right \(I\)-divisor \(d\in C-I\), and \(b\in B-F\):_ \[c\backslash b=\ell_{I}(c),\qquad b/d=r_{I}(d).\] Proof.: The ideal \(I\) is strictly below all elements of \(C^{\prime}=C-I\) by definition. We now show that the maps \(\ell_{I}\) and \(r_{I}\) are defined for all, respectively, left and right \(I\)-divisors. For a left \(I\)-divisor \(c\in C^{-}\) we consider \(c\backslash b\) for some \(b\in B^{-}\). We claim that \(c\backslash b\in C^{-}\). Indeed by definition there is at least one element in \(C^{-}\) that multiplied to the right of \(c\) gives an element of \(I\), thus below \(b\), and moreover there cannot be \(f\in F\) such that \(cf\in I\). Indeed otherwise, we would have \(cf\leq i\) for some \(i\in I\) iff (by residuation) \(c\leq i/f\), which yields \(i/f\in F\) (since \(i/f\in B\cap C\)); this would imply that \(f^{\prime}f\leq i\) for some \(f^{\prime}\in F\), so we would get \(i\in F\), a contradiction. Thus \(c\backslash b=\max\{d\in C^{-}:cd\in I\}\). We now show that \(\max\{d\in C^{-}:cd\in I\}=\max\{c\backslash i:i\in I\}\). Indeed, since \(c\backslash b=\max\{d\in C^{-}:cd\in I\}\), there exists \(j\in I\), with \(c(c\backslash b)\leq j\); thus \(c\backslash b\leq c\backslash j\). Also, for all \(i\in I\), we have \(c\backslash i\leq c\backslash b\) since \(i<b\); so \(c\backslash i\leq c\backslash b\leq c\backslash j\). Since \(j\in I\), we get that \(c\backslash b=c\backslash j\) and also \(c\backslash b=\max\{c\backslash i:i\in I\}\).
Thus \(\max\{c\backslash i:i\in I\}\) exists for every left \(I\)-divisor \(c\) in \(C^{\prime}\) and \(c\backslash b=\ell_{I}(c)\). Similarly, one can show that \(\max\{i/d:i\in I\}\) exists for every right \(I\)-divisor \(d\) of \(C^{\prime}\), and that \(b/d=r_{I}(d)\). What we showed in the previous section about lower-compatible pairs still holds in case there is at least one non \(I\)-divisor. Otherwise, we need to consider the weaker notion of lower-compatibility. **Definition 3.2**.: We say that a pair \((\mathbf{B},F)\) is a _weak lower-compatible pair_ if it respects all the conditions of a lower-compatible pair except that \(\gamma_{F}\) need not be defined, i.e. the equivalence classes \([b]_{F}\) for \(b\in B-F\) do not need to have a maximum element. We can indeed prove the following analogue of Lemma 2.2. **Lemma 3.3**.: _Given a gluing of \(\mathbf{B}\) and \(\mathbf{C}\) over \(F\) and \(I\), \((\mathbf{B},F)\) is a weak lower-compatible pair. If there exists an element of \(\mathbf{C}\) that is not an \(I\)-divisor, then \((\mathbf{B},F)\) is a lower-compatible pair. Moreover, for all \(b\in B-F,c\in C-(F\cup I)\):_ \[cb=bc=\sigma_{F}(b),\] \[c\backslash b=\gamma_{F}(b)\text{ if }c\text{ is not a left }I\text{-divisor,}\] \[b/c=\gamma_{F}(b)\text{ if }c\text{ is not a right }I\text{-divisor.}\] Proof.: The proof that in any case \((\mathbf{B},F)\) is a weak lower-compatible pair is the same as the one of Lemma 2.2. If given some \(c\in C-(F\cup I)\), there is no element \(c^{\prime}\in C-(F\cup I)\) such that \(cc^{\prime}\in I\), then also the proof about \([b]_{F}\) having a maximum can be replicated in exactly the same way. Indeed, we get that then \(c\backslash b\in B\), \([b]_{F}=[c\backslash b]_{F}\) and \(b^{\prime}\leq c\backslash b\) for all \(b^{\prime}\in[b]_{F}\). ### Compatible quadruples Consider now a weak lower-compatible pair \((\mathbf{B},F)\), and an upper-compatible pair \((\mathbf{C},I)\) such that \(B\cap C=F\cup I\) is a subalgebra of both \(\mathbf{B}\) and \(\mathbf{C}\), with \(F\) strictly above all elements in \(C-F\) and \(I\) strictly below all elements of \(C-I\). We call the quadruple \((\mathbf{B},F,\mathbf{C},I)\) _compatible_ if moreover: 1. if at least one element of \(C-F\) is not an \(I\)-divisor, then \((\mathbf{B},F)\) is a lower-compatible pair. 2. Whenever \(c,d\in C-I\) and \(cd\in I\), we have \((cd)x=x(cd)=\sigma_{F}(x)\) for all \(x\in B^{-}\). 3. If there are elements \(x,y\in B-F\) such that \(x\lor y\in F\), then \(C-I\) has a bottom element \(\bot_{C}\). 4. If there are elements \(x,y\in C-I\) such that \(x\wedge y\in I\), then \(B-F\) has a top element \(\top_{B}\). We recall that \(\sigma_{F}(x)=\min[x]_{F}\), and whenever defined \(\gamma_{F}(x)=\max[x]_{F}\), \(\ell_{I}(x)=\max\{x\backslash i:i\in I\}\), \(r_{I}(x)=\max\{i/x:i\in I\}\). **Proposition 3.4**.: _Given a gluing of IRLs \(\mathbf{B}\) and \(\mathbf{C}\) over \(F\) and \(I\), \((\mathbf{B},F,\mathbf{C},I)\) is a compatible quadruple. Moreover, the gluing of \(\mathbf{B}\) and \(\mathbf{C}\) over \(F\) and \(I\) is unique when it exists._ Proof.: The fact that \((\mathbf{B},F)\) is a weak lower-compatible pair, together with conditions 1 and 2, is shown in Lemma 3.3. The fact that \((\mathbf{C},I)\) is an upper-compatible pair is shown in Lemma 3.1. Conditions 3 and 4 are clearly properties of the lattice ordering of the gluing. The following technical properties of a compatible quadruple will be useful in what follows.
**Lemma 3.5**.: _If \((\mathbf{B},F,\mathbf{C},I)\) is a compatible quadruple, the following properties hold._ 1. _For all_ \(x\in F,y\in B-F\)_,_ \(x\sigma_{F}(y)=\sigma_{F}(xy)=\sigma_{F}(yx)=\sigma_{F}(y)x=\sigma_{F}(y)\)_._ 2. _For every_ \(c\in C^{-}\) _and_ \(f\in F\)_, we have_ \(cf,fc\in C^{-}\)_._ Proof.: (1) Since \(x\ \theta\ 1\) and \(y\ \theta\ y\), we have \(xy\ \theta\ y\) which implies \(\sigma_{F}(xy)=\sigma_{F}(y)\). From \(x\ \theta\ x\) and \(y\ \theta\ \sigma_{F}(y)\) we get \(xy\ \theta\ x\sigma_{F}(y)\), thus \(\sigma_{F}(xy)=\sigma_{F}(x\sigma_{F}(y))\leq x\sigma_{F}(y)\). Moreover, from \(x\ \theta\ 1\) and \(y\ \theta\ \sigma_{F}(y)\) we have \(xy\ \theta\ \sigma_{F}(y)\), which implies \(\sigma_{F}(xy)=\sigma_{F}(\sigma_{F}(y))=\sigma_{F}(y)\geq x\sigma_{F}(y)\); thus \(\sigma_{F}(xy)=x\sigma_{F}(y)\). The other equalities can be proven analogously. (2) Suppose by way of contradiction that \(cf=i\in I\). Thus by residuation \(c\leq i/f\), but since \(i,f\in B\cap C\), and \(B\cap C\) is a subalgebra of both \(\mathbf{B}\) and \(\mathbf{C}\), we get \(i/f\in F\), so \(i\geq(i/f)f\in F\), hence \(i\in F\), a contradiction. Similarly one can show that \(fc\in C^{-}\) ### Abstracting the upper part We now characterize abstractly the triples of the form \((\mathbf{C}^{\prime},\ell_{I},r_{I})\), where we understand \(C^{\prime}=(C-I)\) and \(I\) is an ideal strictly below \(C^{\prime}\). A triple \((\mathbf{L},\ell,r)\) is called _upper-compatible_, if \(\mathbf{L}\) is a partial IRL and: 1. \(\ell,r\) are partial maps on \(L\) that form a Galois connection; more precisely \(\ell(y)\) is defined and \(x\leq\ell(y)\) if and only if \(r(x)\) is defined and \(y\leq r(x)\). 2. If \(r(y^{\prime})\) is defined, \(x\leq r(y^{\prime})\) and \(y\leq y^{\prime}\), then \(r(y)\) is defined and \(x\leq r(y)\). Thus the domain \(D_{r}\) of \(r\) is downwards closed. Also, the same holds for \(\ell\). 3. \(xy\) is undefined iff \(y\) is in the domain of \(r\) and \(x\leq r(y)\), iff \(x\) is in the domain of \(\ell\) and \(y\leq\ell(x)\); 4. \(x\backslash y\) is undefined iff there is no \(z\) with \(xz\leq y\), and \(y/x\) is undefined iff there is no \(z\) with \(zx\leq y\). 5. If \(\ell(x)\) and \(r(z)\) are defined then: \(x\backslash r(z)\) is defined iff \(\ell(x)/z\) is defined, and in such a case \(x\backslash r(z)=\ell(x)/z\). 6. If \(\ell(x)\) is undefined and \(r(z)\) is defined, then \(x\backslash r(z)=r(z)\). If \(r(z)\) is undefined and \(\ell(y)\) is defined, then \(\ell(y)/z=\ell(y)\). 7. If \(\ell(x)\) is defined, then \(x\backslash z\) is defined and \(\ell(x)\leq x\backslash z\). Similarly, if \(r(x)\) is defined, then \(w/x\) is defined and \(r(x)\leq w/x\). 8. \(x\wedge y\) is undefined iff there is no \(z\leq x,y\). 9. All other operations are defined. **Lemma 3.6**.: _In an upper-compatible triple, if \(xy\) is defined, then: \(xy\leq z\) iff (\(x\backslash z\) is defined and \(y\leq x\backslash z\)) iff (\(z/y\) is defined and \(x\leq z/y\))._ Proof.: Suppose that \(xy\) is defined. If \(xy\leq z\), then \(x\backslash z\) is defined, since there is \(y\) such that \(xy\leq z\), and so residuation holds. Conversely, suppose \(x\backslash z\) is defined and \(y\leq x\backslash z\). Then \(x(x\backslash z)\) is defined, since otherwise we would have: \(x\in D_{\ell}\), \(x\backslash z\leq\ell(x)\), and since \(y\leq x\backslash z\leq\ell(x)\), then \(xy\) would be undefined, a contradiction. 
Thus we get \(xy\leq x(x\backslash z)\leq z\), by order preservation of multiplication. Similarly one can prove that \(xy\leq z\) if and only if \(z/y\) is defined and \(x\leq z/y\). **Lemma 3.7**.: _If \(I\) is a compatible ideal of an IRL \(\mathbf{C}\), then \((\mathbf{C}^{\prime},\ell_{I},r_{I})\) is an upper-compatible triple._ Proof.: We show that \((\mathbf{C}^{\prime},\ell_{I},r_{I})\) has the properties of an upper-compatible triple, recalling that \(C^{\prime}=C-I\). It is easy to check that \(\mathbf{C}^{\prime}\) is a partial IRL, in particular: 1. Integrality is clearly satisfied; 2. The three axioms of RLs are satisfied whenever they can be applied, in particular: 1. with respect to the lattice operations, the joins are always defined, \(1\) is the largest element, and \(x\wedge y\) is undefined iff there is no common lower bound of \(x\) and \(y\); 2. \(1\) is the unit of the product, and \(xy,(xy)z\) are defined iff they are not in \(I\), iff \(yz,x(yz)\) are not in \(I\) and in such case \((xy)z=x/yz\)). 3. residuation works by Lemma 3.6. 3. Since \(\mathbf{C}\) is an IRL, multiplication is order preserving when defined; 4. For the same reason, divisions are order-preserving in the numerator and order-reversing in the denominator whenever defined. In the rest of this proof we will write \(\ell\) for \(\ell_{I}\) and \(r\) for \(r_{I}\). We now check the properties in the definition of an upper compatible triple. 1. We first show that \(\ell,r\) form a Galois connection whenever they are defined, i.e. that \(\ell(y)\) is defined and \(x\leq\ell(y)\) if and only if \(r(x)\) is defined and \(y\leq r(x)\). If \(r(x)\) is defined and \(y\leq r(x)\), then \(y\leq i/x\) for some \(i\in I\), which by residuation is equivalent to \(yx\leq i\). So \(y\) is a left \(I\)-divisor and thus \(\ell(y)\) is defined and \(x\leq y\backslash i\leq\ell(y)\). Similarly, the converse holds. 2. We now prove that if \(r(y^{\prime})\) is defined, \(x\leq r(y^{\prime})\) and \(y\leq y^{\prime}\), then \(r(y)\) is defined and \(x\leq r(y)\). If \(r(y^{\prime})\) is defined then \(y^{\prime}\) is a right \(I\)-divisor, i.e. there is \(z\in C^{\prime}\) such that \(zy^{\prime}\in I\). Since \(y\leq y^{\prime}\), we have \(zy\leq zy^{\prime}\in I\) and since \(I\) is closed downwards we get \(zy\in I\), i.e., \(y\) is a right \(I\)-divisor. Moreover, it follows from the definition of \(r\) that it is order reversing, thus \(r(y^{\prime})\leq r(y)\). Since \(x\leq r(y^{\prime})\), we also have that \(x\leq r(y)\). 3. Given \(x,y\in C^{\prime}\), we now prove that \(xy\) is undefined if and only if \(y\) is in the domain of \(r\) and \(x\leq r(y)\), the proof of the other equivalence being similar. Notice that the product \(xy\) is undefined in \(C^{\prime}\) if and only if it is an element of \(I\). In such a case, we get \(xy\leq i\) for some \(i\in I\), thus \(y\) is a right \(I\)-divisor and so it is in the domain of \(r\). Moreover, \(x\leq i/y\leq r(y)\). Conversely, if \(y\) is in the domain of \(r\) and \(x\leq r(y)\), then \(x\leq i/y\) for some \(i\in I\). Thus \(xy\in I\) and so \(xy\) is undefined in \(C^{\prime}\). 4. We have that \(x\backslash y\) is undefined in \(C^{\prime}\) iff it is an element of \(I\), or equivalently, iff there is no \(z\in C^{\prime}\) with \(xz\leq y\). Similarly, \(y/x\) is undefined iff there is no \(z\) with \(zx\leq y\). 5. 
We need to show that, if \(\ell(x)\) and \(r(z)\) are defined: \(x\backslash r(z)\) is defined iff \(\ell(x)/z\) is defined and in such a case \(x\backslash r(z)=\ell(x)/z\). Suppose first that \(x\backslash r(z)\) is defined in \(C^{\prime}\). Then \((x\backslash r(z))z\leq x\backslash i\leq\ell(x)\), for some \(i\in I\), which by residuation implies \(x\backslash r(z)\leq\ell(x)/z\). Thus in particular \(\ell(x)/z\in C^{\prime}\). So we also get that \(x(\ell(x)/z)z\leq x\ell(x)\leq j\) for some \(j\in I\), which implies that \(x(\ell(x)/z)\leq j/z\leq r(z)\), which again by residuation implies \(\ell(x)/z\leq x\backslash r(z)\); thus the equality \(x\backslash r(z)=\ell(x)/z\) is proved. Likewise, we can show that if \(\ell(x)/z\) is defined in \(C^{\prime}\) then \(x\backslash r(z)\) is defined in \(C^{\prime}\) and \(x\backslash r(z)=\ell(x)/z\). 6. Suppose first that \(\ell(x)\) is undefined and \(r(z)\) is defined, then \(r(z)=i/z\) for some \(i\in I\). Thus \(x\backslash(i/z)=(x\backslash i)/z\leq r(z)\), because necessarily \(x\backslash i\in I\), and since \(r(z)\leq x\backslash r(z)\) we have the equality. Similarly one can show the other case. 7. Now if \(\ell(x)=\max\{x\backslash i:i\in I\}\) is defined in \(C^{\prime}\), then there exists \(i\in I\) with \(x\backslash i\in C^{\prime}\). For every \(z\in C-I\) we have \(i<z\) and \(x\backslash i\leq x\backslash z\), so \(x\backslash z\in C^{\prime}\), and thus it is defined in \(C^{\prime}\), and \(\ell(x)\leq x\backslash z\). The analogous fact for \(r\) is proven similarly. 8. A meet \(x\wedge y\) is undefined in \(C^{\prime}\) iff \(x\wedge y\in I\) iff there is no \(z\in C^{\prime}\) with \(z\leq x,y\). 9. All other operations are defined. We say that \((\mathbf{C}^{\prime},\ell_{I},r_{I})\) is the upper-compatible triple of the upper-compatible pair \((\mathbf{C},I)\). **Lemma 3.8**.: _Every upper-compatible triple \((\mathbf{L},\ell,r)\) is the upper-compatible triple of the upper-compatible pair \((\mathbf{C},J)\), where \(J=\{0\}\) is a one-element set and \(\mathbf{C}\) is an IRL with operations extending \(\mathbf{L}\cup\{0\}\), with \(0\) as the bottom element._ Proof.: Let \(C=L\cup\{0\}\), with \(0\) an idempotent element strictly below all elements of \(L\). The operations of \(\mathbf{C}\) are defined to extend the existing operations of \(L\), and we further define: \[x\wedge y= 0\quad\text{ if }x\wedge y\text{ is undefined in }L\] \[xy= 0\quad\text{ if }x=0\text{ or }y=0\text{ or }x\leq r(y)\] \[x\backslash y= \left\{\begin{array}{ll}1&\text{ if }x=0\\ 0&\text{ if }x\neq 0,y=0\text{ and }\ell(x)\text{ is not defined}\\ \ell(x)&\text{ if }x\neq 0,y=0\text{ and }\ell(x)\text{ is defined}\end{array}\right.\] \[y/x= \left\{\begin{array}{ll}1&\text{ if }x=0\\ 0&\text{ if }x\neq 0,y=0\text{ and }r(x)\text{ is not defined}\\ r(x)&\text{ if }x\neq 0,y=0\text{ and }r(x)\text{ is defined}\end{array}\right.\] We set \(J=\{0\}\) and show that \(\mathbf{C}\) is an integral residuated lattice. The order defined clearly yields a lattice. Let us show that associativity holds, i.e. for any \(x,y,z\in C\), \[x\cdot(y\cdot z)=(x\cdot y)\cdot z.\] We distinguish the following cases. * If any of \(x,y,z\) is \(0\), both sides of the equality are \(0\) and thus associativity holds. The same holds if \(x\leq r(y)\) and \(y\leq r(z)\). * Assume \(x\leq r(y)\), and \(r(z)\) is undefined or \(y\not\leq r(z)\). Then \((x\cdot y)\cdot z=0\cdot z=0\). 
Moreover \(yz\) is defined in \(L\), thus \(yz\leq y\leq\ell(x)\) (since there is a Galois connection between \(\ell\) and \(r\)) hence \(x\leq r(yz)\), so \(x\cdot(y\cdot z)=0\). Similarly we verify the case where \(y\leq r(z)\) and \(x\not\leq r(y)\) or \(r(y)\) is undefined. * Finally, assume that \(r(y)\) is undefined or \(x\not\leq r(y)\), and \(r(z)\) is undefined or \(y\not\leq r(z)\). Then the products \(xy,yz\) are defined in \(L\), and then we get that \((xy)z=0\) if \(xy\leq r(z)\), and \((xy)z\in L\) otherwise. Similarly, \(x(yz)=0\) if \(yz\leq\ell(x)\), and \(x(yz)\in L\) otherwise. The claim is proved by showing that \(xy\leq r(z)\) if and only if \(yz\leq\ell(x)\). Indeed, suppose \(r(z)\) is defined and \(xy\leq r(z)\), by Lemma 3.6 we get that \(x\backslash r(z)\) is defined and \(y\leq x\backslash r(z)\). Then if \(\ell(x)\) is undefined, by Property 6 we get \(x\backslash r(z)=r(z)\), thus \(y\leq r(z)\), a contradiction. Then also \(\ell(x)\) is defined, thus by Property 5\(\ell(x)/z\) is defined and \(y\leq x\backslash r(z)=\ell(x)/z\). Since \(\ell(x)/z\) is defined and \(y\leq\ell(x)/z\), by Lemma 3.6 we obtain \(yz\leq\ell(x)\) since \(yz\) is defined. Similarly one can show the right-to-left direction. We now show residuation: \[xy\leq z\text{ iff }y\leq x\backslash z\text{ iff }x\leq z/y\] * If \(x=0\) or \(y=0\) the claim is easily shown. Suppose now \(z=0\); we need to show \(xy\leq 0\) iff \(y\leq x\backslash 0\). If \(\ell(x)\) is defined then \(x\backslash 0=\ell(x)\), and we know that \(xy\leq 0\) iff \(y\leq\ell(x)\). If \(\ell(x)\) is not defined then \(x\backslash 0=0\) and and \(xy\) is defined to be \(0\), so both inequalities hold. * Let \(x,y,z\neq 0\), \(r(y)\) defined, and \(x\leq r(y)\) (or equivalently \(\ell(x)\) defined and \(y\leq\ell(x)\)). Then \(xy\) is not defined in \(L\), thus the first inequality becomes \(0\leq z\) and is true. Since \(\ell(x)\) is defined, by Property 7\(x\backslash z\) is also defined and \(y\leq\ell(x)\leq x\backslash z\), so the second inequality is also true. The proof that \(x\leq z/y\) holds is similar. * Let \(x,y,z\neq 0\), \(r(y)\) undefined or \(x\not\leq r(y)\); then \(xy\) is defined in \(L\). Residuation follows from Lemma 3.6. Notice that if \(c\) is a left \(J\)-divisor, \(\ell_{J}(c)=\max\{c\backslash i:i\in J\}=c\backslash 0=\ell(c)\), and if \(d\) is a right \(J\)-divisor \(r_{J}(d)=\max\{i/d:i\in J\}=0\backslash d=r(d)\). Thus we have shown that \((\mathbf{L},\ell,r)\) is the upper-compatible triple of the upper-compatible pair \((\mathbf{C},J)\). **Corollary 3.9**.: _If \((\mathbf{C},I)\) is an upper-compatible pair, so is \(((\mathbf{C}-I)\cup\{0\},\{0\})\)._ ### The gluing over a filter-ideal pair We are now ready to introduce the gluing construction over a congruence filter and an ideal, which is depicted in Figure 2. 
To ease the notation, we write the pair of the filter \(F\) and the ideal \(I\) as \(P\): \[P:=(F,I)\] We can then define the gluing of \(\mathbf{B}\) and \(\mathbf{C}\) over the pair \(P=(F,I)\), or \((F-I)\)-gluing of \(\mathbf{B}\) and \(\mathbf{C}\), as the structure \(\mathbf{B}\oplus_{P}\mathbf{C}\) where the operations extend the ones of \(\mathbf{B}\) and \(\mathbf{C}\) as follows: \[x\cdot y=\left\{\begin{array}{ll}x\cdot y&\text{if }x,y\in B,\text{ or }x,y\in C\\ \sigma_{F}(x)&\text{if }y\in C^{-},x\in B^{-}\\ \sigma_{F}(y)&\text{if }x\in C^{-},y\in B^{-}\end{array}\right.\] \[x\backslash y=\left\{\begin{array}{ll}x\backslash y&\text{if }x,y\in B,\text{ or }x,y\in C \\ \gamma_{F}(y)&\text{if }y\in B^{-}\text{ and }x\in C^{-}\text{ is not a left }I \text{-divisor}\\ \ell_{I}(x)&\text{if }y\in B^{-}\text{ and }x\in C^{-}\text{ is a left }I \text{-divisor}\\ 1&\text{if }x\in B^{-},y\in C^{-}\end{array}\right.\] \[x/y=\left\{\begin{array}{ll}x/y&\text{if }x,y\in B,\text{ or }x,y\in C \\ \gamma_{F}(x)&\text{if }x\in B^{-}\text{ and }y\in C^{-}\text{ is not a right }I \text{-divisor}\\ r_{I}(y)&\text{if }x\in B^{-}\text{ and }y\in C^{-}\text{ is a right }I \text{-divisor}\\ 1&\text{if }x\in C^{-},y\in B^{-}\end{array}\right.\] \[x\wedge y=\left\{\begin{array}{ll}x\wedge y&\text{if }x,y\in B,\text{ or }x,y\in C \text{ with }x\wedge y\not\in I\\ \top_{B}&\text{if }x,y\in C^{-},x\wedge y\in I\\ x&\text{if }x\in B^{-},y\in C^{-}\\ y&\text{if }y\in B^{-},x\in C^{-}\end{array}\right.\] \[x\lor y=\left\{\begin{array}{ll}x\lor y&\text{if }x,y\in C,\text{ or }x,y\in B \text{ with }x\lor y\not\in F\\ \bot_{C}&\text{if }x,y\in B^{-},x\lor y\in F\\ y&\text{if }x\in B^{-},y\in C^{-}\\ x&\text{if }y\in B^{-},x\in C^{-}\end{array}\right.\] **Theorem 3.10**.: _If \((\mathbf{B},F,\mathbf{C},I)\) is a compatible quadruple, then \(\mathbf{B}\oplus_{P}\mathbf{C}\) is the gluing of \(\mathbf{B}\) and \(\mathbf{C}\) over \(F\) and \(I\)._ Proof.: We show that \(\mathbf{B}\oplus_{P}\mathbf{C}\) is an IRL. The fact that \(\mathbf{B}\oplus_{P}\mathbf{C}\) has an underlying lattice structure is guaranteed by the order properties of the compatible quadruple. In particular, it follows that \(F\) is strictly above all other elements of \(B\) and \(C\) implies that if \(x,y\in B-F\) (or \(x,y\in C-F\)) and \(x\lor y\in F\), then \(F\) has a bottom element \(\bot_{F}\) and \(x\lor y=\bot_{F}\). Also, if there are \(x,y\in B-F\) with \(x\lor_{B}y\in F\), then \(x\lor y=\bot_{C}\). This is not in conflict with the definition of the operations, since we can show that \(\sigma_{F}(x)=\bot_{F}\cdot x=x\cdot\bot_{F}\) and \(\gamma_{F}(x)=\bot_{F}\backslash x=x/\bot_{F}\) in \(\mathbf{B}\). First, it is easy to see that \(\bot_{F}\cdot x\in[x]_{F}\); indeed, \((\bot_{F}\cdot x)\backslash x=1\) and \(\bot_{F}\leq x\backslash(\bot_{F}\cdot x)\), thus both \((\bot_{F}\cdot x)\backslash x\) and \(x\backslash(\bot_{F}\cdot x)\) are in \(F\). Moreover, \((\bot_{F}\cdot x)\backslash\sigma_{F}(x)=\bot_{F}\backslash(x\backslash\sigma_{F }(x))=1\) since \(x\backslash\sigma_{F}(x)\in F\) and thus \(\bot_{F}\cdot x\leq\sigma_{F}(x)\). Therefore, \(\sigma_{F}(x)=\bot_{F}\cdot x\) and the proof for \(x\cdot\bot_{F}\) is analogous. We now show that \(\gamma_{F}(x)=\bot_{F}\backslash x\); the proof for \(x/\bot_{F}\) is similar. Since \(x\leq\bot_{F}\backslash x\), we get \(x\backslash(\bot_{F}\backslash x)=1\in F\), and we also have \((\bot_{F}\backslash x)\backslash x\geq\bot_{F}\in F\); hence \(\bot_{F}\backslash x\in[x]_{F}\). 
Moreover, \(\gamma_{F}(x)=\max[x]_{F}\leq\bot_{F}\backslash x\), or equivalently, \(\bot_{F}\gamma_{F}(x)\leq x\), since \(\bot_{F}\gamma_{F}(x)=\min[\gamma(x)]_{F}=\min[x]_{F}=\sigma_{F}(x)\leq x\). Similarly, since \(I\) is strictly below all other elements of \(B\) and \(C\), if \(x,y\in B-I\) (or \(x,y\in C-I\)) and \(x\wedge y\in I\), then \(I\) has a top element \(\top_{I}\) and \(x\wedge y=\top_{I}\). Thus given \(x,y\in C-I\) with \(x\wedge_{C}y\in I\), the meet is redefined as \(x\wedge y=\top_{B}\). This is not in conflict with the definition of the operations, due to Lemma 3.5 (2), and Lemma 3.3. Also, it is clear that \(1\) is both the monoidal unit and the top element of the lattice. To prove associativity, we need to show that for every \(x,y,z\in B\oplus_{P}C\), \((xy)z=x(yz)\). We distinguish the following cases. * Let \(x\in F,y\in C^{-},z\in B^{-}\). Then \((xy)z=\sigma_{F}(z)\), since \(xy\in C\) from Lemma 3.5(2). Now, \(x(yz)=x\sigma_{F}(z)=\sigma_{F}(z)\), given Lemma 3.5(1). Similarly we can show the cases where: \(x\in B^{-},y\in C^{-},z\in F\); \(x\in F,y\in B^{-},z\in C^{-}\); \(x\in C^{-},y\in F,z\in B^{-}\), \(x\in C^{-},y\in B^{-},z\in F\); \(x\in B^{-},y\in F,z\in C^{-}\). * Let \(x,y\in C^{-},z\in B^{-}\). We have that \((xy)z=\sigma_{F}(z)\), if either \(xy\in C^{-}\) (by definition) or if \(xy\in I\) (by the compatibility condition 2 for the quadruple). On the other hand, \(x(yz)=x\sigma_{F}(z)=\sigma_{F}(\sigma_{F}(z))=\sigma_{F}(z)\). The proof is analogous for the case: \(x\in B^{-},y,z\in C^{-}\). * If \(x\in C^{-},y,z\in B^{-}\), then \((xy)z=\sigma_{F}(y)z=\sigma_{F}(yz)\), given that \(\sigma_{F}\) is a strong conucleus. Also, \(x(yz)=\sigma_{F}(yz)\), if either \(yz\in B^{-}\) (by definition) or \(yz\in I\) (by Lemma 3.3). We get a similar proof for the cases: \(x,y\in B^{-},z\in C^{-}\); \(x\in B^{-},y\in C^{-},z\in I\); \(x\in I,y\in C^{-},z\in B^{-}\); \(x\in I,y\in B^{-},z\in C^{-}\); \(x\in B^{-},y\in I,z\in C^{-}\); \(x,z\in B^{-},y\in C^{-}\); \(x,z\in C^{-},y\in B^{-}\). * Since both \(\mathbf{B}\) and \(\mathbf{C}\) are subalgebras with respect to multiplication, the remaining cases hold automatically. We now prove that for all \(x,y,z\), \[x\cdot y\leq z\text{ iff }x\leq z/y\text{ iff }y\leq x\backslash z\] We have the following cases: * Let \(x\in F,y\in C^{-},z\in B^{-}\). Then it never happens that \(x\cdot y\leq z\), by Lemma 3.5(2). The other inequalities are also false by definition and order preservation. An analogous case is given by \(x\in C^{-},y\in F,x\in B^{-}\). * Let \(x\in F,y\in B^{-},z\in C^{-}\). Then all three inequalities are always true, given the definition of the operations and order preservation. Similar cases are given by: \(x\in C^{-},y\in B^{-},z\in F\); \(x,z\in C^{-},y\in B^{-}\); \(x\in B^{-},y\in F,z\in C^{-}\); \(x\in B^{-},y\in C^{-},z\in F\); \(x\in B^{-},y,z\in C^{-}\); \(x,y\in B^{-},z\in C^{-}\); \(x\in B^{-},z\in C^{-}\); \(x\in B^{-},y\in I,z\in C^{-}\); \(x\in I,y\in C^{-},z\in B^{-}\); \(x\in I,y\in B^{-},z\in C^{-}\); \(x\in C^{-},y\in I,z\in B^{-}\). * Let \(x,y\in C^{-},z\in B^{-}\). We distinguish two cases, based on whether \(xy\in I\) or not. If \(xy=i\) for some \(i\in I\), then all inequalities hold. Indeed \(xy=i\leq z\); moreover \(xy=i\) implies \(y\leq x\backslash i\leq\ell_{I}(x)=x\backslash z\) and similarly \(x\leq i/y\leq r_{I}(y)=z/y\). If \(xy\in C^{-}\), none of the inequalities hold. Indeed, \(xy\not\leq z\) by definition of the order. 
Moreover, if \(y\leq x\backslash z\) the only possibility by definition is that \(x\backslash z=\ell_{I}(x)\), but \(x\ell_{I}(x)\in I\) thus we would have \(xy\in I\), a contradiction. Similarly it cannot be that \(x\leq z/y\). * Let \(x\in C^{-},y,z\in B^{-}\). To show that \(xy=\sigma_{F}(y)\leq z\) iff \(x\leq z/y\) it suffices to note that, equivalently, \(\sigma_{F}(y)\leq z\) iff \(z/y\in F\). Indeed since \(\sigma_{F}(y)/y\in F\), we have that \(\sigma_{F}(y)\leq z\) implies \(z/y\in F\). Conversely, if \(z/y\in F\) then there is \(f\in F\) such that \(f\leq z/y\), thus \(fy\leq z\), and so \(\sigma_{F}(y)=\sigma_{F}(fy)\leq fy\leq z\) (where in the first equality we used Lemma 3.5(1)). We now show that \(x\cdot y=\sigma_{F}(y)\leq z\) iff \(y\leq x\backslash z\). If \(\sigma_{F}(y)\leq z\), then since \(\sigma_{F}\) and \(\gamma_{F}\) form a Galois connection we get \(y\leq\gamma_{F}(z)\leq x\backslash z\), where the second inequality holds because \(\gamma_{F}(z)=x\backslash z\) or \(x\backslash z\in C^{-}\). Conversely, assume \(y\leq x\backslash z\). If \(x\backslash z=\ell_{I}(x)\), then by definition \(x\) is a left \(I\)-divisor thus there is a \(c\in C^{-}\) such that \(xc\in I\), thus by the compatibility condition (2) for the quadruple \(\sigma_{F}(y)\in I\) thus \(\sigma_{F}(y)\leq z\). Otherwise, if \(x\backslash z=\gamma_{F}(z)\) then \(y\leq\gamma_{F}(z)\), so by the Galois connection we have that \(\sigma_{F}(y)\leq z\). An analogous case is given by \(x,z\in B^{-},y\in C^{-}\). * Let \(x\in C^{-},y\in B^{-},z\in I\). The fact that \(xy\leq z\) iff \(x\leq z/y\) can be shown, as in the previous case, using the fact that \(\sigma_{F}(y)\leq z\) if and only if \(z/y\in F\). We now show that \(xy=\sigma_{F}(y)\leq z\) iff \(y\leq x\backslash z\). Since \(\mathbf{C}\) is a subalgebra, either \(x\backslash z\in C^{-}\) or \(x\backslash z\in I\). If \(x\backslash z\in C^{-}\) then clearly \(y\leq x\backslash z\) and by the compatibility condition 2 for the quadruple, we get \(\sigma_{F}(y)=xy\leq x(x\backslash z)\leq z\). Otherwise, we have \(x\backslash z\in I\), so \(y\not\leq x\backslash z\), since \(y\in B^{-}\). Moreover, \(x\backslash z=\gamma_{F}(z)\) from Lemma 3.3. Thus since \(y\not\leq\gamma_{F}(z)\), we get that \(\sigma_{F}(y)\not\leq z\), given that the two operators are a residuated pair. Similarly we prove residuation for the case: \(x\in B^{-},y\in C^{-},z\in I\). * Since both \(\mathbf{B}\) and \(\mathbf{C}\) are subalgebras for the divisions the other cases do not need to be checked. Thus, \(\mathbf{B}\oplus_{P}\mathbf{C}\) is an integral residuated lattice. Notice that in the case where \(\mathbf{I}\) is empty, we get the proof of Theorem 2.8. Indeed, if \(I\) is empty, no elements of \(C-F\) are \(I\)-divisors and thus we obtain exactly the hypothesis of Theorem 2.8. ### A gluing of partial algebras In this section, we obtain a different construction that glues together two structures that intersect at the top \(1\) and keeps the same order relation, but where some of the divisions are redefined. The underlying idea is to forget the filter \(F\) from the previous construction. See Figure 3 for a pictorial intuition. We start from a lower-compatible triple \((\mathbf{K},\sigma,\gamma)\) and an upper-compatible triple \((\mathbf{L},\ell,r)\). Recall that in upper-compatible triples already some divisions are not defined. Here we will allow other divisions not to be defined. 
Precisely, we shall say that \(x\backslash y\) is _strongly undefined_ (in order to distinguish this case in the definition of the operations) if all elements \(z\) in the interval \([y,1)=\{z\in L:y\leq z<1\}\) are such that \(xz\leq y\) and there is no coatom. Similarly, \(y/x\) is strongly undefined if all elements \(z\) the interval \([y,1)\) are such that \(zx\leq y\) and \(\mathbf{L}\) has no coatom. We assume: 1. \(\mathbf{K}\) has an ideal \(I\subseteq K\) with an idempotent top element \(\top_{I}\) such that \(\sigma(\top_{I})=\top_{I}\). 2. If there are undefined products in \(L\), then \(\sigma(x)=\top_{I}\) for all \(x\in K-I\) and \(\sigma(\top_{I}y)=\sigma(y\top_{I})=\sigma(y)\) if \(y\in I\). 3. If there exists \(x,y\in K-\{1\}\) such that \(x\lor y=1\), then \(L\) has a bottom element \(\bot_{L}\). 4. If \(\mathbf{L}\) has undefined meets, \(\mathbf{K}\) has a splitting coatom \(c_{K}\). Moreover, we assume that \(K\cap L=\{1\}\), we set \(\tau:=(\sigma,\gamma,\ell,r)\) and we define the _partial gluing_\(\mathbf{K}\oplus_{\tau}\mathbf{L}\) to be the structure where the operations extend the ones of \(\mathbf{K}\) and \(\mathbf{L}\) in the following way, here \(D_{\ell}\) and \(D_{r}\) denote the domains of \(\ell\) and \(r\), respectively: \[xy=\left\{\begin{array}{ll}xy&\mbox{if $x,y\in K$ or $x,y\in L$ and $xy$ is defined}\\ \sigma(x)&\mbox{if $y\in L-\{1\},x\in K-\{1\}$}\\ \sigma(y)&\mbox{if $x\in L-\{1\},y\in K-\{1\}$}\\ \top_{I}&\mbox{if $x,y\in L$ and $xy$ is undefined}\end{array}\right.\] \[x\backslash y=\left\{\begin{array}{ll}x\backslash y&\mbox{if $x,y\in K$ or $x,y\in L $, and $x\backslash y$ is defined}\\ c_{L}&\mbox{if $x,y\in K,L$ has a coatom $c_{L}$ and $x\backslash y$ is undefined}\\ \ell(x)&\mbox{if $x\in L-\{1\},y\in K-\{I\cup 1\}$ and $x\in D_{\ell}$, or $x,y\in L$ and $x\backslash y$ undefined}\\ \gamma(y)&\mbox{if $x\in L-\{1\},y\in I$ or $(y\in K-\{1\}$ and $x\not\in D_{\ell}$)}\\ 1&\mbox{if $x\in K-\{1\},y\in L$}\\ \mbox{undefined}&\mbox{if $x,y\in L$, and $x\backslash y$ strongly undefined}\end{array}\right.\] \[y/x=\left\{\begin{array}{ll}y/x&\mbox{if $x,y\in K$ or $x,y\in L$, and $y/x$ is defined}\\ c_{L}&\mbox{if $x,y\in K,L$ has a coatom $c_{L}$ and $y/x$ is undefined}\\ r(x)&\mbox{if $x\in L-\{1\},y\in K-\{I\cup 1\}$ and $x\in D_{r}$ or $x,y\in L$ and $y/x$ undefined}\\ \gamma(y)&\mbox{if $x\in L-\{1\},y\in I$ or $(\in K-\{1\}$ and $x\not\in D_{r}$)}\\ 1&\mbox{if $x\in K-\{1\},y\in L$}\\ \mbox{undefined}&\mbox{if $x,y\in L$, and $y/x$ strongly undefined}\\ \end{array}\right.\] \[x\wedge y=\left\{\begin{array}{ll}x\wedge y&\mbox{if $x,y\in K,$ or $x,y\in L$}\\ c_{K}&\mbox{if $x,y\in L$ and $x\wedge y$ undefined}\\ y&\mbox{if $y\in K-\{1\},x\in L$}\\ x&\mbox{if $x\in K-\{1\},y\in L$}\\ \end{array}\right.\] \[x\lor y=\left\{\begin{array}{ll}x\lor y&\mbox{if $x,y\in K,$ or $x,y\in L$}\\ x&\mbox{if $y\in K-\{1\},x\in L$}\\ y&\mbox{if $x\in K-\{1\},y\in L$}\\ \bot_{L}&\mbox{if $x\lor y=1$ in $K$}.\end{array}\right.\] **Theorem 3.11**.: \(\mathbf{K\oplus_{\tau}L}\) _is a partial IRL, that is total if \(L\) has a coatom._ Proof.: Notice first that all operations are defined except possibly the divisions. It is easy to see that the operations \(\wedge,\vee\) define a lattice order, with \(1\) being the top. Let us now prove associativity of multiplication. * Suppose \(x,y,z\in L\). 
Then by definition: \[(xy)z=\left\{\begin{array}{ll}(xy)z&\mbox{if $xy$ and $(xy)z$ are defined in $L$}\\ \sigma(\top_{I})=\top_{I}&\mbox{otherwise}\end{array}\right.\] \[x(yz)=\left\{\begin{array}{ll}x(yz)&\mbox{if $yz$ and $x(yz)$ are defined in $L$}\\ \sigma(\top_{I})=\top_{I}&\mbox{otherwise}\end{array}\right.\] We will show that \(xy\) and \((xy)z\) are defined if and only if \(yz\) and \(x(yz)\) are defined, or equivalently, \(yz\) or \(x(yz)\) is undefined iff \(xy\) or \((xy)z\) is undefined. Notice that when \((xy)z\) and \(x(yz)\) are defined they coincide, since in upper compatible triples the IRL axioms hold whenever the operations involved are defined. We show the left-to-right direction first: we assume that \(yz\) is undefined or \(x(yz)\) is undefined; we also assume that \(xy\) is defined. If \(yz\) is undefined, then \(z\in D_{r}\) and \(y\leq r(z)\). Since \(xy\leq y\), we get \(xy\leq r(z)\), so \((xy)z\) is undefined. If instead \(yz\) is defined and \(x(yz)\) is undefined, we have that \(yz\in D_{r}\) and \(x\leq r(yz)\). Then by Property 1 of an upper compatible triple, \(\ell(x)\) is defined and \(yz\leq\ell(x)\). Suppose \(r(z)\) is defined, thus using Lemma 3.6 and Property 5, \(y\leq\ell(x)/z=x\backslash r(z)\) and then \(xy\leq r(z)\); thus \((xy)z\) is undefined. We also show that \(r(z)\) is necessarily defined, indeed if \(r(z)\) is undefined, \(\ell(x)/z=\ell(x)\), and \(y\leq\ell(x)\) implies \(x\leq r(y)\), thus \(xy\) undefined. The other direction is proved in a similar way. * For \(x,y\in L\) and \(z\in K-I\), we show that \(x(yz)=\sigma(z)\). If \(x\not\leq r(y)\), then \((xy)z=\sigma(z)\) holds by definition. Otherwise we get \(\top_{I}z=\top_{I}\), since \(\top_{I}\) is idempotent; so there is at least an undefined product in \(L\), thus by (A2) we get \(\sigma(z)=\top_{I}\). Similarly we can show the case: \(y,z\in L\), \(x\in K-I\). * For \(x,y\in L\) and \(z\in I\), we show that \(x(yz)=\sigma(z)\). If \(x\not\leq r(y)\), then \((xy)z=\sigma(z)\) by definition. Otherwise we get \(\top_{i}z=\sigma(z)\) since \(\sigma(z)=\sigma(\top_{I}z)\) implies \(\sigma(z)\leq\top_{I}z\) and \(\top_{I}z\leq\sigma(z)\). Similarly we prove the case: \(y,z\in L\) and \(x\in I\). * For \(x,z\in L,y\in K\), associativity follows directly from the definition of multiplication and the idempotency of \(\sigma\). * For \(x\in L,y,z\in K\), associativity follows from the definition of multiplication and the strong conuclear property. Similar cases are: \(x,z\in K,y\in L\); \(x,y\in K,z\in L\). * In the remaining cases all elements belong to \(K\). It easily follows that the product is a monoidal operation with unit \(1\). We now show that residuation holds (when the divisions are defined): \[xy\leq z\text{ iff }y\leq x\backslash z\text{ iff }x\leq z/y\] * For \(x,y,z\in L\), we distinguish two cases, depending whether \(xy\) is defined in \(L\) or not. If \(xy\) is not defined in \(L\), then we get that all three inequalities hold since they respectively become: \(\top_{I}\leq z,y\leq\ell(x),x\leq r(y)\); here we used Property 3 of upper-compatible triples. If \(xy\) is defined in \(L\), then we get \(xy\leq z\) if and only if \(x\backslash z\) is defined and \(y\leq x\backslash z\), by Lemma 3.6. Similarly, this is equivalent to \(z/y\) being defined and \(x\leq z/y\). * For \(x,y\in L,z\in K-\{I\cup 1\}\), we distinguish two cases, based on whether \(xy\) is defined in \(L\) or not. 
If \(xy\) is defined in \(L\), we have \(xy\not\leq z\) by the definition of the order; also \(x\not\leq r(y)\) and thus \(y\not\leq\ell(x)\), by Property 1 (Galois connection). Moreover, \(x\backslash z\) is either \(\ell(x)\) or \(\gamma(z)\), and in either case \(y\not\leq x\backslash z\), since \(\gamma(z)\in K-1\) and \(y\not\leq\ell(x)\). If \(xy\) undefined in \(L\), then \(xy\) is equal to the top element of \(I\), and \(\top_{I}\leq z\). Moreover, \(y\leq\ell(x)=x\backslash z\), and \(x\leq r(y)=z/y\), using again Property 1,3. * For \(x,y\in L,z\in I\), we distinguish two cases, whether \(x\leq r(y)\) or not. If \(x\not\leq r(y)\), the proof is the same as in the previous case. If \(x\leq r(y)\) we distinguish whether \(z=\top_{I}\) or not. If \(z<\top_{I}\), then \(xy=\top_{I}\not\leq z\), \(y\not\leq\gamma(z)=x\backslash z\) and \(x\not\leq\gamma(z)=z/y\). If \(z=\top_{I}\), then \(xy=\top_{I}\) implies \(x\leq\top_{I}/y=r(y)\) iff \(y\leq\ell(x)=x\backslash\top_{I}\). * If \(x,z\in L,y\in K-\{1\}\), it follows directly from the definition of the operations that all inequalities hold, whenever the divisions are defined. Similarly for the cases: \(x\in L,y\in I,z\in K-\{I\}\); \(x\in K,y,z\in L\); \(x,y\in K,z\in I\). * For \(x\in L,y,z,\in K-\{I\}\), it follows directly from the definition of the operations that \(xy=\sigma(y)\leq z\) iff \(x\leq z/y\) whenever the division is defined. Indeed \(z/y\) is either \(z/_{K}y\) iff \(\sigma(y)\not\leq z\) and it is either undefined or \(c_{L}\) if \(\sigma(y)\leq z\). Now we show that \(xy=\sigma(y)\leq z\) iff \(y\leq x\backslash z\). If \(x\in D_{\ell}\) then there are undefined products in \(L\), thus \(\sigma(y)=\top_{I}\leq z\) and \(x\backslash z=\ell(x)\) thus \(y\leq x\backslash z\). Otherwise, if \(x\not\in D_{\ell}\), \(\sigma(y)\leq z\) iff \(y\leq\gamma(z)=x\backslash z\) since \(\sigma,\gamma\) form a residuated pair. The following cases are proven similarly: \(x\in L,y\in K,z\in I\); \(x,z\in K-\{I\},y\in L\); \(x\in K-\{I\},y\in L,z\in I\). * For \(x,y,z\in K\), we show that \(xy\leq z\) iff \(y\leq x\backslash z\), as the proof the equivalence \(xy\leq z\Leftrightarrow x\leq z/y\) is analogous. If \(x\backslash_{K}z\) is defined then residuation holds since \(K\) is a partial IRL. If \(x\backslash_{K}z\) is undefined, then \(\sigma(x)\leq z\), and \(xy\leq\sigma(x)\leq z\) holds. Moreover \(x\backslash z\) is either still undefined or if there is a coatom \(x\backslash z=c_{L}\), thus \(y\leq c_{L}\). We now show that multiplication is order preserving: if \(x\leq y\), then \(xz\leq yz\) and \(zx\leq zy\) (notice again that all products are defined in the gluing). * The cases where \(x,y\in L\) and \(z\in K-\{1\}\) follow directly from the definition of the operations. * The cases where \(x,y\in K-\{1\}\) follow from the order preservation of \(\sigma\). * If \(x,z\in K-\{1\},y\in L\), then we get \(xz,zx\leq\sigma(z)\), which is a property of \(\sigma\) in a lower compatible triple. * Let now \(x\in K-\{1\}\), \(y,z\in L\). If \(yz\) (equiv. \(zy\)) is defined in \(L\), clearly \(xz\leq yz\) (equivv. \(zx\leq zy\)). If \(yz\) (or \(zy\)) is undefined in \(L\), the inequalities becomes \(\sigma(x)\leq\top_{I}\), which holds by condition (A2). * Finally, let \(x,y,z\in L\). We show order preservation of right multiplication, the proof for left multiplication being analogous. If \(xz,yz\) are defined in \(L\), then order preservation holds since \(\mathbf{L}\) is a partial IRL. 
If \(yz\) is undefined, by the definition of an upper compatible triple, \(z\leq\ell(y)\), and since \(x\leq y\), and the domain of \(\ell\) is closed downwards, by property (2) of the definition we also get that \(z\leq\ell(x)\) and thus \(xz\) is undefined in \(L\). Therefore, \(xz=\top_{I}=yz\). Suppose now that \(yz\) is defined and \(xz\) is undefined, then the inequality becomes \(\top_{I}\leq yz\in L\), which holds by definition of the order in the gluing. The fact that (when defined) divisions are order preserving in the numerator and order reversing in the denominator follows from residuation and the order preservation of multiplication (which is always defined in \(\mathbf{K}\oplus_{\tau}\mathbf{L}\)). Thus, \(\mathbf{K}\oplus_{\tau}\mathbf{L}\) is a partial IRL. We note that in the case where \(L\) has a coatom no operation is undefined. If we take a compatible quadruple \((\mathbf{B},F,\mathbf{C},I)\) where \(I\) has a top element satisfying assumptions \((A1),(A2)\), we can then consider the lower compatible triple \((\mathbf{B}^{\prime},\sigma_{F},\gamma_{F})\) (where \(B^{\prime}=B-F\)) and the upper compatible triple \((\mathbf{C}^{\prime},\ell_{I},r_{I})\) (where \(C^{\prime}=C-(F\cup I)\)) and construct the partial gluing \(\mathbf{B}^{\prime}\oplus_{\tau}\mathbf{C}^{\prime}\), where \(\tau=(\sigma_{F},\gamma_{F},\ell_{I},r_{I})\). More generally, starting from any compatible quadruple \((\mathbf{B},F,\mathbf{C},I)\), we can always construct a partial gluing. Indeed either there are no \(I\)-divisors in \(\mathbf{C}\), in which case the assumptions \((A1),(A2)\) are vacuously true, or otherwise we replace \(I\) with \(I^{\prime}:=\{i\in I:i\neq cd,\text{ for }c,d\in C-I\}\cup\top_{I}\), where \(\top_{I}\) a new element satisfying \((A1),(A2)\). Conditions \((A3),(A4)\) are implied by the last two compatibility conditions in the definition of a compatible quadruple. We can then construct the partial gluing \(\mathbf{B}^{\prime}\oplus_{\tau}\mathbf{C}^{\prime}\), where \(B^{\prime}=B-F\), \(C^{\prime}=C-(F\cup I^{\prime})\), \(\tau=(\sigma_{F},\gamma_{F},\ell_{I},r_{I})\). ## 4. Variations of the constructions First, notice that the gluing constructions presented that involve a non-empty ideal \(I\) work for both bounded and unbounded integral residuated lattices. In the case where the ideal is empty (and thus the construction is gluing over a congruence filter) one still obtains a new structure starting from \(\mathsf{FL}_{w}\)-algebras, but one of the two algebras is not a subalgebra with respect to \(0\) anymore. Importantly, note again that as a special case of the gluing construction, where the filter is trivially the top element \(\{1\}\) and the ideal is empty, we get the \(1\)-sum construction. This also means that given any pair of integral residuated lattices \(\mathbf{B}\) and \(\mathbf{C}\) (with \(\mathbf{C}\) having a lower bound or \(1\) being join irreducible in \(\mathbf{B}\)) we can always glue them. We will call a gluing _trivial_ if it is a \(1\)-sum, and _non-trivial_ otherwise. ### The congruence filter has a bottom element We first observe that if \(F\) has a bottom element, then \(\sigma_{F}\) and \(\gamma_{F}\) have a very transparent definition: also the bottom element of \(F\) multiplies and divides as the elements in \(C-F\) in the gluing. **Lemma 4.1**.: _Let \((\mathbf{B},F)\) be a lower-compatible triple where \(F\) has a bottom element \(\bot_{F}\). 
Then given any \(x\in B-F\), \(\sigma_{F}(x)=\bot_{F}\cdot x=x\cdot\bot_{F}\) and \(\gamma_{F}(x)=\bot_{F}\backslash x=x/\bot_{F}\)._ Proof.: The proof can be directly extracted from the first paragraphs of the proof of Theorem 3.10. It turns out that assuming that \(F\) has a bottom element is not a substantial restriction. Given a lower compatible pair \((\mathbf{B},F)\), where \(F\) may or may not have a bottom element, and a new element \(\bot_{F}\), we define the residuated lattice \(\mathbf{B}_{\bot}\), where for all \(b\in B-F,f\in F\) * \(b<\bot_{F}<f\), * \(\bot_{F}\cdot\bot_{F}=\bot_{F}\cdot f=f\cdot\bot_{F}=\bot_{F}\) and \(\bot_{F}\cdot b=b\cdot\bot_{F}=\sigma_{F}(b)\), * \(\bot_{F}\backslash b=b/\bot_{F}=\gamma_{F}(b)\), \(\bot_{F}\backslash f=f/\bot_{F}=1\), * \(b\backslash\bot_{F}=\bot_{F}/b=1\), and \(f\backslash\bot_{F}=\bot_{F}/f=\bot_{F}\). The following result is easy to prove and shows that every lower compatible pair can be embedded into one where the congruence filter has a bottom element. **Proposition 4.2**.: _If \((\mathbf{B},F)\) is a lower compatible pair, then \((\mathbf{B}_{\bot},F\cup\{\bot_{F}\})\) is also a lower compatible pair and \(\mathbf{B}\) is a subalgebra of \(\mathbf{B}_{\bot}\), except possibly for join if \(F\) has a bottom element._ ### Non-linear order We are now going to study whether the order conditions for the congruence filter \(F\) and the ideal \(I\) can be weakened. We previously required \(F\) to be strictly above all elements of \(B-F\) and \(I\) to be strictly below all elements of \(C-I\). This ensures that the product is well-defined and respects the join operation. Notice that, in particular, asking \(F\) to be strictly above all other elements implies that if there are elements \(x,y\in B-F\) such that \(x\lor y\in F\), then \(F\) has a bottom element \(\bot_{F}\) and \(x\lor y=\bot_{F}\). Lemma 4.1 shows that the product defined in Theorem 3.10 preserves this particular kind of join. We define a _non-strict_ lower-compatible pair to be a pair \((\mathbf{B},F)\) that is lower-compatible except that \(F\) need not be strictly above all elements of \(B-F\), but whenever \(x,y\in B-F\) are such that \(x\lor y=z\in F\), the element \(z\) is such that \(zb=bz=\sigma_{F}(b)\) and \(z\backslash b=b/z=\gamma_{F}(b)\) for all \(b\in B-F\). Similarly, if \(z,w\in C-I\) are such that \(z\wedge w\in I\), then we required \(I\) to have a top element \(\top_{I}\) and \(z\wedge w=\top_{I}\); this does not create issues with respect to the operations, given Lemma 3.5. We can then extend the construction to also include lattice ideals that are not strictly below all other elements. In particular, a _non-strict_ upper-compatible pair is a pair \((\mathbf{C},I)\) that is upper-compatible except that \(I\) need not be strictly below all other elements of \(C\). Let us call a _non-strict compatible quadruple_ a compatible quadruple \((\mathbf{B},F,\mathbf{C},I)\) where the upper- and lower-compatible pairs may be non-strict, and where we redefine all joins of elements \(x,y\in B-F\) such that \(x\lor_{B}y\in F\) to be the bottom element of \(\mathbf{C}\), \(x\lor y=\bot_{C}\), and all meets of elements \(z,w\in C-I\) such that \(z\wedge_{C}w\in I\) to be the top element of \(B-F\), \(z\wedge w=\top_{B}\).
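The extension \(\mathbf{B}_{\bot}\) defined in the bullet points above is straightforward to compute for finite algebras. The following Python sketch is again purely illustrative (not from the paper; the function and argument names are hypothetical): it adds the new element \(\bot_{F}\) to given finite tables for \(\mathbf{B}\), using \(\sigma_{F}\) and \(\gamma_{F}\), exactly as prescribed by those bullet points; the extended order \(b<\bot_{F}<f\) is left implicit to keep the sketch short.

```python
def add_bottom_of_filter(elems, F, mult, ldiv, rdiv, sigma, gamma, unit, bot="bot_F"):
    r"""Extend a lower compatible pair (B, F) by a new bottom element bot_F of F.

    elems and F are sets (the domain of B and of the filter F); the tables use
    the conventions ldiv[(x, y)] = x \ y and rdiv[(x, y)] = y / x; sigma and
    gamma are dicts on B - F.  Returns the extended domain and operation tables.
    """
    mult, ldiv, rdiv = dict(mult), dict(ldiv), dict(rdiv)
    for f in F:
        mult[(bot, f)] = mult[(f, bot)] = bot        # bot_F * f = f * bot_F = bot_F
        ldiv[(bot, f)] = rdiv[(bot, f)] = unit       # bot_F \ f = f / bot_F = 1
        ldiv[(f, bot)] = rdiv[(f, bot)] = bot        # f \ bot_F = bot_F / f = bot_F
    for b in elems - F:
        mult[(bot, b)] = mult[(b, bot)] = sigma[b]   # bot_F * b = b * bot_F = sigma_F(b)
        ldiv[(bot, b)] = rdiv[(bot, b)] = gamma[b]   # bot_F \ b = b / bot_F = gamma_F(b)
        ldiv[(b, bot)] = rdiv[(b, bot)] = unit       # b \ bot_F = bot_F / b = 1
    mult[(bot, bot)] = bot                           # bot_F is idempotent
    ldiv[(bot, bot)] = rdiv[(bot, bot)] = unit       # bot_F \ bot_F = bot_F / bot_F = 1
    return elems | {bot}, mult, ldiv, rdiv
```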
**Proposition 4.3**.: _If \((\mathbf{B},F,\mathbf{C},I)\) is a non-strict compatible quadruple, then \(\mathbf{B}\oplus_{\mathbf{P}}C\) is the gluing of \(\mathbf{B}\) and \(\mathbf{C}\) over \(F\) and \(I\)._ Proof.: The proof of Theorem 3.10 can be adapted to this case. ## 5. Preservation In this section we will investigate the interaction of the \((F-I)\)-gluing construction with class operators and equations that are preserved. ### Preservation of identities We identify equations that are preserved by the \((F-I)\)-gluing. It is worth noticing that the gluing construction preserves commutativity. Moreover, we identify the cases when divisibility and semilinearity are preserved; linearity is obviously always preserved. _Semilinear_ integral residuated lattices (i.e., subdirect products of totally ordered integral residuated lattices) constitute a variety, axiomatized by the equation: \[[u\backslash(y\backslash x)u]\vee[v(x\backslash y)/v]=1.\qquad\text{(sl)}\] This equation characterizes semilinearity also in \(\mathsf{FL_{w}}\)-algebras. In commutative subvarieties of \(\mathsf{IRL}\) and \(\mathsf{FL_{w}}\) semilinearity is characterized by the simpler prelinearity identity, obtained from (sl) by taking \(u=v=1\): \[(y\backslash x)\vee(x\backslash y)=1.\qquad\text{(prel)}\] Commutative prelinear \(\mathsf{FL_{w}}\)-algebras are called MTL-algebras since are the equivalent algebraic semantics of Esteva and Godo's _Monoidal t-norm based logic_, the logic of left-continuous t-norms [10]. A residuated lattice \(\mathbf{A}\) is called _divisible_ if the lattice order coincides with the inverse divisibility order: \[a\leq b\qquad\text{if and only if}\qquad\text{there are $c,d\in A$ with $a=bc$ and $a=db$.}\] Divisibility is characterized equationally by: \[x\wedge y=x(x\backslash(x\wedge y))=((x\wedge y)/x)x\qquad(\text{div})\] The latter in integral structures reduces to: \(x\wedge y=x(x\backslash y)=(y/x)x\). Semilinear, commutative and divisible \(\mathsf{FL}_{\mathsf{ew}}\)-algebras are called BL-algebras and we denote their variety by \(\mathsf{BL}\); semilinear and divisible CIRLs are called basic hoops, and we refer to their variety by \(\mathsf{BH}\). BL-algebras are the equivalent algebraic semantics of _Hajek's Basic Logic_[18]. **Proposition 5.1**.: _If \(\mathbf{B}\oplus_{P}\mathbf{C}\) is the \(P\)-gluing of the IRLs \(\mathbf{B}\) and \(\mathbf{C}\), where \(P=(F,I)\), then:_ 1. \(\mathbf{B}\oplus_{P}\mathbf{C}\) _is commutative iff both_ \(\mathbf{B}\) _and_ \(\mathbf{C}\) _are commutative._ 2. \(\mathbf{B}\oplus_{P}\mathbf{C}\) _is divisible iff both_ \(\mathbf{B}\) _and_ \(\mathbf{C}\) _are divisible,_ \(\mathbf{C}\) _has no_ \(I\)_-divisors, and_ \(B=((B-F)\cup\{1\})\oplus_{1}F\)_._ 3. _If_ \(F\neq\{1\}\)_, then_ \(\mathbf{B}\oplus_{P}\mathbf{C}\) _is semilinear iff both_ \(\mathbf{B}\) _and_ \(\mathbf{C}\) _are semilinear. If_ \(F=\{1\}\) _and_ \(C^{-}\neq\emptyset\)_, then_ \(\mathbf{B}\oplus_{P}\mathbf{C}\) _is semilinear iff_ \(\mathbf{B}\) _is linear and_ \(\mathbf{C}\) _is semilinear. (If_ \(F=\{1\}\) _and_ \(C^{-}=\emptyset\)_, then_ \(\mathbf{B}\oplus_{P}\mathbf{C}=\mathbf{B}\)_.)_ Proof.: Recall that \(B^{-}=B-(F\cup I)\) and \(C^{-}=C-(F\cup I)\). For readability we write \(\sigma\) for \(\sigma_{F}\), \(\gamma\) for \(\gamma_{F}\), \(\ell\) for \(\ell_{I}\) and \(r\) for \(r_{I}\). 1. \(B\) and \(C\) are closed under multiplication. Also, \(bc=\sigma(b)=cb\) for \(b\in B^{-},c\in C^{-}\); thus the construction preserves commutativity. 2. 
Recall that in integral structures divisibility states that for all \(x,y\), \[x\wedge y=x(x\backslash y)=(y/x)x.\] Since \(B\) and \(C\) are closed under \(\wedge,\cdot,\backslash,/\), the divisibility of \(\mathbf{B}\) and \(\mathbf{C}\) is a necessary condition for the gluing to be divisible. Note that if \(x\in B^{-}\), and \(y\in C^{-}\), then \(x(x\backslash y)=x1=x=x\wedge y\), and similarly \((y/x)x=x=x\wedge y\), so divisibility holds in this case. If \(x\in C^{-}\) and \(y\in B^{-}\), then \(x\wedge y=y\), while \(x(x\backslash y)\) and \((y/x)x\) depend on whether \(x\) is an \(I\)-divisor or not. Notice that if \(x\) is a left \(I\)-divisor, then \(x\ell(x)\neq y\) since \(\mathbf{C}\) is a subalgebra, therefore \[x(x\backslash y)=x\ell(x)\neq y=x\wedge y.\] Similarly, if \(x\) is a right \(I\)-divisor \[(y/x)x=r(x)x\neq y=x\wedge y\] Thus for the gluing to be divisible, \(\mathbf{C}\) must have no \(I\)-divisors. Now, if \(x\) is not an \(I\)-divisor, we get: \[x(x\backslash y)=x\gamma(y)=\sigma(\gamma(y))=\sigma(y)\] and similarly, \[(y/x)x=\gamma(y)x=\sigma(\gamma(y))=\sigma(y).\] Thus for the gluing to be divisible, we need that for all \(y\in B^{-}\), \(\sigma(y)=y\). Notice that the same holds for \(x\in C^{-},y\in I\), by Lemma 3.3. Consequently, for all \(f\in F\), \(y=\sigma(y)\leq fy\leq y\) and also \(y=\sigma(y)\leq yf\leq y\), thus have that \(fy=yf=y\). This implies that \(\mathbf{B}\) is the \(1\)-sum \(B=(B-F)\oplus_{1}F\) (including the trivial case where \(F=\{1\}\)). Now we show that if \(\mathbf{B}\) and \(\mathbf{C}\) are divisible, \(\mathbf{C}\) has no \(I\)-divisors and \(B=(B-F)\oplus_{1}F\), then divisibility holds in the gluing. We only need to check the case where \(x\in C^{-},y\in B^{-}\), where we get \(x\wedge y=y\)and \[x(x\backslash y)=x\gamma(y)=\sigma(y)=(y/x)x\] since \(x\) is not an \(I\)-divisor. Now, if \(B=(B-F)\oplus_{1}F\), all products between elements \(f\in F\) and \(x\in B-F\) are such that \(fx=xf=x\). Thus, for \(x\in B^{-}\) with \(x\)\(\theta_{F}\)\(y\), we have \(x\backslash y,y\backslash x\in F\). So \(x=x(x\backslash y)\leq y\) and \(y=y(y\backslash x)\leq x\), hence \(x=y\). Therefore, \(\sigma(y)=\min[y]_{F}=y\) and divisibility holds in the gluing. 3. If \(\mathbf{B}\oplus_{P}\mathbf{C}\) is semilinear, then \(\mathbf{C}\) is semilinear, since it is a subalgebra except possibly for the meet. Also, in verifying semilinearity in \(\mathbf{B}\), if \(x,y,u,v\in B\), then \([u\backslash(y\backslash x)u]\), \([v(x\backslash y)/v]\in B\) and \([u\backslash(y\backslash x)u]\vee[v(x\backslash y)/v]=1\) in \(\mathbf{B}\oplus_{P}\mathbf{C}\). Since \([u\backslash(y\backslash x)u]\vee[v(x\backslash y)/v]\in B\), it follows that \([u\backslash(y\backslash x)u]\vee[v(x\backslash y)/v]=1\) in \(\mathbf{B}\); hence semilinearity holds in \(\mathbf{B}\). Moreover, in the particular case where \(\mathbf{F}=\{1\}\), let \(a,b\in B\) be incomparable. Therefore, \(a\not\leq b\) and \(b\not\leq a\), so \(1\neq a\backslash b\) and \(1\neq b\backslash a\); hence \(a\backslash b,b\backslash a\in B-F\), so \((a\backslash b)\vee(b\backslash a)\leq c\) for every \(c\in C^{-}\). Since \(C^{-}\neq\emptyset\) we have \(c<1\) for some \(c\in C^{-}\), so \((a\backslash b)\vee(b\backslash a)\leq c<1\). By the semilinearity of \(\mathbf{B}\oplus_{P}\mathbf{C}\), we get \((a\backslash b)\vee(b\backslash a)=1\), a contradiction. For the converse direction, suppose both \(\mathbf{B}\) and \(\mathbf{C}\) are semilinear. 
We check whether in \(\mathbf{B}\oplus_{P}\mathbf{C}\) we have (sl): \[[u\backslash(y\backslash x)u]\vee[v(x\backslash y)/v]=1.\] If all elements belong to \(\mathbf{C}\), the equation follows from the semilinearity of \(\mathbf{C}\). If \(x\) and \(y\) are comparable, then \(y\leq x\) or \(x\leq y\), so \(y\backslash x=1\) or \(x\backslash y=1\); hence for all \(u,v\), \(u\backslash(y\backslash x)u=1\) or \(v(x\backslash y)/v=1\) and so (sl) holds. If \(x\in B^{-},y\in C^{-}\) or vice versa, then \(x,y\) are comparable, so (sl) holds. It remains to verify (sl) for \(x,y\in B^{-}\). If \(F=\{1\}\) and \(\mathbf{B}\) is linear, then \(x,y\) are comparable, so (sl) holds. We now assume that \(F\neq\{1\}\). For \(x,y\in B^{-}\), if they are comparable, (sl) holds. If they are incomparable, then \(y\backslash x\neq 1\) and \(x\backslash y\neq 1\) and \(y\backslash x,x\backslash y\in B\). The semilinearity of \(\mathbf{B}\) yields \[(y\backslash x)\vee(x\backslash y)=1.\] Note that if \(b_{1},b_{2}\in B\), \(b_{1}\neq 1\), \(b_{2}\neq 1\) and \(b_{1}\lor b_{2}=1\), then \(b_{1},b_{2}\in F\). (If, say, \(b_{1}\not\in F\) and \(b_{2}\in F\), then since \(B-F\) is strictly below \(F\), we get \(b_{1}\lor b_{2}=b_{2}\neq 1.\) If \(b_{1},b_{2}\not\in F\), then since \(B-F\) is strictly below \(F\neq\{1\}\), there is \(f\in F-\{1\}\) such that \(b_{1}\lor b_{2}\leq f<1\).) The same holds for elements \(c_{1},c_{2}\in C\). Therefore, \(y\backslash x,x\backslash y\in F\). If \(u\in B\) (or \(u\in C\)), since \(\mathbf{B}\) (respectively, \(\mathbf{C}\)) is semilinear, we can apply Lemma 6.5 in [4] and obtain that \[[u\backslash(y\backslash x)u]\vee(x\backslash y)=1.\] If \([u\backslash(y\backslash x)u]=1\), then (sl) holds. If not, then \([u\backslash(y\backslash x)u]\) and \((x\backslash y)\) are non-identity elements of \(B\) (\(C\), respectively) that join to \(1\), so by above fact they are both in \(F\). Given the semilinearity of \(\mathbf{B}\) (respectively, \(\mathbf{C}\)), we can apply Lemma 6.5 in [4], which states that whenever semilinearity holds, if \(a\lor b=1\), also \(\gamma_{1}(a)\vee\gamma_{2}(b)=1\), for any iterated conjugates \(\gamma_{1},\gamma_{2}\). With \(v\in B\) (or \(v\in C\)), this yields precisely (sl). Since the components of the gluing are subalgebras (except in the mentioned cases for the lattice operations), most one-variable equations are preserved. **Proposition 5.2**.: _The \(P\)-gluing \(\mathbf{B}\oplus_{P}\mathbf{C}\), where \(P=(F,I)\), of two IRLs preserves all one-variable equations not involving the lattice operations. Whenever \(B-F\) is closed under joins, and \(C-I\) is closed under meets, all one-variable equations satisfied by both \(\mathbf{B}\) and \(\mathbf{C}\) are preserved._ Proof.: Follows directly from the definition of the operations. Thus, for example, the gluing construction preserves \(n\)-potency, \(x^{n}=x^{n+1}\), for every \(n\geq 1\). In fact, the gluing also preserves all monoidal equations, given the idempotency and the absorbing properties of the conucleus \(\sigma\). **Proposition 5.3**.: _The \(P\)-gluing \(\mathbf{B}\oplus_{P}\mathbf{C}\), with \(P=(F,I)\), preserves all monoid equations valid in both \(\mathbf{B}\) and \(\mathbf{C}\)._ Proof.: If the equation has a variable that appears in only one side, then setting all the other variables equal to \(1\), we obtain a consequence of the form \(x^{n}=1\), for some \(n\neq 1\), and the only model of that equation is the trivial algebra. 
Therefore, we consider equations where all variables appear on both sides. Since \(B\) and \(C\) are closed under multiplication, if an equation holds in the gluing then it also holds in \(\mathbf{B}\) and in \(\mathbf{C}\). We now assume that some equation holds in \(\mathbf{B}\) and in \(\mathbf{C}\). If under some evaluation all variables are chosen from \(B\) or all variables are chosen from \(C\), then the equation holds true. Now suppose that at least one variable is assigned to an element of \(C^{-}\) and one variable is assigned to an element \(B^{-}\). Assume that \(X\), is the set of variables in the equation, \(v\) is the evaluation, and that \(X_{C}=\{x\in X:v(x)\in C^{-}\}\) corresponds to the variables that are mapped to elements of \(C^{-}\); \(X_{C}^{c}\) denotes the complement of \(X_{C}\). We focus on the position of values \(v(x)\), where \(x\in X_{C}\), inside the equation; we group together the elements in between these \(v(x)\) as follows. Given that \(B\) is closed under multiplication, the evaluation of each side of the equation takes the form \[b_{1}^{\prime}c_{1}b_{2}^{\prime}c_{2}\cdots b_{n}^{\prime},\] where each \(c_{i}\) is of the form \(v(x)\), for some \(x\in X_{C}\) and each \(b_{i}^{\prime}\) is a product of elements of the form \(v(x)\), for certain \(x\in X_{C}^{c}\), so \(v(x)\in B\); hence \(c_{i}\in C^{-}\) and \(b_{i}^{\prime}\in B\). By focusing on the elements adjacent to the \(c_{i}\)'s and using that \(cb=bc=\sigma(b)\in B-F\), for \(c\in C^{-}\) and \(b\in B-F\), and that \(fc,cf\in C^{-}\) for \(c\in C^{-}\) and \(f\in F\), the evaluation of the equation is reduced to a form that does not contain any elements of \(C^{-}\) (recall that at least one variable is assigned to an element of \(C^{-}\) and at least one variable is assigned to an element of \(B^{-}\)). Then, we use that \(b_{1}\sigma(b_{2})=\sigma(b_{1}b_{2})=\sigma(b_{1})b_{2}\), for \(b_{1},b_{2}\in B-F\), and that \(f\sigma(b)=\sigma(b)f=\sigma(b)\), for \(b\in B-F\) and \(f\in F\), and the idempotency of \(\sigma\). In the end, the evaluation of the equation takes the form \(\sigma(b_{1}b_{2}\cdots b_{n})=\sigma(b_{n+1}b_{n+2}\cdots b_{m})\), where the \(b_{i}\)'s are exactly the elements of the form \(v(x)\), for \(x\in X_{C}^{c}\), in the exact order they appear in the equation. Therefore, by substituting \(1\) for all \(x\) with \(x\in X_{C}\), and the appropriate value \(b_{i}\) for the other variables, the original equation (which is valid in \(\mathbf{B}\)), yields \(b_{1}b_{2}\cdots b_{n}=b_{n+1}b_{n+2}\cdots b_{m}\), so \(\sigma(b_{1}b_{2}\cdots b_{n})=\sigma(b_{n+1}b_{n+2}\cdots b_{m})\) is also valid. ### \(\mathbf{HSP}_{U}\) Constructions as the one presented in this paper are particularly interesting when they help us better understand and describe the structure theory of the algebras. In what follows, we will just call "gluing" a gluing over a filter and an ideal. In this section we characterize when a gluing is subdirectly irreducible. We will also study the subalgebras, homomorphic images and ultrapowers of a gluing. We recall that \(\mathbf{Fil}(\mathbf{A})\) denotes the lattice of congruence filters of \(\mathbf{A}\). **Proposition 5.4**.: _Consider the \(P\)-gluing \(\mathbf{B}\oplus_{P}\mathbf{C}\), with \(P=(F,I)\), of two integral residuated lattices \(\mathbf{B}\) and \(\mathbf{C}\). We distinguish two cases:_ 1. 
_If_ \(C-I\) _is a congruence filter of_ \(\mathbf{C}\)_, then_ \(\mathbf{Fil}(\mathbf{B}\oplus_{\mathbf{P}}\mathbf{C})\) _is isomorphic to_ \(\mathbf{Fil}(\mathbf{B})\oplus\mathbf{Fil}(\mathbf{C}-I)\)_, the poset ordinal sum of the two lattices._ 2. _Otherwise,_ \(\mathbf{Fil}(\mathbf{B}\oplus_{\mathbf{P}}\mathbf{C})\cong\mathbf{Fil}( \mathbf{C})\)_._ Proof.: The first claim follows from the definition of the order and operations in the gluing construction, see Figure 2. Now, if \(C-I\) is not a congruence filter, then the congruence filter generated by \(C-I\) has non-empty intersection with \(I\), i.e., either some conjugate or some product of elements in \(C^{-}\) are in \(I\). Since \(C\) is closed under multiplication and divisions, this is also true in the gluing \(\mathbf{B}\oplus_{P}\mathbf{C}\). Thus the congruence filter \(\langle C-I\rangle\) generated by \(C-I\) in the gluing also has nonempty-intersection with the ideal \(I\). Since filters are closed upwards, \(B^{-}\) is contained in \(\langle C-I\rangle\) and the second claim follows. **Corollary 5.5**.: _For \(P=(F,I)\), if \(F\neq\{1\}\), then \(\mathbf{B}\oplus_{P}\mathbf{C}\) is subdirectly irreducible iff \(\mathbf{F}\) is subdirectly irreducible iff \(\mathbf{B}\) is subdirectly irreducible iff \(\mathbf{C}\) is subdirectly irreducible. If \(F=\{1\}\), then \(\mathbf{B}\oplus_{P}\mathbf{C}\) is subdirectly irreducible iff \(\mathbf{C}\) is subdirectly irreducible._ Proof.: Assume first that \(F\) is not trivial. By Proposition 5.4 and standard universal algebraic results (see Theorem 8.4 in [6]), \(\mathbf{B}\oplus_{P}\mathbf{C}\) is subdirectly irreducible iff \(\mathbf{F}\) is subdirectly irreducible as an IRL, thus the claim follows since \(F\) is a congruence filter of both \(\mathbf{B}\) and \(\mathbf{C}\), (strictly) above all their other elements. For \(F=\{1\}\), it follows from the definition of the operations and order of the gluing (see also Figure 2) that \(\mathbf{B}\oplus_{P}\mathbf{C}\) is subdirectly irreducible iff \(\mathbf{C}\) is subdirectly irreducible. We can now describe the homomorphic images of a gluing. **Proposition 5.6**.: _Let \(h\) be a homomorphism having as domain a gluing \(\mathbf{B}\oplus_{P}\mathbf{C}\) and \(H\) the associated congruence filter (the preimage of \(1\)). If \(B/H,C/H,F/H,I/H\) denote the images under \(h\), we have that the homomorphic image of \(\mathbf{B}\oplus_{P}\mathbf{C}\) via \(h\) is isomorphic to_ 1. \(\mathbf{C}/H\)_, if_ \(H\cap I\neq\emptyset\)_._ 2. \(\mathbf{B}/H\)_, if_ \(H\cap I=\emptyset\) _and_ \(H\cap B^{-}\neq\emptyset\)_._ 3. \(\mathbf{B}/H\oplus_{P/H}\mathbf{C}/H\)_, where_ \(P/H=(F/H,I/H)\)_, if_ \(H\cap B^{-}=\emptyset\)_._ Proof.: The fact that the homomorphic image through \(h\) is given by the gluing \(\mathbf{B}/H\oplus_{P/H}\mathbf{C}/H\) follows from Proposition 5.4. Moreover, notice that (1) and (2) are particular cases of (3). We call a subalgebra \(\mathbf{S}\) of \(\mathbf{C}\)_divisor-special_ if whenever it contains an element \(c\) that is a left \(I\)-divisor, it also contains \(\ell_{I}(c)\), and similarly if \(d\in S\) where \(d\) is a right \(I\)-divisor, then also \(r(d)\in S\). We call a subalgebra \(\mathbf{T}\) of \(\mathbf{B}\)_\(\sigma\)-special_ if for all \(b\in T-F\), also \(\sigma(b)\in T\), and \((\sigma,\gamma)\)_-special_ if also \(\gamma(b)\in T\). **Proposition 5.7**.: _Let \(\mathbf{B}\oplus_{P}\mathbf{C}\), with \(P=(F,I)\), be the \(P\)-gluing of IRLs \(\mathbf{B}\) and \(\mathbf{C}\). 
Then a subalgebra \(\mathbf{S}\) of \(\mathbf{B}\oplus_{P}\mathbf{C}\) is one of the following:_ 1. \(\mathbf{S}\) _is a subalgebra of_ \(\mathbf{C}\) _that does not include elements whose meet is the top element of_ \(B-F\)_._ 2. \(\mathbf{S}\) _is a subalgebra of_ \(\mathbf{B}\) _that does not include elements whose join is the bottom element of_ \(C-I\)_._ 3. _A gluing_ \(\mathbf{B}_{1}\oplus_{P_{1}}\mathbf{C}_{1}\)_, with_ \(P_{1}=(F_{1},I_{1})\)_, where:_ * \(F_{1}\subseteq F,I_{1}\subseteq I\) _and_ \(F_{1}\cup I_{1}\) _is a subalgebra of_ \(F\cup I\)_;_ * \(\mathbf{C}_{1}\) _is a divisor-special subalgebra of_ \(\mathbf{C}\) _containing at least one element that is not an_ \(I\)_-divisor;_ * \(\mathbf{B}_{1}\) _is a (nonempty)_ \((\sigma,\gamma)\)_-special subalgebra of_ \(\mathbf{B}\)_._ 4. _A gluing_ \(\mathbf{B}_{2}\oplus_{P_{2}}\mathbf{C}_{2}\)_, with_ \(P_{2}=(F_{2},I_{2})\)_, where:_ * \(F_{2}\subseteq F,I_{2}\subseteq I\) _and_ \(F_{2}\cup I_{2}\) _is a subalgebra of_ \(F\cup I\)_;_ * \(\mathbf{C}_{2}\) _is a divisor-special subalgebra of_ \(\mathbf{C}\) _containing only_ \(I\)_-divisors;_ * \(\mathbf{B}_{2}\) _is a (nonempty)_ \(\sigma\)_-special subalgebra of_ \(\mathbf{B}\)_._ Proof.: The first two claims follow from the fact that \(\mathbf{B}\) and \(\mathbf{C}\) are subalgebras except possibly for those joins and meets. The other two claims follow from the definition of the operations. For example, whenever a subalgebra \(\mathbf{S}\) of \(\mathbf{B}\oplus_{P}\mathbf{C}\) contains both an element \(b\in B^{-}\) and an element \(c\in C^{-}\) that is not an \(I\)-divisor, then both the minimal element, \(\sigma_{F}(b)\), and the maximal element, \(\gamma_{F}(b)\), of the equivalence class \([b]_{F}\) also belong to \(\mathbf{S}\), since \(bc=cb=\sigma_{F}(b)\) and \(c\backslash b=b/c=\gamma_{F}(b)\). Moreover, if in the subalgebra \(\mathbf{S}\) there is at least one element \(b\in B^{-}\), and \(c\in C^{-}\) is a left \(I\)-divisor, then \(c\backslash b=\ell_{I}(c)\), and similarly if \(d\in C^{-}\) is a right \(I\)-divisor then \(b/d=r_{I}(d)\); thus such elements need to be in \(S\). We are now going to show that an ultrapower of a gluing \(\mathbf{B}\oplus_{P}\mathbf{C}\) is a gluing of ultrapowers of \(\mathbf{B}\) and \(\mathbf{C}\). **Proposition 5.8**.: \(P_{U}(\mathbf{B}\oplus_{P}\mathbf{C})\subseteq P_{U}(\mathbf{B})\oplus_{(P_{U}(F),P_{U}(I))}P_{U}(\mathbf{C})\) Proof.: We sketch the proof. Let \(\mathbf{A}=\prod_{j\in J}\mathbf{B}\oplus_{P}\mathbf{C}\), and let \(U\) be an ultrafilter on \(J\). For \(x=(x_{j})_{j\in J}\in A\) and \(j\in J\), we distinguish cases according to whether \(x_{j}\) is in \(B^{-},C^{-}\), \(F\) or \(I\), and partition \(J\) into the sets: \[J_{B}(x)=\{j\in J:x_{j}\in B^{-}\},\,J_{C}(x)=\{j\in J:x_{j}\in C^{-}\},\] \[J_{F}(x)=\{j\in J:x_{j}\in F\},\,J_{I}(x)=\{j\in J:x_{j}\in I\}.\] Since \(U\) is an ultrafilter, for each \(x\in A\) only one of these sets belongs to \(U\); also if \([x]_{U}=[y]_{U}\), then the corresponding sets are both in \(U\) or neither in \(U\).
This allows us to define the sets \[B_{U}=\{[x]_{U}:J_{B}(x)\in U\},C_{U}=\{[x]_{U}:J_{C}(x)\in U\},\] \[F_{U}=\{[x]_{U}:J_{F}(x)\in U\},I_{U}=\{[x]_{U}:J_{I}(x)\in U\}.\] It is easy to see that \(B_{U}\cup F_{U}\cup I_{U}\) and \(C_{U}\cup F_{U}\cup I_{U}\) are IRLs with the inherited operations, that \((B_{U}\cup F_{U}\cup I_{U},F_{U},C_{U}\cup F_{U}\cup I_{U},I_{U})\) is a compatible quadruple and that \(B_{U}\cup F_{U}\cup I_{U}\in P_{U}(\mathbf{B})\) and \(C_{U}\cup F_{U}\cup I_{U}\in P_{U}(\mathbf{C})\). Finally, it can also be shown that \(\mathbf{A}/U\) is isomorphic to the gluing \((B_{U}\cup F_{U}\cup I_{U})/U\oplus_{(F_{U},I_{U})}(C_{U}\cup F_{U}\cup I_{U})/U\) (see Proposition 3.3 in [1] for a similar instance). ## 6. Amalgamation property The gluing construction can be seen as a way of finding a (strong) amalgam of two algebras \(\mathbf{B}\) and \(\mathbf{C}\) in the particular case where the common subalgebra \(\mathbf{A}\) corresponds to the union of a congruence filter and an ideal of both \(\mathbf{B}\) and \(\mathbf{C}\). More precisely, let \((\mathbf{B},F,\mathbf{C},I)\) be a compatible quadruple, and let us call \(\mathbf{P}\) the subalgebra of \(\mathbf{B}\) (equivalently, \(\mathbf{C}\)) with domain \(F\cup I\). Then \(\mathbf{P}\) embeds into both \(\mathbf{B}\) and \(\mathbf{C}\); let us name the embeddings with \(i,j\) respectively. Moreover, by construction both \(\mathbf{B}\) and \(\mathbf{C}\) embed in the gluing \(\mathbf{B}\oplus_{P}\mathbf{C}\). Let us denote these embeddings by \(h,k\), respectively. With this notation in mind: **Proposition 6.1**.: _Let \((\mathbf{B},F,\mathbf{C},I)\) be a compatible quadruple as above, and let \(\mathbf{P}\) be the subalgebra of \(\mathbf{B}\) (equivalently, \(\mathbf{C}\)) with domain \(F\cup I\). \((\mathbf{B}\oplus_{P}\mathbf{C},h,k)\) is a (strong) amalgam of \((\mathbf{P},\mathbf{B},\mathbf{C},i,j)\)._ In this section we present two applications of the gluing and partial gluing constructions, respectively, where the gluing constructions shed some light on when amalgamation holds in classes of (bounded) IRLs. ### Generalized rotations We observe that the generalized \(n\)-rotation construction introduced in [7] is actually an example of gluing and we generalize this construction to the noncommutative case, using the gluing perspective. The _generalized \(n\)-rotation_, for \(n\geq 3\), defined in [7] is itself inspired by ideas in Wronski's reflection construction for BCK-algebras [28] and generalizes in this context the (dis)connected rotation construction developed by Jenei [20, 21] for ordered semigroups. In these constructions given a CIRL, and also more generally in [17] given a topped residuated lattice (not necessarily commutative or integral), a bounded involutive structure is produced, obtained by attaching below the original CIRL a rotated copy of it. On the other hand, the generalized rotation takes an CIRL and generates a bounded CIRL, which is not necessarily involutive, by attaching below it a rotated (possibly proper) _nuclear image_ of the original. The generalized \(n\)-rotation, for \(n\geq 3\), further adds a Lukasiewicz chain of \(n\) elements, \(n-2\) of which are between the original structure and its rotated nuclear image (see Figure 4 for a sketch). We introduce the non-commutative version of the generalized \(n\)-rotation, building on the construction in [17], and we apply it to IRLs. 
We first recall from [17] that the disconnected rotation of an IRL \(\mathbf{A}\) is the \(\mathsf{FL}_{\mathsf{ew}}\)-algebra \(\mathbf{A}^{*}\) whose lattice reduct is given by the union of \(A\) and its disjoint copy \(A^{\prime}=\{a^{\prime}:a\in A\}\) with dualized order, placed below \(A\): for all \(a,b\in A\), \[a^{\prime}<b,\text{ and }a^{\prime}\leq b^{\prime}\text{ iff }b\leq a.\] In particular, the top element of \(\mathbf{A}^{*}\) is the top \(1\) of \(\mathbf{A}\) and the bottom element of \(\mathbf{A}^{*}\) is the copy \(0:=1^{\prime}\) of the top \(1\). \(\mathbf{A}\) is a subalgebra, the products in \(A^{\prime}\) are all defined to be the bottom element \(0=1^{\prime}\), and furthermore, for all \(a,b\in A\), \[a\cdot b^{\prime}=(b/a)^{\prime},\quad b^{\prime}\cdot a=(a\backslash b)^{ \prime};\] \[a\backslash b^{\prime}=a^{\prime}/b=(b\cdot a)^{\prime},\quad a^{\prime} \backslash b^{\prime}=a/b,\quad b^{\prime}/a^{\prime}=b\backslash a.\] A nucleus on a residuated lattice \(\mathbf{A}=(A,\wedge,\vee,\cdot,\backslash,1)\) is a closure operator \(\delta\) on \(\mathbf{A}\) that satisfies \(\delta(x)\delta(y)\leq\delta(xy)\), for all \(x,y\in A\). It is known that then \(\mathbf{A}_{\delta}=(\delta[A],\wedge,\vee_{\delta},\cdot_{\delta},\)\(\backslash,\delta(1))\) is a residuated lattice, where \(x\vee_{\delta}y=\delta(x\lor y)\) and \(x\cdot_{\delta}y=\delta(xy)\). The _generalized disconnected rotation_\(\mathbf{A}^{\delta}\) of a IRL \(\mathbf{A}\) with respect to a nucleus \(\delta\) on \(\mathbf{A}\) serves as a non-commutative version of the construction given in [2] (which in turn was inspired by [9]). It differs from the disconnected rotation above in that it replaces \(A^{\prime}\) with \(\delta[A]^{\prime}=\{\delta(a)^{\prime}:a\in A\}\), where \(\delta(a)^{\prime}\) is short for \((\delta(a))^{\prime}\). It is easy to see that then with respect to the above order we have \(\delta(a)^{\prime}\wedge\delta(b)^{\prime}=\delta(\delta(a)\vee\delta(b))^{\prime}\). Moreover, for all \(a\in A\), \(b\in\delta[A]\), \[a\backslash b^{\prime}=(\delta(ba))^{\prime}\qquad\text{ and }\qquad b^{\prime}/a=( \delta(ab))^{\prime}.\] The proof that \(\mathbf{A}^{\delta}\) is a residuated lattice is a very small variation of the analogous proof for \(\mathbf{A}^{*}\), given in Section 6 of [17]. In particular, the product is well-defined given the fact that \(a\backslash b^{\prime}=a^{\prime}/b=(\delta(b\cdot a))^{\prime}\). For an in-depth analysis of this and other rotation constructions, see [15]. Clearly, the disconnected rotation is the special case of a generalized disconnected rotation where the nucleus is the identity map. Now, the _generalized \(n\)-rotation_\(\mathbf{A}_{n}^{\delta}\) of an IRL \(\mathbf{A}\) with respect to a nucleus \(\delta\) and \(n\geq 3\) is defined on the disjoint union of \(A^{\delta}\) and \(\{\ell_{i}:0<i<n-1\}\). We also set \(\ell_{0}=0\) and \(\ell_{n-1}=1\), the bounds of \(\mathbf{A}^{\delta}\). The order extends the order of \(\mathbf{A}^{\delta}\) by \[b<\ell_{1}<\ldots<\ell_{n-2}<a,\] for all \(a\in A\) and \(b\in\delta[A]\); see the rightmost structure of Figure 4. 
The operations extend those of \(\mathbf{A}^{\delta}\), of the \(n\)-element Lukasiewicz chain \(\mathbf{L}_{n}\), where \(0=\ell_{0}<\ell_{1}<\ldots<\ell_{n-2}<\ell_{n-1}=1\), and for \(0<i<n-1\): \[a\ell_{i}=\ell_{i}=\ell_{i}a,\quad b^{\prime}\ell_{i}=0=\ell_{i}b^{\prime}.\] The proof that the resulting structure is an \(\mathsf{FL}_{\mathsf{ew}}\)-algebra is an easy combination of the proofs of [17] and [7], but it also follows from Proposition 6.2 below. We mention that in [7] the generalized \(n\)-rotation is defined with respect to nuclei that preserve the lattice operations, thus the construction we propose here is more general also in the commutative case. The subvariety \(\mathsf{MWR}_{n}\) of \(\mathsf{FL}_{\mathsf{ew}}\) generated by the generalized \(n\)-rotations of CIRLs where the nuclei preserve the lattice operations is axiomatized in [7]. This class of algebras contains as subvarieties, among others, the varieties of: Godel algebras, product algebras, the variety generated by perfect MV-algebras, nilpotent minimum algebras, \(n\)-contractive BL-algebras, and Stonean residuated lattices. We now show that the generalized \(n\)-rotation is a special case of a gluing. We refer to \(1\)-sums of the kind \(\mathbf{L}_{n}\oplus_{1}\mathbf{A}\) as \(n\)_-liftings_ of an IRL \(\mathbf{A}\). Then generalized \(n\)-rotations are gluings of disconnected rotations and \(n\)-liftings. **Proposition 6.2**.: _The generalized \(n\)-rotation \(\mathbf{A}_{n}^{\delta}\) of an IRL \(\mathbf{A}\) with respect to a nucleus \(\delta\) for \(n\geq 3\) is isomorphic to the gluing \(\mathbf{A}^{\delta}\oplus_{(A,\{0\})}(\mathbf{L}_{n}\oplus_{1}\mathbf{A})\) of the generalized disconnected rotation \(\mathbf{A}^{\delta}\) and the \(1\)-sum \(\mathbf{L}_{n}\oplus_{1}\mathbf{A}\) over \(\mathbf{A}\), \(\{0\}\)._ Proof.: First we show that the conditions of the gluing are satisfied. Note that \(A\) is a congruence filter strictly above all other elements of \(\mathbf{A}^{\delta}\) and \(\mathbf{L}_{n}\oplus_{1}\mathbf{A}\), and \(\{0\}\) is a shared lattice ideal. For all \(x\in A^{\delta}-A=\delta[A]^{\prime}\), there is \(y\in A\) with \(x=\delta(y)^{\prime}\), so \(x\backslash 0=(\delta(y))^{\prime}\backslash 1^{\prime}=\delta(y)/1=\delta(y) \in A\); also \(0\backslash x=1\in A\). Therefore, \(x\;\theta_{A}\;0\) and \(\sigma_{A}(x)=\min[x]_{A}=0\). Furthermore, since all elements in \((\mathbf{L}_{n}\oplus_{1}\mathbf{A})-(A\cup\{0\})\) are \(0\)-divisors, \(\gamma\) does not need to be defined. Moreover, since \(\sigma_{A}(x)=0\), for all \(x\in A^{\delta}-A\), \(\sigma\) is clearly absorbing. Thus \((\mathbf{A}^{\delta},A)\) is a weak lower-compatible pair. To show that \((\mathbf{L}_{n}\oplus_{1}\mathbf{A},\{0\})\) is an upper-compatible pair, first note that \(\{0\}\) is a lattice ideal strictly below all other elements. Moreover, since being an \(I\)-divisor here means being a \(0\)-divisor, we have \(\ell(x)=x\backslash 0\) and \(r(x)=0/x\), for all \(x\in(\mathbf{L}_{n}\oplus_{1}\mathbf{A})-(A\cup\{0\})=\mathbf{L}_{n}-\{0,1\}\). Now we prove that \((\mathbf{A}^{\delta},A,\,\mathbf{L}_{n}\oplus_{1}\mathbf{A},\{0\})\) is a compatible quadruple: 1. All elements of \(\mathbf{L}_{n}-\{1\}\) are \(0\)-divisors, thus the first condition is satisfied. 2. If \(c,d\in\mathbf{L}_{n}-\{0\}\), with \(cd=0\), then \(0x=x0=0=\sigma_{A}(x)\) for all \(x\in\delta[A]^{\prime}-\{0\}\). 3. \(\delta[A]^{\prime}\) is closed under join, so this condition is vacuously true. 4. 
\((\mathbf{L}_{n}\oplus_{1}A)-\{0\}\) has a least element \(\ell_{1}\), so this condition is also vacuously true. It is clear that \(\mathbf{A}^{\delta}\) and \(\mathbf{L}_{n}\oplus_{1}\mathbf{A}\) are subalgebras of the gluing and also of the generalized \(n\)-rotation and they are ordered the same way in both of these structures. Finally, for \(\ell_{i}\in\mathbf{L}_{n}-\{1,0\}\), \(1<i<n-1\) and \(\delta(a)^{\prime}\in\delta[A]^{\prime}\), we have \(\ell_{i}\delta(a)^{\prime}=0\) in both the generalized \(n\)-rotation and in the gluing. In [3, Theorem 3.11], it is shown that a variety \(\mathsf{V}\) of semilinear CIRLs has the AP if and only if the variety generated by generalized \(n\)-rotations of chains in \(\mathsf{V}\) with respect to a nucleus definable by a term in the language of residuated lattices, has the AP. This allows to transfer known results about the AP in relatively tame varieties of CIRLs, to varieties of \(\mathsf{FL_{ew}}\)-algebras that are more complicated to study. For instance, since basic hoops, Wajsberg hoops, cancellative hoops, Godel hoops, all have the AP, so do the varieties generated by their generalized \(n\)-rotations ([3], Corollary 3.12). We show that one can go in the same direction also in the non-commutative case, with the following bridge result. **Proposition 6.3**.: _Let \(\delta\) be a term-defined nucleus for a class \(\mathsf{K}\) of IRLs, and let \(n\geq 3\). The following are equivalent:_ 1. \(\mathsf{K}\) _has the amalgamation property;_ 2. _the class of generalized_ \(n\)_-rotations_ \(\{\mathbf{A}^{\delta}_{n}:\mathbf{A}\in\mathsf{K}\}\) _has the amalgamation property._ 3. _the class_ \(\{\mathbf{A}^{\delta}_{m}:\mathbf{A}\in\mathsf{K},m-1\text{ divides }n-1\}\) _has the amalgamation property._ Proof.: In this proof, let us denote with \(\mathsf{K}^{\delta}_{n}\) the class of the generalized \(n\)-rotations via \(\delta\) of algebras in \(\mathsf{K}\). We first show \((1)\Leftrightarrow(2)\). The key idea here is that homomorphisms of generalized \(n\)-rotations are uniquely determined by their restriction to the upper-compatible triple in the gluing: i.e., the IRL and the \(\operatorname{L}_{n}\) chain. This is due to the fact that the lower-compatible triple is a rotation whose domain is \(A\cup\delta[A]^{\prime}\) for some IRL \(\mathbf{A}\), and the elements \(\delta(a)^{\prime}\in A\) are such that \(\delta(a)^{\prime}=a\backslash 0\), thus their homomorphic images are determined by those on \(A\). More precisely, any homomorphism \(h:A\to B\), for \(\mathbf{A},\mathbf{B}\) IRLs, extends to a homomorphism \(\bar{h}:\mathbf{A}_{n}^{\delta}\to\mathbf{B}_{n}^{\delta}\) in the following way: \[\bar{h}(a)=h(a),\quad\bar{h}(\delta(a)^{\prime})=\delta(h(a))^{\prime}\quad \bar{h}(l_{i})=l_{i}\text{ for all }i:1\ldots n-1\] And vice versa, given any homomorphism \(k:\mathbf{A}_{n}^{\delta}\to\mathbf{B}_{n}^{\delta}\) the restriction \(k_{A}\) to \(\mathbf{A}\) is a homomorphism from \(A\) to \(B\). Indeed, given \(a\in A\), suppose \(k(a)\in\mathbf{B}_{n}^{\delta}-B\). Then \(k(a^{n})=k(a)^{n}=0_{\mathbf{B}_{n}^{\delta}}\), but \(a^{n}\in A\), since \(A\) is a congruence filter of the disconnected rotation. This leads to a contradiction, since \(\neg(a^{n})=a^{n}\backslash 0=\delta(a^{n})^{\prime}\), thus \((\neg(a^{n}))^{2}=0\), but \[k((\neg(a^{n}))^{2})=(\neg(k(a^{n})))^{2}=(\neg 0)^{2}=1^{2}=1\neq k(0)=0.\] So, \(a\in A\) implies \(k(a)\in B\). 
Moreover, \(h\) is an embedding iff \(\bar{h}\) is an embedding, and if \(k\) is an embedding then clearly \(k_{A}\) is an embedding. Thus, suppose that \(\mathsf{K}\) has the amalgamation property, and consider a V-formation in \(\mathbf{K}_{n}^{\delta}\): \(\mathbf{A}_{n}^{\delta},\mathbf{B}_{n}^{\delta},\mathbf{C}_{n}^{\delta}\) with embeddings \(i:\mathbf{A}_{n}^{\delta}\to\mathbf{B}_{n}^{\delta},j:\mathbf{A}_{n}^{\delta} \to\mathbf{C}_{n}^{\delta}\). Then one can consider the restrictions of \(i\) and \(j\) to \(\mathbf{A}\), and obtain a V-formation in \(\mathsf{K}\), given by \(\mathbf{A},\mathbf{B},\mathbf{C}\) and the embeddings \(i_{A},j_{A}\). This has an amalgam, say \(\mathbf{D}\) with embeddings \(f:B\to D,g:C\to D\) such that \(f\circ i=g\circ j\). Thus it follows from what was shown before that \(\mathbf{D}_{n}^{\delta}\) is going to be an amalgam for the V-formation in \(\mathsf{K}_{n}^{\delta}\), with embeddings \(\bar{f},\bar{g}\). Similarly, supposing that \(\mathsf{K}_{n}^{\delta}\) has the amalgamation property, we consider a V-formation in \(\mathsf{K}\): \(\mathbf{A},\mathbf{B},\mathbf{C}\in\mathsf{K}\) and embeddings \(k:A\to B,l:A\to C\). We take the corresponding V-formation in \(\mathsf{K}_{n}^{\delta}\): \(\mathbf{A}_{n}^{\delta},\mathbf{B}_{n}^{\delta},\mathbf{C}_{n}^{\delta}\) with embeddings \(\bar{k},\bar{l}\). The amalgam in \(\mathsf{K}_{n}^{\delta}\) is going to be some \(\mathbf{D}_{n}^{\delta}\), with \(\mathbf{D}\) an IRL, and embeddings \(s:\mathbf{B}_{n}^{\delta}\to\mathbf{D}_{n}^{\delta},t:\mathbf{C}_{n}^{\delta} \to\mathbf{D}_{n}^{\delta}\). Thus \(\mathbf{D}\) with embeddings \(s_{B},t_{C}\) are an amalgam for the V-formation in \(\mathsf{K}\). Therefore, (1) and (2) are equivalent. While (3) clearly implies (2), (2) \(\Rightarrow\) (3) can be shown again via the fact that homomorphisms of generalized \(n\)-rotations are uniquely determined by their restriction to the upper-compatible triple in the gluing. In particular, consider a V-formation \(\mathbf{A}_{j}^{\delta},\mathbf{B}_{k}^{\delta},\mathbf{C}_{l}^{\delta}\), with \(\mathbf{A}_{j}^{\delta}\) embedding in the other two. If the amalgam in \(\mathsf{K}\) of \(\mathbf{A},\mathbf{B},\mathbf{C}\) is \(\mathbf{D}\), then it is routine to check that the desired amalgam is given by \(\mathbf{D}_{\operatorname{\mathrm{lcm}}\{\mathrm{k},\mathrm{l}\}}^{\delta}\). ### A \(2\)-potent variety of \(\mathsf{FL}_{\mathsf{ew}}\)-algebras We are now going to study a variety in which the subdirectly irreducible members can be characterized as partial gluings of a class of algebras. More precisely, as partial gluings of simple \(2\)-potent (i.e., satisfying \(x^{2}=x^{3}\)) CIRL-chains. Let us consider totally ordered \(\mathsf{FL}_{\mathsf{ew}}\)-algebras that are \(2\)-potent, and such that for every \(x,y\) \[x=1\text{ or }x\cdot(x\wedge y)\leq(x\wedge y)^{2}.\] Equivalently, for \(y\leq x<1\), we have \(xy=y^{2}\). The above is a positive universal first-order formula, thus by results in [13] these structures generate a variety of MTL-algebras that satisfy \(x^{2}=x^{3}\) and \[x\vee\ ((x\cdot(x\wedge y))\backslash(x\wedge y)^{2})=1.\] We will call this variety of \(\mathsf{FL}_{\mathsf{ew}}\)-algebras \(\mathsf{GL}_{2}\). One can easily see that the finite chains in this variety are of the form shown in Figure 5. 
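As a sanity check of the defining conditions of \(\mathsf{GL}_{2}\), the following Python sketch is purely illustrative (the five-element chain encoded here is our own toy example and is not taken from Figure 5): it verifies \(2\)-potency and the condition \(x=1\) or \(x\cdot(x\wedge y)\leq(x\wedge y)^{2}\) on a finite totally ordered candidate, with the residuum obtained by brute force.

```python
from itertools import product

# A toy 5-element chain 1 > a > a2 > b > b2 (an assumed example, not from the paper):
# two "blocks" {a, a2} and {b, b2}; for u, v < 1 the product is (u ∧ v)^2.
chain = ["b2", "b", "a2", "a", "1"]              # listed from bottom to top
rank = {x: i for i, x in enumerate(chain)}
square = {"1": "1", "a": "a2", "a2": "a2", "b": "b2", "b2": "b2"}

def leq(x, y):
    return rank[x] <= rank[y]

def meet(x, y):
    return min(x, y, key=lambda u: rank[u])

def mult(x, y):
    if x == "1": return y
    if y == "1": return x
    return square[meet(x, y)]                     # (x ∧ y)^2 for x, y < 1

def residuum(x, y):                               # x -> y = max{z : x*z <= y}
    return max((z for z in chain if leq(mult(x, z), y)), key=lambda u: rank[u])

# 2-potency: x^2 = x^3
assert all(mult(x, mult(x, x)) == mult(x, x) for x in chain)
# GL2 condition: x = 1 or x*(x ∧ y) <= (x ∧ y)^2
assert all(x == "1" or leq(mult(x, meet(x, y)), mult(meet(x, y), meet(x, y)))
           for x, y in product(chain, repeat=2))
# residuation: x*y <= z iff y <= x -> z (the chain is commutative)
assert all(leq(mult(x, y), z) == leq(y, residuum(x, z))
           for x, y, z in product(chain, repeat=3))
print("all checks passed")
```

Up to relabeling, this two-block chain is exactly the kind of structure that Proposition 6.8 below characterizes as an iterated partial gluing of simple \(2\)-potent chains.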
Figure 5. A subdirectly irreducible algebra in \(\mathsf{GL}_{2}\).
Figure 6. The iterated partial gluing of three simple 2-potent chains.
In more detail, we will show that any \(\mathsf{GL}_{2}\)-chain consists of subintervals made of simple \(2\)-potent CIRL-chains, and that it can be characterized as a _partial gluing_ of such chains. Using this representation, we will show that the amalgamation property fails for the class of \(\mathsf{GL}_{2}\)-chains. In particular, their representation as partial gluings will allow us to fully characterize their subalgebras and determine exactly which V-formations have an amalgam. First, let us show how we can iterate the partial gluing construction in this case. For example, we consider three simple \(2\)-potent CIRL-chains \(\mathbf{S}_{1},\mathbf{S}_{2}\) and \(\mathbf{S}_{3}\), as in Figure 6. Consider the triple \((\mathbf{S}_{1},\sigma_{1},\gamma_{1})\) where the implication in \(\mathbf{S}_{1}\) is redefined to be: \(x\to y=1\) iff \(x\leq y\), and undefined otherwise, and furthermore for all \(a\) with \(x^{2}\leq a\leq x\), we have \[\sigma_{1}(a)=x^{2},\quad\gamma_{1}(a)=x,\quad\sigma_{1}(1)=\gamma_{1}(1)=1.\] Moreover, consider the triple \((\mathbf{S}_{2},\ell_{2},r_{2})\) where the maps \(\ell_{2},r_{2}\) have empty domain. **Lemma 6.4**.: \((\mathbf{S}_{1},\sigma_{1},\gamma_{1})\) _is a lower-compatible triple and \((\mathbf{S}_{2},\ell_{2},r_{2})\) is an upper-compatible triple._ Proof.: For all \(a,b\in S_{1}-\{1\}\), we have \(\sigma_{1}(a)\leq b\leq\gamma_{1}(a)\), which implies that the two operators form a residuated pair. In a lower-compatible triple the implication \(x\to y\) is undefined iff \(\sigma(x)\leq y\) and \(x\not\leq y\), and we can show that this holds in \((\mathbf{S}_{1},\sigma_{1},\gamma_{1})\). Indeed, notice that for all \(x,y\) such that \(y^{2}\leq x\), we have \(\sigma_{1}(x)\leq y\), and by definition \(x\to y\) is undefined if and only if \(x\not\leq y\). Also, it follows from direct computation that \(\sigma_{1}\) is a strong conucleus, \(\gamma_{1}\) is a closure operator, and \(cd=dc\leq\sigma_{1}(c)\) for all \(c,d\in S_{1},d\neq 1\). Thus \((\mathbf{S}_{1},\sigma_{1},\gamma_{1})\) is a lower-compatible triple. \((\mathbf{S}_{2},\ell_{2},r_{2})\) is an upper-compatible triple since all the products are defined and all other properties are vacuously true. Moreover, we can consider the ideal \(I=\{0\}\) with \(\top_{I}=0\), where \(0\) is the bottom element of \(\mathbf{S}_{1}\), and both assumptions \((A1),(A2)\) are satisfied. Also, conditions \((A3),(A4)\) are trivially satisfied since the algebras considered are chains. Thus, letting \(\tau_{2}=(\sigma_{1},\gamma_{1},\ell_{2},r_{2})\), we can define the partial gluing \(\mathbf{S}_{1}\oplus_{\tau_{2}}\mathbf{S}_{2}\), which is a total IRL since \(\mathbf{S}_{2}\) has a coatom (see Theorem 3.11). Similarly, we consider the upper-compatible triple \((\mathbf{S}_{3},\ell_{3},r_{3})\) where again \(\ell_{3},r_{3}\) have empty domain.
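For concreteness, suppose each \(\mathbf{S}_{i}\) is a three-element chain (an assumption made here purely for illustration; Figure 6 may depict larger chains), say \(S_{1}=\{1>x>x^{2}\}\) and \(S_{2}=\{1>y>y^{2}\}\). Unwinding the case distinctions defining the partial gluing, the underlying chain of \(\mathbf{S}_{1}\oplus_{\tau_{2}}\mathbf{S}_{2}\) is \(1>y>y^{2}>x>x^{2}\), and sample values of the operations are \[y\cdot y=y^{2},\qquad y\cdot x=x\cdot y=\sigma_{1}(x)=x^{2},\qquad y^{2}\cdot x^{2}=\sigma_{1}(x^{2})=x^{2},\] \[x\to x^{2}=y\ \text{(the coatom of }\mathbf{S}_{2}\text{, since }x\to x^{2}\text{ is undefined in }\mathbf{S}_{1}\text{)},\qquad y\to x=\gamma_{1}(x)=x,\qquad x\to y=1.\]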
Now, we also consider \(\mathbf{S}_{1}\oplus_{\tau_{2}}\mathbf{S}_{2}\) where \(x\to y\) is defined and equal to \(1\) iff \(x\leq y\), and where for all \(a:x^{2}\leq a\leq x\), and \(b:y^{2}\leq b\leq y\) we have \[\sigma(a)=x^{2},\quad\gamma(a)=x,\quad\sigma(b)=y^{2},\quad\gamma(b)=y,\quad \sigma(1)=\gamma(1)=1.\] Building on the same line of reasoning as Lemma 6.4, \((\mathbf{S}_{1}\oplus_{\tau_{2}}\mathbf{S}_{2},\sigma,\gamma)\) is a lower compatible triple, and we can define the partial gluing \((\mathbf{S}_{1}\oplus_{\tau_{2}}\mathbf{S}_{2})\oplus_{\tau_{3}}\mathbf{S}_{3}\) where \(\tau_{3}=(\sigma,\gamma,\ell,r)\). This process can be iterated, and provides a way of constructing a partial gluing of a finite family of simple chains, indexed by a totally ordered set of indexes. Let us now give a more general definition, in order to be able to construct a partial gluing of a family of algebras, indexed by an arbitrary totally ordered chain with a largest element. We consider a family of algebras \(\{\mathbf{A}_{i}\}_{i\in\mathbf{I}}\), where: each \(\mathbf{A}_{i}\) is a simple 2-potent CIRL-chain with a coatom \(c_{i}\) and a bottom \(0_{i}\); \(A_{i}\cap A_{j}=\{1\}\) for all \(i,j\in I\); \(\mathbf{I}=(I,\leq)\) is a totally ordered index set with largest element \(i_{0}\). We will now define the _iterated partial gluing of \(\{\mathbf{A}_{i}\}_{i\in I}\)_, and denote it with \(\bigoplus_{I}\mathbf{A}_{i}\), as follows. The domain of \(\bigoplus_{I}\mathbf{A}_{i}\) is given by \(\bigcup_{i\in I}A_{i}\). The order is defined by \(x\leq y\) iff either: 1. \(x,y\in A_{i}\) for some \(i\in I\) and \(x\leq_{A_{i}}y\); or 2. \(x\in A_{i},y\in A_{j}\) and \(i<j\), For each \(\mathbf{A}_{i}\), and for all \(x\neq 1\), let \(\sigma_{i}(x)=0_{i}\), \(\gamma_{i}(x)=c_{i}\), and \(\sigma_{i}(1)=\gamma_{i}(1)=1\). The product and implication are as follows: \[x\cdot y =\left\{\begin{array}{ll}x\cdot_{A_{i}}y&\text{ if }x,y\in A_{i} \text{ for some }i\in I\\ \sigma_{i}(x)&\text{ if }x\in A_{i},y\in A_{j}\text{ and }i<j\\ \sigma_{j}(y)&\text{ if }x\in A_{i},y\in A_{j}\text{ and }j<i\end{array}\right.\] \[x \to\,y =\left\{\begin{array}{ll}c_{i_{0}}&\text{ if }x,y\in A_{i} \text{ for some }i\in I\text{ and }x\not\leq y\\ \gamma_{j}(y)&\text{ if }x\in A_{i},y\in A_{j}\text{ and }j<i\\ 1&\text{ if }x\leq y\end{array}\right.\] **Proposition 6.5**.: _Let \(\{\mathbf{A}_{i}\}_{i\in I}\) be a family of simple 2-potent CIRL-chains \(\mathbf{A}_{i}\), each with a coatom \(c_{i}\) and a bottom \(0_{i}\), such that \(A_{i}\cap A_{j}=\{1\}\) for all \(i,j\in I\), and \(\mathbf{I}=(I,\leq)\) a totally ordered index set with a largest element \(i_{0}\). Then the iterated partial gluing \(\bigoplus_{I}\mathbf{A}_{i}\) is a CIRL._ Moreover, if \(I\) is a totally ordered finite set of indexes, the above definition of iterated partial gluing corresponds to iterating the partial gluing construction as in the above example with the algebras \(\mathbf{S}_{1},\mathbf{S}_{2},\mathbf{S}_{3}\). Indeed: **Lemma 6.6**.: _Let \(\{\mathbf{A}_{i}\}_{i\in I}\) be a family of simple 2-potent CIRL-chains \(\mathbf{A}_{i}\), each with a coatom \(c_{i}\) and a bottom \(0_{i}\), such that \(A_{i}\cap A_{j}=\{1\}\) for all \(i,j\in I\). Let \(I\) be a totally ordered set with largest element \(i_{0}\) such that \(I-\{i_{0}\}\) has a largest element \(i_{1}\). Then:_ 1. 
\((\bigoplus_{I-\{i_{0}\}}\mathbf{A}_{i},\sigma,\gamma)\) _is a lower compatible triple, where: the implication is redefined to be_ \(x\to y=1\) _iff_ \(x\leq y\)_, and undefined otherwise; if_ \(x\in A_{i}-\{1\}\)_,_ \(\sigma(x)=\sigma_{i}(x)=0_{i},\gamma(x)=\gamma_{i}(x)=c_{i}\)_,_ \(\sigma(1)=\gamma(1)=1\)_._ 2. \((\mathbf{A}_{i_{0}},\ell,r)\) _is an upper-compatible triple where_ \(\ell\)_,_ \(r\) _have empty domain._ 3. \(\bigoplus_{I}\mathbf{A}_{i}\cong(\bigoplus_{I-\{i_{0}\}}\mathbf{A}_{i})\oplus_ {\tau}\mathbf{A}_{i_{0}}\)_, with_ \(\tau=(\sigma,\gamma,\ell,r)\) _and_ \(I=\{0\}\) _with_ \(\top_{I}=0\) We will now show how to characterize \(\mathsf{GL}_{2}\)-chains with iterated partial gluings. For a \(\mathsf{GL}_{2}\)-chain \(\mathbf{A}\) and \(a\in A\), let us now define \(A(a)=\{x\in A:x^{2}=a^{2}\}\). **Lemma 6.7**.: _Let \(\mathbf{A}\) be a \(\mathsf{GL}_{2}\)-chain and \(a,b\in A\). Then:_ 1. _If_ \(a\leq b<1\)_, then_ \(ab=\min A(a)\)_._ 2. _If_ \(a\leq b\)_, then_ \(a\to b=1\)_._ 3. _If_ \(a<b<1\) _and_ \(A(a)=A(b)\)_, then_ \(\mathbf{A}\) _has a coatom_ \(c\) _and_ \(b\to a=c\)_._ 4. _If_ \(a<b<1\) _and_ \(A(a)\neq A(b)\)_, then_ \(b\to a=\max A(a)\)_._ _Moreover, if \(\mathbf{A}\) has no coatom, then it is a Godel algebra._ Proof.: For (1), note that if \(a\leq b<1\), then using the defining property for \(\mathsf{GL}_{2}\), we get \(a^{2}\leq ab\leq(a\wedge b)^{2}=a^{2}\) so \(ab=a^{2}=\min A(a)\). (2) always holds in CIRLs. Let us prove (3). If \(a<b<1\) and \(A(a)=A(b)\), then for every non-identity element \(c\) of \(A\), we have \(bc\leq b^{2}=a^{2}\leq a\), and so \(c\leq b\to a\); therefore \(\mathbf{A}\) has a coatom, which is equal to \(b\to a\), for all \(a<b\neq 1\) with \(A(a)=A(b)\). If \(A(c)\) is a singleton for all \(c\in A\) (i.e. \(\mathbf{A}\) is a Godel algebra), then \(\mathbf{A}\) may have no coatom, but if there is at least one non-trivial \(A(c)\), then \(\mathbf{A}\) has a coatom. For (4) suppose \(a<b<1\) and \(A(a)\neq A(b)\), then for every \(c\in A(a)\), we have \(bc=a^{2}\leq a\), but for \(d>a\) with \(A(d)\neq A(a)\), we have \(bd\geq(b\wedge d)^{2}\neq a^{2}\), so \(bd\not\in A(a)\), hence \(bd\not\leq a\). Therefore, \(A(a)\) has a maximum element and \(b\to a=\max A(a)\), for all non-identity \(b>a\) with \(A(a)\neq A(b)\). If \(\mathbf{A}\) has no coatom, then \(A(a)=\{a\}\) for each \(a\in A\) by (3), thus every element of \(\mathbf{A}\) is idempotent. Therefore, \(\mathbf{A}\) is a Godel algebra. We are now ready to characterize \(\mathsf{GL}_{2}\)-chains as iterated partial gluings of simple chains. **Proposition 6.8**.: _The chains in \(\mathsf{GL}_{2}\) are exactly the iterated partial gluings of simple bounded CIRL-chains with a coatom over a totally ordered index set with both a bottom and a top element._ Proof.: For a chain \(\mathbf{A}\) in \(\mathsf{GL}_{2}\) we denote by \(A^{2}=\{a^{2}:a\in A\}=\{a:a=a^{2}\}\) the set of idempotent elements of \(A\) (equivalently all squares of \(\mathbf{A}\)). Note that for \(a,b\in A\) we have \(A(a)=A(b)\) iff \(a^{2}=b^{2}\) ; also for every \(a\in A\) we have \(A(a)=A(a^{2})\). Moreover, if \(A(a)\neq A(b)\), then \(a<b\) iff for all \(x\in A(a)\) and \(y\in A(b)\) we have \(x<y\). Therefore, the collection \(\{A(a):a\in A\}\) is equal to the collection \(\{A(s):s\in A^{2}\}\), it partitions \(A\) into equivalence classes, which are intervals, and these intervals are linearly ordered in \(\mathbf{A}\); also \(A(1)=\{1\}\). 
So, \(A\) is the order-theoretic ordinal sum of the chains \(A(s)\) along \(A^{2}\). Moreover, we have seen in the proof of Lemma 6.7 that for all \(a\in A\), \(A(a)\) has a maximum element whenever there is a \(b<1\) strictly larger that \(a\). Notice now that if such a \(b\) does not exist, then \(A(a)\) is the biggest interval below \(1\), which necessarily has a maximum element by the preceding paragraph (the coatom of \(\mathbf{A}\)), unless \(\mathbf{A}\) is a Godel algebra. But even if \(\mathbf{A}\) is a Godel algebra, then \(A(a)=\{a\}\), so \(A(a)\) has a maximum element. We define \(I:=\{\max A(a):a\in A-\{1\}\}\), the set of all these maximal elements, and note that \(\{A(i):i\in I\}=\{A(a):a\in A-\{1\}\}\). For \(i\in I\) we define the set \(A_{i}:=A(i)\cup\{1\}\) and note that it supports the structure of a simple \(2\)-potent integral residuated chain \(\mathbf{A}_{i}\); the order and the multiplication are inherited by \(\mathbf{A}\) and all interesting divisions produce the coatom of \(\mathbf{A}_{i}\). We mention that the structure of each \(A(i)\), for \(i\in I\), is that of an arbitrary bounded chain. We now claim that \(\mathbf{A}\) is the iterated gluing of the algebras \(\{\mathbf{A}_{i}\}_{i\in\mathbf{I}}\), where for \(x\in A(i)\), \(\sigma(x)=\min A(i)\) and \(\gamma_{i}(x)=\max A(i)(=i)\). Indeed it follows from the definition that the domain and the order coincide. The monoidal operation inside each \(\mathbf{A}_{i}\) coincides with the one inherited from in \(\mathbf{A}\): for \(z,w\in A(i)\) we have \(zw=z^{2}=w^{2}\). Also, for \(x\in A_{i},y\in A_{j}\) with \(j<i\), we have \(xy=y^{2}=\sigma(y)\). For the implications \(x\to y\) with \(y<x\), we have that if \(A(x)=A(y)\), then \(x\to y=c\), the coatom of the chain \(\mathbf{A}\), and if \(A(y)=A(i)\neq A(x)\) where \(i=\max A(y)\in I\), then \(x\to y=\max A(y)=\gamma_{i}(y)\). We now show that given any family \(\{\mathbf{A}_{i}\}_{i\in I}\) of simple \(2\)-potent chains (with a coatom), their iterated partial gluing belongs to \(\mathsf{GL}_{2}\). Indeed if \(x,y\in A_{i}-\{1\}\), then \(x\cdot(x\wedge y)=x^{2}=y^{2}=(x\wedge y)^{2}\). Also, if \(x\in A_{i}-\{1\},y\in A_{j}-\{1\},j<i\), then \(x\cdot(x\wedge y)=x\cdot y=\sigma(y)=y^{2}=(x\wedge y)^{2}\), and \(x\cdot(x\wedge y)=x\cdot x=(x\wedge y)^{2}\). As residuated lattices are determined by their order and multiplication reducts, the result follows. We are now going to show that \(\mathsf{GL}_{2}\) is generated by its finite members, that is, it has the finite model property (or FMP). First we need the following technical lemma. **Lemma 6.9**.: _Let \(\mathbf{A}\) be a chain in \(\mathsf{GL}_{2}\), \(X\) a subset of \(A\) and \(\langle X\rangle\) the subalgebra of \(\mathbf{A}\) generated by \(X\)._ 1. _If_ \(X\) _consists solely of idempotents, and for all_ \(x\in X\)_,_ \(A(x)\) _is a singleton except possibly for_ \(A(m)\) _in case_ \(X-\{1\}\) _has a maximum element_ \(m\)_, then_ \(\langle X\rangle=\{1\}\cup X\)_._ 2. _Otherwise, if either_ \(X\) _contains a non-idempotent element or if_ \(A(x)\) _is not a singleton for some non-maximal element_ \(x\) _of_ \(X-\{1\}\)_, then_ \(\mathbf{A}\) _has a coatom_ \(c\) _and_ \[\langle X\rangle=X\cup\{\min A(x):x\in X\}\cup\{\max A(x):x\in X\}\cup\{1,c, \min A(c)\}.\] Proof.: Clearly, \(1\) needs to be in \(\langle X\rangle\). Also, closure under the lattice operations does not increase the original set. From Lemma 6.7, \(x^{2}=\min A(x)\). 
Also, the only other elements that are generated are: (i) the coatom \(c\), when there is an \(A(x)\) such that \(A(x)\cap X\) is not a singleton (i.e., \(X\) does not consist solely of idempotents) and (ii) \(\max A(x)\), when there is a block \(A(y)\cap X\) strictly between \(A(x)\cap X\) and \(A(1)\), as well as the ones obtained by interactions of (i) and (ii). 1. In the first case, \(X\) is closed under multiplication, as for \(x,y\in X-\{1\}\), we have \(xy=(x\wedge y)^{2}\in\{x^{2},y^{2}\}=\{x,y\}\). Also, for \(x,y\in X-\{1\}\) with \(x<y\), the set \(A(x)\) is a singleton \(\{x\}\) and so \(A(x)\neq A(y)\), hence \(y\to x=\max A(x)=x\). 2. In the second case, either \(X\) has a non-idempotent element, which also is present in \(\langle X\rangle\), or every element in \(X\) is idempotent and there is some non-maximal element \(x\) of \(X-\{1\}\) such \(A(x)\) is not a singleton, in which case there also exists a \(y>x\) in \(X-\{1\}\) (as \(x\) is non-maximal there) and \(A(y)\neq A(x)\) as \(x,y\) are distinct idempotents; hence \(y\to x=\max A(x)\), where \(\max A(x)\neq x\), as \(A(x)\) is not a singleton, hence \(z\) is not idempotent in \(\langle X\rangle\). In any case, \(\langle X\rangle\) contains a non-idempotent element \(w\). Then \(\mathbf{A}\) has a coatom \(c\), which is equal to \(w\to w^{2}\) and which is in \(\langle X\rangle\). Closure under multiplication is equivalent to closure under squares, which is equivalent to containing the bottom of the block of an element (since \(a^{2}=\min A(a)\), for all \(a\in A\).) We need to consider only implications of the form \(a\to b\), for \(b<a\neq 1\). If \(A(a)=A(b)\), then \(a\to b=c\), and if \(A(a)\neq A(b)\) then \(a\to b=\max A(b)\). Conversely, if \(b\in\langle X\rangle-\{1,c\}\), \(\max A(b)=c\to b\in\langle X\rangle\), since \(c\in\langle X\rangle\); the special case of \(b=c\) also works as \(c=\max A(c)\). We are now ready to show the following. **Proposition 6.10**.: _The variety \(\mathsf{GL}_{2}\) is locally finite, hence it has the FMP._ Proof.: By Lemma 6.9, if \(X\) is finite of size \(n\), then \(\langle X\rangle\) is also finite of size at most \(3n+3\). We now use the previous results to show that the AP fails in the class of chains in \(\mathsf{GL}_{2}\). **Theorem 6.11**.: _The amalgamation property fails for the class of \(\mathsf{GL}_{2}\)-chains._ Proof.: Let \(\mathbf{A}\) be the \(3\)-element Godel algebra where \(A=\{0<a<1\}\), and also let \(\mathbf{B}\) and \(\mathbf{C}\) be the \(\mathsf{GL}_{2}\)-chains specified by the following ordered blockings \(B=\{\{0\}<\{a<b\}<\{1\}\}\) and \(C=\{\{0\}<\{a\}<\{c\}<\{1\}\}\); so \(b^{2}=a\), and the other elements are idempotent. Note that \(\mathbf{A}\) is a common subalgebra of \(\mathbf{B}\) and \(\mathbf{C}\), even though \(A\) does not contain the top of \(B(a)=\{a<b\}\), which is \(b\). Let \(\mathbf{D}\) be a \(\mathsf{GL}_{2}\)-chain that is an amalgam of this \(V\)-formation. Since \(a<c\) in \(\mathbf{C}\), the same is true in \(\mathbf{D}\). Since \(\mathbf{D}\) is a \(\mathsf{GL}_{2}\)-chain, we have that \(c\to a\) is the top of \(D(a)\). However, since \(b^{2}=a\), we have \(b\in D(a)\), hence \(b\leq c\to a\) in \(\mathbf{D}\). Since \(c\to a=a\) in \(\mathbf{C}\), this yields \(b\leq a\), a contradiction. Interestingly, we can characterize exactly when the AP fails for \(\mathsf{GL}_{2}\)-chains. 
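For concreteness, the operations of the chain \(\mathbf{B}\) above can be tabulated explicitly (the nontrivial entries are given by Lemma 6.7); writing the entry in row \(x\) and column \(y\) for \(x\cdot y\) and \(x\to y\), respectively:

\[\begin{array}{c|cccc}\cdot&0&a&b&1\\\hline 0&0&0&0&0\\ a&0&a&a&a\\ b&0&a&a&b\\ 1&0&a&b&1\end{array}\qquad\qquad\begin{array}{c|cccc}\to&0&a&b&1\\\hline 0&1&1&1&1\\ a&0&1&1&1\\ b&0&b&1&1\\ 1&0&a&b&1\end{array}\]

while \(\mathbf{C}\) is the \(4\)-element Godel chain on \(\{0<a<c<1\}\), so \(x\cdot y=x\wedge y\) and \(x\to y=y\) whenever \(x\not\leq y\). Restricting either algebra to \(\{0,a,1\}\) recovers \(\mathbf{A}\), and in any common \(\mathsf{GL}_{2}\)-chain extension one would need \(b\leq c\to a=a\), which is exactly the clash used in the proof above.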
**Proposition 6.12**.: _A \(V\)-formation \(\mathbf{A}\), \(\mathbf{B}\), \(\mathbf{C}\) of \(\mathsf{GL}_{2}\)-chains (WLOG we assume that \(A\subseteq B,C\)) fails to have a \(\mathsf{GL}_{2}\)-chain amalgam iff \(\mathbf{A}\) is a Godel algebra and there is \(a\in A\) such that \(B(a)\) is not singleton, \(C(a)\) is a singleton, and \(C(a)\) is not the maximum nontrivial block of \(\mathbf{C}\) (or the same with \(\mathbf{B}\) and \(\mathbf{C}\) swapped)._ Proof.: The right-to-left direction follows from the same argument in the proof of Theorem 6.11. We prove the other direction. In case \(\mathbf{A}\) has some non-idempotent element, then it has a coatom and hence so do \(\mathbf{B}\) and \(\mathbf{C}\), and the coatom of \(\mathbf{A}\) coincides with his copy in \(\mathbf{B}\) and also in \(\mathbf{C}\). Also, if \(\{\mathbf{A}_{i}\}_{i\in I}\), \(\{\mathbf{B}_{j}\}_{j\in J}\), and \(\{\mathbf{C}_{k}\}_{k\in K}\) are the associated decompositions, then \(I\subseteq J,K\) and for \(a\in A\), the chains \(A(a),B(a),C(a)\) share the same top and bottom. We may assume that \(I=J\cap K\), and we take \(J\cup K\) as the index set for \(\mathbf{D}\); the order on \(J\cup K\) is any amalgam of the chain \(V\)-formation given by \(I,J,K\). Then for \(i\in I\), we take \(D(i)\) to be any amalgam of the bounded chain \(V\)-formation given by \(A(i),B(i),C(i)\); for \(j\in J-I\) we take \(D(j)=B(j)\) and for \(k\in K-I\) we take \(D(k)=C(k)\). Now assume that \(\mathbf{A}\) is a Godel algebra. For each \(a\in A\), we define \(D(a)=B(a)\cup C(a)\) and the rest of \(\mathbf{D}\) is defined as above, except for one case. If both \(\mathbf{B}\) and \(\mathbf{C}\) have coatoms \(c_{B},c_{C}\) not in \(\mathbf{A}\), \(B(c)\) and \(C(c)\) are merged in the obvious way. It is easy to see that \(\mathbf{B}\) and \(\mathbf{C}\) are subalgebras of \(\mathbf{D}\) with respect to multiplication. Implication could create a problem by producing the top of \(B(a)\) and of \(C(a)\), as they could be different elements. This can happen only when one of them is a singleton, say \(C(a)\), and the other is not, say \(B(a)\). Also, this can happen only of the implication is of the form \(d\to x\), where \(x\in D(a)\), \(x<d<1\), \(d\not\in C\) and \(D(x)\neq D(d)\). But this is impossible by the assumption. Using the work in [12], we can actually prove that the AP fails for the variety \(\mathsf{GL}_{2}\). According to the authors a class of algebras \(\mathsf{K}\) is said to have the _one-sided amalgamation_ property, or the 1AP, if every V-formation \((\mathbf{A},\mathbf{B},\mathbf{C},i:A\to B,j:A\to C)\) in \(\mathsf{K}\) has a _1-amalgam_\((\mathbf{D},h:B\to D,k:C\to D)\) in \(\mathsf{K}\), i.e., \(\mathbf{D}\in\mathsf{K}\), \(k\) is an embedding, \(h\) is a homomorphism, and \(h\circ i=k\circ j\). Notice that if \(h\) is an embedding we have the usual notion of amalgam. Given a variety \(\mathsf{V}\), let \(\mathsf{V}_{FSI}\) the class of finitely subdirectly irreducible members of \(\mathsf{V}\). In [12, Theorem 3.4], the authors show that if a variety \(\mathsf{V}\) has the congruence extension property and \(\mathsf{V}_{FSI}\) is closed under subalgebras, then \(\mathsf{V}\) has the AP if and only if \(\mathsf{V}_{FSI}\) has the 1AP. Since \(\mathsf{GL}_{2}\) is congruence distributive, and the finitely subdirectly irreducibles are exactly the nontrivial chains (which are clearly closed under subalgebras), we can apply the mentioned result. 
**Proposition 6.13**.: _The amalgamation property fails for \(\mathsf{GL}_{2}\)._ Proof.: Consider again the V-formation \((\mathbf{A},\mathbf{B},\mathbf{C},i,j)\) in the proof of Theorem 6.11, where \(i,j\) are the inclusion maps and note that all three algebras are FSI. We show that every 1-amalgam for it is an actual amalgam. Since we have shown that it has no amalgams, via [12, Theorem 3.4] this concludes the proof. In particular, we prove that if \((\mathbf{D},h:B\to D,k:C\to D)\) is a 1-amalgam, then \(h\) is necessarily injective. Notice that the composition \(h\circ i=k\circ j\) is injective, since both \(j\) and \(k\) are injective. Thus, no elements of \(\mathbf{A}\) are collapsed by \(h\circ i\). In particular, \(h\) does not collapse \(i(1)\) and \(i(a)\), i.e., \(i(a)\notin\ker(h)\). Therefore, also \(i(b)\notin\ker(h)\), since otherwise \(i(b)^{2}=i(b^{2})=i(a)\) would be in the kernel as well, and it is not. We conclude that the kernel of \(h\) is trivial, and hence, \(h\) is injective. We remark that if we modify \(\mathsf{GL}_{2}\)-algebras to be expansions of IRLs with a new constant \(c\), and be such that for the finitely subdirectly irreducible members they satisfy \(x=1\) or \(x\leq c\), then the subalgebras will also contain \(c\) and amalgamation will hold for all such \(\mathsf{GL}_{2}\)-chains. Then by Theorem 49 in [23], which allows one to extend the amalgamation property from the FSI members to the whole variety, the amalgamation property extends to the variety of all modified \(\mathsf{GL}_{2}\)-algebras. ## Funding This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 890616 awarded to Ugolini.
2309.13096
Econometric Model Using Arbitrage Pricing Theory and Quantile Regression to Estimate the Risk Factors Driving Crude Oil Returns
This work adopts a novel approach to determine the risk and return of crude oil stocks by employing Arbitrage Pricing Theory (APT) and Quantile Regression (QR). The APT identifies the underlying risk factors likely to impact crude oil returns. Subsequently, QR estimates the relationship between the factors and the returns across different quantiles of the distribution. The West Texas Intermediate (WTI) crude oil price is used in this study as a benchmark for crude oil prices. WTI price fluctuations can have a significant impact on the performance of crude oil stocks and, subsequently, the global economy. To determine the proposed model's stability, various statistical measures are used in this study. The results show that changes in WTI returns can have varying effects depending on market conditions and levels of volatility. The study highlights the impact of structural discontinuities on returns, which can be caused by changes in the global economy and the demand for crude oil. The inclusion of pandemic, geopolitical, and inflation-related explanatory variables adds uniqueness to this study as it considers current global events that can affect crude oil returns. Findings show that the key factors that pose major risks to returns are industrial production, inflation, the global price of energy, the shape of the yield curve, and global economic policy uncertainty. This implies that while making investing decisions in WTI futures, investors should pay particular attention to these elements.
Sarit Maitra, Vivek Mishra, Sukanya Kundu, Manav Chopra
2023-09-22T13:34:49Z
http://arxiv.org/abs/2309.13096v2
Econometric Model Using Arbitrage Pricing Theory and Quantile Regression to Estimate the Risk Factors Driving Crude Oil Returns ###### Abstract This work adopts a novel approach to determine the risk and return of crude oil stocks by employing Arbitrage Pricing Theory (APT) and Quantile Regression (QR). The APT identifies the underlying risk factors likely to impact crude oil returns. Subsequently, QR estimates the relationship between the factors and the returns across different quantiles of the distribution. The West Texas Intermediate (WTI) crude oil price is used in this study as a benchmark for crude oil prices. WTI's price fluctuations can have a significant impact on the performance of crude oil stocks and, subsequently, the global economy. To determine the proposed model's stability, various statistical measures are used in this study. The results show that changes in WTI returns can have varying effects depending on market conditions and levels of volatility. The study highlights the impact of structural discontinuities on returns, which can be caused by changes in the global economy and the demand for crude oil. The inclusion of pandemic, geopolitical, and inflation-related explanatory variables add uniqueness to the study as it considers current global events that can affect crude oil returns. Findings show that the key factors that pose major risks to returns are industrial production, inflation, the global price of energy, the shape of the yield curve, and global economic policy uncertainty. This implies that while making investing decisions in WTI futures, investors should pay particular attention to these elements. arbitrage pricing theory; crude oil; econometric model; quantile-regression; risk return; statistical methods; Manuscript received 15 Oct. 2020; revised 29 Jan. 2021; accepted 2 Feb. 2021. Date of publication 17 Feb. 2021. International Journal on Informatics Visualization is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. ## 1 Introduction International crude oil prices have significantly fluctuated in recent years, largely due to factors such as global economic conditions, technological advancements, political instability, and natural disasters. Despite hundreds of oil production locations, only a few crude oil benchmarks are used for oil pricing: WTI1 and Brent. The prices of these benchmarks have played a significant role in the variations in prices. This study aims to provide a new way of analyzing the risk and return of crude oil stocks in an uncertain market by combining market fundamentals and economic factors. The literature on multifactor analysis for oil returns is limited, as most research has focused on univariate correlations between oil prices and a single factor. The study uses a multi-factor QR model combined with APT to provide a more comprehensive understanding of oil prices. Footnote 1: WTI is a blend of several domestic crude streams in the United States, with its main trading location in Cushing, Oklahoma. Brent crude oil encompasses four crude streams pumped in the North Sea. The use of APT and QR to calculate the risk and return on crude oil stocks is an intriguing and profitable technique. APT is a multifactor model, which means it considers a variety of risk factors that affect any asset's returns. 
The fundamental tenet of APT is that an asset's expected return is a linear function of how exposed it is to different risk factors plus a particular risk premium (idiosyncratic risk) that is unique to that asset. According to the hypothesis, investors will constantly try to take advantage of arbitrage possibilities to correct market inaccuracies and create equilibrium situations. The existence of transient and short-lived arbitrage opportunities in financial markets is not denied by even the Efficient Market Hypothesis. The occurrence of arbitrage opportunities is a critical mechanism that contributes to market efficiency. The APT strategy aims to identify risk factors influencing WTI returns and use them to explain predicted returns. By considering multiple factors simultaneously, it aims to uncover hidden relationships and interactions. QR is a distribution analysis technique used to estimate the conditional distribution of data at different quantiles. While the individual methods are not new, their combination and application to crude oil stocks in conjunction with each other is a novel approach. Combining these methods with crude oil stocks is a novel approach. Descriptive statistics show relevant distributions are leptokurtic, justifying quantile regression to detect herding bias in tails. Herding bias refers to investors imitating others' actions without independent analysis. The two main contributions to this paper are: * implementation of APT to identify the underlying risk factors driving ROC, * application of QR to estimate the effect of these risk factors on different segments of the distribution of WTI returns. QR provides a deeper insight into the nature of the relationship between the factors and the returns. In a recent study, Zhao et al. (2023), emphasized the importance of investor sentiments, taking a clue from behavioral finance (Aloui et al., 2020). As a result, a multiple-factor study considering the current state of the oil industry is a crucial research agenda item. The results show distinctive and original aspects of the impact, such as industrial production (PROD), INFLATION, global price of energy (GPE), and global economic policy uncertainty (GEPU), and they have several implications for investors and decision-makers to reduce investment risks ## 2 Previous work We find the studies on the relationship between multiple-factors and the ROC (return on crude oil) are limited, which indicates a potential gap in the literature. However, there is a growing trend of academic research on macroeconomic factors impacting ROC (e.g., McMillan et al., 2021; Nayar, 2020; Hamdi et al., 2019; Salisu et al., 2019, etc.). Additionally, other aspects have been studied by researchers, such as the relationship between the pandemic and crude oil (Liu et al., 2020; Prabheesh et al., 2020, etc.) and geopolitical unpredictability (Wei et al., 2019; Alqahtani et al., 2020). In recent academic works, researchers have studied numerous risk factors to evaluate WTI returns such as: * Macroeconomic indicators: GDP, inflation, interest rates, and consumer sentiment are some of the macroeconomic variables that have been found to be significant predictors of WTI returns (McMillan et al., 2021; Mokni, 2020; Nayar, 2020). * Geopolitical events: Political events like wars, terrorist attacks, and geopolitical conflicts significantly influence oil prices (McMillan et al., 2021; Shahzad et al., 2021; Mahmoudi and Ghaneei, 2022; Wei et al., 2019; Alqahtani et al., 2020). 
* Financial market indicators: It has also been discovered that factors including stock market indices, volatility, and credit spreads can accurately predict oil prices (McMillan et al., 2021; Shahzad et al., 2021). * Energy policies: Government policies related to energy production, consumption, and conservation can also impact oil prices (McMillan et al., 2021). These studies provide insights into the various factors that have been considered to estimate WTI returns during the last decade, such as, economic indicators, geopolitical events, exchange rates, financial market indicators, and energy policies. Some of the most researched factors have been: * US Treasury Spread: Several research have discovered a considerable effect of the US Treasury yield spread on crude oil prices (e.g., Dai and Kang, 2021; Ferrer et al., 2018 etc.). While other research (such as Guo et al., 2021) suggest that a larger spread causes higher oil prices, Wang et al. (2023) discovered that the impact is time varying. * Global economic policy uncertainty: There has been substantial progress in our understanding of how uncertainty in oil prices, economic policy, and overall economic activity are related. According to research by Shahzad et al. 2019, Herrera et al. 2019, Adekoya et al. 2022, etc., there is a correlation between the volatility of the oil price and the unpredictability of global economic policy. * Inflation: Inflation can impact oil prices by affecting the demand for oil as well as the cost of production (Husaini and Lean, 2021; Kose and Unal, 2021, etc.). * Industrial production: Changes in industrial production can affect the demand for oil, as industrial processes often rely on oil as an input (Singhal et al., 2019; Wei et al., 2019; Herrera et al., 2019). * Currency fluctuation with the euro: The study by Malik and Umar (2019) finds that changes in exchange rate volatility are not explained by changes in oil prices. However, they have discovered a strong link between the volatility of currency rates. Salisu et al. (2022) assert that fluctuations in the value of the US dollar are directly influenced by changes in the price of oil. * Narrow money supply: As changes in the money supply influence total economic activity, they can also influence the demand for oil. According to Lee et al. (2019), there is evidence of a correlation between the volatility of the oil price and the unpredictability of economic policy. * Unemployment rate: Empirical results indicate a dynamic causal link between unemployment and ROC (Wang et al., 2022). Chan and Dong (2022) found that an unanticipated rise in oil price volatility causes the jobless rate to persistently rise. * VIX: Chen (2022) reveals that investment horizons impact EPU, VIX, and GPR's influence on oil stock movement. VIX is the most significant uncertainty measure in developed markets, while Brazil, India, GPR, and EPU are vulnerable in emerging markets. WTI returns were also significantly impacted by the COVID-19 epidemic, with prices collapsing as demand fell as a result of travel bans and lockdowns. Oil prices and COVID-19 instances have a negative link, claim Chua and Yang (2021). A growing body of research (e.g., Salisu et al., 2019, Pan et al., 2017, Le and Chang, 2013) points to a non-linear connection between oil prices and economies despite the studies' primary emphasis being on linear models. 
Nonlinearities, in accordance with Beckmann & Czudaj (2013), may be brought on by major external oil price shocks, discrete regime changes, or the inherently nonlinear structure of the data generation method (Alqaralleh, 2020). In the literature, there is no agreement on the most effective approaches for multi-factor analysis and diagnostic testing of WTI excess returns. Despite numerous studies on the relationship between oil prices and macroeconomic factors, the literature on the risk and return characteristics of crude oil assets with respect to these variables is still lacking. While earlier research has focused on the relationship between oil prices and macroeconomic indicators (e.g., McMillan et al., 2021; Nayar, 2020; Hamdi et al., 2019, etc.), it has not adequately investigated the implications of these findings on the risk and return characteristics of crude oil stocks. Hence, the above review provides a clear argument for more research into the relationship between several parameters and crude oil returns. While there has been some research on individual factors such as macroeconomic indicators, geopolitical events, financial market indicators, energy policies, and the impact of COVID-19 on oil prices, there is a gap in the literature regarding the comprehensive analysis of all these factors in a single crude oil return model. Table 1 displays a bibliometric report which analyzes the citation patterns and impact of scholarly articles in the field. The citation analysis indicates the influence and popularity of the studies in the field.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline
**Sl. No** & **Authors** & **Journals** & **Citations** \\
\hline
1 & Ferrer et al. (2018) & Elsevier (Energy Economics), https://doi.org/10.1016/j.eneco.2018.09.022 & 350 \\
2 & Singhal et al. (2019) & Elsevier (Resources Policy), https://doi.org/10.1016/j.resourpol.2019.01.004 & 232 \\
-- & Prabheesh et al. (2020) & Energy Research Letters, 1(2), https://doi.org/10.46557/001c.13745 & -- \\
3 & Pan et al. (2017) & Elsevier (Journal of Empirical Finance), https://doi.org/10.1016/j.jempfin.2017.06.005 & -- \\
4 & Beckmann \& Czudaj (2013) & Elsevier (International Review of Economics \& Finance), https://doi.org/10.1016/j.iref.2012.12.002 & -- \\
5 & Le \& Chang (2013) & Elsevier (Energy Economics), https://doi.org/10.1016/j.eneco.2012.12.002 & 153 \\
6 & Wei et al. (2019) & Elsevier (Finance Research Letters), https://doi.org/10.1016/j.frf.2019.03.028 & 145 \\
7 & Herrera et al. (2019) & Elsevier (Energy Policy), https://doi.org/10.1016/j.enpol.2019.02.011 & 131 \\
8 & Malik and Umar (2019) & Elsevier (Energy Economics), https://doi.org/10.1016/j.eneco.2019.104501 & 126 \\
9 & Hamdi et al. (2019) & Elsevier (Energy Economics), https://doi.org/10.1016/j.eneco.2018.12.021 & 123 \\
10 & Salisu et al. (2019) & Elsevier (Economic Modelling), https://doi.org/10.1016/j.econmod.2018.07.029 & 103 \\
11 & Shahzad et al. (2021) & Elsevier (International Review of Financial Analysis), https://doi.org/10.1016/j.irfa.2021.101754 & -- \\
12 & Alqahtani et al. (2020) & Elsevier (Economic Analysis and Policy), https://doi.org/10.1016/j.eap.2020.09.017 & 53 \\
13 & Mokni (2020) & Elsevier (Energy), https://doi.org/10.1016/j.energy.2020.118639 & 31 \\
14 & Alqaralleh (2020) & Taylor \& Francis (Journal of Applied Economics), https://doi.org/10.1080/15140326.201 & 30 \\
15 & Köse \& Unal (2021) & Elsevier (Energy), https://doi.org/10.1016/j.energy.2021.120392 & 29 \\
16 & Husaini \& Lean (2021) & Elsevier (Resources Policy), https://doi.org/10.1016/j.resourpol.2021.102175 & 24 \\
17 & Dai and Kang (2021) & Elsevier (Energy Economics) & -- \\
18 & Wang et al. (2022) & Elsevier (Energy), https://doi.org/10.1016/j.energy.2022.124107 & 16 \\
19 & Mahmoudi and Ghaneei (2022) & Emerald Publishing Limited (Studies in Economics and Finance) & -- \\
20 & McMillan et al. (2021) & Elsevier (Energy Economics), https://doi.org/10.1016/j.eneco.2021.1 & 12 \\
21 & Chan \& Dong (2022) & Elsevier (Economic Modelling) & -- \\
22 & Salisu et al. (2022) & Elsevier (Energy Economics) & -- \\
\hline \hline
\end{tabular}
\end{table}

The literature review presents a compelling argument for the significance of conducting research on various factors and their influence on the risk and returns associated with crude oil.
It underscores the imperative need for additional scholarly inquiry to deepen our comprehension of the intricate interaction between these factors and crude oil dynamics. The review identifies critical research areas that warrant investigation, thereby paving the way for future academic endeavors to fill the existing gaps in knowledge and contribute to the advancement of this field of study ## 3 Model and Econometric Approach The APT model provides a framework for understanding asset pricing based on the systematic risk factors that influence asset returns. The equilibrium asset pricing equation according to the APT model is: \[E(R_{l})\ =\ Rf\ +\ \beta_{1i}\ *\ f_{1}\ +\ \beta_{2i}\ *\] \[\ f_{2}\ +\ \cdots\ +\ \beta_{kl}\ *\ f_{k} \tag{1}\] Where, \(E(R_{l})\) is the expected return of asset \(i\), \(Rf\) is the risk-free rate of return, \(\beta_{1i},\beta_{2i},\ldots,\beta_{kl}\) are the sensitivity coefficients of WTI \(i\) to the \(k\) systematic risk factors (\(f_{1}\), \(f_{2}\),..., \(f_{k}\)), which represent different sources of risk in the economy. The \(\beta\)s are estimated by using linear regression. This is calculated using QR with an estimate of the conditional median (0.5 quantile), and the model's adequacy was checked using various regression diagnostic tests. The coefficients of the regression represent the sensitivities of the asset to each factor, while the intercept term represents the risk-free rate of return. ### _Data source and variables_ The study used five years of monthly data from the FRED Economic Database to estimate betas, using yield data for crude oil, specifically the WTI crude oil spot price, to provide insights. The S&P 500 index, represented by the SPY, was chosen as the benchmark for the US equity market and a symbol of financial stability and US economy health. Table 2 presents selected variables, macroeconomic indicators, and other relevant data sources for the research question. The US Treasury bill rate is viewed as a risk-free interest rate due to its empirical relevance and theoretical basis in explaining crude oil returns. Here are some reasons why these factors have been chosen: * SPREAD: The yield spread is a measure of risk and investor sentiment, reflecting market expectations for future economic conditions. It can be used to capture market sentiment and its impact on crude oil returns, as oil prices are sensitive to economic changes. The US Treasury spread is particularly useful in this context. * GEPU: The study explores the correlation between oil price volatility, economic policy uncertainty, and crude oil returns, highlighting the significant impact of these factors on investment decisions, global economic growth, and geopolitical stability. * INFLATION: Inflation affects the purchasing power of consumers and can impact the demand for oil. Higher inflation may increase production costs and reduce consumer spending, potentially affecting oil prices. We have explored the impact of inflation on crude oil returns by including it as a factor. * PROD: Changes in industrial production can reflect overall economic activity and demand for oil. Industries heavily reliant on oil as an input may experience fluctuations in production levels, which can, in turn, influence oil prices. Understanding the relationship between industrial production and crude oil returns requires taking it into account as a component. * CCU: Exchange rate fluctuations, particularly with major currencies like the euro, can affect oil prices. 
This can impact the affordability of oil for different countries and influence demand. Including currency fluctuation as a factor allowed us to explore the relationship between exchange rates and crude oil returns. * M1SL: Changes in the money supply can have an impact on overall economic activity, which, in turn, can affect the demand for oil. By considering the narrow money supply, we have examined its relationship with crude oil returns and assessed its influence on oil market dynamics. * UNRATE: This reflects labor market conditions and can indicate the overall health of the economy. High unemployment may affect consumer spending and demand for oil. Incorporating the unemployment rate as a factor helped in comprehending its relationship with crude oil returns. * VIX: The VIX index measures market volatility and investor sentiment. High levels of market volatility can impact oil prices as they affect investor risk appetite and their investment decisions. We have investigated the VIX's impact on crude oil returns by including it as a factor in the analysis. We can classify the variables into two categories: market fundamentals (INDPROD, M1SL, CCU, SP, GPR, GPE, WUPI, and GEPU) and economic indicators (DGS3MO, DGS5, CPIAUCSL, UNRATE, and VIX).

### _Econometric approach_

This work is divided into two stages: the first stage examines the excess return over time, and the second stage analyses the excess return's cross-section components. Our study's major presumptions are that markets are efficient, events cannot be predicted, and time is affected exogenously. Eq. (1) can be extended to Eq. (2) to discuss the excess return on WTI:

\[R_{wti}=\alpha_{1}+\beta_{M}R_{Mt}+\beta_{1}SPREAD_{t}+\beta_{2}INDPRO_{t}+\cdots+\varepsilon_{t}\tag{2}\]

where the remaining terms follow the same pattern for the other explanatory variables listed above and \(\varepsilon_{t}\) is the error term.

Turning to the descriptive statistics of the series, the standard deviation measures how far returns have moved in the past (either positive or negative) away from the average returns for the investment. Despite a positive mean reflecting favorable results on the average return for investors, the negative skewness (Fig. 1) shows that more negative data is concentrated on the mean value. According to the large standard deviation values associated with various variables, the pandemic crisis of 2020-21 and the post-crisis period make up over half of the data set. At the 5% significance level, the JB test statistics reject the null hypothesis (\(H_{0}\)) of a normal distribution for all series. Considering the minimum values, the lowest in this range is UNRATE, with a minimum value of -131.38. GEPU is much more dispersed than other variables, with a standard deviation of 43.40; closely following this are the GPR with 30.93, UNRATE with 21.29, and MONEY with 20.32. Skewness is negative for SP, CURRENCY, MONEY, UNRATE, INFLATION, GPR, and SPREAD, but positive for INDPRO, PANDEMIC, GPE, VIX, and GEPU. Most of these factors show excess kurtosis. To develop a new coordinate system and align it with the largest variation in the data, Principal Component Analysis (PCA) was carried out. The results are displayed in the next section. Value at Risk (VaR) is estimated (Table 3) on simple returns, which represent the worst-case loss associated with probabilities, and CVaR is estimated by averaging the severe losses in the tail of the distribution of WTI returns. The quantile normalization procedure was used to modify the raw data to preserve the true variance that we were interested in while removing any unwanted variation induced by technological artefacts. The normalized box plot of the dependent variables is shown in Fig. 2.
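A minimal sketch of how the baseline fit of Eq. (2) and the tail-risk measures in Table 3 could be reproduced with `pandas` and `statsmodels` is given below; the file name, the data frame `df`, and all column names are illustrative placeholders rather than the paper's actual series, and the exact set of differenced regressors is an assumption based on the variable lists above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Monthly observations assembled from FRED; file and column names are placeholders.
df = pd.read_csv("factors_monthly.csv", index_col=0, parse_dates=True)

# Excess returns over the risk-free rate (3-month T-bill, expressed at a monthly rate).
df["er_wti"] = df["wti_ret"] - df["rf"]
df["er_sp"] = df["sp_ret"] - df["rf"]

# Non-stationary factors enter in first differences (the 'D' prefix in the paper);
# which series are differenced varies slightly across the paper's tables.
for col in ["spread", "indpro", "currency", "money", "unrate",
            "wupi", "gpe", "gepu", "vix", "gpr"]:
    df["d_" + col] = df[col].diff()

factors = ["er_sp", "d_spread", "d_indpro", "d_currency", "d_money", "d_unrate",
           "inflation", "d_wupi", "d_gpe", "d_gepu", "d_vix", "d_gpr"]
formula = "er_wti ~ " + " + ".join(factors)

# Baseline estimate: conditional-median (tau = 0.5) quantile regression of Eq. (2).
median_fit = smf.quantreg(formula, df.dropna()).fit(q=0.5)
print(median_fit.summary())

# Historical-simulation VaR / CVaR of simple WTI returns at the 5% level
# (the paper does not specify the estimation method; this is one common choice).
var_5 = df["wti_ret"].quantile(0.05)
cvar_5 = df["wti_ret"][df["wti_ret"] <= var_5].mean()
```

Because quantile regression minimizes an asymmetric check loss rather than squared errors, the \(\tau=0.5\) fit is a median regression and is comparatively robust to the heavy tails and outliers noted in the descriptive statistics.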
The proportion of eigenvalues attributed to each component is shown in Fig. 3. This indicates the importance of each component for the analysis.

## 5 Multifactor Quantiles Estimates

Table 4 reports the regression estimation (\(Qn0.5\)) based on Eq. 4. The diagnostic tests were performed on the conditional median quantile, which has been treated here as the estimation result for the baseline regression. The asymmetry in the model can be seen by comparing the coefficients of various quantiles. A few parameter estimates, e.g., those on the "\(Dcurrency\)", "\(DMoney\)", "\(inflation\)", "\(wupi\)", "\(Dgpr\)", and "\(Spread\)" variables, are not statistically distinct from zero (hypotheses: \(Dcurrency=DMoney=inflation=wupi=Dgpr=Spread=0\)). A 'D' prefix indicates that the relevant series was differenced to stabilize it. Using an F-test, we tested \(H_{0}\) that the parameters on these six variables are all zero. The resulting F-test statistic value is 2.98, with a p-value of 0.013, indicating that the regression model fits the data better than the model with no independent variables. This result is promising because it demonstrates that the independent variables in our model improve the model's fit. Heteroscedasticity was assessed using the Breusch-Pagan test. Table 5 reports the test results; the p-values are \(<0.05\), indicating a fundamental problem with heteroscedastic errors. Fig. 6 displays the residual vs. prediction error plot, though no clear pattern is visible; the Jarque-Bera (JB) normality test was also performed to check the correctness of our assumptions. According to Fig. 6, the pandemic caused an early decline in prices throughout 2020-21, followed by a steep rise as producers reduced supply and demand soared. The no-autocorrelation assumption appears satisfied, as the Durbin-Watson (DW) test result of 1.98 indicates no first-order autocorrelation. However, the Breusch-Godfrey (BG) test was employed too, which identifies autocorrelation up to any predetermined order p. The null hypothesis (\(H_{0}\)) of BG is that there is no serial correlation of any order up to p. Table 6 displays the test statistic \(\chi^{2}=36.52\) and a p-value of 0.000, indicating that we can reject \(H_{0}\) and conclude that autocorrelation exists among the residuals at some order less than or equal to 6 lags. We tested 12, 24, 48, and 50 lags and found a p-value \(>0.05\) at lag 50, where \(H_{0}\) cannot be rejected. Given the seasonal correlation, we considered adding seasonal dummy variables to the model. Following that, a normality test was run on the residuals, with the premise that the model's residuals are normally distributed. The histogram plot (Fig. 4) shows that the distribution of the residuals roughly resembles a bell shape, although there are a few large outliers that could lead to a significant skewness. To further check the normality assumption, we examined the QQ plot displayed in Fig. 5, followed by statistical tests (Table 7); the QQ plot indicates a non-normal residual distribution. Fig. 6 displays the regression residuals and fitted series. Numerous significant outliers can be seen in the graph, but the largest one is in 2020. Table 8 displays the values of the residuals studied to determine the precise dates when the largest outliers were realized. It is evident that the two most extreme residuals were in April'20 (-15.24) and May'20 (16.59).
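Continuing the sketch above (and reusing `df`, `formula`, and `median_fit`), the diagnostic battery reported in Tables 5-8 is available in `statsmodels`. Because the Breusch-Pagan and Breusch-Godfrey helpers expect a least-squares results object, the residual diagnostics below are run on an auxiliary OLS fit of the same specification; this is an assumption about implementation, not a detail stated in the paper.

```python
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_breuschpagan, acorr_breusch_godfrey
from statsmodels.stats.stattools import durbin_watson, jarque_bera

# Joint restriction that the six weakest coefficients are all zero (F-test).
restriction = ("d_currency = 0, d_money = 0, inflation = 0, "
               "d_wupi = 0, d_gpr = 0, d_spread = 0")
print(median_fit.f_test(restriction))

# Auxiliary OLS fit with the same regressors, used only for the residual diagnostics.
ols_fit = smf.ols(formula, df.dropna()).fit()

lm_stat, lm_pval, _, _ = het_breuschpagan(ols_fit.resid, ols_fit.model.exog)
print("Breusch-Pagan LM:", lm_stat, "p-value:", lm_pval)

print("Durbin-Watson:", durbin_watson(ols_fit.resid))

bg_stat, bg_pval, _, _ = acorr_breusch_godfrey(ols_fit, nlags=6)
print("Breusch-Godfrey (6 lags):", bg_stat, "p-value:", bg_pval)

jb_stat, jb_pval, skew, kurt = jarque_bera(ols_fit.resid)
print("Jarque-Bera:", jb_stat, "p-value:", jb_pval)
```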
These residuals represent unique or critical events, outliers, or anomalies in the data that have a big impact on WTI returns. The inclusion of dummy variables for these residuals allows the model to adjust and account for these influential observations properly. Due to the perfect fit of the dummy variables to the two extremely outlying observations, the rerun of the regression along with the dummy variables significantly increased the pseudo \(R^{2}\) value from 0.58 to 0.72. Appendix II reports the estimates of the QR. The distributions were divided into four different quantiles (i.e., \(\tau=0.25,0.50,0.75,\) and \(0.90\)) to get a mixed variety of low, medium, and high return conditions. Fig. 7 displays the diagnostic plot, where it can be observed that the errors follow a normal distribution. This has effectively established a baseline model to estimate the effect of the event on our target variable. Furthermore, the RESET (Ramsey Regression Equation Specification Error Test) was used to check for omitted variables and an inappropriate functional form. An F-value of 0.248 and a corresponding p-value of 0.620 from the data show that we cannot rule out \(H_{0}\) that the model contains no omitted variables. To ascertain whether there is a structural break in the data at any given moment, the CUSUM test (Ploberger and Kramer, 1992) for parameter stability based on OLS residuals was carried out. Table 10 presents the cumulative total and cumulative sum of squares of recursive residuals to test the structural stability of the models. The absence of any structural breaks is the null hypothesis. The test statistic and associated p-value (0.90) suggest that \(H_{0}\) cannot be rejected, and the coefficients are stable over time; this confirms that the model does not have a structural break for any possible break date in the sample.

### _Causality analysis:_

Causal Impact Analysis reduces the noise and provides real statistical insight, which gives confidence in the conclusions drawn from the intervention. The average value of the response variable is 1.36. If the intervention had not occurred, it was expected that the average response would have been 3.21. The response variable had an overall value of 43.6 when the post-intervention period's individual data points were added together. But if the intervention had not happened, we would have anticipated a total of 116.77 in absolute terms, with a confidence interval of [80.29, 154.44]. With an upper and lower bound of [-94.96, -31.46], the response variable showed a relative decline of -62.7%. This demonstrates that the detrimental impact seen during the intervention period is statistically significant. Fig. 8 displays the causal impact analysis plot. The Bayesian one-sided tail-area probability of getting this result by chance is exceedingly low (\(p=0.0\)). This indicates that the causal effect is statistically significant.

Fig. 6: Regression Residuals and Fitted Series. Fig. 7: Residuals diagnostics. Fig. 8: Causal Impact plot.

## 6 Empirical results & discussions

The quantile analysis found the following intriguing trends: the fact that PROD, INFLATION, GPE, and GEPU have a positive and significant impact on the ROC at both the 25% and 50% levels suggests that the relationship is robust and not just limited to a particular quantile level. This implies that when the market is bullish, these variables have a substantial impact on the return on the asset, and investors need to take these factors into consideration when making investment decisions.
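The outlier dummies and the quantile estimates reported in Appendix II can be scripted along the same lines; the dummy construction below assumes month-start timestamps in the index, and the column names remain placeholders. The intervention analysis in the causality subsection follows the Bayesian structural time-series approach of the R `CausalImpact` package (Python ports such as `pycausalimpact` offer a similar interface) and is not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Dummies for the two extreme pandemic months identified in Table 8
# (assuming month-start timestamps in the index).
df["D_Apr20"] = (df.index == pd.Timestamp("2020-04-01")).astype(int)
df["D_May20"] = (df.index == pd.Timestamp("2020-05-01")).astype(int)

formula_d = formula + " + D_Apr20 + D_May20"

# Re-estimate the factor model at the four quantiles reported in Appendix II.
taus = [0.25, 0.50, 0.75, 0.90]
fits = {tau: smf.quantreg(formula_d, df.dropna()).fit(q=tau) for tau in taus}

# Coefficients and pseudo R^2 side by side, for comparison across quantiles.
coef_table = pd.DataFrame({f"q={tau}": fit.params for tau, fit in fits.items()})
pseudo_r2 = {tau: fit.prsquared for tau, fit in fits.items()}
print(coef_table.round(3))
print(pseudo_r2)
```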
The intercept term appeared negative for the lower and median quantiles, which suggests that, on average, WTI returns are negative or below zero at these quantiles, even when the predictor variables are set to zero. This is primarily because of pandemic panic and supply chain disruption during the pandemic phase. Table 10 presents a complete discussion on each factor based on the QR analysis displayed in Table 9. \begin{tabular}{p{28.5pt} p{28.5pt}} \hline \hline \multicolumn{2}{c}{**Variable**} \\ \multicolumn{2}{c}{**s**} & \multicolumn{1}{c}{**Causality analysis**} \\ \hline \multicolumn{2}{c}{The negative estimate of the coefficient implies that at the 50th quantile, the SP return has a negative effect on the WTI return, whereas at other quantiles, there is no meaningful effect. This seems logical considering the specific combination of conditions that led to this link between the SP return and the WTI return at the 50th quantile during and after the pandemic. Wang et al. (2020) discovered a statistically significant positive connection at lower quantiles in their work. But Dutta & Raunak (2020) found a strong negative relationship between SP and WTI returns during the pandemic, which is in line with our findings. & \\ \hline \multicolumn{2}{c}{The coefficient estimates for INDPRO are significant across all quantiles of the WTI return distribution, suggesting the relationship with WTI returns is consistent across different levels of returns. This implies that a strong industrial sector is associated with a higher ROC. This is in line with Kalymbetova (2021), and Ratti & Vespignani (2016), who found a positive cointegrated relationship between the INDPRO and oil prices. & \\ \hline \multicolumn{2}{c}{Positive and statistically significant estimates at the median and 3rd quantile show that when the value of the US dollar goes up, WTI returns go up at the median and 3rd quantile of the distribution. Similar findings were reported by Olayeni et al. (2020), and Singhal et. al. (2019). The median and 3rd quantiles of our data set correspond to the height of a devastating pandemic supply chain disruption. One possible inference from this relationship is that changes in the value of the US dollar can affect the price of WTI, which in turn can have implications for the wider economy. & \\ \hline \hline \multicolumn{2}{c}{The effect of M1SL on WTI's return is strongest at the median level but weaker at other levels. This finding may have important implications for investment strategies. The investors may want to adjust their investment strategies, accordingly, depending on whether they expect WTI return to be below or above the median level. However, no evidence of a long-run relationship can be drawn from this. This finding supports a recently concluded study from Sorensen & Johansen (2021), who applied cointegration tests on US assets and the money supply. & \\ \hline \multicolumn{2}{c}{Several studies (e.g., Hammoudeh & Li 2012; Li & Li 2019; Nguyen & Sriananthakumar 2019) have looked at the link between the unemployment rate and the WTI return, and they have come to different conclusions about the size and direction of the link. & \\ \multicolumn{2}{c}{During the time span of our investigation, we found no statistically significant effect. 
However, additional research is required to completely comprehend the nature of this link and its operating processes, which is outside the scope of this work.} & \\ \hline \multicolumn{2}{c}{There is a strong and positive link between inflation and WTI return in the 1st and middle quantiles. Even though Kilian (2014) presented a complete analysis of the elements that contribute to oil price volatility, including inflation, and implied that the link between these variables can be influenced by a variety of supply and demand factors in the oil market, Wang et al. (2019) evaluated the relationship between the oil price and inflation and reported a positive and statistically significant relationship between both variables at specific quantiles.} & \\ \hline \multicolumn{2}{c}{The epidemic had no meaningful effect on the WTI return across all quantiles of our data. This suggests that, while the pandemic caused some volatility in the WTI price, it did not produce a continuous trend in either direction that would have had a significant impact on the WTI return. Our findings are consistent with those of Liu et al., 2020; Narayan (2020); and Zhang & Hamori, S. (2021), who found that the pandemic had no effect on WTI returns across all quantiles.} & \\ \hline \multicolumn{2}{c}{All the quantiles show favorable and significant outcomes of GPE. This demonstrates the relationship between the global energy price index and the WTI return, both of which are impacted by the dynamics of supply and demand, geopolitical events, and global economic conditions. Given the close relationship between the WTI return and the global price of energy index, this positive relationship is not unexpected} & \\ \hline \multicolumn{2}{c}{The study reveals that the influence of GEPU on WTI return is stronger in the lower and middle ranges of the WTI return distribution, but weaker in the upper range. This suggests that economic policy uncertainty can significantly affect the oil market during market volatility or stress, while its impact may be less pronounced during market stability or good performance. This is consistent with the findings of Li & Yang 2020.} & \\ \hline \multicolumn{2}{c}{VIX indicates a significant negative correlation between WTI return and volatility, indicating an inverse relationship between volatility and returns in financial markets. Increased volatility leads to risk-averse investors selling off riskier assets, potentially resulting in lower returns.} & \\ \hline \hline \end{tabular} Table 10 Empirical estimation The critical findings are summarized as: * the market return (erSP) has a negative effect on crude oil returns at the median and 90th quantiles, but not at the lower or higher quantiles. Production (dPROD), global economic policy uncertainty (dGPE), and the treasury yield curve (dSPREAD) all have positive effects on crude oil returns across all quantiles. * the money supply (dMONEY) has a large negative effect on crude oil returns at the 25th quantile. * dUNRATRE has a positive effect on WTI returns, but not significant at any of the quantiles (Qn 0.25, Qn 0.5, Qn 0.75, and Qn 0.9). This indicates that the unemployment rate may have some influence on WTI returns but does not reach statistical significance in this model. * dINFLATION has a considerable positive effect on crude oil returns at the 25th and 50th quantiles, but not at higher quantiles. It is possible that the correlation between the inflation rate and WTI returns is nonlinear. 
The inflation rate may have a greater effect on returns while they are lower (e.g., during recessions), but as returns rise (e.g., during expansions of the economy), its effect may become less pronounced or level out. * the VIX volatility index (dVIX) has a major negative impact on crude oil returns at the 25th, 50th and 90th quantile. Crude oil returns typically suffer negative effects at various levels of the WTI return distribution when market volatility and fear are high (as evidenced by a higher VIX). * other factors, such as currency exchange rates (dCURRENCY), the pandemic index (dWUPI), and the geo-political risk (dGPR), have mixed or minor effects on crude oil returns across quantiles. * the returns on crude oil at all quantiles are significantly impacted by the month dummies D_Apr20 and D_May20. The significance of variables suggests that these extreme deviations have a significant effect on the overall relationship between the predictors and WTI returns. The pseudo R2 values are high, indicating that the model fits the data well. Since economic theory does not say which parts or how many should be used in the study, there are many possible variables that could be considered. Our empirical findings have implications for portfolio design and risk management for investors. It also has significant implications for risk management decisions involving hedging and downside risk, given that the financial utility of oil varies depending on market conditions. Finally, our findings have implications for the forecasting of COP across quantiles based on macroeconomic and financial variables. Furthermore, changes in the several parameters considered for this study account for almost 2/3 of the monthly fluctuation in the excess returns ## 7 Conclusion The study used an asset pricing model that combined Arbitrage Pricing Theory (APT) and Quantile Regression (QR) to assess the risk-return relationship of WTI crude oil. To evaluate the risk-return connection of WTI crude oil, the model used multivariate risk components and market returns (SP 500). The report finds that market return, industrial production, global economic policy uncertainty, and the Treasury yield curve have significant positive effects on crude oil returns across all quantiles. The study reveals that the money supply, unemployment rate, inflation rate, and VIX volatility index have significant negative and positive effects on the returns of WTI at different quantiles. The combination of APT and QR provides a comprehensive understanding of the risk-return relationship of the WTI, capturing both linear and nonlinear relationships. The study found that the SP 500 market return is not a significant predictor of WTI returns, suggesting a weak or non-linear relationship. Other key factors, such as PROD, inflation, GPE, and GEPU, have a more significant impact on WTI returns. However, the analysis's time horizon may be too short to detect a significant relationship, as the relationship between the SP 500 return and the WTI return is influenced by longer-term economic or geopolitical factors. The results can help identify profitable investment opportunities and make strategic investment decisions. However, building a trustworthy empirical model requires iteration and is not a precise science.
2309.03285
Nodal topological superconductivity in nodal-line semimetals
We analyze possible nodal superconducting phases that emerge from a doped nodal-line semimetal. We show that nodal-line superconducting phases are favored by interactions mediated by short-range ferromagnetic fluctuations or Hund's coupling. It is found that the leading pairing channels are momentum-independent, orbital-singlet and spin-triplet. In the pairing state, we show that the Bogoliubov-de Gennes (BdG) Hamiltonian hosts a pair of topologically protected nodal rings on the equators of the torus Fermi surface (FS). Using a topological classification for gapless systems with inversion symmetry, we find that these nodal rings are topologically nontrivial and protected by integer-valued monopole charges $\nu = \pm 2$. In the scenario of pairing driven by ferromagnetic fluctuations, we analyze the fate of superconductivity in the magnetically ordered phase. Based on Ginzburg-Landau free energy analysis, we find the energetically favored superconducting state is characterized by the coexistence of two pairing orders whose $\bf d$-vectors are perpendicular to the magnetization axis $\bf M$ with their phases unfixed. In this case, each nodal loop in the pairing state splits into two, carrying a $\pm 1$ monopole charge. For bulk-boundary correspondence, these nodal rings enclose flat-band Majorana zero modes on top and bottom surface Brillouin Zones with distinct $\mathbb{Z}$-valued topological invariants.
Zhenfei Wu, Yuxuan Wang
2023-09-06T18:02:40Z
http://arxiv.org/abs/2309.03285v2
# Nodal topological superconductivity in nodal-line semimetals ###### Abstract We analyze possible nodal superconducting phases that emerge from a doped nodal-line semimetal. We show that nodal-line superconducting phases are favored by interactions mediated by short-range ferromagnetic fluctuations or Hund's coupling. It is found that the leading pairing channels are momentum-independent, orbital-singlet and spin-triplet. In the pairing state, we show that the Bogoliubov-de Gennes (BdG) Hamiltonian hosts a pair of topologically protected nodal rings on the equators of the torus Fermi surface (FS). Using a topological classification for gapless systems with inversion symmetry, we find that these nodal rings are topologically nontrivial and protected by integer-valued monopole charges \(\nu=\pm 2\). In the scenario of pairing driven by ferromagnetic fluctuations, we analyze the fate of superconductivity in the magnetically ordered phase. Based on Ginzburg-Landau free energy analysis, we find the energetically favored superconducting state is characterized by the coexistence of two pairing orders whose \(\mathbf{d}\)-vectors are perpendicular to the magnetization axis \(\mathbf{M}\) with their relative phase unfixed. In this case, each nodal loop in the pairing state splits into two, carrying a \(\pm 1\) monopole charge. For bulk-boundary correspondence, these nodal rings enclose flat-band Majorana zero modes on top and bottom surface Brillouin Zones with distinct \(\mathbb{Z}\)-valued topological invariants. ## I Introduction Topological semimetals have recently attracted intense research interest in condensed matter physics. These systems harbor gapless band structures within the three-dimensional (3D) bulk Brillouin Zone (BZ), with a vanishing density of states, and are often topologically protected by specific crystalline symmetries. Depending on the co-dimension of the gapless region, the band crossing can form either nodal points [1; 2] or nodal lines [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. The line nodes can form rings [8; 9; 10; 11; 12; 13; 14; 15; 16], chains [14; 15; 16], links [17; 18] and other composite structures [19; 20; 12]. In the absence of spin-orbit coupling, the nodal rings can be further classified into Weyl or Dirac loops depending on the absence or presence of spin degeneracy. Nodal-line semimetals (NLSMs) are naturally interesting platforms for the interplay between correlation effects and nontrivial topology [21; 22; 23; 24; 25; 26; 27]. In particular, many recent theoretical studies have uncovered routes toward novel gapped and nodal topological superconductivity from topological semimetals [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. In doped Weyl semimetals [33; 34] and Dirac semimetals [35; 36; 37; 38], nodal topological superconducting phases have been studied extensively. In the latter system where crystalline symmetry plays an important role for the normal state topology, it was found that topological nodal pairing also requires crystalline symmetry and only appear for certain unconventional pairing symmetries [37; 38]. Recently, experimental studies have led to the discovery of superconductivity within nodal-loop semimetals [39; 40; 41; 42; 43; 44; 45; 46; 47], while for many of them the pairing symmetry remains to be elucidated. 
For such systems, much of the theoretical interest has focused on fully gapped pairing phases with unconventional pairing symmetry that display first- and higher-order topology protected by crystalline symmetries [25; 26; 27; 28; 29]. In this work we focus on nodal pairing phases from a doped NLSM with a Dirac loop. Our key findings are that doped Dirac-loop semimetals host line-nodal pairing phases with \(B_{1u}\) pairing symmetry of the \(D_{2h}\) group, which are momentum-independent, orbital-singlet and spin-triplet. We study both the pairing mechanism of these orders and their topological classification. We show that short-ranged ferromagnetic fluctuations, as well as Hund's coupling, favor these pairing orders as leading superconducting instabilities. The pairing orders are described by a \(\mathbf{d}\)-vector, whose three components are degenerate. These pairing orders support a pair of gapless superconducting nodal rings. Despite the similarity to the nodal rings in the normal state, we show that they are characterized by different topological indices. The nodal rings are protected by particle-hole, inversion, and a composite time-reversal symmetry, which is a product of the physical time-reversal and a spin rotation. The Bogoliubov-de Gennes (BdG) Hamiltonian belongs to the CI+\(\mathcal{I}\) class according to the AZ+\(\mathcal{I}\) table in Ref. [48], which classifies the topological charges of gapless nodes in centrosymmetric systems. By directly computing the topological invariant, we show that the superconducting nodal rings found in this system are protected by non-trivial monopole charges \(\nu=\pm 2\). The topological stability of the nodal rings can be illustrated by adding symmetry-allowed perturbation terms. In the physical context, we consider the fate of the pairing phases when the ferromagnetic order, whose short-ranged fluctuations mediate superconductivity, becomes long-range. We show via a Ginzburg-Landau analysis that in the presence of a magnetic moment \(\mathbf{M}\), the leading pairing instability is towards a coexistence of two pairing orders. The two pairing orders belong to the \(B_{u}\) irreducible corepresentation of the magnetic point group No. 8.4.27 [49], and are identical to the pairing orders in the paramagnetic phase with \(\mathbf{d}\perp\mathbf{M}\), with their relative phase fixed at \(\frac{\pi}{2}\). This can be understood as a fully spin-polarized pairing state on the "larger" toroidal Fermi surface while the "smaller" Fermi surface does not favor superconductivity. As the temperature is lowered, both Fermi surfaces are gapped (albeit incompletely), and in the BdG spectrum each of the nodal loops in the paramagnetic phase splits into two. To understand their topological properties, we note that while the magnetic order breaks time-reversal symmetry and spin-rotation symmetry, it preserves their product, i.e., the composite time-reversal symmetry that we use for the CI topological classification. Indeed, a direct evaluation of the topological invariant shows that each nodal loop now carries a monopole charge \(\nu=\pm 1\). Furthermore, we find that the topological invariant \(\nu\) within class CI+\(\mathcal{I}\) can be interpreted as the difference of the topological invariants of two fully gapped 1D subsystems separated by the superconducting nodal rings.
This is demonstrated in the energy spectrum with open boundary conditions along the \(z\) direction, and the corresponding surface Brillouin Zones host flat-band Majorana zero modes enclosed by the superconducting nodal rings. The rest of this manuscript is organized as follows. In Sec. II, we discuss the normal state Fermi surface of doped NLSMs. In Sec. III, we use the Fierz identity to determine that from either ferromagnetic fluctuations or Hund's coupling, the leading pairing channels are both \(s\)-wave orbital-singlet and spin-triplet pairings. The superconducting critical temperature is also derived for these pairing channels. In Sec. IV, we show that these pairings get projected onto the torus Fermi surfaces and exhibit a pair of nodal rings on the equator due to the nontrivial pseudospin textures. In Sec. V, we investigate the leading pairing channel in the presence of a ferromagnetic order using a Ginzburg-Landau free energy analysis. In Sec. VI, we analyze the topological protection of nodal-ring superconducting orders in both paramagnetic and ferromagnetic phases as well as the bulk-boundary correspondence and summarize our results in Sec. VII. ## II Lattice model A lattice model Hamiltonian for a nodal-ring semimetal is given by [28] \[H_{0}(\mathbf{k})=(6-t_{1}-2\cos k_{x}-2\cos k_{y}-2\cos k_{z})\sigma_{z}s_{0}+2t_{2}\sin k_{z}\sigma_{x}s_{0}-\mu\sigma_{0}s_{0}, \tag{1}\] where \(\sigma_{i}/s_{i}\) denotes the \(i\)-th Pauli matrix representing the orbital/spin degrees of freedom and the implicit tensor product (i.e., \(\sigma_{0}\otimes s_{0}\)) is assumed. \(t_{1}(t_{2})\) are model parameters and \(\mu\) is the chemical potential. Here we neglect the term \(\epsilon(\mathbf{k})\sigma_{0}s_{0}\) in the dispersion which does not influence the normal state topology. For \(0<t_{1}<4\), \(H_{0}(\mathbf{k})\) displays a nodal ring (\(k_{z}=0,\cos k_{x}+\cos k_{y}=2-t_{1}/2\)) in the 3D Brillouin Zone at half filling (\(\mu=0\)). When \(t_{1}\) is continuously tuned from positive to negative, the nodal ring shrinks to a point and annihilates itself. For finite but small doping (\(|\mu|<t_{1}\)), the nodal ring inflates into a torus-like Fermi surface (FS), shown in Fig. 1. Every point on the FS is two-fold degenerate. The Hamiltonian in Eq. (1) preserves inversion and time-reversal symmetry \[\hat{\mathcal{I}}H_{0}(\mathbf{k})\hat{\mathcal{I}}^{-1}=H_{0}(-\mathbf{k}), \tag{2}\] \[\hat{\mathcal{T}}H_{0}(\mathbf{k})\hat{\mathcal{T}}^{-1}=H_{0}(-\mathbf{k}), \tag{3}\] where \(\hat{\mathcal{I}}=\sigma_{z}\) and \(\hat{\mathcal{T}}=\sigma_{z}\mathcal{K}\) (\(\mathcal{K}\) is the complex conjugate operator). \(H_{0}(\mathbf{k})\) also preserves SU(2) spin-rotation symmetry due to the absence of spin-orbit coupling, and hence \(\hat{\mathcal{T}}^{2}=+1\). Correspondingly, \(H_{0}(\mathbf{k})\) belongs to class AI+\(\mathcal{I}\) [48] and the nodal ring is robust against symmetry-preserving perturbations due to the Berry phase \(\pi\) of a Wilson loop which interlocks with the nodal ring [50]. Different from a Weyl loop, due to time-reversal symmetry, here the nodal ring is four-fold degenerate and is dubbed a "Dirac loop" [11; 12; 13; 14; 15]. ## III Pairing mechanism In this section, we analyze superconducting instabilities mediated by two similar types of interactions: short-range ferromagnetic fluctuations and the inter-orbital Hund's coupling. We find that both pairing mechanisms favor \(s\)-wave, orbital singlet and spin-triplet channels. In Sec.
IV, we show that these pairing channels exhibit a pair of superconducting nodal rings which is attributed to the nontrivial FS pseudo-spin textures.

Figure 1: A schematic torus-like Fermi surface from \(H_{0}(\mathbf{k})\) with parameters \(t_{1}=2.46,t_{2}=0.5\) and \(\mu=1.2\).

### Ferromagnetic fluctuations

We consider a short-range ferromagnetic fluctuation among all orbitals (diagrammatically shown in Fig. 2(a)), \[H_{\text{ferro}}=V_{0}\int d\mathbf{p}d\mathbf{k}c^{\dagger}(\mathbf{p})\sigma_{0}\vec{s}\;c(\mathbf{k})\cdot c^{\dagger}(-\mathbf{p})\sigma_{0}\vec{s}\;c(-\mathbf{k}), \tag{4}\] where \(c^{\dagger}(\mathbf{p})=[\psi_{\mathbf{p},+,\uparrow}^{\dagger},\;\psi_{\mathbf{p},+,\downarrow}^{\dagger},\;\psi_{\mathbf{p},-,\uparrow}^{\dagger},\;\psi_{\mathbf{p},-,\downarrow}^{\dagger}]\) is a four-component fermionic creation operator and \(V_{0}<0\). Here \(\pm\) represents the orbital degree of freedom and \(\uparrow\downarrow\) labels the spin. This interaction can be decomposed into different orbital and spin pairing channels separately by means of the Fierz identity [51, 52] (also see Appendix A) and we find \[c^{\dagger}(\mathbf{p})\sigma_{0}\vec{s}\;c(\mathbf{k})\cdot c^{\dagger}(-\mathbf{p})\sigma_{0}\vec{s}\;c(-\mathbf{k})=\frac{1}{4}\sum_{\begin{subarray}{c}a=0,x,y,z\\ b=x,y,z\end{subarray}}c^{\dagger}(\mathbf{p})\sigma_{a}i\sigma_{y}\otimes s_{b}is_{y}\left[c^{\dagger}(-\mathbf{p})\right]^{\mathrm{T}}\times\left[c(-\mathbf{k})\right]^{\mathrm{T}}\left(\sigma_{a}i\sigma_{y}\right)^{\dagger}\otimes\left(s_{b}is_{y}\right)^{\dagger}c(\mathbf{k}). \tag{5}\] The result in Eq. (5) shows that ferromagnetic fluctuations naturally favor spin-triplet pairings [53, 29, 54]. Note that we approximately set the scattering amplitude in Fig. 2(a) to be a constant \(V_{0}\) which does not contribute to momentum transfer due to the short-range behavior of the ferromagnetic fluctuation. Moreover, the Pauli exclusion principle imposes constraints on the pairing function, \(\Delta_{\mathbf{k}}=-\Delta_{-\mathbf{k}}^{\mathrm{T}}\). Hence the leading pairing channel is expected to be momentum-independent, orbital-singlet and spin-triplet. After neglecting the orbital-triplet channels in Eq. (5), the interaction in Eq. (4) can be rewritten as \[H_{\text{int}}=\frac{V_{0}}{4}\int d\mathbf{p}d\mathbf{k}c^{\dagger}(\mathbf{p})i\sigma_{y}\otimes\vec{s}\;is_{y}\left[c^{\dagger}(-\mathbf{p})\right]^{\mathrm{T}}\cdot\left[c(-\mathbf{k})\right]^{\mathrm{T}}(i\sigma_{y}\otimes\vec{s}\;is_{y})^{\dagger}c(\mathbf{k}), \tag{6}\] diagrammatically shown in Fig. 2(b). The critical temperature of these orbital-singlet and spin-triplet pairing channels in the paramagnetic phase can be derived from a normal state FS instability, which is captured by the linearized gap equation [55] \[1=-\frac{V_{0}T}{4}\sum_{k}\mathrm{Tr}\left[\sigma_{y}s_{j}G_{0}(k)\sigma_{y}s_{j}G_{0}^{\mathrm{T}}(-k)\right], \tag{7}\] where \(k\equiv(\mathbf{k},\omega_{n})\) and \(\omega_{n}=(2n+1)\pi T\) is the fermionic Matsubara frequency. \(G_{0}(k)\) is the normal state Green's function \(G_{0}(k)=\left[i\omega_{n}-H_{0}(\mathbf{k})\right]^{-1}\) and \(j=x/y/z\) denotes three distinct spin-triplet pairing channels. The critical temperatures for the three spin-triplet channels are the same by SU(2) spin-rotation symmetry, hence we only consider one particular spin index (e.g., \(s_{x}\)) throughout the rest of this subsection.
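The channel decomposition in Eq. (5) rests on expanding the matrix part of the vertex in the orthogonal basis \(\{Q_{a}\}\) with coefficients \(C_{ab}=\mathrm{Tr}(Q_{a}^{\dagger}MQ_{b}N^{\mathrm{T}})/n^{2}\) (see Appendix A). A minimal numpy check of that matrix identity is given below for one spin component of the exchange vertex, \(M=N=s_{x}\); it is our own numerical illustration, not code from the original work.

```python
# Numerical check of the expansion behind Eq. (5) and Appendix A:
# M_ij N_kl = sum_ab C_ab (Q_a)_ik (Q_b^dag)_lj, with C_ab = Tr(Q_a^dag M Q_b N^T)/n^2,
# using the pairing basis Q_a = s_a (i s_y) and one spin-exchange vertex M = N = s_x.
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Q = np.stack([p @ (1j * sy) for p in (s0, sx, sy, sz)])   # s_a i s_y
Qd = np.stack([q.conj().T for q in Q])                    # (s_a i s_y)^dagger
M = N = sx
n = 2

C = np.einsum("aij,jk,bkl,il->ab", Qd, M, Q, N) / n**2    # C_ab = Tr(Q_a^dag M Q_b N^T)/n^2
lhs = np.einsum("ij,kl->ikjl", M, N)                      # M_ij N_kl
rhs = np.einsum("ab,aik,blj->ikjl", C, Q, Qd)

print("expansion reproduces the vertex:", np.allclose(lhs, rhs))
print("channel weights C_aa:", np.real_if_close(np.diag(C)).round(3))
# Expected: True, with weights (-0.5, -0.5, 0.5, 0.5), matching the spin
# decomposition worked out in Appendix A.
```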
In order to explicitly evaluate the critical temperature, we write down a \(\mathbf{k}\cdot\mathbf{p}\) continuum model for a nodal-ring semimetal [56, 8, 9] from Eq. (1), \[H_{0}(\mathbf{k})=\left(\frac{k_{x}^{2}+k_{y}^{2}}{m^{*}}-t_{1}\right)\sigma_{z}+v_{z}k_{z}\sigma_{x}-\mu, \tag{8}\] where \(m^{*}\) is the effective mass and \(v_{z}\) is the Fermi velocity along the \(z\) direction. To further simplify the notation, we set \(k_{p}\equiv(k_{x}^{2}+k_{y}^{2})/m^{*}-t_{1}\) and replace \(k_{z}\to v_{z}k_{z}\) for now (\(v_{z}\) is restored in the expression for the density of states). The normal state Green's function is \[G_{0}(k)=\frac{(i\omega_{n}+\mu)\sigma_{0}+k_{p}\sigma_{z}+k_{z}\sigma_{x}}{(i\omega_{n}+\mu)^{2}-k_{p}^{2}-k_{z}^{2}}. \tag{9}\] The integrand in Eq. (7) is simplified to \[\mathrm{Tr}\left[\sigma_{y}s_{x}G_{0}(k)\sigma_{y}s_{x}G_{0}^{\mathrm{T}}(-k)\right]=2\mathrm{Tr}\Bigg{[}\frac{(i\omega_{n}+\mu)\sigma_{0}-k_{p}\sigma_{z}-k_{z}\sigma_{x}}{(i\omega_{n}+\mu)^{2}-k_{r}^{2}}\times\frac{(-i\omega_{n}+\mu)\sigma_{0}+k_{p}\sigma_{z}-k_{z}\sigma_{x}}{(-i\omega_{n}+\mu)^{2}-k_{r}^{2}}\Bigg{]}=\frac{4(\omega_{n}^{2}+\mu^{2}-k_{p}^{2}+k_{z}^{2})}{\left[(i\omega_{n}+\mu)^{2}-k_{r}^{2}\right][(-i\omega_{n}+\mu)^{2}-k_{r}^{2}]}, \tag{10}\] where \(k_{r}\equiv\sqrt{k_{p}^{2}+k_{z}^{2}}\) and the frequency summation in Eq. (7) yields \[T\sum_{n}\mathrm{Tr}\left[\sigma_{y}s_{x}G_{0}(k)\sigma_{y}s_{x}G_{0}^{\mathrm{T}}(-k)\right]=4\int_{C}\frac{dz}{2\pi i}f(z)\frac{-z^{2}+\mu^{2}}{\left[(z+\mu)^{2}-k_{r}^{2}\right][(-z+\mu)^{2}-k_{r}^{2}]}, \tag{11}\] where \(f(z)=1/(e^{\beta z}+1)\). There are four roots in the denominator, \(z_{1,2}=\pm k_{r}-\mu\), \(z_{3,4}=\pm k_{r}+\mu\). Note that the momentum integral \(\sum_{\mathbf{k}}\equiv\frac{m^{*}}{8\pi^{2}v_{z}}\int_{\mathbf{k}}dk_{p}dk_{z}\) is performed within a narrow region around the Fermi surface (\(k_{r}=\mu\)), which is even for both \(k_{p}\) and \(k_{z}\). Assuming an energy cutoff \(\omega_{c}\) around the FS, we obtain \(z_{1}\in[-\omega_{c},\omega_{c}]\) and \(z_{2}\simeq-2\mu\).

Figure 2: Diagrammatic representations of (a) ferromagnetic spin fluctuations and (b) ferromagnetic fluctuations converted to Cooper pairing channels. Solid lines represent fermionic propagators. The single wavy line in (a) denotes the interaction \(V_{0}\) while the double wavy line in (b) denotes the interaction \(V_{0}/4\) after applying the Fierz identity. The vertices in (b) contain a spin-triplet part (\(\vec{s}\;is_{y}\)) which is not shown in the figure.

After applying the residue theorem, Eq. (7) becomes \[1\simeq-\frac{V_{0}}{4}\sum_{\mathbf{k}}\left(\frac{1}{2}\frac{\tanh\frac{\beta z_{1}}{2}}{z_{1}}+\frac{3}{2}\frac{\tanh\frac{\beta z_{2}}{2}}{z_{2}}\right)\simeq-\frac{V_{0}}{4}\frac{N(0)}{2}\int_{-\omega_{c}}^{\omega_{c}}d\epsilon\frac{\tanh\frac{\beta\epsilon}{2}}{\epsilon}\simeq-\frac{V_{0}}{4}N(0)\log\frac{\omega_{c}}{T}, \tag{12}\] where the density of states at the Fermi energy \(N(0)\) is derived by noting that the total number of states below the FS is \[\frac{m^{*}}{8\pi^{2}v_{z}}\int_{k_{r}\leq\mu}dk_{p}dk_{z}=\frac{m^{*}}{8\pi^{2}v_{z}}\pi\mu^{2}=\frac{m^{*}\mu^{2}}{8\pi v_{z}}, \tag{13}\] hence \[N(0)=\frac{d}{d\mu}\frac{m^{*}\mu^{2}}{8\pi v_{z}}=\frac{m^{*}|\mu|}{4\pi v_{z}}.
\tag{14}\] Together with the linearized gap equation, the critical temperature is found to be \[T_{c}=\omega_{c}\exp\left(-\frac{16\pi v_{z}}{m^{*}|\mu V_{0}|}\right), \tag{15}\] It is important to emphasize that our theory is within the weak-pairing regime, which is inapplicable to the half-filling case (\(\mu=0\)) where the density of states vanishes. As a consequence, a strong pairing mechanism is necessary to ensure a superconducting instability at half-filling, which is beyond the scope of our work. ### Hund's coupling In this subsection, we analyze the pairing orders from Hund's coupling, which is an effective local ferromagnetic coupling between different orbitals [57; 58]. Following the Feynman diagram in Fig. 3, the Hund's coupling is expressed as \[H_{\text{Hunds}}=V_{H}\int d\mathbf{p}d\mathbf{k}c^{\dagger}( \mathbf{p})\frac{\sigma_{0}+\sigma_{z}}{2}\otimes\vec{s}\;c(\mathbf{k})\\ \cdot c^{\dagger}(-\mathbf{p})\frac{\sigma_{0}-\sigma_{z}}{2} \otimes\vec{s}\;c(-\mathbf{k}), \tag{16}\] where \(V_{H}<0\). Compared with Eq. (5), the spin channel decomposition from Fierz identity is the same as ferromagnetic fluctuations, both being spin-triplet. Nevertheless, the orbital part decomposition is different (details in Appendix A), \[c^{\dagger}(\mathbf{p})\frac{(\sigma_{0}+\sigma_{z})}{2}\otimes \vec{s}\;c(\mathbf{k})\cdot c^{\dagger}(-\mathbf{p})\frac{(\sigma_{0}-\sigma_ {z})}{2}\otimes\vec{s}\;c(-\mathbf{k})\\ =\frac{1}{8}\sum_{\begin{subarray}{c}\{i,j\}\\ b=x,y,z\end{subarray}}c^{\dagger}(\mathbf{p})\sigma_{i}i\sigma_{y}\otimes s_{ b}is_{y}\left[c^{\dagger}(-\mathbf{p})\right]^{\mathrm{T}}\\ \times\left[c(-\mathbf{k})\right]^{\mathrm{T}}\left(\sigma_{j}i \sigma_{y}\right)^{\dagger}\otimes\left(s_{b}is_{y}\right)^{\dagger}c(\mathbf{ k}), \tag{17}\] where the summation of indices \(\{i,j\}\) runs over four different combinations \(\{0,0\},\{z,0\},\{0,z\},\{z,z\}\) such that the orbital parts of scattering vertices in Eq. (17) are \(\{i\sigma_{y},i\sigma_{y}\}\), \(\{\sigma_{x},i\sigma_{y}\}\), \(\{i\sigma_{y},\sigma_{x}\}\) and \(\{\sigma_{x},\sigma_{x}\}\). Similar to the case of ferromagnetic fluctuations, Hund's coupling is also an effective short-range interaction, thus the pairing function should be momentum-independent. Accordingly, the only possible orbital part decomposition in Eq. (17) is \(\{i\sigma_{y},i\sigma_{y}\}\) in compliance with the Pauli exclusion principle. The pairing interaction from Hund's coupling has the same form as Eq. (6), which confirms that \(s\)-wave orbital-singlet and spin-triplet pairing channels are also attractive mediated from Hund's coupling. The derivation of the critical temperature is similar, and one only needs to replace \(V_{0}\) by \(V_{H}/2\) in Eq. (15), \[T_{c}=\omega_{c}\exp\left(-\frac{32\pi v_{z}}{m^{*}|\mu V_{H}|}\right). \tag{18}\] ## IV Nodal-Ring superconductivity In this section, we show that \(s\)-wave orbital-singlet and spin-triplet pairing orders exhibit a nodal gap structure on the equators of the torus FS. In order to verify the gap nodes, we project the pairing orders onto the torus FS since the pairing instability comes from electronic states near FS [51]. We note that the periodic parts of the Bloch wavefunction \(|\pm,\mathbf{k}\rangle\) of the normal state Hamiltonian \(H_{0}(\mathbf{k})\) in Eq. (8) are given by \[H_{0}(\mathbf{k})|\pm,\mathbf{k}\rangle=\varepsilon_{\pm,\mathbf{k}}|\pm, \mathbf{k}\rangle, \tag{19}\] with energies \(\varepsilon_{\pm,\mathbf{k}}=\pm\sqrt{k_{p}^{2}+k_{z}^{2}}-\mu\). 
Without loss of generality, we assume a positive and small \(\mu\) which satisfies \(0<\mu<t_{1}\). The Fermi surface is a torus given by \(k_{p}^{2}+k_{z}^{2}=\mu^{2}\) and the Bloch state on the FS is \[|+,\mathbf{k}\rangle=\frac{1}{\sqrt{2\mu(\mu-k_{p})}}\left[\begin{array}{c}-k_{z}\\ k_{p}-\mu\end{array}\right]. \tag{20}\]

Figure 3: Diagrammatic representation of Hund's coupling.

In Fig. 4, we plot orbital pseudo-spin textures on the FS contour at \(k_{y}=0\) (real spins lack nontrivial polarizations so we suppress them). The orbital-singlet pairing order \(c_{\mathbf{k}}^{\dagger}\left(i\Delta\sigma_{y}\right)\left(c_{-\mathbf{k}}^{\dagger}\right)^{\mathrm{T}}\) considered in Eq. (6) can be projected onto the FS as [51] \[\Delta^{\mathrm{FS}}(\mathbf{k})=\langle+,\mathbf{k}|i\Delta\sigma_{y}\left(|+,-\mathbf{k}\rangle\right)^{*}=\frac{\Delta k_{z}}{\mu}, \tag{21}\] which exhibits two superconducting nodal rings located at \(k_{z}=0\). The vanishing pairing amplitudes on the equators can also be deduced from the orbital pseudo-spin textures on the FS shown in Fig. 4: \(i\Delta\sigma_{y}\) is a pseudo-spin singlet state, while the electronic states with opposite momentum at \(k_{z}=0\) possess the same pseudo-spin polarizations. Therefore, electrons at \(k_{z}=0\) cannot form Cooper pairs in the orbital-singlet channels. This leads to a pair of superconducting nodal rings at the equators of the torus Fermi surface (see Fig. 5). The aforementioned nodal-ring superconductivity is a common feature for all three spin-triplet pairing channels in the paramagnetic phase.

## V Fate of superconductivity in the ferromagnetic phase

When ferromagnetic fluctuations become long-ranged, the system may develop a ferromagnetic order below the Curie temperature. Here we investigate the fate of pairing orders in the presence of ferromagnetism. The normal state Hamiltonian becomes \[H_{0}^{\prime}(\mathbf{k})=(6-t_{1}-2\cos k_{x}-2\cos k_{y}-2\cos k_{z})\sigma_{z}s_{0}+2t_{2}\sin k_{z}\sigma_{x}s_{0}-\mu\sigma_{0}s_{0}-M_{z}\sigma_{0}s_{z}, \tag{22}\] where we assume the magnetization along the \(z\) axis. The Fermi surface is split into two spin-polarized sectors by the magnetic order [11]. The interplay between the ferromagnetic order and spin-triplet superconductivity depends on the relative orientations between the magnetization and \(\mathbf{d}\)-vectors of superconducting order parameters. The matrix forms of order parameters describing ferromagnetism and superconductivity are \(\sigma_{0}(\mathbf{M}\cdot\mathbf{s})\) and \(i\sigma_{y}(\mathbf{d}\cdot\mathbf{s})is_{y}\). To be specific, we denote the orbital-singlet and spin-triplet pairing orders by \[\Delta_{x}i\sigma_{y}s_{x}is_{y}\quad\text{for the }d_{x}\text{ component},\qquad\Delta_{y}i\sigma_{y}s_{y}is_{y}\quad\text{for the }d_{y}\text{ component},\qquad\Delta_{z}i\sigma_{y}s_{z}is_{y}\quad\text{for the }d_{z}\text{ component}. \tag{23}\] In order to determine the favored pairing channel, we perform a Ginzburg-Landau (GL) free energy analysis. Without loss of generality, we choose \(\mathbf{M}=(0,0,M_{z})\). Including the quadratic order term in \(\mathbf{d}\) and the lowest-order coupling between superconducting orders and a pre-formed ferromagnetic order \(\mathbf{M}\), the GL free energy can be generally expressed as [59] \[F(\mathbf{d},\mathbf{d}^{*})=\alpha(T-T_{c})\mathbf{d}\cdot\mathbf{d}^{*}+i\gamma\mathbf{M}\cdot(\mathbf{d}\times\mathbf{d}^{*}), \tag{24}\] where \(\alpha>0\) and \(T_{c}\) is the critical temperature given in Eq. (15).
The three spin-triplet orders share the same \(T_{c}\) in the absence of ferromagnetism. When \(T<T_{c}\), the negative prefactor of the first term supports a finite order parameter \(|\mathbf{d}|\neq 0\). The coefficient \(\gamma\) in Eq. (24) can be evaluated from the Feynman diagram calculation in Appendix B, giving rise to \(\gamma=-N(0)/\mu\). Since \(\gamma<0\), the leading pairing channel has a relative phase \(\pi/2\) between the \(d_{x}\) and \(d_{y}\) components. This can be verified from the second term in Eq. (24), \[i\gamma\mathbf{M}\cdot(\mathbf{d}\times\mathbf{d}^{*})=i\gamma M_{z}(d_{x}d_{y}^{*}-d_{y}d_{x}^{*})=\gamma M_{z}|d_{x}||d_{y}|\left(ie^{i(\alpha_{x}-\alpha_{y})}-ie^{i(\alpha_{y}-\alpha_{x})}\right)=2\gamma M_{z}|d_{x}||d_{y}|\sin(\alpha_{y}-\alpha_{x}), \tag{25}\] where \(\alpha_{x}(\alpha_{y})\) is the phase factor carried by the \(d_{x}(d_{y})\) component. In order to minimize the free energy, we obtain \(\alpha_{y}-\alpha_{x}=\pi/2\). Therefore, \(\mathbf{d}\propto(1,i,0)\) and the Cooper pair carries a \(z\)-component total spin \(S_{z}=+1\). To understand this, we recall that \(\mathbf{m}=i\mathbf{d}\times\mathbf{d}^{*}\) is the magnetic moment of the Cooper pair in spin-triplet pairing channels. The second term in Eq. (24) is nothing but the potential energy of a magnetic dipole placed in an external magnetic field \(\mathbf{M}\). The spin polarizations of the Cooper pair and the ferromagnetic order are aligned with each other so as to minimize the potential energy. This pairing state is analogous to the superfluid \({}^{3}\)He-\(A1\) phase.

Figure 4: Pseudo-spin textures on FS at \(k_{y}=0\) with parameters \(t_{1}=2.46\), \(t_{2}=0.5\) and \(\mu=1.2\). The blue curves represent the cross sections of FS on the \(k_{y}=0\) plane and the black arrows denote pseudo-spin orientations.

Figure 5: Two superconducting nodal rings (red) on the equators of the torus Fermi surface.

As the temperature is further lowered, the emergence of a sub-dominant pairing channel is anticipated. The secondary transition depends on the quartic order terms: \((\mathbf{d}\cdot\mathbf{d}^{*})^{2},|\mathbf{d}\cdot\mathbf{d}|^{2}\) and \((\mathbf{d}\times\mathbf{d}^{*})^{2}\), which are not included in Eq. (24). With the primary pairing channel already identified, it is more straightforward to evaluate the quartic order terms in the free energy from the following transformations \[\Delta_{a}=d_{x}-id_{y},\quad\Delta_{b}=d_{x}+id_{y},\quad\Delta_{z}=d_{z}, \tag{26}\] where \(\Delta_{a}\) is the amplitude of \(|\uparrow\uparrow\rangle\) spin pairing which denotes the primary channel and \(\Delta_{b}\) denotes \(|\downarrow\downarrow\rangle\) spin pairing. The GL free energy is \[F(\Delta_{a},\Delta_{b},\Delta_{z})=-\alpha^{\prime}\left(|\Delta_{a}|^{2}+|\Delta_{b}|^{2}+2|\Delta_{z}|^{2}\right)-\frac{|\gamma|M_{z}}{2}\left(|\Delta_{a}|^{2}-|\Delta_{b}|^{2}\right)+\beta_{a}|\Delta_{a}|^{4}+4\tilde{\beta}|\Delta_{a}|^{2}|\Delta_{z}|^{2}, \tag{27}\] where \(\alpha^{\prime}\equiv-\alpha(T-T_{c})/2>0\) and \(\beta_{a}=\tilde{\beta}=\beta/4=N(0)/(16\pi^{2}T^{2})\) (see details in Appendix B). We have included only quadratic order terms in \(\Delta_{b}\) and \(\Delta_{z}\) in the free energy above, which are sufficient to determine the secondary phase transition. Note that the term \(|\Delta_{a}|^{2}|\Delta_{b}|^{2}\) does not show up in the free energy because it couples fermions from different spin sectors.
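The statement that the coupling term in Eq. (24) selects \(\mathbf{d}\propto(1,i,0)\) for \(\gamma<0\) and \(\mathbf{M}=M_{z}\hat{z}\) can be checked numerically by sampling unit \(\mathbf{d}\)-vectors. The snippet below is a rough illustration with arbitrary values for \(\gamma\) and \(M_{z}\) (not taken from the microscopic expressions), so the printed optimum is only approximate.

```python
# Sample random complex unit d-vectors and minimize the coupling i*gamma*M.(d x d*)
# of Eq. (24) for M = (0, 0, Mz) and gamma < 0. The expected minimizer is
# d ~ (1, i, 0)/sqrt(2) with value gamma*Mz; parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
gamma, Mz = -1.0, 0.5          # gamma = -N(0)/mu < 0 in the text; magnitudes arbitrary

d = rng.normal(size=(200_000, 3)) + 1j*rng.normal(size=(200_000, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)

coupling = np.real(1j*gamma*Mz*np.cross(d, d.conj())[:, 2])
best = d[np.argmin(coupling)]

print("minimum found   :", coupling.min().round(4), "(expected ->", gamma*Mz, ")")
print("|dx|, |dy|, |dz|:", np.abs(best).round(2), "(expected ~ 0.71, 0.71, 0)")
print("arg(dy/dx)      :", np.angle(best[1]/best[0]).round(2), "(expected ~ +pi/2)")
```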
In order to determine the secondary pairing channel, one needs to check the sign change of two quadratic order terms \(|\Delta_{b}|^{2}\) and \(|\Delta_{z}|^{2}\). Based on this reasoning, we set \[-\alpha^{\prime}+\frac{|\gamma|M_{z}}{2}=0, \tag{28}\] which is the condition for the prefactor of \(|\Delta_{b}|^{2}\) to vanish. The free energy in Eq. (27) becomes \[F(\Delta_{a},\Delta_{z})= -|\gamma|M_{z}|\Delta_{a}|^{2}+\frac{\beta}{4}|\Delta_{a}|^{4}\] \[+\left(-|\gamma|M_{z}+\beta|\Delta_{a}|^{2}\right)|\Delta_{z}|^{2}. \tag{29}\] The magnitude of the primary order \(|\Delta_{a}|\) can be determined by setting \(\partial F/\partial\Delta_{a}=0\), which yields \(|\Delta_{a}|^{2}=2|\gamma|M_{z}/\beta\). Therefore, the prefactor of \(|\Delta_{z}|^{2}\) term is \[-|\gamma|M_{z}+\beta|\Delta_{a}|^{2}=|\gamma|M_{z}>0, \tag{30}\] which indicates that \(\Delta_{z}\) has not developed yet. Therefore the \(\Delta_{b}\) pairing channel is favored compared with \(\Delta_{z}\). For a finite magnetic order \(\mathbf{M}\), the free energy analysis fails because higher-order terms in \(\mathbf{M}\) (e.g., \(\mathbf{M}^{2},\mathbf{M}^{3},\cdots\)) are not negligible. In general, one needs a non-perturbative method to analyze the interplay between ferromagnetism and superconductivity. Nevertheless, we argue here that the story is qualitatively the same as that for a small \(\mathbf{M}\), i.e., \(\Delta_{a}\) is the primary order and \(\Delta_{b}\) is secondary while \(\Delta_{z}\) is disfavored. The underlying reason is as follows: \(\Delta_{z}\sigma_{y}s_{x}\) pairs electrons from two Fermi surfaces with opposite spins, which is negligible compared with intra-FS equal-spin pairing terms in the weak pairing regime. Also, the FS with \(|\uparrow\uparrow\rangle\) spin polarization has a greater density of states, thus favoring \(\Delta_{a}\) compared with \(\Delta_{b}\). ## VI Topology of the nodal-ring superconducting orders ### Paramagnetic phase To analyze the topological properties of the aforementioned \(s\)-wave orbital-singlet and spin-triplet pairing channels, we write down the Bogoliubov-de Gennes (BdG) Hamiltonian of the superconducting nodal-ring system in the mean field regime \[\mathcal{H}_{\rm BdG}(\mathbf{k})=\left[\begin{array}{cc}H_{0}(\mathbf{k})& -\vec{\Delta}\cdot i\sigma_{y}\vec{s}\;is_{y}\\ -\vec{\Delta}^{\dagger}\cdot(i\sigma_{y}\vec{s}\;is_{y})^{\dagger}&-H_{0}^{ \rm T}(-\mathbf{k})\end{array}\right], \tag{31}\] where the second quantized Hamiltonian is \(\mathcal{H}=\frac{1}{2}\int_{\mathbf{k}}\Psi_{\mathbf{k}}^{\dagger}\mathcal{H }_{\rm BdG}(\mathbf{k})\Psi_{\mathbf{k}}\) and the Nambu spinor is defined as \(\Psi_{\mathbf{k}}=\left[c^{\rm T}(\mathbf{k}),\ c^{\dagger}(-\mathbf{k}) \right]^{\rm T}\). In the paramagnetic phase, due to the spin degeneracy, the BdG Hamiltonian can be decoupled into two identical copies of four-band models, each given by \[\mathcal{H}_{\rm BdG}(\mathbf{k}) =(6-t_{1}-2\cos k_{x}-2\cos k_{y}-2\cos k_{z})\sigma_{z}\tau_{z}\] \[\quad+2t_{2}\sin k_{z}\sigma_{x}\tau_{0}-\mu\sigma_{0}\tau_{z}+ \Delta\sigma_{y}\tau_{y}, \tag{32}\] where \(\tau_{i}\) is the \(i\)-th Pauli matrix in the Nambu space and \(\tau_{0}\) is the identity. 
\(\mathcal{H}_{\rm BdG}(\mathbf{k})\) preserves inversion and time reversal symmetry, and additionally, particle-hole symmetry (\(\hat{\mathcal{P}}\)) and chiral symmetry (\(\hat{\mathcal{S}}\)): \[\hat{\mathcal{I}}\mathcal{H}_{\rm BdG}(\mathbf{k})\hat{\mathcal{I}}^{-1}=\mathcal{H}_{\rm BdG}(-\mathbf{k}),\qquad\hat{\mathcal{P}}\mathcal{H}_{\rm BdG}(\mathbf{k})\hat{\mathcal{P}}^{-1}=\mathcal{H}_{\rm BdG}(-\mathbf{k}),\qquad\hat{\mathcal{S}}\mathcal{H}_{\rm BdG}(\mathbf{k})\hat{\mathcal{S}}^{-1}=-\mathcal{H}_{\rm BdG}(\mathbf{k}), \tag{33}\] where \(\hat{\mathcal{I}}=\sigma_{z}\tau_{z}\), \(\hat{\mathcal{T}}=\sigma_{z}\tau_{z}\mathcal{K}\), \(\hat{\mathcal{P}}=\tau_{x}\mathcal{K}\) and \(\hat{\mathcal{S}}\equiv i\hat{\mathcal{P}}\hat{\mathcal{T}}=\sigma_{z}\tau_{y}\). Moreover, Eq. (32) preserves three mirror symmetries \(\hat{\mathcal{M}}_{x}=\mathbf{1}\), \(\hat{\mathcal{M}}_{y}=\mathbf{1}\) and \(\hat{\mathcal{M}}_{z}=\sigma_{z}\tau_{z}\), which characterizes the Hamiltonian by \(D_{2h}\) point group symmetry. The corresponding pairing functions \(\Delta\sigma_{y}\vec{s}\;is_{y}\), with three distinct \(\mathbf{d}\)-vectors, all belong to the \(B_{1u}\) irreducible representation [29]. The BdG quasiparticle spectrum can be directly solved from the Hamiltonian in Eq. (32), \[E(\mathbf{k})=\pm\sqrt{f^{2}(\mathbf{k})+4t_{2}^{2}\sin^{2}k_{z}+\mu^{2}+\Delta^{2}\pm 2\sqrt{f^{2}(\mathbf{k})\left(\mu^{2}+\Delta^{2}\right)+4\mu^{2}t_{2}^{2}\sin^{2}k_{z}}}, \tag{34}\] where \(f(\mathbf{k})=(6-t_{1}-2\cos k_{x}-2\cos k_{y}-2\cos k_{z})\). By setting \(E(\mathbf{k})=0\), a pair of gapless nodal rings can be found at \(k_{z}=0,\cos k_{x}+\cos k_{y}=2-t_{1}/2\pm\sqrt{\mu^{2}+\Delta^{2}}/2\simeq 2-t_{1}/2\pm\mu/2\), which reside at the two equators of the torus Fermi surface (\(\simeq\) follows from the weak-pairing assumption \(\Delta\ll\mu\)). This result is consistent with the projected gap found in Eq. (21), where we find the gap vanishing at \(k_{z}=0\). The two superconducting nodal rings are found to be robust against perturbations that preserve the symmetries listed in Eq. (33). Therefore, these gapless rings must carry some non-trivial topological charges which prevent them from being gapped. To check the robustness of the two nodal rings, we first notice that \((\hat{\mathcal{T}}\hat{\mathcal{I}})^{2}=+1\) and \((\hat{\mathcal{P}}\hat{\mathcal{I}})^{2}=-1\), hence \(\mathcal{H}_{\mathrm{BdG}}(\mathbf{k})\) belongs to the CI+\(\mathcal{I}\) class in the AZ+\(\mathcal{I}\) table for classifying inversion symmetric Hamiltonians with band structure nodes proposed in Ref. [48]. There are two topological charges associated with the CI class, i.e., elements of the first and second homotopy groups \(\pi_{1}(M_{\mathrm{CI}})\) and \(\pi_{2}(M_{\mathrm{CI}})\), where \(M_{\mathrm{CI}}=\mathrm{U}(n)/\mathrm{O}(n)\) is the classifying topological space relevant for CI. The \(\pi_{2}\) monopole charge is trivial in our case, meaning that the nodal ring can shrink to a point and annihilate itself by continuously tuning the model parameters. Pertaining to the present case, we only focus on the effect of a nontrivial \(\pi_{1}\) charge when the Fermi surface topology does not change.
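As a quick numerical cross-check of Eq. (34), one can diagonalize the \(4\times 4\) Bloch matrix of Eq. (32) along a cut of the Brillouin zone and confirm that the gap closes on the two rings \(\cos k_{x}+\cos k_{y}=2-t_{1}/2\pm\sqrt{\mu^{2}+\Delta^{2}}/2\). The sketch below uses the Fig. 1 normal-state parameters and an arbitrary illustrative pairing amplitude \(\Delta=0.2\); it is our own illustration, not the authors' code.

```python
# Consistency check of Eq. (34): diagonalize the 4x4 BdG matrix of Eq. (32) along
# k_y = k_z = 0 and compare the gap-closing momenta with the analytic nodal rings.
# Normal-state parameters follow Fig. 1; Delta is an arbitrary illustrative value.
import numpy as np

t1, t2, mu, Delta = 2.46, 0.5, 1.2, 0.2

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_bdg(kx, ky, kz):
    f = 6 - t1 - 2*np.cos(kx) - 2*np.cos(ky) - 2*np.cos(kz)
    return (f*np.kron(sz, sz)                    # f(k) sigma_z tau_z
            + 2*t2*np.sin(kz)*np.kron(sx, s0)    # 2 t2 sin(kz) sigma_x tau_0
            - mu*np.kron(s0, sz)                 # -mu sigma_0 tau_z
            + Delta*np.kron(sy, sy))             # Delta sigma_y tau_y

kxs = np.linspace(0.0, np.pi, 4001)
gap = np.array([np.min(np.abs(np.linalg.eigvalsh(h_bdg(k, 0.0, 0.0)))) for k in kxs])

# Analytic rings on the k_y = 0 cut: cos(kx) = 1 - t1/2 -/+ sqrt(mu^2 + Delta^2)/2.
pred = [np.arccos(1 - t1/2 - s*np.sqrt(mu**2 + Delta**2)/2) for s in (+1, -1)]
print("analytic ring positions k_x:", np.round(pred, 3))
print("numerical gap closings near:", np.round(kxs[gap < 1e-3], 3))
```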
In the presence of the chiral symmetry \(\hat{\mathcal{S}}\), the flattened Hamiltonian \(\mathcal{H}_{\mathrm{flat}}(\mathbf{k})\) can be deformed into an off-diagonal form \[\mathcal{H}_{\mathrm{flat}}(\mathbf{k})=\left[\begin{array}{cc}0&q(\mathbf{k})\\ q^{\dagger}(\mathbf{k})&0\end{array}\right], \tag{35}\] and the \(\pi_{1}\) charge can be captured by the phase winding number of the \(q(\mathbf{k})\) matrix along an arbitrary closed path \(S^{1}\) which interlocks with the nodal ring, \[\pi_{1}(M_{\mathrm{CI}})=\frac{i}{2\pi}\oint_{S^{1}}d\mathbf{k}\cdot\mathrm{Tr}[q^{\dagger}(\mathbf{k})\nabla_{\mathbf{k}}q(\mathbf{k})]\in\mathbb{Z}. \tag{36}\] To calculate the winding number associated with each nodal ring, we follow Ref. [60] to derive \(q(\mathbf{k})\). For a generic BdG Hamiltonian with chiral symmetry \[\mathcal{H}_{\mathrm{BdG}}(\mathbf{k})=\left[\begin{array}{cc}H_{0}(\mathbf{k})&\Delta(\mathbf{k})\\ \Delta^{\dagger}(\mathbf{k})&-H_{0}^{\mathrm{T}}(-\mathbf{k})\end{array}\right], \tag{37}\] we can always unitarily transform it into an off-diagonal form \[\tilde{\mathcal{H}}_{\mathrm{BdG}}(\mathbf{k})=V\mathcal{H}_{\mathrm{BdG}}(\mathbf{k})V^{\dagger}=\left[\begin{array}{cc}0&H_{0}(\mathbf{k})+i\mathcal{T}\Delta_{\mathbf{k}}^{\dagger}\\ H_{0}(\mathbf{k})-i\mathcal{T}\Delta_{\mathbf{k}}^{\dagger}&0\end{array}\right], \tag{38}\] where \(\mathcal{T}\) is the unitary part of the time reversal operator and \[V=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}\mathbb{I}&i\mathcal{T}\\ \mathbb{I}&-i\mathcal{T}\end{array}\right]. \tag{39}\] Since the weak pairing \(\Delta_{\mathbf{k}}\) is only turned on around the Fermi surface, the matrix elements of \(\mathcal{T}\Delta_{\mathbf{k}}^{\dagger}\) between different bands are negligible. Therefore, we can use the Bloch states of \(H_{0}(\mathbf{k})\) to expand the off-diagonal matrix \[H_{0}(\mathbf{k})+i\mathcal{T}\Delta_{\mathbf{k}}^{\dagger}\simeq\sum_{n}\left(\varepsilon_{n,\mathbf{k}}+i\delta_{n,\mathbf{k}}\right)|n,\mathbf{k}\rangle\langle n,\mathbf{k}|. \tag{40}\] The matrix elements \(\delta_{n,\mathbf{k}}\) are \[\delta_{\pm,\mathbf{k}}\equiv\langle\pm,\mathbf{k}|\mathcal{T}\Delta_{\mathbf{k}}^{\dagger}|\pm,\mathbf{k}\rangle=\pm\frac{\Delta k_{z}}{k_{r}}, \tag{41}\] which are consistent with the projected gap onto the FS in Eq. (21). Correspondingly, the off-diagonal matrix \(q(\mathbf{k})\) in the flattened Hamiltonian \(\mathcal{H}_{\mathrm{flat}}(\mathbf{k})\) is given by \[q(\mathbf{k})=\sum_{n}e^{i\theta_{n,\mathbf{k}}}|n,\mathbf{k}\rangle\langle n,\mathbf{k}|=\sum_{n}\frac{\varepsilon_{n,\mathbf{k}}+i\delta_{n,\mathbf{k}}}{|\varepsilon_{n,\mathbf{k}}+i\delta_{n,\mathbf{k}}|}|n,\mathbf{k}\rangle\langle n,\mathbf{k}|. \tag{42}\] Note that \(\Delta\ll\mu\), so we obtain \(e^{i\theta_{-,\mathbf{k}}}\simeq-1\) and thus \[q(\mathbf{k})\simeq e^{i\theta_{+,\mathbf{k}}}|+,\mathbf{k}\rangle\langle+,\mathbf{k}|-|-,\mathbf{k}\rangle\langle-,\mathbf{k}|. \tag{43}\]

Figure 6: Sketch of the path \(S^{1}\). It winds around the inner nodal ring in a counter-clockwise direction.

Only the first term contains a relevant contribution to the phase winding, so we safely set \(q(\mathbf{k})=e^{i\theta_{+,\mathbf{k}}}|+,\mathbf{k}\rangle\langle+,\mathbf{k}|\) and the \(\pi_{1}\) charge is \[\pi_{1}(M_{\text{CI}})=-\frac{1}{2\pi}\oint_{S^{1}}d\mathbf{k}\cdot\nabla_{\mathbf{k}}\theta_{+,\mathbf{k}}=-\frac{1}{2\pi}\theta_{+,\mathbf{k}}\Big{|}_{i}^{f}, \tag{44}\] where \(i\) and \(f\) represent the starting and ending points of the path \(S^{1}\).
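Equation (44) can also be evaluated numerically: using \(\varepsilon_{+,\mathbf{k}}=k_{r}-\mu\) and \(\delta_{+,\mathbf{k}}=\Delta k_{z}/k_{r}\) from Eq. (41), one accumulates the phase \(\theta_{+,\mathbf{k}}\) around a small loop enclosing the inner node in the \((k_{p},k_{z})\) plane. The loop radius and parameter values below are arbitrary illustrative choices.

```python
# Numerical evaluation of the pi_1 charge in Eq. (44): accumulate the phase
# theta_{+,k} of Eq. (42) along a small counter-clockwise loop around the inner
# node (k_p = -mu, k_z = 0). Parameter and loop-radius values are illustrative.
import numpy as np

mu, Delta, radius = 1.2, 0.2, 0.05

t = np.linspace(0, 2*np.pi, 2001)
kp = -mu + radius*np.cos(t)          # loop around the inner nodal point
kz = radius*np.sin(t)
kr = np.sqrt(kp**2 + kz**2)

phase = np.angle((kr - mu) + 1j*Delta*kz/kr)   # theta_{+,k} from Eqs. (41)-(42)
unwrapped = np.unwrap(phase)
winding = -(unwrapped[-1] - unwrapped[0]) / (2*np.pi)
print("pi_1 per spin copy:", int(round(float(winding))))   # expected +1 (inner ring)
```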
We choose a circular path \(S^{1}\) at \(k_{y}=0\), where the two nodal rings become four symmetric nodal points at \(\pm k_{x1},\pm k_{x2}\) with \(k_{x1}\simeq\sqrt{m-\mu}\) and \(k_{x2}\simeq\sqrt{m+\mu}\). \(S^{1}\) winds around \(k_{x1}\) in a counter-clockwise direction, as shown in Fig. 6. From Eq. (42) we obtain \[e^{i\theta_{+,\mathbf{k}}}=\frac{k_{r}-\mu+i\frac{\Delta k_{z}}{k_{r}}}{\left[\left(k_{r}-\mu\right)^{2}+\frac{\Delta^{2}k_{z}^{2}}{k_{r}^{2}}\right]^{1/2}}. \tag{45}\] Along \(S^{1}\), the phase \(\theta_{+,\mathbf{k}}\) changes as \[\theta_{+,\mathbf{k}}:\qquad\pi\rightarrow\frac{\pi}{2}\to 0\rightarrow-\frac{\pi}{2}\rightarrow-\pi. \tag{46}\] According to Eq. (44), the topological charge of the inner superconducting nodal ring is determined as \[\pi_{1}(M_{\text{CI}})=-\frac{1}{2\pi}\left[\theta_{+,\mathbf{k}}(f)-\theta_{+,\mathbf{k}}(i)\right]=1. \tag{47}\] Similar calculations show that the charge for the outer nodal ring is \(-1\). Spin indices were suppressed throughout the calculations above, hence the winding number should be \(+2\) for the inner nodal ring and \(-2\) for the outer nodal ring after counting the spin degeneracy. Due to the nontrivial and opposite \(\pi_{1}\) charges carried by the pair of nodal rings, both top and bottom surface Brillouin zones corresponding to the BdG Hamiltonian in Eq. (32) contain flat-band Majorana zero modes enclosed by the projections of the pair of nodal rings onto the surfaces [27; 32].

### Ferromagnetic phase

In the presence of ferromagnetism, the symmetry of the system is lowered to the magnetic point group No. 8.4.27 [49]. The unitary crystalline symmetries of the normal state Hamiltonian in Eq. (22) are inversion \(\hat{\mathcal{I}}=\sigma_{z}\), the two-fold rotation with respect to the \(z\) axis \(\hat{\mathcal{C}}_{2z}=is_{z}\) and the mirror operation with respect to the \(xy\) plane \(\hat{\mathcal{M}}_{z}=i\sigma_{z}s_{z}\). As a result, the pairing orders with \(\mathbf{d}\perp\mathbf{M}\) belong to the \(B_{u}\) irrep while the order with \(\mathbf{d}\parallel\mathbf{M}\) belongs to \(A_{u}\). Their transformation properties are listed in Table 1. From the energetic analysis in Sec. V, we have concluded that the favored pairing states are those with \(\mathbf{d}\)-vectors perpendicular to the magnetization axis. In this subsection, we only analyze the topological properties for \(\mathbf{d}\perp\mathbf{M}\). The \(B_{u}\) pairing channels (\(\mathbf{d}\perp\mathbf{M}\)) correspond to the mixed equal-spin pairing states. In order to preserve the U(1) spin rotation with respect to the \(z\) axis, the phases of the two equal-spin pairing gaps \(\Delta_{a}\sigma_{y}(s_{0}+s_{z})\) and \(\Delta_{b}\sigma_{y}(s_{0}-s_{z})\) do not couple to each other from the GL free energy analysis in Sec. V. Therefore, we generally assume that the two pairing orders (with spins aligned as \(|\uparrow\uparrow\rangle\) and \(|\downarrow\downarrow\rangle\)) carry arbitrary phases \(\alpha\) and \(\theta\).
\begin{table} \begin{tabular}{l|l|c|c|c} \hline \hline Pairing order & Irrep & \(\hat{\mathcal{I}}=\sigma_{z}\) & \(\hat{\mathcal{C}}_{2z}=is_{z}\) & \(\hat{\mathcal{M}}_{z}=i\sigma_{z}s_{z}\) \\ \hline \(\Delta_{a}\sigma_{y}(s_{0}+s_{z})\) & \(B_{u}\) & \(-\) & \(-\) & \(+\) \\ \(\Delta_{b}\sigma_{y}(s_{0}-s_{z})\) & \(B_{u}\) & \(-\) & \(-\) & \(+\) \\ \(\Delta_{z}\sigma_{y}s_{x}\) & \(A_{u}\) & \(-\) & \(+\) & \(-\) \\ \hline \hline \end{tabular} \end{table} Table 1: List of representative pairing orders, irreducible corepresentations, and their corresponding characters of the three orbital-singlet and spin-triplet pairing channels in magnetic point group No. 8.4.27.

Figure 7: Superconducting nodal rings of \(B_{u}\) pairing channels in the presence of ferromagnetism. The \(\pi_{1}\) charges of the nodal rings are +1, +1, -1 and -1 from inside to outside.

The corresponding BdG Hamiltonian is \[\mathcal{H}_{\text{BdG}}^{\perp}(\mathbf{k})=\left(6-t_{1}-2\cos k_{x}-2\cos k_{y}-2\cos k_{z}\right)\sigma_{z}s_{0}\tau_{z}+2t_{2}\sin k_{z}\sigma_{x}s_{0}\tau_{0}-\mu\sigma_{0}s_{0}\tau_{z}-M_{z}\sigma_{0}s_{z}\tau_{z}+\Delta_{a}\sigma_{y}\frac{s_{0}+s_{z}}{2}\left(\tau_{x}\cos\alpha+\tau_{y}\sin\alpha\right)+\Delta_{b}\sigma_{y}\frac{s_{0}-s_{z}}{2}\left(\tau_{x}\cos\theta+\tau_{y}\sin\theta\right), \tag{48}\] where the inversion (\(\hat{\mathcal{I}}=\sigma_{z}\tau_{z}\)) and particle-hole symmetries (\(\hat{\mathcal{P}}=\tau_{x}\mathcal{K}\)) are the same as in the paramagnetic phase. Note that while the magnetic order breaks both time-reversal symmetry \(\mathcal{T}=i\sigma_{z}s_{y}\mathcal{K}\) and spin rotation symmetry, it preserves their composite \(\mathcal{T}^{\prime}=\sigma_{z}\mathcal{K}\), which we used to identify the CI classification. However, the pairing orders in general break \(\mathcal{T}^{\prime}\) due to the complex phases of \(\Delta_{a,b}\). This can be remedied by phase rotations in Nambu space for each of the two spins, and the modified time-reversal symmetry is \[\mathcal{T}^{\prime\prime}=\sigma_{z}\left[\frac{s_{0}+s_{z}}{2}e^{-i\alpha\tau_{z}}+\frac{s_{0}-s_{z}}{2}e^{-i\theta\tau_{z}}\right]\mathcal{K}, \tag{49}\] which satisfies \(\mathcal{T}^{\prime\prime}\mathcal{H}^{\perp}_{\text{BdG}}(-\mathbf{k})\mathcal{T}^{\prime\prime-1}=\mathcal{H}^{\perp}_{\text{BdG}}(\mathbf{k})\) and \((\mathcal{T}^{\prime\prime}\mathcal{I})^{2}=+1\). We emphasize that the \(\mathcal{T}^{\prime\prime}\) symmetry exists for arbitrary phases \(\alpha\) and \(\theta\). Consequently, the eight-band BdG Hamiltonian (48) belongs to class CI+\(\mathcal{I}\) as well and it describes two decoupled superconducting orders on the spin-polarized Fermi surfaces, equivalent to the Hamiltonian in Eq. (32), both supporting topologically protected nodal rings on the equators shown in Fig. 7. The \(\pi_{1}\) charges defined in Eq. (36) are both \(+1\) for the two inner nodal rings and \(-1\) for the two outer nodal rings. Since the topological charge is an integer quantity, nodal rings with the same sign will not pair-annihilate when the ferromagnetic order is turned off. The nodal-loop superconductivity can be further verified by solving the BdG spectrum of Eq. (48) and subsequently setting \(E(\mathbf{k})=0\).
Upon doing so, we find four nodal loops at \[k_{z}=0,\quad\cos k_{x}+\cos k_{y}=2-\frac{t_{1}}{2}\pm\frac{1}{ 2}\sqrt{(\mu+M_{z})^{2}+\Delta_{a}^{2}}\qquad\text{for}\qquad s_{z}=+1, \tag{50}\] \[k_{z}=0,\quad\cos k_{x}+\cos k_{y}=2-\frac{t_{1}}{2}\pm\frac{1}{ 2}\sqrt{(\mu-M_{z})^{2}+\Delta_{b}^{2}}\qquad\text{for}\qquad s_{z}=-1. \tag{51}\] For bulk-boundary correspondence, these nodal rings enclose flat-band Majorana zero modes on top and bottom surfaces of the lattice Hamiltonian in Eq. (48) depicted in Fig. 8. Moreover, the number of surface Majorana zero modes is determined by a \(\mathbb{Z}\)-valued topological invariant carried by the effective 1D Hamiltonian \(\mathcal{H}^{1D}_{\text{BdG}}(k_{z})\) by fixing \(k_{x}\) and \(k_{y}\) in Eq. (48). For notational simplicity, we set \(\alpha=\theta=0\) and \(\Delta_{a}=\Delta_{b}\), yielding \[\mathcal{H}^{1D}_{\text{BdG}}(k_{z})=(m-2\cos k_{z})\sigma_{z}s_ {0}\tau_{z}+2t_{2}\sin k_{z}\sigma_{x}s_{0}\tau_{0}\\ -\mu\sigma_{0}s_{0}\tau_{z}-M_{z}\sigma_{0}s_{z}\tau_{z}+\Delta_{ a}\sigma_{y}s_{0}\tau_{x}, \tag{52}\] where \(m\equiv 6-t_{1}-2\cos k_{x}-2\cos k_{y}\). The \(\pi_{1}\) charges (winding numbers) identified for the 3D Hamiltonian \(\mathcal{H}^{\perp}_{\text{BdG}}(\mathbf{k})\) within class CI+\(\mathcal{I}\) can be interpreted as the "difference" of the topological invariants of two fully gapped 1D subsystem \(\mathcal{H}^{1D}_{\text{BdG}}(k_{z})\) separated by the superconducting nodal rings, namely \[\pi_{1}(M_{\text{CI}})=N_{\text{1D}}^{>}-N_{\text{1D}}^{<}, \tag{53}\] where \(\lessgtr\) denotes the region inside(outside) the corresponding nodal ring and vice versa. From deforming the loop \(S^{1}\) (along which the \(\pi_{1}\) charge is defined) into two straight lines that cross the 1D Brillouin Zone along \(k_{z}\), we can determine distinct \(\mathbb{Z}\)-valued 1D winding numbers \(N_{\text{1D}}\) for \(\mathcal{H}^{1D}_{\text{BdG}}(k_{z})\) (details in Appendix C). The topological invariants are found to be \(N_{\text{1D}}=2\) for the annulus region enclosed by two superconducting nodal rings located on the "small" FS while \(N_{\text{1D}}=1\) for two other annulus regions between the "small" and "large" FS. We numerically solve the energy spectrum in Fig. 8(a) by introducing open boundary condition along \(z\), where we found flat-band Majorana zero modes enclosed by the bulk superconducting nodal-rings projected on the surface. The 1D topological invariants found above are also numerically verified in Fig. 8(c) and 8(d). A schematic picture of the topological regions on the top and bottom surface Brillouin Zones is illustrated in Fig. 8(b). We note that the surface flat-band Majorana zero modes discussed in our work are different from the Weyl-loop superconducting phases in Refs. [27; 32]. In Dirac-loop systems, there is an extra \(N_{\text{1D}}=2\) region with two surface Majorana zero modes due to the spin degeneracy. We briefly comment on the case for \(A_{u}\) pairing symmetry (\(\mathbf{d}\parallel\mathbf{M}\)) in Appendix D, where we find two toroidal Bogoliubov Fermi surfaces that are topologically unstable. This pairing order explicitly breaks time-reversal symmetry and the system belongs to C+\(\mathcal{I}\) class in Ref. [48]. This class lacks a nontrivial \(\pi_{0}\) topological charge, which is necessary to stabilize a nodal surface in 3D BdG spectrum. Due to the larger gapless regions in the BdG spectrum, the \(A_{u}\) channel is suppressed, consistent with the energetic analysis in Sec. V. 
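The correspondence between the regions \(N_{\rm 1D}=2,1,0\) and the number of surface Majorana modes can be reproduced with a small real-space diagonalization of Eq. (52) on an open chain. The sketch below uses the parameter values quoted in the Fig. 8 caption and the two momenta of Fig. 8(c,d); it is our own illustrative reconstruction with a modest chain length, not the numerics used for Fig. 8.

```python
# Edge-mode counting for the 1D Hamiltonian of Eq. (52) on an open chain along z.
# For a 1D invariant N_1D one expects N_1D Majorana zero modes per surface, i.e.
# 2*N_1D near-zero BdG levels in total. Parameters follow the Fig. 8 caption.
import numpy as np

t1, t2, mu, Mz, Da = 1.0, 0.5, 0.5, 0.3, 0.2
Nz = 120                                     # modest chain length for illustration

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def near_zero_modes(kx, ky=0.0):
    m = 6 - t1 - 2*np.cos(kx) - 2*np.cos(ky)
    total = 0
    for s in (+1, -1):                       # the two decoupled spin sectors of Eq. (52)
        onsite = (m*np.kron(sz, sz)          # local part of (m - 2 cos kz) sigma_z tau_z
                  - (mu + s*Mz)*np.kron(s0, sz)
                  + Da*np.kron(sy, sx))      # Delta_a sigma_y tau_x (alpha = theta = 0)
        hop = -np.kron(sz, sz) - 1j*t2*np.kron(sx, s0)   # coefficient of e^{+i kz}
        H = np.kron(np.eye(Nz), onsite)
        off = np.kron(np.eye(Nz, k=1), hop)
        H += off + off.conj().T
        total += int(np.sum(np.abs(np.linalg.eigvalsh(H)) < 1e-4))
    return total

for kx in (0.65, 1.05):                      # the momenta used in Fig. 8(c) and 8(d)
    print(f"k_x = {kx}: near-zero modes = {near_zero_modes(kx)}")
# Expected: 2 (N_1D = 1) at k_x = 0.65 and 4 (N_1D = 2) at k_x = 1.05.
```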
## VII Summary In this work, we analyzed the energetic and topological properties of nodal superconductivity induced by ferromagnetic spin fluctuations or Hund's coupling in Dirac-loop-type nodal-line semimetals. The favored Cooper pairing channels are found to be momentum-independent, orbital-singlet and spin-triplet, which belongs to the \(B_{1u}\) representation of the point group \(D_{2h}\). In the weak-pairing regime, we calculated the critical temperatures in the paramagnetic phase. From the pseudo-spin textures on the torus Fermi surface, three spin-triplet pairing channels all exhibit a pair of nodal rings, which are topologically protected by a \(\mathbb{Z}\)-valued charge \(\nu=\pm 2\) within class CI+\(\mathcal{I}\) from the AZ+\(\mathcal{I}\) table [48]. In the presence of a ferromagnetic order, the symmetry of the system is lowered to the magnetic point group No. 8.4.27. We analyze Ginzburg-Landau free energy of the system which captures the interplay between spin-triplet superconductivity and ferromagnetism. The leading pairing state is found to carry a relative phase \(\pi/2\) between \(d_{x}\) and \(d_{y}\) components, i.e., \(\mathbf{d}\propto(1,i,0)\). Upon further lowering the temperature, a sub-leading channel with \(|\downarrow\downarrow\rangle\) spin is favored. These two pairing orders correspond to the pairing of the two split FS's with opposite spins. We show that the BdG Hamiltonian belongs to class CI+\(\mathcal{I}\) since it still preserves a "modified" time-reversal symmetry which squares to +1. Therefore, the four-fold degenerate superconducting nodal rings found from the paramagnetic phase are split into two pairs, and the robustness of nodal rings can be characterized by an integer-valued topological invariant \(\nu=\pm 1\). Furthermore, the \(\pi_{1}\) charges (winding numbers) identified within class CI+\(\mathcal{I}\) can be interpreted as the "difference" of the topological invariants of two fully gapped 1D subsystem separated by the superconducting nodal rings. This is demonstrated in the energy spectrum with open boundary condition along \(z\) direction, where we find that the bulk superconducting nodal rings enclose flat-band Majorana zero modes with \(N_{\rm 1D}=2\) and \(N_{\rm 1D}=1\) on the top and bottom surface Brillouin Zones. For \(\mathbf{d}\parallel\mathbf{M}\), the BdG quasiparticle spectrum hosts nodal surfaces in class C+\(\mathcal{I}\) which are not topologically protected. This pairing channel is energetically disfavored because of the large gapless surfaces in the BdG spectrum. The nodal-ring superconductivity discussed in our theory can be applied to either paramagnetic nodal-line systems (mediated by Hund's coupling) or ferromagnetic nodal-line materials (mediated by ferromagnetic spin fluctuations). For the latter case, superconductivity may appear close to onset of ferromagnetism, although experimentally the bulk superconductivity is yet to be discovered. The superconducting phase should naturally host nodal rings inherited from the normal state. While our analysis is based on a simplified Hamiltonian, we expect the conclusions to hold for realistic materials so long as the corresponding symmetries are the same. ###### Acknowledgements. We acknowledge the support from startup funds at University of Florida and National Science Foundation (NSF) under Award number DMR-2045781. 
Figure 8: (a) Energy spectrum of \(\mathcal{H}_{\rm BdG}^{\perp}(\mathbf{k})\) in Eq. (48) versus \(k_{x}\) at \(k_{y}=0\). The open boundary condition along the \(z\) direction with lattice sites \(N_{z}=300\) is adopted in the spectrum. Red dots denote the positions of bulk superconducting nodal rings. (b) Topological regions on the top (bottom) surface Brillouin Zone. The yellow region denotes \(N_{\rm 1D}=2\), cyan regions denote \(N_{\rm 1D}=1\) and blue regions denote \(N_{\rm 1D}=0\). Majorana zero modes and the corresponding density profiles are plotted for (c) \(k_{x1}=0.65\) and (d) \(k_{x2}=1.05\). We have set \(\alpha=\theta=0\) and \(\Delta_{a}=\Delta_{b}=0.2\). Other model parameters are set to be \(\{t_{1},t_{2},\mu,M_{z}\}=\{1,0.5,0.5,0.3\}\).

## Appendix A Fierz identity

### Proof

Fierz identities are reordering relations for four-fermion interactions: for two \(n\times n\) matrices \(M,N\) and \(\psi_{i}\) as \(n\)-component fermionic annihilation operators, there exist matrices \(M^{\prime},N^{\prime}\) such that \[\psi_{1}^{\dagger}M\psi_{2}\psi_{3}^{\dagger}N\psi_{4}=\psi_{1}^{\dagger}M^{\prime}\left(\psi_{3}^{\dagger}\right)^{\rm T}\psi_{4}^{\rm T}N^{\prime}\psi_{2}. \tag{10}\] To prove the relation in Eq. (10), we first choose an orthogonal basis of the \(n\times n\) Hilbert space, \(\{Q_{a}\},a=1,2,3,\ldots,n^{2}\), which satisfies \[{\rm Tr}\left(Q_{a}Q_{b}^{\dagger}\right)=n\delta_{ab}. \tag{11}\] An arbitrary \(n\times n\) matrix \(M\) can then be expanded as \[M=\sum_{a}M_{a}Q_{a},\quad M_{a}=\frac{1}{n}{\rm Tr}\left(MQ_{a}^{\dagger}\right). \tag{12}\] Note that \[\psi_{1}^{\dagger}M\psi_{2}\psi_{3}^{\dagger}N\psi_{4}=\psi_{1i}^{\dagger}M_{ij}\psi_{2j}\psi_{3k}^{\dagger}N_{kl}\psi_{4l}=M_{ij}N_{kl}\psi_{1i}^{\dagger}\psi_{3k}^{\dagger}\psi_{4l}\psi_{2j}, \tag{13}\] where repeated indices are summed over. We can further expand \[M_{ij}N_{kl}=\sum_{ab}C_{ab}(Q_{a})_{ik}(Q_{b}^{\dagger})_{lj}, \tag{14}\] and the coefficient \(C_{ab}\) can be determined by multiplying both sides of Eq. (14) with \((Q_{c}^{\dagger})_{\lambda i}\) and \((Q_{d})_{\rho l}\) and summing over \(i,l\), \[(Q_{c}^{\dagger}M)_{\lambda j}(Q_{d}N^{\rm T})_{\rho k}=\sum_{ab}C_{ab}(Q_{c}^{\dagger}Q_{a})_{\lambda k}(Q_{d}Q_{b}^{\dagger})_{\rho j}. \tag{15}\] Setting \(\lambda=k\) and \(\rho=j\) and summing over \(\lambda,k\), we obtain \[\text{Tr}\left(Q_{c}^{\dagger}MQ_{d}N^{\text{T}}\right)=n^{2}\sum_{ab}C_{ab}\delta_{ac}\delta_{bd}=n^{2}C_{cd}, \tag{10}\] therefore, the coefficient \(C_{ab}\) is given by \[C_{ab}=\frac{1}{n^{2}}\text{Tr}\left(Q_{a}^{\dagger}MQ_{b}N^{\text{T}}\right), \tag{11}\] yielding \[M_{ij}N_{kl}=\frac{1}{n^{2}}\sum_{ab}\text{Tr}\left(Q_{a}^{\dagger}MQ_{b}N^{\text{T}}\right)(Q_{a})_{ik}(Q_{b}^{\dagger})_{lj}. \tag{12}\] Combining Eq. (10) and Eq. (12), we obtain \[\psi_{1}^{\dagger}M\psi_{2}\psi_{3}^{\dagger}N\psi_{4}=\frac{1}{n^{2}}\sum_{ab}\text{Tr}\left(Q_{a}^{\dagger}MQ_{b}N^{\text{T}}\right)\psi_{1}^{\dagger}Q_{a}\left(\psi_{3}^{\dagger}\right)^{\text{T}}\psi_{4}^{\text{T}}Q_{b}^{\dagger}\psi_{2}, \tag{13}\] hence we have proved the Fierz identity.

### Decompositions from ferromagnetic fluctuations

The four-fermion interaction in Eq. (4) in the main text can be decomposed into orbital and spin channels independently.
Set \(n=2\) and choose \[\sigma_{0}i\sigma_{y},\quad\sigma_{x}i\sigma_{y},\quad\sigma_{y}i \sigma_{y},\quad\sigma_{z}i\sigma_{y}\qquad\text{for orbital subspace},\] \[s_{0}is_{y},\quad s_{x}is_{y},\quad s_{y}is_{y},\quad s_{z}is_{y} \qquad\text{for spin subspace}.\] For orbital decompositions, we treat \(c^{\dagger}(\mathbf{p})\) as a two component vector in orbital space and set \(M=N=\sigma_{0}\) in Eq. (13) to obtain \[c^{\dagger}(\mathbf{p})\sigma_{0}\vec{s}\;c(\mathbf{k})\cdot c^{\dagger}(- \mathbf{p})\sigma_{0}\vec{s}\;c(-\mathbf{k})=\frac{1}{2}\sum_{a=0,x,y,z}c^{ \dagger}(\mathbf{p})(\sigma_{a}i\sigma_{y})\otimes\vec{s}\left[c^{\dagger}(- \mathbf{p})\right]^{\text{T}}\cdot\left[c(-\mathbf{k})\right]^{\text{T}} \left(\sigma_{a}i\sigma_{y}\right)^{\dagger}\otimes\vec{s}\;c(\mathbf{k}). \tag{14}\] For spin decompositions, we treat \(c^{\dagger}(\mathbf{p})\) as a two component vector in spin space and set \(M=N=s_{x},s_{y},s_{z}\) in Eq. (13) separately, yielding \[c^{\dagger}(\mathbf{p})\sigma_{0}s_{x}c(\mathbf{k})c^{\dagger}(- \mathbf{p})\sigma_{0}s_{x}c(-\mathbf{k}) =\frac{1}{2}\left(-b_{\mathbf{p},0}^{\dagger}b_{\mathbf{k},0}-b_{ \mathbf{p},x}^{\dagger}b_{\mathbf{k},x}+b_{\mathbf{p},y}^{\dagger}b_{\mathbf{ k},y}+b_{\mathbf{p},z}^{\dagger}b_{\mathbf{k},z}\right), \tag{15}\] \[c^{\dagger}(\mathbf{p})\sigma_{0}s_{y}c(\mathbf{k})c^{\dagger}(- \mathbf{p})\sigma_{0}s_{y}c(-\mathbf{k}) =\frac{1}{2}\left(-b_{\mathbf{p},0}^{\dagger}b_{\mathbf{k},0}+b_{ \mathbf{p},x}^{\dagger}b_{\mathbf{k},x}-b_{\mathbf{p},y}^{\dagger}b_{\mathbf{ k},y}+b_{\mathbf{p},z}^{\dagger}b_{\mathbf{k},z}\right),\] (16) \[c^{\dagger}(\mathbf{p})\sigma_{0}s_{z}c(\mathbf{k})c^{\dagger}(- \mathbf{p})\sigma_{0}s_{z}c(-\mathbf{k}) =\frac{1}{2}\left(-b_{\mathbf{p},0}^{\dagger}b_{\mathbf{k},0}+b_{ \mathbf{p},x}^{\dagger}b_{\mathbf{k},x}+b_{\mathbf{p},y}^{\dagger}b_{\mathbf{ k},y}-b_{\mathbf{p},z}^{\dagger}b_{\mathbf{k},z}\right), \tag{17}\] where we have defined \(b_{\mathbf{p},a}^{\dagger}=c^{\dagger}(\mathbf{p})\sigma_{0}\otimes\left(s_{a} is_{y}\right)\left[c^{\dagger}(-\mathbf{p})\right]^{\text{T}}\). After summing over three equations above, we find \[c^{\dagger}(\mathbf{p})\sigma_{0}\vec{s}\;c(\mathbf{k})\cdot c^{\dagger}(- \mathbf{p})\sigma_{0}\vec{s}\;c(-\mathbf{k}) =\frac{1}{2}\left(-3b_{\mathbf{p},0}^{\dagger}b_{\mathbf{k},0}+b_{ \mathbf{p},x}^{\dagger}b_{\mathbf{k},x}+b_{\mathbf{p},y}^{\dagger}b_{ \mathbf{k},y}+b_{\mathbf{p},z}^{\dagger}b_{\mathbf{k},z}\right). \tag{18}\] Combine the results of Eq. (14) and Eq. (18), we obtain \[c^{\dagger}(\mathbf{p})\sigma_{0}\vec{s}\;c(\mathbf{k})\cdot c^{ \dagger}(-\mathbf{p})\sigma_{0}\vec{s}\;c(-\mathbf{k}) =\frac{1}{4}\sum_{\begin{subarray}{c}a=0,x,y,z\\ b=x,y,z\end{subarray}}c^{\dagger}(\mathbf{p})\sigma_{a}i\sigma_{y}\otimes s_{b} is_{y}\left[c^{\dagger}(-\mathbf{p})\right]^{\text{T}}\times\left[c(-\mathbf{k}) \right]^{\text{T}}(\sigma_{a}i\sigma_{y})^{\dagger}\otimes(s_{b}is_{y})^{ \dagger}c(\mathbf{k}), \tag{19}\] where the first term in Eq. (18) is neglected since only attractive channels favor superconductivity. ### Decompositions from Hund's coupling The Hund's coupling interaction is given in Eq. (16) and we find that only orbital space decomposition is required. Accordingly, we set \(M=(\sigma_{0}+\sigma_{z})/2\) and \(N=(\sigma_{0}-\sigma_{z})/2\) in Eq. (13). The \(2\times 2\) orbital space is spanned \(\{Q_{a}\}=\{\sigma_{a}i\sigma_{y}\}\) as well. 
After a complete analysis, we obtain \[\mathrm{Tr}\left(Q_{a}^{\dagger}MQ_{b}N^{\mathrm{T}}\right)=\left\{ \begin{array}{ll}1,&\quad\text{if}\quad\{a,b\}=\{0,0\},\{0,z\},\{z,0\}\text{ or }\{z,z\},\\ 0,&\quad\text{otherwise}.\end{array}\right. \tag{116}\] Therefore, we obtain \[c^{\dagger}(\mathbf{p})\frac{(\sigma_{0}+\sigma_{z})}{2}\otimes \vec{s}\;c(\mathbf{k})\cdot c^{\dagger}(-\mathbf{p})\frac{(\sigma_{0}-\sigma_ {z})}{2}\otimes\vec{s}\;c(-\mathbf{k})=\frac{1}{4}\sum_{\{i,j\}}c^{\dagger}( \mathbf{p})\sigma_{i}i\sigma_{y}\otimes\vec{s}\left[c^{\dagger}(-\mathbf{p}) \right]^{\mathrm{T}}\cdot\left[c(-\mathbf{k})\right]^{\mathrm{T}}\left(\sigma _{j}i\sigma_{y}\right)^{\dagger}\otimes\vec{s}\;c(\mathbf{k}), \tag{117}\] where \(\{i,j\}\) is chosen from \(\{0,0\},\{0,z\},\{z,0\},\{z,z\}\). The spin decomposition is the same as Eq. (101), hence we obtain the Fierz identity relation for Hund's coupling as follows, \[c^{\dagger}(\mathbf{p})\frac{(\sigma_{0}+\sigma_{z})}{2}\otimes \vec{s}\;c(\mathbf{k})\cdot c^{\dagger}(-\mathbf{p})\frac{(\sigma_{0}-\sigma_ {z})}{2}\otimes\vec{s}\;c(-\mathbf{k})=\frac{1}{8}\sum_{\begin{subarray}{c} \{i,j\}\\ b=x,y,z\end{subarray}}c^{\dagger}(\mathbf{p})\sigma_{i}i\sigma_{y}\otimes s_{ \mathrm{t}}is_{y}\left[c^{\dagger}(-\mathbf{p})\right]^{\mathrm{T}}\\ \times\left[c(-\mathbf{k})\right]^{\mathrm{T}}\left(\sigma_{j}i \sigma_{y}\right)^{\dagger}\otimes(s_{\mathrm{t}}is_{y})^{\dagger}c(\mathbf{k}). \tag{118}\] ## Appendix B Evaluations of \(\gamma,\beta_{a},\tilde{\beta}\) in the Ginzburg-Landau free energy From Fig. 9(a), we obtain \[i\gamma =-T\sum_{k}\mathrm{Tr}\left[s_{z}G_{0}(k)(-i\sigma_{y}s_{z})G_{0} ^{\mathrm{T}}(-k)(-\sigma_{y})^{\dagger}G_{0}(k)\right]\] \[=-4iT\sum_{\mathbf{k},n}\frac{\left[(i\omega_{n}+\mu)^{2}+k_{r}^{2 }\right]\left(-i\omega_{n}+\mu\right)}{\left[(i\omega_{n}+\mu)^{2}-k_{r}^{2} \right]^{2}\left[(-i\omega_{n}+\mu)^{2}-k_{r}^{2}\right]}\] \[=-4i\sum_{\mathbf{k}}\int_{C}\frac{dz}{2\pi i}f(z)\frac{\left[(z +\mu)^{2}+k_{r}^{2}\right](-z+\mu)}{\left[(z+\mu)^{2}-k_{r}^{2}\right]^{2} \left[(-z+\mu)^{2}-k_{r}^{2}\right]}\] \[\simeq 4iN(0)\int d\epsilon\frac{f^{\prime}(\epsilon)}{8\mu}\] \[=-\frac{iN(0)}{\mu}. \tag{119}\] The coefficients \(\beta_{a}\) and \(\tilde{\beta}\) can be distinguished by the spin vertices and symmetry factors in the Feynman diagrams shown in Fig. 9(b) and 9(c), which are \[\beta_{a} =\frac{\beta}{4}\mathrm{Tr}\left[(-s_{+})(-s_{+})^{\dagger}(-s_{+ })(-s_{+})^{\dagger}\right]=\frac{\beta}{4}, \tag{120}\] \[4\tilde{\beta} =\beta\mathrm{Tr}\left[(-s_{+})(-is_{z})^{\dagger}(-is_{z})(-s_{+ })^{\dagger}\right]=\beta, \tag{121}\] where \(s_{+}=(s_{0}+s_{z})/2\) and \[\beta =T\sum_{k}\mathrm{Tr}\big{[}(i\sigma_{y})G_{0}^{\mathrm{T}}(-k)( i\sigma_{y})^{\dagger}G_{0}(k)(i\sigma_{y})G_{0}^{\mathrm{T}}(-k)\] \[\times(i\sigma_{y})^{\dagger}G_{0}(k)\big{]}\] \[=2T\sum_{\mathbf{k},n}\frac{(\omega_{n}^{2}+\mu^{2}-k_{p}^{2}+k_{ z}^{2})^{2}-4\omega_{n}^{2}k_{p}^{2}+4k_{z}^{2}(\mu^{2}-k_{p}^{2})}{\left[(i\omega_{n}+ \mu)^{2}-k_{r}^{2}\right]^{2}\left[(-i\omega_{n}+\mu)^{2}-k_{r}^{2}\right]^{2} \left[(-i\omega_{n}+\mu)^{2}-k_{r}^{2}\right]^{2}}\] \[\simeq 2N(0)T\sum_{n}\int d\epsilon\frac{\omega_{n}^{4}+4\mu^{4}}{( \omega_{n}^{2}+4\mu^{2})^{2}(\omega_{n}^{2}+\epsilon^{2})^{2}}\] \[\simeq 4\pi N(0)T\int_{\pi T}^{\infty}\frac{d\omega}{2\pi T}\frac{ \omega^{4}+4\mu^{4}}{\omega^{3}(\omega^{2}+4\mu^{2})^{2}}\] \[\simeq\frac{N(0)}{4\pi^{2}T^{2}}, \tag{122}\] where the condition \(\pi T\ll\mu\) is utilized throughout the calculation. 
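The vertex traces that fix \(\beta_{a}\) and \(\tilde{\beta}\) above are elementary and can be confirmed in a few lines (numpy assumed; a sketch under our conventions, noting that the overall minus signs of the vertices cancel in the fourfold product):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp_ = (s0 + sz) / 2   # the projector s_+ appearing in the vertices

tr_a = np.trace(sp_ @ sp_.conj().T @ sp_ @ sp_.conj().T)              # vertex trace giving beta_a
tr_b = np.trace(sp_ @ (1j * sz).conj().T @ (1j * sz) @ sp_.conj().T)  # vertex trace giving 4*beta_tilde
print(tr_a.real, tr_b.real)   # 1.0 1.0, so beta_a = beta/4 and beta_tilde = beta/4
```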
## Appendix C Topological invariant of \(\mathcal{H}_{\mathrm{BdG}}^{1D}(k_{z})\)

The topological invariant of \(\mathcal{H}_{\mathrm{BdG}}^{1D}(k_{z})\) can be evaluated from the method discussed in Refs. [61; 62],
\[N_{\mathrm{1D}}=-\frac{i}{\pi}\int_{k_{z}=0}^{k_{z}=\pi}\frac{dz(k_{z})}{z(k_{z})}, \tag{123}\]
where \(z(k_{z})\equiv e^{i\theta(k_{z})}=\mathrm{det}Q(k_{z})/|\mathrm{det}Q(k_{z})|\) and \(Q(k_{z})\) is the off-diagonal block matrix of \(\tilde{\mathcal{H}}_{\mathrm{BdG}}^{1D}(k_{z})\), which is the original 1D BdG Hamiltonian under the rotation in Nambu space (\(U=e^{-i\frac{\pi}{4}\tau_{y}}\)),
\[\tilde{\mathcal{H}}_{\mathrm{BdG}}^{1D}(k_{z})=U\mathcal{H}_{\mathrm{BdG}}^{1D}(k_{z})U^{\dagger}=\left[\begin{array}{cc}0&Q(k_{z})\\ Q^{\dagger}(k_{z})&0\end{array}\right]. \tag{124}\]
Via direct calculations, the expression of \(Q(k_{z})\) is
\[Q(k_{z})=(m-2\cos k_{z})\eta_{z}+(2t_{2}\sin k_{z}+i\Delta_{a})\eta_{x}-(\mu+M_{z}s_{z})\eta_{0}, \tag{10}\]
where \(\eta_{i}\) denotes the Pauli matrices in the \(2\times 2\) Hilbert space spanned by \(\langle\sigma_{z}\tau_{z}=+1|\sigma_{i}\tau_{j}|\sigma_{z}\tau_{z}=-1\rangle\). Since \(Q(k_{z})\) is diagonal in the spin space, the topological invariant in Eq. (10) can be written as \(N_{\rm 1D}=N_{\rm 1D}^{+}+N_{\rm 1D}^{-}\), where \(\pm\) denotes the eigenvalues of \(s_{z}\). We further note that
\[{\rm det}Q_{\pm}(k_{z})=\Big{[}(\mu\pm M_{z})^{2}+\Delta_{a}^{2}-(m-2\cos k_{z})^{2}-4t_{2}^{2}\sin^{2}k_{z}-4i\Delta_{a}t_{2}\sin k_{z}\Big{]}. \tag{11}\]
From the definition in Eq. (10), we obtain
\[N_{\rm 1D}^{+}=\left\{\begin{array}{ll}1&\text{if}\quad{\rm det}Q_{+}(0)\,{\rm det}Q_{+}(\pi)<0,\\ 0&\text{if}\quad{\rm det}Q_{+}(0)\,{\rm det}Q_{+}(\pi)>0.\end{array}\right. \tag{12}\]
A similar conclusion holds for \(N_{\rm 1D}^{-}\). For the nodal-line system considered throughout this work, the Hamiltonian at \(k_{z}=\pi\) always yields \({\rm det}Q_{\pm}(\pi)<0\). After summing over the spin indices, \(N_{\rm 1D}\) can be generally determined as
\[N_{\rm 1D}=\left\{\begin{array}{ll}2&\text{if}\quad(m-2)^{2}<(\mu-M_{z})^{2}+\Delta_{a}^{2},\\ 1&\text{if}\quad(\mu-M_{z})^{2}+\Delta_{a}^{2}<(m-2)^{2}\\ &\text{or}\quad(m-2)^{2}<(\mu+M_{z})^{2}+\Delta_{a}^{2},\\ 0&\text{otherwise}.\end{array}\right. \tag{13}\]
Since \(m\equiv 6-t_{1}-2\cos k_{x}-2\cos k_{y}\) by definition, the first inequality above represents the annulus region enclosed by the two nodal rings on the "small" FS, and the second inequality denotes two other annulus regions between the "small" and "large" FS.

## Appendix D \(A_{u}\) pairing channel

When the \({\bf d}\)-vector is parallel to the magnetization axis, the pairing function belongs to the \(A_{u}\) representation in the magnetic point group listed in Table 1. In this case, the system explicitly breaks time-reversal symmetry and the BdG Hamiltonian is
\[\mathcal{H}_{\rm BdG}^{\parallel}({\bf k})=(6-t_{1}-2\cos k_{x}-2\cos k_{y}-2\cos k_{z})\sigma_{z}s_{0}\tau_{z}+2t_{2}\sin k_{z}\sigma_{x}s_{0}\tau_{0}-\mu\sigma_{0}s_{0}\tau_{z}-M_{z}\sigma_{0}s_{z}\tau_{z}+\Delta_{z}\sigma_{y}s_{x}\tau_{y}, \tag{14}\]
which only preserves inversion (\(\hat{\mathcal{I}}=\sigma_{z}\tau_{z}\)) and particle-hole (\(\hat{\mathcal{P}}=\tau_{x}\mathcal{K}\)) symmetries. \(\mathcal{H}_{\rm BdG}^{\parallel}({\bf k})\) belongs to class C+\(\mathcal{I}\) because \((\hat{\mathcal{P}}\hat{\mathcal{I}})^{2}=-1\) [48].
The quasiparticle spectrum \(E(\mathbf{k})\) contains nodal surfaces that satisfy
\[\left[f^{2}(\mathbf{k})+4t_{2}^{2}\sin^{2}k_{z}-\mu^{2}-\Delta_{z}^{2}-M_{z}^{2}\right]^{2}+16t_{2}^{2}\Delta_{z}^{2}\sin^{2}k_{z}=4\left(\mu^{2}+\Delta_{z}^{2}\right)M_{z}^{2}, \tag{10}\]
where \(f(\mathbf{k})\equiv 6-t_{1}-2\cos k_{x}-2\cos k_{y}-2\cos k_{z}\). Eq. (10) describes a pair of toroidal Bogoliubov Fermi surfaces plotted in Fig. 10. Different from Refs. [63; 64], the nodal surfaces in Eq. (10) are not topologically protected because of the absence of Pfaffian-like topological charges in class C+\(\mathcal{I}\) [48]. As a simple proof, there are two \(p\)-wave pairing terms \(\sin k_{x}\sigma_{z}\tau_{y}\) and \(\sin k_{x}\sigma_{z}s_{z}\tau_{x}\) which preserve both \(\hat{\mathcal{P}}\) and \(\hat{\mathcal{I}}\) but gap out the nodal surfaces from Eq. (10).

Figure 9: Feynman diagrams relevant for the coefficients \(\gamma,\beta_{a},\tilde{\beta}\) in the GL free energy. Expressions of vertices are not shown in the figure. The arrows in (b) and (c) denote the fermion spin sectors.

Figure 10: A pair of toroidal nodal surfaces appear on the equators in the \(A_{u}\) pairing channel, which can be gapped out by symmetry-preserved perturbations.
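Returning to the winding-number criterion of Appendix C, it can be checked numerically from the quoted expression for \(\mathrm{det}Q_{\pm}(k_{z})\). The sketch below is ours; the model parameters are taken from the Fig. 8 caption, while the surface momenta and the sampling are illustrative choices. The resulting windings are consistent with the determinant-sign criterion, giving \(N_{\rm 1D}=1\) at \(k_{x}=0.65\) and \(N_{\rm 1D}=2\) at \(k_{x}=1.05\) (at \(k_{y}=0\)).

```python
import numpy as np

# Model parameters as quoted in the Fig. 8 caption; Delta_a = 0.2.
t1, t2, mu, Mz, Da = 1.0, 0.5, 0.5, 0.3, 0.2

def winding(kx, ky, s, nk=20001):
    # Phase winding of det Q_s(kz) for kz in [0, pi]; s = +1, -1 labels the s_z eigenvalue.
    m = 6 - t1 - 2 * np.cos(kx) - 2 * np.cos(ky)
    kz = np.linspace(0.0, np.pi, nk)
    detQ = ((mu + s * Mz)**2 + Da**2 - (m - 2 * np.cos(kz))**2
            - 4 * t2**2 * np.sin(kz)**2 - 4j * Da * t2 * np.sin(kz))
    theta = np.unwrap(np.angle(detQ))
    return (theta[-1] - theta[0]) / np.pi     # N_1D^s = -(i/pi) * integral of dz/z

for kx in (0.65, 1.05):                       # surface momenta highlighted in Fig. 8(c),(d)
    n_total = abs(winding(kx, 0.0, +1)) + abs(winding(kx, 0.0, -1))
    print(kx, round(n_total))                 # 0.65 -> 1, 1.05 -> 2
```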
2309.12388
Grand canonical ensemble of a $d$-dimensional Reissner-Nordström black hole in a cavity
The grand canonical ensemble of a $d$-dimensional Reissner-Nordstr\"om black hole space in a cavity is analyzed. The realization of this ensemble is made through the Euclidean path integral approach by giving the Euclidean action for the black hole with the correct topology, and boundary conditions corresponding to a cavity, where the fixed quantities are the temperature and the electric potential. One performs a zero loop approximation to find and analyze the stationary points of the reduced action. This yields two solutions for the electrically charged black hole, $r_{+1}$, which is the smaller and unstable, and $r_{+2}$, which is the larger and stable. One also analyzes the most probable configurations, which are either a stable charged black hole or hot flat space, mimicked by a nongravitating charged shell. Making the correspondence between the action and the grand potential, one can get the black hole thermodynamic quantities, such as the entropy, the mean charge, the mean energy, and the thermodynamic pressure, as well as the Smarr formula, shown to be valid only for the unstable black hole. We find that thermodynamic stability is related to the positivity of the heat capacity at constant electric potential and area of the cavity. We also comment on the most favorable thermodynamic phases and phase transitions. We then choose $d = 5$, which is singled out naturally from the other higher dimensions as it provides an exact solution for the problem, and apply all the results previously found. The case $d = 4$ is mentioned. We compare thermodynamic radii with the photonic orbit radius and the Buchdahl-Andr\'easson-Wright bound radius in $d$-dimensional Reissner-Nordstr\"om spacetimes and find they are unconnected, showing that the connections displayed in the Schwarzschild case are not generic, rather they are very restricted holding only in the pure gravitational situation.
Tiago V. Fernandes, José P. S. Lemos
2023-09-21T18:00:00Z
http://arxiv.org/abs/2309.12388v2
# Grand canonical ensemble of a \(d\)-dimensional Reissner-Nordstrom black hole in a cavity

###### Abstract

The grand canonical ensemble of a \(d\)-dimensional Reissner-Nordstrom black hole space in a cavity is analyzed in every possible aspect. The analysis starts with the realization of the grand canonical ensemble through the Euclidean path integral approach by giving the Euclidean action for the \(d\)-dimensional Reissner-Nordstrom black hole with the correct topology, and boundary conditions corresponding to a cavity. More precisely, the fixed quantities of the ensemble are the temperature and the electric potential at the cavity boundary. One then performs a zero loop approximation, to find and analyze the stationary points of the reduced action. This yields two solutions for the electrically charged black hole, the smaller, \(r_{+1}\), and the larger, \(r_{+2}\). Through perturbations of the reduced action around the stationary points, one finds stability criteria for the solutions of the black hole that show that \(r_{+1}\) is unstable and \(r_{+2}\) is stable. Moreover, one analyzes the most probable configurations for each value of the fixed quantities at the boundary, with the configurations being either a stable charged black hole or hot flat space. One also compares the stable black hole with a nongravitating charged shell, which serves as a model for an electrically charged hot flat space. By making the correspondence between the action already evaluated and the grand canonical ensemble potential of thermodynamics one can get the entropy, the mean charge, the mean energy, and the thermodynamic pressure, as well as the Smarr formula, here shown to be valid only for the unstable black hole \(r_{+1}\). We make a stability analysis in terms of thermodynamic variables, which yields that thermodynamic stability is related to the positivity of the heat capacity at constant electric potential and constant area of the cavity. We also comment on the most favorable thermodynamic phases and deduce the possible phase transitions. We then pick up a specific dimension, \(d=5\), which is singled out naturally from the other higher dimensions as it provides an exact solution for the problem, and apply all the results previously found. The case \(d=4\) is concisely put in an appendix where the results are directly equated with previous works. We also compare thermodynamic radii with the photonic orbit radius and the Buchdahl-Andreasson-Wright bound radius in \(d\)-dimensional Reissner-Nordstrom spacetimes and find they are unconnected, showing that the connections displayed in the Schwarzschild case are not generic, rather they are very restricted equalities holding only in the pure gravitational situation.

black holes, Euclidean path integral, grand canonical ensemble, thermodynamics

## I Introduction

Black hole thermodynamics has been studied since the hypothesis of Bekenstein that a black hole has entropy [1], and since the four laws of black hole mechanics have been introduced [2]. It was put on a firm ground after Hawking discovered that black holes radiate field quanta with a thermal spectrum at temperature \(T_{\rm H}=\frac{\kappa}{2\pi}\), the Hawking temperature in Planck units, where \(\kappa\) is the usual black hole surface gravity [3]. Moreover, through path integral methods, it was shown that the correct vacuum state that sits at the black hole horizon and enables the radiation to be produced is the Hartle-Hawking vacuum state [4].
Taking all these results together, it was possible to deduce that a black hole is indeed a thermodynamic object with entropy \(S=\frac{A_{+}}{4}\), the Bekenstein-Hawking entropy, where \(A_{+}\) is the black hole horizon area. A proper construction of the thermodynamics of a black hole can be done by building a statistical ensemble for the space under analysis. In the canonical and grand canonical ensembles one has to find the partition function \(Z\) of the corresponding space. A powerful approach to find \(Z\) is the Euclidean path integral approach to quantum gravity. With this method one finds that the partition function of the space under analysis is given by the path integral over the possible Euclidean metrics \(g\) and fields \(\phi\) of the Euclidean action \(I[g,\phi]\), with the restriction that the space and the fields are periodic with imaginary time length at the boundary being the inverse temperature, i.e., \(Z=\int Dg\,D\phi\,\mathrm{e}^{-I[g,\phi]}\). To make progress in the computation of the partition function, one can use a zero loop approximation, where the only contribution to the path integral is the configuration which minimizes the action \(I\) with respect to the relevant parameters. This action is the classical action and the partition function is then \(Z=\mathrm{e}^{-I}\). In spite of \(Z\) being constituted by the classical action alone in this approximation, it vanishes in the classical limit, i.e., it disappears when the Planck constant goes to zero. Thus, the partition function still has a quantum gravitational character, indeed one is dealing with the semiclassical approximation. The partition function of the canonical or grand canonical ensembles can then be related to the thermodynamic canonical or grand canonical potential and the thermodynamics of the space and fields can be worked out. These ideas were applied with some success to the Schwarzschild and Reissner-Nordstrom black holes in which the canonical and grand canonical ensembles, respectively, were formulated with heat reservoir boundaries with infinite radius, i.e., at infinity [5]. There, it was shown that the configuration which corresponds to a stationary point of \(I\) consisted in a black hole with Hawking temperature and Bekenstein-Hawking entropy. However, it was also shown that, at least for the Schwarzschild black hole, the zero loop approximation is not valid, since this configuration has negative heat capacity, indeed it is an extremum that does not minimize the action \(I\). In fact, in this set up the Schwarzschild solution corresponds to a saddle point of \(I\) and thus acts as an instanton to hot flat space [6]. Remarkably, it was later found that if the boundary, at which a temperature is defined, was put at a finite radius, specifically, at a radius less or equal to the circular photon orbit radius, then the instability would disappear [7]. York [8] then realized that a black hole space in a spherical cavity at finite radius attached to a heat reservoir at finite temperature was the correct setup to construct the canonical or grand canonical ensembles, and through the path integral approach it was possible to obtain sensible results. In this scheme, for the Schwarzschild black hole there are two stationary points of the action \(I\). The one which has the least mass is unstable and corresponds to the Gibbons-Hawking black hole when the radius of the cavity tends to infinity. The other, which has the highest mass, is stable. 
Further developments to understand the Schwarzschild black hole geometry in the quantum gravity context of York's formalism were performed in [9], the inclusion of matter fields within the formalism was sketched in [10], and the relation between the various statistical physics ensembles was studied in [11]. Using York's formalism, the grand canonical ensemble of a Reissner-Nordstrom black hole in a cavity, where the temperature and the electric potential are specified at the boundary, was obtained in [12]. An important development was achieved in [13] where the canonical ensemble for arbitrary configurations of self-gravitating systems was analyzed. In general relativistic asymptotically anti-de Sitter spacetimes with black holes, the canonical ensemble was studied in three and four dimensions [14], in the two-dimensional Teitelboim-Jackiw theory the corresponding asymptotically anti-de Sitter black hole was analyzed in the canonical ensemble [15], and the grand canonical ensemble of the anti-de Sitter Reissner-Nordstrom black hole in four dimensions in general relativity was put forward in [16]. Several studies in higher dimensions have also been performed. Indeed, the stability and the negative mode for a higher-dimensional Schwarzschild black hole in a finite cavity was studied in [17], the canonical ensemble of Schwarzschild-anti-de Sitter black holes was done in [18], the \(d=5\) and generic \(d\) Schwarzschild black holes were analyzed in [19; 20]. Gravastars in the canonical ensemble were studied in [21]. The Euclidean path integral approach had its domain of study extended from black holes to black branes. For instance, it has been conjectured that the mechanical stability of black branes is interwoven with their local thermodynamic stability [22]. This conjecture has been analyzed and proved in certain cases, see [23; 24; 25]. It is worth noting that there are similarities in York's formalism and the analysis of the thermodynamics of a hot thin shell of matter. This has been established for thin shells with an outer Schwarzschild spacetime in [26] and for thin shells with an outer Reissner-Nordstrom spacetime [27]. Hot thin shells were then studied in higher dimensions in a Schwarzschild-Tangherlini spacetime in [28] and in a Reissner-Nordstrom-Tangherlini spacetime in [29]. The Reissner-Nordstrom case has been revisited in [30]. In this work, we generalize the work in [12] for an arbitrary number of dimensions, and we generalize the work in [19; 20] by including electric charge. Thus, we construct the grand canonical ensemble of a Reissner-Nordstrom space in a cavity for an arbitrary number of dimensions \(d\). We compute the two possible solutions and analyze their stability, by looking at the reduced Euclidean action. We obtain the thermodynamic properties of the stable solution and comment about the similarities between this treatment and the analysis of the thermodynamics of a charged spherical thin shell in an arbitrary number of dimensions. We also compare the important thermodynamic radii that arise in the grand canonical ensemble with the photonic orbit radius and the Buchdahl-Andreasson-Wright bound radius [31] of \(d\)-dimensional Reissner-Nordstrom spacetimes to confirm or dispel some coincidences. In the thermodynamic investigation of stability we use the analysis set in [32]. This paper is organized as follows. In Sec. 
II, we describe the path integral approach and apply it to the \(d\)-dimensional Reissner-Nordstrom black hole space in a cavity, stipulating the statistical mechanics grand canonical partition function \(Z\), which is found from the path integral of the action \(I\). In Sec. III, we perform the zero loop approximation, obtain the reduced action and its stationary points, make a stability analysis, and look for the most probable configurations. In Sec. IV, we study the system as a grand canonical thermodynamic system, provide the connection between the grand canonical potential and the statistical mechanics grand canonical partition function \(Z\) found from the action \(I\) of the path integral, analyze the thermodynamic quantities and relations, perform a stability analysis in terms of thermodynamic variables, and look for the most favorable thermodynamic phase and phase transitions. In Sec. V, we apply the analysis to the case of \(d=5\), which possesses an explicit exact formula for the stationary points, and we study in detail the zero loop approximation and the thermodynamics with plots. In Sec. VI, we present the conclusions and further discussions. There are several appendices which supplement the analysis of the paper. In Appendix A, we study the photonic orbit radius and the Buchdahl-Andreasson-Wright bound in \(d\)-dimensional Reissner-Nordstrom spacetime to compare with the radii featured in the grand canonical ensemble. In Appendix B, we study the smoothness of the critical points, namely, hot flat space and the extremal black hole. In Appendix C, we make further comparisons between the grand canonical ensemble of statistical mechanics and the grand potential of thermodynamics. In Appendix D, we apply the analysis to the case of \(d=4\) and compare with the results of previous works. In Appendix E, we derive the null geodesic sphere, i.e., the circular photon orbit radius, in the Reissner-Nordstrom geometry, giving its value in terms of the horizon radius, the electric potential there, and the spacetime dimension, so that we can compare it to the thermodynamic radii. We assume natural units, i.e., \(G=1\), \(c=1\), and \(\hbar=1\).

## II Grand canonical ensemble through the path integral approach

### The path integral and thermodynamics

Assume a system comprised of a \(d\)-dimensional spacetime \(\mathcal{M}\) with associated metric \(\mathbf{g}\) and Maxwell vector field \(\mathbf{A}\). The spacetime \(\mathcal{M}\) can be foliated by the history of a spacelike hypersurface \(\Sigma_{t}\) with induced metric \(\mathbf{h}\), where \(t\) is the parameter of a congruence of timelike curves that intersect \(\Sigma_{t}\). We also consider that each \(\Sigma_{t}\) has a boundary \(\partial\Sigma_{t}\), and we can build the history of \(\partial\Sigma_{t}\), which we denominate \(\mathcal{B}\), with an induced metric \(\mathbf{\gamma}\). The path integral approach states that the evolution of the system from one state to another is given by the path integral \(\int D[\mathbf{g}]D[\mathbf{A}]\mathrm{e}^{iI_{L}[\mathbf{g},\mathbf{A}]}\), where \(I_{L}[\mathbf{g},\mathbf{A}]\) is the Lorentzian action. The Euclidean path integral approach of gravitational statistical mechanics and thermodynamics, which we follow, prescribes, first, an analytic extension of the spacetime to a Euclidean space with a Wick transformation of the time, namely, \(t=-i\tau\).
Second, it states that the partition function of this Euclidean space is given by
\[Z(\beta,\phi,\mathbf{\gamma})=\int D[\mathbf{g}]D[\mathbf{A}]\mathrm{e}^{-I[\beta,\phi,\mathbf{\gamma};\mathbf{g},\mathbf{A}]}\,, \tag{1}\]
where \(I[\beta,\phi,\mathbf{\gamma};\mathbf{g},\mathbf{A}]\) is the Euclidean action of the system, \(D[\mathbf{g}]\) and \(D[\mathbf{A}]\) are the respective measures of the integral for the Euclidean metric and the Maxwell field, \(\beta\) is the inverse temperature at \(\mathcal{B}\), and \(\phi\) is the electrostatic potential at \(\mathcal{B}\), both of which will be defined below. When performing the integral, there are several considerations one must take into account, as the notation in Eq. (1) is too abstract. First, the path integral has the requirement that the Euclidean metric is periodic in time. This is motivated by the usual construction of the partition function of large quantum systems. The path integral in \(\mathbf{g}\) is the sum of all the paths, i.e., of all the possible values of \(\mathbf{g}\), starting at a hypersurface \(\Sigma_{\tau_{0}}\) and then returning to \(\Sigma_{\tau_{0}}\), where for simplicity we adopt \(\tau_{0}=0\). Note that the path integral is also summing all the possible induced metrics in \(\Sigma_{t_{0}}\). Second, the path integral needs to be performed taking into consideration fixed data at the boundary \(\mathcal{B}\). The fixed data we are considering is the inverse temperature \(\beta\) given by
\[\beta=\int_{0}^{2\pi}(\gamma_{\tau\tau})^{1/2}d\tau\,, \tag{2}\]
which is also the proper Euclidean time length of \(\mathcal{B}\), by construction of the partition function, with \(\gamma_{\tau\tau}\) being the time-time component of the induced metric \(\mathbf{\gamma}\). In addition, the components of the vector field \(A^{a}\) at \(\mathcal{B}\) need to be fixed. More specifically, we define the electrostatic potential at \(\mathcal{B}\) as
\[\beta\phi=2\pi iA_{\tau}\big{|}_{\mathcal{B}}. \tag{3}\]
In the case the integral is well-defined and can be performed, the partition function \(Z\) of Eq. (1) will just depend on the inverse temperature \(\beta\), the electrostatic potential \(\phi\), and other components of the induced metric \(\mathbf{\gamma}\). We can relate the partition function of the grand canonical ensemble directly to the thermodynamic grand potential and therefore obtain the thermodynamics of the system. Note also that one can instead obtain the thermodynamic quantities via the usual derivatives of \(\ln Z\). The partition function can be written as \(Z(\beta,\phi,\mathbf{\gamma})_{\mathrm{GC}}=\mathrm{e}^{-\beta W[\beta,\phi,\mathbf{\gamma}]}\), where \(W\) is the grand potential.

### The Euclidean action

The Euclidean action of the system considered here is the Euclidean Hilbert-Einstein-Maxwell action given by
\[I=-\int_{\mathcal{M}}\left(\frac{R}{16\pi}-\frac{F_{ab}F^{ab}}{4}\right)\sqrt{g}\,d^{d}x-\frac{1}{8\pi}\int_{\mathcal{B}}(K-K_{0})\sqrt{\gamma}\,d^{d-1}x\,, \tag{4}\]
where \(R\) is the Ricci scalar, depending on the metric \(g_{ab}\) and its first and second derivatives, \(g\) is the determinant of \(g_{ab}\), \(F_{ab}=\nabla_{a}A_{b}-\nabla_{b}A_{a}\) is the strength field tensor for the Maxwell field \(A_{a}\), \(\nabla_{a}\) is the covariant derivative compatible with \(g_{ab}\), \(K\) is the trace of the extrinsic curvature of \(\mathcal{B}\), and \(K_{0}\) is the trace of the extrinsic curvature of \(\mathcal{B}\) embedded in flat space.
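For reference, the usual derivatives of \(\ln Z\) alluded to above are the textbook grand canonical relations, written here at fixed cavity data and assuming the standard identifications (the cavity-specific versions are worked out later, in Sec. IV):
\[W=-\frac{1}{\beta}\ln Z\,,\qquad \langle Q\rangle=\frac{1}{\beta}\left(\frac{\partial\ln Z}{\partial\phi}\right)_{\beta}\,,\qquad S=\left(1-\beta\frac{\partial}{\partial\beta}\right)\ln Z\bigg{|}_{\phi}\,,\qquad \langle E\rangle=W+TS+\phi\langle Q\rangle\,.\]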
We use the convention that given an outward unit normal vector \(n^{a}\) to the hypersurface \(\mathcal{B}\), then \(K=\nabla_{a}n^{a}\).

### Topology and boundary conditions

We are interested in evaluating the path integral with the boundary \(\mathcal{B}\) described by the history of a static spherical surface. It is convenient to assume that the dominant terms in the path integral will correspond to sets of metrics that are spherically symmetric. For coordinates \((\tau,y,\theta^{A})\), where \(\theta^{A}\) are the \(d-2\) spherical coordinates of the space, we assume a line element of the form
\[ds^{2}=b^{2}(y)d\tau^{2}+\alpha^{2}(y)dy^{2}+r^{2}(y)d\Omega^{2}\,, \tag{5}\]
where \(b(y)\), \(\alpha(y)\), and \(r(y)\) are the metric functions, functions of \(y\) alone, and \(d\Omega\) is the volume element of a \((d-2)\)-sphere. The volume element of the space is \(\sqrt{g}d^{d}x=b\alpha r^{d-2}d\tau dyd\Omega\). We impose boundary conditions on the metric. At \(y=0\), we require
\[b(0)=0\,, \tag{6}\]
\[\left.(b^{\prime}\alpha^{-1})\right|_{y=0}=1\,, \tag{7}\]
where \(b^{\prime}=\frac{db}{dy}\). The first condition comes from \(y=0\) being a degenerate hypersurface, since it corresponds to the black hole horizon. The degenerate hypersurface will have a geometry \(\mathbb{S}_{d-2}\), i.e., it is the geometry of a \((d-2)\)-sphere. Notice that hypersurfaces of constant \(y\) have a geometry \(\mathbb{S}_{1}\times\mathbb{S}_{d-2}\), in which the length of \(\mathbb{S}_{1}\) smoothly goes to zero as \(y\) goes to zero. The second condition is required so that the geometry is regular, i.e., there is no conical singularity at \(y=0\). Defining \(r^{\prime}=\frac{dr}{dy}\), we also require that
\[\left(\frac{r^{\prime}}{\alpha}\right)\bigg{|}_{y=0}=0\,, \tag{8}\]
so that the hypersurface at \(y=0\) inherits the property that the normal to the black hole horizon \(n_{a}=(0,r^{\prime},0,0)\big{|}_{y=0}\) is a null vector. In even dimensions, the condition given in Eq. (8) can be motivated by requiring that \(\mathcal{M}\) has an Euler number \(\chi=2\), but in odd dimensions, there is no equivalent motivation, and since there is no effective difference between even and odd dimensions in this context, it is better to generically stick to the inheritance of the normal to the black hole horizon property. Finally, we define that \(r(0)=r_{+}\) is the radius of the black hole horizon. The parameter \(r_{+}\) is not fixed in the path integral. The boundary \(\mathcal{B}\) is assumed to be located at \(y=1\) with induced metric
\[ds_{\mathcal{B}}^{2}=b(1)^{2}d\tau^{2}+R^{2}d\Omega^{2}\,, \tag{9}\]
and its volume element is \(\sqrt{\gamma}d^{d-1}x=b(1)R^{d-2}d\tau d\Omega\), where we have defined \(R\equiv r(1)\). The unit normal vector to the boundary \(\mathcal{B}\) is \(r_{a}=(0,\alpha,0,0)\). The \(\tau=\text{constant}\) surfaces have area \(A\) given by
\[A=\Omega R^{d-2}\,, \tag{10}\]
where \(\Omega\) is the area of the unit \((d-2)\)-sphere, \(\Omega=\frac{2\pi^{\frac{d-1}{2}}}{\Gamma\left(\frac{d-1}{2}\right)}\). For \(d=4\), the area is \(\Omega=4\pi\), for \(d=5\), \(\Omega=2\pi^{2}\), and so on for other higher dimensions. The condition of fixed temperature, or fixed inverse temperature, at the boundary \(\mathcal{B}\) is given by the boundary condition
\[\beta=2\pi b(1)\,, \tag{11}\]
where \(\beta\) is the inverse temperature, i.e., \(\beta=\frac{1}{T}\), with \(T\) being the temperature at the boundary.
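As a quick sanity check of the unit-sphere area entering Eq. (10), the closed form can be evaluated for the quoted cases (standard library only; the helper name is ours):

```python
from math import pi, gamma

def unit_sphere_area(d):
    # Omega = 2 pi^((d-1)/2) / Gamma((d-1)/2), the area of the unit (d-2)-sphere
    return 2 * pi**((d - 1) / 2) / gamma((d - 1) / 2)

print(unit_sphere_area(4), 4 * pi)       # both 12.566..., i.e. 4*pi
print(unit_sphere_area(5), 2 * pi**2)    # both 19.739..., i.e. 2*pi^2
```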
For the Maxwell field \(A_{a}\), spherical symmetry requires that the only nonzero component of the strength field tensor is \(F_{\tau y}\). One can choose a gauge in which only the component \(A_{\tau}\) of the Maxwell field is nonzero. It must be of the form \(A_{\tau}=A_{\tau}(y)\). We require that the Maxwell field is bounded, and so we impose that \[A_{\tau}(0)=0\,. \tag{12}\] At the boundary \(\mathcal{B}\), the Maxwell field should obey the boundary condition motivated above \[\beta\phi=2\pi iA_{\tau}(1)\,, \tag{13}\] see Eq. (3). ### Action of the spherically symmetric space The action given in Eq. (4) of the space with the given boundary conditions can now be computed. The first integrand in the action corresponding to the integral over \(\mathcal{M}\) is the Ricci scalar \(R\). For the line element given in Eq. (5), it can be written as \[-\sqrt{g}\frac{R}{16\pi}=\frac{1}{8\pi}\left(\frac{r^{d-2}b^{\prime}}{\alpha} \right)^{\prime}+\frac{\alpha br^{d-2}}{8\pi}G^{\tau}_{\ \tau}\,, \tag{14}\] where \(G^{\tau}_{\ \tau}\) is the time-time component of the Einstein tensor \(G_{ab}\), given by \[G^{\tau}_{\ \tau}=\frac{d-2}{2r^{\prime}r^{d-2}}\left(r^{d-3}\left[\left( \frac{r^{\prime}}{\alpha}\right)^{2}-1\right]\right)^{\prime}\,, \tag{15}\] and a prime means derivative with respect to \(y\). The second integrand is given by \[\sqrt{g}\frac{F_{ab}F^{ab}}{4}=\frac{1}{2}\frac{r^{d-2}}{\alpha b}(A^{\prime}_ {\tau})^{2}\,, \tag{16}\] where the expression \(F_{y\tau}=A^{\prime}_{\tau}\) was used. The third integrand is related to the extrinsic curvature of a hypersurface with constant \(y\). The extrinsic curvature of such a hypersurface is \(\mathbf{K}=\frac{bb^{\prime}}{\alpha}d\tau^{2}+\frac{rr^{\prime}}{\alpha}d\Omega^{2}\), while the extrinsic curvature of a hypersurface with constant \(y\) embedded in flat space is just \(\mathbf{K}_{0}=rd\Omega^{2}\), since in flat space \(\frac{r^{\prime}}{\alpha_{0}}=1\) and \(b_{0}\) is constant. Therefore, the integrand of the integral over the boundary \(\mathcal{B}\) is \[-\frac{\sqrt{\gamma}}{8\pi}(K-K_{0})=\] \[\frac{1}{8\pi}\left[(d-2)br^{d-3}\left(1-\frac{r^{\prime}}{\alpha }\right)-\frac{r^{d-2}b^{\prime}}{\alpha}\right]\bigg{|}_{y=1}\,\,. \tag{17}\] Having explicitly showed the integrands of the action given in Eq. (4), we can perform their integration. Let us present some simplifications in the integration process. The integral of the first term in Eq. (14) becomes two terms which are \(\left.\frac{r^{d-2}b^{\prime}}{8\pi\alpha}\right|_{y=1}-\left.\frac{r^{d-2}b^{ \prime}}{8\pi\alpha}\right|_{y=0}\), i.e., a boundary term at \(y=1\) and another at \(y=0\). Moreover, the boundary term at \(y=1\) cancels with the term \(-\left.\frac{r^{d-2}b^{\prime}}{8\pi\alpha}\right|_{y=1}\) in Eq. (17). To evaluate the term \(\left.\frac{r^{d-2}b^{\prime}}{8\pi\alpha}\right|_{y=0}\) we use the boundary condition Eq. (7), which is the condition that requires the absence of conical singularities, to rewrite the boundary term at \(y=0\) as \(\left.\frac{r^{d-2}b^{\prime}}{\alpha}\right|_{y=0}=r_{+}^{d-2}\). Moreover, the term \(\left.(d-2)br^{d-3}\left(1-\frac{r^{\prime}}{\alpha}\right)\right|_{y=1}\) in Eq. (17) can be further simplified. Integrating it over the Euclidean time coordinate which has period \(2\pi\) and using the boundary condition in Eq. (11), i.e., \(\beta=2\pi b(1)\), one obtains \((d-2)\beta R^{d-3}\left(1-\left.\frac{r^{\prime}}{\alpha}\right|_{y=1}\right)\). 
Performing then the final integrations, one finds the action \[I[\beta,\phi,R;b,\alpha,r,A_{\tau}]=\frac{(d-2)\Omega}{8\pi} \beta R^{d-3}\left(1-\left.\frac{r^{\prime}}{\alpha}\right|_{y=1}\right)\] \[-\frac{1}{4}\Omega r_{+}^{d-2}+\int_{\mathcal{M}}\frac{\alpha br^ {d-2}}{8\pi}G^{\tau}{}_{\tau}d^{d}x+\int_{\mathcal{M}}\frac{r^{d-2}}{2\alpha b }(A^{\prime}_{\tau})^{2}d^{d}x\,. \tag{18}\] Formally, one would then proceed with the calculation of the path integral in \(b\), \(\alpha\), \(r\) and \(A_{\tau}\). ## III Zero loop approximation ### The reduced action We perform the zero loop approximation of the path integral with the action in Eq. (18), since we are interested in the semiclassical computation of the partition function. The zero loop approximation consists on considering only the contribution of the classical path to the path integral, neglecting the other paths. The classical path obeys in this case the Einstein-Maxwell equations with the prescribed boundary conditions. Note that there may be more than one classical path, as we will see. Here, we follow the procedure given in [12]. First, one imposes the constraint equations of the Einstein-Maxwell equations to the possible paths. Then, one finds the zero-order action and the semiclassical partition function. The constraints of the metric are the momentum constraint and the Hamiltonian constraint that come out from the Einstein equation \(G_{ab}=8\pi T_{ab}\), where \(T_{ab}\) is the electromagnetic field energy-momentum tensor. The momentum constraint is satisfied since the extrinsic curvature of a \(\tau=\) const hypersurface vanishes. The Hamiltonian constraint is given by \(G^{\tau}{}_{\tau}=8\pi T^{\tau}{}_{\tau}\), where \(T_{ab}=F_{ac}F_{bd}g^{cd}-\frac{1}{4}g_{ab}F_{cd}F^{cd}\). So, the Hamiltonian constraint is \[G^{\tau}{}_{\tau}=\frac{4\pi}{r^{2(d-2)}}\left(\frac{r^{d-2}}{\alpha b}F_{y \tau}\right)^{2}\,. \tag{19}\] The Gauss constraint for the Maxwell field is given by the equation \[\nabla_{y}F^{\tau y} = 0\,. \tag{20}\] One can integrate the Gauss constraint, Eq. (20), to find \(\frac{r^{d-2}}{\alpha b}F_{y\tau}=-i\frac{q}{\Omega}\), where \(q\) is a constant of integration chosen to be the Lorentzian electric charge of the system. Then, using Eq. (15) for \(G^{\tau}{}_{\tau}\) in Eq. (19), the Hamiltonian constraint is \(\frac{(d-2)}{2r^{\prime}r^{d-2}}\left(r^{d-3}\left(\frac{r^{\prime 2}}{ \alpha^{2}}-1\right)\right)^{\prime}=-\frac{4\pi}{\Omega^{2}r^{2d-4}}q^{2}\). The Hamiltonian constraint can be integrated to yield \[\left(\frac{r^{\prime}}{\alpha}\right)^{2}=1-\frac{2\mu m}{r^{d-3}}+\frac{ \lambda q^{2}}{r^{2d-6}}\,, \tag{21}\] where \(\mu m\) is a constant of integration and \(\lambda\) is a convenient parameter. The constant \(\mu m\) can be written with the help of the boundary condition in Eq. (8) as \[2\mu m=r_{+}^{d-3}+\frac{\lambda q^{2}}{r_{+}^{d-3}}\,. \tag{22}\] When \(m\) is understood as a mass then \[\mu=\frac{8\pi}{(d-2)\Omega}\,, \tag{23}\] such that in \(d=4\) one has \(\mu=1\), and the mass term is \(2m\). The parameter \(\lambda\) is given by \[\lambda=\frac{8\pi}{(d-2)(d-3)\Omega^{2}}\,, \tag{24}\] such that in \(d=4\), \(\lambda=\frac{1}{4\pi}\). Since \(F_{y\tau}=A^{\prime}_{\tau}\), the integral of the Gauss constraint, Eq. (20), is \[\frac{r^{d-2}}{\alpha b}A^{\prime}_{\tau}=-i\frac{q}{\Omega}\,. \tag{25}\] The action given in Eq. (18) can now be simplified using Eqs. (21) and (25) in the following way. For the first term in Eq. (18), we use Eq. 
(21) to obtain \[\left(1-\left.\frac{r^{\prime}}{\alpha}\right|_{y=1}\right)=1-\sqrt{f[R;r_{+},q ]}\,, \tag{26}\] where \(f[R;r_{+},q]\) is defined as \[f[R;r_{+},q]=\left(1-\frac{r_{+}^{d-3}}{R^{d-3}}\right)\left(1-\frac{\lambda q^{2 }}{r_{+}^{d-3}R^{d-3}}\right)\,, \tag{27}\] and where use of Eq. (22) has been made. The term \(\int_{\mathcal{M}}\frac{\alpha br^{d-2}}{8\pi}G^{\tau}{}_{\tau}d^{d}x\) in Eq. (18) can also be simplified using Eq. (19), \(F_{y\tau}=A_{\tau}^{\prime}\), and Eq. (25), yielding \[\int_{\mathcal{M}}\frac{\alpha br^{d-2}}{8\pi}G^{\tau}{}_{\tau}d^ {d}x =\int d\tau drd\Omega\left(\frac{r^{d-2}}{2\alpha b}A_{\tau}^{ \prime}\right)A_{\tau}^{\prime}\] \[=-\frac{1}{2}q\beta\phi\,. \tag{28}\] The term depending explicitly on \(A_{\tau}^{\prime}\) in Eq. (18), namely, \(\int_{\mathcal{M}}\frac{r^{d-2}}{2\alpha b}(A_{\tau}^{\prime})^{2}d^{d}x\), can also be simplified. Using Eqs. (12), (13), and (25), one finds \[\int_{M}\left(\frac{r^{d-2}}{2\alpha b}A_{\tau}^{\prime}\right)A _{\tau}^{\prime}d^{d}x =\int d\tau d\Omega\left(-i\frac{q}{2\Omega}\right)\,A_{\tau} \bigg{|}_{y=1}\] \[=-\frac{1}{2}q\beta\phi\,. \tag{29}\] Then, putting Eqs. (26)-(29) into the action of Eq. (18), we obtain the reduced action \(I_{*}\) as \[I_{*}[\beta,\phi,R;r_{+},q]= \frac{(d-2)\Omega R^{d-3}\beta}{8\pi}\left(1-\sqrt{f[R;r_{+},q]}\right)\] \[-q\beta\phi-\frac{\Omega r_{+}^{d-2}}{4}\,, \tag{30}\] Note that the parameters \(r_{+}\) and \(q\) are not fixed in the path integral. In fact, with the constraints applied and the spherical symmetry of the system, the remaining paths correspond to spaces with \(\tau=\text{const}\) slices parametrized by any possible value of \(r_{+}\) and \(q\). The partition function with the constraints is then given by \[Z[\beta,\phi,R]=\int D[r_{+}]D[q]\mathrm{e}^{-I_{*}[\beta,\phi,R;r_{+},q]}\,. \tag{31}\] To fully apply the zero loop approximation, we must impose the rest of the Einstein-Maxwell equations. This turns out to be equivalent to finding the stationary points of the reduced action in \(r_{+}\) and \(q\). One then must analyze if the stationary points minimize the action, since only there the zero loop approximation is valid. The motivation for applying first the constraint equations instead of applying the full equations is that the reduced action is useful to understand the validity of the zero loop approximation and therefore the stability of the solution given by the stationary point. ### Stationary points of the reduced action #### iii.2.1 Stationary points of the reduced action properly said The solutions for an electrically charged black hole in a cavity within a reservoir are given by finding the stationary points of the reduced action given in Eq. (30), specifically, \[\frac{\partial I_{*}}{\partial r_{+}} =0\,, \tag{32}\] \[\frac{\partial I_{*}}{\partial q} =0\,. \tag{33}\] Using Eq. (30) together with Eqs. (32) and(33), one finds that the stationary points of \(I_{*}[\beta,\phi,R;r_{+},q]\) occur when \(\beta\) and \(\phi\) assume the following expressions \[\beta =\frac{4\pi}{(d-3)}\frac{r_{+}^{2d-5}}{r_{+}^{2d-6}-\lambda q^{2} }\sqrt{f[R,r_{+},q]}\,, \tag{34}\] \[\phi =\frac{q}{(d-3)\Omega\sqrt{f[R,r_{+},q]}}\left(\frac{1}{r_{+}^{d- 3}}-\frac{1}{R^{d-3}}\right)\,, \tag{35}\] respectively. Since in the canonical ensemble \(\beta\) and \(\phi\) are fixed, the solutions for the system of equations are \(r_{+}=r_{+}(\beta,\phi,R)\) and \(q=q(\beta,\phi,R)\). 
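The function \(f\) entering Eqs. (34) and (35) is just the factorized form, Eq. (27), of the metric function of Eq. (21) evaluated at the boundary, once Eq. (22) is used. A short symbolic check of this factorization is sketched below (sympy assumed; variable names are ours):

```python
import sympy as sp

rp, R, lam, q, d = sp.symbols('r_+ R lambda q d', positive=True)

two_mu_m   = rp**(d - 3) + lam * q**2 / rp**(d - 3)                                           # Eq. (22)
f_factored = (1 - rp**(d - 3) / R**(d - 3)) * (1 - lam * q**2 / (rp**(d - 3) * R**(d - 3)))   # Eq. (27)
f_metric   = 1 - two_mu_m / R**(d - 3) + lam * q**2 / R**(2 * d - 6)                          # Eq. (21) at r = R

print(sp.simplify(sp.expand(f_factored - f_metric)))   # 0
```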
The reduced action evaluated at a stationary point in \(r_{+}\) and \(q\) is then formally \[I_{0}[\beta,\phi,R]=I_{*}[\beta,\phi,R;r_{+}[\beta,\phi,R],q[\beta,\phi,R]]\,. \tag{36}\] From Eq. (30) one can thus write the reduced action \(I_{0}[\beta,\phi,R]\) of Eq. (36) as \[I_{0}[\beta,\phi,R]=\] \[\frac{(d-2)\Omega R^{d-3}\beta}{8\pi}\left(1-\sqrt{f[R;r_{+}[ \beta,\phi,R],q[\beta,\phi,R]]}\right)\] \[-q[\beta,\phi,R]\beta\phi-\frac{\Omega r_{+}^{d-2}[\beta,\phi,R]}{4}\,, \tag{37}\] where \(f[R;r_{+}[\beta,\phi,R],q[\beta,\phi,R]]\) can be taken from Eq. (27) and the solutions of Eqs. (34) and (35). The partition function of the system given in Eq. (31) then reduces to \[Z[\beta,\phi,R]=\mathrm{e}^{-I_{0}[\beta,\phi,R]}\,, \tag{38}\] where \(I_{0}[\beta,\phi,R]\) is taken from Eq. (37) Now, Eqs. (34) and (35) for the stationary points can be put in a manageable form. For that we first define the following quantities, \[\gamma =\frac{16\pi^{2}R^{2}}{(d-3)^{2}}\frac{\Phi^{2}}{\beta^{2}(1-\Phi ^{2})^{2}}\,, \tag{39}\] \[\Phi =(d-3)\Omega\sqrt{\lambda}\phi\,,\] (40) \[x =\frac{r_{+}}{R}\,,\] (41) \[y =\frac{\lambda q^{2}}{R^{2d-6}}\,. \tag{42}\] So, with these definitions, \(\gamma\) substitutes the reservoir temperature \(T=\frac{1}{\beta}\), \(\Phi\) represents the electric potential \(\phi\), \(x\) is the horizon radius \(r_{+}\) in units of the reservoir radius, and \(y\) is a representation for the electric charge \(q\). Now, Eq. (35), written in these variables, can be inverted to give \(y=\frac{x^{2d-6}\Phi^{2}}{1-(1-\Phi^{2})x^{d-3}}\), and this expression can be used in Eq. (34), written in the new variables, to obtain \[(1-\Phi^{2})x^{d-1}-x^{2}+\frac{\Phi^{2}}{\gamma}=0\,. \tag{43}\] Equation (43) gives the values of \(x=\frac{r_{+}}{R}\) at the stationary points of the reduced action. Then, by using Eq. (43) in the expression \(y=\frac{x^{2d-6}\Phi^{2}}{1-(1-\Phi^{2})x^{d-3}}\), we obtain a simpler relation between \(y\) and \(x\), namely, \[y=\gamma x^{2(d-2)}\,. \tag{44}\] Equation (44) gives the values of \(y=\frac{\lambda q^{2}}{R^{2d-6}}\) at the stationary points of the reduced action. #### iv.2.2 Analysis of the stationary points The solutions for the Eq. (43) can be obtained analytically for specific choices of \(d\). Moreover, when \(d\) is odd, one is able to reduce by half the order of the equation. But, in general, for generic dimension \(d\), it is not possible to obtain an analytic expression. Notwithstanding, one is able to study the behavior of Eq. (43) in terms of the parameters \(\gamma\) representing the fixed ensemble temperature, \(\Phi\) representing the fixed ensemble electric potential, and the dimension \(d\). One imposes that the solutions must be physical, i.e., the black hole must lie inside the cavity and it must be subextremal. The condition to lie inside the cavity is \[0\leq x<1\,, \tag{45}\] and the condition to be subextremal is \(0\leq\frac{y}{x^{2(d-3)}}<1\). From Eq. (44), this latter equation can be put as \[\gamma x^{2}<1\,. \tag{46}\] Replacing Eq. (43) in Eq. (46), one obtains that the condition can only be obeyed if \[0\leq\Phi^{2}<1\,. \tag{47}\] Now, from Eq. (43) it is useful to define the function \[h(x)=(1-\Phi^{2})x^{d-1}-x^{2}+\frac{\Phi^{2}}{\gamma}\,. \tag{48}\] The values of the function at the boundary of the domain are \(h(0)=\frac{\Phi^{2}}{\gamma}>0\) and \(h(1)=h(0)(1-\gamma)\). Note that \(\gamma\) can still be higher than \(1\), even though the condition given in Eq. (46) must be obeyed. 
Indeed, \(\gamma\) is proportional to the temperature squared and so \(\gamma\) can assume high values for high temperatures and fixed \(\phi\) or \(\Phi\). Thus, we separate the analysis into \(\gamma<1\), \(\gamma=1\), and \(\gamma>1\).

\[\gamma<1\] : For \(\gamma<1\), one has \(h(1)>0\) and so we must compute the zeros of the derivative of \(h(x)\), \(h^{\prime}(x)\), and the sign of the second derivative to deduce how many zeros \(h(x)\) contains. We have that \(h^{\prime}(x)=x(d-1)(1-\Phi^{2})\left(x^{d-3}-\frac{2}{(d-1)(1-\Phi^{2})}\right)\) and \(h^{\prime\prime}(x)=(d-1)(d-2)(1-\Phi^{2})x^{d-3}-2\). The derivative of \(h(x)\) vanishes, i.e., \(h^{\prime}(x_{\rm bif})=0\), and the second derivative is positive, i.e., \(h^{\prime\prime}(x_{\rm bif})>0\), at a bifurcation point \(x_{\rm bif}\) given by
\[x_{\rm bif}=\left(\frac{2}{(d-1)(1-\Phi^{2})}\right)^{\frac{1}{d-3}}\,. \tag{49}\]
The point \(x_{\rm bif}\) gives the only minimum of \(h(x)\), with value \(h(x_{\rm bif})\), so that \(h(x)\) bifurcates to higher values for \(x\) lower or greater than \(x_{\rm bif}\). In the case of \(\gamma<1\), if the location of the minimum is at \(x_{\rm bif}>1\), then there are no zeros of \(h(x)\), since \(h(x)>0\) in the interval \(0\leq x<1\). If the minimum of \(h(x)\) lies in the interval \(0<x_{\rm bif}<1\), which implies \(\Phi^{2}<\frac{d-3}{d-1}\), then \(h(x)\) may have zeros, but this is not a sufficient condition. One also must have the condition \(h(x_{\rm bif})<0\), which implies \(\gamma_{\rm bif}(\Phi,d)\leq\gamma\), with
\[\gamma_{\rm bif}(\Phi,d)\equiv\frac{(d-1)^{\frac{d-1}{d-3}}}{(d-3)\,4^{\frac{1}{d-3}}}\Phi^{2}(1-\Phi^{2})^{\frac{2}{d-3}}\,. \tag{50}\]
Therefore, in brief we have the following results. For
\[\gamma<\gamma_{\rm bif}\,, \tag{51}\]
there are no solutions. For
\[\gamma_{\rm bif}\leq\gamma<1\,, \tag{52}\]
there are two solutions. We denominate these two solutions by \(x_{1}\) and \(x_{2}\), with \(x_{1}\leq x_{2}\). Moreover,
\[x_{1}\leq x_{\rm bif}\leq x_{2}\,. \tag{53}\]
When the equality is saturated in Eq. (52), i.e., \(\gamma=\gamma_{\rm bif}\), the two solutions merge into one with \(x_{1}=x_{\rm bif}=x_{2}\). Conversely, one can envisage the two solutions \(x_{1}\) and \(x_{2}\) as bifurcating from \(x_{\rm bif}\). Both solutions obey the conditions set in Eqs. (45)-(47), and also obey
\[0\leq\Phi^{2}<\frac{d-3}{d-1}\,, \tag{54}\]
which is a stronger condition than Eq. (47).

\[\gamma=1\] : For \(\gamma=1\), one of the zeros of \(h(x)\) is \(x=1\). We note however that this point is a critical point in the sense that it does not have the derivatives in Eqs. (32)-(33) defined. If \(\Phi^{2}\leq\frac{d-3}{d-1}\), the other zero is smaller than \(x=1\) or, in the case of equality, there is no other zero. Otherwise, the other zero is larger than \(x=1\) and is nonphysical.

\[\gamma>1\] : For \(\gamma>1\), the function \(h(x)\) will always have one zero between \(0\leq x<1\), while the other solution will be at \(x>1\), for \(0<\Phi^{2}<1\). The former zero is physical but we will see that it can be disregarded because of stability, while the latter is unphysical since it lies outside the cavity. Note from the definition of \(\gamma\) in Eq.
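A minimal numerical sketch of the two branches is given below (plain Python; the values \(d=5\) and \(\Phi=0.3\) are illustrative choices of ours). It obtains \(x_{\rm bif}\) from Eq. (49), \(\gamma_{\rm bif}\) from the condition \(h(x_{\rm bif})=0\), and then brackets the two roots \(x_{1}\) and \(x_{2}\) of Eq. (43) for some \(\gamma_{\rm bif}<\gamma<1\).

```python
# Sketch of the two stationary points x1 <= x_bif <= x2 of Eq. (43); d and Phi are sample values.
d, Phi = 5, 0.3

def h(x, gam):
    return (1 - Phi**2) * x**(d - 1) - x**2 + Phi**2 / gam        # Eq. (48)

x_bif   = (2 / ((d - 1) * (1 - Phi**2)))**(1 / (d - 3))           # Eq. (49)
gam_bif = Phi**2 / (x_bif**2 - (1 - Phi**2) * x_bif**(d - 1))     # from h(x_bif) = 0

def bisect(f, a, b, n=200):
    # simple bisection; assumes f(a) and f(b) have opposite signs
    for _ in range(n):
        m = 0.5 * (a + b)
        a, b = (m, b) if f(a) * f(m) > 0 else (a, m)
    return 0.5 * (a + b)

gam = 0.5 * (gam_bif + 1.0)                                       # some gamma_bif < gamma < 1
x1 = bisect(lambda x: h(x, gam), 1e-9, x_bif)
x2 = bisect(lambda x: h(x, gam), x_bif, 1.0)
print(gam_bif, x1, x_bif, x2)   # approx 0.328, 0.398, 0.741, 0.970: so x1 < x_bif < x2 < 1
```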
(39) that \(\gamma\propto\frac{R^{2}}{\beta^{2}}\), therefore when considering the solutions in function of the temperature, i.e., in function of \(\frac{R}{\beta}\) or of \(RT\), with \(\Phi^{2}<\frac{d-3}{d-1}\), the solution \(x_{2}\) will have the range \(0\leq x_{2}<1\) for a finite range of \(T=\frac{1}{\beta}\). More specifically, for \(RT=\frac{d-3}{4\pi|\Phi|}(1-\Phi^{2})\), i.e., \(\gamma=1\), one has \(x_{2}=1\) and so for higher values of \(RT\), one has \(x_{2}>1\). This behavior is not present in the electrically uncharged case, see [20]. The plot of the two solutions, \(x_{1}\) and \(x_{2}\), as functions of \(RT\) for constant \(\phi=0.02\) and for four values of \(d\) is shown in the top part of Fig. 1. The plot of the two solutions, \(x_{1}\) and \(x_{2}\), as functions of \(\phi\) for constant \(RT=0.3\) and four values of \(d\) is shown in the bottom part of Fig. 1. In these plots, the quantity \(\phi\) was chosen instead of \(\Phi\) to show the full dependence of the solutions in the parameter \(d\), indeed \(\phi\) is the quantity fixed at the cavity while \(\Phi\) is proportional to \(\phi\) but the coefficient depends on \(d\). ### Perturbations around the zero loop approximation and stability analysis #### iv.3.1 Perturbations around the zero loop approximation We extend the calculation of the path integral by analyzing its expansion around the classical path. The reduced action near the stationary points is \[I_{*}[\beta,\phi,R;r_{+},q]=I_{0}[\beta,\phi,R]+\sum_{ij}{I_{*}}_{0ij}\delta i \delta j\,, \tag{55}\] where \(I_{0}[\beta,\phi,R]\) is defined generically in Eq. (36) and specifically in Eq. (37), and \({I_{*}}_{0ij}\) stands for the second derivatives of the reduced action \({I_{*}}_{ij}=\frac{\partial^{2}I_{*}}{\partial i\partial j}\) evaluated at an extremum of the action, with \(i\) and \(j\) being either \(r_{+}\) or \(q\). Then, from Eq. (31) we have for the path integral and the corresponding partition function the following expression \[Z[\beta,\phi,R]=\mathrm{e}^{-I_{0}[\beta,\phi,R]}\int D[\delta q ]D[\delta r_{+}]\mathrm{e}^{-\sum_{ij}{I_{*}}_{0ij}\delta i\delta j}\,. \tag{56}\] To have a proper path integral and a proper partition function, the stationary point of the reduced action must be a minimum. To explicitly obtain the latter condition, we must compute the second derivatives of the reduced action which from Eq. (30) and Eqs. (34)-(35), or Eqs. (43)-(44), are \[I_{*0_{\mathcal{T}_{+}}r_{+}}=\frac{(d-2)\Omega R^{d-3}\beta}{1 6\pi\sqrt{f}r_{+}^{2}}\mathcal{I}_{r_{+}r_{+}}\,, \tag{57}\] \[I_{*0_{\mathcal{T}_{+}}q}=\frac{(d-2)\Omega R^{d-3}\beta}{16\pi \sqrt{f}r_{+}q}\mathcal{I}_{r_{+}q}\,,\] (58) \[I_{*0_{qq}}=\frac{(d-2)\Omega R^{d-3}\beta}{16\pi\sqrt{f}q^{2}} \mathcal{I}_{qq}\,, \tag{59}\] with \[\mathcal{I}_{r_{+}r_{+}}=\frac{d-3}{fx^{2d-6}}\Big{[}\frac{d-3} {2}\left(x^{2d-6}-y\right)^{2}\] \[-\left(x^{2d-6}-(2d-5)y\right)\left(1-x^{d-3}\right)\left(x^{d-3 }-y\right)\Big{]}\,, \tag{60}\] \[\mathcal{I}_{r_{+}q}=-\frac{(d-3)}{x^{d-3}}\frac{\left(2x^{d-3}- x^{2d-6}-y\right)}{x^{d-3}-y}\,,\] (61) \[\mathcal{I}_{qq}=2\frac{1-x^{d-3}}{x^{d-3}-y}\,. \tag{62}\] Figure 1: Top plot: Stationary points of the action, \(x_{1}\) (in blue) and \(x_{2}\) (in red), in function of \(RT\), for \(\phi=0.02\) and for four values of \(d\): \(d=4\) in dotted lines, \(d=5\) in dashed lines, \(d=7\) in solid lines, and \(d=10\) in dot dashed lines. 
Bottom plot: Stationary points of the action, \(x_{1}\) (in blue) and \(x_{2}\) (in red), in function of \(\phi\), for \(RT=0.3\), and the maximum value of \(\phi\) (in orange) corresponding to \(\Phi=1\), for four values of \(d\): \(d=4\) in dotted lines, \(d=5\) in dashed lines, \(d=7\) in solid lines, and \(d=10\) in dot dashed lines. The matrix \(I_{*0ij}\) is positive definite if the pivots of the matrix after Gauss elimination are positive or, in this case, since the matrix has rank 2, the first element on the diagonal and the determinant need to be positive, i.e., \[\mathcal{I}_{r_{+}r_{+}}>0\,, \tag{63}\] \[\mathcal{I}_{r_{+}r_{+}}\mathcal{I}_{qq}-\mathcal{I}_{r_{+}q}^{2} >0\,. \tag{64}\] Since from Eq. (62) \(\mathcal{I}_{qq}\) is always positive, we have that the last condition, Eq. (64), is sufficient. Using Eqs. (60)-(62) and (44), we have that the condition in Eq. (64) reduces to \[-(d-3)\gamma x^{d-1}+(d-1)x^{d-3}-2>0\,. \tag{65}\] This is the condition for a stationary point, given by Eqs. (43) and (44), to be a minimum of the action. #### iv.3.2 Stability analysis Since the stability condition is applied to the stationary points of the reduced action, one can use Eq. (43) to simplify the condition given in Eq. (65) in the following way. One can rewrite Eq. (43) to get \(\gamma=\frac{\Phi^{2}}{x^{2}-(1-\Phi^{2})x^{d-1}}\) and substitute \(\gamma\) into Eq. (65) obtaining a condition depending solely on \(r_{+}\) and \(\Phi\). So, on using Eq. (43), one gets that Eq. (65) indeed simplifies to \[\frac{((d-1)(1-\Phi^{2})x^{d-3}-2)(1-x^{d-3})}{1-(1-\Phi^{2})x^{d-3}}>0\,\,\,. \tag{66}\] The physical range of solutions is \(0\leq x^{d-3}<1\), and so the denominator is always greater than zero. Therefore, there is thermodynamic stability if the solution satisfies \[x>x_{\rm bif}\,, \tag{67}\] where \(x_{\rm bif}=\left(\frac{2}{(d-1)(1-\Phi^{2})}\right)^{\frac{1}{d-3}}\), see Eq. (49). Note that \(x_{\rm bif}\) is the value of \(x=\frac{r_{+}}{R}\) from which the two solutions \(x_{1}\) and \(x_{2}\) bifurcate from, and so one always has that \(x_{1}\leq x_{\rm bif}\leq x_{2}\). Thus, the bifurcation radius is equal to the marginal thermodynamic stability radius. In the electrically uncharged case the bifurcation and marginal thermodynamic stability radius and the photon sphere radius coincide, so it is worth to see if this stands in the electrically charged case, see Appendix A. We now analyze the stability for each case of \(\gamma\). \[\gamma<1\] For the case of \(\gamma_{\rm bif}<\gamma<1\) and \(\Phi^{2}<\frac{d-3}{d-1}\), one has \(x_{1}<x_{\rm bif}<x_{2}<1\), so \(x_{1}\) is unstable and it corresponds to a saddle point of the action, while \(x_{2}\) is stable and corresponds to a minimum of the action. For the case of \(\frac{d-3}{d-1}\leq\Phi^{2}<1\), both solutions \(x_{1}\) and \(x_{2}\) are not physical as they lie outside the cavity. We now analyze the case \(x_{1}=x_{\rm bif}=x_{2}\). One must note that \(x_{\rm bif}\) is the only solution of Eq. (43) for \(\gamma=\gamma_{\rm bif}\) and for which Eq. (66) is an equality rather than an inequality. Thus, one cannot specify the critical point with only second derivatives of the action, perhaps third derivatives will do. However, by inspection of the action \(I(x,y)\) at this point one finds that it is a saddle point. 
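The algebraic step from Eq. (65) to Eq. (66), using Eq. (43) to eliminate \(\gamma\), can be spot-checked numerically (plain Python; the sampled values of \(d\), \(\Phi\), and \(x\) below are arbitrary illustrative choices of ours):

```python
# Spot check that Eq. (65), with gamma eliminated through Eq. (43),
# coincides with the stability condition in Eq. (66).
for d, Phi, x in [(4, 0.30, 0.50), (5, 0.60, 0.80), (7, 0.20, 0.35), (10, 0.45, 0.90)]:
    gam = Phi**2 / (x**2 - (1 - Phi**2) * x**(d - 1))              # Eq. (43) solved for gamma
    lhs = -(d - 3) * gam * x**(d - 1) + (d - 1) * x**(d - 3) - 2   # Eq. (65)
    rhs = ((d - 1) * (1 - Phi**2) * x**(d - 3) - 2) * (1 - x**(d - 3)) \
          / (1 - (1 - Phi**2) * x**(d - 3))                        # Eq. (66)
    print(abs(lhs - rhs) < 1e-12)   # True for every sample
```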
\[\gamma=1\] For the case of \(\gamma=1\) and \(\Phi^{2}<\frac{d-3}{d-1}\), the solution \(x_{1}\) is unstable, while the solution \(x_{2}\) reaches the boundary of the cavity, \(x_{2}=1\), and at this point the derivatives of the action are not well defined, so one cannot specify the stability. For the case of \(\Phi^{2}=\frac{d-3}{d-1}\) the two solutions \(x_{1}\) and \(x_{2}\) reach the boundary and the derivatives of the action are not well defined as well. For the case of \(\frac{d-3}{d-1}<\Phi^{2}<1\), the solution \(x_{2}\) lies outside the cavity, thus not physical, while \(x_{1}\) is at the boundary of the cavity, \(x_{1}=1\), and one cannot specify its stability.

\[\gamma>1\] : For the case of \(\gamma>1\) and for \(0<\Phi^{2}<1\), the solution \(x_{1}\) resides inside the cavity and it is still unstable, while the solution \(x_{2}\) is not physical as it is outside the cavity, i.e., \(x_{2}>1\).

### Most probable configurations

From Eq. (31), we see that the paths with smaller \(I_{*}\), or from Eq. (38) the paths with smaller \(I_{0}\), are the ones that contribute more to the partition function, and so yield the most probable states. Here we deal with stable solutions and among those we want to find the one that gives the most probable state. In the electrically uncharged case done in [8] for \(d=4\) and in [19; 20] for generic \(d\), the comparison between the stable black hole solution and hot flat space was made. The stable black hole is a stationary point of the reduced action, and the hot flat space solution is an extra stationary point. The most probable state is the one with the lowest value of the action. In the case of the uncharged black holes, the value of the action \(I_{0}\) depends on \(\beta\), while in the case of hot flat space one has \(I_{\rm hot\,flat\,space}=0\). In [19; 20], it was shown for any dimension \(d\geq 4\) that \(I_{0}<I_{\rm hot\,flat\,space}\) if \(\beta\) is such that \(\frac{r_{+}}{R}>\frac{r_{\rm Buch}}{R}\), where \(r_{\rm Buch}\) is the Buchdahl radius. In addition, in [19; 20], a comparison between the stable black hole solution and quantum hot flat space was also done. In the electrically charged case, one can also make a comparison of the stable black hole with the charged equivalent of the uncharged hot flat space, as we will see. The electrically charged case is richer than the uncharged one. In the charged case, besides the stationary point related to the stable black hole, there are two critical points that are possible stable solutions of the ensemble. One critical point, which is a stationary point indeed, is \(r_{+}=0\) and \(q=0\), corresponding to a cavity without a black hole and without charge. This seems unphysical for a fixed nonzero value of \(\phi\), since it means that there is a difference of electric potential, which in turn implies the existence of an electric field and thus of an electric charge. For this reason, \(q=0\) seems unphysical. However, we have to recall that the path integral approach in the semiclassical approximation deals intrinsically with quantum systems, and when one writes \(q=0\), one should mean \(q\) of the order of the Planck charge, and a particle, say, carrying such a charge should be envisaged as having dimensions of the order of a Planck length or a bit higher. Thus, we have to seek a corresponding action for such a particle in a reservoir of fixed \(R\) and \(\beta\).
The other critical point is \(r_{+}=R\) and \(\sqrt{\lambda}q=R^{d-3}\), so that \(r_{+}=(\sqrt{\lambda}q)^{\frac{1}{d-3}}=R\). This critical point corresponds to an extremal black hole with the horizon localized at the radius of the cavity, meaning that the volume of the Euclidean space is zero, which can require a different procedure. However, again, this is a quantum system treated semiclassically, and so one should think of a black hole almost at its extremal state, failing to be extremal by a Planck charge and not touching the reservoir at \(R\) by a Planck length. Thus, in this state the approach is still valid under our treatment, and we have to find the value of the action for a large extreme black hole in a reservoir with fixed \(R\) and \(\beta\). So, the question of whether the stable black hole is the ground state or there is another ground state to which the black hole can make a transition is a pertinent one. Let us now deal with the first critical point \(r_{+}=0\) and \(q=0\), at which the derivative of the action with respect to \(q\) is not well defined. Nonetheless, one can argue that this critical point can be considered as a local minimum of the action in the physical domain, see Appendix B for the calculation. In an attempt to describe an equivalent of hot flat space, we consider a hot sphere, made of a perfect conductor material, with a certain radius \(r_{\rm hs}\), inside the reservoir at constant \(\beta\) and \(\phi\), and with its center situated at the center of the reservoir. There is no gravitational interaction, i.e., the constant of gravitation is put to zero. This is equivalent to considering only the Maxwell term in Eq. (4). One must then consider a fixed radius \(r_{\rm hs}\) for the hot sphere conductor in the boundary conditions. From the Gauss constraint, the charge of the conducting sphere can be related directly to the value of \(\phi\). Indeed, one has \(\phi=\frac{q}{(d-3)\Omega}\left(\frac{1}{r_{\rm hs}^{d-3}}-\frac{1}{R^{d-3}}\right)\), see also Eq. (35). Therefore, the action for this cavity, \(I=-\frac{1}{2}q\beta\phi\), turns, for the perfect conducting hot sphere in flat space, into the expression \[I_{\rm hot\,sphere}=-\frac{1}{2}\,\frac{(d-3)\Omega}{\frac{1}{r_{\rm hs}^{d-3 }}-\frac{1}{R^{d-3}}}\beta\phi^{2}\,. \tag{68}\] One can then compare the action of the conducting hot sphere with radius \(r_{\rm hs}\) given in Eq. (68) with the action of the stable configuration of the charged black hole, which is Eq. (30) with the largest positive solution of Eq. (43), i.e., the \(r_{+2}\) solution. From Eq. (68), it is clear that if \(r_{\rm hs}\) is large, of the order of \(R\), say, then \(I_{\rm hot\,sphere}\) is large and negative and so the hot sphere is the most probable solution when compared to the stable black hole \(r_{+2}\). On the other hand, if \(r_{\rm hs}\) is tiny, as we expect to be when dealing with a case analogous to hot flat space, then \(I_{\rm hot\,sphere}=0\), or approximately zero. In this situation one can say that \(I_{\rm hot\,sphere}\) is indeed \(I_{\rm hot\,flat\,space}\), which is a configuration with zero action. The stable black hole does have positive action for low temperatures \(T\), specifically, near the minimum temperature where the stable black hole exists. Therefore, one finds that the tiny charged sphere that emulates hot flat space is more probable for a small interval of low temperatures when compared with the stable black hole.
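This qualitative behavior of Eq. (68) is easy to see numerically. The following is a minimal illustrative sketch (not part of the analysis above), assuming geometric units and taking \(\Omega\) to be the area of the unit \((d-2)\)-sphere; the sample values of \(R\), \(\beta\), and \(\phi\) are arbitrary.

```python
from math import gamma, pi

def unit_sphere_area(d):
    # Area of the unit (d-2)-sphere, Omega = 2 pi^{(d-1)/2} / Gamma((d-1)/2).
    return 2.0 * pi ** ((d - 1) / 2) / gamma((d - 1) / 2)

def I_hot_sphere(r_hs, R, beta, phi, d):
    # Eq. (68): I = -(1/2) (d-3) Omega beta phi^2 / (1/r_hs^{d-3} - 1/R^{d-3}).
    Omega = unit_sphere_area(d)
    return -0.5 * (d - 3) * Omega * beta * phi**2 / (r_hs ** (3 - d) - R ** (3 - d))

R, beta, phi, d = 1.0, 2.0, 0.1, 5   # sample reservoir data, for illustration only
for r_hs in (1e-3, 0.5, 0.99):
    print(r_hs, I_hot_sphere(r_hs, R, beta, phi, d))
# The action is essentially zero for a tiny conducting sphere and becomes
# large and negative as r_hs approaches the cavity radius R.
```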
Conversely, the black hole is more probable for a large interval of temperatures, in fact, when the solution of the stable black hole has \[\frac{r_{+2}^{d-3}}{R^{d-3}}\geq\frac{\mu m}{R^{d-3}}+\sqrt{\frac{\mu^{2}m^{2}}{R^{2d-6}}-\frac{\lambda q^{2}}{R^{2d-6}}}\,, \tag{69}\] where \(\mu=\frac{8\pi}{(d-2)\Omega}\), see Eq. (23), and also \(\mu mR^{3-d}=-\frac{4(d-2)^{2}}{(d-1)^{2}(d-3)^{2}}+\frac{2(d-2)((d-2)^{2}+1)} {(d-1)^{2}(d-3)^{2}}\sqrt{1+\frac{(d-1)^{2}(d-3)^{2}}{4(d-2)^{2}}\frac{\lambda q ^{2}}{R^{2d-6}}}\), see Appendix A for this equality. When Eq. (69) is obeyed, the action for the black hole \(r_{+2}\) is negative and the black hole is more probable; when Eq. (69) is not obeyed, the tiny charged sphere is more probable. Notice that this radius \(R\) does not have a connection to the Buchdahl-Andreasson-Wright radius [31], a radius that generalizes the Buchdahl bound for \(d\)-dimensional self-gravitating electric charged spheres. The horizon radius with zero action is equal to or lower than the Buchdahl-Andreasson-Wright radius in the case of \(d=4\), with a difference up to \(0.004\) in \(\frac{\mu m}{R}\), being equal in the uncharged case and in the extreme case \(\sqrt{\lambda}q=R\). Thus, the equality in the uncharged situation between the minimum most probable radius of a black hole in the canonical ensemble and the Buchdahl radius does not hold when other fields are added. So, it is a very restricted equality holding only in the pure gravitational situation. Let us now deal with the second critical point \(r_{+}=R\) and \(\sqrt{\lambda}q=R^{d-3}\), i.e., an extremal black hole with the horizon localized at the radius of the cavity, bearing in mind that the precise extremality and the precise location can fluctuate by Planck order quantities. We note that this is a critical point in the sense that the gradient of the action is not defined. Indeed, one can calculate the gradient of the reduced action in Eq. (30) and take the limit to \(r_{+}=(\sqrt{\lambda}q)^{\frac{1}{d-3}}=R\) along the curve \(\frac{r_{+}}{R}=(1-\epsilon)^{\frac{1}{d-3}}\) and \(\frac{\sqrt{\lambda}q}{R^{d-3}}=\sqrt{(1-\eta\epsilon)}\), where \(\eta\) is a positive constant and \(\epsilon\) parameterizes the curve. The constant \(\eta\) is restricted to the physical domain of the action, with the condition \(\eta>2\). After substituting the variables by the parameterization of the curve in the expression of the gradient and performing the limit \(\epsilon\to 0^{+}\), the gradient assumes an expression that depends on the constant \(\eta\). Since the limit is different for different values of \(\eta\), the gradient cannot be defined at that point, but one can still analyze the directional derivatives along the considered paths. The directional derivatives along decreasing \(\epsilon\), i.e., from lower \(r_{+}\) and \(q\) toward \(r_{+}=(\sqrt{\lambda}q)^{\frac{1}{d-3}}=R\), may be either positive, zero, or negative, and so the critical point does not resemble a local minimum. In particular, there is a set of temperatures and electric potentials given by the condition \(\gamma=1\), where the stable black hole solution tends to this extremal black hole. Indeed, it can be seen that for such values of temperature and electric potential, there is a value of \(\eta\) for which the limit of the gradient vanishes, but the fact still remains that the gradient is undefined here, see Appendix B for a detailed analysis of the gradient at this critical point.
Nevertheless, this critical point may be smoothed out by taking into consideration higher loops in the path integral or a different theory of gravity. The action for this critical point can be taken from Eq. (30), i.e., \(I_{\text{extreme black hole}}=\frac{(d-2)\Omega R^{d-3}\beta}{8\pi}\Big{(}1- \sqrt{f(R,r_{+},q)}\Big{)}-q\beta\phi-\frac{\Omega r_{+}^{d-2}}{4}\), where \(f(R,r_{+},q)\) is taken from Eq. (27) with \(r_{+}\) and \(q\) having extremal values, so that \(R=r_{+}\) and \(f(R,r_{+},q)=0\). Then, \[I_{\text{extreme black hole}}= \frac{(d-2)\Omega R^{d-3}\beta}{8\pi}\] \[-\frac{R^{d-3}}{\sqrt{\lambda}}\beta\phi-\frac{\Omega R^{d-2}}{4}\,. \tag{70}\] So \(I_{\text{extreme black hole}}\) has to be analyzed for each \(R\), \(\beta\) and \(\phi\), and compared with the action for the stable black hole \(r_{+2}\). It seems that the stable black hole is always a more probable configuration than the extreme black hole with horizon at the cavity.

## IV Thermodynamics of the \(d\)-dimensional Reissner-Nordstrom Black Hole Space in a Cavity

### Connection between the grand canonical potential \(W\) of thermodynamics and the statistical mechanics grand canonical partition function \(Z\) found from the path integral action \(I\)

We can relate the partition function \(Z\) of the grand canonical ensemble directly to the thermodynamic grand potential \(W\) and therefore obtain the thermodynamics of the system. The relation is \[Z(\beta,\phi,R)=\text{e}^{-\beta W[\beta,\phi,R]}\,, \tag{71}\] or \(\beta W=-\ln Z\). In the semiclassical approximation, one has \(Z=\text{e}^{-I_{0}}\) and so one has \(\beta W[\beta,\phi,R]=I_{0}[\beta,\phi,R]\). Taking into consideration that \(\beta=\frac{1}{T}\), one can write the relation between the thermodynamic grand potential \(W\) and the action \(I_{0}\) as \[W[T,\phi,A(R)]=T\,I_{0}[T,\phi,R]\,. \tag{72}\] Therefore, from Eqs. (37) and (72), the grand potential \(W\) can be written as \[W= \frac{(d-2)\Omega R^{d-3}}{8\pi}\Big{(}1-\sqrt{f\left(R,T,\phi \right)}\Big{)}\] \[-T\frac{\Omega r_{+}^{d-2}(R,T,\phi)}{4}-q(R,T,\phi)\phi\,, \tag{73}\] where \(f(R,T,\phi)\) can be taken from Eq. (27) and the solutions of Eqs. (34) and (35). The grand canonical potential \(W\) can be written by definition as a Legendre transformation of the mean energy \(E\). Written in terms of \(E\), the mean electrical charge \(Q\), the electric potential \(\phi\), the entropy \(S\), and the temperature \(T\), one has \[W=E-TS-Q\phi\,, \tag{74}\] where \(E=E(S,Q,A)\). Moreover, the entropy, the mean charge, and the thermodynamic pressure are obtained from the derivatives of the grand potential, which then allows one to find the mean energy by Eq. (74). The connection between the statistical path integral and thermodynamics can be summarized by Eq. (72), in the zero loop approximation. However, it is interesting to see the relationship between the choice of the path with minimum action, in the path integral, and the thermodynamics of the system, in the zero loop approximation. Indeed, it seems that if one identifies the expression of the quantities \(E\), \(S\) and \(Q\) in the reduced action, the choice of the path with minimum action imposes that the temperature and the electric potential are partial derivatives of the energy, as one would expect from the first law of thermodynamics, see Appendix C.
### Thermodynamic quantities and relations

#### iv.2.1 Mean energy, entropy, mean charge, and thermodynamic pressure

The grand potential \(W=W[T,\phi,R]\) of the cavity is given by Eq. (73). Since \(A=\Omega R^{d-2}\), see Eq. (10), one can trade \(R\) for \(A\) in \(W\) to have the dependence \(W=W[T,\phi,A]\). One can then write the differential of this form of the grand potential \(W\) as \[dW=-SdT-pdA-Qd\phi\,, \tag{75}\] where from Eqs. (74) and (75) one has \(S=-\left(\frac{\partial W}{\partial T}\right)_{A,\phi}\), \(p\equiv-\left(\frac{\partial E}{\partial A}\right)_{S,Q}=-\left(\frac{\partial W }{\partial A}\right)_{T,\phi}\), and \(Q=-\left(\frac{\partial W}{\partial\phi}\right)_{A,T}\). Here, a quantity in a subscript means that the partial derivative is performed with the corresponding quantity being kept constant, e.g., \(\left(\frac{\partial W}{\partial T}\right)_{A,\phi}\) means partial derivative of \(W\) in relation to \(T\) with \(A\) and \(\phi\) kept constant. In order to obtain the thermodynamic quantities, we must evaluate the derivatives just given of the grand potential \(W\). We start by calculating the entropy \(S=-\left(\frac{\partial W}{\partial T}\right)_{A,\phi}\). From Eq. (73) we see that \(W=W(T,\phi,A,r_{+}(T,\phi,R),q(T,\phi,R))\). So, using the chain rule one has \(S=-\left(\frac{\partial W}{\partial T}\right)_{A,\phi}=-\left(\frac{\partial W }{\partial T}\right)_{r_{+},q,A,\phi}-\left(\frac{\partial W}{\partial r_{+}} \right)_{q,T,A,\phi}\left(\frac{\partial r_{+}}{\partial T}\right)-\left(\frac {\partial W}{\partial q}\right)_{r_{+},T,A,\phi}\left(\frac{\partial q}{ \partial T}\right)\). Now, the black hole solutions given by \(r_{+}(T,\phi,R)\) and \(q(T,\phi,R)\) obey by definition the conditions \(\left(\frac{\partial W}{\partial r_{+}}\right)_{q,T,A,\phi}\equiv 0\) and \(\left(\frac{\partial W}{\partial q}\right)_{r_{+},T,A,\phi}\equiv 0\), yielding thus simply \(S=-\left(\frac{\partial W}{\partial T}\right)_{A,\phi}=-\left(\frac{\partial W }{\partial T}\right)_{r_{+},q,A,\phi}\). So, Eq. (73) yields directly \[S=\frac{A_{+}}{4}\,, \tag{76}\] where \(A_{+}\) is the area of the horizon, given by \(A_{+}=\Omega r_{+}^{d-2}\). Thus, Eq. (76) yields that the black hole entropy is indeed given by the Bekenstein-Hawking entropy formula. In the same manner, one can calculate the electric charge \(Q\) to give the expression \(Q=-\left(\frac{\partial W}{\partial\phi}\right)_{T,A}=-\left(\frac{\partial W }{\partial\phi}\right)_{r_{+},q,T,A}\), yielding \[Q=q\,, \tag{77}\] so the thermodynamic value of the electric charge \(Q\) is equal to the typical electric charge \(q\) of a Reissner-Nordstrom black hole. The pressure is given by \(p=-\left(\frac{\partial W}{\partial A}\right)_{T,\phi}=-\frac{1}{(d-2)\Omega R ^{d-3}}\left(\frac{\partial W}{\partial R}\right)_{r_{+},q,T,\phi}\) and so it has the form \[p=\frac{d-3}{16\pi R\sqrt{f}}\left(\left(1-\sqrt{f}\right)^{2}-\frac{\lambda q ^{2}}{R^{2d-6}}\right)\,, \tag{78}\] which is the gravitational tangential pressure at the reservoir at radius \(R\). Finally, one can calculate the energy by putting Eqs. (76)-(78) into Eq. (74) and finding that \[E=\frac{(d-2)\Omega R^{d-3}}{8\pi}\left(1-\sqrt{f}\right)\,, \tag{79}\] which is the thermodynamic energy, a quasilocal energy, evaluated at radius \(R\). One can verify from the previous equations that \[TdS=dE+pdA-\phi dQ\,, \tag{80}\] i.e., the first law of thermodynamics for the system holds. In Eq. (80), \(T=\frac{1}{\beta}\) can be taken from Eq.
(34), \(S\) is given by Eq. (76), \(E\) is given by Eq. (79), \(p\) is given by Eq. (78), \(\phi\) is given by Eq. (35), and \(Q\) is given by Eq. (77). It is worth pointing out that the expressions of the entropy, the mean charge, and the energy can be taken naively from the comparison of Eq. (73) with Eq. (74), but this is in fact true only because by definition of \(I_{0}\), and thus of \(W=TI_{0}\), the conditions \(\left(\frac{\partial W}{\partial r_{+}}\right)_{q,T,A,\phi}\equiv 0\) and \(\left(\frac{\partial W}{\partial q}\right)_{r_{+},T,A,\phi}\equiv 0\), are identically satisfied. Thus, the extrema of the reduced action \(I_{*}\) can be interpreted as points that set the first law of thermodynamics in Eq. (80), see also Appendix C.

#### iv.2.2 Euler equation, Gibbs-Duhem relation, and Smarr formula

With the thermodynamic variables \(E\), \(A\), and \(Q\) determined, one is able to obtain the Euler equation and the Gibbs-Duhem relation, i.e., the energy in terms of the remaining thermodynamic variables and a differential relation between the thermodynamic variables, respectively. The energy in Eq. (79) can be rewritten in terms of the entropy \(S\), surface area of the cavity \(A\), and charge \(Q\) as \[E= \frac{(d-2)A^{\frac{d-3}{d-2}}\Omega^{\frac{1}{d-2}}}{8\pi}\] \[\left(1-\sqrt{\left(1-\left(\frac{4S}{A}\right)^{\frac{d-3}{d-2}} \right)\left(1-\frac{\lambda Q^{2}\Omega^{\frac{2(d-3)}{d-2}}}{(4SA)^{\frac{d-3} {d-2}}}\right)}\right)\,. \tag{81}\] One can then use Euler's homogeneous function theorem considering that under a rescaling \(\nu\) of its arguments, the energy as a function has the property that \(E\left(\nu S,\nu A,\nu Q^{\frac{d-2}{d-3}}\right)=\nu^{\frac{d-3}{d-2}}E\left( S,A,Q^{\frac{d-2}{d-3}}\right)\). We thus have an integrated version of the first law of thermodynamics given by \[E=\frac{d-2}{d-3}(TS-pA)+\phi Q\,, \tag{82}\] which is the Euler equation for the system of a \(d\)-dimensional electrically charged black hole in a heat reservoir. By differentiating Eq. (82) and considering that \(dE=TdS-pdA+\phi dQ\), one obtains \[TdS-pdA+(d-2)(SdT-Adp)+(d-3)Qd\phi= 0\,, \tag{83}\] which is the Gibbs-Duhem relation for the system of a \(d\)-dimensional electrically charged black hole in a heat reservoir. In the limit of infinite radius of the cavity, one can obtain the Smarr formula from Eq. (82). Indeed, in the limit of infinite radius \(R\), the temperature in Eq. (34) reduces to the Hawking temperature, i.e., \(T=T_{\rm H}=\frac{d-3}{4\pi}\left(\frac{1}{r_{+}}-\frac{\lambda q^{2}}{r_{+}^{2d-5}}\right)\), the electric potential in Eq. (35) reduces to the electric potential of the Reissner-Nordstrom black hole, i.e., \(\phi=\phi_{\rm H}=\frac{q}{(d-3)\Omega r_{+}^{d-3}}\), the quantity \(pA\), with \(p\) in Eq. (78) being proportional to \(\frac{1}{R^{2d-5}}\), vanishes, and the energy of the system in Eq. (79) reduces to the ADM mass, i.e., \(E=m\). Therefore, from Eq. (82) and from the considerations above, the Smarr formula is given by \[m=\frac{d-2}{d-3}T_{\rm H}S+\phi_{\rm H}Q\,. \tag{84}\] The Smarr formula of Eq. (84) can only be valid for the small black hole solution of the grand canonical ensemble, since it is the only solution that exists in this limit.

### Stability in terms of thermodynamic variables

In a thermodynamic system with fixed size, fixed temperature, and fixed electric potential, attached to a heat reservoir, energy, entropy, and electric charge can flow from the system to the reservoir and back.
In any thermodynamic process in such a system, the grand canonical potential \(W\) tends to decrease down to its minimum or stay at its minimum. In particular, a spontaneous process in the grand canonical ensemble can never increase the grand canonical potential \(W\). To see this one must resort to the second law of thermodynamics applied to the total structure. Indeed, a variation \(dS\) in entropy in the system, plus a variation \(dS_{\rm reservoir}\) in entropy of the reservoir, add to a variation \(dS_{\rm total}\) of the total entropy of the system plus reservoir, \(dS_{\rm total}=dS+dS_{\rm reservoir}\). Suppose the thermodynamic system absorbs energy \(dE\) and charge \(dQ\) from the reservoir. Then the reservoir absorbs energy \(-dE\) and charge \(-dQ\). The first law of thermodynamics then states that the change in entropy in the reservoir is \(TdS_{\rm reservoir}=-dE+\phi dQ\), where since the reservoir remains in internal equilibrium by definition, its temperature and its electric potential are kept constant, i.e., have the original values \(T\) and \(\phi\). So, we can write for the total change in entropy, \(TdS_{\rm total}=TdS-dE+\phi dQ=-d(E-TS-\phi Q)=-d\bar{W}\), where we used that \(T\) and \(\phi\) are constant since they are reservoir values. We have also defined \(\bar{W}\) as \[\bar{W}[\bar{T},A,\bar{\phi}]\equiv E(\bar{T},A,\bar{\phi})-TS( \bar{T},A,\bar{\phi})-\phi Q(\bar{T},A,\bar{\phi})\,, \tag{85}\] as the grand canonical potential related to the nonequilibrium situation. Note that due to the variation to a nonequilibrium situation the thermodynamic system has in general a new temperature \(\bar{T}\) and a new potential \(\bar{\phi}\) different from \(T\) and \(\phi\) of the reservoir. The new quantities that arise in the variation of the nonequilibrium situation, \(E\), \(S\), and \(Q\), have the same functional form of \(\bar{T}\), \(A\), and \(\bar{\phi}\), as they had of \(T\), \(A\), and \(\phi\) before the nonequilibrium process set in, but \(\bar{W}[\bar{T},A,\bar{\phi}]\) has a different functional form, since \(T\) and \(\phi\) that appear in Eq. (85) are quantities of the heat reservoir fixed by assumption at their original values. The area \(A\) has been kept fixed in the process. Thus, returning to the total entropy, one has in brief \(TdS_{\rm total}=-d\bar{W}\). Since necessarily \(dS_{\rm total}\geq 0\) by the second law, one deduces \(d\bar{W}\leq 0\). Any spontaneous process decreases the grand canonical potential. For further discussion on these issues see Sec. 8.2 and the following sections of [32]. The equilibrium is assured if \[\bar{T}=T\,,\qquad\bar{\phi}=\phi\,, \tag{86}\] in which case \(\bar{W}\) is at its minimum. To be stable, fluctuations in the temperature \(T\) of the thermodynamic system should tend to increase \(\bar{W}\), fluctuations in the electric potential \(\phi\) should tend to increase \(\bar{W}\), and a mixture of the two fluctuations should tend to increase \(\bar{W}\). So, stability is assured if \[\left(\frac{\partial^{2}\bar{W}}{\partial\bar{T}^{2}}\right)_{ \bar{\phi},A}>0\,, \tag{87}\] \[\left(\frac{\partial^{2}\bar{W}}{\partial\bar{T}^{2}}\right)_{ \bar{\phi},A}\left(\frac{\partial^{2}\bar{W}}{\partial\bar{\phi}^{2}}\right)_ {\bar{T},A}-\left(\frac{\partial^{2}\bar{W}}{\partial\bar{T}\partial\bar{ \phi}}\right)_{\bar{T},\bar{\phi},A}^{2}>0\,, \tag{88}\] \[\left(\frac{\partial^{2}\bar{W}}{\partial\bar{\phi}^{2}}\right)_ {\bar{T},A}>0\,, \tag{89}\] where in the second term of Eq.
(88) the cross derivatives are performed maintaining constant the quantity that is not being differentiated, and all the derivatives are to be calculated at the solutions of the ensemble. Additionally, only two conditions from Eqs. (87)-(89) are sufficient; we choose Eqs. (87) and (88). From the expression of \(\bar{W}\), one can show that \(\left(\frac{\partial^{2}\bar{W}}{\partial\bar{T}^{2}}\right)_{A,\bar{\phi}}= \left(\frac{\partial S}{\partial T}\right)_{A,\phi}\), where the bars have been dropped on the right-hand side of the equality because \(S\) has the same functional form of \(\bar{T}\), \(A\), and \(\bar{\phi}\), as it has of \(T\), \(A\), and \(\phi\), and at equilibrium \(\bar{T}=T\) and the area was never perturbed. In the same manner, \(\left(\frac{\partial^{2}\bar{W}}{\partial\phi^{2}}\right)_{\bar{T},A}=\left( \frac{\partial Q}{\partial\phi}\right)_{T,A}\), and \(\left(\frac{\partial^{2}\bar{W}}{\partial\bar{T}\partial\bar{\phi}}\right)_{ \bar{T},\bar{\phi},A}=\left(\frac{\partial Q}{\partial T}\right)_{A,\phi}= \left(\frac{\partial S}{\partial\phi}\right)_{T,A}\). The two sufficient conditions, Eqs. (87) and (88), can then be written as \[\left(\frac{\partial S}{\partial T}\right)_{A,\phi}>0\,, \tag{90}\] \[\left(\frac{\partial Q}{\partial\phi}\right)_{T,A}\left(\frac{ \partial S}{\partial T}\right)_{A,\phi}-\left(\frac{\partial S}{\partial\phi} \right)_{T,A}^{2}>0\,, \tag{91}\] respectively. We now define two quantities. First, we define the isochoric heat capacity at constant electric potential as \[C_{A,\phi}=T\left(\frac{\partial S}{\partial T}\right)_{A,\phi}\,. \tag{92}\] Second, we define the adiabatic electric susceptibility as \(\chi_{S,A}=\left(\frac{\partial Q}{\partial\phi}\right)_{S,A}\). It can also be written in terms of the derivatives present in Eq. (91) from a change of variables \(Q(T,A,\phi)\) to \(Q(T(S,A,\phi),A,\phi)\), where \(T(S,A,\phi)\) is the inverse function of \(S(T,A,\phi)\). Indeed, one gets \[\chi_{S,A}=\frac{\left(\frac{\partial Q}{\partial\phi}\right)_{T,A}\left(\frac {\partial S}{\partial T}\right)_{A,\phi}-\left(\frac{\partial S}{\partial\phi }\right)^{2}_{T,A}}{\left(\frac{\partial S}{\partial T}\right)_{A,\phi}}\,. \tag{93}\] So, the two stability conditions, Eqs. (90) and (91), are now \[C_{A,\phi}>0\,, \tag{94}\] \[\chi_{S,A}C_{A,\phi}>0\,, \tag{95}\] respectively. This analysis to obtain the stability conditions is equivalent to the requirement that the matrix of variances in the grand canonical ensemble is positive definite. This matrix contains the variances \(\Delta E^{2}\), \(\Delta Q^{2}\) and the correlation \(\Delta E\Delta Q\), where \(E\) and \(Q\) are the quantities that are exchanged with the heat reservoir. By working out the conditions of positive definiteness, one recovers also the conditions Eqs. (94) and (95). For the electrically charged black hole in the cavity, one can compute Eq. (93) to find \(\chi_{S,A}=\frac{(d-3)\Omega r_{+}^{d-3}\left(1-\frac{r_{+}^{d-3}}{R^{d-3}}\right)}{\left(1-(1-\Phi^{2})\frac{r_{+}^{d-3}}{R^{d-3}}\right)^{\frac{3}{2}}}\). This adiabatic susceptibility is positive for all physical configurations of the charged black hole. Therefore, the two conditions for stability are reduced to a single one given in Eq. (94), \(C_{A,\phi}>0\). Now, \(C_{A,\phi}=\frac{\Omega(d-2)Tr_{+}^{d-3}}{4}\frac{\partial r_{+}}{\partial T}\), where \(\frac{\partial r_{+}}{\partial T}\) can be computed, such that Eq.
(94) yields \[C_{A,\phi}=\frac{A(d-3)^{2}(d-2)x^{d-4}(1-\Phi^{2})^{2}}{32(\pi RT)^{2}((d-1)( 1-\Phi^{2})x^{d-3}-2)}>0\,, \tag{96}\] with the dependence on the variable \(x=\frac{r_{+}}{R}\) being maintained for readability. With Eq. (96) we recover Eq. (66) for thermodynamic stability. See also Appendix C for further thermodynamic relations. We note that for the case of \(\Phi^{2}=0\), \(C_{A,\phi}\) becomes the heat capacity at constant area \(C_{A}\) with the expression given in [20]. In this uncharged case, the bifurcation and marginal stability radius and the photon sphere radius are the same. In the electrically charged case, a comparison between the bifurcation and marginal stability radius and the photon sphere radius is done in Appendix A, showing that these radii do not coincide. This means that the connection displayed in the uncharged case is not generic, it holds only in the pure gravitational situation. A comparison with the self-gravitating static electrically charged thin shell in \(d\)-dimensions studied in [29] is worth doing. Indeed, it is remarkable that the thermodynamic pressure given in Eq. (78) and the thermodynamic energy given in Eq. (79) in the grand canonical ensemble have the same expression as the matter pressure and the matter rest mass, or rest energy, of the corresponding self-gravitating charged spherical shell in equilibrium. Additionally, by choosing for the matter of the thin shell the equations of state corresponding to the temperature and electric potential of the black hole, the shell will also have the Bekenstein-Hawking entropy and its stability at constant area is given by the same condition, i.e., positive heat capacity at constant electric potential.

### Most favorable thermodynamic phase and phase transitions

In a thermodynamic system characterized by the grand canonical potential \(W\), all spontaneous processes occur in the sense of decreasing \(W\). The configuration we are studying is a black hole inside a reservoir characterized by a fixed area \(A\), a fixed temperature \(T\), and a fixed electric potential \(\phi\). So, thermodynamically \(W\) is the most suited thermodynamic potential to use in this problem. One feature here is that there are no restrictions on the energy \(E\) and on the electric charge \(Q\), which can flow through the boundary with area \(A\). Thus, it is relevant to know whether the stable black hole is the thermodynamic state with the lowest \(W\), or whether there is another state to which the black hole can make a phase transition. One sees that now one uses the thermodynamic language, and so uses phase transitions instead of quantum transitions as we did previously. But the results are the same, as here one uses \(W\) instead of \(I_{0}\), with \(TI_{0}=W\). We summarize the results using the grand canonical potential \(W\). In the uncharged case, one has \(W_{\rm hot\,flat\,space}=0\) and so the black hole is favored or not depending on whether the black hole with horizon radius \(r_{+2}\) has a \(W\) lower or greater than zero. It has been found that the radius where \(r_{+2}\) yields \(W=0\) is equal to the Buchdahl radius, \(r_{\rm Buch}\). For radii \(r_{+2}\) higher than \(r_{\rm Buch}\) the black hole is favored. In the electrically charged case, one has \(W_{\rm hot\,flat\,space}=0\), and it corresponds to a cavity without a black hole and without charge. One can emulate hot flat space by a tiny electric hot sphere in flat space to find \(W_{\rm hot\,sphere}\), which tends to zero as the radius of the sphere tends to zero.
So, essentially, in this setting the black hole is favored when its \(W\) is less than zero. It is found that the radius where \(r_{+2}\) yields \(W=0\) is not related to the Buchdahl-Andreasson-Wright radius, a generalization of the Buchdahl radius to any higher dimension \(d\) that includes electric charge, see Appendix A. There is also the extreme black hole solution localized at the radius of the cavity. It is found that the stable black hole \(r_{+2}\) always has a \(W\) lower than or equal to \(W_{\rm extreme\,black\,hole}\), and therefore the stable black hole is always more favorable than the extremal black hole with horizon at the cavity.

## V The case \(d=5\): zero loop approximation and thermodynamics

### \(d=5\) in the zero loop approximation

#### v.1.1 Reduced action

Here we apply the whole formalism to \(d=5\) dimensions. The \(d=4\) case is done in Appendix D where a comparison with [12] is carried out. In \(d=5\), the reduced action taken from Eq. (30) is \[I_{*}=\frac{3\pi}{4}\beta R^{2}\left(1-\sqrt{f}\right)-q\beta\phi-\frac{\pi^{2 }r_{+}^{3}}{2}\,, \tag{97}\] where \[f=\left(1-\frac{r_{+}^{2}}{R^{2}}\right)\left(1-\frac{1}{3\pi^{3}}\frac{q^{2} }{r_{+}^{2}R^{2}}\right)\,, \tag{98}\] with \(I_{*}=I_{*}[\beta,\phi,R;r_{+},q]\) and \(f=f[R;r_{+},q]\). For \(d=5\), one has \(\Omega=2\pi^{2}\) and, since \(\lambda=\frac{8\pi}{(d-2)(d-3)\Omega^{2}}\), one has \(\lambda=\frac{1}{3\pi^{3}}\), which are quantities that have been used and will be used along this section.

#### v.1.2 Stationary points and analysis of the stationary points

The horizon radius solutions \(r_{+}\) obey a relation given by Eq. (43) which for \(d=5\) becomes \[(1-\Phi^{2})\left(\frac{r_{+}}{R}\right)^{4}-\left(\frac{r_{+}}{R} \right)^{2}+(1-\Phi^{2})^{2}\frac{1}{(2\pi RT)^{2}}=0\,, \tag{99}\] where here \(\gamma=(2\pi RT)^{2}\frac{\Phi^{2}}{(1-\Phi^{2})^{2}}\). The electric charge \(q^{2}\) obeys a relation given by Eq. (44) which now turns into \[\frac{|q|}{R^{2}}=2\sqrt{3}\,\pi^{\frac{5}{2}}\,\frac{RT|\Phi|}{1-\Phi^{2}} \left(\frac{r_{+}}{R}\right)^{3}\,. \tag{100}\] Clearly, the stationary points, i.e., the solutions for the horizon radius \(r_{+}\) and the electric charge \(q\), are functions of \(T\), \(\Phi\), and \(R\), i.e., one has \(r_{+}=r_{+}(T,\Phi,R)\) and \(q=q(T,\Phi,R)\). Since Eq. (99) can be reduced to a quadratic equation, a property of \(d=5\), one can obtain from Eqs. (99) and (100) the analytic expressions for the \(r_{+}\) and \(q\) of the two stationary points that yield the two electrically charged black hole solutions. The first stationary point or solution, \(r_{+1}\), is the small black hole and is given by \[\frac{r_{+1}}{R}=\frac{1}{\sqrt{2(1-\Phi^{2})}}\left[1-\sqrt{1-\frac{(1-\Phi^{ 2})^{3}}{(\pi RT)^{2}}}\right]^{\frac{1}{2}}\,, \tag{101}\] \[\frac{|q_{1}|}{R^{2}}=\sqrt{\frac{3}{2}}\,\frac{\pi^{\frac{5}{2}}RT\Phi}{(1- \Phi^{2})^{\frac{5}{2}}}\left[1-\sqrt{1-\frac{(1-\Phi^{2})^{3}}{(\pi RT)^{2}} }\right]^{\frac{3}{2}}\,. \tag{102}\] This solution \(r_{+1}\) given in Eq. (101) with negative sign in the square root was designated \(x_{1}\) in Sec. III.2.2. Here, we keep the \(r_{+1}\) notation.
The second stationary point or solution, \(r_{+2}\), is the large black hole and is given by \[\frac{r_{+2}}{R}=\frac{1}{\sqrt{2(1-\Phi^{2})}}\left[1+\sqrt{1-\frac{(1-\Phi^ {2})^{3}}{(\pi RT)^{2}}}\right]^{\frac{1}{2}}\,, \tag{103}\] \[\frac{|q_{2}|}{R^{2}}=\sqrt{\frac{3}{2}}\,\frac{\pi^{\frac{5}{2}}RT\Phi}{(1- \Phi^{2})^{\frac{5}{2}}}\left[1+\sqrt{1-\frac{(1-\Phi^{2})^{3}}{(\pi RT)^{2}} }\right]^{\frac{3}{2}}\,. \tag{104}\] The solution \(r_{+2}\) given in Eq. (103) with positive sign in the square root was designated by \(x_{2}\) in Sec. III.2.2. Here, we keep the \(r_{+2}\) notation. The condition for the two black hole solutions to exist is given by Eq. (52), which reduces to \[0\leq(1-\Phi^{2})^{3}\leq(\pi RT)^{2}<\infty\,, \tag{105}\] in \(d=5\) dimensions. For zero electric charge and so \(\Phi=0\), i.e., the uncharged case, Eq. (105) turns into \(1\leq(\pi RT)^{2}<\infty\) of the \(d=5\) Schwarzschild-Tangherlini black hole, see [19]. Before embarking on a careful analysis of the stationary points, it is useful to make an analysis of the limits. First, for very large \(\pi RT\), \((\pi RT)^{2}\to\infty\), independently of \(\Phi\), the solution \(r_{+1}\) behaves as \(\frac{r_{+1}}{R}\to\frac{(1-\Phi^{2})}{2\pi RT}\), and since \(|\Phi|<1\) the solution always exists. For very large \(\pi RT\), \((\pi RT)^{2}\to\infty\), independently of \(\Phi\), the solution \(r_{+2}\) behaves as \(\frac{r_{+2}}{R}\to\frac{1}{\sqrt{1-\Phi^{2}}}\), which, for values of \(\Phi^{2}<1\), as is always the case, gives \(r_{+2}>R\), so the solution is unphysical. This situation is different from the uncharged case, where the solution with larger mass, \(r_{+2}\), only meets the cavity at infinite temperature, while in the charged case, the solution \(r_{+2}\) meets the cavity at finite temperature, as we are seeing here. Second, for \(\Phi^{2}\to 1\), independently of \((\pi RT)^{2}\), the solution \(r_{+1}\) tends to \(r_{+1}\to 0\). For \(\Phi^{2}\to 1\), independently of \((\pi RT)^{2}\), the solution \(r_{+2}\) tends to \(r_{+2}\to\infty\), and so is unphysical. A careful analysis of the stationary points presented in solution \(r_{+1}\) of Eqs. (101)-(102) and in solution \(r_{+2}\) of Eqs. (103)-(104) is now given. Several plots are made in Figs. 2-4 that complement Eqs. (101)-(105) and the main text. Note that \(\frac{d-3}{d-1}=\frac{1}{2}\) in \(d=5\), and so from Eq. (54) one has that the value \(\Phi^{2}=\frac{1}{2}\) plays an important role in the analysis. Thus, we divide the analysis into two parts, namely, \(0\leq\Phi^{2}\leq\frac{1}{2}\) and \(\frac{1}{2}<\Phi^{2}<1\). (i) For \(0\leq\Phi^{2}\leq\frac{1}{2}\), there are three branches. (a) For \(0\leq(\pi RT)^{2}<(1-\Phi^{2})^{3}\), there are no stationary points, and so no black hole solutions, only hot flat space, see below. (b) For \((1-\Phi^{2})^{3}\leq(\pi RT)^{2}\leq\frac{(1-\Phi^{2})^{2}}{4\Phi^{2}}\), both black hole solutions lie inside the cavity, i.e., \(r_{+1}\leq R\) and \(r_{+2}\leq R\). In the case of the equality on the right side, i.e., \((\pi RT)^{2}=\frac{(1-\Phi^{2})^{2}}{4\Phi^{2}}\), the solution \(r_{+1}\) obeys \(r_{+1}<R\), and the solution \(r_{+2}\) satisfies \(r_{+2}=R\) with the charge \(q_{2}\) obeying \(|q_{2}|=\sqrt{3\pi^{3}}\,r_{+}^{2}\), i.e., it is maximal, which means that the \(r_{+2}\) solution is an extremal electrically charged black hole.
The particular case \(\Phi^{2}=\frac{1}{2}\) yields that \((1-\Phi^{2})^{3}=\frac{(1-\Phi^{2})^{2}}{4\Phi^{2}}=\frac{1}{8}\), so that \((\pi RT)^{2}=\frac{1}{8}\), and now the \(r_{+1}\) and \(r_{+2}\) solutions merge into one, an extremal electrically charged black hole that obeys \(r_{+1}=r_{+2}=R\). (c) For \(\frac{(1-\Phi^{2})^{2}}{4\Phi^{2}}<(\pi RT)^{2}<\infty\), the solution \(r_{+1}\) always has \(r_{+1}<R\) and so exists. For \(\Phi\) near zero, \(r_{+1}\) is small and as the value of \((\pi RT)^{2}\) increases \(r_{+1}\) goes to zero. For \(\Phi^{2}\) near \(\frac{1}{2}\) from below, \(r_{+1}\) is near \(R\) and as \((\pi RT)^{2}\) increases \(r_{+1}\) goes to zero. On the other hand, the solution \(r_{+2}\) obeys \(r_{+2}>R\), so is unphysical. This situation is different from the uncharged case, where the solution with larger mass, \(r_{+2}\), only meets the cavity at infinite temperature, while in the charged case, the solution \(r_{+2}\) meets the cavity at finite temperature. (ii) For \(\frac{1}{2}<\Phi^{2}<1\), there are three branches. (a) For \(0\leq(\pi RT)^{2}<(1-\Phi^{2})^{3}\), there are no black hole solutions, only hot flat space, see below. (b) For \((1-\Phi^{2})^{3}\leq(\pi RT)^{2}\leq\frac{(1-\Phi^{2})^{2}}{4\Phi^{2}}\), both solutions \(r_{+1}\) and \(r_{+2}\) lie outside the cavity and so are unphysical. This means that within this range there are no black hole solutions, presumably only hot flat space, see below. (c) For \(\frac{(1-\Phi^{2})^{2}}{4\Phi^{2}}<(\pi RT)^{2}<\infty\), the solution \(r_{+1}\) starts at \(r_{+1}=R\) in the case of \((\pi RT)^{2}=\frac{(1-\Phi^{2})^{2}}{4\Phi^{2}}\) and then decreases toward zero as the temperature increases. On the other hand, the solution \(r_{+2}\) remains outside the cavity and so is unphysical. In Fig. 2 top, the plots of the two stationary points \(\frac{r_{+1}}{R}\) and \(\frac{r_{+2}}{R}\) given in Eqs. (101) and (103) as functions of \(RT\) for five values of \(\Phi\) are shown. In Fig. 2 bottom, the plots of the two stationary points \(\frac{r_{+1}}{R}\) and \(\frac{r_{+2}}{R}\) given in Eqs. (101) and (103) as functions of \(\Phi\) for five values of \(RT\) are shown. In these plots, the quantity \(\Phi\) was chosen instead of \(\phi\) so that the comparison between the analytical study and the plots is straightforward. \(\Phi\) is proportional to \(\phi\), specifically in \(d=5\) one has \(\Phi=\sqrt{\frac{16\pi}{3}}\phi\), so \(\Phi\) is fixed once \(\phi\) is fixed. In Fig. 3, a contour plot of the reduced action \(I_{*}\), given in Eq. (97), for \(RT=0.5\) and \(\Phi=0.2\), as a function of \(\frac{r_{+}}{R}=x\) and \(\frac{|q|}{\sqrt{3\pi^{3}}R^{2}}=\sqrt{y}\), displays the two stationary points \(\frac{r_{+1}}{R}=x_{1}\) and \(\frac{r_{+2}}{R}=x_{2}\) in this case. One can see the nature of the stationary points in the contour plot, with \(r_{+1}\) being a saddle point and \(r_{+2}\) being a minimum; this is proven generally in the next section.
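As a quick illustration of Eqs. (99)-(104), one can verify numerically that the closed-form solutions indeed satisfy the stationary condition. The following minimal sketch is not part of the original analysis; the sample values of \(RT\) and \(\Phi\) are arbitrary and geometric units are assumed.

```python
from math import pi, sqrt

def x_solutions(RT, Phi):
    disc = 1.0 - (1.0 - Phi**2) ** 3 / (pi * RT) ** 2   # must be >= 0 for black holes to exist
    x1 = sqrt((1.0 - sqrt(disc)) / (2.0 * (1.0 - Phi**2)))   # small black hole, Eq. (101)
    x2 = sqrt((1.0 + sqrt(disc)) / (2.0 * (1.0 - Phi**2)))   # large black hole, Eq. (103)
    return x1, x2

def eq99_residual(x, RT, Phi):
    # Left-hand side of the quartic condition of Eq. (99); it vanishes at a stationary point.
    return (1.0 - Phi**2) * x**4 - x**2 + (1.0 - Phi**2) ** 2 / (2.0 * pi * RT) ** 2

RT, Phi = 0.4, 0.2   # sample reservoir values
x1, x2 = x_solutions(RT, Phi)
print(x1, eq99_residual(x1, RT, Phi))   # residual ~ machine precision
print(x2, eq99_residual(x2, RT, Phi))   # residual ~ machine precision
```

Both residuals vanish to machine precision, and for these sample values one finds \(x_{2}<1\), i.e., the large black hole still fits inside the cavity, in agreement with branch (i)(b) above.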
Figure 2: Top plot: Stationary points \(\frac{r_{+1}}{R}=x_{1}\) (in blue) and \(\frac{r_{+2}}{R}=x_{2}\) (in red) of the reduced action \(I_{*}\) as functions of \(RT\), for dimension \(d=5\), and for five values of \(\Phi\), namely, \(\Phi=0.001\) in dotted lines, \(\Phi=0.2\) in dashed lines, \(\Phi=0.4\) in solid lines, \(\Phi=0.6\) in dot dashed lines and \(\Phi=\frac{1}{\sqrt{2}}=0.7\), the last equality is approximate, in dot double dashed lines. Bottom plot: Stationary points \(\frac{r_{+1}}{R}=x_{1}\) (in blue) and \(\frac{r_{+2}}{R}=x_{2}\) (in red) of the reduced action \(I_{*}\) as functions of \(\Phi\), for dimension \(d=5\), and for five values of \(RT\), namely, \(RT=0.05\) in dotted lines, \(RT=\frac{1}{2\sqrt{2}\,\pi}=0.112\), the last equality is approximate, in dashed lines, \(RT=0.2\) in solid lines, \(RT=\frac{1}{\pi}=0.318\), the last equality is approximate, in dot dashed lines, and \(RT=0.4\) in dot double dashed lines. The gray line corresponds to the points \(x\) and \(\Phi\) where the solutions \(x_{1}\) and \(x_{2}\) coincide. The orange line corresponds to \(\Phi=1\), which is the maximum possible electric potential. See text for further details.

Although the effect on the contour plot for other values of \(RT\) and \(\Phi\) is not plotted here, we present in the next plot the effect on the migration of the stationary points. In Fig. 4 top, the migration path of the two stationary points \(\frac{r_{+1}}{R}=x_{1}\) and \(\frac{r_{+2}}{R}=x_{2}\) from a point in the central region where they coincide to the two points at the corners is shown as a function of \(RT\) for four different values of \(\Phi\). The gray line corresponds to the condition of extremal black holes, namely, \(\sqrt{y}=x^{2}\), i.e., \(\frac{|q|}{\sqrt{3\pi^{3}}}=r_{+}^{2}\). The black line corresponds to the points \(x\) and \(\sqrt{y}\) where the solutions \(x_{1}\) and \(x_{2}\) coincide. For the minimum possible temperature in each case, the solutions are at the black line, and as one increases the temperature, \(x_{1}\) decreases toward the origin \(x=\sqrt{y}=0\), where \(RT\rightarrow+\infty\), and \(x_{2}\) increases toward \(x=\sqrt{y}=1\), where \((\pi RT)^{2}\rightarrow\frac{(1-\Phi^{2})^{2}}{4\Phi^{2}}\). In Fig. 4 bottom, the migration path of the two stationary points \(\frac{r_{+1}}{R}=x_{1}\) and \(\frac{r_{+2}}{R}=x_{2}\) from a point in the central region where they coincide to the two points at the corners is shown as a function of \(\Phi\) for four different values of \(RT\). In these plots, the quantity \(\Phi\) was chosen instead of \(\phi\) so that the comparison between the analytical study and the plots is straightforward. Since \(\Phi=\sqrt{\frac{16\pi}{3}}\phi\), one has that \(\Phi\) is fixed as \(\phi\) is fixed. The gray line corresponds to the condition of extremal black holes, namely \(\sqrt{y}=x^{2}\), i.e., \(\frac{|q|}{\sqrt{3\pi^{3}}}=r_{+}^{2}\). The black line corresponds to the points \(x\) and \(\sqrt{y}\) where solutions \(x_{1}\) and \(x_{2}\) coincide. For minimum potential, the solutions either start from the black line where the solutions coincide or start separated on the \(\sqrt{y}=0\) line. As one increases the potential, \(x_{1}\) tends to the origin \(x=\sqrt{y}=0\), where \(\Phi\to 1\), and \(x_{2}\) tends to \(x=\sqrt{y}=1\), where \(\Phi\rightarrow\sqrt{(\pi RT)^{2}+1}-\pi RT\).
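The endpoint of the \(x_{2}\) migration just quoted is a one-line consequence of the condition for \(x_{2}\) to reach the cavity: setting \((\pi RT)^{2}=\frac{(1-\Phi^{2})^{2}}{4\Phi^{2}}\), i.e., \(2\pi RT\,\Phi=1-\Phi^{2}\) for positive \(\Phi\), and solving the quadratic \(\Phi^{2}+2\pi RT\,\Phi-1=0\) for its positive root gives \[\Phi=\sqrt{(\pi RT)^{2}+1}-\pi RT\,.\]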
Figure 3: Contour plot of the reduced action in \(d=5\) dimensions, specifically of \(\frac{4I_{*}}{3\pi R^{2}}\), as a function of \(\frac{r_{+}}{R}=x\) and \(\frac{|q|}{\sqrt{3\pi^{3}}\,R^{2}}=\sqrt{y}\), for \(\Phi=0.2\) and \(RT=0.4\). The blue dot is a saddle point and corresponds to \(\frac{r_{+1}}{R}=x_{1}\), and the red dot is a minimum and corresponds to \(\frac{r_{+2}}{R}=x_{2}\). See text for further details.

#### v.1.3 Perturbations around the zero loop approximation and stability analysis

Using Eq. (66) with \(x\equiv\frac{r_{+}}{R}\), one finds that the solutions are stable when \[\frac{\left(4(1-\Phi^{2})\left(\frac{r_{+}}{R}\right)^{2}-2\right)\left(1-\left( \frac{r_{+}}{R}\right)^{2}\right)}{\left(1-\left(1-\Phi^{2}\right)\left(\frac{r _{+}}{R}\right)^{2}\right)}>0\,, \tag{106}\] for \(d=5\). The physical range is \(\frac{r_{+}}{R}<1\). Therefore, the solutions are stable if \(r_{+}>r_{+\text{bif}}\), where \(r_{+\text{bif}}=\frac{R}{\sqrt{2(1-\Phi^{2})}}\) is the bifurcation radius from which the solutions \(r_{+2}\) and \(r_{+1}\) bifurcate at \((\pi RT)^{2}=(1-\Phi^{2})^{3}\). So, one has always that \(r_{+1}<r_{+\text{bif}}<r_{+2}\). We can spell out the consequences of Eq. (106) in more detail. For \(r_{+1}\), this means that for \((1-\Phi^{2})^{3}\leq(\pi RT)^{2}\leq\frac{(1-\Phi^{2})^{2}}{4\Phi^{2}}\), in the case \(0\leq\Phi^{2}\leq\frac{1}{2}\), the solution does not obey the stability condition, and so is thermodynamically unstable, and in the case \(\frac{1}{2}<\Phi^{2}<1\) the solution \(r_{+1}\) does not physically exist as it lies outside the cavity. For \(\frac{(1-\Phi^{2})^{2}}{4\Phi^{2}}<(\pi RT)^{2}<\infty\) and \(0\leq\Phi^{2}<1\), the solution \(r_{+1}\) does not obey the stability condition, and so is thermodynamically unstable. Moreover, \(r_{+1}\) corresponds to a saddle point of the action. For \(r_{+2}\), this means that for \((1-\Phi^{2})^{3}\leq(\pi RT)^{2}\leq\frac{(1-\Phi^{2})^{2}}{4\Phi^{2}}\), in the case \(0\leq\Phi^{2}\leq\frac{1}{2}\), the solution obeys the stability condition, therefore for this range of parameters the solution is thermodynamically stable, and it is also a minimum of the action. In the case \(\frac{1}{2}<\Phi^{2}<1\) the solution \(r_{+2}\) does not physically exist, as it lies outside the cavity. For \(\frac{(1-\Phi^{2})^{2}}{4\Phi^{2}}<(\pi RT)^{2}<\infty\) and \(0<\Phi^{2}<1\) the solution \(r_{+2}\) does not physically exist either, being located outside the cavity.

#### v.1.4 Most probable configurations

We now want to make a study of the most probable configurations in the case \(d=5\). For that, we deal with stable solutions only. The system with lower \(I_{0}\), which is found from \(I_{*}\) in Eq. (97) and its stationary points, is the most probable system since it gives the most important contribution to the partition function \(Z\). To search for the most probable configuration, one has essentially to make a comparison of the stable black hole with a charged equivalent of hot flat space. The action has two stable stationary points, namely, the stationary point \(r_{+2}\) related to the stable black hole, and the stationary point \(r_{+}=0\) and \(q=0\), which corresponds to a cavity without a black hole and without charge. The action also has a critical point, \(r_{+}=R\) and \(\frac{q}{\sqrt{3\pi^{3}}}=R^{2}\), so that \(r_{+}=\left(\frac{q}{\sqrt{3\pi^{3}}}\right)^{\frac{1}{2}}=R\), which corresponds to an extremal black hole with the horizon localized at the radius of the cavity.
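For these extremal values one can check directly in Eq. (98) that \(f\) vanishes doubly: with \(r_{+}=R\) the first factor is zero, and with \(\frac{q}{\sqrt{3\pi^{3}}}=R^{2}\) the second factor, \(1-\frac{q^{2}}{3\pi^{3}r_{+}^{2}R^{2}}=1-\frac{R^{4}}{R^{2}R^{2}}\), is zero as well.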
To have a model for the stationary point \(r_{+}=0\) and \(q=0\), we have proposed that a nongravitating perfect conductor hot sphere with radius \(r_{\text{hs}}\), inside the reservoir at constant \(\beta\) and \(\phi\), will do. The electric potential is now \(\phi=\frac{q}{4\pi^{2}}\left(\frac{1}{r_{\text{hs}}^{2}}-\frac{1}{R^{2}}\right)\), see also Eq. (35). So that the action for a hot sphere, as a model of hot flat space, in five dimensions is \[I_{\text{hot\,sphere}}=-\frac{1}{2}\,\frac{4\pi^{2}}{\frac{1}{r_{\text{hs}}^{ 2}}-\frac{1}{R^{2}}}\beta\phi^{2}\,. \tag{107}\] One can then compare the action of the conducting hot sphere given in Eq. (107) with the action of the stable configuration of the charged black hole given in Eq. (97) together with Eqs. (103) and (104). Clearly, from Eq. (107) one sees that for small \(r_{\text{hs}}\), which is the case analogous to hot flat space, then \(I_{\text{hot\,sphere}}=0\), or approximately zero, so that essentially \(I_{\text{hot\,sphere}}=I_{\text{hot\,flat\,space}}\) in this case. Now, the stable black hole has positive action only in a small range of low temperatures, namely, for temperatures near the minimum temperature for which the stable black hole exists. For higher temperatures, the action for the stable black hole is negative. Therefore, one finds that the small charged sphere that emulates hot flat space is less probable for a large interval of temperatures when compared with the stable black hole. In fact, when the solution of the stable black hole obeys \[\frac{r_{+2}^{2}}{R^{2}}\geq\frac{\mu m}{R^{2}}+\sqrt{\frac{\mu^{2}m^{2}}{R^{4}}-\frac{q^{2}}{3\pi^{3}R^{4}}}\,, \tag{108}\] with \(\mu=\frac{4}{3\pi}\), see Eq. (23), and with \(\frac{\mu m}{R^{2}}=-\frac{9}{16}+\frac{15}{16}\sqrt{1+\frac{16}{27\pi^{3}}\frac{q^{2}}{R^{4}}}\), the corresponding action is negative and the black hole is more probable than the tiny charged sphere. Note that this radius \(r_{+2}\) does not have a connection to the Buchdahl-Andreasson-Wright bound, in contrast to the uncharged case, see also Appendix A for more details. The comparison between the hot sphere and the stable black hole is displayed in Fig. 5. In the two plots of the figure, the gray region represents the points of the pair \((RT,\Phi)\) in which the stable black hole solution \(r_{+2}\) is more probable. The region in purple in the top plot of the figure, and in blue in the bottom plot, represents the points in which the charged conducting sphere with radius \(r_{\text{hs}}\) is more probable. The regions in white represent points where there is no stable black hole solution, so presumably the most probable state is hot flat space. The upper white region is different from the uncharged case, see [20], because in the uncharged case the stable black hole solution exists for temperatures up to infinite ones, whereas in the electrically charged case, the stable black hole solution only exists within a range of finite temperature. In the top plot of Fig. 5, we can see that the lower the value of \(r_{\text{hs}}\), the larger the region where the stable black hole solution is more probable than the conducting sphere, until the point where one has a microscopic sphere. In that case, the whole of the region favors the stable black hole solution, apart from a very small region where the microscopic sphere is more probable, see Eq. (108). This case of a microscopic electrically charged sphere is the case that emulates hot flat space. In the bottom plot of Fig.
5, we can see that the higher the value of \(r_{\text{hs}}\), the smaller the region where the stable black hole solution is more probable, but this case is contrived and does not emulate hot flat space. Moreover, it must be stated that for relatively small values of \(r_{\text{hs}}\), the region of probability for the electrically charged sphere does not change much, as even with \(\frac{r_{\text{hs}}}{R}=0.7\), the difference to \(r_{\text{hs}}=0\) is small. Indeed, only variations of \(r_{\text{hs}}\) close to \(R\) change substantially the region of the most probable state. Note also that for the uncharged case, in \(d=5\), the comparison between the stable black hole solution and hot flat space was made in [19]. The stable black hole and the hot flat space solution are the stationary points of the reduced action. The most probable state is the one with the lowest value of the action. In the case of the black hole, the value of the action \(I_{0}\) depends on \(\beta\), while in the case of hot flat space one has \(I_{\text{hot flat space}}=0\). In [19], it was shown that \(I_{0}<I_{\text{hot flat space}}\) if \(\beta\) is such that \(\frac{r_{+}}{R}>\frac{r_{\rm Buch}}{R}\), where \(r_{\rm Buch}\) is the Buchdahl radius. Thus, in the pure gravitational case the size of the horizon radius for the black hole to be the dominant phase coincides with the Buchdahl radius. This agreement does not extend to when other fields are present, since the horizon radius for the electrically charged black hole to be the dominant phase does not coincide with the Buchdahl-Andreasson-Wright radius, as we have shown. In \(d=5\) one can now make a specific comparison of the stable black hole \(r_{+2}\) with the critical point given by \(r_{+}=R\) and \(\frac{q}{\sqrt{3\pi^{3}}}=R^{2}\), i.e., an extremal black hole with the horizon localized at the radius of the cavity, bearing in mind that the precise extremality and the precise location can fluctuate by Planck order quantities. The gradient of the action is not defined at this critical point but it may be smoothed out by taking into consideration higher loops in the path integral or a different theory of gravity, see Appendix B. The action for this critical point can be taken from Eq. (70) in the \(d=5\) case, i.e., \(I_{\text{extreme black hole}}=\frac{3\pi R^{2}\beta}{4}\Big{(}1-\sqrt{f(R,r_{+},q)} \Big{)}-q\beta\phi-\frac{\pi^{2}r_{+}^{3}}{2}\), where \(f(R,r_{+},q)\) is taken from Eq. (27) putting \(d=5\) with \(r_{+}\) and \(q\) having extremal values, so that \(r_{+}=R\), \(q=\sqrt{3\pi^{3}}\,R^{2}\), and \(f(R,r_{+},q)=0\). Then, \[I_{\text{extreme black hole}}=\frac{3\pi R^{2}\beta}{4}-\sqrt{3 \pi^{3}}\,R^{2}\beta\phi-\frac{\pi^{2}R^{3}}{2}\,. \tag{109}\] So \(I_{\text{extreme black hole}}\) has to be analyzed for each \(R\), \(\beta\), and \(\phi\), and compared with the action for the stable black hole \(r_{+2}\). It is found that, in every instance, the stable black hole is a more probable configuration than the extreme black hole with horizon at the cavity.

### Thermodynamics of the \(d=5\) Reissner-Nordstrom space in a cavity

#### v.2.1 Relation between action and grand canonical potential

In any dimension \(d\), in particular in \(d=5\), the grand potential \(W\) has the dependence \(W=W[T,\phi,A]\), where here \(A\) is the surface area of the 3-sphere at the boundary \(\mathcal{B}\). The correspondence between thermodynamics and the action of the system is given by Eq.
(72), i.e., \(W[T,\phi,A(R)]=T\,I_{0}[T,\phi,R]\).

Figure 5: Regions of more probability in five dimensions, \(d=5\), between the stable black hole solution and the charged conducting sphere, as functions of \(RT\) and \(\Phi\). Top plot: \(\frac{r_{\text{hs}}}{R}\to 0\). The region in gray represents the points where the black hole solution is more probable or favorable. The region in purple represents the points where the infinitesimal charged conducting sphere, emulating electrically charged hot flat space, is more probable. The regions in white do not have a stable black hole solution, so presumably the most probable state is hot flat space. Bottom plot: \(\frac{r_{\text{hs}}}{R}=0.99\). The region in gray represents the points where the black hole solution is more probable. The region in blue represents the points where the charged conducting sphere is more probable, with \(\frac{r_{\text{hs}}}{R}=0.99\). The regions in white do not have a stable black hole solution, so presumably the most probable state is hot flat space.

So, here one has \[W= \frac{3\pi}{4}R^{2}\left(1-\sqrt{\left(1-\frac{r_{+}^{2}}{R^{2}} \right)\left(1-\frac{1}{3\pi^{3}}\frac{q^{2}}{r_{+}^{2}R^{2}}\right)}\right)\] \[-T\,\frac{\pi^{2}r_{+}^{3}}{2}-q\phi\,, \tag{110}\] see Eqs. (97) and (98). The grand potential still has the expression \(W=E-ST-Q\phi\), with \(dW=-SdT-Qd\phi-pdA\) and with the first law of thermodynamics \(TdS=dE-\phi dQ+pdA\) holding, see Eq. (72).

#### v.2.2 Charge, mean energy, and pressure, first law of thermodynamics, and Euler equation, Gibbs-Duhem relation, and Smarr formula

The physical quantities of the system such as the entropy, electric charge, surface pressure, thermodynamic energy, and area can be given in this case. The entropy can be directly obtained from Eq. (76) in \(d=5\) \[S=\frac{1}{4}A_{+}\,, \tag{111}\] which is the Bekenstein-Hawking entropy of a black hole, where indeed \(S=\frac{\pi^{2}r_{+}^{3}}{2}\), with \(A_{+}=2\pi^{2}r_{+}^{3}\). The electric charge can be computed from Eq. (77). In \(d=5\), it has the same appearance as in general \(d\), i.e., for given \(T\), \(\phi\), and \(R\), one has \(Q=q\), the electric charge of Reissner-Nordstrom space. The gravitational thermodynamic surface pressure at \(R\) can be calculated from Eq. (78) to give \[p=\frac{1}{8\pi R\sqrt{f}} \left(\left(1-\sqrt{\left(1-\frac{r_{+}^{2}}{R^{2}}\right)\left(1 -\frac{q^{2}}{3\pi^{3}r_{+}^{2}R^{2}}\right)}\right)^{2}\right.\] \[-\left.\frac{q^{2}}{3\pi^{3}R^{4}}\right), \tag{112}\] where \(f\) is given in Eq. (98). This tangential surface pressure acts along an area \(A\) that in \(d=5\) is \(A=2\pi^{2}R^{3}\), see Eq. (10). The mean thermodynamic energy can be taken from Eq. (79) in the \(d=5\) case and is \[E=\frac{3\pi R^{2}}{4}\left(1-\sqrt{\left(1-\frac{r_{+}^{2}}{R^ {2}}\right)\left(1-\frac{q^{2}}{3\pi^{3}r_{+}^{2}R^{2}}\right)}\right)\,. \tag{113}\] This is the same expression as the quasilocal energy evaluated at a spherical shell of radius \(R\). The first law of thermodynamics, \(TdS=dE+pdA-\phi dQ\) given in Eq. (80), for the system in \(d=5\) holds, of course. From Eq. (113), one can write the energy in terms of the entropy \(S\) of Eq. (111), electric charge \(Q\), and surface area of the cavity \(A\), as \[E= \frac{3(2\pi^{2})^{\frac{1}{3}}A^{\frac{2}{3}}}{8\pi}\times\] \[\left(1-\sqrt{\left(1-\left(\frac{4S}{A}\right)^{\frac{2}{3}} \right)\left(1-\frac{Q^{2}(2\pi^{2})^{\frac{4}{3}}}{3\pi^{3}(4SA)^{\frac{2}{3} }}\right)}\right)\,.
\tag{114}\] One can then use Euler's homogeneous function theorem considering that under rescaling of its arguments, the energy as a function has the property that \(E\left(\nu S,\nu A,\nu Q^{\frac{3}{2}}\right)=\nu^{\frac{2}{3}}E\left(S,A,Q^{ \frac{3}{2}}\right)\). We thus have an integrated version of the first law of thermodynamics given by, see Eq. (82), \[\frac{2}{3}E=TS-pA+\frac{2}{3}\phi Q\,, \tag{115}\] which is the Euler equation for the system of a \(d=5\) electrically charged black hole in a heat reservoir. By differentiating Eq. (115) and considering that \(dE=TdS-pdA+\phi dQ\), one obtains \[TdS-pdA+3(SdT-Adp)+2Qd\phi=0\,, \tag{116}\] which is the Gibbs-Duhem relation for the \(d=5\) electrically charged black hole in a heat reservoir. Then, the Smarr formula in \(d=5\) is \[m=\frac{3}{2}T_{\rm H}S+\phi_{\rm H}Q\,, \tag{117}\] see Eq. (84). Again, the Smarr formula is valid for the small black hole solution only.

#### v.2.3 Equilibrium and stability in terms of thermodynamic variables

The heat capacity \(C_{A,\phi}=T\left(\frac{\partial S}{\partial T}\right)_{A,\phi}\) is given by Eq. (96). By setting \(d=5\), one gets \[C_{A,\phi}=\frac{3A\left(\frac{r_{+}}{R}\right)\left(1-\Phi^{2} \right)^{2}}{16(\pi RT)^{2}\left(2(1-\Phi^{2})\left(\frac{r_{+}}{R}\right)^{2}-1 \right)}\,, \tag{118}\] where we have that \(A\) is the area of the reservoir, and we have used \(x=\frac{r_{+}}{R}\). For the link between the validity of zero loop approximation and the heat capacity, see Appendix C. So \(C_{A,\phi}>0\) means \[2(1-\Phi^{2})\left(\frac{r_{+}}{R}\right)^{2}-1>0\,. \tag{119}\] Therefore we recover Eq. (106).

#### v.2.4 Most favorable thermodynamic configurations

The most favorable thermodynamic configuration is found from the state with the lowest value of the grand potential \(W\). Since \(W=TI_{0}\), the analysis is the same if done in \(I_{0}\) or in \(W\). What changes is that in \(I\) one talks about the most probable state and about quantum transitions, and when using \(W\) one talks about the most favorable state and phase transitions. Since we have done the analysis in detail for \(I\) in \(d=5\), it is not necessary to do it in \(W\); we refer to Sec. V.1.4.

## VI Conclusions

The grand canonical ensemble of a \(d\)-dimensional Reissner-Nordstrom space in a cavity was built using the path integral approach. The partition function of the space in a cavity was obtained by performing the zero loop approximation to the path integral relative to the Euclidean action, where only the term which minimizes the action contributes to the path integral. There are two stationary points of the action that correspond to a black hole in equilibrium with a heat reservoir with the temperature and the electric potential fixed at the boundary of the cavity. The stationary point with lower horizon radius was shown to be unstable, while the stationary point with higher horizon radius is stable, as has been proved for arbitrary \(d\). The values of the event horizon radius of the two stationary points, as functions of the temperature and the electric potential, cannot be found analytically for arbitrary dimension. However, it is possible to find analytically the event horizon radius for \(d=5\), where one needs to solve a quadratic polynomial. There are some features of the stationary points in the electrically charged case that differ from the electrically uncharged case.
First, the event horizon radius corresponding to the lowest temperature allowed does not correspond to the photon sphere, unlike the uncharged case, thus showing that this coincidence is really restricted to the pure gravitational case. Second, the larger horizon radius solution reaches the radius of the cavity at finite temperature, unlike the uncharged case, where the horizon radius only reaches the cavity radius at infinite temperature. The grand canonical ensemble of the stable stationary point can be constructed by comparing the partition function given by the path integral with the partition function of the grand canonical ensemble. The grand potential can be obtained from the comparison, and the thermodynamics of the black hole corresponding to the stable stationary point is recovered. We have shown that the entropy corresponds to the Bekenstein-Hawking entropy, the pressure corresponds to the pressure of a self-gravitating static electrically charged spherical thin shell in equilibrium, and the thermodynamic energy has the same expression as the quasilocal energy. The first law of thermodynamics with constant area is obeyed at the stationary points of the action, as one would expect. The stability of the stationary points is described by the heat capacity at constant area and electric potential. If this heat capacity is positive, then the stationary point is stable. This fits well with the relationship between thermodynamic stability and the heat capacity. Additionally, we have compared the stable black hole solution to an electrically charged conducting hot sphere in flat space, to analyze when one is more favorable than the other. In this case, a configuration is more favorable than the other when its grand potential \(W\) is lower. We have shown that this depends on the value of the temperature, of the electric potential of the reservoir, and of the radius of the conducting sphere. Moreover, the smaller the radius of the conducting sphere, the larger the region where the stable black hole is favored. The comparison of the Buchdahl-Andreasson-Wright bound radius in \(d\)-dimensional Reissner-Nordstrom spacetimes with the minimum radius for which the stable black hole phase is thermodynamically favored shows that the two have no relationship, and thus that the connection displayed in the Schwarzschild case is not generic; rather, it is a very restricted equality holding only in the pure gravitational situation.

###### Acknowledgements.

We acknowledge financial support from Fundacao para a Ciencia e Tecnologia - FCT through project No. UIDB/00099/2020 and project No. UIDP/00099/2020.

## Appendix A Connection of thermodynamic radii to spacetime radii

### Thermodynamic bifurcation radius and the photon sphere radius

In the case of the grand canonical ensemble of a \(d\)-dimensional Reissner-Nordstrom black hole in a cavity, we have seen in Eq. (49) that the two thermodynamic black hole solutions, represented by \(r_{+1}\) and \(r_{+2}\), bifurcate from a horizon radius obeying \(\frac{r_{+}}{R}=\frac{2^{\frac{1}{d-3}}}{((d-1)(1-\Phi^{2}))^{\frac{1}{d-3}}}\), or in terms of \(R\), \[R=\left(\frac{(d-1)}{2}(1-\Phi^{2})\right)^{\frac{1}{d-3}}r_{+}\,. \tag{101}\] A black hole for which the horizon radius \(r_{+}\) satisfies Eq. (101) is marginally stable to thermodynamic perturbations; black holes with larger radius \(r_{+}\) are thermodynamically stable. Thus, the bifurcation radius is also the marginal thermodynamic stability radius.
The photon sphere radius \(R\) of a \(d\)-dimensional Reissner-Nordstrom black hole is given by, see Appendix E, \[R=\left(\frac{(d-1)}{2}\left(1+\frac{d-3}{d-2}\Phi^{2}\right)\right)^{\frac{1}{d-3}}r_{+}\,. \tag{100}\] At this radius, null geodesics can have circular trajectories, and thus photons can execute circular orbits. From direct comparison between Eqs. (101) and (100), we see that the two radii are distinct in any dimension \(d\), so that in the grand canonical ensemble of the Reissner-Nordstrom black hole there is no connection between them. Of course, when there is no charge and so no \(\Phi\), the two radii coincide, as Eqs. (101) and (100) both yield \(R=\left(\frac{d-1}{2}\right)^{\frac{1}{d-3}}r_{+}\), and so the radius of the cavity at which a stable black hole appears corresponds to the photon sphere radius. Thus, we have verified that the equality between the bifurcation and marginal stability radius and the photon orbit radius only holds in the pure gravitational case; the equality does not extend to the grand canonical ensemble of the \(d\)-dimensional Reissner-Nordstrom black hole.

### Most favorable configuration radius and the Buchdahl-Andreasson-Wright sphere radius

In the case of the grand canonical ensemble of a \(d\)-dimensional Reissner-Nordstrom black hole in a cavity, the stable solution has a negative action \(I_{0}\), see Eq. (37), or equivalently, a negative grand potential \(W\), see Eq. (73), if \[\frac{\mu m}{R^{d-3}}\leq-\frac{4(d-2)^{2}}{(d-1)^{2}(d-3)^{2}}+\frac{2(d-2)((d-2)^{2}+1)}{(d-1)^{2}(d-3)^{2}}\sqrt{1+\frac{(d-1)^{2}(d-3)^{2}}{4(d-2)^{2}}\frac{\lambda q^{2}}{R^{2d-6}}}\,. \tag{101}\] Note that this condition for \(d=4\) is given by \(\frac{m}{R}\leq-\frac{16}{9}+\frac{20}{9}\sqrt{1+\frac{9}{16}\frac{q^{2}}{4\pi R^{2}}}\). The Buchdahl-Andreasson-Wright bound yields the minimum radius below which the spacetime of an electrically charged matter distribution obeying certain conditions, in general relativity coupled to Maxwell electromagnetism in \(d\) dimensions, is singular. This Buchdahl-Andreasson-Wright radius was obtained in [31]. This radius can also be found from our work with thin shells [29] by imposing that the trace of the stress-energy tensor of the matter in the thin shell is zero. The radius is given by \[\frac{m}{R^{d-3}}=\frac{d-2}{(d-1)^{2}}+\frac{1}{d-1}\frac{\lambda q^{2}}{R^{2d-6}}+\frac{d-2}{(d-1)^{2}}\sqrt{1+(d-1)(d-3)\frac{\lambda q^{2}}{R^{2d-6}}}\,. \tag{102}\] For \(d=4\), this is \(\frac{\mu m}{R}\leq\left(\frac{1}{3}+\sqrt{\frac{1}{9}+\frac{1}{3}\frac{\lambda q^{2}}{R^{2}}}\right)^{2}\), i.e., \(\frac{m}{R}\leq\frac{2}{9}+\frac{1}{3}\frac{q^{2}}{4\pi R^{2}}+\frac{2}{3}\sqrt{\frac{1}{9}+\frac{1}{3}\frac{q^{2}}{4\pi R^{2}}}\). From direct comparison between Eqs. (101) and (102), we see that the most favorable configuration radius and the Buchdahl-Andreasson-Wright radius are distinct in any dimension \(d\), and so there is no connection between them. Of course, when there is no charge, \(q=0\), and so no \(\Phi\), \(\Phi=0\), both radii are equal to the Buchdahl radius. When \(q=0\), one finds that Eqs. (101) and (102) both yield \(\frac{r_{+}}{R}\geq\left(\frac{4(d-2)}{(d-1)^{2}}\right)^{\frac{1}{d-3}}\), which is the Buchdahl radius. In this case, the stable solution has a negative free energy if the radius of the black hole is larger than the Buchdahl radius.
Thus, we have verified that the equality between the black hole radius for which the stable solution has a zero \(W\) and the Buchdahl-Andreasson-Wright radius only holds in the pure gravitational case. ## Appendix B Gradient of the action of the two critical points ### Gradient of the action The gradient of the action in Eq. (30) yields \[\frac{\mu}{R^{d-2}}\frac{\partial I_{*}}{\partial x}=\frac{(d-3) \beta}{2Rx\sqrt{f}}\left[x^{d-3}-\frac{y}{x^{d-3}}\right]-2\pi x^{d-3}\,, \tag{103}\] \[\frac{\mu}{R^{d-2}}\frac{\partial I_{*}}{\partial\sqrt{y}}=\frac {\beta\sqrt{y}}{Rx^{d-3}\sqrt{f}}(1-x^{d-3})-\frac{\beta\Phi}{R}\,, \tag{104}\] where \(x=\frac{r_{+}}{R}\), \(y=\frac{\lambda q^{2}}{R^{2d-6}}\) and \(\Phi=(d-3)\Omega\sqrt{\lambda}\phi\). ### Gradient of the action of hot flat space Here we want to analyze the gradient of the action given in Eqs. (103) and (104) at the critical point of hot flat space. For that, we calculate the limit of the gradient for \(x=y=0\) along the curve \(y=(\eta)^{2}x^{2d-6}\), where \(\eta\) is a positive constant of the curve. Note that one must consider \(\eta<1\) so that the curve is inside the domain of the action. This family of curves cover the possible directions from the point \(x=y=0\) to the physical domain of the action. The gradient is given by \[\frac{\mu}{R^{d-2}}\frac{\partial I_{*}}{\partial x}=\frac{(d-3) \beta x^{d-4}}{2R}\left(1-\eta^{2}\right)\,, \tag{110}\] \[\frac{\mu}{R^{d-2}}\frac{\partial I_{*}}{\partial\sqrt{y}}=\frac{ \beta}{R}\left(\eta-\Phi\right)\,. \tag{111}\] We have left the dependence in \(x^{d-4}\) since it yields different limits for the case of \(d=4\) and \(d>4\). Since the gradient depends on \(\eta\), then the gradient is undefined. Nevertheless, we can calculate the directional derivative along the vector \(v=\frac{1}{\sqrt{1+(d-3)^{2}\eta^{2}x^{2d-8}}}(1,(d-3)\eta x^{d-4})^{T}\), which will be given by \[D_{v}I_{*}= \frac{(d-3)\beta x^{d-4}}{2R}\left(1+\eta^{2}-2\eta\Phi\right)\, \tag{112}\] so for \(d>4\), we have that the directional derivative vanishes. Moreover, the directional \((d-3)\)th derivative is positive since \(1+\eta^{2}-2\eta\Phi>0\) for \(\eta<1\) and \(\Phi\leq 1\), and so this can be considered as a minimum, although formally the partial derivative in \(\sqrt{y}\) is undefined. For \(d=4\) case, the directional derivative does not vanish. Yet, since \(1+\eta^{2}-2\eta\Phi>0\) for \(\eta<1\) and \(\Phi\leq 1\), one can observe that the directional derivative is positive in the physical domain. Therefore, the action resembles a conical potential well at the origin and so hot flat space can be considered as a solution. ### Gradient of the action of the extremal black hole with horizon at the cavity The gradient of the action of the extremal black hole can be given by using Eqs. (109) and (110). In order to study the gradient in the critical point \(x=1\) and \(y=1\), we calculate the gradient of the action in this limit along the curve \(x^{d-3}=1-\epsilon\) and \(y=1-\eta\epsilon\), where \(\eta\) is a constant of the curve and \(\epsilon\) parametrizes the curve. 
The limit of \(\epsilon\to 0^{+}\) is then performed, yielding the gradient \[\frac{\mu}{R^{d-2}}\frac{\partial I_{*}}{\partial x}=\frac{(d-3) \beta}{2R\sqrt{\eta-1}}(\eta-2)-2\pi\,, \tag{113}\] \[\frac{\mu}{R^{d-2}}\frac{\partial I_{*}}{\partial\sqrt{y}}=\frac{ \beta}{R\sqrt{\eta-1}}-\frac{\beta\Phi}{R}\,, \tag{114}\] where \(\eta>2\) so that the curve is done along configurations of subextremal black holes, coming from the condition \(y<x^{2d-6}\). Since there is a dependence on the curve we choose to perform the limit, the gradient at the extremal point is not defined. It is interesting to see that for \(\gamma=1\), i.e., \(\beta=\frac{4\pi}{d-3}\frac{|\Phi|}{1-\Phi^{2}}R\), the gradient vanishes in the limit along a curve with \(\frac{1}{\eta}=1+\frac{1}{\Phi^{2}}\). Indeed, this set of temperatures corresponds to the stable black hole solution hitting the extremal point \(x=y=1\). Of course, this only happens in one particular curve, the gradient is still undefined. It is also interesting to consider the directional derivative along these curves, in the direction of smaller \(\epsilon\). Indeed, the direction can be described by the vector \(v=\frac{1}{\sqrt{1+(d-3)^{2}\eta^{2}/4}}(1,\frac{\eta(d-3)}{2})^{T}\), and so the directional derivative yields \[\frac{\mu}{R^{d-2}}D_{v}I_{*}=\frac{\frac{\beta(d-3)}{2R}\left(2\sqrt{\eta-1}- \eta\Phi\right)-2\pi}{\sqrt{1+(d-3)^{2}\eta^{2}/4}}\,. \tag{115}\] The directional derivative depends also on \(\eta\) and it can be positive or negative. In particular, for values of \(\eta\) and \(\Phi\) where \(\gamma_{\rm bif}(\Phi,d)<\frac{4(\eta-1)\Phi^{2}}{(1-\Phi^{2})^{2}}\left(1- \frac{\eta}{2\sqrt{\eta-1}}\Phi\right)^{2}\) the directional derivative in Eq. (115) can be positive in a region \(\gamma_{\rm bif}(\Phi,d)<\gamma<\frac{4(\eta-1)\Phi^{2}}{(1-\Phi^{2})^{2}} \left(1-\frac{\eta}{2\sqrt{\eta-1}}\Phi\right)^{2}\), with \(\gamma_{\rm bif}\) given in Eq. (50). Therefore, the action near this critical point does not resemble a potential well. ## Appendix C Further relations between canonical ensemble and thermodynamics ### Extrema as first law of thermodynamics The extrema of the reduced action and stability of the two solutions of \(r_{+}(\beta,\phi,R)\) and \(q(\beta,\phi,R)\) was analyzed in Sec. III.2. Yet, the physical interpretation of these extrema and stability does not seem explicit. Let us do this interpretation with the help of the thermodynamic variables. We rewrite the reduced action in terms of the thermodynamic variables. If we use \(S\) of Eq. (76), \(Q\) of Eq. (77), and \(E\) of Eq. (79), Eq. (30) turns into \[I_{*}=\beta E-S-Q\beta\phi\,, \tag{116}\] where the functions \(E\), \(S\), and \(Q\) are in a first moment seen as function of \(r_{+}\) and \(q\), and noting that \(r_{+}\) and \(q\) here are not restricted. The conditions for extrema of the action, and so of equilibrium, are given in Eqs. (32) and (33). Now, in terms of \(S\) and \(Q\) one has \(\left(\frac{\partial I_{*}}{\partial r_{+}}\right)_{\beta,R,q}=\frac{\beta S}{ \partial r_{+}}\left(\frac{\partial I_{*}}{\partial S}\right)_{\beta,\phi,A,Q}\) and \(\left(\frac{\partial I_{*}}{\partial q}\right)_{\beta,R,r_{+}}=\left(\frac{ \partial I_{*}}{\partial Q}\right)_{\beta,\phi,A,S}\). Since \(\frac{\partial S}{\partial r_{+}}>0\) and \(\frac{\partial Q}{\partial q}=1\), Eqs. 
(32) and (33), i.e., \(\frac{\partial I_{*}}{\partial r_{+}}=0\) and \(\frac{\partial I_{*}}{\partial q}=0\), respectively, can now be rewritten as \(\left(\frac{\partial I_{*}}{\partial S}\right)_{\beta,\phi,A,Q}=0\) and \(\left(\frac{\partial I_{*}}{\partial Q}\right)_{\beta,\phi,A,S}=0\). Then using Eq. (116), one finds from these two equations that \(\beta\frac{\partial E}{\partial S}-1=0\) and \(\beta\left(\frac{\partial E}{\partial Q}-\phi\right)=0\), respectively, where \(\frac{\partial E}{\partial S}\equiv\left(\frac{\partial E}{\partial S}\right)_{A,Q}\) and \(\frac{\partial E}{\partial Q}\equiv\left(\frac{\partial E}{\partial Q}\right)_{A,S}\), to simplify the notation. So, with \(\beta=\frac{1}{T}\), the extrema give the result \[\frac{\partial E}{\partial S}=T\,, \tag{100}\] \[\frac{\partial E}{\partial Q}=\phi\,. \tag{101}\] Moreover, \(E\) of Eq. (79), in the variables \(A\) of Eq. (10), \(S\) of Eq. (76), and \(Q\) of Eq. (77), is of the form \(E=E(S,A,Q)\). It is then useful to define a quantity, namely the thermodynamic pressure \(p\), such that \[\frac{\partial E}{\partial A}=-p\,, \tag{102}\] where \(\frac{\partial E}{\partial A}\equiv\left(\frac{\partial E}{\partial A}\right)_{S,Q}\). Since by definition \(dE=\frac{\partial E}{\partial S}dS+\frac{\partial E}{\partial A}dA+\frac{\partial E}{\partial Q}dQ\), we obtain from Eqs. (100)-(102) that \[dE=TdS-pdA+\phi dQ, \tag{103}\] which is the first law of thermodynamics. Therefore, the condition of extrema, or of equilibrium, in the reduced action is equivalent to imposing the first law of thermodynamics on the thermodynamic energy \(E\). The derivation of the first law of thermodynamics here relies on the fact that the reduced action can be written as Eq. (100). As seen in Sec. IV, the thermodynamic quantities obtained through the grand potential are indeed the same as the ones considered here, since the extrema conditions are used.

### Perturbations around the zero loop approximation and stability of the action as thermodynamic stability

In the path integral, the stability for the extrema is given by the requirement that the matrix \(I_{*0ij}\), whose components are given in Eqs. (57)-(59), is positive definite. In thermodynamics, we have to deal with second derivatives of \(W\) for stability, Eqs. (87)-(88). Since \(\beta W=I_{0}\), one should be able to confront and compare the results of the two formalisms to show their equivalence in this stability aspect. For that, we recall that a sufficient condition is \[I_{*0r_{+}r_{+}}I_{*0qq}-\left(I_{*0r_{+}q}\right)^{2}>0\,, \tag{104}\] see Eq. (64). So we have to find the components of the matrix \(I_{*0ij}\) by considering \(I_{*}\) given by Eq. (100), noting that for \(I_{*}\), \(r_{+}\) and \(q\) are not restricted. In addition, since we are making the transformations from \(r_{+}\) to \(S\) and from \(q\) to \(Q\), specifically, \(S=\frac{\Omega r_{+}^{d-2}}{4}\) and \(Q=q\), see Eqs. (76) and (77), we have to relate the derivatives in \(S\) to derivatives in \(r_{+}\) and derivatives in \(Q\) to derivatives in \(q\). The action \(I_{*}=I_{*}(r_{+},q;\beta,\phi,R)\) is now given as \(I_{*}=I_{*}(S(r_{+}),Q;\beta,\phi,R)\). It is understood that a partial derivative of \(I_{*}\) with respect to either \(S\) or \(Q\) means we are working with \(I_{*}=I_{*}(S(r_{+}),Q;\beta,\phi,R)\).
Also, we use the transformation formulas \(\frac{\partial P(S(r_{+}),q)}{\partial r_{+}}=\frac{\partial S}{\partial r_ {+}}\frac{\partial P(S(r_{+}),q)}{\partial S}\) and \(\frac{\partial^{2}P(S(r_{+}),q)}{\partial r_{+}^{2}}=\frac{\partial}{ \partial r_{+}}\left(\frac{\partial S}{\partial r_{+}}\frac{\partial P(S(r_{+ }),q)}{\partial S}\right)=\frac{\partial^{2}S}{\partial r_{+}^{2}}\frac{ \partial P(S(r_{+}),q)}{\partial S}+\left(\frac{\partial S}{\partial r_{+}} \right)^{2}\frac{\partial^{2}P(S(r_{+}),q)}{\partial S^{2}}\), where \(P(S(r_{+}),Q)\) stands for any of the functions of interest. To calculate \(I_{*0r_{+}r_{+}}\) we have to calculate first \(I_{*r_{+}r_{+}}\) with \(r_{+}\) and \(q\) not restricted. So we have \(I_{*r_{+}r_{+}}=\frac{\partial^{2}S}{\partial r_{+}^{2}}\frac{\partial I_{*}} {\partial S}+\left(\frac{\partial S}{\partial r_{+}}\right)^{2}\frac{\partial^ {2}I_{*}}{\partial S^{2}}\). Since the extrema condition implies that \(\frac{\partial I_{*}}{\partial r_{+}}=\frac{\partial S}{\partial r_{+}}\frac{ \partial I_{*}}{\partial S}=0\), we have then in the new variable \(\frac{\partial I_{*}}{\partial S}=0\). Putting this in the calculation above for \(I_{*r_{+}r_{+}}\) and restricting \(r_{+}\) and \(q\) to the extrema, we have \[I_{*0r_{+}r_{+}}=\left(\frac{\partial S}{\partial r_{+}}\right)^{2}_{0}\! \left(\frac{\partial^{2}I_{*}}{\partial S^{2}}\right)_{0}\,, \tag{105}\] where the prefix \(0\) means the derivative is evaluated at the extrema. To calculate \(I_{*0r_{+}q}\), we start as well with \(I_{*r_{+}q}\). The quantity \(I_{*r_{+}q}\) in the new variables is \(I_{*r_{+}q}=\frac{\partial S}{\partial r_{+}}\frac{\partial^{2}I_{*}}{\partial Q \partial S}\), where the derivatives in \(I_{*}\) can be interchangeable and it has been used that \(S=S(r_{+})\) as it does not depend on \(q\) or \(Q\). Evaluating at the extrema, we have then \[I_{*0r_{+}q}=\left(\frac{\partial S}{\partial r_{+}}\right)_{0}\left(\frac{ \partial^{2}I_{*}}{\partial Q\partial S}\right)_{0}\,. \tag{106}\] To calculate \(I_{*0qq}\), it is straightforward since \(q=Q\) that the correspondent quantity in new variables is \[I_{*0qq}=\left(\frac{\partial^{2}I_{*}}{\partial Q^{2}}\right)_{0}\,. \tag{107}\] With all the quantities in the new variables, the stability condition, Eq. (104), can be written as \[\left(\frac{\partial^{2}I_{*}}{\partial S^{2}}\right)_{0}\left(\frac{\partial^ {2}I_{*}}{\partial Q^{2}}\right)_{0}-\left(\frac{\partial^{2}I_{*}}{\partial S \partial Q}\right)_{0}^{2}>0\,, \tag{108}\] where we have divided by \(\left(\frac{\partial S}{\partial r_{+}}\right)^{2}_{0}\) since it is positive. To turn this condition, Eq. (108), into a condition on thermodynamic quantities, we recall our previous results. Starting by \(\left(\frac{\partial^{2}I_{*}}{\partial S^{2}}\right)_{0}\), we have that \(\left(\frac{\partial^{2}I_{*}}{\partial S^{2}}\right)_{0}=\left(\frac{\partial }{\partial S}\left(\beta\frac{\partial E}{\partial S}-1\right)\right)_{0}= \beta\frac{\partial^{2}E}{\partial S^{2}}=\beta\left(\frac{\partial T}{\partial S }\right)_{A,Q}=\frac{1}{\lambda_{A,Q}}\), where Eq. (100) was used and the heat capacity \(C_{A,Q}\) at constant area and charge has been defined as \(C_{A,Q}=T\left(\frac{\partial S}{\partial T}\right)_{A,Q}\), with \(\beta=\frac{1}{T}\). 
For \(\left(\frac{\partial^{2}I_{*}}{\partial Q\partial S}\right)_{0}\), we have that \(\left(\frac{\partial^{2}I_{*}}{\partial Q\partial S}\right)_{0}=\left(\frac{\partial}{\partial Q}\left(\beta\frac{\partial E}{\partial S}-1\right)\right)_{0}=\beta\frac{\partial^{2}E}{\partial Q\partial S}=\frac{1}{T}\left(\frac{\partial T}{\partial Q}\right)_{S,A}=-\frac{\lambda_{T,A}}{C_{A,Q}}\), where Eq. (100) was used and the latent heat capacity \(\lambda_{T,A}\) at constant temperature and area has been defined as \(\lambda_{T,A}=\left(\frac{\partial S}{\partial Q}\right)_{T,A}\). For \(\left(\frac{\partial^{2}I_{*}}{\partial Q^{2}}\right)_{0}\), we have that \(\left(\frac{\partial^{2}I_{*}}{\partial Q^{2}}\right)_{0}=\left(\frac{\partial}{\partial Q}\left(\beta\frac{\partial E}{\partial Q}-\beta\phi\right)\right)_{0}=\beta\left(\frac{\partial^{2}E}{\partial Q^{2}}\right)_{0}=\beta\left(\frac{\partial\phi}{\partial Q}\right)_{S,A}=\frac{1}{T}\frac{1}{\chi_{S,A}}\), where Eq. (100) was used and the electric susceptibility \(\chi_{S,A}\) has been defined as \(\chi_{S,A}=\left(\frac{\partial Q}{\partial\phi}\right)_{S,A}\). In summary, the connection of the second derivatives of the action evaluated at the extrema with thermodynamic coefficients, i.e., laboratory variables, is \[\left(\frac{\partial^{2}I_{*}}{\partial S^{2}}\right)_{0}=\frac{1}{C_{A,Q}}\,, \tag{101}\] \[\left(\frac{\partial^{2}I_{*}}{\partial Q^{2}}\right)_{0}=\frac{\beta}{\chi_{S,A}}\,, \tag{102}\] \[\left(\frac{\partial^{2}I_{*}}{\partial S\partial Q}\right)_{0}=-\frac{\lambda_{T,A}}{C_{A,Q}}\,. \tag{103}\] Therefore, with Eqs. (101)-(103), the condition of stability, Eq. (101), in thermodynamic coefficients is \(\beta C_{A,Q}^{-1}\chi_{S,A}^{-1}-\lambda_{T,A}^{2}C_{A,Q}^{-2}>0\). Considering that \(C_{A,\phi}^{-1}=C_{A,Q}^{-1}-T\lambda_{T,A}^{2}C_{A,Q}^{-2}\), where the heat capacity \(C_{A,\phi}\) at constant area and electric potential is defined as \(C_{A,\phi}=T\left(\frac{\partial S}{\partial T}\right)_{A,\phi}\), the stability condition becomes \(\beta C_{A,\phi}^{-1}\chi_{S,A}^{-1}>0\). But, because \(\chi_{S,A}>0\) in the case of this ensemble, the condition reduces to \[C_{A,\phi}>0\,, \tag{104}\] with \[C_{A,\phi}=\frac{(d-2)\Omega r_{+}^{d-2}}{2\left((d-1)x^{d-3}-(d-3)x^{3-d}y-2\right)}\,. \tag{105}\] Since one has to impose \(x^{2d-6}>y\) in order to have black holes, and since \(y=\gamma x^{2d-4}\), we recover the condition Eq. (65). Moreover, by using that \(\gamma=\frac{\Phi^{2}}{x^{2}-(1-\Phi^{2})x^{d-1}}\) from Eq. (43), we recover the expression of the heat capacity given by Eq. (96) and the thermodynamic stability condition Eq. (94).

## Appendix D The case \(d=4\) in the zero loop approximation analyzed by Braden, Brown, Whiting, and York [12]

### Electric charge and potential

The grand canonical ensemble of a Reissner-Nordstrom black hole in a cavity in \(d=4\) was constructed and analyzed in [12]. In this Appendix, we make the comparison of our results with [12] by setting \(d=4\) in our expressions. In order to do this, one must keep in mind that we use different definitions for the Lagrangian of the electromagnetic part. In \(d=4\), the charge we consider is \(q=\sqrt{4\pi}\,q_{\rm B}\) and the potential we consider is \(A_{\tau}=\frac{A_{\tau{\rm B}}}{\sqrt{4\pi}}\), where \(q_{\rm B}\) and \(A_{\tau{\rm B}}\) are the electric charge and the electric potential used in [12], respectively.
With these redefinitions, our analysis for the particular dimension \(d=4\) yields the same results as [12].

### Action and stationary points

The action Eq. (37) for \(d=4\) turns into \[I=\beta R\left(1-f(\beta,\phi,R)\right)-q\beta\phi-\pi r_{+}^{2}\,, \tag{106}\] with \[f=1-\frac{r_{+}}{R}-\frac{q^{2}}{4\pi r_{+}R}+\frac{q^{2}}{4\pi R^{2}}\,. \tag{107}\] From the action given in Eq. (106), one can obtain the extrema conditions as given in Eqs. (43) and (44) for \(d=4\). Putting \(x=\frac{r_{+}}{R}\), \(y=\frac{q^{2}}{4\pi R^{2}}\), and \(\gamma=\frac{(4\pi RT)^{2}\Phi^{2}}{(1-\Phi^{2})^{2}}\), which is Eq. (39) for \(d=4\), Eqs. (43) and (44) yield \[(1-\Phi^{2})\left(\frac{r_{+}}{R}\right)^{3}-\left(\frac{r_{+}}{R}\right)^{2}+\frac{(1-\Phi^{2})^{2}}{(4\pi RT)^{2}}=0\,, \tag{108}\] \[\frac{q^{2}}{4\pi R^{2}}=\frac{(4\pi RT)^{2}\,\Phi^{2}}{(1-\Phi^{2})^{2}}\left(\frac{r_{+}}{R}\right)^{4}\,. \tag{109}\] Now, Eqs. (108) and (109) give precisely Eqs. (4.15) and (4.16) in [12] with the redefinitions \(q=\sqrt{4\pi}q_{\rm B}\) and \(\Phi=\phi_{\rm B}\), where again \(q_{\rm B}\) and \(\phi_{\rm B}\) are the electric charge and the electric potential used in [12]. The equation for \(\frac{r_{+}}{R}\), Eq. (108), is then a cubic equation, which was solved in [12]. There are two solutions, \(\frac{r_{+1}}{R}\) for the small black hole and \(\frac{r_{+2}}{R}\) for the large black hole. For each solution, the electric charge \(q\) follows from Eq. (109).

### Stability condition

The stability condition in this system for \(d=4\) corresponds to the positive definiteness of the Hessian of the action, meaning that if a solution of the extrema has a positive definite Hessian of the action, then the solution is stable. The expression of the condition given in Eq. (66) in \(d=4\) is \[\frac{\left(3(1-\Phi^{2})\left(\frac{r_{+}}{R}\right)-2\right)\left(1-\frac{r_{+}}{R}\right)}{1-(1-\Phi^{2})\left(\frac{r_{+}}{R}\right)}>0\,. \tag{110}\] The stability condition in Eq. (110) then implies that a solution is stable if \[\frac{r_{+}}{R}>\frac{2}{3(1-\Phi^{2})}\,. \tag{111}\] This stability result should be compared with the stability condition given in [12]. Recovering \(x=\frac{r_{+}}{R}\), note that \(\frac{(3(1-\Phi^{2})x-2)(1-x)}{1-(1-\Phi^{2})x}\) of Eq. (110) reduces to \(\frac{(3x-2)(1-(1-\Phi^{2})x)-\Phi^{2}x}{1-(1-\Phi^{2})x}=3x-2-\frac{\Phi^{2}x}{1-(1-\Phi^{2})x}\). Now, from Eq. (108) one has \(\frac{1}{1-(1-\Phi^{2})x}=\frac{(4\pi RT)^{2}x^{2}}{(1-\Phi^{2})^{2}}\), so that making this substitution in the previous equation and multiplying it by \(x\) one gets \(3x^{2}-2x-\frac{(4\pi RT)^{2}\Phi^{2}}{(1-\Phi^{2})^{2}}x^{4}=3x^{2}-2x-y\), where Eq. (45) was used to recover \(y\). Thus, Eq. (46) is the same as \(3x^{2}-2x-y\geq 0\). This latter equation is the stability condition Eq. (5.18) in [12]. Its solution is \(x\geq\frac{1}{3}+\frac{1}{3}\sqrt{1+3y}\), i.e., \(\frac{r_{+}}{R}\geq\frac{1}{3}+\frac{1}{3}\sqrt{1+3\frac{q^{2}}{4\pi R^{2}}}\). This equation and Eq. (45) reduce to \(\frac{r_{+}}{R}\geq\frac{2}{3}\) when \(q_{\rm B}=0\) or \(\Phi=0\), i.e., to York's case [8]. Therefore, our results agree with [12]. From the two solutions that exist in the ensemble, \(\frac{r_{+1}}{R}\) is unstable, and indeed only the larger black hole solution \(\frac{r_{+2}}{R}\) is stable.

### Thermodynamics

The thermodynamic quantities obtained for \(d=4\) from Eqs. (76)-(79) are the same as the thermodynamic quantities obtained in [12], with the redefinitions \(q=\sqrt{4\pi}q_{\rm B}\) and \(\Phi=\phi_{\rm B}\).
To be complete, we list here those thermodynamic quantities in \(d=4\), such as the entropy, electric charge, surface pressure, thermodynamic energy, and area. One can also find the Euler equation, the Gibbs-Duhem relation, and the Smarr formula for this case. The entropy can be directly obtained from Eq. (76), giving \[S=\frac{1}{4}A_{+}\,, \tag{78}\] which is the Bekenstein-Hawking entropy of a four-dimensional black hole, which explicitly gives \(S=\pi r_{+}^{2}\), since \(A_{+}=4\pi r_{+}^{2}\). The electric charge can be computed from Eq. (77), for \(d=4\). The charge has the same appearance as in general \(d\), i.e., for given \(T\), \(\phi\), and \(R\), one has \(Q=q\), which is the electric charge of Reissner-Nordstrom space. The thermodynamic surface pressure at \(R\) can be calculated from Eq. (78) for the \(d=4\) case, giving \[p=\frac{1}{16\pi R\sqrt{f}} \left(\left(1-\sqrt{\left(1-\frac{r_{+}}{R}\right)\left(1-\frac{ q^{2}}{4\pi r_{+}R}\right)}\right)^{2}\right. \tag{79}\] \[-\left.\frac{q^{2}}{4\pi R^{2}}\right),\] where \(f\) is given in Eq. (46). This tangential surface pressure acts along an area \(A=4\pi R^{2}\) of the boundary of the cavity. The mean thermodynamic energy is given in Eq. (79), which in \(d=4\) case is \[E=R\left(1-\sqrt{\left(1-\frac{r_{+}}{R}\right)\left(1-\frac{q^{2}}{4\pi r_{ +}R}\right)}\right)\,. \tag{80}\] This is the same expression as the quasilocal energy evaluated at a spherical shell of radius \(R\). The first law of thermodynamics, \(TdS=dE+pdA-\phi dQ\) given in Eq. (80), for the system in \(d=4\) holds. Moreover, from Eq. (80), one can write the energy in terms of the entropy \(S\) of Eq. (78), electric charge \(Q\), and surface area of the cavity \(A\), as \[E= \left(\frac{A}{4\pi}\right)^{\frac{1}{2}}\times \tag{81}\] \[\left(1-\sqrt{\left(1-\left(\frac{4S}{A}\right)^{\frac{1}{2}} \right)\left(1-\frac{Q^{2}}{(4SA)^{\frac{1}{2}}}\right)}\right)\,.\] One can then use the Euler's homogeneous function theorem considering that under rescaling of its arguments, the energy as a function has the property that \(E\left(\nu S,\nu A,\nu Q^{2}\right)=\nu^{\frac{1}{2}}E\left(S,A,Q^{2}\right)\). We thus have an integrated version of the first law of thermodynamics given by, see Eq. (82), \[E=2(TS-pA)+\phi Q\,, \tag{82}\] which is the Euler equation for a \(d=4\) electrically charged black hole in a heat reservoir. By differentiating Eq. (82) and considering that \(dE=TdS-pdA+\phi dQ\), one obtains \[TdS-pdA+2(SdT-Adp)+Qd\phi=0\,. \tag{83}\] which is the Gibbs-Duhem relation for this case. Then, the Smarr formula in \(d=4\) is \[m=2T_{\rm H}S+\phi_{\rm H}Q\,, \tag{84}\] see Eq. (84). Again, the Smarr formula is valid for the small black hole solution only. The stability condition is related to thermodynamics as it corresponds to the positivity of the heat capacity at constant area and electric potential, which we show for arbitrary dimension and it was shown in [12] for \(d=4\). ### Most probable or most favorable configurations Again, the most probable, or most favorable, configurations are found from the state with the lowest value of the action \(I_{0}\), or what amounts to the same thing, the state with the lowest value of the grand potential \(W\). In \(d=4\) this comparison was not performed in [12]. First, we have to compare the large stable black hole \(r_{+2}\) with hot flat space given by the stationary point \(r_{+}=0\) and \(q=0\), which in \(d=4\) and contrarily to \(d>4\), is not smooth. 
To simulate hot flat space, a hot conductor charged sphere is considered, which from Eq. (68) it has an action \(I_{\rm hot\,flat\,sphere}=-\frac{2\pi}{r_{+}^{2}-\frac{1}{R}}\beta\phi^{2}\) for \(d=4\). For tiny spheres, which are the ones that better simulate hot flat space, the action for the stable black hole Eq. (81) is less than the action for \(I_{\rm hot\,flat\,sphere}\), and so the stable black hole dominates. Second, we have to compare the large stable black hole \(r_{+2}\) with the other stationary point, which in \(d=4\) is \(r_{+}=R\) and \(\frac{q}{\sqrt{4\pi}}=R\). One finds that the stable black hole is a more probable configuration than the extreme black hole with the horizon at the cavity. ## Appendix E Null geodesic sphere of a \(d\)-dimensional Reissner-Nordstrom spacetime The \(d\)-dimensional Reissner-Nordstrom spacetime has the line element given by \[ds^{2}=-f(r)\,dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega^{2}\,, \tag{108}\] where \[f(r)=1-\frac{2\mu m}{r^{d-3}}+\frac{\lambda q^{2}}{r^{2d-6}}\,, \tag{109}\] with \(\mu=\frac{8\pi}{(d-2)\Omega}\), and \(d\Omega\) is the surface element of a \(d-2\)-sphere with unit radius given by \[d\Omega^{2}=d\varphi_{0}^{2}+\sum_{i=1}^{d-3}\left(\prod_{j=0}^{i-1}\sin^{2} \varphi_{j}\right)d\varphi_{i}^{2}\,. \tag{110}\] We now consider a congruence of null geodesics with tangent vector \(l^{a}=\frac{dr^{a}}{d\nu}\), where \(\nu\) is the parameter of the geodesic. Since the Reissner-Nordstrom spacetime is static and possesses spherical symmetry, one can integrate the equations of motion using integration constants. One integration constant is related to the spacetime being static, which is characterized by the Killing vector \(\xi=\partial_{t}\), and gives the notion of the specific energy of the geodesic \(\varepsilon\), yielding \[\dot{t}=\frac{\varepsilon}{f(r)}\,. \tag{111}\] Another integration constant is the specific angular momentum which is conserved. Due to spherical symmetry, it is possible to consider only the equatorial geodesics without loss of generality, i.e., we can fix \(\varphi_{i}=\frac{\pi}{2}\) for \(i=0,...,d-4\). Therefore, the specific angular momentum \(h\) is given by \[\dot{\varphi}_{d-3}=\frac{h}{r^{2}}\,. \tag{112}\] Finally, the remaining integration constant is the fact that the vector tangent to the geodesic is null, i.e., \(g_{ab}l^{a}l^{b}=0\). This condition becomes \[\dot{r}^{2}=f(r)\dot{t}^{2}-r^{2}\dot{\varphi}_{d-3}^{2}\,. \tag{113}\] Using, Eqs. (111) and (112) in (113) we have \[\dot{r}^{2}=\varepsilon^{2}-\frac{h^{2}}{r^{2}}\left(1-\frac{2\mu m}{r^{d-3}} +\frac{\lambda Q^{2}}{r^{2d-6}}\right)\,, \tag{114}\] where Eq. (109) has also been used. Note now, that from Eq. (113) one has \(\frac{\dot{r}}{h}=-\frac{d}{d\phi}(\frac{1}{r})\). So Eq. (114) turns into \[\left(\frac{1}{r}\right)^{\prime 2}=\frac{\varepsilon^{2}}{h^{2}}-\left( \frac{1}{r}\right)^{2}+2\mu m\left(\frac{1}{r}\right)^{d-1}-\lambda q^{2} \left(\frac{1}{r}\right)^{2d-4}\,, \tag{115}\] where here a prime denotes differentiation with respect to \(\phi\). Differentiating once with respect to \(\phi\), we have the equation \[\left(\frac{1}{r}\right)^{\prime\prime}+\frac{1}{r}=\mu m(d-1)\left(\frac{1} {r}\right)^{d-2}-\lambda Q^{2}(d-2)\left(\frac{1}{r}\right)^{2d-5} \tag{116}\] for null geodesics. The null geodesic sphere is characterized by circular null geodesics, therefore \(r\) is constant, i.e., \(r^{\prime}=0\) and \(r^{\prime\prime}=0\). From the geodesic equation, Eq. 
(116), the value of \(r\) of the radius of a circular null geodesic must then obey the equation \[1=\mu m(d-1)\left(\frac{1}{r}\right)^{d-3}-\lambda q^{2}(d-2) \left(\frac{1}{r}\right)^{2(d-3)}\,. \tag{117}\] The solutions for this radius, for each dimension \(d\), are thus functions of \(m\) and \(q\), i.e., \(r=r(m,q)\). But as we have seen \(m=m(r_{+},q)\), explicitly, \[2\mu m=r_{+}^{d-3}+\frac{\lambda q^{2}}{r_{+}^{d-3}}\,, \tag{118}\] which is found from the zero of Eq. (109). Moreover, from Eq. (35) and Eq. (42), we can define formally an electric potential \(\Phi\) at the null geodesic circular radius \(r\) by the expression \[\Phi(r_{+},q,r)=\frac{\sqrt{\lambda}q}{\sqrt{f[R,r_{+},q]}}\left(\frac{1}{r_{+ }^{d-3}}-\frac{1}{r^{d-3}}\right) \tag{119}\] which can be inverted to \(q=q(r_{+},\Phi,r)\). So also \(m=m(r_{+},\Phi,r)\). Indeed, from Eqs. (118) and (119) we get the correspondence \[\mu m =\frac{1}{2}\left(1+\frac{\Phi^{2}}{1-(1-\Phi^{2})\left(\frac{r_{ +}}{r}\right)^{d-3}}\right)\,r_{+}^{d-3}\,, \tag{120}\] \[\lambda q^{2} = \frac{\Phi^{2}}{1-(1-\Phi^{2})\left(\frac{r_{+}}{r}\right)^{d-3}} \,\,r_{+}^{2(d-3)}\,. \tag{121}\] Using Eqs. (120) and (121) in the geodesic equation, Eq. (117), one has \[1-\left(\frac{d+1}{2}+\frac{d-3}{2}\Phi^{2}\right)\left(\frac{r_ {+}}{r}\right)^{d-3}\] \[+\left(\frac{d-1}{2}+\frac{d-3}{2}\Phi^{2}\right)\left(\frac{r_{ +}}{r}\right)^{2(d-3)}=0, \tag{122}\] which has the solution \(r=r_{\rm ps}\) with \[r_{\rm ps}=\left[\frac{(d-1)\left(1+\frac{d-3}{d-1}\Phi^{2}\right)}{2}\right] ^{\frac{1}{d-3}}r_{+}\,. \tag{123}\] This radius \(r_{\rm ps}\) is the radius of a circular null geodesic in a Reissner-Nordstrom black hole geometry with given \(r_{+}\), \(\Phi\) and \(d\). If photons, gravitons, or any other light-like particle, are placed at this null geodesic they will follow circular orbits. The corresponding sphere radius is usually called photon sphere radius, \(r_{\rm ps}\).
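For completeness, the step from Eq. (122) to Eq. (123) can be made explicit. Writing \(u=\left(\frac{r_{+}}{r}\right)^{d-3}\) and \(a=\frac{d-1}{2}+\frac{d-3}{2}\Phi^{2}\) (shorthand introduced only for this check), Eq. (122) is the quadratic \(a\,u^{2}-(a+1)\,u+1=0\), which factorizes as \[\left(a\,u-1\right)\left(u-1\right)=0\,,\] so that \(u=1\), corresponding to the horizon itself, \(r=r_{+}\), or \(u=\frac{1}{a}\). The latter root gives \[r_{\rm ps}=a^{\frac{1}{d-3}}\,r_{+}=\left[\frac{(d-1)\left(1+\frac{d-3}{d-1}\Phi^{2}\right)}{2}\right]^{\frac{1}{d-3}}r_{+}\,,\] in agreement with Eq. (123).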
2301.13844
Do Multi-Document Summarization Models Synthesize?
Multi-document summarization entails producing concise synopses of collections of inputs. For some applications, the synopsis should accurately synthesize inputs with respect to a key aspect, e.g., a synopsis of film reviews written about a particular movie should reflect the average critic consensus. As a more consequential example, narrative summaries that accompany biomedical systematic reviews of clinical trial results should accurately summarize the potentially conflicting results from individual trials. In this paper we ask: To what extent do modern multi-document summarization models implicitly perform this sort of synthesis? We run experiments over opinion and evidence synthesis datasets using a suite of summarization models, from fine-tuned transformers to GPT-4. We find that existing models partially perform synthesis, but imperfectly: even the best performing models are over-sensitive to changes in input ordering and under-sensitive to changes in input compositions (e.g., ratio of positive to negative reviews). We propose a simple, general, effective method for improving model synthesis capabilities by generating an explicitly diverse set of candidate outputs, and then selecting from these the string best aligned with the expected aggregate measure for the inputs, or abstaining when the model produces no good candidate.
Jay DeYoung, Stephanie C. Martinez, Iain J. Marshall, Byron C. Wallace
2023-01-31T18:40:46Z
http://arxiv.org/abs/2301.13844v2
# Do Multi-Document Summarization Models Synthesize?

###### Abstract

Multi-document summarization entails producing concise synopses of collections of inputs. For some applications, the synopsis should accurately _synthesize_ inputs with respect to a key property or aspect. For example, a synopsis of film reviews all written about a particular movie should reflect the average critic consensus. As a more consequential example, consider narrative summaries that accompany biomedical _systematic reviews_ of clinical trial results. These narratives should fairly summarize the potentially conflicting results from individual trials. In this paper we ask: To what extent do modern multi-document summarization models implicitly perform this type of synthesis? To assess this we perform a suite of experiments that probe the degree to which conditional generation models trained for summarization using standard methods yield outputs that appropriately synthesize inputs. We find that existing models do partially perform synthesis, but do so imperfectly. In particular, they are over-sensitive to changes in input ordering and under-sensitive to changes in input compositions (e.g., the ratio of positive to negative movie reviews). We propose a simple, general method for improving model synthesis capabilities by generating an explicitly diverse set of candidate outputs, and then selecting from these the string best aligned with the expected aggregate measure for the inputs, or _abstaining_ when the model produces no good candidate. This approach improves model synthesis performance. We hope that highlighting the need for synthesis (in some summarization settings) motivates further research into multi-document summarization methods and learning objectives that explicitly account for the need to synthesize.

## 1 Introduction

_Multi-document summarization_ (MDS) models aim to distill inputs into concise synopses that preserve key content. Examples of MDS include summarizing news articles (Dang, 2005; Fabbri et al., 2019; Gholipour Ghalandari et al., 2020; Evans et al., 2004), answering questions from multiple sources (Dang, 2006), and producing overviews of scientific literature (Liu et al., 2018; Lu et al., 2020; Molla and Santiago-Martinez, 2012; Wallace et al., 2020; DeYoung et al., 2021). We expect summarization models to produce outputs consistent with inputs (Kryscinski et al., 2020; Nan et al., 2021), e.g., discussing the same types of entities (Nan et al., 2021) and allowing one to answer questions over the summary in a way that is consistent with the individual inputs (Wang et al., 2020; Scialom et al., 2021). In some applications models must _synthesize_ inputs--i.e., aggregate potentially conflicting information--to yield an accurate synopsis (Figure 1). As a simple example, consider the meta-reviews of movies featured on Rotten Tomatoes,1 which provide a consensus view of individual critic opinions. These reviews should reflect the mean and range of sentiment implicit in the input critiques: A summary of mostly negative reviews (e.g., _Gigli_) should communicate that the film was widely panned; a summary of mixed reviews (_The Fifth Element_) ought to convey that critics disagreed and discuss the main positive and negative attributes. Footnote 1: A website that aggregates film reviews: https://www.rottentomatoes.com/. A more consequential example is the task of summarizing the evidence presented in clinical trials.
Individual trials will frequently present conflicting evidence about whether or not a particular health intervention is effective. An ideal summary of the evidence would appropriately weigh the findings presented in the constituent inputs and reflect the evidence on balance. What are the desiderata of multi-document _synthesis_? First, summaries produced by models should be _consistent_ with the input data, with respect to the latent property of interest. In the case of Rotten Tomatoes, the _sentiment_ of the summary should be in line with the aggregate sentiment expressed in the individual critic reviews. A corollary to this is that models should be _sensitive_ to changes in the composition of inputs, e.g., removing most of the negative reviews from a set of inputs should yield a summary with a corresponding increase in the expressed sentiment. In this work we evaluate neural MDS models with respect to these criteria. To this end we use a meta-reviews dataset from Rotten Tomatoes Leone (2020) and a dataset of systematic reviews (meta-analyses) summarizing the evidence about medical interventions Wallace et al. (2020). For the former we probe the degree to which generated meta-review sentiment agrees with the expected aggregate sentiment score; for the latter we evaluate whether the generated summary indicates that the input evidence suggests, on balance, that the intervention under consideration was effective. Our **main contributions** are summarized as follows. (1) To the best of our knowledge, this is the first work to investigate implicit _synthesis_ in summarization, and the degree to which modern models are capable of this.2 (2) We show that "off-the-shelf" neural MDS models are somewhat inconsistent and insensitive with respect to performing synthesis in summarization. (3) We propose and evaluate a simple and general technique of generating a diverse set of output candidates Vijayakumar et al. (2016) and then selecting from these on the basis of agreement with an expected aggregate measure (based on inputs), with promising results. Footnote 2: See Appendix A for discussion of related content aggregation work over structured relations Shah et al. (2021).

## 2 Synthesis and Summarization

In standard multi-document summarization, we assume inputs \((X_{i},y_{i})\), where \(X_{i}=\{x_{i1},...,x_{i|X_{i}|}\}\). We then typically train a summarization model with parameters \(\theta\) to consume \(X_{i}\) and yield summaries \(\hat{y}_{i}\) as similar as possible to targets \(y_{i}\). More precisely, the standard objective entails finding estimates for \(\theta\) which maximize target token log-probabilities. Assuming the input documents \(x_{ij}\) in \(X_{i}\) have been linearized (i.e., concatenated, usually with adjoining special tokens to demarcate individual inputs) into a string \(x_{i}^{\oplus}\) of input tokens, this objective takes the form: \(\sum_{t=1}^{|y_{i}|}\text{log }p_{\theta}(y_{it}|y_{i1},...,y_{i(t-1)},x_{i}^{\oplus})\), where \(p_{\theta}\) is the probability assigned by a summarization model with parameters \(\theta\) to the token at position \(t\) of the target \(y_{i}\), given the preceding target tokens and the linearized input \(x_{i}^{\oplus}\). By myopically focusing on encouraging the model to produce tokens that mimic the targets, this objective aligns with standard (but flawed) measures of automated summary quality like ROUGE Lin (2004), which quantify \(n\)-gram overlap between targets \(y_{i}\) and outputs \(\hat{y}_{i}\).
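For concreteness, the following minimal sketch shows one way this linearization and token-level objective can be realized with the Huggingface transformers library; the checkpoint name and separator token are illustrative assumptions, not necessarily the exact configuration used in our experiments.

```python
# Minimal sketch: linearize input documents x_ij into x_i^+ and compute the
# standard token-level cross-entropy of the reference summary (teacher forcing).
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "allenai/led-base-16384"  # illustrative; any seq2seq summarizer works
SEP = " </s> "                          # assumed document separator token

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def linearize(reviews):
    """Concatenate the input documents x_ij into a single string x_i^+."""
    return SEP.join(reviews)

def training_loss(reviews, reference_summary):
    """Negative mean log-probability of the reference summary tokens."""
    enc = tokenizer(linearize(reviews), return_tensors="pt",
                    truncation=True, max_length=4096)
    labels = tokenizer(reference_summary, return_tensors="pt",
                       truncation=True, max_length=256).input_ids
    out = model(input_ids=enc.input_ids,
                attention_mask=enc.attention_mask,
                labels=labels)
    return out.loss  # = -(1/|y_i|) sum_t log p_theta(y_it | y_i<t, x_i^+)

loss = training_loss(["A dazzling mess.", "Visually stunning but hollow."],
                     "Critics found it beautiful to look at but unsatisfying.")
loss.backward()  # an optimizer step would follow during fine-tuning
```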
We are interested in settings in which there is an additional latent property \(z_{ij}\) implicit in each constituent input text \(x_{ij}\). For example, \(z_{ij}\) might reflect the sentiment in critique \(j\) of the film indexed by \(i\). Summaries should _synthesize_ this aspect, i.e., the generated summary \(\hat{y}_{i}\) should implicitly convey an aggregated \(z_{i}\) which reflects a synthesis or aggregation \(G\) over \(Z_{i}=\{z_{i1},...z_{i|X_{i}|}\}\). That is, we assume \(z_{i}=G(Z_{i})\). In both cases considered here--summaries of film critiques and synopses of clinical trial evidence--\(G\) can reasonably be assumed to be a (weighted) mean, \(G(Z_{i})=\frac{1}{|X_{i}|}\sum_{j=1}^{|X_{i}|}\alpha_{ij}z_{ij}\). That is, summaries should roughly reflect the average sentiment and reported treatment effect in the cases of movie reviews and clinical trial reports, respectively.

Figure 1: Two multi-document summarization tasks where models must implicitly synthesize inputs to produce accurate summaries. Left: Summarizing film reviews with varying sentiment to yield a _critics consensus_. Right: Summarizing trials that have evaluated a particular medical intervention.

We investigate the following questions. (1) Do model summaries \(\hat{y}_{i}\) reflect the anticipated aggregate aspect of interest? That is, how well calibrated is the aspect communicated in the generated summary (\(z_{i\hat{y}}\)) compared to the expected \(z_{i}\)? (2) Can we _improve_ the ability of summarization models to synthesize by explicitly incorporating synthesis targets \(z_{i}\) into the decoding process? We propose a simple inference-time procedure to explicitly prefer output candidates that align
We find a reasonably strong correlation between our sentiment estimates and the "true" meta-review sentiment ("Tomatometer" score): The \(\text{R}^{2}\) (centered) is 0.696, mean squared error (MSE) of 0.022, and Pearson's r of 0.836 (Figure 2, upper left). Footnote 4: SST is itself based on a collection of Rotten Tomatoes critic reviews Pang and Lee (2005). We verified that the SST text fragments do not overlap with our target reviews by manually checking any (fragment, review) pair with substantial (\(\geq 75\%\)) overlap for one quarter of all reviews. Footnote 5: github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification ### Biomedical Systematic Reviews Our second dataset is a collection of systematic reviews from the Cochrane Collaboration.6 This dataset comprises roughly 2600 systematic reviews summarizing a total of 16,500 clinical trials evaluating interventions in healthcare (Table 1). Each review includes both a natural language summary and accompanying statistical meta-analysis results. The latter provides an aggregate statistical summary of the individual (study-level) data extracted from the trials included in each review. The natural language summary should accurately convey and contextualize the findings of the meta-analysis. Therefore, the (lack of) treatment efficacy communicated in a given summary should generally agree with the direction of the corresponding meta-analytic point estimate. Footnote 6: An international non-profit dedicated to helping healthcare providers make evidence-based decisions. **Measuring effects in evidence syntheses** For systematic reviews of clinical trials, we resort to a less granular _classification_ model \(g(x_{ij}),g(y_{i})\) which attempts to infer whether a given piece of text reports a significant result or not. In particular we use \(\mathsf{RobotReviewer}\)Marshall et al. (2017); DeYoung et al. (2020). Given a narrative describing a clinical trial result (or a systematic review summary of such results), \(\mathsf{RobotReviewer}\) predicts whether the reported result indicates a significant effect of the treatment being investigated, or not. We can compare this prediction to the "truth", which here is derived from the meta-analytic result (specifically by checking whether \(p<0.05\)). Applying this off-the-shelf model to the manually composed summaries accompanying the meta-analyses in our Cochrane set, we observe a macro-average F1 score of 0.577 (Table 10, Appendix C), providing a reasonable (if weak) measure for this task. ## 3 Models We evaluate a suite of transformer Vaswani et al. (2017) summarization models: Longformer Beltagy et al. (2020), Pegasus Zhang et al. (2020), PRIMERA Xiao et al. (2022), and T5 Raffel et al. (2020). PRIMERA was designed and pre-trained specifically for multi-document summarization. And while not explicitly designed as multi-document summarization models, both Pegasus Zhang et al. (2020) and T57 have been used on multi-document tasks, while Longformer has been used for a related multi-document summarization task DeYoung et al. (2021). See Appendix B for hyperparameter settings. ## 4 Experiments ### How well do summarization models synthesize? We report sentiment performance for all models (Table 2). These metrics quantify the strength of the relationship between (a) the continuous sentiment inferred (via our text-regression \(g\)) over model generated or reference (human written) summaries and (b) the reference sentiment (Tomatometer) score. 
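The following is a minimal sketch of how such agreement metrics can be computed from a trained sentiment regressor \(g\); the checkpoint name is a placeholder, not the exact model used in our experiments.

```python
# Sketch: score summaries with a sentiment regression model g and correlate the
# predictions with the reference Tomatometer scores (R^2, Pearson's r, MSE).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from scipy.stats import pearsonr
from sklearn.metrics import r2_score, mean_squared_error

REGRESSOR = "bert-base-uncased"  # placeholder: substitute a checkpoint fine-tuned
                                 # on continuous (SST-style) sentiment targets
tok = AutoTokenizer.from_pretrained(REGRESSOR)
g = AutoModelForSequenceClassification.from_pretrained(REGRESSOR, num_labels=1)
g.eval()

@torch.no_grad()
def predict_sentiment(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    return g(**batch).logits.squeeze(-1).tolist()  # one continuous score per text

def synthesis_metrics(summaries, tomatometer_scores):
    preds = predict_sentiment(summaries)
    return {"R2": r2_score(tomatometer_scores, preds),
            "Pearson r": pearsonr(tomatometer_scores, preds)[0],
            "MSE": mean_squared_error(tomatometer_scores, preds)}

print(synthesis_metrics(
    ["A crowd-pleasing triumph.", "Serviceable but forgettable.", "Dull and overlong."],
    [0.91, 0.55, 0.18]))
```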
Across these metrics, correlations between the sentiment measured in model generated outputs and the Tomatometer score are considerably lower than that between the same measurement over human-composed summaries and said score. This implies that human authors do a better job of synthesis than the models when composing summaries. For systematic reviews (Section 2.2), we are able to measure \(g\) whether a text appears to report significant treatment effect or not, and we can compare this against the \(p\)-value from the corresponding statistical meta-analysis. This permits only a coarse assessment of synthesis, as we are unable to measure correlations. Instead we report classification metrics describing how often the effect significance inferred from a summary (generated or manually written) matches the ground truth derived from the meta-analysis (Table 2). The results are qualitatively similar to the sentiment case, in that the humans appear to do a better job of synthesis -- as best we can measure, the significance reported in their summaries better aligns with the statistical results than in model generated summaries. ### Sensitivity to Input Ordering Synthesis of inputs should be invariant to ordering (e.g., the critics' consensus on a film does not depend on the order in which one reads the reviews). Here we evaluate if models are sensitive to input orderings with respect to the synthesized aspect of interest (\(z_{i\hat{y}}\)) in the resultant outputs. Specifically, \(X_{i}=\{x_{i1},...,x_{i|X_{i}|}\}\) will constitute an arbitrary ordering of inputs reflected in the linearized version \(x_{i}^{\oplus}\). This ordering should not affect the aggregate aspect \(z_{i\hat{y}}\) in the summary. To evaluate if models realize this invariance, we permute the instance \(i\) inputs \(X_{i}\) (and, consequently, the linearized \(x_{i}^{\oplus}\)) one hundred times, randomizing input orderings. For each such permutation \(\widehat{X}_{i}\) (and associated \(\tilde{x}_{i}^{\oplus}\)), we generate a summary \(\hat{y}_{i}\) and estimate of the resultant aspect \(\tilde{z}_{i\hat{y}}\), using the corresponding measurement model. By repeating this process for each instance \(i\), we can construct an empirical distribution over \(\tilde{z}_{i\hat{y}}\)'s under different random orderings. **Movie reviews.** We zero-mean the \(\tilde{z}_{i\hat{y}}\)'s inferred over each instance, and combine the distributions from all instances into a histogram (Figure 3 left). This shows the spread of sentiments inferred over outputs under random input orderings minus the corresponding instance mean sentiment. Were a model completely invariant to ordering, the empirical distribution over these differences would collapse to 0. Instead, we observe a relatively wide \begin{table} \begin{tabular}{l c c c||c c c} \hline \hline & Train & Dev & Test & Train & Dev\({}^{\dagger}\) & Test \\ \hline Number of metareviews & 7251 & 932 & 912 & 1675 & 360 & 397 \\ Avg. metareview length & 32.0 & 32.6 & 32.4 & 101 & 107 & 111 \\ Total number of inputs & 195033 & 24336 & 24474 & 111054 & 1238 & 2669 \\ Avg. number of inputs & 26.9 & 26.1 & 26.8 & 6.6 & 3.4 & 6.7 \\ Avg length of individual input & 30.6 & 30.8 & 30.6 & 475 & 379 & 449 \\ Avg length of concatenated inputs & 822 & 804 & 822 & 2641 & 1336 & 2544 \\ Target Percent Positive & 59.5 & 62.1 & 61.2 & 31.9 & 31.4 & 35.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset statistics for movie reviews (left) and systematic reviews (right). 
Number of meta-reviews, average meta-review length (tokens), input reviews per split, average number of inputs per instance, average total length of instance-inputs. For movie reviews, the target percent positive reports the fraction of metareviews with a positive sentiment; for systematic reviews this refers to the fraction of metareviews reporting a significant effect. \({\dagger}\) We subset the original dev set to instances of \(\leq 4k\) tokens (accommodating T5; other models can consume up to 16k). Figure 2: Movie Reviews: Actual vs. Predicted Sentiments on generated summaries. We replaced LED with human outputs (upper left) for comparison; see Figure 8 in Appendix C for all models. spread in the sentiment measured over outputs generated from different permutations, indicating a counter-intuitive sensitivity to orderings.8 Footnote 8: For a ROUGE1 comparison, see Appendix D, Figure 10. **Systematic reviews**. For each \(X_{i}\) we have 100 order permutations and associated summaries; we infer whether these report _significant results_ or not, and record the fraction that do (\(p_{i}\)). If models were invariant to ordering, this fraction would always be 0 or 1. Values in-between suggest the model flips the report conclusion as a result of different input orderings. Figure 3 (right) shows a histogram of entropies over \(p_{i}\), computed over the subset of examples where the associated meta-analysis indicates a significant effect.9 Densities away from zero indicate sensitivity to ordering. Footnote 9: These are the more interesting cases; we provide results over the entire dataset in Appendix Figure 9. ### Sensitivity to Input Composition Synthesis models should be responsive to changes in the distribution of the attribute to be synthesized in the input composition: If we increase the ratio of positive to negative reviews in an input set, we would anticipate a concomitant change in the sentiment communicated in the meta-review \(z_{ij}\). To assess if models meet this synthesis desiderata, we manipulate model inputs \(X_{i}\) in such a way to induce an expected change in the target measure \(z_{ij}\); we then measure if the output yields a summary that aligns with this expected change. **Movie reviews**. We manipulate the ratio of positive to negative reviews and observe the resultant change in the property of interest latent in the corresponding output. We take movies with mixed reviews, and delete 10%, 20%, 30%,..., 100% of the positive inputs, retaining the negative inputs; we then repeat the process but instead remove negative inputs. For each of these permutations, we measure the input sentiment, the meta-review sentiment, and how well they correlate (Table 3). Figure 4 plots the relationship between the fraction of positive reviews in the (manipulated) input sets and the granular sentiment score inferred over the resultant outputs. The models are generally undersensitive to changes in their input: rather than having a change in meta-review sentiment equivalent in size to changes in input sentiment (a slope of 1, as we observe when we fit a model to the human written summaries). 
Models tend to have trouble changing their sentiment, and require a large change in input distribution to substantially change \begin{table} \begin{tabular}{l c c c} \hline \hline & R\({}^{2}\) & Pearson’s r & MSE & ROUGE1 \\ \hline LED & 0.551 & 0.742 & 0.042 & 0.242 \\ PRIMERA & 0.608 & 0.780 & 0.037 & 0.254 \\ T5 & 0.516 & 0.720 & 0.046 & 0.253 \\ Pegasus & 0.530 & 0.730 & 0.044 & 0.245 \\ Reference & **0.697** & **0.836** & **0.023** & \\ \hline \hline \end{tabular} \end{table} Table 2: Base synthesis results. **Movie reviews** (left): correlations (R\({}^{2}\), Pearson’s r, mean-squared errors) between sentiment measured in model outputs and targets. **Systematic reviews** (right): we report macro-averaged F1s. Figure 3: The spread of sentiment/treatment effect measured in outputs produced from permuted input orderings. Left: Movie review sentiment. Right: Systematic review significance prediction entropy (0 indicates order insensitivity) on the subset of reviews that report _significant_ effects. \begin{table} \begin{tabular}{l c c||c c c} \hline \hline & R\({}^{2}\) & P\({}^{*}\)s r & MSE & F1 & Acc. \\ \hline LED & 0.524 & 0.724 & 0.057 & 0.510 & 0.684 \\ PRIMERA & 0.572 & 0.756 & 0.052 & 0.533 & 0.675 \\ T5 & 0.481 & 0.694 & 0.063 & 0.469 & 0.658 \\ Pegasus & 0.499 & 0.706 & 0.060 & 0.452 & 0.680 \\ \hline \hline \end{tabular} \end{table} Table 3: **Movie** (left): Correlation (R\({}^{2}\), Pearson’s r, MSE) and **Systematic** (right) reviews: Classifications (F1, accuracy) for subsampled inputs and generations. the sentiment communicated in the output. **Systematic Reviews**. To measure sensitivity to changes in input composition, we manipulate inputs \(X_{i}\) such that the meta-analysis result (target \(z_{i\bar{y}}\)) flips from a significant effect to no effect, or from no effect to an effect. We first take a subset of the reviews that have conflicting evidence (yielding 139 unique reviews). We then order inputs in these by (weighted) effect sizes,10 and remove subsets which ought to flip the significance result. Footnote 10: In fixed effects meta-analysis the weights are inverse variances associated with study-level effect estimates. ## 5 Improving Synthesis in Summarization We propose a straightforward post-hoc approach to improving the synthesis performed by multi-document summarization models: (1) Generate an explicitly _diverse_ set of output candidates11; (2) Select from these as the final output the candidate that best agrees with the expected synthesis result (as predicted by an external model).1213 Footnote 11: See Appendix Tables 11, 12 for an ablation over diversity vs. standard beam search outputs Footnote 12: For a related generate-and-select approach (Oved and Levy, 2021) see Appendix A. For (1), we rely on a previously proposed technique for generating diverse outputs \(\mathcal{C}_{i}\) from input \(x_{i}^{\oplus}\), namely _Diverse Beam Search_ (DBS) (Vijayakumar et al., 2016). This method modifies standard beam search to maintain multiple _groups_ of beams. During decoding, a term is added to the next-token log probabilities, penalizing production of strings similar to candidates in _other_ groups.14 Footnote 13: We experiment with an additional decoding method: constrain the beam search to produce summaries with an approximately correct sentiment, Appendix E. Footnote 14: This penalty requires a hyperparameter \(\lambda\) that encodes the relative importance of diversity; we use \(\lambda\)=0.5 and did not tune this. 
We also used 5 groups and 1 beam per group. In (2) we would like to select the output that best synthesizes the property of interest; this requires a mechanism for specifying what we _expect_ the synthesized property be, given the inputs. For example, if we know the sentiment scores associated with input movie reviews, we might enforce that the sentiment expressed in the output agrees with the average of these. To realize this intuition, we can select as final output from \(\mathcal{C}_{i}\) the string that best aligns with this anticipated aggregate property (sentiment score or significance finding). Operationally, this requires an external model to measure--or estimate--the aspect of interest as latent in a given candidate output. This is a limitation of the approach, but in many settings it may be feasible to identify or construct a model; we were able to do so for both tasks considered in this paper. There is no guarantee that _any_ member of \(\mathcal{C}_{i}\) will align well with the anticipated aggregated property. In such cases, we have no means of yielding an output consistent with respect to synthesis, and it may be desirable to _abstain_ from outputting anything at all in such cases; that is, to be a _cautious_ summarizer (Ferri et al., 2004; Hechtlinger et al., 2018). We consider this strategy in the case of generating narrative synopses of evidence, as this constitutes a case in which (a) one would very much prefer not to produce a misleading summary of clinical evidence (Kell et al., 2021), and, (b) we observe many cases where the diverse decoding strategy yields an output that seems to communicate (at a granular level) the aggregate findings expected. **Movie Reviews** For movie reviews we use BERT (Devlin et al., 2019), fine-tuned on IMDB (Maas et al., 2011)15 to predict the sentiment of each input \(x_{ij}\), using the proportion of \(x_{ij}\in X_{i}\) with a positive score as an approximation for the target sentiment \(z_{i\bar{y}}\). For each diverse prediction \(\mathcal{C}_{i}\), we predict a sentiment \(\tilde{z}_{i\bar{y}}\) using our sentiment regression model (Section 2.1), and select the prediction closest to the estimated target sentiment Figure 4: Model sentiment sensitivity to manipulated input sentiment composition. The intensity patterns indicate that models tend to oscillate between low and high sentiments in outputs, and are not responsive to subtler shifts in input sentiment. For context we include a model regression (blue) and the reference sensitivity regression (black). \(|\tilde{z}_{\hat{i}\hat{y}}-z_{\hat{i}\hat{y}}|\). We find this improves model performance to human-like levels in terms of synthesis (Table 4, Figure 6). Two authors annotated 100 paired instances over PRIMERA generations for sentiment preference (matching the reference) between standard and diverse outputs. We find a moderate agreement Cohen's \(\kappa\)=0.59, with a statistically significant preference for the diverse summaries (p=0.003; Appendix F). **Systematic Reviews**. In the case of systematic reviews, we can have only a binary measure of _significant effect_ (or not). As a proxy for \(z_{i\hat{y}}\), we again use RobotReviewer to extract an effect for each of the model inputs \(x_{ij}\), using the majority vote (i.e., do the plurality of \(x_{ij}\in X_{i}\) indicate that there was an effect). We classify each output candidate in \(\mathcal{C}_{i}\) again using RobotReviewer to estimate \(\tilde{z}_{\hat{i}\hat{y}}\). 
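For the movie-review case just described, the whole generate-diverse-then-select loop can be sketched as follows; the public PRIMERA checkpoint and the off-the-shelf sentiment pipeline are stand-ins for the fine-tuned summarizer and sentiment regressor used in the paper, and the decoding settings mirror the reported ones (5 groups, 1 beam per group, diversity penalty 0.5). The systematic-review variant, described next, swaps the sentiment scorer for RobotReviewer's effect classification and a majority vote over the inputs.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

tok = AutoTokenizer.from_pretrained("allenai/PRIMERA")            # stand-in checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/PRIMERA")
sentiment = pipeline("sentiment-analysis")                         # stand-in sentiment scorer

def positive_prob(text: str) -> float:
    out = sentiment(text[:1000])[0]
    return out["score"] if out["label"] == "POSITIVE" else 1.0 - out["score"]

def diverse_then_select(reviews: list[str]) -> str:
    # Approximate target sentiment: fraction of inputs judged positive.
    target = sum(positive_prob(r) > 0.5 for r in reviews) / len(reviews)
    inputs = tok(" ".join(reviews), return_tensors="pt", truncation=True)
    # Diverse beam search: 5 groups x 1 beam per group, diversity penalty 0.5.
    candidates = model.generate(**inputs, num_beams=5, num_beam_groups=5,
                                diversity_penalty=0.5, num_return_sequences=5,
                                max_new_tokens=64)
    texts = tok.batch_decode(candidates, skip_special_tokens=True)
    # Select the candidate whose estimated sentiment is closest to the target.
    return min(texts, key=lambda t: abs(positive_prob(t) - target))
```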
We then select for output the highest probability candidate in \(\mathcal{C}_{i}\) which agrees with the majority vote of the inputs, and abstain where there are no viable candidates. For the models we do choose a summary for, we find performance similar to our measure (Table 5). Movie reviews show a wide range of sentiments; systematic reviews show some improvement but are biased towards no effect (qualitatively observed in Appendix H). ## 6 Related Work **Automatic (multi-document) summarization**(Nenkova and McKeown, 2011; Maybury, 1999) has been an active subfield within NLP for decades. We have focused our analysis on modern, neural abstractive models for conditional text generation Bahdanau et al. (2015). In light of their empirical success, we have specifically evaluated a set of Transformer-based (Vaswani et al., 2017) models which have recently been used for multi-document summarization (Beltagy et al., 2020; Zhang et al., 2020; Xiao et al., 2022; Raffel et al., 2020). There has been some work on highlighting conflicting evidence in health literature specifically (Shah et al., 2021, 2021), though this was focused primarily on highlighting conflicting evidence, and explicitly aggregating extracted content. **Sentence fusion** One view on synthesis might be that is a particular kind of _sentence fusion_(Barzilay and McKeown, 2005). However, past work on "fusing" sentences has assumed that the aim is to generate an output that contains the information common to similar sentences (Thadani and McKeown, 2013). This is intuitive in the context \begin{table} \begin{tabular}{l l l l l} \hline \hline & R\({}^{2}\) & MSE & Pearson’s r & R1 \\ \hline LED & 0.656 & 0.032 & 0.821 & 0.229 \\ Pegasus & 0.694 & 0.029 & 0.835 & 0.229 \\ PRIMERA & 0.749 & 0.024 & 0.880 & 0.240 \\ T5 & 0.721 & 0.026 & 0.856 & 0.231 \\ \hline Reference & 0.697 & 0.023 & 0.836 & \\ \hline LED & 0.763 & 0.022 & 0.878 & 0.227 \\ Pegasus & 0.799 & 0.019 & 0.894 & 0.232 \\ PRIMERA & 0.890 & 0.011 & 0.948 & 0.240 \\ T5 & 0.876 & 0.012 & 0.938 & 0.230 \\ \hline \hline \end{tabular} \end{table} Table 4: Movie Reviews: Generate diverse movie meta-reviews and select among them using an approximate target sentiment (top) or the oracle sentiment (bottom). Figure 5: Proposed strategy to improve synthesis. We generate an intentionally diverse set of output candidates (Vijayakumar et al., 2016) and then select from these the text that best agrees with the _predicted_ aggregate property of interest (here, sentiment). We can also _abstain_ when the model fails to yield an appropriate output. \begin{table} \begin{tabular}{l l l l l l} \hline \hline & \multicolumn{3}{c}{Multiple-then-select} & \multicolumn{3}{c}{Oracle} \\ & F1 & \%Abs. & R1 & Abs. & R1 \\ \hline LED & 0.557 & 0.386 & 0.252 & 0.233 & 0.259 \\ PRIMERA & 0.581 & 0.336 & 0.251 & 0.213 & 0.248 \\ T5 & 0.568 & 0.350 & 0.202 & 0.228 & 0.210 \\ Pegasus & 0.588 & 0.383 & 0.211 & 0.242 & 0.225 \\ \hline \hline \end{tabular} \end{table} Table 5: Systematic Review results with multiple-then-selected predictions. F1 is a macro-averaged F1 on the set of returned results. We abstain when no output matches the expected synthesis result. Abs. refers to Abstention, R1 to ROUGE1. Reference F1 is 0.577. of, e.g., summarizing multiple news articles covering the same event. But here we are interested in the more challenging setting in which the output should reflect an aggregate measure of potentially conflicting evidence or opinions. 
**Interpretation and analysis of neural models for NLP** This work is also related to the emerging body of work on analyzing neural NLP models, their behaviors, "knowledge", and "abilities" in general e.g., (Linzen et al., 2016; Tenney et al., 2019; Petroni et al., 2019; Niven and Kao, 2019; Meng et al., 2022). There has been some work specifically on analyzing neural summarization models. (Xu et al., 2020) investigated when a model is likely to extract (copy) rather than abstract (generate). (Xu and Durrett, 2021) furthered this analysis by assessing when models were relying on the local input to produce particular output tokens, and when they instead rely mostly on a background language distribution acquired in pre-training. **Factuality of neural summarizers** Neural conditional generation models have proven adept at producing fluent outputs, but when summarizing they are prone to _hallucinating_ content unsupported by input documents (Maynez et al., 2020; Kryscinski et al., 2019). Automated metrics such as ROUGE do not reliably capture such phenomena (Falke et al., 2019; Maynez et al., 2020). This has motivated several efforts to design automated factuality metrics; see (Pagnoni et al., 2021) for an overview. ## 7 Conclusions We have outlined and investigated the problem of _synthesis_ as related to some summarization tasks. We showed that existing models are partially able to synthesize implicitly, but do so imperfectly: For instance, the aggregation they perform is sensitive to input ordering, and they are not as sensitive to perturbations in the composition of inputs as one would hope. We proposed and validated a straightforward inference time method to improve model synthesis capabilities by preferentially outputting summary candidates that align with a predicted aggregate measure, and demonstrated empirically that this offers gains in performance. We hope this work encourages additional research into summarization models that explicitly optimize to accurately synthesize potentially conflicting evidence. Figure 6: Differences relative to human summaries under vanilla decoding and the proposed generate-diverse then select strategy on movie meta-reviews. We report Pearson’s r and \(R^{2}\) as measures of synthesis “calibration”. Vanilla decoding yields synthesis performance worse than humans, but explicitly considering synthesis at inference time results in performance comparable to and sometimes better than human summaries (as best we can measure). Figure 7: Distributions of outputs for the candiate summaries. **Movie reviews** (left) show a histogram for the range of differences between lowest and highest output sentiments. **Systematic reviews** (right) show histograms of the fractions of outputs reporting _significant_ results. ### Limitations This work investigates a narrow property in the realm of multi-document summarization. It focuses solely on sentiment as a measure of synthesis for movie meta-reviews, and automatically extracted effect findings for biomedical systematic reviews. While both of these measures are _important_ to understanding synthesis in these domains, they are not complete: neither measure covers topicality, fluency, or any other measure of quality. Our ability to measure the phenomenon of interest is limited by the quality of our classifiers and our annotation efforts; this may fail in more subtle cases. 
Though we have made an extensive effort to fine-tune several popular summarization models, we are limited to transformer-based models of relatively modest size (due to the GPU memory required to train long sequence summarization models). Conceivably these behaviors may change as models scale in size, or with a different flavor of model architecture. While we believe these results are more due to model behaviors than the properties of any particular language (English), this has not been experimentally confirmed. ## Ethics Beyond limitations of our measurements, we caution against a naive deployment of the methods introduced in this work. The exact aspects of a synthesis depend deeply on the relevant domain, and without domain specific measures for all quality aspects, these solutions may fail in unexpected ways. In general, we caution against any _current_ deployment of automated synthesis technology without a human in the loop; evaluation of synthesis methodology is generally understudied and requires domain expertise to assess quality.
2309.05804
Hi Model, generating 'nice' instead of 'good' is not as bad as generating 'rice'! Towards Context and Semantic Infused Dialogue Generation Loss Function and Evaluation Metric
Over the past two decades, dialogue modeling has made significant strides, moving from simple rule-based responses to personalized and persuasive response generation. However, despite these advancements, the objective functions and evaluation metrics for dialogue generation have remained stagnant. These lexical-based metrics, e.g., cross-entropy and BLEU, have two key limitations: (a) word-to-word matching without semantic consideration: It assigns the same credit for failure to generate "nice" and "rice" for "good", (b) missing context attribute for evaluating the generated response: Even if a generated response is relevant to the ongoing dialogue context, it may still be penalized for not matching the gold utterance provided in the corpus. In this paper, we first investigate these limitations comprehensively and propose a new loss function called Semantic Infused Contextualized diaLogue (SemTextualLogue) loss function. We also formulate an evaluation metric called Dialuation, incorporating both context and semantic relevance. We experimented with both non-pretrained and pre-trained models on two dialogue corpora, encompassing task-oriented and open-domain scenarios. We found that the dialogue generation models trained with SemTextualLogueloss attained superior performance compared to the traditional cross-entropy loss function. The findings establish that the effective training of a dialogue generation model hinges significantly on incorporating semantics and context. This pattern is also mirrored in the introduced Dialuation metric, where the consideration of both context and semantics correlates more strongly with human evaluation compared to traditional metrics.
Abhisek Tiwari, Muhammed Sinan, Kaushik Roy, Amit Sheth, Sriparna Saha, Pushpak Bhattacharyya
2023-09-11T20:16:38Z
http://arxiv.org/abs/2309.05804v2
_Hi Model, generating "nice" instead of "good" is not as bad as generating "rice"!_ Towards Context and Semantic Infused Dialogue Generation Loss Function and Evaluation Metric ###### Abstract Over the past two decades, dialogue modeling has made significant strides, moving from simple rule-based responses to personalized and persuasive response generation. However, despite these advancements, the objective functions and evaluation metrics for dialogue generation have remained stagnant, i.e., cross-entropy and BLEU, respectively. These lexical-based metrics have the following key limitations: (a) _word-to-word matching without semantic consideration:_ they assign the same credit for failing to generate "nice" and "rice" in place of "good". (b) _missing context attribute for evaluating the generated response:_ even if a generated response is relevant to the ongoing dialogue context, it may still be penalized for not matching the gold utterance provided in the corpus. In this paper, we first investigate these limitations comprehensively and propose a new loss function called the Semantic Infused Contextualized diaLogue (_SemTextualLogue_) loss. Furthermore, we formulate a new evaluation metric called _Dialuation_, which incorporates both context relevance and semantic appropriateness while evaluating a generated response. We conducted experiments on two benchmark dialogue corpora, encompassing both task-oriented and open-domain scenarios. We found that the dialogue generation model trained with the _SemTextualLogue_ loss attained superior performance (in both quantitative and qualitative evaluation) compared to the traditional cross-entropy loss function across the datasets and evaluation metrics. ## 1 Introduction Building a human-like conversational agent has always been one of the primary goals of artificial intelligence Allen et al. (2001). Initially designed to aid humans, dialogue systems have now evolved to such a degree that they are even employed for casual conversation, fulfilling the human desire for social interaction. The progression from the rule-based ELIZA to advanced chatbots such as Alexa1 and ChatGPT2 clearly evidences the relevance and importance of building an adequate dialogue assistant. Over the past few years, there has been significant progress in the advancement of task-oriented dialogue assistants across several domains, even in sensitive areas such as healthcare Chen et al. (2017). The primary expectation from an adequate dialogue assistant is to provide an appropriate and contextually relevant response Valizadeh and Parde (2022). To encode this objective and to evaluate performance, a loss function and an evaluation metric are utilized. Thus, the loss function and evaluation metric are the backbone and soul of a learning framework. Footnote 1: [https://developer.amazon.com/alexa](https://developer.amazon.com/alexa) Footnote 2: [https://chat.openai.com/chat](https://chat.openai.com/chat) The most widely employed dialogue generation loss function is cross entropy (CE). The CE loss used in dialogue generation was borrowed from machine translation (MT) with the belief that the two tasks are identical. However, there are some substantial differences between the two tasks Hu et al. (2020): MT does not consider context, whereas context is a crucial aspect of dialogue generation. Furthermore, MT emphasizes lexical matching of the generated text against the reference target. The limitations caused by the discrepancies between the two tasks are demonstrated in Figure 1.
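To make limitation (a) concrete, the toy computation below (our illustration, not code from the paper) shows that token-level cross entropy assigns exactly the same loss whether a model concentrates its probability mass on the semantically close "nice" or on the unrelated "rice": only the probability assigned to the gold token "good" enters the loss, so the two failure modes are penalized identically.

```python
import torch
import torch.nn.functional as F

# Toy vocabulary; the gold next token is "good" (index 0).
vocab = ["good", "nice", "rice", "to", "see"]
target = torch.tensor([0])

# Two hypothetical model distributions over the next token:
# one puts most of its mass on "nice", the other on "rice".
logits_nice = torch.tensor([[1.0, 4.0, 0.5, 0.2, 0.2]])
logits_rice = torch.tensor([[1.0, 0.5, 4.0, 0.2, 0.2]])

# Cross entropy depends only on the probability of the gold token,
# so both predictions receive the same penalty, even though "nice"
# is a far better substitute for "good" than "rice".
print(F.cross_entropy(logits_nice, target).item())
print(F.cross_entropy(logits_rice, target).item())  # identical value
```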
The generation model with CE loss has a fixed output expectation, which means that even semantically relevant responses (\(y_{1}\)) are being unfairly punished to the same extent or even more so than completely useless responses (\(y_{2}\)). The loss for the third response (\(y_{3}\)) is much higher because it has no uni-gram matches with the ground truth response. However, from a human perspective, the generated response appears to be contextually relevant and aligned with the reference response. Our objective in AI-based models is to replicate the process of human learning. Since humans are the ultimate consumers and evaluators of these models, their perception of learning and evaluation is crucial. Recently, many dialogue generation works have discovered that word-based evaluation metrics do not strongly align with human judgment Sato et al. (2020). Humans consider a response appropriate and relevant if it conveys a similar meaning as expected in the context, rather than a word-to-word match. Based on this observation, we were curious to investigate the importance of semantic-based evaluation and context relevance for dialogue loss and evaluation functions. **Research Questions** The paper aims to investigate the following three research questions related to dialogue generation: **(i)** Can the addition of a semantic-based evaluation component to the lexical-based loss function provide more accurate feedback on generated responses and thus improve the overall quality of dialogue generation? **(ii)** Can incorporating context relevance evaluation in the loss function improve the model's ability to generate responses that are more appropriate and coherent to the discourse? **(iii)** Will integrating the semantic component into the lexical-based evaluation metrics in dialogue generation result in a better correlation with human judgment? **Key Contributions** To address this, we develop a new dialogue generation loss function that incorporates semantic and contextual appropriateness in addition to lexical matching. Furthermore, we formulate a new context-infused, semantic-aware dialogue generation evaluation metric to validate the effectiveness of the loss function and assess its correlation with human judgment. The key contributions are enumerated as follows: * We thoroughly examine, analyze, and present some of the major drawbacks of the existing dialogue loss functions and evaluation metrics. * Inspired by human judgment, we propose a new dialogue function called SemTextualLogue loss, which leverages semantic space to incorporate the relevance of generated response and its relevance to the ongoing discourse. * We formulate a new _dial_ogue generation _evaluation_ metric named _Dialution_, which incorporates semantic similarity and contextual relevance. * The proposed loss function archives state-of-the-art performances over multiple datasets and across different evaluation metrics, including human evaluation. Furthermore, the evaluation matrix was found to be more related to human judgment compared to the existing matrices such as BLEU and ROUGE. ## 2 Background The proposed work is relevant to the following three research areas: Dialogue generation, Dialogue loss functions, and Dialogue generation evaluation metrics. In the following paragraphs, we have summarized the relevant works and highlighted the research gap. **Dialogue Generation** Dialogue generation can be approached using two primary methods: modular Griol et al. 
(2008), and end-to-end Serban Figure 1: Illustration of the key limitation of cross entropy for dialogue generation. Some adequate responses (\(y_{1}\) and \(y_{2}\)) are equally or more penalized as useless response (\(y_{2}\)) et al., 2016). The latter approach, end-to-end dialogue modeling, has gained popularity in recent years as a result of the modular approach's high demand for annotated data. In the last few years, there have been three kinds of works carried out: knowledge-grounded dialogue generation Zhao et al. (2020), transfer-learning-based dialogue generation Golovanov et al. (2019), and multimodal dialogue generation Shen et al. (2021). In Li et al. (2017), the authors build a generative adversarial network (GAN) based dialogue generation framework. The framework involves a sequence-to-sequence model serving as the generator module and a reinforcement learning model acting as the discriminator. The generator generates responses, while the discriminator evaluates the distinguishability of the generated responses from the corpus and provides feedback to the generator accordingly. **Dialogue Generation Loss Functions** Table 1 illustrates all the existing dialogue loss functions and some of their key limitations. In Kovaleva et al. (2018), the authors have also considered generated words' semantic similarity with words of gold response in addition to word-to-word matching to mitigate the fixed target issue. However, it's worth noting that although they incorporated word-to-word semantics, there could be cases where different word arrangements have nearly identical meanings, such as with the phrases _Nice to see you_ and _I am happy to meet you_. The CE loss favors maximum likelihood, and thus it suffers from a lack of diversity. To tackle this problem, the researchers Ueyama and Kano (2020) devised an inverse n-gram frequency (INF) loss function, which is a weighted cross-entropy function based on n-gram frequency calculated from the entire corpus context. The INF loss function assigns weights to n-gram mismatches based on the inverse of their frequency, giving rare tokens more weight. This weighting mechanism results in more diverse responses, effectively addressing the issue of low diversity. **Evaluation Metrics** There are mainly two kinds of evaluation: automatic and human. All the existing popular automatic dialogue generation evaluation metrics are described in Table 2. The most utilized automatic evaluation metrics include BLEU, ROUGE, and METEOR. Despite dialogue being a contextual phenomenon, none of the metrics consider dialogue context for judging the relevance of the generated text. Consequently, many recent dialogue generation works and surveys Sato et al. (2020); Feng et al. (2021) on dialogue generation have reported a poor correlation between these metrics and human judgment. Most of them (other than BERT similarity) are based on the n-gram overlap principle. In Zhang et al. (2017), the authors proposed an embedding-based BERT semantic similarity evaluation metric, which computes cosine similarity between the BERT embeddings of generated text and expected response. ## 3 Proposed Methodology In a typical dialogue generation model, there are two segments: encoder and decoder. The former encodes dialogue context, and the latter generates a sequence of words as a response. In order to incorporate the proposed dialogue generation loss function easily into any generation framework, we added it on top of the decoder. 
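As a point of reference, here is a minimal training-step sketch (our assumption of a typical setup, not the authors' code) for such an encoder-decoder generator; the T5 checkpoint is a placeholder, and the learning rate follows the value reported later in the experimental setup. The standard cross-entropy term is isolated at the decoder output, which is exactly where a context- and semantics-aware loss can be swapped in.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")             # placeholder backbone
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optim = torch.optim.AdamW(model.parameters(), lr=3e-5)

def train_step(context: str, gold_response: str) -> float:
    enc = tok(context, return_tensors="pt", truncation=True)
    labels = tok(gold_response, return_tensors="pt", truncation=True).input_ids
    logits = model(**enc, labels=labels).logits              # decoder outputs
    # Standard objective: token-level cross entropy against the gold response.
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))
    # A context/semantics-aware objective would replace or reweight `loss` here.
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()
```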
The proposed semantic and context-infused loss function incorporated dialogue generation model is illustrated in Figure 2. It contains the following sub-modules: encoder-decoder (Dialogue generation), _Contain_, and SemTexLogue loss. The working of each of the sub-modules is explained and illustrated in the following sub-sections. Finally, we illustrate the formulation of our proposed semantic and context-guided dialogue evaluation metric called _Dialution_. ### Dialogue Generation The input sequence (context and current utterance) is first tokenized into a sequence of tokens, and each token is represented as a vector using an em \begin{table} \begin{tabular}{l l l l l} \hline **Task function** & **Midong** & **C** & **S** & **WK** \\ \hline Enc. Imagery (Beur et al., 2005) & normal probability distribution divergence & x & x & x \\ Focal. Luo (Wang et al., 2014) & brain frequency score entropy & x & x & x \\ IFW (Li et al. 2017) & brain frequency score entropy & x & x & x \\ Inception & N. Sp Spinkowski (Johnson et al., 2019) & graph frequency score entropy & x & x & x \\ FELG (Giang et al., 2019) & dynamic frequency score entropy & x & x & x \\ SRSR (Kenkova et al., 2018) & CE loss with word noise ensemble similarity & ✓ & x & x \\ \hline \end{tabular} \end{table} Table 1: Existing dialogue generation loss functions and their characteristics. Here C, S, and WK denote context, semantic, and world knowledge, respectively \begin{table} \begin{tabular}{l l l l l} \hline **Task function** & **Midong** & **C** & **S** & **WK** \\ \hline Enc. Inspired (Beur et al., 2005) & n-gram overlap & x & x & x \\ ROUGE, L. Luo (Wang et al., 2014) & n-gram overlap & x & x & x \\ METEOR (Huang et al., 2014) & n-gram overlap & x & x & x \\ BERT Similarity (Zhang et al., 2014) & n-gram overlap & x & x & x \\ Sentiment Distance (Wang et al., 2014) & n-gram overlap & x & x & x \\ Jaccard Similarity bedding layer. It computes an input embedding of each word of the input as follows: \[h^{i}_{T},h^{i}_{P}=TE(u_{i}),PE(u_{i}) \tag{1}\] \[\hat{u_{i}}=h^{i}_{T}+h^{i}_{P} \tag{2}\] where \(u_{i}\), \(h^{i}_{T}\), and \(h^{i}_{P}\) represent the \(ith\) word of the input, the word's token embedding, and its positional embedding, respectively. These embeddings are then fed into a multi-layered transformer encoder, which consists of several identical layers, each of which performs two main operations: multi-head self-attention and position-wise feed-forward networks. In the proposed dialogue generation framework, the decoder unit is also a transformer-based network. It takes the encoded representation (\(h_{e}\)) and generates a token at each time step as follows: \[\hat{y}_{t}=argmax_{i}P(V_{i}|y_{1},y_{2},...,y_{t-1},h_{e}) \tag{3}\] where \(V\) denotes vocabulary space and \(\hat{y}_{t}\) are the generated token at \(t^{th}\) time step. The final continuation of all generated tokens represents the generated output sequence (\(\hat{(y)}\)). ### Containc Score In traditional dialogue generation, the predicted probability distribution is compared with the actual output sequence, and entropy deviation is calculated. The deviation is back-propagated to the network, and the parameter gets adjusted accordingly. In order to incorporate semantic and contextual adequacy of the generated text, we added another component called _context and semantic_ based score called _Containc_. 
It considers two fundamental expectations of a dialogue response: contextual relevance and semantic adequacy, which are computed as follows: **Context Relevance** Given a context, there may be several suitable responses, so a fixed-output matching approach usually suffers from a low-diversity issue. Instead, assessing the relevance of a generated response in the given context and providing this feedback to the model guides the model towards appropriate and coherent responses. Thus, we calculate the relevance of the generated text for a context (\(con\)) as follows: \[CR=Cosine(e_{con},e_{gen}) \tag{4}\] \[e_{con}=BERT(<X_{1},Y_{1},X_{2},Y_{2},...,X_{t-1},Y_{t-1},X_{t}>) \tag{5}\] where \(e_{con}\) and \(e_{gen}\) are the representations of the dialogue context and the generated text, which are taken from BERT (Devlin et al., 2018). Here, the context is comprised of all previous utterances. Figure 2: Proposed architecture of semantic and context-reinforced dialogue generation. The encoder encodes the dialogue context and current utterance; the decoder generates output tokens autoregressively. Based on the generated output, the model calculates the context and semantic relevance score (_Containc_) and reinforces this feedback together with the traditional cross-entropy loss. **Semantic Similarity** In natural language, we can convey the same information in various ways, i.e., with different combinations of words. Thus, semantic evaluation is a crucial factor in judging the adequateness of a generated text. We calculate the semantic similarity (SS) between the gold response and the generated response as follows: \[SS=Cosine(e_{gold},e_{gen}) \tag{6}\] where \(e_{gold}\) and \(e_{gen}\) are semantic embedding representations of the gold response and the generated text, respectively. Finally, the _Containc_ score is computed as follows: \[Containc=\alpha\cdot CR+\beta\cdot SS \tag{7}\] where \(\alpha\) and \(\beta\) are hyperparameters. We experimented with two different combinations of the CE loss and _Containc_: (a) weighted cross-entropy and (b) _Containc_-reinforced dialogue generation, called SemTextualLogue, which are explained below. ### Weighted Cross Entropy We first experimented with the addition of _Containc_ and the CE loss, which performs equivalently to the plain CE loss. The reason is that the _Containc_ score is non-differentiable due to the argmax function involved, so the added component contributes zero gradient during backpropagation. To overcome this issue, we further experimented with the multiplication of these scores as a loss function. The weight parameters of the generation model are updated as follows: \[\begin{split} w_{new}&=w_{cur}-\alpha\frac{dL}{dw}\\ &=w_{cur}-\alpha\frac{d\,(1-\text{Containc})\,L_{CE}}{dw}\\ &=w_{cur}-\alpha\,(1-\text{Containc})\,\frac{dL_{CE}}{dw}\end{split} \tag{8}\] where \(L\) and \(\alpha\) denote the total loss and the learning rate, respectively. We multiplied (1-_Containc_) with the CE loss because, when the gold utterance and the generated text are semantically similar but lexically different, the component (1-_Containc_) is small, so the weight given to the CE loss is reduced. Conversely, when the _Containc_ score is low (the response is contextually and semantically less appropriate), this component is high, and the loss is prioritized accordingly. ### SemTextualLogue Loss Over the past few years, reinforcement learning has emerged as a highly effective approach for integrating key aspects into a model.
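Before developing that reinforcement-learning variant, note that the _Containc_ score of Eq. (7) itself is straightforward to compute. The sketch below is one plausible realization: mean-pooled BERT embeddings stand in for the pooling scheme the text leaves unspecified, and the default weights follow the \(\alpha=0.3\), \(\beta=0.7\) values reported for MultiWoz in the experimental setup.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Mean-pooled BERT embedding of a text (one possible pooling choice)."""
    inputs = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

def containc(context: str, generated: str, gold: str,
             alpha: float = 0.3, beta: float = 0.7) -> float:
    cos = torch.nn.functional.cosine_similarity
    cr = cos(embed(context), embed(generated), dim=0)  # context relevance, Eq. (4)
    ss = cos(embed(gold), embed(generated), dim=0)     # semantic similarity, Eq. (6)
    return (alpha * cr + beta * ss).item()             # Containc score, Eq. (7)
```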
Inspired by its effectiveness, we introduce a dialogue generation model based on reinforcement learning that reinforces contextualization and semantic behavior in addition to the conventional cross-entropy loss. We build a baseline estimator, which acts as a human perception evaluator and reinforces the feedback as a reward. The baseline estimator takes the output probability distribution from the generation model and computes its relevance. The relevance estimate is combined with the CE loss, and the final loss is computed. The loss calculation is explained below. \[L_{final}=\lambda\cdot L_{CE}+(1-\lambda)\cdot L_{RL}+\sigma\cdot L_{BSE} \tag{9}\] \[L_{RL}=(1-BSEscore)\cdot L_{CE} \tag{10}\] \[L_{CE}=-\sum_{j=0}^{j=n}p(y_{j})\log p(\hat{y_{j}}) \tag{11}\] \[L_{BSE}=MSE(BSEScore,ContanicScore) \tag{12}\] where \(L_{CE}\), \(L_{RL}\), and \(L_{BSE}\) are the CE loss, reinforcement learning loss, and baseline estimator loss, respectively. The term, \(MSE\) indicates mean squared error loss. Here, \(\lambda\) and \(\sigma\) (\(\lambda\), \(\sigma\) \(\in\) [0, 1]) are hyperparameters. In the CE loss equation, \(y\) and \(\hat{y}\) are true probability distributions and the influenced distribution of the gold response and the generated response, respectively. Here, \(n\) is the output sequence length. ### Dialution The evaluation metrics, BLEU, ROUGE, and METEOR, only emphasize word-level matching and overlook other crucial aspects of dialogue, such as context and semantics, resulting in limited correlation with human judgments Liu et al. (2016). One such example is illustrated in Figure 3. _Response 1_ is semantically very similar to the _gold response_, but _response 2_ is neither meaningful nor relevant to the context. Here, the automatic evaluation score and human evaluation are not in sync with each other. This is because the evaluation only looks at word matching, which overlooks the fact that words like "fantastic" and "superb" carry a similar connotation. To overcome the conflicts, we first propose a contextualized semantic-driven dialogue evaluation metric called _Dialuation_. _Dialuation_ is a weighted average of contextual relevance (CR) and semantic score (SS). It is determined as follows: \[Dialution=(\frac{\delta_{c}\cdot CR+\delta_{ss}\cdot SS}{\delta_{c}+\delta_{ss}}) \cdot 100 \tag{13}\] where \(\delta_{c}\) and \(\delta_{ss}\) (\(\in\) [0, 1]) are the hyperparameters, which signify the importance of contextual relevance and semantic similarity, respectively. The _Diluation_ score would lie between 0 to 100. ## 4 Experimental Setup We have utilized the PyTorch framework for implementing the proposed model. We have experimented with the two most widely used dialogue datasets: MultiWoz 2.2 [22], and PersonaChat [2]. The datasets' statistics are provided in Table 3. The train-validation-test ratios for both the datasets models were 8:1:1. We have considered a context window of 3, i.e., dialogue context consists of only the last three utterances. The final values for hyperparameters, which are determined empirically, are as follows: source length (256), target length (256), learning rate (3e-05), batch size (32), \(\alpha\) (0.3 for MultiWoz), (0.2 for PersonaChat) \(\beta\) (0.7), \(\sigma\) (1) and activation function (ReLU). ## 5 Result and Discussion We employed the most popular automatic evaluation metrics, namely BLEU, Rouge, and METEOR [23, 24, 25], to evaluate the generation quality with different loss functions. 
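For concreteness, the sketch below spells out the loss combination of Eqs. (9)-(12) and the Dialuation score of Eq. (13) as they might be implemented. The \(\lambda\) and \(\delta\) defaults are illustrative placeholders (the paper tunes \(\lambda\) and the \(\delta\) weights, and reports \(\sigma=1\)), and the baseline-estimator score is assumed to be a differentiable scalar derived from the decoder's output distribution.

```python
import torch
import torch.nn.functional as F

def semtextuallogue_loss(logits, target_ids, bse_score, containc_score,
                         lam: float = 0.5, sigma: float = 1.0):
    """SemTextualLogue loss, Eqs. (9)-(12).
    logits: (seq_len, vocab) decoder outputs; target_ids: (seq_len,) gold token ids;
    bse_score: differentiable scalar from the baseline estimator;
    containc_score: scalar tensor holding the Containc score of Eq. (7)."""
    l_ce = F.cross_entropy(logits, target_ids)               # Eq. (11)
    l_rl = (1.0 - bse_score) * l_ce                          # Eq. (10)
    l_bse = F.mse_loss(bse_score, containc_score)            # Eq. (12)
    return lam * l_ce + (1.0 - lam) * l_rl + sigma * l_bse   # Eq. (9)

def dialuation(e_context, e_generated, e_gold,
               delta_c: float = 0.5, delta_ss: float = 0.5) -> float:
    """Dialuation, Eq. (13): a weighted average of context relevance and
    semantic similarity between sentence embeddings, rescaled to [0, 100]."""
    cr = F.cosine_similarity(e_context, e_generated, dim=0)
    ss = F.cosine_similarity(e_gold, e_generated, dim=0)
    return float((delta_c * cr + delta_ss * ss) / (delta_c + delta_ss) * 100.0)
```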
**Baselines and Results** To make the model generic, which could be applied to any dialogue setting, we utilize only dialogue context, i.e., no additional semantic information such as intent, slot, and belief state. Thus, we compared the model with our baselines, and traditional CE loss utilized state-of-the-art model that employs only dialogue context for response generation. The performances of the dialogue generation model with different loss functions on the MultiWoz and PersonChat datasets are reported in Table 4 and Table 5. Table 6 and Table 7 summarize the performance of these loss functions in terms of BERT score and the newly introduced evaluation metric called _Diluation_. All the reported values in the following tables are statistically significant, which are validated using the statistical t-test [23] at a significant level of 5%. **Human Evaluation** To rule out the possibility of under informative assessment carried out by automatic metrics, we conducted the human evaluation of 150 test samples from each dataset. In this assessment, three researchers (other than the authors) were employed to evaluate the generated responses (50 samples of each model) of different models without revealing their names. The samples are assessed based on the following five metrics: _adequacy, fluency, coherence, naturalness, and completeness_ on a scale of 0 to 5. The obtained scores for both datasets are provided in Table 8 and Table 9. **Findings and Observations** Based on the experimental findings, we report the following answers (with evidence) to our investigated research questions (RQs). **RQ 1: Can the addition of a semantic-based evaluation component to the lexical-based loss function provide more accurate feedback on Figure 3: One example demonstrating the significance of context and sentence semantics for evaluating dialogue responses. \begin{table} \begin{tabular}{l l l} \hline \hline **Entity** & **MultiWoz 2.2** & **PersonaChat** \\ \hline nature & Task-oriented & Chit-Chat \\ \# of dialogues & 9575 & 8938 \\ \# of utterances & 71,514 & 65,719 \\ \# of unique words & 25,714 & 18,417 \\ avg dialogue length & 7.47 & 7.35 \\ \hline \hline \end{tabular} \end{table} Table 3: Statistics of MultiWoz 2.2 and PersonaChat dialogue datasets generated responses and thus improve dialogue generation quality?** The performances of models with traditional CE and proposed Semantic Reinforcement and SemTexLogue loss functions are reported in Table 4, Table 5 (in terms of traditional evaluation metrics), Table 6, and Table 7 (in terms of BERT and Dialuation scores). We can see a significant improvement across different metrics on the datasets (CE vs Semantic Reinforcement): MultiWoz (BERT: +1.86, _Dialution_: 0.42, ROUGE-L: +1.38, and METEOR: +1.48), PersonaChat (BERT: +1.65, _Dialution_: +3.30 ROUGE-L: +0.37, and METEOR: +0.53). Moreover, we also observed a significant enhancement in human evaluation. These improvements firmly establish that there is a role of semantic evaluation infusion in the loss function. RQ 2: Can incorporating context relevance evaluation in the loss function improve the model's ability to generate more appropriate and coherent responses to the discourse?We observed some small improvements across the various evaluation metrics when we utilized dialogue context relevance in the loss function modeling (Table 4: Semantic Reinforcement vs SemTexLogue Loss; Table 5: Semantic Reinforcement vs SemTexLogue Loss). 
Similar behavior has also been found in embedding-based evaluation (Tables 6 and 7). Thus, our findings support the hypothesis that the inclusion of context can provide additional feedback to the dialogue generation model about the adequateness of generated response, and hence it can lead to enhancement in generation quality. RQ 3: Will integrating the semantic and contextual components to the lexical-based evaluation metrics in dialogue generation result in better correlation with human judgment?We found that the performance of the _SemTexualLogue_ on the Multiwoz dataset is very close to baselines in terms of BLEU; however, the model significantly outperforms others in human evaluation. A similar notation as human evaluation is being reflected in our newly introduced loss function, _Dialuation_. We found many cases where a response was very relevant but did not match with the gold utterance; thus, both _Dialuation_ and human score were high despite low BLEU score. One such instance is as follows: _context_: Hi,...lets watch a new movie, _generated_: I prefer some new web-series, _gold_: lets go! We can watch it. It firmly shows that the _Diluation_, which considers both semantic and context relevance, is more aligned with human evaluation than any other metrics, including the BERT score. ## 6 Case Study and Analysis We have analyzed the models' performances for common test cases, and a few samples are shown in Table 10. The comprehensive analyses of the \begin{table} \begin{tabular}{l c c} \hline \hline **Model** & **BERT Score** & **Dialution** \\ \hline CE (Shi et al., 2021) & 57.58 & 51.43 \\ Weighted Semantic CE & 57.91 & 51.22 \\ Weighted Semantic and Context CE & 57.94 & 51.27 \\ Semantic Reinforcement & 58.36 & 51.85 \\ SemTextualLogue & 58.83 & 52.38 \\ \hline \hline \end{tabular} \end{table} Table 6: Vector-embedding based evaluation result on Multiwoz dataset \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline **Model** & **BLEU-1** & **BLEU-2** & **BELU-3** & **BELU-4** & **BLEU** & **ROUGE - 1** & **ROUGE - 2** & **ROUGE- L** & **METEOR** \\ \hline CE loss (Shi et al., 2021) & 32.35 & 12.47 & 7.46 & 4.49 & 10.78 & 27.77 & 32.59 & 13.15 & 31.11 \\ Weighted Semantic CE & 34.16 & 13.58 & 8.27 & 5.07 & 11.55 & 27.64 & 32.38 & 13.49 & 31.02 \\ Weighted Semantic and context CE & 33.99 & 13.63 & 8.34 & 5.13 & 11.87 & 28.13 & 32.79 & 13.75 & 31.39 \\ Semantic Reinforcement & 35.19 & 13.97 & 8.47 & 5.19 & 12.02 & 28.39 & 33.44 & 13.92 & 31.97 \\ SemTextualLogue & 33.64 & 13.06 & 7.73 & 4.60 & 11.18 & 28.56 & 33.43 & 13.64 & 31.95 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance of the dialogue generation framework with different loss functions on MultiWoz dataset \begin{table} \begin{tabular}{l c c} \hline \hline **Model** & **BERT Score** & **Dialution** \\ \hline CE (Shi et al., 2021) & 32.72 & 26.64 \\ Weighted Semantic CE & 32.84 & 28.50 \\ Weighted Semantic and Context CE & 34.05 & 28.82 \\ Semantic Reinforcement & 33.35 & 26.52 \\ SemTextualLogue & 34.37 & 29.94 \\ \hline \hline \end{tabular} \end{table} Table 7: Vector-embedding based evaluation result on PersonaChat dataset \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline **Model** & **BLEU-1** & **BLEU-2** & **BELU-3** & **BELU-4** & **BLEU** & **ROUGE - 1** & **ROUGE - 2** & **ROUGE- L** & **METEOR** \\ \hline CE loss (Shi et al., 2021) & 17.72 & 3.91 & 1.08 & 0.31 & 2.21 & 17.07 & 4.41 & 16.72 & 12.73 \\ Weighted Semantic CE & 19.42 & 4.18 & 1.12 & 0.28 & 2.23 & 17.40 & 4.27 & 
16.77 & 14.14 \\ Weighted Semantic and Context CE & 19.69 & 4.19 & 1.16 & 0.37 & 2.30 & 17.47 & 4.40 & 17.15 & 12.96 \\ Semantic Reinforcement & 18.85 & 4.33 & 1.16 & 0.33 & 2.36 & 15.96 & 4.18 & 15.63 & 13.52 \\ SemTextualLogue & 20.17 & 4.41 & 1.19 & 0.36 & 2.37 & 17.44 & 4.37 & 17.09 & 13.26 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance of the dialogue generation framework with different loss functions on the PersonaChat dataset

performances of different models lead to the following key observations: **(i)** Due to the incorporation of semantics and context, our model generates more contextualized and consistent responses, as shown in Table 10. **(ii)** The number of synonyms per word is very limited in the MultiWoz dataset's vocabulary space, so the influence of semantic infusion is comparatively smaller than the gain observed on the PersonaChat dataset. **(iii)** In some cases, more often on the PersonaChat dataset, the proposed model repeats some words (persona entities) in its responses, primarily because they occur more frequently in the corpora.

## 7 Conclusion and Future Work

The core of a learning framework is its objective function and evaluation metrics, which are used to train the underlying task and assess its performance. Cross entropy (CE) and BLEU are the most commonly employed loss function and evaluation metric for dialogue generation, but both suffer from fixed-target comparison. To address this issue, we propose a semantic-infused contextualized dialogue (_SemTextualLogue_) loss function. We formulate two variations of the loss function to investigate the effect of infusing semantics and context: (a) weighted cross entropy and (b) the reinforced SemTextualLogue loss. Moreover, we introduce a new dialogue evaluation metric called _Dialuation_, which also considers the dialogue context in addition to the gold text when assessing the relevance of a generated response. We experimented with both kinds of dialogue corpora, namely task-oriented and chit-chat. The proposed _SemTextualLogue_ loss obtained superior performance on both datasets across various evaluation metrics, including human evaluation. The obtained improvements and analyses firmly establish the efficacy of dialogue context and semantic evaluation for dialogue generation loss functions. Consequently, we found a strong correlation between human judgment and the _Dialuation_ metric. When we evaluate a response, we implicitly use global knowledge in addition to the context, and thus an evaluation by a child and an evaluation by an experienced individual differ. In the future, we would like to investigate the role of external knowledge in developing an appropriate loss function.

Table 10: Case study samples: dialogue contexts from the MultiWoz dataset (e.g., restaurant/taxi/attraction and train/hotel domains, with the user requesting a moderately priced hotel with free parking and the system proposing the Acorn Guest House) together with the responses generated by models trained with the different loss functions.

## 8 Limitations

Despite the significant improvement demonstrated by the proposed framework with the _SemTextualLogue_ loss, we also observed some weaknesses and limitations. These are as follows: **(i)** The work is limited in scope to experiments with non-pre-trained models. However, we believe pre-training mainly provides better grounding of the context and thus more appropriate responses; it has little to do with the loss function and evaluation metric themselves. We aim to investigate the efficacy of the loss function and evaluation metric with current state-of-the-art LLMs in the future. **(ii)** The coherence between a response and the dialogue history is influenced by the nature of the dialogue. In task-oriented dialogues, each response is typically more closely aligned with the context than in chit-chat conversations. Consequently, the coefficient (\(\alpha\)) for incorporating context in the loss function varies accordingly. As a result, we identified two different values that proved effective for these two settings. However, our experiments were conducted on only two datasets; the most appropriate values for these coefficients can be determined through experimentation on more datasets. **(iii)** Despite employing a contextual semantic vector for sentence representation, the approach falls short in capturing the similarity between two sentences when one of them contains a negation or an antonym yet conveys a similar meaning.
2305.19646
Part 1 of Martin's Conjecture for order-preserving and measure-preserving functions
Martin's Conjecture is a proposed classification of the definable functions on the Turing degrees. It is usually divided into two parts, the first of which classifies functions which are not above the identity and the second of which classifies functions which are above the identity. Slaman and Steel proved the second part of the conjecture for Borel functions which are order-preserving (i.e. which preserve Turing reducibility). We prove the first part of the conjecture for all order-preserving functions. We do this by introducing a class of functions on the Turing degrees which we call "measure-preserving" and proving that part 1 of Martin's Conjecture holds for all measure-preserving functions and also that all non-trivial order-preserving functions are measure-preserving. Our result on measure-preserving functions has several other consequences for Martin's Conjecture, including an equivalence between part 1 of the conjecture and a statement about the structure of the Rudin-Keisler order on ultrafilters on the Turing degrees.
Patrick Lutz, Benjamin Siskind
2023-05-31T08:23:26Z
http://arxiv.org/abs/2305.19646v3
# Part 1 of Martin's Conjecture for order-preserving and measure-preserving functions ###### Abstract Martin's Conjecture is a proposed classification of the definable functions on the Turing degrees. It is usually divided into two parts, the first classifies functions which are _not_ above the identity and the second of classifies functions which are above the identity. Slaman and Steel proved the second part of the conjecture for Borel functions which are order-preserving (i.e. which preserve Turing reducibility). We prove the first part of the conjecture for all order-preserving functions. We do this by introducing a class of functions on the Turing degrees which we call "measure-preserving" and proving that part 1 of Martin's Conjecture holds for all measure-preserving functions and also that all non-trivial order-preserving functions are measure-preserving. Our result on measure-preserving functions has several other consequences for Martin's Conjecture, including an equivalence between part 1 of the conjecture and a statement about the structure of the Rudin-Keisler order on ultrafilters on the Turing degrees. ###### Contents * 1 Introduction * 1.1 Statement of Martin's Conjecture * 1.2 Prior work on Martin's Conjecture * 1.3 Measure-preserving functions * 1.4 Technical Preliminaries * 1.5 Notation and conventions * 2 How to Prove Instances of Part 1 of Martin's Conjecture * 2.1 The basic strategy * 2.2 Finding pointed perfect trees * 2.3 Ordinal invariants * 3 Part 1 of Martin's Conjecture for Measure-Preserving Functions * 3.1 Proof of part 1 of Martin's Conjecture for measure-preserving functions * 3.2 An alternate proof that works in \(\mathsf{AD}+\mathsf{DC}_{\mathbb{R}}\) * 3.3 Application to part 2 of Martin's Conjecture * 4 Part 1 of Martin's Conjecture for Order-Preserving Functions * 4.1 A theorem on perfect sets * 4.2 Order-preserving functions are measure-preserving * 4.3 A proof for Borel functions that works in \(\mathsf{ZF}\) * 4.4 Application to the theory of locally countable Borel quasi-orders * 5 Ultrafilters on the Turing Degrees * 5.1 measure-preserving functions and the Martin measure * 5.2 The Rudin-Keisler order on ultrafilters on the Turing degrees 5.3 The Lebesgue and Baire ultrafilters * 5.4 Additional facts about the Martin measure and the Rudin-Keisler order * 6 Generalizations and Counterexamples * 6.1 Other degree structures * 6.2 Non-invariant functions * 6.3 Ideal-valued functions * 6.4 ZFC counterexamples * 7 Questions ## 1 Introduction Martin's Conjecture is a proposed classification of the definable functions on the Turing degrees, very roughly stating that every such function is either eventually constant, eventually equal to the identity function, or eventually a transfinite iterate of the Turing jump (see [19] for a survey). It is traditionally divided into two parts. The first states that every function is eventually constant or eventually above the identity; the second states that every function which is eventually above the identity is eventually equal to some transfinite iterate of the jump. The conjecture was introduced by Martin in the 1970s. It remains open, but several special cases have been proved by Martin, Lachlan [11], Steel [27], and Slaman and Steel [23]. In particular, Slaman and Steel proved that part 2 of the conjecture holds when restricted to Borel functions which are "order-preserving" (i.e. which preserve Turing reducibility). 
In this paper, we will prove that part 1 of the conjecture holds when restricted to order-preserving functions. When combined with Slaman and Steel's result, this almost completes the proof of Martin's Conjecture restricted to order-preserving functions. We will also prove that part 1 of the conjecture holds when restricted to a class of functions which we call "measure-preserving". This class of functions has been implicitly considered by Martin, but, to the best of our knowledge, has not been explicitly identified before. A central thesis of this paper is that this is a natural class of functions and that studying it provides useful insight into Martin's Conjecture. We will give two lines of evidence for this thesis. First, the class of measure-preserving functions has a few different equivalent characterizations in terms of concepts related to Martin's Conjecture. In particular, there is an ultrafilter on the Turing degrees known as the Martin measure, which is closely related to Martin's Conjecture and measure-preserving functions are exactly those functions which are measure-preserving for the Martin measure in the sense of ergodic theory. We will discuss this more thoroughly in section 5.1. Second, that part 1 of Martin's Conjecture holds for measure-preserving functions has several interesting consequences. * We will show that every order-preserving function is either constant on a cone or measure-preserving, so it implies part 1 of Martin's Conjecture for order-preserving functions (see section 4.2). * It implies a special case of part 2 of Martin's Conjecture (see section 3.3). * It implies that part 1 of Martin's Conjecture is equivalent to a statement about the structure of ultrafilters on the Turing degrees (see section 5.2). We will also show that the proof is quite general: it works in other degree structures (for example, for the arithmetic degrees and the hyperarithmetic degrees), for functions on \(2^{\omega}\) which are not required to be Turing-invariant (and thus do not induce a function on the Turing degrees) and for functions which take values in the set of all Turing ideals. For the rest of this introduction, we will explain the statement of Martin's Conjecture and mention some past work on it, give a definition of the class of measure-preserving functions on the Turing degrees, and provide background material necessary for some of our proofs. ### Statement of Martin's Conjecture Before we can give the formal statement of Martin's Conjecture, there are a few things we need to explain. First, a caveat: for technical reasons, the conjecture is usually stated in terms of Turing-invariant functions on \(2^{\omega}\) rather than functions on the Turing degrees. Second, we must explain what it means for two Turing-invariant functions to be "eventually equal" or for one to be "eventually above" the other. Third, the conjecture is false in \(\mathsf{ZFC}\) and is usually instead stated in the theory \(\mathsf{ZF}+\mathsf{AD}+\mathsf{DC}_{\mathbb{R}}\), which we will briefly introduce. #### Turing-invariant functions A function \(f\colon 2^{\omega}\to 2^{\omega}\) is **Turing-invariant** if for all \(x,y\in 2^{\omega}\), \[x\equiv_{T}y\implies f(x)\equiv_{T}f(y).\] The point is that any Turing-invariant function induces a function on the Turing degrees, but functions on the reals are easier to analyze from a descriptive set theoretic point of view. 
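As a quick illustration of the definition (an example added for concreteness, using the jump notation fixed in section 1.5), the Turing jump \(x\mapsto x^{\prime}\) is Turing-invariant, since computations relative to an oracle can be translated across Turing equivalent oracles: \[x\equiv_{T}y\implies x^{\prime}\equiv_{T}y^{\prime},\] so the jump induces a well-defined function on the Turing degrees. The identity function and the constant functions are trivially Turing-invariant.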
#### Eventual equality When we say that one Turing-invariant function is "eventually equal" to another we mean that they are Turing equivalent on a cone of Turing degrees and when we say that one Turing-invariant function is "eventually below" another we mean that the first is Turing reducible to the second on a cone. Here, a **cone of Turing degrees** (also sometimes just called a **cone**) is a set of the form \[\operatorname{Cone}(a)=\{x\in 2^{\omega}\mid x\geqslant_{T}a\}\] for some \(a\in 2^{\omega}\). Such a set is also called the **cone above**\(a\) and \(a\) is called the **base of the cone**. More formally, for Turing-invariant functions \(f,g\colon 2^{\omega}\to 2^{\omega}\), \(f\) is **equal to \(g\) on a cone** if for all \(x\) in some cone, \(f(x)\equiv_{T}g(x)\) (note that there is a slight abuse of terminology here since \(f\) and \(g\) are not literally equal on a cone, but merely Turing equivalent on a cone). Likewise, \(f\) is **below \(g\) on a cone** if for all \(x\) in some cone, \(f(x)\leqslant_{T}g(x)\). We will also say \(f\) is **constant on a cone** if it is equal to a constant function on a cone. If \(f\) is equal to \(g\) on a cone then we will write \(f\equiv_{M}g\) and say that they are **Martin equivalent**. Likewise, if \(f\) is below \(g\) on a cone, we will write \(f\leqslant_{M}g\) and say \(f\) is **Martin below \(g\)**. Note that \(\leqslant_{M}\) forms a quasi-order on Turing-invariant functions, sometimes called the **Martin order**. #### The Axiom of Determinacy We have said that Martin's Conjecture is stated in the theory \(\mathsf{ZF}+\mathsf{AD}+\mathsf{DC}_{\mathbb{R}}\). Here, \(\mathsf{DC}_{\mathbb{R}}\) denotes the Axiom of Dependent Choice on \(2^{\omega}\) and \(\mathsf{AD}\) denotes the Axiom of Determinacy. The Axiom of Determinacy is a strong axiom of set theory which is inconsistent with the Axiom of Choice but equiconsistent with a certain large cardinal principle (the existence of infinitely many Woodin cardinals [10]). One reason for stating Martin's Conjecture under \(\mathsf{AD}\) is that restricted versions of the axiom are true for various classes of definable sets under much weaker hypotheses. For example, Martin proved that a version of \(\mathsf{AD}\) for Borel sets (known as Borel Determinacy) is provable in \(\mathsf{ZF}\) and a version for \(\boldsymbol{\Pi}^{1}_{1}\) sets is provable from the existence of a measurable cardinal [18, 17]. Thus one might hope that a proof of Martin's Conjecture under \(\mathsf{AD}\) would yield a \(\mathsf{ZF}\) proof of Martin's Conjecture restricted to Borel functions and a proof for analytic functions assuming the existence of a measurable cardinal. Another key reason to use determinacy is that we have the following theorem, which makes it at all plausible that we might be able to classify Turing-invariant functions by their behavior on a cone. **Theorem 1.1** (\(\mathsf{ZF+AD}\); Martin's Cone Theorem [16]).: _Every set of Turing degrees either contains a cone or is disjoint from a cone._ It is often useful to restate this theorem in a different form. **Definition 1.2**.: A set \(A\subseteq 2^{\omega}\) is **cofinal in the Turing degrees** (also sometimes just **cofinal**) if for every \(a\in 2^{\omega}\) there is some \(x\in A\) such that \(a\leqslant_{T}x\). Note that a set of Turing degrees is cofinal if and only if its complement does _not_ contain a cone. Thus Martin's Cone Theorem is equivalent to the statement that every cofinal set of Turing degrees contains a cone. 
This form is useful because it means that to prove that some property holds on a cone, it is enough to prove that it holds cofinally. #### Formal statement of Martin's Conjecture We can now give the formal statement of Martin's Conjecture. **Conjecture** (Martin's Conjecture).: _Assuming \(\mathsf{ZF+AD+DC_{R}}\), both of the following hold:_ 1. _Every Turing-invariant function_ \(f\colon 2^{\omega}\to 2^{\omega}\) _is either constant on a cone or above the identity function on a cone._ 2. _The Martin order restricted to Turing-invariant functions which are above the identity on a cone is a prewellorder in which the successor of any function_ \(f\) _is the jump of_ \(f\) _(i.e. the function_ \(x\mapsto f(x)^{\prime}\)_)._ The second part of the conjecture can be interpreted as stating that every Turing-invariant function which is above the identity on a cone is equal to some transfinite iterate of the Turing jump on a cone, the idea being that a function with ordinal rank \(\alpha\) in the Martin order is the \(\alpha^{\text{th}}\) iterate of the Turing jump. ### Prior work on Martin's Conjecture We will now state a few special cases of Martin's Conjecture which are already known. First we must state some more definitions. **Definition 1.3**.: A Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) is: * **regressive** if for all \(x\), \(f(x)\leqslant_{T}x\) (i.e. \(f\) is below the identity). * **order-preserving** if for all \(x,y\in 2^{\omega}\) \[x\leqslant_{T}y\implies f(x)\leqslant_{T}f(y).\] * **uniformly invariant** (or uniformly Turing-invariant) if there is a function \(u\colon\mathbb{N}^{2}\to\mathbb{N}^{2}\) such that for all \(x,y\in 2^{\omega}\), if \(i\) and \(j\) are indices for Turing functionals witnessing that \(x\equiv_{T}y\)--i.e. \(\Phi_{i}(x)=y\) and \(\Phi_{j}(y)=x\)--then \(u(i,j)\) is a pair of indices for Turing functionals witnessing that \(f(x)\equiv_{T}f(y)\). **Theorem 1.4** (\(\mathsf{ZF+AD}\); Slaman and Steel [23]).: _Martin's Conjecture holds for all regressive functions--i.e. if \(f\colon 2^{\omega}\to 2^{\omega}\) is a regressive Turing-invariant function then either \(f\) is constant on a cone or \(f\) is above the identity on a cone._ **Theorem 1.5** (Slaman and Steel [23]).: _Part 2 of Martin's Conjecture holds for all Borel order-preserving functions \(f\colon 2^{\omega}\to 2^{\omega}\)._ **Theorem 1.6** (\(\mathbb{ZF}+\mathsf{AD}\); Steel [27], Slaman and Steel [23]).: _Martin's Conjecture holds for all uniformly Turing-invariant functions._ As we have mentioned, we will prove that part 1 of Martin's Conjecture holds for all order-preserving functions, complementing Slaman and Steel's result above. We will also prove that part 1 of Martin's Conjecture holds for all measure-preserving functions, a class of functions which we will now define. ### Measure-preserving functions A measure-preserving function is a function on the Turing degrees which eventually gets above every fixed degree (which you might also think of as a function which "goes to infinity in the limit"). This is made precise in the following definition. **Definition 1.7**.: A Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) is **measure-preserving** if for every \(a\in 2^{\omega}\), there is some \(b\in 2^{\omega}\) such that \[x\geqslant_{T}b\implies f(x)\geqslant_{T}a.\] In other words, for every \(a\), \(f\) is above \(a\) on a cone. 
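For a concrete example (a simple sanity check, not needed for what follows), the Turing jump is measure-preserving: given any \(a\), we may take \(b=a\), since \[x\geqslant_{T}a\implies x^{\prime}\geqslant_{T}x\geqslant_{T}a.\] By contrast, a function which is constant on a cone is not measure-preserving, since its value on that cone cannot be above every fixed degree.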
One of the earliest results on Martin's Conjecture is a proof by Martin that Martin's Conjecture holds for regressive measure-preserving functions. We will give this proof in section 2.2 as an example of some of the techniques we will use in our proof of part 1 of Martin's Conjecture for measure-preserving functions. Note that any function which is above the identity on a cone is measure-preserving. Thus, restricting to the class of measure-preserving functions does not change the statement of part 2 of Martin's Conjecture. #### Measure-preserving functions and the Martin order It is also possible to define the class of measure-preserving functions in terms of the Martin order. We omit the simple proof. **Proposition 1.8**.: _A Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) is measure-preserving if and only if \(f\) is Martin above every constant function._ This characterization of measure-preserving functions allows us to fit our result on part 1 of Martin's Conjecture for measure-preserving functions into a developing picture of what the Martin order looks like under \(\mathsf{AD}\). It is relatively easy to use determinacy to show that if a Turing-invariant function is Martin below a constant function then it must be constant on a cone. Thus the constant functions form an initial segment of the Martin order isomorphic to the partial order of the Turing degrees. This initial segment has a natural upper bound, the identity function. Martin's result on regressive measure-preserving functions shows that it is a minimal upper bound: any function which is below the identity but above every constant function must be equivalent to the identity. Our result on part 1 of Martin's Conjecture for measure-preserving functions shows that it is actually a _least_ upper bound: any function which is an upper bound for all the constant functions must be above the identity. Furthermore, Slaman and Steel's result on regressive functions shows that it is not strictly above any non-constant function. Thus our picture of the Martin order under \(\mathsf{AD}\) is as follows: there is an initial segment isomorphic to the Turing degrees, consisting of the constant functions. This initial segment has a least upper bound, the identity function, which additionally is not strictly above any non-constant function. The remaining case of part 1 of Martin's Conjecture is to rule out functions which are "off to the side" of the constant functions in the Martin order--for example, functions which are incomparable to all nonzero constant functions. This is illustrated in Figure 1.

Figure 1: A picture of what's known about the Martin order.

### Technical Preliminaries We will now review some technical material that we will need throughout the paper. #### Pointed perfect trees A key technical tool in a lot of work on Martin's Conjecture is the notion of a pointed perfect tree. **Definition 1.9**.: A **pointed perfect tree** is a perfect tree \(T\) such that for every \(x\in[T]\), \(T\leqslant_{T}x\). The key property of pointed perfect trees is that they contain a representative of every Turing degree in some cone. The idea is that if \(T\) is a perfect tree then any \(x\in 2^{\omega}\) can be thought of as describing a path through \(T\): at each branching point in \(T\) we use the next bit of \(x\) to decide whether the path should go left or right. Call the resulting path \(\widetilde{x}\). By construction, \(T\oplus x\) can compute \(\widetilde{x}\).
But \(T\oplus\widetilde{x}\) can also compute \(x\) by checking whether \(\widetilde{x}\) goes left or right at each branching point. If \(x\geqslant_{T}T\) and \(T\) is pointed (so \(\widetilde{x}\geqslant_{T}T\)), then \(x\equiv_{T}\widetilde{x}\). This is summarized by the following proposition. **Proposition 1.10**.: _If \(T\) is a pointed perfect tree, then for every \(x\in\operatorname{Cone}(T)\) there is some \(\widetilde{x}\in[T]\) such that \(x\equiv_{T}\widetilde{x}\)._ There is also a strengthening of Martin's cone theorem that works for arbitrary sets of reals rather than sets of Turing degrees and gives pointed perfect trees rather than cones. The proof is more or less identical to the proof of the cone theorem and is also due to Martin. **Theorem 1.11** (\(\mathsf{ZF+AD}\); [15]).: _If \(A\subseteq 2^{\omega}\) is cofinal in the Turing degrees then there is some pointed perfect tree \(T\) such that \([T]\subseteq A\)._ The lesson of this theorem is that, under \(\mathsf{AD}\), if you want to find a pointed perfect tree whose paths all have some property, then it is enough to find a cofinal set whose elements all have that property. #### Variants of the Axiom of Choice and the Axiom of Determinacy The Axiom of Determinacy is inconsistent with the Axiom of Choice, but it is consistent with several weak forms of the Axiom of Choice. We will need to use a few of these so we will review them here. The following axioms are listed in order of increasing logical strength. * **The Axiom of Countable Choice for reals, \(\mathsf{CC}_{\mathbb{R}}\):** This axiom states that every countable collection \(\{A_{n}\}_{n\in\omega}\) of nonempty subsets of \(2^{\omega}\) has a choice function. This is implied by \(\mathsf{AD}\). * **The Axiom of Dependent Choice for reals, \(\mathsf{DC}_{\mathbb{R}}\):** This axiom states that if \(R\) is a binary relation on \(2^{\omega}\) such that for every \(x\in 2^{\omega}\) there is some \(y\in 2^{\omega}\) for which \(R(x,y)\) holds then there is some countable sequence of reals \(\{a_{n}\}_{n\in\omega}\) such that for each \(n\), \(R(a_{n},a_{n+1})\) holds. Whether this is provable in \(\mathsf{ZF+AD}\) is open, but \(\mathsf{ZF+AD+DC}_{\mathbb{R}}\) is equiconsistent with \(\mathsf{ZF+AD}\). * **Uniformization for sets of reals, \(\mathsf{Uniformization}_{\mathbb{R}}\):** This axiom states that if \(R\) is a binary relation on \(2^{\omega}\) such that for each \(x\) there is some \(y\) for which \(R(x,y)\) holds then \(R\) can be uniformized--i.e. there is some function \(f\colon 2^{\omega}\to 2^{\omega}\) such that for each \(x\), \(R(x,f(x))\) holds. This is _not_ provable in \(\mathsf{ZF}+\mathsf{AD}\) and \(\mathsf{ZF}+\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\) is not equiconsistent with \(\mathsf{ZF}+\mathsf{AD}\), but the consistency of \(\mathsf{ZF}+\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\) is provable from sufficiently strong large cardinal principles. Martin's Conjecture is typically stated in the theory \(\mathsf{ZF}+\mathsf{AD}+\mathsf{DC}_{\mathbb{R}}\) but many results related to Martin's Conjecture only need \(\mathsf{CC}_{\mathbb{R}}\), not \(\mathsf{DC}_{\mathbb{R}}\) (and thus are provable in \(\mathsf{ZF}+\mathsf{AD}\)). In this paper we will sometimes use \(\mathsf{Uniformization}_{\mathbb{R}}\) and thus some of our results are proved in the theory \(\mathsf{ZF}+\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\).
For some of these results, the proof can be made to work with \(\mathsf{DC}_{\mathbb{R}}\) instead of \(\mathsf{Uniformization}_{\mathbb{R}}\) but for a few, this is not apparent (in particular, our results on ultrafilters on the Turing degrees discussed in section 5). Throughout the paper, we will also occasionally refer to the theory \(\mathsf{ZF}+\mathsf{AD}^{+}\). This theory is a strengthening of \(\mathsf{ZF}+\mathsf{AD}\) due to Woodin which does not imply \(\mathsf{Uniformization}_{\mathbb{R}}\) but which implies many consequences of \(\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\) (including \(\mathsf{DC}_{\mathbb{R}}\)). In particular, all of the results in this paper which are proved in the theory \(\mathsf{ZF}+\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\) can also be proved in the theory \(\mathsf{ZF}+\mathsf{AD}^{+}\). This may seem like an obscure technical point, but it is significant for the following reason. Unlike \(\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\), it follows from sufficiently strong large cardinal principles that \(\mathsf{AD}^{+}\) holds in \(L(\mathbb{R})\). Thus, assuming those same large cardinal principles, any instance of Martin's Conjecture that holds under \(\mathsf{AD}^{+}\) also holds for all functions in \(L(\mathbb{R})\) (which constitute a very generous notion of the class of "definable functions"). ### Notation and conventions A number of times in this paper we will need to go back and forth between a real and its Turing degree or a Turing-invariant function on the reals and the function on the Turing degrees that it induces. To help make such transitions clearer, we will use the following notation. * \(\mathcal{D}_{T}\) denotes the Turing degrees, \(2^{\omega}/\equiv_{T}\). * For \(x\in 2^{\omega}\), \(\deg_{T}(x)\) denotes the Turing degree of \(x\). * Lightface letters denote reals and boldface letters denote Turing degrees. E.g. \(a,b,x,y\) refer to elements of \(2^{\omega}\) and \(\boldsymbol{a},\boldsymbol{b},\boldsymbol{x},\boldsymbol{y}\) refer to elements of \(\mathcal{D}_{T}\). * If a lower case letter denotes a Turing-invariant function from \(2^{\omega}\) to \(2^{\omega}\), then the corresponding upper case letter denotes the function on the Turing degrees it induces. E.g. if \(f\colon 2^{\omega}\to 2^{\omega}\) is a Turing-invariant function then \(F\colon\mathcal{D}_{T}\to\mathcal{D}_{T}\) denotes the function \(F(\deg_{T}(x))=\deg_{T}(f(x))\). * \(id\) denotes the identity function on \(2^{\omega}\) and \(j\) denotes the Turing jump as a function on \(2^{\omega}\). Sometimes our slippage between Turing-invariant functions and functions on the Turing degrees will cause abuses of terminology. For example, we will often say that a Turing-invariant function is constant on a cone when it is really the induced function on the Turing degrees that is constant on a cone, and we will say that two Turing-invariant functions are equal on a cone when they are really just Turing equivalent on a cone. We will also use the following other conventions. * Unless explicitly stated otherwise, all results hold in \(\mathsf{ZF}\) and all results which are proved for all functions in \(\mathsf{ZF}+\mathsf{AD}\) hold for all Borel functions in \(\mathsf{ZF}\). * A **Turing functional** is a program with an oracle.
If \(\Phi\) is a Turing functional and \(x\in 2^{\omega}\) then \(\Phi(x)\) denotes the element of \(2^{\omega}\) computed by \(\Phi\) when using \(x\) as an oracle and \(\Phi(x,n)\) denotes the output of \(\Phi\) when using \(x\) as an oracle and when given input \(n\) (so \(\Phi(x)=n\mapsto\Phi(x,n)\)). * We will think of a Turing functional \(\Phi\) as a partial function on \(2^{\omega}\) defined by \(x\mapsto\Phi(x)\), where \(x\) is in the domain of the function whenever \(\Phi(x)\) is total. * We will assume we have a fixed computable enumeration \(\Phi_{0},\Phi_{1},\Phi_{2},\ldots\) of Turing functionals. * We will use \(\Phi(x,n)[m]\) to denote the program \(\Phi\) run with oracle \(x\) on input \(n\) for up to \(m\) steps and \(\Phi(\sigma)\) (where \(\sigma\in 2^{<\omega}\) is a finite binary string) to denote the result of running \(\Phi\) and using \(\sigma\) to answer oracle queries (and when \(\Phi\) asks a question about the oracle past the length of \(\sigma\), the program diverges). * If \(T\) is a tree and \(\sigma\) is a node in \(T\) then \(T_{\sigma}\) is the tree consisting of all nodes in \(T\) which are compatible with \(\sigma\), i.e. \(T_{\sigma}=\{\tau\in T\mid\tau\subseteq\sigma\text{ or }\sigma\subseteq\tau\}\). ## 2 How to Prove Instances of Part 1 of Martin's Conjecture In this section we will describe a strategy which can be used to prove instances of part 1 of Martin's Conjecture. In other words, a strategy for proving that a Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) is either constant on a cone or above the identity on a cone. We will use this strategy in section 3 to prove part 1 of Martin's Conjecture for measure-preserving functions. ### The basic strategy The main idea underlying our strategy is actually just the computability theory version of a basic topological fact. **Basic topological fact:** If \(f:X\to X\) is a continuous, injective function on a compact, Hausdorff space, then \(f\) has a continuous inverse \(f^{-1}:\operatorname{range}(f)\to X\). **Computability theory version:** If \(f:2^{\omega}\to 2^{\omega}\) is a computable, injective function, then for each \(x\), \(f(x)\) can compute \(x\). The point is that if a function on \(2^{\omega}\) is computable and injective then it is automatically above the identity. Hence one way to prove that a function \(f\) is above the identity is to find a computable, injective function \(g\) such that \(g\) is below \(f\). In practice, it is often not possible to find such a function which is defined on all of \(2^{\omega}\), so we will instead try to find one which is defined only on a pointed perfect tree. However, such functions are not necessarily above the identity. Instead, they satisfy the following weaker property. If \(T\) is a perfect tree and \(g\colon[T]\to 2^{\omega}\) is computable and injective then for each \(x\in[T]\), \(g(x)\oplus T\geqslant_{T}x\). In other words, \(g\) is only above the identity after joining with a constant. All this suggests the following strategy for proving that a Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) is above the identity on a cone: 1. Find a pointed perfect tree \(T\) and a computable, injective function \(g\colon[T]\to 2^{\omega}\) which is below \(f\). This shows that for all \(x\in[T]\), \(x\leqslant_{T}f(x)\oplus T\). 2. Show that for all \(x\) on a cone, \(f(x)\geqslant_{T}T\). 3. Put these two facts together to show that for all \(x\) on a cone, \(x\leqslant_{T}f(x)\). The third step is easy and we take care of it below.
If \(f\) is measure-preserving then the second step also follows immediately. Thus, in our proof of part 1 of Martin's Conjecture for measure-preserving functions, we don't need to worry about this step. In our proof of part 1 of Martin's Conjecture for order-preserving functions we will take care of it by showing that all non-trivial order-preserving functions are measure-preserving. This leaves us with the question of how to carry out the first step of the strategy: how can we find \(T\) and \(g\) with the necessary properties? In the next two sections we will introduce some techniques which can help answer this question. But first, we will prove the statement about computable, injective functions on perfect trees mentioned above and show formally that if \(f\) is measure-preserving and we can find a function \(g\) with the properties listed above then \(f\) is above the identity on a cone. **Lemma 2.1**.: _If \(T\) is a perfect tree and \(g\colon[T]\to 2^{\omega}\) is computable and injective then for each \(x\in[T]\),_ \[g(x)\oplus T\geq_{T}x.\] Proof.: Since \(g\) is computable, there is some Turing functional \(\Phi\) such that for all \(x\in[T]\), \(\Phi(x)\) is total and equal to \(g(x)\). So it suffices to prove that for all \(x\in[T]\), \(\Phi(x)\oplus T\geq_{T}x\). The main idea of the proof is just a routine application of compactness. First, we will give an algorithm to compute \(x\) given \(\Phi(x)\) and \(T\). Say we want to compute \(x\mathord{\upharpoonright}n\). For each \(\sigma\) in level \(n\) of \(T\) we do the following search (and we do all of these searches in parallel): Look for an \(m>n\) such that for all descendants \(\tau\) of \(\sigma\) on level \(m\) of \(T\), \(\Phi(\tau)[m]\) disagrees with \(\Phi(x)\mathord{\upharpoonright}m\). Once all but one of these searches have terminated, we output the remaining element of level \(n\) of \(T\) as our guess for \(x\mathord{\upharpoonright}n\). Hopefully it is clear that this search will never terminate for \(x\mathord{\upharpoonright}n\) (since on every level \(m>n\) there is a descendant of \(x\mathord{\upharpoonright}n\) in \(T\), namely \(x\mathord{\upharpoonright}m\), which will not make \(\Phi\) disagree with \(\Phi(x)\)). So all we really need to do is show that the search will terminate for every \(\sigma\) in level \(n\) of \(T\) which is not equal to \(x\mathord{\upharpoonright}n\). Suppose this is not the case and let \(\sigma\) be such a node in \(T\). Then by König's lemma we can find some \(y\in[T]\) extending \(\sigma\) such that for all \(m\), \(\Phi(y)[m]\) does not disagree with \(\Phi(x)\). However, we know that \(\Phi\) is total on \(y\) and injective on \([T]\), hence \(\Phi(y)\) and \(\Phi(x)\) must disagree somewhere, a contradiction. **Lemma 2.2**.: _Suppose \(f\colon 2^{\omega}\to 2^{\omega}\) is Turing-invariant and measure-preserving, \(T\) is a pointed perfect tree and \(g\colon[T]\to 2^{\omega}\) is computable, injective and below \(f\). Then for all \(x\) on a cone, \(f(x)\geq_{T}x\)._ Proof.: Since \(f\) is measure-preserving, there is some cone on which \(f(x)\) is always above \(T\). We may also assume that this cone is high enough that all its elements are above \(T\). Let \(x\) be any element of this cone and let \(\widetilde{x}\) be an element of \([T]\) in the same Turing degree as \(x\) (which must exist since \(T\) is a pointed perfect tree).
We can then calculate \[\widetilde{x}\leq_{T}g(\widetilde{x})\oplus T\leq_{T}f(\widetilde{x}),\] where the first inequality holds by the previous lemma and the second holds since \(g\) is below \(f\) and \(f(\widetilde{x})\geq_{T}T\). Since \(x\equiv_{T}\widetilde{x}\) and \(f\) is Turing-invariant, this implies that \(x\leq_{T}f(x)\). ### Finding pointed perfect trees In the previous section, we outlined a general strategy to prove that a function \(f\colon 2^{\omega}\to 2^{\omega}\) is above the identity. A key step involved finding a computable, injective function \(g\) below \(f\) which is defined on a pointed perfect tree. In this section we will discuss some lemmas which are useful for finding such a \(g\) and then give an example of using these lemmas to prove an instance of part 1 of Martin's Conjecture. #### Finding pointed perfect trees using determinacy Recall from the introduction the following theorem due to Martin, which is often useful for finding pointed perfect trees under \(\mathsf{AD}\). **Theorem 2.3** (\(\mathsf{ZF}+\mathsf{AD}\); Martin; [15], Lemma 3.5).: _Suppose \(A\subseteq 2^{\omega}\) is cofinal in the Turing degrees. Then there is a pointed perfect tree \(T\) such that \([T]\subseteq A\)._ The following consequence of this lemma is also useful. **Corollary 2.4** (\(\mathsf{ZF}+\mathsf{AD}\)).: _Suppose \(\langle A_{n}\rangle_{n\in\mathbb{N}}\) is a countable sequence of subsets of \(2^{\omega}\) such that \(\bigcup_{n}A_{n}\) is cofinal in the Turing degrees. Then there is some \(n\in\mathbb{N}\) and some pointed perfect tree \(T\) such that \([T]\subseteq A_{n}\)._ Proof.: It is enough to show that some \(A_{n}\) must be cofinal. Suppose not. So for each \(n\), there is some \(x_{n}\in 2^{\omega}\) such that \(A_{n}\) is disjoint from the cone above \(x_{n}\). But then any \(y\geqslant_{T}\bigoplus_{n}x_{n}\) cannot be in any of the \(A_{n}\)'s, contradicting the fact that \(\bigcup_{n}A_{n}\) is cofinal. There is a further very easy consequence of this corollary which has proved surprisingly useful. This consequence is not new and has often been used implicitly in research on Martin's Conjecture, but we have found it helpful to formulate it as an explicit principle. **Lemma 2.5** (\(\mathsf{ZF}+\mathsf{AD}\); Computable uniformization lemma).: _Suppose \(R\) is a binary relation on \(2^{\omega}\) such that both of the following hold._ * _The domain of_ \(R\) _is cofinal: for all_ \(a\) _there is some_ \(x\geqslant_{T}a\) _and some_ \(y\) _such that_ \((x,y)\in R\)_._ * \(R\) _is a subset of Turing reducibility: for every_ \((x,y)\in R\)_,_ \(x\geqslant_{T}y\)_._ _Then there is a pointed perfect tree \(T\) and a computable function \(f\colon[T]\to 2^{\omega}\) such that for all \(x\in[T]\), \((x,f(x))\in R\). In other words, \(f\) is a computable choice function for \(R\) on \([T]\)._ Proof.: For each \(n\in\mathbb{N}\), let \(A_{n}\) be the set of \(x\) such that \(\Phi_{n}(x)\) is total and \(R(x,\Phi_{n}(x))\) holds. For each \(x\) in the domain of \(R\), there must be some \(n\) for which this holds and thus \(\bigcup_{n}A_{n}=\operatorname{dom}(R)\) is cofinal. So by Corollary 2.4, there is some \(n\) and some pointed perfect tree \(T\) such that \([T]\subseteq A_{n}\). By construction, \(T\) and \(\Phi_{n}\) satisfy the conclusion of the lemma. Later, we will need the following corollary of this lemma, which also shows how it is typically used. The corollary says that any increasing function can be inverted by a computable function on a pointed perfect tree.
Note that the function \(f\) in the statement of the corollary is not required to be Turing-invariant. **Corollary 2.6**.: _If \(f\colon 2^{\omega}\to 2^{\omega}\) is a function such that \(f(x)\geqslant_{T}x\) for all \(x\) then there is a pointed perfect tree \(T\) and a computable function \(g\colon[T]\to 2^{\omega}\) which is a right inverse for \(f\) on \([T]\). That is, for all \(x\in[T]\), \(f(g(x))=x\)._ Proof.: Let \(R\) be the binary relation defined as follows. \[R(x,y)\iff x=f(y).\] Applying Lemma 2.5 to this relation gives us what we want. To show that we can apply the lemma, we need to check that \(R\) is a subset of Turing reducibility and that its domain is cofinal. The former is a consequence of the fact that \(f(y)\geqslant_{T}y\) for all \(y\). For the latter, consider any \(a\in 2^{\omega}\). We need to show that there is some \(x\geqslant_{T}a\) which is in the domain of \(R\). For this, we can just take \(x=f(a)\). #### Refining pointed perfect trees The next lemma is useful for building injective functions on pointed perfect trees. It is relatively well-known but we include a proof anyway for the sake of completeness. **Lemma 2.7** (Tree thinning lemma).: _If \(T\) is a pointed perfect tree and \(f\) is a computable function defined on \([T]\) then one of the following must hold:_ * _We can "thin out"_ \(T\) _to make_ \(f\) _injective: there is a pointed perfect tree_ \(S\) _such that_ \(S\subseteq T\) _and_ \(f\) _is injective on_ \([S]\)_._ * \(f\) _is constant on a large set: there is a node_ \(\sigma\) _in_ \(T\) _such that_ \(f\) _is constant on_ \([T_{\sigma}]\)_._ _In particular, \(f\) is either constant or injective on a pointed perfect subtree of \(T\)._ Proof.: The idea is basically the same as in Spector's construction of a minimal degree (i.e. Sacks forcing). Suppose that the second condition does not hold--i.e. that \(f\) is not constant on \([T_{\sigma}]\) for any \(\sigma\) in \(T\). We will show how to find a pointed perfect tree \(S\subseteq T\) on which \(f\) is injective. Let \(\Phi\) be a Turing functional such that for all \(x\in[T]\), \(\Phi(x)\) is total and agrees with \(f(x)\). We will define \(S\) in a series of stages. In stage \(0\), we let \(S_{0}\) consist of just the empty sequence (i.e. the root node of \(T\)). In stage \(n+1\) we have a finite tree \(S_{n}\subseteq T\) which we want to extend to \(S_{n+1}\) in a way that makes sure that every leaf in \(S_{n}\) has two incompatible extensions in \(S_{n+1}\) and \(\Phi\) is injective on the leaves of \(S_{n+1}\). This is actually pretty straightforward to do: for each leaf \(\sigma\) of \(S_{n}\) we know that \(\Phi\) is not constant on \([T_{\sigma}]\) so we can find descendants \(\tau_{1}\) and \(\tau_{2}\) of \(\sigma\) in \(T\) and an \(m\) such that \[\Phi(\tau_{1})[m]\text{ disagrees with }\Phi(\tau_{2})[m]\] (i.e. there is a place where they both converge and are not equal). Put these \(\tau_{1}\) and \(\tau_{2}\), along with all their ancestors, into \(S_{n+1}\). Now define \(S\) as the union of all the \(S_{n}\)'s. It is clear that if we construct \(S\) in this way then \(S\) is a perfect tree, \(S\subseteq T\), and \(\Phi\) (and hence \(f\)) is injective on \([S]\). To see that \(S\) is pointed, just note that the above process was computable in \(T\) and so \(S\leq_{T}T\). Since \([S]\subseteq[T]\) and each element of \([T]\) computes \(T\), we have that each element of \([S]\) computes \(T\) and hence also computes \(S\).
#### Example: Martin's Conjecture for regressive, measure-preserving functions We will now give an example of using the strategy outlined in the previous section, along with the lemmas above, to prove part 1 of Martin's Conjecture for some class of functions: namely, functions which are both regressive and measure-preserving. As we mentioned in the introduction, this result was first proved by Martin in the 1970s (though he didn't use the term "measure-preserving"). He didn't publish his proof, but a proof was included in a paper by Steel [27]. Later, Slaman and Steel proved that a modified version of this theorem is still true even in ZFC [23]. **Theorem 2.8** (\(\mathsf{ZF}+\mathsf{AD}\); Martin).: _Suppose \(f\colon 2^{\omega}\to 2^{\omega}\) is a Turing-invariant function which is both regressive and measure-preserving. Then for all \(x\) on a cone, \(f(x)\geq_{T}x\)._ Proof.: Since \(f\) is measure-preserving, Lemma 2.2 implies that it suffices to find a pointed perfect tree \(T\) and a computable, injective function \(g\colon[T]\to 2^{\omega}\) such that for all \(x\in[T]\), \(g(x)\leq_{T}f(x)\). The idea is that since \(f\) is regressive, we can just use \(g=f\). By applying the computable uniformization lemma (Lemma 2.5) to \(f\) itself (i.e. to the relation \(\{(x,y)\mid y=f(x)\}\)) we obtain a pointed perfect tree \(T\) such that \(f\) is computable on \([T]\). By the tree thinning lemma (Lemma 2.7), either \(f\) is constant on a pointed perfect subtree of \(T\) or \(f\) is injective on a pointed perfect subtree of \(T\). In the latter case, we are done. So it suffices to prove that \(f\) cannot be constant on any pointed perfect tree. Suppose that \(f\) is constant on a pointed perfect tree. Since a pointed perfect tree contains a representative of every Turing degree on a cone, this means that \(f\) is constant on a cone. But that contradicts the assumption that \(f\) is measure-preserving. ### Ordinal invariants Suppose that we have a Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) and we are trying to find a computable, injective function below \(f\) which is defined on a pointed perfect tree. Here's a naive way we might go about this. First, define a binary relation \(R\) on \(2^{\omega}\) by \[R(x,y)\iff y\leqslant_{T}x\text{ and }y\leqslant_{T}f(x).\] By the computable uniformization lemma applied to \(R\), we get a computable function \(g\) which is below \(f\). We might then try to apply the tree thinning lemma to find a pointed perfect tree on which \(g\) is injective. But there is one problem: if \(g\) is constant on a cone (or more generally, on a pointed perfect tree) then we cannot apply that lemma. One solution to this problem is to make the definition of \(R\) more restrictive to ensure that \(g\) cannot be constant on a cone. In this section, we will introduce an idea which can help do this. Briefly stated, here's the idea: identify some function \(\alpha\) from Turing degrees to ordinals and modify \(R\) to \[R(x,y)\iff y\leqslant_{T}x\text{ and }y\leqslant_{T}f(x)\text{ and }\alpha(x)=\alpha(y).\] Then when we apply the computable uniformization lemma to \(R\), we get a function \(g\) which is not only computable and below \(f\), but also preserves \(\alpha\). As long as \(\alpha\) is not constant on any cone, neither is \(g\). We will refer to such a function \(\alpha\) as an **ordinal invariant1**. 
Footnote 1: Such functions are often studied in work on Martin’s Conjecture and determinacy more generally, but the term “ordinal invariant” is not standard and our use of them is somewhat different from their usual role. Of course, to make this work we need to choose \(\alpha\) so that the domain of the relation \(R\) defined above is cofinal (otherwise we cannot use the computable uniformization lemma). But if we can do this, then the argument sketched above is valid and we will use it in section 3.2. In the remainder of this section, we will formally introduce ordinal invariants, note a few of their properties and formalize the argument sketched above. **Definition 2.9**.: An **ordinal invariant** is a Turing-invariant function \(\alpha\colon 2^{\omega}\to\mathbf{Ord}\) (where \(\mathbf{Ord}\) denotes the class of ordinals), i.e. a function \(\alpha\colon 2^{\omega}\to\mathbf{Ord}\) such that if \(x\equiv_{T}y\) then \(\alpha(x)=\alpha(y)\). **Example 2.10**.: The quintessential example of an ordinal invariant is the function \(x\mapsto\omega_{1}^{x}\), i.e. the function mapping a real \(x\) to the least ordinal with no presentation computable from \(x\). At first it may seem that the notion of an ordinal invariant is much too general to be interesting, but it turns out that, assuming determinacy, it is possible to prove quite a lot about them. For example, Martin has shown that, under \(\mathsf{ZF}+\mathsf{AD}\), the function \(x\mapsto\omega_{1}^{x}\) is the least nontrivial ordinal invariant: for every ordinal invariant \(\alpha\), either \(\alpha\) is constant on a cone, or \(\alpha(x)\geq\omega_{1}^{x}\) on a cone. Under \(\mathsf{AD}+\mathsf{DC}\) (or \(\mathsf{AD}^{+}\)), it is easy to show that the relation "\(\alpha(x)\leq\beta(x)\) on a cone" prewellorders the ordinal invariants. The theorem of Martin just mentioned implies that \(x\mapsto\omega_{1}^{x}\) has rank \(\omega_{1}\) in this prewellorder. Steel, in [27], has calculated the rank of a number of other ordinal invariants. It is also possible to show that every ordinal invariant is order-preserving on a cone. **Proposition 2.11** (\(\mathsf{ZF}+\mathsf{AD}\)).: _If \(\alpha\) is an ordinal invariant, then \(\alpha\) is order-preserving on a cone--i.e. for all \(x\) and \(y\) in some cone_ \[x\leqslant_{T}y\implies\alpha(x)\leq\alpha(y).\] Proof.: Define \(\alpha_{\min}\colon 2^{\omega}\to\mathbf{Ord}\) by \[\alpha_{\min}(x)=\min\{\alpha(y)\mid y\geqslant_{T}x\}.\] The claim that \(\alpha\) is order-preserving on a cone is equivalent to the claim that \(\alpha(x)=\alpha_{\min}(x)\) on a cone. For each \(x\), there is some \(y\geqslant_{T}x\) such that \(\alpha(y)=\alpha_{\min}(x)\) and hence \(\alpha(y)=\alpha_{\min}(y)\). In other words, \(\alpha(x)=\alpha_{\min}(x)\) holds cofinally. By determinacy, this means \(\alpha(x)=\alpha_{\min}(x)\) on a cone. **Lemma 2.12** (\(\mathsf{ZF+AD}\)).: _Suppose \(f\colon 2^{\omega}\to 2^{\omega}\) is a Turing-invariant, measure-preserving function and \(\alpha\colon 2^{\omega}\to\textbf{Ord}\) is an ordinal invariant such that \(\alpha\) is not constant on any cone and for cofinally many \(x\), there is some \(y\) such that_ 1. \(y\leqslant_{T}x\)_,_ 2. \(y\leqslant_{T}f(x)\)_, and_ 3. \(\alpha(x)=\alpha(y)\)_._ _Then \(f\) is above the identity on a cone._ Proof.: By Lemma 2.2, it suffices to show that there is a pointed perfect tree \(T\) and a computable, injective function \(g\colon[T]\to 2^{\omega}\) which is below \(f\). We will find \(g\) as described above. 
First, define a binary relation \(R\) by \[R(x,y)\iff y\leqslant_{T}x\text{ and }y\leqslant_{T}f(x)\text{ and }\alpha(x)=\alpha(y).\] Note that by our assumption about \(\alpha\), the domain of \(R\) is cofinal. Thus there is a pointed perfect tree \(T\) and a computable function \(g\colon[T]\to 2^{\omega}\) uniformizing \(R\) on \([T]\). In particular, for all \(x\in[T]\), \(g(x)\leqslant_{T}f(x)\) and \(\alpha(x)=\alpha(g(x))\). By the tree thinning lemma, \(g\) is either constant or injective on a pointed perfect subtree of \(T\). If \(g\) is injective on a pointed perfect tree then we are done. So it suffices to show that it is not constant on any pointed perfect subtree of \(T\). Suppose it was. Then on the set of paths \(x\) through this tree, \(\alpha(x)=\alpha(g(x))\) would also be constant. Since any pointed perfect tree contains a representative of every Turing degree on some cone, this would contradict our assumption that \(\alpha\) is not constant on any cone. ## 3 Part 1 of Martin's Conjecture for Measure-Preserving Functions In this section, we will prove part 1 of Martin's Conjecture for measure-preserving functions. Actually, we will give two proofs: first, a relatively straightforward proof that works in \(\mathsf{ZF+AD+Uniformization_{R}}\) and then a somewhat more complicated proof (a modification of the first proof) that works in \(\mathsf{ZF+AD+DC_{R}}\). Both proofs follow the basic strategy explained in section 2; the second proof also uses the idea of ordinal invariants from section 2.3. We will finish the section by giving an application of our result to part 2 of Martin's Conjecture. ### Proof of part 1 of Martin's Conjecture for measure-preserving functions We will now give our first proof of part 1 of Martin's Conjecture for measure-preserving functions.2 The proof follows the strategy outlined in section 2: given a measure-preserving function \(f\), we will find a function defined on a pointed perfect tree which is computable, injective and below \(f\). To do so, we will first associate to any measure-preserving function \(f\) a family of functions called **increasing moduli** for \(f\), which are essentially Skolem functions witnessing that \(f\) is measure-preserving. We will then show: Footnote 2: Assuming \(\mathsf{AD+Uniformization_{R}}\), though \(\mathsf{AD}^{+}\) would also suffice. 1. Every measure-preserving function \(f\) has an increasing modulus, \(g\). 2. Every such \(g\) has a computable right inverse, \(h\), defined on a pointed perfect tree. 3. Every such \(h\) is computable, injective and below \(f\). Thus \(h\) satisfies the properties required by Lemma 2.2 and so \(f\) is above the identity on a cone. The second and third steps of this proof are straightforward: to find \(h\), we will invoke Corollary 2.6 on finding right inverses for increasing functions; the fact that \(h\) is computable, injective and below \(f\) will follow fairly directly from the definition of "increasing modulus." Finding \(g\), however, is trickier. We will construct \(g\) using \(\mathsf{Uniformization}_{\mathbb{R}}\), which is not provable in \(\mathsf{ZF}+\mathsf{AD}\) (this is the only part of the proof that cannot be carried out in \(\mathsf{ZF}+\mathsf{AD}\)). We do not know if it is possible to prove that every measure-preserving function has a modulus in \(\mathsf{ZF}+\mathsf{AD}\). **Definition 3.1**.: Suppose \(f\colon 2^{\omega}\to 2^{\omega}\) is a measure-preserving function. 
A **modulus** for \(f\) is a function \(g\colon 2^{\omega}\to 2^{\omega}\) such that for all \(x\) and all \(y\geq_{T}g(x)\) we have \(f(y)\geq_{T}x\). Note that \(g\) is not required to be Turing-invariant. Here's the idea behind the definition of modulus. If \(f\) is measure-preserving then for all \(a\in 2^{\omega}\), there is some \(b\in 2^{\omega}\) such that on the cone above \(b\), \(f(x)\) is always above \(a\). A function \(g\) is a modulus for \(f\) if for each \(a\) we can take \(b=g(a)\). For convenience, we will only use moduli which are above the identity. We will call such a modulus an **increasing modulus**. **Definition 3.2**.: If \(f\colon 2^{\omega}\to 2^{\omega}\) is a measure-preserving function then a modulus \(g\) for \(f\) is an **increasing modulus** for \(f\) if for all \(x\), \(g(x)\geq_{T}x\). As mentioned above, it is not clear how to show in \(\mathsf{ZF}+\mathsf{AD}\) that every measure-preserving function has a modulus. However, this is easy to prove in \(\mathsf{ZF}+\mathsf{Uniformization}_{\mathbb{R}}\). **Lemma 3.3** (\(\mathsf{ZF}+\mathsf{Uniformization}_{\mathbb{R}}\)).: _If \(f\) is a measure-preserving function then \(f\) has an increasing modulus._ Proof.: Let \(R(x,y)\) be the binary relation defined by \[R(x,y)\iff x\leq_{T}y\text{ and }\forall z\geq_{T}y\,(f(z)\geq_{T}x).\] Since \(f\) is measure-preserving, we know that for each \(x\), the set \(\{y\mid R(x,y)\}\) is nonempty. Finding an increasing modulus for \(f\) just means finding a function \(g\) such that for each \(x\), \(R(x,g(x))\) holds--in other words, a function \(g\) which uniformizes \(R\). Since we are assuming \(\mathsf{Uniformization}_{\mathbb{R}}\), we are done. We can now prove the main theorem of this section. **Theorem 3.4** (\(\mathsf{ZF}+\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\)).: _If \(f\colon 2^{\omega}\to 2^{\omega}\) is a Turing-invariant, measure-preserving function then \(f\) is above the identity on a cone._ Proof.: Suppose \(f\) is a measure-preserving function. By Lemma 3.3 we can find an increasing modulus \(g\) for \(f\). By Corollary 2.6 we can invert \(g\) on a pointed perfect tree--that is, there is a pointed perfect tree \(T\) and a computable function \(h\) defined on \([T]\) such that for each \(x\in[T]\), \(g(h(x))=x\). Now let's review what we know about this \(h\). * Since \(h\) is a right inverse of \(g\) on \([T]\), \(h\) is injective on \([T]\). * \(h\) is below \(f\) on \([T]\): if \(x\in[T]\) then since \(g\) is a modulus for \(f\), \(f(g(h(x)))\) computes \(h(x)\). And since \(g(h(x))=x\), this just means that \(f(x)\) computes \(h(x)\). Thus by Lemma 2.2, \(f(x)\geq_{T}x\) on a cone. ### An alternate proof that works in \(\mathsf{AD}+\mathsf{DC}_{\mathbb{R}}\) In the previous section, we saw how to prove part 1 of Martin's Conjecture for measure-preserving functions in \(\mathsf{ZF}+\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\). In this section we will see how to modify the proof so that it works in the weaker theory \(\mathsf{ZF}+\mathsf{AD}+\mathsf{DC}_{\mathbb{R}}\).3 Recall that the reason we needed \(\mathsf{Uniformization}_{\mathbb{R}}\) in the previous section was to show that every measure-preserving function has a modulus. In this section we will get around that difficulty by using an ordinal invariant (see section 2.3) to approximate the role of the modulus in the previous proof.
Footnote 3: Though note that we really do use \(\mathsf{DC}_{\mathbb{R}}\) in the proof, rather than just \(\mathsf{CC}_{\mathbb{R}}\), in contrast to most proofs of statements related to Martin’s Conjecture. We will start by defining something called a "modulus sequence," which can be thought of as a countable fragment of a modulus function for \(f\). **Definition 3.5**.: Suppose \(f\colon 2^{\omega}\to 2^{\omega}\) is a measure-preserving function and \(x\in 2^{\omega}\). A **modulus sequence** for \(x\) is a sequence of reals \(x=x_{0}\leqslant_{T}x_{1}\leqslant_{T}x_{2}\leqslant_{T}\ldots\) which is increasing in the Turing degrees and such that for all \(n\in\mathbb{N}\) and all \(y\in 2^{\omega}\), \[y\geqslant_{T}x_{n+1}\implies f(y)\geqslant_{T}x_{n}.\] In other words, \(x_{1}\) is large enough that \(f\) is above \(x\) on the cone above \(x_{1}\), \(x_{2}\) is large enough that \(f\) is above \(x_{1}\) on the cone above \(x_{2}\), and so on. The idea is that if \(g\) is an increasing modulus for \(f\) then \(x,g(x),g(g(x)),\ldots\) is a modulus sequence for \(x\), but that even when \(f\) does not have a modulus, it still has modulus sequences. It is easy to see that the amount of choice required to prove that modulus sequences exist is much weaker than the amount of choice that seems to be required to prove that modulus functions exist. This is expressed by the following lemma. **Lemma 3.6** (\(\mathsf{ZF}+\mathsf{DC}_{\mathbb{R}}\)).: _Suppose \(f\colon 2^{\omega}\to 2^{\omega}\) is a Turing-invariant function which is measure-preserving. Then for all \(x\in 2^{\omega}\), there is a modulus sequence for \(x\)._ Proof.: Let \(x\) be an arbitrary real. Note that a sequence \(x_{0},x_{1},x_{2},\ldots\) is a modulus sequence for \(x\) as long as \(x_{0}=x\) and for each \(n\), \(x_{n+1}\) satisfies a certain condition with respect to \(x_{n}\). It is easy to see that since \(f\) is measure-preserving, no matter what \(x_{n}\) we have picked, there is some \(x_{n+1}\) which satisfies this condition with respect to it. Thus we can use \(\mathsf{DC}_{\mathbb{R}}\) to pick a modulus sequence for \(x\). We can now prove the theorem. Before we actually give the proof, let's briefly review how ordinal invariants can be used to carry out the general strategy from section 2. Recall that to prove that a measure-preserving function \(f\) is above the identity, it is enough to find a computable function \(g\) which is below \(f\) and which can be made injective on a pointed perfect tree. It is easy to use the computable uniformization theorem to find functions \(g\) which are computable and below \(f\). But it is hard to ensure that they are not just constant on a cone (and thus cannot be injective on any pointed perfect tree). However, if we can find an ordinal invariant, \(\alpha\), and a function \(g\) such that \(\alpha(x)=\alpha(g(x))\) for all \(x\) then as long as \(\alpha\) is not constant on a cone, \(g\) cannot be constant on a cone. Thus the goal of the proof below is to come up with an ordinal invariant \(\alpha\) which is not constant on a cone and for which we can find a computable function \(g\) which is both below \(f\) and preserves \(\alpha\). To do so, we will use Lemma 2.12, which gives a list of conditions on an ordinal invariant \(\alpha\) which are sufficient to guarantee that we can find such a \(g\). 
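As a quick sanity check on Definition 3.5 before giving the proof (an added example): if \(f\) is the Turing jump, then for any real \(x\) the constant sequence \(x,x,x,\ldots\) is already a modulus sequence for \(x\), since \[y\geqslant_{T}x\implies f(y)=y^{\prime}\geqslant_{T}y\geqslant_{T}x.\] For a general measure-preserving \(f\), the terms of a modulus sequence may of course need to climb in the Turing degrees.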
**Theorem 3.7** (\(\mathsf{ZF}+\mathsf{AD}+\mathsf{DC}_{\mathbb{R}}\)).: _If \(f\colon 2^{\omega}\to 2^{\omega}\) is a Turing-invariant, measure-preserving function then \(f\) is above the identity on a cone._ Proof.: First we will define our ordinal invariant. Let \(\alpha\colon 2^{\omega}\to\mathbf{Ord}\) be the function defined by \[\alpha(x)=\min\{\sup_{n\in\mathbb{N}}\omega_{1}^{x_{n}}\mid\langle x_{n} \rangle_{n}\text{ is a modulus sequence for }x\}.\] Observe that \(\alpha(x)\) is always a countable ordinal which is at least \(\omega_{1}^{x}\) and therefore \(\alpha(x)\) is not constant on any cone. By Lemma 2.12, it suffices to check that the following set \(A\) is cofinal: \[A=\{x\mid\exists y\,(y\leqslant_{T}x\text{ and }y\leqslant_{T}f(x)\text{ and }\alpha(y)=\alpha(x))\}.\] So let \(x\) be an arbitrary real and we will find an element of \(A\) which computes \(x\). To do so, let \(x=x_{0}\leqslant_{T}x_{1}\leqslant_{T}x_{2}\leqslant_{T}\ldots\) be a modulus sequence for \(x\) which witnesses the value of \(\alpha(x)\) (i.e. such that \(\alpha(x)=\sup_{n}\omega_{1}^{x_{n}}\)). We now claim that \(x_{1}\) is in \(A\), as witnessed by \(x\). By the definition of modulus sequence, it is clear that \(x\leqslant_{T}x_{1}\) and \(x\leqslant_{T}f(x_{1})\). So we just need to show that \(\alpha(x)=\alpha(x_{1})\). First observe that \(\alpha(x_{1})\) cannot be larger than \(\alpha(x)\) because \(x_{1},x_{2},x_{3},\ldots\) is a modulus sequence for \(x_{1}\) and so \[\alpha(x_{1})\leqslant\sup\{\omega_{1}^{x_{1}},\omega_{1}^{x_{2}},\ldots\}= \sup\{\omega_{1}^{x},\omega_{1}^{x_{1}},\omega_{1}^{x_{2}},\ldots\}=\alpha(x).\] Next observe that \(\alpha(x_{1})\) also cannot be smaller than \(\alpha(x)\) because if \(x_{1}=y_{0}\leqslant_{T}y_{1}\leqslant_{T}y_{2}\leqslant_{T}\ldots\) is a modulus sequence for \(x_{1}\) witnessing the value of \(\alpha(x_{1})\) then \(x,y_{0},y_{1},y_{2},\ldots\) is a modulus sequence for \(x\) and so \[\alpha(x)\leqslant\sup\{\omega_{1}^{x},\omega_{1}^{y_{0}},\omega_{1}^{y_{1}}, \ldots\}=\sup\{\omega_{1}^{y_{0}},\omega_{1}^{y_{1}},\omega_{1}^{y_{2}},\ldots \}=\alpha(x_{1}).\qed\] #### What about Borel Functions? In the previous section, we saw how to prove part 1 of Martin's Conjecture for measure-preserving functions using \(\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\). A careful examination of that proof shows that when we restrict to Borel functions, the proof requires \(\mathbf{\Pi}_{1}^{1}\) determinacy. And the reason that the proof requires \(\mathbf{\Pi}_{1}^{1}\) rather than Borel determinacy is similar to the reason that the proof for all functions required something more than just \(\mathsf{AD}\). In particular, to prove that every measure-preserving function \(f\) has a modulus, we had to uniformize the following relation: \[R(x,y)\iff x\leqslant_{T}y\text{ and }\forall z\geqslant_{T}y\,(f(z) \geqslant_{T}x).\] When \(f\) is Borel, this relation is \(\mathbf{\Pi}_{1}^{1}\) and so the Kondo-Addison theorem says that it has a \(\mathbf{\Pi}_{1}^{1}\) uniformization (but not necessarily a Borel uniformization). Thus every Borel measure-preserving function has a \(\mathbf{\Pi}_{1}^{1}\) modulus. Since the rest of the proof needs to apply determinacy to sets defined using the modulus, the fact that the modulus is only guaranteed to be \(\mathbf{\Pi}_{1}^{1}\) rather than Borel causes the proof to require \(\mathbf{\Pi}_{1}^{1}\) determinacy rather than Borel determinacy. 
Above, we saw how to prove part 1 of Martin's Conjecture for measure-preserving functions using just \(\mathsf{AD}+\mathsf{DC}_{\mathbb{R}}\) rather than \(\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\). In light of the above discussion, it is reasonable to ask whether this yields a proof for Borel functions that only requires Borel determinacy (and thus works in ZF). Somewhat surprisingly, the answer seems to be "no." That is, the proof in this section seems to use more than Borel determinacy even when the functions considered are Borel. This is because the definition of the ordinal invariant that we use in the proof is rather complicated and so the set that we need to apply determinacy to in the proof is not Borel. This is interesting in part because it somewhat contradicts the idea that proofs of Martin's Conjecture should only use determinacy in a "local" way (that is, the proof for Borel functions should only require Borel determinacy, and so on). It would be interesting to know whether part 1 of Martin's Conjecture for measure-preserving Borel functions can be proved using just Borel determinacy. ### Application to part 2 of Martin's Conjecture In this section we will show that our result on part 1 of Martin's Conjecture for measure-preserving functions can be applied to obtain a new result about part 2 of Martin's Conjecture. Recall that part 2 of Martin's Conjecture says that the Martin order is a prewellorder on Turing-invariant functions which are above the identity and that the successor in this prewellorder is given by the Turing jump. As a first step towards proving this, we could try to show that all functions above the identity are comparable in the Martin order. This would not show that the quotient of the Martin order by Martin equivalence is a well order, but it would at least show it's a linear order. We will not even show this, but we will show that if we have two Turing-invariant functions \(f\) and \(g\) which are above the identity and which satisfy an additional assumption then they are comparable in the Martin order. To understand the idea of the proof, consider the following way that one might try to show any two functions above the identity are comparable. Suppose we are given Turing-invariant functions \(f\) and \(g\) and want to show \(f\leqslant_{M}g\). Naively, we might try to "subtract" \(f\) from \(g\) and show that the resulting function is above the identity. That is, we might try to define a function \(h\) as follows. Given a real \(x\), we first try to find a \(y\) such that \(f(y)=x\) and then set \(h(x)=g(y)\). This \(h\) can be thought of as the "difference" between \(g\) and \(f\) because for the \(y\) used to define \(h(x)\), we have \(h(f(y))=g(y)\). If we could show that \(h\) is above the identity then it would be some indication that \(g\) is above \(f\) since their "difference" is "positive." There are a number of obvious problems with this strategy. First, there is no reason to expect that any \(h\) we define this way will be Turing-invariant, much less amenable to known techniques for proving functions are above the identity. And second, even if \(h\) is a Turing-invariant function above the identity, it is not clear that this really implies that \(g\) is above \(f\) on a cone (for example, the \(y\)'s we use to define \(h\) could all lie outside of some cone). The key insight of the proof below is that if \(f\) and \(g\) satisfy a certain additional condition then all these problems disappear. 
**Theorem 3.8** (\(\mathsf{ZF}+\mathsf{AD}+\mathsf{DC}_{\mathbb{R}}\)).: _Suppose \(f,g\colon 2^{\omega}\to 2^{\omega}\) are Turing-invariant functions which are above the identity and such that for all \(x,y\in 2^{\omega}\),_ \[f(x)\equiv_{T}f(y)\implies g(x)\equiv_{T}g(y).\] _Then \(f\leqslant_{M}g\)._ Proof.: We want to define a function \(h\) in the following way: given any real \(x\), find some real \(y\) such that \(f(y)\equiv_{T}x\) and then define \(h(x)=g(y)\). However, it may not be immediately obvious how to formally define \(h\). We can do so as follows. First, note that since \(f\) is above the identity, we can apply Corollary 2.6 to find a pointed perfect tree, \(T\), and a (possibly non Turing-invariant) right inverse \(k\) for \(f\) defined on \([T]\). Next, recall that in section 1.4 we described a function \(x\mapsto\widetilde{x}\) that takes any \(x\in\mathrm{Cone}(T)\) to a Turing equivalent element of \([T]\). We can then formally define \(h\colon\mathrm{Cone}(T)\to 2^{\omega}\) by \[h(x)=g(k(\widetilde{x})).\] However, it is probably easier to think of \(h\) as defined by the informal procedure described above. Figure 2 below shows how \(h\) is defined.

Figure 2: \(h(x)\) is defined by finding some \(y\) such that \(f(y)=x\) and then setting \(h(x)=g(y)\). The real \(y\) can be found using \(k\).

First, we will show that \(h\) is Turing-invariant. Suppose \(x_{1}\equiv_{T}x_{2}\) are in the domain of \(h\). By definition of \(h\), there are reals \(y_{1}\) and \(y_{2}\) such that \(f(y_{1})\equiv_{T}x_{1}\), \(f(y_{2})\equiv_{T}x_{2}\), \(h(x_{1})=g(y_{1})\) and \(h(x_{2})=g(y_{2})\) (formally, \(y_{1}=k(\widetilde{x}_{1})\) and \(y_{2}=k(\widetilde{x}_{2})\)). Therefore \(f(y_{1})\equiv_{T}f(y_{2})\) and so \[h(x_{1})=g(y_{1})\equiv_{T}g(y_{2})=h(x_{2})\] by our assumption about \(f\) and \(g\). Note that this argument also implies that for any \(y\) for which \(f(y)\) is in the domain of \(h\), \(h(f(y))\equiv_{T}g(y)\). Next, we will show that \(h\) is measure-preserving. Let \(a\) be an arbitrary degree. We want to show that \(h\) gets above \(a\) on a cone. By determinacy, it is enough to show that it gets above \(a\) cofinally. So let \(b\) be an arbitrary real, and we will show that \(h\) is above \(a\) on some degree above \(b\). We claim that \(f(a\oplus b)\) is one such degree. Since \(f\) is above the identity, \(f(a\oplus b)\) is above \(b\). By the observation we have already made about \(h\), \(h(f(a\oplus b))\equiv_{T}g(a\oplus b)\). Since \(g\) is above the identity, this implies \(h(f(a\oplus b))\) is above \(a\). Since \(h\) is measure-preserving, Theorem 3.7 implies that \(h\) is above the identity on a cone. We will now use this fact to show that \(f\) is below \(g\) on a cone. Let \(x\) be any degree in a cone on which \(h\) is above the identity. Since \(f\) is above the identity, \(f(x)\) is above \(x\). Since \(x\) was in a cone on which \(h\) is above the identity, this implies that \(h(f(x))\geq_{T}f(x)\). Since \(h(f(x))\equiv_{T}g(x)\), we have shown that \(g(x)\geq_{T}f(x)\), as desired. Footnote 4: One might object that Theorem 3.7 was stated for functions defined on all of \(2^{\omega}\) while \(h\) is only defined on a cone. However, the proof of that theorem works just as well for functions defined on a cone, or even a pointed perfect tree.
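It may help to record the last step of this argument as a single chain of reductions: for any degree \(x\) in a cone on which \(h\) is above the identity,
\[f(x)\ \leqslant_{T}\ h(f(x))\ \equiv_{T}\ g(x),\]
where the first inequality uses that \(f\) is above the identity, so that \(f(x)\) itself lies in the cone on which \(h\) is above the identity, and the equivalence is the observation made earlier that \(h(f(y))\equiv_{T}g(y)\).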
One might hope to use the theorem above to show that the Martin order is linear above the identity by showing that for every pair of Turing-invariant functions \(f\) and \(g\) which are above the identity, the relationship required by the theorem holds for \(f\) and \(g\) in some order (or at least, holds on a cone). At first, this does not seem like such an unreasonable hope. It does hold for many pairs of functions on the Turing degrees. For example, if two Turing degrees have the same Turing jump then they also have the same \(\omega\)-jump. Likewise, if they have the same \(\omega\)-jump then they also have the same hyperjump. But if we go just a little bit higher than the hyperjump, we can find examples of pairs of Turing-invariant functions which do not have this sort of relationship. We give one such example below. **Example 3.9**.: Let \(f\colon 2^{\omega}\to 2^{\omega}\) be the hyperjump, i.e. \(f(x)=\mathcal{O}^{x}\), and let \(g\colon 2^{\omega}\to 2^{\omega}\) be the function defined by \[g(x)=(\mathcal{O}^{x})^{(\omega_{1}^{x})}.\] The function \(g\) is well-defined because \(\omega_{1}^{\mathcal{O}^{x}}\) is always strictly greater than \(\omega_{1}^{x}\) and thus the \(\omega_{1}^{x}\)-th jump of \(\mathcal{O}^{x}\) is well-defined. On the one hand, it is easy to see that there are reals \(x\) and \(y\) such that \(g(x)=g(y)\) but \(f(x)\neq f(y)\). To find such \(x\) and \(y\) we can take reals \(\widetilde{x}\) and \(\widetilde{y}\) which are above Kleene's \(\mathcal{O}\) and not Turing equivalent but such that \(\widetilde{x}^{(\omega_{1}^{\text{CK}})}\equiv_{T}\widetilde{y}^{(\omega_{1}^{\text{CK}})}\) and then use hyperjump inversion to find \(x\) and \(y\) which are low for \(\omega_{1}^{\text{CK}}\) such that \(\mathcal{O}^{x}=\widetilde{x}\) and \(\mathcal{O}^{y}=\widetilde{y}\). On the other hand, we can _also_ find reals \(x\) and \(y\) such that \(f(x)=f(y)\) but \(g(x)\neq g(y)\). To see why, let \(x\) be some real such that \(\omega_{1}^{x}>\omega_{1}^{\text{CK}}\). We can use hyperjump inversion to find a real \(y\) such that \(\omega_{1}^{y}=\omega_{1}^{\text{CK}}\) and \(\mathcal{O}^{y}=\mathcal{O}^{x}\). Since \(\omega_{1}^{x}\neq\omega_{1}^{y}\), it is clear that \((\mathcal{O}^{x})^{(\omega_{1}^{x})}\neq(\mathcal{O}^{y})^{(\omega_{1}^{y})}\). Actually, without even bothering to construct these examples, it should have been apparent that \(f\) and \(g\) cannot have the relationship required by the theorem above. Since \(f<_{M}g\), we know that there must be \(x\) and \(y\) such that \(g(x)=g(y)\) and \(f(x)\neq f(y)\) since otherwise the theorem would imply that \(g\) is below \(f\) on a cone. On the other hand, if we look at the proof of the theorem we can also see it provides reason to believe that there are reals \(x\) and \(y\) such that \(f(x)=f(y)\) and \(g(x)\neq g(y)\). If not, then the proof of the theorem would imply that there is a Turing-invariant function \(h\) such that \(h(f(x))=g(x)\) on a cone. Such an \(h\) would have to be above every function of the form \(x\mapsto x^{(\alpha)}\) for a fixed countable ordinal \(\alpha\), but also below the hyperjump. But it seems plausible that no such function exists at all and it is known that such a function cannot be uniformly invariant or order-preserving so we should not expect to be able to find it so easily.

## 4 Part 1 of Martin's Conjecture for Order-Preserving Functions

In this section, we will prove part 1 of Martin's Conjecture for order-preserving functions.
We will do so by first proving that every order-preserving function is either constant on a cone or measure-preserving and then obtain part 1 of Martin's Conjecture for order-preserving functions as a corollary to part 1 of Martin's Conjecture for measure-preserving functions. Since our proof of part 1 of Martin's Conjecture for Borel measure-preserving functions required more determinacy than is provable in \(\mathsf{ZF}\) (in particular, \(\mathbf{\Pi}^{1}_{1}\) determinacy), this does not give us a proof of part 1 of Martin's Conjecture for Borel order-preserving functions in \(\mathsf{ZF}\). We give such a proof using an idea due to Takayuki Kihara combined with our result that nontrivial order-preserving functions are measure-preserving (which is provable in \(\mathsf{ZF}\) for Borel functions). We will finish this section with an application of our results to the theory of locally countable Borel quasi-orders.

### A theorem on perfect sets

A key step in our proof that order-preserving functions are either constant on a cone or measure-preserving is a technical theorem about perfect sets, which we will prove below. The theorem was inspired by, and is a strengthening of, a theorem proved by Groszek and Slaman in [5], which we will state next. **Definition 4.1**.: Suppose that \(A\) is a perfect subset of \(2^{\omega}\) and \(x\in A\). Say that \(x\) is **eventually constant in \(A\)** if there is some \(n\) such that \[\forall y\in A\left(x\!\upharpoonright\!n=y\!\upharpoonright\!n\implies x\leq y\right)\quad\text{or}\quad\forall y\in A\left(x\!\upharpoonright\!n=y\!\upharpoonright\!n\implies y\leq x\right)\] where the ordering is the usual lexicographic ordering on \(2^{\omega}\). If you think of \(A\) as the set of branches through a perfect tree, this is saying that \(x\) eventually either always goes to the left or always goes to the right in the tree. **Theorem 4.2** (Groszek and Slaman [5] lemma 2.2).: _Suppose that \(A\) is a perfect subset of \(2^{\omega}\), \(B\) is a countable dense subset of \(A\) which contains no element which is eventually constant in \(A\), and \(\langle c_{i}\rangle_{i\in\mathbb{N}}\) is a countable sequence which contains every element of \(B\). Then for every \(x\) there are \(y_{0}\) and \(y_{1}\) in \(A\) such that_ \[\left(\bigoplus_{i\in\mathbb{N}}c_{i}\right)\oplus y_{0}\oplus y_{1}\geq_{T}x.\] The main shortcoming of Groszek and Slaman's theorem for our purposes is that to compute \(x\) you need to be able to compute the countable sequence \(\langle c_{i}\rangle_{i\in\mathbb{N}}\). In the situation where we would like to use the theorem we can only be assured of having some real which computes every element of the sequence, but does not necessarily compute the sequence itself (i.e. it may compute the sequence in a non-uniform way). The theorem we prove below was formulated to fix this problem. To prove our strengthened version of Groszek and Slaman's theorem, we use a proof that is somewhat different from theirs (and which is essentially a souped-up version of the coding argument used in the first author's proof of Martin's Conjecture for regressive functions on the hyperarithmetic degrees [12]). This proof also allows us to get rid of the requirement that no element of \(\langle c_{i}\rangle\) is eventually constant in \(A\) (though this is mostly just a cosmetic improvement). Roughly speaking, here's what is different about our proof. In Groszek and Slaman's proof, they start with a real \(x\) which they want to code using two elements of \(A\).
To do so, they code the bits of \(x\) into the sequence of decisions about whether to turn left or right in the tree whose branches are \(A\). In our proof, we instead essentially code the bits of \(x\) into the Kolmogorov complexity of initial segments of the elements of \(A\). **Theorem 4.3**.: _Suppose that \(A\) is a perfect subset of \(2^{\omega}\), \(B\) is a countable dense subset of \(A\), and \(c\) is a real which computes each \(b\in B\). Then for every \(x\) there are \(y_{0},y_{1},y_{2},y_{3}\) in \(A\) such that_ \[c\oplus y_{0}\oplus y_{1}\oplus y_{2}\oplus y_{3}\geqslant_{T}x.\] **Remark**.: The proof of this theorem is the kind of thing that is not that hard to explain on a blackboard during a one-on-one conversation, but which looks quite complicated when all the details are written down. In the proof below, we have tried as best we can to explain the idea of the construction without getting lost in the messy details. If we have succeeded, then the construction should not actually seem so complicated. If we have failed then we hope the reader will forgive us. Proof.: The basic idea here is to build up \(y_{0},\ldots,y_{3}\) by finite extensions and on each step code one more bit of \(x\). To use \(c\) together with \(y_{0},\ldots,y_{3}\) to compute \(x\), we have to decode the results of this coding process: figure out what happened on each step and recover the bits of \(x\) as a consequence. Perhaps you could imagine that we have two rooms which are completely separated from each other. In the first room, someone--let's call them the **coder**--is given the real \(x\) (and whatever other information they need) and tasked with building the \(y_{i}\)'s one bit at a time. In the other room, the coder's friend--let's call them the **decoder**--is given \(c\) and then receives the bits of the \(y_{i}\)'s one at a time, and needs to reconstruct what the coder did. So the coder needs to not only encode the bits of \(x\), but also encode enough extra information to allow the decoder to reconstruct the coding process. Note, by the way, that the decoder's process needs to be computable, but there is no such requirement on the coder. There is also one more constraint: the coder needs to end up building elements of the set \(A\). They can accomplish this by making sure that on each step, the portions of the \(y_{i}\)'s that they have built so far are each consistent with some element of \(B\). This will ensure that each \(y_{i}\) is the limit of a sequence of elements of \(A\), and thus each \(y_{i}\) is in \(A\) since \(A\) is closed. The most natural way to describe all this is to describe how these two processes work together. That is, describe what the coder is doing on a single step of the process, and, at the same time, describe what the decoder is doing on the same step. In particular, we will assume that the decoder has so far reconstructed all the steps correctly and see how they can also reconstruct the next step correctly. Actually, the decoder will not completely reconstruct everything the coder does, but just enough to decode the next bit of \(x\) and to allow the decoding process to continue on the next step. We will now begin to describe what happens on a single step in both processes. **The situation after \(n\) steps.** Suppose the coder has just finished the \(n^{\text{th}}\) step of the coding process. In other words, they have formed finite initial segments \(y_{0}^{n},y_{1}^{n},y_{2}^{n},y_{3}^{n}\) of \(y_{0},y_{1},y_{2},y_{3}\), respectively.
And to make sure that the reals being built will end up in \(A\), they have also picked elements \(b_{0}^{n},b_{1}^{n},b_{2}^{n},b_{3}^{n}\) of \(B\) such that each \(y_{i}^{n}\) is an initial segment of \(b_{i}^{n}\). They now want to code the \((n+1)^{\text{th}}\) bit of \(x\) by extending each of \(y_{0}^{n},y_{1}^{n},y_{2}^{n},y_{3}^{n}\) by a finite amount, making sure each one is still an initial segment of some element of \(B\) (though perhaps not the same element as at the end of the \(n^{\text{th}}\) step), while also giving the decoder enough information to recover what happened on this step. Let's now consider what things look like for the decoder. At the end of the \(n^{\text{th}}\) step of the decoding process, the decoder has a guess about exactly two of the \(b_{0}^{n},b_{1}^{n},b_{2}^{n},b_{3}^{n}\). In particular, if \(n\) is even then the decoder has a guess about \(b_{0}^{n}\) and \(b_{1}^{n}\) and if \(n\) is odd then the decoder has a guess about \(b_{2}^{n}\) and \(b_{3}^{n}\). From now on, we will assume that \(n\) is even and thus that the decoder has a guess about \(b_{0}^{n}\) and \(b_{1}^{n}\). This guess takes the form of two numbers, \(e_{0}^{n}\) and \(e_{1}^{n}\). These should be thought of as indices for programs which use \(c\) to compute \(b_{0}^{n}\) and \(b_{1}^{n}\), respectively. In other words, the decoder is guessing that \(\Phi_{e_{0}^{n}}(c)\) and \(\Phi_{e_{1}^{n}}(c)\) are both total and are equal to \(b_{0}^{n}\) and \(b_{1}^{n}\), respectively. Figure 3 below shows what the coder has built and what the decoder knows and has guessed at the end of step \(n\). We will now assume that the decoder's guesses at the end of step \(n\) are correct, describe what happens on the next step of the coding and decoding processes and argue that the decoder's guess at the end of this next step is still correct. **The decoding process.** Let's describe what happens on the decoding side first. The decoder first does the following: 1. Look at more and more bits of \(y_{0}\) until they find a place where \(y_{0}\) disagrees with \(\Phi_{e_{0}^{n}}(c)\)--in other words, a place where \(y_{0}\) disagrees with \(b_{0}^{n}\). Let \(l_{0}\) be the first such position. 2. Repeat this process with \(y_{1}\) and \(\Phi_{e_{1}^{n}}(c)\) to find the first position \(l_{1}\) where they disagree. Next, the decoder uses \(l_{0}\) and \(l_{1}\) to determine guesses \(e_{2}^{n+1}\) and \(e_{3}^{n+1}\) for \(b_{2}^{n+1}\) and \(b_{3}^{n+1}\). To pick \(e_{2}^{n+1}\), the decoder takes the least \(e\) such that when \(\Phi_{e}(c)\) is run for at most \(l_{1}\) steps, it converges and agrees with \(y_{2}\) on all inputs less than \(l_{0}\). In other words, the least \(e\) such that for all \(m\leqslant l_{0}\), \[\Phi_{e}(c,m)[l_{1}]\downarrow=y_{2}(m).\] The decoder then picks \(e_{3}^{n+1}\) in the same way. The decoder then extracts the \((n+1)^{\text{th}}\) bit of \(x\) by comparing \(e_{2}^{n+1}\) and \(e_{3}^{n+1}\). If \(e_{2}^{n+1}\) is smaller than \(e_{3}^{n+1}\), the decoder guesses that the \((n+1)^{\text{th}}\) bit of \(x\) is \(0\). Otherwise they guess that it is \(1\). The entire decoding process is pictured in Figure 4 below. This completes our description of the decoding process. If we have done a good job, then the reader should be able to fill in all the details about the coding process for themselves. But we will describe them here anyways for the sake of completeness. 
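Before turning to the coding process, it may help to record the decoder's step in compact form (with the conventions above, for an even step \(n\)): the decoder computes
\[l_{0}=\text{the least }l\text{ such that }y_{0}(l)\neq\Phi_{e_{0}^{n}}(c,l),\qquad l_{1}=\text{the least }l\text{ such that }y_{1}(l)\neq\Phi_{e_{1}^{n}}(c,l),\]
then sets
\[e_{2}^{n+1}=\text{the least }e\text{ such that }\Phi_{e}(c,m)[l_{1}]\downarrow=y_{2}(m)\text{ for all }m\leqslant l_{0},\]
and similarly for \(e_{3}^{n+1}\) with \(y_{3}\) in place of \(y_{2}\). The guessed bit of \(x\) is \(0\) if \(e_{2}^{n+1}<e_{3}^{n+1}\) and \(1\) otherwise.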
Figure 3: The coder and decoder’s view at the end of step \(n\).

Figure 4: Step \(n+1\) of the decoding process.

**The coding process.** At the end of step \(n\), the coder knows exactly what the decoder's guesses are (after all, they have access to all the same information as the decoder, so they can figure out exactly what the decoder did on each previous step). The coder also knows the next bit of \(x\) that they need to code. Suppose for convenience that the next bit is a \(0\). The coder will begin by choosing some value which they will make sure is the decoder's guess \(e_{2}^{n+1}\) for \(b_{2}^{n+1}\). One choice that works well enough is to simply let \(e_{2}^{n+1}\) be the first \(e\) such that \(\Phi_{e}(c)\) is total and equal to \(b_{2}^{n}\) (which is guaranteed to exist because \(b_{2}^{n}\leqslant_{T}c\) by assumption). So the coder can simply set \(b_{2}^{n+1}\) to be \(b_{2}^{n}\) (i.e. they will not change \(b_{2}^{n}\) on this step). Next, the coder wants to choose some value which they will ensure is the decoder's guess \(e_{3}^{n+1}\) for \(b_{3}^{n+1}\). In particular, since bit \(n+1\) of \(x\) is a \(0\), the coder wants to make sure that \(e_{3}^{n+1}\) is larger than \(e_{2}^{n+1}\). This is also easy enough to accomplish. Since there are infinitely many elements of \(B\) which extend \(y_{3}^{n}\) but only finitely many programs less than \(e_{2}^{n+1}\), there must be some \(b\in B\) such that the least \(e\) for which \(\Phi_{e}(c)=b\) is greater than \(e_{2}^{n+1}\). The coder then sets \(b_{3}^{n+1}\) to be such a \(b\) and sets \(e_{3}^{n+1}\) to be the least \(e\) such that \(\Phi_{e}(c)=b_{3}^{n+1}\). Now the coder needs to make sure that the values \(l_{0}\) and \(l_{1}\) that the decoder recovers are large enough that the decoder's guesses \(e_{2}^{n+1}\) and \(e_{3}^{n+1}\) are correct. To do so, the coder picks \(l_{0}\) to be large enough that \(e_{2}^{n+1}\) and \(e_{3}^{n+1}\) are the first indices \(e\) and \(e^{\prime}\) for which \(\Phi_{e}(c)\) and \(\Phi_{e^{\prime}}(c)\) converge on all inputs less than \(l_{0}\) and agree with \(b_{2}^{n+1}\) and \(b_{3}^{n+1}\), respectively, on all such inputs. The coder can then choose \(l_{1}\) to be large enough for \(\Phi_{e_{2}^{n+1}}(c)\) and \(\Phi_{e_{3}^{n+1}}(c)\) to both converge on all inputs less than \(l_{0}\). Next, the coder can choose \(b_{0}^{n+1}\) to be an element of \(B\) which has \(y_{0}^{n}\) as an initial segment and agrees with \(b_{0}^{n}\) on the first \(l_{0}\) bits, but which eventually disagrees with \(b_{0}^{n}\). They should retroactively increase the value of \(l_{0}\) to the first position of disagreement between \(b_{0}^{n}\) and \(b_{0}^{n+1}\), which may necessitate increasing \(l_{1}\) as well (so that the programs \(e_{2}^{n+1}\) and \(e_{3}^{n+1}\) are given enough time to converge on the first \(l_{0}\) inputs). The coder can now choose \(b_{1}^{n+1}\) in a similar manner, to be some element of \(B\) which has \(y_{1}^{n}\) as an initial segment and agrees with \(b_{1}^{n}\) on the first \(l_{1}\) bits, but eventually disagrees. They should then retroactively increase \(l_{1}\) to this first position of disagreement. Notice that increasing \(l_{0}\) and \(l_{1}\) in this way is harmless. Now the coder defines each \(y_{i}^{n+1}\) to be an initial segment of the corresponding \(b_{i}^{n+1}\) which is long enough for the decoding step described above to succeed: in particular, \(y_{0}^{n+1}\) and \(y_{1}^{n+1}\) must reach past the positions of disagreement \(l_{0}\) and \(l_{1}\), and \(y_{2}^{n+1}\) and \(y_{3}^{n+1}\) must determine \(y_{2}\) and \(y_{3}\) on all inputs up to \(l_{0}\).

Our goal is to show that \(A\) is cofinal. So we start with an arbitrary \(x\) and we want to find some \(y\) in \(A\) that computes \(x\).
By the theorem we just proved, we can find reals \(y_{0},y_{1},y_{2},y_{3}\) in \(P\) (and therefore also in \(A\)) such that \[c\oplus y_{0}\oplus y_{1}\oplus y_{2}\oplus y_{3}\geqslant_{T}x.\] Since \(A\) is countably directed, we can find an upper bound for \(c,y_{0},y_{1},y_{2},y_{3}\) in \(A\). This upper bound obviously computes \(x\), so it is the \(y\) we are after. ### Order-preserving functions are measure-preserving In this section, we will prove that part 1 of Martin's Conjecture holds for all order-preserving functions. We will do this by proving that every order-preserving function is either constant on a cone or measure-preserving and then invoking Theorem 3.7, which states that part 1 of Martin's Conjecture holds for all measure-preserving functions. To prove that every order-preserving function is either measure-preserving or constant on a cone we will use the theorem on perfect sets that we proved in the previous section, together with the fact that, under AD, every set of reals is either countable or contains a perfect set (known as the **perfect set theorem**, see [7] theorem 33.3). **Theorem 4.6** (\(\mathsf{ZF}+\mathsf{AD}\)).: _If \(f\colon 2^{\omega}\to 2^{\omega}\) is an order-preserving function then \(f\) is either constant on a cone or measure-preserving._ Proof.: Before giving the proof in detail, here's a sketch. Suppose that \(f\colon 2^{\omega}\to 2^{\omega}\) is an order-preserving function. By the perfect set theorem, \(\operatorname{range}(f)\) is either countable or contains a perfect set (this is the only use of determinacy in the proof). We will show that if \(\operatorname{range}(f)\) is countable then \(f\) is constant on a cone and if \(\operatorname{range}(f)\) contains a perfect set then \(f\) is measure-preserving. The case where \(\operatorname{range}(f)\) is countable is straightforward. In the case where \(\operatorname{range}(f)\) contains a perfect set we will use the theorem on perfect sets that we proved in the previous section (more specifically, we will use Corollary 4.5). The key point is that since \(f\) is order-preserving, its range is countably directed for Turing reducibility. **Case 1: the range of \(f\) is countable.** In this case, we can write \(2^{\omega}\) as a countable union of sets on which \(f\) is constant (i.e. the preimages of points in the range of \(f\)). Since there are only countably many of these sets, Corollary 2.4 implies that at least one of them contains \([T]\) for some pointed perfect tree \(T\). Thus \(f\) is constant on \([T]\) and hence constant on a cone. **Case 2: the range of \(f\) contains a perfect set.** The main point here is that the range of an order-preserving function is countably directed. To see why, suppose that \(x_{0},x_{1},\ldots\) are all in the range of \(f\). Pick reals \(y_{0},y_{1},\ldots\) such that \(f(y_{0})=x_{0}\), \(f(y_{1})=x_{1}\), and so on. Let \(y\) be the Turing join of all the \(y_{i}\)'s. Since \(y\) computes each \(y_{i}\) and \(f\) is order-preserving, \(f(y)\) computes each \(x_{i}\). In other words, \(f(y)\) is an upper bound for \(\{x_{0},x_{1},\ldots\}\). Since \(\operatorname{range}(f)\) contains a perfect set and is countably directed, Corollary 4.5 implies that it is cofinal. Now we want to show \(f\) is measure-preserving. In other words, we start with an arbitrary \(a\) and we want to show that there is some \(b\) so that \(f\) sends everything in the cone above \(b\) into the cone above \(a\). 
Since the range of \(f\) is cofinal, there is some \(x\geqslant_{T}a\) in the range of \(f\). Since \(x\) is in the range of \(f\), there is some \(y\) such that \(f(y)=x\). And since \(f\) is order-preserving, it takes the cone above \(y\) into the cone above \(x\) and hence into the cone above \(a\). **Theorem 4.7** (\(\mathsf{ZF}+\mathsf{AD}+\mathsf{DC}_{\mathbb{R}}\)).: _Part 1 of Martin's Conjecture holds for all order-preserving functions._ Proof.: Suppose \(f\colon 2^{\omega}\to 2^{\omega}\) is a Turing-invariant function which is order-preserving. We want to show that \(f\) is either constant on a cone or above the identity on a cone. By Theorem 4.6, \(f\) is either constant on a cone or measure-preserving. If \(f\) is constant on a cone then we are done and if \(f\) is measure-preserving then by Theorem 3.7 it is above the identity on a cone. ### A proof for Borel functions that works in \(\mathsf{ZF}\) In the previous section, we proved part 1 of Martin's Conjecture for order-preserving functions by reducing to the case of measure-preserving functions. However, as discussed in section 3.2, we do not currently know how to prove part 1 of Martin's Conjecture for Borel measure-preserving functions in \(\mathsf{ZF}\), only in \(\mathsf{ZF}+\mathbf{\Pi_{1}^{1}}-\mathsf{Det}\). Thus our proof in the previous section only implies that part 1 of Martin's Conjecture for order-preserving Borel functions holds in \(\mathsf{ZF}+\mathbf{\Pi_{1}^{1}}-\mathsf{Det}\). In this section, we will prove that it also holds in \(\mathsf{ZF}\). The key idea in this section is due to Kihara [9], who used it to show that if \(f\) is an order-preserving function then either 1. \(f(x)\leq_{T}x\) on a cone (in other words, \(f\) is regressive) 2. or there is some real \(a\) such that \(x^{\prime}\leq_{T}f(x)\oplus a\) on a cone. In the first case, Slaman and Steel's theorem on Martin's Conjecture for regressive functions shows that either \(f\) is constant on a cone or equal to the identity on a cone. In the second case, our result that order-preserving functions are either constant on a cone or measure-preserving shows that \(f(x)\) computes \(a\) on a cone (note that \(f\) cannot be constant on a cone in this case) and thus that \(f(x)\) computes \(x^{\prime}\) on a cone. If \(f\) is assumed to be Borel, all three of these results--Kihara's result, Slaman and Steel's theorem on regressive functions and our theorem that order-preserving functions are constant on a cone or measure-preserving--can be proved in \(\mathsf{ZF}\). Thus, together, they yield a \(\mathsf{ZF}\) proof of part 1 of Martin's Conjecture for order-preserving Borel functions. All three results also hold in \(\mathsf{ZF}+\mathsf{AD}\) when \(f\) is an arbitrary function and thus this also gives us a proof of part 1 of Martin's Conjecture for order-preserving functions under \(\mathsf{ZF}+\mathsf{AD}\) (note that \(\mathsf{DC}_{\mathbb{R}}\) is not required here). Kihara's proof relies on a theorem in descriptive set theory known as the **Solecki dichotomy**. However, it is possible to replace the use of the Solecki dichotomy in his argument with a more elementary statement to obtain the following: if \(f\) is an order-preserving function then either 1. \(f\) is constant on a cone 2. or there is some real \(a\) such that \(x\leq_{T}f(x)\oplus a\) on a cone. 
This is a weaker statement than what Kihara proved (in the second case, we just have \(f(x)\oplus a\) above the identity rather than above the jump), but it suffices for our purposes and, when combined with Slaman and Steel's result on order-preserving functions which are above the identity, it is equivalent to Kihara's original statement.

#### The Solecki dichotomy and the baby Solecki dichotomy

Informally, the Solecki dichotomy states that for every sufficiently definable function \(f\colon 2^{\omega}\to 2^{\omega}\), either \(f\) is a countable union of continuous functions or the Turing jump (as a function on \(2^{\omega}\)) is reducible to \(f\). To state this formally, we need to give precise definitions of "a countable union of continuous functions" and "the Turing jump is reducible to \(f\)." **Definition 4.8**.: A function \(f\colon 2^{\omega}\to 2^{\omega}\) is \(\sigma\)**-continuous** if there is a countable partition \(\langle A_{n}\rangle_{n\in\mathbb{N}}\) of \(2^{\omega}\) such that for each \(n\), \(f\!\upharpoonright\!_{A_{n}}\) is continuous with respect to the subspace topology on \(A_{n}\). Note that there is a small subtlety here: just because \(f\!\upharpoonright\!_{A_{n}}\) is continuous with respect to the subspace topology on \(A_{n}\) does not mean that \(f\!\upharpoonright\!_{A_{n}}\) can be extended to a continuous function defined on all of \(2^{\omega}\). We will also refer to a partial function which is continuous with respect to the subspace topology on its domain as a **partial continuous function**. **Definition 4.9**.: Given functions \(f,g\colon 2^{\omega}\to 2^{\omega}\), \(f\) is **continuously reducible** to \(g\), written \(f\leq_{c}g\), if there are partial continuous functions \(\varphi,\psi\colon 2^{\omega}\to 2^{\omega}\) such that for all \(x\in 2^{\omega}\), \(f(x)=\psi(g(\varphi(x)))\). In other words, the following diagram commutes: \[\begin{array}{ccc}2^{\omega}&\xrightarrow{\;\varphi\;}&2^{\omega}\\ {\scriptstyle f}\big\downarrow&&\big\downarrow{\scriptstyle g}\\ 2^{\omega}&\xleftarrow{\;\psi\;}&2^{\omega}\end{array}\] Footnote 5: This notion of reducibility has also been called **strong continuous Weihrauch reducibility** [1].

Now let us show that such a set \(P\) exists. To do so, we will define a game, similar to the perfect set game and show that if player 1 has a winning strategy in this game then there is a perfect set on which \(f\) is continuous and injective and if player 2 has a winning strategy then \(\operatorname{range}(f)\) is countable. **Informal description of the game.** We will first give an informal description of the game. First, player 1 plays two pairs of strings \(\langle\sigma_{0},\tau_{0}\rangle\) and \(\langle\sigma_{1},\tau_{1}\rangle\) such that \(\sigma_{0},\sigma_{1}\) are incompatible and \(\tau_{0},\tau_{1}\) are incompatible. These should be thought of as two different options for initial segments of reals \(x\) and \(y\) such that \(f(x)=y\). In other words, \(\sigma_{0}\) and \(\tau_{0}\) are one option for initial segments of \(x\) and \(y\), respectively, and \(\sigma_{1}\) and \(\tau_{1}\) are another option. The requirement that these two options are incompatible is meant to witness injectivity of \(f\).
Next, player 2 picks either \(\langle\sigma_{0},\tau_{0}\rangle\) or \(\langle\sigma_{1},\tau_{1}\rangle\). Suppose player 2 chooses \(\langle\sigma_{1},\tau_{1}\rangle\). Player 1 then plays two more pairs of strings \(\langle\sigma^{\prime}_{0},\tau^{\prime}_{0}\rangle\) and \(\langle\sigma^{\prime}_{1},\tau^{\prime}_{1}\rangle\) such that 1. \(\sigma^{\prime}_{0}\) and \(\sigma^{\prime}_{1}\) both extend \(\sigma_{1}\) and \(\tau^{\prime}_{0},\tau^{\prime}_{1}\) both extend \(\tau_{1}\), 2. \(\sigma^{\prime}_{0},\sigma^{\prime}_{1}\) are incompatible, 3. and \(\tau^{\prime}_{0},\tau^{\prime}_{1}\) are incompatible. and player 2 once again picks either \(\langle\sigma^{\prime}_{0},\tau^{\prime}_{0}\rangle\) or \(\langle\sigma^{\prime}_{1},\tau^{\prime}_{1}\rangle\). More generally, on each turn, player 1 plays two pairs of strings which both extend the pair of strings player 2 chose on the previous turn, making sure that the two pairs are incompatible with each other. At the end of this game, the two players have together determined two sequences, \(x\) and \(y\): \(x\) formed from the \(\sigma\)'s of the pairs chosen by player 2 and \(y\) formed from the \(\tau\)'s of the pairs chosen by player 2. Player 1 wins if \(f(x)=y\). **Formal description of the game.** On turn \(n\), player 1 plays two pairs of strings, \(\langle\sigma^{n}_{0},\tau^{n}_{0}\rangle\) and \(\langle\sigma^{n}_{1},\tau^{n}_{1}\rangle\), and then player 2 plays a bit \(b_{n}\in\{0,1\}\). \begin{tabular}{c|c c c c} player 1 & \(\langle\sigma^{0}_{0},\tau^{0}_{0}\rangle,\langle\sigma^{0}_{1},\tau^{0}_{1}\rangle\) & \(\langle\sigma^{1}_{0},\tau^{1}_{0}\rangle,\langle\sigma^{1}_{1},\tau^{1}_{1}\rangle\) & \(\ldots\) & \(\langle\sigma^{n}_{0},\tau^{n}_{0}\rangle,\langle\sigma^{n}_{1},\tau^{n}_{1}\rangle\) & \(\ldots\) \\ player 2 & \(b_{0}\) & \(b_{1}\) & \(\ldots\) & \(b_{n}\) & \(\ldots\) \\ \end{tabular} Additionally, player 1's plays must satisfy: 1. If \(n\geq 1\) then \(\sigma^{n}_{0},\sigma^{n}_{1}\) both extend \(\sigma^{n-1}_{b_{n-1}}\) and \(\tau^{n}_{0},\tau^{n}_{1}\) both extend \(\tau^{n-1}_{b_{n-1}}\). 2. \(\sigma^{n}_{0},\sigma^{n}_{1}\) are incompatible 3. and \(\tau^{n}_{0},\tau^{n}_{1}\) are incompatible. **Winning condition.** Let \(x=\bigcup_{i\in\mathbb{N}}\sigma^{i}_{b_{i}}\) and \(y=\bigcup_{i\in\mathbb{N}}\tau^{i}_{b_{i}}\). Player 1 wins if and only if \(f(x)=y\). **Case 1: player 1 wins.** First suppose that player 1 has a winning strategy, \(\gamma\). It can easily be seen that the set of reals \(x\) arising from plays in which player 1 follows \(\gamma\) is a perfect set on which \(f\) is continuous and injective. **Case 2: player 2 wins.** Now suppose that player 2 has a winning strategy, \(\eta\). We will show that \(\operatorname{range}(f)\) is countable. The idea is that we can tag each element of \(\operatorname{range}(f)\) by a unique position in the game. Since there are only countably many positions in the game, this is sufficient. Suppose \(y\in\operatorname{range}(f)\) and \(p\) is a position in the game such that each player has made exactly \(n+1\) moves so far. Say that \(y\) is **inescapable** at \(p\) if there is some \(x\in 2^{\omega}\) such that the following hold: 1. \(f(x)=y\) 2. all moves by player 2 so far have been following \(\eta\) 3. all moves so far are consistent with \(x,y\): for all \(i\leqslant n\), \(\sigma^{i}_{b_{i}}\) is an initial segment of \(x\) and \(\tau^{i}_{b_{i}}\) is an initial segment of \(y\) 4.
and no matter what player 1 plays next, \(\eta\) will avoid the move consistent with \(x,y\): for all initial segments \(\sigma\) of \(x\) extending \(\sigma_{b_{n}}^{n}\) and \(\tau\) of \(y\) extending \(\tau_{b_{n}}^{n}\) and all \(\sigma^{\prime},\tau^{\prime}\) such that \(\langle\sigma,\tau\rangle,\langle\sigma^{\prime},\tau^{\prime}\rangle\) is a valid next move for player 1, \(\eta\) will choose \(\langle\sigma^{\prime},\tau^{\prime}\rangle\) when given \(\langle\sigma,\tau\rangle,\langle\sigma^{\prime},\tau^{\prime}\rangle\) and when given \(\langle\sigma^{\prime},\tau^{\prime}\rangle,\langle\sigma,\tau\rangle\) (in particular, no play following \(\eta\) through \(p\) can end up producing the pair \(x,y\)). To finish the proof, it is enough to show that any \(y\in\operatorname{range}(f)\) is inescapable at some position \(p\) and that no distinct \(y,y^{\prime}\in\operatorname{range}(f)\) can be inescapable at the same position. First let's show that every \(y\in\operatorname{range}(f)\) is inescapable at some position. The idea is that if this is not the case, then we can find a way to defeat \(\eta\). Fix \(y\in\operatorname{range}(f)\) and \(x\in 2^{\omega}\) such that \(f(x)=y\). Suppose \(y\) is not inescapable at any position. Then in particular, \(y\) is not inescapable at the starting position of the game. Thus there are some initial segments \(\sigma\) of \(x\) and \(\tau\) of \(y\) and strings \(\sigma^{\prime},\tau^{\prime}\) such that for at least one of the two moves * \(\langle\sigma,\tau\rangle,\langle\sigma^{\prime},\tau^{\prime}\rangle\) * \(\langle\sigma^{\prime},\tau^{\prime}\rangle,\langle\sigma,\tau\rangle\) by player 1, \(\eta\) will choose \(\langle\sigma,\tau\rangle\). Consider playing this move for player 1. We are now at a position in the game where each player has made one move, player 2 has played according to \(\eta\) and all moves so far are consistent with \(x,y\). By assumption, \(y\) is not inescapable at this position either. So we can again find a move by player 1 so that \(\eta\) will still choose strings consistent with \(x,y\). We can continue this argument inductively to see that there is an infinite play where player 2 always plays according to \(\eta\) and all moves are consistent with \(x,y\). Hence the sequences formed at the end of this play are \(x\) and \(y\) themselves and since \(x\) was chosen so that \(f(x)=y\), this means player 1 wins. But this contradicts the assumption that \(\eta\) is a winning strategy. Now let's show that no distinct \(y,y^{\prime}\in\operatorname{range}(f)\) can be inescapable at the same position. Suppose not. In particular, suppose \(p\) is a position where each player has played \(n\) moves so far and \(y,y^{\prime}\) are both inescapable at \(p\), as witnessed by \(x\) and \(x^{\prime}\). Since \(y\neq y^{\prime}\) (and consequently \(x\neq x^{\prime}\)), there are incompatible initial segments \(\tau\) of \(y\) and \(\tau^{\prime}\) of \(y^{\prime}\) and incompatible initial segments \(\sigma\) of \(x\) and \(\sigma^{\prime}\) of \(x^{\prime}\) such that \(\langle\sigma,\tau\rangle\) and \(\langle\sigma^{\prime},\tau^{\prime}\rangle\) both extend the last move of \(p\). Now consider playing \(\langle\sigma,\tau\rangle,\langle\sigma^{\prime},\tau^{\prime}\rangle\) as player 1's next move in the game after \(p\). Since \(y\) is inescapable at \(p\) as witnessed by \(x\), \(\eta\) cannot choose \(\langle\sigma,\tau\rangle\). But since \(y^{\prime}\) is inescapable at \(p\) as witnessed by \(x^{\prime}\), \(\eta\) cannot choose \(\langle\sigma^{\prime},\tau^{\prime}\rangle\) either.
But \(\eta\) must choose one of these two options, so this is a contradiction. #### Kihara's proof We will now explain how to prove part 1 of Martin's Conjecture for Borel order-preserving functions in \(\mathsf{ZF}\), following Kihara's idea. As discussed above, Kihara's original proof used the Solecki dichotomy, but we will instead use the baby Solecki dichotomy. **Theorem 4.12**.: _Part 1 of Martin's Conjecture holds for all order-preserving Borel functions._ Proof.: Since \(f\) is order-preserving, it is either constant on a cone or measure-preserving. If it is constant on a cone then we are done, so we may assume it is measure-preserving. By the baby Solecki dichotomy, either \(\operatorname{range}(f)\) is countable or \(id\leq_{c}f\). We will show that in the former case, \(f\) is constant on a cone and in the latter case, \(f\) is above the identity on a cone. **Case 1: \(\operatorname{range}(f)\) is countable.** In this case, we can write \(2^{\omega}\) as a countable union of Borel sets such that \(f\) is constant on each one. Since there are only countably many of these sets, one of them must contain the set of paths through some pointed perfect tree, \(T\). Thus \(f\) is constant on \([T]\) and so \(f\) is constant on a cone. **Case 2: \(id\leq_{c}f\).** By definition, there are partial continuous functions \(\varphi\) and \(\psi\) such that for all \(x\in 2^{\omega}\), \(\psi(f(\varphi(x)))=x\). Since every partial continuous function is (partial) computable relative to some oracle, we can pick some \(a\) and \(b\) such that \(\varphi\) is computable relative to \(a\) and \(\psi\) is computable relative to \(b\). Now consider any \(x\) in the cone above \(a\). Note that for such an \(x\), \(\varphi(x)\leqslant_{T}x\). Since \(\psi(f(\varphi(x)))=x\), \(x\) can compute some \(y\) (namely \(\varphi(x)\)) such that \(f(y)\oplus b\) can compute \(x\) (via \(\psi\)). This seems pretty close to saying that \(f(x)\) can compute \(x\), and hence that \(f\) is above the identity, but there are a couple problems. 1. We are using \(f(y)\) rather than \(f(x)\) to compute \(x\). 2. To compute \(x\) from \(f(y)\) we also need to know \(b\), but we would like to show that \(f(x)\) can compute \(x\) without any extra information. The solution to the first problem is to note that \(f\) is order-preserving and \(y\) is computable from \(x\) and thus \(f(y)\) is computable from \(f(x)\). The solution to the second problem is to use the fact that \(f\) is measure-preserving, so on a high enough cone, \(f(x)\) computes \(b\). Let's put all of this more formally. Let \(x\) be large enough that \(x\) computes \(a\) and \(f(x)\) computes \(b\). Thus \(\varphi(x)\) is computable from \(x\) and since \(f\) is order-preserving, this means that \(f(x)\) computes \(f(\varphi(x))\). Since \(f(x)\) also computes \(b\), \(f(x)\) computes \(\psi(f(\varphi(x)))=x\). So \(f(x)\) computes \(x\) on a cone. ### Application to the theory of locally countable Borel quasi-orders We will now discuss an application of our result on order-preserving functions to the theory of **locally countable Borel quasi-orders**. In particular, we will show that Turing reducibility is not a universal locally countable Borel equivalence relation. The motivation for this result comes from the theory of countable Borel equivalence relations (for a survey of this theory, see [8]). Kechris has conjectured that Turing equivalence is a universal countable Borel equivalence relation, which is known to contradict Martin's Conjecture [4]. 
The result we prove in this section refutes a natural strengthening of Kechris's conjecture. Before proving our result, we begin by reviewing some definitions. **Definition 4.13**.: A quasi-order \(\leqslant_{X}\) on a Polish space \(X\) is **Borel** if it is Borel as a subset of \(X\times X\) and **locally countable** if for every element \(x\) of \(X\), the set of predecessors of \(x\)--i.e. \(\{y\ |\ y\leqslant_{X}x\}\)--is countable. Note that Turing reducibility on \(2^{\omega}\) is a locally countable Borel quasi-order. **Definition 4.14**.: If \(\leqslant_{X}\) is a Borel quasi-order on \(X\) and \(\leqslant_{Y}\) is a Borel quasi-order on \(Y\) then \(\leqslant_{X}\) is **Borel reducible** to \(\leqslant_{Y}\) if there is a Borel function \(f\colon X\to Y\) such that for all \(x_{1},x_{2}\in X\) \[x_{1}\leqslant_{X}x_{2}\iff f(x_{1})\leqslant_{Y}f(x_{2}).\] **Definition 4.15**.: A locally countable Borel quasi-order is **universal** if every other locally countable Borel quasi-order is Borel reducible to it. To show that Turing reducibility is not universal, we just need to exhibit a single locally countable Borel quasi-order that is not reducible to it (the general question of which locally countable Borel quasi-orders are reducible to Turing reducibility is explored more thoroughly in [6]). We will show that if we take Turing reducibility and add one extra point which is not comparable with anything else then the resulting quasi-order is not reducible to Turing reducibility. Here's the main idea of the proof. If we have a Borel reduction from Turing reducibility plus a point to regular Turing reducibility then by ignoring the extra point, we get an injective, order-preserving Borel function on the Turing degrees. By the Borel version of the results of section 4.2, this function must be measure-preserving. But this means that this function eventually gets above the image of the extra point, which contradicts the fact that it is a reduction (since the extra point is supposed to be incomparable to everything else). More formally, we can define a quasi-order as follows. Let \(0\) denote the element of \(2^{\omega}\) whose bits are all \(0\)s and let \(\leq_{T}^{\ast}\) be the binary relation on \(2^{\omega}\) defined as follows. \[x\leq_{T}^{\ast}y\iff\bigl(x\leq_{T}y\text{ and }x,y\neq 0\bigr)\text{ or }x=y=0.\] In other words, \(\leq_{T}^{\ast}\) is exactly like Turing reducibility except that there is a special point, \(0\), which is not comparable to anything else. It is easy to see that this is a locally countable Borel quasi-order. **Theorem 4.16**.: _The quasi-order \(\leq_{T}^{\ast}\) is not Borel reducible to \(\leq_{T}\)._ Proof.: Suppose for contradiction that \(f\colon 2^{\omega}\to 2^{\omega}\) is a Borel reduction from \(\leq_{T}^{\ast}\) to \(\leq_{T}\). Let \(f^{\ast}\) denote the function \(f\) restricted to all the reals not equal to \(0\). By definition of "Borel reduction," \(f^{\ast}\) is a Borel order-preserving function which is injective on the Turing degrees (though not necessarily on the reals). By Theorem 4.6, either \(f^{\ast}\) is constant on a cone or measure-preserving. But since it is injective on the Turing degrees it cannot be constant on a cone and thus must be measure-preserving. Footnote 6: Note that that theorem was stated as a theorem of \(\mathsf{ZF}+\mathsf{AD}\), but when restricted to Borel functions, it is provable in \(\mathsf{ZF}\).
The key point is that the perfect set theorem used in the proof holds for all analytic sets in \(\mathsf{ZF}\). Since \(f^{\ast}\) is measure-preserving, its range is cofinal in the Turing degrees. Thus there is some \(x\neq 0\) such that \(f(0)\leq_{T}f^{\ast}(x)\). But \(f^{\ast}(x)\) is just \(f(x)\) and hence we have \(f(0)\leq_{T}f(x)\). Since \(f\) is a Borel reduction, this implies that \(0\leq_{T}^{\ast}x\), which contradicts the definition of \(\leq_{T}^{\ast}\) (since \(0\) is supposed to be incomparable to all other elements). **Corollary 4.17**.: _Turing reducibility is not a universal locally countable Borel quasi-order._

## 5 Ultrafilters on the Turing Degrees

Several of the definitions, facts and theorems related to Martin's Conjecture can be recast in the language of ultrafilters. When recast in this language, our result on measure-preserving functions allows us to show that part 1 of Martin's Conjecture is equivalent to a statement about the structure of the class of ultrafilters on the Turing degrees. This equivalent statement suggests a few routes to making progress on Martin's Conjecture, one of which we will explore further. Central to this view of Martin's Conjecture is the **Martin measure**, also known as the **cone measure**, a countably complete filter on the Turing degrees. **Definition 5.1**.: The **Martin measure** on the Turing degrees is the class, \(U_{M}\), of all sets of Turing degrees which contain a cone, i.e. \[U_{M}=\{A\subseteq\mathcal{D}_{T}\mid\text{ for some }x,\text{ Cone}(x)\subseteq A\}.\] The most important fact about the Martin measure is that, under \(\mathsf{AD}\), it is an ultrafilter. **Theorem 5.2** (\(\mathsf{ZF}+\mathsf{AD}\); Martin).: _The Martin measure is an ultrafilter on the Turing degrees._ This theorem is simply a restatement of Martin's cone theorem in terms of the Martin measure. Likewise, several key definitions can be restated in terms of Martin measure. For example, the Martin order and Martin equivalence can be defined as follows. * \(f\leq_{M}g\) if and only if \(F(\mathbf{x})\leq_{T}G(\mathbf{x})\) for \(U_{M}\)-almost every \(\mathbf{x}\) (where \(F\) and \(G\) are the functions on the Turing degrees induced by \(f\) and \(g\), respectively). * \(f\equiv_{M}g\) if and only if \(F(\mathbf{x})=G(\mathbf{x})\) for \(U_{M}\)-almost every \(\mathbf{x}\). We will now see that the class of measure-preserving functions on the Turing degrees also has a natural definition in terms of the Martin measure: it is exactly the class of functions which are measure-preserving for the Martin measure in the sense of ergodic theory (which is the reason that we chose to call them "measure-preserving"). To explain this, we first need to recall some definitions from measure theory. **Remark 5.3**.: Several of the results in this section are proved in the theory \(\mathsf{ZF}+\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\). All of these results can also be proved in the theory \(\mathsf{ZF}+\mathsf{AD}^{+}\). We don't know whether they can be proved in \(\mathsf{ZF}+\mathsf{AD}+\mathsf{DC}_{\mathbb{R}}\).

### Measure-preserving functions and the Martin measure

#### Measure-preserving functions in general

Given a measure \(\mu\) on a set \(X\) and a function \(f\colon X\to Y\), there is a canonical way of getting a measure on \(Y\), called the pushforward of \(\mu\) by \(f\).
**Definition 5.4**.: If \(\mu\) is a measure on a space \(X\) and \(f\colon X\to Y\) is a function then the **pushforward** of \(\mu\) by \(f\), denoted \(f_{*}\mu\), is the measure on \(Y\) given by \[f_{*}\mu(A)=\mu(f^{-1}(A)).\] **Proposition 5.5**.: _The pushforward of a measure is itself a measure._ Note that for any functions \(f\colon X\to Y\) and \(g\colon Y\to Z\), \((g\circ f)_{*}\mu=g_{*}f_{*}\mu\). We can now define what it means for a function to be measure-preserving (in the sense of ergodic theory). **Definition 5.6**.: If \(\mu\) is a measure on a space \(X\) and \(f\colon X\to X\) is a function then \(f\) is **measure-preserving** for \(\mu\), or sometimes is said to **preserve** \(\mu\), if \(f_{*}\mu=\mu\). We will be mostly concerned with measures that are ultrafilters on the Turing degrees. Recall that an ultrafilter on a set \(X\) can be considered a \(\{0,1\}\)-valued measure on \(X\). The next proposition tells us that we can talk about the pushforwards of ultrafilters without worrying about measures which are not ultrafilters. **Proposition 5.7**.: _The pushforward of an ultrafilter is itself an ultrafilter._ Suppose that \(U\) is an ultrafilter on a set \(X\), \(V\) is an ultrafilter on a set \(Y\), \(f\colon X\to Y\) is any function and we would like to determine whether \(f_{*}U=V\). If we simply use the definition of pushforward, we must check that for each \(A\in V\), \(f^{-1}(A)\in U\) _and_ that for each \(A\notin V\), \(f^{-1}(A)\notin U\). The following lemma tells us that because \(U\) and \(V\) are ultrafilters, this condition can be simplified somewhat. **Lemma 5.8**.: _If \(U\) is an ultrafilter on \(X\), \(V\) is an ultrafilter on \(Y\) and \(f\colon X\to Y\) then \(f_{*}U=V\) if and only if for all \(A\in U\), \(f(A)\in V\)._ Proof.: (\(\implies\)) First, suppose that \(f_{*}U=V\) and let \(A\) be any set in \(U\). By the definition of pushforward, \(f(A)\) is in \(V\) if and only if \(f^{-1}(f(A))\) is in \(U\). Since \(f^{-1}(f(A))\) clearly contains \(A\) and \(A\) is in \(U\), we can conclude that \(f(A)\) is in \(V\). (\(\Leftarrow\)) Now suppose that for all \(A\in U\), \(f(A)\in V\). We need to show that for each \(B\subseteq Y\), \(B\in V\) if and only if \(f^{-1}(B)\in U\). First assume \(f^{-1}(B)\) is in \(U\). Then by our assumption, \(f(f^{-1}(B))\) is in \(V\) and therefore so is \(B\) (since it is a superset of \(f(f^{-1}(B))\)). Now assume that \(f^{-1}(B)\) is not in \(U\). Since \(U\) is an ultrafilter, this means \((f^{-1}(B))^{C}=f^{-1}(B^{C})\) _is_ in \(U\) (where we use \((\cdot)^{C}\) to denote the appropriate relative complement of a set). By the reasoning in the preceding paragraph, this means that \(B^{C}\) is in \(V\) and hence that \(B\) is not.

#### Measure-preserving = measure-preserving

We can now give our equivalent definition of the class of measure-preserving functions on the Turing degrees in terms of the Martin measure. **Proposition 5.9**.: _A Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) is measure-preserving if and only if the function \(F\colon\mathcal{D}_{T}\to\mathcal{D}_{T}\) that it induces is measure-preserving for the Martin measure._ Proof.: (\(\implies\)) Suppose \(f\) is measure-preserving (in the sense of Definition 1.7). By Lemma 5.8, we just need to show that if a subset \(A\) of the Turing degrees contains a cone then \(F(A)\) contains a cone. By determinacy, it is enough to show that \(F(A)\) is cofinal in the Turing degrees.
So let \(\boldsymbol{a}\) be any Turing degree and we need to show that there is some degree above \(\boldsymbol{a}\) which is in \(F(A)\). Since \(f\) is measure-preserving, there is some degree \(\boldsymbol{b}\) such that \(F\) maps everything in the cone above \(\boldsymbol{b}\) into the cone above \(\boldsymbol{a}\). Since \(A\) contains a cone, we can find some \(\boldsymbol{x}\in A\) such that \(\boldsymbol{x}\geq_{T}\boldsymbol{b}\) and by our choice of \(\boldsymbol{b}\), \(F(\boldsymbol{x})\) is above \(\boldsymbol{a}\). (\(\impliedby\)) Suppose \(F\) is measure-preserving for the Martin measure. Let \(a\) be any real. We need to find some real so that on the cone above that real, \(f\) is always above \(a\). Since \(F\) preserves the Martin measure, \(F^{-1}(\operatorname{Cone}(a))\) must contain a cone. Let \(b\) be a base of such a cone. Then for any \(x\geq_{T}b\), \(F(\deg_{T}(x))\in\operatorname{Cone}(a)\) and hence \(f(x)\geq_{T}a\).

### The Rudin-Keisler order on ultrafilters on the Turing degrees

In the previous section we saw that measure-preserving functions can be defined in terms of the Martin measure. In this section we will see that our result on part 1 of Martin's Conjecture for measure-preserving functions implies that, at least under \(\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\), part 1 of Martin's Conjecture is equivalent to a statement about ultrafilters on the Turing degrees.

#### The Rudin-Keisler Order

To explain the connection between part 1 of Martin's Conjecture and ultrafilters on the Turing degrees, we first need to give some background on the **Rudin-Keisler order** on ultrafilters. **Definition 5.10**.: Suppose \(U\) is an ultrafilter on a set \(X\) and \(V\) is an ultrafilter on a set \(Y\). Then \(U\) is **Rudin-Keisler below** \(V\), written \(U\leq_{RK}V\), if there is a function \(f\colon Y\to X\) such that \[f_{*}V=U.\] **Example 5.11**.: If \(U\) is a principal ultrafilter on a set \(X\) then \(U\) is Rudin-Keisler below every other ultrafilter. To see why, suppose \(U\) concentrates on the point \(a\in X\) and suppose \(V\) is an ultrafilter on a set \(Y\). It is easy to check that if \(f\colon Y\to X\) is the constant function \(y\mapsto a\) then \(f_{*}(V)=U\). Note that in the definition of \(\leq_{RK}\), the function \(f\) is going in the opposite direction from what one might naively expect. This makes more sense if one considers embeddings of ultrapowers: if \(U\leq_{RK}V\) then for every structure \(M\) there is an embedding \(M^{X}/U\to M^{Y}/V\). Also note that it is possible to have distinct ultrafilters \(U\) and \(V\) such that \(U\leq_{RK}V\) and \(V\leq_{RK}U\); in other words, \(\leq_{RK}\) is only a quasi-order rather than a partial order. In case this happens we will say that \(U\) and \(V\) are **weakly Rudin-Keisler equivalent**. **Definition 5.12**.: Suppose \(U\) is an ultrafilter on a set \(X\) and \(V\) is an ultrafilter on a set \(Y\). Then \(U\) is **weakly Rudin-Keisler equivalent** to \(V\), written \(U\equiv_{RK}V\), if \(U\leq_{RK}V\) and \(V\leq_{RK}U\). Note that this definition is slightly different than the usual definition of "Rudin-Keisler equivalent" found in the literature (which is why we have added the word "weakly"). The usual definition is that ultrafilters \(U\) on \(X\) and \(V\) on \(Y\) are Rudin-Keisler equivalent if there is a bijection \(f\colon X\to Y\) such that \(f_{*}U=V\). The two definitions are equivalent under \(\mathsf{ZFC}\), but not under \(\mathsf{ZF}\).
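As a quick sanity check that \(\leq_{RK}\) really is a quasi-order, note that reflexivity is witnessed by the identity map, and transitivity follows from the composition identity for pushforwards noted earlier: if \(f\colon Y\to X\) witnesses \(U\leq_{RK}V\) and \(g\colon Z\to Y\) witnesses \(V\leq_{RK}W\), then
\[(f\circ g)_{*}W=f_{*}(g_{*}W)=f_{*}V=U,\]
so \(f\circ g\) witnesses \(U\leq_{RK}W\).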
If we restrict our attention to ultrafilters on a single set and ignore the principal ultrafilters, then the class of ultrafilters which are minimal in the Rudin-Keisler order often turns out to be an important class with a natural characterization that does not mention the Rudin-Keisler order. We must be slightly careful here about what we mean by "minimal." We mean minimal in the sense of a quasi-order, i.e. \(U\) is minimal if for all \(V\leq_{RK}U\), \(V\) is weakly Rudin-Keisler equivalent to \(U\). **Example 5.13**.: The minimal nonprincipal ultrafilters on \(\omega\) are exactly the Ramsey ultrafilters. **Example 5.14**.: Every normal ultrafilter on a cardinal \(\kappa\) is Rudin-Keisler minimal among nonprincipal ultrafilters on \(\kappa\) and every minimal nonprincipal ultrafilter on \(\kappa\) is Rudin-Keisler equivalent to either a normal ultrafilter or to a Ramsey ultrafilter on \(\omega\) (see chapter 9 of [3]).

#### Part 1 of Martin's Conjecture and the Rudin-Keisler Order

We will now show that, at least under \(\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\) (or \(\mathsf{AD}^{+}\)), part 1 of Martin's Conjecture is equivalent to a statement about the position of the Martin measure in the Rudin-Keisler order on nonprincipal ultrafilters on the Turing degrees. **Theorem 5.15** (\(\mathsf{ZF}+\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\)).: _Part 1 of Martin's Conjecture is equivalent to the following statement: if \(V\) is a nonprincipal ultrafilter on \(\mathcal{D}_{T}\) such that \(V\leq_{RK}U_{M}\) then \(V=U_{M}\)._ Note that this is a bit stronger than saying that \(U_{M}\) is minimal among the nonprincipal ultrafilters on the Turing degrees (though it does imply that). In particular, this statement rules out the existence of a nonprincipal ultrafilter \(V\) on \(\mathcal{D}_{T}\) which is weakly Rudin-Keisler equivalent, but not literally equal, to \(U_{M}\). Proof.: Since we are working under \(\mathsf{Uniformization}_{\mathbb{R}}\), part 1 of Martin's Conjecture is equivalent to the statement that every function \(F\colon\mathcal{D}_{T}\to\mathcal{D}_{T}\) is either constant on a cone or above the identity on a cone (the point is that under \(\mathsf{Uniformization}_{\mathbb{R}}\) every function on the Turing degrees is induced by a Turing-invariant function on the reals). We have the following equivalences. * \(F\) is constant on a cone if and only if \(F_{*}U_{M}\) is a principal ultrafilter. * \(F\) is above the identity on a cone if and only if \(F\) is measure-preserving (one direction is clear from the definitions and the other follows from Theorem 3.4). By the equivalent definition of measure-preserving in terms of Martin measure, this means \(F\) is above the identity on a cone if and only if \(F_{*}U_{M}=U_{M}\). Thus \(F\) is either constant on a cone or above the identity on a cone if and only if \(F_{*}U_{M}\) is either a principal ultrafilter or \(U_{M}\) itself. By definition of \(\leq_{RK}\), the latter statement holds for all \(F\) if and only if no nonprincipal ultrafilter on \(\mathcal{D}_{T}\) besides \(U_{M}\) itself is below \(U_{M}\) in the Rudin-Keisler order. This equivalence suggests a few approaches to part 1 of Martin's Conjecture. * **Use work on the Rudin-Keisler order from set theory.** For example, we have mentioned that a normal ultrafilter on a cardinal \(\kappa\) is \(\leq_{RK}\)-minimal among nonprincipal ultrafilters on \(\kappa\).
Notably, Slaman and Steel's theorem on Martin's Conjecture for regressive functions can be seen as providing a kind of analogue of normality for Martin measure. * **Study specific ultrafilters or classes of ultrafilters.** Given a specific ultrafilter (or class of ultrafilters) on the Turing degrees, one could try to show that this ultrafilter is not Rudin-Keisler below the Martin measure. We will discuss this approach more in the next section. * **Split part 1 of Martin's Conjecture into two parts.** By Theorem 5.15, to prove part 1 of Martin's Conjecture it is enough to prove two things: if \(V\) is a nonprincipal ultrafilter on \(\mathcal{D}_{T}\) such that \(V\leq_{RK}U_{M}\) then \(U_{M}\leq_{RK}V\) (i.e. they are weakly equivalent) and if \(U_{M}\leq_{RK}V\leq_{RK}U_{M}\) then \(V=U_{M}\). Perhaps one of these is easier to prove on its own than the full part 1 of Martin's Conjecture. We will discuss a proposition relevant to the latter of these two parts in section 5.4. ### The Lebesgue and Baire ultrafilters It is possible to show that under \(\mathsf{AD}\), Lebesgue measure on \(2^{\omega}\) induces an ultrafilter on the Turing degrees. Likewise, under \(\mathsf{AD}\), the Baire filter (i.e. the class of comeager sets) on \(2^{\omega}\) induces an ultrafilter on the Turing degrees. In light of the discussion in the previous section, it would be interesting to show that these two ultrafilters are not below the Martin measure in the Rudin-Keisler order. We don't know how to do that but we can show that neither of them is _above_ the Martin measure in the Rudin-Keisler order. This might sound like the wrong direction, but it's not as bad as it sounds: it shows that neither ultrafilter is weakly equivalent to the Martin measure. **Lebesgue and Baire are ultrafilters** First, we define the Lebesgue and Baire filters on \(\mathcal{D}_{T}\) as follows. * **The Lebesgue filter:** let \(U_{L}\) denote the class of subsets of \(\mathcal{D}_{T}\) with measure \(1\), i.e. \[A\in U_{L}\iff\lambda(\{x\in 2^{\omega}\mid\deg_{T}(x)\in A\})=1,\] where \(\lambda\) denotes the Lebesgue measure on \(2^{\omega}\). * **The Baire filter:** let \(U_{B}\) denote the class of subsets of \(\mathcal{D}_{T}\) which are comeager, i.e. \[A\in U_{B}\iff\{x\in 2^{\omega}\mid\deg_{T}(x)\in A\}\text{ is comeager in }2^{\omega}.\] Note that both \(U_{L}\) and \(U_{B}\) are countably complete filters on \(\mathcal{D}_{T}\). As we mentioned above, \(\mathsf{AD}\) implies that both \(U_{L}\) and \(U_{B}\) are actually ultrafilters. This can be proved using Kolmogorov's zero-one law together with standard regularity properties implied by \(\mathsf{AD}\). **Definition 5.16**.: A set \(A\subseteq 2^{\omega}\) is **closed under tail equivalence** if for all \(x,y\in 2^{\omega}\) which differ at only finitely many positions \[x\in A\iff y\in A.\] **Theorem 5.17** (Kolmogorov's zero-one law).: _Suppose \(A\subseteq 2^{\omega}\) is closed under tail equivalence._ 1. _If_ \(A\) _is Lebesgue measurable then either_ \(\lambda(A)=0\) _or_ \(\lambda(A)=1\)_._ 2. _If_ \(A\) _has the Baire property (i.e._ \(A\) _is either meager or comeager in some basic open set) then_ \(A\) _is either meager or comeager._ **Theorem 5.18** (\(\mathsf{ZF+AD}\)).: _Every subset of \(2^{\omega}\) is Lebesgue measurable and has the Baire property._ **Proposition 5.19** (\(\mathsf{ZF+AD}\)).: \(U_{L}\) _and \(U_{B}\) are ultrafilters._ Proof.: We will just provide a proof for \(U_{L}\) since the proof for \(U_{B}\) is almost identical. 
Let \(A\) be any set of Turing degrees. We want to show that either \(A\) is in \(U_{L}\) or \(\mathcal{D}_{T}\smallsetminus A\) is in \(U_{L}\). Since Turing degrees are closed under tail equivalence, the set \(\{x\mid\deg_{T}(x)\in A\}\) is also closed under tail equivalence. \(\mathsf{AD}\) implies that it is Lebesgue measurable and thus by Kolmogorov's zero-one law it either has measure \(0\) or measure \(1\). In the former case, \(\mathcal{D}_{T}\smallsetminus A\in U_{L}\). In the latter case, \(A\in U_{L}\). **Lebesgue and Baire are not above Martin** We will now prove that the Lebesgue and Baire ultrafilters are not Rudin-Keisler above the Martin measure. Our strategy is as follows. Suppose \(F\colon\mathcal{D}_{T}\to\mathcal{D}_{T}\) is a function such that \(F_{\bullet}U_{L}=U_{M}\). By composing \(F\) with the map \(\boldsymbol{x}\mapsto\omega_{1}^{\boldsymbol{x}}\) and then taking the pushforward of \(U_{L}\) by this function, we get a nonprincipal countably complete ultrafilter on \(\omega_{1}\). However, countably complete ultrafilters on \(\omega_{1}\) are rather constrained. In particular, they can only satisfy Fubini's theorem if they are principal. Since Lebesgue measure does satisfy Fubini's theorem, we can use this to derive a contradiction. **Lemma 5.20** (\(\mathsf{ZF+AD}\)).: _Every function \(f\colon 2^{\omega}\to\omega_{1}\) is constant on a set of positive Lebesgue measure._ Proof.: Suppose for contradiction that \(f\) is not constant on any set of positive measure. Note that for every \(\alpha\in\omega_{1}\), \(\mathsf{AD}\) implies that \(f^{-1}(\alpha)\) is Lebesgue measurable and so our assumption implies that it has measure \(0\). By countable additivity of the Lebesgue measure, this implies that for any countable set \(A\subseteq\omega_{1}\), \(f^{-1}(A)\) has measure \(0\). Now let \(B\) be the subset of \(2^{\omega}\times 2^{\omega}\) defined by \[B=\{(x,y)\mid f(x)\leq f(y)\}.\] Again, since we are working under \(\mathsf{AD}\), we know \(B\) is Lebesgue measurable. We will now use Fubini's theorem to compute the measure of \(B\) in two different ways to arrive at a contradiction. By Fubini's theorem we have: \[\lambda(B)=\int\lambda(\{y\mid(x,y)\in B\})\,dx=\int\lambda(\{x\mid(x,y)\in B\})\,dy.\] Now note that for any \(y\), we have \[\{x\mid(x,y)\in B\}=f^{-1}(\{\alpha\mid\alpha\leq f(y)\}).\] Since this is the inverse image under \(f\) of a countable set, its measure must be \(0\). Thus \[\int\lambda(\{x\mid(x,y)\in B\})\,dy=\int 0\,dy=0.\] On the other hand, for any \(x\), \[\{y\mid(x,y)\in B\}=f^{-1}(\{\alpha\mid f(x)\leq\alpha\}).\] Since this is the complement of the inverse image under \(f\) of a countable set, its measure must be \(1\). Thus \[\int\lambda(\{y\mid(x,y)\in B\})\,dx=\int 1\,dx=1.\] Therefore we have calculated that the measure of \(B\) is both \(0\) and \(1\), a contradiction. **Corollary 5.21**.: _Every function \(F\colon\mathcal{D}_{T}\to\omega_{1}\) is constant on some set in \(U_{L}\)._ Proof.: This follows from the previous lemma together with Kolmogorov's zero-one law. **Theorem 5.22** (\(\mathsf{ZF+AD}\)).: _The Lebesgue ultrafilter is not Rudin-Keisler above the Martin measure, i.e. \(U_{M}\not\leq_{RK}U_{L}\)._ Proof.: Suppose for contradiction that \(U_{M}\leq_{RK}U_{L}\), as witnessed by some function \(F\) (so \(F_{\bullet}U_{L}=U_{M}\)). 
Let \(G\colon\mathcal{D}_{T}\to\omega_{1}\) be the map defined by \[G(\boldsymbol{x})=\omega_{1}^{\boldsymbol{x}}.\] It is straightforward to check that \(G_{\bullet}(U_{M})\) is a nonprincipal ultrafilter on \(\omega_{1}\). But by the lemma above, \(G\circ F\) is constant on a set of Lebesgue measure \(1\) and hence \((G\circ F)_{\bullet}(U_{L})\) is a principal ultrafilter on \(\omega_{1}\). This is a contradiction since we have \[(G\circ F)_{\bullet}(U_{L})=G_{\bullet}(F_{\bullet}(U_{L}))=G_{\bullet}(U_{M}).\qed\] **Theorem 5.23** (\(\mathsf{ZF+AD}\)).: _The Baire ultrafilter is not Rudin-Keisler above the Martin measure, i.e. \(U_{M}\not\leq_{RK}U_{B}\)._ Proof.: We can repeat the proof for the Lebesgue ultrafilter almost verbatim. In particular, Baire category satisfies a version of Fubini's theorem (the Kuratowski-Ulam theorem) which is sufficient to carry out Lemma 5.20. In light of the results above, it would be very interesting to show that the Lebesgue and Baire ultrafilters are not below the Martin measure in the Rudin-Keisler order. Note that Andrew Marks has shown that under \(\mathsf{AD}_{\mathbb{R}}\), \(U_{L}\leq_{RK}U_{M}\) holds if and only if there is a Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) such that for all \(x\), \(f(x)\) is \(x\)-random [14]. Our results also suggest some questions which are less directly connected to Martin's Conjecture. For example, we showed that \(U_{L}\) and \(U_{B}\) are not Rudin-Keisler above the Martin measure. Is there any ultrafilter on \(\mathcal{D}_{T}\) which is Rudin-Keisler above the Martin measure (besides the Martin measure itself)? Also, is there any meaningful difference between the Lebesgue and Baire ultrafilters on \(\mathcal{D}_{T}\) with respect to the Rudin-Keisler order? ### Additional facts about the Martin measure and the Rudin-Keisler order We will finish our discussion of Martin's Conjecture and ultrafilters on the Turing degrees by mentioning a few facts which seem relevant to attempts to investigate the place of the Martin measure in the Rudin-Keisler order. #### A characterization of the Martin measure In order to use the approach to part 1 of Martin's Conjecture described in Section 5.2, it is necessary to have tools to show that ultrafilters on the Turing degrees are equal to the Martin measure. We do not know of many such tools, except for the following proposition. **Proposition 5.24** (\(\mathsf{ZF+AD}\)).: _Suppose \(V\) is a nonprincipal ultrafilter on the Turing degrees such that \(\{\boldsymbol{x}\mid\operatorname{Cone}(\boldsymbol{x})\in V\}\) is in \(V\). Then \(V=U_{M}\)._ Proof.: Let \(A=\{\boldsymbol{x}\mid\operatorname{Cone}(\boldsymbol{x})\in V\}\). We will first assume that \(A\) is cofinal in the Turing degrees and show that this implies \(U_{M}=V\). We will then show that \(A\) is cofinal. **The cofinality of \(A\) is sufficient.** Assume \(A\) is cofinal in the Turing degrees. Since \(U_{M}\) and \(V\) are both ultrafilters, it is enough to show that every set in \(U_{M}\) is in \(V\). To show this, it is enough to show that every cone is in \(V\). Fix an arbitrary degree \(\boldsymbol{x}\). We will show that the cone above \(\boldsymbol{x}\) is in \(V\). Since \(A\) is cofinal, there is some \(\boldsymbol{y}\geq_{T}\boldsymbol{x}\) such that \(\boldsymbol{y}\in A\). By definition of \(A\), this implies that \(\operatorname{Cone}(\boldsymbol{y})\in V\) and hence that \(\operatorname{Cone}(\boldsymbol{x})\) (which is a superset of \(\operatorname{Cone}(\boldsymbol{y})\)) is in \(V\). 
\(A\) **is cofinal.** Let \(\widetilde{A}=\{x\in 2^{\omega}\mid\deg_{T}x\in A\}\). The idea is to apply Corollary 4.5 to \(\widetilde{A}\) to show that \(\widetilde{A}\) (and hence \(A\) as well) is cofinal. First, we claim that \(A\) is not countable. Since we are working in \(\mathsf{ZF+AD}\), \(V\) is countably complete (since every ultrafilter is). Since \(V\) is nonprincipal and countably complete and \(A\) is in \(V\), \(A\) cannot be countable. Next we claim that \(A\) is countably directed in the Turing degrees. Let \(\boldsymbol{x}_{0},\boldsymbol{x}_{1},\ldots\) be a countable sequence of elements of \(A\). Hence for each \(n\), \(\operatorname{Cone}(\boldsymbol{x}_{n})\in V\). So \(\bigcap_{n}\operatorname{Cone}(\boldsymbol{x}_{n})\) is also in \(V\), since \(V\) is countably complete. By assumption, \(A\) is in \(V\) and hence \[A\cap\bigcap_{n}\operatorname{Cone}(\boldsymbol{x}_{n})\] is nonempty. It is easy to check that any element of this intersection gives us an element of \(A\) which is an upper bound for all the \(\boldsymbol{x}_{n}\)'s. Since \(A\) is not countable, neither is \(\widetilde{A}\). Hence the perfect set theorem implies that \(\widetilde{A}\) contains a perfect set. Since \(A\) is countably directed in the Turing degrees, so is \(\widetilde{A}\). Thus Corollary 4.5 implies that \(\widetilde{A}\) is cofinal in the Turing degrees and so \(A\) is as well. This proposition has an interesting consequence in the world of Turing-invariant functions, which can be seen as a sharpening of our result on order-preserving functions. **Definition 5.25**.: A Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) is **almost order-preserving** if for every \(x\), there is a cone on which \(f\) is above \(f(x)\)--i.e. for all \(y\) in some cone, \(f(y)\geqslant_{T}f(x)\). Note that if \(f\) is order-preserving then \(f(y)\geqslant_{T}f(x)\) for all \(y\geqslant_{T}x\) whereas if \(f\) is merely almost order-preserving then \(f(y)\geqslant_{T}f(x)\) only for all \(y\) of sufficiently high Turing degree. **Corollary 5.26** (\(\mathsf{ZF+AD+DC_{R}}\)).: _Suppose \(f\colon 2^{\omega}\to 2^{\omega}\) is almost order-preserving. Then either \(f\) is constant on a cone or \(f\geqslant_{M}id\)._ Proof.: Assume \(f\) is not constant on any cone and let \(F\colon\mathcal{D}_{T}\to\mathcal{D}_{T}\) be the function on \(\mathcal{D}_{T}\) induced by \(f\). We will show \(F_{\bullet}U_{M}\) has the property in the statement of Proposition 5.24. To see why, note that \(F_{\bullet}U_{M}\) concentrates on the image of \(F\) and it follows immediately from the definition of almost order-preserving that for every degree \(\boldsymbol{x}\) in the image of \(F\), the cone above \(\boldsymbol{x}\) is in \(F_{\bullet}U_{M}\). Also note that since \(f\) is not constant on any cone, \(F_{\bullet}U_{M}\) is nonprincipal. By Proposition 5.24, this implies that \(F_{\bullet}U_{M}=U_{M}\). In other words, \(f\) is measure-preserving. So by Theorem 3.7, \(f\geqslant_{M}id\), as desired. **The Rudin-Keisler order below the Martin measure** Earlier, we showed that part 1 of Martin's Conjecture is equivalent to the statement that there are no nonprincipal ultrafilters on the Turing degrees which are Rudin-Keisler below the Martin measure (other than the Martin measure itself). It seems reasonable to ask whether there are any nonprincipal ultrafilters at all that are Rudin-Keisler below the Martin measure, even if we consider ultrafilters on sets other than the Turing degrees. 
In fact, a theorem due to Kunen implies that there are many such ultrafilters: in particular, any ultrafilter on an ordinal less than \(\Theta\) (where \(\Theta\) is the least ordinal such that there is no surjection \(\mathbb{R}\to\Theta\)). **Theorem 5.27** (\(\mathsf{ZF+AD+DC_{R}}\); Kunen).: _If \(V\) is an ultrafilter on an ordinal \(\kappa<\Theta\) then \(V\leqslant_{RK}U_{M}\)._ A proof of this theorem can be found in a paper by Steel [26] in the course of a proof of a more well-known theorem of Kunen which states that every ultrafilter on any ordinal less than \(\Theta\) is ordinal definable (Theorem 8.6 of Steel's paper). ## 6 Generalizations and Counterexamples In this section we will discuss the extent to which our results can be generalized to other contexts. In particular, we will consider whether they still hold in degree structures other than the Turing degrees, whether they hold for functions which are not Turing-invariant, and whether they hold (in a modified form) in \(\mathsf{ZFC}\). In general, we find that part 1 of Martin's Conjecture for measure-preserving functions is fairly robust, holding in a number of different contexts, while our results on order-preserving functions are much harder to generalize. ### Other degree structures There are many degree structures besides the Turing degrees studied in computability theory and it is possible to state a version of Martin's Conjecture in a number of these structures. All that's required is that the degree structure satisfy the appropriate analog of Martin's cone theorem and have a notion of a jump operator. For example, both the arithmetic degrees and the hyperarithmetic degrees satisfy these requirements--in the arithmetic degrees, the appropriate jump operator is the \(\omega\)-jump, while in the hyperarithmetic degrees it's the hyperjump--and thus we can state a version of Martin's Conjecture for both. Surprisingly, these different versions of Martin's Conjecture have turned out to work somewhat differently from each other. Some of the special cases of Martin's Conjecture which are known to hold in the Turing degrees are known to be false in the arithmetic degrees. And there are other instances of Martin's Conjecture which are known to hold for the Turing degrees, but whose status is open for the arithmetic degrees or the hyperarithmetic degrees. For example, here is the status of two special cases of Martin's Conjecture for the arithmetic and hyperarithmetic degrees. * **Martin's Conjecture for uniformly invariant functions:** This fails in the arithmetic degrees [15], but holds in the hyperarithmetic degrees (unpublished, but mentioned in [23]). * **Martin's Conjecture for regressive functions:** This is open for the arithmetic degrees and holds for the hyperarithmetic degrees [12]. In this section, we will discuss to what extent the results in this paper extend to the arithmetic and hyperarithmetic degrees. In brief, the results on measure-preserving functions seem quite robust and hold in most degree structures, while the results on order-preserving functions seem to rely more on the specific structure of the Turing degrees. #### Measure-preserving functions on other degree structures Both of our proofs of part 1 of Martin's Conjecture for measure-preserving functions seem to be very flexible and can be made to work in most degree structures, including both the arithmetic and hyperarithmetic degrees. 
Rather than recapitulate our proof in detail for many different degree structures, we will just give a sketch of the key points for the case of the arithmetic degrees. We will use the \(\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\) proof since it is simpler to explain, but our second proof can also be easily adapted to the arithmetic degrees. For the sake of completeness, we begin by stating the definition of "measure-preserving" for arithmetically invariant functions. **Definition 6.1**.: An arithmetically invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) is **measure-preserving** if for every \(a\) there is some \(b\) such that \[x\geqslant_{A}b\implies f(x)\geqslant_{A}a.\] In other words, for every \(a\), \(f\) is arithmetically above \(a\) on a cone of arithmetic degrees. **Theorem 6.2** (\(\mathsf{ZF}+\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\)).: _If \(f\colon 2^{\omega}\to 2^{\omega}\) is an arithmetically invariant, measure-preserving function then \(f(x)\geqslant_{A}x\) on a cone of arithmetic degrees._ Proof.: First, use \(\mathsf{Uniformization}_{\mathbb{R}}\) to pick an increasing modulus, \(g\), for \(f\). There is one subtlety here: we want \(g\) to be increasing not only on the arithmetic degrees, but also on the Turing degrees. In other words, we want \(g\) to be a function such that \[g(x)\geqslant_{T}x\text{ and }y\geqslant_{A}g(x)\implies f(y)\geqslant_{A}x.\] This may look a little unintuitive: why not just require \(g(x)\geqslant_{A}x\)? The reason is that some of the lemmas we would like to invoke depend on finding a computable injective function on a pointed perfect tree and we cannot find a computable inverse for \(g\) unless \(g(x)\geqslant_{T}x\). It is possible to get around this difficulty by reproving our main lemmas with somewhat different hypotheses tailored to functions on the arithmetic degrees, but our approach of requiring \(g(x)\) to compute \(x\) seems easier and more general. In any case, there is no problem with requiring \(g(x)\) to compute \(x\) because if \(g\) is any modulus for \(f\), we can always replace \(g\) with \(x\mapsto g(x)\oplus x\) to get a modulus which is increasing on the Turing degrees. Now that we have \(g\), we can proceed with the rest of the proof more or less unchanged. By Corollary 2.6 we can find a pointed perfect tree, \(T\), and a computable function \(h\) defined on \([T]\) which is a right inverse for \(g\) on \([T]\) (in this case, it does not matter whether \(T\) is pointed in the sense of the Turing degrees or in the sense of the arithmetic degrees). For all \(x\in[T]\), the definition of modulus implies that \(f(x)\geqslant_{A}h(x)\) and Lemma 2.1 implies that \(h(x)\oplus T\geqslant_{T}x\). Putting these together we have that \(f(x)\oplus T\geqslant_{A}x\) on a cone of arithmetic degrees. Since \(f\) is measure-preserving, it gets above \(T\) on a cone and thus \(f(x)\geqslant_{A}x\) on a cone, as desired. The only things about the degree structure that seem to be required to make this proof work are that it satisfies something like Martin's pointed perfect tree theorem7 and that its notion of reduction is reasonable enough to prove things like Corollary 2.6 and Lemma 2.1. Footnote 7: Note that this is not true of all degree structures. For example, Marks has observed that Martin's cone theorem fails for polynomial time Turing equivalence [13]. 
#### Order-preserving functions on other degree structures Unlike our results on measure-preserving functions, our results on order-preserving functions do not seem easy to generalize to other degree structures. For order-preserving functions on the arithmetic degrees, for example, part 1 of Martin's Conjecture is simply false. **Theorem 6.3** (Slaman and Steel).: _There is a function \(f\colon 2^{\omega}\to 2^{\omega}\) which is order-preserving for arithmetic reducibility which is neither constant on a cone of arithmetic degrees nor above the identity on a cone of arithmetic degrees._ This theorem has not been published, but the construction is very similar to the counterexample to part 1 of Martin's Conjecture for the arithmetic degrees constructed in [15]. The theorem above also shows that the analog of Theorem 4.6 for the arithmetic degrees fails: not all functions which are order-preserving on the arithmetic degrees are measure-preserving on the arithmetic degrees. To see why, note that any counterexample to part 1 of Martin's Conjecture for order-preserving functions on the arithmetic degrees cannot be measure-preserving on the arithmetic degrees because, as observed above, part 1 of Martin's Conjecture does hold for such functions. In contrast, we do not know whether part 1 of Martin's Conjecture for order-preserving functions holds for the hyperarithmetic degrees. On the one hand, we do not know how to extend the crucial Theorem 4.6 to the hyperarithmetic degrees. On the other hand, we also don't know of any counterexample. Resolving this seems like an interesting question. ### Non-invariant functions In the previous section, we saw that our proof of part 1 of Martin's Conjecture for measure-preserving functions is robust in the sense that it works in many different degree structures. In this section, we will see that it is robust in another sense: it works even for functions which are not Turing-invariant. As in the previous section, both the \(\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\) proof from section 3.1 and the \(\mathsf{AD}+\mathsf{DC}_{\mathbb{R}}\) proof from section 3.2 can easily be modified to work in this new setting, but we will just explain how to modify the \(\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\) proof since it's a bit simpler. Before we actually give the proof, we will mention one reason why this sort of result is interesting. If we remove the requirement of Turing invariance from Martin's Conjecture then there are many counterexamples. In fact, practically every construction of classical computability theory gives rise to a counterexample. As a concrete example, consider the Friedberg jump inversion theorem. The proof of this theorem is quite constructive, and in fact produces a (non Turing-invariant) function \(f\colon 2^{\omega}\to 2^{\omega}\) such that for each \(x\), \(f(x)^{\prime}\equiv_{T}x\oplus 0^{\prime}\). Thus \(f\) is a regressive function which is neither constant on any cone nor ever above the identity. An interesting feature of most of these classical constructions is that they produce reals which are, in some sense, generic. Sometimes this is even true in a precise technical sense. For example, if \(f\) is the function produced by Friedberg jump inversion then \(f(x)\) is always \(1\)-generic relative to \(x\). It is reasonable to ask to what extent this is a necessary feature of such constructions. For example, is there a constructive proof of the jump inversion theorem that doesn't produce generic reals? 
Of course, this is a hard question to make precise, but we can at least note some ways that reals can fail to look generic. One feature of most types of generic reals is that they tend to avoid a cone--for example, if \(g\) is a function such that \(g(x)\) is always \(1\)-generic relative to \(x\) then \(g\) avoids the cone above \(0^{\prime}\). And if \(g(x)\) is instead always \(1\)-random relative to \(x\) then \(g\) does not necessarily avoid the cone above \(0^{\prime}\) (there are \(1\)-random reals in every degree above \(0^{\prime}\)), but as soon as \(x\) is above \(0^{\prime}\), it does. Thus any proof of jump inversion which produces a (not necessarily Turing-invariant) measure-preserving function would seem to be producing reals that are not generic. The fact that part 1 of Martin's Conjecture for measure-preserving functions holds even without the restriction to Turing-invariant functions shows that there is no such measure-preserving jump inversion function. In other words, this feature of the proof of jump inversion seems to be inescapable. Before describing how to modify the proof, we should clarify that when we say a non Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) is measure-preserving, we mean that for every \(a\in 2^{\omega}\), there is some \(b\) such that \[x\geqslant_{T}b\implies f(x)\geqslant_{T}a.\] In other words, exactly what we meant when we said a Turing-invariant function is measure-preserving. **Theorem 6.4** (\(\mathsf{ZF}+\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\)).: _If \(f\colon 2^{\omega}\to 2^{\omega}\) is measure-preserving (but not necessarily Turing-invariant) then \(f(x)\geqslant_{T}x\) on a cone._ Proof.: The proof is nearly identical to the proof for Turing-invariant functions. Let \(g\) be an increasing modulus for \(f\), and \(h\) a computable function which inverts \(g\) on a pointed perfect tree, \(T\). The key point is that it follows from the definition of modulus that if \(y\) is in \([T]\) and \(x\geqslant_{T}y\) then \(f(x)\geqslant_{T}h(y)\) and that this fact does not depend on \(f\) being Turing-invariant. To see why this is enough, suppose \(x\) is large enough to be Turing equivalent to something in \(T\) and also large enough such that \(f(x)\geqslant_{T}T\). Let \(y\in[T]\) be Turing equivalent to \(x\). Then \[f(x)\geqslant_{T}h(y)\oplus T\geqslant_{T}y\equiv_{T}x.\] In other words, \(f(x)\geqslant_{T}x\) for all large enough \(x\). The fact that our proof of part 1 of Martin's Conjecture for measure-preserving functions still works for functions which are not Turing-invariant points to an interesting feature of the proof: it relies mostly on manipulating functions which are _not_ themselves Turing-invariant, even when the function \(f\) is. The way that we chose the modulus, \(g\), came with no guarantees that \(g\) is Turing-invariant. What's more, it follows from Slaman and Steel's theorem on regressive functions that no inverse for \(g\) can be Turing-invariant (otherwise it would yield a non-constant, regressive function on the Turing degrees). ### Ideal-valued functions We will now see that our result on measure-preserving functions is robust in yet another way. In particular, it holds for functions which take values in the set of Turing ideals. 
**Definition 6.5**.: A **Turing ideal** is a set of Turing degrees \(\mathcal{I}\) such that * \(\mathcal{I}\) **is closed under Turing reducibility:** if \(\boldsymbol{x}\in\mathcal{I}\) and \(\boldsymbol{y}\leqslant_{T}\boldsymbol{x}\) then \(\boldsymbol{y}\in\mathcal{I}\). * \(\mathcal{I}\) **is closed under finite joins:** if \(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{n}\in\mathcal{I}\) then \(\boldsymbol{x}_{1}\oplus\ldots\oplus\boldsymbol{x}_{n}\in\mathcal{I}\). Also, \(\mathcal{I}\) is **proper** if it is not equal to all of \(\mathcal{D}_{T}\). **Notation**.: _Let \(\operatorname{Spec}(\mathcal{D}_{T})\) denote the set of Turing ideals._ **Definition 6.6**.: A function \(\mathcal{I}\colon 2^{\omega}\to\operatorname{Spec}(\mathcal{D}_{T})\) is **measure-preserving** if for every \(a\in 2^{\omega}\), there is some \(b\in 2^{\omega}\) such that \[x\geqslant_{T}b\implies a\in\mathcal{I}(x).\] It is not hard to see that our proof of part 1 of Martin's Conjecture for measure-preserving functions can also be used to prove a similar result about ideal-valued functions. **Theorem 6.7** (\(\mathsf{ZF}+\mathsf{AD}+\mathsf{Uniformization}_{\mathbb{R}}\)).: _Suppose \(\mathcal{I}\colon 2^{\omega}\to\operatorname{Spec}(\mathcal{D}_{T})\) is measure-preserving. Then for all \(x\) on a cone, \(x\in\mathcal{I}(x)\)._ Proof.: The proof is nearly identical to the proof for \(2^{\omega}\)-valued functions but requires a modification of the definition of "modulus." In particular, call \(f\colon 2^{\omega}\to 2^{\omega}\) a modulus for \(\mathcal{I}\) if for all \(a\), \[x\geqslant_{T}f(a)\implies a\in\mathcal{I}(x).\] Call \(f\) an increasing modulus if, in addition, for all \(x\), \(f(x)\geqslant_{T}x\). By \(\mathsf{Uniformization}_{\mathbb{R}}\), \(\mathcal{I}\) has an increasing modulus. By Corollary 2.6, we can invert \(f\) on a pointed perfect tree to get a pointed perfect tree \(T\) and a computable function \(g\) defined on \([T]\) such that for all \(x\in[T]\), \(f(g(x))=x\). Note that this implies that for all \(x\), \(g(x)\in\mathcal{I}(x)\). Since \(g\) is computable and injective on \([T]\), for all \(x\in[T]\), \(g(x)\oplus T\geqslant_{T}x\). If \(x\) is of high enough Turing degree then \(T\in\mathcal{I}(x)\) (since \(\mathcal{I}\) is measure-preserving). Since Turing ideals are closed under Turing joins, this implies that for any such \(x\), \[g(x)\oplus T\in\mathcal{I}(x).\] Since Turing ideals are closed under Turing reducibility, this implies that \(x\leqslant_{T}g(x)\oplus T\) is in \(\mathcal{I}(x)\). In light of the above theorem, it would be interesting to know if something similar holds for order-preserving ideal-valued functions. More precisely, say that a function \(\mathcal{I}\colon 2^{\omega}\to\operatorname{Spec}(\mathcal{D}_{T})\) is **order-preserving** if for all \(x,y\in 2^{\omega}\), \[x\leqslant_{T}y\implies\mathcal{I}(x)\subseteq\mathcal{I}(y).\] Suppose \(\mathcal{I}\colon 2^{\omega}\to\operatorname{Spec}(\mathcal{D}_{T})\) is order-preserving. Must \(\mathcal{I}\) either be constant on a cone or satisfy \(x\in\mathcal{I}(x)\) on a cone? Note that Slaman has proved an analogue of part 2 of Martin's conjecture for Borel order-preserving functions \(\mathcal{I}\colon 2^{\omega}\to\operatorname{Spec}(\mathcal{D}_{T})\) (unpublished, but see [22]). 
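To illustrate these notions with a concrete, easily verified example (included here for concreteness rather than taken from the development above): the map sending each real to the ideal of degrees arithmetic in it, \[\mathcal{I}(x)=\{\boldsymbol{y}\in\mathcal{D}_{T}\mid\boldsymbol{y}\leqslant_{T}\deg_{T}(x)^{(n)}\text{ for some }n\in\omega\},\] is order-preserving (if \(x\leqslant_{T}z\) then \(x^{(n)}\leqslant_{T}z^{(n)}\) for every \(n\)) and measure-preserving (for any \(a\), already \(x\geqslant_{T}a\) implies \(\deg_{T}(a)\in\mathcal{I}(x)\)), and it satisfies \(\deg_{T}(x)\in\mathcal{I}(x)\) for every \(x\), in line with the conclusion of Theorem 6.7.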
### \(\mathsf{ZFC}\) counterexamples It is easy to construct counterexamples to Martin's Conjecture in \(\mathsf{ZFC}\), even when the conjecture is restricted to some special class of functions, such as order-preserving functions. However, Slaman and Steel have observed in [23] that if we alter the statement of Martin's Conjecture by replacing "on a cone" with "cofinally" then some special cases are provable in \(\mathsf{ZFC}\). They give two examples of this phenomenon. 1. For any order-preserving function \(f\colon 2^{\omega}\to 2^{\omega}\), if \(f\) is above the identity on a cone then either \(f(x)\equiv_{T}x\) on a cone or \(f(x)\geqslant_{T}x^{\prime}\) for cofinally many \(x\). 2. For any regressive, measure-preserving function \(f\colon 2^{\omega}\to 2^{\omega}\), \(f(x)\equiv_{T}x\) for cofinally many \(x\). In this section we will show that this does not happen for the main theorems of this paper: \(\mathsf{ZFC}\) proves that there are counterexamples to part 1 of Martin's Conjecture for measure-preserving functions and order-preserving functions, even when we replace "on a cone" with "cofinally" in the conclusion of Martin's Conjecture. #### Counterexample for measure-preserving functions We will first show that in \(\mathsf{ZFC}\) we can construct a Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) which is measure-preserving and such that for all uncomputable \(x\), \(x\not\preccurlyeq_{T}f(x)\). The main idea of the construction is to write the Turing degrees as an increasing union of Turing ideals and define \(f(x)\) to be a minimal upper bound for all the reals computable from \(x\) which first show up in a strictly earlier ideal than \(x\) itself. We will need one fact about Turing ideals. **Lemma 6.8**.: _If \(\mathcal{I}\) is a countable Turing ideal and \(\mathbf{x}\) is an uncomputable Turing degree which is not contained in \(\mathcal{I}\) then \(\mathcal{I}\) has an upper bound which does not compute \(\mathbf{x}\)._ Proof.: If \(\mathcal{I}\) is empty then this is trivial. Otherwise, it follows from a theorem of Spector (see Exercise 6.5.12 of Soare's textbook [24] for a proof) that \(\mathcal{I}\) has an exact pair--i.e. Turing degrees \(\mathbf{a}\) and \(\mathbf{b}\) such that * \(\mathbf{a}\) and \(\mathbf{b}\) are both upper bounds for \(\mathcal{I}\) * and if any Turing degree \(\mathbf{y}\) is below both \(\mathbf{a}\) and \(\mathbf{b}\) then \(\mathbf{y}\in\mathcal{I}\). Since \(\mathbf{x}\notin\mathcal{I}\), \(\mathbf{x}\) cannot be computable from both \(\mathbf{a}\) and \(\mathbf{b}\) and thus at least one of them satisfies the desired conclusion. **Theorem 6.9** (\(\mathsf{ZFC}\)).: _There is a Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) such that \(f\) is measure-preserving and for all uncomputable \(x\), \(f(x)\) does not compute \(x\)._ Proof.: Instead of defining a Turing-invariant function on the reals, we will define a function \(F\) on the Turing degrees (which is equivalent under \(\mathsf{ZFC}\)). Let \(\langle\mathcal{I}_{\alpha}\rangle_{\alpha}\) be an increasing, well-ordered sequence of proper Turing ideals such that \(\bigcup_{\alpha}\mathcal{I}_{\alpha}=\mathcal{D}_{T}\) and the sequence has no maximal element (such a sequence is easy to construct in \(\mathsf{ZFC}\)). 
Given a Turing degree \(\mathbf{x}\), let \(\alpha_{\mathbf{x}}\) denote the least ordinal \(\alpha\) such that \(\mathbf{x}\notin\mathcal{I}_{\alpha}\) and let \(\mathcal{I}_{\mathbf{x}}\) denote \[\mathcal{I}_{\mathbf{x}}=\{\mathbf{y}\in\mathcal{D}_{T}\mid\mathbf{y}\preccurlyeq_{T}\mathbf{x}\text{ and for some }\beta<\alpha_{\mathbf{x}},\,\mathbf{y}\in\mathcal{I}_{\beta}\}.\] In other words, \(\mathcal{I}_{\mathbf{x}}\) consists of those Turing degrees \(\mathbf{y}\) which are computable from \(\mathbf{x}\) and which show up in a strictly earlier ideal than \(\mathbf{x}\). It is easy to check that \(\mathcal{I}_{\mathbf{x}}\) is a countable (possibly empty) Turing ideal which does not contain \(\mathbf{x}\). If \(\mathbf{x}\) is uncomputable then by Lemma 6.8, \(\mathcal{I}_{\mathbf{x}}\) has an upper bound which does not compute \(\mathbf{x}\). Thus we can define \(F\) as follows: if \(\mathbf{x}\) is computable then set \(F(\mathbf{x})=\mathbf{0}\) and otherwise set \(F(\mathbf{x})\) to be any upper bound for \(\mathcal{I}_{\mathbf{x}}\) which does not compute \(\mathbf{x}\). By construction, \(\mathbf{x}\not\preccurlyeq_{T}F(\mathbf{x})\) holds for all uncomputable \(\mathbf{x}\). So we just need to check that \(F\) is measure-preserving. To this end, fix a Turing degree \(\mathbf{a}\) and we will show that \(F\) is above \(\mathbf{a}\) on a cone. Let \(\alpha\) be the least ordinal such that \(\mathbf{a}\in\mathcal{I}_{\alpha}\) and let \(\mathbf{b}\) be some Turing degree such that \(\mathbf{b}\notin\mathcal{I}_{\alpha}\) (which exists because \(\mathcal{I}_{\alpha}\) is proper). Finally, let \(\mathbf{c}=\mathbf{a}\oplus\mathbf{b}\). We claim that \(F\) is above \(\mathbf{a}\) on the cone above \(\mathbf{c}\). Let \(\mathbf{x}\geqslant_{T}\mathbf{c}\). Since \(\mathbf{x}\) computes \(\mathbf{b}\), which is not in \(\mathcal{I}_{\alpha}\) and since the sequence of ideals is increasing, \(\alpha_{\mathbf{x}}>\alpha\). Since \(\mathbf{x}\) also computes \(\mathbf{a}\), \(\mathbf{a}\in\mathcal{I}_{\mathbf{x}}\) and thus \(F(\mathbf{x})\) computes \(\mathbf{a}\). #### Counterexample for order-preserving functions We will now show that in \(\mathsf{ZFC}\) we can construct a Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) which is order-preserving, not constant on any cofinal set and not above the identity on any cofinal set. Furthermore, the function we construct is not measure-preserving--in fact, the range is disjoint from a cone--and thus Theorem 4.6 also fails under \(\mathsf{ZFC}\). The proof relies on a theorem of Sacks. **Theorem 6.10** (\(\mathsf{ZFC}\); Sacks [21]).: _Every locally countable partial order of size \(\omega_{1}\) can be embedded into the Turing degrees._ **Theorem 6.11** (\(\mathsf{ZFC}\)).: _There is a Turing-invariant function \(f\colon 2^{\omega}\to 2^{\omega}\) such that \(f\) is order-preserving, not constant on any cofinal set and not above the identity on any cofinal set. Also, the range of \(f\) is disjoint from a cone and hence \(f\) is not measure-preserving._ Proof.: We will once again just explain how to define a function on the Turing degrees which has the desired properties. Let \((P,\leqslant_{P})\) be the partial order consisting of \(\omega_{1}\), with its usual order, plus one point, \(q\), which is incomparable to everything in \(\omega_{1}\). Let \(\pi\) be an embedding of \(P\) into the Turing degrees. 
For any \(\mathbf{x}\), let \(\alpha_{\mathbf{x}}\) be the least ordinal such that \(\mathbf{x}\) does not compute \(\pi(\alpha_{\mathbf{x}})\). Note that such an \(\alpha_{\mathbf{x}}\) must exist since \(\mathbf{x}\) can only compute countably many degrees, while the range of \(\pi\) is uncountable. Now define \(F(\mathbf{x})=\pi(\alpha_{\mathbf{x}})\). \(F\) **is order-preserving.** Let \(\mathbf{x}\) and \(\mathbf{y}\) be Turing degrees such that \(\mathbf{x}\leqslant_{T}\mathbf{y}\). Note that for any \(\alpha<\omega_{1}\), if \(\mathbf{x}\) computes \(\pi(\alpha)\) then so does \(\mathbf{y}\). Hence \(\alpha_{\mathbf{x}}\leqslant\alpha_{\mathbf{y}}\). Since \(\pi\) is an embedding of partial orders, we have \[F(\mathbf{x})=\pi(\alpha_{\mathbf{x}})\leqslant_{T}\pi(\alpha_{\mathbf{y}})=F(\mathbf{y}).\] \(F\) **is not constant on a cofinal set.** Suppose it is, with constant value \(\mathbf{a}\). By definition of \(F\), \(\mathbf{a}\) must be equal to \(\pi(\alpha)\) for some \(\alpha\). But then for any \(\mathbf{x}\) which computes \(\mathbf{a}\), \(\alpha_{\mathbf{x}}\) cannot be equal to \(\alpha\) and thus since \(\pi\) is an embedding, \(F(\mathbf{x})=\pi(\alpha_{\mathbf{x}})\neq\pi(\alpha)\). Therefore on the cone above \(\mathbf{a}\), \(F(\mathbf{x})\neq\mathbf{a}\), which contradicts the assumption that \(F(\mathbf{x})=\mathbf{a}\) on a cofinal set. **The range of \(F\) is disjoint from a cone.** In particular, the range of \(F\) is disjoint from the cone above \(\pi(q)\). This is because the range of \(F\) is contained in the image of \(\omega_{1}\) under \(\pi\) and since \(\pi\) is an embedding of partial orders, nothing in this image computes \(\pi(q)\). Note that this also immediately implies that \(F\) is not above the identity on any cofinal set. ## 7 Questions Throughout this paper we have mentioned some interesting questions raised by our work. For convenience we will now provide a list of these questions. **Question 1**.: _Does part 1 of Martin's Conjecture hold for all order-preserving functions on the hyperarithmetic degrees?_ **Question 2**.: _Does part 1 of Martin's conjecture hold for all order-preserving ideal-valued functions on the Turing degrees?_ **Question 3**.: _Is part 1 of Martin's Conjecture for measure-preserving functions provable in \(\mathsf{ZF}\) when restricted to Borel functions?_ A negative answer to questions 4 and 5 together would imply part 1 of Martin's Conjecture. **Question 4**.: _Is there any ultrafilter on the Turing degrees which is strictly below the Martin measure in the Rudin-Keisler order?_ **Question 5**.: _Is there any ultrafilter on the Turing degrees besides the Martin measure itself which is weakly Rudin-Keisler equivalent to the Martin measure?_ Question 6 is a special case of question 4. **Question 6**.: _Are the Lebesgue or Baire ultrafilters on the Turing degrees below the Martin measure in the Rudin-Keisler order?_ Question 7 has no direct bearing on Martin's Conjecture but seems intriguing. **Question 7**.: _Is there any ultrafilter on the Turing degrees which is strictly above the Martin measure in the Rudin-Keisler order?_
2309.09958
An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
Visual instruction tuning has recently shown encouraging progress with open-source large multimodal models (LMM) such as LLaVA and MiniGPT-4. However, most existing studies of open-source LMM are performed using models with 13B parameters or smaller. In this paper we present an empirical study of scaling LLaVA up to 33B and 65B/70B, and share our findings from our explorations in image resolution, data mixing and parameter-efficient training methods such as LoRA/QLoRA. These are evaluated by their impact on the multi-modal and language capabilities when completing real-world tasks in the wild. We find that scaling LMM consistently enhances model performance and improves language capabilities, and performance of LoRA/QLoRA tuning of LMM are comparable to the performance of full-model fine-tuning. Additionally, the study highlights the importance of higher image resolutions and mixing multimodal-language data to improve LMM performance, and visual instruction tuning can sometimes improve LMM's pure language capability. We hope that this study makes state-of-the-art LMM research at a larger scale more accessible, thus helping establish stronger baselines for future research. Code and checkpoints will be made public.
Yadong Lu, Chunyuan Li, Haotian Liu, Jianwei Yang, Jianfeng Gao, Yelong Shen
2023-09-18T17:30:46Z
http://arxiv.org/abs/2309.09958v1
# An Empirical Study of Scaling Instruction-Tuned Large Multimodal Models ###### Abstract Visual instruction tuning has recently shown encouraging progress with open-source large multimodal models (LMM) such as LLAVA and MiniGPT-4. However, most existing studies of open-source LMM are performed using models with 13B parameters or smaller. In this paper we present an empirical study of scaling LLAVA up to 33B and 65B/70B, and share our findings from our explorations in image resolution, data mixing and parameter-efficient training methods such as LoRA/QLoRA. These are evaluated by their impact on the multi-modal and language capabilities when completing real-world tasks in the wild. We find that scaling LMM consistently enhances model performance and improves language capabilities, and performance of LoRA/QLoRA tuning of LMM are comparable to the performance of full-model fine-tuning. Additionally, the study highlights the importance of higher image resolutions and mixing multimodal-language data to improve LMM performance, and visual instruction tuning can sometimes improve LMM's pure language capability. We hope this study makes state-of-the-art LMM research at a larger scale more accessible, thus helping establish stronger baselines for future research. Code and checkpoints will be made public. ## 1 Introduction Recent studies on large multimodal models (LMM) [9; 10] have been focused on the methods of _visual instruction tuning_[12]. The results are promising: _e.g.,_ the open-source project Large Language and Vision Assistant (LLAVA) shows that training a 7B large language model (LLM) with multimodal instruction-following data for 3 hours on 8 A-100 GPUs leads to an LMM with strong visual understanding and reasoning capabilities in the wild: reproducing some of the most appealing examples of the proprietary OpenAI multimodal GPT-4 model [14]. A similar idea is explored in its concurrent work MiniGPT-4 [20]. It has rapidly become a prominent research topic, spurring the development of numerous new models, benchmarks, and applications [10]. However, the high compute cost has led most existing studies to utilize 7B and 13B LLMs. Thus, the impact of significantly scaling up the model size to _e.g.,_ 33B and 65B remains unexplored. This study aims to fill this gap by empirically investigating language models of larger sizes for LMM, sharing insights of our scaling experiments and establishing stronger baselines using larger-scale LLAVA for future research. Specifically, we explore the impact of larger model sizes, model tuning and data mixing methods on model performance, and present our findings and recommendations. The scaling recipe leads to new state-of-the-art (SoTA) performance on LLAVA-Bench [12] and MM-VET [19]. We hope that our findings and larger LLAVA checkpoints would provide a reference for future research on visual instruction tuning. ## 2 Experiment Setup Model Checkpoints.To study the impact of scaling up LLM on multimodal capabilities, we increase the language model size to 33B and 65B [15], in addition to the 7B and 13B models used for existing LMM. * **LLaVA-33B** We employ the open source Vicuna-33B checkpoint 1[16] to perform the two-stage training. The training data is around 125K conversations collected from ShareGPT.com. 
Footnote 1: [https://huggingface.co/lmsys/vicuna-33b-v1.3](https://huggingface.co/lmsys/vicuna-33b-v1.3) * **LLaVA-65B** Due to the lack of a public 65B Vicuna checkpoint, we conduct our own training of the Vicuna-65B model, utilizing ShareGPT data that we have independently processed. This data contains 159M tokens used during training. As a comparison, the reported number of tokens used in training Vicuna 33B is 370M 2. Footnote 2: [https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md) Once the instruction-tuned LLM is given, we follow [12] to perform the two-stage LLaVA lightning training: \((i)\)_Stage 1: Pre-training for Feature Alignment._ The linear projection layer is trained, which maps the visual feature (the features before the last layer of the pre-trained image encoder) to the word embedding space of the LLM. More specifically, the projection dimension is 1024\(\rightarrow\)6656 for the 33B model and 1024\(\rightarrow\)8192 for the 65B model, respectively. In this stage, we use the concept-balanced subset of LAION-CC-SBU data with 558K samples. \((ii)\)_Stage 2: Visual Instruction Tuning._ We use the LLaVA-80K multimodal instruct dataset for the fine-tuning stage. Various training schedules are explored to enable the model to follow the diverse instructions to complete tasks in the wild, as to be detailed below. Tuning Methods.We explore both the trainable modules and training data mixing for efficient and effective visual instruct tuning of large models. * **Trainable modules.** In addition to tuning the linear projection layer, two schemes are considered to tune the LLM: \((i)\) Full-model fine-tuning of LLM and \((ii)\) Parameter-efficient training methods. For the latter, LoRA [7] and QLoRA [4] are employed to allow us to tune large models with limited compute resources. This aims to gain an in-depth understanding of the trade-off between the training cost and model performance. * **Data mixing.** Typically only the multimodal instruction data is used in Stage-2. We further consider mixing the language-only instruct data ShareGPT with the LLaVA-80K multimodal instruction data to gain an in-depth understanding of the trade-off between models' language and multimodal capabilities. Hyper-parameters.In the training process of both stages, we utilize the DeepSpeed library 3 and employ the ZeRO3 optimizer, except for QLoRA runs, where we use ZeRO2. We use a maximum sequence length of 2048. For Stage 1, we train both the 33B and 65B models with a learning rate of \(1\times 10^{-4}\) and no weight decay, using a learning rate schedule with linear decay and linear warmup for 3% of the total training steps. For Stage 2, we use a learning rate of \(2\times 10^{-5}\) to train 1 epoch for all the models in full fine-tuning, and a learning rate of \(1\times 10^{-4}\) for the LoRA/QLoRA runs. We conducted a hyperparameter search for the LoRA runs, and found that a larger LoRA alpha, or equivalently a larger learning rate, was crucial to get the best performance. Specifically, we set LoRA alpha equal to 2 times the LoRA rank, and a learning rate of \(1\times 10^{-4}\), which works best for all the models. For full fine-tuning, we use a total batch size of 512 on 4 A100 nodes, where each of these nodes is equipped with 8 A100-80G GPUs. For LoRA/QLoRA runs, we use a total batch size of 64 on 1 A100 node for the 33B model and 2 nodes for the 65B model. 
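To make the reported LoRA settings concrete (alpha set to twice the rank, learning rate \(1\times 10^{-4}\) for LoRA/QLoRA), the snippet below is a minimal illustrative sketch using the Hugging Face `peft` library; it is not the released training code, and the base model name, dropout value, and target module list are assumptions made only for this example.

```python
# Illustrative sketch only: mirrors the reported hyper-parameter choices
# (LoRA alpha = 2 x rank, learning rate 1e-4 for LoRA/QLoRA runs).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

def build_lora_model(base_model_name: str, lora_rank: int = 64):
    model = AutoModelForCausalLM.from_pretrained(base_model_name)
    config = LoraConfig(
        r=lora_rank,
        lora_alpha=2 * lora_rank,  # alpha set to 2x the rank, as in the paper
        lora_dropout=0.05,         # assumed value, not stated in the paper
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
        task_type="CAUSAL_LM",
    )
    return get_peft_model(model, config)

# The optimizer for such a run would then use a learning rate of 1e-4,
# per the hyper-parameter description above.
```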
Footnote 3: [https://github.com/microsoft/DeepSpeed](https://github.com/microsoft/DeepSpeed) ## 3 Results We first compare our large checkpoints on two recent benchmarks which are specifically designed for LMM, then report our findings in the course of scaling up LLaVA models. ### Comparisons on Benchmarks LLaVA-Bench.LLaVA-Bench (In-the-Wild)4[12] is a diverse evaluation dataset consisting of 24 images with 60 questions in total, including indoor and outdoor scenes, memes, paintings, and sketches. Each image is paired with a manually-curated, detailed description and a set of properly-selected questions related to open-ended visual chat scenarios. Each question belongs to one of three types of tasks: conversations that contain simple visual recognition & QA questions, detailed descriptions that characterize the image with a long paragraph, and complex reasoning tasks that focus on deducing implications from an image. The language-only GPT-4 (gpt4-0314) is used to score the generated answers. The relative scores between the model output and gold response are reported. We compare LLaVA against the commercial visual chat systems including Microsoft BingChat5 and Google Bard6 on LLaVA-Bench [12]. Footnote 4: [https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_Bench.md](https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_Bench.md) Footnote 5: [https://www.bing.com/chat](https://www.bing.com/chat) Footnote 6: [https://bard.google.com/](https://bard.google.com/) \begin{table} \begin{tabular}{l|c c c|c} \hline \hline Models & Reasoning & Conversation & Detail & Overall \\ \hline Bard-0718 & 78.7 & 83.7 & 69.7 & 77.8 \\ Bing-Chat-0629 & 90.1 & 59.6 & 52.2 & 71.5 \\ \hline LLaVA-13B (beam=1) & 81.7 & 64.3 & 55.9 & 70.1 \\ LLaVA-13B (beam=5) & 84.3 & 68.4 & 59.9 & 73.5 \\ LLaVA-33B (beam=1) & 82.9 & 70.2 & 62.6 & 73.9 \\ LLaVA-33B (beam=5) & 83.5 & 72.6 & 61.9 & 74.8 \\ LLaVA-65B (beam=1) & 87.3 & 63.8 & 62.3 & 74.2 \\ LLaVA-65B (beam=5) & 88.7 & 59.4 & 65.7 & 74.4 \\ \hline \hline \end{tabular} \end{table} Table 1: The performance comparison on LLaVA-Bench. Beam search sizes at 1 and 5 are reported. 
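Since Table 1 reports _relative_ scores, a brief sketch of how such a score can be computed may be helpful. The function below is an illustrative example only, not the official LLaVA-Bench evaluation script (which uses GPT-4 as the judge and may aggregate per-question scores differently); the function name and the made-up numbers in the usage comment are hypothetical.

```python
# Illustrative sketch: a judge model assigns each answer a score, and the
# candidate model's score is reported relative to the gold answer's score,
# averaged over all questions and expressed as a percentage.
from statistics import mean

def relative_score(judge_scores_model, judge_scores_gold):
    """judge_scores_model / judge_scores_gold: per-question scores assigned by
    the judge model to the candidate answers and the reference answers."""
    ratios = [m / g for m, g in zip(judge_scores_model, judge_scores_gold) if g > 0]
    return 100.0 * mean(ratios)

# Hypothetical usage with made-up per-question scores:
# relative_score([7, 8, 6], [9, 9, 8])  -> approximately 80.6
```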
\begin{table} \begin{tabular}{l|c c c c c|c|c} \hline \hline Model & Rec & OCR & Knowledge & Generation & Spatial & Math & Total \\ \hline \multicolumn{7}{l}{_Results of various open-source LMM on reported in the MM-VET paper [19]_} \\ LLaMA-Adapter v2-7B [5] & 16.8 & 7.8 & 2.5 & 3.0 & 16.6 & 4.4 & 13.6\(\pm\)0.2 \\ OpenFlamingo-9B [1; 2] & 24.6 & 14.4 & 13.0 & 12.3 & 18.0 & 15.0 & 21.8\(\pm\)0.1 \\ MiniGPT-4-8B [20] & 27.4 & 15.0 & 12.8 & 13.9 & 20.3 & 7.7 & 22.1\(\pm\)0.1 \\ BLIP-2-12B [11] & 27.5 & 11.1 & 11.8 & 7.0 & 16.2 & 5.8 & 22.4\(\pm\)0.2 \\ LLaVA-7B [12] & 28.0 & 17.1 & 16.3 & 18.9 & 21.2 & 11.5 & 23.8\(\pm\)0.6 \\ MiniGPT-14B [20] & 29.9 & 16.1 & 20.4 & 22.1 & 22.2 & 3.8 & 24.4\(\pm\)0.4 \\ Otter-9B [8] & 28.4 & 16.4 & 19.4 & 20.7 & 19.3 & 15.0 & 24.6\(\pm\)0.2 \\ InstructBLIP-14B [3] & 30.8 & 16.0 & 9.8 & 9.0 & 21.1 & 10.5 & 25.6\(\pm\)0.3 \\ InstructBLIP-8B [3] & 32.4 & 14.6 & 16.5 & 18.2 & 18.6 & 7.7 & 26.2\(\pm\)0.2 \\ LLaVA-13B [12] & 30.9 & 20.1 & 23.5 & 26.4 & 24.3 & 7.7 & 26.4\(\pm\)0.1 \\ MM-React-GPT-3.5 [18] & 24.2 & 31.5 & 21.5 & 20.7 & 32.3 & 26.2 & 27.9\(\pm\)0.1 \\ LLaVA-7B (LLaMA-2) [12] & 32.9 & 20.1 & 19.0 & 20.1 & 25.7 & 5.2 & 28.1\(\pm\)0.4 \\ LLaVA-13B (V1.3, 336px) [12] & 38.1 & 22.3 & 25.2 & 25.8 & 31.3 & 11.2 & 32.5\(\pm\)0.1 \\ LLaVA-13B (LLaMA-2) [12] & 39.2 & 22.7 & 26.5 & 29.3 & 29.6 & 7.7 & 32.9\(\pm\)0.1 \\ MM-ReAct-GPT-4 [18] & 33.1 & 65.7 & 29.0 & 35.0 & 56.8 & 69.2 & 44.6\(\pm\)0.2 \\ \hline \multicolumn{7}{l}{_Results with our own experiment runs_} \\ \hline LLaVA-13B (LLaMA-2) & 38.4 & 21.0 & 26.3 & 28.8 & 28.0 & 7.7 & 32.6\(\pm\)0.1 \\ LLaVA-33B & 38.5 & 25.0 & 26.2 & 28.2 & 29.2 & 7.7 & 32.9\(\pm\)0.3 \\ LLaVA-33B (Data Mixing) & 37.7 & 27.1 & 26.2 & 28.6 & 28.1 & 11.5 & 34.1\(\pm\)0.3 \\ LLaVA-65B & 39.2 & 28.2 & 26.2 & 28.3 & 33.0 & 15.0 & 35.5\(\pm\)0.3 \\ LLaVA-65B (Data Mixing) & 41.8 & 27.9 & 30.4 & 32.3 & 30.5 & 7.3 & **36.4\(\pm\)0.2** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of various open-source LMM on MM-VET. Note that MM-ReAct is not an single multimodal model, it is a system built on chaining visual tools via GPT-3.5 or GPT-4, which we append as a reference. Our experiment run on LLaVA-13B (LLaMA-2) yields very similar score with the same checkpoint reported in MM-VET paper, indicating that our evaluation pipelines are consistent. The results are presented in Table 1. The 33B and 65B checkpoints outperform the 13B LLaVA model and Bing Chat. Despite the fact that LLaVA-Bench is small (thus the comparison might not be statistically significant), the results are encouraging: compared to large LMM, small open-sourced LMM are far more cost-effective to be deployed in real-world applications. With negligible increase of inference latency, we can significantly improve the performance for all model sizes by increasing the beam search size from 1 to 5. Our results show that larger LLaVA models generally exhibit better performance in tasks involving complex reasoning and generating detailed descriptions, which requires strong language competencies from larger LLM. In addition, larger LLaVA models obtain comparable results to BingChat in multi-turn, multi-modal conversation tasks that require strong image understanding capability. Mm-Vet.MM-VET [19] is designed based on the assumption that the intriguing capability of solving complicated tasks is often achieved by a generalist LMM which is able to integrate a varity of vision-language (VL) capabilities. 
MM-Vet contains 200 images and 218 questions (samples), aiming to evaluate 6 core VL capabilities (recognition, OCR, knowledge, language generation, spatial awareness, and math) and their combinations. For evaluation, an LLM-based evaluator (gpt4-0613) is used to score open-ended outputs of different forms. In Table 2, we report the results on MM-VET. The performance is consistently improved from 13B to 33B and 65B. The largest LLaVA model improves the SoTA performance among the end-to-end open-source LMM. The most significant improvements are observed when evaluating the capabilities of knowledge and generation, followed by recognition and OCR. The performance on spatial and math remains comparable. The result reveals that the improved LLM capability is instrumental in storing more knowledge in the weights and leading to a stronger language response capability. ### Scaling up LLaVA The experiments are conducted to answer three research questions. 1. **Which scaling factor matters?** We study the relative contribution of three scaling-up factors to the performance improvement of LLaVA. The results are summarized in Table 3 (a). * **Model size.** Increasing the model size consistently improves the overall performance. We conjecture that a larger data size is essential to train a larger model. For example, if we only train on LLaVA-80K data, we see a smaller gain when the model size becomes larger. * **Image resolution.** By fixing the CLIP ViT image encoder, we compare the variants that are pre-trained to take image resolutions of \(224\times 224\) and \(336\times 336\), and find that the higher resolution consistently yields a 2-3 point improvement across all four LLM sizes. * **Data mixing.** Larger models tend to have a higher capability of fitting the instruction data. By mixing the language-only instruction data (ShareGPT) with LLaVA-80K, we can improve model performance by 2 points, compared to training on multimodal instruction data only. In Table 3 (b), we present our result on MM-Bench [13], which contains a set of 2,974 questions, which evaluate models' reasoning skills across six categories. The combination of the three factors improves the baseline LLaVA 7B model, reported in [13]. 2. **When should the parameter-efficient training method be considered?** As model size increases, it becomes necessary to consider using tuning methods that are more efficient than full-model fine-tuning. LoRA and QLoRA are well-known parameter-efficient tuning methods. As shown in Table 4, we report compute cost using _GPU hours per node_, because this unit can be converted to cost using the price of $13.63/hour (ND A100 v4 series) on Azure 7. The total cost can be estimated by multiplying the #hours and #epochs. Footnote 7: [https://azure.microsoft.com/en-us/pricing/details/machine-learning/](https://azure.microsoft.com/en-us/pricing/details/machine-learning/) In Table 4(a), we train both the 33B and 65B models with LoRA rank 8 and 64 for 1 epoch on the LLaVA-80K instruction-tuning dataset. For models with 33B parameters and above, as we increase the LoRA rank values, we notice an increase in both performance and cost until full-model tuning reaches its maximum performance for a specific model size. In the case of the 13B model, we find that a rank of 64 can deliver comparable performance to full-model tuning. The cost is more related to the total number of parameters than the number of trainable parameters. 
The cost increase due to raising the LoRA rank for a given model size is significantly smaller than the cost increase from enlarging the model size. For example, increasing the LoRA rank from 8 to 64 nearly matches the performance of LoRA fine-tuning a 65B model with the same rank, but requires only 50% of the 65B model's training cost. In practice, we find that tuning the 33B model provides a good trade-off between cost and performance. Different LoRA variations have similar performance, and QLoRA requires lower GPU memory and running-time cost than LoRA. When large models (_e.g.,_ 65B) are trained with DeepSpeed ZeRO2 mode, they can fit into GPU memory with QLoRA, while LoRA yields out-of-memory (OOM) issues. In the experiments, we find that the hyperparameters of LoRA have a large impact on performance: \((i)\) A large learning rate and alpha value of LoRA improve the results significantly. For example, with the same rank of 64, reducing the learning rate to \(2\times 10^{-5}\) and alpha to 16 decreases the performance from 71.8 to 65.5 on LLaVA-Bench. \((ii)\) Under the same setting, larger ranks lead to little improvement; _e.g.,_ increasing the rank from 64 to 128 and 512 improves the score from 65.5 to 66.1 and 68.1, respectively. We also train LLaVA-70B based on the LLaMA-2-70B-Chat checkpoint [15], and find mixed results on multimodal and language abilities. Interestingly, we improve LLaMA-2-70B-Chat by 2.4 points on MMLU, yielding an overall MMLU score of 65.1, which is the best performance for the 70B model size according to [17] and the Chatbot Arena Leaderboard 8. To the best of our knowledge, this is the first reported result showing that visual instruction tuning improves the language ability of a large-scale LMM. Footnote 8: [https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard) ## 4 Conclusions and Limitations We present an empirical study of scaling the language model size for LMM. Our main findings are: \((i)\) Scaling LMM consistently enhances model performance, resulting in significant improvements in language capabilities, primarily due to the increased LLM model size. We leave it to future work how to scale the vision encoder to enhance the visual capabilities and improve model performance on vision recognition and understanding tasks. \((ii)\) Parameter-efficient methods such as LoRA/QLoRA are viable solutions for fine-tuning large-scale LLMs with a good performance-cost trade-off in some real-world settings with limited GPU memory. We observe that LoRA/QLoRA's performance is comparable to that of fine-tuning the full model, establishing their effectiveness through significant cost reduction in both model training and serving. \((iii)\) Our study of training data curation reveals that properly selecting image resolutions and mixing multimodal-language data for model training can significantly improve the performance of the resultant LMM. We also show for the first time that visual instruction tuning can improve an LMM's language capability. Note that the training datasets used in this study are small, so our findings are still preliminary. In future work, we will experiment with much larger datasets to investigate in detail whether and how different methods of training data selection and mixing can improve the quality of much larger LMM.
2309.07975
Smart Helper-Aided F-RANs: Improving Delay and Reducing Fronthaul Load
In traditional Fog-Radio Access Networks (F-RANs), enhanced remote radio heads (eRRHs) are connected to a macro base station (MBS) through fronthaul links. Deploying a massive number of eRRHs is not always feasible due to site constraints and the cost of fronthaul links. This paper introduces an innovative concept of using smart helpers (SHs) in F-RANs. These SHs do not require fronthaul links and listen to the nearby eRRHs' communications. Then, they smartly select and cache popular content. This capability enables SHs to serve users with frequent on-demand service requests potentially. As such, network operators have the flexibility to easily deploy SHs in various scenarios, such as dense urban areas and temporary public events, to expand their F-RANs and improve the quality of service (QoS). To study the performance of the proposed SH-aided F-RAN, we formulate an optimization problem of minimizing the average transmission delay that jointly optimizes cache resources and user scheduling. To tackle the formulated problem, we develop an innovative multi-stage algorithm that uses a reinforcement learning (RL) framework. Various performance measures, e.g., the average transmission delay, fronthaul load, and cache hit rate of the proposed SH-aided F-RAN are evaluated numerically and compared with those of traditional F-RANs.
Hesameddin Mokhtarzadeh, Mohammed S. Al-Abiad, Md Jahangir Hossain, Julian Cheng
2023-09-14T18:18:10Z
http://arxiv.org/abs/2309.07975v2
# Smart Helper-Aided F-RANs: ###### Abstract In traditional Fog-Radio Access Networks (F-RANs), enhanced remote radio heads (eRRHs) are connected to a macro base station (MBS) through fronthaul links. Deploying a massive number of eRRHs is not always feasible due to site constraints and the cost of fronthaul links. This paper introduces an innovative concept of using smart helpers (SHs) in F-RANs. These SHs do not require fronthaul links and listen to the nearby eRRHs' communications. Then, they smartly select and cache popular content. This capability enables SHs to serve users with frequent on-demand service requests potentially. As such, network operators have the flexibility to easily deploy SHs in various scenarios, such as dense urban areas and temporary public events, to expand their F-RANs and improve the quality of service (QoS). To study the performance of the proposed SH-aided F-RAN, we formulate an optimization problem of minimizing the average transmission delay that jointly optimizes cache resources and user scheduling. To tackle the formulated problem, we develop an innovative multi-stage algorithm that uses a reinforcement learning (RL) framework. Various performance measures, e.g., the average transmission delay, fronthaul load, and cache hit rate of the proposed SH-aided F-RAN are evaluated numerically and compared with those of traditional F-RANs. F-RANs, low latency communications, reinforcement learning, resource allocation, smart helpers. ## I Introduction In the digital era, there is a proliferation of connected devices and services that are to be supported by wireless networks [1, 2, 3]. As a result, there is a pressing need for wireless networks that can provide superior performances, such as ultra-reliable low latency communications (URRLC). To address this need, network densification by expanding the network access layer with more remote radio heads (RRHs) is shown to be a promising solution. Cloud-radio access networks (C-RANs) were proposed by China Mobile [4] as a potential network architecture that supports network densification. In C-RANs, the baseband units (BBUs) of the RRHs are centralized within the macro base station (MBS), also called the central unit (CU). The RRHs are then connected to the MBS via the fronthaul links [5]. The prime concern of centralized load within the CU is that it leads to notable transmission delay and heavy traffic on fronthaul links. More recently, Fog-RANs (F-RANs) have been introduced to address these issues [6]. In F-RANs, the conventional RRHs are replaced with enhanced RRHs (eRRHs) with more sophisticated signal processing capabilities. More importantly, eRRHs have caching capabilities. F-RANs offer two key benefits as mentioned as follows. First, network functionalities can be divided between eRRHs and the cloud based on the specific latency and reliability requirements. Furthermore, it enhances the efficiency and speed of content delivery as the eRRHs can serve the cached content to the users without frequently requesting these contents from the MBS [7, 8]. Consequently, caching capability helps to reduce the load on the fronthaul link by reducing the amount of data that needs to be retrieved from the MBS [9, 10]. Network densification, by incorporating more eRRHs, offers improved file caching and delivery capabilities. Nevertheless, this enhancement comes at the expense of significantly increased hardware and energy consumption. 
In addition, deploying a large number of eRRHs can be challenging in densely populated urban areas, as it may be constrained by site limitations and the scarcity of available space. Even if a massive deployment of eRRHs is possible, there remains a significant challenge in managing the additional fronthaul link setup. As an attempt to further reduce the traffic load on fronthaul links that the centralized-based F-RAN architectures may face, cache-enabled device-to-device (CE-D2D) communication systems have been proposed [11]. In CE-D2D communication systems, devices can cache popular content and assist in offloading eRRHs by transferring data between each other [12]. In particular, eRRHs and CE-D2D users transmit their cached content to interested users using cellular and D2D links, respectively. CE-D2D communications did not gain popularity for various reasons, including privacy and power consumption concerns of devices. Optimized cache resources play a crucial role in efficiently serving users directly at the network edge [13, 14, 15]. One trivial strategy is to predict the popularity of content requested by users, which helps in making decision in terms of placing various contents within the network [14]. However, despite significant advancements in the prediction of content popularity, the dynamic nature of wireless networks, user mobility, and data access behavior necessitate a joint optimization of edge caching and wireless resource allocation [16, 17, 18, 19]. Achieving such joint optimization often requires exhaustive search methods due to the interdependence between caching decisions and resource allocation, which are not feasible for implementations [20, 21]. To overcome the limitations of traditional optimization-based resource allocation algorithms, distributed machine learning (ML) algorithms have been pro posed as potential solutions [22]. For example, reinforcement learning (RL) frameworks have been extensively explored to optimize caching decisions in an online manner, without prior knowledge of resource allocation decisions. These RL frameworks learn the access behavior of users and the mobility of the network, enabling the joint optimization of caching decisions and resource allocation within a distributed F-RAN architecture. ### _Related Works_ A flourish of works has been done towards maximizing the overall throughput on F-RANs, thereby improving the end-to-end latency. Existing works mainly focused on either optimizing radio resources, such as user scheduling and channel coding [23, 24, 25, 26, 27, 28, 29], or integration of caching with radio resource optimization strategies [30, 31, 32, 33, 34, 35]. In what follows, we review the related works on radio resource optimization and joint caching and resource optimization. #### Ii-A1 Radio Resource Optimization In [23, 24], the authors proposed optimization strategies such as channel precoding, fronthaul compression, and superposition coding to reduce transmission latency and enhance the overall quality of service (QoS) in F-RANs. While these works have improved the performance of the network, they did not fully leverage the capabilities offered by the architecture of F-RANs. In addition, some research works have explored the benefits of cooperation between eRRHs in F-RANs. For example, the work in [25] proposed a distributed computing and content-sharing mechanism to enhance the file delivery process. 
Although their collaborative system improves the QoS, it can also result in higher traffic on the links connecting these eRRHs together. To address this issue, the works in [26, 27] explored the capabilities of D2D communication modes. They demonstrated that incorporating D2D communication in F-RANs has the potential to ease the transmission burden on the eRRHs and significantly decrease the bandwidth usage of both RANs and fronthaul links. However, D2D communication may pose unexamined challenges, such as security risks and increased power consumption for the users. In addition to enhancing the transmission mechanism, several research studies have focused on investigating the placement of eRRHs and addressing the challenges associated with the dynamic nature of mobile eRRH network topology [28, 29]. In [28], a study focused on maximizing throughput by dynamically determining optimal fog node locations using a clustering algorithm. Furthermore, an adaptive radio resource balancing scheme was proposed in [29] to enhance the QoS in a system with mobile eRRHs. However, their study did not address the challenges associated with deploying high-capacity fronthaul links between mobile eRRHs and the CU. #### Ii-A2 Joint Cache and Resource Optimization In [30], the authors performed an information-theoretic analysis to assess delivery latency relative to system interference. Their study shed light on the potential benefits of jointly optimizing caching and resource allocation on data delivery efficiency within this network architecture. Nevertheless, the interdependence between caching decisions and resource allocation makes such joint optimization a complex task that can only be optimally solved via exhaustive search solutions, which is impractical. In this regard, researchers have recently taken advantage of RL algorithms to tackle the challenges of such problems in F-RANs. For example, the authors of [33] presented an RL framework on double deep Q-network for caching optimization and power allocation incorporating personalized user request modelling. Similarly, in [31], a deep RL-based algorithm was developed for joint dynamic cache resource optimization and power allocation to minimize the total transmission cost. Additionally, in [32], an optimization problem was presented to minimize the weighted system cost for the uplink input data transmission, specifically to access the cached requested services of the fog access points. The authors adopted a two-timescale approach, combining a game-based resource allocation on a small timescale with a multi-agent RL (MARL) algorithm for caching decisions on a larger timescale. However, it should be noted that the problem of optimal user assignment was explicitly not addressed in these research works. In [34], a joint RL algorithm and game-theoretic approach for cache optimization and user assignment was proposed to maximize the sum data rate of the network, considering fixed transmission power for each fog node. In [35], a more comprehensive system model was introduced, incorporating D2D and eRRH communication modes. They proposed utilizing MARL and cross-layer network coding techniques to maximize the overall system sum rate. ### _Motivation and Contributions_ In all previous works, a fronthaul link between the eRRHs and the MBS was assumed. However, in emerging practical applications for dense networks, connecting the eRRHs to the MBS using fronthaul links may not be feasible due to hardware limitations and cost constraints. 
For example, regarding optical fibre eRRH-MBS links, installing multiple fibres may be financially unbearable for many practical applications or even hard due to harsh geographical terrains. On the other hand, wireless eRRHs-MBS links may cause error propagation losses at far distances since buildings or trees often act as link blockers. In view of this, we introduce the concept of using cost-effective elements, referred to as smart helpers (SHs), to extend the traditional F-RANs with no fronthaul connections between the SHs with the MBS. The proposed system is referred to SH-aided F-RAN system. In the SH-aided F-RAN, we consider that a few number of eRRHs are deployed and directly connected to the MBS using fronthaul links, while the SHs are not using fronthaul links to the MBS. Specifically, the SHs can efficiently listen to the communications between the eRRHs and the users in their vicinity without the need for fronthaul links. In conventional F-RANs, where only the eRRHs are deployed, the users' requests can be delivered from the eRRHs in a collaborative way. However, this is not enough in the proposed system since the SHs can cache some popular users' requested services. Hence, developing caching strategies for both eRRHs and SHs and prioritizing the association of users with them for an effective file delivery becomes quite challenging. To this end, our paper aims to fill this gap by introducing the concept of economical SHs while tackling the joint optimization of caching strategy at eRRHs and SHs and user-eRRH/SH priority association. In more detail, the contribution of the paper is summarized as follows: * We present the proposed SH-aided F-RAN system, where the SHs do not need fronthaul link connections to the MBS but can listen to the communications between nearby eRRHs and users to smartly cache popular content. Then, the SHs use their cached contents to potentially serve users with frequent on-demand service requests. * We develop a priority-based user assignment algorithm to improve the cache hit rate of our SH-aided F-RAN. We then develop a novel MARL algorithm, where popular file segments are treated as virtual agents that make binary caching decisions. The reward function for these virtual agents is designed to minimize the average transmission delay taking into account the limited caching capacity of eRRHs and SHs. We also thoroughly analyze the convergence rate and computational complexity of the proposed algorithm. * Numerical results of the proposed framework are presented in various scenarios and compared to several benchmark schemes of conventional F-RANs. Presented numerical results show that SHs can significantly alleviate the load on the fronthaul link and considerably reduce the average transmission delay. Interestingly, our findings also reveal that by tolerating a negligible amount of transmission delay, each eRRH can be replaced with a number of compact SHs. The rest of the paper is organized as follows. In Section II, we introduce the concept of SH-aided F-RAN. Section III formulates the problem of minimizing average content delivery delay within our proposed SH-aided F-RAN framework. In Section IV, we offer the solution by presenting algorithms for resource allocation and cache optimization. Section V presents simulation results to evaluate the performance of the proposed SH-aided FRAN and the designed algorithm. Finally, Section VI concludes the paper. ## II Proposed SH-Aided F-RAN ### _Overall Description_ Our proposed SH-aided F-RAN is shown in Fig. 
1, where there is a number of \(H\) SHs with \(\mathcal{H}=\{1,2,\cdots,H\}\) denoting the set of all SHs, \(S\) eRRHs with \(\mathcal{S}=\{1,2,\cdots,S\}\) denoting the set of all eRRHs, \(U\) users in total with \(\mathcal{U}=\{1,\ldots,U\}\) denoting the set of all users, and an MBS. We assume that all nodes have a single antenna for simplicity. Each SH/eRRH has a certain subset of users under their service area. We represent these subsets by \(\mathcal{U}_{h}^{\mathcal{H}}\subseteq\mathcal{U}\) for the users under the service area of the \(h\)th SH and \(\mathcal{U}_{S}^{s}\subseteq\mathcal{U}\) for those under the service area of the \(s\)th eRRH. eRRHs are connected to the MBS using fronthaul links with a limited capacity of \(C_{fh}\). They serve users by utilizing their cache or obtaining requested content of users from the MBS as needed. SHs are relatively cheaper transceivers that have caching capability with a certain amount of caching capacity. We assume that there is no fronthaul link between SHs and the MBS. They listen to the wireless communications between the eRRHs and users happening in their vicinity. Then, they smartly select and cache relevant data and efficiently serve users' requests based on the cached information. It is assumed that eRRHs and SHs refine their cache locally to store more on-demand content and serve users effectively. Furthermore, we assume that SHs are placed randomly, and SHs are allowed to decode and cache content. Optimal placement of SHs can lead to improved performance; however, their location optimization is out of the scope of this paper. Users are associated with various eRRHs/SHs based on the content availability on the cache of them by the MBS. The MBS communicates user scheduling decisions to eRRHs via the fronthaul links. Also, a low-rate wireless control channel is used to transmit the user scheduling decisions from the MBS to the SHs. We consider that the eRRHs and SHs use \(R\) orthogonal time/frequency radio resource blocks (RRBs) to serve users. The set of all RRBs is denoted by \(\mathcal{R}=\{1,2,\ldots,R\}\). In particular, we consider the F-RAN's resource settings in [5, 13]. The content delivery process of SH-aided FRAN is shown in Fig. 2. Whenever a user requests some content, the MBS associates the user with an SH caching the requested content. In cases where none of the nearby SHs have the requested content in their cache, the system turns to nearby eRRHs to fulfill the user's request. Since the eRRHs are also equipped with caching capabilities, they can directly serve the requested content from their cache to the user without requiring back-and-forth communication with the MBS. However, if none of the nearby SHs or eRRHs have the requested content in their caches, the user will be associated with one of the nearby eRRHs so the eRRH can obtain the content from the MBS and serve the user. Users are served with different power levels based on the availability of power in SHs and eRRHs, ensuring an efficient content delivery experience. Fig. 1: SH-aided F-RAN architecture. ### _Advantages and Application Scenarios_ The use of SHs in F-RANs offers numerous advantages for network operators. SHs are relatively inexpensive compared to eRRHs, and they can be readily deployed without fronthaul links and significant planning effort. As such, it offers a scalable and flexible solution to improve the performance of F-RANs. 
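As a compact restatement of the delivery process in Fig. 2, the server-selection logic can be sketched as follows; the class and function names are illustrative stand-ins and not part of the paper's system.

```python
# Content delivery decision of Fig. 2 (sketch): prefer SH caches, then eRRH caches, then the MBS.
from collections import namedtuple

Node = namedtuple("Node", ["name", "cache"])

def choose_server(request, nearby_shs, nearby_errhs):
    """Return (server, via_mbs) for a user's content request."""
    for sh in nearby_shs:
        if request in sh.cache:          # served locally by a smart helper
            return sh, False
    for errh in nearby_errhs:
        if request in errh.cache:        # served from an eRRH cache, no fronthaul fetch
            return errh, False
    # Cache miss everywhere: an eRRH fetches the content from the MBS over the fronthaul.
    return nearby_errhs[0], True

server, via_mbs = choose_server("f7", [Node("SH1", {"f1", "f7"})], [Node("eRRH1", {"f2"})])
print(server.name, via_mbs)  # -> SH1 False
```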
SHs effectively address concerns related to the privacy and power consumption issues associated with CE-D2D communications. For example, SHs can be deployed in dense and crowded urban areas. They can reduce network congestion by providing localized caching and enhancing user experience. Also, for public events and other events where people gather for a certain duration, SHs can be deployed easily to serve users from the cached content. In such scenarios, installation of additional eRRHs is time-consuming or may not be feasible. Moreover, in rural and remote areas with limited infrastructure, SHs can store frequently accessed content locally, reducing the load on the fronthaul link and improving network performance. In disaster and emergency response situations, SHs can cache critical information and operate as temporary network nodes, ensuring uninterrupted communication. ### _Operating Constraints_ Let \(\mathbf{A}^{\mathcal{S},(t)}\) and \(\mathbf{A}^{\mathcal{H},(t)}\) be respectively three-dimensional matrices representing the assignment of RRBs and users to eRRHs and SHs during time frame \(t\). The matrix \(\mathbf{A}^{\mathcal{S},(t)}\) has dimensions of \(U\times R\times S\), where \([\mathbf{A}^{\mathcal{S},(t)}]_{u,r,s}=a_{u,r,s}^{\mathcal{S},(t)}\) indicates if the \(u\)th user is assigned to the \(r\)th RRB and associated with the \(s\)th eRRH. Specifically, \(a_{u,r,s}^{\mathcal{S},(t)}=1\), if the \(r\)th RRB and the \(s\)th eRRH are assigned to the \(u\)th user, and \(a_{u,r,s}^{\mathcal{S},(t)}=0\) otherwise. Similarly, the matrix \(\mathbf{A}^{\mathcal{H},(t)}\) has a dimension of \(U\times R\times H\), where \([\mathbf{A}^{\mathcal{H},(t)}]_{u,r,h}=a_{u,r,h}^{\mathcal{H},(t)}\) represents whether the \(u\)th user is assigned to the \(r\)th RRB and the \(h\)th SH during time frame \(t\), with \(a_{u,r,h}^{\mathcal{H},(t)}=1\) if assigned and \(a_{u,r,h}^{\mathcal{H},(t)}=0\) otherwise. To avoid interference between users, each RRB is assigned to only one user. Also, due to the radio resource limitation and to ensure fairness among the users, each user is assigned to only one RRB. Hence, the following constraints should be satisfied \[\sum_{r}(\sum_{s}a_{u,r,s}^{\mathcal{S},(t)}+\sum_{h}a_{u,r,h}^{ \mathcal{H},(t)})\leq 1, \tag{1}\] \[\sum_{u}(\sum_{s}a_{u,r,s}^{\mathcal{S},(t)}+\sum_{h}a_{u,r,h}^{ \mathcal{H},(t)})\leq 1. \tag{2}\] Since the number of users is generally higher than the number of available RRBs, we consider serving a certain subset of users during each time frame. Particularly, given a total of \(R\) available RRBs in the network, a maximum of \(R\) users can be served from the eRRHs and the SHs in a given time frame. Let \(\mathbf{P}^{\mathcal{S},(t)}\) and \(\mathbf{P}^{\mathcal{H},(t)}\) be the power allocation matrices of the eRRHs and SHs during time frame \(t\). Specifically, \([\mathbf{P}^{\mathcal{S},(t)}]_{s,r}=p_{s,r}^{\mathcal{S},(t)}\) and \([\mathbf{P}^{\mathcal{H},(t)}]_{h,r}=p_{h,r}^{\mathcal{H},(t)}\) are the allocated powers to the \(r\)th RRB by the \(s\)th eRRH and the \(h\)th SH, respectively. eRRHs can transmit with the maximum allowed power on each RRB, denoted by \(\bar{P}_{\mathcal{S}}\). However, SHs are required to properly allocate their available transmit power, limited by \(\bar{P}_{\mathcal{H}}\), to the associated users. 
Hence, it is essential to consider the following constraints for power allocation \[\sum_{r}p_{h,r}^{\mathcal{H},(t)}\leq\bar{P}_{\mathcal{H}} \forall h=\{1,2,\ldots,H\}, \tag{3}\] \[p_{s,r}^{\mathcal{S},(t)}=\{0,\bar{P}_{\mathcal{S}}\} \forall s=\{1,2,\ldots,S\}. \tag{4}\] We consider that the requested content by the users is divided into popular and unpopular file segments. We indicate \(a_{u}^{\text{req}}\) as the request type parameter of the \(u\)th user, where \(a_{u}^{\text{req}}=1\) if the user requests a popular file segment, and \(a_{u}^{\text{req}}=0\) otherwise. Denote the set of all popular \(F\) file segments by \(\mathcal{F}=\{1,2,\cdots,F\}\), and without loss of generality, assume that each file segment has a size of \(B\) bytes. We assume that the popularity of file segments follows well-known Zipf distribution [32, 35]. According to the Zipf distribution, the popularity of file segment \(f\), denoted by \(z_{f}\), is given by \(z_{f}=\frac{1}{f^{\gamma}}/\sum_{i=1}^{F}\frac{1}{i^{\gamma}}\), where \(\gamma\) is the Zipf parameter and governs the skewness of popularity distribution, and \(\mathcal{Z}_{\mathcal{F}}=\{z_{1},z_{2},\ldots,z_{F}\}\) represents the popularity distribution of the file segments. We also assume that \(\mathcal{F}\) and the popularity of file segments do not change during a large number of time frames. Let \(\mathbf{C}^{\mathcal{S},(t)}\) and \(\mathbf{C}^{\mathcal{H},(t)}\) be two matrices respectively representing the caching status of popular file segments in the cache of the eRRHs and SHs during time frame \(t\). The dimension of \(\mathbf{C}^{\mathcal{S},(t)}\) is \(S\times F\), where \([\mathbf{C}^{\mathcal{S},(t)}]_{s,f}=c_{s,f}^{\mathcal{S},(t)}\) indicates whether the \(f\)th popular file segment is stored in the cache of the \(s\)th eRRH. Specifically, \(c_{s,f}^{\mathcal{S},(t)}=1\) if the segment is cached and \(c_{s,f}^{\mathcal{S},(t)}=0\) otherwise. Similarly, the matrix \(\mathbf{C}^{\mathcal{H},(t)}\) has a dimension of \(H\times F\), where \([\mathbf{C}^{\mathcal{H},(t)}]_{h,f}=c_{h,f}^{\mathcal{H},(t)}\) denotes whether the \(f\)th file segment is cached in the \(f\)th Fig. 2: Content delivery process. SH, with \(c_{h,f}^{\mathcal{H},(t)}=1\) if cached and \(c_{h,f}^{\mathcal{H},(t)}=0\) otherwise. We assume eRRHs and SHs can cache up to \(K_{\text{eRRH}}\) and \(K_{\text{SH}}\) file segments, respectively that lead to following constraints \[\sum_{f=1}^{F}c_{h,f}^{\mathcal{H},(t)}\leq K_{\text{SH}}\qquad\qquad\forall h =\{1,2,\ldots,H\}, \tag{5}\] \[\sum_{f=1}^{F}c_{s,f}^{\mathcal{S},(t)}\leq K_{\text{eRRH}}\qquad\qquad\forall s =\{1,2,\ldots,S\}. \tag{6}\] We use the matrices \(\mathbf{G}^{\mathcal{S},(t)}\) and \(\mathbf{G}^{\mathcal{H},(t)}\) to denote the channel gains at time frame \(t\) from users to eRRHs and SHs, respectively. Specifically, we define \([\mathbf{G}^{\mathcal{S},(t)}]_{u,r,s}=g_{u,r,s}^{\mathcal{S},(t)}\) as the channel gain from the \(s\)th eRRH to the \(u\)th user over the \(r\)th RRB, and \([\mathbf{G}^{\mathcal{H},(t)}]_{u,r,h}=g_{u,r,h}^{\mathcal{H},(t)}\) as the channel gain from the \(h\)th SH to the same user over the same RRB. We assume that the channel remains constant while transmitting a single uncoded or coded file segment but changes between successive transmissions. 
Thus, the achievable rate for serving the \(u\)th user is determined by \[R_{u}=\sum_{r}W\log(1+\Gamma_{u,r}^{(t)}) \tag{7}\] where \(W\) is the bandwidth of each RRB, and \(\Gamma_{u,r}^{(t)}\) is the SINR received by the \(u\)th user over the \(r\)th RRB through an additive white Gaussian noise (AWGN) channel during time frame \(t\), which is given by \[\Gamma_{u,r}^{(t)}=\frac{\sum_{h}a_{u,r,h}^{\mathcal{H},(t)}g_{u,r,h}^{ \mathcal{H},(t)}p_{h,r}^{\mathcal{H},(t)}+\sum_{s}a_{u,r,s}^{\mathcal{S},(t)}g _{u,r,s}^{\mathcal{S},(t)}p_{s,r}^{\mathcal{S},(t)}}{N_{0}} \tag{8}\] where \(N_{0}\) is the noise variance. ### _Notations_ Table I summarizes the common notations used throughout the paper. ## III Delay Minimization in SH-aided FRAN The transmission delay of serving the \(u\)th user with a popular file segment of a size \(B\) bytes from an SH, denoted by \(D_{u}^{\mathcal{H}}\), or an eRRH, denoted by \(D_{u}^{\mathcal{S}}\), is given by \(B/R_{u}\). When the requested content is unavailable in the SHs or eRRHs, the transmission delay comprises both the time it takes to fetch the content from the MBS and the time to transmit it to the user through the connected eRRH. Hence, the transmission delay for serving the user from the MBS is determined by \(D_{u}^{MBS}=B/R_{u}+B/C_{\text{th}}\). Accordingly, the average latency for the \(u\)th user to download its requested content can be expressed as \[D_{u}= a_{u}^{\text{req}}[\sum_{s,r}a_{u,r,s}^{\mathcal{S},(t)}c_{s,f}^{ \mathcal{S},(t)}D_{u}^{\mathcal{S}}+\sum_{h,r}a_{u,r,h}^{\mathcal{H},(t)}c_{h,f}^{\mathcal{H},(t)}D_{u}^{\mathcal{H}} \tag{9}\] \[+\sum_{s,r}a_{u,r,s}^{\mathcal{S},(t)}(1-c_{s,f}^{\mathcal{S},(t)} D_{u}^{MBS}]\] \[+(1-a_{u}^{\text{req}})D_{u}^{MBS}.\] Since the maximum number of served users is equal to the number of the available RRBs, the average file segments downloading delay of all the users can be obtained by \(D_{\text{ave}}=\frac{1}{R}\sum_{u=1}^{U}D_{u}\). Now, we formulate an optimization problem that minimizes average content delivery delay by considering user scheduling and cache optimization subject to the constraints (1)-(6) as follows. \[\begin{array}{ll}\min\limits_{\begin{subarray}{c}\mathbf{C}^{\mathcal{S},(t )},\mathbf{A}^{\mathcal{S},(t)},\mathbf{P}^{\mathcal{S},(t)}\\ \mathbf{C}^{\mathcal{H},(t)},\mathbf{A}^{\mathcal{H},(t)},\mathbf{P}^{\mathcal{ H},(t)}\end{subarray}}\sum_{u=1}^{U}D_{u}\\ \text{C1:}&\sum_{r}(\sum_{s}a_{u,r,s}^{\mathcal{S},(t)}+\sum_{h}a_{u,r,h}^{ \mathcal{H},(t)})\leq 1,\\ \text{C2:}&\sum_{u}(\sum_{s}a_{u,r,s}^{\mathcal{S},(t)}+\sum_{h}a_{u,r,h}^{ \mathcal{H},(t)})\leq 1,\\ \text{C3:}&\sum_{f}c_{h,f}^{\mathcal{H},(t)}\leq K_{\text{SH}},\\ \text{c4:}&\sum_{f}c_{s,f}^{\mathcal{S},(t)}\leq K_{\text{eRRH}},\\ \text{s.t.}&\text{C5:}&a_{u,r,s}^{\mathcal{S},(t)},a_{u,r,h}^{\mathcal{H},(t)} \in\{0,1\},\\ \text{C6:}&\sum_{r}\mathcal{P}_{h,r}^{\mathcal{H},(t)}\leq\bar{P}_{\mathcal{H}}, \\ \text{C7:}&p_{s,r}^{\mathcal{S},(t)}=\{0,\bar{P}_{\mathcal{S}}\},\\ \text{C8:}&\mathcal{E}_{s,f}^{\mathcal{S},(t)},c_{h,f}^{\mathcal{H},(t)}\in\{0,1\},\end{array} \tag{10}\] where constraints \(\text{C1}\) and \(\text{C2}\) ensure a one-to-one mapping between the users and the RRBs such that each user is assigned to only one RRB, and each RRB is assigned to only one user. Constraints \(\text{C3}\) and \(\text{C4}\) limit the caching capacity of the SHs and the eRRHs to \(\Gamma_{SH}\) and \(\Gamma_{eRRH}\) file segments, respectively. 
Constraint \(\text{C6}\) specifies a maximum available transmission power for the SHs, while constraint \(\text{C7}\) sets a limit on the maximum transmission power on each RRB in the eRRHs. Constraint \(\text{C8}\) and \(\text{C8}\) consider binary decisions for variables \(a_{u,r,s}^{\mathcal{S},(t)}\), \(a_{u,r,h}^{\mathcal{H},(t)}\), \(c_{s,f}^{\mathcal{S},(t)}\), and \(c_{h,f}^{\mathcal{H},(t)}\). The optimization variables are the cache matrices for both SHs and eRRHs, assignment vectors that indicate which user is assigned to which RRB in each eRRH and SH, and the power allocation vectors for both eRRHs and SHs. The derived optimization problem in (10) is a mixed-integer nonlinear program (MINLP). It is generally difficult to obtain the globally optimal solution of a MINLP. In fact, due to its inherent computational complexity resulting from \begin{table} \begin{tabular}{c l} \hline \(S\), \(H\) & Numbers of eRRHs and SHs \\ \(U\), \(R\) & Numbers of users and RRBs \\ \(c_{s,f}^{\mathcal{S},(t)}\), \(c_{h,f}^{\mathcal{H},(t)}\) & Availability of popular file segment \(f\) in the \(s\)th eRRH and the \(h\)th SH during time frame \(t\) \\ \(a_{u,r,s}^{\mathcal{S},(t)}\), \(a_{u,r,h}^{\mathcal{H},(t)}\) & Assignment of the \(r\)th RRB and the \(u\)th user to the \(s\)th eRRH and the \(h\)th SH during time frame \(t\) \\ \(p_{s,r}^{\mathcal{S},(t)}\), \(p_{h,r}^{\mathcal{H},(t)}\) & The allocated power of the \(s\)th eRRH and the \(h\)th SH to the \(r\)th RRB during time frame \(t\) \\ \(g_{u,r,s}^{\mathcal{S},(t)}\), \(g_{u,r,h}^{\mathcal{H},(t)}\) & Channel gain from the \(u\)th user to the \(s\)th eRRH and the \(h\)th SH on the \(r\)th RRB during time frame \(t\) \\ \(a_{f,s}^{\mathcal{S}}\), \(b_{f,h}^{\mathcal{H}}\) & Action of agent \(f\) in the \(s\)th eRRH and the \(h\)th SH at time \(t\) \\ \(u_{f,s}^{\mathcal{i},(t)}\), \(v_{f,h}^{\mathcal{i},(t)}\) & Estimated utility of the action \(i\) for the \(f\)th file segment in the \(s\)th eRRH and the \(h\)th SH at time \(t\) \\ \(\pi_{f,s}^{\mathcal{i},(t)}\), \(\eta_{f,h}^{\mathcal{i},(t)}\) & Preference of the \(s\)th eRRH and the \(h\)th SH to take an action \(i\) for the \(f\)th file segment at time \(t\) \\ \(K_{\text{eRRH}}\), \(K_{\text{SH}}\) & Maximum caching capacity of each eRRH and SH \\ \(\bar{P}_{\mathcal{S}}\) & Maximum allowed transmission power of eRRH \\ \(\bar{P}_{\mathcal{H}}\) & Available transmission power of each SH \\ \hline \end{tabular} \end{table} TABLE I: Summary of Notations the interdependence of variables \(c_{s,f}^{\mathcal{S},(t)}\), \(c_{h,f}^{\mathcal{H},(t)}\), \(a_{u,r,s}^{\mathcal{S},(t)}\), and \(a_{u,r,h}^{\mathcal{H},(t)}\), solving this problem is computationally intractable. To tackle this intractability, we propose to solve (10) iteratively. Specifically, the optimization problem is first divided into two subproblems: 1) user assignment and power allocation and 2) cache resource optimization. Then, each subproblem is separately optimized. In the next section, each subproblem and the corresponding solution will be discussed in detail. We also analyze their convergence and computational complexity. 
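Before moving on, the quantities entering the objective of (10) can be made concrete with a short sketch: the Zipf popularity profile, the achievable rate of (7), and the per-user delay terms of (9). All numerical values below, and the choice of \(\log_2\) in the rate, are illustrative assumptions rather than settings taken from the paper.

```python
# Building blocks of the delay objective (illustrative sketch).
import numpy as np

def zipf_popularity(F: int, gamma: float) -> np.ndarray:
    """Popularity z_f = (1/f^gamma) / sum_i (1/i^gamma) for f = 1..F."""
    ranks = np.arange(1, F + 1)
    weights = 1.0 / ranks**gamma
    return weights / weights.sum()

def achievable_rate(W: float, sinr_per_rrb: np.ndarray) -> float:
    """R_u = sum_r W * log2(1 + SINR_{u,r}) over the RRBs assigned to user u, as in (7)."""
    return float(np.sum(W * np.log2(1.0 + sinr_per_rrb)))

def user_delay(B: float, rate: float, C_fh: float, cache_hit: bool, popular: bool) -> float:
    """Per-user delay: B/R_u on a cache hit, B/R_u + B/C_fh when the MBS must be involved."""
    if popular and cache_hit:
        return B / rate              # served directly from an SH or eRRH cache
    return B / rate + B / C_fh       # fetched from the MBS over the fronthaul first

# Example: average delay over the scheduled users with random, illustrative numbers.
rng = np.random.default_rng(0)
z = zipf_popularity(F=100, gamma=0.8)
delays = [user_delay(B=6.25e6 * 8,
                     rate=achievable_rate(1e6, rng.exponential(5.0, size=1)),
                     C_fh=1e8,
                     cache_hit=rng.random() < 0.5,
                     popular=rng.random() < 0.3)
          for _ in range(50)]
print(f"average delay over scheduled users: {np.mean(delays):.3f} s")
```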
## IV Proposed solution: Resource Allocation and Cache Optimization ### _User Assignment and Power Allocation_ In this subsection, we effectively schedule users to eRRHs and SHs to minimize the average transmission delay through the following subproblem \[\min_{\mathbf{A}^{\mathcal{H},(t)},\mathbf{A}^{\mathcal{S},(t)},\mathbf{P}^{ \mathcal{H},(t)},\mathbf{P}^{\mathcal{S},(t)}}\quad\sum_{u=1}^{U}D_{u},\quad \text{s.t. C1, C2, C5-C8}. \tag{11}\] We construct a conflict graph that includes all the possible user assignment states to consider constraints C1 and C2. Then, we present an algorithm to associate the users with SHs and eRRHs by finding the graph's minimum weight independent set (MWIS). Additionally, this algorithm optimizes the allocation of transmission power between users. The obtained user association and power allocation solution minimizes the average transmission delay of the network. Consider \(\mathcal{G}=(\mathcal{V}^{\mathcal{H}},\mathcal{V}^{\mathcal{S}},\mathcal{E}, \mathcal{W})\) as a weighted undirected conflict graph, for which \(\mathcal{V}^{\mathcal{H}}\) and \(\mathcal{V}^{\mathcal{S}}\) are the sets of vertices of the SHs and eRRHs, respectively. The graph has edges represented by \(\mathcal{E}\), and weights assigned to the vertices are given by \(\mathcal{W}\). Let \(v^{\mathcal{H}}=(h,u,r)\) represent the vertices belonging to \(\mathcal{V}^{\mathcal{H}}\), where each vertex comprises an SH \(h\in\mathcal{H}\), a user \(u\in\mathcal{U}^{\mathcal{H}}_{h}\), and an RRB \(r\in\mathcal{R}\). Similarly, \(v^{\mathcal{S}}=(u,r,s)\) represents vertices of \(\mathcal{V}^{\mathcal{S}}\), where each vertex includes an eRRH \(s\in\mathcal{S}\), a user \(u\in\mathcal{U}^{\mathcal{S}}_{s}\), and an RRB \(r\in\mathcal{R}\). Every two vertices have a connecting conflict edge if they include the same user or the same RRB. Hence, the vertices of \(\mathcal{V}^{\mathcal{H}}\) and \(\mathcal{V}^{\mathcal{S}}\) are also linked to each other if they contain the same user or RRB. For instance, consider two sample vertices: \(v^{\mathcal{H}}=(u,r,h)\) and \(v^{\mathcal{S}}=(u^{\prime},r^{\prime},s)\). These two vertices are connected to each other if either \(r=r^{\prime}\), \(u=u^{\prime}\), or both conditions are met. The weight of each vertex is the transmission delay of serving the associated user of that vertex by the corresponding RRB and SH/eRRH. We prioritize SHs for serving users. For users requesting a specific file segment not present in SHs, we designate eRRHs to serve to them. In this way, we reduce the average transmission delay by maximizing the utilization of SHs' cached resources to serve a larger number of users. We propose the following procedure for user scheduling. First, we construct the graph by generating all the possible vertices of the eRRHs and the SHs, calculating their corresponding weights, and making the above-mentioned conflict links. Second, we need to find the MWIS of the whole graph such that we assign the users first to the SHs, and the remaining users need to be assigned to the eRRHs. Since this process is intractable, we use a heuristics method that was used in [5] to find a suboptimal user scheduling solution. In this fashion, first, we find the vertex of the SHs with the minimum weight, assign the user of that vertex to its RRB and SH, and remove all of the vertices linked to that vertex. This process is repeated till there are no more SH vertices in the graph. 
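A compact sketch of this greedy step is given below; it is illustrative (the vertex weights and the conflict test are simplified stand-ins for the construction above), and the same routine is reused unchanged for the eRRH vertices described next.

```python
# Greedy minimum-weight selection over a conflict graph (illustrative sketch of Algorithm 1's core step).
from dataclasses import dataclass

@dataclass(frozen=True)
class Vertex:
    node: int      # SH or eRRH index
    user: int
    rrb: int
    weight: float  # transmission delay if this (node, user, RRB) triple is scheduled

def conflicts(a: Vertex, b: Vertex) -> bool:
    # Two vertices conflict if they reuse the same user or the same RRB.
    return a.user == b.user or a.rrb == b.rrb

def greedy_schedule(vertices: list[Vertex]) -> list[Vertex]:
    """Repeatedly pick the minimum-weight vertex and drop everything it conflicts with."""
    remaining = sorted(vertices, key=lambda v: v.weight)
    chosen: list[Vertex] = []
    while remaining:
        best = remaining.pop(0)                      # minimum-weight vertex
        chosen.append(best)
        remaining = [v for v in remaining if not conflicts(best, v)]
    return chosen

# Usage: run this on the SH vertices first (to prioritize SH caches), then on the eRRH
# vertices restricted to the users and RRBs that are still unassigned.
```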
Then, the same procedure will be applied to the remaining vertices of \(\mathcal{V}^{\mathcal{S}}\), so that all of the RRBs are assigned to the users. Third, to allocate the power resources among the eRRHs effectively, since each RRB is assigned to only one user and the RRBs are orthogonal, we grant \(\widehat{P}_{\mathcal{S}}\) to the RRBs used by an eRRH. However, due to the limited amount of power in each SH during a time slot, a Water-Filling (WF) algorithm is utilized at each SH to allocate the power between users properly. Such a power allocation procedure takes into account the number of users served by each SH and their respective power requirements to ensure that each user receives an appropriate amount of power. This ensures that the power is utilized effectively and that each user is able to receive its requested file segment with a possible minimum delay. To summarize, the MBS takes the following three steps to schedule the users: (i) graph construction, (ii) user association, and (iii) power allocation, which are presented in Algorithm 1. ### _Cache Optimization_ We consider the following subproblem to optimize the cache state of the eRRHs and SHs \[\min_{\mathbf{C}^{\mathcal{S},(t)},\mathbf{C}^{\mathcal{H},(t)}} \quad\sum_{u=1}^{U}D_{u},\quad\text{s.t.~{}C3, C4, C8}. \tag{12}\] For each eRRH and SH, there exist permutations of \(\Gamma_{eRRH}\) and \(\Gamma_{SH}\), respectively, out of a total of \(F\) possible states. This results in a number of potential solutions for the problem, specifically \([P(F,\Gamma_{eRRH})]^{S}\times[P(F,\Gamma_{SH})]^{H}\), where \(P(n,r)=n!/(n-r)!\) represents the permutation of selecting \(r\) options out of \(n\) possible states. Given the considerable computational complexity involved, we develop two separate RL algorithms to update the cache resources of eRRHs and the SHs. #### Iii-B1 Cache Updates in the eRRHs eRRHs update their cache resources by fetching their desired file segments from the MBS. The more file segments saved in each eRRH, the more the chance of serving users from the cache of their assigned eRRH. However, the cache capacity of each eRRH is limited by \(K_{\text{eRRH}}\). In this regard, we develop an actor-critic MARL algorithm to optimize the cache resources of the eRRHs. Using the actor-critic algorithm, agents learn both a policy function and a value function referred to as the actor and critic, respectively. During each iteration, the critic evaluates and updates the actor's policy function by predicting the future rewards. Following this, the actor takes an action based on its updated policy function and receives a reward. The critic then updates its value function based on the received reward and the prediction error [36]. In our proposed algorithm, \(F\) virtual agents are created in each eRRH, with each agent corresponding to one of the popular file segments. These virtual agents make binary decisions based on each popular file segment's learned caching policies. Since maximizing the cache hit rate at the eRRHs leads to minimizing the average transmission delay of serving the users, the rewards of the virtual agents are considered to be related to their cache hit rate. Besides, the virtual agents should avoid caching the file segments being requested less because of the limited caching capacity. 
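The mechanics of one such virtual agent, namely a softmax policy over the binary caching action together with utility (critic) and policy (actor) updates in the style of (14)-(16) below, can be previewed with the following sketch; the reward used here is a placeholder for the exact definition given next in (13), and all learning rates are illustrative.

```python
# One virtual caching agent (sketch): softmax policy over {do not cache, cache},
# with a critic-style utility update and an actor-style policy update.
import numpy as np

class VirtualCacheAgent:
    def __init__(self, lambda_p=2.0, alpha_u=0.05, alpha_pi=0.01):
        self.u = np.zeros(2)        # estimated utility of actions 0/1
        self.pi = np.full(2, 0.5)   # action-selection probabilities
        self.lambda_p = lambda_p    # exploration-exploitation factor
        self.alpha_u = alpha_u      # utility learning rate
        self.alpha_pi = alpha_pi    # policy learning rate (kept slower than alpha_u)

    def act(self, rng: np.random.Generator) -> int:
        return int(rng.random() < self.pi[1])   # sample the binary caching decision

    def update(self, action: int, reward: float) -> None:
        # Critic: move the utility of the taken action toward the observed reward.
        self.u[action] += self.alpha_u * (reward - self.u[action])
        # Actor: move the policy toward the softmax of the current utilities.
        soft = np.exp(self.lambda_p * self.u)
        soft /= soft.sum()
        self.pi += self.alpha_pi * (soft - self.pi)

rng = np.random.default_rng(0)
agent = VirtualCacheAgent()
a = agent.act(rng)
agent.update(a, reward=1.0 if a == 1 else -1.0)   # placeholder reward until (13)
```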
Accordingly, we consider the reward of virtual agent \(f\) of eRRH \(s\) as follows \[\mu^{(t)}_{f,s}=\alpha_{\mu}[(1-2c_{s,f}^{\mathcal{S},(t)})+c_{l} \ l(f,s)] \tag{13}\] where \((1-2c_{s,f}^{\mathcal{S},(t)})\) is the cost of caching the file segment \(f\), \(c_{l}\) is the impact of the cache hit status, and \(l(f,s)\) is the legitimacy of caching the file segment \(f\) and considered as follows. If the file segment \(f\) is requested by a user assigned to the \(s\)th eRRH, \(l(f,s)=-1\) in case of cache hit failure and \(l(f,s)=+1\) if a cache hit success happens, otherwise \(l(f,s)=0\). Denote \(b_{f,s}^{\mathcal{S},(t)}\) as the action of the \(f\)th virtual agent in the \(s\)th eRRH at time frame \(t\), and parameterize its policy function as an exponential softmax distribution \[\beta^{(t),i}_{f,s}(u^{i,(t)}_{f,s})=\frac{\exp(\lambda_{p}u^{i,(t)}_{f,s})}{ \exp(\lambda_{p}u^{(t),0}_{f,s})+\exp(\lambda_{p}u^{(t),1}_{f,s})} \tag{14}\] where \(u^{i,(t)}_{f,s}\) is the estimated utility of action \(i\) at iteration \(t\) and \(\lambda_{p}\) is the exploration-exploitation factor. During each iteration step, every eRRH takes a random action generated from \(\pi^{i,(t)}_{f,s}\) to cache \(K_{\text{eRRH}}\) files and obtains its own reward \(\mu^{(t)}_{f,s}\). Obtaining the reward, the policy and the utility functions are updated as follows \[u^{(t+1),i}_{f,s}=u^{i,(t)}_{f,s}+\alpha_{u}\left(\mu^{(t)}_{f,s }-u^{(t+1),i}_{f,s}\right)\mathcal{I}\{b_{f,s}^{\mathcal{S},(t)}=i\} \tag{15}\] \[\pi^{(t+1),i}_{f,s}=\pi^{i,(t)}_{f,s}+\alpha_{\pi}\left[\beta^{ (t),i}_{f,s}(u^{i,(t)}_{f,s})-\pi^{i,(t)}_{f,s}\right]. \tag{16}\] The learning rates \(\alpha_{u}\) and \(\alpha_{\pi}\) represent the rates at which the utility and policy functions are updated, respectively. To ensure the convergence of the algorithm, these learning rates must satisfy specific conditions [34]: \[\sum_{t\geq 1}\alpha_{u}=+\infty,\sum_{t\geq 1}\alpha_{u}^{2}\leq+\infty, \tag{17}\] \[\sum_{t\geq 1}\alpha_{\pi}=+\infty,\sum_{t\geq 1}\alpha_{\pi}^{2} \leq+\infty,\] (18) \[\lim_{t\rightarrow+\infty}\frac{\alpha_{\pi}}{\alpha_{u}}=0. \tag{19}\] #### Iii-B2 Cache Updates in the SHs Similarly, \(F\) virtual agents are created in each SH, where each agent is dedicated to a specific popular file segment. In contrast to the virtual agents of eRRHs, these agents make binary caching decisions by considering the file segments received during the previous time slot or those already stored in the SH's cache. However, the critic operates the same as the virtual agents in eRRHs. Denote the action of the \(f\)th virtual agent in the \(h\)th SH and its reward signal at iteration \(t\) as \(b_{f,s}^{\mathcal{H},(t)}\) and \(\nu^{(t)}_{f,h}\), respectively. The reward signal of SH virtual agents is considered as follows \[\nu^{(t)}_{f,h}=\alpha_{\mu}\alpha_{\mu}[(1-2c_{h,f}^{\mathcal{H},(t)})+c_{l} \ l(h,s)]. 
\tag{20}\] Then \(\nu^{i,(t)}_{f,h}\) and \(\eta^{j,(t)}_{f,s}\) as the utility and policy functions of the virtual agent are respectively updated based on the decision of the actor as follows \[v^{(t+1),i}_{f,h}=v^{i,(t)}_{f,h}+\alpha_{v}\left(\nu^{(t)}_{f,s }-v^{(t+1),i}_{f,s}\right)\mathcal{I}\{b_{f,s}^{\mathcal{H},(t)}=i\}, \tag{21}\] \[\eta^{(t+1),i}_{f,s}=\eta^{(t),i}_{f,h}+\alpha_{\eta}\left[\beta^ {(t),i}_{f,h}(v^{i,(t)}_{f,s}-\eta^{i,(t)}_{f,s}\right], \tag{22}\] where \(\alpha_{v}\) and \(\alpha_{\eta}\) are the learning rates of the utility function and the policy which should satisfy the conditions (17)-(19) for the convergence of the algorithm. ### _Summary Of The Overall Solution_ The algorithm proposed to minimize the average transmission delay can be outlined as follows. At each iteration of the algorithm, the initial step involves updating the cache resources of the eRRHs, which generates a decision \(b_{f,s}^{\mathcal{S},(t)}\). This decision is created based on the policy \(\pi^{i,(t)}_{f,s}\), and it aims to allocate popular file segments to the eRRHs' cache resources. The SHs use their policy, which takes into account the latest \(\eta^{i,(t)}_{f,s}\) and previous received communications, to generate a decision \(b_{f,h}^{\mathcal{H},(t)}\). The decision is based on whether the file segments have already been cached or were just received in the previous step. After updating the cache of both eRRHs and SHs, we employ Algorithm 1 to allocate the RRBs and users to the respective eRRHs and SHs. The eRRHs grant their assigned RRBs to \(\bar{P}_{S}\), while the SHs use the WF algorithm to distribute the available power among the assigned RRBs. Each agent receives a reward based on its cache hit factor, and their policies are updated using equations (15) and (21) for the next iteration step. The process of union user scheduling and caching of both the SHs and eRRHs is illustrated in Algorithm 2. ``` 1:Initialize \(t=1\) 2:Set \(u_{f,s}^{i,(t)}=v_{f,h}^{i,(t)}=0\) and \(\pi_{f,s}^{i,(t)}=\eta_{f,h}^{(t),i}=0.5\) for all agent 3:I. Cache Resource Update Stage 4:for\(f=1:F\)do 5:for\(s=1:S\)do 6: Generate an action \(b_{f,s}^{S,(t)}\sim\pi_{f,s}^{i,(t)}\) 7:endfor 8:for\(h=1:H\)do 9: Generate an action \(b_{f,h}^{\mathcal{H},(t)}\sim\eta_{f,h}^{(t),i}\) 10:endfor 11:endfor 12:Update the cache of the eRRHs and the SHs by \(b_{f,s}^{\mathcal{S},(t)}\) and \(b_{f,h}^{\mathcal{H},(t)}\) 13:2. User Assignment and Power Allocation Stage 14: Assign the users to the eRRHs and the SHs and allocate the power to them using Algorithm 1 15:3. Learning Update Stage (IV-B and IV-B) 16:Obtain the reward of each agent, \(\mu_{f,h}^{(t)}\) and \(\nu_{f,h}^{(t)}\) 17:Update \(u_{f,s}^{(t+1),i}\) and \(\pi_{f,s}^{(t+1),i}\) using (15) 18:Update \(v_{f,h}^{(t+1),i}\) and \(\eta_{f,h}^{(t+1),i}\) using (21) ``` **Algorithm 2**User Assignment and Caching Algorithm ### _Convergence and Complexity of the Algorithm_ #### Iv-D1 Convergence The process described in Algorithm 2 updates the probability of the agents of eRRHs and SHs selecting actions at each iteration. When the algorithm converges, the probabilities become stable and do not change much. This can be shown by the following theorem. **Theorem 1**: _The policies of binary action selection for each agent, i.e. 
\(\pi_{f,s}^{i,(t)}\) and \(\eta_{f,s}^{i,(t)}\) is converged as \(\lim_{t\rightarrow\infty}\pi_{f,s}^{i,(t)}\rightarrow\bar{\pi}_{f,s}^{i}\) and \(\lim_{t\rightarrow\infty}\eta_{f,h}^{i,(t)}\rightarrow\bar{\eta}_{f,h}^{i}\), \(\forall i\in\{1,2\},f\in\{1,2,\ldots,F\}\), \(s\in\{1,2,\ldots,S\}\), and \(h\in\{1,2,\ldots,H\}\). Here_ \[\bar{\pi}_{f,s}^{i}=\frac{\exp(\lambda_{p}\bar{\mu}_{f,s}^{i})}{\sum_{i=1}^{2} \exp(\lambda_{p}\bar{\mu}_{f,s}^{i})} \tag{23}\] \[\bar{\eta}_{f,h}^{i}=\frac{\exp(\lambda_{p}\bar{\nu}_{f,h}^{i})}{\sum_{i=1}^{2} \exp(\lambda_{p}\bar{\nu}_{f,h}^{i})} \tag{24}\] _where \(\bar{\mu}_{f,s}^{i}\) and \(\bar{\nu}_{f,h}^{i}\) are expected reward of taking action \(i\) by the \(f\)th agent of the SHRH and the hth SH, respectively._ For proof, please refer to the Appendix. #### Iv-D2 Complexity of Algorithm During each iteration, cache resource update stage requires to generate \(K_{\text{eRRH}}+K_{\text{SH}}\) random values from \(\pi_{f,s}^{i,(t)}\) and \(\eta_{f,h}^{(t),i}\). Using the Alias method [37], the computational complexity of the first stage at most is \(\mathcal{O}(F[S+H])\). Over the user assignment stage, Algorithm 1 at most finds the minimum utility value of \(R(S+H)U\) values for \(R\) times. So, the computation complexity of this stage is generally bounded by \(\mathcal{O}(UR^{2}(S+H))\). In the power allocation stage, the SHs allocate the available power to the assigned users and RRBs based on WF algorithm. Where the computational complexity of WF for \(N\) channels is generally in the range of \(\mathcal{O}(N\log N)\)[38]. The number of users that each SH can serve is bounded to a small number due to the limited available transmission power. So, the computational complexity order of this stage is approximately \(\mathcal{O}(H)\). As the learning update stage involves a fixed number of operations, i.e., \(2F(S+H)\) exponents, multiplications and additions, the computational complexity of the last stage in each learning step is \(\mathcal{O}(F(S+H))\). Therefore, the overall worst case computational complexity of Algorithm 2 for the all learning process approximately is \(\mathcal{O}(T[(UR^{2}+F)(S+H)])\). ## V Numerical Results and Analysis We numerically evaluate the performance of our developed scheme of the envisioned SH-aided F-RAN in terms of average transmission delay, average load on the fronthaul link, and cache hit rate with benchmark schemes proposed for conventional F-RANs. In particular, we consider two benchmark schemes: 1) random caching strategy and 2) caching strategy that caches the most probable file segments. Moreover, we quantify the performance gain that can be achieved using SHs in F-RANs. ### _Basic Simulation Settings_ We assume the eRRHs, SHs, and users are uniformly located within a circle of 1.5 km radius. The minimum distance between neighbouring eRRHs is set to 300 m. Similarly, the minimum distance between neighbouring SHs is set to 300 m. The MBS is assumed to have 50 orthogonal RRBs over each time frame to serve the users. It is assumed that there are 100 popular file segments, each with the size of \(6.25\) MBytes, where their popularity follows the Zipf distribution with different values of the Zipf parameter. Users are assumed to request a popular file segment with a probability of 0.3. Each of the eRRHs or SHs can store/cache up to 35 file segments. 
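For completeness, the water-filling rule that each SH uses to split its power budget over its assigned RRBs, whose \(\mathcal{O}(N\log N)\) per-SH cost appears in the complexity analysis above, can be sketched as follows; the gain-to-noise inputs in the example are illustrative.

```python
# Water-filling power allocation across an SH's assigned RRBs (illustrative sketch).
import numpy as np

def water_filling(gains_over_noise: np.ndarray, power_budget: float, iters: int = 100) -> np.ndarray:
    """Allocate p_i = max(mu - 1/g_i, 0) with sum(p_i) = power_budget, via bisection on the water level mu."""
    inv_g = 1.0 / gains_over_noise
    lo, hi = 0.0, power_budget + inv_g.max()
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - inv_g, 0.0)
        if p.sum() > power_budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv_g, 0.0)

# Example: one SH splitting its budget over three RRBs with unequal channel quality.
p = water_filling(np.array([2.0, 1.0, 0.25]), power_budget=1.5e-3)
print(p, p.sum())
```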
The channel gains between users and eRRHs, \(g_{u,s,h}^{\mathcal{G},(t)}\), and the channel gains between users and SHs, \(g_{u,r,h}^{\mathcal{H},(t)}\) are assumed to have a fading model used in the literature, e.g., [13, 35], that consists of 1) path-loss of \(128.1+37.6\log 10(dis.[km])\), 2) log-normal shadowing with \(4dB\) standard deviation, and 3) Rayleigh fading with zero-mean and unit variance. The additive white Gaussian (AWGN) noise power is assumed to be \(N_{0}=-174\ dBm\). The maximum allowed transmission power of each eRRH over an RRB is \(\bar{P}_{S}=4\times 10^{-4}\) mwatt. In addition, the maximum available power at each SH is assumed to be \(\bar{P}_{H}^{max}=15\times 10^{-4}\) mwatt that is allocated to different RRBs using the WF algorithm. The impact of the cache hit rate on the reward is considered to be \(c_{l}=2\). We evaluate each metric by averaging over 2000 Monte-Carlo simulation run. ### _Evaluation and Comparison_ Fig. 3 presents the policy update process of virtual agents associated with 100 popular file segments over 10,000 iterations to identify the convergence point. As seen, the caching policy of the agents associated with both the most and least probable file segments converges rapidly to 1 and 0, respectively, shortly after 1,000 iterations. However, the policy of the agents related to files with popularity orders around the caching capacity of 35 file segments exhibits continuous fluctuations until the end of the simulation. This fluctuation results from the competition among these agents to fill the remaining caching capacity, leading to a more indecisive learning process. Based on the results, we choose 1,000 iterations as the stopping point for our learning algorithm to balance learning efficiency and policy stability. By doing so, we ensure that the algorithm achieves an optimized caching policy without unnecessary computational overhead and extended training time. In Fig. 4, we further investigate the caching policy generated by the algorithm after 1,000 iterations considering various numbers of eRRHs in the system. Similarly, the results indicate that although a certain caching policy exists for the most and least popular file segments, the caching policy for file segments near the caching capacity may not be entirely deterministic. The optimized policy of the virtual agents of different eRRHs considers both the experienced request probability of each file segment and its ranking in terms of popularity among all file segments. In Fig. 5, we depict the cumulative rewards of virtual agents associated with eRRHs and SHs throughout 1,000 iterations. Fig. 4: Caching probabilities based on the policies obtained after 1,000 training steps. Fig. 5: The average cumulative reward of the agents of the eRRHs and the SHs over 1,000 training iterations. Fig. 3: Policy update process of the virtual agents associated with 100 popular file segments with \(\Gamma_{eRRH}=35\). Notably, the rewards are observed to increase initially and eventually achieve their maximum values before reaching the 1,000th iteration. This observation validates that 1,000 iterations are indeed appropriate to gain the peak rewards for both eRRHs and SHs. Furthermore, the figures illustrate that eRRHs outperform SHs in achieving higher cumulative rewards due to their ability to fetch their desired file segments based on their policy from the MBS, which grants them more opportunities to receive rewards. Fig. 
6 illustrates the efficiency of different caching schemes in terms of cache hit rate. In panel (a), we can see how the inclusion of SHs affects the cache hit rate. Increasing the number of SHs results in a higher cache hit rate, which in turn enables the serving of more users in a given time frame. This reduces the load on the fronthaul link and allows more users to be served. The figure indicates that including SHs in the network is particularly beneficial when there are a low number of eRRHs to serve the users. As the number of eRRHs increases, the eRRHs are able to serve more users who request popular file segments, thereby improving the chances of maximizing the cache hit rate. However, the inclusion of SHs still remains useful as it reduces the load on the eRRHs by serving some of the users, leading to a better quality of service and more users served in each time subframe. In panel (b), the performance of the proposed algorithm is compared to random and most probable file segments caching policies. The figure shows that the proposed algorithm outperforms the other schemes for all values of \(\gamma\) from 0 to 1 when the number of eRRHs is greater than 2. This indicates that the proposed algorithm is effective in improving the cache hit rate and hence, the overall performance of the caching strategy in the network. Fig. 7 presents the average transmission delay for different scenarios with varying numbers of eRRHs and SHs. Panel (a) shows the average transmission delay for four different num Fig. 6: The total cache hit rate of eRRHs and SHs in different scenarios with various numbers of the eRRHs and SHs. Fig. 7: Average transmission delay of different scenarios with varying number of eRRHs and SHs. bers of eRRHs and SHs. As more SHs are added, the average transmission delay decreases while serving the same or even more number of users. Additionally, increasing \(\gamma\) significantly reduces the average transmission delay. This is because having a wide range of probabilities for requesting popular files can help make better caching decisions, leading to better performance. The figure also shows that SHs can compensate for the lack of eRRHs in the network. Panel (b) compares the performance of the proposed algorithm with two different caching schemes. As seen in the figure, our designed algorithm outperforms the most probable and random caching schemes for all values of \(\gamma\) between 0 to 1 when the number of eRRHs is greater than two. Similar to the cache hit rate figures, utilizing SHs is more advantageous when the number of eRRHs is lower. Overall, the figure demonstrates the effectiveness of the proposed algorithm in reducing the average transmission delay and improving the overall performance of the caching strategy in the network. In Fig. 8, the impact of using SHs on the load of the fronthaul link is shown, along with a comparison of the proposed algorithm with two other caching schemes. It is evident from the plot that adding up to three SHs to the network reduces the load on the fronthaul link by around \(5\times 10^{8}\) bits/sec. Moreover, the proposed algorithm performs better than the most probable and random schemes for all \(\gamma\) values when the number of eRRHs is more than 2. This indicates that the proposed algorithm is effective in reducing the load on the fronthaul link, especially when the network is facing a high load and excessive average transmission delay. 
## VI Conclusion In this work, we proposed to expand the resource settings of F-RAN in a cost-effective manner by incorporating a set of SHs. The SHs act as compact caching elements that can cache popular file segments and alleviate the load on the fronthaul link, leading to reduced transmission delay. We developed a multistage user scheduling and caching algorithm to optimize the network in terms of average transmission delay and load on the fronthaul link. In particular, first we developed an MWIS search method to assign the users to the eRRHs/SHs while prioritizing the SHs to serve the users. Then for caching strategy at the eRRHs and SHs, we proposed two MARL algorithms to optimize the caches of eRRHs and SHs. Simulation results showed that the use of SHs significantly reduces the load on the fronthaul link and transmission delay, indicating their potential in designing wireless networks facing high loads on the fronthaul link and transmission delay. Moreover, it is shown that SHs can be substituted for some eRRHs to further dampen the load on the fronthaul link. ## Appendix Proof of Theorem 1 According to [39, Theorem 4], assuming \(\lim_{t\rightarrow\infty}\frac{\alpha_{n}}{\alpha_{\pi}}=0\), we can conclude that two conditions, \(\mathbf{c1:}\lim_{t\rightarrow\infty}|u_{f,s}^{i,(t)}-\bar{\mu}_{f,s}^{i}|=0\), \(\forall i,f,s\); and \(\mathbf{c2:}\) as \(t\rightarrow\infty\), are met. Then, the equation used to update the agent's action selection probability will converge to the following ordinary differential equation \[\dot{\pi}_{f,s}^{(t),i}=\beta_{f,s}^{(t),i}(u_{f,s}^{i,(t)})-\pi_{f,s}^{i,(t)}. \tag{25}\] Considering \(\lim_{t\rightarrow\infty}\frac{\alpha_{n}}{\alpha_{\pi}}=0\), the update of action selection probabilities occurs at a slower pace in comparison to the estimated utility. However, as per condition \(\mathbf{c1}\), the estimated utility will converge as time progresses. By substituting the converged estimated utility value into (25) and considering the fact that at the stationary point of (25), \(\dot{\pi}_{f,s}^{(t),i}=0\), we can determine the value of the stationary action selection probability, \[\bar{\pi}_{f,s}^{(t),i}=\beta(\bar{\mu}_{f,s}^{i}). \tag{26}\] We proceed to explain that (26) gives the resulting probability for selecting actions among the agents. Reference [40, eq. (2)] indicates that the most favourable action selection probability is achieved by maximizing the agents' expected reward along with the entropy function. Therefore, we derive (27) for each agent (\(s,f\)), \(f\in\{1,2,\ldots,F\}\) and \(s\in\{\{1,2,\ldots,S\}\), which is shown at the top of the next page. The value Fig. 8: Load on the fronthaul link for different schemes varying number of eRRHs and SHs. in (27) determines the relative importance of maximizing the expected reward and entropy functions. By using the Lagrangian optimization method, we can easily calculate the optimal values \[\bar{\pi}_{f,s}^{(t),i}=\frac{\exp(\lambda_{p}\bar{\mu}_{f,s}^{i})}{\sum_{i=1}^{2 }\exp(\lambda_{p}\bar{\mu}_{f,s}^{i})}\triangleq\beta(\bar{\mu}_{f,s}^{i}). \tag{28}\] It is apparent that when \(t\rightarrow\infty\), the likelihood of agents selecting an action becomes consistent and this consistent likelihood aligns with the ideal action selection probability for the agents. Hence, as the number of iterations increases, Algorithm 2 converges.
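The stationary policy in (28) is a Boltzmann (softmax) rule over the converged utility estimates. As a quick illustration (not part of the paper's simulation code; the utility values and the weight \(\lambda_{p}\) below are arbitrary), the following minimal sketch evaluates (28) for a two-action agent and checks numerically that it maximizes an entropy-regularized expected reward of the kind referred to in (27), here with the reward-plus-\((1/\lambda_{p})\)-weighted-entropy convention.

```python
import numpy as np

def stationary_policy(mu, lam):
    """Boltzmann (softmax) policy of Eq. (28): pi_i = exp(lam*mu_i) / sum_j exp(lam*mu_j)."""
    z = lam * np.asarray(mu, dtype=float)
    z -= z.max()                      # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def regularized_objective(pi, mu, lam):
    """Expected reward plus (1/lam)-weighted entropy (one standard convention for Eq. (27))."""
    pi = np.clip(pi, 1e-12, 1.0)
    return float(np.dot(pi, mu) + (1.0 / lam) * (-np.sum(pi * np.log(pi))))

# Example: converged utility estimates for the two actions (cache / do not cache).
mu_bar = [0.8, 0.3]        # arbitrary illustrative values
lam_p = 5.0                # relative weight of reward vs. entropy

pi_star = stationary_policy(mu_bar, lam_p)

# Compare against a grid of alternative two-action policies.
grid = np.linspace(1e-3, 1 - 1e-3, 999)
values = [regularized_objective(np.array([q, 1 - q]), mu_bar, lam_p) for q in grid]
best_on_grid = grid[int(np.argmax(values))]

print("stationary policy (28):", pi_star)
print("grid maximizer of reward + entropy:", best_on_grid)
```

The grid maximizer coincides (up to the grid resolution) with the first component of the softmax policy, illustrating the fixed point discussed above.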
2309.09794
Universal responses in nonmagnetic polar metals
We demonstrate that two phenomena, the kinetic magneto-electric effect and the non-linear Hall effect, are universal to polar metals, as a consequence of their coexisting and contraindicated polarization and metallicity. We show that measurement of the effects provides a complete characterization of the nature of the polar metal, in that the non-zero response components indicate the direction of the polar axis, and the coefficients change sign on polarization reversal and become zero in the non-polar phase. We illustrate our findings for the case of electron-doped PbTiO$_3$ using a combination of density functional theory and model Hamiltonian-based calculations. Our model Hamiltonian analysis provides crucial insight into the microscopic origin of the effects, showing that they originate from inversion-symmetry-breaking-induced inter-orbital hoppings, which cause an asymmetric charge density quantified by odd-parity charge multipoles. Our work both heightens the relevance of the kinetic magneto-electric and non-linear Hall effects, and broadens the platform for investigating and detecting odd-parity charge multipoles in metals.
Fabian Jäger, Nicola A. Spaldin, Sayantika Bhowal
2023-09-18T14:12:08Z
http://arxiv.org/abs/2309.09794v1
# Universal responses in nonmagnetic polar metals ###### Abstract We demonstrate that two phenomena, the kinetic magneto-electric effect and the non-linear Hall effect, are universal to polar metals, as a consequence of their coexisting and contraindicated polarization and metallicity. We show that measurement of the effects provides a complete characterization of the nature of the polar metal, in that the non-zero response components indicate the direction of the polar axis, and the coefficients change sign on polarization reversal and become zero in the non-polar phase. We illustrate our findings for the case of electron-doped PbTiO\({}_{3}\) using a combination of density functional theory and model Hamiltonian-based calculations. Our model Hamiltonian analysis provides crucial insight into the microscopic origin of the effects, showing that they originate from inversion-symmetry-breaking-induced inter-orbital hoppings, which cause an asymmetric charge density quantified by odd-parity charge multipoles. Our work both heightens the relevance of the kinetic magneto-electric and non-linear Hall effects, and broadens the platform for investigating and detecting odd-parity charge multipoles in metals. ## I Introduction The idea of combining electric polarization with metallicity, against the common belief that polarization is screened by itinerant carriers, was first conceived by Anderson and Blount more than fifty years ago [1]. It has come to reality, however, rather recently with the practical material realization of polar metals [2; 3; 4]. These have consequently opened up a new paradigm for investigating numerous intriguing physical effects that result from the coexistence of the seemingly mutually exclusive properties of polarity and metallicity [5; 6; 7]. In the present work, we point out two such effects, the kinetic magneto-electric effect (KME) and non-linear Hall effect (NHE), which are universal to all polar metals. While these effects have been sporadically investigated in some candidate polar metal systems [8; 9; 10], a consensus in applying these effects to characterizing polar LP metals is still missing. Here we show that both these effects carry simultaneously the key signatures of the polar metal phase, that is the direction of the polar axis, the switchability of the polarization, and the ferroelectric-like nonpolar to polar structural transition, and so provide a complete characterization of polar metals. Furthermore, we reveal the microscopic origin of these two effects by analyzing asymmetries in the charge density. While both effects are dominated by contributions from the electric dipole moment i.e., the first-order asymmetry in the charge density, the electric octupole moment, characterizing the third-order asymmetry in the charge density, also plays an important role. The kinetic magneto-electric effect is a linear effect, describing electric field (\(\mathcal{E}\)) induced magnetization \(\mathcal{M}_{j}=\mathcal{K}_{ij}\mathcal{E}_{i}\) in a nonmagnetic metal [11; 12; 13; 14; 15]. The resulting magnetization, in turn, gives rise to a transverse Hall current (\(J\)) as a second-order response to the applied electric field, \(J_{i}=\chi_{ijk}\mathcal{E}_{j}\mathcal{E}_{k}\), known as nonlinear Hall effect [16]. \(\mathcal{K}_{ij}\) and \(\chi_{ijk}\) are the KME response and non-linear Hall conductivity (NHC) tensor respectively with \(i,j,k\) indicating the Cartesian directions. 
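Concretely, the two responses act on an applied field through the contractions \(\mathcal{M}_{j}=\mathcal{K}_{ij}\mathcal{E}_{i}\) and \(J_{i}=\chi_{ijk}\mathcal{E}_{j}\mathcal{E}_{k}\). A minimal sketch of these index conventions is given below; the tensors are random placeholders, not computed response coefficients.

```python
import numpy as np

# Illustrative contraction conventions only.
K   = np.random.default_rng(0).normal(size=(3, 3))      # KME tensor K_ij
chi = np.random.default_rng(1).normal(size=(3, 3, 3))   # NHC tensor chi_ijk
E   = np.array([1.0, 0.0, 0.0])                         # applied electric field (arb. units)

M = np.einsum("ij,i->j", K, E)          # M_j = K_ij E_i   (field-induced magnetization)
J = np.einsum("ijk,j,k->i", chi, E, E)  # J_i = chi_ijk E_j E_k   (second-order Hall current)
print("M =", M)
print("J =", J)
```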
Within the relaxation-time approximation for the nonequilibrium electron distribution, both these responses can be elegantly recast in terms of the equilibrium reciprocal-space magnetic (spin plus orbital) moment \(\vec{m}(\vec{k})\) and the Berry curvature dipole (BCD) \(\mathcal{D}_{ij}\) respectively [13; 15; 16]: \[\mathcal{K}_{ij}=-\frac{e\tau}{\hbar}\sum_{n}\frac{1}{(2\pi)^{3} }\int d^{3}k\ m_{j}^{n}(\vec{k})\partial_{k_{i}}\epsilon_{k}^{n}\Big{(}\frac {\partial f_{0}}{\partial\epsilon_{k}^{n}}\Big{)}=\frac{e\tau}{\hbar}\tilde{ \mathcal{K}}_{ij}\] \[\chi_{ijk}=-\varepsilon_{ilk}\frac{e^{3}\tau}{2(1+i\omega\tau)} \mathcal{D}_{jl}\] \[=\varepsilon_{ilk}\frac{e^{3}\tau}{2(1+i\omega\tau)}\sum_{n}\frac{ 1}{(2\pi)^{3}}\int d^{3}k\ \Omega_{l}^{n}(\vec{k})\partial_{k_{j}}\epsilon_{k}\Big{(}\frac{ \partial f_{0}}{\partial\epsilon_{k}^{n}}\Big{)}. \tag{1}\] Here \(\tau\), \(\varepsilon_{adc}\), \(e,n\), \(f_{0}\) and \(\vec{\Omega}\) are respectively the relaxation time-constant, Levi-Civita symbol, the electronic charge, band index, equilibrium Fermi distribution function and Berry curvature. Both the reduced KME response \(\tilde{\mathcal{K}}_{ij}\) and the BCD are intrinsic properties of a material and are given by,[8; 9; 13; 15] \[\tilde{\mathcal{K}}_{ij} = -\sum_{n}\frac{1}{(2\pi)^{3}}\int d^{3}k\ m_{j}^{n}(\vec{k}) \partial_{k_{i}}\epsilon_{k}^{n}\Big{(}\frac{\partial f_{0}}{\partial\epsilon_ {k}^{n}}\Big{)} \tag{2}\] \[= \sum_{n}\frac{1}{(2\pi)^{3}}\int d^{3}k\ (\partial_{k_{i}}m_{j}^{n}) f_{0}\] and, \[\mathcal{D}_{ij} = -\sum_{n}\frac{1}{(2\pi)^{3}}\int d^{3}k\ \Omega_{j}^{n}(\vec{k}) \partial_{k_{i}}\epsilon_{k}\Big{(}\frac{\partial f_{0}}{\partial\epsilon_{k}^ {n}}\Big{)} \tag{3}\] \[= \sum_{n}\frac{1}{(2\pi)^{3}}\int d^{3}k\ (\partial_{k_{i}} \Omega_{j}^{n})f_{0}.\] Both \(\tilde{\mathcal{K}}_{ij}\) and \(\mathcal{D}_{ij}\) are allowed in nonmagnetic metals with gyrotropic point group symmetry [8; 9; 13; 15]. Since all polar point groups are gyrotropic [17; 18], both KME and NHE are allowed by symmetry in all polar metals. Interestingly, the components of the reciprocal-space magnetic moment \(\vec{m}(\vec{k})\), that contributes to the KME response, is determined by the direction of the electric polarization [19]. Similarly, the antisymmetric component of the BCD, \(\mathcal{D}^{-}=(\mathcal{D}-\mathcal{D}^{T})/2\), correlates with the orientation of the polar axis \(\vec{d}\), \(d_{i}\equiv\varepsilon_{ijk}\mathcal{D}_{jk}^{-}/2\)[16], suggesting a possible switching of both responses for a switchable orientation of the polar distortion. Furthermore, since both effects are forbidden by symmetry in an inversion symmetric structure, a structural transition from a centrosymmetric to a noncentrosymmetric polar structure can be inferred from the onset of these effects as the temperature is lowered. We illustrate these concepts by explicitly considering the case of electron-doped PbTiO\({}_{3}\) (PTO) as an example material. Undoped PTO is a prototypical conventional ferroelectric insulator [20]. Interestingly, even upon electron doping via replacing the Ti\({}^{4+}\) ions by Nb\({}^{5+}\) ions, the resulting PbTi\({}_{1-x}\)Nb\({}_{x}\)O\({}_{3}\) was observed to sustain the electric polarization up to \(x=0.12\), at which point the system also becomes conducting [21; 22]. 
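As a minimal illustration of how the Fermi-surface forms of Eqs. (2) and (3) are evaluated in practice, the sketch below discretizes the Brillouin-zone integrals on a uniform \(k\)-grid for a toy single-band dispersion. The \(m_{y}(\vec{k})\) and \(\Omega_{y}(\vec{k})\) textures, the smearing temperature, and the units are purely illustrative stand-ins for the Wannier-interpolated quantities used in the paper.

```python
import numpy as np

# Toy single-band version of the discretized Fermi-surface integrals in Eqs. (2)-(3).
nk = 60
k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
dk = k[1] - k[0]

eps = -2.0 * (np.cos(kx) + np.cos(ky)) - 1.0 * np.cos(kz)   # band energy (arb. units)
m_y = 0.10 * np.sin(kx)                                      # illustrative "orbital moment" texture m_y(k)
omega_y = 0.05 * np.sin(kx)                                  # illustrative Berry-curvature texture Omega_y(k)

mu, kT = -1.0, 0.05                                          # chemical potential and smearing
f0 = 0.5 * (1.0 - np.tanh((eps - mu) / (2.0 * kT)))          # Fermi function, overflow-free form
df0_deps = -f0 * (1.0 - f0) / kT                             # d f0 / d eps

deps_dkx = np.gradient(eps, dk, axis=0)                      # finite-difference band velocity

pref = dk**3 / (2.0 * np.pi) ** 3                            # (1/(2 pi)^3) d^3k
K_xy = -np.sum(m_y * deps_dkx * df0_deps) * pref             # Eq. (2): reduced KME component K~_xy
D_xy = -np.sum(omega_y * deps_dkx * df0_deps) * pref         # Eq. (3): Berry curvature dipole D_xy

print(f"reduced KME  K~_xy = {K_xy:.4e}")
print(f"BCD          D_xy  = {D_xy:.4e}")
```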
In the present work, using both first-principles density functional theory (DFT) and a model Hamiltonian-based approach we show that the presence as well as the orientation of the polar axis in the polar metal phase of doped PTO can be determined from the non-zero components of KME and the NHE. The remainder of this manuscript is organized as follows. We start by describing the computational details in section II. This is followed by the results and discussions in section III, where we present our computational results for doped PTO, describing the existence and tuning of KME and NHE, the momentum space distribution of the orbital moment and Berry curvature that determine these effects, their microscopic origin within the model Hamiltonian framework, and the role of odd-parity charge multipoles. Finally, we summarize our results in section IV and give a proposal for measuring these effects. ## II Computational Details The responses \(\tilde{\mathcal{K}}_{ij}\) and \(\mathcal{D}_{ij}\) are computed using the QUANTUM ESPRESSO[23] and Wannier90 codes [24; 25; 26]. We use fully relativistic norm-conserving pseudo-potentials for all the atoms with the following valence electron configurations: Pb (\(6s^{2}6p^{2}\)), Ti (\(4s^{2}3d^{2}\)), and O (\(2s^{2}2p^{4}\)). Self-consistency is achieved with a 12\(\times\) 12\(\times\)10 \(k\)-point mesh and a convergence threshold of 10\({}^{-7}\) Ry. The _ab-initio_ wave functions, thus obtained, are then projected to maximally localized Wannier functions [24; 25] using the Wannier90 code [26]. In the disentanglement process, as initial projections, we choose 42 Wannier functions per unit cell which include the \(s\) and \(p\) orbitals of Pb, \(d\) orbitals of Ti and \(s\) and \(p\) orbitals of O atoms, excluding the rest. After the disentanglement is achieved, the wannierisation process is converged to 10\({}^{-10}\) A\({}^{2}\). We then compute the \(k\)-space distribution of the orbital moment and the Berry curvature as well as the reduced KME response, \(\tilde{\mathcal{K}}_{ij}\) and the BCD \(\mathcal{D}_{ij}\) for a 150\(\times\) 150\(\times\)140 \(k\)-point mesh. To estimate the doped charge density, we also compute the densities of states (DOS) for the same \(k\)-point mesh. ## III Results and Discussion ### NHE and KME and their tuning in polar metals We start with the electronic structure of PTO, which crystallizes in the non-centrosymmetric tetragonal (\(P4mm\)) structure with the polar \(C_{4v}\) point group symmetry [20]. In tetragonal PTO, both Pb\({}^{2+}\) (\(6s^{2}\) lone pair) and Ti\({}^{4+}\) (\(3d^{0}\)) ions off-center with respect to the surrounding O\({}^{2-}\) ions, resulting in a net polarization along \(\hat{z}\), which is switchable to \(-\hat{z}\) using an external electric field. We refer to these two structures, schematically depicted in Fig. 1 (a), as \(+P\) and \(-P\) respectively. The electronic structure of the polar undoped PTO (corresponding to \(+P\)) is shown in Fig. 1 (b), depicting the insulating band structure in which the occupied O-\(p\) states and the fromally empty Ti-\(t_{2g}\) states form the valence band maximum (VBM) and conduction band minimum (CBM) respectively. The doping electrons in doped PTO occupy the CBM, leading to a metallic band structure within the rigid band approximation. 
In order to compute \(\tilde{\mathcal{K}}_{ij}\) and \(\mathcal{D}_{ij}\), we first project the computed _ab-initio_ wave functions onto maximally localized Wannier functions, and then disentangle the relevant bands (see section II for computational details) from the rest using the Wannier90 code [26]. As depicted in Fig. 1 (b), the wannierised bands agree well with the full DFT band structure. The central quantities \(\tilde{\mathcal{K}}_{ij}\) and \(\mathcal{D}_{ij}\) in determining the magnitudes of the KME and NHE are then computed using Eqs. (2) and (3) as implemented within the Wannier90 code [24; 25; 26]. The computed non-zero components of the reduced KME response, \(\tilde{\mathcal{K}}_{xy}\) (blue circle), \(\tilde{\mathcal{K}}_{yx}\) (red circle), and BCD \(\mathcal{D}_{xy}\) (blue circle) and \(\mathcal{D}_{yx}\) (red circle) are shown as a function of energy in Fig. 1 (c) and (d) for the \(+P\) structure. To determine whether the energy range used in the computation is experimentally achievable, we further compute the doped electron density by integrating the corresponding DOS and show the results in Fig. 1 (f). Note that the zero of the energy corresponds to the CBM for the undoped case. The vertical dashed line in Fig. 1 (f) indicates the maximum doped electron density up to which the polarity of the lattice persists in the experiments [21; 22], justifying the chosen energy range. We note from Fig. 1 (c) and (d) that \(\tilde{\mathcal{K}}_{xy}=-\tilde{\mathcal{K}}_{yx}\) and \(\mathcal{D}_{xy}=-\mathcal{D}_{yx}\), consistent with the \(C_{4v}\) point group symmetry. Here \(\tilde{\mathcal{K}}_{ij}\) has both spin and orbital contributions. In order to understand the relative contributions of the two, the individual spin and orbital contributions are also shown in Fig. 1 (e) for the absolute value of the antisymmetric component of the reduced KME response, \(\tilde{\mathcal{K}}_{xy}^{-}=\frac{1}{2}(\tilde{\mathcal{K}}_{xy}-\tilde{ \mathcal{K}}_{yx})\). This clearly shows that the orbital contribution dominates over the spin contribution. Such a current-induced orbital magnetization has also been reported for other systems with broken inversion symmetry [27; 28; 29; 12] and may have important implications in the field of orbitronics. To see the effect of the polarization direction, we reverse the direction of the displacement of the ions, leading to the \(-P\) structure (see Fig. 1 (a)). The corresponding computed \(\tilde{\mathcal{K}}_{xy}\) (blue dashed line), \(\tilde{\mathcal{K}}_{yx}\) (red dashed line), and \(\mathcal{D}_{xy}\) (blue dashed line), \(\mathcal{D}_{yx}\) (red dashed line) are shown in Figs. 1 (c) and (d) respectively. We note that in this case, all the computed quantities switch sign compared to the \(+P\) structure, while still maintaining the symmetry of the \(C_{4v}\) point group, as discussed above. We further artificially decrease the amount of the Ti displacement to see the effect of the magnitude of polarization. We refer to the corresponding structure as \(+P_{1}\). The computed absolute values of \(\tilde{\mathcal{K}}_{xy}^{-}\) and \(\mathcal{D}_{xy}^{-}=\frac{1}{2}(\mathcal{D}_{xy}-\mathcal{D}_{yx})\) for \(+P_{1}\) are depicted in Figs. 2 (a) and (b) respectively, together with the values for \(+P\). 
We find that both \(\tilde{\mathcal{K}}_{xy}^{-}\) and \(\mathcal{D}_{xy}^{-}\) have smaller magnitude for \(+P_{1}\) compared to \(+P\), suggesting that both effects not only depend on the direction of polarization but also depend on the magnitude of the polarization. It is important to point out here that polarization is not the only factor that contributes to the value of the responses. For example, both responses also depend on the details of the electronic structure (see Eqs. 2 and 3). As a result, the situation can be more complicated if there is a drastic change in the band structure with the change in electric polarization. Nevertheless, our analysis clearly shows that the polarization is an important factor and that both KME and NHE are tunable by changing the direction or magnitude of the electric polarization. Furthermore, to understand the dependence on the spin-orbit coupling (SOC), we also perform additional calculations with the SOC turned off in our computations. Comparisons of the computed \(\tilde{\mathcal{K}}_{xy}^{-}\) and \(\mathcal{D}_{xy}^{-}\) both in the absence and presence of SOC are shown in Figs. 3 (a) and (b). As seen from these figures, both \(\tilde{\mathcal{K}}_{xy}^{-}\) and Figure 1: (a) Schematic illustration of the crystal structure of PTO, showing the off-centering of the Ti atom leading to a polarization \(+P\) along \(\hat{z}\). The dashed circle indicates the displacement of the Ti atom in the opposite direction, switching the direction of the polarization (\(-P\)) indicated by the dashed arrow. (b) Comparison of the band structure of undoped PTO, computed within DFT (dashed line) and that obtained from Wannier90 (solid line), showing a good agreement between the two. (c) Computed reduced KME response components \(\tilde{\mathcal{K}}_{xy}\) (blue) and \(\tilde{\mathcal{K}}_{yx}\) (red) for the two directions of polarization \(+P\) (circle) and \(-P\) (dashed line), shown in (a), as a function of energy. (d) BCD components \(\mathcal{D}_{xy}\) (blue) and \(\mathcal{D}_{yx}\) (red) for the polarization directions, \(+P\) (circles) and \(-P\) (dashed line) as a function of energy. (e) Energy variation of the spin and orbital contributions to the absolute value of the antisymmetric component of the reduced KME response, \(\tilde{\mathcal{K}}_{xy}^{-}=\frac{1}{2}(\tilde{\mathcal{K}}_{xy}-\tilde{ \mathcal{K}}_{yx})\) for polarization \(+P\). (f) Energy variation of the doped electron densities for the two polarization directions, \(+P\) (circles) and \(-P\) (dashed line). The vertical black dashed line corresponds to the experimentally achieved maximum doped electron density, that maintains the polarity of the structure. The zero of energy in (b)-(f) refers to the CBM of undoped PTO. Figure 3: Comparison of the energy variation of the absolute value of (a) \(\tilde{\mathcal{K}}_{xy}^{-}\) and (b) \(\mathcal{D}_{xy}^{-}\) in the absence and presence of SOC. Figure 2: (a) Comparison of the energy variation of the absolute value of \(\tilde{\mathcal{K}}_{xy}^{-}\) for two different displacements of the Ti ion, \(P\) and \(P_{1}\), with the former being larger than the latter. (b) Comparison of the energy variation of the absolute value of the antisymmetric BCD component \(\mathcal{D}_{xy}^{-}=\frac{1}{2}(\mathcal{D}_{xy}-\mathcal{D}_{yx})\) for the same \(P\) and \(P_{1}\). \(\mathcal{D}^{-}_{xy}\) exist even without the SOC. This suggests that both effects occur due to the symmetry of the structure and the presence of SOC is not necessary. 
Indeed, in the absence of SOC, the KME response is driven by the orbital contribution. With the inclusion of SOC, the orbital degrees of freedom couple to the spin degrees of freedom, and consequently, it leads to additional current-induced spin magnetization in the system. The inclusion of SOC, therefore, increases the magnitudes of both effects. It is important to point out here that unlike these responses, the spin-splitting of the bands and the resulting unconventional magnetic Compton scattering [30] occur only in the presence of SOC. ### \(k\)-space distribution of orbital moment and Berry curvature To better understand the responses, we further compute the \(k\)-space distributions of the relevant orbital magnetic moment components \(m^{\rm orb}_{x}(\vec{k}),m^{\rm orb}_{y}(\vec{k})\) and Berry curvature components \(\Omega_{x}(\vec{k}),\Omega_{y}(\vec{k})\) in the \(k_{x}\)-\(k_{y}\) plane. Since the \(\tilde{\mathcal{K}}_{ij}\) response is dominated by the orbital contribution, here for simplicity we only consider the orbital magnetic moment distribution. The orbital magnetic moment is computed within the modern theory by evaluating the expectation value of the orbital magnetization operator \(\frac{-e}{2}(\vec{r}\times\vec{v})\) [31; 32; 33; 34] with \(-e<0\), as implemented in the Wannier90 code [35], \[\vec{m}^{n,\rm orb}(\vec{k})=\frac{e}{2\hbar}\text{Im}\langle\nabla_{k}u^{n}_{k}|\times[\mathcal{H}(\vec{k})-\epsilon^{n}_{k}]|\nabla_{k}u^{n}_{k}\rangle+\frac{e}{\hbar}\text{Im}\langle\nabla_{k}u^{n}_{k}|\times[\epsilon^{n}_{k}-\epsilon_{\rm F}]|\nabla_{k}u^{n}_{k}\rangle. \tag{4}\] Here, \(\epsilon^{n}_{k}\) and \(u^{n}_{k}\) are the energy eigenvalues and eigenfunctions of the Hamiltonian \(\mathcal{H}(\vec{k})\) obtained from Wannierization, and \(\epsilon_{\rm F}\) is the Fermi energy. We note that since the KME response is a Fermi surface property (see Eq. 1), the second term in Eq. (4) does not contribute to the KME response. This can be seen easily by recognizing that \(\frac{\partial f_{0}}{\partial\epsilon^{n}_{k}}=-\delta(\epsilon^{n}_{k}-\epsilon_{\rm F})\), which is non-zero only if \(\epsilon^{n}_{k}=\epsilon_{\rm F}\), in which case the second term in Eq. (4) vanishes. The \(k\)-space distribution of the Berry curvature is computed using the Kubo formula [36], \[\Omega^{n}_{k}(\vec{k})=-2\hbar^{2}\sum_{m\neq n}\text{Im}\frac{\langle u^{n}_{k}|v_{i}|u^{m}_{k}\rangle\langle u^{m}_{k}|v_{j}|u^{n}_{k}\rangle}{(\epsilon^{n}_{k}-\epsilon^{m}_{k})^{2}}, \tag{5}\] where \(\vec{v}=\frac{1}{\hbar}\frac{\partial\mathcal{H}}{\partial k}\) is the velocity operator and \((i,j,k)\) are cyclic permutations of the Cartesian directions \((x,y,z)\). Both \(\vec{m}^{\rm orb}(\vec{k})\) and \(\vec{\Omega}(\vec{k})\) follow the same symmetry relations: under spatial inversion (\(\mathcal{I}\)) symmetry both remain invariant, with \(\vec{m}^{\rm orb}(\vec{k})\xrightarrow{\mathcal{I}}\vec{m}^{\rm orb}(-\vec{k})\), whereas under time-reversal (\(\mathcal{T}\)) symmetry they switch sign, \(\vec{m}^{\rm orb}(\vec{k})\xrightarrow{\mathcal{T}}-\vec{m}^{\rm orb}(-\vec{k})\) (similarly for \(\vec{\Omega}(\vec{k})\)). Hence, for a non-zero \(\vec{m}^{\rm orb}(\vec{k})\) (\(\vec{\Omega}(\vec{k})\)), either of these two symmetries must be broken. In the present case, the broken \(\mathcal{I}\) symmetry leads to non-zero values of \(\vec{m}^{\rm orb}(\vec{k})\) and \(\vec{\Omega}(\vec{k})\). 
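A minimal numerical sketch of Eqs. (4) and (5) is given below: it obtains the velocity operators by central finite differences of a model Bloch Hamiltonian, diagonalizes it, and evaluates the Kubo-type sums for \(\Omega_{z}\) and for the first term of Eq. (4), which is the one that contributes to the KME response. The two-band Hamiltonian is a generic inversion-broken placeholder (not the PTO model), and \(\hbar=e=1\) is assumed.

```python
import numpy as np

def H_model(k):
    """Illustrative 2x2 Bloch Hamiltonian with broken inversion symmetry (placeholder only)."""
    kx, ky, kz = k
    d = np.array([0.3 * np.sin(kx), 0.3 * np.sin(ky),
                  0.8 + np.cos(kx) + np.cos(ky) + np.cos(kz)])
    sig = [np.array([[0, 1], [1, 0]], complex),
           np.array([[0, -1j], [1j, 0]], complex),
           np.array([[1, 0], [0, -1]], complex)]
    return sum(di * si for di, si in zip(d, sig))

def berry_curvature_and_orb_moment(k, band=0, dk=1e-4):
    """Omega_z via the Kubo formula (5) and m^orb_z from the first term of Eq. (4),
    written as sums over states; velocity operators v_i = dH/dk_i (hbar = e = 1)."""
    k = np.asarray(k, float)
    E, U = np.linalg.eigh(H_model(k))
    v = []
    for i in range(3):
        dkvec = np.zeros(3); dkvec[i] = dk
        v.append((H_model(k + dkvec) - H_model(k - dkvec)) / (2 * dk))
    n = band
    omega_z, m_z = 0.0, 0.0
    for m in range(len(E)):
        if m == n:
            continue
        vx_nm = U[:, n].conj() @ v[0] @ U[:, m]
        vy_mn = U[:, m].conj() @ v[1] @ U[:, n]
        num = np.imag(vx_nm * vy_mn)
        omega_z += -2.0 * num / (E[n] - E[m]) ** 2   # Kubo formula, Eq. (5)
        m_z += -1.0 * num / (E[n] - E[m])            # first term of Eq. (4), sum-over-states form
    return omega_z, m_z

print(berry_curvature_and_orb_moment([0.3, 0.2, 0.1]))
```

The same routine applied to the Wannier Hamiltonian on a dense \(k\)-mesh reproduces the kind of \(k\)-space maps discussed next.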
We plot our calculated \(\vec{m}^{\rm orb}(\vec{k})\) and \(\vec{\Omega}(\vec{k})\) in Fig. 4. Note that since \(\mathcal{T}\) symmetry is preserved, \(\vec{m}^{\rm orb}\) (\(\vec{\Omega}\)) at \(+\vec{k}\) has the opposite sign to that at \(-\vec{k}\), and as a result the sum of \(\vec{m}^{\rm orb}(\vec{k})\) (\(\vec{\Omega}(\vec{k})\)) over the occupied part of the Brillouin zone (BZ) is zero, consistent with the overall nonmagnetic behavior of PTO. The key features of the computed distributions in Figs. 4 (a)-(d) are the following. First of all, \(m^{\rm orb}_{x}\) (\(\Omega_{x}\)) is equal and opposite at \(\pm k_{y}\), while it has the same sign at \(\pm k_{x}\), consistent with the \(\sigma_{v}\) mirror symmetries (see Fig. 5 (a)) Figure 5: (a) Vertical (\(\sigma_{v}\)) and (b) diagonal (\(\sigma_{d}\)) mirror planes in PTO. that dictate: \[m_{x}^{\rm orb}(k_{x},k_{y},k_{z}) \xrightarrow{M_{100}}m_{x}^{\rm orb}(-k_{x},k_{y},k_{z})\] \[{\rm and} m_{x}^{\rm orb}(k_{x},k_{y},k_{z}) \xrightarrow{M_{101}}-m_{x}^{\rm orb}(k_{x},-k_{y},k_{z}). \tag{6}\] In contrast, \(m_{y}^{\rm orb}\) (\(\Omega_{y}\)) is equal and opposite at \(\pm k_{x}\), while having the same sign at \(\pm k_{y}\) due to the same \(\sigma_{v}\) symmetries, that is, \[m_{y}^{\rm orb}(k_{x},k_{y},k_{z}) \xrightarrow{M_{100}}-m_{y}^{\rm orb}(-k_{x},k_{y},k_{z})\] \[{\rm and} m_{y}^{\rm orb}(k_{x},k_{y},k_{z}) \xrightarrow{M_{101}}m_{y}^{\rm orb}(k_{x},-k_{y},k_{z}). \tag{7}\] Furthermore, the \(x\) and \(y\) components of \(\vec{m}^{\rm orb}(\vec{k})\) (\(\vec{\Omega}(\vec{k})\)) are related to each other by the mirror \(M_{1\bar{1}0}\) symmetry (see Fig. 5 (b)), viz., \(m_{x}^{\rm orb}(k_{x},k_{y},k_{z})\xrightarrow{M_{1\bar{1}0}}-m_{y}^{\rm orb} (k_{y},k_{x},k_{z})\). Moreover, since the velocity operator transforms as \((v_{x},v_{y},v_{z})\xrightarrow{M_{1\bar{1}0}}(v_{y},v_{x},v_{z})\) under the mirror \(M_{1\bar{1}0}\) symmetry, Eq. (2) [Eq. (3)] leads to the constraint \(\tilde{\mathcal{K}}_{xy}=-\tilde{\mathcal{K}}_{yx}\) [\(\mathcal{D}_{xy}=-\mathcal{D}_{yx}\)], in agreement with our results in Fig. 1 (c) [Fig. 1 (d)]. ### Microscopic origin: role of odd-parity charge multipoles _Model Hamiltonian-_ To understand the microscopic origin of these effects, we construct a minimal tight-binding (TB) model in the basis set of the Ti-\(t_{2g}\) orbitals, \(\{d_{xy},d_{yz},d_{xz}\}\). For small doping, the doped electrons occupy the bands around the \(\Gamma\) point of the BZ that correspond to the CBM for the undoped case, indicated by the black circles in Fig. 4. We, therefore, expand the TB model around the \(\Gamma\) point, and the resulting low energy model Hamiltonian is given by \[\mathcal{H}(\vec{k})=\mathcal{H}_{\rm inv}(\vec{k})+\mathcal{H}_{\rm BI}(\vec{ k}). 
\tag{8}\] Here \(\mathcal{H}_{\rm inv}\) is the inversion symmetric part of the Hamiltonian, and is given by, \[\mathcal{H}_{\rm inv}=\begin{pmatrix}h_{11}&h_{12}&h_{13}\\ h_{12}&h_{22}&h_{23}\\ h_{13}&h_{23}&h_{33}\end{pmatrix}, \tag{9}\] with the explicit analytical forms of the elements \(h_{ij}\) up to quadratic order in \(k\) given below: \[h_{11} = t_{\rm eff}^{1}-t_{\rm eff}^{2}(k_{x}^{2}+k_{y}^{2})a^{2}-t_{\rm eff }^{3}k_{z}^{2}c^{2}\] \[h_{22} = t_{\rm eff}^{4}-t_{\rm eff}^{5}k_{x}^{2}a^{2}-t_{\rm eff}^{6}k_{ y}^{2}a^{2}-t_{\rm eff}^{7}k_{z}^{2}c^{2}\] \[h_{33} = t_{\rm eff}^{4}-t_{\rm eff}^{6}k_{x}^{2}a^{2}-t_{\rm eff}^{5}k_{ y}^{2}a^{2}-t_{\rm eff}^{7}k_{z}^{2}c^{2}\] \[h_{12} = t_{\rm eff}^{8}k_{x}k_{z}ac\] \[h_{13} = t_{\rm eff}^{8}k_{y}k_{z}ac\] \[h_{23} = t_{\rm eff}^{9}k_{x}k_{y}a^{2}. \tag{10}\] Here \(a\) and \(c\) are the lattice constants for the tetragonal unit cell. Note that since \(\mathcal{H}_{\rm inv}\) is inversion symmetric, it contains only terms that are even in \(k\). The effective hopping parameters \(t_{\rm eff}^{i}\), \(i=1,9\) are linear combinations of the different effective \(t_{2g}\)-\(t_{2g}\) electronic hopping parameters that we extract using the N\({}^{\rm th}\) order muffin-tin orbital (NMTO) downfolding technique [37]. The computed parameters for one direction of polarization (\(+P\)) are listed in Table 1. We considered up to fourth nearest neighbor (NN) interactions. It is important to consider further neighbor interactions which are needed to capture the physics of the two effects of interest, as we discuss later. On the other hand, \(\mathcal{H}_{\rm BI}\) includes the hopping parameters that are induced by the broken \(\mathcal{I}\) symmetry. It can be written in terms of the components of the orbital angular momentum operator \(\hat{\vec{L}}\), \[\mathcal{H}_{\rm BI} = \frac{\alpha a}{\hbar}(k_{x}\hat{L}_{y}-k_{y}\hat{L}_{x})-\frac{ \alpha a^{3}}{6\hbar}(k_{x}^{3}\hat{L}_{y}-k_{y}^{3}\hat{L}_{x})\] \[- \frac{\beta ac^{2}}{\hbar}k_{z}^{2}(k_{x}\hat{L}_{y}-k_{y}\hat{L}_ {x})-\frac{\gamma a^{3}}{\hbar}k_{x}k_{y}(k_{y}\hat{L}_{y}-k_{x}\hat{L}_{x}).\] The parameters \(\alpha,\beta,\gamma\) are determined by the broken \(\mathcal{I}\)-symmetry-induced hopping parameters and have opposite signs for \(+P\) and \(-P\). In centrosymmetric PTO, \(\alpha,\beta,\gamma\) are zero so that \(\mathcal{H}=\mathcal{H}_{\rm inv}\). In addition, \(t_{\rm eff}^{8}=-2(t^{x}-t^{y})=0\) in the centrosymmetric structure, where \(t^{x}\) and \(t^{y}\) are the fourth nearest neighbor inter-orbital hopping integrals, which we discuss in detail later. The components of the orbital angular momentum operator in Eq. (III.1) in the \(t_{2g}\) orbital basis \(\{d_{xy},d_{yz},d_{xz}\}\) are given by, \[L_{x}^{(t_{2g})}= \hbar\begin{pmatrix}0&0&-i\\ 0&0&0\\ i&0&0\end{pmatrix},\quad L_{y}^{(t_{2g})}=\hbar\begin{pmatrix}0&i&0\\ -i&0&0\\ 0&0&0\end{pmatrix}, \tag{12}\] \[L_{z}^{(t_{2g})}=\hbar\begin{pmatrix}0&0&0\\ 0&0&i\\ 0&-i&0\end{pmatrix}.\] The advantage of writing \(\mathcal{H}_{\rm BI}\) in terms of the \(\hat{\vec{L}}\) operators is that we can readily identify the resulting orbital texture in momentum space. For example, the first term in Eq. (III.1), which is linear in \(\vec{k}\), depicts a toroidal arrangement of orbital magnetic moment in reciprocal space (see the inset of Fig. 6 (a)). Such a toroidal arrangement of the orbital moment in \(k\) space is also in agreement with our DFT results (see Fig. 
4 (a) and (b)) \begin{table} \begin{tabular}{c c c c c c c c c} \hline \(t_{\rm eff}^{1}\) & \(t_{\rm eff}^{2}\) & \(t_{\rm eff}^{3}\) & \(t_{\rm eff}^{4}\) & \(t_{\rm eff}^{5}\) & \(t_{\rm eff}^{6}\) & \(t_{\rm eff}^{7}\) & \(t_{\rm eff}^{8}\) & \(t_{\rm eff}^{9}\) \\ \hline \hline 4.95 & -2.26 & 0.4 & 10.09 & -0.19 & -0.97 & -1.59 & -0.48 & -0.28 \\ \hline \end{tabular} \end{table} Table 1: Effective hopping parameters (in units of \(10^{-2}\) Ry) in Eq. (III.1), derived from the computed TB hopping parameters and onsite energies for PTO using the NMTO downfolding technique. and the symmetry analysis presented in section III.2. We note that the first term in Eq. (11) has a form \(\sim(\vec{k}\times\vec{L})\), which is an orbital counterpart of the (spin) Rashba effect \(\sim(\vec{k}\times\vec{\sigma})\) and, hence, is often referred to as an orbital Rashba effect [38; 39]. In the presence of SOC, the orbital texture in the orbital Rashba effect couples to the spin, additionally leading to spin texture and the Rashba effect in PTO [40]. The Rashba spin-splitting \(\Delta\varepsilon_{s}(\vec{k})\) is antisymmetric in \(\vec{k}\), corresponding to \(p\)-wave symmetry, due to the presence of time-reversal symmetry, which means that \(\Delta\varepsilon_{s}(\vec{k})=\varepsilon_{\uparrow}(\vec{k})-\varepsilon_{ \downarrow}(\vec{k})=-\Delta\varepsilon_{s}(-\vec{k})\). Here, for simplicity, we do not include SOC in our model Hamiltonian in Eq. (8), since both KME and BCD exist even in its absence (See Fig. 3). _Role of odd-parity charge multipoles-_ Interestingly, each term of different order in \(\vec{k}\) in the Hamiltonian \(\mathcal{H}_{\text{BI}}\) of Eq. (11) has a direct correlation to a corresponding odd-parity charge multipole. Recently, we showed that the \(k\)-space orbital and spin textures in ferroelectrics result from the \(k\)-space magnetoelectric multipoles that are reciprocal to the real-space odd-parity charge multipoles [30]. The odd-parity charge multipoles characterize the asymmetries in the charge density that are present due to the broken \(\mathcal{I}\) symmetry. For example, the electric dipole dictates the first-order asymmetry in the charge density, while the electric octupole corresponds to the third-order asymmetry, and so on. The first term within the parentheses in Eq. (11), which is linear in \(\vec{k}\), corresponds to the \(k\)-space representation of the electric dipole moment (\(p_{10}\)) whereas the remaining terms, which are all cubic in \(\vec{k}\), correspond to the electric octupole moment (\(\mathcal{O}_{30}\)). To verify the existence of the local electric dipoles and octupoles in PTO, we decompose the \(\mathcal{T}\) symmetric density matrix \(\rho_{lm,l^{\prime}m^{\prime}}\), computed within the DFT framework, into parity-odd tensor moments and explicitly compute the atomic-site electric dipole and octupole moments, for which only the odd \(l-l^{\prime}\) terms contribute [41]. The computed odd-parity charge multipoles on the Ti\({}^{4}+\) ions are non-zero in the polar structure, as shown in Fig. 6 (a), and confirm the presence of a ferrotype ordering of electric dipole component \(p_{10}\) and octupole component \(\mathcal{O}_{30}\) at the Ti site. Here the indices at the suffix of the multipole components represent the \(l\) and \(m\) indices of the spherical harmonics that are used to build these charge multipoles. The electric dipole moment \(\vec{p}\) is a tensor of rank 1 (vector), with \(p_{10}\) indicating its \(z\) component. 
Similarly, the octupole moment \(\mathcal{O}_{ijk}\) is a totally symmetric tensor of rank 3 with seven components. The \(\mathcal{O}_{30}\) component has the representation \(\frac{1}{2}z(5z^{2}-r^{2})\). _Results and discussion-_ Now that we have correlated the individual terms of the Hamiltonian to the charge multipoles, we diagonalize the Hamiltonian \(\mathcal{H}(\vec{k})\) in Eq. (8) for the realistic parameters listed in Table 1, extracted using the NMTO downfolding technique [42; 43]. We, then, use the computed eigenvalues \(\epsilon_{k}^{n}\) and eigenfunctions \(u_{k}^{n}\) to obtain the \(k\)-space distribution of the orbital moment and the Berry curvature using Eqs. (4) and (5) for the lowest energy band of the Hamiltonian in Eq. (8). Note that the second term in Eq. (4) does not contribute to the KME response, as stated before, and hence, we ignore this term for the computation of the orbital moment. We then compute the BCD density \(d_{ij}(\vec{k})=\partial_{k_{i}}\Omega_{j}(\vec{k})\) and the reduced KME density \(\kappa_{ij}(\vec{k})=\partial_{k_{i}}m_{j}^{\text{orb}}(\vec{k})\) for \(i,j=x,y\), the integrals of which over the occupied part of the BZ determine the magnitude of \(\mathcal{D}_{ij}\) and \(\tilde{\mathcal{K}}_{ij}\) respectively [see Eqs. (2) and (3)]. The computed densities show that they have the same sign (\(+\) or -) over \(k\)-space only if \(i\neq j\) and hence when integrated over the occupied part of the BZ, only the \(xy\) and \(yx\) components of \(\mathcal{D}\) and \(\tilde{\mathcal{K}}\) have non-zero values. The variations of these components along a specific momentum direction are shown in Fig. 7 (see the solid lines). For the opposite polarization direction (\(-P\)), the parameters \(\alpha,\beta,\gamma\) switch sign and, consequently, as shown in Fig. 7, the \(xy\) and \(yx\) components of \(d\) and \(\kappa\) switch signs, keeping their magnitudes unaltered. In an \(\mathcal{I}\)-symmetric system, on the other hand, \(\alpha=\beta=\gamma=0\), and consequently, we find that \(d_{ij}\), \(\kappa_{ij}\) become zero as shown in the insets of Fig. 7, emphasizing the important role of \(\mathcal{I}\) symmetry breaking. Further to gain insight into the origin of these two effects, we switch off the linear term in Eq. (11), which originates from the electric dipole moment. Interestingly, in this case, we find that while all the considered components of \(d\), \(\kappa\) still survive, their values reduce drastically by an order of magnitude. This suggests that the linear terms in \(k\) in Eq. (11), originating from the electric dipole moment, play an important role in determining the magnitudes of both these effects, although the importance of the electric octupole-driven \(k^{3}\) terms can not be ignored. Our findings are consistent with the multipole description Figure 6: Atomic-site charge dipole moment component \(p_{10}\) and octupole moment component \(\mathcal{O}_{30}\) on the Ti\({}^{4+}\) ions as a function of the displacement (in units of out-of-plane lattice constant \(c\)) of the Ti ion from the center of the unit cell in PTO. The inset shows the schematic for the toroidal arrangement of the orbital (spin) moment (indicated in thick arrows) in the \(k_{x}\)-\(k_{y}\) plane due to the first term in the Hamiltonian (11) driven by the charge dipole. (b) Fourth nearest neighbor Ti atoms (connected by the brown straight lines) along \((\pm a,0,\pm c)\) and \((0,\pm a,\pm c)\). 
Note that in the cubic high-symmetry structure with \(c=a\), these are second nearest neighbors. of the KME response, proposed by Hayami _et. al._ based on symmetry analysis Hayami et al. (1998). Indeed, we find that the antisymmetric part of the KME response \(\mathcal{K}^{-}_{ij}\) in PTO can be described by the existence of an electric dipole moment component, \(\tilde{\mathcal{K}}^{-}_{ij}=\frac{1}{2}(\mathcal{K}_{ij}-\mathcal{K}_{ji})= \varepsilon_{ijk}p_{k}\). It is important to point out here that the KME, although universal to all polar metals, can also occur in noncentrosymmetric but non-polar systems, e.g., chiral materials, in which case other multipoles such as the monopole of the electric toroidal dipole moment will dictate the symmetric part (with the trace) of the KME response Hayami et al. (1998). We further note that the fourth NN (see Fig. 6 (b)), inter-orbital (\(d_{xy}-d_{xz}\) and \(d_{xy}-d_{yz}\)) hopping integrals, \(t^{x}\) and \(t^{y}\), induced by the broken \(\mathcal{I}\) symmetry, are the key ingredients for both these effects. While both these hopping integrals contribute to the parameters \(\alpha\) and \(\beta\), \(\beta\) is solely determined by \(t^{x}\) and \(t^{y}\) while \(\alpha\) has additional contributions. As a result, in the absence of these hoppings, \(\beta\) and the effective hopping, \(t^{8}_{\rm eff}\), in \(\mathcal{H}_{\rm inv}\) vanish. In this case of \(t^{x}=t^{y}=0\), we find that the components of both \(d\) and \(\kappa\) also vanish, as shown in the insets of Fig. 7 (see the dashed brown line), emphasizing the importance of the further neighbor interactions. To understand why the fourth NN hopping parameters are crucial, we first note that the non-zero \(\beta\) and \(t^{8}_{\rm eff}\) resulting from the fourth NN hopping parameters appear in the third term of Eq. (11) and the off-diagonal elements \(h_{12}\) and \(h_{13}\) of Eq. (9) respectively. Interestingly, these are the only inter-orbital contributions in our minimal model that are also responsible for the band dispersion along the out-of-plane \(k_{z}\) direction. Since the inter-orbital hopping parameters drive the non-zero Berry curvature Herring and Van Hove (1992) and since the dispersion along \(k_{z}\) is crucial for the existence of the in-plane components of both orbital moment and Berry curvature [see Eqs. (4) and (5)], we see that both quantities vanish in the absence of fourth NN hopping. This, in turn, also leads to an absence of \(xy\) and \(yx\) components of \(d\) and \(\kappa\), explaining the crucial role of the fourth NN hopping integrals in driving the KME and NHE in doped PTO. ## IV Summary and outlook To summarize, taking the example of doped PTO, we have shown that both the KME and the NHE, are universal to all polar metals and can be used for a complete characterization of this class of materials. Our work paves the way for the broad applicability of these two effects in polar metals in general, going beyond their earlier investigation in topological systems Kiselev et al. (2009); Herring and Van Hove (2009); Herring et al. (2010); Hayami et al. (1998). Our detailed tight-binding analysis reveals the importance of the broken-symmetry-induced inter-orbital hopping parameters, correlated to the odd-parity charge multipoles, in mediating these effects. In particular, we have identified the broken-inversion-induced fourth NN inter-orbital hopping parameters as being essential in driving these effects in doped PTO. 
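As a concrete companion to this analysis, the sketch below assembles the minimal \(t_{2g}\) model of Eqs. (8)-(12) from the Table 1 hoppings and the \(\alpha,\beta,\gamma\) values quoted with Fig. 7 (all in units of \(10^{-2}\) Ry, \(\hbar=1\)), working in the dimensionless variables \((k_{x}a,k_{y}a,k_{z}c)\) so that the lattice constants drop out. It is an illustrative re-implementation, not the authors' code: only \(\alpha,\beta,\gamma\) are flipped to mimic polarization reversal, since the text specifies the sign change only for these parameters. The printed \(\Omega_{y}\) of the lowest band then changes sign between \(+P\) and \(-P\), in line with Fig. 7.

```python
import numpy as np

# Table 1 hoppings (1e-2 Ry) and the +P values of alpha, beta, gamma quoted with Fig. 7.
t = {1: 4.95, 2: -2.26, 3: 0.40, 4: 10.09, 5: -0.19, 6: -0.97, 7: -1.59, 8: -0.48, 9: -0.28}
ALPHA, BETA, GAMMA = 0.22, 0.02, -0.10

# t2g angular-momentum matrices of Eq. (12), basis {d_xy, d_yz, d_xz}, hbar = 1.
Lx = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
Ly = np.array([[0, 1j, 0], [-1j, 0, 0], [0, 0, 0]])

def H(k, sgn=+1):
    """H = H_inv + H_BI of Eq. (8); sgn = -1 flips alpha, beta, gamma only (assumption here)."""
    kx, ky, kz = k                                   # dimensionless (kx*a, ky*a, kz*c)
    h11 = t[1] - t[2]*(kx**2 + ky**2) - t[3]*kz**2
    h22 = t[4] - t[5]*kx**2 - t[6]*ky**2 - t[7]*kz**2
    h33 = t[4] - t[6]*kx**2 - t[5]*ky**2 - t[7]*kz**2
    h12, h13, h23 = t[8]*kx*kz, t[8]*ky*kz, t[9]*kx*ky
    Hinv = np.array([[h11, h12, h13], [h12, h22, h23], [h13, h23, h33]], dtype=complex)
    a, b, g = sgn*ALPHA, sgn*BETA, sgn*GAMMA
    Hbi = (a*(kx*Ly - ky*Lx) - (a/6)*(kx**3*Ly - ky**3*Lx)
           - b*kz**2*(kx*Ly - ky*Lx) - g*kx*ky*(ky*Ly - kx*Lx))
    return Hinv + Hbi

def omega_y(k, sgn=+1, band=0, dk=1e-4):
    """Berry curvature Omega_y of one band via the Kubo formula (5), (i, j, k) = (z, x, y)."""
    E, U = np.linalg.eigh(H(k, sgn))
    dH = [(H(np.add(k, d), sgn) - H(np.subtract(k, d), sgn)) / (2*dk)
          for d in (np.array([dk, 0, 0]), np.array([0, dk, 0]), np.array([0, 0, dk]))]
    n, tot = band, 0.0
    for m in range(3):
        if m == n:
            continue
        vz = U[:, n].conj() @ dH[2] @ U[:, m]
        vx = U[:, m].conj() @ dH[0] @ U[:, n]
        tot += -2 * np.imag(vz * vx) / (E[n] - E[m])**2
    return tot

k = np.array([0.15, 0.10, 0.20])
print("Omega_y(+P) =", omega_y(k, +1), "   Omega_y(-P) =", omega_y(k, -1))
```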
_Proposal for experiments._ Before concluding, here we briefly discuss possible routes to detecting the two effects. The second-order NHE in polar metals can be detected by measuring the second harmonic current \(J_{2\omega}\) at a frequency \(2\omega\) for an applied ac electric field \(\vec{E}\) of frequency \(\omega\), Herring and Van Hove (2009) \[\vec{j}_{2\omega}=\frac{e^{3}\tau}{2(1+i\omega\tau)}\vec{E}_{\omega}\times( \vec{p}\times\vec{E}_{\omega}). \tag{13}\] Here \(\vec{p}\) is the direction of the electric dipole moment, which is along \(\hat{z}\) for doped PTO. This suggests that for \(\vec{E}\) along \(\hat{z}\) (i.e., with polar angle \(\theta=0\)), the Hall current vanishes as we found also from our explicit calculations discussed above. Furthermore, for a general form Figure 7: Results of the tight-binding analysis. Computed variation (solid line with circles) of the reduced KME density components (a) \(\kappa_{xy}\), (b) \(\kappa_{yx}\), and the BCD density components (c) \(d_{xy}\), (d) \(d_{yx}\) around the \(\Gamma\) point for the \(+P\) polarization. The variation is shown along \(k_{y}\) for (a) and (c), and along \(k_{x}\) for (b) and (d). The same variation (indicated in dashed lines with diamonds) for \(-\kappa_{xy},-\kappa_{yx},-d_{xy}\), and \(-d_{yx}\) for the polarization \(-P\) are also shown in (a)-(d). The same variation of (e) \(\kappa_{xy}\), (f) \(\kappa_{yx}\), (g) \(d_{xy}\) and (h) \(d_{yx}\) in the presence of inversion symmetry (black solid line), in absence of fourth NN inter-orbital hopping parameters \(t^{x}\) and \(t^{y}\) (dashed brown line), and in absence of the first term (linear in \(\vec{k}\)) in Eq. (11) (green line with circles). The parameters used for the plots are listed in Table 1, and \(\alpha=0.22,\beta=0.02\), and \(\gamma=-0.10\) (in units of \(10^{-2}\) Ry) for \(+P\) polarization. of the field, \(\vec{E}=Ee^{i\omega t}(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\), it is also easy to see from Eq. (13) that the Hall current does not depend on the azimuthal angle \(\phi\) made by \(\vec{E}\) with \(\hat{x}\) for an in-plane \(\vec{E}\) (i.e., \(\theta=\pi/2\)). This means that rotation of \(\vec{E}\) within the \(x\)-\(y\) plane will leave the Hall current invariant. The current-induced magnetization in the KME should be detectable using the magneto-optical Kerr effect. In doped PTO the generated magnetization is dominated by the orbital moment for a reasonable doping concentration (see the inset of Fig. 1 (c)) and has a magnitude of \(1.8\times 10^{-4}\mu_{B}\)/atom at the experimentally observed maximum doping concentration (\(n_{x=0.12}=1.9\times 10^{21}\) cm\({}^{-3}\)) up to which the system retains the ferroelectricity, for an applied field of \(10^{5}\) V/m and a typical relaxation time constant \(\tau\simeq 1\) ps. The computed orbital magnetization is about four orders of magnitude larger than that reported in Te [15], while it is about an order of magnitude smaller than the orbital magnetization in BCC iron [35]. The computed total (spin plus orbital) magnetization of \(\sim 1.0\times 10^{-3}\mu_{B}\) per unit cell is also comparable to the magnetization of the Rashba system Bi/Ag(111), the (001) surface of the topological insulator \(\alpha\)-Sn, and the Weyl semimetal TaAs [46] and, hence, likely to be discernible in measurements. In the present work, we considered a rigid band approximation to describe the doped PTO case. 
While we expect this to provide a good description of the NHE and KME for the small doping concentration achievable in the measurements, future work should investigate computationally how electron doping affects the electronic structure of PTO. The dominance of the orbital magnetization in the KME response of doped PTO that emerges from our work, opens the door for the application of polar metals in orbitronics with the additional advantage of switchable orbital texture by reversal of the electric polarization. We hope that our work will motivate both theoretical and experimental work in these directions in the near future. ## Acknowledgements The authors thank Awadhesh Narayan and Dominic Varghese for stimulating discussions. NAS and SB were supported by the ERC under the EU's Horizon 2020 Research and Innovation Programme grant No 810451 and by the ETH Zurich. Computational resources were provided by ETH Zurich's Euler cluster, and the Swiss National Supercomputing Centre, project ID eth3.
2309.13914
Matrix Factorization in Tropical and Mixed Tropical-Linear Algebras
Matrix Factorization (MF) has found numerous applications in Machine Learning and Data Mining, including collaborative filtering recommendation systems, dimensionality reduction, data visualization, and community detection. Motivated by the recent successes of tropical algebra and geometry in machine learning, we investigate two problems involving matrix factorization over the tropical algebra. For the first problem, Tropical Matrix Factorization (TMF), which has been studied already in the literature, we propose an improved algorithm that avoids many of the local optima. The second formulation considers the approximate decomposition of a given matrix into the product of three matrices where a usual matrix product is followed by a tropical product. This formulation has a very interesting interpretation in terms of the learning of the utility functions of multiple users. We also present numerical results illustrating the effectiveness of the proposed algorithms, as well as an application to recommendation systems with promising results.
Ioannis Kordonis, Emmanouil Theodosis, George Retsinas, Petros Maragos
2023-09-25T07:29:59Z
http://arxiv.org/abs/2309.13914v1
# Matrix Factorization in Tropical and Mixed Tropical-Linear Algebras ###### Abstract Matrix Factorization (MF) has found numerous applications in Machine Learning and Data Mining, including collaborative filtering recommendation systems, dimensionality reduction, data visualization, and community detection. Motivated by the recent successes of tropical algebra and geometry in machine learning, we investigate two problems involving matrix factorization over the tropical algebra. For the first problem, Tropical Matrix Factorization (TMF), which has been studied already in the literature, we propose an improved algorithm that avoids many of the local optima. The second formulation considers the approximate decomposition of a given matrix into the product of three matrices where a usual matrix product is followed by a tropical product. This formulation has a very interesting interpretation in terms of the learning of the utility functions of multiple users. We also present numerical results illustrating the effectiveness of the proposed algorithms, as well as an application to recommendation systems with promising results. Ioannis Kordonis\({}^{1}\), Emmanouil Theodosis\({}^{2}\), George Retsinas\({}^{1}\), Petros Maragos\({}^{1}\)\({}^{1}\)School of Electrical and Computer Engineering National Technical University of Athens, Greece \({}^{2}\)School of Engineering and Applied Sciences Harvard University Cambridge, MA 02138 [email protected], [email protected], [email protected], [email protected] Tropical Algebra and Geometry, Matrix Factorization, Dimensionality Reduction, Recommendation Systems ## 1 Introduction Tropical geometry is a research field combining ideas and methods from max-plus algebra (e.g., [1]) with algebraic geometry (see for example [2]). In the last few years, there is a developing interest in the application of tropical geometric ideas and tools to machine learning problems. Some of the applications include the analysis and simplification of piece-wise linear neural networks and the modeling of graphical statistical models. For a review and some recent results see [3]. This paper proposes some ideas and algorithms for matrix factorization over the tropical algebra and over mixed tropical/linear algebras. Matrix Factorization (MF) is a classical topic in Machine Learning and Data mining and MF techniques (e.g. low-rank or nonnegative MF) have found numerous and diverse applications, such as collaborative filtering, dimensionality reduction, data visualization, community detection, blind source separation, and knowledge discovery, to name a few [4]. The contribution of this work is twofold. First, we propose some simple algorithms for Tropical Matrix Factorization (TMF) problem that manage to avoid a large number of locally optimal solutions and compare favorably with algorithms from the literature. Second, we introduce a new matrix factorization problem, that involves approximating a given matrix as a usual product of two matrices, followed by a tropical product with a third matrix. We refer to this problem as the Tropical Compression (TC) problem. This formulation has an interesting interpretation in terms of learning the utility function of multiple users. Particularly, utility functions are usually modeled as concave functions of their arguments (e.g. [5]). We will see that TC formulation can be used to approximate a vector of utility functions with unknown arguments. We will also present an application of the proposed matrix factorizations in recommendation systems. 
_Related Work:_ There is some prior work to the TMF problem, that is to approximate a matrix as max-plus product of two matrices with given dimensions. Early applications of TMF include the problem of state space realization of max-plus systems [6]. The exact formulation of TMF can be reduced to an Extended Linear Complementarity Problem (ELCP) [7]. ELCPs also describe the solution of sets of tropical polynomial equations [8]. Unfortunately, the general TMF problem is NP-hard [9]. An approximate technique for TMF was introduced in [6]. The algorithm was extended in [10, 11, 12], and some applications in data mining were presented. A closely related algorithm was proposed in [13], for approximating symmetric matrices as the max-plus product of a matrix with its transpose. Algorithms for the related problem of approximate sub-tropical matrix factorization, i.e., matrix factorization over the max-product semi-ring were proposed in [14, 15, 16]. For a review of several matrix factorization formulations over non-standard algebras see [17]. ## 2 Preliminaries In this section, we introduce some basic notions of max-plus or tropical algebra. The underlying space is \(\mathbb{R}_{\max}=\mathbb{R}\cup\{-\infty\}\). This set is equipped with two binary operations \(\vee\) and \(+\), where \(x\lor y=\max(x,y)\) and \(+\) is the usual scalar addition. In this space, maximization has the role of the usual addition and addition the role of usual multiplication. We also consider the vector space \(\mathbb{R}_{\max}^{p}\) where the internal operation \(\mathbf{x}\vee\mathbf{y}\) is defined entry-wise, i.e., \([\mathbf{x}\vee\mathbf{y}]_{i}=\max(x_{i},y_{i})\) and the external operation \(\lambda+\mathbf{x}\), for \(\lambda\in\mathbb{R}_{\max},\mathbf{x}\in\mathbb{R}_{\max}^{p}\), is defined as \([\lambda+\mathbf{x}]_{i}=\lambda+x_{i}\). For a matrix \(\mathbf{A}\in\mathbb{R}_{\max}^{m\times p}\) and a vector \(\mathbf{x}\in\mathbb{R}_{\max}^{p}\), we define the tropical matrix-vector multiplication as \[[\mathbf{A}\boxplus\mathbf{x}]_{i}=\max_{j}(A_{ij}+x_{j}). \tag{1}\] Similarly, for matrices \(\mathbf{A}\in\mathbb{R}_{\max}^{m\times p}\) and \(\mathbf{B}\in\mathbb{R}_{\max}^{p\times n}\), we define the tropical matrix multiplication as \[[\mathbf{A}\boxplus\mathbf{B}]_{ij}=\max_{l}(A_{il}+B_{lj}). \tag{2}\] Tropical polynomials are polynomials in the max-plus algebra. A tropical polynomial function \(p:\mathbb{R}^{n}\rightarrow\mathbb{R}_{\max}\) is defined as \[p(\mathbf{x})=\bigvee_{i=1}^{m_{p}}(a_{i}+\mathbf{b}_{i}^{T}\mathbf{x}), \tag{3}\] where \(\mathbf{b}_{i}\in\mathbb{R}^{n}\), \(a_{i}\in\mathbb{R}_{\max}\). A vector of tropical polynomials is called a tropical map. Observe that a tropical map can be expressed in the form \(\mathbf{A}\boxplus(\mathbf{B}\mathbf{x})\), for appropriate matrices \(\mathbf{A},\mathbf{B}\). For a matrix \(\mathbf{A}\) the Frobenius norm is given by \(\|\mathbf{A}\|_{F}=\sqrt{\sum_{i,j}a_{ij}^{2}}\). Finally, we use 1 to describe an indicator function, i.e., \(\intercal_{i=j}=1\) if \(i=j\) and zero otherwise. ## 3 Tropical Matrix Factorization Assume that \(\mathbf{Y}\) is an \(n\times p\) matrix. 
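Before stating the factorization problem, it helps to make the tropical products of Eqs. (1)-(2) concrete. A minimal NumPy sketch (illustrative code, not the authors' implementation) follows; note that \(-\infty\) plays the role of the tropical "zero".

```python
import numpy as np

def maxplus_vec(A, x):
    """Tropical matrix-vector product of Eq. (1): [A ⊞ x]_i = max_j (A_ij + x_j)."""
    return (A + x[None, :]).max(axis=1)

def maxplus_mat(A, B):
    """Tropical matrix-matrix product of Eq. (2): [A ⊞ B]_ij = max_l (A_il + B_lj)."""
    return (A[:, :, None] + B[None, :, :]).max(axis=1)

A = np.array([[0.0, -np.inf],
              [1.0,  2.0]])       # -inf is the tropical "zero" element
B = np.array([[3.0, 0.0],
              [1.0, 5.0]])
x = np.array([2.0, -1.0])

print(maxplus_vec(A, x))   # [max(0+2, -inf-1), max(1+2, 2-1)] = [2., 3.]
print(maxplus_mat(A, B))   # e.g. entry (0, 0) = max(0+3, -inf+1) = 3
```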
The approximate Tropical Matrix Factorization problem is to find \(n\times r\) and \(r\times p\) matrices \(\mathbf{A},\mathbf{B}\), with given \(r<\min(n,p)\) that solve the optimization problem \[\underset{\mathbf{A},\mathbf{B}}{\text{minimize}}\quad\|\mathbf{Y}-\mathbf{A}\boxplus\mathbf{B} \|_{F}^{2}, \tag{4}\] where \(\|\cdot\|_{F}\) is the Frobenius norm1 Footnote 1: We could also call the above problem as the Tropical Low Rank matrix approximation problem. However, tropical rank has at least three non-equivalent definitions (see for example [2]). This formulation corresponds to the ‘Barvinok rank’. However, to avoid confusion we call it the TMF problem. We start with a simple Gradient Descent (GD) formulation for the above problem. Observe that the function \[\mathbf{f}(\mathbf{A},\mathbf{B})=\mathbf{A}\boxplus\mathbf{B}\] is piecewise linear, and in the generic case, each entry of \([\mathbf{A}\boxplus\mathbf{B}]_{ij}\) depends on a single pair maximizing entries of \(\mathbf{A},\mathbf{B}\). Thus, GD takes the form \[\pi(i,j) \leftarrow\underset{l}{\text{argmax}}\{A_{il}+B_{lj}\}, \tag{5}\] \[A_{il} \gets A_{il}-\alpha\sum_{j}(A_{il}+B_{lj}-Y_{ij})\intercal_{l= \pi_{k}(i,j)}\] (6) \[B_{lj} \gets B_{lj}-\alpha\sum_{i}(A_{il}+B_{lj}-Y_{ij})\intercal_{l= \pi_{k}(i,j)} \tag{7}\] where \(\alpha\) is the step-size. In case of many maximizers in (5), assume that one is chosen at random. In this problem, there is a large number of local minima and stationary points. The partial derivatives with respect to all \(A_{il}\) such that \(\intercal_{l=\pi(i,j)}=0\) for all \(j\), are zero. Thus, if the value of \(A_{il}\) is very small, the partial derivative will be always zero and local search would not be able to change it. We call the entries \(A_{il}\) of matrix \(\mathbf{A}\) that do not contribute to any part of \(\mathbf{A}\boxplus\mathbf{B}\)_ineffective_. We then propose a simple modification of the gradient descent scheme to mitigate this issue \[A_{il} \gets A_{il}-\alpha\sum_{j}(A_{il}+B_{lj}-Y_{ij})s_{i,l,j} \tag{8}\] \[B_{lj} \gets B_{lj}-\alpha\sum_{i}(A_{il}+B_{lj}-Y_{ij})s_{i,l,j}, \tag{9}\] where \(s_{i,l,j}=1\) if \(l=\pi_{k}(i,j)\) and \(\epsilon_{k}\) otherwise. We choose \(\epsilon_{k}\) to be small positive constants. The idea behind this modification is that for all the ineffective entries of matrices \(\mathbf{A},\mathbf{B}\) change and thus have the opportunity in the next iteration to attain the maximum in (5). Note that similar optimization ideas were used in the context of neural network pruning in [18]. We call this method Gradient Descent with Multiplicative Noise (GDMN). We will also study a closely related modification, where we just add a stochastic value \(\varepsilon_{k}\) in (6), (7), which we call Gradient Descent with Additive Noise (GDAN). As we shall see in the numerical section, these modifications are surprisingly effective for avoiding bad local minima. We then shift our attention to the case where \(\mathbf{Y}\) is partially specified. That is, we do not have access to all the entries \(Y_{ij}\) but only for a subset \(\mathcal{O}\subset\{1,..,n\}\times\{1,..,p\}\). Then, problem (4) becomes \[\underset{\mathbf{A},\mathbf{B}}{\text{minimize}}\quad\|Z_{\mathcal{O}}\circ(\mathbf{Y}- \mathbf{A}\boxplus\mathbf{B})\|_{F}^{2}, \tag{10}\] where \(Z_{\mathcal{O}}\) is an \(n\times p\) matrix with ones in the entries \((i,j)\in\mathcal{O}\) and zeros elsewhere, and '\(\circ\)' stands for the Hadamard (element-wise) product. 
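A compact sketch of the GDMN iteration, i.e., the argmax selection (5) followed by the updates (8)-(9), is given below; it also accepts an observation mask \(Z_{\mathcal{O}}\) so that it covers the partially specified objective (10) (the masked form of the updates is spelled out next). The step size, the constant \(\epsilon\), and the random initialization are illustrative choices; a diminishing \(\epsilon_{k}\), as used in the experiments, can be substituted, and ties in the argmax are all kept here rather than broken at random.

```python
import numpy as np

def tmf_gdmn(Y, r, steps=2000, alpha=1e-3, eps=0.01, mask=None, seed=0):
    """Gradient Descent with Multiplicative Noise (GDMN) for the TMF objective (4)/(10).
    Entries attaining the max in (5) get weight 1; all other ("ineffective") entries get a
    small weight eps so that they keep moving.  `mask` is the 0/1 matrix Z_O (None = fully
    observed).  Ties in the argmax are all kept, whereas the text picks one at random."""
    rng = np.random.default_rng(seed)
    n, p = Y.shape
    if mask is None:
        mask = np.ones_like(Y)
    A = rng.normal(size=(n, r))
    B = rng.normal(size=(r, p))
    for _ in range(steps):
        S = A[:, :, None] + B[None, :, :]               # S[i, l, j] = A_il + B_lj
        winners = (S == S.max(axis=1, keepdims=True))   # indicator of l = argmax in (5)
        s = np.where(winners, 1.0, eps)                 # multiplicative-noise weights s_{i,l,j}
        resid = (S - Y[:, None, :]) * s * mask[:, None, :]
        A -= alpha * resid.sum(axis=2)                  # update (8)
        B -= alpha * resid.sum(axis=0)                  # update (9)
    return A, B

# Smoke test on a synthetic max-plus product.
rng = np.random.default_rng(1)
A0, B0 = rng.uniform(size=(20, 4)), rng.uniform(size=(4, 30))
Y = (A0[:, :, None] + B0[None, :, :]).max(axis=1)
A_hat, B_hat = tmf_gdmn(Y, r=4, steps=3000, alpha=5e-3, eps=0.02)
Y_hat = (A_hat[:, :, None] + B_hat[None, :, :]).max(axis=1)
print("relative error:", np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y))
```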
In this case (8), (9) become \[A_{il}\gets A_{il}-\alpha\sum_{j:(i,j)\in\mathcal{O}}(A_{il}+B _{lj}-Y_{ij})s_{i,l,j}\] \[B_{lj}\gets B_{lj}-\alpha\sum_{i:(i,j)\in\mathcal{O}}(A_{il }+B_{lj}-Y_{ij})s_{i,l,j},\] ## 4 The Tropical Compression Problem We first define the Tropical Compression (TC) problem. Assume that \(\mathbf{y}_{1},\ldots,\mathbf{y}_{N}\) are datapoints in \(\mathbb{R}^{n}\), with \(N\geq n\). The tropical compression problem is to find a description of the given dataset as the output of a tropical map. That is, we search for datapoints \(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\) in \(\mathbb{R}^{p}\) with \(p<n\) and matrices \(\mathbf{B}\in\mathbb{R}^{m\times p}\) and \(\mathbf{A}\in\mathbb{R}^{n\times m}_{\max}\) that solve the following problem \[\underset{\mathbf{A},\mathbf{B},\mathbf{X}}{\text{minimize}}\|\mathbf{Y}-\mathbf{A}\boxplus(\mathbf{ B}\mathbf{X})\|_{F}^{2}, \tag{11}\] where \(\mathbf{X}=[\mathbf{x}_{1}\ \ldots,\mathbf{x}_{N}]\), \(\mathbf{Y}=[\mathbf{y}_{1}\ \ldots,\mathbf{y}_{N}]\), and \(\mathbf{B}=[\mathbf{b}_{1}^{T}\ \ldots\ \mathbf{b}_{m}^{T}]^{T}\). We then present a motivating example. Assume that there is a set of \(n\) persons and a set of \(N\) items and that the preference of each person towards an item is described by a utility function. Each item has several features and the utility of each user if they receive that item is a piece-wise linear concave function of its features2. Assume also that the features of each item \(i\) are described by an unknown \(p-\)dimensional vector \(\mathbf{x}_{i}\). Footnote 2: Let us note that utility functions are very often modeled as concave functions (e.g. [5]). An intuitive reason for this choice is the principle of diminishing marginal utility. Furthermore, piece-wise linear concave functions can approximate arbitrarily well any concave function. If \(\bar{\mathbf{Y}}\) is the matrix describing the utility of each person from each item, then \(\bar{Y}_{ij}\) can be written as \[\bar{Y}_{ij}=\min(-\mathbf{b}_{1}^{T}\mathbf{x}_{j}-a_{i,1},\ldots,-\mathbf{b}_{m}^{T}\bm {x}_{j}-a_{i,m}),\] where \(\mathbf{x}_{j}\) is the vector of characteristics of object \(j\), and \(-\mathbf{b}_{l}\)'s the slopes of the piecewise linear utility function. Then, \(\mathbf{Y}=-\bar{\mathbf{Y}}\) can be written as \[\mathbf{Y}=\mathbf{A}\boxplus(\mathbf{B}\mathbf{X}),\] for appropriate matrices \(\mathbf{A},\mathbf{B}\). Particularly, \(\mathbf{B}\) contains as rows the slopes of all the different users3. In the case where both the features \(\mathbf{x}_{j}\) of the objects and the slopes \(\mathbf{b}_{i,l}\) are unknown, the description of \(\mathbf{Y}\) reduces to a tropical compression problem. Footnote 3: In case where the utility function of some user \(i\) does not include a slope \(\mathbf{b}_{l}\), then \(a_{il}=-\infty\). 
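Returning for a moment to the plain TMF problem of Section 3, the GDMN updates (8)-(9) can be sketched in a few lines of NumPy. This is an illustrative sketch rather than a definitive implementation: the step size, the random initialization and the stopping rule are our choices, and the diminishing \(\epsilon_k\) schedule follows the one reported in the experiments.

```python
import numpy as np

def gdmn_tmf(Y, r, alpha=0.01, n_iter=2000, seed=0):
    """Approximate Y (n x p) as A ⊞ B with A: n x r, B: r x p,
    using the GDMN updates (8)-(9).  Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n, p = Y.shape
    A = rng.normal(size=(n, r))
    B = rng.normal(size=(r, p))
    for k in range(n_iter):
        T = A[:, :, None] + B[None, :, :]            # T[i, l, j] = A_il + B_lj
        pi = np.argmax(T, axis=1)                     # maximizing index for each (i, j), cf. (5)
        eps = 9.0 / (500.0 + k)                       # diminishing noise level used in the experiments
        S = np.full((n, r, p), eps)                   # s_{i,l,j} = eps for non-maximizing entries ...
        S[np.arange(n)[:, None], pi, np.arange(p)[None, :]] = 1.0   # ... and 1 at the argmax
        G = (T - Y[:, None, :]) * S                   # (A_il + B_lj - Y_ij) * s_{i,l,j}
        A -= alpha * G.sum(axis=2)                    # update (8): sum over j
        B -= alpha * G.sum(axis=0)                    # update (9): sum over i
    return A, B
```

Setting `eps = 0` recovers plain GD, while replacing the multiplicative factor by an additive stochastic term in (6)-(7) gives the GDAN variant described above.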
### A Numerical Algorithm for the TC Problem

Let us transform (11) into \[\underset{\mathbf{A}\in\mathbb{R}^{n\times m}_{\max},\;\mathbf{C}\in\mathbb{R}^{m\times N}}{\text{minimize}}\quad\|\mathbf{Y}-\mathbf{A}\boxplus\mathbf{C}\|_{F}^{2},\qquad\text{subject to}\quad\mathbf{C}=\mathbf{B}\mathbf{X},\ \mathbf{B}\in\mathbb{R}^{m\times p},\ \mathbf{X}\in\mathbb{R}^{p\times N}. \tag{12}\] This constrained problem is addressed with a modified projected gradient descent scheme, combining gradient updates of the form (8)-(9) on \(\mathbf{A},\mathbf{C}\) with a step enforcing the constraint \(\mathbf{C}=\mathbf{B}\mathbf{X}\); this iteration is referred to as (13)-(16) in the sequel.

## 5 Numerical Experiments

### Synthetic Data

Figure 1.a compares the norm of the error \(\|\mathbf{Y}-\mathbf{A}\boxplus\mathbf{B}\|_{F}\), where the matrices \(\mathbf{A},\mathbf{B}\) are computed using Algorithm (8)-(9), with different values of \(\epsilon\) (recall that \(\epsilon\) represents the contribution of non-maximizing entries to the algorithm). The modification allows the algorithm to overcome some local optima. Observe that there is a trade-off between convergence speed and quality of solution. With a large value of \(\epsilon\), we have faster convergence to a worse solution. Additionally, we used a diminishing scheme for \(\epsilon\) in the form \(\epsilon_{k}=9/(500+k)\), where \(k\) is the iteration count. All the results are normalized, that is, we divide the error by the norm \(a\|R\|_{F}\). Figure 1.b compares the modified versions of GD. We then compare the proposed algorithms with the FastSTMF algorithm from [12]. We factorize multiple \(10\times 11\) matrices with \(r=5\). To have a fair comparison, we use as an initial estimate in both algorithms the matrices proposed in [12]. Table 1 shows the matrix factorization error of FastSTMF and compares it with the error of the GD and the proposed variations. We have two implementations of GDAN, with zero mean (GDAN-ZM) and non-zero mean (GDAN-NZM). GDAN-NZM has the best performance among the variants examined. This is probably because it promotes competition between the different non-maximizing entries of matrices \(\mathbf{A},\mathbf{B}\).

### Real Data

#### 5.2.1 Movielens 100k Dataset

We use the Movielens 100k Dataset [21], consisting of the ratings of \(943\) users to \(1682\) movies. There are in total \(100000\) ratings. Here we use the implicit feedback formulation. That is, we consider a matrix \(\mathbf{Y}\) with a value of \(-1\) if the person has watched a movie and \(+1\) if they haven't.
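As an illustration of this setup, the implicit-feedback matrix could be assembled along the following lines (a sketch in Python; the standard tab-separated `u.data` ratings file of MovieLens 100k is assumed, and the train/validation/test split described next is omitted):

```python
import numpy as np

# Implicit-feedback matrix for MovieLens 100k (Section 5.2.1):
# Y[u, i] = -1 if user u has rated (watched) movie i, +1 otherwise.
n_users, n_items = 943, 1682
Y = np.ones((n_users, n_items))

with open("u.data") as fh:            # assumed layout: "user \t item \t rating \t timestamp"
    for line in fh:
        user, item, _rating, _ts = line.split("\t")
        Y[int(user) - 1, int(item) - 1] = -1.0

print(Y.shape, int((Y == -1).sum()))  # (943, 1682) and 100000 watched entries
```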
We then use a factorization of matrix \(\mathbf{Y}\). We split the data into \(80\%\) training, \(10\%\) validation, and \(10\%\) test, and apply a stochastic version of GD, and early stopping. We use two metrics, the RMS error and the Hit Rate at 10 (HR@10)4. Footnote 4: HR@10 is defined as follows. For each user, form a list of \(101\) items choosing randomly from the test set 1 positive and \(100\) negative items. Then, count the number of users for which the positive item is ranked among the first \(10\) of the list [22]. The best approximation comes for an intermediate dimension \(r=35\) and has an RMS error equal to \(0.396\) in the test set and HR@10 is \(0.755\). We then consider the TC formulation of the problem (11), with \(m=40\) and \(p=25\). Using the modified projected gradient descent algorithm (13)-(16) we get an RMS error equal to \(0.391\) and HR@10 equal to \(0.77\). Compared to the TMF formulation, TC performs slightly better. It has also a smaller number of parameters and an intuitive interpretation. #### 5.2.2 Movielens 1M Dataset We then turn to a larger dataset, Movielens 1M, with \(1\) million ratings from \(6000\) users on \(4000\) movies. We formulate matrix \(\mathbf{Y}\), as in the previous subsection. Then, using the same train/validation/test split, we compute an approximate tropical factorization for matrix \(\mathbf{Y}\), with \(r=40\). Then, the RMS error becomes \(0.328\) and the HR@10 becomes \(0.742\). For comparison, a carefully optimized and regularized linear factorization gives HR@10 equal to \(0.731\)[22]. For a TC formulation with \(m=100\) and \(p=35\), the RMS error becomes \(0.327\) and HR@10 becomes \(0.743\). ## 6 Conclusion and Future Work This paper formulates two matrix factorization problems, over the tropical algebra and over mixed linear tropical algebras respectively. For the first problem, we proposed some variations of Gradient Descent that lead to improved performance and compare favorably with an algorithm from the literature. For the second problem which, has an interesting interpretation, in terms of learning the utility function of a set of users, we proposed a non-convex projection gradient descent algorithm. The proposed algorithms were applied to a recommendation problem, using datasets MovieLens 100k and 1M, with promising results. Some interesting directions for further research are the use of appropriate regularization techniques and the study of sparse approximate solutions for the TMF problem. \begin{table} \begin{tabular}{l c c c} \hline \hline **Algorithm** & \multicolumn{3}{c}{Parameter \(a\)} \\ \cline{2-4} & \(0.01\) & \(0.1\) & \(0.5\) \\ \hline FastSTMF [12] & \(11.2\pm 4.9\) & \(1.38\pm 0.52\) & \(.52\pm.06\) \\ GD & \(13.2\pm 3.5\) & \(1.45\pm 0.40\) & \(.43\pm.04\) \\ GDMN & \(06.3\pm 4.1\) & \(0.77\pm 0.19\) & \(.39\pm.04\) \\ GDAN-ZM & \(11.4\pm 2.6\) & \(1.26\pm 0.43\) & \(.42\pm.04\) \\ GDAN-NZM & \(\mathbf{4.5\pm 2.1}\) & \(\mathbf{0.60\pm 0.28}\) & \(\mathbf{.34\pm.05}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of the Algorithms: The normalized Frobenius error, for various values of \(a\). Figure 1: _(a) The approximation error norm for Algorithm (8)-(9). (b) The comparison of the GD scheme and the proposed modifications, with FastSTMF. The results for (b) have been averaged over 10 runs. FastSTMF converges very fast (in \(2\) to \(5\) steps) but does not improve further. Since it uses a different kind of iteration, we included only its final value._
2310.20265
Low-Dose CT Image Enhancement Using Deep Learning
The application of ionizing radiation for diagnostic imaging is common around the globe. However, the process of imaging, itself, remains to be a relatively hazardous operation. Therefore, it is preferable to use as low a dose of ionizing radiation as possible, particularly in computed tomography (CT) imaging systems, where multiple x-ray operations are performed for the reconstruction of slices of body tissues. A popular method for radiation dose reduction in CT imaging is known as the quarter-dose technique, which reduces the x-ray dose but can cause a loss of image sharpness. Since CT image reconstruction from directional x-rays is a nonlinear process, it is analytically difficult to correct the effect of dose reduction on image quality. Recent and popular deep-learning approaches provide an intriguing possibility of image enhancement for low-dose artifacts. Some recent works propose combinations of multiple deep-learning and classical methods for this purpose, which over-complicate the process. However, it is observed here that the straight utilization of the well-known U-NET provides very successful results for the correction of low-dose artifacts. Blind tests with actual radiologists reveal that the U-NET enhanced quarter-dose CT images not only provide an immense visual improvement over the low-dose versions, but also become diagnostically preferable images, even when compared to their full-dose CT versions.
A. Demir, M. M. A. Shames, O. N. Gerek, S. Ergin, M. Fidan, M. Koc, M. B. Gulmezoglu, A. Barkana, C. Calisir
2023-10-31T08:34:33Z
http://arxiv.org/abs/2310.20265v1
# Low-Dose CT Image Enhancement Using Deep Learning ###### Abstract The application of ionizing radiation for diagnostic imaging is common around the globe. However, the process of imaging, itself, remains to be a relatively hazardous operation. Therefore, it is preferable to use as low a dose of ionizing radiation as possible, particularly in computed tomography (CT) imaging systems, where multiple x-ray operations are performed for the reconstruction of slices of body tissues. A popular method for radiation dose reduction in CT imaging is known as the quarter-dose technique, which reduces the x-ray dose but can cause a loss of image sharpness. Since CT image reconstruction from directional x-rays is a nonlinear process, it is analytically difficult to correct the effect of dose reduction on image quality. Recent and popular deep-learning approaches provide an intriguing possibility of image enhancement for low-dose artifacts. Some recent works propose combinations of multiple deep-learning and classical methods for this purpose, which over-complicate the process. However, it is observed here that the straight utilization of the well-known U-NET provides very successful results for the correction of low-dose artifacts. Blind tests with actual radiologists reveal that the U-NET enhanced quarter-dose CT images not only provide an immense visual improvement over the low-dose versions, but also become diagnostically preferable images, even when compared to their full-dose CT versions. _Keywords--_ CT Image Enhancement, Image Enhancement with U-NET, CT Scan, Low-Dose CT Scan ## 1 Introduction The human body organs and extremities are visualized using medical imaging methods to identify several diseases [1]. Medical imaging is an important tool used in medical and biological research, and imaging systems often use an image reconstruction algorithm to create final visualizations of images [2]. The concept of image reconstruction is an inverse mathematical operation for mapping the sensor domain information to the image domain. A good image reconstruction is a key component for establishing high-quality images from sensors [3]. The goal of medical image reconstruction is to get high-quality medical images for clinical use at the lowest possible cost and risk to the patients [1]. One of the important and commonly used imaging methods is Computed Tomography (CT), where multiple x-ray images that are acquired at multiple orientations orthogonal to the slice to be visualized are used for reconstructing an image using modern or classical back projection methods [4, 5]. Since CT corresponds to an indirect tissue slice imaging, it offers information regarding vascular networks, luminal patency, and spatial geometry [6]. However, due to multiple X-ray operations, a patient receives high doses of radiation. Furthermore, some imaging contrast-boosting chemicals may aggravate situations with kidney diseases [7]. Low-dose CT is gaining popularity as a means of lower radiation exposure. However, a direct reduction of the radiation dose results in a considerable degradation in image quality [8]. As part of a rapidly growing discipline, deep learning (DL) and other machine learning (ML) approaches provide a promising method for image reconstruction, with artifact reduction and reconstruction speed-up [8, 9, 10, 11, 12]. Deep neural network (DNN) based image enhancement in medical imaging has shown encouraging results in under -sampled and low-dose settings. 
However, such methods normally need a massive amount of data for the training, which require a large amount of computer memory and process time [13]. Our goal in this study is to reduce the distortions caused by low-dose CT scanning using a pre-trained autoencoder-type DNN. Since the concept of distortion, together with its elimination, is a subjective issue that depends on the image content, biomedical images must be carefully handled and evaluated according to their performance for direct and correct diagnosis. In this study, we have incorporated practicing radiologists into the research group and performed blind opinion surveys to score how well distortion elimination via DNN works regarding diagnosis-wise visual information availability. ### Literature Review It is seen that image enhancement methods based on deep learning in the literature contribute positively to both qualitative and quantitative improvement of images. On the other hand, most deep learning techniques show computational complexity, requiring large training data sets. Besides, they are difficult to interpret, explain, and generalize. The majority of deep learning-based studies in the literature use open-source imaging data sets available for medical image processing. These studies have focused on open-science medical imaging research, including open-source software packages. Although related articles include a large number of general image processing applications that describe the specific deep learning technique and its application in detail, few examine deep learning applications in medical image enhancement [1]. Image enhancement in low-dose CT or CT with limited angle problems was attempted in a recent study with a novel neural network for 2D sparse image [13]. A tutorial by McCann et al. introduces the basic concepts of systems modeling and biomedical image enhancement methods using modern sparsity and learning-based approaches [2]. The tutorial explains how the system model to be used for describing a wide variety of imaging modalities can be created by integrating several blocks. In addition, image enhancement algorithms are discussed by grouping them into three general categories. The first category includes conventional and direct methods, including the Tikhonov arrangement; the second consists of sparsity and compression/detection theory-based methods; and the third category consists of learning-based (data-driven) methods, including various DNNs. Yedder et al. comparatively examined the basic image enhancement algorithms used in the literature and state-of-the-art image enhancement algorithms based on deep learning-based methods in terms of applied measurements, datasets, and key challenges to propose potentially strategic directions for new studies [9]. Reader et al. introduced traditional PET image development methods and then explained the principles of general back projection mapping from measurements to images. In addition, they discussed nonlinear problems that can be used in convolutional DNNs [14]. A deep learning-based back-projection methodology for PET image enhancement was reviewed. Such methods reportedly learn the view-port and data statistics from scratch without relying on a priori knowledge of these data patterns. In contrast, model-based or physics-informed deep learning reportedly uses back projection tools in PET image development and replaces traditional components with data-driven deep learning counterparts such as regularization. 
These methods rely on statistics from training data samples to learn deep maps for reconstruction while using reliable models of real imaging physics and noise distribution [14]. Haan et al. introduce the use of deep learning for optical sensing systems and computational microscopy [15]. The study reviews the fundamentals of inverse problems in optical microscopy and outlines deep learning methods to solve these problems with supervised methods. It also discusses deep learning applications for image enhancement and getting super-resolution from single images [15]. Chen et al. use a three-layer shallow convolutional neural network to remove noise on low-dose CT images by learning a feature mapping from low- to the corresponding normal-dose images [8]. The architecture splits the noisy CT images into image patches, then denoises the patches before reconstructing a new CT image. They compare their architecture with various state-of-the-art methods with regard to PSNR, RMSE, and SSIM. Chen et al. developed a CNN model that includes a deconvolution network for reducing noise artifacts in low-dose CT images and shortcut connections [17]. This model is called a residual encoder-decoder convolutional neural network (RED-CNN) [10]. The method uses an encoder decoder network which is similar to U-NET. Kang et al. followed a similar method, but they used directional wavelet transform of CT images [12]. The wavelet network adopted the shortcut connections included in the U-NET [16] directly, and the RED-CNN replaced the pooling/unpooling layers of the U-NET with convolution/deconvolution pairs. Generative adversarial networks are first applied to noise reduction problems in CT images by Wolterink et al. [11]. In their work, the first (generative) network generates a high-dose CT image from a low-dose, which is the noisy one, while the second (adversarial) network decides if the generated image is realistic. The image enhancement literature is not limited to low-dose CT images, and several deep-learning methods were proposed and applied for the enhancement of various types of images [18, 19, 20]. A common practice is to alter available DNN architectures or combine information from various network channels. For example, the techniques described in [10, 11, 12] involve auto-encoders, and, therefore, they are closely related to our methodology which also incorporates a U-NET autoencoder. However, they all propose adaptations of the architectures with more complex and computationally expensive operations, such as patch encoding and wavelet transform computations. Besides, although the enhancements should address the diagnosis requirements of actual radiologists, these studies lack blind tests with real radiologists for diagnostic accuracy after the enhancements. In our study, we have observed that the direct employment of U-NET readily generates images with reduced noise and retained sharpness, indicating applicability for diagnostic accuracy. In order to check the later applicability issue, the results are then verified by two radiologists, allowing us to provide a measure regarding the real usefulness of the proposed enhancement methodology. ## 2 Methodology U-NET is an autoencoder-type deep network architecture that comprises a contracting (input-side) and an expansive (output-side) path with extra connections between these paths. It was originally developed for the segmentation of biomedical images [16]. In our study, we use the U-NET structure that is given in Figure 1. 
The figure shows a trained U-NET model, illustrating its input and output images. The input and output image size of U-NET is selected as 256x256 in accordance with the utilized CT images. The contracting path of the network consists of repeated 3x3 convolution layers, each followed by a rectified linear unit (ReLU) layer and a 2x2 max pooling layer with a stride of 2 to double the channel size. The convolution layers extract useful information and create sparse spatial interactions. No padding is used in the contracting path convolutions as it alters the image size. The choice of ReLU in activation layers is to solve the vanishing gradient problem. Max-pooling is used to increase the robustness of the representation to small spatial translations in the input image. At the end of the contraction path, 64 features are collected into one large feature by a 1x1 convolution layer. The second half of the network, named the expansive path, contains convolutional and average pooling layers which restore the output size to the original input image size. As a distinct property of U-NET, certain contracting path features are transferred to the expanding path and concatenated with some expanding path features for more precise localization. The constructed U-NET is then trained from scratch, using 5926 image pairs that contain both full-dose and quarter-dose CT images, taken from the publicly available Low Dose CT Grand Challenge dataset [21]. The dataset contains slices of chest CT images for ten different patients. The original dataset contains images that are taken in two different slice thicknesses of 1 mm and 3 mm, whereas we use the images having a slice thickness of 1 mm. The aim of the training stage is to adapt the U-NET parameters in such a way that a quarter-dose CT image input yields a network output as close to the corresponding full-dose CT image as possible. The loss function to measure the similarity between the generated and the desired image is classically selected as the Mean Squared Error (MSE). A total of 5926 image pairs with an image size of 256x256 pixels and with 32 bits/pixel bit depth were used in the training phase. For images with size mismatches, a center-crop operation was performed. Randomly selected 10% of the images were used for the validation phase in every epoch. The training epoch number was set to 100, and the batch size was set to 4. The learning rate of the used RMSprop optimizer function was set to 0.0001. The training loss function values (i.e., the MSE values) were monitored throughout the training epochs. Figure 2 shows how the MSE value changes over 100 epochs during an actual training phase. An initial MSE value of over 0.0045 gradually decreases down to 0.0020. It can be seen that the decrease rate of MSE slows down at each epoch, and reaches a nearly steady state after epoch 25, indicating that the selected epoch count is sufficient for the training process.

Figure 1: U-net structure

Figure 2: Change of mean squared error with progressing epochs

Figure 3: CT slice versions from the patients \(P_{1}\), \(P_{2}\), \(P_{3}\), \(P_{4}\), \(P_{5}\)

Figure 4: CT slice versions from the patients \(P_{6}\), \(P_{7}\), \(P_{8}\), \(P_{9}\), \(P_{10}\)

## 3 Experimental Results

Following the training phase that was described in the previous section, a test suite was applied using 10 distinct CT images, which were also obtained at quarter- and full doses.
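The training recipe above corresponds roughly to the following PyTorch-style sketch. The `UNet` model, the tensor preparation and the variable names are placeholders rather than the authors' implementation; only the hyper-parameters (MSE loss, RMSprop with learning rate 0.0001, batch size 4, 100 epochs on 256x256 pairs) are taken from the text, and the 10% validation split is omitted for brevity.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train_unet(model, quarter_dose, full_dose, epochs=100, batch_size=4, lr=1e-4):
    # quarter_dose, full_dose: float tensors of shape (N, 1, 256, 256), assumed prepared elsewhere
    loader = DataLoader(TensorDataset(quarter_dose, full_dose),
                        batch_size=batch_size, shuffle=True)
    criterion = nn.MSELoss()                               # loss used in the paper
    optimizer = torch.optim.RMSprop(model.parameters(), lr=lr)
    for epoch in range(epochs):
        running = 0.0
        for x, y in loader:                                # x: quarter-dose input, y: full-dose target
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            running += loss.item() * x.size(0)
        print(f"epoch {epoch + 1}: mean MSE = {running / len(loader.dataset):.4f}")
    return model
```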
These quarter-dose images were fed to the trained network, and the resultant images were recorded for quantitative, qualitative, and comparative analysis. Figures 3 and 4 show the results obtained when the model acquired at the end of 100-epoch training is tested on the above-mentioned 10 quarter-dose test images, which were strictly excluded from the training phase. In order to provide visual comparisons, the full-dose, quarter-dose, and U-NET-enhanced slices are given in three separate columns for a total of ten different images. It can be seen that the enhancement of the trained U-NET mostly corresponds to the reduction of noisy surface patterns due to CT reconstruction from low-dose angular recordings, whilst retaining even the most subtle edge transitions.

An important aspect of the enhancement is its usefulness according to actual radiologists. Even if a CT image version may look smoother or more detailed, it is eventually up to the radiologist to make a solid evaluation and comparison. It must be noted that the most critical evaluation parameter for the radiologist is the quality of the CT image for correct and accurate diagnosis. A radiologist strictly requires the image to reveal visual hints for pathology detection and classification; therefore, most alterations, although they might look pleasing to the untrained eye, may actually distort the image or occlude important visual hints. In order to consider this perilous possibility, a thorough assessment was carried out using two expert medical radiologists. The full-dose and quarter-dose images, as well as the enhanced images for each CT, were given to the radiologists in random order, and the doctors were asked to quantify the suitability of each image with a subjective opinion score between 1 and 10 (1 being worst, 10 being best). The attained scores from these two independent radiologists are presented in Table 1. The scores reveal that both radiologists overwhelmingly prefer the U-NET-enhanced images over the quarter-dose ones, meaning that the U-NET enhancement does not deprive the output of necessary visual signatures for diagnostic purposes during the enhancement process. The perplexing observation is that the scores of the enhanced slice versions are even higher than the full-dose versions. Among these comparisons for 10 images, there is only one image case (P10) where one radiologist prefers the full dose over the enhanced version (while the other radiologist still prefers the enhanced version, by a greater vote margin).

In an attempt to quantify the amount of change in the contained information, we have conducted visual correlation experiments. The overlap ratio of the information carried by two separate images can be measured using morphological correlation measurement techniques. Among several metrics for measuring these correlations, a universally accepted metric is the Pearson correlation coefficient, which is defined as the linear correlation between two data sets (pixel values of two images) [22]. It can be expressed as the ratio of the covariance of the two variables to the product of their standard deviations, which provides an output value between -1 and 1. The calculation of the Pearson coefficient is given in Eq. 1. \[r=r_{XY}=\frac{\mathrm{cov}(X,Y)}{\sigma_{X}\sigma_{Y}} \tag{1}\] In this work, Pearson correlations between Full Dose - Quarter Dose, Full Dose - Enhanced, and Quarter Dose - Enhanced slices for ten selected patients are evaluated and given in Figure 5.
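For reference, the per-slice Pearson values reported here can be computed along the following lines (a small SciPy-based sketch operating on flattened pixel arrays; the function and variable names are ours, not from the paper):

```python
import numpy as np
from scipy.stats import pearsonr

def slice_correlation(img_a, img_b):
    # Pearson correlation between two CT slices, computed over flattened pixel values
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    r, _p = pearsonr(a, b)
    return r

# e.g. r_fq = slice_correlation(full_dose_slice, quarter_dose_slice)
```

The rank-based Spearman variant discussed next follows the same pattern with `scipy.stats.spearmanr`.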
Since covariance can only reflect a linear correlation of variables, it may lack indication of other types of data relationships. In statistics, Spearman's rank correlation coefficient is a non-parametric measure of rank correlation (i.e., statistical dependence between the rank of two variables) [23]. It measures the success of defining the relationship between two variables with a monotonous function. As a result, the Spearman correlation between the two variables measures the Pearson correlation between the _rank values_ of these two variables. The calculation of the Spearman coefficient is explained in Eq.2. By this definition, it can be stated that while the Pearson correlation evaluates linear relationships, the Spearman correlation evaluates nonlinear relationships through a monotonous mapping. If the data values (pixel values) are not repeated, a perfect Spearman correlation of +1 or -1 occurs when each of the variables can be achieved by a perfectly monotonic function of the other. Spearman correlations between Full Dose - Quarter Dose, Full Dose - Enhanced, and Quarter Dose - Enhanced slices for ten selected patients are calculated and shown in Figure 6. \[\mathbf{r_{\mathrm{s}}}=\mathbf{r_{R(X)R(Y)}}=\frac{\mathrm{cov}(R(X),R(Y))}{\sigma_{R( X)}\sigma_{R(Y)}} \tag{2}\] \[\mathbf{r}=\text{Pearson correlation coefficient}\] \[\mathbf{r_{\mathrm{s}}}=\text{Spearman correlation coefficient}\] \[R(X)=\text{Sample rank of X variable}\] \[R(Y)=\text{Sample rank of Y variable}\] \[\mathrm{cov}(R(X),R(Y))=\text{Covariance of sample ranks of X and Y}\] \[\sigma_{R(X)}=\text{Standard deviation of sample rank of X}\] \[\sigma_{R(Y)}=\text{Standard deviation of sample rank of Y}\] Figure 5: Pearson correlations between pairs of Full dose, Quarter dose, and Enhanced slices An immediate observation from Figure 5 and 6 is that Pearson and Spearman correlations between quarter-dose and enhanced images are identically equal to one. Furthermore, their separate correlations to the full-dose images yield precisely the same correlation values. This observation shows that there is absolutely no morphological distortion or information change in the process of enhancement from the quarter dose. This is a critical observation to prove that the shape information contained in the enhanced image never deteriorates or alters with the enhancement procedure. Besides, no bogus shape information is contaminating the enhanced images. Therefore, the only effect of the performed operation is the increase in the image quality by means of an improved signal-to-noise ratio, which helps radiologists to perform more accurate diagnostic interpretations. ## 4 Discussion and Conclusion In this paper, it is shown that the enhancement process of U-NET DNN, which was trained for autoencoding using sufficiently many low-dose / full-dose input-output CT image pairs, does not cause any serious data loss for medical imaging. Furthermore, subjective tests show that it allows even better diagnostic assessment on the enhanced CT images. The key aspect of medical assessment is the detection and classification of image patterns, such as nodules or tissue patterns. In radiology, these image elements may degrade due to several factors, such as noise or measurement distortions. These degradations are known as image artifacts, which are also very commonly encountered in clinical computed tomography (CT), and may either degrade image quality and obscure or simulate pathology. 
Due to the specific image construction methodology of CT, several types of CT artifacts may occur, ranging from additive noise to beam hardening, from irradiance beam scatter to patient motion, and from metal particles in the tissue to low and inaccurate signal fidelity. Since CT is a reconstruction from thin x-ray data that are obtained at various rotational angles, the reconstructed image is very sensitive to high-attenuation internal tissues, such as bones or tissues with iodinated contrast. If the penetrating x-ray does not attain a sufficient magnitude, its attenuation through such tissues causes saturated observation, and its degradation is usually observed in the form of thin and long streaks along the major axis through individual high-attenuation objects. Another visible artifact due to lower-dose x-rays appears in the form of granularity in the images. With such increased granularity noise, certain high-contrast objects, such as bone, may still be visible, but lower-contrast regions due to soft-tissue boundaries may be obscured and difficult to notice or evaluate. A third visible artifact is known as beam hardening and scatter, where dark streaking bands appear between two high-attenuation objects positioned close to each other. Correct and careful elimination of these artifacts may result in improved image quality, thereby increasing diagnostic accuracy. In thorax CT examinations, since the large bone structures are located close to each other in the sections passing through the thoracic apex, streaking bands are detected more frequently in these sections.

Figure 6: Spearman correlations between pairs of Full dose, Quarter dose and Enhanced slices

The proposed U-NET system was trained with quarter- and full-dose versions of CT images that were expected to reveal the noise amount difference in the above-mentioned artifact cases. As an example, streaking bands were most frequently observed in the upper thorax sections in the related test images. However, it was observed that the streaking bands present in the quarter-dose images (which were also very prominent even in full-dose images) were significantly reduced in the reconstructed images. In a majority of the radiologist blind tests, the enhanced images were voted to provide better diagnostic quality than even the full-dose images. The areas zoomed in Figure 7 clearly show that the reconstructed image is more informative and rated higher in terms of diagnostic evaluation by both of the experienced radiologists compared to the full-dose and quarter-dose images. In this image, the radiologists report that the streaking bands of the low- and full-dose CT images considerably complicate the evaluation of the thyroid nodule details, whereas the nodule is better visible in the enhanced image. Among the test images, there is only a single case where the full-dose image was rated higher than the reconstructed image in terms of diagnostic image quality by the two experienced radiologists, while the rest of the reconstructed test images were rated higher, even compared to full-dose test images. The individual lower-rated enhancement is given in Figure 8. In this particular case, the reasoning of the lower-voting radiologist was that, while the enhanced image diminished a few mild calcifications in the aortic wall that were of no clinical significance, these calcifications were more clearly visible in both full-dose and quarter-dose images.
When such mild calcifications are located within a mass or nodule, they might be of diagnostic importance for lesion classification. Hence, when a nodule is detected in the reconstructed images, we think that it would be more appropriate to decide whether the nodule contains calcification by examining other plane images as well as axial plane images. This evidence indicates that there is always a possibility of data loss in certain regions of the enhanced images, so extreme care must be taken in image alteration processes, including enhancement. As a result, this study not only puts forth a clearly successful and simple U-NET-based enhancement procedure but also indicates the necessity of diagnostic validation and provides clear subjective comparisons among quarter-, full-dose, and enhanced CT images. Figure 8: CT slice versions of the patient \(P_{10}\) Figure 7: CT slice versions of the patient \(P_{2}\)
2302.01904
Amazing behavior and transition to chaos of some sequences using Collatz like problems and Quibic duffing
In this paper we show the amazing behavior of some discrete maps derived from Collatz-like problems, using some advanced tools from analytic number theory and dynamical systems. We also investigate the driven cubic-quintic Duffing equation: we are able to predict the number of limit cycles around the equilibrium and to develop a theoretical approach to chaos suppression in damped driven systems using Collatz-like problem sequences. Some new results regarding the behavior of these sequences are presented.
Zeraoulia Rafik
2022-12-15T18:56:38Z
http://arxiv.org/abs/2302.01904v1
# Amazing behavior and transition to chaos of some sequences using Collatz like problems and Quibic Duffing

###### Abstract

In this paper we show the amazing behavior of some discrete maps derived from Collatz-like problems, using some advanced tools from analytic number theory and dynamical systems. We also investigate the driven cubic-quintic Duffing equation: we are able to predict the number of limit cycles around the equilibrium and to develop a theoretical approach to chaos suppression in damped driven systems using Collatz-like problem sequences. Some new results regarding the behavior of these sequences are presented.

Collatz-like problems, irrationality, sequences

## 1 Introduction

The Collatz conjecture is one of the most famous unsolved problems in mathematics [13],[7]. The conjecture asks whether repeating two simple arithmetic operations will eventually transform every positive integer into 1. It concerns sequences of integers in which each term is obtained from the previous term as follows: if the previous term is even, the next term is one half of the previous term. If the previous term is odd, the next term is 3 times the previous term plus 1. The conjecture is that these sequences always reach 1, no matter which positive integer is chosen to start the sequence. The conjecture has been tested via computers for numbers up to \(\approx 5.48\cdot 10^{18}\), which is quite impressive, although one might have expected it to have been verified for far larger integers by now, given that computers have been available for more than 50 years. Newly discovered fundamental theories (metamathematics) of integer numbers may be used to formalise and formulate a new theoretical number system from which other formal analytical frameworks may be discovered ([18]), primed and developed. The proposed number system ([15]), as well as its most general framework, which is based on the modelling results derived from an investigation of the Collatz conjecture ([17]) (i.e., the 3x+1 problem), has emerged as an effective exploratory tool for visualising, mining and extracting new knowledge about quite a number of mathematical theorems and conjectures, including the Collatz conjecture. An increased interest has been witnessed in studying the theory of discrete dynamical systems, including Collatz-type systems, and specifically their associated difference equations. A sizable number of works on the behavior and properties of the pertaining solutions, and on the boundedness and unboundedness of sequences derived from Collatz problems, have been published in various areas of applied mathematics and physics ([1],[3],[4],[14],[5]).

**Definition 1.1**: _The (borderline) Collatz-like problems: A map \(f:\mathbb{N}\rightarrow\mathbb{N}\) will be called a Collatz-like map if_ \[0\neq\lim_{n\rightarrow\infty}\left(\prod_{r=1}^{n}\frac{f(r)}{r}\right)^{1/n}\leq 1 \tag{1}\] _If the inequality (1) is an equality then the map \(f\) will be called a borderline Collatz-like map. For each (borderline) Collatz-like map \(f\), we have the (borderline) Collatz-like problem asking whether its iterates nowhere diverge to infinity, i.e._ \[\forall n>0,\ \exists m,r>0\text{ with }f^{\circ(m+r)}(n)=f^{\circ m}(n).\] _If the answer is yes, then let us call \(f\) an acceptable (borderline) Collatz-like map._
This really focus on a specific family of borderline Collatz-like problems: For any given \(\alpha>0\), let us consider the following map [**10**]_ \[f_{\alpha}:n\mapsto\left\{\begin{array}{ll}\lfloor n\alpha\rfloor&\text{ if }n\text{ even,}\\ \lfloor n/\alpha\rfloor&\text{ if }n\text{ odd.}\end{array}\right. \tag{2}\] _The map \(f_{\alpha}\) in (2) is borderline Collatz-like. Let \(S\) be the set of \(\alpha>0\) for which \(f_{\alpha}\) is acceptable._ One of the most important topic in analytic number theory ([**12**]), which has attracted attention of researchers in the field, is irrationality measure of transcendental numbers like \(\sqrt{2},\pi,e,\cdots\),[**16**],[**17**]). In this paper we shall give amazing and surprising behavior of the following discrete map which is deduced from (2) taking \(\alpha=\sqrt{2}\) then,let us consider [**22**] : \[f:n\mapsto\left\{\begin{array}{ll}\lfloor n/\sqrt{2}\rfloor&\text{ if }n \text{ even,}\\ \lfloor n\sqrt{2}\rfloor&\text{ if }n\text{ odd.}\end{array}\right.\] such that it involves \(\sqrt{2},\pi\) and parity and so on. ## 2 Analysis and discussion Consider the following map: \[f:n\mapsto\left\{\begin{array}{ll}\lfloor n/\sqrt{2}\rfloor&\text{ if }n \text{ even,}\\ \lfloor n\sqrt{2}\rfloor&\text{ if }n\text{ odd.}\end{array}\right.\] Let \(f^{\circ(r+1)}:=f\circ f^{\circ r}\), consider the orbit of \(n=73\) for iterations of \(f\), i.e. the sequence \(f^{\circ r}(73)\): \[73,103,145,205,289,408,288,203,287,405,572,404,285,403,569,804,568,401,\ldots\] It seems that this sequence diverges to infinity exponentially, and in particular, never reaches a cycle. Let illustrate that with the following picture of \((f^{\circ r}(73))^{1/r}\), with \(200<r<20000\).See Figure 1: According to Figure1, it seems that \(f^{or}(73)\sim\delta^{r}\) with \(\delta\sim 1.02\). Now consider the probability ([9]) of the \(m\) first terms of the sequence \(f^{or}(73)\) to be even: \[p_{0}(m):=\frac{|\{r<m\mid f^{or}(73)\text{ is even}\}|}{m}.\] Then \(p_{1}(m):=1-p_{0}(m)\) is the probability of the \(m\) first terms of \(f^{or}(73)\) to be odd. If we compute the values of \(p_{i}(m)\) for \(m=10^{\ell}\), \(\ell=1,\ldots,5\), we get something unexpected: It is unexpected because it seems that \(p_{0}(m)\) does not converge to \(1/2\), but to \(\alpha\sim 0.465\). It matches with the above observation because \[\delta\sim 1.02\sim\sqrt{2}^{(0.535-0.465)}=\sqrt{2}^{(1-2\times 0.465)}\sim \sqrt{2}^{(1-2\alpha)}.\] \[\begin{array}{cccc}\ell&p_{0}(10^{\ell})&p_{1}(10^{\ell})\\ 1&0.2&0.8\\ 2&0.45&0.55\\ 3&0.467&0.533\\ 4&0.4700&0.5300\\ 5&0.46410&0.53590\\ 6&0.465476&0.534524\end{array}\] The line for \(\ell=6\) was computed Using Pari/GP and setting internal precision to \(15000\) decimal digits we were able to get \(p_{1}(10^{6})=0.534524\),The value of \(f^{1e6}(73)\) is about \(3.89439e10394\), (thus one needs such a big precision) and the \(\log_{73}()\) of it is \(5578.52\). Now one can ask is it true that \(f^{or}(73)\) never reach a cycle, that \((f^{or}(73))^{1/r}\) converges to \(\delta\sim 1.02\), that \(p_{0}(m)\) converges to \(\alpha\sim 0.465\), and that \(\delta^{2}4^{\alpha}=2\)? What are the exact values of \(\delta\) and \(\alpha\)? namely, better approximations? 
The following Figure provides the values of \(p_{0}(m)\) for \(100<m<20000\),See Figure2 Figure 1: The iterated map for 73 times (\((f^{or}(73))^{1/r}\), with \(200<r<20000\)) Note that this phenomenon is not specific to \(n=73\), but seems to happen as frequently as \(n\) is big, and then, the analogous probability seems to converge to the same \(\alpha\). If \(n<100\), then it happens for \(n=73\) only, but for \(n<200\), it happens for \(n=73,103,104,105,107,141,145,146,147\), \(148,149,151,152,153,155,161,175,199\); and for \(10000\leq n<11000\), to exactly \(954\) ones. Below is the picture shown by Figure3 as Figure2 but for \(n=123456789\): Figure 2: values of \(p_{0}(m)\) for \(100<m<20000\) One can ask Is it true that the set of \(n\) for which the above phenomenon happens has natural density one? Is it cofinite? When it happens, does it involves the same constant \(\alpha\)? There are exactly \(1535\) numbers \(n<10000\) for which the above phenomenon does not happen. The next Figure,namely,Figure 4 displays for such \(n\) the minimal \(m\) (in blue) such that \(f^{\circ m}(n)=f^{\circ(m+r)}(n)\) for some \(r>0\), together with the miniman such \(r\) (in red): In fact all these numbers (as first terms) reach the following cycle of length \(33\): \[(15,21,29,41,57,80,56,39,55,77,108,76,53,74,52,36,25,35,49,69,97,137,193,272,192,1 35,190,134,94,66,46,32,22)\] except the following ones: \[7,8,9,10,12,13,14,18,19,20,26,27,28,38,40,54,\] which reach \((5,7,9,12,8)\), and that ones \(1,2,3,4,6\) which reach \((1)\), and \(f(0)=0\). If the pattern continues like above up to infinity, they must have infinity many such \(n\). We may need to ask if there infinitely many \(n\) reaching a cycle? Do they all reach the above cycle of length \(33\) (except the few ones mentioned above)? What is the formula of these numbers \(n\)? Below in Figure5 is their counting function (it looks logarithmic): ## 3 Main result * 1) A number \(m\) admits no predecessor iff the interval \([m\sqrt{2},(m+1)\sqrt{2}]\) admits no even number and the interval \([m/\sqrt{2},(m+1)/\sqrt{2}]\) admits no odd number. There are exactly \(r_{\ell}\) such numbers \(m<10^{\ell}\) with \(\ell=1,2,3,4,5,6\) and \(r_{\ell}=2,29,292,2928,29289,292893\) Strangely, for \(\ell\leq 6\) we observe that \(r_{\ell-1}=\lfloor r_{\ell}/10\lfloor\) * 2) Numbers without predecessors those of the form \(\lfloor n(2+\sqrt{2})\rfloor\) Moreover, numbers with one predecessor are those of the form \(\lfloor 2k\sqrt{2}\rfloor\) and numbers with two predecessors those of the form \(\lfloor(2k-1)\sqrt{2}\rfloor\) * 3)The homoclinic orbit of unperturbed system for Quibic duffing oscillators using Collatz sequences separates the phase plane into two areas. Inside the separatrix curve the orbits are around one of the centers, and outside the separatrix curve the orbits surround both the centers and the saddle point ## 4 Analysis of the first result We take pairs of \((m,n)\) ([2]) for consecutive \(m\) and their 1-step predecessors \(n\) such that \(f(n)=m\). The value \(n=0\) indicates, that \(m\) has no predecessor. I didn't reflect, that one \(m\) can have two predecessors, but if \(n/2\) is odd, then \(n/2\) is a second predecessor ([7]).(This makes the table more interesting, because all odd predecessors \(n\) are overwritten by the even predecessors \(2n\)... Moreover, a nearly periodic structure occurs. 
We tried to resemble this by the arrangement of three or four columns of \((m,n)\) such that the first column contains all \(m\) which have no predecessor. The basic pattern is not really periodic, but has super-patterns which again seem to be periodic but actually aren't. This pattern-superpattern-structure is also recursive. It reminds me of a similar structure when I looked at \(\beta=\log_{2}(3)\) and found a similar style of pattern-superpattern-supersuperpattern-... and is there related to the continued fraction of \(\beta\). So We think we'll get no nice description for the cases \(m\) which have no predecessor Figure 5: counting function of the happped phenomena Some more explanation on the idea of "recursive aperiodic pattern". If we list the values \(m\) which have no predecessor, we get m_k: 3, 6,10,13, 17,20,23,27,30,... Writing the differences (We have prepended a zero-value to the above list of \(m_{k}\)) ,3,3,4,3,4,3,3,4,3,4,3,4,3,3,4,... We note, that we have a pattern of two different words: '3,3,4' and '3,4' repeating, but aperiodical. Let's denote the longer one with the capital 'A' and the shorter one with the small 'a' (and 'A' means a difference of 10 and 'a' of 7). We get Aa Aa Aa Aa Aa Aa Aa Aa Aa Aa Aa Aa Aa Aa Aa Aa Aa... Again we find only two kind of "words". Let's them shorten by 'Aaa'='B' and 'Aa'='b'. 'B' means now a difference of 24, 'b' of 17. Then we get bbB bB \(\dots\) Next obvious step gives \(\begin{array}{l}\mbox{\tt Cc Cc Ccc}\\ \mbox{\tt Cc Ccc}\\ \mbox{\tt Cc Ccc}\\ \mbox{\tt Cc Ccc}\\ \mbox{\tt Cc Ccc}\\ \mbox{\tt\ 846 -- tree seems to be complete (please check for errors!) We may give heuristic proof for our main results using Beatty theorem [19] which it states that: **Theorem 1**: _given an irrational number \(r>1\) there exists \(s>1\) so that the Beatty sequences \(\boldsymbol{B_{r}}\) and \(\boldsymbol{B_{s}}\) partition the set of positive integers: each positive integer belongs to exactly one of the two sequences_ **Proof 4.1**: _We can say with Beatty theorem(Theorem 1) : \(A=\{E(n(\sqrt{2}+2))\ ;\ n\in\mathbb{N}^{*}\}\) and \(B=\{E(n\sqrt{2});n\in\mathbb{N}^{*}\}\) is a partition of \(\mathbb{N}^{*}\) And we have \(E(n(\sqrt{2}+2))=2n+E(n\sqrt{2})\) with \(E\) is the function integer part,and this proves the partial of result 2 (Form of numbers without predecessor)._ _For the First result, let assume that the probabilty for an integer \(n\) to be odd is \(\frac{1}{2}\), and that the probabilty for \(f(n)\) to be odd when \(n\) is even (resp. odd) is also \(\frac{1}{2}\). 
We will observe that (surprisingly) it is no longer \(\frac{1}{2}\) for \(f^{\circ r}(n)\) when \(r\geq 2\) (in some sense, the probability does not commute with the composition of \(f\) with itself)._

* _1) if \(n\) and \(m=f(n)\) are even: note that \(\frac{n}{\sqrt{2}}=m+\theta\) (with \(0<\theta<1\)) so that \(m=\frac{n}{\sqrt{2}}-\theta\), then_ \[f^{\circ 2}(n)=f(m)=\left\lfloor\frac{m}{\sqrt{2}}\right\rfloor=\left\lfloor\frac{\frac{n}{\sqrt{2}}-\theta}{\sqrt{2}}\right\rfloor=\left\lfloor\frac{n}{2}-\frac{\theta}{\sqrt{2}}\right\rfloor\] _but \(\frac{n}{2}\) is even with probability \(\frac{1}{2}\), so in this case, \(f^{\circ 2}(n)\) is odd with probability \(\frac{1}{2}\)._
* _2) if \(n\) is even and \(m=f(n)\) is odd:_ \[f^{\circ 2}(n)=f(m)=\left\lfloor\sqrt{2}m\right\rfloor=\left\lfloor\sqrt{2}(\frac{n}{\sqrt{2}}-\theta)\right\rfloor=\left\lfloor n-\sqrt{2}\theta\right\rfloor\] _but \(n\) is even and the probability for \(0<\sqrt{2}\theta<1\) is \(\frac{\sqrt{2}}{2}\) (because \(\theta\) is assumed statistically equidistributed [9] on the open interval \((0,1)\)), so \(f^{\circ 2}(n)\) is odd with probability \(\frac{\sqrt{2}}{2}\)._
* _3) if \(n\) is odd and \(m=f(n)\) is even:_ \[f^{\circ 2}(n)=f(m)=\left\lfloor\frac{m}{\sqrt{2}}\right\rfloor=\left\lfloor\frac{\sqrt{2}n-\theta}{\sqrt{2}}\right\rfloor=\left\lfloor n-\frac{\theta}{\sqrt{2}}\right\rfloor\] _but \(n\) is odd and \(0<\frac{\theta}{\sqrt{2}}<1\), so \(f^{\circ 2}(n)\) is even._
* _4) if \(n\) is odd and \(m=f(n)\) is odd:_ \[f^{\circ 2}(n)=f(m)=\left\lfloor\sqrt{2}m\right\rfloor=\left\lfloor\sqrt{2}(\sqrt{2}n-\theta)\right\rfloor=\left\lfloor 2n-\sqrt{2}\theta\right\rfloor\] _but \(2n\) is even and the probability for \(0<\sqrt{2}\theta<1\) is \(\frac{\sqrt{2}}{2}\), so \(f^{\circ 2}(n)\) is odd with probability \(\frac{\sqrt{2}}{2}\)._

_By combining these four cases together, we deduce that the probability for \(f^{\circ 2}(n)\) to be odd is_ \[\frac{1}{2}\times\frac{1}{2}\times(\frac{1}{2}+\frac{\sqrt{2}}{2}+0+\frac{\sqrt{2}}{2})=\frac{2\sqrt{2}+1}{8}\] _By continuing in the same way, we get that the probability for \(f^{\circ 3}(n)\) to be odd is:_ \[\frac{1}{4}(\frac{1}{2}\frac{1}{2}+\frac{1}{2}\frac{\sqrt{2}}{2}+\frac{\sqrt{2}}{2}\frac{\sqrt{2}}{2}+1\frac{1}{2}+\frac{\sqrt{2}}{2}\frac{\sqrt{2}}{2})=\frac{\sqrt{2}+7}{16}\] _For \(2\leq r\leq 24\), we computed the probability \(p_{r}\) for \(f^{\circ r}(n)\) to be odd (see Appendix). It seems (experimentally) that \(p_{r}\) converges to a number \(\simeq 0.532288725\simeq\frac{8+3\sqrt{2}}{23}\), according to the Inverse Symbolic Calculator. This leads to the following question/conjecture:_ \[\lim_{r\to\infty}p_{r}=\frac{8+3\sqrt{2}}{23}\,?\] _If so, consider the number \(\alpha\); then_ \[\alpha=1-\frac{8+3\sqrt{2}}{23}=\frac{15-3\sqrt{2}}{23}\simeq 0.467711,\] _which matches the above computation (Analysis and discussion section). And next, we would have:_ \[\delta=\frac{\sqrt{2}}{2^{\alpha}}=2^{\frac{1}{2}-\alpha}=2^{\frac{6\sqrt{2}-7}{46}}\simeq 1.022633\]

## 5 Homoclinic orbits and chaos in the unperturbed system using the Collatz problem

Although no universally accepted mathematical definition of chaos exists, a commonly used definition originally formulated by Robert L.
Devaney says that, to classify a dynamical system as chaotic, it must have these properties: * 1) it must be sensitive to initial conditions * 2) it must be topologically mixing * 3) it must have dense periodic orbits Firstly we may consider the harmonically driven damped pendulum which is often used as a simple example of a chaotic system, the equation is just \[\ddot{\phi}+\frac{1}{q}\dot{\phi}+\sin\phi=A\cos(\omega t) \tag{3}\] As long as \(A\) and \(\omega\) are small it behaves like a driven harmonic oscillator, and asymptotically settles into regular oscillations with a fixed period. However, as \(A\) (or \(\omega\)) are increased, with the rest of parameters fixed, the system undergoes a cascade of period doubling bifurcations leading to chaotic behavior, which then gives way to regular oscillations again when it is increased further. For example, when \(q=2\) and \(\omega=2/3\) the first period doubling ("symmetry breaking") occurs at \(A\approx 1.07\) and the first chaos at \(A\approx 1.08\). These rigorous results seem to be obtained by numerical simulations. One can be actually interested in situations where chaos does not occur [20]. Are there known rigorous conditions on \(A,\omega\) and \(q\) that put the system below the first period doubling? However this question does not belong to the aim of this paper but it would be very interesting to conclude somethings about chaotics behaviors of some dynamics and to discover new ways to supress chaos in the cubic-Quinic Duffing Equation using some discrete iterated map which it is the aim of our research in this paper. For the unperturbed system with fractional order displacement, when \(\varepsilon=0\), the differential equation (3) can be reformulated as For \(A=0\), to \[\ddot{x}-ax+bx^{3}+cx^{5}=0. \tag{4}\] Let \[\Delta:=b^{2}+4ac. \tag{5}\] Equilibrium points for \(\Delta>0\) are : \[(x,\dot{x})=\left(\pm\sqrt{\frac{-b+\sqrt{b^{2}+4ac}}{2c}},0\right)\ :\mbox{ centers} \tag{6}\] Define \[x_{e}^{+}=\sqrt{\frac{-b+\sqrt{b^{2}+4ac}}{2c}}\mbox{ and }x_{e}^{-}=-\sqrt{ \frac{-b+\sqrt{b^{2}+4ac}}{2c}} \tag{7}\] The energy function for (4) is \[\frac{1}{2}\dot{x}(t)^{2}-\frac{1}{2}ax(t)^{2}+\frac{1}{4}bx(t)^{4}+\frac{1}{ 6}cx(t)^{6}=K \tag{8}\] where \(K\) is the energy constant dependent on the initial amplitude \(x(0)=x_{0}\) and initial velocity \(x^{\prime}(0)=\dot{x}_{0}\) : \[K=\frac{1}{2}\dot{x}_{0}^{2}-\frac{1}{2}ax_{0}^{2}+\frac{1}{4}bx_{0}^{4}+\frac {1}{6}cx_{0}^{6}. \tag{9}\] Dependently on \(K\), the level sets are different. For all of them it is common that they form closed periodic orbits which surround the fixed points \((\mathrm{x},\dot{\mathrm{x}})=(x_{e}^{+},\ 0)\) or \((\mathrm{x},\dot{\mathrm{x}})=(x_{e}^{-},\ 0)\) or all the three fixed points \((x_{e}^{\pm},\ 0)\) and \((0,\ 0)\). The boundary between these two groups of orbits corresponds to \(K=0\), when \[\dot{x}_{0}=\pm x_{0}\sqrt{\frac{1}{6}\left(6a-3bx_{0}^{2}-2cx_{0}^{4}\right)}. \tag{10}\] Amazing behavior and transition to chaos of some sequences using Collatz like problems and Qubic duffing The level set \[\frac{\dot{x}^{2}}{2}-\frac{1}{2}ax^{2}+\frac{1}{4}bx^{4}+\frac{1}{6}cx^{6}=0 \tag{11}\] is composed of two homoclinic orbits \[\Gamma^{0}_{+}(\mathrm{t})\equiv(x^{0}_{+}(\mathrm{t}),\dot{x}^{0}_{+}( \mathrm{t})), \tag{12}\] \[\Gamma^{0}_{-}(\mathrm{t})\equiv(x^{0}_{-}(\mathrm{t}),\dot{x}^{0}_{-}( \mathrm{t})), \tag{13}\] which connect the fixed hyperbolic saddle point \((0,\ 0)\) to itself and contain the stable and unstable manifolds. 
The homoclinic solutions \(x^{0}_{\pm}(t)\) and the associated integral \(M(t_{0})\) may be written using the following two formulas \[M(t_{0})=\int_{-\infty}^{+\infty}\dot{x}^{0}(t)[\gamma\cos\omega(t+t_{0})-\delta\dot{x}^{0}(t)]dt, \tag{14}\] with \[x^{0}(t)=\frac{A\mbox{sech}\left(\sqrt{k}t\right)}{\sqrt{1+\lambda\cdot\mbox{sech}^{2}\left(\sqrt{k}t\right)}}; \tag{15}\] see Figure 5.

Figure 5: Homoclinic orbit.

The homoclinic orbit [21] separates the phase plane into two areas. Inside the separatrix curve the orbits are around one of the centers, and outside the separatrix curve the orbits surround both the centers and the saddle point. Physically this means that for certain initial conditions the oscillations are around one steady-state position [23], and for others around all the steady-state solutions (two stable and one unstable).

Now, in the second case, which uses the iterated map (the Collatz-like sequence) as the right-hand side of equation (4), i.e. the perturbed system, we have observed a transition to chaos (sensitivity to initial conditions). Let us consider the following IVP (initial value problem): \[\ddot{x}-ax+bx^{3}+cx^{5}=f,\qquad\dot{x}(0)=0,\ x(0)=0, \tag{16}\] such that \[f:n\mapsto\left\{\begin{array}{ll}\left\lfloor n/\sqrt{2}\right\rfloor&\mbox{ if $n$ even,}\\ \left\lfloor n\sqrt{2}\right\rfloor&\mbox{ if $n$ odd.}\end{array}\right.\]

Let \(f^{\circ(r+1)}:=f\circ f^{\circ r}\), and consider the orbit of \(n=73\) for iterations of \(f\), i.e. the sequence \(f^{\circ r}(73)\): \[73,103,145,205,289,408,288,203,287,405,572,404,285,403,569,804,568,401,\ldots\]

It seems that this sequence diverges to infinity exponentially and, in particular, never reaches a cycle. Let us illustrate that with the following picture of \((f^{\circ r}(73))^{1/r}\), with \(200<r<20000\); see Figure 7.

Figure 7: Exponential divergence for \(200<r<20000\).

As for the sensitivity to initial conditions of the IVP corresponding to the system (16), we obtained the following figure up to \(r=20000\) (small perturbation); see Figure 8.

Figure 8: Sensitivity to initial conditions for a small perturbation up to \(r=20000\) for the Collatz-like sequence.

We may now show a phase-space plot of the trajectory up to \(r=20000\), again for a small perturbation; see Figure 9.

Figure 9: Transition to chaos at \(r=20000\) for (16) using the Collatz-like sequence of the iterates of 73.

This looks complicated, but in fact most of the plot shows the initial period of time during which the motion is approaching its final behavior, which is much simpler. The early behavior is called an "initial transient".
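The orbit of \(73\) and the quantity \((f^{\circ r}(73))^{1/r}\) plotted in Figure 7 can be reproduced with exact integer arithmetic. The following is a minimal Python sketch (not part of the original text); the helper names and the values of \(r\) printed are illustrative choices.

```python
# Sketch: the orbit of n = 73 under f and its r-th root growth (Figure 7).
from math import isqrt, log, exp

def f(n: int) -> int:
    # floor(n/sqrt(2)) for even n, floor(n*sqrt(2)) for odd n, exactly.
    return isqrt(n * n // 2) if n % 2 == 0 else isqrt(2 * n * n)

def orbit(n: int, length: int) -> list:
    seq = [n]
    for _ in range(length):
        n = f(n)
        seq.append(n)
    return seq

seq = orbit(73, 20000)
print(seq[:18])   # 73, 103, 145, 205, 289, 408, 288, 203, 287, 405, ...
for r in (200, 2000, 20000):
    # r-th root of f^{o r}(73), via logarithms to avoid float overflow
    print(r, exp(log(seq[r]) / r))
```

The printed r-th roots appear to approach a constant slightly above 1, in agreement with the exponential divergence suggested by Figure 7.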
## Acknowledgments
This work is based on the work of Sébastien Palcoux and his arXiv research paper entitled "unexpected behavior of some transcendental number like \(\sqrt{2}\)" [22].

## 6 Declaration statement
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## 7 Data Availability
The authors were unable to find a valid data repository for the data used in this study.

## 8 Appendix

**Computation:**

    sage: for i in range(3,26):
    ....:     print(sq2(i))
    [1/4*sqrt(2) + 1/8, 0.478553390593274]
    [1/16*sqrt(2) + 7/16, 0.525888347648318]
    [3/32*sqrt(2) + 13/32, 0.538832521472478]
    [15/64*sqrt(2) + 13/64, 0.534581303681194]
    [5/128*sqrt(2) + 61/128, 0.531805217280199]
    [39/256*sqrt(2) + 81/256, 0.531852847392776]
    [93/512*sqrt(2) + 141/512, 0.53262965325925]
    [51/1024*sqrt(2) + 473/1024, 0.532348527032254]
    [377/2048*sqrt(2) + 557/2048, 0.532303961432938]
    [551/4096*sqrt(2) + 1401/4096, 0.532283123258685]
    [653/8192*sqrt(2) + 3437/8192, 0.532285334012406]
    [3083/16384*sqrt(2) + 4361/16384, 0.532288843554459]
    [3409/32768*sqrt(2) + 12621/32768, 0.532289246647030]
    [7407/65536*sqrt(2) + 24409/65536, 0.53228816168701]
    [22805/131072*sqrt(2) + 37517/131072, 0.532288667983386]
    [24307/262144*sqrt(2) + 105161/262144, 0.532288700334941]
    [72761/524288*sqrt(2) + 176173/524288, 0.532288728736551]
    [159959/1048576*sqrt(2) + 331929/1048576, 0.532288729880941]
    [202621/2097152*sqrt(2) + 829741/2097152, 0.532288725958633]
    [639131/4194304*sqrt(2) + 1328713/4194304, 0.532288724978704]
    [1114081/8388608*sqrt(2) + 2889613/8388608, 0.532288725350163]
    [1825983/16777216*sqrt(2) + 6347939/16777216, 0.532288725570602]
    [5183461/33554432*sqrt(2) + 10530125/33554432, 0.532288725561857]

**Code:**

    def sq2(n):
        # Sage code: exact computation of the probability that f^{o(n-1)}(m) is
        # odd, summing the weights of all parity strings that end in an odd value.
        c = 0
        for i in range(2^n):
            l = list(Integer(i).digits(base=2, padto=n))
            if l[-1] == 1:
                cc = 1/4
                for j in range(n-2):
                    ll = [l[j], l[j+1], l[j+2]]
                    if ll == [0,0,0]: cc *= 1/2
                    if ll == [0,0,1]: cc *= 1/2
                    if ll == [0,1,0]: cc *= (1 - sqrt(2)/2)
                    if ll == [0,1,1]: cc *= sqrt(2)/2
                    if ll == [1,0,0]: cc *= 1
                    if ll == [1,0,1]:
                        cc *= 0
                        break
                    if ll == [1,1,0]: cc *= (1 - sqrt(2)/2)
                    if ll == [1,1,1]: cc *= sqrt(2)/2
                c += cc
        return [c.expand(), c.n()]
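For readers without Sage, the same numerical values can be reproduced from a Markov-chain reformulation of the parity transitions used in the case analysis above. The following NumPy sketch is not part of the original code; the uniform initial distribution over the four parity pairs mirrors the assumption that each of the four cases occurs with probability \(\frac{1}{4}\).

```python
# Sketch: parity transitions of f as a 4-state Markov chain.
import numpy as np

s = 2 ** 0.5 / 2  # sqrt(2)/2, the transition probability in cases 2 and 4
# States: (even, even), (even, odd), (odd, even), (odd, odd),
# i.e. (parity of f^k(n), parity of f^(k+1)(n)).
T = np.array([
    [0.5, 0.5, 0.0, 0.0],   # from (even, even): next parity odd with prob 1/2
    [0.0, 0.0, 1 - s, s],   # from (even, odd):  next parity odd with prob sqrt(2)/2
    [1.0, 0.0, 0.0, 0.0],   # from (odd, even):  next parity always even
    [0.0, 0.0, 1 - s, s],   # from (odd, odd):   next parity odd with prob sqrt(2)/2
])
dist = np.full(4, 0.25)     # the four initial parity pairs assumed equally likely
for r in range(2, 25):
    dist = dist @ T
    p_r = dist[1] + dist[3] # probability that the latest parity is odd
    print(r, p_r)           # p_2 = 0.47855..., p_3 = 0.52589..., -> 0.532288725...
```

The stationary distribution of this chain gives an odd-parity probability of exactly \(\frac{1}{4-\frac{3\sqrt{2}}{2}}=\frac{8+3\sqrt{2}}{23}\), which is consistent with the conjectured limit of \(p_{r}\).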